
Models

Understand how to configure AI models and settings in MultitaskAI


MultitaskAI allows you to customize and configure a variety of AI models to suit your specific needs. By adjusting settings globally, per chat, or within agents, you can optimize the AI's behavior to enhance your workflow. The Models section is a core part of the application, accessible at https://app.multitaskai.com/models.

Accessing Models

To manage your AI models, navigate to the Models page from the main navigation menu or go to https://app.multitaskai.com/models. Here, you'll find a list of all available AI models from providers like OpenAI and Anthropic.

Setting Default Model Preferences

On the Models page, you can set your default AI model by selecting it from the list. This default model will be used throughout the application unless you override it in individual chats or agents. Adjusting the default model allows you to align the AI's behavior with your general workflow.

Configuring Model Settings

For each model, you can adjust various parameters that fine-tune how the AI responds. These settings include:

  • Temperature: Controls the randomness or creativity of the AI's responses.
  • Context Limit: Defines how many previous messages the AI considers.
  • Max Tokens: Sets the maximum length of the AI's response.
  • Presence Penalty: Adjusts the likelihood of the AI introducing new topics.
  • Frequency Penalty: Controls repetition in the AI's output.

To configure these settings:

  1. Select a model from the list on the Models page.
  2. Adjust the parameters according to your preferences.
  3. Click Save to apply the changes.
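To make the parameters above concrete, here is a sketch of what one model's configuration might look like as a settings object. The field names and values are illustrative only, not MultitaskAI's actual schema:

```python
# Hypothetical per-model settings object; field names are illustrative,
# not MultitaskAI's actual schema.
model_settings = {
    "model": "gpt-4",
    "temperature": 0.7,      # 0.0 = focused/deterministic, higher = more creative
    "context_limit": "all",  # or an integer count of recent messages
    "max_tokens": 1024,      # upper bound on response length, in tokens
    "presence_penalty": 0.0,
    "frequency_penalty": 0.0,
}

# Temperature conventionally stays within 0.0-2.0 (per OpenAI's API).
assert 0.0 <= model_settings["temperature"] <= 2.0
```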

Per-Chat Model Configuration

In addition to global model settings, MultitaskAI allows you to configure models on a per-chat basis. Within a chat session, you can select a different model from the Model dropdown menu located in the chat interface. This flexibility enables you to tailor the AI's behavior to the specific context of each conversation.

You can also adjust model settings specific to that chat without affecting your global preferences. This is particularly useful when switching between tasks that require different AI behaviors.

Model Hierarchy

The application applies model settings based on the following hierarchy:

  1. Agent Model Settings: If you're using an agent with a specified model, the agent's settings take precedence.
  2. Selected Model in Chat: If you manually select a model during the chat, these settings come next.
  3. Default Model: If neither of the above is set, the application uses the default model specified in the Models section.

This hierarchy ensures that you have granular control over the AI's behavior at every level.
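The precedence rules above amount to a simple "first match wins" lookup. The following sketch (a hypothetical helper, not MultitaskAI's implementation) shows the resolution order:

```python
def resolve_model_settings(agent_settings, chat_settings, default_settings):
    """Illustrative sketch of the precedence described above:
    agent settings win, then the chat's selection, then the global default."""
    for settings in (agent_settings, chat_settings, default_settings):
        if settings is not None:
            return settings
    raise ValueError("no default model configured")

default = {"model": "gpt-4", "temperature": 0.7}
chat = {"model": "claude-3-opus", "temperature": 0.3}

# No agent model is set, so the chat's selection wins over the default.
assert resolve_model_settings(None, chat, default) == chat
# Nothing overrides the default here.
assert resolve_model_settings(None, None, default) == default
```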

Configuring Models in Agents

When creating or editing an agent, you have the option to specify a custom model and adjust its settings. This allows the agent to operate with parameters optimized for its specific role.

To configure models within an agent:

  1. Go to the Agents section at https://app.multitaskai.com/agents and select an agent or create a new one.
  2. In the agent configuration, choose a model from the Model dropdown menu.
  3. Adjust the model settings such as temperature, context limit, and max tokens.
  4. Save the agent to apply the changes.

Available Models and Settings

MultitaskAI supports a range of AI models from different providers, each offering unique capabilities and settings.

OpenAI Models

OpenAI provides advanced language models such as gpt-3.5-turbo, gpt-4, and others. These models are suitable for various applications, from conversational assistance to complex problem-solving.

Anthropic Models

Anthropic offers the Claude 3 series of models, with variants that balance capability against speed and cost for different use cases.

Google Models

Google provides the Gemini series of models, including:

  • Gemini 1.5 Pro: Features an extensive 2M token context window and vision capabilities
  • Gemini 1.5 Flash: Optimized for faster responses with a 1M token context window
  • Gemini 1.5 Flash-8B: A lightweight variant maintaining quick response times

Each model supports settings like temperature, max tokens, presence penalty, and frequency penalty, allowing you to fine-tune responses according to your needs.

Understanding Model Settings

Below is a detailed explanation of the key model settings and how they influence the AI's responses:

Temperature

The Temperature setting influences the randomness or creativity of the AI's output. Higher values encourage the AI to take more risks, generating innovative and less predictable responses. Lower values result in more conservative and focused replies, beneficial for tasks requiring accuracy.
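Under the hood, temperature works by dividing the model's raw scores (logits) before they are turned into a probability distribution. This small, self-contained demonstration of that standard mechanism shows why low temperatures are nearly deterministic and high temperatures flatten the choices:

```python
import math

def softmax_with_temperature(logits, temperature):
    """Standard temperature scaling: divide logits by the temperature
    before normalizing. Low temperature sharpens the distribution
    (near-deterministic); high temperature flattens it (more varied)."""
    scaled = [l / temperature for l in logits]
    m = max(scaled)                          # subtract max for numerical stability
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    return [e / total for e in exps]

logits = [2.0, 1.0, 0.5]
cold = softmax_with_temperature(logits, 0.2)  # top choice dominates
hot = softmax_with_temperature(logits, 2.0)   # probabilities closer to uniform

# The leading option is far more certain at low temperature.
assert cold[0] > hot[0]
```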

Context Limit

The Context Limit determines the amount of conversation history the AI considers when generating a response. You can choose to include all previous messages, only the last message, or a specific number of recent messages.
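The three context-limit modes described above can be sketched as a simple history filter. This is a hypothetical helper for illustration, not MultitaskAI's actual implementation:

```python
def apply_context_limit(messages, context_limit):
    """Illustrative sketch of the context-limit options described above:
    'all' keeps the full history, 'last' keeps only the most recent
    message, and an integer keeps that many recent messages."""
    if context_limit == "all":
        return messages
    if context_limit == "last":
        return messages[-1:]
    return messages[-int(context_limit):]

history = ["msg1", "msg2", "msg3", "msg4"]
assert apply_context_limit(history, "all") == history
assert apply_context_limit(history, "last") == ["msg4"]
assert apply_context_limit(history, 2) == ["msg3", "msg4"]
```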

Max Tokens

The Max Tokens setting controls the maximum length of the AI's response in tokens. Tokens are units of text that can be whole words or fragments. Limiting the number of tokens allows you to control the verbosity of the AI's replies.
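As a rule of thumb, one token is roughly four characters of English text (OpenAI's own approximation); real tokenizers such as OpenAI's tiktoken give exact counts. A quick heuristic for sizing a Max Tokens value:

```python
def rough_token_estimate(text):
    """Very rough heuristic: ~4 characters per token for English text.
    Use a real tokenizer (e.g. OpenAI's tiktoken) for exact counts."""
    return max(1, len(text) // 4)

reply = "Tokens are units of text that can be whole words or fragments."
estimate = rough_token_estimate(reply)  # 62 characters -> roughly 15 tokens
assert 10 <= estimate <= 20
```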

Presence Penalty

The Presence Penalty adjusts the likelihood of the AI introducing new topics or diverging from the current conversation thread. A higher presence penalty encourages the model to explore new subjects, making the interaction more dynamic.

Frequency Penalty

The Frequency Penalty reduces the repetition of words or phrases in the AI's responses. By increasing this penalty, you prompt the AI to use a broader vocabulary and avoid redundancy.
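OpenAI's API documentation describes both penalties as adjustments subtracted from a token's logit before sampling: the presence penalty applies once to any token that has already appeared, while the frequency penalty scales with how often it appeared. A minimal sketch of that documented formula:

```python
def penalized_logit(logit, count, presence_penalty, frequency_penalty):
    """OpenAI's documented penalty adjustment: the frequency penalty
    scales with the token's occurrence count, while the presence
    penalty applies once to any token seen at least once."""
    return logit - frequency_penalty * count \
                 - presence_penalty * (1 if count > 0 else 0)

# A token that has already appeared 3 times is pushed down;
# an unseen token is left untouched.
seen = penalized_logit(2.0, count=3, presence_penalty=0.5, frequency_penalty=0.4)
unseen = penalized_logit(2.0, count=0, presence_penalty=0.5, frequency_penalty=0.4)
assert seen < unseen == 2.0
```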

Tips for Optimizing Models

To get the most out of MultitaskAI's model configurations, consider the following strategies:

  • Experiment with Settings: Adjust model parameters to discover what works best for your use case.
  • Use Appropriate Models: Select models that align with the complexity and nature of your task.
  • Utilize Agent-Specific Models: Configure custom models within agents for specialized roles.