Models
Understand how to configure AI models and settings in MultitaskAI
MultitaskAI allows you to configure a variety of AI models to suit your specific needs. By adjusting settings globally, per chat, or within agents, you can optimize the AI's behavior to enhance your workflow. The Models section is a core component of the application, accessible at https://app.multitaskai.com/models.
Accessing Models
To manage your AI models, navigate to the Models page from the main navigation menu or go directly to https://app.multitaskai.com/models. Here, you'll find a list of all available AI models from providers like OpenAI and Anthropic.
Setting Default Model Preferences
On the Models page, you can set your default AI model by selecting it from the list. This default model will be used throughout the application unless you override it in individual chats or agents. Adjusting the default model allows you to align the AI's behavior with your general workflow.
Configuring Model Settings
For each model, you can adjust various parameters that fine-tune how the AI responds. These settings include:
- Temperature: Controls the randomness or creativity of the AI's responses.
- Context Limit: Defines how many previous messages the AI considers.
- Max Tokens: Sets the maximum length of the AI's response.
- Presence Penalty: Adjusts the likelihood of the AI introducing new topics.
- Frequency Penalty: Controls repetition in the AI's output.
To configure these settings:
- Select a model from the list on the Models page.
- Adjust the parameters according to your preferences.
- Click Save to apply the changes.
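These settings correspond to parameters of the underlying chat-completion APIs. The sketch below (parameter names follow the OpenAI-style API; MultitaskAI applies them for you when you save a model configuration, so you never build this payload yourself) shows how the settings map onto a request body:

```python
def build_chat_payload(model, messages, temperature=1.0, max_tokens=None,
                       presence_penalty=0.0, frequency_penalty=0.0):
    """Assemble an OpenAI-style chat-completion request body from the
    model settings described above (an illustrative sketch, not
    MultitaskAI's actual internals)."""
    payload = {
        "model": model,
        "messages": messages,
        "temperature": temperature,              # randomness / creativity
        "presence_penalty": presence_penalty,    # push toward new topics
        "frequency_penalty": frequency_penalty,  # discourage repetition
    }
    if max_tokens is not None:
        payload["max_tokens"] = max_tokens       # cap the response length
    return payload

payload = build_chat_payload(
    "gpt-4",
    [{"role": "user", "content": "Summarize this document."}],
    temperature=0.3,   # low temperature: focused, accurate replies
    max_tokens=512,
)
```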
Per-Chat Model Configuration
In addition to global model settings, MultitaskAI allows you to configure models on a per-chat basis. Within a chat session, you can select a different model from the Model dropdown menu located in the chat interface. This flexibility enables you to tailor the AI's behavior to the specific context of each conversation.
You can also adjust model settings specific to that chat without affecting your global preferences. This is particularly useful when switching between tasks that require different AI behaviors.
Model Hierarchy
The application applies model settings based on the following hierarchy:
- Selected Model in Chat: If you manually select a model during the chat, these settings take precedence.
- Agent Model Settings: If you're using an agent with a specified model, these settings are used next.
- Default Model: If neither of the above is set, the application uses the default model specified in the Models section.
This hierarchy ensures that you have granular control over the AI's behavior at every level.
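The precedence rules above amount to a simple first-match lookup. A minimal sketch (function and default names are illustrative, not MultitaskAI's actual code):

```python
def resolve_model(chat_model=None, agent_model=None, default_model="gpt-4"):
    """Return the model to use for a message, following the
    chat > agent > default hierarchy (illustrative sketch)."""
    if chat_model is not None:    # manually selected in the chat
        return chat_model
    if agent_model is not None:   # specified on the active agent
        return agent_model
    return default_model          # global default from the Models page

resolve_model(chat_model="claude-3-opus")   # the chat selection wins
resolve_model(agent_model="gpt-3.5-turbo")  # falls back to the agent's model
resolve_model()                             # falls back to the default
```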
Configuring Models in Agents
When creating or editing an agent, you have the option to specify a custom model and adjust its settings. This allows the agent to operate with parameters optimized for its specific role.
To configure models within an agent:
- Go to the Agents section at https://app.multitaskai.com/agents and select an agent or create a new one.
- In the agent configuration, choose a model from the Model dropdown menu.
- Adjust the model settings such as temperature, context limit, and max tokens.
- Save the agent to apply the changes.
Available Models and Settings
MultitaskAI supports a range of AI models from different providers, each offering unique capabilities and settings.
OpenAI Models
OpenAI provides advanced language models such as gpt-3.5-turbo, gpt-4, and others. These models are suitable for various applications, from conversational assistance to complex problem-solving.
Anthropic Models
Anthropic offers models such as the Claude 3 series, each optimized for specific use cases.
Google Models
Google provides the Gemini series of models, including:
- Gemini 1.5 Pro: Features an extensive 2M token context window and vision capabilities
- Gemini 1.5 Flash: Optimized for faster responses with a 1M token context window
- Gemini 1.5 Flash 8b: A lightweight variant maintaining quick response times
OpenRouter Models
OpenRouter provides access to a variety of AI models from different providers. To add an OpenRouter model:
- Click the Add Model button in the Models section
- Select "OpenRouter" from the type dropdown
- Choose your desired model from the available options
- Configure the model settings as needed
- Click Save to add the model to your available models list
When using an OpenRouter model in a chat, you'll notice a web search button in the input box. This feature allows the model to access current information from the internet during your conversation. Note that web search can only be enabled at the start of a new conversation. When enabled:
- The model can fetch up to 5 web results per request
- Search results are automatically incorporated into the model's responses
- The model will cite sources using markdown links
- The feature can be toggled on or off, but only before your first message is sent
To use web search:
- Start a new conversation and select an OpenRouter model for your chat
- Click the web search icon in the input box to enable/disable the feature before sending your first message
- Your messages will now include relevant, up-to-date information from the web
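Under the hood, OpenRouter exposes web search through its OpenAI-compatible API, for example by appending the `:online` suffix to a model ID (an OpenRouter convention; MultitaskAI manages this for you via the web search button). A sketch of such a request body, using a placeholder model ID:

```python
def build_openrouter_payload(model, messages, web_search=False):
    """Build an OpenRouter chat request, optionally enabling web search
    via the ':online' model suffix (illustrative sketch; MultitaskAI
    handles this through the web search button in the UI)."""
    if web_search and not model.endswith(":online"):
        model = f"{model}:online"
    return {"model": model, "messages": messages}

p = build_openrouter_payload(
    "provider/model-name",  # placeholder OpenRouter model ID
    [{"role": "user", "content": "What happened in AI news today?"}],
    web_search=True,
)
```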
OpenAI-Compatible Endpoints
MultitaskAI supports integration with any OpenAI-compatible API endpoint. This allows you to connect to various AI providers that implement the OpenAI API specification. To add a custom OpenAI-compatible endpoint:
- Click the Add Model button in the Models section
- Select "OpenAI Compatible Endpoint" from the type dropdown
- Configure your endpoint settings:
- Enter the base URL for your API endpoint
- Set up your API key in the Settings page
- Configure any custom headers required by your provider
- Add custom JSON body parameters if needed
- Test your configuration using the built-in test feature
- Save your settings to add the model to your available models list
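Conceptually, an OpenAI-compatible endpoint is just a different base URL receiving the same request shape. The stdlib-only sketch below (endpoint URL, header name, and extra parameter are all hypothetical) shows how the base URL, API key, custom headers, and custom body parameters combine into one request:

```python
import json

def build_request(base_url, api_key, payload,
                  custom_headers=None, custom_body=None):
    """Combine endpoint settings into a single OpenAI-style HTTP request
    (illustrative sketch of what an OpenAI-compatible client sends)."""
    url = base_url.rstrip("/") + "/chat/completions"
    headers = {
        "Authorization": f"Bearer {api_key}",
        "Content-Type": "application/json",
    }
    headers.update(custom_headers or {})        # provider-specific headers
    body = {**payload, **(custom_body or {})}   # extra JSON parameters
    return url, headers, json.dumps(body)

url, headers, body = build_request(
    "https://my-provider.example.com/v1",  # hypothetical endpoint
    "YOUR_API_KEY",
    {"model": "my-model",
     "messages": [{"role": "user", "content": "Hi"}]},
    custom_headers={"X-Org": "acme"},      # hypothetical custom header
    custom_body={"top_k": 40},             # hypothetical extra parameter
)
```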
Managing API Keys and Custom Settings
To configure API keys and custom settings for OpenAI-compatible endpoints:
- Navigate to the Settings page
- Locate the API Keys section
- Add your API key for the custom endpoint
- Configure additional settings:
- Custom Headers: Add any required HTTP headers for authentication or configuration
- Custom Body Parameters: Specify additional JSON parameters to be included in API requests
- Test Configuration: Verify your setup using the test feature before saving
These settings allow you to integrate with various AI providers while maintaining the familiar OpenAI API interface within MultitaskAI.
Each model supports settings like temperature, max tokens, presence penalty, and frequency penalty, allowing you to fine-tune responses according to your needs.
Understanding Model Settings
Below is a detailed explanation of the key model settings and how they influence the AI's responses:
Temperature
The Temperature setting influences the randomness or creativity of the AI's output. Higher values encourage the AI to take more risks, generating innovative and less predictable responses. Lower values result in more conservative and focused replies, beneficial for tasks requiring accuracy.
Context Limit
The Context Limit determines the amount of conversation history the AI considers when generating a response. You can choose to include all previous messages, only the last message, or a specific number of recent messages.
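In practice, applying a context limit means trimming the message history before it is sent to the model. A minimal sketch (illustrative, not MultitaskAI's actual implementation):

```python
def apply_context_limit(messages, limit=None):
    """Keep only the most recent messages.

    limit=None -> include all previous messages
    limit=1    -> only the last message
    limit=N    -> the N most recent messages
    """
    if limit is None:
        return messages
    return messages[-limit:]

history = [{"role": "user", "content": f"message {i}"} for i in range(10)]
apply_context_limit(history)           # all 10 messages
apply_context_limit(history, limit=1)  # only the last message
apply_context_limit(history, limit=4)  # the 4 most recent messages
```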
Max Tokens
The Max Tokens setting controls the maximum length of the AI's response in tokens. Tokens are units of text that can be whole words or fragments. Limiting the number of tokens allows you to control the verbosity of the AI's replies.
Presence Penalty
The Presence Penalty adjusts the likelihood of the AI introducing new topics or diverging from the current conversation thread. A higher presence penalty encourages the model to explore new subjects, making the interaction more dynamic.
Frequency Penalty
The Frequency Penalty reduces the repetition of words or phrases in the AI's responses. By increasing this penalty, you prompt the AI to use a broader vocabulary and avoid redundancy.
Tips for Optimizing Models
To get the most out of MultitaskAI's model configurations, consider the following strategies:
- Experiment with Settings: Adjust model parameters to discover what works best for your use case.
- Use Appropriate Models: Select models that align with the complexity and nature of your task.
- Utilize Agent-Specific Models: Configure custom models within agents for specialized roles.