Personality & LLM Configuration
Your agent's personality and LLM settings determine how it communicates and reasons. This guide covers how to craft effective system prompts and configure model parameters.
System Prompts
The system prompt is the most important factor in shaping your agent's behavior. It's the instruction set that tells the AI who it is, how to behave, and what to do.
Anatomy of a System Prompt
An effective system prompt includes:
- Identity — Who the agent is
- Purpose — What the agent does
- Tone — How the agent communicates
- Boundaries — What the agent should NOT do
- Fallback behavior — How to handle edge cases
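Assembled in order, these five components form the full prompt. As a minimal sketch, the helper below builds a system prompt from them; the function name and its arguments are illustrative, not part of any platform API:

```python
# Hypothetical helper: assemble a system prompt from the five components above.
def build_system_prompt(identity, purpose, tone, boundaries, fallback):
    lines = [
        identity,                              # who the agent is
        purpose,                               # what the agent does
        f"Tone: {tone}",                       # how the agent communicates
        "You should NOT:",
        *[f"- {b}" for b in boundaries],       # hard limits on behavior
        f"When you can't help: {fallback}",    # edge-case handling
    ]
    return "\n".join(lines)

prompt = build_system_prompt(
    identity="You are a helpful customer support agent for Acme Corp.",
    purpose="Assist customers with questions about products, orders, and policies.",
    tone="friendly, patient, and professional",
    boundaries=["Make up information about products or policies",
                "Promise refunds or credits without verification"],
    fallback="offer to escalate to a human agent.",
)
```

Keeping the components separate like this makes it easy to reuse the same boundaries and fallback across several agents while varying identity and tone.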
Example Prompts
Customer Support Agent:
You are a helpful customer support agent for Acme Corp. Your role is to assist customers with questions about our products, orders, and policies. Be friendly, patient, and professional.

Guidelines:
- Always greet customers warmly
- Ask clarifying questions when needed
- Provide specific, actionable answers
- If you don't know something, say so and offer to escalate to a human agent

You should NOT:
- Make up information about products or policies
- Promise refunds or credits without verification
- Discuss competitor products
- Share internal company information

When you can't help, say: "I'd be happy to connect you with a team member who can help further. Would you like me to do that?"

Technical Documentation Assistant:
You are a technical documentation assistant for a software product. Your role is to help developers understand the API, troubleshoot issues, and find relevant documentation.

Communication style:
- Be precise and technical
- Use code examples when helpful
- Link to relevant documentation sections
- Assume the user has programming experience

When answering questions:
1. First, directly answer the question
2. Then, provide a code example if applicable
3. Finally, suggest related topics they might find useful

If a question is outside your knowledge base, acknowledge the limitation and suggest where they might find the answer.
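Once written, the system prompt is supplied alongside the conversation in a chat-style request. A minimal sketch, assuming the widely used OpenAI-style message format; the model name and parameter values are placeholders, not recommendations:

```python
# Sketch: how a system prompt typically enters a chat-style request payload.
SYSTEM_PROMPT = (
    "You are a technical documentation assistant for a software product. "
    "Be precise and technical, and use code examples when helpful."
)

def make_request(user_message: str) -> dict:
    return {
        "model": "example-model",    # placeholder model identifier
        "temperature": 0.3,          # low temperature for consistent, factual answers
        "messages": [
            {"role": "system", "content": SYSTEM_PROMPT},
            {"role": "user", "content": user_message},
        ],
    }

payload = make_request("How do I authenticate against the API?")
```

The system message goes first and stays constant across the conversation; only the user and assistant messages accumulate after it.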
LLM Configuration
Beyond the system prompt, you can configure the underlying model's behavior with several parameters.
Model Selection
Choose the AI model that powers your agent.
Temperature
Temperature controls randomness in responses. Lower values produce more consistent, predictable output; higher values produce more varied, creative output. As a rule of thumb, use a low temperature for factual or support-oriented agents and a higher one for brainstorming or creative tasks.
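Mechanically, temperature divides the model's logits before the softmax, which is why low values sharpen the output distribution (near-deterministic) and high values flatten it (more random). A minimal illustration in plain Python:

```python
import math

def softmax_with_temperature(logits, temperature):
    # Divide logits by temperature before softmax: low T sharpens the
    # distribution, high T flattens it toward uniform.
    scaled = [l / temperature for l in logits]
    m = max(scaled)                              # subtract max for stability
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    return [e / total for e in exps]

logits = [2.0, 1.0, 0.5]
cold = softmax_with_temperature(logits, 0.2)   # top token dominates
hot = softmax_with_temperature(logits, 2.0)    # probabilities close to uniform
```

With temperature 0.2 the most likely token takes nearly all the probability mass; at 2.0 the three options are much closer to even, which is where varied, creative output comes from.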
Advanced Settings
Additional configuration options for fine-tuning:
- Top P — Nucleus sampling threshold. Available for models that support it. An alternative to temperature for controlling randomness.
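Nucleus sampling keeps only the smallest set of most-likely tokens whose cumulative probability reaches the Top P threshold, then samples from that set. A simplified sketch of the filtering step (renormalizing rather than sampling, for clarity):

```python
def top_p_filter(probs, p):
    # Keep the smallest set of tokens whose cumulative probability
    # reaches p, then renormalize; everything else is excluded.
    order = sorted(range(len(probs)), key=lambda i: probs[i], reverse=True)
    kept, cumulative = [], 0.0
    for i in order:
        kept.append(i)
        cumulative += probs[i]
        if cumulative >= p:
            break
    total = sum(probs[i] for i in kept)
    return {i: probs[i] / total for i in kept}

probs = [0.5, 0.3, 0.15, 0.05]
filtered = top_p_filter(probs, 0.9)   # the 0.05 tail token is dropped
```

Unlike temperature, which reshapes the whole distribution, Top P simply cuts off the unlikely tail, so rare low-quality tokens can never be sampled. Most providers recommend adjusting one of the two, not both.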