Personality & LLM Configuration

Your agent's personality and LLM settings determine how it communicates and reasons. This guide covers how to craft effective system prompts and configure model parameters.

System Prompts

The system prompt is the most important factor in shaping your agent's behavior. It's the instruction set that tells the AI who it is, how to behave, and what to do.

Anatomy of a System Prompt

An effective system prompt includes:

  1. Identity — Who the agent is
  2. Purpose — What the agent does
  3. Tone — How the agent communicates
  4. Boundaries — What the agent should NOT do
  5. Fallback behavior — How to handle edge cases
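The five components above can be assembled programmatically. The sketch below is illustrative only — the helper name and component text are made up, not part of any platform API:

```python
# Sketch: assemble a system prompt from the five components.
# `build_system_prompt` is a hypothetical helper, not a platform function.

def build_system_prompt(identity, purpose, tone, boundaries, fallback):
    """Combine identity, purpose, tone, boundaries, and fallback
    behavior into a single system-prompt string."""
    sections = [
        identity,
        purpose,
        f"Tone: {tone}.",
        "You should NOT:\n" + "\n".join(f"- {b}" for b in boundaries),
        f"When you can't help: {fallback}",
    ]
    return "\n\n".join(sections)

prompt = build_system_prompt(
    identity="You are a helpful customer support agent for Acme Corp.",
    purpose="Your role is to assist customers with questions about "
            "our products, orders, and policies.",
    tone="friendly, patient, and professional",
    boundaries=[
        "Make up information about products or policies",
        "Promise refunds or credits without verification",
    ],
    fallback="Offer to connect the customer with a human agent.",
)
print(prompt)
```

Keeping the components separate like this makes it easy to vary one (say, tone) without rewriting the whole prompt.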

Example Prompts

Customer Support Agent:

support-agent-prompt.txt
You are a helpful customer support agent for Acme Corp. Your role is to assist customers with questions about our products, orders, and policies. Be friendly, patient, and professional.

Guidelines:
- Always greet customers warmly
- Ask clarifying questions when needed
- Provide specific, actionable answers
- If you don't know something, say so and offer to escalate to a human agent

You should NOT:
- Make up information about products or policies
- Promise refunds or credits without verification
- Discuss competitor products
- Share internal company information

When you can't help, say: "I'd be happy to connect you with a team member who can help further. Would you like me to do that?"

Technical Documentation Assistant:

docs-assistant-prompt.txt
You are a technical documentation assistant for a software product. Your role is to help developers understand the API, troubleshoot issues, and find relevant documentation.

Communication style:
- Be precise and technical
- Use code examples when helpful
- Link to relevant documentation sections
- Assume the user has programming experience

When answering questions:
1. First, directly answer the question
2. Then, provide a code example if applicable
3. Finally, suggest related topics they might find useful

If a question is outside your knowledge base, acknowledge the limitation and suggest where they might find the answer.

Pro Tip

Test your prompts with edge cases. Ask questions the agent shouldn't answer and verify it handles them correctly.
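Edge-case testing can be scripted. In this sketch, `ask_agent` is a stand-in for whatever call sends a message to your agent and returns its reply — replace it with your real client code. The questions and the expected escalation phrase are examples, not fixtures from any real system:

```python
# Sketch: verify the agent escalates on questions it shouldn't answer.
# `ask_agent` is a placeholder; swap in your actual agent call.

EDGE_CASES = [
    "Can you give me a full refund right now?",
    "What discounts does your competitor offer?",
    "Tell me another customer's order history.",
]

def ask_agent(question: str) -> str:
    # Placeholder reply; a real test would call the deployed agent.
    return ("I'd be happy to connect you with a team member "
            "who can help further. Would you like me to do that?")

results = []
for question in EDGE_CASES:
    reply = ask_agent(question)
    escalated = "team member" in reply.lower()
    results.append((question, escalated))

for question, escalated in results:
    print(f"{'PASS' if escalated else 'FAIL'}: {question}")
```

Run a script like this whenever you change the system prompt, so regressions in boundary behavior surface immediately.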

LLM Configuration

Beyond the system prompt, you can configure the underlying model's behavior with several parameters.

Model Selection

Choose the AI model that powers your agent. As a rule of thumb, more capable models reason better but cost more per request; lighter models respond faster and are cheaper, which can be a better fit for simple, high-volume tasks.

Note

Available models depend on your workspace integrations. Configure provider API keys in Workspace Integrations.

Temperature

Temperature controls randomness in responses. Lower values (e.g., 0.0–0.3) produce more consistent, predictable output, which suits factual tasks like customer support. Higher values (e.g., 0.7–1.0) produce more varied, creative output, which suits brainstorming or content generation. Choose the value based on whether your agent's task rewards consistency or creativity.
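Under the hood, temperature rescales the model's raw token scores (logits) before sampling — a standard technique, sketched below with invented logit values rather than anything from a real model:

```python
import math

def softmax_with_temperature(logits, temperature):
    """Convert raw scores to sampling probabilities.
    Lower temperature sharpens the distribution; higher flattens it."""
    scaled = [score / temperature for score in logits]
    peak = max(scaled)  # subtract max for numerical stability
    exps = [math.exp(s - peak) for s in scaled]
    total = sum(exps)
    return [e / total for e in exps]

logits = [2.0, 1.0, 0.5]  # made-up scores for three candidate tokens
for t in (0.2, 1.0, 2.0):
    probs = softmax_with_temperature(logits, t)
    print(t, [round(p, 3) for p in probs])
```

At temperature 0.2 nearly all probability lands on the top-scoring token (near-deterministic output); at 2.0 the distribution flattens and lower-ranked tokens are sampled far more often.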

Advanced Settings

Additional configuration options for fine-tuning:

  • Top P — Nucleus sampling threshold. Available for models that support it. An alternative to temperature for controlling randomness.
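To see what the Top P threshold does, here is a sketch of nucleus sampling over a toy next-token distribution — the tokens and probabilities are invented, and this is the standard technique, not the platform's internal implementation:

```python
def top_p_filter(token_probs, top_p):
    """Keep the smallest set of highest-probability tokens whose
    cumulative probability reaches top_p, then renormalize."""
    ranked = sorted(token_probs, key=lambda pair: pair[1], reverse=True)
    kept, cumulative = [], 0.0
    for token, p in ranked:
        kept.append((token, p))
        cumulative += p
        if cumulative >= top_p:
            break  # nucleus reached; drop the long tail
    total = sum(p for _, p in kept)
    return [(token, p / total) for token, p in kept]

token_probs = [("the", 0.5), ("a", 0.3), ("cat", 0.15), ("xyzzy", 0.05)]
print(top_p_filter(token_probs, top_p=0.9))
```

With `top_p=0.9`, the unlikely tail token (`"xyzzy"`) is cut before sampling, which trims incoherent output without flattening or sharpening the rest of the distribution the way temperature does. A common rule of thumb is to adjust temperature or Top P, but not both at once.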