
Prerequisites

Before you begin, make sure you have:
  • Rust 1.70 or higher installed
  • An Anthropic API key, Google AI API key (Gemini), or OpenAI API key
  • Basic familiarity with async Rust and Tokio

Installation

Add PiCrust to your project’s Cargo.toml:
[dependencies]
picrust = { path = "path/to/picrust" }
tokio = { version = "1", features = ["full"] }
anyhow = "1.0"

Environment Setup

Set up your LLM provider credentials:
# For Anthropic Claude
export ANTHROPIC_API_KEY="sk-ant-..."
export ANTHROPIC_MODEL="claude-sonnet-4-5@20250929"

# Or for Google Gemini
export GEMINI_API_KEY="..."
export GEMINI_MODEL="gemini-3-flash-preview"

# Or for OpenAI
export OPENAI_API_KEY="sk-..."
export OPENAI_MODEL="gpt-4o"

Your First Agent

Create a new file src/main.rs and add the following code:
use std::sync::Arc;
use picrust::{
    agent::{AgentConfig, StandardAgent},
    llm::{AnthropicProvider, LlmProvider},
    runtime::AgentRuntime,
    session::AgentSession,
    tools::ToolRegistry,
};

#[tokio::main]
async fn main() -> anyhow::Result<()> {
    // 1. Create LLM provider (reads ANTHROPIC_API_KEY and ANTHROPIC_MODEL from env)
    let llm: Arc<dyn LlmProvider> = Arc::new(AnthropicProvider::from_env()?);

    // 2. Create runtime (manages all agents)
    let runtime = AgentRuntime::new();

    // 3. Create session (persists conversation)
    let session = AgentSession::new(
        "my-session",      // unique ID
        "assistant",       // agent type
        "My Assistant",    // display name
        "A helpful agent", // description
    )?;

    // 4. Configure and create agent
    let config = AgentConfig::new("You are a helpful assistant.")
        .with_streaming(true);
    let agent = StandardAgent::new(config, llm);

    // 5. Spawn agent
    let handle = runtime
        .spawn(session, |internals| agent.run(internals))
        .await;

    // 6. Send input and receive output
    let mut rx = handle.subscribe();
    handle.send_input("Hello!").await?;

    // 7. Process output stream
    loop {
        match rx.recv().await {
            Ok(chunk) => {
                use picrust::core::OutputChunk;
                match chunk {
                    OutputChunk::TextDelta(text) => print!("{}", text),
                    OutputChunk::Done => break,
                    OutputChunk::Error(e) => eprintln!("Error: {}", e),
                    _ => {}
                }
            }
            Err(_) => break,
        }
    }

    Ok(())
}

Run Your Agent

Execute your agent:
cargo run
You should see the agent’s response stream to your terminal.

Understanding the Code

Let’s break down what’s happening:

1. LLM Provider

let llm: Arc<dyn LlmProvider> = Arc::new(AnthropicProvider::from_env()?);
Creates an LLM provider that reads credentials from environment variables. The SDK supports multiple providers (Claude, Gemini, OpenAI) through the LlmProvider trait.

2. Agent Runtime

let runtime = AgentRuntime::new();
The runtime manages agent lifecycles. It spawns agents as async tasks and maintains a registry of running agents.

3. Session

let session = AgentSession::new(
    "my-session",
    "assistant",
    "My Assistant",
    "A helpful agent",
)?;
Sessions persist conversation history to disk automatically (in ./sessions/{id}/). This enables conversation continuity across restarts.

4. Agent Configuration

let config = AgentConfig::new("You are a helpful assistant.")
    .with_streaming(true);
let agent = StandardAgent::new(config, llm);
AgentConfig defines agent behavior. StandardAgent is the main agent implementation that handles the request-response loop.

5. Spawning

let handle = runtime
    .spawn(session, |internals| agent.run(internals))
    .await;
Spawns the agent as an async task and returns a handle for communication.

6. Communication

let mut rx = handle.subscribe();  // Subscribe BEFORE sending
handle.send_input("Hello!").await?;
Critical Pattern: Always subscribe to the output stream before sending input. Otherwise, you’ll miss early output chunks.

7. Output Processing

match chunk {
    OutputChunk::TextDelta(text) => print!("{}", text),
    OutputChunk::Done => break,
    OutputChunk::Error(e) => eprintln!("Error: {}", e),
    _ => {}
}
The agent streams output in chunks: TextDelta carries incremental text tokens, Done signals completion, and Error reports a failure.

Adding Tools

Let’s make the agent more useful by adding file access:
use picrust::tools::common::*;

#[tokio::main]
async fn main() -> anyhow::Result<()> {
    let llm: Arc<dyn LlmProvider> = Arc::new(AnthropicProvider::from_env()?);
    let runtime = AgentRuntime::new();

    // Create and register tools
    let mut tools = ToolRegistry::new();
    tools.register(ReadTool::new()?);
    tools.register(WriteTool::new()?);
    tools.register(BashTool::new()?);
    let tools = Arc::new(tools);

    let session = AgentSession::new("session-1", "coder", "Code Assistant", "")?;

    // Add tools to config
    let config = AgentConfig::new("You are a helpful coding assistant.")
        .with_tools(tools)
        .with_streaming(true);

    let agent = StandardAgent::new(config, llm);
    let handle = runtime.spawn(session, |internals| agent.run(internals)).await;

    let mut rx = handle.subscribe();
    handle.send_input("Create a hello.rs file with a hello world program").await?;

    // Process output
    loop {
        match rx.recv().await {
            Ok(chunk) => {
                use picrust::core::OutputChunk;
                match chunk {
                    OutputChunk::TextDelta(text) => print!("{}", text),
                    OutputChunk::ToolStart { name, .. } => {
                        println!("\n[Using tool: {}]", name);
                    }
                    OutputChunk::Done => break,
                    OutputChunk::Error(e) => eprintln!("Error: {}", e),
                    _ => {}
                }
            }
            Err(_) => break,
        }
    }

    Ok(())
}
Now the agent can read files, write files, and execute shell commands!

Switching Providers

All three providers are interchangeable — just swap one line:
use picrust::llm::{AnthropicProvider, GeminiProvider, OpenAIProvider};

// Anthropic Claude (default)
let llm: Arc<dyn LlmProvider> = Arc::new(AnthropicProvider::from_env()?);

// Google Gemini
let llm: Arc<dyn LlmProvider> = Arc::new(GeminiProvider::from_env()?);

// OpenAI
let llm: Arc<dyn LlmProvider> = Arc::new(OpenAIProvider::from_env()?);
The rest of the code stays the same regardless of which provider you use.
When using OpenAI, disable prompt caching (it’s an Anthropic-specific feature) and use the OpenAI env vars:
export OPENAI_API_KEY="sk-..."
export OPENAI_MODEL="gpt-4o"
# Optional: custom endpoint (proxy, Azure, etc.)
# export OPENAI_BASE_URL="https://my-proxy.example.com/v1/responses"
let config = AgentConfig::new("You are a helpful assistant.")
    .with_prompt_caching(false);

Handling Permissions

By default, tools require user permission. Handle permission requests:
loop {
    match rx.recv().await {
        Ok(chunk) => {
            use picrust::core::OutputChunk;
            match chunk {
                OutputChunk::TextDelta(text) => print!("{}", text),

                OutputChunk::PermissionRequest { tool_name, action, .. } => {
                    println!("\n[Permission needed: {} wants to {}]", tool_name, action);

                    // Approve the request
                    handle.send_permission_response(
                        tool_name,
                        true,   // allowed
                        false   // don't remember
                    ).await?;
                }

                OutputChunk::Done => break,
                _ => {}
            }
        }
        Err(_) => break,
    }
}
Or skip permissions for trusted scenarios (use with caution):
let config = AgentConfig::new("You are a helpful assistant.")
    .with_dangerous_skip_permissions(true)  // Skip all permission checks
    .with_streaming(true);

Viewing Conversation History

All conversations are automatically saved to disk. View them:
use picrust::session::AgentSession;

// Load and display history
let history = AgentSession::get_history("my-session")?;

for message in history {
    println!("{}: {:?}", message.role, message.content);
}

Complete Example

Here’s a complete interactive agent:
use std::sync::Arc;
use std::io::{self, Write};
use picrust::{
    agent::{AgentConfig, StandardAgent},
    llm::{AnthropicProvider, LlmProvider},
    runtime::AgentRuntime,
    session::AgentSession,
    tools::{ToolRegistry, common::*},
    core::OutputChunk,
};

#[tokio::main]
async fn main() -> anyhow::Result<()> {
    // Setup
    let llm: Arc<dyn LlmProvider> = Arc::new(AnthropicProvider::from_env()?);
    let runtime = AgentRuntime::new();

    let mut tools = ToolRegistry::new();
    tools.register(ReadTool::new()?);
    tools.register(WriteTool::new()?);
    tools.register(BashTool::new()?);
    let tools = Arc::new(tools);

    let session = AgentSession::new(
        "interactive-session",
        "assistant",
        "Interactive Assistant",
        "",
    )?;

    let config = AgentConfig::new("You are a helpful assistant.")
        .with_tools(tools)
        .with_streaming(true);

    let agent = StandardAgent::new(config, llm);
    let handle = runtime.spawn(session, |internals| agent.run(internals)).await;

    // Interactive loop
    println!("Agent ready! Type 'exit' to quit.\n");

    loop {
        print!("> ");
        io::stdout().flush()?;

        let mut input = String::new();
        io::stdin().read_line(&mut input)?;
        let input = input.trim();

        if input == "exit" {
            break;
        }

        if input.is_empty() {
            continue;
        }

        // Subscribe and send
        let mut rx = handle.subscribe();
        handle.send_input(input).await?;

        // Process output
        loop {
            match rx.recv().await {
                Ok(chunk) => match chunk {
                    OutputChunk::TextDelta(text) => print!("{}", text),
                    OutputChunk::ToolStart { name, .. } => {
                        print!("\n[Tool: {}] ", name);
                    }
                    OutputChunk::PermissionRequest { tool_name, action, .. } => {
                        println!("\n[Permission: {} - {}]", tool_name, action);
                        handle.send_permission_response(tool_name, true, false).await?;
                    }
                    OutputChunk::Done => {
                        println!("\n");
                        break;
                    }
                    OutputChunk::Error(e) => {
                        eprintln!("Error: {}", e);
                        break;
                    }
                    _ => {}
                },
                Err(_) => break,
            }
        }
    }

    Ok(())
}

Next Steps

Now that you have a working agent, explore more features:

Core Concepts

Learn about Runtime, Sessions, and Agent States

Built-in Tools

Explore all available tools

Permission System

Understand the three-tier permission system

Hooks

Intercept and modify agent behavior

Common Issues

“Permission denied” errors

Make sure to handle OutputChunk::PermissionRequest or use:
.with_dangerous_skip_permissions(true)

No output streaming

Remember to subscribe before sending input:
let mut rx = handle.subscribe();  // Do this first!
handle.send_input("Hello").await?;  // Then this

API key not found

Set the env vars for your chosen provider:
# Anthropic
export ANTHROPIC_API_KEY="your-key"
export ANTHROPIC_MODEL="claude-sonnet-4-5@20250929"

# Gemini
export GEMINI_API_KEY="your-key"
export GEMINI_MODEL="gemini-3-flash-preview"

# OpenAI
export OPENAI_API_KEY="your-key"
export OPENAI_MODEL="gpt-4o"

What’s Next?

You’ve built your first agent! Ready to dive deeper? Explore the Core Concepts next.