Agent Framework

ep 2: abstractions & multi-model support

Refactoring the agent framework to support multiple LLM providers—creating generic interfaces, implementing the read_file tool, and organizing code into packages.

· 87 minutes

Introduction

04:40 In this second episode, we continued building our agent framework by tackling a crucial challenge: how do we support multiple LLM providers without rewriting our entire agent?

tl;dr

  • Created generic LLM interfaces to abstract away provider-specific code
  • Implemented the read_file tool with security constraints
  • Organized code into proper packages (llm/, tool/)
  • Maintained backward compatibility—everything still works!

The Problem: Tight Coupling

08:00 In episode 1, we integrated directly with Anthropic’s SDK. This worked, but had a problem: our agent code was tightly coupled to Anthropic’s types and APIs.

// This only works with Anthropic
inputMessages := []anthropic.MessageParam{
    anthropic.NewUserMessage(anthropic.NewTextBlock(systemPrompt)),
}
respMessage, err := client.Messages.New(context.Background(), anthropic.MessageNewParams{
    MaxTokens: 4096, // required by the Messages API
    Messages:  inputMessages,
    Model:     anthropic.ModelClaudeSonnet4_5_20250929,
})

If we wanted to support OpenAI or Gemini, we’d need to rewrite significant portions of our agent loop. Not ideal.

Creating the LLM Abstraction

22:00 The solution is to create a generic interface that any LLM provider can implement. We started by defining what an LLM needs to do from our agent’s perspective.

The Core Interface

23:00 In llm/type.go, we defined:

type LLM interface {
    RunInference(messages []Message, tools []ToolDefinition) ([]Message, error)
}

This is beautifully simple: give the LLM some messages and tool definitions, get back new messages. Everything else is implementation details.

Generic Message Type

24:00 Messages needed to support three types of content:

type Message struct {
    Role       Role
    Text       string
    ToolResult *ToolResult
    ToolUse    *ToolUse
}

type Role string

const (
    RoleUser      Role = "user"
    RoleAssistant Role = "assistant"
    RoleSystem    Role = "system"
)

A message can contain:

  • Plain text from user or assistant
  • A tool use request from the LLM
  • A tool result being sent back to the LLM

Tool Types

28:00 We defined generic tool types:

type ToolUse struct {
    ID    string
    Name  string
    Input json.RawMessage
}

type ToolResult struct {
    ID      string
    Content string
    IsError bool
}

type ToolDefinition struct {
    Name                string
    Description         string
    InputSchemaInstance interface{}
    Func                func(json.RawMessage) (string, error)
}

The ToolDefinition is particularly clever: it includes both the schema (as a Go struct) and the implementation function. This keeps everything together.
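
Putting the message and tool types together, here's what a single tool-call round trip looks like in our generic format (IDs and contents are illustrative):

history := []Message{
    // plain text from the user
    {Role: RoleUser, Text: "What files are in this project?"},
    // the LLM asks to run a tool
    {Role: RoleAssistant, ToolUse: &ToolUse{
        ID:    "toolu_01",
        Name:  "list_files",
        Input: json.RawMessage(`{"directory": "."}`),
    }},
    // the tool result goes back to the LLM (the Anthropic adapter
    // wraps it as a user-role message)
    {ToolResult: &ToolResult{
        ID:      "toolu_01",
        Content: "main.go\nllm/\ntool/",
    }},
}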

Implementing the Anthropic Adapter

38:00 Now we needed to implement our generic interface for Anthropic. This went in llm/anthropic.go.

Client Setup

type anthropicLLM struct {
    client anthropic.Client
}

func NewAnthropicClient() *anthropicLLM {
    anthropicApiKey := os.Getenv("ANTHROPIC_API_KEY")
    client := anthropic.NewClient(
        option.WithAPIKey(anthropicApiKey),
    )
    return &anthropicLLM{
        client: client,
    }
}

Message Translation

44:00 The tricky part was translating between our generic message format and Anthropic’s specific format:

func transformToAnthropicMessages(messages []Message) []anthropic.MessageParam {
    anthropicMessages := make([]anthropic.MessageParam, len(messages))
    for i, msg := range messages {
        if msg.ToolUse != nil {
            anthropicMessages[i] = anthropic.NewAssistantMessage(
                anthropic.NewToolUseBlock(msg.ToolUse.ID, msg.ToolUse.Input, msg.ToolUse.Name))
        } else if msg.ToolResult != nil {
            anthropicMessages[i] = anthropic.NewUserMessage(
                anthropic.NewToolResultBlock(msg.ToolResult.ID, msg.ToolResult.Content, msg.ToolResult.IsError))
        } else if msg.Text != "" {
            if msg.Role == RoleUser {
                anthropicMessages[i] = anthropic.NewUserMessage(anthropic.NewTextBlock(msg.Text))
            } else if msg.Role == RoleAssistant {
                anthropicMessages[i] = anthropic.NewAssistantMessage(anthropic.NewTextBlock(msg.Text))
            }
        }
    }
    return anthropicMessages
}

This function handles all three message types and converts them to Anthropic’s format. Going the other way (Anthropic → generic) required similar logic.
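
Here's a sketch of how RunInference ties the two directions together. It calls the tool translation shown in the next section, and transformFromAnthropicMessages stands in for the elided reverse translation; the exact SDK field names are our best guess and may differ slightly from the stream:

func (a *anthropicLLM) RunInference(messages []Message, tools []ToolDefinition) ([]Message, error) {
    resp, err := a.client.Messages.New(context.Background(), anthropic.MessageNewParams{
        MaxTokens: 4096,
        Model:     anthropic.ModelClaudeSonnet4_5_20250929,
        Messages:  transformToAnthropicMessages(messages),
        Tools:     transformToAnthropicTools(tools),
    })
    if err != nil {
        return nil, err
    }
    return transformFromAnthropicMessages(resp), nil
}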

Tool Translation

53:00 Tools also needed translation:

func transformToAnthropicTools(tools []ToolDefinition) []anthropic.ToolUnionParam {
    anthropicTools := []anthropic.ToolUnionParam{}
    for _, tool := range tools {
        toolParam := anthropic.ToolParam{
            Name:        tool.Name,
            Description: anthropic.String(tool.Description),
            InputSchema: GenerateSchema(tool.InputSchemaInstance),
        }
        // Wrap each concrete ToolParam in the SDK's union type
        anthropicTools = append(anthropicTools, anthropic.ToolUnionParam{OfTool: &toolParam})
    }
    return anthropicTools
}

The GenerateSchema function uses reflection to convert our Go struct into a JSON schema that Anthropic expects.
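
We won't reproduce GenerateSchema line by line here, but a plausible implementation uses the invopop/jsonschema reflector (an assumption on our part; the streamed version may differ):

import "github.com/invopop/jsonschema"

func GenerateSchema(instance interface{}) anthropic.ToolInputSchemaParam {
    reflector := jsonschema.Reflector{
        AllowAdditionalProperties: false, // reject fields not in the schema
        DoNotReference:            true,  // inline definitions instead of $ref
    }
    schema := reflector.Reflect(instance)
    return anthropic.ToolInputSchemaParam{
        Properties: schema.Properties,
    }
}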

Implementing the Read File Tool

09:00 With our abstractions in place, adding a new tool became much cleaner. We moved all tool code to tool/filesystem.go.

Tool Definition

var ReadFileToolDefinition = llm.ToolDefinition{
    Name:                "read_file",
    Description:         "Reads the file at the given path.",
    InputSchemaInstance: ReadFileInput{},
    Func:                ReadFileImpl,
}

type ReadFileInput struct {
    Path string `json:"path" jsonschema_description:"The path to the file"`
}

Implementation with Security

18:00 The implementation includes a security check:

var ReadFileImpl = func(message json.RawMessage) (string, error) {
    var input ReadFileInput
    if err := json.Unmarshal(message, &input); err != nil {
        return "", err
    }
    path := input.Path

    // Security: Don't let the agent read secrets
    if path == ".env" {
        return "", fmt.Errorf(".env file is not allowed to be read")
    }

    data, err := os.ReadFile(path)
    if err != nil {
        return "", fmt.Errorf("error reading file: %w", err)
    }
    return string(data), nil
}

This prevents the agent from accidentally exposing secrets. We could extend this to check other sensitive files or patterns.
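
A possible extension along those lines, blocking a small set of sensitive filename patterns (this list is illustrative, not from the stream):

var blockedPatterns = []string{".env", "id_rsa", ".pem", "credentials"}

func isBlockedPath(path string) bool {
    base := filepath.Base(path)
    for _, pattern := range blockedPatterns {
        if strings.Contains(base, pattern) {
            return true
        }
    }
    return false
}

ReadFileImpl would then call isBlockedPath(path) instead of comparing against the exact ".env" string.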

Tool Registry

60:00 We created a tool registry for easy lookup:

var ToolMap = map[string]llm.ToolDefinition{
    "list_files": ListFilesToolDefinition,
    "read_file":  ReadFileToolDefinition,
}

func ExecuteTool(name string, input json.RawMessage) (string, error) {
    def, ok := ToolMap[name]
    if !ok {
        return "", errors.New("Tool " + name + " not found")
    }
    return def.Func(input)
}

This makes it trivial to add new tools—just add them to the map.
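
For example, a hypothetical write_file tool (one of the candidates for episode 3) would need only a definition and a map entry:

type WriteFileInput struct {
    Path    string `json:"path" jsonschema_description:"The path to the file"`
    Content string `json:"content" jsonschema_description:"The content to write"`
}

var WriteFileToolDefinition = llm.ToolDefinition{
    Name:                "write_file",
    Description:         "Writes content to the file at the given path.",
    InputSchemaInstance: WriteFileInput{},
    Func:                WriteFileImpl, // implemented in the same style as ReadFileImpl
}

// ...and in ToolMap:
//   "write_file": WriteFileToolDefinition,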

Refactoring List Files

11:00 We also cleaned up the list_files tool:

var ListFileImpl = func(message json.RawMessage) (string, error) {
    var input ListFilesInput
    if err := json.Unmarshal(message, &input); err != nil {
        return "", err
    }
    entries, err := os.ReadDir(input.Directory)
    if err != nil {
        return "", fmt.Errorf("error reading directory: %w", err)
    }

    var files []string
    for _, entry := range entries {
        name := entry.Name()
        if entry.IsDir() {
            name += "/"
        }
        files = append(files, name)
    }
    return strings.Join(files, "\n"), nil
}

Instead of returning JSON, we now return a simple newline-separated list. This is easier for the LLM to understand.
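
For a project root like ours, the output now looks something like:

go.mod
llm/
main.go
tool/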

Simplifying Main

66:00 With all these abstractions, main.go became much cleaner:

func main() {
    err := godotenv.Load()
    if err != nil {
        log.Fatal("Error loading .env file")
    }

    var goal = flag.String("goal", "", "What would you like the agent to do?")
    flag.Parse()
    userGoal := *goal

    // Create LLM client (could be anthropic, openai, etc.)
    client, err := llm.NewClient("anthropic")
    if err != nil {
        log.Fatal(err)
    }

    // Setup initial messages
    inputMessages := []llm.Message{
        {
            Role: llm.RoleUser,
            Text: systemPrompt,
        },
        {
            Role: llm.RoleUser,
            Text: userGoal,
        },
    }

    // Tool definitions
    allTools := []llm.ToolDefinition{
        tool.ListFilesToolDefinition,
        tool.ReadFileToolDefinition,
    }

    // Agent loop
    for {
        // Run inference
        respMessage, err := client.RunInference(inputMessages, allTools)
        if err != nil {
            log.Fatal(err)
        }

        // Print responses
        for _, message := range respMessage {
            if message.Text != "" {
                fmt.Println(message.Text)
            } else if message.ToolUse != nil {
                inputJson, _ := json.MarshalIndent(message.ToolUse.Input, "", "  ")
                fmt.Println(message.ToolUse.Name + ": " + string(inputJson))
            }
        }

        // Add to history
        inputMessages = append(inputMessages, respMessage...)

        // Execute tools
        toolResult := []llm.ToolResult{}
        for _, block := range respMessage {
            if block.ToolUse != nil {
                toolResp, toolErr := tool.ExecuteTool(block.ToolUse.Name, block.ToolUse.Input)

                if toolErr != nil {
                    toolResult = append(toolResult, llm.ToolResult{
                        ID:      block.ToolUse.ID,
                        IsError: true,
                        Content: toolErr.Error(),
                    })
                } else {
                    toolResult = append(toolResult, llm.ToolResult{
                        ID:      block.ToolUse.ID,
                        IsError: false,
                        Content: toolResp,
                    })
                }
            }
        }

        if len(toolResult) == 0 {
            break
        }

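        // Note: taking &tr below is safe on Go 1.22+, where the range variable
        // is per-iteration; on older Go versions, copy it first (tr := tr).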
        for _, tr := range toolResult {
            inputMessages = append(inputMessages, llm.Message{
                ToolResult: &tr,
            })
        }
    }
}

Notice how we’re no longer using any Anthropic-specific types in main. Everything goes through our generic interfaces.
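
The llm.NewClient factory that makes this possible lives in llm/llm.go; a minimal sketch, assuming the string-keyed constructor used above:

func NewClient(provider string) (LLM, error) {
    switch provider {
    case "anthropic":
        return NewAnthropicClient(), nil
    default:
        return nil, fmt.Errorf("unknown LLM provider: %q", provider)
    }
}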

Environment Variables

06:00 We added support for .env files:

import "github.com/joho/godotenv"

err := godotenv.Load()
if err != nil {
    log.Fatal("Error loading .env file")
}

And we added a .gitignore to keep secrets out of version control:

.env

Your .env file should contain:

ANTHROPIC_API_KEY=your-api-key-here

Package Organization

52:00 We organized the code into logical packages:

agent-framework/
├── llm/
│   ├── anthropic.go    # Anthropic implementation
│   ├── llm.go          # Client factory
│   └── type.go         # Generic interfaces
├── tool/
│   ├── filesystem.go   # File system tools
│   └── tool.go         # Tool execution
└── main.go             # Agent loop

This makes it easy to:

  • Add new LLM providers (just implement the LLM interface)
  • Add new tools (just add to ToolMap)
  • Keep concerns separated

Testing

79:00 We tested everything with the same goal from episode 1:

go run main.go -goal "explain this project"

The agent:

  1. Lists files in the root directory
  2. Reads go.mod, README.md, and other files
  3. Provides a comprehensive explanation

Everything worked! We didn’t break any functionality—we just made the code more extensible.

Key Concepts
Interface-Based Design

By programming to interfaces rather than concrete types, we made our agent provider-agnostic. The core agent loop doesn’t know or care whether it’s talking to Anthropic, OpenAI, or Gemini.
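
To make that concrete, here's a hypothetical stub for a second provider inside the llm package (nothing like this exists yet; it's episode 3 material):

type openaiLLM struct {
    // the provider's own client would live here
}

func (o *openaiLLM) RunInference(messages []Message, tools []ToolDefinition) ([]Message, error) {
    // translate generic → OpenAI format, call the API, translate back
    return nil, errors.New("not implemented until episode 3")
}

Because the agent loop only depends on the LLM interface, dropping this in would require zero changes to main.go.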

Translation Layers

Each LLM provider has its own API format. The translation layer (transformToAnthropicMessages, etc.) handles converting between our generic format and the provider’s specific format.

This is the “adapter pattern” from software design.

Tool as Data + Function

Our ToolDefinition type combines:

  • Schema (what inputs the tool expects)
  • Implementation (what the tool does)

This keeps everything together and makes tools self-contained.

Security by Default

The .env file check in read_file shows how we can build security into our tools from the start. As we add more tools, we should think about:

  • What files/directories should be off-limits?
  • What commands are dangerous to execute?
  • How do we sandbox the agent?

What’s Next

In episode 3, we’ll:

  • Add OpenAI (ChatGPT) integration
  • Add Gemini integration
  • Test that the same agent code works with all three providers
  • Potentially add more tools (write file, execute commands)

Common Issues
“Error loading .env file”

Make sure you have a .env file in your project root with:

ANTHROPIC_API_KEY=your-key-here

Type Mismatches

If you see errors about Message vs anthropic.Message, make sure you’re importing from the right package:

import "github.com/agentengineering.dev/agent-framework/llm"

// Use llm.Message, not anthropic.Message

Tool Not Found

If the agent tries to use a tool that doesn’t exist, check that it’s registered in ToolMap:

var ToolMap = map[string]llm.ToolDefinition{
    "list_files": ListFilesToolDefinition,
    "read_file":  ReadFileToolDefinition,
}

Full Code

You can find the complete code from this stream at: https://github.com/agentengineering-dev/agent-framework/tree/ep-002