Introduction
In this first episode of the Agent Framework live-stream series, I introduced the foundational blog for the series and discussed some FAQs.
tl;dr:
- Birth of a new Discipline: Agent Engineering.
- Why agents (like claude-code, codex) should be transparent for devs to use them effectively.
- We will build an open-source Agent Framework on live-stream
What is an Agent Framework?
An Agent Framework is the set of libraries, services, evaluation tools, and runtime components that make it possible to build, observe, and improve agents.
An Agent is an instance of a process that perceives its environment and takes actions—often through tools or APIs—to achieve a goal.
flowchart LR
    A[Agent] -->|perceive| B[Environment]
    A -->|actions| B
Agenda
- Setting up a golang project.
- Integrating with Anthropic’s golang sdk.
- Tools.
- Agent loop.
Setting up a golang project
- Set up a GitHub repo.
- Install Go 1.21 or later. I’m using 1.24.4.
- In this stream I use GoLand for coding, but VS Code works just as well.
- Initialize the project with
go mod init
Integrating with Anthropic’s golang sdk
- We will start by integrating with Claude. We could make raw HTTP calls to the Anthropic API, but in order to learn Anthropic’s abstractions, we will integrate our agent framework with Anthropic’s golang sdk.
go get github.com/anthropics/anthropic-sdk-go
- Create an API key in the Anthropic console.
- Export your Anthropic API key as an environment variable:
export ANTHROPIC_API_KEY="your-api-key-here"
- Make an LLM inference call to Claude in main.go:
client := anthropic.NewClient(
	option.WithAPIKey("my-anthropic-api-key"), // defaults to os.LookupEnv("ANTHROPIC_API_KEY")
)
message, err := client.Messages.New(context.TODO(), anthropic.MessageNewParams{
	MaxTokens: 1024,
	Messages: []anthropic.MessageParam{
		anthropic.NewUserMessage(anthropic.NewTextBlock("What is a quaternion?")),
	},
	Model: anthropic.ModelClaudeOpus4_5_20251101,
})
if err != nil {
	panic(err.Error())
}
fmt.Printf("%+v\n", message.Content)
- This gives us our first inference run.
- We refactor this to keep the client setup in main.go and move the inference logic into a runInference function:
func runInference(client anthropic.Client, input []anthropic.MessageParam, tools []anthropic.ToolUnionParam) *anthropic.Message {
	message, err := client.Messages.New(context.Background(), anthropic.MessageNewParams{
		MaxTokens: 1024,
		Messages:  input,
		Model:     anthropic.ModelClaudeSonnet4_5_20250929,
		Tools:     tools,
	})
	if err != nil {
		panic(err.Error())
	}
	return message
}
System prompt and user goal
Our agent wakes up to a first message containing its system instructions. These are the instructions that define the agent’s general behavior, role, and constraints.
const systemPrompt = `
You are an autonomous agent working in a project repository.
Follow the goal given below:
`
Following the system message, we need to give the agent a goal: what should it achieve in this session? In the main function:
var goal = flag.String("goal", "", "What would you like the agent to do?")
flag.Parse()
userGoal := *goal
Stitch them together to form the instructions to the agent.
inputMessages := []anthropic.MessageParam{
	anthropic.NewUserMessage(anthropic.NewTextBlock(systemPrompt)),
	anthropic.NewUserMessage(anthropic.NewTextBlock("Goal: " + userGoal)),
}
Before we move on to the agent loop, let’s create a tool.
Tools
Tools are how an LLM perceives its environment and takes actions in it. We define the available tools using JSON schemas, which are passed to the LLM with each inference call.
We will start with our agent’s first tool, list_files, so it can learn about its environment.
Tools consist of three parts:
- Input schema: what JSON the LLM should produce in order to call this tool. If the LLM wants to read files in a directory, it has to specify which directory; hence the input schema.
type ListFilesInput struct {
	Directory string `json:"directory" jsonschema_description:"Path of the directory"`
}

func GenerateSchema[T any]() anthropic.ToolInputSchemaParam {
	reflector := jsonschema.Reflector{
		AllowAdditionalProperties: false,
		DoNotReference:            true,
	}
	var v T
	schema := reflector.Reflect(v)
	return anthropic.ToolInputSchemaParam{
		Properties: schema.Properties,
	}
}

var ListFileInputSchema = GenerateSchema[ListFilesInput]()
The jsonschema_description tag helps the LLM understand how to populate the directory input field. GenerateSchema converts our ListFilesInput struct definition into the jsonschema.Properties object required by Anthropic’s SDK.
- Execution: the code that actually runs the tool. The listFile function reads a directory and returns file names:
type ListFilesOutput struct {
	Files []string `json:"files"`
}

func listFile(directory string) (*ListFilesOutput, error) {
	entries, err := os.ReadDir(directory)
	if err != nil {
		return nil, fmt.Errorf("error reading directory: %w", err)
	}
	var files []string
	for _, entry := range entries {
		name := entry.Name()
		if entry.IsDir() {
			name += "/" // mark directories with a trailing slash
		}
		files = append(files, name)
	}
	return &ListFilesOutput{Files: files}, nil
}
- Tool definition: the name, description, and schema we advertise to Claude:
toolParams := []anthropic.ToolParam{
	{
		Name:        "list_files",
		Description: anthropic.String("Returns a list of files in the current directory."),
		InputSchema: ListFileInputSchema,
	},
}
tools := make([]anthropic.ToolUnionParam, len(toolParams))
for i := range toolParams {
	// Take the address of the slice element, not a loop variable
	// (taking &toolParam would alias one variable on Go < 1.22).
	tools[i] = anthropic.ToolUnionParam{OfTool: &toolParams[i]}
}
Agent Loop
The core of our agent is a simple loop:
- Run inference (ask Claude what to do next)
- Check if Claude wants to use a tool
- If yes, execute the tool and send results back
- If no, we’re done
for {
	// Run inference
	respMessage := runInference(client, inputMessages, tools)
	// Append the assistant message so its tool_use blocks are
	// paired with the tool results we send back
	inputMessages = append(inputMessages, respMessage.ToParam())
	// Print the response and collect tool calls
	toolResult := []anthropic.ContentBlockParamUnion{}
	for _, block := range respMessage.Content {
		switch variant := block.AsAny().(type) {
		case anthropic.TextBlock:
			fmt.Println(variant.Text)
		case anthropic.ToolUseBlock:
			fmt.Printf("%s: %s\n", block.Name, block.Input)
			// ... tool execution logic (append a tool_result block to toolResult)
		}
	}
	// If no tools were called, we're done
	if len(toolResult) == 0 {
		break
	}
	// Add tool results back to the conversation
	inputMessages = append(inputMessages, anthropic.NewUserMessage(toolResult...))
}
Test the agent by giving it a goal:
go run main.go -goal "explain this project"
What You’ll See
When you run the agent, you’ll observe:
- Initial reasoning: The agent says “I’ll help you explain this project. Let me explore…”
- Tool calls: The agent decides to list files in the current directory
- Exploration: Based on what it finds, it explores subdirectories
- Final response: The agent provides a summary of the project structure
Example output:
I'll help you explain this project. Let me explore the directory structure.
list_files: {
"directory": "."
}
{"files":["go.mod","go.sum","main.go",".git/",".idea/"]}
Based on the files I can see, this is a Go-based project...
Key Concepts
Tool Calling
Tools are how LLMs interact with the outside world. We:
- Define the tool with a name, description, and JSON schema
- Send tool definitions to Claude with each inference
- Execute tools when Claude requests them
- Return results back to Claude to continue the conversation
The Agent Loop
The agent loop is the heart of autonomy:
- Each iteration, the agent decides what to do next
- It can use tools, respond with text, or signal completion
- The conversation history grows with each iteration
- Tools give the agent “senses” and “actions”
Conversation History
We maintain a list of messages that includes:
- System prompt (tells the agent what it is)
- User goal (what we want it to do)
- Assistant responses (what Claude said)
- Tool results (what happened when tools ran)
Common Issues
"unauthorized" Error
Make sure your ANTHROPIC_API_KEY environment variable is set correctly.
"tool use ids were found without tool result"
This happens if you don’t append tool results back to the conversation. Make sure to add:
inputMessages = append(inputMessages, anthropic.NewUserMessage(toolResult...))
Agent Runs Forever
If the agent doesn’t stop, it means it never sent a final text-only response. This is controlled by the break condition:
if len(toolResult) == 0 {
break
}
Full Code
You can find the complete code from this stream at: https://github.com/agentengineering-dev/agent-framework/tree/ep-001