Agent Framework

ep 3: OpenAI integration (part 1)

Adding an AGPL-3.0 license, implementing a message type enum for better code clarity, and attempting to integrate OpenAI as a second LLM provider—discovering the challenges of multi-provider support.

· 85 minutes

Introduction

In this third episode, we took on two important tasks: making our agent framework officially open source with AGPL-3.0, and attempting to add OpenAI (ChatGPT) as our second LLM provider.

tl;dr

  • Added AGPL-3.0 license to make the project officially open source
  • Refactored message handling with explicit MessageType enum
  • Started OpenAI provider implementation in llm/openai.go
  • Discovered OpenAI’s tool message format differs significantly from Anthropic’s
  • Hit roadblocks with OpenAI integration—incomplete by stream end
Adding Open Source License

We started by adding an AGPL-3.0 license to the project. The AGPL (Affero General Public License) is a copyleft license that ensures that if anyone uses this framework in a service, they must also share their modifications.

Why AGPL-3.0?

The regular GPL requires sharing code when you distribute software. But AGPL goes further: if you run modified AGPL code as a network service, you must make the source available to users of that service. The rationale is to put individuals first: anyone can hack on the framework freely, while companies that run modified versions as a service have to contribute their changes back.

Read License.txt

This means anyone can use, modify, and build on this framework, but if they distribute it or run a modified version as a service, they must share their improvements back with the community.

Refactoring: Message Type Enum

Before tackling OpenAI integration, we improved our message handling by adding explicit types. Previously, we determined message type by checking which fields were set:

// Old way - implicit types
if msg.ToolUse != nil {
    // This is a tool use message
} else if msg.ToolResult != nil {
    // This is a tool result message
} else if msg.Text != "" {
    // This is a text message
}

This works but isn’t ideal. We refactored to use explicit types.

Adding MessageType Enum

In llm/type.go:

type MessageType string

const (
    MessageTypeText       MessageType = "text"
    MessageTypeToolUse    MessageType = "tool_use"
    MessageTypeToolResult MessageType = "tool_result"
)

type Message struct {
    Type       MessageType    // Explicit type
    Role       Role
    Text       string
    ToolResult *ToolResult
    ToolUse    *ToolUse
}

Now every message has an explicit Type field. This makes the code clearer and helps catch bugs.
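
The explicit field also lets us validate messages up front instead of inferring intent from whichever pointer happens to be set. A minimal sketch of what that could look like (the Validate helper is hypothetical, not part of the stream code):

// Validate is a hypothetical helper: with an explicit Type we can
// reject inconsistent messages before they reach a provider adapter.
func (m Message) Validate() error {
    switch m.Type {
    case MessageTypeText:
        if m.Text == "" {
            return fmt.Errorf("text message with empty Text")
        }
    case MessageTypeToolUse:
        if m.ToolUse == nil {
            return fmt.Errorf("tool_use message with nil ToolUse")
        }
    case MessageTypeToolResult:
        if m.ToolResult == nil {
            return fmt.Errorf("tool_result message with nil ToolResult")
        }
    default:
        return fmt.Errorf("unknown message type %q", m.Type)
    }
    return nil
}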

Updating Anthropic Adapter

In llm/anthropic.go, we updated the transformation function:

func transformToAnthropicMessages(messages []Message) []anthropic.MessageParam {
    anthropicMessages := make([]anthropic.MessageParam, len(messages))
    for i, msg := range messages {
        switch msg.Type {
        case MessageTypeToolUse:
            anthropicMessages[i] = anthropic.NewAssistantMessage(
                anthropic.NewToolUseBlock(msg.ToolUse.ID, msg.ToolUse.Input, msg.ToolUse.Name))
        case MessageTypeToolResult:
            anthropicMessages[i] = anthropic.NewUserMessage(
                anthropic.NewToolResultBlock(msg.ToolResult.ID, msg.ToolResult.Content, msg.ToolResult.IsError))
        case MessageTypeText:
            if msg.Role == RoleUser {
                anthropicMessages[i] = anthropic.NewUserMessage(anthropic.NewTextBlock(msg.Text))
            } else if msg.Role == RoleAssistant {
                anthropicMessages[i] = anthropic.NewAssistantMessage(anthropic.NewTextBlock(msg.Text))
            }
        }
    }
    return anthropicMessages
}

Using a switch on msg.Type is cleaner than checking nil pointers. This also makes it easier to add new message types later.

Updating Main

In main.go, we now specify message types explicitly:

inputMessages := []llm.Message{
    {
        Type: llm.MessageTypeText,
        Role: llm.RoleUser,
        Text: systemPrompt,
    },
    {
        Type: llm.MessageTypeText,
        Role: llm.RoleUser,
        Text: userGoal,
    },
}

This makes it immediately clear what kind of message we’re creating.

Implementing OpenAI Provider

With the refactoring done, we started implementing OpenAI support. We added the OpenAI Go SDK:

go get github.com/openai/openai-go

Client Setup

In llm/openai.go:

package llm

import (
    "os"

    "github.com/openai/openai-go"
    "github.com/openai/openai-go/option"
)

type openaiLLM struct {
    client *openai.Client
}

// NewOpenAILLM reads the API key from the environment and constructs
// the client, mirroring the Anthropic setup.
func NewOpenAILLM() *openaiLLM {
    openaiApiKey := os.Getenv("OPENAI_API_KEY")
    client := openai.NewClient(
        option.WithAPIKey(openaiApiKey),
    )
    return &openaiLLM{
        client: client,
    }
}

This follows the same pattern as our Anthropic implementation.
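
Both providers need to satisfy the same interface so the agent loop stays provider-agnostic. Roughly the shape defined in llm/llm.go (the method name and signature here are illustrative, not verbatim from the repo):

// Illustrative shape of the shared interface; the real method set in
// llm/llm.go may differ, but both adapters must implement it.
type LLM interface {
    // Send the conversation and tool definitions to the provider and
    // return the model's reply as generic Messages.
    SendMessages(ctx context.Context, messages []Message, tools []ToolDefinition) ([]Message, error)
}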

Message Translation

We created a function to convert our generic messages to OpenAI’s format:

func transformToOpenAIMessages(messages []Message) []openai.ChatCompletionMessageParamUnion {
    openaiMessages := []openai.ChatCompletionMessageParamUnion{}

    for _, msg := range messages {
        switch msg.Type {
        case MessageTypeText:
            if msg.Role == RoleUser {
                openaiMessages = append(openaiMessages,
                    openai.UserMessage(msg.Text))
            } else if msg.Role == RoleAssistant {
                openaiMessages = append(openaiMessages,
                    openai.AssistantMessage(msg.Text))
            }
        case MessageTypeToolUse:
            // Tool use in OpenAI format
            openaiMessages = append(openaiMessages,
                openai.ChatCompletionMessage{
                    Role: openai.ChatCompletionMessageRoleAssistant,
                    ToolCalls: []openai.ChatCompletionMessageToolCall{
                        {
                            ID:   msg.ToolUse.ID,
                            Type: openai.ChatCompletionMessageToolCallTypeFunction,
                            Function: openai.ChatCompletionMessageToolCallFunction{
                                Name:      msg.ToolUse.Name,
                                Arguments: string(msg.ToolUse.Input),
                            },
                        },
                    },
                })
        case MessageTypeToolResult:
            // Tool results in OpenAI format
            openaiMessages = append(openaiMessages,
                openai.ToolMessage(msg.ToolResult.ID, msg.ToolResult.Content))
        }
    }

    return openaiMessages
}

Schema Generation

OpenAI uses a JSON Schema format similar to Anthropic’s for tool parameters, but with slight differences. We created a schema generator:

func GenerateOpenAISchema(instance interface{}) interface{} {
    reflector := jsonschema.Reflector{
        AllowAdditionalProperties: false,
        DoNotReference:            true,
    }
    schema := reflector.Reflect(instance)

    return map[string]interface{}{
        "type":       "object",
        "properties": schema.Properties,
        "required":   schema.Required,
    }
}
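
The Reflector fields used here (AllowAdditionalProperties, DoNotReference) match the github.com/invopop/jsonschema package. Assuming that library, a hypothetical tool input type reflects roughly like this:

// ReadFileInput is a made-up example type, not from the repo.
type ReadFileInput struct {
    Path string `json:"path" jsonschema_description:"Relative path of the file to read"`
}

// GenerateOpenAISchema(ReadFileInput{}) produces approximately:
//
//   {
//     "type": "object",
//     "properties": {
//       "path": {"type": "string", "description": "Relative path of the file to read"}
//     },
//     "required": ["path"]
//   }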

Tool Translation

Converting our generic tool definitions to OpenAI’s format:

func transformToOpenAITools(tools []ToolDefinition) []openai.ChatCompletionToolParam {
    openaiTools := []openai.ChatCompletionToolParam{}

    for _, tool := range tools {
        openaiTools = append(openaiTools, openai.ChatCompletionToolParam{
            Type: openai.F(openai.ChatCompletionToolTypeFunction),
            Function: openai.F(openai.FunctionDefinitionParam{
                Name:        openai.String(tool.Name),
                Description: openai.String(tool.Description),
                Parameters:  openai.F(GenerateOpenAISchema(tool.InputSchemaInstance)),
            }),
        })
    }

    return openaiTools
}
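
For context, here is roughly how these translations were meant to plug into the completion call. The field names follow the SDK style used above (values wrapped with openai.F); the receiver name and model constant are assumptions:

// Sketch of the chat completion request wiring the translated
// messages and tools together; error handling trimmed.
completion, err := o.client.Chat.Completions.New(ctx, openai.ChatCompletionNewParams{
    Model:    openai.F(openai.ChatModelGPT4o),
    Messages: openai.F(transformToOpenAIMessages(messages)),
    Tools:    openai.F(transformToOpenAITools(tools)),
})
if err != nil {
    return nil, err // the 400s described below surfaced here
}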

Where We Got Stuck

By the end of the stream, we were seeing errors like:

400 Bad Request: messages.[3].role

Despite multiple attempts to fix the message transformation logic, we couldn’t get OpenAI to accept our message sequence. The fundamental issue is that our abstraction was designed around Anthropic’s message model, and OpenAI’s model doesn’t map cleanly to it.

Adding Provider Selection

We did manage to add a command-line flag for choosing providers:

In llm/llm.go:

func NewClient(provider string) (LLM, error) {
    switch provider {
    case "anthropic":
        return NewAnthropicClient(), nil
    case "openai":
        return NewOpenAILLM(), nil
    default:
        return nil, fmt.Errorf("unknown provider: %s", provider)
    }
}

In main.go:

var provider = flag.String("provider", "anthropic", "LLM provider (anthropic or openai)")
flag.Parse()

client, err := llm.NewClient(*provider)

Now you can run:

go run main.go -provider anthropic -goal "explain this project"
# or
go run main.go -provider openai -goal "explain this project"

Though the OpenAI version doesn’t work yet.

Key Concepts
Explicit vs Implicit Types

Adding the MessageType enum is an example of making implicit state explicit. Instead of inferring a message’s type from which fields happen to be set, we declare it upfront. This:

  • Makes code easier to read
  • Surfaces malformed messages earlier
  • Documents intent clearly
Abstraction Leaks

Our attempt to support OpenAI revealed an “abstraction leak.” We created an abstraction (the Message type) based on how Anthropic works. When we tried to support OpenAI, we discovered our abstraction doesn’t fit their model perfectly.

This is a common challenge in multi-provider systems: finding an abstraction that works for all providers without becoming too complex.
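
Concretely, the leak shows up in how each provider represents a tool result on the wire:

// The same tool result, roughly as each provider expects it:
//
// Anthropic: a tool_result content block inside a *user* message
//   {"role": "user", "content": [{"type": "tool_result", "tool_use_id": "toolu_1", "content": "..."}]}
//
// OpenAI: a dedicated tool-role message tied to a prior tool call
//   {"role": "tool", "tool_call_id": "call_1", "content": "..."}
//
// Our Message type models the Anthropic shape, so the OpenAI adapter
// has to reconstruct a sequencing discipline the abstraction never enforced.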

What’s Next

In episode 4, we’ll need to:

  1. Revisit our message abstraction to better accommodate OpenAI
  2. Consider whether we need provider-specific message handling
  3. Study OpenAI’s message sequence requirements more carefully
  4. Possibly add a translation layer that maintains conversation state differently

The lesson here: abstractions are hard, and you often need to iterate based on real-world use cases.

Common Issues
“Tool role must follow assistant with tool_calls”

OpenAI enforces strict message sequencing. Tool messages must immediately follow an assistant message that made tool calls.
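
Using the helpers from transformToOpenAIMessages above, an ordering OpenAI accepts looks like this (IDs and content are made up, and assistantToolCallMessage is a hypothetical helper wrapping the tool_calls construction shown earlier):

// Minimal valid ordering: the tool message must directly follow the
// assistant message that issued the matching tool call.
messages := []openai.ChatCompletionMessageParamUnion{
    openai.UserMessage("what files are in this repo?"),     // 1. user turn
    assistantToolCallMessage("call_1", "list_files", "{}"), // 2. assistant turn carrying tool_calls (hypothetical helper)
    openai.ToolMessage("call_1", "main.go\nllm/"),          // 3. tool result keyed to the call ID
}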

Full Code

You can find the code from this stream at: https://github.com/agentengineering-dev/agent-framework/tree/ep-003

Note: The OpenAI integration is incomplete in this branch. We’ll continue working on it in episode 4.