Building complex AI-powered applications often requires the system to parse and process the output of an AI model. Previously, Amazon Bedrock developers had to instruct the model in their prompts to return JSON or XML, which was error-prone and the responses often contained hallucinations. With the introduction of Bedrock Agents, developers can define schemas in the OpenAPI format and instruct their AI to use external APIs to retrieve the required data or to perform the requested operations.

Architecture

In this article we’re going to build a serverless API that can answer any question. But rather than just using an AI chat interface, we’re going to instruct the AI to send its answers to an API in a strictly structured JSON format. This means that the AI’s response follows a pre-defined schema and is guaranteed to be syntactically valid, unlike a regular chat prompt response. Effectively, the AI can communicate with another machine when answering a question, not just with a human.

The source code for this project is available at GitHub. It’s based on the Serverless framework and consists of two Lambda functions: one that relays questions from the user to the AI, and another that the AI itself calls when necessary. Answers are stored in a DynamoDB table shared between the two Lambdas:

AWS diagram
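Both functions read and write that table through a couple of small shared helpers. The repository’s implementation may differ, but a minimal sketch of putDocument and readDocument, assuming the @aws-sdk/lib-dynamodb document client and an id partition key, could look like this:

import { DynamoDBClient } from '@aws-sdk/client-dynamodb';
import { DynamoDBDocumentClient, PutCommand, GetCommand } from '@aws-sdk/lib-dynamodb';

const client = DynamoDBDocumentClient.from(new DynamoDBClient({}));

// Write an item into the given table
export const putDocument = async (tableName: string, item: Record<string, unknown>) => {
  await client.send(new PutCommand({ TableName: tableName, Item: item }));
};

// Read an item from the given table by its id
export const readDocument = async (tableName: string, id: string) => {
  const { Item } = await client.send(new GetCommand({ TableName: tableName, Key: { id } }));
  return Item;
};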

Implementation

When creating the Bedrock agent, we’re going to use a very simple instruction for the AI:

You are an agent that answers questions. You must save the answer to POST::API::answerQuestion API.

And when asking questions, the prompt will be the following:

Save the answer to the following question: <question>{question}</question>

Notice the POST::API::answerQuestion part. It is the name of one of the APIs available to the AI. A Bedrock agent can be provided with an OpenAPI schema of the operations it is allowed to use. The schema for our project looks like this:

Schema of the agent API
{
  "openapi": "3.0.0",
  "info": {
    "version": "1.0.0",
    "title": "AI agent API",
    "description": "APIs available for AI agents."
  },
  "paths": {
    "/answer-question": {
      "post": {
        "summary": "API to save answers to questions",
        "description": "Save answers to questions.",
        "operationId": "answerQuestion",
        "requestBody": {
          "required": true,
          "content": {
            "application/json": {
              "schema": {
                "type": "object",
                "required": ["answer"],
                "properties": {
                  "answer": {
                    "type": "string",
                    "description": "Answer to the question."
                  }
                }
              }
            }
          }
        },
        "responses": {
          "200": { "description": "Answer saved successfully." }
        }
      }
    }
  }
}
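However the agent itself is created (in the console or with infrastructure code), this schema ends up attached to it as an action group together with the executor Lambda. As a rough illustration only, this is how the wiring might be done with the @aws-sdk/client-bedrock-agent client; the agent ID and Lambda ARN below are placeholders:

import { readFileSync } from 'fs';
import { BedrockAgentClient, CreateAgentActionGroupCommand } from '@aws-sdk/client-bedrock-agent';

const client = new BedrockAgentClient({});

// Attach the OpenAPI schema to the draft version of the agent as an action group.
// The agent ID and executor Lambda ARN are placeholders.
await client.send(new CreateAgentActionGroupCommand({
  agentId: 'YOUR_AGENT_ID',
  agentVersion: 'DRAFT',
  actionGroupName: 'm2m-ai-agent-api',
  actionGroupExecutor: { lambda: 'arn:aws:lambda:REGION:ACCOUNT:function:agent-api' },
  apiSchema: { payload: readFileSync('schema.json', 'utf-8') },
}));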

The schema only describes the structure of the available operations; agents can’t call an API directly and instead rely on a Lambda function for execution. Our Lambda function for handling agent API calls has the following code:

Agent API Lambda
const router = new Router();

const api = async (event) => {
  return await router.handle(event.httpMethod, event.apiPath, event);
};

router.add('POST', '/answer-question', async (request: AgentRequest) => {
  const input = getAgentInput<AnswerQuestionInput>(request); // Parse the request

  await putDocument(answersTable, {
    id: request.sessionId,
    answer: input.answer,
  }); // Save the answer to DB

  return createAgentResponse(request, { status: 'OK' });
});
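The getAgentInput and createAgentResponse helpers deal with the event and response format that Bedrock uses when an agent invokes an action group Lambda. The actual helpers in the repository may look different, but a simplified sketch, trimmed to the fields used above, could be:

// Trimmed-down shape of the event Bedrock sends to an action group Lambda
interface AgentRequest {
  sessionId: string;
  actionGroup: string;
  apiPath: string;
  httpMethod: string;
  requestBody?: {
    content: {
      'application/json': {
        properties: { name: string; type: string; value: string }[];
      };
    };
  };
}

// Collect the request body properties into a plain object
export const getAgentInput = <T>(request: AgentRequest): T => {
  const properties = request.requestBody?.content['application/json'].properties ?? [];
  return Object.fromEntries(properties.map((p) => [p.name, p.value])) as unknown as T;
};

// Wrap a result into the response envelope that Bedrock expects back from the Lambda
export const createAgentResponse = (request: AgentRequest, body: unknown) => ({
  messageVersion: '1.0',
  response: {
    actionGroup: request.actionGroup,
    apiPath: request.apiPath,
    httpMethod: request.httpMethod,
    httpStatusCode: 200,
    responseBody: { 'application/json': { body: JSON.stringify(body) } },
  },
});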

It saves the answer from the AI in a DynamoDB table so that it can be retrieved later. But for this call to happen, we first need to ask the AI to answer a question. We do that via another Lambda function that handles requests from a REST endpoint in API Gateway:

Question API Lambda
const api = async (event) => {
  const inputText = textQuestionPrompt // Put question into the prompt template
    .replace('{question}', event.body.question);

  // Call Bedrock agent with the text
  const { completion, sessionId } = await invokeAgent(inputText);

  for await (const response of completion) {
    if (response.trace?.trace) {
      // Output trace info from the agent
      console.log(JSON.stringify(response.trace.trace));
    }
  }

  // Read the saved answer from DB
  const document = await readDocument(answersTable, sessionId);

  return formatJSONResponse(document);
};
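The invokeAgent helper is a thin wrapper around the Bedrock Agents runtime client. A minimal sketch, assuming the agent and alias IDs are provided via environment variables:

import { randomUUID } from 'crypto';
import { BedrockAgentRuntimeClient, InvokeAgentCommand } from '@aws-sdk/client-bedrock-agent-runtime';

const client = new BedrockAgentRuntimeClient({});

// Start a fresh session per question and return the streamed completion events.
// The agent and alias IDs are assumed to come from the environment.
export const invokeAgent = async (inputText: string) => {
  const sessionId = randomUUID();
  const { completion } = await client.send(new InvokeAgentCommand({
    agentId: process.env.AGENT_ID,
    agentAliasId: process.env.AGENT_ALIAS_ID,
    sessionId,
    inputText,
    enableTrace: true, // emit the reasoning trace that the caller logs
  }));
  return { completion, sessionId };
};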

Let’s try to call this function with a simple math question:

Question API request
{
  "question": "Betty had a pack of 25 pencil crayons. She gave five to her friend Theresa. She gave three to her friend Mary. How many pencil crayons does Betty have left?"
}
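With the stack deployed, this is a plain HTTP POST to the API Gateway endpoint; the URL and path below are placeholders for the deployed stage:

// Placeholder URL for the deployed API Gateway stage
const response = await fetch('https://YOUR_API_ID.execute-api.us-east-1.amazonaws.com/dev/question', {
  method: 'POST',
  headers: { 'Content-Type': 'application/json' },
  body: JSON.stringify({
    question: 'Betty had a pack of 25 pencil crayons. She gave five to her friend Theresa. ' +
      'She gave three to her friend Mary. How many pencil crayons does Betty have left?',
  }),
});

console.log(await response.json());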

If tracing is enabled, the agent logs its reasoning and the function calls it makes:

To answer this question, I will:

  1. Solve the math problem: Betty originally had 25 pencil crayons. She gave 5 to Theresa and 3 to Mary. So she gave away 5 + 3 = 8 pencil crayons. So she has 25 - 8 = 17 pencil crayons left.
  2. Call the POST::m2m-ai-agent-api::answerQuestion function to save the answer.

I have checked that I have access to the POST::m2m-ai-agent-api::answerQuestion function.
<function_call>post::m2m-ai-agent-api::answerQuestion(answer="17")</function_call>
<function_result>{"status":"OK"}</function_result>

And the API response will contain the answer to the original question:

Question API response
{
  "id": "643dc42c-f492-43ce-bfb8-ac991b2090e5",
  "answer": "17"
}

Conclusion

Agents for Amazon Bedrock is a powerful feature that can take AI applications to a whole new level. It gives AIs the tools to solve complex problems and to perform operations on your behalf, enabling integration with other systems in a scalable and secure way.