AWS Bedrock
Large Language Model service implementation using the Amazon Bedrock API
Overview
The AWS Bedrock LLM service provides access to foundation models hosted on Amazon Bedrock, including Anthropic Claude and Amazon Nova, with streaming responses, function calling, and multimodal capabilities through Amazon’s managed AI service.
- API Reference - Complete API documentation and method details
- AWS Bedrock Docs - Official AWS Bedrock documentation and features
- Example Code - Working example with function calling
Installation
To use AWS Bedrock services, install the required dependencies:
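Assuming this page documents the Pipecat framework’s Bedrock integration (its frame names appear under Frames below), the dependencies ship as an optional extra:

```bash
pip install "pipecat-ai[aws]"
```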
You’ll also need to set up your AWS credentials as environment variables:
- AWS_ACCESS_KEY_ID
- AWS_SECRET_ACCESS_KEY
- AWS_SESSION_TOKEN (if using temporary credentials)
- AWS_REGION (defaults to “us-east-1”)
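For example, in a shell session (values are placeholders):

```bash
export AWS_ACCESS_KEY_ID=...        # from your IAM user
export AWS_SECRET_ACCESS_KEY=...
export AWS_REGION=us-west-2         # optional; defaults to us-east-1
```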
Set up an IAM user with Amazon Bedrock access in your AWS account to obtain credentials.
Frames
Input
- OpenAILLMContextFrame - Conversation context and history
- LLMMessagesFrame - Direct message list
- VisionImageRawFrame - Images for vision processing
- LLMUpdateSettingsFrame - Runtime parameter updates
Output
- LLMFullResponseStartFrame / LLMFullResponseEndFrame - Response boundaries
- LLMTextFrame - Streamed completion chunks
- FunctionCallInProgressFrame / FunctionCallResultFrame - Function call lifecycle
- ErrorFrame - API or processing errors
Function Calling
See the Function Calling Guide to learn how to implement function calling with standardized schemas, register handlers, manage context properly, and control execution flow in your conversational AI applications.
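As a quick sketch of the pattern, assuming Pipecat’s FunctionSchema, ToolsSchema, and register_function APIs (the get_weather function and handler signature below are hypothetical):

```python
from pipecat.adapters.schemas.function_schema import FunctionSchema
from pipecat.adapters.schemas.tools_schema import ToolsSchema

# Describe the function with a standardized schema.
weather = FunctionSchema(
    name="get_weather",
    description="Get the current weather for a location",
    properties={"location": {"type": "string", "description": "City name"}},
    required=["location"],
)
tools = ToolsSchema(standard_tools=[weather])

# The handler reports its result back to the LLM via the provided callback.
async def fetch_weather(params):
    await params.result_callback({"conditions": "sunny", "temperature_f": 72})

# Register the handler with the LLM service by function name.
llm.register_function("get_weather", fetch_weather)
```

While the handler runs, the service emits FunctionCallInProgressFrame, then FunctionCallResultFrame once the callback fires (see Output frames above).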
Context Management
See the Context Management Guide to learn how to manage conversation context, handle message history, and integrate context aggregators for consistent conversational experiences.
Usage Example
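A minimal sketch, assuming the service is Pipecat’s AWSBedrockLLMService; the module path, constructor parameters, and model ID below are assumptions, not confirmed by this page:

```python
import os

from pipecat.pipeline.pipeline import Pipeline
from pipecat.pipeline.task import PipelineTask
from pipecat.processors.aggregators.openai_llm_context import OpenAILLMContext
from pipecat.services.aws.llm import AWSBedrockLLMService

# Construct the service; credentials fall back to the environment
# variables described under Installation. (Parameter names are assumptions.)
llm = AWSBedrockLLMService(
    aws_access_key=os.getenv("AWS_ACCESS_KEY_ID"),
    aws_secret_key=os.getenv("AWS_SECRET_ACCESS_KEY"),
    aws_region=os.getenv("AWS_REGION", "us-east-1"),
    model="anthropic.claude-3-5-sonnet-20240620-v1:0",  # example model ID
)

# Seed the conversation and create the user/assistant aggregator pair
# that keeps message history consistent across turns.
context = OpenAILLMContext(
    messages=[{"role": "system", "content": "You are a helpful assistant."}]
)
context_aggregator = llm.create_context_aggregator(context)

# Transport, STT, and TTS processors are omitted; this only shows where
# the LLM sits between the two halves of the context aggregator.
pipeline = Pipeline([
    context_aggregator.user(),
    llm,
    context_aggregator.assistant(),
])
task = PipelineTask(pipeline)
```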
Metrics
The service provides comprehensive AWS Bedrock metrics:
- Time to First Byte (TTFB) - Latency from request to first response token
- Processing Duration - Total request processing time
- Token Usage - Input tokens, output tokens, and total usage
Enable with:
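A sketch, assuming Pipecat’s PipelineParams flags for metrics collection:

```python
from pipecat.pipeline.task import PipelineParams, PipelineTask

# enable_metrics covers TTFB and processing-duration metrics;
# enable_usage_metrics adds token usage. (Flag names are assumptions.)
task = PipelineTask(
    pipeline,
    params=PipelineParams(
        enable_metrics=True,
        enable_usage_metrics=True,
    ),
)
```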
Additional Notes
- Streaming Responses: All responses are streamed for low latency
- Context Persistence: Use context aggregators to maintain conversation history
- Error Handling: Automatic retry logic for rate limits and transient errors
- Message Format: Automatically converts between OpenAI and AWS Bedrock message formats
- Performance Modes: Choose “standard” or “optimized” latency based on your needs
- Regional Availability: Different models available in different AWS regions
- Vision Support: Image processing available with compatible models like Claude 3