Large Language Model service implementation using Google’s Gemini API
`GoogleLLMService` provides integration with Google’s Gemini models, supporting streaming responses, function calling, and multimodal inputs. It includes specialized context handling for Google’s message format while maintaining compatibility with OpenAI-style contexts.
To use `GoogleLLMService`, install the required dependencies:
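Assuming the service ships as the `google` extra of the `pipecat-ai` package, a typical install looks like:

```bash
# Assumed package name and extra; check the project's install docs for the exact command.
pip install "pipecat-ai[google]"
```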
You’ll also need to set your Google API key in the `GOOGLE_API_KEY` environment variable.
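A minimal sketch of passing the key to the service at construction time; the import path and model name are assumptions and may differ between versions:

```python
import os

# Import path is an assumption; it has moved between releases
# (e.g. pipecat.services.google vs. pipecat.services.google.llm).
from pipecat.services.google.llm import GoogleLLMService

llm = GoogleLLMService(
    api_key=os.getenv("GOOGLE_API_KEY"),  # read the key from the environment
    model="gemini-2.0-flash",             # hypothetical model choice
)
```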
The service consumes the following input frames:

- `OpenAILLMContextFrame` - Conversation context and history
- `LLMMessagesFrame` - Direct message list
- `VisionImageRawFrame` - Images for vision processing
- `LLMUpdateSettingsFrame` - Runtime parameter updates

It produces the following output frames (a usage sketch follows these lists):

- `LLMFullResponseStartFrame` / `LLMFullResponseEndFrame` - Response boundaries
- `LLMTextFrame` - Streamed completion chunks
- `LLMSearchResponseFrame` - Search grounding results with citations
- `FunctionCallInProgressFrame` / `FunctionCallResultFrame` - Function call lifecycle
- `ErrorFrame` - API or processing errors
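To see how these frames fit together, here is a minimal end-to-end sketch that feeds an `OpenAILLMContextFrame` into the service and prints the streamed `LLMTextFrame` chunks. The frame and service names come from the lists above, but the import paths, pipeline wiring, and model name are assumptions that may vary between versions.

```python
import asyncio
import os

# Import paths are assumptions; they have moved between releases.
from pipecat.frames.frames import EndFrame, LLMTextFrame
from pipecat.pipeline.pipeline import Pipeline
from pipecat.pipeline.runner import PipelineRunner
from pipecat.pipeline.task import PipelineTask
from pipecat.processors.aggregators.openai_llm_context import (
    OpenAILLMContext,
    OpenAILLMContextFrame,
)
from pipecat.processors.frame_processor import FrameDirection, FrameProcessor
from pipecat.services.google.llm import GoogleLLMService


class TextPrinter(FrameProcessor):
    """Prints streamed completion chunks (LLMTextFrame) as they pass through."""

    async def process_frame(self, frame, direction: FrameDirection):
        await super().process_frame(frame, direction)
        if isinstance(frame, LLMTextFrame):
            print(frame.text, end="", flush=True)
        await self.push_frame(frame, direction)


async def main():
    llm = GoogleLLMService(
        api_key=os.getenv("GOOGLE_API_KEY"),
        model="gemini-2.0-flash",  # hypothetical model choice
    )

    # OpenAI-style messages; the service converts them to Google's format internally.
    context = OpenAILLMContext(
        messages=[{"role": "user", "content": "Give me a one-sentence greeting."}]
    )

    pipeline = Pipeline([llm, TextPrinter()])
    task = PipelineTask(pipeline)

    # Queuing an OpenAILLMContextFrame triggers one streamed completion, bounded by
    # LLMFullResponseStartFrame / LLMFullResponseEndFrame; EndFrame shuts the task down.
    await task.queue_frames([OpenAILLMContextFrame(context=context), EndFrame()])
    await PipelineRunner().run(task)


if __name__ == "__main__":
    asyncio.run(main())
```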
When search grounding is enabled, you receive an `LLMSearchResponseFrame` with detailed citation information.
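A downstream processor can watch for these frames and log the grounded answer alongside its citations. This is a sketch only: the frame’s import location and its field names (`search_result`, `origins`) are assumptions and should be checked against the installed version, which is why the fields are read with `getattr`.

```python
from pipecat.frames.frames import LLMSearchResponseFrame  # assumed location of the frame class
from pipecat.processors.frame_processor import FrameDirection, FrameProcessor


class SearchCitationLogger(FrameProcessor):
    """Passes all frames through, logging search-grounded responses."""

    async def process_frame(self, frame, direction: FrameDirection):
        await super().process_frame(frame, direction)

        if isinstance(frame, LLMSearchResponseFrame):
            # Field names are assumptions; check the frame definition in your version.
            answer = getattr(frame, "search_result", None)
            origins = getattr(frame, "origins", [])
            print(f"Grounded answer: {answer}")
            print(f"Cited {len(origins)} source(s): {origins}")

        # Always forward the frame so the rest of the pipeline keeps running.
        await self.push_frame(frame, direction)
```

Placed after the LLM service in a pipeline, this processor sees every output frame listed above and forwards it unchanged.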