Top 1.6% dependent packages on proxy.golang.org
Top 3.5% dependent repos on proxy.golang.org
Top 1.0% forks on proxy.golang.org
Top 0.3% docker downloads on proxy.golang.org
proxy.golang.org : github.com/Azure/azure-sdk-for-go/sdk/ai/azopenai
Example_audioTranscription demonstrates how to transcribe speech to text using Azure OpenAI's Whisper model.

This example shows how to:
- Create an Azure OpenAI client with token credentials
- Read an audio file and send it to the API
- Convert spoken language to written text using the Whisper model
- Process the transcription response

The example uses environment variables for configuration:
- AOAI_WHISPER_ENDPOINT: Your Azure OpenAI endpoint URL
- AOAI_WHISPER_MODEL: The deployment name of your Whisper model
- AZURE_OPENAI_API_VERSION: The Azure OpenAI service API version to use. See https://learn.microsoft.com/azure/ai-foundry/openai/api-version-lifecycle?tabs=go for information about API versions.

Audio transcription is useful for accessibility features, creating searchable archives of audio content, generating captions or subtitles, and enabling voice commands in applications.

Example_audioTranslation demonstrates how to translate speech from one language to English text.

This example shows how to:
- Create an Azure OpenAI client with token credentials
- Read a non-English audio file
- Translate the spoken content to English text
- Process the translation response

The example uses environment variables for configuration:
- AOAI_WHISPER_ENDPOINT: Your Azure OpenAI endpoint URL
- AOAI_WHISPER_MODEL: The deployment name of your Whisper model
- AZURE_OPENAI_API_VERSION: The Azure OpenAI service API version to use. See https://learn.microsoft.com/azure/ai-foundry/openai/api-version-lifecycle?tabs=go for information about API versions.

Speech translation is essential for cross-language communication, creating multilingual content, and building applications that break down language barriers.

Example_chatCompletionStream demonstrates streaming responses from the Chat Completions API.
This example shows how to:
- Create an Azure OpenAI client with token credentials
- Set up a streaming chat completion request
- Process incremental response chunks
- Handle streaming errors and completion

The example uses environment variables for configuration:
- AOAI_CHAT_COMPLETIONS_MODEL: The deployment name of your chat model
- AOAI_CHAT_COMPLETIONS_ENDPOINT: Your Azure OpenAI endpoint URL (ex: "https://yourservice.openai.azure.com")

Streaming is useful for:
- Real-time response display
- Improved perceived latency
- Interactive chat interfaces
- Long-form content generation

Example_chatCompletionsLegacyFunctions demonstrates using the legacy function calling format.

This example shows how to:
- Create an Azure OpenAI client with token credentials
- Define a function schema using the legacy format
- Use the tools API for backward compatibility
- Handle function calling responses

The example uses environment variables for configuration:
- AOAI_CHAT_COMPLETIONS_MODEL_LEGACY_FUNCTIONS_MODEL: The deployment name of your chat model
- AOAI_CHAT_COMPLETIONS_MODEL_LEGACY_FUNCTIONS_ENDPOINT: Your Azure OpenAI endpoint URL
- AZURE_OPENAI_API_VERSION: The Azure OpenAI service API version to use. See https://learn.microsoft.com/azure/ai-foundry/openai/api-version-lifecycle?tabs=go for information about API versions.

Legacy function support ensures:
- Compatibility with older implementations
- A smooth transition to the new tools API
- Support for existing function-based workflows

Example_chatCompletionsStructuredOutputs demonstrates using structured outputs with function calling.
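The read-accumulate loop that Example_chatCompletionStream describes follows a common Go streaming shape: call Read until io.EOF, appending each delta, and treat io.EOF as normal completion rather than a failure. A minimal self-contained sketch, with a mock stream standing in for the service (the chunk and mockStream types here are illustrative, not the SDK's own):

```go
package main

import (
	"errors"
	"fmt"
	"io"
	"strings"
)

// chunk mirrors the shape of a streamed delta: each event carries only the
// text added since the previous event. Local stand-in, not an SDK type.
type chunk struct {
	Delta string
}

// mockStream simulates a server-sent event stream. The real SDK exposes a
// similar Read method that returns io.EOF once the stream is finished.
type mockStream struct {
	events []chunk
	pos    int
}

func (s *mockStream) Read() (chunk, error) {
	if s.pos >= len(s.events) {
		return chunk{}, io.EOF
	}
	c := s.events[s.pos]
	s.pos++
	return c, nil
}

// Accumulate drains the stream, appending each delta; io.EOF signals normal
// completion, while any other error is surfaced to the caller.
func Accumulate(s *mockStream) (string, error) {
	var sb strings.Builder
	for {
		c, err := s.Read()
		if errors.Is(err, io.EOF) {
			return sb.String(), nil
		}
		if err != nil {
			return "", err
		}
		sb.WriteString(c.Delta)
	}
}

func main() {
	s := &mockStream{events: []chunk{{"The "}, {"quick "}, {"brown fox."}}}
	text, err := Accumulate(s)
	if err != nil {
		panic(err)
	}
	fmt.Println(text) // The quick brown fox.
}
```

In an interactive UI each delta would be rendered as it arrives rather than only at the end; the error handling is the same either way.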
This example shows how to:
- Create an Azure OpenAI client with token credentials
- Define complex JSON schemas for structured output
- Request specific data structures through function calls
- Parse and validate structured responses

The example uses environment variables for configuration:
- AOAI_CHAT_COMPLETIONS_STRUCTURED_OUTPUTS_MODEL: The deployment name of your chat model
- AOAI_CHAT_COMPLETIONS_STRUCTURED_OUTPUTS_ENDPOINT: Your Azure OpenAI endpoint URL (ex: "https://yourservice.openai.azure.com")

Structured outputs are useful for:
- Database query generation
- Data extraction and transformation
- API request formatting
- Consistent response formatting

Example_completions demonstrates how to use Azure OpenAI's legacy Completions API.

This example shows how to:
- Create an Azure OpenAI client with token credentials
- Send a simple text completion request
- Handle the completion response
- Process the generated text output

The example uses environment variables for configuration:
- AOAI_COMPLETIONS_MODEL: The deployment name of your completions model
- AOAI_COMPLETIONS_ENDPOINT: Your Azure OpenAI endpoint URL
- AZURE_OPENAI_API_VERSION: The Azure OpenAI service API version to use. See https://learn.microsoft.com/azure/ai-foundry/openai/api-version-lifecycle?tabs=go for information about API versions.

Legacy completions are useful for:
- Simple text generation tasks
- Completing partial text
- Single-turn interactions
- Basic language generation scenarios

Example_createImage demonstrates how to generate images using Azure OpenAI's DALL-E model.
This example shows how to:
- Create an Azure OpenAI client with token credentials
- Configure image generation parameters including size and format
- Generate an image from a text prompt
- Verify the generated image URL is accessible

The example uses environment variables for configuration:
- AOAI_DALLE_ENDPOINT: Your Azure OpenAI endpoint URL
- AOAI_DALLE_MODEL: The deployment name of your DALL-E model
- AZURE_OPENAI_API_VERSION: The Azure OpenAI service API version to use. See https://learn.microsoft.com/azure/ai-foundry/openai/api-version-lifecycle?tabs=go for information about API versions.

Image generation is useful for:
- Creating custom illustrations and artwork
- Generating visual content for applications
- Prototyping design concepts
- Producing visual aids for documentation

Example_deepseekReasoningBasic demonstrates basic chat completions using the DeepSeek-R1 reasoning model.

This example shows how to:
- Create an Azure OpenAI client with token credentials
- Send a simple prompt to the DeepSeek-R1 reasoning model
- Configure parameters for optimal reasoning performance
- Process the response with step-by-step reasoning

The example uses environment variables for configuration:
- AOAI_DEEPSEEK_ENDPOINT: Your Azure OpenAI endpoint URL with DeepSeek model access
- AOAI_DEEPSEEK_MODEL: The DeepSeek model deployment name (e.g., "deepseek-r1")

DeepSeek-R1 is a reasoning model that provides detailed step-by-step analysis for complex problems, making it ideal for mathematical reasoning, logical deduction, and analytical problem solving.

Example_deepseekReasoningMultiTurn demonstrates multi-turn conversations with DeepSeek-R1.
This example shows how to:
- Maintain conversation context across multiple turns
- Build upon previous reasoning steps
- Ask follow-up questions that reference earlier parts of the conversation
- Handle complex problem-solving scenarios that require multiple interactions
- Manage conversation history in a chat application

When using the model for a chat application, you need to manage the history of that conversation and send the latest messages to the model.

The example uses environment variables for configuration:
- AOAI_DEEPSEEK_ENDPOINT: Your Azure OpenAI endpoint URL with DeepSeek model access
- AOAI_DEEPSEEK_MODEL: The DeepSeek model deployment name (e.g., "deepseek-r1")

Example_deepseekReasoningStreaming demonstrates streaming responses with DeepSeek-R1.

This example shows how to:
- Create a streaming chat completion request
- Process streaming responses as they arrive
- Handle the reasoning process in real time
- Provide a better user experience with immediate feedback

The example uses environment variables for configuration:
- AOAI_DEEPSEEK_ENDPOINT: Your Azure OpenAI endpoint URL with DeepSeek model access
- AOAI_DEEPSEEK_MODEL: The DeepSeek model deployment name (e.g., "deepseek-r1")
- AZURE_OPENAI_API_VERSION: The Azure OpenAI service API version to use. See https://learn.microsoft.com/azure/ai-foundry/openai/api-version-lifecycle?tabs=go for information about API versions.

This example uses a simple math problem to demonstrate DeepSeek-R1's step-by-step reasoning capabilities in a streaming context.

Example_embeddings demonstrates how to generate text embeddings using Azure OpenAI's embedding models.
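The history management that Example_deepseekReasoningMultiTurn describes comes down to one rule: chat completions are stateless between calls, so the client must resend the accumulated turns with every request. A minimal sketch of that bookkeeping, using plain local types (the SDK has richer, role-specific message types):

```go
package main

import "fmt"

// message is a minimal stand-in for a chat message.
type message struct {
	Role    string // "system", "user", or "assistant"
	Content string
}

// conversation keeps the running history that each request must resend.
type conversation struct {
	history []message
}

func (c *conversation) AddUser(content string) {
	c.history = append(c.history, message{Role: "user", Content: content})
}

func (c *conversation) AddAssistant(content string) {
	c.history = append(c.history, message{Role: "assistant", Content: content})
}

func main() {
	conv := &conversation{history: []message{
		{Role: "system", Content: "You are a careful step-by-step reasoner."},
	}}

	// Turn 1: send the full history, then record the model's reply.
	conv.AddUser("What is 15% of 80?")
	conv.AddAssistant("15% of 80 is 12.") // in real use, this comes from the service

	// Turn 2: the follow-up can reference the earlier answer because the
	// prior turns ride along in the next request.
	conv.AddUser("Now double that.")

	fmt.Println(len(conv.history)) // 4
}
```

Long conversations eventually exceed the model's context window, so production code typically also truncates or summarizes the oldest turns before resending.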
This example shows how to:
- Create an Azure OpenAI client with token credentials
- Convert text input into numerical vector representations
- Process the embedding vectors from the response
- Handle embedding results for semantic analysis

The example uses environment variables for configuration:
- AOAI_EMBEDDINGS_MODEL: The deployment name of your embedding model (e.g., text-embedding-ada-002)
- AOAI_EMBEDDINGS_ENDPOINT: Your Azure OpenAI endpoint URL (ex: "https://yourservice.openai.azure.com")

Text embeddings are useful for:
- Semantic search and information retrieval
- Text classification and clustering
- Content recommendation systems
- Document similarity analysis
- Natural language understanding tasks

Example_generateSpeechFromText demonstrates how to convert text to speech using Azure OpenAI's text-to-speech service.

This example shows how to:
- Create an Azure OpenAI client with token credentials
- Send text to be converted to speech
- Specify voice and audio format parameters
- Handle the audio response stream

The example uses environment variables for configuration:
- AOAI_TTS_ENDPOINT: Your Azure OpenAI endpoint URL
- AOAI_TTS_MODEL: The deployment name of your text-to-speech model
- AZURE_OPENAI_API_VERSION: The Azure OpenAI service API version to use. See https://learn.microsoft.com/azure/ai-foundry/openai/api-version-lifecycle?tabs=go for information about API versions.

Text-to-speech conversion is valuable for creating audiobooks, virtual assistants, accessibility tools, and adding voice interfaces to applications.

Example_getChatCompletions demonstrates how to use Azure OpenAI's Chat Completions API.
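The semantic-search and similarity uses that Example_embeddings lists all rest on comparing the returned vectors, most commonly with cosine similarity. A self-contained sketch (the arithmetic is independent of the SDK; real embeddings just have many more dimensions than the toy vectors below):

```go
package main

import (
	"fmt"
	"math"
)

// CosineSimilarity compares two embedding vectors; values near 1 mean the
// texts are semantically close, values near 0 mean they are unrelated.
// Panics if the vectors have different lengths.
func CosineSimilarity(a, b []float32) float64 {
	if len(a) != len(b) {
		panic("embedding vectors must have equal length")
	}
	var dot, na, nb float64
	for i := range a {
		dot += float64(a[i]) * float64(b[i])
		na += float64(a[i]) * float64(a[i])
		nb += float64(b[i]) * float64(b[i])
	}
	return dot / (math.Sqrt(na) * math.Sqrt(nb))
}

func main() {
	// Toy 3-dimensional vectors; text-embedding-ada-002 returns 1536 dimensions,
	// but the computation is identical.
	fmt.Println(CosineSimilarity([]float32{1, 0, 0}, []float32{1, 0, 0})) // 1
	fmt.Println(CosineSimilarity([]float32{1, 0, 0}, []float32{0, 1, 0})) // 0
}
```

For semantic search, you would embed the query once, compute this score against each stored document vector, and return the highest-scoring documents.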
This example shows how to:
- Create an Azure OpenAI client with token credentials
- Structure a multi-turn conversation with different message roles
- Send a chat completion request and handle the response
- Process multiple response choices and finish reasons

The example uses environment variables for configuration:
- AOAI_CHAT_COMPLETIONS_MODEL: The deployment name of your chat model
- AOAI_CHAT_COMPLETIONS_ENDPOINT: Your Azure OpenAI endpoint URL (ex: "https://yourservice.openai.azure.com")

Chat completions are useful for:
- Building conversational AI interfaces
- Creating chatbots with personality
- Maintaining context across multiple interactions
- Generating human-like text responses

Example_chatCompletionsFunctions demonstrates how to use Azure OpenAI's function calling feature.

This example shows how to:
- Create an Azure OpenAI client with token credentials
- Define a function schema for weather information
- Request function execution through the chat API
- Parse and handle function call responses

The example uses environment variables for configuration:
- AOAI_CHAT_COMPLETIONS_MODEL: The deployment name of your chat model
- AOAI_CHAT_COMPLETIONS_ENDPOINT: Your Azure OpenAI endpoint URL (ex: "https://yourservice.openai.azure.com")

Tool calling is useful for:
- Integrating external APIs and services
- Structured data extraction from natural language
- Task automation and workflow integration
- Building context-aware applications

Example_responsesApiChaining demonstrates how to chain multiple responses together in a conversation flow using the Azure OpenAI Responses API.
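When the model answers with a function call, as in Example_chatCompletionsFunctions, the arguments arrive as a JSON string that must be decoded in a second step before the function can run. A sketch of that parsing, assuming a hypothetical weather-function schema (the field names below are illustrative, defined by your own schema, not by the SDK):

```go
package main

import (
	"encoding/json"
	"fmt"
)

// weatherArgs matches a hypothetical weather-function schema; your schema
// definition determines the actual fields the model will fill in.
type weatherArgs struct {
	Location string `json:"location"`
	Unit     string `json:"unit"`
}

// parseToolArguments decodes the JSON argument string carried by a tool
// call. The model returns arguments as a string of JSON, so this decode
// step is always needed after reading the response.
func parseToolArguments(raw string) (weatherArgs, error) {
	var args weatherArgs
	err := json.Unmarshal([]byte(raw), &args)
	return args, err
}

func main() {
	// The raw string below is shaped like what the model would return.
	args, err := parseToolArguments(`{"location":"Seattle, WA","unit":"celsius"}`)
	if err != nil {
		panic(err)
	}
	fmt.Println(args.Location, args.Unit) // Seattle, WA celsius
}
```

After running the real function with these arguments, you would append its result to the conversation as a tool message and call the API again so the model can produce its final answer.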
This example shows how to:
- Create an initial response
- Chain a follow-up response using the previous response ID
- Process both responses
- Delete both responses to clean up

The example uses environment variables for configuration:
- AZURE_OPENAI_ENDPOINT: Your Azure OpenAI endpoint URL (ex: "https://yourservice.openai.azure.com")
- AZURE_OPENAI_MODEL: The deployment name of your model (e.g., "gpt-4o")

Example_responsesApiFunctionCalling demonstrates how to use the Azure OpenAI Responses API with function calling.

This example shows how to:
- Create an Azure OpenAI client with token credentials
- Define tools (functions) that the model can call
- Process the response containing function calls
- Provide function outputs back to the model
- Delete the responses to clean up

The example uses environment variables for configuration:
- AZURE_OPENAI_ENDPOINT: Your Azure OpenAI endpoint URL (ex: "https://yourservice.openai.azure.com")
- AZURE_OPENAI_MODEL: The deployment name of your model (e.g., "gpt-4o")

Example_responsesApiImageInput demonstrates how to use the Azure OpenAI Responses API with image input.

This example shows how to:
- Create an Azure OpenAI client with token credentials
- Fetch an image from a URL and encode it to Base64
- Send a query with both text and a Base64-encoded image
- Process the response

The example uses environment variables for configuration:
- AZURE_OPENAI_ENDPOINT: Your Azure OpenAI endpoint URL (ex: "https://yourservice.openai.azure.com")
- AZURE_OPENAI_MODEL: The deployment name of your model (e.g., "gpt-4o")

Note: This example fetches and encodes an image from a URL because there is a known issue with URL-based image input; currently only Base64-encoded images are supported.

Example_responsesApiReasoning demonstrates how to use the Azure OpenAI Responses API with reasoning.
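The fetch-and-encode workaround that Example_responsesApiImageInput describes boils down to wrapping the image bytes in a Base64 data URL. A self-contained sketch (encodeImageDataURL is an illustrative helper, and a byte literal stands in for the http.Get the real example performs):

```go
package main

import (
	"encoding/base64"
	"fmt"
)

// encodeImageDataURL wraps raw image bytes as a data URL, the form the
// example sends in place of a plain image URL. The MIME type is supplied
// by the caller based on the fetched image.
func encodeImageDataURL(mimeType string, data []byte) string {
	return fmt.Sprintf("data:%s;base64,%s",
		mimeType, base64.StdEncoding.EncodeToString(data))
}

func main() {
	// In the real example these bytes come from an http.Get of the image URL;
	// a literal stands in here so the program is self-contained.
	fake := []byte{0xFF, 0xD8, 0xFF} // the first bytes of a JPEG header
	fmt.Println(encodeImageDataURL("image/jpeg", fake)) // data:image/jpeg;base64,/9j/
}
```

Remember that Base64 inflates the payload by roughly a third, so large images are better resized or compressed before encoding.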
This example shows how to:
- Create an Azure OpenAI client with token credentials
- Send a complex problem-solving request that requires reasoning
- Enable the reasoning parameter to get a step-by-step thought process
- Process the response

The example uses environment variables for configuration:
- AZURE_OPENAI_ENDPOINT: Your Azure OpenAI endpoint URL (ex: "https://yourservice.openai.azure.com")
- AZURE_OPENAI_MODEL: The deployment name of your model (e.g., "gpt-4o")

Example_responsesApiStreaming demonstrates how to use streaming with the Azure OpenAI Responses API.

This example shows how to:
- Create a streaming response
- Process the stream events as they arrive
- Clean up by deleting the response

The example uses environment variables for configuration:
- AZURE_OPENAI_ENDPOINT: Your Azure OpenAI endpoint URL (ex: "https://yourservice.openai.azure.com")
- AZURE_OPENAI_MODEL: The deployment name of your model (e.g., "gpt-4o")

Example_responsesApiTextGeneration demonstrates how to use the Azure OpenAI Responses API for text generation.

This example shows how to:
- Create an Azure OpenAI client with token credentials
- Send a simple text prompt
- Process the response
- Delete the response to clean up

The example uses environment variables for configuration:
- AZURE_OPENAI_ENDPOINT: Your Azure OpenAI endpoint URL (ex: "https://yourservice.openai.azure.com")
- AZURE_OPENAI_MODEL: The deployment name of your model (e.g., "gpt-4o")

The Responses API is a new stateful API from Azure OpenAI that brings together capabilities from the chat completions and assistants APIs in a unified experience.

Example_streamCompletions demonstrates streaming responses from the legacy Completions API.
This example shows how to:
- Create an Azure OpenAI client with token credentials
- Set up a streaming completion request
- Process incremental text chunks
- Handle streaming errors and completion

The example uses environment variables for configuration:
- AOAI_COMPLETIONS_MODEL: The deployment name of your completions model
- AOAI_COMPLETIONS_ENDPOINT: Your Azure OpenAI endpoint URL
- AZURE_OPENAI_API_VERSION: The Azure OpenAI service API version to use. See https://learn.microsoft.com/azure/ai-foundry/openai/api-version-lifecycle?tabs=go for information about API versions.

Streaming completions are useful for:
- Real-time text generation display
- Reduced latency in responses
- Interactive text generation
- Long-form content creation

Example_structuredOutputsResponseFormat demonstrates using JSON response formatting.

This example shows how to:
- Create an Azure OpenAI client with token credentials
- Define a JSON schema for response formatting
- Request structured mathematical solutions
- Parse and process formatted JSON responses

The example uses environment variables for configuration:
- AOAI_CHAT_COMPLETIONS_STRUCTURED_OUTPUTS_MODEL: The deployment name of your model
- AOAI_CHAT_COMPLETIONS_STRUCTURED_OUTPUTS_ENDPOINT: Your Azure OpenAI endpoint URL (ex: "https://yourservice.openai.azure.com")

Response formatting is useful for:
- Mathematical problem solving
- Step-by-step explanations
- Structured data generation
- Consistent output formatting

Example_usingAzureContentFiltering demonstrates how to use Azure OpenAI's content filtering capabilities.
This example shows how to:
- Create an Azure OpenAI client with token credentials
- Make a chat completion request
- Extract and handle content filter results
- Process content filter errors
- Access Azure-specific content filter information from responses

The example uses environment variables for configuration:
- AOAI_ENDPOINT: Your Azure OpenAI endpoint URL
- AOAI_MODEL: The deployment name of your model
- AZURE_OPENAI_API_VERSION: The Azure OpenAI service API version to use. See https://learn.microsoft.com/azure/ai-foundry/openai/api-version-lifecycle?tabs=go for information about API versions.

Content filtering is essential for:
- Maintaining content safety and compliance
- Monitoring content severity levels
- Implementing content moderation policies
- Handling filtered content gracefully

Example_usingAzureOnYourData demonstrates how to use Azure OpenAI's Azure-On-Your-Data feature.

This example shows how to:
- Create an Azure OpenAI client with token credentials
- Configure an Azure Cognitive Search data source
- Send a chat completion request with data source integration
- Process Azure-specific response data including citations and content filtering results

The example uses environment variables for configuration:
- AOAI_OYD_ENDPOINT: Your Azure OpenAI endpoint URL
- AOAI_OYD_MODEL: The deployment name of your model
- COGNITIVE_SEARCH_API_ENDPOINT: Your Azure Cognitive Search endpoint
- COGNITIVE_SEARCH_API_INDEX: The name of your search index
- AZURE_OPENAI_API_VERSION: The Azure OpenAI service API version to use. See https://learn.microsoft.com/azure/ai-foundry/openai/api-version-lifecycle?tabs=go for information about API versions.

Azure-On-Your-Data enables you to enhance chat completions with information from your own data sources, allowing for more contextual and accurate responses based on your content.

Example_usingAzurePromptFilteringWithStreaming demonstrates how to use Azure OpenAI's prompt filtering with streaming responses.
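The moderation-policy step that Example_usingAzureContentFiltering describes is ultimately an application decision: given per-category severity results from the service, choose what to block. A sketch of one such policy using local stand-in types (filterResult is illustrative, not the SDK's type, and the medium-severity threshold is a policy choice, not a rule from the service):

```go
package main

import "fmt"

// filterResult is a local stand-in for the per-category results Azure
// returns; severities range over "safe", "low", "medium", and "high".
type filterResult struct {
	Category string
	Severity string
	Filtered bool
}

// shouldBlock applies a simple moderation policy: anything the service
// already filtered, or anything at or above medium severity, is rejected.
func shouldBlock(results []filterResult) bool {
	rank := map[string]int{"safe": 0, "low": 1, "medium": 2, "high": 3}
	for _, r := range results {
		if r.Filtered || rank[r.Severity] >= rank["medium"] {
			return true
		}
	}
	return false
}

func main() {
	results := []filterResult{
		{Category: "hate", Severity: "safe", Filtered: false},
		{Category: "violence", Severity: "low", Filtered: false},
	}
	fmt.Println(shouldBlock(results)) // false
}
```

Stricter or looser thresholds, and per-category thresholds, are equally valid; the point is that the decision logic lives in your code, driven by the annotations the service attaches to each response.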
This example shows how to:
- Create an Azure OpenAI client with token credentials
- Set up a streaming chat completion request
- Handle streaming responses with Azure extensions
- Monitor prompt filter results in real time
- Accumulate and process streamed content

The example uses environment variables for configuration:
- AOAI_ENDPOINT: Your Azure OpenAI endpoint URL
- AOAI_MODEL: The deployment name of your model
- AZURE_OPENAI_API_VERSION: The Azure OpenAI service API version to use. See https://learn.microsoft.com/azure/ai-foundry/openai/api-version-lifecycle?tabs=go for information about API versions.

Streaming with prompt filtering is useful for:
- Real-time content moderation
- Progressive content delivery
- Monitoring content safety during generation
- Building responsive applications with content safety checks

Example_usingDefaultAzureCredential demonstrates how to authenticate with Azure OpenAI using Azure Active Directory credentials.

This example shows how to:
- Create an Azure OpenAI client using DefaultAzureCredential
- Configure authentication options with a tenant ID
- Make a simple request to test the authentication

The example uses environment variables for configuration:
- AOAI_ENDPOINT: Your Azure OpenAI endpoint URL
- AOAI_MODEL: The deployment name of your model
- AZURE_TENANT_ID: Your Azure tenant ID
- AZURE_CLIENT_ID: (Optional) Your Azure client ID
- AZURE_CLIENT_SECRET: (Optional) Your Azure client secret

DefaultAzureCredential supports multiple authentication methods, including:
- Environment variables
- Managed Identity
- Azure CLI credentials

Example_usingEnhancements demonstrates how to use Azure OpenAI's enhanced features.
This example shows how to:
- Create an Azure OpenAI client with token credentials
- Configure chat completion enhancements like grounding
- Process Azure-specific response data including content filtering
- Handle message context and citations

The example uses environment variables for configuration:
- AOAI_OYD_ENDPOINT: Your Azure OpenAI endpoint URL
- AOAI_OYD_MODEL: The deployment name of your model
- AZURE_OPENAI_API_VERSION: The Azure OpenAI service API version to use. See https://learn.microsoft.com/azure/ai-foundry/openai/api-version-lifecycle?tabs=go for information about API versions.

Azure OpenAI enhancements provide additional capabilities beyond standard OpenAI features, such as improved grounding and content filtering for more accurate and controlled responses.

Example_usingManagedIdentityCredential demonstrates how to authenticate with Azure OpenAI using Managed Identity.

This example shows how to:
- Create an Azure OpenAI client using ManagedIdentityCredential
- Support both system-assigned and user-assigned managed identities
- Make authenticated requests without storing credentials

The example uses environment variables for configuration:
- AOAI_ENDPOINT: Your Azure OpenAI endpoint URL
- AOAI_MODEL: The deployment name of your model

Managed Identity is ideal for:
- Azure services (VMs, App Service, Azure Functions, etc.)
- Azure DevOps pipelines with the Azure DevOps service connection
- CI/CD scenarios where you want to avoid storing secrets
- Production workloads requiring secure, credential-free authentication

Example_vision demonstrates how to use Azure OpenAI's Vision capabilities for image analysis.
This example shows how to:
- Create an Azure OpenAI client with token credentials
- Send an image URL to the model for analysis
- Configure the chat completion request with image content
- Process the model's description of the image

The example uses environment variables for configuration:
- AOAI_VISION_MODEL: The deployment name of your vision-capable model (e.g., gpt-4-vision)
- AOAI_VISION_ENDPOINT: Your Azure OpenAI endpoint URL
- AZURE_OPENAI_API_VERSION: The Azure OpenAI service API version to use. See https://learn.microsoft.com/azure/ai-foundry/openai/api-version-lifecycle?tabs=go for information about API versions.

Vision capabilities are useful for:
- Image description and analysis
- Visual question answering
- Content moderation
- Accessibility features
- Image-based search and retrieval
Registry
- Source
- Documentation
- JSON
- codemeta.json
purl: pkg:golang/github.com/%21azure/azure-sdk-for-go/sdk/ai/azopenai
Keywords: azure, azure-sdk, go, golang, hacktoberfest, microsoft, rest, sdk
License: MIT
Latest release: 7 days ago
First release: over 2 years ago
Namespace: github.com/Azure/azure-sdk-for-go/sdk/ai
Dependent packages: 28
Dependent repositories: 2
Stars: 1,739 on GitHub
Forks: 916 on GitHub
Docker dependents: 6
Docker downloads: 78,492,794
Total Commits: 7559
Committers: 246
Average commits per author: 30.728
Development Distribution Score (DDS): 0.793
More commit stats: commits.ecosyste.ms
See more repository details: repos.ecosyste.ms
Last synced: about 4 hours ago