```python
response = chat_client.complete(
    model=CHAT_COMPLETION_CONFIG.model,
    messages=chat_messages,
    max_tokens=CHAT_COMPLETION_CONFIG.max_tokens,
```
This is currently removed due to a bug in the Python SDK: Azure/azure-sdk-for-python#44455
Pull request overview
This PR updates the Python ChatApp sample to use Azure AI Foundry (via azure-ai-inference) instead of the openai Azure OpenAI client, while keeping Azure App Configuration as the settings source.
Changes:
- Swap dependency from `openai` to `azure-ai-inference`.
- Rename and simplify configuration models from "Azure OpenAI" to "Azure AI Foundry".
- Update the app to create a `ChatCompletionsClient` using `DefaultAzureCredential` and invoke `complete(...)`.
Reviewed changes
Copilot reviewed 4 out of 4 changed files in this pull request and generated 5 comments.
| File | Description |
|---|---|
| examples/Python/ChatApp/requirements.txt | Replaces openai dependency with azure-ai-inference. |
| examples/Python/ChatApp/models.py | Renames service config model and reshapes chat completion configuration fields. |
| examples/Python/ChatApp/app.py | Switches client creation and chat completion call to Azure AI Foundry inference client. |
| examples/Python/ChatApp/README.md | Updates sample documentation to reference Azure AI Foundry and new configuration keys. |
```python
chat_messages = list(CHAT_COMPLETION_CONFIG.messages)
chat_messages.extend(chat_conversation)
```
`ChatCompletionConfiguration.messages` is optional (defaults to `None`), but `main()` unconditionally does `list(CHAT_COMPLETION_CONFIG.messages)`. If `messages` isn't configured in App Configuration, this will raise `TypeError: 'NoneType' object is not iterable`. Consider making `messages` required, or defaulting it to an empty list (e.g., via `default_factory=list`) and/or handling `None` here.
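A minimal sketch of the `None`-guard suggested above. The `ChatCompletionConfiguration` stand-in here is a bare dataclass for illustration only; the real model lives in the sample's `models.py`:

```python
from dataclasses import dataclass
from typing import Dict, List, Optional

# Hypothetical stand-in for the sample's configuration model.
@dataclass
class ChatCompletionConfiguration:
    messages: Optional[List[Dict[str, str]]] = None

# Simulate the failure mode: messages was never set in App Configuration.
CHAT_COMPLETION_CONFIG = ChatCompletionConfiguration()

# `or []` falls back to an empty list, so cloning never hits None.
chat_messages = list(CHAT_COMPLETION_CONFIG.messages or [])
chat_conversation = [{"role": "user", "content": "Hello"}]
chat_messages.extend(chat_conversation)
print(chat_messages)
```

With the guard in place, an unconfigured `messages` simply yields a conversation that starts empty instead of crashing on the first turn.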
```python
model: Optional[str] = None
max_completion_tokens: Optional[int] = None
reasoning_effort: Optional[str] = None
verbosity: Optional[str] = None
stream: Optional[bool] = None
messages: Optional[List[Dict[str, str]]] = None
```
`messages` is typed as `Optional[...] = None`, but the application logic expects a list of messages to always exist (it clones and extends it each turn). Consider changing this field to a non-optional list with `default_factory=list` to reflect actual usage and avoid `None` at runtime.
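A sketch of that suggestion, shown with a dataclass to stay dependency-free (if the sample's models are Pydantic, `Field(default_factory=list)` plays the same role). Field names mirror the snippet above:

```python
from dataclasses import dataclass, field
from typing import Dict, List, Optional

@dataclass
class ChatCompletionConfiguration:
    model: Optional[str] = None
    max_completion_tokens: Optional[int] = None
    reasoning_effort: Optional[str] = None
    verbosity: Optional[str] = None
    stream: Optional[bool] = None
    # Non-optional list with a per-instance default: callers can always
    # clone and extend it, with no None check needed.
    messages: List[Dict[str, str]] = field(default_factory=list)

config = ChatCompletionConfiguration()
chat_messages = list(config.messages)  # safe even when unconfigured
chat_messages.append({"role": "user", "content": "Hi"})
print(len(config.messages), len(chat_messages))
```

Note that `default_factory` creates a fresh list per instance; a plain `messages: List[...] = []` would be a shared mutable default, which dataclasses reject outright.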