feat: add structured LLM output support (response_format) #169
Ridwannurudeen wants to merge 1 commit into OpenGradient:main
Conversation
Add a `response_format` parameter to the `chat()` and `completion()` methods, enabling JSON schema enforcement for predictable, machine-readable LLM output. Follows the OpenAI structured outputs specification.

Changes:
- Add a `ResponseFormat` dataclass to `types.py`
- Thread `response_format` through all LLM methods (public + internal)
- Add `--response-format` and `--response-format-file` CLI options
- Export `ResponseFormat` from the `opengradient` package
- Add an `llm_structured_output.py` example with a sentiment analysis demo

Closes OpenGradient#155
Hi team! Just following up on this PR. It adds […]
Thanks for contributing this! We'll also need backend changes for this to work; currently working on that.
Thanks for the update @adambalogh! I see you've got a structured output implementation on […]. Given how much […], one thing my PR has that yours doesn't yet is CLI support […]. Let me know if there's anything else I can help with on the backend side or elsewhere.
Summary
Adds a `response_format` parameter to the `chat()` and `completion()` methods, enabling JSON schema enforcement for predictable, machine-readable LLM output. Follows the OpenAI structured outputs specification.

- `ResponseFormat` dataclass added to `types.py` with `to_dict()` serialization
- `response_format` threaded through all 5 LLM methods (2 public + 3 internal)
- `--response-format` and `--response-format-file` CLI options added for both the `chat` and `completion` commands
- `ResponseFormat` exported from the `opengradient` package
- `examples/llm_structured_output.py` added, demonstrating sentiment analysis with schema enforcement

Usage
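A minimal sketch of what the `ResponseFormat` dataclass and its `to_dict()` serialization described above might look like. The field names beyond `type` and `json_schema`, and the exact schema payload shape, are assumptions modeled on the OpenAI structured-outputs format, not the PR's actual implementation:

```python
from dataclasses import dataclass
from typing import Any, Dict, Optional


@dataclass
class ResponseFormat:
    """Mirrors the OpenAI structured-outputs response_format object (sketch)."""
    type: str  # "json_object" or "json_schema"
    json_schema: Optional[Dict[str, Any]] = None

    def to_dict(self) -> Dict[str, Any]:
        # Omit the json_schema key entirely when not provided,
        # matching the behavior called out in the test plan.
        d: Dict[str, Any] = {"type": self.type}
        if self.json_schema is not None:
            d["json_schema"] = self.json_schema
        return d


# Example: a sentiment-analysis schema, as in the example script
sentiment_format = ResponseFormat(
    type="json_schema",
    json_schema={
        "name": "sentiment",
        "strict": True,
        "schema": {
            "type": "object",
            "properties": {
                "sentiment": {
                    "type": "string",
                    "enum": ["positive", "negative", "neutral"],
                },
                "confidence": {"type": "number"},
            },
            "required": ["sentiment", "confidence"],
        },
    },
)
print(sentiment_format.to_dict()["type"])  # json_schema
```

In the SDK this dict would then be passed through to the chat/completion request payload, e.g. `client.chat(..., response_format=sentiment_format)`.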
CLI
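The `--response-format` and `--response-format-file` flags named in the summary could be exercised roughly as follows. The CLI entry point, subcommand shapes, and model placeholder are assumptions; only the flag names come from the PR, and the live calls are commented out since they need the backend changes mentioned above:

```shell
# Write a JSON-schema response format to a file for --response-format-file
cat > sentiment_format.json <<'EOF'
{
  "type": "json_schema",
  "json_schema": {
    "name": "sentiment",
    "schema": {
      "type": "object",
      "properties": {"sentiment": {"type": "string"}},
      "required": ["sentiment"]
    }
  }
}
EOF

# Sanity-check the file is valid JSON before handing it to the CLI
python3 -m json.tool sentiment_format.json > /dev/null && echo "schema ok"

# Hypothetical invocations (assumed CLI shape; requires backend support):
# opengradient chat --model <model> --prompt "I love this!" \
#     --response-format-file sentiment_format.json
# opengradient completion --model <model> --prompt "..." \
#     --response-format '{"type": "json_object"}'
```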
Test plan
- `ResponseFormat.to_dict()` serializes correctly (`json_object` and `json_schema`)
- `json_schema` key is omitted when not provided
- `completion()` forwards `response_format` to the internal method
- `chat()` non-streaming forwards `response_format` to the internal method
- `chat()` streaming forwards `response_format` to the internal method

Closes #155
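The checks in the test plan could be sketched like this. The `ResponseFormat` definition and the `Client` class are stand-ins (the real SDK threads `response_format` through 5 methods); only the behaviors being asserted come from the test plan:

```python
from dataclasses import dataclass
from typing import Any, Dict, Optional
from unittest.mock import MagicMock


@dataclass
class ResponseFormat:
    type: str
    json_schema: Optional[Dict[str, Any]] = None

    def to_dict(self) -> Dict[str, Any]:
        d: Dict[str, Any] = {"type": self.type}
        if self.json_schema is not None:
            d["json_schema"] = self.json_schema
        return d


# 1. to_dict() serializes both variants correctly
assert ResponseFormat(type="json_object").to_dict() == {"type": "json_object"}
assert ResponseFormat(type="json_schema", json_schema={"name": "x"}).to_dict() == {
    "type": "json_schema",
    "json_schema": {"name": "x"},
}

# 2. json_schema key is omitted when not provided
assert "json_schema" not in ResponseFormat(type="json_object").to_dict()


# 3. Public methods forward response_format to the internal method
#    (stand-in client; the real SDK does this for completion() and
#    both streaming and non-streaming chat())
class Client:
    def __init__(self) -> None:
        self._llm_completion = MagicMock()

    def completion(self, prompt: str,
                   response_format: Optional[ResponseFormat] = None):
        return self._llm_completion(prompt, response_format=response_format)


client = Client()
fmt = ResponseFormat(type="json_object")
client.completion("hello", response_format=fmt)
client._llm_completion.assert_called_once_with("hello", response_format=fmt)
print("all forwarding checks passed")
```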