llm 0.31
Summary
The release of llm version 0.31 introduces support for OpenAI's GPT-5.5 model and implements new configuration parameters for controlling output characteristics. These updates provide developers with more granular control over text verbosity and image processing for the GPT-5 model series.
Key Points
- Added support for the `gpt-5.5` OpenAI model.
- Introduced a new `-o verbosity` option for GPT-5+ models, supporting `low`, `medium`, and `high` levels.
- Added an `-o image_detail` option for OpenAI models with values `low`, `high`, and `auto`.
- Enabled the `original` value for the `image_detail` parameter specifically for the GPT-5.4 and GPT-5.5 models.
- Updated the registration process so that models listed in `extra-openai-models.yaml` are now registered as asynchronous.
Technical Details
The update introduces specific parameter controls for the GPT-5 architecture via the CLI. Users can now manage the level of detail in text responses using the `-o verbosity <level>` flag, which accepts `low`, `medium`, or `high`. For multimodal tasks involving image attachments, the `-o image_detail <level>` flag allows for the adjustment of detail levels. While `low`, `high`, and `auto` are available across OpenAI models, the `original` setting is a new capability exclusive to the GPT-5.4 and GPT-5.5 models, allowing for the use of unscaled image inputs.
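The accepted values for these flags can be sketched as a small argument builder that validates options before invoking the CLI. Note that `build_llm_args` is a hypothetical helper written for illustration, not part of llm itself; only the flag names and accepted values come from the release notes above.

```python
# Hypothetical helper: assemble an argv list for an `llm` invocation,
# validating the new 0.31 option values before shelling out.
VERBOSITY_LEVELS = {"low", "medium", "high"}
IMAGE_DETAIL_LEVELS = {"low", "high", "auto", "original"}


def build_llm_args(model, prompt, verbosity=None, image_detail=None):
    """Return an argv list like ["llm", "-m", model, ..., prompt]."""
    args = ["llm", "-m", model]
    if verbosity is not None:
        if verbosity not in VERBOSITY_LEVELS:
            raise ValueError(f"verbosity must be one of {sorted(VERBOSITY_LEVELS)}")
        args += ["-o", "verbosity", verbosity]
    if image_detail is not None:
        # "original" is only accepted by GPT-5.4 / GPT-5.5 per the notes;
        # that model-specific check is left out of this sketch.
        if image_detail not in IMAGE_DETAIL_LEVELS:
            raise ValueError(f"image_detail must be one of {sorted(IMAGE_DETAIL_LEVELS)}")
        args += ["-o", "image_detail", image_detail]
    args.append(prompt)
    return args


print(build_llm_args("gpt-5.5", "Describe this image",
                     verbosity="low", image_detail="original"))
```

The resulting list could be passed to `subprocess.run(...)`; validating option values up front surfaces typos before any API call is made.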
Additionally, the underlying registration logic for models defined in `extra-openai-models.yaml` has been modified. These models are now registered as asynchronous, which may improve performance and concurrency when executing tasks that use them.
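For context, an `extra-openai-models.yaml` entry looks roughly like the following sketch. The `model_id`, `model_name`, and `api_base` keys follow llm's documented format for this file, but the specific values here are placeholders:

```yaml
# Example entry in extra-openai-models.yaml (placeholder values).
# As of 0.31, models defined here are also registered as async models.
- model_id: my-proxy-gpt-5.5
  model_name: gpt-5.5
  api_base: https://example.com/v1
```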
Impact / Why It Matters
Developers can now optimize token consumption and response precision by fine-tuning verbosity and image-detail settings. Asynchronous registration of extra models also improves efficiency for workflows built on custom-defined OpenAI models.