★ 6/10 · Ai · 2026-04-24

llm 0.31

Summary

The release of llm version 0.31 introduces support for OpenAI's GPT-5.5 model and implements new configuration parameters for controlling output characteristics. These updates provide developers with more granular control over text verbosity and image processing for the GPT-5 model series.

Key Points

  • Added support for the gpt-5.5 OpenAI model.
  • Introduced a new -o verbosity option for GPT-5+ models, supporting low, medium, and high levels.
  • Added an -o image_detail option for OpenAI models with values low, high, and auto.
  • Enabled the original value for the image_detail parameter specifically for GPT-5.4 and GPT-5.5 models.
  • Updated the registration process so that models listed in extra-openai-models.yaml are now registered as asynchronous.

Technical Details

The update introduces specific parameter controls for the GPT-5 architecture via the CLI. Users can now manage the level of detail in text responses using the -o verbosity <level> flag, which accepts low, medium, or high. For multimodal tasks involving image attachments, the -o image_detail <level> flag controls how much detail the model applies when processing an attached image, which in turn affects token consumption. While low, high, and auto are available across OpenAI models, the original setting is a new capability exclusive to the GPT-5.4 and GPT-5.5 models, allowing unscaled image inputs to be passed through without downsampling.
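The two options described above can be combined with the usual `llm` invocation style. A minimal sketch, assuming an OpenAI API key is already configured; the prompts and the attached file name are illustrative:

```shell
# Request a terse answer: verbosity accepts low, medium, or high.
llm -m gpt-5.5 -o verbosity low "Explain HTTP caching"

# Attach an image and request unscaled processing; the "original"
# value is only accepted by the GPT-5.4 and GPT-5.5 models.
llm -m gpt-5.5 -o image_detail original -a diagram.png "Describe this diagram"
```

Because `image_detail` trades resolution against token cost, `low` is the natural choice for quick classification tasks and `original` for fine-grained reading of dense diagrams or small text.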

Additionally, the underlying registration logic for models defined in extra-openai-models.yaml has been modified. These models are now registered as asynchronous, which may improve performance and concurrency when executing tasks involving these specific model configurations.
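For context, a hypothetical entry in that file might look as follows; the field names follow the documented `extra-openai-models.yaml` format, while the model alias, endpoint URL, and key name here are purely illustrative:

```yaml
- model_id: my-proxy-gpt
  model_name: gpt-5.5
  api_base: "https://example.com/v1"
  api_key_name: my-proxy-key
```

With this release, a model defined this way is registered with an asynchronous variant as well, so it can be used from async code paths in addition to the standard blocking CLI calls.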

Impact / Why It Matters

Developers can now optimize token consumption and response precision by fine-tuning verbosity and image resolution settings. The transition to asynchronous registration for extra models also provides better efficiency for workflows involving custom-defined OpenAI models.

ai cli llm