GPT 3.5 Turbo

OpenAI is committed to continually improving its ChatGPT models and making those improvements easily available to developers. The gpt-3.5-turbo model name gives developers OpenAI’s recommended stable model by default, while still letting them pin a specific model version when they need to.

The Turbo model family is optimized for conversational chat input and output, but it performs comparably to the Davinci model family on traditional completion tasks. Any use case that works well in ChatGPT should also work well with the Turbo model family in the API.

OpenAI is releasing a snapshot, gpt-3.5-turbo-0301, which will be supported at least through June 1st, and plans to update Turbo to a new stable release in April 2023. The models page will be updated with all necessary information about the switchover.

An example API call looks as follows:
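A minimal sketch in Python, assuming the pre-1.0 openai package that was current when this article was written; the prompt text and API key are placeholders:

```python
import json

# Build the request body for a Chat Completions call.
# The model name comes from the article; the messages are illustrative.
payload = {
    "model": "gpt-3.5-turbo",
    "messages": [
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "Write a one-line greeting."},
    ],
}

# With the openai package installed and OPENAI_API_KEY set, the call is:
#   response = openai.ChatCompletion.create(**payload)
#   print(response["choices"][0]["message"]["content"])
print(json.dumps(payload, indent=2))
```

Note that unlike the older completions endpoint, the chat endpoint takes a list of role-tagged messages rather than a single prompt string.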

Because gpt-3.5-turbo performs at a level similar to text-davinci-003 but at 10% of the price per token, OpenAI recommends gpt-3.5-turbo for most use cases.
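To make the 10% figure concrete, here is a quick cost comparison. The per-token prices below are the published launch prices at the time of writing and are assumptions for illustration; check OpenAI’s pricing page for current values:

```python
# Prices per 1,000 tokens (USD) at the time of this article's publication.
PRICE_TURBO = 0.002    # gpt-3.5-turbo
PRICE_DAVINCI = 0.02   # text-davinci-003

def cost(tokens, price_per_1k):
    """Dollar cost of processing `tokens` tokens at a given per-1K price."""
    return tokens / 1000 * price_per_1k

tokens = 500_000  # example monthly usage
print(f"gpt-3.5-turbo:    ${cost(tokens, PRICE_TURBO):.2f}")
print(f"text-davinci-003: ${cost(tokens, PRICE_DAVINCI):.2f}")
```

At the same usage, the Turbo bill is one tenth of the Davinci bill, which is why Turbo is the recommended default.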

Also, read our article Chat GPT Prompt Guide.

Turbo Model Capabilities

With the OpenAI API and gpt-3.5-turbo, you can create your own applications that can do various tasks such as:

  • Writing emails or other pieces of text
  • Coding in Python
  • Answering questions about a set of documents
  • Developing conversational agents
  • Providing a natural language interface for software
  • Teaching different subjects
  • Translating languages
  • Simulating characters for video games
  • and many more.

Dedicated instances

In addition, OpenAI now offers dedicated instances for users who need deeper control over system performance and the specific model version they run.

By default, the API runs on shared compute infrastructure and users pay per request. OpenAI’s API runs on Azure; with dedicated instances, developers instead pay for an allocation of reserved compute infrastructure over a given time period to serve their requests.
