OpenAI unveils fine-tuning for GPT-3.5 Turbo, letting it keep up with GPT-4 on 'certain narrow tasks'

ChatGPT app on iPhone (Image credit: Kevin Okemwa)

What you need to know

  • OpenAI has announced the availability of fine-tuning for GPT-3.5 Turbo.
  • The feature enhances performance, allowing organizations to use shorter prompts.
  • Fine-tuned GPT-3.5 Turbo models can handle 4K tokens, double the capacity of previous fine-tuned models.
  • Support for fine-tuning in GPT-4 is expected later this fall.

OpenAI recently announced an update that lets developers fine-tune GPT-3.5 Turbo. The company explained that "this update gives developers the ability to customize models that perform better for their use cases and run these custom models at scale."
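For developers, the flow is short: upload a training file, then start a job against it. Here is a minimal sketch using the openai Python package as it existed at the time (pre-1.0 style); the API key and filename are placeholders:

```python
import openai  # pre-1.0 SDK, current when this feature shipped

openai.api_key = "sk-..."  # placeholder API key

# Upload a JSONL file of training examples (placeholder filename).
training_file = openai.File.create(
    file=open("training_data.jsonl", "rb"),
    purpose="fine-tune",
)

# Start a fine-tuning job on GPT-3.5 Turbo.
job = openai.FineTuningJob.create(
    training_file=training_file.id,
    model="gpt-3.5-turbo-0613",
)
print(job.id)  # poll the job; once finished, it yields a custom model name
```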

With this new capability in place, OpenAI says a fine-tuned GPT-3.5 Turbo will be able to match, or even surpass, GPT-4 "on certain narrow tasks." The company also indicated that fine-tuning support for GPT-4 should ship later this fall.

The company also noted that the fine-tuning API keeps customers' data safe, saying the information will be used strictly for fine-tuning. OpenAI has highlighted several fine-tuning use cases (a sketch of the training-data format follows the list):

  • Improved steerability: Fine-tuning allows businesses to make the model follow instructions better, such as making outputs terse or always responding in a given language. For instance, developers can use fine-tuning to ensure that the model always responds in German when prompted to use that language.
  • Reliable output formatting: Fine-tuning improves the model's ability to consistently format responses—a crucial aspect for applications demanding a specific response format, such as code completion or composing API calls. A developer can use fine-tuning to more reliably convert user prompts into high-quality JSON snippets that can be used with their own systems.
  • Custom tone: Fine-tuning is a great way to hone the qualitative feel of the model's output, such as its tone, so it better fits a brand's voice. A business with a recognizable brand voice can use fine-tuning to make the model more consistent with that tone.
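For context, training examples for these use cases are JSONL records in the same "messages" format the Chat Completions API uses. A minimal sketch, with invented content echoing the German-language example above:

```python
import json

# One chat-format training example per line of a .jsonl file.
# The content below is invented purely for illustration.
example = {
    "messages": [
        {"role": "system", "content": "You always reply in German."},
        {"role": "user", "content": "What is the capital of Kenya?"},
        {"role": "assistant", "content": "Die Hauptstadt von Kenia ist Nairobi."},
    ]
}

with open("training_data.jsonl", "a") as f:
    f.write(json.dumps(example) + "\n")
```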

Besides improving output quality, the new feature lets organizations use shorter prompts while retaining the same performance. According to OpenAI, "Early testers have reduced prompt size by up to 90% by fine-tuning instructions into the model itself, speeding up each API call and cutting costs."
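In other words, instructions that once had to ride along with every request can be trained into the model, so the call itself shrinks. A rough sketch (the ft:... model ID is a placeholder for whatever name the fine-tuning job returns):

```python
import openai

# Before fine-tuning, each call would carry a long block of instructions.
# Afterwards, those instructions live in the model and the prompt shrinks.
response = openai.ChatCompletion.create(
    model="ft:gpt-3.5-turbo-0613:my-org::abc123",  # placeholder fine-tuned model ID
    messages=[{"role": "user", "content": "Summarize this ticket: ..."}],
)
print(response.choices[0].message.content)
```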

Additionally, fine-tuned GPT-3.5 Turbo models can handle 4K tokens, double the context of previous fine-tuned models. OpenAI recommends combining fine-tuning with techniques like prompt engineering, information retrieval, or function calling for best results.
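One way to apply that advice: let retrieval supply the facts a model cannot have memorized, while fine-tuning handles format and tone. A hand-wavy sketch, where retrieve_docs is a hypothetical helper standing in for a real retrieval step:

```python
import openai

def retrieve_docs(query: str) -> str:
    """Hypothetical retrieval step, e.g. a vector-store lookup."""
    return "Refund policy excerpt: digital goods are refundable within 14 days."

question = "What does our refund policy say about digital goods?"
context = retrieve_docs(question)

# Retrieval supplies the facts; the fine-tuned model supplies tone and format.
response = openai.ChatCompletion.create(
    model="ft:gpt-3.5-turbo-0613:my-org::abc123",  # placeholder model ID
    messages=[{"role": "user", "content": f"{context}\n\n{question}"}],
)
print(response.choices[0].message.content)
```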

The cost for fine-tuning is as follows:

  • Training: $0.008 / 1K tokens
  • Usage input: $0.012 / 1K tokens
  • Usage output: $0.016 / 1K tokens
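At those rates, a back-of-the-envelope estimate is straightforward. The figures below (a 100,000-token training file, three epochs) are assumptions chosen purely for illustration:

```python
# Assumed for illustration: 100,000-token training file, 3 training epochs.
training_tokens = 100_000
epochs = 3

training_cost = training_tokens / 1_000 * 0.008 * epochs  # = $2.40
# Usage is then billed per call: $0.012/1K input plus $0.016/1K output tokens.
print(f"Estimated one-time training cost: ${training_cost:.2f}")
```
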
Kevin Okemwa
Contributor

Kevin Okemwa is a seasoned tech journalist based in Nairobi, Kenya with lots of experience covering the latest trends and developments in the industry. With a passion for innovation and a keen eye for detail, he has written for leading publications such as OnMSFT, MakeUseOf, and Windows Report, providing insightful analysis and breaking news on everything revolving around the Microsoft ecosystem. While AFK and not busy following the ever-emerging trends in tech, you can find him exploring the world or listening to music.