Microsoft demonstrates how it's training small language models to reason better than ChatGPT and Copilot

Microsoft Logo Building Redmond
(Image credit: Windows Central)

What you need to know

  • Microsoft recently published a new blog post highlighting its efforts toward teaching small language models how to reason.
  • It unveiled Orca 2, a small language model that demonstrates strong reasoning abilities by imitating the step-by-step reasoning traces of more capable LLMs, such as the models behind ChatGPT and Copilot.
  • According to benchmarks, Orca 2 outperforms similarly sized models, and holds its own against much larger ones, when put to the test on complex tasks.
  • Microsoft intends to keep using large language models to train smaller ones, ultimately expanding the smaller models' capabilities.

There's no doubt that Microsoft has placed all its bets on generative AI, especially after making a multi-billion-dollar investment in OpenAI and further extending its partnership with the company.

Speaking of OpenAI, we've just witnessed a dramatic shake-up of the tech firm's top management. OpenAI's board of directors stripped Sam Altman of his position as CEO, citing a lack of confidence in his leadership. Shortly after, Altman was offered a job at Microsoft leading a new Advanced AI team alongside Greg Brockman, the OpenAI co-founder who resigned from the company shortly after Altman's ousting.

As all this unfolds, Microsoft has published a new blog post highlighting its efforts toward teaching small language models how to reason. A few months ago, the company debuted Orca, which it described as "a 13-billion parameter language model demonstrating strong reasoning abilities by imitating the step-by-step reasoning traces of more capable LLMs."

And now, it has unveiled Orca 2 (which comes in two sizes: 7 billion and 13 billion parameters) as part of its efforts to tap into the capabilities of smaller language models. According to Microsoft, Orca 2 demonstrates that "improved training signals and methods can empower smaller language models to achieve enhanced reasoning abilities." This is a significant feat, considering these capabilities are typically found only in much larger language models, such as those powering ChatGPT and Copilot.
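To make the idea of "imitating reasoning traces" more concrete, here's a minimal, hypothetical sketch of the general approach: fine-tune a small causal language model on step-by-step solutions written by a stronger teacher model. The student model, the toy dataset, and the training settings below are illustrative assumptions, not Microsoft's actual Orca 2 recipe or data.

```python
# Sketch: fine-tune a small "student" LM on reasoning traces from a teacher LLM.
# Model choice, data, and hyperparameters are illustrative assumptions only.
from datasets import Dataset
from transformers import (
    AutoModelForCausalLM,
    AutoTokenizer,
    DataCollatorForLanguageModeling,
    Trainer,
    TrainingArguments,
)

# Small student model used as a stand-in for a 7B/13B-class model like Orca 2.
STUDENT_MODEL = "microsoft/phi-1_5"

# Toy (question, teacher trace) pairs. In practice, the traces would be
# generated at scale by a more capable teacher LLM.
examples = [
    {
        "question": "A train travels 60 km in 45 minutes. What is its speed in km/h?",
        "trace": "45 minutes is 0.75 hours. Speed = distance / time = 60 / 0.75 = 80. Answer: 80 km/h.",
    },
]

tokenizer = AutoTokenizer.from_pretrained(STUDENT_MODEL)
tokenizer.pad_token = tokenizer.eos_token
model = AutoModelForCausalLM.from_pretrained(STUDENT_MODEL)

def tokenize(example):
    # The student is trained to reproduce the teacher's step-by-step reasoning.
    text = (
        f"Question: {example['question']}\n"
        f"Answer: {example['trace']}{tokenizer.eos_token}"
    )
    return tokenizer(text, truncation=True, max_length=512)

dataset = Dataset.from_list(examples).map(tokenize, remove_columns=["question", "trace"])

trainer = Trainer(
    model=model,
    args=TrainingArguments(
        output_dir="orca-style-student",
        per_device_train_batch_size=1,
        num_train_epochs=1,
    ),
    train_dataset=dataset,
    # Causal language modeling: the collator builds labels from the input tokens.
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()
```

The point of the technique is that the small model never has to discover multi-step reasoning on its own; it learns by imitating worked solutions from a larger model.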

Admittedly, both ChatGPT and Copilot have faced numerous setbacks this year, with several users claiming that ChatGPT is getting dumber amid reports that OpenAI is on the verge of bankruptcy. On the other hand, a separate report indicated that Bing's user base had stagnated for three consecutive months, despite Microsoft's hefty investment in the tech.

Microsoft further claims that Orca 2 is miles ahead of other models of similar size, including the original Orca. Moreover, the company says it matches or exceeds the performance of much larger models when handling complex tasks that test "advanced reasoning abilities in zero-shot settings."

Frontier language models such as GPT-4, PaLM, and others have demonstrated a remarkable ability to reason, for example, answering complex questions, generating explanations, and even solving problems that require multi-step reasoning, capabilities that were once considered beyond the reach of AI. Traditionally, such abilities have not been observed in smaller language models, so the challenge is how to use our growing knowledge of large language models to increase the abilities of these smaller models.

Microsoft
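For context, "zero-shot" means the model is given only the task itself, with no worked examples included in the prompt. The snippet below is a minimal sketch of what that looks like with the Orca 2 weights Microsoft has released on Hugging Face; the model ID and prompt layout here are assumptions, so check the official model card for the exact chat template.

```python
# Sketch: zero-shot prompting of a small reasoning model.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Assumed Hugging Face ID for the 7-billion-parameter release; verify against
# the official model card before using.
MODEL_ID = "microsoft/Orca-2-7b"

tokenizer = AutoTokenizer.from_pretrained(MODEL_ID, use_fast=False)
model = AutoModelForCausalLM.from_pretrained(
    MODEL_ID, torch_dtype=torch.float16, device_map="auto"
)

# Zero-shot: the prompt contains only the task, no worked examples.
prompt = (
    "You are a helpful assistant that reasons step by step.\n"
    "Question: A shop sells pens in packs of 12. How many packs are needed for 150 pens?\n"
    "Answer:"
)

inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=200)
# Print only the newly generated tokens, not the prompt.
print(tokenizer.decode(output[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True))
```

The exact wording of the prompt matters less than the absence of in-context examples, which is what the zero-shot claim is about.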

Microsoft's Advanced AI team is on course

OpenAI staffers joining Microsoft

(Image credit: Windows Central | Bing Image Creator)

Amid the OpenAI fiasco over the weekend, Microsoft CEO Satya Nadella announced that Sam Altman would be joining the company to lead its new Advanced AI team, a role that suits his capabilities and skill set.

Following news of Altman's ouster, more than 500 OpenAI staffers penned a letter to the board of directors requesting his reinstatement, arguing that the decision undermined the company's vision. The employees indicated they'd leave the company if their demands weren't met, stating that "there's no OpenAI without its people."

According to sources familiar with the situation, Microsoft is ready to absorb all OpenAI employees into its AI division should they make good on their promise and leave the company.

Microsoft will likely leverage this incoming talent to achieve more with Orca 2, using large language models to train smaller ones and ultimately expanding what small models can do.

Do you think Microsoft is on track with its Orca 2 ventures? Share your thoughts with us in the comments.

Kevin Okemwa
Contributor

Kevin Okemwa is a seasoned tech journalist based in Nairobi, Kenya with lots of experience covering the latest trends and developments in the industry. With a passion for innovation and a keen eye for detail, he has written for leading publications such as OnMSFT, MakeUseOf, and Windows Report, providing insightful analysis and breaking news on everything revolving around the Microsoft ecosystem. While AFK and not busy following the ever-emerging trends in tech, you can find him exploring the world or listening to music.