Microsoft has introduced a number of updates to its Azure AI platform, including the expansion of the Phi-3 family of small language models (SLMs).
The company has added two new models to the family: Phi-3.5-MoE and Phi-3.5-mini, which are designed to improve efficiency and accuracy.
Among the main benefits of the new Microsoft models are their multilingual capabilities: they now support more than 20 languages.
Microsoft adds two new Phi-3 models
Phi-3.5-MoE, a 42-billion-parameter Mixture of Experts model, combines 16 smaller models into one. In this way, Microsoft can combine the speed and computational efficiency of the smaller models with the quality and accuracy of the larger ones.
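For readers unfamiliar with the Mixture-of-Experts pattern, the sketch below shows how a router can activate only a few experts per token, which is what keeps compute low even with many experts in the layer. The 16-expert count follows the article; the top-2 routing, dimensions, and function names are illustrative assumptions, not details of Phi-3.5-MoE's published architecture.

```python
import numpy as np

def moe_layer(x, gate_w, experts, top_k=2):
    """Route one token vector through a toy Mixture-of-Experts layer.

    x        : (d,) token hidden state
    gate_w   : (d, n_experts) router weights
    experts  : list of callables, each mapping (d,) -> (d,)
    top_k    : number of experts activated per token (assumed, not from the article)
    """
    logits = x @ gate_w                      # router score for each expert
    top = np.argsort(logits)[-top_k:]        # indices of the best-scoring experts
    weights = np.exp(logits[top])
    weights /= weights.sum()                 # softmax over the selected experts only
    # Only the selected experts run, so per-token compute scales with top_k,
    # not with the total number of experts in the layer.
    return sum(w * experts[i](x) for w, i in zip(weights, top))

# Toy usage: 16 experts in the layer, but only 2 run per token.
d, n_experts = 8, 16
rng = np.random.default_rng(0)
experts = [(lambda W: (lambda v: np.tanh(v @ W)))(rng.normal(size=(d, d)))
           for _ in range(n_experts)]
gate_w = rng.normal(size=(d, n_experts))
y = moe_layer(rng.normal(size=d), gate_w, experts)
print(y.shape)  # (8,)
```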
Phi-3.5-mini is significantly smaller, with 3.8 billion parameters, but its multilingual capabilities enable broader global use. It supports Arabic, Chinese, Czech, Danish, Dutch, English, Finnish, French, German, Hebrew, Hungarian, Italian, Japanese, Korean, Norwegian, Polish, Portuguese, Russian, Spanish, Swedish, Thai, Turkish and Ukrainian.
Microsoft says Phi-3.5-mini is a significant upgrade over the Phi-3-mini model launched two months ago, incorporating improvements based on user feedback.
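As a rough illustration of how a small model like this might be tried out, the snippet below uses the Hugging Face transformers pipeline with a multilingual prompt. The checkpoint name microsoft/Phi-3.5-mini-instruct and the hardware setup are assumptions not taken from the article and should be verified before use.

```python
# pip install transformers accelerate torch
from transformers import pipeline

# Assumed checkpoint name; the article does not state where the model is published.
generator = pipeline(
    "text-generation",
    model="microsoft/Phi-3.5-mini-instruct",
    device_map="auto",
)

# A German prompt to exercise the broader language coverage.
messages = [
    {"role": "user", "content": "Fasse in einem Satz zusammen, was ein Sprachmodell ist."},
]
out = generator(messages, max_new_tokens=60)
print(out[0]["generated_text"][-1]["content"])  # the model's reply
```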
In addition to two new models, Microsoft has also introduced several new tools and services within Azure AI to make it easier to extract insights from unstructured data.
Beyond the Phi family, Microsoft will make AI21's Jamba 1.5 Large and Jamba 1.5 models available on Azure AI as models as a service, offering long-context processing capabilities.
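For context, models-as-a-service deployments on Azure AI are typically called through a chat-completions style endpoint. The sketch below assumes the azure-ai-inference Python SDK and a placeholder serverless endpoint; neither the endpoint URL nor the deployment details come from the announcement.

```python
# pip install azure-ai-inference
from azure.ai.inference import ChatCompletionsClient
from azure.ai.inference.models import SystemMessage, UserMessage
from azure.core.credentials import AzureKeyCredential

# Placeholder endpoint and key for a hypothetical Jamba 1.5 serverless deployment.
client = ChatCompletionsClient(
    endpoint="https://<your-jamba-deployment>.inference.ai.azure.com",
    credential=AzureKeyCredential("<api-key>"),
)

# Long-context models are typically fed large documents directly in the prompt.
long_document = open("contract.txt", encoding="utf-8").read()

response = client.complete(
    messages=[
        SystemMessage(content="You summarise legal documents."),
        UserMessage(content=f"Summarise the key obligations:\n\n{long_document}"),
    ],
    max_tokens=512,
)
print(response.choices[0].message.content)
```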
Other announcements included the general availability of the VS Code extension for Azure Machine Learning and the general availability of the Conversational PII Detection Service in Azure AI Language.
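As a rough sketch of what calling the Conversational PII Detection service might look like, the snippet below posts a short transcript to the Azure AI Language analyze-conversations job API. The endpoint path, API version, and payload field names are assumptions and should be checked against the service documentation.

```python
import requests

# Assumed resource endpoint, API version, and payload shape; verify against the docs.
endpoint = "https://<your-language-resource>.cognitiveservices.azure.com"
url = f"{endpoint}/language/analyze-conversations/jobs?api-version=2024-05-01"
headers = {"Ocp-Apim-Subscription-Key": "<api-key>", "Content-Type": "application/json"}

body = {
    "displayName": "Redact PII from a support chat",
    "analysisInput": {
        "conversations": [{
            "id": "1",
            "language": "en",
            "modality": "text",
            "conversationItems": [
                {"id": "1", "participantId": "agent",
                 "text": "Can I have your phone number, please?"},
                {"id": "2", "participantId": "customer",
                 "text": "Sure, it's 555-0100 and my name is Jane Doe."},
            ],
        }]
    },
    "tasks": [{"kind": "ConversationalPIITask",
               "taskName": "pii", "parameters": {"modelVersion": "latest"}}],
}

# The job API is asynchronous: the Operation-Location header points to a
# status URL that is polled until the redacted results are ready.
resp = requests.post(url, headers=headers, json=body)
print(resp.status_code, resp.headers.get("operation-location"))
```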
“We continue to invest across the Azure AI stack to deliver next-generation innovation to our customers so they can build, deploy, and scale their AI solutions securely and confidently,” said Eric Boyd, Corporate Vice President of Azure AI Platform.