Microsoft has officially launched Phi-3, a new family of small language models (SLMs). These AI models can help developers with specific use cases. The Phi-3 SLMs are trained on the same kind of high-quality data used to train Microsoft’s top-end AI models.
Microsoft’s Phi-3 SLMs: Details
Microsoft says that its Phi-3 SLMs outperform models of similar size on language, reasoning, coding, and math benchmarks. They suit developers who repeatedly run the same narrow tasks and do not need a full-scale LLM.
Phi-3 will be available in three sub-models: mini, small, and medium. Microsoft has released Phi-3-mini on its Azure AI Studio, Hugging Face, and Ollama. Developers can use the SLM to power specific AI features in their software. It is available in variants with context windows ranging from 4K to 128K tokens.
Phi-3-mini is also optimized for Nvidia GPUs and Windows DirectML for compatibility across a wide range of systems. It is instruction-tuned, making it ready to deploy in software because it can follow instructions contextually. It can also run locally on mobile devices without an active connection to cloud servers.
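Because the model is instruction-tuned, prompts are wrapped in chat-style turn markers before generation. As a rough sketch (assuming the `<|user|>` / `<|assistant|>` / `<|end|>` markers described on the Phi-3-mini model card; the helper function name here is hypothetical), a single-turn prompt might be built like this:

```python
# Sketch of the chat-style prompt format an instruction-tuned Phi-3 model
# expects. The turn markers below are an assumption based on the published
# model card; in practice a tokenizer's chat template would handle this.
def build_phi3_prompt(user_message: str) -> str:
    """Format a single user turn for an instruction-tuned Phi-3 model."""
    return f"<|user|>\n{user_message}<|end|>\n<|assistant|>\n"

prompt = build_phi3_prompt("Summarize this release note in one sentence.")
print(prompt)
```

In real use, a runtime such as Ollama or a Hugging Face tokenizer's chat template applies this formatting automatically, so application code only supplies the raw user message.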
Microsoft will also release Phi-3-small (7B parameters) and Phi-3-medium (14B parameters) over the next few weeks. All of these models are developed in accordance with Microsoft’s Responsible AI Standard.