Copilot Podcast: Understanding SLMs and Multimodal AI Models
In episode 23 of the “Copilot Podcast,” AI expert Aaron Back breaks down the various types of AI models, including large language models (LLMs), small language models (SLMs), and multimodal AI models.
This episode is sponsored by Community Summit North America, the largest independent gathering of the Microsoft Business Applications ecosystem, taking place Oct. 13-17, 2024, in San Antonio, Texas. Register today to connect with thousands of users at this “for users, by users” event.
Highlights
00:58 — Large language models (LLMs) are well known, but Aaron notes that there are other AI models, including small language models (SLMs) and multimodal AI models. SLMs and LLMs are similar but vary in size and purpose: SLMs are purpose-built and, as the name suggests, smaller, making them easier to maintain.
03:08 — Aaron notes that “purpose-built” SLMs are versatile across many scenarios, easy to fine-tune, and still dependent on quality data.
03:51 — Multimodal AI models, which work with inputs beyond text, such as images, come into play in tasks like image touch-ups and help bolster creativity.
05:32 — Aaron suspects that 2024 will bring more usage and more applicable use cases for SLMs because they are versatile, easy to maintain, and purpose-built. SLMs fuel Copilot for specific Dynamics 365 applications. In fact, Microsoft has released a small language model, Phi-2, which is now available in Azure AI Studio, and it’s “only a matter of time before we see this impacting Microsoft Copilot.”
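For listeners who want to try Phi-2 hands-on, here is a minimal sketch of running it locally with the Hugging Face transformers library. This is one common way to experiment with the model outside of Azure AI Studio (which is what the episode itself references); the prompt is purely illustrative.

```python
# Minimal sketch: generate text with Microsoft's Phi-2 SLM via Hugging Face.
# Assumes a recent transformers release with native Phi support; older
# versions may require trust_remote_code=True in from_pretrained().
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "microsoft/phi-2"  # public model card on the Hugging Face Hub
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

# An illustrative prompt echoing the episode's LLM-vs-SLM theme.
prompt = "Explain the difference between an LLM and an SLM in one sentence."
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=60)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

Because Phi-2 is small (2.7B parameters), it can run on a single consumer GPU or even CPU, which is part of why purpose-built SLMs are easier to maintain and fine-tune than their larger counterparts.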
Stream the audio version of this episode here: