Microsoft Reducing AI Compute Needs With SLMs: Tech Disruptors

“Microsoft is making a bet that we’re not going to need a single AI; we’re going to need many different AIs,” Sebastien Bubeck, Microsoft’s vice president of generative AI research, tells Bloomberg senior technology analyst Anurag Rana. On this Tech Disruptors episode, the two spend considerable time discussing the differences between a large language model like GPT-4o and a small language model such as Microsoft’s Phi-3 family. Bubeck and Rana walk through use cases for each type of model across a range of industries and workflows, and compare the costs and compute/GPU requirements of SLMs versus LLMs.
Jul 25, 2024