Meta’s Llama 4 is here — and it’s big news for video producers using AI
Bill Milling here, AI video producer at American Movie Company. Meta just released two models in its new Llama 4 family: Scout and Maverick, each running 17B active parameters. They're designed for more personalized multimodal experiences, meaning they can understand text and images together and respond in natural language, rather than handling text alone.
They're Meta's first open-weight, natively multimodal models, and they support much longer context windows (Meta says Scout handles up to 10 million tokens). That's huge for video teams working on long-form content, scene breakdowns, or AI-assisted storyboarding.
Under the hood is a Mixture of Experts (MoE) architecture: for each token, a router activates only a small subset of the model's parameters, so you get large-model quality without paying large-model compute on every step. That's a win for anyone building or adapting AI tools in their pipeline.
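For the technically curious, here's a toy Mixture of Experts layer in PyTorch. This is a conceptual sketch, not Meta's implementation; the dimensions, expert count, and routing scheme are all simplified for illustration. The idea is that a small router scores the experts for each token, and only the top few actually run:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class TinyMoE(nn.Module):
    """Toy Mixture of Experts layer: a router picks the top-k experts
    per token, so only a fraction of total parameters run per step."""
    def __init__(self, dim=64, n_experts=8, top_k=2):
        super().__init__()
        self.router = nn.Linear(dim, n_experts)  # scores each expert per token
        self.experts = nn.ModuleList(
            [nn.Linear(dim, dim) for _ in range(n_experts)]
        )
        self.top_k = top_k

    def forward(self, x):                               # x: (tokens, dim)
        scores = self.router(x)                         # (tokens, n_experts)
        weights, idx = scores.topk(self.top_k, dim=-1)  # keep top-k experts
        weights = F.softmax(weights, dim=-1)
        out = torch.zeros_like(x)
        for k in range(self.top_k):
            for e, expert in enumerate(self.experts):
                mask = idx[:, k] == e                   # tokens routed to e
                if mask.any():
                    out[mask] += weights[mask, k:k+1] * expert(x[mask])
        return out

# Each token activates only 2 of 8 experts -- the same reason Llama 4
# can quote "17B active parameters" out of a much larger total.
tokens = torch.randn(10, 64)
print(TinyMoE()(tokens).shape)  # torch.Size([10, 64])
```

The takeaway for non-engineers: "active parameters" counts what runs per token, not what's stored on disk, which is why MoE models punch above their compute weight.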
Meta also teased Llama 4 Behemoth, a staggering 288B-active-parameter model that's still in training. Behemoth serves as a teacher model: its knowledge gets distilled into smaller models like Scout and Maverick.
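In plain terms, a "teacher model" guides smaller student models through knowledge distillation: the student is trained to match the teacher's output distribution. Here's a minimal sketch with made-up toy tensors; the logits and temperature are illustrative only, and this is not Meta's training code:

```python
import torch
import torch.nn.functional as F

# Hypothetical teacher and student logits over a tiny 4-word vocabulary.
teacher_logits = torch.tensor([[2.0, 0.5, -1.0, 0.1]])
student_logits = torch.randn(1, 4, requires_grad=True)

T = 2.0  # temperature softens the distributions so the student sees more signal
teacher_probs = F.softmax(teacher_logits / T, dim=-1)
student_log_probs = F.log_softmax(student_logits / T, dim=-1)

# KL divergence pulls the student's predictions toward the teacher's.
loss = F.kl_div(student_log_probs, teacher_probs, reduction="batchmean") * T**2
loss.backward()  # gradients now nudge the student toward the teacher
print(loss.item())
```

That's the gist of why a huge model you'll never run locally still matters: it shapes the smaller, shippable ones.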
If you’re a video producer exploring AI like I am, keep a close eye on this. The tools are evolving fast — and they’re only getting smarter.