AI Decoded

This Week in AI

5 curated reads for the week of February 24, 2026


Business · 12 min · Good for Sunday

How McKinsey Is Deploying AI Across Its Consulting Workforce

McKinsey Global Institute

McKinsey has rolled out AI tools to over 40,000 consultants, automating research synthesis and first-draft writing. Productivity benchmarks show 20-30% time savings on research tasks, though client judgment and relationship work remain firmly human. Worth reading for the honest assessment of what automation can and cannot replace.

#consulting #enterprise
Models · 8 min · Good for midweek

Gemini 2.0 Model Updates: Flash, Flash-Lite, and Pro Experimental

Google DeepMind

Google's official breakdown of what changed in the Gemini 2.0 family — Flash goes generally available with higher rate limits, Flash-Lite arrives as the most cost-efficient option yet, and Pro Experimental targets complex coding tasks. Useful for understanding how model tiers are designed and what "generally available" signals about production readiness versus experimental releases.

#google #llm #benchmarks
Tools · 6 min · Good for midweek

How Generative AI Can Boost Highly Skilled Workers' Productivity

MIT Sloan

MIT Sloan's summary of the landmark BCG/Harvard field experiment on AI and knowledge worker productivity. Consultants using AI finished 12% more tasks, 25% faster, and produced 40% higher quality output. The nuance: AI helped most on tasks inside its capability frontier and hurt performance on tasks outside it. Practical implications for deciding which work to delegate to AI.

#productivity #tools
Regulation · 10 min · Good for Friday

Regulating General-Purpose AI: Areas of Convergence and Divergence Across the EU and the US

Brookings Institution

Brookings maps where EU and US AI regulation is converging (transparency requirements, risk tiering, liability frameworks) and where it is diverging, particularly on enforcement mechanisms and foundation model obligations. Essential reading for any organization operating across both jurisdictions as compliance timelines approach.

#eu #regulation #compliance
Models · 15 min · Good for Friday

Mixture of Experts Explained

Hugging Face

A thorough technical explainer on Mixture of Experts (MoE) architectures, the design powering Mixtral, DeepSeek, and Gemini 1.5. Covers how sparse routing enables larger parameter counts without proportional compute cost, the load-balancing challenges that sparsity introduces, and why MoE models behave differently from dense models at inference time. Assumes comfort with transformer fundamentals; see the routing sketch below.

#moe #architecture #llm
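To make the sparse-routing idea from that explainer concrete, here is a minimal sketch of top-k expert routing in plain NumPy. Every number in it (eight experts, top-2 routing, a 16-dimensional hidden state, random weights) is illustrative rather than taken from any particular MoE model; the point is only to show why unselected experts cost nothing for a given token.

```python
import numpy as np

rng = np.random.default_rng(0)

num_experts = 8   # total experts in the MoE layer (illustrative)
top_k = 2         # experts actually run per token (sparse activation)
d_model = 16      # hidden size of one token representation (illustrative)

# One token's hidden state and a learned router ("gate") projection.
token = rng.standard_normal(d_model)
router = rng.standard_normal((d_model, num_experts))

# Each expert is its own feed-forward block; a single weight matrix stands in here.
experts = [rng.standard_normal((d_model, d_model)) for _ in range(num_experts)]

# 1. The router scores every expert for this token...
logits = token @ router

# 2. ...but only the top-k scores are kept, and a softmax over just those
#    selected logits gives the mixing weights.
top_idx = np.argsort(logits)[-top_k:]
selected = np.exp(logits[top_idx] - logits[top_idx].max())
gates = selected / selected.sum()

# 3. Only the chosen experts are evaluated, and their outputs are combined by
#    the gates. The remaining num_experts - top_k experts do no work for this
#    token, which is how total parameter count can grow without a proportional
#    increase in per-token compute.
output = sum(g * (token @ experts[i]) for g, i in zip(gates, top_idx))

print("routed to experts", sorted(top_idx.tolist()), "with gates", np.round(gates, 3))
```

Real MoE layers add a load-balancing objective so the router spreads tokens across experts instead of collapsing onto a few favorites; that is the load-balancing challenge the explainer goes into.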

Going Deeper

Optional reads for those who want more. (Some may be behind a paywall.)