
Granite 4.0: Small AI Models, Big Efficiency

IBM's Granite 4.0 models introduce a hybrid architecture that combines Mamba-2 and Transformer blocks with a Mixture of Experts (MoE) design. The approach lets smaller models deliver strong performance with higher speed and lower memory use, outperforming much larger models on key enterprise tasks while running on consumer-grade hardware.
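
To make the idea concrete, here is a minimal, hypothetical sketch of what such a hybrid layer stack can look like: most layers use a state-space-style sequence mixer with a constant-size recurrent state (here a toy gated recurrence standing in for Mamba-2, which in practice uses a parallel selective scan), a few layers use full self-attention, and each layer ends in a sparsely routed MoE feed-forward. This is not IBM's implementation; all module names and hyperparameters below are illustrative assumptions.

```python
# Hypothetical sketch of a hybrid SSM/attention stack with an MoE feed-forward.
# The ToySSMBlock is a simplified stand-in for a Mamba-2 block, not the real thing.
import torch
import torch.nn as nn
import torch.nn.functional as F


class ToySSMBlock(nn.Module):
    """Gated linear recurrence: O(1) state per step instead of a growing KV cache."""
    def __init__(self, d_model: int):
        super().__init__()
        self.in_proj = nn.Linear(d_model, 2 * d_model)
        self.decay = nn.Parameter(torch.zeros(d_model))   # per-channel decay (logit)
        self.out_proj = nn.Linear(d_model, d_model)
        self.norm = nn.LayerNorm(d_model)

    def forward(self, x):                                  # x: (batch, seq, d_model)
        h = self.norm(x)
        u, gate = self.in_proj(h).chunk(2, dim=-1)
        a = torch.sigmoid(self.decay)                      # decay in (0, 1)
        state = torch.zeros_like(u[:, 0])
        outs = []
        for t in range(u.size(1)):                         # sequential scan for clarity
            state = a * state + (1 - a) * u[:, t]
            outs.append(state)
        y = torch.stack(outs, dim=1) * F.silu(gate)
        return x + self.out_proj(y)


class AttentionBlock(nn.Module):
    """Standard pre-norm self-attention, used in only a fraction of layers."""
    def __init__(self, d_model: int, n_heads: int = 4):
        super().__init__()
        self.norm = nn.LayerNorm(d_model)
        self.attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)

    def forward(self, x):
        h = self.norm(x)
        out, _ = self.attn(h, h, h, need_weights=False)
        return x + out


class MoEFeedForward(nn.Module):
    """Top-1 routed MoE: each token activates only one small expert MLP."""
    def __init__(self, d_model: int, n_experts: int = 4, d_ff: int = 256):
        super().__init__()
        self.router = nn.Linear(d_model, n_experts)
        self.experts = nn.ModuleList(
            nn.Sequential(nn.Linear(d_model, d_ff), nn.GELU(), nn.Linear(d_ff, d_model))
            for _ in range(n_experts)
        )
        self.norm = nn.LayerNorm(d_model)

    def forward(self, x):
        h = self.norm(x)
        weights, idx = self.router(h).softmax(dim=-1).max(dim=-1)  # top-1 routing
        out = torch.zeros_like(h)
        for e, expert in enumerate(self.experts):
            mask = idx == e
            if mask.any():
                out[mask] = expert(h[mask]) * weights[mask].unsqueeze(-1)
        return x + out


class HybridModel(nn.Module):
    """Interleaves many SSM-style layers with occasional attention layers."""
    def __init__(self, d_model: int = 64, n_layers: int = 8, attn_every: int = 4):
        super().__init__()
        layers = []
        for i in range(n_layers):
            mixer = AttentionBlock(d_model) if (i + 1) % attn_every == 0 else ToySSMBlock(d_model)
            layers += [mixer, MoEFeedForward(d_model)]
        self.layers = nn.Sequential(*layers)

    def forward(self, x):
        return self.layers(x)


if __name__ == "__main__":
    model = HybridModel()
    tokens = torch.randn(2, 16, 64)           # (batch, seq, d_model)
    print(model(tokens).shape)                # torch.Size([2, 16, 64])
```

The memory argument follows from the structure: the recurrent layers carry a fixed-size state regardless of context length, attention appears in only a few layers, and the MoE activates a fraction of the feed-forward parameters per token.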

Monster prompt, OpenAI’s business play, nano-banana and US Open experimentations

The panel discusses KPMG's 100-page prompt for its TaxBot, debating the future of prompt engineering versus fine-tuning. They also analyze OpenAI's potential move into selling cloud infrastructure, the impressive capabilities of Google's new image model, Nano-Banana, and new AI-powered fan experiences at the US Open.