Model deployment

Compilers in the Age of LLMs — Yusuf Olokoba, Muna

Yusuf Olokoba, founder of Muna, details a compiler-based approach to transform Python AI functions into self-contained native binaries. This talk explores the technical pipeline, including custom AST-based tracing, type propagation, and the strategic use of LLMs for code generation, enabling a universal, OpenAI-style client for running any model on any platform.
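To make the AST-based tracing and type-propagation idea concrete, here is a minimal, hypothetical Python sketch of the general technique: parsing a function's source into an AST, collecting the calls it makes, and reading the argument types a later pass could propagate. This is an illustration of the approach described in the talk, not Muna's actual compiler pipeline; all function names here are made up.

```python
# Toy illustration of AST-based tracing: parse a Python "AI function" into its
# AST so later compiler passes can build a call graph and propagate types.
# Hypothetical sketch only -- not Muna's implementation.
import ast
import inspect
from typing import Callable

def trace_function(fn: Callable) -> ast.FunctionDef:
    """Parse a function's source into an AST node for later analysis passes."""
    tree = ast.parse(inspect.getsource(fn))
    return tree.body[0]  # the function definition itself

def collect_calls(fn_def: ast.FunctionDef) -> list[str]:
    """Record the names of functions called in the body (a crude call graph)."""
    return [
        node.func.id
        for node in ast.walk(fn_def)
        if isinstance(node, ast.Call) and isinstance(node.func, ast.Name)
    ]

def argument_types(fn_def: ast.FunctionDef) -> dict[str, str]:
    """Read explicit annotations; a real compiler would also infer and propagate types."""
    return {
        arg.arg: ast.unparse(arg.annotation) if arg.annotation else "unknown"
        for arg in fn_def.args.args
    }

# Example "AI function" a compiler might turn into a self-contained native binary.
def predict(image_path: str, threshold: float = 0.5) -> bool:
    score = len(image_path) / 100.0  # stand-in for real model inference
    return score > threshold

fn_def = trace_function(predict)
print(collect_calls(fn_def))    # ['len']
print(argument_types(fn_def))   # {'image_path': 'str', 'threshold': 'float'}
```

From a trace like this, a compiler can emit typed native code for each call site, which is what makes a single cross-platform client feasible.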

The CEO Behind the Fastest-Growing AI Inference Company | Tuhin Srivastava

Tuhin Srivastava, CEO of Baseten, joins Gradient Dissent to discuss the core challenges of AI inference, from infrastructure and runtime bottlenecks to the practical differences between vLLM, TensorRT-LLM, and SGLang. He shares how Baseten spent years searching for a market before the explosion of large-scale models, and describes a company-building philosophy focused on avoiding premature scaling and "burning the boats" to chase the biggest opportunities.