Efficient Reinforcement Learning – Rhythm Garg & Linden Li, Applied Compute
At Applied Compute, efficient reinforcement learning is critical for delivering business value. This talk traces the transition from inefficient synchronous RL to a high-throughput asynchronous 'Pipeline RL' system. The core challenge is managing 'staleness', a side effect of in-flight weight updates that can destabilize training. The speakers detail a first-principles systems model, based on the Roofline model, that they use to simulate GPU allocations and find the optimal split between sampling and training, balancing throughput against algorithmic stability and achieving significant speedups.
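The talk does not publish the model itself, but the idea of simulating GPU splits can be illustrated with a toy sketch. Below, each stage's per-GPU throughput follows a roofline bound (the minimum of peak compute and bandwidth times arithmetic intensity), and a brute-force search picks the split that maximizes steady-state pipeline throughput. All function names and numbers are hypothetical, not Applied Compute's actual model.

```python
# Toy sketch (hypothetical, not the speakers' actual simulator): choose the
# GPU split between sampling and training that maximizes pipeline throughput.

def stage_throughput(gpus, peak_flops, bandwidth, intensity):
    """Roofline bound: per-GPU rate is min(peak, bandwidth * arithmetic intensity)."""
    return gpus * min(peak_flops, bandwidth * intensity)

def best_split(total_gpus, sample_cfg, train_cfg):
    """A pipeline runs at the rate of its slower stage; search every split."""
    best = (0, 0.0)
    for s in range(1, total_gpus):
        t = total_gpus - s  # remaining GPUs go to training
        rate = min(stage_throughput(s, *sample_cfg),
                   stage_throughput(t, *train_cfg))
        if rate > best[1]:
            best = (s, rate)
    return best

# Hypothetical numbers: sampling is memory-bound (low arithmetic intensity),
# training is compute-bound (high intensity), so sampling needs more GPUs.
split, rate = best_split(16,
                         sample_cfg=(100.0, 2.0, 10.0),
                         train_cfg=(100.0, 2.0, 100.0))
print(split, rate)  # → 13 260.0
```

In a real asynchronous system the search would also penalize splits that let sampler weights grow too stale, which is the throughput-versus-stability trade-off the abstract describes.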