Self-supervised learning

If You Can't See Inside, How Do You Know It's THINKING? [Dr. Jeff Beck]

Dr. Jeff Beck explores the philosophical and technical definitions of agency, arguing that the distinction between an agent and an object lies in computational sophistication, particularly the capacity for planning and counterfactual reasoning. The conversation provides a deep dive into Energy-Based Models (EBMs), Yann LeCun's JEPA (Joint-Embedding Predictive Architecture) for learning in latent space, and a pragmatic approach to AI safety centered on inverse reinforcement learning rather than fears of rogue superintelligence.
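
To make the latent-space idea concrete, here is a minimal sketch (not the specific models discussed in the episode) of a JEPA-style energy: a predictor maps a context embedding to a target embedding, and the squared distance between predicted and actual target embeddings serves as the energy to minimize. All module shapes and dimensions are illustrative assumptions.

```python
# Minimal JEPA-style sketch: predict the target's *embedding* rather than
# its raw signal, and use prediction error as an energy (EBM view).
# Shapes and layer sizes are illustrative assumptions, not the episode's models.
import torch
import torch.nn as nn

class TinyJEPA(nn.Module):
    def __init__(self, input_dim=64, latent_dim=16):
        super().__init__()
        # Context encoder: trained by gradient descent.
        self.context_encoder = nn.Sequential(
            nn.Linear(input_dim, latent_dim), nn.ReLU(),
            nn.Linear(latent_dim, latent_dim),
        )
        # Target encoder: in practice often an EMA copy of the context
        # encoder; kept frozen here for simplicity.
        self.target_encoder = nn.Sequential(
            nn.Linear(input_dim, latent_dim), nn.ReLU(),
            nn.Linear(latent_dim, latent_dim),
        )
        for p in self.target_encoder.parameters():
            p.requires_grad = False
        # Predictor maps context embedding to predicted target embedding.
        self.predictor = nn.Linear(latent_dim, latent_dim)

    def energy(self, context, target):
        """Low energy = compatible (context, target) pair."""
        z_pred = self.predictor(self.context_encoder(context))
        z_tgt = self.target_encoder(target)
        return ((z_pred - z_tgt) ** 2).sum(dim=-1)

model = TinyJEPA()
ctx, tgt = torch.randn(8, 64), torch.randn(8, 64)
loss = model.energy(ctx, tgt).mean()  # minimize energy on observed pairs
loss.backward()
```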

The Mathematical Foundations of Intelligence [Professor Yi Ma]

Professor Yi Ma presents a unified mathematical theory of intelligence built on two principles: parsimony and self-consistency. He challenges the notion that large language models (LLMs) truly understand, arguing they are sophisticated memorization systems, and demonstrates how architectures like the Transformer can be derived from the first of these principles, parsimony, understood as compression.
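
As a concrete handle on the compression principle, the sketch below computes the coding-rate measure from Ma and collaborators' maximal coding rate reduction (MCR²) line of work: R(Z, ε) = ½ log det(I + d/(nε²) ZZᵀ), roughly the number of bits needed to encode n d-dimensional features up to distortion ε. The shapes and ε value here are illustrative assumptions.

```python
# Coding-rate sketch: compressed (low-rank) features cost fewer bits
# than spread-out ones, which is the quantity parsimony asks us to reduce.
import numpy as np

def coding_rate(Z: np.ndarray, eps: float = 0.5) -> float:
    """R(Z, eps) = 1/2 * logdet(I + d/(n*eps^2) * Z @ Z.T), Z of shape (d, n)."""
    d, n = Z.shape
    _, logdet = np.linalg.slogdet(np.eye(d) + (d / (n * eps**2)) * Z @ Z.T)
    return 0.5 * logdet

rng = np.random.default_rng(0)
spread = rng.normal(size=(8, 100))                               # full-rank features
compressed = np.outer(rng.normal(size=8), rng.normal(size=100))  # rank-1 features
print(coding_rate(spread), ">", coding_rate(compressed))
```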

Intern talk: Distilling Self-Supervised-Learning-Based Speech Quality Assessment into Compact Models

This research explores the distillation and pruning of large, self-supervised speech quality assessment models into compact and efficient versions. Starting with the high-performing but large XLSR-SQA model, the work details a process of knowledge distillation using a teacher-student framework with a diverse, on-the-fly generated dataset. The resulting compact models successfully close over half the performance gap to the teacher, making them suitable for on-device and production applications where model size is a critical constraint.
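
The summary does not include the talk's code, but a teacher-student distillation loop of the kind described might look like the following sketch. `Teacher` and `Student` are hypothetical stand-ins for XLSR-SQA and its compact counterpart, and random tensors stand in for the on-the-fly generated speech features.

```python
# Teacher-student distillation sketch: the frozen teacher's predicted
# quality score (e.g., a MOS estimate) becomes the student's regression target.
import torch
import torch.nn as nn

class Teacher(nn.Module):          # stand-in for the large XLSR-SQA model
    def __init__(self, dim=1024):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(dim, 256), nn.ReLU(), nn.Linear(256, 1))
    def forward(self, x):
        return self.net(x)         # predicted quality score

class Student(nn.Module):          # compact model for on-device use
    def __init__(self, dim=1024):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(dim, 32), nn.ReLU(), nn.Linear(32, 1))
    def forward(self, x):
        return self.net(x)

teacher, student = Teacher().eval(), Student()
opt = torch.optim.Adam(student.parameters(), lr=1e-3)

for step in range(100):                 # on-the-fly batches of "degraded speech"
    batch = torch.randn(16, 1024)       # stand-in features
    with torch.no_grad():
        target = teacher(batch)         # teacher's score is the soft label
    loss = nn.functional.mse_loss(student(batch), target)
    opt.zero_grad()
    loss.backward()
    opt.step()
```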