Tokenless

Machine Learning

Migrating from Neptune to Weights & Biases

A technical guide on migrating ML experiments from Neptune to Weights & Biases, covering the migration script, API-level code changes, and best practices for organizing projects and analyzing results in the W&B platform before the Neptune sunset.

W&B Models end-to-end demo

W&B Models is the system of record for the entire model development lifecycle. This guide explores how to monitor training, tune hyperparameters, track artifacts and lineage for reproducibility, and automate MLOps workflows like evaluation and deployment using a central platform.

Post-training best-in-class models in 2025

An expert overview of post-training techniques for language models, covering the entire workflow from data generation and curation to advanced algorithms like Supervised Fine-Tuning (SFT), Direct Preference Optimization (DPO), and Reinforcement Learning (RL), along with practical advice on evaluation and iteration.

Artificial Intelligence

This Is The Next Industry AI Will Disrupt

Onshore founder Dominic Vitucci discusses how AI is causing a tectonic shift in the accounting industry, moving from a model based on billable hours to one of technology-driven outcomes, and what this means for the future of the profession and the legacy firms that dominate it.

OWASP's Top 10 Ways to Attack LLMs: AI Vulnerabilities Exposed

A detailed breakdown of the updated OWASP Top 10 vulnerabilities for Large Language Models (LLMs), explaining threats like prompt injection, data poisoning, and supply chain risks, along with practical defense strategies.

AI Won't Replace You—But Someone Using AI Will

In this episode, Ben Lorica and Evangelos Simoudis discuss how AI is fundamentally reshaping the modern workplace. They explore the necessary evolution of knowledge work, from a focus on routine execution to problem definition and spec-driven development, and outline the critical skills professionals must cultivate—including rapid experimentation, AI agent orchestration, and systems thinking—to remain valuable and navigate a more volatile labor market.

Technology

Platform Engineering • Ajay Chankramath & Nic Cheneweth • GOTO 2026

Ajay Chankramath and Nic Cheneweth discuss the critical elements of effective platform engineering, emphasizing a product mindset, the foundational role of control planes and API-first design, the common pitfalls of implementing Backstage, and the emerging impact of AI and agents on the platform landscape.

SW Design, Architecture & Clarity at Scale • Sam Newman, Jacqui Read & Simon Rohrer

Experts Sam Newman, Jacqui Read, and Simon Rohrer explore the nuances of software design, its intersection with architecture, and the critical role of communication in scaling technical clarity. The discussion covers practical advice on implementing Architectural Decision Records (ADRs), the evolving role of the architect as a facilitator, and strategies for creating agile enterprise architectures.

Learn Docker in a Month of Lunches • Elton Stoneman & Bret Fisher • GOTO 2026

Docker educators Bret Fisher and Elton Stoneman discuss the second edition of Stoneman's book, "Learn Docker in a Month of Lunches". They explore why Docker fundamentals remain crucial in a Kubernetes-dominated world, the evolution of the container ecosystem over the past five years, and the key skills that differentiate a Docker expert from a beginner, such as multi-platform builds, security, and configuration management.


Recent Posts

Why humans are AI's biggest bottleneck (and what's coming in 2026) | Alexander Embiricos (OpenAI)

Alexander Embiricos, product lead for OpenAI's Codex, shares the vision of AI as a software engineering teammate, not just a tool. He explains how a strategic shift to a local, interactive experience unlocked 20x growth, details how the Sora Android app was built in 28 days, and argues that the real bottleneck to AGI-level productivity is now human review speed, not model capability.

The Mathematical Foundations of Intelligence [Professor Yi Ma]

Professor Yi Ma challenges our understanding of intelligence, proposing a unified mathematical theory based on two principles: parsimony and self-consistency. He argues that current large models merely memorize statistical patterns in already-compressed human knowledge (like text) rather than achieving true understanding. This framework re-contextualizes deep learning as a process of compression and denoising, allowing for the derivation of Transformer architectures like CRATE from first principles, paving the way for a more interpretable, white-box approach to AI.

The arrival of AGI | Shane Legg (co-founder of DeepMind)

Shane Legg, Chief AGI Scientist at Google DeepMind, outlines his framework for AGI, predicting 'minimal AGI' within years and 'full AGI' within a decade. He details a path to more reliable systems and introduces 'System 2 Safety' for building ethical AI. Legg issues an urgent call for society to prepare for the massive economic and structural transformations that advanced AI will inevitably bring.
