I learned interviewing and production machine learning the hard way, separately. This curriculum combines them into one path.
For engineers who write Python and want to go all the way to classical machine learning, large language model engineering, and shipping AI systems in production.
Every path shares the same rigorous foundations. Where you go after that depends on what you are building.
Build, deploy, and monitor a production machine learning system end to end. Churn prediction model served via an API, retrieval pipeline, streaming data, continuous deployment, Kubernetes.
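To give a flavor of what "a churn prediction model served via an API" means at its core, here is a minimal sketch. The feature names and weights are hypothetical, chosen for illustration; a real model would be trained on data and sit behind an HTTP endpoint with validation and monitoring.

```python
import math

# Hypothetical learned weights, for illustration only.
# A real model would be trained on historical customer data.
WEIGHTS = {"tenure_months": -0.08, "support_tickets": 0.45, "monthly_spend": -0.01}
BIAS = 0.2

def churn_probability(features: dict) -> float:
    """Score one customer with a logistic model over named features."""
    z = BIAS + sum(WEIGHTS[name] * value for name, value in features.items())
    return 1.0 / (1.0 + math.exp(-z))

# In production, this function sits behind an API handler (e.g. a FastAPI
# route), wrapped with input validation, model versioning, and latency
# monitoring -- that surrounding system is what the curriculum builds.
score = churn_probability({"tenure_months": 3, "support_tickets": 4, "monthly_spend": 40.0})
print(round(score, 3))
```

The model itself is the easy part; the curriculum focuses on everything around it.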
Retrieval pipelines, agents, and language model-powered products. Skips classical machine learning and the math-heavy modules. Goes straight to shipping with the Anthropic API.
Self-host, fine-tune, and train models at scale on GPU clusters. Pre-train a 1 billion parameter model, fine-tune with low-rank adaptation, serve at production throughput, run multi-node training jobs.
Machine learning engineering interviews: algorithms and data structures, machine learning coding, system design, and behavioral. Focused on exactly the fundamentals that show up at top companies.
My path into AI started in theoretical chemistry. At the University of Washington I worked on quantum mechanics simulations, running Hamiltonian systems on computing clusters. That was my first experience building and debugging large computational systems.
When I moved into industry, I noticed the same problem everywhere. Teams could build models. Getting those models to run reliably in production was a completely different challenge. Pipelines failed. Inference systems broke. Models behaved unpredictably at scale.
That is the skill this curriculum teaches. Not just how to train models. How to ship systems that work in the real world.