Building Deep Learning Expertise Through Real Experience

We started because we saw too many programmers hit a wall when neural networks got complicated. Not from lack of talent—from lack of structured guidance that actually makes sense.

Started Small, Learned What Actually Works

Back in early 2020, three of us were helping colleagues debug their first neural networks over coffee. Same questions kept coming up. Why won't this converge? What's happening in these layers? Is this accuracy actually good?

We noticed something. People didn't need another tutorial on backpropagation math. They needed someone to sit with them while their model trained and explain what those loss curves meant. So we did that—informally at first.

By mid-2021, we'd worked with about forty developers. Some sessions went great. Others flopped because we tried covering too much. We learned to focus on one concept at a time, with actual code they could modify and break safely.

Learning environment where students work through neural network implementations

What Shapes How We Teach

These aren't corporate values we printed on a poster. They're lessons we learned from watching hundreds of people struggle with and eventually master deep learning concepts.

Debugging Over Theory

Most breakthroughs happen when students fix broken code, not when they watch perfect demos. We intentionally include bugs in exercises because finding them teaches more than avoiding them ever could.
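To give a sense of what that looks like in practice, here's a minimal sketch of a planted-bug exercise, assuming a PyTorch-style training loop. The framework, model, data, and the specific bug are stand-ins for illustration, not lifted from our actual materials:

```python
# Hypothetical planted-bug exercise: the loop below is correct as written;
# the exercise version removes one line and asks students to find out why training misbehaves.
import torch
import torch.nn as nn

model = nn.Sequential(nn.Linear(20, 64), nn.ReLU(), nn.Linear(64, 2))
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)
loss_fn = nn.CrossEntropyLoss()

X = torch.randn(256, 20)            # placeholder data, 256 samples with 20 features
y = torch.randint(0, 2, (256,))     # placeholder binary labels

for step in range(100):
    optimizer.zero_grad()           # the planted bug: this line is deleted in the exercise,
                                    # so gradients accumulate across steps
    logits = model(X)
    loss = loss_fn(logits, y)
    loss.backward()
    optimizer.step()
    if step % 20 == 0:
        print(f"step {step:3d}  loss {loss.item():.4f}")
```

Run as written, the loss drops steadily. Delete the zero_grad() call and gradients pile up from one step to the next, and training misbehaves in a way that's far more instructive to diagnose than to read about.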

Iteration Is Normal

Your first model architecture will probably underperform. That's expected. We structure programs so you build, evaluate, rebuild—just like real projects. Nobody gets it right on attempt one.

Context Before Complexity

We don't introduce transformers until you've built and understood RNNs. Each concept builds on what you already know. Skipping steps to reach trending topics faster almost always backfires.

How We Got Here

Not a smooth path. We made plenty of mistakes figuring out what actually helps people learn complex technical material.

2020

Informal Study Groups

Started meeting developers at a coworking space in Hsinchu. No curriculum, just helping debug whatever people were stuck on. Realized there were patterns to the confusion—similar misunderstandings kept appearing.

2021

First Structured Cohort

Tested a twelve-week program with eight participants. Half dropped out because we moved too fast. Painful feedback, but necessary. We slowed down, added more hands-on time, cut the lecture portions in half.

2022

Curriculum Refinement

Ran four cohorts with the revised approach. Completion rate improved to 78%. Students told us they valued the small group debugging sessions more than anything else, so we made those weekly instead of biweekly.

2023

Taiwan Expansion

Opened proper teaching space in Hsinchu. Added weekend sessions for working professionals. Started tracking which exercises caused the most confusion and rewrote about thirty percent of our materials based on that data.

2024-2025

Continuous Iteration

Currently working with our fifteenth cohort. Still finding ways to improve. Recently added optional advanced modules for people who finish early. Planning next cohort for September 2025.

People Behind the Programs

Small team. Everyone teaches. No sales staff, no marketing department—just developers who figured out how to explain difficult concepts clearly.



Ilmari Korhonen

Lead Instructor

Spent six years building computer vision systems before this. Got tired of explaining the same debugging techniques repeatedly at work, so started teaching them systematically. Prefers working with smaller groups where he can see exactly where people get stuck.


Saoirse Brennan

Curriculum Developer

Background in natural language processing and a knack for noticing when examples don't actually demonstrate what they're supposed to show. Rewrites exercises constantly based on where students struggle. Runs the weekend sessions and handles most of the code review.

Students collaborating on neural network architecture design
Instructor reviewing code implementation with student

How We Actually Run Programs

Most of our teaching happens in small groups—usually six to eight people. Everyone works on the same core project but implements it differently. This creates natural opportunities for comparison. Why did your loss stabilize faster than mine? What happens if we swap activation functions?
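That activation question usually turns into an experiment on the spot. Here's a rough sketch of how such a comparison might look, again assuming a PyTorch-style setup; the architecture, data, and hyperparameters are placeholders rather than course material:

```python
# Illustrative comparison: same architecture, same data, same init -- only the activation changes.
import torch
import torch.nn as nn

def train(activation: nn.Module, steps: int = 200) -> float:
    torch.manual_seed(0)            # fixed seed so both runs share init and data
    model = nn.Sequential(nn.Linear(20, 64), activation, nn.Linear(64, 2))
    X, y = torch.randn(256, 20), torch.randint(0, 2, (256,))
    opt = torch.optim.SGD(model.parameters(), lr=0.1)
    loss_fn = nn.CrossEntropyLoss()
    for _ in range(steps):
        opt.zero_grad()
        loss = loss_fn(model(X), y)
        loss.backward()
        opt.step()
    return loss.item()

print("ReLU   :", train(nn.ReLU()))
print("Sigmoid:", train(nn.Sigmoid()))
```

Because the seed, data, and architecture stay fixed, the only thing that changed is the nonlinearity, which makes the difference in how the two losses settle much easier to reason about together.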

We don't lecture much. Typical session: fifteen minutes of context, then everyone codes for ninety minutes while instructors circulate. Questions come up organically. Sometimes we pause to address something affecting multiple people. Otherwise, you're writing and debugging actual neural networks.

Between sessions, you have access to our code review queue. Push your implementation, get feedback within 24 hours. Usually detailed—not just "this works" but "here's why this approach might cause problems with larger datasets."

We track completion rates, post-program employment changes, and how long concepts stick. Current cohort shows 82% completing all projects, with participants reporting meaningful skill improvement in follow-ups six months later.