When Teaching Becomes Transformation - Reflections on Five Lectures That Changed How Students See AI
posted: 07-Jan-2026 & updated: 08-Jan-2026
The English translation of the original feedback can be found at Feedback and Request for Advanced Learning
Pleasant Surprise - The Unexpected Gift of Teaching
Last week, something remarkable happened. Over four days spanning December 29, 2025, to January 1, 2026, I had the honor of delivering a special AI lecture series (5 lectures) organized by San José State University for students who had traveled all the way from Kyungpook National University in South Korea—crossing the Pacific Ocean not for tourism, but for knowledge. This alone humbled me. That anyone would journey so far to hear what I have to say about AI felt like both a privilege and a responsibility.
But what transpired over those four days exceeded anything I could have anticipated.
What I Thought I was Teaching
I designed these lectures to be different. Not because I set out to be contrarian, but because I’ve spent decades at the intersection of theory and practice—from Convex Optimization at Stanford to semiconductor manufacturing at Samsung, from AI systems at Amazon to biotech innovation at Erudio Bio—and I’ve learned that AI cannot be understood in fragments. You cannot separate the algorithm from the architecture, the technology from the market, the system from society, the technical from the philosophical.
So I structured the series to cover:
- The Historical Foundation & Modern Landscape - From the Symbolism vs. Connectionism debates to the Agentic Revolution, tracing how we arrived at today’s LLMs and multimodal intelligence
- LLM & GenAI Deep Dive - Inside Transformers, the technology-market nexus, and what’s really happening in production systems
- AI + Biotech & Physical AI - AlphaFold 3, drug discovery, humanoid robots, and the convergence revolution unfolding in real-time
- Philosophy, Ethics & Consciousness - The hard questions about what AI can and cannot do, about consciousness, knowledge, belief, and the limits of machine reasoning
- Industrial AI from First-Hand Experience - Real production stories from semiconductor fabs and e-commerce platforms, including the failures and bottlenecks nobody talks about in papers
But here’s what I didn’t fully anticipate – I wasn’t just teaching content. I was facilitating a paradigm shift.
The Feedback that Stopped Me in My Tracks
A few days after the lectures concluded, I received a document from the students. Reading it, I had to pause. Multiple times.
They wrote things like:
“My entire perspective on AI has completely changed”
“Deep insights that are difficult to find on the web”
“AI technology is not something to ‘learn about’ but ‘an entity to think with’”
What struck me wasn’t just the praise—though I won’t pretend it didn’t feel good—but rather what they found transformative. It wasn’t the technical content alone. It was something else entirely.
The Hallucination that became an Epiphany
One theme kept appearing in their feedback: my treatment of hallucination in LLMs.
Here’s what happened in that session. I explained that hallucination isn’t a bug to be eliminated—it’s the structural engine that enables creativity and performance in language models. A completely sanitized, hallucination-free model might paradoxically become less useful, less interesting, less capable of the kind of generative leaps that make LLMs valuable.
I drew parallels to human cognition: our confirmation biases, our pattern-matching that sometimes sees patterns that aren’t there, our ability to make creative connections precisely because we don’t require 100% logical certainty before synthesizing ideas.
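To make the intuition concrete for readers, here is a minimal sketch (not taken from the lectures themselves) using a made-up five-token vocabulary and generic temperature-scaled softmax sampling. It illustrates why the same stochastic decoding that lets a model fabricate a plausible-sounding answer is also what lets it make surprising, creative leaps:

```python
import numpy as np

# A toy next-token distribution for an imagined prompt such as
# "The capital of Atlantis is ..." -- vocabulary and logits are made up
# purely for illustration, not drawn from any real model.
vocab = ["unknown", "Poseidonis", "Paris", "underwater", "mythical"]
logits = np.array([2.0, 1.2, 0.3, 1.0, 1.5])

def sample_next_token(logits, temperature, rng):
    """Sample one token index from temperature-scaled softmax probabilities."""
    if temperature == 0.0:
        # Greedy decoding: always return the single most likely token.
        return int(np.argmax(logits))
    scaled = logits / temperature
    probs = np.exp(scaled - scaled.max())   # numerically stable softmax
    probs = probs / probs.sum()
    return int(rng.choice(len(logits), p=probs))

rng = np.random.default_rng(seed=0)
for temp in (0.0, 0.7, 1.5):
    picks = [vocab[sample_next_token(logits, temp, rng)] for _ in range(8)]
    print(f"temperature={temp}: {picks}")

# At temperature 0 the output is deterministic and "safe"; as temperature
# rises, lower-probability tokens are sampled more often. The same
# stochasticity that produces unexpected, creative continuations is the
# mechanism that can also produce confident-sounding fabrications.
```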
The students described this as a complete reframing. One wrote: “Understanding hallucination not simply as an error or bug, but as the structural engine that enables LLM creativity and performance.”
This moment crystallized something I’ve long believed: the most important insights in AI aren’t purely technical—they’re conceptual bridges between the mathematical, the computational, the cognitive, and the philosophical.
What Made It Different - The Discourse Itself
But there was something else in the feedback that surprised me even more. Multiple students emphasized that this wasn’t a typical lecture format. They described it as:
“Interactive, discussion-based lecture centered on Q&A rather than theory-focused instruction”
“Free Q&A and real case-based insights”
“Discussion-based class format”
Here’s my confession: I didn’t plan it this way because I thought it would be pedagogically superior. I did it because it’s the only way I know how to think about these topics.
When I work through a problem—whether it’s optimizing a semiconductor process or designing a biomarker detection platform—I don’t proceed linearly through a textbook. I question, I probe, I follow threads, I backtrack, I connect disparate ideas, I challenge my own assumptions. So when I teach, I naturally invite students into that process.
Indeed, I even said, several times:
“Don’t just trust me. I might be wrong on this.”
“Challenge me if you have different ideas!”
What they experienced wasn’t a performance of interactivity. It was actual thinking happening in real-time, with all its messiness and uncertainty and sudden moments of clarity. And apparently, this was revelatory for them.
One student wrote that they had
“never experienced this kind of discourse in class before.”
This breaks my heart a little. Not because of any failing on the part of other educators, but because it suggests how rare genuine intellectual discourse has become in our educational systems. We’ve optimized for content delivery, for measurable outcomes, for scalable pedagogy. But we’ve lost something essential – the modeling of how a mind actually grapples with complex, multifaceted problems.
The Holistic Landscape They’d Never Seen
Another recurring theme – students valued what they called my “holistic” approach to AI.
They appreciated that I didn’t just explain Transformers or backpropagation or reinforcement learning in isolation. Instead, I showed them:
- How the Symbolism vs. Connectionism debate from the 1960s echoes in today’s discussions of System 1 vs. System 2 thinking
- Why architectural changes + data + computing resources create nonlinear performance jumps at critical thresholds
- How Silicon Valley’s culture of trust, physical clustering, and rapid iteration enables the kind of innovation that can’t be replicated through policy documents alone
- Why AlphaFold 3 represents not just a protein structure prediction tool but a fundamental shift in how we approach the protein-drug interaction problem space
- How RAG, AI Agents, and multimodal systems actually get implemented in production environments, with all the unglamorous data pipeline and operational challenges
One student captured it perfectly!
Understanding AI not as an algorithm or tool, but as a “social system”!
This is it. This is what I’ve been trying to articulate for years. AI is not merely a collection of algorithms—it’s a sociotechnical system embedded in markets, organizations, power structures, epistemological frameworks, and human meaning-making practices.
You cannot understand AI without understanding optimization theory. But you also cannot understand AI without understanding business models, semiconductor supply chains, regulatory frameworks, cognitive biases, philosophical questions about knowledge and belief, and the cultural contexts in which these systems are developed and deployed.
I’ve spent my entire career at these intersections—mathematics and engineering, theory and industry, algorithm and application, technology and humanity. And apparently, this integrated perspective is rarer than I thought.
I’m humbled by this. And also motivated. Because it suggests there’s a hunger for this kind of holistic understanding that isn’t being satisfied by our current educational structures—not because educators don’t want to provide it, but because few people have traversed these particular intersections in their own careers.
What They Want Next - Adapting to Their Journey
The most beautiful part of the feedback document was the section on “Requests for Advanced Learning.” The students articulated four main themes for the remaining three lectures:
I. Real Production-Level Silicon Valley AI
They want to move beyond concepts to actual operational workflows and technology stacks. How do companies really implement RAG? What does an AI Agent architecture look like in production? What are the bottlenecks, the data pipeline challenges, the operational issues that don’t make it into research papers?
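To give a taste of what that skeleton looks like, here is a deliberately minimal sketch, not any specific company’s stack. The names `embed`, `vector_store`, and `llm` are placeholders for whatever embedding model, vector index, and LLM client a team actually runs; real production systems wrap this core in chunking, caching, re-ranking, and evaluation, plus exactly the unglamorous data-pipeline work the students are asking about.

```python
from dataclasses import dataclass

@dataclass
class Document:
    text: str
    source: str

def rag_answer(question, embed, vector_store, llm, top_k=5):
    """Retrieve-then-generate: the bare skeleton of a RAG pipeline.

    `embed`, `vector_store`, and `llm` are stand-ins for real components
    (an embedding model, a vector index, an LLM endpoint), not a specific API.
    """
    # 1. Embed the user question into the same vector space as the corpus.
    query_vector = embed(question)

    # 2. Retrieve the top-k most similar document chunks.
    hits = vector_store.search(query_vector, top_k=top_k)

    # 3. Augment the prompt with retrieved context, keeping source labels
    #    so the answer can cite where each claim came from.
    context = "\n\n".join(f"[{doc.source}] {doc.text}" for doc in hits)
    prompt = (
        "Answer the question using only the context below. "
        "Cite sources in brackets and say so if the context is insufficient.\n\n"
        f"Context:\n{context}\n\nQuestion: {question}"
    )

    # 4. Generate the grounded answer.
    return llm.generate(prompt)
```

Most of the real engineering effort, and most of the failures, live outside this function: keeping the index fresh, chunking documents sensibly, and measuring whether retrieval actually improved the answer.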
II. Business Impact and Market Transformation
How does AI actually make money? How does it reshape revenue structures, business models, competitive advantages? What’s the process by which technological possibility becomes monetizable value?
III. Career, Entrepreneurship, and Real Pathways
They want my personal story—what triggered the decision to start companies? What were the hardest moments? How does a Korean undergraduate actually break into Silicon Valley? What competencies matter in today’s market?
IV. Enduring Human Capabilities
What can we delegate to AI and what must we verify? What are the core competencies—thinking skills, questioning ability, verification capacity—that remain essential?
Reading these requests, I realized - these students aren’t just learning about AI. They’re trying to figure out how to build meaningful careers and lives in an AI-transformed world.
And here’s what moved me most – they trust me enough to ask these questions. Questions about purpose, about career, about what matters, about how to navigate uncertainty.
So yes, I’m revising my remaining three lectures. Not drastically—the core structure was always designed to address these themes—but with much more explicit attention to:
- Concrete production architecture patterns from my Samsung, Amazon, and Erudio Bio experiences
- The business strategy layer that translates technical capabilities into market value
- My personal journey, including the failures and uncertainties that don’t make it into my CV
- Frameworks for thinking about human-AI collaboration that go beyond the simplistic and shallow “AI will replace X jobs” narrative
What This Experience Teaches Me about Teaching
This experience has crystallized something I’ve long intuited but never quite articulated!
The most transformative teaching doesn’t happen when we transfer knowledge. It happens when we model a way of thinking.
These students didn’t just learn facts about Transformers or hallucination or AlphaFold. They experienced a different way of approaching these topics—one that:
- Connects rather than separates - seeing AI not as isolated algorithms but as part of interconnected technological, economic, social, and philosophical systems
- Questions rather than accepts - treating even foundational concepts like “what is knowledge?” or “what is intelligence?” as open questions worthy of examination
- Integrates rather than fragments - bringing mathematical rigor together with business pragmatism, technical depth with humanistic concern
- Engages rather than receives - thinking with rather than being taught at
This is how I’ve always worked. It’s how I went from Convex Optimization to semiconductor manufacturing to e-commerce AI to biotech innovation—not by mastering separate domains but by developing a way of thinking that transcends domain boundaries.
And apparently, this way of thinking is itself valuable. Worth crossing an ocean for.
The Privilege and Responsibility of Teaching
I didn’t become an educator by design. I became a mathematician, an engineer, a researcher, a philosopher, a thinker (or rather a seeker), an entrepreneur, a technologist, and a connector. But increasingly, I find myself in teaching situations—not just formal lectures like this SJSU series, but the 50+ special AI lectures, seminars, and consultations I delivered in 2025 across universities, conferences, corporations, government organizations, and international forums.
Why do I do this, given how busy I am building Erudio Bio and leading K-PAI and advising multiple organizations?
Because I’ve come to believe that the bottleneck in AI progress isn’t algorithms or compute—it’s people who can think across boundaries.
We need more people who can:
- Understand both the mathematics of optimization and the realities of production systems
- Navigate both technical feasibility and business viability
- Consider both what we can build and what we should build
- Bridge both Silicon Valley and other innovation ecosystems around the world
- Connect both technical progress and human flourishing
And the only way to develop such people is to model this kind of thinking. To show, through example, what it looks like to hold multiple perspectives simultaneously. To demonstrate that you don’t have to choose between rigor and relevance, between depth and breadth, between theory and practice.
What Comes Next
Next week, I’ll deliver the final three lectures of this series:
- Lecture VI: Prerequisites, Learning Strategies, and LLM as Learning Partner
- Lecture VII: ML Fundamentals and Live Coding
- Lecture VIII: Reinforcement Learning, Recent Progress, and App Development
But now I understand these lectures differently. They’re not just about content. They’re about empowering these students to become the kind of thinkers the AI era needs—people who can navigate complexity, question assumptions, bridge domains, and ultimately contribute to building AI systems that serve humanity well.
And beyond these students, I’m thinking about the broader ecosystem. How can we create more opportunities for this kind of transformative education? How can we scale not the content (that’s easy—everything is on YouTube) but the discourse? How can we foster more communities where genuine intellectual exchange happens?
This is part of why I founded K-PAI—not just to share knowledge about AI, but to create a space where technologists, entrepreneurs, investors, scientists, and policymakers can think together about the challenges we face.
A Final Reflection
When those students traveled across the Pacific to attend these lectures, I wonder if they knew they would write feedback that would stop me in my tracks and make me rethink how I approach teaching.
I wonder if they knew that their hunger for holistic understanding, for genuine discourse, for integration of technical and humanistic perspectives would validate everything I believe about education.
I wonder if they knew that their questions about career and purpose and human capabilities in the AI era would remind me why this work matters.
Probably not. They just came to learn about AI.
But even as I taught them, they taught me something essential:
There is a profound hunger for education that doesn’t just transfer information but transforms understanding. For learning that doesn’t just add knowledge but shifts paradigms. For discourse that doesn’t just deliver answers but models how to ask better questions.
And as I prepare for next week’s lectures, I’m reminded once again that the greatest gift of teaching isn’t what we give to students.
It’s what they give back to us!
The San Jose State University Special AI Lecture Series continues January 14-16, 2026, with three additional sessions focusing on learning strategies, practical implementation, and recent AI advances. Student feedback and requests continue to shape the curriculum in real-time.
If you’re interested in the complete lecture series materials, abstracts are available on my seminars page.