19 minute read

posted: 14-Feb-2026 & updated: 17-Feb-2026


Deep Dive - The 1.7x Speed Industrial AI Marathon (41:15)
Deep Dive - The Connector Mindset for Industrial AI (16:04)
Deep Dive - The Connector Mindset for Industrial AI (14:35)
Debate - Industrial AI Requires a Connector’s Mindset (13:28)

The LinkedIn messages keep coming—from students and audience members of both lectures: professionals seeking to stay connected, PhD students wanting to discuss how Artificial Intelligence (AI) and related machine learning (ML) techniques could and should be applied to their research domains. Each one is a reminder that when teaching works, it reverberates far beyond the lecture hall.

I’m writing this on the plane back to California, still buzzing from what might have been one of the most intense—and most rewarding—teaching days of my career. Yesterday (Friday, 13-Feb-2026) I delivered two special AI lectures in Texas: one in the morning at Texas A&M University in College Station, and another in the evening at The University of Texas at Austin, at the invitation of the Korean-American Scientists and Engineers Association (KSEA) Austin Chapter. Two different audiences, two approaches to the same vast landscape of AI, and—if the feedback I’ve been receiving is any indication—two experiences that genuinely shifted how people think about AI and their place in its unfolding story.

The exhaustion is real. The exhilaration is realer.

The Marathon that Became a Sprint

Let me be honest about the logistics first, because they set the stage for everything else. College Station sits about 90 miles northwest of Houston, roughly 100 miles northeast of Austin—close enough that doing both events in one day seemed plausible on paper, ambitious in execution, and in retrospect, slightly insane. But when opportunities align, you don’t optimize for comfort; you optimize for impact.

The morning lecture at Texas A&M was titled “AI Technology, Trends, and Market - Industrial AI.” The invitation came from Professor Su-in Yi, who had arranged for me to speak to engineering students and faculty about how AI actually works in production environments—not the sanitized version you read about in press releases, but the messy, complex, fascinating reality of deploying AI systems at scale across semiconductor manufacturing, e-commerce, and biotech.

More than anything, though, I wanted to give students something rarely offered in traditional AI courses—a panoramic, historically grounded view of how we arrived here, and why the current moment is as consequential as it feels.

I opened with the Mechanical Turk, an 18th-century automaton that fascinated European courts by appearing to play chess—a mechanical illusion, yet one that reveals how deeply and how long humanity has dreamed of creating entities that think and act like us. The term “Artificial Intelligence” itself wasn’t coined until John McCarthy introduced it at the Dartmouth Conference in 1956, but the underlying impulse is centuries older. Understanding this history matters—it grounds the current excitement in something more enduring than hype.

From there, we traced the arc of how AI finally delivered on its promise. The pivotal moment arrived in 2012, when AlexNet—developed by Geoffrey Hinton’s group at the University of Toronto—won the ImageNet Large Scale Visual Recognition Challenge (ILSVRC) by a stunning margin, demonstrating that deep learning could achieve superhuman performance on complex visual tasks. It was a watershed - suddenly, decades of theoretical work had practical teeth. The pace of progress accelerated rapidly from that point, and it has not slowed since.

Then came 2017—another inflection point that would permanently reshape the landscape. Eight researchers, seven from Google and one from the University of Toronto, submitted a paper to NeurIPS with a deceptively simple title - “Attention Is All You Need.” The Transformer architecture introduced in that paper now forms the backbone of the majority of large language models (LLMs) in existence. It is difficult to overstate its significance—it is, in the history of AI, something like what the transistor was in the history of computing.

But I was equally careful to emphasize that no single breakthrough explains the current AI revolution. The unprecedented pace of recent progress reflects a rare convergence of forces - raw computational power enabling parallelism at previously unimaginable scales; the sheer volume of digital data accumulated over decades of internet activity; algorithmic innovations—particularly stochastic gradient descent with backpropagation—that make training deep networks tractable; an extraordinary culture of openness in the AI research community, where ideas and code flow freely through platforms like arXiv and GitHub; and—one cannot ignore this—the enormous capital concentration in Silicon Valley, which has funded talent, compute, and experimentation at a scale no academic institution could match.

Perhaps the lecture’s most distinctive moment came when I laid out my own framework for understanding large language models—one that departs from conventional wisdom in an important way. Most people think of LLMs as language tools. I argued they are something more fundamental - the most effective knowledge-transfer and representation-learning mechanisms humanity has yet devised. This reframing has profound implications. It explains why LLMs serve as the cognitive backbone of agentic AI systems—not merely as text generators, but as universal interfaces between human knowledge and machine action. It explains why multimodal AI feels so natural as an extension of LLMs, and why the era of AI agents is not a departure from LLM-centric AI but its natural culmination. Several students and faculty members told me afterward this perspective genuinely shifted how they think about where the field is heading.

The evening lecture in Austin, “Beyond ChatGPT - The Complete AI Landscape of Technology, Hardware, Markets, and Bridging Research to Startup Reality,” conveyed the same core messages I had delivered at Texas A&M but addressed a different hunger—that of Korean-American professionals, researchers, and entrepreneurs trying to navigate career decisions in an AI-saturated world. How do you identify genuine opportunities versus hype? Why are biotech and physical AI the promising new frontiers of AI? What does it actually take to go from research to startup, from a large company to founding your own venture? And then there were the grand philosophical questions surrounding Artificial General Intelligence (AGI)!

The same speaker. The same core material. But different framing, pacing, and emphasis.

According to my friend, who introduced me to the KSEA Austin Chapter, I delivered at approximately 1.7 times normal human speaking speed, which meant that what was advertised as a 90-minute talk became “practically more than 2 hours” of content compressed into the allotted time.

Morning - When Students Won’t Let You Leave

The Texas A&M lecture started at 11:30 AM. I had prepared what I thought was a comprehensive overview of industrial AI—tracing the arc from early expert systems through modern deep learning, exploring real production systems at Samsung (virtual metrology, yield optimization), Amazon (recommendation engines generating $200M+ in revenue), and Erudio Bio (AI-powered cancer diagnostics). The goal was to cut through both the hype and the skepticism, offering a rigorous but accessible framework for understanding what AI can and cannot do in practice.

What I didn’t anticipate was the level of engagement.

Questions started after I finished delivering my main points—not polite, perfunctory questions, but genuine intellectual sparring. Students asked profound and fundamental questions - Should we trust AI more than human experts in clinical settings—say, in radiology, where AI systems now achieve diagnostic accuracy that rivals or exceeds board-certified radiologists? What are the ethical guardrails? Who bears responsibility when an AI system errs?

Professor Yi had thoughtfully reserved a separate meeting room for additional Q&A, expecting maybe a few students would want to continue the conversation.

Five students showed up.

Not “showed up and asked one polite question before leaving.” Showed up and stayed. For over an hour. The conversation ranged across terrain I genuinely love—exactly the cross-domain questions that reveal where the most interesting problems live.

How can AI techniques be applied to biology and biotech? This question animated some of the richest discussion. Students were fascinated by how the same deep learning architectures that power language models now underpin protein structure prediction (AlphaFold), drug-target interaction modeling, and genomic sequence analysis. The unifying insight—that biological sequences are, in a profound sense, languages with their own grammar and semantics—unlocks a powerful transfer of techniques across domains that once seemed completely separate.

The insufficient data problem generated equally passionate debate. In industry, you often have exactly the data you have—you cannot simply generate more labeled examples of rare cancer subtypes, or anomalous semiconductor defects, or novel antibody responses. We discussed strategies that mature engineering teams actually use - transfer learning from large pre-trained models, data augmentation tailored to specific domains, active learning to prioritize what gets labeled next, synthetic data generation, and the underappreciated importance of domain expertise in designing features when data is scarce. The key insight I kept returning to - data efficiency is not a research problem to be solved once and for all, but an engineering discipline to be practiced continuously.
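To make the active-learning piece of that list concrete, here is a minimal, self-contained sketch of uncertainty sampling—one common way to prioritize what gets labeled next. The function names and the toy wafer-inspection pool are purely illustrative, not from any production system discussed above: the idea is simply to rank unlabeled samples by the entropy of the current model’s predictions and send the most uncertain ones to a human labeler first.

```python
import math

def entropy(probs):
    """Shannon entropy (in nats) of a predicted class distribution."""
    return -sum(p * math.log(p) for p in probs if p > 0)

def select_for_labeling(pool, k):
    """Uncertainty sampling: pick the k pool items whose predicted
    class distribution has the highest entropy, i.e. where the current
    model is least sure and a human label would help the most."""
    ranked = sorted(pool.items(), key=lambda kv: entropy(kv[1]), reverse=True)
    return [sample_id for sample_id, _ in ranked[:k]]

# Toy unlabeled pool: sample_id -> model's predicted class probabilities.
pool = {
    "wafer_001": [0.98, 0.02],  # confident "normal" - low labeling value
    "wafer_002": [0.51, 0.49],  # nearly a coin flip - label this first
    "wafer_003": [0.90, 0.10],
    "wafer_004": [0.45, 0.55],  # also quite uncertain
}
print(select_for_labeling(pool, 2))  # -> ['wafer_002', 'wafer_004']
```

The same loop generalizes: retrain on the newly labeled samples, re-score the pool, and repeat—spending scarce labeling budget where it moves the model most.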

How do you actually productionize AI? This is where the gap between academic research and industrial deployment became the central topic. Students were surprised—and I think usefully unsettled—to learn how rarely model architecture is the decisive factor in production systems. What actually determines whether an AI system delivers value at scale turns out to be - data pipeline reliability (garbage in, garbage out, at industrial speed); monitoring and drift detection (models degrade silently as the world changes); the organizational dynamics of getting domain experts and ML engineers to collaborate effectively; and the unglamorous work of A/B testing, rollback infrastructure, and failure mode analysis. Several students who had been planning to focus purely on model research visibly reconsidered whether they needed broader engineering and systems knowledge.
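As one concrete illustration of drift detection, here is a minimal sketch of the Population Stability Index (PSI), a widely used way to quantify how far a feature’s live distribution has shifted from its training-time histogram. The baseline numbers below are made up for illustration; the thresholds are the common rule of thumb, not a universal standard.

```python
import math

def psi(expected, actual, eps=1e-6):
    """Population Stability Index between two binned distributions
    (lists of fractions that each sum to 1). Rule of thumb:
    < 0.1 stable, 0.1-0.25 moderate shift, > 0.25 drift worth an alert."""
    total = 0.0
    for e, a in zip(expected, actual):
        e, a = max(e, eps), max(a, eps)  # guard against empty bins
        total += (a - e) * math.log(a / e)
    return total

baseline = [0.25, 0.50, 0.25]  # feature histogram captured at training time
today    = [0.10, 0.30, 0.60]  # same feature, binned over today's traffic

score = psi(baseline, today)
print(f"PSI = {score:.3f}")    # well above 0.25 here, so this would alert
```

A production monitor would compute this per feature on a rolling window and page someone—or trigger retraining—when the score crosses the alert threshold, which is exactly the kind of “models degrade silently” safeguard the paragraph above describes.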

The tension between academic research and industry applications deserves its own conversation. Academic research optimizes for novelty—pushing the frontier on benchmark datasets under controlled conditions. Industry optimizes for reliability—making something that works for real users, on messy real-world data, under operational constraints, at scale. Neither is superior; they’re different disciplines with different success criteria. But engineers who understand both—who can translate between research insights and production realities—are extraordinarily valuable and, in my experience, quite rare.

Professor Yi eventually had to physically extract me from that room because my next commitment was waiting. The students wanted to keep going. I wanted to keep going. But schedules are schedules, and Texas wasn’t done with me yet—I still had a 100-mile drive to Austin and an evening lecture to deliver.

As I walked to Prof. Yi’s car, I checked LinkedIn. One attendee from my lecture had already sent a connection request, praising the talk as “incredibly insightful.” Shortly after, another message arrived—from a student who’d attended the morning lecture, expressing admiration for both the content and the evident dedication behind it.

It’s these moments—when you realize you’ve genuinely affected how someone thinks about their field, their career, their possibilities—that make the insane travel schedules worthwhile.

Interlude - The Drive from College Station to Austin

The thing about giving two intense lectures in one day is that the middle part—the transition—becomes this strange liminal space. And I was fortunate not to make it alone.

Professor Yi, with characteristic generosity, offered to drive me from College Station to Austin himself—a 100-mile journey through Texas hill country that I will remember for a long time. There is something wonderfully disarming about a car ride - the conversation that unfolds when you’re both looking at the road rather than at each other, when there’s no agenda and no audience. We talked about AI’s trajectory in academia and industry, about the specific challenges facing engineering education in a world where the relevant tools are changing faster than curricula can update, about what kinds of students tend to thrive in research versus production environments, about the peculiar experience of watching a field you’ve spent your career in suddenly become the center of global attention.

It was, in its own way, a continuation of the morning’s seminar—but quieter, more reflective, two practitioners thinking aloud together rather than one talking and many listening.

I spent part of that drive also processing what I’ve come to call “the connector’s mindset.” It’s not about knowing everything. It’s about recognizing deep structural patterns that transcend surface differences. How semiconductor manufacturing insights inform biotech automation. How e-commerce personalization techniques transfer to medical diagnostics. Why Silicon Valley’s culture of rapid experimentation accelerates innovation in ways traditional R&D structures cannot match.

The students that morning got one facet of that synthesis. The evening audience would get another.

Evening - When the Audience Can’t Get Enough

The KSEA lecture started at 6:30 PM. I was tired—not going to lie—but there’s this peculiar energy that kicks in when you’re standing in front of people who genuinely want what you’re offering. Not passive reception. Active hunger.

The title promised the complete AI landscape, and I delivered. But the real value—at least according to the feedback I’m getting—wasn’t the technical content. It was the contextual synthesis. The why and how of AI’s current moment. The geopolitical semiconductor dynamics. The market forces driving specific architectural choices. The organizational and cultural factors that determine whether AI research becomes impactful products.

One person told me afterward - “Your AI talk wasn’t just another technical talk. With a truly holistic understanding of not only the technology itself, but also the broader contextual background surrounding AI’s development and progress, you let the audience see the whole landscape—how and why AI arrived here, what societal and economic conditions, and even what serendipitous events initiated this whole revolution—and told those stories in a way that made the audience want to know and learn more!”

He continued - “I tried my best NOT to miss any single word you said. You talked probably 1.7 times faster than normal people, so it was practically more than 2 hour talk, but we wanted to learn more! We should do another talk. KSEA will invite you at least once more!”

The questions were relentless. Passionate. Serious. People wanted to understand not just the technology but the implications—for their careers, for their companies, for their decisions about specialization versus breadth, research versus entrepreneurship, stability versus risk.

And afterward, as I was heading out, a LinkedIn message arrived from one attendee expressing that the lecture had given him much inspiration and that he wanted to keep communicating and learning. Simple words, but they carried real weight.

But perhaps the most touching message came from a PhD student finishing his degree in antibody discovery. He wrote to say he had found the talk insightful and inspiring—and then did something that genuinely moved me. He went home, watched our YouTube podcast about Erudio Bio’s AFM-based technology, reflected thoughtfully on the challenges of developing it, and reached out specifically to discuss how machine learning might apply to his own research on predicting antibody responses from large NGS and proteomics datasets. As someone approaching graduation and beginning to think about career directions, he wanted to stay connected and learn more.

That message—its specificity, its intellectual curiosity, its sense of someone actively trying to connect dots between what he heard and what he does—represents exactly what these lectures are designed to catalyze. Here was someone at a career transition point (finishing a PhD), working with large and complex datasets, seeking to understand how ML applies to his domain, and looking for not just technical knowledge but mentorship and genuine connection. He even went back and watched our content about Erudio Bio’s AFM-based technology, trying to understand the full picture of how AI-powered biotech actually works in practice.

This is what I mean when I say teaching creates transformation beyond the classroom. You deliver a lecture about the complete AI landscape. Someone hears it, connects it to their own research challenges, seeks out additional resources to deepen their understanding, and reaches out not just to say “thank you” but to begin a relationship that might shape their career trajectory.

What Teaching at Full Speed Reveals

Here’s what I’ve learned from experiences like this - when you speak at “1.7 times normal human speed” because you’re genuinely excited about the material and deeply respect your audience’s intelligence, something unexpected happens—you create a kind of intellectual covenant.

You’re saying - I’m not going to waste your time with platitudes. I’m not going to dumb things down. I’m going to assume you can keep up, and I’m going to give you everything I’ve got.

And when audiences sense that—when they realize you’re not performing but genuinely trying to transfer understanding as efficiently as possible—they lean in. They ask better questions. They stay after. They send you messages days later saying the talk changed how they think about their career.

This happened at my San Jose State University lectures in January. It happened many times in 2025. And yesterday it happened again, twice, in Texas.

It’s not about charisma or performance. It’s about something I explored in my recent blog post “When Teaching Becomes Transformation”—the recognition that education at its best is not information transfer but framework transformation.

You’re not teaching people facts about AI. You’re teaching them how to think about AI. How to evaluate claims. How to identify patterns. How to navigate uncertainty with informed judgment rather than either blind optimism or paralytic skepticism.

The Connector’s Mindset in Action

Both lectures illustrated what I’ve come to see as my distinctive contribution—not depth in any single domain (there are people who know more about semiconductors than I do, more about biotech than I do), but the ability to synthesize across domains in ways that reveal hidden patterns.

At Texas A&M, students asked about AI’s role in radiology and about ethical issues in AI. I could answer not just from Samsung experience but also by drawing analogies to how Amazon does demand forecasting—different domains, similar statistical challenges, transferable insights about what makes production systems reliable versus brittle.

At KSEA, professionals asked about transitioning from research to startups. I could speak from personal experience (Samsung → Amazon → Erudio Bio, Korea → US, engineering → entrepreneurship) while also situating those transitions within broader patterns about how Silicon Valley actually works—why certain transitions succeed, what skills transfer, what mindsets matter.

This is what I mean by “the connector’s mindset.” It’s the recognition that the most valuable insights emerge not from deep specialization but from cross-domain synthesis. The semiconductor engineer who understands machine learning. The ML researcher who grasps hardware constraints. The biotech founder who knows industrial-scale production systems.

In my entrepreneurial journey essay, I described myself as “Connector Rather Than Expert”—”I am neither full-time mathematician nor scientist nor engineer nor biologist. But I am all of it, and I connect all these fields.”

Yesterday was that philosophy in action. Twice.

The Gift of Teaching

Flying back to California now, I’m struck by something - I was supposed to be the teacher yesterday. I was the one delivering lectures. But I learned at least as much as I taught.

From the Texas A&M students, I learned what the next generation of engineers cares about—not just technical capability but meaningful impact, not just career success but intellectual integrity. The questions they asked in that reserved meeting room—about trusting AI in high-stakes clinical decisions, about the ethics of algorithmic judgment, about the gap between what a paper demonstrates and what a production system delivers—were the questions of people who want to understand the world deeply, not just pass an exam. The fact that two of them had already sent LinkedIn messages by the time I reached Prof. Yi’s car, reflecting on what they’d taken from the lecture, suggests that something genuinely transferred—not just information, but a way of engaging with these questions.

From the KSEA professionals, I learned what anxieties drive career decisions in an AI-saturated world—the tension between stability and risk, between specialization and breadth, between following conventional wisdom and trusting your own judgment. The organizer’s words stayed with me - that the audience “wanted to know and learn more,” that people tried “not to miss a single word,” that the lecture prompted them to see AI’s development not as an abstract technological progression but as a human story—shaped by societal forces, economic incentives, serendipitous breakthroughs, and individual decisions. And what confirmed this most powerfully was not the applause in the room but the messages that arrived afterward - from a PhD student in antibody discovery who went home and watched our research videos, connecting the lecture to his own work on NGS and proteomics data; from others who said it gave them inspiration and that they wanted to stay in touch and keep learning.

This is the evidence I find most persuasive that something real happened yesterday—not only the energy in the room during the talks, but also the emails and messages that arrive hours and days afterward, when the initial excitement has settled and what remains is genuine intellectual engagement.

And from both audiences, I learned something about the hunger for synthesis. Not just more information (we’re drowning in information!), but frameworks that make sense of information, that connect dots, that reveal patterns, and that empower action.

This connects to something I’ve been exploring in my recent philosophical writing—particularly in “Nor is Complete Information Sufficient!” where I argue that even complete information is insufficient for proper understanding. You need frameworks. You need synthesis. You need what I call “the synthesis capacity”—the ability to integrate disparate elements into coherent wholes.

Teaching is that integration process externalized. You’re not just transmitting facts. You’re demonstrating a way of thinking. And when it works—when students or professionals lean in, ask deeper questions, stay after, send you messages days later—you know you’ve transmitted something more valuable than information.

You’ve transmitted understanding.

Looking Forward

The KSEA organizer said they want to invite me back—”at least once more.” Professor Yi mentioned future collaboration opportunities. Students and professionals are sending LinkedIn connection requests, asking about research directions, seeking advice on career decisions.

This is how impact compounds. Not through one dramatic moment but through sustained engagement. Through building relationships. Through demonstrating that rigorous thinking and genuine care for student success aren’t mutually exclusive—they reinforce each other.

I have lectures scheduled at Seoul National University College of Engineering in March, and continued university seminars and special AI lectures across Korea and the US. Each one is an opportunity not just to teach but to learn, not just to inform but to connect, not just to share knowledge but to catalyze transformation.

Because that’s what the best teaching does. It doesn’t fill empty vessels with information. It transforms how people see themselves and their possibilities.

Yesterday, two lectures in one day across 100 miles of Texas. Exhausting? Absolutely. Worth it?

Ask the five students who wouldn’t let me leave that Texas A&M meeting room.

Ask the KSEA audience members who said they tried not to miss a single word.

Ask the antibody discovery PhD student who went home after the evening lecture, watched our research videos about AFM-based cancer diagnostics, and reached out to explore how machine learning might apply to his work on predicting antibody responses—exactly the kind of cross-domain synthesis that will define the next generation of biotech innovation.

Ask me in five years when some of yesterday’s attendees are founding companies, publishing breakthrough research, or making career transitions they couldn’t have imagined before understanding the complete AI landscape.

The answer, I think, will be clear.

Sunghee

Co-Founder & CTO @ Erudio Bio, Inc
Leader of Silicon Valley Privacy-Preserving AI Forum (K-PAI)
Philosopher, Mathematician, Thinker, and Universal Truth Seeker
Entertainer, Entrepreneur, Engineer, Scientist, Researcher, Creator, and Connector of Ideas, and, most of all, PEOPLE!
