AI 시대의 사회 설계 원리

Three-Book Series: Comprehensive Strategic Architecture

Series Title: Designing Society in the Age of AI
Korean Title: AI 시대의 사회 설계 원리

Authors:

  • Sunghee Yun (윤성희) - Lead Author, Technical & Personal Narrative
  • Jeff Lee (이보형) - Co-Author, Legal & Governance Framework
  • Hayden Song (송영욱) - Co-Author, Policy Analysis & Institutional Design

Document Version: 1.0
Date: March 8, 2026
Status: Strategic Planning Document for Collaboration


EXECUTIVE SUMMARY

The Core Problem

Current AI discourse operates within three fatally flawed frames:

  1. Human vs. Machine dichotomy - “Where does the human end and the machine begin?”
  2. Technology as destiny - “Innovation automatically equals progress”
  3. Regulation as follower - “Policy must catch up to technology”

The Core Insight

AI is not technology. AI is a power redistribution device and choice architecture redesign tool. Understanding this requires three integrated perspectives:

  1. Epistemological (Book I): How we think about AI determines what questions we ask
  2. Structural (Book II): How AI reshapes markets, organizations, and information flows
  3. Institutional (Book III): How we design governance for sustainable AI societies

The Three-Book Architecture

Book I - PARADIGM RESET: Establishes new cognitive framework for understanding AI
Book II - SYSTEM INTERACTION: Analyzes AI’s restructuring of markets and power
Book III - INSTITUTIONAL DESIGN: Proposes concrete governance frameworks

Target Impact

  • Primary: Influence government policy in Korea, US, and internationally
  • Secondary: Transform public discourse about AI from hype/fear to design thinking
  • Tertiary: Create intellectual foundation for next generation of AI governance scholars

PHILOSOPHICAL FOUNDATION

The Epistemological Trilogy

The three books rest on Sunghee Yun’s philosophical framework developed through:

  • “Wisdom of Strategic Ignorance” (September 2025)
  • “Impossibility of Full Knowledge” (December 2025)
  • Vitamin Cost Minimization & Duality (ongoing blog series)

Core Insights:

  1. Partial information is often more dangerous than complete ignorance → triggers false confidence
  2. Complete information is never sufficient → knowledge requires interpretation frameworks
  3. Optimization reveals hidden structure → duality shows what we can’t see directly

These three insights map directly to AI governance challenges:

  • AI systems operate on partial information (Strategic Ignorance)
  • No amount of data creates complete knowledge (Impossibility)
  • Governance must account for hidden structures (Duality)

BOOK I: PARADIGM RESET

Subtitle Options

  1. “Beyond Human vs. Machine: AI as Choice Architecture”
  2. “The Wrong Questions: Reframing AI for the Age of Design”
  3. “From Technology to Power: Understanding AI as Social Infrastructure”

Core Thesis

AI discourse consistently asks the wrong questions because it treats AI as technology to be controlled rather than power to be distributed. We must shift from “What can AI do?” to “What choices does AI enable, constrain, and redistribute?”

Target Audience

Primary: Policymakers, industry leaders, public decision-makers
Secondary: AI practitioners, researchers, intellectually curious general readers
Tertiary: Graduate students in law, policy, economics, computer science

Strategic Objectives

  1. Demolish existing frames that prevent clear thinking about AI
  2. Establish new vocabulary for discussing AI as choice architecture
  3. Embed governance thinking from the beginning (not as afterthought)
  4. Use personal narrative to make abstract concepts concrete
  5. Create intellectual authority that enables Books II & III

BOOK I: TABLE OF CONTENTS

PART I: THE JOURNEY TO THE RIGHT QUESTIONS

Personal narrative establishing authority and revealing the problem

Prologue: The Moment of Doubt

  • Scene: August 2019, Vancouver office, A/B test results showing $207M revenue
  • Pivot: Success that feels hollow—optimizing objectives you didn’t choose
  • Flashback: 14-year-old at piano, experiencing beauty beyond measurement
  • Theme: The questions that matter can’t be optimized away

Strategic Purpose:

  • Establish Sunghee’s credibility (Stanford PhD, Samsung, Amazon, $207M impact)
  • Show the PROBLEM viscerally: technical success ≠ meaningful achievement
  • Introduce the central tension: mathematical optimization vs. human values
  • Make readers feel the inadequacy of current frameworks

Key Elements:

  • Desk items as career artifacts (Boyd photo, DRAM board, Leadership Principles, daughter’s piano drawing)
  • Contrast: precision of algorithms vs. imprecision of what matters
  • Emotional resonance without sentimentality

Chapter 1: The Foundation — Stanford, Boyd, and the Mathematics of Optimization

Opening Scene: Winter 1999, Boyd’s office, PhD acceptance moment

Content:

  1. The Miracle of Convexity (technical foundation without equations)
  2. First-Order Optimality Condition (finding the best)
  3. Duality: The Hidden Perspective (seeing what you can’t see directly)
    • Samsung example: derivative calculation through dual problem
    • Amazon example: debugging collaborative filtering
    • Erudio Bio example: biomarker discovery
  4. KKT Conditions (when constraints bind)
  5. Interior-Point Methods & Boyd’s Textbook (84,000+ citations)
  6. Why This Powers Modern AI (convex foundations of non-convex systems)
  7. Mathematical Lineage & Universal Truths (Erdős number 3, lineage to Gauss)
  8. Finding My Place in the Lineage
  9. What Stanford Taught Me (Beyond Mathematics)
  10. Leaving Stanford, Carrying Questions

Strategic Purpose:

  • Show optimization as choosing what to optimize
  • Introduce duality as seeing hidden structures (central metaphor for governance)
  • Establish that technical excellence doesn’t answer “what’s worth optimizing?”
  • Plant seeds for later governance concepts

Governance Lens Section (NEW): “Optimization theory teaches us: before finding the best solution, we must choose what ‘best’ means. This is not a technical question—it’s a governance question. Who decides the objective function? Who bears the cost of constraints? What gets optimized away?”

Chapter 2: Samsung — When AI Met Silicon (2004-2017)

Content:

  1. From Theory to Silicon (CAE Team, Memory Business Division)
  2. Building Trust, One Problem at a Time (15% faster, 10% lower power, 8% smaller)
  3. iOpt Platform: Democratizing Optimization (hundreds of engineers, decade+ use)
  4. DRAM Cell Design Revolution (2009, strategic importance)
  5. Doing the Right Thing (working against team politics)
  6. DRAM Team Head Quote: “First time I can say with certainty: ‘This is the best possible design’”
  7. The Long Game: Ten Years Later (SK Group offer)
  8. Discovering Machine Learning 2010-2015 (AlexNet 2012, sensing AI era)
  9. Decision to Join Amazon 2017
  10. What Thirteen Years at Samsung Taught Me

Strategic Purpose:

  • Show AI reshaping organizational decisions
  • Demonstrate optimization changing power dynamics (junior engineer vs. senior politics)
  • Reveal long time horizons matter (10 years for impact recognition)
  • Plant question: “What if we optimize for wrong metrics?”

Governance Lens Section (NEW): “At Samsung, I learned that algorithms don’t just solve problems—they redistribute power. The engineer with the best optimization tool wins arguments against senior managers. But who decides what gets optimized? The DRAM project optimized for chip performance. Should we have optimized for worker wellbeing? Environmental impact? These questions weren’t in the objective function.”

Chapter 3: Amazon — When Optimization Met Scale (2017-2020)

Content:

  1. Weird Feeling 2012: Knowing AI era was coming
  2. Preparing for the Wave 2010-2017
  3. Decision: Amazon 2017
  4. Amazon’s Secret Weapon: Culture, Not Technology
    • Leadership Principles deep dive
    • “Have Backbone; Disagree and Commit” (Sunghee’s favorite)
    • “Bias for Action” (Bezos: “70% of perfect data is enough”)
  5. PR/FAQ Process
  6. Project 1: Mobile App Cold Start (S-Team goal, 50%→97% accuracy)
  7. Project 2: Main Menu Personalization ($207M, initial failure then success)
  8. Project 3: TestBot (deep RL, auto-healing tests)
  9. COVID-19 Tsunami of Decisions
  10. Daughter Dialogue: “Don’t worry, I’m staying at stable companies”
  11. The Offer I Couldn’t Refuse (SK Chairman approval, US office)
  12. What Three Years at Amazon Taught Me

Strategic Purpose:

  • Show AI at massive scale ($207M impact)
  • Reveal culture shapes technology use (not vice versa)
  • Demonstrate algorithmic decision-making in practice
  • Set up crisis: Success on Amazon’s terms, but whose terms?

Governance Lens Section (NEW): “Amazon taught me that scale amplifies everything—including unexamined assumptions. The $207M personalization algorithm optimized for revenue. It worked brilliantly. But did it optimize for customer wellbeing? For employee satisfaction? For societal impact? These weren’t in the objective function because they weren’t in Amazon’s objective function. This isn’t Amazon’s failure—it’s a governance gap. We let corporations define their own optimization targets.”

Chapter 4: Gauss Labs — When Industrial AI Meets Reality (2020-2023)

Content Structure (TO BE DRAFTED):

  1. SK Group’s AI Bet: Why a semiconductor conglomerate needs AI
  2. Building from Scratch: US headquarters in pandemic
  3. Manufacturing AI: Theory meets factory floor
  4. The Human Factor: When workers distrust algorithms
  5. Success Stories and Hard Lessons
  6. Why I Left: The pull of biotech

Strategic Purpose:

  • Show AI in industrial context (beyond tech companies)
  • Reveal human resistance to algorithmic management
  • Demonstrate gap between AI capability and organizational readiness
  • Transition to biotech (where stakes are life/death)

Governance Lens Section (PLANNED): “Industrial AI exposed the governance vacuum. Who’s responsible when an algorithm makes a manufacturing decision that costs millions? The data scientist? The factory manager? The CEO? The algorithm itself? Our legal frameworks assume human decision-makers. Industrial AI requires new accountability structures.”

Chapter 5: Erudio Bio — AI Meets the Complexity of Life (2023-Present)

Content Structure (TO BE DRAFTED):

  1. Why Biotech: From circuits to cells
  2. AlphaFold Revolution: What changed
  3. Bio-TCAD Vision: Virtual laboratories for drug discovery
  4. Gates Foundation Grant: Validation and responsibility
  5. Cancer Biomarker Detection: VSA platform
  6. Korea-US Dual Structure: Leveraging comparative advantages
  7. What’s at Stake: Healthspan vs. lifespan

Strategic Purpose:

  • Show AI in highest-stakes domain (human health)
  • Reveal data scarcity problem (healthcare ≠ e-commerce)
  • Demonstrate regulatory complexity (IRB, MFDS, FDA)
  • Connect to Book II themes (Korea vs. US advantages)

Governance Lens Section (PLANNED): “Healthcare AI forces the governance question: Who benefits? Erudio Bio’s Korea-US structure exploits comparative advantages—Korea’s centralized health data and screening culture, US capital and risk appetite. But this raises questions: Who owns the data? Who profits from AI-discovered drugs? How do we ensure global access to AI-enabled treatments? These aren’t technical questions.”

Interlude: The Pattern Emerges

Brief reflection connecting five career stages to central insights

Strategic Purpose:

  • Crystallize the pattern across chapters
  • Transition from narrative to analysis
  • Set up Part II’s conceptual framework

PART II: REFRAMING AI — THE QUESTIONS WE SHOULD ASK

Analytical section deconstructing current AI discourse and establishing new framework

Chapter 6: What Existing AI Discourse Gets Wrong

Based on Jeff’s 3.5.1: “기존 AI 담론은 무엇을 잘못 묻고 있는가” (“What Is Existing AI Discourse Asking Wrongly?”)

Content:

  1. The Human vs. Machine Trap
    • Why “AGI arrival date” is the wrong question
    • The intelligence spectrum fallacy
    • Category errors in consciousness debates
  2. The Technology Determinism Myth
    • “Innovation = Progress” assumption
    • Ignoring choice and design
    • Historical parallels (nuclear power, social media)
  3. The Hype/Fear Binary
    • Utopian vs. dystopian framings both miss the point
    • Why both frames serve existing power structures
    • The missing middle: design and governance
  4. The Regulation-Follows-Technology Frame
    • Why “regulators must catch up” is backwards
    • Co-design as superior model
    • Examples where regulation enabled innovation

Strategic Purpose:

  • Demolish existing frames systematically
  • Show these frames serve interests (not neutral descriptions)
  • Create space for new framework
  • Use philosophy strategically to shake assumptions

Philosophical Touchpoints:

  • Wittgenstein’s language limits (we can’t think outside our questions)
  • Kuhn’s paradigm shifts (normal science vs. revolution)
  • Foucault’s power/knowledge (who benefits from current frames?)

Governance Insight: “The questions we ask determine the solutions we see. If we ask ‘When will AI be as smart as humans?’ we design Turing tests. If we ask ‘How should AI reshape power?’ we design governance structures. Current AI discourse asks the wrong questions because existing power structures benefit from those questions.”

Chapter 7: AI as Power Redistribution Device

Based on Jeff’s 3.5.2: “AI는 기술이 아니라 권력과 선택 구조를 재배치하는 장치다” (“AI Is Not a Technology but a Device That Rearranges Power and Choice Structures”)

Content:

  1. What AI Actually Does (Not What We Think It Does)
    • Pattern recognition and completion
    • Statistical correlation as prediction
    • Automating decisions (not making decisions)
  2. How This Redistributes Power
    • Who controls the training data?
    • Who sets the objective function?
    • Who interprets the outputs?
    • Who bears the costs of errors?
  3. Examples from Personal Experience
    • Samsung: Junior engineer beats senior manager
    • Amazon: Algorithm allocates $207M budget
    • Gauss Labs: Factory algorithms vs. worker autonomy
    • Erudio Bio: AI decides which biomarkers matter
  4. The Choice Architecture Lens
    • Nudge theory meets AI systems
    • Default options become algorithmic decisions
    • “Neutral” systems encode choices

Strategic Purpose:

  • Reframe AI from technology to governance
  • Show power redistribution through concrete examples
  • Introduce technical concepts as choice structures
  • No equations, but rigorous analysis

Technical Concepts as Power Examples:

  • Attention mechanisms: Systems decide what matters (who decides what the system attends to?)
  • Classification boundaries: Systems create categories (who decides the categories?)
  • Regularization: Systems enforce constraints (who decides the constraints?)
  • Loss functions: Systems optimize objectives (who decides the objectives?)

Governance Insight: “Every AI system embeds choices: What data to use? What to optimize? What to ignore? These aren’t technical decisions—they’re governance decisions. The question isn’t ‘Can AI do this?’ but ‘Should we let AI do this, and who decides?’”

Chapter 8: The Human-Machine Boundary is a Choice, Not a Fact

Based on Jeff’s 3.5.3: “인간–기계 경계는 존재하는 선이 아니라 우리가 설정한 규칙이다” (“The Human–Machine Boundary Is Not a Line That Exists but a Rule We Have Set”)

Content:

  1. The Boundary Has Always Been Negotiated
    • Historical examples: calculators, spreadsheets, spell-checkers
    • Each time, we decided what remains “human”
    • The boundary moves based on social agreement
  2. Current Boundary Debates
    • Should AI diagnose diseases?
    • Should AI make hiring decisions?
    • Should AI write judicial opinions?
    • Should AI create art?
  3. What Determines the Boundary?
    • Not capability (what AI can do)
    • But values (what we want humans to do)
    • And power (who gets to decide)
  4. Case Studies in Boundary-Setting
    • Medical AI: FDA approval frameworks
    • Autonomous vehicles: Liability frameworks
    • Content moderation: Platform vs. human decisions
    • Creative AI: Copyright frameworks

Strategic Purpose:

  • Show boundary is social/legal construct, not natural fact
  • Demonstrate boundary-setting is governance work
  • Reveal whose interests shape current boundaries
  • Prepare for Chapter 9’s “Right Questions” framework

Governance Insight: “Where we draw the human-machine boundary is a political decision disguised as a technical one. We don’t ask ‘Can AI do this?’ We ask: ‘Who should do this? Who’s accountable if it goes wrong? Who benefits if it goes right?’ These are governance questions.”

Chapter 9: Why We Always Start with the Wrong Questions

Based on Jeff’s 3.5.4: “우리는 왜 항상 틀린 질문부터 시작하는가” (“Why Do We Always Start with the Wrong Questions?”) [CENTRAL CHAPTER - Jeff emphasizes this as the pivot point]

Content:

  1. The Right Questions Framework
    • Questions do more than lead to answers
    • Questions create problem spaces
    • Questions determine what solutions we can see
  2. Why We Get Questions Wrong
    • Path dependency (historical accidents become norms)
    • Power structures (some questions serve existing interests)
    • Conceptual limitations (we can’t ask questions we can’t conceive)
    • Measurement bias (we ask what we can measure)
  3. Examples from Personal Experience
    • Samsung DRAM: Asked “What’s fastest chip?” not “What chip serves society best?”
    • Amazon personalization: Asked “What increases revenue?” not “What increases wellbeing?”
    • Erudio Bio: Asking “What saves lives?” vs. “What makes profit?”
  4. The Epistemological Trilogy Applied
    • Strategic Ignorance: Sometimes wrong questions are safer than right ones
    • Impossibility of Full Knowledge: Right questions reveal what we can’t know
    • Duality: Right questions show hidden structures

Strategic Purpose:

  • Pivot point of entire book
  • Connect personal narrative to philosophical framework
  • Establish “Right Questions” as design methodology
  • Transition to concrete governance applications

Philosophical Deep Dive:

  • Heideggerian notion of questioning (questions open worlds)
  • Pragmatist epistemology (truth as what works)
  • Buddhist wisdom (right questions > right answers)

The Right Questions for AI Governance: Not “How smart will AI get?” but “Who benefits from smarter AI?” Not “When will AGI arrive?” but “Who controls advanced AI?” Not “How do we make AI safe?” but “Safe for whom, defined by whom?” Not “How do we regulate AI?” but “How do we design AI governance?”

Governance Insight: “The question ‘How do we regulate AI?’ assumes technology exists independently and regulation follows. The right question is: ‘How do we co-design AI and governance?’ This shift from regulation to design is the paradigm reset.”

Chapter 10: Information Accumulation vs. Information Concentration

Based on Jeff’s 3.5.5: “정보는 축적되는가, 아니면 집중되는가” (“Does Information Accumulate, or Does It Concentrate?”)

Content:

  1. The Information Accumulation Myth
    • More data = better outcomes (assumed)
    • Digital age = democratized information (assumed)
    • Open source = shared progress (assumed)
  2. The Reality: Information Concentrates
    • Data network effects (more data → better AI → more users → more data)
    • Computational resource concentration
    • Talent concentration in tech hubs
    • Reinforcing loops, not equalizing spirals
  3. Examples from Experience
    • Amazon’s data moat ($207M algorithm required Amazon-scale data)
    • Samsung’s semiconductor design data (proprietary, accumulated over decades)
    • Erudio Bio’s biomarker data (Korea’s advantage: centralized health screening)
  4. Governance Implications
    • Who owns training data?
    • Data portability rights
    • Mandatory data sharing?
    • Public data infrastructure?

Strategic Purpose:

  • Challenge techno-optimist assumptions
  • Show market dynamics concentrate rather than distribute
  • Reveal data governance as central challenge
  • Connect to Book II’s market structure analysis

Governance Insight: “Information wants to be expensive, not free. AI accelerates information concentration because data has network effects. Without governance intervention, AI makes information inequality worse, not better. The question is: Do we accept this, or design alternative structures?”

Chapter 11: Error as Permitted Choice, Not Technical Flaw

Based on Jeff’s 3.5.6: “오류는 기술적 결함이 아니라 허용된 선택이다” (“Error Is Not a Technical Flaw but a Permitted Choice”)

Content:

  1. The Error Tolerance Question
    • Every AI system has error rates
    • These aren’t bugs—they’re tradeoffs
    • We choose acceptable error rates
  2. Who Decides Acceptable Error?
    • Medical AI: 1% error vs. 5% error means lives
    • Credit scoring: Error rates differ by demographic
    • Facial recognition: Error rates differ by race
    • Content moderation: Errors in both directions (over-censorship vs. under-censorship)
  3. Error Distribution is Political
    • Errors don’t distribute randomly
    • Some groups bear more errors
    • This is choice, not technical necessity
  4. Examples from Experience
    • Amazon A/B tests: We chose revenue threshold for “acceptable” performance
    • Samsung yield optimization: We chose defect rate tolerance
    • Erudio Bio biomarkers: We choose false positive vs. false negative tradeoff

Strategic Purpose:

  • Show technical parameters are political choices
  • Reveal responsibility allocation as governance problem
  • Connect to accountability frameworks in Book III
  • Prepare for discussion of regulation as co-design

Governance Insight: “When a medical AI has a 2% error rate, someone chose 2%. It could be 1% with more training data, but that costs money. It could be 5% if we accept worse performance for faster deployment. The error rate is a social choice about acceptable risk. Who makes this choice? Who bears the cost if the error affects them?”

Responsibility Redesign Principle (Jeff’s 3.7.1): “Responsibility must be based on influence and controllability, not human/machine distinction. If a company deploys AI with known error rates, the company is responsible for those errors—not the algorithm.”

Chapter 12: Do Markets Use Algorithms, or Do Algorithms Redesign Markets?

Based on Jeff’s 3.5.7: “시장은 알고리즘을 사용하는가, 아니면 알고리즘이 시장을 재설계하는가” (“Do Markets Use Algorithms, or Do Algorithms Redesign Markets?”)

Content:

  1. The Platform Economy Shift
    • Markets were physical spaces → Now algorithmic spaces
    • Supply and demand were discovered → Now algorithmically matched
    • Prices were negotiated → Now algorithmically set
  2. How Algorithms Reshape Markets
    • Dynamic pricing (Uber surge, Amazon price optimization)
    • Recommendation systems (create demand, not just respond)
    • Search ranking (determines visibility = market access)
    • Marketplace matching (who sees what opportunities)
  3. Examples from Experience
    • Amazon personalization: Algorithms allocate $207M across products
    • Platform power: Algorithms as market-makers
  4. Market Design Questions
    • Should algorithms be auditable?
    • Should algorithmic pricing be regulated?
    • Who bears algorithmic market failures?
    • Competition policy for algorithmic markets?

Strategic Purpose:

  • Show markets are designed spaces, not natural phenomena
  • Reveal algorithms as market architects, not market participants
  • Connect to Book II’s market structure analysis
  • Establish groundwork for Book III’s competition policy

Governance Insight: “Algorithms don’t just participate in markets—they design markets. When Amazon’s algorithm decides which products to show you, it’s not responding to market signals; it’s creating them. This requires new market design principles, not just antitrust enforcement.”

Incentive Design Principle (Jeff’s 3.7.2): “Rules don’t control behavior—they redesign choice structures. Effective governance doesn’t ban algorithmic pricing; it structures incentives so algorithmic pricing serves social goals.”

Chapter 13: Is Autonomy Expanded or Redefined?

Based on Jeff’s 3.5.8: “자율은 확대되는가, 아니면 재정의되는가” (“Is Autonomy Expanded, or Redefined?”)

Content:

  1. The Autonomy Paradox
    • AI promises more choice (personalization, optimization)
    • AI narrows choice (algorithmic sorting, filter bubbles)
    • Both are true—autonomy is being redefined
  2. Examples of Autonomy Redefinition
    • “You might also like” → Curated choice ≠ Free choice
    • Algorithmic management → Flexibility ≠ Autonomy
    • Smart assistants → Delegated decisions ≠ Controlled decisions
    • Social media feeds → Algorithmically relevant ≠ Self-directed
  3. Who Defines “Better” Autonomy?
    • Platform companies optimize for engagement
    • Should we optimize for agency instead?
    • Trade-offs between convenience and control
  4. Personal Reflection
    • Amazon’s “bias for action” → Speed over deliberation
    • Is faster decision-making more autonomous?
    • Or is reflection time essential to autonomy?

Strategic Purpose:

  • Challenge assumption that AI inherently empowers
  • Show autonomy as contested concept
  • Reveal value choices in system design
  • Connect to Books II & III on workplace autonomy, consumer protection

Governance Insight: “AI systems promise expanded choice but often deliver curated choice. This isn’t failure—it’s redefinition. The question is: Who defines what counts as autonomy? Do platforms decide that showing you more engaging content maximizes your autonomy? Or do you decide that seeing diverse perspectives maximizes your autonomy?”

Chapter 14: Do Institutions Follow Technology or Co-Design It?

Based on Jeff’s 3.5.9: “제도는 기술을 따라가는가, 아니면 함께 설계되는가” (“Do Institutions Follow Technology, or Are They Designed Together?”)

Content:

  1. The “Catch-Up” Frame Examined
    • “Technology moves fast, regulation moves slow”
    • This frame serves tech companies
    • Historical counter-examples: FDA, FCC, aviation
  2. Co-Design as Alternative Model
    • Medical devices: Regulation shaped innovation
    • Aviation: Safety rules enabled industry
    • Internet: Standards enabled growth
    • What would AI co-design look like?
  3. Examples from Experience
    • Samsung: Industry standards shaped chip design
    • Amazon: Internal review boards shaped product development
    • Erudio Bio: IRB/MFDS regulations shaping AI development
  4. Case for Co-Design
    • Early governance prevents lock-in
    • Regulation can enable innovation (not just constrain)
    • Public interest in design, not just deployment

Strategic Purpose:

  • Demolish “regulation stifles innovation” myth
  • Show institutions can be proactive, not reactive
  • Establish co-design as superior model
  • Transition to Book III’s concrete governance proposals

Governance Insight: “The ‘regulation must catch up’ frame assumes technology develops independently, then regulation responds. This is historically false and strategically harmful. Effective governance co-designs technology and institutions. The question isn’t ‘How do we regulate AI?’ but ‘How do we design AI governance together?’”

Institutions as Co-Designers (Jeff’s 3.7.3): “Error tolerance, acceptable risk, responsibility allocation—these aren’t technical parameters to be set after deployment. They’re design choices to be made during development. Institutions don’t follow technology; they co-design it.”

Chapter 15: What We Repeat if We Can’t See Structure

Based on Jeff’s 3.5.10: “다음 단계: 구조를 보지 못하면 무엇을 반복하게 되는가” (“Next Step: What Do We Repeat If We Cannot See the Structure?”)

Content:

  1. Historical Patterns We’re Repeating
    • Industrial Revolution: Efficiency over worker welfare
    • Nuclear power: Deployment before governance
    • Social media: Growth before safety
    • AI: Same patterns emerging
  2. Why We Repeat Patterns
    • Can’t see structure (trapped in current frames)
    • Path dependency (easier to repeat than redesign)
    • Power structures (repetition serves interests)
  3. What We’ll Repeat with AI
    • Concentration of power
    • Inequality acceleration
    • Externalized costs
    • Post-hoc governance attempts
  4. How to Break the Pattern
    • See structure (Book I’s work)
    • Analyze systems (Book II’s work)
    • Design institutions (Book III’s work)

Strategic Purpose:

  • Show urgency of paradigm shift
  • Connect historical patterns to current moment
  • Create motivation for Books II & III
  • Transition from analysis to action

Governance Insight: “Every technology follows a pattern: Deploy rapidly, discover harms, regulate belatedly, entrench existing power structures. We’re repeating this pattern with AI. Breaking it requires seeing the structure, not just responding to symptoms. That’s what this book series does.”


PART III: TOWARDS A NEW FRAMEWORK

Synthesis and transition to action

Chapter 16: The Design Principles That Matter

Content:

  1. Synthesizing the Right Questions Framework
  2. Core Principles for AI Governance
    • Responsibility based on influence/controllability
    • Co-design over regulation
    • Choice architecture transparency
    • Error distribution equity
    • Power distribution intentionality
  3. How These Principles Apply
  4. Transition to Book II (System Analysis) and Book III (Institutional Design)

Strategic Purpose:

  • Consolidate Book I’s insights
  • Preview Books II & III
  • Provide actionable framework even for readers who don’t continue
  • Create bridge to next volumes

BOOK I: STRATEGIC ELEMENTS

Content Strategy

Personal Narrative (40%):

  • Chapters 1-5: Career journey as proof of authority
  • Emotional resonance: Make readers feel the inadequacy of current frames
  • Concrete examples: Every abstract concept illustrated from experience

Conceptual Framework (40%):

  • Chapters 6-15: Systematic deconstruction and reconstruction
  • Philosophy used strategically (to shake assumptions, not as full treatise)
  • Technical concepts as choice structure examples (no equations)

Governance Embedding (20%):

  • Governance lens sections throughout
  • Not separate from narrative/analysis—integrated
  • Prepares readers for Books II & III

Philosophical Strategy

Philosophy Placement (per Jeff’s 3.6.1): Used at disruption points to shake cognitive structures, not as comprehensive explanations:

  • Wittgenstein’s language limits → Chapter 6 (demolishing existing frames)
  • Kuhn’s paradigm shifts → Chapter 9 (right questions framework)
  • Buddhist epistemology → Chapter 9 (questions > answers)
  • Pragmatist philosophy → Throughout (truth as what works for design)

Not Included:

  • Comprehensive philosophy of mind
  • Detailed consciousness debates
  • Academic philosophy for its own sake

Technical Strategy

Technical Concepts as Choice Structures (per Jeff’s 3.6.2):

  • Attention mechanisms → “Systems decide what matters”
  • Classification boundaries → “Systems create categories”
  • Regularization → “Systems enforce constraints”
  • Loss functions → “Systems optimize objectives”
  • Algorithmic pricing → “Systems set value”

Never Included:

  • Mathematical equations
  • Algorithm pseudocode
  • Technical implementation details
  • “Here’s how neural networks work” explanations

Writing Style

Tone:

  • Direct and honest
  • Occasionally self-deprecating
  • Proud of work without boasting
  • Accessible without dumbing down
  • Emotional resonance without sentimentality

Voice Consistency:

  • Sunghee’s distinctive voice throughout
  • Jeff & Hayden’s “Governance Lens” sections clearly marked
  • Seamless integration, not jarring interruptions

Length Target

  • Book I Target: 75,000-90,000 words
  • Prologue + Chapters 1-5: ~30,000 words (personal narrative)
  • Chapters 6-15: ~45,000 words (conceptual framework)
  • Chapter 16 + Conclusion: ~8,000 words (synthesis)
  • Governance Lens sections: ~5,000 words total (distributed throughout)

BOOK I: PRODUCTION TIMELINE

Phase 1: Completion of Personal Narrative (Months 1-3)

Target: June 2026

Month 1 (March 2026):

  • ✅ Prologue complete
  • ✅ Chapter 1 complete
  • ✅ Chapter 2 complete
  • ✅ Chapter 3 complete
  • Week 4: Chapter 4 first draft (Gauss Labs)

Month 2 (April 2026):

  • Week 1-2: Chapter 5 first draft (Erudio Bio)
  • Week 3: Interlude draft
  • Week 4: Revise Chapters 1-5 based on three-book architecture

Month 3 (May 2026):

  • Week 1-2: Governance Lens sections for Chapters 1-5 (Jeff & Hayden lead)
  • Week 3-4: Integration and polish of Part I

Phase 2: Conceptual Framework Chapters (Months 4-7)

Target: October 2026

Month 4 (June 2026):

  • Chapter 6 draft (What Existing Discourse Gets Wrong)
  • Chapter 7 draft (AI as Power Redistribution)

Month 5 (July 2026):

  • Chapter 8 draft (Human-Machine Boundary)
  • Chapter 9 draft (Why Wrong Questions) ← CRITICAL CHAPTER

Month 6 (August 2026):

  • Chapter 10 draft (Information Concentration)
  • Chapter 11 draft (Error as Choice)

Month 7 (September 2026):

  • Chapter 12 draft (Algorithms Redesign Markets)
  • Chapter 13 draft (Autonomy Redefined)

Phase 3: Synthesis and Integration (Months 8-9)

Target: December 2026

Month 8 (October 2026):

  • Chapter 14 draft (Co-Design Framework)
  • Chapter 15 draft (What We Repeat)
  • Chapter 16 draft (Design Principles)

Month 9 (November 2026):

  • Governance Lens sections for Chapters 6-16
  • Overall integration pass
  • Consistency check across all chapters

Phase 4: Revision and Polish (Months 10-12)

Target: March 2027

Month 10 (December 2026):

  • First complete manuscript review
  • Major structural revisions
  • Voice consistency pass

Month 11 (January 2027):

  • Second revision pass
  • External reader feedback (select policymakers, academics)
  • Incorporate feedback

Month 12 (February 2027):

  • Final polish
  • Manuscript to agent/publisher
  • Begin Book II planning in parallel

Publication Target: Fall 2027


BOOK II: SYSTEM INTERACTION

Subtitle Options

  1. “Markets, Power, and Information in the Age of Algorithms”
  2. “How AI Restructures Economic and Social Systems”
  3. “From Markets to Platforms: The Algorithmic Reorganization”

Core Thesis

AI doesn’t participate in existing economic and social systems—it fundamentally restructures them. Understanding this restructuring requires analyzing information asymmetry, power concentration, platform economics, and data governance as interconnected system dynamics.

Target Audience

Primary: Policymakers, economists, business strategists, regulators
Secondary: Tech industry leaders, investors, academic researchers
Tertiary: Engaged citizens seeking to understand economic transformation

Strategic Objectives

  1. Analyze (not prescribe): Book II describes how systems change
  2. Comparative frameworks: Korea vs. US, different national models
  3. Market design theory: Apply mechanism design to AI systems
  4. Evidence-based: Use Sunghee’s entrepreneurship article + academic research
  5. Prepare for Book III: Identify governance chokepoints

BOOK II: TABLE OF CONTENTS (Preliminary)

PART I: INFORMATION ASYMMETRY TRANSFORMED

Chapter 1: From Information Scarcity to Information Abundance

  • Historical: Information as scarce resource
  • Digital transformation: Abundance ≠ Access
  • AI acceleration: Information overwhelm
  • New scarcity: Attention, interpretation, trust

Chapter 2: Platform Economics and Market Power

  • Two-sided markets → Multi-sided platforms
  • Network effects and winner-take-all dynamics
  • Data as moat: Why AI companies consolidate
  • Case studies: Amazon, Google, emerging platforms

Chapter 3: Algorithmic Price Discrimination

  • From fixed prices to dynamic pricing
  • Personalized pricing: Efficiency vs. fairness
  • Information advantage exploitation
  • Regulatory responses: EU, US, Asia

Chapter 4: Search, Recommendation, and Market Access

  • Algorithms as gatekeepers
  • Visibility = existence in digital markets
  • SEO/algorithmic optimization arms race
  • Who controls market access?

PART II: ORGANIZATIONAL TRANSFORMATION

Chapter 5: Algorithmic Management

  • From human managers to algorithm managers
  • Gig economy: Freedom or precarity?
  • Surveillance capitalism in the workplace
  • Worker resistance and organizing

Chapter 6: Decision-Making Authority Redistribution

  • Who decides when the algorithm decides?
  • Accountability gaps in algorithmic decisions
  • Case studies from Samsung, Amazon, Gauss Labs
  • New organizational structures emerging

Chapter 7: Knowledge Work in the AI Era

  • Which jobs are augmented, which replaced?
  • Skill polarization acceleration
  • The “centaur” model: Human-AI collaboration
  • Education and training implications

PART III: DATA GOVERNANCE AND POWER

Chapter 8: Who Owns Data?

  • Personal data: Property, privacy, or commons?
  • Corporate data: Proprietary vs. portable
  • Public data: Infrastructure or asset?
  • Data sovereignty: National security implications

Chapter 9: Comparative Advantages in the AI Era

[Draws heavily from Sunghee’s entrepreneurship article]

  • Korea: Data centralization, testing culture, execution speed
  • US: Capital, risk tolerance, scale
  • EU: Regulation-first approach, GDPR model
  • China: State coordination, data access
  • Complementary roles vs. competition

Chapter 10: Data as Infrastructure

  • Digital public infrastructure concept
  • Open data mandates
  • Data trusts and cooperatives
  • National data strategies compared

PART IV: POWER CONCENTRATION DYNAMICS

Chapter 11: The AI Oligopoly

  • Why AI concentrates: Compute, data, talent
  • Vertical integration strategies
  • Moats and competitive dynamics
  • Can startups compete?

Chapter 12: Geopolitical Implications

  • US-China AI competition
  • Technology sovereignty movements
  • Export controls and supply chains
  • International governance challenges

Chapter 13: Inequality Acceleration

  • Within countries: Skill premium widening
  • Between countries: AI divide
  • Within companies: Data-rich vs. data-poor
  • Policy responses: Taxation, redistribution, access

PART V: MARKET DESIGN FOR AI ERA

Chapter 14: Mechanism Design Principles Applied

  • Incentive compatibility in algorithmic systems
  • Auction design for computational resources
  • Matching algorithms and fairness
  • Market design as governance tool

Chapter 15: Competition Policy Reconsidered

  • Why traditional antitrust fails for platforms
  • Data portability and interoperability
  • Essential facilities doctrine applied
  • New frameworks emerging (EU DMA, etc.)

Chapter 16: From Analysis to Design

  • Synthesizing system dynamics insights
  • Identifying governance intervention points
  • Transition to Book III’s institutional proposals
  • Preview of governance architecture

BOOK II: STRATEGIC ELEMENTS

Content Strategy

Analytical Depth (60%):

  • Economic theory applied rigorously
  • Market design frameworks
  • Platform economics literature
  • Data governance scholarship

Comparative Analysis (25%):

  • Korea vs. US (primary comparison)
  • EU, China, other models
  • Identify strengths, weaknesses, complementarities
  • Evidence from Sunghee’s entrepreneurship article

Case Studies (15%):

  • Real companies, real markets
  • Sunghee’s experience: Samsung, Amazon, Gauss Labs, Erudio Bio
  • Public data: Platform behaviors, market dynamics
  • Policy experiments: GDPR, DMA, various national approaches

Philosophical Strategy

  • Less philosophy than Book I (analysis over paradigm-shifting)
  • Institutional economics (North, Ostrom)
  • Market design theory (Roth, Milgrom)
  • Platform economics (Rochet, Tirole)

Technical Strategy

  • More technical than Book I, but still accessible
  • Economic models explained conceptually
  • Market mechanisms illustrated with examples
  • Platform algorithms described structurally (not implemented)

Research Requirements

  • Academic literature review: Platform economics, data governance
  • Policy analysis: Compare regulatory approaches
  • Interview data: Possible interviews with industry leaders and policymakers
  • Market data: Document concentration, pricing patterns, access barriers

Author Roles

  • Sunghee: Primary author, personal case studies, technical systems analysis
  • Jeff: Legal-economic analysis, comparative frameworks, market design
  • Hayden: Policy analysis, regulatory comparison, international governance

Length Target

Book II Target: 85,000-100,000 words (greater analytical density than Book I)


BOOK II: PRODUCTION TIMELINE

Phase 1: Research and Framework (Months 1-4)

Target: July 2027 (overlaps with Book I final revision)

Month 1-2 (March-April 2027):

  • Literature review: Platform economics, data governance
  • Framework development: Adapt market design theory to AI
  • Outline refinement

Month 3-4 (May-June 2027):

  • Comparative data gathering: Korea, US, EU, China policies
  • Interview planning (if pursued)
  • First draft chapters 1-3

Phase 2: Drafting (Months 5-11)

Target: March 2028

Months 5-8 (July-October 2027):

  • Chapters 1-8 drafts (information asymmetry + organizational transformation)

Months 9-11 (November 2027-January 2028):

  • Chapters 9-16 drafts (data governance + power concentration + market design)

Phase 3: Revision (Months 12-14)

Target: June 2028

Months 12-14 (February-April 2028):

  • Integration with Book I frameworks
  • Expert review (economists, policymakers)
  • Revision and polish

Publication Target: Fall 2028


BOOK III: INSTITUTIONAL DESIGN

Subtitle Options

  1. “Governance Frameworks for the Age of Intelligent Systems”
  2. “From Principles to Policy: Designing AI Institutions”
  3. “Building Accountable AI: Institutional Architecture for Tomorrow”

Core Thesis

AI governance requires new institutional architecture designed around influence/controllability rather than human/machine distinctions. Effective governance co-designs technology and institutions, structures incentives (not just controls), and distributes power intentionally.

Target Audience

Primary: Government officials, regulators, legislators, international organizations
Secondary: Corporate compliance officers, legal practitioners, policy researchers
Tertiary: Civil society organizations, advocacy groups, engaged citizens

Strategic Objectives

  1. Propose concrete frameworks: Actionable, not just theoretical
  2. Cross-jurisdictional: Apply to multiple legal systems
  3. Evidence-based: Build from Book II’s analysis
  4. Implementable: Realistic political economy considerations
  5. Create model legislation: Provide templates for adoption

BOOK III: TABLE OF CONTENTS (Preliminary)

PART I: FOUNDATIONAL PRINCIPLES

Chapter 1: From Regulation to Co-Design

  • Why post-hoc regulation fails
  • Co-design model: Technology + institutions together
  • Historical precedents: FDA, aviation, telecommunications
  • Principles for AI co-design

Chapter 2: Responsibility Architecture

  • Influence + controllability framework
  • Liability allocation when algorithms decide
  • Corporate responsibility for deployed AI
  • Criminal vs. civil liability

Chapter 3: Transparency and Explainability Requirements

  • Right to explanation: Scope and limits
  • Algorithmic transparency mandates
  • Trade secrets vs. public accountability
  • Graduated requirements by stakes

Chapter 4: Error Distribution and Acceptable Risk

  • Who decides acceptable error rates?
  • Differential impact assessment requirements
  • Error monitoring and reporting mandates
  • Remedy and redress mechanisms

PART II: SECTOR-SPECIFIC GOVERNANCE

Chapter 5: Healthcare AI Governance

  • FDA/MFDS models for medical AI
  • Clinical trial requirements
  • Post-market surveillance
  • Equity in access and outcomes
  • Case study: Erudio Bio’s regulatory path

Chapter 6: Financial Services AI

  • Credit scoring fairness requirements
  • Algorithmic trading oversight
  • Consumer protection in AI finance
  • Systemic risk from algorithmic decisions

Chapter 7: Employment and Workplace AI

  • Algorithmic management regulations
  • Worker data rights
  • Transparency in hiring/firing algorithms
  • Collective bargaining in AI workplaces

Chapter 8: Criminal Justice and Public Sector

  • Limits on AI in sentencing, parole, policing
  • Due process requirements
  • Human-in-the-loop mandates
  • Auditing and accountability

Chapter 9: Content Moderation and Speech

  • Platform liability frameworks
  • Algorithmic amplification transparency
  • Balancing speech and safety
  • Cross-border challenges

PART III: MARKET STRUCTURE AND COMPETITION

Chapter 10: Data Governance Regimes

  • Data portability mandates
  • Interoperability requirements
  • Data trusts and cooperatives
  • Public data infrastructure

Chapter 11: Competition Policy for the AI Era

  • Market definition in platform economies
  • Essential facilities doctrine for AI
  • Merger review for data/AI acquisitions
  • Pro-competitive interventions

Chapter 12: Intellectual Property Reconsidered

  • AI-generated works: Copyright implications
  • Training data and fair use
  • Patent protection for AI innovations
  • Balancing innovation and access

PART IV: INTERNATIONAL GOVERNANCE

Chapter 13: Cross-Border AI Governance

  • International standards coordination
  • Data sovereignty vs. free flow
  • Regulatory arbitrage prevention
  • Global minimum standards?

Chapter 14: Comparative Governance Models

  • EU: Regulatory leadership (AI Act, DMA, GDPR)
  • US: Sectoral approach
  • Korea: Development + regulation balance
  • China: State coordination model
  • Which elements to adopt?

Chapter 15: International Cooperation Mechanisms

  • Treaty frameworks
  • Multi-stakeholder governance
  • Technical standard-setting
  • Enforcement coordination

PART V: IMPLEMENTATION AND FUTURE

Chapter 16: Building Institutional Capacity

  • Regulator expertise development
  • Industry-government collaboration
  • Academic-policy partnerships
  • Rapid response mechanisms

Chapter 17: Adaptive Governance

  • Governance that learns
  • Sunset provisions and review
  • Experimentation frameworks
  • Iterative improvement

Chapter 18: The Path Forward

  • Near-term priorities (1-3 years)
  • Medium-term development (3-7 years)
  • Long-term vision (7-15 years)
  • Call to action

APPENDICES

Appendix A: Model Legislation Templates

  • AI Transparency Act (draft)
  • Algorithmic Accountability Act (draft)
  • AI Safety Standards (draft)

Appendix B: Regulatory Assessment Framework

  • Checklist for evaluating AI governance proposals
  • Impact assessment methodology
  • Stakeholder consultation guidelines

Appendix C: International Governance Comparison Matrix

  • Side-by-side comparison of major jurisdictions
  • Strengths, weaknesses, best practices

Appendix D: Case Studies in AI Governance

  • Detailed analysis of governance successes/failures
  • Lessons for future policy

BOOK III: STRATEGIC ELEMENTS

Content Strategy

Concrete Proposals (70%):

  • Specific legal frameworks
  • Detailed policy recommendations
  • Model legislation
  • Implementation roadmaps

Comparative Analysis (20%):

  • Which governance models work where?
  • What can different jurisdictions learn?
  • How to adapt frameworks to local context?

Theoretical Grounding (10%):

  • Connect to Book I’s principles
  • Build from Book II’s analysis
  • Justify with legal/political theory

Philosophical Strategy

  • Minimal philosophy (Book III is the most practical of the series)
  • Legal theory: Administrative law, constitutional law
  • Political theory: Democratic legitimacy, accountability
  • Economic theory: Public choice, political economy

Technical Strategy

  • Technical appendices as needed
  • Focus on governance mechanisms, not technical implementation
  • Assume readers understand Books I & II frameworks

Authorship Strategy

  • Jeff & Hayden: Primary authors (legal expertise essential)
  • Sunghee: Case studies, technical feasibility assessment
  • Collaborative: All three on cross-cutting chapters

Research Requirements

  • Comparative legal analysis (extensive)
  • Policy evaluation studies
  • Expert consultation (legal scholars, regulators, industry)
  • Drafting expertise (model legislation)

Length Target

Book III Target: 90,000-110,000 words (the most detailed and practical volume)


BOOK III: PRODUCTION TIMELINE

Phase 1: Research and Framework (Months 1-6)

Target: March 2029

Months 1-3 (September-November 2028):

  • Comparative legal research
  • Policy evaluation studies
  • Expert consultation planning
  • Framework development

Months 4-6 (December 2028-February 2029):

  • Model legislation drafting begins
  • Stakeholder interviews
  • First draft chapters 1-5

Phase 2: Drafting (Months 7-15)

Target: January 2030

Months 7-12 (March-August 2029):

  • Chapters 1-10 drafts (principles + sector-specific)

Months 13-15 (September-November 2029):

  • Chapters 11-18 drafts (competition + international + implementation)
  • Appendices drafting

Phase 3: Expert Review and Revision (Months 16-20)

Target: July 2030

Months 16-18 (December 2029-February 2030):

  • Expert review (legal scholars, policymakers, industry)
  • Model legislation testing with practitioners
  • Major revisions based on feedback

Months 19-20 (March-April 2030):

  • Final integration with Books I & II
  • Polish and finalization

Publication Target: Fall 2030


OVERALL SERIES STRATEGY

Three-Book Coherence Mechanisms

Conceptual Threading

“Design” as central metaphor:

  • Book I: Design new cognitive frameworks
  • Book II: Analyze designed systems
  • Book III: Design new institutions

“Right Questions” framework:

  • Book I: What questions should we ask?
  • Book II: What do these questions reveal about systems?
  • Book III: How do we institutionalize right questions?

“Power Distribution”:

  • Book I: AI redistributes power (descriptive)
  • Book II: How power redistributes (analytical)
  • Book III: How to redistribute intentionally (prescriptive)

Cross-References

  • Each book references others explicitly
  • Readers can start anywhere but benefit from sequence
  • Standalone value + series value

Consistent Voice

  • Sunghee’s voice in narrative sections across all three
  • Governance lens sections marked consistently
  • Collaborative voice in analytical sections

Publication Strategy

Why sequential publication:

  • Each book builds authority for next
  • Real-world feedback shapes later volumes
  • Audience building over time
  • Revenue funds later book production

Timeline:

  • Book I: Fall 2027
  • Book II: Fall 2028 (1 year later)
  • Book III: Fall 2030 (2 years later)

Risks:

  • AI field moves fast—content may date
  • Mitigation: Focus on structural analysis, not current events

Why not simultaneous publication:

  • 5-year production cycle too long
  • No audience feedback incorporation
  • Massive upfront investment
  • Market risk concentration

Target Readerships (Cumulative)

Book I Readers

  • 50,000 copies (ambitious for a policy/tech book)
  • Policymakers, industry leaders, engaged public
  • Creates foundation for Books II & III

Book II Readers

  • 30,000 copies (more specialized)
  • All Book I readers + economists, business strategists
  • Deepens engagement

Book III Readers

  • 20,000 copies (most specialized)
  • Policymakers, lawyers, advocates who need concrete frameworks
  • Implementation guidance

Total Series Impact

  • 100,000 copies across series
  • Policy influence: Multiple jurisdictions adopt frameworks
  • Academic influence: Courses, citations, research programs
  • Public discourse: “Design thinking for AI governance” becomes standard frame

Revenue Model

Traditional Publishing

  • Advance: $50,000-150,000 per book (estimate)
  • Royalties: 10-15% of net sales
  • Korean translation rights: Separate deal
  • Total series potential: $500,000-1,000,000
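
As a rough sanity check on the figures above, a minimal back-of-envelope sketch. The advances, royalty rates, and copy targets come from this plan; the $20 net price per copy is an assumption for illustration. It deliberately ignores advance recoupment and Korean translation rights, which is why it lands somewhat below the $500,000-1,000,000 potential.

```python
# Back-of-envelope revenue estimate for the three-book series.
# Advances, royalty rates, and copy targets are taken from this plan;
# NET_PRICE is an ASSUMED net price per copy (USD) for illustration only.

ADVANCE_RANGE = (50_000, 150_000)   # per book, from the plan
ROYALTY_RANGE = (0.10, 0.15)        # share of net sales, from the plan
COPY_TARGETS = {"Book I": 50_000, "Book II": 30_000, "Book III": 20_000}
NET_PRICE = 20.0                    # assumed net price per copy (USD)

def series_estimate(net_price: float = NET_PRICE) -> tuple[float, float]:
    """Return a (low, high) total-revenue range in USD.

    Simplification: advances and royalties are summed, ignoring that
    advances are normally recouped against royalties, and ignoring
    translation-rights income.
    """
    total_copies = sum(COPY_TARGETS.values())
    low = 3 * ADVANCE_RANGE[0] + total_copies * net_price * ROYALTY_RANGE[0]
    high = 3 * ADVANCE_RANGE[1] + total_copies * net_price * ROYALTY_RANGE[1]
    return low, high

low, high = series_estimate()
print(f"Series revenue estimate: ${low:,.0f} - ${high:,.0f}")
# -> Series revenue estimate: $350,000 - $750,000
```

Under these assumptions the 100,000-copy target yields roughly $350,000-750,000 before translation rights, broadly consistent with the stated range once the separate Korean deal is included.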

Speaking/Consulting

  • Book series establishes authority
  • Keynotes: $10,000-50,000 per engagement
  • Consulting: Government/corporate advisory
  • Total potential: $200,000-500,000 annually

Academic/Policy Impact

  • Course adoptions
  • Policy fellowship opportunities
  • Think tank affiliations
  • Long-term influence > short-term revenue

Marketing Strategy

Book I Launch

  • Op-eds in major outlets (NYT, WSJ, FT, 조선일보, 중앙일보)
  • Podcast circuit (Ezra Klein, Sam Harris, tech podcasts)
  • K-PAI forums as platform
  • University speaking tour
  • Corporate/government briefings

Book II Launch

  • Economic/business media focus
  • Industry conferences (tech, policy, economics)
  • Academic conferences (economics, law, policy schools)
  • International events (Davos, etc. if accessible)

Book III Launch

  • Policy-focused rollout
  • Model legislation promotion to legislators
  • International governance forums
  • UN, OECD, other international organizations
  • Think tank partnerships

Series Marketing

  • Each book promotes next
  • Trilogy positioning from start
  • “Complete your set” campaigns
  • Academic package deals

COLLABORATION STRUCTURE

Author Roles by Book

Book I: PARADIGM RESET

  • Sunghee: 80% (primary author, personal narrative, technical expertise)
  • Jeff: 12% (governance lens sections, structural editing)
  • Hayden: 8% (governance lens sections, policy examples)

Book II: SYSTEM INTERACTION

  • Sunghee: 50% (case studies, technical systems analysis)
  • Jeff: 30% (legal-economic analysis, market design)
  • Hayden: 20% (policy analysis, comparative frameworks)

Book III: INSTITUTIONAL DESIGN

  • Jeff: 40% (primary legal framework, model legislation)
  • Hayden: 40% (policy implementation, international governance)
  • Sunghee: 20% (technical feasibility, case studies)

Decision-Making Process

  • Strategic direction: Consensus of all three
  • Content decisions: Primary author(s) have final say
  • Conflicts: Discuss toward consensus; fall back to majority rule if needed
  • External advice: Consult mentors (Boyd, others) as needed

Work Schedule

  • Weekly check-ins: During active drafting phases
  • Monthly reviews: Overall progress and alignment
  • Quarterly retreats: Strategic planning, major decisions
  • Ad-hoc meetings: As needed for urgent decisions

Division of Labor

Research

  • Sunghee: Technical literature, industry case studies
  • Jeff: Legal scholarship, comparative law
  • Hayden: Policy analysis, regulatory frameworks

Writing

  • First drafts: Primary author per section
  • Revisions: Collaborative editing
  • Final polish: Lead author per book

External Relations

  • Academic: All three (different networks)
  • Policy: Jeff & Hayden lead
  • Industry: Sunghee leads
  • Media: Coordinated, context-dependent

RISK MITIGATION

Content Risks

Risk: Field Moves Too Fast

Mitigation:

  • Focus on structural analysis, not current events
  • Timeless principles over specific technologies
  • Update preface for subsequent editions

Risk: Political Backlash

Mitigation:

  • Evidence-based, not ideological
  • Present multiple perspectives
  • Avoid partisan framing
  • Focus on design, not blame

Risk: Technical Obsolescence

Mitigation:

  • Emphasize governance principles over technical details
  • Use technical concepts as examples, not core content
  • Revise editions as needed

Process Risks

Risk: Timeline Slippage

Mitigation:

  • Built-in buffer time
  • Milestone tracking
  • Accountability mechanisms
  • Adjust scope if needed

Risk: Author Conflicts

Mitigation:

  • Clear role definitions upfront
  • Regular communication
  • Decision-making process agreed
  • External mediation if needed (Boyd?)

Risk: Publisher Rejection

Mitigation:

  • Start with Book I proposal
  • Build publishing relationships early
  • Consider university presses if commercial passes
  • Self-publishing only as a last resort

Market Risks

Risk: Limited Audience

Mitigation:

  • Accessible writing style
  • Multi-audience approach
  • Strong marketing plan
  • Speaking circuit to build awareness

Risk: Competition

Mitigation:

  • Unique combination: Technical + legal + personal
  • First-mover advantage in governance design space
  • Series depth hard to replicate

SUCCESS METRICS

Book I Success Indicators

Year 1:

  • 25,000 copies sold
  • 10+ major media mentions (NYT, WSJ, etc.)
  • 5+ policy briefings with government officials
  • 3+ university course adoptions

Year 2-3:

  • 50,000 total copies
  • Translation into 3+ languages
  • Citation in academic papers
  • Influence on at least one policy proposal

Book II Success Indicators

  • 15,000 copies sold (Year 1)
  • Business school adoption
  • Citation in economic/legal scholarship
  • Consultation requests from companies/governments

Book III Success Indicators

  • 10,000 copies sold (Year 1)
  • Model legislation introduced in at least one jurisdiction
  • International organization engagement (OECD, UN, etc.)
  • Corporate governance adoption

Series Success Indicators

  • 100,000 total copies across three books
  • “AI governance design” becomes standard frame in discourse
  • Policy impact in multiple jurisdictions
  • Academic research program spawned
  • Next generation of scholars/practitioners trained

NEXT STEPS (IMMEDIATE)

March 2026 (This Month)

  1. ✅ Finalize three-book architecture (this document)
  2. 🔄 Meeting with Jeff & Hayden to confirm vision
  3. 🔄 Agree on collaboration structure
  4. ⬜ Begin Chapter 4 (Gauss Labs) first draft
  5. ⬜ Outline governance lens sections for Chapters 1-3

April 2026

  1. Complete Chapter 4 draft
  2. Begin Chapter 5 draft (Erudio Bio)
  3. Jeff & Hayden draft first governance lens sections
  4. First collaborative editing session
  5. Develop detailed Book I chapter outlines (Chapters 6-16)

May 2026

  1. Complete Chapter 5 draft
  2. Complete Interlude
  3. Integrate governance lens sections into Chapters 1-5
  4. Begin drafting Chapters 6-7
  5. Publisher research and proposal planning

June 2026

  1. Chapters 6-7 complete
  2. Book I Part I (Chapters 1-5) polished
  3. Begin publisher outreach
  4. Plan Book II research phase
  5. Milestone review: Are we on track for Fall 2027 Book I publication?

APPENDIX: KEY QUESTIONS FOR DISCUSSION

Strategic Direction

  1. Do we commit to three-book series or Book I standalone with option?
  2. What’s the primary goal: Policy influence or discourse transformation?
  3. Which jurisdictions do we prioritize: Korea, US, EU, global?
  4. Is the timeline aggressive enough, or too aggressive?

Content Decisions

  1. Book I length: 75,000 or 90,000 words?
  2. How much technical depth in Book II?
  3. Should Book III include actual legislative drafts?
  4. Where does entrepreneurship article content go? (Book II Chapter 9?)

Collaboration

  1. Comfortable with proposed role distribution?
  2. Weekly check-ins feasible given everyone’s schedules?
  3. Decision-making process agreeable?
  4. Who handles publisher negotiations?

Marketing & Impact

  1. Which policy audiences most important?
  2. Korean translation simultaneous or sequential?
  3. Academic vs. trade publisher?
  4. Speaking/consulting strategy during book production?

END OF STRATEGIC ARCHITECTURE DOCUMENT

This living document will be updated as the project evolves. Version control and collaborative editing to be established.


REVISION HISTORY

  • v1.0 (March 8, 2026): Initial comprehensive architecture for three-book series