40-minute read

posted: 03-May-2026 & updated: 03-May-2026

This document follows Amazon’s PR/FAQ (“Working Backwards”) methodology. The Press Release is written as if the vision has already been realized on 15-Dec-2029. The FAQ section addresses both external and internal questions about the AI & Humanity Council. This document is intended to align K-PAI Nexus leadership on the Council’s mission, methodology, and path to global impact.

PRESS RELEASE

AI & Humanity Council Recognized as World’s Most Influential Think Tank on AI’s Societal Impact

Five landmark reports have shaped legislation in 12 countries, influenced Fortune 500 AI governance frameworks, and become required reading in 50+ universities. The Council’s unique methodology—integrating 15 disciplines, grounded in practitioner insight, committed to actionable guidance—has fundamentally changed how the world navigates AI’s transformation of human society.

SILICON VALLEY, CA — December 15, 2029 — The AI & Humanity Council, a multidisciplinary think tank operating under K-PAI Nexus, today announced that its work has directly influenced AI policy in 12 countries, shaped corporate governance frameworks at 30+ Fortune 500 companies, and become required reading in graduate programs at more than 50 universities worldwide. Since its launch in September 2026, the Council has published five comprehensive reports addressing AI’s most consequential implications for humanity—from labor market transformation and democratic governance to existential risk and human flourishing.

What distinguishes the AI & Humanity Council from conventional technology think tanks is its refusal to reduce AI’s implications to narrow technical or economic frames. Instead, the Council examines AI through an unprecedented integration of 15 disciplines: computer science, engineering, cognitive science, psychology, philosophy, ethics, economics, sociology, political science, law, policy studies, organizational behavior, education, medicine, and theology/religious studies. More than 120 leading experts have contributed to the Council’s work, representing institutions including Stanford, MIT, Harvard, Berkeley, Princeton, Oxford, Seoul National University, KAIST, Tsinghua, and major AI research labs.

“The AI & Humanity Council has accomplished something genuinely rare in policy research,” said Dr. Daron Acemoglu, Institute Professor at MIT and co-author of Power and Progress: Our Thousand-Year Struggle Over Technology and Prosperity. “They’ve produced reports that are simultaneously rigorous enough to influence academic discourse, accessible enough to inform public understanding, and actionable enough to guide real policy decisions. That combination is extraordinarily difficult to achieve.”

Five Reports That Changed the Conversation

Since September 2026, the Council has published five landmark reports, each addressing a critical dimension of AI’s impact on humanity:

Report #1: “AI and the Future of Work: Beyond Automation Anxiety” (Q2 2027)

  • Examined AI’s impact on 50 occupational categories across 8 countries
  • Provided concrete policy recommendations for labor market transition
  • Cited in 8 national AI workforce strategies
  • Required reading in 20+ university labor economics programs

Report #2: “Democratic Governance in the Age of AI: Preserving Human Agency” (Q4 2027)

  • Analyzed AI’s effects on political discourse, election integrity, and civic participation
  • Proposed governance frameworks balancing innovation and democratic values
  • Influenced AI regulation in 5 countries (including provisions in EU AI Act amendments)
  • Cited in 12 Congressional/Parliamentary testimonies

Report #3: “AI Safety: Technical Challenges and Societal Imperatives” (Q2 2028)

  • First comprehensive report integrating technical AI safety research with societal risk analysis
  • Co-authored by leading AI safety researchers and social scientists
  • Shaped corporate AI safety protocols at 15+ major tech companies
  • Cited by White House Office of Science and Technology Policy

Report #4: “Education Transformed: Learning, Teaching, and Human Development with AI” (Q4 2028)

  • Examined AI’s impact on K-12 education, higher education, and lifelong learning
  • Provided concrete guidance for educators, administrators, and policymakers
  • Adopted by 20+ school districts and 10+ universities as policy framework
  • Downloaded 150,000+ times in its first 6 months

Report #5: “The Flourishing Question: AI, Meaning, and What It Means to Be Human” (Q3 2029)

  • Explored AI’s implications for human purpose, creativity, relationships, and existential meaning
  • Integrated perspectives from philosophy, theology, psychology, and neuroscience
  • Featured in major media outlets (New York Times, The Economist, Financial Times)
  • Sparked global conversation about technology and human values

Each report follows the Council’s distinctive methodology: identify critical questions, assemble relevant experts across all necessary disciplines, facilitate intensive research and deliberation, subject findings to rigorous peer review, and publish reports that translate complex interdisciplinary insights into actionable guidance for individuals, corporations, and governments.

Impact Across Sectors

The Council’s influence extends far beyond academic citations:

Legislative Impact: Council reports have been cited in legislative proceedings in the United States, the European Union, the United Kingdom, South Korea, Japan, Singapore, Canada, Australia, and four other countries. Two Council members have provided testimony to the U.S. Congress, three to parliamentary committees in the UK and EU, and five to national legislative bodies in Asian democracies.

Corporate Governance: More than 30 Fortune 500 companies have adopted AI governance frameworks directly influenced by Council recommendations. Tech giants, financial institutions, healthcare organizations, and manufacturing companies have all incorporated Council guidance into their responsible AI strategies.

Educational Integration: Over 50 universities now include Council reports in graduate curricula across computer science, policy studies, ethics, law, and business programs. Three universities have built entire courses around Council frameworks.

Media and Public Discourse: Council reports and Council members have been featured in The New York Times, The Economist, Financial Times, Wall Street Journal, Washington Post, Nature, Science, Foreign Affairs, and major international publications. Council frameworks have shaped public discourse on AI’s societal implications.

Global Reach: While rooted in Silicon Valley and Korea-US collaboration, the Council’s influence spans North America, Europe, East Asia, and increasingly Latin America, Africa, and Southeast Asia. Reports have been translated into 8 languages.

A New Model of Think Tank Excellence

What makes the AI & Humanity Council’s success particularly notable is its operational model. Unlike traditional think tanks that employ full-time researchers isolated from industry practice, the Council operates as a convening mechanism that brings together:

  • Academic researchers from world-leading universities across 15 disciplines
  • Industry practitioners actually building, deploying, and governing AI systems
  • Policymakers navigating real regulatory and legislative challenges
  • Civil society leaders representing affected communities and public interest
  • Philosophers and ethicists examining fundamental questions of value and meaning

This “practitioner-grounded, academically rigorous, philosophically informed” approach produces reports that are simultaneously:

  • Technically accurate (vetted by leading AI researchers)
  • Empirically grounded (informed by industry practitioners)
  • Philosophically sophisticated (examining fundamental questions)
  • Practically actionable (useful to decision-makers)

“The Council has proven that you don’t need a $100 million endowment and 50 full-time staff to produce world-class thought leadership,” said Sunghee Yun, Chair of K-PAI Nexus and AI & Humanity Council Convener. “What you need is a clear mission, intellectual rigor, the right convening power, and commitment to serving the public good. The Council demonstrates that a relatively lean operation, properly structured, can have outsized global impact.”

Looking Ahead: The Next Chapter

The Council’s leadership announced today that the next phase will focus on three priorities:

1. Deepening Policy Impact: Building on existing relationships with legislative bodies and regulatory agencies, the Council will expand direct advisory relationships with governments worldwide.

2. Expanding Global Reach: While maintaining roots in Silicon Valley and Korea-US collaboration, the Council will establish regional working groups in Europe, East Asia, Southeast Asia, Latin America, and Africa to ensure truly global perspective.

3. Addressing Emerging Challenges: Future reports will tackle AI’s implications for healthcare delivery, climate change mitigation, scientific discovery, creative expression, and the future of human consciousness and identity.

“AI is not slowing down. Its implications grow more profound by the month,” Yun concluded. “The AI & Humanity Council exists to ensure that as AI advances, humanity doesn’t just cope with change—we actively shape it toward human flourishing. The work is just beginning.”

For more information, visit https://nexus-pai.github.io/committee/#ai-and-humanity-council.

FREQUENTLY ASKED QUESTIONS

Mission and Vision

Q1: What is the AI & Humanity Council?

The AI & Humanity Council is a multidisciplinary think tank operating under K-PAI Nexus that examines Artificial Intelligence’s implications for humanity across technology, society, economics, ethics, philosophy, and existential dimensions. The Council produces rigorous, comprehensive reports that translate complex interdisciplinary insights into actionable guidance for policymakers, corporate leaders, educators, and engaged citizens.

Core Mission: Ensure that AI advances human flourishing by illuminating the path between rapid technological transformation and humanity’s deepest values.

Three Mandates:

  1. Research: Analyze AI’s implications across all relevant domains through multidisciplinary inquiry
  2. Advisory: Provide strategic guidance to governments, policymakers, and institutional leaders
  3. Public Engagement: Equip citizens with knowledge and frameworks to shape AI’s role in society

Distinguishing Characteristics:

  • Unprecedented disciplinary integration: 15+ fields genuinely integrated, not just consulted in parallel
  • Practitioner-grounded: Informed by people actually building, deploying, and governing AI
  • Action-oriented: Reports answer “what should we DO?” not just “what is happening?”
  • Philosophically sophisticated: Examines fundamental questions about human values, meaning, and flourishing
  • Fiercely independent: Free from undue influence by any single industry, government, or ideology

Q2: Why does the AI & Humanity Council exist? What gap does it fill?

The Problem: Existing AI discourse is fragmented and incomplete.

Technical AI research produces breakthroughs but rarely examines societal implications. Policy think tanks analyze governance challenges but lack technical depth. Ethics researchers raise important questions but often lack influence on real decisions. Industry practitioners understand implementation but focus on narrow commercial objectives. Philosophers examine deep questions but rarely engage with technical realities.

The Result: Decisions about AI’s trajectory are made by people with incomplete understanding. Technologists build without fully comprehending societal implications. Policymakers regulate without technical literacy. Ethicists critique without understanding constraints. The public feels alienated from decisions that will profoundly affect their lives.

The Council’s Solution: Integrate technical understanding, empirical evidence, policy expertise, ethical reasoning, and philosophical inquiry into comprehensive analysis that serves decision-makers across all sectors.

Specific Gaps the Council Fills:

  1. Stanford HAI, MIT CSAIL, Berkeley BAIR produce excellent technical AI research, but don’t systematically integrate philosophy, theology, psychology, or existential questions. The Council does.

  2. Brookings, CFR, Center for American Progress analyze policy implications, but lack the technical depth and Silicon Valley practitioner grounding the Council provides.

  3. Partnership on AI, AI Now Institute focus on specific dimensions (ethics, social justice), but don’t attempt the Council’s comprehensive integration across all domains.

  4. Korean think tanks (KISDI, KISTEP, ETRI) provide important regional perspective, but lack Silicon Valley ecosystem access and global convening power.

  5. Corporate research labs (DeepMind, Anthropic, OpenAI) have technical excellence but inherent commercial interests that limit independence.

The Council is the only institution that:

  • Integrates 15+ disciplines genuinely (not just in parallel)
  • Grounds analysis in both academic rigor AND practitioner insight
  • Maintains fierce independence from commercial and governmental pressures
  • Commits to actionable guidance, not just analysis
  • Operates from Korea-US bilateral foundation while serving universal human questions
  • Makes work accessible to general public while maintaining scholarly rigor

Q3: What are the Council’s core operating principles?

The Council operates according to five non-negotiable principles:

1. Intellectual Rigor

  • All claims grounded in evidence and sound reasoning
  • Peer review by leading experts in relevant fields
  • Transparent methodology and citation of sources
  • Willingness to revise positions when evidence warrants

2. Independence

  • Free from undue influence by any single industry, government, or ideological agenda
  • Funding sources disclosed transparently
  • Council members recuse themselves from topics where they have material conflicts
  • No corporate sponsor can veto or substantially alter report findings

3. Comprehensive Scope

  • AI’s implications examined across all relevant dimensions (technology, society, economy, ethics, philosophy, existential meaning)
  • Refusal to reduce complex questions to narrow technical or economic frames
  • Integration of perspectives from 15+ disciplines
  • Attention to both immediate challenges and long-term implications

4. Practical Wisdom

  • Analysis translated into actionable guidance
  • Reports useful to actual decision-makers (policymakers, executives, educators, citizens)
  • Balance between ideal recommendations and politically/economically feasible approaches
  • Recognition that perfect solutions rarely exist; wisdom lies in navigating tradeoffs

5. Human-Centered Values

  • AI’s progress measured by contribution to human dignity and flourishing
  • Centering questions about meaning, purpose, relationships, creativity, and what constitutes a good life
  • Recognition that technology serves humanity, not the reverse
  • Commitment to ensuring AI benefits all humanity, not just elites

These principles are not aspirational—they’re operational. Every Council report, every advisory engagement, every public communication must honor all five principles or it doesn’t represent the Council.

Structure and Governance

Q4: How is the AI & Humanity Council structured and governed?

Organizational Structure:

The AI & Humanity Council operates under K-PAI Nexus (a California 501(c)(3) nonprofit) but maintains intellectual and operational independence. This structure provides:

  • Legal and financial infrastructure (K-PAI Nexus)
  • Community access and practitioner grounding (K-PAI’s 2,000+ members)
  • Institutional partnerships (K-PAI’s 30+ MOUs)
  • Independence in topic selection and findings (Council autonomy)

Governance Model:

Council Chair (Appointed by K-PAI Nexus Board)

  • Overall vision and strategic direction
  • Final authority on report topics and timelines
  • Primary spokesperson for Council
  • Currently: Sunghee Yun

Executive Committee (5-7 members)

  • Approves report topics and research agendas
  • Reviews draft reports before peer review
  • Ensures quality and consistency across reports
  • Manages budget and operations
  • Appointed by Council Chair with K-PAI Nexus Board approval

Expert Working Groups (Formed per report)

  • 8-15 experts assembled for each specific report
  • Chosen for relevant disciplinary expertise and independence
  • Conduct research, deliberate findings, draft report sections
  • Dissolved after report publication

Advisory Board (15-20 distinguished members)

  • Provide strategic guidance on topics and methodology
  • Review reports in draft form
  • Expand Council’s reach and credibility
  • No operational authority (advisory only)

Peer Review Panel (3-5 experts per report)

  • External reviewers not involved in report drafting
  • Provide critical feedback before publication
  • Ensure intellectual rigor and accuracy
  • Names published with report (transparency)

This structure balances:

  • Strategic coherence (Chair and Executive Committee)
  • Expert depth (Working Groups)
  • External validation (Advisory Board and Peer Review)
  • Operational efficiency (lean permanent staff)

Q5: Who are the experts? How are they selected?

Target Expert Profile:

The Council seeks experts who combine:

  1. Disciplinary excellence: Leading scholars/practitioners in their field
  2. Intellectual openness: Willing to integrate insights across disciplines
  3. Communication ability: Can explain complex ideas to non-experts
  4. Independence: Free from conflicts that would compromise objectivity
  5. Commitment to public good: Motivated by service, not just credentials

Disciplinary Coverage (15 Core Fields):

Technology & Engineering:

  • Computer Science (AI/ML, algorithms, systems)
  • Software Engineering (deployment, safety, testing)
  • Hardware Engineering (semiconductors, compute infrastructure)

Human Sciences:

  • Cognitive Science (human cognition, decision-making)
  • Psychology (individual and social psychology)
  • Neuroscience (brain function, consciousness)

Philosophy & Ethics:

  • Philosophy (epistemology, metaphysics, philosophy of mind)
  • Ethics (moral philosophy, applied ethics)
  • Theology/Religious Studies (meaning, purpose, transcendence)

Social Sciences:

  • Economics (labor markets, inequality, growth)
  • Sociology (social structures, cultural change)
  • Political Science (governance, power, institutions)

Professional & Applied Fields:

  • Law (regulation, liability, rights)
  • Policy Studies (public administration, regulatory design)
  • Medicine/Public Health (healthcare delivery, bioethics)

Selection Process:

For each report, the Council:

  1. Identifies Required Expertise — Which disciplines are essential for this topic?
  2. Nominates Candidates — Executive Committee proposes 3-5 candidates per discipline
  3. Evaluates Fit — Assess disciplinary excellence, independence, communication ability, availability
  4. Extends Invitations — Typically invite 12-20 experts, expecting 8-15 to accept
  5. Assembles Working Group — Confirm participation, disclose conflicts, establish timeline

Target Institutions (Examples):

US Universities: Stanford, MIT, Harvard, Berkeley, Princeton, Yale, CMU, Chicago, NYU, Columbia, UPenn, Duke, Northwestern, UT Austin, UCLA, UCSD, Georgia Tech

International Universities: Oxford, Cambridge, ETH Zurich, Tsinghua, Seoul National University, KAIST, POSTECH, Korea University, Yonsei University, Tokyo University, NUS, Hebrew University

Research Institutions: Allen Institute, Santa Fe Institute, Broad Institute, Cold Spring Harbor Laboratory, Institute for Advanced Study, Center for Advanced Study in the Behavioral Sciences

Think Tanks: RAND, Brookings, CFR, Carnegie Endowment, Center for American Progress, Hoover Institution, Cato Institute (diverse ideological spectrum)

Industry: Leading AI labs (OpenAI, Anthropic, DeepMind, Meta AI, Google Research, Microsoft Research), major tech companies, startups

Government: Former/current policymakers, regulators, legislative staff (participating as individuals, not official representatives)

Q6: What is the Council’s relationship to K-PAI Nexus?

Organizational Relationship:

The AI & Humanity Council is a program of K-PAI Nexus but operates with intellectual and operational independence.

What K-PAI Nexus Provides:

  • Legal entity (501(c)(3) nonprofit status)
  • Financial infrastructure (accounting, tax compliance, grant management)
  • Community access (2,000+ members for practitioner insight)
  • Institutional partnerships (30+ MOUs for convening power)
  • Operational support (meeting coordination, logistics)
  • Distribution platform (forums, website, member network)

What the Council Controls:

  • Report topics and research agendas (subject to Executive Committee approval)
  • Expert selection and Working Group composition
  • Research methodology and analytical frameworks
  • Report findings and recommendations
  • Publication decisions and timing
  • Public statements and media engagement

Why This Structure Works:

Many think tanks struggle because they’re either:

  • Too isolated (no community grounding, no practitioner access)
  • Too captured (commercial sponsors influence findings)

The Council gets the best of both worlds:

  • Community grounding (K-PAI’s 2,000+ members provide real-world insight)
  • Independence (findings cannot be vetoed by sponsors or members)

Potential Tensions:

What if a Council report criticizes a K-PAI Nexus corporate sponsor or MOU partner?

The Answer: The Council publishes the report. K-PAI Nexus’s bylaws explicitly protect Council independence. If this creates friction with sponsors, that’s a cost K-PAI accepts to maintain credibility. A think tank that censors findings to please sponsors has no credibility.

This is written into Council governance documents: No sponsor, partner, or board member can veto or substantially alter Council findings. They can comment during peer review (like any expert), but final decisions rest with the Council Chair and Executive Committee.

Methodology and Process

Q7: How does the Council produce a report? What’s the process?

Report Production Process (Typical Timeline: 6-9 months)

Phase 1: Topic Selection & Scoping (4-6 weeks)

Weeks 1-2: Topic Identification

  • Council Chair and Executive Committee identify critical questions
  • Input from K-PAI community, Advisory Board, external experts
  • Prioritization based on: urgency, impact potential, disciplinary fit, resource availability

Weeks 3-4: Scoping Workshop

  • Convene 5-8 preliminary experts for 1-day workshop
  • Define key questions the report must address
  • Identify required disciplines and expertise
  • Develop preliminary outline and research agenda
  • Estimate budget and timeline

Weeks 5-6: Approval and Planning

  • Executive Committee reviews and approves scope
  • Budget finalized
  • Timeline established
  • Expert recruitment begins

Phase 2: Expert Assembly & Kickoff (4-6 weeks)

Weeks 1-3: Expert Recruitment

  • Identify 3-5 candidates per required discipline
  • Extend invitations with scope, timeline, compensation
  • Confirm participation from 8-15 experts
  • Disclose conflicts of interest

Weeks 4-5: Background Research

  • Council staff compile relevant literature, prior reports, data sources
  • Distribute to Working Group members
  • Members conduct preliminary reading and preparation

Week 6: Kickoff Convening (2-3 days, in-person preferred)

  • Day 1: Presentations by each expert on their discipline’s perspective
  • Day 2: Identify tensions, tradeoffs, open questions
  • Day 3: Outline report structure, assign section leads, establish work plan

Phase 3: Research & Deliberation (12-16 weeks)

Weeks 1-8: Initial Research

  • Working Group members draft assigned sections
  • Regular virtual meetings (bi-weekly, 2 hours)
  • Integration calls between section leads
  • Council staff support: literature review, data analysis, coordination

Weeks 9-12: Integration & Deliberation

  • Mid-process in-person convening (2 days)
  • Present draft sections, identify gaps and contradictions
  • Deliberate contested questions and competing frameworks
  • Revise outline if needed, reassign sections

Weeks 13-16: Synthesis

  • Working Group synthesizes sections into coherent narrative
  • Draft executive summary and policy recommendations
  • Internal review by Executive Committee
  • Revisions based on feedback

Phase 4: Peer Review & Revision (6-8 weeks)

Weeks 1-2: Peer Reviewer Selection

  • Identify 3-5 leading experts NOT involved in drafting
  • Ensure representation of relevant disciplines
  • Confirm availability and independence

Weeks 3-5: External Review

  • Distribute draft report to peer reviewers
  • Reviewers provide written feedback (typically 5-10 pages each)
  • Focus on: accuracy, completeness, clarity, actionability

Weeks 6-8: Revision

  • Working Group addresses peer review feedback
  • Revise draft (substantial changes often required)
  • Final review by Council Chair and Executive Committee
  • Copyediting and fact-checking

Phase 5: Publication & Distribution (4-6 weeks)

Weeks 1-2: Pre-Publication Preparation

  • Final copyedit and formatting
  • Prepare executive summary (10-15 pages)
  • Prepare short version for policymakers (4-6 pages)
  • Design graphics and visualizations
  • Media strategy and outreach planning

Week 3: Launch

  • Publish report on K-PAI Nexus website
  • Press release and media outreach
  • Presentation at K-PAI forum
  • Distribution to Advisory Board, partners, policymakers
  • Social media campaign

Weeks 4-6: Amplification

  • Op-eds by Working Group members in major publications
  • Presentations at conferences and universities
  • Briefings for Congressional/Parliamentary staff
  • Webinars and Q&A sessions
  • Translation into priority languages

Ongoing: Impact Tracking

  • Monitor citations in academic literature
  • Track media coverage
  • Document policy influence (legislation, regulation, corporate adoption)
  • Update impact metrics quarterly

This process produces reports that are:

  • Rigorous (peer-reviewed by leading experts)
  • Comprehensive (15+ disciplines integrated)
  • Actionable (concrete recommendations for decision-makers)
  • Accessible (executive summary for general audience, full report for specialists)
  • Impactful (strategic distribution to policymakers, media, educators)
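The phased process above can be sketched as a simple sequential schedule model. This is a hypothetical illustration: the phase names and week ranges come from this document, while the `Phase` class and `timeline_weeks` helper are assumptions introduced for the sketch, not actual Council tooling.

```python
from dataclasses import dataclass

# Illustrative model of the five-phase report pipeline described above.
# Phase names and week ranges are taken from the document; the rest is
# a hypothetical sketch.

@dataclass(frozen=True)
class Phase:
    name: str
    min_weeks: int
    max_weeks: int

PIPELINE = [
    Phase("Topic Selection & Scoping", 4, 6),
    Phase("Expert Assembly & Kickoff", 4, 6),
    Phase("Research & Deliberation", 12, 16),
    Phase("Peer Review & Revision", 6, 8),
    Phase("Publication & Distribution", 4, 6),
]

def timeline_weeks(phases):
    """Return (min, max) total weeks if phases run strictly back-to-back."""
    return (sum(p.min_weeks for p in phases),
            sum(p.max_weeks for p in phases))

lo, hi = timeline_weeks(PIPELINE)
print(f"{lo}-{hi} weeks")  # → 30-42 weeks
```

Run strictly sequentially, the phases sum to 30-42 weeks; in practice some steps overlap (e.g. expert recruitment can begin during scoping), which is consistent with the "typical timeline" of 6-9 months.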

Q8: How does the Council ensure intellectual rigor and quality?

Quality Assurance Mechanisms:

1. Expert Selection

  • Only invite leading scholars/practitioners in their fields
  • Verify credentials, publications, reputation
  • Check for conflicts of interest
  • Ensure diversity of perspectives (avoid ideological echo chamber)

2. Peer Review

  • Every report reviewed by 3-5 external experts before publication
  • Reviewers chosen for disciplinary expertise and independence
  • Written feedback (typically 5-10 pages per reviewer)
  • Working Group must address all substantive criticisms
  • Reviewer names published with report (accountability)

3. Evidence Standards

  • All empirical claims backed by data or research
  • Citations to peer-reviewed literature where available
  • Transparent methodology (how data analyzed, how conclusions drawn)
  • Acknowledgment of uncertainty where evidence incomplete
  • Clear distinction between established facts and informed speculation

4. Internal Review

  • Executive Committee reviews drafts before peer review
  • Council Chair has final approval authority
  • Multiple Working Group members review each section (not just section lead)
  • Integration meetings ensure coherence across sections

5. Stakeholder Feedback (without capture)

  • Circulate draft to K-PAI Nexus members for practitioner feedback
  • Drafts are NOT subject to member approval (feedback is considered, not binding)
  • Identify practical implementation challenges
  • Reality-check policy recommendations

6. Adversarial Collaboration

  • Deliberately include experts with different perspectives
  • Surface disagreements explicitly
  • Represent competing views fairly before arguing for conclusions
  • When consensus impossible, present strongest arguments for each position

7. Revision Standards

  • Major revisions are typical after peer review (not a rubber stamp)
  • Track changes documented
  • Substantive criticisms addressed (not dismissed)
  • Final report demonstrably improved from initial draft

Red Lines (Non-negotiable Quality Standards):

A report CANNOT be published if:

  • Key empirical claims lack evidence
  • Methodology is opaque or flawed
  • Peer reviewers identify fundamental errors that aren’t corrected
  • Report lacks actionable recommendations
  • Writing is inaccessible to the intended audience
  • Conflicts of interest not disclosed
  • Critical perspectives systematically excluded

These aren’t aspirational. They’re operational standards.

If a report doesn’t meet these standards, it doesn’t get published—even if that means missing deadlines, disappointing sponsors, or frustrating Working Group members. Reputation is the Council’s only asset. One sloppy report destroys credibility that takes years to build.
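The red lines above amount to a go/no-go checklist applied before publication. A minimal sketch of such a gate, assuming hypothetical field names (the red-line descriptions mirror the document; the `publication_gate` function itself is illustrative, not actual Council process tooling):

```python
# Hypothetical publication gate implied by the red lines above.
# Each key is an assumed checklist field; each value is the red line
# it corresponds to in the document.

RED_LINES = {
    "empirical_claims_evidenced": "Key empirical claims lack evidence",
    "methodology_transparent": "Methodology is opaque or flawed",
    "peer_review_errors_corrected": "Fundamental errors from peer review remain uncorrected",
    "has_actionable_recommendations": "Report lacks actionable recommendations",
    "accessible_to_audience": "Writing is inaccessible to the intended audience",
    "conflicts_disclosed": "Conflicts of interest not disclosed",
    "critical_perspectives_included": "Critical perspectives systematically excluded",
}

def publication_gate(checklist: dict) -> list:
    """Return the red-line failures; publish only if the list is empty."""
    return [msg for key, msg in RED_LINES.items()
            if not checklist.get(key, False)]

# Example: a draft that meets every standard except conflict disclosure.
draft = {key: True for key in RED_LINES}
draft["conflicts_disclosed"] = False
failures = publication_gate(draft)
# A single unmet standard is enough to block publication.
```

The point the sketch makes explicit: the gate is conjunctive, so one failure blocks publication regardless of how well the report scores elsewhere.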

Q9: How does the Council balance rigor with accessibility?

This is one of the hardest challenges: producing work that is simultaneously rigorous enough for experts and accessible enough for general audiences.

The Solution: Layered Communication

Layer 1: Executive Summary (10-15 pages)

  • Audience: Policymakers, executives, journalists, engaged citizens
  • Tone: Accessible but substantive
  • Content: Key findings, main arguments, core recommendations
  • No jargon: Technical terms explained when necessary
  • Visuals: Charts, diagrams, infographics to illustrate concepts
  • Call to action: Clear guidance on what different stakeholders should do

Layer 2: Full Report (80-150 pages)

  • Audience: Specialists, researchers, graduate students, serious readers
  • Tone: Rigorous but still readable
  • Content: Complete analysis, methodology, evidence, counterarguments
  • Citations: Full references to academic literature
  • Technical depth: Detailed arguments and data analysis
  • Nuance: Complexities, uncertainties, competing interpretations

Layer 3: Technical Appendices (online only, variable length)

  • Audience: Domain experts, peer reviewers, fact-checkers
  • Content: Detailed methodology, raw data, mathematical models, complete literature review
  • Purpose: Full transparency and reproducibility
  • Not required reading: Most readers skip this layer

Layer 4: Policymaker Brief (4-6 pages)

  • Audience: Congressional/Parliamentary staff, regulatory agencies
  • Format: Bullet points, key findings, specific policy recommendations
  • Tone: Direct, action-oriented
  • Focus: What should government do? What are tradeoffs?

Layer 5: Public-Facing Pieces

  • Op-eds: 800-1000 words in major publications (NYT, WSJ, FT, Economist)
  • Blog posts: 1500-2500 words explaining key insights for general readers
  • Webinars/Videos: 45-60 minute presentations with Q&A
  • Infographics: Visual summaries shareable on social media

The Strategy:

  • Everyone reads Executive Summary (accessible entry point)
  • Specialists read Full Report (rigorous analysis)
  • Experts check Technical Appendices (full transparency)
  • Policymakers use Policymaker Brief (actionable guidance)
  • Public engages via Op-eds/videos/infographics (broad reach)

This approach means we’re not choosing between rigor and accessibility—we’re providing both, at different layers, for different audiences.

Writing Standards:

Even the Full Report should be readable. This means:

  • Clear, direct prose (no unnecessary jargon)
  • Define technical terms when first introduced
  • Use concrete examples to illustrate abstract concepts
  • Break long sections with subheadings
  • Summarize key points at section transitions
  • Avoid passive voice and nominalization

Academic rigor doesn’t require impenetrable prose. If an argument can’t be explained clearly, it probably isn’t well understood.

Impact and Distribution

Q10: How does the Council measure success and impact?

Impact Metrics (Tracked Quarterly):

Tier 1: Direct Policy Influence (Highest Value)

  • Legislative citations (laws, regulations, government reports)
  • Congressional/Parliamentary testimony invitations
  • Agency consultation requests
  • International organization adoption (UN, OECD, EU, etc.)
  • Target by 2029: 12 countries, 20+ legislative citations, 15+ testimonies

Tier 2: Institutional Adoption

  • Corporate AI governance frameworks influenced by Council recommendations
  • University curricula incorporating Council reports
  • Professional association standards updated based on Council guidance
  • Target by 2029: 30 Fortune 500 companies, 50 universities, 10 professional associations

Tier 3: Academic and Media Impact

  • Academic citations in peer-reviewed literature
  • Major media coverage (NYT, WSJ, Economist, FT, Nature, Science, Foreign Affairs)
  • Invitations to present at major conferences
  • Target by 2029: 200+ academic citations, 100+ major media mentions, 50+ conference presentations

Tier 4: Public Engagement

  • Report downloads and views
  • Webinar/video viewership
  • Social media reach and engagement
  • Website traffic
  • Target by 2029: 500,000+ report downloads, 100,000+ webinar views, 1M+ social media reach

Tier 5: Community Integration

  • K-PAI Nexus members citing Council work
  • Interest group discussions sparked by reports
  • Educational programs built around Council frameworks
  • Target by 2029: 80% of K-PAI members aware of Council reports, 20+ interest groups engaging with findings

Qualitative Impact Indicators:

Beyond metrics, the Council tracks:

  • Policy windows: Did a report arrive at the moment when policymakers needed guidance?
  • Narrative shift: Did a report change how media/public discusses an issue?
  • Coalition building: Did a report bring together strange bedfellows around a shared framework?
  • Decision deflection: Did a report prevent bad policy by articulating hidden costs?

What Success Looks Like (Concrete Examples):

  • Strong Success: California passes AI regulation incorporating Council framework verbatim
  • Medium Success: Corporate AI ethics board cites Council report when designing governance structure
  • Weak Success: Journalist reads Council report and writes article explaining issue more accurately

What Success Does NOT Look Like:

  • High download numbers but no policy/institutional influence
  • Academic citations but no real-world decision impact
  • Media coverage but no substantive engagement with arguments

The North Star Metric:

If we had to choose ONE metric, it would be: Number of consequential decisions (policy, corporate, institutional) demonstrably influenced by Council work.

This is hard to measure precisely, but it’s what actually matters. The Council exists to improve decisions about AI’s role in society. Everything else is intermediate.

Q11: What is the distribution and amplification strategy?

Distribution Strategy:

The Council doesn’t just publish reports and hope people read them. We actively distribute and amplify through multiple channels:

Channel 1: Direct Policymaker Outreach

Before Publication:

  • Identify 20-30 key policymakers who should see the report
  • Congressional/Parliamentary staff, regulatory agencies, executive branch officials
  • International organizations (UN, OECD, EU institutions)
  • Cultivate relationships through ongoing briefings (not just when reports drop)

At Publication:

  • Send personalized copies with cover letter explaining relevance to their work
  • Offer briefings (30-60 minutes) to explain findings and answer questions
  • Provide Policymaker Brief (4-6 pages) with specific recommendations

Post-Publication:

  • Respond quickly to requests for testimony or consultation
  • Track when policymakers cite our work, acknowledge and thank them
  • Build long-term relationships (not transactional)

Channel 2: Media Strategy

Pre-Launch (2-3 weeks before):

  • Identify 10-15 journalists who cover relevant topics
  • Offer embargoed advance copies to selected journalists
  • Prepare press release, fact sheets, quotes

Launch Day:

  • Coordinated press release
  • Op-eds by Working Group members in 3-5 major publications (NYT, WSJ, Economist, FT, Foreign Affairs)
  • Press briefing or webinar for journalists
  • Social media campaign with key findings and visuals

Post-Launch (4-6 weeks):

  • Pitch follow-up stories to media (“our report predicted this development”)
  • Respond quickly to journalist inquiries
  • Op-ed placements in second-tier publications
  • Podcast appearances by Working Group members

Channel 3: Academic Distribution

  • Submit to relevant academic journals as policy papers or perspectives
  • Present at major conferences (NeurIPS, AAAI, FAccT, AIES for technical; APSA, ASA, APA for social science)
  • Seminar series at universities (Stanford, MIT, Harvard, Berkeley, etc.)
  • Working paper series (SSRN, arXiv where appropriate)

Channel 4: Corporate Engagement

  • Directly contact CSR/Ethics leaders at Fortune 500 companies
  • Present at industry conferences and executive education programs
  • Partner with corporate responsibility associations
  • Offer briefings to C-suite and boards

Channel 5: Educational Integration

  • Develop teaching materials for professors (syllabi, case studies, discussion questions)
  • Partner with universities to build courses around Council frameworks
  • Guest lectures by Working Group members
  • Online courses or modules (Coursera, edX, etc.)

Channel 6: K-PAI Nexus Community

  • Present at K-PAI forums (2,000+ member audience)
  • Distribute through K-PAI Nexus Members chatroom
  • Interest group discussions and deep dives
  • Follow-up workshops on specific report sections

Channel 7: International Translation

Priority languages for translation:

  • Korean (given K-PAI’s Korea-US roots)
  • Spanish (Latin America reach)
  • Mandarin (China/Taiwan reach, where politically feasible)
  • French (EU and Africa reach)
  • Japanese (East Asia reach)
  • German (EU reach)
  • Portuguese (Brazil reach)
  • Arabic (Middle East reach, selective reports)

The Amplification Principle:

A great report that nobody reads has zero impact. Distribution is not an afterthought—it’s as important as the research itself. We budget 20-30% of report resources for distribution and amplification.

Q12: How will the Council build relationships with policymakers?

The DC Connection Strategy (3-Phase Approach)

Phase 1: Credibility Building (2026-2027)

Goal: Establish Council as serious, rigorous, independent voice

Tactics:

  • Publish first 2 reports demonstrating quality
  • Submit reports to relevant Congressional committees and agencies
  • Attend and present at policy conferences (Brookings, CFR, AEI, etc.)
  • Build relationships with legislative staff (not just members)
  • Op-eds in policy-focused publications (Foreign Affairs, Washington Post, The Hill)
  • Avoid partisan positioning (respected by both sides)

Metrics:

  • Report downloads by .gov domains
  • Invitations to speak at policy events
  • Citations in Congressional Research Service reports
  • Relationships with 20+ Congressional/agency staff

Phase 2: Policy Influence (2027-2028)

Goal: Council work actively cited in policy discussions

Tactics:

  • Congressional testimony (when invited)
  • Direct briefings to key committees (Senate Commerce, House Energy & Commerce, etc.)
  • Agency consultation (SEC, FTC, CFPB, FDA, etc. on AI-relevant rulemakings)
  • Partnership with established DC think tanks (co-author reports, joint events)
  • Regular DC presence (quarterly trips by Council Chair/members)
  • Media quotes linking Council work to current policy debates

Metrics:

  • 5+ testimony invitations
  • Citations in 3+ legislative bills or regulatory proceedings
  • Partnership with 2+ major DC think tanks
  • Council work cited in floor speeches or committee hearings

Phase 3: Established Influence (2028+)

Goal: Council as go-to resource on AI policy questions

Tactics:

  • Standing relationships with key committees (regular briefings)
  • Federal agency advisory roles
  • International organization participation (OECD, UN, etc.)
  • White House OSTP consultation
  • Amicus briefs in relevant court cases
  • Regular media presence on AI policy issues

Metrics:

  • 10+ testimonies per year
  • Council member on federal advisory committee
  • Report citations in 5+ enacted laws or major regulations
  • Regular White House consultation

The CFR Analogy:

The Council on Foreign Relations (CFR) took 30+ years to become the definitive foreign policy voice. We’re attempting to compress that timeline through:

  • Narrow focus (AI only, not all technology policy)
  • Critical timing window (AI policy being written NOW)
  • Existing networks (K-PAI’s Silicon Valley and Korea connections)
  • Quality over quantity (5 exceptional reports > 20 mediocre ones)

Key Relationships to Cultivate:

Congressional:

  • Senate Commerce Committee staff (tech regulation)
  • House Energy & Commerce Committee staff (AI oversight)
  • House Science Committee staff (research funding)
  • Senate Judiciary Committee staff (liability, rights)

Executive Branch:

  • White House Office of Science and Technology Policy (OSTP)
  • National Science Foundation (research funding)
  • National Institute of Standards and Technology (NIST) (AI standards)
  • Federal Trade Commission (consumer protection)
  • Securities and Exchange Commission (corporate governance)

International:

  • OECD AI Policy Observatory
  • UN Secretary-General’s AI Advisory Body
  • European Commission DG CONNECT
  • UK AI Safety Institute

The goal is not to be a lobbying organization. We don’t advocate for specific bills or companies. We provide rigorous, independent analysis that policymakers can trust, regardless of party or ideology.

Operations and Resources

Q13: What are the resource requirements? How is the Council funded?

Budget Model (Annual Operating Budget)

Year 1 (2026): $400,000

  • Two reports
  • Establishing processes and relationships
  • Limited staff

Year 2 (2027): $750,000

  • Three reports
  • Increased staff support
  • Expanded distribution

Year 3 (2028): $1,200,000

  • Four reports
  • Full-time Director of Operations
  • International expansion

Year 4 (2029): $1,500,000

  • Five reports
  • Established infrastructure
  • Global reach

Budget Allocation (Steady State, ~$1.5M):

Research & Expert Compensation: 50% (~$750K)

  • Expert honoraria ($5K-15K per person depending on time commitment)
  • Travel for convenings (2-3 in-person meetings per report)
  • Research support (data access, specialized analysis)
  • Peer review compensation

Staff & Operations: 30% (~$450K)

  • Director of Operations (full-time)
  • Research Manager (full-time)
  • Administrative Coordinator (full-time)
  • Communications Manager (part-time)
  • Office costs, software, services

Distribution & Amplification: 15% (~$225K)

  • Media outreach and PR
  • Translation services
  • Conference travel and presentations
  • Video production, graphics, web design
  • Paid promotion (selective)

Institutional Support: 5% (~$75K)

  • Legal and accounting services
  • Board meetings and governance
  • Strategic planning
  • Miscellaneous overhead
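
The steady-state allocation above is simple percentage arithmetic, and it reconciles exactly. As a minimal illustrative sketch (the category names, percentages, and $1.5M total are taken from this section; the variable names and script itself are hypothetical), the split can be verified in a few lines of Python:

```python
# Sanity check of the steady-state budget split described above.
# All figures are the illustrative targets from this section, not actuals.

TOTAL_BUDGET = 1_500_000  # steady-state annual budget (~$1.5M)

# Allocation shares, in whole percentage points
ALLOCATION = {
    "Research & Expert Compensation": 50,
    "Staff & Operations": 30,
    "Distribution & Amplification": 15,
    "Institutional Support": 5,
}

# Shares must cover the entire budget, no more and no less
assert sum(ALLOCATION.values()) == 100

# Dollar amounts per category (integer math keeps the totals exact)
amounts = {name: TOTAL_BUDGET * pct // 100 for name, pct in ALLOCATION.items()}
assert sum(amounts.values()) == TOTAL_BUDGET

for name, amount in amounts.items():
    print(f"{name}: ${amount:,}")  # e.g. Research & Expert Compensation: $750,000
```

The same check generalizes to the funding-source targets below (50/20/20/10), which also sum to 100%.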

Funding Sources:

Foundation Grants (Target: 50% of budget)

Priority Foundations:

  • Open Philanthropy (technology and global priorities)
  • Effective Ventures Foundation (AI safety, long-term impact)
  • Omidyar Network (responsible technology)
  • MacArthur Foundation (technology and human rights)
  • Ford Foundation (inequality and technology)
  • Carnegie Corporation (international peace and security)
  • Hewlett Foundation (economy and society)
  • Sloan Foundation (technology and society)

Government Grants (Target: 20% of budget)

  • National Science Foundation (AI research)
  • State of California grants (technology policy)
  • International organization funding (OECD, UN, etc.)

Corporate Sponsorship (Target: 20% of budget, with safeguards)

  • Technology companies ($25K-100K per year)
  • Financial institutions, healthcare, manufacturing (AI users)
  • Critical safeguards:
    • No single sponsor >10% of budget (avoid capture)
    • All sponsors disclosed publicly
    • No sponsor can veto or substantially alter findings
    • Corporate Advisory Board (input) separate from governance (control)

K-PAI Nexus General Operating Budget (Target: 10% of budget)

  • Infrastructure support
  • Staff time allocation
  • Community access

Funding Principles:

  1. Diversification: No single source >20% of budget
  2. Transparency: All funding sources disclosed publicly
  3. Independence: Governance documents prohibit sponsor control of findings
  4. Mission-alignment: Only accept funding aligned with Council mission
  5. Long-term sustainability: Build endowment over time (target: $10M by 2032)

What if we can’t raise full budget?

The Council scales activity to available resources:

  • Fewer reports (quality over quantity)
  • Longer timelines (maintain rigor)
  • Smaller Working Groups (still multidisciplinary)
  • Reduced distribution (focus on highest-impact channels)

What the Council will NOT do:

  • Accept corporate funding that creates conflicts of interest
  • Reduce quality to meet deadlines or budgets
  • Expand faster than we can maintain excellence
  • Chase funding that diverts from core mission

Q14: What are the staffing requirements and timeline?

Staff Structure (Build-Out Over 4 Years):

Year 1 (2026): Minimal Staff (2.5 FTE)

Council Chair (0.25 FTE, Sunghee Yun)

  • Strategic direction and vision
  • Expert recruitment
  • Primary spokesperson
  • Final approval authority

Research Manager (1.0 FTE, new hire)

  • Coordinate Working Groups
  • Manage report production process
  • Literature review and background research
  • Quality assurance

Administrative Coordinator (0.5 FTE, K-PAI Nexus shared)

  • Logistics and scheduling
  • Budget tracking
  • Travel coordination
  • Meeting support

Communications Support (0.5 FTE, K-PAI Nexus shared)

  • Media outreach
  • Social media
  • Website updates

Plus: K-PAI Nexus infrastructure (accounting, legal, HR, IT)

Year 2 (2027): Expanded Operations (4.0 FTE)

Add: Director of Operations (1.0 FTE, new hire)

  • Overall operations management
  • Budget and grant management
  • Staff supervision
  • Process optimization

Communications Manager (0.5 FTE, upgrade from shared)

  • Media strategy and outreach
  • Op-ed placement
  • Public-facing content

Year 3 (2028): Professional Infrastructure (5.5 FTE)

Add: Policy Director (1.0 FTE, new hire)

  • Policymaker relationships
  • Congressional/agency outreach
  • Testimony preparation
  • DC presence

Additional Research Support (0.5 FTE)

  • Data analysis
  • Fact-checking
  • Appendix preparation

Year 4 (2029): Mature Operations (7.0 FTE)

Add: International Coordinator (1.0 FTE, new hire)

  • Global partnerships
  • Translation management
  • Regional convenings
  • International distribution

Development Director (0.5 FTE, new hire)

  • Fundraising strategy
  • Grant writing
  • Donor relationships
  • Endowment building

Hiring Principles:

  1. Expertise over credentials: Value demonstrated ability over pedigree
  2. Mission alignment: Only hire people genuinely committed to public good
  3. Intellectual humility: Seek people who can change their minds based on evidence
  4. Diversity: Multiple perspectives, backgrounds, disciplines
  5. Operational excellence: High standards for quality and professionalism

Compensation Philosophy:

  • Competitive with nonprofit sector (not tech company levels)
  • Philosophy: Pay enough to attract excellent people, not so much that money becomes the motivation
  • Transparency: All salaries disclosed to Board
  • Equity: Compensation bands by role, not individual negotiation

Q15: What is the 4-year roadmap and key milestones?

2026: Launch Year — Establishing Foundation

Q2 2026:

  • ✅ K-PAI Nexus Board approves AI & Humanity Council formation
  • ✅ Council Chair appointed (Sunghee Yun)
  • Hire Research Manager (June)
  • Form Executive Committee (5 members)
  • Develop governance documents and operating procedures

Q3 2026:

  • Recruit Advisory Board (15 members)
  • Select first report topic: “AI and the Future of Work”
  • Assemble Working Group (12 experts)
  • Kickoff convening (September, 3 days in-person)
  • Launch Council website and communications

Q4 2026:

  • Report #1 research and deliberation phase
  • Begin fundraising (foundation grant applications)
  • Present Council vision at K-PAI forums
  • Initial media outreach

Success Metrics for 2026:

  • Council launched and operational ✓
  • First report in production ✓
  • $300K funding secured ✓
  • 10 media mentions ✓

2027: Credibility Building — Proving the Model

Q1 2027:

  • Continue Report #1 work
  • Begin Report #2 scoping: “Democratic Governance in the Age of AI”

Q2 2027:

  • Publish Report #1: “AI and the Future of Work” (May)
  • Major launch event at K-PAI forum
  • Op-eds in NYT, WSJ, Economist
  • Congressional staff briefings (5+)

Q3 2027:

  • Report #2 Working Group assembly and kickoff
  • Report #1 amplification (presentations, webinars, media)
  • First Congressional testimony invitation
  • Begin Report #3 scoping: “AI Safety”

Q4 2027:

  • Report #2 research and deliberation
  • Report #3 Working Group assembly
  • Year-end fundraising push
  • Impact assessment for Report #1

Success Metrics for 2027:

  • First report published, second in production ✓
  • 5+ Congressional/agency briefings ✓
  • 3+ academic citations ✓
  • 10+ university adoptions ✓
  • $600K funding secured ✓

2028: Scaling Impact — From Reports to Influence

Q1 2028:

  • Hire Director of Operations (January)
  • Upgrade Communications Manager to full-time
  • Publish Report #2: “Democratic Governance in the Age of AI”

Q2 2028:

  • Publish Report #3: “AI Safety: Technical Challenges and Societal Imperatives”
  • Launch at major conference (NeurIPS or FAccT)
  • Partnership announcement with major DC think tank

Q3 2028:

  • Report #2 policy influence (track citations in legislation)
  • Begin Report #4: “Education Transformed”
  • Hire Policy Director
  • Expand international reach (translations, partnerships)

Q4 2028:

  • Publish Report #4: “Education Transformed”
  • Corporate governance framework adoptions (target: 15 companies)
  • International organization partnerships (OECD, UN)
  • Year-end impact report

Success Metrics for 2028:

  • Four reports total (cumulative) ✓
  • 10+ testimonies (cumulative) ✓
  • 3+ legislative citations ✓
  • 20+ Fortune 500 adoptions ✓
  • 30+ university curricula ✓
  • $1M funding secured ✓

2029: Global Influence — Established Authority

Q1 2029:

  • Report #5 scoping: “The Flourishing Question”
  • Federal advisory committee appointment (target)
  • International convenings (Europe, East Asia)

Q2 2029:

  • Hire International Coordinator
  • Regional working groups launched (EU, East Asia, Latin America)

Q3 2029:

  • Publish Report #5: “The Flourishing Question: AI, Meaning, and What It Means to Be Human”
  • Major media coverage (feature stories in NYT, Economist, etc.)
  • Philosophy and theology community engagement

Q4 2029:

  • Three-year impact assessment
  • Endowment campaign launch (target: $10M by 2032)
  • Strategic planning for 2030-2033

Success Metrics for 2029 (Cumulative):

  • Five reports published ✓
  • 12+ countries with policy influence ✓
  • 30+ Fortune 500 companies influenced ✓
  • 50+ universities using reports ✓
  • 15+ testimonies ✓
  • 200+ academic citations ✓
  • $1.5M annual budget ✓

2030 and Beyond: Sustained Excellence

The Council’s goal is not infinite growth—it’s sustained excellence and impact. By 2030, the Council should be:

  • Publishing 3-4 major reports per year (not more)
  • Established as go-to resource for AI policy globally
  • Financially sustainable through diversified funding
  • Training next generation of interdisciplinary AI scholars/practitioners
  • Expanding global reach while maintaining quality

The North Star: In 2030, when a policymaker, CEO, or educator asks “What should I read to understand AI’s implications for humanity?”, the answer should be: “Start with the AI & Humanity Council reports.”

Differentiation and Positioning

Q16: How is the Council different from other AI think tanks and research institutions?

Competitive Landscape Analysis:

Stanford HAI (Human-Centered AI Institute)

  • Strengths: Technical excellence, Silicon Valley location, world-class faculty, significant funding
  • Limitations: Primarily computer science focused, less integration with philosophy/theology/humanities, limited Korea connection
  • Council Differentiation: We integrate 15 disciplines genuinely (not just CS + ethics footnote), Korean-US bilateral perspective, more practitioner-grounded

MIT CSAIL, Berkeley BAIR

  • Strengths: Technical AI research leadership, computer science excellence
  • Limitations: Primarily technical focus, limited policy/philosophy integration
  • Council Differentiation: We start from human impact questions, not technical capabilities

Partnership on AI

  • Strengths: Multi-stakeholder model (tech companies, civil society, academia), broad membership
  • Limitations: Consensus-based (can be slow/watered down), primarily industry-funded (potential bias concerns)
  • Council Differentiation: Independent (not industry-controlled), willing to take strong positions, Korean-US bridge

AI Now Institute (NYU)

  • Strengths: Social justice focus, excellent critical perspective on AI harms
  • Limitations: Primarily critical (vs. constructive), less technical depth, ideological positioning
  • Council Differentiation: Balance technical understanding with social concerns, actionable guidance for industry/government

Brookings, CFR, Center for American Progress

  • Strengths: DC credibility, policy expertise, established relationships
  • Limitations: Limited technical AI depth, primarily US-focused, generalist (not AI-specialized)
  • Council Differentiation: Deep technical AI understanding, Korea-US bilateral lens, AI-specialized

DeepMind Ethics, Anthropic Safety Team, OpenAI Governance

  • Strengths: Technical excellence, researcher access, significant funding
  • Limitations: Commercial interests (even if well-intentioned), primarily technical safety focus
  • Council Differentiation: Complete independence, comprehensive scope (not just safety), philosophical depth

Korean Think Tanks (KISDI, KISTEP, ETRI, etc.)

  • Strengths: Korea policy context, government relationships, regional expertise
  • Limitations: Limited Silicon Valley ecosystem access, primarily Korea-focused, less global reach
  • Council Differentiation: Silicon Valley grounding, global perspective, English as primary language

The Unique Combination Nobody Else Has:

  1. 15 disciplines genuinely integrated (not just parallel tracks)
  2. Silicon Valley practitioner grounding + academic rigor (not isolated scholarship)
  3. Korean-US bilateral foundation + global reach (unique perspective)
  4. Philosophical sophistication (meaning, purpose, existential questions) + technical depth
  5. Independent (not captured by industry or ideology) + practical (actionable guidance)
  6. Community-rooted (K-PAI’s 2,000+ members) + elite expertise (world-class scholars)

Positioning Statement:

“The AI & Humanity Council is the only institution that combines Silicon Valley’s innovation ecosystem, Korea’s world-class AI research, genuine integration of 15 disciplines, and fierce independence to produce reports that are simultaneously rigorous, accessible, and actionable—addressing not just what AI can do, but what it means for human flourishing.”

Q17: What is the Council’s stance on controversial AI topics?

Core Principle: Evidence-Based, Not Ideology-Driven

The Council doesn’t have predetermined positions on controversial questions. We:

  1. Examine evidence from multiple perspectives
  2. Surface competing arguments fairly
  3. Acknowledge uncertainty where it exists
  4. Make recommendations based on best available evidence and reasoning
  5. Revise positions when new evidence emerges

That said, the Council has clear values:

1. Human Flourishing is the North Star

  • AI’s progress measured by contribution to human dignity, creativity, relationships, meaning
  • Technology serves humanity, not the reverse
  • When in doubt, prioritize human welfare over technical capability or economic efficiency

2. Democratic Values and Human Rights

  • Preserve human agency and democratic decision-making
  • Protect civil liberties and privacy
  • Ensure AI benefits are broadly shared, not concentrated
  • Special attention to impacts on vulnerable and marginalized communities

3. Long-Term Thinking

  • Consider not just immediate effects but multi-generational implications
  • Precautionary principle when risks are catastrophic even if probability uncertain
  • Obligation to future generations

4. Epistemic Humility

  • Acknowledge what we don’t know
  • Multiple perspectives often reveal aspects a single viewpoint misses
  • Willingness to say “we need more evidence” rather than premature certainty

Examples of How These Values Apply:

Question: Should AI systems be allowed to make life-or-death decisions (military, healthcare, criminal justice)?

Council Approach:

  • Examine technical capabilities and limitations
  • Analyze ethical frameworks (deontological, consequentialist, virtue ethics)
  • Consider practical governance challenges
  • Assess societal trust and legitimacy
  • Conclusion: Context-dependent, with strong safeguards and human oversight

Question: Is AGI/ASI an existential risk to humanity?

Council Approach:

  • Review technical AI safety research
  • Examine historical precedents of technological risk
  • Consider epistemological challenges of predicting long-term AI trajectories
  • Balance near-term harms with speculative long-term risks
  • Conclusion: Serious concern warranting substantial research and precautions, but uncertainty about timelines and probability

Question: Should AI development be slowed or paused?

Council Approach:

  • Analyze benefits of AI progress (medical breakthroughs, climate solutions, etc.)
  • Examine harms and risks (job displacement, privacy, bias, existential risk)
  • Consider practicality of enforcement (global coordination, verification)
  • Assess opportunity costs of delay
  • Conclusion: Nuanced; certain high-risk applications warrant a precautionary slowdown, while a blanket pause is likely infeasible and potentially counterproductive

Topics We Will NOT Shy Away From:

  • AI’s impact on human meaning and purpose (even though it’s “soft”)
  • Possibility of AI consciousness and moral status (even though it’s speculative)
  • Existential risk from advanced AI (even though timelines uncertain)
  • AI’s disruption of labor markets (even though politically controversial)
  • AI in military and autonomous weapons (even though geopolitically sensitive)
  • Corporate concentration and AI monopolies (even though it affects potential sponsors)

How We Handle Disagreement:

When Working Group experts disagree fundamentally:

  1. Represent strongest arguments for each position
  2. Identify empirical questions that could resolve disagreement
  3. Make explicit which values/frameworks lead to different conclusions
  4. If warranted, present multiple perspectives rather than forced consensus

The Council is not afraid of controversy. We’re afraid of irrelevance.

Controversial topics are often the most consequential. If we avoid them to seem “balanced” or “safe,” we fail our mission.

APPENDICES

Appendix A: Sample Report Topics (Future Consideration)

Near-Term (2026-2029):

  • AI and the Future of Work: Beyond Automation Anxiety
  • Democratic Governance in the Age of AI: Preserving Human Agency
  • AI Safety: Technical Challenges and Societal Imperatives
  • Education Transformed: Learning, Teaching, and Human Development with AI
  • The Flourishing Question: AI, Meaning, and What It Means to Be Human

Medium-Term (2030-2032):

  • AI in Healthcare: From Diagnosis to Care Delivery to Human Relationship
  • Climate Solutions and Risks: AI’s Role in Planetary Stewardship
  • AI and Creativity: Augmentation, Replacement, or Transformation?
  • Economic Inequality in the AI Era: Markets, Power, and Distribution
  • AI and Human Relationships: Intimacy, Connection, and Social Fabric
  • Scientific Discovery: AI as Tool, Collaborator, or Independent Agent?
  • Legal and Moral Responsibility: Accountability in Human-AI Systems
  • Global Governance: International Cooperation on AI Development and Deployment

Long-Term (2033+):

  • Consciousness and Moral Status: What If AI Becomes Sentient?
  • Human Enhancement and the Posthuman Future
  • AI and the Nature of Truth: Epistemology in the Age of Synthetic Media
  • The End of Scarcity?: Economic Transformation Beyond Capitalism
  • Intergenerational Justice: What We Owe Future Humans in the AI Era

Appendix B: Advisory Board (Target Composition)

Target Size: 15-20 distinguished members

Disciplinary Representation:

  • 3-4 Computer Scientists (AI/ML technical expertise)
  • 2-3 Philosophers (ethics, epistemology, metaphysics)
  • 2-3 Social Scientists (economics, sociology, political science)
  • 2-3 Policy Experts (DC experience, international governance)
  • 1-2 Legal Scholars (technology law, constitutional law)
  • 1-2 Psychologists/Cognitive Scientists
  • 1-2 Theologians/Religious Studies Scholars
  • 1-2 Business Leaders (AI deployment experience)
  • 1-2 Civil Society Representatives

Geographic Diversity:

  • 40% US-based
  • 30% Korea-based
  • 30% Other international (Europe, Asia, Latin America)

Institutional Diversity:

  • Top-tier research universities
  • Policy think tanks
  • Tech companies (with conflict management)
  • Government (former officials)
  • Civil society organizations
  • International organizations

Selection Criteria:

  • Recognized expertise in their field
  • Commitment to interdisciplinary collaboration
  • Independence and intellectual integrity
  • Communication skills (ability to explain to non-experts)
  • Network and convening power
  • Diversity of perspectives (avoid echo chamber)

Appendix C: Governance Documents

Key Documents (To Be Developed):

  1. Council Charter
    • Mission and values
    • Governance structure
    • Roles and responsibilities
    • Relationship to K-PAI Nexus
  2. Conflict of Interest Policy
    • Definition of conflicts
    • Disclosure requirements
    • Recusal procedures
    • Penalties for non-compliance
  3. Peer Review Standards
    • Reviewer selection criteria
    • Review process and timeline
    • Addressing reviewer feedback
    • Publication approval
  4. Sponsor Independence Policy
    • Limits on single sponsor funding
    • Prohibition on sponsor control of findings
    • Disclosure requirements
    • Corporate Advisory Board vs. governance separation
  5. Data and Methodology Standards
    • Evidence requirements
    • Citation standards
    • Transparency and reproducibility
    • Data privacy and security

Appendix D: Success Stories (Aspirational Examples)

Example 1: Legislative Impact

California Assembly Bill 1234 (AI Labor Transition Support Act) passes in 2028, incorporating recommendations from Council Report #1 “AI and the Future of Work.” The bill establishes:

  • Skills retraining fund for displaced workers
  • AI impact assessment requirements for large employers
  • Transition support for communities affected by AI automation

In committee testimony, the bill’s sponsor directly cites the Council’s analysis and recommendations. Council Chair Sunghee Yun testifies in support, providing technical and policy expertise.

Example 2: Corporate Adoption

Microsoft announces in Q2 2028 that its revised AI Ethics Framework incorporates principles from Council Report #3 “AI Safety: Technical Challenges and Societal Imperatives.” Specifically:

  • Human oversight requirements for high-stakes AI systems
  • Transparency standards for AI decision-making
  • Accountability mechanisms and incident response

Microsoft’s Chief Responsible AI Officer credits the Council report with providing “the most comprehensive framework we’ve seen for balancing innovation and safety.”

Example 3: Academic Integration

Stanford launches new graduate course “AI and Human Values” (Q1 2029) built entirely around Council reports. The course:

  • Requires reading all five Council reports
  • Brings in Council Working Group members as guest lecturers
  • Final project: Students write policy memo using Council frameworks

The course becomes one of Stanford’s most popular offerings, with 120+ students enrolled in its first quarter.

Example 4: International Influence

OECD AI Policy Observatory adopts Council framework from Report #2 “Democratic Governance in the Age of AI” as basis for updated AI governance recommendations to member states (2029).

The framework influences AI policy in 8 OECD countries, demonstrating the Council’s global reach beyond the US and Korea.

Example 5: Media Narrative Shift

After publication of Report #5, “The Flourishing Question,” major media outlets shift their coverage of AI from a purely technical/economic frame to one that also addresses questions of meaning and purpose.

The New York Times runs an op-ed series exploring AI and human meaning, directly engaging with Council frameworks. The Economist’s cover story asks “Can Humans Flourish with AI?” — a question the Council made central to public discourse.


This PR/FAQ is a strategic document prepared for K-PAI Nexus Leadership review.
Draft Version 1.0 — May 3, 2026
Author: Sunghee Yun

Updated: