Published Researcher
Field Builder
Systems Architect
Infrastructure Architect for Emerging Fields

Finding leverage
where others see
unrelated problems

Translating concepts. Field-building. Zero-to-one systems.

1
Published Paper
10%
Researcher Growth
8
ASEAN Countries
300+
Teachers Trained
Institutional Credibility

I co-authored research
on making AI pauses real

Most AI governance research focuses on superpowers. I got curious: what can smaller nations – like the ones in my region – actually do? I spent months in stakeholder interviews, reading policy docs, and asking "but how would this work in practice?" This paper is what I learned.

Building the Pause Button

Compute Governance for Middle Powers

Research Contribution

Extended PauseAI's international coordination mechanisms by mapping compute governance pathways for ASEAN nations. Synthesized stakeholder interviews with policy frameworks to propose actionable oversight mechanisms.

arXiv
Platform
2506.20530
Paper ID
Collaboration with PauseAI Research Team
Focus: Middle-Power Nations (ASEAN Context)

Why this matters: Most AI governance research focuses on US/China/EU. This work maps practical pathways for smaller nations to participate in global coordination without requiring superpower-level resources.

🔍 What I loved about this

The detective work – interviewing stakeholders across different countries, finding patterns in how they talk about governance, then translating dense policy frameworks into "here's what you could actually do tomorrow." Every conversation revealed something I hadn't considered.

From Research to Real-World Application

Research Phase

Stakeholder mapping across 8 ASEAN countries, policy framework analysis, compute infrastructure assessment

Synthesis Phase

Translated technical governance mechanisms into region-appropriate policy recommendations with local political context

Distribution Phase

Published open-access on arXiv, presented to policy researchers, integrated into advocacy conversations

Decision-Making Frameworks

How I Actually Think
(and what I'm still figuring out)

The most interesting work isn't about executing obvious solutions. It's about seeing problems nobody's named yet, choosing between impossible trade-offs, and making things work despite messy reality. Every project teaches me something new about how humans, systems, and constraints actually interact. Here's my current best guess at how to do this well.

Done beats perfect when people are waiting

11,000 flood victims needed help now, not after I built something elegant. I shipped in 3 hours, fixed problems in real-time, brought data loss from 40% down to under 5%. The system wasn't pretty, but it worked.

Applied: Flood Relief 2021

Listen to the room you're actually in

I tried teaching ASEAN students about existential risk. They didn't connect. What they did care about? Bias in loan algorithms, misinformation spreading in their languages. I rebuilt the entire program around their reality – engagement tripled.

Applied: Effective Thesis ASEAN

Track what tells you when you're wrong

I tested 26 different versions of "Morph" – my AI Safety education program. Each version tested a guess about what actually makes concepts stick. Most guesses were wrong. That's fine – I measured, learned, kept going.

Applied: Teacher Workshops

My Standard Operating Procedure

(This changes as I learn more, but here's the current version I'm testing)

1

Map the Invisible

What's happening that nobody's naming? Who are the actors? What are the actual constraints vs perceived constraints?

2

Design the Forks

What are my actual options? What does each optimize for? What am I trading off by choosing path A vs B vs C?

3

Ship & Measure

Build minimum viable infrastructure. Deploy fast. Track leading indicators that predict failure before it cascades.

4

Document the Breaks

What assumption failed? How did I detect it? What did I rebuild? Turn failures into reusable playbooks.

What I Don't Do

Clarity through negation β€” understanding my positioning by what I avoid

✗

Assume tools solve problems

Better software doesn't fix unclear goals or bad processes. I focus on the people and workflow first, then find tools that fit – not the other way around.

✗

Wait for perfect before starting

Small iterations beat grand plans. I'd rather ship something messy that teaches me what's actually wrong than spend weeks building the "perfect" thing in isolation.

✗

Scale before validation

Prototype β†’ measure β†’ iterate β†’ then scale. Premature scaling kills more projects than anything else. Prove it works small before betting big.

Systems Architecture

The Infrastructure I've Built
(and what broke along the way)

Each system shows how I diagnose constraints, design decision trees, and iterate when assumptions fail. The failures are just as important as the wins – they're where the real learning happens. Curious how something turned out? Click into any project to see the full process.

Field-Building Pilot

Building ASEAN's First AI Safety Student Pipeline

2025–Present • Effective Thesis Campus Director
Design Test Iterate Launch

Current Status:

Live pilot, tracking 10% researcher growth in 6 months

The Gap I Saw

ASEAN had major gaps in AI Safety infrastructure. Very few local researchers. No established academic programs. Minimal community organizing. Students who wanted to contribute had no clear pathways. I was building something that didn't exist yet.

Resources

No budget, no team, working solo across 8 countries

Environment

Different languages, academic systems, cultural contexts per country

Stakes

Entire region mostly missing from global AI Safety conversation

1

Design Phase

Before building anything, I needed to understand what was actually happening on the ground:

→
Mapped the stakeholders

Identified 100+ potential contributors across universities, tech companies, policy institutes in 8 countries. Created a database with notes on who's working on what, where they are in their careers, what they care about.

→
Talked to students first

Rather than assuming what would work, I ran pilot conversations. Do students actually care? What language resonates? What's stopping them from engaging with thesis work? Turns out: they cared about near-term safety (bias, misinformation) way more than existential risk framing.

→
Checked my resources

What did I have? Time (20hrs/week), network access (Effective Thesis brand), expertise (AI Safety basics + program design). What didn't I have? Money, a team, institutional authority in the region. Designed around those constraints.

2

Test Phase

I tested three different entry strategies with real students:

Test A: Academic Partnerships
Start with institutions → slow, 6–12 month timeline
✗ Too slow
Test B: Direct 1:1 Advising
Work with students first, prove value, then scale
✓ Chose this
Test C: Mass Workshops
Reach many people → low conversion to actual work
✗ Vanity metrics

Why I chose direct 1:1 advising:

Students respond faster than institutions. I could prove value with individuals, then use that proof to get institutional buy-in. Also: quality over quantity – 1 completed thesis beats 100 workshop attendees who don't follow through.

3

Iterate Phase

What broke in the pilot:

My assumption: Students would self-organize after workshops

What actually happened: Workshop energy didn't translate into sustained thesis work. Lost 80% of interested students within 2 weeks.

The fix: Built structured 1:1 advising with regular check-ins, concrete milestones, mentor matching. Retention jumped to 60%+.

How I caught it:

Tracked follow-through rates. When I saw the 80% drop-off, I did post-workshop interviews. Students said: "I'm excited but don't know what to do next." The structure was missing, not the interest.

4

Launch & Current Results

10%
Researcher Growth (6 months)
Baseline: ~10 active researchers in region
Now: 11+ with 3-7 more in pipeline
8
Countries Mapped
100+ stakeholders across Malaysia, Singapore, Indonesia, Thailand, Philippines, Vietnam, Myanmar, Cambodia
60%+
Conversion Rate
From workshop interest to sustained thesis advising relationship

Context that matters:

Most field-building orgs see 3-5% annual researcher growth. Getting 10% in 6 months in a region with major infrastructure gaps is roughly 4x that baseline. This validates the "students first, institutions later" approach.
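The "roughly 4x" figure can be sanity-checked with quick arithmetic. A minimal sketch, assuming the 10% half-year growth compounds over two half-years and taking the top of the quoted 3-5% baseline range (both assumptions are mine):

```python
# Back-of-envelope check of the growth comparison above.
# Assumption: 10% six-month growth compounds over two half-years;
# baseline is the top of the quoted 3-5% annual range.
six_month_growth = 0.10
annualized = (1 + six_month_growth) ** 2 - 1  # two half-years compounded
baseline = 0.05
multiple = annualized / baseline
print(round(annualized, 2), round(multiple, 1))  # 0.21 4.2
```

Against the bottom of the range (3%) the multiple only grows, so "roughly 4x" is the conservative end.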

What I'd tell someone doing this next

  • → Map stakeholders before programs. Validate demand through pilots. Adapt your framing to local context – don't just import Western frameworks.
  • → For student pipelines: quality beats quantity. 1:1 advising with concrete milestones converts way better than mass workshops.
  • → Track conversion rates, not vanity metrics. Workshop attendance means nothing if people don't actually do the work.

📊 What I loved about this

Watching my assumptions break in real-time. I thought students would self-organize after workshops – nope, 80% dropped off. That failure taught me more about human motivation than any success would have. Now I know: structure isn't optional, it's the whole point.

Crisis Operations

11,000 People Served in 3 Hours

2021–2022 • MUDA Party/Maribantu Tech Lead
Airtable Real-Time Iteration Offline-First Design

The Challenge

Malaysia's 2021 floods displaced 11,000+ people across 8 states. We had 30+ volunteers, zero centralized tracking, scattered supplies with no inventory visibility, and coordination happening via WhatsApp chaos.

Time

Crisis conditions – people need help NOW, not after a perfect system

Resources

No budget, no developers, 30 untrained volunteers

Environment

Intermittent internet, 8 states, distributed relief sites

The 3-Hour Build

Hour 1: Diagnosis

Mapped the chaos. Volunteers don't know who's assigned where. Supplies are untracked. No decision-making dashboard. WhatsApp groups hitting message limits.

Key insight: Coordination failure, not resource failure. We have supplies and people – they're just not connected.

Hour 2: Tool Selection

Chose no-code tools (Airtable + Make) for speed. Couldn't wait for developers. Needed something volunteers could use immediately without training.

Trade-off: Less powerful than custom code, but 100x faster to deploy. Speed > elegance in crisis.

Hour 3: System Build

Built three interconnected systems: (1) Volunteer assignment tracker with location routing, (2) Inventory management with real-time updates, (3) RSVP coordination for 600+ incoming volunteers

Deployment: Live testing with first wave of volunteers while building remaining features.

Critical System Failure

Assumption: Relief sites have stable internet

Reality discovered after 6 hours: Intermittent connectivity at relief sites. 40% data loss from volunteers trying to submit offline.

Detection Method:

Volunteers reporting "form not saving" via WhatsApp. Cross-referenced submission timestamps with volunteer check-ins – massive gaps.

Emergency Rebuild (6 hours to fix):

Pivoted to offline-first design using Google Forms with automatic sync when connection restored. Reduced data loss from 40% to <5%.

Lesson: Design for the environment you HAVE, not the environment you WANT. Field reality > office assumptions.

Impact

11k+
People Served
8
States Coordinated
<5%
Final Data Loss

Legacy:

System documented as crisis response playbook for future disasters. Now used as template by other volunteer organizations in Malaysia.

Reusable Framework

This became my standard crisis operations protocol:

  • → Hour 1: Map what's actually broken (not what people say is broken)
  • → Hour 2: Choose fastest tools even if imperfect (speed > elegance)
  • → Hour 3: Ship minimum viable system, iterate in production
  • → Monitor: Track leading failure indicators, rebuild when assumptions break

Additional Systems Built

Public Communication

300+ Teachers, 87% Confidence Improvement

Translated AI alignment research into classroom-relevant curriculum for Teach For Malaysia's DutaGuru program. Designed workshop format through 26 iterations of user testing.

Key Framework:

Translation Protocol – Research → Classroom: (1) Extract core concept, (2) Remove jargon, (3) Add local deployment examples, (4) Test comprehension, (5) Iterate based on teacher feedback

300+
Teachers
87%
Confidence ↑
National
Adoption
πŸ” What I loved: Watching teachers go from "I don't understand AI" to "here's how I'll teach this" in 90 minutes. The translation challenge β€” finding examples that work in Malaysian classrooms, not Silicon Valley β€” was like solving a puzzle in real-time.
Real-Time Operations

6 Candidates, 3-Hour Build, 600 Volunteers

Built election coordination dashboards under deadline pressure. Tracked volunteer shifts, candidate schedules, real-time vote counting. Iterated live based on user feedback during 12+ hour operation.

Decision Under Pressure:

Custom code vs no-code tools? Chose Airtable (3 hour build) over waiting for developers (3 day build). Election happens whether system is ready or not – shipped imperfect but functional.

Live iteration example:

Volunteers couldn't find their assigned polling stations. Added location autocomplete + map view mid-operation. Deployment complaints dropped 80% within 2 hours.

⚡ What I loved: The impossible deadline energy. Election day doesn't wait – you ship or you fail. Building live with 600 people using the system simultaneously? Stressful and exhilarating. Every bug report was a chance to make it better in real-time.
Product Design

26 Iterations to Win Apart Hackathon

Designed "Morph" AI Safety education prototype. Tested 4 program formats (cohort, workshop, curriculum integration, self-paced) through rapid iteration cycles. Each cycle validated one hypothesis about what makes safety concepts stick.

Iteration Framework:

Hypothesis → Prototype → User Test → Measure → Refine. Tracked: comprehension scores, engagement time, follow-through rate. Killed 3 formats that tested poorly. Doubled down on 1 that worked.

Rapid Prototyping User Testing Hackathon Winner
📊 What I loved: Killing my darlings. Three formats I was excited about tested terribly – and watching the data prove me wrong was oddly satisfying. That's how you learn what actually works vs what you think should work.
Community Building

120k Views β†’ 50+ Student Relationships

Built AI Safety public education via TikTok content. Designed conversion funnel: viral content → website → email list → 1:1 conversations → sustained relationships. Resulted in 15 company partnerships.

Funnel Design:

TikTok (awareness) → Link in bio (interest) → Email capture (intent) → Resource delivery (value) → Conversation invite (conversion). Measured: view-to-click 8%, click-to-email 12%, email-to-conversation 35%.
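Read literally, those stage rates pin down the whole funnel. A quick sketch (the rates are from the project; applying them uniformly to each prior stage is my assumption):

```python
# Walking the quoted conversion rates down the funnel.
views = 120_000
clicks = round(views * 0.08)          # view-to-click 8%
emails = round(clicks * 0.12)         # click-to-email 12%
conversations = round(emails * 0.35)  # email-to-conversation 35%
print(clicks, emails, conversations)  # 9600 1152 403
```

Roughly 400 invited conversations from 120k views is consistent with the 50+ sustained relationships further downstream: every later stage keeps leaking, which is exactly why each step gets measured.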

120k
Total Views
50+
Student Relationships
πŸ” What I loved: Treating social media like a system, not magic. Mapping the conversion funnel, measuring each step, finding the leaks. Turns out viral views mean nothing if you don't have a plan for what happens next.

My Parallel Experiments Timeline

2025–Now
Organizational Experiment

Effective Thesis Campus Director (ASEAN AI Safety) – testing institutional amplification

Independent Experiment - Product Track

CetaLabs Development – testing autonomous capacity-building model. Apart Hackathon winner – validating program design frameworks

2023–25
Research Track

BlueDot AI Governance Fellow – compute governance policy

Education Track

300+ teacher workshops – public AI Safety education at scale

Community Track

TikTok content (120k views) – testing public engagement channels

2021–22
Tech Ops Track

Led frugal, high-impact digitalization for small organizations. MUDA/Maribantu Tech Lead – building systems under pressure

Education Track

Building digital initiatives for a village during COVID-19 with Undi18 under Parliamentary Fellowship

2018–Now
Continuous Thread: Freelance Consulting

Strategic consulting across government (WBS), startups (VerdasAI), NGOs (BERSIH) – testing stakeholder synthesis skills across sectors

The Pattern Across All Tracks

Every experiment tests the same core hypothesis: Can I build infrastructure in domains where playbooks don't exist? Whether it's flood relief (crisis conditions), ASEAN AI Safety (big pipeline gaps), or teacher workshops (translating research for new audiences) – I'm consistently choosing zero-to-one environments over mature optimization problems.

✓

Success Metrics I Track

→
Speed of execution
<3hr system builds, 26 iteration cycles
→
Conversion rates
120k views → 50+ students, 300 teachers → 87% confidence improvement
→
Field growth
10% researcher increase in 6 months (ASEAN)
→
Autonomous operation
Built without supervision in crisis, elections, regional expansion
⚡

What I'm Optimizing For

→
Leverage
Where does my builder skillset create outsized impact?
→
Learning velocity
Rapid iteration cycles, measurable outcomes, documented learnings
→
Portfolio diversification
Testing multiple paths simultaneously rather than betting everything on one
→
Reversibility
Experiments I can exit without catastrophic loss
Live Experiment Dashboard

The Question I'm Answering Right Now

I'm not climbing a ladder. I'm running experiments to figure out where my skillset creates the most leverage. This is the fun part – I get to test different paths, collect data, and see what I learn. Here's what I'm actively exploring.

?

The Fork in the Road

Active decision timeline: April – June 2026

How do I maximize impact on AI Safety field-building?
A

Join an Organization

Work with established field-building groups like PRISM, Effective Thesis, or BlueDot. I'd scale existing infrastructure with real resources and institutional credibility backing me up.

+ Amplification, institutional backing, established networks
− Less freedom to experiment, more coordination overhead
B

Build Independently

Grow CetaLabs into a sustainable ASEAN-focused AI Safety platform funded through grants and partnerships. Full creative control, slower scaling.

+ Total autonomy, regional specialization, experimental freedom
− Slower scaling, resource constraints, solo operation

What I'm Testing

Org Fit

Do I thrive more with institutional resources + coordination meetings, or autonomy + resource constraints?

Testing via: Effective Thesis role (institutional) vs CetaLabs work (independent)

Geographic Focus

Should I double down on ASEAN-specific infrastructure, or build transferable systems that work globally?

Testing via: ASEAN mapping vs global compute governance research

Skill Gaps

I'm good at 0-to-1 building. Do I learn scaling skills myself, or find collaborators who fill that gap?

Testing via: Watching where my pipeline plateaus without scaling support

Right Now (April 2026)

What I'm Building

  • Regional field-building: Effective Thesis ASEAN: 10% researcher growth. Hypothesis: same stakeholder mapping + cultural adaptation model works for other neglected geographies
  • Crossover education: CetaLabs: AI Safety × vulnerable populations, AI Safety × families/kids. Testing which intersections create unexpected leverage for public literacy
  • Research translation: KL AI Safety Meetup + Intentional SDLC: Monthly experiments turning technical research into practitioner-legible frameworks (what makes concepts stick?)
  • Meta-Infrastructure: Documenting what transfers: crisis ops principles → field-building, research frameworks → public education, global theory → regional practice

Open To

  • High-leverage problems where clear systems don't exist yet and iteration is valued (Program Lead, Field-Building, Generalist roles)
  • Cross-domain partnerships with AI Safety orgs testing new approaches to regional expansion, public education, or rapid deployment
  • Working with builders who are finding unexpected connections between research, policy, and ground-level implementation

Decision timeline: Collecting evidence through June 2026