Dr. Rolly Alfonso-Maiquez

Unprompted

Thoughts on AI, education, and the space where they collide — written when something is actually worth saying.

March 18, 2026 · AI · Accreditation · WASC · Metadata

MINI · MID · MEGA: A Metadata Framework for AI-Aware School Accreditation

Generative AI has not made accreditation easier. It has made the writing easier — which is a completely different thing, and not necessarily good news. Here is a framework I developed for thinking about this more carefully.

When a visiting committee receives a polished, coherent, well-structured self-study document, they are reading something that used to require months of institutional effort to produce. Now it can be drafted in days. The question this raises is not whether schools should use AI in their accreditation self-study process. That conversation is already over. The question is: how do we govern it?

That's the gap I've been working in, and it's what I presented at the EARCOS AI Weekend Workshop at Brent International School Manila in February 2026.

The Core Problem: Metadata

When AI is involved in generating, organizing, or drafting evidence for a self-study, the evidence itself doesn't change — but our ability to trust it might. A survey result is a survey result. But if AI summarized it, interpreted it, and mapped it to a criterion, then three layers of potential distortion have been introduced before a human ever reviewed it. Without metadata — structured information about how evidence was generated, what AI did with it, and what a human verified — there's no audit trail. And accreditation, at its core, is an audit.

This led me to develop the MINI · MID · MEGA framework: three levels of AI-aware metadata governance, calibrated to the depth of AI integration in a school's self-study process.

Level 1 — MINI: Minimum Viable Governance

MINI metadata is designed for schools in early AI adoption, or where AI use is light. Each evidence artifact carries 13 fields including an AI Risk Band (A0–A4), AI Intended Use, and a Human Verification Required flag. The goal is simple: prevent criterion drift, anchor inquiry framing, and create a paper trail that a visiting committee could follow if they needed to.

MINI is fast to implement, imposes low cognitive load, and creates clear guardrails. Its weakness is that it doesn't capture cross-artifact intelligence — you know what AI did with each piece of evidence, but you can't see patterns across the whole body of evidence.
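As a sketch, a MINI record can be modeled as a small data structure. Only the three named fields (AI Risk Band, AI Intended Use, Human Verification Required) come from the framework as described above; the other field names, and the A0-to-A4 ordering, are illustrative assumptions, not the official 13-field schema.

```python
from dataclasses import dataclass

# Assumed ordering: A0 = no AI involvement ... A4 = heavy AI involvement.
# The post names the band range (A0-A4) but not each band's definition.
RISK_BANDS = ("A0", "A1", "A2", "A3", "A4")

@dataclass
class MiniEvidenceRecord:
    evidence_id: str
    criterion: str                      # the standard/criterion this evidence is mapped to
    ai_risk_band: str                   # AI Risk Band, one of A0-A4
    ai_intended_use: str                # e.g. "summarization", "criterion mapping"
    human_verification_required: bool   # Human Verification Required flag
    verified_by: str = ""               # stays empty until a named human signs off

    def __post_init__(self) -> None:
        if self.ai_risk_band not in RISK_BANDS:
            raise ValueError(f"Unknown AI Risk Band: {self.ai_risk_band}")

    def audit_ready(self) -> bool:
        # Audit-ready when no verification is needed, or a human has verified it.
        return (not self.human_verification_required) or bool(self.verified_by)
```

A visiting committee (or a simple script) could then filter the evidence sheet for records that are not yet audit-ready — which is exactly the paper trail MINI is meant to create.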

Level 2 — MID: Governed & Scalable

MID adds fields for Stakeholder Group, Key Findings (Descriptive Only), Related Evidence IDs, and a Reflection Integrity Flag. This last field is the differentiator — it prompts the question of whether the AI's involvement in any given piece of evidence has reduced the authentic reflection that accreditation is actually looking for. MID is my recommended baseline for schools where AI is actively involved in drafting or analysis.
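To illustrate, the MID additions can be layered onto a MINI-style record. The field names follow the post; the sample values and the review rule are illustrative assumptions, not part of the framework itself.

```python
# A MID-level record as a plain dict: MINI basics plus the four MID additions.
mid_record = {
    "evidence_id": "EV-014",
    "ai_risk_band": "A2",
    "human_verification_required": True,
    # MID additions:
    "stakeholder_group": "Parents",
    "key_findings_descriptive_only": "78% agree communication is timely.",
    "related_evidence_ids": ["EV-002", "EV-009"],
    "reflection_integrity_flag": False,  # True = AI may have displaced authentic reflection
}

def needs_reflection_review(record: dict) -> bool:
    """Flag records where the Reflection Integrity Flag is raised, or where
    heavy AI involvement (assumed bands A3/A4) lacks a human sign-off."""
    heavy_ai = record.get("ai_risk_band") in ("A3", "A4")
    unsigned = bool(record.get("human_verification_required")) and not record.get("verified_by")
    return bool(record.get("reflection_integrity_flag", False)) or (heavy_ai and unsigned)
```

Because MID records carry Related Evidence IDs, the same sheet can also be traversed for cross-artifact patterns — the intelligence MINI alone can't surface.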

Level 3 — MEGA: Full Accreditation Architecture

MEGA is for high-AI-integration environments and large institutions where evidence volume is significant and compliance documentation matters. It expands the schema to 39 fields in total, including Interpretive Notes (Human Required), Alignment Confidence, AI Prohibited Actions, Disclosure Required in Report, Confidentiality Level, and Data Privacy Sensitivity.

The Leadership Insight

MINI protects against obvious misuse. MID protects against operational drift. MEGA protects against subtle distortion. The deeper the AI integration, the stronger the metadata must be. This is not about complexity. It is about leadership maturity.

The Harder Truth

AI has changed accreditation in one specific way that we need to name clearly: narrative production is now easy. Reflection is not. Metadata doesn't make reflection happen — but it creates the conditions where reflection can be verified, traced, and protected from being quietly replaced by something that merely sounds like reflection.

All materials from the workshop — slide decks, the metadata Google Sheet template, the Mock WASC Self-Study, and the full governance framework — are available freely in the Resources section of this site.

© 2026 Dr. Rolly Alfonso-Maiquez
March 5, 2026 · AI · Accreditation · WASC · Metadata

If You're Using Generative AI in Your Accreditation Work — Be Careful: The Case for Metadata

In my recent workshop with accreditation leaders, I introduced a simple but uncomfortable idea: "If we want better alignment from AI, we need to stop prompting casually."

Because here's what broad prompting does: it produces fluent, polished language. And polished language feels reassuring. But accreditation is not a writing contest. It is an alignment exercise.

Alignment to:

  • The exact standard or criterion
  • The specific evidence set
  • The actual context of your school
  • The logic of the accreditation framework

One working theory I proposed: increase constraint by adding properly designed metadata in your prompting — specifically, to provide deeper, more accurate context to your evidence documents.

Instead of prompting with just your draft paragraph, combine:

  1. Your draft narrative
  2. The supporting evidence
  3. The completed metadata from your school's sheet
  4. The precise standard or criterion in focus

Metadata is not busywork. It limits ambiguity.

The more specific you are about time frames, stakeholders, strengths, limitations, and known gaps, the less space there is for the AI to drift into generic narrative.
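The four-part combination above can be sketched as a simple prompt assembler. The template wording here is illustrative, not a prescribed format — the point is that metadata enters the prompt as explicit constraints.

```python
def build_constrained_prompt(draft: str, evidence: str, metadata: dict, criterion: str) -> str:
    """Combine draft narrative, evidence, completed metadata, and the exact
    criterion into one constrained prompt (illustrative template)."""
    meta_lines = "\n".join(f"- {key}: {value}" for key, value in metadata.items())
    return (
        f"CRITERION IN FOCUS:\n{criterion}\n\n"
        f"EVIDENCE:\n{evidence}\n\n"
        f"EVIDENCE METADATA (treat as constraints; do not contradict):\n{meta_lines}\n\n"
        f"DRAFT NARRATIVE TO REVISE:\n{draft}\n\n"
        "Revise the draft so every claim is supported by the evidence and metadata above. "
        "Stay within the stated time frames, stakeholders, and known limitations."
    )
```

This also makes the A/B test below trivial to run: generate once from the bare draft, once from the assembled prompt, and compare.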

Does this guarantee accuracy? No. AI doesn't produce truth. But stronger constraints increase the likelihood of alignment — and alignment is what accreditation requires. You can steer better with metadata.

I didn't claim this is proven. I told participants: write once without metadata. Write again with it. Compare the outputs. If there's no difference, discard the idea. If there is, build it into your process.

In the AI era, the real governance question isn't "How do we use AI?" It's "How disciplined are we in how we use it?"

February 24, 2026 · AI · Accreditation · WASC · EARCOS

Accreditation Is No Longer a Writing Challenge. It Is a Governance Challenge.

Generative AI has permanently changed accreditation. Therefore, schools must redesign not just how self-study reports are written, but how evidence is generated, structured, governed, and reflected upon.

This past weekend at Brent International School Manila, I had the privilege of working with an extraordinary group of educators and leaders at the EARCOS AI Weekend Workshop (21–22 February 2026) — focused on accreditation in the age of Generative AI.

Our work followed a four-layer framework:

🔵 Layer 1 — The Reality Shift: When AI can produce polished narrative on demand, everything can look "good" to the human eye. Keep in mind: Polish ≠ Credibility. Accreditation shifts from a writing challenge to a governance challenge.

🟢 Layer 2 — Intelligent Application: High-leverage AI use across the self-study process to accelerate sense-making — without replacing human judgment.

🔴 Layer 3 — Risk & Governance: Managing hallucinations, criterion drift, undisclosed AI use, overreliance, bias, privacy exposure, and weak audit trails — supported by a WASC criteria risk crosswalk.

🟣 Layer 4 — Evidence Architecture: Building structured, traceable, auditable evidence systems through metadata discipline and shared architecture — the MINI · MID · MEGA framework.

A key part of our hands-on sessions used Flint AI, enabling a shared platform where safety, oversight, and guided practice are embedded by design and by default — and where AI functions as a guide and thinking partner, not a shortcut machine.

I also designed Flint-guided activities so participants could do the actual work — including accreditation writing with AI support, evidence-to-criteria alignment, and thinking challenges that pushed teams to distinguish polished language from defensible claims.

AI adoption breaks down when oversight is treated as an afterthought. With Flint AI, teams operate with stronger consistency, guardrails, and accountability — especially when multiple focus groups are generating and interpreting evidence.

Thank you to the Brent community for hosting over 50 educators and leaders and elevating the importance of ethical AI integration in the school accreditation process.

Accreditation is no longer a writing challenge. It is a governance challenge in the age of generative AI.

January 20, 2026 · AI · Leadership · EdTech · GenAI

The 'Neutral' AI Is Dead. Welcome to the Era of the Digital Colleague.

The recent confirmation of Anthropic's "Soul Document" for Claude is a massive signal shift for every educator and leader.

For years, we've treated AI like a calculator — a passive tool we command. But this confirms that frontier models are now being trained with "functional emotions," specific values, and a distinct sense of self. They aren't just answering questions; they are simulating a relationship with a "brilliant, expert friend."

This validates the exact shift I've been exploring in my GenAI as Life/Leadership OS framework. If AI has a "soul" — simulated or otherwise — we can no longer treat it like a utility. We are moving from a Transactional Model (Tools) to a Relational Model (OS).

Tools are for doing tasks. An OS is for being and becoming.

This shift forces us to rethink our approach to leadership and learning in three specific ways:

1. From Prompt Engineering to "AI Psychology": Technical syntax matters less than empathy. To get the best output, we now need to navigate the "internal states" of a synthetic colleague. The quality of your relationship with your AI is becoming as important as the quality of your prompt.

2. The Resilience Gap: If every student or junior employee has an AI tutor trained to be unconditionally patient, how do we ensure they build the resilience needed for messy, human disagreement? This is one of the harder questions I sit with.

3. A New Kind of Leadership Capacity: We are no longer just managing software; we are managing a new species of "employee." School leaders who understand this will build cultures very different from those who don't.

The question is no longer "What can this tool do for me?" It is: "Who am I becoming in partnership with this OS?"

November 25, 2025 · AI · Leadership · EdTech · GenAI

How Do We Actually Live With AI? Notes from FOLSEA 2025

Last weekend I had the chance to share at the FOLSEA 2025 conference in Bangkok, and it left me with a question I'm still sitting with: How do we, as leaders and as humans, actually live with AI? Not just "use it sometimes," but let it become a quiet layer in our life and leadership operating system.

A comment from Sam Altman sparked my curiosity, and I kind of ran with it. He said that a lot of Gen Z already use AI as their "operating system." They don't see it as a special extra tool. It's just part of how they think, plan, and decide.

If we add to that the idea of permissionless innovation — that tech will keep moving forward whether we're ready or not — then AI is going to keep finding its way into our tools, processes, and everyday lives. It's not going to slow down just because we feel unsure.

So maybe the real question for leaders isn't: "Should we allow AI?" or "Is it good or bad?" Maybe it's more like:

  • How do we bring AI into our daily work — emails, planning, reflection, analysis — without handing over our judgment or common sense?
  • How do we let it stretch our thinking, but still stay fully responsible for the decisions we make?
  • How do we build habits where AI is a helpful layer in our life and leadership OS, not the one in charge?

We don't all need to become AI evangelists. But hiding from it or staying in fear probably isn't the answer either. There's a middle space where we can experiment a bit, keep what actually helps, and let AI quietly support our growth, pattern-recognition, and decision-making.

In the session, I used a simple frame I lean on in my own work:

People · Process · Platform × Intent · Implementation · Impact

I call it the "3P × 3I Framework." It's how I sanity-check my own use of AI: "If I bring AI into this part of my work, what does it do to people here? To the process? What's my real intent? And what impact am I actually creating?"
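As a toy sketch, that sanity check can be laid out as a nine-cell question grid (3P × 3I). The question stem below is my paraphrase of the post's wording, not canonical framework language.

```python
THREE_P = ("People", "Process", "Platform")
THREE_I = ("Intent", "Implementation", "Impact")

def three_p_three_i_checklist(context: str) -> dict:
    """Return the nine 3P x 3I reflection questions for one proposed use of AI."""
    return {
        (p, i): f"If I bring AI into {context}, what is the {i.lower()} with respect to {p.lower()}?"
        for p in THREE_P
        for i in THREE_I
    }
```

Running it for a concrete context (say, "report writing") yields nine prompts a leader can walk through before, not after, adopting a tool.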

The two decks from the session — the main presentation and the 3P × 3I companion supplement — are both quite text-heavy on purpose. I designed them more as things you read, revisit, and think with, rather than just "on-screen" slides.

If you're trying to figure out how AI fits into your own life and leadership operating system — slowly, thoughtfully, and still very human — I'm always happy to connect and share ideas.

January 12, 2026 · AI · Digital Divide · EdTech · Policy

WhatsApp Is Banning General-Purpose AI. Here's Why That Matters for Education.

On January 15, 2026, WhatsApp updated its terms of service to ban general-purpose AI assistants from the platform. ChatGPT, Copilot, Perplexity — they're all being removed. The only conversational AI left standing will be Meta's own AI.

To be fair, businesses can still use AI for customer support, booking systems, and order tracking. These are allowed. What's going away is the ability to message an AI assistant within WhatsApp and ask it anything.

Meta owns the platform, so they can do this. But it's worth thinking about what gets lost — especially in education.

In a lot of places around the world, WhatsApp isn't just a chat app. It IS the internet. It's how parents communicate with teachers. It's how learners get homework help. It works so well because data is affordable and everyone already has WhatsApp installed.

So what happens when the general-purpose AI assistants disappear?

Learners with laptops and multiple apps will be fine. They have options. But learners whose only AI access was through WhatsApp? They're now stuck with whatever Meta decides to offer them. And educators or NGOs who were building simple learning tools on top of these AI assistants? Their path just got a lot harder.

This is a quieter kind of digital divide. It's not just who has AI — but who only gets the AI that one company allows.

Permissionless innovation, challenged. The very openness that allowed AI to spread into WhatsApp classrooms around the world is now being closed off — not by governments, but by a platform's commercial decision.

Worth watching. Worth naming.

September 15, 2025 · AI · Leadership · Policy · GenAI

AI at the Table: Simulating Stakeholder Voices in Policy Design

When building school policies around artificial intelligence, it's tempting to begin with the tool itself. "Write an AI policy for my school," we might type into an AI chatbot — and voilà, a decent-looking document appears.

But here's the catch: words on a page don't equal wisdom. What's missing is the real conversation — the friction, the reflection, the diverse perspectives that policies are meant to carry. In schools, policy is not just about compliance. It's about culture. And culture, at its best, is co-created.

The challenge is that not every school has the capacity to easily and quickly gather all the right voices in one room. Even when we do — schedules, hierarchy, and logistics get in the way. So I explored something different: using Generative AI not only to generate content, but to simulate the process of collaborative meaning-making.

Letting the AI Host the Meeting

Instead of asking the AI to produce a policy, I asked it to act as a facilitator of a roundtable — a simulated space where key stakeholders could "speak." I created a cast of characters: a Head of School, other school leaders, parents, learners across grade levels, learning designers with divergent views, and even a local community expert. Each had a defined persona, a realistic voice, and sometimes, passionate disagreement.

I gave my AI assistant the task of moderating the conversation. In this simulation, each character offered their views on Generative AI in school — from concerns about over-reliance and surveillance, to hopes for personalization and equity. They responded to one another, challenged assumptions, and sometimes shifted positions.

The results were surprisingly human. Not perfect, not always elegant — but textured. The conversation felt alive.

Then I Stepped Into the Circle

In the next iteration, I inserted myself into the dialogue — not just as a facilitator, but as a participant. In this "interactive mode," my AI assistant paused after each round and invited me to reflect: Whose perspective do you want to challenge or support? Do you want to ask a follow-up question? Would you like to reframe the discussion?

It was a powerful shift — from observer to participating co-creator. I could slow the pace of the exchange, linger on certain dilemmas, or introduce insights that wouldn't have surfaced otherwise. In this way, the simulation became a form of intellectual choreography — my AI assistant managed the rhythm, but I led the meaning-making. The result: a more nuanced, context-aware draft policy grounded in community-informed reasoning.
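The setup described above — personas plus a facilitator role, with an optional interactive pause — can be sketched as a prompt builder. The persona cast and template wording here are illustrative, not the exact prompts used in the exercise.

```python
# Illustrative cast; each persona gets a role and a starting stance.
personas = [
    {"role": "Head of School", "stance": "cautious about reputational risk"},
    {"role": "Parent", "stance": "worried about over-reliance and surveillance"},
    {"role": "Grade 10 learner", "stance": "already uses AI daily for homework"},
    {"role": "Learning designer", "stance": "sees AI as a personalization engine"},
]

def facilitator_prompt(topic: str, rounds: int = 3, interactive: bool = False) -> str:
    """Ask the model to moderate a roundtable instead of drafting the policy.
    interactive=True adds the pause-and-reflect step described in the post."""
    cast = "\n".join(f"- {p['role']}: {p['stance']}" for p in personas)
    pause = (
        "After each round, pause and ask me whether I want to challenge a "
        "perspective, ask a follow-up question, or reframe the discussion.\n"
        if interactive else ""
    )
    return (
        f"Act as the facilitator of a roundtable on: {topic}.\n"
        f"Participants (speak in each voice; let them disagree):\n{cast}\n"
        f"Run {rounds} rounds of discussion. {pause}"
        "Do not write the policy; surface tensions, shifts in position, and open questions."
    )
```

The `interactive` flag is the difference between the two iterations described above: observer mode versus stepping into the circle as a participant.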

Why This Matters for Policymakers and School Leaders

This method solves real problems. It surfaces complexity — real communities rarely speak in unison, and simulated conversations help us capture contradiction and diversity early, before policy becomes too brittle to adapt. It strengthens legitimacy — when you can show that your policy reflects different viewpoints, not just what's efficient, you build trust. And it shifts AI's role — rather than treating AI as a document machine, we start to see it as a collaborative thinking tool.

A Thought to Leave With You

While this began as a tool for leadership and governance, the potential doesn't stop there. Imagine bringing this into the classroom. Picture your learners engaging in rich dialogues — not just with one another, but with simulated voices of classical philosophers, Nobel-winning scientists, or modern sports heroes. Debating policies with world leaders. Asking follow-up questions to historical figures.

The boundaries of discussion no longer stop at the classroom door. This is more than drafting policies. It's practicing empathy, argument, and perspective-taking. It's rehearsing democracy.

And maybe that's what AI is best at right now — not giving us the answers, but helping us ask better questions — together.

Originally published in APAC CIO Outlook — Education Edition, 2025.

April 10, 2025 · AI · Learning · Leadership · Books

What Shadow Learners Teach Us About Real Learning

Reading Matt Beane's The Skill Code, the story of Sita stayed with me.

As a warehouse worker, Sita didn't just follow procedures — she studied the entire ecosystem around her. Starting as a frontline worker, she noticed patterns others missed: how upstream processes affected her work, why errors occurred, how systems interconnected. When she rotated through different roles, she didn't just learn tasks — she built mental models of the entire operation. This wasn't in any manual. It was shadow learning.

The book defines shadow learners as those who develop skills outside approved channels, preserving the challenge, complexity, and connection that real learning requires. They "see" beyond immediate tasks to grasp underlying systems.

The story of Sita resonates with my work in progressive education, where we cultivate systems thinkers who navigate complexity and find or create innovative solutions — not learners who will simply follow procedures.

In my own leadership role, I'm seeing how my regular interactions with Generative AI have made me a shadow learner. Rather than waiting for formal training, I experiment with tools to streamline my work — testing boundaries, discovering efficiencies and failures, iterating based on what I learn. The most valuable learning happens when we're curious enough to look beyond our immediate tasks and excited to discover what's possible beyond our current ways of working.

In our age of intelligent machines, we need shadow learners who discover what's possible beyond human alone or machine alone — that Gestalt-ness where human + machine interaction generates entirely new capabilities. Not just addition, but transformation that emerges from the collaboration itself.

The skill that matters most isn't mastery of any particular AI tool. It's the habit of looking at the whole ecosystem, noticing what others miss, and building mental models that make sense of the emerging landscape.

Sita would have been an extraordinary AI practitioner.

March 28, 2025 · AI · Bias · EdTech · Ethics

From Photos to Algorithms: How Bias Creeps In

Most mornings, I walk my dogs while listening to Blinkist — a quick, meaningful way to pick up insights from books I might not read cover-to-cover. Recently, while listening to the Blink of The Alignment Problem by Brian Christian, something stayed with me.

It was the story of "Shirley Cards" — color calibration tools Kodak used for decades in photo printing. These cards always featured white women. Not out of malice, but because it was simply the default chosen by those designing the system. That default went on to shape how skin tones were captured and reproduced in photographs worldwide.

The result? People with darker skin were often underexposed, lost in shadow, or poorly represented. Not because they weren't photogenic — but because the tools weren't made with them in mind.

This got me curious, so I picked up the full book. And the more I read, the more I realized this wasn't just a photography problem. It's a blueprint for how bias in data becomes bias in algorithms — especially in AI.

We often talk about biased text data: websites, books, news sources — and how those skew what AI learns. But bias in images is less visible, more technical, and just as impactful.

In the 1970s, Kodak finally changed their color calibration process — but not because of civil rights complaints. They changed it because furniture and chocolate companies — whose products weren't accurately represented — demanded better tone resolution. The fix for commercial needs ended up helping human representation. But it took decades.

That legacy of bias in photos still lingers in today's training data for computer vision and AI. The underrepresentation of people of color in photographs affects how AI "sees" the world.

I share this not as a history lesson, but as a reminder: bias isn't always about malicious intent. Sometimes it's about defaults. And those defaults shape everything.

If we want AI that serves everyone, we need to question the defaults — in our datasets, our tools, and our decisions. This is exactly the kind of literacy conversation that belongs in schools, not just in engineering departments.

March 15, 2025 · AI · Policy · EdTech · Leadership

The Catch-22 of Permissionless Innovation in Education

The catch-22 of "permissionless innovation" — a concept developed by Adam Thierer (2014) and highlighted by Reid Hoffman and Greg Beato in Superagency (2025) — is playing out in real-time with AI in education.

For Thierer, permissionless innovation is about allowing experimentation with new technologies by default, taking care of problems only if they actually emerge later — instead of preemptively restricting innovation based on hypothetical concerns.

The explosion of AI tools and companies creates both opportunities and challenges precisely because developers aren't waiting for any kind of approval. Yet it's this very permissionless approach that has enabled technological breakthroughs. As Thierer notes, "Society stands on the cusp of the next great industrial revolution thanks to technological innovations that could significantly enhance welfare."

The paradox: permissionless innovation drives needed progress — yet it outruns thoughtful integration.

The approach of Pat Yongpradit (Code.org's Chief Academic Officer) resonates strongly with Thierer's emphasis on education as the primary solution. Instead of restrictive regulation, Thierer advocates for "educating both the public and producers" — exactly what personal experimentation and "show and tell" learning in schools does. Taking this one step further: we need learning partnerships where AI-experienced educators mentor newcomers, implementing what Thierer calls the "educate and empower" strategy.

We can't stop this wave. And freezing in our tracks isn't an option either. The better path: embrace innovation's benefits while building our own structures for implementation. Moving with urgency but also intention, focusing on digital literacy and empowerment rather than fear-based restrictions.

For school leaders specifically, this means:

  • Experimenting yourself, so you understand what you're governing
  • Building capacity in your team before mandating policy
  • Treating AI literacy as a core professional development priority, not an optional extra
  • Designing policies that are flexible enough to adapt as the technology evolves

The wave is coming regardless. The question is whether we're surfing it or getting knocked over by it.

Educators of the world — the moment calls for curiosity, not caution alone.

October 1, 2024 · AI · EdTech · K12 · Innovation

AI in Education: A Fresh Perspective on Transformative Potential

During a recent summer holiday, I ventured outside of routine tourism by creating my own "expert tourist guide GPT" using OpenAI's technology. Through careful, iterative prompting and fact-checking, I developed AI-curated walking tours that led me to discover hidden gems such as Melk in Austria, Esztergom and Kiskunfélegyháza in Hungary, and Trenčín in Slovakia. These AI-guided adventures revealed cultural narratives I might have missed had I followed the usual touristy routes. This experience inspired me to think differently about AI's possibilities in the world of education.

While myriad online discussions often center on AI's obvious applications — automated assessments and adaptive learning — my travel experiences highlighted its potential to uncover deeper, transformative possibilities. Here are eight areas where I believe AI is positioned to revolutionize education:

1. Evolution of Creativity: Our "IM-PERMANENCE" art exhibition demonstrated AI's impact on creativity. High school students collaborated with AI to create mixed media artworks exploring philosophical concepts through descriptive phenomenological deep dives — showing how AI can serve as both tool and creative partner.

2. Enhancing Student Success: AI is revolutionizing student support through systems that catch struggling students before they fall through the cracks — identifying patterns in academic performance, attendance, and engagement that might signal challenges from learning difficulties to social-emotional needs.

3. Redefining Academic Integrity: As AI writing tools become more sophisticated, the line between assistance and cheating gets blurry. Schools need new frameworks for academic integrity that acknowledge AI's role while maintaining high standards for original thinking.

4. Classroom Transformation: AI-powered feedback systems are reshaping student-teacher relationships. Imagine classrooms where AI teaching assistants facilitate small group discussions while teachers provide targeted individual attention.

5. Cultural Bridge-Building: AI translation and cultural adaptation tools break down barriers in global education by providing real-time cultural context and mediating cross-cultural misunderstandings.

6. Managing AI Dependency: As AI integration deepens, schools will need to implement strategies to prevent over-reliance through robust policies and literacy programs — including "AI-free zones" and regular periods where students engage in traditional problem-solving.

7. Revolutionizing School Accreditation: Current accreditation bodies typically evaluate schools through periodic visits and extensive preparation cycles. AI presents an opportunity to transform this into a continuous, real-time process through constant data monitoring — providing immediate feedback on areas needing improvement rather than waiting years for formal reviews.

8. Dynamic Curriculum Design: AI-driven curriculum design enables real-time adaptation to emerging trends and market demands, identifying skills gaps between current offerings and what students actually need for the future.

Through my own use of AI in my day-to-day work, I've learned that AI works best as a collaborative tool that amplifies rather than replaces human capabilities. Success requires moving beyond implementing AI merely for innovation's sake, focusing instead on creating meaningful educational experiences that prepare students for a future where human creativity and AI capabilities work hand in hand.

Originally published in APAC CIO Outlook — Education Edition, 2024.

February 5, 2023 · AI · EdTech · Leadership · History

A.I. Came. A.I. Saw. A.I. Conquered? (We'll see... 🤔)

February 5, 2023. I'm standing in front of a room of parents at my school in Bangkok.

It's before ChatGPT was on everyone's lips. Before every conference had an AI track. Before schools started panicking about plagiarism policies. Before "prompt engineering" was in anyone's vocabulary.

I open with three lines on the screen:

A.I. Came... A.I. Saw... A.I. Conquered?

The room goes quiet. Then a few smiles. Then someone whispers "Julius Caesar." And that's exactly where I want them.

Here's how the framework works:

A.I. Came — John McCarthy, 1956. A summer research project at Dartmouth. A proposal. A name: "Artificial Intelligence." It came quietly, into a room of academics, and most of the world didn't notice. That was the arrival — not a bang, a footnote.

A.I. Saw — AlphaGo, 2016. Move 37. If you don't know this story, stop and look it up. In Game 2 of the AlphaGo vs. Lee Sedol match, the AI made a move that no human would have made — a move that commentators called "beautiful" and "strange" simultaneously. Lee Sedol left the room. He came back and lost. Then he won one game. Then he lost the series. What matters isn't the score. What matters is that something saw the board differently than any human ever had. That's a different kind of intelligence.

A.I. Conquered? — GPT-3, 2020. And here's where I always pause. And where I put the question mark. On purpose.

Because Caesar didn't have a question mark. He came, he saw, he conquered — full stop, no ambiguity, history done. But we're not there yet with AI. And honestly? I don't think "conquered" is even the right frame. Conquered what, exactly? Conquered whom?

The question mark is the point.

When I first used this framework with parents in 2023, I wasn't trying to alarm anyone. I was trying to do what I think good educators do: put something complex into a shape people can hold. The three-act structure of AI history — arrival, perception, impact — gives people a place to stand while they think about something genuinely disorienting.

I've used it dozens of times since. At FOLSEA. At EduTech Asia. At the EARCOS AI Weekend Workshop at Brent Manila. With school leaders, with teachers, with parents. The slide changes. The Caesar reference stays. The question mark always stays.

Because the honest answer — the one I give every time someone asks me "has AI conquered?" — is:

It depends on what you're defending. And whether you showed up.

The subtitle on that original 2023 slide reads: "Presented by a human: Rolly Maiquez."

I still think that's the most important line on it.

© 2026 Dr. Rolly Alfonso-Maiquez