In our third AI Openness & Equity Policy Leadership Cohort session, we dove deeper into a tough but timely question: how do we design AI that actually serves the public - not just in theory, but in practice?
Alek Tarkowski and Zuzanna Warso from Open Future joined us for this session. They helped us unpack what “Public AI” means in today’s (hypey) AI landscape, and shared ideas and projects they have been working on to reimagine and reframe openness by tackling power dynamics and structural imbalances in technology. Our session focused on how we can embed public interest principles into real, actionable policy recommendations.
A report released earlier this year by Open Future and Bertelsmann Stiftung, written to help policymakers and funders turn the vision of Public AI into reality, helped contextualise our discussion. Other work by the Mozilla Foundation and the Public AI Network also helped us visualise building public AI as an alternative to existing concentrations of power.
The following are some reflections and key takeaways from our third cohort session:
What is “Public AI”?
Today’s most advanced AI systems are largely proprietary (going back to last week’s discussions, most, unfortunately, are closed models). This concentration is not only structural: a handful of companies build these systems, train them, and decide how they are used, often reaping the rewards while the public shoulders the risks. That’s not just a tech problem. It’s a power concentration problem.
“Public AI” is a vision for addressing this power imbalance. “Public” technology does not necessarily or automatically mean “state owned”; rather, it is people- and community-focused. It rests on three things: accessibility (openness), functioning for the common good, and public ownership or control. This leads to a question raised by one of our cohort members: who, where, and what is this “public”? And how do we meaningfully gauge what this “public” wants or really needs?
What’s the point of “Public AI” anyway?
Within the first few minutes, a cohort member asked a deceptively simple question: “Public AI… for what? What ‘good’ does it actually serve?” That question stuck with us. When you emphasise the word “public” over “AI”, things begin to look different. It pushes you past the hype and toward alternatives that can potentially challenge concentrated power. Questions begin to centre around who it serves, how it’s governed, and why it exists in the first place.
This framing allows us to move away from techno-solutionist, technology-centric approaches towards more need-centric, community-led ones. The goal is not to create AI for the sake of AI, but to understand how AI can be applied to well-defined, evidence-based problems that serve the public’s interests. This “public”, we discussed, is not a static place, metric, or person. Rather, it emerges from the ongoing process of listening to the voices of diverse communities.
How can principles of Public AI help us draft more meaningful and concrete Recommendations?
In this session, we discussed this issue from different angles intended to feed into our final Recommendations for better AI policymaking. We addressed some important questions that we envision being part of a methodology for policymakers to assess and regulate AI systems:
- Is AI the right tool for the job? What is the true cost of AI infrastructure expansion?
We began by unpacking the full costs of AI expansion and looked at the purpose and necessity of AI systems. We reflected on the importance of decoupling the technology from its marketing narratives and claims of inevitability, by asking the critical question: is this the right tool for the job? Does using AI actually help? When an AI system is procured by a government, its real costs should be considered beyond the financial investment: the energy and resources it consumes, its environmental impact, including the land occupied by infrastructure like data centres, and the social inequalities it can amplify. Right now, much of the AI conversation is distracted by vague terminology like “AI for good” that is rarely backed by clear evidence or accountability. This obscures a grounded conversation, especially when policymakers (or schools, hospitals, businesses, etc.) are procuring or deploying AI technologies, about the problem they are trying to solve and whether AI is the right tool for the job.
- Are affected communities involved in shaping the system?
An often overlooked yet critical question, especially if technology is to serve the public interest, is whether and to what extent impacted communities are involved in the design, development, and deployment of these technologies. There is a need to bring new voices to the table: civil society, activists, academics, and people directly affected by these tools. This requires policymakers grappling with regulation to confront power dynamics and address concerns such as diversity, decolonisation, and intersectionality, so that the resulting policy truly serves diverse communities.
- Does the process build trust and redistribute power?
A great place to start with redistributing power is by levelling the playing field through education, not only about how AI works, but about how it impacts our daily lives. For instance, one cohort member raised concerns about romantic chatbots, and how the risks of manipulation, emotional dependency, or even addiction are largely absent from public awareness. These are not science fiction stories anymore, but real issues harming real people.
We need to take a more active role in understanding, and resisting, technology that does not serve us. As AI continues to be thrust upon us, often without our consent, there is a sense that we have no control. But awareness is a first step: it creates choice, and once we know we have a choice, we can assert agency and resist technology that is imposed on us without meaningful consent.
- Are there mechanisms to challenge harms, access redress, and hold actors accountable?
Even with the best intentions, there is no “perfect AI”, just as there is no “clean internet” or perfectly safe platform. The reality is that harms will always emerge. What matters is that those who develop and build AI systems anticipate these harms, avoid them wherever possible, and ensure there are transparent and publicly auditable processes in place to mitigate them if and when they arise. EU policymakers have long encouraged this approach for platforms, for instance in the Digital Services Act. A cohort member brought up the need for independent AI risk assessments, conducted by unbiased third parties rather than the companies building the systems. As one cohort member put it bluntly: “You don’t ask a student to grade their own homework!”
So, what’s the right strategy?
The public value of technology lies in whether - and how - it empowers communities. But in today’s political zeitgeist, where EU governments are actively pursuing regulatory “simplification” (or de-regulation), where public and private investments are accelerating the expansion of AI infrastructure, and where dominant narratives frame AI as a solution for everything from healthcare to education to the climate crisis - it is increasingly difficult to challenge the hype and advance people-first visions of public AI.
So where does that leave us? It means staying critical: not blindly opposing or uncritically embracing AI tools, but continuing to ask hard questions. It means recognising that “competitiveness” or the need to win the global “AI race” are not goals or strategies in themselves, and should be scrutinised rather than accepted at face value. There is power in continuing to advocate for more just systems that reflect and serve the needs of communities, and in acknowledging that, in many cases, we may not need these systems at all.
We need to imagine a different future, one that resists techno-solutionism and refuses to treat AI as the answer to our societal challenges, major and minor. Advocating for public-centred approaches that put communities at the core can help pave the way towards those visions.
In our next session...
We’ll tackle the intersection of climate justice and AI, with Michelle Thorne from the Green Web Foundation.
This post was co-written by Nitya Kuthiala.