After a short summer pause, the AI Openness & Equity Policy Leadership Cohort reconvened for its final session, featuring Maximilian Gahntz from Mozilla. Together, we unpacked the shifting dynamics of AI policy in the European Union, where momentum for AI expansion continues through the AI Continent Action Plan and, more recently, the Apply AI Strategy, with wide-ranging implications for regulation.

Our discussion focused on a central question: What role can openness play in shaping an accountable and public interest approach to AI policy in Europe?
Below are some of the key reflections and insights that emerged from the session.
Rethinking “Digital Sovereignty”
In Europe’s current geopolitical climate, where the United States is no longer viewed as a reliable partner, “digital sovereignty” has become a rallying cry. One proposed way to achieve it is the so-called EuroStack: a common digital infrastructure for Europe that would reduce its dependence on foreign technology, from connectivity to cloud services, AI, and online platforms. The idea promises autonomy and resilience, but it raises complex questions: what makes a system truly European in a globally entangled supply chain? Whose resources, labour, and data will power it?
As Max shared in our discussion, the appeal of the EuroStack narrative is clear. It sounds strategic, it signals strength, and it aligns neatly with industrial policy. Ana rightly noted that it is a catchy narrative for winning people over without having to spell out many details. But beneath that surface lie mercantilist undertones: a framing that risks prioritising consolidation over collaboration and closed systems over open ones. Similar trends are visible worldwide as countries promote “sovereign AI models” to assert control. Too often, these approaches replicate Big Tech’s concentration of power rather than dismantle it.
Pen captured a central concern: the problem with Big Tech isn’t that it’s American, it’s that it’s unaccountable. Replacing one monopoly with another, even under an EU flag, doesn’t advance democracy or user agency. As Sam asked, “If we end up with a Big Tech monopoly but it’s European, is that really good enough?”
This tension, between wanting greater control over the technology we use and relying on the same global infrastructures and power dynamics, reflects a broader challenge in Europe’s digital sovereignty debate. As Zuza Warso notes in her recent article, digital autonomy must be built not only on technological capability but also on values of transparency, accountability, and the public interest. In other words, we need strong visions for what we are building and why, and for whom these investments and infrastructure will truly work.
From a EuroStack to an Open Stack
Instead of a closed, frantic race to build “our own Big Tech,” Max urged us to think in terms of an open stack: technology infrastructure designed with public-interest values baked in, namely transparency, interoperability, accountability, and inclusivity.
Camilla asked, “Why do we even want an open stack?” It’s a powerful question. Openness is not valuable in itself; it is valuable when it enables accountability, accessibility, collaboration, and collective control. An open stack approach would focus on the purpose it serves: for instance, empowering people to adapt technology to local needs, reducing dependence on unaccountable actors, and building autonomy through shared knowledge, collaboration, and shared values.
AI expansion and the push for deregulation
Another issue we explored in our session was the European Commission’s incoming deregulation efforts, aimed at “cutting the red tape” to attract investment for AI expansion. Hard-won protections spanning consumer rights, environmental safeguards, transparency requirements, labour laws, and digital rights are on the chopping block. This includes laws like the GDPR and the ePrivacy Directive, which offer crucial baseline protections for personal data, as well as the newly minted AI Act.
A range of important questions need to be raised about the assumed value of this relentless AI expansion, and about its true costs to the environment, crucial rights protections, and democratic oversight. As the AI Now Institute noted in its recent analysis of the Apply AI Strategy, the plan features more language around “sovereign technology” and acknowledges Europe’s technological dependencies, yet it is silent on the added value of all this investment, and it risks “wrapping outcomes around technology rather than the other way around.”
In such a climate, it is increasingly tricky to influence regulatory dialogues. In policy spaces, “regulation” has become the term-that-must-not-be-named, replaced instead by a framing of “industrial policy”. Sam pointed to how this dynamic plays out in South Africa, where Big Tech companies heavily (and successfully) lobby against formal oversight. The few organisations doing accountability work, mostly in civil society, must navigate this regulatory taboo, often stepping in to fill the gap left by government inaction.
As Daniel noted, advocacy in Brussels faces a core dilemma: where can you actually say what you mean? And are decision-makers listening? Even public consultations are often opaque by design, structured to exclude meaningful participation. Isha described this as “death by consultation”: endless dialogue that gives the illusion of inclusion while avoiding genuine engagement. This regulatory capture is forcing hard decisions and creating the need for new ideas, approaches, and strategies for civil society to effectively influence these major files.
Where we go from here
Our discussion concluded with a shared recognition: the pursuit of “digital sovereignty”, though far from ideal as a phrase, is not inherently wrong in substance, but it must be anchored in clear values and purpose. The questions we’ve returned to throughout the cohort (openness and equity for what, and for whom?) apply here as well. The goal should not be to build European Big Tech, but to build public technology that is open, accountable, and equitable.
At its best, the digital sovereignty debate could model a new approach, one that centers public interest, participation, and shared control. At its worst, it risks becoming a race to replicate existing power structures under a new flag. The challenge, then, is to ensure that Europe’s investments in autonomy and innovation actually expand democratic agency rather than simply rebrand corporate dominance.
This was our last cohort session for the AI Openness & Equity Policy Cohort 2025. But the journey does not end here: we are still working on consolidating our learnings and reflections into collective recommendations. More soon!
This post was co-written by Nitya Kuthiala and the cohort members.
Image credits: Windy night on the European Neighbourhood, by Patrick and Ambar Liétar - Hernández, CC BY-NC-ND 2.0

