In our sixth AI Openness & Equity Policy Leadership Cohort session, we were joined by Bruce Schneier to unpack the intersection between AI openness and security. Our conversations centered on questions including:
- What does “security” in the AI context actually mean?
- Does openness make AI systems more or less secure?
- Can AI be designed in line with democratic values, and if so, how?
Photo: Bruce Schneier, credits: Asa Mathat
Below are the key reflections from our session:
What is “security” in the context of AI?
Security is often conflated with safety, privacy, secrecy, or national interests. But as Bruce explained, security means that the system is behaving as expected. In cybersecurity, this is often assessed through the CIA triad:
- Confidentiality: Is sensitive data accessible only to authorised systems or users? (e.g. is a language model sharing private training data, like personal emails or passwords, when prompted?)
- Integrity: Is the data accurate and reliable within its context? (e.g. is a self-driving car correctly interpreting data, or has it confused meters and feet? See the sketch after this list.)
- Availability: Is the system reliably accessible to authorised systems or users? (e.g. can a hospital’s AI diagnostic tool be used during peak hours, or is it unavailable due to cloud service outages or maintenance issues?)
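To make the integrity example above concrete, here is a minimal, hypothetical Python sketch of the meters-versus-feet failure mode: a braking decision that is only as sound as its interpretation of a sensor’s unit. The function names, numbers, and simplified physics are illustrative assumptions, not drawn from any real autonomous driving system.

```python
# Hypothetical illustration of an "integrity" failure: the same sensor reading
# leads to opposite decisions depending on how its unit is interpreted.
# All names and values are illustrative, not taken from any real system.

FEET_PER_METER = 3.28084

def stopping_distance_m(speed_mps: float, deceleration_mps2: float = 6.0) -> float:
    """Approximate braking distance in meters from basic kinematics: v^2 / (2a)."""
    return speed_mps ** 2 / (2 * deceleration_mps2)

def should_brake(obstacle_distance: float, speed_mps: float, unit: str) -> bool:
    """Decide whether to brake; correctness hinges on reading `unit` correctly."""
    if unit == "ft":
        obstacle_distance_m = obstacle_distance / FEET_PER_METER
    elif unit == "m":
        obstacle_distance_m = obstacle_distance
    else:
        raise ValueError(f"unknown unit: {unit!r}")
    return obstacle_distance_m <= stopping_distance_m(speed_mps)

# Correct interpretation: an obstacle 100 ft (~30.5 m) ahead at 20 m/s,
# with ~33 m needed to stop -> brake.
print(should_brake(100, 20, unit="ft"))  # True

# Integrity failure: the same value read as meters looks comfortably far away -> no braking.
print(should_brake(100, 20, unit="m"))   # False
```

The point is not the physics but the triad: the data itself never changed, yet its meaning did, and that silent misinterpretation is exactly the kind of integrity failure the CIA lens is meant to surface.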
While the CIA triad helps evaluate whether a system is behaving securely in technical terms, it does not adequately address the power imbalances at play: who defines the expected behavior, who benefits from the system, and who bears the risks and responsibility when things go wrong. These questions go beyond technicalities into public accountability and democratic values, and they illustrate how AI, security, and public interest technology intersect.
AI and the democratic deficit
Bruce emphasised that for democracy to thrive, both openness and equity are essential. Yet AI development is shaped by market dynamics, not democratic ones. As one cohort member put it, “we live in times where capitalism problems are tech problems, and vice versa”. Today’s AI systems are designed to extract data, attention, profit, and control, not to serve the public benefit.
This links to a recurring theme in our cohort: AI itself hasn't necessarily created the problems, but it does accelerate, amplify, and entrench existing harms. And too often, the narrative that “AI will fix it” obscures the structural change that’s truly needed.
One cohort member reflected on the importance of trust, noting that public trust in an institution is not the same as that institution being trustworthy. This distinction points to underlying questions of power, perception, and people’s lived experiences across contexts. An institution can be trusted, whether out of necessity, past experience, or aspiration, without being trustworthy, and vice versa; trustworthiness has to be earned through transparency, fairness, and accountability. This recalls ideas raised in our session with Alek Tarkowski and Zuzanna Warso of the Open Future Foundation: “public AI” does not mean the technology is state owned, but that it is trusted, accountable, and equitable in the eyes of a democratic public.
Openness vs secrecy in AI security
A persistent myth is that secrecy ensures security. But Bruce challenged this: secrecy in AI often serves to protect profits, not people. It entrenches corporate control, shields flawed systems from scrutiny, and undermines accountability.
When AI systems fail or make errors, whether through biased outputs, harmful decisions, or model collapse, there is rarely a public process for understanding what went wrong. Without transparency there is no learning or correction, so harms persist, often falling disproportionately on already marginalised communities. Take, for example, Safiya Noble’s Algorithms of Oppression, which details how search engines, and in particular Google’s famously ‘secret’ algorithm, reinforce racism, specifically against Black women. In this sense, secrecy doesn’t just fail to improve security; it actively impairs it.
Another example lies in the production contexts of the AI pipeline itself. A critical factor in producing biased outputs is the lack of visibility into training data and into how human annotation and content moderation are carried out. The work of categorising data as “toxic”, “good” or “relevant” is poorly documented and often outsourced to underpaid workers in the Global Majority, who are exposed to harmful content with little to no protection. These working conditions remain shrouded in secrecy, shielding technology companies from scrutiny and accountability. In this context, secrecy doesn’t just hide flaws; it renders invisible the crucial work that marginalised communities perform under exploitative conditions.
Governments, too, often default to secrecy in the name of national security. Vulnerability disclosure is a key example: while there has recently been positive movement on this front in the EU, many states across Europe and beyond maintain stockpiles of “zero-day” exploits (unpatched software vulnerabilities) rather than disclosing them, even when these exploits pose a major risk to the public. Similarly, public procurement practices often favour proprietary digital infrastructure, such as Microsoft Office 365, locking governments into closed systems and data security practices that breach EU law. These choices prioritise short-term convenience and political expediency over long-term resilience, auditability, and democratic oversight.
Bruce underscored that secrecy is not a neutral or inevitable default: it is a policy choice. In other industries, like aviation, there is a long-standing commitment to transparency: when a plane crashes, black box data is made public so the industry can learn and improve. That is a collective choice the aviation industry has made in the name of public safety. The AI ecosystem, by contrast, lacks such norms, even though the stakes are similarly high.
Where do we go from here?
To align AI with democratic values, we need more than stronger technical safeguards: we need structural, systemic change, starting with challenging the concentration of power in the hands of dominant tech companies. Bruce outlined some concrete shifts as a starting point, which were echoed by the cohort:
- Break up big tech monopolies: To move toward “Public AI”, we must address the concentration of power in vertically integrated tech giants. These companies control the full AI stack, from compute to data to deployment, and entrench private interests in what should be public interest spaces. Structural separation and stronger competition enforcement are essential first steps to diversify control and foster more public interest aligned alternatives.
- Ban surveillance advertising: As long as surveillance advertising remains the dominant business model of the internet, the logic of data extraction will shape how AI systems are built and used. This model fuels misinformation, erodes privacy, and incentivises closed and manipulative technologies. Reforming it is key to enabling more ethical, open, and accountable technology ecosystems.
- Reimagine democracy in the age of AI: We do not have to accept today’s AI landscape as inevitable. Public AI systems, meaning those which are built transparently, governed openly, and aligned with the public interest, are both possible and necessary. But they must be grounded in democratic accountability, not merely “public sector” implementation. “Public AI” requires trustworthy institutions, participatory governance, and clear guardrails to ensure alignment with public interest goals.
These changes won't happen overnight, but they will not happen at all unless we name the political, economic, and structural forces at play. Openness is not just about code; it’s about power, participation, and the values we embed into our systems.
A big thank you to Bruce Schneier and our cohort members for another insightful and thought-provoking discussion.
Up next: some concluding thoughts and highlights from our dear cohort members.
This blog post was co-written by Nitya Kuthiala.
Image credits: "Security - Dictionary" by aag_photos is licensed under CC BY-SA 2.0