While AI technologies are rapidly expanding, key concerns about their societal impacts are often overlooked in the rush towards an AI future. The AI ecosystem is dominated by a handful of large tech companies whose incentives often misalign with the public interest. Furthermore, AI is frequently framed, falsely, as a solution to the world’s biggest challenges, including climate change.
If AI is to be valuable to the broader public, we need to rethink approaches to AI and its governance so that they empower broader communities, in line with principles of equity and justice. Openness, while not a silver bullet, can be a tool in supporting these efforts. However, the term “open” is used inconsistently, and often as a marketing ploy. This cohort will explore what openness can and should mean in AI, and how it can be leveraged to advance equity, accountability, and public interest goals.
The Openness & Equity Policy Leadership Cohort will bring together policy practitioners, advocates, academics, and technologists committed to shaping AI policies that work for people and the planet. Over seven sessions, we will critically engage with the political and technical dimensions of openness in AI, working towards concrete recommendations and interventions that can inform AI policymaking in the EU and beyond.
Key topics we’ll unpack include:
- Public AI: What are the criteria for benchmarking AI openness beyond technical specifications, and how can we critically assess AI systems through a justice, human rights, and societal lens, including whether they should exist at all?
- Openness & security: How does openness strengthen security, and how can we counter myths that suggest otherwise?
- AI & planetary boundaries: How can AI be aligned with climate justice rather than accelerating extractivism and environmental harm?
- The value of ‘open’: What are the practical applications of openness to advance accountability and counter open-washing?
- A vision for public AI in Europe: What would a vision for Public AI in Europe look like, and how can we steer digital sovereignty and ‘investment’ debates in that direction?
Details about the Cohort
This virtual cohort will run from early May through June, with sessions held weekly. Discussions will be interactive, co-designed with participants, and led by experts in AI policy, including Bruce Schneier (independent security expert), Udbhav Tiwari (VP of Strategy & Global Affairs at Signal), Michelle Thorne (Director of Strategy at the Green Web Foundation), and more.
The cohort sessions are anticipated to take place on the following dates, from 15:00 to 17:00 CET.
- Thursday, 8 May
- Thursday, 15 May
- Tuesday, 27 May
- Thursday, 5 June
- Tuesday, 10 June
- Tuesday, 17 June
- Thursday, 26 June
Cohort members will be expected to attend all sessions.
Who are we looking for in this cohort?
We welcome policy and advocacy practitioners, researchers, and technologists who are engaged in AI governance and want to develop strategic interventions that advance openness and equity in AI. No technical background is required. We are looking for critical thinkers and changemakers who want to shape the future of open and just AI policymaking in the EU and beyond. We will prioritise expressions of interest from minoritised and underrepresented communities.
What do participants gain?
- A collaborative space to strategise with leading AI policy thinkers;
- A deeper understanding of key AI policy debates and practical paths to effective policy impact;
- The opportunity to co-develop recommendations that can influence policy and advocacy work in the EU;
- A forum in which to refine and practise AI policy strategy knowledge and skills with leading practitioners;
- Connection to a network of peers working across policy, research, and public interest technology.
Interested candidates are invited to fill in this expression of interest survey by Monday, 21 April.
Read more about our Policy Leadership Cohorts, part of our Policy Leadership Program.
Questions? Please reach out to policyleadership@aspirationtech.org.