Artificial Intelligence (AI) promises to solve challenges in both the private and the public sector. In industry, for example, future accountants and marketers will share their day-to-day tasks with AI bots. In the public sector, AI could help governments act more quickly in crisis situations or anticipate disasters.
Future prospects point towards a high level of entanglement of AI with humans’ identities, wants and needs, as technology becomes ever more accessible in our daily lives. Yet the rise of AI can also lead to an unhealthy living environment for people and the planet. Signs of this are already visible today: headlines about fake news, systemic racism, surveillance, targeted human manipulation, and the impact of datacenters on natural ecosystems continue to surface in the media (Crawford, 2021; Noble, 2018). In response, AI ethics has received a lot of attention over the last couple of years (Lanier, 2020).
"Some might not get a seat at the table in the first place, resulting in reinforced societal inequality"
Co-creation with multiple stakeholders (e.g., governments, corporations, citizens, students and experts) is often mentioned as a way to create more responsible AI (Züger & Asghari, 2022). However, in these contexts the fundamental question of how power dynamics influence the creation of an AI system is not always addressed (Sloane et al., 2020). Awareness of this issue is important because, for example, some stakeholders’ voices may carry less weight than those of stakeholders with more power or knowledge. Some might not get a seat at the table in the first place, resulting in reinforced societal inequality (O’Neil, 2017).
That is why this research is geared towards a more even distribution of power among stakeholders representing both people and planet within co-creation sessions. To achieve this, it is necessary to improve communication, bridge knowledge domains and overcome language barriers. Boundary objects have proven to play a significant role in achieving these purposes (Gerling, 2020).
We propose a boundary object for use in co-creation sessions that specifically focuses on making the power dynamics behind AI more explicit. In doing so, we aim to create novel insight into 1) how boundary objects can be used as a tool for equal engagement and dialogue about power dynamics and 2) how awareness of power dynamics can translate into responsible decision-making contexts.
"The boundary object aims to materialize the power dynamics behind AI and the way these affect people and the planet."
The boundary object
The boundary object aims to materialize the power dynamics behind AI and the way these affect people and the planet. The most effective way to do this is to make an object that has visual meaning through common knowledge. That is why we chose to work with a literal translation of the metaphor for power: ‘Who pulls the strings?’
The multiple layers within the boundary object are structured as follows:
A. The planet
B. People affected by technology
C. Types of energy consumption
D. Amount of energy use
E. Technology (AI)
F. Power system
G. Value of voice
H. People in power
I. Degrees of power
The design invites stakeholders to add, subtract and restructure elements in different ways to materialize scenarios of power systems.
The interaction starts with placing people in power (H) on a playing field (F). The varying weights of the pawns (G) and their placement on the board (I) influence the balance of the entire system, which decides if and how AI is designed and implemented (E). Simultaneously, it affects the amount of energy use (D), the amount of control over people (B) and how the planet is exploited (A). The AI application (E) is not an interactive element; rather, it changes form indirectly by moving pieces within the layer of power (H), emphasizing that this is where responsible AI starts.
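As an illustrative sketch only (not part of the physical object or the authors’ method), the balance mechanic described above can be thought of as weighted voices at positions on the power field: each pawn’s weight (G) and placement (I) contribute to a net tilt of the system. All names, weights and positions below are hypothetical.

```python
# Hypothetical model of the balance mechanic: pawns with a "value of
# voice" (layer G) placed at a "degree of power" (layer I). Their
# weighted average gives the tilt of the power system, which in the
# object indirectly shapes the AI application (layer E).

from dataclasses import dataclass

@dataclass
class Pawn:
    name: str        # person in power (layer H)
    weight: float    # value of voice (layer G)
    position: float  # degree of power, from -1.0 (margin) to +1.0 (centre)

def system_tilt(pawns: list[Pawn]) -> float:
    """Net tilt of the system: 0.0 represents a perfectly even balance."""
    total = sum(p.weight for p in pawns)
    if total == 0:
        return 0.0
    return sum(p.weight * p.position for p in pawns) / total

# Hypothetical scenario: a corporation's voice outweighs citizens' voices.
scenario = [
    Pawn("corporation", weight=3.0, position=1.0),
    Pawn("government", weight=2.0, position=0.5),
    Pawn("citizens", weight=1.0, position=-1.0),
]
print(round(system_tilt(scenario), 2))  # tilted towards the powerful: 0.5
```

In a co-creation session, stakeholders would do this rebalancing physically, by changing pawn weights or moving pieces on the board rather than by editing numbers.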
"The object can reflect the existence of friction as it becomes visible and tangible."
The boundary object does not represent a truth but invites different interpretations (Star, 2010). As such, it functions as a common ground on which stakeholders can meet each other. The boundary object can help to analyze what and who played a role in the creation of (ir)responsible AI, and can facilitate the redesign of AI applications by imagining scenarios that lead to responsible AI. The object can reflect the existence of friction as it becomes visible and tangible. This allows stakeholders to single out important details within a complex system and share beliefs, thoughts and actions.
The context of use of the boundary object is the ‘Lab for Responsible Applied Artificial InTelligence (RAAIT)’ at Rotterdam University of Applied Sciences. The lab focuses on the business services sector: marketers, accountants, consultants, ICT providers, etc. How can they develop AI applications which are (and remain) ethically and socially responsible? In the lab, we work together with businesses, researchers, governments and students from various disciplines (Communication, Media Design, Game Design, IT and several business studies).