Diversity, Bias and Ethics in AI Development

Lessons from “The Design of Everyday AI Things”

On August 23rd, we were joined by four leading AI thinkers from the spheres of foundations, academia, and for-profit companies: 


Together we debunked everyday “AI clichés” and discussed the critical concepts of diversity, bias, and the role of a designer in instilling responsibility in this time of change. 

Listen to the Podcast:

Watch the Session Now:

The Design of Everyday AI Things | Nature x Design

Learn about opportunities for designers using data as material to create social impact through a more inclusive design of products and services.

Posted by TEALEAVES on Sunday, August 23, 2020


From New York to San Jose, Italy to Israel, attendees of the live session engaged passionately with the conversation, sharing resources and life experiences. We have compiled these for your exploration and enjoyment. 

Diversity and Inclusion in AI

Dr. Jamika D. Burge is the Head of AI Design Insights at Capital One and the Co-Founder of blackcomputeHER, an organization positioned to be an influential think tank for Black women and girls in computing and technology. Dr. Burge advocates for the importance of representation and diversity in data collection and analysis as it pertains to AI.

Dr. Jamika Burge on Responsible AI:

How do we have AI machine-learning without understanding the impact of that work, or the impact of not including as many people as possible, and creating experiences that matter to them?

What does representation in data mean? As humans are multidimensional, the creators of technology and the builders of algorithms must understand the entire context and experiences of the end-user.

Panelists stressed the importance of representation on the teams building AI tools and data sets: it is a prerequisite for arriving at more equitable innovations. The long and important journey of anti-bias, anti-racist work is crucial to identifying, and ultimately correcting, the biases built into the technology we use every day. For those with a seat at the table, pulling up a chair for others is crucial to ensuring that values and bias are addressed for future generations.

For a more in-depth primer on algorithmic bias, specifically as it pertains to race, a panel participant urges viewers to watch a presentation from Where Are the Black Designers by Shabnam Kashani, Interaction Designer at Google.

For more on intersectionality, see here.

Data and Bias

Embracing equity will impact the data we collect and will affect how these algorithms affect our world.

– Dr. Molly Wright Steenson, Associate Professor at Carnegie Mellon University

Biases exist. From an equity perspective, the way we build algorithms needs to be explored: what problems are we solving, and who is creating the steps to solve those problems? 

Reducing bias in technology starts with analyzing the data set. Dr. Molly Wright Steenson argues that the reduction of bias must begin at the source:

The crux of data is that it is in the past. The issue with data sets is the reinforcement of existing biases rather than finding new ways to do things and solve problems.

If the supposed objectivity of data rests on something from the past that we reinforce into the present, then these biases continue to be perpetuated. The Cognitive Bias Codex, an infographic that visually captures the biases which unknowingly shape our experiences and decision-making, helps illustrate this issue. 

We need to talk about our assumptions and how we want to deal with them.

– Ruth Kikin-Gil, Responsible AI Design and Strategy at Microsoft 

Ethical and Responsible AI

Understanding both human interaction and the logic behind the algorithm is essential. At the center of this is how humans interact with each other, and how humanity can responsibly and ethically create and interact with these systems.

Accountability is a core principle of responsible AI. Ruth Kikin-Gil argues that Artificial Intelligence cannot be considered as algorithms alone, as there is much more to the story: 

We need to acknowledge that the humans creating the systems and the products that use the systems are all part of the equation.

The human values behind each and every application need to be examined, and the way the system behaves needs to align with those values. Not doing so signals a lack of accountability. 

When the builders of technology hold different values, biases and experiences, what does it mean for those using these technologies?

Ruth Kikin-Gil discusses the following in her article, Humanity-Centered Design: How Ethics Will Change the Conversation about Design:

Designers put people first. They empathize, observe, and listen. They find problems to solve not because they are technically difficult, but because they are hard human issues. How to use AI is one of these challenges — and humanity-centered design could be the solution.

Microsoft has created the Office of Responsible AI, which governs and shepherds the development of AI products at Microsoft to ensure that AI is built in an ethical and responsible way. The team has created frameworks that are practical and digestible. The guidelines for responsible AI from Microsoft can be found here.

Dr. Molly Wright Steenson encourages participants to read Why Computing Belongs Within the Social Sciences by Randy Connolly, for a further dive into the importance of human-centric computing and technology.

What Now?

To improve the human outcomes of AI in the future, those who build these systems must recognize the innate biases that exist in their lived experiences, and in the data collected as a result of those experiences. An attendee expressed this as "GIGO – garbage in, garbage out": what comes out (the technology) is only as good as what goes in (the data, and the process used to collect that data). We are responsible and accountable for understanding this connection. Innovators and designers must be deliberate and bring new experiences to the table in order to design the things that shape access and participation in the world.

Through the democratization of AI, we must recognize how our data is being used, and who benefits from our data. Dr. Jamika D. Burge urges each of us to move from understanding AI to understanding how our data is being used, who it is benefiting, and whether or not it is being used for good. The crux of this is to encourage conversations across disciplines, to make certain everyone has a role in ensuring all of humanity is considered equitably. 

By tuning in and paying attention to the world of AI, we can acknowledge the diversity of lived experience and ensure the design of these systems is equitable.


Discover the event here.

The Design of Everyday AI Things

Learn More:

Diversity and Inclusion in AI:

Presentation from Where Are the Black Designers by Shabnam Kashani, Interaction Designer at Google.

Google ‘fixed’ its racist algorithm by removing gorillas from its image-labeling tech

Google Decides to Stop Training AI on Homeless People’s Faces

Speaking Truth to Power: Exploring the Intersectional Experiences of Black Women in Computing

For more on intersectionality, see here.

Data and Bias:

The Cognitive Bias Codex

Better Together: Guidelines for Designing Human-AI

AI & Society | Spring 2020 | Syllabus

Ethical and Responsible AI:

Why Computing Belongs Within the Social Sciences by Randy Connolly

A.I. Needs New Clichés by Molly Wright Steenson

Architectural Intelligence: How Designers and Architects Created the Digital Landscape by Molly Wright Steenson

You Look Like a Thing and I Love You: How Artificial Intelligence Works and Why It’s Making the World a Weirder Place by Janelle Shane