According to research by the UK non-profit organization Big Brother Watch, 98% of facial recognition matches flagged by surveillance cameras misidentify innocent people. The research further suggests that Black people and women are at the greatest risk of being misidentified.

Let’s be clear: we’re talking about artificial intelligence (AI) exacerbating pre-existing racial biases and amplifying social gaps. The consequences can hardly be coded away with a simple solution. Strange as it might sound, the next generation of engineers, anthropologists, sociologists, neuroscientists, and others must study society together before sitting down at the coding desk. Training algorithms to do something and simply hoping they will do it well is reckless.

As the editor of a popular business publication and a cultural anthropologist, I’m disturbed by the lack of conversation about unjust technology in the customer experience (CX) industry. Companies have to rethink how their AI systems are built and whom they serve.

At the same time, researchers, writers, computer scientists, and developers need to address AI harm before commercial value dominates every aspect of our lives. So, let’s talk for a moment about coded biases and gender discrimination, but also about political regulation and the private-company initiatives trying to develop algorithms that work for everyone.

Algorithms and people that perpetuate racial discrimination

There is no magic in how algorithms learn to make decisions. The easiest way to train them is to have humans label a huge amount of data. In other words, people keep feeding in information until the mechanism learns to make decisions based on the patterns it has been shown. The quality, and morality, of those decisions depends on the context people added during the labelling process. So where is the problem?
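Before answering, here is a minimal sketch of that labelling-and-learning loop, just to make the mechanism concrete. It is illustrative only: the captions, labels, and scoring rule below are my own assumptions, not a description of any real surveillance or CX system.

```python
from collections import Counter

# Step 1: humans attach labels to raw data. Their prism becomes the "ground truth".
labelled_data = [
    ("person walking a dog in the park", "normal"),
    ("person shouting at a bus stop", "suspicious"),
    ("group of teenagers outside a shop", "suspicious"),
    ("family having a picnic", "normal"),
]

# Step 2: "training" here simply means counting which words co-occur with which label.
word_counts = {"normal": Counter(), "suspicious": Counter()}
for text, label in labelled_data:
    word_counts[label].update(text.split())

# Step 3: a new case is scored by whichever label's vocabulary it matches best,
# i.e. by the patterns the labellers put in, nothing more.
def classify(text):
    scores = {label: sum(counts[word] for word in text.split())
              for label, counts in word_counts.items()}
    return max(scores, key=scores.get)

print(classify("teenagers walking in the park"))  # the labellers' worldview decides
```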

Image: an algorithm detecting the face of a Black woman, illustrating the intent behind building ethical technology.

Firstly, the people adding labels may describe what they see through an offensive, misogynistic, or racist prism. They might use words such as ‘crazy’, ‘addict’, ‘slut’, or ‘outsider’ and categorize people according to their own perceptions of what is crazy or normal, acceptable or unacceptable, or what it means to look like a woman or a man. Remember Tay, Microsoft’s chatbot that became sexist and racist in just a few hours? That one didn’t get a lesson in moral reasoning before being released.

Furthermore, algorithms won’t take underrepresented populations into consideration. MIT Media Lab computer scientist Joy Buolamwini found that training datasets are often overwhelmingly composed of lighter-skinned subjects. Such poor datasets have led, for instance, to a self-driving car prototype failing to recognize dark-skinned pedestrians, a gap that a simple per-group check (sketched below) would surface immediately.
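Here is a rough sketch of that kind of check: report detection rates per demographic subgroup instead of a single overall number. The column names and data are hypothetical, chosen only to show how an average can hide a disparity.

```python
import pandas as pd

# Hypothetical detection results, broken down by skin tone.
results = pd.DataFrame({
    "skin_tone": ["lighter", "lighter", "lighter", "darker", "darker", "darker"],
    "detected":  [True, True, True, True, False, False],  # did the system find the person?
})

# A single overall number hides the problem...
print("overall detection rate:", results["detected"].mean())

# ...while the per-group breakdown makes the gap impossible to miss.
print(results.groupby("skin_tone")["detected"].mean())
```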

If these kinds of mistakes led to the shocking death of a person of colour, who would be responsible? What would be the missing puzzle pieces that led to such consequences?

Why should we listen to social scientists?

From my vantage point, ‘fair’ AI is not about coding but about creating diverse development teams who understand society and culture. Algorithms obviously can’t be evaluated in isolation from the communities they affect. To do better, we need the social sciences to contextualize the varied realities people experience. Social scientists follow the norm of putting things into political, cultural, and economic context before drawing conclusions.

On the other hand, emerging tech start-ups, blinded by speed and the hope of a quick solution, often underestimate or overlook the long-term consequences of their products. As a result, it is not rare that when ‘we’ talk about CX or tech solutions, ‘we’ mean WEIRD populations: Western, educated, industrialized, rich, and democratic.

The question is, who is looking at the margins of society? Who is listening to people’s silence and the unspoken truths missing from the AI equations? If you are running a business and you don’t have an answer, seek out artists, scientists, researchers, and everyday people.

Encouraging self-determination and fluid identities

Image: a child with a rainbow painted under her eyes, a call for building ethical technology and diversity.

Imagine for a moment that we’re building an AI system to support leaders in deciding whom to promote or offer development opportunities to. We might train our workforce-management AI on the words we unconsciously use to describe women’s success, such as ‘kind, smiling, calm, always there to help’, and men’s, such as ‘transparent, direct, decisive, takes initiative’. In this way, we would simply replicate existing gender stereotypes, as the toy example below shows. Makes you think, doesn’t it?
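Here is that toy example, not any real HR system: if past promotion decisions were written up in gendered language, a text model trained on those write-ups learns the stereotype rather than merit. All the reviews, labels, and words below are invented for illustration.

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression

# Invented performance reviews and past promotion decisions (1 = promoted).
reviews = [
    "decisive, shows initiative, direct",       # historically promoted
    "transparent, decisive, strong leader",     # historically promoted
    "kind, smiling, always there to help",      # historically passed over
    "calm, supportive, always there to help",   # historically passed over
]
promoted = [1, 1, 0, 0]

vectorizer = CountVectorizer()
model = LogisticRegression().fit(vectorizer.fit_transform(reviews), promoted)

# A new review written in the "women's success" vocabulary gets a low promotion score,
# purely because of the words past managers chose.
new_review = ["kind, calm, always there to help"]
print(model.predict_proba(vectorizer.transform(new_review))[:, 1])
```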

By not questioning our biases, we are effectively helping AI create a world of only a few identities, identities shaped by someone else’s beliefs, leaving no place for LGBTQ+ people or any other currently underrepresented group. Quite frankly, this is not the world I want to imagine for myself.

So, yes, business models and the digital economy are personal. Our digital identities must reflect the fluidity of identities in the real world. People change, and how they define themselves is not predictable or ‘written in code’. Luckily, there are some initiatives and brilliant individuals fighting for an ethical future of technology.

Moving towards ethical technology with EU AI regulation 

In April 2021, the European Union (EU) became the first institution in the world to take a systematic approach to ensuring fair and ethical artificial intelligence development. The proposed draft classified remote biometric identification and mechanisms that evaluate customers’ creditworthiness as high-risk AI systems.

The initiative continued in January 2022, when the European Parliament proposed banning the mass surveillance of Internet users and ad targeting based on sensitive data. Great for Europe, you might think. But what will happen in the US, where AI serves a predominantly capitalistic purpose, or anywhere else in the world?

There are a few remarkable initiatives setting new trends in AI. One of them is surely the Center for Humane Technology, which produces a great body of work educating people about the insidious effects of persuasive technology. If you’re wondering how to take control of social media and protect yourself and your kids, they have created useful toolkits that can help you start your ‘digital diet’.

Recently I also came across an initiative called Feminist Tech Policy. This diverse group is on a mission to create formats that reflect the evolving nature of our digital world. Their twelve principles fall into four main categories: Global, Societal, Interpersonal, and the Self.

All of these efforts indicate that we need to reshape AI collaboratively. A small group of people created it, and it still makes decisions in the name of many. To prevent harm, we have to invite diverse community perspectives, as well as multidisciplinary scientists, to step in. Otherwise, customers will feel they are mere observers with no vote in any of the ‘progressive and beneficial changes’ made in their name.

Businesses will have two choices: to write the history of a thriving society, or to amplify social gaps, deepen polarization, and end up serving just a few privileged customers. Where do you stand?

Additional resources on building ethical technology

Below you’ll find just a few key resources to help you gain diverse perspectives on our digital presence, possible futures, and the active part you can play in shaping them.

Movies

  • Coded Bias
  • The Social Dilemma

TED Talk

  • Zeynep Tufekci: We’re building a dystopia just to make people click on ads

Books

  • Atlas of AI by Kate Crawford
  • Weapons of Math Destruction by Cathy O’Neil
  • Algorithms of Oppression by Safiya Umoja Noble
  • 21 Lessons for the 21st Century by Yuval Noah Harari

Initiatives

  • The Center for Humane Technology
  • The Algorithmic Justice League
  • Feminist Tech Policy
  • Big Brother Watch, UK
  • The Share Foundation, Serbia

Podcasts

  • Your Undivided Attention
  • Responsibility Tech