Why AI without humanity is dangerous for diversity

Psychology is at the very core of the Attuned platform, which draws on more than 50 years of research into intrinsic motivation and is built around the 11 Motivators identified by a team of academic and clinical experts. Equally important, however, is artificial intelligence (AI): the complex algorithms and advanced statistical models that allow us to turn the response data into actionable insights.

Like almost all technological innovations, AI can be used to improve the way we live and work, or it can be put to more nefarious purposes. It follows that we need checks and balances to ensure that AI technology, and the way it's used, tends towards the ethical end of the scale. As such, the EU's proposal to create the first-ever legal framework regulating AI is welcome news.

For us at Attuned, the key to making AI a force for good has always been that it should be used to illuminate potential problems and enhance people's natural abilities to solve those problems efficiently and effectively. After all, we created Attuned to help people understand one another better, something we humans tend to do poorly when left to our own devices. That is why we believe it is crucial that any new AI technology originate from a place of humanity.


Why good AI needs good EI


So how can we imbue AI with a sense of humanity? It starts with making sure that our machines are not just learning to be intelligent, but learning from a position of emotional intelligence (EI). To ensure that EI underpins the development of AI, it's vital that we have the big ethical discussions about how it should (and should not) be used now, while the technology is still in its relative infancy. Even more importantly, those discussions need to happen before AI companies become too large and powerful to control. We are already experiencing that situation with the current tech giants, whose businesses were poorly understood by the public and governments until it was too late.

One of the trickiest elements of AI is that, often, we don't really know how its decision-making works. It's a black box. We create AI by training it on data sets, but when it gives us a result, we don't necessarily know how or why it arrived at that result. We can't question its logic or probe its thinking; the result just is.

(There are, however, emerging techniques such as LIME, or Local Interpretable Model-agnostic Explanations, that are helping us understand how AI models arrive at their decisions, and which may help to establish trust in the process. Hopefully these types of technologies and trust-building measures will be part of any discussions and resultant decisions around AI regulation.)
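To make that concrete, here is a minimal, hypothetical sketch of what a LIME explanation looks like in practice, using the open-source lime Python package against a synthetic model. The data, model, and feature names are placeholders for illustration, not anything from the Attuned platform:

```python
# Hypothetical sketch: explaining one "black box" prediction with LIME
# (Local Interpretable Model-agnostic Explanations). The model and data
# below are synthetic stand-ins for illustration only.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from lime.lime_tabular import LimeTabularExplainer

# Train an opaque model on synthetic tabular data.
X, y = make_classification(n_samples=500, n_features=6, random_state=0)
model = RandomForestClassifier(random_state=0).fit(X, y)

feature_names = [f"feature_{i}" for i in range(X.shape[1])]
explainer = LimeTabularExplainer(
    X,
    feature_names=feature_names,
    class_names=["negative", "positive"],
    mode="classification",
)

# Ask which features most influenced the prediction for one instance.
explanation = explainer.explain_instance(X[0], model.predict_proba, num_features=3)
print(explanation.as_list())  # e.g. [('feature_2 > 0.48', 0.19), ...]
```

The output lists the features that pushed this single prediction up or down, which is exactly the kind of local visibility a pure black box lacks.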

Diversity starts with data

Given the opacity of AI decision-making, how an AI system is trained becomes even more important, and the quality of its training data is essential.


A crucial aspect of this is the diversity of the data. If the AI is involved in recreating human decision-making, or identifying humans, or if the decisions it makes directly affect our lives, then we want to make sure that it has been trained appropriately. This means using diverse data sets that span the full spectrum of humanity—from different physical characteristics and cultural behaviors to belief systems and thought processes. If machine learning doesn’t factor in diversity, then the AI we create will just reinforce—or, worse still, amplify—the biases we currently have in our society. 
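As a purely hypothetical illustration of what auditing data diversity can look like in practice, the sketch below compares the demographic composition of a training set against a reference population. The group column, labels, and reference shares are all assumptions for the example:

```python
# Hypothetical sketch: audit the demographic composition of a training
# set against a reference population. The "group" column and reference
# shares are placeholder assumptions for illustration.
import pandas as pd

train = pd.DataFrame({
    "group": ["A", "A", "A", "A", "B", "B", "C"],
    "label": [1, 0, 1, 1, 0, 1, 0],
})

# Share of each group actually present in the training data.
observed = train["group"].value_counts(normalize=True)

# Reference shares, e.g. from census data for the target population.
reference = pd.Series({"A": 0.4, "B": 0.4, "C": 0.2})

# Negative gaps flag under-represented groups before training begins.
gap = observed.reindex(reference.index, fill_value=0.0) - reference
print(gap.sort_values())
```

A check like this is crude, but running it before training starts is far cheaper than discovering a skewed model after deployment.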

This problem of bias in AI is real and needs to be addressed at the most fundamental level. With more and more uses for AI being created every day, the risk that even seemingly benign applications may be infused with biases with harmful real-world consequences is growing rapidly. And once the companies behind the AI grow large, and their usage becomes habitual, we are in real danger of further institutionalizing our current biases if we don't take adequate preventative measures.
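One widely used first-pass test for this kind of bias in selection decisions, offered here as a generic sketch rather than anything from this article, is the "four-fifths rule": if one group's selection rate falls below 80% of another's, the process warrants scrutiny. The hiring numbers below are invented:

```python
# Generic sketch of the "four-fifths rule" disparate-impact check:
# if one group's selection rate is less than 80% of another's,
# the outcome warrants a closer look for bias.
def selection_rate(selected: int, total: int) -> float:
    return selected / total

def disparate_impact_ratio(rate_disadvantaged: float, rate_advantaged: float) -> float:
    return rate_disadvantaged / rate_advantaged

# Hypothetical hiring outcomes for two applicant groups.
rate_a = selection_rate(selected=30, total=100)  # 0.30
rate_b = selection_rate(selected=50, total=100)  # 0.50

ratio = disparate_impact_ratio(rate_a, rate_b)
print(f"ratio = {ratio:.2f}")  # 0.60 < 0.80, so the process should be reviewed
```

A ratio below 0.8 doesn't prove discrimination, but it is a cheap early-warning signal that a model's decisions deserve auditing.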

Balancing regulation and innovation

It makes sense that governments are in the best position to address the issues surrounding AI and to regulate its development. Doing so will require balance: creating thoughtful rules while avoiding heavy-handed regulation that would restrict innovation and prevent new companies from growing and becoming competitive. And of course, government regulation is far better than the alternative, in which AI researchers alone bear responsibility for the quality of the data; even the most well-intentioned researchers would likely still create biased systems.

Source: Element AI

And there is a further complication due to the demographics of AI researchers themselves. A 2018 survey by WIRED magazine found that only 12% of AI researchers are women, suggesting that data related to women, and women's perspectives, are unlikely to be sufficiently represented in the training of AI systems. Minorities are similarly, and shamefully, underrepresented. In short, the people creating AI don't adequately mirror our societies, and therefore won't be fully equipped to train AI on perspectives different from their own. If we don't address this at a societal level, and by proxy at the government level, we will continue to deal with problems of AI discrimination, such as Amazon's now-scrapped AI recruiting tool or the flawed attempts to predict recidivism in US courts.

As AI becomes more complex and more prevalent within the infrastructure of our lives, there is a real risk that discrimination, even when it arises through negligence rather than intent, becomes more pernicious and more nuanced in its injurious effects.

The common objection to government involvement, and potential regulation, is that the cost will be too high: it will stifle innovation and competition, making it harder for new companies and new services to emerge, and leaving only the biggest and most well-resourced players able to navigate the rules, which would in effect institutionalize a different form of homogeneity. To address these concerns, it is critical that discussions on how to approach AI's development at the societal level begin early and openly, taking in the views of all the key stakeholders.

Shaping AI’s future

The EU may be leading this conversation globally at the moment, but other powers, most notably the US and China, will certainly seek to become the global leader in AI for strategic geopolitical purposes. Within the US, there may be different approaches. For example, rather than regulation, the Federal Trade Commission (FTC) may require truth in advertising: if you say your AI doesn't have bias in hiring, then the burden will be on you to prove it.

Ultimately, AI has the potential to change the lives of billions of people for the better. But to ensure that it can do so equitably and without bias, we have to build fairness into its development frameworks from the beginning. It's not easy, but it can be done. Take it from someone who learned the hard way.

I was recently invited to discuss the topic of AI regulation on Cheddar News and Bloomberg Radio. Please click the links below to play the respective interviews:

Cheddar News (April 26, 2021)

Bloomberg Radio (May 5, 2021)





Casey Wahl
CEO and Founder
