The Ethics of AI Comes Down to Conscious Decisions

This blog post was written by Pedro Pereira as a guest author for Cloudera. 

Right now, someone somewhere is writing the next fake news story or editing a deepfake video. An authoritarian regime is manipulating an artificial intelligence (AI) system to spy on technology users. No matter how good the intentions behind the development of a technology, someone is bound to corrupt and manipulate it.

Big data and AI amplify the problem. “If you have good intentions, you can make it very good. If you have bad intentions, you can make it very bad,” said Michael Stiefel, a principal at Reliable Software Inc. and a consultant on software development. 

It’s important to be conscious of this reality when creating algorithms and training models. Big data algorithms are smart, but not smart enough to solve inherently human problems. The AI in a self-driving car, for instance, cannot tell that a snowman on the curb is never going to jump into its path. The algorithms under the hood lack the common sense a human driver takes for granted and may hit the brakes anyway, said Melanie Mitchell, Davis Professor of Complexity at the Santa Fe Institute in New Mexico and professor of computer science at Portland State University.

Such scenarios create complexities and ethical dilemmas. How can developers ensure algorithms are used for good deeds rather than nefarious purposes — that the vehicle doesn’t purposely run someone off the road? How do you avoid creating a technological loose cannon that racist groups can use to organize, or cybercriminals to steal a senior citizen’s savings or a company’s most closely held secrets? And how do you prevent nation-states from creating a brave new world in which omnipresent leaders suppress citizen freedoms?

Conscious Transparency

Technology companies that are serious about ethics — and want to avoid pariah status — need to be accountable. Turning a blind eye to problems or applying half measures isn’t going to work. Social media platforms have struggled with this. First, they were criticized for not policing content enough, and then, after implementing tougher policies, for being too heavy-handed.

Transparency is key. When algorithms make decisions, users may not realize AI is in charge, so to speak. It’s an issue with social media, as users accustomed to sharing whatever content they wanted were suddenly restricted by algorithmic rules.

The public at large doesn’t know how algorithms work, so when technology acts in unexpected ways, it frustrates users. This could be addressed with an explanation of how a technology works — how, for instance, machine learning (ML) engines get better at their tasks by being fed gobs of data. A chatbot that now relies mostly on canned answers eventually becomes more precise and useful.
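
To make that mechanism concrete, here is a minimal sketch in Python, using scikit-learn on synthetic data. The dataset and model are illustrative assumptions, not anything from a real chatbot:

```python
# A minimal sketch: the same model, trained on progressively more
# examples, usually gets more accurate at its task.
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

# Synthetic stand-in for the "gobs of data" an ML engine learns from.
X, y = make_classification(n_samples=10_000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

for n in (100, 1_000, len(X_train)):
    model = LogisticRegression(max_iter=1_000).fit(X_train[:n], y_train[:n])
    print(f"{n:>5} training examples -> "
          f"accuracy {accuracy_score(y_test, model.predict(X_test)):.3f}")
```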

“When developing ethical AI systems, the most important part is intent and diligence in evaluating models on an ongoing basis,” said Santiago Giraldo Anduaga, director of product marketing, data engineering and ML at Cloudera. “Sometimes, even if everything is done to deliver ethical outcomes, the machine may still make predictions and assumptions that don’t abide by these rules. It’s not the machine’s fault. After all, machine learning systems are inherently dumb and require a human in the loop to make sure the model remains healthy, accurate, and free of bias.”
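
What that “human in the loop” can look like in code is sketched below. The accuracy floor and function name are illustrative assumptions, not Cloudera’s implementation:

```python
# Hedged sketch: re-score a deployed model on fresh labeled data and flag
# it for human review when accuracy drifts below an agreed floor.
from sklearn.metrics import accuracy_score

ACCURACY_FLOOR = 0.90  # assumption: the minimum accuracy the team accepts

def needs_human_review(model, fresh_X, fresh_y, floor=ACCURACY_FLOOR):
    """True if the model's accuracy on fresh labeled data fell below the floor."""
    return accuracy_score(fresh_y, model.predict(fresh_X)) < floor
```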

“The ultimate question is how artificial intelligence can contribute to a just and sustainable society,” said Stiefel. “We need to find the place where software is useful to society and where humans are needed. The answer is not obvious. We need to be able to make this decision consciously, not have it made unknowingly as the technology is released into the world.”

All the World Is a Lab

In a blog post on Medium, author Rob Walker posits: “It would have been smart to think ahead about how neo-Nazis might use Twitter, how pedophiles might use YouTube, or how a mass murderer might use Facebook Live.”

The reasoning is valid. But we must recognize the current limitations of technology. AI and other data-intensive algorithms are infants. No matter how well-raised, once they’re out in the world, they’ll act in ways that surprise their parents. Or, to put it another way, the whole world is a lab – and we are all part of the AI experiment, watching, feeling, and coping with the results.

“By far the greatest danger of Artificial Intelligence is that people conclude too early that they understand it,” AI researcher Eliezer Yudkowsky wrote in a paper titled, “Artificial Intelligence as a Positive and Negative Factor in Global Risk.” 

This doesn’t exonerate technology companies from applying ethics to development. As Apple’s Tim Cook has said: “We are responsible for recognizing that the devices we make and the platforms we build have real, lasting, even permanent effects on the individuals and communities who use them. We must never stop asking ourselves, what kind of world do we want to live in? The answer to that question must not be an afterthought, it should be our primary concern.”

Development vs. Use

In addressing the ethics of AI, Stiefel advocates that developers and product leaders acknowledge a fundamental distinction between creating a product with potential adverse effects and a platform “allowing certain types of expression to take place.” The former involves the very essence of creation while the latter centers on use, he said.

When we talk about Facebook, Twitter or YouTube trying to arrest the spread of hate speech or fake news, the issue comes down to regulating use. Social media platforms are grappling with something newspaper publishers figured out long ago: Self-censorship is your friend. Having been called out for a laissez-faire approach, they’re now trying to correct past errors by throwing AI at the problem.

“Even then,” Stiefel said, “since the AI algorithms do not currently understand the meaning of what they are analyzing, content can be categorized incorrectly.”
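
A toy example illustrates the point: a filter that matches words without grasping meaning produces both false positives and false negatives. The blocked-word list here is invented for illustration, not any platform’s actual rules:

```python
# Toy moderation filter: it matches words, not meaning.
BLOCKED_WORDS = {"kill", "attack"}  # invented list for illustration

def is_flagged(post: str) -> bool:
    return any(word in BLOCKED_WORDS for word in post.lower().split())

print(is_flagged("We're going to kill it at the demo tomorrow!"))  # True  (false positive)
print(is_flagged("Meet at dawn. You know what to do."))            # False (false negative)
```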

Overcoming these issues isn’t easy, but it is doable. Lloyd Danzig, chairman and founder of the International Consortium for the Ethical Development of Artificial Intelligence, acknowledges the difficulty of foreseeing and preventing every type of platform misuse. “In recent years, many platforms have come under fire not for simply failing to prevent undesirable behavior, but for encouraging or ignoring it in the face of more powerful economic incentives,” he said.

And economic incentives drive another form of misuse — one that implicates enterprises collecting data from users without transparency. It’s led to privacy laws such as the European Union’s General Data Protection Regulation (GDPR) and the California Consumer Privacy Act (CCPA).

The Biggest Threat?

While technology companies have an unquestionable obligation to consider ethics in development, if someone is bent on corrupting a piece of technology, they’re bound to find a way. Too often, that someone is a nation-state. Witness the widespread use of surveillance by governments against their citizens. 

“At least 75 out of 176 countries globally are actively using AI technologies for surveillance purposes,” the Carnegie Endowment for International Peace revealed in its 2019 report, “The Global Expansion of AI Surveillance.”

Whether restricting internet access for political gain or using cameras with facial recognition to keep citizens in line, authoritarian governments are never shy about co-opting whatever technology is available to control their citizens. It wasn’t long ago that technology prognosticators proclaimed “that every kind of new technology would actually serve the cause of freedom and would undermine and subvert authoritarian rule,” Kai Strittmatter, who has written about technology and surveillance, recently told NPR’s Dave Davies. Instead, he said, governments in authoritarian countries are using technology to cement their control.

Ethics in Development

History is filled with regrets. J. Robert Oppenheimer expressed mixed feelings later in life about his role in developing the atomic bomb. Ethan Zuckerman has apologized for his invention of the pop-up ad. In 2017, FaceApp, which uses neural network technology to edit selfies, “apologized for building a racist algorithm,” perhaps the first of many such regrets.

Realistically, it’s not possible to entirely prevent misuse of a product. However, in product development — or specifically the creation of AI algorithms — ethics demands that developers not only try to anticipate misuse but also recognize their own human frailties and suppress the biases that can influence an algorithm. Ethics warriors such as Danzig are working hard to make this case.

“Bias can seep into AI systems from a variety of sources and has the potential to create hazards that don’t reveal themselves until damage has been inflicted,” Danzig said. An especially problematic example of unintended consequences involves the use of big data in criminal sentencing. ML algorithms that look at historical data to make sentencing recommendations to judges have led to tougher penalties for convicted criminals from low-income and minority groups.

Similar outcomes can occur in other areas, such as when mortgage lenders decide on criteria for their lending decisions, Cloudera’s Anduaga points out. “If one of those criteria is gender, race or age, the machine will weigh it the same as other factors. Because the machine indiscriminately looks at the data, it’s up to the humans in the business to actively mitigate that bias or risk their brand, reputation, and credibility.”
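
One minimal human-side check along those lines is to compare the model’s approval rates across a protected attribute. This sketch is illustrative only; the column names, data, and the 0.8 “four-fifths” threshold (borrowed from US employment guidance) are assumptions, not anything Anduaga prescribes:

```python
# Hedged sketch of a disparate-impact check on a model's decisions.
import pandas as pd

def disparate_impact(df: pd.DataFrame, group_col: str, outcome_col: str) -> float:
    """Ratio of the lowest group's approval rate to the highest group's."""
    rates = df.groupby(group_col)[outcome_col].mean()
    return rates.min() / rates.max()

decisions = pd.DataFrame({
    "gender":   ["F", "F", "F", "F", "M", "M", "M", "M"],
    "approved": [0,   1,   0,   1,   1,   1,   1,   0],
})
print(f"Disparate impact ratio: {disparate_impact(decisions, 'gender', 'approved'):.2f}")
# Ratios well below 0.8 are a signal to investigate, not proof of bias.
```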

Apple and Goldman Sachs found that out the hard way in 2019. The Apple Card, issued in partnership with Goldman Sachs, was called out for gender discrimination. But after investigating the matter, the New York State Department of Financial Services found no evidence of unlawful discrimination against applicants. Apple now publishes a webpage explaining the criteria for card approval.

Preventing unexpected or problematic outcomes requires a lot of effort. First, start with a good model, said Stiefel. Then you have to feed it the right data – plenty of it. “For example, a ML algorithm for mortgage lending can be checked against a range of requirements that a human lender would be required to meet.” Lastly, ask peers to review the algorithm; a different set of eyes might spot issues you missed.
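
One way to carry out the requirements check Stiefel describes is sketched below, under assumptions: the rules and the predict interface are invented for illustration, not his method:

```python
# Hedged sketch: express requirements a human lender must meet as
# executable checks, then run the model's decisions through them.
def check_against_lending_rules(predict, applicants):
    """predict maps an applicant dict to 'approve' or 'decline'."""
    for applicant in applicants:
        decision = predict(applicant)
        # Illustrative rule: applicants meeting the documented income and
        # credit-score bar should not be auto-declined.
        if applicant["income"] >= 50_000 and applicant["credit_score"] >= 700:
            assert decision == "approve", f"Rule violated for {applicant}"

# Stand-in model for demonstration purposes.
toy_model = lambda a: "approve" if a["credit_score"] >= 650 else "decline"
check_against_lending_rules(toy_model, [
    {"income": 60_000, "credit_score": 720},
    {"income": 45_000, "credit_score": 600},
])
print("All rule checks passed.")
```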

For companies unsure how to approach ethics issues, Danzig recommends “consulting with experts who specialize in AI readiness and preparedness, preferably ones who sit outside the organization.”
