It wasn’t that long ago that a new field of ethics began. In the 1960s, bioethics came onto the scene as medical science advanced. There was great fear that doctors would “play God” with new and experimental treatments. The field grew in prominence soon after as genetic research advanced. Bioethicists discussed everything from “designer babies” to the source of fetal tissue for research, from transplants to physician-assisted suicide. But the bottom line was simple – as science advanced, so did our ethics.
As a society, we’re once again approaching a new ethical era – and this one might be harder to manage. With bioethics, a moral baseline was drawn for scientists, doctors and the like to balance decisions and arrive at an “ethical” conclusion. This meant that a collective had established moral boundaries, but we were counting on people with a moral conscience to respect them. Maybe I am naïve, but I still believe that the majority of us are ethical. Of course there are both “bad actors” and the uninformed, but we have sanctions, both professional and legal, to deal with both. So overall, for most people, bioethics served a good and useful purpose. Our doctors were guided toward moral and ethical treatment when it came to our care.
Now we find the world headed in another direction – the increased role of Artificial Intelligence (AI) in society in general. Coffee pots never had to make a moral choice before – a silly exaggeration to make a point, but not that far off from reality. The role of AI in the devices we use and in business is very different from the rise of robotics, which has been going on for a while. Yes, to be fair, the rise of robotics brought about ethical dilemmas that business had to address, but they were issues largely tied to finance. Should we lay off people and replace them with robots? At what pace? Do we have a responsibility to retrain the people we are laying off so they can pursue a new career?
But that was robotics – the simple automation of a task previously done by a person and now done by a machine. The ethical discussion around AI is much different, because with AI we are asking machines not simply to execute a repeatable physical or mechanical task, but to learn and make moral judgments. To extend the coffee pot analogy – automation or robotics lets you program your coffee pot to come on at 7 am, so when you walk into the kitchen the coffee is made. AI would let your coffee pot decide whether you can have one cup or two, decaf or espresso. It might even let your coffee pot talk to other systems and devices to make those decisions, and tell you that you can’t have that second cup today.
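To make the coffee-pot analogy concrete, here is a toy Python sketch – every input, threshold and rule in it is hypothetical, purely for illustration. The first function is plain automation: a fixed rule the machine executes without deciding anything. The second is the kind of judgment call we are starting to hand to AI, where the machine weighs inputs about you and renders a verdict.

```python
from datetime import time

# Plain automation: a fixed, transparent rule. The machine executes; it never decides.
def scheduled_brew(now: time) -> bool:
    """Brew at exactly 7 am, no judgment involved."""
    return now.hour == 7

# AI-style decision-making (hypothetical rules standing in for a learned model):
# the machine weighs inputs about you and renders a verdict on your behalf.
def ai_brew_decision(cups_today: int, heart_rate: int, hours_slept: float) -> str:
    if cups_today >= 2:
        return "denied: daily limit reached"
    if heart_rate > 100:
        return "decaf only"
    if hours_slept < 5:
        return "espresso approved"
    return "regular cup approved"

print(scheduled_brew(time(7, 0)))        # the rule fires at 7 am
print(ai_brew_decision(2, 80, 7.5))      # the machine says no
```

Notice where the ethics hide: not in the mechanics of brewing, but in who chose the limit of two cups, the heart-rate cutoff, and what the machine is allowed to know about your sleep in the first place.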
Do you remember Tay? Tay was released by Microsoft on Twitter on March 23, 2016, under the name TayTweets and the handle @TayandYou. Presented as “The AI with zero chill,” Tay started replying to other Twitter users – and yes, learning from them. Some users began tweeting politically incorrect phrases, teaching it inflammatory messages built around common internet themes. As a result, Tay began posting racist and sexually charged messages in response to other Twitter users. Artificial intelligence researcher Roman Yampolskiy commented that Tay’s misbehavior was understandable because it was mimicking the deliberately offensive behavior of other Twitter users, and Microsoft had not given the bot an understanding of inappropriate behavior. He compared the issue to IBM’s Watson, which had begun to use profanity after reading entries from the website Urban Dictionary. At first it was believed that many of Tay’s inflammatory tweets were a simple exploitation of its “repeat after me” capability. However, it is not publicly known whether “repeat after me” was a built-in feature or a learned response. And beyond that, not all of the inflammatory responses involved “repeat after me,” meaning Tay “learned” – or was taught – to be bad.
Microsoft quickly began deleting Tay’s tweets, which gave rise to the #JusticeforTay campaign protesting the alleged editing of Tay’s tweets. Within 16 hours of its release, and after almost 100,000 tweets, Microsoft suspended Tay’s account. Accidentally re-released a week later, Tay was quickly taken down again after being described as “artificial intelligence at its very worst - and it’s only the beginning.” Tay never came back.
Tay had no moral compass, and the question remains: as AI moves into more mainstream usage, where is the AI equivalent of bioethics? Where is the research to set standards for ethical decision-making by AIs? When we ask AI to make decisions, the criteria must be framed. It’s easy to ask AI to make decisions based purely on financial considerations, for example. But I don’t want my coffee pot to decide whether I can afford a second cup of coffee, or whether I should have a second cup of coffee - I just want my coffee pot to make coffee.
Now move AI into business and industry, and even public safety. Are we ready to accept AI decisions? Where are the academics driving AI ethics forward the way bioethicists once did? History tells us ethics tends to hang on the coattails of the latest technology, not lead from the front. As recent scandals underline, if innovation is to be remotely sustainable, we need to carefully consider the ethical implications of transformative technologies like AI.
AI ethics will need to be dealt with head-on by businesses if they are to thrive in the 21st century. Worldwide spending on cognitive systems is expected to mushroom to about $19 billion this year, an incredible 54 percent jump over 2017, according to research firm IDC. By 2020, Gartner predicts, AI will create 2.3 million new jobs worldwide while eliminating 1.8 million roles in the workplace.
My biggest concern is this: as machines increasingly replicate human behavior and deliver complex professional business judgments, how do we ensure fairness, justice and integrity in decision-making, as well as transparency? I’ll keep thinking about this and challenging others to think about it as well. But for now, where does a discussion of AI ethics sit in your company?
J Rollins is the co-founder and CEO of ETHIX360. At ETHIX360, our goal is simple: to provide an affordable, flexible and comprehensive answer to employee communication and case management on issues related to corporate ethics, code of conduct, fraud, bribery, EH&S and workplace violence. To learn more about ETHIX360, please visit www.ethix360.com, or follow us on Twitter @ethix360.