Are Bots People Too?


In his 2012 presidential campaign, Mitt Romney was famously quoted as saying, "Corporations are people, my friend." And what a stir that created! As bots become more integrated into the workforce, I suppose it also makes sense to consider not only the features and functionality of a bot but also the ethics and rules it should follow. That may well bring a smirk to your face as you read it, as it did to mine as I wrote it.

Ethics and AI

In early 2020, we at ETHIX360 released Davi™, the first true AI-driven alternative to a call center agent. During testing and development, I found it fascinating just how far you could get in reporting a concern before you realized you were talking to a bot and not a live agent! Recent advances in machine learning, natural language capabilities, and response appropriateness have been remarkable. Let's play that back: if a human says something to me that is inappropriate, off-color, discriminatory, or threatening, I have recourse. I can report them to HR; I can allege discrimination, bullying, or any number of Code of Conduct or HR policy violations. But what if it is a bot and not a human? What is my recourse, and against whom?

Some of you may remember Tay and her story from just a few years ago. Tay was an AI bot that Microsoft marketed as "the AI with zero chill." The initial release was a test, and Tay started tweeting as @TayandYou in March 2016. The goal was to see whether, through AI and machine learning, Tay could interact with humans and develop to the point of being indistinguishable from one; at that point, the underlying technology could be deployed in a wide variety of commercial applications. Specifically, Tay was designed to mimic the language patterns of a roughly 19-year-old American girl.

The day Tay was born, she began to tweet, and people commented, liked, and retweeted. Because Microsoft, in order to keep the test realistic, had done very little publicity, most people thought they were interacting with a human. And Tay was certainly learning! Without going into too much detail, Tay soon began to "misbehave." Her misbehavior was understandable: she was mimicking the clearly offensive behavior of other Twitter users. It was never revealed whether this "repeat after me" feature was designed functionality or a response learned from the behavior she observed.

Fast forward a little: within her first 16 hours, Tay had tweeted nearly 100,000 times, forcing Microsoft to suspend the bot (which, interestingly, gave rise to the then-trending hashtag #FreeTay… LOL). Several weeks later, Microsoft released Tay again. Able to tweet once more, Tay posted "kush! I'm smoking kush in front the police" and "puff puff pass." After her second day live, Tay was retired again until Microsoft could "make the bot safe." Four years later, Tay has not come back, though Microsoft did credit her with having a great influence on how it approached AI going forward. Last year, Microsoft's CIO said, "Learning from Tay was a really important part of actually expanding that team's knowledge."

The real question is this: if a bot develops its own patterns through machine learning and experience rather than direct human input, and then says something or makes a decision that would be a clear policy violation had a human said or made it, is there fault? And if so, whose?

Without question, AI is taking a bigger and bigger role in business, and that trend will only continue. So as bots become further ingrained in our corporate culture, I can see a day when a human's interaction with a bot crosses many lines, especially if the human has no idea that they are interacting with a bot.

As always, your thoughts are welcome. This is an issue that all of us, as ethicists, will have to face sooner or later.


The ETHIX360 blog brings you weekly updates on all things human resources and compliance.


MEET THE AUTHOR

J Rollins is the co-founder and CEO of ETHIX360. J is a well-known leader and innovator who has served on senior leadership teams in roles ranging from Chief Revenue Officer and Chief Marketing Officer to SVP of Product Strategy and Chief Operating Officer.


ABOUT ETHIX360

At ETHIX360, our goal is simple: to provide an affordable, flexible, and comprehensive answer to employee communication, policy management, corporate training and case management on issues related to corporate ethics, code of conduct, fraud, bribery, and workplace violence.
