Are Chatbots Doomed to Failure or Will They Eventually Conquer HR?

A brief introduction to the nature of chatbots which further expands into a discussion about typical technical flaws

Chatbots are the latest craze. We are seeing a gold rush of companies trying to become the first in their field to successfully deploy a bot. A report by Forrester Research reveals that 57% of companies worldwide are either already using chatbots or planning to do so. Chatbots are already used in marketing, eCommerce and even customer service. Yet isn’t it ironic that chatbots are also becoming extremely popular in HR? These invaders are doing everything to replace humans in HUMAN Resources. Software such as Slack and HipChat is used by millions of workers.

And guess what? HR management software CakeHR is up to date with the latest trends; its Slackbot integration helps manage employee leave more efficiently.

Marketers’ interest in chatbots is growing rapidly, and many plan to launch a branded chatbot in 2018

A chatbot is software that is taught to respond to defined questions or commands. While chatbots draw on some of the most complex areas of Artificial Intelligence (AI), the majority of them cannot brag about their intelligence. Most chatbots are built on decision-tree logic, meaning that the response the bot gives depends on specific keywords identified in the user’s input. Such a question-and-answer approach can be very beneficial in customer service.
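To make the decision-tree idea concrete, here is a minimal, purely illustrative Python sketch of keyword matching; the rules and canned answers are made up for the example and do not come from any real bot:

```python
# A minimal sketch of keyword-based, decision-tree style matching.
# All rules and replies here are invented for illustration only.

RULES = [
    ({"leave", "vacation", "holiday"}, "You have 12 days of annual leave remaining."),
    ({"payslip", "salary", "pay"}, "Your payslip for this month is available in the HR portal."),
    ({"weather"}, "I can only help with HR questions, sorry!"),
]

FALLBACK = "Oops! I didn't get that. Could you rephrase your question?"

def reply(user_input: str) -> str:
    """Return the first canned answer whose keywords appear in the input."""
    words = set(user_input.lower().split())
    for keywords, answer in RULES:
        if keywords & words:      # any keyword present -> follow that branch
            return answer
    return FALLBACK               # no branch matched

if __name__ == "__main__":
    print(reply("How many vacation days do I have left?"))
    print(reply("Can you tell me a joke?"))  # falls through to the fallback
```

Anything outside the scripted keywords immediately falls through to the fallback, which is exactly the behaviour behind many of the failures discussed below.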

Analytics, mobile web and now HR chatbots are on the rise

Gartner’s 2016 Hype Cycle for CRM, Customer Service, and Customer Engagement highlights that there is a rise in chatbots being deployed in customer service. But are chatbots going to become the next frontier for HR software? … it might take a while.

Despite all the hype around bots, they are currently unable to meet user expectations. In the following paragraphs we will discuss the main reasons why chatbots fail to achieve the desired results.

But before getting to the technical flaws in bot operation, we would like to draw attention to the mistakes made by company leaders and bot developers:

Chatbot hype

With all the pressure from the media, consulting companies and even executive leaders, developers are often left with no choice but to deliver – in this case, to design and deploy a chatbot. And how can you not follow the trend when there are already over 30,000 chatbots able to communicate with users and solve problems? Moreover, the pressure only keeps building; according to a 2016 survey conducted by Aspect, 49% of respondents said they would prefer customer service interactions via text, chat, or messaging, which opens all the doors to AI.

Banks, airlines and others are developing virtual-software assistants to interact like humans

Xiaoice, an extremely successful bot released by Microsoft in China, currently has 40 million users. To help you understand just how popular it is: Xiaoice averages 23 CPS (“conversations per session”), while the average CPS for other chatbots is between 1.5 and 2.5. It is also important to note that while we might think a chatbot’s main mission is to help people accomplish tasks, Xiaoice shows a different tendency – sometimes people simply want to talk.

Not having a clear vision

It is not uncommon to get carried away by the idea of being among the ‘cool’ companies that already have bots. But take a deep breath and ask yourself – what do you want to accomplish with your chatbot?

Lately, as AI chat technology has become prevalent in mainstream devices, like the iPhone’s Siri and Amazon’s Echo, the term “chatbot” has become an umbrella term

When the goal is clear, you will have to choose the right chatbot (1st, 2nd or 3rd generation) depending on the sophistication of the desired business outcome. Developer excitement often leads to bots that solve irrelevant problems or offer a poor experience. However, since the industry is relatively new, it is important to experience these failures and learn from them in order to create and deploy more relevant and capable bots.

Poor knowledge about chatbots

Leaders often lack knowledge about chatbots, which then leads to frustration. A simple way to avoid this is to improve your understanding of chatbots. That also means realizing that an AI chatbot requires much more than just building and deploying it. It is a misconception to think that nothing else needs to be done once a bot is deployed; chatbots require constant training, feeding with data and nurturing.

Now that these problems have been identified, let’s move on to the more technical issues related to chatbots:

1. Limited knowledge, or “Oops! I didn’t get that!”

A chatbot’s intelligence is fully owed to its developer, yet a chatbot cannot compete with the sophisticated mind of a human. Not to mention sarcasm, puns and reading between the lines, which are a complete ‘mission impossible’ for a chatbot. Bots with genuine linguistic and natural language learning capabilities are still quite uncommon.

A report by Publicis Groupe’s DigitasLBi found that after a negative experience with a chatbot, 74% of respondents would be unlikely to use the bot again.

Here’s an example of a weather bot being unable to respond to the user’s input (specifically, “thurs to sat”, which the bot does not recognize):

Artificial Intelligence (AI) is still not that accessible

2. Loss of memory

This is another major flaw. Unlike humans, bots cannot hold contextual information for long stretches of time, so they lose track of whatever the user has told them earlier in the conversation (a minimal illustration of carrying context between turns is sketched below).

Chatbots don’t understand context
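To illustrate the point, here is a toy Python sketch (not any real product) showing how even the most primitive “memory”, such as remembering the last city a user mentioned, is what allows a follow-up question to make sense at all:

```python
# A toy illustration of conversational context: without remembering the
# previous turn, the follow-up question below would be meaningless.

class WeatherBot:
    def __init__(self):
        self.last_city = None  # minimal "memory" of the conversation

    def handle(self, message: str) -> str:
        words = message.lower().replace("?", "").split()
        # Naive city detection: take the word that follows "in", if any.
        if "in" in words and words.index("in") + 1 < len(words):
            self.last_city = words[words.index("in") + 1]
        if self.last_city is None:
            return "Which city are you asking about?"
        return f"Forecast for {self.last_city.title()}: sunny."  # placeholder answer

bot = WeatherBot()
print(bot.handle("What's the weather in Riga?"))   # stores "riga" as context
print(bot.handle("And what about tomorrow?"))      # still answers, thanks to the memory
```

A bot without that stored context would have to ask the user to repeat information they have already given, which is exactly what frustrates users in practice.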

3. Lack of transparency

One of the things most successful bots have in common is letting users know that they are interacting with a robot, not with another human. A report by PwC shows that 27% of consumers did not even know whether they were talking to a human or a bot. Perhaps this is also one of the reasons why so many bots fail. Do not attempt to hide the fact that your company is using a chatbot. Make sure to set the right expectations beforehand so that users are more forgiving of the mistakes a bot might make.

Today, AI works in three ways: Assisted intelligence, Augmented intelligence, Autonomous intelligence

Reports suggest that by 2029 it may be hard to tell the difference between a human assistant and a robot. Chatbots might evolve tremendously and gain human-like language abilities, thus creating more meaningful conversations.

What is going on?

4. No communication with existing business systems

A chatbot that knows nothing about the company it represents creates additional work for the business. Moreover, users value their time, and such a lack of integration with existing systems will make them lose patience (a minimal sketch of connecting a bot to an existing HR system is shown below).

Here are a few common reasons for chatbots to fail
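As an illustration, here is a hedged Python sketch of what such an integration might look like; the endpoint, fields and token are hypothetical placeholders, not a real CakeHR or Slack API:

```python
# A sketch of wiring a bot's answer to an existing HR system instead of
# replying with canned guesses. URL, token and response fields are
# hypothetical placeholders for illustration only.

import requests

HR_API = "https://hr.example.com/api"   # placeholder URL
TOKEN = "secret-token"                  # placeholder credential

def leave_balance(employee_id: str) -> str:
    """Answer a leave question by asking the company's existing HR system."""
    resp = requests.get(
        f"{HR_API}/employees/{employee_id}/leave",
        headers={"Authorization": f"Bearer {TOKEN}"},
        timeout=5,
    )
    if resp.status_code != 200:
        # Fail gracefully and hand over to a human instead of guessing.
        return "I couldn't reach the HR system, let me forward you to a colleague."
    days = resp.json().get("remaining_days")
    return f"You have {days} days of leave remaining."
```

The point is not the specific call, but that the bot draws its answer from the systems the company already runs; otherwise every bot reply becomes a ticket a human has to resolve anyway.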

5. Voice is not supported

Having to type in everything makes the task more time consuming, so using a chatbot can be less efficient than using an app. Even more so if you have to re-type your question several times, hoping that the bot will finally get it!

The vast majority of chatbots aren’t actually intelligent

6. Handling too many things at once

While a bot can be useful for various tasks, developers should remember that sometimes less is more. It is best to narrow the area of focus and let the bot take care of one thing at a time.

Here’s an example of Poncho the weather bot sending users messages that are in no way related to the weather:

Despite the best of our intentions, sometimes chatbots fail to deliver

Speed and efficiency are the main factors driving competitiveness among companies, and unfortunately chatbots still lag behind traditional webpages and apps. Although chatbots may become a huge turning point for the future of HR, it is far too early to speak about humans being replaced by bots. So far, chatbots are only able to deal with simple, standard issues; anything more complicated is beyond their competence and creates more work for the company.

Users expect personalized, human-like assistance from bots – and that’s where bots fail, at least for now. There is a long way to go to meet this expectation: constant failing and learning from mistakes, and working harder to deliver clarity, cohesion and utility for the user.

Sintija.

Bonus Stuff 👾

Everybody speaks about how beneficial the implementation of the AI can be.

Well… Robots make mistakes too, and sometimes those mistakes can turn out to be quite costly for the company.

Here are 9 funny and shocking examples of why you shouldn’t rely on robots and leave important tasks in their hands.

#1. Supposedly kid-friendly robot goes crazy and injures a young boy

#2. Alexa plays porn instead of a children’s song

#3. Robot passport checker rejects Asian man’s application because “eyes are closed”

After attempting to renew his passport, Richard Lee, a 22-year-old man of Asian descent, was turned down by the New Zealand Department of Internal Affairs after its software claimed his eyes were closed in his picture.

#4. News broadcast triggers Amazon Alexa devices to purchase dollhouses

Earlier this year, a 6-year-old girl named Brooke Neitzel ordered a $170 Kidcraft dollhouse and four pounds of cookies through Amazon Alexa — simply by asking Alexa for the products.

#5. Amazon Alexa starts a party — and the neighbors call the cops

While Oliver Haberstroh, a resident of Hamburg, Germany, was out one night, his Alexa randomly began playing loud music at 1:50 a.m. After knocking on the door and ringing Haberstroh’s home to no answer, neighbors called the cops to shut down this “party.”

#6. Robots judge a beauty contest and don’t select women with dark skin

International beauty pageant Beauty.AI held an online beauty contest and used a machine as the judge. The algorithm didn’t favor women with dark skin.

#7. Microsoft’s Twitter chatbot turns anti-feminist and pro-Hitler



CakeHR is a one-stop shop for your HR management needs. With attention to user experience and software that is easy to use yet packed with features, we strive to make your HR management as easy as a piece of cake!

Written By

Sintija Valdez

CakeHR translator and content writer, born and raised in Latvia but currently living in the U.S. Passionate about preserving the “world” that each language encompasses. At present immersing herself in the art of HR: work life, management and other related topics.