Scaling Support for 150,000+ Customers Using AI

“Nobody phrases it this way, but I think that artificial intelligence is almost a humanities discipline. It’s really an attempt to understand human intelligence and human cognition.” —Sebastian Thrun

Thrun, who founded Google's famous self-driving car project and is the chairman of Udacity, is bang on. If you really think about it, all of AI is essentially a march towards building machines that can behave, process information, and think like humans (though not, of course, replace them).

Like every major industry, our SaaS world is adopting AI like never before. There are experiments across verticals that use AI to remove redundancy and make better use of available resources.

At the forefront of these experiments is customer support. It ticks all the boxes for AI because so much of the work is repeatable and predictable. For example, hundreds of customers reach out for support, and most of them ask repeated queries that an ‘intelligent’ bot can answer.

While that’s easy to say, the real challenge is to come up with the right strategies to streamline the process of training a bot so that support agents spend their time answering new, meaningful queries that come in.

In this article, we will share how we at Freshworks took up the task of training our bot and eventually implemented AI in our support processes. We will also share some of the key lessons we learnt along the way and where we see ourselves headed in the months to come.

Where It All Began

For us at Freshworks, AI didn’t happen overnight. It was a gradual journey towards implementing AI.

To give you some context, we have been on a rapid upward trajectory since our early days. Back in 2011, it was a one-member Support/Pre-Sales/Sales team (Vijay). When I joined in 2014, it was a six-member team.

From 2014 till today, there has been a 5X increase in our customer base. While that augurs well for the company, it also means that our customer support team and its functions had to evolve and get bigger and better to match the growth.

scaling support

If you look at the graph, you’ll see how our ticket volume was on a steady rise. One way to tackle this load was to hire aggressively, which we did: we scaled our support team from six members to 80. During this growth phase, we relied heavily on inbuilt automations like Skill-based Round Robin, Intelli Assign in Freshchat, and Call Idle agent in Freshcaller to eliminate as much manual effort as possible and route work more effectively.
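The routing idea behind an automation like skill-based round robin is simple: cycle through the pool of agents who have the required skill, so matching conversations are spread evenly. The skill and agent names below are made up for illustration; this is a minimal sketch, not Freshdesk's implementation:

```python
from collections import deque

def make_router(agents_by_skill):
    """Build a skill-based round-robin router over agent pools."""
    # One rotating queue of agents per skill.
    queues = {skill: deque(agents) for skill, agents in agents_by_skill.items()}

    def route(ticket_skill):
        """Assign the next agent in the matching pool, or None if no pool."""
        queue = queues.get(ticket_skill)
        if not queue:
            return None
        agent = queue[0]
        queue.rotate(-1)  # the next matching ticket goes to the next agent
        return agent

    return route
```

Each call for the same skill hands the ticket to the next agent in that pool, which keeps load even without any manual triage.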

With hiring on track and with all these automations in place, I felt we had the perfect support system setup. Back in 2016, if someone had suggested adopting AI, I would have felt it was irrelevant to us.

But the constant increase in ticket volume had us thinking. In 2017 when we were working on the projections for the next year, we asked ourselves — can we just rely on basic automations to scale the support team or is there a smarter way to address the problem? This pushed us to think harder.

Why AI? Why Not AI?

At one point, it became amply clear that we had to look at AI seriously. With that in mind, we decided to dig one level deeper and answer two important questions:

– Which channel do we start with?

– What type of tickets do we tackle?

To drive the point home, let’s just take a look at the data here.

channels of support

Interestingly, the data showed that the rise in ticket volume was not only due to the increase in our customer base but also partly due to the omnichannel model we had adopted. We had focused more on live support channels like chat and calls, which helped us get closer to our customers, but it also shot up our ticket volumes. A quick glance at our channel-source split made it clear that chat as a support channel was received very well by our customers and had slowly overtaken the traditional email channel over the last couple of years.

These insights also answered our first question: since our portal had a comparatively lower volume of tickets, we decided to start our ambitious AI experiment there.

We assessed that almost 50% of our tickets were of the “how to” category or as the industry calls it ‘L1’ tickets. Our goal here was to deflect as many L1 tickets as possible from our agents. We believed this would allow our agents to use their skill and time to solve difficult queries. Also, our customers would get quicker resolution for L1 tickets.


Launching Freddy

With these revelations in hand, we kickstarted ‘Freddy’, our AI engine experiment.

We started off with a controlled experiment to make sure we did not hamper the customer experience. The idea was to keep it simple, but even at this stage, we knew there was much to do. We understood that AI is only as good as the data it can learn from, so we spent the next couple of months building out the content: our knowledge base.

A Quick Detour

The challenge with our knowledge base was that it had been growing by leaps and bounds over the years. This meant redundant articles, information overload, an unplanned structure, and inconsistency. At the same time, our product team was in the process of launching our new Mint UI, which meant the knowledge base had to be rewritten anyway.

On further analysis of the L1 tickets, we found that most questions had simple, straightforward answers that were already available in our knowledge base articles. So we decided to start answering these simple questions in a quicker, more efficient way, and came up with the idea of bite-sized FAQs: short-form knowledge base articles.

After sweating through the KB, we brought the number of articles down from 426 to 200 and introduced structure by reducing the number of folders and categories. We also added 800 new FAQs. This exercise took us a good three months to pull off.


With the content falling into place, the next step was to nurture Freddy with data and lots of feedback. This step is crucial for the bot to get ‘accustomed’ to the real world.

“The key to building a great AI engine is continued learning.”

We followed the method of supervised learning, wherein our agents mapped each input to the desired output, so that the bot knows that if a customer asks query ‘x’, ‘y’ is the appropriate answer.
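In spirit, this mapping works like a nearest-question lookup: compare an incoming query against the trained queries and return the article mapped to the closest match. The queries, article titles, and threshold below are illustrative only, not Freddy's actual model:

```python
import math
from collections import Counter

def vectorize(text):
    """Bag-of-words term counts for a lowercase, whitespace-tokenized text."""
    return Counter(text.lower().split())

def cosine(a, b):
    """Cosine similarity between two term-count vectors."""
    dot = sum(a[t] * b[t] for t in set(a) & set(b))
    norm = (math.sqrt(sum(v * v for v in a.values()))
            * math.sqrt(sum(v * v for v in b.values())))
    return dot / norm if norm else 0.0

# Hypothetical agent-labelled (query -> article) training pairs.
training_pairs = {
    "how do i reset my password": "Resetting your password",
    "how to add a new agent": "Adding agents to your account",
    "where can i see my invoices": "Viewing billing and invoices",
}

def answer(query, pairs=training_pairs, threshold=0.3):
    """Return the article mapped to the most similar trained query, or None."""
    qv = vectorize(query)
    best_article, best_score = None, 0.0
    for trained_query, article in pairs.items():
        score = cosine(qv, vectorize(trained_query))
        if score > best_score:
            best_article, best_score = article, score
    return best_article if best_score >= threshold else None
```

A query phrased differently from the trained one (say, "how can i reset password") still lands on the password article, while an unrelated query falls below the threshold and gets no answer, which is exactly the gap the training loop described below had to close.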

Then we replaced the ‘Submit New Ticket’ form in our support portal with an ‘Ask Freddy’ option which allowed us to expose Freddy to real-world customer queries. Freddy started introducing itself as a bot and suggested relevant articles.

All this while, we were also closely tracking how customers interacted with Freddy. This was important because no amount of training can fully match real-life scenarios.

The Insights We Gained

#1 Customers were asking the same question in different ways.

The challenge here was to make sure the bot was able to give meaningful answers to these queries irrespective of the way in which the question was framed.

To make this possible, we came up with bot training version 1.0. We went through a long list of questions that the bot could not answer and mapped them to the relevant articles. All of this happened in an Excel sheet, which our machine learning team then processed to train the bot in the back end.

training the bot


The training proved fruitful, and we began noticing a spike in our deflection percentage. In the graph below, actual deflections are the instances where customers explicitly gave feedback that the bot’s answer was helpful. Total deflection adds the instances where customers did not provide any feedback but chose not to create tickets either. For a start, we were fine with an average deflection of 5%, but we knew it could get better.
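Given those two definitions, the deflection metrics can be computed from interaction logs roughly like this (the field names are hypothetical, not Freshworks' actual schema):

```python
def deflection_rates(sessions):
    """Compute actual and total deflection percentages.

    sessions: list of dicts, one per bot interaction, with keys
      'feedback'       -> 'helpful', 'not_helpful', or None (no feedback)
      'ticket_created' -> True if the customer went on to raise a ticket
    """
    n = len(sessions)
    # Actual deflection: customer explicitly said the answer helped.
    actual = sum(1 for s in sessions if s["feedback"] == "helpful")
    # Silent deflection: no feedback, but no ticket raised either.
    silent = sum(1 for s in sessions
                 if s["feedback"] is None and not s["ticket_created"])
    return {
        "actual_deflection_pct": 100.0 * actual / n,
        "total_deflection_pct": 100.0 * (actual + silent) / n,
    }
```

Counting the silent sessions separately is what makes the total deflection figure consistently higher than the actual one, as the 13% vs. 31% split later in the article shows.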

#2 It was painful to manually train the bot with an excel sheet

We took this feedback to the product team, who soon set out to find an easier way to train the bot. This led to version 2.0 of bot training: a ‘Train the Bot’ section within the product that listed all the questions the bot couldn’t answer. We could either map a question to an existing article or create a new article and map it.

manually training the bot

Version 2.0 made bot training simple and quick. An ‘Insights’ section showed how we were doing in terms of ticket deflection, which allowed us to see the results of our training immediately.


We added another 300 FAQs in no time and with constant nurturing, we were able to attain higher deflection rates.


At one point we achieved a 13% ticket deflection rate. To put it simply, this was equal to the productivity of two agents per month.

Note: 13% is the actual deflection, whereas the total deflection goes up to 31%.

With all this effort, we started seeing a dip in our portal tickets: from 11.4% in 2017 down to almost 5% in 2018. These promising results gave us the confidence to adopt Freddy for our chat channel as well.

What We Learned

Our learnings from the overall experiment are:

– You need to focus on content creation and nurturing the bot, as training is the key for the bot to understand your business.

– You need owners within the support team because they are the closest to the customers and can help build the right content.

– Measure metrics religiously in order to track progress and make sure you are putting the effort in the right areas.

Over time, with continuous training and an increase in the accuracy of answers, our customers realized that the best way to get simple questions answered instantly was to interact with the bot. Many of our customers have adopted the bot too. Our bot handles approximately 50K questions each month with an average deflection rate of 13%.

Other AI Experiments

The good results from our very first experiment gave us the confidence to explore new ideas.

Here’s a list of what’s keeping us busy and tinkering.

Agent Assist Bots

Our onboarding process is a long one. We have a structured training program that runs for four weeks, and even after that, new agents require a lot of on-the-job training and hand-holding in their initial months. To solve this, we are adopting Agent Assist Bots, wherein the bot can recommend the next best action to the agent.

Ticket Field Suggester

We manually classify all our tickets for reporting purposes. We are now experimenting with having Freddy consume all of our historic support data to understand how our agents have classified tickets in the past and predict the same for new tickets.

The idea is to make sure that our agents do not fill out forms. Ideally, when they receive a ticket, all the values should be auto-populated, and they should jump right in to solve the ticket.
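A rough sketch of how such a field suggester could work: score each known field value by how often its historical tickets shared words with the new ticket's subject, and suggest the best-scoring value. This is a toy illustration under made-up data, not Freddy's actual model:

```python
from collections import defaultdict

class FieldSuggester:
    """Suggest a ticket-field value from past agent classifications."""

    def __init__(self):
        # word -> field value -> co-occurrence count
        self.word_counts = defaultdict(lambda: defaultdict(int))

    def train(self, history):
        """history: iterable of (subject, field_value) pairs from old tickets."""
        for subject, value in history:
            for word in subject.lower().split():
                self.word_counts[word][value] += 1

    def suggest(self, subject):
        """Return the field value whose past tickets best match the subject."""
        scores = defaultdict(int)
        for word in subject.lower().split():
            for value, count in self.word_counts[word].items():
                scores[value] += count
        return max(scores, key=scores.get) if scores else None
```

In practice the suggested values would simply pre-populate the ticket form, leaving the agent to correct the occasional miss rather than fill every field by hand.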

Automatic Group Routing

We have lots of automation rules in place to route the incoming tickets to different groups.

However, we felt that every automation rule is machine-learnable and can be improved.

So, the thought is to experiment with Freddy and see if it can learn group associations from historical data and route tickets accordingly. We are hoping this will cut the number of automation rules in half, reducing administrative overhead.

Answer Bot

We are also experimenting with an email bot, where Freddy takes a shot at answering tickets that come in through email. This can be very useful for simple “how to” questions, where customers get answers immediately instead of waiting for an agent to get back to them.

In Conclusion

AI is not going to replace humans anytime soon. We view it as an enabler for our teams to be more productive.

AI cannot solve new problems or be ‘creative’ with solutions or interactions. The simple rule is this: anything that can be predicted is better left to the machines; anything that needs judgement should be handled by a human agent.

All said and done, this is just the start. In the near future, our ambition is to have our AI engine mimic the behavior of our best agents.

P.S. A huge shout out to the product, support, and marketing teams and various owners who helped build the content and train the bot at different stages of the experiment.

This article is co-authored by Vignesh Jeyaraman.
Main illustration: Siddharth Kandoth