https://landen.imgix.net/blog_pjUIGQgQQxNjuleV/assets/RKDmLisUPDVBnClS.jpg

This is the second of three articles dedicated to the challenges our technical team recently faced, which led AWS to award us the 2020 "Architecture of the year" trophy. You can read the first article here.

Introduction

We're writing a series of articles to explain how we handled one of our biggest technical challenges of 2020 at calldesk: scaling our platform from 1,000 to 5,000 parallel phone calls on AWS, in less than a month.

In the previous article, we described our architecture and how the team organized itself to tackle this challenge. You should now understand what makes calldesk's technology so powerful in terms of voice recognition and understanding.

It's time to give more details on the tools we used to make thousands of phone calls on our platform, the scenario we tested, the results, and the limits we hit. Keep reading until the end of the article: we'll also tell you about the solutions!

Pre-requisites & tools

Pre-requisites

The first pre-requisite was to build a voice agent that would be able to handle the phone calls made on the platform. We defined a simple discussion that would simulate a realistic phone call:

The objective was to test simple intentions (confirming or denying a question), combined intentions (confirming plus giving an entity in the same sentence), and different entities, to see whether the load had an impact on the voice agent's understanding.

This discussion would last for about 2 minutes, which was perfect to simulate a production phone call.
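A scripted dialogue like this can be represented as a simple data structure pairing each caller utterance with the expected understanding. The sketch below is purely illustrative (all names and the structure are hypothetical, not calldesk's actual API), but it shows how simple and combined intentions with entities could be scripted and checked:

```python
# Hypothetical sketch of a scripted load-test dialogue: each turn pairs the
# caller's recorded utterance with the intent and entities the voice agent
# is expected to detect.
SCENARIO = [
    # simple intention: confirm
    {"caller": "Yes", "intent": "confirm", "entities": {}},
    # combined intention: confirm + entity in the same sentence
    {"caller": "Yes, it's for tomorrow", "intent": "confirm",
     "entities": {"date": "tomorrow"}},
    # entity-only turn
    {"caller": "My number is 0612345678", "intent": "give_phone",
     "entities": {"phone": "0612345678"}},
]

def check_turn(turn, detected_intent, detected_entities):
    """Return True if the agent understood this turn as scripted."""
    return (detected_intent == turn["intent"]
            and detected_entities == turn["entities"])
```

Comparing what the agent detected under load against this script is then enough to tell whether understanding degrades as concurrency rises.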

The second pre-requisite was to have an audio recording of a caller going through this voice agent's script with every entity understood correctly. We only needed the caller's voice, so that we could play it back when making thousands of phone calls and let the voice agent answer.
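Replaying a recording at scale amounts to streaming the caller's audio to the platform in real-time-sized chunks. A minimal sketch, assuming typical telephony audio (8 kHz, 16-bit mono, sent in 20 ms frames of 320 bytes; these parameters are assumptions, not calldesk specifics):

```python
# 20 ms of 8 kHz, 16-bit mono audio = 8000 * 0.020 * 2 bytes
FRAME_BYTES = 320

def frames(audio: bytes, frame_bytes: int = FRAME_BYTES):
    """Split a recorded caller audio buffer into fixed-size frames,
    as they would be streamed to the platform during a simulated call."""
    for i in range(0, len(audio), frame_bytes):
        yield audio[i:i + frame_bytes]
```

Each simulated caller would then send one frame every 20 ms, so that thousands of replayed recordings behave like real phone calls from the platform's point of view.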

Finally, we needed to easily track several metrics: