Testimonials

Here’s why Intermedia turned to testRTC to proactively monitor their AnyMeeting web conferencing service

“testRTC enables us to monitor meeting performance from start to finish, with a focus on media quality. We get the best of everything with this platform.”

As 2019 came to a close, I had a chance to sit and chat with Ryan Cartmell, Director of Production System Administration at Intermedia®. Ryan manages a team responsible for monitoring and maintaining the production environment of Intermedia. His top priority is maintaining uptime and availability of Intermedia’s services.

In 2017, Intermedia acquired AnyMeeting®, a web conferencing platform based on WebRTC. Since then, Ryan and his team have been working towards building up the tools and experience needed to give them visibility into media quality and meeting performance.

Initially, these tools took care of two levels of monitoring:

  1. System resource performance monitoring was done by collecting and looking at server metrics
  2. Application level monitoring was incorporated by collecting certain application-specific metrics and aggregating them in an in-house monitoring platform, as well as in a cloud APM vendor’s service

This approach gave a decent view of the service and its performance, but it had its limits.

What you don’t know you don’t know

The way such monitoring systems work is by collecting metrics, data and logs from the various components in the system, as well as from the running applications. Once you have that data, you can configure alerts and dashboards based on what you know you are looking for.

If an issue was found in the AnyMeeting service, Ryan’s team would try to understand how the issue could be deduced from the logs and available information, creating new alerts based on that input. Doing so would ensure that the next time the same issue occurred, it would be caught and dealt with properly.

The challenge here is that you don’t know what you don’t know. You first need a problem to occur and someone to complain in order to figure out a rule to alert for it happening again. And you can never really reach complete coverage over potential failures.

This kept the Intermedia Operations team in a reactive position. What they wanted and needed was a way to proactively run and test the system to catch any issues in their environment.

Proactively handling WebRTC issues using testRTC


“With testRTC we are now able to get ahead of issues and not wait for customers to report issues.”

testRTC is an active monitoring test engine that allows Intermedia to proactively test its services. This gives Intermedia visibility into both the performance and the overall availability of the AnyMeeting platform.

Intermedia deployed multiple testRTC monitors, making sure its data centers are probed several times an hour. The monitors are active checks that create real web conferences on AnyMeeting, validating that a session works as expected from start to finish: logging in, communicating via voice and video, and screen sharing. If things go awry or expected media thresholds aren’t met, alerts are issued and Ryan’s team can investigate.

A screenshot from one of the testRTC monitors for the AnyMeeting service

These new testRTC monitors mapped and validated the complete user experience of the AnyMeeting service, something that wasn’t directly monitored before.
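
To make the idea concrete, here is a minimal sketch of what such an end-to-end active check does conceptually, written in Python with Selenium and Chrome’s fake media devices. The meeting URL, CSS selectors and thresholds are hypothetical placeholders, and Intermedia’s actual monitors run inside the testRTC platform rather than as a standalone script like this.

```python
# Illustrative sketch only: a bare-bones active check that joins a meeting page
# and verifies that remote media is actually playing. The URL and CSS selectors
# are hypothetical placeholders, not AnyMeeting's real ones.
import time

from selenium import webdriver
from selenium.webdriver.common.by import By

MEETING_URL = "https://example.com/join/monitor-room"  # hypothetical

options = webdriver.ChromeOptions()
# Feed Chrome synthetic camera/microphone input and auto-accept the permission
# prompts, so the probe can send audio and video without real devices.
options.add_argument("--use-fake-device-for-media-stream")
options.add_argument("--use-fake-ui-for-media-stream")

driver = webdriver.Chrome(options=options)
try:
    driver.get(MEETING_URL)
    driver.find_element(By.CSS_SELECTOR, "#join-button").click()  # hypothetical
    time.sleep(30)  # let the session run for a while

    # A remote <video> element that is really receiving media keeps advancing
    # its currentTime; if it stalls, treat the check as failed and alert.
    t1 = driver.execute_script(
        "return document.querySelector('video.remote').currentTime;")
    time.sleep(5)
    t2 = driver.execute_script(
        "return document.querySelector('video.remote').currentTime;")

    if t2 - t1 < 4.0:  # expected ~5 seconds of playback progress
        raise RuntimeError("Remote video appears frozen - raise an alert")
    print("Active check passed: remote media is flowing")
finally:
    driver.quit()
```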

Since its implementation, testRTC has helped Intermedia identify various issues in the AnyMeeting system – valuable information that Intermedia, as a market leader in unified communications, uses in its efforts to continually improve the performance and quality of its services.

The data that testRTC collects, including the screenshots it takes, makes it a lot easier to track down issues. Before testRTC, using performance metrics alone, it was really difficult to understand the overall impact on an end user. Now that end-user view is part of the analysis process.

Working with testRTC

Since starting to work with testRTC and installing its monitors, Intermedia has found the following advantages:

  1. The flexibility of the testRTC platform – this enables Intermedia to test all elements of the web platform service it offers
  2. Top tier support – the testRTC team was there to assist with any issues and questions Intermedia had
  3. Level of expertise – testRTC’s ability to help Intermedia work through the issues that it exposes

For Ryan, testRTC gives a level of comfort, knowing that tests are regularly being performed. And if a technical challenge does arise, the data available from testRTC enables Ryan and his team to triage the issue far more easily than they used to.

Intermedia and AnyMeeting are trademarks or registered trademarks of Intermedia.net, Inc. in the United States and/or other countries.


Preparing for WebRTC scale: What Honorlock did to validate their infrastructure

This week, I decided to have a conversation with Carl Scheller, VP of Engineering at Honorlock. I knew about the remote proctoring space and had seen a few clients in it work with testRTC. This was the first real opportunity I had to understand firsthand the scale-related requirements of this industry.

Honorlock is a remote proctoring vendor, providing online testing services to higher education institutions. Colleges and universities use Honorlock when they want to proctor test taking online. The purpose? Ensure academic integrity of the students who take online exams as part of their courses.

Here’s the thing – every student taking an exam online ends up using Honorlock’s service, which in turn connects to the student’s camera and screen to collect video feeds in real time, alongside the other checks Honorlock performs.

Proctoring with WebRTC and usage behavior

When taking an online exam, the proctoring platform connects to the student’s camera and screen using WebRTC. The media feeds then get sent and recorded on Honorlock’s servers in real time, passing along the way through AI-based checks that look for signs of cheating. The recordings themselves are stored for an additional period of time, to be used if and when a manual review is needed.

Offering such a secure testing environment requires media servers that record each session for as long as the exam takes place. If 100 students need to take an exam in a specific subject, they might all need to do so at the same scheduled time.

Exams have their own seasonality. There is a lot of usage during midterm and final exam periods, whereas January, when most schools are largely out of session, sees far less activity.

Online proctoring platforms need to make sure that each student taking an exam gets a high-quality experience, no matter how many other students are taking an exam at the same time. This frees students to worry about their test and not about the proctoring software.

Honorlock’s scaling challenge

Honorlock is aware of this seasonality and wanted to make sure that its application can handle the load across its various areas, especially given the expected growth of the business in the near future.

What Honorlock was looking for was an answer to one question: at what point does the application need to be improved in order to scale further?

Honorlock uses a third party video platform, having decided early on not to develop and deploy its own infrastructure, preferring instead to focus on the core experience for the students and the institutions it serves.

Rather than working from blind assumptions about how far that third party video platform could scale, Honorlock went ahead and conducted end-to-end stress testing, validating its assumptions and scale requirements.

When researching alternatives, it was important for Honorlock to be able to test the whole client-side UI, making sure the video infrastructure gets triggered the same way it would in real life. There was also a need to test everything end to end, not only scale test each component separately. This led Honorlock to testRTC.

“We’ve selected testRTC because it enabled us to stress test our service as closely as possible to the live usage we were expecting to see in our platform. Using testRTC significantly assisted us in validating our readiness to scale our business.”

Carl Scheller, VP of Engineering, Honorlock

Load testing with testRTC

Once Honorlock started using testRTC, it was immediately apparent that the testing requirements of Honorlock were met:

  • Honorlock made use of the powerful scripting available in testRTC, which made it possible to handle the complexity of the UX of a proctoring service
  • Once ready, being able to scale tests up to hundreds or thousands of concurrent browsers made the validation process easier, especially with testRTC’s built-in graphs focusing on high level aggregate media quality information
  • The global distribution of testRTC’s probes and their granular network controls enabled Honorlock to run stress tests with different machine configurations mapping to Honorlock’s target audience of students (illustrated in the sketch after this list)
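
Purely as an illustration (this is not testRTC’s configuration format), the probe distribution described in the last point can be sketched as a handful of profiles, each mapping an invented geography and network condition to a share of the simulated students:

```python
# Hypothetical sketch of how a stress test's probe mix might be described.
# Locations, network profiles and counts are invented for illustration only.
PROBE_PROFILES = [
    {"location": "us-east", "network": "wifi-good", "probes": 400},
    {"location": "us-west", "network": "dsl-5mbps", "probes": 300},
    {"location": "eu-west", "network": "4g-lossy", "probes": 200},
    {"location": "ap-south", "network": "3g", "probes": 100},
]

RAMP_UP_SECONDS = 600  # bring probes up gradually instead of all at once

total = sum(p["probes"] for p in PROBE_PROFILES)
print(f"Total concurrent simulated students: {total}")
for p in PROBE_PROFILES:
    rate = p["probes"] / RAMP_UP_SECONDS
    print(f"{p['location']:>8} on {p['network']:<10}: "
          f"{p['probes']} probes (~{rate:.1f} joins/second during ramp-up)")
```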

A big win for Honorlock was the level of support provided by testRTC throughout the process. testRTC played an important role in helping define the test plan, write the test scripts and modify them to match realistic scenarios for the Honorlock application.

Building a partnership

Working with testRTC has been useful for Honorlock. While the testRTC service offered a powerful and flexible solution, the real benefit was in the approach testRTC took to the work needed. From the get go, testRTC provided hands on assistance, making sure Honorlock could ramp up its testing quickly and validate its scale.

That ability to get hands on assistance, coupled with the self service capabilities found in testRTC, was exactly what Honorlock was looking for.

The validation itself helped Honorlock uncover issues both in its own platform and in the third party platform it uses. These issues are being taken care of. Using testRTC enabled Honorlock to make better technical decisions.

How Talkdesk support solves customer network issues faster with testRTC

“The adoption of testRTC Network Testing at Talkdesk was really high and positive”

Earlier this month, I sat down with João Gaspar, Global Director, Customer Service at Talkdesk, to understand more about how they are using the new testRTC Network Testing product. This is the first product testRTC has introduced that is designed specifically for support teams, so this was an interesting conversation for me.

Talkdesk is the fastest growing cloud contact center solution today. They have over 1,800 customers across more than 50 countries. João oversees the global support team at Talkdesk with the responsibility to ensure clients are happy by offering proactive and transparent support.

All of Talkdesk’s customers make use of WebRTC as part of their call center capabilities. When call center agents open the Talkdesk application, they can receive incoming calls or dial outgoing calls directly from their browser, making use of WebRTC.

WebRTC challenges for cloud contact centers

The main challenge with cloud communications in contact centers is finding the reason for user complaints about call quality. Troubleshooting such scenarios to get to the root cause is very hard, and in almost all cases Talkdesk has found that the cause is not its communication infrastructure, but rather issues between the customer’s agent and their firewall/proxy.

Issues range from the available bandwidth and quality of the agent’s internet connection, to problems with their headphones, the machine they are using and a slew of other areas.

Talkdesk’s perspective and proactive approach to support mean they’re engaging with clients not only when there are issues but throughout the entire cycle. For larger, enterprise deals, Talkdesk performs network assessments and provides recommendations to the client’s network team during the POC itself, rather than waiting for quality issues to crop up later on in the process.

To that end, Talkdesk used a set of tools, some of them running only on Internet Explorer and others testing network conditions but not necessarily focused on VoIP or on Talkdesk’s communication infrastructure. It wasn’t a user friendly approach, either for Talkdesk’s support teams or for the client’s agents and network team.

Talkdesk wanted a tool that provides quick analysis in a simple and accurate manner.

Adopting testRTC’s Network Testing product

Talkdesk decommissioned its existing analysis tools, preferring to use testRTC’s Network Testing product instead. With a click of a button, the client is now able to provide detailed analysis results to the Talkdesk support team within a minute. This enables faster response times and less frustration for Talkdesk and Talkdesk’s customers.

Today, all of the Talkdesk teams in the field, including support, networks and sales teams, make use of the testRTC Network Testing service. When a Talkdesk representative, at a client location or remotely, needs to understand the client’s network behavior, they send a link to the client, asking them to click the start button. testRTC Network Testing then conducts a set of network checks, immediately making the results available to Talkdesk’s support team.

testRTC’s backend dashboard for Talkdesk

The adoption of this product at Talkdesk has been high and positive, thanks to its simplicity and ease of use. For the teams in the field, it makes it easy to engage with potential clients who haven’t signed a contract yet, while investing very few resources.

The big win: turnaround time

testRTC’s Network Testing service doesn’t solve the client’s problems. There is no silver bullet there. Talkdesk support still needs to analyze the results, figure out the issues and work with the client on them.

testRTC’s Network Testing service enables Talkdesk to quickly understand if there are any blocking issues for clients and start engaging with clients sooner in the process. This dramatically reduces the turnaround time when issues are found, increasing transparency and keeping clients happier throughout the process.

Talkdesk Network Test service in action

On selecting testRTC

When Talkdesk searched for an alternative to their existing solution, they came to testRTC. They knew testRTC’s CEO through webinars and WebRTC related posts he published, independently and via testRTC, and wanted to see if they could engage with testRTC on such a solution.

“testRTC’s Network Testing service reduces the turnaround time for us in understanding and addressing potential network issues with clients”

testRTC made a strategic decision to create a new service offering for WebRTC support teams, working closely with Talkdesk on defining the requirements and developing the service.

Throughout the engagement, Talkdesk found testRTC to be very responsive and pragmatic, making the adjustments required by Talkdesk during and after the initial design and development stages.

What brought confidence to Talkdesk is the stance that testRTC took in the engagement, making it clear that for testRTC this is a partnership and not a one-off service. For Talkdesk, this was one of the most important aspects.

How Nexmo Integrated testRTC into their Test Automation for the Nexmo Voice API

Nexmo found in testRTC a solution to its end-to-end media testing challenges for the Nexmo Voice API product, connecting PSTN to WebRTC and vice versa.

Nexmo is one of the top CPaaS vendors out there providing cloud communication APIs to developers, enabling enterprises to add communication capabilities into their products and applications.

One of Nexmo’s capabilities involves connecting voice calls between regular phone numbers (PSTN) to browsers (using WebRTC) and vice versa. This capability is part of the Nexmo Voice API.

Testing @ Nexmo

Catering to so many customers with ongoing deployments to production means that Nexmo needs to take testing seriously. One of the things Nexmo did early on was introduce automated testing, using the pytest framework. Part of this automated testing is a set of regression tests – a huge number of tests that provide very high test coverage. Regression tests get executed whenever the Nexmo team has a new version to release, but they can also be launched “on demand” by any engineer, or triggered by the Jenkins CI pipeline upon a merge to a particular branch.

At Nexmo, development teams are in charge of the quality of their code, so there is no separate QA team.

In many cases, launching these regression tests first creates a new environment, where the Nexmo infrastructure is launched dynamically on cloud servers. This enables developers to run multiple test sessions in parallel, each in front of their own sandboxed environment, running a different version of the service.

When WebRTC was added to Nexmo Voice API, there was a need to extend the testing environment to include support for browsers and for WebRTC technology.

On Selecting testRTC

“When it comes to debugging, when something has gone wrong, testRTC is the first place we’d go look. There’s a lot of information there”

Jamie Chapman, Voice API Engineer at Nexmo

Nexmo needed WebRTC end-to-end tests as part of their regression test suite for the Nexmo Voice API platform. These end-to-end tests were around two main scenarios:

  1. Dialing a call from PSTN and answering it inside a browser using WebRTC
  2. Calling a PSTN number directly from a browser using WebRTC

In both cases, their client side SDKs get loaded by a web page and tested as part of the scenario.

Nexmo ended up using testRTC as their tool of choice because it got the job done and it was possible to integrate it into their existing testing framework:

  • The Python script used to define and execute a test scenario used testRTC’s API to dynamically create a test and run it on the testRTC platform
  • Environment variables specific to the dynamically created test environment got injected into the test
  • testRTC’s test result was then returned to the Python script to be recorded as part of the test execution result

This approach allowed Nexmo to integrate testRTC right into their current testing environment and test scripts.
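
A rough sketch of that integration pattern is shown below. The base URL, endpoint paths, payload fields and fixture are illustrative assumptions rather than testRTC’s or Nexmo’s documented APIs; the point is how a pytest test can delegate the WebRTC leg to a remotely executed browser test and assert on its result.

```python
# Illustrative integration sketch: a pytest test that asks a remote WebRTC
# testing service to run the browser side of a call and asserts on the result.
# The base URL, endpoint paths, payload fields and fixture below are
# hypothetical placeholders, not testRTC's or Nexmo's documented APIs.
import os
import time
from types import SimpleNamespace

import pytest
import requests

BASE_URL = "https://api.example-webrtc-testing.com/v1"  # hypothetical
HEADERS = {"Authorization": f"Bearer {os.environ.get('WEBRTC_TEST_API_KEY', 'demo')}"}


@pytest.fixture
def sandbox_env():
    # Stand-in for the dynamically created sandbox environment described above.
    return SimpleNamespace(host="sandbox.example.internal", app_id="demo-app")


def run_browser_leg(scenario: str, env_vars: dict) -> dict:
    """Create a run for the given scenario, injecting the sandbox environment's
    variables, then poll until the run reaches a terminal state."""
    resp = requests.post(
        f"{BASE_URL}/tests/{scenario}/runs",
        json={"env": env_vars},
        headers=HEADERS,
        timeout=30,
    )
    resp.raise_for_status()
    run_id = resp.json()["run_id"]

    while True:
        status = requests.get(
            f"{BASE_URL}/runs/{run_id}", headers=HEADERS, timeout=30
        ).json()
        if status["state"] in ("completed", "failed"):
            return status
        time.sleep(10)


def test_pstn_call_answered_in_browser(sandbox_env):
    """PSTN -> WebRTC: dial in from the PSTN side and answer inside a browser."""
    result = run_browser_leg(
        "pstn-to-webrtc-answer",
        {"API_HOST": sandbox_env.host, "APP_ID": sandbox_env.app_id},
    )
    assert result["state"] == "completed", result.get("failure_reason")
```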

Catering for Teams

The Voice API engineering team is a large one. All of its engineers have access to testRTC: they are able to launch regression tests that end up running testRTC scripts, and they use the testRTC dashboard to debug issues that are found.

The ability to have multiple users, each with their own credentials, running tests on demand when needed increased productivity without creating coordination issues across team members. The test results themselves are hosted in a single repository, accessible to the whole team, so all developers can easily share faulty test results with the team.

Debugging WebRTC Issues

Nexmo got regression testing for WebRTC off the ground by using testRTC. It does so by integrating with the testRTC APIs, scheduling and launching tests on demand from Nexmo’s own test environment. The tests today are geared towards providing end-to-end validation of media and connectivity between the PSTN network and WebRTC – validation that testRTC takes care of by default.

When things break, developers check the results collected by testRTC. As Jamie Chapman, Voice API engineer at Nexmo said: “When it comes to debugging, when something has gone wrong, testRTC is the first place we’d go look. There’s a lot of information there”.

testRTC takes screenshots during the test run, as well as upon failure. It collects browser logs and webrtc-internals dump files, visualizing it all and making it available for debugging purposes. This makes testRTC a valuable tool in the development process at Nexmo.

On the Horizon

Nexmo is currently making use of the basic scripting capabilities of testRTC. It has invested in the API integration, but there is more that can be done.

Nexmo is planning to increase its use of testRTC in several ways in the near future.

Monitoring Vidyo’s WebRTC Infrastructure End-to-End on a Global Scale

Vidyo has been using testRTC for the past two years to monitor its global WebRTC infrastructure end-to-end.

Vidyo offers high quality cloud video conferencing services to its impressive list of customers. There are three main product lines at Vidyo:

  1. VidyoConnect – a managed enterprise meeting solution for team collaboration
  2. VidyoEngage – a live video chat platform for call center customer engagement
  3. Vidyo.io – cloud APIs for embedded video communications in applications

All of these product lines share the same core video platform with WebRTC capabilities.

Vidyo caters to large enterprises with mission critical systems, so from the start it put in place a sophisticated system to monitor its infrastructure and service. That system is built on top of Splunk, where logs from across the system get aggregated and filtered, letting different types of alerts bubble up to the relevant teams within Vidyo via PagerDuty or email, depending on the severity of the alert.

End-to-End Monitoring

Early on, Vidyo saw the need for an end-to-end monitoring capability within its monitoring system: a way to simulate real customers from all over the globe and alert on any issues. This is why Vidyo selected testRTC.

testRTC enabled Vidyo to create a scenario where testRTC’s probes join calls on any of Vidyo’s cloud products: they authenticate with the service, join a meeting room, and send and receive voice and video data in real time.

While Vidyo already monitored its different machines and subsystems, adding testRTC meant it was capable of monitoring the service as experienced by real users, doing so with a predictable scenario and at scale.

Integrating with an existing monitoring system

Vidyo wanted to collect and push monitor run results from testRTC into its Splunk big data repository of machine data. Run results from testRTC are automatically inserted into Vidyo’s Splunk repository using testRTC’s webhook mechanism.

Collecting that data gave Vidyo the power to fine-tune the feedback it received from testRTC, deciding whether a failure is low priority (occurring randomly) or high priority (such as a failure occurring across monitors within a short period of time).
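
As a minimal sketch of what such glue code can look like, the snippet below accepts a webhook callback and forwards it to Splunk’s HTTP Event Collector (HEC). The incoming payload field names are assumptions rather than testRTC’s actual webhook schema; the HEC endpoint and authorization header follow Splunk’s standard interface.

```python
# Illustrative sketch: receive a monitor-run webhook and forward it to Splunk
# via the HTTP Event Collector (HEC). The incoming payload field names are
# assumptions, not testRTC's actual webhook schema.
import os

import requests
from flask import Flask, request

app = Flask(__name__)

SPLUNK_HEC_URL = "https://splunk.example.internal:8088/services/collector/event"
SPLUNK_HEC_TOKEN = os.environ["SPLUNK_HEC_TOKEN"]


@app.route("/monitor-webhook", methods=["POST"])
def monitor_webhook():
    run = request.get_json(force=True)

    # Forward the monitor run as a single Splunk event so it can be searched
    # and correlated with the rest of the machine data in the repository.
    event = {
        "sourcetype": "webrtc:monitor",
        "event": {
            "monitor": run.get("monitorName"),   # assumed field names
            "status": run.get("status"),
            "reason": run.get("failureReason"),
            "runUrl": run.get("runUrl"),
        },
    }
    resp = requests.post(
        SPLUNK_HEC_URL,
        json=event,
        headers={"Authorization": f"Splunk {SPLUNK_HEC_TOKEN}"},
        timeout=10,
    )
    resp.raise_for_status()
    return "", 204


if __name__ == "__main__":
    app.run(port=8080)
```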

A global infrastructure

Every data center that Vidyo operates gets its own special treatment: for each of the product lines hosted within that data center, Vidyo has a dedicated testRTC monitor running.

Each monitor makes use of probes running independently from different locations worldwide, which adds another layer of monitoring to the solution – testRTC is capable of checking different routes and behaviors, with the intent of catching network issues as early as possible as well.

Whenever a new data center opens up, or a new geography needs to be served, Vidyo is able to modify an existing monitor or create a new testRTC monitor to cover that location.

It just works

testRTC runs continuously and relentlessly, connecting calls via Vidyo’s platform. It does so in a predictable fashion, collecting all logs along the way. Vidyo has learned to see the value in such an approach – random failures can be debugged post mortem, finding their root causes and assisting in finding bugs and points of failure in the system.

“testRTC is a key component in Vidyo’s monitoring system. Digging down to the root cause is part of the work culture at Vidyo, and using testRTC we have eyes on the system 24×7 and can investigate issues thoroughly, ensuring operational excellence for the benefit of our customers.”

Nahum Cohen, SVP, Service and Operations @ Vidyo

Using testRTC, Vidyo is able to find issues with data centers, networks and its platform before customers notice them, giving it the time needed to resolve these issues.

Moving Forward with testRTC

Vidyo is in the process of introducing testRTC’s monitors to additional data centers it is currently operating, making sure its service is monitored end-to-end for all of its locations.