
How Blitzz shifted to self service WebRTC network testing with testRTC

The Blitzz Remote Support Software is a flexible, scalable, and affordable solution for SMBs, mid-market, and well-established enterprises. Blitzz is helping service teams safely and successfully transition to a remote environment. The three-step solution to powerful visual assistance requires no app download. Customer Care Agents can clearly see what’s happening and offer remote guidance to quickly resolve issues without having to travel to the customer.

Keyur Patel, CTO and Co-founder of Blitzz, describes how qualityRTC supports blitzz.co:

qualityRTC helped us focus on what we do best, and that’s providing an easy to use solution for remote video assistance over Browser; instead of having to worry about diagnosing different network issues. We really enjoy the direct support and quick communication the team at qualityRTC has given us in setting up and further developing our integration with them.

Here’s a better way to explain it:

Blitzz selected testRTC’s network testing product, qualityRTC. With it, they can quickly assist their clients when those clients encounter connectivity or quality issues with the service. We’ve been working closely with Blitzz in recent months to fit the measurements and tests to their needs. One of the things added through this partnership was our Video P2P test widget. I thought it would be interesting to understand what exactly Blitzz is doing with testRTC, so I reached out to Keyur Patel, CTO and Co-founder of Blitzz.

Understanding networks and devices

Blitzz aims to offer a simple experience. For that, it makes use of WebRTC and the fact that it is available in the browser. This makes things easy for end users, with no installation required. You can direct end users to a URL and it will open in their browser. The challenge, though, is that with the proliferation of devices out there, you don’t control which exact browser and device each user has.

On the customer’s side, the agents are almost always operating from inside secure and restricted networks. They also have limited bandwidth available to them. When deploying the service to a new customer, this question comes up time and time again:

Can the agents connect to the Blitzz infrastructure?

Are the required ports opened on the firewall by the IT team? Do they have enough bandwidth allocated to them?

Finding suitable solutions

Solving connectivity issues is an ongoing effort. To that end, Blitzz were using a combination of analysis tools available freely on the Internet. These included test.webrtc.org, speed testing and the network diagnosis tool available from the CPaaS provider they were using.

This worked out well, but it was not very efficient. The process would take a couple of meetings, going back and forth, to collect all of the information, troubleshoot, and retry until things were done right.

It wasn’t the best experience, asking customers to go through three different URLs to validate that they had full connectivity.

Using qualityRTC

Keyur was aware of testRTC and knew about qualityRTC. Once he tried the tool, he saw the potential of using it at Blitzz.

After a quick integration process, Blitzz were able to troubleshoot customer issues with ease. This enabled them to provide a sophisticated service instead of gluing together multiple alternatives.

qualityRTC shone once the pandemic hit and agents started working from home. Now the agents were running on very different networks, each in their own environment. While it was fine to ask an IT person to run multiple tools when onboarding to the service, doing that at scale increased the challenge.

By using qualityRTC, Blitzz was able to direct its customer base to a single tool. This allowed the agents to quickly and efficiently conduct speed tests and connectivity tests, especially at times when the quality of internet services was fluctuating.

Streamlining the process

“When we needed a solution for testing P2P connectivity based on our use case, the team at testRTC were able to quickly add features and deliver it in qualityRTC tool.”

Blitzz has embedded qualityRTC in its application so that most users can diagnose connectivity issues during a video session. This allows end users to self-test and diagnose issues by looking at the results on their own. If for some reason they still have to reach Blitzz Support, the support team can quickly review the log data collected by qualityRTC from the network test.

qualityRTC helped Blitzz increase customer satisfaction and reduce the friction in onboarding over several thousand customer care agents in a matter of days. This also reduced the number of support tickets as end users had all the information needed for resolving connectivity issues through the qualityRTC test portal.

Today, qualityRTC is an integral part of the Blitzz solution. This enables Blitzz to offer better customer service and experience while maintaining lower support costs.

Methodically testing and optimizing WebRTC applications at Vowel

“testRTC is the defacto standard for providing reliable WebRTC testing functionality.”

Paul Fisher, CTO and Co-Founder at Vowel

Many vendors these days are trying to make meetings more efficient. Vowel is a video conferencing tool that actually makes meetings better. It enables users to plan, host, transcribe, search, and share their meetings, right from inside the browser, making use of WebRTC.

Vowel has been using testRTC throughout 2020 and I thought it was a good time to talk with Paul Fisher, CTO and Co-Founder at Vowel. I wanted to understand from him how testRTC helps Vowel improve their product and its user experience.

Identifying bottlenecks and issues, scaling up for launch

One of the most important things in a video conferencing platform is the quality of the media. Before working with testRTC, Vowel lacked the visibility and the means to conduct systematic optimizations and improvements to their video platform. They got to know testRTC through an advisor in the company, whose first suggestion was to use it.

In the early days, Vowel used internal tools, but found that there was a lot of overhead in using them. They required a lot more work to run, manage, and extract results from. Rolling their own was too time-consuming and gave a lot less value.

Once testRTC was adopted by Vowel, things changed for the better. By setting up a set of initial regression tests that can be executed on demand and through continuous integration, Vowel were able to create a baseline of their implementation’s performance and quality. From there, they could figure out what required improvement and optimization, as well as understand whether a new release or modification caused an unwanted regression.

testRTC was instrumental in helping Vowel resolve multiple implementation issues: congestion control, optimizing resolution and bandwidth, debugging simulcast, and understanding and optimizing latency, round-trip time, and jitter.

Vowel were able to make huge strides in these areas by adopting testRTC. Prior to testRTC, Vowel had an ad-hoc approach, relying almost entirely on user feedback and metrics collected in Datadog and other tools. There was no real methodical way to analyze and pinpoint the source of issues.

With the adoption of testRTC, Vowel is now able to reproduce and diagnose issues, as well as validate that those issues have been resolved. Vowel created a suite of test scripts for these issues and for the scenarios they focus on. They now methodically run these tests as regression with each release.

“Using testRTC has had the most significant impact in improving the quality, stability and maintenance of our platform.”

This approach enabled them to catch regression bugs early, before potentially rolling out breaking changes to production – practically preventing them from happening.

Reliance on open source

Vowel was built on top of an open-source media server, but significant improvements, customizations and additional features were required for their platform. All these changes had to be rigorously tested, to see how they would affect behavior, stability and scalability.

On top of that, when using open-source media servers, there are still all the aspects and nuances of the infrastructure itself: the cloud platform, running across regions, how video layouts are handled, and so on.

One cannot just take an open source product or framework and expect it to work well without tweaking and tuning it.

Vowel made a number of significant modifications to lower-level media settings and behavior. testRTC was used to assess these changes — validating that there was a marked improvement across a range of scenarios, and ensuring that there were no unintentional, negative side effects or complications. Without the use of testRTC, it would be extremely difficult to run these validations — especially in a controlled, consistent, and replicable manner.

One approach is to roll out directly to production and try to figure out whether a change made an improvement or not. The challenge is that there is so much variability in the wild unrelated to the changes made that it is easy to lose sight of the true effect of changes, big and small.

“A lot of the power of testRTC is that we can really isolate changes, create a clean room validation and make sure that there’s a net positive effect.”

testRTC enabled Vowel to establish a number of critical metrics and set goals across them. Vowel then runs these recurring tests automatically in regression and extracts the metrics to validate that they don’t “fail”.

On using testRTC

“testRTC is the defacto standard for providing reliable WebRTC testing functionality.”

testRTC is used today at Vowel by most of the engineering team.

Test results are shared across the teams, and data is exported into the internal company wiki. Vowel’s engineers constantly add new test scripts; new Scrum stories commonly include the creation or improvement of test scripts in testRTC. Every release includes running a battery of tests on testRTC.

For Vowel, testRTC is extremely fast and easy to use.

It is easy to automate and spin up tests on demand with a click of a button, no matter the scale needed.

The fact that testRTC uses Nightwatch, an open source browser automation framework, makes it powerful in its ability to create and customize practically any scenario.

The test results are well organized in ways that make it easy to understand the status of the test, pinpoint issues and drill down to see the things needed in each layer and level.

How Workable uses testRTC for automated WebRTC testing

“testRTC had almost everything that we needed. The solution is easy to use, easy to integrate and it was easy to include in our CI environment.”
Eleni Karakizi, Senior QA Engineer at Workable

HR is one of the business functions getting a digital transformation makeover. Workable is a leading vendor in this market, helping businesses make the right hires, faster, with its talent acquisition software. Part of that enablement is Workable’s video interviews product, which makes use of WebRTC.

WebRTC test automation via testRTC

Workable has video interviews implemented as a feature of a much larger service. The teams at Workable believe in test automation and shy away from manual testing as much as possible.

When they started the implementation of their video interviews feature, they immediately searched for a WebRTC test automation solution. They found that WebRTC implementations are complicated systems: developers need to handle changing and unpredictable network environments, and WebRTC brings with it a lot of moving parts. Using testRTC meant reducing much of the development effort involved in setting up effective test automation for their environment.

Workable immediately created a set of tests and made them a part of their continuous integration processes, running as part of their nightly regression testing. This enabled Workable to find any regression issues quickly and effectively before they got to the hands of their users and without the need to invest expensive manual testing time.

At Workable, testRTC is accessed by developers and QA engineers who work on the video interviews platform to create tests, run them and analyze the results.

The testRTC experience

What Workable found in testRTC was an easy to use service.

Test scripts in testRTC are written using Nightwatch, a widely used open-source scripting framework for browser automation. Since a lot of the code developed with WebRTC is written in JavaScript, being able to write test automation in the same language using Nightwatch meant there was no learning-curve barrier to adopting testRTC.

The APIs used for the purpose of continuous integration were easy enough to pick up, making the integration process itself a breeze.

An important aspect of the testing conducted by Workable was the ability to configure and test various network conditions. The availability and ease of picking up different network profiles with testRTC made this possible.
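As an illustration, a nightly CI job that runs the same test script under several network profiles might summarize the outcome with a small helper like the one below. The profile labels and the "status" result field are hypothetical, for illustration only; they are not testRTC's actual identifiers.

```python
# Hypothetical sketch: summarize one test script's results across several
# network profiles. Profile labels and the "status" field are illustrative
# assumptions, not testRTC's actual identifiers.
PROFILES = ["unlimited", "poor 3G", "2% packet loss", "low bandwidth"]

def failing_profiles(results_by_profile):
    """Given {profile: result_dict}, return the profiles whose run failed,
    so a nightly CI job can report which network conditions break the
    video interviews feature."""
    return [
        profile
        for profile, result in results_by_profile.items()
        if result.get("status") != "completed"
    ]
```

A CI job could run the same test once per profile, collect the statuses, and flag only the network conditions that need attention.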

Here’s why Intermedia turned to testRTC to proactively monitor their AnyMeeting web conferencing service

“testRTC enables us to monitor meeting performance from start to finish, with a focus on media quality. We get the best of everything with this platform.”

As 2019 came to a close, I had a chance to sit and chat with Ryan Cartmell, Director of Production System Administration at Intermedia®. Ryan manages a team responsible for monitoring and maintaining the production environment of Intermedia. His top priority is maintaining uptime and availability of Intermedia’s services.

In 2017, Intermedia acquired AnyMeeting®, a web conferencing platform based on WebRTC. Since then, Ryan and his team have been working towards building up the tools and experience needed to give them visibility into media quality and meeting performance.

Initially, these tools took care of two levels of monitoring:

  1. System resource performance monitoring was done by collecting and looking at server metrics
  2. Application level monitoring was incorporated by collecting certain application specific metrics, aggregating them in an in-house monitoring platform as well as using a cloud APM vendor

This approach gave a decent view of the service and its performance, but it had its limits.

What you don’t know you don’t know

Such monitoring systems work by collecting metrics, data and logs from the various components in the system as well as the running applications. Once you have that data, you can configure alerts and dashboards based on what you know you are looking for.

If an issue was found in the AnyMeeting service, Ryan’s team would try to understand how the issue could be deduced from the logs and available information, creating new alerts based on that input. Doing so would ensure that the next time the same issue occurred, it would be caught and dealt with properly.

The challenge here is that you don’t know what you don’t know. You first need a problem to occur and someone to complain in order to figure out a rule to alert for it happening again. And you can never really reach complete coverage over potential failures.

This kept the Intermedia Operations team in a reactive position. What they wanted and needed was a way to proactively run and test the system to catch any issues in their environment.

Proactively handling WebRTC issues using testRTC


“With testRTC we are now able to get ahead of issues and not wait for customers to report issues.”

testRTC is an active monitoring test engine that allows Intermedia to proactively test its services. It gives Intermedia visibility into performance as well as the overall service availability of the AnyMeeting platform.

Intermedia deployed multiple testRTC monitors, making sure its data centers are probed multiple times an hour. The monitors are active checks that create real web conferences on AnyMeeting, validating that a session works as expected from start to finish: from logging in, through communicating via voice and video, to screen sharing. If things go awry or expected media thresholds aren’t met, alerts are issued and Ryan’s team can investigate.

A screenshot from one of the testRTC monitors for the AnyMeeting service

These new testRTC monitors mapped and validated the complete user experience of the AnyMeeting service, something that wasn’t directly monitored before.
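The kind of media-threshold check such an active monitor applies to each probe can be sketched in Python. The metric names and limits below are illustrative assumptions, not Intermedia's or testRTC's actual configuration.

```python
# Illustrative sketch of a per-probe media-threshold check. The metric
# names and limit values are assumptions for illustration, not the
# actual AnyMeeting monitor configuration.
THRESHOLDS = {
    "packet_loss_pct": 2.0,    # alert above 2% packet loss
    "round_trip_ms": 400,      # alert above 400 ms round-trip time
    "audio_bitrate_kbps": 20,  # alert below 20 kbps audio bitrate
}

def check_probe(metrics):
    """Return the list of threshold violations for one probe's metrics.
    An empty list means the probe's session met all media thresholds."""
    alerts = []
    if metrics.get("packet_loss_pct", 0) > THRESHOLDS["packet_loss_pct"]:
        alerts.append("packet loss above threshold")
    if metrics.get("round_trip_ms", 0) > THRESHOLDS["round_trip_ms"]:
        alerts.append("round trip time above threshold")
    if metrics.get("audio_bitrate_kbps", float("inf")) < THRESHOLDS["audio_bitrate_kbps"]:
        alerts.append("audio bitrate below threshold")
    return alerts
```

Each non-empty result would translate into an alert for the operations team to investigate.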

Since its implementation, testRTC has helped Intermedia identify various issues in the AnyMeeting system – valuable information that Intermedia, as a market leader in unified communications, uses in its efforts to continually improve the performance and quality of its services.

The data that testRTC collects, including the screenshots it takes, makes it a lot easier to track down issues. Before testRTC, using performance metrics alone, it was really difficult to understand the overall impact on an end user. Now it is part of the analysis process.

Working with testRTC

Since starting to work with testRTC and installing its monitors, Intermedia has found the following advantages of working with testRTC:

  1. The flexibility of the testRTC platform – this enables Intermedia to test all elements of the web platform service it offers
  2. Top tier support – the testRTC team was there to assist with any issues and questions Intermedia had
  3. Level of expertise – the ability of testRTC to help Intermedia work through issues that testRTC exposes

For Ryan, testRTC gives a level of comfort knowing that tests are regularly being performed. And, if a technical challenge does arise, the data available from testRTC enables Ryan and his team to triage the issue far more easily than before.

Intermedia and AnyMeeting are trademarks or registered trademarks of Intermedia.net, Inc. in the United States and/or other countries.


Preparing for WebRTC scale: What Honorlock did to validate their infrastructure

This week, I decided to have a conversation with Carl Scheller, VP of Engineering at Honorlock. I knew about the remote proctoring space, and have seen a few clients work with testRTC. This was the first real opportunity I had to understand firsthand about the scale-related requirements of this industry.

Honorlock is a remote proctoring vendor, providing online testing services to higher education institutions. Colleges and universities use Honorlock when they want to proctor test taking online. The purpose? Ensure academic integrity of the students who take online exams as part of their courses.

Here’s the thing – every student taking an exam online ends up using Honorlock’s service, which in turn connects to the user’s camera and screen to collect video feeds in real time along with other things that Honorlock does.

Proctoring with WebRTC and usage behavior

When taking an online exam, the proctoring platform connects to the student’s camera and screen using WebRTC. The media feeds then get sent and recorded on Honorlock’s servers in real time, passing some AI-related checks along the way to find any signs of cheating. The recordings themselves are stored for an additional period of time, to be used if and when needed for manual review.

Offering such a secure testing environment requires media servers to record each session for as long as the exam takes place. If 100 students need to take an exam in a specific subject, they might need to do so at the same scheduled time.

Exams have their own seasonality. There is a lot of usage during midterm and final exam periods, whereas January, when most schools are between terms, sees far less activity.

Online proctoring platforms need to make sure that each student taking an exam gets a high-quality experience, no matter how many other students are taking an exam at the same time. This frees students to worry about their test and not about the proctoring software.

Honorlock’s scaling challenge

Honorlock are aware of this seasonality. They wanted to make sure that their application could handle the load across its various areas, especially given the expected growth of their business in the near future.

What Honorlock were looking for was an answer to the question: at what point do they need to improve their application to scale further?

Honorlock is using a third party video platform. They have decided early on not to develop and deploy their own infrastructure, preferring to focus on the core experience for the students and the institutions using them.

Rather than blindly making working assumptions about the scale of the third-party video platform, Honorlock went ahead and conducted end-to-end stress testing, validating their assumptions and scale requirements.

When researching alternatives, it was important for Honorlock to be able to test the whole client-side UI, to make sure the video infrastructure gets triggered the same way it would in real life. There was also a need to test everything end to end, not only scale-test each component separately. This led Honorlock to testRTC.

“We’ve selected testRTC because it enabled us to stress test our service as closely to the live usage we were expecting to see in our platform. Using testRTC significantly assisted us in validating our readiness to scale our business.”

Carl Scheller, VP of Engineering, Honorlock

Load testing with testRTC

Once Honorlock started using testRTC, it was immediately apparent that the testing requirements of Honorlock were met:

  • Honorlock made use of the powerful scripting available in testRTC. This enabled them to handle the complexity of the UX of a proctoring service
  • Once ready, being able to scale tests up to hundreds or thousands of concurrent browsers made the validation process easier, especially with testRTC’s built-in graphs focusing on high-level aggregate media quality information
  • The global distribution of testRTC’s probes and their granular network controls enabled Honorlock to run stress tests with different machine configurations mapping to Honorlock’s target audience of students

A big win for Honorlock was the level of support provided by testRTC throughout the process. testRTC played an important role in helping define the test plan, writing the test scripts and modifying the scripts to work in a realistic scenario for the Honorlock application.

Building a partnership

Working with testRTC has been useful for Honorlock. While the testRTC service offered a powerful and flexible solution, the real benefit was in the approach testRTC took to the work needed. From the get-go, testRTC provided hands-on assistance, making sure Honorlock could ramp up their testing quickly and validate their scale.

That ability to get hands-on assistance, coupled with the self-service capabilities found in testRTC, was exactly what Honorlock was looking for.

The validation itself helped Honorlock uncover issues in both their own platform and the third-party platform they were using. These issues are being taken care of. Using testRTC enabled Honorlock to make better technical decisions.

How Talkdesk support solves customer network issues faster with testRTC

“The adoption of testRTC Network Testing at Talkdesk was really high and positive”

Earlier this month, I sat down with João Gaspar, Global Director, Customer Service at Talkdesk, to understand more about how they are using the new testRTC Network Testing product. This is the first product of ours designed for support teams, so this was an interesting conversation for me.

Talkdesk is the fastest growing cloud contact center solution today. They have over 1,800 customers across more than 50 countries. João oversees the global support team at Talkdesk with the responsibility to ensure clients are happy by offering proactive and transparent support.

All of Talkdesk’s customers make use of WebRTC as part of their call center capabilities. When call center agents open the Talkdesk application, they can receive incoming calls or dial outgoing calls directly from their browser using WebRTC.

WebRTC challenges for cloud contact centers

The main challenge with cloud communication in contact centers is finding the reason for user complaints about call quality. Troubleshooting such scenarios to get to the root cause is very hard, and in almost all cases, Talkdesk has found that the cause is not its communication infrastructure but rather issues between the customer’s agent and their firewall/proxy.

Issues range from available bandwidth and internet connection quality to problems with headphones, the machine being used, and a slew of other areas.

Talkdesk’s proactive approach to support means they’re engaging with clients not only when there are issues but throughout the entire cycle. For larger enterprise deals, Talkdesk performs network assessments and provides recommendations to the client’s network team during the POC itself, not waiting for quality issues to crop up later in the process.

To that end, Talkdesk used a set of multiple tools, some running only on Internet Explorer and others testing network conditions without necessarily focusing on VoIP or Talkdesk’s communication infrastructure. It wasn’t a user-friendly approach, either for Talkdesk’s support teams or for the client’s agents and network team.

Talkdesk wanted a tool that provides quick analysis in a simple and accurate manner.

Adopting testRTC’s Network Testing product

Talkdesk decommissioned its existing analysis tools, preferring to use testRTC’s Network Testing product instead. With a click of a button, the client is now able to provide detailed analysis results to the Talkdesk support team within a minute. This enables faster response times and less frustration for Talkdesk and Talkdesk’s customers.

Today, all of the Talkdesk teams in the field, including support, networks and sales teams, make use of the testRTC Network Testing service. When a Talkdesk representative, at a client location or remotely, needs to understand the client’s network behavior, they send a link to the client, asking them to click the start button. testRTC Network Testing then conducts a set of network checks, immediately making the results available to Talkdesk’s support.

testRTC’s backend dashboard for Talkdesk

The adoption of this product at Talkdesk was really high and positive, thanks to its simplicity and ease of use. For the teams in the field, it makes it easy to engage with potential clients who haven’t signed a contract yet while investing very few resources.

The big win: turnaround time

testRTC’s Network Testing service doesn’t solve the client’s problems. There is no silver bullet there. Talkdesk support still needs to analyze the results, figure out the issues and work with the client on them.

testRTC’s Network Testing service enables Talkdesk to quickly understand if there are any blocking issues for clients and start engaging with clients sooner in the process. This dramatically reduces the turnaround time when issues are found, increasing transparency and keeping clients happier throughout the process.

Talkdesk Network Test service in action

On selecting testRTC

When Talkdesk searched for an alternative to their existing solution, they came to testRTC. They knew testRTC’s CEO through webinars and WebRTC-related posts he published independently and via testRTC, and wanted to see if they could engage with testRTC on such a solution.

“testRTC’s Network Testing service reduces the turnaround time for us in understanding and addressing potential network issues with clients”

testRTC made a strategic decision to create a new service offering for WebRTC support teams, working closely with Talkdesk on defining the requirements and developing the service.

Throughout the engagement, Talkdesk found testRTC to be very responsive and pragmatic, making the adjustments required by Talkdesk during and after the initial design and development stages.

What brought confidence to Talkdesk is the stance that testRTC took in the engagement, making it clear that for testRTC this is a partnership and not a one-off service. For Talkdesk, this was one of the most important aspects.

How Nexmo Integrated testRTC into their Test Automation for the Nexmo Voice API

Nexmo found in testRTC a solution to solve its end-to-end media testing challenges for their Nexmo Voice API product, connecting PSTN to WebRTC and vice versa.

Nexmo is one of the top CPaaS vendors out there providing cloud communication APIs to developers, enabling enterprises to add communication capabilities into their products and applications.

One of Nexmo’s capabilities involves connecting voice calls between regular phone numbers (PSTN) to browsers (using WebRTC) and vice versa. This capability is part of the Nexmo Voice API.

Testing @ Nexmo

Catering to so many customers with ongoing deployments to production means that Nexmo needs to take testing seriously. One of the things Nexmo did early on was introduce automated testing, using the pytest framework. Part of this automated testing is a set of regression tests: a huge number of tests that provide very high test coverage. Regression tests get executed whenever the Nexmo team has a new version to release, but they can also be launched on demand by any engineer, or triggered by the Jenkins CI pipeline upon a merge to a particular branch.

At Nexmo, development teams are in charge of the quality of their code, so there is no separate QA team.

In many cases, launching these regression tests first creates a new environment, where the Nexmo infrastructure is launched dynamically on cloud servers. This enables developers to run multiple test sessions in parallel, each in front of their own sandboxed environment, running a different version of the service.

When WebRTC was added to Nexmo Voice API, there was a need to extend the testing environment to include support for browsers and for WebRTC technology.

On Selecting testRTC

“When it comes to debugging, when something has gone wrong, testRTC is the first place we’d go look. There’s a lot of information there”

Jamie Chapman, Voice API Engineer at Nexmo

Nexmo needed WebRTC end-to-end tests as part of their regression test suite for the Nexmo Voice API platform. These end-to-end tests were around two main scenarios:

  1. Dialing a call from PSTN and answering it inside a browser using WebRTC
  2. Calling a PSTN number directly from a browser using WebRTC

In both cases, their client side SDKs get loaded by a web page and tested as part of the scenario.

Nexmo ended up using testRTC as their tool of choice because it got the job done and it was possible to integrate it into their existing testing framework:

  • The python script used to define and execute a test scenario used testRTC’s API to dynamically create a test and run it on the testRTC platform
  • Environment variables specific to the dynamically created test environment got injected into the test
  • testRTC’s test result was then returned back to the python script to be recorded as part of the test execution result

This approach allowed Nexmo to integrate testRTC right into their current testing environment and test scripts.
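The three steps above can be sketched roughly as follows. This is a hedged illustration only: the endpoint paths, header names, and response fields are assumptions, not testRTC's documented API, and a real integration would follow testRTC's own API reference.

```python
# Illustrative sketch of wrapping a testRTC run in a Python test harness.
# The base URL, endpoint paths, header names, and response fields below
# are assumptions for illustration, not testRTC's documented API.
import json
import time
import urllib.request

BASE_URL = "https://api.testrtc.example/v1"  # hypothetical base URL

def _call(method, path, api_key, body=None):
    """Minimal stdlib HTTP helper (no third-party dependencies)."""
    data = json.dumps(body).encode() if body is not None else None
    req = urllib.request.Request(
        BASE_URL + path, data=data, method=method,
        headers={"apikey": api_key, "Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)

def start_test(api_key, test_id, env_vars):
    """Launch a stored test script, injecting variables specific to the
    dynamically created sandbox environment."""
    return _call("POST", f"/tests/{test_id}/run", api_key,
                 {"additionalRunOptions": env_vars})["runId"]

def wait_for_result(api_key, run_id, timeout=600, poll=15):
    """Poll until the test run finishes and return its result payload."""
    deadline = time.time() + timeout
    while time.time() < deadline:
        result = _call("GET", f"/testruns/{run_id}", api_key)
        if result.get("status") in ("completed", "failure"):
            return result
        time.sleep(poll)
    raise TimeoutError("test run did not finish in time")

def run_passed(result):
    """Map a run result onto the pass/fail value a pytest case asserts on."""
    return result.get("status") == "completed"
```

A pytest case would then call start_test with the sandbox environment's variables, wait for the result, and assert run_passed(result) so the outcome is recorded as part of the test execution result.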

Catering for Teams

The Voice API engineering team is a large one. All of these users have access to testRTC: they can launch regression tests that run testRTC scripts, and they use the testRTC dashboard to debug issues that are found.

The ability to have multiple users, each with their own credentials, running tests on demand increased productivity without creating coordination issues across team members. The test results themselves are hosted in a single repository accessible to the whole team, so developers can easily share faulty test results with one another.

Debugging WebRTC Issues

Nexmo got regression testing for WebRTC off the ground by using testRTC, integrating with the testRTC APIs and scheduling and launching tests on demand from Nexmo’s own test environment. The tests today are geared towards end-to-end validation of media and connectivity between the PSTN network and WebRTC, validation that testRTC takes care of by default.

When things break, developers check the results collected by testRTC. As Jamie Chapman, Voice API engineer at Nexmo said: “When it comes to debugging, when something has gone wrong, testRTC is the first place we’d go look. There’s a lot of information there”.

testRTC takes screenshots during the test run, as well as upon failure. It collects browser logs and webrtc-internals dump files, visualizing them and making them available for debugging. This makes testRTC a valuable tool in the development process at Nexmo.

On the Horizon

Nexmo is currently making use of the basic scripting capabilities of testRTC. It has invested in the API integration, but there is more that can be done.

Nexmo is planning to increase its use of testRTC in several ways in the near future.

Monitoring Vidyo’s WebRTC Infrastructure End-to-End on a Global Scale

Vidyo has been using testRTC for the past two years to monitor its global WebRTC infrastructure end-to-end.

Vidyo offers high quality cloud video conferencing services to its impressive list of customers. There are three main product lines at Vidyo:

  1. VidyoConnect – a managed enterprise meeting solution for team collaboration
  2. VidyoEngage – a live video chat platform for call center customer engagement
  3. Vidyo.io – cloud APIs for embedded video communications in applications

All of these product lines share the same core video platform with WebRTC capabilities.

Vidyo caters to large enterprises running mission-critical systems, so from the start it put in place a sophisticated system to monitor its infrastructure and service. That system is built on top of Splunk, where logs from across its systems get aggregated and filtered, letting different types of alerts bubble up to the relevant teams within Vidyo via PagerDuty or email, depending on the severity of the alert.

End-to-End Monitoring

Early on, Vidyo saw the need for an end-to-end monitoring capability within its monitoring system: a way to simulate real customers from all over the globe and alert on any issues. This is why Vidyo selected testRTC.

testRTC enabled Vidyo to create scenarios in which testRTC’s probes join calls on any of Vidyo’s cloud products: they authenticate with the service, join a meeting room, and send and receive voice and video data in real time.

While Vidyo already monitored its individual machines and subsystems, adding testRTC meant it could monitor the service as experienced by real users, with a predictable scenario and at scale.

Integrating with an existing monitoring system

Vidyo wanted to collect and push monitor run results from testRTC into its Splunk repository of machine data. Run results are automatically inserted into Vidyo’s Splunk repository using testRTC’s webhook mechanism.

Collecting that data gave Vidyo the power to fine-tune the feedback it received from testRTC, deciding whether a failure is low priority (occurring randomly) or high priority (such as a failure occurring across multiple monitors in a short period of time).
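A minimal sketch of that pipeline: a handler receives each webhook payload, classifies failures by how often they repeat, and forwards the raw result to Splunk via the HTTP Event Collector. The payload shape (a `status` field) and the repeat threshold are illustrative assumptions, not Vidyo’s actual logic:

```python
import json
import urllib.request

# Hypothetical Splunk HTTP Event Collector endpoint and token.
SPLUNK_HEC_URL = "https://splunk.example.com:8088/services/collector"
SPLUNK_TOKEN = "YOUR_HEC_TOKEN"

def classify_failure(recent_runs, threshold=3):
    """Decide alert priority from the most recent monitor runs: an
    isolated failure is low priority, while repeated failures across
    monitors in a short window are high priority. The threshold of 3
    is an arbitrary illustrative choice."""
    failures = [r for r in recent_runs if r.get("status") == "failure"]
    if not failures:
        return None
    return "high" if len(failures) >= threshold else "low"

def forward_to_splunk(run_result):
    """Push one webhook run result into Splunk via the Event Collector."""
    body = json.dumps({"event": run_result}).encode()
    req = urllib.request.Request(
        SPLUNK_HEC_URL,
        data=body,
        headers={"Authorization": f"Splunk {SPLUNK_TOKEN}"},
    )
    urllib.request.urlopen(req)
```

Keeping classification separate from forwarding lets the raw data land in Splunk unchanged, while the priority decision can be tuned over time.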

A global infrastructure

Every data center that Vidyo operates gets its own treatment: for each product line hosted within that data center, Vidyo runs a dedicated testRTC monitor.

Each monitor uses probes running independently from different locations worldwide, adding another layer of monitoring to the solution: testRTC checks different routes and behaviors, catching network issues as early as possible.

Whenever a new data center opens up, or a new geography needs to be served, Vidyo is able to modify an existing monitor or create a new testRTC monitor to cover that location.

It just works

testRTC runs continuously and relentlessly, connecting calls via Vidyo’s platform. It does so in a predictable fashion, collecting all logs along the way. Vidyo has learned to see the value in this approach: random failures can be debugged post mortem, finding their root causes and uncovering bugs and points of failure in the system.

“testRTC is a key component in Vidyo’s monitoring system. Digging down to the root cause is part of the work culture at Vidyo, and using testRTC we have eyes on the system 24×7 and can investigate issues thoroughly ensuring operational excellence for the benefit of our customers.”

Nahum Cohen, SVP, Service and Operations @ Vidyo

Using testRTC, Vidyo is able to find issues with data centers, networks, and its platform before customers notice them, giving it the time needed to resolve these issues.

Moving Forward with testRTC

Vidyo is in the process of introducing testRTC’s monitors to additional data centers it is currently operating, making sure its service is monitored end-to-end for all of its locations.

How Houseparty uses testRTC as an integral part of its WebRTC testing

Houseparty selected testRTC for its WebRTC infrastructure regression testing through continuous integration.

Houseparty is a mobile group video chat application, where groups of up to eight friends gather to chat in virtual rooms. With over half a billion video chats conducted using WebRTC, Houseparty is massive in its scale. What makes Houseparty interesting is that the majority of its user base is 24 years old or younger, spending upwards of 50 minutes a day inside the app.

Being a social platform, Houseparty has to innovate on a daily basis. This calls for frequent updates to its mobile applications and infrastructure: the backend infrastructure is updated daily, and the mobile apps are updated every two weeks on average.

In Search of a Regression Testing Tool

The developers at Houseparty wanted an early warning system in place: one that would tell the team if the changes being made were breaking the service for its users. Breakage here means a reduction in media quality or the inability to work under certain network conditions. What Houseparty’s developers were looking for was higher confidence in their version rollouts.

Houseparty already had stress testing capabilities in place, along with the ability to test its mobile applications. What it was missing was regression testing for the infrastructure. When a decision had to be made, Houseparty preferred to use testRTC’s testing service instead of building its own testing environment, saving months of effort from experienced WebRTC developers, with the understanding that a homegrown result would be inferior in terms of feature set and capabilities.

By selecting testRTC, Houseparty’s developers were able to improve their confidence level when upgrading the service for their millions of users.

“testRTC offered us the fastest and cheapest way to get the type of regression testing we needed, increasing the confidence we had when rolling out new releases of the Houseparty application”

Simplicity is Key

One of the key reasons for selecting testRTC was the simplicity of the service: from writing tests, through selecting machine configurations and defining test success criteria, down to integrating with the API.

The ability to pick different network configurations was really important to Houseparty. Using both the preconfigured settings and dynamically modified network conditions enabled Houseparty to quickly and efficiently understand how its application’s behavior is affected.

Furthermore, test expectations in testRTC, a mechanism that lets developers set success and failure criteria for a test based on the metrics collected, alert Houseparty’s developers when results need further analysis. This lets them spend more time on their application and less time drilling into results, trying to understand their meaning.
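Conceptually, a test expectation is just a threshold check over a metric collected during the run. A minimal Python model of such a check follows; the metric names and expression syntax are illustrative assumptions, not testRTC’s actual expectation syntax:

```python
import operator

# Comparison operators an expectation expression may use. Multi-character
# symbols are listed first so ">=" is not mistaken for ">".
OPS = {">=": operator.ge, "<=": operator.le, "==": operator.eq,
       ">": operator.gt, "<": operator.lt}

def check_expectation(metrics, expression):
    """Evaluate an expression like 'audio.in.bitrate >= 25000'
    against a dict of metrics collected during a test run."""
    for symbol in (">=", "<=", "==", ">", "<"):
        if symbol in expression:
            name, value = expression.split(symbol)
            return OPS[symbol](metrics[name.strip()], float(value))
    raise ValueError(f"unsupported expression: {expression}")
```

An expectation that evaluates to false would flag the run for further analysis, which is the behavior described above.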

When drilling into results is needed, the graphs displayed help the developers debug problems and resolve them faster.

Outgoing and incoming video bitrate for an 8-person room with simulcast enabled

Mobile Only and WebRTC

While Houseparty runs predominantly as a mobile application, its video processing makes use of WebRTC. Houseparty distinguishes between application testing and infrastructure testing. It already had tools in its arsenal for testing its mobile clients. What it was looking for was a way to test its video infrastructure, its media servers and TURN servers, and make sure they work as expected.

To that end, Houseparty is using a simple HTML page that can be used to create calls on its staging environment for the application. testRTC is then used to access that page and automate the testing process, simulating different network conditions while testing Houseparty’s video infrastructure.

Continuous Integration as a First Priority

Houseparty made the decision early on to use testRTC as part of its continuous integration environment. Using testRTC’s APIs, the developers at Houseparty quickly integrated the test scripts they had written in testRTC with their Jenkins automation server.

This allows Houseparty to run the testRTC regression tests every night. Integrating testRTC with Jenkins means that when tests complete, their results are reported back to Jenkins and from there sent to Slack, where developers are notified of potential failures.

Running testRTC tests nightly from Jenkins with integrated reporting and notifications
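The last leg of that reporting chain can be sketched briefly. Slack’s incoming-webhook payload format (a JSON body with a `text` field) is real; the result shape (`name`/`status` fields) is an assumption for illustration:

```python
import json
import urllib.request

def summarize_results(results):
    """Build a one-line Slack summary from a list of test results,
    each a dict with 'name' and 'status' keys (an assumed shape)."""
    failed = [r["name"] for r in results if r["status"] != "success"]
    if not failed:
        return f"Nightly WebRTC regression: all {len(results)} tests passed"
    return ("Nightly WebRTC regression: {} of {} tests FAILED: {}"
            .format(len(failed), len(results), ", ".join(failed)))

def notify_slack(webhook_url, results):
    """Post the summary to a Slack incoming webhook."""
    body = json.dumps({"text": summarize_results(results)}).encode()
    req = urllib.request.Request(
        webhook_url,
        data=body,
        headers={"Content-Type": "application/json"},
    )
    urllib.request.urlopen(req)
```

A Jenkins post-build step would call something like `notify_slack` with the collected results, so failures surface in chat without anyone watching the dashboard.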

Moving Forward with testRTC

For Houseparty, the work is not done yet. testRTC is used on a daily basis, running a battery of tests designed to check their infrastructure. Additional tests are planned for this suite.

Peer-to-peer testing and direct TURN server testing will be added in the near future, increasing the coverage of regression testing done over testRTC.

How Clique Migrated Smoothly to the Newest AWS EC2 C5 Instance

Needing to focus resources on core activities, Clique Communications turned to testRTC for stress testing and sizing.

Clique API provides web-based voice and text application programming interfaces. In its eight years of existence, Clique has grown to support over 20 million users across 150 countries, amounting to over 500 million minutes per month. Clique’s cloud services deliver multi-party voice that enterprises can embed into their own business processes.

What are their current goals?

  • Grow the business
  • Add features to improve customer service and experience
  • Offer value-added services

Adding WebRTC

Clique started working with WebRTC some 18 months ago, with customers starting to use it at the end of 2017.

Today, Clique supports all major browsers (Chrome, Firefox, Safari, and Edge), enabling its customers to offer uninterrupted interactions with their users. When users join a conference, they can do so over PSTN, directly from the browser, or from within a native application that utilizes Clique SDKs.

Making Use of testRTC

As with any other software product, Clique had to test and validate its solution. To that end, Clique had already been using tools for handling call volumes and regression, testing the application and the SDKs. The challenge was scalability and quality of service, which are essential when it comes to WebRTC support. Clique had a decision to make: either invest in building its own set of testing tools on top of open-source frameworks such as Selenium, or opt for a commercial alternative. It decided to go with the latter and use testRTC, preferring a third-party tool so as not to burden its engineering team.

Switching from AWS EC2 C4 to C5

Clique had previously used a standard instance from the AWS EC2 C4 series, but when the AWS EC2 C5 series came out, they wanted to take advantage of it: not only was it more economical, it also performed better. Furthermore, knowing Amazon would release newer generations of servers that would need to be tested again, Clique required this process to be repeatable.

The Action Plan

Since Clique is an embeddable service, they decided it was most strategic to have a third party develop an application using the Clique client SDK and APIs, and use that application as a test framework that could scale and exercise the performance of the platform. It was a wonderful opportunity to optimize their own resources and save on the instances they deploy on Amazon. An added bonus was having a third-party application that Clique’s customers and partners who are building applications can use as part of their development process.

Clique wrote their own test scripts in testRTC. The main test scenario for Clique was having a moderator create the conference and generate a URL for the other participants to join. Once they figured out how to do that with testRTC, the rest was a piece of cake.

Using testRTC to assist in sizing the instances on AWS had ancillary benefits beyond Clique’s core objectives. Clique tested the full life cycle of its solution: from developing yet another application with its SDKs and integrating its APIs, to continuous integration and DevOps. Along the way, Clique discovered bugs that were then fixed, optimized performance, and gained the confidence to run services at scale on next-generation architectures.

“testRTC provided Clique with a reliable and repeatable mechanism to measure our CPaaS performance… allowing Clique to save money, remain confident in our architectural choices and more importantly showcase our platform to customers with the integrity of an independent test system.”

Moving Forward – Continued use of testRTC

There are a lot of moving pieces in Clique’s solution: backend infrastructure, media servers, WebRTC gateways. Features such as recording can fit into various components within the architecture, and Clique is always looking for ways to optimize and simplify.

testRTC helps Clique evaluate if the assumptions made in their architecture are valid by determining bottlenecks and identifying places of consolidation.

In the future, Clique will be looking at testRTC’s monitoring capability as well as using testRTC to instantiate browsers in different locations.