How Blitzz shifted to self-service WebRTC network testing with testRTC

The Blitzz Remote Support Software is a flexible, scalable, and affordable solution for SMBs, mid-market, and well-established enterprises. Blitzz is helping service teams safely and successfully transition to a remote environment. The three-step solution to powerful visual assistance requires no app download. Customer Care Agents can clearly see what’s happening and offer remote guidance to quickly resolve issues without having to travel to the customer.

Keyur Patel, CTO and Co-founder of Blitzz, describes how qualityRTC supports them:

qualityRTC helped us focus on what we do best, and that’s providing an easy-to-use solution for remote video assistance over the browser, instead of having to worry about diagnosing different network issues. We really enjoy the direct support and quick communication the team at qualityRTC has given us in setting up and further developing our integration with them.

Here’s a better way to explain it:

Blitzz selected testRTC’s network testing product, qualityRTC. With it, Blitzz can quickly assist its clients when they encounter connectivity or quality issues with the service. We’ve been working closely with Blitzz in recent months to fit the measurements and tests to their needs. One of the things added through this partnership was our Video P2P test widget. I thought it would be interesting to understand what exactly Blitzz is doing with testRTC, so I reached out to Keyur Patel, CTO and Co-founder of Blitzz.

Understanding networks and devices

Blitzz aims to offer a simple experience. For that, it makes use of WebRTC and the fact that it is available in the browser. This makes things easy for end users, as there is no installation required. You can direct end users to a URL and it will open up in their browser. The challenge, though, is that with the proliferation of devices out there, you don’t control which exact browser and device each user has.

On the customer’s side, the agents are almost always operating from inside secure and restricted networks. They also have limited bandwidth available to them. When deploying the service to a new customer, this question comes up time and time again:

Can the agents connect to the Blitzz infrastructure?

Are the required ports opened on the firewall by the IT team? Do they have enough bandwidth allocated to them?

Finding suitable solutions

Solving connectivity issues is an ongoing effort. To that end, Blitzz was using a combination of analysis tools freely available on the Internet. These included speed testing and the network diagnosis tool available from the CPaaS provider they were using.

This worked out well, but it was not very efficient. The process would take a couple of meetings, going back and forth, to collect all of the information, troubleshoot, and retry until things were done right.

It wasn’t the best experience, asking customers to go through 3 different URLs to validate that they had full connectivity.

Using qualityRTC

Keyur was aware of testRTC and knew about qualityRTC. Once he tried the tool, he saw the potential of using it at Blitzz.

After a quick integration process, Blitzz was able to troubleshoot customer issues with ease. This enabled them to provide a sophisticated service instead of gluing together multiple alternatives.

qualityRTC shined once the pandemic hit and agents started working from home. Now the agents were running on very different networks, each in their own environment. While it was fine to ask an IT person to run multiple tools when onboarding to the service, doing that at scale was a much bigger challenge.

By using qualityRTC, Blitzz was able to direct its customer base to a single tool. This allowed the agents to quickly and efficiently conduct these speed and connectivity tests, especially at times when the quality of internet services was fluctuating.

Streamlining the process

“When we needed a solution for testing P2P connectivity based on our use case, the team at testRTC was able to quickly add features and deliver them in the qualityRTC tool.”

Blitzz has embedded qualityRTC in its application so that most of its users can diagnose connectivity issues during a video session. This allows end users to self-test and diagnose issues by looking at the results on their own. If for some reason they still have to reach Blitzz Support, the support team can quickly review the log data collected by qualityRTC from their network test.

qualityRTC helped Blitzz increase customer satisfaction and reduce the friction in onboarding over several thousand customer care agents in a matter of days. This also reduced the number of support tickets as end users had all the information needed for resolving connectivity issues through the qualityRTC test portal.

Today, qualityRTC is an integral part of the Blitzz solution. This enables Blitzz to offer better customer service and experience, while maintaining lower support costs.

Methodically testing and optimizing WebRTC applications at Vowel

“testRTC is the de facto standard for providing reliable WebRTC testing functionality.”

Paul Fisher, CTO and Co-Founder at Vowel

There are many vendors these days trying to make meetings more efficient. Vowel is a video conferencing tool that actually makes meetings better. It enables users to plan, host, transcribe, search, and share their meetings, right from inside the browser, making use of WebRTC.

Vowel has been using testRTC throughout 2020 and I thought it was a good time to talk with Paul Fisher, CTO and Co-Founder at Vowel. I wanted to understand from him how testRTC helps Vowel improve their product and its user experience.

Identifying bottlenecks and issues, scaling up for launch

One of the most important things in a video conferencing platform is the quality of the media. Before working with testRTC, Vowel lacked the visibility and the means to conduct systematic optimizations and improvements to their video platform. They got to know testRTC through a company advisor, whose first suggestion was to use it.

In the early days, Vowel used internal tools, but found that they carried a lot of overhead. They required a lot more work to run, manage, and extract results from. Rolling their own was too time-consuming and gave a lot less value.

Once Vowel adopted testRTC, things changed for the better. By setting up a set of initial regression tests that could be executed on demand and through continuous integration, Vowel was able to create a baseline of its implementation’s performance and quality. From there, they were able to figure out what required improvement and optimization, as well as understand whether a new release or modification caused an unwanted regression.

testRTC was instrumental in helping Vowel resolve multiple issues in its implementation: congestion control, optimizing resolution and bandwidth, debugging simulcast, and understanding and optimizing latency, round-trip time, and jitter.

Vowel made huge strides in these areas by adopting testRTC. Prior to testRTC, Vowel had an ad-hoc approach, relying almost entirely on user feedback and metrics collected in Datadog and other tools. There was no methodical way to analyze and pinpoint the source of issues.

With the adoption of testRTC, Vowel is now able to reproduce and diagnose issues, as well as validate that they have been resolved. Vowel created a suite of test scripts for these issues and for the scenarios they focus on. They now run these tests methodically as regression with each release.

“Using testRTC has had the most significant impact in improving the quality, stability and maintenance of our platform.”

This approach lets them catch regression bugs early, before breaking changes are rolled out to production – practically preventing them from happening.

Reliance on open source

Vowel was built on top of an open-source media server, but significant improvements, customizations, and additional features were required for their platform. All these changes had to be rigorously tested to see how they would affect behavior, stability, and scalability.

On top of that, when using open-source media servers, there are still all the aspects and nuances of the infrastructure itself: the cloud platform, running across regions, video layouts, and so on.

One cannot just take an open source product or framework and expect it to work well without tweaking and tuning it.

Vowel made a number of significant modifications to lower-level media settings and behavior. testRTC was used to assess these changes — validating that there was a marked improvement across a range of scenarios, and ensuring that there were no unintentional, negative side effects or complications. Without the use of testRTC, it would be extremely difficult to run these validations — especially in a controlled, consistent, and replicable manner.

One approach is to roll out directly to production and try to figure out whether a change made an improvement or not. The challenge there is that there is so much variability in the wild, unrelated to the changes made, that it is easy to lose sight of the true effects of changes – big and small.

“A lot of the power of testRTC is that we can really isolate changes, create a clean room validation and make sure that there’s a net positive effect.”

testRTC enabled Vowel to establish a number of critical metrics and set goals across them. Vowel then runs these recurring tests automatically in regression and extracts the metrics to validate that they don’t “fail”.
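Conceptually, such a metric gate is a simple threshold check over the values a test run produces. Here is a minimal sketch in JavaScript; the metric names and limits are illustrative assumptions, not Vowel’s actual values:

```javascript
// Minimal sketch of a metrics "gate" for a recurring regression run.
// Metric names and thresholds below are illustrative assumptions only.
const thresholds = {
  avgBitrateKbps:  { min: 300 }, // average video bitrate must not drop below this
  packetLossPct:   { max: 2 },   // loss above this fails the run
  roundTripTimeMs: { max: 250 }, // latency budget
  jitterMs:        { max: 30 },
};

// Returns the list of failed metrics; an empty list means the run passes.
function checkRun(metrics) {
  const failures = [];
  for (const [name, limit] of Object.entries(thresholds)) {
    const value = metrics[name];
    if (limit.min !== undefined && value < limit.min) failures.push(`${name}=${value} < ${limit.min}`);
    if (limit.max !== undefined && value > limit.max) failures.push(`${name}=${value} > ${limit.max}`);
  }
  return failures;
}
```

A CI job would run the test suite, feed the aggregated metrics through `checkRun`, and fail the build whenever the returned list is non-empty.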

On using testRTC

“testRTC is the de facto standard for providing reliable WebRTC testing functionality.”

testRTC is used today at Vowel by most of the engineering team.

Test results are shared across the teams, and data is exported into the internal company wiki. Vowel’s engineers constantly add new test scripts. New Scrum stories commonly include the creation or improvement of test scripts in testRTC, and every release includes running a battery of tests on testRTC.

For Vowel, testRTC is extremely fast and easy to use.

It is easy to automate and spin up tests on demand with just the click of a button, no matter the scale needed.

The fact that testRTC uses Nightwatch, an open source browser automation framework, makes it powerful in its ability to create and customize practically any scenario.

The test results are well organized in ways that make it easy to understand the status of a test, pinpoint issues, and drill down into each layer and level as needed.

How Workable uses testRTC for automated WebRTC testing

“testRTC had almost everything that we needed. The solution is easy to use, easy to integrate, and it was easy to include in our CI environment.”
Eleni Karakizi, Senior QA Engineer at Workable

HR is one of the business functions getting a digital transformation makeover. Workable is a leading vendor in this market, helping businesses make the right hires, faster, with its talent acquisition software. Part of that enablement is Workable’s video interviews product, which makes use of WebRTC.

WebRTC test automation via testRTC

Workable has video interviews implemented as a feature of a much larger service. The teams at Workable believe in test automation and shy away from manual testing as much as possible.

When they started implementing their video interviews feature, they immediately searched for a WebRTC test automation solution. They found that WebRTC implementations are complicated systems: developers need to handle changing and unpredictable network environments, and WebRTC brings with it a lot of moving parts. Using testRTC meant reducing much of the development effort involved in setting up effective test automation for their environment.

Workable immediately created a set of tests and made them a part of their continuous integration processes, running as part of their nightly regression testing. This enabled Workable to find any regression issues quickly and effectively before they got to the hands of their users and without the need to invest expensive manual testing time.

At Workable, testRTC is accessed by developers and QA engineers who work on the video interviews platform to create tests, run them and analyze the results.

The testRTC experience

What Workable found in testRTC was an easy to use service.

Test scripts in testRTC are written using Nightwatch, a widely used open-source scripting framework for browser automation. Since a lot of code developed with WebRTC is written in JavaScript, being able to write test automation in the same language using Nightwatch meant there was no learning-curve barrier to adopting testRTC.
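A Nightwatch test script is just a plain JavaScript module that chains browser commands. The sketch below shows the general shape of such a test; the URL and CSS selectors are hypothetical placeholders, not Workable’s actual ones:

```javascript
// Hypothetical Nightwatch-style test: a candidate joins a video interview.
// The URL and element selectors are illustrative placeholders only.
const videoInterviewTest = {
  'candidate can join a video interview': function (browser) {
    browser
      .url('https://example.com/interview/demo')           // hypothetical interview URL
      .waitForElementVisible('#join-button', 5000)          // page has loaded
      .click('#join-button')                                // join the session
      .waitForElementVisible('video#local-preview', 10000)  // local camera is up
      .assert.elementPresent('video#remote')                // remote side connected
      .end();
  },
};

module.exports = videoInterviewTest;
```

Because the script is ordinary JavaScript, the same developers who build the WebRTC feature can write and maintain the tests.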

The APIs used for the purpose of continuous integration were easy enough to pick up, making the integration process itself a breeze.

An important aspect of the testing conducted by Workable was the ability to configure and test various network conditions. The availability and ease of picking up different network profiles with testRTC made this possible.

Here’s why Intermedia turned to testRTC to proactively monitor their AnyMeeting web conferencing service

“testRTC enables us to monitor meeting performance from start to finish, with a focus on media quality. We get the best of everything with this platform.”

As 2019 came to a close, I had a chance to sit and chat with Ryan Cartmell, Director of Production System Administration at Intermedia®. Ryan manages a team responsible for monitoring and maintaining the production environment of Intermedia. His top priority is maintaining uptime and availability of Intermedia’s services.

In 2017, Intermedia acquired AnyMeeting®, a web conferencing platform based on WebRTC. Since then, Ryan and his team have been working towards building up the tools and experience needed to give them visibility into media quality and meeting performance.

Initially, these tools took care of two levels of monitoring:

  1. System resource performance monitoring was done by collecting and looking at server metrics
  2. Application level monitoring was incorporated by collecting certain application specific metrics, aggregating them in an in-house monitoring platform as well as using a cloud APM vendor

This approach gave a decent view of the service and its performance, but it had its limits.

What you don’t know you don’t know

The way such monitoring systems work is by collecting metrics, data, and logs from the various components in the system as well as the running applications. Once you have that data, you can configure alerts and dashboards based on what you know you are looking for.

If an issue was found in the AnyMeeting service, Ryan’s team would try to understand how the issue could be deduced from the logs and available information, creating new alerts based on that input. Doing so would ensure that the next time the same issue occurred, it would be caught and dealt with properly.

The challenge here is that you don’t know what you don’t know. You first need a problem to occur and someone to complain in order to figure out a rule to alert for it happening again. And you can never really reach complete coverage over potential failures.

This kept the Intermedia Operations team in a reactive position. What they wanted and needed was a way to proactively run and test the system to catch any issues in their environment.

Proactively handling WebRTC issues using testRTC

“With testRTC we are now able to get ahead of issues and not wait for customers to report issues.”

testRTC is an active monitoring test engine that allows Intermedia to proactively test its services. This gives Intermedia visibility into the performance as well as the overall service availability of the AnyMeeting platform.

Intermedia deployed multiple testRTC monitors, making sure its data centers are probed multiple times an hour. The monitors are active checks that create real web conferences on AnyMeeting, validating that a session works as expected from start to finish – from logging in, through communicating via voice and video, to screen sharing. If things go awry or expected media thresholds aren’t met, alerts are issued, and Ryan’s team can investigate.
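At its core, such a media check compares the statistics WebRTC exposes (for example, the packetsReceived and packetsLost counters from an inbound-rtp entry of RTCPeerConnection.getStats()) against expected thresholds. A minimal sketch, with an assumed loss threshold:

```javascript
// Sketch of a media-quality check a monitor might run at the end of a session.
// Derives packet loss from the cumulative counters WebRTC getStats() reports.
// The 3% default threshold is an assumed value, for illustration only.
function mediaAlert(stats, maxLossPct = 3) {
  const total = stats.packetsReceived + stats.packetsLost;
  if (total === 0) return { alert: true, reason: 'no media received' };
  const lossPct = (stats.packetsLost / total) * 100;
  if (lossPct > maxLossPct) {
    return { alert: true, reason: `packet loss ${lossPct.toFixed(1)}% exceeds ${maxLossPct}%` };
  }
  return { alert: false };
}
```

A monitor run that produces an alert would then page the operations team with the reason attached, which is exactly the "expected media thresholds aren't met" path described above.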

A screenshot from one of the testRTC monitors for the AnyMeeting service

These new testRTC monitors mapped and validated the complete user experience of the AnyMeeting service, something that wasn’t directly monitored before.

Since its implementation, testRTC has helped Intermedia identify various issues in the AnyMeeting system – valuable information that Intermedia, as a market leader in unified communications, uses in its efforts to continually improve the performance and quality of its services.

The data that testRTC collects, including the screenshots it takes, makes it a lot easier to track down issues. Before testRTC, using performance metrics alone, it was really difficult to understand the overall impact on an end user. Now it is part of the analysis process.

Working with testRTC

Since starting to work with testRTC and installing its monitors, Intermedia has found the following advantages of working with testRTC:

  1. The flexibility of the testRTC platform – this enables Intermedia to test all elements of the web platform service it offers
  2. Top tier support – testRTC team was there to assist with any issues and questions Intermedia had
  3. Level of expertise – the ability of testRTC to help Intermedia work through issues that testRTC exposes

For Ryan, testRTC gives a level of comfort, knowing that tests are regularly being performed. And if a technical challenge does arise, the data available from testRTC enables Ryan and his team to triage the issue far more easily than before.

Intermedia and AnyMeeting are trademarks or registered trademarks of Intermedia.net, Inc. in the United States and/or other countries.


Preparing for WebRTC scale: What Honorlock did to validate their infrastructure

This week, I decided to have a conversation with Carl Scheller, VP of Engineering at Honorlock. I knew about the remote proctoring space and had seen a few clients work with testRTC, but this was the first real opportunity I had to understand firsthand the scale-related requirements of this industry.

Honorlock is a remote proctoring vendor, providing online testing services to higher education institutions. Colleges and universities use Honorlock when they want to proctor test taking online. The purpose? Ensure academic integrity of the students who take online exams as part of their courses.

Here’s the thing – every student taking an exam online ends up using Honorlock’s service, which in turn connects to the user’s camera and screen to collect video feeds in real time along with other things that Honorlock does.

Proctoring with WebRTC and usage behavior

When taking an online exam, the proctoring platform connects to the student’s camera and screen using WebRTC. The media feeds then get sent to and recorded on Honorlock’s servers in real time, along the way passing AI-related checks that look for any signs of cheating. The recordings themselves are stored for an additional period of time, to be used if and when needed for manual review.
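In a browser, capturing those two feeds maps onto two standard WebRTC APIs: getUserMedia for the camera and getDisplayMedia for the screen. A minimal browser-only sketch; how Honorlock actually wires the captured tracks into its recording pipeline is not shown and is an assumption here:

```javascript
// Browser-only sketch: capture camera and screen feeds for a proctored exam.
// Both calls prompt the student for permission; error handling is minimal.
async function captureExamFeeds() {
  // Camera (with microphone) via getUserMedia.
  const camera = await navigator.mediaDevices.getUserMedia({ video: true, audio: true });
  // Screen share via getDisplayMedia.
  const screen = await navigator.mediaDevices.getDisplayMedia({ video: true });
  // In a real proctoring client the tracks would be attached to one or more
  // RTCPeerConnections and streamed to the recording servers in real time.
  return { camera, screen };
}
```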

Offering such a secure testing environment requires media servers that record the session for as long as the exam takes place. If 100 students need to take an exam in a specific subject, they might all need to do so at the same scheduled time.

Exams have their own seasonality. There is a lot of usage during midterm and final exam periods, whereas January sees far less activity, as most schools are not in session.

Online proctoring platforms need to make sure that each student taking an exam gets a high-quality experience, no matter how many other students are taking an exam at the same time. This frees students to worry about their test and not about the proctoring software.

Honorlock’s scaling challenge

Honorlock is aware of this seasonality. They wanted to make sure that their application could handle the load in its various areas, especially given the expected growth of their business in the near future.

What Honorlock were looking for was an answer to the question: at what point do they need to improve their application to scale further?

Honorlock is using a third party video platform. They have decided early on not to develop and deploy their own infrastructure, preferring to focus on the core experience for the students and the institutions using them.

Honorlock decided not to blindly assume how well the third-party video platform would scale, and instead went ahead and conducted end-to-end stress testing, validating their assumptions and scale requirements.

When researching alternatives, it was important for Honorlock to be able to test the whole client-side UI, to make sure the video infrastructure gets triggered the same way it would in real life. There was also a need to test everything end to end, rather than scale-test each component separately. This led Honorlock to testRTC.

“We’ve selected testRTC because it enabled us to stress test our service as closely as possible to the live usage we were expecting to see on our platform. Using testRTC significantly assisted us in validating our readiness to scale our business.”

Carl Scheller, VP of Engineering, Honorlock

Load testing with testRTC

Once Honorlock started using testRTC, it was immediately apparent that Honorlock’s testing requirements were met:

  • Honorlock made use of the powerful scripting available in testRTC. This enabled handling the complexity of the UX of a proctoring service
  • Once ready, being able to scale tests up to hundreds or thousands of concurrent browsers made the validation process easier, especially with testRTC’s built-in graphs focusing on high-level aggregate media quality information
  • The global distribution of testRTC’s probes and their granular network controls enabled Honorlock to run stress tests with different machine configurations, mapping to Honorlock’s target audience of students

A big win for Honorlock was the level of support provided by testRTC throughout the process. The testRTC team played an important role in helping define the test plan, write the test scripts, and modify them to work in a realistic scenario for the Honorlock application.

Building a partnership

Working with testRTC has been useful for Honorlock. While the testRTC service offered a powerful and flexible solution, the real benefit was in the approach testRTC took to the work needed. From the get-go, testRTC provided hands-on assistance, making sure Honorlock could ramp up its testing quickly and validate its scale.

That ability to get hands-on assistance, coupled with the self-service capabilities found in testRTC, was exactly what Honorlock was looking for.

The validation itself helped Honorlock uncover issues in both their own platform and the third-party service they were using. These issues are being taken care of. Using testRTC enabled Honorlock to make better technical decisions.