
Methodically testing and optimizing WebRTC applications at Vowel

“testRTC is the de facto standard for providing reliable WebRTC testing functionality.”

Paul Fisher, CTO and Co-Founder at Vowel

Many vendors these days are focused on making meetings more efficient. Vowel is a video conferencing tool that actually makes meetings better: it enables users to plan, host, transcribe, search, and share their meetings, right from inside the browser, using WebRTC.

Vowel has been using testRTC throughout 2020 and I thought it was a good time to talk with Paul Fisher, CTO and Co-Founder at Vowel. I wanted to understand from him how testRTC helps Vowel improve their product and its user experience.

Identifying bottlenecks and issues, scaling up for launch

One of the most important things in a video conferencing platform is the quality of the media. Before working with testRTC, Vowel lacked the visibility and the means to conduct systematic optimizations and improvements to their video platform. They got to know testRTC through a company advisor, whose first suggestion was to adopt it.

In the early days, Vowel used internal tools, but found that these tools carried a lot of overhead: they required far more work to run, manage, and extract results from. Rolling their own was too time consuming and delivered far less value.

Once Vowel adopted testRTC, things changed for the better. By setting up an initial suite of regression tests that could be executed on demand and through continuous integration, Vowel was able to establish a baseline of their implementation’s performance and quality. From there, they could identify what required improvement and optimization, and see whether a new release or modification caused an unwanted regression.

testRTC was instrumental in helping Vowel resolve multiple implementation issues: congestion control, optimizing resolution and bandwidth usage, debugging simulcast, and understanding the causes of – and optimizing for – latency, round trip time, and jitter.
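As an illustration of one of these metrics: the jitter statistic that WebRTC reports is the interarrival jitter defined in RFC 3550, an exponentially smoothed estimate of how much packet transit times vary. A minimal sketch of the computation (illustrative only, not testRTC’s internal code):

```javascript
// Interarrival jitter per RFC 3550, the statistic WebRTC exposes in getStats.
// packets: [{ sent: ms, received: ms }, ...] in arrival order.
function interarrivalJitter(packets) {
  let jitter = 0;
  for (let i = 1; i < packets.length; i++) {
    // D = difference in transit time between consecutive packets
    const transitPrev = packets[i - 1].received - packets[i - 1].sent;
    const transitCurr = packets[i].received - packets[i].sent;
    const d = Math.abs(transitCurr - transitPrev);
    jitter += (d - jitter) / 16; // RFC 3550 smoothing: J += (|D| - J) / 16
  }
  return jitter;
}
```

A perfectly steady network (constant transit time) yields zero jitter; any variation in packet spacing pushes the estimate up, which is why it is a useful regression metric.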

Vowel made huge strides in these areas by adopting testRTC. Before testRTC, Vowel took an ad-hoc approach, relying almost entirely on user feedback and on metrics collected in Datadog and other tools. There was no methodical way to analyze and pinpoint the source of issues.

With the adoption of testRTC, Vowel is now able to reproduce and diagnose issues, as well as validate that those issues have been resolved. Vowel created a suite of test scripts for these issues and for the scenarios they focus on, and now methodically runs them as regression tests with each release.

“Using testRTC has had the most significant impact in improving the quality, stability and maintenance of our platform.”

This approach lets them catch regression bugs earlier, before breaking changes reach production – effectively preventing such incidents from happening.

Reliance on open source

Vowel was built on top of an open source media server, but significant improvements, customizations, and additional features were required for their platform. All of these changes had to be rigorously tested to see how they would affect behavior, stability, and scalability.

On top of that, when using open source media servers, there are still all the aspects and nuances of the infrastructure itself: the cloud platform, running across regions, video layouts, and so on.

One cannot just take an open source product or framework and expect it to work well without tweaking and tuning it.

Vowel made a number of significant modifications to lower-level media settings and behavior. testRTC was used to assess these changes — validating that there was a marked improvement across a range of scenarios, and ensuring that there were no unintentional, negative side effects or complications. Without the use of testRTC, it would be extremely difficult to run these validations — especially in a controlled, consistent, and replicable manner.

One approach is to roll out directly to production and try to figure out if a change made an improvement or not. The challenge there is that there is so much variability of testing in the wild that is unrelated to the changes made that it is easy to lose sight of the true effects of changes – big and small ones.

“A lot of the power of testRTC is that we can really isolate changes, create a clean room validation and make sure that there’s a net positive effect.”

testRTC enabled Vowel to establish a number of critical metrics and set goals across them. Vowel now runs these recurring tests automatically as part of regression and extracts the metrics to validate that none of them “fail”.

On using testRTC

“testRTC is the de facto standard for providing reliable WebRTC testing functionality.”

testRTC is used today at Vowel by most of the engineering team.

Test results are shared across teams, and data is exported into the internal company wiki. Vowel’s engineers constantly add new test scripts; new Scrum stories commonly include creating or improving test scripts in testRTC. Every release includes running a battery of tests on testRTC.

For Vowel, testRTC is extremely fast and easy to use.

It is easy to automate and spin up tests on demand with a click of a button, no matter the scale needed.

The fact that testRTC uses Nightwatch, an open source browser automation framework, makes it powerful in its ability to create and customize practically any scenario.

The test results are well organized in ways that make it easy to understand the status of a test, pinpoint issues, and drill down into the details at each layer and level.

How Workable uses testRTC for automated WebRTC testing

“testRTC had almost everything that we needed. The solution is easy to use, easy to integrate and it was easy to include in our CI environment.”
Eleni Karakizi, Senior QA Engineer at Workable

HR is one of the business functions getting a digital transformation makeover. Workable is a leading vendor in this market, helping businesses make the right hires, faster, with its talent acquisition software. Part of that enablement is Workable’s video interviews product, which makes use of WebRTC.

WebRTC test automation via testRTC

Workable has video interviews implemented as a feature of a much larger service. The teams at Workable believe in test automation and shy away from manual testing as much as possible.

When they started implementing their video interviews feature, they immediately searched for a WebRTC test automation solution. WebRTC implementations, they found, are complicated systems: developers need to handle changing, unpredictable network environments, and WebRTC brings a lot of moving parts with it. Using testRTC reduced much of the development effort involved in setting up effective test automation for their environment.

Workable immediately created a set of tests and made them a part of their continuous integration processes, running as part of their nightly regression testing. This enabled Workable to find any regression issues quickly and effectively before they got to the hands of their users and without the need to invest expensive manual testing time.

At Workable, testRTC is accessed by developers and QA engineers who work on the video interviews platform to create tests, run them and analyze the results.

The testRTC experience

What Workable found in testRTC was an easy to use service.

Test scripts in testRTC are written using Nightwatch, a widely used open source scripting framework for browser automation. Since much of the code developed with WebRTC is written in JavaScript, being able to write test automation in the same language using Nightwatch meant there was virtually no learning curve in adopting testRTC.
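A Nightwatch-style test script of the kind described could look roughly like the sketch below. The URL, selectors, and test name are hypothetical placeholders, not Workable’s actual application:

```javascript
// Hypothetical Nightwatch-style test suite: each key is a test case whose
// function receives the browser object and chains automation commands.
const interviewTests = {
  'join an interview and verify remote media': function (browser) {
    browser
      .url('https://example.com/interview/test-room') // placeholder URL
      .waitForElementVisible('#join-button', 5000)    // placeholder selector
      .click('#join-button')
      .waitForElementVisible('#remote-video', 10000)  // placeholder selector
      .end();
  },
};

module.exports = interviewTests;
```

The chained-command style means a scenario reads top to bottom like the user flow it automates, which is part of why such scripts are quick for JavaScript developers to pick up.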

The APIs used for the purpose of continuous integration were easy enough to pick up, making the integration process itself a breeze.

An important aspect of the testing conducted by Workable was the ability to configure and test various network conditions. The availability and ease of picking up different network profiles with testRTC made this possible.
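To make the idea of a network-condition matrix concrete, here is an illustrative sketch. The profile names and values are assumptions for the example, not testRTC’s actual profile identifiers:

```javascript
// Illustrative matrix of network conditions a WebRTC test plan might cover.
// Names and values are made up for this sketch.
const networkProfiles = [
  { name: 'good-wifi',  bandwidthKbps: 5000, lossPercent: 0, latencyMs: 20 },
  { name: 'poor-3g',    bandwidthKbps: 400,  lossPercent: 2, latencyMs: 300 },
  { name: 'lossy-wifi', bandwidthKbps: 2000, lossPercent: 5, latencyMs: 50 },
];

// Each profile would be applied to a test probe before the run; this helper
// just renders a human-readable description of one run configuration.
function describeRun(profile) {
  return `run under ${profile.name}: ${profile.bandwidthKbps} kbps, ` +
         `${profile.lossPercent}% loss, ${profile.latencyMs} ms latency`;
}
```

Running the same scenario across such a matrix is what turns a single pass/fail check into a picture of how the service degrades as conditions worsen.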

Here’s why Intermedia turned to testRTC to proactively monitor their AnyMeeting web conferencing service

“testRTC enables us to monitor meeting performance from start to finish, with a focus on media quality. We get the best of everything with this platform.”

As 2019 came to a close, I had a chance to sit and chat with Ryan Cartmell, Director of Production System Administration at Intermedia®. Ryan manages a team responsible for monitoring and maintaining the production environment of Intermedia. His top priority is maintaining uptime and availability of Intermedia’s services.

In 2017, Intermedia acquired AnyMeeting®, a web conferencing platform based on WebRTC. Since then, Ryan and his team have been working towards building up the tools and experience needed to give them visibility into media quality and meeting performance.

Initially, these tools took care of two levels of monitoring:

  1. System resource performance monitoring was done by collecting and looking at server metrics
  2. Application level monitoring was incorporated by collecting certain application specific metrics, aggregating them in an in-house monitoring platform as well as using a cloud APM vendor

This approach gave a decent view of the service and its performance, but it had its limits.

What you don’t know you don’t know

The way such monitoring systems work is by collecting metrics, data, and logs from the various components in the system as well as the running applications. Once you have that data, you can configure alerts and dashboards based on what you know you are looking for.
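In its simplest form, this kind of alerting boils down to comparing collected metric samples against known thresholds. A minimal sketch, with illustrative metric names and limits (not Intermedia’s actual rules):

```javascript
// Illustrative threshold rules: alert when a metric exceeds its limit.
const alertRules = [
  { metric: 'packetLossPercent', max: 3 },
  { metric: 'roundTripTimeMs',   max: 400 },
];

// Returns the rules that a given metrics sample violates.
function evaluateAlerts(sample, rules) {
  return rules.filter((rule) => sample[rule.metric] > rule.max);
}
```

The catch, as described next, is that a rule like this only exists after someone has decided which metric and threshold matter, which usually happens after the first incident.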

If an issue was found in the AnyMeeting service, Ryan’s team would work out how it could be deduced from the logs and available information, and create new alerts based on that input. Doing so ensured that the next time the same issue occurred, it would be caught and dealt with properly.

The challenge here is that you don’t know what you don’t know. You first need a problem to occur and someone to complain in order to figure out a rule to alert for it happening again. And you can never really reach complete coverage over potential failures.

This kept the Intermedia Operations team in a reactive position. What they wanted and needed was a way to proactively run and test the system to catch any issues in their environment.

Proactively handling WebRTC issues using testRTC


“With testRTC we are now able to get ahead of issues and not wait for customers to report issues.”

testRTC is an active monitoring test engine that allows Intermedia to proactively test its services. This gives Intermedia visibility into both the performance and the overall availability of the AnyMeeting platform.

Intermedia deployed multiple testRTC monitors, making sure its data centers are probed multiple times an hour. The monitors are active checks that create real web conferences on AnyMeeting, validating that a session works as expected from start to finish: from logging in, through communicating via voice and video, to screen sharing. If things go awry or expected media thresholds aren’t met, alerts are issued, and Ryan’s team can investigate.

A screenshot from one of the testRTC monitors for the AnyMeeting service

These new testRTC monitors mapped and validated the complete user experience of the AnyMeeting service, something that wasn’t directly monitored before.

Since its implementation, testRTC has helped Intermedia identify various issues in the AnyMeeting system – valuable information that Intermedia, as a market leader in unified communications, uses in its efforts to continually improve the performance and quality of its services.

The data that testRTC collects, including the screenshots it takes, makes it much easier to track down issues. Before testRTC, with performance metrics alone, it was difficult to understand the overall impact on an end user. Now that understanding is part of the analysis process.

Working with testRTC

Since starting to work with testRTC and installing its monitors, Intermedia has found the following advantages:

  1. The flexibility of the testRTC platform – this enables Intermedia to test all elements of the web platform service it offers
  2. Top tier support – the testRTC team was there to assist with any issues and questions Intermedia had
  3. Level of expertise – testRTC’s ability to help Intermedia work through the issues that testRTC exposes

For Ryan, testRTC gives a level of comfort knowing that tests are regularly being performed. And if a technical challenge does arise, the data available from testRTC enables Ryan and his team to triage the issue far more easily than before.

Intermedia and AnyMeeting are trademarks or registered trademarks of Intermedia.net, Inc. in the United States and/or other countries.


Preparing for WebRTC scale: What Honorlock did to validate their infrastructure

This week, I decided to have a conversation with Carl Scheller, VP of Engineering at Honorlock. I knew about the remote proctoring space, and have seen a few clients work with testRTC. This was the first real opportunity I had to understand firsthand about the scale-related requirements of this industry.

Honorlock is a remote proctoring vendor, providing online testing services to higher education institutions. Colleges and universities use Honorlock when they want to proctor test taking online. The purpose? Ensure academic integrity of the students who take online exams as part of their courses.

Here’s the thing – every student taking an exam online ends up using Honorlock’s service, which in turn connects to the user’s camera and screen to collect video feeds in real time along with other things that Honorlock does.

Proctoring with WebRTC and usage behavior

When taking an online exam, the proctoring platform connects to the student’s camera and screen using WebRTC. The media feeds are then sent to and recorded on Honorlock’s servers in real time, passing through AI-based checks along the way to detect signs of cheating. The recordings themselves are stored for an additional period, to be used if and when manual review is needed.

Offering such a secure testing environment requires media servers that record each session for as long as the exam lasts. If 100 students need to take an exam in a specific subject, they may all need to do so at the same scheduled time.

Exams have their own seasonality. A lot of usage takes place during midterm and final exam periods, whereas January, when most schools are less active, sees far less traffic.

Online proctoring platforms need to make sure that each student taking an exam gets a high quality experience, no matter how many other students are taking exams at the same time. This frees students to worry about their test, and not about the proctoring software.

Honorlock’s scaling challenge

Honorlock is well aware of this seasonality. They wanted to make sure their application could handle the load across its various areas, especially given the expected growth of their business in the near future.

What Honorlock was looking for was an answer to the question: at what point would they need to improve their application to scale further?

Honorlock uses a third party video platform. They had decided early on not to develop and deploy their own infrastructure, preferring to focus on the core experience for the students and institutions using them.

Honorlock decided not to blindly trust working assumptions about the scale of the third party video platform, and instead conducted end-to-end stress testing, validating their assumptions and scale requirements.

When researching alternatives, it was important for Honorlock to be able to test the whole client-side UI, to make sure the video infrastructure was triggered the same way it would be in real life. They also needed to test everything end to end, rather than only stress testing each component separately. This led Honorlock to testRTC.

“We’ve selected testRTC because it enabled us to stress test our service as closely to the live usage we were expecting to see in our platform. Using testRTC significantly assisted us in validating our readiness to scale our business.”

Carl Scheller, VP of Engineering, Honorlock

Load testing with testRTC

Once Honorlock started using testRTC, it was immediately apparent that the testing requirements of Honorlock were met:

  • Honorlock made use of the powerful scripting available in testRTC, which enabled them to handle the complexity of a proctoring service’s UX
  • Once ready, being able to scale tests up to hundreds or thousands of concurrent browsers made the validation process easier, especially with testRTC’s built-in graphs focusing on high level, aggregate media quality information
  • The global distribution of testRTC’s probes and their granular network controls enabled Honorlock to run stress tests with machine configurations matching Honorlock’s target audience of students

A big win for Honorlock was the level of support provided by testRTC throughout the process. testRTC played an important role in helping define the test plan, write the test scripts, and modify those scripts to work in a realistic scenario for the Honorlock application.

Building a partnership

Working with testRTC has been useful for Honorlock. While the testRTC service offered a powerful and flexible solution, the real benefit was in the approach testRTC took to the work needed. From the get go, testRTC provided hands-on assistance, making sure Honorlock could ramp up their testing quickly and validate their scale.

That ability to get hands-on assistance, coupled with testRTC’s self-service capabilities, was exactly what Honorlock was looking for.

The validation itself helped Honorlock uncover issues in both their own platform and the third party platform they were using. These issues are now being addressed. Using testRTC enabled Honorlock to make better technical decisions.

How Talkdesk support solves customer network issues faster with testRTC

“The adoption of testRTC Network Testing at Talkdesk was really high and positive”

Earlier this month, I sat down with João Gaspar, Global Director, Customer Service at Talkdesk, to understand more about how they are using the new testRTC Network Testing product. This is the first time testRTC has introduced a product designed for support teams, so this was an interesting conversation for me.

Talkdesk is the fastest growing cloud contact center solution today. They have over 1,800 customers across more than 50 countries. João oversees the global support team at Talkdesk with the responsibility to ensure clients are happy by offering proactive and transparent support.

All of Talkdesk’s customers make use of WebRTC as part of their call center capabilities. When call center agents open the Talkdesk application, they can receive incoming calls or dial outgoing calls directly from their browser, using WebRTC.

WebRTC challenges for cloud contact centers

The main challenge with cloud communication in contact centers is finding the reason for user complaints about call quality. Troubleshooting such scenarios to get to the root cause is very hard, and in almost all cases Talkdesk has found that the problem lies not in its communication infrastructure, but in issues between the customer’s agent and their firewall/proxy.

Issues range from the available bandwidth and quality of the agent’s internet connection, to problems with their headphones, the machine they are using, and a slew of other areas.

Talkdesk’s proactive focus on support means they engage with clients not only when there are issues, but through the entire cycle. For larger enterprise deals, Talkdesk performs network assessments and provides recommendations to the client’s network team during the POC itself, rather than waiting for quality issues to crop up later in the process.

To that end, Talkdesk used a set of multiple tools, some of them running only on Internet Explorer and others testing network conditions but not necessarily focused on VoIP or on Talkdesk’s communication infrastructure. It wasn’t a user friendly approach, either for Talkdesk’s support teams or for the client’s agents and network team.

Talkdesk wanted a tool that provides quick analysis in a simple and accurate manner.

Adopting testRTC’s Network Testing product

Talkdesk decommissioned its existing analysis tools, preferring to use testRTC’s Network Testing product instead. With a click of a button, the client can now provide detailed analysis results to the Talkdesk support team within a minute. This enables faster response times and less frustration for Talkdesk and its customers.

Today, all of the Talkdesk teams in the field, including support, networks, and sales, make use of the testRTC Network Testing service. When a Talkdesk representative, at a client location or remotely, needs to understand the client’s network behavior, they send a link to the client, asking them to click the start button. testRTC Network Testing then conducts a set of network checks, immediately making the results available to Talkdesk’s support team.

testRTC’s backend dashboard for Talkdesk

Adoption of this product within Talkdesk was high and positive, thanks to its simplicity and ease of use. For the teams in the field, it makes it easy to engage with potential clients who haven’t signed a contract yet, while investing very little in resources.

The big win: turnaround time

testRTC’s Network Testing service doesn’t solve the client’s problems – there is no silver bullet here. Talkdesk support still needs to analyze the results, figure out the issues, and work with the client on them.

testRTC’s Network Testing service enables Talkdesk to quickly understand if there are any blocking issues for clients and start engaging with clients sooner in the process. This dramatically reduces the turnaround time when issues are found, increasing transparency and keeping clients happier throughout the process.

Talkdesk Network Test service in action

On selecting testRTC

When Talkdesk searched for an alternative to their existing solution, they came to testRTC. They knew testRTC’s CEO through webinars and WebRTC related posts he published independently and via testRTC, and wanted to see if they could engage with testRTC on such a solution.

“testRTC’s Network Testing service reduces the turnaround time for us in understanding and addressing potential network issues with clients”

testRTC made a strategic decision to create a new service offering for WebRTC support teams, working closely with Talkdesk on defining the requirements and developing the service.

Throughout the engagement, Talkdesk found testRTC to be very responsive and pragmatic, making the adjustments required by Talkdesk during and after the initial design and development stages.

What brought confidence to Talkdesk is the stance that testRTC took in the engagement, making it clear that for testRTC this is a partnership and not a one-off service. For Talkdesk, this was one of the most important aspects.