
Methodically testing and optimizing WebRTC applications at Vowel

“testRTC is the de facto standard for providing reliable WebRTC testing functionality.”

Paul Fisher, CTO and Co-Founder at Vowel

Many vendors these days are trying to make meetings more efficient. Vowel is a video conferencing tool that actually makes meetings better. It enables users to plan, host, transcribe, search, and share their meetings – right from inside the browser, using WebRTC.

Vowel has been using testRTC throughout 2020 and I thought it was a good time to talk with Paul Fisher, CTO and Co-Founder at Vowel. I wanted to understand from him how testRTC helps Vowel improve their product and its user experience.

Identifying bottlenecks and issues, scaling up for launch

One of the most important things in a video conferencing platform is the quality of the media. Before working with testRTC, Vowel lacked the visibility and the means to conduct systematic optimizations and improvements to their video platform. They got to know testRTC through a company advisor, whose first suggestion was to adopt it.

In the early days, Vowel used internal tools, but found that these came with a lot of overhead: they required a lot more work to run, to manage and to extract results from the tests conducted. Rolling their own was too time consuming and delivered far less value.

Once Vowel adopted testRTC, things changed for the better. By setting up an initial set of regression tests that can be executed on demand and through continuous integration, Vowel were able to create a baseline of their implementation's performance and quality. From there, they were able to figure out what required improvement and optimization, as well as understand whether a new release or modification caused an unwanted regression.

testRTC was extremely instrumental in helping Vowel resolve multiple issues in their implementation: congestion control, optimizing resolution and bandwidth, debugging simulcast, and understanding and reducing latency, round trip time and jitter.

Vowel were able to make huge strides in these areas by adopting testRTC. Prior to testRTC, Vowel had an ad-hoc approach, relying almost entirely on user feedback and on metrics collected in Datadog and other tools. There was no methodical way to analyze and pinpoint the source of issues.

With the adoption of testRTC, Vowel is now able to reproduce and diagnose issues, as well as validate that these issues have been resolved. Vowel created a suite of test scripts for these issues and for the scenarios they focus on, and now methodically runs these tests as regressions with each release.

“Using testRTC has had the most significant impact in improving the quality, stability and maintenance of our platform.”

This approach lets them catch regression bugs early, before potentially breaking changes are rolled out to production – practically preventing them from happening.

Reliance on open source

Vowel was built on top of an open source media server, but significant improvements, customizations and additional features were required for their platform. All these changes had to be rigorously tested to see how they would affect behavior, stability and scalability.

On top of that, when using open source media servers, there are still all the aspects and nuances of the infrastructure itself: the cloud platform, running across regions, video layouts, and so on.

One cannot just take an open source product or framework and expect it to work well without tweaking and tuning it.

Vowel made a number of significant modifications to lower-level media settings and behavior. testRTC was used to assess these changes — validating that there was a marked improvement across a range of scenarios, and ensuring that there were no unintentional, negative side effects or complications. Without the use of testRTC, it would be extremely difficult to run these validations — especially in a controlled, consistent, and replicable manner.

One approach is to roll out changes directly to production and try to figure out whether they made an improvement or not. The challenge is that there is so much variability in the wild, unrelated to the changes made, that it is easy to lose sight of the true effects of those changes – big and small.

“A lot of the power of testRTC is that we can really isolate changes, create a clean room validation and make sure that there’s a net positive effect.”

testRTC enabled Vowel to establish a number of critical metrics and set goals across them. Vowel now runs these recurring tests automatically as regressions and extracts the metrics to validate that they don't “fail”.

On using testRTC

“testRTC is the de facto standard for providing reliable WebRTC testing functionality.”

testRTC is used today at Vowel by most of the engineering team.

Test results are shared across the teams, and data is exported into the internal company wiki. Vowel's engineers constantly add new test scripts, and new Scrum stories commonly include the creation or improvement of test scripts in testRTC. Every release includes running a battery of tests on testRTC.

For Vowel, testRTC is extremely fast and easy to use.

It is easy to automate and spin up tests on demand with just the click of a button, no matter the scale needed.

The fact that testRTC uses Nightwatch, an open source browser automation framework, makes it powerful in its ability to create and customize practically any scenario.

The test results are well organized in ways that make it easy to understand the status of the test, pinpoint issues and drill down to see the things needed in each layer and level.

How Nexmo Integrated testRTC into their Test Automation for the Nexmo Voice API

Nexmo found in testRTC a solution to its end-to-end media testing challenges for the Nexmo Voice API product, connecting PSTN to WebRTC and vice versa.

Nexmo is one of the top CPaaS vendors out there providing cloud communication APIs to developers, enabling enterprises to add communication capabilities into their products and applications.

One of Nexmo’s capabilities involves connecting voice calls between regular phone numbers (PSTN) to browsers (using WebRTC) and vice versa. This capability is part of the Nexmo Voice API.

Testing @ Nexmo

Catering to so many customers with ongoing deployments to production means that Nexmo needs to take testing seriously. One of the things Nexmo did early on was introduce automated testing, using the pytest framework. Part of this automated testing is a set of regression tests – a huge number of tests that provide very high test coverage. Regression tests get executed whenever the Nexmo team has a new version to release, but they can also be launched “on demand” by any engineer, or triggered by the Jenkins CI pipeline upon a merge to a particular branch.

At Nexmo, development teams are in charge of the quality of their code, so there is no separate QA team.

In many cases, launching these regression tests first creates a new environment, where the Nexmo infrastructure is launched dynamically on cloud servers. This enables developers to run multiple test sessions in parallel, each in front of their own sandboxed environment, running a different version of the service.

When WebRTC was added to Nexmo Voice API, there was a need to extend the testing environment to include support for browsers and for WebRTC technology.

On Selecting testRTC

“When it comes to debugging, when something has gone wrong, testRTC is the first place we’d go look. There’s a lot of information there”

Jamie Chapman, Voice API Engineer at Nexmo

Nexmo needed WebRTC end-to-end tests as part of their regression test suite for the Nexmo Voice API platform. These end-to-end tests were around two main scenarios:

  1. Dialing a call from PSTN and answering it inside a browser using WebRTC
  2. Calling a PSTN number directly from a browser using WebRTC

In both cases, their client side SDKs get loaded by a web page and tested as part of the scenario.

Nexmo ended up using testRTC as their tool of choice because it got the job done and it was possible to integrate it into their existing testing framework:

  • The Python script used to define and execute a test scenario calls testRTC's API to dynamically create a test and run it on the testRTC platform
  • Environment variables specific to the dynamically created test environment get injected into the test
  • testRTC's test result is then returned to the Python script and recorded as part of the test execution result

This approach allowed Nexmo to integrate testRTC right into their current testing environment and test scripts.
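
To make the pattern concrete, here is a rough sketch of what such an integration loop can look like. It is written in JavaScript to match the other code samples on this page (Nexmo's actual scripts are in Python), and the endpoint paths, field names and API key header are hypothetical placeholders rather than the real testRTC API:

// Hypothetical sketch – endpoint paths, field names and the API key header
// are placeholders, not the real testRTC API.
const API_BASE = 'https://api.example-test-service.com';
const API_KEY = process.env.TEST_API_KEY;

async function runWebRtcRegressionTest(envVars) {
    // 1. Dynamically create a test run, injecting environment-specific variables
    const createResponse = await fetch(API_BASE + '/tests', {
        method: 'POST',
        headers: { 'Content-Type': 'application/json', apikey: API_KEY },
        body: JSON.stringify({ name: 'voice-api-pstn-to-webrtc', variables: envVars })
    });
    const { runId } = await createResponse.json();

    // 2. Poll until the run ends and hand the result back to the caller
    //    (e.g. the pytest wrapper that records pass/fail)
    for (;;) {
        const statusResponse = await fetch(API_BASE + '/tests/' + runId, {
            headers: { apikey: API_KEY }
        });
        const { status } = await statusResponse.json();
        if (status === 'completed' || status === 'failed') {
            return status;
        }
        await new Promise((resolve) => setTimeout(resolve, 10000));
    }
}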

Catering for Teams

The Voice API engineering team is a large one. All of its engineers have access to testRTC, and they are able to launch regression tests that end up running testRTC scripts, as well as use the testRTC dashboard to debug issues that are found.

The ability to have multiple users, each with their own credentials, running tests on demand when needed increased productivity without creating coordination issues across team members. The test results themselves are hosted in a single repository, accessible to the whole team, so all developers can easily share faulty test results with the team.

Debugging WebRTC Issues

Nexmo got regression testing for WebRTC off the ground by using testRTC. It does so by integrating with the testRTC APIs, scheduling and launching tests on demand from Nexmo's own test environment. The tests today are geared towards providing end-to-end validation of media and connectivity between the PSTN network and WebRTC – validation that testRTC takes care of by default.

When things break, developers check the results collected by testRTC. As Jamie Chapman, Voice API engineer at Nexmo said: “When it comes to debugging, when something has gone wrong, testRTC is the first place we’d go look. There’s a lot of information there”.

testRTC takes screenshots during the test run, as well as upon failure. It collects browser logs and webrtc-internals dump files, visualizing it all and making it available for debugging purposes. This makes testRTC a valuable tool in the development process at Nexmo.

On the Horizon

Nexmo is currently making use of the basic scripting capabilities of testRTC. It has invested in the API integration, but there is more that can be done.

Nexmo are planning to increase their use of testRTC in several ways in the near future.


Advanced Testing: Manipulating getUserMedia and Available Devices

Philipp Hancke is not new here on our blog. He assisted us when we wrote the series on webrtc-internals, and he is not squeamish about writing his own testing environment and sharing the love. This time, he wanted to share a piece of code that takes device availability test automation in WebRTC to a new level.

Obviously… we said yes.

We don’t have that implemented in testRTC yet, but if you are interested – just give us a shout out and we’ll prioritize it.

Both Chrome and Firefox have quite powerful mechanisms for automating getUserMedia with fake devices and skipping the permission prompt.

In Chrome this is controlled by the use-fake-device-for-media-stream and use-fake-ui-for-media-stream command line flags, while Firefox offers a preference, media.navigator.streams.fake. See the webdriver.js helper in this repository for the gory details of how to use this with Selenium.
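
For a rough idea of what that looks like with selenium-webdriver for Node, here is a simplified sketch – the webdriver.js helper in the repository handles many more cases:

const { Builder } = require('selenium-webdriver');
const chrome = require('selenium-webdriver/chrome');
const firefox = require('selenium-webdriver/firefox');

function buildDriver(browser) {
    if (browser === 'chrome') {
        // Fake devices plus an auto-accepted permission prompt in Chrome
        const options = new chrome.Options().addArguments(
            'use-fake-device-for-media-stream',
            'use-fake-ui-for-media-stream');
        return new Builder().forBrowser('chrome').setChromeOptions(options).build();
    }
    // Fake streams in Firefox; media.navigator.permission.disabled skips the prompt
    const options = new firefox.Options()
        .setPreference('media.navigator.streams.fake', true)
        .setPreference('media.navigator.permission.disabled', true);
    return new Builder().forBrowser('firefox').setFirefoxOptions(options).build();
}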

However, there are some scenarios which are not testable this way:

  • getUserMedia returning an error
  • restricting the list of available devices

While most of these are typically handled by unit tests, sometimes it is nice to test the complete user experience for a couple of use-cases:

  • test the behaviour of a client with only a microphone
  • test the behaviour of a client with only a camera
  • test the behaviour of a client with neither camera nor microphone
  • combine those tests with screen sharing which in some cases replaces the video track on appear.in
  • test audio-only clients interoperating with audio-video ones. The test matrix becomes pretty big at some point.

Those tests are particularly important because, as developers, we tend to do some manual testing on our own machines, which tend to be equipped with both devices. Automated tests running on a continuous integration server help a lot to prevent regressions.

Manipulating APIs with an extension

In order to manipulate both APIs, I wrote a Chrome extension (which magically works in Firefox and Edge because both support WebExtensions) that makes them controllable.

An extension can inject JavaScript into the page on page load as a content script. This has been used in the webrtc-externals extension described on webrtchacks to wrap the whole RTCPeerConnection API.

In our case, the content script replaces the getUserMedia and enumerateDevices functions with wrappers that can be modified at runtime. For example, the enumerateDevices wrapper calls the original function and then uses JavaScript to modify the result before returning it to the caller:

    navigator.mediaDevices.enumerateDevices = function() {
        return origEnumerateDevices()
            .then((devices) => {
                // Optionally hide all video or audio input devices.
                if (sessionStorage.__filterVideoDevices) {
                    devices = devices.filter((device) => device.kind !== 'videoinput');
                }
                if (sessionStorage.__filterAudioDevices) {
                    devices = devices.filter((device) => device.kind !== 'audioinput');
                }
                // Strip device labels, just like a browser does when permission
                // has not been granted.
                if (sessionStorage.__filterDeviceLabels
                    || sessionStorage.__getUserMediaAudioError === "NotAllowedError"
                    || sessionStorage.__getUserMediaVideoError === "NotAllowedError") {
                    devices = devices.map((device) => {
                        var deviceWithoutLabel = {
                            deviceId: device.deviceId,
                            kind: device.kind,
                            label: '',
                            groupId: device.groupId,
                        };
                        return deviceWithoutLabel;
                    });
                }
                return devices;
            });
    };
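
A similar wrapper can make getUserMedia itself fail on demand. Here is a minimal sketch of the idea, reusing the sessionStorage flag names from the snippet above (the actual code in the extension may differ):

    var origGetUserMedia = navigator.mediaDevices.getUserMedia
        .bind(navigator.mediaDevices);
    navigator.mediaDevices.getUserMedia = function(constraints) {
        // Reject audio or video requests when the matching flag is set,
        // e.g. sessionStorage.__getUserMediaAudioError = 'NotAllowedError'
        if (constraints && constraints.audio && sessionStorage.__getUserMediaAudioError) {
            return Promise.reject(new DOMException('fake getUserMedia error',
                sessionStorage.__getUserMediaAudioError));
        }
        if (constraints && constraints.video && sessionStorage.__getUserMediaVideoError) {
            return Promise.reject(new DOMException('fake getUserMedia error',
                sessionStorage.__getUserMediaVideoError));
        }
        return origGetUserMedia(constraints);
    };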

The full extension can be found on github. The behaviour is dynamic and can be controlled via sessionStorage flags. With Selenium, one would typically navigate to a page in the same domain, execute a small script to set the sessionStorage flags as desired, and then navigate to the page that is to be tested.
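
With selenium-webdriver for Node, that flow could look roughly like the sketch below (both URLs are placeholders for pages on the domain under test):

async function openWithDeniedCamera(driver) {
    // driver is a selenium-webdriver instance built with the extension loaded.
    // Load any page on the same domain first, so we can write to its sessionStorage.
    await driver.get('https://app.example.com/blank');
    await driver.executeScript(
        "sessionStorage.__getUserMediaVideoError = 'NotAllowedError';");
    // Now load the page that is actually being tested.
    await driver.get('https://app.example.com/room/test-room');
}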

We will walk through two examples now:

Use-case: Have getUserMedia return an error and change it at runtime

Let's say we want to test the case where a user has denied permission. For appear.in, this leads to a dialog that attempts to help the user change that in the browser UX.

The full test can be found here. Like most Selenium tests, it consists of a series of simple and straightforward steps:

  • build a selenium webdriver instance that allows permissions and loads the extension
  • go to the appear.in homepage
  • set the extension's sessionStorage flag to cause a NotAllowedError (i.e. the user has denied permission), as well as an appear.in-specific localStorage property that says the visitor is returning — this ensures we go into the flow we want to test and not into the “getUserMedia primer” that is shown to first-time users.
  • join an appear.in room by loading the URL directly.
  • the next step would typically be asserting the presence of certain DOM elements guiding the user to change the denied permission. This is omitted here, as those elements change rather frequently, and is replaced with a three-second sleep which allows for a visual inspection. It should look like this:
  • the sessionStorage flag is deleted
  • this eventually leads to the user entering the room and video showing up. We do some magic here in order to avoid having to ask the user to refresh the page.

Watch a video of this test running below:

 

Incidentally, that dialog had an “enter anyway” button which, due to the lack of testing, was not visible for quite some time without anyone noticing, because the visual regression tests could not reach this stage. Now that is possible.

Restricting the list of available devices

The fake devices in both Chrome and Firefox return a stream with exactly those properties that you ask for and they always succeed (in Chrome there is a way to make them always fail too). In the real world you need to deal with users who don’t have a microphone or a camera attached to their machine. A call to getUserMedia would fail with a NotFoundError (note the recent change in Chrome 64 or simply use adapter.js and write spec-compliant code today).
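
For illustration, a spec-compliant call that handles this case could look like the following sketch:

navigator.mediaDevices.getUserMedia({ audio: true, video: true })
    .then((stream) => {
        // attach the stream to the application
    })
    .catch((err) => {
        if (err.name === 'NotFoundError') {
            // no camera and/or microphone is attached to this machine
        }
    });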

The common way to avoid this is to enumerate the list of devices using enumerateDevices to figure out what is available — by pasting this into the JavaScript console:

navigator.mediaDevices.enumerateDevices().then(devices => {
    const hasMicrophone = devices.some(device => device.kind === "audioinput");
    console.log('has microphone', hasMicrophone);
});

 

When you run this together with the fake device flag you’ll notice that it provides two fake microphones and one fake camera device:

When the extension is loaded (which for manual testing can be done via chrome://extensions; see above for the Selenium way to do it), one can manipulate that list:

sessionStorage.__filterAudioDevices = true;

Paste the enumerateDevices snippet into the console again and the audio devices no longer show up:

At appear.in we used this to replace a couple of audio-only and video-only tests that used feature flags in the application code with more realistic behaviour. The extension allows a much cleaner separation between the frontend logic and the test logic.

Summary

Using a tiny web extension we could easily extend the already powerful WebRTC testing capabilities of the browsers and cover more advanced test scenarios. Using this approach it would even be possible to simulate events like the user unplugging the microphone during the call.

Automating Your WebRTC Product Testing (Recorded session)

I took part this week in Twilio’s Signal event in London.

As with the previous Signal event I attended, this one was excellent (but that’s for some other post).

Twilio were kind enough to invite me to talk at their event, which resulted in the recorded session below:

In the first part of this session, I tried explaining the challenges that WebRTC testing and automation brings with it. I ended up talking about these 5 challenges:

  1. WebRTC being a brand new technology (=always changing)
  2. Browser based (=you don’t control your whole tech stack)
  3. Resource intensive (=need to factor that in when allocating your testing machines)
  4. Network sensitive (=need to be able to test in different network conditions)
  5. It takes two to tango (=need to synchronize across browsers during a test)

The second part went through some of the results we've collected in our recent Kurento experiment, where we tried to see how much we could scale a deployed Kurento media server in different scenarios.

After the session, everyone asked me how the session was. Frankly – I don't know. I wasn't sitting and listening there; I was talking (and enjoying myself while doing so). I hope the audience in the room found the session useful. You can check it out on your own and make your own judgement.

Oh – and if you need to test your WebRTC application then you know where to find us 🙂

–> And if you don’t, then here’s our contact page.

Automated WebRTC Testing using testRTC

Yesterday, we hosted a webinar on testRTC. This time, we were really focused on showing some live demos of our service.

I wanted this one to be useful, so I sat down earlier this week, working on a general story outline with the idea of showing live how you can write a test script from scratch, building more and more capabilities and functionality into it as I went along.

It was real fun.

If you missed it, I’d like to invite you to watch the replay:

watch @ crowdcast

For the purpose of this webinar, I took Jitsi Meet (https://meet.jit.si/) and created the following scripts for it:

  1. Simple one-on-one test
    • Then I cleaned it up a bit from nagging warnings
    • And added a few basic expectations
  2. 4-way video test
    • For this one I’ve added some synchronization across the probes, and made sure Jitsi is the one generating the random rooms
    • I changed the script to be aware of sessions (parallel meeting rooms in the same test)
    • Then I played with the test reconfiguring it to run 40 probes, 8 in each meeting room
  3. One-on-one test with network limits
    • Switched back to a 1:1 session, this time with the flexibility we achieved in (2)
    • Increased the test length to 3 minutes
    • Injected 5% packet loss to the test in the second minute of the test

I also went over some of the results from the Kurento post we published yesterday, and went through the screen sharing script we recently wrote about, which uses appear.in as an example.

One of the things I was asked is to share the scripts used throughout the session.

So I cleaned up the scripts a bit and placed them on our Google Drive. I am sharing them here in two forms:

  1. The GDoc file of the script – open it to read, copy+paste it to wherever
  2. The JSON file of the script – you can import this one directly into your testRTC account (you’ll need to reconfigure the probe profiles before you run it):

Here they are:

  1. Simple one-on-one test: GDoc | JSON
  2. 4-way video test: GDoc | JSON
  3. One-on-one test with network limits: GDoc | JSON

We’re here for any questions you may have.


Just Landed: Automated WebRTC Screen Sharing Testing in testRTC

Well… this week we had a bit of a rough start, but we’re here. We just updated our production version of testRTC with some really cool capabilities. The time was selected to fit with the vacation schedule of everyone in this hectic summer and also because of some nagging Node.js security patch.

As always, our new release comes with too many features to enumerate, but I do want to highlight something we’ve added recently because of a couple of customers that really really really wanted it.

Screen sharing.

Yup. You can now use testRTC to validate the screen sharing feature of your WebRTC application. And like everything else with testRTC, you can do it at scale.

This time, we’ve decided to take appear.in for a spin (without even hinting anything to Philipp Hancke, so we’ll see how this thing goes).

First, a demo. Here’s a screencast of how this works, if you’re into such a thing:

Testing WebRTC Screen Sharing

There are two things to do when you want to test WebRTC screen sharing using testRTC:

  1. “Install” your WebRTC Chrome extension
  2. Show something interesting

#1 – “Install” your WebRTC Chrome extension

There are a couple of things you’ll need to do in the run options of the test script if you want to use screen sharing.

This is all quite arcane, so just follow the instructions and you’ll be good to go in no time.

Here’s what we’ve placed in the run options for appear.in:

#chrome-cli:auto-select-desktop-capture-source=Entire screen,use-fake-ui-for-media-stream,enable-usermedia-screen-capturing
#extension:https://s3-us-west-2.amazonaws.com/testrtc-extensions/appearin.tar.gz

The #chrome-cli directive stands for parameters that get passed to Chrome during execution. We need these to get screen sharing to work and to make sure Chrome doesn't pop up any nagging selection windows when the user wants to screen share (these kill any possibility of automation here). Which is why we set the following parameters:

  • auto-select-desktop-capture-source=Entire screen – just to make sure the entire screen is automatically selected
  • use-fake-ui-for-media-stream – just add it if you want this thing to work
  • enable-usermedia-screen-capturing – just add it if you want this thing to work

The #extension bit is a new thing we just added in this release. It will tell testRTC to pre-install any Chrome extensions you wish on the browser prior to running your test script. And since screen sharing in Chrome requires an extension – this will allow you to do just that.

What we pass to #extension is the location of a .tar.gz file that holds the extension’s code.

Need to know how to obtain a .tar.gz file of your Chrome extension? Check out our Chrome extension extraction guide.

Now that we’ve got everything enabled, we can focus on the part of running a test that uses screen sharing.

#2 – Show something interesting

Screen sharing requires something interesting on the screen, preferably not an infinite video recursion of the screen being shared in one of the rectangles. Here’s what you want to avoid:

And this is what we really want to see instead:

The above is a screenshot that got captured by testRTC in a test scenario.

You can see here 4 participants where the top right one is screen sharing coming from one of the other participants.

How did we achieve this in the code?

Here are the code snippets we used in the script to get there:

var videoURL = "https://www.youtube.com/tv#/watch?v=INLzqh7rZ-U";

client
   // Start screen sharing in appear.in
   .click('.VideoToolbar-item--screenshare.jstest-screenshare-button')
   .pause(300)
   // Mark the moment on the test graphs and grab a screenshot
   .rtcEvent('Screen Share ' + agentSession, 'global')
   .rtcScreenshot('screen share')
   // Open the YouTube video in a new tab so there is something moving to share
   .execute("window.open('" + videoURL + "', '_blank')")
   .pause(5000)

   // Switch to the YouTube tab
   .windowHandles(function (result) {
       var newWindow = result.value[2];
       this.switchWindow(newWindow);
   })
   .pause(60000)

   // Switch back to the appear.in tab
   .windowHandles(function (result) {
       var newWindow = result.value[1];
       this.switchWindow(newWindow);
   });

We start by selecting the URL that will show some movement on the screen. In our case, an arbitrary YouTube video link.

Once we activate screen sharing in appear.in, we call rtcEvent, which we saw last time (and which is also a new trick in this release). This will add a vertical line on the resulting graphs so we know when we activated screen sharing (more on this one later).

We call execute to open up a new tab with our YouTube link. I decided to use the youtube.com/tv# URL to get the video to work close to full screen.

Then we switch to the YouTube tab in the first windowHandles call.

We pause for a minute, and then go back to the appear.in tab in the browser.

Let’s analyze the results – shall we?

Reading WebRTC screen sharing stats

Screen sharing is similar to a regular video channel. But it may vary in resolution, frame rate or bitrate.

Here's what the appear.in graphs look like on one of the receiving browsers in this test run. Let's start with the frame rate this time:

Two things you want to watch for here:

  1. The vertical green line – that's where we've added the rtcEvent call. While it was added to the browser that is sending the screen share, we can see it on one of the receiving browsers as well. It gets us focused on the things of interest in this test
  2. The incoming blue line. It starts off nicely, oscillating at 25-30 frames per second, but once screen sharing kicks in – it drops to 2-4 frames per second – which is to be expected in most scenarios

The interesting part? Appear.in made a decision to use the same video channel to send screen sharing. They don’t open an additional video channel or an additional peer connection to send screen sharing, preferring to repurpose an existing one (not all services behave like that).

Now let’s look at the video bitrate and number of packets graphs:

The video bitrate still runs at around 280 kbps, but it oscillates a lot more. BTW – I am using the mesh version of appear.in here with 4 participants, so it is going low on bitrate to accommodate that.

The number of video packets per second on that incoming blue line goes down from around 40 to around 25. Probably due to the lower number of frames per second.

What else is new in testRTC?

Here's a partial list of some new things you can do with testRTC:

  • Manual testing service
  • Custom network profiles (more about it here)
  • Machine performance collection and visualization
  • Min/max bands on high level graphs
  • Ignore browser warnings and errors
  • Self service API key regeneration
  • Show elapsed time on running tests
  • More information in test runs on the actual script and run options used
  • More information across different tables and data views

Want to check screen sharing at scale?

You can now use testRTC to automate your screen sharing tests. And the best part? If you're doing broadcast or multiparty, you can now easily test screen sharing related issues at these scales as well.

If you need a hand in setting up screen sharing in your account, then give us a shout and we'll be there for you.

Do Browser Vendors Care About Your WebRTC Testing?

It is 2017 and it seems that browser vendors are starting to think of all of us WebRTC developers and testers. Well… not all the browser vendors… and not all the time – but I’ll take what I am given.

I remember years ago when I managed the development of a VoIP stack, we decided to rewrite our whole test application from scratch. We switched from the horrible “native” Windows and Unix UI frameworks to a cross platform one – Tcl/Tk (yes. I know. I am old). We also took the time to redesign our UI, trying to make it easier for us and our developers to test the APIs of the VoIP stack. These were the good ol’ days of manual testing – automation wasn’t even a concept for us.

This change brought with it a world of pain to me. I had almost daily fights with the test manager, who had her team file bugs that from my perspective were UI issues and not the product's issues. While true, fixing these bugs and even adding more tooling for our testing team ended up making our product better and more developer-friendly – an important factor for a product used by developers.

Things aren’t much different in WebRTC-land and browsers these days.

If I had to guess, here’s what I’d say is happening:

  • Developers are the main customers of WebRTC and the implementation of WebRTC in browsers
  • Browser vendors are working hard on getting WebRTC to work, but at times neglect this minor issue of empowering developers with their testing needs
  • Testing tools provided by browsers specifically for WebRTC are second class citizens when it comes to… well… almost everything else in the browser

The First 5 Years

Up until now, Chrome was the most accommodating browser out there when it came to us being able to adopt it and automate it for our own needs. It was never easy even with Chrome, but it is working, so it is hard to complain.

Chrome gives us out of the box the following set of capabilities:

  1. Support for Selenium and WebDriver, which allows us to automate it properly (for most versions, most of the time, when things don't suddenly go breaking on us). Firefox has similar capabilities
  2. The webrtc-internals Chrome tab with all of its goodness and data
  3. Ability to easily replace raw inputs of camera and microphone with media files (even if at times this capability is buggy)

We've had our share of Chrome bugs that we had to file or star to get specific features to work. Some of them got solved, while others are still open. That's life I guess – you win some and you lose some.

Firefox was not that much fun, to say the least. We struggled with it for a long time, trying to get it to behave with Selenium inside a Docker container. The end result never got beyond 5 frames per second. Somehow, the combination of technologies we've been using didn't work and never got the attention of Mozilla – it may well be our own ignorance of how and where to nag the Mozilla team to get that attention 🙂

Edge? It had nothing – or at least nothing close to the level that Chrome and Firefox have on offer. We will get there. Eventually.

This has been the status quo for quite some time. Perhaps the whole 5 years of WebRTC’s existence.

But now things are changing.

And they are becoming rather interesting.

Mozilla Wiresharking

Mozilla introduced last month the ability to log RTP headers in Firefox WebRTC sessions.

While Chrome had something similar for quite some time, Firefox took this a step further:

“Bug 1343640 adds support in Firefox version 55 to log the RTP header plus the first five bytes of the payload unencrypted. RTCP will be logged in full and unencrypted.”

The best thing though? It also shared a script that can convert these logs to PCAP files, making them readable in Wireshark – a popular open source tool for analyzing network traffic.

The end result? You can now analyze with more clarity what goes on the network and how the browser behaves – especially if you don’t have a media server in the middle (or if you haven’t invested in tools that enable you to analyze it already).

This isn’t a first for Mozilla. It seems that lately, they have been sharing some useful information and pieces of code on their new Advancing WebRTC blog – a definite resource you should be following if you aren’t already.

Edge Does BrowserStack

Microsoft has been on a very positive streak lately. For over a year now, most of the Microsoft announcements are actually furthering the cause of their customers and developers without creating closed gardens – something that I find refreshing.

When it comes to WebRTC, Microsoft recently released a new version of Edge (still in beta) that is interoperable with Chrome and Firefox – at the codec level. While that was a rather expected move, the one we saw last week was quite surprising and interesting.

An Edge testing partnership with BrowserStack: If you want to test your web app on the Edge browser, you can now use BrowserStack for free to do that (there are a few free plans there for it).

How does WebRTC come into play here? As an enabler of a new feature that got introduced there:

See how that Edge window inside a Chrome app running on a Mac looks?

Guess what – BrowserStack is using WebRTC to enable this screen casting feature. While the original Microsoft announcement removed any trace of WebRTC from it, you can still find that over the web (here, here and here for example). For the geeks, we have a webrtc-internals dump!

The feature is called “Live Testing” at BrowserStack and offers the ability to run a cloud machine running Windows 10 and the Edge browser – and have that machine stream its virtual screen to your local machine – all assuming the local browser you are using for it all supports WebRTC.

In a way, this is a replacement of VNC (which is what we use at testRTC to offer this capability).

Is this coming from Microsoft? From BrowserStack?

I don’t really think it matters. It shows how WebRTC is getting used in new ways and how browser vendors are a major part of this change.

Will Google even care?

Google has been running along with WebRTC, practically on their own.

Yes. Mozilla with Firefox was there from the beginning. Microsoft is joining with Edge. Apple is slowly being dragged into it if you follow the rumormill.

But Google has been setting the tone through the initial acquisitions it made and its ongoing investment – both in engineering and in marketing. The end result of Google's investments (not only in WebRTC but in everything HTML5 related)? Dominance of the desktop browser market.

With these new toys that other browser vendors are giving us developers and testers – may that be something to reconsider and revisit? We are the early adopters of browsers, and we usually pick and choose the ones that offer us the greater power and enable us to speed our development efforts.

I wonder if Google will answer in turn with its own new tools and initiatives or continue in their current trajectory.

Should we expect better tooling?

Yes. Definitely.

WebRTC is hard to develop compared to other HTML5 technologies, and it is a lot harder to test. Test automation frameworks and commercial offerings tend to focus on the easier problems of browser testing and often neglect WebRTC, which is where we try to fill in these gaps.

I, for one, would appreciate a few more trinkets from browser vendors that we could adopt and use at testRTC.

Check out the enhancements we’ve made to testRTC

It has been a while since we released a version, so it is with great pleasure that I am writing this announcement.

Yes. Our latest release is now out in the wild. We’ve upgraded our service on Sunday, so it is about time we take you for a quick roundup of the changes we’ve made.

#1 – Support for projects and users

This one is long overdue. Up until today, if you signed up for testRTC, you had to share your credentials with whoever was on your team in order to work together on tests. This was impossible to work with, assuming you wanted QA, R&D and DevOps to share the account and work cooperatively on the tests and monitors that got logged inside testRTC.

So we did what we should have – we now support two modes of operation:

  1. A user can be linked to multiple projects
    • So if your company is running multiple projects, you can now run them separately, having people focused on their own environment and tests
    • This is great for those who run segregated services for their own customers
    • It also means that now, a user can switch between projects with a single set of credentials in the system
  2. A project can belong to multiple users
    • Need someone to work on writing the scripts and executing them? You got it
    • Have a developer working on a bug that got reported with a link to testRTC? Sure thing
    • The IT guy who just received a downtime alarm from the WebRTC monitor we run? That’s another user
    • Each user has his own place in the project, and each is distinguished by his own credentials

testRTC project selection

If you require multiple projects or want to add more users to your account, just contact our support.

#2 – Longer, bigger tests

While theoretically testRTC can run a test of any length and size, things aren't always that easy.

There are usually two limitations to these requirements:

  1. The time they take to prepare, execute, run and collect results
  2. The time it takes to analyze the results

We worked hard in this release on both elements and got to a point where we’re quite happy with the results.

If you need long tests, we can handle those. One of the main concerns with long tests is what to do if you made a mistake while configuring them. Now you can cancel such tests in the middle if necessary.

Canceling a test run

If you need to scale tests to a large number of browsers – we can do that too.

We are making sure we bubble up the essentials from the browsers, so you don’t have to work hard and rummage through hundreds of browser logs to find out what went wrong. To that end, the tables that show browser results have been reworked and are now sorted in a way that will show failures first.

#3 – Advanced WebRTC analysis

We’ve noticed in the past few months that some of our customers are rather hard core. They are technology savvy and know their way in WebRTC. For them, the graphs we offer of bitrates, latencies, packet losses, … – are just not enough.

Chrome's webrtc-internals and getStats() offer a wealth of additional information that, until now, we offered only as a JSON file download. Well… now we also visualize it upon request, right from the report itself:

Advanced WebRTC graphs

These graphs are reachable by clicking the webrtc_internals_dump.txt link under the Logs tab of a test result. Or by clicking the Advanced WebRTC Analytics button located just below the channels list:

Access advanced WebRTC graphs

I’d like to thank Fippo for the work he did (webrtc-dump-importer) – we adopted it for this feature.

#4 – Simulation of call drops and dynamic network changes

This is something we've been asked for more than once. We have the capability of modeling the network of our probes, so that the browser runs with a specific firewall configuration or via a specific type of simulated network. We modify and tweak the profiles we have for these from time to time, but now we've added a script command so that you can change this configuration at runtime.

What can you do with it? Run two minutes of a test with 2 Mbps, then close virtually everything for 20-30 seconds, then open up the network again – and see what happens. It is a way to test WebRTC in your application under dynamic network conditions – ones that may require ICE restarts.

Dynamically changing network profile in testRTC

In the test above, we dynamically changed the network profile in mid-call to starve WebRTC and see how it affects the test.

How do you use this new capability? Use our new command rtcSetNetworkProfile(). Read all about it in our knowledge base: rtcSetNetworkProfile()

#5 – Additional test expectations

We had the basics covered when it came to expectations. You could check the number and types of channels, validate that there are some bits flowing in there, and validate packet loss. And that's about it.

To this list of capabilities that existed in rtcSetTestExpectations() we’ve now added the ability to add expectations related to jitter, video resolutions, frame rate, and call setup time. We’ve also taken the time to handle expectations on empty channels a lot better.

There’s really nothing new here, besides an enhancement of what rtcSetTestExpectations() can do.

#6 – Additional information in Webhook responses

testRTC can notify your backend whenever a test or monitor run ends, with the status of that run – success or failure. This is done by configuring a webhook that is called at the end of the test run. We've had customers use it to collect the results into their own internal monitoring systems such as Splunk and Elasticsearch.

What we had on offer in the actual payload that was passed with the webhook was rather thin, and while we’re still trying to keep it simple, we did add the leading error in that response in cases of failure:

testRTC webhook test failure response

#7 – API enabled to all customers

Yes. We had APIs in the past, but somehow there was friction involved, with customers needing to ask for their API key in order to use the API in their continuous integration plans. It worked well, but the number of customers and prospects under evaluation asking for API keys has risen to a point where it was ridiculous to continue doing this manually – especially when our intent is for customers to use our APIs.

So we took this one step forward. From now on, every account has an API key by default. That API key is accessible from the account's dashboard when you log in, so there's no need to ask for it any longer.

testRTC API key

For those of you who have been using it – note that we’ve also reset your key to a new value.

Your turn

This has been quite a big release for us, and I am sure I've missed an enhancement or two (or more).

Now back to you. How would you want to test WebRTC in your product?


Executing a WebRTC test that scales

There’s a growing trend from the companies that come to testRTC in recent months, and it has to do with the focus of what they are looking for.

Most are less interested in how testRTC can be used for functional testing – things like scenario coverage, finding edge cases and automating tests for them. What people are interested in now, when they want to run a WebRTC test scenario, is how to scale it.

Customers typically approach stress in WebRTC tests along two slightly different vectors: they either focus on testing how their WebRTC service handles multiple sessions in parallel, or on testing how it handles an increasing number of users in a single session.

Let's review the meaning of each of these alternatives.

#1 – WebRTC test that scales to a large number of sessions

I decided to put things on a simple graph. The X axis denotes the number of sessions we’re going to focus on while the Y axis is all about the number of users in a single session.

In this case, where we want to test WebRTC for a large number of sessions, we will have this focus:

Scale a WebRTC test by the number of sessions

So we have a WebRTC service to test. It has a single user in a session (a contact center agent receiving calls from PSTN for example) or two users in a session (one person talking to another across browsers).

In such a case, vendors are usually concerned about stressing their servers – checking if they can fit their intended capacity.

When this is done, there are three different things that can be tested for scale:

  1. The signaling server
    • How well does it behave while increasing capacity? How is its connection to the database? Does it slow down as connections accumulate? Does it leak memory?
    • Usually, stress testing a signaling server is better done with other tools. Ones that have a lower cost per connection than testRTC and don’t really require a full browser per connection
    • That said, you may often also want to throw in a few “real” users using testRTC on top of a tool that loads your signaling connections separately – just to make sure nothing kills your service when media is added into the mix on top of the signaling
    • You also need to think about the third component below – how do you test your TURN server?
  2. The media server
    • These crop up in 1:1 tests when there's a need to record the session or to enforce a given route. I've seen many of these recently, mainly in the healthcare and education markets
    • For single users, this usually means that what we want to test is the gateway that connects the user to other networks, which will usually include a media server of sorts for media transcoding
    • In such a case, there’s no getting away from the fact that scale is in the low 10’s or 100’s of browsers and real ones are needed. It is also where we see a lot of interest in testRTC and its capabilities
  3. The TURN server
    • Anywhere between 5% and 20% of calls will end up being relayed via a TURN server – and there's nothing you can do about it
    • If you put up your own TURN servers – how confident are you in your setup and its ability to scale nicely as your service grows?
    • One way to find out is to place real browsers in front of your service, but doing so in a way that forces the browsers to negotiate via TURN. This can be achieved by changing the configuration of your client, filtering ICE candidates or doing SDP munging (see the snippet right after this list). A better way would be to enforce network rules on the machine running the browser and actually test your service in different network conditions
    • And yes. testRTC allows you to do just that
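
For reference, the client-side way of forcing relay mentioned in the list above usually boils down to something like this sketch (the TURN URL and credentials are placeholders):

// Forces the browser to consider only relayed (TURN) candidates
const pc = new RTCPeerConnection({
    iceServers: [{
        urls: 'turn:turn.example.com:443?transport=tcp',
        username: 'user',
        credential: 'secret'
    }],
    iceTransportPolicy: 'relay'
});

This makes even machines with direct connectivity push their media through the TURN server, so the load you generate actually lands on the servers you want to stress.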

#2 – WebRTC test that accommodates a large group of users in a single session

The other type of focus we see a lot from our customers is on answering the question “how many users can I cram into a single session without considerably degrading the quality?”

Scale a WebRTC test by the number of users per session

Many look to run such tests with around 10-20 concurrent browsers, in either MCU or SFU models (see this post on the differences between the multiparty WebRTC technologies).

What happens next is usually a single session where browsers are added one on top of the other to check for scale. Here, the main purpose of a test is validating the media server and not much else.

The scenario is rather simple:

  • Try 1:1. Record the results
  • Go for 4 users. Record the results
  • Expand to 10 users. Record the results
  • Rinse and repeat

Now go back to the recorded results and see if the media got degraded:

  • Was latency introduced?
  • Do we see more packet losses?
  • Do bitrates go down the more browsers we add?
  • Is the bitrate stable or fluctuating all over the chart?
  • Is the degradation linear or exponential?

These types of questions are indicators of problems in the WebRTC product's infrastructure (be it network connections, CPU, storage or software).

#3 – Test WebRTC at scale

And then you can try to accommodate both of these needs. And you should – scale the size of the sessions at the same time that you scale the number of sessions.

Scale a WebRTC test by the number of sessions and by the number of users in them

Here what we’re trying to do is everything at the same time.

We want to be able to place multiple users in the same session but spread our browsers across sessions.

How about running 100 browsers, split across 10 different sessions, where each session accommodates 10 browsers? This is where our customers head next, after they have tested their WebRTC multiparty service for single-session capacity.

Why is WebRTC test scaling so hard?

When you scale up WebRTC infrastructure testing, you end up needing lots of bandwidth and processing power. Remember that each user is a full browser (for why that is necessary, see here). Running 2 or 4 of these may be simple, but running 20 or more becomes quite a challenge:

  • You can no longer place them all in a single machine, so you need to start distributing them – across machines, across data centers
  • You need to take care of both downlink and uplink network speeds – this isn't easy to achieve at scale
  • You need to synchronize across your small army of browsers so they hit the server at roughly the right time for it all to work
  • Oh – and you need the WebRTC test environment to be stable, so that when issues occur, they will more often than not be due to an issue in the tested product and not in your test environment itself

testRTC, users and sessions

There are many ways to do multiple users in a single session:

  • All join the same URL or room, given the same level of access
  • A chair hosting a large conference, where control and access are asymmetric
  • A broadcaster and a large number of viewers
  • A few people in a discussion with a large number of viewers

Each of these scales differently and requires a slightly different treatment.

What we did at testRTC was introduce the notion of a #session into the mix. When you indicate #session, the test will automatically wrap itself around that notion – splitting the number of concurrent users you want into sessions of the size you state with #session.

Want to see it in action? Check out our latest tutorial videos on how to scale WebRTC tests in testRTC using the notion of a session:

Are you following the WebRTC deprecation path?

I just had to share this one. You know how people complain about WebRTC breaking their services, and it being unstable?

It is partially true. What these people don't tell you is that oftentimes they simply ignore all the warning signs that are out there. In many of the cases, the service breaks simply because it wasn't updated in time – and there was ample time for it to be updated.

This “WebRTC deprecation” of features and capabilities, as well as of other browser features, is a good thing – it is a way for the browser to get rid of excess junk (and vulnerabilities).

This week I worked with one of our customers, and bumped into this warning message that I just had to share:

WebRTC deprecation warning on Chrome

What you see above is a screenshot of the types of reports we put out.

One of the things we decided early on was to collect the browser console logs and analyze them. If there’s anything suspicious there – we just bubble it up for our users.

One of the classic warnings for services that are in their staging phase is that they have no favicon for their website, which you can see in the first warning. The thing that was new to me was the second warning in there:

The MediaStream ‘ended’ event is deprecated and will be removed in M54, around October 2016.

You know what? Knowing that enables the tester to file a bug, and the developer to complain (and curse the tester) and then fix the issue – hopefully before deprecation kicks in.

The interesting thing is that whenever a new release of Chrome comes out, the number of deprecation warnings rises in all the services we test, and after a while, these things get fixed and cleaned up.

So what’s the takeaway?

  1. When you build your release calendar and plan your patches, make sure to take into account the time needed to fix deprecation issues related to WebRTC – since it isn't yet a standardized RFC, expect browsers to modify their APIs between versions
  2. Make sure to look at your browser's console logs and clean them up. And while at it, why not automate this part as well and just use testRTC for the purpose? A do-it-yourself sketch is shown below
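
If you want to grab those console logs yourself as part of your own automation, a minimal sketch using selenium-webdriver's browser log API (Chrome only) could look like this:

const { Builder, logging } = require('selenium-webdriver');

async function findDeprecationWarnings(url) {
    const prefs = new logging.Preferences();
    prefs.setLevel(logging.Type.BROWSER, logging.Level.ALL);
    const driver = await new Builder()
        .forBrowser('chrome')
        .setLoggingPrefs(prefs)
        .build();
    try {
        await driver.get(url);
        const entries = await driver.manage().logs().get(logging.Type.BROWSER);
        // Bubble up anything that mentions deprecation so it can fail the build
        return entries
            .map((entry) => entry.message)
            .filter((message) => /deprecat/i.test(message));
    } finally {
        await driver.quit();
    }
}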