
WebRTC Test Automation and where it fits in your roadmap

I see mixed signals about the popularity and acceptance of test automation. This is doubly so when testing WebRTC.

Time to consider some serious WebRTC test automation.

In favor of automation

A tester automated his job for 6 years – most probably a hoax, but one that rings partially true. The moral of the story is simple – if you invest time in automating rudimentary tasks – you get your ROI back tenfold in the future.

That’s… about it.

We have customers who use us to automate areas of their testing, but not many. At least not as many as I’d expect there to be – WebRTC being new and all, and greenfield projects being exactly the place to adopt best practices and shed the bad habits of the past.

Against automation

Why is Manual QA Still So Prevalent? – it seems that SauceLabs, who deal in general-purpose browser automation, are seeing the same thing: companies focusing on manual testing instead of moving to automation.

The best explanation I’ve heard? Companies can outsource the work to a cheap tester in a developing country, so it costs them less to do the same thing – just with humans.

For me, that’s taking Amazon’s Mechanical Turk a step too far. For a repetitive task that you’re going to run in each and every release (yours and the browser vendors’), do you really want different nameless faces (or even named ones) doing the same tasks over and over again?

Dog-fooding at testRTC

We’ve been around for almost 2 years now. So it is high time we start automating our own testing as well.

The first place where we will be automating our own testing is in making sure our test-related feature set works:

  • Our special script commands and variables
  • Running common test scenarios that our customers use in WebRTC

Now that we have test scripts that run these tests, we can automate them individually. The next step is to run them sequentially with a “click of a button” – or, more accurately, by executing a shell script. Which is where we’re taking this in our next release.
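Here is a sketch of what that could look like, as a small Node.js runner. The `testrtc run` command and the script names are hypothetical placeholders, not an actual interface we ship – substitute whatever mechanism triggers a test run in your environment:

```javascript
// run-suite.js – a sketch of a sequential runner for our internal test scripts.
// The "testrtc run" invocation below is a hypothetical placeholder.
const { execSync } = require('child_process');

const scripts = [
  'script-commands-sanity',   // covers our special script commands and variables
  'customer-scenarios-basic'  // covers common WebRTC scenarios our customers use
];

for (const name of scripts) {
  console.log(`Running ${name}...`);
  execSync(`testrtc run ${name}`, { stdio: 'inherit' }); // throws on a failed run
}
console.log('All test scripts passed.');
```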

The rest will stay manual for now. Mostly because in each version we change our UI based on the feedback we receive. One of our top priorities is to make our product stupidly simple – so that our customers can focus on their own product and need to learn as little as possible (or nothing at all) to use testRTC.

Why do our customers end up automating?

There are several huge benefits in automating at least parts of your testing. Here are the ones we see every day from the way our customers make use of WebRTC:

  • Doing the most basic sanity tests – answering the question “is it broken?” quickly and with no human intervention. This is usually coupled with continuous integration, where every night the latest build is run against it
  • Scale tests – when a service needs to grow, be it to 10 users in the same session, 40 people across 20 1:1 sessions or 100 viewers of a webinar – it becomes hard to manually test. So they end up writing a simple script in our platform and running it on demand when the time comes to stress test their product
  • Network configurations – taking a script and running it in various network conditions – with and without forcing TURN, packet losses, etc. Some also add different data center locations for the browsers and play with the browser versions used. The idea is to get testing to the edge cases where a user’s configuration is what comes back to bite you (see the sketch after this list)
  • Debugging performance – similar to scale tests, but slightly different. Some require the ability to check the capacity of a given machine in their product. Usually the media server. There’s much to be said about that, but being able to run a large scale test, analyze the performance report testRTC produces, and then rinse and repeat means it is easier to find the bottlenecks in the system and fix them prior to deployment
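To make the network-configuration case concrete, here’s the rough shape such a test script takes. rtcSetNetworkProfile() and rtcSetTestExpectation() are testRTC script commands, but the profile name, threshold and service URL variable below are illustrative – check our knowledge base for the exact values:

```javascript
// Sketch: run a call, then degrade the network mid-session and make sure
// the media holds up. Profile names and thresholds are illustrative.
client
  .rtcSetTestExpectation("audio.in.packetloss < 5")  // fail the test on heavy loss
  .url(process.env.RTC_SERVICE_URL)                  // the service URL configured for the test
  .waitForElementVisible('body', 10000)
  .pause(30000)                                      // 30 seconds on a clean network
  .rtcSetNetworkProfile('Regular 3G')                // now throttle the connection
  .pause(30000)                                      // 30 more seconds under stress
  .rtcSetNetworkProfile('');                         // back to an unthrottled network
```

Running the same script with more probes, from other data centers or with older browser versions then becomes a configuration change rather than a new test.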

When starting out with WebRTC, customers tend to give higher priority to other things. They all talk about scenarios and coverage of their test plans, but most don’t go there due to the high initial investment.

What we do see, and what effectively improves our customers’ products, is taking one scenario – usually a simple one – and writing it in a way that allows for scaling it up. Once a customer runs it for a few days, he sees areas he needs to improve in his product, and how that simple script can expand to cover more of his testing needs.

This is also why we try to be there with our customers every step of the way. From assisting in defining that test, to writing it and following through with analysis if need be.

Are you serious about your WebRTC product? Don’t waste any more time – come try us out.

Use our Monitor During Development to Flush Out Your Nastiest WebRTC Bugs

Things break. Unexpectedly. All the time.

One thing we noticed recently is that a customer or two of ours decided to use our monitoring service on their staging versions or during development – and not only on their production system. So I wanted to share this technique here.

First off, let me explain how the testRTC monitoring service works.

In testRTC you write scripts that instruct the browser what to do. This means going to a URL of your service, maybe clicking a few buttons, filling in a form, waiting to synchronize with other browsers – whatever is necessary to get to that coveted media interaction. You can then configure the script for the number of browsers, browser versions, locations, firewall configurations and network conditions. And then you can run the test whenever you want.
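A bare-bones script along those lines might look like this – testRTC scripts are Nightwatch-style JavaScript, but the element selectors and room name here are made up for illustration:

```javascript
// Illustrative only – replace the selectors with whatever your service uses.
client
  .url(process.env.RTC_SERVICE_URL)       // go to the URL of your service
  .waitForElementVisible('#join', 5000)   // wait for the page to load
  .setValue('#room', 'monitor-room-1')    // fill in the form
  .click('#join')                         // click the join button
  .pause(60000);                          // stay in the media session for a minute
```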

The monitoring part means taking a test script in testRTC and stating that this is an active monitor. You then indicate what frequency you wish to use (every hour, 15 minutes, etc.) and set any expected results and thresholds. The test will then run at the intervals specified and alert you if anything fails or the thresholds are exceeded.

Here’s one of the ways in which we test our own monitor – by running it against AppRTC:

Monitoring AppRTC using testRTC

What you see above is the run history – the archive. It shows past executions of the AppRTC monitor we configured and their result. You should already know how fond I am of using AppRTC as a baseline.

We’ve added this service as a way for customers to be able to monitor their production service – to make sure their system is up and running properly – as opposed to just knowing the CPU of their server is not being overworked. What we found out is that some decided to use it on their development platform and not only the production system.

Why?

Because it catches nasty bugs. Especially those that happen once in a lifetime. Or those that need the system to be working for some time before things start to break. This is quite powerful when the service being tested isn’t pure voice or video calling, but has additional moving parts. Things such as directory service, databases, integration with third party services or some extra business logic.

The neat thing about it all? When things do break and the monitor catches that, you get alerted – but more than that, you get the whole shebang of debugging information:

  • Our reports and visualization
  • The webrtc-internals dump file
  • The console log
  • Screenshots taken throughout the session
  • In our next release, we’re adding some more tools such as an automatic screenshot taken at the point of failure

Those long endurance tests QA people love running on systems for stretches of days or weeks? You can now set them up on your own systems, and get the most out of the post mortem logs and data collected when something breaks – data that will help you analyze, debug and fix the problem.

Come check us out – you won’t regret it.

How to make sure you have WebRTC incoming audio using testRTC

I hear this one far too many times:

How can I make sure there’s incoming audio? We had this issue a few weeks/months ago, where sessions got connected in our WebRTC service, but there was no incoming audio. Can testRTC alert me of such a situation?

Looking for your WebRTC incoming audio? testRTC can check that for you.

There are two things that our customers are looking for in such cases:

  1. Making sure that the exact channels they expected to open in the call actually opened (no more channels, and no fewer)
  2. Validating there’s media being sent on these channels at all

Some customers have even more specific expectations – around the packet loss or latency of these channels.

This is one of the reasons why we’ve added the test expectations capability to testRTC. What this means is that you can indicate inside your test script what you expect to happen – with a focus on media channels and media – and if these expectations aren’t met, the test will either fail or just have an added warning on it.

Here’s how it works. The code below is a simple test script for Google’s AppRTC that I’ve written for the occasion:
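It looks roughly like this – a sketch rather than the verbatim script, with the expectation strings written in the style testRTC uses (verify the exact syntax in our knowledge base):

```javascript
// The first four lines state what we expect; the rest just gets us into the call.
client
  .rtcSetTestExpectation("audio.in == 1")         // exactly one incoming audio channel...
  .rtcSetTestExpectation("video.in == 1")         // ...and one incoming video channel
  .rtcSetTestExpectation("audio.in.bitrate > 0")  // with actual media flowing on them
  .rtcSetTestExpectation("video.in.bitrate > 0")
  .url('https://appr.tc/r/testRTCexpectations')   // the predetermined AppRTC room
  .waitForElementVisible('body', 10000)
  .pause(30000);                                  // give the call 30 seconds to run
```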

It is nothing fancy. It just gets the browser into a predetermined URL (the testRTCexpectations room) and gives it 30 seconds to run.

We run this on two agents (= two browsers), which gets you a call. That call should have two incoming and two outgoing channels (one video and one voice in each direction).

To make sure this really happens, I added those first four lines with rtcSetTestExpectation() – they indicate that I want to fail the test if I don’t get these incoming channels with media on them.

Here’s what a successful test looks like:

WebRTC test expectations

Perfect scoring. No warnings or errors. All channels are listed nicely in the test results.

And here’s what happened when I ran this test with only a single testRTC browser – joining the same room from my own laptop with its camera disabled:

WebRTC test expectations

As you can see, the test failed and the reasons given were that there was no incoming video channel (or media on it).

You can see from the channels list at the bottom that there is no incoming video channel, along with all the other channels and the information about them.

What else can I do?

  • Use it on outgoing channels
  • Check other parameters such as packet loss
  • Place expectations on custom metrics and timers you use in the test (sketched below)
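Here are these three, sketched with the same expectation syntax – the packet loss threshold is arbitrary, and “setupTime” is a hypothetical custom metric standing in for whatever you collect in your own script:

```javascript
// Sketches only – same mechanism, different targets.
client
  .rtcSetTestExpectation("audio.out == 1")           // outgoing audio channel opened
  .rtcSetTestExpectation("audio.in.packetloss < 3")  // keep packet loss below 3%
  .rtcSetTestExpectation("setupTime < 5000");        // "setupTime": a hypothetical
                                                     // custom metric from your script
```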

You can learn more about this capability in our knowledge base on expectations.