I see mixed signals about the popularity and acceptance of test automation. This is doubly true when testing WebRTC.
Time to consider some serious WebRTC test automation.
In favor of automation
A tester automated his job for 6 years – most probably a hoax, but one that rings partially true. The moral of the story is simple: if you invest time in automating rudimentary tasks, you get your ROI back tenfold in the future.
That’s… about it.
We have customers who use us to automate areas of their testing, but not many. At least not as many as I’d expect there to be, given that WebRTC is new and gives us all a chance to look at best practices and change the bad ways and habits of the past when starting with greenfield projects.
Against automation
Why is Manual QA Still So Prevalent? It seems that SauceLabs, who deal in general purpose browser automation, are experiencing the same thing: companies focusing on manual testing instead of moving to automation.
The best explanation I’ve heard from someone? Companies can outsource the work to a cheap tester in a developing country, and then it costs them less to do the same thing, just with humans.
For me, that’s taking Amazon’s Mechanical Turk a step too far. For a repetitive task that you’re going to do in each and every release (yours and the browser vendors’), would you really have different nameless faces (or even named ones) do the same tasks over and over again?
Dog-fooding at testRTC
We’ve been around for almost 2 years now, so it is high time we started automating our own testing as well.
The first place where we will be automating our own testing is in making sure our test-related feature set works:
- Our special script commands and variables
- Running common test scenarios that our customers use in WebRTC
We already have test scripts that run these tests, so we can automate each of them individually. The next step is to run them sequentially with a “click of a button”, or more accurately, by executing a shell script. That’s where we’re taking this in our next release.
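To make the idea concrete, here is a minimal sketch of that kind of sequential runner in Node/TypeScript. The test commands are placeholders, not our actual tooling:

```typescript
// A minimal sketch of shell-script-level test automation:
// run each test script in order, stopping on the first failure.
import { execSync } from "node:child_process";

// Hypothetical commands - replace with however your tests are launched.
const tests = [
  "run-test script-commands",
  "run-test common-scenarios",
];

for (const cmd of tests) {
  console.log(`Running: ${cmd}`);
  // execSync throws on a non-zero exit code, so a failing test stops the run
  execSync(cmd, { stdio: "inherit" });
}
console.log("All test scripts passed");
```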
The rest will stay manual for now. Mostly because in each version we change our UI based on the feedback we receive. One of our top priorities is to make our product stupidly simple – so that our customers can focus on their own product and need to learn as little as possible (or nothing at all) to use testRTC.
Why our customers end up automating
There are several huge benefits in automating at least parts of your testing. Here are the ones we see every day from the way our customers make use of WebRTC:
- Doing the most basic sanity tests – answering the question “is it broken?” and getting an answer fast with no human intervention. This is usually coupled with continuous integration, where the latest build gets tested against this sanity suite every night
- Scale tests – when a service needs to grow, be it 10 users in the same session, 40 people across 20 1:1 sessions, or 100 viewers of a webinar, it becomes hard to test manually. So customers end up writing a simple script in our platform and running it on demand when the time comes to stress test their product
- Network configurations – taking a script and running it in various network conditions: with and without forcing TURN, with packet loss, and so on (see the sketch after this list). Some also add different data center locations for the browsers and play with the browser versions used. The idea is to push testing into the edge cases where a user’s configuration is what comes back to bite you
- Debugging performance – similar to scale tests, but slightly different. Some customers need to check the capacity of a given machine in their product, usually the media server. There’s much to be said about that, but being able to run a large scale test, analyze the performance report testRTC produces, and then rinse and repeat makes it easier to find the bottlenecks in the system and fix them prior to deployment
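On the network configuration point above: forcing TURN is something the browser’s standard WebRTC API supports directly. Here is a minimal sketch; the TURN server URL and credentials are placeholders:

```typescript
// Forcing relayed (TURN-only) candidates via the standard WebRTC API.
// With iceTransportPolicy set to "relay", host and server-reflexive ICE
// candidates are discarded, so the call succeeds only if TURN works.
const pc = new RTCPeerConnection({
  iceServers: [
    {
      urls: "turn:turn.example.com:3478", // placeholder TURN server
      username: "user",                    // placeholder credentials
      credential: "secret",
    },
  ],
  iceTransportPolicy: "relay",
});
```

Running the same script once with “relay” and once with the default policy quickly surfaces TURN problems that only some users’ networks would otherwise expose.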
When starting out with WebRTC, we’ve seen customers give higher priority to other things. They all talk about scenarios and coverage in their test plans, but most don’t go there due to the high initial investment.
What we do see, and what effectively improves our customers’ products, is taking one scenario, usually a simple one, and writing it in a way that allows for scaling it up. Once a customer runs it for a few days, he sees areas he needs to improve in his product, and how that simple script can expand to encompass more of his testing needs.
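As a hedged illustration of writing a scenario so it scales, here is a sketch where each browser probe derives its session and role from an index supplied by the test runner. The AGENT_INDEX variable is hypothetical, standing in for whatever your runner injects:

```typescript
// Each probe figures out its 1:1 session and role from its own index,
// so the same script scales from 2 browsers to 40 without changes.
// AGENT_INDEX is a hypothetical variable injected by the test runner.
const agentIndex = Number(process.env.AGENT_INDEX ?? "0");

const sessionId = Math.floor(agentIndex / 2); // probes 0,1 -> session 0; 2,3 -> session 1; ...
const role = agentIndex % 2 === 0 ? "caller" : "callee";

console.log(`Joining room-${sessionId} as ${role}`);
// From here the script would open the service URL for room-${sessionId}
// and either start the call (caller) or wait to answer it (callee).
```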
–
This is also why we try to be there with our customers every step of the way. From assisting in defining that test, to writing it and following through with analysis if need be.
Are you serious about your WebRTC product? Then don’t waste time. Try us out.