Methodically testing and optimizing WebRTC applications at Vowel

“testRTC is the de facto standard for providing reliable WebRTC testing functionality.”

Paul Fisher, CTO and Co-Founder at Vowel

Many vendors these days are trying to make meetings more efficient. Vowel is a video conferencing tool that actually makes meetings better. It enables users to plan, host, transcribe, search, and share their meetings. They do all of that right from inside the browser, making use of WebRTC.

Vowel has been using testRTC throughout 2020 and I thought it was a good time to talk with Paul Fisher, CTO and Co-Founder at Vowel. I wanted to understand from him how testRTC helps Vowel improve their product and its user experience.

Identifying bottlenecks and issues, scaling up for launch

One of the most important things in a video conferencing platform is the quality of the media. Before working with testRTC, Vowel lacked the visibility and the means to conduct systematic optimizations and improvements to their video platform. They got to know testRTC through a company advisor, whose first suggestion was to adopt it.

In the early days, Vowel used internal tools, but found that these carried a lot of overhead: they required much more work to run, manage and extract results from. Rolling their own was too time consuming and gave a lot less value.

Once Vowel adopted testRTC, things changed for the better. By setting up a set of initial regression tests that can be executed on demand and through continuous integration, Vowel was able to create a baseline of its implementation’s performance and quality. From there, they were able to figure out what required improvement and optimization, as well as understand whether a new release or modification caused an unwanted regression.

testRTC was instrumental in helping Vowel resolve multiple implementation issues: congestion control, optimizing resolution and bandwidth use, debugging simulcast, and understanding and optimizing latency, round trip time and jitter.

Vowel made huge strides in these areas by adopting testRTC. Prior to testRTC, Vowel had an ad-hoc approach, relying almost entirely on user feedback and metrics collected in Datadog and other tools. There was no methodical way of analyzing and pinpointing the source of issues.

With the adoption of testRTC, Vowel is now able to reproduce and diagnose issues, as well as validate that these issues have been resolved. Vowel created a suite of test scripts for these issues and for the scenarios they focus on, and now methodically runs these tests as regressions with each release.

“Using testRTC has had the most significant impact in improving the quality, stability and maintenance of our platform.”

This approach got them to catch regression bugs earlier on, before potentially rolling out breaking changes to production – practically preventing them from happening.

Reliance on open source

Vowel was built on top of an open-source media server, but significant improvements, customizations and additional features were required for their platform. All these changes had to be rigorously tested, to see how they would affect behavior, stability and scalability.

On top of that, when using open source media servers, there are still all the aspects and nuances of the infrastructure itself: the cloud platform, running across regions, how video layouts are handled, and so on.

One cannot just take an open source product or framework and expect it to work well without tweaking and tuning it.

Vowel made a number of significant modifications to lower-level media settings and behavior. testRTC was used to assess these changes — validating that there was a marked improvement across a range of scenarios, and ensuring that there were no unintentional, negative side effects or complications. Without the use of testRTC, it would be extremely difficult to run these validations — especially in a controlled, consistent, and replicable manner.

One approach is to roll out directly to production and try to figure out if a change made an improvement or not. The challenge there is that there is so much variability of testing in the wild that is unrelated to the changes made that it is easy to lose sight of the true effects of changes – big and small ones.

“A lot of the power of testRTC is that we can really isolate changes, create a clean room validation and make sure that there’s a net positive effect.”

testRTC enabled Vowel to establish a number of critical metrics and set goals across them. Vowel now runs these recurring tests automatically in regression and extracts these metrics to validate that they don’t “fail”.

On using testRTC

“testRTC is the de facto standard for providing reliable WebRTC testing functionality.”

testRTC is used today at Vowel by most of the engineering team.

Test results are shared across the teams, and data is exported into the internal company wiki. Vowel’s engineers constantly add new test scripts; new Scrum stories commonly include the creation or improvement of test scripts in testRTC, and every release includes running a battery of tests on testRTC.

For Vowel, testRTC is extremely fast and easy to use.

It is easy to automate and spin up tests on demand at the click of a button, no matter the scale needed.

The fact that testRTC uses Nightwatch, an open source browser automation framework, makes it powerful in its ability to create and customize practically any scenario.

The test results are well organized in ways that make it easy to understand the status of a test, pinpoint issues, and drill down into each layer and level as needed.

How Nexmo Integrated testRTC into their Test Automation for the Nexmo Voice API

Nexmo found in testRTC a solution to its end-to-end media testing challenges for the Nexmo Voice API product, which connects PSTN to WebRTC and vice versa.

Nexmo is one of the top CPaaS vendors out there providing cloud communication APIs to developers, enabling enterprises to add communication capabilities into their products and applications.

One of Nexmo’s capabilities involves connecting voice calls between regular phone numbers (PSTN) to browsers (using WebRTC) and vice versa. This capability is part of the Nexmo Voice API.

Testing @ Nexmo

Catering to so many customers with ongoing deployments to production means that Nexmo needs to take testing seriously. One of the things Nexmo did early on was introduce automated testing, using the pytest framework. Part of this automated testing includes a set of regression tests – a huge number of tests that provide very high test coverage. Regression tests get executed whenever the Nexmo team has a new version to release, but they can also be launched “on demand” by any engineer, or triggered by the Jenkins CI pipeline upon a merge to a particular branch.

At Nexmo, development teams are in charge of the quality of their code, so there is no separate QA team.

In many cases, launching these regression tests first creates a new environment, where the Nexmo infrastructure is launched dynamically on cloud servers. This enables developers to run multiple test sessions in parallel, each in front of their own sandboxed environment, running a different version of the service.

When WebRTC was added to Nexmo Voice API, there was a need to extend the testing environment to include support for browsers and for WebRTC technology.

On Selecting testRTC

“When it comes to debugging, when something has gone wrong, testRTC is the first place we’d go look. There’s a lot of information there”

Jamie Chapman, Voice API Engineer at Nexmo

Nexmo needed WebRTC end-to-end tests as part of their regression test suite for the Nexmo Voice API platform. These end-to-end tests revolve around two main scenarios:

  1. Dialing a call from PSTN and answering it inside a browser using WebRTC
  2. Calling a PSTN number directly from a browser using WebRTC

In both cases, their client side SDKs get loaded by a web page and tested as part of the scenario.

Nexmo ended up using testRTC as their tool of choice because it got the job done and it was possible to integrate it into their existing testing framework:

  • The Python script that defines and executes a test scenario used testRTC’s API to dynamically create a test and run it on the testRTC platform
  • Environment variables specific to the dynamically created test environment got injected into the test
  • testRTC’s test result was then returned to the Python script to be recorded as part of the test execution result

This approach allowed Nexmo to integrate testRTC right into their current testing environment and test scripts.
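
To illustrate that flow (Nexmo’s actual integration is a Python script inside their pytest suite), here is a minimal sketch of the same pattern, shown in JavaScript to match the other snippets on this page. The endpoint paths and field names are hypothetical placeholders, not testRTC’s documented API:

    // Hypothetical sketch of the integration pattern described above. The
    // endpoint paths and field names are placeholders - consult the testRTC
    // API documentation for the real ones. Requires Node 18+ (global fetch).
    const BASE = 'https://api.testrtc.com'; // placeholder base URL
    const KEY = process.env.TESTRTC_API_KEY;

    async function runWebRTCTest(testId, envVars) {
      // 1. dynamically create/launch a test run, injecting the variables
      //    that describe the sandboxed environment under test
      let res = await fetch(`${BASE}/tests/${testId}/run`, {
        method: 'POST',
        headers: { apikey: KEY, 'Content-Type': 'application/json' },
        body: JSON.stringify(envVars),
      });
      const { testRunId } = await res.json();

      // 2. poll until the run completes, then hand the result back to the
      //    calling test framework to record as part of the execution result
      for (;;) {
        res = await fetch(`${BASE}/testruns/${testRunId}`, { headers: { apikey: KEY } });
        const run = await res.json();
        if (run.status !== 'running') return run;
        await new Promise((resolve) => setTimeout(resolve, 10000));
      }
    }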

Catering for Teams

The Voice API engineering team is a large one. All of these users have access to testRTC: they are able to launch regression tests that end up running testRTC scripts, as well as use the testRTC dashboard to debug issues that are found.

The ability to have multiple users, each with their own credentials, running tests on demand when needed increased productivity without coordination issues across team members. The test results themselves get hosted on a single repository, accessible to the whole team, so all developers can easily share faulty test results with the team.

Debugging WebRTC Issues

Nexmo got regression testing for WebRTC off the ground by using testRTC, integrating with the testRTC APIs to schedule and launch tests on demand from Nexmo’s own test environment. The tests today are geared towards providing end-to-end validation of media and connectivity between the PSTN network and WebRTC, validation that testRTC takes care of by default.

When things break, developers check the results collected by testRTC. As Jamie Chapman, Voice API engineer at Nexmo said: “When it comes to debugging, when something has gone wrong, testRTC is the first place we’d go look. There’s a lot of information there”.

testRTC takes screenshots during the test run, as well as upon failure. It collects browser logs and webrtc-internals dump files, visualizing it all and making it available for debugging purposes. This makes testRTC a valuable tool in the development process at Nexmo.

On the Horizon

Nexmo is currently making use of the basic scripting capabilities of testRTC. It has invested in the API integration, but there is more that can be done.

Nexmo are planning to increase their use of testRTC in several ways in the near future:

Using testRTC for WebRTC-PSTN testing and monitoring

When we started out a couple of years ago, we began receiving requests from contact center vendors to support scenarios that involve both WebRTC and PSTN.

Most of these were customers calling from a regular phone to an agent sitting in front of his browser and accepting the call using WebRTC. Or the opposite – contact center agents dialing out from their browser towards a regular phone.

That being the case, we thought it was high time we took care of that and gave a better, more thorough explanation of how to get it done. So we partnered with Twilio on this one, took their contact center reference application from GitHub, and wrote the test scripts in testRTC to automate it.

Along the way, we made use of Twilio to accept and dial out calls, dabbled with AWS Lambda, and more.

It was a fun project, and Twilio were kind enough to share our story on their own blog.

If you are trying to test or monitor your contact center, and you need to handle scenarios that require PSTN automation mixed with WebRTC, then this is mandatory reading for you:

Automate Your Twilio Contact Center Testing with testRTC

And if you need help in getting that done, just ping us.


Advanced Testing: Manipulating getUserMedia and Available Devices

Philipp Hancke is not new here on our blog. He assisted us when we wrote the series on webrtc-internals, and he is not squeamish about writing his own testing environment and sharing the love. This time, he wanted to share a piece of code that takes device availability test automation in WebRTC to a new level.

Obviously… we said yes.

We don’t have that implemented in testRTC yet, but if you are interested – just give us a shout out and we’ll prioritize it.

Both Chrome and Firefox have quite powerful mechanisms for automating getUserMedia with fake devices and skipping the permission prompt.

In Chrome this is controlled by the use-fake-device-for-media-stream and use-fake-ui-for-media-stream command line flags, while Firefox offers a preference, media.navigator.streams.fake. See the webdriver.js helper in this repository for the gory details of how to use this with selenium.
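
For reference, here is roughly what that wiring looks like with selenium-webdriver in JavaScript; a condensed sketch of what the webdriver.js helper does (the Firefox permission preference is an assumption on top of the flags named above):

    // Condensed sketch: start a browser with fake media devices and no
    // permission prompt, along the lines of the webdriver.js helper.
    const webdriver = require('selenium-webdriver');
    const chrome = require('selenium-webdriver/chrome');
    const firefox = require('selenium-webdriver/firefox');

    function buildDriver(browser) {
      if (browser === 'chrome') {
        const options = new chrome.Options()
            .addArguments('use-fake-device-for-media-stream') // fake camera + mic
            .addArguments('use-fake-ui-for-media-stream');    // skip the permission prompt
        return new webdriver.Builder()
            .forBrowser('chrome').setChromeOptions(options).build();
      }
      const options = new firefox.Options()
          .setPreference('media.navigator.streams.fake', true)         // fake devices
          .setPreference('media.navigator.permission.disabled', true); // skip the prompt
      return new webdriver.Builder()
          .forBrowser('firefox').setFirefoxOptions(options).build();
    }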

However, there are some scenarios which are not testable this way:

  • getUserMedia returning an error
  • restricting the list of available devices

While most of these are typically handled by unit tests, sometimes it is nice to test the complete user experience for a couple of use-cases:

  • test the behaviour of a client with only a microphone
  • test the behaviour of a client with only a camera
  • test the behaviour of a client with neither camera nor microphone
  • combine those tests with screen sharing, which in some cases replaces the video track on appear.in
  • test audio-only clients interoperating with audio-video ones. The test matrix becomes pretty big at some point.

Those tests are particularly important because as developers we tend to do some manual testing on our own machines which tend to be equipped with both devices. Automated tests running on a continuous integration server help a lot to prevent regressions.

Manipulating APIs with an extension

In order to manipulate both APIs, I wrote a Chrome extension (which magically works in Firefox and Edge because both support WebExtensions) that makes them controllable.

An extension can inject JavaScript into the page on page load as a content script. This has been used in the webrtc-externals extension described on webrtchacks to wrap the whole RTCPeerConnection API.

In our case, the content script replaces the getUserMedia and enumerateDevices functions with wrappers that can be modified at runtime. For example, the enumerateDevices wrapper calls the original function and then uses JavaScript to modify the result before returning it to the caller:

    // Keep a reference to the original function before overriding it.
    var origEnumerateDevices =
        navigator.mediaDevices.enumerateDevices.bind(navigator.mediaDevices);
    navigator.mediaDevices.enumerateDevices = function() {
        return origEnumerateDevices()
            .then((devices) => {
                // Drop video and/or audio inputs when the flags are set.
                if (sessionStorage.__filterVideoDevices) {
                    devices = devices.filter((device) => device.kind !== 'videoinput');
                }
                if (sessionStorage.__filterAudioDevices) {
                    devices = devices.filter((device) => device.kind !== 'audioinput');
                }
                // Without a granted permission the browser returns devices
                // with empty labels, so strip the labels in those cases too.
                if (sessionStorage.__filterDeviceLabels
                    || sessionStorage.__getUserMediaAudioError === "NotAllowedError"
                    || sessionStorage.__getUserMediaVideoError === "NotAllowedError") {
                    devices = devices.map((device) => {
                        var deviceWithoutLabel = {
                            deviceId: device.deviceId,
                            kind: device.kind,
                            label: '',
                            groupId: device.groupId,
                        };
                        return deviceWithoutLabel;
                    });
                }
                return devices;
            });
    };

The full extension can be found on GitHub. The behaviour is dynamic and can be controlled via sessionStorage flags. With Selenium, one would typically navigate to a page on the same domain, execute a small script to set the sessionStorage flags as desired, and then navigate to the page that is to be tested.
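
With selenium-webdriver, that dance looks roughly like this (the flag names come from the extension; the URLs are placeholders and must live on the same origin):

    // Rough sketch: prime the extension's sessionStorage flags, then load
    // the page under test.
    async function openWithoutCamera(driver) {
      await driver.get('https://example.com/blank');      // any page on the domain
      await driver.executeScript(
          'sessionStorage.__filterVideoDevices = true;'); // hide video inputs
      await driver.get('https://example.com/app');        // the page being tested
    }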

We will walk through two examples now:

Use-case: Have getUserMedia return an error and change it at runtime

Let’s say we want to test the case where a user has denied the permission. For appear.in, this leads to a dialog that attempts to guide the user through the browser UX for changing that.

The full test can be found here. Like most selenium tests, it consists of a series of simple and straightforward steps:

  • build a selenium webdriver instance that allows permissions and loads the extension
  • go to the appear.in homepage
  • set the sessionStorage flag that makes getUserMedia fail with a NotAllowedError (i.e. the user has denied permission) as well as an appear.in specific localStorage property that says the visitor is returning — this ensures we go into the flow we want to test and not into the “getUserMedia primer” that is shown to first-time users.
  • join an appear.in room by loading the URL directly.
  • the next step would typically be asserting the presence of certain DOM elements guiding the user to change the denied permission. This is omitted here, as those elements change rather frequently, and is replaced with a three-second sleep which allows for visual inspection
  • the sessionStorage flag is then deleted at runtime (see the sketch right after this list)
  • this eventually leads to the user entering the room and video showing up. We do some magic here in order to avoid having to ask the user to refresh the page.
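
The flag deletion in that step is just another executeScript call from the same selenium driver shown above; the appear.in-specific magic that then retriggers getUserMedia without a page refresh is omitted here:

    // Clear the simulated "permission denied" error at runtime; the next
    // getUserMedia call made by the page will then succeed.
    await driver.executeScript(
        'delete sessionStorage.__getUserMediaVideoError;' +
        'delete sessionStorage.__getUserMediaAudioError;');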

Watch a video of this test running below:


Incidentally, that dialog had an “enter anyway” button which, due to the lack of testing, was not visible for quite some time without anyone noticing, because the visual regression tests could not reach this stage. Now that is possible.

Restricting the list of available devices

The fake devices in both Chrome and Firefox return a stream with exactly those properties that you ask for and they always succeed (in Chrome there is a way to make them always fail too). In the real world you need to deal with users who don’t have a microphone or a camera attached to their machine. A call to getUserMedia would fail with a NotFoundError (note the recent change in Chrome 64 or simply use adapter.js and write spec-compliant code today).

The common way to avoid this is to figure out what is available by enumerating the list of devices with enumerateDevices. Try it by pasting this into the JavaScript console:

navigator.mediaDevices.enumerateDevices().then(devices => {
  const hasMicrophone = devices.some(device => device.kind === 'audioinput');
  console.log('has microphone', hasMicrophone);
});


When you run this together with the fake device flag, you’ll notice that it provides two fake microphones and one fake camera device.

When the extension is loaded (which for manual testing can be done on chrome://extensions; see above for the selenium ways to do it) one can manipulate that list:

sessionStorage.__filterAudioDevices = true;

Paste the enumerateDevices snippet into the console again and the audio devices no longer show up.

At appear.in we used this to replace a couple of audio-only and video-only tests that used feature flags in the application code with more realistic behaviour. The extension allows a much cleaner separation between the frontend logic and the test logic.

Summary

Using a tiny web extension we could easily extend the already powerful WebRTC testing capabilities of the browsers and cover more advanced test scenarios. Using this approach it would even be possible to simulate events like the user unplugging the microphone during the call.
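
For example, an unplugged microphone could be approximated by combining the extension’s filter flag with a synthetic devicechange event (a sketch, not something the extension does out of the box):

    // Hide the microphone from enumerateDevices, then tell the application
    // to re-enumerate, approximating a mic being unplugged mid-call.
    sessionStorage.__filterAudioDevices = true;
    navigator.mediaDevices.dispatchEvent(new Event('devicechange'));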

Automated WebRTC Testing using testRTC

Yesterday, we hosted a webinar on testRTC. This time, we were really focused on showing some live demos of our service.

I wanted this one to be useful, so I sat down earlier this week, working on a general story outline with the idea of showing live how you can write a test script from scratch, building more and more capabilities and functionality into it as I went along.

It was real fun.

If you missed it, I’d like to invite you to watch the replay:

watch @ crowdcast

For the purpose of this webinar, I took Jitsi Meet (https://meet.jit.si/) and created the following scripts for it:

  1. Simple one-on-one test (a minimal sketch of this kind of script appears right after this list)
    • Then I cleaned it up a bit, removing nagging warnings
    • And added a few basic expectations
  2. 4-way video test
    • For this one I’ve added some synchronization across the probes, and made sure Jitsi is the one generating the random rooms
    • I changed the script to be aware of sessions (parallel meeting rooms in the same test)
    • Then I played with the test, reconfiguring it to run 40 probes, 8 in each meeting room
  3. One-on-one test with network limits
    • Switched back to a 1:1 session, this time with the flexibility we achieved in (2)
    • Increased the test length to 3 minutes
    • Injected 5% packet loss to the test in the second minute of the test
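
To give a flavor of what such scripts look like, here is a minimal sketch along the lines of script (1). The environment variables and commands follow testRTC conventions, but treat the details as assumptions; the real scripts are linked below:

    // Minimal sketch of a one-on-one Jitsi Meet test script in testRTC
    // (Nightwatch syntax). Details are assumptions; see the linked scripts.
    var roomUrl = process.env.RTC_SERVICE_URL + '/testRoom' + process.env.RTC_SESSION_IDX;

    client
        .rtcProgress('Joining ' + roomUrl)
        .url(roomUrl)                         // both probes join the same room
        .waitForElementVisible('body', 10000)
        .pause(60 * 1000)                     // keep the call up while media flows
        .rtcScreenshot('in call')
        .rtcProgress('Done');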

I also went over some of the results from the Kurento post we published yesterday, and walked through the screen sharing script we recently wrote about, which uses appear.in as an example.

One of the things I was asked is to share the scripts used throughout the session.

So I cleaned up the scripts a bit and placed them on our Google Drive. I am sharing them here in two forms:

  1. The GDoc file of the script – open it to read, copy+paste it to wherever
  2. The JSON file of the script – you can import this one directly into your testRTC account (you’ll need to reconfigure the probe profiles before you run it):

Here they are:

  1. Simple one-on-one test: GDoc / JSON
  2. 4-way video test: GDoc / JSON
  3. One-on-one test with network limits: GDoc / JSON

We’re here for any questions you may have.

Check out the enhancements we’ve made to testRTC

It has been a while since we released a version, so it is with great pleasure that I am writing this announcement.

Yes. Our latest release is now out in the wild. We’ve upgraded our service on Sunday, so it is about time we take you for a quick roundup of the changes we’ve made.

#1 – Support for projects and users

This one is long overdue. Up until today, if you signed up for testRTC, you had to share your credentials with everyone on your team who worked on the tests. This was impossible to work with if you wanted QA, R&D and DevOps to share the account and work cooperatively with the tests and monitors that got logged inside testRTC.

So we did what we should have – we now support two modes of operation:

  1. A user can be linked to multiple projects
    • So if your company is running multiple projects, you can now run them separately, having people focused on their own environment and tests
    • This is great for those who run segregated services for their own customers
    • It also means that now, a user can switch between projects with a single set of credentials in the system
  2. A project can belong to multiple users
    • Need someone to work on writing the scripts and executing them? You got it
    • Have a developer working on a bug that got reported with a link to testRTC? Sure thing
    • The IT guy who just received a downtime alarm from the WebRTC monitor we run? That’s another user
    • Each user has his own place in the project, and each is distinguished by his own credentials

testRTC project selection

If you require multiple projects, or want to add more users to your account, just contact our support.

#2 – Longer, bigger tests

While theoretically testRTC can run any test of any length and size, things aren’t always that easy.

There are usually two limitations to these requirements:

  1. The time they take to prepare, execute, run and collect results
  2. The time it takes to analyze the results

We worked hard in this release on both elements and got to a point where we’re quite happy with the results.

If you need long tests, we can handle those. One of the main concerns with long tests is what to do if you made a mistake while configuring them? Now you can cancel such tests in the middle if necessary.

Canceling a test run

If you need to scale tests to a large number of browsers – we can do that too.

We are making sure we bubble up the essentials from the browsers, so you don’t have to work hard and rummage through hundreds of browser logs to find out what went wrong. To that end, the tables that show browser results have been reworked and are now sorted in a way that will show failures first.

#3 – Advanced WebRTC analysis

We’ve noticed in the past few months that some of our customers are rather hardcore. They are technology savvy and know their way around WebRTC. For them, the graphs we offer of bitrates, latencies, packet losses and the like are just not enough.

Chrome’s webrtc-internals and getstats() offer a wealth of additional information that we offered up until now only in a JSON file download. Well… now we also visualize it upon request right from the report itself:

Advanced WebRTC graphs

These graphs are reachable by clicking the webrtc_internals_dump.txt link under the Logs tab of a test result. Or by clicking the Advanced WebRTC Analytics button located just below the channels list:

Access advanced WebRTC graphs

I’d like to thank Fippo for the work he did (webrtc-dump-importer) – we adopted it for this feature.

#4 – Simulation of call drops and dynamic network changes

This is something we’ve been asked more than once. We have the capability of modeling the network of our probes, so that the browser runs with a specific configuration of a firewall or via a specific type of simulated network. We’re modifying and tweaking the profiles we have for these from time to time, but now we’ve added a script command so that you can change this configuration at runtime.

What can you do with it? Run two minutes of a test with 2 Mbps, then close virtually everything for 20-30 seconds, then open up the network again – and see what happens. It is a way to test WebRTC in your application in dynamic network conditions – ones that may require ICE restarts.

Dynamically changing network profile in testRTC

In the test above, we dynamically changed the network profile in mid-call to starve WebRTC and see how it affects the test.

How do you use this new capability? Use our new command rtcSetNetworkProfile(). Read all about it in our knowledge base: rtcSetNetworkProfile()
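
In script form, the scenario above could look something like this (the argument format for rtcSetNetworkProfile() here is an assumption; the knowledge base entry has the authoritative syntax):

    // Hedged sketch of a dynamic network change mid-test; treat the
    // rtcSetNetworkProfile() arguments as assumptions.
    client
        .pause(120 * 1000)                                  // two minutes of normal media
        .rtcSetNetworkProfile('custom', 'bandwidth', 30000) // starve the connection
        .pause(25 * 1000)                                   // 20-30 seconds of starvation
        .rtcSetNetworkProfile('')                           // remove the limitation
        .pause(120 * 1000);                                 // watch the recovery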

#5 – Additional test expectations

We had the basics covered when it came to expectations. You could check the number and types of channels, validate that there are some bits flowing in there, and validate packet loss. And that’s about it.

To this list of capabilities that existed in rtcSetTestExpectations() we’ve now added the ability to add expectations related to jitter, video resolutions, frame rate, and call setup time. We’ve also taken the time to handle expectations on empty channels a lot better.

There’s really nothing new here, besides an enhancement of what rtcSetTestExpectations() can do.
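
As a rough illustration, the new expectation types could be expressed along these lines (the metric names here are assumptions; the rtcSetTestExpectations() documentation lists the real syntax):

    // Illustrative only - metric names and thresholds are assumptions.
    client
        .rtcSetTestExpectations('audio.in.jitter < 30')   // ms
        .rtcSetTestExpectations('video.in.fps >= 25')
        .rtcSetTestExpectations('call.setupTime < 5000'); // ms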

#6 – Additional information in Webhook responses

testRTC can notify your backend of the status of a run – success or failure – whenever a test or a monitor run ends. This is done by configuring a webhook that is called at the end of the test run. We’ve had customers use it to collect the results into their own internal monitoring systems such as Splunk and Elasticsearch.

What we had on offer in the actual payload that was passed with the webhook was rather thin, and while we’re still trying to keep it simple, we did add the leading error in that response in cases of failure:

testRTC webhook test failure response

#7 – API enabled to all customers

Yes. We had APIs in the past, but somehow there was friction involved, with customers needing to ask for their API key in order to use the API for their continuous integration plans. It worked well, but the number of people asking for API keys – both customers and prospects under evaluation – rose to a point where it was ridiculous to continue doing this manually. Especially when our intent is for customers to use our APIs.

So we took this one step forward. From now on, every account has an API key by default. That API key is accessible from the account’s dashboard when you log in, so there’s no need to ask for it any longer.

testRTC API key

For those of you who have been using it – note that we’ve also reset your key to a new value.

Your turn

This has been quite a big release for us, and I am sure I’ve missed an enhancement or two (or more).

Now back to you. How would you want to test WebRTC in your product?

WebRTC: To Mechanical Turk or NOT to Mechanical Turk

I’ve seen this a few times already. People look at an automated process – only to replace it with a human one. For some reason, there’s a belief that humans are better, even at grinding through the same thing over and over and over again.

They’re not. And there’s a place for both humans and machines in WebRTC product testing.

WebRTC, Mechanical Turk and the lack of consistency

The Amazon Mechanical Turk is a great example. You can easily take a task, split it between many people, and have them do it for you. Say you have a list of a million songs and you wish to categorize them by genre. You can get 10,000 people on Amazon Mechanical Turk to do 100 lines each from that list and you’re done. Heck, you can have each do 300 lines, so that each line gets classified 3 times, and for each line take the most common genre chosen by the people who classified it.

Which brings us to the problem. Humans are finicky creatures. Two people don’t have the same worldview, and will assign different genres to the same song. Even worse, the same person will give a different genre to the same song if enough time passes (enough time can be a couple of minutes). Which is why we decided to show the same song to 3 people to begin with – so we get some conformity in the genre we end up with.

Which brings us to testing WebRTC products, and how we should approach it.

Here’s a quick example I gleaned from the great discuss-webrtc mailing list:

discuss-webrtc bug report

There’s nothing wrong with this question. It is a valid one, but I am not sure there’s enough information to work off this one:

  • What “regardless of the amount of bandwidth” is exactly?
  • Was this sent over the network or only done locally?
  • What resolution and frame rate are we talking about?
  • Might there be some packet loss causing it?
  • How easy is it to reproduce?

I used to manage the development of VoIP products. One thing we were always challenged by is the amount of information provided by the testing team in their bug reports. Sometimes, there wasn’t enough information to understand what was done. Other times, we had so many unnecessary logs that you either didn’t find what was needed or felt for the poor tester who spent so much time collecting this stuff together for you with no real need.

The Tester/Developer grind cycle

Then there’s that grind:

Test-Dev grind cycle

We’ve all been there. A tester finds what he believes is a bug. He files it in the bug tracking system. The developer can’t reproduce the bug, or needs more information, so the cycle starts. Once the developer fixes something, the tester needs to check that fix. And then another cycle starts.

The problem with these cycles is that the tester who runs the scenario (and the developer who does the same) are humans. Which makes it hard for repeated runs of the same scenario to end up the same.

When it comes to WebRTC, this is doubly so. There are just too many aspects that affect how the test scenario plays out:

  • The human tester
  • The machine used during the test
  • Other processes running on said machine
  • Other browser tabs being used
  • How the network behaves during the test

It is not that you don’t want to test in these conditions – it is that you want to be able to repeat them so you can fix what you find.

My suggestion? Mix and match

Take a few cases that go through the fundamental flows of your service. Automate that part of your testing. Don’t use some WebRTC Mechanical Turk in places where it brings you more grief than value.

Augment it with human testers. Ones that will be in charge of giving the final verdict on the automated tests AND run around with their own scenarios on your system.

It will give you the best of both worlds, and with time, you will be able to automate more use cases – covering regression, stress testing, etc.

I like to think of testRTC as the Test Engineer’s best companion – we’re not here to replace him – just to make him smarter and better at his job.

Introducing: Our Brand New Dashboard

We’ve been working hard these past two months, ever since we got our previous release out the door. This time, we invested a lot of time and thought on the small items. And one big item as well.

All over the service, you’ll notice some slight changes to the UI. This is an ongoing process to fine-tune the service and make it simpler to use for our customers.

The biggest visible addition to our latest release is the introduction of a new user dashboard.

From now on, when a user logs in, he gets a bird’s eye view of his activities in testRTC:


testRTC dashboard

What can you see on the dashboard?

Usage


This area of the dashboard highlights the usage done in the account.

It allows you to understand what resources are available to you, so if you want to run a stress test, you will be able to use enough browsers.

If you want to do ad-hoc testing with more browsers than are available in your account, you’ll need to holler at us and we’ll enable more browsers on your account for a period of time.

Stats


This area shows the statistics of your use over a span of time. It is quite useful for managers to understand how many tests were conducted and know how they fared.

  • In red, we indicate tests and monitor executions that failed for the period selected
  • In green, we indicate tests and monitor executions that succeeded for the period selected
  • In blue, we indicate the total number of tests and monitor executions for the period selected

And you can select a different period to look at.

Active Monitors


This area indicates what monitors are up and running at the moment, along with the status of the most recent execution.

If you click on any of the rows, it will get you to the monitor run results, filtered for that specific monitor.

Recent Tests


This area shows the last 5 tests that got executed, along with their results.

As with the active monitors, clicking on the test gets you to the results themselves.

News and Announcements


This area shows some news and announcements we have for our users.

What’s Next?

Consider the dashboard a work in progress. We’re sure there’s much to be improved here. We wanted to get this out the door and into the hands of our users. Ping us if you have any suggestions on how to improve it.

If you need to test or monitor a WebRTC product – don’t be shy – sign up for testRTC.


Why are we Using Real Browsers to Test WebRTC Services?

The most important decision we made was one that took place before testRTC became a company. It was the decision to use a web browser as the agent/probe for our service instead of building something on top of WebRTC directly or, god forbid, GStreamer.

Here are a few things we can do because we use real browsers in WebRTC testing:

#1 – Time to Market

Chrome just released version 49.

How long will it take for you to test its behavior against your WebRTC service if you are simulating traffic instead of using this browser directly?

For us, the moment a browser gets released is almost the moment we can enable it for our customers.

To top it off, we give our customers access to upcoming browser versions as well – beta and unstable. This helps those who need to check their service and gain confidence in it when running against future versions of browsers.

Even large players in the WebRTC industry can be hit by browser updates – TokBox was, some time ago – so being able to test and validate such issues early on is imperative.

#2 – Pace of Change

VP9? H.264? ORTC APIs? Deprecation of previous APIs? Replacement of the echo canceler? Addition of local recording APIs? Media forwarding?

Every day in WebRTC brings with it yet another change.

Browsers get updated in 6-8 week cycles, and the browser vendors aren’t shy about removing features or adding new ones.

Maintaining such short release cycles is hellishly tough. For an established vendor (testing or otherwise), it is close to impossible – they are used to 6-12 month release cycles at best. For startups, it is just too much of a hassle to run at these speeds – at that stage you want to leverage others and focus on the things you need to do.

So if the browser is there, it gets frequently updated, and it is how the end users end up running the service, why can’t we use it ourselves to leverage both automated and manual testing?

It was stupidly easy for me to test VP9 with testRTC even before it was officially released in the browser. All I had to do was pick the unstable version of the browser testRTC supports and… run the test script we already had.

The same is true for all other changes browsers make in WebRTC or elsewhere – they become available to us and our customers immediately. And in most cases, with no development at all on our part.

#3 – Closest to Reality

You decided to use someone who simulates traffic and follows the WebRTC spec for your testing.

Great.

But does it act like a browser?

Chrome and Firefox act differently through the API calls and look different on the wire. Hell – the same browser in two different versions acts differently.

Then why the hell use a third party who read the WebRTC spec and interpreted it slightly differently than the browser used at the end of the day? Count the days. With each passing day, that third party is probably getting farther away from the browsers your customers are using (until someone takes the time and invests in updating it).

#4 – Signaling Protocols

When we started out on this adventure with testRTC, we needed to decide what signaling to put on top of WebRTC.

Should it be SIP over WebSocket? Covering the traditional VoIP market.

Maybe we should go for XMPP. Over BOSH. Or Comet. Or WebSocket. Or not at all.

Should we add an API on top that the customer integrates with in order to simulate the traffic and connect to his own signaling?

All these alternatives had serious limitations:

  • Picking a specific signaling protocol would have limited our market drastically
  • Introducing an integration API for any signaling meant longer customer acquisition cycles and reducing our target market (yet again)

A browser on the other hand… that meant that whatever the customer decided to do – we immediately support. The browser is going to drive the interaction anyway. Which is why we ended up using browsers as the main focus of our WebRTC testing and monitoring service.

#5- Functional Testing and Business Processes

WebRTC isn’t tested in a vacuum. When you used to use VoIP, things were relatively easy. You have the phone system. It is a service. You know what it does and how it works. You can test it and any of its devices and building blocks – it is all standardized anyway.

WebRTC isn’t like that. It made VoIP into a feature. You have a dating site. In that site people interact in multiple ways. They may also be doing voice and video calls. But how they reach out to each other, and what business processes there are along the way – all these aren’t related to VoIP at all.

Having a browser meant we can add these types of tests to our service. And we have customers who check the logic of their site and backend while also checking the media quality and the WebRTC traffic. It means there’s more testing you can do and more functionality of your own service you can cover with a single tool.

Thinking of Testing Your WebRTC Service?

Make sure a considerable part of the testing you do happens with the help of browsers.

Simulators and traffic generators are nice, but they just don’t cut it for this tech.