
WebRTC Application Monitoring: Do you Wipe or Wash?

UPDATE: A recording of this webinar can be found here.

If you are running an application then you are most probably monitoring it already.

You’ve got New Relic, Datadog, or some other cloud or on-premises monitoring setup handling your APM (Application Performance Management).

What does that mean exactly with WebRTC?

If we do the math, you’ve got the following servers to worry about:

  • STUN/TURN servers, deployed in one or more (probably more) data centers
  • Signaling server, at least one. Maybe more when you scale the service up
  • Web server, where you actually host your application and its HTML pages
  • Media servers, optionally, to handle recording or group calls (look at our Kurento sizing article for some examples)
  • Database, while you might not have this, most services do, so that’s another set of headaches
  • Load balancers, distributed in-memory data grids (think Redis), etc.

Lots and lots of servers in that backend of yours. I like to think of them as moving parts. Every additional server that you add. Every new type of server you introduce. It adds a moving part. Another system that can fail. Another system that needs to be maintained and monitored.

WebRTC is a very generous technology when it comes to the variety of servers it needs to run in production.

Assuming you’re doing application monitoring on these servers, you are collecting the machine-level characteristics: CPU use, bandwidth, memory, storage. For the various servers you can go further and collect application-specific metrics.

Is that enough? Aren’t you missing something?

Here are 4 quick stories we’ve heard in the last year.

#1 – That Video Chat Feature? It Is Broken

We’re still figuring out this whole embeddable communications trend. The idea of companies taking WebRTC and shoving voice and video calling capabilities into an existing product and workflow. It can be project management tools, doctor visitations, meeting schedulers, etc.

In some cases, the interactions via WebRTC are an experiment of sorts. A decision to attempt embedding communications directly into the existing product instead of having users figure out on their own how to communicate (phone calls and Skype were the most common alternatives).

Treated as an experiment, such integrations sometimes fell out of focus, and the development teams rushed off to handle other tasks within the core product, as so often happens.

In one such case, the company used a CPaaS vendor to get that capability integrated with their service, so they didn’t think much about monitoring it.

At least not until they found out one day that their video meetings feature was malfunctioning for over two weeks (!). Customers tried using it and failed and just moved on, until someone complained loud enough.

The problem ended up being the use of a deprecated CPaaS SDK that should have been upgraded and wasn’t.

#2 – But Our Service is Working. Just not the Web Calling Part

In many cases, there’s an existing communication product that does most of its “dealings” over PSTN and regular phone numbers. Then one day, someone decides to add browser dialing. Next thing that happens, you’ve got a core product doing communications with a new WebRTC-based feature in there.

Things are great and calls are being made. Until one day a customer calls to complain. He had embedded a call button on his website, but people stopped calling him from the site. This went on for a couple of days while he tweaked his business and tried to figure out what was wrong, until he found out that the click-to-call button on the website just didn’t work anymore.

Again, all the monitoring and health check metrics were fine, but the integration point of WebRTC to the rest of the system was somewhat lost.

The challenge here was that this got caught by a customer who was paying for the service. What the company wanted at that point was to make sure this didn’t repeat itself. They wanted to know about their integration issues before their customers do.

#3 – Where’s My Database When I Need it?

Here’s another one. A customer of ours has a hosted unified communications service that runs from the browser. You log in with your credentials, see a contacts list, and can dial anyone or receive calls right inside the browser.

They decided to create a monitor with us that runs at a low frequency doing the exact same thing: two people logging in, one calls and the other answers. Checking that there’s audio and video and all is well.
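To give a feel for what such a monitor looks like, here’s a minimal sketch in the Nightwatch-style scripts testRTC uses. The URL, selectors, and credentials below are hypothetical placeholders, not the customer’s actual service:

var service = 'https://uc-example-service.com'; // hypothetical login page

client
   .url(service)
   .waitForElementVisible('#username', 10000)      // wait for the login form
   .setValue('#username', 'monitor-user')          // hypothetical credentials
   .setValue('#password', process.env.MONITOR_PASS)
   .click('#login')
   .waitForElementVisible('.contacts-list', 20000) // the address book must load
   .click('.contact:first-child .dial-button')     // one user dials, the other answers
   .pause(30000)                                   // let audio and video flow for a while
   .rtcScreenshot('in call');                      // capture evidence for the report

Run this at a low frequency with two probes and you get exactly the end-to-end check described above.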

One time they contacted us complaining that our monitor was failing while they knew their system was up and running. So we opened a failed monitor run, looked at the screenshot we collect automatically upon failure, and saw an error on the screen – the browser just couldn’t get the user’s address book after logging in.

This had nothing to do with WebRTC. It was a faulty connection to the database, but it ended up killing the service. They got that pinpointed and resolved after a couple of iterations. For them, it was all about the end-to-end experience and making sure it works properly.

#4 – The Doctor Won’t See You Now

Healthcare is another interesting area for us. We’ve got customers in this space doing both testing and monitoring. The interesting thing about healthcare is that doctor visitations aren’t a 24/7 thing. For that particular customer, it was a 3-hour shift each day.

The service was operating outside of the normal working hours of the doctor’s office, with the idea of offering patients a way to get a doctor during the evening hours.

With a service running only part of the day, the company wanted to be certain that the service was up and running properly – and to know about any issues as early as possible, so they could be resolved before the doctors started their shift.

End-to-End Monitoring to the Rescue

In all of these cases, the servers were up and running. The machines were humming along, but the service itself was broken. Why? Because application metrics tell a story, but not the whole story. For that, you need end-to-end monitoring. You need a way to run a real session through the system to validate that all of its pieces – all of its moving parts – are working well TOGETHER.

Next week, we will be hosting a webinar. In this webinar, we will show step by step how you can create a killer monitor for your own WebRTC application.

Oh – and we won’t only focus on working/not working type of scenarios. We will show you how to catch quality degradation issues of your service.

I’ll be doing it live, giving some tips and spending time explaining how our customers use our WebRTC monitoring service today – what types of problems they are solving with it.

Join me:

Creating a Kickass WebRTC Monitor Using testRTC
A recording can be found here.

 


We’ve Partnered Up With Frozen Mountain

Guess what? We’ve partnered with Frozen Mountain.

If you are developing a WebRTC application that you self-host (on AWS, bare metal, or whatever cloud or data center you use), then you’ve got your hands full with work. That work includes a lot of stress testing the service, trying to size your servers, and then ongoing monitoring.

More often than not, this would lead you to us. At testRTC we take care of your testing and monitoring needs for your WebRTC application.

And recently, we’ve seen several companies using Frozen Mountain and selecting testRTC for their WebRTC testing and monitoring needs.

Which led to a natural next step for both companies –

We’ve now partnered.

What does that mean exactly?

It means that we know Frozen Mountain’s products and their capabilities a bit better – and guess what – Frozen Mountain knows our products and their capabilities a bit better. It also means that if you’re using testRTC through Frozen Mountain, the Frozen Mountain team can easily gain access to your test results when needed, analyze them, and assist you with the issues you’re facing.

The end result? Speeding up your time from development to production.

If you are a Frozen Mountain customer, and you are looking for a testing and/or monitoring solution for your WebRTC application, then you can reach out directly to Frozen Mountain (or to us) – we’ll both be there for you to guide you through the process and make sure you end up with a better product offering with a higher quality to it.


Just Landed: Automated WebRTC Screen Sharing Testing in testRTC

Well… this week we had a bit of a rough start, but we’re here. We just updated our production version of testRTC with some really cool capabilities. The time was selected to fit with the vacation schedule of everyone in this hectic summer and also because of some nagging Node.js security patch.

As always, our new release comes with too many features to enumerate, but I do want to highlight something we’ve added recently because of a couple of customers that really really really wanted it.

Screen sharing.

Yup. You can now use testRTC to validate the screen sharing feature of your WebRTC application. And like everything else with testRTC, you can do it at scale.

This time, we’ve decided to take appear.in for a spin (without even hinting anything to Philipp Hancke, so we’ll see how this thing goes).

First, a demo. Here’s a screencast of how this works, if you’re into such a thing:

Testing WebRTC Screen Sharing

There are two things to do when you want to test WebRTC screen sharing using testRTC:

  1. “Install” your WebRTC Chrome extension
  2. Show something interesting

#1 – “Install” your WebRTC Chrome extension

There are a couple of things you’ll need to do in the run options of the test script if you want to use screen sharing.

This is all quite arcane, so just follow the instructions and you’ll be good to go in no time.

Here’s what we’ve placed in the run options for appear.in:

#chrome-cli:auto-select-desktop-capture-source=Entire screen,use-fake-ui-for-media-stream,enable-usermedia-screen-capturing
#extension:https://s3-us-west-2.amazonaws.com/testrtc-extensions/appearin.tar.gz

The #chrome-cli thingy stands for parameters that get passed to Chrome during execution. We need these to get screen sharing to work and to make sure Chrome doesn’t pop up any nagging selection windows when the user wants to screen share (those kill any possibility of automation here). Which is why we set the following parameters:

  • auto-select-desktop-capture-source=Entire screen – just to make sure the entire screen is automatically selected
  • use-fake-ui-for-media-stream – just add it if you want this thing to work
  • enable-usermedia-screen-capturing – just add it if you want this thing to work

The #extension bit is a new thing we just added in this release. It will tell testRTC to pre-install any Chrome extensions you wish on the browser prior to running your test script. And since screen sharing in Chrome requires an extension – this will allow you to do just that.

What we pass to #extension is the location of a .tar.gz file that holds the extension’s code.

Need to know how to obtain a .tar.gz file of your Chrome extension? Check out our Chrome extension extraction guide.

Now that we’ve got everything enabled, we can focus on the part of running a test that uses screen sharing.

#2 – Show something interesting

Screen sharing requires something interesting on the screen, preferably not an infinite video recursion of the screen being shared in one of the rectangles. Here’s what you want to avoid:

And this is what we really want to see instead:

The above is a screenshot that got captured by testRTC in a test scenario.

You can see 4 participants here, where the top right rectangle shows the screen shared by one of the other participants.

How did we achieve this in the code?

Here are the code snippets we used in the script to get there:

var videoURL = "https://www.youtube.com/tv#/watch?v=INLzqh7rZ-U";

client
   .click('.VideoToolbar-item--screenshare.jstest-screenshare-button') // start screen sharing in appear.in
   .pause(300)
   .rtcEvent('Screen Share ' + agentSession, 'global') // mark the graphs at this point
   .rtcScreenshot('screen share')
   .execute("window.open('" + videoURL + "', '_blank')") // open the video in a new tab
   .pause(5000)

   // Switch to the YouTube tab
   .windowHandles(function (result) {
       var newWindow = result.value[2];
       this.switchWindow(newWindow);
   })
   .pause(60000) // let the video play for a minute while the screen is shared

   // Switch back to the appear.in tab
   .windowHandles(function (result) {
       var newWindow = result.value[1];
       this.switchWindow(newWindow);
   });

We start by selecting the URL that will show some movement on the screen. In our case, an arbitrary YouTube video link.

Once we activate screen sharing in appear.in, we call rtcEvent, which we saw last time (and which is also a new trick in this release). This will add a vertical line on the resulting graphs so we know when we activated screen sharing (more on this one later).

We call execute to open up a new tab with our YouTube link. I decided to use the youtube.com/tv# URL to get the video to work close to full screen.

Then we switch to the YouTube tab in the first windowHandles call.

We pause for a minute, and then go back to the appear.in tab in the browser.

Let’s analyze the results – shall we?

Reading WebRTC screen sharing stats

Screen sharing is similar to a regular video channel. But it may vary in resolution, frame rate or bitrate.

Here’s what the appear.in graphs look like on one of the receiving browsers in this test run. Let’s start with the frame rate this time:

Two things you want to watch for here:

  1. The vertical green line – that’s where we’ve added the rtcEvent call. While it was added on the browser that sends the screen share, we can see it on one of the receiving browsers as well. It gets us focused on the point of interest in this test
  2. The incoming blue line. It starts off nicely, oscillating at 25-30 frames per second, but once screen sharing kicks in – it drops to 2-4 frames per second – which is to be expected in most scenarios

The interesting part? Appear.in made a decision to use the same video channel to send screen sharing. They don’t open an additional video channel or an additional peer connection to send screen sharing, preferring to repurpose an existing one (not all services behave like that).

Now let’s look at the video bitrate and number of packets graphs:

The video bitrate still runs at around 280 kbps, but it oscillates a lot more. BTW – I am using the mesh version of appear.in here with 4 participants, so it keeps the bitrate low to accommodate that.

The number of video packets per second on that incoming blue line goes down from around 40 to around 25. Probably due to the lower number of frames per second.

What else is new in testRTC?

Here’s a partial list of new things you can do with testRTC:

  • Manual testing service
  • Custom network profiles (more about it here)
  • Machine performance collection and visualization
  • Min/max bands on high level graphs
  • Ignore browser warnings and errors
  • Self service API key regeneration
  • Show elapsed time on running tests
  • More information in test runs on the actual script and run options used
  • More information across different tables and data views

Want to check screen sharing at scale?

You can now use testRTC to automate your screen sharing tests. And the best part? If you’re doing broadcast or multiparty, you can now test these scales easily for screen sharing related issues as well.

If you need a hand in setting up screen sharing in our account, then give us a shout and we’ll be there for you.

Join us to Learn More About WebRTC in Education

Education and e-learning are among the largest market niches adopting WebRTC.

It probably has to do with the no-fuss approach that WebRTC has, coupled with the ability to hook it up to different business processes. This enables education and LMS vendors to integrate WebRTC into their products directly, reducing the need to ask customers to install 3rd party apps or to deal with multiple systems.

What we’ve seen at testRTC is a large swath of education use cases:

  • Private 1:1 tutoring lessons
  • Class-type systems, where a single teacher facilitates the learning of multiple students
  • Webinar-type services, where a few active participants get broadcasted to a larger audience
  • MOOC (Massive Open Online Course)
  • Marketplace systems, brandable sites and widgets, aggregators of courses

We’d like to share our experiences with you and show you some of these use cases and the challenges they bring to developers of such systems.

Join our Webinar on WebRTC in Education

Join us on Wednesday, December 14 at 14:30 EST to learn more about this fascinating new frontier in real time education.

If you already have questions for us – just register for the event and place your questions on the registration page. They will be saved for the webinar itself.

Reserve your spot now

Check out the enhancements we’ve made to testRTC

It has been a while since we released a version, so it is with great pleasure that I am writing this announcement.

Yes. Our latest release is now out in the wild. We’ve upgraded our service on Sunday, so it is about time we take you for a quick roundup of the changes we’ve made.

#1 – Support for projects and users

This one is long overdue. Up until today, if you signed up for testRTC, you had to share your credentials with anyone on your team who worked with you on tests. That was impossible to manage if you wanted QA, R&D, and DevOps to share the account and work cooperatively on the tests and monitors logged inside testRTC.

So we did what we should have – we now support two modes of operation:

  1. A user can be linked to multiple projects
    • So if your company is running multiple projects, you can now run them separately, having people focused on their own environment and tests
    • This is great for those who run segregated services for their own customers
    • It also means that now, a user can switch between projects with a single set of credentials in the system
  2. A project can belong to multiple users
    • Need someone to work on writing the scripts and executing them? You got it
    • Have a developer working on a bug that got reported with a link to testRTC? Sure thing
    • The IT guy who just received a downtime alarm from the WebRTC monitor we run? That’s another user
    • Each user has their own place in the project, distinguished by their own credentials

testRTC project selection

If you require multiple projects or want to add more users to your account, just contact our support.

#2 – Longer, bigger tests

While in theory testRTC can run tests of any length and size, things aren’t always that easy.

There are usually two limitations to these requirements:

  1. The time they take to prepare, execute, and collect results
  2. The time it takes to analyze the results

We worked hard in this release on both elements and got to a point where we’re quite happy with the results.

If you need long tests, we can handle those. One of the main concerns with long tests is what to do if you made a mistake while configuring them. Now you can cancel such tests mid-run if necessary.

Canceling a test run

If you need to scale tests to a large number of browsers – we can do that too.

We are making sure we bubble up the essentials from the browsers, so you don’t have to work hard and rummage through hundreds of browser logs to find out what went wrong. To that end, the tables that show browser results have been reworked and are now sorted in a way that will show failures first.

#3 – Advanced WebRTC analysis

We’ve noticed in the past few months that some of our customers are rather hard core. They are technology savvy and know their way around WebRTC. For them, the graphs we offer of bitrates, latencies, packet losses and the like are just not enough.

Chrome’s webrtc-internals and getStats() offer a wealth of additional information that, up until now, we offered only as a JSON file download. Well… now we also visualize it upon request right from the report itself:

Advanced WebRTC graphs

These graphs are reachable by clicking the webrtc_internals_dump.txt link under the Logs tab of a test result. Or by clicking the Advanced WebRTC Analytics button located just below the channels list:

Access advanced WebRTC graphs

I’d like to thank Fippo for the work he did (webrtc-dump-importer) – we adopted it for this feature.

#4 – Simulation of call drops and dynamic network changes

This is something we’ve been asked about more than once. We have the capability of modeling the network of our probes, so that the browser runs behind a specific firewall configuration or via a specific type of simulated network. We modify and tweak the profiles we have for these from time to time, but now we’ve added a script command so that you can change this configuration at runtime.

What can you do with it? Run two minutes of a test at 2 Mbps, then close virtually everything for 20-30 seconds, then open up the network again – and see what happens. It is a way to test WebRTC in your application under dynamic network conditions – ones that may require ICE restarts.

Dynamically changing network profile in testRTC

In the test above, we dynamically changed the network profile in mid-call to starve WebRTC and see how it affects the test.

How do you use this new capability? Use our new command rtcSetNetworkProfile(). Read all about it in our knowledge base: rtcSetNetworkProfile()
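To make that scenario concrete, here is a rough sketch of such a script. The exact arguments rtcSetNetworkProfile() accepts are documented in the knowledge base; the values below are illustrative only:

client
   .pause(120000)                                    // two minutes on the regular 2 Mbps profile
   .rtcEvent('Network choke', 'global')              // mark the switch point on the graphs
   .rtcSetNetworkProfile('custom', 'bandwidth', 100) // starve the connection (illustrative args)
   .pause(25000)                                     // 20-30 seconds of near-zero network
   .rtcSetNetworkProfile('')                         // back to an unrestricted network
   .pause(60000);                                    // watch how the session recovers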

#5 – Additional test expectations

We had the basics covered when it came to expectations. You could check the number and types of channels, validate that there’s some bits going on in there, validate packet loss. And that’s about it.

To this list of capabilities that existed in rtcSetTestExpectations() we’ve now added the ability to add expectations related to jitter, video resolutions, frame rate, and call setup time. We’ve also taken the time to handle expectations on empty channels a lot better.

There’s really nothing new here, besides an enhancement of what rtcSetTestExpectations() can do.
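For illustration, the new expectations can be used along these lines. Treat the metric names and thresholds below as indicative only; the exact syntax is in our knowledge base:

client
   .rtcSetTestExpectations('video.in.fps >= 10')              // frame rate floor
   .rtcSetTestExpectations('video.in.resolution.width >= 320') // minimal incoming resolution
   .rtcSetTestExpectations('audio.in.jitter <= 30')           // jitter ceiling, in ms
   .rtcSetTestExpectations('call.setupTime <= 5000');         // call setup time, in ms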

#6 – Additional information in Webhook responses

testRTC can notify your backend whenever a test or a monitor run ends on the status of that run – success or failure. This is done by configuring a webhook that is called at the end of the test run. We’ve had customers use it to collect the results into their own internal monitoring systems such as Splunk and Elasticsearch.

What we had on offer in the actual payload that was passed with the webhook was rather thin, and while we’re still trying to keep it simple, we did add the leading error in that response in cases of failure:

testRTC webhook test failure response
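If you’re wiring this into your own backend, the receiving endpoint can be as simple as the Node.js sketch below. The payload field names here are assumptions for illustration; inspect an actual webhook call for the real structure:

var express = require('express');
var app = express();
app.use(express.json()); // parse the incoming JSON payload

app.post('/testrtc-webhook', function (req, res) {
    var run = req.body;
    if (run.status !== 'success') {
        // forward the leading error to Splunk, Elasticsearch, etc.
        console.error('testRTC run failed:', run.error); // hypothetical field names
    }
    res.sendStatus(200); // acknowledge receipt
});

app.listen(3000);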

#7 – API enabled to all customers

Yes. We had APIs in the past, but somehow, there was friction involved, with customers needing to ask for their API key in order to use the API for their continuous integration plans. It worked well, but the number of customers asking for API keys – both customers and prospects under evaluation – has risen to a point where it was ridiculous to continue doing this manually. Especially when our intent is for customers to use our APIs.

So we took this one step forward. From now on, every account has an API key by default. That API key is accessible from the account’s dashboard when you log in, so there’s no need to ask for it any longer.

testRTC API key

For those of you who have been using it – note that we’ve also reset your key to a new value.

Your turn

This has been quite a big release for us, and I am sure I’ve missed an enhancement or two (or more).

Now back to you. How would you want to test WebRTC in your product?