

What Different WebRTC Multiparty Video Conferencing Technologies Look Like on the Wire

MCU, SFU, Mesh – what do they really mean? We decided to take all these techniques for a spin to see what goes on over the network.

To that end, we used some simple test scripts in testRTC and handpicked a service that uses each of these techniques:

  1. Mesh: appear.in
  2. SFU: Talky (running Jitsi)
  3. MCU: BlueJeans

We used 4 browsers for each test. All running Chrome 48 (the current stable version). All from the same data center. All using the same 720p video stream as their camera source.

While the test lengths varied, the figure to watch is the average bitrate expenditure of each approach, as it highlights the differences between them.

Mesh

appear.in runs a mesh call. That means each user needs to send its media to all other users in the session – as well as receive all the media streams from them.

This is how it looks:

[Image: mesh video architecture]
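To make the topology concrete, here's a minimal mesh sketch. It is illustrative only – `signaling` and `renderRemoteStream` are hypothetical stand-ins, not appear.in's actual code – and answer/ICE handling is omitted for brevity:

```typescript
// Mesh: one RTCPeerConnection per remote participant.
// `signaling` and `renderRemoteStream` are hypothetical stand-ins.
declare const signaling: {
  sendOffer(peerId: string, offer: RTCSessionDescriptionInit): void;
};
declare function renderRemoteStream(peerId: string, stream: MediaStream): void;

const peers = new Map<string, RTCPeerConnection>();

async function joinMeshRoom(remotePeerIds: string[]): Promise<void> {
  const localStream = await navigator.mediaDevices.getUserMedia({
    audio: true,
    video: { width: 1280, height: 720 }, // the 720p source used in these tests
  });

  for (const peerId of remotePeerIds) {
    const pc = new RTCPeerConnection();
    peers.set(peerId, pc);

    // Uplink: every remote peer gets its own copy of our media...
    localStream.getTracks().forEach((track) => pc.addTrack(track, localStream));

    // ...and downlink: every remote peer sends us theirs directly.
    pc.ontrack = (event) => renderRemoteStream(peerId, event.streams[0]);

    const offer = await pc.createOffer();
    await pc.setLocalDescription(offer);
    signaling.sendOffer(peerId, offer); // answer/ICE handling omitted
  }
}
```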

I opened up an ad-hoc room there, got 4 of our browser agents into it, waited about a minute, and collected the results:

[Image: appear.in mesh video – test results]

Nothing much to see here. Incoming and outgoing video across the whole test is rather similar, if somewhat high.

Looking at one of the browser’s media channels tells the story:

[Image: appear.in mesh video – media channels]

This agent has 3 outgoing and 3 incoming voice and video channels.

Average bitrate on each video channel is around 1.2 Mbps, which means our agent runs about 3.6 Mbps both uplink and downlink. Not trivial.
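The arithmetic generalizes. For n participants, a mesh client runs n-1 streams in each direction, an SFU client sends once and receives n-1 streams, and an MCU client sends and receives a single stream. A back-of-the-envelope sketch using the 1.2 Mbps per-stream figure from this test:

```typescript
// Rough per-client bandwidth for each topology, given a per-stream bitrate.
function perClientBandwidth(participants: number, streamMbps: number) {
  const others = participants - 1;
  return {
    mesh: { upMbps: others * streamMbps, downMbps: others * streamMbps },
    sfu: { upMbps: streamMbps, downMbps: others * streamMbps },
    mcu: { upMbps: streamMbps, downMbps: streamMbps },
  };
}

console.log(perClientBandwidth(4, 1.2));
// mesh: 3.6 up / 3.6 down  <- the numbers measured above
// sfu:  1.2 up / 3.6 down  <- one copy up, everyone else's streams down
// mcu:  1.2 up / 1.2 down  <- a single composed stream each way
```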

SFU

Talky uses Jitsi for its SFU implementation. That means the server doesn't process video but rather routes it to everyone who needs it: each browser sends its media to the SFU, which then forwards that media to all other participants.

This is how it looks:

[Image: SFU video architecture]
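From the browser's perspective an SFU session is a single peer connection: one uplink, and one incoming stream per remote participant. A minimal sketch along the same lines as before (`sfuSignaling` is a hypothetical stand-in, not Talky's or Jitsi's actual protocol):

```typescript
// SFU: a single RTCPeerConnection to the server, which fans media out.
declare const sfuSignaling: {
  exchangeOffer(offer: RTCSessionDescriptionInit): Promise<RTCSessionDescriptionInit>;
};
declare function renderRemoteStream(stream: MediaStream): void;

async function joinSfuRoom(): Promise<void> {
  const pc = new RTCPeerConnection();

  // Uplink: one copy of our media, regardless of room size.
  const localStream = await navigator.mediaDevices.getUserMedia({ audio: true, video: true });
  localStream.getTracks().forEach((track) => pc.addTrack(track, localStream));

  // Downlink: ontrack fires once per forwarded remote track -
  // with 3 remote participants that is 3 incoming audio/video pairs.
  pc.ontrack = (event) => renderRemoteStream(event.streams[0]);

  const offer = await pc.createOffer();
  await pc.setLocalDescription(offer);
  await pc.setRemoteDescription(await sfuSignaling.exchangeOffer(offer));
}
```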

I took 4 browsers in testRTC and pointed them at a single Talky session. Here’s what the report showed:

[Image: Talky SFU video – test results]

The main thing to note there is that in total, the browsers we used processed a lot more incoming media than outgoing (at a ratio of 3 to 1). This shouldn't surprise us. Look at how one of these browsers reports its media channels:

[Image: Talky SFU video – media channels]

1 outgoing audio and video channel and then 3 incoming audio and video channels. There’s another empty video channel – Talky is probably using that for incoming screen sharing.

Note how in this case the same machines with the same network performance did a lot better: the outgoing video channel gets to almost 2.5 Mbps – almost twice as much as the mesh was capable of using. To make it clear – mesh doesn't scale well.

MCU

For an MCU I picked the BlueJeans service. We've been playing with it a bit on a demo account, so I took the time to capture a quick session. Being architected around an MCU means that each browser sends a single video stream. The MCU takes all these video streams and composes them into a single video stream that is then sent back to each participant separately.

[Image: MCU video architecture]
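On the wire, an MCU session is indistinguishable from a 1:1 call from the browser's perspective. A sketch in the same spirit as the previous two (`mcuSignaling` is again a hypothetical stand-in, not BlueJeans' actual API):

```typescript
// MCU: a plain 1:1 connection; the mixing happens server-side.
declare const mcuSignaling: {
  exchangeOffer(offer: RTCSessionDescriptionInit): Promise<RTCSessionDescriptionInit>;
};
declare function renderComposedStream(stream: MediaStream): void;

async function joinMcuConference(): Promise<void> {
  const pc = new RTCPeerConnection();

  const localStream = await navigator.mediaDevices.getUserMedia({ audio: true, video: true });
  localStream.getTracks().forEach((track) => pc.addTrack(track, localStream));

  // A single incoming stream: the MCU decodes everyone's video and
  // composes it into one layout before encoding it back to us.
  pc.ontrack = (event) => renderComposedStream(event.streams[0]);

  const offer = await pc.createOffer();
  await pc.setLocalDescription(offer);
  await pc.setRemoteDescription(await mcuSignaling.exchangeOffer(offer));
}
```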

As with the other two experiments, I used 4 browsers with this MCU. Here are the report highlights:

[Image: BlueJeans MCU video – test results]

Total kilobits here are rather similar, though it seems that in total, the browsers received less than they sent out.

Drilling down into a single browser report, we see the following channels:

[Image: BlueJeans MCU video – media channels]

A single incoming and a single outgoing audio and video channel. We have an additional incoming/outgoing video channel with no data on it – probably reserved for screen sharing. This is similar to how Talky does it, except that BlueJeans opens up an extra outgoing channel by default while Talky doesn't.

Outgoing bitrate averages 1.2 Mbps – a lot lower than the 2.5 Mbps in Talky. I assume that's because BlueJeans limits the bitrate from the browser, which actually makes a lot of sense for a 720p video stream. The incoming video is even lower, at 455 kbps on average.

This didn’t make sense to me, so I dug a bit deeper into some of our video charts and found this:

[Image: BlueJeans MCU video – video bitrate chart]

So BlueJeans successfully manages to get its outgoing video from the MCU towards the browser up to the same 1.2 Mbps bitrate. Thinking about it, I shouldn't be surprised. Talky and appear.in are ad-hoc services, while BlueJeans is a full service with business logic in it – getting all browsers into the session takes more time, especially with how we've written the script for it. We have a full minute here from the browser showing its local video until it really “connects” to the conference, and that idle time drags the average down.

Another interesting tidbit is that Chrome gets its outgoing bitrate to 1.2 Mbps quite fast – something Google took care of in 2015. BlueJeans takes a slower route towards that 1.2 Mbps, taking about half a minute to get there.

So What?

Video comes in different shapes and sizes.

WebRTC reduces a lot of the decisions we had to make and takes care of most browser-related media issues, but it is quite flexible – different services use it differently to reach the same use case, in this case multiparty video chat.

If you are looking to understand your WebRTC service better and at the same time automate your testing and monitoring – try out testRTC.


The day Talky and Jitsi failed – and why end-to-end monitoring is critical

It was a bad day for me. 14 January 2016.

I had a demo to show to a testRTC customer. Up until that point, the demos we'd shown potential customers were focused on Jitsi or Talky (depending on who did the demo).

There were a couple of reasons for picking these services for our demos:

  1. They are freely available, so using them required no approval from anyone
  2. They require no login to use, so the script on top of them was a simple one to explain and showcase
  3. They support video, making them visual – a good thing in a demo
  4. They support more than two participants, which shows how we can scale nicely
  5. In the case of Jitsi, you can visually see if the session is relayed or not – making it easy to show how our network configuration affects WebRTC media routing

We used to use them a lot. For me, they were always stable.

Until the 14th of January last month, when both mysteriously failed on me. The failure was a subtle one. The site works. You can join sessions. You can see your camera capture. It tells you it is waiting for other participants to join. But it also does that when someone joins – and that other participant sees exactly the same message.

You have two or more people in the same session, all waiting for each other, when they are already all effectively “in the meeting”.

Our scheduled demos for the day failed. We couldn't show customers anything decent – relying on a third party was a small mistake, and we switched to demoing on other services, but it cost us time in those meetings. Since then, we've gone with AppRTC for our baseline.

I don’t know why Jitsi and Talky failed on the same day. They both make use of the Jitsi Videobridge, but I don’t believe it was related to the videobridge or even to the same issue – just a matter of coincidence.

While these things happen to all of us, we need to strive for continuous improvement – both in the time it takes us to find an issue and in the time it takes to fix it.
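A page-level check would have passed that day: the site was up, and every participant happily waited. Catching this class of failure means asserting that media actually flows end-to-end. Here's a minimal sketch of such a check, assuming a probe client whose RTCPeerConnection you control (it uses today's promise-based getStats with field names per the WebRTC stats spec):

```typescript
// Fail the probe if inbound RTP packet counts don't grow within a window -
// i.e. everyone is "in the meeting" but no media is actually arriving.
async function assertMediaFlowing(pc: RTCPeerConnection, windowMs = 15000): Promise<void> {
  const inboundPackets = async (): Promise<number> => {
    let total = 0;
    (await pc.getStats()).forEach((report) => {
      if (report.type === 'inbound-rtp') total += report.packetsReceived ?? 0;
    });
    return total;
  };

  const before = await inboundPackets();
  await new Promise((resolve) => setTimeout(resolve, windowMs));
  const after = await inboundPackets();

  if (after <= before) {
    throw new Error(`No inbound media for ${windowMs} ms - raise an alert`);
  }
}
```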


Will your WebRTC service cope with VP9?

Chrome 48 has been out for a week now with some much-needed features – some of which we plan on using in testRTC ourselves.

One thing that got added is official VP9 support. Sam Dutton, developer advocate @Google for everything WebRTC (and then some), wrote a useful piece on how to use VP9 with Chrome 48.

A few interesting things here:

  1. Chrome 48 now supports both VP8 and VP9 video codecs
  2. While VP9 is the superior codec, it is NOT the preferred codec by default

That second thing, of VP9 not being the preferred codec, means it won’t be used unless you explicitly request it. For now, you probably don’t have to do anything about it, but how many Chrome versions will pass until Google flips this preference?
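At the time, explicitly requesting VP9 meant munging the SDP to move its payload type to the front of the m=video line; current browsers offer a cleaner route via setCodecPreferences. A sketch of the modern approach – illustrative only, not anyone's production code:

```typescript
// Reorder codec preferences so VP9 is offered first, with everything
// else kept as fallback. Call after adding tracks, before createOffer().
function preferVp9(pc: RTCPeerConnection): void {
  const capabilities = RTCRtpSender.getCapabilities('video');
  if (!capabilities) return; // codec preferences not supported here

  // Stable sort: VP9 entries move to the front, relative order otherwise kept.
  const codecs = [...capabilities.codecs].sort(
    (a, b) =>
      Number(a.mimeType.toLowerCase() !== 'video/vp9') -
      Number(b.mimeType.toLowerCase() !== 'video/vp9')
  );

  for (const transceiver of pc.getTransceivers()) {
    if (transceiver.sender.track?.kind === 'video') {
      transceiver.setCodecPreferences(codecs);
    }
  }
}
```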

When should you start looking at VP9?

  1. If your service runs peer-to-peer and uses no backend media processing, then you should care TODAY. Change that preference in your code and try it out. That’s free media quality you’re getting here
  2. If your service routes media through a backend server but doesn’t process it any further, then you should try it out and see how well it works. With a bit of fine tuning you should be good to go
  3. If your service needs to encode or decode video in the backend, then you’ve got work to do, and you should have started a couple of months ago to make this transition

In all cases, now would be a good time to start if you haven’t already.

Let’s take a look at AppRTC and VP9

You may have noticed I am a fan of using AppRTC for a baseline, so let’s look at it for VP9 as well. Google were kind enough to add support for VP9 to AppRTC, which means that our own baseline test for our testRTC users already supports VP9.

I took our AppRTC testRTC script for a spin, set the camera output to 720p, and placed two users on it to see what happens. Here are screenshots of the results:

[Image: Chrome 48 AppRTC with VP9 – agent drill-down]

What you see above is a drill-down of one of the two test agents running in this test. The call was set to around 3 minutes, and the interesting part is at the bottom – the fact that we're using VP9 for this call. You can also see that the average bitrate for the video channel is around 1.6 Mbps. More about that later.
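If you want to confirm the negotiated codec yourself rather than read it off a report, the stats API exposes it. A sketch using today's promise-based getStats (Chrome 48 itself still shipped the legacy callback flavor):

```typescript
// Returns the mime type of the video codec actually in use, e.g. "video/VP9".
async function negotiatedVideoCodec(pc: RTCPeerConnection): Promise<string | undefined> {
  const stats = await pc.getStats();
  let codecId: string | undefined;

  stats.forEach((report) => {
    if (report.type === 'outbound-rtp' && report.kind === 'video') {
      codecId = report.codecId;
    }
  });

  return codecId ? stats.get(codecId)?.mimeType : undefined;
}
```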

We could have done this when Chrome 48 was beta, but we didn’t have this blog then, so there wasn’t any point 🙂

When you look at the graphs for the video channels, this is what you see:

[Image: Chrome 48 AppRTC with VP9 – video channel charts]

From the above charts, there are 3 things I want to mention:

  1. It takes Chrome around 30 seconds to get to the average 1.6 Mbps bitrate with VP9
  2. It is rather flaky, going up and down in bitrate quite a lot
  3. This also affects the audio bitrate (not shown here, but visible in another graph in the report)

How does this compare to VP8?

I took the same test and used Chrome 47 for it via testRTC. Here’s what I got:

[Image: Chrome 47 AppRTC with VP8 – agent drill-down]

The main difference from Chrome 48's VP9 implementation? Average bitrate is around 2.5 Mbps – around 0.9 Mbps more (a 36% difference).

What’s telling though is the other part of the report:

[Image: Chrome 47 AppRTC with VP8 – video channel charts]

Notice the differences?

  1. It takes Chrome 47 10 seconds or less to get VP8 to its 2.5 Mbps average bitrate – a third of the time Chrome 48 needs for VP9. I didn't investigate further whether this comes from how Chrome works or from the specific video codec implementation; it's worth taking the time to look at, though
  2. Chrome 47 is as stable as a rock once it reaches the available bitrate (or its limit)
  3. Audio (again, no screenshot) is just as predictable and flat in this case

How are you getting ready for VP9?

VP9 is now officially available in Chrome. There's still a long way to go to bring it to VP8's level of stability and network performance. That said, there are those who will benefit from this new feature immediately, so why wait?

We’re here for you. testRTC can help you test and analyze your WebRTC service with VP9 – contact us to learn more.