All Posts by Tsahi Levent-Levi

Notifications in watchRTC

watchRTC has a notification center that holds notification messages the system deems as relevant to your needs. You can reach the notification center by selecting watchRTC | Notifications in the sidebar.

The notification center collects and alerts you of the following conditions:

  1. A new browser version was first seen in the last couple of weeks
    • Whenever there’s a new browser version, you need to be aware of it
    • Newer browsers tend to change WebRTC behavior from time to time, so knowing that users are now using them in your service can be useful for correlating with new complaints coming from users
  2. Weekly traffic changes
    • If the number of sessions, peers or minutes changes by more than 5% on a weekly basis, this will be indicated by a notification
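The weekly-change rule above can be sketched in a few lines. The 5% threshold comes from the text; the function itself and its names are our own illustration, not part of watchRTC:

```javascript
// Flag a notification when a weekly metric (sessions, peers or minutes)
// moves by more than the threshold percentage week over week.
function weeklyChangeNotification(lastWeek, thisWeek, thresholdPct = 5) {
  const changePct = ((thisWeek - lastWeek) * 100) / lastWeek;
  return { changePct, notify: Math.abs(changePct) > thresholdPct };
}

console.log(weeklyChangeNotification(1000, 1080)); // { changePct: 8, notify: true }
console.log(weeklyChangeNotification(1000, 1030)); // { changePct: 3, notify: false }
```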

Data Streams in watchRTC

watchRTC makes the data it collects available to you in a programmable format, consumable by external BI systems via a data streams mechanism.

Data streams aren’t available in all accounts. They require enterprise plans.

What are data streams?

Data streams are a stream of files that store operational data from testRTC products in an easy-to-consume format for our customers. Data stream files hold arrays of well-defined JSON structures, each denoting a specific interaction or event taking place in one of the products.

These files are created at a certain interval, usually measured in hours, enabling the application to grab the file and process it internally by shipping it to its own data warehouse. Customers can use this mechanism to enrich their data or run their own proprietary queries on the data.

Configuring your AWS S3 for data streams

testRTC currently implements data streams using AWS S3 buckets as the cloud object store.

To use data streams, you will need to create an S3 bucket in your own AWS account and create the following permissions for this bucket:

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "s3:*"
      ],
      "Resource": [
        "arn:aws:s3:::watchrtc-data-streams/*"
      ]
    }
  ]
}
Code language: JSON / JSON with Comments (json)

Once created, pass to our support the following parameters:

  • accessKeyId
  • secretAccessKey
  • S3 bucket name
  • Filename prefix
  • Collection interval (in minutes)

watchRTC data streams

Data streams in watchRTC are generated at the room level. When a data stream file needs to be created, watchRTC collects all history results for the configured time interval, generates a JSON structure per room, places all these rooms in the data stream file and stores that file in the configured object store.

Detailed below is the JSON structure you can expect:

[
  {
    "room_url": "https://app.testrtc.com/app/watchrtc-room/xxxxxx",
    "room_id": "roomid",
    "start_time": "2022-09-04T00:09:22.885Z",
    "end_time": "2022-09-04T00:17:06.149Z",
    "duration": 463,            // in seconds
    "users": 2,                 // number of peers in the room
    "stats": {
      "mos": 4.35,              // MOS score
      "score": 6,               // testRTC media score
      "call_setup_time": 900,   // in milliseconds
      "audio": {
        "send": {
          "bitrate": 11,        // in kbit/s
          "packet_loss": 6.1,   // in percentage
          "jitter": 6,          // in milliseconds
          "rtt": 2397           // in milliseconds
        },
        "recv": {
          "bitrate": 20,        // in kbit/s
          "packet_loss": 0.2,   // in percentage
          "jitter": 15          // in milliseconds
        },
        "bitrate": 16,          // in kbit/s
        "packet_loss": 3.2,     // in percentage
        "jitter": 11            // in milliseconds
      },
      "video": {
        "send": {
          "bitrate": 1499,      // in kbit/s
          "packet_loss": 4.5,   // in percentage
          "jitter": 48,         // in milliseconds
          "rtt": 1073           // in milliseconds
        },
        "recv": {
          "bitrate": 109,       // in kbit/s
          "packet_loss": 0.1,   // in percentage
          "jitter": 43          // in milliseconds
        },
        "bitrate": 804,         // in kbit/s
        "packet_loss": 2.3,     // in percentage
        "jitter": 46            // in milliseconds
      }
    },
    "peers": [
      {
        "peer_id": "peerid",
        "os": "Linux",
        "os_version": "64-bit",
        "browser": "Chrome",
        "browser_version": "102.0.5005.61",
        "location": {
          "city": "Frankfurt am Main",
          "country": "Germany",
          "organization": "Google"    // ISP or carrier
        },
        "sdk_version": "1.34.1-beta.1",
        "start_time": "2022-06-24T06:37:35.547Z",
        "duration": 392,        // in seconds
        "keys": {
          "keyA": "valueA",
          "keyB": "valueB"
          ...
        },
        "features": {
          "connection_type": "TURN",
          "media_transport": "udp"    // udp/tcp/tls
        },
        "stats": {
          "mos": 3.29,
          "score": 5.9,
          "call_setup_time": 164,
          "audio": {
            "send": { "bitrate": 13, "packet_loss": 3.6, "jitter": 8, "rtt": 428 },
            "recv": { "bitrate": 24, "packet_loss": 0.1, "jitter": 4 },
            "bitrate": 19,
            "packet_loss": 1.9,
            "jitter": 6
          },
          "video": {
            "send": { "bitrate": 1596, "packet_loss": 2.7, "jitter": 15, "rtt": 159 },
            "recv": { "bitrate": 68, "packet_loss": 0.1, "jitter": 13 },
            "bitrate": 832,
            "packet_loss": 1.4,
            "jitter": 14
          }
        }
      },
      ...
    ]
  }
]
Code language: JSON / JSON with Comments (json)
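Once you pull a data stream file from your bucket, processing it is plain JSON handling. A minimal sketch, using a tiny inlined sample that follows the field names above (the aggregation itself is our own example, not part of the product):

```javascript
// A data stream file is a JSON array of room structures. Here we inline a
// trimmed-down sample instead of downloading from S3.
const fileContents = JSON.stringify([
  { room_id: "roomA", users: 2, stats: { mos: 4.35, score: 6 } },
  { room_id: "roomB", users: 5, stats: { mos: 3.1, score: 5 } },
]);

const rooms = JSON.parse(fileContents);

// Average MOS across rooms, weighted by the number of peers in each room
const totalPeers = rooms.reduce((sum, r) => sum + r.users, 0);
const weightedMos =
  rooms.reduce((sum, r) => sum + r.stats.mos * r.users, 0) / totalPeers;

console.log(totalPeers, weightedMos.toFixed(2)); // 7 "3.46"
```

From here, shipping the parsed rows to your own data warehouse is a matter of using that warehouse's ingestion API.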

Console logs in watchRTC

The watchRTC SDK can also collect browser console logs. While these might be verbose, they can be quite useful to trace and resolve application related issues.

The ability to collect console logs in watchRTC is dependent on the specific plan you are on.

Configuring console logs collection

Since browser console logs can be quite verbose, collecting them all can affect your application performance by taking up much of the available bandwidth that is better used for the actual voice or video sessions you are conducting over WebRTC.

This is why the watchRTC SDK will filter console log messages based on their level.

On Settings | watchRTC you can set the Console logs level to “log”, “debug”, “info”, “warn” or “error”. Our suggestion is to keep it at “warn” or “error” at all times.

Notes:

  • In the example above, we decided to collect log level messages because we were keen on figuring out a bug and catching it in our staging environment, rather than in the production system
  • The configuration specified here will be used by default for collecting console logs in the watchRTC SDK. You can override this for individual peers if needed
  • If this field is disabled in your account then your watchRTC plan does not include console logs collection

Granular control of console log collection in the SDK

On the watchRTC SDK level, you can granularly decide to override the default collection configuration of browser console logs per peer as well as dynamically during a session.

To do that, you can use the watchrtc.init() or watchrtc.setConfig() API calls in the watchRTC SDK, providing them a console parameter:

watchRTC.setConfig({ console: { level: "error", override: true } });
Code language: JavaScript (javascript)

The above example will set the log level to “error”, overriding the configuration of the account.

Viewing console logs

Collected console logs can be viewed as part of the Trace window of the peer level page:

probeRTC options

probeRTC is meant to run on its own with little assistance from the user. From time to time, there is a need to modify its behavior, especially when further troubleshooting is required.

This is enabled via URL parameters when running a probeRTC probe or via the options in the probe editor:

Supported options

All variables are optional. You are not required to add any of them to the URL.

Variable | Description
region=X | If the deployment of your infrastructure includes multiple data centers, you can specify the data center to work against by using the region= option

Connecting to watchRTC

The watchRTC SDK automatically connects to the watchRTC servers when a peer connection is created in your web application. Sometimes, application developers would rather connect earlier than that. This has the added benefit of:

  1. Collecting failures taking place prior to the creation of a peer connection
  2. Catching and logging fast failures that close the page, refresh or retry the connection before a peer connection gets fully connected

This is doubly true for cases where custom events and console logs are added and collected prior to a peer connection’s creation.

To that end, you can use watchRTC.connect() API call to explicitly tell the watchRTC SDK to connect to the server. If you make use of the connect() API, you will also need to explicitly call watchRTC.disconnect() when you are done with the sessions (note that without calling connect, the watchRTC SDK will automatically disconnect once all peer connections are closed).
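The resulting lifecycle looks as follows. In a real application, `watchRTC` comes from the watchRTC SDK; here a minimal stand-in stub is used so the flow is self-contained and runnable:

```javascript
// Stand-in stub for illustration only — the real object is the watchRTC SDK.
const watchRTC = {
  state: "idle",
  connect()    { this.state = "connected"; },
  disconnect() { this.state = "idle"; },
};

watchRTC.connect();          // before any RTCPeerConnection exists:
                             // early failures, custom events and console
                             // logs are captured from this point on
// ... create peer connections, run the session ...
watchRTC.disconnect();       // required, because connect() was called explicitly
console.log(watchRTC.state); // "idle"
```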

WebRTC test scoring

testRTC collects and analyzes a lot of different data points and metrics. To manage that information, testRTC also offers various scoring values for tests and collected monitors data. When opening test results or monitor information, you can find the score values at the top ribbon bar of the results:

If you are using testingRTC or upRTC, this information block includes Score and MOS. For watchRTC, it also includes User.

Score

The Score value is a number between 0-10 giving the overall scoring of the scenario.

It looks at both audio and video, across all channels, penalizing the score down from 10 based on the media metrics values as well as the variance of these values over time.
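testRTC does not publish its exact scoring formula, so the sketch below is purely illustrative: it only demonstrates the stated idea of starting from 10 and subtracting penalties driven by a metric's level and its variance over time. The weights and input shape are made up for the example:

```javascript
// Illustrative only — NOT testRTC's actual scoring algorithm.
// samples: per-interval packet loss percentages for one channel (assumed input)
function illustrativeScore(samples) {
  const mean = samples.reduce((a, b) => a + b, 0) / samples.length;
  const variance =
    samples.reduce((a, b) => a + (b - mean) ** 2, 0) / samples.length;
  // Penalize both the average level and its instability (made-up weights)
  const penalty = mean * 0.5 + Math.sqrt(variance) * 0.3;
  return Math.max(0, 10 - penalty);
}

console.log(illustrativeScore([0, 0, 0, 0]));      // 10 — clean, stable channel
console.log(illustrativeScore([2, 8, 1, 9]) < 10); // true — lossy, unstable channel
```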

It is an objective score that is mainly useful for baselining service performance. Deciphering whether a score value is good or bad is left to the judgement of the testRTC user, based on their understanding of and experience with their own application.

Learn more about test scoring in testingRTC.

MOS

MOS stands for Mean Opinion Score. It looks only at the audio channels in a scenario. It is a widely used metric.

MOS is a value between 1-5 indicating a subjective quality measurement.

MOS score | Perceived quality
Above 3   | Good
2-3       | Mediocre
Below 2   | Bad

testRTC derives its MOS score calculations indirectly from RTCP reports that are exposed in WebRTC via the getStats() APIs. We rely on jitter, packet loss, round trip time and codec type to calculate it (we figure out the R-factor and from there derive the MOS score value).
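The R-factor route from network metrics to MOS can be sketched with the standard simplified E-model (ITU-T G.107 flavor). This is not testRTC's exact formula; the impairment terms and constants below are common textbook simplifications, shown only to make the derivation concrete:

```javascript
// Simplified E-model: network metrics -> R-factor (illustrative constants)
function rFactor(delayMs, packetLossPct) {
  const R0 = 93.2;                              // base rating (narrowband G.711)
  const Id = 0.024 * delayMs +                  // delay impairment
             (delayMs > 177.3 ? 0.11 * (delayMs - 177.3) : 0);
  const Ie = 0, Bpl = 25.1;                     // codec impairment / loss robustness
  const IeEff = Ie + ((95 - Ie) * packetLossPct) / (packetLossPct + Bpl);
  return R0 - Id - IeEff;
}

// Standard mapping from R-factor to MOS
function mosFromR(R) {
  if (R <= 0) return 1;
  if (R >= 100) return 4.5;
  return 1 + 0.035 * R + R * (R - 60) * (100 - R) * 7e-6;
}

// A clean call: 20 ms delay, no packet loss
console.log(mosFromR(rFactor(20, 0)).toFixed(2)); // "4.40"
```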

User

User scoring is available in watchRTC. It enables developers to collect user ratings and store them in watchRTC directly.

You can use the watchRTC SDK‘s watchRTC.setUserRating() API to report user rating information.

Quality of a scenario

When more than a single user or device is part of the scenario, the scoring information of the whole scenario is calculated as the average score across the relevant devices.

  • Average score relates to Score, MOS and User values
  • Relevant devices are all the devices in the scenario that have that score value. For example, if a single user in a meeting of 10 participants provided a user rating, then their rating will be the User rating given to the whole scenario
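The averaging rule above can be expressed in a few lines. The function and field names are our own for illustration; the key point is that devices without a value for a given score are simply left out of the average:

```javascript
// Scenario score = mean over the devices that actually have that value
function scenarioScore(devices, field) {
  const values = devices
    .map((d) => d[field])
    .filter((v) => typeof v === "number"); // only the "relevant" devices
  if (values.length === 0) return null;
  return values.reduce((a, b) => a + b, 0) / values.length;
}

const devices = [
  { score: 8, mos: 4.2, user: 5 }, // only this peer submitted a user rating
  { score: 6, mos: 3.8 },
  { score: 7, mos: 4.0 },
];

console.log(scenarioScore(devices, "score")); // 7
console.log(scenarioScore(devices, "user"));  // 5
```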

Comparing callstats.io and testRTC

We released watchRTC last year, a product that handles passive monitoring of WebRTC users and applications. With it, we’ve changed the game of what vendors and tools you are going to need for your WebRTC application lifecycle. Now, there’s a single vendor who can fulfill each of your needs – testRTC.

testRTC offers a complete set of products and services suitable for the full lifecycle of WebRTC application development. From development and testing, to monitoring and support of production infrastructure and users. Here’s what you’re getting from our products:

  • testingRTC helps developers test the scalability, performance and functionality of their WebRTC-based applications. Our customers use it for unit testing, stress testing, regression, etc.
  • upRTC is our active monitoring service. It is designed to validate that your infrastructure is up and running as intended, accessible from anywhere you need it to be.
  • watchRTC helps operations teams monitor and analyze live WebRTC voice and video quality in production environments. Some of our customers use it also to assist with their manual testing.
  • qualityRTC provides support teams with a tool that drastically reduces their handling time of quality and connectivity issues reported by end users. This in turn increases satisfaction and reduces churn.
  • probeRTC can check specific office locations and their network performance in front of your WebRTC infrastructure, doing it continuously over a long period of time.

Also, since Spearline acquired us, we can offer even broader solutions when it comes to call centers, with end-to-end monitoring and testing of landline and mobile phone numbers across the globe connecting to call center agents on web browsers.

While callstats.io lets you monitor live WebRTC calls and evaluate real end-user experiences, testRTC does a lot more than that – it offers a full suite of testing and monitoring services where monitoring live WebRTC calls is just one of the products.

In this post, I review the differences between testRTC and callstats.io, especially when it comes to the holistic view of a WebRTC application lifecycle.

testRTC is suitable for all of your WebRTC application lifecycle needs

testRTC has a variety of users. From developers and testers, through IT operations teams, to support organizations. Each user will find that the set of testRTC products assists with daily work and makes tasks less cumbersome – be it to automate and scale testing, provide visibility towards the WebRTC production infrastructure or assist in solving user complaints.

We have our roots in testing. This brings with it three target pillars that all our products start from:

  1. Collect and share everything. Developers want it all, so we collect and share everything with them. We took this approach to all of our other products as we grew, which makes our solutions the most complete ones. Whatever data point you need – we have it available for you.
  2. Simplify and focus analysis. WebRTC is complex. The data we collect is complex. On one hand, we make it all available, on the other hand, we spend a lot of time figuring out what to show and where to make information available with as little number of clicks as possible for our users.
  3. Shorten the cycle. We look at the complete WebRTC application lifecycle. And our goal is to simplify and shorten it. The quicker the iterations you have, the faster you will be able to operate and innovate. Much of our engineering effort is used towards optimizing speed and performance of our UI so you won’t have to wait.

At the end of the day, testRTC has a rich and powerful set of products that are suitable for your needs if you are using WebRTC.

callstats.io is suitable for monitoring end users

callstats.io is built for monitoring WebRTC in production environments. It is designed to make it easy to evaluate and understand the user experience. The target audience is the operations team handling the WebRTC infrastructure and no one else.

Only testRTC offers complete monitoring, testing and support capabilities

callstats.io is used to monitor users in production environments. That said, if you need to check the connectivity of your users, the health of your WebRTC infrastructure or to test new releases and features before introducing them to your customers, then you won’t be able to do any of these using callstats.io.

By contrast, testRTC lets you monitor both live calls as well as the underlying infrastructure with predictable simulated traffic. testRTC also lets you easily handle customer tickets on connectivity and quality issues. And to top it off, testRTC assists you in developing and validating your service before going to market.

testRTC provides the tools you need for the full WebRTC application lifecycle – from development – through deployment and monitoring – to support. We simulate traffic, capture live traffic, analyze networks and visualize results in ways that make it easier for you no matter what point in time you are at with your WebRTC deployment.

testRTC or callstats.io?

The table below summarizes the key features and capabilities of testRTC and callstats.io:

 | testRTC | callstats.io
General | |
Target audience | Operations teams, support teams, application developers, testing and QA teams | Operations teams
Primary function | watchRTC & upRTC: monitor and analyze WebRTC service; qualityRTC & probeRTC: assist in solving end user connectivity and quality issues; testingRTC: evaluate WebRTC application performance and scalability | Monitor and analyze WebRTC service quality
Key features | |
Monitor live calls | Yes (data available while the call is taking place) | Yes (data available only after the call ends)
Gather KPIs from real users | Yes | Yes
Generate predictable synthetic data | Yes | No
Scores | Yes | Yes
Measure end-user network quality | Yes | No
Automated stress/sizing testing | Yes | No
Automated functional testing | Yes | No

The Devil is in the Details

At first glance callstats.io and testRTC may seem similar. But take a closer look and you’ll see that they’re more like an apple versus a fruit bowl:

callstats.io offers a point solution for your user monitoring needs

testRTC offers a rich set of solutions covering the whole WebRTC application lifecycle

Oh – and if you want, we invite you to compare callstats.io to our watchRTC product.

testRTC is the first and only solution that lets you test, monitor AND support your WebRTC application. Come check us out!

rtcSetEventsExpectation()

This is a variation of .rtcSetTestExpectation() that works between given events created using .rtcEvent().

Indicates the expected outcome for a specific probe execution between two application defined events. It is used to decide if a test has successfully passed or failed.

The condition is evaluated at the end of the test, based on collected information and metrics. The command can be placed anywhere in the script and can appear multiple times with different constraint values.

testRTC offers additional assertion and expectation commands.

Arguments

Name | Type | Description
criteria | string | The criteria to test. See below for the available options
start-event | string | The starting point in time for the evaluation. Events are created using rtcEvent(). Learn more about event based test expectations
end-event | string | The ending point in time for the evaluation. Events are created using rtcEvent(). Learn more about event based test expectations
message | string | Message to invoke if the criteria isn’t met
level | string | Level of expectation: “error” – error occurred, fail the test; “warning” – consider this as a warning. Default value: error

Criteria

A criteria consists of the metric to test, an operator and a value.

For example: “video.in > 0” will evaluate that the number of incoming video channels is greater than 0.

Operators

The available operators for the criteria are:

  • ==
  • >
  • <
  • >=
  • <=
  • !=
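To make the criteria grammar concrete, here is a tiny evaluator for such strings. This is purely our own sketch for illustration, not testRTC's implementation; the metric names and the `metrics` object shape are assumptions for the example:

```javascript
// Map each supported operator to a comparison function
const OPS = {
  "==": (a, b) => a === b,
  "!=": (a, b) => a !== b,
  ">":  (a, b) => a > b,
  "<":  (a, b) => a < b,
  ">=": (a, b) => a >= b,
  "<=": (a, b) => a <= b,
};

function evaluateCriteria(criteria, metrics) {
  // criteria = "<metric> <operator> <value>", e.g. "video.in > 0"
  const [metric, op, value] = criteria.trim().split(/\s+/);
  // Walk the dotted metric path into the metrics object
  const actual = metric.split(".").reduce((obj, key) => obj[key], metrics);
  return OPS[op](actual, Number(value));
}

// Example: two incoming video channels satisfy "video.in > 0"
console.log(evaluateCriteria("video.in > 0", { video: { in: 2 } })); // true
```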

Complex expectations

You can also use the boolean operators and or or to build more complex expectations.

An example of using it is when you want to check for a certain threshold only on some of the channels. Assume, for example, that you have many incoming channels, but some of them are muted so they have no data flowing on them. You still want to test for frame rate on the active ones. Here’s how you can do that:

client.rtcSetEventsExpectation("video.in.channel.bitrate == 0 or video.in.channel.fps > 0");
Code language: JavaScript (javascript)

Criteria metrics

The criteria is defined as a chained definition of the object we wish to evaluate, and depends on the metric type we wish to access.

The detailed list of available criteria metrics can be found in the description of .rtcSetTestExpectation().

Example

In the example below, the expectations check for the audio bitrate between two events created using .rtcEvent() in the test script. These events are called ‘Limit Network’ and ‘Stop Limit’.

// The below expectation is based on the events used to check network configuration
client
  .rtcSetEventsExpectation("audio.in.bitrate >= 15", 'Limit Network', 'Stop Limit', "audio bitrate too low", 'error')
  .rtcSetEventsExpectation("audio.out.bitrate >= 15", 'Limit Network', 'Stop Limit', "audio bitrate too low", 'error');
Code language: JavaScript (javascript)

DEVICE STATE: What we measure

The Device State Widget shows information collected by other tests, placing it all in a convenient location:

Data we collect and share

Audio   | Indicates if access to an audio device (microphone) was allowed
Device  | Provides the name of the default audio device on the machine (the one used during the test)
Network | Indicates what network type the browser thinks it is using. This is a best-effort parameter, available only on some browsers

Things to notice

Make sure that access to the audio device is allowed.

Check that the device name provided makes sense. Here you’ll be able to notice the use of remote desktop (VDI) solutions or Bluetooth devices, for example.

If your user is on Wi-Fi, it might be worthwhile checking their jitter values and pointing out that they might be too far from the access point if the network seems somewhat “flaky”.

DEVICE STATE widget

The Device State Widget collects useful device information that would otherwise require digging through the logs, and puts it front and center on the results page.

The information shared here focuses on the audio device – the ability to access it, its name and the network used.

Estimated run time: 1 second or less

Customization

During onboarding, we can:

  • Remove fields or add new fields

When to use?

The Device State Widget can be used if you want quick access to that information for your support people. It adds no new data over other tests conducted – it just places it in an easy to reach location.

Usually, clients who have room on their results page and no need for other test widgets will adopt the device state widget.