Join us to Learn More About WebRTC in Education

Education and e-learning are among the largest market niches adopting WebRTC.

It probably has to do with WebRTC's no-fuss approach, coupled with the ability to hook it up to different business processes. This enables education and LMS vendors to integrate WebRTC directly into their products, so customers don't need to install 3rd party apps or deal with multiple systems.

What we’ve seen at testRTC is a large swath of education use cases:

  • Private 1:1 tutoring lessons
  • Class-type systems, where a single teacher facilitates the learning of multiple students
  • Webinar-type services, where a few active participants are broadcast to a larger audience
  • MOOC (Massive Open Online Course)
  • Marketplace systems, brandable sites and widgets, aggregators of courses

We’d like to share our experiences with you and show you some of these use cases and the challenges they bring to developers of such systems.

Join our Webinar on WebRTC in Education

Join us on Wednesday, December 14 at 14:30 EDT to learn more about this fascinating new frontier in real-time education.

If you already have questions for us, just register for the event and add them on the registration page. We'll keep them for the webinar itself.

Reserve your spot now

Check out the enhancements we’ve made to testRTC

It has been a while since we released a version, so it is with great pleasure that I am writing this announcement.

Yes. Our latest release is now out in the wild. We upgraded our service on Sunday, so it is about time we take you through a quick roundup of the changes we've made.

#1 – Support for projects and users

This one is long overdue. Up until today, if you signed up for testRTC, you had to share your credentials with everyone on your team who needed to work on the tests. That quickly became unworkable if you wanted QA, R&D and DevOps to share the account and collaborate on the tests and monitors logged inside testRTC.

So we did what we should have – we now support two modes of operation:

  1. A user can be linked to multiple projects
    • So if your company is running multiple projects, you can now run them separately, having people focused on their own environment and tests
    • This is great for those who run segregated services for their own customers
    • It also means that a user can now switch between projects with a single set of credentials in the system
  2. A project can belong to multiple users
    • Need someone to work on writing the scripts and executing them? You got it
    • Have a developer working on a bug that got reported with a link to testRTC? Sure thing
    • The IT guy who just received a downtime alarm from the WebRTC monitor we run? That’s another user
    • Each user has their own place in the project, distinguished by their own credentials

testRTC project selection

If you require multiple projects or want to add more users to your account, just contact our support.

#2 – Longer, bigger tests

In theory, testRTC can run tests of any length and size, but things aren't always that easy.

There are usually two limiting factors:

  1. The time it takes to prepare, execute and collect results
  2. The time it takes to analyze the results

We worked hard in this release on both elements and got to a point where we’re quite happy with the results.

If you need long tests, we can handle those. One of the main concerns with long tests is what to do if you made a mistake while configuring them. Now you can cancel such tests mid-run if necessary.

Canceling a test run

If you need to scale tests to a large number of browsers – we can do that too.

We make sure to bubble up the essentials from the browsers, so you don't have to rummage through hundreds of browser logs to find out what went wrong. To that end, the tables that show browser results have been reworked and are now sorted to show failures first.

#3 – Advanced WebRTC analysis

We've noticed in the past few months that some of our customers are rather hardcore. They are technology savvy and know their way around WebRTC. For them, the graphs we offer of bitrates, latencies, packet losses and the like are just not enough.

Chrome's webrtc-internals and getStats() offer a wealth of additional information that, until now, we exposed only as a JSON file download. Well… now we also visualize it on request, right from the report itself:

Advanced WebRTC graphs

These graphs are reachable by clicking the webrtc_internals_dump.txt link under the Logs tab of a test result, or by clicking the Advanced WebRTC Analytics button located just below the channels list:

Access advanced WebRTC graphs

I’d like to thank Fippo for the work he did (webrtc-dump-importer) – we adopted it for this feature.
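
By the way, the data behind these graphs is the same data any WebRTC application can pull for itself through the standard getStats() API. Here's a minimal browser-side sketch (plain JavaScript, not a testRTC script) that samples the outgoing video bitrate once per second. It assumes a connected RTCPeerConnection named pc already exists in your app and that the browser supports the spec's promise-based getStats():

    // Minimal sketch: sample the outgoing video bitrate once per second.
    // Assumes `pc` is an existing, connected RTCPeerConnection in your app.
    let lastBytes = 0;
    let lastTimestamp = 0;

    setInterval(async () => {
      const stats = await pc.getStats();
      stats.forEach((report) => {
        if (report.type === 'outbound-rtp' && report.kind === 'video') {
          if (lastTimestamp > 0) {
            // bytes -> bits; timestamps are in milliseconds, so this is kbps
            const kbps = (8 * (report.bytesSent - lastBytes)) / (report.timestamp - lastTimestamp);
            console.log('outgoing video bitrate:', Math.round(kbps), 'kbps');
          }
          lastBytes = report.bytesSent;
          lastTimestamp = report.timestamp;
        }
      });
    }, 1000);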

#4 – Simulation of call drops and dynamic network changes

This is something we've been asked about more than once. We can model the network of our probes, so that the browser runs behind a specific firewall configuration or over a specific type of simulated network. We tweak the profiles we offer from time to time, but now we've added a script command so that you can change this configuration at runtime.

What can you do with it? Run two minutes of a test at 2 Mbps, then close off virtually everything for 20-30 seconds, then open up the network again – and see what happens. It is a way to test how WebRTC behaves in your application under dynamic network conditions – ones that may require ICE restarts.

Dynamically changing network profile in testRTC

In the test above, we dynamically changed the network profile in mid-call to starve WebRTC and see how it affects the test.

How do you use this new capability? Use our new command rtcSetNetworkProfile(). Read all about it in our knowledge base: rtcSetNetworkProfile()
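
To give you a feel for it, here's a rough, Nightwatch-style sketch of what such a script might look like. The exact arguments that rtcSetNetworkProfile() accepts are described in the knowledge base article – the profile values below are illustrative only:

    // Rough sketch of a dynamic-network test script (Nightwatch-style).
    // The exact arguments of rtcSetNetworkProfile() are documented in the
    // knowledge base - the profile values used here are illustrative only.
    client
      .pause(120 * 1000)                                  // ~2 minutes on a good network
      .rtcSetNetworkProfile('custom', 'bandwidth', 50)    // illustrative: choke the network
      .pause(30 * 1000)                                   // 20-30 seconds of near-outage
      .rtcSetNetworkProfile('')                           // illustrative: back to an open network
      .pause(60 * 1000);                                  // watch the session recover (ICE restart?)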

#5 – Additional test expectations

We had the basics covered when it came to expectations. You could check the number and types of channels, validate that some bits were actually flowing, and validate packet loss. And that's about it.

To the list of capabilities in rtcSetTestExpectations() we've now added expectations related to jitter, video resolution, frame rate, and call setup time. We've also taken the time to handle expectations on empty channels a lot better.

There’s really nothing new here, besides an enhancement of what rtcSetTestExpectations() can do.
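
To give a sense of it, a few of the new expectations might look roughly like this in a script. The exact command name and criteria syntax are spelled out in our knowledge base, so treat the lines below as illustrative only:

    // Illustrative only - the exact command name (rtcSetTestExpectation vs.
    // rtcSetTestExpectations) and criteria syntax are in the knowledge base.
    client
      .rtcSetTestExpectation('audio.in.jitter < 30')     // illustrative jitter criterion (ms)
      .rtcSetTestExpectation('video.in.fps >= 20')       // illustrative frame rate criterion
      .rtcSetTestExpectation('video.in.height >= 480')   // illustrative resolution criterion
      .rtcSetTestExpectation('call.setupTime < 5000');   // illustrative call setup time (ms)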

#6 – Additional information in Webhook responses

testRTC can notify your backend of the status of a test or monitor run – success or failure – whenever that run ends. This is done by configuring a webhook that is called at the end of the run. We've had customers use it to push the results into their own internal monitoring systems, such as Splunk and Elasticsearch.

The payload passed with the webhook used to be rather thin, and while we're still trying to keep it simple, we now include the leading error in that response in case of failure:

testRTC webhook test failure response
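
If you want to experiment with this, the receiving end doesn't have to be anything fancy. Here's a minimal Node.js sketch of a webhook endpoint – the field names it checks (status, error) are placeholders, so inspect the actual payload you receive before relying on them:

    // Minimal Node.js/Express sketch of a webhook receiver.
    // The field names checked below (`status`, `error`) are placeholders -
    // inspect the payload your webhook actually receives.
    const express = require('express');
    const app = express();
    app.use(express.json());

    app.post('/testrtc-webhook', (req, res) => {
      const result = req.body;
      if (result.status !== 'completed') {
        // Forward the leading error to your own alerting or monitoring system
        console.error('testRTC run failed:', result.error || JSON.stringify(result));
      }
      res.sendStatus(200);
    });

    app.listen(3000);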

#7 – API enabled to all customers

Yes. We had APIs in the past, but there was friction involved: customers needed to ask for their API key in order to use the API in their continuous integration plans. It worked well, but the number of people asking for API keys – both paying customers and prospects under evaluation – rose to a point where it was ridiculous to keep doing this manually. Especially when our intent is for customers to use our APIs.

So we took this one step forward. From now on, every account has an API key by default. That API key is accessible from the account's dashboard when you log in, so there's no need to ask for it any longer.

testRTC API key

For those of you who have been using it – note that we’ve also reset your key to a new value.
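
Once you have the key, wiring testRTC into a CI job is just a couple of HTTP calls away. The Node.js sketch below uses an illustrative host, path and header name – the real ones are in our API documentation:

    // Rough sketch of kicking off a test run from a CI job via the REST API.
    // The hostname, path and header name here are illustrative only -
    // see the API documentation for the real ones.
    const https = require('https');

    const req = https.request({
      hostname: 'api.testrtc.com',             // illustrative host
      path: '/v1/tests/MyStressTest/run',      // illustrative path
      method: 'POST',
      headers: { apikey: process.env.TESTRTC_API_KEY }  // illustrative header name
    }, (res) => {
      console.log('testRTC responded with HTTP', res.statusCode);
    });

    req.end();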

Your turn

This has been quite a big release for us, and I am sure I've missed an enhancement or two (or more) in this roundup.

Now back to you. How would you want to test WebRTC in your product?

Introducing: Our Brand New Dashboard

We’ve been working hard these past two months, ever since we got our previous release out the door. This time, we invested a lot of time and thought on the small items. And one big item as well.

All over the service, you’ll notice some slight changes to the UI. This is an ongoing process to fine-tune the service and make it simpler to use for our customers.

The biggest visible addition to our latest release is the introduction of a new user dashboard.

From now on, when a user logs in, they get a bird's-eye view of their activities in testRTC:

testRTC dashboard

What can you see on the dashboard?

Usage

This area of the dashboard highlights the usage on your account.

It lets you understand what resources are available to you, so that when you want to run a stress test, you know you'll have enough browsers at your disposal.

If you want to do ad-hoc testing with more browsers than are available in your account, you'll need to holler at us and we'll enable more browsers on your account for a period of time.

Stats

This area shows statistics on your usage over a span of time. It is quite useful for managers who want to understand how many tests were conducted and how they fared.

  • In red, we indicate tests and monitor executions that failed for the period selected
  • In green, we indicate tests and monitor executions that succeeded for the period selected
  • In blue, we indicate the total number of tests and monitor executions for the period selected

And you can select a different period to look at.

Active Monitors

This area indicates which monitors are up and running at the moment, along with the status of their most recent execution.

Clicking any of the rows takes you to the monitor run results, filtered for that specific monitor.

Recent Tests

This area shows the last five tests that were executed, along with their results.

As with the active monitors, clicking a test takes you to its results.

News and Announcements

This area shows some news and announcements we have for our users.

What’s Next?

Consider the dashboard a work in progress. We’re sure there’s much to be improved here. We wanted to get this out the door and into the hands of our users. Ping us if you have any suggestions on how to improve it.

 

If you need to test or monitor a WebRTC product – don’t be shy – sign up for testRTC.

testRTC Update: May 2016

Yesterday, we released our latest update to testRTC.

This new version started with the intent to “just replace backend tables” and grew from there due to the usual feature creep. Wanting to serve the needs of our customers does that to us.

Anyway, here’s what you should expect from this updated version:

  • Improved backend tables. We've rewritten our tables' code
    • This makes them load faster when they are full – especially the monitor run history tables, which tend to fill up fast and then load slowly
    • We used the time to add pagination, filters and search capabilities
  • Report results improvements
    • The results tabs for specific agents are now easier to navigate
    • Warnings and errors are collected, aggregated and can be filtered based on their type
    • All collected files can now be viewed or downloaded in the same manner
  • Automatic screenshot on failure
    • Whenever a test fails, we try to take a screenshot of that last moment
    • This can be very useful for debugging and post mortem analysis of test results
  • Test import/export
    • We’ve added the ability to import and export tests
    • This got us into serializing our tests' information as JSON objects – a characteristic we will make use of in future versions
  • New webhook at the end of a test/monitor run. You can now call an external system with the result of a test or monitor
    • We’ve seen people use it to push our results into Splunk
    • They can generally be used as a good programmable alerting mechanism on top of the monitoring service
    • More on how to use the new webhooks
  • APIs
    • We've added RESTful APIs to our service, so you can now execute tests and retrieve their results programmatically
    • This is quite useful for continuous integration scenarios
    • Our API documentation is available online
  • More expectations on channels
    • We've added the ability to set expectations on channels based also on the total data, number of packets and round trip time
    • Check out rtcSetTestExpectation for more information

We are also introducing audio MOS in limited beta – contact us if you’d like to try it out.

Meet us at WebRTC Global Summit

Next week, I will be presenting at WebRTC Global Summit in London.

WebRTC Global Summit is one of the main European events focused on WebRTC.

This time around, the event is split into three separate tracks:

  1. Developer track, taking place on April 11, where I will be speaking about video codecs in WebRTC and chairing the day
  2. Telecom track, taking place in parallel to the Developer track
  3. Enterprise track, taking place on April 12, where I will be speaking about testing challenges of WebRTC in the enterprise

WebRTC brings with it new challenges when it comes to testing and monitoring, and this will be my main focus in the session on the Enterprise track. To give a few examples of where these challenges lie:

  • You are now reliant on the browsers and how they implement and update their support for WebRTC
  • What is it that you test and monitor? I’ve seen services fail because a connection to a directory service was flaky or a NAT wasn’t configured properly
  • How do you simulate different network conditions for the browsers you use during testing?

These types of challenges, and how to deal with them, are things I will be raising during the session.

On another topic, until the recording from the recent Kranky Geek India event becomes available, here's the presentation I gave there:

Back to WebRTC Global Summit: there's a solid agenda, and the event is free to attend for telcos, enterprises and developers. If you are in London, I highly recommend you register and come to the event. I'll be happy to chat with you about anything WebRTC related.