We’ve introduced a scoring and ranking system for test results. It makes it easier to gauge the quality of different test and monitor runs at a glance, simply by reviewing the score assigned to each run.
The scoring mechanism looks at the various quality metrics testRTC collects during a test run and combines them into a single composite value that measures the overall quality of the run.
Scoring looks at several media-related metrics: bitrate, resolution, and delay. It takes into consideration both their values and their stability over time, and the calculation covers both audio and video.
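To make this concrete, here is a minimal sketch of how such a composite 0-10 score could be derived from these metrics. The normalization, weights, and the 150 ms/300 ms delay targets are illustrative assumptions, not testRTC's actual formula.

```python
# Hypothetical sketch of a composite 0-10 quality score built from media
# metrics. Names, weights, and thresholds are illustrative assumptions.

def normalize(value, optimum):
    """Map a metric onto 0..1, where reaching the optimum scores 1.0."""
    return min(value / optimum, 1.0)

def composite_score(audio_bitrate_kbps, video_bitrate_kbps, delay_ms):
    # Assumed optima: 40 kbps Opus audio and 1500 kbps video, plus an
    # illustrative delay target where 300 ms or more scores zero.
    audio = normalize(audio_bitrate_kbps, 40)
    video = normalize(video_bitrate_kbps, 1500)
    # Lower delay is better, so invert: 0 ms -> 1.0, >= 300 ms -> 0.0.
    delay = max(0.0, 1.0 - delay_ms / 300)
    # Equal weights, scaled to the 0-10 range used for the quality score.
    return round(10 * (audio + video + delay) / 3, 1)

print(composite_score(40, 1500, 30))  # near-optimal run scores high
```

A real implementation would also fold in the stability of each metric over time, not just its average value.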
That depends a lot on your scenario. Since there is a large variety of use cases, we decided against a static 1-5 scale, which would give the wrong impression of an absolute grade. This is why our scale uses values of 0-10 instead.
The way to use quality score values is by comparing them across tests and monitor runs. The higher the score, the better the metrics. If you run a monitor over a long period of time, you can see how its media quality behaves over time by tracking the quality score. If you run a stress test, you can see how the quality score is affected as you add more and more probes to the test runs.
For audio (Opus) we consider 40 kbps optimal, and for video 1.5 Mbps. That said, there are no universally optimal values besides the ones you consider optimal for your own service.
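As an illustration only, one way to turn a measured bitrate into a sub-score relative to these optima is to normalize and cap it. The capping behavior here is an assumption, not testRTC's documented formula.

```python
# Illustrative bitrate sub-score relative to an optimum
# (e.g. 40 kbps for Opus audio, 1500 kbps for video).

def bitrate_subscore(measured_kbps, optimal_kbps):
    """Score 0..10: reaching (or exceeding) the optimum earns the full 10."""
    return round(10 * min(measured_kbps / optimal_kbps, 1.0), 1)

print(bitrate_subscore(20, 40))      # audio at half the optimum
print(bitrate_subscore(2000, 1500))  # video above the optimum is capped
```

Capping above the optimum reflects the idea that pushing more bits than needed does not improve perceived quality.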
The minimum resolution for an optimal score is 1280×720.
Yes. Variation in bitrate during the course of a test inversely affects the score: the more stable the bitrate, the better the score will be.