When setting up an upRTC WebRTC monitoring script, your goal is to keep the number of false positives to a minimum. You don’t want to get alerted when what is broken is the test script itself and not your service.
Here are a few suggestions that you should implement in your monitoring script to reduce the number of potential false positives.
1. Total run time
The total run time of a monitor needs to be short for it to be effective.
Since the decision of passed/failed for the monitor occurs at the end of the run, the longer the test, the longer it will take for you to be notified about a potential issue.
Keep the total test length at 3-4 minutes. Less if you can handle it.
Have the media flowing in the session itself at 60-120 seconds, so that there’s enough data to look at and analyze.
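That timing budget can be sketched in testRTC’s Nightwatch-based script syntax roughly as follows. The `#join` and `#hangup` selectors are hypothetical placeholders for your own UI, and the `RTC_SERVICE_URL` environment variable is assumed to hold the service’s entry URL:

```javascript
// Hypothetical monitor flow: short setup, ~90 seconds of media, quick teardown
client
  .url(process.env.RTC_SERVICE_URL)        // assumed env var with the service URL
  .waitForElementVisible('#join', 30000)   // hypothetical join button
  .click('#join')
  .pause(90 * 1000)                        // keep media flowing for ~90 seconds
  .waitForElementVisible('#hangup', 10000) // hypothetical hang-up button
  .click('#hangup');
```

The single 90-second `.pause()` is where the media analysis data comes from; everything around it should be as short as possible.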
2. Reduce the number of fields and forms that need to be filled
If your service has long forms or many fields that need to be filled before getting to the media itself, then this might be a bit of a challenge to automate. The reason is that automation tools occasionally fail when filling in UI form elements, which again leads to false positives.
Our suggestion here is to reduce the number of fields and forms that the script will need to fill to get to the actual media. If possible, have these filled out using an API call, making the flow more deterministic and simple in nature.
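One way to sidestep form filling altogether is to carry the needed values in the URL instead of typing them into fields. This is a sketch, assuming your service accepts hypothetical `room` and `name` query parameters and that `RTC_SERVICE_URL` and `RTC_SESSION_IDX` environment variables are available to the script:

```javascript
// Instead of filling a registration form field by field, join via a deep link
// that carries the room and display name (both parameters are hypothetical)
var room = 'monitor-' + process.env.RTC_SESSION_IDX; // assumed per-run index
client
  .url(process.env.RTC_SERVICE_URL + '?room=' + room + '&name=monitor-probe')
  .waitForElementVisible('#join', 30000) // hypothetical join button
  .click('#join');
```

One deterministic navigation step replaces several fragile typing steps, which is exactly what a monitor wants.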
3. Take care of clean up via an API
Oftentimes, getting a monitor to run repeatedly on a schedule is a bit of a challenge. Your application might have some logic that keeps session state or user state open, or in a condition that won’t let the next session start properly.
This usually boils down to call queues that are clogged or not properly emptied, sessions marked as open when they shouldn’t be, etc.
You can get into such a condition when a single sporadic failure prevents the script scenario used by the monitor from running start to finish cleanly, leaving things hanging around and causing failures in the next runs.
If your service is like that, you should have an API or a similar mechanism that can remotely clean up the context that the monitoring script uses. The script should call this API at the beginning using .rtcActivateWebhook(), to make sure it is running properly and that the session that is about to start isn’t affected by stale data from a previous run.
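A minimal sketch of that cleanup call, assuming .rtcActivateWebhook() takes the webhook URL as its argument (the URL itself is hypothetical and would point at your own cleanup endpoint):

```javascript
// Call a cleanup webhook before the session starts, so stale state from a
// previous run can't affect this one (the endpoint URL is hypothetical)
client
  .rtcActivateWebhook('https://example.com/api/monitor/cleanup')
  .pause(2000); // give the backend a moment to finish cleaning up
```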
4. Avoid user-related popups
Some services like to have popups and messages appear to users. These are used to collect metrics and statistics like NPS (Net Promoter Score), customer feedback, or to notify a user about new features or promotions.
These tend to kill automation, since they change the behavior of the UI. While humans can easily avoid such distractions and close redundant windows, automation will have a hard time anticipating them and scripting for them in advance.
Our suggestion? Make sure the user account used by the monitoring script has all of these marketing automation tools disabled.
5. Use waitForElementVisible() instead of pause()
This is just best practices in writing automation scripts, but it is doubly important in WebRTC stress testing, where processing time can stretch a bit longer than usual.
When waiting for an element prior to clicking it, don’t use a .pause() statement with the intent of letting the browser load the new page or show the button it needs to – wait for that button to appear using .waitForElementVisible().
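A minimal sketch of that pattern, assuming a hypothetical #login button:

```javascript
// Wait for the login button to actually appear before clicking it,
// instead of pausing for a fixed amount of time and hoping it loaded
client
  .waitForElementVisible('#login', 30000) // up to 30 seconds
  .click('#login');
```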
The above code snippet will wait up to 30 seconds for the #login button to appear on the screen and then click on it.
6. Use pause() after waitForElementVisible()
Now that you are using .waitForElementVisible(), here’s a non-obvious suggestion – add a .pause() after it, specifically when you are running a load test. The reason is that sometimes, with slow servers and high CPU load, there can be delays between changes in the DOM and the actual screen rendering.
In such cases, .waitForElementVisible() will return but it might be too early for the .click() command.
Here’s how to add that .pause():
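A sketch, using the same hypothetical #login button as before:

```javascript
// Wait for the button, then add a short breather between DOM readiness
// and the click, to absorb rendering delays under high CPU load
client
  .waitForElementVisible('#login', 30000)
  .pause(300) // a few hundred milliseconds is usually enough
  .click('#login');
```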