This article lists all run options that are available to users.
Run options can be added to the run options entry field in the test script editor. They affect how the script is executed, adding flexibility and control over your test runs.
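For illustration, several run options can be combined in the entry field. The values below are placeholders (the options themselves are described in the list that follows), and this sketch assumes options are separated by spaces:

```
#timeout:10 #session:2 #har-file
```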
|#chrome-cli:X||At times, you may want to pass a specific command line switch to the browser for your use case. To that end, you can use this run option in testRTC. Learn more about controlling Chrome command line switches in testRTC.|
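As a sketch, this run option might be used with Chrome's real `use-fake-device-for-media-stream` switch. Whether testRTC expects the switch with or without its leading dashes is covered in the linked article, so treat the exact syntax here as an assumption:

```
#chrome-cli:use-fake-device-for-media-stream
```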
|#disableAudio:X||If you want to force testRTC not to inject its fake audio (and instead use whatever the browser provides by default), you can use #disableAudio:true. Note that this doesn’t simulate the case where the machine has no mic.|
|#disableVideo:X||If you want to force testRTC not to inject its fake video (and instead use whatever the browser provides by default), you can use #disableVideo:true. Note that this doesn’t simulate the case where the machine has no camera.|
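For example, to run with the browser defaults for both media sources (this assumes the two run options can be combined in the same entry field):

```
#disableAudio:true #disableVideo:true
```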
|#ignore-browser-errors||testRTC automatically collects all console logs. If these contain browser errors, they are counted as test errors and cause the test to fail. If you wish to suppress these issues and have testRTC ignore them, use this run option.|
|#ignore-browser-warnings||testRTC automatically collects all console logs. If these contain browser warnings, they are counted as test warnings and cause the test to succeed with warnings. If you wish to suppress these issues and have testRTC ignore them, use this run option.|
|#ignore-nightwatch-warnings||testRTC uses Nightwatch for its scripting language. Nightwatch warnings are counted as warnings in your test results. If you wish to suppress these issues and have testRTC ignore them, use this run option.|
|#disable-browser-logs||testRTC automatically collects all console logs. Sometimes, these can cause failures or add too much “noise”. If you wish to have testRTC skip collecting browser console logs entirely, use this run option.|
|#random-profile||At times, you may want to run the same test script with different machine profiles. To that end, you can use this run option: instead of the usual round-robin assignment of profiles to probes, profiles will be picked at random from the list available on the test script.|
|#session:X||In many test cases, you may want to run different agents that are logically “linked” into sessions. For example, you may want different users to connect to different video chat rooms in the tested system. testRTC supports distributing probes across multiple sessions and, in addition, makes it possible to define a different logic or role for every probe in a session. Learn more about sessions in testRTC.|
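As an illustration, assuming X is the number of probes per session (the exact grouping semantics are detailed in the linked sessions article), running 8 probes with:

```
#session:2
```

would group them into 4 sessions of 2 probes each, e.g. 4 chat rooms with 2 users apiece.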
|#timeout:X||A script’s default maximum duration is 3 minutes. To protect the system’s resources from runaway scripts, the tests manager stops a script that reaches its defined timeout. If you wish to run a longer script, use the timeout run option #timeout:X. Try setting the timeout to a reasonable value that isn’t too high – if tests fail and get “stuck”, the time used will be counted against your account. X is the maximum duration (timeout), in minutes, for every test iteration.|
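For example, to allow each test iteration up to 10 minutes:

```
#timeout:10
```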
|#getstats||Tells testRTC to collect its metrics using the WebRTC getStats API and not only via webrtc-internals. See collection methods and collection failures for more information.|
|#vnc||You can open a VNC connection to the tested instance and track the test’s progress. For further information about how to use VNC, please refer to ‘Using VNC’|
|#webhook:X||When this run option is used, the given webhook is called at the end of the test or monitor run, indicating the status of the test result. Learn more about integrating webhooks at the end of a test run.|
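For example, with a placeholder URL (the URL here is hypothetical; the expected URL format and payload are described in the linked webhooks article):

```
#webhook:https://example.com/testrtc-results
```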
|#dynamic-probe:true||Forces dynamic allocation of probes when running a test. Note that this slows down probe allocation and test execution for tests with a small number of probes.|
|#har-file||Collects a HAR file, which holds all of the browser’s HTTP network traffic. To view this file, use netlog-viewer.|
|#try:N||Indicates how many times to retry this test if it fails before deciding the test has failed. Useful for monitors that fail intermittently on issues you deem to be false positives.|
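For example, assuming N counts the retries after the initial attempt, to retry a failing monitor up to 3 times before marking it failed:

```
#try:3
```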