When multiple users work on the same product and code base (10+ in our case), constantly resolving conflicts in the settings files has become overwhelming. I question why the "In Development" property is maintained at the project settings level; maintaining it at the individual test level would significantly reduce the effort of tracking what is actually in development.
CaptureBrowser() functionality does not work as expected with the Firefox browser on Windows 10 version 2004. The captured image shows only the browser frame and white content of the browser.
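Until this is addressed, one possible workaround is to capture the whole desktop from a coded step instead of relying on CaptureBrowser(). This is a minimal sketch using only standard .NET APIs (System.Drawing and System.Windows.Forms); the class name and output path are illustrative assumptions, not part of the framework:

// Workaround sketch: capture the entire primary screen instead of the
// built-in CaptureBrowser() step. Standard .NET APIs only.
using System.Drawing;
using System.Drawing.Imaging;
using System.Windows.Forms;

public static class ScreenCaptureWorkaround
{
    public static void CaptureFullScreen(string outputPath)
    {
        Rectangle bounds = Screen.PrimaryScreen.Bounds;
        using (var bitmap = new Bitmap(bounds.Width, bounds.Height))
        {
            using (var graphics = Graphics.FromImage(bitmap))
            {
                // Copy the visible desktop, including the Firefox window,
                // into the bitmap.
                graphics.CopyFromScreen(bounds.Location, Point.Empty, bounds.Size);
            }
            bitmap.Save(outputPath, ImageFormat.Png);
        }
    }
}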
There is no direct method in the Testing Framework that can be used to scroll the RadGanttView control in a WPF application.
It would be useful to explore such an implementation.
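One possible avenue in the meantime, sketched below under the assumption that the test can obtain the application's main window handle, is the UI Automation ScrollPattern from the standard UIAutomationClient/UIAutomationTypes assemblies; the AutomationId "ganttView" is a hypothetical placeholder, not a real RadGanttView identifier:

// Workaround sketch: scroll a WPF control via the UI Automation ScrollPattern.
using System;
using System.Windows.Automation;

public static class GanttScrollWorkaround
{
    public static void ScrollPageDown(IntPtr mainWindowHandle)
    {
        AutomationElement window = AutomationElement.FromHandle(mainWindowHandle);

        // Locate the gantt control by AutomationId (hypothetical value).
        AutomationElement gantt = window.FindFirst(
            TreeScope.Descendants,
            new PropertyCondition(AutomationElement.AutomationIdProperty, "ganttView"));

        if (gantt != null &&
            gantt.TryGetCurrentPattern(ScrollPattern.Pattern, out object pattern))
        {
            // Scroll one page down; NoScroll leaves the horizontal axis alone.
            ((ScrollPattern)pattern).Scroll(ScrollAmount.NoScroll, ScrollAmount.LargeIncrement);
        }
    }
}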
A customer application sets window.top to null or undefined at some point, and thus the test execution fails to verify the state of a checkbox with the following error: Executing step 6 "Verify checked value is 'True' on 'PhysiciansCensusTypeICheckBox'"...
When it comes to automated API testing, it would be useful to support human-readable output formats (markdown/html/pdf).
We (dev & qa team) want to use the generated output file as documentation to make the test cases & results easily accessible for the product manager and colleagues without licensed access. The supported xml file already contains a lot of information about the tests.
The human-readable version does not need to be multilingual; English is perfectly sufficient. The format only needs to be a standard like markdown, html, or pdf.
We would need something like this:
C:\>"C:\Program Files\Telerik\Test Studio for APIs\Bin\ApiTesting\runnerconsole\Telerik.ApiTesting.Runner.exe" test -p "C:\DemoTests" -o "C:\result.md" -f markdown
When using Telerik.ApiTesting.Runner.exe to execute API tests, results cannot be output in junitstep format. Using the -f junitstep option when running tests or a test suite throws an error: [ERROR] Not supported test results format
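For illustration, an invocation like the following (the project path is an assumption) hits the error:

C:\>"C:\Program Files\Telerik\Test Studio for APIs\Bin\ApiTesting\runnerconsole\Telerik.ApiTesting.Runner.exe" test -p "C:\DemoTests" -f junitstep
[ERROR] Not supported test results format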
Hi Team, I would like to be able to use a cloud-based browser platform similar to BrowserStack, Sauce Labs, Ghostlab, Browsershots, etc., in order to avoid setting up massive VM farms for distributed browser iteration testing. Thanks!
I have a WPF test and converted one of its steps to code. My project is configured to use Visual Basic as the coding language, and the error is on line 46 of the Pages.g.vb file - BC30201: Expression expected.
There are no issues if the project is configured to use C#.
"Active browser is now null", caused from disposed Manager instance, fails randomly some of the tests. The behavior is not consistent and can't be reliably reproduced, where the same tests fail.
When it comes to automated API testing, it would be useful to additionally support the generated output file in a human-readable format (markdown/html).
We (dev/qa team) would like to attach the file to the story as documentation of the test case, so that the product manager or other colleagues (without licensed access) can easily take a look at the covered cases.
The current xml output (sample attached) already provides a good overview and could be extended with background information in some places (for example <action>).
At this point it is possible to run a WPF application with arguments only as a workaround: starting a separate process in a coded step (a hedged sketch follows below). It would be more useful if this feature were implemented when configuring the WPF app in the test.
The version I am using is not listed below: 2020.1.403.0
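For reference, the workaround amounts to something like the following minimal sketch; the executable path and arguments are hypothetical placeholders:

// Workaround sketch: launch the WPF application under test with command-line
// arguments from a coded step, since the test's WPF app configuration does
// not support arguments yet.
using System.Diagnostics;

public static class WpfAppLauncher
{
    public static Process StartWithArguments()
    {
        var startInfo = new ProcessStartInfo
        {
            FileName = @"C:\Apps\MyWpfApp.exe",   // hypothetical path
            Arguments = "--environment staging",  // hypothetical arguments
            UseShellExecute = false
        };
        return Process.Start(startInfo);
    }
}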
I am noticing an inconsistency when using the Replace Element feature: the attributes selected do not respect the priority set in the Settings - Find Logic (Html) screen.
Please refer to the attached screenshots.
I am logging this feature request on behalf of Eugeniy Gorbovoy. Currently the timeout setting of http requests allows only numeric input and defaults to 2000 ms when left empty. The initial plan is to implement a global project-level timeout setting that all http requests inherit (unless locally overridden); I hope we will deliver it in one of our releases in the near future. This should handle most cases where users want to manage the timeout from a single place instead of manually increasing it for every new http step they create. Still, the idea of accepting a reference to a variable seems appealing, since some users might prefer to have several "tiers" of timeout rules across the test project and manage them using variables. Any comments and shared use cases are appreciated.
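For illustration only (hypothetical syntax, assuming the double-curly-brace variable notation used in requests): a timeout field could accept a value such as {{apiTimeoutMs}}, with apiTimeoutMs defined once per project or per environment (e.g. 2000 for fast endpoints, 30000 for slow reporting calls), giving the "tiers" of timeout rules described above.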
Get the response body in the API test results when tests are executed from the API command-line runner. Currently, the response body is only available in the Test Studio for APIs user interface and is not output in the results.