Every test we have has a Login test associated as a step, so when creating a new test this always needs to come first. Say we have 400 tests: that's 400 checkmarks we need to make sure are unchecked before running a test list (we don't want the login to run twice). Then say a handful of tests fail and we need to go back to editing them; we have to re-toggle that checkmark on the single test, and then remember to remove it later (which someone almost always forgets). It would be great to just select that step and have a property to say "Ignore in Test List", so when test X is added to the list, that STEP simply gets ignored. This would save us so much manual work... no more checkmark toggles!
Steps to reproduce:
1. Connect a project to Team Foundation Service
2. Create a test list
3. Schedule it with the 'Get Latest' option checked

Expected behavior: The test list executes successfully at the scheduled time.
Actual behavior: An error appears that the source control server cannot be reached, and the test list is not executed.
If I have an element of ControlType HtmlUnorderedList in my elements list, why is there no default verification step to validate item counts? It seems like a no-brainer. It would further improve test creation for this to be available in the recorder as well.
We need a way to support elements that are common across pages, such as Master Page objects (headers, footers, navigation, etc.). These elements should be defined once in the repository, but not locked to a particular page. I should be able to reference them in any step/action on a page where those objects are included.
I would expect this feature to show up under the Common section along with Refresh Browser. With many AJAX requests, Test Studio does not recognize changes, and I would like to avoid a coded step that calls ActiveBrowser.RefreshDom().
In the Test Studio UI there is currently no way to check out the entire project with all its files, or, when checking out a folder, to automatically check out all files in it. This is very useful in certain situations, so it would be great to implement such behavior. In Visual Studio there is a dialog from which you select which items (in a project or folder) to check out; we could go in this direction as well.
Hi, Telerik Test Studio currently has all the basic asserts for UI-level validation. We are finding some gaps in the form of missing Assert.Fail and Assert.Ignore commands. Sometimes, apart from validating UI elements or conditions, we need to determine test pass/fail based on return values from one or more functions. For that purpose we are looking for Assert.Fail / Assert.Ignore similar to the ones available in the NUnit / MSTest frameworks. Thanks, VVP
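To illustrate the semantics being requested, Python's unittest exposes the same two primitives; a minimal sketch (the helper functions are hypothetical stand-ins, not Test Studio API):

```python
import unittest

# Hypothetical stubs standing in for the application functions whose
# return values should gate the test result.
def compute_something():
    return "expected"

def precondition_met():
    return False

class ReturnValueChecks(unittest.TestCase):
    def test_fail_on_bad_return_value(self):
        result = compute_something()
        if result != "expected":
            # Equivalent of the requested Assert.Fail:
            # mark the test failed unconditionally, with a message.
            self.fail("unexpected return value: %r" % result)

    def test_ignore_when_precondition_missing(self):
        if not precondition_met():
            # Equivalent of the requested Assert.Ignore:
            # the test is reported as skipped, not passed or failed.
            raise unittest.SkipTest("precondition not met")
```

The key point is the reporting: a failed assert surfaces as a failure with a message, while an ignore surfaces as a skip rather than silently passing.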
I have implemented IExecutionExtension for generating custom reports after script execution. I am able to get the total execution time for a test, but I also need the execution time of each coded step. For example, I have a test named GmailVerification with three coded steps: a) Navigation, b) Login, c) Verification. I get the total time for GmailVerification, but not for Navigation, Login, or Verification. Can you please implement a feature so that I can calculate each coded step's execution duration? Thanks, VVP
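To show the bookkeeping being asked for, here is a language-neutral sketch (in Python, with made-up step names mirroring the GmailVerification example) of timing each step and reporting per-step durations alongside the test total:

```python
import time

def run_timed_steps(steps):
    """Run each (name, fn) step and record its wall-clock duration,
    the way per-coded-step timing could be reported."""
    durations = {}
    start = time.perf_counter()
    for name, fn in steps:
        t0 = time.perf_counter()
        fn()
        durations[name] = time.perf_counter() - t0
    total = time.perf_counter() - start
    return durations, total

# Hypothetical steps standing in for the coded steps named above.
durations, total = run_timed_steps([
    ("Navigation",   lambda: time.sleep(0.01)),
    ("Login",        lambda: time.sleep(0.01)),
    ("Verification", lambda: time.sleep(0.01)),
])
```

With this shape of data, a custom report can show both the per-step breakdown and the total, which is exactly what the extension currently cannot provide.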
Telerik should offer the capability to execute individual coded steps via the command line. There may be many coded steps in a .tstest file, and sometimes we want to execute only specific ones. If ArtOfTest.Runner could provide that, it would allow much more flexibility for automation.
Hi, I am looking for an enhanced capability of ArtOfTest.Runner to execute multiple tests via the command line, e.g. "ArtOfTest.Runner test1 test2 test3". Sometimes we need to dynamically choose tests and pass them in for execution. Dynamic lists exist, but this capability would be a more straightforward approach, and other test tools offer it.
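Until multiple test arguments are supported, the workaround is to launch the runner once per test. A sketch of the command construction (the `test=` switch reflects our current usage; the runner name and file names here are illustrative, so double-check against the ArtOfTest.Runner documentation):

```python
def build_runner_commands(tests, runner='ArtOfTest.Runner.exe'):
    """Build one command line per test, as a workaround for the
    missing multi-test invocation requested above."""
    return ['%s test="%s"' % (runner, t) for t in tests]

# Dynamically chosen tests (hypothetical file names).
cmds = build_runner_commands(["test1.tstest", "test2.tstest", "test3.tstest"])
```

A single multi-test invocation would replace this loop of separate runner processes, which also means one consolidated result instead of three.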
Currently the default typing speed for simulated real typing is 100 ms key-down time plus a 10 ms delay between keys. That is roughly 9 characters per second, which is rather slow for most applications. It would be nice to have one (or more) global settings:
- An option to change the recording default
- A global override to control the typing speed for an entire test (a test-level setting) and/or project (a project-level setting)

This applies to both HTML- and XAML-based applications.
Current behavior of Test Studio when editing a page property field: the element repository is automatically rebuilt after each field change. This works fine with relatively small projects, but it becomes inconvenient in large projects (over 1000 tests with 40+ elements addressed), where the refresh can take up to 10 minutes per field. Could this behavior be changed to match the behavior of editing frame properties? Editing frame properties allows multiple property fields to be changed, with the element repository refreshed manually once you are finished for the changes to take effect.
When merging a manually added element in a WPF test with an automatically detected one, the control type may differ and the merge may fail. If the project is saved after each change to the control type and/or element find logic, it works fine. An automatic refresh of the Element Repository display when updating the find logic and/or control type would be helpful, since it often does not show the underlying merged element.
We have 3 test lists that differ only by BaseUrl. When we execute them and open the 'Performance: History view' grid for a specific test, we see a lot of test run results, but it is not clear which run was made on which environment, because there is no such info in the grid. We would like to see the BaseUrl in that grid.
When a script step is selected, its description does not remain visible.
Please add the browser name and version to the Excel sheet generated by the scheduled notification email.
During recording, images are captured of the active screens. When these images are viewed after the recording is made, their resolution is quite low, making it difficult to make out objects on the captured pages. The same is true when a failure occurs: the failure image is also low resolution, making it difficult to compare the original recording image with the failure image. I would like the tester to be able to choose the capture resolution used during recording, and to get full resolution at the point of failure.