Pending Review
Last Updated: 09 May 2016 14:32 by Steve
Every test we have includes a Login test as an associated step, so when creating a new test this always needs to come first.

So let's say we have 400 tests; that's 400 checkmarks we need to make sure are unchecked before running a test list (we don't want the login to trip twice). Then let's say a handful of tests fail and we need to go back and edit them: we have to re-toggle that checkmark on the single test, and then remember to remove it later (which someone almost always forgets).

It would be great to just select that step and have a property that says "Ignore in Test List", so that when test X is added to the list, that STEP just gets ignored...

This would save us so much manual work... no more checkmark toggles!
Pending Review
Last Updated: 29 Apr 2016 06:07 by ADMIN
Steps to reproduce: 

1. Connect a project to Team Foundation Service

2. Create a test list 

3. Schedule it with option 'Get Latest' checked. 

Expected behavior: The test list executes successfully at the scheduled time.

Actual: An error appears that the source control server cannot be reached, and the test list is not executed.
Pending Review
Last Updated: 28 Apr 2016 18:28 by Steve
Created by: Larry
Comments: 1
Type: Feature Request
3
Please add Ctrl+F and Ctrl+H to the code-behind views to be able to search and replace text.

Please add Ctrl+Tab functionality to be able to switch between open files.
Pending Review
Last Updated: 28 Apr 2016 18:28 by Steve
Created by: Steve
Comments: 1
Type: Feature Request
0
If I have an element in my elements list of ControlType HtmlUnorderedList, why is there no default verification step to validate item counts? It seems like a no-brainer. Having it available in the recorder as well would improve test creation even further.
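For reference, this is roughly what the coded-step workaround looks like today (a minimal sketch only, not a definitive implementation; "productList" and the expected count of 5 are made-up values, and Items/Assert refer to the WebAii wrapper and assert helper the coded step happens to use):

    // Minimal coded-step sketch: verify the item count of an unordered list.
    // "productList" is a hypothetical element id; 5 is an example count.
    HtmlUnorderedList list = Find.ById<HtmlUnorderedList>("productList");
    Assert.IsTrue(list.Items.Count == 5,
        "Expected 5 list items but found " + list.Items.Count);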
Pending Review
Last Updated: 28 Apr 2016 18:00 by Steve
Created by: Steve
Comments: 0
Type: Feature Request
0
I would like to be able to wait on a specific AJAX call in a test step, so not just an AJAX timeout, but something like (pseudo):
- Wait for route /RestApi/thing/1 
- Once complete, continue test execution
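Until such a step exists, the closest coded-step workaround I can think of is polling for something the page renders once that call has returned (a rough sketch under assumptions: the "thing-loaded" element id and the 30-second budget are placeholders, not real application details):

    // Rough workaround sketch: poll for a DOM marker that appears once
    // /RestApi/thing/1 has completed. "thing-loaded" is a hypothetical id.
    Element marker = null;
    for (int i = 0; i < 60 && marker == null; i++)   // up to ~30 seconds
    {
        System.Threading.Thread.Sleep(500);
        ActiveBrowser.RefreshDom();
        marker = Find.ById("thing-loaded");
    }
    Assert.IsTrue(marker != null, "Timed out waiting for /RestApi/thing/1.");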
Pending Review
Last Updated: 28 Apr 2016 17:11 by Steve
ADMIN
Created by: Jim
Comments: 1
Type: Feature Request
2
We need to include a way to support elements that are common across a page, such as Master Page objects like headers, footers, navigation, etc.

These elements should be defined uniquely in the repository, but not locked to a particular page. I should be able to reference them in any step/action on a page where those objects are included.
Pending Review
Last Updated: 28 Apr 2016 09:48 by ADMIN
Created by: Larry
Comments: 0
Type: Feature Request
0
I expect this feature to show up under the Common section along with Refresh Browser.

With many AJAX requests, Test Studio does not recognize the changes, and I would like to avoid having to add a coded step that calls ActiveBrowser.RefreshDom().
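For context, the coded-step workaround this built-in step would replace is essentially the following (a minimal sketch):

    // Wait for the browser, then force Test Studio to re-read the DOM.
    ActiveBrowser.WaitUntilReady();
    ActiveBrowser.RefreshDom();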
Pending Review
Last Updated: 28 Apr 2016 09:48 by ADMIN
In the Test Studio UI there is currently no way to check out the entire project with all its files, or to automatically check out all files in a folder when checking out that folder. Being able to do that is very useful in certain situations, so it would be great if we could implement such behavior. In Visual Studio there is a dialog from which you select which items (in the project or folder) to check out, so we could go in this direction as well.
Pending Review
Last Updated: 20 Apr 2016 06:13 by VVP
Hi,

Currently Telerik Test Studio has all the basic asserts for UI-level validation.
We are finding some gaps in the form of missing Assert.Fail and Assert.Ignore commands.

Sometimes, apart from validating UI elements or conditions, we need to determine whether a test passes or fails based on the return values of one or more functions.

For that purpose we are looking for Assert.Fail / Assert.Ignore similar to the ones available in the NUnit/MSTest frameworks.
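For illustration, this is how those asserts read in NUnit today; something equivalent inside a Test Studio coded step is what we are asking for (CheckBackendData() and environmentReady are hypothetical names used only for the example):

    // NUnit-style usage we would like to mirror in coded steps.
    if (!CheckBackendData())            // hypothetical helper function
        Assert.Fail("Backend call returned no data.");
    if (!environmentReady)              // hypothetical flag
        Assert.Ignore("Environment not provisioned, skipping this run.");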

Thanks,
VVP
Pending Review
Last Updated: 19 Apr 2016 09:40 by VVP
I have implemented "IExecutionExtension' for generating custom reports after script execution.
I am able to get total execution time for a test . But i also need to find out test time for each coded steps.

For eg - I have a test named GmailVerification and 3 codedsteps named 
a) Navigation, b) Login c) Verification.
 I am getting total time for GmailVerification , but not for Navigation or Login or Verification.

Can you please implement feature so that i can calculate each Codedstep execution duration.
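As a stopgap we currently time steps by hand, roughly like the sketch below (assumptions only: the step name is an example, and Log.WriteLine stands in for whatever logging the coded step has available):

    // Stopgap sketch: time a coded step manually with a Stopwatch and write
    // the duration to the execution log.
    [CodedStep(@"Login")]
    public void Login()
    {
        var watch = System.Diagnostics.Stopwatch.StartNew();
        // ... existing Login actions ...
        watch.Stop();
        Log.WriteLine("Login took " + watch.ElapsedMilliseconds + " ms");
    }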

Thanks,
VVP
Pending Review
Last Updated: 19 Apr 2016 09:34 by VVP
Created by: VVP
Comments: 2
Type: Feature Request
1
Telerik should offer the capability to execute individual coded steps via the command line. There may be many coded steps in a .tstest file, and sometimes we want to execute only specific ones.
So if ArtOfTest.Runner can provide that, it will allow much more flexibility for automation.
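Something along these lines is what we have in mind (purely hypothetical syntax; a steps switch does not exist today, and the test and step names are only for illustration):

    ArtOfTest.Runner.exe test="GmailVerification.tstest" steps="Navigation,Verification"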
Pending Review
Last Updated: 14 Apr 2016 06:40 by VVP
Hi,
I am looking for an enhanced capability of ArtOfTest.Runner to execute multiple tests via the command line,
e.g. "ArtOfTest.Runner test1 test2 test3"
Sometimes we need to dynamically choose tests and pass them for execution. Although dynamic lists exist, this capability would be a more straightforward approach, and all other test tools offer it.
Pending Review
Last Updated: 13 Apr 2016 05:02 by Martijn
ADMIN
Created by: Cody
Comments: 1
Type: Feature Request
5
Currently the default typing speed for simulated real typing is a 100 ms key-down time and a 10 ms delay between keys. That works out to roughly 9 characters per second (110 ms per character), which is kind of slow for most applications. It would be nice to have one (or more) global settings:

- An option to change the recording default

- A global override to control the typing speed for the entire test (a test-level setting) and/or project (a project-level setting).

This applies to both HTML and XAML based applications.
Pending Review
Last Updated: 07 Apr 2016 14:13 by ADMIN
Current behavior of Test Studio when editing a page property field: 

The element repository is automatically rebuilt after each field change. This works fine with relatively small projects. However, it becomes inconvenient in large projects (over 1,000 tests with 40+ elements addressed), as the refresh might take up to 10 minutes for each field. 

Could this behavior be changed to match the behavior of editing frame properties? 

Currently, editing frame properties allows multiple property fields to be changed, and when finished you manually refresh the element repository for the changes to take effect. 
Pending Review
Last Updated: 04 Apr 2016 06:31 by ADMIN
When merging manually added elements in a WPF test with automatically detected ones, the control type may vary and therefore the merge may not succeed. If the project is saved after each change to the control type and/or element find logic, it works fine. 

An automatic refresh of the Element Repository display when updating the Find Logic and/or Control Type would be helpful, since it often doesn't display the underlying merged element. 
Pending Review
Last Updated: 29 Mar 2016 06:22 by ADMIN
ADMIN
Created by: Nikolay Petrov
Comments: 0
Type: Feature Request
1
Ability to data bind a test to MongoDB
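Today the only option is a coded step that queries MongoDB directly, roughly like the sketch below (assuming the official MongoDB .NET driver; the connection string, database, collection, and field names are placeholders, and Log.WriteLine stands in for the coded step's logging):

    // Coded-step workaround sketch using the MongoDB .NET driver.
    var client = new MongoDB.Driver.MongoClient("mongodb://localhost:27017");
    var rows = client.GetDatabase("testdata")
                     .GetCollection<MongoDB.Bson.BsonDocument>("logins")
                     .Find(new MongoDB.Bson.BsonDocument())
                     .ToList();
    foreach (var row in rows)
    {
        Log.WriteLine("user: " + row["username"]);
    }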
Pending Review
Last Updated: 17 Mar 2016 12:26 by Don
ADMIN
Created by: Boyan Boev
Comments: 1
Type: Feature Request
6
We have 3 test lists which differ only by BaseUrl. When we execute them and open the 'Performance: History view' grid for a specific test, we see a lot of test run results, but it is not clear which run was made in which environment, because there is no such info in the grid.

We would like to see the BaseUrl info in that grid.
Pending Review
Last Updated: 15 Mar 2016 07:11 by ADMIN
ADMIN
Created by: Nikolay Petrov
Comments: 0
Type: Feature Request
0
When selected, a script step does not keep its description visible.
Pending Review
Last Updated: 29 Feb 2016 14:21 by Briar
Please add the browser name and version to the Excel sheet generated by the schedule notification email.
Pending Review
Last Updated: 23 Feb 2016 14:32 by Don
During recording, images are captured of the screens that are active. When these images are viewed after the recording is made, the resolution is pretty low, making it difficult to make out objects on the captured pages.
This is also the case when a failure occurs: the failure image is of pretty low resolution too, making it difficult to compare the original recording image with the failure image.

I would like the tester to be able to choose the resolution captured during recording, and full resolution at the point of failure.