Pending Review
Last Updated: 12 Nov 2015 19:31 by ADMIN
Some HTML applications have an overlay div at the top and/or the bottom of the browser window. This causes a problem for elements that have ScrollToVisible enabled: Test Studio unconditionally scrolls the element to the top of the browser window, which can leave the element hidden behind the overlay div.

Currently, handling this scenario is somewhat complex:
1) Add an extra step that scrolls the element above the target element to the top
2) Convert the actual click step to code and comment out the ScrollToVisible line

This is not at all intuitive or obvious. Can't we come up with a better solution for this scenario? See the video attached to the internal feature request for a real customer application that exhibits this problem.
Pending Review
Last Updated: 09 Nov 2015 21:35 by Jeff
Created by: Jeff
Comments: 0
Type: Feature Request
0
It would be nice if we could set up parent test lists that can have child test lists added to them.  I am requesting this because we are working on test cases that switch between applications.  According to Telerik support, we need to set up these test cases like test lists: the steps to execute against each app get their own automated tests, which are then placed in execution order in a test list; test 1 runs against the first app, test 2 against the second app, test 3 against the first app again, and so on.  Since this is the solution currently provided, our test lists are difficult to set up, and we end up with a large number of them.

If we could set up these "test lists" (which are really test cases: individual tests executed in a specific order) as child test lists that can be called by a parent test list, this process would be much easier to manage. For example, a release-based parent test list could include all of these test cases as child test lists.
Pending Review
Last Updated: 09 Nov 2015 09:11 by ADMIN
When running a performance test with iterations in a test list, the performance data is checked and compared for the whole test, which makes it hard to compare the results of a specific step across iterations. An iteration-information column in the test list performance results would be helpful.

Along with this, column filtering so that only a specific step is shown.
Pending Review
Last Updated: 06 Nov 2015 21:06 by Don
When the project is checked out from outside Test Studio, you have to close and reopen the project for the TFS status to update.

Please add that functionality to the Refresh button.
Pending Review
Last Updated: 10 Nov 2015 06:53 by ADMIN
When a TFS-connected project is up to date, editing a test list shows the project as checked out.
Pending Review
Last Updated: 17 Mar 2016 12:26 by Don
ADMIN
Created by: Boyan Boev
Comments: 1
Type: Feature Request
6
We have three test lists that differ only by BaseUrl. When we execute them and open the 'Performance: History view' grid for a specific test, we see many test run results, but it is not clear which run was made against which environment, because there is no such information in the grid.

We would like to see the BaseUrl information in that grid.
Pending Review
Last Updated: 06 Nov 2015 06:40 by Dhruv
There should be a way to set the "RunsAgainstVersion" property for an entire test or project, just as we can for a specific step.
Pending Review
Last Updated: 05 Nov 2015 11:13 by ADMIN
Currently we are able to filter the data only by a specific range of rows. The grid filters are not applied during execution, and there is no other way to filter the data except by row range.

We want to be able to filter the data by specific content (e.g. text content).
Pending Review
Last Updated: 11 Dec 2018 08:59 by Neo
ADMIN
Created by: Andy Wieland
Comments: 4
Type: Feature Request
11
Can you please provide the ability to data-drive mobile tests, similar to the way it works for Test Studio web and desktop tests?
Pending Review
Last Updated: 03 Nov 2015 03:49 by ADMIN
ADMIN
Created by: Andy Wieland
Comments: 2
Type: Feature Request
5
Can you please provide an option for image verification for mobile testing, similar to the one in web and desktop testing?
Thank you!
Pending Review
Last Updated: 23 Oct 2015 01:24 by Ryan
ADMIN
Created by: Andy Wieland
Comments: 1
Type: Feature Request
3
This feature would be very useful to determine which data set and binding properties to use for tests at the list level.  
Pending Review
Last Updated: 20 Oct 2015 13:51 by ADMIN
ADMIN
Created by: Ivaylo
Comments: 0
Type: Feature Request
2
Currently the Testing Framework doesn't have any support for the Kendo UI Slider element; please add this functionality.

Thanks.
Pending Review
Last Updated: 16 Oct 2015 14:57 by Linford
Created by: Linford
Comments: 0
Type: Feature Request
1
The reports provide extensive information concerning failed test steps. I would like to request a more elaborate presentation of the reports (e.g. the option of including screen captures of all execution steps), as well as the option to adjust the layout of the report instead of just showing the percentage of tests that passed during a scheduled run.
Pending Review
Last Updated: 16 Oct 2015 14:46 by Linford
Created by: Linford
Comments: 0
Type: Feature Request
2
Once a test step fails during execution, several options are presented in order to give a clear understanding of why the step failed and the circumstances in which this failure occurred. To this end, two images are included; however, these images are often too small and unclear to identify subtle differences.

I would like to request some kind of zoom-in, enlarge, or export function for these images in order to clearly determine what went wrong during execution.
Pending Review
Last Updated: 08 Oct 2015 02:36 by VVP
ADMIN
Created by: Cody
Comments: 1
Type: Feature Request
4
After running a test, include the following in the summary report:

# of passed steps
# of failed steps
# of steps not executed
Pending Review
Last Updated: 02 Oct 2015 15:21 by Don
Created by: Don
Comments: 0
Type: Feature Request
0
There needs to be an "In Your Face" dialog that says you are about to check out the Project file (AIIS) and, as such, may be affecting other users and their work.  I often notice that I have this file checked out but don't know what caused it or how long I have had it checked out.  A modal dialog box presented when it happens would help me understand which action of mine caused the file to be checked out.  Furthermore, it would let me know to make such changes quickly so that the file can be checked back in as soon as possible.
Pending Review
Last Updated: 18 Sep 2015 01:49 by ADMIN
Created by: Ewin
Comments: 0
Type: Feature Request
0
I have used coded steps in a large number of tests.  

But the common theme that I noticed is that the coded step is using a dynamic object like DateTime.Today.ToShortDateString().  

When I initially record a step, the date is captured as a string, and the step plays back that same string.  Dynamic objects like DateTime.Today give me the ability to test the system further.

I could think of one option that relies on SetExtractedVariable(), but that still requires me to write some code. 

Useful options would be: today, days from today, months from today, and years from today.
Other users would probably like to see similar time options as well.

While it is easy enough to write a coded step, it would be extremely helpful to have a quick setup for using dates instead of converting a step to a coded one.

Maybe something that would be similar to how test steps have the ability to be bound against a specific data source.  
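To illustrate the kind of built-in option being requested, here is a minimal sketch in Python (Test Studio coded steps are actually C# or VB.NET, and the helper name and parameters here are hypothetical) of how such relative-date values could be computed at run time instead of being hard-coded strings:

```python
import calendar
from datetime import date, timedelta

def relative_date(days=0, months=0, years=0, fmt="%Y-%m-%d"):
    """Return today's date shifted by the given offsets, as a string.

    Hypothetical helper mirroring the requested options: today,
    days/months/years from today.
    """
    base = date.today()
    # Shift by whole months/years first, clamping the day of month so that
    # e.g. Jan 31 + 1 month yields the last day of February rather than
    # raising an error.
    total_months = base.year * 12 + (base.month - 1) + months + years * 12
    year, month0 = divmod(total_months, 12)
    day = min(base.day, calendar.monthrange(year, month0 + 1)[1])
    # Apply the day offset last.
    return (date(year, month0 + 1, day) + timedelta(days=days)).strftime(fmt)
```

A quick-setup dialog on a test step could then expose just the three offsets and a format string, much like the existing data-binding UI, with no coded step required.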

Pending Review
Last Updated: 25 Sep 2015 21:41 by ADMIN
Pending Review
Last Updated: 23 Sep 2015 11:56 by Don
Please add the ability to use an extracted variable in find logic without having a data source attached. Currently, an extracted variable in the find logic does not work until you attach a data source to the test.