I found myself looking for buttons to collapse all tests and folders (see attached screenshot) under the "project explorer". Definitely just a nice-to-have, but I thought I'd put it out there to see if others have looked for the same options.
I have a suite of tests already written in ReadyAPI/SoapUI. I'd really, really like to be able to migrate to your product once it has scripting. It would also be nice to automate some of that work with an "Import from SoapUI" feature that takes my existing SoapUI project and converts it into a Test Studio project.
When validating JSON files by path, it would be good to be able to check the size of an array, or whether an object is null, based on a JSON path. The first, array size, could be implemented with the .hasSize(int n) matcher from JsonPath. The second, the null check, could be implemented with the doesNotExists() method. For my scenario, I have a request that contains N elements and a response that should contain <= N entries. I would like to check that the response array is not null and that it contains X elements based on the request count.
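The requested checks can be sketched in plain Python against a parsed JSON document. This is only an illustration of the idea, not Test Studio's API: get_path is a hypothetical helper standing in for a full JsonPath engine, and the assertion helpers mirror the proposed hasSize / null checks.

```python
import json

def get_path(doc, path):
    """Resolve a simplified dotted JSON path like "$.a.b".
    Hypothetical helper; a real implementation would use a JsonPath library."""
    node = doc
    for key in path.strip("$.").split("."):
        if node is None:
            return None
        node = node.get(key)
    return node

def assert_array_size(doc, path, expected):
    """Proposed array-size check, analogous to .hasSize(int n)."""
    value = get_path(doc, path)
    assert isinstance(value, list), f"{path} is not an array"
    assert len(value) == expected, f"{path} has {len(value)} items, expected {expected}"

def assert_not_null(doc, path):
    """Proposed null check: fail if the node is null or missing."""
    assert get_path(doc, path) is not None, f"{path} is null or missing"

# The scenario from the request: the response array must not be null
# and must contain as many entries as the request had elements.
request = {"items": [1, 2, 3]}
response = json.loads('{"results": {"entries": [10, 20, 30]}}')
assert_not_null(response, "$.results.entries")
assert_array_size(response, "$.results.entries", len(request["items"]))
```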
I am logging this feature request on behalf of Eugeniy Gorbovoy. Currently the timeout setting of HTTP requests allows only numeric input and defaults to 2000 ms when left empty. The initial plan is to implement a global project-level timeout setting that all HTTP requests inherit (unless locally overridden). (I hope we will deliver it in one of our releases in the near future.) This should handle most cases where users want to manage the timeout from a single place instead of manually increasing it for every new HTTP step they create. Still, the idea of accepting a reference to a variable seems appealing, since some users might prefer to have several "tiers" of timeout rules across the test project and manage them using variables. Any comments and shared use cases are appreciated.
A customer reports an issue playing back tests against a specific WPF app. During the investigation it was identified that multiple recorders are attached to a single popup. Details shared internally!
In a source-controlled (TFS) project, when another user adds a code file to an existing test, getting their change checks out the test file.
I find that I am often adding multiple Test as Steps in a test case. I also have to search within the Add Test as Step window to find the test cases I need. Ctrl+clicking results in unselected items after a search, so I am forced to reopen that window multiple times. It would be nice if there were checkboxes next to all rows in the Add Test as Step window to make multi-selecting easier, or any other means of supporting an explicit multi-select function in that window.
We are using Bitbucket (Git) for our code, and it would be great if Test Studio could connect to branches instead of only master. Being able to connect to branches would allow testers to work in their own branch and not interfere with master. Merging can happen outside of Test Studio, so that master is always kept up to date but never touched per se.
When editing an element live, the element is highlighted in IE but not in Chrome.
As is already possible in Test Studio Web and Desktop, we need to be able to set the "simulate real typing" or "real clicking" property at the step/test level, and possibly at the project level if necessary.
The Scheduling and Storage API documentation could not be accessed. None of the links below could be opened to provide the respective APIs. http://StorageServerAddress:8492/v1 http://StorageServerAddress:8492/v1/documentation http://SchedulingServerAddress:8009 http://SchedulingServerAddress:8009/documentation
Steps to reproduce: Execute the project attached internally.
Expected behavior: The value is added to the third numeric textbox.
Actual behavior: The value is always set to the first numeric textbox.
If the user forgets to configure an SMTP server when setting up the scheduling server, then when scheduling a test list only steps 1 & 2 appear, with no indication of how to enable emailing results. This causes frequent support tickets asking how to email results.
With the trace log ON, the application can find windows and runs OK (except for the memory leak). If you turn the trace log off, it fails after a couple of iterations. (The registry key is HKEY_CURRENT_USER\Software\Wow6432Node\Telerik\Test Studio\TraceLogEnabled.) An application and test code to replicate the problem are shared internally.
Here is my concern: whenever my test lists execute and tests fail, it takes a large amount of time to determine what caused the tests within the test lists to fail. When I see a failure in a test list, it generally falls into one of four categories:
1. Element identification
2. Step execution
3. Validation
4. Other problems
From the current view of a failed test list result, I always use the Result filter to deselect the Passed tests. That gives me a partial view of the amount of work required to resolve the issues. The error that takes the most time to resolve is element identification. It concerns me because the test only ran up to that point, so there is a chance the steps after the failed element identification may fail as well once I repair the failure. Depending on the system under test, a failed element identification may mean the system threw an exception and is actually on an error page.
Since I spend my time digging through failed tests to assess whether the system under test has a bug or whether the automated test itself caused the failure, the reasons for this feedback item are:
1. To solicit other users' feedback and see whether there is commonality in our Test Studio experience despite the differences in our systems under test.
2. To propose a way to prioritize failed tests instead of going through each individual failed test to see what actually failed. In other words: from this screenshot, show me which failed tests require more of my time.
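The prioritization idea above can be sketched as a simple sort over failed results by error category, so that element-identification failures (the most expensive to fix) surface first. This is a hypothetical illustration; the field names and categories are assumptions, not Test Studio's result schema.

```python
# Lower number = investigate first (element identification costs the most time).
FAILURE_PRIORITY = {
    "element identification": 0,
    "step execution": 1,
    "validation": 2,
    "other": 3,
}

def prioritize_failures(results):
    """Keep only failed tests and order them by how costly the
    failure category typically is to resolve (hypothetical sketch)."""
    failed = [r for r in results if r["result"] == "Failed"]
    return sorted(failed, key=lambda r: FAILURE_PRIORITY.get(r["error"], 3))

results = [
    {"name": "Checkout", "result": "Passed", "error": None},
    {"name": "Login", "result": "Failed", "error": "validation"},
    {"name": "Search", "result": "Failed", "error": "element identification"},
]
ordered = prioritize_failures(results)
# "Search" (element identification) is ordered before "Login" (validation),
# and the passed "Checkout" test is filtered out entirely.
```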
Currently the translators don't offer any text verifications when selecting an HtmlTableRow or HtmlTable. Adding TextContent or InnerText to these would be extremely useful; it's a very common verification!
Steps to reproduce:
1. Execute a data-driven test in a test list.
2. Drill down the results to the iteration section.
Actual behavior: There is an empty browser column in the grid.
Expected behavior: This column should not be present in this view.
Steps to reproduce and video of the issue are in the internal description.