I have a suite of tests already written in Ready API/Soap UI. I'd really, really like to be able to migrate to your product once it has scripting. However, it'd be nice to be able to automate some of that work with an "Import from Soap UI" feature that would take my existing Soap UI project and convert it into a Test Studio project.
When validating JSON files by path, it would be good to be able to check the size of an array, or whether an object is null, based on a JSON path. The first one, the size of an array, could be implemented with the .hasSize(int n) matcher from JsonPath; the second one, checking whether an object is null, could be implemented with the doesNotExists() method. For my scenario, I have a request that contains N elements and a response that should contain <= N responses. I would like to check that the response array is not null and that it contains X elements based on the request count.
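The requested checks can be sketched as follows. This is a minimal Python illustration of the validation logic only, with hypothetical helper names; it is not the JsonPath library's actual API.

```python
import json

def get_path(doc, path):
    """Resolve a dotted path like 'data.responses' against a parsed JSON
    document; return None when any segment is missing or null
    (hypothetical helper, not the JsonPath API)."""
    node = doc
    for part in path.split("."):
        if not isinstance(node, dict) or part not in node:
            return None
        node = node[part]
    return node

def assert_array_size_at_most(doc, path, n):
    """Mimic a hasSize-style check: the array at `path` must exist,
    be non-null, and contain at most n elements."""
    arr = get_path(doc, path)
    assert arr is not None, f"{path} is missing or null"
    assert isinstance(arr, list), f"{path} is not an array"
    assert len(arr) <= n, f"{path} has {len(arr)} elements, expected <= {n}"
    return len(arr)

# Request with N = 3 elements; response with 2 <= N responses.
request = json.loads('{"items": [{"id": 1}, {"id": 2}, {"id": 3}]}')
response = json.loads('{"data": {"responses": [{"id": 1}, {"id": 2}]}}')

n = len(request["items"])
count = assert_array_size_at_most(response, "data.responses", n)
print(count)  # 2
```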
I am logging this feature request on behalf of Eugeniy Gorbovoy. Currently the timeout setting of HTTP requests allows only numeric input and defaults to 2000 ms when left empty. The initial plan is to implement a global project-level timeout setting that all HTTP requests would inherit (unless locally overridden). (I hope we will have it delivered in one of our releases in the near future.) This should handle most cases where users want to manage the timeout from a single place instead of manually increasing it for every new HTTP step they create. Still, the idea of accepting a reference to a variable seems appealing, since some users might prefer to have several "tiers" of timeout rules across the test project and manage them using variables. Any comments and shared use cases are appreciated.
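The proposed resolution order could be sketched as below. This is only an illustration of the idea, with assumed names (the `${...}` variable syntax, the 10 000 ms project default, and the tier variable names are all hypothetical), not a description of Test Studio's implementation.

```python
# Assumed precedence: explicit step value > variable reference > project
# default > the current hard-coded 2000 ms fallback.
PROJECT_DEFAULT_TIMEOUT_MS = 10_000  # hypothetical project-level setting
FALLBACK_TIMEOUT_MS = 2_000          # current default when left empty

def resolve_timeout(step_value, variables,
                    project_default=PROJECT_DEFAULT_TIMEOUT_MS):
    """Return the timeout in ms for an HTTP step."""
    if step_value in (None, ""):
        # An empty step setting inherits the project-level default.
        if project_default is not None:
            return project_default
        return FALLBACK_TIMEOUT_MS
    if (isinstance(step_value, str)
            and step_value.startswith("${") and step_value.endswith("}")):
        # A variable reference lets users manage "tiers" of timeouts.
        name = step_value[2:-1]
        return int(variables[name])
    return int(step_value)

tiers = {"timeout.fast": 1000, "timeout.slow": 30000}
print(resolve_timeout(None, tiers))               # 10000 (inherited)
print(resolve_timeout("${timeout.slow}", tiers))  # 30000 (variable tier)
print(resolve_timeout(5000, tiers))               # 5000 (local override)
```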
Scenario: Use a simple Login-Logout test that is separated into two different tests, executed as steps (Login test & Logout test = login-logout test), for a performance run.
Issue: The current overview functionality will show the execution of the 'login' test step and all the individual steps of the nested test, then move on to the 'logout' test and the steps of the second nested test. The time displayed for the 'Test as Step' steps is the client-side time required for the initialization of the nested test.
There are two options for more understandable results: the 'Test as Step' step should either not report any time, or display the summed-up time of the nested steps' execution.
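The second option above can be sketched as a simple recursive sum. The data shape here (a dict with a `nested_steps` list) is a hypothetical stand-in, not Test Studio's actual result model.

```python
def step_time(step):
    """Report a 'Test as Step' entry as the sum of its nested steps'
    times, instead of the client-side initialization time."""
    if "nested_steps" in step:  # a 'Test as Step' node
        return sum(step_time(s) for s in step["nested_steps"])
    return step["time_ms"]     # a plain step

login_as_step = {
    "name": "Login test",
    "nested_steps": [{"time_ms": 120}, {"time_ms": 80}],
}
print(step_time(login_as_step))  # 200
```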
Recording using existing Chrome session is not available
The customer reports an issue with playing back tests against a specific WPF app. During the investigation it was identified that multiple recorders are attached to a single popup. Details shared internally!
In a source-controlled (TFS) project, when another user adds a code file to an existing test, getting the change checks out the test file.
I find that I am often adding multiple Test as Steps in a test case. I also have to perform searches within the Add Test as Step window to find the test cases I need. CTRL+clicking results in unselected items after a search, so I am forced to reopen that window multiple times. It would be nice if there were checkboxes next to all rows in the Add Test as Step window to make multi-selecting easier, or any other means of supporting an explicit multi-select function in that window.
We are using Bitbucket (Git) for our code, and it would be great if we could have Test Studio connect to branches instead of only master. Being able to connect to branches would allow testers to work in their own branch and not mess with master. Merging can happen outside of Test Studio, so that master is always kept up to date but never touched directly.
When editing an element live, the element is highlighted in IE but not in Chrome.
As we already have in Test Studio Web and Desktop, we need to be able to set the simulate-real-typing or real-clicking property at the step/test level, and possibly at the project level if necessary.
The Scheduling and Storage API documentation could not be accessed. None of the links below could be opened to display the respective APIs: http://StorageServerAddress:8492/v1, http://StorageServerAddress:8492/v1/documentation, http://SchedulingServerAddress:8009, http://SchedulingServerAddress:8009/documentation
Steps to reproduce: Execute the project attached internally. Expected behavior: The value is added to the third numeric textbox. Actual behavior: The value is always set to the first numeric textbox.
If the user forgets to configure an SMTP server when setting up the scheduling server, then when scheduling a test list only steps 1 & 2 are shown, with no indication of how to enable emailing results. This leads to frequent support tickets asking how to email results.
With the trace log ON, the application can find windows and will run OK (except for the memory leak). If you turn the trace log off, it will fail after a couple of iterations. (The relevant registry key is HKEY_CURRENT_USER\Software\Wow6432Node\Telerik\Test Studio\TraceLogEnabled.) The application and test code to replicate the problem are shared internally.
The "StopTestListOnFailure" property of a test is no longer stopping the rest of a test list from executing upon the failure of the test. Steps to reproduce: 1. Create a Project 2. Add two tests to the project (test A and B) 3. Create a step that will cause the first test to fail (Test A) 4. On the Project tab right click on both tests and check the property "StopTestListOnFailure" (set it to true) 5. Click the Test Lists Tab 6. Add the two tests to the test lists with the first one as the first test in the list (Test A) 7. Execute the test list Expected: After the first test is run and fails the second test will not run due to that property being set to true. Actual: Both tests are run
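The expected semantics described in the steps above can be sketched as a tiny runner. This is a hypothetical illustration of the intended behavior, not Test Studio's internal execution engine.

```python
def run_test_list(tests):
    """Run tests in order; once a test whose StopTestListOnFailure
    property is True fails, skip the remaining tests (the expected
    behavior from the report above)."""
    results = []
    for test in tests:
        passed = test["run"]()
        results.append((test["name"], passed))
        if not passed and test.get("StopTestListOnFailure"):
            break  # expected: do not run the rest of the list
    return results

test_list = [
    {"name": "Test A", "StopTestListOnFailure": True, "run": lambda: False},
    {"name": "Test B", "StopTestListOnFailure": True, "run": lambda: True},
]
print(run_test_list(test_list))  # [('Test A', False)] -- Test B is skipped
```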
Steps to reproduce: 1. Execute a performance test. 2. Click History tab. 3. Add a result description. 4. Open some other test. 5. Reopen the initial test. Expected: The description is saved. Actual: No description is displayed.