This is a limitation/bug in Test Studio. Here is where the customer creates the variable: http://10.225.68.127:8080/prweb/PRServlet/nF-qFK4ha1B9XolnJtsJWpQlh9AKUteKa55vdjEHm_Y%5B*/!STANDARD?pzPostData=1316036933. The page renders the element as: <input type='hidden' id='pzHarnessID' value='HID1CE396E7F15CF6B5FC350407D4BF1FDD'>. Test Studio looks for values that are wrapped in double quotes, but this server wraps its values in single quotes, which leads us to fail to identify this as a source. More information, along with the test, is provided in the internal description.
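The fix presumably amounts to accepting either quoting style when scanning the markup. A minimal sketch of a quote-agnostic attribute match follows; the regex and surrounding code are illustrative assumptions, not Test Studio's actual scanner:

    using System;
    using System.Text.RegularExpressions;

    class QuoteAgnosticScan
    {
        static void Main()
        {
            // The server emits single-quoted attributes, e.g.:
            string markup = "<input type='hidden' id='pzHarnessID' value='HID1CE396E7F15CF6B5FC350407D4BF1FDD'>";

            // Matching only value="..." misses this element. Accept either
            // quote character and require the closing quote to match the
            // opening one via the \1 backreference.
            var valueAttr = new Regex("value\\s*=\\s*([\"'])(.*?)\\1");

            Match m = valueAttr.Match(markup);
            Console.WriteLine(m.Success ? m.Groups[2].Value : "no match");
        }
    }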
Steps to reproduce:
1. Create a test and start recording against: http://spread.grapecity.com/Demos/JS/ViewsDemo/#/demos/EditorMode
2. Highlight some cell in the grid.
3. Click on Locate in DOM.
Expected: The element should be located in the DOM.
Actual: No element is found. The element is inside an iframe, but Test Studio is not able to see any element in that frame.
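For reference, the element is only reachable after the automation layer explicitly enters the iframe. A minimal sketch of that pattern using Selenium WebDriver in C# (not Test Studio's own API; the frame index and the cell selector are assumptions for illustration):

    using OpenQA.Selenium;
    using OpenQA.Selenium.Chrome;

    class FrameLookupSketch
    {
        static void Main()
        {
            IWebDriver driver = new ChromeDriver();
            driver.Navigate().GoToUrl("http://spread.grapecity.com/Demos/JS/ViewsDemo/#/demos/EditorMode");

            // Searching the top-level document fails: the grid lives inside
            // an iframe, so a FindElement here would throw NoSuchElementException.

            // Switch into the (assumed first) iframe before locating grid cells.
            driver.SwitchTo().Frame(0);
            IWebElement cell = driver.FindElement(By.CssSelector(".gc-cell")); // hypothetical selector
            System.Console.WriteLine(cell.Text);

            // Return to the top-level document when done.
            driver.SwitchTo().DefaultContent();
            driver.Quit();
        }
    }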
Steps to reproduce:
1. Close & re-launch Test Studio.
2. Make sure there are no pending changes for the project.
3. Go to the Test Lists tab & schedule a test list.
4. Choose the option to receive test results by e-mail, enter an e-mail address & click Done.
5. Click the Tests tab & note that the project shows a pending save and a pending change.
6. Right-click on the project & select Check In to TFS from the context menu. Click Save.
7. The Check In to TFS screen is displayed with no pending changes. Visual Studio (TFS) does not show a pending change.
We are trying to use the performance profiling feature and are hitting an issue when opening the results in Test Studio. From the trace file we can see it is an out-of-memory error while parsing the JSON. The file is large, but not that large (150 MB). Here is the exception:

[10/16 12:00:49,Telerik.TestStudio.exe(8208:11),Error] PerformanceMainViewModel.LoadLocalPerformanceResults() : EXCEPTION! (see below)
Situation: Failed to extract metadata from "C:\Users\manor\Downloads\perf results\130263250855318769 CM7CreateAndLoop.tsperf".
Outer Exception Type: System.OutOfMemoryException
Message: Exception of type 'System.OutOfMemoryException' was thrown.
HRESULT: 0x8007000E (Official ID (if app.) = E_OUTOFMEMORY, Error Bit = FAILED, Facility = FACILITY_WIN32, Code = DNS_ERROR_NO_MEMORY)
Call Stack:
at System.Text.StringBuilder.ToString()
at System.IO.StreamReader.ReadToEnd()
at System.IO.File.InternalReadAllText(String path, Encoding encoding, Boolean checkHost)
at System.IO.File.ReadAllText(String path)
at ArtOfTest.WebAii.Design.Execution.Profiler.Storage.ProfilerResultsFile.Load(Guid testGuid, String strFilePath, ResultsFileXmlRoot loadedResultObj)
at ArtOfTest.WebAii.Design.UI.PerformanceMainViewModel.LoadLocalPerformanceResults()
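The call stack points at a plausible root cause: File.ReadAllText materializes the whole file as one contiguous UTF-16 string (roughly double the 150 MB on disk, plus StringBuilder's intermediate buffers), which can exhaust memory in a 32-bit process. A minimal sketch of a streaming alternative, assuming Json.NET and a hypothetical PerfResults model, since the real ProfilerResultsFile internals aren't visible here:

    using System.IO;
    using Newtonsoft.Json;

    // Hypothetical stand-in for the real results model.
    class PerfResults { /* ... */ }

    static class ProfilerResultsLoader
    {
        // Streams the file through the parser instead of loading it into one
        // giant string, so peak memory is proportional to the deserialized
        // object model rather than to the raw file size.
        public static PerfResults Load(string path)
        {
            using (var stream = File.OpenRead(path))
            using (var text = new StreamReader(stream))
            using (var json = new JsonTextReader(text))
            {
                var serializer = new JsonSerializer();
                return serializer.Deserialize<PerfResults>(json);
            }
        }
    }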
While connecting to Jira Cloud (https://name.atlassian.net) with a valid user ID and password, it gives the error message "The remote server returned an error: (401) Unauthorized".
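For context, a 401 at this stage means the credentials were rejected before any Jira permissions were checked. A minimal sketch of the request the connector presumably sends, using HttpClient; the header construction is standard HTTP basic auth, and whether Test Studio builds the request this way is an assumption. Note that Jira Cloud expects an API token rather than the account password for basic auth, which is worth ruling out:

    using System;
    using System.Net.Http;
    using System.Net.Http.Headers;
    using System.Text;

    class JiraAuthSketch
    {
        static void Main()
        {
            var client = new HttpClient();
            // Basic auth: base64("user:secret"). For Jira Cloud the secret
            // should be an API token, not the account password.
            var raw = Encoding.ASCII.GetBytes("user@example.com:API_TOKEN");
            client.DefaultRequestHeaders.Authorization =
                new AuthenticationHeaderValue("Basic", Convert.ToBase64String(raw));

            // /rest/api/2/myself is a cheap endpoint to verify credentials.
            var response = client.GetAsync("https://name.atlassian.net/rest/api/2/myself").Result;
            Console.WriteLine(response.StatusCode); // 401 here means the credentials themselves were rejected
        }
    }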
Steps to reproduce:
1. Open a project in TFS with no pending changes.
2. Schedule a test list.
3. Change this test list and save the changes. The project looks dirty.
4. Press the Check-in button.
5. Press Cancel.
6. The project looks checked in, but the changes can still be checked in, and in VS it is dirty as well.
With the trace log ON, the application can find windows and will run OK (except for the memory leak). If you turn the trace log off, it fails after a couple of iterations. (The relevant registry key is HKEY_CURRENT_USER\Software\Wow6432Node\Telerik\Test Studio\TraceLogEnabled.) Application and test code to replicate the problem are shared internally.
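For convenience when reproducing, the flag can be toggled programmatically. A minimal sketch using the Win32 registry API; the key path and value name are taken from the report, while the 1/0 DWORD semantics are an assumption:

    using Microsoft.Win32;

    class TraceLogToggle
    {
        static void Main(string[] args)
        {
            bool enable = args.Length > 0 && args[0] == "on";
            using (var key = Registry.CurrentUser.CreateSubKey(
                @"Software\Wow6432Node\Telerik\Test Studio"))
            {
                // Assuming 1 = trace log on, 0 = off.
                key.SetValue("TraceLogEnabled", enable ? 1 : 0, RegistryValueKind.DWord);
            }
        }
    }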
Steps to reproduce: Execute the project attached internally.
Expected behavior: The value is added to the third numeric textbox.
Actual behavior: The value is always set to the first numeric textbox.
The "StopTestListOnFailure" property of a test no longer stops the rest of a test list from executing when that test fails. Steps to reproduce:
1. Create a project.
2. Add two tests to the project (Test A and Test B).
3. Create a step that will cause the first test (Test A) to fail.
4. On the Project tab, right-click both tests and set the "StopTestListOnFailure" property to true.
5. Click the Test Lists tab.
6. Add the two tests to a test list with Test A first.
7. Execute the test list.
Expected: After Test A runs and fails, Test B does not run, because the property is set to true.
Actual: Both tests run.
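The expected semantics are easy to state in code. A minimal sketch of the run loop we assume the property is meant to drive; every name here except StopTestListOnFailure is hypothetical:

    using System.Collections.Generic;

    class TestEntry
    {
        public string Name;
        public bool StopTestListOnFailure;
        public bool Run() { return false; } // stub: pretend the test failed, like Test A
    }

    static class TestListRunner
    {
        public static void Execute(IEnumerable<TestEntry> tests)
        {
            foreach (var test in tests)
            {
                bool passed = test.Run();
                // A failing test with the flag set should stop the whole list,
                // so Test B never runs after Test A fails.
                if (!passed && test.StopTestListOnFailure)
                    break;
            }
        }
    }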
In a source-controlled (TFS) project, when another user adds a code file to an existing test, getting that change checks out the test file.
When editing an element live, the element is highlighted in IE but not in Chrome.
Scenario: Use a simple login-logout test that is separated into two different tests, executed as steps (Login test & Logout test = login-logout test), for a performance run.
Issue: The current overview shows the execution of the 'login' step and all the individual steps of its nested test, then moves on to the 'logout' step and the steps of the second nested test. The time displayed for each 'Test as Step' step is the client-side time required to initialize the nested test.
There are two options for more understandable results: the 'Test as Step' step should either report no time of its own, or display the summed-up execution time of the nested test's steps; a sketch of the latter follows.
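The second option is straightforward to express. A minimal sketch of the aggregation, with a hypothetical step-result type since the real reporting model isn't shown here:

    using System;
    using System.Collections.Generic;
    using System.Linq;

    // Hypothetical reporting model for illustration.
    class StepResult
    {
        public string Name;
        public TimeSpan Duration;
        public List<StepResult> NestedSteps = new List<StepResult>();

        // Option 2: a 'Test as Step' entry reports the summed time of the
        // steps it executed, not its own client-side initialization time.
        public TimeSpan ReportedDuration =>
            NestedSteps.Count == 0
                ? Duration
                : new TimeSpan(NestedSteps.Sum(s => s.ReportedDuration.Ticks));
    }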
It is very difficult to explain (and understand) when the global wait for elements applies and when it doesn't. There seem to be lots of questions on the forums and in the suggestions around this, for example this post: http://feedback.telerik.com/Project/117/Feedback/Details/44270-when-waiting-for-element-to-exist-in-code-test-studio-doesnt-respect-the-timeou I can't find anything in the online documentation that talks about it. We have opened support cases, asked questions on the forums, and new users here get very confused. It is pretty fundamental to "getting" the product, and it would be great to get some documentation on it. I'd say it is important enough to document and call out in a blog. Thanks!
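To illustrate the confusion, here is a minimal sketch (plain C#, with a hypothetical FindElement helper) of why an explicit wait written in test code runs on its own timeout and never consults the project-level wait-for-elements setting, which is the behavior the linked post describes:

    using System;
    using System.Threading;

    static class ExplicitWaitSketch
    {
        // Hypothetical lookup that returns null when the element is absent.
        static object FindElement(string id) { return null; }

        // A coded wait like this polls with its OWN timeout. The global
        // "wait for elements" setting only governs built-in element
        // resolution, so it never applies inside this loop.
        static object WaitForElement(string id, TimeSpan timeout)
        {
            var deadline = DateTime.UtcNow + timeout;
            while (DateTime.UtcNow < deadline)
            {
                var element = FindElement(id);
                if (element != null) return element;
                Thread.Sleep(200); // poll interval
            }
            throw new TimeoutException("Element '" + id + "' not found within " + timeout);
        }
    }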
1) Double-click a .aiiresult file of a test list containing a failed test.
2) Drill down into Step Failure Details.
3) Export the details to a .zip file.
Expected: The file includes the image on failure.
Actual: The image on failure is missing.
Steps to reproduce:
1. Execute a performance test.
2. Click the History tab.
3. Add a result description.
4. Open some other test.
5. Reopen the initial test.
Expected: The description is saved.
Actual: No description is displayed.
When you record in IE 10, the application becomes partially unresponsive and then acts very slowly. Access to the application and steps to reproduce are in the external description.
Steps to reproduce and video of the issue are in the internal description.
Steps to reproduce and access to the application are in the internal description.