ISSUE: Editing an existing scheduled event does not retain any of the properties set besides the email address. Agents must be reselected and alert preferences must be set again (send email, attach Excel, etc.). REQUEST: Test Studio should retain these settings, including the execution servers.
Please add an easy way to change the scheduling database through the UI. Currently, in order to change the database you need to either reinstall Test Studio or edit the config files (not recommended).
We have an HTTP REST API that uses JSON/XML HTTP requests, which are easily created and sent through Fiddler or REST Console (a Chrome plugin). We then verify the response and use it to form our next request. This process is simple and effective in our manual testing. For automated testing, though, we need a tool that can make those requests and also handle response verification and variable usage for those HTTP requests (POST, GET, DELETE, etc.). Of course, that needs to come with a lot more functionality (data-driven testing, random generation of variables, loops, etc.). It would be great if this feature could be used side by side with the Web UI testing capabilities of Test Studio.
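As an illustration of the requested workflow, here is a minimal sketch of the request/verify/chain pattern in Python using only the standard library. The stub endpoint and field names are hypothetical placeholders, not part of any real API or of Test Studio:

```python
# Sketch: POST a JSON payload, verify the response, then use a value
# from it to form the next request. A local stub server stands in for
# the real REST API so the example is self-contained.
import json
import threading
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer

class StubApi(BaseHTTPRequestHandler):
    def do_POST(self):
        length = int(self.headers["Content-Length"])
        body = json.loads(self.rfile.read(length))
        reply = json.dumps({"id": 42, "echo": body}).encode()
        self._send(reply)

    def do_GET(self):
        self._send(json.dumps({"status": "ok"}).encode())

    def _send(self, reply):
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.send_header("Content-Length", str(len(reply)))
        self.end_headers()
        self.wfile.write(reply)

    def log_message(self, *args):  # keep output quiet
        pass

server = HTTPServer(("127.0.0.1", 0), StubApi)
threading.Thread(target=server.serve_forever, daemon=True).start()
base = f"http://127.0.0.1:{server.server_port}"

# Step 1: POST a JSON payload and verify the response.
req = urllib.request.Request(
    base + "/items",
    data=json.dumps({"name": "widget"}).encode(),
    headers={"Content-Type": "application/json"},
)
created = json.loads(urllib.request.urlopen(req).read())
assert created["echo"]["name"] == "widget"   # response verification

# Step 2: use a value from the response to form the next request.
status = json.loads(urllib.request.urlopen(f"{base}/items/{created['id']}").read())
assert status["status"] == "ok"
print("chained OK, id =", created["id"])

server.shutdown()
```

A testing tool built around this pattern would add data-driven inputs, loops, and random variables on top of the same request/verify/chain core.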
Currently, after a user logs off of an execution server, its status changes to 'dead'. We request a feature that would force our executable to keep running even when the user logs off.
Customer has a problem with an element in his test: when he customises the XPath it continues to work, but when he also adds a data-driven identifier, the XPath locator becomes corrupted. This has two major impacts:

1. The test run fails, because an Invalid expression exception is thrown when the test tries to find the element.
2. The Edit Element window fails to open, because an exception is thrown in BuildFindExpression.

Steps to reproduce:

1. The overall plan is to define a data-driven element that can match any of the links in a table column (see attached screenshot 1).
2. First, use the element recorder to record a Verify Exists step with the default locator (screenshot 2).
3. Edit the element, adding an XPath locator (screenshot 3); note that this XPath does not identify a single link, but all the links in the table.
4. With the edited element, run the test again; it passes. This confirms that, at this stage, the XPath is not causing any problems.
5. Edit the element a second time, this time replacing the hard-coded TextContent with a data-driven field (screenshot 4). The XPath is unchanged.
6. Run the test; this time it fails. It does not fail to find the element in the normal way; instead it throws an exception when it tries to apply the find logic (attachment 5 has the full results log with the exception call stack), based on only part of the XPath:

Exception thrown while finding elements for the following descriptor 'Verify Exists 'UnassignedTable_SubmissionIDLink_datadriven''.
Exception 'System.ArgumentException: Invalid expression ' 'rowUnAlloc')]/td/a'

7. Try to edit the element again; the edit window does not open. Instead, Test Studio shows a dialog with another exception (screenshot 6):

at ArtOfTest.Common.Design.ProjectModel.Elements.FindExpressionElement.BuildFindExpression(String fes)
at ArtOfTest.Common.Design.ProjectModel.Elements.FindExpressionElement.get_DataBoundFindExpressions()
at ArtOfTest.WebAii.Design.UI.FindElementModel.BuildSentencesFromCurrentExpression()
at ArtOfTest.WebAii.Design.UI.FindElementModel.set_IsDataDriven(Boolean value)

In this state the test step and the element are unusable; the only way to proceed is to delete the step, delete the element, and start again from scratch.

Variations: this fault does not always manifest in the same way. Sometimes the term 'xpath' goes missing from the identifier, also resulting in a find failure at runtime, but without preventing the element editor window from opening (screenshot 7). Sometimes the fault emerges after the first time I edit the element; sometimes only after subsequent edits, as described above. Sometimes the introduction of the XPath identifier causes runtime identification failures even though the overall identifier does not seem to be corrupted and the XPath should succeed. The screenshots and the sample test are attached in the internal description.
When you click "Reload from server" in the Results tab, results from a remotely run performance test list are not automatically copied to the local machine. You have to copy them manually or configure the performance test to use a UNC path.
It gets annoying to re-enter the same email addresses every time you schedule a test list with email results. It would be useful to have a default/global setting where a default list of email addresses could be stored. Then, when you schedule a test list, the list of addresses would be filled in automatically; you could still edit it if needed, but you would not have to re-enter it every single time. From support ticket 837356.
Currently, running a test list compiles the entire project if a test within that list contains a coded step. If another test in the project is in development and contains code that does not compile, the test list will not execute, even if the test in development is not included in the test list. This idea would allow a flag marking a test as 'In Development', thus excluding it from the compilation of a test list. Tests flagged as such would not be available for inclusion in a test list.
As a user running TeamPulse bug tracking from Test Studio, I submit bugs for test failures. I need the links to those bugs persisted with the tests and exposed in the Test Studio UI, the same way I get Acceptance Criteria links.
Test Studio removes elevated trust privileges from an OOB (out-of-browser) application when it connects to the application for recording. Please refer to the attached screenshot from the local repro. The sample project is attached; the ArtOfTest.SLExtension.dll is included in the SL application.
If the WPF application crashes during the test, .Window.Close() will not be executed (which is expected behavior); however, the call will neither time out nor throw an exception.
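A minimal sketch of the missing behaviour, in Python rather than .NET: wrap the blocking call in a watchdog so that it raises a timeout error instead of hanging forever when the application has crashed. The function names here are illustrative, not part of the Test Studio API.

```python
# Watchdog pattern: run a blocking call on a worker thread and raise
# TimeoutError if it has not returned within the allotted time.
import threading

def call_with_timeout(func, timeout_seconds):
    result, error = [], []

    def runner():
        try:
            result.append(func())
        except Exception as exc:
            error.append(exc)

    worker = threading.Thread(target=runner, daemon=True)
    worker.start()
    worker.join(timeout_seconds)
    if worker.is_alive():
        raise TimeoutError(f"call did not return within {timeout_seconds}s")
    if error:
        raise error[0]
    return result[0]

# A crashed app never acknowledges the close request:
def close_that_hangs():
    threading.Event().wait()   # blocks forever, like the crashed window

try:
    call_with_timeout(close_that_hangs, timeout_seconds=0.2)
except TimeoutError as exc:
    print("caught:", exc)
```

Applied to the scenario above, the equivalent guard around the close call would let the test log a timeout failure and continue instead of hanging.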
Hi guys, the Telerik sample app contains a Kendo Editor control. TS is able to successfully record typing against the Kendo Editor on our demo page: http://demos.kendoui.com/web/editor/index.html, but it does not successfully record it for the same control in the sample app. Steps to reproduce: 1) Start the TS recorder. 2) Navigate to http://stoichev:8080/inbox.html (only accessible internally). 3) Click the "Compose" button. 4) In the Compose view, start typing into the Kendo Editor. Expected: the typing action is recorded. Actual: it only records a click against an HTML object with the tag name Iframe (which is not actually an HTML frame!?).
It would be a nice addition if Telerik Test Studio (plus the scheduler service) were able to record a fullscreen video (SWF/MP4, etc.) of any failed tests. I have currently implemented a DIY IExecutionExtension plugin that takes care of this, and it works quite nicely. How it works: upon test start it initiates a fullscreen recording using the SDK from TechSmith (Jing SWF video recorder.dll). It then records video until the test finishes. If the test fails it keeps the video; if the test succeeded the video is discarded. Failed tests are saved with this file naming pattern: [user-definable path]+[testname]+date.swf, for example: C:\temp\DMS - 2013-06-04.09.19.swf. Failed tests within test lists are saved with this pattern: [user-definable path]+[testlist name]+date \ [testname]+date.swf, for example: c:\temp\IsAlive test - critical sites - 2013-06-06.09.27\Log ind på Agromarkets.dk - 2013-06-06.09.29.swf. I have shared the c:\temp folder and modified the scheduler's email layout to include this path, so any tester can easily watch those videos and see what happens just prior to the failure. Any chance of getting a built-in feature like this in Telerik? That would be awesome, and an alternative to the normal screen capture features we already have. My custom plugin works nicely, and it should help our team identify more quickly why a given test fails. Sometimes those stack traces are a bit voodoo for non-coders :) Regards and thanks for listening :) Elo
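The keep-on-fail / discard-on-pass logic described above can be sketched as follows. This is Python with a plain file standing in for the screen recording; the class name and paths are illustrative, not the actual plugin (which uses the TechSmith/Jing SDK):

```python
# Sketch: start "recording" when a test starts; on test end, keep the
# file (renamed with test name and timestamp) only if the test failed.
import os
import tempfile
from datetime import datetime

class FailureVideoRecorder:
    def __init__(self, output_dir):
        self.output_dir = output_dir

    def on_test_start(self, test_name):
        # A real plugin would start the screen-capture SDK here.
        self.test_name = test_name
        self.temp_path = os.path.join(self.output_dir, "recording.tmp")
        with open(self.temp_path, "wb") as f:
            f.write(b"...video bytes...")

    def on_test_end(self, passed):
        if passed:
            os.remove(self.temp_path)          # discard on success
            return None
        stamp = datetime.now().strftime("%Y-%m-%d.%H.%M")
        final = os.path.join(self.output_dir, f"{self.test_name} - {stamp}.swf")
        os.replace(self.temp_path, final)      # keep on failure
        return final

with tempfile.TemporaryDirectory() as d:
    rec = FailureVideoRecorder(d)
    rec.on_test_start("DMS")
    kept = rec.on_test_end(passed=False)
    print("kept:", os.path.basename(kept))
    rec.on_test_start("IsAlive")
    print("discarded:", rec.on_test_end(passed=True))
```

A built-in version would hook the same two events (test start, test end with pass/fail) that IExecutionExtension already exposes.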
Please add the ability to select all records in the Manage Results dialog and delete them.