When you manually add an element via the Element menu, and the element already exists (according to the find logic) but is given a different friendly name, it is merged with the existing element with no indicator to the user of what happened. The user is not informed which element it was merged into.
Steps to reproduce:
1) Manually add an element to the test; the added element should be highlighted in Element Explorer via the yellow arrow.
2) Click on a different test step to highlight a different element in Element Explorer.
3) Add the same element again, but enter a different friendly name.
Expected: Because the duplicate element was merged with the element from step 1, some visual cue is displayed to the user to indicate which element it was merged into.
Actual: Under the recording toolbar it says "Added xxxx", where xxxx is the name you manually entered. However, element xxxx is nowhere to be found; the yellow arrow still points to the element from step 2 above. Nothing tells you that it was merged with the element from step 1.
Steps to reproduce:
1. Configure the Scheduling service as per the instructions here.
2. Enable two-factor authentication for the account and create an application password.
Expected: An email is sent after a scheduled execution.
Actual: Error in the log file: telerik.teststudio.executionmanagerservice.exe(1844:3),Error] ResultMailSender.ClientSendCompleted() : Smtp sending email failed: Failure sending mail.
Allow the ability to create a custom folder structure for organizing project elements within the Element Repository. This would be helpful when there are many elements on one page, or wherever a different organization of elements is more beneficial. This should be an alternative, additional view for the Element Repository, retaining the option to see elements organized in the current view.
When a customer first sees "Verify IsVisible", they are naturally inclined to believe that Test Studio will test whether or not the target element is actually visible within the browser's window. This leads to frustration when they try to use it and it doesn't work the way they imagine it should (see ticket 814023 for an example of this frustration). For example, if the element is not present in the DOM, the test will actually fail with "element not found" instead of passing a "verify is not visible" step, as one would naturally expect given the incorrect assumption above. Or the test may pass, indicating the element is visible, when it's actually hidden behind another element or scrolled out of view. I'm not sure what the right answer is. Do we change what IsVisible actually does? Do we change the name of the verification so it better reflects what it actually verifies (the element's CSS display and visibility properties)? The current code/action is useful once you correctly understand what it actually does. I'm filing this to log it so we can start a discussion on how to eliminate the ambiguity of this particular verification.
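To make the ambiguity concrete, here is an illustrative sketch (plain Python, not Test Studio code; the class and function names are hypothetical) contrasting what the verification checks today (the CSS display/visibility properties) with what the name leads users to expect (actual on-screen visibility):

```python
from dataclasses import dataclass

@dataclass
class ElementState:
    display: str      # computed CSS 'display' value
    visibility: str   # computed CSS 'visibility' value
    in_viewport: bool # scrolled into the visible browser viewport
    occluded: bool    # covered by another element

def verify_is_visible_current(el):
    # What the step checks today: only the CSS display/visibility
    # properties. A missing element (None here) raises instead of
    # returning False, which is why "verify is NOT visible" fails
    # with "element not found" when the element is absent from the DOM.
    if el is None:
        raise LookupError("element not found")
    return el.display != "none" and el.visibility != "hidden"

def verify_is_visible_expected(el):
    # What the name suggests: true on-screen visibility,
    # where an absent element simply counts as not visible.
    if el is None:
        return False
    return (el.display != "none" and el.visibility != "hidden"
            and el.in_viewport and not el.occluded)
```

An element scrolled out of view passes the current check but fails the expected one, which is exactly the mismatch described above.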
After searching for a test, all of the folders are expanded. It would be nice to have the ability to expand and collapse the folders afterward.
When a test list is executed, it produces results for all executed tests. There should be an option that allows only the failed tests to be re-executed in the context of the same test list.
Color-code test steps by type: for example, action steps would be one color, while verifications and/or waits would be in a different color.
It would be useful to display the product version on the Test Studio startup splash screen, since it is currently only available in the Help tab.
We want to be able to run the same test against different servers and see the difference in performance. The natural way to do this, I think, is to use data binding for the URL and list each system we want to compare in the data file; this lets Test Studio loop the test over each URL. When I do this, however, the results for all of the URLs go to the same results file, making it difficult to compare the runs.
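For reference, a minimal sketch of such a data file (the TargetUrl column name and the server addresses are hypothetical; the navigate step would be bound to whichever column the project actually uses):

```
TargetUrl
http://server-a.example.com/app
http://server-b.example.com/app
http://server-c.example.com/app
```

With one row per server, the test loops once per URL; the request here is that each iteration's performance results be kept separable rather than merged into a single results file.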
Test Studio currently has a compare view that can be found here: http://www.telerik.com/automated-testing-tools/support/documentation/user-guide/performance/compare-view.aspx#1028 We need a way to generate this report at the end of a test list run, either programmatically or otherwise; currently there is no documented method available.
There are times when Test Studio throws the exception described in http://feedback.telerik.com/Project/161/Feedback/Details/121035-path-too-long-on-backup-problem

[04/24 11:48:37,Telerik.TestStudio.RemoteExecutor.exe(5092:133),TestStudio] AutomationHostState.StoreToFile() : EXCEPTION! (see below)
Situation: AutomationHostState.StoreDomOnDisk
Outer Exception Type: System.IO.PathTooLongException
Message: The specified path, file name, or both are too long. The fully qualified file name must be less than 260 characters, and the directory name must be less than 248 characters.
HRESULT: 0x800700CE (Official ID (if app.) = 2147942606, Error Bit = FAILED, Facility = FACILITY_WIN32, Code = ERROR_FILENAME_EXCED_RANGE)
Call Stack:
at System.IO.PathHelper.GetFullPathName()
at System.IO.Path.NormalizePath(String path, Boolean fullCheck, Int32 maxPathLength, Boolean expandShortPaths)
at System.IO.Path.NormalizePath(String path, Boolean fullCheck, Int32 maxPathLength)
at System.IO.FileStream.Init(String path, FileMode mode, FileAccess access, Int32 rights, Boolean useRights, FileShare share, Int32 bufferSize, FileOptions options, SECURITY_ATTRIBUTES secAttrs, String msgPath, Boolean bFromProxy, Boolean useLongPath, Boolean checkHost)
at System.IO.FileStream..ctor(String path, FileMode mode, FileAccess access, FileShare share, Int32 bufferSize)
at ArtOfTest.WebAii.Design.AutomationHostState.StoreToFile(String filePath, Object value, String traceMethod)

[04/24 11:48:37,Telerik.TestStudio.RemoteExecutor.exe(5092:133),TestStudio] AutomationHostState.StoreImageBytesOnDisk() : EXCEPTION! (see below)
Situation: AutomationHostState.StoreImageBytesOnDisk
Outer Exception Type: System.IO.PathTooLongException
Message: The specified path, file name, or both are too long. The fully qualified file name must be less than 260 characters, and the directory name must be less than 248 characters.
HRESULT: 0x800700CE (Official ID (if app.) = 2147942606, Error Bit = FAILED, Facility = FACILITY_WIN32, Code = ERROR_FILENAME_EXCED_RANGE)
Call Stack:
at System.IO.PathHelper.GetFullPathName()
at System.IO.Path.NormalizePath(String path, Boolean fullCheck, Int32 maxPathLength, Boolean expandShortPaths)
at System.IO.Path.GetFullPathInternal(String path)
at System.IO.File.InternalMove(String sourceFileName, String destFileName, Boolean checkHost)
at ArtOfTest.WebAii.Design.AutomationHostState.StoreImageBytesOnDisk(String projectResultsPath, String fileRelativePath, String expectedImageRelativePath)

An MSDN blog post by Kim Hamilton suggested a possible solution to this issue: http://blogs.msdn.com/b/bclteam/archive/2008/07/07/long-paths-in-net-part-3-of-3-redux-kim-hamilton.aspx Please evaluate whether this exception handling is feasible.
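Separately from the long-path support discussed in that blog post, a defensive mitigation would be to shorten overlong result-file names before writing. A minimal sketch of the idea, in Python purely for illustration (the function name is hypothetical; the real fix would live in the .NET AutomationHostState code, and it cannot help when the directory portion itself exceeds the limit):

```python
import hashlib
import os

MAX_PATH = 259  # Win32 limit for a fully qualified file name

def shorten_path(directory, filename, limit=MAX_PATH):
    """Return a path under `limit` characters, hashing the file name if needed."""
    full = os.path.join(directory, filename)
    if len(full) <= limit:
        return full
    stem, ext = os.path.splitext(filename)
    # Replace the overlong stem with a stable hash so repeated calls
    # for the same logical file map to the same shortened name.
    digest = hashlib.sha1(stem.encode("utf-8")).hexdigest()[:16]
    return os.path.join(directory, digest + ext)
```

This keeps the extension (so image files remain recognizable) and is deterministic, so a DOM or screenshot stored twice lands at the same shortened path.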
In consideration of this item, http://feedback.telerik.com/Project/161/Feedback/Details/126710-project-connected-to-tfs-waiting-for-long-time-to-finish-connecting, another idea is to give the user the ability to load a project without checking TFS for status. The project would load as if it were not bound to TFS, skipping the hassle of checking every single file.
Currently, for the IE recorder, we can configure the highlighting options at the project level: color, thickness, and menu hold time. We should hook up the new Chrome and Firefox recorders to use the same configuration for consistency.
It seems Test Studio doesn't work with the modeless popups of a Webpage Dialog. Steps to reproduce:
1. Start recording.
2. Navigate to: http://samples.msdn.microsoft.com/workshop/samples/author/dhtml/refs/showModelessDialogEX.htm
3. Click on the Display Modeless Dialog button.
4. Try to record any actions against the newly appeared window.
Expected behavior: Various steps can be recorded.
Actual behavior: Recording does not work; the dialog does not appear to be detected at all.
Reproduced with IE 10 and Test Studio 2013.2.1426.
Steps to reproduce:
1) Download the Windows GitHub client (a WPF ClickOnce app): http://windows.github.com/
2) Install it and note that the installer creates a shortcut on your desktop by default.
3) Start Test Studio and create a WPF test.
4) Attempt to record against the application by dragging the default shortcut into the WPF application path field in the UI.
Expected: Recording works.
Actual: It fails to attach the recorder even though it starts the app. However, if you use the original GitHub.exe file, or if you create a brand-new shortcut from that file, both can be used to record against the app successfully.
Currently Test Studio uploads the project files to the storage server only if the files are newer than the files in storage. This can lead to problems if you have several branches of the same project. The only workaround I can come up with is to create a new project for every branch and rename it. It would be more useful to have the option to force-upload project files to storage, even if they are older. A simple checkbox to force the upload would suffice. Or, maybe even better, the storage service could differentiate between the projects by parsing more of the directory names.
Here are the steps to reproduce:
1. Open a TFS-connected project.
2. Under the Project tab, click the ‘Open’ button on the TFS toolbar.
3. The Open dialog opens to a Connect to TFS pane with the TFS connection details. Click Connect.
4. The user then has to navigate to their local directory and choose the TFS-connected project.
There should be a way to save the project location so that the user won’t have to navigate to it every time the project is opened.
After performing a Quick Execute, the steps view is always scrolled to the very bottom. It's annoying to have to scroll back up to find the failed step before you can begin to analyze the failure.