What I want to achieve is to run a combination of tests in different browsers. The request stems from a situation where one particular test can't run in e.g. Internet Explorer, and I want that test to run in Chrome instead. The example I have is: 001 (IE) 002 (Chrome) 003 (IE) 004 (IE) It would be nice if you could specify, for each test in the test list, which browser it should run in.
This feature will help teams sharing projects avoid issues with paths that include a username.
Mobile testing is not super steady in recognising elements, and as a result often taps the wrong element. A way to improve this could be to add a way to tap an element with a specific text (e.g. a button).
Multiple customers reported that after applying Windows update KB4041676 on a Windows 10 system, all Excel files bound as data sources in a Test Studio project stopped working as expected - the file remained bound, but no tables appeared for selection. It would be good to investigate what that update changes.
Step up/step down is not working as it should. To be able to choose a value, an initial one must be set first.
A custom WPF app cannot be automated because the recorder crashes the application. Currently I am unable to log in, so I cannot confirm whether the recorder can be attached to the running instance. Details shared internally!
I run my test list on a remote machine. When I look at the Results tab in Test Studio, I don't see the correct image for the failed step. Sometimes I see no images, and other times I see the image from a previously failed step - never the image captured at the time of failure for the step I actually want to see. This makes it hard to analyse the results.
In Windows 10, Microsoft blocks users from opening programs that require administrator permissions to run. Why is Test Studio configured this way instead of running as a plain executable? I have read other posts referencing saving log files and such, but can't the security permissions on the machine itself dictate where files can be saved, instead of Test Studio making that distinction out of the gate? As it stands, I cannot open the program in Windows 10, much less create any tests. And our IT department is not going to grant admin access on individual user machines.
Currently a single test can be bound to only one data source (a single file, a single sheet, etc.). For larger tests that include subtests and pass data to them, the single spreadsheet becomes nearly unmanageably large. In cases like this it would be useful to bind the test to multiple data sources (e.g. multiple sheets within the same Excel file). The various subtests could then reference data in sheets 2, 3, 4, etc., while the parent test uses data from sheet 1. This would significantly simplify setting up the data being fed into the main parent test.
Using FF - the popup is recognized and the recorder attaches to it, but no actions inside it get recorded, highlighting cannot be activated at all, and the DOM tree does not display all elements. When executing a properly recorded test, the popup is closed immediately after it opens and no actions can be executed against it. Using Chrome and IE - recording and execution work as expected. Further details shared internally!
1. Create a new WebForm project and build it - there is no AWSSDK.Core.dll in the bin folder. 2. Convert the project to a "Telerik UI for ASP.NET AJAX" project and build it - now AWSSDK.Core.dll and AWSSDK.S3.dll are added to the bin folder. Expected: These assemblies are not required outside of Test Studio.
We have tens of fields in hundreds of our scripts, so it would be too laborious and inefficient to enter a verification step for each field (text box, drop-down, radio button, etc.). We would like the Telerik software to automatically detect whether it was able to enter the data in a particular field. If not, the script should error out, as it does when a field (element) is not found.
The command line runner options related to publishing results to TFS are not working correctly with the new builds in TFS 2017 vNext.
Steps to reproduce: 1. Create a sample project and add the following steps: - Verify element Exists - Verify element ExistsNot - Wait element Exists - Wait element ExistsNot 2. Convert these to code and double-check how the elements are referenced: the ExistsNot steps embed the element's current find expression inline, whereas the Exists steps reference the element from the Pages file directly. As a result, editing the element's find expression is picked up by the Exists steps, but the ExistsNot steps in code must be updated manually. Expected: The elements in all converted steps to be referenced from the element repository.
Not sure about the exact repro. Sometimes when I open a test with coded steps in Visual Studio, I get the following error when I try to run it: '17-Dec-16 11:23:32 PM' - System.ArgumentException: String cannot have zero length. at System.Reflection.RuntimeAssembly.GetType(RuntimeAssembly assembly, String name, Boolean throwOnError, Boolean ignoreCase, ObjectHandleOnStack type) at System.Reflection.RuntimeAssembly.GetType(String name, Boolean throwOnError, Boolean ignoreCase) at System.Reflection.Assembly.GetType(String name) at ArtOfTest.WebAii.Design.Execution.ExecutionUtils.EnsureTypeExists(Assembly assm, String typeName) at ArtOfTest.WebAii.Design.Execution.ExecutionUtils.CreateCodedTestInstance(Test test, TestResult result, String binariesFolder) at ArtOfTest.WebAii.Design.Execution.ExecutionEngine.InternalExecuteTest(Test test, TestResult initializationResult) at ArtOfTest.WebAii.Design.Execution.TestExecuteProxy.ExecuteTest(ExecuteTestCommand command) Closing the test, rebuilding, and opening it again seems to work around it. But I think this same issue also causes problems when the test suite is deployed on remote machines.
Steps to reproduce: 1. Create a Verify element Exists step and a Wait element Exists step 2. Converting both to code generates Pages.Element.Wait.ForExists(30000); Expected: The Verify element Exists step relies on the project's global element timeout, or alternatively verifies immediately. Actual: The Verify step takes the default wait timeout of a wait step.
Steps to reproduce: 1. Open the test in the sample project. 2. Check the target elements' find logic for steps 2 and 3 - these are modified to use TextContent for the HTML anchor. 3. The target elements in steps 4 and 5 are the same, but were recorded in a second recording session. Expected: Steps to be recorded against the same, already existing elements. Actual: Two new elements are recorded, using TagIndex.
I have created a separate project containing Telerik tests which I plan to import as 'TestAsStep' components into my main test cases. However, I cannot seem to locate them when I try to add them to a test. The Visual Studio solution has the project where these components are located added to the workspace, but it is a different project than the one where the master test is located. The reason this is important to us is that we have a number of different web apps (verticals) that all share some common basic behaviour (e.g. Login to system, Search for User, etc.). It is this common behaviour that we have modelled as individual web tests to be used as 'TestAsSteps' in all of our main projects for each vertical.
The GetRectangle() function call returns invalid X and Y coordinates under special circumstances. Steps to reproduce and access to the application are in the internal description.