Under Review
Last Updated: 28 Feb 2022 14:27 by ADMIN

Hello,

We have a suite of 23 test scenarios comprising hundreds of test steps. They ran and passed as expected in the previous version of Test Studio (version 2021.3.1103.0). To support the most recent browser versions (Google Chrome and MS Edge), we upgraded Test Studio to the most recent version (version 2022.1.215.0). During the upgrade, Test Studio automatically upgraded all the test scripts. Since the upgrade, however, only 8 of the 23 test scenarios pass as expected, even though the target web app has not changed, i.e. the web app is not the problem.

  • Here is the error returned when executing one of the failing test scenarios against Google Chrome Version 98.0.4758.102 (Official Build) (64-bit):

    ArtOfTest.Common.Design.Exceptions.ExecutionException: ExecuteCommand failed!InError set by the client. Client Error:Evaluation failed: SyntaxError: Invalid or unexpected tokenBrowserCommand (Type:'Action',Info:'NotSet',Action:'SetText',Target:'ElementId (tagName: 'textarea',occurrenceIndex: '0')',Data:'This is a regression test note to test note area text field. You can enter any number, text or any characters here.
    eg :- 12.57980
    * # @ +
    assura@co.nz',ClientId:'662245DAE606F8418F6D8F8916AB8CD6',HasFrames:'False',FramesInfo:'',TargetFrameIndex:'0',InError:'True',Response:'Evaluation failed: SyntaxError: Invalid or unexpected token')
    InnerException: none.
    ---> ArtOfTest.WebAii.Exceptions.ExecuteCommandException
       at ArtOfTest.WebAii.Core.Browser.ExecuteCommandInternal(BrowserCommand request)
       at ArtOfTest.WebAii.Core.Browser.ExecuteCommand(BrowserCommand request, Boolean performDomRefresh, Boolean waitUntilReady)
       at ArtOfTest.WebAii.Core.Browser.ExecuteCommand(BrowserCommand request)
       at ArtOfTest.WebAii.Core.Actions.SetText(Element targetElement, String text)
       at ArtOfTest.WebAii.Design.IntrinsicTranslators.Descriptors.SetTextActionDescriptor.Execute(Browser browser)
       at ArtOfTest.WebAii.Design.Extensibility.HtmlActionDescriptor.Execute(IAutomationHost autoHost)
       at ArtOfTest.WebAii.Design.Execution.ExecutionEngine.ExecuteStep(Int32 order)
       --- End of inner exception stack trace ---
  • Here is another error returned when executing one of the failing test scenarios against Google Chrome Version 98.0.4758.102 (Official Build) (64-bit):

    ArtOfTest.Common.Design.Exceptions.ExecutionException: Invalid control '[Element: 'li:0' (id=k-tabstrip-tab-0)]'.  Control does not match FindExpression '[role 'Exact' tab]' ---> System.ArgumentException: Invalid control '[Element: 'li:0' (id=k-tabstrip-tab-0)]'.  Control does not match FindExpression '[role 'Exact' tab]'
       at ArtOfTest.WebAii.Controls.Control.CreateInstance[TControl](Element e, Boolean throwIfNull)
       at Telerik.TestStudio.Translators.KendoUI.Angular.TabStrip.VerificationDescriptors.TabTitleVerificationDescriptor.ExtractData(IAutomationHost targetHost, DescriptorValueStore dataStore)
       at ArtOfTest.WebAii.Design.Execution.ExecutionEngine.ExecuteStep(Int32 order)
       --- End of inner exception stack trace ---
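
A note on the first error above: "Evaluation failed: SyntaxError: Invalid or unexpected token" is what a JavaScript engine reports when, among other things, a string literal contains a raw line break, and the SetText data in that step spans several lines. Whether this is the actual root cause is for Telerik to confirm; the sketch below is plain Python (not Test Studio code) that only illustrates this failure mode and the usual escaping fix:

```python
import json

# Shortened version of the multi-line value from the failing SetText step.
value = "This is a regression test note.\neg :- 12.57980\n* # @ +\nassura@co.nz"

# Naive injection: a raw newline inside a single-quoted JS string literal
# is illegal JavaScript, which is exactly what produces
# "SyntaxError: Invalid or unexpected token" when the snippet is evaluated.
naive_js = "element.value = '" + value + "';"
assert "\n" in naive_js  # the literal is split across lines -> invalid JS

# Escaped injection: json.dumps emits a valid JS string literal in which
# each newline becomes the two-character escape sequence \n.
safe_js = "element.value = " + json.dumps(value) + ";"
assert "\n" not in safe_js and "\\n" in safe_js
print(safe_js)
```

If this is indeed what happens, temporarily shortening the step data to a single line might serve as a workaround until the regression is fixed.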

Can you please look into this and help us identify and resolve the root cause that prevents our test suite from executing and passing as expected with the most recent version of Telerik Test Studio? Thanks in advance.

Stéphane C. (Senior Software Tester at Assura Software)

Under Review
Last Updated: 31 Jan 2022 09:31 by ADMIN

I have a project with two tests.  One test is the child of the other and extracts values from a web page to use in the parent test.  The first step of the parent test is to run the child test. 

If I choose a step later on in the parent test and "Run to here" on that step, once it finishes I can view the log and I can see all of the steps and outcomes from both the parent and child test.

If I instead launch the same test with the Execute button and set a breakpoint on that later parent step, then once it hits the breakpoint and I open the log from the debug options, the only thing I see is the results of the child test steps and nothing from the parent test. There is nothing that even indicates the first step of the parent launched the child test.

I find it hard to believe that this behavior is intentional, or at least that it cannot be configured, but I didn't find anything in my search of the documentation or the forums.

Is this possibly a bug or "unintended feature"?

Thanks in advance,

Jason

P.S. I am running Test Studio Ultimate v 2021.3.1103.0

Under Review
Last Updated: 21 Dec 2021 12:47 by ADMIN
I think that a commit comment should be required.
Under Review
Last Updated: 18 Oct 2021 09:18 by ADMIN
  1. Schedule a recurring job.
  2. Rename the test list in Test Studio.
  3. The recurring run entries in the Results view (the yellow coloured entries) remain with the old name of the test list. 
  4. Upon first execution from the recurring runs, the generated result has the changed name of the test list as it is expected. 
  5. The yellow items for the next coming runs of this recurring job remain with the old name even if Test Studio is restarted.

Expected: The upcoming recurring runs to reflect the change of the name.

Actual: The upcoming recurring runs remain with the initial name of the test list. 

Under Review
Last Updated: 27 Sep 2021 13:31 by ADMIN

BlazorTextArea records a KendoAngularTextBox step. Upon execution, the step doesn't enter the text as expected.

To work around this, you can disable the KendoAngular controls group from Project Settings -> Translators -> HTML and re-record the step.

Under Review
Last Updated: 08 Sep 2021 10:35 by ADMIN
A specific test cannot generate the output html file correctly when using the html option for CLI execution with ArtOfTest.Runner.exe.
Under Review
Last Updated: 14 Jul 2021 13:35 by ADMIN
Created by: Kyriakos
Comments: 1
Type: Feature Request
2

Hi Team,

 

I hope my email finds you well and safe.

I successfully managed to set up the executive server.

 

This ticket is not an issue but a suggestion from a team member of my company.

 

On the executive dashboard page, it would be nice to have a description showing what each test list does. The idea is that the description would be something I write once and that then shows up on the executive dashboard.

 

Thank you for your support

Kyriakos, Prevention at Sea


Under Review
Last Updated: 24 Jun 2021 08:23 by ADMIN
Created by: David
Comments: 1
Type: Feature Request
0

The problem we're currently having is that we can't leverage Test Studio for distributed testing through Azure DevOps, which our IT organization uses for Continuous Integration.

Although the AOR-CLI does provide some benefits, it is missing distributed testing, which creates a large bottleneck in the turnaround times of our tests.

A potential way to solve this, as I've seen on other testing platforms, is to create a Test Studio add-on for Azure DevOps that can call your tests in the same fashion as the Test Studio platform itself.

Under Review
Last Updated: 29 Apr 2021 15:07 by ADMIN

Nested data-driven tests generate quite large results, even when there is no data in the bound data table (built-in or external). The result is almost twice the size of that of the same tests when they do not use data tables. The tests may be using InheritParentDataSource, but the behavior is similar if they are not.

For real user scenarios, the large results may exceed the file size limit (16 MB) that can be transferred through the storage service.

Investigate whether it is possible to optimize the data-driven test results, at least for the case when the data source is empty.

Under Review
Last Updated: 28 Sep 2020 14:44 by Stephane
Created by: Allen
Comments: 8
Type: Feature Request
14
It would be nice to have the storyboard capture subtest screenshots instead of the "Test Step" placeholder. It would be fine to watermark the screenshots with "Test Step," but still show the screens for reference.
Under Review
Last Updated: 15 Sep 2020 13:50 by ADMIN

Not a big deal, or anything requiring further support, but an observation worth noting nonetheless:

If a dynamic target is set between two steps and one of those steps is removed from the HTTP traffic, the target destination defaults to zero rather than letting you edit the target to point to a new destination step (or simply passing the target information to the step that takes the deleted step's place in the HTTP order). Any further attempt to edit that traffic or the dynamic target causes the application to crash.

Also of note: the reason this has been an issue is that deleted traffic seems to reappear on application close and reopen, even when the traffic is saved (with or without the crashing behavior).

Hope this helps! Thanks. 

Under Review
Last Updated: 18 Aug 2020 07:49 by ADMIN

Hello,

ResultsViewer crashes when closing the Step Failure Details window with the OK button.

To reproduce:

  1. Open Trends_UserDefined 132421213805726776.aiiresult
  2. Double-click on Trends_UserDefined 17/08/2020 09:00
  3. Double-click on Trends_UserDefined.tstest
  4. Double-click on any failed step
  5. Click the OK button

 

Best regards,
Alexandre.

Under Review
Last Updated: 23 Jun 2020 09:52 by ADMIN

Dear Support,

The version I am using is not listed below: 2020.1.403.0

I am noticing an inconsistency when using the Replace Element feature, i.e. the selected attributes do not respect the priority set in the Settings - Find Logic (Html) screen.

Please refer to screenshots attached.

Under Review
Last Updated: 12 May 2020 11:43 by ADMIN

See the screenshot.

There are two annotation markers. For some reason, a "MouseClick:LeftClick" annotation will appear, as it should, but then it sticks there, even as 10 or 12 other steps execute and their annotations come and go. Sometimes another MouseClick:LeftClick will appear, the old one will go away, and then the new one will stick. But this does not happen every time. Sometimes a new MouseClick:LeftClick annotation comes in and goes away while the old one is still hanging around.

This happens with almost all of my scripts.

Is it because some of the mouse clicks are from recorded steps and some are from code steps? This is just a guess.

Under Review
Last Updated: 22 Apr 2020 13:27 by ADMIN
Created by: Piyush
Comments: 2
Type: Feature Request
2
Add dark and light themes to the Test Studio product that can be switched at any time.
Under Review
Last Updated: 15 Apr 2020 10:33 by ADMIN

On a data-driven web test, setting the DataRange property to the example value "SingleRow" causes Visual Studio 2017 to crash. I have three rows of data in a data table bound to specific steps of my test, and I was attempting to temporarily limit execution to a single row. Using SingleRow or 'SingleRow' crashes Visual Studio 2017 with the following stack trace, as found in the Windows Event log. I was able to successfully use '1:1' or ':1' as a workaround.

Application: devenv.exe
Framework Version: v4.0.30319
Description: The process was terminated due to an unhandled exception.
Exception Info: System.ArgumentException
   at ArtOfTest.Common.Design.ProjectModel.TestBase.set_DataRange(System.String)
   at ArtOfTest.Common.Design.ProjectModel.TestBase.SetUserProperties(ArtOfTest.Common.Design.ProjectModel.TestBaseUserProperties)
   at ArtOfTest.WebAii.Design.ProjectModel.Test.SetUserProperties(ArtOfTest.Common.Design.ProjectModel.TestBaseUserProperties)
   at Telerik.TestStudio.Web.Toolbar.previewTestProperties_Click(System.Object, System.EventArgs)
   at System.Windows.Forms.ToolStripItem.RaiseEvent(System.Object, System.EventArgs)
   at System.Windows.Forms.ToolStripButton.OnClick(System.EventArgs)
   at System.Windows.Forms.ToolStripItem.HandleClick(System.EventArgs)
   at System.Windows.Forms.ToolStripItem.HandleMouseUp(System.Windows.Forms.MouseEventArgs)
   at System.Windows.Forms.ToolStripItem.FireEventInteractive(System.EventArgs, System.Windows.Forms.ToolStripItemEventType)
   at System.Windows.Forms.ToolStripItem.FireEvent(System.EventArgs, System.Windows.Forms.ToolStripItemEventType)
   at System.Windows.Forms.ToolStrip.OnMouseUp(System.Windows.Forms.MouseEventArgs)
   at System.Windows.Forms.Control.WmMouseUp(System.Windows.Forms.Message ByRef, System.Windows.Forms.MouseButtons, Int32)
   at System.Windows.Forms.Control.WndProc(System.Windows.Forms.Message ByRef)
   at System.Windows.Forms.ScrollableControl.WndProc(System.Windows.Forms.Message ByRef)
   at System.Windows.Forms.ToolStrip.WndProc(System.Windows.Forms.Message ByRef)
   at ArtOfTest.WebAii.Design.UI.VsToolStrip.WndProc(System.Windows.Forms.Message ByRef)
   at System.Windows.Forms.Control+ControlNativeWindow.OnMessage(System.Windows.Forms.Message ByRef)
   at System.Windows.Forms.Control+ControlNativeWindow.WndProc(System.Windows.Forms.Message ByRef)
   at System.Windows.Forms.NativeWindow.DebuggableCallback(IntPtr, Int32, IntPtr, IntPtr)
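
For context on why 'SingleRow' is rejected while '1:1' and ':1' work: the crash above is an unhandled ArgumentException from the DataRange setter, which evidently expects a 'start:end' row range rather than a keyword. Test Studio's actual parser is internal and not shown here; the hypothetical Python sketch below only illustrates that expected format (the function name and its exact behavior are assumptions, not Telerik's API):

```python
def parse_data_range(spec: str, row_count: int) -> range:
    """Hypothetical 'start:end' range parser; mirrors how '1:1' and ':1'
    are accepted DataRange values while a bare keyword like 'SingleRow' is not."""
    if ":" not in spec:
        # Analogous to the ArgumentException raised by the real DataRange setter.
        raise ValueError(f"Invalid DataRange {spec!r}: expected 'start:end'")
    start_text, end_text = spec.split(":", 1)
    start = int(start_text) if start_text else 1    # ':1' -> start at row 1
    end = int(end_text) if end_text else row_count  # '1:' -> run to the last row
    return range(start, end + 1)

print(list(parse_data_range("1:1", 3)))  # [1] - the single-row workaround
print(list(parse_data_range(":1", 3)))   # [1] - the equivalent workaround
# parse_data_range("SingleRow", 3) raises, just as the real setter throws
```

The bug, of course, is not that 'SingleRow' is rejected, but that the exception escapes unhandled and takes Visual Studio down with it.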
Under Review
Last Updated: 06 Apr 2020 08:50 by ADMIN
Created by: Larry
Comments: 0
Type: Bug Report
1
The "Description" column is missing when I export the test results from the Results tab to Excel. This previously worked in Test Studio 2019.1 and earlier versions. It looks to be a regression, so please take a look and resolve it.
Under Review
Last Updated: 23 Sep 2019 07:53 by ADMIN

In our web app, a lot of logs, reports, and forms are often downloaded through the Step Builder's "Handle 'Download' dialog" step.

We would like to compare the downloaded file (xls, xlsx, and pdf are mostly used) to a baseline file so that we can verify the downloaded file has the same content as the expected result.

This can definitely be achieved through a coded step, but it would be great if there were a built-in function to compare and verify files, similar to how images are verified.
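
Until such a built-in verification exists, a coded step can do a byte-level comparison. Test Studio coded steps are written in C# or VB.NET; the Python sketch below (with hypothetical file names) only illustrates the hash-compare approach such a step could implement:

```python
import hashlib
import tempfile
from pathlib import Path

def files_match(downloaded: Path, baseline: Path) -> bool:
    """True when both files have byte-identical content (SHA-256 compare)."""
    def digest(path: Path) -> str:
        sha = hashlib.sha256()
        with open(path, "rb") as handle:
            # Read in chunks so large downloads don't load fully into memory.
            for chunk in iter(lambda: handle.read(8192), b""):
                sha.update(chunk)
        return sha.hexdigest()
    return digest(downloaded) == digest(baseline)

# Demo with hypothetical file names; a real coded step would point at the
# browser's download folder and a baseline stored next to the test project.
with tempfile.TemporaryDirectory() as folder:
    downloaded = Path(folder, "report.pdf")
    baseline = Path(folder, "report.baseline.pdf")
    downloaded.write_bytes(b"quarterly report")
    baseline.write_bytes(b"quarterly report")
    print(files_match(downloaded, baseline))  # True
```

One caveat: a hash compare only proves the files are byte-identical. Formats such as xlsx embed creation timestamps, so files regenerated on each run may need a content-aware comparison instead.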

Under Review
Last Updated: 23 Sep 2019 07:29 by ADMIN

Hi,

Is it possible to have the Test List result in Excel format be as detailed as the HTML format?

Is there any particular reason for the results in Excel format to be so simplified?

Management would prefer to be able to filter results conveniently in Excel, but the results reporting in Excel is overly summarized.

It would be great if we were able to view which steps fail, with details, in the Excel format too.
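
As an interim workaround, the step-level detail in the HTML report can be scraped into a CSV that Excel filters comfortably. The standard-library Python sketch below assumes the HTML report renders its results as ordinary <table> markup, which may not hold for every Test Studio version:

```python
import csv
import io
from html.parser import HTMLParser

class TableExtractor(HTMLParser):
    """Collect the rows of every <table> in an HTML document."""
    def __init__(self):
        super().__init__()
        self.rows, self._row, self._cell = [], None, None

    def handle_starttag(self, tag, attrs):
        if tag == "tr":
            self._row = []
        elif tag in ("td", "th"):
            self._cell = []

    def handle_data(self, data):
        if self._cell is not None:
            self._cell.append(data.strip())

    def handle_endtag(self, tag):
        if tag in ("td", "th") and self._row is not None and self._cell is not None:
            self._row.append(" ".join(part for part in self._cell if part))
            self._cell = None
        elif tag == "tr" and self._row is not None:
            self.rows.append(self._row)
            self._row = None

# Demo with an inline report fragment; a real run would read the .html file.
html_report = ("<table><tr><th>Step</th><th>Result</th></tr>"
               "<tr><td>Login</td><td>Pass</td></tr></table>")
parser = TableExtractor()
parser.feed(html_report)

out = io.StringIO()
csv.writer(out).writerows(parser.rows)  # paste or import the CSV into Excel
print(out.getvalue())
```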

Under Review
Last Updated: 26 Jun 2019 08:29 by ADMIN

Hi,

 

We are using the Telerik Testing Framework for automation testing. Currently we execute our scripts in a client VM using TFS builds, and we would now like to move to Docker-based execution. We couldn't find a direct solution in any forum for integrating the Telerik Testing Framework with Docker. Please help us identify a Docker image for the Telerik Testing Framework, so that we can make use of it and create a container to run our test scripts in a Docker VM. Any websites/links related to this topic would be helpful for us.