Export a test list result to HTML and open it.
Hovering over the actual and expected images zooms them, but they are still not clear enough, which makes them hard to analyze.
Please improve their resolution and visibility.
Currently you can find the ImageOnFailure in the project's Results folder under the unique test list's result folder (something like "Failing Test 132106926993112448").
Add NTLM authentication support to Test Studio for APIs, or provide some kind of workaround.
Steps to reproduce:
1. Open an Enter password step and expand it in the step pane
2. Set the encrypt property
Expected: All fields containing the actual entered text should be masked with *
Actual: If the step is expanded, the field showing the actual entered text does not change its value.
Note: Reopening the test fixes the visual glitch.
It would be useful to have the ability to build the project from the command line.
Currently, scheduling a test list that gets the latest server version of the project is only possible if the source control repository uses Team Foundation Version Control (TFVC).
It would be very useful to add the same functionality for Git source-controlled projects.
The step that verifies whether an element is enabled or disabled cannot be data-driven, and the 'Verify element exists' step also provides no binding option.
However, the 'Verify isVisible' step can be data-driven with true/false values for the isVisible property.
It would be useful to unify this behavior and allow data binding for the isEnabled and Exists verification steps.
When using a built-in step to check the row count of a table with "greater than" or "less than", the step fails even though it should not.
"Equals" works fine, but any other TableVerificationType fails unless the step is converted to a coded step.
To reiterate: converting the step to a coded step makes it work.
We have several test projects with tests that now fail because of this.
A test project that showcases the problem is attached.
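The coded-step workaround amounts to reading the row count yourself and applying the comparison in code. A minimal, language-agnostic sketch of that logic (Python here, with assumed names such as verify_row_count; this is not the WebAii API):

```python
def verify_row_count(actual: int, expected: int, comparison: str) -> bool:
    """Mirror the TableVerificationType choices mentioned above."""
    ops = {
        "equals": actual == expected,
        "greater_than": actual > expected,
        "less_than": actual < expected,
    }
    return ops[comparison]

# e.g. a table with 12 rows checked against an expected threshold of 10:
print(verify_row_count(12, 10, "greater_than"))  # True
print(verify_row_count(12, 10, "less_than"))     # False
```

The point of the sketch is only that the comparison itself is trivial, which is why the coded-step version works while the built-in step fails.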
I have some data-bound parent tests.
For these I created separate sheets in my Excel file, and to verify that my child tests work in all environments I switch their data binding between the sheets.
Every time I rebind a test, the Excel file gets corrupted! I have to restart the computer before it works again… What is the fix for this?
I have attached an example of an Excel document that can no longer be opened.
Pre-requisite: Upgrade to 2019.1.212.0 from 2018.3.1004.0
1. Select the Test List tab
2. Click the List button and add a new Test List with appropriate tests.
3. Click Save
4. Notice the "Storage service unreachable." message pop-up.
5. Click "OK" on the pop-up.
6. Notice that the Test List and Tests are saved as expected.
Note: This issue did not happen in the previous version of Test Studio.
The step's action executes successfully, but the step fails with the following exception:
InError set by the client. Client Error:
Cannot read property 'toggle' of undefined
BrowserCommand (Type:'Action',Info:'NotSet',Action:'InvokeJsFunction',Target:'ElementId (tagName: '',occurrenceIndex: '-1')',Data:'$("th:eq(1)").data("kendoFilterMenu").popup.toggle()',ClientId:'663fc963-6051-4044-9dfb-ce01ca15339d',HasFrames:'False',FramesInfo:'',TargetFrameIndex:'-1',InError:'True',Response:'Cannot read property 'toggle' of undefined')
Workaround: Record the click step that opens the filter without using the Test Studio translator, i.e. directly against the span element.
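A plain-JavaScript sketch of why the invoked script throws: jQuery's data("kendoFilterMenu") returns undefined when no Kendo widget is attached to the targeted th element, so dereferencing .popup fails. The guard below (safeToggle is a hypothetical stand-in, not a Test Studio fix) shows where the "Cannot read property 'toggle' of undefined" error comes from:

```javascript
// Stand-in for $("th:eq(1)").data("kendoFilterMenu") followed by
// .popup.toggle(): only toggle when the widget actually exists.
function safeToggle(widget) {
  if (widget && widget.popup) {
    widget.popup.toggle();
    return true;
  }
  // Widget missing on that <th> -- this is the case the recorded step
  // hits, hence the workaround of clicking the filter span directly.
  return false;
}
```

Recording against the span avoids the script path entirely, so the missing-widget case never arises.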
A data-bound request does not support the "application/json" content type.
The exception listed in the log:
Steps to reproduce: Using the attached project, execute the test that should handle a dialog.
Expected behavior: The save dialog is handled successfully.
Actual behavior: The dialog is not handled and the following is output in the log:
System.TimeoutException: Timed out waiting '60000' msec. for any dialog to be handled '1'
Result StackTrace:
at ArtOfTest.WebAii.Win32.Dialogs.BaseDialog.WaitUntilAnyHandled(IEnumerable`1 dialogs, Int32 handleCount, Int64 timeoutMilliseconds, Boolean resetHandleCount)
at ArtOfTest.WebAii.Win32.Dialogs.BaseDialog.WaitUntilHandled(Int32 handleCount, Int32 timeout, Boolean resetHandleCount)
at ArtOfTest.WebAii.Win32.Dialogs.BaseDialog.WaitUntilHandled()
at UnitTestProject2.UnitTest1.TestMethod1() in c:\Users\ittodorov\Documents\Visual Studio 2012\Projects\UnitTestProject2\UnitTestProject2\UnitTest1.cs:line 42
The application we are testing has dropped support for IE, so I have to switch to Chrome to be able to connect to the DOM window of Test Studio. In order to connect to the browser, I have to use the "Run - To here" option and then connect to the browser. Then I navigate to the page where I actually want to record and debug.
In IE, by contrast, I already have an open instance and can connect the recorder wherever I want. That way of recording and debugging is easier and saves time.
Could you please consider this feedback and add an "Attach to Chrome" option for the browser. Refer to the attached screenshots.
Changing the DPI settings to 125%, for example, on Windows 7 causes trouble with desktop clicks when running tests against Chrome: the click lands below or above the target element. Highlighting in record mode may also be affected.
Workaround: Replace the desktop clicks (the MouseClick() method) with the Click() method, which works as expected. The non-coded alternative is to disable the 'SimulateRealClick' option for the click steps.
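The size of the miss is plain arithmetic on the scaling factor; a short illustrative sketch with assumed coordinates (not values taken from the product):

```python
# At 125% DPI scaling, coordinates computed in logical pixels miss the
# target by 25% of their magnitude if the runner is not DPI-aware.
scale = 1.25
logical_x, logical_y = 400, 300          # element centre, logical pixels
physical_x = round(logical_x * scale)    # where the pointer must actually go
physical_y = round(logical_y * scale)
miss = (physical_x - logical_x, physical_y - logical_y)
print(miss)  # (100, 75): a 75 px vertical miss explains clicks landing
             # below or above the target
```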
Allow deserialization of dateTime values as strings instead of DateTime objects when extracting them with JSONPath.
When a dateTime value is returned in a JSON response ("2019-04-08T00:00:00") and a JSONPath expression is used to extract it from that response, Test Studio deserializes it into a DateTime object, which loses its original format.
It would be useful to have the option to deserialize it as a plain string value.
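A minimal Python sketch of the format loss (Python's json and datetime modules stand in for Test Studio's internals, which are not shown here):

```python
import json
from datetime import datetime

raw = '{"created": "2019-04-08T00:00:00"}'

# Deserializing the extracted value into a date object and rendering it
# back drops the original textual format:
value = datetime.fromisoformat(json.loads(raw)["created"])
print(str(value))                 # 2019-04-08 00:00:00  (format changed)

# Extracting it as a plain string preserves the payload exactly:
print(json.loads(raw)["created"])  # 2019-04-08T00:00:00
```

This is why a string-extraction option matters: any round-trip through a date type forces the value into that type's default rendering.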
When the element's find expression fails to find the element, the execution log should record that an alternative method of verification was used.
The 'Wait for Exists' step does not log the use of image verification, even though the find expression fails.
The click step behaves as expected and its log entry is generated correctly:
'17-Jul-19 14:10:30' - 'Pass' : 2. Click 'BtnKSubmit'