Steps to reproduce:
1. Add a data source that contains more than one row to a test project.
2. Bind a test to the data source.
3. Sort the rows by clicking the column header.
4. Enable Filtering and choose, for example, the second row (2-2).
Actual: The second row before sorting is selected.
Expected: The second row after sorting should be selected.
In Internet Explorer: if one uses the .Capture() method on an SVG element, the resulting image does not contain sub-graphics that are part of the SVG object. To reproduce this, a "Scroll to top" step is required before performing the .Capture().
If you use the Capture() method in Chrome on an element that is not visible, or another window is shown over it, the captured image contains everything visible on the screen, not only the element. This works fine in IE.
Sample date picker used here.
Steps to reproduce:
Expected: To record a step to 'Enter Text'
Actual: A 'Click' step gets recorded, and the entered date is not registered in the test.
We just updated our version of TTS to 2022.1.215.0 and the following message is encountered whenever the Chrome browser is started:
A --disable-site-isolation-trials message is displayed in Chrome.
We are running Version 98.0.4758.102 of Chrome.
Going back now to see if we can get to 2022.1.215.5...
When recording a custom web application in Chrome, Firefox, or Edge, no steps are added after the Navigate step. There are elements listed in the DOM tree, but no steps can be added for them from the Advanced Recording tools. Highlighting is also not working.
If the actions are recorded in IE, they are captured and listed in the test. The recorded test can then be executed against the other browsers.
Steps to reproduce:
1. A demo application is provided.
2. Record a test against it.
The application crashes after the available button is clicked.
It would be useful to have a way to specify global/environmental variables or parameters that are separate and independent from the data-driven feature. These global parameters would be available to all tests through the Data Driven interface for binding values to test steps, but would not be mingled into the data source itself. This way you could create a data-driven test that runs iterations while keeping static environmental data, e.g. the user IDs to log in as, separate from the data itself.
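The separation requested above can be sketched as follows. This is a hypothetical illustration in Python, not Test Studio's actual API; the names `GLOBAL_PARAMS`, `DATA_ROWS`, and `run_iteration`, and the sample values, are all invented for this example:

```python
# Hypothetical sketch: environment-level parameters live apart from the
# data-driven rows, but both are visible when binding test steps.
GLOBAL_PARAMS = {"user_id": "qa_user", "base_url": "https://staging.example.com"}

# The data source drives iterations only; it carries no environment data.
DATA_ROWS = [
    {"product": "A", "quantity": 2},
    {"product": "B", "quantity": 5},
]

def run_iteration(row, params):
    # Each iteration binds against the merged view: row values plus globals.
    bound = {**params, **row}
    return f"{bound['user_id']} orders {bound['quantity']} x {bound['product']}"

results = [run_iteration(row, GLOBAL_PARAMS) for row in DATA_ROWS]
```

The point of the sketch is that changing an environmental value (e.g. the login user) touches one place, while the data source rows stay purely iteration data.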
The test list events are fired when running on the scheduling server; however, the problem occurs when running on the execution client setup (outside the server).
Steps to reproduce:
1. Create your own execution extension.
2. Override OnBeforeTestListStarted and OnAfterTestListCompleted.
3. Create a test list and run it remotely.
Expected: OnBeforeTestListStarted and OnAfterTestListCompleted are called.
Actual: OnBeforeTestListStarted and OnAfterTestListCompleted are not called.
In the two attached test projects, TestProjectForTelerik_BigReport and TestProjectForTelerik_SmallReport, the same test actions are executed (the set of tests is the same too, though there are some differences in test configurations):
1. Load the www.telerik.com page.
2. Come back to the main page.
3. Click on a link to the products. The link is found by content (the product's name) from the data source (TelerikProducts.xls file, Products sheet).
However, the two projects run the tests in different ways. In TestProjectForTelerik_SmallReport we start from SmallReportTest (using the SmallReport test list from the project) and bind the ClickProduct test to the data source directly. In TestProjectForTelerik_BigReport we start from BigReportTest (using the BigReport test list from the project) and bind the IntermediateStep test to the data directly. Data binding is removed from the ClickProduct test, but it must inherit the parent's data source. In the IntermediateStep test we choose which test runs next according to data from the data source; in IntermediateStep we always run ClickProduct. Please see the comments in the IntermediateStep test for more information. This approach is essential for my real project.
These two projects produce very different report sizes: 249674 bytes for the small report versus 989670 bytes for the other, a difference of about 4 times. If the step count in the ClickProduct test or the row count in the data source increases, the size difference grows. As far as I can see in the report from the TestProjectForTelerik_BigReport project, there is a lot of duplicated information: the information from the StepResults section of the ClickProduct test is duplicated in the DataIterationResults section of the IntermediateStep test. Please check this; maybe there is a way to decrease the report size. Resources shared internally.
When DBMigrator is in progress, all you see is "Please wait...". There is no indication of how far it has gotten or how much further there is to go. The user can't tell whether they need to wait another 5 minutes or another 5 days, or even whether anything is actually happening. Some sort of feedback needs to be shown to the user indicating that the migration process is actually making progress. In addition, when the process completes, it would be useful to show something like "N records migrated" to indicate the success or failure of the migration. Currently, when the process finishes, the "Please wait..." message is simply removed; the user can only assume the process was successful and guess how much data was migrated.
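The kind of feedback requested above can be sketched as a migration loop that reports incremental progress and a final count. This is a minimal illustration in Python; `migrate`, `report_every`, and the message wording are invented for this example and are not DBMigrator's actual implementation:

```python
def migrate(records, report_every=2):
    """Hypothetical migration loop that reports progress as it goes."""
    total = len(records)
    migrated = 0
    messages = []
    for record in records:
        # ... the actual per-record migration work would happen here ...
        migrated += 1
        # Periodic progress so the user can see the process is alive.
        if migrated % report_every == 0 or migrated == total:
            messages.append(f"Migrated {migrated}/{total} records")
    # Final summary instead of silently dismissing "Please wait...".
    messages.append(f"Done: {migrated} records migrated")
    return migrated, messages
```

Even a coarse counter like this answers both complaints: the user sees movement during the run and a definitive "N records migrated" at the end.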
Display the full test list name/executed browser in the tooltip, the same way as in the calendar view on the Results tab. Refer to the attachment provided internally.
Test Studio removes elevated trust privileges from an OOB application when connecting to the application for recording. Please refer to the screenshot attached from the local repro. The sample project is attached; the ArtOfTest.SLExtension.dll is included in the SL application.
Connecting to the pop-up is successful. However, using a coded step with ActiveBrowser.NavigateTo(url) to navigate to a specific URL, e.g. "http://www.google.com", results in a "Wait for condition has timed out" error. Full error description:
System.TimeoutException: Wait for condition has timed out
at ArtOfTest.Common.WaitSync.CheckResult(WaitSync wait, String extraExceptionInfo, Object target)
at ArtOfTest.Common.WaitSync.For[T](Predicate`1 predicate, T target, Boolean invertCondition, Int32 timeout, WaitResultType errorResultType)
at ArtOfTest.Common.WaitSync.For[T](Predicate`1 predicate, T target, Boolean invertCondition, Int32 timeout)
at ArtOfTest.WebAii.Core.Browser.WaitUntilReady()
at ArtOfTest.WebAii.Core.Browser.ExecuteCommand(BrowserCommand request, Boolean performDomRefresh, Boolean waitUntilReady)
at ArtOfTest.WebAii.Core.Browser.ExecuteCommand(BrowserCommand request)
at ArtOfTest.WebAii.Core.Browser.InternalNavigateTo(Uri uri, Boolean useDecodedUrl)
at ArtOfTest.WebAii.Core.Browser.NavigateTo(Uri uri, Boolean useDecodedUrl)
Why does the exception occur, given that the page loads successfully?
Capture the URL of the current browser page when creating a feedback item. Include the full URL with the query string and fragment.
The Send dialog should close automatically after submitting a bug or mailing feedback. Generally people won't be using more than one action, so let's eliminate the extra step of making them close another dialog.
Some customers display reports using the XAML RadPdfViewer control. They would like to be able to parse the content displayed in it and verify that the application-generated reports are correct. Currently this can't be done because there appears to be no way to extract the various text fields that are displayed: every single letter is a separate Glyph element, and you can't even determine which letter a Glyph represents.
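What the customers are asking for amounts to reassembling per-character glyphs back into readable strings. A minimal sketch of that idea in Python follows; the `Glyph` structure with an exposed character and coordinates is invented for illustration (per the report above, the real RadPdfViewer Glyph elements do not currently expose the character at all):

```python
from dataclasses import dataclass

@dataclass
class Glyph:
    char: str  # the character this glyph renders (hypothetical field)
    x: float   # horizontal position on the page
    y: float   # vertical position (baseline)

def glyphs_to_text(glyphs, line_tolerance=2.0):
    """Group glyphs into lines by y position, then order each line by x."""
    lines = {}
    for g in glyphs:
        # Snap y to a bucket so slight baseline jitter doesn't split a line.
        key = round(g.y / line_tolerance)
        lines.setdefault(key, []).append(g)
    return "\n".join(
        "".join(g.char for g in sorted(line, key=lambda g: g.x))
        for _, line in sorted(lines.items())
    )
```

If the control exposed even this much per glyph (character plus position), verification of generated report text would become straightforward.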