Sunday, July 8, 2012

Overview of how the Click Test DSL is implemented

One of the main goals while implementing this framework was to make it as readable as possible, also for non-developers. Therefore, most objects and methods have names that try to describe the GUI and the operations from a tester's viewpoint, instead of using terms more common to developers.
Our DSL/framework has been developed as a set of Gui-object classes. They represent objects like Button, Label, TextBox, DataGrid, DataGridCell, etc. All of these classes contain the commonly used operations and validations performed on these objects. Some of the objects also have a class whose name is pluralized; these classes return collections of the different types, like Labels, TextBoxes, etc. The pluralized classes mainly contain validations like AllShouldBeEmpty/HaveContent or AllShouldBeEnabled/Disabled.
All the object classes were created to hide as much as possible of the MS UI Automation code, as well as the code that simulates a user performing these operations.
An example of an operation that simulates a user waiting for the application to be ready is the button click method: after actually clicking the button, the method calls Process.WaitForInputIdle(), which waits for the application process to enter an idle state.
The actual MS UI Automation code is found in extension methods under AutomationCode, and it is these methods the object classes use. The MS UI Automation code is more or less just different ways to search for the correct AutomationElement, i.e. the actual element in the current application, or the correct pattern, like InvokePattern, ValuePattern or SelectionItemPattern.
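As a rough illustration, the search helpers under AutomationCode look something like the sketch below. The names and signatures here are illustrative rather than the framework's actual API; the point is only the general shape: find an AutomationElement by property conditions, then fetch a pattern from it.

using System.Windows.Automation;

// Illustrative sketch, not the framework's actual code
public static class AutomationCodeSketch
{
    // Find a descendant element of a given control type and name
    public static AutomationElement FindControl(this AutomationElement parent,
                                                ControlType type, string name)
    {
        var condition = new AndCondition(
            new PropertyCondition(AutomationElement.ControlTypeProperty, type),
            new PropertyCondition(AutomationElement.NameProperty, name));
        return parent.FindFirst(TreeScope.Descendants, condition);
    }

    // Fetch a control pattern such as InvokePattern, ValuePattern or SelectionItemPattern
    public static T GetPattern<T>(this AutomationElement element, AutomationPattern pattern)
        where T : BasePattern
    {
        return (T)element.GetCurrentPattern(pattern);
    }
}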

Some more concrete samples:

GuiButton

The method Button("") in the superclass hides a call to a static method in the GuiButton class. That method runs the correct UI Automation searches and returns an object of the GuiButton class, which then holds the AutomationElement for the button. The buttons have different action/validation methods (a simplified sketch follows the list):
  • The Click method will look up the stored AutomationElement's InvokePattern, invoke the pattern and wait for the process to become idle.
  • The ShouldBeEnabled method will assert that its AutomationElement.Current.IsEnabled property is true.
  • The ShouldBeVisible method will assert that its AutomationElement.Current.IsOffscreen property is false.
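The sketch below is only meant to show the shape of such a class; the real GuiButton in the repository differs, and the Assert calls assume MSTest (Microsoft.VisualStudio.TestTools.UnitTesting).

using System.Diagnostics;
using System.Windows.Automation;
using Microsoft.VisualStudio.TestTools.UnitTesting;

// Simplified sketch of a GuiButton-style class
public class GuiButton
{
    private readonly AutomationElement element;
    private readonly Process process;

    public GuiButton(AutomationElement element, Process process)
    {
        this.element = element;
        this.process = process;
    }

    public void Click()
    {
        // Invoke the button, then wait for the application to become idle again
        var invoker = (InvokePattern)element.GetCurrentPattern(InvokePattern.Pattern);
        invoker.Invoke();
        process.WaitForInputIdle();
    }

    public void ShouldBeEnabled()
    {
        Assert.IsTrue(element.Current.IsEnabled, "Expected the button to be enabled");
    }

    public void ShouldBeVisible()
    {
        Assert.IsFalse(element.Current.IsOffscreen, "Expected the button to be visible");
    }
}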

GuiCheckBox

The method CheckBox("") works in the same way as the Button("") method. The GuiCheckBox class contains the AutomationElement of the checkbox and the TogglePattern for it (a simplified sketch follows the list):
  • IsChecked / IsUnchecked / IsIndeterminate return whether the TogglePattern's Current.ToggleState equals ToggleState.On / Off / Indeterminate.
  • ShouldBeChecked / ShouldNotBeChecked assert that the checkbox is in the correct state.
  • Check and UnCheck execute the TogglePattern.Toggle method until the checkbox is in the correct state. So in practice, they cycle through all the possible states of the checkbox.
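Again a minimal sketch to show the idea, not the framework's actual code; it assumes MSTest for the asserts.

using System.Windows.Automation;
using Microsoft.VisualStudio.TestTools.UnitTesting;

// Simplified sketch of a GuiCheckBox-style class built on the TogglePattern
public class GuiCheckBox
{
    private readonly TogglePattern togglePattern;

    public GuiCheckBox(AutomationElement element)
    {
        togglePattern = (TogglePattern)element.GetCurrentPattern(TogglePattern.Pattern);
    }

    public bool IsChecked { get { return togglePattern.Current.ToggleState == ToggleState.On; } }
    public bool IsUnchecked { get { return togglePattern.Current.ToggleState == ToggleState.Off; } }

    public void Check()
    {
        // Toggle cycles through the possible states, so repeat until the checkbox is checked
        for (int i = 0; i < 3 && !IsChecked; i++)
            togglePattern.Toggle();
    }

    public void ShouldBeChecked()
    {
        Assert.IsTrue(IsChecked, "Expected the checkbox to be checked");
    }
}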

GuiComboBoxes

The pluralized version of the class for comboboxes inherits from List<GuiComboBox>, and its static call will return all comboboxes whose identifier starts with a certain prefix. This class contains the following methods (see the sketch after this list):
  • ShouldShow(params string[] values), which asserts that each of the supplied values is the selected item of at least one combobox on the screen.
  • CountShouldBe(int expectedcount) asserts that the number of comboboxes is as expected.
  • SetValues(string[] selectValues) loops through the selectValues and sets them as the selected item in the different comboboxes. Note that there must be at least as many comboboxes as there are values to select.
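A rough sketch of what such a pluralized class can look like. GuiComboBox, its SelectedItem property and its SelectItem method are assumed helpers here; the real classes in the repository will differ.

using System.Collections.Generic;
using System.Linq;
using Microsoft.VisualStudio.TestTools.UnitTesting;

// Simplified sketch of the pluralized combobox class
public class GuiComboBoxes : List<GuiComboBox>
{
    public void ShouldShow(params string[] values)
    {
        foreach (var value in values)
            Assert.IsTrue(this.Any(c => c.SelectedItem == value),
                          "No combobox on the screen shows the value: " + value);
    }

    public void CountShouldBe(int expectedcount)
    {
        Assert.AreEqual(expectedcount, Count, "Unexpected number of comboboxes");
    }

    public void SetValues(string[] selectValues)
    {
        // Requires at least as many comboboxes as there are values to select
        for (int i = 0; i < selectValues.Length; i++)
            this[i].SelectItem(selectValues[i]);
    }
}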

Overview of how to use the Click Test DSL

The click-test framework uses inheritance to make all methods available to the actual tests.
When starting to use the framework, we recommend creating a super-class for your project that inherits from UiTestDslCoreCommon. This class should have property shortcuts for often-used objects, and methods for UserControls created specifically for your project. In our project this class has properties for the most commonly used datagrids, buttons without identifying text, and some specialized usercontrols, as well as common operations like changing to a specific tab. This class also holds the StartApplicationAndLogin method, which is set as the [TestInitialize] method.
The actual test classes inherit from the super class. This ensures that, when the test is about to start, the application is started and the user is logged in.
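A rough sketch of such a super-class is shown below. It assumes MSTest; the property name, the GuiDataGrid type and the empty body of StartApplicationAndLogin are made up for illustration, while DataGrid("SearchResult") follows the DSL form described later in this post.

using Microsoft.VisualStudio.TestTools.UnitTesting;

// Illustrative project-specific super-class; the names here are made up for the example
[TestClass]
public class MyProjectTestBase : UiTestDslCoreCommon
{
    // Shortcut property for a datagrid that many tests use
    public GuiDataGrid SearchResult { get { return DataGrid("SearchResult"); } }

    [TestInitialize]
    public void StartApplicationAndLogin()
    {
        // In our project this starts the application under test and logs in a test user
        // before every test; the details are application-specific and left out of this sketch.
    }
}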
After the user is logged in, our tests then follow this pattern:
  1. Change to the tab they are going to test
  2. Set up the specific test context
  3. Perform the actions to test
  4. Optional validation (failed validations will trigger a screenshot)
  5. Close the application (handled by the framework)
There is some validation built into the framework, so the tests won't always need explicit validation. These validations will detect unexpected exceptions by checking for open dialogs when the application closes. If any such dialogs are found, the framework will take a screenshot of the application and fail the test. We have set the DispatcherUnhandledException in our application's App.xaml to log the error and show a dialog with "Fatal error" for uncaught exceptions.
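The handler itself is not part of the framework, but it is roughly along these lines (the logger call is a placeholder for whatever logging the application uses):

using System.Windows;
using System.Windows.Threading;

// In App.xaml.cs, wired up via DispatcherUnhandledException on the Application element
private void App_DispatcherUnhandledException(object sender, DispatcherUnhandledExceptionEventArgs e)
{
    Log.Error("Unhandled exception", e.Exception);        // placeholder for the application's logger
    MessageBox.Show(e.Exception.Message, "Fatal error");  // the dialog the click tests will detect
    e.Handled = true;
}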
Another useful pattern is to start with a call to the method CreateNewUniqueIdentifier which assigns a new Guid to the property UniqueIdentifier. This text-string is then unique to the specific test run. We use this to test our automatching features, and to validate saving and retrieving of strings to our persistent data layer.
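As an example of the pattern (the "txtName" textbox, the "Save" button and the search flow are made up for this illustration; the DSL calls themselves are the ones shown elsewhere in this post):

[TestMethod]
public void SavedValueCanBeFoundAgain()
{
    CreateNewUniqueIdentifier();
    TextBox("txtName").Type(UniqueIdentifier);
    Button("Save").Click();
    // Later in the same test: search for the stored value and verify it comes back
    TextBox("txtSrcValue").Type(UniqueIdentifier);
    PressEnter();
    DataGrid("SearchResult").RowCountShouldBe(1);
}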
The actions and validations typically take the form <GuiObject>("<Identifier>").<Action>(...); where GuiObject is Button, Label, TextBox, etc.; Identifier is the caption of the button or the name of the component; and Action is an action like Click or Type, or a validation call like ShouldBeEnabled, RowCountShouldBe or ShouldRead.
The Identifier of an object (other than captioned buttons) can be somewhat tricky to find. If you look in the code, it is most often the Name or AutomationId property of the object. Looking in the code is not our preferred method, though; we prefer using the Inspect Objects program from the Windows SDK download. Snoop will also show the AutomationProperties.AutomationId. Both of these applications let us find the identifier of a component in a running application.
Note: some action calls contain WaitWhileBusy and Thread.Sleep calls to allow the GUI time to respond to the simulated user interaction. This was something we quickly learned we had to implement. Computer interaction is many orders of magnitude faster than human interaction, and a human will normally wait for the screen to update after pushing a button; a computer will only do that if you tell it to.
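Purely as an illustration of the idea (the framework's actual WaitWhileBusy and SleepIfOnTestMachine implementations may look quite different):

using System;
using System.Diagnostics;
using System.Threading;

// Sketch of the kind of waiting helpers meant above
public class WaitHelpersSketch
{
    private readonly Process process;

    public WaitHelpersSketch(Process process) { this.process = process; }

    public void WaitWhileBusy()
    {
        // Wait for the application's message loop to go idle, then give rendering a moment
        process.WaitForInputIdle();
        Thread.Sleep(200);
    }

    public void SleepIfOnTestMachine(int milliseconds)
    {
        // Hypothetical check: only slow down when running on the dedicated test machine
        if (Environment.MachineName.StartsWith("TEST"))
            Thread.Sleep(milliseconds);
    }
}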
Although we have implemented the most common actions and validations in the framework, others will surely find some functionality missing. Since it is now open source, anyone is invited to expand it with methods they feel should be there.
Please see my previous blog post for a code sample showing a complete test.

The framework also contains some control classes: classes that start and stop the application, find and run all the tests when started as a command-line executable, and take screenshots. Some of these classes require some setup, but after that you will most likely not look at them again. This setup mainly consists of setting the path and filename of the executable, any code you would like to run before all tests or before/after each test, and adding naming options. The naming options we have added mostly consist of mappings from button captions to WPF Command Binding texts, or localized-to-English versions of the captions. SampleProgram.cs in the GitHub repository shows most of these.
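To give a feel for the kind of setup meant here, a hypothetical sketch follows; none of these names are the framework's real API, so look at SampleProgram.cs in the repository for the actual ones.

// Hypothetical setup sketch; the property and method names are invented for illustration
public static class ClickTestSetupSketch
{
    public static void Configure()
    {
        // Where to find the executable of the application under test
        ApplicationUnderTest.Path = @"C:\Builds\MyApp";
        ApplicationUnderTest.ExeName = "MyApp.exe";

        // Map visible button captions to WPF Command Binding texts,
        // and localized captions to their English versions
        CaptionMappings.Add("Lagre", "Save");
    }
}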

Thursday, June 28, 2012

Click-testing or Automatic Regression-testing

Does your company test every feature of your desktop application before every release? How much time and manpower does this take?
My team develops a WPF application that we release a new version of more or less every week. To find out more about our application, see http://www.delficert.com/. We are 3 developers, plus about a quarter of a full-time position for manual testing, but we still get more or less all features of the application tested. The way we manage to do this is by having an old recycled computer on which every build is deployed and automatically regression-tested.
We have written a DSL/framework that encapsulates MS UI Automation and allows us to write readable test scripts. We are now releasing this DSL on GitHub, so that other people can enjoy the same level of automatic testing that we now have. Check out this link: https://github.com/delfidata/clicktest
A goal of the DSL was to allow us to write tests that are hopefully just a little more detailed than the manual test scripts used by manual testers. This makes the tests more accessible to testers, who often are not developers, and it improves readability. Here is a sample of our test code with a couple of images for context:
    ComboBox("cmbFieldList").SelectItem("PoNo");
    TextBox("txtSrcValue").Type("klikktest PoNo");
    PressEnter();
    DataGrid("SearchResult").SelectRow(0);
    Button("Edit Certificate").Click();
    // in the edit tab
    Button("Edit").Click();
    WaitWhileBusy();
    SleepIfOnTestMachine(2000);
    // actual test
    Image("ActualCertPage").RightClickMouse(-500);
    Image("ActualCertPage").LeftClickMouse(50);
    WaitWhileBusy();

This test was actually a test-driven bugfix of a bug we discovered in the GUI code, where right-clicking the mouse outside the expected points put the application in an error state, which made it crash on the next left-click inside the image boundary.
This test assumes that the test framework/DSL opens the application, logs on, and opens the Search tab before it runs. After the test has run, the framework verifies that no error dialogs are showing and that the application is not in an error state. Finally, it kills the application, so earlier test runs never affect the current one. If it finds any errors, it takes a screenshot that we can use to pinpoint the error, in conjunction with the application logs.
A nice feature is that we can start the application in Visual Studio, start the test-run, and get the exception in the debugger.
The code required to click a button in MS UI Automation looks like this:
// Requires: using System.Collections.Generic; using System.Windows.Automation;
void ClickButton(AutomationElement window, string name){
  // Find a button control with the given name (caption) anywhere under the window
  var searchConditions = new List<Condition> {
    new PropertyCondition(AutomationElement.ControlTypeProperty, ControlType.Button),
    new PropertyCondition(AutomationElement.NameProperty, name),
  };
  var searchCond = new AndCondition(searchConditions.ToArray());
  AutomationElement result = window.FindFirst(TreeScope.Descendants, searchCond);
  if (result == null)
      throw new AutomationElementNotFoundException("Could not find element: ", searchConditions);
  // GetPattern<T> is an extension method in the framework that wraps GetCurrentPattern
  var invoker = result.GetPattern<InvokePattern>(InvokePattern.Pattern);
  invoker.Invoke();
}
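For comparison, the same click expressed in the DSL, as in the test sample above, is a single call:

    Button("Edit Certificate").Click();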
As of writing this post, we have 130 of these tests. The tests run in about 2 hours; doing them manually would, we estimate, take at least 2 full days. With the tests being run after every build, and a release schedule of once a week, this really saves us a lot of time.
Our tests test features like:
  • searching for certificates and deliveries
  • storing and editing all objects we handle in the application, and then verifying that they have been correctly stored in the database
  • compiling zip-packages of certificates and verifying that the zip-file is not empty
  • scanning certificates and attachments with a flatbed-scanner
  • drag-n-drop of files from Explorer to the application
  • typing strings in numeric fields
  • verifying that information labels show what they are supposed to show
The main advantage of this is that all the repetitive tests are run by a computer, which frees our manual tester to do more constructive work. He still has to do a rough acceptance test, because we don't have a complete set of automatic tests yet, and we think it is good to have some human eyes on the application before releasing it. He also does a manual test of all new features, so that all features are always tested by someone other than the developers.
Other things he is freed up to do are:
  • Test functionality that is not yet possible to test automatically, like:
    • Visual confirmation that images are shown correctly
  • Exploratory testing
  • Find test-cases which are not automatically tested yet
  • Test and develop usability, workflow etc
Before anyone asks: of course we have unit tests for our code. The unit tests cover the domain layers, the web-service layer and everywhere else we can manage to use them. The tests this blog post is about cover the view layer, and through that they also work as integration tests for our entire system.

This was maybe a bit long for an introductory post, but I will be writing a few more blog posts about this framework and the tests we have, in the near future.