Does your company test every feature of your desktop application before every release? How much time and manpower does this take?
My team develops a WPF application that we release a new version of more or less every week. To find out more about our application, see http://www.delficert.com/. We are 3 developers, plus about a quarter of a full-time position for manual testing, but we still get more or less all features of the application tested. The way we manage to do this is by having an old recycled computer onto which every build is deployed and automatically regression-tested.
We have written a DSL/framework that encapsulates MS UI Automation and allows us to write readable test scripts. We are now releasing this DSL on GitHub, so that other people can enjoy the same level of automated testing that we now have. Check out this link: https://github.com/delfidata/clicktest
A goal of the DSL was to allow us to write tests that are hopefully just a little more detailed than the manual test scripts used by manual testers. This makes the tests more accessible to testers, who are often not developers, and it improves readability. Here is a sample of our test code, with a couple of images for context:
ComboBox("cmbFieldList").SelectItem("PoNo");
TextBox("txtSrcValue").Type("klikktest PoNo");PressEnter();Button("Edit Certificate").Click();// in the edit tabButton("Edit").Click();WaitWhileBusy();SleepIfOnTestMachine(2000);// actual testImage("ActualCertPage").RightClickMouse(-500);Image("ActualCertPage").LeftClickMouse(50);WaitWhileBusy();
This test was actually a test-driven bugfix of a bug we discovered in the GUI code, where right-clicking outside the expected points put the application into an error state that made it crash on the next left-click inside the image boundary.
This test assumes that the test framework/DSL opens the application, logs on, and opens the Search tab before the test runs. After the test has run, the framework verifies that there are no error dialogs showing and that the application is not in an error state. Finally, it kills the application, so earlier test runs never affect the current one. If it finds any errors, it takes a screenshot that we can use, together with the application logs, to pinpoint the error.
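As a minimal sketch of that lifecycle, the setup and teardown could be wired up roughly like this in an NUnit-style test fixture. This is not the actual clicktest code; every helper name here (StartApplicationAndLogOn, OpenTab, and so on) is a hypothetical placeholder for what the framework does.

using NUnit.Framework;

[TestFixture]
public class SearchTabTests
{
    [SetUp]
    public void OpenApplicationAndNavigate()
    {
        // Before every test: start the application, log on, open the Search tab.
        StartApplicationAndLogOn();
        OpenTab("Search");
    }

    [TearDown]
    public void VerifyAndCloseApplication()
    {
        try
        {
            // After every test: fail if an error dialog is showing or the
            // application is in an error state; take a screenshot on failure.
            VerifyNoErrorDialogs();
            VerifyNotInErrorState();
        }
        catch
        {
            TakeScreenshot();
            throw;
        }
        finally
        {
            // Kill the process so earlier runs never affect the next test.
            KillApplication();
        }
    }

    // Hypothetical placeholders standing in for the real framework code.
    void StartApplicationAndLogOn() { }
    void OpenTab(string tabName) { }
    void VerifyNoErrorDialogs() { }
    void VerifyNotInErrorState() { }
    void TakeScreenshot() { }
    void KillApplication() { }
}

The important point is that every test gets the same starting point and the same automatic verification at the end.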
A nice feature is that we can start the application in Visual Studio, start the test-run, and get the exception in the debugger.
The code required to click a button in MS UI Automation looks like this:
void ClickButton(AutomationElement window, string name)
{
    // Find a button with the given name anywhere under the window...
    var searchConditions = new List<Condition>
    {
        new PropertyCondition(AutomationElement.ControlTypeProperty, ControlType.Button),
        new PropertyCondition(AutomationElement.NameProperty, name),
    };
    var searchCond = new AndCondition(searchConditions.ToArray());
    AutomationElement result = window.FindFirst(TreeScope.Descendants, searchCond);
    if (result == null)
        throw new AutomationElementNotFoundException("Could not find element: ", searchConditions);

    // ...and invoke it, which is the UI Automation equivalent of a click.
    var invoker = (InvokePattern)result.GetCurrentPattern(InvokePattern.Pattern);
    invoker.Invoke();
}
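For comparison, here is a rough sketch of how a call like that could be hidden behind the DSL's Button("...").Click() syntax. This is not the actual clicktest implementation, just an illustration of the encapsulation idea; the class and member names are made up.

using System;
using System.Windows.Automation;

// Sketch only: a base class for tests that exposes DSL-style handles
// on top of the main window's AutomationElement.
public abstract class UiTestBase
{
    protected AutomationElement MainWindow;   // set by the framework when the application is started

    protected ButtonHandle Button(string name)
    {
        return new ButtonHandle(MainWindow, name);
    }

    protected class ButtonHandle
    {
        private readonly AutomationElement _window;
        private readonly string _name;

        public ButtonHandle(AutomationElement window, string name)
        {
            _window = window;
            _name = name;
        }

        public void Click()
        {
            // Same lookup-and-invoke steps as the ClickButton method above.
            var condition = new AndCondition(
                new PropertyCondition(AutomationElement.ControlTypeProperty, ControlType.Button),
                new PropertyCondition(AutomationElement.NameProperty, _name));
            var element = _window.FindFirst(TreeScope.Descendants, condition);
            if (element == null)
                throw new InvalidOperationException("Could not find button: " + _name);
            var invoker = (InvokePattern)element.GetCurrentPattern(InvokePattern.Pattern);
            invoker.Invoke();
        }
    }
}

A test class deriving from such a base class can then write Button("Edit").Click(), which is essentially what the sample script at the top of this post does.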
As of writing this post, we have 130 of these tests. They run in about 2 hours; doing them manually would, we estimate, take at least 2 full days. With the tests being run after every build, and a release schedule of once a week, this really saves us a lot of time.

Our tests cover features like:
- searching for certificates and deliveries
- storing and editing all objects we handle in the application, and then verifying that they have been correctly stored in the database
- compiling zip-packages of certificates and verifying that the zip-file is not empty
- scanning certificates and attachments with a flatbed-scanner
- drag-n-drop of files from Explorer to the application
- typing strings in numeric fields (see the sketch after this list)
- verifying that information labels show what they are supposed to show
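To show how small such a check can be in the DSL, a test for the numeric-field case could look roughly like the lines below. The field name is made up for illustration, and the actual assertion is the framework's automatic post-test verification described above (no error dialogs, no error state).

// Hypothetical field name; the framework's automatic checks after the test
// act as the assertion that nothing blew up.
TextBox("txtQuantity").Type("this is not a number");
PressEnter();
WaitWhileBusy();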
The automated regression tests also free up our manual tester to do things like:
- Test functionality that is not yet possible to test automatically, like:
- Visual confirmation that images are shown correctly
- Exploratory testing
- Find test-cases which are not automatically tested yet
- Test and develop usability, workflow, etc.
Before anyone asks: of course we also have unit tests for our code. They cover the domain layers, the web-service layer, and everywhere else we can manage to use them. The tests this blog post is about cover the view layer, and through that they also work as integration tests for our entire system.

This was maybe a bit long for an introductory post, but I will be writing a few more blog posts about this framework and the tests we have in the near future.