When I accepted the job to work for Dovetail Software, one of the things that I was most interested to learn about was the mythical testing infrastructure that they had built up. Over the last two years, I have studied the code, and this is my attempt to extract that pattern from the team's work and share it with the greater community.
Project Size: Larger projects, or projects with critical testing needs
Investment Time: Extensive
Testing Framework: In our case we use NUnit; what you need is a standard way to run a set of tests. I am personally most familiar with xUnit-style frameworks, but anything like that should be acceptable.
AppContext: this is what loads up the application context. For us, a lot of this lives in the IoC container - I have no idea what this is going to look like in non-IoC languages. However, the point of it is that you should be able to bootstrap your application from something as simple as newing up an object. All of the parameters come from configuration or are overridden at test start.
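To make that concrete, here is a minimal sketch of the idea - not the team's actual C# code, just the shape of it in Python, with invented names like `base_url` and `timeout`:

```python
# Hypothetical AppContext sketch: bootstrap the app from configuration,
# letting a test override any individual setting at start. All names here
# are illustrative, not taken from the real system.

class AppContext:
    DEFAULTS = {"base_url": "http://localhost:8080", "timeout": 30}

    def __init__(self, **overrides):
        # start from configuration, then let the test override anything
        self.settings = {**self.DEFAULTS, **overrides}

    def __getattr__(self, name):
        try:
            return self.settings[name]
        except KeyError:
            raise AttributeError(name)

# A test bootstraps the whole app by newing up an object:
ctx = AppContext(timeout=5)
print(ctx.base_url, ctx.timeout)  # defaults plus the test's override
```

The point is only that construction is cheap and explicit: the test decides what differs from production, and everything else flows in from configuration.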
UI Drivers: these are focused pieces of code that interact directly with the raw API (this could be WatiR, Selenium, the REST API, or the code API). The raw API can change quickly and suddenly (especially when you own the API), so we need a layer to protect us from that. An example of a driver is a small piece of code that can enter ‘a value’ into an HTML form element. That class will be very small, but the datepicker might be a bit trickier and will depend highly on which datepicker you choose to use. This is the second level of abstraction (the first being a tool like Selenium).
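A sketch of the simple case, with the browser layer stubbed out (in real code this would sit on top of Selenium or Watir; every name below is invented for illustration):

```python
# Hedged sketch of a tiny UI driver that types into a text input.
# "Browser" stands in for the raw automation layer (e.g. Selenium);
# its API here is made up for the example.

class Browser:
    """Stub for the raw browser automation tool."""
    def __init__(self):
        self.dom = {}

    def set_value(self, css_selector, value):
        self.dom[css_selector] = value

    def get_value(self, css_selector):
        return self.dom.get(css_selector)

class TextBoxDriver:
    """Enters 'a value' into an HTML form element; very small on purpose."""
    def __init__(self, browser, selector):
        self.browser = browser
        self.selector = selector

    def enter(self, value):
        self.browser.set_value(self.selector, value)

browser = Browser()
TextBoxDriver(browser, "#username").enter("kevin")
print(browser.get_value("#username"))  # -> kevin
```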
A more complex one might look like:
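Sketching the datepicker case (again in illustrative Python, pretending the widget is backed by three selects - the real details depend entirely on the widget you picked):

```python
from datetime import date

# Hypothetical datepicker driver. Which clicks/selects it performs depends
# entirely on the datepicker widget in use; here we invent one backed by
# three <select> elements.

class Browser:
    """Stub for the raw browser automation layer."""
    def __init__(self):
        self.dom = {}

    def select(self, selector, value):
        self.dom[selector] = value

class DatePickerDriver:
    """Translates a date into whatever the widget understands."""
    def __init__(self, browser, base_selector):
        self.browser = browser
        self.base = base_selector

    def enter(self, when: date):
        # one place to change if the widget is ever swapped out
        self.browser.select(self.base + " .year", str(when.year))
        self.browser.select(self.base + " .month", str(when.month))
        self.browser.select(self.base + " .day", str(when.day))

browser = Browser()
DatePickerDriver(browser, "#due-date").enter(date(2011, 3, 14))
print(browser.dom["#due-date .month"])  # -> 3
```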
So now, if you change your datepicker code, you only have to go to one place to update everything. Nice and DRY.
The Navigation Driver: While this guy is not a UI-type driver, it is a critical piece for keeping your application testing sane. All this driver does is get you onto the screen that you want. A key wrinkle in this driver: if you say you want to go to a given URL and you are not logged in, it will take care of the login for you and then get you onto that screen. This helps remove needless noise from your test code.
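That login wrinkle might look something like this (a Python sketch with a stubbed browser and invented credentials, not the team's actual implementation):

```python
# Sketch of a navigation driver: its only job is to land you on the screen
# you asked for, logging in transparently if the app bounces you to login.

class Browser:
    """Stub: redirects anonymous users to the login page."""
    def __init__(self):
        self.logged_in = False
        self.current_url = None

    def goto(self, url):
        self.current_url = url if self.logged_in else "/login"

    def submit_login(self, user, password):
        self.logged_in = True

class NavigationDriver:
    def __init__(self, browser, user="test-user", password="secret"):
        self.browser = browser
        self.user, self.password = user, password

    def goto(self, url):
        self.browser.goto(url)
        if self.browser.current_url == "/login":
            # take care of the login for you, then retry the navigation
            self.browser.submit_login(self.user, self.password)
            self.browser.goto(url)

browser = Browser()
NavigationDriver(browser).goto("/cases/42")
print(browser.current_url)  # -> /cases/42
```

The test simply asks for a destination; whether a login happened along the way is nobody's business but the driver's.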
Converters: Entity converters make testing life much easier as well by taking in simple text phrases and converting them into test objects.
These guys are a powerful part of the system, but they require a lot of baking to really become useful. You need to be able to express your testing commands as strings so that, under the covers, these guys can get involved to build out the correct objects.
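A toy version of the idea, with a made-up `kind: value` phrase format (the real converters and their phrase grammar are the team's own):

```python
# Sketch of entity converters: turn a simple text phrase from a testing
# command into a real test object. The phrase format here is invented.

class User:
    def __init__(self, name):
        self.name = name

CONVERTERS = {
    "user": lambda text: User(text),
    "date": lambda text: tuple(int(part) for part in text.split("-")),
}

def convert(phrase):
    # "user: annie" -> User("annie"); under the covers the right
    # converter gets involved to build out the correct object
    kind, _, text = phrase.partition(":")
    return CONVERTERS[kind.strip()](text.strip())

annie = convert("user: annie")
print(type(annie).__name__, annie.name)  # -> User annie
```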
Screens: I dislike this name because not everything is a screen, but until I can come up with a better word it's what I will use. A screen simply orchestrates a series of steps to achieve a higher-level goal. For example, we have a logon screen. This screen orchestrates entering a username and password and submitting the form. The screen can also check whether login was successful and report back on any errors visible on the screen.
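In sketch form (Python again, with the drivers and validation faked - only the orchestration shape is the point):

```python
# Sketch of a "screen": orchestrates lower-level drivers toward a higher
# goal. The driver and the valid-credentials check are stubs.

class FakeDriver:
    def __init__(self):
        self.entered = {}

    def enter(self, field, value):
        self.entered[field] = value

class LogonScreen:
    VALID = {("annie", "secret")}  # stand-in for the real app's check

    def __init__(self, driver):
        self.driver = driver
        self.errors = []

    def login(self, username, password):
        # orchestrate: enter username, enter password, submit the form
        self.driver.enter("username", username)
        self.driver.enter("password", password)
        if (username, password) not in self.VALID:
            self.errors.append("Invalid username or password")
        return not self.errors  # did login succeed?

screen = LogonScreen(FakeDriver())
ok = screen.login("annie", "wrong")
print(ok, screen.errors)  # -> False ['Invalid username or password']
```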
There is a lot going on in a real screen like this: input models, our abstraction on top of input models for DOM querying, and .Net generics (if you are new to those).
Putting it all together
- Testing Framework (like NUnit)
- external shell of app context and screens
- inside the shell, navigation and UI drivers
- inside the drivers, the browser abstraction
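One way those layers could stack in a single test, sketched end-to-end with stubs (every class name here is invented; the real stack is .NET with NUnit on the outside):

```python
# End-to-end sketch of the layering: test framework outside, app context
# and screens as the shell, drivers inside that, browser abstraction at
# the core. All stubs, invented names.

class Browser:                        # core: the browser abstraction
    def __init__(self):
        self.values, self.url = {}, None

    def goto(self, url):
        self.url = url

    def set(self, selector, value):
        self.values[selector] = value

class NavigationDriver:               # inside the shell
    def __init__(self, browser):
        self.browser = browser

    def goto(self, url):
        self.browser.goto(url)

class LogonScreen:                    # shell: orchestrates the drivers
    def __init__(self, browser):
        self.browser = browser

    def login(self, user, password):
        self.browser.set("#username", user)
        self.browser.set("#password", password)
        return True                   # a real screen would read the page

class AppContext:                     # shell: bootstraps everything
    def __init__(self):
        self.browser = Browser()
        self.nav = NavigationDriver(self.browser)
        self.logon = LogonScreen(self.browser)

# The body of an NUnit-style test, expressed with plain assertions:
app = AppContext()
app.nav.goto("/login")
assert app.logon.login("annie", "secret")
assert app.browser.url == "/login"
print("login test passed")
```

Notice that the test body touches only the app context, navigation, and screens; the browser abstraction never leaks up into test code, which is what makes the layers below it replaceable.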
The team has put a lot of time and effort into this pattern / framework for their testing infrastructure. I can say with comfort that this level of investment lets them make drastic changes to their entire system with little effect on the testing code. Examples of this include replacing view engines, changing out UI widgets, renaming low-level constructs in the application, etc. The bottom line is that they are willing to make just about any change they deem fit, even after 4 years of active development on the product.