Monday, June 21, 2010

Barriers to Automation

I've seen several attempts at automation at several companies. Many get started, then drop off, and are eventually rendered unusable. Keeping in mind the definition of insanity, "doing the same thing over and over and expecting different results", it may be time to look at why these attempts have failed.

Here are some of the scenarios that have come to mind based on what I've seen:
  • Someone takes the initiative to create a set of tests, but other priorities take them away and the tests become obsolete, making them nearly useless.

  • We bring in a contractor to build tests, but when the contract is over, nobody is given the responsibility or the time to keep them up.

  • We start some UI automation testing and find that the scripts are fragile, making upkeep difficult; ultimately they are left to become obsolete. (A small sketch of this fragility follows.)
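
To make that fragility concrete, here is a minimal sketch using Selenium WebDriver's Python bindings; the URL, element IDs, and credentials are all hypothetical. The point is that locators tied to page layout break with every cosmetic change, while stable IDs only break when the feature itself changes.

    # A hypothetical login flow; the URL and element IDs are made up.
    from selenium import webdriver
    from selenium.webdriver.common.by import By

    driver = webdriver.Firefox()
    driver.get("http://example.com/login")

    # Fragile: an absolute XPath breaks as soon as the page layout shifts.
    # driver.find_element(By.XPATH, "/html/body/div[2]/table/tr[3]/td/input")

    # More robust: stable application-assigned IDs survive layout changes,
    # so the script only breaks when the feature itself changes.
    driver.find_element(By.ID, "username").send_keys("testuser")
    driver.find_element(By.ID, "password").send_keys("secret")
    driver.find_element(By.ID, "login-button").click()

    driver.quit()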

This is not to say that we don't have some successes with automation:

  • There are many experiences of using throw-away scripts to perform some focused and repetitive task. (One small example follows this list.)

  • There are internal tools built to assist with generating data.
  • Development teams have their own scripts / applications for performing installation/configuration/cleanup tasks.
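
As one illustration of these throw-away helpers, here is a minimal sketch of a script that generates test user data; the field names, values, and output file are all hypothetical.

    # Generate a CSV of fake users for testing; every value here is made up.
    import csv
    import random

    SURNAMES = ["smith", "jones", "nguyen", "garcia", "patel"]

    with open("test_users.csv", "w", newline="") as f:
        writer = csv.writer(f)
        writer.writerow(["username", "email", "age"])
        for i in range(100):
            name = "%s%03d" % (random.choice(SURNAMES), i)
            writer.writerow([name, name + "@example.com", random.randint(18, 80)])

A script like this earns its keep precisely because nobody maintains it: it is rewritten or discarded when the need changes.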

The trick is to see the pattern across the successes and failures.

Successful attempts at automation seem to share some common qualities. They are typically either grassroots efforts, where time is found to work on them, or efforts given explicit priority by management. Grassroots efforts usually have modest upkeep costs, so the time for that upkeep can be found. Management-directed efforts require much more upkeep, so they need continued priority set for them. Grassroots projects are typically used heavily by internal staff, while management-driven projects are typically used outside the development teams, by other internal teams as well as customers. Your experiences may differ; these are based on my own observations.

Unsuccessful attempts at automation appear to have these things in common. Nobody, either at the grassroots level or in management, made the case for the time needed to maintain the scripts. And the scripts were susceptible to changes in code, operating systems, and third-party components such as browsers, Java, .NET, and application server versions.

So how do we take advantage of the things that make these efforts successful and mitigate the things that make them unsuccessful?

Automation has to be something that is used regularly. Whether it's an expectation of your development process or a commitment to spend time on upkeep during a project, it can't be an afterthought.

The benefits of automation must be valued both at the grassroots level and by management. In both cases I generally see agreement that automation is helpful, but there may be different ideas about what that looks like. Making those expectations visible and openly discussed will contribute to automation's long-term success.

Environment and code changes that affect scripting should be mitigated. Managing unit tests over time becomes difficult as the library of tests grows large. Not only do they take time to run, they need to be managed and 'sunset' just as we would do for any other piece of code. These tests need a lifecycle that addresses 1) when they should be built, 2) how long they should be maintained, and 3) when they should be removed from use.

UI tests are much more susceptible to environmental changes. For example, different operating systems render web pages and applications differently, making some UI tests difficult or impractical to run cross-platform. Third-party UI components often require customized automation tools, frequently at additional cost. These tools are not absolutely required, but they help not only with driving the tests but also with validating the results. UI tests need a lifecycle with the same requirements as unit tests, and while they are helpful, more scrutiny should be applied to which tests get automated and in which environment(s).
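
One way to make that lifecycle explicit is to record a retirement date on each test. Here is a minimal sketch assuming a plain Python unittest suite; the decorator name, dates, and test case are hypothetical.

    # Skip a test automatically once its agreed retirement date has passed.
    import datetime
    import unittest

    def sunset(retire_on):
        """Retire a test after the given date."""
        def decorator(test):
            if datetime.date.today() >= retire_on:
                return unittest.skip("retired on %s" % retire_on)(test)
            return test
        return decorator

    class LegacyReportTests(unittest.TestCase):
        @sunset(datetime.date(2011, 1, 1))
        def test_old_export_format(self):
            # Covers a feature scheduled for removal; the test retires with it.
            self.assertTrue(True)

    if __name__ == "__main__":
        unittest.main()

The skip keeps the retirement visible in test reports, so removing a test from the suite becomes a deliberate decision rather than silent rot.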

Now what?

"It depends". Much of what needs to happen must be based on your circumstances. What is the will internally to make changes? How far does this will go to ensure that these changes are implemented for the long-term? What resources are available to impelement these? What training is needed? Once you start to answer these, the answers will become more clear.
