Tuesday, June 29, 2010

Test your code online!

I've seen this posted elsewhere and it's worth a look! While this isn't a full testing environment, you can validate your C# code online, or you can download Pex and use it locally.
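Pex works by exploring parameterized unit tests: you write a test that should hold for any input, and the tool searches for inputs that break it. In spirit, it looks like this (a Python sketch, since Pex itself targets .NET; the function names here are made up for illustration):

```python
def slugify(text):
    # Code under test (hypothetical): lowercase and replace spaces.
    return text.strip().lower().replace(" ", "-")

def check_slug_has_no_spaces(text):
    # A parameterized test: this property should hold for ANY input.
    # A tool like Pex generates inputs trying to falsify the assertion.
    assert " " not in slugify(text)

# A hand-picked sample of inputs; Pex would generate these automatically.
for sample in ["Hello World", "  leading", "already-slugged", ""]:
    check_slug_has_no_spaces(sample)
print("all samples passed")
```

The point of the tool is that it finds the inputs you wouldn't have thought to hand-pick.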

Tuesday, June 22, 2010

Interesting debugging technique for Windows

I've been reading "Windows Internals, Fifth Edition" and ran across an interesting way to debug not only Windows drivers, but any user application.

If you have an MSDN license, you can get what's called the 'checked build'. This is a build with debug messages enabled and optimizations turned off. It's most useful for debugging device drivers, but it can also help replicate timing issues with the kernel, since the timing differs from the retail version. Rather than installing every component from the checked build, you can limit it to a couple of files. The instructions below show how to install it and set up your system with a boot option for it.

Obtain the checked build

Install minimal components from the checked build

Especially if you are tracking down timing issues, this may expose the issues more clearly.
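Concretely, the minimal setup might look something like this (a sketch based on the usual MSDN/WDK guidance; the source paths, checked-build file names, and boot entry GUID are examples and will vary with your Windows version and hardware):

```
rem Copy the checked kernel and HAL from the checked-build media into
rem System32 under new names (multiprocessor kernel shown as an example).
copy D:\checked\ntkrnlmp.exe %SystemRoot%\System32\ntkrnlmp.chk
copy D:\checked\halmacpi.dll %SystemRoot%\System32\halmacpi.chk

rem Clone the current boot entry so the retail build stays available.
bcdedit /copy {current} /d "Windows (checked kernel)"

rem Point the new entry (use the GUID printed by the copy command above)
rem at the checked kernel and HAL.
bcdedit /set {your-new-guid} kernel ntkrnlmp.chk
bcdedit /set {your-new-guid} hal halmacpi.chk
```

On the next boot you can pick between the retail and checked configurations from the boot menu, so the checked kernel never has to replace your working install.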

Monday, June 21, 2010

Barriers to Automation

I've seen several attempts at automation from several companies. Many get started, then drop off and are eventually rendered unusable. Keeping in mind the definition of insanity being "Doing the same thing over and over and expecting different results", it may be time to look at why these have failed.

Here are some of the scenarios that have come to mind based on what I've seen:
  • Someone takes the initiative to create a set of tests, but other priorities take them away and the tests become obsolete, making them nearly useless.

  • We bring in a contractor to build tests, but when the contract is over, nobody is given the responsibility and the time to keep them up.

  • We start some UI automation testing and find that the scripts are fragile, making upkeep difficult, and ultimately they are left to become obsolete.

This is not to say that we don't have some successes with automation:

  • There are many experiences of using throw-away scripts to perform some focused and repetitive task.

  • There are internal tools built to assist with generating data.
  • Development teams have their own scripts / applications for performing installation/configuration/cleanup tasks.

The trick is to see the pattern with the successes and failures.

Successful attempts at automation seem to share some common qualities. They are typically either grassroots efforts, where time is found to work on them, or efforts given ongoing priority by management. Grassroots efforts usually have modest upkeep costs, so time for maintenance can be found; management-directed efforts require much more upkeep, so they survive only when that priority is sustained. Grassroots projects tend to be used heavily by internal staff, while management-directed projects tend to be used outside the development teams (by other internal teams as well as customers). Your experiences may differ; these are based on my own observations.

Unsuccessful attempts at automation appear to have these things in common. Nobody, at either the grassroots level or in management, committed the time needed to maintain the scripts. The scripts were susceptible to changes in code, the operating system, and 3rd-party components such as browsers, Java, .NET, application server versions, etc.

So how do we take advantage of the things that make these efforts successful and mitigate the things that make them unsuccessful?

Automation has to be something that is used regularly. Whether it's an expectation of your development process or a commitment made to spend time on upkeep during a project, it can't be an afterthought.

The benefits of automation must be valued both at the grassroots level and by management. In both cases, I generally see agreement that automation is helpful, but there may be different ideas about what that looks like. Keeping this visible and openly discussed will contribute to its long-term success.

Environment and code changes that affect scripting should be mitigated. Managing unit tests over time is difficult once the library of tests becomes large. Not only do they take time to run, they need to be managed and 'sunset' just as we would do for any other piece of code. There needs to be a lifecycle for these tests that addresses 1) when they should be built, 2) how long they should be maintained, and 3) when they should be removed from use.

UI tests are much more susceptible to environmental changes. For example, different operating systems render web pages (and applications) differently, making some UI tests difficult or impractical to run cross-platform. Using 3rd-party UI components often requires customized automation tools, frequently at additional cost. These tools are not absolutely required, but they help not only with automating the tests but also with validating the results. UI tests also need a lifecycle with the same requirements as unit tests. While UI tests are helpful, more scrutiny needs to be applied to which tests get automated and in what environment(s).
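The lifecycle questions above can be made concrete with a little metadata per test. A minimal sketch in Python (the class, the field names, and the dates are all hypothetical) showing how a suite might be filtered so that tests past their sunset date simply stop running:

```python
from datetime import date

class TestRecord:
    """Hypothetical lifecycle metadata for one automated test."""
    def __init__(self, name, built, sunset):
        self.name = name
        self.built = built      # date the test was added to the suite
        self.sunset = sunset    # date after which the test is retired

def active_tests(records, today):
    """Return only the tests that are still within their lifecycle."""
    return [r.name for r in records if r.built <= today < r.sunset]

suite = [
    TestRecord("login_smoke", date(2008, 1, 15), date(2011, 1, 1)),
    TestRecord("legacy_report_export", date(2006, 6, 1), date(2009, 6, 1)),
]

print(active_tests(suite, date(2010, 6, 21)))  # → ['login_smoke']
```

The mechanism matters less than the habit: each test carries an explicit answer to "how long should this be maintained?", so retiring a test is a deliberate decision rather than silent rot.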

Now what?

"It depends". Much of what needs to happen must be based on your circumstances. What is the will internally to make changes? How far does this will go to ensure that these changes are implemented for the long term? What resources are available to implement them? What training is needed? Once you start answering these questions, the path forward will become clearer.

Book Review - Little Brother

Well, it's not a book review in the truest sense. I've only gotten about a third of the way through "Little Brother" (free download), and I just had to post a review. You can read a summary on the book's website.

Essentially, this is a fictional account grounded in many topics related to security (physical, computer, privacy, etc.) and a series of events that affect them. A seventeen-year-old boy and his friends are caught up in a terrorist plot by being in the wrong place at the wrong time. What follows is a drastic shift in what various governments and other groups consider 'acceptable levels of monitoring', and what that means to those being monitored and those doing the monitoring.

What strikes me most is the main character's internal monologue on the effectiveness of different security measures. In some cases, they just make everyone feel more secure, but do little to address true risks. In other cases, the data gathered starts to be misused, prompting the question "Who is watching the watchers?".

This book is an easy and excellent read. It's entertaining and thought-provoking. You may even learn a thing or two; I certainly am. The book can be purchased, and it can also be downloaded in several electronic formats for free.