Tuesday, June 23, 2009

Spec Explorer - Validate your model and generate tests

Sounds like a tall order. Let's start at the end result, and then go through how we get there.

The specification we are describing here is for a product I work on where we would like system administrators to send requests to users in the form of messages. For this example, we are only looking at the state transition diagram for the messages.

What does Spec Explorer Give Me?
The following diagram and tests were auto-generated based on the model specified for Spec Explorer.

For now, note that the tests below are labeled 'test segment 0', 'test segment 1', etc., and that S0 and S3 in the transitions represent 'State 0' and 'State 3' respectively. The diagram correctly describes these states, and later in this entry I'll show how 'State 0' comes to be associated with the name 'NotCreated'.

While the diagram isn't the easiest to follow, keep in mind that it was generated automatically from the model in the spec. You should still be able to see that the message starts in the 'NotCreated' state, moves to 'Pending', and so on until it ultimately reaches the 'Deleted' state.

The transitions are the actions taken to move from one state to another such as 'UserViewMessage' and 'CreateMessage'.

The test suite shows how to move through each path of this state diagram. In this particular case, we are able to cover all paths because we don't have any loops (such as you would get if you could move a 'RejectedByClient' message back to 'Pending').

If we had such a model, the test suites would be selected using strategies built into Spec Explorer and could be used as a basis for testing.
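To see why the absence of loops matters, here is a hypothetical sketch (not Spec Explorer itself; the transition table is transcribed by hand from the spec later in this entry) that enumerates every action sequence from 'NotCreated' to 'Deleted'. With no loops, the set of paths is finite, so an exhaustive walk like the 'test segments' above is possible:

```python
# Hypothetical sketch: enumerate every action path through the message
# state machine described in this entry. Because the graph has no loops,
# the set of paths is finite and can be covered exhaustively.
TRANSITIONS = {
    "NotCreated": [("CreateMessage", "Pending")],
    "Pending": [("CERequestMessage", "ReceivedByCaptureEngine"),
                ("UserDelete", "Deleted")],
    "ReceivedByCaptureEngine": [("ClientRequestMessage", "ReceivedByClient"),
                                ("UserDelete", "Deleted")],
    "ReceivedByClient": [("UserViewMessage", "ViewedByClient"),
                         ("UserDelete", "Deleted")],
    "ViewedByClient": [("UserReject", "RejectedByClient"),
                       ("UserComplete", "Completed"),
                       ("UserDelete", "Deleted")],
    "RejectedByClient": [("UserDelete", "Deleted")],
    "Completed": [("UserDelete", "Deleted")],
    "Deleted": [],
}

def all_paths(state, path=()):
    if not TRANSITIONS[state]:          # terminal state: one complete path
        yield path
    for action, nxt in TRANSITIONS[state]:
        yield from all_paths(nxt, path + (action,))

paths = list(all_paths("NotCreated"))
for i, p in enumerate(paths):
    print("test segment %d: %s" % (i, " -> ".join(p)))
```

Note that every path ends with 'UserDelete', since 'Deleted' is the only terminal state.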

We can also use Spec Explorer to ensure that our models are complete and accurate before we send our specs out for review.

Who is Spec Explorer for?

Anyone who writes, interprets, or implements specs can make use of this.

Business Analysts - This can auto-generate use cases and diagrams as well as validate that the model you intend to have implemented is complete and accurate.

Quality Assurance - This can be used to build more extensive models along with more detailed data tracking to generate test cases.

Developers - This can be used to validate technical designs and auto-generate unit tests including basic validation of those tests.

Depending on how deeply you want to dig into its capabilities, you will get more or less from this tool.

How do I get started?

You can download it from the Spec Explorer site at Microsoft Research, then copy the spec sample below.

Once you have installed Spec Explorer, launch it and create a new project.

Click 'Next'.

Select the 'Create Directory for Project' checkbox and click 'Next'.

Select the "AsmL (Plain Text)" Category and "Empty Program" Template and click the "Finish" button. One feature worth mentioning here is that you can embed the markup language into the Word documents used for specs and it will work the same way. Spec Explorer embeds MS Word into the file edit window, and you can modify the spec as you normally would.

Replace the default text with the following:


enum MESSAGE_STATUSES
    NotCreated
    Pending
    ReceivedByCaptureEngine
    ReceivedByClient
    ViewedByClient
    RejectedByClient
    Completed
    Deleted

var status as MESSAGE_STATUSES = NotCreated

CreateMessage()
    require status = NotCreated
    status := Pending

CERequestMessage()
    require status = Pending
    status := ReceivedByCaptureEngine

ClientRequestMessage()
    require status = ReceivedByCaptureEngine
    status := ReceivedByClient

UserViewMessage()
    require status = ReceivedByClient
    status := ViewedByClient

UserReject()
    require status = ViewedByClient
    status := RejectedByClient

UserComplete()
    require status = ViewedByClient
    status := Completed

UserDelete()
    require (status <> NotCreated) and (status <> Deleted)
    status := Deleted

Main()

There are three main parts to this spec: variables and constants, actions, and the 'Main()' statement.

In the Variables and Constants section for this example, we define the Message Status options and define a variable to keep track of the specific state for a given message.

In the Actions section, we describe each action to the message.

The 'Main()' statement is a necessary part for Spec Explorer to do its analysis. More complex uses of the tool put logic here, but this example does not.
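As a rough illustration of what the spec expresses (this is plain Python, not Spec Explorer, and the class and method names are my own), each action is a guarded state transition: a 'require' clause that must hold, followed by an assignment to 'status':

```python
# Illustrative sketch only: a plain-Python version of the AsmL model above.
# The state names and guard conditions mirror the spec; this is not
# Spec Explorer output.

class Message:
    def __init__(self):
        self.status = "NotCreated"

    def _guard(self, condition):
        # Mirrors AsmL's 'require': the action is only enabled when it holds.
        if not condition:
            raise RuntimeError("action not enabled in state " + self.status)

    def create_message(self):
        self._guard(self.status == "NotCreated")
        self.status = "Pending"

    def ce_request_message(self):
        self._guard(self.status == "Pending")
        self.status = "ReceivedByCaptureEngine"

    def client_request_message(self):
        self._guard(self.status == "ReceivedByCaptureEngine")
        self.status = "ReceivedByClient"

    def user_view_message(self):
        self._guard(self.status == "ReceivedByClient")
        self.status = "ViewedByClient"

    def user_reject(self):
        self._guard(self.status == "ViewedByClient")
        self.status = "RejectedByClient"

    def user_complete(self):
        self._guard(self.status == "ViewedByClient")
        self.status = "Completed"

    def user_delete(self):
        self._guard(self.status not in ("NotCreated", "Deleted"))
        self.status = "Deleted"

# Walk one valid path through the state machine.
m = Message()
m.create_message()
m.ce_request_message()
m.client_request_message()
m.user_view_message()
m.user_complete()
m.user_delete()
print(m.status)  # Deleted
```

Calling an action out of order (say, viewing a message that was never created) trips the guard, which is exactly the property Spec Explorer exploits when it explores only enabled actions from each state.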

To generate the graph, we need to do a couple things. First, we'll 'build' a reference implementation of the specification. To do this, click 'Project' -> 'Build'. You'll see messages in the 'Output' tab. Note that in the output, you'll see a hint for each action described.

Next, we'll need to run this reference implementation to generate the diagram. To do this, click 'Execute' -> 'Run'.

You'll get the 'Select Execution Goal' dialog.

Select 'FSM Generation' then 'OK'. (FSM = Finite State Machine). Remember the S0 and S1 references from the tests generated earlier? Here is how they are seen by default. In order to make the diagram easier to read, we need to associate each state with some value, in this example, we'll associate it with the variable 'status'.

Right-click on the diagram and select 'Node Label' then 'Custom Expression'.

Now we have the diagram we are looking for. Next, we generate the tests: simply click 'Test' -> 'Generate Test Suites' and you get the list of tests shown above.

What are some juicy details that can't be covered here?

Using Word documents with the AsmL styles is a simple way to ensure that the model stays tied to its diagram and other analysis. You can publish the Word document to a document repository and still pull it back into Spec Explorer for future analysis.

There are several markup languages you can use to build these models, including AsmL and Spec#.

There are other tools you can use to build and evaluate these models as well, such as NModel, dia2fsm, and others.

This is not a deep dive into this area, but I'm sure there will be cases where this is helpful.

Wednesday, June 17, 2009

Meeting Announcement - June 25th

- We've moved our meeting to the 4th Thursday of each month

Time and Location

The Red Earth QA's meeting will be held on the 3rd floor of 100 N. Broadway from 11:30am-1pm on Thursday, June 25th. Look for the signs to direct you to the correct room.


Jeff Stanley from Metavante will talk about his role in building a Test Technology team to support various QA teams with new tools.


  • You can park in Main Street Parking on Main or you can find street parking.
  • From I-40, take the Robinson Exit. Go North on Robinson to Main. Right on Main. You can either go to Main Street Parking or continue to Santa Fe Parking. You will see 100 N Broadway on your left across Broadway. The building says 'Chase' at the top.

Monday, June 15, 2009

Book Review - "Essential Software Test Design"

Earlier this year, I wrote a review of "Essential Software Test Design" for the StickyMinds Website.


It's on the front page this week, and will be found in the archives after this week.


Thursday, June 11, 2009

Using Root Cause Analysis for Process Improvement

A few weeks ago, I discussed the idea of Root Cause Analysis. The approach there works well when you want to analyze each individual issue and mitigate it. If you start doing this often, though, you'll find it somewhat unwieldy. Further, you may want to see trends in common root causes over a time period.

I'll discuss how to take this information and do some basic analysis to find trends. That trend information can help identify the most common causes and most common areas that cause issues.

In order to do this tracking, we need to generalize and standardize some of the fields already tracked. For example, the values for "Function" and "Cause" should come from their own respective lists. This allows them to be grouped and each occurrence counted for analysis. Additionally, you may want to add other aspects, such as version or sub-function, to analyze.

If you plan on identifying multiple causes for a given issue, one way to identify that is to replicate the row in the spreadsheet for each subsequent cause so that one individual issue will have one row for each identified cause.
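As a hypothetical sketch of that row-replication step (the issue IDs, functions, and causes below are made up for illustration), expanding multi-cause issues into one row per cause might look like:

```python
# Hypothetical sketch: expand each issue with multiple identified causes
# into one row per cause, so each cause can be counted independently.
issues = [
    {"id": "I-1", "function": "Login", "causes": ["Hardware", "Monitoring"]},
    {"id": "I-2", "function": "Search", "causes": ["Requirements"]},
]

rows = []
for issue in issues:
    for cause in issue["causes"]:
        rows.append({"id": issue["id"],
                     "function": issue["function"],
                     "cause": cause})

print(len(rows))  # I-1 contributes two rows, I-2 one
```

The issue ID repeats across its rows, which is what lets the later pivot count causes rather than issues.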

You can open a sample spreadsheet in OpenOffice format and follow along if you would like to see how to take this spreadsheet and do some basic analysis.

This is a sample of not just a single application, but multiple applications. The Root Cause Analysis has already been performed and we're ready for the analysis.

Let's assume we want to count the causes by application feature (called Issue Area in the spreadsheet). In OpenOffice, you use the "Data Pilot" feature. (In Excel, it's called Pivot Table and Pivot Chart.)

You'll get the configuration dialog for the Data Pilot (this is nearly exactly how Excel does it as well).

Drag the "Root Cause" button to the "Row fields" area, the "Issue Area" button to the "Column fields" area, and the "Issue ID" button to the "Data Fields" area.

By default, OpenOffice sums the data field; we want a count instead. When we click on the "data field" area, we are prompted to select the function we want to perform. Let's select 'count'.
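Outside of OpenOffice, the same count is easy to sketch in plain Python (the root causes and issue areas below are made-up sample values, not the spreadsheet's contents):

```python
from collections import Counter

# Illustrative data: one (root cause, issue area) pair per issue row.
rows = [
    ("Hardware", "Login"),
    ("Hardware", "Login"),
    ("Requirements", "Search"),
    ("Coding", "Login"),
]

# Equivalent of the Data Pilot / Pivot Table 'count': tally how many
# issue rows share each (root cause, issue area) pair.
counts = Counter(rows)
for (cause, area), n in sorted(counts.items()):
    print("%s / %s: %d" % (cause, area, n))
```

Each tally corresponds to one cell in the pivot's cross-tab of Root Cause against Issue Area.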

I don't like the default graph, so I delete it, select the data fields without the totals, and create a new chart of type 'stacked'.

I can repeat that for cause count by version, cause count by application, overall cause count, etc.

Once I have all my graphs, then I can look for the things that are in most need of change. From the graph above (and reading the issue spreadsheet attached), you'll see that there was a huge issue for login that caused all kinds of havoc. The login issue was related to the main domain server going down due to a faulty network card. Depending on your situation, this may or may not be something that can be mitigated by process change. However, it does give a sense of where the most issues lie and justifies the need for specific changes.

Good luck!