Monday, August 24, 2009

Meeting Announcement - Friday, August 28th 11:30am

Time and Location

The Red Earth QA meeting will be held on the 3rd floor of 100 N. Broadway from 11:30am-1pm on FRIDAY, August 28th. Look for the signs to direct you to the correct room.

Topic

Devon will give an overview of how they use Silk Performer in their environment.

Directions

  • You can park in Main Street Parking on Main or you can find street parking.
  • From I-40, take the Robinson Exit. Go North on Robinson to Main. Right on Main. You can either go to Main Street Parking or continue to Santa Fe Parking. You will see 100 N Broadway on your left across Broadway. The building says 'Chase' at the top.

Tuesday, August 04, 2009

Regression Testing Without a QA team - Part II

Don't worry: while there's more to do, we are well positioned to make the most efficient use of our time with the remaining tasks.

Step 3 - Identify test cases / test data
For the purposes of this discussion, let's limit the concept of a 'test case' to a 1-2 sentence description of the goal of a test you want to run. Start with the high-risk areas first. Depending on your situation, you may do only the risk level 1 items, or you may have time for more; the answer depends on available time. Once you've built some test cases, you'll want to take a step back and build some models of your system to see if there are any more test cases you could write.

You'll also want to think about test data. You'll want to consider valid data, invalid data, large quantities of data, lack of data, etc.
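
To make those data categories concrete, here is a minimal sketch in Python's unittest style. The calc function is a hypothetical stand-in for driving the application under test (it simply evaluates the expression string so the sketch stays runnable), and the data values are illustrative, not exhaustive.

    import unittest

    def calc(expression):
        # Hypothetical stand-in for driving the calculator under test;
        # evaluating the string keeps the sketch self-contained and runnable.
        return eval(expression)

    class CalculatorDataTests(unittest.TestCase):
        def test_valid_data(self):
            self.assertEqual(calc("2 + 3"), 5)

        def test_invalid_data(self):
            # Divide by zero is 'invalid' input the application must handle.
            with self.assertRaises(ZeroDivisionError):
                calc("1 / 0")

        def test_large_quantity_of_data(self):
            # A long chain of operations: 500 ones added together.
            self.assertEqual(calc(" + ".join(["1"] * 500)), 500)

        def test_lack_of_data(self):
            # Empty input should be rejected, not crash the application.
            with self.assertRaises(SyntaxError):
                calc("")

    if __name__ == "__main__":
        unittest.main()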

For our example with the calculator, here are some preliminary test cases.
  1. Perform valid basic arithmetic calculations
  2. Perform 'invalid' basic arithmetic calculations (divide by 0, invalid input, etc.)
  3. Perform scientific calculations
  4. Perform 'invalid' scientific calculations (log 0, tan 90, etc.)
  5. Perform statistical calculations
Hmmm... while I was using the application to identify test cases, I realized that I'd missed some of the features:

- keyboard shortcuts (risk - 3)
- number format - decimal, octal, etc. (risk - 2)
- number type - degree, radian, grad (risk - 2)
- use parentheses up to 25 levels at a time (risk - 2)

The Model
Now that I've gotten into this a bit further, it seems that I'm ready to build a model to make sure that I'm covering these requirements. There are several types of models: dataflow diagrams, workflow diagrams, state transition diagrams, etc. At least one would be helpful, so here is one that I came up with.

State - Ready for a Number
Actions
- Enter a '('
- Enter a 'binary' operator (a 'binary' operator takes two inputs, like + and *)
- Enter a number

State - Ready for an Operator
Actions
- Enter a ')'
- Enter a number
- Enter a 'unary' operator (a 'unary' operator takes one input like 1/x, sin x, etc.)
- Enter a 'binary' operator
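
If it helps to see the model in a machine-readable form, here is a minimal sketch of the same two states as a Python table. The state and action names come straight from the lists above; the transition targets are my reading of the model and would need adjusting to match the real application.

    # Each state maps its valid actions to the state the calculator moves to.
    MODEL = {
        "Ready for a Number": {
            "Enter a '('": "Ready for a Number",
            "Enter a 'binary' operator": "Ready for a Number",
            "Enter a number": "Ready for an Operator",
        },
        "Ready for an Operator": {
            "Enter a ')'": "Ready for an Operator",
            "Enter a number": "Ready for an Operator",
            "Enter a 'unary' operator": "Ready for an Operator",
            "Enter a 'binary' operator": "Ready for a Number",
        },
    }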


As with all models, this model doesn't tell the whole story, but it is better than having no model. Among its faults: it does not cover the statistical mode calculations, the names of the states are not completely accurate, only valid actions are allowed, etc. You will have to get used to these sorts of issues to have a strong basis for testing available in a short time.

Planning for Test Cases
For now, let's assume that the starting state of the calculator is 'Ready for a Number'. I have three valid actions: enter a '(', a number, or a binary operator. Suppose I choose to enter a number; my state has now changed to 'Ready for an Operator'. In this new state, I have four valid actions (listed above). You will quickly see that there are many paths I can follow, and each action I select may or may not change the state I'm in (though it will change the value of the calculation being displayed and stored). You can also see that there is no actual ending state; you can theoretically keep doing calculations indefinitely.

Here's one example path to follow:
- Enter a number
- Enter a binary operator
- Enter a number
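
This is exactly the kind of enumeration a model-based testing tool automates. As a minimal sketch, using the MODEL table from the earlier sketch, here is one way to list every action sequence of a fixed length; the depth limit is what keeps the enumeration finite, since the model has no ending state.

    def paths(model, state, depth):
        # Yield every sequence of exactly `depth` actions starting from `state`.
        if depth == 0:
            yield []
            return
        for action, next_state in model[state].items():
            for rest in paths(model, next_state, depth - 1):
                yield [action] + rest

    # The example path above is one of the sequences this produces.
    for p in paths(MODEL, "Ready for a Number", 3):
        print(" -> ".join(p))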

In an actual testing situation, you may want to cycle through each binary operation to make sure they are correct. Similarly, you can do the same with unary operators.

It does get tricky when you start to plan for using parentheses. What test cases are best to use? This question will be your constant challenge. It's akin to asking "When are we done testing?". The answer is "It depends". It depends on how much time you have available, the level of quality that you are attempting to achieve, how much coverage of your test requirements you want to achieve, etc.

Now suppose that we've had to make these hard decisions and have come up with the following test cases based on using the model we have built. Note that these are not the actual tests, only the test cases. The tests (or test procedures) include specific data and specific validations.

- Verify unary operators
- Verify binary operators
- Verify nested calculations with both unary and binary operators
  - one level of nested calculations (e.g. "500 - (45 / 5)")
  - two levels of nested calculations (e.g. "sqrt (a^2 + b^2)")
  - 25 levels of nested calculations
- Verify calculations where operators are replaced by different operators (e.g. enter '2', then '+', then 'x', then '5', and verify it is interpreted as "2 x 5")
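
The 25-level case is tedious to build by hand and is a natural candidate for generation. Here is a minimal sketch; eval stands in for driving the real calculator, just as in the earlier sketch.

    def nested_expression(levels):
        # Builds "(...((1 + 1) + 1)... + 1)" with the requested nesting depth.
        expr = "1"
        for _ in range(levels):
            expr = "(" + expr + " + 1)"
        return expr

    expr = nested_expression(25)
    assert eval(expr) == 26  # 1, plus 25 nested "+ 1"s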

There are certainly more test cases; we have several test requirements that are not covered by these tests. You'll want to make sure that you are covering all the requirements you have identified, or be able to explain why tests are not warranted.

These will suffice for now. You can use an automated tool to generate tests based on your model, or you can pick them manually. Initially, I'd suggest doing it manually so that you can see how much of a benefit the automated tool is.

Step 4 - Write Tests
There are many different approaches to writing the tests, depending on factors such as available time, whether the same person will write and run the tests, and the knowledge of the person running them. I'm going to suggest the following format because it's simple: it covers just enough to be helpful and not so much as to be overly cumbersome.

In a spreadsheet, create the following columns:

- Test Name
- Step
- Expected Results
- Actual Results
- Pass/Fail

Now, under the row with these headings, put the name of the first test case. Under that, describe the action you want the tester to take. The action should be unambiguous and specific, and should contain only an action, not any validation. In the Expected Results column, describe what you expect to happen. Again, be unambiguous and specific. Leave the last two columns for later.
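
If you'd rather generate the skeleton than type it, here is a minimal sketch using Python's built-in csv module. The file name, test name, step, and expected result are illustrative placeholders, not the actual tests.

    import csv

    rows = [
        ["Test Name", "Step", "Expected Results", "Actual Results", "Pass/Fail"],
        ["Verify binary operators", "", "", "", ""],
        ["", "Enter '2', then '+', then '3', then '='",
         "The display shows 5", "", ""],
    ]

    with open("regression_tests.csv", "w", newline="") as f:
        csv.writer(f).writerows(rows)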

Here is an example using our test cases from above.
...

You may have noticed that on the second test, I have a step without an Expected Result. In this case, I don't expect anything to happen. I could say that, or I can leave it empty. You will need to determine which way works best for you.

Those who have an opinion on the matter may take exception to these tests as poor examples of test cases, but for now, they will do.

We're almost done. To be continued.


Monday, August 03, 2009

Regression Testing Without a QA team - Part I

Many of you work in an environment where there are no QA people assigned to your software product. So when a new version is released, it's usually up to the developers or other non-QA staff to make sure the release is ready.

Usually, this ends up being the same people spending a few days with the new version to make sure nothing seems wrong. But wouldn't it be better to have some insight into what parts of the system were tested and what parts weren't?

Let's walk through a basic strategy for getting some insight into not only how you want to test your product, but also how you want to report on those results. Even QA people forget to think about how they want to report their results and end up with a 1200 page document that nobody reads.

For the examples, I'll suppose you are on the team that tests the Calculator application that comes with the Windows OS, and you are preparing to get it ready for Windows Vista based on the Windows XP version. There are no new visible features to speak of, but there is a new API, Themes, etc. to take into consideration.


Step 1 - Develop a model of the system functionality
There is no single way to do this, but here are some suggestions and considerations.

Think of the 'ilities'
  • Usability
  • Functionality
  • Security
  • Performance
  • Release (Backwards-Compatibility/Upgrade/Installation)
Think of how the product is organized:
  • Are there specific workflows that are helpful to use to organize the list of functionality?
  • Would it be helpful to organize by screen/page/mode? If it's a website, is it helpful to organize functionality by web page? If it is a command-line application, is it helpful to organize by command-line switch?
  • If there are multiple components, is it helpful to organize by component (or group of components)?
Keep in mind that you will usually end up with a blend of strategies.

For the calculator example, we need to include the scientific and statistical functions.

List of System Functionality (aka Test Requirements)
Functionality
- Perform Standard Calculation
- Perform Additional Scientific Calculations
- Perform Additional Statistical Calculations
- Perform Support Functions
- Support Copy/Paste
- Support Digit Grouping Function
- "About" functionality changes for Vista
Usability
- Support OS Themes
- Support OS Color schemes
- Support OS Font Size settings
- Support 'each' Vista version

NOTE
But there's also a question of detail. You can list test requirements in varying degrees of detail, and I've been on both sides of the equation: too many requirements can be difficult to maintain, while too little detail tends to be useless. You will have to determine what makes sense to you and be prepared to make changes accordingly.

You will also want to think about how you want to report results. It may be helpful to use some subset of these test requirements to group test results. Again, this will depend on your needs.

Step 2 - Identify 'high risk' areas of functionality
Not all features are created equal. Some workflows are used daily and others are used infrequently; those used daily likely pose a higher risk to the product if they don't work properly. So-called 'core functionality' is a good candidate as well. You will want to come up with a scale for the risk rating. In addition to the scale, it may be helpful to identify the criteria used to justify each rating.

For our example, here is the risk rating scale we could use:
1 - Without this feature, the product would be rejected by the users
2 - Without this feature, the product would be used rarely by the users
3 - Without this feature, the product would be used, but with reservations
4 - Unlikely to affect usage of this product

Here is how we could apply the risk rating to our test requirements.
Functionality
- Perform Standard Calculation (risk - 1)
- Perform Additional Scientific Calculations (risk - 2)
- Perform Additional Statistical Calculations (risk - 2)
- Perform Support Functions
- Support Copy/Paste (risk - 3)
- Support Digit Grouping Function (risk - 4)
- "About" functionality changes for Vista (risk - 4)
Usability
- Support OS Themes (risk - 3)
- Support OS Color schemes (risk - 3)
- Support OS Font Size settings (risk - 3)
- Support 'each' Vista version (risk - 2)
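
Once the ratings exist, it can help to keep them in a structured form so the high-risk subset falls out mechanically. A minimal sketch, with the list abbreviated and the structure itself an assumption rather than a prescription:

    from collections import Counter

    # Test requirements paired with their risk ratings (1 = highest risk).
    REQUIREMENTS = [
        ("Perform Standard Calculation", 1),
        ("Perform Additional Scientific Calculations", 2),
        ("Perform Additional Statistical Calculations", 2),
        ("Support 'each' Vista version", 2),
        ("Support Copy/Paste", 3),
        ("Support OS Themes", 3),
        ("Support Digit Grouping Function", 4),
        ('"About" functionality changes for Vista', 4),
    ]

    # If time is short, start with everything at risk level 1 or 2.
    must_test = [name for name, risk in REQUIREMENTS if risk <= 2]

    # A quick look at how the ratings are distributed (see the NOTE below).
    print(Counter(risk for _, risk in REQUIREMENTS))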

NOTE
You will want to make sure that there is a reasonably even distribution across your risk levels. Too many items at one level means you'll want to differentiate further among the items at that level. We will use this risk rating later.

Wow, this is getting long. I'll post what I have now and pick up on this in a day or so.