Here is a sample story.
This highly structured, plain-English format can be parsed and used to direct your automation code. Originally, it was meant as a way to describe new functionality.
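To make "parsed and used to direct your automation code" concrete, here's a minimal sketch in Python. It assumes the story uses Given/When/Then style lines (the exact format of the sample above may differ), and the step registry and the ensure_logged_in handler are made-up examples, not part of any particular tool:

```python
import re

# Hypothetical registry mapping step phrases to automation functions.
STEP_HANDLERS = {}

def step(pattern):
    """Register an automation function for a plain-English step phrase."""
    def register(func):
        STEP_HANDLERS[re.compile(pattern, re.IGNORECASE)] = func
        return func
    return register

@step(r"the user is logged in")
def ensure_logged_in():
    # Placeholder for real UI/API automation.
    print("driving the app: log the user in")

def run_story(story_text):
    """Parse each Given/When/Then/And line and dispatch it to a handler."""
    for line in story_text.splitlines():
        match = re.match(r"\s*(given|when|then|and)\s+(.*)", line, re.IGNORECASE)
        if not match:
            continue  # skip blank lines and narrative text
        phrase = match.group(2)
        for pattern, handler in STEP_HANDLERS.items():
            if pattern.search(phrase):
                handler()
                break
        else:
            raise ValueError(f"no automation step matches: {phrase!r}")

run_story("Given the user is logged in")
```

The point isn't the parsing details; it's that a plain-English story can be mapped directly onto automation you already have.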
But consider this. Suppose you wrote stories like this for the expected behavior as part of each defect report. Additionally, suppose you had a system that would (see the sketch after this list):
- go through each fixed defect for a new build
- extract any attached story
- run that story through your test automation system
- if the story passed, close the ticket
- if it failed, attach a screenshot, log file, etc. of the failure to the ticket
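As a rough sketch of that loop, assuming a defect-tracker client with a fixed_defects_for_build() call and tickets exposing attached_story(), close(), and attach() (all hypothetical names, not any real tracker's API):

```python
from dataclasses import dataclass

@dataclass
class StoryResult:
    """Outcome of running one story through the test automation."""
    passed: bool
    screenshot: str = ""
    log_file: str = ""

def verify_build(build_id, tracker, run_story):
    """Check every fixed defect in a build against its attached story.

    `tracker` is a hypothetical defect-tracker client; `run_story` is the
    automation entry point, wrapped to return a StoryResult.
    """
    for ticket in tracker.fixed_defects_for_build(build_id):
        story = ticket.attached_story()
        if story is None:
            continue  # no story attached; leave it for manual verification
        result = run_story(story)
        if result.passed:
            ticket.close(comment=f"Story passed automatically on build {build_id}")
        else:
            # Attach evidence of the failure so a human can investigate.
            ticket.attach(result.screenshot)
            ticket.attach(result.log_file)
```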
How much time would that save? How much quicker could new builds be verified?
I'll be giving this more thought, and I hope to come up with something that finds its way into a future blog post.
2 comments:
My philosophy is that any change (and in particular any bug fix) has a high potential of introducing more bugs.
Hence, I would never trust such an automated verification system to close a ticket for me by itself.
I always expect my testers to "test around the bug," looking for other breakages and unintended side effects. That isn't something I'd be likely to trust to this sort of automation.
I certainly see your point. Maybe the better use would be to treat this as a way to catch failed fixes automatically, and to only verify a ticket manually once its test has passed.
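For what it's worth, that reversed policy is a small change to the loop above; something like this, with the ticket methods (reopen, attach, add_label) again being hypothetical:

```python
def triage(ticket, result, build_id):
    """Never auto-close: flag failed fixes, queue passing ones for a human."""
    if not result.passed:
        ticket.reopen(comment=f"Fix did not verify on build {build_id}")
        ticket.attach(result.log_file)
    else:
        ticket.add_label("ready-for-manual-verification")
```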