Thursday, July 1, 2010

LEGO Automation Explained

Still a concept in the making, but here are some more ideas around it.

The idea is that you create independent building blocks for your automated work and have them conform to a common API layout, which makes it easy to join two blocks together thanks to the well defined API. A typical example, which goes hand in hand with what I have written here before, is one tool that adds test data to the system, one tool that performs actions on the system and one tool that validates the performed actions. On top of these three tools you build a new block which uses the test data tool to load data, the action tool to perform actions and the validation tool to validate the results. That block is very similar to a full-scale test automation tool. If you also have a software provisioning tool, you add that block as well.
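To make the idea a bit more concrete, here is a minimal Python sketch of what a common block API could look like. The Block protocol, the three example blocks and the composed block are all hypothetical illustrations of the concept, not a description of any existing tool.

    # Minimal sketch of the common-API idea. All names here are made up
    # for the illustration.
    from typing import Protocol


    class Block(Protocol):
        """Every building block exposes the same entry point."""
        def run(self, context: dict) -> dict:
            """Perform the block's task and return the updated context."""
            ...


    class TestDataBlock:
        def run(self, context: dict) -> dict:
            context["data"] = ["order-1", "order-2"]  # load test data into the system
            return context


    class ActionBlock:
        def run(self, context: dict) -> dict:
            context["actions"] = [f"processed {d}" for d in context["data"]]
            return context


    class ValidationBlock:
        def run(self, context: dict) -> dict:
            context["passed"] = all(a.startswith("processed") for a in context["actions"])
            return context


    class ComposedBlock:
        """A new block built from existing blocks -- the 'full-scale' tool."""
        def __init__(self, *blocks: Block) -> None:
            self.blocks = blocks

        def run(self, context: dict) -> dict:
            for block in self.blocks:
                context = block.run(context)
            return context


    if __name__ == "__main__":
        result = ComposedBlock(TestDataBlock(), ActionBlock(), ValidationBlock()).run({})
        print("validation passed:", result["passed"])

Because every block has the same entry point, adding another block, such as a software provisioning block, is just a matter of putting it first in the composition.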

There are two major advantages with this method. One is that when the product changes you only need to change one tool, rather than many parts of one monolith. The second is that you can still benefit from the tools even if you do not use the full-scale automation. Regarding the first, one could argue that it is simply good design practice to develop tools this way, and I agree, but there are two nuances to it. One is that none of the test automation tools I have worked with have been designed in a truly decoupled way. The other is that the LEGO idea also means you can let different groups be responsible for the different tools. For instance, the developers can be responsible for the action tool, so that when they change an interface they also update the action tool to fit the new interface. That helps a lot in keeping the test automation up to date, which is one of the major arguments against test automation. With this method you can also reuse parts much better between products and organizations, because the differences can be contained in a few local parts while the common parts are handled in a shared function.

It also makes future development of the tools easier, since an old tool can easily be replaced with a new one, or individual tools updated, with greater flexibility.

Monday, June 14, 2010

LEGO Automation

I've come up with a name for the type of automation I've previously described: LEGO Automation. The idea is to build pluggable tools that you can use to construct more advanced tools, i.e. one tool for deploying the system, one tool for generating data and one tool for validating data, which combined can be used for some "push-the-button" automation.

Wednesday, April 14, 2010

How to report what you do

One of the most important parts of testing is the ability to report what has been done. I think this is one reason for the popularity of scripted test cases and formal tools: they give management some kind of report of the testing. What I have found is that it is very easy to make false assumptions and come to the wrong conclusions using this method. One common mistake is to forget that test cases are not documentation of what has been done, but documentation of what is intended to be done. The difference is subtle but important.
Let's go back to the purpose of testing: one of the main reasons is to gain knowledge. How does the product perform, does the product fulfill the requirements, does the product behave as expected, and so on. The receiver of this knowledge is not the test organization but the product owner. The test organization of course also has an interest in this knowledge, but that is secondary to the product owner's. It is also secondary to the needs of the developer organization, for which the feedback from the product is invaluable, both for improving quality by correcting errors and for improving the development process to increase the chance of better quality in the future. This is why I like to think of the test organization as a service to the product owner and to the developers: a service that provides feedback.
With this in mind, how do we report what is done in testing? The simple answer is to provide the information the product owner needs to know whether the product is good enough, and the information the developers need to realize what to do in order to improve quality. This is of course different for different organizations.
Currently we are looking into different methods to report what has been done without having test cases to report against. More on that soon.

Monday, March 22, 2010

Thoughts on test automation

Having worked on projects that are introducing test automation, projects where test automation is almost the only means of testing, and projects trying to develop test automation, here are my two cents on the subject. All of these projects have shared the same definition of test automation, which is what I call "press the button and go home" test automation: no human interaction required. This means that the tool should set up the test environment, perform the action part of the scripted test and then perform the result observation and validation of the scripted test. I say scripted test since no other kind of test fits this test automation practice. James Bach discussed this topic during his course, and without trying to restate too much of what he thinks, his definition is more along the lines of "anything that does something automatic with regards to testing", and as a rule of thumb he would not bother with a tool that took more than 48 hours to produce. The key here is not the 48-hour rule but the wider definition, which can lead to a more effective use of test automation. Before I continue with that, I just want to point out the reason why you would want to do any kind of automation at all, and the main reason is that you want to speed up the test process. This is why the wider definition is so effective: it includes every effort to speed up the process by automatic means. For example, a tool that calculates all combinations of a given set of variables can speed up test design, a tool that compares log files speeds up validation, and all tools that automatically create load on the system are test automation tools as well.
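As an example of such a small helper, here is a quick Python sketch of a combination generator; the variable names and values are placeholders I made up for the illustration.

    # A throwaway tool that enumerates every combination of a set of test
    # variables, so the combinations do not have to be listed by hand during
    # test design. The variables below are made up for the example.
    from itertools import product

    variables = {
        "browser": ["Firefox", "Chrome"],
        "locale": ["en", "sv"],
        "payment": ["card", "invoice"],
    }

    names = list(variables)
    for combo in product(*(variables[name] for name in names)):
        print(dict(zip(names, combo)))  # one candidate test configuration per line

A script like this takes minutes to write, well within the 48-hour rule of thumb, and still counts as test automation under the wider definition.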

Having said that, if you still want to focus on "press the button and go home" test automation, at least consider the following. A scripted test generally consists of some test preparation, some action steps and some expected results. If you separate these activities in your automation effort you will both be more resilient to change and be able to speed up manual testing as well. The proposal is to create at least three different tools and make sure they work together as well as independently. The three tools are the following:

  • Test environment control
  • Test data generation
  • Test data analysis

Test environment control is a tool that makes sure the test environment is in the required state for the test case, i.e. installs and configures the correct software, brings the system to a known and desired state, and configures and starts any other tools needed for the specific test.

Test data generation is a tool that generates actions in the system, i.e. simulates a user or simulates different failures. The generation tool should not validate the system, but it should record all the actions it has performed in a log.

Test data analysis is the tool that validates that the performed actions produced the desired results.
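To make the split concrete, here is a rough Python sketch of how the three tools could work together while each remains usable on its own. Everything in it, from the function names to the log format, is a hypothetical illustration rather than a description of an existing tool.

    # A rough sketch of the three-tool split. Each tool can be run on its own
    # (for instance to support manual testing), and push_the_button() is the
    # optional "press the button and go home" combination. All names here are
    # made up for the example.
    import json


    def control_environment(config: dict) -> None:
        """Bring the test environment to the state the test needs."""
        print(f"installing and configuring: {config}")


    def generate_data(actions: list[str], log_path: str) -> None:
        """Perform actions against the system and record them in a log.
        No validation happens here."""
        with open(log_path, "w") as log:
            json.dump([{"action": a, "status": "performed"} for a in actions], log)


    def analyse_data(log_path: str) -> bool:
        """Validate that the performed actions produced the desired results."""
        with open(log_path) as log:
            entries = json.load(log)
        return all(entry["status"] == "performed" for entry in entries)


    def push_the_button() -> None:
        control_environment({"version": "1.0", "simulators": ["user"]})
        generate_data(["create order", "cancel order"], "actions.json")
        print("all actions ok:", analyse_data("actions.json"))


    if __name__ == "__main__":
        push_the_button()

A tester can call control_environment or generate_data by hand to support exploratory sessions, while the combined function is there for the unattended runs.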

What I also like about this approach is that it can support any sapient test activity, and that you can start by building the parts you need the most.

Friday, March 19, 2010

Course with James Bach

I've just finished a great course with James Bach where many of my ideas and thoughts about testing became much clearer. Among my many thoughts and insights are the following: Think for yourself! Define what you do and evolve it into something better. Learn from your own and others' mistakes.
My problem with testing for so long has been the lack of common definitions, the lack of arguments and the unreasonable faith in things. If one tester says "System Testing", one tester will think of non-functional testing such as stress testing, performance testing and robustness testing, while another tester will think of functional testing of the entire system. Or, as the ISTQB defines it: "The process of testing an integrated system to verify that it meets specified requirements." Which actually does not say anything. As for the arguments, I have often heard things like "You should do it like this", and upon the question why you get something like "Because it is best", "Because we have always done it this way" or similar. Which leads to the blind faith, which of course goes against what Mr Bach tries to teach: to think for yourself.
Currently I feel that this is still not something you can be told but something you have to experience, but what I ask is that when you do, please try to learn from it.

Tuesday, March 16, 2010

Restart

The questions and misunderstandings regarding testing are spread far and wide. On the other hand, common definitions, methods, tools and techniques are almost tangible in their absence. Just ask two different people who have worked with testing for a definition, and I almost guarantee that you will get answers so different that you would not believe they were answering the same question. I'm not an authority on the subject, but after more than ten years in the business I feel that an effort should be made towards clarifying the world of testing a little more.

So here goes: a journey has begun. Where it will take me is still unknown, but hopefully the testing community will benefit from it in some way.