A Unique Idea for Designing Tests

Katrina Clokie, author of “A Practical Guide to Testing in DevOps”, offers us her unique insight into developing effective tests in her blog post “Generic Testing Personas”. In this article, Clokie explains how developing personas can help model the expected behavior of users, information that is very valuable when designing tests.

Clokie begins by explaining that good tests should cover the full range of user behaviors, to make sure that the software being tested is as adaptable as possible to different individuals. Developing “personas” for expected or stereotypical behavior gives an informed perspective when designing tests.

When designing personas, each one should have clearly distinct behavior patterns and needs. In the article, the author gives an example of six personas that could be helpful when writing tests. Here I will briefly describe two that I feel complement each other and demonstrate the point nicely.

“Manager Maria” is a persona who is always busy and rushes through the software: she relies on shortcut keys, makes mistakes, and frequently goes AFK while using the program. Maria would be frustrated by slow response times, so the tester ought to make sure the software runs smoothly even during periods of high traffic.

In contrast, “Elder Elisabeth” has trouble with new technology: she may require a simple user interface, need to zoom far in, or need to reach out for assistance. In Elisabeth’s case, the tester should make sure the program is visually stable and can run on older systems.

Both of these personas are stereotypes of real users with different needs and behaviors. The more clearly defined a persona’s characteristics, the more can be inferred about its needs. It is the responsibility of both the developer and the software quality assurance professional to make sure all of these different needs and desires are met in order to deliver the best possible product.
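To make the idea concrete, here is a minimal sketch of what persona-driven tests might look like in JUnit 5. The AppUnderTest class, its methods, and the thresholds are my own hypothetical illustration, not something from Clokie’s post.

```java
import org.junit.jupiter.api.Test;
import static org.junit.jupiter.api.Assertions.assertTrue;

// Hypothetical system under test; the methods are stand-ins for real checks.
class AppUnderTest {
    boolean respondsWithinMillis(long millis) { return true; }
    boolean rendersCorrectlyAtZoom(int percent) { return true; }
}

class PersonaTests {
    private final AppUnderTest app = new AppUnderTest();

    // "Manager Maria" is busy and impatient: verify responsiveness under time pressure.
    @Test
    void mariaGetsAResponseQuickly() {
        assertTrue(app.respondsWithinMillis(500));
    }

    // "Elder Elisabeth" zooms far in: verify the layout survives high zoom levels.
    @Test
    void elisabethCanZoomWithoutBreakingTheLayout() {
        assertTrue(app.rendersCorrectlyAtZoom(400));
    }
}
```

Naming the tests after the personas keeps the motivation for each check visible right in the suite.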

I very much enjoyed this article and found Clokie’s perspective both interesting and helpful. I find the application of heuristics in software design both enjoyable and important, and it makes sense that this knowledge is just as helpful when designing tests.

Commonly Used Software Testing Strategies

In his article “The 7 Common Types of Software Testing”, John Sonmez, founder of the Simple Programmer blog, draws on his experience to describe common software testing strategies, explain their benefits and liabilities, and show how they are applied in the world of software development. Sonmez stresses that there are many other methods, and that none of these is used in isolation.

The first software testing method Sonmez describes is black box testing, where we are only concerned with the output of the program and no actual code is given to the tester. The benefits of this method are that test cases simplify to input/output pairs and that tests take the user’s perspective. The downsides are that the underlying causes of errors cannot be determined and that cases can be hard to design.
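As a rough illustration of the black box mindset, here is a minimal JUnit sketch; the sortAscending function is a hypothetical stand-in for whatever opaque code is under test.

```java
import org.junit.jupiter.api.Test;
import java.util.Arrays;
import static org.junit.jupiter.api.Assertions.assertArrayEquals;

class BlackBoxTest {
    // Stand-in for code the tester cannot see; only the signature is known.
    static int[] sortAscending(int[] input) {
        int[] copy = input.clone();
        Arrays.sort(copy);
        return copy;
    }

    // Each test is a pure input/output pair; nothing about the internals is assumed.
    @Test
    void sortsATypicalInput() {
        assertArrayEquals(new int[] {1, 2, 3}, sortAscending(new int[] {3, 1, 2}));
    }

    @Test
    void handlesTheEmptyEdgeCase() {
        assertArrayEquals(new int[] {}, sortAscending(new int[] {}));
    }
}
```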

Naturally, the next method the author discusses is white box testing. The pros of having the source code to test are discovering hidden bugs, optimizing code, and faster problem solving. The cons are that the tester must know how to program and have access to the code, and that it only exercises existing code, so missing functionality may be overlooked.
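Here is a hedged sketch of the contrast: because the tester can read the source, the tests target the branch boundaries directly. The DiscountCalculator class and its thresholds are invented for illustration.

```java
import org.junit.jupiter.api.Test;
import static org.junit.jupiter.api.Assertions.assertEquals;

class DiscountCalculator {
    // Source visible to the tester: two hidden branch boundaries, at 10 and 100.
    static double discountFor(int items) {
        if (items >= 100) return 0.20; // bulk branch
        if (items >= 10)  return 0.05; // small-volume branch
        return 0.0;
    }
}

class WhiteBoxTest {
    // Because we can read the code, we aim tests at each branch boundary,
    // including off-by-one cases a black box tester might never guess.
    @Test
    void coversEveryBranchBoundary() {
        assertEquals(0.0,  DiscountCalculator.discountFor(9));
        assertEquals(0.05, DiscountCalculator.discountFor(10));
        assertEquals(0.05, DiscountCalculator.discountFor(99));
        assertEquals(0.20, DiscountCalculator.discountFor(100));
    }
}
```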

Sonmez next describes specification-based, or acceptance, testing, a methodology where preset specifications guide the development process. The main benefit of this strategy is that errors are discovered and fixed early in development. The main drawback is that its effectiveness relies on well-defined and complete specifications, which are time-consuming to produce, to say the least.
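A minimal sketch of a specification-based test, assuming a hypothetical one-sentence spec; the UsernameValidator name and its rule are my own illustration, not from Sonmez’s article.

```java
import org.junit.jupiter.api.Test;
import static org.junit.jupiter.api.Assertions.*;

// Hypothetical spec item: "Usernames must be 3-20 characters, letters and digits only."
class UsernameValidator {
    static boolean isAcceptable(String name) {
        return name.matches("[A-Za-z0-9]{3,20}");
    }
}

class AcceptanceTest {
    // Each test is written directly from a sentence of the specification.
    @Test
    void acceptsNamesTheSpecAllows() {
        assertTrue(UsernameValidator.isAcceptable("maria99"));
    }

    @Test
    void rejectsNamesTheSpecForbids() {
        assertFalse(UsernameValidator.isAcceptable("ab"));        // too short
        assertFalse(UsernameValidator.isAcceptable("bad name!")); // illegal characters
    }
}
```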

The next strategies the author describes are automated testing and regression testing. This style of testing is designed to make sure changes to the software do not cause problems, and is used where manual tests are slow and costly. Sonmez explains how vital these strategies are in Agile frameworks, where software is constantly being added to. Automated regression tests are integral to this methodology because the same test cases have to be run frequently, so automation is far more advantageous than manual testing.
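As a small illustration of why automated regression tests pay off, here is a sketch built around a hypothetical, previously fixed bug; the PriceFormatter class and the defect itself are invented for the example.

```java
import org.junit.jupiter.api.Test;
import java.util.Locale;
import static org.junit.jupiter.api.Assertions.assertEquals;

class PriceFormatter {
    static String format(double price) {
        return String.format(Locale.US, "$%.2f", price);
    }
}

class RegressionTest {
    // Hypothetical past defect: a release once printed "$0.1" for 0.10.
    // Keeping this case in the automated suite means every future change
    // re-checks the old bug at no manual cost.
    @Test
    void keepsTheTrailingZeroBugFixed() {
        assertEquals("$0.10", PriceFormatter.format(0.10));
    }
}
```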

While Sonmez describes more strategies in this article, and many more exist in general, I chose to focus on the ones we discussed in class. The author provides good context and a broad overview of these commonly used testing methodologies, which further cements my understanding of the concepts and when to use them.

Test Driven Development and Unit Testing

Software developer John Sonmez, writing for simpleprogrammer.com, gives a narrative of his experience as a lead developer and how he applied the practices of test driven development and unit testing in his article “What is TDD? What is Unit Testing?”. Sonmez stresses the definitions of these concepts and illustrates how they are applied in the development process.

Sonmez begins by asserting that the purpose of a unit test is to verify the functionality of the smallest possible unit of code, usually at the class level. The author calls unit tests an “absolute specification”, and he explains that the value of a unit test is verifying that the smallest possible units of code function properly in isolation.
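To illustrate the idea, here is a minimal sketch of a unit test at the class level, in the spirit of Sonmez’s description; the Counter class is my own toy example.

```java
import org.junit.jupiter.api.Test;
import static org.junit.jupiter.api.Assertions.assertEquals;

// The unit under test: one small class with no outside dependencies.
class Counter {
    private int value = 0;
    void increment() { value++; }
    int value() { return value; }
}

class CounterTest {
    // The test reads like an absolute specification of Counter's behavior.
    @Test
    void startsAtZeroAndCountsUp() {
        Counter counter = new Counter();
        assertEquals(0, counter.value());
        counter.increment();
        assertEquals(1, counter.value());
    }
}
```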

According to the author, by making sure unit tests are absolute specifications of the smallest possible units of code, unit testing can highlight problems in the design. Sonmez backs up this claim by describing how unit tests help developers discover tight coupling and poor cohesion: since most programs have complex dependencies, the process of testing classes in isolation exposes these issues.

Sonmez moves on to define test driven development as the opposite of the traditional waterfall methodology, where complete specifications are provided up front and testing happens at the end of a component’s development. In test driven development, tests are written before any implementation, and only the minimal code needed to make that particular test pass is written. After the tests pass, the developer refactors the code. Sonmez describes this process as “Red, Green, Refactor”, alluding to the colors test runners use for failing and passing tests.
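Here is a small sketch of one Red, Green, Refactor cycle, using the familiar FizzBuzz exercise as a stand-in (my choice of example, not Sonmez’s):

```java
import org.junit.jupiter.api.Test;
import static org.junit.jupiter.api.Assertions.assertEquals;

// Red: these tests are written first and fail, because Fizzbuzz does not exist yet.
class FizzbuzzTest {
    @Test
    void oneIsJustOne() {
        assertEquals("1", Fizzbuzz.convert(1));
    }

    @Test
    void threeBecomesFizz() {
        assertEquals("Fizz", Fizzbuzz.convert(3));
    }
}

// Green: the minimal code that makes the current tests pass, and no more.
// Refactor: once green, the code can be cleaned up while the tests stand
// guard; new tests (5 -> "Buzz", 15 -> "FizzBuzz") drive the next cycle.
class Fizzbuzz {
    static String convert(int n) {
        if (n % 3 == 0) return "Fizz";
        return String.valueOf(n);
    }
}
```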

The value of test driven development, according to Sonmez, is that tests are the best form of specification: they either pass or fail, leaving no room for interpretation. Furthermore, the author asserts that this development style is important because it forces the developer to understand the function of the code before implementing it. If we can’t describe exactly what the test should verify, then we need to go back and understand more, helping prevent errors before they happen.

I enjoyed this article and highly recommend it to anyone confused about how these concepts apply in the industry; you can tell Sonmez’s professional experience has enlightened him in these areas. He is very good at describing concepts simply, along with the practical situations in which to apply them.

On the Differences Between Test Doubles

In his post “Test Double Rule of Thumb”, Matt Parker describes the five types of test doubles in the context of a “launchMissile” function, which takes a Missile object and a LaunchCode object as parameters and, if the LaunchCode is valid, fires the missile. Since the function depends on outside objects, test doubles should be used.
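Parker’s actual code is not reproduced here, so what follows is a minimal reconstruction from his description. The exact signatures, and the choice to disable the missile on an invalid code (inferred from the “code red” scenario discussed below), are assumptions.

```java
// Missile and LaunchCode are the collaborators launchMissile depends on.
interface Missile {
    void launch();
    void disable();
}

interface LaunchCode {
    boolean isValid();
}

class Launcher {
    static void launchMissile(Missile missile, LaunchCode code) {
        if (code.isValid()) {
            missile.launch();   // valid code: fire
        } else {
            missile.disable();  // invalid code: "code red", stand the missile down
        }
    }
}
```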

First, we create a dummyMissile. This way we can call the method using our dummy and test the behavior of the LaunchCodes. The example Parker gives is testing that, given expired launch codes, the missile is not fired. If we pass our dummy missile and invalid LaunchCodes to the function, we can exercise the launch-code logic without any real missile at all.
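A sketch of the dummy, reusing the Missile, LaunchCode, and Launcher types reconstructed above; the class names are assumed:

```java
import org.junit.jupiter.api.Test;

// Dummy: the simplest stand-in. It exists only so the call compiles and runs;
// it carries no behavior of its own.
class DummyMissile implements Missile {
    public void launch()  { /* deliberately does nothing */ }
    public void disable() { /* deliberately does nothing */ }
}

class ExpiredLaunchCode implements LaunchCode {
    public boolean isValid() { return false; } // hard-coded "expired"
}

class DummyTest {
    @Test
    void expiredCodesAreHandledWithoutARealMissile() {
        // Passes as long as launchMissile copes with the invalid code;
        // the dummy itself cannot tell us whether launch() was called.
        Launcher.launchMissile(new DummyMissile(), new ExpiredLaunchCode());
    }
}
```

Note that the dummy alone cannot prove launch() was never called; that is exactly the gap the spy fills next.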

The next type of test double Parker explains is the spy. Using the same scenario, he creates a MissileSpy class that records whether launch() was called: if it was, a flag is set to true, and the test can check that flag afterwards to determine whether the spy was launched.
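A sketch of the spy, again reusing the reconstructed types and the ExpiredLaunchCode from the previous sketch:

```java
import org.junit.jupiter.api.Test;
import static org.junit.jupiter.api.Assertions.assertFalse;

// Spy: like the dummy, but it records what happened so the test can ask afterwards.
class MissileSpy implements Missile {
    boolean launchWasCalled = false;
    public void launch()  { launchWasCalled = true; }
    public void disable() { /* not tracked in this sketch */ }
}

class SpyTest {
    @Test
    void expiredCodesDoNotLaunchTheMissile() {
        MissileSpy spy = new MissileSpy();
        Launcher.launchMissile(spy, new ExpiredLaunchCode());
        assertFalse(spy.launchWasCalled); // the flag stays false
    }
}
```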

The mock is similar to the spy. Essentially, a mock object is a spy that has a function to verify itself. In our example, say we want to verify a “code red”, where the missile is disabled given an invalid LaunchCode. By adding a “verifyCodeRed” method to our spy object, we can get that information from the object itself rather than writing more complicated tests.
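Sketched as code, with the verification folded into the double itself; the details of verifyCodeRed are my own guess at the post’s intent:

```java
import org.junit.jupiter.api.Test;
import static org.junit.jupiter.api.Assertions.assertTrue;

// Mock: a spy that also knows how to verify itself, so the assertion
// logic lives inside the double instead of the test.
class MissileMock implements Missile {
    private boolean launchWasCalled = false;
    private boolean disableWasCalled = false;
    public void launch()  { launchWasCalled = true; }
    public void disable() { disableWasCalled = true; }

    // "Code red": the missile must not have launched and must have been disabled.
    boolean verifyCodeRed() { return !launchWasCalled && disableWasCalled; }
}

class MockTest {
    @Test
    void invalidCodesTriggerACodeRed() {
        MissileMock mock = new MissileMock();
        Launcher.launchMissile(mock, new ExpiredLaunchCode());
        assertTrue(mock.verifyCodeRed());
    }
}
```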

Stubs are test doubles that return hard-coded values in place of a real computation. Parker explains that all these example tests have been using stubs already: the LaunchCode doubles simply return a hard-coded boolean, where in practice real launch codes would be validated. But we do not need real LaunchCodes to test our launchMissile function, so stubs work well for this application.
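Spelled out, the stubs are little more than a canned return value; the ExpiredLaunchCode used above is already one of these (names assumed):

```java
// Stubs: doubles that just return hard-coded answers.
class ValidLaunchCodeStub implements LaunchCode {
    public boolean isValid() { return true; }  // always "valid"
}

class ExpiredLaunchCodeStub implements LaunchCode {
    public boolean isValid() { return false; } // always "expired"
}
```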

The final type of double Parker describes is the fake. A fake is used when there is a combination of read and write operations to be tested. Say we want to make sure repeated calls to launch() are not satisfied. To do this, he explains, a database would be needed; but before designing an entire database he creates a fake, in-memory one to verify that the behavior works as expected.
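A sketch of a fake, under the assumption that the “database” tracks which launch codes have already been used; the UsedCodes interface and all names here are my own illustration:

```java
import org.junit.jupiter.api.Test;
import java.util.HashSet;
import java.util.Set;
import static org.junit.jupiter.api.Assertions.*;

// The interface a real database-backed store would eventually implement.
interface UsedCodes {
    void markUsed(String code);
    boolean wasUsed(String code);
}

// Fake: a genuinely working in-memory implementation, good enough to
// test read-and-write behavior before any real database exists.
class FakeUsedCodes implements UsedCodes {
    private final Set<String> used = new HashSet<>();
    public void markUsed(String code) { used.add(code); }
    public boolean wasUsed(String code) { return used.contains(code); }
}

class FakeTest {
    @Test
    void aCodeCannotBeUsedTwice() {
        UsedCodes store = new FakeUsedCodes();
        assertFalse(store.wasUsed("alpha-1"));
        store.markUsed("alpha-1");
        assertTrue(store.wasUsed("alpha-1")); // a second attempt would be refused
    }
}
```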

I found this post helpful in illuminating the differences between the types of doubles; Parker’s use of simple examples made it easy to follow and gave me additional practice identifying these testing strategies.

Test Automation and Continuous Testing

Blogger Kyle McMeekin, writing for QAsymphony.com in his post “Test Automation vs. Automation Testing”, explains the definitions of and distinction between automated testing and test automation, then goes into their roles in continuous testing and why this type of testing is important to understand.

McMeekin begins by defining automation as using technology to complete a task. When applied to the field of software quality assurance, there are two different types of automation: automated testing and test automation. While the terms sound interchangeable, the author differentiates them by explaining how the scope of each is different.

Automated testing is concerned with automating the execution of particular test cases, while test automation is concerned with automating the management of those test cases as an aggregate. While automated testing actually carries out the tests we are interested in, test automation manages the pipeline of all of the automated tests. So the scope of automated testing is local compared to the more global scope of test automation.

After differentiating between these two types of automation, McMeekin describes how they apply to continuous testing, and explains the importance of this strategy in today’s economic climate. While most software testing has traditionally been done after development is finished, today more tech companies are using models where the software is constantly in development, continually updated even after release. In this case, testing must be conducted as soon as something changes; this technique is known as continuous testing.

However, constantly keeping track of every test suite is a huge task in itself. This is where test automation comes in. If we can automate the management of this ever-growing list of test suites, a massive amount of work is saved on the tester’s part, freeing up time to create effective test cases.

Automation is at the heart of computer science: taking advantage of a computer’s ability to handle repetitive processes is integral to being a good developer, so learning how to apply automation in the context of software testing is definitely advantageous. Especially since it is so common nowadays for programs to be extended after release, the number of tests to keep track of grows steadily. By taking advantage of test automation to keep track of all the testing processes, we don’t need to worry about the timing of the tests, and we can spend more time testing and analyzing the behavior of software.