A roadmap for test automation

By Barnaby Golden, 27 December, 2013

Agile drives frequent releases, and frequent releases drive automated testing. You can't just switch on test automation, though: it requires a significant investment of time and energy, and the payoff usually comes in the medium to long term.

The following is a roadmap for introducing automated functional testing for a web application, including a series of incremental improvements to enhance the benefits. It is intended as a guide for a development team that currently uses manual testing but wants to move towards more test automation. It assumes that the front-end of the application is web based, but much of what is described can be applied to other types of system. The guide is technology-specific (Java, Selenium and Cucumber), but could be adapted to work with other technologies.

Key stages

  • Your first automated Selenium test
  • Running your tests in continuous integration
  • Adding tests for each new piece of functionality
  • Introducing a Behaviour Driven Development (BDD) approach
  • Tackling legacy code - using a prioritised functionality list
  • Cross browser testing
  • Scaling up - using a grid
  • Next steps - mobile, performance, etc.

Your first test

The first automated test you write is likely to provide little value, as it will cover only a fraction of the functionality of the whole application. Its importance comes from the testing framework that surrounds it. For example, a team writing its first Selenium test will have to introduce the Selenium libraries (or install the Selenium applications). They will also need to integrate this with their build process (Maven or Ant, for example).

I would suggest making your first automated test a simple one, such as a basic home page load followed by a check for a known piece of content. Spend whatever time is necessary to get the glue in place so that this test runs as effectively as possible. The aim is to make adding a second automated test as simple as possible. Knowledge sharing is critical at this stage: ensure that the whole team understands how the automated tests work, not just the testers.
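
As an illustration, a minimal first test might look something like the following, using JUnit 4 and the Selenium WebDriver Java bindings. The URL and the element ID are placeholder assumptions; substitute whatever your application serves.

    import org.junit.After;
    import org.junit.Before;
    import org.junit.Test;
    import org.openqa.selenium.By;
    import org.openqa.selenium.WebDriver;
    import org.openqa.selenium.firefox.FirefoxDriver;

    import static org.junit.Assert.assertTrue;

    public class HomePageTest {

        private WebDriver driver;

        @Before
        public void setUp() {
            driver = new FirefoxDriver();
        }

        @Test
        public void homePageShowsWelcomeMessage() {
            // Load the home page (placeholder URL).
            driver.get("http://localhost:8080/");

            // Check for a known piece of content, located by element ID.
            String banner = driver.findElement(By.id("welcome-banner")).getText();
            assertTrue(banner.contains("Welcome"));
        }

        @After
        public void tearDown() {
            driver.quit();
        }
    }

Wired into a Maven or Ant build, this single test forces all of the plumbing (dependencies, browser start-up, reporting) into place.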

Writing automated tests is usually a cross-functional effort involving developers and testers. Remember that the gains from an automated test suite come from running it frequently; fixing broken tests is a cost, not a benefit, and needs to be minimised. For this reason the team should spend time making sure the tests and the test setup are stable and resilient. An important part of achieving stability is avoiding fragile XPath expressions when locating items on a page. A better approach is to use IDs, even if this requires some refactoring of the product.
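
To make the XPath point concrete, here is a fragment (assuming a WebDriver instance named driver is in scope, and with hypothetical element names) contrasting the two styles of locator:

    // Fragile: tied to the page structure, breaks when the layout shifts.
    driver.findElement(By.xpath("/html/body/div[2]/div[1]/form/input[3]")).click();

    // Resilient: survives layout changes as long as the ID is stable.
    driver.findElement(By.id("search-submit")).click();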

Running your test from continuous integration

As soon as you have your first automated test running from your build system, you should look to run it from continuous integration. Once again this will provide little immediate benefit, but it forces the team to go through the setup pain. Things to think about in your CI setup:

  • How easy is it to debug failed tests?
  • Is there an effective visual display of test results?
  • How long do the test suites take to run?

The last point is important because we want to encourage frequent running of the tests. As your test suite grows it may become necessary to split out a core set of tests for frequent running and run the complete suite less often.
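
One way to make that split, sticking with JUnit 4, is to tag the core tests with a category marker; the class and test names below are hypothetical:

    import org.junit.Test;
    import org.junit.experimental.categories.Category;

    // Marker interface for the fast core tests (would normally live in its own file).
    interface CoreTests {}

    public class CheckoutTest {

        @Test
        @Category(CoreTests.class)
        public void userCanCompleteCheckout() {
            // Core journey: run on every build.
        }

        @Test
        public void expiredDiscountCodeIsRejected() {
            // Edge case: run in the less frequent full suite.
        }
    }

The build can then be pointed at just the core category for the frequent runs (Maven Surefire, for instance, supports filtering on JUnit categories via its groups setting) and at everything for the full run.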

Adding tests for new work

The next step is for the team to start adding at least one automated functional test for each new piece of functionality. In Scrum teams, writing automated functional tests should be added to the definition of done. This is where the first real benefits of automated testing become apparent: the automated tests serve to demonstrate the work done by the team in each sprint, and over time they build into a growing automated regression suite that progressively improves quality.

BDD

An optional next step, which I recommend, is to integrate your automated functional tests with a BDD framework such as Cucumber. There are many benefits to this approach, but the two key ones are:

  • Self documentation of the system
  • BDD is more accessible to the Product Owner and stakeholders

The ideal approach is to form a link between user stories (requirements) and the tests that confirm the new functionality is working. For example: a user story is added to a sprint backlog, an appropriate BDD feature is added (or modified), and then automated functional tests are written to satisfy the new feature.
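
As a sketch of how the pieces fit together with Cucumber-JVM, the Gherkin scenario (shown here as a comment) lives in a .feature file and the step definitions bind it to Selenium. The feature wording, element IDs and URL are all illustrative assumptions:

    // src/test/resources/features/search.feature:
    //
    //   Feature: Product search
    //     Scenario: A visitor searches for a product
    //       Given I am on the home page
    //       When I search for "widgets"
    //       Then I see search results for "widgets"

    import cucumber.api.java.After;
    import cucumber.api.java.en.Given;
    import cucumber.api.java.en.Then;
    import cucumber.api.java.en.When;
    import org.openqa.selenium.By;
    import org.openqa.selenium.WebDriver;
    import org.openqa.selenium.firefox.FirefoxDriver;

    import static org.junit.Assert.assertTrue;

    public class SearchSteps {

        private final WebDriver driver = new FirefoxDriver();

        @Given("^I am on the home page$")
        public void iAmOnTheHomePage() {
            driver.get("http://localhost:8080/");
        }

        @When("^I search for \"([^\"]*)\"$")
        public void iSearchFor(String term) {
            driver.findElement(By.id("search-box")).sendKeys(term);
            driver.findElement(By.id("search-submit")).click();
        }

        @Then("^I see search results for \"([^\"]*)\"$")
        public void iSeeSearchResultsFor(String term) {
            assertTrue(driver.findElement(By.id("results-heading")).getText().contains(term));
        }

        @After
        public void tearDown() {
            driver.quit();
        }
    }

The scenario text doubles as living documentation that the Product Owner can read and review.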

For an example of a BDD approach take a look at this Selenium and Cucumber post.

The legacy code challenge

So far we have addressed new functionality but ignored legacy code. In my experience, automated testing is often introduced to a legacy product that has historically been tested manually.

What do you do about legacy functionality that sits outside of test coverage? The answer is complicated and will depend on an in-depth conversation between the development team and the Product Owner. The most important step is to identify the testing gap. My favoured approach is to get the Product Owner and stakeholders to identify the key functionality of the product in priority order. This list defines the legacy testing gap and is used to frame the question: "How far down this list should the team be testing before each release?"

The Product Owner and the team can then negotiate what level of coverage they see as acceptable and plan it into the team's backlog. As an example, the Product Owner might say that they have to see the top 20 scenarios tested, but that the scenarios below them are less critical. The team and Product Owner then schedule the time to write automated tests covering the top 20 scenarios. Ideally this coverage work will be done before the next release; otherwise the testing for that release will have to be manual.

Cross browser testing

So far I have assumed that testing runs on a single browser type in a single environment. Much of the value of automated tests comes from their ability to be run against different criteria with only a small additional effort. First, the team and Product Owner need to identify their requirements; for example, they may find that the product needs to be tested on IE 10 and Firefox 24. Next, the team needs to modify their Selenium tests so that they run across the required environments. This can be tricky to implement cleanly, and the team should spend the time necessary to get it right.
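
One common pattern is to centralise browser selection in a small factory so the same suite can be pointed at each target browser from the build; the property name here is an assumption:

    import org.openqa.selenium.WebDriver;
    import org.openqa.selenium.firefox.FirefoxDriver;
    import org.openqa.selenium.ie.InternetExplorerDriver;

    public final class DriverFactory {

        private DriverFactory() {}

        // Select the browser from a system property so CI can run the
        // same suite once per target, e.g. mvn verify -Dbrowser=ie
        public static WebDriver create() {
            String browser = System.getProperty("browser", "firefox");
            if ("ie".equals(browser)) {
                return new InternetExplorerDriver();
            }
            return new FirefoxDriver();
        }
    }

Tests then obtain their driver from DriverFactory.create() instead of constructing one directly.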

Scaling up

As testing goes cross-browser and achieves decent coverage, it is likely to outgrow a single test server quickly. For example, imagine that 20 scenarios are being tested on 5 browsers: that is 100 tests we would like to run as frequently as possible. It is well worth considering a grid-based approach, such as Selenium Grid, which enables cross-browser, cross-platform testing across a number of servers. The size of the grid can then be scaled to provide the required level of testing performance. If the continuous integration server starts to become a bottleneck, it is worth considering either more hardware or a CI grid (such as a Jenkins cluster).
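
Switching a suite onto the grid is mostly a matter of swapping the local driver for a RemoteWebDriver pointed at the hub. A sketch, with a made-up hub address:

    import java.net.MalformedURLException;
    import java.net.URL;

    import org.openqa.selenium.WebDriver;
    import org.openqa.selenium.remote.DesiredCapabilities;
    import org.openqa.selenium.remote.RemoteWebDriver;

    public final class GridDriverFactory {

        // Placeholder hub address; substitute your own grid hub.
        private static final String HUB_URL = "http://grid-hub.example.com:4444/wd/hub";

        public static WebDriver create() throws MalformedURLException {
            // The hub routes the session to whichever node
            // advertises matching capabilities.
            return new RemoteWebDriver(new URL(HUB_URL), DesiredCapabilities.firefox());
        }
    }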

Next steps

The growing importance of mobile devices means that some form of automated mobile testing is desirable. This falls broadly into two categories:

  • Native mobile applications
  • Responsive web sites

There are tools available for both types of testing, but it is a challenge to integrate them with build systems and BDD. Once again, the team and the Product Owner need to discuss the importance of these tests and agree on the effort to be dedicated to them. Beyond functional testing it is also worth considering some kind of benchmarking tests, perhaps utilising the existing test scenarios.
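
A crude way to start on benchmarking, reusing the existing Selenium setup rather than a dedicated performance tool, is to time a scenario and assert against a threshold agreed with the Product Owner. The threshold and URL here are illustrative:

    import org.junit.Test;
    import org.openqa.selenium.WebDriver;
    import org.openqa.selenium.firefox.FirefoxDriver;

    import static org.junit.Assert.assertTrue;

    public class HomePageBenchmarkTest {

        // Illustrative threshold, to be agreed with the Product Owner.
        private static final long MAX_LOAD_MILLIS = 3000;

        @Test
        public void homePageLoadsWithinAgreedTime() {
            WebDriver driver = new FirefoxDriver();
            try {
                long start = System.currentTimeMillis();
                driver.get("http://localhost:8080/");
                long elapsed = System.currentTimeMillis() - start;
                assertTrue("Home page took " + elapsed + "ms", elapsed <= MAX_LOAD_MILLIS);
            } finally {
                driver.quit();
            }
        }
    }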
