How can we sweat those test assets?

Investment

How many of us have embraced the wave of free thinking over the last few years, and started implementing automated acceptance tests in our sleep?  I know that the teams I’ve worked with have, and we have all become increasingly better at it at an encouraging rate.  On top of that, we no longer have to make a huge investment in tools to get started on this journey.  I love the fact that I can download software in a few minutes and start building automated test assets in an equally short amount of time.  I did this recently with a demo.  A quick download of Java, Eclipse and the Selenium 2 jar… 15 minutes later we had some working code, in the form of some JUnit 4 tests for a web page, to talk around.  This is some distance away from the pain I remember from assessing automation even a mere 5 years ago (insert former Mercury products and consultancy blank cheques here).
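
To give a feel for how little is needed, a first test at that point can be as small as the sketch below.  The URL and assertion are placeholders for whatever your own page needs, and HtmlUnitDriver is just one of the drivers bundled with Selenium 2 that you could swap for FirefoxDriver or another implementation:

import org.junit.After;
import org.junit.Before;
import org.junit.Test;
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.htmlunit.HtmlUnitDriver;

import static org.junit.Assert.assertTrue;

public class HomePageSmokeTest {

  private WebDriver driver;

  @Before
  public void startBrowser() {
    // HtmlUnitDriver ships with the Selenium 2 jars; no separate browser install required
    driver = new HtmlUnitDriver();
  }

  @Test
  public void homePageHasATitle() {
    // placeholder URL - point this at your own system under test
    driver.get("http://www.example.com/");
    assertTrue(driver.getTitle().length() > 0);
  }

  @After
  public void stopBrowser() {
    driver.quit();
  }
}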

However, in this age of quick starts and easy access to tools and knowledge, don’t fool yourself.  You are still making an investment, and usually a huge one once a team has built automated acceptance tests that match an ever-growing feature set of the system under test.  People’s time has been invested, and it has still cost you money.  It is just that the entry point into the investment has been made easier.  Don’t get me wrong, I know these test assets have enormous value for a multitude of reasons.  However, as with valuable assets in any type of business… you need to get the most out of them and maximise their value (if it helps you towards your overall goal).

Multi-Purpose Usage

Ok.  So I should really have entitled this article “reuse of functional automated acceptance tests for another purpose”, because that’s where I’m coming from.  I suppose I wanted to give the idea a value context, as that is where my head lives when trying to communicate with others.  However, to drop down to the test and risk level again… what is your team doing to cover off potential risks (assuming the system under test is a web site, for example) around some of the following:

  • Stack performance bottlenecks under load, stress, etc.
  • Performance at page level (e.g. page weights)
  • Security (e.g. cross site scripting, SQL injection)
Unfortunately, in many cases that I have come across, the answer is likely to be one of the following:
  • We haven’t thought about that yet as we are focusing on story acceptance
  • We do something ad hoc either manually or semi automated
  • We have another set of monolithic environments and tools to handle that
  • We want to cover those risks, but there is a barrier to entry (e.g. setup, cost, knowledge)

So over the next couple of posts I’m going to look at what I’ve come across in this area, but I’m also interested in what you’ve come up with.  Send me your thoughts, ideas, and blog posts.  Whether that’s how you’ve got ShowSlow (showslow.com) set up to collect data from your tests, or how you’ve got OWASP ZAP (owasp.org) hooked into the build.  I hope to hear from you.

Keep your eyes on the DSL prize with Twist


Introduction

Learning a language can be a challenging task.  The absorption of a lexicon takes time and patience.  Twist can help keep this task achievable for consumers of acceptance tests by allowing the acceptance language to be defined naturally.  This assumes that the consumers will be business customers or other non-technical people.  If that is not the case, I would suggest using a pure code test framework such as TestNG to achieve the technical benefits of the Twist runner design.

Assuming the former is the case, Twist has the potential to facilitate communication in and around software delivery teams on a dramatic scale.  It can help give teams a highly accessible language for expressing acceptance tests, one that builds common understanding and enables rapid continuous delivery through automation.  However, it does take discipline to keep that goal in mind, and not to get sidetracked by the technicalities of coding the automation solution.  It’s not that those things don’t need consideration.  On the contrary, it’s good to be thinking about the execution giving the fastest possible feedback, producing maintainable automation code, and ideally even test data abstraction.  The fact is that those things won’t matter if the language for communication is neglected as the focus.

Content Assist

Twist has an excellent set of features for managing a scenario-focused language.  In particular, it allows the following:

  • Creation of method signatures and code from workflow steps (in natural language)
  • Rephrasing of workflow steps across scenarios, and searching for existing steps via content assist
  • Abstraction of multiple workflow steps into concepts or a code implementation (the first is preferred for language clarity)
  • Data driving of workflow step parameters
  • … and an ever-growing list
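
To make the first of those points concrete, the sketch below shows the rough shape of a fixture behind a couple of the workflow steps used later in this post.  The class name, method names and wiring are my own illustration rather than what Twist will generate verbatim; the point is simply that a natural-language step becomes a plain Java method, with the quoted values arriving as parameters.

package web.workflows;

// Illustrative fixture backing workflow steps such as:
//   "Set language to 'English' on the home page"
//   "Set currency to 'British Pounds' on the home page"
// The names below are assumptions for the sake of the example.
public class HomePage {

  // backs: "Set language to 'English' on the home page"
  public void setLanguageToOnTheHomePage(String language) {
    // drive the language selector for the given language here
  }

  // backs: "Set currency to 'British Pounds' on the home page"
  public void setCurrencyToOnTheHomePage(String currency) {
    // drive the currency selector for the given currency here
  }
}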

Limit The Deviation and Limit The Confusion

The experience I’ve had is that it is easy to create a massive amount of confusion (and also a massive amount of code) if you don’t keep to some basic usage patterns around the language, and a few ground rules.

Usage Patterns

  • Break down workflow fixtures by pages / areas / services to give them context
  • Before creating a new workflow step, search for an existing match
  • When defining a new workflow step, ensure it can be easily searched for by others

Ground Rules

  • To support the searching of workflow steps, keep the context of the area of the system under test visible
    • e.g. “… on the home page” or “… using google search”
  • Keep to a basic set of action words against each area, so that the interactions each area allows are easy to see
    • e.g. “Verify search suggestions for ‘Simon Reekie’ using google search”
    • e.g. “Set language to ‘English’ on the home page”
    • e.g. “Set currency to ‘British Pounds’ on the home page”
  • If you are going to get something from one part of the system under test and verify it in another, keep that visible within the scenario language
    • e.g. “Get and Store breakfast price as ‘Reported Breakfast Price’ on the price page”
    • e.g. “Verify breakfast price is equal to Stored ‘Reported Breakfast Price’ on the summary page”
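
For the get-and-store style of step, one hedged way to back it in code (the class and method names here are mine, not something Twist produces for you) is to keep stored values in a small shared helper, keyed by the name that appears in the scenario language, and have the price page and summary page fixtures delegate to it:

package web.workflows;

import java.util.HashMap;
import java.util.Map;

import static org.junit.Assert.assertEquals;

// Illustrative helper shared by fixtures, backing steps such as:
//   "Get and Store breakfast price as 'Reported Breakfast Price' on the price page"
//   "Verify breakfast price is equal to Stored 'Reported Breakfast Price' on the summary page"
public class StoredValues {

  private final Map<String, String> values = new HashMap<String, String>();

  // called by the "Get and Store …" step implementation
  public void store(String name, String value) {
    values.put(name, value);
  }

  // called by the "Verify … is equal to Stored …" step implementation
  public void verifyEquals(String name, String actual) {
    assertEquals(values.get(name), actual);
  }
}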

Conclusion

It is difficult enough to bring many individuals to a common understanding, and to keep on top of the complexity of that communication.  Give yourself a chance by building a DSL for Twist scenarios that keeps to a few understood rules and works within the boundaries of the tool’s capabilities.  There is nothing wrong with adding or changing the rules, but do it knowingly and collaboratively with those using them.

For those that are looking for the technical implications of this approach, I can only really give you these figures as food for thought.

  • Before this approach was taken in my current workplace, when scenario workflow step language spaghetti was everywhere, we had:
    • Thousands of fixture and test implementation classes built up with duplication (I know… refactoring mindsets could have helped)
    • Cases where it took days for people of all kinds to write a useful automated scenario
    • Debugging of test failures out of CI that took hours and became a specialist skill
  • After this approach was embedded we had the same coverage plus much more, but we also had:
    • A couple of dozen fixture classes
    • People across a whole range of skill levels who could build new scenarios to the point of automation within minutes to an hour

Configuring Twist for Selenium 2

The Challenge

I have been happily using the Sahi driver with Twist for the last few months now.  However, I wanted to try driving some mobile devices via Twist with Selenium 2 to give our team some extra scope and flexibility.  Fortunately, this proved achievable with a small amount of code and some Spring configuration within Twist.

Creating a Driver Factory

The Twist documentation provides excellent guidance on how to switch to alternative Selenium drivers, so I essentially followed that template.  After downloading and referencing the Selenium 2 jars in the project classpath, I created a factory class for WebDriver with the following code:

package twist.drivers;

import org.openqa.selenium.WebDriver;
import org.openqa.selenium.android.AndroidDriver;
import org.openqa.selenium.iphone.IPhoneDriver;

public class WebDriverFactory {
  
  private enum DeviceType {
    iphone, android
  }

  private WebDriver webDriver;
  private DeviceType deviceType;
  private String deviceURL;

  public WebDriverFactory(String deviceType, String deviceURL) {
    this.deviceType = DeviceType.valueOf(deviceType);
    this.deviceURL = deviceURL;
  }

  public void start() {
    try {
      // create the appropriate WebDriver implementation for the configured device target
      if (deviceType.equals(DeviceType.iphone)) {
        webDriver = new IPhoneDriver(deviceURL);
      } else if (deviceType.equals(DeviceType.android)) {
        webDriver = new AndroidDriver(deviceURL);
      } else {
        throw new RuntimeException("Device type not selected");
      }
    } catch (Exception e) {
      throw new RuntimeException(e);
    }
  }

  public void stop() {
    webDriver.quit();
  }

  public WebDriver getWebDriver() {
    return webDriver;
  }
}

An enumeration was used to manage the target execution devices.  In this case, I switched in the appropriate driver implementation behind the WebDriver interface based on these values.  These could just as easily be different browser types such as Firefox, IE, Safari, or Google Chrome.
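
As a hedged illustration of that, the same factory idea could be extended for desktop browsers along the lines below.  The driver classes are standard Selenium 2 ones, but the enum values, class name and wiring are simply my sketch:

package twist.drivers;

import org.openqa.selenium.WebDriver;
import org.openqa.selenium.chrome.ChromeDriver;
import org.openqa.selenium.firefox.FirefoxDriver;
import org.openqa.selenium.ie.InternetExplorerDriver;

// Illustrative variation of the factory for desktop browser targets.
public class BrowserFactory {

  private enum BrowserType {
    firefox, ie, chrome
  }

  public WebDriver create(String browserType) {
    switch (BrowserType.valueOf(browserType)) {
      case firefox:
        return new FirefoxDriver();
      case ie:
        return new InternetExplorerDriver();
      case chrome:
        // assumes the ChromeDriver server binary is available on the machine
        return new ChromeDriver();
      default:
        throw new RuntimeException("Browser type not recognised: " + browserType);
    }
  }
}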

Configuring the Spring Context (Suite)

The “applicationContext-suite.xml” file was then configured to use the driver factory with the following XML:

<bean id="webDriverFactory" class="twist.drivers.WebDriverFactory" init-method="start"
    destroy-method="stop" lazy-init="true">
  <constructor-arg value="${webdriver.device.type}"/>
  <constructor-arg value="${webdriver.device.url}"/>
</bean>

<bean id="webdriver" factory-bean="webDriverFactory" factory-method="getWebDriver" lazy-init="true" />

You’ll probably notice that both the device type and device URL are being injected into the constructor via placeholders.  This gave me the flexibility to launch against different execution targets based on properties file settings (see more on this in my previous post, Switch Test Runner Browsers Easily).
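
For completeness, the matching entries in the properties file might look something like the lines below.  The property names are the ones referenced by the placeholders above; the values are only examples (the device URL in particular will depend on your own setup):

webdriver.device.type = android
webdriver.device.url = http://localhost:8080/wd/hub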

Scenario Writing and Test Code

One real advantage of using a tool like Twist is that the scenario writing and high-level test code don’t need to be altered when switching or evolving drivers.  In fact, if that is the case, it shows you’re along the right lines with your scenario definitions.  This is especially true when you code to an automation framework pattern that abstracts the site implementation along the lines of PageObjects (see the Selenium 2 wiki for further information on PageObjects).
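
If you do go down the PageObjects route, a minimal sketch of that shape might look like the class below.  The page name, package and locator are invented purely for illustration; the point is that only the page class knows about the driver and the markup, so scenarios and workflow code stay untouched as drivers change:

package web.pages;

import org.openqa.selenium.By;
import org.openqa.selenium.WebDriver;

// Illustrative PageObject: models one page's behaviour behind simple methods.
public class SearchPage {

  private final WebDriver driver;

  public SearchPage(WebDriver driver) {
    this.driver = driver;
  }

  public void searchFor(String term) {
    // the "q" locator is an assumption for the example
    driver.findElement(By.name("q")).sendKeys(term);
    driver.findElement(By.name("q")).submit();
  }

  public String title() {
    return driver.getTitle();
  }
}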

The bottom line is that however you code your automation, you can now pick up and use the driver by passing it in via a constructor.  As a simple illustration of using the driver Spring bean, the example below shows how this might look for a workflow class without such a pattern being used:

package web.workflows;

import org.openqa.selenium.By;
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.WebElement;
import static org.junit.Assert.assertThat;
import static org.hamcrest.core.Is.is;

public class HomePageTests {

  private WebDriver driver;

  public HomePageTests(WebDriver driver) {
    this.driver = driver;
  }

  // locates the date drop-down on the page
  private WebElement dateDropDown(){
    return driver.findElement(By.id("date"));
  }

  public void setDate(String date){
    dateDropDown().sendKeys(date);
  }

  public String getDate(){
    // read the current value of the field via its "value" attribute
    return dateDropDown().getAttribute("value");
  }

  public void verifyDateSet(String date){
    setDate(date);
    assertThat(getDate(), is(date));
  }
}

Conclusion

The flexibility and ease with which this can be achieved with Twist can’t be overstated.  You can be up and running with Selenium 2 in a very short amount of time.  The other side of this spike for me was getting the mobile devices set up for driving tests through both real devices and simulators.  However, that’s another post in itself (especially the iPhoneWebDriver)… I’ll post that as a part II to this post at a later date.

As ever, please feel free to post comments or alternative views.  I’ll try to help where I can.

Switch Test Runner Browsers Easily

The Problem

I recently got frustrated with having to comment settings in and out of the “twist.properties” file in order to switch browsers when debugging scenarios.  However, luck led me to a relatively neat solution to the problem.

Whilst looking at various ways of managing different browser and application properties for the “build.xml” target, I found that I could switch between twist.properties files by setting system properties at the command line.  This ended up looking a little something like this for firing off the ant target:

ant twist-scenarios -Dbrowser=firefox

This ended up taking a couple of fairly simple changes to implement, as follows:

Optional Overriding of “twist.properties” using Property Placeholder Configurer

Because of the way in which Twist uses Spring to inject constructor arguments into the browser bean using placeholders, it was possible to add a few extra lines to make the bean override the default “twist.properties” file dynamically.  These changes to the “applicationContext-suite.xml” file are shown below.

<bean id="propertiesConfigurer"> class="org.springframework.beans.factory.config.PropertyPlaceholderConfigurer">
  <property name="systemPropertiesModeName" value="SYSTEM_PROPERTIES_MODE_OVERRIDE"/>
  <property name="searchSystemEnvironment" value="true"/>
  <property name="ignoreResourceNotFound" value="true"/>
  <property name="locations">
    <list>
      <value>classpath:twist.properties</value>
      <value>classpath:twist-conf/${browser}.twist.properties</value>
    </list>
  </property>
</bean>

The “searchSystemEnvironment” property makes “${browser}” resolve to a system property (as provided in the ant command line call above).  The “ignoreResourceNotFound” property ensures that if a system property isn’t supplied, it still picks up the default “twist.properties” file without complaining.

Creating a Folder of Browser Specific “twist.properties” Files

Next, under the project “src” folder, I created another folder called “twist-conf” containing multiple browser-specific “twist.properties” files.  They ended up in this format for readability:

twist-conf/
  firefox.twist.properties
  ie.twist.properties
  chrome.twist.properties

Each one of these files had the appropriate individual settings for each browser (in this case Sahi driver properties) giving a few lines as below:

sahi.browserExecutable = firefox.exe
sahi.browserLocation = "C:/Program Files/Mozilla Firefox/firefox.exe"
sahi.browserOptions = -profile sahi/userdata/browser/ff/profiles/sahi<threadnumber> -no-remote

Eureka Moment

The moment of revelation came when I realised that I could also specify this very same system property as a JVM argument in the “Twist/Preferences” menu option (on the Mac version, that is).  This change pushed the system property into the “Run Configuration” on the next run of a scenario.  And then… I just duplicated the “Run Configuration” for each browser, adjusting the system property for “-Dbrowser=”.  I renamed each new version to “Scenario in IE”, “Scenario in Firefox”, etc. until I had the full range.  I then added these as favourites so they were readily accessible.

Conclusion

There are limitations to this approach.  For example, the configuration is tied to an individual project.  So if you use multiple Twist projects, or various branches of the same one, you would need multiple configurations.

That said, in my situation it has certainly made life easier.  I hope it can be of use to you also.  Let me know whether you think it has mileage, or whether you have a better approach.  I’d appreciate different views.