How can we sweat those test assets?

Investment

How many of us have embraced the wave of free thinking over the last few years and started implementing automated acceptance tests in our sleep?  I know that the teams I’ve worked with have, and we have all become increasingly better at it at an encouraging rate.  On top of that, these days we no longer have to make a huge up-front investment in tools to get started on this journey.  I love the fact that I can download software in a few minutes and start building automated test assets in an equally short amount of time.  I did this recently for a demo.  A quick download of Java, Eclipse and the Selenium 2 JAR… 15 minutes later we had some working code, in the form of some JUnit 4 tests for a web page, to talk around.  This is some distance from the pain I remember when assessing automation even a mere 5 years ago (insert former Mercury products and consultancy blank cheques here).
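To give a sense of scale, the working code from that kind of 15-minute demo might look something like the following JUnit 4 test. This is a sketch only; the URL and the title check are placeholders for illustration, not the actual demo page.

```java
import org.junit.After;
import org.junit.Before;
import org.junit.Test;
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.firefox.FirefoxDriver;

import static org.junit.Assert.assertTrue;

public class QuickStartTest {

    private WebDriver driver;

    @Before
    public void openBrowser() {
        // Selenium 2 drives a real local Firefox; no extra server setup needed
        driver = new FirefoxDriver();
    }

    @Test
    public void homePageHasExpectedTitle() {
        // example.com stands in for whatever page the demo points at
        driver.get("http://example.com/");
        assertTrue(driver.getTitle().contains("Example"));
    }

    @After
    public void closeBrowser() {
        driver.quit();
    }
}
```

That is roughly the whole cost of entry: two JARs on the classpath and a browser.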

However, in this age of quick starts and easy access to tools and knowledge, don’t fool yourself.  You are still making an investment, and usually a huge one by the time a team has built automated acceptance tests that match an ever-growing feature set of the system under test.  People’s time has been invested, and it has still cost you money.  It is just that the entry point into the investment has been made easier.  Don’t get me wrong, I know these test assets will have enormous value for a multitude of reasons.  However, as with valuable assets in any type of business… you need to get the most out of them and maximise their value (if it helps you towards your overall goal).

Multi-Purpose Usage

OK.  So I should really have titled this article “reuse of functional automated acceptance tests for another purpose”, because that’s where I’m coming from.  I suppose I wanted to give the idea a value context, as that is where my head lives when trying to communicate with others.  However, to drop down to the test and risk level again… what is your team doing to cover off potential risks (assuming the system under test is a web site, for example) around some of the following:

  • Stack performance bottlenecks under load, stress, etc.
  • Performance at page level (e.g. page weights)
  • Security (e.g. cross site scripting, SQL injection)
Unfortunately, in many of the cases I have come across, the answer is likely to be one of the following:
  • We haven’t thought about that yet as we are focusing on story acceptance
  • We do something ad hoc either manually or semi automated
  • We have another set of monolithic environments and tools to handle that
  • We want to cover those risks, but there is a barrier to entry (e.g. setup, cost, knowledge)

So over the next couple of posts I’m going to look at what I’ve come across in this area, but I’m also interested in what you’ve come up with.  Send me your thoughts, ideas, and blog posts, whether that’s on how you’ve got ShowSlow (showslow.com) set up to collect data from your tests, or OWASP ZAP (owasp.org) hooked into the build.  I hope to hear from you.
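As a taster of the second example, one way a build could reuse the existing Selenium tests for security is to route their traffic through ZAP’s local proxy, so ZAP sees everything the functional tests touch.  The sketch below assumes ZAP is already running on localhost:8080 (its usual default, but check your setup); the class name is mine, not from ZAP’s documentation.

```java
import org.openqa.selenium.Proxy;
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.firefox.FirefoxDriver;
import org.openqa.selenium.remote.CapabilityType;
import org.openqa.selenium.remote.DesiredCapabilities;

public class ZapProxiedDriver {

    // Assumed ZAP proxy address; adjust to wherever your ZAP instance listens
    private static final String ZAP_PROXY = "localhost:8080";

    // Build the capabilities separately so they can be inspected without launching a browser
    static DesiredCapabilities buildCapabilities() {
        Proxy proxy = new Proxy();
        proxy.setHttpProxy(ZAP_PROXY).setSslProxy(ZAP_PROXY);

        DesiredCapabilities caps = DesiredCapabilities.firefox();
        caps.setCapability(CapabilityType.PROXY, proxy);
        return caps;
    }

    public static WebDriver create() {
        // Every request the tests make now flows through ZAP for passive scanning
        return new FirefoxDriver(buildCapabilities());
    }
}
```

Swapping this factory in for plain `new FirefoxDriver()` in the acceptance tests leaves the tests themselves untouched, which is the whole point of sweating the asset.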

3 thoughts on “How can we sweat those test assets?”

  • We managed to “re-use” our Selenium “Functional” tests for Performance and Security tests.

    For Performance we ran them through the JMeter proxy to record the HTTP requests.

    For Security we did the same, but through a dynamic security tool called Seeker.

    The capturing of our Selenium tests in both of these tools was automatic, as we loaded up the “browser profiles” with the relevant proxies enabled.

    We’d then run the “re-captured” tests through JMeter and Seeker, which were scheduled via Bamboo.

    It gave us some benefits, but I much preferred designing our “Functional” Selenium tests separately from our “Security/Performance” tests. When we reused them it felt like throwing a large blanket over the problem rather than specifically designing tests for Performance/Security.

    However, reuse certainly got us started much quicker than if we had started completely from scratch.
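    (For anyone wanting to try this, the browser-profile wiring described above might look roughly like the following in Selenium 2. The proxy host and port are assumptions for illustration — JMeter’s recording proxy listens wherever you configure it — and the class name is hypothetical.)

    ```java
    import org.openqa.selenium.WebDriver;
    import org.openqa.selenium.firefox.FirefoxDriver;
    import org.openqa.selenium.firefox.FirefoxProfile;

    public class RecordingProxyDriver {

        // Build a Firefox profile that sends all HTTP traffic through a local
        // recording proxy (e.g. JMeter's HTTP Proxy Server)
        static FirefoxProfile buildProfile(String host, int port) {
            FirefoxProfile profile = new FirefoxProfile();
            profile.setPreference("network.proxy.type", 1); // 1 = manual proxy configuration
            profile.setPreference("network.proxy.http", host);
            profile.setPreference("network.proxy.http_port", port);
            profile.setPreference("network.proxy.no_proxies_on", ""); // proxy everything, even localhost
            return profile;
        }

        public static WebDriver create() {
            // localhost:8080 is an assumed setup, not a universal default
            return new FirefoxDriver(buildProfile("localhost", 8080));
        }
    }
    ```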

    • Cheers Toby, great example of breaking this assumption. I think you’re right that re-use can mask solving the real problem at times. It can give you a starter for 10 in some situations. In particular, page level performance is a quick win for re-use, but if you really want to pull the tech stack apart… it’s a different context entirely.
