How many of us have embraced the wave of free thinking over the last few years and started implementing automated acceptance tests in our sleep? I know the teams I’ve worked with have, and we have all become increasingly better at it at an encouraging rate. On top of that, these days we no longer have to make a huge up-front investment in tools to get started on this journey. I love the fact that I can download software in a few minutes and start building automated test assets in an equally short amount of time. I did this recently for a demo. A quick download of Java, Eclipse, and the Selenium 2 jar… 15 minutes later we had some working code, in the form of some JUnit 4 tests for a web page, to talk around. This is some distance from the pain I remember when assessing automation even a mere 5 years ago (insert former Mercury products and consultancy blank cheques here).
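For context, here is roughly what those 15 minutes buy you: a minimal sketch of a JUnit 4 test driving Selenium 2’s WebDriver API. The URL, page title, and element locator below are hypothetical placeholders, and it assumes the JUnit 4 and Selenium 2 jars are on the classpath.

```java
import static org.junit.Assert.assertEquals;

import org.junit.After;
import org.junit.Before;
import org.junit.Test;
import org.openqa.selenium.By;
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.firefox.FirefoxDriver;

public class HomePageTest {

    private WebDriver driver;

    @Before
    public void openBrowser() {
        // Any WebDriver implementation will do; Firefox is just one choice
        driver = new FirefoxDriver();
    }

    @Test
    public void homePageHasExpectedTitle() {
        driver.get("http://www.example.com/"); // hypothetical system under test
        assertEquals("Example Domain", driver.getTitle());
    }

    @Test
    public void searchFieldAcceptsInput() {
        driver.get("http://www.example.com/");
        // By.name("q") is a placeholder locator -- use whatever your page exposes
        driver.findElement(By.name("q")).sendKeys("selenium");
    }

    @After
    public void closeBrowser() {
        driver.quit();
    }
}
```

Nothing clever, but it is real, running code you can talk around with the team, and it grows naturally into a suite as stories are delivered.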
However, in this age of quick starts and easy access to tools and knowledge, don’t fool yourself. You are still making an investment, and usually a huge one by the time a team has built automated acceptance tests that match an ever-growing feature set of the system under test. People’s time has been invested, and that has still cost you money. It is just that the entry point into the investment has been lowered. Don’t get me wrong, I know these test assets have enormous value for a multitude of reasons. However, as with valuable assets in any type of business… you need to get the most out of them and maximise their value (if it helps you towards your overall goal).
Ok. So I should really have entitled this article “reuse of functional automated acceptance tests for another purpose”, because that’s where I’m coming from. I suppose I wanted to give the idea a value context, as that is where my head lives when trying to communicate with others. However, to drop down to the test and risk level again… what are your team doing to cover off potential risks (assuming the system under test is a website, for example) such as the following:
- Stack performance bottlenecks under load, stress, etc.
- Performance at page level (e.g. page weights)
- Security (e.g. cross-site scripting, SQL injection)
And which of these answers sounds familiar?
- We haven’t thought about that yet, as we are focusing on story acceptance
- We do something ad hoc either manually or semi automated
- We have another set of monolithic environments and tools to handle that
- We want to cover those risks, but there is a barrier to entry (e.g. setup, cost, knowledge)
So over the next couple of posts I’m going to look at what I’ve come across in this area, but I’m also interested in what you’ve come up with. Send me your thoughts, ideas, and blog posts. Whether that’s on how you’ve got ShowSlow (showslow.com) set up to collect data from your tests, or if you’ve got OWASP ZAP (owasp.org) hooked into the build. I hope to hear from you.
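If you want a concrete starting point on the ZAP side, one common pattern is to run ZAP in daemon mode and route your existing acceptance tests through it as an HTTP proxy, so the same functional runs also feed its passive security scanner. A rough sketch, where the port and target URL are arbitrary choices of mine, not anything ZAP mandates:

```shell
# Start OWASP ZAP headless, listening as a local intercepting proxy
./zap.sh -daemon -port 8081 &

# Point the browser your acceptance tests use at that proxy.
# With WebDriver this is a profile/capability setting; for a quick
# sanity check from the command line, curl does the same job:
curl --proxy http://localhost:8081 http://www.example.com/

# Every request the tests make now flows through ZAP's passive scanner;
# pull the findings out afterwards via ZAP's API or its session in the UI.
```

The appeal is that the security data comes for free as a side effect of tests you were running anyway.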