I attended the session “Testing with Selenium” at the JAX conference in the hope of finally understanding how professional test automation of web applications is possible with this tool.
If you’re interested in test automation, particularly in the web area, you will often see people without relevant experience in the domain telling you why this or that tool is great, or why you should do this or that. The reason is probably that the subject is really easy to understand: the goal is “just” to automate what you would do manually on a website. The problem is that even if the question is simple, really good answers are more complicated.
Here we had a typical case of a speaker with no real experience in this area, acting just like a tool vendor. He said nothing about maintainability, and his XPath examples weren’t maintainable at all (one was even missing a tbody, though that can happen while preparing a slide). Nothing either about scripts generated by a recorder and how carefully they should be considered, and finally he dodged my question about his own experience by citing how many tests exist in his company.
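To make the maintainability point concrete, here is a minimal sketch of why recorder-style, position-based XPaths are fragile while locators anchored to a stable attribute survive page changes. It uses Python’s standard-library `xml.etree.ElementTree` as a stand-in for a browser DOM; the page snippets and the `orders` id are hypothetical, not from the talk:

```python
# Two versions of the "same" page: v2 adds a wrapper div, the kind of
# harmless refactoring that happens all the time in a web application.
import xml.etree.ElementTree as ET

v1 = "<html><body><table id='orders'><tr><td>42</td></tr></table></body></html>"
v2 = ("<html><body><div class='wrap'>"
      "<table id='orders'><tr><td>42</td></tr></table>"
      "</div></body></html>")

brittle = "./body/table/tr/td"            # recorder-style absolute path
robust  = ".//table[@id='orders']/tr/td"  # anchored to a stable id

for name, page in (("v1", v1), ("v2", v2)):
    root = ET.fromstring(page)
    # The brittle path breaks as soon as the wrapper div appears;
    # the id-anchored one keeps matching on both versions.
    print(name, bool(root.findall(brittle)), bool(root.findall(robust)))
```

The same reasoning applies to recorded Selenium scripts: browsers silently insert a `tbody` into tables, so a recorded path that includes (or omits) it can break the moment the test runs against a slightly different rendering.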
Still, a few remarks were interesting:
- “tests should be made very late, once the application has nearly no change”
Hmmm, why do you test in this case?
- “[Selenium's goal is] test automation for end user” — yet in the same talk the speaker showed how to use JSPs to dynamically generate the HTML containing the tests, and argued that Selenium RC is the best choice. Not really coherent.
- No load testing possible currently because “we want to kill one Mercury product at a time!”
This is perhaps the explanation: if Selenium’s reference point is a product like Mercury’s QTP, then it has little chance of being useful for truly professional test automation projects!
All in all, this session just confirmed my diagnosis of Selenium: it may be useful in some special cases, but otherwise it doesn’t let you create what I call professional tests, i.e. tests that are easy to create, cheap to maintain and extend, that fit into the development process, and that provide helpful reports when a failure occurs.