WebTest with Groovy, Maven and Eclipse

WebTest is in fact “only” a set of Ant tasks, so it’s no surprise that WebTest tests are traditionally written in XML. The Grails WebTest plugin has shown that Groovy can be a very pleasant alternative thanks to the AntBuilder. Even if you don’t use Grails, you can write WebTest tests in Groovy using the support provided in WebTest’s distribution, but if you plan to write all your tests in Groovy, this is surely not optimal. This post shows, step by step, a way to write WebTest tests with Groovy and Maven that comes close to a traditional unit test setup.

Note: I’m not a Maven expert. The proposed code works but it can perhaps be improved.

  1. Create a new Maven project
    mvn archetype:create -DarchetypeGroupId=org.apache.maven.archetypes -DgroupId=my.domain -DartifactId=myWebTestApp

  2. Edit the generated pom.xml
    1. Add WebTest as a dependency
    2. Add a reference to WebTest’s Maven snapshot repository

      Note that this is needed only if you want to use the latest WebTest snapshot (which is always the best one). You don’t need it if you use a version that is available in the central repository. As of Apr. 30, WebTest 3.0 has not been uploaded to the central repository. The upload request is available here: MAVENUPLOAD-2439.

      Update: WebTest 3.0 is available in Maven central repository since May 19.
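      Put together, the pom.xml additions look roughly like this. The dependency coordinates match the 3.0 release in the central repository; the snapshot repository URL is deliberately left as a placeholder, take the real one from the WebTest site:

```xml
<!-- WebTest dependency -->
<dependency>
  <groupId>com.canoo.webtest</groupId>
  <artifactId>webtest</artifactId>
  <version>3.0</version>
  <scope>test</scope>
</dependency>

<!-- snapshot repository; the URL below is a placeholder -->
<repositories>
  <repository>
    <id>webtest.snapshots</id>
    <name>WebTest dependencies</name>
    <url><!-- see the WebTest site --></url>
  </repository>
</repositories>
```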

    3. Configure the GMaven plugin
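      The plugin declaration looks roughly like this (a sketch from memory; check the GMaven site for the exact coordinates and the current version):

```xml
<plugin>
  <groupId>org.codehaus.groovy.maven</groupId>
  <artifactId>gmaven-plugin</artifactId>
  <executions>
    <execution>
      <goals>
        <!-- compile the Groovy test sources -->
        <goal>generateTestStubs</goal>
        <goal>testCompile</goal>
      </goals>
    </execution>
  </executions>
</plugin>
```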
  3. Create the Eclipse project
    mvn -Declipse.downloadSources=true eclipse:eclipse

    I guess that something similar exists for other IDEs.

  4. Import the project in your workspace
  5. Add Groovy support in the IDE
    The Groovy support in Eclipse is not as good as in other IDEs but it is better than nothing.
  6. Write your first WebTest in Groovy src/test/groovy/my/domain/FirstWebTest.groovy
    package my.domain

    import com.canoo.webtest.WebtestCase

    /**
     * My first WebTest in Groovy.
     */
    class FirstWebTest extends WebtestCase {
    	void testGoogleSearch() {
    		webtest("My first test") {
    			invoke "http://www.google.com/ncr", description: "Go to Google (in English)"
    			verifyTitle "Google"
    			setInputField name: "q", value: "WebTest"
    			clickButton "I'm Feeling Lucky"
    			verifyTitle "Canoo WebTest"
    		}
    	}
    }
  7. Run the test
    • As normal unit test from the IDE
      As it is a normal unit test, you can use the IDE support for that. In Eclipse this can be done with a right mouse click / Run As... / JUnit Test or with the shortcut Alt + Shift + X, Z
    • Or from the command line
      mvn test
  8. Enjoy the results!
    Running a WebTest this way produces “normal” JUnit reports as well as the traditional WebTest reports that contain precise information to quickly find the cause of failed tests.
  9. (optional) Run headless
    By default WebTest shows a console that informs you about the running tests and opens the reports in your browser once the tests are finished. This is often useful on a workstation, but it may be really annoying on a build server. To avoid it, you just need to set the wt.headless property:

    mvn test -Dwt.headless

Why Google would do well to use HtmlUnit to avoid the JS error on its start page

Like many other testing tools, WebTest uses Google‘s website for simple code examples. It has the advantage of being a well known website that is pretty fast and has simple code.

http://www.google.com used to be a simple page

Some time ago, WebTest users started to report that even simple examples from the demo tests available when creating a new project failed due to JS errors like this:

As this test used to work, this means that Google’s start page isn’t as simple as it used to be. It was predictable that this page would change one day, but it is quite embarrassing as we use this kind of test to show how easy and powerful WebTest is, even for sites using a lot of JS stuff.

http://www.google.com really has a JS error!

HtmlUnit‘s JS support is quite good (and improves continuously), but of course it is not (yet 😉 ) as good as that of “real” browsers, so I started to search for the cause of the problem in HtmlUnit, looking for a missing or incorrect implementation of some JS functionality. In the end I found that the problem occurred only when setting the value attribute of the field directly rather than calling the type(...) method on it, because keypress handlers were being used. This means that WebTest should probably use type(...) rather than setting the field’s value directly.
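To make the difference concrete, here is a self-contained toy model in Java. These classes are purely illustrative, not HtmlUnit’s API: setting the value directly bypasses the keypress handler, while typing feeds it character by character, which is exactly what a page caching keystrokes relies on.

```java
import java.util.ArrayList;
import java.util.List;

// Toy model of a text input on a page that registers a keypress handler,
// as Google's homepage script apparently does.
public class ToyTextField {
    private final StringBuilder value = new StringBuilder();
    // characters the page's keypress handler has seen
    public final List<Character> seenByHandler = new ArrayList<Character>();

    // simulates setting the value attribute directly: no key events fire
    public void setValueAttribute(String v) {
        value.setLength(0);
        value.append(v);
    }

    // simulates type(): one keypress event per character
    public void type(String v) {
        for (char c : v.toCharArray()) {
            seenByHandler.add(c); // the keypress handler runs here
            value.append(c);
        }
    }

    public String getValue() {
        return value.toString();
    }
}
```

In both cases the field ends up containing the same value, but only type() lets the page’s handler see the keystrokes, which is why scripts relying on them break when the value is set directly.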

But this also means that Google’s home page relies on the keypress handlers and that the JavaScript code doesn’t work correctly if these handlers are not called, as in the following scenario:

  1. go to http://www.google.com/ncr (to be sure to have the English version of the page)
  2. right click in the search field and paste some value to search for
  3. click the “I’m Feeling Lucky” button

The results are quite similar to the JS error obtained by WebTest, whether you use Firefox or Internet Explorer (perhaps it works correctly with Chrome 😉 ):

More painful for Google than for WebTest!

OK, this error is nothing dramatic, but for an Internet giant like Google that has plenty of talented engineers (and …, and…, and …. You perhaps know better than I do what the web means for Google), I find it quite embarrassing to have such an error on its main page!

The error scenario is not particularly complicated, and such a case should surely be covered by automated tests. Perhaps it is a question of inadequate tools? Google seems to use Selenium intensively 😉 .

Concerning WebTest, I’ll start a discussion on the mailing list to decide which approach is best: setting the field value directly, typing it, or offering both possibilities.

WebTest @ JUG Cologne slides

Here are the slides of the presentation I gave yesterday:

WebTest @ JUG Cologne, 25.08.2008

I’ll give a presentation about efficient web test automation with WebTest at JUG Cologne on 25.08.2008. Thanks Michael for inviting me, it’s an honour to appear in the list of great speakers of my “home JUG”.

There is no time limit (according to Michael), so I plan to show different new demos of existing features as well as of future ones that should trouble / motivate / question the attendees.

There will be a Lightning Talk on Selenium by Simon Tiffert before my presentation. I think this will be quite interesting as well, because Simon has far more experience with Selenium than nearly everyone blogging about this tool.

WebTest/HtmlUnit integration with JMeter started

I couldn’t resist showing this screenshot before my holiday:

Boost your WebTests: 50% faster or more!

Tests never run fast enough!

This is particularly true for functional tests of web applications. WebTest is known to be very fast compared to other functional test tools; nevertheless, when your test suite grows, or simply when you want to receive quick feedback after a commit, you often feel that tests are too slow. A new experimental feature of WebTest lets you specify the number of threads that should be used for the tests, which can bring enormous speed improvements without any modification of the tests.

Simply configure the number of threads

Rather than just calling

ant

you specify the number of worker threads:

ant -Dwt.parallel.nbWorkers=20

The optimal number of worker threads will depend on your setup.

How it works: enqueue rather than execute

The idea is quite simple and the implementation totally non-intrusive. In fact it’s a shame that nobody has had the idea before (including you! ;-)). It can be explained with a few lines of code.

WebTest is based on Ant, which means that the task mapped to <webtest> contains something like this:

class WebTestTask extends Task {
  void execute() {
     // do the WebTest
  }
}

For the parallel execution we map <webtest> to a new class that looks like this:

class WebTestTaskParallel extends WebTestTask {
	void execute() {
		workQueue.add this
	}

	void executeReally() {
		super.execute() // really do the WebTest
	}
}

Once this is done, Ant can do its job normally. When it calls the execute() method of the task, the instance adds itself to a queue and the execute() method returns, allowing Ant to continue its normal execution. A set of worker threads watches the queue, calling executeReally() on the WebTestTask instances to really run the tests. That’s nearly everything; the rest is just synchronization glue code.
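The worker side can be sketched in a few lines of Java (class and method names here are illustrative, not WebTest’s actual code):

```java
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;

// Illustrative sketch of the enqueue-instead-of-execute pattern.
public class WorkerPool {
    private final BlockingQueue<Runnable> workQueue = new LinkedBlockingQueue<Runnable>();

    public WorkerPool(int nbWorkers) {
        for (int i = 0; i < nbWorkers; i++) {
            Thread worker = new Thread(new Runnable() {
                public void run() {
                    try {
                        while (true) {
                            // corresponds to calling executeReally() on the task
                            workQueue.take().run();
                        }
                    } catch (InterruptedException e) {
                        // worker shut down
                    }
                }
            });
            worker.setDaemon(true);
            worker.start();
        }
    }

    // what the parallel task's execute() does: enqueue and return at once
    public void enqueue(Runnable task) {
        workQueue.add(task);
    }
}
```

Ant’s single-threaded run fills the queue almost instantly; the nbWorkers daemon threads then drain it concurrently.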

Improvements from 0% to 50% and more

I’ve seen many cases where using a few threads gains over 50% execution speed. In fact this will depend on the capacity of the application under test and how it handles the increased load. With a server – not necessarily a fast one – that smoothly handles the load generated by numerous worker threads, I can imagine test suites that execute approximately in the time of the slowest test. On the other hand, I’ve already seen test suites that take more time to execute with many worker threads when the AUT runs on the same physical machine as the tests themselves. It’s not really surprising: if the tests need more “power” to execute, then there is less left for the application.


With a recent WebTest build, you just have to specify the number of worker threads to start using this new feature. You may need to refactor your tests to include processing done after a </webtest> within the test, otherwise it will be done before the test itself gets executed and may cause problems. Finally, the current implementation of this feature doesn’t contain any facility to manage dependencies between tests.

Towards functional load testing?

Tests with large numbers of worker threads have shown that WebTest scales quite well: both memory and CPU usage stay reasonable. This opens a new perspective for WebTest: use as a load testing tool. It would probably have to be combined with something like JMeter, but I see great advantages in using WebTest compared to the low-level HTTP processing usually done by load testing tools, particularly in test maintainability, AJAX testing and the correctness of the simulated scenario.

Update (Feb 6): a former client told me that they just started to use a slightly modified version of this feature that I prepared for them some time ago (allowing them to manage some dependencies between tests). The results are even better than expected: over 75% of the time saved! With 8 worker threads the tests complete in less than 25% of the original time.

WebTest as universal DSL for automated web testing in Groovy thanks to its Ant roots?

What a long title!

In the discussion on the Grails-user mailing list following my previous post “WebTest vs Selenium”, Marc Palmer and James Page requested the creation of a kind of meta DSL in Groovy for automated functional tests of web applications.

The idea was to provide a DSL allowing functional tests to be written in a tool-agnostic way and to run them with WebTest as often as needed, because it is fast, and, for instance, once a night with Selenium, because it uses a real browser but is quite slow.

I’m still not fully convinced of the utility of such a feature because WebTest is so good 😉 but I think there is no need for a new DSL: it already exists! The AntBuilder gives WebTest a really nice syntax in Groovy, and this could simply be reused for other “target tools” thanks to Ant.

Ant theory

When you write something like this (the examples in this post use the Groovy AntBuilder, but the same would apply to “pure” Ant in XML format):

ant.webtest(name: "a simple test") {
  invoke "http://webtest.canoo.com"
  verifyTitle "WebTest website"
  clickLink "Manual"
  verifyXPath(xpath: "count(id('navigation-top')//li)", text: "5",
      description: "check the number of top menu elements")
}

this looks like WebTest, but it is not WebTest as long as you haven’t configured Ant with, for instance, something like

ant.project.addTaskDefinition("webtest", com.canoo.webtest.ant.WebTestTask)
ant.project.addTaskDefinition("invoke", com.canoo.webtest.steps.request.Invoke)

This has two consequences:
– it is possible to configure tasks other than WebTest’s for the “WebTest steps”
– it is possible to inspect the “parsed” task tree

This means that it wouldn’t be too difficult to run a “WebTest” test with another automated test tool, for instance Selenium (as long as the tests don’t use any WebTest feature for which no Selenium equivalent exists). Let’s see how this could be done.

First solution: redefine “WebTest steps”

The first approach consists in redefining the “WebTest steps” to provide an alternative implementation for each WebTest step before executing the test. For verifyTitle this could look like the following:

import org.apache.tools.ant.*

class SeleniumVerifyTitle extends Task {
  String text

  void execute() {
    if (text != selenium.title)
      throw new BuildException("Wrong title: expected $text, got ${selenium.title}")
  }

  def getSelenium() {
    project.references.'selenium' // assuming that test start placed it there
  }
}
ant.project.taskDefinitions["verifyTitle"] = SeleniumVerifyTitle

verifyTitle is quite simple; for other steps it would be trickier, or even impossible (like the pdf or email steps), to write a Selenium equivalent.

Generate script from Ant tree

The second approach consists in generating a script from the Ant structure. This has the “advantage” that the generated script doesn’t necessarily have to run on the Java Virtual Machine.

class WebTest2SeleniumRubyConverter extends WebTest {
  File targetFile // additional attribute on the Ant task: the file to write to

  private converters = [
    'invoke': { "open \"${it.attributeMap.url}\"" },
    'verifyTitle': { "assert_equal \"${it.attributeMap.text}\", @selenium.get_title" },
    'clickLink': { "@selenium.click \"link=${it.attributeMap.label}\"" },
    'verifyXPath': { "# ?? I haven't found an example" },
    'pdfVerifyTitle': { "# skipped because not supported: pdfVerifyTitle ${it.attributeMap.title}" },
  ]

  def execute() {
    def rubySteps = runtimeConfigurableWrapper.children.collect { converters[it.elementTag](it) }
    // single quotes: ${} below is handled by the template engine, not by Groovy
    def scriptTemplate = '''
require 'test/unit'
require 'selenium'

class ExampleTest < Test::Unit::TestCase
  include SeleniumHelper

  def setup
    @selenium = Selenium::SeleniumDriver.new("localhost", 4444, "*firefox", "http://localhost", 10000);
  end

  def teardown
  end

  def test_something
<% steps.each { %>    ${it}
<% } %>  end
end
'''
    def engine = new groovy.text.SimpleTemplateEngine()
    def template = engine.createTemplate(scriptTemplate)
    targetFile.withWriter {
      it << template.make([steps: rubySteps])
    }
  }
}
ant.project.taskDefinitions["webTest"] = WebTest2SeleniumRubyConverter

NB: I don’t have any experience with Ruby and wrote the example code above just by looking at the samples on Selenium’s website.

Naturally this is only an example, and it is a bit more complicated to do it correctly (handling different combinations of task attributes, special characters, Ant macros, …), but surely not much more.

Conclusion: not very complicated, but does it make sense?

These two examples show that it wouldn’t be complicated to “convert” simple WebTest tests so that they can be executed with another tool like Selenium (other tools, for instance WebDriver, could be a target too), as long as they use the subset of WebTest features that the target tool supports. Of course WebTest features like its particularly rich reporting wouldn’t be available then.

Personally, I’m not really convinced of the utility of such a “generic DSL” due to WebTest’s excellent quality, and unless a new client (you? ;-)) really wants this feature I don’t plan to work on it, but I’m ready to provide technical assistance if someone wants to do the job.

2nd WebTest screencast: Data Driven WebTest

My colleague Tomi Schütz and I have finished our second WebTest screencast: Data driven WebTest.

Content (duration ~2’30”):
I- Do you know Google Calculator?
II- Do you know the dataDriven Ant task?
III- Testing Google Calculator with data driven WebTest

It should be interesting even for experienced WebTest users: the dataDriven task is so simple and powerful, and Google Calculator is a funny toy.

WebTest Monitor

WebTest now contains a minimal GUI showing the progress of test execution when started manually:


What does it bring? In fact nearly nothing.

It’s just a small piece of information displayed during test execution. It becomes funny when you start a few dozen tests in parallel, because you almost don’t have time to follow each single test that gets started or finished: it goes too fast. But this will be the subject of a future post once this feature is integrated in the WebTest builds.

WebTest vs Selenium: WebTest wins 13 – 5

In the last months I’ve seen rising interest in automated testing of web applications thanks to the efficient viral marketing of Selenium. However, the world is full of test automation projects that started with big hopes and lots of enthusiasm, only to be abandoned shortly after facing the unpleasant reality that it takes more than point-and-click activity to develop a suite of robust tests.

The maintainability of automated tests depends primarily on the skills of the test authors, but different tools have different features that impact their efficiency. This blog post compares the features of two open source automated web testing tools: Canoo WebTest and Selenium.

A short introduction to the contenders:

Canoo WebTest is a free open source tool that has existed since 2001. It is written in pure Java and consists of a set of Ant tasks that drive a simulated, faceless browser (originally HttpUnit, but for the last few years HtmlUnit).

Selenium is a free open source tool as well, created in 2004. It uses injected JavaScript to work within real browsers. Different components exist under the name Selenium: Selenium Core, Selenium RC, Selenium IDE (!), … In this blog post I will only consider Selenium RC used with Selenium IDE on Firefox or Selenium HTA on Internet Explorer, due to the limitations of the other possibilities.

Features comparison

To be clear: as a WebTest (and HtmlUnit) committer I’m undoubtedly biased. On the other hand, I have experience with huge functional test suites being developed and maintained over periods of years. Trying to be objective, I may overcompensate in the other direction and give Selenium too much credit. Of course I will diligently fix errors I may have made in my understanding of Selenium. But please read this post to the end before starting with criticism 😉

  • Browser fidelity: WebTest 0 – 1 Selenium

This is probably the most overestimated characteristic of a web testing tool. Automated tests don’t make manual testing useless because automated tests can’t cover everything (at least for affordable costs). You still have to walk through your application (just think of everything you’ve checked just reading this: page layout, font size, font colors, …).
The consequence is that an automated web testing tool’s purpose is not to ensure that an application works “well”, as that is not possible, but to detect most of the failures that could happen. This is a huge difference, because it means that tests don’t necessarily have to run in a “real” browser.
Nevertheless the browser’s real behaviour has to be approximated as closely as possible. HtmlUnit’s JavaScript support has made impressive progress, but it still doesn’t (and never will) behave exactly like a normal browser.
Even though Selenium modifies the normal JavaScript execution of an application, it uses the browser itself and is therefore nearer to the standard behaviour of the browser.

  • Reports: WebTest 1 – 0 Selenium

JUnit-like reports are far too limited for web test automation. This is probably something you first see once you have reached a certain volume of tests. If the tests are successful, you don’t need any report at all, but when some tests fail, you need information to find the cause of the failure as quickly as possible, and an error message is often not enough.
With comprehensive reports like those provided by WebTest, you don’t have to debug your tests, just analyse the reports. Furthermore, it allows you to understand (and fix) the worst kind of bugs: those that don’t occur systematically.

  • Speed: WebTest 1 – 0 Selenium

Tests are never fast enough. Selenium is known not to be very fast, and even slower on Internet Explorer (just read the mailing list), and seems to suffer from memory leaks. On the other hand, WebTest is quite fast (see for instance this thread on the Selenium Dev mailing list for a non-representative test where Selenium took ~10 seconds and WebTest < 2s).
It’s not surprising given Selenium’s architecture (3 tiers involved) and all the rendering that happens in the browser. Even if HtmlUnit’s HTML handling algorithms are not as good as the real browsers’, WebTest simply has less to do and everything happens in the JVM.

  • Integration in development process: WebTest 1 – 0 Selenium

WebTest is “just” Ant, which means it can be called directly from CruiseControl, for instance, or from each developer’s workstation.
On the other hand, for Selenium you need a real browser with its own profile, and a proxy – possibly on another computer if you want to test with IE and run the tests from a non-Windows system.

  • Scalability: WebTest 1 – 0 Selenium

For a large application (or set of applications) with good functional test coverage, your test suite(s) will grow rapidly and scalability may become an issue. WebTest scales far better than Selenium, mostly because it’s faster and because you can simply run many test suites in parallel (just think of the hardware requirements and browser limitations to do that with Selenium).

  • Capture JS errors: WebTest 1 – 0 Selenium

This is what surprises me most about experienced developers working with Selenium: they find it acceptable to ignore JS errors. Would you accept compilation errors in your program as long as your unit tests pass? Surely not! But in fact this is exactly what you do with Selenium, as it doesn’t detect the JavaScript errors contained in your application (unless they directly impact the specific tests, causing them to fail).

  • Testing AJAX: WebTest 1 – 1 Selenium

Contrary to popular belief, you don’t need to run your test as JavaScript inside a browser to test AJAX functionality. HtmlUnit, and thus WebTest, is just as up to the task. It can even be considered superior, as it allows better control over how the in-page requests are scheduled, making the unpredictable browser behaviour predictable (see for instance my previous post).

  • Beginner friendly: WebTest 0 – 1 Selenium

Beginners (as well as managers ;-)) understand test automation of web applications better when they see what happens.

  • Documentation: WebTest 1 – 0 Selenium

Extensive and up-to-date documentation is very important. A quick look at both web sites will show you that the WebTest manual is clearly the winner. It should even count as a negative point for Selenium that advice on what makes test suites maintainable is completely missing.

  • Predictable behaviour: WebTest 1 – 0 Selenium

This should be a minimal requirement for a test tool, but if you look regularly at the Selenium mailing lists or at different posts (like this one), it is not yet fully the case for Selenium.

  • XPath support: WebTest 1 – 0 Selenium

WebTest currently uses Jaxen as its XPath engine, which means that XPath 1.0 is covered as well as some XPath 2 functions (did you know that starts-with is in the XPath 1 spec but ends-with first appears in XPath 2?).
Additionally you can customize it to define your own XPath functions.
Selenium uses native XPath support when it’s available (like in Firefox) and evaluates XPath expressions using a JavaScript library otherwise (like in IE). This JS library is slow and many XPath expressions aren’t interpreted correctly. Even in Firefox, the support is limited to XPath 1.0.

  • Support for badly formatted HTML code: WebTest 0 – 1 Selenium

Browsers are able to cope with really badly formed HTML code, and so, as a consequence, does Selenium. WebTest’s parser (NekoHTML) is able to handle some malformations, but not as many. Even though it is quite questionable to see this as a feature when your goal is to write your web application as well as possible, sometimes testers do not have access to the development resources to correct the source and just want to test functionality, so I’ll give this point to Selenium.

  • Extensibility: WebTest 1 – 0 Selenium

Selenium accepts custom extensions, but first, this is cumbersome because the extensions have to be deployed in the target browser(s), and second, interactions are limited because the extension code executes in the browser and not in your test program.
In WebTest you have full control over the “browser” from within your tests, which you can use to easily write global extensions as well as project or test specific ones.

  • Data driven tests: WebTest 1 – 0 Selenium

No discussion, the dataDriven Ant task used with WebTest is simple and powerful!
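As a reminder of what this looks like, here is a sketch of a data driven test in Ant syntax (the attribute names and the placeholder style are from memory; check the WebTest manual for the exact syntax):

```xml
<dataDriven tableContainer="searches.xls">
  <webtest name="search: #{term}">
    <invoke url="http://www.google.com/ncr" />
    <setInputField name="q" value="#{term}" />
    <clickButton label="Google Search" />
    <verifyText text="#{expected}" />
  </webtest>
</dataDriven>
```

One nested webtest runs per row of the table, with the columns (here term and expected) available as properties.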

  • Multi-language support: WebTest 0 – 1 Selenium

Selenium RC has bindings for different languages (Java, Ruby, PHP, …) whereas WebTest is bound to Ant, which means XML or, for instance, Groovy with its nice AntBuilder (in fact any of the over 200 languages for the JVM could probably be used). I think Selenium is missing a real specification language (please don’t talk about Selenese!) like Ant is for WebTest in this case, but I need to give some points to Selenium…

  • Internationalisation support: WebTest 1 – 0 Selenium

Using WebTest, you just need to put your language specific strings in property files and use Ant’s built-in property task to load the right resources before executing your tests.

Update 05.11.07:
Dan Fabulich correctly points out that Selenium RC tests written in the same language as the AUT can directly use the application’s i18n resource bundles. Therefore WebTest can’t be seen as the winner outside of the Java world.
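The WebTest approach, sketched in Ant (the property and file names here are illustrative):

```xml
<!-- load the language specific strings, e.g. messages_de.properties -->
<property name="lang" value="de" />
<property file="messages_${lang}.properties" />

<webtest name="home page title (${lang})">
  <invoke url="http://myapp.example.org/" />
  <verifyTitle text="${title.home}" />
</webtest>
```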

  • Support for non HTML content: WebTest 1 – 0 Selenium

HTML is only one of the content types used by a web application, and it’s a common need to have mixed content within the same application, with for instance a PDF file containing the invoice after checkout. Selenium is limited to HTML content (plus XML and text). On the other hand, WebTest has built-in support to work with PDF documents and Excel files as well as applets and emails.


Automated functional tests of web applications should become as natural as unit tests. Some tests are better than no tests, no matter which tool is used. Selenium does a good job of introducing newcomers and has many advantages (besides the price) over its commercial rival QTP. Nevertheless, at least when the size of your suite grows, you should pay attention to your efficiency if you want to last, and “the ROI on WebTest is many orders of magnitude higher than any other tool I’ve used” (Lisa Crispin, author of Testing Extreme Programming).

When comparing Selenium and WebTest, 3 categories of web applications can be considered. First, the applications supported both by WebTest and Selenium; this contains most applications. Then the ones that use browser features (mostly JavaScript) not yet supported by HtmlUnit; due to HtmlUnit’s awesome progress in JavaScript support, this category continuously shrinks. The last category concerns applications that use, for instance, PDF documents or applets, for which Selenium has no support. In all cases where WebTest can be used, it is far more efficient than Selenium at ensuring the quality of web applications, because it provides more feedback and takes less time (both to execute and to analyse results).

When I started writing this post I didn’t expect that Selenium would get such a bad score. Comments are welcome to show which advantages of Selenium over WebTest I’ve missed!
