Frontend Testing with PhantomJS, CasperJS, Mocha and Chai

Let’s face it: frontend testing, or in fact any sort of testing that involves you (the developer/tester) manually going through each scenario, can be a gruelling process. This post isn’t about the importance of frontend testing, because it's 2015.

I’m going to write about testing the UI and simulating user actions in the browser using PhantomJS, CasperJS, Mocha and Chai.

Before I proceed any further, here is a brief introduction to what each framework/library does:

Mocha

Mocha is a feature-rich JavaScript test framework running on node.js and the browser, making asynchronous testing simple and fun. Mocha tests run serially, allowing for flexible and accurate reporting, while mapping uncaught exceptions to the correct test cases.

Chai

Chai is a BDD / TDD assertion library for node and the browser that can be delightfully paired with any JavaScript testing framework.

PhantomJS

PhantomJS is a headless WebKit scriptable with a JavaScript API. It has fast and native support for various web standards: DOM handling, CSS selectors, JSON, Canvas, and SVG.

CasperJS

CasperJS is an open source navigation scripting and testing utility written in JavaScript for the PhantomJS WebKit headless browser and SlimerJS (Gecko). It eases the process of defining a full navigation scenario and provides useful high-level functions, methods and syntactic sugar.

CasperJS provides us with some really neat functions to work with. Particularly:

  • casper.start(): Configures and starts Casper.
  • casper.waitFor(): Waits until a function returns true before processing the next step.
  • casper.waitUntilVisible(): Waits until an element matching the provided selector expression is visible in the remote DOM before processing the next step. Uses waitFor(). I find this particularly useful when we are interacting with DOM elements that have animation (see the sketch below).
  • casper.capture(): Proxy method for PhantomJS’ WebPage#render. Adds a clipRect parameter for automatically setting the page's clipRect settings, reverting it once done.
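
As a quick illustration, here's a minimal sketch of these calls in action (the URL, selector and clip dimensions are placeholders):

casper.start('http://localhost:8000');

// Wait for the animated element to become visible, then screenshot it,
// clipping the capture to a 400x300 region at the top-left of the page
casper.waitUntilVisible('#animatedModal', function() {
    this.capture('modal.png', {
        top: 0,
        left: 0,
        width: 400,
        height: 300
    });
});

casper.run();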

There are many more useful functions here.

CasperJS provides built-in testing functionality as well, but we opted for the Mocha testing framework and the Chai assertion library because we liked them better.
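
For reference, a check written with CasperJS's built-in tester looks something like this (URL and selector are placeholders) and would be run with casperjs test <filename>.js:

casper.test.begin('Home page has the element', 1, function(test) {
    casper.start('http://localhost:8000', function() {
        test.assertExists('#correctElement');
    });
    casper.run(function() {
        test.done();
    });
});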

Let’s Get Started!

First, get Mocha and Chai installed (if you opt to use them instead of CasperJS's built-in testing utility):

  • npm install mocha
  • npm install chai

Then install PhantomJS:

npm install -g phantomjs

There are a couple of other ways to install PhantomJS, shown here.

Get CasperJS running:

npm install -g casperjs

Finally, install mocha-casperjs and casper-chai if you want to use Mocha and Chai with CasperJS:

npm install -g mocha-casperjs

npm install -g casper-chai

Note from the future: I recommend installing the npm packages locally and specifying the test commands within package.json instead.
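
A minimal sketch of that local setup (version ranges omitted; pin them to taste):

{
  "devDependencies": {
    "phantomjs": "*",
    "casperjs": "*",
    "mocha-casperjs": "*",
    "casper-chai": "*"
  },
  "scripts": {
    "test": "mocha-casperjs tests/*.js"
  }
}

Because npm adds node_modules/.bin to the PATH when running scripts, npm test picks up the locally installed binaries and nothing needs to be installed globally.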

Alternatively, if you’re happy with just PhantomJS and CasperJS, the Chrome extension here can help you get off to a good start: Resurrectio allows you to record a sequence of browser actions and produces the corresponding CasperJS script.

Ahem. Let’s really get started now.

Let’s write some code that does all of the following in sequence:

  1. Loads a page
  2. Waits for a selector to appear
  3. Clicks on an element, asserts that the redirected page has the correct URL and, if successful, captures a screenshot

describe('Home page', function() {
    before(function() {
        casper.start('http://localhost:8000');
    });

    it('should have an element in DOM', function() {
        casper.waitForSelector('#correctElement', function() {
            '#correctElement'.should.be.inDOM;
        });
    });

    it('should bring you to another page on click', function() {
        casper.thenClick('#destroyEverything', function() {
            this.echo('Clicked on Destroy everything');
        });
        casper.waitFor(function check() {
            return this.evaluate(function() {
                return /urlthatwewant/.test(document.location.pathname);
            });
        }, function then() {
            // Succeeded: capture a screenshot of the new page
            this.echo('-> Succeeded in loading the other page');
            this.capture('anotherPage.png');
        }, function timeout() {
            this.echo('Failed to load page').exit();
        });
    });
});

Running the above is simple:

mocha-casperjs <filename>.js

Here are some things we learnt while setting up frontend tests for Kogan.com.

Keeping it DRY

Many of the functions can be reused. Fortunately, PhantomJS allows you to import/require modules using the CommonJS syntax.

A very reusable module is the login module, shown below in simplified form:


module.exports = function(email, password) {
    describe('Logging in', function() {
        it('Filling in form fields and clicking should log the user in', function() {
            casper.then(function() {
                this.fillSelectors('form.form-email', {
                    'input#email': email,
                    'input#password': password
                });
                this.echo('Filling in details');
            });

            casper.thenClick('#loginButton', function() {
                this.capture('filledIn.png');
                this.echo('-> Clicked on Login');
            });
        });
    });
};

Config File

Having a config file with all your valid/invalid data can also help with making things DRY-er.

	
/**
 * Config file
 */
module.exports = {
    URL: {
        dev: 'http://localhost:8000'
    }
};
    

Putting them all together, we can make the code a lot cleaner and easier to understand. Pretty sweet, huh?


var config = require('./config');
var login = require('./functions/login');
var doSomething1 = require('./functions/doSomething1');
var doSomething2 = require('./functions/doSomething2');


describe('Yet Another Page', function() {
  before(function() {
    casper.start(config.URL.dev);
  });

  login();

  doSomething1();

  doSomething2(1);

});

Like the sound of how we work? Check out our Careers Page!

References:

  • PhantomJS
  • CasperJS
  • Mocha

Continuously Improving our Process - Retrospectives

Like many agile teams, we regularly run retrospectives to gauge how we are going as a team and to think about what to improve and how.

We have one every two weeks; they are time-boxed to one hour and held standing up (like we do for nearly all of our team meetings).

We have the typical ‘happy’ column, a ‘sad’ column and a ‘puzzling’ column. Everyone brainstorms on post-its, we group them together and vote on what we would like to discuss.

The team is prompted by standing next to our wall and running through some of the achievements and big events of the past two weeks.

I ask probing questions such as:

  • What has slowed down your progress?
  • What has enabled you?
  • From when you pick up a story to when you deploy it to production, what obstacles do you have?
  • What areas have been inefficient and where is there unnecessary work or rework?

There is one key part that distinguishes a productive retrospective from a time waster: the actions and improvements that come out of the retro.

One temptation is to discuss the particular details of an issue that has just occurred, which can often turn into a bit of a whinge and sometimes even a blame game (especially if the people involved are not present at the retro). A tactical action is then thought up to address the latest symptom, and the team moves on to the next highest-voted item.

It is far more valuable for the team to discuss the patterns and root cause of the issue.

To do this, the team should discuss what process was followed and give other examples of that process in play. This removes the emotion and raises the conversation to a higher-level discussion about repeatable, systemic issues.

Once we understand the root cause, we explore what is within our control to improve. That way, when we begin to think of actions to improve the situation, we are thinking of changes or tweaks to our process rather than a quick fix or band-aid.

Then, once we have an action to improve our process, we run through a few hypothetical situations to ensure we have a shared understanding of what our new improvement looks like. We’ll often look at a few tasks in our backlog to give us some examples and we run through how things will play out with our new improvement.

At the next retrospective, the first thing we discuss is our actions from the last retro:

  • Has the action been implemented? If not, why not?
  • Is the issue still occurring? If yes, why?
  • Did the action improve the issue? If not, why not?

Beginning our retrospective with the previously agreed actions reinforces to the team the purpose of our retrospectives: to share examples of our team’s anchors and engines, to discuss their root causes and to action improvements to our process.

Tips for writing unit tests for Django middleware

The Django framework provides developers with great testing tools, and it's dead easy to write tests for views using Django's test client. There is extensive documentation on how to use django.test.Client to write automated tests. However, we often want to write tests for components that django.test.Client gives us no access to. An example is Django middleware, which is used to add business logic either before or after view processing: django.test.Client has no public API for developers to access the internal request object.
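
For contrast, this is the kind of view test the test client makes easy (the URL is a placeholder); note that the request object it builds never surfaces:

from django.test import TestCase

class HomePageTest(TestCase):

    def test_home_page_renders(self):
        # The test client drives the full request/response cycle,
        # but the request it constructs internally is never exposed
        response = self.client.get('/')
        self.assertEqual(response.status_code, 200)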

Here is a simple example of a middleware class that creates a stash from data saved in the session.


class Stash(object):
    def __init__(self, **kwargs):
        self.__dict__.update(kwargs)


class StashMiddleware(object):
    """
    Reconstructs the stash object from the session
    and attaches it to the request object
    """
    def process_request(self, request):
        stashed_data = request.session.get('stashed_data', None)
        # Instantiate the stash from data in the session
        if stashed_data is None:
            stash = Stash()
        else:
            stash = Stash(**stashed_data)
        # Attach the stash to the request object
        setattr(request, 'stash', stash)
        return None

Let's analyze what needs to be tested:
1. Assert that if the stashed data exists in the session, it should be set as an attribute of the request
2. Assert that if the stashed data doesn't exist in the session, an empty stash is created and attached to the request object
3. Assert that all attributes of the stash can be accessed

How about dependencies? What do we need in order to write this test?
- StashMiddleware class (this can be easily imported)
- A request object to pass to process_request(). This one is a bit harder to obtain, and since we are writing a unit test, let's just mock it.

We are now ready to write the test:


from django.test import TestCase
from mock import Mock
from bugfreeapp.middleware import StashMiddleware, Stash

class StashMiddlewareTest(TestCase):

    def setUp(self):
        self.middleware = StashMiddleware()
        self.request = Mock()
        self.request.session = {}

This sets up an instance of StashMiddleware and mocks a request. I'm using Michael Foord's mock library to assist me with this. Since we know the session is a dictionary-like object, we can mock it with an empty dictionary.


    def test_process_request_without_stash(self):
        self.assertIsNone(self.middleware.process_request(self.request))
        self.assertIsInstance(self.request.stash, Stash)

    def test_process_request_with_stash(self):
        data = {'foo': 'bar'}
        self.request.session = {'stashed_data': data}
        self.assertIsNone(self.middleware.process_request(self.request))
        self.assertIsInstance(self.request.stash, Stash)
        self.assertEqual(self.request.stash.foo, 'bar')

The first test asserts that (without stashed data in the session):
- process_request returns None
- Stash object has been attached to request

The second test asserts that:
- process_request returns None
- Dictionary containing data in session is unpacked and used to create a Stash object.
- Stash attributes can be accessed

In both cases, we assert on the return value of process_request. This might sound like a redundant thing to test for, but it actually helps us identify regressions: knowing that process_request returns None, we don't have to worry about this middleware short-circuiting the request and skipping the subsequent middleware.
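
To see why that matters, here is a hypothetical sketch of a middleware that does short-circuit the chain; under Django's middleware contract, returning an HttpResponse from process_request means the remaining middleware and the view are skipped:

from django.http import HttpResponse

class MaintenanceModeMiddleware(object):
    """
    Hypothetical middleware: because process_request returns a
    response, subsequent middleware and the view never run
    """
    def process_request(self, request):
        return HttpResponse('Down for maintenance', status=503)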

Tips

  • Not all tests can be written with django.test.client.Client.
  • Keep your dependencies for unit tests as light as possible; use mocks.
  • Write unit tests that run fast. Don't test the ORM or network calls; try using mock.patch instead (see the sketch below).
  • Revisit your code if you have a hard time setting up dependencies; that normally indicates the code is too tightly coupled.
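
As an illustration of the mock.patch tip, here is a sketch in which bugfreeapp.utils, fetch_exchange_rate and convert_price are all hypothetical names:

from django.test import TestCase
from mock import patch

from bugfreeapp.utils import convert_price


class ConvertPriceTest(TestCase):

    @patch('bugfreeapp.utils.fetch_exchange_rate')
    def test_convert_price_uses_mocked_rate(self, mock_fetch):
        # The real fetch_exchange_rate would hit the network; the mock doesn't
        mock_fetch.return_value = 2.0
        self.assertEqual(convert_price(10), 20)
        mock_fetch.assert_called_once_with()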

Like the sound of how we work? Check out our Careers Page!

Anatomy 101 - Does Django Scale?

How Django is used by Australia’s largest online retailer.

I’m often asked about the choice of Django (and Python) for the base technology stack of kogan.com. People are often surprised that Australia’s largest online retailer is not built in Java or .NET (your typical ‘enterprise-y’ stack), or on an out-of-the-box enterprise commerce product. I thought it would be good to give people some context behind the technology we use at kogan.com, how it came into being and where it is going.