Testing

Catches when Expecting Exceptions in Django Unit Tests

To cover all bases when writing a suite of unit tests, you need to test for the exceptional cases. However, handling exceptions can break the usual flow of the test case and confuse Django.

Example scenario: unique_together

For example, we have an ecommerce site with many products, serving multiple countries that may have different national languages. Each product may have a description written in different languages, but only one description per (product, language) pair.

We can set up a unique_together constraint to enforce that unique pairing:

class Description(models.Model):
    product = models.ForeignKey("Product")
    language = models.ForeignKey("countries.Language")
    subtitle = models.CharField(...)
    body = models.CharField(...)
    ...

    class Meta:
        unique_together = ("product", "language")

Developer chooses assertRaises()

If the unique_together rule is violated, Django will raise an IntegrityError. A unit test can verify that this occurs using assertRaises() on a lambda function:

def test_unique_product_description(self):
    desc1 = DescriptionFactory(self.prod1, self.lang1)
    self.assertRaises(
        IntegrityError,
        lambda: DescriptionFactory(self.prod1, self.lang1),
    )

The assertion passes, but the test will fail with a new exception.

A wild TransactionManagementError appears!

Raising the exception while creating the new object breaks the current database transaction, making any further queries invalid. The next piece of code that touches the DB - probably the test teardown - will raise a TransactionManagementError:

Traceback (most recent call last):
  File ".../test_....py", line 29, in tearDown
    ...
  File ...
    ...
  File ".../django/db/backends/__init__.py", line 386, in validate_no_broken_transaction
    ...
TransactionManagementError: An error occurred in the current transaction.
You can't execute queries until the end of the 'atomic' block.

Developer used transaction.atomic. It's super effective!

Wrapping the test (or just the assertion) in its own transaction will prevent the TransactionManagementError from occurring, as only the inner transaction will be affected by the IntegrityError:

def test_unique_product_description(self):
    desc1 = DescriptionFactory(self.prod1, self.lang1)
    with transaction.atomic():
        self.assertRaises(
            IntegrityError,
            lambda: DescriptionFactory(self.prod1, self.lang1),
        )
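
Equivalently, assertRaises() can be used as a context manager inside the atomic block, which avoids the lambda entirely - a sketch using the same hypothetical DescriptionFactory helper as above:

def test_unique_product_description(self):
    desc1 = DescriptionFactory(self.prod1, self.lang1)
    with transaction.atomic():
        with self.assertRaises(IntegrityError):
            DescriptionFactory(self.prod1, self.lang1)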

You don't have to catch 'em all: Another solution

Another way to fix this issue is to subclass your test from TransactionTestCase instead of the usual TestCase. Despite the name, TransactionTestCase doesn't use DB transactions to reset between tests; instead it truncates the tables. This may make the tests slower in some cases, but it is more convenient if you are dealing with many IntegrityErrors in a single test. See the Django Documentation for more details on the difference between the two classes.
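
As a rough sketch (again assuming the hypothetical DescriptionFactory and the prod1/lang1 fixtures from the earlier examples), the only change needed is the base class:

from django.db import IntegrityError
from django.test import TransactionTestCase

class DescriptionTests(TransactionTestCase):
    def test_unique_product_description(self):
        desc1 = DescriptionFactory(self.prod1, self.lang1)
        # No atomic() wrapper needed: the test is not run inside a wrapping
        # transaction, so the failed INSERT doesn't poison later queries.
        self.assertRaises(
            IntegrityError,
            lambda: DescriptionFactory(self.prod1, self.lang1),
        )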

Testing auto_now DateTime Fields in Django

Django's auto_now_add and auto_now field arguments provide a convenient way to create fields which track when an object was created and when it was last modified.

For example:

class BlogPost(models.Model):
    title   = models.CharField(...)
    author  = models.ForeignKey("Author")
    body    = models.TextField()
    created = models.DateTimeField(auto_now_add=True)
    edited  = models.DateTimeField(auto_now=True)
    ...

Unfortunately, they can make it difficult to write unit tests which depend on these creation or modification times, as there is no simple way to set the fields to a specific time for testing.

The problem

Although auto_now fields can be changed in code, they will overwrite themselves with the current date and time whenever the object is saved, so they can effectively never be set to another time for testing.

For example, if your Django-powered blog is set to prevent commenting on posts a month after they were last edited, you may wish to create an old post object to test the block. The following example will not work:

def test_no_comment(self):
    blog_post = BlogPostFactory()

    blog_post.edited = datetime.now() - timedelta(days=60)
    # Django will replace this change with now()

    self.assertFalse(blog_post.can_comment())

Even changes to an auto_now field in a factory or using the update() function won't last; Django will still overwrite the change with the current time.

The easiest way to fix this for testing? Fake the current time.

The solution: Mock Time

The auto_now and auto_now_add fields use django.utils.timezone.now to obtain the current time. We can mock.patch() this function to return a fake time when the factory creates the object for testing:

import mock  # on Python 3, the same API is available as unittest.mock
from datetime import datetime, timedelta

...

    def test_no_comment(self):
        # make "now" 2 months ago
        testtime = datetime.now() - timedelta(days=60)

        with mock.patch('django.utils.timezone.now') as mock_now:
            mock_now.return_value = testtime

            blog_post = BlogPostFactory()

        # out of the with statement - now() is the real now again
        self.assertFalse(blog_post.can_comment())

Once you need to return to the present, leave the with block: the patch is removed, timezone.now returns the real time again, and you can test the long-ago-edited object against the present time.

Other Solutions

An alternative solution is to use a fixture instead; however, fixtures should generally be avoided, as they have to be manually updated as your models change and can lead to tests incorrectly passing or failing - see this blog post for more details.

Another alternative is to write your own version of save() for the model so that the field can be set directly. However, this requires more complex code than using mock.patch() - and all that extra code ends up in production, not in the test as in the example above.
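
For completeness, here is a rough sketch of what such an override might look like - hypothetical code, not from the original post; the update_edited keyword argument is an invented convention that lets a test keep an explicit timestamp:

from django.db import models
from django.utils import timezone

class BlogPost(models.Model):
    title  = models.CharField(max_length=200)
    body   = models.TextField()
    edited = models.DateTimeField(editable=False)

    def save(self, *args, **kwargs):
        # Re-implement auto_now by hand: stamp the field on every save,
        # unless the caller (e.g. a test) asks to keep an explicit value.
        if kwargs.pop("update_edited", True):
            self.edited = timezone.now()
        super(BlogPost, self).save(*args, **kwargs)

A test can then set edited to an old timestamp and call blog_post.save(update_edited=False), at the cost of shipping that extra branch in production code.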