The other day I was battling with some weird behaviour where a key in a session was updated, but sometimes it would revert after a while.
The key in question was a flag recording that a customer had been sent an email about their abandoned cart, and when the key reverted they ended up getting duplicate emails.
To send those emails, we have an offline Celery task that loops over all sessions in the DB and checks a flag on the cart to see whether the email has already been sent.
from django.contrib.sessions.backends.db import SessionStore
from django.contrib.sessions.models import Session


def update_flag(session, cart):
    # Mark the cart as emailed and write it back into the session.
    cart.email_sent = True
    session_store = SessionStore(session_key=session.pk)
    session_store['cart'] = cart.serialise()
    session_store.save()


def find_sessions_for_email():
    # Walk every session row in the DB, yielding the ones whose cart
    # hasn't had the email yet.
    for session in Session.objects.all().iterator():
        cart = Cart.from_session(session)
        if check_time(session) and not cart.email_sent:
            yield session, cart
I made a test session, forced the email send, and checked the session in the DB: the flag was correctly set. I then searched the entire codebase for references to this flag and found that nothing else touched it apart from this code.
When I dug into the cases where duplicate emails were sent, I noticed that all of them involved customers who came back to the site after the first email and started browsing again. But why? Why would browsing the site cause the flag to change state? And why wasn't everyone affected?
A big clue is that we are using Django's django.contrib.sessions.backends.cached_db session engine in production.
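For context, production settings point sessions at the cache-backed store, roughly like this (the cache alias shown is just the Django default, not necessarily ours):

# settings.py
SESSION_ENGINE = "django.contrib.sessions.backends.cached_db"

# cached_db writes sessions through to the DB but serves reads from
# this cache (Redis, in our case).
SESSION_CACHE_ALIAS = "default"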
The problem was that we were directly importing django.contrib.sessions.backends.db instead of the backend configured in the settings!
Because we used db instead of cached_db, the Celery task updated the flag in the DB but not in the Redis cache. When the user browsed the site again within the cache's TTL, the stale cached session could be re-saved to the database, clobbering the flag we had just set.
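The fix was to stop importing a backend directly and instead load whatever engine the settings name, the same way Django's own SessionMiddleware does. A minimal sketch of the corrected task code, reusing update_flag from above:

from importlib import import_module

from django.conf import settings

# Resolve the configured backend (cached_db in production) from
# SESSION_ENGINE, just as django.contrib.sessions.middleware does.
SessionStore = import_module(settings.SESSION_ENGINE).SessionStore


def update_flag(session, cart):
    cart.email_sent = True
    # With cached_db this save updates both the cache and the DB, so a
    # later request can't resurrect a stale copy of the session.
    session_store = SessionStore(session_key=session.pk)
    session_store['cart'] = cart.serialise()
    session_store.save()

With that change, the task writes through the same cache the site reads, so a returning visitor no longer picks up, and re-saves, a session that is missing the flag.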