Recently a bug found its way into production which caused excessive updates to one of our critical Postgres tables. While we identified and squashed the bug quickly, the damage was already done: there were roughly 1.6 dead tuples for every live one. Past a certain proportion of dead tuples, Postgres' query planner starts to ignore indices because its statistics suggest the table holds many more rows than it really does. In this degraded state, queries that would normally complete in sub-second time were taking multiple seconds, even minutes, to complete.
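If you want to gauge how bad the bloat on a table is, the pg_stat_user_tables view exposes live and dead tuple counts; a quick check along these lines (the table name is this post's example, not a real one):

-- live vs. dead tuples, and when autovacuum last touched the table
SELECT n_live_tup,
       n_dead_tup,
       round(100.0 * n_dead_tup / NULLIF(n_live_tup, 0), 1) AS dead_pct,
       last_autovacuum
FROM pg_stat_user_tables
WHERE relname = 'affected_table';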
Usually dead tuples aren’t much of an issue as they’re vacuumed away regularly by the autovacuum process. Autovacuum works well because it only acquires a SHARE UPDATE EXCLUSIVE lock, meaning selects, updates, and inserts can still occur. From the currently running vacuum’s progress we calculated that the autovacuum was going to take more than 30 days to complete. With Black Friday sales around the corner we needed another solution. A full vacuum was calculated to take roughly an hour and would reclaim far more space; however, it would acquire an ACCESS EXCLUSIVE lock, blocking all access to the table for the entire duration: not possible without taking the entire site down!
Or is it? If we could get a full vacuum to work, the database would recover in less than an hour!
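As an aside, the 30-day estimate above came from the vacuum's progress counters; since Postgres 9.6 these are exposed in the pg_stat_progress_vacuum view. A sketch of the kind of query involved (turning blocks scanned into a time estimate is simple extrapolation from repeated samples):

-- how far along the running vacuum is: heap blocks scanned vs. total.
-- Sampling this periodically gives a blocks-per-second rate that can be
-- extrapolated into a rough completion estimate.
SELECT relid::regclass AS table_name,
       phase,
       heap_blks_scanned,
       heap_blks_total,
       round(100.0 * heap_blks_scanned / NULLIF(heap_blks_total, 0), 1) AS pct_scanned
FROM pg_stat_progress_vacuum;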
We analysed the usage of this table and determined that, so long as there were no writes, removing all relations (and substituting null / no rows) would be acceptable in the short term, as the values were already denormalised into our Elasticsearch cluster. Could we do this with minimal code changes?
Our Django application uses select_related/prefetch_related heavily, so we needed an empty table with the same schema for these queries to keep working without exceptions. Here’s what we came up with:
BEGIN;
-- we want to be able to rename fast later, so lock the table now
LOCK TABLE affected_table IN ACCESS EXCLUSIVE MODE;
-- create a table with the same columns as affected_table.
-- Foreign keys and indices are not required as
-- there’ll (theoretically) be no data inserted
CREATE TABLE "affected_table_empty"
(
    "id" serial NOT NULL PRIMARY KEY,
    "date" timestamp with time zone NOT NULL,
    "enabled" boolean NOT NULL,
    "data" varchar(128) NOT NULL,
    "product_id" integer NOT NULL,
    "mode_id" integer NOT NULL
);
-- swap the tables around. Table renaming is a fast operation
ALTER TABLE "affected_table" RENAME TO "affected_table_broken";
ALTER TABLE "affected_table_empty" RENAME TO "affected_table";
COMMIT;
We had some foreign keys linking to this table, so each of those constraints had to be dropped:
-- Another fast operation
ALTER TABLE "table_1" DROP CONSTRAINT "tbl1_fk_affected_table_id";
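If the full list of referencing constraints isn't already known, the system catalogs can enumerate them; a sketch against pg_constraint (note that after the swap above, the original table resolves under its new name):

-- every foreign key constraint that still references the original table
SELECT conrelid::regclass AS referencing_table,
       conname            AS constraint_name
FROM pg_constraint
WHERE contype = 'f'
  AND confrelid = 'affected_table_broken'::regclass;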
At this point we’ll have:
- No references to the affected table (now renamed to broken) in the database
- An empty table for our code to look at
Testing this out revealed a problem. If a row’s foreign key column still held a value, Django would crash with DoesNotExist when the relation was accessed, as it assumes the referenced row exists.
>>> obj = Table1.objects.get(pk=1)
>>> obj.affected_table_id
<<< 156
>>> obj.affected_table
<<< DoesNotExist: AffectedTable matching query does not exist
As mentioned earlier, we’re heavy users of select_related so we couldn’t just mock the columns to be None. Here’s what we came up with to work around it:
import waffle
from django.db import models
from django.db.models.fields.related_descriptors import ForwardOneToOneDescriptor


class DisablingForwardOneToOneDescriptor(ForwardOneToOneDescriptor):
    def __get__(self, instance, cls=None):
        # short-circuit the lookup and return None while the switch is active
        if waffle.switch_is_active(f"disable_relation_{self.field}"):
            return None
        return super().__get__(instance, cls=cls)


class DisablingOneToOneField(models.OneToOneField):
    forward_related_accessor_class = DisablingForwardOneToOneDescriptor
We use django-waffle to manage feature flags, which makes it easy to turn relations off and on through this new class.
The above class can be used in place of a OneToOneField (and could easily be extended to a plain ForeignKey) as follows:
class Table1(models.Model):
    # on_delete is required on modern Django; SET_NULL matches null=True
    affected_table = DisablingOneToOneField(
        "AffectedTable", on_delete=models.SET_NULL, null=True, blank=True
    )
Now when we try again it will just return None:
>>> obj = Table1.objects.get(pk=1)
>>> obj.affected_table_id
<<< 156
>>> obj.affected_table
<<< None
Calling obj.save() is also safe; it preserves the existing ID.
After some rigorous testing we were confident and ready to roll it out in production.
The steps that needed to occur were:
- Scale down Celery workers to prevent any writes to the empty table
- Disable the relation via the feature switch
- Create an empty version of the table and swap it with the affected table
- Drop all foreign key constraints to the affected table
- Vacuum the affected table
- Swap the affected table back with the empty table
- Insert any data that managed to find its way into the empty table into the affected table
- Add all the foreign key constraints back in
- Re-enable the relation via the feature switch
- Scale workers back up
We executed the plan and observed that the system showed no signs of distress, so we were ready to perform the vacuum.
VACUUM FULL affected_table_broken;
The vacuum completed successfully with no queries blocked!
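As a quick sanity check (not part of the original runbook), the table's on-disk footprint can be compared before and after; it should drop sharply once VACUUM FULL rewrites the table without the dead tuples:

-- total size of the table, including indices and TOAST data
SELECT pg_size_pretty(pg_total_relation_size('affected_table_broken'));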
Now to reverse the procedure to get the data back:
BEGIN;
LOCK TABLE affected_table_broken, affected_table IN ACCESS EXCLUSIVE MODE;
ALTER TABLE "affected_table" RENAME TO "affected_table_empty";
ALTER TABLE "affected_table_broken" RENAME TO "affected_table";
-- insert any rows that managed to sneak through
INSERT INTO affected_table (date, enabled, data, product_id, mode_id)
SELECT date, enabled, data, product_id, mode_id
FROM "affected_table_empty";
COMMIT;
We’ll also need to add the foreign key constraints back, but in a non-blocking way. The trick is to create each constraint as NOT VALID, meaning existing rows won’t be checked when it’s added. The constraint can later be validated, which verifies existing rows while holding a less intrusive lock (SHARE UPDATE EXCLUSIVE), so regular activity can continue.
-- put the constraint back on
-- note that it is initially deferred and not valid (old rows won't be
-- checked; new inserts and updates will be). This means it will run fast!
ALTER TABLE "table_1"
    ADD CONSTRAINT "tbl1_fk_affected_table_id"
        FOREIGN KEY (affected_table_id) REFERENCES affected_table (id)
        DEFERRABLE INITIALLY DEFERRED NOT VALID;

-- verify the existing rows, holding only a SHARE UPDATE EXCLUSIVE lock
ALTER TABLE "table_1" VALIDATE CONSTRAINT "tbl1_fk_affected_table_id";
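Should validation fail, there are rows whose references no longer resolve; one way to hunt them down before retrying (using this post's example names):

-- rows in table_1 that point at ids missing from affected_table
SELECT t.id, t.affected_table_id
FROM table_1 t
LEFT JOIN affected_table a ON a.id = t.affected_table_id
WHERE t.affected_table_id IS NOT NULL
  AND a.id IS NULL;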
After disabling the feature switch and scaling our Celery workers back up, our site was back to normal.
We were fortunate to be able to substitute null values for the affected table and rely on our other data stores. Had this not been possible, we would have had no choice but to bring the entire site down while the vacuum ran. With the Christmas period upon us and Black Friday just around the corner, everyone was relieved that we could keep our uptime high!