At Kogan.com we use Webpack to bundle our React app, which powers the frontend for our Django backend application. One way we ensure that our website loads fast and stays responsive for our customers is isomorphic rendering: the HTML of the page is pre-rendered, ready for React to hydrate when it loads in the browser. This is also crucial for SEO, ensuring that search engines can easily crawl the pages of our site. To allow this, we create two builds of our React app: one to be loaded in the browser and one for our Node service to server render.
As our codebase grows in size and complexity, the performance of our webpack builds - both development and production - has suffered. Throwing hardware at the problem is one option, but this isn’t so easy for our local developer environments. We suspected we had some room to improve in our build configuration that could give us some quick wins.
Due to the number and complexity of our builds, we already had a number of optimisations in place. These included:
- parallelising our six builds with parallel-webpack and happypack,
- picking faster source maps,
- using the Webpack DLL plugin, and
- disabling some plugins for server render.
Various ideas on potential bottlenecks and improvements had been thrown around, but without any measurements of the current build internals, they were hard to validate.
Profiling
We first attempted to profile the build using the ProfilingPlugin that comes built-in to webpack. This is as simple as adding a plugin to your webpack configuration, running your build and dropping the file produced into the Chrome Devtools Performance profiler. Unfortunately, it seems that our builds were a bit too large and complex and the Devtools profiler crashed whenever we tried to load the file produced by the plugin.
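If your build can cope with it, enabling the plugin is a small change. Below is a minimal sketch, assuming webpack 4's built-in plugin; the output path is just an example:

```js
const webpack = require('webpack');

module.exports = {
  // ...your existing configuration
  plugins: [
    // Writes a Chrome DevTools-compatible trace of the build
    new webpack.debug.ProfilingPlugin({
      outputPath: 'profile-events.json', // example path; defaults to events.json
    }),
  ],
};
```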
We also used the Webpack Bundle Analyzer to visualise the size of each chunk in our bundle, to make sure there were no issues in our code structure that were making the build output larger than it needed to be. However, this didn't surface much actionable information, as we review this output fairly regularly and already understand the existing pain points, which are mostly tied to legacy libraries that we are actively working to remove.
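For reference, wiring up the analyzer is a one-plugin change; a sketch using its default interactive report looks roughly like this:

```js
const { BundleAnalyzerPlugin } = require('webpack-bundle-analyzer');

module.exports = {
  // ...your existing configuration
  plugins: [
    // Opens an interactive treemap of chunk and module sizes after the build
    new BundleAnalyzerPlugin(),
  ],
};
```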
Finally, we used the Speed Measure Plugin (SMP), which proved the most useful for our needs. It provides timing information for each loader and plugin in use, so you can quickly see where the most time is being spent in your webpack builds and better target your optimisations. It is worth noting that the HappyPack plugin made the results harder to interpret, but it did still work.
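Usage is a thin wrapper around the configuration you already export; a minimal sketch (the base config path is just an example) looks roughly like this:

```js
const SpeedMeasurePlugin = require('speed-measure-webpack-plugin');

const smp = new SpeedMeasurePlugin();

// Wrap the existing config; SMP instruments each loader and plugin and
// prints per-step timings once the build finishes.
module.exports = smp.wrap(require('./webpack.config.base'));
```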
Below is an example of SMP’s output for a full build on my local machine:
Optimisations
Our goal was to improve build speeds for both the development and production builds, as both have impacts on our team efficiency (and frustration levels!). We made a number of changes in an effort to reduce our build times and have jotted down our results below so you can learn what worked and what didn’t.
Minifying code is expensive
As you can see from the screenshot above, the UglifyJsPlugin is taking a large percentage of our total build time. We already run this plugin with the compress: false, parallel: true and cache: true options, so we weren't able to find any improvements here this time, unfortunately.
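For context, a setup with those options might look roughly like the following sketch, shown here with webpack 4's uglifyjs-webpack-plugin; depending on your plugin version the options may take a slightly different shape:

```js
const UglifyJsPlugin = require('uglifyjs-webpack-plugin');

module.exports = {
  // ...your existing configuration
  optimization: {
    minimizer: [
      new UglifyJsPlugin({
        cache: true,      // re-use results from previous runs
        parallel: true,   // spread minification across CPU cores
        uglifyOptions: {
          compress: false, // skip the slow compress passes; mangling still runs
        },
      }),
    ],
  },
};
```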
Type check only once
We build server and client bundles in parallel and the builds are very similar. Aside from the entry points and some initialisation code for the client bundle, they are compiling the same React apps. We set ts-loader to run with transpileOnly: true for development builds of the server bundle so that type checking only runs on the client build, while keeping 99% of the TypeScript coverage. This saved us around 30 seconds when building the server bundle.
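A sketch of the relevant loader rule, assuming a hypothetical isDevServerBuild flag that is true only for the development build of the server bundle:

```js
// Hypothetical flag: true only for the development build of the server bundle.
const isDevServerBuild =
  process.env.NODE_ENV !== 'production' && process.env.BUILD_TARGET === 'server';

module.exports = {
  // ...your existing configuration
  module: {
    rules: [
      {
        test: /\.tsx?$/,
        loader: 'ts-loader',
        options: {
          // Skip type checking here; the client build still runs the full check.
          transpileOnly: isDevServerBuild,
        },
      },
    ],
  },
};
```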
Don’t extract CSS unless you need to
We swapped to use style-loader instead of the MiniCssExtractPlugin loader for the client bundle, as we don't need a CSS file to be generated like we do for the server builds. Avoiding the need to extract CSS into a single file saved us about 3s (2%) on our build time.
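The swap is essentially a one-line change in the style rule; a sketch, assuming a hypothetical isServerBuild flag:

```js
const MiniCssExtractPlugin = require('mini-css-extract-plugin');

// Hypothetical flag distinguishing the server-render build from the client build.
const isServerBuild = process.env.BUILD_TARGET === 'server';

module.exports = {
  // ...your existing configuration
  module: {
    rules: [
      {
        test: /\.scss$/,
        use: [
          // Only the server build needs CSS extracted to a file; the client
          // bundle can inject <style> tags at runtime instead.
          isServerBuild ? MiniCssExtractPlugin.loader : 'style-loader',
          'css-loader',
          'sass-loader',
        ],
      },
    ],
  },
  plugins: isServerBuild ? [new MiniCssExtractPlugin()] : [],
};
```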
Node Sass to Dart Sass
Changing from node-sass to the dart-sass package didn't end up providing any performance improvements to our build time, but it did reduce our yarn install time. This is because Dart Sass is compiled down to JavaScript, so it doesn't need a native build for the environment it is installed on, unlike node-sass.
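If you want to make the switch explicit rather than relying on sass-loader's default resolution, its implementation option lets you point at Dart Sass. A sketch, assuming a reasonably recent sass-loader and the Dart Sass package published as sass:

```js
module.exports = {
  // ...your existing configuration
  module: {
    rules: [
      {
        test: /\.scss$/,
        use: [
          'style-loader',
          'css-loader',
          {
            loader: 'sass-loader',
            options: {
              // Use Dart Sass instead of node-sass
              implementation: require('sass'),
            },
          },
        ],
      },
    ],
  },
};
```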
Avoid generating expensive hashes
For development builds, we tried to remove any locations that were generating content hashes either for uglification or cache busting, as this was not important in a development environment.
We changed the localIdentName in the css-loader options to remove the hash output, as well as removing the HashedModuleIdsPlugin altogether.
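A sketch of the css-loader change, assuming a hypothetical isProduction flag; note that newer css-loader versions nest localIdentName under a modules object rather than at the top level:

```js
// Hypothetical flag: true for production builds only.
const isProduction = process.env.NODE_ENV === 'production';

module.exports = {
  // ...your existing configuration
  module: {
    rules: [
      {
        test: /\.scss$/,
        use: [
          'style-loader',
          {
            loader: 'css-loader',
            options: {
              modules: true,
              // Dev builds get readable, hash-free class names; production
              // keeps the hash for shorter, cache-friendly names.
              localIdentName: isProduction
                ? '[hash:base64:8]'
                : '[path][name]__[local]',
            },
          },
          'sass-loader',
        ],
      },
    ],
  },
};
```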
Scale back threading
We use parallel-webpack to produce multiple webpack builds at the same time and take advantage of all available cores. We were also using HappyPack, which enables webpack loaders to use multiple threads as well. However, running both at the same time ended up being slower overall: all the cores were already in use, so we were paying the price of HappyPack's overhead with no benefit. We ended up removing HappyPack altogether; it would likely still be beneficial if you were only running a single webpack build.
Cache everything you can
The HardSourceWebpackPlugin was the biggest contributor to our final performance improvements. This plugin caches the output of each module to the file system so that subsequent webpack builds can re-use the previous run’s work and massively cut down the build time.
NB: the HardSourceWebpackPlugin is currently incompatible with the SpeedMeasureWebpackPlugin, so make sure to remove one before adding the other!
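Enabling it is a single plugin with sensible defaults; a minimal sketch:

```js
const HardSourceWebpackPlugin = require('hard-source-webpack-plugin');

module.exports = {
  // ...your existing configuration
  plugins: [
    // Writes each module's build output to a disk cache (under
    // node_modules/.cache by default) so warm builds can skip most work.
    new HardSourceWebpackPlugin(),
  ],
};
```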
Final Results
After combining each of the changes above, we saw a slight improvement to our first build, with a roughly 5% decrease in duration for builds with no cache. However, given we use a warm cache for production builds, the hard source plugin cut our build times from 152 seconds down to 52! This was a huge win, and it took only minor changes to our Docker build configuration to ensure the HardSourceWebpackPlugin cache persisted between our production Docker builds so that it was almost always warm.
The plugin has its risks as you need to ensure that all external factors of your build are taken into account when it is determining if the cache needs to be updated, but it worked out of the box for our fairly complex setup and we haven’t run into any issues after using it in production for over a month.
If you also love finding performance wins and efficiency gains in your tooling and code, check out our careers page to learn more about our team at Kogan.com.