Bazel is incompatible with JavaScript
Bazel is an open source fork of Google's internal tooling, Blaze, that I've had the misfortune of fighting with for the past year. It promises a mouthwatering smorgasbord of:
- Incremental builds
- Build caching
- Remote build caching
- Support for any programming language, any codebase, and any possible deploy target
The reviews are great. It has a passionate fanbase. It's well-maintained and supported. There's just one itty-bitty problem that gets overlooked by the people opting for it: it's useless for JavaScript. In this blog post we'll dig into why Bazel shouldn't touch ANY JavaScript, TypeScript, or Node.js at all.
TL;DR
"I don't use Bazel and don't care. I found this Googling and just want to know what I should use." Use Turborepo + pnpm. Turborepo is designed from the ground up for how JS works, and does a lovely job of delivering on all of Bazel's promises while being tailored to the JS ecosystem. Though I won't mention Turborepo again in this blog post, just know that for every criticism levied against Bazel here, I can't say the same for Turborepo.
"What about Nx?" I've had a lot more papercuts with Nx because it tries to follow Bazel's philosophy more closely, but compromises on the parts that don't fit into JS. So while you don't get the full-on headaches, you get a compromised version of them.
Inputs and outputs
Let's back up a bit and lay some groundwork. The underlying principle of skipping work is determinism. Or, more specifically, the idea that if the inputs don't change, the outputs won't either, so there's no reason to rebuild (assuming it's a pure system).
Bazel (and any other build system, for that matter) operates on this principle of "identify the inputs, and you can determine whether or not a rebuild is necessary." Everyone is on board at this point. But where Bazel oversteps is that it also needs to know the outputs ahead of time, which doesn't work for JS.
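To make the input side concrete, here's a minimal sketch of the idea (a toy illustration, not Bazel's actual mechanism; the input list and stamp file are hypothetical): hash the declared inputs, and if the hash matches the last run, skip the work.

```ts
// cache-check.ts: a toy version of "if the inputs didn't change, skip the work."
// The input list and stamp-file location below are hypothetical.
import { createHash } from "node:crypto";
import { existsSync, readFileSync, writeFileSync } from "node:fs";

const inputs = ["src/index.ts", "package.json"]; // declared inputs
const stampFile = ".build-stamp";                // where the last run's hash is stored

const hash = createHash("sha256");
for (const file of inputs) hash.update(readFileSync(file));
const key = hash.digest("hex");

if (existsSync(stampFile) && readFileSync(stampFile, "utf8") === key) {
  console.log("inputs unchanged; skipping build");
} else {
  console.log("inputs changed; rebuilding…");
  // ...run the real build here...
  writeFileSync(stampFile, key);
}
```

Notice the sketch only needs to know the inputs. Bazel insists on knowing the outputs up front as well, and that's exactly where JS falls over.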
Problem 1: Assistant to the Micromanager
Much of the existing Bazel + JS tooling has focused on simple tsc generation: for every .ts file, build one .js file. While that is one use case, friend, if you think that's all JS build tools do, I've got really bad news.
There are two common patterns in JS that are unsolved problems in Bazel: npm packages and bundled sites.
An npm package needs to build .mjs, .js, and .cjs files, corresponding .d.mts and .d.cts declarations, as well as *.map files for all of the above. All these outputs likely won't be 1:1 with the source .ts files. In a distributed package, the inputs alone do not determine the outputs; they are only half of the equation. You may bundle your CJS build into one file, or your ESM, or both. You may even choose to have a "lite" version of your package that leaves out heavy modules. In all cases the output is a product of the inputs plus the build system. Bazel would want me to give it a full list of every file in my package ahead of time, which means I'd have to manually write down a list of possibly hundreds of files before any build will work.
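For a concrete sense of how that fan-out happens, here's a sketch of a typical dual-format package build using tsup (one tool among many; the entry point is hypothetical). One source graph produces ESM and CJS bundles, declaration files, and source maps, and the exact output names are decided by the tool's settings rather than by the source tree:

```ts
// tsup.config.ts: a typical dual-format npm package build (hypothetical entry).
import { defineConfig } from "tsup";

export default defineConfig({
  entry: ["src/index.ts"], // hypothetical entry point
  format: ["esm", "cjs"],  // two bundles from one source graph
  dts: true,               // emit type declarations alongside the JS
  sourcemap: true,         // *.map files for everything above
  clean: true,             // wipe the output directory between builds
});
```

Swap in a "lite" entry point or bundle only one of the formats and the output list changes again, with zero changes to the source files.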
For the bundled site, consider an Astro site (like this one!). It can handle any filetype the web can (because we're building a website). But what's more, we have things like PageFind, which, based on the contents of other files, generates 50+ .pf_meta/.pf_index/.pf_fragment files with random hashes in their names, indexing the site for search. Bazel would want me to tell it ahead of time what all those random files will be named, files I have no control over, yet which are critical to the functioning of my site.
No.
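If you want to see just how unknowable those names are, a throwaway script like this one makes the point (the dist/pagefind path is an assumption about where the index lands); the filenames simply don't exist until after the build has run:

```ts
// list-pagefind.ts: print the search-index files a build just generated.
// The output directory below is an assumption about where the index lands.
import { readdirSync } from "node:fs";
import { join } from "node:path";

const indexDir = "dist/pagefind";

// Recursively collect every file under the index directory.
function walk(dir: string): string[] {
  return readdirSync(dir, { withFileTypes: true }).flatMap((entry) =>
    entry.isDirectory() ? walk(join(dir, entry.name)) : [join(dir, entry.name)]
  );
}

const generated = walk(indexDir).filter((f) => /\.pf_(meta|index|fragment)$/.test(f));
console.log(`${generated.length} hashed index files in this build:`);
for (const f of generated) console.log("  " + f);
```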
Problem 2: What we have is a failure to communicate
If typing out lists of filenames doesn't sound that bad to you, I've got even worse news: you can't run npm executables. That's right. All your favorite npm CLIs? Poof. Gone.
Let's say you have an existing Rollup build you want to Bazelify. You describe all your inputs and outputs, but… how do you run rollup -c? You can't!
After hours of research you'll find:
- rules_js kinda has a thing where you can point to a bin inside an npm package. But it blows up for many packages.
- You find a js_binary macro, but that's for running JS scripts you own, not npm CLIs (though you can spawn child processes to run npm CLIs, at which point you might as well not use Bazel; see the sketch after this list).
- You find rules_rollup, thinking you're saved, except it only has a placeholder test that's just a flimsy proof of concept.
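That "spawn a child process" escape hatch from the second bullet ends up looking roughly like this: a script you hand to js_binary that simply shells out to the Rollup CLI (the config path is hypothetical, and whether npx is even visible inside the sandbox is its own fight). Bazel sees one opaque action and learns nothing about what Rollup actually reads or writes:

```ts
// run-rollup.ts: a wrapper you might hand to js_binary that just shells out
// to the Rollup CLI. Bazel caches this as a single black-box action.
import { execFileSync } from "node:child_process";

const configPath = "rollup.config.mjs"; // hypothetical config location

try {
  // npx resolves the rollup bin from node_modules/.bin on our behalf.
  execFileSync("npx", ["rollup", "-c", configPath], { stdio: "inherit" });
} catch (err) {
  console.error("rollup failed:", err);
  process.exit(1);
}
```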
You slowly come to the realization that you're going to have to write an entire Starlark wrapper yourself, which takes a week or two. You run into dead end after dead end. You start to fear that the reason you can't find anyone who has paved this path before you is because it's a bad idea. In the end, you get it working, but out of exhaustion you've skipped recreating all the features you didn't use.
Now onto your 2nd npm executable. Oh God. Weeks have gone by, just trying to get a thing running that normally takes 15 minutes. You claw your way back to normalcy, with it barely running. Then you change your Rollup config and your rules_gen breaks. You break.
Problem 3: Dante's 9 circles of hermeticity
Even assuming you're able to get a JS project running exactly as it did before, the final boss is hermeticity: Bazel doesn't do anything in your local node_modules (of course not! That would be silly). It copies everything into a secret location on disk where it does its work.
"What's that error?" you say. You forgot to copy your node_modules into the secret location. "It's still blowing up." No, all of your node_modules. List them out. Every single one. Even the sub-dependencies. "OK, it ran… where are my files?" Still in the super-secret location we can't tell you about. What are their names? What are their contents? Who knows!
"Just tell me how to get my files back." You have to create a build target selecting those files. "Then what? They'll appear locally?" No, silly! A target only gives Bazel a list of files; you have to import and run another copy_to_directory macro to write those files back to your local workspace. "OK, that worked, but the files are in the wrong locations?" Oh, right, right: it didn't actually build them the way Node.js would have, so you'll need to rename and remap all the files back to their correct locations (and maybe fix some broken JS imports too).
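That "write it back and fix the layout" dance usually becomes yet another script. Here's a sketch of the sort of thing you end up writing, where every path and rename rule is hypothetical (yours will differ, which is rather the point):

```ts
// unbazel.ts: copy outputs from Bazel's output tree back into the workspace
// and undo the extra path layout it imposed. All paths here are hypothetical.
import { cpSync, existsSync, mkdirSync, readdirSync, renameSync } from "node:fs";
import { join } from "node:path";

const bazelOut = "bazel-bin/packages/my-lib/dist"; // hypothetical Bazel output
const workspaceOut = "packages/my-lib/dist";       // where Node.js tooling expects it

mkdirSync(workspaceOut, { recursive: true });
cpSync(bazelOut, workspaceOut, { recursive: true });

// Example remap: the built files landed under an extra "my-lib/" prefix,
// so hoist them up to where imports expect them to live.
const nested = join(workspaceOut, "my-lib");
if (existsSync(nested)) {
  for (const file of readdirSync(nested)) {
    renameSync(join(nested, file), join(workspaceOut, file));
  }
}
```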
"OK, all that is done; now I want to run a follow-up command that generates my next set of files to the same location." Nope! Bzzzzzt. Not possible at this time. You may only run one build task per output directory. Please try again later.
Problem 4: the times they are a-changin'
Some of the aforementioned problems would be workable if investments in Bazel paid dividends down the line. But the biggest mistake I see Bazel users making is underestimating how quickly the JS ecosystem evolves.
Smart technical decisions require less investment over time. Bazel requires increasing amounts of time and energy to maintain your infrastructure of Starlark wrappers and shim code teaching Bazel about JS. Imagine having to redo this every month. Every time a major version of an npm package is released. Every time an OSS project falls out of maintenance and a new one takes its place. Every time a quantum leap forward happens in research. Every generation of JS tooling requires more time plugging into Bazel because it's evolutionarily more complex. But you'll never benefit from all these new tools because you're (still) trying to catch up to the previous generation.
But the real pointlessness is the fact that the JS ecosystem gets orders of magnitude faster every generation. All that work you spent building a wrapper for the slow JS tool that sped it up by 25% would have been better spent upgrading to its successor that's 1,000% faster. I'm not even talking hypothetically: webpack is being replaced by Vite, esbuild, and soon Turbopack; Rollup by Rolldown; ESLint by Biome; PostCSS by Lightning CSS. If you're using Bazel to speed up CI, then compare apples to apples: focus on speeding up CI. And usually the fast track there is leaning into the hundreds of thousands of smart JS devs who would absolutely love to help you with that.
Summary

At the end of the day, a single build system just won't work for all languages, and that's OK. Bazel was designed for a different use case than JS. And it's natural that Bazel tried to solve a problem for everyone, especially with the growing popularity and maturation of JS. And I applaud all the lovely contributors and maintainers who are making an effort at making Bazel better! But that doesn't mean I'd touch Bazel with a 10-foot pole for handling JS right now.
But rather than end on a sour note and be a downer, I can sum up what would make Bazel compatible with JS with a more concrete answer than "make it like Turborepo" (OK, I lied; I did end up mentioning it again). Only 3 changes would be required:
- Bazel runs npm executables with zero config. The most important tools in the ecosystem (build systems, linters, test runners) run via CLI and npm bins. Having to rewrite every CLI we want to use in Starlark is insane; we should just be able to use the damn thing.
- Bazel treats the local node_modules as the canonical source. All Node.js tools rely on the local node_modules directory to share caches and talk to other packages. This is what powers Electron apps and extension ecosystems like VS Code's. Don't overlook all the optimizations npm packages rely on: they target the local node_modules, not a hermetic layer (a quick illustration follows this list).
- No rules_ts. Evidence of Bazel's incompatibility has been right under your nose the whole time. If Bazel Just Worked™ with the npm ecosystem, rules_ts goes away. After all, TypeScript is a package just like any other. The fact that Bazel requires a heavyweight wrapper for every tool you use is not a sustainable development model. Bazel will always be playing catch-up to the JS ecosystem. But if Bazel truly integrated with JS, rules_js would have everything you needed.
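To illustrate that second point, here's a tiny sketch of the kind of local state tools expect to find by walking node_modules; the package being resolved is arbitrary, and Vite's pre-bundle cache in node_modules/.vite is just one well-known example of the convention:

```ts
// local-state.ts: Node.js tools find each other and stash shared caches
// through the local node_modules tree; relocate it and those assumptions break.
import { existsSync } from "node:fs";
import { createRequire } from "node:module";
import { join } from "node:path";

const require = createRequire(import.meta.url);

// Packages locate their peers via Node's resolver, which walks up node_modules.
console.log("typescript resolved from:", require.resolve("typescript"));

// Many tools also keep caches *inside* node_modules, e.g. Vite's dependency
// pre-bundle cache.
const viteCache = join(process.cwd(), "node_modules", ".vite");
console.log(".vite cache present:", existsSync(viteCache));
```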
Will all these changes happen? I don't know. And I'm not making 'em. But until these changes are made, I'll personally be using anything but Bazel for JS. And you should, too.