Expanding hex colors to 48 bit

Hex codes for expressing colors have been a staple of the internet for 20+ years now. They’re a really darn efficient way of encoding colors, and a rare combination of:
- Highly-compressed: it’s difficult to pack more data into fewer bytes
- Human-readable: they can even be memorized well, arguably better than any other color storage format
But my one major gripe is that they reduce colors to 8 bits per channel (a.k.a. 24-bit; 8 × 3 channels), which was great in the 90s, standard for the 00s & 10s, but in the 20s is growing obsolete as we watch movies in rich 10-bit-per-channel color (and beyond) on our TVs, phones, and tablets. So why is the web (and our design software) lagging behind?
In this article, I’m going to try to convince you that Hex-12 (twelve-digit hex codes) is a good next step for the web.
A lesson on quick maths
In order to understand what hex codes are doing, we’ll need to spend a little bit (har) of time unpacking the math behind them.
The Rule of Least Power in computing aims to accomplish tasks using the fewest resources possible. So we need to figure out how many values we’re accounting for, and handle them as efficiently as possible. When hex colors were made into a web standard in the 90s, 2⁸ (256) values for each R, G, and B channel seemed a reasonable balance at the time, allowing some future growth without being wasteful of resources.
We needed each R, G, and B channel to hold 256 possible values. The obvious storage method is storing the literal numbers 0–255 in each channel, which is why a color can be expressed literally as in rgb(255, 192, 128). But less obvious is having the eureka! moment of realizing:

16² = 256 = 2⁸

In other words, hexadecimal does not naturally coincide with all powers of 2; in this specific 8-bit instance, we happen to get a nice, tidy 2-digit number per channel. It’s a remarkable convenience that was too tantalizing to pass up.
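To make that mapping concrete, here’s a minimal TypeScript sketch (the function names `channelToHex` and `rgbToHex` are my own, purely for illustration):

```ts
// Each 8-bit channel (0–255) maps to exactly two hex digits, since 16² = 256.
function channelToHex(value: number): string {
  return value.toString(16).padStart(2, "0");
}

// rgb(255, 192, 128) → "#ffc080"
function rgbToHex(r: number, g: number, b: number): string {
  return "#" + [r, g, b].map(channelToHex).join("");
}

console.log(rgbToHex(255, 192, 128)); // "#ffc080"
```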
Base-N encodings
So other than hexadecimal, what are our other options? Once you remember that going beyond the base-10 number system requires borrowing symbols, you start to think of more possibilities. When we want to reach sixteen, we start borrowing from letters, of which we only need six: A, B, C, D, E, and F. But what else could we do?
- The Base32 approach adds the 26-letter alphabet to the 10 digits to get 36 symbols, but throws away 4 letters, because 36 is “weird” (it’s not a power of 2) and 32 has far more uses.
- Naïvely, you could throw away W, X, Y, and Z. But we have an opportunity to do something neat here: we can instead throw away similar-looking characters, like I and L, which look like 1, and O, which looks like 0. And for one more… uh… V, just because. Now we get fewer mistakes when hand-writing or memorizing! (There’s a little sketch of this alphabet just after this list.)
- Also, because we didn’t specify uppercase or lowercase, we get the bonus of being case-insensitive, which is also great.
- To get higher, we can treat uppercase and lowercase as distinct and wind up at Base64. But because 52 + 10 = 62, we also need + and / to reach a power of 2.
- This gives us a ton of compression; however, we lose our user-friendliness, because now 0, O, and o are all distinct and significant. Memorizing and writing by hand become significantly more error-prone than with the simpler Base32 or hex.
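Here’s a small TypeScript sketch of that friendlier Base32 alphabet. The alphabet ordering and the function name are my own illustrative choices, not any standard:

```ts
// 10 digits + 22 letters (the alphabet minus I, L, O, and V) = 32 symbols.
const BASE32_ALPHABET = "0123456789ABCDEFGHJKMNPQRSTUWXYZ";

// Encode a non-negative integer as a fixed number of Base32 digits.
function toBase32(value: number, digits: number): string {
  let out = "";
  for (let i = 0; i < digits; i++) {
    out = BASE32_ALPHABET[value % 32] + out;
    value = Math.floor(value / 32);
  }
  return out;
}

console.log(toBase32(1023, 2)); // "ZZ" (the largest 10-bit channel value)
```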
To get higher… well, we can’t, really. Though theoretically we could define Base128 or higher forms, we’ve already exhausted standard numerals and letters, so where do you start borrowing symbols from? Unicode characters? Chinese? Emojis? It’s unlikely most people could remember a 6-character combination of emojis where the order matters (however, if someone wants to take a crack at encoding colors using Hànzi/Kanji, I eagerly await your blog post).
Also, because we’re looking for power-of-2 alignment, we can skip past Base29, Base41, and all other oddly-sized systems.
So for systems that are usable, we really only have Base32 (2⁵) and Base64 (2⁶).
Finding alignments
Back to our initial discovery from before: we can express 8-bit color with 2-digit hexadecimal because 2⁸ = 16². Each hex digit carries 4 bits (16 = 2⁴), so hex aligns whenever the bit depth is a multiple of 4. If we want higher bit depths, where can we find alignment among our other compression approaches?
| Color code | Values (per channel) | Fidelity (per channel) |
|---|---|---|
| Hex-6 | 16² = 256 | 8-bit (256) |
| Hex-9 | 16³ = 4,096 | 12-bit (4,096) |
| Hex-12 | 16⁴ = 65,536 | 16-bit (65,536) |
| Base32-6 | 32² = 1,024 | 10-bit (1,024) |
| Base32-9 | 32³ = 32,768 | 15-bit (32,768) |
| Base64-6 | 64² = 4,096 | 12-bit (4,096) |
| Base64-9 | 64³ = 262,144 | 18-bit (262,144) |
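If you want to double-check the table, the math is just logarithms; here’s a quick TypeScript sketch (the function name is mine):

```ts
// Each channel gets a third of the code's digits, so it holds
// base^(codeLength / 3) values, i.e. log2(base^(codeLength / 3)) bits.
function channelBits(base: number, codeLength: number): number {
  return Math.log2(base ** (codeLength / 3));
}

console.log(channelBits(16, 6));  // 8  (Hex-6)
console.log(channelBits(16, 12)); // 16 (Hex-12)
console.log(channelBits(32, 9));  // 15 (Base32-9)
console.log(channelBits(64, 9));  // 18 (Base64-9)
```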
Comparing results
Our ideal target is 16 bits per channel or above, i.e. double the current standard. This would give us plenty of room to support all modern higher-depth color devices, today and for years to come, without violating the Rule of Least Power. Plus, Adobe Photoshop has used 16-bit color for years, which is a good precedent.
Of the table above, some good contenders are:
- Base32-6, e.g. #x0f1z8, gets us to 10-bit, and it’s the same length as hex codes! But 10-bit is really not that much higher than 8-bit, so this may not last long.
- Hex-9 gets us to 12-bit. This is a pretty good leap, but can we go higher?
- Base32-9 gets us to 15-bit, which is our best so far!
- Hex-12 gets us to that magic 16-bit target. Its only downside is its length.
Base64-9 got us way more than we wanted at 18-bit, but before we land there and call it a day, let’s take a good hard look at the color code #O0O0O0O0O (a mix of O’s and zeroes) and be honest with ourselves: this is an ergonomic nightmare. No one wants to remember whether letters were capitalized or not. Even if it meets some of our criteria, it’s not user-friendly enough to consider seriously.
But how do Base32 and our old friend hexadecimal compare?
- Familiarity: I would argue this is a tie; folks who understand hexadecimal could easily wrap their heads around Base32, which is the same idea, just with a few more letters.
- Backwards-compatibility: Hexadecimal—we’re already using it.
- Length: Base32-9 is shorter than Hex-12.
- Fidelity: Hex-12 produces 16-bit color, as opposed to Base32-9’s 15-bit.
And the winner is…
For me, Hex-12 juuuuust tips the scales, because it’s the same thing we’ve been using, just expanded in length. Base32-9 does shave off a few digits, yes, but dropping from 16-bit to 15-bit and introducing a never-before-seen color code system would probably make it a harder sell. There’s no simpler upgrade path than “don’t change how anything works, just double it.”
Which, looking back and skipping over the train of thought that got us here, is like, “duh, when you double the length of hex codes, you double the bit depth.” It sounds simple when you put it that way. But sometimes dumb ideas are smart.
So to leave you with something to ruminate on, let’s look at how you’d translate some existing colors to Hex-12. It’s really not that bad:
| Color | Hex-6 | Hex-12 | 
|---|---|---|
| color(srgb 1 0 0) | #ff0000 | #ffff 0000 0000 | 
| color(srgb 0 1 1) | #00ffff | #0000 ffff ffff | 
| color(srgb 1 1 1) | #ffffff | #ffff ffff ffff | 
| color(srgb 0.73 0.85 0.33) | #bad954 | #bae1 d999 547b | 
| color(srgb 0.3 0.13 0.73) | #4d21ba | #4ccd 2148 bae1 | 
| color(srgb 0.64 0.4 0.22) | #a36638 | #a3d6 6666 3852 | 
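The conversion itself is a one-liner per channel: scale 0–1 up to 0–65,535, round, and print four hex digits (I’m assuming the spaces in the table above are just visual grouping). Here’s a minimal TypeScript sketch, with function names of my own invention; it also shows the lossless upgrade for existing Hex-6 colors, where multiplying an 8-bit value by 257 maps 0xff exactly to 0xffff:

```ts
// color(srgb 0.73 0.85 0.33) → "#bae1d999547b"
function srgbToHex12(r: number, g: number, b: number): string {
  return (
    "#" +
    [r, g, b]
      .map((c) => Math.round(c * 0xffff).toString(16).padStart(4, "0"))
      .join("")
  );
}

// Upgrade an 8-bit channel losslessly: 257 = 0x101, so 0xab becomes 0xabab.
function upgradeChannel(channel8: number): number {
  return channel8 * 257;
}

console.log(srgbToHex12(0.73, 0.85, 0.33)); // "#bae1d999547b"
console.log(upgradeChannel(0xff).toString(16)); // "ffff"
```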
What’s a few extra characters for perfect, high-fidelity color reproduction that will last us another 20 years? I say it’s worth it.