I was looking through some of my CSS experiments I did a while ago, and I found an experiment I did with binary on CodePen.

It was a fun little project and thoroughly nerdy, but it probably requires a little bit of explanation. First of all, what *can* a binary representation of time look like?

See the Pen Pure Binary Clock Ring by Gary (@garypaul) on CodePen.

## Behold, the binary clock?

In a nutshell, a binary clock is a representation of the time using only two states: ON and OFF. We commonly use 1 and 0 to represent these two states, but many binary clocks use LEDs. In case you didn’t realize, most clocks use decimal digits to tell you the time. There are lots of tutorials on how to read binary clocks, and I wrote a whole article about teaching binary to kids if you want to read more on that subject.

If you’re sufficiently nerdy, you have probably come across a binary clock on a site like thinkgeek.com or amazon.com. Perhaps you even ordered one as a way to prove to your co-workers that you happen to know and understand the esoteric language of computers. It’s a great party trick to see a bunch of dots in a pattern and be able to tell someone what time it is.

Some will be *impressed* and ask you to show them the trick to reading it. Others will look at you with one eyebrow cocked and one eyeball on the smartphone that has been permanently bonded to their hand, wondering why it takes so much effort to find the time. Try to avoid the *other* type of person. No amount of explaining why this is so cool will ever have any effect on them. There is also a third group of people. These people are so nerdy that they will take one look at your binary clock and, privately (if they’re kind), say: “You know that’s not *really* a pure binary representation of a day, right?”

## Is it pure binary or impure binary?

The problem with most of the *so-called* binary clocks is that they’re essentially just converting hours, minutes, and seconds that were originally read in decimal. Strictly speaking, the numbers are binary representations of that time, but in reality, we’re not actually using binary. To be precise, clocks are not “really” using decimal either. They’re using a hybrid of duodecimal (or dozenal) hours, combined with sexagesimal (base 60) minutes and seconds, all displayed using the decimal number system. A true decimal clock might have 10 hours in a day (or 20, if you continue to divide night and day as we do now), 100 minutes in an hour, and 100 seconds in a minute. So what would a pure binary representation of time really mean?

## What *is* a clock’s primary function?

A clock’s most important job is to measure the passage of time within a single day. That may seem obvious, but keeping this in mind helps guide how we measure time. We don’t need to worry about weeks or months or anything else. This makes sense. Few people care that there are 168 hours in a week, and I don’t think anyone has requested a clock that divides a year or a month into anything other than days. That’s what we use calendars for.

### Divide and conquer

Since there are 10 different symbols in decimal, it makes the most sense to divide things into 10. That way you have one symbol for each part. Although we don’t divide our current clock into ten parts, many have tried to implement decimal time. Binary has only two states. True and False, 0 and 1, ON and OFF, YES and NO… so we start by dividing the day in two. We’ll use numbers for simplicity. 0 will represent before midday, 1 will represent after midday. We could choose any time to start and stop our new binary day… but we’ll stick with convention.

Since a clock that tells you if it’s before or after midday isn’t that useful, we’ll add another position to divide those halves again. Now we have a day divided into quarters. 00 would be the first quarter and 01 would be the second quarter. 10 would be the third quarter, and 11 would be the fourth quarter. To break that down: the first position in a binary representation of time divides the day into two parts, and the second position divides each half-day into two parts.
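The halving scheme is easy to sketch in code. Here is an illustrative Python snippet (the original pen was pure CSS, so `binary_time` is a made-up helper, not anything from the pen) that turns the elapsed fraction of a day into its first few binary positions:

```python
# Illustrative sketch: each binary position halves the remaining span of the day.
# `fraction` is the elapsed portion of the day, between 0.0 and 1.0.
def binary_time(fraction, positions=2):
    digits = ""
    for _ in range(positions):
        fraction *= 2          # zoom into the current half of the span
        digit = int(fraction)  # 0 = first half, 1 = second half
        digits += str(digit)
        fraction -= digit      # keep only the remainder for the next position
    return digits

# 6 p.m. is three quarters of the way through the day:
print(binary_time(0.75))  # "11" -> the fourth quarter
```

Adding more positions just extends the loop: each extra digit narrows the time down by another factor of two.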

From here, we just keep dividing by two. Eventually we reach a point where it doesn’t make any sense to divide further. For me, that point was (conveniently) 16 positions. So what is the smallest (practical) unit in a binary clock? Dividing the day in two, 16 times, gives 2^16 = 65,536 slices, each equal to about 1.32 of our current seconds. To figure this out, you just need to take the number of seconds in a day, which is 86,400, and divide by 2^16. Or you can go through the exercise of halving the day 16 times. Same result.
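The arithmetic above is a single division, easy to verify in Python:

```python
# A day has 24 * 60 * 60 = 86,400 of our current seconds.
SECONDS_PER_DAY = 24 * 60 * 60

# Halving the day 16 times yields 2**16 = 65,536 ticks.
tick = SECONDS_PER_DAY / 2**16

print(round(tick, 2))  # 1.32 seconds per tick
```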

Of course, converting the divisions to decimal really defeats the point of representing time in binary, but it’s nice to know that each tick of this clock lasts *ever so slightly* longer than a second does now. Not that it will give you any more time than you have now, mind you.

## What’s the point?

There isn’t any point. This is more of a mental exercise to challenge your perceptions and preconceived ideas of what a day looks like. It wouldn’t be practical to convert our time to something like this… however, using binary time as a base could be interesting in one other way. Under the hood, if time were divided neatly into twos, it would be relatively simple to display time in a much more human-readable format: hexadecimal, or base 16. I’ll let you try to figure out what time I finished this article: 8:C:A:F.

Neat!