Zip - How not to design a file format.


The Zip file format is now 32 years old. You'd think that, at 32 years old, the format would be well documented. Unfortunately, it's not.

I have a feeling this is like many file formats. They aren't designed; rather, the developer just makes it up as they go. If it gets popular, other people want to read and/or write them, so they either try to reverse engineer the format or ask for specs. Even if the developer writes specs, they often forget all the assumptions their original program makes. Those are never written down, and hence the spec is incomplete. Zip is such a format.

Zip claims its format is documented in a file called APPNOTE.TXT which can be found here.

The short version is: a zip file consists of records, and each record starts with a 4-byte marker that generally takes the form

0x50, 0x4B, ??, ??

Where 0x50, 0x4B are the letters PK, standing for "Phil Katz", the person who made the zip format. The two ?? are bytes that identify the type of the record. Examples:

0x50 0x4b 0x03 0x04   // a local file record
0x50 0x4b 0x01 0x02   // a central directory file record
0x50 0x4b 0x05 0x06   // an end of central directory record

Records do NOT follow any standard pattern; to read or even skip a record you must know its format. By contrast, many other formats follow a convention where each record id is followed by the length of the record. So if you see an id you don't understand, you just read the length, skip that many bytes (*), and you'll be at the next id. Examples of this type include most video container formats, jpgs, tiffs, photoshop files, wav files, and many others.
(*) some formats require rounding the length up to the nearest multiple of 4 or 16.
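For those length-prefixed formats, the skip loop is trivial; a minimal sketch with made-up record ids (not any real container format):

```javascript
// Sketch: skipping unknown records in a hypothetical id+length format.
// Each record is: 4-byte id, 4-byte little-endian length, then `length` bytes.
// This is NOT how zip works -- zip records have no common length field.
function listRecordIds(bytes) {
  const view = new DataView(bytes.buffer, bytes.byteOffset, bytes.byteLength);
  const ids = [];
  let offset = 0;
  while (offset + 8 <= bytes.byteLength) {
    const id = view.getUint32(offset, true);         // record id
    const length = view.getUint32(offset + 4, true); // payload length
    ids.push(id);
    offset += 8 + length;  // even unknown records can be skipped
  }
  return ids;
}
```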

Zip does NOT do this. If you see an id and you don't know how that type of record's content is structured there is no way to know how many bytes to skip.

APPNOTE.TXT says the following things

4.1.9 ZIP files MAY be streamed, split into segments (on fixed or on removable media) or "self-extracting". Self-extracting ZIP files MUST include extraction code for a target platform within the ZIP file.

4.3.1 A ZIP file MUST contain an "end of central directory record". A ZIP file containing only an "end of central directory record" is considered an empty ZIP file. Files MAY be added or replaced within a ZIP file, or deleted. A ZIP file MUST have only one "end of central directory record". Other records defined in this specification MAY be used as needed to support storage requirements for individual ZIP files.

4.3.2 Each file placed into a ZIP file MUST be preceded by a "local file header" record for that file. Each "local file header" MUST be accompanied by a corresponding "central directory header" record within the central directory section of the ZIP file.

4.3.3 Files MAY be stored in arbitrary order within a ZIP file. A ZIP file MAY span multiple volumes or it MAY be split into user-defined segment sizes. All values MUST be stored in little-endian byte order unless otherwise specified in this document for a specific data element.

4.3.6 Overall .ZIP file format:

      [local file header 1]
      [encryption header 1]
      [file data 1]
      [data descriptor 1]
      [local file header n]
      [encryption header n]
      [file data n]
      [data descriptor n]
      [archive decryption header] 
      [archive extra data record] 
      [central directory header 1]
      [central directory header n]
      [zip64 end of central directory record]
      [zip64 end of central directory locator] 
      [end of central directory record]

4.3.7 Local file header:

      local file header signature     4 bytes  (0x04034b50)
      version needed to extract       2 bytes
      general purpose bit flag        2 bytes
      compression method              2 bytes
      last mod file time              2 bytes
      last mod file date              2 bytes
      crc-32                          4 bytes
      compressed size                 4 bytes
      uncompressed size               4 bytes
      file name length                2 bytes
      extra field length              2 bytes

      file name (variable size)
      extra field (variable size)
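The fixed 30-byte portion of that header can be read directly with a DataView; a sketch (the result field names are mine, offsets from 4.3.7):

```javascript
// Sketch: parse the fixed portion of a zip local file header (4.3.7).
// All multi-byte values are little-endian per section 4.3.3.
const LOCAL_FILE_SIG = 0x04034b50;

function parseLocalFileHeader(bytes, offset = 0) {
  const view = new DataView(bytes.buffer, bytes.byteOffset, bytes.byteLength);
  if (view.getUint32(offset, true) !== LOCAL_FILE_SIG) {
    throw new Error('not a local file header');
  }
  const fileNameLength = view.getUint16(offset + 26, true);
  const extraFieldLength = view.getUint16(offset + 28, true);
  return {
    versionNeeded:     view.getUint16(offset + 4, true),
    flags:             view.getUint16(offset + 6, true),
    compressionMethod: view.getUint16(offset + 8, true),
    lastModTime:       view.getUint16(offset + 10, true),
    lastModDate:       view.getUint16(offset + 12, true),
    crc32:             view.getUint32(offset + 14, true),
    compressedSize:    view.getUint32(offset + 18, true),
    uncompressedSize:  view.getUint32(offset + 22, true),
    fileName: new TextDecoder().decode(
        bytes.subarray(offset + 30, offset + 30 + fileNameLength)),
    // file data begins after the variable-length name and extra field
    dataOffset: offset + 30 + fileNameLength + extraFieldLength,
  };
}
```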

4.3.8 File data

Immediately following the local header for a file SHOULD be placed the compressed or stored data for the file. If the file is encrypted, the encryption header for the file SHOULD be placed after the local header and before the file data. The series of [local file header][encryption header] [file data][data descriptor] repeats for each file in the .ZIP archive.

Zero-byte files, directories, and other file types that contain no content MUST NOT include file data.

4.3.12 Central directory structure:

      [central directory header 1]
      [central directory header n]
      [digital signature] 

File header:

        central file header signature   4 bytes  (0x02014b50)
        version made by                 2 bytes
        version needed to extract       2 bytes
        general purpose bit flag        2 bytes
        compression method              2 bytes
        last mod file time              2 bytes
        last mod file date              2 bytes
        crc-32                          4 bytes
        compressed size                 4 bytes
        uncompressed size               4 bytes
        file name length                2 bytes
        extra field length              2 bytes
        file comment length             2 bytes
        disk number start               2 bytes
        internal file attributes        2 bytes
        external file attributes        4 bytes
        relative offset of local header 4 bytes

        file name (variable size)
        extra field (variable size)
        file comment (variable size)

4.3.16 End of central directory record:

      end of central dir signature    4 bytes  (0x06054b50)
      number of this disk             2 bytes
      number of the disk with the
      start of the central directory  2 bytes
      total number of entries in the
      central directory on this disk  2 bytes
      total number of entries in
      the central directory           2 bytes
      size of the central directory   4 bytes
      offset of start of central
      directory with respect to
      the starting disk number        4 bytes
      .ZIP file comment length        2 bytes
      .ZIP file comment       (variable size)
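Reading from the back means hunting for the 0x50 0x4B 0x05 0x06 signature while allowing for the variable-length comment; a naive sketch (it ignores the ambiguities discussed later in this post):

```javascript
// Sketch: locate the end of central directory record (4.3.16) by scanning
// backward from the end of the file, allowing for the trailing comment.
// The record is 22 bytes plus a comment of up to 65535 bytes.
const EOCD_SIG = 0x06054b50;

function findEndOfCentralDirectory(bytes) {
  const view = new DataView(bytes.buffer, bytes.byteOffset, bytes.byteLength);
  // earliest position the record could start at
  const lowest = Math.max(0, bytes.byteLength - 22 - 0xffff);
  for (let offset = bytes.byteLength - 22; offset >= lowest; --offset) {
    if (view.getUint32(offset, true) === EOCD_SIG) {
      return {
        offset,
        entryCount: view.getUint16(offset + 10, true),
        centralDirSize: view.getUint32(offset + 12, true),
        centralDirOffset: view.getUint32(offset + 16, true),
      };
    }
  }
  return undefined;  // not a zip file (or the record is damaged)
}
```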

There are other details involving encryption, larger files, and optional data, but for the purposes of this post this is all we need. We need one more piece of info: how to make a self extracting archive.

To do so we could look back at ZIP2EXE.exe, which shipped with pkzip in 1989, and see what it does, but it's easier to look at Info-Zip to see what happens.

How do I make a DOS (or other non-native) self-extracting archive under Unix?

The procedure is basically described in the UnZipSFX man page. First grab the appropriate UnZip binary distribution for your target platform (DOS, Windows, OS/2, etc.), as described above; we'll assume DOS in the following example. Then extract the UnZipSFX stub from the distribution and prepend as if it were a native Unix stub:

> unzip unz552x3.exe unzipsfx.exe                // extract the DOS SFX stub
> cat unzipsfx.exe yourzip.zip > yourDOSzip.exe   // create the SFX archive
> zip -A yourDOSzip.exe                          // fix up internal offsets

That's it. You can still test, update and delete entries from the archive; it's a fully functional zipfile.

So given all of that let's go over some problems.

How do you read a zip file?

This is undefined by the spec.

There are 2 obvious ways.

  1. Scan from the front, when you see an id for a record do the appropriate thing.

  2. Scan from the back, find the end-of-central-directory-record and then use it to read through the central directory, only looking at things the central directory references.

Scanning from the back is how the original pkunzip works. For one it means if you ask for some subset of files it can jump directly to the data you need instead of having to scan the entire zip file. This was especially important if the zip file spanned multiple floppy disks.

But, 4.1.9 says you can stream zip files. How is that possible? What if there is some local file record that is not referenced by the central directory? Is that valid? This is undefined.

4.3.1 states

Files MAY be added or replaced within a ZIP file, or deleted.

Okay? That suggests the central directory might not reference all the files in the zip file, because otherwise this statement about files being added, replaced, or deleted would have no reason to be in the spec.

If I have a zip file that contains files A, B, and C, and I generate a new zip file that contains only files A and B, those are just 2 independent zip files. It makes zero sense to put in the spec that you can add, replace, and delete files unless that knowledge somehow affects the format of a zip file.

In other words. If you have

  [local file A]
  [local file B]
  [local file C]
  [central directory file A]
  [central directory file C]
  [end of central directory]

Then clearly B is deleted as the central directory doesn't reference it. On the other hand, if there's no [local file B] then you just have an independent zip file, independent of some other zip file that has B in it. No need for the spec to even mention that situation.

Similarly if you had

  [local file A (old)]
  [local file B]
  [local file C]
  [local file A (new)]
  [central directory file A(new)]
  [central directory file B]
  [central directory file C]
  [end of central directory]

Then A (old) has been replaced by A (new) according to the central directory. If on the other hand there is no [local file A (old)] you just have an independent zip file.
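To make the two reading strategies concrete, here's a toy model of the replace example above. This is plain objects, not real zip parsing; the point is only that the two strategies disagree about what the file contains:

```javascript
// Toy model of the "replaced file" layout above -- NOT real zip parsing.
// Each local record is {name, data}; the central directory lists which
// locals are live, by index.
const locals = [
  { name: 'A', data: 'old A' },  // [local file A (old)]
  { name: 'B', data: 'B' },
  { name: 'C', data: 'C' },
  { name: 'A', data: 'new A' },  // [local file A (new)]
];
const centralDirectory = [3, 1, 2]; // references A (new), B, C

// A forward scanner sees every local record, including the stale one.
function forwardScan(locals) {
  return locals.map((rec) => rec.data);
}

// A central-directory reader sees only what the directory references.
function centralDirScan(locals, centralDirectory) {
  return centralDirectory.map((i) => locals[i].data);
}
```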

You might think this is nonsense but you have to remember, pkzip comes from the era of floppy disks. Reading an entire zip file's contents and writing out a brand new zip file could be an extremely slow process. In both cases, the ability to delete a file just by updating the central directory, or to add a file by reading the existing central directory, appending the new data, then writing a new central directory, is a desirable feature. This would be especially true if you had a zip file that spanned multiple floppy disks; something that was common in 1989. You'd like to be able to update a README.TXT in your zip file without having to re-write multiple floppies.

In discussion with PKWare, they state the following

The format was originally intended to be written front-to-back so the central directory and end of central directory record could be written out last after all files included in the ZIP are known and written. If adding files, changes can applied without rewriting the entire file. This was how the original PKZIP application was designed to write .ZIP files. When reading, it will read the ZIP file end of central directory first to locate the central directory and then seek to any files it needs to access

Of course "add" is different than "delete" and "replace".

Whether it is valid to have local files not referenced by the central directory is undefined by the spec. It is only implied by the mention of:

Files MAY be added or replaced within a ZIP file, or deleted.

If it is valid for the central directory to not reference all the local files then reading a zip file by scanning from the front may fail. Without special care you'd get files that aren't supposed to exist or errors from trying to overwrite existing files.

But that contradicts 4.1.9, which says zip files may be streamed. If zip files can be streamed then both of the examples above would fail, because in the first case we'd see file B, and in the second we'd see file A (old), before we saw that the central directory doesn't reference them. If you have to wait for the central directory before you can correctly use any of the entries, then functionally you cannot stream zip files.

Can the self extracting portion have any zip IDs in it?

Seeing the instructions for how to create a self extracting zip file above, we just prepend some executable code to the front of the file and then fix the offsets in the central directory.

So let's say your self extractor has code like this

switch (id) {
  case 0x06054b50: /* ... */
  case 0x04034b50: /* ... */
  case 0x02014b50: /* ... */
}

Given the code above, it's likely those values 0x06054b50, 0x04034b50, 0x02014b50 will appear in binary in the self extracting portion of the zip file at the front of the file. If you read a zip file by scanning from the front, your scanner may see those ids and misinterpret them as zip records.

In fact you can imagine a self extractor with a zip file in it like this

// data for a zip file that contains
//   LICENSE.txt
//   README.txt
//   player.exe
const unsigned char runtimeAndLicenseData[] = {
  0x50, 0x4b, 0x03, 0x04, // ??, ??, ...
};

int main() {
  extractZipFromMemory(runtimeAndLicenseData, sizeof(runtimeAndLicenseData));
}

Now there's a zip file in the self extractor. Any reader that reads from the front would see this inner zip file and fail. Is that a valid zip file? This is undefined by the spec.

I tested this. The original PKUnzip.exe in DOS, Windows Explorer, MacOS Finder, and Info-Zip (the unzip included in MacOS and Linux) all clearly read from the back and see the files after the self extractor. 7z and Keka see the embedded zip inside the self extractor.

Is that a failure, or is that a bad zip file? The APPNOTE.TXT does not say. I think it should be explicit here, and I think it's one of those unstated assumptions. PKunzip scans from the back, so this just happens to work, but how it happens to work is never documented. The issue that the data in the self-extractor might happen to resemble a zip file is just glossed over. Similarly, streaming will likely fail here, if it hasn't already from the previous issues.

You might think this is a non-issue, but there are hundreds of thousands of self extracting zip files from the 1990s out there in the archives. A forward scanner might fail to read these.

Can the zip comment contain zip IDs in it?

If you look at 4.3.16 above you'll see the end of a zip file is a variable length comment. So if you're doing backward scanning, you basically read from the back of the file looking for 0x50 0x4B 0x05 0x06. But what if that sequence of bytes is in the comment?

I'm sure Phil Katz never gave it a second thought. He just assumed people would put the equivalent of a README.txt in there. As such it would only have values from 0x20 to 0x7F, with maybe a 0x0D (carriage return), 0x0A (linefeed), or 0x09 (tab).

Unfortunately all of the bytes in those ids are valid ASCII, and therefore valid UTF-8. We already went over 0x50 = P and 0x4B = K. 0x05 is "Enquiry" and 0x06 is "Acknowledge" in ASCII; neither is printable, but both are perfectly legal bytes in a comment.

The APPNOTE.TXT should arguably explicitly specify if this is invalid. Indirectly 4.3.1 says

A ZIP file MUST have only one "end of central directory record"

But what does that mean? Does that mean the bytes 0x50 0x4B 0x05 0x06 can't appear in the comment nor the self extracting code? Does it mean the first time you see that scanning from the back you don't try to find a second match?

If you scan from the front and run into none of the issues mentioned before, then a forward scanner would successfully read this. On the other hand, pkunzip itself would fail.
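It's easy to construct the ambiguity: an end of central directory record whose 22-byte comment is itself a well-formed-looking end of central directory record. A naive backward scan finds the copy inside the comment first; a sketch:

```javascript
// Sketch: a zip comment can itself contain what looks like a complete
// end of central directory record. Bytes 0-21 below are the real record,
// whose 22-byte comment (bytes 22-43) holds a fake one.
function naiveFindEocdOffset(bytes) {
  const view = new DataView(bytes.buffer, bytes.byteOffset, bytes.byteLength);
  for (let offset = bytes.byteLength - 22; offset >= 0; --offset) {
    if (view.getUint32(offset, true) === 0x06054b50) {
      return offset;
    }
  }
  return -1;
}

const fakeInComment = new Uint8Array(44);
const view = new DataView(fakeInComment.buffer);
view.setUint32(0, 0x06054b50, true);   // real signature
view.setUint16(20, 22, true);          // real comment length: 22
view.setUint32(22, 0x06054b50, true);  // fake signature inside the comment
view.setUint16(42, 0, true);           // fake comment length: 0
```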

What if the offset to the central directory is 101,010,256?

That offset is 0x06054B50, and since values are stored little-endian its four bytes on disk are 0x50 0x4B 0x05 0x06, exactly the end of central directory signature. I think a ~100 meg zip file wasn't even on the radar when zip was created, and in fact extensions were required to handle files larger than 4 gig. But it does show one more way the format is poorly designed.
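To make the collision concrete: since values are stored little-endian (4.3.3), the offset value that lays down the bytes 0x50 0x4B 0x05 0x06 is 0x06054B50, i.e. 101,010,256. A sketch:

```javascript
// Sketch: an end of central directory record whose central-directory-offset
// field is 0x06054b50 (101,010,256) contains a second copy of the
// signature byte sequence, because values are stored little-endian.
const record = new Uint8Array(22);
const v = new DataView(record.buffer);
v.setUint32(0, 0x06054b50, true);   // real signature at offset 0
v.setUint32(16, 0x06054b50, true);  // central directory offset field

const sig = [0x50, 0x4b, 0x05, 0x06];
const matches = [];
for (let i = 0; i + 4 <= record.length; ++i) {
  if (sig.every((b, j) => record[i + j] === b)) {
    matches.push(i);  // every position where the signature bytes appear
  }
}
```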

What's a good design?

There's certainly debate to be had about what a good design would be, but some things are arguably easy to decide if we could start over.

  1. It would have been better if records had a fixed format like id followed by size so that you can skip a record you don't understand.

  2. It would have been better if the last record at the end of the file was just an offset-to-end-of-central-directory record as in

       0x504b0609 (id: some id not in use)
       0x04000000 (size of data of record)
       0x???????? (relative offset to end-of-central-directory)

    Then there would be no ambiguity for reading from the back.

    1. Read the last 12 bytes
    2. Check the first 8 are 0x50 0x4b 0x06 0x09 0x04 0x00 0x00 0x00. If not, fail.
    3. Read the offset and go to the end-of-central-directory

    Or, conversely, put the comment in its own record and write it before the central directory and put an offset to it in the end-of-central-directory-record. Then at least this issue of scanning over the comment would disappear.

  3. Be clear about what data can appear in a self extracting stub.

    If you want to support reading from the front it seems required to state that the self extracting portion can't appear to have any records.

This is hard to enforce unless you specifically write a validator. If you just check based on whether your own app can read the zip file then, as it stands now, Pkzip, pkunzip, info-zip (the zip in MacOS, Linux), Windows Explorer, and MacOS all don't care what's in the self extracting portion, so they aren't useful for validation. You must either explicitly state in the spec that you must scan from the back, or write a validator that rejects zips that are not forward scannable and state in the spec why.

  4. Be clear if the central directory can disagree with local file records

  5. Be clear if random data can appear between records

A backward scanner does not care what's between records. It only cares that it can find the central directory, and it only reads what that central directory points to. That means there can be random data between records (or at least some records).

    Be explicit if this is okay or not okay. Don't rely on implicit diagrams.
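The trailer record proposed in point 2 would make the read procedure exactly the three steps listed; a sketch (I'm assuming the offset is relative to the start of the trailer, since the hypothetical record doesn't pin that down):

```javascript
// Sketch of reading the hypothetical fixed 12-byte trailer proposed above.
// Layout: bytes 0x50 0x4b 0x06 0x09, then size 4 (little-endian), then a
// 4-byte offset back to the end-of-central-directory, relative to the
// trailer start (my assumption).
function readProposedTrailer(bytes) {
  const expected = [0x50, 0x4b, 0x06, 0x09, 0x04, 0x00, 0x00, 0x00];
  const start = bytes.byteLength - 12;
  if (start < 0 || !expected.every((b, i) => bytes[start + i] === b)) {
    throw new Error('not a valid file');  // step 2: fail, no scanning needed
  }
  const view = new DataView(bytes.buffer, bytes.byteOffset, bytes.byteLength);
  const relativeOffset = view.getUint32(start + 8, true);
  return start - relativeOffset;  // absolute position of the EOCD
}
```

No ambiguity, no scanning over comments: the last 12 bytes either match exactly or the file is rejected.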

What to do, how to fix?

If I were to guess, all of these issues are implementation details that didn't make it into the APPNOTE.TXT. What I believe the APPNOTE.TXT really wants to say is "a valid zip file is one that pkzip can manipulate and pkunzip can correctly unzip". Instead it defines things in such a way that various implementations can make files that other implementations can't read.

Of course with 32 years of zip files out there we can't fix the format. What PKWare could do is get specific about these edge cases. If it were me, I'd add these sections to the APPNOTE.TXT:

4.3.1 A ZIP file MUST contain an "end of central directory record". A ZIP file containing only an "end of central directory record" is considered an empty ZIP file. Files MAY be added or replaced within a ZIP file, or deleted. A ZIP file MUST have only one "end of central directory record". Other records defined in this specification MAY be used as needed to support storage requirements for individual ZIP files.

The "end of central directory record" must be at the end of the file and the sequence of bytes, 0x50 0x4B 0x05 0x06, must not appear in the comment.

The "central directory" is the authority on the contents of the zip file. Only the data it references is valid to read from the file. This is because (1) the contents of the self extracting portion of the file are undefined and might appear to contain zip records when in fact they are not related to the zip file, and (2) the ability to add, update, and delete files in a zip file stems from the fact that only the central directory knows which local files are valid.

That would be one way. I believe this would handle the hundreds of millions of existing zip files out there.

On the other hand, if PKWare claims files that have these issues don't exist, then this would work as well:

4.3.1 A ZIP file MUST contain an "end of central directory record". A ZIP file containing only an "end of central directory record" is considered an empty ZIP file. Files MAY be added or replaced within a ZIP file, or deleted. A ZIP file MUST have only one "end of central directory record". Other records defined in this specification MAY be used as needed to support storage requirements for individual ZIP files.

The "end of central directory record" must be at the end of the file and the sequence of bytes, 0x50 0x4B 0x05 0x06, must not appear in the comment.

There can be no [local file records] that do not appear in the central directory. This guarantee is required so reading a file front to back provides the same results as reading it back to front. Any file that does not follow this rule is an invalid zip file.

A self extracting zip file must not contain any of the sequences of record ids listed in this document, as they may be misinterpreted by forward scanning zip readers. Any file that does not follow this rule is an invalid zip file.

I hope they will update the APPNOTE.TXT so that the various zip readers and zip creators can agree on what's valid.

Unfortunately I feel like PKWare doesn't want to be clear here. Their POV seems to be that zip is an ambiguous format. If you want to read by scanning from the front, then just don't try to read files you can't read that way. They're still valid zip files, and the fact that you can't read them is irrelevant; it's just your choice not to support them.

I suppose that's a valid POV. Few if any zip libraries handle every feature of zip. Still, it would be nice to know if you're intentionally not handling something or if you're just reading the file wrong and getting lucky that it works sometimes.

The reason all this came up is I wrote a JavaScript unzip library. There are tons out there, but I had special needs the other libraries I found didn't handle. In particular I needed a library that lets me read a single file from a large zip as fast as possible. That means backward scanning, finding the offset to the desired file, and decompressing just that single file. Hopefully others find it useful.

You might find this history of Zip fascinating


Randomly Selected Music


I don't think this will interest anyone but me but ...

I've been listening to music via my iPhone for many years, playing my collection of mp3s. I guess that dates me, as I'm not using Spotify or Apple Music or YouTube Music, but I haven't had any luck using any of those services.

As an example, on Spotify I picked "Caro Emerald" Radio. I'd classify her as modern swing

and Spotify played "How Would You Feel" by Kzezip who I'd classify as pop.

Another example I put in Prince Radio

And Spotify played rap. Prince had nothing to do with rap. The list from when I pasted it above is basically "Hits from the 80s", but that's not what I want if I pick "Prince Radio". I want music that sounds similar to Prince. Maybe Wendy and Lisa, or maybe Rick James? Or maybe some bands I've never heard of. Checking the list though, Spotify will give me Huey Lewis & The News. I have nothing against them, I like their songs, but they aren't similar to Prince.

An example from YouTube Music: I put in "Fuck you till your Groovy" by Jill Jones and picked "Radio"

And it played "All Night Long" by Lionel Richie. WAT!???!

Note: I got the Jill Jones recommendation from doing the same thing on Google Play Music, whose radio feature actually seemed to work, or rather, actually played music similar to the artist and not just music popular with people who like that artist.

Another YouTube Music example: I put in "Swingrowers" radio, which is an electro swing band.

And YouTube played "Bliss on Mushrooms" by Infected Mushroom, an industrial band.


Anyway, all this means, sadly, I keep going back to just my own collection of mp3s, because trying any of the other services means hitting "no" or "don't like" for 9 out of 10 songs.

All that was really beside the point though. What I wanted to write about was that I'd been listening to music via my iPhone for years, and recently went through my entire list of ~8500 songs trying to make a playlist, and that's when it became clear to me:

iPhone SUCKS at playing music!!! 😭

In particular I noticed lots of songs I never heard my iPhone play, and conversely there were some albums it seemed to play way too often. One example: I have this album called "Pure Sugar" by Pure Sugar. It's house music from 1998.

As far as I know it was some album I bought at a record store, probably in 1998, because on a short listen it sounded ok, and back then buying CDs was the only way to add to your music collection. I had over 1100 CDs at the time.

I have no particular affinity for this CD. I wouldn't add any song on the album to a playlist, but if you like house music as background music it's fine. I can listen to the entire album, which is better than most.

In any case, out of ~8500 songs my iPhone seemed to pick songs from this album all the friggen time?!?! I never really gave it much thought because I just assumed it was bad luck or one of those weird artifacts of random selection, but then, when I was going through the entire list of songs and seeing all the stuff not being played, it became clear something was broken.

I didn't actually figure out what the problem was. I've never rated any albums or tracks. iTunes/Music apparently auto-rates albums. No idea how it does that. If it's by the number of times played, that would suck because it would reinforce its bad choices. If it's by looking up other people's opinions on the net, that would suck too, as I don't want other people choosing music for me from my own collection. I also have no idea if the ratings are used to pick random tracks.

If shuffling is related to rating, apparently the solution is to set all the ratings to 1. That way the app will assume you set them and won't auto-rate.

I actually have no idea if that works. Instead I switched to using a different music app, and suddenly I'm hearing much more of my collection than I was with the built-in app.

We'll see how it goes. I have no idea how the new app chooses songs though. I can think of lots of algorithms. The simplest would just be to pick a random track from all tracks.

I'm pretty confident the app isn't doing that because it's also played too many tracks from the same albums. In other words, let's say it played a song from "Unbreakable" by Janet Jackson. Within about 10 songs I'd hear another song from the same album, and 10 songs later yet another. I'm not sure what the odds of that are, but I think they are low for 8500 tracks. I might guess that it picks a random album and then a random track. Would that make it more likely to hear songs from the same albums? Or maybe it picks N albums and then picks random tracks from just those albums, trying to make a theme? I have no idea.

I wrote some code to try just picking songs at random and to see how often it picks a song from one of the last 20 albums played.

It just keeps picking songs at random until it's played every song at least once. Using JavaScript's Math.random() function, for ~8500 tracks it would have to play around 80k tracks until it's played every track once. During that time at least one track would have been played ~25 times. Also, about one out of 36 tracks will be from the same album as one of the last 20 albums played. That wouldn't remotely explain getting 3+ songs from the same album in, say, 60 songs. Yes, I know random = random, but in my own tests that situation rarely comes up.
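Such a simulation boils down to the classic coupon-collector problem; a rough sketch (not the exact code I ran, and the album-adjacency bookkeeping is omitted):

```javascript
// Sketch: play uniformly random tracks until every track has played at
// least once (the "coupon collector" problem), tracking how many times
// the most-played track came up along the way.
function simulateRandomPlay(trackCount, random = Math.random) {
  const playCounts = new Array(trackCount).fill(0);
  let unheard = trackCount;
  let totalPlays = 0;
  while (unheard > 0) {
    const track = Math.floor(random() * trackCount);
    if (playCounts[track] === 0) {
      unheard -= 1;
    }
    playCounts[track] += 1;
    totalPlays += 1;
  }
  return { totalPlays, maxPlays: Math.max(...playCounts) };
}

// For ~8500 tracks the expected total is about n * (ln(n) + 0.577),
// roughly 82k plays, which lines up with the ~80k figure above.
```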

Apparently there's also a difference between "random" and "shuffle". "Shuffle" is supposed to be like a deck of cards: you take all the tracks and "shuffle" them, then play the tracks in the shuffled order. I suppose I should check that.

Well, according to that, on average every ~40 tracks I'll get a track from the same album. Maybe that's what I'm experiencing.
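A track-level shuffle in that sense is typically a Fisher-Yates shuffle; a sketch:

```javascript
// Sketch: Fisher-Yates shuffle -- every track plays exactly once per pass,
// unlike independent random picks, which repeat tracks long before they
// get through the whole collection.
function shuffle(tracks, random = Math.random) {
  const deck = tracks.slice();
  for (let i = deck.length - 1; i > 0; --i) {
    const j = Math.floor(random() * (i + 1));  // 0 <= j <= i
    [deck[i], deck[j]] = [deck[j], deck[i]];
  }
  return deck;
}
```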

It's ridiculous how much of my collection I'm being re-introduced to since I switched players

In any case, within the first 60 tracks on the new app it played 2 songs from "Pure Sugar"!!! 😭😅🤣🤯



I recently made 2 new sites. They came about like this.

Once in a while I want to benchmark solutions in JavaScript just to see how much slower one solution is vs another. I used to use but sometime in early 2020 or 2019 it disappeared.

Searching around I found 2 others. Trying them, they both have their issues. One is ugly. Probably not fair, but whatever, using it bugged me. The other, at least as of this writing, had 2 issues when I tried to use it. One is that if my code had a bug, the site would crash, as in it would put up a full window UI that blocks everything with a "running..." message and then never recover. The result was all the code I typed in was lost and I'd have to start over. The other is it has a 4k limit. 4k might sound like a lot, but I ran into that limit trying to test a fairly simple thing. I managed to squeeze my test in with some work, but worse, there's no contact info anywhere except a donate button that leads directly to the donation site, not a contact page, so there's no way to even file a bug, let alone make a suggestion.

In any case I put up with it for 6 months or so, but then one day about a month ago, I don't remember what triggered it, I figured I could make my own site fairly quickly, where I'm sure in my head "quickly" meant 1-3 days max. 😂


So, this is what happened. First I decided I should use benchmark.js, mostly because I suck at math and it claims "statistically significant results". I have no idea what that means 😅 but a glance at the code shows some math happening that's more than I'd do if I just wrote my own timing functions.

Unfortunately I'd argue benchmark.js is actually not a very good library. They made up some username or org name called "bestiejs" to make it sound like it's good, and they claim tests and docs, but the docs are horrible auto-generated docs. They don't actually cover how to use the library; they just list a bunch of classes and methods, and it's left to you to figure out which functions to call and when and why. There are also some very questionable design choices, like the way you add setup code is by manually patching the prototype of one of their classes. WAT?!?

I thought about writing my own anyway and trying to extract the math parts but eventually I got things working enough and just wanted to move on.

Personal Access Tokens

I also didn't want to run a full server with a database and everything else, so I decided I'd see if it was possible to store the data in a github gist. It turns out yes, it's possible, but I also learned there is no way to make a static website that supports OAuth to let the user log in to github.

A workaround is a user can make a Personal Access Token, which is a fancy way of basically making a special password that is given certain permissions. So, in order to save, the user would have to go to github, manually make a personal access token, paste it in, and then they could save. It worked! 🎉
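Saving then boils down to one authenticated POST to the GitHub gists API; a sketch of building that request (the token, description, and filenames here are placeholders):

```javascript
// Sketch: build the request that saves files as a GitHub gist using a
// personal access token. The endpoint is api.github.com/gists and the
// token goes in the Authorization header.
function buildGistRequest(token, description, files) {
  return {
    url: 'https://api.github.com/gists',
    options: {
      method: 'POST',
      headers: {
        Authorization: `token ${token}`,
        Accept: 'application/vnd.github.v3+json',
      },
      body: JSON.stringify({
        description,
        public: true,
        // gist API shape: { "name.ext": { content: "..." }, ... }
        files: Object.fromEntries(
            Object.entries(files).map(([name, content]) => [name, { content }])),
      }),
    },
  };
}

// Usage (in a real app the token comes from the user):
// const { url, options } = buildGistRequest(token, 'benchmark', { 'data.json': '{}' });
// const res = await fetch(url, options);
```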

As I got it working I realized I could also make a site similar to jsfiddle or codepen with only minor tweaks to the UI, so I started on that too.


Running User Code

Both sites run arbitrary user code, and so if I didn't do something, people could write code that steals the personal access token. That's no good. Stealing their own token is not an issue, but passing a benchmark or jsgist to another user would let them steal that user's token.

The solution is to run the user's code in an iframe on another domain. That domain can't read any of the data from the main site, so the problem is solved.

Unfortunately I ran into a new problem. Well, maybe it's not so new. The problem is, since the servers are static, I can't serve the user's code like a normal site would. If you look at jsfiddle, codepen, and stack overflow snippets, you'll see they run the code from a server-served page generated using the user's code. With a static site I don't have that option.

To work around it I generate a blob, make a URL to the blob, and have the browser load that in an iframe. I use this solution on several of my sites. It works, but it has other problems. One is, since I can't serve any files whatsoever, I have to re-write URLs if you use more than 1 file.

Take for example something that uses workers. You usually need a minimum of 2 files. An HTML file with a <script> section that launches a worker and the worker's script in another file. So you start with main.html that loads worker.js but you end up with blob:1234-1314523-1232 for main.html and it's still referencing worker.js, so you have to somehow find that reference and change it to the blob url that was generated for worker.js. I actually implemented this solution on those sites I mentioned above but it only works because I wrote all the examples that are running live and the solutions only handle the small number of cases I needed to work.
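A minimal sketch of that blob approach (the file contents and the naive string replacement here are made up for illustration; the real sites only handle slightly more than this):

```javascript
// Hypothetical in-memory "files" for a worker example.
const files = {
  'main.html': `<script>const w = new Worker('worker.js');<\/script>`,
  'worker.js': `self.onmessage = () => self.postMessage('hi');`,
};

// Make a blob URL for each file.
const blobUrls = {};
for (const [name, content] of Object.entries(files)) {
  const type = name.endsWith('.html') ? 'text/html' : 'application/javascript';
  blobUrls[name] = URL.createObjectURL(new Blob([content], { type }));
}

// Naively replace references to each filename with its blob URL.
let html = files['main.html'];
for (const [name, url] of Object.entries(blobUrls)) {
  html = html.split(name).join(url);
}

// In the browser you'd then point an iframe at a blob made from `html`:
//   iframe.src = URL.createObjectURL(new Blob([html], { type: 'text/html' }));
```

The obvious weakness is the string replacement: it only works for literal filename references, which is exactly why this only handled the cases I wrote myself.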

The second problem with the blob solution is that blobs are no good for debugging. Every time the user clicks "run" new blobs are created, so any breakpoints you set last time you ran it don't apply to the new blob since they're associated with a URL and that URL has just changed.

Looking into it I found out I could solve both problems with a service worker. The main page starts the service worker then injects the filename/content of each file into the service worker. It then references those files as normal URLs so they don't change. Both problems are solved. 😊
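A rough sketch of that service worker (the message shape and the `/user-files/` path prefix are invented for illustration; the real implementation surely differs):

```javascript
// Table of user files, injected by messages from the main page.
const files = new Map();

// Map a request pathname like '/user-files/worker.js' to stored content.
function lookup(pathname) {
  const prefix = '/user-files/';
  return pathname.startsWith(prefix)
      ? files.get(pathname.slice(prefix.length))
      : undefined;
}

// Service-worker wiring; only runs in an actual service worker context.
if (typeof self !== 'undefined' && typeof window === 'undefined') {
  self.addEventListener('message', (e) => {
    // e.data is assumed to look like { name: 'worker.js', content: '...' }
    files.set(e.data.name, e.data.content);
  });
  self.addEventListener('fetch', (e) => {
    const content = lookup(new URL(e.request.url).pathname);
    if (content !== undefined) {
      e.respondWith(new Response(content));
    }
  });
}
```

Because the URLs stay stable across runs, the browser's breakpoints keep working.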

I went on to continue making the sites even though I was way past the amount of time I thought I'd be spending on them.

Github Login

In using the sites I ran into a new problem. Using a personal access token sucked! I have at least 4 computers I want to run these on. A Windows PC, a Mac, an iPhone, and an Android phone. When I'd try to use a different PC I needed to either go through the process of making a new personal access token, or I needed to find some way to pass that token between machines, like email it to myself. 🤮

I wondered if I could get the browser to save it as a password. It turns out, yes, if you use a <form> and an <input type="password"> and you apply the correct incantation when the user clicks a "submit" button the browser will offer to save the personal access token as a password.

Problem solved? No 😭

A minor issue is there's no username but the browsers assume it's always username + password. That's not that big a deal, I can supply a fake username though it will probably confuse users.

A major issue though is that passing between machines via a browser's password manager doesn't help pass between different browsers. If I want to test Firefox vs Chrome vs Safari then I was back to the same problem of keeping track of a personal access token somewhere.

Now I was running into sunk cost issues. I'd spent a bunch of time getting this far but the personal access token issues seemed likely to keep anyone from using either site. If no one is going to use it then I've wasted all the time I put in already.

Looking into it more, it turns out the amount of "server" needed to support oauth so that users could log in with github directly is actually really tiny. No storage is needed, almost nothing.

Basically the way Oauth works is

  1. user clicks "login with github"
  2. static page opens a popup to github's OAuth page and passes it an app id (called client_id), the permissions the app wants, and something called "state" which you make up.
  3. The popup shows github's login and asks for permission for the app to use whatever features it requested.
  4. If the user picks "permission granted" then the github page redirects to some page you pre-registered when you registered your app with github. For our case this would be an auth.html page on our site. The redirect to this page includes a "code" and the "state" passed at step 2.
  5. The auth.html page either directly or by communicating with the page that opened the popup, first verifies that the "state" matches what was sent at step 2. If not something is fishy, abort. Otherwise it needs to contact github at a special URL and pass the "code", the "client_id" and a "client_secret".

    Here's the part that needs a server. The page can't send the secret because then anyone could read the secret. So, the secret needs to be on a server. So,

  6. The page needs to contact a server you control that contains the secret and passes it the "code". That server then contacts github, passes the "code", "client_id", and "client_secret" to github. In response github will return an access token which is exactly the same as a "personal access token" except the process for getting one is more automated.
  7. The page gets the access token from the server and starts using it to access github's API

If you were able to follow that, the short version is: you need a server, and all it has to do is, given a "code", contact github, pass the "code", "client_id" and "client_secret" on to github, and pass back the resulting token.

Pretty simple. Once that's done the server is no longer needed. The client will function without contacting that server until and unless the token expires. This means that server can be stateless and basically only takes a few lines of code to run.

I found a couple of solutions. One is called Pizzly. It's overkill for my needs. It's a server that provides the oauth server in step 6 above but it also tracks the tokens for you and proxies all other github requests, or requests to whatever service you're using. So your client side code just gets a pizzly user id which gets translated for you to the correct token.

I'm sure that's a great solution but it would mean paying for a much larger server, having to back up user accounts, and having to keep applying patches as security issues are found. It also means paying for all bandwidth between the browser and github because pizzly is in the middle.

Another repo though made it clear how simply the issue can be solved. It's this github-secret-keeper repo. It runs a few-line node server. I ran the free example on heroku and it works! But, I didn't want to make a heroku account. It seemed too expensive for what I needed it for. I also didn't feel like setting up a new droplet at Digital Ocean and paying $5 a month just to run this simple server that I'd have to maintain.

AWS Lambda

I ended up making an AWS Lambda function to do this which added another 3 days or so to try to learn enough AWS to get it done.

I want to say the experience was rather poor IMO. Here's the problem. All the examples I found showed lambda doing node.js like stuff, accepting a request, reading the parameters, and returning a response. Some showed the parameters already parsed and the response being structured. Trying that didn't work and it turns out the reason is that AWS splits this feature into 2 parts.

Part 1 is AWS Lambda which just runs functions in node.js or python or Java etc...

Part 2 is AWS API Gateway which provides public facing endpoints (URLS) and routes them to different services on AWS, AWS Lambda being one of those targets.

It turns out the default in AWS API Gateway doesn't match any of the examples I saw. In particular the default in AWS API Gateway is that you set up a ton of rules to parse and validate requests and parameters, and only if they parse correctly and pass all the validation do they get forwarded to the next service. But that's not really what the examples I saw wanted. Instead they wanted AWS API Gateway to effectively just pass through the request. That's not the default and I'd argue it not being the default is a mistake.

My guess is that service was originally written in Java. Because Java is strongly typed it was natural to think in terms of making the request fit strong types. Node.js on the other hand, is loosely typed. It's trivial to take random JSON, look at the data you care about, ignore the rest, and move on with your life.

In any case I finally figured out how to get AWS API Gateway to do what all the AWS Lambda examples I was seeing needed and it started working.

My solution is here if you want to use it for github or any Oauth service.

CSS and Splitting

Next up was splitting and CSS. I still can't claim to be a CSS guru in any way shape or form and several times a year I run into infuriating CSS issues where I thought I'd get something done in 15 minutes but it turns into 15 minutes of the work I thought I was going to do and 1 to 4 hours of trying to figure out why my CSS is not working.

I think there are 2 big issues.

  1. Safari doesn't match Chrome and Firefox, so you get something working only to find it doesn't work on Safari.

  2. Nowhere does it seem to be documented how to make children always fill their parents. This is especially important if you're trying to make a page that acts more like an app where the data available should always fit on the screen vs a webpage that can be as tall as all the content.

    To be more clear you try (or I try) to make some layout like

       |                    |
       |   |            |   |
       |   |            |   |
       |   |            |   |
       |         |          |

    and I want the entire thing to fill the screen and the contents of each area expand to use all of it. For whatever reason it never "just works". I'd think this would be trivial but something about it is not or at least not for me. It's always a bunch of sitting in the dev tools and adding random height: 100% or min-height: 0 or flex: 1 1 auto; or position: relative to various places in the hope things get fixed and they don't break something else or one of the other browsers. I'd think this would be common enough that the solution would be well documented on MDN or CSS Tricks or some place but it's not or at least I've never found it. Instead there's just all us clueless users reading the guesses of other clueless users sharing their magic solutions on Stack Overflow.

    I often wonder if any of the browser makers or spec writers ever actually use the stuff they make and why they don't work harder to spread the solutions.

    In any case my CSS seems to be doing what I want at the moment.
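For what it's worth, the combination that currently seems to do what I want boils down to something like this (a sketch with made-up class names, not a definitive answer):

```html
<!-- A fill-the-screen, two-column app layout sketch. -->
<style>
  /* make the page itself fill the window instead of sizing to content */
  html, body { height: 100%; margin: 0; }
  .app  { display: flex; flex-direction: column; height: 100%; }
  /* min-height: 0 lets flex children shrink instead of overflowing */
  .main { display: flex; flex: 1 1 auto; min-height: 0; }
  .left, .right { flex: 1 1 auto; overflow: auto; }
</style>
<div class="app">
  <div class="header">header</div>
  <div class="main">
    <div class="left">left</div>
    <div class="right">right</div>
  </div>
  <div class="footer">footer</div>
</div>
```

The non-obvious parts are `height: 100%` on both `html` and `body`, and `min-height: 0` on the middle row, which is the usual fix for flex children that refuse to shrink.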

That said, I also ran into the issue that I needed a splitter control that lets you drag the divider between two areas to adjust their sizes. There were 3 I found but they all had issues. One was out of date and unmaintained and got errors with current React. Yea, I used React. Maybe that was a bad decision. Still not sure.

After fighting with the other solutions I ended up writing my own so that was a couple of days of working through issues.


Next up was comments. I don't know why I felt compelled to add comments but I did. I felt like people being able to comment would be a net positive. Codepen allows comments. The easiest thing to do is just tack on disqus. Similar to the user code issue though, I can't use disqus directly on the main site, otherwise they could steal the access token.

So, set up another domain, put disqus in an iframe. The truth is disqus already puts itself in an iframe, but at the top level it does this with a script on your main page, which means they can steal secrets if they want. So, yea, 3rd domain (the 2nd was for user code).

The next problem is there is no way in the browser to size an iframe to fit its content. It seems ridiculous to have that limitation in 2020 but it's still there. The solution is the iframe sends messages to the parent saying what size its content is and then the parent can adjust the size of the iframe to match. It turns out this is how disqus itself works. The script it uses to insert an iframe listens for messages to resize the iframe.

Since I was doing iframe in iframe I needed to re-implement that solution.
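The re-implemented protocol is tiny. A sketch (the message shape is my own; disqus uses its own format):

```javascript
// Inner frame: report content height to the parent (browser only).
function sendSize() {
  parent.postMessage(
      { type: 'resize', height: document.documentElement.scrollHeight }, '*');
}

// Parent: resize a given iframe whenever a resize message arrives.
function makeResizeHandler(iframe) {
  return (e) => {
    if (e.data && e.data.type === 'resize') {
      iframe.style.height = `${e.data.height}px`;
    }
  };
}
// window.addEventListener('message', makeResizeHandler(iframe));
```

In production you'd also check `e.origin` so a random frame can't resize your iframe.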

It worked, problem solved..... or is it? 😆

Github Comments

It's a common topic on tech sites but there is a vocal minority that really dislikes disqus. I assume it's because they are being tracked across the net. One half-solution is to put up a click-through so that by default disqus doesn't load but the user can click "load comments" which is effectively an opt-in to being tracked.

The thing is, gists already support comments. If only there was a way to use them easily on a 3rd party site like you can with disqus. There isn't, so ....

There's an API where you can get the comments for a gist. You then have to format them from markdown into HTML. You need to sanitize them because they're user data and you don't want people to be able to insert JavaScript. I was already running comments on a 3rd domain so at least that part was already covered.
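Reading comments uses github's public REST endpoint, `GET /gists/{gist_id}/comments`. A sketch (Node 18+ or browser `fetch`; the markdown rendering and sanitizing are left out):

```javascript
function gistCommentsUrl(gistId) {
  return `https://api.github.com/gists/${gistId}/comments`;
}

// Fetch a gist's comments; each comment's `body` is markdown that still
// needs to be converted to HTML and sanitized before display.
async function getGistComments(gistId) {
  const res = await fetch(gistCommentsUrl(gistId), {
    headers: { Accept: 'application/vnd.github+json' },
  });
  if (!res.ok) throw new Error(`github returned ${res.status}`);
  return res.json();
}
```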

In any case it wasn't too much work to get existing comments displayed. New comments were more work though.

Github gists display as follows

| header   |
| files    |
|          |
|          |
| comments |
|          |
|          |
| new      |
| comment  |
| form     |

that comment form is way down the page. If there was an id to jump to I could have possibly put that page in an iframe and just used a link like https://gist.github.com/<id>#new-comment-form to get the form to appear in a useful way. That would give the full github comment UI which includes drag and drop image attachments among other things. Even if putting it in an iframe sucked I could have just had a link in the form of

<a href="https://gist.github.com/<id>#new-comment-form">click here to leave a comment</a>

But, no such ID exists, nor does any standalone new comment form page.

So, I ended up adding a form. But for the preview we're back to the problem of user data on a page that has access to a github token.

The solution is to put the preview on a separate page served from the comments domain and send it a message with the new content when the user asks for a preview. That way, even if we fail to fully sanitize, the user content can't steal the tokens.


Both sites support embedding via iframes.

JsBenchIt supports 2 embed modes. One uses a plain iframe.

+ there are no security issues (I can't see any data on whatever site you embed it on)

- It's up to you to make your iframe fit the results

The other mode uses a script

+ it can auto size the frame

- if I was bad I could change the script and steal your login credentials for whatever site you embed it on.

Of course I'd never do that but just be aware. Maybe someone hacks my account or steals my domain etc... This same problem exists for any scripts you use from a site you don't control, like jQuery from a CDN for example, so it's not uncommon to use a script. Just pointing out the trade off.

I'm not sure what the point of embedding the benchmarks is but I guess you could show off your special solution, or show how some other solution is slow, or maybe post one and encourage others to try to beat it.

Closing Thoughts

I spent about a month, 6 to 12hrs a day, on these 2 sites so far. There's a long list of things I could add, especially to the jsfiddle-like site. No idea if I will add those things. The benchmark site has a point for me because I didn't like the existing solutions. The jsfiddle-like site has much less of a point because there are 10 or so sites that already do something like this in various ways: jsfiddle, codepen, jsbin, codesandbox, glitch, github codespaces, plunkr, and I know there are others, so I'm not sure what the point was. It started as just a kind of "oh, yea, I could do that too" while making jsbenchit and honestly I probably spent 2/3rds of the time there vs the benchmark site.

I honestly wish I'd found a way to spend this kind of time on something that has some hope of generating income, and not just income but also something I'm truly passionate about. Much of this feels more like the procrastination project that one does to avoid doing the thing they should really do.

That said, the sites are live, they seem to kind of work though I'm sure there are still lurking bugs. Being stored in gists the data is yours. There is no tracking on the site. The sites are also open source so pull requests welcome.


Embedded Scripts - Stop it!


I recently started making a website. I needed to store some credentials locally in the user's browser. I had to give some thought to the fact that I can't let 3rd parties access those credentials and that's led to a bunch of rabbit holes.

It's surprising the number of services out there that will tell you to embed their JavaScript into your webpage. If you do that then those scripts could be reading all the data on the page including login credentials, your credit card number, contact info, and whatever else is on the page.

In other words, for example, to use the disqus comment service you effectively add a script like this

<script src=""></script>

Disqus uses that to insert an iframe and then show all the comments and the UI for adding more. I kind of wanted to add comments to the site above via disqus but there's no easy way to do it securely. The best I can think of is I can make a 2nd domain so that on the main page I create an iframe that links to the 2nd domain and that 2nd domain then includes that disqus script.

I'm not dissing disqus, I'm just more surprised this type of issue is not called out more as the security issue it is.

I looked into how codepen allows embedding a pen recently. Here's the UI for embedding

Notice that of the 4 methods they mark HTML as recommended. Well if you dig through the HTML you see it does this

<script async src=""></script>

Yes, it pwns your page. Fortunately they offer using an iframe but it's surprising to me they recommend the insecure, we-own-your-site, embed-our-script-directly-on-your-page option over the others. In fact I'd argue it's irresponsible for them to offer that option at all. I'm not trying to single out codepen, it's common across many companies. Heck, Google Analytics is probably the most common embedded script with Facebook's being second.

I guess what goes through most people's heads who make this stuff is "we're trustworthy so nothing to worry about". Except,

  1. It sets a precedent to trust all such similar sites offering embedded scripts

  2. I might be able to trust "you" but can I trust all your employees and successors?

    We're basically setting up a world of millions of effectively compromised sites and then praying that it doesn't become an issue sometime in the future.

  3. Even if I trust you, you could be compelled to use your backdoor.

    I suppose this is unlikely but who knows. Maybe the FBI comes knocking requesting that for a specific site you help them steal credentials because they see your script is on the site they want to hack or get info from.

Anyway, I do have comments on this site by disqus using their script and I have google analytics on here too. This site though has no login, there are no credentials or anything else to steal. For the new site though I'll have to decide whether or not I want to run comments at all and if so set up the second domain.


GitHub has a Permission Problem.


TL;DR: Thousands of developers are giving 3rd parties write access to their github repos. This is even more irresponsible than giving out your email password or your computer's password since your github repos are often used by more than just you. The tokens given to 3rd parties are just like passwords. A hacker that breaches a company that has that info will suddenly have write access to every github repo the breached company had tokens for.

github should work to stop this irresponsible practice.

I really want to scream about security on a great many fronts but today let's talk about github.

What the actual F!!!

How is this not a 2000 comment topic on HN and Github not shamed into fixing this?

Github's permission systems are irresponsible in the extreme!!

Lots of sites let you sign up via github. Gatsby is one. Here's the screen you get when you try to sign up via your github account.

Like seriously, WTF does "Act on your behalf" mean? Does it mean Gatsby can have someone assassinated on my behalf? Can they take out a mortgage on my behalf? Can they volunteer me for the Peace Corps on my behalf? More seriously can they scan all my private repos on my behalf? Insert trojans in my code on my behalf? Open pull requests on other people's projects on my behalf? Log in to every other service I've connected to my github account on my behalf? Delete all my repos on my behalf? Add users to my projects on my behalf? Change my password on my behalf?

This seems like the most ridiculous permission ever!

I brought this up with github and they basically threw up their hands and said "Well, at least we're telling you something". No you're not. You're effectively telling me absolutely nothing except that you're claiming if I click through you're giving that company permission to do absolutely anything. How is that useful info?

But, just telling me isn't really the point. The point is each service should be required to use as small a set of permissions as is absolutely necessary. If I sign up for a service, the default should be no permissions except getting my email address. If a service is supposed to work with a repo (like gatsby is) then github should provide an interface such that gatsby tells github "Give me a list of repos the user wants me to use" and github presents the UI to select an existing one or create new ones and, when finished, only those repos are accessible and only with the minimal permissions needed.

This isn't entirely github's fault though, the majority of the development community seems asleep as well.

Let's imagine your bank let you sign in to 3rd party services in a similar manner. How many people would click through on "Let ACME corp act on your behalf on your Citibank Account". I think most people would be super scared of permissions like that. Instead they'd want very specific permission like, only permission to deposit money, or only permission to read the balance, or only permission to read transactions, etc...

Github providing blanket permissions to so many companies is a huge recipe for disaster just waiting to happen. If any one of those companies gets hacked, or has insider help, or has a disgruntled employee, suddenly your trade secrets are stolen, your unreleased app is leaked, your software is hacked with a trojan and all your customers sue you for the loss of work your app caused. It could be worse, you could run an open source library so by hacking ACME corp the bad guys can hack your library and via that hack everyone using your library.

I get why github does it and/or why the apps do it. For example check out Forestry. They could ask for minimal permissions and good on them for providing a path to go that route. They ask for greater permissions so that they can do all the steps for you. I get that. But if you allow them blanket access to your github (or gitlab), YOU SHOULD ARGUABLY BE DISQUALIFIED FROM BEING A SOFTWARE DEVELOPER!!!

The fact that you trusted some random 3rd party with blanket permissions to edit all of your repos and change all of your permissions is proof you don't know WTF you're doing and you can't be trusted. It's like if someone asked you for the password to your computer. If you give it out you're not computer literate!

boy: "Is it ok if I set my password to your birthday?"

girl: "Then your password is meaningless!"

Here's the default permissions Forestry asks for if you follow their recommended path.

First let's explain what Forestry is. It's a UI for editing blog posts through git so you can have a nice friendly interface for your git based static site generator. That's great! But, at most it only needs access to a single repo. Not all your public repos! If you click through and picked "Authorize" that's no different than giving them the password to your computer. Maybe worse, because at least your hacked computer will probably only affect you.

Further, the fact that companies like Forestry even ask for this should be shameful! Remember when various companies like Facebook and Yelp would, when you signed up, ask for the username and password for your email account? Remember how pretty much every tech person on the planet knew that was seriously irresponsible to even ask? Well this is no different. It's entirely irresponsible for Forestry to ask for these kinds of blanket permissions! It's entirely irresponsible for any users to give them these permissions! How are all the tech leaders seemingly asleep at calling this out?

Like I mentioned above, part of this arguably lies at Github's feet. Forestry does this because github provides no good flow to do it well, so Forestry is left with 2 options: (1) be entirely irresponsible but make it easy for the user to use their service, or (2) be responsible but lose sales because people can't get set up easily.

Instead it should be a sign that they're an irresponsible and untrustworthy company that they ask for these kinds of permissions at all. And further, github should also be ashamed their system encourages these kinds of blanket permissions.

Think of it this way. There are literally millions of open source libraries. npm has over a million libraries and that's just JavaScript. Add in python, C++, Java, C#, ruby, and all the other projects on github. Hundreds of thousands of developers wrote those libraries. How many of those developers have given out the keys to their repos so random 3rd parties can hack their libraries? Maybe they gave too broad permissions to some code linting site. Maybe they gave too broad permissions to some project monitoring site. Maybe they gave too broad permissions just to join a forum using their github account. Isn't that super irresponsible? They've opened a door by writing the library and they're letting more people in the door. That can't be good.

I don't blame the devs so much as github for effectively making this common. Github needs to take security seriously and that means working to make issues like this the exception, not the rule. It should be the easiest thing to do to allow a 3rd party minimal access to your repo. It should be much harder to give them too much access. There should be giant warnings that you're about to do something super irresponsible and that you should probably not be trusting the company asking for these permissions.

Call it out!

I don't mean to pick on Forestry. 100s of other github (and gitlab?) integrations have the same issues. Forestry was just the latest one I looked at. I've seen various companies have this issue for years now and I've been seriously surprised this hasn't been a bigger topic.

Don't clutter the UX with meaningless info

Look above at the Github permissions. Reading public info should not even be listed! It's already obvious that all your public info can be read by the app. That's the definition of public! There's no reason to tell me it might read it. It doesn't need permission to do so.


What if Google Was Like YouTube?


This was just a random brain fart but ...

I get the impression that for many topics, youtube is more popular than web pages. Note: I have zero proof but it doesn't really matter for the point of this article.

Let's imagine there is a website that teaches JavaScript, for example this one.

Note: I have no idea how many people go to that site but compare it to this youtube channel which has millions of views.

For example this one video has 2.8 million views and it's just one of 100s of videos.

I have no idea but I suspect the youtube channel is far more viewed than the website.

Why is that?

At first I thought it was obvious, it's because more people like videos more than they like text for these topics. It's certainly easy to believe. Especially the younger generation, pretty much anyone under 25 has grown up with YouTube as part of their life.

There are lots of arguments to be made for video for learning coding. Seeing someone walk through the steps can be better than reading about how to do it. For one, it's unlikely someone writing a tutorial is going to remember to detail everything, whereas someone making a video is at least likely showing the actual steps in the video. Small things they might have forgotten to write down appear in the video.

On the other hand, video sucks for reference and speed. I can't currently search the contents of a video. While I can cue a video to different time points, that's much worse than being able to skim or jump to the middle of an article.

Anyway, there are certainly valid reasons why a video might be more popular than an article on the same topic.


What if one of the major reasons why videos are more popular than articles is YouTube itself? You go to youtube and based on what you watched before it recommends other things to watch. You watch one video on how to code in JavaScript and it's going to recommend more videos about programming in JavaScript and programming in general. It's also going to ask you to subscribe to those channels. You might even be set up to get emails when a youtuber posts a new video to their channel.

So, Imagine Google's home page worked the same way. Imagine instead of this

It looked more like this

Even before you searched you'd see recommendations based on things you searched for or viewed before. You'd see things you subscribed to. You'd see marks for stuff you'd read before. Your history would be part of the website just like it is on youtube. Google could even keep the [+] button in top right which would lead to sites to create your content.

I can hear a lot of various responses.

I think it would be an interesting experiment. If not Google's current home page then some new one, or something.

Like youtube it would mark what you've already read. Like youtube it would allow people to make channels. RSS is already in place to let people add their channels. Not sure how many systems still support this but there was a standard for discovering where the page for adding new content is, so clicking the [+] button could take you there, wherever it is, and Google could suggest places if you want to start from scratch, including squarespace or even blogger 😂

I think it might be super useful to have more sites recommended to me based on my interests. I watch youtube. I look at the recommendations. In fact I appreciate the recommendations. Why should websites be any different? Unlike Youtube the web is more decentralized so that's actually a win over Youtube. Why shouldn't Google (or someone) offer this service?

I'm honestly surprised it hasn't been done already. It probably has but I just forgot or didn't notice.

This might also make the tracking more visible. People claim Google knows all the sites you visit. Well, why not show it? If there's a Google analytics script on some site and Google recorded you went there, then you go to Google's home page and there in your history, just like Youtube's history, is a list of the sites you've visited. This would make it far more explicit so advocates for privacy could more easily point to it and say LOOK! It might also get people to pursue more ways to have things not get tracked.

But, I suspect lots of people would also find it super useful and having Google recommend stuff based on that would seem natural given the interface. As it is now all they use that data for is to serve ads they think you might be interested in. Using that data to recommend pages seems more directly useful to me. Something I want, an article on a topic I'm interested in, vs something they want, to show me an ad. And it seems like no loss to them. They'll still get a chance to show me the ad.

Oh well, I expect the majority of people who will respond to this to be anti-Google and so anti this idea. I still think the idea is an interesting one. No site I know of recommends content for me in a way similar to Youtube. I'd like to try it out and see how it goes.


Someone pointed out Chrome for Android and iOS has the "suggested articles" feature but trying it out it completely fails.

First off I turned it on and for me it recommended nothing but Japanese articles. Google knows my browsing history. It knows that 99% of the articles I read are in English. The fact that it recommended articles in Japanese shows it completely failed to be anything like the youtube experience I'm suggesting. In fact Google claims the suggestions are based on my Web & App Activity but checking my Web & App Activity there is almost zero Japanese.

Further, there is no method to "Subscribe" to a channel, for whatever definition of "channel". There is nothing showing me articles I've read, though given my rant on Youtube showing me articles I've read maybe that's a good thing? I mean I can go into my account and check my activity but what I want is to be able to go to a page for a specific channel and see the list of all that channel's content and see which articles I've read and which I haven't.

So while it's a step toward providing a youtube like experience it's completely failing to come close to matching it.

Note: I believe "channels" are important. When you watch a youtube video most creators say "Click Subscribe!". It's arguably an important part of the youtube experience and needs to be present if we're trying to see what it would be like to bring that same experience to the web. Most sites already have feeds, so a "channels" feature would arguably be relatively easy for Google, or whoever is providing this youtube like web experience, to implement.


Bad UI Design - Youtube


Today's bad design thoughts - Youtube.

Caveat: maybe I'm full of it and there are reasons the UI is the way it is. I doubt it.

Youtube's recommendations drive me crazy. I'm sure they have stats or something that says their recommendations are perfect on average but maybe it's possible different people respond better to different kinds of recommendation systems?

As an example some people might like to watch the same videos again and again. Others might only want new videos (me!). So, when my recommendations are 10-50% for videos I've already watched it's a complete waste of time and space.

Here are some recommendations

You can see 2 of them have a red bar underneath. This signifies that I've watched this video. Don't recommend it to me please!!!

But it gets worse. Here's some more recommendations. The highlighted video I've already watched so I don't want it recommended.

I click the 3 dots and tell it "Not interested"

I then have to worry that youtube now thinks I hate that channel, which is not the case, so this appears

Clicking "Tell Us Why" I get this

So I finally choose "I already watched this video". That's 4 steps:


  1. click '...'
  2. pick "not interested"
  3. pick "tell us why"
  4. pick "I already watched this video"

It could be 3 steps

  1. click '...'
  2. pick "not interested"
  3. pick "I already watched this video"

It could even be 2 steps

  1. click '...'
  2. pick "I already watched this video"

Why is that 4 steps? What UX guidelines or process decided this needed to be 4 steps? It reminds me of the Windows Off Menu fiasco.

It gets worse though. Youtube effectively calls me a liar!

After those steps above I go to the channel for that user and you'll notice the video I marked as "I already watched this video" is not marked as watched with the red bar.

Imagine if in gmail you marked a message as read but Google decided, nope, we're going to keep it marked as un-read because we know better than you! I get it I guess. The red bar is not a "I watched this already" it's a "how much of this have I watched". Well, if I mark it as watched then mark it as 100% watched!!!

I'm also someone who would prefer to separate music from videos. If I want music I'll go to some music site, maybe even youtube music. Youtube seems to often fill my recommendations with 10-50% music playlists. STOP IT! You're not getting me to watch more videos (or listen to more music). You're just wasting my time.

Here 5 of 12 recommendations are for music! I'm on YouTUBE to watch things, not listen to things.

Now, maybe some users looking for something to watch end up clicking on 1-2 hr music videos or playlists. Fine, let me turn off all music so I can opt out of it. Pretty please. I'm happy to go to youtube music or something if I want music from youtube, or I'll search for it directly, but in general if I go to youtube looking for recommendations I'm there to watch something.

Please Youtube, let me help you surface more videos I want to watch. Make it easier for me to tell you I've already watched a video, and mark it as watched so when I'm glancing at videos in a channel it's easy to see what I have and haven't watched. Let me separate looking for music from looking for videos. Thank you.


Comparing Code Styles


In a few projects in the past I made these functions

function createElem(tag, attrs = {}) {
  const elem = document.createElement(tag);
  for (const [key, value] of Object.entries(attrs)) {
    if (typeof value === 'object') {
      for (const [k, v] of Object.entries(value)) {
        elem[key][k] = v;
      }
    } else if (elem[key] === undefined) {
      elem.setAttribute(key, value);
    } else {
      elem[key] = value;
    }
  }
  return elem;
}

function addElem(tag, parent, attrs = {}) {
  const elem = createElem(tag, attrs);
  parent.appendChild(elem);
  return elem;
}

It lets you create an element and fill in its various parts relatively tersely.

For example

const form = addElem('form', document.body);

const checkbox = addElem('input', form, {
  type: 'checkbox',
  id: 'debug',
  className: 'bullseye',
});

const label = addElem('label', form, {
  for: 'debug',
  textContent: 'debugging on',
  style: {
    background: 'red',
  },
});

With the built in browser API this would be

const form = document.createElement('form');
document.body.appendChild(form);

const checkbox = document.createElement('input');
checkbox.type = 'checkbox';
checkbox.id = 'debug';
checkbox.className = 'bullseye';
form.appendChild(checkbox);

const label = document.createElement('label');
label.setAttribute('for', 'debug');
label.textContent = 'debugging on';
label.style.background = 'red';
form.appendChild(label);

Recently I saw someone post they use a function more like this

function addElem(tag, attrs = {}, children = []) {
  const elem = createElem(tag, attrs);
  for (const child of children) {
    elem.appendChild(child);
  }
  return elem;
}

The difference from mine was you pass in the children, not the parent. This suggests a nested style like this

document.body.appendChild(addElem('form', {}, [
  addElem('input', {
    type: 'checkbox',
    id: 'debug',
    className: 'bullseye',
  }),
  addElem('label', {
    for: 'debug',
    textContent: 'debugging on',
    style: {
      background: 'red',
    },
  }),
]));

I tried it out recently when refactoring someone else's code. No idea why I decided to refactor but anyway. Here's the original code

function createTableData(thead, tbody) {
  const row = document.createElement('tr');
  {
    const header = document.createElement('th');
    header.className = "text sortcol";
    header.textContent = "Library";
    row.appendChild(header);
  }
  for (const benchmark of Object.keys(testData)) {
    const header = document.createElement('th');
    header.className = "number sortcol";
    header.textContent = benchmark;
    row.appendChild(header);
  }
  {
    const header = document.createElement('th');
    header.className = "number sortcol sortfirstdesc";
    header.textContent = "Average";
    row.appendChild(header);
  }
  thead.appendChild(row);
  for (let i = 0; i < libraries.length; i++) {
    const row = document.createElement('tr');
    row.id = libraries[i] + '_row';
    {
      const data = document.createElement('td');
      data.style.backgroundColor = colors[i];
      data.style.color = '#ffffff';
      data.style.fontWeight = 'normal';
      data.style.fontFamily = 'Arial Black';
      data.textContent = libraries[i];
      row.appendChild(data);
    }
    for (const benchmark of Object.keys(testData)) {
      const data = document.createElement('td');
      data.id = `${benchmark}_${library_to_id(libraries[i])}_data`;
      data.textContent = "";
      row.appendChild(data);
    }
    {
      const data = document.createElement('td');
      data.id = library_to_id(libraries[i]) + '_ave__data';
      data.textContent = "";
      row.appendChild(data);
    }
    tbody.appendChild(row);
  }
}

While that code is verbose it's relatively easy to follow.

Here's the refactor

function createTableData(thead, tbody) {
  thead.appendChild(addElem('tr', {}, [
    addElem('th', {
      className: "text sortcol",
      textContent: "Library",
    }),
    ...Object.keys(testData).map(benchmark => addElem('th', {
      className: "number sortcol",
      textContent: benchmark,
    })),
    addElem('th', {
      className: "number sortcol sortfirstdesc",
      textContent: "Average",
    }),
  ]));
  for (let i = 0; i < libraries.length; i++) {
    tbody.appendChild(addElem('tr', {
      id: `${libraries[i]}_row`,
    }, [
      addElem('td', {
        style: {
          backgroundColor: colors[i],
          color: '#ffffff',
          fontWeight: 'normal',
          fontFamily: 'Arial Black',
        },
        textContent: libraries[i],
      }),
      ...Object.keys(testData).map(benchmark => addElem('td', {
        id: `${benchmark}_${library_to_id(libraries[i])}_data`,
      })),
      addElem('td', {
        id: `${library_to_id(libraries[i])}_ave__data`,
      }),
    ]));
  }
}

I'm not entirely sure I like it better. What I noticed when I was writing it is I found myself having a hard time keeping track of the opening and closing braces, parenthesis, and square brackets. Effectively it's one giant expression instead of multiple individual statements.

Maybe if it was JSX it might hold the same structure but be more readable? Let's assume we could use JSX here then it would be

function createTableData(thead, tbody) {
  thead.appendChild(
    <tr>
      <th className="text sortcol">Library</th>
      {
        Object.keys(testData).map(benchmark => (
          <th className="number sortcol">{benchmark}</th>
        ))
      }
      <th className="number sortcol sortfirstdesc">Average</th>
    </tr>
  );
  for (let i = 0; i < libraries.length; i++) {
    tbody.appendChild(
      <tr id={`${libraries[i]}_row`}>
        <td style={{
          backgroundColor: colors[i],
          color: '#ffffff',
          fontWeight: 'normal',
          fontFamily: 'Arial Black',
        }}>{libraries[i]}</td>
        {
          Object.keys(testData).map(benchmark => (
            <td id={`${benchmark}_${library_to_id(libraries[i])}_data`} />
          ))
        }
        <td id={`${library_to_id(libraries[i])}_ave__data`} />
      </tr>
    );
  }
}

I really don't know which I like best. I'm sure I don't like the most verbose raw browser API version. The more terse it gets, though, the harder it seems to be to read.

Maybe I just need to come up with a better way to format?

I mostly wrote this post because after the refactor I wasn't sure I was digging it, but writing all this out I have no ideas on how to fix my reservations. I did feel a little like I was solving a puzzle unrelated to the task at hand in order to generate one giant expression.

Maybe my spidey senses are telling me it will be hard to read or edit later? I mean I do try to break down expressions into smaller parts now more than I did in the past. In the past I might have written something like

const dist = Math.sqrt((x2 - x1) ** 2 + (y2 - y1) ** 2);

but now-a-days I'd be much more likely to write something like

const dx = x2 - x1;
const dy = y2 - y1;
const distSq = dx * dx + dy * dy;
const dist = Math.sqrt(distSq);

Maybe with such a simple equation it's hard to see why I prefer to spell it out. Maybe I prefer to spell it out because often I'm writing tutorials. Certainly my younger self thought terseness was "cool" but my older self finds terseness for the sake of terseness to be misplaced. I value readability, understandability, editability, and comparability over terseness.



OpenGL Trivia


I am not an OpenGL guru and I'm sure someone who is will protest loudly and rudely in the comments below about something that's wrong here at some point but ... I effectively wrote an OpenGL ES 2.0 driver for Chrome. During that time I learned a bunch of trivia about OpenGL that I think is probably not common knowledge.

Until OpenGL 3.1 you didn't need to call glGenBuffers, glGenTextures, glGenRenderbuffer, glGenFramebuffers

You still don't need to call them if you're using the compatibility profile.

The spec effectively said that all the glGenXXX functions do is manage numbers for you but it was perfectly fine to make up your own numbers

const id = 123;
glBindBuffer(GL_ARRAY_BUFFER, id);
glBufferData(GL_ARRAY_BUFFER, sizeof(data), data, GL_STATIC_DRAW);

I found this out when running the OpenGL ES 2.0 conformance tests against the implementation in Chrome as they test for it.

Note: I am not suggesting you should not call glGenXXX! I'm just pointing out the trivia that they don't/didn't need to be called.

Texture 0 is the default texture.

You can set it the same as any other texture

glBindTexture(GL_TEXTURE_2D, 0);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8, 1, 1, 0, GL_RGBA, GL_UNSIGNED_BYTE, oneRedPixel);

Now if you happen to use the default texture it will be red.

I found this out as well when running the OpenGL ES 2.0 conformance tests against the implementation in Chrome as they test for it. It was also a little bit of a disappointment to me that WebGL didn't ship with this feature. I brought it up with the committee when I discovered it but I think people just wanted to ship rather than go back and revisit the spec to make it compatible with OpenGL and OpenGL ES. Especially since this trivia seems not well known and therefore rarely used.

Compiling a shader is not required to fail even if there are errors in your shader.

The spec, at least the ES spec, says that glCompileShader can always return success. The spec only requires that glLinkProgram fail if the shaders are bad.

I found this out as well when running the OpenGL ES 2.0 conformance tests against the implementation in Chrome as they test for it.

This trivia is unlikely to ever matter to you unless you're on some low memory embedded device.

There were no OpenGL conformance tests until 2012-ish

I don't know the actual date but when I was using the OpenGL ES 2.0 conformance tests they were being back ported to OpenGL because there had never been an official set of tests. This is one reason there are so many issues with various OpenGL implementations or at least were in the past. Tests now exist but of course any edge case they miss is almost guaranteed to show inconsistencies across implementations.

This is also a lesson I learned. If you don't have comprehensive conformance tests for your standard, implementations will diverge. Making them comprehensive is hard, but if you don't want your standard to devolve into lots of non-standard edge cases then you need to invest the time to make comprehensive conformance tests and do your best to make them easily usable with implementations other than your own. Not just for APIs; file formats are another place comprehensive conformance tests would likely help keep the non-standard variations to a minimum.

Here are the WebGL2 tests as examples and here are the OpenGL tests. The OpenGL ones were not made public until 2017, 25 years after OpenGL shipped.

Whether or not fragments get clipped by the viewport is implementation specific

This may or may not be fixed in the spec but it is not fixed in actual implementations. Originally the viewport setting set by glViewport only clipped vertices (and/or the triangles they create). But, for example, if you draw a 32x32 size POINTS point, say, 2 pixels off the edge of the viewport, should the 14 pixels still inside the viewport be drawn? NVidia says yes, AMD says no. The OpenGL ES spec says yes, the OpenGL spec says no.

Arguably the answer should be yes otherwise POINTS are entirely useless for any size other than 1.0

POINTS have a max size. That size can be 1.0.

I don't think it's trivia really but it might be. Plenty of projects might use POINTS for particles and they expand the size based on the distance from the camera but it turns out they may never expand or they might be limited to some size like 64x64.

I find it very strange that there is a limit. I can imagine there is/was dedicated hardware to draw points in the past. It's relatively trivial to implement them yourself using instanced drawing and some trivial math in the vertex shader, with no size limit, so I'm surprised GPUs don't just use that method and drop the size limit.
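For what it's worth, the math really is trivial. Here's a sketch of it in JavaScript (the helper name and data layout are my own, not from any real API): each point becomes a quad of two triangles, one quad per instance, and the vertex shader would offset each corner from the point's center.

```javascript
// Expand a point into a quad. Given the point's clip-space center, a corner
// in the range [-1, +1], the desired point size in pixels, and the viewport
// resolution, compute that corner's clip-space position. Clip space spans
// 2 units across `resolution` pixels, so a half-extent of sizePx / 2 pixels
// is a clip-space offset of (sizePx / 2) * (2 / resolution) = sizePx / resolution.
function pointCorner(center, corner, sizePx, resolution) {
  return [
    center[0] + corner[0] * sizePx / resolution[0],
    center[1] + corner[1] * sizePx / resolution[1],
  ];
}

// The 6 corners (2 triangles) you'd feed as per-vertex data, one point per instance.
const corners = [[-1, -1], [1, -1], [-1, 1], [-1, 1], [1, -1], [1, 1]];
```

In a real vertex shader this is one multiply-add per axis, which is why the hardware limit is so surprising.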

But whatever, it's how it is. Basically you should not use POINTS if you want consistent behavior.

LINES have a max thickness of 1.0 in core OpenGL

Older OpenGL, and therefore the compatibility profile of OpenGL, supports lines of various thicknesses, although like points above the max thickness is driver/GPU dependent and allowed to be just 1.0. But in the core spec, as of OpenGL 3.0, only 1.0 is allowed, period.

The funny thing is the spec still explains how glLineWidth works. It's only buried in the appendix that it doesn't actually work.

E.2.1 Deprecated But Still Supported Features

The following features are deprecated, but still present in the core profile. They may be removed from a future version of OpenGL, and are removed in a forward compatible context implementing the core profile.

  • Wide lines - LineWidth values greater than 1.0 will generate an INVALID_VALUE error.

The point is, except for maybe debugging you probably don't want to use LINES and instead you need to rasterize lines yourself using triangles.

You don't need to setup any attributes or buffers to render.

This comes up from needing to make the smallest repros, either to post on stack overflow or to file a bug. Let's assume you're using core OpenGL or OpenGL ES 2.0+ so that you're required to write shaders. Here's the simplest code to test a texture

const GLchar* vsrc = R"(#version 300 es
void main() {
  gl_Position = vec4(0, 0, 0, 1);
  gl_PointSize = 100.0;
}
)";

const GLchar* fsrc = R"(#version 300 es
precision highp float;
uniform sampler2D tex;
out vec4 color;
void main() {
  color = texture(tex, gl_PointCoord);
}
)";

GLuint prg = someUtilToCompileShadersAndLinkToProgram(vsrc, fsrc);
glUseProgram(prg);

// this block only needed in GL, not GL ES
{
  GLuint vertex_array;
  glGenVertexArrays(1, &vertex_array);
  glBindVertexArray(vertex_array);
}

const GLubyte oneRedPixel[] = { 0xFF, 0x00, 0x00, 0xFF };
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8, 1, 1, 0, GL_RGBA, GL_UNSIGNED_BYTE, oneRedPixel);

glDrawArrays(GL_POINTS, 0, 1);

Note: no attributes, no buffers, and I can test things about textures. If I wanted to try multiple things I can just change the vertex shader to

const GLchar* vsrc = R"(#version 300 es
layout(location = 0) in vec4 position;
void main() {
  gl_Position = position;
  gl_PointSize = 100.0;
}
)";

And then use glVertexAttrib to change the position. Example

glVertexAttrib2f(0, -0.5, 0);  // draw on left
glDrawArrays(GL_POINTS, 0, 1);
glVertexAttrib2f(0,  0.5, 0);  // draw on right
glDrawArrays(GL_POINTS, 0, 1);

Note that even if we used this second shader and didn't call glVertexAttrib we'd get a point in the center of the viewport. See next item.

PS: This may only work in the core profile.

The default attribute value is 0, 0, 0, 1

I see this all the time. Someone declares a position attribute as vec3 and then manually sets w to 1.

in vec3 position;
uniform mat4 matrix;
void main() {
  gl_Position = matrix * vec4(position, 1);
}

The thing is for attributes w defaults to 1.0 so this will work just as well

in vec4 position;
uniform mat4 matrix;
void main() {
  gl_Position = matrix * position;
}

It doesn't matter that you're only supplying x, y, and z from your attributes. w defaults to 1.

Framebuffers are cheap and you should create more of them rather than modify them.

I'm not sure if this is well known or not. It partly falls out from understanding the API.

A framebuffer is a tiny thing that just consists of a collection of references to textures and renderbuffers. Therefore don't be afraid to make more.

Let's say you're doing some multipass post processing where you swap inputs and outputs.

texture A as uniform input => pass 1 shader => texture B attached to framebuffer
texture B as uniform input => pass 2 shader => texture A attached to framebuffer
texture A as uniform input => pass 3 shader => texture B attached to framebuffer
texture B as uniform input => pass 4 shader => texture A attached to framebuffer
...

You can implement this in 2 ways

  1. Make one framebuffer, call gl.framebufferTexture2D to set which texture to render to between passes.

  2. Make 2 framebuffers, attach texture A to one and texture B to the other. Bind the other framebuffer between passes.

Method 2 is better. Every time you change the settings inside a framebuffer the driver potentially has to check a bunch of stuff at render time. Don't change anything and nothing has to be checked again.
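To make method 2 concrete, here's a minimal sketch in JavaScript/WebGL style. The function names `createFramebufferForTexture` and `runPasses`, and the pass-callback shape, are my own hypothetical helpers, not from any real library: each texture gets its own framebuffer once, and between passes we only rebind, never re-attach.

```javascript
// Make one framebuffer per texture up front; its attachment never changes.
function createFramebufferForTexture(gl, tex) {
  const fb = gl.createFramebuffer();
  gl.bindFramebuffer(gl.FRAMEBUFFER, fb);
  gl.framebufferTexture2D(
      gl.FRAMEBUFFER, gl.COLOR_ATTACHMENT0, gl.TEXTURE_2D, tex, 0);
  return fb;
}

// Ping-pong between two textures. Each `pass` callback reads the source
// texture and draws into whatever framebuffer is currently bound.
function runPasses(gl, passes, texA, texB) {
  let src = { tex: texA, fb: createFramebufferForTexture(gl, texA) };
  let dst = { tex: texB, fb: createFramebufferForTexture(gl, texB) };
  for (const pass of passes) {
    gl.bindFramebuffer(gl.FRAMEBUFFER, dst.fb);  // cheap: no attachment change
    pass(src.tex);
    [src, dst] = [dst, src];
  }
  return src.tex;  // the texture holding the final result
}
```

Only two framebuffers are ever created, and the per-pass work is a single bind, which is exactly the state-change pattern drivers can validate once and reuse.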

This arguably includes glDrawBuffers which is also framebuffer state. If you need multiple settings for glDrawBuffers make a different framebuffer with the same attachments but different glDrawBuffers settings.

Arguably this is likely a trivial optimization. The more important point is framebuffers themselves are cheap.

The TexImage2D API leads to interesting complications

Not too many people seem to be aware of the implications of TexImage2D. Consider that in order to function on the GPU your texture must be set up with the correct number of mip levels. You can set how many. It could be 1 mip. It could be a bunch, but each has to be the correct size and format. Let's say you have an 8x8 texture and you want to do the standard thing (not setting any other texture or sampler parameters). You'll also need a 4x4 mip, a 2x2 mip, and a 1x1 mip. You can get those automatically by uploading the level 0 8x8 mip and calling glGenerateMipmap.
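The sizing rule itself is simple: each dimension halves per level (rounding down) but never goes below 1. Here's a quick sketch of it in JavaScript (helper name my own):

```javascript
// Compute the full mip chain for a texture, from level 0 down to 1x1.
// Each dimension halves per level (integer division) but is clamped to 1.
function mipChainSizes(width, height) {
  const sizes = [[width, height]];
  while (width > 1 || height > 1) {
    width = Math.max(1, width >> 1);
    height = Math.max(1, height >> 1);
    sizes.push([width, height]);
  }
  return sizes;
}
```

For an 8x8 texture that gives [[8,8], [4,4], [2,2], [1,1]], the 4 levels described above.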

Those 4 mip levels need to be copied to the GPU, ideally without wasting a lot of memory. But look at the API. There's nothing in it that says I can't do this

glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8, 8, 8, 0, GL_RGBA, GL_UNSIGNED_BYTE, pixelData8x8);
glTexImage2D(GL_TEXTURE_2D, 1, GL_RGBA8, 20, 40, 0, GL_RGBA, GL_UNSIGNED_BYTE, pixelData40x20);
glTexImage2D(GL_TEXTURE_2D, 2, GL_RGBA8, 10, 20, 0, GL_RGBA, GL_UNSIGNED_BYTE, pixelData20x10);
glTexImage2D(GL_TEXTURE_2D, 3, GL_RGBA8, 5, 10, 0, GL_RGBA, GL_UNSIGNED_BYTE, pixelData10x5);
glTexImage2D(GL_TEXTURE_2D, 4, GL_RGBA8, 2, 5, 0, GL_RGBA, GL_UNSIGNED_BYTE, pixelData5x2);
glTexImage2D(GL_TEXTURE_2D, 5, GL_RGBA8, 1, 2, 0, GL_RGBA, GL_UNSIGNED_BYTE, pixelData2x1);
glTexImage2D(GL_TEXTURE_2D, 6, GL_RGBA8, 1, 1, 0, GL_RGBA, GL_UNSIGNED_BYTE, pixelData1x1);

If it's not clear what that code does a normal mipmap looks like this

but the mip chain above looks like this

Now, the texture above will not render but the code is valid, no errors, and, I can fix it by adding this line at the bottom

glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8, 40, 80, 0, GL_RGBA, GL_UNSIGNED_BYTE, pixelData80x40);

I can even do this

glTexImage2D(GL_TEXTURE_2D, 6, GL_RGBA8, 1000, 1000, 0, GL_RGBA, GL_UNSIGNED_BYTE, pixelData1000x1000);
glTexImage2D(GL_TEXTURE_2D, 6, GL_RGBA8, 1, 1, 0, GL_RGBA, GL_UNSIGNED_BYTE, pixelData1x1);

or this

glTexImage2D(GL_TEXTURE_2D, 6, GL_RGBA8, 1000, 1000, 0, GL_RGBA, GL_UNSIGNED_BYTE, pixelData1000x1000);

Do you see the issue? The API can't actually know anything about what you're trying to do until you actually draw. All the data you send to each mip just has to sit around until you call draw because there's no way for the API to know beforehand what state all the mips will be until you finally decide to draw with that texture. Maybe you supply the last mip first. Maybe you supply different internal formats to every mip and then fix them all later.

Ideally you'd specify the level 0 mip and then it would be an error to specify any other mip that doesn't match: same internal format, correct size for the current level 0. That still might not be perfect, because changing level 0 could make all the other mips the wrong size or format, but it could be that changing level 0 to a different size simply invalidates all the other mip levels.
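That stricter rule is easy to sketch. This is my own illustration in JavaScript, not any real GL API: given the current level 0 size, you can validate any other mip upload immediately instead of deferring everything to draw time.

```javascript
// Is an upload to `level` the correct size, given the current level 0 size?
// The expected mip dimension is the level 0 dimension shifted right by the
// level number, clamped to a minimum of 1.
function isValidMipSize(level0Width, level0Height, level, width, height) {
  const expectedWidth = Math.max(1, level0Width >> level);
  const expectedHeight = Math.max(1, level0Height >> level);
  return width === expectedWidth && height === expectedHeight;
}
```

With a check like this the driver could reject a bad upload at the call site, which is essentially the guarantee TexStorage2D provides.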

This is specifically why TexStorage2D was added, but TexImage2D is pretty much ingrained at this point.


Reduce Your Dependencies


I recently wanted to add colored output to a terminal/command line program. I checked some other project that was outputting color and saw they were using a library called chalk.

All else being equal I prefer smaller libraries to larger ones and I prefer to glue libraries together rather than take a library that tries to combine them for me. So, looking around I found chalk, colors, and ansi-colors. All popular libraries to provide colors in the terminal.

chalk is by far the largest with 5 dependencies totaling 3600 lines of code.

Things it combines

Next up is colors. It's about 1500 lines of code.

Like chalk it also spies on your command line arguments.

Next up is ansi-colors. It's about 900 lines of code. It claims to be a clone of colors without the excess parts. No auto detecting support. No spying on your command line. It does include the theme function, if only to try to match colors' API.

Why all these hacks and integrations?


Starting with themes. chalk gets this one correct. They don't do anything. They just show you that it's trivial to do it yourself.

const chalk = require('chalk');

const theme = {
  danger: chalk.red,
};

console.log(theme.danger('on fire'));

Why add a function setTheme just to do that? What happens if I go

colors.setTheme({
  red: 'green',
  green: 'red',
});

Yes you'd never do that but an API shouldn't be designed to fail. What was the point of cluttering this code with this feature when it's so trivial to do yourself?

Color Names

It would arguably be better to just have them as separate libraries. Let's assume the color libraries have a function rgb that takes an array of 3 values. Then you can do this:

const pencil = require('pencil');
const webColors = require('color-name');

pencil.rgb(webColors.burlywood)('some string');

vs

const chalk = require('chalk');

chalk.keyword('burlywood')('some string');
In exchange for breaking the dependency you gain the ability to take the newest color set anytime color-name is updated, rather than having to wait for chalk to update its deps. You also don't have 150 lines of unused JavaScript in your code if you're not using the feature, which you weren't.

Color Conversion

As above the same is true of color conversions

const pencil = require('pencil');
const hsl = require('color-convert').rgb.hsl;

pencil.rgb(hsl(30, 100, 50))('some-string');

vs

const chalk = require('chalk');

chalk.hsl(30, 100, 50)('some-string');

Breaking the dependency removes 1500 lines from the library, lines you probably weren't using anyway. You can update the conversion library if there are bugs or new features you want. You can also use other conversions and they won't have a different coding style.

Command Line hacks

As mentioned above chalk looks at your command line behind the scenes. I don't know how to even describe how horrible that is.

A library peeking at your command line behind the scenes seems like a really bad idea. To do this, not only is it looking at your command line, it's including another library to parse your command line. It has no idea how your command line works. Maybe you're shelling out to another program and you have a -- to separate arguments to your program from arguments meant for the program you spawn, like Electron and npm do. How would chalk know this? To fix this you have to hack around chalk using environment variables. But of course if the program you're shelling to also uses chalk it will inherit the environment variables, requiring yet more workarounds. It's just simply a bad idea.

Like the other examples, if your program takes command line arguments it's literally going to be 2 lines to do this yourself. One line to add --color to your list of arguments and one line to use it to configure the color library. Bonus, your command line argument is now documented for your users instead of being some hidden secret.
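For example, something like this sketch (my own; `pencil` stands in for whatever hypothetical color library you use), which even handles the `--` separator case that chalk's hidden parsing gets wrong:

```javascript
// Decide whether to enable color from your own argument list. Only look at
// args before a bare `--` separator, so flags meant for a spawned child
// program aren't misread as our own.
function colorEnabled(argv) {
  const separator = argv.indexOf('--');
  const ownArgs = separator >= 0 ? argv.slice(0, separator) : argv;
  return ownArgs.includes('--color');
}

// The "2 lines" from the text, using the helper above:
// pencil.enabled = colorEnabled(process.argv.slice(2));
```

Because this lives in your code, the behavior is explicit, testable, and documented alongside the rest of your flags.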

Detecting a Color Terminal

This is another one where the added dependency only detracts, not adds.

We could just do this:

const colorSupport = require('color-support');
const pencil = require('pencil');

pencil.enabled = colorSupport.hasBasic;

Was that so hard? Instead chalk tries to guess on its own. There are plenty of situations where it will guess wrong, which is why making the user add 2 lines of code is arguably a better design. Only they know when it's appropriate to auto detect.

Issues with Dependencies

There are more issues with dependencies than just aesthetics and bloat though.

Dependencies = Less Flexible

The library has chosen specific solutions. If you need different solutions you now have to work around the hard coded ones

Dependencies = More Risk

Every dependency adds risks.

Dependencies = More Work for You

Every dependency a library uses is one more you have to deal with. Library A gets discontinued. Library B has a security bug. Library C has a data leak. Library D doesn't run in the newest version of node, etc...

If the library you were using didn't depend on A, B, C, and D all of those issues disappear. Less work for you. Less things to monitor. Less notifications of issues.

Lower your Dependencies

I picked on chalk and colors here because they're perfect examples of poor tradeoffs. It takes at most 2 lines of code to provide the same functionality without the dependencies, so including them did nothing but add all the issues and risks listed above.

It made more work for every user of chalk since they have to deal with the issues above. It even made more work for the developers of chalk who have to keep the dependencies up to date.

Just like they have a small blurb in their readme on how to implement themes they could have just as easily shown how to do all the other things without the dependencies using just 2 lines of code!

I'm not saying you should never have dependencies. The point is you should evaluate if they are really needed. In the case of chalk it's abundantly clear they were not. If you're adding a library to npm please reduce your dependencies. If it only takes 1 to 3 lines to reproduce the feature without the dependency then just document what to do instead of adding a dep. Your library will be more flexible. You'll expose your users to less risks. You'll make less work for yourself because you won't have to keep updating your deps. You'll make less work for your users because they won't have to keep updating your library just to get new deps.

Less dependencies = Everyone wins!