Comparing Code Styles


In a few projects in the past I made these functions

function createElem(tag, attrs = {}) {
  const elem = document.createElement(tag);
  for (const [key, value] of Object.entries(attrs)) {
    if (typeof value === 'object') {
      for (const [k, v] of Object.entries(value)) {
        elem[key][k] = v;
      }
    } else if (elem[key] === undefined) {
      elem.setAttribute(key, value);
    } else {
      elem[key] = value;
    }
  }
  return elem;
}

function addElem(tag, parent, attrs = {}) {
  const elem = createElem(tag, attrs);
  parent.appendChild(elem);
  return elem;
}

It lets you create an element and fill in its various parts relatively tersely.

For example

const form = addElem('form', document.body);

const checkbox = addElem('input', form, {
  type: 'checkbox',
  id: 'debug',
  className: 'bullseye',
});

const label = addElem('label', form, {
  for: 'debug',
  textContent: 'debugging on',
  style: {
    background: 'red',
  },
});

With the built-in browser API this would be

const form = document.createElement('form');
document.body.appendChild(form);

const checkbox = document.createElement('input');
checkbox.type = 'checkbox';
checkbox.id = 'debug';
checkbox.className = 'bullseye';
form.appendChild(checkbox);

const label = document.createElement('label');
label.setAttribute('for', 'debug');
label.textContent = 'debugging on';
label.style.background = 'red';
form.appendChild(label);

Recently I saw someone post they use a function more like this

function addElem(tag, attrs = {}, children = []) {
  const elem = createElem(tag, attrs);
  for (const child of children) {
    elem.appendChild(child);
  }
  return elem;
}

The difference from mine is that you pass in the children, not the parent. This suggests a nested style like this

document.body.appendChild(addElem('form', {}, [
  addElem('input', {
    type: 'checkbox',
    id: 'debug',
    className: 'bullseye',
  }),
  addElem('label', {
    for: 'debug',
    textContent: 'debugging on',
    style: {
      background: 'red',
    },
  }),
]));
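As an aside, the nested style is easy to play with outside the DOM. Here's a minimal sketch using a made-up helper h that returns plain objects instead of DOM nodes, just to show the shape the nesting produces:

```javascript
// Hypothetical helper: same call shape as addElem(tag, attrs, children),
// but it builds plain objects so the resulting structure is easy to inspect.
function h(tag, attrs = {}, children = []) {
  return { tag, attrs, children };
}

const form = h('form', {}, [
  h('input', { type: 'checkbox', id: 'debug', className: 'bullseye' }),
  h('label', { for: 'debug', textContent: 'debugging on' }),
]);
// form.children[0].attrs.id is 'debug'
```

The tree you build mirrors the markup you'd write, which is exactly what makes the style appealing and, as discussed below, also what makes it one giant expression.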

I tried it out recently when refactoring someone else's code. No idea why I decided to refactor but anyway. Here's the original code

function createTableData(thead, tbody) {
  const row = document.createElement('tr');
  {
    const header = document.createElement('th');
    header.className = "text sortcol";
    header.textContent = "Library";
    row.appendChild(header);
  }
  for (const benchmark of Object.keys(testData)) {
    const header = document.createElement('th');
    header.className = "number sortcol";
    header.textContent = benchmark;
    row.appendChild(header);
  }
  {
    const header = document.createElement('th');
    header.className = "number sortcol sortfirstdesc";
    header.textContent = "Average";
    row.appendChild(header);
  }
  thead.appendChild(row);
  for (let i = 0; i < libraries.length; i++) {
    const row = document.createElement('tr');
    row.id = libraries[i] + '_row';
    {
      const data = document.createElement('td');
      data.style.backgroundColor = colors[i];
      data.style.color = '#ffffff';
      data.style.fontWeight = 'normal';
      data.style.fontFamily = 'Arial Black';
      data.textContent = libraries[i];
      row.appendChild(data);
    }
    for (const benchmark of Object.keys(testData)) {
      const data = document.createElement('td');
      data.id = `${benchmark}_${library_to_id(libraries[i])}_data`;
      data.textContent = "";
      row.appendChild(data);
    }
    {
      const data = document.createElement('td');
      data.id = library_to_id(libraries[i]) + '_ave__data';
      data.textContent = "";
      row.appendChild(data);
    }
    tbody.appendChild(row);
  }
}

While that code is verbose it's relatively easy to follow.

Here's the refactor

function createTableData(thead, tbody) {
  thead.appendChild(addElem('tr', {}, [
    addElem('th', {
      className: "text sortcol",
      textContent: "Library",
    }),
    ...Object.keys(testData).map(benchmark => addElem('th', {
      className: "number sortcol",
      textContent: benchmark,
    })),
    addElem('th', {
      className: "number sortcol sortfirstdesc",
      textContent: "Average",
    }),
  ]));
  for (let i = 0; i < libraries.length; i++) {
    tbody.appendChild(addElem('tr', {
      id: `${libraries[i]}_row`,
    }, [
      addElem('td', {
        style: {
          backgroundColor: colors[i],
          color: '#ffffff',
          fontWeight: 'normal',
          fontFamily: 'Arial Black',
        },
        textContent: libraries[i],
      }),
      ...Object.keys(testData).map(benchmark => addElem('td', {
        id: `${benchmark}_${library_to_id(libraries[i])}_data`,
      })),
      addElem('td', {
        id: `${library_to_id(libraries[i])}_ave__data`,
      }),
    ]));
  }
}

I'm not entirely sure I like it better. What I noticed while writing it is that I had a hard time keeping track of the opening and closing braces, parentheses, and square brackets. Effectively it's one giant expression instead of multiple individual statements.

Maybe if it were JSX it would hold the same structure but be more readable? Assuming we could use JSX here, it would be

function createTableData(thead, tbody) {
  thead.appendChild((
    <tr>
      <th className="text sortcol">Library</th>
      {Object.keys(testData).map(benchmark => (
        <th className="number sortcol">{benchmark}</th>
      ))}
      <th className="number sortcol sortfirstdesc">Average</th>
    </tr>
  ));
  for (let i = 0; i < libraries.length; i++) {
    tbody.appendChild((
      <tr id={`${libraries[i]}_row`}>
        <td style={{
          backgroundColor: colors[i],
          color: '#ffffff',
          fontWeight: 'normal',
          fontFamily: 'Arial Black',
        }}>{libraries[i]}</td>
        {Object.keys(testData).map(benchmark => (
          <td id={`${benchmark}_${library_to_id(libraries[i])}_data`} />
        ))}
        <td id={`${library_to_id(libraries[i])}_ave__data`} />
      </tr>
    ));
  }
}

I really don't know which I like best. I'm sure I don't like the most verbose raw browser API version. The more terse it gets, though, the harder it seems to be to read.

Maybe I just need to come up with a better way to format?

I mostly wrote this post because after the refactor I wasn't sure I was digging it, but writing all this out I have no ideas on how to fix my reservations. I did feel a little like I was solving a puzzle unrelated to the task at hand in order to generate one giant expression.

Maybe my spidey senses are telling me it will be hard to read or edit later? I mean I do try to break down expressions into smaller parts now more than I did in the past. In the past I might have written something like

const dist = Math.sqrt((x2 - x1) ** 2 + (y2 - y1) ** 2);

but nowadays I'd be much more likely to write something like

const dx = x2 - x1;
const dy = y2 - y1;
const distSq = dx * dx + dy * dy;
const dist = Math.sqrt(distSq);

Maybe with such a simple equation it's hard to see why I prefer to spell it out. Maybe I prefer to spell it out because often I'm writing tutorials. Certainly my younger self thought terseness was "cool" but my older self finds terseness for the sake of terseness to be misplaced. I value readability, understandability, editability, and comparability over terseness.



OpenGL Trivia


I am not an OpenGL guru and I'm sure someone who is a guru will protest loudly and rudely in the comments below about something that's wrong here at some point, but ... I effectively wrote an OpenGL ES 2.0 driver for Chrome. During that time I learned a bunch of trivia about OpenGL that I think is probably not common knowledge.

Until OpenGL 3.1 you didn't need to call glGenBuffers, glGenTextures, glGenRenderbuffers, or glGenFramebuffers

You still don't need to call them if you're using the compatibility profile.

The spec effectively said that all the glGenXXX functions do is manage numbers for you but it was perfectly fine to make up your own numbers

GLuint id = 123;
glBindBuffer(GL_ARRAY_BUFFER, id);
glBufferData(GL_ARRAY_BUFFER, sizeof(data), data, GL_STATIC_DRAW);

I found this out when running the OpenGL ES 2.0 conformance tests against the implementation in Chrome as they test for it.

Note: I am not suggesting you should not call glGenXXX! I'm just pointing out the trivia that they don't/didn't need to be called.

Texture 0 is the default texture.

You can set it the same as any other texture

glBindTexture(GL_TEXTURE_2D, 0);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8, 1, 1, 0, GL_RGBA, GL_UNSIGNED_BYTE, oneRedPixel);

Now if you happen to use the default texture it will be red.

I found this out as well when running the OpenGL ES 2.0 conformance tests against the implementation in Chrome as they test for it. It was also a little bit of a disappointment to me that WebGL didn't ship with this feature. I brought it up with the committee when I discovered it but I think people just wanted to ship rather than go back and revisit the spec to make it compatible with OpenGL and OpenGL ES. Especially since this trivia seems not well known and therefore rarely used.

Compiling a shader is not required to fail even if there are errors in your shader.

The spec, at least the ES spec, says that glCompileShader can always return success. The spec only requires that glLinkProgram fail if the shaders are bad.

I found this out as well when running the OpenGL ES 2.0 conformance tests against the implementation in Chrome as they test for it.

This trivia is unlikely to ever matter to you unless you're on some low memory embedded device.

There were no OpenGL conformance tests until 2012-ish

I don't know the actual date but when I was using the OpenGL ES 2.0 conformance tests they were being back ported to OpenGL because there had never been an official set of tests. This is one reason there are so many issues with various OpenGL implementations or at least were in the past. Tests now exist but of course any edge case they miss is almost guaranteed to show inconsistencies across implementations.

This is also a lesson I learned. If you don't have comprehensive conformance tests for your standards, implementations will diverge. Making them comprehensive is hard, but if you don't want your standard to devolve into lots of non-standard edge cases then you need to invest the time to make comprehensive conformance tests and do your best to make them easily usable with implementations other than your own. Not just for APIs; file formats are another place comprehensive conformance tests would likely help keep non-standard variations to a minimum.

Here are the WebGL2 tests as examples and here are the OpenGL tests. The OpenGL ones were not made public until 2017, 25 years after OpenGL shipped.

Whether or not fragments get clipped by the viewport is implementation specific

This may or may not be fixed in the spec but it is not fixed in actual implementations. Originally the viewport setting set by glViewport only clipped vertices (and/or the triangles they create). But, for example, if you draw a 32x32 pixel POINTS point, say, 2 pixels off the edge of the viewport, should the 14 pixels still inside the viewport be drawn? NVidia says yes, AMD says no. The OpenGL ES spec says yes, the OpenGL spec says no.

Arguably the answer should be yes; otherwise POINTS are entirely useless for any size other than 1.0.

POINTS have a max size. That size can be 1.0.

I don't think it's trivia really, but it might be. Plenty of projects might use POINTS for particles, expanding the size based on the distance from the camera, but it turns out the points may never expand, or they might be limited to some size like 64x64.

I find it very strange that there is a limit. I can imagine there is/was dedicated hardware to draw points in the past. It's relatively trivial to implement them yourself using instanced drawing and some trivial math in the vertex shader, with no size limit, so I'm surprised GPUs don't just use that method and drop the limit.

But whatever, it's how it is. Basically you should not use POINTS if you want consistent behavior.

LINES have a max thickness of 1.0 in core OpenGL

Older OpenGL, and therefore the compatibility profile of OpenGL, supports lines of various thicknesses, although like points above the max thickness is driver/GPU dependent and allowed to be just 1.0. But in the core spec, as of OpenGL 3.0, only 1.0 is allowed, period.

The funny thing is the spec still explains how glLineWidth works. It's only buried in the appendix that it doesn't actually work.

E.2.1 Deprecated But Still Supported Features

The following features are deprecated, but still present in the core profile. They may be removed from a future version of OpenGL, and are removed in a forward compatible context implementing the core profile.

  • Wide lines - LineWidth values greater than 1.0 will generate an INVALID_VALUE error.

The point is, except for maybe debugging you probably don't want to use LINES and instead you need to rasterize lines yourself using triangles.

You don't need to set up any attributes or buffers to render.

This comes up from needing to make the smallest repros, either to post on Stack Overflow or to file a bug. Let's assume you're using core OpenGL or OpenGL ES 2.0+ so that you're required to write shaders. Here's the simplest code to test a texture

const GLchar* vsrc = R"(#version 300 es
void main() {
  gl_Position = vec4(0, 0, 0, 1);
  gl_PointSize = 100.0;
}
)";

const GLchar* fsrc = R"(#version 300 es
precision highp float;
uniform sampler2D tex;
out vec4 color;
void main() {
  color = texture(tex, gl_PointCoord);
}
)";
GLuint prg = someUtilToCompileShadersAndLinkToProgram(vsrc, fsrc);

// this block only needed in GL, not GL ES
{
  GLuint vertex_array;
  glGenVertexArrays(1, &vertex_array);
  glBindVertexArray(vertex_array);
}

const GLubyte oneRedPixel[] = { 0xFF, 0x00, 0x00, 0xFF };
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8, 1, 1, 0, GL_RGBA, GL_UNSIGNED_BYTE, oneRedPixel);

glUseProgram(prg);
glDrawArrays(GL_POINTS, 0, 1);

Note: no attributes, no buffers, and I can test things about textures. If I wanted to try multiple things I can just change the vertex shader to

const GLchar* vsrc = R"(#version 300 es
layout(location = 0) in vec4 position;
void main() {
  gl_Position = position;
  gl_PointSize = 100.0;
}
)";

And then use glVertexAttrib to change the position. Example

glVertexAttrib2f(0, -0.5, 0);  // draw on left
glDrawArrays(GL_POINTS, 0, 1);
glVertexAttrib2f(0,  0.5, 0);  // draw on right
glDrawArrays(GL_POINTS, 0, 1);

Note that even if we used this second shader and didn't call glVertexAttrib we'd get a point in the center of the viewport. See next item.

PS: This may only work in the core profile.

The default attribute value is 0, 0, 0, 1

I see this all the time. Someone declares a position attribute as vec3 and then manually sets w to 1.

in vec3 position;
uniform mat4 matrix;
void main() {
  gl_Position = matrix * vec4(position, 1);
}

The thing is, for attributes w defaults to 1.0, so this will work just as well

in vec4 position;
uniform mat4 matrix;
void main() {
  gl_Position = matrix * position;
}

It doesn't matter that you're only supplying x, y, and z from your attributes. w defaults to 1.

Framebuffers are cheap and you should create more of them rather than modify them.

I'm not sure if this is well known or not. It partly falls out from understanding the API.

A framebuffer is a tiny thing that just consists of a collection of references to textures and renderbuffers. Therefore don't be afraid to make more.

Let's say you're doing some multipass post processing where you swap inputs and outputs.

texture A as uniform input => pass 1 shader => texture B attached to framebuffer
texture B as uniform input => pass 2 shader => texture A attached to framebuffer
texture A as uniform input => pass 3 shader => texture B attached to framebuffer
texture B as uniform input => pass 4 shader => texture A attached to framebuffer
...

You can implement this in 2 ways

  1. Make one framebuffer, call gl.framebufferTexture2D to set which texture to render to between passes.

  2. Make 2 framebuffers, attach texture A to one and texture B to the other. Bind the other framebuffer between passes.

Method 2 is better. Every time you change the settings inside a framebuffer the driver potentially has to check a bunch of stuff at render time. Don't change anything and nothing has to be checked again.
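The swapping itself can be sketched in plain JavaScript (no real GL here; textures, framebuffers, and runPasses are made-up stand-ins) to show that with two prebuilt framebuffers each pass only picks an index and never modifies an attachment:

```javascript
// Two textures and two framebuffers, built once up front.
// Framebuffer 0 renders into texB, framebuffer 1 renders into texA.
const textures = ['texA', 'texB'];
const framebuffers = [
  { attachment: 'texB' },
  { attachment: 'texA' },
];

function runPasses(numPasses) {
  const log = [];
  for (let pass = 0; pass < numPasses; ++pass) {
    const srcTexture = textures[pass % 2];  // texture used as uniform input
    const fb = framebuffers[pass % 2];      // prebuilt framebuffer, just bind it
    log.push(`${srcTexture} => pass ${pass + 1} => ${fb.attachment}`);
  }
  return log;
}
```

Nothing inside either framebuffer ever changes between passes; the only per-pass state change is which framebuffer is bound.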

This arguably includes glDrawBuffers which is also framebuffer state. If you need multiple settings for glDrawBuffers make a different framebuffer with the same attachments but different glDrawBuffers settings.

Arguably this is likely a trivial optimization. The more important point is framebuffers themselves are cheap.

The TexImage2D API leads to interesting complications

Not too many people seem to be aware of the implications of TexImage2D. Consider that in order to function on the GPU your texture must be set up with the correct number of mip levels. You can set how many. It could be 1 mip. It could be a bunch, but they each have to be the correct size and format. Let's say you have an 8x8 texture and you want to do the standard thing (not setting any other texture or sampler parameters). You'll also need a 4x4 mip, a 2x2 mip, and a 1x1 mip. You can get those automatically by uploading the level 0 8x8 mip and calling glGenerateMipmap.
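For reference, each mip level is the previous level's size halved, rounding down and clamped to 1, until you reach 1x1. A quick sketch (the helper name mipLevelSizes is made up):

```javascript
// Compute the full mip chain for a texture of the given size.
// Each level halves the previous one (integer divide, minimum 1).
function mipLevelSizes(width, height) {
  const sizes = [[width, height]];
  while (width > 1 || height > 1) {
    width = Math.max(1, width >> 1);
    height = Math.max(1, height >> 1);
    sizes.push([width, height]);
  }
  return sizes;
}
// mipLevelSizes(8, 8) → [[8,8],[4,4],[2,2],[1,1]]
```

For an 8x8 texture that gives four levels; this is the sizing rule the mip chain below has to satisfy.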

Those 4 mip levels need to be copied to the GPU, ideally without wasting a lot of memory. But look at the API. There's nothing in it that says I can't do this

glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8, 8, 8, 0, GL_RGBA, GL_UNSIGNED_BYTE, pixelData8x8);
glTexImage2D(GL_TEXTURE_2D, 1, GL_RGBA8, 20, 40, 0, GL_RGBA, GL_UNSIGNED_BYTE, pixelData40x20);
glTexImage2D(GL_TEXTURE_2D, 2, GL_RGBA8, 10, 20, 0, GL_RGBA, GL_UNSIGNED_BYTE, pixelData20x10);
glTexImage2D(GL_TEXTURE_2D, 3, GL_RGBA8, 5, 10, 0, GL_RGBA, GL_UNSIGNED_BYTE, pixelData10x5);
glTexImage2D(GL_TEXTURE_2D, 4, GL_RGBA8, 2, 5, 0, GL_RGBA, GL_UNSIGNED_BYTE, pixelData5x2);
glTexImage2D(GL_TEXTURE_2D, 5, GL_RGBA8, 1, 2, 0, GL_RGBA, GL_UNSIGNED_BYTE, pixelData2x1);
glTexImage2D(GL_TEXTURE_2D, 6, GL_RGBA8, 1, 1, 0, GL_RGBA, GL_UNSIGNED_BYTE, pixelData1x1);

If it's not clear what that code does a normal mipmap looks like this

but the mip chain above looks like this

Now, the texture above will not render, but the code is valid, no errors, and I can fix it by adding this line at the bottom

glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8, 40, 80, 0, GL_RGBA, GL_UNSIGNED_BYTE, pixelData80x40);

I can even do this

glTexImage2D(GL_TEXTURE_2D, 6, GL_RGBA8, 1000, 1000, 0, GL_RGBA, GL_UNSIGNED_BYTE, pixelData1000x1000);
glTexImage2D(GL_TEXTURE_2D, 6, GL_RGBA8, 1, 1, 0, GL_RGBA, GL_UNSIGNED_BYTE, pixelData1x1);

or this

glTexImage2D(GL_TEXTURE_2D, 6, GL_RGBA8, 1000, 1000, 0, GL_RGBA, GL_UNSIGNED_BYTE, pixelData1000x1000);

Do you see the issue? The API can't actually know anything about what you're trying to do until you actually draw. All the data you send to each mip just has to sit around until you call draw, because there's no way for the API to know beforehand what state all the mips will be in until you finally decide to draw with that texture. Maybe you supply the last mip first. Maybe you supply different internal formats to every mip and then fix them all later.

Ideally you'd specify the level 0 mip and then it would be an error to specify any other mip that doesn't match: same internal format, correct size for the current level 0. That still might not be perfect, because after changing level 0 all the other mips might be the wrong size or format, but it could be that changing level 0 to a different size invalidates all the other mip levels.

This is specifically why TexStorage2D was added, but TexImage2D is pretty much ingrained at this point.


Reduce Your Dependencies


I recently wanted to add colored output to a terminal/command line program. I checked some other project that was outputting color and saw they were using a library called chalk.

All else being equal I prefer smaller libraries to larger ones and I prefer to glue libraries together rather than take a library that tries to combine them for me. So, looking around I found chalk, colors, and ansi-colors. All popular libraries to provide colors in the terminal.

chalk is by far the largest with 5 dependencies totaling 3600 lines of code.

Things it combines:

  • color names
  • color conversion
  • auto-detection of color terminal support
  • peeking at your command line arguments

Next up is colors. It's about 1500 lines of code.

Like chalk it also spies on your command line arguments.

Next up is ansi-colors. It's about 900 lines of code. It claims to be a clone of colors without the excess parts: no auto-detecting support, no spying on your command line. It does include the theme function, if only to try to match colors' API.

Why all these hacks and integrations?


Starting with themes. chalk gets this one correct. They don't do anything. They just show you that it's trivial to do it yourself.

const theme = {
  danger: chalk.red,
};

console.log(theme.danger('on fire'));

Why add a function setTheme just to do that? What happens if I go

colors.setTheme({
  red: 'green',
  green: 'red',
});

Yes you'd never do that but an API shouldn't be designed to fail. What was the point of cluttering this code with this feature when it's so trivial to do yourself?

Color Names

It would arguably be better to just have them as separate libraries. Let's assume the color libraries have a function rgb that takes an array of 3 values. Then you can do this:

const pencil = require('pencil');
const webColors = require('color-name');

pencil.rgb(webColors.burlywood)('some string');


vs

const chalk = require('chalk');

chalk.keyword('burlywood')('some string');


In exchange for breaking the dependency you gain the ability to take the newest color set anytime color-name is updated, rather than having to wait for chalk to update its deps. You also don't have 150 lines of unused JavaScript in your code if you're not using the feature, which you weren't.

Color Conversion

As above the same is true of color conversions

const pencil = require('pencil');
const hsl = require('color-convert').rgb.hsl;

pencil.rgb(hsl(30, 100, 50))('some-string');


vs

const chalk = require('chalk');

chalk.hsl(30, 100, 50)('some-string');

Breaking the dependency removes 1500 lines from the library, lines you probably weren't using anyway. You can update the conversion library if there are bugs or new features you want. You can also use other conversions and they won't have a different coding style.

Command Line hacks

As mentioned above chalk looks at your command line behind the scenes. I don't know how to even describe how horrible that is.

A library peeking at your command line behind the scenes seems like a really bad idea. To do this, not only is it looking at your command line, it's including another library to parse your command line. It has no idea how your command line works. Maybe you're shelling to another program and you have a -- to separate arguments to your program from arguments meant for the program you spawn, as Electron and npm do. How would chalk know this? To fix this you have to hack around chalk using environment variables. But of course if the program you're shelling to also uses chalk it will inherit the environment variables, requiring yet more workarounds. It's just simply a bad idea.

Like the other examples, if your program takes command line arguments it's literally going to be 2 lines to do this yourself. One line to add --color to your list of arguments and one line to use it to configure the color library. Bonus, your command line argument is now documented for your users instead of being some hidden secret.
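As a sketch of the "2 lines" approach (the helper name colorFlagEnabled and the flag handling are hypothetical; a real program would fold this into its existing argument parsing):

```javascript
// Hypothetical: look for --color among our own arguments only,
// stopping at a bare `--`, which by convention separates our args
// from args meant for a spawned program.
function colorFlagEnabled(argv) {
  const end = argv.indexOf('--');
  const ownArgs = end === -1 ? argv : argv.slice(0, end);
  return ownArgs.includes('--color');
}

// then one line to configure the color library, e.g.:
// pencil.enabled = colorFlagEnabled(process.argv.slice(2));
```

Because your program parses the flag, the `--` convention above is handled correctly, something a library guessing behind the scenes can't do.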

Detecting a Color Terminal

This is another one where the added dependency only detracts, not adds.

We could just do this:

const colorSupport = require('color-support');
const pencil = require('pencil');

pencil.enabled = colorSupport.hasBasic;

Was that so hard? Instead chalk tries to guess on its own. There are plenty of situations where it will guess wrong, which is why making the user add 2 lines of code is arguably a better design. Only they know when it's appropriate to auto-detect.

Issues with Dependencies

There are more issues with dependencies than just aesthetics and bloat though.

Dependencies = Less Flexible

The library has chosen specific solutions. If you need different solutions you now have to work around the hard coded ones.

Dependencies = More Risk

Every dependency adds risks.

Dependencies = More Work for You

Every dependency a library uses is one more you have to deal with. Library A gets discontinued. Library B has a security bug. Library C has a data leak. Library D doesn't run in the newest version of node, etc…

If the library you were using didn't depend on A, B, C, and D, all of those issues disappear. Less work for you. Fewer things to monitor. Fewer notifications of issues.

Lower your Dependencies

I picked on chalk and colors here because they're perfect examples of poor tradeoffs. Their dependencies take at most 2 lines of code to provide the same functionality without the dependencies, so including them did nothing but add all the issues and risks listed above.

It made more work for every user of chalk since they have to deal with the issues above. It even made more work for the developers of chalk who have to keep the dependencies up to date.

Just like they have a small blurb in their readme on how to implement themes they could have just as easily shown how to do all the other things without the dependencies using just 2 lines of code!

I'm not saying you should never have dependencies. The point is you should evaluate whether they are really needed. In the case of chalk it's abundantly clear they were not. If you're adding a library to npm, please reduce your dependencies. If it only takes 1 to 3 lines to reproduce the feature without the dependency, then just document what to do instead of adding a dep. Your library will be more flexible. You'll expose your users to fewer risks. You'll make less work for yourself because you won't have to keep updating your deps. You'll make less work for your users because they won't have to keep updating your library just to get new deps.

Less dependencies = Everyone wins!


What to do about dependencies


More rants on the dependencies issue

So today I needed to copy a file in a node based JavaScript build step.

Background: For those that don't know, node has a package manager called npm (Node Package Manager). Packages have a package.json file that defines tons of things, including a "scripts" section which is effectively just tiny command line strings associated with keywords.


"scripts": {
   "build": "make -f makefile",
   "test": "runtest-harness"

So you can now type npm run build to run the build script and it will run just as if you had typed make -f makefile.

Other than organization, the biggest plus is that if you have any development dependencies, npm will look in those locally installed dependencies to run the commands. This means all your tools can be local to your project. If this project needs lint 1.6 and some other project needs lint 2.9, no worries. Just add the correct version of lint to your development dependencies and npm will run it for you.
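As a sketch (the package name and versions here are made up), that might look like:

```json
{
  "scripts": {
    "lint": "lint src"
  },
  "devDependencies": {
    "lint": "^1.6.0"
  }
}
```

npm run lint will then find the locally installed lint in node_modules/.bin rather than whatever happens to be installed globally.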

But then the issue comes up: I wanted to copy a file. I could use a bigger build system, but for small things you can imagine just wanting to use cp, as in

"scripts": {
   "build": "make -f makefile && cp a.out dist/MyApp",

The problem is cp is mac/linux only. If you care about Windows devs being able to build on Windows then you can't use cp. The solution is to add a node based copy command to your development dependencies; then you can use it cross platform.

So, I go looking for copy commands. One of the most popular is cpy-cli. Here's its dependency tree

└─┬ cpy-cli@2.0.0
  β”œβ”€β”¬ cpy@7.3.0
  β”‚ β”œβ”€β”€ arrify@1.0.1
  β”‚ β”œβ”€β”¬ cp-file@6.2.0
  β”‚ β”‚ β”œβ”€β”€ graceful-fs@4.2.3
  β”‚ β”‚ β”œβ”€β”¬ make-dir@2.1.0
  β”‚ β”‚ β”‚ β”œβ”€β”€ pify@4.0.1 deduped
  β”‚ β”‚ β”‚ └── semver@5.7.1 deduped
  β”‚ β”‚ β”œβ”€β”€ nested-error-stacks@2.1.0 deduped
  β”‚ β”‚ β”œβ”€β”€ pify@4.0.1
  β”‚ β”‚ └── safe-buffer@5.2.0
  β”‚ β”œβ”€β”¬ globby@9.2.0
  β”‚ β”‚ β”œβ”€β”¬ @types/glob@7.1.1
  β”‚ β”‚ β”‚ β”œβ”€β”€ @types/events@3.0.0
  β”‚ β”‚ β”‚ β”œβ”€β”€ @types/minimatch@3.0.3
  β”‚ β”‚ β”‚ └── @types/node@12.11.6
  β”‚ β”‚ β”œβ”€β”¬ array-union@1.0.2
  β”‚ β”‚ β”‚ └── array-uniq@1.0.3
  β”‚ β”‚ β”œβ”€β”¬ dir-glob@2.2.2
  β”‚ β”‚ β”‚ └─┬ path-type@3.0.0
  β”‚ β”‚ β”‚   └── pify@3.0.0
  β”‚ β”‚ β”œβ”€β”¬ fast-glob@2.2.7
  β”‚ β”‚ β”‚ β”œβ”€β”¬ @mrmlnc/readdir-enhanced@2.2.1
  β”‚ β”‚ β”‚ β”‚ β”œβ”€β”€ call-me-maybe@1.0.1
  β”‚ β”‚ β”‚ β”‚ └── glob-to-regexp@0.3.0
  β”‚ β”‚ β”‚ β”œβ”€β”€ @nodelib/fs.stat@1.1.3
  β”‚ β”‚ β”‚ β”œβ”€β”¬ glob-parent@3.1.0
  β”‚ β”‚ β”‚ β”‚ β”œβ”€β”¬ is-glob@3.1.0
  β”‚ β”‚ β”‚ β”‚ β”‚ └── is-extglob@2.1.1 deduped
  β”‚ β”‚ β”‚ β”‚ └── path-dirname@1.0.2
  β”‚ β”‚ β”‚ β”œβ”€β”¬ is-glob@4.0.1
  β”‚ β”‚ β”‚ β”‚ └── is-extglob@2.1.1
  β”‚ β”‚ β”‚ β”œβ”€β”€ merge2@1.3.0
  β”‚ β”‚ β”‚ └─┬ micromatch@3.1.10
  β”‚ β”‚ β”‚   β”œβ”€β”€ arr-diff@4.0.0
  β”‚ β”‚ β”‚   β”œβ”€β”€ array-unique@0.3.2
  β”‚ β”‚ β”‚   β”œβ”€β”¬ braces@2.3.2
  β”‚ β”‚ β”‚   β”‚ β”œβ”€β”€ arr-flatten@1.1.0
  β”‚ β”‚ β”‚   β”‚ β”œβ”€β”€ array-unique@0.3.2 deduped
  β”‚ β”‚ β”‚   β”‚ β”œβ”€β”¬ extend-shallow@2.0.1
  β”‚ β”‚ β”‚   β”‚ β”‚ └── is-extendable@0.1.1
  β”‚ β”‚ β”‚   β”‚ β”œβ”€β”¬ fill-range@4.0.0
  β”‚ β”‚ β”‚   β”‚ β”‚ β”œβ”€β”¬ extend-shallow@2.0.1
  β”‚ β”‚ β”‚   β”‚ β”‚ β”‚ └── is-extendable@0.1.1 deduped
  β”‚ β”‚ β”‚   β”‚ β”‚ β”œβ”€β”¬ is-number@3.0.0
  β”‚ β”‚ β”‚   β”‚ β”‚ β”‚ └─┬ kind-of@3.2.2
  β”‚ β”‚ β”‚   β”‚ β”‚ β”‚   └── is-buffer@1.1.6
  β”‚ β”‚ β”‚   β”‚ β”‚ β”œβ”€β”€ repeat-string@1.6.1
  β”‚ β”‚ β”‚   β”‚ β”‚ └─┬ to-regex-range@2.1.1
  β”‚ β”‚ β”‚   β”‚ β”‚   β”œβ”€β”€ is-number@3.0.0 deduped
  β”‚ β”‚ β”‚   β”‚ β”‚   └── repeat-string@1.6.1 deduped
  β”‚ β”‚ β”‚   β”‚ β”œβ”€β”€ isobject@3.0.1
  β”‚ β”‚ β”‚   β”‚ β”œβ”€β”€ repeat-element@1.1.3
  β”‚ β”‚ β”‚   β”‚ β”œβ”€β”€ snapdragon@0.8.2 deduped
  β”‚ β”‚ β”‚   β”‚ β”œβ”€β”¬ snapdragon-node@2.1.1
  β”‚ β”‚ β”‚   β”‚ β”‚ β”œβ”€β”¬ define-property@1.0.0
  β”‚ β”‚ β”‚   β”‚ β”‚ β”‚ └─┬ is-descriptor@1.0.2
  β”‚ β”‚ β”‚   β”‚ β”‚ β”‚   β”œβ”€β”¬ is-accessor-descriptor@1.0.0
  β”‚ β”‚ β”‚   β”‚ β”‚ β”‚   β”‚ └── kind-of@6.0.2 deduped
  β”‚ β”‚ β”‚   β”‚ β”‚ β”‚   β”œβ”€β”¬ is-data-descriptor@1.0.0
  β”‚ β”‚ β”‚   β”‚ β”‚ β”‚   β”‚ └── kind-of@6.0.2 deduped
  β”‚ β”‚ β”‚   β”‚ β”‚ β”‚   └── kind-of@6.0.2 deduped
  β”‚ β”‚ β”‚   β”‚ β”‚ β”œβ”€β”€ isobject@3.0.1 deduped
  β”‚ β”‚ β”‚   β”‚ β”‚ └─┬ snapdragon-util@3.0.1
  β”‚ β”‚ β”‚   β”‚ β”‚   └─┬ kind-of@3.2.2
  β”‚ β”‚ β”‚   β”‚ β”‚     └── is-buffer@1.1.6 deduped
  β”‚ β”‚ β”‚   β”‚ β”œβ”€β”¬ split-string@3.1.0
  β”‚ β”‚ β”‚   β”‚ β”‚ └── extend-shallow@3.0.2 deduped
  β”‚ β”‚ β”‚   β”‚ └── to-regex@3.0.2 deduped
  β”‚ β”‚ β”‚   β”œβ”€β”¬ define-property@2.0.2
  β”‚ β”‚ β”‚   β”‚ β”œβ”€β”¬ is-descriptor@1.0.2
  β”‚ β”‚ β”‚   β”‚ β”‚ β”œβ”€β”¬ is-accessor-descriptor@1.0.0
  β”‚ β”‚ β”‚   β”‚ β”‚ β”‚ └── kind-of@6.0.2 deduped
  β”‚ β”‚ β”‚   β”‚ β”‚ β”œβ”€β”¬ is-data-descriptor@1.0.0
  β”‚ β”‚ β”‚   β”‚ β”‚ β”‚ └── kind-of@6.0.2 deduped
  β”‚ β”‚ β”‚   β”‚ β”‚ └── kind-of@6.0.2 deduped
  β”‚ β”‚ β”‚   β”‚ └── isobject@3.0.1 deduped
  β”‚ β”‚ β”‚   β”œβ”€β”¬ extend-shallow@3.0.2
  β”‚ β”‚ β”‚   β”‚ β”œβ”€β”€ assign-symbols@1.0.0
  β”‚ β”‚ β”‚   β”‚ └─┬ is-extendable@1.0.1
  β”‚ β”‚ β”‚   β”‚   └─┬ is-plain-object@2.0.4
  β”‚ β”‚ β”‚   β”‚     └── isobject@3.0.1 deduped
  β”‚ β”‚ β”‚   β”œβ”€β”¬ extglob@2.0.4
  β”‚ β”‚ β”‚   β”‚ β”œβ”€β”€ array-unique@0.3.2 deduped
  β”‚ β”‚ β”‚   β”‚ β”œβ”€β”¬ define-property@1.0.0
  β”‚ β”‚ β”‚   β”‚ β”‚ └─┬ is-descriptor@1.0.2
  β”‚ β”‚ β”‚   β”‚ β”‚   β”œβ”€β”¬ is-accessor-descriptor@1.0.0
  β”‚ β”‚ β”‚   β”‚ β”‚   β”‚ └── kind-of@6.0.2 deduped
  β”‚ β”‚ β”‚   β”‚ β”‚   β”œβ”€β”¬ is-data-descriptor@1.0.0
  β”‚ β”‚ β”‚   β”‚ β”‚   β”‚ └── kind-of@6.0.2 deduped
  β”‚ β”‚ β”‚   β”‚ β”‚   └── kind-of@6.0.2 deduped
  β”‚ β”‚ β”‚   β”‚ β”œβ”€β”¬ expand-brackets@2.1.4
  β”‚ β”‚ β”‚   β”‚ β”‚ β”œβ”€β”€ debug@2.6.9 deduped
  β”‚ β”‚ β”‚   β”‚ β”‚ β”œβ”€β”¬ define-property@0.2.5
  β”‚ β”‚ β”‚   β”‚ β”‚ β”‚ └── is-descriptor@0.1.6 deduped
  β”‚ β”‚ β”‚   β”‚ β”‚ β”œβ”€β”¬ extend-shallow@2.0.1
  β”‚ β”‚ β”‚   β”‚ β”‚ β”‚ └── is-extendable@0.1.1 deduped
  β”‚ β”‚ β”‚   β”‚ β”‚ β”œβ”€β”€ posix-character-classes@0.1.1
  β”‚ β”‚ β”‚   β”‚ β”‚ β”œβ”€β”€ regex-not@1.0.2 deduped
  β”‚ β”‚ β”‚   β”‚ β”‚ β”œβ”€β”€ snapdragon@0.8.2 deduped
  β”‚ β”‚ β”‚   β”‚ β”‚ └── to-regex@3.0.2 deduped
  β”‚ β”‚ β”‚   β”‚ β”œβ”€β”¬ extend-shallow@2.0.1
  β”‚ β”‚ β”‚   β”‚ β”‚ └── is-extendable@0.1.1 deduped
  β”‚ β”‚ β”‚   β”‚ β”œβ”€β”€ fragment-cache@0.2.1 deduped
  β”‚ β”‚ β”‚   β”‚ β”œβ”€β”€ regex-not@1.0.2 deduped
  β”‚ β”‚ β”‚   β”‚ β”œβ”€β”€ snapdragon@0.8.2 deduped
  β”‚ β”‚ β”‚   β”‚ └── to-regex@3.0.2 deduped
  β”‚ β”‚ β”‚   β”œβ”€β”¬ fragment-cache@0.2.1
  β”‚ β”‚ β”‚   β”‚ └── map-cache@0.2.2
  β”‚ β”‚ β”‚   β”œβ”€β”€ kind-of@6.0.2
  β”‚ β”‚ β”‚   β”œβ”€β”¬ nanomatch@1.2.13
  β”‚ β”‚ β”‚   β”‚ β”œβ”€β”€ arr-diff@4.0.0 deduped
  β”‚ β”‚ β”‚   β”‚ β”œβ”€β”€ array-unique@0.3.2 deduped
  β”‚ β”‚ β”‚   β”‚ β”œβ”€β”€ define-property@2.0.2 deduped
  β”‚ β”‚ β”‚   β”‚ β”œβ”€β”€ extend-shallow@3.0.2 deduped
  β”‚ β”‚ β”‚   β”‚ β”œβ”€β”€ fragment-cache@0.2.1 deduped
  β”‚ β”‚ β”‚   β”‚ β”œβ”€β”€ is-windows@1.0.2
  β”‚ β”‚ β”‚   β”‚ β”œβ”€β”€ kind-of@6.0.2 deduped
  β”‚ β”‚ β”‚   β”‚ β”œβ”€β”€ object.pick@1.3.0 deduped
  β”‚ β”‚ β”‚   β”‚ β”œβ”€β”€ regex-not@1.0.2 deduped
  β”‚ β”‚ β”‚   β”‚ β”œβ”€β”€ snapdragon@0.8.2 deduped
  β”‚ β”‚ β”‚   β”‚ └── to-regex@3.0.2 deduped
  β”‚ β”‚ β”‚   β”œβ”€β”¬ object.pick@1.3.0
  β”‚ β”‚ β”‚   β”‚ └── isobject@3.0.1 deduped
  β”‚ β”‚ β”‚   β”œβ”€β”¬ regex-not@1.0.2
  β”‚ β”‚ β”‚   β”‚ β”œβ”€β”€ extend-shallow@3.0.2 deduped
  β”‚ β”‚ β”‚   β”‚ └─┬ safe-regex@1.1.0
  β”‚ β”‚ β”‚   β”‚   └── ret@0.1.15
  β”‚ β”‚ β”‚   β”œβ”€β”¬ snapdragon@0.8.2
  β”‚ β”‚ β”‚   β”‚ β”œβ”€β”¬ base@0.11.2
  β”‚ β”‚ β”‚   β”‚ β”‚ β”œβ”€β”¬ cache-base@1.0.1
  β”‚ β”‚ β”‚   β”‚ β”‚ β”‚ β”œβ”€β”¬ collection-visit@1.0.0
  β”‚ β”‚ β”‚   β”‚ β”‚ β”‚ β”‚ β”œβ”€β”¬ map-visit@1.0.0
  β”‚ β”‚ β”‚   β”‚ β”‚ β”‚ β”‚ β”‚ └── object-visit@1.0.1 deduped
  β”‚ β”‚ β”‚   β”‚ β”‚ β”‚ β”‚ └─┬ object-visit@1.0.1
  β”‚ β”‚ β”‚   β”‚ β”‚ β”‚ β”‚   └── isobject@3.0.1 deduped
  β”‚ β”‚ β”‚   β”‚ β”‚ β”‚ β”œβ”€β”€ component-emitter@1.3.0 deduped
  β”‚ β”‚ β”‚   β”‚ β”‚ β”‚ β”œβ”€β”€ get-value@2.0.6
  β”‚ β”‚ β”‚   β”‚ β”‚ β”‚ β”œβ”€β”¬ has-value@1.0.0
  β”‚ β”‚ β”‚   β”‚ β”‚ β”‚ β”‚ β”œβ”€β”€ get-value@2.0.6 deduped
  β”‚ β”‚ β”‚   β”‚ β”‚ β”‚ β”‚ β”œβ”€β”¬ has-values@1.0.0
  β”‚ β”‚ β”‚   β”‚ β”‚ β”‚ β”‚ β”‚ β”œβ”€β”€ is-number@3.0.0 deduped
  β”‚ β”‚ β”‚   β”‚ β”‚ β”‚ β”‚ β”‚ └─┬ kind-of@4.0.0
  β”‚ β”‚ β”‚   β”‚ β”‚ β”‚ β”‚ β”‚   └── is-buffer@1.1.6 deduped
  β”‚ β”‚ β”‚   β”‚ β”‚ β”‚ β”‚ └── isobject@3.0.1 deduped
  β”‚ β”‚ β”‚   β”‚ β”‚ β”‚ β”œβ”€β”€ isobject@3.0.1 deduped
  β”‚ β”‚ β”‚   β”‚ β”‚ β”‚ β”œβ”€β”¬ set-value@2.0.1
  β”‚ β”‚ β”‚   β”‚ β”‚ β”‚ β”‚ β”œβ”€β”¬ extend-shallow@2.0.1
  β”‚ β”‚ β”‚   β”‚ β”‚ β”‚ β”‚ β”‚ └── is-extendable@0.1.1 deduped
  β”‚ β”‚ β”‚   β”‚ β”‚ β”‚ β”‚ β”œβ”€β”€ is-extendable@0.1.1 deduped
  β”‚ β”‚ β”‚   β”‚ β”‚ β”‚ β”‚ β”œβ”€β”€ is-plain-object@2.0.4 deduped
  β”‚ β”‚ β”‚   β”‚ β”‚ β”‚ β”‚ └── split-string@3.1.0 deduped
  β”‚ β”‚ β”‚   β”‚ β”‚ β”‚ β”œβ”€β”¬ to-object-path@0.3.0
  β”‚ β”‚ β”‚   β”‚ β”‚ β”‚ β”‚ └─┬ kind-of@3.2.2
  β”‚ β”‚ β”‚   β”‚ β”‚ β”‚ β”‚   └── is-buffer@1.1.6 deduped
  β”‚ β”‚ β”‚   β”‚ β”‚ β”‚ β”œβ”€β”¬ union-value@1.0.1
  β”‚ β”‚ β”‚   β”‚ β”‚ β”‚ β”‚ β”œβ”€β”€ arr-union@3.1.0 deduped
  β”‚ β”‚ β”‚   β”‚ β”‚ β”‚ β”‚ β”œβ”€β”€ get-value@2.0.6 deduped
  β”‚ β”‚ β”‚   β”‚ β”‚ β”‚ β”‚ β”œβ”€β”€ is-extendable@0.1.1 deduped
  β”‚ β”‚ β”‚   β”‚ β”‚ β”‚ β”‚ └── set-value@2.0.1 deduped
  β”‚ β”‚ β”‚   β”‚ β”‚ β”‚ └─┬ unset-value@1.0.0
  β”‚ β”‚ β”‚   β”‚ β”‚ β”‚   β”œβ”€β”¬ has-value@0.3.1
  β”‚ β”‚ β”‚   β”‚ β”‚ β”‚   β”‚ β”œβ”€β”€ get-value@2.0.6 deduped
  β”‚ β”‚ β”‚   β”‚ β”‚ β”‚   β”‚ β”œβ”€β”€ has-values@0.1.4
  β”‚ β”‚ β”‚   β”‚ β”‚ β”‚   β”‚ └─┬ isobject@2.1.0
  β”‚ β”‚ β”‚   β”‚ β”‚ β”‚   β”‚   └── isarray@1.0.0
  β”‚ β”‚ β”‚   β”‚ β”‚ β”‚   └── isobject@3.0.1 deduped
  β”‚ β”‚ β”‚   β”‚ β”‚ β”œβ”€β”¬ class-utils@0.3.6
  β”‚ β”‚ β”‚   β”‚ β”‚ β”‚ β”œβ”€β”€ arr-union@3.1.0
  β”‚ β”‚ β”‚   β”‚ β”‚ β”‚ β”œβ”€β”¬ define-property@0.2.5
  β”‚ β”‚ β”‚   β”‚ β”‚ β”‚ β”‚ └── is-descriptor@0.1.6 deduped
  β”‚ β”‚ β”‚   β”‚ β”‚ β”‚ β”œβ”€β”€ isobject@3.0.1 deduped
  β”‚ β”‚ β”‚   β”‚ β”‚ β”‚ └─┬ static-extend@0.1.2
  β”‚ β”‚ β”‚   β”‚ β”‚ β”‚   β”œβ”€β”¬ define-property@0.2.5
  β”‚ β”‚ β”‚   β”‚ β”‚ β”‚   β”‚ └── is-descriptor@0.1.6 deduped
  β”‚ β”‚ β”‚   β”‚ β”‚ β”‚   └─┬ object-copy@0.1.0
  β”‚ β”‚ β”‚   β”‚ β”‚ β”‚     β”œβ”€β”€ copy-descriptor@0.1.1
  β”‚ β”‚ β”‚   β”‚ β”‚ β”‚     β”œβ”€β”¬ define-property@0.2.5
  β”‚ β”‚ β”‚   β”‚ β”‚ β”‚     β”‚ └── is-descriptor@0.1.6 deduped
  β”‚ β”‚ β”‚   β”‚ β”‚ β”‚     └─┬ kind-of@3.2.2
  β”‚ β”‚ β”‚   β”‚ β”‚ β”‚       └── is-buffer@1.1.6 deduped
  β”‚ β”‚ β”‚   β”‚ β”‚ β”œβ”€β”€ component-emitter@1.3.0
  β”‚ β”‚ β”‚   β”‚ β”‚ β”œβ”€β”¬ define-property@1.0.0
  β”‚ β”‚ β”‚   β”‚ β”‚ β”‚ └─┬ is-descriptor@1.0.2
  β”‚ β”‚ β”‚   β”‚ β”‚ β”‚   β”œβ”€β”¬ is-accessor-descriptor@1.0.0
  β”‚ β”‚ β”‚   β”‚ β”‚ β”‚   β”‚ └── kind-of@6.0.2 deduped
  β”‚ β”‚ β”‚   β”‚ β”‚ β”‚   β”œβ”€β”¬ is-data-descriptor@1.0.0
  β”‚ β”‚ β”‚   β”‚ β”‚ β”‚   β”‚ └── kind-of@6.0.2 deduped
  β”‚ β”‚ β”‚   β”‚ β”‚ β”‚   └── kind-of@6.0.2 deduped
  β”‚ β”‚ β”‚   β”‚ β”‚ β”œβ”€β”€ isobject@3.0.1 deduped
  β”‚ β”‚ β”‚   β”‚ β”‚ β”œβ”€β”¬ mixin-deep@1.3.2
  β”‚ β”‚ β”‚   β”‚ β”‚ β”‚ β”œβ”€β”€ for-in@1.0.2
  β”‚ β”‚ β”‚   β”‚ β”‚ β”‚ └─┬ is-extendable@1.0.1
  β”‚ β”‚ β”‚   β”‚ β”‚ β”‚   └── is-plain-object@2.0.4 deduped
  β”‚ β”‚ β”‚   β”‚ β”‚ └── pascalcase@0.1.1
  β”‚ β”‚ β”‚   β”‚ β”œβ”€β”¬ debug@2.6.9
  β”‚ β”‚ β”‚   β”‚ β”‚ └── ms@2.0.0
  β”‚ β”‚ β”‚   β”‚ β”œβ”€β”¬ define-property@0.2.5
  β”‚ β”‚ β”‚   β”‚ β”‚ └─┬ is-descriptor@0.1.6
  β”‚ β”‚ β”‚   β”‚ β”‚   β”œβ”€β”¬ is-accessor-descriptor@0.1.6
  β”‚ β”‚ β”‚   β”‚ β”‚   β”‚ └─┬ kind-of@3.2.2
  β”‚ β”‚ β”‚   β”‚ β”‚   β”‚   └── is-buffer@1.1.6 deduped
  β”‚ β”‚ β”‚   β”‚ β”‚   β”œβ”€β”¬ is-data-descriptor@0.1.4
  β”‚ β”‚ β”‚   β”‚ β”‚   β”‚ └─┬ kind-of@3.2.2
  β”‚ β”‚ β”‚   β”‚ β”‚   β”‚   └── is-buffer@1.1.6 deduped
  β”‚ β”‚ β”‚   β”‚ β”‚   └── kind-of@5.1.0
  β”‚ β”‚ β”‚   β”‚ β”œβ”€β”¬ extend-shallow@2.0.1
  β”‚ β”‚ β”‚   β”‚ β”‚ └── is-extendable@0.1.1 deduped
  β”‚ β”‚ β”‚   β”‚ β”œβ”€β”€ map-cache@0.2.2 deduped
  β”‚ β”‚ β”‚   β”‚ β”œβ”€β”€ source-map@0.5.7
  β”‚ β”‚ β”‚   β”‚ β”œβ”€β”¬ source-map-resolve@0.5.2
  β”‚ β”‚ β”‚   β”‚ β”‚ β”œβ”€β”€ atob@2.1.2
  β”‚ β”‚ β”‚   β”‚ β”‚ β”œβ”€β”€ decode-uri-component@0.2.0
  β”‚ β”‚ β”‚   β”‚ β”‚ β”œβ”€β”€ resolve-url@0.2.1
  β”‚ β”‚ β”‚   β”‚ β”‚ β”œβ”€β”€ source-map-url@0.4.0
  β”‚ β”‚ β”‚   β”‚ β”‚ └── urix@0.1.0
  β”‚ β”‚ β”‚   β”‚ └── use@3.1.1
  β”‚ β”‚ β”‚   └─┬ to-regex@3.0.2
  β”‚ β”‚ β”‚     β”œβ”€β”€ define-property@2.0.2 deduped
  β”‚ β”‚ β”‚     β”œβ”€β”€ extend-shallow@3.0.2 deduped
  β”‚ β”‚ β”‚     β”œβ”€β”€ regex-not@1.0.2 deduped
  β”‚ β”‚ β”‚     └── safe-regex@1.1.0 deduped
  β”‚ β”‚ β”œβ”€β”¬ glob@7.1.5
  β”‚ β”‚ β”‚ β”œβ”€β”€ fs.realpath@1.0.0
  β”‚ β”‚ β”‚ β”œβ”€β”¬ inflight@1.0.6
  β”‚ β”‚ β”‚ β”‚ β”œβ”€β”€ once@1.4.0 deduped
  β”‚ β”‚ β”‚ β”‚ └── wrappy@1.0.2
  β”‚ β”‚ β”‚ β”œβ”€β”€ inherits@2.0.4
  β”‚ β”‚ β”‚ β”œβ”€β”¬ minimatch@3.0.4
  β”‚ β”‚ β”‚ β”‚ └─┬ brace-expansion@1.1.11
  β”‚ β”‚ β”‚ β”‚   β”œβ”€β”€ balanced-match@1.0.0
  β”‚ β”‚ β”‚ β”‚   └── concat-map@0.0.1
  β”‚ β”‚ β”‚ β”œβ”€β”¬ once@1.4.0
  β”‚ β”‚ β”‚ β”‚ └── wrappy@1.0.2 deduped
  β”‚ β”‚ β”‚ └── path-is-absolute@1.0.1
  β”‚ β”‚ β”œβ”€β”€ ignore@4.0.6
  β”‚ β”‚ β”œβ”€β”€ pify@4.0.1 deduped
  β”‚ β”‚ └── slash@2.0.0
  β”‚ └── nested-error-stacks@2.1.0
  └─┬ meow@5.0.0
    β”œβ”€β”¬ camelcase-keys@4.2.0
    β”‚ β”œβ”€β”€ camelcase@4.1.0
    β”‚ β”œβ”€β”€ map-obj@2.0.0
    β”‚ └── quick-lru@1.1.0
    β”œβ”€β”¬ decamelize-keys@1.1.0
    β”‚ β”œβ”€β”€ decamelize@1.2.0
    β”‚ └── map-obj@1.0.1
    β”œβ”€β”¬ loud-rejection@1.6.0
    β”‚ β”œβ”€β”¬ currently-unhandled@0.4.1
    β”‚ β”‚ └── array-find-index@1.0.2
    β”‚ └── signal-exit@3.0.2
    β”œβ”€β”¬ minimist-options@3.0.2
    β”‚ β”œβ”€β”€ arrify@1.0.1 deduped
    β”‚ └── is-plain-obj@1.1.0
    β”œβ”€β”¬ normalize-package-data@2.5.0
    β”‚ β”œβ”€β”€ hosted-git-info@2.8.5
    β”‚ β”œβ”€β”¬ resolve@1.12.0
    β”‚ β”‚ └── path-parse@1.0.6
    β”‚ β”œβ”€β”€ semver@5.7.1
    β”‚ └─┬ validate-npm-package-license@3.0.4
    β”‚   β”œβ”€β”¬ spdx-correct@3.1.0
    β”‚   β”‚ β”œβ”€β”€ spdx-expression-parse@3.0.0 deduped
    β”‚   β”‚ └── spdx-license-ids@3.0.5
    β”‚   └─┬ spdx-expression-parse@3.0.0
    β”‚     β”œβ”€β”€ spdx-exceptions@2.2.0
    β”‚     └── spdx-license-ids@3.0.5 deduped
    β”œβ”€β”¬ read-pkg-up@3.0.0
    β”‚ β”œβ”€β”¬ find-up@2.1.0
    β”‚ β”‚ └─┬ locate-path@2.0.0
    β”‚ β”‚   β”œβ”€β”¬ p-locate@2.0.0
    β”‚ β”‚   β”‚ └─┬ p-limit@1.3.0
    β”‚ β”‚   β”‚   └── p-try@1.0.0
    β”‚ β”‚   └── path-exists@3.0.0
    β”‚ └─┬ read-pkg@3.0.0
    β”‚   β”œβ”€β”¬ load-json-file@4.0.0
    β”‚   β”‚ β”œβ”€β”€ graceful-fs@4.2.3 deduped
    β”‚   β”‚ β”œβ”€β”¬ parse-json@4.0.0
    β”‚   β”‚ β”‚ β”œβ”€β”¬ error-ex@1.3.2
    β”‚   β”‚ β”‚ β”‚ └── is-arrayish@0.2.1
    β”‚   β”‚ β”‚ └── json-parse-better-errors@1.0.2
    β”‚   β”‚ β”œβ”€β”€ pify@3.0.0
    β”‚   β”‚ └── strip-bom@3.0.0
    β”‚   β”œβ”€β”€ normalize-package-data@2.5.0 deduped
    β”‚   └── path-type@3.0.0 deduped
    β”œβ”€β”¬ redent@2.0.0
    β”‚ β”œβ”€β”€ indent-string@3.2.0
    β”‚ └── strip-indent@2.0.0
    β”œβ”€β”€ trim-newlines@2.0.0
    └─┬ yargs-parser@10.1.0
      └── camelcase@4.1.0 deduped

Yea, what the actual Effing F!?

197 dependencies, 1170 files, 47000 lines of JavaScript to copy files.

I ended up writing my own. Here's the entire program

const fs = require('fs');
const src = process.argv[2];
const dst = process.argv[3];
fs.copyFileSync(src, dst);

And I added it to my build like this

"scripts": {
   "build": "make -f makefile && node copy.js a.out dist/MyApp"
},

So, my first reaction was: yea, something here is massively over engineered. Or maybe that's under engineered, if by "under engineered" we mean "made without thinking".

You might think: so what? People have large hard drives, fast internet, lots of memory. Who cares about dependencies? Well, the more dependencies you have, the more you get messages like this

found 35 vulnerabilities (1 low, 2 moderate, 31 high, 1 critical) in 1668 scanned packages

You get more and more and more maintenance with more dependencies.

Not only that, you become dependent, not just on the software but on the people maintaining that software. Above, 197 dependencies also means trusting that none of them is doing anything bad. As far as we know, one of those dependencies could easily have a time bomb waiting until some day in the future to pwn your machine or server.

On the other hand, my copy.js copies a single file, while cpy-cli works more like cp: it can copy multiple files and whole trees.

I started wondering what it would take to add the minimal features to reproduce a functional cp clone. Note: not a full clone, a functional clone. I'm sure cp has a million features, but in my entire 40 year career I've only used about 2 of them: (1) copying using a wildcard, as in cp *.txt dst, which honestly is handled by the shell, not cp, and (2) copying recursively, as in cp -R src dst.

The first thing I did was look at a command line argument library. I've used one called optionator in the past and it's fine. I checked, and it has several dependencies. Two that stick out are:

  1. a wordwrap library.

    This is used to make your command's help fit the size of the terminal you're in. Definitely a useful feature. I have terminals of all different sizes. I default to having 4 open.

  2. a levenshtein distance library.

    This is used so that if you specify a switch that doesn't exist it can try to suggest the correct one. For example you might type:

       my-copy-clone --src=abc.txt -destinatoin=def.txt

    and it would say something like

       no such switch: 'destinatoin', did you mean 'destination'?

    Yea, that's kind of useful too.
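For what it's worth, that suggestion trick doesn't strictly need a library. Here's a minimal sketch I wrote for illustration (NOT optionator's actual code): the classic dynamic-programming Levenshtein distance plus a helper that picks the closest known switch.

```javascript
// Minimal sketch, written for this post (not optionator's code): classic
// dynamic-programming Levenshtein distance plus a "did you mean" helper.
function levenshtein(a, b) {
  // d[i][j] = edit distance between the first i chars of a and first j chars of b
  const d = Array.from({length: a.length + 1}, (_, i) =>
    Array.from({length: b.length + 1}, (_, j) => (i === 0 ? j : j === 0 ? i : 0)));
  for (let i = 1; i <= a.length; ++i) {
    for (let j = 1; j <= b.length; ++j) {
      d[i][j] = Math.min(
        d[i - 1][j] + 1,                                     // delete a[i-1]
        d[i][j - 1] + 1,                                     // insert b[j-1]
        d[i - 1][j - 1] + (a[i - 1] === b[j - 1] ? 0 : 1));  // substitute
    }
  }
  return d[a.length][b.length];
}

function suggest(badSwitch, knownSwitches) {
  const [closest] = [...knownSwitches].sort(
    (x, y) => levenshtein(badSwitch, x) - levenshtein(badSwitch, y));
  return closest;
}

console.log(suggest('destinatoin', ['src', 'destination', 'verbose']));
```

You'd call suggest with the unrecognized switch and the list of switches your program knows about, and print the closest match in the error message.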

Okay, so my 4 line copy.js just got 3500 lines of libraries added. Or maybe I should look into another library that uses fewer deps, now that I'm "woke" about dependencies.

Meh, I decided to parse my own arguments rather than take on 3500 lines of code and 7 dependencies. Here's the code

#!/usr/bin/env node

'use strict';

const fs = require('fs');
const ldcp = require('../src/ldcp');

const args = process.argv.slice(2);

const options = {
  recurse: false,
  dryRun: false,
  verbose: false,
};

while (args.length && args[0].startsWith('-')) {
  const opt = args.shift();
  switch (opt) {
    case '-v':
    case '--verbose':
      options.verbose = true;
      break;
    case '--dry-run':
      options.dryRun = true;
      options.verbose = true;
      break;
    case '-R':
      options.recurse = true;
      break;
    default:
      console.error('illegal option:', opt);
      printUsage();
  }
}

function printUsage() {
  console.log('usage: ldcp [-R] src_file dst_file\n       ldcp [-R] src_file ... dst_dir');
  process.exit(1);
}

const dst = args.pop();
if (args.length < 1) {
  printUsage();
}

Now that the args are parsed we need a function to copy the files

const path = require('path');
const fs = require('fs');

const defaultAPI = {
  copyFileSync(...args) { return fs.copyFileSync(...args); },
  mkdirSync(...args) { return fs.mkdirSync(...args); },
  statSync(...args) { return fs.statSync(...args); },
  readdirSync(...args) { return fs.readdirSync(...args); },
  log() {},
};

function ldcp(_srcs, dst, options, api = defaultAPI) {
  const {recurse} = options;

  // check if dst is or needs to be a directory
  const dstStat = safeStat(dst);
  let isDstDirectory = false;
  let needMakeDir = false;
  if (dstStat) {
    isDstDirectory = dstStat.isDirectory();
  } else {
    isDstDirectory = recurse;
    needMakeDir = recurse;
  }

  if (!recurse && _srcs.length > 1 && !isDstDirectory) {
    throw new Error('can not copy multiple files to same dst file');
  }

  const srcs = [];

  // handle the case where src ends with / like cp
  for (const src of _srcs) {
    if (recurse) {
      const srcStat = safeStat(src);
      if ((needMakeDir && srcStat && srcStat.isDirectory()) ||
          (src.endsWith('/') || src.endsWith('\\'))) {
        srcs.push(...api.readdirSync(src).map(f => path.join(src, f)));
        continue;
      }
    }
    srcs.push(src);
  }

  const srcDsts = [{srcs, dst, isDstDirectory, needMakeDir}];

  while (srcDsts.length) {
    const {srcs, dst, isDstDirectory, needMakeDir} = srcDsts.shift();

    if (needMakeDir) {
      api.log('mkdir', dst);
      api.mkdirSync(dst);
    }

    for (const src of srcs) {
      const dstFilename = isDstDirectory ? path.join(dst, path.basename(src)) : dst;
      if (recurse) {
        const srcStat = api.statSync(src);
        if (srcStat.isDirectory()) {
          // queue this directory's contents to be copied after the current batch
          srcDsts.push({
            srcs: api.readdirSync(src).map(f => path.join(src, f)),
            dst: path.join(dst, path.basename(src)),
            isDstDirectory: true,
            needMakeDir: true,
          });
          continue;
        }
      }
      api.log('copy', src, dstFilename);
      api.copyFileSync(src, dstFilename);
    }
  }

  function safeStat(filename) {
    try {
      return api.statSync(filename.replace(/(\\|\/)$/, ''));
    } catch (e) {
      // doesn't exist
    }
  }
}

I made it so you can pass in an optional API of all the external functions it calls. That way you can pass in, for example, functions that do nothing if you want to test it. Or you can pass in graceful-fs if that's your jam, but in the interest of NOT adding dependencies, if you want that, that's on you. Simple!
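As a concrete example (my sketch, not code from the shipped program), a do-nothing API for testing could record the calls the copy logic would have made instead of touching the filesystem:

```javascript
// A sketch of a do-nothing API for testing: same shape as defaultAPI,
// but it records calls instead of touching the filesystem.
function makeRecordingAPI() {
  const calls = [];
  return {
    calls,  // every call the copy logic would have made, in order
    copyFileSync(src, dst) { calls.push(['copy', src, dst]); },
    mkdirSync(dir) { calls.push(['mkdir', dir]); },
    statSync() { return {isDirectory: () => false}; },  // pretend everything is a plain file
    readdirSync() { return []; },
    log() {},
  };
}

const api = makeRecordingAPI();
api.copyFileSync('a.txt', 'b.txt');
console.log(api.calls);
```

Pass it as the last argument, as in ldcp(srcs, dst, options, makeRecordingAPI()), and then assert on the recorded calls afterwards.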

All that's left is using it after parsing the args

const log = options.verbose ? console.log.bind(console) : () => {};
const api = options.dryRun ? {
  // dry run: stat the src so missing files still error, but don't copy
  copyFileSync(src) { fs.statSync(src); },
  mkdirSync() { },
  statSync(...args) { return fs.statSync(...args); },
  readdirSync(...args) { return fs.readdirSync(...args); },
  log,
} : {
  copyFileSync(...args) { return fs.copyFileSync(...args); },
  mkdirSync(...args) { return fs.mkdirSync(...args); },
  statSync(...args) { return fs.statSync(...args); },
  readdirSync(...args) { return fs.readdirSync(...args); },
  log,
};

ldcp(args, dst, options, api);

Total lines: 176 and 0 dependencies.

It's here if you want it.


10 Things Apple Could do to Increase Privacy.


Apple under Tim Cook is staking out the claim that they are "the Privacy company".

Apple products are designed to protect your privacy.

At Apple, we believe privacy is a fundamental human right.

Here are 10 things they could do to actually honor that mission.

1. Disallow Apps from using the camera directly.

This one is problematic but ... the majority of apps that ask to use your camera do not actually need access to your camera. Examples are the Facebook App, the Messenger App, the Twitter App, even the Instagram App. Instead Apple could change their APIs such that the app asks for a camera picture and the OS takes the picture.

This removes the need for those apps to have access to the camera at all. The only thing the app would get is the picture you took using the built in camera functionality controlled by the OS itself. If you don't take a picture and pick "Use Picture" then the app never sees anything.

As it is now you really have no idea what the app is doing. When you are in the Facebook app, once you've given the app permission to use the camera then as far as you know the app is streaming video, or pictures to Facebook constantly. You have absolutely no idea.

By changing the API so that the app is required to ask the OS for a photo that problem would be solved.

The problem with this solution is it doesn't cover streaming video since in that case the app needs the constant video. It also doesn't cover unique apps that do special things with the camera.

One solution to the unique camera feature issue would be app store rules. Basically only "camera" apps would be allowed to use the camera directly. SNS apps and other apps that just need a photo would be rejected if they asked for camera permission instead of asking the OS for a photo.

Another solution might be that the OS always asks the user for permission to use the camera (or at least provides the option). In other words, if you are in some app like the Instagram app and you click the "take a photo" image, the OS asks you "Allow App To Use The Camera?" each and every time. As it is now it only asks once. For those people that are privacy conscious, being able to grant the app permission each and every time would prevent spying.

2. Disallow Apps from using the Mic directly

See the previous section; just replace every instance of "camera" with "mic".

3. Disallow access to all Photos

This is similar to the two above but, as it is now, apps like the Facebook App, Twitter, etc. will ask for permission to access your photos. They do this so they can provide an interface to let you choose photos to post on Facebook or tweet on Twitter.

The problem is the moment you give them permission they can immediately look at ALL of your photos. All of them!

It would be better if Apple changed the API so the app asks the OS to ask you to choose 1 or more photos. The OS would then present an interface to choose 1 or more photos at which point only those photos you chose are given to the app.

That way apps could not read all of your photos.

Note that I get that some apps also want permission to read all your photos to enable uploading all of them automatically as you take them. That's fine, it should just be a separate permission, and Apple should enforce that features that let you choose photos to upload go through the OS's photo chooser, and that apps that want full permission to access all photos for things like backup must also function without that permission when selecting photos for other purposes.

4. Let GPS be one time only

There are 3 options for GPS currently

  1. Let the app use GPS always
  2. Let the app use GPS when active
  3. Disallow GPS

There needs to be a 4th

  1. Ask for permission each time

As it is, basically if you give an app permission to use GPS at all then every time you access that app it gets to know where you are.

It would be much more privacy oriented if you could choose to only give it GPS access for a moment, next 5 minutes, next 30 minutes, etc...

As it is now if you're privacy conscious you have to dig deep into the settings app for the privacy options. Give an app permission for GPS, then remember to dig through those options again to turn GPS permission back off a few minutes later.

That's not a very privacy oriented design.

5. Disallow apps from implementing an internal web browser.

Many apps show links to websites. For example Twitter or Facebook or the Google Maps app. When you click the links those apps open a web browser directly inside their app.

This means they can spy on everything you do in that web browser. That's not privacy oriented.

Apple should disallow having an internal web browser. They could do this by enforcing a policy that you can only make an app that can access all websites if that app is a web browser app. Otherwise you have to list the sites your app is allowed to access and that list has to be relatively small.

Many apps are actually just an app that goes directly to some company's website, which is fine. The app can list that company's sites as the only sites it accesses. Otherwise it's not allowed to access any other websites.

This would force apps to launch the user's browser when they click a link, which would mean the apps could no longer spy on your browser activity. The most they could do is know the link you clicked. They couldn't know every link you click after that, nor could they log everything you enter on every website you visit while in their app, as they can do now.

Note that this would also be a better user experience IMO. Users are used to the features available in their browser. For example being able to search in a page. Being able to turn on reader mode. Being able to bookmark and have those bookmarks sync. Being able to use an ad blocker. Etc... As it is, when an app uses an internal web browser, none of these features are available. It's inconsistent and inconvenient for the user. By forcing apps to launch the user's browser all of that is solved.

Note: Apple should also allow setting a default browser so that users can choose Firefox or Brave or Chrome or whatever browser they choose for the features they want. If I use Firefox on my Mac I want to be able to bookmark things on iOS and have those bookmarks synced to my Mac, but that becomes cumbersome if the OS keeps launching Safari instead of Firefox or whatever my browser of choice is.

6. Put a light on the camera/mic?

In Japan there is a law that phone cameras must make a shutter noise. I actually despise that law. I want to be able to take pictures of my delicious gourmet meal in a quiet fancy restaurant without alerting and annoying all the other guests that I'm doing so. Japan claims this is to prevent perverts from taking upskirt pictures, but perverts can just buy non-phone cameras, or use an app, since apps are not bound by the same law. In effect this law does absolutely nothing except make it annoying and embarrassing to take pictures in quiet places.

On the other hand, if there was a small green or orange light next to the camera, physically connected to the camera's power so that it came on whenever the camera is on, then I'd know when the camera was in use. That would at least be a privacy oriented feature, and so unlike the law above it would have a point.

If they wanted to be cute they could use a multi-color LED where red = camera is on, green = mic is on, yellow = both are on.

Let me add, I wish Apple devices had a built in camera cover, or at least the Macs. I know you can buy a 3rd party one, but adding a built in cover would show Apple is serious about privacy.

7. Disallow scanning WiFi / Bluetooth for most apps

AFAIK any app can scan WiFi and/or Bluetooth. Apps can use this info to know your location even if you have GPS off.

Basically there are databases of every WiFi's SSID (the name you pick to connect to a WiFi hotspot/router), and those databases also record each WiFi's GPS location, so if they know which WiFis are near you then they basically know where you are.

Here's a website where you can see what I'm talking about. Zoom in anywhere in the world and it will show the known WiFi hotspots / routers.

Why do most apps need this ability? They don't! Why doesn't Apple disallow it for most apps?

There are exceptions. I have a WiFi scanner app and a WiFi signal strength app and even a Bluetooth scanner and testing app that are very useful, but Apple could easily have an App Store policy that only network utilities are allowed to use this powerful spying feature.

There is absolutely no reason the Twitter app or the Facebook app need to be able to see WiFi SSIDs nor local bluetooth devices.

Apple could easily add a permission requirement to use these features and only allow select apps to have them. Or they could add it as yet another per app privacy setting.

8. Allow more Browser engines

This one is probably the most controversial suggestion here. The reasoning though goes like this

Safari is not even remotely the most secure browser.

This is provable by looking through the National Vulnerability Database (NVD) run by the National Institute of Standards and Technology (NIST).

In it you can see that while all browsers have around the same number of vulnerabilities, the types of vulnerabilities are different. Some browsers are designed to be more secure and so are less likely to have vulnerabilities that compromise your device and therefore your privacy. To put it slightly more concretely, 2 browsers might both have 150 vulnerabilities a year, but one might have 90% code execution vulnerabilities (your device and data are compromised) and the other might have 90% DOS vulnerabilities (your device slows down or freezes but no data is compromised). If you check the database you'll find it's true that some browsers have orders of magnitude more code execution vulnerabilities than others.

By allowing competing browser engines users would have the choice to run those empirically more secure browser engines.

As it is now Safari has zero competition on iOS. A developer can make a new browser but it's really just Safari with a skin. That means Apple has less competition and so there is less pressure to make Safari better.

Allowing competing browser engines would both be a win for privacy and encourage faster development of Safari.

The number 1 objection I hear is that allowing other engines is a security issue but that is also provably false. See the NVD above. Other engines are more secure. By disallowing other engines you prevent users from protecting themselves from being hacked and therefore having their privacy invaded.

Another objection I hear is that JITing, turning JavaScript into machine code, is something only Apple should be able to do. That argument basically boils down to: Apple's app sandbox is insecure and all apps must be 100% perfect or else they can escape the sandbox. You can't have it both ways. Either Apple's app sandbox is insecure and therefore the whole product is insecure, OR Apple's app sandbox is secure and therefore allowing JITing doesn't affect that security. Now of course Apple's app sandbox could have bugs, but those bugs can be exploited by any app. The solution is for Apple to be diligent and fix the bugs quickly. The solution is not to make up some bogus JIT restriction.

To make an analogy, if a product is advertised as waterproof then it had better actually be waterproof. It can't come with some disclaimer that says "waterproof to 100 meters but don't actually put this product in water as it might break".

The JIT argument is basically the same. "Our app sandbox is secure but don't actually run any code". It's clear the JIT argument is bogus. It exists only to allow Apple a monopoly on browsers on iOS, so they don't have to compete and so they can wield veto power over all browser standards. Since only they can make new browser features available to their 1.4 billion iOS devices, if they don't support a feature it might as well not exist. Since devs can't use the feature with those 1.4 billion devices, they generally just avoid the feature altogether, even on non-iOS devices.

All that is the long way of saying users would be more secure and get better privacy if they could run more secure and more privacy oriented browsers.

9. Lower the price of Apple products or come out with cheaper alternatives

Apple fans won't like this reason. I don't consider myself an Apple fan, and yet I own a Macbook Air, a Macbook Pro, a Mac Mini, an iPad Air 3rd Generation, an iPhone 6+, an iPhone X, and an Apple TV 4, and at one point I also owned a late 2018 iPad Pro and a 4th Gen Apple Watch, so clearly I also like Apple even if I don't consider myself a fanatic.

The thing is, Apple is expensive. People will argue Apple's quality is high and worth the price, and that might be true, but it's kind of beside the point. You could make the argument that a BMW or Mercedes Benz is a higher quality car than a Kia or a Hyundai, but for someone who only has the budget for a used Kia or Hyundai, it's not realistic to ask them to buy a BMW or Mercedes.

Similarly if you have a family of 4 and you want to give everyone in the family their own laptop computer you can buy 4 Windows laptops for the price of the cheapest Mac laptop. Sure those $200-$300 laptops are not nearly as nice as a Macbook Air but just like a Kia will still get you to your job a $250 Windows laptop will still let you browse the internet, run Microsoft Word, Illustrator, Photoshop, listen to music, watch youtube, edit a blog, read reddit, learn to program, etc.... It's unrealistic to ask a family of four to spend $4400 for 4 mac laptops instead of $1200 for 4 windows laptops.

Now you might be thinking: so what… people who can afford it should be able to spend their money on whatever they want. That's no different than anything else. Rich people buy penthouses just off Central Park and poor people live in trailer parks. The difference though is that for most expensive things there are functionally equivalent inexpensive alternatives. A Kia will get you to work just as well as a BMW. Cheap clothing from Old Navy or Uniqlo or H&M will clothe you just as well as clothing from Versace or Prada or Louis Vuitton or pick your favorite expensive brand. The food at Applebees will feed you just as well as the food from French Laundry. A $250 Vizio TV will let you watch TV just as functionally as a $4000 Sony.

But, if Apple really is the only privacy oriented option, if Android and Windows don't take your privacy seriously, then Apple being out of reach of so many people … well, I don't know what words to use, but it basically says that people who can't afford Apple don't deserve privacy.

Of course that's not Apple's fault. Microsoft for Windows and Google for Android could step up and start making their OSes stop sucking for privacy.

My only point is if Apple is "the privacy company" then at the moment they are really "the privacy company for non-poor people" only and that they could be the privacy company for everyone if they offered some more affordable alternatives.

10. Stop asking for passwords to repair

If you take your Apple device in for repair they will ask you for your password or passcode. What the actual Effing Eff!??? Privacy? What? What's that? No, give us the password that unlocks all of your bank accounts, shopping accounts, bitcoin accounts, etc. Give us the password that lets us look at all your photos and videos. Give us the password that gives us access to the email on your device so that we can use that to open all other accounts by asking for password resets. Give us the password for the device that has all your two-factor codes and apps that confirm login on various services.

This is Apple's default stance. If you take a device in for service they will ask you for your password or passcode. That is not the kind of policy a privacy first company would have!

If you object they might tell you to change your password to something else and then change it back after you've gotten the repaired device back. That helps them not to know your normal password. It doesn't prevent all the stuff above.

If it's a Mac they'll give you the option to either turn on the guest account or add another account for them to login. Unfortunately that's really no better. If you're actually privacy oriented you'll have encrypted the hard drive. Giving them a password that unlocks the drive effectively gives them access to all your data whether or not a particular account has access to that data.

You can opt out of that too, in which case they'll basically throw up their hands and say "In that case we may not be able to confirm the repairs". Another option is you can format the drive before giving it to them. Is that really the only option a privacy oriented company should give you?

Now I get it, I'm sympathetic to the fact that it's harder for them if you don't give them the password. Still, for a Mac they can plug in an external drive and boot off that and at least confirm the machine itself is fine. For an iOS device, if they really are a "Privacy First" company then they need to find another way. They need to design a way to service the products that doesn't risk your privacy and risk exposing all your data.

Do I trust Apple as a company? Mostly. Do I trust every $15 an hour employee at the store, like the one asking for the password? No! Do I even trust some repair technician making more money but who may be getting paid on the side to scoop up login credentials? No! Do I know they destroy the info when the repair is finished? Nope! They ask you to write it down. As far as I know I could go dig through the trash behind an Apple store and find tons of credentials. Also as far as I know it's all stored in their service database ready to be stolen or hacked.

A privacy first company would do something different. They might, for example, back up your entire hard drive or iDevice, then reformat it, work on it, then restore. They might put it all on a USB drive and hand the drive to you; you bring it back when the physical repairs are done, they restore from it, and they reformat the USB drive. If that's too slow then that's just incentive for them to make it faster. They might add some special service partition or service mode they can boot devices into.

The point is, a company that claims to take privacy seriously shouldn't be asking you to tell them the single most important password in your life. The password that unlocks all other passwords.

I'm not really hopeful Apple will make these changes but I'd argue if they don't make them then their statements of

Apple products are designed to protect your privacy.

At Apple, we believe privacy is a fundamental human right.

are really just marketing and not at all real. Let's hope the commitment is real and that they take more steps to increase user privacy.


A Bad rant on a bad rant on OpenGL ES


A long time ago some idiot wrote a rant about OpenGL ES. I say "some idiot" deliberately because that article itself called people idiots.

The rant is basically this: the person wrote a demo many years ago in OpenGL. He then decided he wanted to see it run on the iPhone, which used OpenGL ES. He thought it would be trivial, but instead it was a ton of work, so he vomited out a rant.

He rants that you should never remove features from an API, just deprecate them. First off, it's not the same API, which is why the name actually changed, so his argument basically comes down to ranting that they didn't change the name enough.

OpenGL vs OpenGL ES

Note: ES = Embedded Systems

If they had called it WaterGraphicsAPI he'd have no reason to complain. If they had called it OpenGL Mini maybe again he'd not have complained because the 'mini' would make it clear it's going to be missing things. So the entire rant basically comes down to his confusion at the name. It's a valid rant that they should have chosen a less confusing name. The rest of the rant is completely without merit though.

OpenGL was invented by SGI in 1991 for specialized machines that were very fast at the time and cost tens of thousands, even hundreds of thousands, of dollars.

OpenGL ES was introduced in 2003 and was designed for feature phones with as little as 32k of memory and extremely slow processors. They could draw hundreds of polygons a frame vs the hundreds of thousands or millions available on machines running OpenGL.

He rants they should have kept these deprecated features anyway but never considers the repercussions of such a decision. So what would have been the results of keeping the old features?

One benefit is some shitty old demos would run on new hardware. Unfortunately they'd run extremely slowly, making the hardware look bad and costing sales. Devs would rant "my 10 year old demo that runs on $40k hardware runs like shit on this 32k-ram phone", and other devs would never consider that the real issue is that guy's shitty techniques and his failure to adapt to the new constraints. In other words it would scare devs away from the platform. How is that a plus?

Note that even if his demo ran well on an iPhone, that's got nothing to do with it. OpenGL ES wasn't written for iPhones; it precedes them by 4 years.

Another issue is work time. He claims it took 3 days to re-write his code, implying it would only have taken a few more days to keep the old API. No. As someone who has implemented the old API, it's actually a shit-ton of work, especially if you want to pass the conformance tests so that your implementation behaves as the spec says it should. He only had to support the parts of the API his demo needed, not the entire API.

So what do we get? Probably 6+ man-months of work, maybe $100k of time, and for what? So people can port shitty old demos that run too slowly and make the phones look bad. Not a good trade-off. I'd much rather those 6 man-months of work get spent making the next version of the hardware, or fixing bugs in other places, or adding new features. Anywhere but wasting it on nonsense goals. Worse, there would be pressure to waste more and more time making the old API as performant as possible to stop lazy devs from making the platform look bad with their poor API usage. Again, wasting time and money that could be spent elsewhere.

Another issue is ROM space. Keeping the old API requires space in the phone itself. So you need a larger ROM, making the phone cost more. Your phone ends up selling less because of the higher cost, all so some jerk can run his 10 year old demo on your phone, a demo that runs too slowly and further hurts sales.

Yet another issue is RAM. The old APIs require 2x to 3x the RAM because of the way they need to be emulated. So devs try to port their old stuff; maybe they're lucky and it runs, but then they try to add some features and run out of space. Some percentage of devs will now ship this shitty software rather than optimize. Pretty much "I'm out of memory so I'm done. Let's ship!". Effectively users get worse apps and a bad experience, causing unhappy users and fewer sales.

On the other hand OpenGL (not ES) has followed his advice. You can use OpenGL 4.x today and use the OpenGL 1.x API on it. The result is, every week I see new devs asking questions on Stack Overflow and they're using the 10-15yr old deprecated API. They are effectively having their time wasted. They're learning the wrong thing for progression in their careers and learning out of date trivia. The world would arguably be better off if those deprecated APIs just stopped working. Apple, many tech people's favorite company, does this all the time. Some API is deprecated, they give you warnings for 1-2 yrs and if you haven't updated your app it ceases to function.

The short of it is, with even just a little thought, it was exactly the correct decision to remove those features from the API. It was right because it saves ROM space. It was right because it saves RAM. It was right because it makes the phone cheaper. It was right because it saves personnel time that can be used better elsewhere. It was right because it would be a waste of money to pay to have it done. It was right because it prevents new devs from wasting their time learning obsolete APIs. It was right because it discourages old boring ports. It was right because it discourages lazy devs from making the new hardware look bad with their ugly 10yr old slow running demos.

The only issue is the name. They should have chosen a different name for the API. On that I agree, and if they had, that idiot's entire rant would never have appeared.


Could ImGUI be the future of GUIs?


A random, not fully thought out idea came up: could something like Dear ImGUI ever be the future of mainstream UI libraries?

For those that don't know what an Immediate Mode GUI or ImGUI is there's a semi famous video by Casey Muratori about it from 2005ish.

Most programmers that use an ImGUI style find it infinitely easier to make UIs with them than traditional retained mode GUIs. They also find them significantly more performant.

The typical retained-mode, object-oriented GUI framework is a system where you basically create a scenegraph of GUI framework widgets (windows, grids, sliders, buttons, checkboxes, etc.). You copy your data into those widgets, then wait for events or callbacks to be told when a widget was edited, then query the widget's values and copy them back into your data.

This pattern is used in practically every GUI system out there. Windows, WPF, the HTML DOM, Apple's UIKit, Qt, you name it. 99% of GUI frameworks are retained-mode, object-oriented scenegraph GUIs.
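
To make the copy-in / wait-for-events / copy-out cycle concrete, here's a toy sketch of the pattern in JavaScript. SliderWidget and its methods are invented for illustration, not taken from any real framework:

```javascript
// A toy retained-mode widget: it keeps its own copy of the value.
class SliderWidget {
  constructor(caption, value, min, max) {
    this.caption = caption;
    this.value = value;        // the widget's private copy of your data
    this.min = min;
    this.max = max;
    this.onChange = null;      // callback fired when the user edits it
  }
  // simulate the user dragging the slider
  userDragTo(newValue) {
    this.value = Math.min(this.max, Math.max(this.min, newValue));
    if (this.onChange) this.onChange(this.value);
  }
}

const state = { speed: 50 };

// 1. create the widget and copy your data into it
const slider = new SliderWidget('Speed:', state.speed, 0, 100);

// 2. wait for an event, then copy the widget's value back into your data
slider.onChange = (newValue) => { state.speed = newValue; };

// 3. if state.speed changes elsewhere you must push it back into the widget
function setSpeed(newValue) {
  state.speed = newValue;
  slider.value = newValue;   // keep the widget in sync by hand
}
```

Notice the same value now lives in two places, your state and the widget, and it's on you to keep them in sync in both directions.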

A few problems with this GUI style are the amount of state, the data marshalling in and out of widgets, and the event/callback plumbing needed to keep the widgets and your data in sync.

In contrast, in an ImGUI there are no objects and there is almost no state. The simple explanation of most ImGUIs is that you call functions like

// draw a button
if (ImGUI::Button("Click Me")) {
  // handle the click
}

// draw a slider
ImGUI::SliderFloat("Speed:", &someInstance.speed, 0.0f, 100.0f);

Button and SliderFloat each do two things.

  1. They append into a vector (array) the positions and texture coordinates needed to draw the widget (or not insert them if they'd be clipped off screen or outside the current window / clip rectangle)

  2. They check the position of the mouse pointer, the state of the keyboard, etc., to manipulate that widget. If the data changed they return the new value immediately.
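
Those two steps can be sketched in a few lines of JavaScript. This is a hypothetical, stripped-down immediate-mode core with invented names and no real rendering, just to show the shape:

```javascript
// Per-frame state the library keeps: a vertex array and an input snapshot.
const frame = { vertices: [], mouse: { x: 0, y: 0, pressed: false } };

function beginFrame(mouse) {
  frame.vertices.length = 0;   // the vertex array is rebuilt every frame
  frame.mouse = mouse;
}

// An immediate-mode button: appends its quad and reports clicks, all at once.
function Button(label, x, y, w, h) {
  // 1. append the positions needed to draw the widget
  frame.vertices.push([x, y], [x + w, y], [x + w, y + h], [x, y + h]);
  // 2. check the input state and return the result immediately
  const m = frame.mouse;
  return m.pressed && m.x >= x && m.x < x + w && m.y >= y && m.y < y + h;
}

// Every frame the UI code just runs top to bottom:
let clicks = 0;
beginFrame({ x: 15, y: 20, pressed: true });
if (Button('Click Me', 10, 10, 80, 24)) {
  clicks++;
}
```

There's no button object anywhere; the "button" exists only as the vertices it appended this frame and the boolean it returned.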

So, pluses:

Possible minus

Perceived but probably not minuses

I guess I'm really curious. I know most GUI framework authors are skeptical that ImGUIs are a good pattern. AFAICT though, no one has really tried. As mentioned above, most ImGUIs are used for game development. It would take a concerted effort to find the right patterns to completely replicate something as fancy as, say, Apple's UIKit. Could it be done and stay performant? Would it lose the performance by adding back in all the features? Does the basic design of an ImGUI mean it would end up keeping the performance and the ease of use? Would we find certain features are just impossible to really implement without a scenegraph?

Let me also add that to some degree React is similar to an ImGUI in usage. React has JSX but it's just a shorthand for function calls. The biggest differences would be

If we were to translate the code above into some imaginary ImReact it might be something like

const Button = (props) => {
  return ImGUI.Button(props.caption);
};

const SliderFloat = (props) => {
  return ImGUI.SliderFloat(props.caption, props.value, props.min, props.max);
};

const Form = (props) => {
  if (<Button caption="Click Me" />) {
    // handle the click
  }
  <SliderFloat min="0" max="100" value="&props.speed" caption="Speed:" />
};

Just looking at that React code you can see the translation back into real code is really straightforward.

I'm not exactly sure how the update to speed would work, but I guess I'm mixing C++ (ImGUI) with JavaScript (React). Typical ImGUIs either pass in a pointer to a primitive, something JavaScript doesn't have, or they return the new value, as in

newValue = ImGUI::SliderFloat(caption, currentValue, min, max);

which if you want to use the same as the Dear ImGUI C++ example you'd write

someInstance.speed = ImGUI::SliderFloat("Speed:", someInstance.speed, 0.0f, 100.0f);

So if we assumed that style of API then

const Button = (props) => {
  return ImGUI.Button(props.caption);
};

const SliderFloat = (props) => {
  return ImGUI.SliderFloat(props.caption, props.value, props.min, props.max);
};

const Form = (props) => {
  if (<Button caption="Click Me" />) {
    // handle the click
  }
  props.speed = (<SliderFloat min="0" max="100" value={props.speed} caption="Speed:" />);
};

Notice the components are not returning virtual DOM nodes since there's no need. The only thing we're really taking from React is JSX, just to show that you could use a React-style pattern if you wanted to.

Note: Don't get caught up in the direct state manipulation in the example. How you update state should not be dictated by your UI library. You're free to manage state any way you please regardless of which UI system you use. Still, the example shows how simple the ImGUI style is.

state.value = ImGUI::SliderFloat(caption, state.value, min, max);

is certainly simpler than

// at init time
const slider = new SliderWidget(caption, state.value, min, max);
slider.onChange = function(newValue) {
  state.value = newValue;
};

// if state.value changed, the slider needs to be told to show the new value
function updateSlider(newValue) {
  state.value = newValue;
  slider.setValue(newValue);  // assuming the widget has a setValue method
}

Even worse, you now need to somehow call updateSlider everywhere state.value is updated, or you need to write some elaborate system so that every place that wants to update state.value calls into a system that tracks all the widgets and which state they reflect.

ImGUI libraries need no such complication. There is no widget. Every frame, whatever value is in your state is what's in the widget. This is the same promise React makes, but React ends up hobbled by sitting on top of slow retained-mode GUI libraries.

As an example of the complexity that's possible, probably the most prolific ImGUI is Unity's editor UI.

So at least there is some precedent for using an ImGUI in a user-facing app instead of just a game, even if Unity itself is for making games.

There are also lots of screenshots of various ImGUI made UIs in the readme.

Here is also a live version of the included example in the Dear ImGUI library

If you decide to interact with it, be aware it wasn't actually designed for the browser and so has issues that need fixing. Those issues are easily fixable, so don't get bogged down nitpicking them. Rather, notice how complex the UI is and yet it's running at 60fps. Use the "Examples" menu in the main window to open more windows. Expand the sections in the main window and see all kinds of live and complex widgets.

Now imagine you tried to make an equally complex UI using HTML/DOM/React. Not only would the HTML/DOM version have lots of pauses and likely not run at 60fps, it would probably cost 5x to 10x as much along multiple dimensions. One dimension is how much code you have to write to implement the UI using HTML/DOM and/or React vs ImGUI. The other is how much code executes to get the UI on the screen. I suspect the number of CPU instructions executed in the HTML/DOM version is up to 100x more than in the ImGUI version.

Consider the ImGUI::Button function vs making a <button> element.

For the <button> element

  1. An HTMLButtonElement object has to be created.

    It has all of these properties that need to be set to something

     autofocus: boolean 
     disabled: boolean 
     form: object 
     formAction: string 
     formEnctype: string 
     formMethod: string 
     formNoValidate: boolean 
     formTarget: string 
     name: string 
     type: string 
     value: string 
     willValidate: boolean 
     validity: object ValidityState
     validationMessage: string 
     labels: object NodeList
     title: string 
     lang: string 
     translate: boolean 
     dir: string 
     dataset: object DOMStringMap
     hidden: boolean 
     tabIndex: number 
     accessKey: string 
     draggable: boolean 
     spellcheck: boolean 
     autocapitalize: string 
     contentEditable: string 
     isContentEditable: boolean 
     inputMode: string 
     offsetParent: object 
     offsetTop: number 
     offsetLeft: number 
     offsetWidth: number 
     offsetHeight: number 
     style: object CSSStyleDeclaration
     namespaceURI: string 
     localName: string 
     tagName: string 
     id: string 
     classList: object DOMTokenList
     attributes: object NamedNodeMap
     scrollTop: number 
     scrollLeft: number 
     scrollWidth: number 
     scrollHeight: number 
     clientTop: number 
     clientLeft: number 
     clientWidth: number 
     clientHeight: number 
     attributeStyleMap: object StylePropertyMap
     previousElementSibling: object 
     nextElementSibling: object 
     children: object HTMLCollection
     firstElementChild: object 
     lastElementChild: object 
     childElementCount: number 
     nodeType: number 
     nodeName: string 
     baseURI: string 
     isConnected: boolean 
     ownerDocument: object HTMLDocument
     parentNode: object 
     parentElement: object 
     childNodes: object NodeList
     firstChild: object 
     lastChild: object 
     previousSibling: object 
     nextSibling: object 
     nodeValue: object 
     textContent: string 
  2. More objects need to be created.

    Looking above we can see we need to create

    NodeList            // an empty list of children of this button
    HTMLCollection      // another empty list of children of this button
    StylePropertyMap    //
    NameNodeMap         // the attributes
    DOMTokenList        // the CSS classes as a list
    CSSStyleDeclaration // an object used to deal with CSS
    DOMStringMap        // empty but used for dataset attributes
    ValidityState       // ?? no idea

This is just creation time so far. Tons of properties need to be set to defaults or filled out with empty strings, other objects need to be created, and those objects also need all their properties filled out and may need still deeper objects created.

Now that the HTMLButtonElement exists, it gets inserted into the DOM.

At render time the browser will walk the DOM. I'm sure there is some amount of caching, but it needs to figure out where the button is. It will likely build some internal, rendering-specific scene graph separate from the DOM itself, so thousands more lines of code get executed.

Eventually it will get to the point of rendering the button. Here again it has to check hundreds of CSS attributes. Text color? Font size? Font family? Text shadow? Transform? Animation? Border? Multiple borders? Background color? Background image? Background gradient? Is it transparent? Is it on its own stacking context? Literally hundreds of options.

Let's assume it's using nothing special. Eventually it will generate some quad vertices to render font glyphs. It will likely render these glyphs into a texture or grid of textures for the stacking context. It does this as an optimization: ideally, if a different stacking context has its content change but nothing in this stacking context changes, it can skip re-rendering the texture(s) for this context and just use the ones it created last time.

I'm sure there are 100 other steps I'm missing related to caching positions, marking things as computed so they don't get recomputed, and on and on.

Compare that to ImGUI::Button, which is just a function, not an object. All it effectively does is:

  1. Clip the button rectangle to the current clip space and exit if it's completely clipped
  2. Insert the vertices for the rectangle of the button into the pre-allocated vertex array
  3. Insert the vertices for each glyph stopping when the first glyph is clipped by the button area.
  4. Return true if the mouse button was pressed and its position is inside the button rectangle, else false.

That's it.

Note that those 4 steps also exist in the browser in HTML/DOM land, except there they are 4 steps out of hundreds.

So, in summary, the ImGUI style is potentially much faster and easier to use. It's easier to use in the simple case and in the complex case. The API is easier to use. It's easier to reason about. There is no state. There are no objects. There is no data marshalling. There are no events or callbacks. Because it's so fast, no giant framework like React's virtual DOM needs to be created when the UI gets complex. Because of the speed, little to no effort is required to work around slowness like with the DOM. More research into ImGUI-style UIs could lead to huge gains in productivity.


When will we get secure desktop OSes?


PC/Mac using people, we have a problem. That problem is our machines are not remotely secure. Ten years ago I didn't really worry about it, but I feel like the time has come that if something isn't done soon we're all going to lose our data, have our bank accounts stolen, etc.

Maybe this comes from working on Chrome, where security is taken seriously. That's not to say it's not taken seriously in other places, but rather that, working on Chrome, all the ways a program can do bad things, and how to stop them, constantly come up. Of course a browser runs "untrusted code", by which I mean that, unless you shut off JavaScript, every page you visit gets to run code on your machine via JavaScript and/or WebAssembly.

That's awesome IMO. It gives us things like Google Maps which is amazing and things like which gives instant and live results but it also means browsers have to be vigilant and have to consider how APIs are designed so that a random site's code can't do bad things to your computer.

Sure, some people will read this and rant that browsers shouldn't run code in the first place. I disagree. I think Google Maps is far better with JavaScript than without. But that's beside the point.


The distinction is supposed to be that code in the browser could come from anywhere. The ads on your favorite sites include code to track you. The site itself has code to do whatever (get the latest posts live or send you a message notification). You didn't explicitly say "I trust this code", so it's "untrusted".

That is supposed to be in contrast to apps you install on your computer. The act of choosing to install a native app is implicitly saying "I trust this app".

That is a problem. You shouldn't trust apps. You shouldn't have to trust apps any more than you have to trust webpages. Apps can be just as evil as a webpage. In fact apps can be more evil because at the moment they aren't sandboxed on Mac, Windows, or Linux, so they can do far more damage than a webpage.

An app can read your entire hard drive, or at least all the data in your user folder, which is probably where all your important data is anyway. That means it can look at all your photos, all your movies, and read through all your files, including whatever financial files you've saved on your computer. If you're a geek and you have private SSH keys stored in ~/.ssh, all those keys can be read by any software you install.

A native app can constantly read your clipboard even when it's not the front app and send that to some server on the net. A native app can turn on your camera or your mic without asking. It can scan your network for other devices, some of which might have known exploits.

And of course a large percentage of apps, especially on Windows but even on Mac, ask for admin permission to install, which means they can pretty much do anything they want. Install a key logger and watch all your keys. Install a screen reader and download images of your desktop at any time. They can look at which apps you're running and report that back to their respective companies. They can report the file names of which videos you're watching. They can monitor your network to see which sites you're accessing and what files you're downloading.


The reason it's this way is basically historical. Before all machines were connected to the internet it just never crossed anyone's mind that these things might be problems. We had say 20-30yrs like that where it was just assumed installed apps were trustworthy. So, we have the issue that if Windows and MacOS and Linux were to switch overnight to prevent apps from doing these things all old software would break. Because of that it's hard to push sandboxing apps as the default.

Both Apple and Microsoft took a step forward on this with their respective app stores. Apps installed from the Mac App Store or the Windows App Store run somewhat sandboxed. Microsoft recently removed that sandbox requirement though, as they are shipping their Ubuntu integration on the Windows App Store and it has access to everything.

In general, I don't think any app should have permanent permission to access your mic or your camera. Even FaceTime should ask for permission each and every time. It wouldn't be that bad a UX: you'd make a call, the OS would prompt "FaceTime would like to use the camera Y/N", and your call would be made.

The problem is that once you give an app permission you never know when it's turning on the mic or the camera. For mobile this might be slightly less of an issue since, at least on iOS, only one app runs at a time, so you know some background app can't be using the mic (AFAIK). For desktop that's not true, so once you've given an app permission to use the camera you really never know when it will do so. Of course on desktop right now there is no camera permission even if sandboxed; you gave permission just by the act of installing the app. MacOS doesn't even have an option to prevent an app from accessing the mic or the camera.

And this isn't just an issue of trusting each company. You might trust Slack and install their Slack app. Or you might trust Adobe and install Photoshop. But those companies are using hundreds of 3rd party libraries, so you're really trusting 1000+ people for each piece of software you install. You're trusting that all 1000+ people are not trying to do anything bad. That every disgruntled employee, every schemey person in the chain, didn't decide to try to sneak in some backdoor.

And, even if they aren't intentionally doing something bad there are still bugs. You install some game like Call of Duty. That game sends data between players. Turns out because of a bug another player can hack the networking on their computer to send your computer bad data through the game. They can then own your machine, read all your data, use your camera, mic, etc, hack your router, infect other machines on your network.

This kind of bug is potentially true of any app that exchanges data between users. Your chat app (Slack, IRC), your email app, your online games, your social networking apps (Facebook, LINE, WhatsApp), etc. You're trusting there are no bugs. Right now there's an extremely popular app framework called Electron. Lots of famous and not-so-famous apps use it. And yet it's INSECURE BY DEFAULT. I'd guess if you're using an Electron-based app and that app communicates with a server, odds are > 50% it's insecure. The app itself might be hard-coded to only ever talk to the app company's servers, which means some random hacker can't easily use it to pwn your machine (though maybe they can when you're on their fake wifi). But unless all the security advice is followed, any disgruntled employee or evil manager could use the path between the company and your computer to exploit those insecurities and do whatever they want to your machine.

Heck, even without frameworks like Electron, apps auto-update, and each update could add code to do bad things to your machine. The nice people that started the app might have been replaced with less nice people. The company that made the app might have decided they wanted to do more spying on your machine for marketing purposes, so software that used to be trustworthy no longer is.

Sandboxing apps solves, or at least is a step toward solving, all of those issues. With a good sandbox an app can't read the data or files of other apps (without your permission). With a good sandbox an app can't access the mic or camera. With a good sandbox, even if the app has bugs that let other users hack it, they can only affect that app and its data, not your entire computer and all your data. A really good sandbox could even prevent scanning your local network.

Steam recently had an exploit anyone could have used to hack your machine. Sure, they fixed it, but that's not the lesson that should have been learned. The lesson should have been that Steam should never have been in a position where an exploit could compromise your entire machine. It should be running in a sandbox!

People, in particular software developers, rebel at the idea of sandboxing their software. This is especially frustrating because they should be able to see the dangers. Dev software itself has this issue. Many devs install software almost daily. They download software libraries as packages or as git repositories and trust those libraries are not owning their machines or spying on them. Seriously, typing npm install pick-your-favorite-lib is literally trusting hundreds if not thousands of people you've never met not to trash your computer or steal your data. Being told to type apt-get install some-package or brew install package should not be opening your data up to theft. There needs to be a better way to sandbox even command line apps.

One way to solve this is to create a new VM for every project, but at the moment that's too burdensome, so almost no one does it. Microsoft is apparently adding a feature to do this but it doesn't sound like a serious solution. It's only for temporarily running some software, not for putting each app in its own sandbox.

Another solution is to run a more sandboxed OS like Qubes OS. Unfortunately the apps people want to run are generally not available on Qubes, so that isn't really a solution. Apple and Microsoft are really the two companies that need to lead the fix for this.

Unfortunately their current solutions so far are broken. First off, their new sandboxes are optional. You can still install software that runs outside the sandbox, and because of that, installing un-sandboxed software is still the norm. Probably 95% of all the software installed on both OSes runs un-sandboxed. All games on Steam and pretty much all brand name apps run unsandboxed.

Secondly their desktop sandboxes are leaky. See the fact that the camera and mic are not sandboxed yet.

And finally, and most important, they've conflated sandboxing with their stores. Getting apps from an official store should be separate from sandboxing apps. Any app, regardless of how it's installed, should be subject to sandboxing restrictions by default. You shouldn't have to get store apps just to get sandboxed apps. The OS should be designed to be safe by default.

I'm sure you can think of an exception, some software that can't function in a sandbox, and that's mostly fine, but it should be the exception. Apple, Microsoft, and users at large should shame and boycott any software that tries to avoid sandboxing. This includes companies like Adobe that are known for hacking your OS at a deep level to spy on you.

I'm not holding my breath and I'm sure I'll get some rants about sandboxing in the comments. I've even seen arguments that sandboxing isn't a solution. To that I'd argue back: if you believe sandboxing is not a solution, then you should be fine running your browser with sandboxing off and running all software as root/admin. If you aren't willing to do that, then you do actually believe sandboxing has an important role to play in protecting your computer.

At some point I expect the exploits to multiply like crazy. I'll bet the majority of multi player games have exploits. I'll bet lots of apps that have semi constant networking have exploits. I'll bet that more and more desktop apps will be caught spying on you in one way or another. And I'll bet this will get worse and worse over the years until it becomes clear we need sandboxes. I think iOS and Android have already shown how important they are. Desktop PCs are no different.

Let's hope Microsoft and Apple make better sandboxes. Let's also hope they separate them from their stores and from their certification systems. Sandboxes should be the default.


Thoughts on Magic Ink


If you've never read a Bret Victor paper or watched a Bret Victor presentation you're missing out. For papers, I'd recommend starting with Learnable Programming.

Recently via some random chain of events I stumbled on the Future of Coding podcast. The theme I guess could be summed up as "we're doing it wrong!". "It" being programming computers. I don't know if it's true that we're doing it wrong but it's definitely fun to think about.

In a couple of the podcasts the Bret Victor paper, Magic Ink, was brought up. I hadn't read it yet so I checked it out.

It's inspiring as usual for Bret Victor. I'm not sure if I can do a good job of summarizing it but my take away was that with only a little extra thought it becomes clear that lots of software could have far better user experiences.

Bret claims that interface designers should consider themselves graphic designers first and foremost, and that a graphic designer's skill is supposed to be making images that let people understand and compare information as easily and quickly as possible. Many software applications have given little to no thought to what the user actually wants to accomplish, what data they need to see, and how to present it for the app to be useful to them. Most apps make it tedious to get what the user really wants out of them, and maybe with a little more thought much of that could be fixed. I really liked that idea and many of the examples were very compelling.

The paper was written in or around 2005 so before nearly everyone had a PDA in their pocket like they do now. Of course lots of people did have PDAs on them in 2005 but internet access over cellular was still rare and expensive.

In any case I'm curious about some of the things that seem to have changed in the 13 years since that paper was posted.

I think probably the biggest example is something that sounded like a good idea at the time, and maybe still is, though it's probably not as clear-cut anymore: much software could be better if it took context into account. That much is somewhat obvious. One example given is what a map app like Google Maps shows you when you open it. If it can know your location it should probably start with a map of where you currently are. If you click the search bar maybe it should try to guess where you're most likely to want to go and offer suggestions: if it's 8am on a weekday maybe it would suggest your work, or if it's 6pm on a weekday maybe it should suggest your home. There was a time when maps didn't have GPS data so they couldn't start by showing you where you are. I'm not sure even Google Maps looks at your location, the time of day, and the day of the week to decide what options to show you when you click search.
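The heuristic described above is trivial to sketch. Here's a minimal, hypothetical version; the rules, hours, and return values are all made up for illustration:

```javascript
// Hypothetical sketch of a context-aware suggestion: guess a likely
// destination from the day of the week and the hour of the day.
function suggestDestination(date) {
  const day = date.getDay();     // 0 = Sunday ... 6 = Saturday
  const hour = date.getHours();
  const isWeekday = day >= 1 && day <= 5;
  if (isWeekday && hour >= 6 && hour < 10) {
    return 'work';               // morning commute
  }
  if (isWeekday && hour >= 16 && hour < 20) {
    return 'home';               // evening commute
  }
  return 'recent places';        // no strong guess, fall back
}
```

A real implementation would also weigh location history and past searches, but even this much context would improve a blank search box.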

Bret went on to suggest that if all apps communicated with each other then let's say you just read an email about a business trip you need to take. If you opened your maps app now and the email app and the maps app could talk to each other then there should be a way for the map app to guess that maybe what you want to look at right now is the location of the business trip.

Not sure that's a good suggestion BUT, in 2018 the idea that multiple apps would all share their data sounds super scary, because we know now that every app would record everything, send it to their servers, and share it with their various partners, most of them advertising partners, and we'd all have profiles built on us. Of course we have that today, but not to the level that the paper was hoping to see in the future of 2005.

I'm really curious if Mr. Victor still sees that world as a possible future or if he's settled that it's impossible to do without too many privacy issues or if he's got ideas for solutions.

Another example of something that seems to have changed, the paper suggests using context to help searches. I wish Google did this (or did it better). One example in the paper is if you search for "Blender", "3DSMax", "Modo", and then search for "Maya" it should be abundantly clear that you mean "Maya" the 3D Software Application by Autodesk and not "Maya" the civilization from Central America.

I find these context mistakes infuriating, especially when I know the application has the data it needs. Sometime in 2012, while in Tokyo, I searched for "pizza" on Google Maps and it ended up showing me some place in Texas. Given it knew my GPS, and even if it didn't know it at the moment it knew my previous searches, it seems really stupid to somehow think I was searching for anything in Texas without explicitly typing "Texas" in my search. Even typing "Texas" in my search doesn't seem like it should show me Texas USA if I'm in Tokyo, as I could be looking for a store called "Texas" or a restaurant serving "Texas BBQ" or "Texas style Mexican food". It seems like it should require some pretty specific extra context for it to ever give me any results from the state of Texas if it knows I'm currently in Tokyo.

In any case though, one of the problems I have with context-based interfaces is inconsistency. In Google search maybe it's not so bad to get different results each time. Things change; there's new info that matches the search. But there are times when I feel like consistency trumps context. I'm trying to think of a good example, but until then I guess the way I'd explain it is that I have muscle memory. Click this option, press down 3 times, press enter to select. Or click here to make a new fob, then click 2 inches to the right to set the fob's options. If a context-aware app made my muscle memory fail often I think it would drive me nuts.

Not quite the same, but an example in the paper is typing a zip code and having the app start zooming as digits are entered. Type a "9" and we know it's the west coast; a following "4" brings us to the SF Bay Area; a following "5" brings us to the East Bay.
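The zoom-as-you-type idea boils down to a longest-prefix lookup. A sketch with a made-up table (though 9 / 94 / 945 match the paper's example; it's not real USPS data):

```javascript
// Map zip-code prefixes to regions; each extra digit narrows the view.
const regionsByPrefix = {
  '9': 'west coast',
  '94': 'SF Bay Area',
  '945': 'East Bay',
};

function regionForZip(digits) {
  // Find the longest known prefix of what's been typed so far.
  for (let len = digits.length; len > 0; --len) {
    const region = regionsByPrefix[digits.slice(0, len)];
    if (region) {
      return region;
    }
  }
  return 'USA';  // nothing typed, or unknown prefix: show the whole country
}
```

A map UI would call this on every keystroke and animate the viewport toward the matched region.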

That sounds great except ... is it? Maybe it's a different case, but Google tried live search results. As you'd type each letter, Google would change the results live on the page. Google has since gotten rid of that feature. I'd be curious to know why. The claim is that it wasn't good for mobile, so they removed it everywhere for consistency. I'm not sure I buy that claim, as all kinds of things are different on mobile. For one, I don't have a mouse with 1 or 2 buttons that can hover over things, and I have a tiny display.

In any case though, I absolutely loathed the old "show them as you type" instant search results. Let's make it clear: I'm not talking about the search suggestions that appear just below the search input area. I'm talking about how Google used to update the entire page as I typed. The problem for me is I'm often referencing things on the page as I type. With Google making them disappear I couldn't reference them, and it actually made things harder for me.

Given that, I wondered if similar issues would happen with the things suggested in the paper related to instantly showing new info as the user starts entering their data. I think I'm pretty glad Google Maps doesn't jump around as I type but waits for me to select something. I'm curious if others have noticed this too: trying to be too responsive can actually be annoying and/or counterproductive.

The last portion of the paper was about his award-winning Bart app that ran as an accessory on MacOS. He went over all the design decisions in detail, and it was certainly a beautiful app with lots of thought and care put into the design.

My personal reaction though was not quite as awestruck, and it made me wonder if "Bart" wasn't also the product of a personal bubble.

The first thing that stuck out was it had this semi-fancy interface for choosing a destination, in which it shows a map of the entire Bart system. The Bart system is not really that big. There are basically 5 lines, and they all come out of Oakland, so it's an extremely simple system. You could hover your mouse down the tracks to choose any station.

Compare that to Tokyo, where I live: there are something like 40 lines and 2000 stations. Most lines cross other lines, sometimes multiple times, zigzagging here and there. Such a UI would arguably never work here.

But thinking about it more, it almost seemed like Mr. Victor missed his own advice. The paper points out that often software isn't helping the user do what they actually want to do, and the Bart app is a perfect example of exactly that. No one is trying to get from one Bart station to another. They are actually trying to get from one place to another, neither of which is a Bart station!

If I'm at the north side of Lake Merritt in Oakland and I want to go to the Metreon in San Francisco my goal is not to "take the Bart". It's to get from where I am to where I want to be. That might be bus, Uber, Lyft, Ferry, Bart, maybe even some carpool service.

That idea came up because the paper mentioned showing only one route, and that clearly wouldn't work in Tokyo where there are often multiple routes. There's the fastest route, which could depend on the time of day. There's the route that gets you to your destination soonest, which might be different. In other words, if you leave right now one route might get you there 50 minutes from now. Another route might get you there 60 minutes from now, but you leave 15 minutes later, so only 45 minutes travel time. Yet another route might require fewer transfers, or less walking to the bus stop or train station or at the transfer area. One route might take the bus, another a train. One route might be cheaper via more trips on the same company's lines instead of switching companies. There are at least 10 different train/subway/lightrail companies in Tokyo.

From my own apartment downtown I'm only about a 2-3 minute walk to a bus stop, but when I ask how to get somewhere, depending on where it is, it might be best for me to walk to one of the 5 stations within a 25 minute walk. If it's 5am and I'm planning to go to the beach there is no bus running, so I need to walk 15 minutes to a major station, whereas at 7am it's much faster to catch the bus to that station.

As another example from Shibuya to Azabujuban there are at least 4 routes.

Why would I pick one over the other? Well, the Ginza line and Hanzomon line run parallel, but their platforms in Shibuya are a 5-6 minute walk apart. If I'm nearer one or the other I'd pick the one I'm nearer. The Hanzomon line also skips one station so it might be faster. I have a similar dilemma at my destination, as the Oedo and Namboku platforms are also 5-6 minutes apart, so I might want to take my final destination into account. Another concern would be if I have a commuter pass, in which case one of those routes might be free or half free. Yet another price consideration is that even if I don't have a pass the bus is the cheapest option, as it's one bus whereas the 3 other routes each use two different train companies. The bus takes 25 minutes whereas the train routes only take 12-15 minutes, but the bus starts at Shibuya so I'm almost guaranteed to have a seat. If I have the time I might prefer a comfortable seated ride on the bus vs standing on the train and having the 2-3 minute transfer walks. Yet another consideration would be if I'm carrying something heavy, like if I just bought something, maybe I'd prefer a cab/uber/lyft.

This all gets even worse if my destination is between stations. For example from my house to Enoshima, an island a little over an hour away, I can go

Considerations? The last one is cheaper by $3 ($9 vs $12). The monorail might be more scenic. On the Shonan-Shinjuku line I can pay an extra $9 and get a fancy, comfortable, airplane-like seat for 40 minutes of my trip.

The Bart app's one-route design struck a chord. Knowing it wouldn't work in Tokyo made it clear, after a little thought, that the app wasn't following its own suggestion: it wasn't solving the actual user's problem of getting from A to B, where A and B are not "Bart Stations".

In any case the paper is still amazing and thought provoking and you should totally read it and take away the bigger message. I'd love to hear your thoughts.


OffscreenCanvas and Commit


Chrome is planning to ship OffscreenCanvas.

I know lots of devs that have been wanting that feature for ages so it's exciting to see it finally here. What is OffscreenCanvas? It's basically the ability to draw to a canvas from a web worker.

Drawing a complex scene often takes lots of CPU power. By being able to move all those calculations to a web worker we can make sure the main thread, the one reading the keyboard, responding to the mouse, etc... has all the power it needs to stay responsive.

There was debate for a long time about how it should be done. Ian Hickson wrote one idea originally and, with zero review, stuck it in the spec. MDN even documented it though it was never implemented by anyone. I wrote another proposal around 2012 that pointed out the issues with the one in the spec and suggested another solution. That was never implemented either, though it was referenced from time to time as a reminder of some of the issues involved.

In any case, the current solution that Chrome appears about to ship is that WebGL and Canvas2D mostly work in workers exactly the same as they do outside of workers. There's a small amount of code you need to write to transfer control of a canvas to an object that will exist inside the worker. The worker then creates a WebGL context or a 2D context and renders just like it would if it was in the main page. Results show up automatically just like they do on the main page. In other words, for those familiar with graphics programming, there is no explicit present or swapBuffers call. The moment you call one of the rendering functions in the respective APIs the browser "queues a task to do the present/swap" when your event exits.

This is great as it's the path of least surprise. No crazy new changes are needed to your code.
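The transfer step looks something like this; a sketch of the API as Chrome is shipping it, where `render-worker.js` is a made-up file name:

```javascript
// main page: hand control of the canvas to a worker
const canvas = document.querySelector('canvas');
const offscreen = canvas.transferControlToOffscreen();
const worker = new Worker('render-worker.js');
worker.postMessage({canvas: offscreen}, [offscreen]);

// render-worker.js: render as if we were on the main page
self.onmessage = (e) => {
  const gl = e.data.canvas.getContext('webgl');
  gl.clearColor(1, 0, 0, 1);
  gl.clear(gl.COLOR_BUFFER_BIT);
  // no present/swap call; the result shows up when this event exits
};
```

Note the canvas is listed in the transfer list of `postMessage`, so the main page gives up its ability to draw to it.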

Even better they added requestAnimationFrame to workers so a worker can effectively just do a standard render loop

function render(time) {
  drawScene();  // whatever the worker renders
  requestAnimationFrame(render);
}
requestAnimationFrame(render);

So far so good.

But, ... they are also considering adding something else which is an explicit present/swapbuffers function called commit. Normal JavaScript apps would likely not use this API. Rather it's an API for WebAssembly ports of native games.

The issue they are trying to solve is that most native games run in what's called a "spinloop". They have code like this

   while(!userWantsToExit) {
     userWantsToExit = checkUserInput();
     updateGame();
     renderScene();
   }

They never pause and never stop rendering they just run as fast as they can in a loop. By adding commit they feel they can better support native ports.

I see several problems with this approach and I hope I've convinced them to put on the brakes and do a little more testing before releasing this API.

You can NOT use any other Web APIs with this model!

For those that don't know how JavaScript works it works on an event model. You provide functions to be called when certain events happen. Events include things like key pressed, mouse clicked, button clicked, slider moved, image downloaded, websocket message received, etc..

When one of these events arrives the browser calls the JavaScript function you assigned to that event. Your JavaScript runs AND THE BROWSER IS FROZEN until your JavaScript exits. Once your JavaScript exits the browser will run any other events on the list of events waiting to be run.

A spinloop like the one enabled by commit means your JavaScript never exits so you'll never process any other events. In other words, the worker rendering with a commit spinloop CAN NOT USE ANY OTHER WEB APIs. It can not receive messages from the main page. It can not download images. It can not read files or request data from a server. It can not use a websocket.

WOW! An api that removes the ability to use all other APIs!?!?!

When I asked about it I was told the solution is to use SharedArrayBuffers. SharedArrayBuffers are a way for workers and the main thread to share a chunk of memory with each other. They can all read and write from it, and so they can use SharedArrayBuffers to communicate with each other.

Ok, I guess that works. It sounds like a ton of work though. For example, there is no way to get the raw data from an image in the current Web APIs. You can download images and use them in Canvas 2D and WebGL, but as we've just pointed out you can't use those APIs in a worker using commit. Because you can't get the raw data, you also can't download those images in another worker or the main thread and pass them via SharedArrayBuffers into the render/commit worker. Soooo, you're left to write your own image decoders, throwing away a bunch of the web API again. This is one reason webassembly apps are so bloated: they include their own versions of image-loading libraries even though those libraries already exist in the browser.
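For what it's worth, here's roughly what communicating through a SharedArrayBuffer looks like. The two-slot layout (flag + value) is made up for illustration; real apps need a hand-rolled protocol like this for every piece of shared state:

```javascript
// Two int32 slots: [0] = "data ready" flag, [1] = the payload.
const sab = new SharedArrayBuffer(2 * Int32Array.BYTES_PER_ELEMENT);
const shared = new Int32Array(sab);

// Sender side (main page or another worker): store the value, raise the flag.
function postValue(value) {
  Atomics.store(shared, 1, value);
  Atomics.store(shared, 0, 1);
}

// Receiver side (the spinloop worker): poll, since no events can arrive.
function pollValue() {
  // Atomically clear the flag if it was set.
  if (Atomics.compareExchange(shared, 0, 1, 0) === 1) {
    return Atomics.load(shared, 1);
  }
  return undefined;  // nothing new this iteration
}
```

In a real app both sides would hold views on the same buffer, passed via postMessage before the spinloop starts, and the spinloop would call something like `pollValue` every iteration.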

I suppose that's a minor thing but that's not the end of it.

The next issue is what happens when the page whose worker is using commit is not the front tab. This is a problem too.

With a normal requestAnimationFrame loop the only thing that happens is the browser stops sending animation frame events. It's still fully able to deliver other events: events for fetching JSON, events for loading images, events for loading other data. Your program can keep responding.

With commit it was suggested they'd just block commit forever until the tab is brought to the front again. The problem then is that you've got the main page and/or workers still receiving messages, but when they try to communicate those messages to the rendering worker, that worker never responds. It's frozen. This will be a HUGE source of race bugs. Developers will think their code works only to find it fails in subtle and hard-to-reproduce ways depending on when the user switches tabs. Not good.

Okay, so they suggested maybe they could throttle commit, calling it just once a second for example. Unfortunately we can show that's not a solution. Many GPU-heavy pages (and even many non-GPU pages) can be really slow. Here's a page that's really slow, at least on my machine. When it's the front tab I can barely type. I'm glad that page exists as it's super educational, so I think pages like that should exist. If I make some other tab the front tab, my machine is back to normal and responsive. Now imagine if that page used commit and commit was only throttled, say called once per second. The experience would be horrible and my machine would still seem unusable, as once a second it would hiccup while processing that graphics page offscreen. So no, throttling is not a viable solution. Whatever solution happens must stop the rendering, period.

So what solutions are there?

Well why do we need commit at all?

The reason some people think we need commit is because they want to support native ports to webassembly. A typical spinloop-based C/C++ program might have some code like this

void main() {
  KeyboardSystem* keySys = new KeyboardSystem();
  GraphicsSystem* gfxSys = new GraphicsSystem();
  DataSystem* dataSys = new DataSystem();

  GameData* gameData = dataSys->loadData();

  bool done = false;
  while(!done) {
    done = keySys->checkKeyboard();
    gfxSys->renderScene(gameData);
  }
}


The problem they see is that this code can't work in the browser's current system. Like I mentioned above the browser only calls your code via events. Your code needs to exit so that the browser can then process the next event. The code above never exits. If it does exit then keySys, gfxSys, dataSys and gameData would all be cleaned up which is not what we want.

Of course programmers can refactor their code so this isn't a problem, but the people pushing for commit are trying to make it so those developers don't have to change their code and things will just work.

Here's where we disagree. First, the amount of work to refactor that code is small. Of course the example above is small, but I suspect even large native code bases would not take that much work to refactor to work with events. You'd need a few globals or singletons, but otherwise you just split up your code

static KeyboardSystem* keySys;
static GraphicsSystem* gfxSys;
static DataSystem* dataSys;

static GameData* gameData;

void init() {
  keySys = new KeyboardSystem();
  gfxSys = new GraphicsSystem();
  dataSys = new DataSystem();

  gameData = dataSys->loadData();
}

void render() {
  keySys->checkKeyboard();
  gfxSys->renderScene(gameData);
}

void cleanup() {
  delete gameData;
  delete dataSys;
  delete gfxSys;
  delete keySys;
}

now call init then call render on a requestAnimationFrame loop just like JavaScript. What was so hard?
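The JavaScript glue for the refactored version is then just the standard loop. `Module.init` and `Module.render` here stand in for however the wasm exports end up being named in a real build:

```javascript
// Assumed: Module.init and Module.render are the exported native functions.
Module.init();

function frame(time) {
  Module.render();
  requestAnimationFrame(frame);
}
requestAnimationFrame(frame);
```

When the tab goes to the background the frames simply stop arriving, and every other event still gets delivered.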

Second, even if native developers don't have to refactor that code, there are tons of other places they have to refactor. There is no path from native to browser that does not require a bunch of work if you want users to have a good experience. As a simple example, I tried porting some native code. The first thing I had to do was refactor it to be event based. The app came up. But then I needed to deal with the fact that the native app was hardcoded to decide, at compile time, what keys to use for Windows, Mac, or Linux. That doesn't work in the browser, where depending on what machine the page is viewed on, the keys need to change at runtime: Ctrl-C vs Cmd-C for copy, etc. For that particular app it would have been far more work to make it do the correct thing at runtime instead of compile time than it was to refactor it to be event based.

That wasn't the end of it though. Next up was the clipboard support. The native code was designed to expect it could read the clipboard on demand, but that's not how the clipboard works in the browser. In the browser the user presses Paste (Ctrl-V or Cmd-V etc.) and only then is the clipboard made available to the app via a clipboard event. This way a page can't snoop on the clipboard while data is being passed between other apps; it can only read it when the user has pasted into this app.
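In code, the browser's clipboard model looks like this, where `insertTextIntoApp` is a hypothetical app function:

```javascript
// The clipboard is only readable inside a paste event the user triggered.
document.addEventListener('paste', (e) => {
  const text = e.clipboardData.getData('text/plain');
  insertTextIntoApp(text);  // hypothetical: hand the pasted text to the app
  e.preventDefault();       // we handled the paste ourselves
});
```

There is no way for the page to poll the clipboard the way the native code did; the refactor has to flow the data in from the event.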

And those were just the start. The apps that use a spinloop are 99% games. Non-games are more often than not event based. Games have lots of issues, needing far more data than most other native apps. No user wants to wait 5 minutes to an hour for all that data to download, so if the ported native apps hope to have any kind of audience they need to refactor to stream the data, and ideally start up with a minimal amount of data while they continue to download the rest in the background.

They also need to be able to save state, read mods, and lots of other things which change drastically and all of which require lots of work to be a good user experience in a browser.

My point being that just adding commit will not be enough. There's a ton of work involved in bringing a native app to the browser without it having a very bad user experience. Adding commit just makes it slightly easier to barf bad content onto the web. That shouldn't be encouraged. If devs are going to bring their native app to the browser they need to actually do the work to make it a good experience. Refactoring to be event based is the least of their problems.

I hope that at least gives some credence to the idea that we shouldn't use the fact that many native games use a spinloop as an argument to support spinloops. Let them refactor their apps.

The bigger issue is I don't believe there is actually a solution to the issues above about blocking commit in a spinloop. If you block I guarantee there will be race issues. The next most obvious solution is to provide some kind of API that lets developers stop rendering. They can use the focus and blur events to do their own throttling or commit can return some value saying effectively "the next time you call me I'm going to freeze so you'd better get ready". Another idea is the browser runs the spinloop a few more iterations but some other API lets the spinloop worker check if it's going to be frozen.

It really doesn't matter which of those solutions happens. The problem is they are solutions that require perfection. Developers are told: do A, then B, then C and it will work. Yet we know with 100% certainty that will not happen. Developers, especially web developers, never do things perfectly. An API that requires perfection to work correctly will basically never work correctly on the web. To put it another way, if developers have to deal with all the race conditions that come up from using SharedArrayBuffers with a commit function that can block at any time, then likely the majority of pages will have race conditions that trigger randomly.

IMO that's not a solution we should choose. rAF just works. If you're not the front page rAF does not get called, but other events still get processed. You can still communicate with the worker. Blocking commit doesn't work. The moment it's blocked, ZERO communication with that worker can happen. You can't rescue it or nudge it out of its blocked state. It's blocked. And, as pointed out above, throttling is not a solution.

So, in summary, I would argue commit should NOT be added to the set of web APIs, period. Requiring devs to refactor to use events is the only reasonable solution, IMO.