Why you should hang in there and learn git


A friend of mine was (is) struggling with learning git. I know what it's like. I was there. My progression was Source Safe -> CVS -> Subversion -> Perforce -> Mercurial -> Git. I found it frustrating at first and I didn't get it. Now that I do (mostly?) get it, I can't imagine switching back. So if you're frustrated learning git, if you can't understand why it has to be so hard compared to what you're used to, if you feel like git adds nothing and is just stupid, then I hope this will help, if only a little.

First off, an analogy. Imagine someone was working with a flat file system, no folders. They somehow have been able to get work done for years. You come along and say "You should switch to this new hierarchical file system. It has folders and allows you to organize better". And they're like "WTF would I need folders for? I've been working just fine for years with a flat file system. I just want to get shit done. I don't want to have to learn these crazy commands like cd and mkdir and rmdir. I don't want to have to remember what folder I'm in and make sure I run commands in the correct folder. As it is things are simple. I type 'rm filename' and it gets deleted. Now I type 'rm foldername' and I get an error. I then have to go read a manual on how to delete folders. I find out I can type 'rmdir foldername' but I still get an error that the folder is not empty. It's effing making me insane. Why can't I just do it like I've always done?!". And so it is with git.

One analogy with git is that a flat filesystem is 1 dimensional. A hierarchical file system is 2 dimensional. A filesystem with git is 3 dimensional. You switch in the 3rd dimension by changing branches with git checkout nameofbranch. If the branch does not exist yet (you want to create a new branch) then git checkout -b nameofnewbranch.
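To make that concrete, here's a minimal sketch in a scratch repo (the branch and file names are invented for illustration): switching branches changes what's in your working folder.

```shell
# A minimal sketch in a scratch repo (all names invented): switching
# branches changes what's in your working folder.
git init scratch && cd scratch
git config user.email "you@example.com" && git config user.name "You"
git commit --allow-empty -m "initial commit"

git checkout -b fix-bug-abc        # create a new branch and switch to it
echo "notes about bug ABC" > notes.txt
git add notes.txt
git commit -m "WIP: bug ABC"

git checkout -                     # switch back; notes.txt vanishes
git checkout fix-bug-abc           # switch again; notes.txt is back
```

Nothing about the branch left your machine; it's purely local until you push it.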

Git’s branches are effectively that 3rd dimension. They set your folder (and all folders below) to the state of the stuff committed to that branch.

What this enables is working on 5, 10, 20 things at once. Something I rarely did with cvs, svn, p4, or hg. Sure, once in a while I'd find some convoluted workflow to allow me to work on 2 things at once. Maybe they happened to be in totally unrelated parts of the code, in which case it might not be too hard if I remembered to move the changed files for the other work before checking in. Maybe I'd check out the entire project in another folder so I'd have 2 or more copies of the project in separate folders on my hard drive. Or I'd back up all the files to another folder, check out the latest, work on feature 2, check it back in, then copy my backed-up folder back to my main work folder and sync in the new changes, or some other convoluted solution.

In git all that goes away. Because I have git style lightweight branches it becomes trivial to work on lots of different things and switch between them instantly. It's that feature that I'd argue is the big difference. Look at most people's local git repos and you'll find they have 5, 10, 20 branches. One branch to work on bug ABC, another to work on bug DEF, another to update the docs, another to implement feature XYZ, another working on a longer term feature GHI, another to refactor the renderer, another to test out an experimental idea, etc. All of these branches are local to them only and have no effect on remote repos like github (unless they want them to).

If you're not used to using git style lightweight branches and working on lots of things at once, let me suggest it's because all other VCSes suck in this area. You've been doing it the old way so long you can't even imagine it could be different. The same way, in the hypothetical example above, the guy with the flat filesystem can't imagine why he'd ever need folders and is frustrated at having to remember what the current folder is, how to delete/rename a folder, or how to move stuff between folders, etc. All things he didn't have to do with a flat system.

A big problem here is the word branch. Coming from cvs, svn, p4, and even hg the word "branch" means something heavy, something used to mark a release or a version. You probably rarely used them. I know I did not. That's not what branches are in git. Branches in git are a fundamental part of the git workflow. If you're not using branches often you're probably missing out on what makes git different.

In other words, I expect you won't get the point of git style branches. You've been living happily without them, not knowing what you're missing, content that you pretty much only ever work on one thing at a time, or finding convoluted workarounds in those rare cases you really have to. Git removes all of that by making branching the normal thing to do, and just like the person that's used to a hierarchical file system could never go back to a flat file system, the person that's used to git style branches and working on multiple things with ease would never go back to a VCS that's only designed to work on one thing at a time, which is pretty much all other systems. But until you really get how freeing it is to be able to make lots of branches and work on multiple things, you'll keep doing it the old way and not realize what you're missing. Which is basically why all anyone can really say is "stick it out and when you get it you'll get it".

Note: I get that p4 has some features for working on multiple things. I also get that hg added some extensions to work more like git. For hg in particular though, while they added optional after-the-fact features to make it more like git, go through pretty much any hg tutorial and it won't teach you that workflow. It's not the norm AFAICT, whereas in git it is the norm. That difference in defaults is what really sets the two apart.

Let me also add that git is 4 dimensional. If branches are the 3rd dimension then versioning is the 4th. Every branch has a history of changes and you can go back to any version of any branch at any time. Of course all VCSes have history and branches but again it's git's workflow that makes the difference.

Let me also add that branches don't need to have anything in common. One branch might have your source. Another branch might have your docs. Whether that's common or not I don't know, but it points out that git doesn't care.

The most common example of this is probably GitHub Pages, where github will, by default, publicly serve a branch named "gh-pages". In several projects that branch has zero in common with the main branch. Instead some build script, possibly running on a CI service, builds the project's website and then checks it into the gh-pages branch. Whether using unrelated branches is a good practice or not, AFAIK it's pretty much not done in other VCSes, which I think highlights a difference.
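For the curious, creating such a history-free branch can be sketched like this in a scratch repo (file names invented; real gh-pages setups usually automate this):

```shell
# A sketch in a scratch repo: a branch with zero history in common
# with the main branch (like many gh-pages setups).
git init scratch-pages && cd scratch-pages
git config user.email "you@example.com" && git config user.name "You"
echo "source code" > main.txt
git add main.txt && git commit -m "main branch work"

git checkout --orphan gh-pages   # new branch with no parent commits
git rm -rf .                     # clear out the tracked files
echo "<h1>my project site</h1>" > index.html
git add index.html
git commit -m "first pages commit"   # shares no history with main
```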


Software Development is never simple


Recently I wrote a little interval-timer.

I have a no-equipment interval workout, 30 rounds, 50 seconds each with a 10 second break between each one, that I've been doing for a while. I was using some online timer but it was buggy. It often displayed incorrectly and you had to resize your window once to get it to work. It also didn't adjust to the window size well, so if your window was the wrong aspect it wouldn't fit. Minor things but still annoying.

I checked out 5 or 6 others but they all required registration in order to try to sell you stuff, or were covered in ads, or didn't save your settings so you had to set them up every time, etc... etc...

I'd had it on my list of "This should be a simple few hour project, I should make my own" for at least a couple of years and finally recently I decided to do it.

Fighting CSS was, as always, no fun but eventually I got it to work, at least in modern (current as of 2018/1) Firefox, Chrome, and Safari on desktop.

But! ... and I know this is normal but it's ridiculous how many small issues there have been.

First I thought "what the heck, it's a web page, let's make it work well on mobile (iOS)" so I set the appropriate meta tags and futzed with the CSS a little and it comes up as expected. Except of course mobile has issues with how it computes full height (100vh) and so my buttons at the bottom were off the screen. That is unless you saved the page to your home screen, in which case iOS Safari goes fullscreen. Seemed good enough for me. SHIP IT!

So I post it to my facebook (friends only). First feedback I get is the controls weren't clear. Probably still aren't. One friend thought the circle arrow ↻ meant "go" instead of "rewind/reset" and I guess didn't recognize the right pointing triangle ▶ as a "play" button. I made the circle arrow counter clockwise ↺ (not sure that helps) and added tooltips that say "reset", "start", and "stop" although that's only useful on desktop since you can't hover your finger on the phone.

Next friends complained it didn't run in iOS 10. I really didn't care when I wrote it, I'm on iOS 11, but then friends wanted to use it so I go look into it. Fortunately it was just adding prefixed CSS properties to fix it.

Then I was using URLs to store the settings like https://blabla?duration=50&rounds=30 etc.. but that meant if you added a bookmark and tried to change the settings you'd come back and your settings would be gone. Originally I thought putting them in the URL would let you have multiple timers but I doubt anyone would get that so I changed it to save the settings in local storage. You only get one timer. No, I'm not going to create a UI for multiple timers! I suspect most people just have one anyway and it's not too hard to change the 3 settings so it's probably good.
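That change can be sketched like this (the storage key and field names here are made up for illustration, not the timer's actual code):

```javascript
// A sketch of storing timer settings in localStorage instead of the URL
// (key and field names invented).
const SETTINGS_KEY = 'interval-timer-settings';
const defaultSettings = { duration: 50, rounds: 30, breakTime: 10 };

function loadSettings() {
  try {
    // merge over defaults so a missing or older saved blob still works
    return Object.assign({}, defaultSettings,
        JSON.parse(localStorage.getItem(SETTINGS_KEY)));
  } catch (e) {
    return Object.assign({}, defaultSettings);
  }
}

function saveSettings(settings) {
  localStorage.setItem(SETTINGS_KEY, JSON.stringify(settings));
}
```

Unlike the URL approach, a bookmark now always opens with the latest saved settings.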

Then I realized I didn't handle hours for those few people that work out more than 1 hour so I added that. Speaking of which I also realize that entering seconds only is probably not a good UX. If you want 3 minute rounds you need to calculate in your head 3 * 60 = 180 seconds rather than put in 3 minutes 0 seconds but I'm seriously too lazy to deal with that and don't care. 😜

Ok, then I realized I was using requestAnimationFrame which doesn't run if the window is not visible. So for example you switch tabs to run music on youtube or something and the timer stops. Okay so I switched to using setTimeout and also made it set the title so you can still see the timer running even if it's not the current tab.
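The switch can be sketched like this (function names are mine, not the timer's actual code):

```javascript
// A sketch of a setTimeout-driven countdown (names invented). Unlike
// requestAnimationFrame, setTimeout keeps firing in a background tab
// (though browsers may throttle it to about once per second).
function startCountdown(totalSeconds, onTick) {
  const endTime = Date.now() + totalSeconds * 1000;

  function tick() {
    // compute from the clock, so throttled ticks don't drift
    const secondsLeft = Math.max(0, Math.ceil((endTime - Date.now()) / 1000));
    onTick(secondsLeft);
    if (secondsLeft > 0) {
      setTimeout(tick, 250);  // re-check a few times a second
    }
  }
  tick();
}

// usage sketch: mirror the time into the tab title so it's visible
// even when the timer isn't the current tab:
//   startCountdown(50, (s) => { document.title = s + 's - timer'; });
```

Computing the remaining time from `Date.now()` rather than counting ticks means the display stays correct even when the browser throttles the timeouts.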

Then I noticed the tooltips I'd added above broke mobile. Buttons with tooltips required 2 clicks to work so I removed the tooltips.

Then I realized people were using it on the phone (I wasn't) and that the phone will go to sleep making it useless on the phone unless you manually prevent your phone from sleeping. I found a solution (which is friggen ridiculous) so now it doesn't let your phone sleep if the timer is running.

Then I realized the solution that prevents the phone from sleeping (which is to play a silent hidden video) stops your background music which is not very good for workouts. I found a solution which is to mute the video. I hope these 2 solutions continue to work.

Then I noticed that at least on iOS, if you add the page to your home screen, then anytime you switch away from the timer to another app and come back it reloads the page, meaning you lose your place. So today I made it save its state constantly so if the page reloads it will continue. At the moment it continues as though it was still running. In other words if you're out of the timer for 3 minutes, when you come back 3 minutes will have elapsed. I wasn't sure if that's what it should do or if I should pause the timer. As it is there's no way to rewind the timer a little, which is probably the next thing I should consider adding.

So then I tried it on iOS but because of a bug it was actually pausing while switched away. That's when I noticed that when you come back, although the timer continues where it left off, because of limitations of mobile browsers the page is not allowed to make any sound unless the sound starts from a user interaction. Which means I'm basically forced to pause it and present a "continue where you left off" button. Or, just come back already paused and let the user press play.

And so this simple interval timer which I thought would take at most a few hours has now gobbled up a few days. Software dev is never simple. I wonder what the next issue will be 🤣


Why I Hate Software Dev


Ugh!!! This is why I hate computer dev! 🤣

So I decide I want to fix the CSS on vsa.com for Windows. In particular I wanted to try to fix the scrollbars so they look like the MacOS version. I'm thinking it will only take a few mins.

So I decide to start vsa.com in dev mode on Windows. (I normally do that dev on Mac).


Okay, F!!!, I don't want my machine effed up with different versions of meteor needed for one project vs another. Rant mode on: I wish all dev worked without installing anything globally or needing any kind of admin. As we learn of more and more exploits you should NEVER EVER BE ASKED FOR ADMIN. EVER!!! I know it will be years or decades until this BS stops. Millions of machines will get pwned by running unsandboxed software that has exploits and installs via admin, but until then I guess VMs it is 😡.

Eventually I got meteor running only to find out that tar on macOS and tar on Linux have different options and the ones I was using to re-write paths as I untar a backup won't work on Linux. I try restoring the DB manually but it doesn't work (no errors, but nothing showing up on the site).

I guess this is just par for the course. Doing new stuff is always a pain in the ass. It always takes time to set up something new and get it working, and it's only after you've done that that you then go back to ignoring it because it's only important once every few months or years. Then the next time you need to do it, it's been so long that it's all changed and you have to spend hours or even sometimes days getting your dev environment set up for your new project. Unfortunately new projects are getting more common, or rather, switching between many projects all with different needs for globally installed software is becoming far more common.

Still, it's super frustrating when you think something is going to only take a few mins and just getting to the point where you can actually do those few mins takes hours.

Finally I did what I probably should have done in the first place. I just run the mac version and go to http://<ipOfMac>:3000 on Windows, edit on Mac, check on Windows. I'm done in a few. Now if only all the browsers had a standard for scrollbar styling as it only works in Chrome.


Rethinking UI APIs


A few weeks ago I wrote a rant, "React and Redux are a joke right?". While many of my friends got the point most of the comments clearly responded mostly to the title without really getting the point. That's my fault for burying the point instead of making it clear, and for picking a clickbait title although that did get people to look.

I wrote a response back then but wordpress ate it. Since then I spent 2 weeks redoing my blog to be a static blog and now I'm finally getting around to writing what I hope will clear up my point.

Maybe my point can best be summed up as "Let's rethink the entire UI paradigm. Let's consider alternatives to the standard retained mode GUI. Those APIs are making us write way too much code".

If you're not familiar with what a "retained mode GUI" is, it's any graphical user interface that requires a tree of objects that represent the UI. You build up the tree of objects. It sticks around ("is retained") even if your code is not running. The UI system looks at the tree of objects to show the UI. This is the single most common paradigm for UIs. Examples include the DOM and all the DOM elements. On iOS all the UI objects like UIButton, UILabel, UITextField. Android, MacOS, Windows, etc. all work this way.

There may be many reasons for doing it that way and I'm not arguing to stop. Rather I'm just pointing out there's another style, called "Immediate Mode GUI", and if you go try and use it for a complex UI you'll find you probably need to write 1/5th as much code to get things done as you do with a retained mode GUI.

The issue with a retained mode GUI is having to manage all of those objects. You need some way to associate your data with their representation in the UI. You also need to marshal data between your copy of the data and the UI's copy of the data. On top of that most retained mode GUIs are not that fast so you have to minimize churn. You need to try to manipulate, create, and destroy as few of these GUI objects as possible otherwise your app will be unresponsive.

Contrast that with an Immediate Mode GUI. In an Immediate Mode GUI there are no objects, there is no (or nearly no) UI state at all. The entire UI is recreated every time it's rendered. This is no different than many game engines at a low level. You have a screen of pixels. To make things appear to move over time, to animate them, you erase the entire screen and re-draw every single pixel on the entire screen. An Immediate Mode GUI is similar, there's nothing to delete because nothing was created. Instead the UI is drawn by you by calling the functions in the Immediate Mode GUI API. This draws the entire UI and with the state of the mouse and keyboard also handles the one single UI object that's actually being interacted with.
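To make that concrete, here's a toy sketch of the idea in JavaScript (every name here is invented and real libraries are far more complete): a "button" is not an object, it's a function you call every frame that draws itself and reports whether it was clicked.

```javascript
// A toy immediate-mode sketch (all names invented): no button objects,
// just functions called every frame.
const ui = {
  mouseX: 0, mouseY: 0, mouseWasPressed: false,
  drawList: [],   // stand-in for actual drawing
};

// Draws a button for this frame and returns true if it was clicked.
function button(label, x, y, w, h) {
  ui.drawList.push({ type: 'button', label, x, y, w, h });
  const over = ui.mouseX >= x && ui.mouseX < x + w &&
               ui.mouseY >= y && ui.mouseY < y + h;
  return over && ui.mouseWasPressed;
}

// Called every frame: the whole UI is rebuilt from scratch each time,
// so showing/hiding a button is just an if statement.
function drawFrame(state) {
  ui.drawList.length = 0;
  if (button('Play', 10, 10, 80, 24)) state.playing = true;
  if (state.playing && button('Stop', 10, 40, 80, 24)) state.playing = false;
}
```

Because the UI is just a function of your data, there is nothing to create, update, or destroy; the Stop button "exists" only on frames where the code draws it.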

When you do this, all the code related to creating, deleting, and managing GUI objects disappears because there are no GUI objects. All the code related to marshaling data into and out of the GUI system disappears because the GUI system doesn't retain any data. All the code related to trying to touch as few GUI objects as possible disappears since there are no objects to touch.

And so, that's what struck me when I was trying to write some performant code in React and some site suggested Redux. React as a pattern is fine. It or something close to it would work well in an Immediate Mode GUI. But in practice part of React's purpose is to minimize creation, deletion, and manipulation of GUI objects (DOM Elements for ReactDOM, native elements for React Native). To do that it tracks lots of stuff about those objects. It also has integrated functions to try to compare your data to the GUI's data. It has its this.setState system that gets more and more convoluted with requiring you not to set state directly and not even to inspect state directly as a change might be pending. All of that disappears in an Immediate Mode GUI. Redux is one of many suggested ways to take all that tracking a step further, to make it work across branches of your GUI object hierarchy. All of that disappears in an Immediate mode GUI.

Now I'm not saying there aren't other reasons you might want command patterns. Nor am I saying you don't want more structure to the ways you manipulate your data. But those ways should be 100% unrelated to your UI system. Your UI system should not be influencing how you use and store your data.

When I wrote the title of my rant "React and Redux are a joke right?" my mindset was realizing that the entire construct of React and Redux are there to manage a retained mode GUI. If we were using an Immediate Mode GUI we wouldn't need them. The joke is on us thinking we're making our lives easier when in reality we're just making it more complex by piling solution on top of solution on top of solution when the real problem is below all of that.

Now, I'm not suggesting we all switch to Immediate Mode GUIs. Rather, I'm suggesting that experience writing a complex UI with an immediate mode GUI might influence ways to make things better. Maybe there's some new design between retained mode GUIs and Immediate Mode GUIs that would be best. Maybe a different API or paradigm altogether. I don't know if it would lead anywhere. But I think there's room for research. As a more concrete example, the entire Unity3D UI is basically an immediate mode GUI. All of their 1000s of plugins use it. Their API could probably use a refactoring but Unity3D is clearly a complex UI and it's running using an Immediate Mode style system, so it's at least some evidence that such a system can work.

There are lots of objections to Immediate Mode GUIs. The biggest is probably that it's assumed they are CPU hogs. I have not written any benchmarks and it would be hard: the 2 paradigms are so different that it would be like those benchmarks where a C++ person translates their benchmark to Haskell without really getting Haskell and ends up making a very inefficient benchmark.

That said, I think the CPU objection might be overblown, and it might possibly be the complete opposite, with Immediate Mode GUIs using less CPU than their retained mode counterparts. The reason I think this is that Immediate Mode GUIs almost always run at 60fps (or smooth) and retained mode GUIs almost never run smooth. To make a retained mode GUI run smooth for a complex UI requires tons and tons of code. Code like React's checks for changes, the state tracking stuff, all the componentDidMount, componentWillUnmount, shouldComponentUpdate stuff, various kinds of caches, etc. All that code, possibly 5x to 10x the code of an Immediate Mode GUI, is code that is using the CPU. And when I say 5x to 10x the code I don't necessarily mean just written code. I mean executed code. A loop comparing if properties changed is code that's running even if the code implementing the loop is small. Some languages and/or systems generate tons of code for you to help with these issues. You didn't have to personally write the code but it is there bloating your app, using CPU. When you see a retained mode GUI like a webpage not be entirely responsive, or take more than 16ms to update, that's because the CPU was running 100% and could not keep up. That situation rarely happens with an Immediate Mode GUI. That suggests to me it's possible the Immediate Mode GUI is actually using less CPU.

Other than Unity3D, which is probably the most used Immediate Mode GUI but is tied to their editor, the most popular Immediate Mode GUI is Dear ImGui. Yes, it's only C++. Yes, it's not designed for the Web (or phones). Again, for like the 1000th time, that is not the point. The point is not Dear ImGui as a solution. The point is Dear ImGui as inspiration, realizing how much extra code we're being made to write and/or execute to deal with retained mode APIs. The point is to take a step back and consider that just maybe this whole ubiquitous retained mode GUI API should be revisited.


CSS Grid - Fail?


I'm probably just missing the solution but I'm getting the impression CSS Grid really isn't the right solution and we'll be asking for another soon.

If you haven't used CSS Grid, it's a way of using CSS to declare that one element is a grid of NxM cells. Almost like a table, except CSS Grid happens in CSS. The cool parts are that you can name groups of cells and then tell direct children of the grid which group of cells they cover just by name. Because it's CSS it's also easy to use media queries to change the layout based on the size of the user's browser window.
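To make the named-cells idea concrete, here's a small sketch (the class and area names are mine, invented for illustration):

```css
/* A small sketch of CSS Grid named areas (all names invented). */
.page {
  display: grid;
  grid-template-columns: 2fr 1fr;
  grid-template-areas:
    "content sidebar"
    "footer  sidebar";
}

/* direct children claim a named group of cells */
.content { grid-area: content; }
.sidebar { grid-area: sidebar; }
.footer  { grid-area: footer; }

/* media query: stack into one column on narrow windows */
@media (max-width: 600px) {
  .page {
    grid-template-columns: 1fr;
    grid-template-areas:
      "content"
      "sidebar"
      "footer";
  }
}
```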

That all sounds great, except AFAICT a grid doesn't actually solve this issue. Here's an example. Here's a video about CSS Grid

Note the screenshot for the video itself. In case it gets changed at the time I wrote this post it looked like this

Seeing that thumbnail you'd expect that layout is what you'd use CSS Grid for but if you watch the video they never make a grid like that.

When I look at the thumbnail I see 2 columns.

In the left column I see an upper half and a lower half.

The lower half is split into 2 columns itself

The left of those 2 columns is also split into an upper and lower part.

On the right I see a 3 row layout which each row being 1/3rd of the height of the entire thing.

So, can we really make that layout with CSS Grid and make it responsive? I suppose the answer is that we can but I doubt the way we have to do it is really what CSS grid people want. Remember CSS Grid is just that, a grid, so in order to make the layout above we'd need a grid like this

That hardly seems reasonable. The left column, split into 2 50% height parts, really shouldn't care about the right being split into three 33% parts. The left top half really shouldn't have to care that the left bottom half needs to split into 2. And it all gets more messed up by the leftmost bottom area being split horizontally. Everything gets conflated this way. If you want to add another split somewhere you might have to redo a ton of stuff. That's not a reasonable way to go about this.

The more obvious way to do this is to nest grids just like we used to nest tables. In other words what we want is this

That's 5 grids. The outer one with 2 sides. The inner right one with three 33% rows. The left with two 50% rows and so on.

The problem is now we lose the ability to place things arbitrarily because grids only affect their direct children.

It seems like we need yet another new css type. Let's call it css layout. I have no idea how that would work. The problem comes back to separating content from style which means in order to do this we'd need some way to specify a hierarchy in CSS since that hierarchy shouldn't be expressed in HTML.

I have no idea what that expression would be, some large json looking like structure in CSS like this?

.mylayout {
  display: layout;
  layout-spec: "{
    rows: [
      { height: 100%;
        columns: [
          { width: 50%;
            name: left-column;
            rows: [
              { height: 50%;
                name: top-left;
              },
              { height: 50%;
                name: bottom-left;
                columns: [
                  { width: 50%;
                    name: bottom-left-left;
                    rows: [
                      { height: auto;
                        name: bottom-left-left-top;
                      },
                      { height: auto;
                        name: bottom-left-left-bottom;
                      },
                    ];
                  },
                  { width: 50%;
                    name: bottom-left-right;
                  },
                ];
              },
            ];
          },
          { width: 50%;
            name: right-column;
            rows: [
              { height: 33%; name: right-top; },
              { height: 33%; name: right-middle; },
              { height: 33%; name: right-bottom; },
            ];
          },
        ];
      },
    ];
  }";
}

It seems arguably that's what's needed though. Unless I'm missing something, CSS Grid really doesn't solve the layout issues we've been trying to solve since nested table days.


Trying to help noobs is SOOOO FRUSTRATING!


I often wonder if I'd like to teach. It's certainly fun to teach when the students are easy to teach. But where's the challenge in that?

I wrote webglfundamentals.org (and webgl2fundamentals.org) and I answer tons of WebGL questions on stackoverflow but sometimes it's sooooooooooooooo frustrating.

I'm trying to take those frustrations as an opportunity to learn better how to teach, how to present things, how to be patient, etc but still...


Simplifying HappyFunTimes


I’m feeling rather stupid.

So a little over two years ago I started taking the code from PowPow and turning it into a library. That became HappyFunTimes which I’ve spent the better part of 2 years working on.


Saving and Loading Files in a Web Page


This article is targeted at people who've started learning web programming. They've made a few web pages with JavaScript. Maybe they've made a paint program using 2d canvas or a 3d scene using three.js. Maybe it's an audio sound maker, maybe it's a tile map editor. At some point they wonder "how do I save files"?

Maybe they have a save button that just puts all the data into a textarea and presents it to the user and says "copy this and paste it into notepad to save".

Well, the way you save in a web page is you save to a web server. OH THE HORROR! I hear you screaming WHAT? A server? Why would I want to install some giant server just to save data?

I'm here to show you a web server is not a giant piece of software. In fact it's tiny. The smallest web server in many languages can be written in a few lines of code.

For example node.js is a version of JavaScript that runs outside the browser. If you've ever used Perl or Python it works exactly the same. You install it. You give it files to run. It runs them. Perl you give perl files, python you give python files, node you give JavaScript files.

So using node.js here is the smallest web server

const http = require('http');
function handleRequest(request, response){
    response.end('Hello World: Path = ' + request.url);
}
http.createServer(handleRequest).listen(8000, function() { });


Now, all these 5 lines do is return "Hello World: Path = (whatever the path was)" for every page, but really that's the basics of a web server. Looking at the code above, without even explaining it you could imagine looking at request.url and deciding to do different things depending on what the url is. One URL might save, one might load, one might login, etc. That's really it.

Let's explain these 5 lines of code

const http = require('http');

require is the equivalent of import in python or #include in C++ or using in C# or, in JavaScript in a browser, using <script src="..."></script>. It's loading the module 'http' and referencing it by the variable http.

function handleRequest(request, response){
    response.end('Hello World: Path = ' + request.url);
}

This is a function that will get called when we get a request from a browser. The request holds data about what the browser requested like the URL for the request and all the headers sent by the browser including cookies, what language the user's browser is set to, etc...

response is an object we can use to send our response back to the browser. As you can see here we're sending a string. We could also load a file and send the contents of that file. Or we would query a database and send back the results. But everything starts here.

const server = http.createServer(handleRequest);
server.listen(8000, function() {
  console.log("Listening at http://localhost:8000");
});

The last line I expanded a little. First it calls http.createServer and passes it the function we want to be called for all requests.

Then it calls server.listen which starts it listening for requests from the browser. The 8000 is which port to listen on and the function is a callback to tell us when the server is up and running.


To run this server install node.js. Don't worry it's not some giant ass program. It's actually rather small. Much smaller than python or perl or any of those other languages.

Now open a terminal on OSX or on windows open a "Node Command Prompt" (node made this when you installed it).

Make a folder somewhere and cd to it in your terminal / command prompt

Make a file called index.js and copy and paste the 5 lines above into it. Save it.

Now type node index.js

In your browser open a new tab/window and go to http://localhost:8000 . It should say Hello World: Path = /. If you type some other URL like http://localhost:8000/foo/bar?this=that you'll see it returns that path back to you.

Congratulations, you just wrote a web server!

Let's add serving files

You can imagine the code to serve files. You'd parse the URL to get a path, read the corresponding file, call response.end(contentsOfFile). It's literally that easy. But, just to make it less code (and cover more cases) there's a library that does it for us and it's super easy to use.

Press Ctrl-C to stop your server if you haven't already. Then type

npm install express

It will download a bunch of files and put them in a subfolder called "node_modules". It will also probably print a warning about no "package.json" which you can ignore (google package.json later).

Now let's edit our file again. We're going to replace the entire thing with this

"use strict";
const express = require('express');
const baseDir = 'public';

let app = express();
app.use(express.static(baseDir));
app.listen(8000, function() {
    console.log("listening at http://localhost:8000");
});
Looking at the last 2 lines you see app.listen(8000... just like before. That's because express wraps the same http server we had before; it just adds some structure we'll get to in a bit.

The cool part here is the line

app.use(express.static(baseDir));

It says "serve all the files from baseDir".

So, make a subfolder called "public". Inside make a file called test.html and inside that file put O'HI You. Save it. Now run your server again with node index.js

Go to http://localhost:8000/test.html in your browser. You should see "O'HI You" in your browser.

Congratulations. You have just made a web server that will serve any files you want all in 9 lines of code!

Let's Save Files

To save files we need to talk about HTTP methods. It's another piece of data the browser sends when it makes a request. Above we saw the browser sent the URL to the server. It also sends a method. The default method is called GET. There's nothing special about it; it's just a word. You can make up any words you want, but there are 7 or 8 common ones, and GET means "Get resource".

If you've ever made an XMLHttpRequest (and I hope you have because I'm not going to explain that part), you specify the method. Back on the server we could look at request.method to see what you specified and use that as yet another piece of data to decide what to do. If the method is GET we do one thing. If the method is BANANAS we do something else.
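Dispatching on the method is nothing more than looking at a string. A toy sketch (dispatchByMethod and the handler names are made up; express does this for you via app.get, app.put, etc.):

```javascript
// A toy sketch of dispatching on request.method: handlers is a plain
// object mapping a method name to a function. All names here are made up.
function dispatchByMethod(handlers, request) {
  const handler = handlers[request.method];
  if (handler) {
    return handler(request);
  }
  return 'unknown method: ' + request.method;
}

const handlers = {
  GET:     function(req) { return 'get resource ' + req.url; },
  BANANAS: function(req) { return 'doing something else with ' + req.url; },
};

console.log(dispatchByMethod(handlers, { method: 'GET', url: '/foo' }));
// "get resource /foo"
```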

express has wrapped that http object from our first example and it adds a few major things.

(1) it does more parsing of request.url for us so we don't have to do it manually.

(2) it routes. Routing means we can tell it for any of various paths what function to call. For example we could say if the path starts with "/fruit" call the function HandleFruit and if the path starts with "/purchase/item/:itemnumber" then call HandleItemPurchase etc.. In our case we're going to just say we want all routes to call our function.

(3) it can route based on method. That way we don't have to check if the method was "GET" or "PUT" or "DELETE" or "BANANAS". We can just tell it to only call our handler if the path is XYZ and the method is ABC.
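To demystify routing a little, here's a toy matcher for patterns like "/purchase/item/:itemnumber" (matchRoute is a made-up name; express's real route matching is far more capable):

```javascript
// A toy route matcher. Returns the extracted params on a match, or null.
// matchRoute is a made-up name, just to illustrate what routing means.
function matchRoute(pattern, pathname) {
  const patParts = pattern.split('/');
  const pathParts = pathname.split('/');
  if (patParts.length !== pathParts.length) {
    return null;
  }
  const params = {};
  for (let i = 0; i < patParts.length; ++i) {
    if (patParts[i][0] === ':') {
      params[patParts[i].slice(1)] = pathParts[i];  // ":itemnumber" -> params.itemnumber
    } else if (patParts[i] !== pathParts[i]) {
      return null;  // literal segment didn't match
    }
  }
  return params;
}

console.log(matchRoute('/purchase/item/:itemnumber', '/purchase/item/123'));
// { itemnumber: '123' }
```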

So let's update the code. Ctrl-C your server if you haven't already and edit index.js and update it to this

"use strict";
const express = require('express');
*const path = require('path');
*const fs = require('fs');
const baseDir = 'public';

let app = express();
*app.put('*', function(req, res) {
*    console.log("saving:", req.path);
*    let body = '';
*    req.on('data', function(data) { body += data; });
*    req.on('end', function() {
*        fs.writeFileSync(path.join(baseDir, req.path), body);
*        res.send('saved');
*    });
app.listen(8000, function() {
    console.log("listening at http://localhost:8000");

The first 2 added lines just reference more built in node libraries. path is a library for manipulating file paths. fs stands for "file system" and is a library for dealing with files.

Next we call app.put, which takes 2 arguments. The first is the route, and '*' just means "all routes". Then it takes a function to call for that route. app.put only routes "PUT" method requests, so this line effectively says "call our function for every route when the method is PUT".

The function adds a tiny event handler to the data event that reads the data the browser is sending by adding it to a string called body. It adds another tiny event handler to the end event that then writes out the data to a file and sends back the message 'saved'.

And that's it! We've made a server that saves and loads files. It's very insecure because it will save and load any file, but if we're only using it for local stuff it's a great start.

Loading And Saving From the Browser

The final thing to do is to test it out by writing the browser side. I'm going to assume if you've already made some web pages and you're at the point where you want to load and save that you probably have some idea of what XMLHttpRequest is and how to make forms and check for users clicking on buttons etc. So with that in mind here's the new test.html

<style>
textarea {
    display: block;
}
</style>
<label for="savefilename">filename:</label>
<input id="savefilename" type="text" value="myfile.txt" />
<textarea id="savedata">
this is some test data
</textarea>
<button id="save">Save</button>

<label for="loadfilename">filename:</label>
<input id="loadfilename" type="text" value="myfile.txt" />
<textarea id="loaddata"></textarea>
<button id="load">Load</button>

<script>
// make $ a shortcut for document.querySelector
const $ = document.querySelector.bind(document);

// when the user clicks 'save'
$("#save").addEventListener('click', function() {

    // get the filename and data
    const filename = $("#savefilename").value;
    const data = $("#savedata").value;

    // save
    saveFile(filename, data, function(err) {
        if (err) {
            alert("failed to save: " + filename + "\n" + err);
        } else {
            alert("saved: " + filename);
        }
    });
});

// when the user clicks load
$("#load").addEventListener('click', function() {

    // get the filename
    const filename = $("#loadfilename").value;

    // load
    loadFile(filename, function(err, data) {
        if (err) {
            alert("failed to load: " + filename + "\n" + err);
        } else {
            $("#loaddata").value = data;
            alert("loaded: " + filename);
        }
    });
});

function saveFile(filename, data, callback) {
    doXhr(filename, 'PUT', data, callback);
}

function loadFile(filename, callback) {
    doXhr(filename, 'GET', '', callback);
}

function doXhr(url, method, data, callback) {
    const xhr = new XMLHttpRequest();
    xhr.open(method, url);
    xhr.onload = function() {
        if (xhr.status === 200) {
            callback(null, xhr.responseText);
        } else {
            callback('Request failed.  Returned status of ' + xhr.status);
        }
    };
    xhr.send(data);
}
</script>
If you now save that and run your server, then go to http://localhost:8000/test.html, you should be able to type some text in the savedata area and click "save". Afterwards click "load" and you'll see it got loaded. Check your hard drive and you'll see a file has been created.

Now of course, again, this is insecure. For example if you type "test.html" into the save filename and pick "save", it's going to overwrite your "test.html" file. Maybe you should pick a different route instead of "*" in the app.put('*', ... line. Maybe you want to add a check with another kind of method to see if the file exists, and only overwrite if the user is really sure.

The point of this article was not to make a working server. It was to show you how easy making a server is. A server like this that saves local files is probably only useful for things like an internal tool you happened to write in JavaScript that you and/or your team needs. They could run a small server like this on their personal machines and have your tool load, save, get folder listings, etc.

But, seeing how easy it is also hopefully demystifies servers a little. You can start here and then graduate to whole frameworks that let users login and share files remotely.

I should also mention you can load and save files to things like Google Drive or Dropbox. Now you know what they're basically doing behind the scenes.


CAs now get to decide who's on the Internet


It started with a legit concern. The majority of websites were served using HTTP, and HTTP is insecure. So what, you might be thinking? HTTPS is used on my bank and Amazon and anywhere I might spend money, so it seems like a non-problem. Except... HTTP allows injections. Ever use some bad hotel or airport WiFi and get a banner injected at the top of the screen? That's HTTP vs HTTPS. Are you sure those articles you're reading are the originals? Maybe someone is changing words, pictures, or ads. HTTPS solves these issues.

So, the browser vendors and other standards bodies got together and made a big push for HTTPS only. Sounds great right!?

Well, instead of just pushing metaphorically by putting out the word, "Stop using HTTP! Start using HTTPS!", the browser vendors got together and decided to try to kill off HTTP completely. Their first order of business was to start requiring HTTPS to use certain features in the browser. Want to go fullscreen? Your site must be served over HTTPS. Want to read the orientation and motion data of the phone from the browser? Your website must use HTTPS. Want to be able to ask the user for permission to access the mic or the camera? Your website must use HTTPS.

Okay well that certainly can be motivating to switch to HTTPS as soon as possible.

Except... HTTPS requires certificates. Those certificates can only be acquired from Certificate Authorities, CAs for short. CAs charge money for these certificates, $50 per certificate or more. Often the certificates only last for a limited time, so you've got to pay again every year or two.

Suddenly every website just got more expensive.

Okay, you say, but that's still not a ton of money.

Yes, but maybe you've got an innovative project, one that lets any user access their media from their browser (example). You'd like to let them go fullscreen, but you can't unless the media pages are served over HTTPS. The rules of HTTPS say you're not allowed to share certs, ever. If you get caught sharing, your cert will be invalidated. So you can't give each of the people running your innovative software a copy of your cert. Instead every user needs their own cert. Suddenly your software just got a lot more expensive! What if your software was free and open source? In 2015 people were able to run it for free. In 2016 they are required to get a cert for $50.

So what do you do? Well, you hear about self-signed certs, so you check those out. Turns out they require complex installation into your OS. Your family and aunts and uncles and cousins and nephews and nieces aren't going to find that manageable. And besides, there's the feature where anyone can come to a party at your place and queue some music videos using their phone's browser, but that's never going to fly if they first have to install this self-signed cert. Official certs from CAs don't have this issue. They just work.

Okay, so you shop around for CAs. Dear CA#1, will you give my users free certs? No! Dear CA#2, will you give my users free certs? No!

Oh, I hear you say, there's a new kid on the block, letsencrypt. They offer free certs.

They do offer free certs, BUT certs are tied to domain names. To get a cert from letsencrypt you have to have a domain, for example "mymediastreamer.org". So even if you can get the cert for free, your users now need to buy a domain name. That can be relatively cheap at $10-$20 a year, but it's a big technical hurdle. Your non-tech family members are not really going to get through the whole process of registering a domain name just to use your media server.

Oh, I hear you say, what if my software ran a public DNS server? I could issue users subdomains like "<username>.mymediastreamer.org". Then I can give out DNS names to the users and they can get certs. That might work... except DNS points to specific IP addresses, and users' IP addresses change. You can re-point DNS to the new address, but it takes time to propagate. That means when their IP address changes it might be a few hours until they can access their media again. Not going to work.

Ok then, here's a solution. We'll make up domains like "<ipaddress>.<username>.mymediastreamer.org". That makes the DNS server even easier. We don't even need a database; we just look at the "<ipaddress>" part of the DNS name and return that IP address. Now when the user's IP address changes there will be zero delay because they can immediately use a DNS name that matches. We'll set up a rendezvous server for them so they don't need to look up the correct domain. It will all just work.
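As a sketch of that lookup, suppose the IP is encoded with dashes (dots aren't legal inside a single DNS label), like "192-168-1-10.<username>.mymediastreamer.org". Everything here, the dash encoding and the ipFromName function, is made up purely to illustrate the idea:

```javascript
// Hypothetical sketch: pull the IP back out of a name like
// "192-168-1-10.joe.mymediastreamer.org". The dash encoding and the
// function name are made up for illustration.
function ipFromName(hostname) {
  const firstLabel = hostname.split('.')[0];
  const ip = firstLabel.split('-').join('.');
  // sanity check: four numeric octets, each 0-255
  const octets = ip.split('.');
  const ok = octets.length === 4 && octets.every(function(o) {
    return /^\d+$/.test(o) && Number(o) <= 255;
  });
  return ok ? ip : null;
}

console.log(ipFromName('192-168-1-10.joe.mymediastreamer.org'));
// "192.168.1.10"
```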

Great! We have domains. We can get free certs from letsencrypt.

Except... letsencrypt limits the number of certs to 240 per root domain. So once you have 240 you can't get more certs, which means we can support 240 users at best. But then there's another problem: letsencrypt doesn't support wildcard certs, and because we added the "<ipaddress>" part above we'd need a wildcard cert for each user matching "*.<username>.mymediastreamer.org".

Effectively we are S.O.L. For our purposes letsencrypt is just another CA. "CA#3, can we please have free certs for our users?" No!

As of 2015 we could do anything we wanted on the internet. Now in 2016 we need permission from a CA. If the CA doesn't give permission we don't get on the internet.

To put it another way, because of the chain of validation in HTTPS, each CA is effectively a little king/bureaucrat who gets to decide who gets on the internet and who doesn't. If one king doesn't let you in, your only option is to go ask another king. Letsencrypt is the most generous king, as they don't ask for tribute, but that doesn't change the fact that you still need permission from one of these kings.

You might be thinking, "So what? Who cares about a media streamer?" But it's not just streamers. It's ANY DEVICE OR SOFTWARE THAT SERVES A WEBPAGE. Got an IP camera that serves a webpage? That camera wants to give you a nice interface that goes fullscreen? It can't without certs, and it can't get certs without permission from a CA. Got some Raspberry Pi project that wants to serve a webpage and needs any of the banned features? Again, it can't do it without a cert, and it can't get a cert without permission from a CA. Maybe you have a NAS device that would like to provide web page access? It can't do it without a cert, and it can't get certs without permission from a CA.

That wasn't the case just 6 months ago, because HTTPS wasn't required. Now that it is, these kings all just got a bunch more power, and innovative products like the media streamer described above and projects like this are effectively discouraged unless you can beg or bribe a king to ordain them.




Sorry to rant on this more but sheesh! How much does it take?

So I want to watch the filesystem for changes. Unfortunately node's fs.watch isn't OS independent. Some OSes report individual files; other OSes just report that the parent folder changed. I'm sure there's some legit reason, like perf, not to fix this for all devs, but whatever. I assume there must be a library that fixes it.

So I search NPM. Maybe I should be using Google to search NPM, because NPM's results are usually pretty poor, but after searching a while I find this watch module. It's got nearly 700k downloads in the last month and tons of dependents, so it must be good.

Given my prior experience the first thing I check is how many dependencies does it list?

Okay, only 2, but they are both related to command line stuff. WTF, why does a library have command line dependencies? Oh, I see, this library is conflated with a utility that watches for changes and launches a specified program anytime a file changes. Sigh...

Okay well let's look through the dependencies. It's not too deep. Fine, maybe I'll live with it if it works.

Ok I file an issue "Separate library from command line?" and move on.

A quick glance at the code and it doesn't appear to abstract over the issues I need fixed, but it's easy to test. So I write some test code

var watch = require('watch');

var dir = process.argv[2];
console.log("watching: ", dir);

var options = {
  ignoreDotFiles: true, // - When true, files that begin with "." are ignored when the file tree is walked
  filter: function() { return true; }, // - A function that returns true or false for each file and directory to decide whether that file/directory is included in the watcher
  ignoreUnreadableDir: true, // - When true, directories that can't be read are silently skipped
  ignoreNotPermitted: true, // - When true, files that can't be read due to permission issues are silently skipped
  ignoreDirectoryPattern: /node_modules/, // - When a regex pattern is set, e.g. /node_modules/, matching directories are silently skipped
};

watch.createMonitor(dir, options, function(monitor) {
  monitor.on('created', function(f, s)     { show("created", f, s    ); });
  monitor.on('removed', function(f, s, s2) { show("removed", f, s, s2); });
  monitor.on('changed', function(f, s)     { show("changed", f, s    ); });
});

function show(event, f, s, n) {
  console.log(event, f);
}

I run it and pass it my temp folder which has a ton of crap in it including lots of sub folders.

I edit temp/foo.txt and see it print changed: temp/foo.txt. I delete temp/foo.txt and see it print removed: temp/foo.txt. I create a temp/foo.txt and see it print created: temp/foo.txt.

So far so good. Let's test one more thing.

I look for a subfolder since I've only been changing stuff in the root of the folder I'm watching. I just happen to pick temp/delme-eslint/node_modules/foo. BOOM! It prints

created temp/delme-eslint/node_modules/abab
created temp/delme-eslint/node_modules/abbrev
created temp/delme-eslint/node_modules/acorn
created temp/delme-eslint/node_modules/acorn-globals
created temp/delme-eslint/node_modules/acorn-jsx
created temp/delme-eslint/node_modules/amdefine
created temp/delme-eslint/node_modules/ansi-escapes
created temp/delme-eslint/node_modules/ansi-regex
created temp/delme-eslint/node_modules/ansi-styles
created temp/delme-eslint/node_modules/argparse
created temp/delme-eslint/node_modules/array-differ
created temp/delme-eslint/node_modules/array-union
created temp/delme-eslint/node_modules/array-uniq
created temp/delme-eslint/node_modules/arrify
created temp/delme-eslint/node_modules/asn1
created temp/delme-eslint/node_modules/assert-plus
created temp/delme-eslint/node_modules/async
created temp/delme-eslint/node_modules/aws-sign2
created temp/delme-eslint/node_modules/aws4
created temp/delme-eslint/node_modules/balanced-match
created temp/delme-eslint/node_modules/bl
created temp/delme-eslint/node_modules/bluebird
created temp/delme-eslint/node_modules/boolbase
created temp/delme-eslint/node_modules/boom
created temp/delme-eslint/node_modules/brace-expansion
created temp/delme-eslint/node_modules/caller-path
created temp/delme-eslint/node_modules/callsites
created temp/delme-eslint/node_modules/caseless
created temp/delme-eslint/node_modules/chalk
created temp/delme-eslint/node_modules/cheerio
created temp/delme-eslint/node_modules/cli-cursor
created temp/delme-eslint/node_modules/cli-width
created temp/delme-eslint/node_modules/code-point-at
created temp/delme-eslint/node_modules/coffee-script
created temp/delme-eslint/node_modules/color-convert
created temp/delme-eslint/node_modules/colors
created temp/delme-eslint/node_modules/combined-stream
created temp/delme-eslint/node_modules/commander
created temp/delme-eslint/node_modules/concat-map
created temp/delme-eslint/node_modules/concat-stream
created temp/delme-eslint/node_modules/core-util-is
created temp/delme-eslint/node_modules/cryptiles
created temp/delme-eslint/node_modules/css-select
created temp/delme-eslint/node_modules/css-what
created temp/delme-eslint/node_modules/cssom
created temp/delme-eslint/node_modules/cssstyle
created temp/delme-eslint/node_modules/d
created temp/delme-eslint/node_modules/dashdash
created temp/delme-eslint/node_modules/dateformat
created temp/delme-eslint/node_modules/debug
created temp/delme-eslint/node_modules/deep-is
created temp/delme-eslint/node_modules/del
created temp/delme-eslint/node_modules/delayed-stream
created temp/delme-eslint/node_modules/doctrine
created temp/delme-eslint/node_modules/dom-serializer
created temp/delme-eslint/node_modules/domelementtype
created temp/delme-eslint/node_modules/domhandler
created temp/delme-eslint/node_modules/domutils
created temp/delme-eslint/node_modules/ecc-jsbn
created temp/delme-eslint/node_modules/entities
created temp/delme-eslint/node_modules/es5-ext
created temp/delme-eslint/node_modules/es6-iterator
created temp/delme-eslint/node_modules/es6-map
created temp/delme-eslint/node_modules/es6-set
created temp/delme-eslint/node_modules/es6-symbol
created temp/delme-eslint/node_modules/es6-weak-map
created temp/delme-eslint/node_modules/escape-string-regexp
created temp/delme-eslint/node_modules/escodegen
created temp/delme-eslint/node_modules/escope
created temp/delme-eslint/node_modules/eslint
created temp/delme-eslint/node_modules/espree
created temp/delme-eslint/node_modules/esprima
created temp/delme-eslint/node_modules/esrecurse
created temp/delme-eslint/node_modules/estraverse
created temp/delme-eslint/node_modules/esutils
created temp/delme-eslint/node_modules/event-emitter
created temp/delme-eslint/node_modules/eventemitter2
created temp/delme-eslint/node_modules/exit
created temp/delme-eslint/node_modules/exit-hook
created temp/delme-eslint/node_modules/extend
created temp/delme-eslint/node_modules/extsprintf
created temp/delme-eslint/node_modules/fast-levenshtein
created temp/delme-eslint/node_modules/figures
created temp/delme-eslint/node_modules/file-entry-cache
created temp/delme-eslint/node_modules/find-up
created temp/delme-eslint/node_modules/findup-sync
created temp/delme-eslint/node_modules/flat-cache
created temp/delme-eslint/node_modules/foo
created temp/delme-eslint/node_modules/forever-agent
created temp/delme-eslint/node_modules/form-data
created temp/delme-eslint/node_modules/generate-function
created temp/delme-eslint/node_modules/generate-object-property
created temp/delme-eslint/node_modules/getobject
created temp/delme-eslint/node_modules/glob
created temp/delme-eslint/node_modules/globals
created temp/delme-eslint/node_modules/globby
created temp/delme-eslint/node_modules/graceful-fs
created temp/delme-eslint/node_modules/graceful-readlink
created temp/delme-eslint/node_modules/grunt
created temp/delme-eslint/node_modules/grunt-eslint
created temp/delme-eslint/node_modules/grunt-legacy-log
created temp/delme-eslint/node_modules/grunt-legacy-log-utils
created temp/delme-eslint/node_modules/grunt-legacy-util
created temp/delme-eslint/node_modules/har-validator
created temp/delme-eslint/node_modules/has-ansi
created temp/delme-eslint/node_modules/hoek
created temp/delme-eslint/node_modules/hawk
created temp/delme-eslint/node_modules/hooker
created temp/delme-eslint/node_modules/htmlparser2
created temp/delme-eslint/node_modules/http-signature
created temp/delme-eslint/node_modules/iconv-lite
created temp/delme-eslint/node_modules/ignore
created temp/delme-eslint/node_modules/inflight
created temp/delme-eslint/node_modules/inherits
created temp/delme-eslint/node_modules/inquirer
created temp/delme-eslint/node_modules/is-fullwidth-code-point
created temp/delme-eslint/node_modules/is-my-json-valid
created temp/delme-eslint/node_modules/is-path-cwd
created temp/delme-eslint/node_modules/is-path-in-cwd
created temp/delme-eslint/node_modules/is-path-inside
created temp/delme-eslint/node_modules/is-property
created temp/delme-eslint/node_modules/is-resolvable
created temp/delme-eslint/node_modules/is-typedarray
created temp/delme-eslint/node_modules/isarray
created temp/delme-eslint/node_modules/isstream
created temp/delme-eslint/node_modules/jodid25519
created temp/delme-eslint/node_modules/js-yaml
created temp/delme-eslint/node_modules/jsbn
created temp/delme-eslint/node_modules/jsdom
created temp/delme-eslint/node_modules/json-schema
created temp/delme-eslint/node_modules/json-stable-stringify
created temp/delme-eslint/node_modules/json-stringify-safe
created temp/delme-eslint/node_modules/jsonify
created temp/delme-eslint/node_modules/jsonpointer
created temp/delme-eslint/node_modules/jsprim
created temp/delme-eslint/node_modules/levn
created temp/delme-eslint/node_modules/load-grunt-tasks
created temp/delme-eslint/node_modules/lodash
created temp/delme-eslint/node_modules/lru-cache
created temp/delme-eslint/node_modules/mime-db
created temp/delme-eslint/node_modules/mime-types
created temp/delme-eslint/node_modules/minimatch
created temp/delme-eslint/node_modules/minimist
created temp/delme-eslint/node_modules/mkdirp
created temp/delme-eslint/node_modules/ms
created temp/delme-eslint/node_modules/multimatch
created temp/delme-eslint/node_modules/mute-stream
created temp/delme-eslint/node_modules/node-uuid
created temp/delme-eslint/node_modules/nopt
created temp/delme-eslint/node_modules/nth-check
created temp/delme-eslint/node_modules/number-is-nan
created temp/delme-eslint/node_modules/nwmatcher
created temp/delme-eslint/node_modules/oauth-sign
created temp/delme-eslint/node_modules/object-assign
created temp/delme-eslint/node_modules/once
created temp/delme-eslint/node_modules/onetime
created temp/delme-eslint/node_modules/optionator
created temp/delme-eslint/node_modules/os-homedir
created temp/delme-eslint/node_modules/parse5
created temp/delme-eslint/node_modules/path-exists
created temp/delme-eslint/node_modules/path-is-absolute
created temp/delme-eslint/node_modules/path-is-inside
created temp/delme-eslint/node_modules/pify
created temp/delme-eslint/node_modules/pinkie
created temp/delme-eslint/node_modules/pinkie-promise
created temp/delme-eslint/node_modules/pkg-up
created temp/delme-eslint/node_modules/pluralize
created temp/delme-eslint/node_modules/prelude-ls
created temp/delme-eslint/node_modules/process-nextick-args
created temp/delme-eslint/node_modules/progress
created temp/delme-eslint/node_modules/pseudomap
created temp/delme-eslint/node_modules/qs
created temp/delme-eslint/node_modules/read-json-sync
created temp/delme-eslint/node_modules/readable-stream
created temp/delme-eslint/node_modules/readline2
created temp/delme-eslint/node_modules/request
created temp/delme-eslint/node_modules/require-uncached
created temp/delme-eslint/node_modules/resolve
created temp/delme-eslint/node_modules/resolve-from
created temp/delme-eslint/node_modules/resolve-pkg
created temp/delme-eslint/node_modules/restore-cursor
created temp/delme-eslint/node_modules/rimraf
created temp/delme-eslint/node_modules/run-async
created temp/delme-eslint/node_modules/rx-lite
created temp/delme-eslint/node_modules/sax
created temp/delme-eslint/node_modules/shelljs
created temp/delme-eslint/node_modules/sigmund
created temp/delme-eslint/node_modules/slice-ansi
created temp/delme-eslint/node_modules/sntp
created temp/delme-eslint/node_modules/source-map
created temp/delme-eslint/node_modules/sprintf-js
created temp/delme-eslint/node_modules/sshpk
created temp/delme-eslint/node_modules/string-width
created temp/delme-eslint/node_modules/string_decoder
created temp/delme-eslint/node_modules/stringstream
created temp/delme-eslint/node_modules/strip-ansi
created temp/delme-eslint/node_modules/strip-json-comments
created temp/delme-eslint/node_modules/supports-color
created temp/delme-eslint/node_modules/symbol-tree
created temp/delme-eslint/node_modules/table
created temp/delme-eslint/node_modules/text-table
created temp/delme-eslint/node_modules/through
created temp/delme-eslint/node_modules/tough-cookie
created temp/delme-eslint/node_modules/tr46
created temp/delme-eslint/node_modules/tryit
created temp/delme-eslint/node_modules/tunnel-agent
created temp/delme-eslint/node_modules/tv4
created temp/delme-eslint/node_modules/tweetnacl
created temp/delme-eslint/node_modules/type-check
created temp/delme-eslint/node_modules/typedarray
created temp/delme-eslint/node_modules/underscore
created temp/delme-eslint/node_modules/underscore.string
created temp/delme-eslint/node_modules/user-home
created temp/delme-eslint/node_modules/util-deprecate
created temp/delme-eslint/node_modules/verror
created temp/delme-eslint/node_modules/webidl-conversions
created temp/delme-eslint/node_modules/whatwg-url-compat
created temp/delme-eslint/node_modules/which
created temp/delme-eslint/node_modules/wordwrap
created temp/delme-eslint/node_modules/wrappy
created temp/delme-eslint/node_modules/write
created temp/delme-eslint/node_modules/xml-name-validator
created temp/delme-eslint/node_modules/xregexp
created temp/delme-eslint/node_modules/xtend
created temp/delme-eslint/node_modules/yallist

BUG!!! Really, I used it for 5 minutes and ran into a bug, and yet 700k downloads and a ton of dependents are using this buggy library. Sigh...

Now the question is (a) fix it or (b) write my own.

Yes, I know there are good arguments for fixing it. 700k downloads last month probably means 700k fixed downloads next month. But of course it possibly means arguing with the author and jumping through their hoops. And I still don't know if it solves the problem I want solved.

On top of that, there's at least one thing I think I'd want to re-design. Not only do I want the tree to be watched, I want to know what's in the tree, and it kind of seems like that should be one operation. In other words, I don't want to call one function to scan the tree to get the list of all the things in it and then separately call another function to watch the tree. If I do that, a file that's created between the first scan and the watch will never show up. It will also be twice as slow, AFAICT, as the watch has to scan as well. So that's yet another discussion to be haggled over if I decide to fix rather than do it myself.

The worst part is the thing I wanted to do, which I thought would take all of 20 minutes if the functionality I was looking for existed, will now probably take 8x as long whether I fix the library or do it myself, on top of writing the feature I was originally setting out to do. Probably not fair, but (a) with my own code there's no negotiation and (b) I feel more obligated to write tests when fixing someone else's code than for my own.



I filed a bug on that issue but I didn't pursue fixing it. Instead I wrote my own. It took about 20 hours of solid work: probably only 2 hours to write, but 18 hours to test across 3 platforms and learn new issues. I found out Ubuntu, OSX, Windows, and travis-ci's containers all behave differently in this area. I'm sure I didn't catch all the stuff, and I'd still like to write another 4-8 hours of tests, but I need to move on and it's working so far.

Some things that were interesting were where to make the divisions and what features to put in. For example the most used library I linked to above, the buggy one, has options for ignoring dot files (files that start with ".") as well as an option to give it a regular expression to filter (the one that didn't actually work). After writing mine I found another library that takes complex globs and has debouncing features.

I feel like those features should be layers. It's probably as little as one line of code to filter. Example:

  var watcher = new SimpleTreeWatcher(pathToWatch, {
    filter: (filepath) => { return path.basename(filepath)[0] !== '.'; },
  });

So why clutter the library with options, when doing it this way keeps the library simple and lets people be as complex or simple as they want? It seems better to just provide a few examples of how to filter than to have a bunch of options.

Similarly, the debouncing can relatively easily be layered on top. I should add examples to the docs, but at least I have something that's working for me so far.
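For the record, a layered debounce is about this much code (a generic sketch, nothing specific to my watcher's API):

```javascript
// A minimal debounce: collapse a burst of calls into one call that fires
// once things have been quiet for `ms` milliseconds.
function debounce(fn, ms) {
  let timer;
  return function(...args) {
    clearTimeout(timer);
    timer = setTimeout(function() { fn(...args); }, ms);
  };
}
```

You'd then wrap whatever change handler you hand to the watcher, e.g. debounce(handleChange, 250), where handleChange is your own function.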