React and Redux are a joke right?


Note: follow up here

This is probably an ill-thought-through rant but no one reads my blog anyway. I'd tweet, where at least a few people would read it, but 140 characters is not enough.

I've been using React for some personal projects and it doesn't completely suck. I like its component nature.

That said, eventually you want to start displaying data and then you run into this issue which is state in React.

Each component in React can have props (data passed down to the component from a parent component) and state (data that is managed by the component itself). React keeps its own retained-mode "Virtual DOM" (an internal tree representation of all the HTML elements it can/will generate). When you update state you have to do it in this special way because React wants to know what you changed. This way it can compare the old state to the new state, and if nothing changed it knows it doesn't have to re-render (generate or modify the actual browser elements). Sounds like a win right?
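
To see why React cares, here's a minimal sketch (not React's actual code) of the kind of shallow comparison a virtual-DOM library can use to decide whether to skip re-rendering:

```javascript
// A sketch of shallow comparison: it can only detect a change if the
// changed data lives in a *new* object or array.
function shallowEqual(a, b) {
  const aKeys = Object.keys(a);
  const bKeys = Object.keys(b);
  return aKeys.length === bKeys.length && aKeys.every(k => a[k] === b[k]);
}

const oldState = {people: ["Andrea"]};

// mutate in place: the compare can't see the change
oldState.people.push("Fiona");
console.log(shallowEqual(oldState, oldState));  // true: looks unchanged

// make a new array instead: the compare notices
const newState = {people: [...oldState.people, "Lorenzo"]};
console.log(shallowEqual(oldState, newState));  // false: re-render
```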

In the context of the browser that's probably a win but let's take a step back from the browser.

Imagine you have an ordered array of people

const orderedListOfPeople = [
  { name: "Andrea",  location: "Brussels", },
  { name: "Fiona",   location: "Antwerp", },
  { name: "Lorenzo", location: "Berlin", },
  { name: "Gregg",   location: "Tokyo", },
  { name: "Rumi",    location: "London", },
  { name: "Tami",    location: "Los Angeles", },
];

Your component either has that as state or it's passed down from the state of some parent as props.

Let's say you're receiving your list of people async (over the net) so you have code to insert a person

function insertPerson(person) {
   const ndxToInsertAt = getIndexToInsertAt(orderedListOfPeople);
   orderedListOfPeople.splice(ndxToInsertAt, 0, person);
}
In the React world though you can't *mutate* your data. You have to make copies.

function insertPerson(component, person) {
   const state = component.state;
   const orderedListOfPeople = state.orderedListOfPeople;
   const ndxToInsertAt = getIndexToInsertAt(orderedListOfPeople);

   // must make a new array, can't mutate
   const newOrderedListOfPeople = [
     ...orderedListOfPeople.slice(0, ndxToInsertAt),  // people before
     person,                                          // the new person
     ...orderedListOfPeople.slice(ndxToInsertAt),     // people after
   ];

   component.setState({
     orderedListOfPeople: newOrderedListOfPeople,
   });
}

Look at all that extra code just so React can check what changed and what didn't.

Let's say your request receives an array of people.

getMorePeopleAsync(people => people.forEach(insertPerson));

If you get 3 people React will end up re-rendering 3 times because of the code above. Each call to `setState` triggers a re-render. You have to ADD MORE CODE to work around that issue.
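
One way around it, sketched here with a made-up `getIndexToInsertAt` that keeps the list sorted by name, is to build the new array once and make a single `setState` call:

```javascript
// A sketch of batching: insert all received people into one copy of the
// list, then call setState once instead of once per person.
// getIndexToInsertAt is a hypothetical helper that keeps people sorted by name.
function getIndexToInsertAt(people, person) {
  const ndx = people.findIndex(p => p.name > person.name);
  return ndx >= 0 ? ndx : people.length;
}

function insertPeople(component, people) {
  // copy once, then mutate the copy freely
  const newOrderedListOfPeople = component.state.orderedListOfPeople.slice();
  people.forEach(person => {
    const ndx = getIndexToInsertAt(newOrderedListOfPeople, person);
    newOrderedListOfPeople.splice(ndx, 0, person);
  });
  // a single setState means a single re-render
  component.setState({orderedListOfPeople: newOrderedListOfPeople});
}
```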

And the hole keeps getting deeper.

Let's say you just want to change the location of a user. Normal code

orderedListOfPeople[indexOfPerson].location = newLocation;

But again React needs to know what you changed; it wants you to make a new array, so

// make a copy of the people array
const newOrderedListOfPeople = state.orderedListOfPeople.slice();

// must copy the person, can't mutate
const newPerson = Object.assign({}, newOrderedListOfPeople[indexOfPerson]);
newPerson.location = newLocation;
newOrderedListOfPeople[indexOfPerson] = newPerson;

component.setState({
  orderedListOfPeople: newOrderedListOfPeople,
});

Ok, that's a pain so they came up with immutability-helpers so you can do this

const mutation = {};
mutation[indexOfPerson] = { location: {$set: newLocation} };
component.setState({
  orderedListOfPeople: update(state.orderedListOfPeople, mutation),
});


Now imagine you have a tree structure. You end up having to write a mutation description generation function. Given a certain child node in your tree you need to be able to generate a description of how to mutate it. For example if you had this tree

const root = {
  name: "root",
  children: [
    { ... },
    {
      name: "middle",
      children: [
        { ... },
        {
          name: "leaf",
          children: [],
        },
      ],
    },
  ],
};
In normal code if I have a reference to "leaf" and I want to change its name it's just `leaf.name = newName`.

In React land using immutability-helpers, first I'd have to add parent references to all the nodes

// no way to reference the parents with a static declaration so
function makeNode(name, parent) {
  const node = {
    name: name,
    parent: parent,
    children: [],
  };
  if (parent) {
    parent.children.push(node);  // connect the node to its parent
  }
  return node;
}

const root = makeNode("root", null);
const middle = makeNode("middle", root);
const leaf = makeNode("leaf", middle);

Now, given a reference to leaf I'd have to do something like

function generateMutation(node, innerMutation) {
   if (node.parent) {
     const ndx = node.parent.children.indexOf(node);
     const mutation = {children: {}};
     mutation.children[ndx] = innerMutation;
     return generateMutation(node.parent, mutation);
   }
   return innerMutation;
}

const mutation = generateMutation(leaf, {name: {$set: newName}});
component.setState({
  root: update(state.root, mutation),
});

SO MUCH CODE!!! All just to set one field of a node in a tree.

Remember what I wrote above, that if you want to modify 3 things, 3 calls to setState will end up re-rendering 3 times. Well, imagine the contortions you need to go through to merge the mutation above so it handles 3 arbitrary updates at once.

So then you go looking for other solutions. A popular one is called redux. Effectively, instead of directly manipulating data you WRITE LOTS OF CODE to indirectly manipulate data. For the set location example above you'd first write a function that makes a copy of the person and sets the new location. You'd call that an action. You can think of actions like the action in transACTION. You're basically building a transaction system for indirectly manipulating your data.

Let's go back to just setting the location. Redux would want something like this. First they want you to make a function to generate *actions*.

function setLocationAction(indexOfPerson, newLocation) {
  return {
    type: SET_LOCATION,
    indexOfPerson: indexOfPerson,
    newLocation: newLocation,
  };
}
All the function above does is make an object used to describe the thing you want to happen. The type field will be used later to execute different user-supplied code based on the type. So we write that code

function setLocationReducer(people, action) {
  // copy the person since we can't mutate the existing person
  const newPerson = Object.assign({}, people[action.indexOfPerson]);
  newPerson.location = action.newLocation;
  // copy the people array as no mutation allowed
  const newPeople = [
     ...people.slice(0, action.indexOfPerson),   // everything before person
     newPerson,                                  // modified person
     ...people.slice(action.indexOfPerson + 1),  // everything after person
  ];
  return newPeople;
}

Now you register setLocationReducer with redux and then when you want to set the location you'd do something like

dispatch(setLocationAction(indexOfPerson, newLocation));

That generates an *action* object, then using the *type* field redux ends up calling setLocationReducer, all to indirectly set the location of a person.
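
To make that flow concrete, here's a toy sketch of the dispatch-to-reducer round trip. This is not the real redux API, just the idea:

```javascript
// A toy store (not real redux): dispatch hands the current state and the
// action to the reducer and replaces the state with whatever comes back.
const SET_LOCATION = "SET_LOCATION";

function peopleReducer(people, action) {
  switch (action.type) {
    case SET_LOCATION: {
      // copy the person and the array, mutating neither
      const newPerson = Object.assign({}, people[action.indexOfPerson], {
        location: action.newLocation,
      });
      return [
        ...people.slice(0, action.indexOfPerson),
        newPerson,
        ...people.slice(action.indexOfPerson + 1),
      ];
    }
    default:
      return people;
  }
}

function createToyStore(reducer, initialState) {
  let state = initialState;
  return {
    getState: () => state,
    dispatch: (action) => { state = reducer(state, action); },
  };
}
```

With that, dispatching a setLocationAction-style object routes through the reducer and produces a brand new people array.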

NOTE: I'm new to redux so the example above might not be perfect but that's irrelevant to my point. Please keep reading.

So what about trees of data with redux? The docs tell you to try not to have nested data. Instead, put things in flat arrays and use indices to reference things in other flat arrays.
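
For example, instead of the nested tree above, the flat shape they suggest might look something like this (the field names here are made up):

```javascript
// A sketch of normalized state: every node lives in one flat table keyed
// by id, and "childIds" holds ids rather than nested objects.
const initialState = {
  nodesById: {
    1: {id: 1, name: "root",   childIds: [2]},
    2: {id: 2, name: "middle", childIds: [3]},
    3: {id: 3, name: "leaf",   childIds: []},
  },
  rootId: 1,
};

// renaming a node now only copies that node and the table, no matter
// how deep the node sits in the logical tree
function renameNode(state, id, newName) {
  const node = Object.assign({}, state.nodesById[id], {name: newName});
  const nodesById = Object.assign({}, state.nodesById, {[id]: node});
  return Object.assign({}, state, {nodesById: nodesById});
}
```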

Read around the net and everyone seems to think redux is the bee's knees (as in they love it).

But take a step back. All of this extra code is because of React. React wants to know which data changed so it doesn't re-render too much. People say React makes the UI simpler but it makes dealing with our data MUCH HARDER! Why is the UI library dictating how we store and manipulate our data!!!

Now, there may be good reasons. I get we're working in the browser, the browser uses the DOM. The DOM is complicated. Each node is very heavy, hundreds of properties. Each node has hundreds of CSS styles directly or indirectly. The browser handles all of this and multiple non-ASCII languages and fancy styles and CSS animation and all kinds of other stuff.

BUT, ... Go try using something like Dear ImGUI. You store your data however you want. You manipulate your data however you want. You then have your UI use that data in place. No extra code to massage your data just because the UI framework needs it. And it's running at 60fps with very complicated UIs. No jank! Want to see it running live? Here's the sample code running in the browser.

Now I get that ImGUI does much less than the browser. See 2 paragraphs up. But that's not the point. The point is to look at how much of a pain it is to use React (and probably the DOM in general). To notice all the hoops it's asking you to jump through. To notice that maybe your UI code got simpler than direct DOM manipulation but your data management code got 10x harder. It feels like we need to take a step back and re-evaluate how we got in this mess. There has got to be a better way! We shouldn't need immutability helpers and/or redux and actions and reducers. Those are all solutions for a problem we shouldn't have in the first place. We should just be able to have our data and manipulate it directly, and we should be able to have a practically stateless UI system that can run at 60fps with no jank.

Maybe someone needs to start the browser over. window.ImGUI. The no DOM use path. I'm sure that idea has issues. That's not the point again. The point is to see how much extra work we're being told to do, how many hoops we're being made to jump through, and then consider if there are better solutions. I don't know what those solutions are and I'm really not trying to dis React and Redux and the whole environment. Rather I just feel like setState, immutability helpers, redux, and all the other solutions are going down the wrong path solving a problem that shouldn't be there in the first place. If we solved the original issue (maybe that the DOM is slow and possibly the wrong solution) then the need to do all this extra work would disappear.


Visual Studio Code Wishlist


I'm in the process of switching to or at least trying out Visual Studio Code.

I'm not familiar with every editor but I certainly know that an expandable editor is not a new thing. Emacs is as old as dirt and has been customizable since it was first created. vi has been customizable just as long, I'm guessing. Brief, a DOS-based editor from the late 80s, was probably the first editor I customized a bunch with custom scripts.

For whatever reason though Visual Studio Code feels like it's done something new. I think the biggest thing it did (and I get that I might have gotten this from Atom) is not just to have an extension system but to actually integrate an extension browser/market directly in the product. Most other extensible editors require digging deeply into the manual to find out how to add an extension and then you're left on your own to find them. VSCode on the other hand asks you when you start or you can just click one of the main icons.

If that was all it brought that might be enough but it seems to have brought a few other innovations (expect to get corrected in the comments). One is that Microsoft designed an asynchronous language server protocol for heavy processes to run outside the editor. So for example if you want to have a C++ linter that runs LLVM to find places to warn you you don't have to put it in the editor and have it slow down your editing. Rather you run it as an external process, it can take as long as it needs and provide the result asynchronously.

Some editors did stuff like that but they didn't define a standard for it, instead it was just built in to their few custom linters. VSCode on the other hand, because it defined a standard opened that up to add support for any language. Write the linter, interface using the protocol, install, BOOM! Your editor now supports a new language. They even suggested other editors consider supporting the same protocol so they can take advantage of the same linters.
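
For a feel of what that standard looks like, here's a sketch of how a client might frame a request. The transport is JSON-RPC with an HTTP-style Content-Length header (the `initialize` params below are pared down for illustration):

```javascript
// A sketch of framing a Language Server Protocol request.
// The wire format is a Content-Length header, a blank line, then JSON-RPC.
function frameLspMessage(id, method, params) {
  const body = JSON.stringify({jsonrpc: "2.0", id: id, method: method, params: params});
  return "Content-Length: " + Buffer.byteLength(body, "utf8") + "\r\n\r\n" + body;
}

const msg = frameLspMessage(1, "initialize", {capabilities: {}});
```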

Maybe other editors support a similar feature but I was kind of blown away at some of the lint warnings I saw. For example I recently started playing with React. I followed the docs but of course things change over time. I followed some instructions that said to install the React linter into VSCode and suddenly I was getting super React-specific linter warnings directly in my editor. Is that a common thing anywhere else? I've certainly seen lint warnings for the language itself but I've never seen them for specific libraries.

It also helps, at least for JavaScript, that there is almost a de facto standard for declaring your dependencies. This means the editor can easily look them up and know which linters to run. Of course you can also manually specify them but dang, that's pretty awesome.

Another feature I hadn't seen so much before is AST based linters that can expose correctors. I've seen a linter tell me "2 spaces required here" or "must use double quotes" but I hadn't seen "Fix all occurrences of this issue" before. Of course I've seen re−formatters before but they were generally all or nothing. You run your entire file through, it reformats everything, you get stuff out the other end. Some of you might like that kind of stuff but I'm not a fan as I find I generally want more flexibility than most coding standards provide.

This is not VSCode related but that brings up yet another thing, which is that the most common linter for JavaScript is eslint, and not only is it massively configurable but it also takes plugins. That means, unlike most linters I'm familiar with, eslint can relatively easily be customized for your personal style instead of just having to pick one of a few presets. Similarly, the way these settings are specified is somewhat standardized so VSCode can adapt to different lint settings per project.

Unfortunately VSCode is still missing many things I'm used to. Some might be easy to add as plugins. Others will require big changes I can only pray the team will make

Big needs include recordable keyboard macros. That's a feature I've used often for over 30 years and it's a little hard to believe that it wasn't part of the MVP for VSCode. It even exists in Visual Studio. It requires a certain level of architecture at a deep level. Hopefully that architecture is already in place.

Another personal big need is support for unlimited window splitting. As it is VSCode is hardcoded to 3 vertical splits, period. No horizontal splits. No 2x2. Emacs since the 80s has had unlimited splitting. In this age of 30-inch monitors a limit of 3 splits, and only vertical splits, seems very limiting.

Yet another is support for column editing. VSCode has support for multiple cursors and that's great but it's not a substitute for column editing.

column editing

multi-cursor editing

Similarly in my old editor there are at least 3 modes of selecting: (1) normal character to character select, (2) line select, (3) column select. You press Alt-M (or just hold shift) for the normal character to character select, Alt-L switches to line select, Alt-C to column select. Moving the cursor keys once in selection mode adjusts the selection. Pressing almost anything else stops selecting, which basically means you need the editor to tell you some other key was pressed. Just assigning a function to get called on Alt-L, Alt-M, Alt-C is not enough.

Mapping the keys might be easy but handling virtual space not so much.

Those things probably need core changes to the editor's internals. The following can probably just be done with extensions some of which might already exist.

Less mouse dependence. It could be I'm just not used to it yet, but in my previous editor there is an optional custom open dialog that doesn't need a mouse. The default is to use the OS's open dialog but when editing I find that a PITA. Another is a buffer selector. Some people might see this as the same as a file selector but for some reason I find it useful to be able to switch among the short list of files I have open rather than the entire list of files in my project. Plugins for those probably would not be that hard.

Another is to be able to search and replace based on coloring or the AST. Example: search and replace "for" with "four" but only in strings. Without that feature every for loop would get replaced. Or search and replace "which" with "witch" but only in identifiers.
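
A real implementation would need the AST or the syntax highlighter's tokens, but a naive sketch of the idea, assuming simple quoted strings, might look like:

```javascript
// A naive sketch: run the replacement only inside string literals.
// Real code would walk the AST; this regex just matches quoted strings.
function replaceInStrings(source, target, replacement) {
  return source.replace(/(["'])(?:\\.|(?!\1).)*\1/g,
    str => str.split(target).join(replacement));
}
```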

More auto formatting would be nice. It's possible it already exists and I just have to configure or turn it on. As a higher level example, my old editor could auto wrap comments.

And it's also aware when I'm in a comment block and knows to indent the next line with a comment. It's probably just a minor tweak to the current auto indent which looks at the previous line.

This one might require callback/event support but my previous editor keeps backups and I can visit and diff them

You're probably thinking I should use git for that but that's not really the point. The point is this just happens in the background and saves me when I haven't committed to git yet or when I'm working on a file not related to software dev.

One more is safe multi-file search and replace. Basically my current editor can search and replace across files and then undo the entire thing. Of course without that feature I could manually backup first, then do my search and replace, but it's so much nicer just to be able to do it without thinking about it.

I'm not entirely sure I want to switch but I do find some of the new features compelling so I'm hoping I can get VSCode customized to surpass my previous editor.


Does Chrome need to change its mic/camera permission UX?


I'm a little worried about webcam and mic access in browsers at least as currently implemented. (and native apps too I suppose but less so)

As it is, in Chrome, if you let a webpage access your camera (or mic) that page's domain gets permanent permission to access the camera or mic whenever it wants forever, no questions asked again.

I recently visited one of the early HTML5 Webcam demos that I hadn't visited in years. I expected to get asked for permission to access the camera. Instead the camera just came on. That does not seem like a good model for web sites that are covered in ads and scripts from all over the net.

I'm sure the Chromium team was thinking of supporting hangouts when designing webcam support and I might be convinced that if hangouts always had access to the camera that might be no worse than a native app. But it's the browser; it's not a native app, it's untrusted code.

Even for communications sites though, if I run them in an iframe they get camera permission. In other words, say yes just once and now that domain can follow you all over the net and use your camera and mic, at least as of Chrome 59. Did you ever use your mic to make a call? Well now any page that has facebook social embeds can use your mic without asking.

I don't know what the best permission UX is. Always asking might be tedious for actual communication websites (messenger, hangouts, slack?, ...) but not asking sucks for the open web. It even sucks on a communications website if the camera or mic is not something I use often. I don't want any app to have permission to spy on me. Imagine you use it to do a video call just once, then not again for 6 months even though you're text chatting constantly. During that entire time, at any time, slack could have been accessing your mic or your camera. I personally want to opt in to always ask. I think I'd prefer this even in native apps but especially for the web. The UX doesn't have to suck. Clicking "call" and having the browser say "this app wants to access your mic, Y/N" doesn't seem like a burden.

Here's a demo of the issue.

At a minimum it seems like iframes should not get automatic permission even if that domain had permission before. Otherwise some ad company will make a compelling camera demo just to get you to say yes once to using the camera on their domain. Once they do all their ads all over the net can start spying on you or using the mic to track you.

Even then though, there are plenty of sites that allow users to post JavaScript: give access to one user's page and all users' pages get access. So a nice person makes a camera demo, a bad person takes advantage of the fact that that domain now has access to your mic or camera.

I filed a bug on this about 5 months ago but no word yet. If you think this is an important issue consider starring the bug. Until then, if this concerns you go to chrome://settings/content/camera in Chrome and remove all the sites you've enabled the camera for. Do the same for the microphone by going to chrome://settings/content/microphone.

If you use a site that needs access to the camera or the mic, once you've given it permission to use the camera or the mic a small icon will appear in the URL bar. When you're done with the site you can click that icon to remove permission. That's tedious and you're likely to forget but it's better than nothing for now.


Don't disable web security!!!


This basic question is all over stack overflow.

People ask how they can access files when developing HTML locally. They make a .HTML file, then open it in Chrome. They add a script that needs to access an image for canvas or WebGL or whatever and find they can't. So they ask on Stack Overflow and the most common answer is some form of "Start Chrome with the option --disable-web-security" (or one of 5 or 6 other similar flags).

I keep screaming DON'T DO THAT! but almost no one listens. In fact not only do they not listen, they downvote my answers.

Well, here are two proofs of concept of why it's ill-advised to disable web security.

The first one is an example that will get your stack overflow or github username if you are logged in and you started chrome with --disable-web-security. Of course you probably don't care that someone is looking up your username on various sites but that's not really the point. The point is some webpage not related to those sites was able to access data from those sites. The same webpage could access any other site: your bank, your google account, all because you disabled security.

You might say "I'd never run a script like that" but you likely run lots of 3rd party scripts.

The second example will show files from your hard drive. It could upload them to a remote server. Which files some baddie would want I have no idea. The point is not to show uploading dangerous files. The point is only to show if you disable web security it's possible for a script, your own or a 3rd party one to access your local files.

Many of you will be thinking "I'd never do either of those" but I think that's being short-sighted. I know I often forget which browser I'm in, the dev one or the non-dev one. If I mistakenly used the dev one with web security disabled then oops.

Of course you might also be thinking you'd never do any of the things above. You're running your own hand coded webpages with scripts and not using any 3rd party libraries and you never use the wrong browser. But again, that's not the point. The point is you turned off security. The point is not to enumerate all the ways you might get hacked or have data stolen or accounts manipulated. The point is if you disable web security you've made yourself more vulnerable period.

This is especially frustrating because the better solution is so simple. Just run a simple local server! It will take you all of 2 minutes at most. Here's one I wrote for those people not comfortable with the command line. Here's also 6 or 7 others.


Sony Playlink


So Sony announced Playlink at E3 this year 2017.

It's a little frustrating. It's basically HappyFunTimes.

The part that's frustrating is that in that video Shuhei Yoshida is happily playing a Playlink game and yet about a year ago I saw Shuhei Yoshida at a Bitsummit 2016 party and I showed him a video of happyfuntimes. He told me using phones as a controller was a stupid idea. His objection was that phone controls are too mushy and laggy. He didn't have the foresight to see that not all games need precise controls to be fun. And yet here he is a year later playing Playlink which is the same thing as happyfuntimes.

In 2014 I also showed Konou Tsutomu happyfuntimes at a party and even suggested a "Playstation Party" channel with games for more than 4 players using the system. My hope was maybe he'd get excited about it and bring it up at Sony but he seemed uninterested.

Some people don't see the similarity but I'd like to point out that there are

And many others.

And of course there are also games you could not easily play on PS4 like this giant game where players control bunnies or this game using 6 screens.

And to Shuhei's objection that the controls are not precise enough

You just have to design the right games. Happyfuntimes would not be any good for Street Fighter but that doesn't mean there aren't still an infinite variety of games it would be good for.

In any case I'm not upset about it. I doubt that Shuhei had much to do with Playlink directly and I doubt Konou even brought it up at Sony. I think the basic idea is actually a pretty obvious idea. Mostly it just reinforces something I already knew: that pitching and negotiation skills are incredibly important. If I had really wanted happyfuntimes to become PS4 Playlink I should have pitched it much harder and more officially than showing it casually at a party to gauge interest. It's only slightly annoying to have been shot down by the same guy that announced their own version of the same thing.

In any case, if you want to experiment with games that support lots of players happyfuntimes is still a free and open source project available for Unity and/or HTML5/Electron. I'd love to see and play your games!


Isolating Devices on a Home Network


Call me paranoid but I'd really like to be able to easily isolate devices on a home network.

As it is most people have at best a single router running a single local area network. On that network they have 1 or more computers, 1 or more tablets, 1 or more phones. Then they might have 1 or more smart TVs, 1 or more game consoles. And finally now people are starting to add Internet of Things (IoT) devices: IP webcams, network connected door locks, lights that change color from apps, etc...

The problem is every device, and every program running on every phone/tablet/TV/game console/computer, can hack all your other devices on the same network. That includes when friends visit and connect to your network.

So for example here's a demonstration of hacking into your network through the network connected lights. There's ransomware where your computer gets infected with a virus which encrypts all your data and then demands a ransom to un−encrypt it. The same thing is happening to smart TVs where they infect your TV, encrypt it so you can't use it and demand money to un−encrypt it. Printers can get infected.

All of this gets easier with every app you download. You download some new app for your phone, you have no idea if, when it's on your home network, that it's not scanning the network for devices with known exploits to infect. Maybe it's just hacking your router for various reasons. It could hack your DNS so when you type "" it actually takes you to a fake site where you type in your password and then later get robbed. Conversely you have no idea what bugs are in the app itself that might let it be exploited.

One way to possibly mitigate some of these issues seems like it would be for the router to put every device on its own network. I know of no router that can do this easily. Some routers can make virtual networks but it's a pain in the ass. Worse, you often want to be able to talk to other devices on your home network. For example you'd like to tell your chromecast to cast some video from your phone, except you can't if they're not on the same network. You'd like to access the webcam in your baby's room but you can't if they're not on the same network. You'd like to print but you can't if they're not on the same network, etc...

So, I've been wondering, where's the router that fixes this issue? Let me add a device with 1 button that makes a LAN for that one device. Also, let me choose which other devices, and over which protocols, that new device is allowed to communicate. All devices probably also need to use some kind of encryption since with low-level network access an app could still probably manage to hack things.

I get this would only be a solution for geeks. Maybe it could be more automated in some way. But in general there's clearly no way you can expect all app makers and all device makers to be perfect. So, the only solution seems like isolating the devices from each other.

Any other solutions?

Comments and stuff


I recently made a new site for WebGL2. WebGL2 is backward compatible with WebGL1, which means anything you learn about WebGL1 is applicable to WebGL2, but there are a few things that made it seem like it needed a new site.

The biggest was GLSL 3.00 ES, which is an updated version of GLSL that's not available in WebGL1. It adds some great features but it's not backward compatible so it seemed like making all the samples use GLSL 3.00 ES was better than leaving them as is.

Another big reason is WebGL2 has Vertex Array Object support. I had not used them much in WebGL1 because they were an optional feature. After using them though I feel like, because it's possible to make a polyfill, I should have always used them from day 1. Those machines that need the polyfill are also probably machines that don't run WebGL well in the first place. On the other hand I think people would be annoyed learning WebGL1 if they had to rely on a polyfill so as it is I'll leave the WebGL1 site not using Vertex Array Objects.

The second biggest reason is I got my math backward. Matrix multiplication is order dependent: A * B != B * A. I've used various 3d math libraries in the past and I personally never noticed a convention. I'd just try A * B * C and if I didn't get the result I wanted I'd switch to C * B * A. So, when I made the math library for WebGL1 I picked a convention based off matrix names. It's common to have names like viewProjection and worldViewProjection so it seemed like it would be easier to understand if

viewProjection = view * projection


worldViewProjection = world * view * projection

But, I've been convinced I was wrong. It's actually

viewProjection = projection * view


worldViewProjection = projection * view * world

I won't go into the details why but I switched all the math on the site to use the second style.
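
The order dependence itself is easy to see with a tiny sketch, using 2x2 matrices for brevity:

```javascript
// 2x2 matrices stored flat, row-major: [a, b, c, d] means [[a, b], [c, d]].
function multiply(m, n) {
  return [
    m[0] * n[0] + m[1] * n[2], m[0] * n[1] + m[1] * n[3],
    m[2] * n[0] + m[3] * n[2], m[2] * n[1] + m[3] * n[3],
  ];
}

const a = [1, 2, 3, 4];
const b = [5, 6, 7, 8];

console.log(multiply(a, b));  // [ 19, 22, 43, 50 ]
console.log(multiply(b, a));  // [ 23, 34, 31, 46 ]
```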

Anyway, that's beside the point. The bigger issue is the pain of updating it all. There are currently over 132 samples. If I decide that all samples need to switch their math that's several days of work. Even if I decide something small, like all samples should clear the screen or all samples should set the viewport, it takes hours to visit each and every one and update it. Some have patterns I can search and replace for but they're often multi-line and a pain to regex. I wonder if there is some magic by which I could go edit git history to make each change back when I made the first sample and then see that get propagated up through all the samples. Sadly I don't think git knows which samples were based off others and even if it did I'm sure it would be even more confusing figuring out where it couldn't merge.

It's hard to get everything right the first time, at least for me, and so as I create more and more samples and get more and more feedback I see places where I should probably have done something different right from the start.

For example my original goal in writing the first sample was to be as simple as possible. There's about 20 lines of boilerplate needed for any WebGL program. I originally left those out because it seemed like clutter. Who cares how a shader is compiled, all that matters is it's compiled and we can use it. But people complained. That's fine. I updated the sample to put those 20 lines in but removed them in the 2nd sample.

Another issue was I didn't use namespaces, so there are functions like createProgramFromScripts instead of webglUtils.createProgramFromScripts. I'm guessing that because they were global, people were left wondering "where is this from?". I switched them. I'm hoping the prefixes make it clear and that readers will notice the webgl-utils.js script tag at the top.
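The namespacing change is tiny but makes call sites self-documenting. A hypothetical before/after sketch (the function bodies are stubbed out since only the naming matters here):

```javascript
// Before: a bare global. Readers see a call to it and have no idea
// which script it came from.
function createProgramFromScripts(gl, shaderScriptIds) {
  // ...compile shaders, link a program...
}

// After: the same helper hanging off one webglUtils object, so the
// prefix itself points back at the webgl-utils.js script tag.
const webglUtils = {
  createProgramFromScripts(gl, shaderScriptIds) {
    // ...compile shaders, link a program...
  },
};
```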

Similarly, the first sample just makes 1 draw call on a fixed size 400x300 canvas. Again, to keep it as simple as possible, a ton of stuff was left out. For example, setting uniforms once during initialization. If you're only going to draw once that seems natural, but drawing only once is a huge exception, so it ends up making the sample unrepresentative of real WebGL code. Similarly, because the canvas was a fixed size there was no reason to set the viewport. But 99% of WebGL apps have a canvas that changes size, so they need to set the viewport. And because they resize, they need to update the resolution of the canvas, but I had left both of those steps out. Yet another issue was using only one shader program. A normal WebGL app will have several shader programs and therefore sets which shader program to use at render time, but the samples were setting it at init time since there was only one. The same goes for attributes, textures, global render states, etc. All of these things are normally set at render time, but most of the samples were setting them at init time.
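The render-time pattern that real apps follow can be sketched roughly like this. Here `gl` is a stub that just records calls, since a real WebGLRenderingContext needs a browser, and the scene/object shape is made up for illustration:

```javascript
// Stub gl that records each call so the sketch runs outside a browser.
const calls = [];
const gl = {
  canvas: { width: 400, height: 300 },
  TRIANGLES: 4,
  viewport: (...args) => calls.push(['viewport', ...args]),
  useProgram: (p) => calls.push(['useProgram', p]),
  drawArrays: (...args) => calls.push(['drawArrays', ...args]),
};

// Things a real app re-does every frame, even though a draw-once
// sample could get away with doing them at init:
function render(gl, scene) {
  gl.viewport(0, 0, gl.canvas.width, gl.canvas.height); // canvas may resize
  for (const obj of scene.objects) {
    gl.useProgram(obj.program); // real apps switch between several programs
    gl.drawArrays(gl.TRIANGLES, 0, obj.numVertices);
  }
}

render(gl, { objects: [{ program: 'p1', numVertices: 3 }] });
```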

Anyway, I updated almost all of that on the new site. Now I'm trying to decide how much time to spend backporting to the old site, given it took easily 40-60 hours of work to make all the changes.

I recently added a code editor to the site so people can view the code easily. That's another one of those things where, having written JS-related stuff for the last 8 years, I know some of the tools. I know you can pick "view-source" and/or you can open the devtools in any browser and see all the code. I also know you can go to github, which is linked on every page, and see the source. That said, I got several comments from people who got lost and didn't know how to do any of those. Should I put a link under every single sample, "click here to learn how to view source", that leads to tutorials on view-source, devtools, and github? How to get a copy locally. How to run a simple web server in a few seconds to run the tutorials locally. I suppose I should write another article on all that stuff.

Well, I added the code editor for now, which I'm a tiny bit proud of. At least in Chrome and Firefox it seems to catch both JavaScript errors and WebGL errors and will put your cursor at the correct line in your source. It also displays the console messages. I got that inspiration from Stack Overflow's snippet editor, but theirs gives the wrong line numbers. Unfortunately it's a pretty heavy editor, but it does do intellisense-like help. Hopefully it will be updated to handle JSDocs soon like its big brother.

But that brought up new issues which I'm not sure whether I should handle or not. The original samples had a fixed size. With the code editor, though, the size can change. I updated all the samples to handle that but it's not perfect. Most real WebGL apps handle this case. Should I clutter the code in all the samples to handle it? All the animated samples handle it but the non-animated samples don't. For some of those samples it would basically just be one line

window.addEventListener('resize', drawScene);

But of course it's not that simple. Many samples are only designed to draw once, period, and would have to be re-written to even have a drawScene function.
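That rewrite is mostly mechanical: pull the drawing code into a drawScene function, call it once at startup, and call it again on resize. A sketch, with `window` stubbed out so it runs anywhere and all names hypothetical:

```javascript
// Restructure a draw-once sample: wrap the drawing in a function so
// the same code can run at startup and on every resize event.
function setupResizableDrawing(target, drawScene) {
  drawScene();                                   // draw once at startup
  target.addEventListener('resize', drawScene);  // and again on resize
}

// Tiny stand-in for window, just for illustration.
const listeners = {};
const fakeWindow = {
  addEventListener: (type, fn) => { listeners[type] = fn; },
};

let drawCount = 0;
setupResizableDrawing(fakeWindow, () => { drawCount++; });
listeners.resize();          // simulate a resize event
console.log(drawCount);      // 2
```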

I'm not sure what my point was in writing this except to say that it's a new experience. Normally I work on 1 program for N months. If I decide I need to refactor it I refactor it. But for these tutorials I'm effectively working on 132+ tiny programs. If I decide I need to refactor I have to refactor all 132+ of them and all the articles that reference them.


Meteor's downsides


Meteor is a really cool framework for making websites. It runs on node.js and by default it uses MongoDB (you can change that). It's a "fullstack" framework, meaning it handles both the server (backend) and the client (browser).

You can install it and have their samples up in minutes. They have publishing utilities to help you get it up live on the internet either through their hosted service or through other means.

It's got some really nice features. Code is easily shared across backend and browser. You can access data on both sides with nearly the same code. It's got live updating of data and code. It's really awesome!

Except ... AFAIK it's EXPENSIVE to use. Another way of putting that is it's not for hobbies, only for serious stuff. Let me explain.


The VR Workspace


I'm going to guess there are already a zillion blog posts like this but .... here's mine, if only to record my thoughts so you can laugh at me in 3-5 years for dumb predictions.

A couple of weeks ago I got to see the Oculus DK3 Crescent Bay demo. Very, very impressive. If you've used a DK2 (2014), it was mostly PS2-quality graphics. It still felt awesome to see things in stereoscopic 3D, but Crescent Bay is somewhere at PS3 or PS4 level. On top of that it runs at 90fps and has skewing, meaning if your head moves left/right, up/down, or back/forward that's reflected in the simulation, whereas DK2 (and Gear VR and Google Cardboard) currently only support head rotation. It makes a huge difference.