NES/Famicom, A Visual Compendium – Corrections

A very nice book came out with images from tons of NES games called "NES/Famicom, A Visual Compendium". I saw several of my friends had backed the Kickstarter, and when they got their copies they mentioned one of my games was in it, so I had to buy a copy.

It's a gorgeous book. I'm not 100% sure I'm into the sharp emulator-capture graphics, as they look nothing at all like the original games. As art they're very cool, but as representations of what those games looked like they're far off the mark. You can see a comparison in this article: blown up in an emulator they look all blocky, but back when the games came out they looked smooth. Still, as graphic art it's cool to see them in the book.

That said, I looked up M.C. Kids and was a little disappointed to see things reported incorrectly. I'm not blaming anyone in particular. I assume it's an issue like the game "telephone", where as the message got passed from person to person it got re-interpreted and ended up in its present form.

For M.C. Kids it implies Rene did all the enemies, but that was not the case. I'm not sure what percentage of the enemies Rene created, but IIRC Darren Bartlett and Ron Miller both did enemies as well.

I then noticed there was an unreleased section, and sure enough Robocop vs Terminator was listed there.

It's also wrong. It says "Graeme Devine moved me from Caesars Palace Gameboy to Robocop Vs Terminator". What actually happened is Graeme Devine moved me from Caesars Palace to Terminator NES, not Robocop vs Terminator. I worked on an animation tool to be shared between Terminator NES and M.C. Kids NES. Later I was asked to work on M.C. Kids, and Terminator was given to David Perry to make Terminator for Sega Genesis. When I finished M.C. Kids and I was no longer at Virgin Games, I got a contract from Interplay to code Robocop Vs Terminator for NES.

Trying to help noobs is SOOOO FRUSTRATING!

I often wonder if I'd like to teach. It's certainly fun to teach when the students are easy to teach, but where's the challenge in that?

I wrote webglfundamentals.org (and webgl2fundamentals.org) and I answer tons of WebGL questions on stackoverflow but sometimes it’s sooooooooooooooo frustrating.

I'm trying to take those frustrations as an opportunity to learn how to teach better, how to present things, how to be patient, etc., but still…

After 11 years of waiting The Last Guardian has somehow lost the magic

It's been 11 years since Shadow of the Colossus shipped for PS2. I was such a fan of Ico that even though I sat directly next to the Shadow of the Colossus team at Sony Japan, and all I had to do was stand up and look over my cubicle's divider to see work in progress, I made my best effort not to look, because I didn't want to spoil the experience of whatever they were making.

So now, 11 years later, the team has finally shipped their next game, skipping an entire console generation.

And …

Isolating Devices on a Home Network

Call me paranoid but I’d really like to be able to easily isolate devices on a home network.

As it is, most people have at best a single router running a single local area network. On that network they have 1 or more computers, 1 or more tablets, 1 or more phones. Then they might have 1 or more smart TVs, 1 or more game consoles. And finally now people are starting to add Internet of Things (IoT) devices: IP webcams, network-connected door locks, lights that change color from apps, etc…

The problem is every device and every program running on every phone/tablet/TV/game console/computer can hack all your other devices on the same network. That includes when friends visit and connect to your network.

So for example, here's a demonstration of hacking into your network through network-connected lights. There's ransomware, where your computer gets infected with a virus that encrypts all your data and then demands a ransom to decrypt it. The same thing is happening to smart TVs: they infect your TV, encrypt it so you can't use it, and demand money to decrypt it. Printers can get infected.

All of this gets easier with every app you download. You download some new app for your phone and you have no idea whether, once it's on your home network, it's scanning the network for devices with known exploits to infect. Maybe it's just hacking your router for various reasons. It could hack your DNS so that when you type "mybank.com" it actually takes you to a fake site where you type in your password and later get robbed. Conversely, you have no idea what bugs are in the app itself that might let it be exploited.

One way to possibly mitigate some of these issues would be for the router to put every device on its own network. I know of no router that can do this easily. Some routers can make virtual networks, but it's a pain in the ass. Worse, you often want to be able to talk to other devices on your home network. For example you'd like to tell your Chromecast to cast some video from your phone, except you can't if they're not on the same network. You'd like to access the webcam in your baby's room, but you can't if they're not on the same network. You'd like to print, but you can't if they're not on the same network, etc…

So, I've been wondering, where's the router that fixes this issue? Let me add a device with one button that makes a LAN for that one device. Also, let me choose which other devices, and over which protocols, that new device is allowed to communicate. All devices probably also need to use some kind of encryption, since with low-level network access an app could still probably manage to hack things.

I get that this would only be a solution for geeks. Maybe it could be more automated in some way. But in general there's clearly no way you can expect all app makers and all device makers to be perfect, so the only solution seems to be isolating the devices from each other.

Any other solutions?

WebGL2Fundamentals.org and stuff

I recently made webgl2fundamentals.org. WebGL2 is backward compatible with WebGL1, which means anything you learn about WebGL1 is applicable to WebGL2, but there are a few things that made it seem like it needed a new site.

The biggest was GLSL 3.00 ES, an updated version of GLSL that's not available in WebGL1. It adds some great features, but it's not backward compatible, so it seemed better to make all the samples use GLSL 3.00 ES than to leave them as is.

Another big reason is WebGL2 has Vertex Array Object support. I had not used them much in WebGL1 because they were an optional feature. After using them, though, I feel like, because it's possible to make a polyfill, I should have used them from day 1. The machines that need the polyfill are probably also machines that don't run WebGL well in the first place. On the other hand, I think people would be annoyed learning WebGL1 if they had to rely on a polyfill, so as it is I'll leave the WebGL1 site not using Vertex Array Objects.
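For reference, here's a minimal sketch of what that looks like: vertex array objects are built into WebGL2, while WebGL1 needs the optional OES_vertex_array_object extension (which is what a polyfill wraps). The buffer setup and numVertices are assumed to already exist.

// WebGL2: built in
var vao = gl.createVertexArray();
gl.bindVertexArray(vao);
// ... gl.bindBuffer / gl.enableVertexAttribArray / gl.vertexAttribPointer calls go here ...
gl.bindVertexArray(null);

// WebGL1: the same thing via the optional extension
var ext = gl.getExtension('OES_vertex_array_object');
var vao1 = ext.createVertexArrayOES();
ext.bindVertexArrayOES(vao1);
// ... same attribute setup ...
ext.bindVertexArrayOES(null);

// at render time one call restores all the attribute state
gl.bindVertexArray(vao);  // or ext.bindVertexArrayOES(vao1) in WebGL1
gl.drawArrays(gl.TRIANGLES, 0, numVertices);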

The second biggest reason is I got my math backward. Matrix multiplication is order dependent: A * B != B * A. I've used various 3D math libraries in the past and I personally never noticed a convention. I'd just try A * B * C * D, and if I didn't get the result I wanted I'd switch to D * C * B * A. So, when I made the math library for WebGL1 I picked a convention based off matrix names. It's common to have names like viewProjection and worldViewProjection, so it seemed like it would be easier to understand if

viewProjection = view * projection

and

worldViewProjection = world * view * projection

But, I’ve been convinced I was wrong. It’s actually

viewProjection = projection * view

and

worldViewProjection = projection * view * world

I won’t go into the details why but I switched all the math on webgl2fundamentals.org to use the second style.
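As a concrete sketch, assuming an m4.multiply(a, b) helper that returns a * b, the two conventions look like this:

// the convention I originally picked (and later abandoned)
var viewProjection      = m4.multiply(view, projection);
var worldViewProjection = m4.multiply(m4.multiply(world, view), projection);

// the convention webgl2fundamentals.org uses now
viewProjection      = m4.multiply(projection, view);
worldViewProjection = m4.multiply(m4.multiply(projection, view), world);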

Anyway, that's beside the point. The bigger issue is the pain of updating it all. There are currently over 132 samples. If I decide all samples need to switch their math, that's several days of work. Even if I decide something small, like all samples should clear the screen or all samples should set the viewport, it takes hours to visit each and every one and update it. Some have patterns I can search and replace for, but they're often multi-line and a pain to regex. I wonder if there is some magic by which I could edit git history to make each change back when I made the first sample and then see it propagate up through all the samples. Sadly I don't think git knows which samples were based off others, and even if it did I'm sure it would be even more confusing figuring out where it couldn't merge.

It's hard to get everything right the first time, at least for me, and so as I create more and more samples and get more and more feedback, I see places where I probably should have done something different right from the start.

For example, my original goal in writing the first sample was to be as simple as possible. There are about 20 lines of boilerplate needed for any WebGL program. I originally left those out because they seemed like clutter. Who cares how a shader is compiled? All that matters is it's compiled and we can use it. But people complained. That's fine. I updated the sample to put those 20 lines in, but removed them in the 2nd sample.
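For the curious, those ~20 lines are mostly the compile-and-link dance. A rough sketch, with only minimal error reporting:

function createShader(gl, type, source) {
  var shader = gl.createShader(type);   // gl.VERTEX_SHADER or gl.FRAGMENT_SHADER
  gl.shaderSource(shader, source);
  gl.compileShader(shader);
  if (!gl.getShaderParameter(shader, gl.COMPILE_STATUS)) {
    console.log(gl.getShaderInfoLog(shader));
  }
  return shader;
}

function createProgram(gl, vertexShader, fragmentShader) {
  var program = gl.createProgram();
  gl.attachShader(program, vertexShader);
  gl.attachShader(program, fragmentShader);
  gl.linkProgram(program);
  if (!gl.getProgramParameter(program, gl.LINK_STATUS)) {
    console.log(gl.getProgramInfoLog(program));
  }
  return program;
}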

Another issue was I didn't use namespaces, so there are functions like createProgramFromScripts instead of webglUtils.createProgramFromScripts. I'm guessing that, being global, people were like "where is this from?". I switched them. I'm hoping the prefixes make it clear and people will see the webgl-utils.js script tag at the top.

Similarly, the first sample just makes 1 draw call on a fixed-size 400×300 canvas. Again, to keep it as simple as possible, a ton of stuff is left out. For example, setting uniforms once during initialization. If you're only going to draw once that seems natural, but drawing only once is a huge exception, so it ends up making the sample not representative of real WebGL code. Similarly, because the canvas was a fixed size there was no reason to set the viewport, but 99% of WebGL apps have a canvas that changes size, so they need to set the viewport. And because they resize, they need to update the resolution of the canvas, but I had left both of those steps out. Yet another issue was using only one shader program. A normal WebGL app will have several shader programs and therefore sets which shader program to use at render time, but the samples were setting it at init time since there was only 1. The same goes for attributes, textures, global render states, etc. All of these things are normally set at render time, but most of the samples were setting them at init time.
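For example, the render-time pattern most of the samples now follow is roughly this sketch: resize the canvas's drawing buffer to match the size it's displayed at, set the viewport, and only then set programs, attributes, uniforms, and draw.

function resizeCanvasToDisplaySize(canvas) {
  // the size the browser is displaying the canvas at, in CSS pixels
  var width  = canvas.clientWidth;
  var height = canvas.clientHeight;
  if (canvas.width !== width || canvas.height !== height) {
    canvas.width  = width;    // update the drawing buffer resolution
    canvas.height = height;
  }
}

function drawScene() {
  resizeCanvasToDisplaySize(gl.canvas);
  gl.viewport(0, 0, gl.canvas.width, gl.canvas.height);
  // set program, attributes, uniforms, textures, render states here, at render time
  // ...then draw
}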

Anyway, I updated almost all of that on webgl2fundamentals.org. Now I’m trying to decide how much time to spend backporting to the old site given it took easily 40-60 hours of work to make all the changes.

I recently added a code editor to the site so people can view the code easily. That's another one of those things where, I guess having written JS-related stuff for the last 8 years, I know some of the tools. I know you can pick "view source" and/or open the devtools in any browser and see all the code. I also know you can go to github, which is linked on every page, and see the source. That said, I got several comments from people who got lost and didn't know how to do any of those. Should I put a link under every single sample, "click here to learn how to view source", that leads to tutorials on view source, devtools, and github? How to get a copy locally? How to run a simple web server in a few seconds to run the tutorials locally? I suppose I should write another article on all that stuff.

Well, I added the code editor for now, which I'm a tiny bit proud of. At least in Chrome and Firefox it seems to catch both JavaScript errors and WebGL errors and will put your cursor at the correct line in your source. It also displays the console messages. I got the inspiration from Stack Overflow's snippet editor, but theirs gives the wrong line numbers. Unfortunately it's a pretty heavy editor, but it does do intellisense-like help. Hopefully it will be updated to handle JSDocs soon like its big brother.

But that brought up new issues which I'm not sure I should handle or not. The original samples had a fixed size. With the code editor, though, the size can change. I updated all the samples to handle that, but it's not perfect. Most real WebGL apps handle this case. Should I clutter the code in all the samples to handle it? All the animated samples handle it, but non-animated samples don't. For some of those samples it would basically just be one line

window.addEventListener('resize', drawScene);

But of course it's not that simple. Many samples are only designed to draw once, period, and would have to be rewritten to even have a drawScene function.
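In other words, a draw-once sample first has to be restructured into something like this sketch before that one line does anything useful:

function drawScene() {
  // everything that used to run once, inline, moves in here:
  // resize the canvas, set the viewport, set uniforms, draw, etc.
}

drawScene();                                   // draw once at startup
window.addEventListener('resize', drawScene);  // and again whenever the size changes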

I’m not sure what my point was in writing this except to say that it’s a new experience. Normally I work on 1 program for N months. If I decide I need to refactor it I refactor it. But for these tutorials I’m effectively working on 132+ tiny programs. If I decide I need to refactor I have to refactor all 132+ of them and all the articles that reference them.

Saving and Loading Files in a Web Page

This article is targeted at people who've started learning web programming. They've made a few web pages with JavaScript. Maybe they've made a paint program using 2D canvas or a 3D scene using three.js. Maybe it's an audio sound maker, maybe it's a tile map editor. At some point they wonder "how do I save files?"

Maybe they have a save button that just puts all the data into a textarea and presents it to the user and says “copy this and paste it into notepad to save”.

Well, the way you save in a web page is you save to a web server. OH THE HORROR! I hear you screaming WHAT? A server? Why would I want to install some giant server just to save data?

Well I’m here to show you a web server is not a giant piece of software. In fact it’s tiny. The smallest web server in many languages can be written in a few lines of code.

For example, node.js is a version of JavaScript that runs outside the browser. If you've ever used Perl or Python, it works exactly the same: you install it, you give it files to run, it runs them. Perl you give perl files, Python you give python files, node you give JavaScript files.

So, using node.js, here is the smallest web server:

var http = require('http');
function handleRequest(request, response){
    response.end('Hello World: Path = ' + request.url);
}
http.createServer(handleRequest).listen(8000, function() { });

JUST 5 LINES OF CODE!!!

Now, all these 5 lines do is return "Hello World: Path = <path>" for every page, but really that's the basics of a web server. Looking at the code above, without even explaining it, you could imagine looking at request.url and deciding to do different things depending on what the URL is. One URL might save, one might load, one might log in, etc. That's really it.
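A sketch of that idea (the paths and the comments are made up, just to show the shape of it):

function handleRequest(request, response) {
    if (request.url.startsWith('/save')) {
        // read the incoming data and write it to a file
    } else if (request.url.startsWith('/load')) {
        // read a file and send its contents back
    } else {
        response.end('Hello World: Path = ' + request.url);
    }
}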

Now let's explain the original 5 lines of code

var http = require('http');

require is the equivalent of import in Python or #include in C++ or using in C# or, in JavaScript, using <script src="...">. It's loading the module 'http' and referencing it by the variable http.

function handleRequest(request, response){
    response.end('Hello World: Path = ' + request.url);
}

This is a function that will get called when we get a request from a browser. request holds data about what the browser requested, like the URL for the request. response is an object we can use to send our response back to the browser. As you can see, here we're sending a string. We could also load a file and send the contents of that file, or we could query a database and send back the results. But everything starts here.

var server = http.createServer(handleRequest);
server.listen(8000, function() { 
  console.log("Listening at http://localhost:8000");
});

The last line I expanded a little. First it calls http.createServer and passes it the function we want to be called for all requests.

Then it calls server.listen which starts it listening for requests from the browser. The 8000 is which port to listen on and the function is a callback to tell us when the server is up and running.

TRY IT OUT!

To run this server install node.js. Don’t worry it’s not some giant ass program. It’s actually rather small. Much smaller than python or perl or any of those other languages.

Now open a terminal on OSX, or on Windows open a "Node Command Prompt" (node made this when you installed it).

Make a folder somewhere and cd to it in your terminal / command prompt

Make a file called index.js and copy and paste the 5 lines above into it. Save it.

Now type node index.js

In your browser open a new tab/window and go to http://localhost:8000. It should say Hello World: Path = /. If you type some other URL like http://localhost:8000/foo/bar?this=that you'll see it returns that back to you.

Congratulations, you just wrote a web server!

Let’s add serving files

You can imagine the code to serve files. You’d parse the URL to get a path, read the corresponding file, call response.end(contentsOfFile). It’s literally that easy. But, just to make it less code (and cover more cases) there’s a library that does it for us and it’s super easy to use.
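If you're curious what that hand-rolled version might look like, here's a rough sketch using node's built-in fs module. It ignores security and lots of edge cases (query strings, content types, paths containing ".."), which is exactly why the library below is nicer.

var http = require('http');
var fs = require('fs');
var path = require('path');

function handleRequest(request, response) {
    // turn the URL path into a path inside the "public" folder
    var filePath = path.join('public', request.url);
    fs.readFile(filePath, function(err, contentsOfFile) {
        if (err) {
            response.statusCode = 404;
            response.end('not found: ' + request.url);
        } else {
            response.end(contentsOfFile);
        }
    });
}
http.createServer(handleRequest).listen(8000, function() { });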

Press Ctrl-C to stop your server if you haven’t already. Then type

npm install express

It will download a bunch of files and put them in a subfolder called "node_modules". It will also probably print a warning about there being no "package.json", which you can ignore (google package.json later).

Now let’s edit our file again. We’re going to replace the entire thing with this

"use strict";
const express = require('express');
const baseDir = 'public';

let app = express();
app.use(express.static(baseDir));
app.listen(8000, function() {
    console.log("listening at http://localhost:8000");
});

Looking at the last 2 lines you see app.listen(8000… just like before. That's because express is built on top of the same http server we had before. It just adds some structure we'll get to in a bit.

The cool part here is the line

app.use(express.static(baseDir));

It says “serve all the files from baseDir”.

So, make a subfolder called “public”. Inside make a file called test.html and inside that file put O'HI You. Save it. Now run your server again with node index.js

Go to http://localhost:8000/test.html in your browser. You should see "O'HI You".

Congratulations. You now have a web server that will serve any files you want, all in 9 lines of code!

Let’s Save Files

To save files we need to talk about HTTP methods. The method is another piece of data the browser sends when it makes a request. Above we saw the browser sends the URL to the server. It also sends a method. The default method is called GET. There's nothing special about it; it's just a word. You can make up any words you want, but there are 7 or 8 common ones, and GET means "get resource".

If you've ever made an XMLHttpRequest (and I hope you have, because I'm not going to explain that part), you specify the method. Back on the server we can look at `request.method` to see what you specified and use that as yet another piece of data to decide what to do. If the method is GET we do one thing. If the method is BANANAS we do something else.
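With the plain http server from the start of the article, that check would look something like this sketch:

function handleRequest(request, response) {
    if (request.method === 'GET') {
        // read a file and send it back
    } else if (request.method === 'PUT') {
        // read the incoming data and save it
    } else {
        response.statusCode = 405;  // method not allowed
        response.end();
    }
}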

express wraps that http object from our first example and adds a few major things.

(1) it does more parsing of `request.url` for us so we don’t have to do it manually.

(2) it routes. Routing means we can tell it, for various paths, what function to call. For example we could say if the path starts with "/fruit", call the function HandleFruit, and if the path matches "/purchase/item/:itemnumber", call HandleItemPurchase, etc. In our case we're going to just say we want all routes to call our function.

(3) it can route based on method. That way we don't have to check if the method was "GET" or "PUT" or "DELETE" or "BANANAS". We can just tell it to only call our handler if the path is XYZ and the method is ABC.
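A rough sketch of what that routing looks like (these routes and responses are made up just for illustration):

app.get('/fruit', function(req, res) {          // only called for GET /fruit
    res.send('here is some fruit');
});

app.get('/purchase/item/:itemnumber', function(req, res) {
    // express parses the ":itemnumber" part of the path for us
    res.send('you asked about item ' + req.params.itemnumber);
});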

So let’s update the code. Ctrl-C your server if you haven’t already and edit index.js and update it to this

"use strict";
const express = require('express');
*const path = require('path');
*const fs = require('fs');
const baseDir = 'public';

let app = express();
*app.put('*', function(req, res) {
*    console.log("saving:", req.path);
*    let body = '';
*    req.on('data', function(data) { body += data; });
*    req.on('end', function() {
*        fs.writeFileSync(path.join(baseDir, req.path), body);
*        res.send('saved');
*    });
*});
app.use(express.static(baseDir));
app.listen(8000, function() {
    console.log("listening at http://localhost:8000");
});

The first 2 added lines just reference more built-in node libraries. path is a library for manipulating file paths. fs stands for "file system" and is a library for dealing with files.

Next we call app.put, which takes 2 arguments. The first is the route, and '*' just means "all routes". Then it takes a function to call for this route. app.put only routes "PUT" method requests, so this line effectively says "call our function for every route when the method is PUT".

The function adds a tiny event handler to the data event that reads the data the browser is sending, appending it to a string called body. It adds another tiny event handler to the end event that writes the data out to a file and sends back the message 'saved'.

And that's it! We've made a server that saves and loads files. It's very insecure, because it can save and load any file, but if we're only using it for local stuff it's a great start.

Loading And Saving From the Browser

The final thing to do is test it out by writing the browser side. I'm going to assume that if you've already made some web pages and you're at the point where you want to load and save, you probably have some idea of what XMLHttpRequest is and how to make forms and check for users clicking on buttons, etc. So with that in mind, here's the new test.html

<html>
<head>
    <style>
    textarea {
        display: block;
    }
    </style>
</head>
<body>

<h1>Saving</h1>
<label for="savefilename">filename:</label>
<input id="savefilename" type="text" value="myfile.txt" />
<textarea id="savedata">
this is some test data
</textarea>
<button id="save">Save</button>

<h1>Loading</h1>
<label for="loadfilename">filename:</label>
<input id="loadfilename" type="text" value="myfile.txt" />
<textarea id="loaddata">
</textarea>
<button id="load">Load</button>



</body>
<script>
// make $ a shortcut for document.querySelector
var $ = document.querySelector.bind(document);

// when the user clicks 'save'
$("#save").addEventListener('click', function() {

    // get the filename and data
    var filename = $("#savefilename").value;
    var data = $("#savedata").value;

    // save
    saveFile(filename, data, function(err) {
        if (err) {
            alert("failed to save: " + filename + "\n" + err);
        } else {
            alert("saved: " + filename);
        }
    });
});

// when the user clicks load
$("#load").addEventListener('click', function() {

    // get the filename
    var filename = $("#loadfilename").value;

    // load 
    loadFile(filename, function(err, data) {
        if (err) {
            alert("failed to load: " + filename + "\n" + err);
        } else {
            $("#loaddata").value = data;
            alert("loaded: " + filename);
        }
    });
});

function saveFile(filename, data, callback) {
    doXhr(filename, 'PUT', data, callback);
}

function loadFile(filename, callback) {
    doXhr(filename, 'GET', '', callback);
}

function doXhr(url, method, data, callback) {
  var xhr = new XMLHttpRequest();
  xhr.open(method, url);
  xhr.onload = function() {
      if (xhr.status === 200) {
          callback(null, xhr.responseText);
      }  else {
          callback('Request failed.  Returned status of ' + xhr.status);
      }
  };
  xhr.send(data);
}
</script>
</html>

If you now save that and run your server, then go to http://localhost:8000/test.html, you should be able to type some text in the savedata area and click "save". Afterwards click "load" and you'll see it got loaded. Check your hard drive and you'll see a file has been created.

Now of course, again, this is insecure. For example if you type "test.html" in the save filename and pick "save", it's going to overwrite your "test.html" file. Maybe you should pick a different route instead of '*' in the app.put('*' line. Maybe you want to add a check for whether the file exists with another kind of method and only update if the user is really sure.
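For example, here's a hedged sketch of the "check first" idea: it uses a made-up '/data/' prefix as the route and refuses to overwrite an existing file unless the request explicitly asks for it with a (made-up) overwrite query parameter.

app.put('/data/*', function(req, res) {
    let filePath = path.join(baseDir, req.path);
    if (fs.existsSync(filePath) && req.query.overwrite !== 'true') {
        res.status(409).send('file already exists');
        return;
    }
    let body = '';
    req.on('data', function(data) { body += data; });
    req.on('end', function() {
        fs.writeFileSync(filePath, body);
        res.send('saved');
    });
});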

The point of this article was not to make a working server. It was to show you how easy making a server is. A server like this that saves local files is probably only useful for things like an internal tool you happened to write in JavaScript that you and/or your team needs. They could run a small server like this on their personal machines and have your tool load, save, get folder listings, etc.

But, seeing how easy it is also hopefully demystifies servers a little. You can start here and then graduate to whole frameworks that let users login and share files remotely.

I should also mention you can load and save files to things like Google Drive or Dropbox. Now you know what they’re basically doing behind the scenes.

CAs now get to decide who’s on the Internet

It started with a legit concern. The majority of websites were served using HTTP. HTTP is insecure. So what, you might be thinking? HTTPS is used on my bank and Amazon and anywhere I might spend money, so it seems like not a problem. Except… HTTP allows injections. Ever use some bad hotel or airport WiFi and get a banner injected at the top of the screen? That's HTTP vs HTTPS. Are you sure those articles you're reading are the originals? Maybe someone is changing words, pictures, or ads. HTTPS solves these issues.

So, the browser vendors and other standards bodies got together and made a big push for HTTPS only. Sounds great right!?

Well, instead of just pushing metaphorically by putting out the word, "Stop using HTTP! Start using HTTPS", the browser vendors got together and decided to try to kill off HTTP completely. Their first order of business was to start requiring HTTPS to use certain features in the browser. Want to go fullscreen? Your site must be served from HTTPS. Want to read the orientation and motion data of the phone from the browser? Your website must use HTTPS. Want to be able to ask the user for permission to access the mic or the camera? Your website must use HTTPS.

Okay well that certainly can be motivating to switch to HTTPS as soon as possible.

Except… HTTPS requires certificates. Those certificates can only be acquired from Certificate Authorities, CAs for short. CAs charge money for these certificates: $50 per certificate or more. Often the certificates only last for a limited time, so you've got to pay every year or two.

Suddenly every website just got more expensive.

Okay, you say, but that's still not a ton of money.

Yes, but maybe you've got an innovative project. One that lets any user access their media from their browser (example). You'd like to let them go fullscreen, but you can't unless it serves the media pages as HTTPS. The rules of HTTPS say you're not allowed to share certs, ever. If you get caught sharing, your cert will be invalidated. So, you can't give each of the people running your innovative software a copy of your cert. Instead every user needs their own cert. Suddenly your software just got a lot more expensive! What if your software was free and open source? In 2015 people were able to run it for free. In 2016 they are now required to get a cert for $50.

So what do you do? Well, you hear about self-signed certs, so you check those out. Turns out they require complex installation into your OS. Your family and aunts and uncles and cousins and nephews and nieces aren't going to find that really manageable. And besides, there's the feature where anyone can come to a party at your place and queue some music videos using their phone's browser, but that's never going to fly if they have to first install this self-signed cert. Official certs from CAs don't have this issue. They just work.

Okay, well, you shop around for CAs. Dear CA#1, will you give my users free certs? No! Dear CA#2, will you give my users free certs? No!

Oh I hear you say, there’s a new kid on the block, letsencrypt, they offer free certs.

They do offer free certs BUT certs are tied to domain names. To get a cert from letsencrypt you have to have a domain, for example "mymediastreamer.org". So even if you can get the cert for free, your users now need to buy a domain name. That can be relatively cheap at $10-$20 a year, but it's a big technical hurdle. Your non-tech family members are not really going to be able to go through the whole process of getting a domain name just to use your media server.

Oh, I hear you say, what if my software ran a public DNS server? I could issue users subdomains like "<username>.mymediastreamer.org". Then I can give out DNS names to the users and they can get certs. That might work… except DNS points to specific IP addresses. Users' IP addresses change. You can re-point DNS to the new address, but it takes time to propagate. That means when their IP address changes it might be a few hours until they can access their media again. Not going to work.

Ok, then here's a solution. We'll make up domains like this: "<ipaddress>.<username>.mymediastreamer.org". That will make the DNS server even easier. We don't even need a database. We just look at the "<ipaddress>" part of the DNS name and return that IP address. Now when the user's IP address changes there will be zero delay, because they can immediately use a DNS name that matches. We'll set up some rendezvous server for them so they don't need to look up the correct domain. It will all just work.

Great! We have domains. We can get free certs from letsencrypt.

Except… letsencrypt limits the number of certs to 240 per root domain. So once you have 240 you can't get more certs. That means we can only support 240 users at best. But then there's another problem: Letsencrypt doesn't support wildcard certs. Because we added the <ipaddress> part above, we need a wildcard cert *for each user* matching "*.<username>.mymediastreamer.org".

Effectively we are S.O.L. For our purposes letsencrypt is just another CA. "CA#3, can we please have free certs for our users?" No!

As of 2015 we could do anything we wanted on the internet. Now, in 2016, we need permission from a CA. If the CA doesn't give permission, we don't get on the internet.

To put it another way, because of the chain of validation in HTTPS, each CA is effectively a little king/bureaucrat who gets to decide who gets on the internet and who doesn't. If one king doesn't let you in, your only option is to go ask another king. Letsencrypt is the most generous king, as they don't ask for tribute, but that doesn't change the fact you still need permission from one of these kings.

You might be thinking, "so what? who cares about a media streamer?" But it's not just streamers. It's ANY DEVICE OR SOFTWARE THAT SERVES A WEBPAGE. Got an IP camera that serves a webpage? That camera wants to give you a nice interface that goes fullscreen? It can't without certs, and it can't get certs without permission from a CA. Got some Raspberry Pi project that wants to serve a webpage and needs any of the HTTPS-only features? Again, it can't do it without a cert, and it can't get a cert without permission from a CA. Maybe you have a NAS device that would like to provide web page access? It can't do it without a cert, and it can't get certs without permission from a CA.

That wasn't the case just 6 months ago, because HTTPS wasn't required. Now that it is, these kings all just got a bunch more power, and innovative products like the media streamer described above and projects like this are effectively discouraged unless you can beg or bribe a king to ordain them. 🙁