
New HappyFunTimes for Unity Coming Soon

TL;DR: HappyFunTimes for Unity will soon be a standalone library.

HappyFunTimes started as an HTML5 project with HTML5 based games and therefore needed an HTML5 based server to serve those games. Originally I put all my sample games into the same repo. As I wanted to make it easier for others to add games, I came up with a way for the games to live outside of the main HappyFunTimes app/folder. This also made HappyFunTimes kind of like a mini game console, since there was a menu to pick games from.

That design carried over to how the Unity version worked: games still needed to be installed into HappyFunTimes, because a web server needed to serve the files to the controllers and a websocket server needed to pass the messages between the game and the controllers.

Well, I’m pleased to announce that’s changing. The “game console” like feature of HappyFunTimes isn’t really much of a feature for most people. Most people make a single game for a game jam, event, or installation and that’s all they need.

So, lately, I’ve been working hard to make the Unity plugin do everything on its own. I’ve put in a web server and a websocket server. This means you should be able to just stick it in any Unity project. No need for special ways to export. No need to “install it in happyfuntimes”. Just export from Unity like a normal Unity project, then run it.

This also means theoretically you could export to iOS, Android, PS4, etc, run the game there and have people join with their phones. I say theoretically because I haven’t tested those platforms yet.

It also means making controllers is simpler. In the old version the server did a lot of work: your controller files got inserted into templates, and the templates used requirejs, something most Unity devs were not familiar with. Since the new plugin doesn’t have to integrate with anything else, all of that has been simplified. Make your own full .HTML files however you want. No more templating getting in your way or messing up your CSS.
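
As an illustration only (this isn’t the real HappyFunTimes controller API, just a hypothetical sketch of the kind of standalone page I mean): a full .html file you own, with a script that connects back to the game over a WebSocket and sends input as JSON.

<!-- hypothetical controller page: a plain .html file, no templates -->
<!DOCTYPE html>
<html>
<body>
  <div id="pad">touch me</div>
  <script>
    // connect back to whatever served this page (the game itself)
    var socket = new WebSocket("ws://" + window.location.host);
    socket.onopen = function() {
      socket.send(JSON.stringify({ cmd: "join", name: "player" }));
    };
    // forward touch positions to the game as simple JSON messages
    document.getElementById("pad").addEventListener("touchmove", function(e) {
      e.preventDefault();
      var touch = e.touches[0];
      socket.send(JSON.stringify({ cmd: "move", x: touch.clientX, y: touch.clientY }));
    });
  </script>
</body>
</html>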

Yet another thing you should be able to do is turn it on and off from Unity. In other words you could make a game that supports 1-4 players using traditional gamepads but have an option to use HappyFunTimes for more players.

Installation mode should still work, as will games that run across multiple computers.

Crossing my fingers people find this more useful.

Taking things for granted

I’m trying my best to SHIP the minimum viable product version of HappyFunTimes but it seems like every day I finish one thing only to find 2 more.

One of the things I learned early on is that many Unity devs are fairly new to many parts of software development. I keep forgetting that. One simple example: a friend wanted to use HappyFunTimes. He’s not an experienced programmer but he can hack stuff together in Unity. When he found out he also needed to program the controllers in JavaScript and HTML5, he was done. That’s not something he knows how to do. Of course he could learn, but he’s probably got a limited amount of time to get something done, and learning JavaScript and HTML5 is not going to fit in that time.

I have several ideas about that which I can list in another post.

Today though I was working on automating the upload of new versions of HappyFunTimes. There are a lot of steps and it’s best to automate them. Since I expect lots of bugs, it’s best I automate this early to avoid the frustration of updating, and hopefully so I don’t forget a step and mess something up.

One of the issues is keeping everything in sync. To make a release I need to

  1. Create an installer for Windows
  2. Create an installer for Mac
  3. Check what the newest version of HappyFunTimes is online
  4. Check what version I have locally and make sure it’s newer than the one online.
  5. Check the versions of the installers match the version I expect them to be.
  6. Create a github release
  7. Upload the installers

At least that’s what I thought the steps were. One issue though is that when I make a release on github it takes a snapshot of all the files in the repo. It would be nice if those files matched what’s in the installers, especially since I don’t have an installer for Linux, so github is where you’d get the correct version of the files. In order to do that I’d need to check

  1. That my local git repo is clean.

    In other words I need to prove there are no changes sitting around not committed.

  2. That the local git hash matches the one on github

    This proves the files on github match the local files (a rough sketch of both checks follows below)
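
Here’s roughly what those two checks look like as a minimal node.js sketch; the remote name and branch are assumptions:

// verify the working tree is clean and the local commit matches the remote
var execSync = require('child_process').execSync;

// 1. clean repo: `git status --porcelain` prints nothing if there are no
//    uncommitted or untracked changes
var status = execSync('git status --porcelain').toString().trim();
if (status.length > 0) {
  throw new Error("repo has uncommitted changes:\n" + status);
}

// 2. local hash matches github: compare HEAD against the remote branch
//    (assuming the remote is `origin` and the branch is `master`)
var localHash  = execSync('git rev-parse HEAD').toString().trim();
var remoteHash = execSync('git ls-remote origin refs/heads/master').toString().split(/\s+/)[0];
if (localHash !== remoteHash) {
  throw new Error("local commit " + localHash + " does not match origin " + remoteHash);
}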

So I checked if there’s an easy way to do that. I think there is. But … then I realize I should probably do the same thing when you publish a game. I’ve already written all the code to automate publishing a game although it doesn’t have that feature.

But, then I realize my code for publishing a game is all written using node.js and requires using git and github. I suspect more often than not Unity developers don’t know github nor git and I suspect most of them don’t have the time to learn.

I’m not sure what to do there. I think the system is automated enough that if you didn’t know git and github and all you wanted to do was publish a game, you could do that by signing up with github, then typing one command and entering your github username / password. That would upload your game. That’s not really appropriate for github though. Github is only free if your game is open source, which means you need to check in all your code at least.

Hmmm. I’m going to have to think about this. Maybe I could check the code in for you if you want? Or maybe MVP just means you’ve got to learn github and git and I can try to get that more automated later.

I know the devs that know git and github generally think “yea, do that. It’s easy” but having worked with less experienced devs I know there’s some big and frustrating hurdles there.

Fixing the wrong bug.

This is hilarious. I had Player, the server-side object that tracks the connection to a smartphone for a single player, send a heartbeat ping, because often players just stop playing by letting their phone’s browser go into the background. In that case they aren’t disconnected from the server, so there’s this idle player in the game waiting for a networking message. Maybe “waiting” is the wrong word; rather, it’s as though the player is making no input.

I wanted to remove those players, so I have a heartbeat. If no input from the player comes in 5 seconds I ping the player’s smartphone. If no message comes back within 1 second I kill the player because his browser is likely no longer active.
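
Conceptually the heartbeat is simple. Something like this sketch (the timings are the ones above; the names are made up):

// simplified heartbeat for one player connection
var IDLE_TIMEOUT = 5000;  // ping if no input for 5 seconds
var PING_TIMEOUT = 1000;  // kill the player if no reply within 1 second

function Heartbeat(sendPing, killPlayer) {
  var idleTimer;
  var pingTimer;

  function restart() {
    clearTimeout(idleTimer);
    clearTimeout(pingTimer);
    idleTimer = setTimeout(function() {
      sendPing();  // no input for a while: ask the phone if it's still there
      pingTimer = setTimeout(killPlayer, PING_TIMEOUT);
    }, IDLE_TIMEOUT);
  }

  // call this whenever any message (input or pong) arrives from the phone
  this.gotMessage = restart;

  restart();
}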

So, I’m trying to ship. I test Unity. When the Unity game quits, the players are not disconnected. My (bad) intuition says "hmm, the socket must not be getting disconnected by Unity. I don’t remember this being a problem, but I’ve almost always tested Unity from the editor instead of exported apps, and I’m testing the exported apps now." I try disconnecting the socket manually in OnApplicationQuit. No change. I figure that, because the C# websocket library I’m using is multi-threaded, the disconnect must not actually be getting executed before the app quits.

Fine, I’ll add a heartbeat to the Game as well as the Player. The current heartbeat code is embedded directly in the Player object. I look at the code and see the Player is not deleted when it’s changed into a Game. I try refactoring the code so the Player’s heartbeat just keeps going but I run into issues. I revert all that and decide to implement it differently: I’ll make the heartbeat code a separate class and have both the Player code and the Game code use it separately. I run into issues again and revert all that.

I figure the heartbeat should go at a lower level than it was: not at the Game / Player level, but at the layer between the WebSockets and them. I implement that. I spend 60-90 minutes debugging. It finally works.

I go back to my Unity sample and test again. Controllers still don’t get disconnected when the game exits, even though I know the ping is working.

I finally realize the issue has nothing to do with disconnecting. That was working all along. The issue is there’s a flag the game can pass, disconnectPlayersIfGameDisconnects. In JavaScript it has 3 values: undefined (the default), true, and false. Just a couple of days ago I added it to C#/Unity, and it defaults to false in C#! DOH!!!! Changed it to default to true.
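
In hindsight the trap is that JavaScript has a third state that C# doesn’t. Just as an illustration (not the actual HFT internals):

// in JavaScript `undefined` means "use the default", so the intended
// default (disconnect the players) can be written as:
function shouldDisconnectPlayers(options) {
  return options.disconnectPlayersIfGameDisconnects !== false;
}
// a C# bool has no "unset" state, so whatever default you pick in C#
// silently becomes the answer, and I had picked false.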

All that work adding a ping at a lower level had nothing to do with the actual bug. It works now. 6-8 hours mostly wasted. Well, let’s hope the lower-level ping is better anyway 😛

Picking the wrong Framework?

I struggled with deciding how to implement superhappyfuntimes, the website.

There’s a million options, from a LAMP stack to python, node.js, ruby, go, and the zillions of frameworks written on top of them. I was mostly sold on node.js since I like the idea of one language on both the client and server.

In some ways though superhappyfuntimes is a really simple site. All it needs is a database of games. At the moment there’s no logging in, no user accounts, etc. That might change. Ideally when you register a game it should be registered to you so you and only you have permission to edit it. But, seeking my minimum viable product (mvp) that stuff can wait. Still, it seemed like a good idea to pick a framework that supports that stuff so I can easily add it later.

At the same time, sometimes I thought maybe I should do it with no framework. Just check in a big .json file with an array of info about all the games. There’s not likely to be more than 100 games for a while. I can hope for more but realistically it’s going to stay small. But still, I’d need a way to edit that file. I didn’t want to have to edit it by hand anytime someone wanted to add a game. I could write tools to edit it but that’s basically rewriting all the stuff I’d get from a framework.
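
The no-framework version I had in mind was about this simple (the file name and fields are hypothetical), plus whatever editing tools I’d have to write around it:

// the whole "database": one JSON file holding an array of game info
var fs = require('fs');
var GAMES_FILE = 'games.json';

function readGames() {
  return JSON.parse(fs.readFileSync(GAMES_FILE, 'utf8'));
}

function addGame(game) {
  var games = readGames();
  games.push(game);
  fs.writeFileSync(GAMES_FILE, JSON.stringify(games, null, 2));
}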

Ultimately I picked Meteor. It was really easy to get started, it just worked, no set up, and they even had deployment somewhat solved. So, it was easy to get things basically working.

But, you knew there was a but… Meteor is heavy. Each user that connects gets a not-too-small JavaScript app downloaded to their browser. That app then contacts the server to get database info, by default over WebSockets, which are themselves heavy. That might be fine if you’re trying to write a gmail clone but it’s not so fine if you’re trying to keep expenses small. I’m running superhappyfuntimes out of my pocket. I’d prefer to need only one small server and not a whole server farm. The pages superhappyfuntimes serves are for the most part static. They only need to be updated when a game is added or updated; otherwise they’re the same for every user. I’m running on a single 512MB Digital Ocean server. I suspect that’s not enough for more than a few users with Meteor, whereas some kind of static page server would probably be enough for thousands of users.
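
By “static page server” I mean barely more than this (a toy sketch that ignores caching, content types, and path sanitization):

var http = require('http');
var fs   = require('fs');
var path = require('path');

var root = path.join(__dirname, 'public');  // pre-generated pages would live here (assumption)

http.createServer(function(req, res) {
  var filePath = path.join(root, req.url === '/' ? 'index.html' : req.url);
  fs.readFile(filePath, function(err, content) {
    if (err) {
      res.writeHead(404);
      res.end('not found');
      return;
    }
    res.writeHead(200);
    res.end(content);
  });
}).listen(8080);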

So now the question: do I throw Meteor under the bus and re-write? Is there some way to leverage what’s there? Apparently Meteor’s current solution for static pages is to run a full browser on the server (phantomjs) and capture the pages that are created. I’m not even sure that would work. Browsers are huge memory hogs. I’d probably have to run it offline, capture the pages, then upload the results. That means I’d be back to manually running it and no realtime updates unless I set up yet another server.

Meteor is pretty awesome. Although I don’t have much experience with other frameworks, one cool thing about Meteor is that its default development environment is 100% live. Edit any .js, .html, or .css file and the moment you save it, it updates. The server restarts, your page reloads; it’s seriously awesome! It also has a relatively nice templating system.

Maybe I should switch to another framework? I have no idea which one to switch to. Or maybe I should just pull apart Meteor’s pieces and pare it down to just what I need? Or maybe I should start over from scratch? The public interfaces that update the game database are already handled, so it might actually be relatively simple to switch it all to custom code. I’ve no idea at the moment.

Sigh. I want to be working on games, not on infrastructure. Oh well. Live and learn. 😛

more npm sadness

I feel bad these last few posts are rants but …

I needed to make HTTP JSON requests. I have a snippet of code I’ve been using for the last 4 years. It’s about 90 lines long, but it’s for client side browser JavaScript; I needed something similar for server side node.js code. At first I thought I’d write my own. Having just submitted a pull request for the github package I’m generally familiar with how to make a request. It seems pretty straight forward.

But…, I decide, in the spirit of npm, I should see if there’s already a package for this. A quick search and request-json comes up. The API looks simple. How bad can it be? I take a brief look at the code. It’s in coffeescript. That’s a little scary as now I need a coffeescript compiler, but what the heck, let’s give it a try.

$ npm install request-json --save
npm http GET https://registry.npmjs.org/request-json
npm http 200 https://registry.npmjs.org/request-json
npm http GET https://registry.npmjs.org/request-json/-/request-json-0.4.10.tgz
npm http 200 https://registry.npmjs.org/request-json/-/request-json-0.4.10.tgz
npm http GET https://registry.npmjs.org/request/2.34.0
npm http 200 https://registry.npmjs.org/request/2.34.0
npm http GET https://registry.npmjs.org/request/-/request-2.34.0.tgz
npm http 200 https://registry.npmjs.org/request/-/request-2.34.0.tgz
npm http GET https://registry.npmjs.org/qs
npm http GET https://registry.npmjs.org/json-stringify-safe
npm http GET https://registry.npmjs.org/node-uuid
npm http GET https://registry.npmjs.org/forever-agent
npm http GET https://registry.npmjs.org/tough-cookie
npm http GET https://registry.npmjs.org/form-data
npm http GET https://registry.npmjs.org/tunnel-agent
npm http GET https://registry.npmjs.org/http-signature
npm http GET https://registry.npmjs.org/aws-sign2
npm http GET https://registry.npmjs.org/oauth-sign
npm http GET https://registry.npmjs.org/hawk
npm http 304 https://registry.npmjs.org/json-stringify-safe
npm http 200 https://registry.npmjs.org/node-uuid
npm http 304 https://registry.npmjs.org/forever-agent
npm http 304 https://registry.npmjs.org/form-data
npm http 200 https://registry.npmjs.org/qs
npm http 304 https://registry.npmjs.org/http-signature
npm http 304 https://registry.npmjs.org/tunnel-agent
npm http 304 https://registry.npmjs.org/hawk
npm http 304 https://registry.npmjs.org/aws-sign2
npm http 304 https://registry.npmjs.org/tough-cookie
npm http 304 https://registry.npmjs.org/oauth-sign
npm http GET https://registry.npmjs.org/combined-stream
npm http GET https://registry.npmjs.org/async
npm http GET https://registry.npmjs.org/assert-plus/0.1.2
npm http GET https://registry.npmjs.org/ctype/0.5.2
npm http GET https://registry.npmjs.org/asn1/0.1.11
npm http GET https://registry.npmjs.org/punycode
npm http GET https://registry.npmjs.org/sntp
npm http GET https://registry.npmjs.org/boom
npm http GET https://registry.npmjs.org/hoek
npm http GET https://registry.npmjs.org/cryptiles
npm http 304 https://registry.npmjs.org/assert-plus/0.1.2
npm http 304 https://registry.npmjs.org/ctype/0.5.2
npm http 304 https://registry.npmjs.org/combined-stream
npm http 200 https://registry.npmjs.org/async
npm http 304 https://registry.npmjs.org/sntp
npm http 304 https://registry.npmjs.org/asn1/0.1.11
npm http 304 https://registry.npmjs.org/cryptiles
npm http GET https://registry.npmjs.org/delayed-stream/0.0.5
npm http 200 https://registry.npmjs.org/boom
npm http 200 https://registry.npmjs.org/punycode
npm http 200 https://registry.npmjs.org/hoek
npm http GET https://registry.npmjs.org/punycode/-/punycode-1.3.0.tgz
npm http 304 https://registry.npmjs.org/delayed-stream/0.0.5
npm http 200 https://registry.npmjs.org/punycode/-/punycode-1.3.0.tgz

WTF!!!! Seriously?

npm uninstall request-json --save

Here’s the code I wrote.

"use strict";

var url = require('url');

var sendJSON = function(apiurl, obj, options, callback) {
  var options = JSON.parse(JSON.stringify(options));
  var parsedUrl = url.parse(apiurl);

  options.hostname = parsedUrl.hostname;
  options.port = parsedUrl.port;
  options.method = options.method || 'POST';
  options.headers = options.headers || {};
  options.path = parsedUrl.pathname;

  var body = JSON.stringify(obj);
  var headers = options.headers;

  headers["content-length"] = Buffer.byteLength(body, "utf8");
  headers["content-type"] = "application/json; charset=utf-8";

  // wrap the callback so it can only ever be called once
  var callCallback = function(err, res) {
    if (callback) {
      var cb = callback;
      callback = undefined;
      cb(err, res);
    }
  };

  // pick the 'http' or 'https' module based on the URL's scheme
  var protocol = parsedUrl.protocol.substring(0, parsedUrl.protocol.length - 1);
  var req = require(protocol).request(options, function(res) {
    res.setEncoding("utf8");
    var data = "";
    res.on("data", function(chunk) {
      data += chunk;
    });
    res.on("error", function(err) {
      callCallback(err);
    });
    res.on("end", function() {
      // follow redirects by re-issuing the request at the new location
      if (res.statusCode == 301 ||
          res.statusCode == 302 ||
          res.statusCode == 307 ||
          res.statusCode == 308) {
        sendJSON(res.headers["location"].toString(), obj, options, callCallback);
      } else if (res.statusCode >= 400 && res.statusCode < 600 || res.statusCode < 10) {
        callCallback(new Error("StatusCode: " + res.statusCode + "\n" + data));
      } else {
        try {
          callCallback(null, JSON.parse(data));
        } catch (e) {
          e.content = data;
          callCallback(e);
        }
      }
    });
  });

  if (options.timeout) {
    req.setTimeout(options.timeout);
  }

  req.on("error", function(e) {
    callCallback(e.message);
  });

  req.on("timeout", function() {
    callCallback(new error.GatewayTimeout());
  });

  req.write(body);
  req.end();
};

exports.sendJSON = sendJSON;
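
For what it’s worth, calling it looks like this (the URL and payload are placeholders, and I’m assuming the snippet above is saved as io.js):

var io = require('./io');  // the snippet above, saved as io.js (assumption)

io.sendJSON("http://localhost:8080/api/register-game", { gameId: "my-game" }, {}, function(err, result) {
  if (err) {
    console.error("failed: " + err);
    return;
  }
  console.log("server said:", result);
});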

Less than 100 lines vs 25k lines. Why do I need 25k lines? Maybe it supports some things my 100 lines doesn’t? Authentication can be handled by setting options.auth to the appropriate thing. Headers can be set in options.headers. What else is there? Yea, I see it supports streaming the body. That’s literally < 10 lines of code to add. I see stuff about cookies, but cookies are sent in the headers, which means you can add cookie support by setting headers. Maybe it’s a structure thing? I’d prefer a few separate libraries I combine rather than a library where someone else has already combined them. At least for something as seemingly low level as “make a request for some json”.

Maybe I’m looking at it the wrong way but it sure seems like something is wrong when I need 25k lines to do something that seems like it should take only a few.

npm is awesome and sucks at the same time.

npm is a node package manager. It’s pretty cool. You make a folder and type “npm init”. You answer a few questions and it makes a “package.json” file recording those answers. From then on you can install one of the 81000+ “packages” by typing “npm install packagename --save”. The “--save” part makes it update the package.json file to record that your project needs the package you just installed. That means if you give a copy of your project to someone else, they only need the parts you actually wrote. The rest will be downloaded for them.
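
For example, after “npm init” and “npm install request --save” your package.json ends up recording something like this (version numbers are just illustrative):

{
  "name": "my-project",
  "version": "0.0.1",
  "dependencies": {
    "request": "^2.34.0"
  }
}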

It also helps you update packages to current versions etc..

So that sounds great in theory. The problem is finding anything useful and working. 81000 packages is a lot. How do you find the good ones? I have no idea.

For example I needed a library to zip and unzip files. I search for zip on npmjs.org and see there’s a package, adm-zip, with 19000+ stars. Clearly it’s a popular package so I install it and start writing my code.

The first problem: it can zip up folders, but you can’t easily apply any kind of filter. Want to zip up a folder but skip the “.git” or “.svn” folders, or skip files that end in “.pyc” or “.o” or “.bak”? Too bad for you, write your own from scratch.
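
The kind of filter I mean is tiny. Something like this (just a sketch, not adm-zip’s actual API):

var path = require('path');

var excludeDirs = [".git", ".svn"];
var excludeExts = [".pyc", ".o", ".bak"];

// return true if a file should be included in the zip
function shouldInclude(filePath) {
  var parts = filePath.split(/[\/\\]/);
  for (var ii = 0; ii < parts.length; ++ii) {
    if (excludeDirs.indexOf(parts[ii]) >= 0) {
      return false;
    }
  }
  return excludeExts.indexOf(path.extname(filePath)) < 0;
}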

So I submitted a patch. It’s been 3 weeks and I haven’t heard a peep out of the maintainers. In fact, even with 19000+ stars, there have been no commits in 3 months.

So, having written the filter, I write some code to zip up a folder. The very first set of files I built a zip from didn’t unzip with the standard “unzip” program built into OSX. Seriously!? WTFBBQ? 19000+ people use it and it makes bad files? Lucky for me it was the first test I made. Imagine if I had not found that bug for months.

Looking at the code, it leaves a lot to be desired. I often write crap code too, but a zip library is not that hard to write. There aren’t lots of design decisions to make. It shouldn’t be that hard.

I go looking for other libraries. Results are mixed. Most of the libraries require the entire contents of the zip to be in memory, either compressed or uncompressed. That’s arguably unacceptable. You can have multi-gigabyte zip files. You need to be able to stream them through the library. I looked for something along those lines. Didn’t find anything. Needed to get shit done so I’m using JSZip and Moxie-Zip and have a TODO in my notes to replace them with something that streams.

On top of lots of non-working packages, there are packages that seem like they should be small but have way too many dependencies. For example I needed a prompt library, since node’s event driven model makes reading input harder than in non event driven languages. Sure I could roll my own, but who knows what dragons lurk there. Better to use something where the dragons have already been slain.

So I do a search and the first thing that comes up is the “prompt” package with 5000+ stars. I install it and see it install 20+ dependencies. Seriously? I need 20+ dependencies? That’s 186000 lines of code just to present a prompt?

Then I read the docs and I see this:
    var schema = {
      properties: {
        name: {
          pattern: /^[a-zA-Z\s\-]+$/,
          message: 'Name must be only letters, spaces, or dashes',
          required: true
        },
        password: {
          hidden: true
        }
      }
    };
    
    //
    // Start the prompt
    //
    prompt.start();
    
    //
    // Get two properties from the user: email, password
    //
    prompt.get(schema, function (err, result) {
      //
      // Log the results.
      //
      console.log('Command-line input received:');
      console.log('  name: ' + result.name);
      console.log('  password: ' + result.password);
    });
    

    Pretty easy right? The output from the above script is:

    $ node examples/property-prompt.js
    prompt: name: nodejitsu000
    error:  Invalid input for name
    error:  Name must be only letters, spaces, or dashes
    prompt: name: Nodejitsu Inc
    prompt: password:
    Command-line input received:
      name: Nodejitsu Inc
      password: some-password
    

That makes no sense. Object properties in JavaScript do not have a guaranteed order so there’s no reason to believe the above example would ask for “name” before “password”.

I think “oh, maybe they’re sorting the properties alphabetically” so I check the code. Nope! So, no confidence in this package 🙁 I did at least file an issue to point out the problem. So far no response.
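
If order matters, the schema needs to be something order-preserving, like an array. This is just how I’d expect such an API to look, not how the prompt package actually works:

// hypothetical: an array makes the prompt order explicit
var schema = [
  { name: 'name',     pattern: /^[a-zA-Z\s\-]+$/, required: true },
  { name: 'password', hidden: true },
];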

I switch to the asks package. Hey, only 30000+ lines of code just to prompt the user?!? 🙁

Now, before you think I’m just complaining, I do try to fix things when I can. I needed to interface with github. I thought maybe I could find a library. I did a search and found the github package. Yea! Unfortunately the only API I needed was the Releases API, and the github package had no support for it. Since I didn’t find any other solutions I added it myself and submitted a pull request AND I GOT A RESPONSE! Yea, a package that is still actively maintained! Still waiting for a response to the second pull request.

To be fair, this has nothing to do with npm, which is awesome. It has to do with various libraries, which are not. This is not unique to JavaScript or node.js. I’ve found lots of poorly written and bloated libraries in every language I’ve used. I appreciate that people are making their code available and that it solves people’s needs. I just kind of wish there was some trustworthy curation about which packages actually work and have a quality code base.

Frustrations of making a web site

Tech gets out of date. Most of the websites I’ve made use a LAMP stack. Most of them are just static pages with some JavaScript, or they’re wordpress based like this one.

For SuperHappyFunTimes I felt like I wanted to use node.js. SuperHappyFunTimes itself is mostly written in node.js, and it seems like it’s been one of the major topics for the last 2-3 years, so I thought I should probably go that way.

On top of that I’ve never written a website with users, the ability to log in, etc., and I didn’t want to write my own since it’s easy to make a ton of mistakes. It seemed best to use some framework that handles all of this.

The first problem though is that ISPs that support node.js are far more expensive than LAMP based ISPs, often with no limit on how much money you’ll be charged.

This is arguably one of the reasons why PHP and LAMP stacks are still hugely popular. PHP’s design lets multiple sites run on the same server. Nearly every other server tech, CGI aside, requires a server process per website. Sure, you can use virtual servers, but virtual servers are still way more resource intensive than a shared LAMP stack.

So, that’s a little scary. I have no current monetary plans for HappyFunTimes so I’d prefer not to blow wads of my own money on it. (although I guess if I value my time I have blown wads of money on it :P)

Next up is frameworks. My original idea was to leverage npm.  It’s open source, it has a website and repo of “packages”. Rename “packages” to “games” and reskin the website and that seemed like enough.

Unfortunately I tried cloning the npm-www repo and bringing up its vagrant box. It failed.  The server it tries to clone from is gone apparently.

Then I realized npm was probably not the solution I wanted. I found this out when I saw a friend try to use HappyFunTimes. He downloaded it from github, typed “npm install” to install the dependencies, and saw it fail because someone had deleted a repo. Not an experience I wanted to put end users through.

npm uses CouchDB, so I spent a few days learning about CouchDB, walking through the official book. The tech seemed pretty awesome, but I ran into several errors in the book. The book is open source so I submitted a patch for the first error I found, but when I ran into more and realized some of the issues HAD NOT BEEN FIXED IN OVER 3 YEARS I lost all confidence in CouchDB. I’m sure it’s great tech, but if they can’t be bothered to update the docs it’s hard to believe it’s a good way to go.

Next up I tried different frameworks. Nodejitsu has some stuff. Again the docs and stuff seemed massively out of date. For example they push flatiron but as far as I can tell it’s dead. How can I have any faith in them if their own docs point to basically dead projects? So I gave up on that.

I found Meteor, which certainly seemed nice. But…, it uses websockets, which as far as I can tell are resource hogs. Specifically, with a normal website you connect, the server gives you a page, and then the server and your computer disconnect. This lets the server serve other people. WebSockets though keep the connection open. This has the advantage that the server can send updates and other messages to the page in real time, but it has the disadvantage that keeping that connection open requires lots of memory, and there’s a limited number of connections a server can handle. For my needs I don’t need a realtime connection. The data on SuperHappyFunTimes doesn’t change often and doesn’t need to be real time.

Apparently you can turn that feature off. I haven’t tried it yet. But I suppose that’s a minor issue. The bigger issue is these frameworks are scary. There’s no guarantee the teams making them will be around. You just have to pick one and hope for the best. I look for solutions, I see “todos” on their website that have been there for months or years, I see little activity on their repo. Are they still working on it?

Making HappyFunTimes into a Platform

I’m in the process of making HappyFunTimes into a platform! What does that mean? It means I want the following

  • A user facing installer for OSX and Windows

    That means the user can just download some file and install it and start playing games with friends

  • An App Store

    Running HappyFunTimes will present some UI similar to XBox/PS3/XBoxOne/PS4 where you can pick installed games, browse for more, and install them.

I could use lots of help 🙂

The basic parts.

  • Need to make native installers for HappyFunTimes
  • Need to make store UI
  • Need to make it possible to easily install games from store (guessing a custom extension handler might work)
  • Need to setup database of games
  • Need to make individual games installable

Yes, there are a lot of details, but I don’t think it’s too big a project. I’m planning to leverage existing stuff as much as I can. For example maybe I can use npm to manage the games.