HackTO

Yesterday Scott Downe, Matthew Schranz, Dave Seifried, and I participated in HackTO, the Toronto event of HackDays. The idea was to create an application in only 7 hours using these APIs:

There were prizes for the top 3 apps ($2000, $1500, and $500) as well as separate prizes for the best use of an API. The hackday was held at the Freshbooks HQ, which was a great location. Plenty of tables, chairs, power, and the wifi worked! Beautiful office too, reminded me of the Mozilla Toronto office:

App demos

Our idea, planned out over email beforehand, was to use the TinEye MultiColor API to build a composite image. We wanted something iconic, something that represented who we are:

The day started off with presentations by all of the API representatives. We were pretty set on using the TinEye API, but we also looked at integrating Popcorn playback into the creation process. Since SoundCloud was one of the APIs available, Scott started using it to create a Popcorn plugin that would display images in a 2D grid. I started slicing the logo into many pieces, Dave worked on interfacing with the TinEye Multicolor API, and Matt worked on analyzing colours with Web Workers.

At lunch, we had lasagna and garlic bread: the perfect food to keep hackers fueled. I finished up my image slicing work, and started to help Dave with the API. There were two problems with the API:

  1. It was not CORS-enabled, so we needed a same-origin proxy
  2. The extract_colors method required uploading a binary PNG

The first problem was solved with a Node.js proxy; the tricky part was also getting Node to host the static site content.

Solving the second problem was difficult, until we did a bit of reading. Everyone knows that you can create PNG data URIs from canvas tags, but we weren’t sure how to send data in a form that the API would accept. After some failed attempts at encoding data and some more reading, I stumbled upon the Building an HTML Image Uploader article on hacks.m.o and Using FormData on MDN.
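
Put together, the upload looked roughly like this (the endpoint and function names are placeholders for the real code, which posted to the TinEye extract_colors method through our proxy): decode the canvas’s data URI into raw bytes, wrap them in a Blob, and let FormData build the multipart request:

```javascript
// Strip "data:image/png;base64," from a data URI, leaving the raw base64.
function dataURIPayload(dataURI) {
  return dataURI.slice(dataURI.indexOf(',') + 1);
}

// Hypothetical upload helper: turn a <canvas> into a binary PNG and POST it.
function uploadCanvas(canvas, endpoint) {
  var base64 = dataURIPayload(canvas.toDataURL('image/png'));
  var bytes = atob(base64); // base64 -> binary string
  var buffer = new Uint8Array(bytes.length);
  for (var i = 0; i < bytes.length; i++) {
    buffer[i] = bytes.charCodeAt(i);
  }

  var formData = new FormData();
  formData.append('image', new Blob([buffer], { type: 'image/png' }), 'slice.png');

  var xhr = new XMLHttpRequest();
  xhr.open('POST', endpoint);
  xhr.send(formData); // the browser sets the multipart boundary for us
}
```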

After we solved these fundamental problems we were on a mad dash to polish our demo as best we could. We ended up dropping the Popcorn SoundCloud integration in favour of simply having the image be composited live. In the final half hour before the bell, the network and TinEye APIs conspired to make us as stressed as possible. This is what we get for breaking David Humphrey’s golden rule: never rely on the network. In the end, I closed the lid of my laptop, and we left it to fate to decide whether our demo would work.

And it did.

Hilariously, there were three other teams that implemented the *exact* same idea. And one of them even used the Firefox logo! We used the TinEye API better (extract_colors and color_search), but the other teams had better looking composite images. Our heavy reliance on the TinEye API meant that we couldn’t make an image as detailed as the other two teams.

If you’re interested in seeing all of our code, it’s available in my GitHub account.

We didn’t end up winning any prizes, but we built an app in 7 hours, got some great swag (I got *5* t-shirts!), and had a great time seeing all the apps people built. It was great comparing notes with the other TinEye API users, to see how they built their version of the app. I’ll definitely be going to the next HackTO. And I hope that you, dear reader, will come too!

Posted in Open Source | 1 Comment

Butter v0.2 and Popcorn v1.2

Two weeks ago the Popcorn team released Butter v0.2 and Cornfield v0.2, and this week we’ll be releasing Popcorn v1.2.

Butter v0.2

My two main contributions to Butter in this release were scrolling in the timeline and Cornfield. We use a custom scroll bar implementation in Butter, so Bobby wrote the initial scrolling implementation. It worked pretty well, except for how horizontal scrolling was initiated by the user. As a bourgeois MacBook Pro user, I expect two-finger scrolling to work in two dimensions, not just vertically. Imagine my disappointment when I found that I had to hold down control to horizontally scroll the timeline. Preposterous! This was not difficult to fix, but two code paths are needed: DOMMouseScroll for Firefox, and mousewheel for everyone else. Ah, standards.
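
The fix boiled down to normalizing the two event shapes into one delta pair. Roughly (the function names are made up for this example; only the event names reflect the actual code):

```javascript
// Normalize the two wheel-event shapes into { dx, dy } in lines.
function wheelDeltas(e) {
  if ('wheelDeltaX' in e) {
    // mousewheel: wheelDelta* comes in multiples of 120, sign inverted;
    // dividing by 40 gives the conventional 3 lines per notch.
    return { dx: -e.wheelDeltaX / 40, dy: -e.wheelDeltaY / 40 };
  }
  // DOMMouseScroll: detail is already in lines; axis says which direction
  var horizontal = e.axis === 1; // HORIZONTAL_AXIS
  return { dx: horizontal ? e.detail : 0, dy: horizontal ? 0 : e.detail };
}

// Hook both event names on an element and feed a single callback.
function hookScrolling(element, onScroll) {
  function handler(e) {
    var d = wheelDeltas(e);
    onScroll(d.dx, d.dy);
    e.preventDefault();
  }
  element.addEventListener('DOMMouseScroll', handler, false); // Firefox
  element.addEventListener('mousewheel', handler, false);     // everyone else
}
```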

Developing Cornfield and the Butter integration for it took up most of my time on this release. I decided to start small with Cornfield and create a REST API in Node.js, authenticated against BrowserID that stores data directly on the file system. Writing the REST API and the code to interact with file storage was not difficult, but integrating BrowserID took a few days. This is because BrowserID has a completely different authentication flow from normal server-based authentication:

As you can see, it’s a fair bit more complex. Fortunately, someone has written a Node.js library for handling steps 6-10, meaning that I only had to handle 4 and 5 (1-3 are handled by the JavaScript file that you include from browserid.org). I highly recommend reading the documentation on MDN if you want to add BrowserID to your own site.

Popcorn.js v1.2

Most of my fixes here were based around finding and fixing breakage in our test suite. We decided to upgrade to QUnit v1.3, which meant that we needed to rewrite many of our test harnesses to be completely asynchronous, rather than synchronous with a timeout. There were also some fixes made to the Vimeo and SoundCloud tests as we’ve been neglecting them pretty badly. This put us in a great position for our project for the next two weeks:

Posted in Open Source | Comments Off

Cross-origin video

Back in December when we were working on Popcorn Maker v0.1, we were trying to integrate a template by Brian Chirls that implemented chroma-keying. Chroma keying is a process where you film a video with a pure color (usually blue or green) backdrop, and then in post production, replace it with other content. This is used in movies to film dangerous or impossible scenes, and in the weather forecasts to display the weather behind the newscaster. Brian’s template allowed people to do the same live, within their web browser. Then we tried the template with videos other than the default, and found it wouldn’t actually work. This is due to the browser same-origin security restriction, and the lack of cross-origin resource sharing (CORS) support on the <video> element.

The browser same-origin security restriction is a fundamental part of protecting users from malicious sites. What it does is make cross-origin data write-only; it can be written to the display, but scripts cannot read data back from it. There are only two exceptions to this rule:

  • Loading JavaScript
  • CORS

CORS is a protocol to selectively enable the reading of cross-origin data on an HTML element. It is a two-part client-server handshake. First, the client adds a crossorigin="…" attribute to the HTML element. There are 3 valid crossorigin states:

  • none, which is the default mode
  • anonymous, which does not send cookie data to the server
  • use-credentials, which does send cookies to the server

When the browser loads the resource, it sends an additional header (Origin) to the server, which contains the origin of the HTML page. The server then needs to send back an additional header called "Access-Control-Allow-Origin", which contains either an origin that is allowed to read the cross-origin data, or a * which signifies that anyone can read it.
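
The server side of that handshake is tiny. Here’s a hypothetical helper (not any particular server’s code) that decides what Access-Control-Allow-Origin value to answer with:

```javascript
// Given the Origin header the browser sent and the server's allow-list
// (an array of origins, or the string '*'), pick the response header value.
// Returns null when the origin is not allowed, meaning: send no header,
// and the browser will keep the data write-only.
function allowOriginHeader(requestOrigin, allowed) {
  if (allowed === '*') {
    return '*'; // anyone may read the response
  }
  // Echo the origin back only if it's on the allow-list
  return allowed.indexOf(requestOrigin) !== -1 ? requestOrigin : null;
}
```

On the client side, all that’s needed is the attribute, e.g. `<img crossorigin="anonymous" src="http://images.example.com/logo.png">`.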

Browser support for the cross-origin specification varies. They all support it on the XMLHttpRequest object. But support on fonts (for @font-face rules), images, videos, and audio is very limited. In fact, for video and audio, it doesn’t exist. But Firefox is open-source software. We can fix it!

Bug 682299

This bug was filed back in August 2011 by Benoit Jacob, Mozilla’s WebGL expert. Cross-origin video textures were actually supported in WebGL in Firefox 4. However, a security exploit (https://bugzilla.mozilla.org/show_bug.cgi?id=656277) that performed a timing attack based on the brightness value of each pixel meant that all cross-domain WebGL textures had to be disabled.

My first patch for this bug cargo-culted heavily off the Image element’s CORS support (thanks Joe!). The key parts were:

  • Adding CORS-related attributes and enumerations to the nsHTMLMediaElement
  • Adding the crossorigin attribute to the nsHTMLMediaElement IDL
  • Changing nsHTMLMediaElement to use nsCORSListenerProxy if crossorigin is set
  • Changing nsLayoutUtils::SurfaceFromElement() to look at the nsHTMLMediaElement CORS mode when grabbing a frame from the Video element.

The trickiest part here was figuring out that I needed to add code in SurfaceFromElement() to get this all working. I started by coming up with a failing test case, where I would drawImage(videoElement) into a canvas, then try to getImageData() from it. This caused an NS_ERROR_DOM_SECURITY_ERR to appear in the web console. I searched MXR for the error, and worked my way backwards through the Canvas2DContext code until I found that it was SurfaceFromElement() that set the origin and CORS mode of a frame.
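
For reference, the failing test case looked roughly like this (reconstructed from memory, not the exact test that landed):

```javascript
// Draw a frame of a cross-origin video into a canvas, then try to read
// the pixels back. Without CORS, the canvas becomes "tainted" and the
// getImageData() call throws NS_ERROR_DOM_SECURITY_ERR.
function readVideoPixels(video) {
  var canvas = document.createElement('canvas');
  canvas.width = video.videoWidth;
  canvas.height = video.videoHeight;
  var ctx = canvas.getContext('2d');
  ctx.drawImage(video, 0, 0);
  // Throws unless the video was loaded with crossorigin set and a
  // CORS-approving response from the server.
  return ctx.getImageData(0, 0, canvas.width, canvas.height);
}
```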

It was a working patch, but it needed clean-up before landing in mozilla-central. In the end, I had to unify the CORS-related attributes and enumerations in nsGenericHTMLElement, remove old preferences, fix my lack of specification-compliance, fix some logical errors in my code, fix existing tests, and add new ones.

It landed on January 25th, so this feature will be shipping in Firefox 12. But we’re still not done; after all, we need the server to be CORS-compatible as well.

Bug 703566

This bug here was to enable CORS on Mozilla’s video CDN. Once the feature landed in mozilla-central, I asked the folks in #it where it should go, and they got their part done in 2 days.

So Firefox has CORS support for pretty much every resource. Now all we need to do is convince the rest of the web to enable CORS where it makes sense.

Posted in Open Source | Comments Off

Validating Open Badges

During the Foundation All-hands last week I had the opportunity to work with Brian Brennan, who is the technical lead on Mozilla’s Open Badges project. One of the problems that interested me during the work week was validating open badges.

First off, let’s see a real badge:

Why, it looks just like a normal PNG! And it is! The thing that makes a PNG a badge is metadata stored within the PNG.

A PNG is not just image data; it’s composed of multiple ‘chunks’ which specify different kinds of data: image, transparency, colour space, text, time, and other miscellaneous data. The chunk structure is actually pretty understandable:
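
Walking the chunks is only a few lines of JavaScript. This hypothetical helper (PNG.js, mentioned later, does the real work) shows the layout: a 4-byte big-endian length, a 4-byte ASCII type, the data itself, and a 4-byte CRC:

```javascript
// Split a PNG byte array into its chunks. Each chunk is:
//   4-byte big-endian data length | 4-byte ASCII type | data | 4-byte CRC
function readChunks(bytes) {
  var chunks = [];
  var pos = 8; // skip the 8-byte PNG signature
  while (pos < bytes.length) {
    var length = (bytes[pos] << 24) | (bytes[pos + 1] << 16) |
                 (bytes[pos + 2] << 8) | bytes[pos + 3];
    var type = String.fromCharCode(bytes[pos + 4], bytes[pos + 5],
                                   bytes[pos + 6], bytes[pos + 7]);
    chunks.push({ type: type, data: bytes.slice(pos + 8, pos + 8 + length) });
    pos += 12 + length; // length field + type + data + CRC
  }
  return chunks;
}
```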

Currently, the Open Badges spec specifies that the metadata be stored in a ‘tEXt’ chunk, which looks like this:

Each tEXt chunk contains a keyword and a text string, in the format:

Keyword:        1-79 bytes (character string)
Null separator: 1 byte (null character)
Text string:    0 or more bytes (character string)

The keyword and text string are separated by a zero byte (null character). Neither the keyword nor the text string may contain a null character. The text string is not null-terminated (the length of the chunk defines the ending). The text string may be of any length from zero bytes up to the maximum permissible chunk size less the length of the keyword and null character separator.

The keyword indicates the type of information represented by the text string as described in 11.3.4.2: Keywords and text strings.

Text is interpreted according to the Latin-1 character set [ISO-8859-1]. The text string may contain any Latin-1 character. Newlines in the text string should be represented by a single linefeed character (decimal 10). Characters other than those defined in Latin-1 plus the linefeed character have no defined meaning in tEXt chunks. Text containing characters outside the repertoire of ISO/IEC 8859-1 should be encoded using the iTXt chunk.
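
Decoding a tEXt chunk’s data is then just a matter of splitting at the null separator (a hypothetical helper, not PNG.js’s code; Latin-1 code points map 1:1 onto the first 256 Unicode code points, so fromCharCode decodes the bytes directly):

```javascript
// Split a tEXt chunk's data bytes into { keyword, text }.
function parseTextChunk(bytes) {
  var sep = 0;
  while (bytes[sep] !== 0) {
    sep++; // scan for the single null separator
  }
  function latin1(arr) {
    return String.fromCharCode.apply(null, arr);
  }
  return {
    keyword: latin1(bytes.slice(0, sep)),
    text: latin1(bytes.slice(sep + 1)) // not null-terminated; runs to chunk end
  };
}
```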

This doesn’t look too difficult to work with. Now we just need to get access to the metadata somehow. But how are we going to do that? The browser doesn’t allow access to PNG metadata through the <img> element, and I certainly don’t want to write a PNG parser from scratch in JavaScript. Fortunately for me, Devon Govett did! PNG.js is a PNG decoder written in JS for the <canvas> element without using a DOM Image. I was easily able to add support for the ‘tEXt’ chunk, and found that the sample badge used ‘openbadges’ as the keyword for storing the validation URL.

The last step in the puzzle was retrieving the JSON data located at the validation URL. Unfortunately, that’s a cross-domain request:

Fortunately, the Open Badges server is being built in the open, and I was easily able to enable CORS on the server by submitting a pull request.

With all that done, I built a demo that showed off in-browser verification of badges. I’d love to see the badge spec continue to evolve, and become even more robust. The next step, signing badges, is going to ensure that they can’t be tampered with. It should be interesting to verify that inside the browser. It’d also be good to switch to the ‘iTXt’ PNG chunk, which is designed to hold international text, such as URLs.

Posted in Open Source | 1 Comment

I contributed to Firefox and you can too

Here in CDOT at Seneca College, we regularly push the boundaries of what is possible in browsers with projects like Processing.js and Popcorn.js. But sometimes we break those boundaries and we find neat bugs in browsers. Rather than sitting around and complaining, David Humphrey encourages the following steps be taken to fix a bug:

  1. Create a test
  2. File it
  3. Fix it

I consider myself a good developer, but filing and fixing my first few Firefox bugs was intimidating. I’m documenting the process here to show that it’s possible for someone with only a small understanding of how Mozilla’s development process works to contribute.

Bug 679977 – TM: Variable assignment fails, gets assigned undefined

The first bug that I filed was found in the Processing.js reference tests. The reference tests compare an image that we consider correct to the live Processing sketch, looking for visual differences. Every so often, some of our ref tests would fail like this:

It was maddening to track down though. It would only occur in Firefox, and any attempt to debug the problem made it go away. Clearly the ref test environment was important in triggering the bug. The key to finding this bug lay in the error message from the first ref test: “cachedVertArray is undefined”. I looked at the associated lines of code and what I saw made absolutely no sense at all:

var cachedVertArray;
// Loop through the list of vertices
for (i = 0; i < vertArrayLength; i++) {
  // Copy a single vertex to our variable
  cachedVertArray = vertArray[i]; /* This gets assigned undefined */
  if (!cachedVertArray) { // Check to see if the assignment worked
    // If we assign it again, it works fine!
    cachedVertArray = vertArray[i];
  }
}
For some reason, the JavaScript engine in Firefox was not assigning values correctly. That’s bad. But I had a reproducible test case now; that’s good. With my test case in hand, I visited the #jsapi channel on irc.mozilla.org, which is where all the JavaScript engine hackers live. I linked to my live test case, and asked for some help in filing the bug properly. One of the users, sstangl, confirmed that it was a JavaScript JIT bug:

sstangl: jbuck: it’s a JIT bug in JaegerMonkey. If you turn off ‘javascript.options.methodjit.content’ in about:config, it passes.

The JIT (just-in-time) compiler is the magic that makes JavaScript so fast these days. However, it was mis-optimizing some code, which was causing the bug in the Processing.js ref tests.

So I had a test case, and a bug filed. At this point, others jumped in and reduced the test case down to a manageable 20 lines. shu contributed the patch that fixed the bug for us, which was also his first JavaScript tracer bug. And now that Firefox 9 is out, we can edit our list of known ref test failures. And all I had to do was get a good test case, and file a bug with Mozilla.

Bug 679977 – Add an XInput backend for Windows gamepad support

One of the things we’re working on adding to Firefox is Gamepad support (Bug 604039). For each operating system, we need to add a backend to communicate with platform-specific gamepad APIs. On Windows this is DirectInput, on Linux it’s SDL, and on Mac OS X it’s IOHIDManager. On Windows, there’s a second gamepad API called XInput. XInput is for Xbox 360 controllers, while DirectInput is for any controller. Although it might seem like a waste of time to implement dual backends for Windows, XInput makes the Xbox 360 controller work much better than it would with DirectInput. I’ve worked with XInput before, so I requested that I get assigned to the bug.

Step 1 – Build Firefox

The first step in working on any Firefox bug is getting a local build compiling. I’ve done this before on Mac OS X and Linux, but doing the same thing on Windows was simply not working for me. My console would just randomly hang, with no discernible reason. After a few hours of banging my head against the build system, I asked for help in the #developers channel, where nearly all of the Mozilla developers hang out. khuey was easily able to diagnose the problem after looking at my .mozconfig file:

mk_add_options MOZ_MAKE_FLAGS="-s -j8"

ac_add_options --enable-debug
ac_add_options --disable-optimize

The problem was my use of -j8 on Windows with GNU make. On any other platform, it’ll allow compiles to work in parallel, and offer a significant speed boost. On Windows, it randomly deadlocks! The solution was to use pymake, a GNU make replacement written entirely in Python. I was able to continue working on my XInput patch, which is currently hibernating on Github, but I was bothered by the fact that I wasted hours on a simple problem that could’ve been solved with an error message in my console.

Step 3 – Find something to work on

A good starter bug in Firefox is fixing your pet peeve. In my case, I wanted to print out an error message when building Firefox on Windows when -jN is passed to GNU make. Fixing the Firefox build system can’t be too hard, right?

This guy showed up when I said that

Bug 705479 – Parallel builds (-jN) on Windows should fail early if gmake is being used (aka Steps 4, 5, 6, and 7)

Before I assigned myself to the bug, I decided to try and bang out a quick patch. After all, this was my first Firefox patch and I wasn’t too sure I could do it yet. I decided to start my patch with the main Makefile that is used for all top-level builds, client.mk. This is not ideal for two reasons pointed out in review:

  1. Applies to all Mozilla projects, with no way for other Mozilla projects to opt-out
  2. Developers doing a build inside the object directory would not get the error message

khuey recommended moving the Makefile check to browser/build.mk, which is only executed when building Firefox, and runs whether you’re doing a top-level or object-directory build. ted also mentioned on IRC that GNU make would only deadlock when using multiple CPUs, so it’d be important to check that the NUMBER_OF_PROCESSORS environment variable is greater than 1. With this feedback, I came up with patch attempt #2. I was getting closer, but my patch would break builds for people with multiple CPUs who did not pass -jN to make.

At this point I was at an impasse. I needed to check the MAKEFLAGS variable, but make is odd in that it actually does two passes over a file: first, it reads in the entire file, executing lines that are not within a target; then it expands the remaining variables when it executes a target. This means that the MAKEFLAGS check must be within a target. Google helped me find a way to do this, but I was still stuck with the problem of putting it in a target that would always get executed. While browsing the various Makefile.in files, I found this curious line in the root Makefile.in:

default::

What the heck does default:: mean? After a bit of searching I found the Make manual section on Double-Colon Rules which says:

When a target appears in multiple rules, all the rules must be the same type: all ordinary, or all double-colon. If they are double-colon, each of them is independent of the others. Each double-colon rule’s recipe is executed if the target is older than any prerequisites of that rule. If there are no prerequisites for that rule, its recipe is always executed (even if the target already exists). This can result in executing none, any, or all of the double-colon rules.

This is exactly what I needed! Due to the structure of the Mozilla build system, the default:: target will always be executed in any Makefile. With my MAKEFLAGS problem solved, I whipped up a third patch which passed review and got landed by khuey in mozilla-central on November 30th.

Step 8 – Repeat

Now that I got my first bug landed, I wanted to do another. Before doing that I took the recommended step of requesting level 1 access. Level 1 access means that I can push patches to the try servers, which can build and test my patches on more platforms than you can shake a stick at. The process to get the different levels of access is spelled out on the Becoming A Mozilla Committer page.

Bug 691876 – Cannot run Windows jsshell nightlies: mozutil.dll missing

Back in the summer, David added a standalone js shell to the Firefox nightly builds (Bug 543463). We use it in Processing.js for unit and parser tests, and it’s nice to have a downloadable binary, rather than building from source. A change in the memory allocator for the js shell broke the binary that shipped in the Nightly build; it required mozutils.dll/.so, but the packager did not zip that up with the binary. Looking at the patches that David wrote, it was pretty easy to figure out what I needed to add. And I got to use my newfangled try server privileges which was neat.

Because only level 3 developers can commit code to mozilla-central, once a patch has passed review, the assignee (myself, in this case) is required to add ‘checkin-needed’ to the Whiteboard. This indicates to whoever is on build duty that they should push the patch to mozilla-inbound, where it gets built and tested before being pushed to mozilla-central.

The patch I’m working on currently is adding CORS support to cross-origin <video> elements (Bug 682299). I’m nearly done with it, and I promise to write up a timely blog post when I finish it :)

You can contribute too

As developers, we can do better than just waiting for someone else to fix it. There’s plenty of help and guidance available on the Mozilla Developer Network wiki and in the #introduction chat room on irc.mozilla.org. Even if you’re not a C++ guru, there are tons of places where Firefox could use some help. Jump in, and contribute!

Posted in Open Source | 1 Comment

Getting a fullscreen <canvas>

One of the demos that Dave and I worked on for our Processing.js Workshop Tour was a full page Processing.js sketch that displayed words from tweets containing “#OWS”:

OWS Twitter Wall

Occupy your browser

You can see a live demo of this here, and the code for it on Github. One problem we ran into while developing this was getting a full page canvas to appear without scrollbars. We found that you could use window.innerWidth reliably, but window.innerHeight would return a value that would make scrollbars appear. We ended up just subtracting 10 pixels from window.innerHeight and calling it a day, as seen here.

However, I was determined to solve this issue. Here’s a bare-bones jsfiddle showing the problem:

As you can see, the div is slightly taller than the canvas, even with the standard CSS reset applied. The key thing here was setting line-height to 0 (thanks Daniel!). Once you do that, the div will be the same height as the canvas. Wondering why, I asked our font expert Pomax, and he got an answer for me:

Pomax: <divya> canvas is inline block
Pomax: <divya> err inline
Pomax: <divya> its not a bug
Pomax: <divya> its a feature
Pomax: <divya> much like <img>

TL;DR

So, the next time you need a full page canvas:

* {
  margin: 0;
  padding: 0;
  line-height: 0;
}

Posted in Open Source | Tagged , | Comments Off

Getting started with Input.js

Input.js is a JavaScript library to map controller and OS-specific USB enumerations provided by the Gamepad API in Mozilla Firefox to an ideal virtual gamepad. While developed for use in Paladin, Mozilla’s open source 3D gaming initiative, Input.js is designed to be used by any web application.

To get started with Input.js, you must first download a special try-server build of Firefox. The latest build (Oct 7 as of this post) is available in bug 604039; look for a post from the Mozilla RelEng Bot. If the download links have expired, I have uploaded a copy to my server.

Next, you need to clone the Input.js git repository to your computer. We’ve got a few files here:

LICENSE.txt
README.md
create-mapping.html
input.js
json2.js
raw-gamepad-api-test.html
simple-input-js-test.html

First, open up raw-gamepad-api-test.html in the Gamepad-enabled build of Firefox. This will let you test the Gamepad API without Input.js. You may need to install a special driver for your gamepad for this to work. If all of your gamepad’s buttons and axes appear to work, you can open up create-mapping.html to create a mapping that describes how the physical device maps to the virtual gamepad. Once you’ve pasted that JSON blob into input.js, you can test the mapping with simple-input-js-test.html.

Posted in Open Source | Tagged | Comments Off

Getting git working in mozilla-build

Mozilla-build is a meta-installer for Windows that installs all of the dependencies necessary for building Firefox. Some of the projects that I’m working on, such as Processing.js and Paladin, use Python, jsshell, and make for command-line automation. Mozilla-build is a lifesaver because you don’t need to have developers install cygwin. Unfortunately mozilla-build does not contain git yet. However, the work-around is very easy!

First, you need to install MSYS Git, version 1.7.7 or higher. Previous versions had a bug that required copying libiconv2.dll to another directory. Also, make a note of the install path; you’ll need it later. On my Windows 7 64-bit machine it was C:\Program Files (x86)\Git. Now open up the git bash shell and run the following commands:

  1. touch .profile
  2. echo 'export PATH=$PATH:/c/Program\ Files\ \(x86\)/Git/bin' >> .profile # replace the install path with your own and add /bin to the end

Now open up your mozilla-build shell (in C:\mozilla-build, start-l10n if you don’t have Visual Studio installed, and guess-msvc if you do) and start using git commands to your heart’s content!

If you are still having problems getting git working in mozilla-build, Gary Kwong has pointed me to a stackoverflow post describing another solution to the libiconv2.dll missing error.

Posted in Open Source | Tagged , , , , | Comments Off

Github’s Compare View

GitHub recently hit one million users, and in celebration of this fact, I thought that I’d post about some of GitHub’s hidden but useful features. The first one I want to show you is compare view:

Compare view is GitHub’s online git diff. It’s a fantastic way of doing code reviews, especially when multiple commits are involved.

On the top left, you can see the two branches that are being compared. In this case, I am comparing my develop branch to t1660c. In the middle you see all the commits that are part of this compare view. I’ve only got one here, but you could do something ridiculous like comparing Processing.js v1.2.1 to v1.2.3. And on the bottom of the page we have all of the diffs for each file modified in the compare view. If you have more than 50 commits, then the diff is put on a different tab.

Accessing a compare view can be done in two different ways:

  1. You can build the compare view URL manually with the following format: https://github.com/USER/REPO/compare/[USER:]GIT-REF...[USER:]GIT-REF. USER is your GitHub username, REPO is your GitHub repository, and GIT-REF can be a tag, branch, or commit SHA1. It’s also possible to do cross-repository compares by prepending another GitHub username to the git ref, with a colon dividing the username and git ref.
  2. Use the branch list page by clicking on the Source tab, then the Branch List subtab. Besides each branch is a Compare button.
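
If you build the URL by hand often, the pattern is easy to capture in a helper (a hypothetical function; the two refs are joined by three dots, and a cross-repository ref would additionally carry a USER: prefix):

```javascript
// Build a GitHub compare view URL for two refs in the same repository.
// Refs can be tags, branches, or commit SHA1s.
function compareURL(user, repo, baseRef, headRef) {
  return 'https://github.com/' + user + '/' + repo +
         '/compare/' + baseRef + '...' + headRef;
}
```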

Of the two ways to do a compare view, I prefer creating it from the URL because it’s faster. If you want to read more about GitHub’s Compare View, you can read the official blog posts:

Posted in Open Source | Comments Off

Getting started with Paladin development

From our wiki:

Paladin is an initiative by the Mozilla community to create the best gaming technology available for the open web. Paladin sits at the intersection of 3D gaming, JavaScript framework and library development, and the browser. Where the web is missing critical gaming support, we aim to fill those gaps by adding new browser APIs, enhancing existing ones, and building technologies on top of the web.

Paladin is not just one thing, it’s a variety of projects working together to further our shared goal. For that reason, we don’t have a single project lead, we have a group of project owners that work together. All of our regular communication happens on our IRC channel, #paladin on irc.mozilla.org. We also have a mailing list, paladin-dev on Google Groups, for discussions that require feedback from a larger group of developers.

Paladin started up only a few months ago, but we already have a functional engine, called Gladius, and a game built with it, Rescue Fox. If you choose to work on Paladin, there are a wide variety of areas you can work on. Once you have found an area of Paladin that you want to work on, have a look at the code and bug tracker for that module, and speak to the project leads in #paladin. So, without further ado…

Gladius – The 3D game engine

Gladius is the main product of the Paladin team. We’re building an open source 3D game engine that runs without plugins, and works on desktop and mobile devices. If you choose to work on the core API, you’ll be integrating other JavaScript libraries, implementing modules if they don’t exist, and improving the existing ones.

CubicVR.js – The graphics subsystem

CubicVR is a JavaScript port of the C++ CubicVR engine. It uses the HTML5 canvas tag and WebGL to render all of Gladius’ graphics. If you choose to work on this, you’ll be creating an automated test framework to augment the current manual tests, and implementing any other features that Paladin requires.

Ammo.js – The physics subsystem

Ammo.js is a JavaScript port of the C++ Bullet engine using emscripten. emscripten is an LLVM-based C++ to JavaScript compiler. If you choose to work on this, you’ll be improving the efficiency of the compilation process, adding automated tests, and implementing any other features that Paladin requires.

The audio subsystem

A custom audio subsystem was written for Gladius, as there were no existing ones that fit our needs. It currently supports basic playback functionality with whatever audio formats the browser supports. If you choose to work on this, you’ll be adding features to the audio subsystem, such as positional audio, audio effects, and file format handling.

  • Project Lead: David Humphrey (humph)
  • Code: Same as Gladius
  • Bug tracker: Same as Gladius

The input subsystem

We also wrote our own input subsystem for Gladius. It currently has basic support for keyboard, mouse, and touch input. If you choose to work on this, you’ll be improving the input subsystem with gesture recognition, support for the Joystick API, key mapping, and any other features Gladius requires.

  • Project Lead: David Humphrey (humph)
  • Code: Same as Gladius
  • Bug tracker: Same as Gladius

Other subsystems

We’ve just begun to scratch the surface with what we want Gladius to do. On the bug tracker right now, we have plans for a multiplayer server and resource loading components. We don’t know what else we’ll add, but you can drive it yourself if you want!

Rescue Fox – The first game

Rescue Fox is the first in a series of games that will help us drive development of Gladius in a practical direction. The goal of the game is to navigate through an asteroid field to rescue your comrade. If you choose to work on this, you’ll be using Gladius to make the game better, and filing bugs against Gladius when you can’t.

Firefox – The browser

Gladius is meant to be a 3D game engine for the open web. We want to support as many browsers as we possibly can. But for development purposes, we’re concentrating on supporting Firefox and Fennec (Firefox Mobile for Android). The other third of Paladin is implementing WebAPIs in the browser to make gaming better on the web.

Joystick API

The Joystick API allows Firefox to take input from gamepads, joysticks, and other gaming peripherals and use it on any web page without the need for browser plugins. You might need to install a driver to use your gamepad though! If you choose to work on this, you’ll be hacking Firefox to make the API work better and creating JavaScript libraries to make it easier to work with.

Mouse Lock API

The Mouse Lock API allows Firefox to lock the mouse within the browser window, which is critical for any game that uses the mouse to manipulate the camera. Without any sort of mouse locking, it’s near impossible to control the camera effectively. If you choose to work on this, you’ll be hacking Firefox to improve the API and creating JavaScript libraries to make it easier to work with.

You – The developer

If this list looks overwhelming to you, don’t worry. 8 months ago, I was in the exact same position as you. Come chat with us on IRC, we’ll be happy to answer any questions you have about Paladin. Pick something that you’re interested in working on, make sure you keep on trying, and you’ll do fine.

Posted in Open Source | Tagged , , , | 2 Comments