Traveling to Japan for business

I recently traveled to Japan for work. It was an awesome experience. I wanted to share some of the knowledge I gained.

Business Cards

When meeting executives and other high-level personnel there is a right and a wrong way of exchanging business cards. You should always carry business cards. In this case less is not more: running out of business cards will be awkward.

Try to get your business cards translated, with one side in English and the other in Japanese. This is a nice-to-have, but I found that people were really impressed.

Here is a good video on how you should exchange business cards. I found that it was spot on.


I feel like this is obvious, but not everyone speaks English in Japan. If you are presenting, you will need to consider that there will be a translator on-site. If you have never had someone translate for you, here are some things you need to consider:

If you have slides of some sort, allow the translator to finish translating your spiel before you switch slides. Furthermore, do not point at the slides while speaking. You may think you are pointing something out, but the audience will not understand you, and I doubt the translator will mimic your actions.

If you have written a script, keep in mind that, depending on your skills and delivery, it might be hard for the translator to translate what you are saying. Some people get nervous and stutter, say the wrong word, or pause awkwardly when they shouldn't. I found that speaking freely is way more effective.

Gift Giving

If you have brought a gift, ensure it is in a presentable package. Presentation is more important than the gift itself. Gift exchange is done on the last day, not the first.

Business Lunch/Dinner

I found that the Japanese will pre-arrange everything from the time and place to the menu. If you have any restrictions (allergies, vegetarian) let them know in advance and they will accommodate you.

I believe all restaurants will give you moist towels to clean your hands with. These towels remain at the table and the waiter/waitress will not collect them. This may seem strange to some people, so if you wish you can ask the waiter/waitress to take the towel away.

If you are a smoker, I believe it is rude to leave the table for a smoke break, but that may depend on whether members of the host team also smoke. When I was in Japan none of my co-workers smoked, but a couple of the executives from the Japanese company did. During my week there they NEVER left the table during a meal to go for a smoke break.


I hope you found this knowledge useful 🙂

Getting around CORS with Node.js

I recently wrote a JavaScript web application that depended on AJAX and a Java-driven back end. The back end was a standard RESTful web service running on a Glassfish server. Before I started working on the application, a designer was hired to produce the desired HTML/CSS layout. As I got further into the development process and the moving parts started to change, the CSS also needed to change. This posed a problem for us, mainly because all of the data (and the HTML elements needed to display it) that populated each page had to be requested from the web service. Typically what I would do is log in to the web application running on the test server, fire up Firebug, and use the HTML/CSS edit feature to get the CSS where I needed it to be. Alternatively, I would save the website, open the HTML and CSS pages locally, and edit from there. Any CSS changes would then get committed to the repo. However, this wasn't good enough for the designer. The designer wanted to put the repo on their localhost and simply run the application on their local server. This of course caused a CORS error, because an XHR request could not be made from localhost to a remote server.

To solve this problem the designer could have gotten a local copy of the back end working on his localhost as well. However, that would involve configuring MySQL, Eclipse, and Glassfish correctly. Not really ideal. So I went out in search of another solution. After talking to some people I decided to use Node.js. Not having worked with Node yet, I wasn't quite sure what I was going to do. Some people said I needed a proxy, some said I needed middleware, but at the end of it all I just needed a simple server and a client.

The Structure

The idea is for a server to listen to the requests coming in from your localhost on a particular port. This means that the URL of the XHR requests has to be changed to localhost:portnumber. Once a request is captured, a dummy client makes the exact same request, but instead of the client being your localhost, it is a Node client whose domain is the same as the domain of the back end. I never really found any documentation on what the createClient function accepted, so let me show you what I used:

var aclient = http.createClient(80, '');

The next step is to create a server that listens on a particular port. This is the same port I mentioned above. Ensure that this port is not being used by another application running on your computer; otherwise you will get a weird port-not-available exception. Now, in the function that you pass in when you create the server, you need to capture the request, get the needed data, and make a similar request with a new URL. You also need to figure out what the request method is. This is important because sometimes an OPTIONS request will be sent, which is basically the browser's way of testing whether CORS is allowed.

if (req.method === 'OPTIONS') {
	// add the headers needed to satisfy the CORS preflight check
	var headers = {};
	headers["Access-Control-Allow-Origin"] = "*";
	headers["Access-Control-Allow-Methods"] = "POST, GET, PUT, DELETE, OPTIONS";
	headers["Access-Control-Allow-Credentials"] = true;
	headers["Access-Control-Max-Age"] = '86400'; // 24 hours
	headers["Access-Control-Allow-Headers"] = "X-Requested-With, Access-Control-Allow-Origin, X-HTTP-Method-Override, Content-Type, Authorization, Accept";
	// respond to the preflight request
	res.writeHead(200, headers);
	res.end();
} else if (req.method === 'GET') { // no data is coming
	// use the client you created to make a request; this request needs
	// all of the information captured in the GET request coming from
	// your localhost:portnumber
	var clientrequest = aclient.request(req.method, '/api' + req.url, {
		'host': '',
		'authorization': req.headers['authorization'],
		'content-type': 'application/json',
		'connection': 'keep-alive'
	});
	var msg = "", clientheaders;
	// get the response from the back end
	clientrequest.on('response', function (clientresponse) {
		clientheaders = clientresponse.headers;
		clientresponse.on('data', function (chunk) {
			msg += chunk;
		});
	});
	clientrequest.end(); // actually send the proxied request
	setTimeout(function () {
		// send the data you just received from the back end back to your
		// client application on localhost
		res.writeHead(200, clientheaders);
		res.end(msg);
	}, 500); // wait a bit in case we don't have all of the chunks of data
}

This is a simple implementation and it works great. It might not be the perfect solution, but it gets the job done. Feel free to contact me if you need to implement this type of solution. Also, if you need help, go ahead and join the node.js IRC channel: #node.js on the server. The people there are really helpful and forgiving.

Area Code Data Retrieval

Last week I was tasked with populating a database with area code/prefix combinations and the geographic locations they map to. This was an interesting task that required me to retrieve data from a foreign API. For those of you who are not sure what I am talking about when I say area code/prefix, let me take a second to explain. A telephone number consists of 10 digits (not everywhere, but for the most part); the first three digits are the area code and the next three digits are the prefix. This of course does not take into account international country codes like 001 that get prepended to a telephone number. When you have a particular area code/prefix combination you can use it to figure out which state/province and city the owner of that combination resides in. If you would like to research this further, visit the North American Numbering Plan (NANP) website.
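To make the split concrete, here is a small illustrative helper (not part of the scripts discussed below) that carves a 10-digit NANP number into its pieces:

```javascript
// Illustrative only: split a 10-digit NANP number (no country code)
// into area code, prefix, and line number.
function splitNumber(digits) {
  return {
    areaCode: digits.slice(0, 3), // first three digits
    prefix: digits.slice(3, 6),   // next three digits
    line: digits.slice(6)         // remaining four digits
  };
}

// splitNumber('4165551234') → { areaCode: '416', prefix: '555', line: '1234' }
```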

In order to populate my database I used two separate APIs: local calling guide's XML query interface and Telephone Number Identification (tnID) search functionality.

To start, I downloaded a list of all of the area codes in use and the countries they belonged to. I got this list from the NANP website, but thinking back on it now I could have just gone through numbers 200–999 and gotten that information from the APIs. (This of course would have taken more time since not all of those area codes are in use.) I then wanted to use that list to get all of the available prefixes for each area code and, finally, the city they mapped onto.

The approach I took in getting the data was a bit flawed. I used local calling guide's query interface to first get a list of rate centers per area code, then I used each rate center's exch code to get all of the prefixes available for that area code. Finally, I used their xmllocalprefix function to get the city information. You can imagine that this is a lot of data: you need to go through each area code, retrieve a list of rate centers, and then retrieve the city information. I believe it took up to a minute to get all of the data for a single area code. This is definitely a long time, but I figured I would create a script to do this automatically: press go once, wait some time, and done. Boy was I wrong. Also, I needed all of these steps because local calling guide's API did not provide a more direct way of getting all of the prefixes per area code.

First Attempt

My initial script consisted of an HTML form with a textarea and a submit button. The idea was that I would copy and paste the area code/country information ("416 – Canada \n 905 – Canada…") from the NANP list I mentioned above, press submit, and let my PHP script do the work. Essentially, a POST request was sent with all of the area codes, and then the PHP script would go through each area code and get the city information in the manner I described above. I learned quickly that if a POST request takes too long to process it times out! That left only 3 or so of the area codes processed.
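The parsing step itself is simple; sketched here in JavaScript rather than the PHP I actually used, it amounts to:

```javascript
// Parse pasted "areacode - country" lines (e.g. "416 - Canada\n905- Canada")
// into structured records; whitespace around the dash is inconsistent,
// so trim each piece.
function parseAreaCodes(text) {
  return text.split('\n')
    .map(function (line) { return line.trim(); })
    .filter(function (line) { return line.length > 0; })
    .map(function (line) {
      var parts = line.split('-');
      return { areaCode: parts[0].trim(), country: parts[1].trim() };
    });
}
```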

Second Attempt

In order to get around the POST timeout I decided to do a PHP header redirect after each area code was processed. Since the redirect lost the area code/country data that was in the textarea, I had to use a SESSION variable to store that information. I now had two separate files. The first file initialized the SESSION variable if it wasn't already initialized, then called the second file. The second file processed the next area code in the SESSION, removed it from the session, and then called the first file. This seemed like a good way to do things; however, it resulted in a "too many redirects" error 😦 On the upside, sometimes I was able to process up to 15 area codes at a time, which was a big improvement over the first script.
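Stripped of the PHP details, each pass of that redirect loop amounts to the following (sketched in JavaScript; the real version kept the list in $_SESSION):

```javascript
// One pass of the redirect loop: process the head of the list kept in
// the session, return the tail to store back for the next redirect.
function processNext(sessionAreaCodes, processOne) {
  var current = sessionAreaCodes[0];
  processOne(current); // fetch and store the city data for this area code
  return sessionAreaCodes.slice(1); // what the next pass will see
}
```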

Third and Final Attempt

After my second attempt failed I took a step back and thought about what to do next. I figured out that I should use some JavaScript magic to make it appear like something else was happening. After all, when you browse a website the server never complains about too many clicks. I edited the first file and added a document onLoad event. Now when the document loaded it would display some information on the screen before it loaded the second page. The first piece of information was which area code was just processed and the second was which area code would be processed next. This was brilliant since it actually let me know what was going on behind the scenes. Before this I was using SQL select statements on the database to see what data actually got stored. This flow worked perfectly. No actual errors. However, I still was not getting what I wanted. Apparently after a couple of redirects PHP's SESSION variable gets wiped (most likely some PHP config variables needed editing). That meant that after 15 or so go-arounds my area code SESSION variable would get re-initialized and the script would attempt to store the data for the very first area code. This really sucked, and since I had already taken way too long to complete my task I decided to split my area code data into smaller chunks, which meant running the script a dozen times or so.


Telephone Number Identification (tnID) search functionality would probably have been a better source for my data. I didn't end up using it to populate the database because I did not figure out how to use it until I was done with my task. It wasn't a total loss, because I did end up comparing what I had in the database with what the search results returned.

Data Cleanup

After some investigation into the database I noticed that some items did not make sense. For example, I had a lot of 'Washington Zone 1'. I needed to clean this up, but I wasn't about to spend a whole day doing so. This time I had an advantage: I knew both the area code and the prefix. After some googling I stumbled on and their API. So I made a new script. I still had two files. The first file indicated which state needed to get updated as well as listed which prefixes were already checked. The second one, a PHP file, got the results from the API, updated the database, and redirected to the first file with updated query string parameters. The query string parameters indicated which prefix needed to be updated next. With this process, there were no time-outs or errors.
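The query string bookkeeping is the only interesting part of that second script; a sketch of it (the file and parameter names here are hypothetical):

```javascript
// Build the redirect URL that tells the next page load which prefix
// to update; return null when every prefix has been handled.
function nextRedirect(prefixes, currentIndex) {
  var next = currentIndex + 1;
  if (next >= prefixes.length) {
    return null; // done: no more prefixes to update
  }
  return 'update.php?prefix=' + prefixes[next] + '&index=' + next;
}
```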


View all of my blogs

Buttercamp – New York

I just got back from New York City and I am happy to announce that Buttercamp was a success! Buttercamp took place at the ITP labs of NYU's Tisch School of the Arts. It was a hack session sponsored by the WebMadeMovies project. The idea behind the hack session was simple: make cool HTML5 demos using popcorn.js, butter.js (Butterapp – The Popcorn.js Authoring Tool), and any other tool you find. In preparation for the day Brett Gaylor and Ben Moskowitz (who did an awesome job organizing, btw) reached out to artists, filmmakers, and designers. Ben also had some of his students attend. Anyone interested in participating simply had to fill out a form proposing their idea or project. The requirements were simple: you had to have an HTML5 video, a story to tell, and a developer who knew their way around the web. Each project group was assigned a popcorn.js and butter.js expert to help with the JavaScript part of the demo. The process was flawless: the filmmaker/artist explained their idea and vision and started annotating their video, the team developer worked on the look and feel, and the popcorn.js/butter.js expert started on the functionality. Watch the Video Blog.

The Groups


You can read more about the project on their website. The inspiration for their Buttercamp demo came from the current conflict in Egypt. The idea was to produce a non-linear timeline. The main video was positioned to take over the entire screen. The video was of a protest happening on a bridge. As the video played, information about the location appeared on the screen: Wikipedia articles, close-up photos of the protesters, and even videos of protesters being interviewed. The main challenge for this demo was getting related content: who was on this bridge tweeting or posting photos to Flickr as the protest was happening? The main video showed one angle of the protest, but the extra data formed a bigger, though incomplete, picture. The question remains: how does one go about getting the whole story from every angle? Demo links here!

Through a Lens Darkly

You can read more about the project on their website. The team wanted to showcase the work of Sylvia Isabe using butter.js. Since this project has a lot of material, it is fair to say that they wanted to get a deeper understanding of how the tools work so that they could apply that knowledge to future work. The team's tinkering led to an improvement to the butter.js tool: an import/export feature! It is still in review, but the idea is to be able to import work previously done with the tool in order to make changes and add content. Demo links coming soon!

Everything is a Remix

This project aims at revealing how a particular video came to be: which resources were used in its making and how the content was "remixed". This was more of a proof of concept than an actual demo request. Kirby Ferguson and I worked on this. Kirby wanted to explore an interface that jumped down rabbit holes for more stuff to watch/learn, kind of like Jonathan's Donald Duck demo, but instead of having information around the video, it would give the user a clear way to see the original clip. Kirby came up with a simple wire-frame:

When the main video came to a point where an original source video was available, a button would appear. In this case we had two source videos. When the user clicked the button a new video would open up on top of the original with extra content (in this case an Amazon link). As a result of this we realized that we needed a video plugin in popcorn.js, which I took some time at the beginning of the day to develop. It is currently making its way through the review process. The main challenge of this demo was CSS. Positioning the second video on top of the first proved oddly challenging and in the end just did not work. Temporarily, you can view the demo here. The idea that Kirby wanted to explore is possible; however, it really needs a designer to make it work.


The idea of the Buttercamp demo was to provide a non-linear type of storytelling. The user made their own experience by choosing a path to explore. Bobby, a new member of the webmademovies/Mozilla team, did an awesome job fine-tuning the demo. View it here (Firefox only for now).

Graffiti Markup Language (GML)

The GML project has been around for some time. You can read about it on their website. The video aimed at connecting graffiti with video. A GML popcorn.js plugin is already in the works. The demo can be viewed here (it is most likely getting tweaked as you read this); a similar but different demo can be viewed here.


Tubeyloops' focus was actually remixing video. You can read the project proposal. Greg Dorsainville had a vision of having multiple videos and allowing the user to remix them on the fly in order to produce a finished product. What ended up happening here was AWESOME, and hopefully in time I can link you to a blog explaining more. Data from pattern sketch was used to alter the video's audio and to produce sequences of the final product. How it worked: there were four video clips, each linked to a key on the keyboard (QWER). When one of the keys was pressed the corresponding video played until another key was pressed. A truly unique remix was formed each time the demo was used.
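The key-to-clip wiring can be sketched roughly like this (the element ids and key handling here are made up; the actual demo's code surely differed):

```javascript
// Map the QWER keys to four video clip ids (ids are hypothetical).
var clips = { q: 'clip1', w: 'clip2', e: 'clip3', r: 'clip4' };

function clipFor(key) {
  return clips[key.toLowerCase()] || null; // null: not one of Q/W/E/R
}

// In the browser, each keypress switches which clip is playing:
if (typeof document !== 'undefined') {
  document.addEventListener('keydown', function (e) {
    var id = clipFor(e.key);
    if (id) {
      // pause the currently playing clip, then play
      // document.getElementById(id)
    }
  });
}
```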


There were a lot of people there, including Ben's students, who wanted to learn more about HTML5 video and popcorn.js. We dedicated about an hour to them, providing an overview of what HTML5 is and what you can do with it, plus an overall tutorial on using popcorn.js and butter.js. As a result of this group using butter.js, a number of bugs have been filed to improve the Butterapp – The Popcorn.js Authoring Tool.

Lessons Learned

The day went great, participation was through the roof, and the demos were mind-blowing. However, as with most things in this world, Buttercamp could use some improvements.

  • The day was way too long. We started at 9:30 am and finished at 10pm. I would say we started seeing people leaving around 4pm. A little less than half of the people stayed for the show and tell at the end.
  • More designers were needed. A lot of the demos were centered around the design. I already talked about CSS being the only thing blocking my demo from doing what it is supposed to do. On days like this design experts are needed to fine tune the demo once all of the content has been collected.
  • A server to host all of the demos. It would be nice to allow people to ftp their demos as they were working on them. It definitely would have made this blog better, but it would also eliminate the time it will now take to get all of the demos from each team.



Buttercamp was fantastic. If you missed it maybe you can start a petition to get Buttercamp going in your town. The day went smoothly and encountered no real problems. Everyone had a great time collaborating and sharing ideas. If you attended buttercamp please share your stories, pictures, and results.

View all of my blogs on popcorn-js

Code Review, SR+ … but why?

I wanted to take some time to talk about code review. Let me start off by explaining what "code review" is, or rather what it means in the context of this blog. Code review is the act of looking at someone's code in order to evaluate it. Code that is in review is often referred to as a patch. The purpose of a patch is to fix a bug, add functionality, or improve performance. Once the patch passes review it is staged/added to the core of the project. The reason behind the review is simple: does the code do what it says it is supposed to do? It is important to note that every project has its own review requirements. For example, the popcorn-js project I am working on has the following requirements:

  1. Ensure the code follows the style guide and is appropriate
  2. Ensure the code passes lint
  3. Ensure main tests pass on multiple browsers: test/index.html
  4. Ensure new tests were added to test the new functionality and that they pass
  5. Ensure that other tests such as parser or plugin tests that are affected by the new code also pass

Looking at these requirements, a patch for the popcorn-js project has to fix or add the functionality it is meant to fix or add, it has to include tests, and it has to follow a style guide. If the specific patch that you are looking at is missing any of these, the review simply fails. However, what happens when it passes? Lately I have been seeing short and sweet review comments: "Super Review (SR) +". But what does this mean exactly? Did you follow the review requirements? Do you even know they exist?

When a patch fails review the reviewer always states the reason for the failure. This is obvious, since the problem has to be outlined before it can be fixed. Is it too much to expect the same type of courtesy for a passing review? After all, the way a patch was tested is significant. I am not saying that the person reviewing the patch is not to be trusted. I am, however, pointing out that there is merit behind documenting reviews. If a review is not properly documented it will be unofficially re-reviewed by the person who is responsible for staging the patch. Why? Simply because the person staging/adding the new code wants to ensure that nothing broke in the process. I am aware that the person staging usually checks that nothing is broken, but there is a major difference here. For example, looking back at the popcorn-js project and its review process requirements, you will notice that the project has core unit tests as well as other parser and plugin tests. Typically, after something has been staged, the core unit tests, including any main demos, would be run. The plugin and parser tests, however, would not. From a release engineer's perspective, proper review documentation saves a lot of time. Let me provide an example of good review documentation based on popcorn-js' requirements:

SR+, code looks good

No lint errors

Unit tests passing on (Vista) Firefox, Chrome and Safari

This patch affects the googleMap plugin. I verified that all unit tests/demos using this plugin work as expected on the browsers mentioned above.

Notice that I am not writing a whole paragraph. Point-form notes are all you really need to let the appropriate people know what you did and why the review passed. I hope you keep this in mind when doing a review.



Popcorn-js 0.3v Release

Popcorn-js is already at version 0.3. If you haven't been keeping up to date, feel free to read my 0.2v release blog. The major addition in this release is subtitle support. As it stands, popcorn-js can take TTXT, SRT, WebSRT, TTML, SSA, and SBV files, parse them, and spit out subtitles positioned right on top of the video!!! If you want more information read Steven's blog post. Of course we have made numerous other fixes and additions. For a complete list view the changelog.

Looking to get involved?

There are countless ways for people to get involved in the project, including idea generation, video generation, bug filing, documentation, promotion, and of course writing code. If you want to get involved, here is a list of links to get you started:

If you want to use popcornjs and you are having problems feel free to contact us. You can comment on this blog, file a ticket on Lighthouse, send me an email, or come on IRC. WE WILL HELP YOU 🙂

Popcorn-js in Use

If you have yet to realize popcornjs' potential, take a second to look at these two sites that use popcorn-js. The first is the annotation of the 2011 State of the Union, brought to you by PBS and the popcornjs team. The second shows off Jonathan McIntosh's Donald Duck remix, originally showcased at the Open Video Conference in New York City. Before the making of said page, people could only watch the video. Now you can see all of the different components that had to be mixed together in order to make it.


Sync Server

I recently set up my own sync server. It is one of the requirements for a project I am currently working on. All of the information I needed was on two separate Mozilla wiki pages: sync setup and user setup. After spending some time in the #sync IRC channel, I finally got it working. In order to configure a server on Fedora you will need PHP with the mbstring extension, MySQL, Apache, Mercurial, and captcha.

Setting up the sync server:

– Get the latest server from Mozilla. You can save this directory anywhere on the hard-drive.

 hg clone

– Edit the Apache config file found under /etc/httpd/conf/httpd.conf

Append these two lines:

Alias /1.0 <full path to the dir you just saved>/sync-server/1.0/index.php

– Copy 1.0/default_constants.php.dist to 1.0/default_constants.php

Open this file and change the following parameters:

 define('WEAVE_AUTH_ENGINE', 'mysql');
 define('WEAVE_MYSQL_AUTH_HOST', '<db host>');
 define('WEAVE_MYSQL_AUTH_DB', '<db name>');
 define('WEAVE_MYSQL_AUTH_USER', '<db username>');
 define('WEAVE_MYSQL_AUTH_PASS', '<db password>');

Note that you have to create the database and the above user. If you have never set up MySQL this blog may help.

– Make a database and name it the same as above
– Make a user and grant them privileges on that database

– Create two tables, wbo and collections, using the following script:

 CREATE TABLE `collections` (
 `userid` int(11) NOT NULL,
 `collectionid` smallint(6) NOT NULL,
 `name` varchar(32) NOT NULL,
 PRIMARY KEY  (`userid`,`collectionid`),
 KEY `nameindex` (`userid`,`name`)
 );

 CREATE TABLE `wbo` (
 `username` int(11) NOT NULL,
 `collection` smallint(6) NOT NULL default '0',
 `id` varbinary(64) NOT NULL default '',
 `parentid` varbinary(64) default NULL,
 `predecessorid` varbinary(64) default NULL,
 `sortindex` int(11) default NULL,
 `modified` bigint(20) default NULL,
 `payload` longtext,
 `payload_size` int(11) default NULL,
 `ttl` int(11) default '2100000000',
 PRIMARY KEY  (`username`,`collection`,`id`),
 KEY `parentindex` (`username`,`collection`,`parentid`),
 KEY `modified` (`username`,`collection`,`modified`),
 KEY `weightindex` (`username`,`collection`,`sortindex`),
 KEY `predecessorindex` (`username`,`collection`,`predecessorid`),
 KEY `size_index` (`username`,`payload_size`)
 );

This table is used by the user server:
 CREATE TABLE `users` (
  id int(11) NOT NULL PRIMARY KEY auto_increment,
  username varbinary(32) NOT NULL,
  password_hash varbinary(128) default NULL,
  email varbinary(64) default NULL,
  status tinyint(4) default '1',
  alert text,
  reset varchar(32),
  reset_expiration datetime )

insert into users (username, password_hash, status) values ('username', md5('password'), 1);

– Ensure that the following constant is listed in the 1.0/default_constants.php file:

 define('WEAVE_PAYLOAD_MAX_SIZE', '');

Setting up the user server:

– Get the latest server from Mozilla. You can save these anywhere on the hard-drive.

hg clone

– Edit the Apache config file found under /etc/httpd/conf/httpd.conf

Append these two lines:

Alias /user/1.0 <full path to services/reg-server directory>/reg-server/1.0/index.php
Alias /user/1 <full path to services/reg-server directory>/reg-server/1.0/index.php

– Copy 1.0/weave_user_constants.php.dist in the new directory to 1.0/weave_user_constants.php

Open this file and change the following parameters:

define('WEAVE_AUTH_ENGINE', 'mysql');
define('WEAVE_MYSQL_AUTH_HOST', '<db host>');
define('WEAVE_MYSQL_AUTH_DB', '<db name>');
define('WEAVE_MYSQL_AUTH_USER', '<db username>');
define('WEAVE_MYSQL_AUTH_PASS', '<db password>');

– To set up captcha you will need to get a public key and a private key from the captcha provider.

– Add an alias for the captcha page (this one also goes in the Apache config):

Alias /misc/1.0/captcha_html /reg-server/1.0/captcha.php

Once you have completed this setup you need to set up your sync profile. To do this, go to the Tools => Sync menu in Firefox or download the Sync add-on.
You will need to set up Sync to use your own server. This tutorial will guide you through the setup; however, it uses the Mozilla server.
