Category Archives: mashups

GamePro burns developers on $5000 programming contest

[UPDATE 12/17/09] I was contacted by phone by GamePro President Marci Hughes on December 4th. She explained to me that there was a slip-up within the company: it was believed there had been no submissions to the contest. She was very polite and apologetic and assured me the contest entries would still be judged. I told her I would be happy to update this post if GamePro could clarify the situation on their developer blog. As of today, they have done just that. I understand that communication mistakes happen within companies, and I’m happy to see GamePro correct the situation.

I was very disappointed today to learn that GamePro has canceled its API programming contest after the entry deadline. I had submitted an entry to the contest, and since the judging results were due, I reached out to the API guys via Twitter. Here’s what I learned:

John - We've discontinued the API contest - there wasn't enough interest.

Direct message via Twitter

I can’t be any clearer when I say this: I and every other developer who put time and energy into creating an entry for your contest were very interested.

I think GamePro should be reminded of the first rule of Open Space:

“Whoever shows up is the right group”

This is a fabulous example of an API vendor doing it wrong.

I love making mashups. Experimenting with APIs is my favorite thing to do as a programmer. They’re like geek Lego.  One of the most exciting parts of Mashup Camp is the speed geeking contest, where developers pitch their mashup to a couple dozen small groups in hopes of winning recognition and some great prizes. The speed geeking contest is something I look forward to participating in every time, and over the years I’ve done well getting votes and winning prizes. I’ve won awards from IBM, Thomson-Reuters, Dapper, and others. In Dublin I even managed to win the Best Mashup grand prize. A couple times I’ve come prepared with a mashup, but as my friend Andrew Bidochko can tell you, half of the fun is spending a sleepless night hacking away.

For API providers, sponsoring a mashup contest is a great way to stir up interest in their services. Not only do contests attract developers to their APIs, they also help vendors discover valuable use cases for their data. The kind of creative use cases that clever mashup developers uncover can be invaluable to vendors: even simple mashups can evolve into new revenue models and uncover vertical niches the vendor never thought of. I would think the return on investment for running a contest, prizes included, would be high. If nothing else, you’ll get bug reports, feedback, and dozens to hundreds of developer-hours of testing on your API.

For the developer, mashup contests provide a great incentive to try out an API. If I had a choice, the only coding I would do would be mashups. But since I was laid off in September and am freelancing once again, time for the fun stuff is scarce. That’s why I was excited to read about the API contest on the best mashup news site in the world, Programmable Web. Programmable Web has a special page for mashup contests, and I perked up a bit when I noticed GamePro was offering $5000 for theirs.

GamePro contest on Programmable Web. The link is a 404 now.

Now for the lowly freelancer, five grand is a nice chunk of change. I figured there would be a decent amount of competition in this contest, but I already had a good idea for it and thought I’d take a stab. So, with some great help from my wife Liz and our good friend Emily with the artwork, over the course of several weeks I created Pew Pew Zap!

Pew Pew Zap! is a devilishly cute and simple mashup using the GamePro API. It’s basically a ‘hot or not’ style site where the user is presented two games and votes on their favorite. The games are organized by platform and genre, the end goal being that the human-powered bubble sort will eventually reveal the best game for each platform and genre as indicated by the leaderboards.
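The voting model can be sketched roughly like this. This is a hypothetical illustration of the idea, not the actual Pew Pew Zap! code, and the function and vote structure are my own invention:

```php
<?php
// Hypothetical sketch, not the actual Pew Pew Zap! code: tally pairwise
// "hot or not" votes into per-game win counts, then sort descending to
// get a leaderboard for a platform/genre bucket.
function tallyVotes(array $votes)
{
    $wins = array();
    foreach ($votes as $vote) {
        // each $vote is array('winner' => $gameId, 'loser' => $gameId)
        if (!isset($wins[$vote['winner']])) {
            $wins[$vote['winner']] = 0;
        }
        if (!isset($wins[$vote['loser']])) {
            $wins[$vote['loser']] = 0;
        }
        $wins[$vote['winner']]++;
    }
    arsort($wins); // most wins first
    return $wins;
}
```

Given enough votes, the sort converges on a favorite per bucket, which is the human-powered bubble sort in a nutshell.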

This site is by no means a great engineering feat, but I think it’s a clever use of the GamePro API. Additionally, I wanted to test this interaction model, as I had previously spent some time on a similar but as-yet-unreleased mashup for Etsy. My hypothesis was that the average page views per visitor would be high due to the simple nature of the site. So far, my stats show that to be the case.

Working with the GamePro API was straightforward, although I did run into some errors with the service. The guys at GamePro were helpful and responsive on the forum, and even answered some questions I asked via Twitter direct messages.

So a couple of days before the contest deadline I submitted my mashup to GamePro, and as the now-nonexistent contest rules page stated, I waited 30 days to hear the results. Suspiciously, a day or so after the November 1st entry deadline, GamePro removed all the information about the contest from their site. Here’s Google’s cached version of the contest announcement. Neither the GamePro Terms of Service page nor its cached version contains the contest rules any longer. In fact, the only mention I can find of the contest is this forum post. I did not contact GamePro about any of this until today, trusting that there would be no issue. I guess that’s a lesson learned.

Regardless, I’m glad I created the mashup. It ended up winning Programmable Web’s “mashup of the day” on November 19th. The actions by GamePro will not discourage me from entering programming contests in the future. I have to wonder, though, how many other developers got let down on this one. I hope the folks participating in GamePro’s comment drawing don’t get burned, too. I’m not bitter; I got to spend some time doing what I love, and was able to submit some bug reports to help GamePro improve their API. I just don’t think it’s right for them to dangle a carrot.


Reading Radar featured on Boing Boing

A guest blogger, Connie Choe, has posted a profile of Reading Radar on my all-time favorite blog Boing Boing. I can’t believe that this simple little mashup has received so much attention. Would anyone be interested in a video tutorial or something to learn how to build mashup sites like this?

Liz’s Photography Site is Live

Liz has been asking me for some time now for a website to show off her photography. She’s been quite the shutterbug these last several years, consuming gigabytes upon gigabytes of disk space. I’ve had a lot of fun watching her try new things and watching her confidence build as what was once a hobby has become much more. She has taken quite an interest in photographing local musicians, made some good contacts, and has begun selling prints. I feel really grateful that we have such an awesome time machine to look back on past memories in all of their splendor.

Elizabeth Herren Photography

I finally sat down this evening and put together a simple site. It pulls in an album from her “professional” Flickr account (as opposed to her personal one), so she’ll be able to manage the slideshow without needing to edit the site at all.

Technically you could call this site a mashup. It’s all client side, using some jQuery plugins to access her Flickr feed and the slimbox2 plugin to power the lightbox effect.

Mashing up the New York Times Best Sellers:

I’m a fan of APIs, web services, and mashups, and it’s no secret. One of my New Year’s resolutions is to create and publish more mashups, instead of simply lengthening my ever-growing ideas.txt file. A couple of weeks ago I relaunched, with little fanfare, DRM News, a domain I’ve owned for several years now. It’s simply an RSS aggregator pulling from several sources revolving around digital rights management. It was trivial to slap together, to the point where I’d hardly consider it programming. But hey, at least I deployed something.

When I saw the Reddit post about the New York Times Best Seller API, I thought it would be a good opportunity to do a mashup for popular books. My hope was to create a site that could run on “auto-pilot” and maybe even send me an Amazon Associates check every now and then. I designed the site to use extensive caching of the NYT and Amazon APIs to minimize remote calls, while updating the data often enough to keep the information fresh. A few nights of hacking later and we have:

Reading Radar – From the New York Times Bestseller Lists

The NYT API was simple enough to use. The REST API offers three response formats: XML, JSON, or serialized PHP. I did find a bug in the API, and was very pleased with how responsive the NYT API team was in resolving the problem. Kudos!
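Fetching a list in the serialized PHP format looks something like this. The endpoint path and the extension for the serialized PHP renderer are my assumptions from memory of the docs of the era, not code lifted from Reading Radar:

```php
<?php
// Hedged sketch: build a request URL for a best-seller list in the
// serialized PHP format. The '/svc/books/v2/lists/' path and '.sphp'
// extension are assumptions, not Reading Radar's actual code.
function nytListUrl($list, $apiKey)
{
    return 'http://api.nytimes.com/svc/books/v2/lists/'
         . rawurlencode($list) . '.sphp?api-key=' . rawurlencode($apiKey);
}

// One unserialize() call turns the whole response into a PHP array:
// $data = unserialize(file_get_contents(nytListUrl('hardcover-fiction', $key)));
```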

The guts of the mashup are simple. I’m using YUI for the layout and initial CSS, and jQuery for some visual effects on the list pages. On the server side, I’m using my current favorite thing ever, the Maintainable Framework. I was made aware of the framework by Mike Naberezny, one of its two main authors. Mike and I are ex-Zenders, and Mike was responsible for much of the early code in the Zend Framework. The Maintainable Framework is very Rails-like, and because I’m familiar with some of the conventions in Rails, getting going with Maintainable was a cinch. Mike’s documentation is well done, and I’m looking forward to using this framework for all things PHP for the foreseeable future; I hope to help out with bugs and maybe even some code.

Reading Radar is simple enough that I decided to forgo a database and just use a file-based cache, powered by the Zend_Cache component of the Zend Framework. I’m using two caches: one that never expires, and one that expires every few hours so that my data stays fresh. I’m caching just a few API calls: the list of lists from the NYT API, which powers the left navigation; the actual lists themselves, which also provide book data; and the individual book histories. The NYT API offers a few other bits of information I’m not using, such as links to first chapters and editorial reviews. The reason I don’t is that when I spot-checked several books, most of them did not have these links. My hope is that as the API matures, more of the data will find its way in.
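The never-expires/expires split can be illustrated with a minimal file cache. This is a plain-PHP stand-in for the Zend_Cache File backend, not the site’s actual code; a null lifetime here mimics Zend_Cache’s “never expire” setting:

```php
<?php
// Minimal stand-in for a Zend_Cache-style file cache (not ZF code).
// A $lifetime of null means the entry never expires.
function cacheLoad($dir, $id, $lifetime)
{
    $file = $dir . '/' . md5($id) . '.cache';
    if (!file_exists($file)) {
        return false;
    }
    if ($lifetime !== null && time() - filemtime($file) > $lifetime) {
        return false; // stale entry in the expiring cache
    }
    return unserialize(file_get_contents($file));
}

function cacheSave($dir, $id, $data)
{
    file_put_contents($dir . '/' . md5($id) . '.cache', serialize($data));
}
```

Slow-changing data like the list of lists might live in the non-expiring cache, while the lists themselves go through the expiring one.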

To pull the Amazon information, I turn to the easy-to-use Zend_Service_Amazon component from the Zend Framework. It provides the ratings data, reviews, related products, images, number of pages, and so forth. Using the ISBNs from the NYT API, I’m able to query for the Amazon products and spew those gratuitous affiliate links everywhere using the Amazon ASINs.
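The affiliate links themselves are simple to build once you have an ASIN. A sketch follows; the associate tag is a made-up placeholder, and the Zend_Service_Amazon usage in the comment is how I understand that component works, so treat it as an assumption rather than the site’s actual code:

```php
<?php
// Sketch: turn an ASIN into an Amazon affiliate link. The associate
// tag 'example-20' is a placeholder, not a real account.
function affiliateLink($asin, $associateTag)
{
    return 'http://www.amazon.com/dp/' . rawurlencode($asin)
         . '?tag=' . rawurlencode($associateTag);
}

// With Zend_Service_Amazon (assumed usage), the ASIN comes back on the
// item returned from a lookup, e.g.:
//   $amazon = new Zend_Service_Amazon($apiKey);
//   $item   = $amazon->itemLookup($isbn);
//   echo affiliateLink($item->ASIN, 'example-20');
```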

So the end result, ideally, is an automatically up-to-date site that I’ll never have to touch, is relevant to the search engines, and will generate passive income. Realistically, it was a fun way to spend a few evenings doing what I love: making mashups. The site is far from perfect, and I’m sure there will be bugs to squash.

I’d love to hear opinions and ways to make this little site better. I know my designer buddies could offer suggestions, and I’d be interested to hear from the affiliate marketing gurus about better ways to integrate the Amazon links. My real question is: how would you drive traffic to something like this, and would it even be worthwhile?

Jimmy Palmer and I have been thinking of some interesting ideas to extend sites like Reading Radar. More on that as it develops.

Mashup Sighting

Via the Hype Machine blog, I just found one of my mashups, Gblinker, mentioned in a textbook called Business Driven Technology. Chapter 14 has a section on mashups; here’s a photo of the page. Judging by the other entries on the page, it looks like this section was lifted from the speed geeking page from Mashup Camp III in Boston.

The idea for Gblinker was simple. At some of the conferences I attended, Google handed out some fun blinky LED pins as swag. I thought it would be a cool idea to rig one up to a computer and have it blink whenever I received a Gmail message. Being lazy, I decided to reuse what I could find instead of writing a program from scratch. I knew there was a Gmail widget for the Yahoo! Widget Engine (formerly Konfabulator), so I started there. In the documentation, I discovered that Yahoo! widgets could make COM calls. What luck! So I wrote a simple DLL in C# to flip the serial port’s RTS bit on and off, and modified the Gmail widget to call the DLL. That’s all there was to it. The fun hack ended up winning 5th or 6th place at Mashup Camp in Boston, and as far as I know, was the first hardware mashup from any of the camps. The prize: a copy of Visual Studio 🙂

It’s always fun to see things you’ve done pop up out of nowhere, especially in dead-tree form. My favorite surprise was when my TagCloud prototype ended up in the book Yahoo Hacks. Coincidentally, the free sample chapter (PDF link) just happens to be the one on TagCloud.

A couple new APIs

Work’s been busy, so I haven’t had a chance to do a lot of mashup stuff lately. Via Chad, here are some APIs I just found out about that could make for some interesting mashing:

NPR API – “over 250,000 stories that are grouped into more than 5,000 different aggregations.”

Crunchbase API

Yahoo Pipes adds support for serialized PHP

A few days ago I sent an email to Chad Dickerson, whom I met at Yahoo! and had a chance to hang out with at Mashup Camp in Dublin.


From what I can tell, if you create a Pipe and add additional fields (Shortcuts, Term Extraction), the only way to get to them in an API-like way is to use the JSON renderer. The RSS renderer removes those extra fields to follow the RSS spec. PHP supports JSON decoding, but you need a PEAR library or a quite recent version of PHP. If Yahoo supported serialized PHP with Pipes like you do with the other common APIs, it would be a lot easier for folks on shared hosting to work with Pipe data on the server side. I imagine with the new badge stuff you released that there’s a push to keep things client side, but there’s a huge advantage to rendering server-side to keep things nice and spiderable.

Short Version:

Expose Pipe results as serialized PHP. Pretty please.

Chad sends this along to the Pipes team, and less than three days later:
Pipes Blog » Blog Archive » New Yahoo Pipes PHP serialized output renderer


John Herren and Chad Dickerson

Two points to be made: first, I’m damn impressed that one of the largest sites on the ’net would roll out a feature request from an outside developer in less than three days. Second, developers should never resist the urge to ask for help from an API provider. If a company is taking the time to support an API, chances are very good that they will listen to developers and react. I can personally say I’ve gotten immediate results from Technorati, Dapper, and now Yahoo!. So blow off the idea that a big website would never listen to little ol’ developer you. With that negative attitude, it’s guaranteed you’ll never get it. Ask, believe, receive, right?

So props to Chad, Jonathan Trevor, Paul Donnelly, and the rest of the Pipes team!

The Details

I’m a big fan of Yahoo Pipes. It’s an incredibly useful tool for putting together quick aggregators and filters for mashups. To integrate a Pipe on a webpage, you have a few options: you can go the cut-and-paste route and use a Badge, which works client side, or you can roll your own code to integrate a Pipe.

Put this in your pipe…

After you run a Pipe, you’re given a list of output formats. Copy the link location of one of these to get the URL of the output, then tweak the parameters.

Until yesterday, the output formats useful for mashups were JSON and RSS. JSON is great for client-side mashups, but as you know, search engines will not index client-side content, so you lose any SEO love you might get. RSS is easy to consume server side, but Pipes will normalize the output to conform to the RSS spec. That means if you’re adding term extraction or Shortcuts or any other metadata to your pipe, you’ll lose it in the RSS output unless you stuff that data into one of the RSS fields (title, description, etc.). So that leaves us with hacking JSON on the server side. The JSON output format retains all that sweet metadata. In PHP, the best options are the JSON PEAR module or, if you’re rocking 5.2 and above, the handy json_decode() function.

Now that Yahoo supports serialized PHP, using Pipe output just got a lot easier. I made a Pipe that adds Term Extraction info to any RSS feed. Basically, the Pipe automatically tags all the posts in the feed. To retrieve the tags in your own script, all it takes is:


// the Pipe output URL (serialized PHP renderer) and the feed to tag
$pipeURL = '';
$feedURL = '';

$tags = array();
$response = unserialize(file_get_contents($pipeURL . rawurlencode($feedURL)));
foreach ($response['value']['items'] as $item) {
    foreach ($item['tags'] as $itemTags) {
        $tags[] = $itemTags['content'];
    }
}
At this point $tags is an array of all of the terms from the feed. Now, what could be done with that data?

Serialized PHP or JSON?

If you have json_decode() available in your PHP install, is there any advantage to using JSON over serialized PHP? Let’s find out.

File Size

Saving the output directly to disk gave me

JSON – 51192 bytes
Serialized PHP – 56885 bytes

Because of syntax and PHP’s type specification, serialized PHP is about 11% larger than JSON. This ratio will increase as the number of elements in your output increases.

Decoding Speed

How long does it take to slurp these formats into PHP variables? My tests decode each format 100 times.

JSON
real    0m0.269s
user    0m0.264s
sys     0m0.004s

Serialized PHP
real    0m0.088s
user    0m0.088s
sys     0m0.000s

It’s clear that unserializing PHP is faster than decoding JSON, so it’s the better choice performance-wise despite being slightly bigger over the wire.
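For anyone wanting to reproduce the comparison, here’s a self-contained version of the test. The sample data is made up, and the timings (and even which format wins) will vary with your PHP version and hardware:

```php
<?php
// Decode the same structure 100 times in each format and time it.
$data = array('value' => array('items' => array_fill(0, 50,
    array('title' => 'example', 'tags' => array(array('content' => 'tag'))))));
$json = json_encode($data);
$sphp = serialize($data);

$start = microtime(true);
for ($i = 0; $i < 100; $i++) {
    json_decode($json, true);
}
$jsonTime = microtime(true) - $start;

$start = microtime(true);
for ($i = 0; $i < 100; $i++) {
    unserialize($sphp);
}
$sphpTime = microtime(true) - $start;

printf("JSON: %.4fs  Serialized PHP: %.4fs\n", $jsonTime, $sphpTime);
```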