jQuery gotcha with ‘class’

Not jQuery specific, but ran into a cross browser issue today with the following snippet:

$('<span/>',{class: 'check', text: 'Checking'}).insertAfter('#checker');

This works fine in Firefox, but in Safari it throws a parse error. Evidently ‘class’ is a reserved word in JavaScript, and WebKit’s parser refuses to accept it as an unquoted object key while Firefox lets it slide. The solution is to quote it:

$('<span/>',{'class': 'check', text: 'Checking'}).insertAfter('#checker');

The documentation even shows an example of this exact thing, but doesn’t point it out explicitly.


Beer and Loafing in Austin: finding the free stuff at SXSW

Once again I find myself not at SXSW, but participating vicariously through the tweets and posts of friends and colleagues. However, I can’t help but feel I’ve made my own little contribution to the levity through my recent work with the guys at SCHED*. I’ve known SCHED*’s creator, Taylor McKnight, for several years now, and I’m so happy to finally get the chance to work with him on a project. And while sure, I’m a little biased, SCHED*’s SXSW online schedule is the best one you’ll find on the web.

Not only can you browse all the official SXSW activities and parties by any facet imaginable, but SCHED* also brings together all the unofficial events surrounding the conference/festival/whateverthisthingis.

SCHED* makes it easy to see what events are most popular, as well as hook up with your Twitter, Facebook, and LinkedIn connections to see what they’re excited about. You can also use the search and tagging functions to find the hidden gems such as:

Free Food at SXSW

Free Alcohol at SXSW

or..

Every damn free thing in Austin!


GamePro burns developers on $5000 programming contest

[UPDATE 12/17/09] I was contacted by GamePro President Marci Hughes by phone on December 4th. She explained to me that there was a slip-up within the company: it was believed there had been no submissions to the contest. She was very polite and apologetic and assured me the contest entries would still be judged. I told her I would be happy to update this post if GamePro could clarify the situation on their developer blog. As of today, they have done just that. I understand that communication mistakes happen within companies, and I’m happy to see GamePro correct the situation.

I was very disappointed today to learn that GamePro.com has canceled its API programming contest after the entry deadline. I submitted an entry to the contest, and since the judging results were due, I reached out to the API guys via Twitter. Here’s what I learned:

John - We've discontinued the API contest - there wasn't enough interest.

Direct message via Twitter

I can’t say this any more clearly: every developer who put time and energy into creating an entry for your contest, myself included, was very interested.

I think GamePro should be reminded of  the first rule of Open Space:

“Whoever shows up is the right group”

This is a fabulous example of an API vendor doing it wrong.

I love making mashups. Experimenting with APIs is my favorite thing to do as a programmer. They’re like geek Lego.  One of the most exciting parts of Mashup Camp is the speed geeking contest, where developers pitch their mashup to a couple dozen small groups in hopes of winning recognition and some great prizes. The speed geeking contest is something I look forward to participating in every time, and over the years I’ve done well getting votes and winning prizes. I’ve won awards from IBM, Thomson-Reuters, Dapper, and others. In Dublin I even managed to win the Best Mashup grand prize. A couple times I’ve come prepared with a mashup, but as my friend Andrew Bidochko can tell you, half of the fun is spending a sleepless night hacking away.

For API providers, sponsoring a mashup contest is a great way to stir up interest in their services. Not only do contests attract developers to their APIs, they also help vendors discover valuable use cases for their data. The kind of creative use cases that clever mashup developers uncover can be invaluable to vendors. Even simple mashups can evolve into new revenue models and uncover vertical niches the vendor hadn’t thought of. I would think the return on investment for running a contest, prizes included, would be high. If nothing else, you’ll get bug reports, feedback, and dozens to hundreds of developer hours spent testing your API.

For the developer, mashup contests provide a great incentive to try out an API. If I had a choice, the only coding I would do would be mashups. But since I was laid off in September and freelancing once again, time for the fun stuff is scarce. That’s why I was excited to read about the GamePro.com API contest on the best mashup news site in the world, programmableweb.com. Programmable Web has a special page for mashup contests, and I perked up a bit when I noticed GamePro was offering $5000 for theirs.

GamePro contest on Programmable Web. The link is a 404 now.

Now for the lowly freelancer, five grand is a nice chunk of change. I figured there would be a decent amount of competition in this contest, but I already had a good idea for it and thought I’d take a stab. So, with some great help from my wife Liz and our good friend Emily with the artwork, over the course of several weeks I created Pew Pew Zap!

Pew Pew Zap! is a devilishly cute and simple mashup using the GamePro API. It’s basically a ‘hot or not’-style site where the user is presented with two games and votes for their favorite. The games are organized by platform and genre, and the end goal is that the human-powered bubble sort will eventually reveal the best game for each platform and genre, as shown on the leaderboards.
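For the curious, the mechanic behind it is tiny. Here’s a rough sketch of the idea in PHP, not the actual Pew Pew Zap! code; the table and column names are invented for illustration, and it assumes a MySQL-backed PDO connection:

<?php
// Rough sketch of the Pew Pew Zap! voting idea, not the real code.
// Assumes a MySQL table 'games' with id, title, platform, genre, and wins columns.

// Pick two random games from the same platform and genre to pit against each other.
function pick_matchup(PDO $db, $platform, $genre)
{
    $stmt = $db->prepare(
        'SELECT id, title FROM games
          WHERE platform = ? AND genre = ?
          ORDER BY RAND() LIMIT 2'
    );
    $stmt->execute(array($platform, $genre));
    return $stmt->fetchAll(PDO::FETCH_ASSOC);
}

// Record a vote for the winner. The leaderboard is just the games ordered by
// wins, so over many votes the "best" game bubbles to the top.
function record_vote(PDO $db, $winnerId)
{
    $stmt = $db->prepare('UPDATE games SET wins = wins + 1 WHERE id = ?');
    $stmt->execute(array($winnerId));
}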

This site is by no means a great engineering feat, but I think it’s a clever use of the GamePro API. Additionally, I wanted to test this interaction model, as I had previously spent some time on a similar but as-yet-unreleased mashup for Etsy. My hypothesis was that the average page views per visitor would be high due to the simple nature of the site. So far, my stats show that to be the case.

Working with the GamePro API was straightforward, although I did run into some errors with the service. The guys at GamePro were helpful and responsive on the forum, and even answered some questions I asked via Twitter direct messages.

So a couple days before the contest deadline I submitted my mashup to GamePro, and as the now non-existent contest rules page stated, I waited 30 days to hear the results. Suspiciously, a day or so after the November 1st entry deadline, GamePro removed all the information about the contest from their site. Here’s Google’s cached version of the contest announcement. Neither the GamePro Terms of Service page nor its cached version contains the contest rules any longer. In fact, the only mention I can find of the contest is this forum post. I did not contact GamePro about any of this until today, trusting that there would be no issue. I guess that’s a lesson learned.

Regardless, I’m glad I created the mashup. It ended up winning Programmable Web’s “mashup of the day” on November 19th. The actions by GamePro will not discourage me from entering programming contests in the future, but I have to wonder how many other developers got let down on this one. I hope the folks participating in GamePro’s comment drawing don’t get burned, too. I’m not bitter; I got to spend some time doing what I love, and was able to submit some bug reports to help GamePro improve their API. I just don’t think it’s right for them to dangle a carrot.


Reading Radar featured on Boing Boing

A guest blogger, Connie Choe, has posted a profile of Reading Radar on my all-time favorite blog Boing Boing. I can’t believe that this simple little mashup has received so much attention. Would anyone be interested in a video tutorial or something to learn how to build mashup sites like this?


Liz’s Photography Site is Live

Liz has been asking me for some time now for a website to show off her photography. She’s been quite the shutterbug these last several years, consuming gigabytes upon gigabytes of disk space. I’ve had a lot of fun watching her try new things and watching her confidence build as what was once a hobby has become much more. She has taken quite an interest in photographing local musicians, made some good contacts, and has begun selling prints. I feel really grateful to her that we have such an awesome time machine to look back on past memories in all of their splendor.

elhphotos.com - Elizabeth Herren Photography

I finally sat down this evening and put together a simple site. Her site, elhphotos.com, pulls in an album from her “professional” Flickr account (as opposed to her personal one), so she’ll be able to manage the slideshow without needing to edit the site at all.

Technically you could call this site a mashup. It’s all client side, using a few jQuery plugins to access her Flickr feed and the Slimbox 2 plugin to power the lightbox effect.


Twitter and the Case for Web Hooks

My, my, we do love our Twitter. Tiny messages in a list. A global chatroom really, but just the messages you want to see. What an elegant system.

  • But it’s hard to track conversations.
  • But 140 characters isn’t enough for some dude who has trouble making his thoughts a wee bit more concise and is accustomed to using long, redundant, extra words.
  • But it doesn’t automatically link up my stock symbols in my tweets.

But, but, but.

Enter the API

Everyone who’s used Twitter for a significant amount of time has concocted a list of features that Twitter lacks and desperately needs. Fortunately, Twitter has given us a nice API to let geeks fill in the gaps somewhat. Without Googling around to find the most recent stat, I’ve read that the majority of tweets are posted through the API, via mobile and desktop apps, rather than through the Twitter site. That’s a good indication.

Besides API-based clients (alternate software that does what the Twitter site does), there has been a surge in Twitter-powered bots. These bots read and write to Twitter with some kind of logic in between. I’ve seen autonomous chat bots and autoresponders.

Who Are My Twitter Followers?

A pattern I’ve seen a lot lately with Twitterized apps is “all you have to do to join my app is follow @sometwittername”. @sometwittername then uses the API to find his followers and sends each of them tweets or direct messages with special links just for that user. It’s an MLM’er’s dream. The top essential tactic of these web marketing guys is to BUILD YOUR LIST, and there’s no friendlier way than Twitter, right?

If you want to see an example of this pattern in action, here’s one, which I reluctantly link to for demonstration purposes only.

Yes, apparently we have “Twitter marketing gurus” now.

So now the developer needs to know who his followers are, so he can spam or provide $ValueAddedService. Currently, the Twitter API gives developers two ways of finding out who is following them. The first requires polling, which means you have to call the API at regular intervals.

Polling sucks. If you request too often, it’s wasteful. If you don’t request often enough, you miss the opportunity to do cool stuff with new followers in a timely fashion.

A better way to detect new followers is to tap into your email stream. When Twitter emails you with a new follower notification, it tucks away some extra headers into your message. This is documented for some reason in the FAQ. The headers look something like this:

  • X-TwitterEmailType - will be ‘is_following’ or ‘direct_message’
  • X-TwitterCreatedAt - ex: Thu Aug 07 15:17:15 -0700 2008
  • X-TwitterSenderScreenName - ex: ‘bob’
  • X-TwitterSenderName - ex: ‘Bob Smith’
  • X-TwitterSenderID - ex: 12345
  • X-TwitterRecipientScreenName - ex: ‘john’
  • X-TwitterRecipientName - ex: ‘John Doe’
  • X-TwitterRecipientID - ex: 67890
  • X-TwitterDirectMessageID - ex: 2346346

So all you have to do to maintain an instantly up-to-date list of your followers is send your email notifications through a script (most any SMTP server can do this), check for messages with X-TwitterEmailType set to ‘is_following’, and grab the SenderID or SenderScreenName. You’re all set.
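If you want to see roughly what that script looks like, here’s a minimal sketch in PHP. It assumes the raw message is piped in on standard input (a .forward file or a procmail rule can do that) and it ignores details like header folding:

<?php
// Sketch: pipe Twitter's notification email into this script and pull out
// the new follower. Ignores header folding and other edge cases.

$raw = file_get_contents('php://stdin');

// Parse the header block into a name => value array.
$headers = array();
foreach (preg_split('/\r?\n/', $raw) as $line) {
    if (trim($line) === '') {
        break; // blank line marks the end of the headers
    }
    if (strpos($line, ':') !== false) {
        list($name, $value) = explode(':', $line, 2);
        $headers[trim($name)] = trim($value);
    }
}

// Only act on new-follower notifications.
if (isset($headers['X-TwitterEmailType'])
    && $headers['X-TwitterEmailType'] === 'is_following') {
    $id   = $headers['X-TwitterSenderID'];
    $name = $headers['X-TwitterSenderScreenName'];
    // Add them to your list, fire off a welcome DM, whatever you like.
    file_put_contents('followers.txt', "$id $name\n", FILE_APPEND);
}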

This is a wannabe web hook in action.

WTF is a Web Hook?

If this is the first time you’re hearing the term “web hook”, pay attention. It’s the next step in this whole read-write web thing.

In short, a web hook, or “HTTP callback”, allows you to subscribe to another website’s events with a URL. You tell a website, “Hey, when some event happens, send some information about it to this URL”. Why? It doesn’t matter; it’s up to the developer to decide what to do with that information.

An excellent example of a web hook is Paypal’s Instant Payment Notification. It works like this:

  1. You come to my store, fill up your cart, and hit the checkout button.
  2. My site sends you and your order info to Paypal, and you pay up.
  3. In my Paypal account, I’ve set up my IPN (web hook!) URL.
  4. Paypal POSTs the order information to the hook URL, letting me know that you’ve paid.
  5. I record the sale and do whatever business logic I need to do so my Oompa Loompas can ship your stuff to you.

There’s a little more to it, but that’s the basics. I can choose to do whatever I want with that data from Paypal.

The key part is step 5. I do whatever logic I need to do with my own code.

Web hooks are just APIs that know how to call other API endpoints. That’s it.
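On the receiving end, a hook handler is nothing exotic: it’s just a URL that accepts a POST and runs your own logic. Here’s a bare-bones sketch; the field names are invented for illustration, and this is not Paypal’s actual IPN protocol (which also requires validating the message back with Paypal):

<?php
// hook.php -- a bare-bones web hook receiver. The sending site POSTs here
// when an event happens; what you do with the data is entirely up to you.
// Field names are invented; this is not Paypal's actual IPN protocol.

if ($_SERVER['REQUEST_METHOD'] !== 'POST') {
    header('HTTP/1.1 405 Method Not Allowed');
    exit;
}

$event   = isset($_POST['event'])    ? $_POST['event']    : 'unknown';
$orderId = isset($_POST['order_id']) ? $_POST['order_id'] : 'n/a';

// Step 5 from the list above: run whatever business logic you need.
error_log(date('c') . " received hook event '$event' for order $orderId\n",
          3, '/tmp/hooks.log');

// Acknowledge receipt so the sender knows the hook succeeded.
header('HTTP/1.1 200 OK');
echo 'OK';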

Now back to Twitter. Why should you have to hook into your email stream to get those “is_following” notifications? In the same way Twitter sends you an email, it could just as easily send an HTTP POST with that same data to a script you control. Wouldn’t it be much simpler to specify a callback URL in your Twitter settings?

Your site would become integrated with Twitter itself. That word, integrated, is key. Instead of a basic request-response API exchange, our websites are now having a conversation.

I Understand Now! What Possibilities! Web Hooks Rock My Casbah!

Now imagine what would be possible if Twitter allowed hooks for all the different events that happen on the platform.

  • Twitter, ping this URL of mine when someone I follow tweets or DM’s or replies to me.
  • Twitter, ping a different URL of mine when someone unfollows me.
  • Twitter, ping yet another URL when someone I follow updates her profile. She’s cute, and I need to stay on top of those thumbnail changes.

Twitter, just ping my scripts when stuff happens.

Now Make Hooks Programmable

Maybe I’m not interested in every single hookable event. How cool would it be if I could get a list of all my subscribed events through an API call?

  • Me: Yo Twitter, what do I care about? What am I subscribed to?
  • Twitter: Sup dawg, I heard you like profile changes and unfollows, so I send twirps to your tweets so you can twit when I twat.

But instead of jive talk you’d get back XML or JSON with event names and your hook URLs.

Take it one step further and suppose you want your hook called only when certain people perform certain actions. Why not make setting and unsetting the hooks themselves programmable via the API?

Example: Call my URL http://myawesometwittersite.com/myhooks/profile/update when @cutetwitterchick updates her profile.

I could easily perform that logic in my own code, but telling Twitter to do it is that much cooler:

set_hook ( hook_name, hook_URL [, screen_name | ID | email ] )
unset_hook ( hook_name, hook_URL [, screen_name | ID | email ] )
list_hooks()
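None of this exists, of course, but if Twitter exposed those calls as REST endpoints, registering a hook from your own code might look something like this. The endpoint URL and parameter names are pure invention:

<?php
// Pure speculation: what set_hook might look like if Twitter exposed it as
// a REST endpoint. The URL and parameter names below are invented.

function twitter_set_hook($hookName, $hookUrl, $screenName = null)
{
    $params = array('hook_name' => $hookName, 'hook_url' => $hookUrl);
    if ($screenName !== null) {
        $params['screen_name'] = $screenName;
    }

    $ch = curl_init('https://twitter.com/hooks/set.json'); // hypothetical endpoint
    curl_setopt($ch, CURLOPT_POST, true);
    curl_setopt($ch, CURLOPT_POSTFIELDS, http_build_query($params));
    curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
    curl_setopt($ch, CURLOPT_USERPWD, 'myuser:mypassword'); // basic auth, 2009-style
    $response = curl_exec($ch);
    curl_close($ch);

    return json_decode($response, true);
}

// "Call my URL when @cutetwitterchick updates her profile."
twitter_set_hook('profile.update',
    'http://myawesometwittersite.com/myhooks/profile/update',
    'cutetwitterchick');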

The Hooked Web

Jump to the future when all of your favorite sites implement programmable hooks. The pipe dream, the holy grail, the end result is that you no longer even need Twitter, because it has become a protocol. Just like blogs happily send pingbacks, you can install a Twitter-speaking, open-source package on a Slicehost account that becomes your own personal Twitter. The protocol is extensible, so you can do things like:

  • display replied messages in threads as conversations
  • allow longer than 140 character messages
  • automatically hyperlink those stock symbols

And, unlike email, newsgroups,  and chatrooms, it doesn’t turn into a jacked-up, spammed to hell system, because you’re still only following what you want to follow. It’s a decentralized, pluggable architecture, and it integrates with any site using web hooks. At your service.

Take this platform one step further and visualize a user interface layer. You have a nice, decentralized activity stream: an open FriendFeed platform with all of your stuff. Maybe it’s organized by source, or grouped by service type, or however you want it organized. Messages can carry their own CSS styling, but only if you allow it. Messages can have interactive elements, but only if you allow them. And, holy crap, messages can come packaged with next actions! Twitter messages allow you to reply, calling a hook specified in the message. Flickr messages allow you to comment directly from this ‘StreamReader’ software. You subscribe to Mechanical Turk HITs, and receive a new HIT whenever you complete one.

One more step further. Not only do messages have next actions, but I can write plugins to attach next actions to any message, and I can do it conditionally. This would be like being able to write filters for Google Reader. For example, filter out all the messages from my group of PHP programming buddies when a message is about photography. I’m not interested, but those guys LOVE fucking cameras for some reason.

I’ve very conveniently ignored the implementation details of such an animal. Twitter could quickly become a push architecture with all of these hooks in place, and that turns into n-squared network traffic patterns in a hurry. Luckily we have some heavy brains working on XMPP and PubSub and the like, and maybe the end solution would be half push, half poll.

At this point it’s just really fun to imagine the possibilities of a read-write-read-write-read-write web.


Mashing up the New York Times Best Sellers: ReadingRadar.com

I’m a fan of APIs, Web Services, and Mashups, and it’s no secret. One of my New Year’s resolutions is to create and publish more mashups, instead of simply lengthening my ever-growing ideas.txt file. A couple of weeks ago I relaunched, with little fanfare, DRM News, a domain I’ve owned for several years now. It’s simply an RSS aggregator from several sources revolving around digital rights management. It was trivial to slap together, to the point where I’d hardly consider it programming. But hey, at least I deployed something.

When I saw the Reddit post about the New York Times Best Seller API, I thought it would be a good opportunity to do a mashup for popular books. My hope was to create a site that could run on “auto-pilot” and maybe even send me an Amazon Associates check every now and then. I designed the site to use extensive caching of the NYT and Amazon APIs to minimize remote calls, but to update the data often enough that the information stays fresh. A few nights of hacking later and we have:

Reading Radar – From the New York Times Bestseller Lists

The NYT API was simple enough to use. The REST API offers three response formats: XML, JSON, or serialized PHP. I did find a bug in the API, and was very pleased by how responsive the NYT API team was in resolving the problem. Kudos!

The guts of the mashup are simple. I’m using YUI for the layout and initial CSS, and jQuery for some visual effects on the list pages. On the server side, I’m using my current favorite thing ever, the Maintainable Framework. I was made aware of the framework by Mike Naberezny, one of its two main authors. Mike and I are ex-Zenders, and Mike was responsible for much of the early code in the Zend Framework. The Maintainable Framework is very Rails-like, and because I’m familiar with some of the conventions in Rails, getting going with Maintainable was a cinch. Mike’s documentation is well done, and I’m looking forward to using this framework for all things PHP for the foreseeable future, and hope to help out with bugs and maybe even some code.

Reading Radar is simple enough that I decided to forgo a database and just use a file-based cache, powered by the Zend_Cache component of the Zend Framework. I’m using two caches: one that never expires, and one that expires every few hours so the data stays fresh. I’m caching just a few API calls: the list of lists from the NYT API, which powers the left navigation; the actual lists themselves, which also provide the book data; and the individual book histories. The NYT API offers a few other bits of information I’m not using, such as links to first chapters and editorial reviews. The reason I don’t is that when I spot-checked several books, most of them did not have these links. My hope is that as the API matures, more of the data will find its way in.
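For the curious, wiring up a pair of Zend_Cache file caches like that only takes a few lines. Something along these lines; the paths, lifetimes, and the fetch_nyt_list_names() helper are illustrative, not the site’s actual code:

<?php
// Roughly how the two file-based caches are wired up. Paths and lifetimes
// are illustrative, not Reading Radar's actual settings. Assumes the
// Zend Framework is on the include_path.
require_once 'Zend/Cache.php';

// The "forever" cache: entries never expire on their own.
$foreverCache = Zend_Cache::factory('Core', 'File',
    array('lifetime' => null, 'automatic_serialization' => true),
    array('cache_dir' => '/tmp/cache/forever/'));

// The short-lived cache keeps list data reasonably fresh.
$freshCache = Zend_Cache::factory('Core', 'File',
    array('lifetime' => 6 * 3600, 'automatic_serialization' => true),
    array('cache_dir' => '/tmp/cache/fresh/'));

// Typical usage: try the cache first, hit the NYT API only on a miss.
$listNames = $freshCache->load('nyt_list_names');
if ($listNames === false) {
    $listNames = fetch_nyt_list_names(); // hypothetical wrapper around the NYT API
    $freshCache->save($listNames, 'nyt_list_names');
}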

To pull the Amazon information, I turn to the easy-to-use Zend_Service_Amazon component from the Zend Framework. It gives me the ratings data, reviews, related products, images, number of pages, and so forth. Using the ISBNs from the NYT API, I’m able to query for the Amazon products and spew those gratuitous affiliate links everywhere using the Amazon ASINs.
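The lookup itself is only a handful of lines with Zend_Service_Amazon. A rough sketch; the keys, response groups, and associate tag are placeholders rather than what Reading Radar actually uses:

<?php
// Rough sketch of the Amazon lookup: take an ISBN from the NYT list and pull
// back ratings, images, and related products. Keys, response groups, and the
// associate tag are placeholders.
require_once 'Zend/Service/Amazon.php';

$amazon = new Zend_Service_Amazon('MY-AWS-ACCESS-KEY', 'US', 'MY-AWS-SECRET-KEY');

$isbn = '0000000000'; // placeholder; on the real site this comes from the NYT API response

// Look the book up by ISBN instead of ASIN.
$item = $amazon->itemLookup($isbn, array(
    'IdType'        => 'ISBN',
    'SearchIndex'   => 'Books',
    'ResponseGroup' => 'Medium,Reviews,Similarities,Images',
));

// The gratuitous affiliate link is just the product URL plus the associate tag.
$link = 'http://www.amazon.com/dp/' . $item->ASIN . '/?tag=my-associate-tag-20';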

So the end result, ideally, is that I have an automatically up-to-date site that I’ll never have to touch, that is relevant to the search engines, and that will generate passive income. Realistically, it was a fun way to spend a few evenings doing what I love: making mashups. The site is far from perfect, and I’m sure there will be bugs to squash.

I’d love to hear opinions and ways to make this little site better. I know my designer buddies could offer suggestions, and I’d be interested to hear from the affiliate marketing gurus about better ways to integrate the Amazon links than I have. My real question is: how would you drive traffic to something like this, and would it even be worthwhile?

Jimmy Palmer and I have been thinking of some interesting ideas to extend sites like Reading Radar… more on that as it develops.

