The Land of Shattered Dreams

Welcome to the online revolution…

Maneuverable Web Architecture


One of my colleagues attended QCon London and saw a presentation by Michael Nygard on Maneuverable Web Architecture. A few of us got together and watched it, and it got us thinking…

Businesses need to be able to move quickly, and some of the established architectural patterns stop them doing this. The presentation contained lots of ideas but I am going to concentrate on one: sending an email to a customer.

The traditional approach is for the application sending the email to write to a database saying who the email is to go to, when to send it and the data to be included in the email. A batch job wakes up, scans the database for emails to send, builds them and then sends them on.

This approach works well, loads of people have done it, and it is pretty easy to implement. There are some problems though: changing things can be difficult, because you have to make sure that you don’t break any of the queued emails whenever you make a change. The solution isn’t very flexible either: it is good for sending emails, but if you want to do something else you will have to revisit your entire solution.

Is there a better way? Michael proposed this approach. When I first saw it I thought it was a bit crazy, but I had a go at implementing it and it has a lot going for it…

We first create a few components:

  • at service - this service calls a specified url at a given point in time
  • script engine - this service is passed a script and executes it
  • script factory - this service builds scripts for the script engine to execute

How does this let us send an email to a customer?

  • the client sending the email says to the script factory: “Give me an email sending script”
  • the script factory builds the script and passes the client a url that will execute the script
  • the client says to the at service: “Call this url now”
  • the at service calls the url, and the script engine executes the script
  • the script does everything required to send an email to the customer
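The steps above can be sketched in a few lines of Ruby. Everything here (the class names, the fake urls, the in-process “calls”) is my own invention to show the shape of the idea, not code from the talk — a real at service would call the url over HTTP at the scheduled time:

```ruby
# In-memory sketch: "urls" are just keys into a hash of stored scripts.
class ScriptEngine
  def initialize
    @scripts = {}
  end

  # Store a script and hand back the url that will execute it
  def register(script)
    url = "/scripts/#{@scripts.size + 1}"
    @scripts[url] = script
    url
  end

  def execute(url)
    @scripts.fetch(url).call
  end
end

class ScriptFactory
  def initialize(engine)
    @engine = engine
  end

  # Build an email-sending script and return the url that runs it
  def email_script(to, body)
    @engine.register(-> { "sent '#{body}' to #{to}" })
  end
end

class AtService
  # Call the given url at the given time (immediately, in this sketch)
  def at(_time, url, engine)
    engine.execute(url)
  end
end

engine  = ScriptEngine.new
factory = ScriptFactory.new(engine)
url     = factory.email_script("customer@example.com", "Your order has shipped")
result  = AtService.new.at(Time.now, url, engine)
# result => "sent 'Your order has shipped' to customer@example.com"
```

The client only ever sees the factory and the at service; what the script actually does is entirely the factory’s business, which is where the flexibility comes from.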

How is this better?

  • The at service just calls urls at a point in time, you can use it for anything
  • If you add new functionality you can just deploy a new script engine and script factory on a different url. All the existing scripts will still work; any new scripts will go to the new versions. If you want to add email tracking, change the script factory to include it and then all new scripts will have email tracking. All the old scripts will still work.
  • All clients are doing is asking for a script and scheduling it. You can write scripts to do anything you can think of…

I am glossing over some of the problems… The script engine is non-trivial, error handling seems complicated and it takes some explaining to developers.

If you’re interested, I had a go at implementing the approach; you can see the results on github. It is by no means production ready, but you can kind of see how it could work…

Adventures in OAuth2


I’ve not been able to blog for a while as a lot of things have been going on. In November I left Morrisons to take a Solutions Architect role at Laterooms and it has taken me a while to get settled in and to work out what’s going on.

One of the things I am looking at for Laterooms is how we use APIs to power our platform. We’ve developed RESTful APIs that are used by our mobile applications but we can do a whole lot more with our APIs. Long term, I’d like to build our entire platform on APIs using them as the base for our mobile apps, the websites and our integration with affiliates and hotel providers. We’ve got a long way to go, but I guess a journey of a thousand miles begins with a single step.

One of the key considerations we have when designing our API platform is controlling who has access to our data. We want to build a platform capable of allowing authorized consumers unlimited access to Laterooms’ data. Note authorized consumers: some data is public and we want to make it as easy as possible to access; some data is most definitely not public and access to it has to be limited.

To help us control access to our APIs I have been looking at various API management platforms and how they implement OAuth2 to provide authorized access to an API. The three I talk about in this post are from Apigee, Mashery and Mulesoft.

OAuth2

Before we start I suppose it is worth talking a bit about OAuth2. To quote Wikipedia:

OAuth is an open standard for authorization. OAuth provides a method for clients to access server resources on behalf of a resource owner such as a different client or an end-user. It also provides a process for end-users to authorize third-party access to their server resources without sharing their credentials, typically a username and password pair, using user-agent redirections.

The OAuth2 spec is a long and complicated document with many, many options. To compare the three management platforms I built a simple API hosted on Heroku. I also wrote a simple client web app that consumed the API, again hosted on Heroku. Finally I wired in each of the API management platforms to protect the API and to ensure only authenticated users could use the client.

I used the “Authorization Code” flow to authenticate and authorise the client users, which means that the client does not need any knowledge of the user’s credentials and all the authentication and authorization is done on the API side.

At a high level, every API request by the client requires an access token to call a protected resource. The steps to get an access token are:

  • the user authenticates themselves using a web application hosted on the API side
  • the client application is passed an authorization code via a callback method. The client exchanges this code for an access token, authenticating itself with a client ID and secret.
  • the client attaches the access token to any API calls
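The code-for-token exchange in the second step is just an HTTP POST in which the client proves its own identity; the user’s credentials never pass through the client. A sketch in Ruby (the urls, client ID and secret are made-up placeholders, and each platform names its token endpoint differently):

```ruby
require "net/http"
require "uri"

# Build the token-exchange request for the Authorization Code grant.
# The client authenticates with its ID and secret via HTTP Basic auth.
def token_request(token_url, client_id, client_secret, code, redirect_uri)
  uri = URI(token_url)
  req = Net::HTTP::Post.new(uri)
  req.basic_auth(client_id, client_secret)
  req.set_form_data(
    "grant_type"   => "authorization_code",
    "code"         => code,
    "redirect_uri" => redirect_uri
  )
  req
end

req = token_request("https://api.example.com/oauth/token",
                    "my-client-id", "my-secret",
                    "auth-code-123", "https://client.example.com/callback")
# req.body now holds the url-encoded grant parameters, and the
# Authorization header carries the client's credentials
```

The response (not shown) is a JSON body containing the access token, which the client then attaches to every API call.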

Each of the API management platforms achieves this in a slightly different way.

Mashery

To implement the flow in Mashery I wrote a simple node app that presented the user with a screen to enter their username and password, and validated them. The user clicks on a link in the client app and is taken to this validation screen.

If the validation succeeds, the web app uses a Mashery API to generate an authorization code and this is passed back to the client via its callback url. The client makes a call to a “token” endpoint configured in Mashery, which responds with an access token. This token is passed with every subsequent API call and the Mashery platform ensures that it is valid.

This is the implementation I tried last, and it took me very little effort to change the code I developed using Apigee to work with Mashery.

Apigee

Apigee’s approach is slightly different to Mashery. I had to expose three endpoints in Apigee. The first endpoint redirects to the login application where the user enters their details. If the details are correct, the login app calls the second endpoint which generates an authorization code and passes it to the client app via its callback url. The client then uses the third endpoint to exchange the code for an access token.

It took me a while to remember how OAuth works, but once I had that the example from Apigee was easy to follow and I got things up and running pretty quickly.

Mulesoft

Mulesoft is a relatively new player in the API management space, being known primarily for their ESB. The biggest difference between the solution I built with this and the others was that Mulesoft provides the login application, with no ability to develop your own. It is relatively simple to plug this login app into an existing database, and you do have control over how it looks, but it does mean you have to know about Spring and Java (both of which I do). Getting an access token was easy, and wiring in the API was pretty simple. I did like that you can run everything on a local machine whilst developing, but it also required the most technical knowledge to get working.

Conclusion

All three platforms made it simple to implement the required functionality. Mashery and Apigee’s approaches are pretty similar (although the platforms are quite different), and Mulesoft’s approach also works well.

Documenting APIs


One of the projects I am working on at work is revamping our mobile applications.

We are designing a RESTful web service layer that a mobile device will use to allow a customer to do various things.

One of the problems I have is that a third party is going to build the service layer and they need to be told what services the API layer has to expose. The mobile devs also need to know what to expect from the service.

I’ve tried to document everything using Word documents, but I couldn’t really get to a format that I liked. After talking to a few people I decided to try a couple of tools.

Apiary

Apiary allows you to define an API using a simple, Markdown inspired DSL.

HOST: http://shop.acme.com/
--- Sample API v2 ---
---
Welcome to our API. Comments support [Markdown](http://daringfireball.net/projects/markdown/syntax) syntax.
---
-- Shopping Cart Resources --
GET /shopping-cart
> Accept: application/json
< 200
< Content-Type: application/json
{ "items": [
    { "url": "/shopping-cart/1", "product":"2ZY48XPZ", "quantity": 1, "name": "New socks", "price": 1.25 }
  ] }

It turns this markdown into some nice looking documentation to describe your API. It even lets you call the API from within Apiary and it can stub your API for you as well…

I liked it, but I couldn’t put the level of detail in the documentation that I wanted. I’d also prefer to deploy the documentation with the API not on Apiary’s site.

Swagger

Swagger has taken a slightly different approach, or actually several approaches. You can document your API in a number of ways: you can write some JSON to describe it, you can write java/scala and annotate your code, or you can do it in javascript. Whichever way you choose, you end up with a load of JSON that describes your API.

Swagger-ui can be deployed with the API and renders the JSON in a nice looking format. You can also use it to call the API, but it doesn’t stub it for you.

You can see a demo here.

In the end I used Swagger, and annotated a javascript stub that I was writing anyway. I can document everything I need to, and the developers get a nice UI to see what the API does.

Brompton for Sale


For sale one Brompton H6L, hand built in September 2012. One careful owner (me!), selling as commuting on the train doesn’t fit in with work and childcare commitments.

The bike is pretty much as new, and in very good condition. It’s orange, it’s really well made and it’s very small when folded.

I had it custom built so there are a few features over the standard build:

  • it has a Shimano dynamo hub, so lights are always available
  • it has Eazy Wheels fitted
  • it has the telescopic seat post, I’m 6’4 and I had no trouble riding it
  • it has the luggage block, and a Brompton C bag
  • it has a cover and saddle bag, so you can take it on the tram

I’ll consider any sensible offer…

Welcome to the Land of Shattered Dreams


I’ve been blogging as typingincolor for about three years. It started as an exercise to learn a bit about wordpress, and was a place where I reviewed whatever DVDs and books I’d been looking at.

As time progressed, I began to add some stuff about work and technology. In particular I talked a bit about the journey the company I was working for was taking into the world of agile development. I also put in some stuff about technology that I was currently looking at.

It worked (and works) pretty well, but I’ve decided to try something new. I’ve chosen to use octopress as it will let me use some technologies I’m interested in. I particularly like that the styling is done using sass and the content written in markdown. The fact that it’s built on ruby is a happy coincidence.

The other thing I really like is that it uses git to publish the content. I can make changes and preview them locally, then push the content up to heroku where it is visible to the world.

Playing With Rest-Assured


We are moving to a service oriented architecture at work, using technology from Apigee which is helping us build securable and scalable APIs for our applications to use.

Our APIs are based on RESTful web services with JSON message bodies. This means that they can be used by anything that can send an HTTP request, including a browser.

One of the challenges is testing these APIs. Being a Java shop, sending the HTTP requests can be a bit of a faff, as can parsing the resultant JSON responses. We want the tests to be automated, and seeing as we use Junit and Hamcrest something that works in a similar way would be good. After a bit of googling, one of my colleagues pointed me in the direction of Rest-Assured.

Rest-Assured is a domain specific language for the easy testing of REST services, and attempts to bring the simplicity of using a dynamic language such as Ruby or Groovy to Java. It does this pretty well!

What I particularly like is that you can set up a fairly complex scenario with a minimum of code and it is really easy to see what is being tested. The documentation on the website is really good, but as an example:

Assume that a GET request to http://localhost:8080/ returns JSON as:

{
    "lotto":{
         "lottoId":5,
         "winning-numbers":[2,45,34,23,7,5,3],
         "winners":[{
               "winnerId":23,
               "numbers":[2,45,34,23,3,5]
         },{
               "winnerId":54,
               "numbers":[52,3,12,11,18,22]
         }]
    }
}

Then:

expect().body("lotto.lottoId", equalTo(5)).when().get("/lotto");

Verifies that the lottoId is 5 when you get it. It’s easy to read, simple to set up, and really powerful…

Facebook London Hackathon 2012


Yesterday I went to Facebook’s London office for my first “Hackathon”.

We got started around 0930, and had a few hours of talks from the Facebook team as well as a number of their partners. The guys from Facebook talked to us about Open Graph, mobile development and Facebook games. The talks were a good introduction to Facebook’s various APIs that we would be using during the afternoon’s hacking. Three of Facebook’s partners then gave a very quick overview of their services.

We saw presentations from:

  • Twilio, who provide all sorts of telephony goodness
  • Deezer, a web based music platform
  • Pusher, a way of doing real time magic

Once we’d finished we moved on to the hacking. Everybody split into teams and had 7 hours to build something using the technologies discussed in the morning. Various prizes were on offer for the best hacks; they just had to use the Facebook platform.

Ben and I were at a slight disadvantage in that there were only two of us in our team, and we wanted to build something that we could use at work. We had a few ideas over the previous couple of days and decided to build a store visit application. Basically, a customer goes to the app’s Facebook page, chooses which store they visited and is asked to submit a review of the store. The review is posted to Open Graph and will appear on the customer’s Facebook timeline. When a customer visits the page of the store, they will see all the reviews for that store and be able to like, comment etc. on them.

We also wanted to use Twilio to contact the customer if their review was negative to try to address whatever problem the customer had.

We were kind of limited in our technology choice, seeing as we do things in Java, so we built our app on Tomcat using Spring MVC. We got to the point of posting the customer’s review to Open Graph, but we didn’t really have anything in a fit state to present to the group.

Most of our problems were around our choice of technology. Spring is a great framework, but it can be a bit of a pain to configure properly. I tried to get hibernate working with hsqldb but failed miserably and spent 90 minutes trying to persist our reviews. In the end I gave up, installed mongodb and got it working in about 20 minutes. I used the localtunnel gem to allow Facebook access to my dev machine, but it kept on timing out which meant I had to continually reconfigure the app in Facebook. I used bootstrap for the colours and shapes, but it didn’t work properly in the Facebook canvas iframe.

Going forward, I’m going to look at Java on Heroku, which will remove the need for localtunnel. I need to play with bootstrap to get it to work properly, and next time I will get the persistence layer working before the hackathon starts.

All in all, it was a very useful day. I don’t get much chance to code anymore, let alone for 7 hours straight. The facilities provided by Facebook were excellent and all their engineers were very helpful.

Facebook Open Graph


I left my job at the Hut Group after 3.5 years and decided to move on to Morrisons to help them develop their online offering. The first thing I did on joining them was look at Open Graph in Facebook for a project they are working on.

Open Graph allows you to specify an action and an object, and do the action to the object. It will put the fact you’ve done this on your Facebook timeline for all your friends to see. On the AutoTrader website, for example, you can “want” a car. The demo app from Facebook shows you how to “cook” a recipe. The possibilities are endless.

I found the whole thing really interesting. I got to write some javascript which I’ve not really done much of, and figuring out how to stop a user doing the same action twice was quite challenging.

Testing my app was a bit of a faff, as I was running on a corporate network and you need to allow Facebook access to your website. I managed to get around it by setting up a reverse ssh tunnel to an Amazon EC2 instance and forwarding the web requests from there to my application. It took me a while to get working, but it is actually dead simple once you know what you are doing.

The Facebook documentation is quite good, and the tutorial is really easy to follow. I got two relatively complicated apps going in about three days and I’m not great at javascript…

You can find out more here.

Trying Haml, Coffeescript, Ruby and Sinatra


I am looking at ways that I can build a web front-end for a system I am going to develop at work. Basically, I want to have a number of web services that will provide the data, and then write a front end in html and javascript to consume these services and display the data to a user.

I’m not a javascript expert, can do a bit of html, and really didn’t want to use java as it isn’t the quickest thing to develop a prototype with.

In the end I chose Sinatra, a DSL for quickly creating web applications in Ruby with minimal effort. It is stupidly easy to create a web application! For example:

require 'sinatra'

get '/hi' do
    "Hello World!"
end

Will return “Hello World!” when you go to http://localhost:4567/hi. I used the ROXML gem to squirt out an xml representation of a ruby object and I had a web service in around 30 lines of ruby.

So I had my web service up and running. Now to consume it… As I’ve mentioned before, I like bootstrap as it is really easy to write a good looking interface without having to know loads of CSS. HTML is ok, but it can look a bit of a mess, so I decided to use Haml. It gets transformed into html, but it makes your markup look beautiful. Using it you have to go out of your way to write nasty html. Indentation is significant (a la python) so you have to write it neatly!

The final piece of the jigsaw is coffeescript, a little language that compiles into javascript. The aim of coffeescript is to expose the good bits of javascript in a simple way. I like it because it removes the need for endless curly brackets and the function keyword.

All in all, I’m pretty pleased with the results. None of it is earth shattering, but the progress bars are pretty neat and I like the way I’ve built the results table.

If you want to see the code, you can clone it using git 

git clone https://bitbucket.org/typingincolor/ajax-prototype.git

The Hut Group’s First Code Dojo


A couple of years ago, one of our technical architects at the Hut Group ran a few code dojos to give the team experience of TDD and Pair Programming. The team has grown massively in the meantime, so I thought it would be a good opportunity to try it again.

To quote from codingdojo.org, a dojo is:

…a meeting where a bunch of coders get together to work on a programming challenge. They are there to have fun and to engage in DeliberatePractice in order to improve their skills.

So, I booked the boardroom, set up an Intellij project on my MacBook and waited for the developers to arrive. In the end about 10 people came, with a range of development experience from less than a year, to more than we’d care to admit to.

The exercise I chose for the group was to implement the “fizz buzz” game. Basically, for a given natural number greater than zero, return

  • “fizz” if the number is divisible by 3

  • “buzz” if the number is divisible by 5

  • “fizzbuzz” if the number is divisible by both 3 and 5

  • otherwise return the number

The first comedy moment was when we went round the table and tried to play “fizz buzz”. You’d have thought a group of highly skilled IT professionals would be able to do simple mental arithmetic, but this is apparently not the case…

The problem itself isn’t all that complicated, a decent developer could write the code in 5 minutes, but the point of the exercise isn’t really the end solution. More important is how you get to it. A pair of developers worked at the laptop. The driver types, focuses on tactics and writing clean code that compiles and runs. The navigator focuses on strategy. How the code fits into the overall design, which tests will drive the code forward, and which refactorings will improve the entire codebase.

We also wanted to build the solution using Test Driven Development. You write a test that fails, write some code to make it pass and then refactor. With a problem this simple it is quite difficult to force yourself to follow these steps.

We started with the simplest case we could think of, i.e. if you pass in 1, you get 1 back and built from there. We managed 7 iterations in the time we had available and got to a reasonable solution.
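A solution along the lines of where we ended up, written here in Ruby rather than the Java we actually used (this is my version, not the group’s):

```ruby
# Return "fizz", "buzz", "fizzbuzz" or the number itself,
# checking the divisible-by-both case first.
def fizzbuzz(n)
  return "fizzbuzz" if n % 15 == 0
  return "fizz"     if n % 3  == 0
  return "buzz"     if n % 5  == 0
  n
end

fizzbuzz(3)   # => "fizz"
fizzbuzz(5)   # => "buzz"
fizzbuzz(15)  # => "fizzbuzz"
fizzbuzz(7)   # => 7
```

The order of the checks matters: testing for 15 first is exactly the kind of edge case that the early “pass in 1, get 1 back” iterations drive you towards.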

Going through the exercise prompted a lot of interesting debate. We talked about how you identify test cases, different ways of implementing the algorithm and even got into some OO design. I think the group found the exercise useful, I guess the proof will be when I see how many people come to the second code dojo.

There was one thing that I didn’t think worked too well. The navigator didn’t really get to contribute, as the entire group chipped in ideas, so effectively we had 10 navigators. Then again, I’m not sure that was a bad thing.

Next time I may try using cyber-dojo, which would allow pairs to work at the same time. It is a pretty neat site where you select an exercise and can code it online in a huge number of languages. One thing it does prove is that Java is a bit of a nightmare without an IDE; Ruby is the future!