Earlier this month I gave a presentation at ComoRichWeb on RabbitMQ, and one question from an attendee was “Is it possible to publish a message to be consumed at a later date?” I answered that it wasn’t possible to the best of my knowledge, but that there might be some hack to accomplish it. Well, this evening, while trying to figure out how to use a push vs. polling model for timed notifications, I discovered a clever hack using temporary queues, x-message-ttl and dead-letter exchanges.
The main idea behind this is utilizing a new feature available in RabbitMQ 2.8.0, dead-letter exchanges. This AMQP extension allows you to specify an exchange on a queue that messages should be published to when a message either expires or is rejected with requeue set to false.
With this in mind, we can simply create a queue for messages we want to be delivered later with an x-message-ttl set to the duration we want to wait before it is delivered. And to ensure the message is transferred to another queue we simply define the x-dead-letter-exchange to an exchange we created (in this case I’ll call it immediate) and bind a queue to it (the “right.now.queue”).
With node-amqp (the original was CoffeeScript) this looks roughly like the sketch below, rendered in plain JavaScript; the five-second TTL and the temporary queue's name are placeholders:
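```javascript
var amqp = require('amqp');

var connection = amqp.createConnection({ host: 'localhost' });

connection.on('ready', function () {
  // The temporary queue: messages sit here until the TTL expires,
  // then get dead-lettered to the "immediate" exchange.
  connection.queue('send.later.queue', {
    arguments: {
      'x-message-ttl': 5000,                 // deliver in 5 seconds
      'x-dead-letter-exchange': 'immediate',
      'x-expires': 10000                     // discussed below
    }
  }, function (queue) {
    // queue declared; publishing happens further down
  });
});
```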
Next I define the immediate exchange, bind a queue to it and subscribe.
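Something like this, again as a sketch (binding with '#' on a topic exchange catches whatever routing key the dead-lettered message carried):

```javascript
connection.exchange('immediate', { type: 'topic' }, function (exchange) {
  connection.queue('right.now.queue', { autoDelete: false }, function (queue) {
    queue.bind('immediate', '#');
    queue.subscribe(function (message, headers, deliveryInfo) {
      console.log(message);
      console.log(headers);
    });
  });
});
```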
Finally, after defining the queue I created earlier, we want to publish a message on it. So, revisiting the earlier queue definition, we add a publish call that publishes directly to the queue (using the default exchange).
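Sketched out, the publish goes straight to the queue by name (the message body here is a placeholder):

```javascript
// Publishing via the default exchange: the routing key is the queue name.
// After five seconds this message shows up on right.now.queue.
connection.publish('send.later.queue', { hello: 'from the past' }, {
  contentType: 'application/json'
});
```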
The result of running this is a five-second wait, after which the message content and headers are dumped to the console. Since the queue is only used temporarily in this scenario, I also set the queue's x-expires attribute so it expires a reasonable amount of time after the message does. This makes sure we don’t wind up with a ton of unused queues just sitting around.
You can get this exercise in its entirety on github.
This is pretty interesting and I plan to experiment further with utilizing this in one of my production node.js applications that use interval based polling to trigger scheduled events.
This morning I woke up with a lingering thought on my mind that was left over from recent conversations. In the technical community we often get so invested in our work that, rather than talk about the simple building blocks behind our success, we talk about the huge breakthroughs we make. The problem, however, is that our breakthroughs most often aren’t accessible to someone who just wants to get started. So today I will give an intro tutorial on using node.js, coffeescript and mongodb to build a simple blog. It builds off the concept of a tutorial I first used to learn node.js more than a year ago, but with a completely from-scratch approach. In this tutorial I will also cover practicing Behavior Driven Development using Mocha.
Not too long ago I tweeted what I felt was a small triumph on my latest project: streaming files from MongoDB GridFS for downloads (rather than pulling the whole file into memory and then serving it up). I promised to blog about this, but unfortunately my specific usage was a little coupled to the domain of my project, so I couldn’t just show it off as is. So I’ve put together an example node.js + GridFS application, shared it on github, and will use this post to explain how I accomplished it.
First off, special props go to tjholowaychuk, who responded in the #node.js irc channel when I asked if anyone had had luck using GridFS from mongoose. A lot of my resulting code is derived from a gist he shared with me. Anyway, to the code. I’ll describe how I’m using gridfs and, after setting the groundwork, illustrate how simple it is to stream files from GridFS.
I created a gridfs module that basically accesses GridStore through mongoose (which I use throughout my application) and shares the db connection created when connecting mongoose to the mongodb server.
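In outline it looks something like this sketch, grabbing GridStore and ObjectID off of mongoose's bundled driver (exact require paths depend on the driver version of the era):

```javascript
// gridfs.js
var mongoose = require('mongoose');
var GridStore = mongoose.mongo.GridStore;
var ObjectID = mongoose.mongo.BSONPure.ObjectID; // newer drivers expose this at the top level

// Reuse the db handle mongoose opened when connecting to the server.
function db() {
  return mongoose.connection.db;
}
```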
We can’t get files from mongodb if we cannot put anything into it, so let’s create a putFile operation.
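A sketch of what that looks like (option handling trimmed down; the default content type is an assumption):

```javascript
exports.putFile = function (path, name, options, fn) {
  options = options || {};
  options.metadata = options.metadata || {};
  // Work around the filename-vs-id issue described below.
  options.metadata.filename = name;
  options.content_type = options.content_type || 'binary/octet-stream';

  new GridStore(db(), name, 'w', options).open(function (err, store) {
    if (err) return fn(err);
    store.writeFile(path, fn); // delegate the heavy lifting to the driver
  });
};
```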
This really just delegates to the file-writing support that exists in GridStore as part of the mongodb module. I also have a little logic in place to parse options, providing defaults if none were provided. One interesting feature to note is that I store the filename in the metadata, because at the time I ran into a funny issue where files retrieved from GridFS had the id as the filename (even though a look in mongo reveals that the filename is in fact in the database).
Now the get operation. The original implementation of this simply passed the contents as a buffer to the provided callback by calling store.readBuffer(), but this is now changed to pass the resulting store object to the callback. The value in this is that the caller can use the store object to access metadata, contentType, and other details. The user can also determine how they want to read the file (either into memory or using a ReadableStream).
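A sketch of the updated get:

```javascript
exports.get = function (id, fn) {
  var store = new GridStore(db(), new ObjectID(id), 'r');
  store.open(function (err, store) {
    if (err) return fn(err);
    // The small blight discussed below: recover the real filename.
    if (('' + store.filename) === ('' + store.fileId) &&
        store.metadata && store.metadata.filename) {
      store.filename = store.metadata.filename;
    }
    fn(null, store);
  });
};
```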
This code just has a small blight in that it checks to see if the filename and fileId are equal. If they are, it then checks to see if metadata.filename is set and sets store.filename to the value found there. I’ve tabled the issue to investigate further later.
In my specific instance, I wanted to attach files to a model. In this example, let’s pretend that we have an Application for something (a job, a loan, etc.) that we can attach any number of files to. Think of tax receipts, a completed application, other scanned documents.
Here I define files as an array of Mixed object types (meaning they can be anything) and a method addFile, which basically takes an object that at least contains path and filename attributes. It uses this to save the file to gridfs and stores the resulting gridstore file object in the files array (this contains stuff like an id, uploadDate, contentType, name, size, etc).
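A sketch of the schema and method (field names beyond files and addFile are assumptions):

```javascript
var mongoose = require('mongoose');
var Schema = mongoose.Schema;
var gridfs = require('./gridfs'); // the module from above

var ApplicationSchema = new Schema({
  applicant: String,
  files: [Schema.Types.Mixed] // id, uploadDate, contentType, name, size...
});

ApplicationSchema.methods.addFile = function (file, options, fn) {
  var self = this;
  gridfs.putFile(file.path, file.filename, options, function (err, result) {
    if (err) return fn(err);
    self.files.push(result);
    self.save(fn);
  });
};

var Application = mongoose.model('Application', ApplicationSchema);
```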
This all plugs into the request handler that handles form submissions to /new. All this entails is creating an Application model instance, adding the uploaded file from the request (in this case we named the file field “file”, hence req.files.file) and saving it.
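In express that looks something like this sketch (assuming bodyParser-style uploads populating req.files):

```javascript
app.post('/new', function (req, res) {
  var application = new Application({ applicant: req.body.applicant });
  application.addFile(req.files.file, {}, function (err) {
    if (err) return res.send(500);
    res.redirect('/');
  });
});
```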
Now the sum of all this work allows us to reap the rewards by making it super simple to download a requested file from gridFS.
Here we simply look up a file by id and use the resulting file object to set the Content-Type and Content-Disposition headers, and finally make use of ReadableStream::pipe to write the file out to the response object (which is an instance of WritableStream). This is the piece of magic that streams data from MongoDB to the client side.
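Roughly like this, assuming the driver's GridStore#stream of that era (the route path is a placeholder):

```javascript
app.get('/file/:id', function (req, res) {
  gridfs.get(req.params.id, function (err, store) {
    if (err) return res.send(404);
    res.header('Content-Type', store.contentType);
    res.header('Content-Disposition',
               'attachment; filename=' + store.filename);
    // stream(true) auto-closes the store once fully read.
    store.stream(true).pipe(res);
  });
});
```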
This is just a humble beginning. Other ideas include completely encapsulating gridfs within the model. Taking things further we could even turn the gridfs model into a mongoose plugin to allow completely blackboxed usage of gridfs.
Feel free to check the project out and let me know if you have ideas to take it even further. Fork away!
I’ve been doing a bit of exploration in spring-amqp lately and came across some of the built-in features to automatically marshal/unmarshal AMQP messages to java objects. You can get away with just having your java objects implement Serializable, but that limits you to java-to-java communication and cuts down the amount of flexibility you have.
Luckily there’s JsonMessageConverter, which allows you to marshal and unmarshal messages as JSON. By default this uses a __TypeId__ header which maps to the java class name. For me this just didn’t work, as I have an app in node.js communicating with a Consumer and Producer living in java land. The solution comes from a little undocumented feature.
Essentially you just need to set a map of class mappings on the java side of things:
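Something along these lines (the "ticker" id and the Ticker class are stand-ins):

```java
// Map logical type ids (sent in the message header) to concrete classes.
Map<String, Class<?>> typeMappings = new HashMap<String, Class<?>>();
typeMappings.put("ticker", Ticker.class);

DefaultClassMapper classMapper = new DefaultClassMapper();
classMapper.setIdClassMapping(typeMappings);
```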
Then assign it to the JsonMessageConverter you’ll be using to marshal messages… in this example it’s used for both a consumer and a producer, but I’ll just focus on the consumer:
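A sketch of the consumer-side wiring (queue name and the consumer object are illustrative):

```java
JsonMessageConverter converter = new JsonMessageConverter();
converter.setClassMapper(classMapper);

SimpleMessageListenerContainer container =
    new SimpleMessageListenerContainer(connectionFactory);
container.setQueueNames("tickers");

MessageListenerAdapter adapter = new MessageListenerAdapter(consumer);
adapter.setMessageConverter(converter);
container.setMessageListener(adapter);
```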
And the actual consumer (which is just a plain java object).
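For example, a POJO like this (names are assumptions; the adapter's default handler method is handleMessage):

```java
public class TickerConsumer {
    // Receives the already-unmarshalled object.
    public void handleMessage(Ticker ticker) {
        System.out.println("got ticker: " + ticker);
    }
}
```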
Now whatever messages get published will just refer to the __TypeId__ header to map JSON objects to classes behind the scenes (no JAXB annotations or anything needed). For example, publishing a message from node.js:
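A sketch with node-amqp (the exchange and routing key are placeholders; "ticker" matches the mapping registered above):

```javascript
exchange.publish('tickers.key', JSON.stringify({ symbol: 'IBM' }), {
  contentType: 'application/json',
  headers: { '__TypeId__': 'ticker' } // the header the converter reads
});
```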
As a final note, I didn’t like the default header field __TypeId__. The only way to change this with the current API is to override getClassIdFieldName in DefaultClassMapper to return the header field name you want to use. For my purposes I just changed this to the field type:
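```java
import org.springframework.amqp.support.converter.DefaultClassMapper;

// Sketch: report "type" as the header carrying the class id.
public class TypeHeaderClassMapper extends DefaultClassMapper {
    @Override
    public String getClassIdFieldName() {
        return "type";
    }
}
```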
Pretty sweet stuff. Feel free to check out some of the examples in the github repository for this.
This morning I took some time out to do a few chores on my pet node.js project, paynode. In case you missed it, this has been my attempt to fill a void in node.js land for payment gateway integration. Think ActiveMerchant for node.js.
Today I’ve made a few updates to it. In an effort to work towards full Paypal Payflow Pro API coverage I’ve added the methods RefundTransaction, DoVoid, and TransactionSearch. As always, these pretty much match up with the API description details, except that TransactionSearch follows paynode’s existing PayPal conventions and converts keys in the format l_keyname0 to indexed objects. For example, here’s roughly what the result of a transaction search will look like:
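(The values below are fabricated for illustration; the field names follow PayPal's L_FIELDNAMEn convention.)

```javascript
[
  { timestamp: '2011-01-10T12:00:00Z',
    type: 'Payment',
    email: 'buyer@example.com',
    name: 'John Doe',
    transactionid: '8XY12345AB6789012',
    status: 'Completed',
    amt: '25.00' }
  // ...one object per L_*n index in the response
]
```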
One gotcha with TransactionSearch is that the NVP API seems to have a limit when the response string exceeds 8192 bytes, which can garble results that are long (and by long I just mean 80 or so results).
Since releasing paynode, another payment gateway module was released by the folks at Braintree for integrating with the Braintree Payment API. So I’ve integrated it into paynode as well, simply by using Braintree’s node module.
There’s more to come in the future… I’m hoping to create some unified API that lets one switch between modules without caring which gateway is underneath, but at the moment I’m just sticking with pulling as many payment gateways into the fold as possible.
Recently I created a little web app for a friend’s conference to accept talk submissions and gather votes on those submissions to rank the top ones. For this task I used heroku’s node.js beta preview to host the application and a free CouchOne instance for the data store. Things were a bit rocky, but I learned some important lessons that I thought I’d share.
- Tag your last successful heroku deployment
- While adding one additional feature for the site, I reached a point where the app worked fine on my laptop but, for reasons I couldn’t figure out for quite some time, failed on heroku. I had to manually check out the commit from the date of my last deploy, branch, and push to reset it. Tagging would have made this easier. This also leads to my next lesson learned…
- Create a Staging Heroku Instance
- This helps catch errors arising from differences that might exist between your local machine and heroku. After the mishap mentioned previously, all my deployments included deploying to staging first just to make sure there were no odd inconsistencies before pushing live.
- heroku config:add NODE_ENV=production
- This has become something that should go into every app now imho. It made it a breeze for me to configure my app to use different resources in different environments (see the sketch just after this list).
- Replicate Your CouchOne Instance
- In the 11th hour my main CouchOne instance crashed. To make matters worse, the guy who could fix the instance was on an international flight and wouldn’t be able to help till morning. Although it was a rare case, simply replicating my databases would have made it easy and painless to recover and work around the instance being down. Plus it only takes one minute.
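Here’s a minimal sketch of the kind of NODE_ENV-keyed configuration I mean (the URLs are placeholders):

```javascript
// config.js
var environments = {
  development: { couchUrl: 'http://localhost:5984/talks' },
  staging:     { couchUrl: 'http://staging.example.couchone.com/talks' },
  production:  { couchUrl: 'http://example.couchone.com/talks' }
};

module.exports = environments[process.env.NODE_ENV || 'development'];
```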
As a node.js hosting platform, I still think Heroku is just kinda okay. I’ll confess that I liked Joyent a lot better from my experiences with the Node Knockout, but I’m willing to stick in there and see how Heroku’s node.js support turns out. So far, it’s good.
Tuesday night I gave a talk at a local Java User Group (that’s four JUG appearances this year, hoorah!) on RabbitMQ and demonstrated not just using it to communicate between two java processes, but also as a way of communicating asynchronously between a node.js application and a java application… I have to say it was pretty awesome, and I really think it opens the doors for integrating node.js applications seamlessly into an existing java infrastructure.
The setup was actually pretty simple and I have to admit I only spent a total of two days on it. I first created the node.js application front end using Socket.IO to manage websockets on both the server and client side and Ryan Dahl’s node-amqp plugin for communicating over amqp. Here’s the relevant node.js code:
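A sketch of it (ports and the express/socket.io setup details are assumptions; the queue, exchange and key names come from the talk):

```javascript
var express = require('express');
var amqp = require('amqp');
var sio = require('socket.io');

var connection = amqp.createConnection({ host: 'localhost' });

connection.on('ready', function () {
  var exchange = connection.exchange('some-exchange', { type: 'topic' });

  connection.queue('queueB', function (queue) {
    queue.bind('some-exchange', 'key.b.a');

    var app = express.createServer();
    app.listen(3000);
    var io = sio.listen(app);

    io.sockets.on('connection', function (socket) {
      // Route browser messages out to the exchange for the java app...
      socket.on('message', function (msg) {
        exchange.publish('key.a.b', msg);
      });
    });

    // ...and push anything arriving on queueB back to connected clients.
    queue.subscribe(function (message) {
      io.sockets.send(message.data.toString());
    });
  });
});
```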
I think this is pretty straightforward, but I’ll describe what’s going on here. First, we open a connection to the amqp server, which could be configured with any additional details in accordance with the amqp spec but for now it’s just hitting localhost. Once the connection is established we declare the exchange we’ll use and then declare a queue named “queueB” and bind it to the exchange using the routing key key.b.a. This will let us receive any messages from the queue with that routing key.
With all this established, we start up the web server (I am using expressjs) and, once it starts, have socket.io listen on the same port. With the infrastructure set up, we add a listener for when a client connects and route any messages sent from the client to the exchange with the routing key “key.a.b”, which will be picked up by our java app (whose queue is bound to that key on the same exchange). We also subscribe to the queue we previously set up, and messages sent out on it will be published back to the client over websocket (which in turn displays them on the page).
Now here comes the java application portion (brace thyself!), which will take stock ticker symbols and return a hard-coded price for each ticker (but hypothetically it could be a real value). Here’s the java class that acts as both a listener and a publisher; its onMessage method takes the message it receives, looks up the stock ticker symbol in an ImmutableMap and publishes the price using RabbitTemplate (I’ll show the configuration for it soon):
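A sketch of it (the price map contents and class details are assumptions):

```java
import org.springframework.amqp.core.Message;
import org.springframework.amqp.core.MessageListener;
import org.springframework.amqp.rabbit.core.RabbitTemplate;
import com.google.common.collect.ImmutableMap;

public class StockLookup implements MessageListener {

    private static final ImmutableMap<String, String> PRICES =
        ImmutableMap.of("IBM", "165.44", "GOOG", "601.25");

    private final RabbitTemplate rabbitTemplate;

    public StockLookup(RabbitTemplate rabbitTemplate) {
        this.rabbitTemplate = rabbitTemplate;
    }

    public void onMessage(Message message) {
        String ticker = new String(message.getBody()).trim();
        String price = PRICES.get(ticker);
        // Publish the price back out; node's queueB is bound to key.b.a.
        rabbitTemplate.convertAndSend("key.b.a", ticker + ": " + price);
    }
}
```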
This is all configured using Spring 3.0’s annotation-based configuration:
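A sketch of the configuration (bean names follow the description below; everything else is assumed):

```java
import org.springframework.amqp.core.Binding;
import org.springframework.amqp.core.BindingBuilder;
import org.springframework.amqp.core.Queue;
import org.springframework.amqp.core.TopicExchange;
import org.springframework.amqp.rabbit.connection.CachingConnectionFactory;
import org.springframework.amqp.rabbit.connection.ConnectionFactory;
import org.springframework.amqp.rabbit.core.RabbitTemplate;
import org.springframework.amqp.rabbit.listener.SimpleMessageListenerContainer;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;

@Configuration
public class ServerConfiguration {

    @Bean
    public ConnectionFactory connectionFactory() {
        // Defaults match what the node side connects with.
        return new CachingConnectionFactory("localhost");
    }

    @Bean
    public RabbitTemplate rabbitTemplate() {
        RabbitTemplate template = new RabbitTemplate(connectionFactory());
        template.setExchange("some-exchange");
        return template;
    }

    @Bean
    public Queue queueC() {
        return new Queue("queueC");
    }

    @Bean
    public TopicExchange someExchange() {
        return new TopicExchange("some-exchange");
    }

    @Bean
    public Binding binding() {
        return BindingBuilder.bind(queueC()).to(someExchange()).with("key.a.b");
    }

    @Bean
    public StockLookup stockLookup() {
        return new StockLookup(rabbitTemplate());
    }

    @Bean
    public SimpleMessageListenerContainer listenerContainer() {
        SimpleMessageListenerContainer container =
            new SimpleMessageListenerContainer(connectionFactory());
        container.setQueues(queueC());
        container.setMessageListener(stockLookup());
        return container;
    }
}
```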
The CommonConfiguration class is used because in the presentation I showed multiple behaviors. The connectionFactory bean simply defines the ConnectionFactory with various details (these match the defaults on the node side), and the rabbitTemplate bean represents the template that will be used to listen on queueC, bound to some-exchange using the routing key “key.a.b” (which, if you recall, is the key used when node publishes messages on the exchange). The binding() is used to just bind the queue to the exchange.
Finally, we create a SimpleMessageListener bound to the queue (this is used in the application runner class to make our StockLookup listen on the queue) and create a bean for our StockLookup, which injects a rabbitTemplate that will be used for publishing messages back out on the exchange. Whew!
Now we have the main class that is used to start up the java app and context. This could just be a one-liner that instantiates the context, but during the presentation I wanted to expose the listener to do some live demonstrations, and I ended up setting the StockLookup object as a listener.
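In its simplest form, something like:

```java
import org.springframework.context.annotation.AnnotationConfigApplicationContext;

public class Main {
    public static void main(String[] args) {
        // Starting the context declares the queues/bindings
        // and starts the listener container.
        new AnnotationConfigApplicationContext(ServerConfiguration.class);
    }
}
```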
That’s all… just start up the node.js application and the java app, navigate to http://localhost:3000, and entering IBM or GOOG in the text box should make the relevant stock prices show up. A neat little thing I did in the presentation was modifying the java app to publish different prices at intervals, with the client side updating in real time without me doing anything on the node.js side. Fun stuff.
The relevant code is available on github at http://github.com/jamescarr/nodejs-amqp-example for trying it out on your own, the readme gives some instructions on what you’ll need to do to set it up and run it.
This was all the result of spending just a day exploring spring-amqp (which is relatively new) and getting node and java talking… I can’t wait to delve deeper and see what other potential uses come up. The slide deck I used for the presentation is also included below.
I thought I’d take a quick moment to provide some examples of making object properties read only in EcmaScript 5 (and by extension node.js). There are several ways to accomplish it, so I’ll iterate over each of them.
The quickest way to make all properties of an object read only is by calling Object.freeze on it. The interesting thing here is that (at least in node.js) no exception or warning is raised if you try to assign to a read-only property… it will appear that the assignment succeeded when in reality it didn’t.
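For example (the values here are placeholders):

```javascript
var config = Object.freeze({ host: 'localhost', port: 5984 });

config.port = 80;         // no exception outside strict mode...
console.log(config.port); // 5984 -- ...and no effect either
```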
Let’s try an object with some additional types… nested object and an array.
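A sketch:

```javascript
var frozen = Object.freeze({
  a: 1,
  b: { nested: true, list: [1, 2, 3] }
});

frozen.b = null;         // ignored: the reference is locked
frozen.b.nested = false; // works: the nested object was never frozen
frozen.b.list.push(4);   // works too
```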
In this example, we see that the array and object hanging off of b can in fact be modified; they just can’t be reassigned to something new. Freezing really just locks the reference. However, if we loop over each property and freeze each one, they become unmodifiable, and an attempt to push an element onto the array will throw an exception stating
TypeError: Can't add property 4, object is not extensible.
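Here’s that second pass as a sketch (one level deep; a full deep freeze would recurse):

```javascript
Object.keys(frozen.b).forEach(function (key) {
  if (typeof frozen.b[key] === 'object' && frozen.b[key] !== null) {
    Object.freeze(frozen.b[key]);
  }
});
Object.freeze(frozen.b);

frozen.b.list.push(5); // TypeError: Can't add property 4, object is not extensible
```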
Define Only a Getter
Another way to make a property read only is by defining only a getter for it. You can do this via either Object.defineProperty or __defineGetter__.
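For instance:

```javascript
var obj = {};

Object.defineProperty(obj, 'answer', {
  get: function () { return 42; }
});

// The older, non-standard spelling of the same idea.
obj.__defineGetter__('question', function () {
  return 'unknown';
});

obj.answer = 0; // throws in node of this era: the property has only a getter
```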
Both will throw an exception on an attempt to reassign them.
Defined as Not Writable Via Property Descriptor
One more way is to define the writable attribute in the property descriptor.
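For example:

```javascript
var settings = {};

Object.defineProperty(settings, 'version', {
  value: '1.0.0',
  writable: false, // the read-only bit
  enumerable: true
});

settings.version = '2.0.0';    // silently ignored outside strict mode
console.log(settings.version); // 1.0.0
```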
That’s just a quick overview, there’s also quite a few interesting tricks to locking object instances.
I’m documenting this only because I had some difficulty finding info online… the API docs tell you what you need, but it took me a couple hours to get things working, and a little-known bug threw my work off track.
So say you want to make a secure request to a website, perhaps a secure API call (as is most common with payment gateway APIs), and you’re using nodejs… what do you do? Assuming you already have a private key and an SSL certificate handy and that you’ve compiled nodejs with ssl support, I’ll show you how in the following steps.
First, place the cert somewhere your script can access it; I usually prefer a location like ./certs. Make absolutely, positively sure that you have no trailing newlines at the end. You also have the option of embedding it within your script (I’ve seen it done), but I believe this is a poor practice. Still, you have the option of doing that.
Given that your script/app/module is located in the same directory as the certs directory, load the contents of both the key and the cert into memory and use the crypto module to create the credentials.
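Something like this sketch (the file names are placeholders; crypto.createCredentials was the API of this era — modern node passes key/cert straight to https.request):

```javascript
var fs = require('fs');
var crypto = require('crypto');

var key = fs.readFileSync('./certs/client.key', 'ascii');
var cert = fs.readFileSync('./certs/client.crt', 'ascii');

var credentials = crypto.createCredentials({ key: key, cert: cert });
```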
With the credentials now available, we can set up the client and make a request:
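A sketch using the old http.createClient API this post describes (the host and path are placeholders):

```javascript
var http = require('http');

var client = http.createClient(443, 'secure.example.com', true, credentials);
var request = client.request('POST', '/api/call', {
  host: 'secure.example.com',
  'Content-Type': 'application/x-www-form-urlencoded'
});

request.on('response', function (response) {
  response.on('data', function (chunk) {
    console.log(chunk.toString());
  });
});

request.end('param=value');
```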
That is, createClient(port, host, secure, credentials). If everything goes well you should be able to make a request. One thing to watch out for is a trailing newline at the end of your key or cert. Oftentimes I add key.replace(/\n$/, '') just to be safe.
On and off over the past couple days I’ve been working on a nodejs module to interact with Paypal’s Payflow Pro API to allow the acceptance of online payments within node.js apps. The feature list is steadily growing and soon I hope to implement the parts of the API that let paypal do the heavy lifting as well as certificate based authentication.
It’s available via npm and installation is a snap. Given that you have npm installed, just type
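```
npm install paynode
```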
and bam, it’s now available for use. I put together a quick sample to get started (using my existing sandbox account), and if all goes well you should see some console output for either success or failure.
Feel free to check out the github repository, especially if you’d like to contribute.
Stay tuned… there’s more to come both from this module and more payment modules I plan to develop.