Recently a friend of mine posted something interesting. During a meeting he had referred to a blog post he wrote three years ago and, afterward, read through some of his other posts and realized that he really missed blogging. Likewise, I realized lately that my own blog posts have been few and far between. While I used to blog whatever I was randomly thinking at the moment, I’ve kind of gotten away from that and only blog a tutorial here or there. There are a lot of excuses I could make, such as Twitter being the new outlet for my random thoughts or that I just don’t have the time anymore, but both are really lame (and overplayed) excuses.
So my new goal is to write a minimum of two posts per week. Even my random thoughts feel much more fleshed out and better conveyed in a blog post than in a tweet. I’m not worried that the two-a-week goal will be hard to meet; there are just too many interesting things going on in my life (and the tech community at large) not to have something to blog about.
Earlier this month I gave a presentation at ComoRichWeb on RabbitMQ and one question from an attendee was “Is it possible to publish a message to be consumed at a later date?” I answered that it wasn’t possible to the best of my knowledge, but that there might be some hack to accomplish it. Well, this evening while trying to figure out how to use a push vs. polling model for timed notifications I discovered a clever hack using temporary queues, x-message-ttl and dead letter exchanges.
The main idea behind this is utilizing a new feature available in 2.8.0, dead-letter exchanges. This AMQP extension allows you to specify an exchange on a queue that messages should be published to when a message either expires or is rejected with requeue set to false.
With this in mind, we can simply create a queue for messages we want delivered later, with x-message-ttl set to the duration we want to wait before delivery. To ensure the expired message is transferred to another queue, we set x-dead-letter-exchange to an exchange we created (in this case I’ll call it immediate) and bind a queue to it (the “right.now.queue”).
In coffeescript with node-amqp this looks like this:
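A minimal sketch of the delay queue declaration, assuming node-amqp’s connection API (the queue name and the timing values here are placeholders):

```coffeescript
amqp = require 'amqp'

conn = amqp.createConnection()

conn.on 'ready', ->
  # The delay queue: messages wait here for 5 seconds, then get
  # dead-lettered to the "immediate" exchange. x-expires removes the
  # queue itself once it has sat unused for 30 seconds.
  opts =
    arguments:
      'x-message-ttl': 5000
      'x-dead-letter-exchange': 'immediate'
      'x-expires': 30000
  conn.queue 'delayed.notifications', opts, (q) ->
    console.log 'delay queue declared'
```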
Next I define the immediate exchange, bind a queue to it and subscribe.
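Roughly like so (I use a topic exchange with a catch-all binding here so the dead-lettered message is received whatever routing key it carried; the original may have used a direct exchange and an explicit key):

```coffeescript
conn.on 'ready', ->
  # The exchange that expired messages are dead-lettered to, with a
  # queue bound to catch them regardless of routing key.
  conn.exchange 'immediate', {type: 'topic'}, (ex) ->
    conn.queue 'right.now.queue', {autoDelete: true}, (q) ->
      q.bind ex, '#'   # '#' matches any routing key on a topic exchange
      q.subscribe (message, headers, deliveryInfo) ->
        console.log message, headers
```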
Finally, after defining the queue I created earlier, we want to publish a message to it. So, revisiting the earlier queue definition, we add a publish call that publishes directly to the queue (using the default exchange).
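Something like this, using the delay queue’s name (a placeholder here) as the routing key so the default exchange routes the message straight to it:

```coffeescript
conn.on 'ready', ->
  # Publish via the default exchange, using the delay queue's name as
  # the routing key; the message waits out its 5 second TTL there
  # before being dead-lettered to "immediate" and delivered.
  conn.publish 'delayed.notifications', hello: 'world'
```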
Running this, we’ll see a 5 second wait and then the message content and headers dumped to the console. Since the queue is only used temporarily in this scenario, I also set the queue’s x-expires attribute so that it expires a reasonable amount of time after the message does. This ensures we don’t wind up with a ton of unused queues just sitting around.
You can get this exercise in its entirety on github.
This is pretty interesting and I plan to experiment further with utilizing this in one of my production node.js applications that use interval based polling to trigger scheduled events.
Lately at work we’ve been using the tomcat plugin for our gradle projects instead of the bundled jetty plugin. There were a lot of reasons for the switch, the main one being that our production environment for our current project is tomcat, so it makes sense to have an embedded server that mirrors that environment. I had already devoted time to getting JMX working with the jetty plugin, so today I investigated doing the same with the tomcat plugin.
Luckily this is one of those cases where “it just works.” Well, almost. You’ll need to add the following properties to the GRADLE_OPTS environment variable.
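Something along these lines, with the port matching the JMX URL below (authentication and SSL are disabled here, which is only reasonable for local development):

```shell
# Standard JVM flags to expose a JMX remote endpoint on port 1099
export GRADLE_OPTS="$GRADLE_OPTS \
  -Dcom.sun.management.jmxremote \
  -Dcom.sun.management.jmxremote.port=1099 \
  -Dcom.sun.management.jmxremote.authenticate=false \
  -Dcom.sun.management.jmxremote.ssl=false"
```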
That’s it. Now when you fire up tomcatRun you can open up jconsole and navigate to service:jmx:rmi:///jndi/rmi://localhost:1099/jmxrmi and you’re good to go.
One question that came up a couple times this week is how to set gradle up to deploy jars locally. For the most part I was satisfied with just having people push snapshot releases to our Artifactory server but some people did express a real desire to be able to publish a jar to the local resolution cache to test changes out locally. I’m still a fan of deploying snapshots from feature branches but luckily you can do a local publish and resolve with gradle.
First off, ask yourself if the dependency is coupled enough to warrant being a submodule. Also, could just linking the project in your IDE be enough to get what you want done? If the answer to both questions is no, then your next recourse is to use gradle’s excellent maven compatibility (don’t run!).
For the project you want to publish locally you simply need to apply the maven plugin and make sure you have version and group set for the project (usually I put group and version in gradle.properties).
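That amounts to something like the following (the group and version values here are placeholders):

```groovy
// build.gradle of the library you want to install locally
apply plugin: 'java'
apply plugin: 'maven'

// group and version can also live in gradle.properties instead
group   = 'com.example'
version = '0.1.0-SNAPSHOT'
```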
That’s all you need. Just run gradle install from the project root to install it to the local m2 cache. Now let’s update the project that will depend on it.
The magic sauce here is using mavenLocal() as one of your resolution repositories, which resolves against the local m2 cache. mavenCentral() can be replaced by whatever repositories you use; it’s only included here because it’s the most commonly used.
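Putting that together, the consuming project’s setup looks roughly like this (the dependency coordinates are placeholders matching the hypothetical library above):

```groovy
// build.gradle of the project that depends on the locally installed jar
repositories {
    mavenLocal()     // resolves against the local m2 cache (~/.m2/repository)
    mavenCentral()
}

dependencies {
    compile 'com.example:mylib:0.1.0-SNAPSHOT'
}
```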
That’s it! I know some people dislike this approach due to ingrained disdain for maven but the beauty of it is that maven is silently at work and you really don’t get bothered by it.
Today I found myself thinking again of what I see as two distinct cultures in the development world: Hackers and Enterprise Developers. This really isn’t any kind of a rant just an observation that I’ve been thinking over lately.
Hackers are really bleeding edge. They have no problem using the commandline, using multiple languages, or contributing back to open source. They’ll find and fix bugs in the opensource software they use and issue pull requests frequently. They’ll always be willing to use new tools that help them produce better software, even when there might not be any good IDE support. Finally, they’re constantly investigating new technologies and techniques to give them a competitive edge in the world.
Now when I say hacker I don’t mean someone who just hacks lots of random shit together and calls it a day (that kind of developer isn’t good for anyone). Just someone who isn’t afraid to shake up the status quo, isn’t afraid to be a bit different and go against the grain. They’re the polar opposite of enterprise developers.
Enterprise Developers on the other hand are fairly conservative with their software development methodology. I’m not saying that a lack of standards is a good thing, but enterprise developers want standards for doing everything and they want it standardized across the company. If there isn’t IDE support for a tool they’ll refuse to use it. Want to use mongodb, riak, etc? Not unless there’s a fancy GUI client for interacting with it. If they find a bug they’ll back away from the framework they’re using and simply declare that the company shouldn’t use the framework until the bug is fixed externally. I find this group prefers to play it safe and work on solidifying their existing practices rather than explore new ideas.
Now don’t get me wrong, this isn’t another rant on IDEs or developers who don’t use the command line. But give me a couple days in any organization and I can quickly point out who the Hackers and Enterprise Developers are. The hackers are always pushing the envelope, trying new ideas out, giving presentations. Most likely they’re facing off against enterprise developers on a daily basis who attempt to rebuff their ideas. The enterprise developers on the other hand are pretty content to do their same daily routine for the rest of their lives without any change or growth. To paraphrase Q from the Star Trek episode Tapestry, “He learned to play it safe. And he never, ever got noticed by anybody.”
What I’ve been considering though is whether or not both are beneficial to an organization. It’s no secret I associate myself with the hacker group (and thus I am a bit biased), but I keep wondering if enterprise developers truly are just the right fit for some organizations. I always think hackers are perfect because they push the envelope and come up with all kinds of interesting solutions to scalability problems, such as using BitTorrent to deploy to thousands of servers. Enterprise developers, on the other hand, rarely exhibit such innovation and would require shelling out several million dollars for an application to copy a file to multiple destinations. In a nutshell, you can really get more done with hackers (who will seek to automate manual tasks as much as possible), while you can use enterprise developers in bulk to brute force through any problem.
To repeat the beginning of my post… this isn’t a rant. And I don’t mean to put “enterprise developers” in a negative light. This is all just some random thoughts going through my mind about the two cultures I commonly see in every organization I have been in. What’s your opinion?
Last night I had the pleasure of attending the very first HackCoMo meetup at Bambinos and thought I’d share the experience. For those of you who don’t know, HackCoMo is a weekly meetup to get together and just hack at various projects or ideas. All in the company of really cool geeks local to Columbia, MO while enjoying some beer and appetizers.
First, it was pretty awesome to be exposed to what others were working on. Ted was working on some DropBox integration for DocumentCloud, Bryan was hacking at some imap webhook type integration, and in various other terminals I could see some ruby code + rvm type magic.
For me, I decided to hack on something relevant to one of my side projects that I’d been meaning to look into for a while: using elasticsearch with mongodb. I already had mongodb populated with the Enron email corpus (around 500K records), so it was the perfect opportunity to try pushing it into elasticsearch. Why? Well, out of the box mongodb doesn’t have any world-class full-text searching and ranking, and I want users of an app to be able to do a full-text search across multiple fields in a mongodb document. Ranking is very important. Also, I need synonym-based results (for example, “Truck License”, “Auto License” and “Car License” should all match results for an Automobile License).
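For the synonym piece, elasticsearch handles this with a synonym token filter in the index settings. A rough sketch of what I have in mind (the filter and analyzer names are mine, not from any actual project config):

```json
{
  "settings": {
    "analysis": {
      "filter": {
        "license_synonyms": {
          "type": "synonym",
          "synonyms": ["car, auto, automobile, truck"]
        }
      },
      "analyzer": {
        "license_analyzer": {
          "tokenizer": "standard",
          "filter": ["lowercase", "license_synonyms"]
        }
      }
    }
  }
}
```

Any field analyzed with that analyzer would then match a query for “car” against documents that only mention “auto”.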
As I started coding, the first thing I discovered was that it was going to take a while to import all 500K records. I could have scaled out to multiple processes, but that would have lost the focus of what I was trying to accomplish, so I capped the number of records at 10K. I used the elastical node.js module (only because it was the first thing I saw) and it was okay, though there were a couple of times I just made REST calls to elasticsearch myself. You can see the results of my 2 hours of coding here.
Overall it was a fun night and in addition to hacking I got to help someone else out with node.js while enjoying some good music, food, and conversation. It’s definitely a great addition to our growing tech community here in Columbia.
If you’re free on Tuesday night, join us!
I stumbled across this post earlier this week on the workspaces employees at 37Signals have. Notably, a good portion of the workspaces are home offices (or even better, a laptop on a kitchen table at home). This is something every business needs to consider imho. I’ll never understand why a lot of great companies pass on really awesome developers for the sole reason that those developers don’t have the resources to pack up the family and relocate to California.
I’m just ranting, but with the way the housing market is these days it’s no easy feat for developers who own a home to easily pack up and move. So, why are you not hiring remote workers?
In the past many people could come up with easy excuses for not learning a skill. College, after all, is quite expensive! Time consuming! I’ve even heard the lamest excuse of all: people who did go to college claiming their skills are poor because their school didn’t have a good program in their field of study. Hogwash!
I think my school’s Computer Science program was actually pretty good. Now that I’m in the real world, I feel I’m a really good developer. But I have also come to recognize that algorithms are my weak spot. Thankfully, in this day and age all you really need is the motivation to go look for the knowledge you need and take the time to absorb it. I’ve found MIT’s Open Courseware to be a really excellent resource in this regard. If you were like me and applied at MIT, you know that the tuition costs were upwards of $30k… way beyond the means of someone who made “okay” grades from Hannibal, MO. What is awesome is that you can still benefit from the lectures, notes, and exams if only you have the motivation to do so.
So I’ve been working my way through Introduction to Algorithms, which comes complete with captions for the hearing impaired (like me)!
Really awesome stuff. Which gets me back to my original point. We hear in the news every day that there are so many job losses in the unskilled job market, or even in certain skilled markets that were traditionally profitable. Meanwhile, EVERY SINGLE COMPANY I KNOW HAS BEEN DESPERATELY LOOKING FOR DEVELOPERS! And even more importantly, I’ll tell you from the experience of conducting over 100 interviews (and being the interviewee for several) that interviewers don’t give jack about whether you have a Bachelor’s, Master’s, or even a PhD, just whether or not you have good problem solving skills, work well with others, and can write good clean code. So why don’t you help the economy out and start watching lectures and completing coursework for MIT Introduction to Computer Science today?
This morning I was working on a project and one of the modules I depended on had a small bug in it. As I was about to log an issue on the project’s github page I discovered that it was already fixed, just not yet released. I really wanted to push my changes out to our staging server and my build process relies on npm gathering all the dependencies my project needs, so I looked for ways to install through npm without much modification to my build process.
What I discovered could be considered an abuse of npm’s preinstall hook, but it works.
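The hook boils down to a preinstall script in package.json that pulls the unreleased module straight from its git repository before the rest of the install runs (the user and module names here are hypothetical):

```json
{
  "name": "my-app",
  "scripts": {
    "preinstall": "npm install git://github.com/someuser/somemodule.git"
  },
  "dependencies": {
    "somemodule": "0.2.x"
  }
}
```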
This brings in not just the module but all of its transitive dependencies as well. This trick worked for me, but I’m still a little doubtful that it’s considered the “right” way around the problem.
This morning I woke up with a lingering thought on my mind that was left over from recent conversations. In the technical community we often get so invested in our work that rather than talk about the simple building blocks that build our success we talk about the huge breakthroughs we make. The problem however is that our breakthroughs most often aren’t accessible to someone who wants to just get started. So today I will give an intro tutorial to using node.js, coffeescript and mongodb to build a simple blog. It builds off the concept in a tutorial I first used to learn node.js more than a year ago, but with a completely from scratch approach. In this tutorial I will also cover practicing Behavior Driven Development using Mocha.