Saturday, July 24, 2010

OSCON 2010 - Closing Thoughts

It was a very hard landing that made everyone on the plane leave fingernail marks in their armrests, but I am now back home in Ottawa. Portland is a great city and overall OSCON surpassed my expectations.

I met some great people and witnessed a lot of cool things, but I think the key takeaway from this convention is that open source matters. Not that I didn't believe it before, but this really drove it home. Some people believe that open source is dangerous and full of broken promises. I strongly disagree. Sure, there are bad projects out there, but that's not exclusive to the open source community. You still need to do your homework before adopting any technology.

Think about the web and try to picture where it would be today without open source. Would it even exist? I can build a list of game changers without even trying:
  • W3C
  • Apache
  • WebKit
  • MySQL, Postgres, etc.
  • PHP
  • jQuery
  • memcache
  • ... and on and on ...
As much as people love to hate PHP, its contributions to the web cannot be ignored. There are also technologies that are not directly tied to the web:
  • Linux
  • Perl
  • Python
  • Subversion, git, Mercurial, Bazaar ...
  • ... and on and on ...
It's really not hard at all to expand these lists. Take the time to stop and think about this for a minute. How many other industries can you think of that have this kind of community effort? But better yet, how much did the effort put into those projects help accelerate the evolution of new technologies?

Out of the few tech conventions that I've gone to, one thing that makes OSCON stick out from the rest is that the discussions here were not about products. They were not about making money. It was a gathering of people who are passionate about their craft and who want to make a difference with their skills. I'm not trying to be dramatic; just watch Tim O'Reilly's keynote and he'll sell it to you better than I could. One of the things he said was:
"[...] work on stuff that matters, not just to build cool tech that's going to make you rich or make people have a good time."
He then went on to talk about how open source technologies were used to literally save lives in Haiti. Sure, this is the extreme end of the spectrum and obviously not every open source project results in saved or improved lives, but at the very least they are enabling. If you ever find yourself with that career angst where you want to stop writing yet another form that dumps data into a database just because it pays your mortgage, then you might profit considerably from contributing to an open source project that you find interesting. Better yet, before you complain about an open source project that did not live up to its promise, think about how you can help, then create a patch and submit it.

The world of computer science and software engineering is still very young when you compare it to traditional forms of engineering. We still have a long way to go, but there is absolutely no denying that open source is helping us get there faster.

    Thursday, July 22, 2010

    True cross-compilation for Android and iPhone?

    I went to another presentation on mobile development, this time for the xmlvm project, presented by Arno Puder. Most of the frameworks I've seen so far that attempt to ease cross-platform development in the mobile space seem to tackle the problem in similar ways: they introduce some kind of soft layer between your code and the underlying OS and hardware.

    This is where xmlvm sets itself apart from the rest. It does not rely on a runtime or interpreter to abstract the code from the underlying layers. The development cycle looks a bit like this:
    1. Write an Android app in Java, with Eclipse and the Android plugin.
    2. Build your UI and other components using the xmlvm library.
    3. Build and execute within their own emulator.
    4. Repeat.
    5. When you want the real thing, run their compilation tool, which compiles native binary code for either Android or iPhone.
    I hope I got that right; it's what I remember from the talk. The most interesting and unique part is step number 5. How do they do this? Java and Objective-C are such drastically different languages that this can't possibly be reliable... right? The trick is that it doesn't work at the language level. The cross-compilation uses the Java bytecode as the source, parses and morphs the logic into an XML document (hence the name), and from that XML it can generate several types of output applications. This is brilliant.

    Wait, did he say "their own emulator"!?
    Yes, he did. They have their own iPhone emulator so that you can test your code without having to go through the process of cross-compiling (yet). From the demo that was shown, it works great. It might even be better than Apple's emulator, since there are controls for tweaking the accelerometer. Arno even showed a demo app on his iPod which connected remotely to his emulator running on his laptop. He was then controlling the emulator's accelerometer by moving his iPod around. All of this, of course, is developed using xmlvm.

    So far, I'm quite impressed, but there are differences between Android and iPhone that are more complex than just syntax. One question I had was how the memory model is managed between the two outputs. Java uses garbage collection while Objective-C uses the retain/release model, and that impacts how you write your code. I didn't quite grasp the answer that Arno gave me, but in essence he said they use reference counting, so circular references would cause memory leaks. Any developer adopting this technology would need to clearly understand how their memory is being managed on the device they are targeting.

    Xmlvm most likely violates Apple's terms of service, and they are aware of this. However, I consider it to be in the same gray zone as Titanium and PhoneGap. An important distinction is that xmlvm generates a true native binary, so it is probably much harder for Apple's app review process to prove that an app was built using a cross-compiler than it is for the other frameworks that rely on interpreters.

    The presenter made it clear that this library is not perfect and still has a way to go, but if this technology really does as much as what was presented, then I am definitely interested. I'll have to fiddle with it when I have a chance, now that I have my brand new and free Nexus One. Oops! Did I just say that!?

    Wednesday, July 21, 2010

    /me loves javascript

    I attended a session on jQuery yesterday. This presentation was put on by Mike Hostetler and Jonathan Sharp... yes, these guys. Anyway, it was interesting to have them walk us through developing with jQuery. I waited a bit to report on this talk, mostly because I was already somewhat familiar with the library, but also because I needed some time to understand what I got out of it.

    There were some neat tricks that I wasn't aware of, and they did a great job of explaining how the library traverses the DOM and how you should structure your code to do it more efficiently. However, there were quite a few people in the room who had a hard time following the syntax during the live coding examples. There was even a question about whether the appendTo team would consider developing ways to make the syntax easier to write. I was thrown off by this. jQuery (in my opinion, anyway) offers such an elegant way of expressing your intent. The chained method support makes it really easy and quick to drill down to the data that you want to work with. Also, the context stack lets you drill down and back up within the same chain, with a syntax that can be indented so that it stays easy to read and follow. For example:

    $('tr')
      .filter(':odd')             // narrow the selection down to the odd rows
      .addClass('odd')            // style just those rows
      .end()                      // pop back up to the original 'tr' selection
      .find('td.company')         // drill down again, this time to the company cells
      .addClass('companyName');

    This is what I really love about JavaScript. This is also what I think new JavaScript developers have the hardest time getting used to. You really need to understand what the language is doing when you start attempting things like nesting closures. But once you have that primer tucked under your belt, you can really start appreciating the advantages that the JavaScript syntax has to offer. If you want to learn more about the good parts of JavaScript, I recommend JavaScript: The Good Parts by Douglas Crockford. It's a quick read and extremely informative.
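
    As a quick illustration (my own sketch, not from the presentation), here is the kind of nested closure that tends to trip up newcomers. The inner functions keep a reference to the count variable long after makeCounter has returned:

    // Hypothetical example of nesting closures, not taken from the talk.
    function makeCounter(start) {
      var count = start;              // captured by the closures below
      return {
        increment: function () {      // this inner function closes over 'count'
          count += 1;
          return count;
        },
        reset: function () {
          count = start;              // 'start' is also still in scope here
        }
      };
    }

    var counter = makeCounter(10);
    counter.increment();              // 11
    counter.increment();              // 12
    counter.reset();
    counter.increment();              // 11 again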

    For those who are curious, Jonathan Sharp's response to the question about easier syntax was perfect (paraphrasing from memory): "No. We would rather spend that time improving our documentation and tutorials to help developers learn the syntax".

    Code for America

    There was a quick keynote this morning put on by Code for America. This really caught my attention and I suggest you go to their website and take a look at what they are working on. There is a video on the front page with some popular figures that will give you a quick idea of what they are about. This is truly amazing and harnesses the open source model to really make a difference.

    One thing I don't truly understand is their name. I can see why they would use something like CODE for AMERICA (the caps pattern is taken from their logo). It's very rah-rah-rah, and the colour scheme is surely not an accident. It gets people's patriotic blood flowing. But why not CODE for PEOPLE and expand their scope? In particular, Canadian governments are structured differently but are tackling the same problems. Isn't everyone? Why not make it truly open?

    EDIT: I put these questions to folks from Code for America who were here at the convention. Basically, the organization is less than a year old and they've recently grown to three people on their payroll. They are tackling immediate problems but are aware of other similar groups, notably some folks interested in starting Code for Canada. The most interesting part of this effort is that they call upon the public to help out with these government inefficiencies and, in return, any organization that profits from the resulting projects is bound by a transparency clause. This means that any data produced and managed by systems born out of this effort will be open. Also, the code is open source and therefore available to cities outside the US.

    Scalable Internet Architectures

    Yesterday was the second and last tutorial day at OSCON. My morning session, on scalable internet architectures, was presented by Theo Schlossnagle. This session was excellent. It's hard to put together a meaningful summary as there was so much covered in the three or so hours. Some of the topics that were covered include:
    • proper configuration of static resources such as JavaScript files and images, in order to maximize your bandwidth and make the most of the browser cache (see the sketch after this list)
    • traditional and less traditional databases and other persistence solutions
    • sharding, and why you want to avoid it when possible
    • network routing with dos and don'ts
    • concurrency and multithreading models
    • staffing
    • programming practices
    • ... and much more that I didn't have time to jot down.
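
    On the static resources point, here is a rough sketch of the idea (my own, not the presenter's code): serve long-lived assets with far-future cache headers and put a version in the file name so you can still force an update.

    // Hypothetical Node.js sketch: far-future cache headers for a static file.
    var http = require('http');
    var fs = require('fs');

    http.createServer(function (req, res) {
      if (req.url === '/js/app.v42.js') {              // version the file name, not the cache
        res.writeHead(200, {
          'Content-Type': 'application/javascript',
          'Cache-Control': 'public, max-age=31536000', // let browsers keep it for a year
          'Expires': new Date(Date.now() + 31536000000).toUTCString()
        });
        fs.createReadStream('./js/app.v42.js').pipe(res);
        return;
      }
      res.writeHead(404);
      res.end();
    }).listen(8080);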
    Here are some of the take-aways that come to mind:

    Cache or Accuracy?
    Proper caching can usually drastically improve performance, but the drawback is information accuracy. If information is cached for 5 minutes, then when that information changes, it can be reported inaccurately for up to 5 minutes. The proper balance will depend on the application. You'll often see accuracy stated as a requirement, something like "System Xyz must be accurate. Period." As if you would ever assume the opposite, but still, it needs to be explicit. Your peers in fancy suits will tell you that you can't cache a particularly expensive operation because it needs to be accurate. But if that operation takes 100 hits per second and you can show that a 30-second cache would avoid the need to purchase more hardware, Mr. Fancy Suit might decide that it's a reasonable trade-off. Better yet, 30 seconds may have been reasonable by his definition the entire time.
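
    As a rough illustration (my own sketch, not from the tutorial), a time-based cache can be as simple as remembering the last result and when it was computed. Here expensiveReport() is a hypothetical stand-in for whatever costly query needs to stay "accurate":

    // Hypothetical 30-second cache; expensiveReport() is a placeholder.
    var TTL_MS = 30 * 1000;
    var cached = null;
    var cachedAt = 0;

    function getReport() {
      var now = Date.now();
      if (cached === null || now - cachedAt > TTL_MS) {
        cached = expensiveReport();   // only take the expensive path when stale
        cachedAt = now;
      }
      return cached;                  // may be up to 30 seconds out of date
    }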

    Premature Optimization
    "We should forget about small efficiencies, say about 97% of the time: premature optimization is the root of all evil.  Yet we should not pass up our opportunities in that critical 3%. A good programmer will not be lulled into complacency by such reasoning, he will be wise to look carefully at the critical code; but only after that code has been identified."
       -- Sir Tony Hoare (popularized by Donald Knuth)

    This famous quote says that a good developer will know when time is worth spending on micro-optimizations. Some developers will use optimization methods such as caching as a cop-out for not spending the time it takes to design more efficient routines and algorithms. You should design and write code to be as efficient as possible, but only worry about optimization techniques once you can prove that a part of the code is inefficient. You will then be in a better position to properly analyse the problem and find a suitable optimization technique. Of course this doesn't mean you should always ignore possible bottlenecks and performance issues, but a good developer will know when an optimization is premature and when it is worth the effort.
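
    A simple way to get that proof before reaching for a cache (again, my own sketch, not from the session) is to time the suspect code path and let the numbers decide. buildReport and buildReportCached are placeholders for your own code:

    // Crude measurement sketch: prove the hot spot exists before optimizing it.
    function timeIt(label, fn, iterations) {
      var start = Date.now();
      for (var i = 0; i < iterations; i++) {
        fn();
      }
      var elapsed = Date.now() - start;
      console.log(label + ': ' + elapsed + ' ms for ' + iterations + ' runs');
    }

    timeIt('uncached', buildReport, 1000);
    timeIt('cached', buildReportCached, 1000);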

    Mind your URL
    This is usually not a proper URL:
    http://www.domain.com/super_promo 
    this is:
    http://www.domain.com/super_promo/
    and that trailing slash, the one your marketing team might forget to add to the URL that gets printed in flyers and typed into the emails that fill up people's spam folders, could be a killer for your system. This is extremely subtle, but most web servers will answer the first form with a redirect (a 301 or 302 response, depending on the server) that sends the user to the same address with a trailing slash. That extra noise on the network can be expensive on a busy site. It's not exactly practical to try to avoid it; the point is to plan for it. If you load test your system using the trailing slash and think your system can handle a reasonable load with room to spare, you might be in a world of hurt when your site gets Digg'ed with a URL that doesn't have a trailing slash.
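
    A minimal Node.js sketch (hypothetical, not from the talk) of what the server ends up doing behind the scenes, which is one extra request and response for every visitor who types the short form:

    // Every hit on the slash-less URL costs an extra round trip.
    var http = require('http');

    http.createServer(function (req, res) {
      if (req.url === '/super_promo') {
        res.writeHead(301, { 'Location': '/super_promo/' });  // bounce back to the slashed form
        res.end();
        return;
      }
      if (req.url === '/super_promo/') {
        res.writeHead(200, { 'Content-Type': 'text/html' });
        res.end('<h1>Super promo!</h1>');
        return;
      }
      res.writeHead(404);
      res.end();
    }).listen(8080);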

    A few other drive-bys:
    • Your solution will never be perfect. You can spend an extra two weeks or a month on it and it will still not be perfect. You are often better off implementing a solution and instrumenting the heck out of it so that you can analyse, learn and improve.
    • Architectures should be specified independently of any particular vendor. People tend to stick to what they know; it's in our nature. It's important to put those personal biases aside and propose the best tool for the task.
    • Decouple your services.  This doesn't just make your code easier to maintain, it makes it easier to deploy and scale.  But most importantly, if implemented properly, it will isolate failures.
    There was so much more discussed during this presentation; this is just what comes to mind (and what I can make out from my digital scribbles). The presentation was recorded, but that may only have been for the live feed. If the video is made public, I strongly suggest you grab some popcorn and sit through the entire 3 hours. At the very least, the slides for this presentation (and others) should appear on this page shortly.

    Monday, July 19, 2010

    The Titanium Promise

    I've been hearing a lot of talk around Appcelerator's Titanium framework lately. Appcelerator has found a way to ease the development of apps across the iOS and Android operating systems, with BlackBerry support trailing not far behind. They provide a free SDK that allows you to write your mobile application in JavaScript and compile it into native applications for multiple platforms. This is huge and could mean significant savings in development costs.
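
    To give a rough idea of the model, here is a minimal sketch from memory of the Titanium docs (treat the exact property names as approximate): the entire UI is declared in JavaScript and the same script is packaged for both platforms.

    // Minimal Titanium-style sketch; property names are approximate, from memory.
    var win = Titanium.UI.createWindow({
      title: 'Hello OSCON',
      backgroundColor: '#fff'
    });

    var label = Titanium.UI.createLabel({
      text: 'Hello from JavaScript',
      textAlign: 'center',
      width: 'auto'
    });

    win.add(label);
    win.open();   // the same code runs on iPhone and Android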

    I spent some time looking into the framework a few weeks ago in preparation for today's hands-on tutorial. My initial sentiment was that the framework is still in its infancy. Having sat through this tutorial, my original assessment remains intact. Appcelerator does not provide an IDE or a step-through debugger, which is kind of annoying, but that's not really a showstopper. My biggest issue is that I was often faced with compilation errors when trying to build and run a brand new project or their kitchen sink demo. At first I thought that I had massacred my dev environment, but today's presentation was regularly interrupted with similar "I'm getting several errors when I do that on my system" comments. It turns out that the release of iOS 4 broke a few things in the Titanium framework and we all had to install an unreleased version to circumvent the errors. That solved the immediate problem, but it uncovered an even bigger issue: the Titanium framework is heavily dependent on third parties that Appcelerator has no control or influence over. What makes this even worse is that one of those third parties is Apple. That dependency reveals two major risks for a developer:
    1. Apple can drastically change their SDK and/or tools with little to no notice. If such a change breaks the Titanium compiler, you may not be able to release an updated version of your app until Appcelerator releases a patched version of its SDK. You'll probably still be able to build it using the older Apple SDK, but picture this scenario:
      1. Apple releases iOS 4.0 which breaks the Titanium build.
      2. iPhone/iPod owners start upgrading to iOS 4.
      3. There is a problem with an existing Titanium app running on iOS 4 that causes it to crash.
      4. The developer is unable to release an updated version to fix the crash on iOS 4 until Appcelerator fixes the Titanium build.
    2. The Titanium framework is flirting with Apple's updated terms of service. Titanium apps are currently being approved into the App Store, but knowing Apple, that could change at any time. Steve Jobs' infamous open letter talked about cross-platform tools targeting the lowest common denominator of features, an argument that could potentially be applied to Titanium.
    All of this might never end up causing any issues. It's probably in Apple's best interest to keep allowing quality Titanium apps into its store, since it makes their devices more appealing. However, it is notable and probably worth mentioning to stakeholders.

    There is another important piece of information that I learned during this session that was not obvious before: the JavaScript code is not cross-compiled to Java or Objective-C. The Titanium SDK generates Android and Xcode projects and does compile native binaries using the Xcode and Java compilers, but the app itself is not really a native app. The application that results from this process, at least the iPhone version, is essentially a JavaScript interpreter with your JavaScript code compiled into it as a string resource. This has several implications:
    • Performance is sacrificed for convenience.  No matter how well you optimize your JavaScript, it simply will not run as fast as if it were compiled to native machine code or byte code.
    • Though the JavaScript code is minified and obfuscated, it is most likely still very easy to reverse engineer. Other JavaScript frameworks like Google's GWT suffer from the same problem, except that in the GWT case you can move some of that sensitive logic to the server.
    Don't get me wrong, there were a lot of good things that came out of this presentation and the Titanium framework still has a lot to offer. The use of JavaScript and its literal syntax makes it easy to build apps in a decoupled fashion, and the event handling model will be familiar to any web developer. There was also mention of a community effort to build a tool that can take an XIB file and convert it to declarative JavaScript to build the UI. It apparently works well, though I'm not sure about the maintainability of this approach. The framework can surely ease the development of most apps available today, but it is still young and has room for improvement. The Appcelerator team has big plans for the future, such as providing an IDE with a step-through debugger, support for universal binaries, and more. If you're working in the mobile space, you should definitely keep an eye on it. I know I will.

    Roam with me!

    One of the challenges for me during this week will be getting used to using a crippled iPhone.  I have data roaming turned off on my phone this week while I'm in the US to avoid remortgaging my home in order to afford my next cellular bill.  I don't want to get into how roaming charges work in North America, because this will quickly turn into a rant, but I'm going to have to get used to actually doing some basic planning before I leave a wifi hotspot.  This is something I didn't notice I had stopped doing when I got my iPhone.

    When I started looking into buying a smartphone, I wasn't really interested in getting a data plan. It was expensive and I didn't think it was worth the cost. A colleague, who had already had an iPhone for a while, told me I'd regret not getting a data plan, and I shrugged. It turns out that when I bought my phone, Rogers' retention specialist offered me a basic 500MB data plan, which I decided to try out. I have to admit that I would be kicking myself if I had actually gone through the trouble of buying a smartphone without a data plan. It even recently got my wife to say the magical phrase "It's a good thing you had your phone!" after she had repeatedly tried to convince me that it was a waste of money. (It's true, I swear!)

    You need to have it to really appreciate how useful it is to have the internet in your pocket. Or better yet, you need to lose it in order to really appreciate what you had. Do you remember watching Back to the Future Part II as a kid and thinking "Man, I can't wait to ride a hoverboard!"? Turns out those didn't really materialize (it's not 2015 yet!), but lately I've often found myself reading headlines and thinking "Geez, we're in the future!". Think about SixthSense projection, 3D holograms, cars that park themselves and even 3D televisions. Mobile internet has been around a while now, and you might think I'm lame for talking about it here in 2010, but it is pretty much the equivalent of the Hitchhiker's Guide to the Galaxy. When you stop to think about it, that is an incredible technological achievement.

    It's also incredible how fast we become dependent on these technological advancements. A friend once asked me whether I had used a computer without an internet connection lately. My answer was no, and I hadn't really thought about it, but a disconnected computer feels quite useless in 2010, while it was state-of-the-art technology "only" 15 years ago.

    So now I'll need to go about only partially connected for the next week. At least there is free wifi at OSCON, unlike another tech convention I went to earlier this year...

    OSCON 2010 - Day Zero

    It's my first time in Portland, Oregon, and so far I am impressed. The city is clean and green, and the people I've met so far are friendly and appear to be genuinely glad to help. There is also some great architecture here.

    I landed last night and will be spending the week here attending OSCON 2010. I'm excited about this convention; there are some excellent topics and some very smart people on the agenda. I almost wish there weren't so many tracks to choose from, since it makes the decisions that much harder. I can definitely understand Kevin Whinnery's comment: "wish I didn't have to miss presentations to give mine".

    The Oregon Convention Center, where OSCON is being held, is huge.  The crowd this morning is fairly small, but the first two days of the five day event are optional tutorial sessions.  I anticipate it getting busier on Wednesday.

    I'll try to post the interesting bits that I come across during the week. Today will be mobile-centric, with tutorials on Android and Appcelerator's Titanium. I'll post a follow-up to these sessions later today.