manicwave

Surf the wave

Sharpening the Code Coverage Saw


Here's a quick follow-up to my Hudson - Cocoa - Coverage Reporting blog post from the other day.

I didn't show the summary output that Cobertura displays in Hudson.  It looked like this:

[Screenshot: coverage summary before cleanup]

You'll note (or I will do so for you) that there's a variety of packages here (in the Cocoa case, these are just subdirectories of the current workspace).

Notwithstanding the anemic coverage percentages overall, outside of the default (top-level) and CDGenerated packages, these are all third-party components. While I'm extremely interested in knowing that they work correctly, it's not on my radar to build out test coverage for each of these.

What I want is accurate reporting for the code that I write.

When we set up the gcovr build step in Hudson, the command looked like this:

/usr/local/bin/gcovr -r . -x -b -e /Developer  1> html/coverage.xml 2>/dev/null

The -e /Developer command line argument instructs gcovr to exclude any files with names that match /Developer.  The final config that I'm now working with is:

/usr/local/bin/gcovr -r . -x -b -e /Developer -e './UKKQueue/' -e './DebugUtils/' -e './Foundation/' -e './UnitTesting/' -e './Third Party Sources/' -e '.*/ShortcutRecorder.framework/' 1> html/coverage.xml 2>/dev/null

This is obtuse at best, but it works.  The '.*/xxx/' form is necessary because gcovr matches the exclusion patterns against the fully qualified path.  In my case that would be /Users/jschilli/.hudson/jobs/Tickets-MASTER/workspace/...
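
If you're unsure whether a new exclusion will actually hit, a rough sanity check is to run the regex against the kind of absolute path Hudson hands to gcovr. Something like this (the file name is made up, and grep's regex flavor is close enough to gcovr's for patterns like these):

# Rough check: will this exclusion regex match the fully qualified path?
WORKSPACE="/Users/jschilli/.hudson/jobs/Tickets-MASTER/workspace"
echo "$WORKSPACE/Third Party Sources/SomeVendorClass.m" | grep -E '.*/Third Party Sources/' && echo "matched - this file would be excluded"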

The results now look like this:

[Screenshot: coverage summary after exclusions]

The absolute measure of coverage for each of the two remaining packages has not changed, but the information is now focused on the data that is most important to me.

With all of that said, the exclusions you choose to add are project specific. Hopefully this will help you hone the reporting to your liking.

That Feels Better - Cocoa, Hudson and Running Green


Continuous Integration

Continuous Integration (CI) has been around for a while now. Popularized in the Java/Ruby/[lang] communities, CI, when properly implemented, promotes good coding practices.  CI alone won't guarantee great code, but it helps support good behavior and in fact rewards users routinely and reliably.

I've used Continuous Integration in many former lives - CI was essential on large geographically distributed teams - driving out incompatibilities in interface and implementation early and often.

My definition of a successful CI system and implementation is:

  1. Automated and unattended application build
  2. Automated and unattended test execution

Everything beyond that is gravy (or sugar).

CI & the indie

When I released my first iPhone app, I was building the project in Xcode, switching to Finder and/or Terminal.app, compressing, copying and generally screwing up at every possible step.  Although I've seen the benefits of automation multiple times, I was so busy getting this app out that I couldn't see how I could take the time to write scripts.  That airlock of paradox didn't last long.  I wrote a few scripts and every aspect of my build/sign/archive workflow was automated - when I ran the script.

I repeated this exercise for my first Mac product - this time a hodge-podge of scripts to build the app, generate the help files, generate the Sparkle appcast and release notes, upload, etc.  I still use this script and it works great - when I run the script.
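
For a sense of the shape of such a script, here's a minimal sketch - the target name, build directory and archive name are placeholders, not my actual setup, and the appcast/upload steps are left as comments:

#!/bin/bash
set -e
# Build the release configuration (target name is a placeholder)
xcodebuild -target MyApp -configuration Release clean build
# Zip the app bundle for distribution, preserving bundle structure
cd build/Release
ditto -c -k --keepParent MyApp.app MyApp.zip
# appcast generation, release notes and upload steps would follow here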

Although I've had great success with CI in the past, I wasn't convinced that my one workunit indie shop could or would benefit from implementing CI.  There were a few things that helped turn me around on this:

  1. The weakness of my script based build system continues to be the user centered part - me running the scripts.  As I bounce from machine to machine, branch to branch, tucking frameworks away on one machine and not replicating them to another, [insert favorite 'in the heat of the battle' screwup here], issues might not emerge for some time.
  2. Increased desire to capture metrics and data about my personal development process.  I'm not implementing heavyweight metrics, but I understand absolutely that data can empower me to make decisions - test data, build data, coverage data.
  3. Renewed belief that removing rote, non-value-adding activities from my routine will increase my effectiveness and throughput.

Rule #1 of CI - Automated and Unattended Build

When something changes, your CI should build the system to ensure that nothing has broken.  If you're in the zone and a failure pops up - easy to fix.  If you find an issue weeks later - well we've all been there.

CI is all about automating those rote tasks.  It is important to emphasize both the automated aspects as well as the unattended aspects of CI.  The only thing worse than no CI is CI that is broken and neglected.  We'll come back to this point in a bit.

Hudson CI & Cocoa

There are several compelling CI solutions on the market - CruiseControl, Hudson and scores of others in the open source space.  There is a spectrum of commercially available solutions as well, including Bamboo.  To my knowledge, there are no CI solutions that focus on the Cocoa space [just found BuildFactory - haven't checked it out]. The good news is that most of these systems can run external processes - a by-product that works out well for Cocoa devs.

Hudson seems to be the leading choice - it's really straightforward to get the basics working.  From there, incremental tweaks should get you up and running.

Preparing for the move

I screwed up more than a few times getting my apps to build in Hudson.  There are more than a few pages on the web that illustrate Cocoa/Hudson builds.

My suggestion is to ensure that you can take a fresh cut of your project from your SCM system, check it out to a new directory and build it clean.

I would encourage you to do this outside of your normal dev tree - it's surprising how easily a relative path will find its way into your Xcode build settings.

  1. cd /tmp
  2. checkout project to foobaz
  3. build
  4. if errors, rm -rf /tmp/foobaz, fix errors in main tree, check in, goto #1

This process (sketched below) should rid you of (many of) those hidden dependencies that will prevent a clean build once you're executing inside of Hudson.
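
Here's the sketch promised above - a minimal version of that loop, assuming a Git repo and a UnitTests target (substitute your own repo location and target):

# Hypothetical repo location and target name - substitute your own
cd /tmp
rm -rf foobaz
git clone /path/to/your/repo.git foobaz
cd foobaz
xcodebuild -target UnitTests -configuration Debug clean build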

Once you have a clean, repeatable build - from your SCM system - you should move on to getting Hudson up and running.

Setting up Hudson

Installing Hudson is well documented on the net.  The Hudson site includes installation instructions that work well.  There are several Cocoa-specific examples out there as well - I started here.

Because I have multiple targets set up in my Xcode project, I selected 'This build is parameterized' and added some targets to choose from.  Hudson will remember your last choice.

[Screenshot: parameterized build settings]

Setting up SCM

If you use Git, Christian Hedin's article covers that configuration as well.  The critical thing is to use either SCM polling or a post-commit hook to invoke the build.  Hudson will also let you set up a purely time-based build, e.g. build every thirty minutes, but that will execute the build whether there are changes or not.  Polling or post-commit hooks ensure that builds are invoked when change occurs.

[Screenshot: SCM configuration]

You will note that I've elected to only build my master branch - by default, Hudson will check out and build each branch that it finds in your Git repo. While I see this as advantageous (forward dev on master, branches for production release and bug fix), my branches haven't gotten the Hudson CI/gcov/unit testing love that master has.

[Screenshot: SCM polling schedule]
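
The polling schedule itself is plain cron syntax. For example, to poll the repository every five minutes:

# Hudson "Poll SCM" schedule - cron syntax, here polling every five minutes
*/5 * * * *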

Setting up your project

In the interest of walking before I run, I want my Hudson build to check out my updated code, compile my code, execute unit tests and capture any reporting output for test coverage and unit test failures.  It turns out that most of this is already performed when I build my UnitTests target in my Xcode projects.

[Screenshot: build step configuration]
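
For reference, the command inside that 'Execute shell' build step is just an xcodebuild invocation - something along these lines, assuming the build parameter from earlier is named TARGET and you build the Debug configuration:

# Hedged sketch of the build step; $TARGET is the hypothetical name of the
# parameter from the parameterized build above
xcodebuild -target "$TARGET" -configuration Debug clean build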

Click on Build Now - you can check the console to see the steps that Hudson is taking.

If the stars are aligned, you should have a successful build.  If not, you'll need to crawl through the console logs to determine where the failure occurred.

It is critical that you go back to your Xcode project/standalone build directory and correct mistakes there.  Check in your changes and repeat.  No one has to know how many times you repeat this cycle, but it's critical to meet the spirit and law of Rule #1!

Sugar

Once the basic build is working, you should add unit test reporting.  If you have or are planning to run unit tests (Rule #2), download this ruby script, install it in /usr/local/bin (or the directory of your choice) and change your build step to look like this:

[Screenshot: build step updated with the unit test reporting script]
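
The change amounts to piping xcodebuild's output through the ruby script so that Hudson gets JUnit-style XML to parse. A sketch, assuming the script was installed as /usr/local/bin/ocunit2junit.rb (use whatever filename you actually downloaded):

# Pipe the OCUnit output through the ruby script so Hudson can parse the
# results as JUnit XML (script name and location are assumptions)
xcodebuild -target "$TARGET" -configuration Debug clean build | /usr/local/bin/ocunit2junit.rb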

In the Post-build actions, configure Hudson to publish your test results.

[Screenshot: publish test results post-build action]

Trigger a build and you'll now see a chart with the build results.  As your test suite grows, you should see a trending graph with increased numbers of tests.

Code Coverage

Unit test execution is what we're after for Rule #2, but the number of tests as a key metric is easily misleading.  I've seen a lot of cases where the same code is tested over and over again.  Coverage is the key indicator!

Download gcovr and install it in /usr/local/bin.

Add the following as a new build step (after the xcodebuild step)

gcovr converts gcov data into a format parseable by Cobertura - a coverage analysis tool.

(See Tommy McLeod's blog post here for some additional details)
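
Concretely, the build step is the same gcovr invocation shown in the coverage post above:

# Same gcovr invocation as in the coverage post above (the html directory
# must exist before the redirect)
/usr/local/bin/gcovr -r . -x -b -e /Developer 1> html/coverage.xml 2>/dev/null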

[Screenshot: gcovr build step]

[Screenshot: Cobertura plugin configuration]

Assuming you have gcov correctly working for your project (the subject of an as yet unwritten post), executing the build will result in some nice graphs.
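
As a rough guide until that post is written, 'gcov correctly working' usually means the unit test target is compiled with coverage instrumentation - in Xcode/GCC terms, build settings along these lines (verify against your own configuration):

// A hedged sketch - enable these on the unit test target's Debug configuration
GCC_GENERATE_TEST_COVERAGE_FILES = YES
GCC_INSTRUMENT_PROGRAM_FLOW_ARCS = YES
// and link against the gcov runtime
OTHER_LDFLAGS = -lgcov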

You can now navigate through the coverage reports and see your annotated source code including what's covered - and more importantly, what's not. (There's a one-line patch to gcovr detailed here that allows Cobertura/Hudson to navigate into your code)

[Screenshot: annotated source code coverage report]

[Edit 3/2/2010 - new example showing a real miss]

This example illustrates the value of visualizing test coverage.  I had ~15 valid operations on a model class, written from the spec, and I erroneously interpreted running green on my unit tests as meaning all was good.  In fact, I had missed several cases - clearly identified here.

[Screenshot: coverage report highlighting the missed cases]

Finally

Make some changes in your project, commit them to your SCM system and monitor the build.  Make a test fail, introduce a compiler error and monitor the results.

You want to be able to rely on your CI system to accurately report failures. If you have instability in the process, now is the time to grind through the issues.

You can install the Jabber notification plugin in Hudson, configure your Jabber address (or that of a group chat if you're working with multiple people) and Hudson will inform you of build successes and failures.  You can also configure email.

The compelling aspect of the Jabber plugin is that Hudson has a Jabber bot that you can use to get status, trigger builds and more.

[Screenshot: Jabber notification configuration]

What's left?  There are a lot of different directions you can take Hudson now that the basics are in hand.  I want to spend some more cycles getting better diagnostics when the build fails.  Unit test failures are clearly reported. Compilation failures (forget to commit that new file to the build?) require spelunking through the console log.  I also plan on moving my production builds to Hudson, but for now, getting that jabber notification that the build is clean is totally worth the time I've invested in setting this up.

Logitech - I Want My Day Back


Yesterday sucked from a productivity perspective. I'm deep into development on Tickets 2.0 and spending a lot of time generating new versions of my Core Data-based data model. This is a (normally) straightforward exercise in Xcode - Design > Data Model > Add Model Version. The rub here is that this works fine on my MacBook Pro, but failed without error on my Mac Pro. Yesterday, I'd had enough of the git commit && git push -> switch to MBPro, make data model changes -> git commit && git push -> back to Mac Pro routine.

I spent several hours trying to isolate the differences between the two setups - same project, different rev of Xcode. I down-leveled my Xcode install on the Mac Pro - same result. Now things are weird.

I moved /Developer to /Developer.old - clean install. No Love!

What I observed on the failing machine was that the versioned data model was being created in the .xcdatamodeld directory, but was not being added to the Xcode project.pbxproj file. Very Frustrating.

I grabbed Activity Monitor to watch the open files for Xcode to see if I could determine what was going on.

I noticed that DefaultFolderX (DFX) had a scripting addition loaded into my Xcode process. I disabled DefaultFolderX and voilà, I was able to add my versioned data model file.

Were it that this is the end of the story. I sent a note off to Jon Gotow at St. Clair Software with my observations. Jon quickly replied and asked if I was by chance using a Logitech mouse. I am. He further suggested that I look to see if /Library/ScriptingAdditions/LCC Scroll Enhancer Loader.osax was being loaded.

I re-enabled DFX and saw that indeed LCC Scroll Enhancer was loaded, with errors. I did a quick sudo rm "/Library/ScriptingAdditions/LCC Scroll Enhancer Loader.osax", restarted Xcode and everything is working well again.

Many thanks to Jon for his quick and professional response. Logitech - my bill has been remitted.

Application Development Post Mortem


Were it that this was a post mortem for the recently released Tickets.app :-)

Rather, it's a note that I need to do so.

Daniel Kennett of KennettNet Software has put together a few nice post-mortems, most recently this one detailing the development of an iPhone companion app.

Whether you put together a presentation, a video or simply scratch some notes in your moleskine, the act of analyzing your performance on a product development or contract development effort is a good one.

I keep a page in VoodooPad for each development release and capture notes about what I could do better or differently the next time around.

iPhone Companion Apps: New Project to App Store in Two Months | Daniel Kennett

Links for 2010-01-12


set :git_enable_submodules, 1

(tags: git capistrano github recipes configuration)

Froth is an Objective-C web application framework that brings the power and simplicity of Cocoa development to the web.

While froth web apps are technically deployable on many different platforms using Cocotron, currently our focus has been on the Amazon EC2 cloud.

(tags: objc amazon ec2 ami web programming)

Forwarding ssh keys

(tags: git ssh capistrano)

sudo a2enmod rewrite

(tags: apache ubuntu configuration)