Refactoring Success

“I don’t know how much more emphasized step 1 of refactoring could be: don’t touch anything that doesn’t have coverage. Otherwise, you’re not refactoring; you’re just changing shit.” – Hamlet D’Arcy

It’s no secret that Agile and Lean methodologies have a lot in common. At Code for America we try to apply them both: to the software we write as well as to the projects we build and deploy to cities. While the quote above refers to Test-Driven Development and rewriting code, it can just as easily be applied to the projects as a whole and their metrics for success. How can we define the success of the software projects we’re building at Code for America? And as those projects change and develop, how do we know we’re making progress?

David Binetti of Votizen spoke to Code for America last month about how he developed Votizen using Lean methodology. The slide at the top of this post is from his deck, and comes from Dave McClure’s Acquisition, Activation, Retention, Referral, Revenue (AARRR) Model.

  1. Acquisition: users visit the website
  2. Activation: users sign up
  3. Retention: users come back
  4. Referral: users like and refer
  5. Revenue: users pay something for something
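One way to make the funnel concrete is to treat each stage as a count of users and look at stage-to-stage conversion. Here’s a minimal sketch in Python; the numbers are invented purely for illustration, not taken from any real project:

    # The AARRR funnel as data: each stage pairs a name with a
    # hypothetical user count (all counts invented for illustration).
    funnel = [
        ("acquisition", 10_000),  # users visit the website
        ("activation", 2_500),    # users sign up
        ("retention", 900),       # users come back
        ("referral", 150),        # users like and refer
        ("revenue", 40),          # users pay something for something
    ]

    # Conversion rate from each stage to the next.
    for (stage, count), (_, prev) in zip(funnel[1:], funnel):
        print(f"{stage:>11}: {count:>6}  ({count / prev:.1%} of previous stage)")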

Revenue is hairy: as a nonprofit, Code for America doesn’t measure success by the amount of money people pay or donate. Money is important and necessary, but it isn’t success. Our success is measured by the fulfillment of our mission: that we’re able to make behavioral, structural, and social changes in the relationship between government and citizens. So we have to make a slight change to the fifth step:

  1. Mission: users do something differently and better
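In terms of the sketch above, the change itself is trivial; the hard part is deciding what counts as a mission “conversion”:

    # Continuing the hypothetical funnel above: swap the fifth stage from
    # revenue to mission. The count would be users observed doing
    # something differently and better, however the project defines that.
    funnel[-1] = ("mission", 40)  # was ("revenue", 40)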

Mission is still hairy: it can’t be measured to the decimal place the way revenue can, but at least it aligns more closely with the structure of Code for America. Also, as David Binetti’s slide shows, that fifth element (in our case, Mission) likely won’t make an appearance for several iterations. So what is it? The typical metrics of nonprofit communications are “outreach” and “awareness,” but these are already covered by steps one and two. Even “engagement” can be covered by referral: the user is actively sharing and reinterpreting the product’s message. Not to mention that stakeholders (funders, government, and community leaders) may have different visions or interpretations of that mission.

Anyone using the product is, by definition, doing something differently, so it’s important to make sure the action that results from the product is an improvement on the status quo: for example, simply format-shifting the same users from paper forms to smartphone apps isn’t inherently better unless you can make explicit claims about convenience, time, or accessibility. From the perspective of Code for America’s mission in this scenario, increasing the satisfaction a user has when interacting with government could be a Mission metric, but you still have to explicitly measure it as the project iterates.
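To make that concrete: if satisfaction were the chosen Mission metric, the measurement has to be an explicit comparison against the paper-form baseline rather than an assumption that the app is better. A hedged sketch, with both survey scores invented:

    # Hypothetical satisfaction scores (say, from a 1-to-5 survey); both
    # numbers are invented for illustration.
    baseline_satisfaction = 2.8  # users filing the old paper forms
    app_satisfaction = 3.4       # users of the new smartphone app

    improvement = app_satisfaction - baseline_satisfaction
    print(f"Change vs. baseline: {improvement:+.2f}")
    # Only a repeated, positive result across iterations supports the
    # claim that the format shift is actually better than the status quo.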

Measurement brings us back to the initial quote. At Code for America we’re expected to iterate on our projects, but to actually iterate, and not just “change shit,” we have to document our metrics and measure them for every iteration. Defining metrics is hard: not only does it involve aligning stakeholders on the definition and quantification of success, but it requires facing a long and indeterminate path at a time when stakeholders (myself included) might rather bask in the rose-colored, fuzzy glow of undefined potential.
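In practice, that documentation can be as lightweight as a dated snapshot of every metric at each iteration, appended to a log so each release can be compared with the one before it. A minimal sketch; the file name and metric values here are placeholders:

    import datetime
    import json

    def record_iteration(path, metrics):
        """Append one dated metrics snapshot per iteration to a log file."""
        snapshot = {"date": datetime.date.today().isoformat(), **metrics}
        with open(path, "a") as log:
            log.write(json.dumps(snapshot) + "\n")

    # Placeholder values: each release adds one line to the log, so any
    # iteration can be checked against the last instead of guessed at.
    record_iteration("metrics.jsonl", {
        "acquisition": 10_000,
        "activation": 2_500,
        "retention": 900,
        "referral": 150,
        "mission": 40,
    })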

2 Comments

Josh Nankivel

Great post, very thought-provoking.

Let me throw this out there.

Revenue is simply a proxy attempting to quantify value delivered.

Couldn’t you just change ‘Revenue’ in this model to ‘Value Delivered’ and make specific quantified goals and their achievement (or not) your version of ‘Revenue’?

These goals would have to be in line with your mission and reflect value from your beneficiary standpoint.

So with Code for America you have:

Fellowship – “which connects technologists with cities to work together to innovate”

Accelerator – “which will support disruptive civic startups”

Brigade – “which helps local, community groups reuse civic software”

So how do you measure these? I think metrics like # of users, quantitative measures of user engagement, # of connections made, # and/or impact of supported ‘disruptive civic startups’, # of cases of reuse of civic software, etc.

I’m sure you’ve read The Lean Startup; anyone who hasn’t should. I think there are plenty of analytics stable enough through the process of iteration to accurately gauge the value you’re creating for your users.

Thoughts?

Chris Cairns

Josh, absolutely. I agree with you 100% about establishing a value measurement methodology for the project. Sure, it requires work, but it’s a perfect substitute for revenue as a metric.