“I don’t know how much more emphasized step 1 of refactoring could be: don’t touch anything that doesn’t have coverage. Otherwise, you’re not refactoring; you’re just changing shit.” – Hamlet D’Arcy
It’s no secret that Agile and Lean methodologies have a lot in common. At Code for America we try to apply both: to the software we write and to the projects we build and deploy in cities. While the quote above refers to Test Driven Development and rewriting code, it applies just as easily to projects as a whole and their metrics for success. How do we define the success of the software projects we’re building at Code for America? And as those projects change and develop, how do we know we’re making progress?
David Binetti of Votizen spoke to Code for America last month about how he developed Votizen using lean methodology. The slide at the top of this post is from his deck, and comes from Dave McClure’s Acquisition, Activation, Retention, Referral, Revenue (AARRR) Model.
- Acquisition: users visit the website
- Activation: users sign up
- Retention: users come back
- Referral: users like and refer
- Revenue: users pay for something
Revenue is hairy: as a nonprofit, Code for America doesn’t measure success by the amount of money people pay or donate. Money is important and necessary, but it isn’t success. Our success is measured by the fulfillment of our mission: making behavioral, structural, and social changes in the relationship between government and citizens. So we have to make a slight change:
- Mission: users do something differently and better
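The adapted funnel can be sketched as a chain of conversion rates, which is how these stages are usually tracked in practice. This is an illustrative sketch only: the stage counts below are hypothetical, not real Code for America numbers.

```python
# Hypothetical funnel counts for the adapted AARRR stages listed above.
funnel = [
    ("Acquisition", 10000),  # users visit the website
    ("Activation", 2500),    # users sign up
    ("Retention", 1200),     # users come back
    ("Referral", 300),       # users like and refer
    ("Mission", 90),         # users do something differently and better
]

def conversion_rates(stages):
    """Return (name, count, rate) per stage, where rate is the
    fraction of users carried over from the previous stage."""
    results = []
    prev = None
    for name, count in stages:
        rate = None if prev is None else count / prev
        results.append((name, count, rate))
        prev = count
    return results

for name, count, rate in conversion_rates(funnel):
    if rate is None:
        print(f"{name:12} {count:6}")
    else:
        print(f"{name:12} {count:6} ({rate:.0%} of previous stage)")
```

The point of laying the stages out this way is that each conversion rate is a separate, measurable claim; a change to the product should move a specific rate, not just the vague total.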
Mission is still hairy: it can’t be measured to the decimal place the way revenue can, but at least it aligns more closely with the structure of Code for America. Also, as David Binetti’s slide shows, that fifth element (in our case, Mission) likely won’t make an appearance for several iterations. So what is it? The typical metrics of nonprofit communications are “outreach” and “awareness,” but those are already covered by steps one and two. Even “engagement” can be covered by Referral: the user is actively sharing and reinterpreting the product’s message. Not to mention that stakeholders (funders, government, and community leaders) may have different visions or interpretations of that mission.
Anyone using the product is, by definition, doing something differently, so it’s important to make sure the action that results is an improvement on the status quo: simply format-shifting the same users from paper forms to smartphone apps isn’t inherently better without explicit claims about convenience, time, or accessibility. In this scenario, increasing the satisfaction a user feels when interacting with government could serve as a Mission metric for Code for America, but you still have to measure it explicitly as the project iterates.
Measurement brings us back to the initial quote. At Code for America we’re expected to iterate on our projects; but to actually iterate, and not just “change shit,” we have to document our metrics and measure them at every iteration. Defining metrics is hard: not only does it require aligning stakeholders on the definition and quantification of success, it also means facing a long and indeterminate path at a time when stakeholders (myself included) might rather bask in the rose-colored, fuzzy glow of undefined potential.
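Documenting metrics at every iteration can be as simple as recording the same named numbers each time and computing the relative change between iterations. A minimal sketch, with invented metric names and numbers:

```python
# Hypothetical per-iteration metric log. The iteration numbers,
# metric names, and values are all invented for illustration.
iteration_metrics = {
    1: {"acquisition": 8000, "activation": 1800},
    2: {"acquisition": 10000, "activation": 2500},
}

def deltas(before, after):
    """Relative change for each metric recorded in both iterations."""
    return {
        key: (after[key] - before[key]) / before[key]
        for key in before.keys() & after.keys()
    }

changes = deltas(iteration_metrics[1], iteration_metrics[2])
# e.g. changes["acquisition"] == 0.25, i.e. a 25% increase
```

However lightweight the tooling, the discipline is the same: the comparison only means something if the same metrics were defined and captured before the change was made.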