Conversations on Big Data is a series of discussions with executive-level leaders about using big data analytics in creative and interesting ways to improve business outcomes. Today’s Conversation is with Gerald Ray, the Deputy Executive Director of the Office of Appellate Operations (Appellate Ops) at the Social Security Administration (SSA). Appellate Ops manages cases where an applicant for disability has appealed a ruling on their eligibility for benefits. Gerald cites Supreme Court Justice Rehnquist as describing this type of law, administrative law, as only slightly less complex than tax law.
Gerald’s interest in applying analytics to the law started “when I went to law school … they would have us read cases and then extract some maxims … as if there was logic to it all. And there was some logic to it. But what I also saw is … divergent opinions from different judges.” For example, Gerald cites the premise that judges possess expertise as a result of years of training and experience. Yet the data show that different judges, sitting at similar levels of the judicial system and with very similar training and experience, can and do render different judgments. There appears to be a subjective factor in applying the law that makes performance difficult to quantify. Gerald says he realized that while the law leaves room for interpretation, the legally prescribed policy typically does not. Applying the policy is a process that, once defined, can be readily measured. Gerald also realized that if the policy has not been applied consistently, the only practical remedial action is behavior change.
Gerald cites several interesting influences on his approach to “marrying” analytics and the law. One is Nobel Prize-winning economist Daniel Kahneman. Gerald told me a story in which he asked Professor Kahneman what it takes to make someone truly expert at what they do. The answer had three components:
1. A stable process
2. People immersed in the process
3. Feedback on adherence to the process
Without the feedback component, people will start to diverge. With feedback, people tend to conform to the process, even if outcomes still diverge.
When Gerald joined SSA there was little performance data per se, but there was a lot of historical data about the numbers and types of cases. The cases managed by Appellate Ops ultimately result in one of two possible verdicts: eligible or ineligible. Given both the “binary” nature of the decisions being processed and the requirement to comply with policy, Gerald realized that he needed to map every process path a case could possibly take to reach an outcome. He and his staff analyzed the historical cases and created a “decision tree” model based on over 2,000 data points.
Based on this model, the Appellate Ops team then ran the historical data against it to measure historical compliance. The initial findings showed a large number of cases being transferred back and forth between Appellate Ops and the District Court. This surfaced the problem to focus on: how could the quality of the work be improved so that an applicant either receives approval of benefits or proper, accurate due process, with no bouncing back and forth?
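The compliance measurement described above can be sketched in miniature. This is a hypothetical illustration, not SSA’s actual model: the checkpoint names, case fields, and rules are invented, and the real decision tree spans over 2,000 data points rather than three. The idea is simply to walk each historical case through the policy-prescribed path and flag cases whose recorded verdict deviates from it.

```python
# Hypothetical sketch of measuring policy compliance with a decision tree.
# Field names and rules are illustrative only, not SSA's actual model.

def policy_outcome(case):
    """Walk a case through a tiny decision tree and return the
    policy-prescribed verdict: 'eligible' or 'ineligible'."""
    if not case["application_complete"]:
        return "ineligible"
    if case["severity_score"] >= 7:        # severe impairment
        return "eligible"
    if case["can_perform_past_work"]:      # can still do prior job
        return "ineligible"
    return "eligible"

def compliance_rate(historical_cases):
    """Share of cases whose recorded verdict matches the policy path."""
    matches = sum(
        1 for c in historical_cases if c["recorded_verdict"] == policy_outcome(c)
    )
    return matches / len(historical_cases)

cases = [
    {"application_complete": True, "severity_score": 8,
     "can_perform_past_work": False, "recorded_verdict": "eligible"},
    {"application_complete": True, "severity_score": 3,
     "can_perform_past_work": True, "recorded_verdict": "eligible"},  # deviates
]
print(compliance_rate(cases))  # 0.5
```

Running every historical case through such a model yields a single compliance number, and, just as importantly, identifies which branch each non-compliant case fell off of.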
To tackle this objective, Appellate Ops created a heuristic model, “a sort of rule of thumb approach” to applying the law. Based on Professor Kahneman’s research, Gerald required that a feedback loop be added to the model, guiding staff, including judges, to specific training activities whenever a case was returned. They worked with operations subject matter experts to create training programs aimed not at teaching the law, but at teaching how to apply the law.
A feedback loop was also built into the training programs and was refined over time. One of the training innovations that the SSA built into the process, which won the Deming Award from the US Graduate School in 2011, is tiered training. “Every time we’re returned a case, they get a link … first tier is a little handy-dandy guide to applying the law correctly. There’s also a second tier … with hyperlinks to the regulations for more detail. So if you really don’t understand the issue, you can self-train”.
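The tiered feedback described in the quote above can be sketched as a simple lookup: a returned case carries an issue, and the issue maps to a first-tier quick guide and a second-tier link into the underlying regulations. The issue codes and URLs below are invented for illustration; SSA’s actual link structure is not described in the interview.

```python
# Hypothetical sketch of tiered training links on a returned case.
# Issue codes and URLs are made up for illustration.

TRAINING_TIERS = {
    "evidence_evaluation": [
        "https://intranet.example/guides/evidence-quick-guide",  # tier 1: quick guide
        "https://intranet.example/regs/evidence-regulations",    # tier 2: regulations
    ],
    "onset_date": [
        "https://intranet.example/guides/onset-quick-guide",
        "https://intranet.example/regs/onset-regulations",
    ],
}

def feedback_links(issue_code, depth=1):
    """Return training links for a returned case: the tier-1 quick guide
    by default, plus deeper regulatory material when depth > 1."""
    tiers = TRAINING_TIERS.get(issue_code, [])
    return tiers[:depth]

print(feedback_links("onset_date"))           # tier 1 only
print(feedback_links("onset_date", depth=2))  # both tiers
```

The point of the tiering is self-service: the quick guide covers the common case, and the regulatory links are there only for staff who “really don’t understand the issue.”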
To increase the effectiveness of the training, Gerald researched how adults learn. Most training programs present material and expect adults to understand and retain it without any context. That kind of rote learning works with children, but not so well with adults. Adults learn better when material is framed in terms of value added: what practical use does this have for me? Adults also learn more effectively when they can relate their own progress to that of a peer group. So an additional motivational strategy Gerald proposed was a tool called “How Am I Doing?”, which lets employees see how their own performance ranks within the group.
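One simple way a “How Am I Doing?” comparison could work is a percentile rank within the peer group. The metric and scores below are invented; the interview does not describe how the SSA tool computes its rankings.

```python
# Hypothetical sketch of a "How Am I Doing?" peer comparison.
# Scores are invented; the actual SSA metric is not described in the interview.

def percentile_rank(my_score, peer_scores):
    """Percentage of peers whose score is at or below my_score."""
    at_or_below = sum(1 for s in peer_scores if s <= my_score)
    return 100.0 * at_or_below / len(peer_scores)

peers = [72, 81, 90, 65, 88, 77]
print(percentile_rank(81, peers))  # 4 of 6 peers at or below 81
```

Showing a rank rather than a raw score taps the peer-comparison effect Gerald describes: employees can see where they stand without being shown anyone else’s individual numbers.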
Gerald says that as a result of applying analytics to the continuous improvement approaches outlined above, performance on policy compliance has improved by over 12%, and the percentage of cases being sent back has dropped from 25% to 15%, “a substantial improvement in the quality of the work”. Stay tuned for a future blog where he shares more about the tools and techniques he used.
For the complete audio interview, please visit: http://ourpublicservice.org/OPS/events/bigdata/