
3 Reasons Why I Love the Digital IQ Rankings

Recently an organization named L2 released the Public Sector Digital IQ rankings: an assessment and ranking of the digital competence of 100 public sector organizations across four dimensions: Site, Digital Marketing, Social Media, and Mobile. It is a pretty fascinating read. Some of the top public sector entities were NASA, the Army, the State Department, the Marines, and the White House, with highlighted projects such as data.gov and the notifications dashboard.


As expected, as soon as the study came out there were a number of critiques of it, including a great analysis by Mark Drapeau, “Does Your Public Sector Digital IQ Mean Anything?”, in which he criticizes the metrics and argues that the focus should instead be on mission goals. There are also other critiques of the study from some of my friends, such as the Open Forum Foundation's analysis.

In contrast, I would like to offer 3 reasons why I really like the study…and rankings in general.

1) Something is Better than Nothing – An 80% accurate metric is better than no metric at all. Yes, the Digital IQ ranking isn’t perfect. But before it, there was no way to show whether an agency was doing well online. It is hard to know whether you or your organization is doing well if there is no ranking or measurement. I’d argue the same is true of educational measurements of teacher effectiveness – there is lots of debate about how accurate they are – but I’d rather have something than nothing.

2) Rankings Provide Legitimacy – Ranking a topic lends it legitimacy. Leaders always want to be first at something, and ranking something signals that it is important. For example, the Partnership for Public Service’s “Best Places to Work” rankings have highlighted an area that was undervalued for years – workplace environment in the public sector. After the rankings were created, senior leaders started focusing on this issue. Hopefully, the same will be true of digital IQ and public sector social media.

3) Rankings Create Change – Tied to number 2, I’ve seen first-hand how, after the Best Places to Work rankings became popular, senior leaders started spending time, money, and energy to improve their workplaces. After U.S. News and World Report created its rankings of universities, universities started spending time, money, and energy to do better. After the Digital Cities rankings became prominent, mayors and senior IT leaders were tasked with working hard to get on the list and become a world-class digital city.

Yes, rankings aren’t perfect. The equations behind them are often questionable (can you really judge a city’s health by its number of bars, as Men’s Health does?), the specific order of who is on top may not be right, and quantitative numbers often fail to capture qualitative distinctions.

But I do think rankings are very useful.
-They offer a rough guide to who is doing a good job (from Best Places to Work, we do know NIH is doing something right on workplace environment compared to DHS)
-They focus senior leaders’ attention on an issue
-In a change management framework, they provide an impetus to move.

Just my 2 cents…


13 Comments


Stephen Peteritas

I think the biggest point with any rankings system is to create competition and light the fire under whoever or whatever is being ranked. It doesn’t matter if the system is flawed (just look at the BCS) as long as it makes those being ranked strive to be better.

J.D. Bailey

Reading this, I suspect the real failure is in appropriately applied technology.

Technology applied to a problem is problematic on ROI. A high D-IQ for an organization does not always translate into high operational value; sometimes there is little value until the user-experience problems that come with a high D-IQ are resolved.

There are many websites/portals in .gov/.mil that have glitz but add little or nothing to the progress or performance of the organization or its operations. The problem is a parochial, boss-focused management style, while mission, purpose, and progress are limited or misdirected by good management policy being applied poorly with technology.

NOTE:

Strategy is a plan,
Tactics are applied.

Strategy is rigid in intent,
Tactics are responsive agility.

Strategy plans a reality,
Tactics deliver actuality.

Management Strategy designs reality from the top-down,
Employee Tactics execute strategy that builds from the bottom up.

All failures of execution start with strategic planning failures in understanding and supporting possible realities.

Too frequently the D-IQ genius (Org/Ops) is lacking in practical common sense and experience.

Wayne Moses Burke

I think it’s important to note that my analysis was on the methodology for the report they released in August on the Digital IQ of Senators. To their credit, they have significantly modified their metric since then, and seem to be very open to constructive criticism.

Lauren Modeen

I see smart thinking on both sides of the story – 1) misguided metrics fostering a false sense of competition (to quote Steve Radick), but also 2) some benchmark is better than no benchmark at all, in that at least it gets agencies thinking…

All this social media stuff is so new to everyone (and that’s only counting the people who can roughly describe it – there’s a host of people to whom the concept is as foreign as building a house on Mars). I guess the reality is, we still have a LONG WAY to go in understanding how to best communicate via these tools, how to measure this communication, how to guide others, and even how to compare the tools within different agencies.

For the first time, these technology tools allow us to measure interactions like never before. So, naturally, we’re obsessed with ranking, calculating, benchmarking, etc. Human behavior, sentiment, communication, etc, is a tricky topic, and one, I suspect, we need quite a bit more time to understand.

I spent some time with a little cousin this weekend, who recently learned how to count. Naturally, he counts everything in sight. The number of plates on the Thanksgiving table, the number of wine glasses on the counter, the number of dog toys strewn about the living room. I think we’re all so excited we now have the capability to measure something, we want to jump right in, and start tallying. It’s great that we’ve come this far, and counting something will teach us better how to later subtract, do multiplication, geometry, calculus, etc. We have to start somewhere.

Overall, I think the entire concept is very exciting, and I’m glad to see smart people taking the time to put something out there, and then other smart people try to make sense of it. That’s exactly what we need to be doing to improve.

Gadi Ben-Yehuda

I’m going to be slightly contrarian to the contrarian contrarian (per Mark Drapeau’s post below). Steve: an 80 percent accurate metric will make some people 100 percent sure about a finding that could be 40 percent off the mark. It adds a patina of science (and with it certainty) to a highly contentious assertion.

That leads to the second point: legitimacy gained through shoddy metrics ends up undermining legitimate legitimacy.

Finally, (related to #1), when people change their policies or strategies the better to comport with flawed metrics, they’ll end up with flawed policies and strategies. Is that the change we’re looking for?

In response to J.D. Bailey, I’d say that tactics are great at winning battles, but you need a strategy to win a war. Tactics don’t speak to mission, only to execution. You need a firm coupling of strategy and tactics if you want to succeed. Either one alone is likely to fail.

Lauren Modeen

Gadi – you make some good points. My question to you is, until we get to legitimate legitimacy, what is the best way to talk about any kind of progress made, or how best to, as Stephen P says, “light a fire under whoever” is being ranked? Is it better to try, iterate, find the flaws and improve, or say nothing until legitimate legitimacy has been achieved?

Gadi Ben-Yehuda

Lauren: I think the best way is to be specific rather than use loaded terms like “IQ.” (the implication being: if you’re at the bottom of the list, you’re a dummy). How about: Best Agency Use of Twitter to Achieve Mission. Or Most Successful Implementation of Fund Drive Using Social Media. Something with concrete metrics and meaningful lessons for other agencies/organizations.

John Bordeaux

@Lauren – agree with Gadi here. Comparing ‘use’ absent a purpose is not the type of iteration that’s useful. I can only imagine the charge inside some agency today: “Get us off the Feeble list!” “OK, sir, I’m off to ask all my friends to like our Facebook page!” We may be early on this journey, as you say, but we know enough about aimless initiatives to know where they lead… (And yes, perhaps a methodology that wasn’t slanted toward business marketing metrics would be useful as well!)

Lauren Modeen

Gadi + John – I like your thinking. I agree and think “IQ” is too severe. It reminds me of why Daniel Goleman’s book Emotional Intelligence was so important when first published. Not everything is meant to be categorized, tabulated, and quantified in the same manner, and doing so can be misleading, unproductive, and potentially damaging. If people take the results too literally, they might end up wasting a lot of money and time, and feeling unnecessarily inadequate. The organizations, agencies, etc., being ranked are quite a different animal than, say, a private product company.

I guess herein lies the fine line between rating/ranking to incite positive change and simultaneously comparing apples to oranges, where certain categories can be misleading and cause more damage than good…

James Merritt

I see the digital IQ as a replacement for your SSN. Then every item sold would come with an IQ value printed on it, and you would not be allowed to purchase or use any item with a higher IQ value than your digital IQ. If you purchase an item whose IQ value is above your digital IQ, the person who sold it to you gets a mandatory jail sentence of one month the first time. Every time they again sell an item with an IQ value higher than the buyer is allowed, they go back to jail, with the sentence doubling for each repeat offense. That would stop lawsuits and people suing manufacturers for stupid stuff.

Christina Morrison

Not all the organizations on this list will find value in these rankings, but I think just attempting to rank and apply some form of measurement can help validate the social media programs of agencies that have them, and measurements can help make the case to managers for implementing social media.

Joshua Joseph

I’m coming to this late, but, for what it’s worth, here are a few thoughts, including several things I’ve learned from the Best Places to Work rankings (I’m a researcher at the Partnership for Public Service).

While there are definitely a number of flaws in the Digital IQ rankings, as folks have pointed out, the real measure of value isn’t flawlessness but whether or not the rankings contribute something over what we currently know…warts and all. Sure, I’d like to know a bit more about how the ratings were determined, and while I probably disagree with some of the metrics, I really liked their overall accessibility. Making a report like this readable isn’t easy, so hats off on even keeping our attention. That’s something to build on! I also like the use of cross-sector comparisons.

From the Best Places rankings, I’ve learned that you don’t have to get it perfect out of the box. You can keep tweaking and improving year after year. Our first year we ranked only the best agencies in government. The next time we included the lowest-ranking agencies, and folks hated it. It was like being “outed.” Before the rankings, very few people even within government were paying attention to these data (which originate from OPM). Then we teamed up with U.S. News for the early rankings, and that drew a lot of people in, including the media. Again, the attention wasn’t all good, but it lit a fire under some high-level folks, and that made a difference.

Even today things aren’t perfect with the rankings. We’re always learning. But putting them out there gets people to ask important questions that they weren’t asking before and (hopefully) to do more to understand what’s working and to address the problems and challenges. In government, where the problems can seem so big, it’s making progress that gives you hope.
