Consolidating data centers has always made sense. Having the data securely tucked away in well-managed data centers has got to improve performance and even cybersecurity. It also requires fewer of those really smart, well-trained (expensive) system administrators to keep things straight. And if done right, it should save on energy costs by reducing the square footage needed for environmentally sensitive equipment.
As DoD frantically looks under every rock in sight to find funding to pay for critical must-haves, perhaps it’s a good time to review how this consolidation is occurring. Most consolidations go something like this:
1.) Assess: Go out there and figure out what equipment is still lurking in closets and back rooms, who is supporting it, and with what money. There are plenty of great tools being sold to DoD to help locate pieces of rogue, holdover equipment. Some are more effective than others. From my experience, this can be really hard! There is an entire “army” of IT professionals out there who can keep their gear under the radar. After all, end users don’t really want to give up control of their important equipment to an entity back “there”.
2.) Analyze: This is usually just the engineers figuring out how the equipment can be moved and preparing a technical design to do so.
3.) Execute: Do what your designs from step 2 told you to do.
4.) Sustain: Keep the stuff running and provide support so the end user doesn’t regret its geographic displacement.
Yes, there are pennies to be saved by doing the above, which is why it keeps getting done, over and over again at different levels, often in an independent and uncoordinated fashion. But we are missing a huge opportunity here! Before the equipment and data are moved to these consolidated data centers, hard decisions need to be made in the form of “data rationalization”.
Most DoD CIOs are trying very hard to do this rationalization, but it’s really hard work. Before you can rationalize the data, you have to standardize definitions and identify commonalities. From there, hard decisions have to be made to choose the best-of-breed and common data models to map from legacy systems that have been around a long time (and work just fine, thank you very much). To make that happen you need the magic combination of technical authority AND technical expertise. Usually the folks who have the authority (own the data) don’t have a clue about how to do this. And the folks who understand this (the geeks) don’t have a shred of authority.
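To make the "standardize, then map" idea concrete, here is a minimal sketch of what rationalizing legacy records into a common data model can look like in practice. Every system name and field name below is hypothetical, invented purely for illustration; real DoD data models are far larger and messier.

```python
# Canonical data model the governance body has agreed on (hypothetical fields).
COMMON_MODEL = {"asset_id", "location", "owner_org"}

# Per-legacy-system mappings from local field names to the common model.
# "system_a" and "system_b" stand in for long-lived legacy systems.
LEGACY_MAPPINGS = {
    "system_a": {"AssetNum": "asset_id", "Site": "location", "Cmd": "owner_org"},
    "system_b": {"equip_id": "asset_id", "bldg": "location", "unit": "owner_org"},
}

def rationalize(system: str, record: dict) -> dict:
    """Map one legacy record into the common data model.

    Raises ValueError when the record cannot fill every common field,
    which is exactly the kind of gap the hard governance decisions
    have to resolve before the move, not after.
    """
    mapping = LEGACY_MAPPINGS[system]
    out = {mapping[k]: v for k, v in record.items() if k in mapping}
    missing = COMMON_MODEL - out.keys()
    if missing:
        raise ValueError(f"{system}: unmapped common fields: {sorted(missing)}")
    return out
```

For example, `rationalize("system_a", {"AssetNum": "123", "Site": "Norfolk", "Cmd": "HQ"})` yields a record keyed by the common fields, while a record missing `Site` fails loudly instead of arriving at the consolidated data center half-mapped.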
I often think about it like this: what if you moved your furniture from San Diego to Norfolk, and THEN decided you didn’t really need it all? After it was all unpacked from the moving van, you started the process of weeding out what you don’t want to keep. Wouldn’t it be better to put in the effort BEFORE you moved?
Here’s the thing: you can order all the IT cuts you want – pick a number: 10%, 20%, whatever! But cuts don’t really generate efficiencies, do they? If you want to reduce costs and be more efficient, it’s going to take carefully marrying governance and technical expertise, and it all needs to be sequenced in an order that makes sense.