It took me years to “reconfigure” (“set aside” is perhaps too harsh) much of what I had learned in grad school about scales and statistics, and focus on providing surveys that: a) first and foremost had respectable (though maybe not sublime) measurement properties, b) made it easy for respondents to “find their answer” (i.e., minimize mental effort for them), and c) yielded quantitative results that were easy for management to think with.
There is a sort of ethical and pragmatic obligation for (a) and (b) to always trump (c). You can’t make effective decisions on the basis of data with poor measurement properties, and you can’t get decent survey data if people are unwilling to answer or are confused. But, that aside, there is a big difference between the way one approaches survey data for academic purposes and how one approaches it to facilitate policy or budgetary decision-making. Sometimes you just have to suppress your instincts and ask yourself, “How is management going to decide on the basis of this information? How can I assist them in not wasting time and energy, and in avoiding bad decisions?”