I spent the first five years of my career at a company that purports to do benchmarking and best practices. In reality, much of what they do is profile innovative case studies of leading companies and then help other companies (their clients) understand how to replicate those practices. That said, I also did some real benchmarking, and I can tell you: it's a gigantic pain.
There's the problem of getting consistent responses over time. There's the problem of different customers not measuring the same things. There's the problem of customers measuring the same things differently. There's the problem of accounting properly for exogenous variables, such as equipment depreciation cost. There's the problem of aggregating the data in a way that's meaningful for the customers who gave you the data in the first place while still hiding specific participants' performance.
Despite all those problems, I have often found benchmarking extremely valuable. Back when I worked on bank operations benchmarking, for example, we found that productivity at check processing operations starts to decline somewhere around 250 million checks per year, probably because the three evil C's of operations (chaos, confusion, congestion) come to dominate beyond that volume. We also found no practical limit to cost improvement in ACH operations at any scale, which explains why Norwest Bank (later Wells Fargo) dominated the ACH processing business.
I mention this value because I learned this week that Vocollect's superior speech recognition for distribution centers has been winning us replacements of competitors' systems in a number of installations in Australia. The engineering team has resisted putting our speech engine up against the competition in a benchmark test, precisely because benchmarking is so difficult, but I believe it's time to do so. We are really the only speech recognition engine that works in a loud distribution center environment. Perhaps it's time to prove it, despite the pain.