Recent comments from several new listeners brought up a subject I hear about pretty often, especially from people who have not yet had much experience with the distinction I make between knowledge management and knowledge services. To colleagues and clients who know me it’s not a big deal, since, like many people, I have long understood that defining knowledge management (KM) is site specific, relating closely to how intellectual capital is managed (and defined) in the particular organization or company. Knowledge services, on the other hand, is site agnostic, an idea that comes through in the definition my colleagues and I use:
“Knowledge services is an approach to the management of intellectual capital that converges information management, knowledge management, and strategic learning into a single enterprise-wide discipline. The purpose of knowledge services is to ensure the highest levels of knowledge sharing within the organization in which it is practiced, with leadership in knowledge sharing the responsibility of the knowledge strategist.”
For me this distinction comes up often, for I enjoy spending time thinking about how the management of intellectual capital relates to the wide variety of management and leadership principles we’re exposed to. As it turns out, I’m a library person, and one of my favorite pastimes is to visit a research library (usually Butler Library or the Watson Library in Uris Hall, both at Columbia University, where I teach) and browse the literature for articles that might interest me. The list of topics I search is pretty varied, and much of what I browse through is captured in hard copy, always fun to spend time with; the advertisements alone usually provide pretty good insight into the period a particular issue covers.
Other reading, naturally, takes place in online collections, and if I’m reading about how this or that management or customer service idea (to name two types of content I look for) was handled some years back, of course I’m going to look through professional and academic journals online. That’s where the articles show up now, and reading about concepts and ideas from the past helps me understand how some of these management techniques were applied in different time periods. Of course I don’t find much about how knowledge services and knowledge strategy, per se, were implemented if I go back too far, but that doesn’t matter. I like to think about how different practices and principles were put to use in different lines of work, and then I have fun applying them to what we do as we attempt to handle intellectual capital in today’s knowledge domain.
It’s a very personal exercise, but one I find rewarding, and it has led me to think about how we deal with measures and metrics in knowledge services, our topic for today. My route here was a little roundabout, I guess you could say: I ran across a more general paper that resonated with me about how (and why) we apply measures and metrics in knowledge services today. A long time ago (by that I mean a really long time ago: back in 1993, to be exact), a man by the name of W.R. Duncan wrote about the process of project management in the Project Management Journal. In his paper, Duncan noted that most management models establish three processes that support the ongoing management activities of the organization: planning, executing, and controlling. In the intervening years, I’m not sure we have identified any other management processes that replace this particular threesome, and that holds whether we are working directly in knowledge services or in some other capacity in which our expertise in knowledge sharing and knowledge strategy is put to work. And it’s the third process, controlling, that drives us toward success. In my opinion, there isn’t a better definition of “controlling” than the phrase Duncan used in his paper: “measuring progress and taking corrective action when necessary.”
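Duncan’s “controlling” reads, to me, almost like a feedback loop, and for readers who think in code, a minimal sketch might make the idea concrete. The function names, threshold, and figures below are my own illustration, invented for the example, not anything Duncan specified:

```python
def controlling(measure_progress, take_corrective_action, target, tolerance=0.1):
    """Duncan's controlling process, sketched as a feedback loop:
    measure progress toward a goal, and correct when we drift too far off."""
    progress = measure_progress()          # e.g., share of milestones completed
    shortfall = target - progress
    if shortfall > tolerance:              # "...taking corrective action when necessary"
        take_corrective_action(shortfall)  # e.g., reassign staff, adjust scope
    return progress

# Hypothetical usage: 60% of milestones met against a 75% target.
controlling(lambda: 0.60,
            lambda gap: print(f"Corrective action needed; gap of {gap:.0%}"),
            target=0.75)
```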
Is there a key word, as we speak about knowledge services, in Duncan’s phrase? I think so, for while we have an earlier step before we can measure progress, it is the progression toward whatever goal has been established and verified by the enterprise that determines whether knowledge-sharing efforts are going to be successful or not. That earlier step is, of course, the knowledge services audit (or “evaluation,” or “opportunity assessment,” or “appreciative inquiry,” or whatever term the particular organization uses for this important data-gathering exercise). This established procedure (part of Duncan’s executing process, I would suggest) has come to be recognized as a critical element in knowledge services and knowledge strategy development. And it is a different process from controlling, which, to my way of thinking, is where we measure. Indeed, it is through the analysis of the findings of the knowledge services audit that the measuring begins, with those findings and the measurement results used to determine the further steps to be taken as the organization moves toward the corrective action Duncan’s triad requires.
Which brings us to another long-standing management idea (almost a cliché, we hear it so much, and its ubiquity is quickly explained: it is, when all is said and done, a commonly accepted truism). How often have you heard someone, management staff or otherwise, say that “you can’t manage what you don’t measure”? And, yes, this old management adage is as reliable today as it’s always been. Or, as one commentary I read recently put it, unwittingly connecting managing-and-measuring to Duncan’s attention to progress, “unless you measure something you don’t know if it is getting better or worse.” I shared this one with a colleague and she had an immediate response: “Of course,” she said. “We’re not going to improve anything unless we know where things stand. It’s only then that we can start looking at how to make improvements.”
So if we agree on the need to measure (the “why” of our approach before we get to the “what” we’re going to do or “how” we’re going to do it), what are we looking for? Are there key performance indicators for knowledge services? And how do we put those KPIs to work when we’re dealing with intellectual capital in the organizations where we knowledge strategists are employed?
Well, KPI lists are all over the place, and it’s not hard to find one that works for most knowledge-focused departments of a business, or even enterprise-wide. For this discussion I had fun with one I liked (undated, sadly) from Florida Tech. The blog post is titled What makes a good KPI for project management? (yes, I’m back again to the good ol’ PM framework), and its list works well for knowledge services.
According to the post’s author (or authors), effective KPIs share a group of characteristics, and I don’t think we lose anything by connecting them to how we structure any knowledge services project. In this framing, the knowledge services effort is:
- Agreed upon by all parties before the project begins
- Meaningful to the intended audience
- Regularly measured, with quantifiable measurements that can be shared and analyzed across organizational divisions
- Directed toward the benefits the project seeks to deliver
- A basis for critical decision-making throughout the project
- Aligned with organizational objectives and unified with the organization’s efforts
- Realistic, cost-effective, and tailored to the organization’s culture, constraints, and time frame
- Reflective of the organization’s success factors and specific to the organization and the particular project.
And while the folks at Florida Tech note that KPIs may differ from project to project, certain data can be helpful in any organizational situation, including knowledge services. So, when appropriate, we give attention to return on investment (ROI), productivity, cost performance, cycle time, customer satisfaction, schedule performance, employee satisfaction, and alignment with strategic business goals, using such indicators as the following (a quick sketch of the arithmetic follows the list):
- Project schedule
- Stated estimate to project completion
- Any current development backlog
- Labor costs spent per month (or within whatever time cycle has been chosen)
- The current resource allocation for undertaking the project or effort and bringing it to fruition.
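For those who want to see how a few of these measures translate into actual numbers, here is a minimal sketch in Python, using the earned-value conventions common in project management. The data structure, field names, and figures are hypothetical, assumed for illustration; they do not come from the Florida Tech post:

```python
from dataclasses import dataclass

@dataclass
class ProjectSnapshot:
    planned_value: float     # budgeted cost of the work scheduled to date
    earned_value: float      # budgeted cost of the work actually completed
    actual_cost: float       # labor and other costs spent to date
    expected_benefit: float  # estimated value the effort will deliver

def schedule_performance_index(s: ProjectSnapshot) -> float:
    """SPI = EV / PV; below 1.0 means the effort is behind schedule."""
    return s.earned_value / s.planned_value

def cost_performance_index(s: ProjectSnapshot) -> float:
    """CPI = EV / AC; below 1.0 means the effort is over budget."""
    return s.earned_value / s.actual_cost

def return_on_investment(s: ProjectSnapshot) -> float:
    """Simple ROI ratio: (benefit - cost) / cost."""
    return (s.expected_benefit - s.actual_cost) / s.actual_cost

# Example: a knowledge services effort three months in (figures invented).
snap = ProjectSnapshot(planned_value=60_000, earned_value=48_000,
                       actual_cost=55_000, expected_benefit=150_000)
print(f"SPI: {schedule_performance_index(snap):.2f}")  # 0.80, behind schedule
print(f"CPI: {cost_performance_index(snap):.2f}")      # 0.87, over budget
print(f"ROI: {return_on_investment(snap):.2f}")        # 1.73, positive return
```

Numbers like these are only the starting point, of course; as noted above, it is the knowledge strategist’s reading of them, against the audit findings and the organization’s goals, that turns measurement into corrective action.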
From where I sit, the entire controlling process is required for knowledge services, and that includes measures. But the component elements of the process, like so much else having to do with knowledge services, knowledge sharing, and the implementation of a knowledge strategy, require an enormous level of subjectivity on the part of the knowledge strategist, even as we use, as much as possible, the most objective standards and KPIs we can find. They all come together to make up what might well be described as the knowledge strategist’s most serious management challenge.
For more on this subject, see the announcement for Dale Stanley and Deb Hunt’s upcoming course, KMKS 106: Critical Success Factors: Measuring Knowledge Services, beginning October 2 in the SLA/SMR International series. The course offers techniques and tools for measuring knowledge services success.