29 March 2008

Studies of Complexity theory & KM

After a post on ActKM today in which I mentioned complexity and SME research, Dave Snowden made the following suggestions:

Durham University and also Trieste, along with others, have done research on SMEs using complexity theory. Some of the original work on co-opetition between garment workers in Milan is seminal in the field, along with Allen's work on trawlers.

As a result I found the following papers:

  • Dr Pierpaolo Andriani at Durham University, UK.
  • Complexity & Systematics, Trieste

I will add more to this post as I find them.

Posted at 29 Mar @ 8:48 PM

26 March 2008

Analysing 43 definitions

Stephen Bounds, a contributor to the ActKM forum, has done an analysis of the 43 definitions of KM and published it on his Wiki.

He broke the definitions down by theme and, interestingly, the top five came out as follows:

  • Improved Execution (28)
  • Knowledge Distribution (26)
  • Knowledge Creation (21)
  • Learning (14)
  • Value from Knowledge Assets (12)

He commented in his posting that he was surprised at the number of process-based definitions that don't actually require any "management" to do KM. I think this is to be expected with KM definitions: the tacit stuff isn't just hard to do, it's hard to define too, so definitions of it will easily be outnumbered by those focusing on the more explicit forms of KM. In fact, I'm surprised that more didn't focus on the technology itself.
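
As a rough sketch of the kind of tally this involves (each definition tagged with the themes it touches, then counted), here is an illustrative snippet. The theme names follow Stephen's top five, but the tagged definitions themselves are invented for the example:

```python
from collections import Counter

# Hypothetical input: each of the 43 definitions, tagged with the
# themes it touches. The tags below are invented for illustration.
definitions = [
    {"id": 1, "themes": ["Improved Execution", "Knowledge Distribution"]},
    {"id": 2, "themes": ["Knowledge Creation", "Learning"]},
    {"id": 3, "themes": ["Value from Knowledge Assets", "Improved Execution"]},
    # ... one entry per definition, 43 in all
]

# Count how many definitions mention each theme, then list the top five.
theme_counts = Counter(t for d in definitions for t in d["themes"])
for theme, count in theme_counts.most_common(5):
    print(f"{theme}: {count}")
```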

His Wiki page can be seen here.

19 March 2008

What is Knowledge Management???

Ray Sims' list of 43 definitions of KM (posted on ActKM) is a great start to a sometimes frustrating subject. What is the true definition of KM?
http://blog.simslearningconnections.com/?p=279

I have to say, though, that I am not as negative as Ray when it comes to the title KM. I think the fact that it adapts to people's understanding can be a real benefit. Rather than being misleading, it evokes first an inquisitive frown as people consider the possibility of these two words sitting together; then, after some discussion about the benefits (I usually say something like "It is teaching organisations how to stop reinventing the wheel"), they start to think of the possibilities that managing knowledge can bring, even if it can never be managed in the same way explicit information is.

06 March 2008

People First

The post below on ActKM from Keith De La Rue at Telstra presents an interesting application of the "people first, technology second" principle at Lend Lease.

Peter-Anthony -

You asked about a "... collaboration Enterprise tool that enables you to find internally (to the Org.) the individual (or group) who has the right experience/knowledge to help you out with a specific problem?"

I'm surprised that no-one seems to have referred yet to the Lend Lease "ikonnect" model (unless it came in since the last Digest). I am not an expert on this, but from what I understand, it relies on people first and systems second. A central group of well-connected people field questions for experts and use their personal networks to connect the question-askers with the relevant experts. After the question is answered, they document the response. This way, a database of both experts and expertise can be built up.

You can see the public face of this - with the names of contact people - at http://www.ikonnect.com/.

Does this help? Hopefully someone out there has more info on this. It seems to me to be a really strong application of Organisational Networks.

In fact, what we are talking about here is meta-expertise. Sort of sounds a bit like Luke's post on where Librarianship is going...

"We are moving away from content and collection, and moving to context and connection."

- Michel Bauwens

Regards,

Keith.
----------------------------------------------------------------------
Keith De La Rue
Knowledge Manager
Telstra, Australia
+61 3 9203 7812
Blogging at: http://delarue.net/
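
Stepping back from Keith's post: the ikonnect model as described is essentially a brokered question-routing loop. A connector fields a question, uses their personal network to find an expert, and documents the answer, so that a register of experts and expertise builds up over time. Below is a minimal sketch of that loop, with all names and structures invented for illustration (this is not Lend Lease's actual system):

```python
from dataclasses import dataclass, field

@dataclass
class Answer:
    question: str
    expert: str
    response: str

@dataclass
class Connector:
    """A well-connected person who brokers questions to experts."""
    # Personal network: topic -> people known to have relevant experience.
    network: dict = field(default_factory=dict)
    # Documented responses accumulate into a register of experts and expertise.
    knowledge_base: list = field(default_factory=list)

    def route(self, question: str, topic: str):
        """People first: find a person via the network, then document the answer."""
        experts = self.network.get(topic, [])
        if not experts:
            return None  # no known expert yet; the connector would ask around
        expert = experts[0]
        response = f"(answer from {expert})"  # in reality, a conversation
        answer = Answer(question, expert, response)
        self.knowledge_base.append(answer)  # document after the fact
        return answer

# Usage: the register grows as questions are answered.
c = Connector(network={"contracts": ["A. Expert"]})
c.route("Who has run a design-and-build contract?", "contracts")
print(len(c.knowledge_base))  # 1
```

Note that the system is only a by-product: the connection happens person-to-person first, and the database is captured afterwards.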


03 March 2008

Gameability - the ability for people to modify responses

Today on ActKM, Joe Firestone responded to Dave Snowden's email that I mentioned in my blog (I have a dreadful feeling that I have opened a Pandora's box and will now have to keep posting these increasingly interesting posts here).

I post it here because it raises some good questions:

Dave,

Glad to hear you're coming my way the week of the 17th. Please let me know what your schedule is and I'll be happy to make some time.

I agree on taking Popper off-line. This is not the place for extended discussions about particular philosophers. We seem to be in agreement up to here, where I'll begin to interleave:

> However I cannot agree with your argument that better theoretical work
> would produce measures that could not be gamed, or at least not
> without a major paradigm shift in the theoretical underpinnings.

I never did set much store by the notion of "Paradigm Shift," since I was never very clear on what Kuhn meant by "paradigm," even after he tried to clarify what he meant in later editions of "Structure . . ." Anyway, insofar as "paradigm shift" refers to a change in world view, I think that "better theoretical work" pretty much implies that one's world view will change, since it implies both different and better theoretical frameworks and the different theories and models employing these frameworks.

> Given that we agree that the current measures are subject to gaming
> this may not be a major issue.

Agreed.

> However I think it goes to the heart
> of our different understandings of the application of Complexity.
> Taking a complex system as non-causal (in the sense of linear
> causality or the sort of interlocking causality you see in systems
> thinkers like Forester) then any outcome based measure is a priori
> false.

Unfortunately, I'm not clear on either what you mean by an "outcome-based measure" or what you mean by "false" in this context. By "false," do you mean that the variation in a data variable (whatever the level of its scaling) doesn't correspond to the variation in the abstract variable it is supposed to be measuring?

> I have studied this (thanks for checking) and I have not found any
> meaningful outcome-based measure that has validity or is not gameable
> other than ones which are so explicit they are useful. If you want to
> throw up some examples then I am happy to deal with them.

I'll hold off throwing up some examples until I'm clearer on what you mean by "outcome-based" and "false." Sorry to be so cautious, but I think this area is fraught with confusion and I want to make sure I understand your vocabulary.

> Indicators of impact, on the other hand, I think are possible.

And that's one of the reasons why I just asked for clarification. The distinction you seem to be making between "outcome-based measures" and "indicators of impact" isn't apparent to me.

Moving on a bit further, your distinction between gameable and non-gameable indicators suggests that variations in "gameable indicators" do not measure the variation in the abstract variables they are supposed to measure. As such indicators are used over time, human efforts to manipulate them grow in influence, so that variation in an indicator reflects those attempts to influence its value rather than the systemic factors which, in the absence of such manipulation, would produce a correspondence between variation in the indicator and variation in what it is measuring.

Put simply, "gameable indicators" of abstract variables or constructs are not indicators at all. Only "non-gameable indicators" may be valid indicators, which suggests that when indicators are specified in the first place, they should be evaluated in the light of their "gameability," and immediately rejected if found to be gameable in the normal context of organizational interaction.

Now, what makes a "non-gameable" indicator? Either an inability of humans to control or greatly influence the variable at all, or an inability to influence its variation without paying a cost high enough that the clear net benefit they derive from influencing it is less than the clear net benefit they derive from not attempting to influence it.

The trouble with survey indicators is that respondents can manipulate them virtually without cost to themselves.

Best,

Joe
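
Joe's closing criterion can be made concrete: an indicator resists gaming either when people cannot influence it at all, or when manipulating it costs more than the manipulation is worth. Here is a toy sketch of that net-benefit test (the function and numbers are mine, purely for illustration):

```python
def worth_gaming(benefit: float, cost: float) -> bool:
    """Roughly Joe's criterion: a rational actor games an indicator only
    when the net benefit of manipulating it exceeds the net benefit of
    leaving it alone (taken here as zero)."""
    return benefit - cost > 0

# A survey response costs the respondent almost nothing to distort,
# so nearly any benefit makes gaming worthwhile.
print(worth_gaming(benefit=1.0, cost=0.0))   # True

# An indicator that is expensive to manipulate resists gaming for the
# same benefit, which is what makes it a candidate "non-gameable" indicator.
print(worth_gaming(benefit=1.0, cost=10.0))  # False
```

This is why Joe singles out survey indicators: the cost term is effectively zero, so the test almost always comes out in favour of gaming.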