Gameability - the ability for people to modify responses

Today on ActKM, Joe Firestone responded to Dave Snowden's email that I mentioned in my blog (I have a dreadful feeling that I have opened a Pandora's box and will now have to keep posting these increasingly interesting exchanges here).

I post it here because it raises some good questions:

Dave,

Glad to hear you're coming my way the week of the 17th. Please let me know what your schedule is and I'll be happy to make some time.

I'm agreed on Popper off-line. This is not the place for extended discussions about particular philosophers. We seem to be in agreement up to here where I'll begin to interleave:

> However I cannot agree with your argument that better theoretical work
> would produce measures that could not be gamed, or at least not
> without a major paradigm shift in the theoretical underpinnings.

I never did set much store by the notion of "Paradigm Shift," since I was never very clear on what Kuhn meant by "paradigm," even after he tried to clarify what he meant in later editions of "Structure . . ." Anyway, insofar as "paradigm shift" refers to a change in world view, I think that "better theoretical work" pretty much implies that one's world view will change, since it implies both different and better theoretical frameworks and the different theories and models employing these frameworks.

> Given that we agree that the current measures are subject to gaming
> this may not be a major issue.

Agreed.

> However I think it goes to the heart
> of our different understandings of the application of Complexity.
> Taking a complex system as non-causal (in the sense of linear
> causality or the sort of interlocking causality you see in systems
> thinkers like Forrester) then any outcome based measure is a priori
> false.

Unfortunately, I'm not clear on either what you mean by an "outcome-based measure" or on what you mean by "false" in this context. By "false," do you mean that the variation in a data variable (whatever the level of its scaling) doesn't correspond to the variation in the abstract variable it is supposed to be measuring?

> I have studied this (thanks for checking) and I have not found any
> meaningful outcome based measure that has validity or is not gamable
> other than ones which are so explicit they are useless. If you want to
> throw up some examples then I am happy to deal with them.

I'll hold off throwing up some examples until I'm clearer on what you mean by "outcome-based" and "false." Sorry to be so cautious, but I think this area is fraught with confusion and I want to make sure I understand your vocabulary.

> Indicators on the other hand, of impact I think are possible.

And that's one of the reasons why I just asked for clarification. The distinction you seem to be making between "outcome-based measures" and indicators of impact isn't apparent to me.

Moving on a bit further, your distinction between gameable and non-gameable indicators suggests that variations in "gameable indicators" do not measure the variation in the abstract variables they are supposed to measure. As such indicators are used over time, human efforts to manipulate them grow in influence, so that variation in the indicator comes to reflect those attempts at manipulation rather than the systemic factors which, in the absence of such interference, would produce a correspondence between variation in the indicator and variation in what it is measuring.

Put simply, "gameable indicators" of abstract variables or constructs are not indicators at all. Only "non-gameable indicators" may be valid indicators, which suggests that when indicators are specified in the first place, they should be evaluated in the light of their "gameability," and immediately rejected if found to be gameable in the normal context of organizational interaction.

Now, what makes a "non-gameable" indicator? Either an inability of humans to control or greatly influence the variable at all, or an inability to influence its variation without paying a cost great enough that the net benefit they derive from influencing it is less than the net benefit they derive from not attempting to influence it.

The trouble with survey indicators is that respondents can manipulate them virtually without cost to themselves.

Best,

Joe
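To put Joe's second condition for a non-gameable indicator into rough symbols (my shorthand, not his): for the people who could manipulate the indicator, it stays non-gameable only while

(benefit of gaming) − (cost of gaming) < (benefit of leaving it alone)

which is why, as Joe notes, survey indicators fare so badly: the cost term for a respondent is effectively zero.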
