You see, our confidence is misplaced. We mistake the indicator for reality. The relevance of data resides in the definition (a.k.a. the "operational definition") of what is counted, and rarely, outside of some narrow scientific disciplines, are such definitions exclusive and exhaustive of the phenomena they are meant to capture.
Speaking from his experience in psychology, Jack Martin of Simon Fraser University explains that, whereas operational definitions provide investigators with initial cues for identifying and more thoroughly understanding phenomena, psychologists frequently treat such definitions as if they were conclusive and exhaustive. Complex phenomena, such as human motivation and confidence, come to be understood narrowly in terms of a small set of predetermined factors. Understanding is reduced to a kind of criteriology, a labelling game. The real purposes of investigation, namely learning and discovery and the making of new and deeper interconnections, are left out of this paint-by-numbers story.
From the psychology lab to the office, the situation becomes culturally entrenched. Those who report, from students to office workers, spend their days manipulating, and their nights worrying about, institutionalized performance measures, standardized tests, and a daily barrage of transactional data, all narrowly defined, often rather arbitrary, and devoid of much concrete relevance. These measures are meant to provide insight into trends and evidence of this or that performance, but in fact they mostly just:
- eliminate context (a small illustration follows this list),
- block connections, common sense, and insight,
- erode our capacity for reasoning,
- keep us too preoccupied to examine our purposes,
- and ultimately put us on the self-sustaining hamster wheel of empty bureaucratic process.
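To make the first point concrete, here is a minimal sketch with made-up numbers showing how an indicator can erase context: two hypothetical teams whose situations could hardly differ more produce exactly the same headline metric. The scores, the team names, and the choice of the mean as the metric are all illustrative assumptions, not anything drawn from Martin's work.

```python
from statistics import mean

# Hypothetical "performance scores" for two teams. A dashboard that
# reports only the headline number sees the teams as identical.
team_a = [70, 70, 70, 70, 70]     # everyone performing steadily
team_b = [100, 100, 100, 30, 20]  # three stars, two people in trouble

for name, scores in [("team_a", team_a), ("team_b", team_b)]:
    print(name, "count:", len(scores), "mean:", mean(scores))

# Output:
# team_a count: 5 mean: 70
# team_b count: 5 mean: 70
# Same indicator, very different realities behind it.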
See also The Logic of Quantophrenia, and possibly Amy Lemay on assessing impact, as reported by Asha Law at casrai.org.