Monday, June 10, 2013

Doables, deliverables and results - change habitual mental models for more sustainable results

When the Bloor Viaduct in Toronto was being designed, the designer apparently had the prescience to include enough in the design to allow for future subway trains, even though at the time no such trains existed in Toronto.

In today's world of narrow targets and measurable deliverables, we obscure the broader horizon of potential, making ourselves less resilient. The impulse to make things manageable by analyzing them into discrete components, picking two or three to "do" and finding easy "measures" for them means we lose context and settle for the illusory sense of safety provided by a narrowed horizon. We may be able to check things off the list, but are they meaningful contributions? Planning that engenders tidy, comfortable conceptual grids is not the path to resiliency but to "a foolish consistency." Today, our planning models render us more machine-like than organic and responsive to real needs. As a result, they break more easily, littering and cluttering our mindscape. That's why we're so grey and tired at the end of the day.

So the question for all of us is: how can we insert possibility and breadth into our mindspaces and workspaces?


  1. This is the problem of an inherent design principle in our bureaucratic structures: they are implicitly designed as 'transaction machines' rather than knowledge systems. A transaction machine is a mechanical implementer of executive design and decision, but a knowledge system requires explicit processes for conversation.

    Knowledge can't be exchanged in a mechanical manner, and a machine-like mechanism especially cannot generate new knowledge. Accessing the iceberg of knowledge that lies below the explicit surface of transaction requires conversation: establishing context and developing appropriate language produces engagement, which enables agreement, which in turn leads to coordinated action - trans-action.

  2. The mechanical model has morphed out of the success of the law-of-nature model, which eliminates context in order to make statements of universality. And while it does work for machines (a la Wigner's "unreasonable effectiveness"), it only works when the initial conditions are knowable in advance and "placeable by hand." (Where this breaks down is best illustrated by Stuart Kauffman in his discussion of the adjacent possible.) Most things are far from that condition, yet we continue to apply that mental model and feel entitled to call whatever comes out of it "objective."