Archive for October, 2012

Some considerations on the role of systems modeling in planning discourse


Suggestions made by proponents of ‘systems thinking’ or systems analysis to discussions we might call ‘planning or policy discourse’ often take the form of recommendations to construct models of the ‘whole system’ in question, and to use these to guide policy decisions.

A crude explanation of what such system models are and how they are used might be the following: The ‘model’ is represented as a network of all the parts (variables, components; e.g. ‘stocks’) in the ‘whole’ system. What counts as the whole system is determined by the set of such parts that have some significant relationship (for example, ‘flows’) to one another — such that changes in the state or properties of some part will produce changes in other parts. Of particular interest to system model builders are the ‘loops’ of positive or negative ‘feedback’ in the system — such that changes in part A will produce changes in part B, but those changes will, after a small or large circle of further changes, come back to influence A. Over time, these changes will produce behaviors of the system that would be impossible to track with simple assumptions about, say, causal relationships between individual pairs of variables such as A and B.
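The stock-and-flow loop described above can be sketched in a few lines of code. Everything here — the two stocks, the flow coefficients, the number of steps — is an invented illustration, not a fragment of any actual model:

```python
# Minimal stock-and-flow sketch: two stocks A and B coupled in a
# feedback loop. A's level drives a flow into B, and B's level
# feeds back into A, so neither can be tracked in isolation.
# All values and coefficients are illustrative assumptions.

def simulate(steps=10, a=100.0, b=50.0, k_ab=0.05, k_ba=-0.03):
    """Run the loop for `steps` periods and record the trajectory."""
    history = [(a, b)]
    for _ in range(steps):
        flow_into_b = k_ab * a   # change in B driven by A's level
        flow_into_a = k_ba * b   # change in A driven (back) by B's level
        a += flow_into_a
        b += flow_into_b
        history.append((a, b))
    return history

trace = simulate()
```

Even this toy version shows the point made above: after a few steps the value of A depends on its own earlier values via B, which no pairwise cause-and-effect assumption would capture.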

The usefulness of such system models rests on the degree of reliability with which simulation runs of those changes over time will produce predictions that would come true if the ‘real system’ represented by the model could be made to run according to the same assumptions. The confidence in the trustworthiness of model predictions thus relies on a number of assumptions (equally simplistically described):

– the number of ‘parts’ (variables, components, forces, ‘stocks’) included;
– the nature and strength of relationships between the system variables;
– the magnitudes (values) of the initial system variables, e.g. stocks.

System models are presented as ‘decision-making tools’ that allow the examination of the effects of various possible interventions in the system (that is, introduction of changes in system variables that can be influenced by human decision-makers), given various combinations of conditions in variables that cannot be influenced but must be predicted, as well as assumptions about the strength of interactions — all in order to achieve certain desirable states or system behaviors (the ‘goals’ or objectives, measured by performance criteria of the system). System modelers usually refrain from positing goals: they either assume them as ‘given’ by assumed social consensus or by directives from the authorities funding the study (a habit that has come in for considerable criticism), or leave it up to decision-maker ‘users’ of the system to define the goals and use the simulations to experiment with different action variables until the desired results are achieved.

Demonstrations of the usefulness or reliability of a model rest on simulation runs for past system states (for which the data about context and past action conditions can be determined): the model is deemed reliable and valid if it can produce results that match observable ‘current’ conditions. If the needed data can be produced and the relationships can be adjusted with sufficient accuracy to actually produce matching outcomes, the degree of confidence we are invited to invest in such models can be quite high: very close to 100% (with qualifications such as ‘a few percentage points plus or minus’).
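The validation procedure described here can be sketched as a simple back-test: run the model over a past period and count how often its output falls within tolerance of what was actually observed. The toy model, ‘observed’ numbers, and tolerance below are all invented for illustration:

```python
# Back-testing sketch: score a model by how many past observations
# it reproduces within a relative tolerance. The model is any
# function mapping one system state to the next-period state.

def backtest(model, initial_state, observed, tolerance=0.05):
    """Return the fraction of observed data points the model
    reproduces within `tolerance` (relative error)."""
    state = initial_state
    hits = 0
    for target in observed:
        state = model(state)
        if abs(state - target) <= tolerance * abs(target):
            hits += 1
    return hits / len(observed)

# Toy model (3% growth per step) against hypothetical 'observed' data:
growth = lambda s: s * 1.03
score = backtest(growth, 100.0, [103.0, 106.1, 109.3])
```

A score near 1.0 is what licenses the ‘close to 100%’ confidence claim — but note that the score says only that the model matched the past, not that its assumptions will keep holding in the future, which is exactly the reservation developed below.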

The usual planning discourse — that is, discussion about what actions to take to deal with situations or developments deemed undesirable by some (‘problems’) or desirable improvements of current conditions (‘goals’) — unfortunately uses arguments that are far from acknowledging such ‘whole system’ complexity. Especially in the context of citizen or user participation currently called for, the arguments mostly take a form that can be represented (simplified) by the following pattern, say, about a proposal X put forward for discussion and decision:

(1) “Yes, proposal X ought to be implemented,
because
implementing X will produce effect (consequence) Y
and
Y ought to be aimed for.”

(This is of course a ‘pro’ argument; a counterargument might sound like:

(2) “No, X should NOT be implemented
because
Implementing X will produce effect Z
and
Z ought to be avoided.”

Of course, other forms of ‘con’ argument are possible, targeting either the claim that X will produce Y (granting that Y is desirable), or the claim that Y is desirable (granting that X will indeed produce Y)…)

A more ‘sophisticated’ version of this typical (‘standard’) planning argument would perhaps include consideration of some conditions C under which the relationship X → Y holds:

(3) “Yes, X ought to be implemented,
because
Implementing X will produce Y if conditions C are present;
and
Y ought to be aimed for;
and
conditions C are present.”

While ‘conditions C’ are mostly thought of as simple, one-variable phenomena, the systems thinker will recognize that ‘conditions C’ should include all the assumptions about the state of the whole system in which action X is one variable that can indeed be manipulated by decision-makers (while many others are context conditions that cannot be influenced). So from this point of view, the argument should be modified to include the entire set of assumptions of the whole system. The question of how a meaningful discourse should be organized to take this expectation into account while still accommodating participation by citizens — non-experts — is a challenge that has yet to be recognized and taken on.

Meanwhile, however, the efforts to improve the planning discourse consisting of the simpler pro and con arguments might shed some interesting lights on the issue of the reliability of system models for predicting outcomes of proposed plans over time.

The improvements of the planning discourse in question have to do with the proposals I have made for a more systematic and transparent assessment of the planning argument — in response to the common claim of having public interest decisions made ‘on the merit of arguments’. The approach I developed implies that the plausibility of a planning argument of the types (1), (2), or (3) above (in the mind of an individual evaluator) will be a function of the plausibility of all the premises. I am using the term ‘plausibility’ to apply both to the ‘factual’ premises claiming the relationship X → Y and the presence of conditions C (which traditionally are represented as ‘probability’ or degree of confidence), and to the deontic premise ‘Y ought to be aimed for’, which is not adequately characterized by ‘probability’, much less by the ‘truth’ or ‘falsity’ that is the stuff of traditional argument assessment. The scale on which such plausibility assessment is expressed must be one ranging from an agreed-upon value such as -1 (meaning ‘totally implausible’) to +1 (meaning ‘totally plausible, entirely certain’), with a midpoint of zero (meaning ‘don’t know’, ‘can’t tell’, or even ‘don’t care’).

The plausibility of such an argument, I suggest, will be some function of the plausibilities assigned to each of the premises — and arguably also of the plausibility of the implied claim that the argument pattern itself applies meaningfully to the situation at hand. The inference rule can be stated as:

“D(X)
because
FI(X → Y) | C
and
D(Y)
and
F(C)”

(The prefix D denotes deontic claims, FI factual-instrumental claims, and F factual claims.)

(The weight of each argument among the many pro and con arguments comes one step later: it will be a function of its plausibility and of the weight of relative importance of the goals, concerns, or objectives referred to in the deontic premise.)

This means that the argument plausibility will decrease quite rapidly as the plausibilities of these premises deviate from complete certainty. Experiments with a plausibility function consisting of the simple product of those plausibilities have shown that the resulting overall argument plausibility often shrinks to a value much closer to zero than to +1; and the overall proposal plausibility (e.g. a sum of all the weighted argument plausibilities) will also be far from the comfortable certainty (decisively ‘pro’ or decisively ‘con’) hoped for by many decision-makers.
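The shrinking effect is easy to verify with a small computation. The product rule is only one candidate plausibility function, and the premise values and importance weights below are arbitrary illustrations:

```python
# Plausibility arithmetic on the agreed scale -1 (totally
# implausible) to +1 (totally plausible). The simple product is
# one candidate function for combining premise plausibilities;
# all numeric values are invented for illustration.
from math import prod

def argument_plausibility(premise_plausibilities):
    """Candidate function: the simple product of all premise plausibilities."""
    return prod(premise_plausibilities)

def proposal_plausibility(arguments):
    """Weighted sum over all pro and con arguments; each argument is a
    (premise_plausibilities, importance_weight) pair."""
    return sum(argument_plausibility(p) * w for p, w in arguments)

# Even three fairly confident premises (0.9 each) shrink noticeably:
single = argument_plausibility([0.9, 0.9, 0.9])   # ~0.73
# ...and ten 'whole-system' condition premises at 0.9 shrink much more:
many = argument_plausibility([0.9] * 10)          # ~0.35
# A pro argument and a con argument, equally weighted, nearly cancel:
overall = proposal_plausibility([([0.9, 0.8], 0.5),
                                 ([-0.7, 0.9], 0.5)])  # ~0.05
```

The last figure illustrates the point in the text: the weighted sum lands near the ‘don’t know’ midpoint of zero rather than near a decisive +1 or -1.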

These points will require further study and discussion in the proposed approach to systematic argument assessment. For the moment, the implication of this tendency of argument plausibility towards zero for the issue of enhancing arguments with the proper recognition of ‘all’ the condition assumptions of the ‘whole’ system deserves some comment.

For even when a model can be claimed to represent past system behavior with a reasonable degree of certainty (a plausibility close to 1), the projection of those assumptions into the future must always be made with a prudent dose of qualification: all predictions are only more or less probable (plausible); none are 100% certain. The deontic premises, too, are less than totally plausible; indeed they usually express legitimate opposing claims by people affected in different ways by a proposed plan, differences we are asked to acknowledge and consider instead of insisting that ‘our’ interests be pursued with total certainty. We might even be quite mistaken about what we ask for… So when the argument plausibility function must include the uncertainty-laden plausibility assessments of all the assumptions about relationships and variable values over future time, the results (with the functions used thus far, for which there are plausible justifications but which are admittedly still up for discussion) must be expected to decline towards zero even faster than for the simple arguments examined in previous studies.

So as the systems view of the problem situation becomes more detailed, holistic, and sophisticated, the degree of confidence in our plan proposals that we can derive from arguments incorporating those whole-system insights is likely to get lower, not higher. This nudge towards humility, even about the degree of confidence we might derive from honest, careful, and systematic argument assessment, may be a disappointment to leaders whose success in leading depends to some extent on such confidence. Then again, this may not be a bad thing.