
Combining systems modeling maps with argumentative evaluation maps: a general template

Many tools and platforms have been proposed to help humanity overcome its various global problems and crises, each claiming superior ability or adequacy for addressing the ‘wickedness’ of those problems.

Two of the main perspectives I have studied, the general group of models labeled ‘systems thinking’ or ‘systems modeling and simulation’, and the ‘argumentative model of planning’ proposed by H. Rittel (who incidentally saw his ideas as part of a ‘second generation’ systems approach), have been shown to fall somewhat short of those claims: specifically, neither has so far demonstrated the ability to adequately accommodate the other’s key concerns. The typical systems model seems to assume that all disagreements regarding its model assumptions have been ‘settled’; it leaves no room for argument, discussion, or disagreement. Conversely, the key component of the argumentative model, the typical ‘pro’ or ‘con’ argument of the planning discourse (the ‘standard planning argument’), connects no more than two or three of the many elements of a more elaborate systems model of the situation in question, and thus fails to properly accommodate the complexity and multiple loops of such models.

It is of course possible that a different perspective and approach will emerge that can better resolve this discrepancy. However, it will have to acknowledge and then properly address the difficulty, which we can at present only express in the vocabulary of the two perspectives. This essay explores how the elements of the two selected approaches can be related in maps that convey both the respective system’s complexity and the possible disagreements about, and assessment of the merits of, arguments concerning system assumptions.

A first step is the following simplified diagram template, which shows a ‘systems model’ in the center, with arguments both about how the proposal for intervention in the system (consisting of suggested actions upon specific system elements) should be evaluated, and about the degree of certainty (the suggested term is ‘plausibility’) of assumptions regarding individual elements.

A key aspect of the integration effort is the insight that the ‘system’ will have to include all the features discussed in the discourse under the heading of the ‘plan proposal’: its initial conditions; the proposed actions (what to do, by whom, using what tools and resources, and the conditions for their availability); the ‘problem’ the solution aims to remedy, described at least by its current ‘IS’ state and the desired ‘OUGHT’ state or planning outcome; the means by which the transition from IS-state to OUGHT-state can be achieved; and the potential consequences of implementing the plan, including possible ‘unexpected’ side- and after-effects. Conversely, the assessment of arguments (the “careful weighing of pros and cons”) will have to explicitly address the system model elements and their interactions: elements that should be (but mostly are not) specified in the argument as the ‘conditions’ under which the plan or one of its features is assumed to effectively achieve the specific outcome or goal referenced by the argument.

For the sake of simplicity, the diagram shows only two arguments or reasons for or against a proposed plan. In reality, there will always be at least two arguments (the benefit and the cost of a plan), but usually many more, based on assessments of the multiple outcomes of the plan and of the actions needed to implement it, as well as of the conditions (feasibility, availability, cost, and other resources) for its implementation. The desirability assessments of different parties will differ; an argument seen as ‘pro’ by one party can be a ‘con’ argument for another, depending on the assessment of its premises. Therefore, arguments are not labeled as pro or con in the diagram.

 

[Figure AMSYST 1: simplified diagram template combining the systems model with argument evaluation maps]
The diagram uses abbreviated notation for conciseness and convenient overview; it is explained in the legend below, which presents some key (but by no means exhaustive) concepts of both perspectives.

* PLAN or P: Plan, or proposal for a plan or plan aspects

* R: Argument or ‘reason’. It is used both for an entire ‘pro’ or ‘con’ argument about the plan or an issue (the entire set of premises supporting the ‘conclusion’ claim, usually the plan proposal), and for the relationship, claimed in the factual-instrumental premise, that connects the plan with an effect, usually a goal or a negative consequence of plan implementation.
The ‘standard planning argument’ pattern prevailing in planning discourse has the general form:
D(PLAN): Plan P ought to be adopted (deontic ‘conclusion’)
because
FI(PLAN –> R –> O)|{C}: P has relationship R with outcome O given conditions {C} (factual-instrumental premise)
and
D(O): Outcome O ought to be pursued (deontic premise)
and
F{C}: Conditions {C} are given (true)

The relationship R is most often a causal connection, but it also stands for a wide variety of other relationships that can serve as the basis for pro or con arguments: part-whole, identity, similarity, association, analogy, catalyst, logical implication, being a necessary or sufficient condition, etc. In an actual application, these relationships may be distinguished and identified as appropriate.
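For readers who find a concrete representation helpful, the following is a minimal sketch, in Python, of how such a ‘standard planning argument’ might be represented as a data structure. The class and field names, and the example content, are illustrative assumptions only, not part of any existing platform.

```python
from dataclasses import dataclass, field
from typing import Dict, List

@dataclass
class StandardPlanningArgument:
    """Illustrative representation of the pattern:
    D(PLAN) because FI(PLAN -> R -> O)|{C} and D(O) and F{C}."""
    plan: str                     # the plan or proposal P
    outcome: str                  # the outcome or goal O referenced by the argument
    relationship: str             # the kind of relationship R (causal, part-whole, analogy, ...)
    conditions: List[str] = field(default_factory=list)  # the condition set {C}

    def premises(self) -> Dict[str, str]:
        """Spell out the three premises supporting the deontic conclusion D(PLAN)."""
        return {
            "FI": f"{self.plan} --({self.relationship})--> {self.outcome}, given {self.conditions}",
            "D": f"Outcome '{self.outcome}' ought to be pursued",
            "F": f"Conditions {self.conditions} are given (true)",
        }

# Hypothetical example: one 'pro' argument for a congestion-charge plan
arg = StandardPlanningArgument(
    plan="congestion charge",
    outcome="reduced downtown traffic",
    relationship="causal",
    conditions=["adequate public transit capacity", "charge is enforced"],
)
print(arg.premises())
```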

* O or G: Outcome or goal to be pursued by the plan; also used for other effects, including negative consequences

* M: the relationship of P ‘being a means’ to achieve O

* C or {C}: The set of conditions c under which the claimed relationship M between P and O is assumed to hold

* pl: ‘plausibility’ judgments about the plan, arguments, and argument premises, expressed as values on a scale from +1 (completely plausible) to -1 (completely implausible), with a midpoint of zero understood as ‘so-so’ or ‘don’t know, can’t decide’, in combination with the abbreviations for those items:
* plPLAN or plP: plausibility judgment of the PLAN; this is some individual’s subjective judgment
* plM: plausibility of P being effective in achieving O
* plO: plausibility of an outcome O or goal G
* pl{C}: plausibility (probability) of conditions {C} being present
* plc: plausibility of condition c being present
* plR: plausibility of argument or reason R
* plPLANGROUP: a group judgment of plan plausibility

* wO: weight of relative importance of outcome O (0 ≤ wO ≤ 1; ∑wO = 1)

* WR: argument weight, or weight of a reason R

Functions F between plausibility values:

* F1: Group plausibility aggregation function:
plPLANGROUP = F1 (plPLANq), for all n members q of the group, q = 1, 2, … n

* F2: Plan plausibility function:
pl(PLAN)q = F2 (WRi), for all m reasons Ri raised, i = 1, 2, … m, by person q

* F3: Argument weight function:
WRi = F3 (plRi) * wOj

* F4: Argument plausibility function:
pl(Ri) = F4 {pl((P –> Mi –> Oi)|{Ci}), pl(Oi), pl{Ci}}
The plausibility of argument Ri is a function of all its premise plausibility judgments.

* F5: Condition set plausibility function:
pl{C} = F5 (plck), k = 1, 2, …
The plausibility of the set {C} is a function of the plausibility judgments for all conditions c in the set.

* F6: Weight of relative importance of outcome Oi:
wOi = 1/n ∑ vOi, i = 1, 2, … n
subject to the conditions 0 ≤ wOi ≤ 1 and ∑wOi = 1.
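To illustrate how these functions might hang together computationally, here is a minimal sketch in Python. It assumes simple candidate forms (products, sums, and averages) for F1 through F6; as discussed later in this essay, the actual choice of functions is still very much open for discussion, and all names and numbers here are illustrative.

```python
from statistics import mean
from typing import Dict, List, Sequence

def f5_condition_set_plausibility(pl_c: Sequence[float]) -> float:
    """F5: plausibility of the condition set {C}; candidate form: product of the plc."""
    result = 1.0
    for p in pl_c:
        result *= p
    return result

def f4_argument_plausibility(pl_rel: float, pl_o: float, pl_c: Sequence[float]) -> float:
    """F4: argument plausibility from all premise plausibilities (candidate: simple product)."""
    return pl_rel * pl_o * f5_condition_set_plausibility(pl_c)

def f3_argument_weight(pl_r: float, w_o: float) -> float:
    """F3: argument weight = argument plausibility times the weight of its outcome."""
    return pl_r * w_o

def f2_plan_plausibility(argument_weights: Sequence[float]) -> float:
    """F2: one person's plan plausibility; candidate form: sum of the argument weights."""
    return sum(argument_weights)

def f1_group_plan_plausibility(member_plan_pls: Sequence[float]) -> float:
    """F1: group plausibility aggregation; candidate form: mean of the members' judgments."""
    return mean(member_plan_pls)

def f6_outcome_weights(weight_votes: Dict[str, List[float]]) -> Dict[str, float]:
    """F6: relative importance weight per outcome, averaged over participants and
    normalized so that the weights sum to 1."""
    raw = {o: mean(v) for o, v in weight_votes.items()}
    total = sum(raw.values()) or 1.0
    return {o: r / total for o, r in raw.items()}

# Tiny worked example (invented numbers): one person, two arguments, two outcomes
w = f6_outcome_weights({"traffic": [0.7, 0.5], "cost": [0.3, 0.5]})
pl_r1 = f4_argument_plausibility(0.8, 0.9, [0.9, 0.95])
pl_r2 = f4_argument_plausibility(0.7, -0.6, [0.9])       # a 'con' argument: negative deontic pl
plan_pl = f2_plan_plausibility([f3_argument_weight(pl_r1, w["traffic"]),
                                f3_argument_weight(pl_r2, w["cost"])])
print(round(plan_pl, 3))
```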

* System S: The system S is the network of all variables describing the initial conditions c (the IS-state of the problem the plan is trying to remedy), the means M involved in implementing the plan, the desired ‘end’ conditions or goals G of the plan, and the relationships and loops between these.

The diagram does not yet show a number of additional variables that will play a role in the system: the causes of the initial conditions (which will also affect the outcome or goal conditions); the variables describing the availability, effectiveness, cost, and acceptability of the means M; and the potential consequences of both M and O of the proposed plan. Clearly, these conditions and their behavior over time (both during the period needed for implementation and over the assumed planning horizon or life expectancy of the solution) will or should be given due consideration in evaluating the proposed plan.

Planning discourse: Integration of argumentation into systems models or systems modeling information into argumentative discourse.

Various discussions about how complex societal problems and crises can be dealt with have revealed, among other things, a mutual shortcoming of two conceptual ‘models’ held to carry the best promise for overcoming the challenges: ‘Systems Thinking’ on the one hand, and the Argumentative Model of Planning on the other. Briefly, systems modeling tools are considered the best available tools for understanding and analyzing complex system behavior, while a carefully orchestrated argumentative discourse with wide participation appears to offer the best (because most familiar and accessible) vehicle for assembling the ‘distributed’ information and carrying that information forward to acceptable agreements and decisions.

The problem or shortcoming is the following: The detailed information embodied in complex systems models is not accommodated in the familiar patterns of argumentative discourse, and is thus difficult to adequately bring to bear on the decisions reached at the end of such discourse. On the other hand, the disagreements (and thus the conflicting, inconsistent information) that characterize argumentative discourse in the form of ‘pros’ and ‘cons’ are not accommodated in typical systems models, whose assumptions regarding variables, parameters, and their values and relationships have the appearance of being either valid on the basis of scientific verification, or ‘settled’ by other means (e.g. as goals ‘given’ by the clients of analysis projects, or by opinion surveys).

The consequences of decision processes adopting either ‘model’ can be equally defective. Decisions based on the output of model simulations, for example, run the risk of overriding critical disagreements and the interests of parties whose information has not been included in the model, or has been downplayed, and thus of leading to future conflict. Decisions reached on the basis of an argumentative discourse in which the complexity of the system in question has not been fully understood, because it could not be adequately represented in the tools of the discourse, are equally likely to be flawed. This would be true even if the main shortcoming of the ‘parliamentary’ tradition were successfully resolved: the possibility that the final majority vote completely ignores and overrides the concerns of the minority. (A possible solution to this problem has been suggested in the proposals for systematic and transparent assessment of planning arguments (Mann 2010); it will be assumed to be adopted in some form in the following.)

The mutual difficulty these two models have in appropriately accommodating each other’s content is considered a main obstacle to the successful development of a viable framework for planning and policy-making, from the small-scale, local level to the scale of global crises and conflicts. It has not, to my knowledge, received sufficient attention, analysis, and discussion. The following two suggestions, exploring the possibility of each of the two models being integrated into the other, are intended as a starting point for this much-needed discussion. The possibility of the emergence of a ‘third model’ that would resolve the difficulty is left open as a challenge for future thinking.

A.   How can Argumentation be integrated into ST tools?

 

Possibility (using the example of simulation models for clarity):

i)            Starting with the model diagram:

ii)  Each variable and parameter in the model diagram is shown in a ‘box’ with attached expansion symbols:

E for explanatory information about item x:  What is x?  Also: description?

F  for factual information:  What is (the value of) x currently; evidence, data?

O (instead of D) for deontic / ought information and arguments: should x be set (e.g. as part of an intervention package)?

H  or I  for instrumental (‘How to’) information:  How can x be achieved?

Clicking on the symbol will open a discussion page where the question is stated and answers / arguments are listed.
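As a sketch of what such ‘boxes’ with expansion symbols might amount to in data terms, the following Python fragment assumes a dictionary of discussion pages keyed by the question types E, F, O, and H. All class, field, and variable names are hypothetical.

```python
from dataclasses import dataclass, field
from typing import Dict, List

QUESTION_TYPES = ("E", "F", "O", "H")  # explanatory, factual, deontic/ought, instrumental

@dataclass
class DiscussionPage:
    question: str                                      # the question as stated on the page
    entries: List[str] = field(default_factory=list)   # answers / arguments contributed so far

@dataclass
class ModelVariable:
    name: str
    value: float                                       # current or assumed value in the model
    pages: Dict[str, DiscussionPage] = field(default_factory=dict)

    def open_page(self, qtype: str) -> DiscussionPage:
        """Simulates clicking an expansion symbol: return (creating if needed) the page."""
        assert qtype in QUESTION_TYPES
        if qtype not in self.pages:
            templates = {
                "E": f"What is {self.name}?",
                "F": f"What is the current value of {self.name}? What is the evidence?",
                "O": f"Should {self.name} be set as part of an intervention package?",
                "H": f"How can the desired value of {self.name} be achieved?",
            }
            self.pages[qtype] = DiscussionPage(question=templates[qtype])
        return self.pages[qtype]

# A plan proposal is then simply a package of proposed variable / parameter settings:
intervention_package = {"transit_capacity": 1.2, "congestion_charge": 5.0}
```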

Plan proposals are described as packages of variable and parameter values of the model that serve as the proposed ‘intervention’ settings whose performance will be simulated over time in the model.

Main menu symbols, shown in the ‘legend’ box of the diagram, provide the links for the issues:

–        What should be the plan proposal? (Described as the initial intervention settings of the model.) Clicking should link to follow-up questions:

–             H- question:  plan proposals (alternatives)?

–            Evaluation worksheets for selected proposals (to develop a pl-value for the proposal based on the assessment of argument weights, argument plausibilities, and the plausibilities of argument premises).

–      What is the critical performance variable that should be simulated with the model?

–        What additional variable / parameter should be included in the model?

Subsequent additional links for the follow-up questions:

–            Should this item be included in the model?

–            What are the values and relationships?

This information can be ‘automatically’ extracted from the discussion and shown in the model.

–            Should the proposed variable be part of the intervention (plan) package?

–            How can the initial / intervention variable setting be achieved (if not already in place)?

iii) These pages should have convenient ‘back’ links to the question from where they were accessed.

iv) The pages for these questions should be complemented by, or linked to, issue maps showing the relationships between the various issues in the entire discussion (with the ‘current’ issue, from which the page was linked, shown bold or highlighted).

v) These requirements imply that the different functions described (model diagram, issue discussions, mapping, evaluation, etc.) must be part of one single integrated software program.

B.    How can systems modeling information be integrated into argumentative discourse platforms and maps?

 

Possibility:

Assume, as a starting point, that there is a discussion about whether a plan proposal X should be adopted for implementation. The discussion support documentation (drawn from the ‘live‘ or conventional online discussion) is organized along the principles of an adapted planning discourse IBIS (‘issue based information system’) or APIS (‘argumentative planning information system’).

Arguments pro or con the proposal will be raised and displayed in the ‘standard’ format:

“Proposal X ought / ought not to be implemented, because it is / is not a fact that X will help achieve goal Y, given conditions C, and conditions C are / are not (or will be) present.” Formally:

“+/-O(X) <— (+/-FI((X –> Y)|C) & +/-O(Y) & +/-F(C))”

Here, ‘conditions C‘ stands for the set of assumed variable and parameter values of a simulation model; and the proposal X will be described as the package of those model assumptions that are under the control of the planners, serving as the starting ‘intervention‘ into the situation, whose performance over time is to be simulated with the model.

Successor questions about C will be answered by displays of the entire model, listing all variables and parameters with their assumptions and relationships so that they can be discussed, within the regular format provisions of the argumentative discourse platform.
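A small sketch may make this concrete: here the condition set C of an argument simply references the entire set of model assumptions, so that the successor question about C expands into the whole model as separate discussable issues. Again, all names and values are invented for illustration.

```python
from dataclasses import dataclass
from typing import Dict, List

@dataclass
class SystemsModel:
    """The simulation model's variables / parameters and their assumed values."""
    assumptions: Dict[str, float]

@dataclass
class PlanningArgument:
    """An argument in the form +/-O(X) <- (+/-FI((X->Y)|C) & +/-O(Y) & +/-F(C)),
    where the condition set C is the entire set of model assumptions."""
    proposal: Dict[str, float]   # intervention package X: the controllable settings
    goal: str                    # outcome Y
    model: SystemsModel          # supplies the condition set C

    def condition_issues(self) -> List[str]:
        """Successor question about C: one discussable issue per model assumption."""
        return [f"Is it plausible that {name} = {value}?"
                for name, value in self.model.assumptions.items()]

model = SystemsModel({"population_growth": 0.02, "fuel_price": 1.8, "transit_capacity": 0.9})
arg = PlanningArgument(proposal={"congestion_charge": 5.0},
                       goal="reduced downtown traffic",
                       model=model)
for issue in arg.condition_issues():
    print(issue)
```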

The platform will be structured according to the main considerations described e.g. in Mann (2010) including the components of the verbatim file of contributions, the topic and issue lists, the discussion files of each issue in a condensed / formalized manner, argument maps, and evaluation worksheets and analysis tools.

This requires that the platform be structured so as to

i)               allow discussion of each issue as a separate thread;

ii) permit visual displays not only of issue and argument maps but also of systems model diagrams (and, ideally, provisions for running the models) within the same platform;

iii) allow convenient forward and backward linking between all its components.

—-

For discussion

Some considerations on the role of systems modeling in planning discourse

 

Suggestions made by proponents of ‘systems thinking’ or systems analysis in discussions we might call ‘planning or policy discourse’ often take the form of recommendations to construct models of the ‘whole system’ in question, and to use these to guide policy decisions.

A crude explanation of what such system models are and how they are used might be the following: The ‘model’ is represented as a network of all the parts (variables, components; e.g. ‘stocks’) in the ‘whole’ system. What counts as the whole system is the set of such parts that have some significant relationship (for example, ‘flows’) to one another — such that changes in the state or properties of some part will produce changes in other parts. Of particular interest to system model builders are the ‘loops’ of positive or negative ‘feedback’ in the system — such that changes in part A will produce changes in part B, but those changes will, after a small or large circle of further changes, come back to influence A. Over time, these changes will produce behaviors of the system that would be impossible to track with simple assumptions about, e.g., causal relationships between individual pairs of variables such as A and B.
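A toy example may make the ‘stocks’, ‘flows’, and feedback idea concrete. The sketch below (in Python, with invented numbers) simulates a single stock whose inflow depends on the stock itself, producing behavior over time that simple pairwise causal assumptions would not track.

```python
def simulate_stock(periods: int = 20,
                   stock: float = 100.0,
                   growth_rate: float = 0.05,
                   capacity: float = 500.0) -> list:
    """Minimal stock-and-flow model with feedback: the inflow grows with the stock
    (reinforcing loop) but is damped as the stock approaches a capacity limit
    (balancing loop)."""
    history = [stock]
    for _ in range(periods):
        inflow = growth_rate * stock * (1 - stock / capacity)  # the stock feeds back on its own inflow
        stock += inflow
        history.append(stock)
    return history

# Behavior over time that a single pairwise causal claim ('A causes B') would not capture:
print([round(x, 1) for x in simulate_stock()])
```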

The usefulness of such system models simply means the degree of reliability with which simulation runs of those changes over time will produce predictions that would come true if the ‘real system’ represented by the model could be made to run according to the same assumptions. The confidence in the trustworthiness of model predictions thus relies on a number of assumptions (equally simplistically described):

– the number of ‘parts’ (variables, components, forces, ‘stocks’) included;
– the nature and strength of relationships between the system variables;
– the magnitudes (values) of the initial system variables, e.g. stocks.

System models are presented as ‘decision-making tools’ that allow the examination of the effects of various possible interventions in the system (that is, the introduction of changes in system variables that can be influenced by human decision-makers), given various combinations of conditions in variables that cannot be influenced but must be predicted, as well as assumptions about the strength of interactions, all in order to achieve certain desirable states or system behaviors (the ‘goals’ or objectives, measured by performance criteria of the system). System modelers usually refrain from positing goals: they either assume them as ‘given’ by assumed social consensus or by directives from the authorities funding the study (a habit that has come in for considerable criticism), or they leave it up to the decision-maker ‘users’ of the system to define the goals and use the simulations to experiment with different action variables until the desired results are achieved.

Demonstrations of the usefulness or reliability of a model rest on simulation runs for past system states (for which the data about context and past action conditions can be determined): the model is deemed reliable and valid if it can produce results that match observable ‘current’ conditions. If the needed data can be produced and the relationships can be adjusted with sufficient accuracy to actually produce matching outcomes, the degree of confidence we are invited to invest in such models can be quite high: very close to 100% (with qualifications such as ‘a few percentage points plus or minus’).
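In code terms, such a validation against past data might look like the toy check below, which reuses the stock model sketched earlier and simply searches for the growth rate that best reproduces an ‘observed’ current value; all numbers are invented, and the procedure is only a crude stand-in for real model calibration.

```python
def simulate(periods: int, stock: float, growth_rate: float, capacity: float = 500.0) -> float:
    """Same toy stock model as above: logistic-style growth of a single stock."""
    for _ in range(periods):
        stock += growth_rate * stock * (1 - stock / capacity)
    return stock

observed_now = 260.0            # the 'current' condition the model should reproduce (invented)
past_stock, periods = 100.0, 20

# Crude calibration: pick the growth rate whose simulated 'now' best matches the observation
best_rate = min((r / 1000 for r in range(1, 200)),
                key=lambda r: abs(simulate(periods, past_stock, r) - observed_now))
error = abs(simulate(periods, past_stock, best_rate) - observed_now)
print(best_rate, round(error, 2))   # a close match on past data is what invites near-100% confidence
```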

The usual planning discourse — that is, discussion about what actions to take to deal with situations or developments deemed undesirable by some (‘problems’) or desirable improvements of current conditions (‘goals’) — unfortunately uses arguments that are far from acknowledging such ‘whole system’ complexity. Especially in the context of citizen or user participation currently called for, the arguments mostly take a form that can be represented (simplified) by the following pattern, say, about a proposal X put forward for discussion and decision:

(1) “Yes, proposal X ought to be implemented,
because
implementing X will produce effect (consequence) Y
and
Y ought to be aimed for.”

(This is of course a ‘pro’ argument; a counterargument might sound like:

(2) ” No, X should NOT be implemented
because
Implementing X will produce effect Z
and
Z ought to be avoided.”

Of course, there are other forms of ‘con’ arguments possible, targeting either the claim that X will produce Y granted that Y is desirable; or the claim that Y is desirable, granting that X will indeed produce Y…)

A more ‘sophisticated’ version of this typical (‘standard’) planning argument would perhaps include consideration of some conditions under which the relationship X — Y holds:

(3) “Yes, X ought to be implemented,
because
Implementing X will produce Y if conditions c are present;
and
Y ought to be aimed for;
and
conditions c are present.”

While ‘conditions C’ are mostly thought of as simple, one-variable phenomena, the systems thinker will recognize that ‘conditions C’ should include all the assumptions about the state of the whole system, in which action X is one variable that can indeed be manipulated by decision-makers (while many others are context conditions that cannot be influenced). So from this point of view, the argument should be modified to include the entire set of assumptions about the whole system. The question of how a meaningful discourse should be organized to take this expectation into account, while still accommodating participation by citizens (non-experts), is a challenge that has yet to be recognized and taken on.

Meanwhile, however, the efforts to improve the planning discourse consisting of the simpler pro and con arguments might shed some interesting light on the issue of the reliability of system models for predicting outcomes of proposed plans over time.

The improvements to the planning discourse in question have to do with the proposals I have made for a more systematic and transparent assessment of planning arguments, in response to the common claim of having public interest decisions made ‘on the merit of arguments’. The approach I developed implies that the plausibility of a planning argument of types 1, 2, or 3 above (in the mind of an individual evaluator) will be a function of the plausibility of all its premises. I use the term ‘plausibility’ to apply both to the ‘factual’ premises claiming the relationship X –> Y and the presence of conditions C (which traditionally are represented as ‘probability’ or degree of confidence) and to the deontic premise ‘Y ought to be aimed for’, which is not adequately characterized by ‘probability’, much less by the ‘truth’ or ‘falsity’ that is the stuff of traditional argument assessment. The scale on which such plausibility assessments are expressed must range from an agreed-upon value such as -1 (meaning ‘totally implausible’) to +1 (meaning ‘totally plausible’, entirely certain), with a midpoint of zero (meaning ‘don’t know’, ‘can’t tell’, or even ‘don’t care’).

The plausibility of such an argument, I suggest, will be some function of the plausibilities assigned to each of the premises, and arguably also of the plausibility assigned to the implied claim that the argument pattern itself, i.e. the inference rule

“D(X)
because
FI(X –> Y) | C
and
D(Y)
and
F(C)”

applies meaningfully to the situation at hand. (The D prefixes denote deontic claims, FI factual-instrumental claims, and F factual claims.)

(The weight of each argument among the many pro and con arguments is one step later: it will be a function of its plausibility and of the weight of relative importance of the goals, concerns, or objectives referred to in its deontic premise.)

This means that the argument plausibility will decrease quite rapidly as the plausibilities of each of these premises deviate from 100% certainty. Experiments with a plausibility function consisting of the simple product of those plausibilities have shown that the resulting overall argument plausibility often shrinks to a value much closer to zero than to +1; and the overall proposal plausibility (e.g. a sum of all the weighted argument plausibilities) will also be far from the comfortable certainty (decisively ‘pro’ or decisively ‘con’) hoped for by many decision-makers.
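A small numeric illustration of this shrinking effect, using the simple product as one candidate plausibility function (the choice of function being, as noted, still up for discussion):

```python
def argument_plausibility(premise_pls):
    """Candidate plausibility function: the simple product of all premise plausibilities."""
    result = 1.0
    for p in premise_pls:
        result *= p
    return result

# Three fairly confident premises already pull the argument well below certainty:
print(argument_plausibility([0.9, 0.8, 0.9]))           # about 0.65

# Including plausibilities for many 'whole system' condition assumptions pushes it towards zero:
print(argument_plausibility([0.9, 0.8] + [0.95] * 12))  # about 0.39
```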

These points will require some further study and discussion within the proposed approach to systematic argument assessment. For the moment, the implication of this effect of argument plausibility tending towards zero for the issue of enhancing arguments with proper recognition of ‘all’ the condition assumptions of the ‘whole’ system deserves some comment.

For even when a model can be claimed to represent past system behavior with a reasonable degree of certainty (plausibility close to 1), the projection of those assumptions into the future must always be made with a prudent dose of qualification: all predictions are only more or less probable (plausible); none are 100% certain. The deontic premises, too, are less than totally plausible; indeed, they usually express legitimate opposing claims by people affected in different ways by a proposed plan, differences we are asked to acknowledge and consider instead of insisting that ‘our’ interests be pursued with total certainty. We might even be quite mistaken about what we ask for… So when the argument plausibility function must include the uncertainty-laden plausibility assessments of all the assumptions about relationships and variable values over future time, the results (with the functions used thus far, for which there are plausible justifications but which are admittedly still up for discussion) must be expected to decline towards zero even faster than for the simple arguments examined in previous studies.

So as the systems view of the problem situation becomes more detailed, holistic, and sophisticated, the degree of confidence in our plan proposals that we can derive from arguments incorporating those whole-system insights is likely to get lower, not higher. This nudge towards humility, even about the degree of confidence we might derive from honest, careful, and systematic argument assessment, may be a disappointment to leaders whose success in leading depends to some extent on such a degree of confidence. Then again, this may not be a bad thing.