
Connecting Systems Models With Argumentation and Systematic Evaluation

Thorbjørn Mann 2020

Introduction

      My attempts to sketch an outline of a (potentially global) 'Planning Discourse Support Platform' encountered difficulties in accommodating the differences between Systems Modeling, the Argumentative Model of Planning, and approaches to systematic evaluation ('formal evaluation' techniques). These difficulties suggest an effort to develop better connections between the three perspectives. The following is a summary of the main tenets of such a connection perspective.

The planning discourse: a general ‘systems’ view

      The planning discourse can be (roughly) described as the exchange of communications about PLANS aimed at remedying some PROBLEM.

PROBLEM      

      Understood as some person or group's claim that some aspect or state S of reality IS not as it OUGHT to be, and that some PLAN of ACTIONS should be developed to remedy that discrepancy:

      The state of affairs or 'situation' SI is assigned a 'quality' assessment judgment Q (on some scale such as Q = {couldn't be better / good (satisfactory, good enough) / so-so / bad / couldn't be worse}) that is 'worse' than the 'better' assessment of a desired state SO that OUGHT to be (in that person's opinion). Upon questioning, the problem-raising party may describe the IS-situation by a set of descriptors s (values of variables), and the OUGHT-state, or OUTCOME state, as a set of different values of those variables:

SI = {si1, si2, … sin}

and

SO = {so1, so2, … son}

A restatement of the PROBLEM in terms of quality judgments Q:

PROBLEM = Qi ≠ Qo  or   Qi < Qo 
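      (A hypothetical illustration: if one descriptor of a commuting problem is average commute time, the IS-state might show si1 = 45 minutes, judged Qi = 'bad', while the desired OUTCOME state sets so1 = 25 minutes, judged Qo = 'good'; since Qi < Qo, a PROBLEM is being claimed.)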

      Aims for different desirable outcomes SO can be distinguished as follows:

SOopt → Qso,max      Any 'optimal' outcome, given the judgment 'couldn't be better';

                                    and

SOge → Qso,ge         Any outcome that can be given a judgment of 'good enough'.

PLAN = {Actions a}     

      Plans aiming at better outcomes consist of a set A of actions a that, given situation SI in overall CONTEXT C, are claimed to achieve SO.

SYSTEM MODEL   SM     

      The system of the planning situation is understood as the set of SI-descriptions, the set of C-descriptions, the set of actions A, and the set of relationships REL between them. The system can be described in a systems model SM:

SM = {SI, A, C, REL}

      The SM model must contain the variables describing SI, A, and C, and the relations REL. (Many SM simulations, aiming primarily at understanding the system and its behavior, describe only SI and C and their relationships, and explore (simulate) different settings of SI and C that result in different outcomes.) The connections to Q assessments of outcomes are usually not explored, since Q judgments are individual 'subjective' assessments: the model would have to include all those individual assessments. The standard 'shortcut' practice is to resort to some ('objective') aggregated state measure of the extent to which SO has been achieved, to serve as the group's basis for decision. A 'group' quality judgment GQ would instead require information about how the outcome variable values determine or influence individuals' Q-assessments. This is not part of standard practice, even when the modeling is done within a small team: the team is supposed to reach 'consent' or consensus about which aggregated state variable is to be 'optimized' and serve as the basis of decision, and that decision is postulated as legitimate 'on behalf of' the actual population affected in one way or another by the problem and proposed solution. The legitimacy of this 'shortcut' is of course open to question. A more legitimate approach would have to include the Q-assessments of all affected parties in the model and simulation. Formal evaluation techniques offer explanations of what this would require, as follows:
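      As a minimal sketch of such a model (in Python; the variables, the single relation, and all numbers are hypothetical illustrations, not drawn from any actual project), SM = {SI, A, C, REL} can be represented as named values plus relation functions that compute outcome values from the is-state, the actions, and the context:

```python
# A minimal systems-model sketch: SM = {SI, A, C, REL}.
# All names, values, and the example relation are hypothetical.

SI = {"s1": 45.0}   # is-state variables (e.g. average commute minutes)
A  = {"a1": 1.0}    # plan actions (e.g. whether to add a bus line)
C  = {"c1": 0.8}    # context conditions (e.g. an assumed ridership factor)

# REL: each relation computes the outcome value of one variable
# from the is-state, the actions, and the context.
REL = {
    "s1": lambda si, a, c: si["s1"] - 20.0 * a["a1"] * c["c1"],
}

def simulate(si, a, c, rel):
    """Apply each relation once to obtain the outcome state SO."""
    return {var: f(si, a, c) for var, f in rel.items()}

SO = simulate(SI, A, C, REL)
print(SO)   # {'s1': 29.0} -- the predicted outcome value of s1
```

Exploring different settings of A and C then yields the different outcomes mentioned above.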

Connecting the systems view with a ‘formal’ evaluation technique

Qe = AF(qe1, qe2, qe3, … qen)

      The overall assessment Qe of a plan by an individual evaluator e is a function AF of the individual's judgments q of evaluation 'aspects' and of the weights of relative importance w the individual assigns to each aspect (for example: 0 < wi < 1.0 and ∑wi = 1). Each aspect judgment can be a function of a set of sub-aspect judgments, sub-sub-aspect judgments, etc., or a function of a 'criterion' or 'performance measure' that measures how well a plan is expected to achieve that aspect. 'Criterion' is a different name for a variable s in the systems model. The criterion function can be expressed in a graph (or equation) showing how different values of so correspond to different values of q (in the diagram, s is 'the objectively measurable variable'):

Figure 1: Criterion function example, showing how one person's q-judgment depends on a performance variable s, on a q-scale of +U to –U
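      A brief sketch (Python; the linear criterion shape and the weighted sum as the aggregation function AF are assumptions, offered only as one possible choice) of how a criterion function maps a systems variable s to a q-judgment on the +U to –U scale, and how weighted aspect judgments aggregate into an individual's overall Qe:

```python
U = 3.0   # judgment scale runs from +U ('couldn't be better') to -U ('couldn't be worse')

def criterion_q(s, s_worst, s_best):
    """Map a performance variable s to a q-judgment on [-U, +U].
    A linear criterion shape is assumed purely for illustration;
    an evaluator's actual criterion function may be any curve."""
    t = (s - s_worst) / (s_best - s_worst)   # 0..1 between worst and best
    t = max(0.0, min(1.0, t))                # clamp values outside the range
    return -U + 2.0 * U * t

def overall_Q(judgments, weights):
    """AF: overall Qe as the weighted sum of aspect judgments q,
    with 0 < wi < 1 and sum(wi) = 1 (one common choice of AF)."""
    assert abs(sum(weights) - 1.0) < 1e-9
    return sum(q * w for q, w in zip(judgments, weights))

# Example: a commute time of 29 minutes, judged between 45 (worst) and 25 (best):
q1 = criterion_q(29.0, s_worst=45.0, s_best=25.0)   # ~1.8 on the +U/-U scale
q2 = 1.0                                            # e.g. a direct aesthetic judgment
print(overall_Q([q1, q2], [0.7, 0.3]))              # ~1.56
```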

Group judgment indicators

    Statistical measures can now be derived from individuals' overall Q judgments: the mean, the range, the lowest judgment (that of the 'worst-off' party), or coefficients measuring the degree of disagreement in the group. Such measures should not be called 'group judgments': groups do not make judgments, only group decisions (the exception perhaps being the unlikely case of total 'consensus').
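      A minimal sketch (Python; the standard deviation is assumed here as just one possible disagreement coefficient) of deriving such indicators from individuals' overall Q judgments:

```python
import statistics

def group_indicators(q_judgments):
    """Statistical indicators derived from individuals' overall Q judgments.
    These describe the group's judgments; they are not 'group judgments'."""
    return {
        "mean": statistics.mean(q_judgments),
        "range": max(q_judgments) - min(q_judgments),
        "worst_off": min(q_judgments),                  # the worst-off party's judgment
        "disagreement": statistics.stdev(q_judgments),  # one possible disagreement coefficient
    }

print(group_indicators([1.56, -0.5, 2.0, 0.8]))
```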

Connecting the evaluation aspect tree with a systems model

      The following diagram shows how the evaluation criteria relate to a systems model – or, vice versa, how a systems model should be connected to an individual's evaluation aspect tree. It raises questions such as: should all systems variables be represented in the aspect tree if it aims at 'complete' objectification? Or can some judgments – for example, aesthetic judgments, or issues pertaining to an individual's moral 'system' – be left unexplained, unrelated to any measurable variable?

Figure 2: Evaluation aspect tree, 'quality' judgments, and systems model
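      To make the tree structure concrete (a Python sketch; the layout, weights, and judgments are hypothetical), each aspect is either a leaf carrying a q-judgment, derived from a criterion function or given directly, or an inner node whose judgment is the weighted aggregate of its sub-aspects:

```python
# Hypothetical aspect-tree sketch: a leaf holds a q-judgment (from a
# criterion function applied to a systems variable, or given directly);
# an inner node aggregates its weighted sub-aspects.

def aspect_q(node):
    """Recursively aggregate an aspect tree into a single q-judgment."""
    if "q" in node:                       # leaf: judgment already given
        return node["q"]
    return sum(w * aspect_q(child) for w, child in node["subs"])

tree = {
    "subs": [
        (0.6, {"subs": [(0.5, {"q": 1.8}),     # criterion-based judgment (e.g. commute time)
                        (0.5, {"q": 0.5})]}),  # another measurable sub-aspect
        (0.4, {"q": 1.0}),                     # direct aesthetic judgment, no variable attached
    ],
}
print(aspect_q(tree))   # 0.6 * (0.5*1.8 + 0.5*0.5) + 0.4 * 1.0 = 1.09
```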

Argumentation and Systems Model

      In the Argumentative Model of Planning – an approach that evaluates the plausibility of plans as a function of the plausibility or merit of the 'pro' and 'con' arguments about a proposed plan – the corresponding elements and steps can be described as a process of emerging complexity, of mutual additions to what may be called a 'systems-enhanced' discourse (or a discourse-validated systems model) for the evaluation of planning project proposals:

(In the following, 'argumentative discourse' activities are set in regular type; the 'systems modeling' process is italicized.)

Starting, again, with a PROBLEM being raised: "Situation SI is not as it ought to be." Specifically, the initial state of SI ('now', at time 0) is not as it ought to be:

            SI ≠ SO          (initial is-state ≠ future ought-state)

SI (as understood by the problem-raisers) is described by a set of variables s1, s2, s3, … sn:

            SI = {s1i, s2i, s3i, … sni}   and   SO = {s1o, s2o, s3o, … sno}

A first-version systems model SM1 is prepared, consisting of the variables of situation SI, and exploring relationships between the variables.

A PLAN of actions A is proposed:

+A!’  (“Plan A ought to be adopted for implementations!”) A  is described in detail as composed of a set of actions a:

      A = {a1, a2, a3, …}

SM1 is modified to SM2 by adding the proposed actions of A. Can they be connected (according to general laws and relationships) with 'new' variables of S, in the 'context' of the system, that ought to be brought to the attention of participants in the discourse?

ARGUMENTS are raised for ('pro') and against ('con') the proposed PLAN: first, in general 'qualitative' form:

      ‘PRO’:

            +A!                         (Plan-Actions A ought to be adopted

             because                  because

            +(A àRà SO)!                  A will bring about (have relation to) SO                       

and                         and

            +(SO)!                        SO ought to be aimed for)!

      ‘CON’:

            ~A!                         A ought not to be adopted

            because                        because

            ~(A àRà So)!                  A will not bring about SO

            and                        and

            So!                        SO is what should be aimed for)!

            or

            ~A!                        (A ought not to be adopted

            because                        because

            +(A à R à SO)!            A will bring about SO

            and                        and (but)

            ~SO!                        SO (as described in the problem explanation)                                           should not be aimed for)!

      The authors of arguments may be asked to offer more detail, explaining the claims in their premises. For example:

            ‘PRO’:

            +A!                         (Plan-Actions A ought to be adopted

             because                  because

            +(ai àri,jà sj)!                  part ai of A will bring about  effect sj of SO,                 

and                         and

            +(So)!                        sj in SO ought to be aimed for)!

      This raises the claims ai, ri,j, and sj as 'successor issues' that need to be discussed, or for which evidence must be provided.

The variables ai and sj, the relationship ri,j, and the respective variables of all other pro and con arguments will now have to be added to the systems model SM2, yielding a revised version SM3.
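As a sketch of this bookkeeping (Python; the record format is a hypothetical illustration, not a fixed convention of the argumentative model), an argument can be stored with its premises and the model elements they refer to, so that those elements can be merged into the next model version:

```python
# Hypothetical record of a planning argument: each premise names the
# model elements (actions a, relations r, variables s) it claims
# something about, so they can be added to the systems model (SM2 -> SM3).

pro_argument = {
    "conclusion": "+A!",                   # Plan A ought to be adopted
    "premises": [
        {"claim": "+(a1 -> r1,1 -> s1)!",  # factual-instrumental premise
         "actions": ["a1"], "relations": ["r1,1"], "variables": ["s1"]},
        {"claim": "+(s1)!",                # deontic (ought-) premise
         "variables": ["s1"]},
    ],
}

def argument_model_elements(arg):
    """Collect all actions, relations, and variables an argument refers to."""
    acts, rels, vars_ = set(), set(), set()
    for p in arg["premises"]:
        acts.update(p.get("actions", []))
        rels.update(p.get("relations", []))
        vars_.update(p.get("variables", []))
    return acts, rels, vars_

print(argument_model_elements(pro_argument))
```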

Calculations of the effects of actions A throughout the system cannot be performed without including assumptions about conditions in the 'context' or environment C of the system: the first argument premise may have to be further explained or qualified as follows:

The ‘factual-instrumental’ premise should specify the conditions {c} under which the relationship ri,j is expected to hold, and a third premise added to the argument: +(ck1,2,..)!

The values of these conditions must then be verified and added to the model.

The calculations or simulation will now present all the ‘objective’ measures of performance (‘consequences’) of implementing the actions A of the plan; these may give rise to more issues regarding their plausibility or desirability, and corresponding new arguments.

Entering ‘quality’ judgments      

With the presentation of the revised systems calculations, discourse participants will now be able to 'evaluate' the merit of these contributions. One way this can be done is to assign a 'plausibility' judgment to each premise; use these to derive a plausibility value for each entire argument; develop 'argument weights' by assigning weights of relative importance to the 'deontic' (ought-) claims in all arguments and multiplying each argument's plausibility by the weight of its ought-claim; and finally 'aggregate' the argument weights into an overall plausibility judgment of the plan.

Q = FP(Planpl)

      An individual's assessment of the 'quality' of a plan is a function FP of plan plausibility Planpl. The relationship can be shown in a diagram like the criterion functions in Figure 1 above, using Planpl as the 'performance criterion'.

Planpl = FA(Argw1, Argw2, … Argwn)

      Plan plausibility Planpl is a function FA of the argument weights Argwi of all pro and con arguments i raised in the discourse about the plan. For example: Planpl = ∑ Argwi

Argwi = FAW(Argpli, wi)

      The weight of an argument Argwi is given by the argument weight function FAW: a function of the plausibility of that argument and the weight of relative importance of its deontic (ought-) premise; for example:

Argwi = Argpli × wi        and

Argpli = APL{prempl}      

      The argument plausibility is a function APL of the plausibilities prempl of all argument premises (here, for the 'planning argument'). The outcome so of premise 1 is the systems variable so; the expression 'will result in' of premise 1 is equivalent to the systems model relationship between the plan (to be specific, the action a of the plan) and the variable value so, which the argument then claims, in premise 2, to be desirable; and the set {c} of conditions under which premise 1 is expected to hold, stated in premise 3, is a subset of the context conditions of the overall systems model. (A diagram showing the connection between argument assessment and the systems model remains to be developed.)
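      A minimal sketch of this whole chain (Python; the product as APL, the signed sum as FA, and all numbers are assumptions, offered only as one possible instance of the functions named above):

```python
def argument_plausibility(premise_pls):
    """APL: argument plausibility from its premise plausibilities.
    The product is assumed here as one possible choice of APL."""
    pl = 1.0
    for p in premise_pls:
        pl *= p
    return pl

def argument_weight(arg_pl, deontic_weight):
    """FAW: Argwi = Argpli x wi."""
    return arg_pl * deontic_weight

def plan_plausibility(arg_weights):
    """FA: Planpl as the sum of argument weights
    (signed: pro arguments positive, con arguments negative)."""
    return sum(arg_weights)

# Hypothetical example: one pro argument (three premises) and one con argument.
pro = argument_weight(argument_plausibility([0.9, 0.8, 0.7]), deontic_weight=0.6)
con = -argument_weight(argument_plausibility([0.5, 0.9]), deontic_weight=0.4)
print(plan_plausibility([pro, con]))   # 0.3024 - 0.18 = ~0.1224
```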

Implications

      The connections between the different 'perspectives' guiding evaluation – the systems model, the 'formal evaluation' model, and the argument assessment model – can now be seen, at the least, to facilitate 'translation' between the different approaches.

Mutual shortcomings of approaches

      The decision to choose equivalent symbols for the elements or variables involved also exposes the shortcomings of each perspective: the standard systems model does not accommodate the evaluation aspects, suggesting to decision-makers that the 'objective' variables suffice to support decisions; the formal evaluation and argument assessment approaches fail to provide the 'systemic' overview of the 'whole system' that is the major contribution of the systems model.

There is no completely adequate single approach

      An essential insight, I suggest, is the following: neither the construction and calculation of the systems model, nor the assessment of plan plausibility through the argumentative 'pro' and 'con' discussion, nor any 'formal' evaluation procedure can by itself be a sufficient basis for decisions guided by the merit of all aspects that ought to be given 'due consideration' in important planning projects. Nor can they be arranged in some linear sequence, such as: first collect the 'data' to construct a systems model, then develop plans, then argue the pros and cons or evaluate alternative plan options.

‘Parallel’ development of systems model and evaluation

        The work involved in each of these views should go on 'in parallel': the argumentative discussion (with participation by all parties affected by the problem or plan) identifies aspects and variables that should be included in the systems model. The model, in turn, supplies variables whose probability, plausibility, and desirability (contribution to 'quality') should be discussed and evaluated with tools like formal evaluation or argument assessment.

      The decision about which tools should be used for each individual project must be made – agreed upon – by the participants in the process. However, the platform should provide the tools, guidelines, opportunity, and support, even encouragement, for the use of different techniques in a project, as appropriate and applicable.

      This work is ongoing; the above interim observations are offered for discussion.

— o —