Combining systems modeling maps with argumentative evaluation maps: a general template

Many tools and platforms have been proposed to help humanity overcome its various global problems and crises, each with claims of superior ability or adequacy for addressing the 'wickedness' of those problems.

Two of the main perspectives I have studied – the general group of models labeled 'systems thinking' or 'systems modeling and simulation', and the 'argumentative model of planning' proposed by H. Rittel (who incidentally saw his ideas as part of a 'second generation' systems approach) – fall somewhat short of those claims: specifically, neither has so far demonstrated the ability to adequately accommodate the other's key concerns. The typical systems model seems to assume that all disagreements regarding its model assumptions have been 'settled'; it leaves no room for argument, discussion, or disagreement. Conversely, the key component of the argumentative model – the typical 'pro' or 'con' argument of the planning discourse, the 'standard planning argument' – connects no more than two or three of the many elements of a more elaborate systems model of the situation, and thus fails to properly accommodate the complexity and multiple loops of such models.

It is of course possible that a different perspective and approach will emerge that can better resolve this discrepancy. However, it will have to acknowledge and then properly address the difficulty we can currently only express in the vocabulary of the two perspectives. This essay explores how the elements of the two selected approaches can be related in maps that convey both the respective system's complexity and the possible disagreements and assessments of the merit of arguments about system assumptions.

A first step is the following simplified diagram template that shows a ‘systems model’ in the center, with arguments both about how the proposal for intervention in the system (consisting of suggested actions upon specific system elements) should be evaluated, and about the degree of certainty – the suggested term is ‘plausibility’ – about assumptions regarding individual elements.

A key aspect of the integration effort is the insight that the 'system' will have to include all the features discussed in the discourse under the heading of the 'plan proposal': its details of initial conditions; the proposed actions (what to do, by whom, using what tools and resources, and the conditions for their availability); the 'problem' a solution aims at remedying, which is described (at least) by specifying its current 'IS' state, the desired 'OUGHT' state or planning outcome, and the means by which the transition from IS- to OUGHT-state can be achieved; and the potential consequences of implementing the plan, including possible 'unexpected' side- and after-effects. Conversely, the assessment of arguments (the "careful weighing of pros and cons") will have to explicitly address the system model elements and their interactions – elements that should be (but mostly are not) specified in the argument as the 'conditions' under which the plan or one of its features is assumed to effectively achieve the specific outcome or goal referenced by the argument.

For the sake of simplicity, the diagram only shows two arguments or reasons for or against a proposed plan. In reality, there will always be at least two arguments (the benefit and the cost of a plan), but usually many more, based on assessments of the multiple outcomes of the plan and of the actions to implement it, as well as of the conditions (feasibility, availability, cost and other resources) for its implementation. The desirability assessments of different parties will differ: the argument seen as 'pro' by one party can be a 'con' argument for another, depending on the assessment of the premises. Therefore, arguments are not shown as pro or con in the diagram.

 

[Figure: AMSYST 1 – diagram template relating a systems model to evaluation arguments]
The diagram uses abbreviated notations for conciseness and convenient overview; they are explained in the legend below, which presents some key (but by no means exhaustive) concepts of both perspectives.

*  PLAN or P Plan or proposal for a plan or plan aspects

*  R    Argument or 'reason'. Used both for an entire 'pro' or 'con' argument about the plan or an issue – the entire set of premises supporting the 'conclusion' claim (usually the plan proposal) – and for the relationship claimed, in the factual-instrumental premise, to connect the plan with an effect, usually a goal or a negative consequence of plan implementation.
The 'standard planning argument' pattern prevailing in planning discourse has the general form:
D(PLAN)                Plan P ought to be adopted (deontic 'conclusion')
because
FI((PLAN –R–> O)|{C})  P has relationship R with outcome O, given conditions {C} (factual-instrumental premise)
and
D(O)                   Outcome O ought to be pursued (deontic premise)
and
F{C}                   Conditions {C} are given, i.e. true (factual premise)

The relationship R is most often a causal connection, but also stands for a wide variety of relationships that constitute the basis for pro or con arguments: part-whole, identity, similarity, association, analogy, catalyst, logical implication, being a necessary or sufficient condition for, etc. In an actual application, these relationships may be distinguished and identified as appropriate.

*    O or G   Outcome or goal to be pursued by the plan, but also used for other effects including negative consequences

*    M —   the relationship of P ‘being a means’ to achieve O

*     C or {C}     The set of conditions c under which the claimed relationship M between P and O is assumed to hold

*     pl    'plausibility' judgments about the plan, arguments, and argument premises, expressed as values on a scale from +1 (completely plausible) to -1 (completely implausible), with a midpoint 'zero' understood as 'so-so' or 'don't know, can't decide', in combination with the abbreviations for those:
*       plPLAN or plP   plausibility judgment of the PLAN; this is some individual's subjective judgment
*       plM   plausibility of P being effective in achieving O
*       plO   plausibility of an outcome O or goal G
*       pl{C}   plausibility (probability) of conditions {C} being present
*       plc   plausibility of condition c being present
*       plR   plausibility of argument or reason R
*       plPLANGROUP   a group judgment of plan plausibility

*       wO   weight of relative importance of outcome O (0 ≤ wO ≤ 1; ∑wO = 1)

*       WR   argument weight, or weight of reason R

Functions F between plausibility values:

*      F1     Group plausibility aggregation function:
plPLANGROUP = F1(plPLANq), q = 1, 2, … n
The group's plan plausibility is aggregated from the plan plausibility judgments of all n members q of the group.

*      F2    Plan plausibility function:
plPLANq = F2(WRi), i = 1, 2, … m
Person q's plan plausibility is a function of the weights of all m reasons Ri.

*      F3   Argument weight function:
WRi = F3(plRi) · wOj
The weight of argument Ri is its plausibility multiplied by the weight of relative importance of the outcome Oj it refers to.

*     F4   Argument plausibility function:
plRi = F4( pl((P –Mi–> Oi)|{Ci}), pl(Oi), pl{Ci} )
The plausibility of argument Ri is a function of all its premise plausibility judgments.

*     F5     Condition set plausibility function:
pl{C} = F5(plck), k = 1, 2, …
The plausibility of the set {C} is a function of the plausibility judgments of all conditions c in the set.

*     F6   Weight of relative importance of outcome Oi:
wOi = (1/n) ∑ vOi, averaging the n individual importance judgments v of outcome Oi,
subject to the conditions 0 ≤ wOi ≤ 1 and ∑i wOi = 1.
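The functions F1 through F6 are deliberately left open by the template. As a concrete illustration only, the following Python sketch fills them in with simple hypothetical choices – a 'weakest link' minimum for F4 and F5, the product plR · wO for F3, a plain sum for F2, and the arithmetic mean for F1. None of these particular choices is prescribed by the text; they merely satisfy the stated scales (plausibilities in [-1, +1], weights in [0, 1] summing to 1).

```python
# Illustrative (hypothetical) instantiations of the functions F1-F6.

def f5_condition_set(pl_c: list[float]) -> float:
    """F5: plausibility of the condition set {C}.
    Assumption: 'weakest link' -- the set is only as plausible
    as its least plausible condition c."""
    return min(pl_c)

def f4_argument(pl_fi: float, pl_o: float, pl_c: float) -> float:
    """F4: argument plausibility from its premise plausibilities.
    Assumption: weakest-link minimum over all premises."""
    return min(pl_fi, pl_o, pl_c)

def f3_weight(pl_r: float, w_o: float) -> float:
    """F3: argument weight = argument plausibility * outcome weight.
    pl_r is in [-1, +1] and w_o in [0, 1], so 'con' arguments
    simply carry negative weight."""
    return pl_r * w_o

def f2_plan(weights: list[float]) -> float:
    """F2: one person's plan plausibility from all m argument weights.
    Assumption: simple sum; since the outcome weights sum to 1,
    the result stays within [-1, +1]."""
    return sum(weights)

def f1_group(pl_plan_by_member: list[float]) -> float:
    """F1: group plan plausibility. Assumption: arithmetic mean."""
    return sum(pl_plan_by_member) / len(pl_plan_by_member)
```

For example, a premise set (0.8, 0.9, 0.5) yields argument plausibility 0.5 under the weakest-link rule; with outcome weight 0.6 that argument contributes weight 0.3 to the plan plausibility.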

*    System S   The system S is the network of all variables describing the initial conditions c (the IS-state of the problem the plan is trying to remedy), the means M involved in implementing the plan, the desired 'end' conditions or goals G of the plan, and the relationships and loops between these.

The diagram does not yet show a number of additional variables that will play a role in the system: the causes of the initial conditions (which will also affect the outcome or goal conditions); the variables describing the availability, effectiveness, costs, and acceptability of the means M; and the potential consequences of both M and O of the proposed plan. Clearly, these conditions and their behavior over time (both during the period needed for implementation and over the assumed planning horizon or life expectancy of the solution) will or should be given due consideration in evaluating the proposed plan.

Towards adding argumentation information to systems maps and systems complexity to argument maps.

This brief exploration assumes that discussions, as well as any systems analysis and modeling, are essentially part of human efforts to deal with some problem, to achieve some change of conditions in a situation – a change that is expected to be different from how that situation would exist or change on its own, without a planning intervention.

1       Adding questions and arguments to systems diagrams.

Focusing on a single component of a typical systems diagram: two elements (variables)
A and B are linked by a connection / relationship R(AB) :

A ———R———> B

For convenience, in the following these elements are listed vertically to allow adding questions people might ask about them, and hold different opinions about the possible answers.

A          What is A?
|          What is the current value (description) of A (at time i)?
|          How will A change (e.g. what will the value of A be at time i+j)?
|          What causes / caused A?
|          Should changing A be a part of a policy / plan?
|             If so: what action steps S (sequence? times? actors?) and
|             what means / resources M will be needed?
|             Are the means, actors etc. available? Able? Willing?
|             What will be the consequences KA of changing A?
|             Who would be affected by KA? In what way?
|             Is consequence KAj desirable? Undesirable?
|          Q: Is A the appropriate concept for the problem at hand?
|             (and are the questions about A the appropriate questions?)
|
R(AB)      What is the relationship R(AB)?
|          What is the direction of R?
|          Should there be a relation R(AB)?
|          What is the (current) rate of R? (Other parameters, e.g. strength?)
|          What should the rate of R be?
|
B          What is B?
           What is the current state / value of B?
           Should B be the aim / goal G of a policy / plan?
           Are there other (alternative) means for attaining B?
           What should be the desired state / value of B? (At what time?)
           What factors (other than A) are influencing B?
           What would be the consequences KB of attaining G?
           Who would be affected by KB? In what way?
           Is consequence KBj desirable? Undesirable?
           Q: Is B the appropriate concept for the problem at hand?
              (and are the questions about B the appropriate questions?)
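The questions attached to A, R(AB), and B above could be carried as first-class data in a systems map. The following Python sketch shows one minimal way to attach open issues to elements and links; all class and field names are hypothetical, not the API of any existing tool:

```python
from dataclasses import dataclass, field

@dataclass
class Issue:
    """An open question about a map element, with competing answers."""
    question: str
    answers: list[str] = field(default_factory=list)

@dataclass
class Element:
    """A system variable, e.g. 'A' or 'B', with its attached issues."""
    name: str
    issues: list[Issue] = field(default_factory=list)

@dataclass
class Link:
    """A relationship R between two elements, itself open to question."""
    source: Element
    target: Element
    relationship: str          # e.g. "causal"
    issues: list[Issue] = field(default_factory=list)

# The A --R--> B fragment above, with one contested question per part:
a = Element("A", [Issue("Should changing A be part of the policy / plan?")])
b = Element("B", [Issue("Should B be the aim / goal G of the policy / plan?")])
r = Link(a, b, "causal", [Issue("What is the current rate of R(AB)?")])

open_issues = a.issues + r.issues + b.issues
```

A discourse platform could then render the systems diagram with these issue lists displayed beside each node and link, rather than leaving them implicit.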

Most systems models and diagrams do not show such questions and arguments. It is my impression that they either assume that differences of opinion about the underlying assumptions have been 'settled' in the current version of the model, or that the modeler's understanding of those assumptions is the best or only valid one (on the authority of having constructed the model?). They thereby arguably discourage discussion. They also do not easily accommodate the complete description of plans or policies: they adopt a kind of 'refraining from committing to solutions' attitude of just 'objectively' conveying the simulated consequences of different policies, while limiting the range of policy or plan options by omitting the aspects addressed by such questions and arguments.

2             Adding systems complexity information to argument maps

Typically, the planning discourse will consist of a growing set of 'pro' and 'con' arguments about plan proposals; any decision should be based on 'due consideration' of all these arguments. In the common practice of discussion (even in carefully structured participatory events), the typical individual planning argument can be represented as follows:
"Plan P ought to be adopted and implemented
because
implementing plan P will have relationship R with (e.g. lead to) consequence K, given conditions C,
and
consequence K ought to be pursued (is a goal G),
and
conditions C are present."

This argument, in which several premises have already been added that in reality are often omitted as 'taken for granted', can be represented in more concise formal ways, for example as follows:

D(P)                           (Deontic claim: conclusion, proposal to be supported)
Because
FI((P –R–> K)|C)    (Factual-instrumental premise)
and
D(K)                           (Deontic premise)
and
F(C)                            (Factual premise)
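The formalized argument above can likewise be represented as a data structure whose premises each carry their own list of challenge questions (the questions listed further below). A hypothetical sketch, with invented class names:

```python
from dataclasses import dataclass, field

@dataclass
class Premise:
    """One premise of a standard planning argument."""
    kind: str          # "FI" (factual-instrumental), "D" (deontic), "F" (factual)
    text: str
    questions: list[str] = field(default_factory=list)

@dataclass
class PlanningArgument:
    """The deontic conclusion D(P) and the premises supporting it."""
    conclusion: str
    premises: list[Premise]

arg = PlanningArgument(
    conclusion="D(P): Plan P ought to be adopted",
    premises=[
        Premise("FI", "(P -R-> K)|C",
                ["Does the relationship hold? Currently? In future?"]),
        Premise("D", "Consequence K ought to be pursued",
                ["Other (alternative) means of achieving K?"]),
        Premise("F", "Conditions C are present",
                ["Will C be present in future?"]),
    ],
)
```

Structuring the argument this way makes each premise individually addressable, so that raising an issue about, say, F(C) attaches to that premise rather than to the argument as an undifferentiated whole.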

The argumentative process, in the view of Rittel’s ‘Argumentative Model of Planning’, consists of asking questions (in the case of controversial questions, ‘raising issues’) for the purpose of clarifying, challenging or supporting the various premises. This serves to increase participants’ understanding of the situation and its complexity, which from the point of view of the ‘Systems Perspective’ may be merely ‘crudely’, only qualitatively and thus inadequately represented in the arguments in a ‘live’ discussion. Some potential questions for the above premises are the following:

D(P)         Description, explanation of the plan and its details:
Problem addressed?
Current condition / situation?
Causes, necessary conditions for problem to exist, contributing factors?
Aims / goals?
Available means?
Other possible means of addressing problem?
Q: wrong question: wrong way of looking at the problem?
Implementation details? Steps, actions? Sequence?

Actors / responsibilities?
Means and resources needed? Availability? Costs?

FI((P –R–> K)|C):  Does the relationship hold? Currently? In future?

R(P,K)      Explanation: Type of relationship?

(Causal, analogy, part-whole, logical implication…)
Existence and direction of relationship? Reverse? Spurious?
Strength of relationship?
Conditions under which the relationship can exist / function?

D(K)       Should consequence K be pursued?
Explanation / description of K: details?
What other factors (than the provisions of plan P) affect / influence K?
Other (alternative) means of achieving K?

F(C)         Are the conditions C (under which relationship R holds) present?
Will they be present in future?
What are the conditions C?
What factors (other than those activated by plan P) affect / influence C?
If conditions C are NOT reliably present,
what provisions must be made to secure them? (Plan additions?)

These questions (which arguably should be better accommodated in systems diagrams) can be taken up and addressed in the normal discussion process. Their sequence, orderly treatment, and representation – especially the provision of adequate overview – could be significantly improved by better representation of the variety and complexity of the additional elements introduced by the questions raised.

This is especially true with respect to the question about Conditions C under which the claimed relationship R is assumed to hold. A more careful examination of this question (i.e. more careful than the common qualification ‘everything else being equal’: what IS that ‘everything else’ – and IS it ‘equal’?) will reveal that there are many conditions, and that they are interrelated in different, complex ways, with behaviors over time that we have trouble fully understanding. In other words, they constitute a ‘systems network’ of elements, factors and relationships including positive and negative feedback loops – precisely the kind of network shown in systems diagrams.

Thus, it must be argued that in order to live up to the sensible principle that decisions to adopt or reject plans should be made on the basis of due consideration (i.e. understanding) of all the pro and con arguments, the assessment of those arguments should include adequate understanding of the systems networks referred to in all the pro and con arguments.

3          Conclusion

The implication of the above considerations is, I think, fairly clear: the common practice of systems modeling and diagramming does not adequately accommodate questions and arguments about model assumptions, nor do the common representations of argumentative discourse (issue and argument maps) adequately accommodate systems complexity. This means that the task of developing better means of meeting both requirements is quite urgent: the development of effective global discourse support platforms for addressing the crises we are facing will depend on acceptable solutions to this question. But that is still a vague goal; I have not yet seen anything in the way of specific means of achieving it. Work to do.

A Less Adversarial Planning Discourse Support System

A Fog Island Tavern conversation
about defusing the adversarial aspect of the Argumentative Model of Planning

Thorbjoern Mann 2015

(The Fog Island Tavern: a figment of imagination
of a congenial venue for civilized conversations
about issues, plans and policies of public interest)

– Hi Vodçek, how’s the Tavern life this morning? Fog lifting yet?
– Hello Bog-Hubert, good to see you. Coffee?
– Sure, the usual, thanks. What’s with those happy guys over there — they must be drinking something else already; I’ve never seen them having such a good time here?
– No, they are just having coffee too. But you should have seen their glum faces just a while ago.
– What happened?
– Well, they were talking about the ideas of our friend up at the university, about this planning discourse platform he’s proposing. They were bickering about whether the underlying perspective — the argumentative model of planning — should be used for that, or some other theory, systems thinking or pattern language approaches. You should have been there, isn’t that one of your pet topics too?
– Yes, sorry I missed it. Did they get anywhere with that? What specifically did they argue about?
– It was about those ambitious claims they are all making, about their approach being the best foundation for developing tools to tackle the global wicked problems we are all facing. They feel that those claims are, well, a little exaggerated: each accuses the other's pet approach of being far less effective and universally applicable than advertised, of missing just the main concerns the other side feels are the most important features of their own tool. And both lament the fact that neither approach seems to be as widely accepted and used as they think it deserves.
– Did they have any ideas why that might be?
– One main point seemed to be the mutual blind spot that the Argumentative Model, besides being too ‘rational’ and argumentative for some people, and not acknowledging emotions and feelings, did not accommodate the complexity and holistic perspective of systems modeling (in the view of the systems guys), while the systems models did not seem to have any room for disagreements and argumentation, from the point of view of your argumentative friends.
– Right. I am familiar with those complaints. I don’t think they are all justified, but the perceptions that they are need to be addressed. We’ve been working on that.
– Good. Another main issue they were all complaining about — both sides — was that there currently isn’t a workable platform for the planning discourse, even with all the cool technology we now have. And therefore some people were calling for a return to simple tools that can be used in actual meeting places where everybody can come and discuss problems, plans, issues, policies. The ‘design tavern’ that Abbé Boulah kept talking about, remember?
– Yes. It seemed like a good idea, but only for small communities that can meet and interact meaningfully in ‘town hall’- kind places. Like his Rigatopia thing, as long as that community stays small enough.
– Well, they seemed to get stuck in gloom about that issue for a while, couldn’t decide which way to go, and lamenting the state of technology for both sides. That’s when Abbé Boulah showed up for a while, and turned things around.
– How did he do that?
– He just reminded them of the incredible progress the computing and communication technology has seen in the last few decades, and suggested that they might think about how that progress might have been focused on the wrong problems, or simply not getting around to the real task of their topic — planning discourse support — yet. Told them to explore some opportunities of the technology – possibilities already realized by tools already on the market or just as feasible but not yet produced. He bought them a round of his favorite Slovenian firewater and told them to brainstorm crazy ideas for new inventions for that cause, to be applied first in his Rigatopia community experiment on that abandoned oil rig. That’s what set them off. Boy, they are still having fun doing that.
– Did they actually come up with some useful concepts?
– Useful? Don't know about that. But there were some wild and interesting ideas I heard them toss around. Strangely, most of them were about tech gizmos. They seem to think that the technical problem of global communication is just about solved – messages and information can be exchanged instantaneously all over the world – and that concepts like Rittel's IBIS provide an appropriate basis for organizing, storing, and retrieving that information; what's missing has to do with the representation, display, and processing of the contributions for decision-making: analysis and evaluation.
– Do you have an example of ideas they discussed?
– Plenty. For the display issue, there was the invention of the solar-powered ‘Googleglass-Sombrero’ — taking the Google glass idea a step further by moving the internet-connected display farther away from the eye, to the rim of a wide sombrero, so that several display maps can be seen and scanned side by side, not sequentially. Overview, see? Which we know today’s cell-phones or tablets don’t do so well. There was the abominable ‘Rollupyersleeve-watch’. It is actually a smartphone, but would have an expandable screen that can be rolled up to your elbow so you can see several maps simultaneously. Others were still obsessed with making real places for people to actually meet and discuss issues, where the overall discourse information is displayed on the walls, and where they would be able to insert their own comments to be instantly added and the display updated. ‘Democracy bars’, in the tradition of the venerable sports bars. Fitted with ‘insect-eye’ projectors to simultaneously project many maps on the walls of the place, with comments added on their own individual devices and uploaded to the central system.
– Abbé Boulah’s ‘Design Tavern’ brought into the 21st IT age. Okay!
– Yes, that one was immediately grabbed by the corporate – economy folks: Supermarkets offering such displays in the cafe sections, with advertisement, as added P/A attractions…
– Inevitable, I guess. Raises some questions about possible interference with the content?
– Yes, of course. Somebody suggested a version of the old equal-time rule: that any such ad had to be immediately accompanied by a counter-ad of some kind, to ‘count’ as a P/A message.
– Hmm. I’d see a lot of fruitless lawsuits coming up about that.
– Even the evaluation function generated its innovative gizmos: there was a proposal for a pen (for typing comments) with a sliding up-down button that instantly lets you send your plausibility assessment of proposed plans or claims. It was instantly countered by another idea, of equipping smartphones with a second 'selfie-camera' that would read and interpret your facial expressions when you read a comment or argument: not only nodding for agreement or shaking your head to signal disagreement, but also raised eyebrows, frowns, smiles, confusion – all instantly sent to the system as instant opinion polls. The system would then compute the assessment level of the entire group of participants in a discussion, and send it back to the person who made the comment, suggesting more evidence, better justification, etc.
– Yes, there are some such possibilities that a kind of ‘expert system’ component could provide: not only doing some web research on the issues discussed, but actually taking part in the discussion, as it were. For example, didn’t we discuss the idea of such a system scanning both the record of discussion contributions and the web, for example for similar cases? I remember Abbé Boulah explaining how a ‘research service’ of such a system could scan the data base for pertinent claims and put them together into pro and con arguments the participants hadn’t even thought of yet. Plus, of course, suggesting candidate questions about those claims that should be answered, or for which support and evidence should be provided, so people could make better-informed assessments of their plausibility.
– I’m glad you said ‘people’ making such assessments. Because contrary to the visions of some Artificial Intelligence enthusiasts, I don’t think machines, or the system, should be involved in the evaluation part.
– Hey, all that prowess in drawing logical conclusions from data and stored claims, and it should be kept from making valuable contributions: are you a closet retro-post-neoluddite? Of course I agree: especially regarding the ought-claims of the planning arguments, the system has no business making judgments. But the system would be 'involved', wouldn't it? In processing and calculating participants' evaluation results? In taking the plausibility and importance judgments, and calculating the resulting argument plausibilities, argument weights, and conclusion plausibility, as well as the statistics of those judgments for the entire group of participants?
– You are right. But those results should always just be displayed for people to make their own final judgments in the end, wasn’t that the agreement? Those calculation results should never be used as the final decision criterion?
– Yes, we always emphasized that; but in a practical situation it’s a fine balancing act. Just like decision-makers were always tempted to use some arbitrary performance measure as the final decision criterion, just because it was calculated from a bunch of data, and the techies said it was ‘optimized’. But hey, we’re getting into a different subject here, aren’t we: How to put all those tools and techniques into a meaningful design for the platform, and a corresponding process?
– Good point. Work to do. Do you think we’re ready to sketch out a first draft blueprint of that platform, even if it would need tools that still have to be developed and tested?
– Worth a try, even if all we learn is where there are still holes in the story. Hey guys, why don’t you come over here, let’s see if we can use your ideas to make a whole workable system out of it: a better Planning Discourse Support System?
– Hi Bog-Hubert. Okay, if you feel that we’ve got enough material lined up now?
– We’ll see. How should we start? Does your Robert’s Rules expert have any ideas? Commissioner?
– Well, thanks for the confidence. Yes, I do think it would be smart to use the old parliamentary process as a skeleton for the process, if only because it’s fairly familiar to most folks living in countries with something like a parliament. Going through the steps from raising an issue to a final decision, to see what system components might be needed to support each of those steps along the way, and then adding what we feel are missing parts.
– Sounds good. As long as Vodçek keeps his bar stocked, we can always go back to square one and start over if we get stuck. So how does it start?
– I think there are several possible starting points: Somebody could just complain about a problem, or already make a proposal for how to deal with it, part of a plan. Or just raise a question that’s part of those.
– Could it just be some routine agency report, monitoring an ongoing process, — people may just accept it as okay, no special action needed, or decide that something should be done to improve its function?
– Yes, the process could start with any of those. Can we call it a ‘case’, as a catchall label, for now? But whatever the label, there needs to be a forum, a place, a medium to alert people that there is a candidate case for starting the process. A ‘potential case candidate listing’, for information. Anybody who feels there is a need to do something could post such a potential case. It may be something a regular agency is already working on or should address by law or custom. But as soon as somebody else picks it up as something out of the ordinary, significant enough to warrant a public discussion, the system will ‘open’ the case, which means establishing a forum corner, a venue or ‘site’ for its discussion, and invite public contributions to that discussion.
– Yeah, and it will get swamped immediately with all kinds of silly and irrelevant posts. How does the system deal with that? Trolls, blowhards, just people out to throw sticks into the wheels?
– Good question. The problem is how to sort out the irrelevant stuff — but who is to decide what’s what? And throw out what’s irrelevant?
– Yes, that itself could lead to irrelevant and distracting quarrels. I think it’s necessary to have a first file where everything is kept in its original form, a ‘Verbatim’ depository, for reference. And deal with the decision about what’s relevant by other means, for example the process of assessment of the merit of contributions. First, everybody who makes a contribution will get a kind of ‘basic contribution credit point’, a kind of ‘present’ score, which is initially just ‘empty’. If it’s the first item of some significance for the discussion, it will get filled with an adjustable but still neutral score — mere repetitions will stay ‘noted’ but empty.
– Good idea! This will be an incentive to make significant information fast, and keep people from filling the system with the same stuff over and over.
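(The crediting rule just described – an initially empty 'present' score for every contribution, a fillable neutral score for the first occurrence of a claim, and 'noted but empty' for repetitions – could be sketched in code. The following is a hypothetical illustration only, not a specification of the envisioned platform:)

```python
def credit_contributions(posts: list[tuple[str, str]]) -> list[dict]:
    """Assign each (author, claim) post a 'present' credit record.
    Only the first occurrence of a claim gets a fillable merit score
    (initially neutral, 0.0); repetitions are noted but left empty (None)."""
    seen: set[str] = set()
    record = []
    for author, claim in posts:
        first = claim not in seen
        seen.add(claim)
        record.append({
            "author": author,
            "claim": claim,
            "merit": 0.0 if first else None,   # fillable vs. 'noted but empty'
        })
    return record
```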
– Yes. But then you'll need some sorting of all that material, won't you?
– True. You might consider that part of an analysis service, determining whether a post contains claims that are 'pertinent' to the case. It may just consist of matching a term – a 'topic' or subject – that's part of the initial case description, or that provides a link to a contribution already posted. Each term or topic is then listed as the content subject of a number of possible questions or issues – the 'potential issue family' of factual, explanatory, instrumental, and deontic (ought-) questions that can be raised about the concept. This can be done according to the standard structure of an IBIS (issue-based information system): a 'structured' or formalized file that consists of the specific questions and the respective answers and arguments to those. Of course somebody or something must be doing this – an 'analysis' or 'formalizing' component, either some human staff or an automated system that still needs to be developed. Ideally, the participants will learn to do this structuring or formalizing themselves, to make sure the formalized version expresses their real intent.
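(The 'potential issue family' step lends itself to a mechanical sketch: given a topic term, generate its factual, explanatory, instrumental, and deontic questions. The four question types come from the conversation above; the question wordings and function names below are hypothetical placeholders:)

```python
# Question templates for the four issue types of a 'potential issue family'.
TEMPLATES = {
    "factual":      "What is the current state of {t}?",
    "explanatory":  "What causes / explains {t}?",
    "instrumental": "How could {t} be changed, and by what means?",
    "deontic":      "What should {t} be?",
}

def issue_family(topic: str) -> dict[str, str]:
    """Generate the standard question family for one topic term."""
    return {kind: tpl.format(t=topic) for kind, tpl in TEMPLATES.items()}

# Example: the family generated for one topic in a case discussion.
family = issue_family("traffic congestion")
```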
– And that ‘structured’ file will be accessible to everybody, as well as the ‘verbatim’ file?
– Yes. Both should be publicly accessible as a matter of principle. But access ‘in principle’ is not yet very useful. Such files aren’t very informative or interesting to use. Most importantly, they don’t provide the overview of the discussion and of the relationship between the issues. This is where the provision and rapid updating of discourse maps becomes important. There should be maps of different levels of detail: topic maps, just showing the general topics and their relationships, issue maps that provide the connections between the issues, and argument maps that show the answers or arguments for a specific issue, with the individual premises and their connections to the issues raised by each premise.
– So what do we have now: a support system with several storage and display files, and the service components to shuffle and sort the material into the proper slots. Al, I see you drew a little diagram there?
– Yes – I have to doodle all this in visual form to understand it:


 

Figure 1 — The main discourse support system: basic content components

– Looks about right, for a start. You agree, Sophie?
– Yes, but it doesn’t look that much different from the argumentative or IBIS type system we know and started from. What happened to the concern about the adversarial flavor of this kind of system? Weren’t we trying to defuse that? But how? Get rid of arguments?
– Well, I don’t think you can prevent people from entering arguments — pros and cons about proposed plans or claims. Every plan has ‘pros’ – the benefits or desirable results it tries to produce – and ‘cons’, its costs, and any undesirable side- and after-effects. And I don’t think anybody can seriously deny that they must be brought up, to be considered and discussed. So they must be acknowledged and accommodated, don’t you think?
– Yes. And the evaluation of pro and con merit of plan proposals, based on the approach we’ve been able to develop so far, will depend on establishing some argument plausibility and argument weight.
– I agree. But isn’t there a way in which the adversarial flavor can be diminished, defused?
– Let’s see. I think there are several ways that can be done. First, in the way the material is presented. For example, the basic topic maps don’t show content as adversarial, and the issue maps can de-emphasize the underlying pro and con partisanship, if any, by the way the issues are phrased. Whether argument maps should be shown with complete pro and con arguments is a matter of discussion, perhaps best dealt with in each specific case by the participants. This applies most importantly to the way the entire discourse is framed, and the ‘system’ could suggest forms of framing that avoid the expectation of an adversarial win-lose outcome. If a plan is introduced as a ‘take-it-or-leave-it’ proposal to be approved or rejected, inevitably some participants will see themselves as the intended or unintended losing party, which generates the adversarial attitudes. Instead, if the discourse is started as an invitation to contribute to the generation of a plan that avoids placing the costs or disadvantages unfairly on some affected folks, and the process explicitly includes the expectation of plan modification and improvement, that attitude will be different.
– So the participants in this kind of process will have to get some kind of manual of proper or suggested behavior, is that right? How to express their ideas?
– I guess that would be helpful. Suggestions, yes, not rules, if possible.
– Also, if I understand the evaluation ideas right, the reward system for contributions can include giving people points for information items that aren’t clearly supporting one party or the other, so individual participants can ‘gain’ by offering information that might benefit ‘the other’ party, would that help to generate a more cooperative attitude?
– Good point. Before we get to the evaluation part though, there is another aspect — one of the ‘approach shortcomings’, that I think we need to address.
– Right, I’ve been waiting for that: the systems modeling question. How to represent complex relationships of systems models in the displays presented to the participants? Is that what you are referring to?
– Yes indeed.
– So do you have any suggestions for that? It seems that it is so difficult — or so far off the argumentative planners’ radar — that it hasn’t been discussed or even acknowledged, let alone solved?
– Sure, it almost looks like a kind of blind spot. I think there are two ways this might, or should, be dealt with. One is that the system’s research component — here I mean the discourse support system — can have a service that searches the appropriate databases to find and enter information about similar cases where systems models may have been developed, and enters the systems descriptions, equations and diagrams — most importantly, the diagrams — into the structured file and the map displays. In the structured file, questions about the model assumptions and data can then be added — this is the element usually missing in systems diagrams. But the diagrams themselves do offer a different and important way for participants to gain the needed overview of the problem they are dealing with.
– So far, so good. Usually, the argumentative discussion and the systems models speak different languages, have different perspectives, with different vocabularies. What can we do about that?
– I was coming to that — it was the second way I mentioned. But the first step, remember, is that the systems diagrams are now becoming part of the discussion, and any different vocabulary can be questioned and clarified together with the assumptions of the model. That’s looking at it from the systems side. The other entry, from the argumentative side, can be seen when we take a closer look at specific arguments. The typical planning argument is usually only stated incompletely — just like other arguments. It leaves out premises the arguer feels can be ‘taken for granted’. A more completely stated planning argument would spell out these three premises of the ‘conclusion-claim’, that
‘Proposal or Plan P should be adopted,
          because
          P will lead to consequence or result R (given conditions C),
           and
          Result R ought to be pursued
          (and
           conditions C are present)’.

The premise in parenthesis, about conditions C, is the one that’s most often not spelled out, or just swept under the rug with phrases such as ‘all else being equal’. But take a closer look at that premise. Those conditions — the ones under which the relationship between P and R can be expected to hold or come true — refer to the set of variables we might see in a systems diagram, interacting in a number of relationship loops. It’s the loops that make the set a true system, in the minds of the systems thinkers.
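Spelled out as data, the fully stated planning argument might look like this rough sketch; the field names are assumptions for illustration, and the condition premise is the one that links the argument to the variables of a systems model:

```python
from dataclasses import dataclass

# Sketch of the 'standard planning argument' spelled out above, with all
# three premises made explicit. Field names are illustrative assumptions.

@dataclass
class PlanningArgument:
    conclusion: str          # 'Proposal P should be adopted'
    factual_premise: str     # 'P will lead to result R, given conditions C'
    deontic_premise: str     # 'Result R ought to be pursued'
    condition_premise: str   # 'Conditions C are present' (often left implicit)

arg = PlanningArgument(
    conclusion="Plan P should be adopted",
    factual_premise="P will lead to result R, given conditions C",
    deontic_premise="Result R ought to be pursued",
    condition_premise="Conditions C are present",
)
```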
– Okay, so what?
– What this suggests is, again, a twofold recommendation, that the ‘system’ (the discourse system) should offer as nudges or suggestions for the participants to explore.
– Not rules, I hope?
– No: suggestions and incentives. The first is to use existing or proposed system diagrams as possible sources for aspects — or argument premises — to study and include in the set of concerns that should be given ‘due consideration’ in a decision about the case. In other words, turn them into arguments. Of the defused kind, Sophie. The second ‘nudge’ is that the concerns expressed in the arguments or questions by people affected by the problem at hand, or by proposed solutions, should be used as material for the very construction of the model of the problem situation by the system modeler for the case at hand.
– Right. For the folks who are constructing systems models for the case at hand.
– Yes, That would likely be part of the support system service, but there might be other participants getting involved in it too.
– I see: Reminders: as in ‘do you think this premise refers to a variable that should be entered into the systems model?’
– Good suggestion. This means that the construction of the system model is a process accompanying the discourse. One cannot precede the other without remaining incomplete. It also requires a constant ‘service’ of translation of any disciplinary jargon of the systems model — the ‘systems’ vocabulary as well as the special vocabulary of the discipline within which the system studied is located. And of course, translation between different natural languages, as needed. For now, let’s assume that would be one of the tasks of the ‘sorting’ department; we should have mentioned that earlier.
– Oh boy. All this could complicate things in that discourse.
– Sure — but only to the extent that there are concepts that need to be translated, and aspects that are significantly different as seen from ordinary ‘argumentative’ or ‘parliamentary’ planning discussion perspective as opposed to a systems perspective, don’t you agree?
– So let’s see: now we have some additional components in your discourse support system: the argument analysis component, the systems modeling component, the different translation desks, and the mapping and display component. What’s next?
– That would be the evaluation function. From what we know about evaluation, in this case evaluating the merit of discussion contributions, the process of clarifying, testing, and improving our initial offhand judgments into more solid, well-founded, deliberated judgments requires that we make the deliberated overall judgments a function of, that is, dependent on, the many ‘partial’ judgments provided in the discussion and in the models. And we have talked about the need for a better connection between the discourse contribution merit and the decision judgment. This is the purpose of the discourse, after all, right?
– Yes. And the reason we think there needs to be a distinct ‘evaluation’ step or function is that quite often, the link between the merit of discussion contributions and the decision is too weak, perhaps short-circuited, prejudiced, or influenced by ‘hidden agenda’ — improper, illicit agenda considerations, and needs to be more systematic and transparent. In other words, the decisions should be more ‘accountable’.
– That’s quite a project. Especially the ‘accountability’ part — perhaps we should keep that one separate to begin with? Let’s just start with the transparency aspect?
– Hmm. You don’t seem too optimistic about accountability? But without that, what use is transparency? If decision makers, whoever they might be in a specific case, don’t have to be accountable for their decision, does it matter how transparent they are? But okay, let’s take it one item at a time.
– Seems prudent and practical. Can you provide some detail about that evaluation process?
– Let me see. We ask the participants in the process to express their judgments about various concepts in the process, on some agreed-upon scale. The evaluation approach of our friend suggests a plausibility scale. It applies to judgments about how certain we are that a claim is true, or how probable it is, or how plausible it is, if neither truth nor probability really apply, as in ought-claims. It ranges from some positive value to the corresponding negative value, agreed to mean ‘couldn’t be more plausible’ and ‘couldn’t be less plausible’, respectively, with a midpoint of zero expressing ‘don’t know’ or ‘can’t judge’.
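As a minimal sketch, assuming the scale runs from -1 to +1 (the endpoints are an assumption; the text only requires a symmetric scale around a zero midpoint):

```python
# Sketch of the plausibility scale described above: assumed here to run
# from -1.0 ('couldn't be less plausible') to +1.0 ('couldn't be more
# plausible'), with 0 meaning 'don't know / can't judge'.

PL_MIN, PL_MAX = -1.0, +1.0

def validate_plausibility(pl: float) -> float:
    """Reject judgments outside the agreed scale."""
    if not PL_MIN <= pl <= PL_MAX:
        raise ValueError(f"plausibility {pl} outside [{PL_MIN}, {PL_MAX}]")
    return pl
```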
– What about those ‘ought’ claims in the planning argument? Just ‘plausible’ doesn’t really express the ‘weighing’ aspect we are talking about?
– Right: for ought-claims — goals, objectives — there must be a preference ranking or a scale expressing weight of relative importance. The evaluation ‘service’ system component will prepare some kind of form or instrument people can use to express and enter those judgments. This is an important step where I think the adversarial factor can be defused to some extent: if argument premises are presented for evaluation individually, not as part of the arguments in which they may have been entered originally, and without showing who was the original author of a claim, can we expect people to evaluate them more according to their intrinsic merit and evidence support, and less according to how they bolster this or that adversarial party?
– I’d say it would require some experiments to find out.
– Okay: put that on the agenda for next steps.
– Can you explain how the evaluation process would continue?
– Sure. First let me say that the process should ideally include assessment during all phases of the process. If there is a proposal for a plan or a plan detail, for example, participants should assign a first ‘offhand’ overall plausibility score to it. That score can then be compared to the final ‘deliberated’ judgment, as an indicator of whether the discussion has achieved a more informed judgment, and what difference that made. Now, for the details of the process. To get an overall deliberated plausibility judgment, people only need to provide plausibility scores and importance weights for the individual premises of the pro and con planning arguments. For each individual participant, the ‘system’ can then calculate the plausibility and weight of each argument (the weight based on the importance the person has assigned to its deontic premise), and the person’s deliberated proposal plausibility as a function of all the argument weights.
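A minimal sketch of this per-participant calculation. The specific aggregation functions here (weakest premise for argument plausibility, a simple sum of weights for proposal plausibility) are assumptions for illustration; the text deliberately leaves those choices open:

```python
# Per-participant evaluation sketch. Aggregation choices are assumptions:
# the text leaves the actual functions to discussion and experiment.

def argument_plausibility(premise_pls):
    """Assume an argument is no more plausible than its weakest premise."""
    return min(premise_pls)

def argument_weight(premise_pls, deontic_weight):
    """Argument plausibility scaled by the importance weight (0..1)
    this participant assigned to the argument's deontic premise."""
    return argument_plausibility(premise_pls) * deontic_weight

def proposal_plausibility(arguments):
    """Deliberated proposal plausibility as a function of all argument
    weights; here simply their sum (pro arguments > 0, con < 0)."""
    return sum(argument_weight(pls, w) for pls, w in arguments)

# One participant: a pro and a con argument. Premise plausibilities are on
# the -1..+1 scale; deontic importance weights sum to 1.
pro = ([0.8, 0.6, 0.9], 0.7)    # fairly plausible, important objective
con = ([-0.5, 0.4, 0.9], 0.3)   # con argument with one doubtful premise
print(proposal_plausibility([pro, con]))
```

With these assumed functions, the doubtful premise drags the whole con argument down to its own plausibility, which is one argument for the ‘weakest link’ choice.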
– I seem to remember that there were some questions about how all those judgments should be assembled and aggregated into the next deliberated value?
– Yes, there should be some more discussion and experiments about that. But I think those are mostly technical details that are solved in principle, and can be decided upon by the participants to fit the case.
– And the results are then posted or displayed to the group for review?
– Yes. This may lead to more questions and discussion, of course, or to requests for more research, if there are claims that don’t seem to have enough support to make reasonable assessments, or for which the evidence is disputed. I see you are getting worried, Sophie: will this go on forever? There’s a kind of stopping rule: when there are no more questions or arguments, the process can stop and proceed to the decision phase.
– I think the old parliamentary tradition of ‘calling the question’ when the talking has gone on for too long should be kept in this system.
– Sure, but remember, that one was needed mainly because there was no other filter for endless repetition of the same points wrapped in different rhetoric. The rule of adding the same point only once into the set of claims to be evaluated will put a damper on that, don’t you think?
– So Al, did you add the evaluation steps to your diagram?
– Yes. Here’s what it looks like now:


Figure 2 — The discourse support system with added evaluation components

– Here is another suggestion we might want to test, and add to the picture – coming back to the idea of the reward system helping to reduce the adversarial aspect: We now have some real measures — not only for the individual claims or information items that make up the answers and arguments to questions, but also for the plausibility of plan proposals that are derived from those judgments. So we can use those as part of a reward mechanism to get participants more interested in working out a final solution and decision that is more acceptable to all parties, not just to ‘win’ advantages for their ‘own side’.
– You have to explain that, Bog-Hubert.
– Sure. Remember the contribution credit points that were given to everybody, for making a contribution, to encourage participation? Okay: in the plausibility and importance assessment we asked people to do, to deliberate their own judgments more carefully, they were assessing the plausibility and relative importance of those contributions, weren’t they? So if we now take some meaningful group statistic of those assessments, we can modify those initial credits by the value or merit the entire group was assigning to a given item.
– ‘Meaningful’ statistic? What are you saying here? You mean, not just the average or weighted average?
– No, some indicator that also takes account of the degree of support presented for a claim, and the degree of agreement or disagreement in the group. That needs to be discussed. In this way, participants will build up their ‘contribution merit credit account’. You could then also earn merit credits for information that, from a narrow partisan point of view, would be part of an argument for ‘the other side’ — credit for information that serves the whole group.
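One illustrative way such a ‘meaningful’ statistic could modify the initial credits: pair the group average with a disagreement indicator, and discount the credit when the group strongly disagrees. The discount formula here is a pure assumption, just to show the idea:

```python
from statistics import mean, stdev

# Sketch of a group merit statistic that is more than a plain average:
# the average paired with a spread (disagreement) indicator.
# The discount formula is an illustrative assumption.

def group_merit(scores):
    """Return (average, spread) of the group's merit judgments for one item."""
    avg = mean(scores)
    spread = stdev(scores) if len(scores) > 1 else 0.0
    return avg, spread

def adjusted_credit(initial_credit, scores):
    """Modify a contributor's initial credit by the group's merit judgment,
    discounted when the group strongly disagrees about the item."""
    avg, spread = group_merit(scores)
    return initial_credit * avg * (1.0 - min(spread, 1.0))

print(adjusted_credit(10, [0.9, 0.8, 0.9]))   # near-consensus, high merit
print(adjusted_credit(10, [0.9, -0.7, 0.9]))  # disagreement shrinks the credit
```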
– Ha! Now I understand what you said initially about the evaluation function also serving to reduce the amount of trivial, untrue, and plain irrelevant stuff people might post in such discussions: if their information is judged negatively on the plausibility scale, that will reduce their credit accounts. A way to reward good information that can be well supported, and to discourage BS and false information… I like that.
– Good. In addition to that, people could also get credit points for the quality of the final solution — assuming that the discourse includes efforts to modify initial proposals some people find troublesome, to become more acceptable — more ‘plausible’ — to all parties. And the credit you earn may be in part determined by your own contribution to that result. So there are some possibilities for such a system to encourage more constructive cooperation.
– Sounds good. As you said, we should try to do some research to see whether this would work, and how the reward system should be calibrated.
– So the reward mechanism adds another couple of components to your diagram, Al?
– Yes. Bog-Hubert said that the evaluation process should really be going on throughout the entire process, so the diagram that shows it just after the main evaluation of the plan is completed is a little misleading. I tried to keep it simple. And there’s really just one component that will have to keep track of the different steps:

 


Figure 3 — The process with added contribution reward component

 

– Looks good, thanks, Al. But what I don’t see there yet is how it connects with the final decision. I think you got derailed from finishing your explanation of the evaluation process, Bog-Hubert?
– Huh? What did I miss?
– You explained how each participant got a deliberated proposal plausibility score. Presumably one that’s expressed on the same plausibility scale as the initial premise plausibility judgments, so we can understand what the number means. Okay. Then what? How do you get from that to a common decision by the entire community of participants?
– You are right; I didn’t get to that. Well…
– Why doesn’t the system calculate an overall group proposal plausibility score from the individual scores?
– I guess there are some problems with that step, Vodçek. If you mean something like the average plausibility score. Are you saying that it should be the deciding criterion?
– Well… why not? It’s like all those opinion polls, only better, isn’t it? And definitely better than just voting?
– No, friends, I don’t think the judgment about the final decision should be ‘usurped’ by such a score. For one, unless there are several proposals that have all been evaluated in this way, so you could say ‘pick the one with the highest group plausibility score’, you’d have to agree on a kind of threshold plausibility a solution would have to achieve to get accepted. And that would just be another controversial issue. Also, a simple group average could gloss over or hide serious differences of opinion. And like majority voting, just override the concerns of minority groups. So such statistics should always be accompanied by measures of the degree of consensus and disagreement, at the very least.
– Couldn’t there be a rule that a proposal is acceptable if all the individual final plan plausibility scores are better than the existing problem situation? Ideally, of course, all on the positive side of the plausibility scale, but in a pinch at least better than before?
– That’s another subject for research and experiments, and agreements in each situation. But in reality, decisions are made according to established (e.g. constitutional) rules and conventions, habits or ad hoc agreements. Sure, the discourse support systems could provide some useful suggestions or advice to the decision-makers, based on the analysis of the evaluation results. A ‘decision support component’. One kind of advice might be to delay decision if the overall plausibility for a proposal is too close to the midpoint (‘zero’) value of the plausibility scale — indicating the need for more discussion, more research, or more modification and improvement. Similarly, if there is too much disagreement in the overall assessment – if a group of participants show very different results from the majority, even if the overall ‘average’ result looks like there is sufficient support, the suggestion may be to look at the reasons for the disagreement before adopting a solution. Back to the drawing board…
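The two kinds of advice just described could be sketched as a small ‘decision support component’ rule; the thresholds here are pure assumptions, to be agreed per case:

```python
from statistics import mean, stdev

# Sketch of the 'decision support component' advice described above.
# Both thresholds are illustrative assumptions.

NEAR_ZERO = 0.2          # proposal plausibility too close to 'don't know'
MAX_DISAGREEMENT = 0.5   # spread of individual scores considered too high

def decision_advice(individual_scores):
    avg = mean(individual_scores)
    spread = stdev(individual_scores) if len(individual_scores) > 1 else 0.0
    if abs(avg) < NEAR_ZERO:
        return "delay: more discussion, research, or modification needed"
    if spread > MAX_DISAGREEMENT:
        return "examine reasons for disagreement before adopting"
    return "proceed to decision" if avg > 0 else "reconsider the proposal"

print(decision_advice([0.6, 0.7, 0.5]))   # broad support
print(decision_advice([0.8, 0.9, -0.6]))  # the average hides a dissenter
```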
– Getting back to the accountability aspect you promised to discuss: Now I see how that may be using the evaluation results and credit accounts somehow — but can you elaborate how that would work?
– Yes, that’s a suggestion thrown around by Abbé Boulah some time ago. It uses the credit point account idea as a basis of qualification for decision-making positions, and the credit points as a form of ‘ante’ or performance bond for making a decision. There are decisions that must be made without a lot of public discourse, and people in those positions ‘pay’ for the right to make decisions with an appropriate amount of credit points. If the decision works out, they earn the credits back, or more. If not, they lose them. Of course, important decisions may require more points than any individual has compiled; so others can transfer some of their credits to the person, unrestricted, or dedicated to specific decisions. So they have a stake — their own credit account — and lose their credits if they make or support poor decisions. This also applies to decisions made by bodies of representatives: they too must put up the bond for a decision, and the size of that bond may be larger if the plausibility evaluations by discourse participants show significant differences, that is, disagreements. They take a larger risk when making decisions about which some people have significant doubts. But I’m sorry, this is getting away from the discussion here, about the discourse support system.
– Another interesting idea that needs some research and experiments before the kinks are worked out.
– Certainly, like many other components of the proposed system — proposed for discussion. But a discussion that is very much needed, don’t you agree? Al, do you have the complete system diagram for us now?
– So far, what I have is this — for discussion:


Figure 4 — The Planning Discourse Support System – Components

– So, Bog-Hubert: should we make a brief list of the research and experiments that should be done before such a system can be applied in practice?
– Aren’t the main parts already sufficiently clear so that experimental application for small projects could be done with what we have now?
– I think so, Vodçek — but only for small projects with a small number of participants and for problems that don’t have a huge amount of published literature that would have to be brought in.
– Why is that, Bog-Hubert?
– See, Sophie: the various steps have been worked through and described to explain the concept, but it had to be done with different common, simple software programs that are not integrated: the content from one component in Al’s diagram has to be transferred ‘by hand’ to the next. For a small project, that can be done by a small support staff with a little training. And that may be sufficient to do a few of the experiments we mentioned to fine-tune the details of the system. But for larger projects, what we’d need is a well-integrated software program that could do most of the transferring work from one component to the next ‘automatically’.
– Including creating and updating the maps?
– Ideally, yes. And I haven’t seen any programs on the market that can do that yet. So that should be the biggest and top-priority item on the research ‘to do’ list. Do you remember the other items we should mention there?
– Well, there were a lot of items you guys mentioned in passing without going into much detail – I don’t know if that was because any questions about those aspects had been worked out already, or because you didn’t have good answers for them? For example, the idea of building ‘nudging’ suggestions into the system to encourage participants to put their comments and questions into a form that encourages cooperation and discourages adversarial attitudes?
– True, that whole issue should be looked into more closely.
– What about the issue of ‘aggregation functions’ – wasn’t that what you called them? The way participants’ plausibility and importance judgments about individual premises of arguments, for example, get assembled into argument plausibility, argument weights, and proposal plausibility?
– Not to forget the problem of getting a reasonable measure of group assessment from all those individual judgment scores.
– Right. It may end up being a multivariable one, not just a single measure. Like the weather, we need several variables to describe it.
– Then there is the whole idea of those merit points. It sounds intriguing, and the suggestion to link them to the group’s plausibility assessments makes sense, but I guess there are a lot of details to be worked out before it can be used for real problems.
– You say ‘real problems’ – I guess you are referring to the way they could be used in a kind of game, just like the one we ran here in the Tavern last year about the bus system, where the points are just part of the game rules, as opposed to real cases. I think the detailed development of this kind of game should be on the list too, since games may be an important tool to make people familiar with the whole approach. How to get these ideas out there may take some thinking too, and several different tools. But using these ideas for real cases is a whole different ball game, I agree. Work to do.
– And what about the link between all those measures of merit of people’s information and arguments and the final decision. Isn’t that going to need some more work as well? Or will it be sufficient to just have the system sound an alarm if there is too much of a discrepancy between the evaluation results and, say, a final vote?
– We’ll have to find out – as we said, run some experiments. Finally, to come back to our original problem of trying to reduce the adversarial flavor of such a discourse: I’d like to see some more detail about the suggestion of using the merit point system to encourage and reward cooperative behavior. Linking the individual merit points to the overall quality of the final decision — the plan the group is ending up adopting — sounds like another good idea that needs more thought and specifics.
– I agree. And this may sound like going way out of our original discussion: we may end up finding that the decision methods themselves may need some rethinking. I know we said to leave this alone, accept the conventional, constitutional decision modes just because people are used to them. But don’t we agree that simple majority voting is not the ultimate democratic tool it is often held out to be, but a crutch, a discussion shortcut, because we don’t have anything better? Well, if we have the opportunity to develop something better, shouldn’t it be part of the project to look at what it could be?
– Okay, okay, we’ll put it on the list. Even though it may end up making the list a black list of heresy against the majesty of the noble idea of democracy.
– Now there’s a multidimensional mix of metaphors for you. Well, here’s the job list for this mission; I hope it’s not an impossible one:
– Developing the integrated software for the platform
– Developing better display and mapping tools, linked to the formalized record (IBIS)
– Developing ‘nudge’ phrasing suggestions for questions and arguments that minimize adversarial potential
– Clarifying questions about aggregation functions in the evaluation component
– Improving the linkage between evaluation results (e.g. argument merit) and decision
– Clarifying, elaborating the discourse merit point system
– Adding improvement / modification options for the entire system
– Developing alternative decision modes using the contribution merit evaluation results.
– That’s enough for today, Bog-Hubert. Will you run it by Abbé Boulah to see what he thinks about it?
– Yeah, he’ll just take it out to Rigatopia and have them work it all out there. Cheers.

Does Logic Settle The Issue?

Bog-Hubert, entering the Fog Island Tavern, tries to get the attention of Tavern-keeper Vodçek, who is bent over a piece of paper on the counter, scribbling notes in its margin.

– So my friend, are you embarking on a new career of literary critic, or editor? What august publishing entity are you working for?

= Huh? Oh, sorry, didn’t hear you come in. What’s that you say about career? Or did you mean a beer?

– No, thanks, coffee would be fine. I was curious about your editing work there.

= Oh, this? It’s just a letter to my grand-aunt that came back as ‘undeliverable’.

– Your grand-aunt? Hasn’t she been dead for quite some time already? The one and only Aurelia Fryermouth? or do you have another equally grand aunt?

= No, that’s the one. And yes, she died many years ago. Here’s your coffee.

– And your letter took this long to get back to you? I knew the postal service to that country was kind of, well, unpredictable, but this…

= No, I wrote this about a month ago, and it just came back.

– Of course, if she’s dead. Stands to reason. But now you have me seriously worried. Why in three twister’s name did you write her when she’s dead?

= Oh, I do that all the time. I used to write her whenever I’ve written something I’m not quite sure about, and she always sent me useful, insightful comments back. So now I do the same thing, and when the letters come back after a while, I imagine her comments and write them in the margin, with my comments and rebuttals. Using a four-color Bic to keep track of what’s what. Very useful.

– The Bic? Okay. But this strange habit?

= Don’t knock it, it has kept me out of a lot of trouble. It should be required of everybody who’s writing, especially folks who write all those comments in social media discussions.
Now I admit, not everybody has an aunt Aurelia whose wisdom, even of the imagined kind, can be of such profound quality and assistance. She had a way of cutting through the distractions and BS, and put her finger on the real sore spots, like no teacher I ever had. But just imagining what she would say — just like those so-called conservatives who keep parroting ‘what would Reagan do?’; they really should look for somebody more … well, let’s not get into that — is immensely helpful. Not to mention the time delay. Remember the old advice to ‘sleep on it’ before jumping into action? One night is not enough, my friend. Looking at your impulsive writing after several weeks, during which you may also have gained some extra insights and wisdom, however infinitesimal, given your age (compared to aunt Aurelia’s), can be a very sobering experience.
– Ah. I see. It explains the wide margins you’ve left in the letter. But how do you ensure that your margin entries are not as impulsively imprudent as the original writing?

= Good point. I can only say there is a marked marginal improvement, if you’ll excuse the puns. And I indeed have at times resorted to sending her my comments back for review and revision… She does not mind that, unlike live editors who have a tendency to react with irritated and, if I may say so, rather impolite retorts to even the slightest challenges of their authority.

– Hmm. I admit, it sounds like a wise routine. Widely adopted, it would save humanity from a lot of, — what did you call it? — ‘impulsive’ writing, I agree. But now you have made me curious: what are those profound questions you have her comment upon from the Great Aurelian Beyond?

= Don’t know about profound. This last one was just about the puzzlements I felt about the offer by the climate scientist Dr. Keating, to pay a considerable sum of money to any ‘denier’ (his term) of man-made climate change who could provide a proof, via scientific method, that man-made climate change does not occur.

– I heard something about that, yes. Did he get any such proof?

= Several dozen, as far as I know. So he had a big discussion on his blog about why the proofs didn’t hold water, and responded to all the folks who didn’t think the challenge was serious, or thought it ill-stated, etc.

– So what about your puzzlement?

= Well, there were several. One was about the reasons why some people seem very reluctant to accept the idea of man-made climate change (MMCC), reasons they couldn’t really discuss because Dr. Keating insisted on it all being ‘scientific’. But even if those reasons weren’t scientific, does that mean they were totally illegitimate and nonsensical? For example, could it be that — for Dr. Keating and others — accepting the MMCC hypothesis would be seen as also accepting some implications, sight unseen, that might very well be worthy of discussion?

– Okay: seems worth looking into. The other issue?

= That was a strange one. In the discussion, both parties were at times insisting that they had valid logical reasoning on their side, and that the other side was guilty of violating logic. Now somebody found this a bit curious, not to say logically questionable. But upon investigating the matter, he found out that if you just looked at the logical validity of the arguments people put forward — not the truth or probability of the premises — it is entirely possible for both sides to offer quite logically valid arguments for their case, while also being vulnerable to accusations of using arguments that are not deductively valid but merely ‘plausible’, that is, logically inconclusive.

= Huh. Can you explain that?

– Sure. Take the main scientific argument about a hypothesis H — in this case, that MMCC is true. You examine the hypothesis and find that if it is true, then we should be able to find some evidence E that must occur as a consequence. That makes the first premise “If H then E must be observed” or H –> E. Now we observe E. Does this ‘prove’ that H is true?
No: it is the inductive reasoning scheme
((H –> E) & E) –> H
which is logically inconclusive, not deductively valid. It’s what they call a ‘just another white swan’ argument: observing any number of white swans — E — does not prove that the hypothesis that all swans are white is true. (If true, it implies that all swans observed will be white.) You can test that with a truth table: there is one case among all the possible states of the world involving H and E that makes the main implication ‘false’. So if that is your main argument for H, you can be accused of using less than deductively valid logic. But observing just one black swan (or even a pink one, for a more colorful discussion), ~E, deductively and validly refutes the hypothesis:
((H –>E) & ~E) –> ~H
This is a perfectly valid deductive argument (called modus tollens by the logicians).
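The truth-table test mentioned above can be sketched in a few lines of code (a hedged illustration; the helper names `implies` and `valid` are mine, not standard terminology). It confirms that the ‘white swan’ scheme fails in one assignment, while modus tollens holds in all of them:

```python
from itertools import product

def implies(a, b):
    # material implication: a -> b is false only when a is true and b is false
    return (not a) or b

def valid(premises, conclusion):
    # deductively valid: the conclusion holds in every assignment of
    # truth values to H and E that makes all premises true
    return all(conclusion(h, e)
               for h, e in product([True, False], repeat=2)
               if all(p(h, e) for p in premises))

# 'white swan' scheme ((H -> E) & E) -> H: affirming the consequent
print(valid([lambda h, e: implies(h, e), lambda h, e: e],
            lambda h, e: h))        # False: logically inconclusive

# 'black swan' scheme ((H -> E) & ~E) -> ~H: modus tollens
print(valid([lambda h, e: implies(h, e), lambda h, e: not e],
            lambda h, e: not h))    # True: deductively valid
```

The failing case for the first scheme is H false, E true: the swan is white, yet the hypothesis is false.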

= I remember that now, yes. But in science, they have developed that trick with the ‘null hypothesis Ho’ — the hypothesis put on its head — haven’t they? And use the same modus tollens argument showing that if E is observed, Ho can’t possibly be true?

– Yes, at least for questions involving large numbers of data observations, where Ho is understood as, e.g., ‘climate changes happen at random, unrelated to human activities’. Then the argument is not claiming total refutation, just that it is so unlikely (having such a low probability) that E could be observed if Ho were true, that Ho is rejected, and provisionally H is accepted instead. This is accepted as valid scientific reasoning.
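That reasoning can be made concrete with a toy calculation (a deliberately simplified sketch; the numbers are invented for illustration, not climate data). Suppose Ho says ‘warmer’ years occur at random, each with probability 0.5, and we observe 18 warmer years out of 20:

```python
from math import comb

def tail_probability(successes, n, p=0.5):
    # probability, under the null hypothesis, of a result at least
    # as extreme as the one observed (one-sided binomial tail)
    return sum(comb(n, k) * p**k * (1 - p)**(n - k)
               for k in range(successes, n + 1))

p_value = tail_probability(18, 20)
print(p_value < 0.05)  # True: Ho is rejected at the usual 5% level
```

The probability comes out to about 0.0002, far below the conventional 5% threshold, so Ho would be rejected and H provisionally accepted, exactly the pattern described above.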

= So if they can produce such evidence and arguments, doesn’t that settle it?

– Not so fast. For one, the argument scheme is a different one depending on whether you accept or reject the premises — which are of course part of the controversy. And the evidence E is not a simple observation or experiment result, but consists of a ‘body of evidence’ that starts with the definition and understanding of the things you are discussing. Say: what qualifies as ‘climate change’? What human activities are influencing climate? Then selecting appropriate variables for those concepts, which must be measured: temperature, okay, or CO2 — but of what? Air? Water? Land? Some combination? Measured how, over what time period, where (e.g. on the surface, in the stratosphere, or somewhere in between)? Then there must be some distinctive and significant correlation between the measures for climate change and human shenanigans, and some provisions that the correlation actually indicates causation — and not the other way around.

= Wait a minute: ‘the other way around’ — what do you mean by that?

– Oh, maybe somebody claims that human activities increase CO2 levels in the air, which change the climate. And somebody else says: wait — the climate is actually cooling, the winters are getting colder, which causes humans to do more heating, which maybe increases CO2 somewhat, but the cause of that is really climate cooling. Even if the argument doesn’t make sense to you, you can’t just dismiss it as illogical; you have to make sure that if you see a correlation between man-made CO2 and climate change, you have the cause and effect going in the proper direction.

= All that puts quite a burden on the scientists who claim there is a connection between climate change and human activities.

– Right. They have to provide solid evidence and arguments for all the components of that body of evidence. And that makes it relatively easy for anybody to challenge the hypothesis: they only have to cast reasonable doubt on one single component of that chain of evidence to turn the corroborating argument ((H –> E) & E) into the modus tollens
((H –> E) & ~E) –> ~H (‘black swan’) argument ‘refuting’ H, allowing the ‘denier’ to claim a deductively valid argument.
But: What if somebody came up with an argument like this one: “((E –>H) & E) –> H”
(“If we see evidence E, this must mean that H is true; now we observe E, so H is true”)

= Huh? Is that the way science works?

– That may be up for discussion. You could argue that this is precisely the way scientists come up with — conjecture — the hypothesis: they see some things E that suggest H. Science of course also insists that such observations must be repeatable and confirmed by other observers, etc. But if somebody makes such a case, they can claim a perfectly logical and deductively valid argument — a respectable modus ponens. Remember, whether the conclusion is true depends on both the validity of the argument scheme and the truth or plausibility of the premises; to claim logical validity you don’t have to also claim truth of premises. But of course you can’t jump to any specific conclusions yet.
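One quick way to see that this scheme really is deductively valid, unlike the ‘white swan’ scheme, is to check that the whole formula ((E –> H) & E) –> H is a tautology, true under every assignment (a small sketch; `impl` is my shorthand for material implication):

```python
from itertools import product

def impl(a, b):
    # material implication: a -> b
    return (not a) or b

# ((E -> H) & E) -> H is a tautology: true for all truth values of E and H
print(all(impl(impl(e, h) and e, h)
          for e, h in product([True, False], repeat=2)))  # True
```

Validity of the scheme, of course, says nothing about whether the premise E –> H itself deserves belief; that is exactly the point being made in the dialogue.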

= I see the problem here: both sides can claim to have logic on their side. So logic by itself does not settle the controversy. Now if you accept that, shouldn’t both sides agree that the final conclusion will rest on both logical validity and true premises, and to then refrain from trying to clinch the case by just claiming valid logic?

– You’d think so. And you’d think that the scientist would make that clear in stating his case, wouldn’t you?

= Sure. So?

– So part of my puzzlement was the reaction by Dr. Keating to somebody pointing out this story of both sides claiming logic. He just dismissed this, writing that ‘logic is just a tool’; what counts is valid science. Here little ol’ me always thought that valid logical reasoning, together with confirmed observation, correct measurement, calculations, etc., was an integral part of scientific method, the science toolkit. What do I know…

= I can see where this might be a puzzlement for you. So what does your grand-aunt Aurelia have to say about all this?

– She jumped right on the first one of my puzzlements: the other, perhaps ‘illegitimate’ or non-scientific, reasons people might have for hesitating to accept the hypothesis that human activities are screwing up the global climate. That perhaps the context of challenges and claims, like the one by Dr. Keating, subtly or not so subtly implies acceptance of some conclusions that are quite partisan and political, but that can’t be entered in this discussion because they are not ‘scientific’.

= Surely that’s not intended by Dr. Keating and other climate scientists?

– Sure — but it may be in the minds of some folks out there. And that makes them look for any little chinks in the body of evidence they can find.

= What are some of those implications — did your revered grand-aunt suggest some?

– Her main point was this: if man-made climate change is true, it raises the question whether we actually can do something meaningful about it, and if so, what. But those hesitant folks suspect that the scientists — whom they call ‘alarmists’ in return for being called ‘deniers’ by Keating and others — already have an agenda of proposed strategies and rules up their sleeve. And that those will be very expensive. Even worse: that many people will have to change some of their cherished habits regarding energy use. And –psst– that some folks who are now making fine profits from conventional energy sources and life habits will lose those profits. The worst, though: that those new strategies will allow o t h e r guys, not them, to now make more profit. Utterly unacceptable, that one.

= Ahh. Of course. It may also be the fact that the costs of the new strategies will have to be paid ‘now’ or ‘soon’, obviously by people who now have or make money, by way of taxes — but that the profits or benefits will manifest themselves much later, not in terms of cash revenues but of avoided disaster. So does that answer your concerns?

– You mean can I sleep better at night for these insights? Don’t think so. But I think that it might be better if those issues would also be put on the table and discussed, negotiated. Perhaps such questions could be more productively dealt with if they were stated differently.

= What do you mean — does it matter how a problem is stated to answer what we should do about it? A problem is a problem is a problem, after all…

– No, I think the way they are thrown up for discussion does matter. For example, consider raising a challenge about climate change in the following way: Look at a table showing — I’m simplifying now, perhaps dangerously so, but just to make it clear — the possible answers to the MMCC question as columns: is MMCC real, or is it not (or so insignificant that we don’t have to worry about it)? And our strategies as rows: do we do something about it, or do we not?
There will be four main outcomes, the boxes 1,2,3,4. For each one, there are three major questions that should be answered: a) what will happen? What will be the consequences? b) What, if anything, will be done? and c) depending on what is done, what is the likely result? That would allow the discussion to address each question separately and more explicitly, and perhaps make it easier to reach some decisions. If decisions are needed. And if they are, avoid wasting more time by quibbling about issues like proof or disproof of MMCC (which is a wrong question in itself because it’s not a yes-no question but one of relative significance and relationships between many variables).

= So what does your table look like?

– Here’s a first simple draft, for filling in the boxes and discussion: What if:

_____________________________________________________________________
MMCC is:                    real & significant    not real or insignificant
_____________________________________________________________________
We decide to
take steps                          1                        2
_____________________________________________________________________
We do nothing, or
continue what we do                 3                        4
_____________________________________________________________________
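For readers who like to tinker, the draft table can also be sketched as a small data structure to be filled in collaboratively (a hypothetical illustration; the box numbers and the three questions follow the draft above, while all variable names are my own invention):

```python
# the two dimensions of the what-if table
realities = ("MMCC real & significant", "MMCC not real or insignificant")
strategies = ("we take steps", "we do nothing / continue as before")

# the three questions to be answered separately for each box
questions = ("what will happen?", "what will be done?", "what is the likely result?")

# boxes 1..4, each holding answers-to-be-filled-in for the three questions
table = {}
box = 1
for strategy in strategies:
    for reality in realities:
        table[(reality, strategy)] = {"box": box, **{q: None for q in questions}}
        box += 1

print(len(table))  # 4
```

The point of the structure is only that each of the four boxes expands into the same three questions, so a discussion can address them separately and explicitly.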

= You might add another question there, my friend.

– Sure, there will be many more as people start talking more thoroughly about it. What’s the question you have in mind?

= It has to do with responsibility. Or accountability, if you wish: Who will take on the responsibility for decisions? And be accountable — whatever that means, which should be discussed more carefully — if it’s the ‘wrong’ decision?

– Huh. I need to send this back to aunt Aurelia. With wider margins…

—–

About Dr. Keating’s $10,000 challenge to prove that man-made climate change is not occurring

I foolishly accepted a chore that turned out to be more work than anticipated: to follow a discussion about Dr. Keating’s $10,000 Challenge and to put together an IBIS as well as a few issue and argument maps about it. Physicist Dr. Keating has offered $10,000 to anyone who can produce, via scientific method, a proof that man-made climate change is not occurring. He invited such submissions to be entered on a blog, where he would offer his reasons as to whether and why he would accept or reject a submission. There were some such submissions, some quickly rejected, others still awaiting judgment, and a flood of posts offering a variety of opinions about the challenge and about submitted entries.

Taking a break from slogging through this material to ferret out essential information for the IBIS and maps, some thoughts occurred to me that I just have to write about, ‘to get them out of my system’.

These thoughts amount to raising some questions not only about the challenge as it was presented, but also about the nature of many of the responses offered. Many have to do with the concept of ‘proof’ Dr. Keating invited. He defends the choice of terms by referring to claims by ‘deniers’ of man-made climate change that such proof exists and is easy to provide. Perhaps his certainty in offering the reward stems from the knowledge that ‘proof’ is not an appropriate term for evidence in favor of scientific hypotheses, especially for issues involving matters of degree, i.e. of several different sources’ contributions to an effect. The argument offering evidence in support of a hypothesis is of an inductive kind, not a deductively valid one that deserves the label of ‘proof’.

So while the ‘deniers’ should perhaps not have used the term (he did not cite any such claim specifically), in the larger interest of the controversy his use of the word ‘proof’ in his challenge is not really helpful either. Not because, as some of the posts claim, ‘it is not possible to prove a negative’. After all, the scientific way of disproving a hypothesis, by showing through valid scientific means (experiments, observations verified by repeatability, measurement, etc.) that a piece of evidence constituting a necessary consequence of the hypothesis is NOT true, is a deductively valid argument (known as ‘modus tollens’) that can be accepted as proof. Rather, because if the issue is one involving matters of degree, acceptance or rejection of a hypothesis hinges on the degree of certainty: for example, on whether the probability that the evidence could have occurred under the ‘null hypothesis’ (the negation of the hypothesis) is too small to be believable. So this would have required specification of that level of confidence.

The terms ‘weight of evidence’ or ‘argument’, instead of ‘proof’, may have been avoided for several reasons: the difficulty of ‘weighing’ the evidence, or because ‘argument’ is perceived as something unduly (unscientifically?) adversarial, like a brawl, with win-lose outcomes not necessarily based on the merit of the arguments. The significance of the overall issue (at least as seen by Dr. Keating, in his laudable effort to get it ‘settled’) should suggest that such a win-lose outcome is not in the best interest of humanity.

It is no wonder that many posts therefore tried to represent the challenge as ‘meaningless’. But they also did not acknowledge the elephant in the room: the question of what should be done about the problem — if it is indeed a problem serious enough to worry about, and if ‘we’ (humanity) actually can do something about it. Getting ‘the right thing’ done is NOT the likely result of a win/lose brawl.

Getting the right thing done would require re-framing the understanding of ‘argument’ as the mutual attempt to
a) identify and acknowledge the ‘real’ underlying issue;
b) identify possible solutions (‘what should be done’);
c) reach into each other’s minds to show how one’s proposed ‘solution’ (the conclusion of the argument) really follows plausibly from beliefs the other already holds, or would accept as plausible upon being shown the arguments or evidence (validated ‘scientifically’?) for it;
or
d) change the proposed solution to the point that it becomes acceptable to the other — ideally of course ‘better’, but at the very least ‘not worse’ than before, or than if nothing is done.

So the question becomes one of determining what is ‘acceptable’. That includes not only the plausibility of the scientific data presented as evidence in the pro and con arguments — which of course is important — but also the consequences of whatever action or inaction is chosen as a result of the allegedly ‘basic’ issue, in this case whether and to what degree man-made climate change is occurring. Specifically: does ‘acceptability’ include such aspects as whether some parties will lose income, property, security, respect, ‘face’?

As long as proponents on either side of the issue feel that they can legitimately suggest that the other side is not entirely without ‘hidden agenda’ interests (funding, reputation, payments from certain industry segments, political entities, etc.) aimed at influencing public assessment of the purity of the scientific arguments offered, would it not be appropriate and helpful if all such concerns could be brought up and discussed? For example: re-framing the challenge to explore what actions ‘we’ ought to take depending on the various outcomes of the ‘scientific’ issue: what if MMCC IS occurring? What if it is NOT occurring? And what if it does occur, but only to some degree (to be estimated)? The aim would be that the respective actions have ‘acceptable’ outcomes for all affected parties even if the resolution of the scientific issue were NOT as expected.

As long as these issues are kept off the discussion table, I am afraid that this will not only prevent a proposed ‘proof’ — or people’s hesitancy to present one to Dr. Keating’s exclusive judgment — from ‘settling’ the issue and allowing humanity to proceed towards working out feasible and acceptable action solutions; it will poison the entire discourse in a way that not even the most complete and representative argument maps of its contributions will be able to clarify, let alone remedy.

Some explorations of emergence

[image]

Starting from square 1

[image]

Lean Cuboids Dancing at Sunset

[image]

Untitleable III

[image]

Tristeps II

[image]

Labyrandalah 3

====

Updated Planning Discourse Positions

Re-examining various efforts and proposals on discourse support over time, I have tried to identify and address some key issues or problems that require attention and rethinking. Briefly, the list of issues includes the following (in no particular order of importance):

•   The question of the appropriate Conceptual Framework for the discourse support system;

•   The preparation and use of discourse, issue and argument maps, including the choice of map ‘elements’ (questions, issues, arguments, concepts or topics…);

•   The design of the organizational framework: the ‘platform’;

•   The software problem: specifications for discourse support software;

•   Questions of appropriate process;

•   The role and design of discourse mapping;

•   The aspect of distributed information;

•   The problem of complexity of information (complexity of linear verbal or written discussion, complex reports, systems model information);

•   The role of experts;

•   Negative associations with the term ‘argument’;

•   The problem of ‘framing’ the discourse;

•   Inappropriate focus on insignificant issues;

•   The role of media;

•   Appropriate discussion representation;

•   Incentives / motivation for participation (‘voter apathy’);

•   The ‘familiar’ (comfortable?) linear format of discussions versus the need (?) for structuring discourse contributions;

•   The need for overview of the number of issues / aspects of the problem and their relationships;

•   The effect of ‘last word’ contributions (e.g. speeches), or of mere ‘rhetorical brilliance’, on collective decisions;

•   Linking discussion merit / argument merit with eventual decisions;

•   The issue of maps ‘taking sides’;

•   The problem of evaluation: of proposals, arguments, discussion contributions;

•   The role of ‘systems models’ information in common (verbal, linear, including ‘argumentative’) discourse;

•   The question of argument reconstruction;

•   The appropriate formalization or condensation needed for concise map representation;

•   Differences between requirements for ‘argument maps’ as used in e.g. law or science versus planning;

•   The deliberate or inadvertent ‘authoritative’ effect of e.g. representing an argument as ‘valid’ (i.e. the extent of evaluative content of maps);

•   The question of the appropriate sequence of map generation and updating;

•   The question of representation of qualifiers in evaluation forms.

 

In previous work on the structure and evaluation of ‘planning arguments’ within the overall framework of the ‘Argumentative Model of Planning’ (as proposed by Rittel), I have been making various assumptions with regard to these questions — assumptions differing from those made in other studies and proposed discourse support tools. Such assumptions, for example regarding the conceptual framework, as manifested in the choice of vocabulary — adopted as a mostly unquestioned matter of course in my proposals as well as in others’ work — have significant implications for the development of such discourse support tools. They therefore should be raised as explicit issues for discussion and re-examination.

A first step in such a re-examination might begin with an attempt to explicitly state my current position, for discussion. This position is the result, to date, of experience with my own ideas as well as the study of others’ proposals. Not all of the issues listed above will be addressed in the following. Some position items still are, in my mind, more ‘questions’ than firm convictions, but I will try to state them as ‘provocatively’ as possible, for discussion and questioning.

1       The development of a global support framework for the discussion of global planning and policy agreements, based on wide participation and assessment of concerns, is a matter of increasingly critical concern; it should be pursued with high priority.

While no such system can be expected to achieve definitive universal validity and acceptance, and therefore many different efforts for further development of alternative approaches should be encouraged, there is a clear need for some global agreements and decisions that must be based on wide participation as well as thorough evaluation of concerns and information (evidence).

The design of a global framework will not be structurally different from the design of such systems for smaller entities, e.g. local governments. The differences would be mainly ones of scale. Therefore, experimental systems can be developed and tested at smaller scales to gain sufficient experience before engaging in the investments that will be needed for a global framework. By the same token, global systems for initially very narrow topics would serve the same purpose of incremental development and implementation.

2      The design of such a framework must be based on — and accommodate — currently familiar and comfortable habits and practices of collective discussion.

While there are analytical techniques and tools with plausible claims of greater effectiveness, ability to deal with the amount and complexity of data etc., the use of these tools in discourse situations with wide participation of people of different educational achievement levels would either be prohibitive of wide participation, or require implausibly massive information/education programs for which precisely the needed tools for reaching agreement on the selection of method / approach (among the many competing candidates) are currently not available.

3      Even with the growing use of new information technology tools, the currently most familiar and comfortable discourse pattern seems to be that of the traditional ‘linear discussion’ (sequential exchange of questions and answers or arguments) — the pattern that has been developed in e.g. the parliamentary tradition, the agreement of giving all parties a chance to speak, air their concerns, their pros and cons to proposed collective actions, before making a decision.

This form of discourse, originally based on the sequential exchange of verbal contributions, is of course complemented and represented by written documents, reports, books, and communications.

4      A first significant attempt to enhance the ‘parliamentary’ tradition with systematic information system, procedural and technology support was Rittel’s ‘Argumentative Model of Planning’. It is still a main candidate for the common framework.

Rittel’s main argument for the general acceptance of this model was the insight that its basic, general conceptual framework of ‘questions’, ‘issues’ (controversial questions), ‘answers’, and ‘arguments’ could in principle accommodate the content of any other framework or approach, and thus become a bridge or common forum for planning at all levels. This still seems to be a valid claim not matched by any other theoretical approach.

5      However, there are sufficiently worrisome ‘negative associations’ with the term ‘argument’ of Rittel’s model to suggest at least a different label, and a selection of more neutral key concepts and terms for the general framework.

The main options are to refer only to ‘questions’, ‘responses’ and ‘claims’, and to avoid ‘argument’ as well as the concepts of ‘pros’ and ‘cons’ — arguments in favor of and opposed to plan proposals or other propositions.

Argumentation can be seen as the mutually cooperative (positive) effort of discussion participants to point out premises that support their positions, but that are also already believed to be true or plausible by the ‘opponent’ (or will be accepted by the opponent upon presentation of evidence or further arguments). But the more common, apparently persistent view is that of argumentation as a ‘nasty’, adversarial, combative ‘win-lose’ endeavor. While undoubtedly discourse by any other label will produce arguments, pros and cons, etc., the question is whether these should be represented as such in support tools, or in a more neutral vocabulary.

Experiments should be carried out with representations of discourse contributions — in overview maps and evaluation forms — as ‘questions’ and ‘answers’.

6      Any re-formatting, reconstruction, condensing of discussion contributions carries the danger of changing the meaning of an entry as intended by its author.

Regardless of the choice of such formatting — which should be the subject of discussion — the framework must preserve all original entries in their ‘verbatim’ form for reference and clarification as needed. Ideally, any reformatting of an entry should be checked with its author to ensure that it represents its intended meaning. (Unfortunately, this is not possible for entries whose authors cannot be reached, e.g. because they are dead.)

7      The framework should provide for translation services not only for translation between natural languages, but also from specialized discipline ‘jargon’ entries to natural language.

8      While researchers in several disciplines are carrying out significant and useful efforts towards the development of discourse support tools, and some of these efforts seem to claim to produce universally applicable tools, such claims are overly optimistic.

The requirements for different disciplines are different, and lead to different solutions that cannot comfortably be transferred to other realms. Specifically, the differences between scientific, legal, and planning reasoning call for quite different approaches and discourse support systems. However, they are not independent: the planning discourse contains premises from all these realms, which must be supported with the tools pertinent to those realms. The diagram suggests how different discourse and argument systems are related to planning:

(Sorry, diagram will be added later)

9      Analysis and problem-solving approaches can be distinguished according to the criteria they recommend as the warrant for solution decisions:

–   Voting results (government, management decision systems, supported by experts);

–   ‘Backwards-looking’ criteria: ‘root cause’ (root cause analysis), necessary conditions, contributing factors (‘systematic doubt’ analysis), historical data (systems models);

–   ‘Process/approach’ criteria (“the ‘right’ approach guarantees the solution”): solutions legitimized by participation vote or authority position, or by argument merit;

–   ‘Forward-looking’ criteria: expected result performance, benefit-cost ratio, simulated performance of selected variables over time, or stability of the system, etc.

It should be clear that the framework must accommodate all these approaches, or preferably be based on an approach that could integrate all these perspectives, as applicable to the context and characteristics of the problem. There is, to my knowledge, currently no approach matching this expectation, though some claim to do so (e.g. ‘Multi-level Systems Analysis’, which however considers only approaches deemed to fit within the realm of ‘Systems Thinking’).

10        While the basic components of the overall framework should be as few, general, and simple as possible, — for example ‘topic’,  ‘question’ and ‘claim’ or ‘response’, — actual contributions in real discussions can be lengthy and complex, and must be accommodated as such (in ‘verbatim’ reference files). However, for the purposes of overview by means of visual relationship mapping, or systematic evaluation, some form of condensed formatting or formalization will be necessary.

The needed provisions for overview mapping and evaluation are slightly different, but should be as similar as possible for the sake of simplicity.

11      Provisions for mapping:

a.   Different detail levels of discourse maps should be distinguished:  ‘Topic maps’, ‘Issue maps’ (or ‘question maps’), and ‘argument maps’ or ‘reasoning maps’.

–      Topic maps merely show the general topics or concepts and their relationship as linked by discussion entries.  Topics are conceptually linked (simple line) if they are connected by a relationship claim in a discussion entry.

–      Issue or question maps show the relationships between specific questions raised about topics. Questions can be identified by type: e.g. factual, deontic, explanatory, instrumental questions. There are two main kinds of relationships: one is the ‘topic family’ relation (all questions raised about a specific topic); the other is the relationship of a question (a ‘successor’ question) having been raised by challenging, or querying for clarification of, an element (premise) of another (‘predecessor’) question.

–       Argument or reasoning maps show the individual claims (premises) making up an answer or argument about an issue (question), and the questions or issues raised as a result of questioning any such element (e.g. challenging or clarifying it, or calling for additional support for an argument premise).
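A minimal sketch of how these map levels might be represented in software (all class and field names here are hypothetical illustrations, not a proposed standard): topics are linked by the questions raised about them, questions carry a type and an optional ‘predecessor’, and claims can raise ‘successor’ questions.

```python
from dataclasses import dataclass, field

@dataclass
class Topic:
    label: str

@dataclass
class Question:
    text: str
    qtype: str            # e.g. 'factual', 'deontic', 'explanatory', 'instrumental'
    topic: Topic          # the 'topic family' relation
    predecessor: "Question | None" = None   # raised by challenging another question's premise

@dataclass
class Claim:
    text: str
    answers: Question     # the issue this answer or argument premise addresses
    raises: list = field(default_factory=list)  # successor questions challenging this claim

# invented example entries, purely for illustration
climate = Topic("man-made climate change")
q1 = Question("Is MMCC occurring to a significant degree?", "factual", climate)
c1 = Claim("Observed warming correlates with CO2 emissions.", q1)
q2 = Question("Measured how, where, over what time period?", "factual", climate,
              predecessor=q1)
c1.raises.append(q2)
print(q2.predecessor is q1, c1.answers is q1)  # True True
```

The three map levels then fall out of the same records: a topic map draws only the `Topic` links, an issue map adds the `predecessor` relations between questions, and a reasoning map adds the `Claim` nodes and the questions they raise.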

b.  Reasoning maps (argument maps) should show all the claims making up an argument, including claims left not expressed in the original ‘verbatim’ entry as assumed to be ‘taken for granted’ and understood by the audience.

Reasoning maps aiming at encouraging critical examination and thinking about a controversial subject might show 'potential' questions (for example, the entire 'family' of issues for a topic) that could or should be raised about an answer or argument. These might be shown in gray or faint shades, or in a different color from actually raised questions.

c.   Reasoning maps should not identify answers or arguments as 'pro' and 'con' a proposal or position (unless it is made very clear that these labels reflect only the author's intended function).

The reason is that other participants might disagree with one or several of the premises of an intended 'pro' argument, in which case the set of premises (now with the respective participant's negation) can constitute a 'con' argument — but the map showing it as 'pro' would in fact be 'taking sides' in the assessment. This would violate the principle of the map serving as a neutral, 'impartial' support tool.

d.  For the same reason, reasoning maps should not attempt to identify and state the reasoning pattern (e.g. 'modus ponens' or 'modus tollens', etc.) of the argument. Nor should they 'reconstruct' arguments into such (presumably more 'logical', even 'deductively valid') forms.

Again, if in a participant's opinion one of the premises of such an argument should be negated, the pattern (reasoning rule) of the set of claims becomes a different one. By showing the pattern as the one originally intended by the author (however justified it may seem to map preparers by the argument's inherent nature and the validity of its premises), the map would inadvertently or deliberately be 'taking sides' in the assessment of the argument.

e.   Topic, issue and reasoning maps should link to the respective elements in the verbatim and any formalized records of the discussion, including to source documents, and illustrations (pictures, diagrams, tables).

f.      The 'rich image' fashion (fad?) of adding icons and symbols (thumbs up or down, plus or minus signs) or other decorative features to the maps — moving bubbles, background imagery, etc. — serves more as a distraction than as a well-intended user-friendly device, and should be avoided.

12      Current discourse-based decision approaches exhibit a significant shortcoming in that there is no clear, transparent, visible link between the ‘merit’ of discussion contributions and the decision.

Voting blatantly permits disregarding discussion results entirely. Other approaches (e.g. Benefit-Cost Analysis, or systems modeling) claim to address all concerns voiced, e.g. in preparatory surveys, but disregard any differences of opinion about the assumptions entering the analysis. (For example, some entities in society would consider the 'cost' of government project expenditures to be 'benefits' if those expenditures lead to profits for them, e.g. for industries holding government contracts.)

The proposed expansion of the Argumentative Model with argument evaluation (TM 2010) provides an explicit link between the merit of arguments (as evaluated by discourse participants) and the decision, in the form of measures of plan proposal plausibility. This provision should be retained even in an approach that drops the 'argumentative' label, though it requires explicit or implicit evaluation of argument premises.

13      Provisions for evaluation.

In discussion-based planning processes, three main evaluation tasks should be distinguished: the comparative assessment of the merit of alternative plan proposals (if there is more than one); the evaluation of one plan proposal or proposition, as a function of the merit of arguments; and the evaluation of the merit of single contributions (items of information, arguments) to the discussion.

For all three, the basic principle is that evaluation judgments must be understood as subjective judgments, by individual participants, about quality, plausibility, goodness, validity, desirability, etc. While traditional assessments, e.g. of the truth of argument premises and conclusions, aimed at absolute, objective truth, the practical working assumption here is that while we all strive for such knowledge, we must acknowledge that we have no more than (utterly subjective) estimates of it, and it is on the strength of those estimates that we have to make our decisions. The discussion is a collective effort to share and hopefully improve the basis of those judgments.

The first task above is often approached by means of a 'formal evaluation' procedure developing 'goodness' or performance judgments about the quality of the plan alternatives, resulting in an overall judgment score as a function of partial judgments about the plans' performance with respect to various aspects, sub-aspects, etc. Such procedures are well documented; the discourse may be the source of the aspects, but more often the aspects are assembled (by experts) through a different procedure.

The following suggestions explore the approach of developing a plausibility score for a plan proposal based on the plausibility and weight assessments of the (pro and con) arguments and argument premises (following TM 2010, with some adaptations).

a.  Judgment criterion: Plausibility.

All elements to be ‘evaluated’ are assessed with the common criterion of ‘plausibility’, on an agreed-upon scale of +n  (‘completely plausible’) to -n (‘completely implausible’), the midpoint score of zero meaning ‘don’t know’ or ‘neither plausible nor implausible’.

While many argument assessment approaches aim at establishing the (binary) truth or falsity of claims, 'truth' (not even a 'degree of certainty' or probability about the truth of a claim) does not properly apply to deontic ('ought') claims, the desirability of goals, etc. The plausibility criterion applies to all types of claims: factual, deontic, explanatory, etc.

b.   Weights of relative importance

Deontic claims (goals, objectives) are not equally important to people. To express these differences in importance, individuals assign 'weight of relative importance' judgments to the deontics in the arguments, on an agreed-upon scale of zero to 1, such that all weights relative to an overall judgment add up to 1.

c.       All premises of an argument are assigned premise plausibility judgments ppl; the deontic premises are also assigned their weight of relative importance pw.

d.       The argument plausibility argpl of an argument is a function of the plausibility values of all its premises.

e.       Argument weight argw is a function of argument plausibility argpl and the weight pw of its deontic premise.

f.      Individual Plan or Proposal plausibility PLANpl is a function of all argument weights.

g.  ‘Group’ assessments or indicators of plan plausibility GPLANpl can be expressed as some function of all individual PLANpl scores.

However, 'group scores' should only be used as a decision guide, together with added consideration of degrees of disagreement (range, variance), not as a direct decision criterion. The decision may have to be taken by traditional means, e.g. voting. But the correspondence or difference between deliberated plausibility scores and the final vote adds an 'accountability' provision: a participant who has assigned a deliberated positive plausibility score to a plan but votes against it will face strong demands for explanation.
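Steps a through g can be illustrated with a minimal sketch on a plausibility scale of -1 to +1. The specific aggregation functions chosen here — the minimum ('weakest link') for argument plausibility, the product of plausibility and deontic weight for argument weight, their sum for plan plausibility, and the mean with variance as a group indicator — are illustrative assumptions, not the only functions consistent with TM 2010:

```python
from statistics import mean, variance

def argument_plausibility(premise_pls):
    # d. argpl as a function of all premise plausibilities;
    # the minimum ('weakest link') is one common choice.
    return min(premise_pls)

def argument_weight(argpl, deontic_weight):
    # e. argw as a function of argpl and the deontic premise weight pw
    return argpl * deontic_weight

def plan_plausibility(arguments):
    # f. PLANpl as a function of all argument weights; here, their sum.
    # Each argument is (premise plausibilities, weight of its deontic premise);
    # the deontic weights are assumed to sum to 1 (step b).
    weights = [w for _, w in arguments]
    assert abs(sum(weights) - 1.0) < 1e-9, "deontic weights must sum to 1"
    return sum(argument_weight(argument_plausibility(pls), w)
               for pls, w in arguments)

def group_indicator(individual_pls):
    # g. a group indicator plus a disagreement measure (variance),
    # to be used only as a decision guide, not a decision criterion.
    return mean(individual_pls), variance(individual_pls)

# Example: one supporting argument (premises fully plausible, weight 0.6)
# and one whose factual premise a participant doubts (-0.5, weight 0.4):
# plan_plausibility([([1.0, 1.0], 0.6), ([-0.5, 1.0], 0.4)]) yields 0.4.
```

Note that a doubted premise turns the whole argument's contribution negative for that participant, which is exactly why the maps should not pre-label arguments 'pro' or 'con' (point 11c above).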

h.   A potential 'by-product' of such an evaluation component of a collective deliberation process is the possibility of rewarding participants for discussion contributions: not only with reward points for making contributions — and making them speedily, since only the 'first' contribution making a given point will be included in the evaluation — but by modifying these contribution points with the collective assessments of their plausibility. Thus, participants will have an incentive, and be rewarded for, making plausible and meritorious contributions.
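One simple reward rule along these lines might look as follows; the scaling function (base points times the group's mean plausibility assessment, floored at zero) is a hypothetical choice for illustration:

```python
def contribution_reward(base_points, plausibility_scores):
    # Base points are earned for being first to make the point;
    # they are then scaled by the mean of the group's plausibility
    # assessments on the -1..+1 scale. Contributions judged
    # implausible on balance earn nothing (floor at zero).
    group_pl = sum(plausibility_scores) / len(plausibility_scores)
    return max(0.0, base_points * group_pl)
```

For instance, a contribution worth 10 base points that the group scores at [1.0, 0.5, 0.5] earns two thirds of its base points, while one scored at [-1.0, 0.0] earns none.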

14      The process for deliberative planning discourse with evaluation of arguments and other discourse contributions will be somewhat different from current forms of participatory planning, especially if much or all of it is to be carried out online.

            The main provisions for the design of the process pose no great problems, and small experimental projects can be carried out with current tools ‘by hand’ with human facilitators and support staff using currently available software packages.  But for larger applications adequate integrated software tools will first have to be developed.

15      The development of 'civic merit accounts' based on evaluated contributions to public deliberation projects may help address the problem of citizen reluctance (often referred to as 'voter apathy') to participate in such discourse.

However, such rewards will only be effective incentives if they can become a fungible ‘currency’ for other exchanges in society.  One possibility is to use the built-up account of such ‘civic merit points’ as one part of qualification for public office — positions of power to make decisions that do not need or cannot wait for lengthy public deliberation. At the same time, the legitimization for power decisions must be ‘paid for’ with appropriate sums of credit points — a much-needed additional form of control of power.

16      An important, yet unresolved ‘open question’ is the role of complex ‘systems modeling’ information in any form of argumentative planning discourse with the kind of evaluation sketched above.

Just as disagreement and argumentation about model assumptions are currently not adequately accommodated in systems models, the information of complex systems models and e.g. simulation results is difficult to present in coherent form in traditional arguments, and almost impossible to represent in argument maps and evaluation tools. Since systems models arguably are currently the most important available tools for detailed and systematic analysis and understanding of problems and system behavior, the integration of these tools in the discourse framework for wide public participation must be seen as a task of urgent and high priority.

17      Another unresolved question regarding argument evaluation (and perhaps also mapping) is the role of statement qualifiers. 

The question is whether arguments stated with qualifiers ('possibly', 'perhaps', 'tend to', etc.) in the original 'verbatim' version should show such qualifiers in the statements (premises) to be evaluated. Arguably, qualifiers can be seen as statements about how an unqualified, categorical claim should be evaluated; the proponent of a claim qualified with a 'possibly' does not ask for a complete 100% plausibility score. This means that the qualifier belongs to a separate argument about how the main categorical claim should be assessed, and thus should not be included in the 'first-level' argument to be evaluated. The problem is that the qualified claim can be evaluated, as qualified, as quite or even 100% plausible — but that plausibility can then (in the aggregation function) be counted as 100% for the unqualified claim. Unless the author can be persuaded to add an actual suggested plausibility value in lieu of the verbal qualifier — one that other evaluators can view and perhaps modify according to their own judgment (unlikely and probably impractical) — it would seem better to enter only unqualified claims in the evaluation forms, even though this may be seen as misrepresenting the author's real intended meaning.

18       Examples of topic, issue, and argument maps according to the preceding suggestions.

a.  A ‘topic map’ of the main topics addressed in this article:

Topic map

Map of topics discussed

b.  An issue map for one of the topics:

Mapping issues

Argument mapping issues

c.  A map of the 'first level' arguments in a planning discourse: the overall plan plausibility as a function of plausibility and weight assessments of the planning arguments (pro and con) that were raised about the plan.

Plan plausibility

The overall hierarchy of plan plausibility judgments

      d.  The preceding diagram with 'successor' issues and respective arguments added.

Successor issues

Hierarchy map of argument evaluation judgments, with successor issues

e. An example of a map of first level arguments for a selected mapping issue.

Argument map

Argument map for mapping issue 'Should argument maps show "pro" and "con" labels?'

References

Mann, T. (2010) "The Structure and Evaluation of Planning Arguments". Informal Logic, Dec. 2010.

Rittel, H. (1972) "On the Planning Crisis: Systems Analysis of the 'First and Second Generations'". BedriftsØkonomen, #8, 1972.

–      (1977) "Structure and Usefulness of Planning Information Systems". Working Paper S-77-8, Institut für Grundlagen der Planung, Universität Stuttgart.

–      (1980) "APIS: A Concept for an Argumentative Planning Information System". Working Paper No. 324. Berkeley: Institute of Urban and Regional Development, University of California.

–      (1989) "Issue-Based Information Systems for Design". Working Paper No. 492. Berkeley: Institute of Urban and Regional Development, University of California.

—-

A topic map for a systems thinking discussion on participatory democracy

The discussion "Participatory democracy through systems thinking" on the 'Systems Thinking in Action' network on LinkedIn raises a number of issues that merit discussion. This applies even to the assumptions (taken for granted?) that improved participatory democracy is desirable, and that systems thinking is the best approach for achieving it. These assumptions may not be universally shared, and at least some rationale for them might be offered. That rationale or justification would have to address the expected efficiency or quality with which the tools of systems thinking would address the task, in comparison with other approaches. (Which tools these would be still needs to be determined; the discussion did not reach a clear consensus on this, and left open the question whether 'new' ST tools would have to be developed to achieve the expected effect.) These issues have to do with the criteria that determine 'better participatory democracy', as judged by whom, and according to what method. The results of the evaluation would then, presumably, support a process of decisions and actions to be taken, by what agents and by what methods.

The discussion arguably did not explore all of these questions yet, nor establish a decisive agenda for how systems thinking people would or should go about the project of improving participatory democracy. (The issues, incidentally, apply both to the process of working towards the stated aim and to the workings of the resulting democratic governance system.) The discussion did focus more on the need for such improvement; some posts even pointed at available systems approaches and tools that have been developed and applied, but did not achieve a convincing consensus about their appropriateness.

The issues raised and shown in the map below are not as trivial as they might seem (since they relate to matters describing any governance system). For example, is the assumption that systems thinkers — a species of experts presumably in possession of insights and skills not necessarily present in the average citizen — will be involved in many or most of the issues and decisions, itself somewhat at odds with the 'democratic' criterion that citizens rather than experts should have the determining say in those decisions?

The map should be seen as a first tentative step towards helping to clarify the agenda for such a process. For each of the 'end' nodes of the map, further issues will have to be raised about whether the particular option should be chosen (deontic issues) and, if so, how (by what means) the specific choice might be achieved and implemented (instrumental issues). One of the main advantages of systems models, the identification of the various conditions (context variables, parameters) under which interventions would be expected to perform, has not been adequately explored by the discussion thus far. Unfortunately, the LinkedIn format does not accommodate visual material such as diagrams of the systems components and relationships; the necessity of diverting participants to other sites for such material has been (as usual) a main obstacle to the systemic exploration of the topic.

The map shown below is still mostly a ‘topic’ map; only at the first level are different individual issues (deontic, instrumental, explanatory for the same topic) identified. More detailed ‘issue maps’ will have to be drawn for each of the main topics of this map.

democracy systems thinking3

A beautiful Wicked Problem

Here’s a Beautiful Wicked Problem:

The Government must decide how to react to the revelation about secret illegal espionage in the form of listening to phone conversations of citizens and foreigners. Where it isn’t clear whether listening to the phone conversations of foreigners is indeed illegal (there being no explicit law against it other than international courtesy)  and  not just impolite and — if there is no adequate provision against discovery — plain stupid. Just as stupid as any heads of government running their own similar surveillance programs but don’t make sure their own cellphones can’t be tampered with…?
For — the tame argument that ‘every country must try to gather information, and therefore it’s not a big deal’ aside as asinine  just like the unbelievable claim that ‘I didn’t know about it’  (why doing it at all if the information gathered isn’t used?)  — if such a surveillance program has been discovered, anybody having information to convey to others that he or she does not wish the government to know, will obviously not use cellphones to do so. It does not seem to occur to anybody that this (discovery) having indeed occurred — which could be interpreted as indisputable proof of incompetence and thus cause for immediate dismissal of those responsible — that the entire operation devoted to cellphone surveillance has  thereby become entirely useless and a humongous waste of money.  Psst, Obama (and  foreign colleagues) — there’s a bunch of money to be cut from the budget to reduce the debt… Just don’t tell anybody you did it. Why?  Because if you declare that you dropped that program, well, the evildoers might feel free to plot their evil deeds by phone again, wouldn’t you know.
 What does this tell us regular cellphone addicts? It tells us: we will never be told what the government is going to do about it. If such an announcement were to materialize, either way, it would have to be considered either a blatant lie or an incontrovertible sign of incompetence, and it should not, repeat NOT, be believed. This should be clear to any thinking citizen. And given the confidence in government being what it is these days, any government providing another reason for not believing it must be out of its mind. Let's see… the obvious solution would be to shut down the NSA to save the money, but keep it secret. Can it be kept secret? Should it? This looks like a supremely delicious test of political and diplomatic competence. Abbé Boulah He say: Invest in carrier pigeons.

A solution idea for the use of civic credit points for the control of power

In the Fog Island Tavern:

Hey Vodçek — what’s all the excitement over there about?

Hi Bog-Hubert. Some good news from Rigatopia — why don’t you go over to find out? Here’s your coffee.

Thanks. You can’t provide a quick summary? I’ll have to catch the ferry in a few minutes…

Okay. It looks like they have developed a new solution for the problem of power and accountability. You remember Abbé Boulah’s campaign for argument evaluation in policy-making?

Applying the ideas of our architect friend — to evaluate design and planning arguments — to more general policy discussions and decisions? Isn’t he developing some kind of game to get people used to the concept?

Yes. The game is a good starting point. So you know how people get points for any contributions they make to the discussion — but they get modified by everybody’s assessment of the plausibility and importance of those contributions, and by the overall quality (plausibility) of the solution they collectively work out.

I remember. So how does this get used to control power in real life?

It’s actually quite simple. People who participate in public discussions build up a credit points account based on the quality of their contributions. The participation in public discourse is of course free and open to all: the possibility to earn credits is an incentive to participate.

Yes — we have talked about how that might be used to actually get decisions made. There were some questions about how the plausibility assessments could be used to guide decisions. And about some kinds of public decisions that have to be made quickly so there’s no time to have a long discussion about them…

Right. So some people have to be appointed to positions where they are responsible for making such decisions. One part of the idea — the solution they are trying out on Rigatopia — is that a person’s credit account will play a significant part in the appointment to such positions: you have to show a certain level of creditable participation in public discourse to qualify for positions where you have the power to make decisions.

So how does that solve the problems with power in those positions? We don’t have to go through the entire litany of power addiction, temptations for corruption, etc?

No. The solution is that each decision must be ‘paid for’ — up-front — with a credit point ‘ante’. Which is lost if the decision is no good; but can be seen as an ‘investment’ to earn new points if the decision is successful. But eventually, the points are ‘used up’.

Makes sense: we often talked about how power — as ’empowerment’ to pursue your happiness — should be ‘paid for’ just like you have to pay for your food and clothes and car.

Yes — but not with the same currency. And here, the currency is credit points — something anybody can earn, but which must be earned, and which can be lost by making stupid decisions.

That finally gives some substance to the notion of ‘accountability’. I agree. What about public decisions for which there is — and should be — some thorough discussion before decisions are made?

You are asking about how we can realize the expectation that such decisions should be made on the basis of the merit of arguments, of the contributions people make to the discourse. And how to add this element of accountability to the basic idea of using some overall group measure of proposal plausibility as a guide to the collective decision.

Right. The argument evaluation approach has been worked out reasonably well to produce an individual judgment of overall proposal plausibility (as a function of argument plausibility and argument weight — that was described in the paper in the ‘Informal Logic’ journal). But we all had some reservations about how to fashion a group decision from those individual judgments, and whether traditional decision methods such as (majority) voting could easily be replaced.

Okay: here's the answer to that. Whatever decision method is being used — say voting — will have to be assessed in relation to some such measure of collective proposal plausibility: the plausibility assessments of all the people who have contributed to the discourse and assessments. But somebody has to take some responsibility for the decision. And that must involve accountability — which brings us back to the civic credit accounts. If you wish to actually have some actual 'say' in such a decision, you have to commit some of your credit points — perhaps your traditional 'vote' or polling opinion is 'weighted' by the credit points you are willing to put up as 'ante' — and lose if your decision is flawed. Of course, if it's a good decision, it will earn you points back, with 'interest' depending on how good it is.

Makes sense. It sounds a bit complicated, but with all the new information technology we have, it shouldn’t be too difficult to implement. I assume that such decisions — if they are to apply (e.g. as ‘law’) to the entire community, city, state, or whatever entity — must be ‘announced’ in a format ‘validated’ by the credit points that are backing them up. I like the aspect that the currency for influencing decisions and making decision-makers ‘accountable’ has been shifted away from money to civic credits. But tell me: won’t there be decisions that are so important and consequential — and require vast resources such that no individual decision-maker alone can reasonably be accountable for them?

Sure. The provision for this is also quite simple: If there is such a momentous decision, requiring so much money or other resources that the responsibility for it must be shared by the community — or at least by the supporters in the community, — this can be achieved by people backing the decision transferring credit points from their own accounts to that of the ‘official’ in charge of actually ‘signing’ for the decision. If it’s a bad one, they all, including the official, will be ‘accountable’ by losing their points. If it’s successful and ‘earning’ new credits, the points will have to be paid back to the supporters — with ‘points interest’ according to the size of their respective investment.

Sounds interesting, even like a breakthrough, almost. Thanks for the summary; I guess they are still discussing quite a few of the details that must be worked out. Looking forward to hearing more about it when I get back; gotta run.

Remember, you heard it here first, Bog-Hubert. Have a safe trip!
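The credit-point mechanics described in the conversation above — an up-front 'ante' escrowed against a decision, supporters transferring points to share the responsibility, forfeiture if the decision fails, and repayment with 'interest' if it succeeds — can be sketched as a simple ledger. The class names and the flat interest rate are my own illustrative assumptions:

```python
from dataclasses import dataclass, field

@dataclass
class Account:
    owner: str
    points: float

@dataclass
class Decision:
    # Points committed up-front ('ante') by the signing official
    # and by any supporters sharing the responsibility.
    stakes: dict = field(default_factory=dict)

def commit(decision, account, ante):
    # The ante is escrowed: deducted now, returned with 'interest'
    # only if the decision turns out well.
    assert account.points >= ante, "cannot commit more points than owned"
    account.points -= ante
    decision.stakes[account.owner] = decision.stakes.get(account.owner, 0.0) + ante

def settle(decision, accounts, success, interest=0.1):
    # A bad decision forfeits all stakes; a good one repays each
    # stake with 'interest' proportional to the amount invested.
    for owner, stake in decision.stakes.items():
        if success:
            accounts[owner].points += stake * (1.0 + interest)
    decision.stakes.clear()
```

For example, an official committing 40 points and a backer transferring 20 would, on a successful decision at 10% interest, recover 44 and 22 points respectively; on a failed one, both stakes are simply lost.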