


EVALUATION IN THE PLANNING DISCOURSE: ASPECTS and ‘ASPECT TREES’

An effort to clarify the role of deliberative evaluation in the planning and policy-making process.  Thorbjørn Mann,  January 2020

The questions surrounding the task of assembling ‘all’ aspects calling for ‘due consideration’.

 

ASPECTS AND ASPECT TREE DISPLAYS

Once an evaluation effort begins to get serious about its professed aims — deliberating, making overall judgments a transparent function of partial judgments, ‘weighing all the pros and cons’, trying not to forget anything significant, avoiding oversights that could lead to ‘unexpected’ adverse consequences of a plan (consequences that could have been anticipated with some care) — the people involved will begin to create lists of items that ‘should be given due consideration’ before making a decision. One label for these items is ‘aspects’, a term originally meaning simply the act of looking at the object (plan) to be decided upon from different points of view.

A survey of different approaches to evaluation shows that there are many such labels ‘on the market’ for these ‘things to be given due consideration’. And many of them — especially the many evaluation, problem-solving, and systems-change consultant brands that compete for commissions to help companies and institutions cope with their issues — come with very different recommendations for how this should be done. For the effort to develop a general public planning discourse support platform for dealing with projects and challenges that affect people in many governmental and commercial ‘jurisdictions’ — ultimately: ‘global’ challenges — the question then becomes: how can and should all these differences in the way people talk about these issues be accommodated in a common platform?

Whether a common ground for this can be found — or, if a common label cannot be agreed upon, a way to accommodate all the different perspectives — depends on a scrutiny of the different terms and their procedural implications. This is a significant task in itself, and one for which I have not seen much in the way of inquiry and suggestions (other than each ‘brand’s’ recommendation to adopt its own terms and approach). So raising this question might be the beginning of a sizable discussion in itself (or of a survey of existing work I have not seen). Pending the outcome of such an investigation, the issues raised in this series on evaluation will continue to use the term ‘aspect’, with apologies to proponents of other perspectives.

This diversity of terminology is only one reason why discussion is needed, however. Another has to do with the possibility of bias in the very selection of terms, depending on the underlying theory or method, or on whether the perspective is tied to some ‘movement’ that by its very nature puts one main aspect at the center of attention (‘competitive strength and growth’, ‘sustainability’, ‘regeneration’, ‘climate change’, ‘globalization’ versus ‘local culture’, etc.). There are many efforts to classify or group aspects — from Vitruvius’ three main aspects of ‘firmness, convenience and delight’, to the simple ‘cost, benefit, and risk’ grouping, to recent efforts that encourage participants to explore aspects from the viewpoints of different groups of affected or concerned parties, mixed in with concepts such as ‘principles’ or best and worst expected outcomes, shown on a ‘canvas’ poster for orientation. Do these efforts encourage the contribution of information from the public, or do they give an impression of adequate coverage while inadvertently missing significant aspects? It seems that any classification scheme for aspects is likely to end up neglecting or marginalizing some concerns of affected parties.

Comparatively minor questions concern potential mistakes in applying the related tools: listing preferred or familiar means of plan implementation as if they were aspects representing goals or concerns, for example, or listing essentially the same concern under different labels (and thus weighing it twice). The issue of functional relationships between different aspects — a main concern of systems views of a problem situation — is often not well represented in the evaluation tools. A major potential controversy is, of course, the question of who is doing the evaluation, whose concerns are represented, and what sources of information a team will draw upon to assemble the aspect list.

It may be useful to look at the expectations placed on the vocabulary and its corresponding tools: Is the goal to ensure ‘scientific’ rigor, or to make it easy for lay participants to understand and contribute to the discussion? To simplify things, or to ensure comprehensive coverage? Which vocabulary best supports further explanation (sub-aspects, etc.) and, ultimately, showing how valuation judgments relate to objective criteria — performance measures?

Finally: given the number of different ‘perspectives’, how should the platform deal with the potential for biased ‘framing’ of discussions by the sequence in which comments are entered and displayed — or is this a concern that should be left to the participants in the process, while the platform itself remains as ‘neutral’ as possible, even with respect to potential bias or distortions?

The ‘aspect tree’ of some approaches refers to the hierarchical ‘tree’ structure that emerges in a display of main aspects, each further explained by ‘sub-aspects’, sub-sub-aspects, and so on. The outermost ‘leaves’ of the aspect tree would be the ‘criteria’ or objective performance variables, to which participants might carry the explanations of their judgment basis. (See the later section on criteria and criterion functions.) Is the possibility of doing that a factor in some people’s insistence on ‘basing decisions on facts’ — only — thereby eliminating ‘subjective’ judgments that can be explained only by listing more subjective aspects?
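As a minimal illustration of this tree structure (not part of any particular approach; all labels and criteria here are hypothetical), an aspect tree can be sketched as a small recursive data type in which interior nodes are aspects or sub-aspects and the leaves carry the objective criteria:

```python
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class Aspect:
    """A node in an aspect tree: either an interior aspect with sub-aspects,
    or a leaf aspect tied to an objective criterion (performance variable)."""
    label: str
    sub_aspects: List["Aspect"] = field(default_factory=list)
    criterion: Optional[str] = None   # objective performance variable at the leaves

    def leaves(self):
        """Collect all leaf aspects, i.e. those that would carry criteria."""
        if not self.sub_aspects:
            return [self]
        found = []
        for sub in self.sub_aspects:
            found.extend(sub.leaves())
        return found

# Hypothetical fragment of an aspect tree for evaluating a building plan:
tree = Aspect("Overall quality", [
    Aspect("Safety", [
        Aspect("Structural stability", criterion="load capacity (kN)"),
        Aspect("Fire egress", criterion="evacuation time (min)"),
    ]),
    Aspect("Cost", criterion="life-cycle cost ($)"),
    Aspect("Delight", [Aspect("Daylight quality", criterion="daylight factor (%)")]),
])

print([leaf.criterion for leaf in tree.leaves()])
```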

An important warning was made by Rittel in discussing ‘Wicked Problems’ long ago: the more different perspectives, explanations of a problem, and potential solutions are entered into the discussion, the more aspects will appear claiming ‘due consideration’. The possible consequences of proposed solutions alone extend endlessly into the future. This makes it impossible for a single designer or planner, or even a team of problem-solvers, to anticipate them all: the principle of assembling ‘all’ such aspects is practically impossible to meet. This is both a reminder to humbly abstain from claims of comprehensive coverage, and a justification of wide participation on logical (rather than the more common ideological-political) grounds: inviting all potentially affected parties to contribute to the discourse is the best way to get the needed information.

The need for more discussion of this subject, finally, is underscored by the presence of approaches or attitudes that deny the need for evaluation ‘methods’ altogether. This takes different forms, ranging from calls for ‘awareness’ or for general adoption of a new ‘paradigm’ or approach — ‘systems thinking’, holism, reliance on ‘swarm’ guidance, etc. — to more specific approaches like Alexander’s Pattern Language, which suggests that using valid patterns (solution elements, not evaluation aspects) to develop plans will guarantee their validity and quality, thus making evaluation unnecessary.

One source of heuristic guidance to justify ‘stopping rules’ in the effort to assemble evaluation aspects may be found in the weighting of relative importance given (as subjective judgments by participants) to the different aspects: if the assessment of a given aspect will not make a significant difference in the overall decision because that aspect carries too low a weight, is this a legitimate ‘excuse’ for not giving it a more thorough examination? (A later section will look at the weighting or preference ranking issue.)
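A rough worked example, assuming the weighted-average aggregation discussed in the aggregation section below: on a judgment scale from -3 to +3, an aspect assigned a relative weight of 0.02 can shift the overall score by at most 0.02 × 6 = 0.12 points, even if its partial judgment swings from one extreme to the other. Whether such a small possible effect justifies curtailing the examination of that aspect is itself a judgment the participants would have to agree on.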

–o–

EVALUATION IN THE PLANNING DISCOURSE — AGGREGATION

An effort  to clarify the role of evaluation in the planning process.

Thorbjørn Mann

THE AGGREGATION PROBLEM:

Getting Overall Judgments from Partial Judgments

The concept of ‘deliberation’ was explained, in part, as the process of ‘making overall judgments a function of partial judgments’. We may have gone through the process of trying to explain our overall judgment about something to others, or made the effort of ‘giving due consideration’ to all aspects of the situation, and so arrived at a set of partial judgments. Now the question becomes: just how do we ‘assemble’ (‘aggregate’) these partial judgments into the overall judgment that can guide us in making the decision — for example, to adopt or reject the proposed plan?

The discussion has already gone past the level of familiar practices such as merely counting the number of supporting and opposing ‘votes’, and even past some well-intentioned approaches that begin to look at the number of explanations (arguments or support statements) in terms of ‘breadth’ (the number of different aspects brought up by each supporting or opposing party) and ‘depth’ (the number of levels of further support for the premises and assumptions of the individual arguments).

The reason these approaches are not satisfying is that neither of them even begins to consider the validity, truth, and probability (or more generally: plausibility), weight, or relevance of any of the aspects discussed, or whether the judgments about those aspects or justifications have even been ‘duly considered’ and understood.

Obviously, it is the content merit, the validity, the ‘weight’ of arguments, etc. that we try to bring to bear on the decision. Do we have better, more ‘systematic’ ways to do this than Ben Franklin’s suggestion? (He recommended writing up the pros and cons in two columns on a sheet of paper, then looking for pairs of pros and cons that carry approximately equal weight and cancel each other out, and crossing those pairs out, until only the arguments that have no counterweight in the opposite column remain: those are the ones that should tilt the decision towards approval or rejection.)
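A toy sketch of Franklin’s cancellation procedure, purely illustrative, assuming the informal sense of ‘approximately equal weight’ is replaced by numeric weights and a tolerance (all labels and numbers hypothetical):

```python
def franklin_balance(pros, cons, tolerance=0.1):
    """Cross out pro/con pairs of roughly equal weight; report what remains.
    pros, cons: lists of (label, weight) tuples; weights stand in for informal judgments."""
    pros, cons = sorted(pros, key=lambda x: x[1]), sorted(cons, key=lambda x: x[1])
    remaining_pros, remaining_cons = list(pros), list(cons)
    for p in pros:
        for c in list(remaining_cons):
            if p in remaining_pros and abs(p[1] - c[1]) <= tolerance:
                remaining_pros.remove(p)   # the pair cancels out
                remaining_cons.remove(c)
                break
    return remaining_pros, remaining_cons

# Hypothetical example:
pros = [("saves travel time", 0.8), ("creates jobs", 0.5)]
cons = [("construction cost", 0.8), ("noise during construction", 0.2)]
print(franklin_balance(pros, cons))
# remaining pros: [("creates jobs", 0.5)]; remaining cons: [("noise during construction", 0.2)]
```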

What we have, on the one hand, is the impressively quantitative ‘Benefit/Cost’ approach, which works by assigning monetary value to all the b e n e f i t s of a proposed plan (the ‘pro’ arguments) and comparing them with the monetary value of the ‘c o s t s’ of implementing it. It has run into considerable criticism, mainly for these reasons: the ‘moral’ reluctance to assign monetary value to people’s health, happiness, and lives; the fact that the analysis usually has to be done by ‘experts’ rather than by citizens or affected groups; and the fact that it is carried out from the perspective of some overall ‘common good’ — usually the ‘biased’ perspective of the government currently in power, which may not be shared by all segments of society — because it tends to hide the issue of the distribution of benefits and costs: inequality.

On the other hand, we have the approaches that separate the ‘description’ of the plan or object to be evaluated from the perceived ‘goodness’ (‘quality’) judgments about the plan and its expected outcome, and both of these from the ‘validity’ (plausibility, probability) of the statements (arguments) conveying the claims about those outcomes — together, so far, with the assumption that ‘everybody’, including all ‘affected’ parties, can make such judgments and ‘test’ their merit in a participatory discourse. What is still missing are the possible ways in which these judgments can be ‘aggregated’ into overall judgments and guiding measures of merit for the decision — first for individuals, and then for any group that has to come to a commonly supported decision. This is the topic to be discussed under the heading of ‘aggregation’ and ‘aggregation functions’ — the rules for getting ‘overall’ judgments from partial judgments and ‘criterion function’ results.

It turns out that there are different possible rules for this — assumptions that must be agreed upon in each evaluation situation, because they result in different decisions. The following are some considerations about assumptions or expectations for ‘aggregation functions’ (suggested in H. Rittel’s UC Berkeley lectures on evaluation, and listed in H. Dehlinger’s article “Deontische Fragen: Urteilsbildung und Bewertungssysteme” in “Die methodische Bewertung: Ein Instrument des Architekten”, Festschrift zum 65. Geburtstag von Prof. Arne Musso, TU Berlin, 1993):

Possible expectation considerations for aggregation functions:

1 Do we wish to arrive at a single overall judgment (of quality / goodness or plausibility etc.) — one that can help us distinguish between e.g. plan alternatives of greater or lesser goodness?

2 Should the judgments be expressed on a commonly agreed-upon judgment scale whose end points and interim values ‘mean’ the same for all participants in the exercise? For example, should we agree that the end points of a ‘goodness’ judgment scale should mean ‘couldn’t possibly be better’ and ‘couldn’t possibly be worse’, respectively, and that there should be a midpoint meaning ‘neither good nor bad; indifferent’ or ‘don’t know, can’t make a judgment’? (Most judgment scales in practice are ‘one-directional’, running from zero to some positive number.)

3 Should the judgment scale be the same at all levels of the aspect tree, to maintain consistency of the meaning of scores at all levels? So any equations for the aggregation functions should be designed to produce the respective overall judgment at the next higher level to be a score on the same scale.

4 Should the aggregation function ensure that if a partial score is improved, the resulting overall score is also higher or the same, but not lower (‘worse’) than before? By the same rule, the overall score should not become better than the previous score if one of the partial judgments becomes lower than before.
This expectation means that in a criterion function, the line showing the judgment scores should rise or fall steadily (monotonically), without sudden spikes or valleys.

5 Should the overall score be the highest one (say, +3 = ’couldn’t be better’, on a +3/-3 scale) only if all partial scores are +3?

6 Should the overall score be a result of ‘due consideration’ of all the partial scores?

7a Should the overall score be ‘couldn’t be worse’ (e.g. -3 on the +3/-3 scale) if all partial scores are -3?
Or
7b Should the overall score become -3 if one of the partial scores becomes -3 and thus unacceptable?

Different functions — different equations for ‘summing up’ partial judgments — will be needed for this. There will be situations or tasks in which aggregation functions meeting expectation 7b are needed. No single aggregation function meets all of these expectations; thus, the choice of aggregation functions must be discussed and agreed upon in the process.

Examples:

‘Formal’ Evaluation process for Plan ‘Quality’

Individual Assessment

The aggregation functions that can be considered for individual ‘quality’ evaluation (deliberating goodness judgments, aspect trees, and criteria in what may be called ‘formal evaluation procedures’) include the following:

Type I:    ‘Weighted average’ function:    Q = ∑ (qi * wi)
                                                                       
where Q is the overall deliberated ‘quality’ or ‘goodness’ score; qi is the partial score of aspect or sub-aspect i; n is the number of aspects at that level; and wi is the weight of relative importance of aspect i, on a scale 0 ≤ wi ≤ 1 and such that ∑ wi = 1. This constraint is needed to ensure that Q will be on the same scale as the qi (so that the meaning of the resulting judgment score stays the same).

This function does not meet expectation 7b; it allows ‘poor scores’ on some aspects to be compensated for by good scores on other aspects.

Type II a:  (“the chain as strong as its weakest link” function):      Q = Min (qi)

Type IIb:        Q = ∏ ((qi + u) ^wi ) – u
                       
Here, Q is the overall score, qi the partial score i of the n aspects, and u the extreme value of the judgment scale (e.g. 3 in the above examples). This function — raising each term (qi + u) to the power of its weight wi, multiplying these factors, and then subtracting u to bring the overall score back to the +3/-3 scale — behaves much like the Type I function as long as all the scores are in the positive range, but pulls the overall score closer to -u the closer any one of the partial scores comes to -u, the ‘unacceptable’ performance or quality. (Example: if the structural stability of a building does not stand up to expected loads, it does not matter how functionally adequate or aesthetically pleasing it otherwise is: its evaluation should express that it should not be built.)
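A minimal sketch of these three function types in code, assuming the +3/-3 judgment scale used in the examples above (the function names and sample numbers are illustrative, not part of any established tool):

```python
def type_i_weighted_average(q, w):
    """Type I: Q = sum(qi * wi), with weights wi >= 0 summing to 1.
    Poor scores on some aspects can be compensated by good scores on others."""
    assert abs(sum(w) - 1.0) < 1e-9 and all(wi >= 0 for wi in w)
    return sum(qi * wi for qi, wi in zip(q, w))

def type_iia_weakest_link(q):
    """Type IIa: Q = min(qi), 'the chain is as strong as its weakest link'."""
    return min(q)

def type_iib_weighted_geometric(q, w, u=3.0):
    """Type IIb: Q = prod((qi + u) ** wi) - u, on a scale from -u to +u.
    Behaves much like Type I for positive scores, but is pulled toward -u
    as any one partial score approaches -u (expectation 7b)."""
    assert abs(sum(w) - 1.0) < 1e-9 and all(wi >= 0 for wi in w)
    product = 1.0
    for qi, wi in zip(q, w):
        product *= (qi + u) ** wi
    return product - u

# Hypothetical partial scores and weights for three aspects:
q = [2.5, 1.0, -2.9]      # the third aspect is close to 'unacceptable'
w = [0.5, 0.3, 0.2]
print(type_i_weighted_average(q, w))      # about +0.97: compensates, still positive
print(type_iia_weakest_link(q))           # -2.9
print(type_iib_weighted_geometric(q, w))  # about -0.76: pulled well below Type I
```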

Group assessments:

Individual scores from these functions can be combined to get statistical ‘group’ indicators GQ, for example:

GQ = 1/m ∑ Qj
This is the average or mean of all individual Qj scores for all m participants j.

GQ = Qj
This takes the judgment of a single (designated) group member j as the group score.

GQ = Min (Qj)
The group score is equal to the score of the member with the lowest score in the group; both of these last two functions effectively make one participant the ‘dictator’ of the group…

Other functions should be explored — for example, functions that consider the distribution of improvements in scores for a plan, compared with the existing or expected situation the plan is meant to remedy. The form of the Type IIb aggregation function could also be used for group judgment aggregation.
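A corresponding sketch of the group indicators, under the same assumptions as the previous sketch (applying the Type IIb form across participants with equal weights is one possible reading of the suggestion above, not an established rule):

```python
def group_mean(Q_scores):
    """GQ = (1/m) * sum(Qj): the mean of all individual overall scores."""
    return sum(Q_scores) / len(Q_scores)

def group_min(Q_scores):
    """GQ = min(Qj): the most critical participant sets the group score."""
    return min(Q_scores)

def group_type_iib(Q_scores, u=3.0):
    """Type IIb form across participants (equal weights 1/m), so that one
    participant's strong rejection pulls the group score down toward -u."""
    m = len(Q_scores)
    product = 1.0
    for Qj in Q_scores:
        product *= (Qj + u) ** (1.0 / m)
    return product - u

# Hypothetical overall scores of four participants:
Q_scores = [2.0, 1.5, 0.5, -2.8]
print(group_mean(Q_scores))       # 0.3
print(group_min(Q_scores))        # -2.8
print(group_type_iib(Q_scores))   # about -1.0
```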

The use of any of these aggregated (‘deliberated’) judgment scores as a ‘direct’ measure of performance that determines the decision c a n n o t be recommended: they should be considered decision guides, not determinants. For one thing, the expectation of ‘due consideration of all aspects‘ would require complete knowledge of all consequences of a plan and of all causes of the problem it aims to fix — an expectation that must be considered unrealistic in many situations, but especially with ‘wicked’ problems or ‘messes’. There, decision-makers must be willing to assume responsibility for the possibility of being wrong — a possibility that, by definition, cannot be deliberated when it is caused by ignorance of what we might be wrong about.

Aggregation functions for developing overall ‘Plan plausibility’ judgment
from the evaluation of ‘pro’ and ‘con’ arguments.

Plausibility judgments

It is necessary to reach agreements about the terms used for the merit of judgments about plans as derived from argument evaluation, because the evaluation task for planning arguments is somewhat different from the assessment usually applied to arguments. Traditionally, the purpose of argument analysis and evaluation is seen as that of verifying whether a claim — the ‘conclusion’ of an argument — is true or false, and this is seen as depending on the truth of the premises of the argument and the ‘validity’ of its form, pattern, or ‘inference rule’. These criteria do not apply to planning arguments, which can generally be represented as follows (stating the ‘conclusion’ — the claim about a proposed plan A — first):

Plan A ought to be implemented
because
Plan A will result in outcome B, (given or assuming conditions C);
and
Outcome B ought to be aimed for / pursued;
and
Conditions C are given (or will be when the plan is implemented)

Like many arguments studied by traditional logic and rhetoric, planning arguments are rarely stated with all premises made explicit in discussions; some are assumed to be ‘taken for granted’ by the audience: ‘enthymemes’. But to evaluate these arguments, all premises must be stated and considered explicitly.

This argument pattern — and its variations arising from different constellations of assertion or negation of the premises — does not meet the validity conditions for ‘valid’ arguments in the formal logic sense: it is, at best, inconclusive. Its premises cannot be established as ‘true or false‘ — the proposed plan is being discussed precisely because neither it nor the outcome B exists (‘is true’) yet. This also means that some of the premises — the factual-instrumental claim ‘If A is implemented, then B will happen, given C’ and the claim ‘C is (or will be) present’ — are estimates or predictions, qualified as probabilities. And ‘B ought to be pursued’, as well as the conclusion ‘A ought to be implemented’, is neither adequately called ‘probable’ nor true or false: the term ‘plausible’ seems more fitting, at least for some participants, though not necessarily for all. Indeed, ‘plausibility’ judgments can be applied to all the claims, with the appropriate interpretation easily understood for each kind. Plausibility is a matter of degree, not a binary yes/no quality. And unlike the assessment of factual and even probability claims in common studies of logic and argumentation, the ‘conclusion’ (the decision to implement) is not determined by a single ‘clinching’ argument: it rests on several or many ‘pros and cons’ that must be weighed against each other. That is the evaluation task for planning argumentation, and it leads to different ‘aggregation’ tools.

The logical structure of planning argumentation can be stated in simplified form as follows:

– An individual’s overall plan plausibility judgment PLANPL is a function of the ‘weights’ Argw of the various pro and con arguments raised about the proposal.
– The argument weight is a function of the argument’s plausibility Argpl and the weight of relative importance w of its deontic (ought-) premise.
– The Argument plausibility Argpl is a function of the plausibility of its premises.

Examples of aggregation functions for this process might be the following:
                                                   
1. a Argument plausibility:        Argpli = ∏ {Premplj} for all n premises j.

Or  

1.b   Argpli = Min{ Premplj}

2.    Argument weight:               Argwi = Argpli * wi with 0 ≤ wi and ∑ wi = 1
for the ought-premises of all m arguments

3. Proposal plausibility PLANPL = ∑ Argwi
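A sketch of this chain of functions, using option 1.b (the minimum) for premise aggregation. It assumes, for illustration only, a plausibility scale from -1 to +1 on which negative values mean a premise or argument is judged to speak against rather than for the plan; the text above does not fix such a scale, so this convention and all the sample numbers are hypothetical:

```python
def argument_plausibility(premise_pls):
    """Argpl = minimum of the premise plausibilities (option 1.b above);
    option 1.a would multiply them instead."""
    return min(premise_pls)

def argument_weight(arg_pl, w):
    """Argw = Argpl * w, where w is the weight of relative importance of the
    argument's deontic (ought-) premise, with 0 <= w and the w's summing to 1."""
    return arg_pl * w

def plan_plausibility(arguments):
    """PLANPL = sum of argument weights over all pro and con arguments.
    'arguments' is a list of (premise_plausibilities, deontic_weight) pairs."""
    weights = [w for _, w in arguments]
    assert abs(sum(weights) - 1.0) < 1e-9
    return sum(argument_weight(argument_plausibility(pls), w)
               for pls, w in arguments)

# Hypothetical example: two pro arguments and one con argument; under the
# assumed scale, the con argument's 'ought' premise carries a negative plausibility.
arguments = [
    ([0.8, 0.9, 0.7], 0.5),    # pro: fairly plausible premises, important concern
    ([0.6, 0.5, 0.9], 0.2),    # pro: weaker premises, less important concern
    ([0.9, -0.8, 0.7], 0.3),   # con: the deontic premise is judged implausible
]
print(plan_plausibility(arguments))   # 0.35 + 0.10 - 0.24 = 0.21
```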
                                               

Aggregation functions for Group judgment statistics: (Similar to the Quality group aggregations)

Group mean plausibility:   GPLANPL = 1/k ∑ PLANPLp for all k participants p.

There are, of course, other statistical measures of the set of individual plausibility judgments that can be examined and discussed. Like the aggregated ‘Quality’ measures, these ‘group’ plausibility statistics should not be used as decision determinants but as guides — for instance, as indicators of the need for further discussion and explanation of judgment differences, or for revision of plan details to alleviate the concerns behind large judgment differences.
[Diagram: Evalmap 11 — Aggregation]

Comments? Additions?

–o–

EVALUATION IN THE PLANNING PROCESS: EVALUATION TASKS


An effort to clarify the role of deliberative evaluation in the planning and policy-making process

Thorbjoern Mann

EVALUATION TASKS / SITUATIONS

The necessity for this review of evaluation practices and tools arises from the fact that evaluation tasks, judgments, and related activities occur at many stages of planning projects. A focus on the most common task — the evaluation of a proposed plan or a set of plan alternatives in preparation for the final decision — may hide the role and impact of the many judgments made along the way, where, explicitly or implicitly, not only different labels but also very different vocabularies, tools, and principles are involved. Is it necessary to look at these differences, and to ask whether there should be more of an effort at coordination and common vocabulary in the set of working agreements for a project?

This section will at least raise the question, and begin to explore the different disguises of evaluation acts throughout the planning process, in order to answer these questions.

Many plans are started as extensions of routine ‘maintenance’ activities on existing processes and systems, using established performance measures as indicators of a need for extraordinary steps to ensure the continued desirable function of the system in question. In such tasks, the selected performance criteria, their threshold values demanding action and most of the expected remedial steps and means, are part of the factual ‘current conditions’ data basis of further planning.

To what extent are these data understood as part of the planning project — either as ‘given’ aspects or as needing revision, discussion, and change — when the situation is so unprecedented as to call for activities going beyond routine maintenance concerns? Such situations are often referred to as ’problems’, which tends to trigger a very different way of talking. There are many different ‘definitions’, views, and understandings of problems, as well as different problem types. To what extent is an evaluation group’s decision to talk about the situation as a problem, or as a specific problem type, already an evaluative act? Even adopting the view of a ‘problem’ as a discrepancy, perceived by somebody, between an existing ‘IS’ state of affairs and a view of what that state ‘OUGHT’ to be — calling for ideas about ‘HOW’ to get from the IS to the OUGHT — already involves such judgments.

Judgments about what ‘is’ the case call for judgments, perhaps even measurements, of current conditions: assessments of factual matters, even as those are perceived — again, by whom? — as ‘NOT-OUGHT’. Judgments specifying the OUGHT — ‘goals’, ‘visions’, ‘desirable’ states of affairs — belong to the ‘deontic’ realm, much as this is often obscured by the invocation of ‘facts’ in the form of authorities and of polls reporting the percentages of populations ‘wanting’ this or that ‘OUGHT’: the ‘good’ they are after. The judgments about the ‘HOW’ — means, tools, etc. for reaching those goals — may look like ‘factual-instrumental’ judgments, but they too reach into the deontic realm: some possible ‘means’ are decidedly NOT what we OUGHT to do, no matter how functionally effective they seem to be.

The ‘authority’ sources of judgment that participants in planning will have to consider come in the form of laws and ‘regulations’. Taken as ‘givens’, they may be helpful in defining and constraining the ‘solution space’ for the development of the plan. But they often ‘don’t fit the circumstances’ of a current planning situation, and raise questions about whether to apply for a ‘variance’, an exception to a rule. Of course, any regulation is itself the outcome of an evaluation or judgment process — one that may be acknowledged but is usually not thoroughly examined by the planners of a specific project. The temptation, of course, is to ‘accept’ such regulations as the critical performance objective (‘to get the permit’), conveniently forgetting that such regulations usually specify m i n i m a l performance expectations. They usually focus on meaningful concerns such as safety and conformance to setback and functional performance conventions, while neglecting or drawing attention away from other issues such as aesthetics, sustainability, and the environmental or mental health impact of the resulting ‘permitted’ but in many other ways quite mediocre or outright undesirable solutions.

Other guidance tools for the development of the plan — for buildings and urban environments, but also for general societal policies and policy implementation efforts — are the ‘programs’ (‘briefs’) and equivalent statements about the desired outcome. One main purpose of such statements is to describe the scope of the plan (for buildings: how many spaces, their sizes and functions, etc.) in relation to the constraint of the budget. In many cases, such descriptions are in turn guided by ‘standards’ and norms for similar uses, in each case moving responsibility for the evaluation judgments onto a different agency: asking for the basis of judgment behind such provisions becomes a complex task in itself.

The ‘participation’ demand for involving eventual users, citizens, and affected parties in these processes seems to take two main forms: one is the general survey, asking participants to fill out questionnaires that try to capture expectations and preferences; the other is the ‘hearing’ held in connection with the presentation of in-progress ‘option’ decisions or final plans. Do the different methodological bases and treatments of these otherwise laudable efforts raise questions about their ultimate usefulness in nurturing the production of ‘quality’ plans?

The term ‘quality’ is a key concern of a very different approach to design and planning — one that explicitly denies the very need for ‘method’ in the form of systematic evaluation procedures. This is the key feature (from the current point of view) of the ‘Pattern Language’ of Christopher Alexander. Its promise (stated briefly, and arguably unfairly distorted) is that using ‘patterns’ — such as the design precepts for building and town planning in his book ‘A Pattern Language’ — in the development of a plan will ‘guarantee’ an outcome that embodies the ‘quality without a name’, including many of the aspects not addressed by the ‘usual’ design process and its regulation- and function-centered constraints.

This move seems to be very appealing to designers (surprisingly, even more so in other domains such as computer programming than in architecture): any outcome produced in the proper way with the proper patterns is thereby ‘good’ (has the ‘quality’) and does not need further evaluation. Not discussed, as far as I can see, is the fact that the evaluation issue is merely moved to the process of suggesting and ‘validating’ the patterns — in the building case, by Alexander and his associates, as assembled in the book. Is the admirable and very necessary effort to bring those missing quality issues back into the design and planning discussion undercut by the removal of the evaluation problem from that discussion?

The Pattern Language example should make it very clear how drastically the treatment of the evaluation question could influence the process and decision-making in the planning process.

Comments: Missing items / issues? Wrong question?

–o–

EVALUATION IN THE PLANNING DISCOURSE: ISSUES, CONTROVERSIES, (OVERVIEW)

Thorbjoern Mann

An effort to clarify the role of deliberative evaluation in the planning and policy-making process.

Many aspects of evaluation-related tasks in familiar approaches and practices call for some re-assessment and improvement, even just for practical application in current situations. These will be discussed in more detail in sections addressing requirements and tools for practical application. Other issues are more significant in that they end up questioning the entire concept of deliberative evaluation in planning on a ‘philosophical’ level, or in that they resist the adoption of the smaller, practical improvements of the first kind because those would mean abandoning familiar habits based on tradition and even constitutional provisions.

The very concept of deliberative evaluation — as materialized in procedures and practices that look too cumbersome, bureaucratic, and ‘expert-model’ elitist to many — is an example of a fundamental issue that can significantly flavor and complicate planning discourse. The desire to do without such ‘methods’ is theoretically and emotionally supported by appeals to civic or patriotic consensus and unity of purpose, and even by ideas such as swarm behavior or the ‘wisdom of crowds’ that claim to produce ‘good’ solutions and community behavior more effortlessly. A related example is the philosophy behind Christopher Alexander’s ‘Pattern Language’. Does its claim — that using patterns declared ‘valid’ and ‘good’ (having the ‘Quality Without a Name’, ‘QWAN’) in developing plans and solutions, e.g. for buildings and neighborhoods, will produce overall solutions that are ‘automatically’ valid and good — really make any evaluation ‘method’ unnecessary?

A related issue is that of ‘objective’ measurement, facts, and ‘laws’ (akin to natural laws) as opposed to ‘subjective’ opinion. Discussion, felt to consist mainly of the latter — ‘mere opinions’, difficult to measure and thus lacking reliable tools for resolving disagreement — is seen as too unreliable a basis for important decisions.

On a more practical level, there is the matter of the ‘decision criteria’ that are assumed to legitimize decisions. Simple tools such as voting ratios — even for votes that follow the practice of debating the pros and cons of proposed plans — in reality result in the concerns of significant parts of affected populations (the minority) being effectively ignored, however eminently ‘democratic’ the practice is held to be (even by authoritarian regimes, as a smokescreen). Is the call for decisions based better and more transparently on the merit of discourse contributions and ‘due consideration’ of all aspects promising, but in need of different tools? What would those look like?

An understanding of ‘deliberation’ as the process of making the overall judgment (of goodness, value, acceptability, etc.) a function of partial judgments raises questions of ‘aggregation’: how do or should we convert the many partial judgments into overall judgments? How should the many individual judgments of members of a community be ‘aggregated’ into overall ‘group’ judgments, or into indicators of the distribution of individual judgments, that can guide the community’s decision on an issue? Here, too, traditional conventions need reconsideration.

These issues and controversies need to be examined not only individually but also in terms of how they relate to one another and how they should guide evaluation procedures in the planning discourse. The diagram shows a number of them and some of the relationships that add to the complexity; there are probably more that should be added to the list.

Additions, connections, comments?
–o–

The World Is Not As It Ought To Be — And What To Do About It

A Fog Island Tavern discussion
Thorbjørn Mann 2019

About: The aggravating spectacle of humanity’s inadequate response to challenges;
And countless confusing ideas, proposals, and calls for a ‘New System’ — without convincing remedies for some key flaws of current systems, such as poor communication and coordination, decision modes that do not lead to agreements based on the merit of discourse contributions, and inadequate control of power; suggestions for a discourse platform with participation incentives, evaluation of contribution merit, new decision modes, and provisions that serve as new tools for the control of power, following the principle of making key system provisions serve multiple purposes.

The aggravating spectacle of humanity’s response to its challenges

– Arrgh! I give up!
– Again, Sophie? What is it this time? Soft drinks? Men? GMO’s? Meditation? Politics?
– Oh stuff it, Vodçek. Make all the fun you want of it. But this is getting serious.
– I’m sure I’d agree. But it would help to know specifically what it is about? I have a feeling you’ve been doing too much surfing on the social networks…
– Guilty as charged. But where can we find out what’s really going on and what people are doing about it? And what really should be done about it?
– By ‘it’, I assume you mean the avalanche of crises and emergencies and disasters that the people on those networks are predicting will do us all in? Not even to mention the impending critical shortage of Sonoma Zinfandel if the folks over there don’t get the wildfires under control?
– Just keep it up, you zinical zinpusher. But it’s also the other guys, the ones that are getting all worked up about those predictions and just deny all of them, except the unpatriotic if not outright treasonous growing phenomenon of the doomsayers of course — who just spout foul language and curses all over the networks.
– Not that they have done any serious studies or investigations of their own, just projecting their own desire for taking over the government or keeping power and telling everybody what to do onto the other side…
– Now don’t you get started down that road too, Stephan. Isn’t it that kind of mutual mudslinging that’s making the problems worse rather than getting solutions?
– Well, you may be right, Sophie, but somebody has to point out the reasoning flaws and rhetorical dirty tricks and contradictions, to clear the way for finding better answers?
– Yes, I’m just as fed up about the contradictions and dirty rhetoric as you are, but when it just deteriorates into mutual accusations and name-calling, it isn’t helping, it’s making things worse.
– I agree, Sophie. But I’m curious: what are those contradictions you are worried about?
– Hi Bog-Hubert, glad you got here. Well: take the folks who are going on and on about participation and empowerment of the citizens. Power to the people, the downtrodden, the poor and disadvantaged. All good and justified — but in the next sentence, those same folks — or people in their networks — are complaining of lack of leadership on those issues. Leadership — the very thing they were railing against! Or the people on the other side — dismissing all the proposals and initiatives to cope with impending emergencies as just power grabs for big government that will take freedom away from the people — and relying on the most authoritarian bullies to run ‘their’ government and putting the progressives in their place… As if history isn’t full of examples of ‘free’ people electing themselves the most dictatorial and oppressive governments?
– Okay, Sophie, I think we share your worries. So what do you think ought to be done about all that?
– That’s what I came here to find out — are there any better ideas, some real solutions around? You guys have been talking and talking about things like this — have you got any answers? Where is Abbé Boulah now that we need him?

What would Abbé Boulah do?

– Ah Sophie — you’re beginning to sound like the folks who keep ranting ‘What would Reagan do’? And inadvertently admitting that they don’t have any ideas of their own about what to do… So now you’re asking ‘What would Abbé Boulah do’? I agree: it’s a better question, but…
– What you’re saying is: we should sit down and figure this out on our own, Bog-Hubert?
– I have a feeling that’s exactly what Abbé Boulah would say… Know anything better to do in this fog?
– All right. Let’s try to get started on it, at least. What’s the first step?
– Well, I’d say: have Vodçek get the air out of these glasses, for starters.
– Here you go, Bog-Hubert.
– Thanks, Vodçek. Okay, Let’s see. I’m not sure there’s a good rule about the sequence of steps we should follow. Discussions about plans, or problems can start anywhere: raising the issues about some problem, proposing some solution, etc. Anything can trigger the effort. So we can start anywhere we want.

Acknowledge: there are crises, problems, challenges.

– Sophie: You were talking about problems we face. Can we assume that there’s some agreement about that, as a starting point?
– Well, some people keep saying we should use different words. ‘Problem’: soo negative. And then there are those folks who say they’re just fear-mongering figments of power-hungry Big Government fans?
– Right, Dexter. So, avoiding that useless quarrgument: can we just acknowledge and describe those things as issues people get worried, annoyed, aggravated about? Getting hurt? Whatever those language purists want to call them instead?
– Sounds right. Whatever they want to call them: problems, challenges, emergencies, crises, ‘situations’ — when somebody feels that something ought to be done about them.
– I like that: Even for the folks who don’t think those worries are real — the fact that there are people who say there are problems aggravates them; for them that makes just one more such item, even if they don’t agree on what the problems are and what they should be called. Aggravations?
– Makes my head spin already, but yes: Even whether something should be done about people who say something should be done that they call problems. So it’s a very inclusive concept. Everybody agrees that something should be done about something. Even visions of a better future that isn’t here yet but should be…
– Good. And Sophie was getting confused — is that the right word? — about all the things people already propose ought to be done:

Many ‘alternative’ efforts already proposed or underway

– Right. I don’t blame her. I was surprised to learn about all those groups that are already doing interesting and important things — alternative initiatives, theories about what to do and how to do it, experiments, projects. All over the globe, even in places you wouldn’t expect much alternative creativity.
– So what’s wrong with that, Bog-Hubert? Isn’t that grounds for hope? What’s confusing you, Sophie?
– Well, you’d think it’s an encouraging sign and trend. But if you look at them more closely, say to decide which of those projects you should join to do your part, it becomes confusing. They all claim that they are working on THE answers, THE ‘New System’, THE collective future for the planet and humanity that everybody should join, calling for ‘unified’ teams, movements, efforts….
– Or selling their brand of ‘approach’…
– Bog-Hubert, you cynic… Well, I guess many of them are, trying to make a living from their latest New Thing. But they are all so different, based on beliefs and prime principles that are so ‘unique’ and different, and, well, ‘competitive’ rather than unifying and cooperative. Didn’t I mention that a while ago — the curious fact that many are calling for participation, emancipation, empowerment, self-organizing governance systems, but either call for or claim ‘leadership’ for those efforts?
– I agree, Sophie. But what worries me is less their diversity than their lack of mutual constructive communication. Yes, you mentioned competition. So what you see on their websites and other promotional material is all positive — success stories. What’s missing is critical information: not just successes but also shortcomings and failures.
– Stands to reason though, doesn’t it, Bog-Hubert: why would any such group boast about their failures?
– Ah, Vodçek. How can we learn anything from nothing but glorifying ads and videos? How can we ever get to common agreements about the ‘New System’ they are calling for, if we can’t learn what works and what doesn’t work? If we can’t reach a stage where acceptance of new ways of doing things is achieved without force or coercion or brainwashing, against the conviction of those who are convinced of different ideas? The old ways of ‘revolution’, ‘throwing out the old corrupt systems’, ‘regime change’ by smart or stupid bombing and ballistic missiles or mass demonstrations don’t work anymore: too often they just result in putting new faces into old organizational structures with the same fundamental shortcomings, for all their different party flags and logos and acronyms.
– Good point, about learning from all those experiments. But I’m not sure I understand the thing about replacing corrupt or oppressive governments with new systems that have the same problems. Isn’t it better to establish democracy — or to reinstate it where it has gone awry?
– Even at the cost of another bloody revolution or war? Well, sure, it depends on how bad the old regime has gotten. But the problem is really with democracy too, isn’t it? Hold on, Sophie, I haven’t gone over to the Dark Side of authoritarian governments of any stripe. Let me explain.
– That better be a good explanation.
– Or else? Okay. There are two main issues with democracy now, in my mind. The first one is this: for all its meritorious principle of ‘let’s leave our weapons outside, let’s talk and listen to each other, and then decide’ — the great parliamentarian idea of replacing conflict resolution by force with persuasion and reason — the way decisions are made when the talking stops is still a crutch, a shortcut. One that you might even say betrays that very principle.
– What in Tate’s Hell are you talking about, Bog-Hubert?
– Well, voting, of course, Sophie. Voting. Yes: the great democratic principle and human right. But it’s only a crutch, a shortcut to a decision. What is it really doing? The usual majority vote — 50 plus a tiny fraction percent — in effect allows the ‘winning’ party to say: Okay, you had your say and your vote, but the vote means you can forget all your concerns and reasons: we, the majority, have the say now. It means that the real concerns and ideas of as much as nearly half the population can now be ignored. And the upshot is that when we are sure to win the majority vote — perhaps because we have more money to buy campaign ads — we don’t even have to listen to your reasons and your speeches. If that is the best democracy can do, some people will feel very justified in looking for other systems.
– You’ll have to tell us what other, better systems are on the market to fix that problem   – the alternatives I know of that have been tried don’t make me eager for giving them another chance. But first tell us that second main flaw you mentioned?
– Sure. Now remember: I think the mature, well-designed democratic constitutions have the best provisions in human history against the abuse of power — the power of incumbent rulers installed by the voting rule. Election for limited terms, the balance of powers between the different branches, the tools of impeachment or the vote of no confidence, the role of the free press, the independent judiciary, freedom of information, and so on. The problem is that these provisions have increasingly been undermined by the power of money in the industrial and financial sectors of society, often in combination with the military. That’s no news, no secret: elections are determined by campaign financing. Even candidates who have promised to restrain that influence — “Take on Wall Street”, “Rein in the big corporations” — are subtly or unsubtly pushed to toe the line once elected.
– I see: so any regime change where the ‘new system’ still leaves those two factors in play, is liable to become as bad as the previous one — is that what you are saying?
– Yes, as two major factors in the game. So whatever the current majority/minority constellation, it has become very difficult for any society governed by those forces to reach agreements, even on issues that all parties agree should be fixed. Meaningful measures proposed by one party must be opposed by the other, even if the idea benefits everybody. Decisions based on the merit of the information contributed to the discourse? Impossible.
– If there is a meaningful discourse, which also seems to be in short supply these days: It’s all about power.
– Right: discussion is meaningless and just wasting time.
– I assume you are referring to the fact that while there is more information twittered and advertised than ever before, thanks to the new so-called information technology, the discourse seems to consist mainly of the parties talking to themselves, on their preferred channels or social media sites, to their own followers. Talk show hosts blatantly refusing to allow callers critical of their positions to ask questions and engage in discussions on their shows?
– Right. We could go on and on about the flaws of current democratic systems; there are many issues contributing to these problems. But what I was getting at is this: while there is justified criticism of the current systems, what I don’t see in all the material from the ‘new system’ and ‘throw out the old system’ groups are convincing ideas for addressing those two problems in our governance systems.

So again: what to do?

– You are making a convincingly depressing case here, Bog-Hubert. So do you have any better ideas for all this up your sleeve? Or do I have to cut you off and throw you out for making my customers miserable?
– More miserable than the daily news, Vodçek — if they even have the stomach for watching it before heading over here for distracting convivial comfort and conversation?
– Speak for yourself, my friend. But back to the issue. So what should be done, in your opinion?
– Well, we have talked about some interesting ideas here before. But maybe it’s useful to pull them together into a coherent, what do they call it in politics — ‘platform’? or ‘agenda’?
– That would be useful. ‘Story’ might be even more desirable, but maybe you could give us the main headings of it first?
– Wait, Vodçek. I know the commissioner was planning to come over, I’d like for him to hear this. Could we use a little break? Maybe you could refresh the life support stuff in our glasses and tend to your Grunt Bucket Stew or Fårikål, — pardon me, your ‘pot-au-feu’ or whatever you’ve got slow-simmering over in the corner?
– Sophie, watch your language, my dear. Okay, break it is. Give Bog-Hubert a chance to gather and diagram his confabulations. Here are some napkins for making notes, Bogmeister.
– Thanks, Vodçek. I’m touched to tears by your kind concern, may have to blow my nose. Where are you going, Sophie?
– Out on the terrace to see if the fog is lifting ’till the commissioner is showing up. Fresh air and all that…

A New Agenda?

– Welcome, Commissioner: we’ve been waiting for you. The usual?
– Good evening. Yes, thanks. Looks like you folks are in the middle of something important here?
– We’ll see; the middle or stuck in the muddle? Okay, Bog-Hubert: What have you got there on your napkin?
– Well, you asked for the main headings. It was a good idea — the important part is to see the connections between the different issues.
– Could you pass it around?
– Sure. You should really get a big screen for sharing napkin ideas here, Vodçek. Or at least a pink or greenboard. Blackboards are soo 19th century, and white soo 20th…, don’t you think? Well: The first items are the topics we have actually covered already here, that triggered this diabolical assignment: Your concerns about the sorry state of the world. The crises, conflicts, problems, disasters, emergencies you are afraid will spell the end of human civilization as we know it and ruin the oyster harvest in the bay if nothing is done about them.
– Yes; in short, like Rittel said: there is really only ONE Wicked Problem: the world is not as it ought to be.
– And you were waiting for lil’ ol me for this? That’s way above my local responsibilities: you should have called in some national or global fat cats for that!
– Well, we have to start somewhere, Commissioner. And the problems and what should be done about them are present at all scales, local to global, aren’t they?
– Okay, I guess. Go on, Bog-Hubert.

Issues and possible answers

– Well, what was expressed here was a feeling that the current system of governance — at all levels — is not going to tackle those issues properly. So many hotheads out there — so sorry, concerned citizens — are calling for throwing it out together with the swamp creatures who run it, and establishing a new system.
– That’s nothing new: it’s the bread and butter of daily news and history book re-writers everywhere.
– Right, Commissioner. Now these folks here don’t seem to have much faith in all those ‘new system’ ideas.
– That’s not really what we were saying, is it? It’s that there are too many of them, and they are so different that it’s quite unlikely there will be any common-sense agreement about just which kind of system we should adopt, in time to face the emergencies.
– Different way to put it, okay. And it seems that while there are innumerable well-intentioned ideas, efforts, projects, initiatives out there already, we — concerned humanity in general — do not know enough to agree on a global new system. And that was bringing up the question of what to do given that sorry state of affairs. Is that a good way to describe where we are?
– Sounds about right. But you seemed to think that some of the ideas we discussed here over several fogged-in sessions might be spun together into some kind of coherent agenda that folks like the revered Commissioner here should take a look at? And those bubbles on the napkin are your main steps of that agenda?
– I’m afraid so, yes. But ‘steps’ is not the right word. They should not be seen as a sequence of steps in a kind of systematic process but as issues to be addressed more or less simultaneously.

Acknowledge, embrace, support the different initiatives

– The first suggestion is that we should simply acknowledge all the different ideas and initiatives and accept the differences as a positive aspect. Not in spite but precisely because of their differences.
– Why? Isn’t that the problem? Isn’t it essential to work towards some kind of unified process?
– Good point. But if we do that by dismissing or denying the differences, without an adequate understanding of and agreement on what the unified answer should be, we’d make at least two serious mistakes — again.
– Only two?
– Well — two main ones. There may be more, sure. One is that if we don’t know what the unified system really should be like — do we? — but jump to premature conclusions based on some commonalities, we’d shut off what we could learn from the different experiments. So we should embrace, encourage, even actively support those experiments and ideas.
– Even if they are denouncing each other as the devil’s work? And support that?
– Even so. And yes, support them, on some conditions. A first condition would be an agreement not to get in each other’s way: to at least suspend the sentence of eternal damnation and destruction until we learn enough about what works and what doesn’t work from each. Part of that would be to abstain from labeling the ideas as the devil’s work and their proponents as his followers. Or as idiots. Even if they are sure that it would be some superior being’s pleasure to see them destroyed — to leave sentence and punishment up to that almighty entity in the hereafter. Meanwhile, secondly: to agree to honestly share not only the superior aspects but all experiences of their efforts, successes as well as failures and obstacles. Don’t we urgently need that information?
– Okay, There’ll be much discussion about the details of those agreements. But that’s for later, I guess. What about the other mistake you mentioned, Bog-Hubert?
– Thanks for reminding me. It’s an important one. For all the hue and cry about unification, aren’t we all interested — to some degree or other — in ‘making a difference’ in our lives, in giving them meaning? In becoming ‘better’ at something that will define us as distinct individuals — or groups? So shouldn’t part of our unified effort perhaps be to create many opportunities for everybody to make their differences in their lives? Not just becoming happy but indistinguishable cogs in the unified big machine, the big system?
– What you are saying is that there should be a deliberate balance in the collective aim, between the need for common projects such as remedies or responses to crises, and opportunities for individual differences?
– Yes, Sophie. Balance. Not one or the other. And that balance must be carefully negotiated and maintained. I’m not sure that there are general rules that apply to all situations — much as we might wish for a general ‘constitution’ that clearly governs all projects and conflicts.
– So we’d be seeing a lot of negotiation and haggling to achieve that balance, not even to speak of developing the solutions for projects or crisis response measures. How in the world…?

Developing a Planning / Policy-Making Discourse Platform

– You are right, Sophie. How will that be done? If these assumptions are anywhere close to plausible, what we’ll need, as a priority, is a better platform that facilitates the various tasks:
o better communication between all the different initiatives and projects;
o developing and negotiating the common ‘road rule’ agreements;
o sharing the ideas and experiences;
o developing common solutions;
o evaluating the information and proposals;
o reaching better decisions, based on the merit of discourse contributions;
o and perhaps contributing to a better control of power…
– How would that platform be different from all the information systems, networks, platforms, data bases and ‘expert systems’ we have already?
– Good question. I guess it’s easiest to look at the specific tasks we want to improve, to see how much of that the current systems can provide, and what new provisions must be developed to tie them together.

Getting the information: participation: incentives?

– Consider a proposed project to prevent or mitigate some problem, undesirable trend, or disaster: one that will affect many people in different countries or jurisdictions. The traditional information systems aim at supplying the data and the scientific and technical information that can be brought to bear on the issue. Simulation programs can help predict the future effects of past or current processes for which we know the underlying ‘laws’ and forces. But for a project dealing with unprecedented features, that information is not in textbooks or databases — it must be obtained by observation in the situation and from the people affected by the consequences of the plan.
– Okay. That has become accepted theory if not always done right in practice. Opinion surveys, participation lip service. Many people don’t take advantage of their rights to participate.
– Why is that?
– Many reasons. An important one is that they don’t see how their contributions will be heard; don’t feel the expected outcome will be worth the needed effort — if the bigwigs and experts end up doing what they want to do anyway. Like ‘Voter apathy’, the sense that it won’t make a difference.
– So the missing ingredient is to provide better incentives (making it worth the effort) and better transparency of how everybody’s contributions affect the outcome?

Measures of merit of discourse contributions to guide decisions

– Right. Part of that task is to build a process into the platform that shows how the merit of contributions — ideas, arguments — will determine the decision.
– What’s the problem with that now? If the free press and free speech are guaranteed and working, won’t the discussion, the surveys and the votes bring out the merit of what’s being said?
– In theory, yes. Let everybody have their say, then decide. In practice: why do you think all election campaigns — as well as campaigns for or against some proposed legislation — are clamoring for contributions: not meritorious information or arguments, but — you guessed it — money. And what’s the money for? Repeating and spreading the message. Emphasis on repeating. More ads and posters. But it’s always just messages cooked down into slogans, pretty pictures, 30-second visuals.
– Coming to think of it: I’ve seen TV ads for senate candidates — or was it the incumbent one himself? — just showing the candidate walking into his office and sitting down — no message or argument at all. As if they’re saying: you know what he’ll do. But he doesn’t even bother to say it. So he can’t be held to what he said?
– And all the yard signs — just the name, the logo.
– Yes: The message that so many of your neighbors support candidate x or policy y: is that an argument whose validity and merit can be measured? At best, the assumption is that the merit has come across in the ads and speeches — full of empty slogans, motherhood-issue affirmations and promises that sound good but whose likelihood of being fulfilled is not even to be mentioned; read my lips… The much-touted ‘swarms’ in nature and humanity include lemmings and frenzied masses deluded by big audacious lies. No: what we need is a better way to get a measure of the merit of discourse contributions. And we’ve made a start on that before, haven’t we, with the plausibility measures of planning arguments? And the suggestions for combining those with assessments of plan quality to form measures of plausibility-modified quality judgments?
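
(An aside, for illustration only: one simple way such a ‘plausibility-modified quality’ combination could work. The scales and the rescaling rule below are assumptions for the sketch, not the worked-out procedure referred to in the conversation.)

    # Illustration only: a quality judgment (assumed here on a -3..+3 scale) is
    # discounted by the plausibility (-1..+1) of the claims supporting it.
    # Plausibility is first rescaled to 0..1 so that an implausible claim simply
    # contributes nothing rather than flipping the sign of the quality judgment.
    def plausibility_modified_quality(quality: float, plausibility: float) -> float:
        weight = max(0.0, (plausibility + 1.0) / 2.0)   # -1..+1  ->  0..1
        return quality * weight

    print(plausibility_modified_quality(2.5, 0.6))   # 2.0: good, but discounted
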
– That’s still a big task, to develop procedures for deriving those measures that people will actually go through.
– Yes, but there are tools for improving that. And it can be done in such a way that the role of big money swaying voters just by buying more TV ads can be reduced.
– You need to describe that in some more detail, Bog-Hubert.
– Sure, Vodçek, but let’s go through the remaining provisions we want for the platform first.
– All right: what’s next?

Decision modes and procedures

– Okay, where were we? Let’s assume that we’ll be able to develop tools and methods for determining the merit of discourse contributions and to derive overall measures of support for plans or policies or candidates from them. We’ve talked about some of those ideas. Now those measures must either be built into actual new decision-making modes or — in situations where traditional decision modes like majority voting must be used — serve to compare, to confront, those voting results with the merit measures.
– What do you mean, confront?
– Sophie, let’s assume that there is a decision-making body constitutionally charged with making a decision on a plan, and doing so by majority voting. All the talking heads on TV will predict the outcome based on the ratio of members from the competing parties in that body: party discipline — which does not relate to the merit of arguments in any way. It’s just about power. Now assume there is a parallel process of developing a measure of plan plausibility or quality based on assessment of plan quality and argument (pro and con) plausibility. If that result shows that the plan is questionable or implausible, should that group get away with a decision to approve the plan, without some explanations, or additional efforts to make the plan more acceptable? Or — the other way around: If the contribution merit measures show that plan as being meaningful, plausible, beneficial: should they be allowed to just turn the proposal down?
– Wait a minute. All this talk about measures of merit: how is that measured? And who does the measuring? If it’s just a kind of Benefit-Cost Ratio in disguise, with the benefits expressed in dollars by a bunch of experts who aren’t even affected by the consequences of a plan, forget it.
– Good question, Dexter. No, the plausibility and quality measures will be derived from the judgments made by participants in the public discourse. The participants should include members of the public who have seen, read or heard the contributions even if they haven’t entered any themselves, or whose comments merely repeated arguments already made — but their assessments should be given contribution rewards as well.
– There’s usually much reliance on teams of experts for such projects, or small ‘focus groups’ led or ‘moderated’ by experts — won’t that be enough?
– That’s a serious issue all by itself, Dexter. Yes, in this scenario, there will be experts, but their judgments will be assessed by everybody. The plausibility of a plan does not depend on just the support or probabilities of the factual and technical information — for which they will produce evidence judged by what you might call ‘scientific’ methods — but also on the meaning and importance assigned by everybody to the ‘ought’-premises of arguments: what is felt to be good or bad about the consequences.
– And that will be judged by people’s concerns, fears, desires and principles about what’s good, bad, fair or moral? Subjective judgments all, of course. Though I guess there could be some AI-type check on consistency, on the extent of evidence or support for the claims involved.
– Yes, Vodçek: if you disagree with somebody’s take on a plan, you — or the ‘system’ — can ask for explanations of judgments, the reasoning behind them, the factual, technical, scientific evidence and other principles. But we have to realize and accept that some people will like and be in favor of what they see as ‘beneficial’ features of the plan that to others appear as ‘costs’ and disadvantages. We have discussed those issues as well here, haven’t we? The efforts to declare something as ‘common good’ that everybody must accept are often just attempts by one group to get everybody to accept their interests without question. There are new and better answers up for discussion than what is being done now. They just need to be discussed, tried out and fine-tuned. But yes, to develop better decision modes based on the merit of discourse contributions, we need better measures of that merit, don’t we? Do we have the tools for that?

Measures of contribution merit and the role of Artificial Intelligence Tools

– I remember some discussions we had here, about how discourse contributions might be more carefully evaluated to arrive at a kind of measure of plausibility and expected quality of plan proposals. Has there been more progress on that? I think there are people out there selling ‘AI’ — ‘Artificial Intelligence’ — for that?
– Good question. And a serious controversy, if you ask me.
– Controversy? Why is that?
– Well, Commissioner: I know you are one of the people wanting to enlist the power of new tools in artificial intelligence and information technology to support your governmental decision procedures. And you’re worried about people’s suspicions about that — the fear of machines taking over, not knowing how they work, and what they really are trying to do.
– Yes, I’m aware of that.
– Remember the comment just a while ago, about all this assessment on the merit of discourse contributions being ‘subjective’?
– Yes… We didn’t follow up on that, perhaps we should have done that right there.
– I understand. To your credit, I’d say, you’re worried about being accountable for your decisions, and would like to be able to point out that your judgments are based on objective facts and data. Not just your or others’ ‘subjective’ opinions, isn’t that right? Though your election campaign was stressing that you are a person of sound common sense and moral convictions, whose judgment could be trusted — even on decisions that stray into the area of intuitive subjective judgments?
– What are you trying to say?
– Nothing personal, Commissioner, sorry if you get any such impression. Just that there is another issue of balance involved here, that has a bearing on how we think about policy and plan proposals — and how AI tools can support us in those decisions.
– Explain: what’s the problem?
– Okay. Let’s put it this way: The folks who are trying to sell you their data, their data analysis tools, their AI programs, are banking on your sincere concern about basing your decisions on factually correct, complete, objectively ‘true’ information. And that is what expert systems, as they were called a generation ago, and AI tools as they are promoted now, are offering: data, with programs based on scientific analysis and logic, claimed to be reliably and objectively true.
– Yes: is there something wrong with that? I’d be irresponsible if I didn’t try to have data, a sound factual basis of truth for our decisions, wouldn’t I?
– Of course, Commissioner. But — as we also discussed: that’s only part of the judgment task. The planning arguments rest on premises — the ‘ought’-premises — that are not evaluated in terms of true or false, or even ‘probability’ (the basis of the ‘risk’ assessment they also would want you to do), before deciding. The ‘ought’ claims about what we should try to achieve are not objectively ‘true or false’ — no matter how much factual or probability evidence the algorithms are offering you about the likelihood of your proposals succeeding, and of the desired or undesired consequences occurring. Whether we ought to pursue those goals, or avoid the possible side-effects, isn’t just a matter of objective measurement facts, but of subjective, personal, intuitive judgments. So even if we could trust the big computers to give you all the necessary evidence and factual data for the factual and instrumental premises of planning arguments — and I’m not sure we should uncritically trust them to do even that — those judgments about what we ought to do are what we are worried about: what we will have to trust you with.
– So?
– So any measure of plausibility support of plans, policy proposals, common actions we can develop must be based on the individual judgments of those participating in the discussion and decision process. AI can perhaps help sorting things out, checking consistency and logic of supporting evidence, the specific sources of disagreements, relationships between claims in the discourse, keeping track. But any decision-guiding measures, as well as the decisions themselves, must be made by people, individuals.
– But there are proposed tools out there that claim to have ‘objective’ measures e.g. of depth and breadth of policy statements, aren’t there?
– Yes, of course they all have to claim to be trustworthy and objective. But as long as they are just based on simple counts of topics mentioned and claims of relationships between them, they are not really evaluating the true merit of the support. So an elaborate package of false, implausible, inconsistent claims might get a high ‘support’ score, which you’d have to agree is meaningless. The cynical demagogues even rely on making their lies bold enough and repeating them often and loudly enough to get people to believe them…
– I understand that you have better techniques, methods for having participants make and evaluate arguments, and derive overall indicators for proposal support?
– Yes. They need to be tested and discussed, but they are ready for application as soon as the programming (for compilation, keeping track, displaying interim and final results, etc.) can be developed. Financing is what’s missing for that.

Displays: concise overview

– You mention ‘display’ — what’s the issue with that?
– Good question. See: planning decisions do not rest on single ‘clinching’ arguments like the deductive syllogisms you study in the logic books: if an argument has a valid ‘deductive’ structure, and you believe all its premises, you must accept the conclusion. That doesn’t apply to planning: planning decisions depend on many ‘pro’ and ‘con’ arguments; their structure is not deductively valid, and the premises — as we have seen — are not properly labeled ‘true’ or ‘false’ but more or less plausible and important. So any ‘due consideration’ of all those pros and cons — and the relationships between them — really must take that entire network of reasoning into account. That should be assisted by visual displays: ‘diagrams’ or ‘maps’ of the evolving discourse, showing ‘the whole system’ rather than just one aspect or the last word in a debate. We can make such maps — it takes a bit more effort than just listing the comments as they are posted. And it would be nice if the AI support could help construct those maps to accompany the evolving discussion, and if it could show precisely where people have disagreements or misunderstandings that could be alleviated with more explanation, better evidence, or improving the plan.
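
(An aside: one common convention for such maps, assumed here purely as an example, is an IBIS-style structure of issues, proposals, and pro/con arguments. A minimal sketch, with illustrative names rather than any platform’s actual schema, might look like this.)

    # Minimal sketch of a discourse-map data structure, assuming an IBIS-style
    # convention (issues, proposals, pro/con arguments); names are illustrative.
    from dataclasses import dataclass, field
    from typing import List, Optional

    @dataclass
    class Node:
        node_id: str
        kind: str                          # 'issue', 'proposal', or 'argument'
        text: str
        responds_to: Optional[str] = None  # id of the node this one responds to
        stance: int = 0                    # +1 pro, -1 con (for arguments)
        children: List['Node'] = field(default_factory=list)

    def add_reply(parent: Node, reply: Node) -> None:
        reply.responds_to = parent.node_id
        parent.children.append(reply)

    # Example: one issue, one proposal, one pro and one con argument
    issue = Node('I1', 'issue', 'Should plan X be adopted?')
    proposal = Node('P1', 'proposal', 'Adopt plan X.')
    add_reply(issue, proposal)
    add_reply(proposal, Node('A1', 'argument', 'X will reduce flooding.', stance=+1))
    add_reply(proposal, Node('A2', 'argument', 'X is too costly.', stance=-1))
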
– Sounds good. If you can get people to understand the tools and agree on using them to make decisions.
– Yes, I was going to ask about that aspect: how would those support indicators be used to get an agreement, a decision? What did you call it in your list: decision modes?

Tools and rules for the use of decision guides based on merit

– You are right: just taking a vote doesn’t work anymore: In fact, just voting really could be disregarding all the evidence and pros and cons. Party discipline: any proposal of the other party gets voted down, regardless of its merit, — that’s the reality today. And for decisions that affect many people across existing governance borders, the issues of who’s entitled to vote, and what constitutes a majority etc. are going to be critical. So we must develop decision modes that give those measures an appropriate role in the decision process. And learn to use them properly.
– Something like what you mentioned earlier: Even if a decision-making body has to decide by voting (according to its constitution) — if the discourse has produced an overall positive plausibility score in favor of the plan, should that body be allowed to vote it down?

Power and Accountability for Decisions

– Right. That has to do with accountability.
– Huh?
– Yes, Commissioner. Look — so far, it may look as if we’re saying that all important decisions should be made by a kind of ‘referendum’, preceded by a more thorough deliberation and haggling process resulting in a measure that will determine the decision in some form. That’s unrealistic, of course. For one thing: in any kind of organization, community, or society, there will always be situations calling for decisions that can’t wait for lengthy deliberations. Or that just have to be made quickly according to prior agreements or rules, and judgments about whether or which rules apply to the given situation — by people ’empowered’ to make those decisions, who have the training and experience to make those judgments.
– Oh. Of course: ‘Leaders’. Tell you the truth, I never really understood why there’s such a cry for better leadership, all the leadership improvement programs and seminars — even by folks who’ve just been to seminars touting ‘self-organizing’ teams and societies, or out demonstrating for more ‘power to the people’.
– Right. The balance problem, again. We need both. The problem is that leaders often become obsessed more with the power to make such decisions than with the quality of the results for the community. The old power problem. The observation that power becomes addictive and a goal in itself, to the point of insanity. The Romans knew that; their crazy emperor Caligula drove that point home. Sure, in the governance systems for most nations there are now provisions to control that power, make the power holders ‘accountable’ for what they are doing, to contain the addictiveness of power.
– Yes: you mentioned elections for limited time periods, the balance of powers between the different branches of governments — is that what you have in mind? Are you saying that’s not enough? We just want those things to be applied properly!
– Calm down, Commissioner. Didn’t we agree that it looks like all too often, they don’t work too well anymore? At least all the folks calling for ‘tossing out the system’, ‘revolution’, ‘New Systems’ seem to think so.
– ‘Regime change’?
– Hush, Sophie — that’s a troublesome concept and a different power gang. But either way: what they are offering instead doesn’t really solve the power problem: just getting different folks into power doesn’t get rid of its addictiveness. And pretty soon we see the same patterns of power abuse again, with even worse consequences if the ‘new system’ does not have better power control provisions. If they relied on the assumption that those valiant freedom fighters and revolutionaries just can’t become as corrupt as the folks they kicked out…

Paying for power decisions?

– So what’s the suggestion for dealing with that power issue, and what does it have to do with the discourse platform?
– Important questions. Hold on: this needs a bit of background. The first thing is to acknowledge that the desire for power is a common human trait, and as such, not illegitimate. At the low end, we call it ‘empowerment‘ for the disempowered; at the higher end we justify it by calling for ‘leaders‘. Isn’t that a little like other ‘human needs’ — like the need for food, water, shelter, security — which even the most ‘disempowered’ folks have to ‘pay’ for in one way or another? So why not have the powerful ‘pay’ for the power decisions they’ll make — instead of the people paying them to lord it over us?
– Good grief. Who’d they pay — with money somebody already gave them to get into their power positions, to help those donors to get even more money?
– Calm down, Sophie. Yes, you put your finger on the sore spot all right: money. The problem that the governance power control systems today have been overrun by the power of money from the industrial and financial systems — where power control is obviously not working as well, and not according to the governance controls.
– Until the money runs out…
– Right: So we agree the ‘currency’ for ‘paying for power decisions’ can’t be money. But what would be an ‘account’ that secures ‘accountability’? Here’s the idea: If the discourse process produces measures of the merit of people’s contributions to the discourse, couldn’t we add up those merit measures — as assessed for plausibility and significance by the entire community — into contributors’ ‘merit’ accounts?
– You are talking about a kind of quantified reputation system? That people have to ‘earn’?
– You might call it that. If people contribute good ideas, well-thought-out, plausible and well-supported arguments that indicate good judgment, they will ‘earn’ a reputation account that might be part of the assessment of their qualification for positions where they have to use that judgment for important decisions. People whose contributions are assessed as unsupported or untruthful will not accumulate enough merit points.
– I see: Now decision-makers will have to ‘invest’ their merit points with each decision that has to be ‘paid for’.
– Yes. I like the idea of ‘investing’ their merit points into plans and projects — points that would be earned back in proportion to how close the outcome comes to the promised results, or lost if it doesn’t turn out that well… And that depletes a decision-maker’s power over time. An automatic regulator of power? Well, at least an additional tool to combat the addictive feature and temptation to abuse power. And a challenge to revive and engage in that discussion and develop better tools.
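
(An aside, sketching the ‘investing’ idea just mentioned under assumptions of my own: the decision-maker stakes some points on a decision and is credited back a share proportional to how close the outcome comes to what was promised, both measured on some agreed scale. The linear rule and the tolerance parameter are illustrative, not a worked-out proposal.)

    # Sketch of settling a merit-point 'investment' after the outcome is known.
    # The linear closeness rule and the tolerance are assumptions for illustration.
    def settle_investment(points_staked: float, promised: float, actual: float,
                          tolerance: float = 1.0) -> float:
        closeness = max(0.0, 1.0 - abs(promised - actual) / tolerance)
        return points_staked * closeness   # points credited back; the rest stays 'spent'

    print(settle_investment(50, promised=0.8, actual=0.6))   # 40.0
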
– Couldn’t we — regular citizens — transfer some of our own merit points to leaders, to endorse their ability to make needed big policy decisions we’d like to see?
– Great idea, Sophie! Yes: some decisions are so important that no single person will have accumulated enough merit points to pay for them. So this feature of merit points that have to be earned by demonstrating sound and valuable judgments could be used to ’empower’ leaders to make those decisions on our behalf but also to control them. We — citizens — could specify what kinds of decisions or policies we endorse. And most importantly: if we see that officials are not doing what they promised — we could withdraw our supporting merit points before they have been wasted on decisions contrary to our interests.
– Once the decisions are made though, our points would be gone, used up, too — right?
– Yes: that’s the idea.
– That’s an interesting twist: It makes supporters as ‘accountable’ as the leaders? Sounds like a good thing: would it make them more responsible?
– It’s something to be explored. Too many related issues: will only people who have built up some account of merit points have the right to influence policy this way? Or should there be something like a ‘basic’ amount of points for every citizen, like the election vote, without any such conditions? And could merit points be ‘earned’ back as a result of the outcome of the decisions for which they were ‘invested’? Yes: let’s explore and discuss it!

Too complex solutions? Response options?

– I don’t know, you visionaries. This is all getting a bit too much for me to swallow in one sitting. Does it all have to be so complicated? Do we have time to experiment with this kind of esoteric scheme to face the emergencies and crises?
– I agree with your concern there, Commissioner. But let me ask you this: Do you have any better ideas for making the current systems work better? Because what you are saying is, in essence: Among the few options we have for dealing with unprecedented challenges, you seem to propose the one that has gotten us into the trouble: doing nothing? Business as usual, just according to the rules?
– That’s not a fair description of what I’m suggesting, Bog-Hubert: I’m saying let’s go back and fix and properly apply the tools we have and know how to use. Find the root cause of the problems we have and make the inherited system work as it was meant to — that you yourself have described as the best we’ve had in history, didn’t you, a while ago?
– Right. What the Commissioner is saying isn’t exactly ‘doing nothing’, is it, Bog-Hubert? What are the other options you said we have besides ‘doing nothing’? Isn’t what he’s suggesting one of those other options?
– Good point, Vodçek. The crude options I had in mind besides ‘doing nothing’ were these: either to join the ‘New System’ proponents — those who are clamoring to ditch the current system, but don’t have a good enough description of a new system, one that remedies the basic flaws we discussed (flaws that marred both the current system and the disastrous alternatives humanity has tried out), to make it appealing enough to justify the costs involved in ‘ditching’ the system. Or to engage in developing, discussing and applying some new ideas. The ones we discussed here aren’t the only ones we can think of — we are simply saying: we don’t know enough to adopt a Big New System yet. So let’s encourage as many different small experiments as we can, try to learn from them, but put in place an addition to the system(s) we have that consists of provisions and processes for agreements to not get in each other’s way and to address the main ’causes’ of trouble with the current system.
– So is that not a ‘New System’?
– No, Sophie: it’s a strategy for getting there, not the vision of the New System. Yes, in a sense, it’s similar to the Commissioner’s ‘fix-it’ version of ‘doing nothing’, but more open to new and creative solutions. Solutions that really aim at realizing, for example, what the parliamentary principle promised but couldn’t deliver with the current decision modes: decisions truly based on the merit of our contributions to the discourse. And doing something about the problem of power.
– But why does it have to become so complicated? Your merit points ideas quickly became such a convoluted tangle of steps and calculations…
– We may be too used to the current system to realize how complex that current system really is. And therefore, vulnerable to manipulation.

– I agree. The innovations don’t have to be introduced all at once, and certainly not by violent wholesale ‘regime change’ or revolutionary overthrow of existing provisions. But looking at the whole network of interrelated features and uses is important for a different reason, I think. See, the Commissioner’s suggestion to ‘fix the root cause’ of a problem is a well-intentioned example of a traditional way of looking at things, one that tries to sidestep the complexity of problems or problem networks that some wise systems people called ‘messes’ and ‘wicked problems’. The idea of a ‘root cause’ is a desperate, even delusional device to simplify a problem situation by imagining a simple problem source, so that a simple solution — one that can be provided by a short teamwork session led by some systems thinking consultant — will be seen as acceptable.
– A delusion? You’re not going to make many friends in the Systems Thinking World, my friend.

The principle of ‘multitasking’: making single provisions serve multiple purposes

– We are aware of that. But see: if we seem to begin to understand that some problems or emergencies are really quite complex, is it not a reasonable suspicion that simple isolated ‘fixes’ — hardware answers such as border walls for immigration issues, more and ‘smarter’ bombs for international conflicts, more and bigger weaponry for police to fight violent crime, to name but a few — aren’t going to do the job: the solutions also will have to exhibit the same degree of complexity?
– You are talking about the old Ashby principle of ‘requisite variety’, right? That a control system of a complex problem must have the same degree of variety of response options as the problem? Lots of complex questions to think about.
– I agree. And the example of the merit point idea is really also an example of a kind of strategy we should consider: the principle of looking for initially single devices that can serve many different purposes in the affected system. Which will begin to look a bit more complex, sure. The merit points idea is an example of such a provision. But if such an improvement to the discourse system could help alleviate the power problem in our societies, would it be worth a bit of complex effort?
– Even more to think about.
– Right, Commissioner: And if one such idea seems to be a little too complicated: the fact that it might work indicates that the problem can be dealt with better than we are doing now, so perhaps it might trigger and encourage efforts to conceive of and develop better answers?
– That’s a policy issue if I ever saw one. It really should be discussed and thoroughly evaluated.
– I’ll drink to that, Vodçek.
– Last call!

 

–o–

WRONG QUESTION?

(Thorbjørn Mann overhearing a conversation in The Fog Island Tavern)

– What in three twisters’ names is wrong with you today, Bog-Hubert? How many times have I told you that reading those papers from the mainland in the morning will just ruin your attitude for an otherwise glorious foggy day on the island? And furiously stirring your weird habitual dose of crushed red peppers into your coffee with nothing else in it will n o t improve its taste! At least let the grounds and the peppers settle to the bottom where they belong!

– Ah Vodçek, quit your nagging. But you’re right, I should have left reading the paper to the evening when a dose of guilt for imbibing too much of your Zin would balance out the outrage this so-called news creates in any righteously reading man’s mind. Well, too late now: What is wrong with these people?

– All right then, I’ll let you vent yer spleen this time. Let’s get this out of your system before the others come in so you won’t ruin their moods too. What dismal news gets your temper up today?

– Well, it’s not really news. It’s that they can’t come up with some useful new ideas. I was just reading this column, some right-wing think tank guy, pontificating about the stupidity of the leftists who want to overturn the capitalist system of free enterprise and replace it with socialism, big government and taxes for free welfare for everyone.

– Right, sounds like the old battle cries: capitalism versus socialism, less government, more government, old hat slogans, but they still work, getting people worked up, right? So can you really blame the politicians to use them? If they don’t have any better ideas?

– You’re putting the finger on the sore spot: better ideas. Or the lack of any.

– Well, if you are worried about the size and power of government, and the issue hasn’t been resolved yet – which I agree it hasn’t, by the looks of it – what’s wrong with hammering away on the question to get some better answers about it? Or are you finally deciding which side you are going to root for now? Which side you’re on?

– Vodçek, you know me better than that. There are well-intentioned ism-ists on both sides, equally exasperating. Isn’t the problem that they all, right and left, each blinded by the alleged flaws and misdeeds of the other side, are missing the real problems? Capitalism-socialism, big government – small government:  all

w r o n g   q u e s t i o n!?

– I see. I have some ideas about that myself. But please enlighten me: what, in your mind, is the right question?

Misrepresentation, polarization

– Or questions – there is never just one. The first problem – before getting to the better questions – is the way those issues are framed in the first place, with each side deliberately misrepresenting the other. Needless polarization. Nobody on the right really wants a regime with a government so feeble that it can’t effectively deal with any common problems we can’t address as the rugged individualists we are supposed to be; nobody on the left really wants a government so big that it would tyrannically control every aspect of our lives, along the lines of older feudal, monarchist, soviet-style communist, fascist, maoist, militarist, let alone mono-religious and other dictatorial regimes. So the first step would be to quit that misrepresentation and begin to focus on the underlying and shared problems.

– Shared problems? And several of those? Explain.

The control of power

– Just a few, for starters, should be enough. One main feature that truly bugs me is that none of those regime models have yet found a good way to deal with the problem of power.

– You don’t think the provisions of truly democratic western constitutions are adequate?

– Yes and no. – Yes, I think it can’t be said too often that those provisions – based on the separation and balance of powers in the political governance system – are important and impressive achievements of civilization; the best we have seen in history. When they work properly.

– And you are saying they don’t?

– Yes. Compared to some regimes that emulated these trappings but ended up turning autocratic, the mechanics of which remain to be fully understood, I think you can argue they are still valid. But —

– I agree. The assumption of some revolutionary regimes that it would be enough to wrest the power away from the current powers and just proclaim that thereby power has been returned to ‘the people’ is rather childish, if not deliberately deceptive.

– Yes: they ignore that in whatever regime, such positions of power – which every society does need, regardless of its policy-decision-making process: any ship at sea suddenly encountering an iceberg must have a captain who decides whether to pass it on starboard or port; the decision can’t wait for a lengthy deliberative process – become addictive and self-serving. And therefore must be controlled.

– But, you were going to say?

– Yes: Even in so-called democratic systems, the traditional power control provisions in the governance system are losing their effectiveness, because sectors where those controls do not apply – the military, a private enterprise sector that has morphed into huge near-monopolistic transnational corporate monsters, the financial entities, as well as the media conglomerates – have intruded upon and overwhelmed the governance control systems. To the point where legislatures can now without constraint pass laws that ignore and contradict even explicit popular opinion and referendum results. Do I have to give you more detailed examples?

– I get the idea. How did this happen?

– That is one of the questions that need to be asked and explored. But even the governance system based on the parliamentary principle of deliberation before deciding – remember, the old civilized gentlemen’s agreement of “let’s leave our weapons outside, sit down and talk, each side listen to the other, and then decide”, is fatally incomplete and flawed in one crucial respect.

– Ah, I see. Majority voting.

Voting as a decision criterion

– Yes. Majority voting; where votes are determined less by reasoning, communication and deliberation than by propaganda, partisan news reporting of the media, if not outright censorship, and ignorance. When voters are led by government and media misinformation and a lack of adequate news reporting to vote for going to wars in faraway countries they can’t even find on a map, let alone explain what threat those countries pose to our national interests, what is the value or merit of those votes? But that is what we get: decisions based on the count of votes, where there’s no distinction between the votes of well-informed people who have been listening to each other and getting the supporting evidence before making their judgment, and votes by totally uninformed folks who may just have been getting one side of the issues, if any.

– You’re getting yourself on thin ice there, my friend.

– I know, I know. As if all this global warming will even leave us any ice on which to get ourselves in trouble. Yes, the last thing we should aim for is a system that denies any person the right to make their opinion count, however ill- or well-informed. But does that mean we should stop looking for better ways to inform each other, to get people involved in the discourse, to get a better overview of all the opinions on each side, to eventually get decisions better influenced by the merit of all entries into the discussion? Or even refrain from looking for better ways?

– Calm down, Bog-Hubert. I applaud your exasperation. Yes, it’s curious that we don’t hear more about such improvement efforts.

– Yes, Vodçek – all we hear about is the constant battle-cry of ‘getting the vote out’ and the controversies about gerrymandering and other obstacles to people’s voting rights. But isn’t there another aspect to this that deserves attention and suggests looking for alternative measures of merit of discussion entries that should influence decisions?

– What’s that, besides the issue of finding out who can be trusted to make the decisions we don’t have time to even discuss?

Issues transcending traditional governance boundaries

– Well, think about it. The challenges we are facing, increasingly, are not neatly confined to the geographical governance boundaries where we can (if we really want to, which is another question) count voters and determine majorities. Where law-makers like those I heard about in North Carolina can make fools of themselves (and apparently even get voter support) with proposed legislation such as denying state funding for research on global warming and rising sea levels that might result in predictions of more than three or four feet of sea level rise. Which well illustrates your point about who to trust with such decisions…

– Yes, they may have very powerful legislators there? Or governors? That can stop global warming single-handedly?

– Don’t you wish? For one, that gets back to our power issue. But more importantly: when problems like global warming, or ocean pollution, transcend the borders of our arbitrary governance entities and their governing institutions – what are you going to do to get decisions based on citizen participation, with the majority voting system? Whose information – concerns, fears, ideas, votes or judgments — are going to be invited to contribute to the discussion, to the formation of decisions?

– I see. How do you determine who will or should have the voting rights, or even legitimate polling credentials, to calculate majorities? So you are saying that for such issues, there should be different decision criteria?

– Right. For example, criteria such as the plausibility measures we could derive from the systematic assessment of the pro and con arguments people raise in planning discussions, in policy-making discussions. As I said, for example: the abbeboulistic ideas we have discussed in this very tavern should be seen as challenges to all the better funded think tanks to come up with better ideas, if they don’t think these proposals are worth trying out?

– Hmm. I see. So what do you think should be done about all this? That you are so disappointed not seeing in your morning paper?

– Good question. Well, even with my fortifications, your coffee isn’t powerful enough to make me conjure up all the answers by my lil’ ol’ self, I’m sorry. But didn’t we discuss some ideas here in this very Tavern, that should be considered as first step items on a better agenda than trying to figure out how to get candidates elected to ensure party majorities – majorities by party discipline, by Abbé Boulah’s drooping mustache!! — in the bought and paid-for institutions that we have determined to be systemically incapable of dealing with the problems?

Agenda?
Discourse platform

– Agenda? Let me guess. The development of the better discourse for planning and political decision-making would be right on the top of that one, if I know you.

Participation incentives
Assessment of contribution merit

– Right. A platform that includes meaningful incentives for participating, and at least the possibility for a more systematic assessment of the pro and con arguments and their supporting evidence.

Overview display

– Is that complication really necessary? I’d have thought that a good, concise display of all the pertinent aspects and opinions would be the more important addition to current practice to inform participants?

Development of measures of proposal merit to guide decisions

– I agree – it’s an important item – but the key to develop different measures of merit of the information to guide the overall decisions is the combination of the incentive provisions with the assessment results. And …

Control of power

– I see where you are headed with this: These provisions are part of your Abbeboulistic schemes of having people in power p a y for the privilege of making power decisions?

– Right. At least that is one partial idea for getting at the problems. I don’t know if the fact that these provisions are all interrelated tools serving different purposes is a problem or an advantage. But as I said before, isn’t it at least a challenge to come up with better ideas if you don’t like these suggestions? Or to buckle down and engage with this agenda, discuss them, work out the details, do some experiments?

– Bog-Hubert, I don’t know if I should congratulate or feel sorry for you guys who keep harping on this. Well, here’s a shot of Fundador to fortify your coffee. And give you the strength to do what needs to be done, as they say in Minnesota. Okay if I don’t light it first?

– By all means, if you join me. Cheers.

–o–

About public planning discourse contribution credits

A Fog Island Tavern Discussion. Thorbjoern Mann 2019

 

– Bog-Hubert: Got a minute? I want to ask you something…

– Hi Sophie — sure — if it’s not too complex and involving long-term memory this early in the morning…

– Coffee hasn’t taken yet?

– Well,  It’s the third cup, so there’s a chance…. What’s the question?

– Planning discourse contribution credit points. Remember the other day, there was a lively discussion here about the credit points in that planning discourse platform you guys — I mean Abbé Boulah and his buddy up in town — have been cooking up, and I’ve been telling my email friend over in Europe about it. He is politely asking to get to know more about it, but I have a feeling he’s, well, not outright against it, but very skeptical about the idea.

– Well, I wouldn’t blame him; it is a bit involved, if you’d try to apply all parts of what it might be used for. Any feelings about what specifically is bothering him?

– It could be that I haven’t been able to explain it well enough; the discussion was mainly about the use of those credits outside of the planning discussions themselves: the idea of those credits replacing the role of money in the political process. I got the impression he sees it as elitist. So he may have lost sight of their basic purpose in the planning discourse itself, which as I understand it establishes their value in the first place. So maybe you can help me go through the basic benefits or uses of those points from the very beginning of the process? What they are for, again?

– Well, I’m glad you’re asking, because since that discussion, I’ve gotten kind of confused about it myself: it can get kind of complicated, and there are some questions that aren’t quite worked out yet, as far as I can see. So it gives me the chance to clarify things in my mind as well, I hope. As that old German professor said that Abbé Boulah keeps invoking: If you are not quite certain about something, it helps to give a lecture about it. Or better, a discussion, so you can interrupt and ask questions any time it’s getting derailed.

– Okay.

– Hmm. Let’s see. So we assume somebody has raised an issue, started a discussion about some problem or plan that needs attention, asking that something be done about it, because it’s an issue that affects, bothers, hurts many people in different ways. A community problem, perhaps even in a community that isn’t clearly defined — in that people in several different governance entities — states, countries — are affected, and it’s not at all clear what can be done about it and who is supposed to do it. And we optimistically assume that there is something like that discourse platform where all that can be discussed, eh?

– If you say so.

Participation: getting all pertinent information

– Well, there’s got to be some talk about it; whether it’s properly public and organized is a different question. Now the first thing that needs to happen is to get the information about that issue: What’s going on, who is getting affected, and how? And who has the knowledge — call it expertise or just common sense — about what could be done about it, and then how that in turn would affect people? The common cry and demand is for public ‘participation’. The principle is that all the concerns of all the affected and interested parties must be brought into the discussion so that they can be given ‘due consideration’ in making plans and decisions, right?

– Right. Shouldn’t that be a matter of course, by now?

Credit points as incentives to contribute

– Sure. Well, governments don’t seem to like it much. But it’s also a bit of a problem for the public. You can make all kinds of provisions for citizen participation, but the fact is that it requires time and effort on the part of those citizens, time and effort that the ordinary citizen can’t always afford and doesn’t get paid for, right? And some folks may have their suspicions about whether and how their concerns will actually make much if any difference in the decision. So shouldn’t there be some incentives — for everybody who has something meaningful to contribute to the issue, to contribute that information?

– Hmm. I see the problem: you can’t do it with money, it isn’t in anybody’s budget yet. So you are saying that it should be done with those credit points?

– Yes. At the very least, there should be some form of public recognition, appreciation for bringing in pertinent information. So you’d reward any such contribution with a ‘civic credit’ unit. And that credit should actually be ‘worth’ something, not just an empty and useless gesture. We’ll get to that later.

– Oh boy. That looks like more of a problem than a solution to me — now everybody comes in with all kinds of silly information, all the same ‘concerns’ or demands — what a mess.

Avoiding duplication, repetition

– You are quite right. That’s why full credit should be given only to the first entry of the same essential content — not to repetitions of the same point. This provision, incidentally, serves another purpose besides keeping the mass of incoming information manageable: it encourages people to enter the information fast, not to wait and let the discussion go around in circles missing a vital item of information.

– Okay, that makes sense. But is it a problem that some people who have made an effort to enter information but don’t get it in fast enough, will be annoyed at having their effort ignored?

Getting the information ‘fast’

– Good point. This makes it supremely important to have a good public display of information entered, updated as fast as possible — ideally, of course, in ‘real time’, so that a new item is instantly seen the moment it’s posted. The technology for doing that is available today, but if you want to allow information to get entered by different means — letters, phone calls, email, etc. — there will necessarily be some delay in getting it posted. So if there’s such a delay, you may want to have people who enter the same point before it gets posted for everybody to see, share that reward.
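
(An aside: the sharing rule could be as simple as splitting one full credit equally among all entries judged to carry the same essential content and received before the first one was displayed. Both that rule and what counts as ‘the same essential content’ are assumptions here that would need to be agreed upon.)

    # Sketch: near-simultaneous entries with the same essential content share
    # one contribution credit equally (the rule itself is an assumption).
    from typing import Dict, List

    def share_first_entry_credit(authors: List[str], full_credit: float = 1.0) -> Dict[str, float]:
        share = full_credit / len(authors)
        return {author: share for author in authors}

    print(share_first_entry_credit(['A', 'B']))   # {'A': 0.5, 'B': 0.5}
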

– Uh. Okay, but…

Decisions based on the merit of contributions

– Ah, I think I see what bothers you. It looks like something like restricting people’s ‘voting rights’ or ‘rights to free speech’, doesn’t it? Yes. That impression would be bad, and not necessarily justified. There must be a clear distinction between entering an information item and making it ‘count’ in the decision. As long as decisions are made on the basis of vote counts, that impression is understandable, sure. But what we are after is to reach decisions based on the merit of the information, not just on the number of votes — votes that may be uninformed, ill-informed, or deliberately ignoring the concerns of many other parties — the losing minority of the voting process. And to determine the merit of a piece of information or argument pro or con a plan, it only needs to be stated once. But now we have to deal with the issue of the method by which the merit of a piece of information can be determined.

– Yes. Just another question, before we get into that: what about intentionally ‘bad’ information? Wrong or unintentionally ill-informed claims, as you said — even deliberately false, misleading, confusing information? Trolls? Obscene language? Shouldn’t there be some boundaries on that in public discourse? — You agree, Vodçek?

– Well, Sophie, my responsibility as the keeper of this establishment is a little different than what’s going on in a public discourse. See, I am very much in favor of freedom of speech, in principle. But I also have a great interest in keeping this tavern somewhat civilized. So here, I feel entitled to ask people who start that kind of thing to please shut up or leave, before it degenerates into physical brawls. And I’m using my personal judgment on drawing the boundary, such as it may be, and I’m okay with some people thinking it’s not strict enough, and others steamed up about my stuffy old-timer attitude. Now in a public discussion, I’m not sure my standards should be imposed on everybody. So who’s entitled to set those standards? In fact, shouldn’t everybody have the right to publicly make fools or buffoons out of themselves?

– Huh. Never looked at it that way. Some people seem to get a kick out of doing that. A right to offend?

‘Empty’ acknowledgement points and later judgments

– I think Vodçek has a point there, Sophie. Whether you want to call it a right or not. But the platform process is actually dealing with that in a different way. It doesn’t try to finagle the distinction between acceptable and unacceptable entries — by language or content. So it accepts all entries as they come, stores them and makes them accessible in what it calls the ‘Verbatim’ file. And it acknowledges an entry as a contribution with a ‘point’ that is not much more than that: an acknowledgement: You participated. But then it gives all the other participants the opportunity to respond to it, either by making another verbal entry, or by assigning the original — offending — entry a judgment score. One on a scale of, say, minus 3 (for totally offensive, unacceptable, useless) to plus 3 for totally valuable and proper content. So if there is an actual ‘account’ for those contributions, that contribution point will add to — or diminish — a person’s public credit account, as judged by the public, by all participants in that discourse.

– Well, how will that be done? Are you saying that everybody gives every such contribution a judgment score?

– That would be one way, or a first step towards such a credit point account. Sure, it’s a little more involved than the current practice of ‘liking’ or adding another kind of emoticon to a person’s comment. But some such evaluation, actually a somewhat more specific evaluation, will be necessary further on in the process if the participants are going to be serious about getting a decision based on the merit of all contributions. Then the adjustment of a person’s initial entry credit will just be the result of that systematic deliberation.

– Sounds good, but you’ll have to give me some more detail about that, Bog-Hubert.

Plausibility and importance judgments of argument premises

– Patience, Sophie. First let me ask you: were you here to listen to Abbé Boulah about his buddy’s method for evaluating planning arguments?

– You mean that story about looking at all the premises of those arguments and giving them plausibility scores?

– Right. Plausibility scores, on a scale of -1 (for ‘totally implausible’) to +1 (for ‘totally plausible, virtually certain’), with a midpoint of zero (for ‘don’t know, can’t judge without more evidence…’). But also weights of relative importance for all the ‘ought’ premises in the entire set of arguments pro and con a plan proposal. Those would be on a scale of zero (for totally unimportant) to +1 (for totally important), such that the weights of all the ought-premises in that set of arguments add up to 1.

– Right,  I got that part. I remember there was some math involved in getting from that bunch of scores to an overall measure of plausibility of the plan being discussed — I’m not sure I really understand that.

– We can go over that part separately some other time. Can you have some faith, for now, that some such equations can be developed that explain how your overall plan assessment should depend on those scores for the argument premises?
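
(An aside, for readers who want at least one concrete picture: a minimal sketch of what such aggregation functions could look like. The ‘weakest link’ rule for argument plausibility and the weighted sum for the plan are assumptions made here for illustration, not necessarily the equations meant in the discussion.)

    # Minimal sketch of one possible aggregation, under assumed conventions.
    from dataclasses import dataclass
    from typing import List

    @dataclass
    class Argument:
        direction: int                       # +1 for a 'pro' argument, -1 for a 'con'
        premise_plausibilities: List[float]  # each judged on the -1..+1 scale
        ought_weight: float                  # weight of its 'ought' premise, 0..1

    def argument_plausibility(arg: Argument) -> float:
        # 'Weakest link' convention: an argument is no more plausible than its
        # least plausible premise; a 'con' argument counts against the plan.
        return arg.direction * min(arg.premise_plausibilities)

    def plan_plausibility(arguments: List[Argument]) -> float:
        # The 'ought' weights are assumed to sum to 1 over the whole argument set,
        # so the overall score stays on the same -1..+1 scale.
        assert abs(sum(a.ought_weight for a in arguments) - 1.0) < 1e-9
        return sum(a.ought_weight * argument_plausibility(a) for a in arguments)

    args = [
        Argument(+1, [0.8, 0.6, 0.9], 0.5),   # pro
        Argument(+1, [0.4, 0.7, 0.5], 0.2),   # pro
        Argument(-1, [0.6, 0.9, 0.8], 0.3),   # con
    ]
    print(plan_plausibility(args))   # 0.5*0.6 + 0.2*0.4 - 0.3*0.6, roughly 0.2
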

– Well… I’m not sure I’m ready for buying that cat in a bag, but let’s hear the rest of the story about the credit points.

– Okay. Remember, a person’s comments to the planning discourse, that is, to a discussion about whether a proposed plan should be adopted for implementation, can be roughly distinguished as three kinds of claims, or premises, of arguments: one claims that the plan A (or some plan detail) will lead to some outcome, result, or consequence B, given some conditions C. Another claim is about whether B ought to be aimed for or not; and the third is about whether the conditions C under which A will produce B are actually present or will be present when the plan is implemented.

Premise plausibility and importance scores are merit judgments

– Yes I remember now. And the plausibility and importance scores actually are judgments about the merit of those claims — is that what you are saying?

– Bravo! That is precisely it: those assessment scores are another way of saying how much someone’s comment providing those claims is worth in that planning discussion. So if we now get some overall statistic of the whole group’s assessment scores for those contribution items, we are getting a measure of each item’s merit or weight towards the group’s plan decision, which is also a measure of that entry’s merit in the entire discourse. And the original ‘credit point’ acknowledgement can now be adjusted up or down according to those scores. If the claims are scored as plausible, truthful, supported by evidence, and the ought-premise as important, the entry credit will shift from just ‘present’ upwards to a positive value; but if the claims are less credible, untrustworthy, the entry credit will become negative.

– I see, it’s beginning to sound interesting. But won’t everything now depend on the math shenanigans you’ll use to add up or whatever you are doing to the scores?

– True. But as we said, let’s assume for now that this can be made to work. Because then, there are more interesting things that can be done with these credits.

– Well, go on.

Calculation of overall plan plausibility
Versus
Calculation of group’s judgment of premise merit

– Fine. Where were we? Okay. With a little help from our computers, we have calculated the overall judgment scores for the proposed plan, from all the personal scores of discourse participants. And along the way, the computer has stored and used all the judgment scores for the set of argument premises that the participants have made. So we can also calculate the group scores for each premise judgment, don’t you see? And that would be one way to express the value, the merit of that piece of information in the opinion of that group. So now the original contribution credit can be adjusted up or down according to that score.
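
(An aside: under the same illustrative assumptions as before, the group score for a premise could be taken as a simple average of the individual judgments, and the contributor’s acknowledgement point scaled by it. Averaging and scaling are assumptions here; other statistics and adjustment rules are possible.)

    # Sketch: group merit score for a contributed claim, and the resulting
    # adjustment of the contributor's acknowledgement point. Assumed conventions.
    from statistics import mean
    from typing import List

    def group_premise_score(individual_scores: List[float]) -> float:
        return mean(individual_scores)       # stays on the -1..+1 scale

    def adjusted_credit(base_point: float, group_score: float) -> float:
        # A plausible, well-supported claim raises the credit above the bare
        # 'you participated' point; an implausible one drives it negative.
        return base_point * group_score

    scores = [0.8, 0.5, 0.7, -0.2, 0.6]      # five participants' judgments
    print(adjusted_credit(1.0, group_premise_score(scores)))   # roughly 0.48
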

Negative credit scores discouraging poor or unsupported contributions

– Ah, I see. So that will become part of the contributor’s credit point account. And you think that will be some kind of encouragement to make valuable, constructive comments?

– Yes. And it discourages information that is misleading, false, or unsupported by evidence or further plausible arguments: that will reduce your credit account.

– So why should people care about that? Is that account made public?

– Good question. I guess that needs to be discussed, or left to each participant to decide, whether to make it public or not. But the real question is how that account can be really useful to a person, other than giving them the personal satisfaction of having made useful contributions. Can you think of ways that could happen?

Credit points’ influence on overall decision?

– Well, now that you mention it: would it make any difference in how the decision is made for the plan?

– Hmm. I guess it could. It depends on how each decision has to be made, legitimately, in each case. You could argue, for example, that the total public score for a plan should determine the decision. In the sense that if that final score is positive (somewhere between zero and plus one), or above a threshold that must be agreed upon, the plan is approved; if it falls below that, it’s rejected. Then, of course, an individual’s contribution merit will be part of that final score.

– But that’s not very visible, right?  I remember those equations or ‘aggregation functions’, I think you called them, to  somehow add up all the individual judgments  to some overall score and then to a group score?

– Yes, that part needs discussion, sure. For now, let’s say that this overall score is just a recommendation to guide an official’s decision or elected committee’s final, traditional yes/no vote (that may be mandated by law or constitution), then somebody might suggest that the vote of each member of the voting entity could be ‘weighted’ according to that person’s credit score: If you have a high contribution credit score, your vote could be ‘worth’ more than the vote of somebody whose credit account is low or negative because of a lot of bad contributions to public discourse. Okay, okay, don’t hit me. That calls for a lot more discussion, and perhaps agreements in each case. But it offers some interesting possibilities that do need discussion, don’t you agree?
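
(An aside: one possible convention for the ‘weighted vote’ idea just mentioned, purely as an illustration. The floor on negative credit accounts, so that nobody is silenced entirely, is itself an assumption that would need discussion and agreement.)

    # Sketch: each member's yes/no vote weighted by their contribution-credit account.
    from typing import List, Tuple

    def weighted_vote(votes: List[Tuple[int, float]], min_weight: float = 0.1) -> bool:
        # vote is +1 (yes) or -1 (no); credits below min_weight are floored.
        total = sum(vote * max(credit, min_weight) for vote, credit in votes)
        return total > 0     # plan approved if the weighted tally is positive

    print(weighted_vote([(+1, 2.5), (+1, 0.4), (-1, 1.0), (-1, -0.3)]))   # True
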

– Interesting, yes. I’m not sure I see how it would really improve things, perhaps I need to look at some actual examples. By the way, isn’t this why my friend was worrying about this scheme being, wait, what did he call it: some kind of meritocracy? Is that bad?

– Well, it’s a valid concern if, in a society, power and income go mainly to people who have been declared as ‘meriting’ them, at the expense of all the other folks who never even had a chance to build up merit for anything, where ‘merit’ is mainly measured in financial terms. The kind of measure we are talking about here is a very different thing, isn’t it? Actually moving away from the power of money, wouldn’t you say? But I’d say there is an issue of balancing the concern for getting decisions based on the merit — value, truth, plausibility, appeal, quality — of information brought into the discussion about plans, and making sure that the concerns of people who don’t have the time or opportunity to earn those credits aren’t being neglected.

– Yes, Bog-Hubert: balancing those concerns sounds like the key. I agree that this open planning discourse platform seems to aim at making that easier. The concerns may be articulated and entered into the record to be considered by representatives of people who can’t do that for themselves for one reason or another — children, people too worn out by their work to get as well-informed as the democratic ideals want us to assume. But maybe there should be more robust safeguards to ensure that the meritocracy aspect doesn’t get out of line?

– I agree. As I said, there are many aspects that need more detail work and discussion, or better ideas…

Credit account use ‘outside’ of project discourse

– Now, those issues were all potential uses ‘within’ the discourse about a specific plan. Perhaps looking at how your contribution credits might be used ‘outside’ the particular project, in general, might be more useful?

– What do you mean?

– Well, think about it. Your financial credit score makes a difference in your ability to get a mortgage or a business loan. Could your public discourse contribution credit score make a difference in landing a job, say, or a public office? As part of your qualification for such offices? Sort of indicating how much you can be trusted to use sound judgment in decisions that can’t wait for a lengthy public discussion? A more quantitative indicator of your reputation?

– I see, yes. People are looking at a candidate’s voting record in previous positions, already — but that can be misleading, not very clear information. So yes, such a contribution credit record may be useful.

– Hey, I’m not so sure about that. One way of looking at it is how single incidents can destroy years of apparently trustworthy behavior and judgments: Like your famous Zinfandel: One glass of Zin poured into a sinkful of dishwater doesn’t do much to the essence of dishwater — but what do you get if you pour one glass of dishwater  into a bottle of Zin, eh? One big case of defrauding Medicare by millions of dollars should make you ineligible for any job in the government’s health departments, shouldn’t it?

– You’d think.

– Well. Those details must be ironed out, but I’d say that’s one way your credit account can become ‘fungible’ — worth something, in everyday social life. Can you think of other ways?

– I think I need more coffee to cope with all these unusual issues. Vodçek, can you help with that? Too early for Zin, anyway, after that story of yours.

– Sure, Sophie, here you go. I remember hearing Abbé Boulah mention some real unusual ways to apply that credit account — something about making officials pay for their privilege to make decisions?

      Contribution credits versus money and power in public governance:  

PAYING for power decisions? 

– Right, I heard that too. I thought he and his buddy were really going out into utopia-land with those ideas. But it’s sticking in my mind: if you really are looking for ways to get some better control of the role of money, and of the temptations for less-than-beneficial use of its power in public governance, do you see many promising innovative ideas out there? Better ideas than the traditional, venerable ‘balance of power’, term limits, re-election, and in the extreme, impeachment tools? Tools that are all losing more and more of their effectiveness to the power of money from the private sector, which is undermining those provisions because they don’t cover the relations between private-sector money and public controls well enough?

– Well, how could ‘paying for decisions’ make a dent in that? You are making me curious.

– Okay. See, it’s really based on a different understanding of power. It’s not only the ability to make important decisions about your own life — where we call it ‘empowerment’, as a good thing, right? — but also about projects that involve others, that are too big for just one person to decide for themselves. So what if we look at the desire for power as a human kind of need, just like the need for food and shelter? Getting those is considered close to a human right — but we make folks pay for them. So why should we treat power differently from those needs? But we also need to find better ways of preventing power from becoming addictive and abusive. Now, can we say that the general social concern regarding needs is to first make sure that people will be able to earn the means to satisfy those needs, that is, to p a y for them, and then actually make them pay? So why not apply that pattern to the power issue?

– But, uh. But…

– But but. Yes, I hear you: Pay? But with money, no: that would just dig our hole deeper? Well,  perhaps we can use a different currency?  Now that we have one: what about paying for public power decisions with your discourse contribution credit points? The more important a decision, the more points you need as an office holder, to make it. And you use up your credits with each decision. There might be a way to treat it like a kind of investment by providing a way to earn credits back with the public appreciation of successful, beneficial decisions, which means that if it was a poor decision, you lost your credit investment and some of your power to make more decisions. If it’s all gone: time to step down from the power office, hmm?

– What about decisions that require more credit points than a single person can ever come up with?

– Good question! But doesn’t the very question contain the core of the answer? See, if you are a faithful supporter of an office holder, meaning that you’re confident that this person will make good judgments in power decisions, you can transfer some of your own hard-earned judgment credits to that person, to enable them to make those big decisions on your behalf. Not money: judgment credits. Instead of all the election financing ending up in the advertising media coffers…

– Hey, great idea! Perhaps I could even specify the kind of programs and decisions my points should be used for?

– Power to your kind of people! Yes! There might be a way to build that into the system — with the possibility of your taking your credits back if the person makes decisions you don’t approve of, huh?

– Bog-Hubert, Sophie, do I have to point out to you that this devious scheme will make you, the supplier of credit point power, just as a c c o u n t a b l e for the decisions you support? Because you too, would lose your points for poor decisions? With the possibility that you too might lose your credit investment in an incompetent power holder?

– Trying to scare us now, Vodçek? Yes, it makes sense: I guess it will make people more careful with their credit point contributions, besides being able to ‘punish’ politicians who win elections with promises like ‘no more taxes’ but then go ahead and raise taxes anyway once they are in office. If you can withdraw your credit points, that is.

Power and accountability

– Yes: If your glorious leader has squandered all his own and your points on lousy decisions, there may be nothing to get back. Unless we can make those fellows go back to regular status to earn more points to pay back their credit point debts… I guess the point is: all the talk about power and accountability is rather meaningless without an account whose wealth you could lose if you use your power unwisely or irresponsibly.

– This is getting way too futuristic to have a chance to be realized any time soon, Bog-Hubert, don’t you think?

– Well, that’s what they said about those crazy fools who said let’s build flying machines… Abbé Boulah says that the technology for doing these things is already available, so now it’s just about working out the details, getting the system programmed and set up, and getting public support for putting it into practice. But perhaps there are better ways to do those things — public planning discourse, better control of power, — could better ideas be triggered into the open by the very outrageous nature of these proposals?

The discourse platform as a first needed step

– Sounds like what we need as a first step is that planning discourse platform itself, to run the public discussion needed to reach agreement about what to do. And to work out the details?

– Couldn’t have said it better myself, Sophie.

Overview

–o–

‘CONNECTING THE DOTS’ OF SOME GOVERNANCE PROBLEMS

There is much discussion about flaws of ‘democratic’ governance systems, supposedly leading to increasingly threatening crises. Calls for ‘fixing’ these challenges tend to focus on single problems, urging single ‘solutions’. Even recommendations for application of ‘systems thinking’ tools seem to be fixated on the phase of ‘problem understanding’ of the process; while promotions of AI (artificial / augmented intelligence) sound like solutions are likely to be found by improved collection and analysis of data, of information in existing ‘knowledge bases’. Little effort seems devoted to actually ‘connecting the dots’ – linking the different aspects and problems, making key improvements that serve multiple purposes. The following attempt is an example of such an effort to develop comprehensive ‘connecting the dots’ remedies – one that itself arguably would help realize the ambitious dream of democracy, proposed for discussion. A selection (not a comprehensive account) of some often invoked problems, briefly:

“Voter apathy”
The problem of diminishing citizen participation in political discourse and decisions / elections, leading to unequal representation of all citizens’ interests;

“Getting all needed information”
The problem of eliciting and assembling all pertinent ‘documented’ information (‘data’) but also critical ‘distributed’ information especially for ‘wicked problems’, – but:

“Avoiding information overload”
The phenomenon of ‘too much information’, much of which may be repetitive, overly rhetorical, judgmental, misleading (untruthful) or irrelevant;

“Obstacles to citizens’ ability to voice concerns”
The constraints to citizens’ awareness of problems, plans, overview of discourse, ability to voice concerns;

“Understanding the problem”
Social problems are increasingly complex, interconnected, ill-structured, explained in different, often contradicting ways, without ‘true’ (‘correct’) or ‘false’ answers, and thus hard to understand, leading to solution proposals which may result in unexpected consequences that can even make the situation worse;

“Developing better solutions”
The problem of effectively utilizing all available tools for the development of better (innovative) solutions;

“Meaningful discussion”
The problem of conducting meaningful (less ‘partisan’ and vitriolic, more cooperative, constructive) discussion of proposed plans and their pros and cons;

“Better evaluation of proposed plans”
The task of meaningful evaluation of proposed plans;

“Developing decisions based on the merit of discourse contributions”
Current decision methods do not guarantee ‘due consideration’ of all citizens’ concerns but tend to ignore and override the contributions and concerns of as much as half of the population (the voting minority);

“The lack of meaningful measures of merit of discourse contributions”
Lack of convincing measures of the merit of discourse contributions: ideas, information, strength of evidence, weight of arguments and judgments;

“Appointing qualified people to positions of power”
Finding qualified people for positions of power to make decisions that cannot be determined by lengthy public discourse — especially those charged with ensuring

“Adherence to decisions / laws / agreements”
The problem of ‘sanctions’ ensuring adherence to decisions reached or issued by governance agencies: ‘enforcement’ (requiring government ‘force’ greater than that of potential violators, leading to ‘force’ escalation);

“Control of power”
To prevent people in positions of power from falling victim to temptations of abusing their power, better controls of power must be developed.

Some connections and responses:

Problems and remedies network

Details of possible remedies / responses to problems, using information technology, aiming at having specific provisions (‘contribution credits’) work together with new methodological tools (argument and quality evaluation) to serve multiple purposes:

“Voter apathy”

Participation and contribution incentives: for example, offering ‘credit points’ for contributions to the planning discourse, saved in participants’ ‘contribution credit account’ as mere ‘contribution’ or participation markers, (to be evaluated for merit later.)

“Getting all needed information”
A public projects ‘bulletin board’ announcing proposed projects / plans, inviting interested and affected parties to contribute comments and information, not only from knowledge bases of ‘documented’ information (supported by technology) but also ‘distributed’, not yet documented information from parties affected by the problem and proposed plans.

“Avoiding information overload”
Points given only for ‘first’ entries of the same content and relevance to the topic.
(This also contributes to speedy contribution and assembly of information.)
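(A minimal sketch of how such a ‘first entry only’ rule could be enforced; real equivalence-of-content checking would of course need much more than the crude text normalization and hashing used here, which are only stand-ins.)

# Sketch: award a contribution point only for the first entry with a given content.
import hashlib

seen = set()        # fingerprints of content already credited
accounts = {}       # participant -> contribution points

def submit(participant, text):
    # crude stand-in for judging 'same content': normalized-text fingerprint
    fingerprint = hashlib.sha256(" ".join(text.lower().split()).encode()).hexdigest()
    if fingerprint in seen:
        return False                                   # duplicate: no point awarded
    seen.add(fingerprint)
    accounts[participant] = accounts.get(participant, 0) + 1
    return True

submit("A", "Widen the bridge to four lanes")
submit("B", "widen the  bridge to four lanes")         # duplicate, no credit
print(accounts)                                        # {'A': 1}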

“Obstacles to citizens’ ability to voice concerns”
The public planning discourse platform accepts entries in all media, with entries displayed on public, easily accessible, non-partisan media that are regularly (ideally in real time) updated.

“Understanding the problem”
The platform encourages representation of the project’s problem, intent and ‘explanation’ from different perspectives. Systems models contribute visual representation of relationships between the various aspects, causes and consequences, agents, intents and variables, supported by translation not only between different languages but also from discipline ‘jargon’ to natural conversational language.

“Developing better solutions”
Techniques of creative problem analysis and solution development (carried out by ‘special techniques’ teams reporting results to the main platform), as well as information about precedents and scientific and technological knowledge, support the development of solutions for discussion.

“Meaningful discussion”
While all entries are stored for reference in the ‘Verbatim’ repository, the discussion process will be structured according to topics and issues, with contributions condensed to ‘essential content’, separating information claims from judgmental characterization (evaluation to be added separately, below) and rhetoric, for overview display (‘IBIS’ format, issue maps) and facilitating systematic assessment.

“Better evaluation of proposed plans”
Systematic evaluation procedures facilitate assessment of plan plausibility (argument evaluation) and quality (formal evaluation to mutually explain participants’ basis of judgment) or combined plausibility-weighted quality assessment.

“Meaningful measures of merit”
The evaluation procedures produce ‘judgment based’ measures of plan proposal merit that guide individual and collective decision judgments. The assessment results also are used to add merit judgments (veracity, significance, plausibility, quality of proposal) to individuals’ first ‘contribution credit’ points, added to their ‘public credit accounts’.

“Decision based on merit”
For large public (at the extreme, global) planning projects, new decision modes and criteria are developed to replace traditional tools (e.g. majority voting)

“Qualified people to positions of power”
Not all public governance decisions need to or can wait for the result of lengthy discourse, thus, people will have to be appointed (elected) to positions of power to make such decisions. The ‘public contribution credits’ of candidates are used as additional qualification indicators for such positions.

“Control of power”

Better controls of power can be developed using the results of the procedures proposed above: having decision makers ‘pay’ for the privilege of making power decisions, using their contribution credits as the currency for ‘investments’ in their decisions. Good decisions will ‘earn’ future credits based on public assessment of outcomes; poor decisions will reduce the credit accounts of officials, forcing their resignation if the account is depleted. ‘Supporters’ of officials can transfer credits from their own accounts to the official’s account to support the official’s ability to make important decisions requiring credits exceeding their own account. They can also withdraw such contributions if the official’s performance has disappointed the supporter.
This provision may help reduce the detrimental influence of money in governance, and corresponding corruption.
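(A toy model of this ‘paying for power decisions’ bookkeeping; all names, numbers and rules below are illustrative assumptions, not part of any worked-out provision.)

# Toy bookkeeping for the 'pay for power decisions' idea.
class CreditAccount:
    def __init__(self, points):
        self.points = points

    def receive_transfer(self, supporter, amount):
        """A supporter transfers judgment credits to back the official."""
        supporter.points -= amount
        self.points += amount

    def invest_in_decision(self, cost):
        """Spend credits to make a decision; they may be earned back later."""
        if self.points < cost:
            raise ValueError("not enough credit: time to step down?")
        self.points -= cost

    def outcome_review(self, cost, public_score):
        """public_score in -1..+1: +1 earns the full investment back, 0 half, -1 nothing."""
        self.points += cost * (1 + public_score) / 2

official = CreditAccount(30)
supporter = CreditAccount(20)
official.receive_transfer(supporter, 10)   # supporter backs the official with 10 points
official.invest_in_decision(35)            # an important decision costing 35 points
official.outcome_review(35, -0.6)          # the outcome is judged poorly by the public
print(official.points)                     # 12 points left; the supporter's stake suffered too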

“Adherence to decisions / laws / agreements”
One of the duties of public governance is ‘enforcement’ of laws and decisions. The very word indicates the narrow view of tools for this: force, coercion. Since government force must necessarily exceed that of any would-be violator to be effective, this contributes both to the temptation of corruption (abusing power because there is no greater power to prevent it) and to the escalation of enforcement means (weaponry) by enforcers and violators alike. For global conflicts, treaties, and agreements, this becomes a danger of use of weapons of mass destruction if not defused. The possibility of using the ‘credit account’ provisions to develop ‘sanctions’ that do not have to be ‘enforced’ but are triggered automatically by the very attempt of violation might help with this important task.


 

Artificial Intelligence for the Planning Discourse?

The discussion about whether and to what extent Artificial Intelligence technology can meaningfully support the planning process with contributions similar or equivalent to human thinking is largely dominated by controversies about what constitutes thinking. An exploration of the reasoning patterns in the various phases of human planning discourse could produce examples for that discussion, leaving the determination of that definition label ‘thinking’ open for the time being.

One specific example (only one of several different and equally significant aspects of planning):
People propose plans for action, e.g. to solve problems, and then engage in discussion of the ‘pros and cons’ of those plans: arguments. A typical planning argument can be represented as follows:
“Plan A should be adopted for implementation, because
i) Plan A will produce consequences B, given certain conditions C, and
ii) Consequences B ought to be pursued (are desirable); and
iii) Conditions C are present (or will be, at implementation).”

Question 1: could such an argument be produced by automated technological means?
This question is usually followed up by question 2: Would or could the ‘machine’ doing this be able (or should it be allowed) to also make decisions to accept or reject the plan?

Can meaningful answers to these questions be found? (Currently or definitively?)

Beginning with question 1: Formulating such an argument in their minds, humans draw on their memory — or on explanations and information provided during the discourse itself — for items of knowledge that could become premises of arguments:

‘Factual-instrumental’ knowledge of the form “FI((A –> X)|C)”, for example: “A will cause X, given conditions C”;
‘Deontic’ knowledge of the form “D(X)”, or “X ought to be (is desirable)”; and
Factual knowledge of the form “F(C)”, or “Conditions C are given”.
‘Argumentation-pattern knowledge’: recognition that the three knowledge items above can be inserted into an argument pattern of the form
D(A) <– (FI((A –> X)|C) & D(X) & F(C)).

(There are of course many variations of such argument patterns, depending on assertion or negation of the premises, and different kinds of relations between A and X.)

It does not seem to be very difficult to develop a Knowledge Base (collection) of such knowledge items and a search-and-match program that would assemble ‘arguments’ of this pattern.
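(A toy illustration of such a search-and-match assembly, using the three kinds of knowledge items listed above; the representation is deliberately trivial, and the genuinely hard parts, extracting and matching items expressed in ordinary language, are skipped.)

# Sketch of a search-and-match program assembling planning arguments of the pattern
# D(A) <- (FI((A -> X)|C) & D(X) & F(C)), from a toy knowledge base.

knowledge = {
    "FI": [("A", "X", "C")],   # 'A will produce X, given conditions C'
    "D":  ["X"],               # 'X ought to be pursued'
    "F":  ["C"],               # 'conditions C are present'
}

def assemble_arguments(plan):
    """Return pro arguments for the plan that the knowledge base supports."""
    arguments = []
    for (a, x, c) in knowledge["FI"]:
        if a == plan and x in knowledge["D"] and c in knowledge["F"]:
            arguments.append(
                f"{plan} ought to be adopted, because {plan} will produce {x} "
                f"given {c}; {x} ought to be pursued; and {c} is given."
            )
    return arguments

print(assemble_arguments("A"))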

Any difficulties arguably would be more related to the task of recognizing and suitably extracting such items (‘translating’ them into a form recognizable to the program) from the human-recorded and documented sources of knowledge, than to the mechanics of the search-and-match process itself. Interpretation of meaning: is an item expressed in different words equivalent to the terms appearing in the other potential premises of an argument?

Another slight quibble relates to the question whether and to what extent the consequence qualifies as one that ‘ought to be’ (or not) — but this can be dealt with by reformulating the argument as follows:
“If (FI((A –> X)|C) & D(X) & F(C)) then D(A)”.

(It should be accompanied by the warning that this formulation, which ‘looks’ like a valid logical argument pattern, is in fact not really applicable to arguments containing deontic premises, and that a plan’s plausibility does not rest on one single argument but on the weight of all its pros and cons.)

But assuming that these difficulties can be adequately dealt with, the answer to question 1 seems obvious: yes, the machine would be able to construct such arguments. Whether that already qualifies as ‘thinking’ or ‘reasoning’ can be left open; the significant realization is equally obvious: such machine-assembled arguments could be potentially helpful contributions to the discourse. For example, by contributing arguments human participants had not thought of, they could help meet the aim of ensuring — as much as possible — that the plan will not have ‘unexpected’ undesirable side-and-after-effects. (One important part of H. Rittel’s very definition of design and planning.)

The same cannot as easily be said about question 2.

The answer to that question hinges on whether the human ‘thinking’ activities needed to make a decision to accept or reject the proposed plan can be matched by ‘the machine’. The reason is, of course, that not only the plausibility of each argument will have to be ‘evaluated’, judged (by assessing the plausibility of each premise), but also that the arguments must be weighed against one another. (A method for doing that has been described, e.g., in ‘The Fog Island Argument’ and several papers.)

So a ‘search and match’ process as the first part of such a judgment process would have to look for those judgments in the data base, and the difficulty here has to do with where such judgments would come from.

The prevailing answers for factual-instrumental premises as well as for fact-premises — premises i) and iii) — are drawing on ‘documented’ and commonly accepted truth, probability, or validity. Differences of opinion about claims drawn from ‘scientific’ and technical work, if any, are decided by a version of ‘majority voting’ — ‘prevailing knowledge’, accepted by the community of scientists or domain experts, ‘settled’ controversies, derived from sufficiently ‘big data’ (“95% of climate scientists…”) can serve as the basis of such judgments. It is often overlooked that the premises of planning arguments, however securely based on ‘past’ measurements, observations etc, are inherently predictions. So any certainty about their past truth must at least be qualified with a somewhat lesser degree of confidence that they will be equally reliably true in future: will the conditions under which the A –> X relationships are assumed to hold, be equally likely to hold in the future? Including the conditions that may be — intentionally or inadvertently — changed as a result of future human activities pursuing different aims than those of the plan?

The question becomes even more controversial for the deontic (ought-) premises of the planning arguments. Where do the judgments come from by which their plausibility and importance can be determined? Humans can be asked to express their opinions — and prevalent social conventions consider the freedom to not only express such judgments but to have them given ‘due consideration’ in public decision-making (however roundabout and murky the actual mechanisms for realizing this may be) as a human right.

Equally commonly accepted is the principle that machines do not ‘have’ such rights. Thus, any judgment about deontic premises that might be used by a program for evaluating planning arguments would have to be based on information about human judgments that can be found in the data base the program is using. There are areas where this is possible and even plausible. Not only is it prudent to assign a decidedly negative plausibility to deontic claims whose realization contradicts natural laws established by science (and considered still valid…like ‘any being heavier than air can’t fly…’). But there also are human agreements — regulations and laws, and predominant moral codes — that summarily prohibit or mandate certain plans or parts of plans; supported by subsequent arguments to the effect that we all ought not break the law, regardless of our own opinions. This will effectively ‘settle’ some arguments.

And there are various approaches in design and planning that seem to aim at finding — or establishing — enough such mandates or prohibitions that, taken together, would make it possible to ‘mechanically’ determine at least whether a plan is ‘admissible’ or not — e.g. for buildings, whether its developer should get a building permit.

This pattern is supported in theory by modal logic branches that seek to resolve deontic claims on the basis of ‘true/false’ judgments (that must have been made somewhere by some authority) of ‘obligatory’, ‘prohibited’, ‘permissible’ etc. It can be seen to be extended by at least two different ‘movements’ that must be seen as sidestepping the judgment question.

One is the call for society as a whole to adopt (collectively agree upon) moral, ethical codes whose function is equivalent to ‘laws’ — from which the deontic judgment about plans could be derived by mechanically applying the appropriate reasoning steps — invoking ‘Common Good’ mandates supposedly accepted unanimously by everybody. The question whether and how this relates to the principle of granting the ‘right’ of freely holding and happily pursuing one’s own deontic opinions is usually not examined in this context.

Another example is the ‘movement’ of Alexander’s ‘Pattern Language’. Contrary to claims that it is a radically ‘new’ theory, it stands in a long and venerable tradition of many trades and disciplines to establish codes and collections of ‘best practice’ rules of ‘patterns’ — learned by apprentices in years of observing the masters, or compiled in large volumes of proper patterns. The basic idea is that of postulating ‘elements’ (patterns) of the realm of plans, and relationships between these, by means of which plans can be generated. The ‘validity’ or ‘quality’ of the generated plan is then guaranteed by the claim that each of the patterns (rules) are ‘valid’ (‘true’, or having that elusive ‘quality without a name’). This is supported by showing examples of environments judged (by intuition, i.e. needing no further justification) to be exhibiting ‘quality’, by  applications of the patterns. The remaining ‘solution space’ left open by e.g.  the different combinations of patterns, then serves as the basis for claims that the theory offers ‘participation’ by prospective users. However, it hardly needs pointing out that individual ‘different’ judgments — e.g. based on the appropriateness of a given pattern or relationship — are effectively eliminated by such approaches. (This assessment should not be seen as a wholesale criticism of the approach, whose unquestionable merit is to introduce quality considerations into the discourse about built environment that ‘common practice’ has neglected.)

The relevance of discussing these approaches for the two questions above now becomes clear: If a ‘machine’ (which could of course just be a human, untiringly pedantic bureaucrat assiduously checking plans for adherence to rules or patterns) were able to draw upon a sufficiently comprehensive data base of factual-instrumental knowledge and ‘patterns or rules’, it could conceivably be able to generate solutions. And if the deontic judgments have been inherently attached to those rules, it could claim that no further evaluation (i.e. inconvenient intrusion of differing individual judgments) would be necessary.

The development of ‘AI’ tools for automated support of the planning discourse will have to make a choice. It could follow this vision of ‘common good’ and valid truth of solution elements, universally accepted by all members of society. Or it could accept the challenge of a view that it should either refrain from intruding on the task of making judgments, or go to the trouble of obtaining those judgments from human participants in the process before using them in the task of deriving decisions. Depending on which course is followed, I suspect the agenda and tasks of current and further research, development and programming will be very different. This is, in my opinion, a controversial issue of prime significance.

Levels of assessment depth in planning discourse: A three-tier experimental (‘pilot’) version of a planning discourse support system

Thorbjoern Mann, February 2018

Overview

A ‘pilot’ version of a needed full scale Planning Discourse Support System (‘PDSS’)
to be run on current social media platforms such as Facebook

The following are suggestions for an experimental application of a ‘pilot’ version of the structured planning discourse platform that should be developed for planning projects with wide public participation, at scales ranging from local issues to global projects.

Currently available platforms do not yet offer all desirable features of a viable PDSS

The eventual ‘global’ platform will require research, development and integrated programming features that current social media platforms do not yet offer. The ‘pilot’ project aims at producing adequate material to guide further work and attract support and funding, by means of a limited ‘pilot’ version of the eventual platform that can be run on currently available platforms.

Provisions for realization of key aims of planning: wide participation;
decisions based on merit of discourse contribution;
recognition of contribution merit;
presented as optional add-on features
leading to a three-tier presentation of the pilot platform

One of the key aims of the overall project is the development of a planning process leading to decisions based on the assessed merit of participants’ contributions to the discourse. The procedural provisions for realizing that aim are precisely those that are not supported by current platforms, and will have to be implemented as optional add-on processes (‘special techniques’) by smaller teams, outside of the main discourse. Therefore, the proposal is presented as a set of three optional ‘levels’ of depth of analysis and evaluation. Actual projects may choose the appropriate level in consideration of the project’s complexity and importance, of the degree of consensus or controversy emerging during the discourse, and of the team’s familiarity with the entire approach and the techniques involved.

Contents:
1 General provisions
2 Basic structured discourse
3 Structured discourse with argument plausibility assessment
4 Assessment of plausibility-adjusted plan quality
5 Sample ‘procedural’ agreements
6 Possible decision modes based on contribution merit
7 Discourse contribution merit rewards

—-
1 General Provisions

Main (e.g. Facebook) Group Page

Assuming a venue like Facebook, a new ‘group’ page will be opened for the experiment. It will serve as a forum to discuss the approach and platform provisions, and to propose and select ‘projects’ for discussion under the agreed-upon procedures.

Project proposals and selection

Group members can propose ‘projects’ for discussion. To avoid project discussions being overwhelmed by references to previous research and literature, the projects selected for this experiment should be as ‘new’ (‘unprecedented’) and limited in scope as possible. (Regrettably, this will make many urgent and important issues ineligible for selection.)

Separate Project Page for selected projects

For each selected project, a new group page will be opened to ensure sufficient hierarchical organization options within the project. There will be specific designated threads within each group, providing the basic structure of each discourse. A key feature not seen in social media discussions is the ‘Next step’ interruption of the process, in which participants can choose between several options of continuing or ending the process.

Project participants
‘Participants’ in projects will be selected from among the ‘group members’ who have signed up, expressing an interest in participating and agreeing to proceed according to the procedural agreements for the project.

Main Process and ‘Special Techniques’

The basic process of project discourse is the same for all three levels; the argument plausibility assessment and project quality assessment procedures are easily added to the simple sequence of steps of the ‘basic’ versions described in section 2.
In previous drafts of the proposal, these assessment tools have been described as ‘special techniques’ that would require provisions of formatting, programming and calculation. For any pilot version, they would have to be conducted by ‘special teams’ outside of the main discourse process. This also applies to the proposed three-level versions and the two additional ‘levels’ of assessment presented here. Smaller ‘special techniques teams’ will have to be designated to work outside of the main group discussion, (e.g. by email); they will report their results back to the main project group for consideration and discussion.

For the first implementation of the pilot experiment, only two such special techniques are considered: argument plausibility assessment, and the evaluation process for plan proposal ‘quality’ (‘goodness’); they are seen as key components of the effort to link decisions to the merit of discourse contributions.


2 Basic structured discourse

Project selection

Group members post suggestions for projects (‘project candidates’) on the group’s main ‘bulletin board’. If a candidate is selected, the posting member will act as its ‘facilitator’ or co-facilitator. Selection is done by posting an agreed-upon minimum of ‘likes’ for a project candidate. By posting a ‘like’, group members signal their intention to become ‘project participants’ and actively contribute to the discussion.

Project bulletin page, Project description

For selected projects, a new page serving as introduction and ‘bulletin board’ for the project will be opened. It will contain a description of the project (which will be updated as modifications are agreed upon). For the first pilot exercise, the projects should be actual plan or action proposals.

Procedural agreements

On a separate thread, a ‘default’ version of procedural agreements will be posted. They may be modified in response to project conditions and expected level of depth, ideally before the discussion starts. The agreements will specify the selection criteria for issues, and the decision modes for reaching recommendations or decisions on the project proposals. (See section 5 for a default set of agreements).

General discussion thread (unstructured)

A ‘General discussion’ thread will be started for the project, inviting comments from all group members. For this thread, there are no special format expectations other than general ‘netiquette’.

Issue candidates
On a ‘bulletin board’ subthread of the project intro thread, participants can propose ‘issue’ or ‘thread’ candidates, about questions or issues that emerge as needing discussion in the ‘general discussion’ thread. Selection will be based on an agreed-upon number of ‘likes’, ‘dislikes’ or comments about the respective issue in the ‘general discussion’ thread.

Issue threads: For each selected issue, a separate issue thread will be opened. The questions or claims of issue threads should be stated more specifically in the expectation of clear answers or arguments, and comments should meet those expectations.

It may be helpful to distinguish different types of questions, and their expected responses:

– “Explanatory” questions (explanations, descriptions, definitions);
– “Factual” questions (‘factual’ claims, data, arguments);
– “Instrumental” questions (instrumental claims: “how to do…”);
– “Deontic” (‘ought’-) questions (arguments pro / con proposals)

Links and References thread

Comments containing links and references should provide brief explanations about what positions the link addresses or supports; the links should also be posted on a ‘links and references’ thread.

Visual material: diagrams and maps

Comments can be accompanied by diagrams, maps, photos, or other visual material. Comments should briefly explain the gist of the message supported by the picture. (“What is the ‘argument’ of the image?) For complex discussions, overview ‘maps’ of the evolving network of issues should be posted on the project ‘bulletin’ thread.

‘Next Step?’
Anytime participants sense that the discussion has exhausted itself or needs input of other information or analysis, they can make a motion for a ‘Next step?’ interruption, specifying the suggested next step:

– a decision on the main proposal or a part,
– call for more information, analysis;
– call for a ‘special technique’ (with or without postponement of further discussion)
– call for modifying the proposal, or
– changing procedural rules;
– continuing the discussion or
– dropping the issue, ending the discussion without decision.

These will be decided upon according to the procedural rules ‘currently’ in force.

Decision on the plan proposal

The decision about the proposed plan — or partial decisions about features that should be part of the plan — will be posted on the project’s ‘bulletin board’ thread, together with a brief report. Reports about the process experience, problems and successes, etc. will be of special interest for further development of the tool.

3    Structured discourse with argument plausibility assessment

The sequence of steps for the discourse with added argument plausibility assessment is the same as that of the ‘basic’ process described in section 2 above. At each major step, participants can make interim judgments about the plausibility of the proposed plan (for comparison with later, more deliberated judgments). At each of these steps, there also exists the option of responding to a ‘Next step?’ motion with a decision to cut the process short, based on emerging consensus or other insights, such as ‘wrong question’, that suggest dropping the issue. Without these intermediate judgments, the sequence of steps will proceed to construct an overall judgment of proposal plausibility in ‘bottom-up’ fashion from the plausibility judgments of individual argument premises.

Presenting the proposal

The proposal for which the argument assessment is called, is presented and described in as much detail as is available.
(Optional: Before having studied the arguments, participants make first offhand, overall judgments of proposal plausibility Planploo’ on a +1 / -1 scale, for comparison with later judgments. Group statistics, e.g. GPlanploo’, are calculated (mean, range…) and examined for consensus or significant differences.)

Displaying all pro/con arguments

The pro / con arguments that have been raised about the issue, collected in the respective ‘issue’ thread, are displayed and studied, if possible with the assistance of ‘issue maps’ showing the emerging network of interrelated issues. (Optional: Participants assign a second overall offhand plan plausibility judgment: Planploo”, GPlanploo”.)

Preparation of formal argument display and worksheets

For the formal argument plausibility assessment, worksheets are prepared that list
a) the deontic premises of each argument (goals, concerns), and
b) the key premises of all arguments (including those left unstated as ‘taken for granted’)

Assignment of ‘Weights of Relative Importance’ w

Participants assign ‘weights of relative importance’ w to the deontics in list (a), such that 0 ≤ wi ≤ 1, and ∑wi = 1, for all i arguments.

Assignment of premise plausibility judgments prempl to all argument premises

Participants assign plausibility judgments to all argument premises, on a scale of -1 (totally implausible) via 0 –zero – (don’t know) to +1 (totally plausible)

Calculation of Argpl Argument plausibility

For each participant and argument, the ‘argument plausibility’ Argpl is calculated from the premise plausibility judgments, e.g. Argpli = ∏ (premplj) for all j premises of argument i.

Calculation of Argument Weight Argw

From the argument plausibility judgment and the weight of the deontic premise for that argument, the ‘weight’ of the respective argument Argw is calculated, e.g. Argwi = Argpli * wi.

Calculation of Plan plausibility Planpld

The argument weights Argw of all arguments pro and con are aggregated into the deliberated plan plausibility score Planpld for each participant, e.g. Planpld = ∑(Argwi) for all i arguments.

Calculating group statistics of results

Statistics of the plan plausibility judgment scores across the group (mean, median, range, min / max) are calculated and discussed. Areas of emerging consensus are identified, as well as areas of disagreement or lack of adequate information. The interim judgments designated as ‘optional’ above can serve to illustrate the learning process participants go through.
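(The chain of steps above, compressed into a short sketch for one participant; the aggregation functions are the example functions given in the text, not the only possible ones.)

# Sketch of the section-3 plausibility calculation for one participant.
from statistics import mean

def argument_plausibility(premise_pls):
    """Argpl = product of the premise plausibilities (-1..+1 each)."""
    result = 1.0
    for p in premise_pls:
        result *= p
    return result

def plan_plausibility(arguments, weights):
    """Planpld = sum of Argwi = Argpli * wi, with the weights summing to 1."""
    assert abs(sum(weights) - 1.0) < 1e-9
    return sum(argument_plausibility(premises) * w
               for premises, w in zip(arguments, weights))

# one participant's premise judgments for two arguments
arguments = [[0.9, 0.8, 1.0],    # a pro argument
             [-0.7, 0.9, 0.6]]   # a con argument (negative premise plausibility)
weights = [0.6, 0.4]             # weights of the deontic premises
p1 = plan_plausibility(arguments, weights)    # about 0.28

# group statistics over all participants' Planpld values
group = [p1, 0.1, -0.2, 0.5]
print(p1, mean(group), min(group), max(group))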

Argument assessment team develops recommendations for decision or improvement of proposed plan

The argument assessment team reports its findings and analysis, and makes recommendations to the entire group in a ‘Next Step?’ deliberation.

4 Assessment of plausibility-adjusted plan Quality

Assigning quality judgments

Because pro / con arguments usually refer to the deontic concerns (goals, objectives) in qualitative terms, they do not generally generate adequate information about the actual quality or ‘goodness’ that may be achieved by a plan proposal. A more fine-grained assessment is especially important for the comparison of several proposed plan alternatives. It should be obvious that all predictions about the future performance of plans will be subject to the plausibility qualifications examined in section 3 above. So a goodness or quality assessment may be grafted onto the respective steps of the argument plausibility assessment. The following steps describe one version of the resulting process.

Proposal presentation and first offhand quality judgment

(Optional step:) Upon presentation of a proposal, participants can offer a first overall offhand goodness or quality judgment PlanQoo, e.g. on a +3 / -3 scale, for future comparison with deliberated results.

Listing deontic claims (goals, concerns)

From the pro / con arguments compiled in the argument assessment process (section 3) the goals, concerns (deontic premises) are assembled. These represent ‘goodness evaluation aspects’ against which competing plans will be evaluated.

Adding other aspects not mentioned in arguments

Participants may want to add other ‘standard’ as well as situation-specific aspects that may not have been mentioned in the discussion. (There is no guarantee that all concerns that influence participants’ sense of quality of a plan will actually be brought up and made explicit in a discussion).

Determining criteria (measures of performance) for all aspects

For all aspects, ‘measures of performance’ will be determined that allow assessment about how well a plan will have met the goal or concern. These may be ‘objective’ criteria or more subjective distinctions. For some criteria, ‘criterion functions’ can show how a person’s ‘quality’ score depends on the corresponding criterion.
Example: plan proposals will usually be compared and evaluated according to their expected ‘cost’, and usually ‘lower cost’ is considered ‘better’ (all else being equal) than ‘higher cost’. But while participants may agree that ‘zero cost’ would be best, so as to deserve a +3 (‘couldn’t be better’) score, they can differ significantly about what level of cost would be ‘acceptable’, and at what level the score should become negative: participant x would still consider acceptable, or ‘so-so’, a much higher cost than participant o would.
+3 –xo————————————————–
+2 ———–o–x—————————————
+1 —————-o—–x——————————-
+/-0 ——————o———x———————–
-1 ———————–o————x—————–
-2 ——————————o———x————-
-3 ——————————————————- ($∞ would be -3 ‘couldn’t be worse’)
$0 |       |        |        |        |         |        |        |         |  > Cost criterion function.
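(The criterion function sketched above can be written down explicitly; the exponential shape and the ‘acceptable cost’ parameter below are only one plausible choice, and participants x and o would simply use different parameter values.)

# Illustrative 'cost' criterion function: maps a cost figure to a quality score
# between +3 (zero cost) and -3 (approached as cost grows without limit).
import math

def cost_score(cost, acceptable_cost):
    """+3 at zero cost; crosses 0 ('so-so') at acceptable_cost; tends to -3 for huge cost."""
    return 6 * math.exp(-math.log(2) * cost / acceptable_cost) - 3

print(cost_score(0, 100_000))        # +3.0
print(cost_score(100_000, 100_000))  #  0.0   this participant's 'so-so' point
print(cost_score(100_000, 50_000))   # -1.5   a stricter participant already scores it negative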

“Weighting’ of aspects, subaspects etc.

The ‘weight’ assignments of aspects (deontics) should correspond to the weighting of deontic premises in the process of argument assessment. However, if more aspects have been added to the aspect list, the ‘weighting’ established in the argument assessment process must be revised: aspect weights are on a zero to +1 scale, 0 ≤ wi ≤ 1 and ∑wi = +1 for all i aspects. For complex plans, the aspect list may have several ‘levels’ and resemble an ‘aspect tree’. The weighting at each level should follow the same rule of 0 ≤ w ≤ 1 and ∑w = 1. (A small sketch of how such tree weights combine follows below.)
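(In an ‘aspect tree’ whose weights sum to 1 at each level, the effective weight of a ‘leaf’ aspect is the product of the weights along its branch; the tree used here is a made-up example.)

# Effective leaf weights in an aspect tree whose levels each sum to 1.
aspect_tree = {
    "cost":    {"weight": 0.4},
    "benefit": {"weight": 0.6,
                "sub": {"comfort": {"weight": 0.7},
                        "image":   {"weight": 0.3}}},
}

def leaf_weights(tree, inherited=1.0, prefix=""):
    weights = {}
    for name, node in tree.items():
        w = inherited * node["weight"]
        if "sub" in node:
            weights.update(leaf_weights(node["sub"], w, prefix + name + "/"))
        else:
            weights[prefix + name] = w
    return weights

print(leaf_weights(aspect_tree))
# effective weights: cost 0.4, benefit/comfort 0.42, benefit/image 0.18 (they still sum to 1)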

Assigning quality judgment scores

Each participant will assign ‘quality’ or ‘goodness’ judgments, on a +3 to -3 scale (+3 meaning ‘could not possibly be better’, -3 ‘couldn’t possibly be worse’, and zero (0) meaning ‘so-so’, ‘can’t decide’, or ‘not applicable’), to all aspects / subaspects of the evaluation worksheet, for all competing plan proposals.

Combining quality with plausibility score for a ‘weighted plausibility-adjusted quality score’ Argqplw

Each (partial) quality score q will be combined with the respective argument plausibility score Argpl from the process in section 3, resulting in a ‘weighted plausibility-adjusted quality score’ Argqplwi = Argpli * qi * wi .

Aggregating scores into Plan quality score PlanQ

The weighted partial scores can be aggregated into overall plan quality scores, e.g.:
PlanQ = ∑ (Argqplwi) for all i aspects, or
PlanQ = Min (Argqplwi), or
PlanQ = ∏ ((Argqpli + 3)^wi) – 3
(The appropriateness of these functions for a given case must be discussed!)
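(The three example aggregation functions, written out for one participant; which of them, if any, is appropriate is exactly the question flagged above.)

# The three example PlanQ aggregation functions from the text, for one participant.
def planq_sum(argqplw):
    """Weighted sum of the plausibility-adjusted quality scores."""
    return sum(argqplw)

def planq_min(argqplw):
    """'Weakest link': the worst weighted partial score."""
    return min(argqplw)

def planq_geometric(argqpl, w):
    """Weighted geometric mean on the shifted 0..6 scale, shifted back to -3..+3."""
    result = 1.0
    for a, wi in zip(argqpl, w):
        result *= (a + 3) ** wi
    return result - 3

argqpl  = [2.0, -1.0, 1.5]                        # plausibility-adjusted quality per aspect
w       = [0.5, 0.3, 0.2]                         # aspect weights, summing to 1
argqplw = [a * wi for a, wi in zip(argqpl, w)]    # weighted partial scores

print(planq_sum(argqplw), planq_min(argqplw), planq_geometric(argqpl, w))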

Group statistics: GArgqpl and GPlanQ

Like the statistics of the plausibility assessments, statistical analyses of these scores can be calculated. Whether a resulting measure such as Mean(PlanQ) should be accepted as a ‘group judgment’ is questionable, but such measures can become helpful guides for any decisions the group will have to make. Again, calculation of interim results can provide information about the ‘learning process’ of team members and about ‘weaknesses’ of plans that are responsible for specific poor judgment scores, and can guide suggestions for plan improvements.

Team reports results back to main forum

A team report should be prepared for presentation back to the main discussion.

5     Sample procedural agreements

The proposed platform aims at facilitating problem-solving, planning, design, policy-making discussions that are expected to result in some form of decision or recommendation to adopt plans for action. To achieve decisions in groups, it is necessary to have some basic agreements as to how those decisions will be determined. Traditional decision modes such as voting are not appropriate for any large asynchronous online process with wide but unspecified participation (Parties affected by proposed plans may be located across traditional voting eligibility boundaries; who are ‘legitimate’ voters?). The proposed approach aims at examining how decisions might be based on the quality of content contributions to the discourse rather than the mere number of voters or supporters.

Default agreements.

The following are proposed ‘default’ agreements; they should be confirmed (or adapted to circumstances) at the outset of a discourse. Later changes should be avoided as much as possible; ‘motions’ for such changes can be made as part of a ‘Next step’ pause in the discussion; they will be decided upon by an agreed-upon majority of participants having ‘enlisted’ for the project, or by the agreements ‘currently’ in place.

Project groups.

Members of the Planning Discourse FB group (group members) can propose ‘projects’ for discussion on the main group’s ‘Bulletin Board’ thread. Authors of group project proposals are assumed to moderate / facilitate the process for that project. Projects are approved for discussion if an appropriate number __ of group members sign up for ‘participation’ in the project.

Project Participants

Project participants are assumed to have read and agreed to these agreements, and expressed willingness to engage in sustained participation. The moderator may choose to limit the number of project participants, to keep the work manageable.

Discussion

Project discussion can be ‘started’ with a Problem Statement, a Plan Proposal, or a general question or issue. The project will be briefly described in the first thread. Another thread labeled ‘Project (or issue) ___ General comments’ will then be set up, for comment on the topic or issue with questions of explanation clarification, re-phrasing, answers, arguments and suggestions for decisions. Links or references should be accompanied by a brief statement of the answer or argument made or supported by the reference.

Candidate Issues

Participants and moderator can suggest candidate issues: potentially controversial questions about which divergent positions and opinions exist or are expected, that should be clarified or settled before a decision is made. These will be listed in the project introduction thread as Candidate Issues. There, participants can enter ‘Likes’ to indicate whether they consider it necessary to ‘raise’ the issue for a detailed discussion. Likely issue candidates are questions about which members have posted significantly different positions in the ‘General comments’ thread; such that the nature of the eventual plan would significantly change depending on which positions are adopted.

‘Raised’ issues

Issue candidates receiving an agreed-upon amount of support (likes, or opposing comments) are accepted and labeled as ‘Raised’. Each ‘raised’ issue will then become the subject of a separate thread, where participants post comments (answers, arguments, questions) on that issue.
It will be helpful to clearly identify the type of issue or question, so that posts can be clearly stated (and evaluated) as answers or arguments: for example:
– Explanations, definitions, meaning and details of concepts to ‘Explanatory questions’;
– Statements of ‘facts’ (data, answers, relationship claims) to Factual questions;
– Suggestions for (cause-effect or means to ends) relationships, to Instrumental questions;
– Arguments to deontic (ought-) questions or claims such as ‘Plan A should be adopted’, for example:
‘Yes, because A will bring about B given conditions C , B ought to be pursued, and conditions C are present’).

‘Next step?’ motion

At any time after the discussion has produced some entries, participants or moderator can request a ‘Next Step?’ interruption of the discussion, for example when the flow of comments seems to have dried up and a decision or a more systematic treatment of analysis or evaluation is called for. The ‘Next step’ call should specify the type of next step requested. It will be decided by an agreed-upon number of ‘likes’ out of the total number of participants. A ‘failed’ next-step motion will automatically activate the motion of continuing the discussion. Failing that motion, or subsequent lack of new posts, will end discussion of that issue or project.

Decisions

Decisions (to adopt or reject a plan or proposition) are ‘settled’ by an agreed-upon decision criterion (e.g. a vote percentage of the total number of participants). The outcome of decisions on ‘next step?’ motions will be recorded in the Introduction thread as Results, whether they lead to an adoption, modification, or rejection of the proposed measure or not.

Decision modes

As indicated before, traditional decision modes such as voting, with specified decision criteria such as percentages of ‘legitimate’ participants, are going to be inapplicable for large (‘global’) planning projects whose affected parties are not determined by e.g. citizenship or residency in defined geographic governance entities. It is therefore necessary to explore other decision modes using different decision criteria, with the notion of criteria based on the assessed merit of discourse contributions being an obvious choice to replace or complement the ‘democratic’ one-person, one-vote principle, or the principle of decisions made by elected representatives (again, by voting).
Participants are therefore encouraged to explore and adopt alternative decision modes. The assessment procedures in sections 3 and 4 have produced some ‘candidates’ for decision criteria, which cannot at this time be recommended as decisive alternatives to traditional tools, but might serve as guidance results for discussion:
– Group plan plausibility score GPlanpl;
– Group quality assessment score GPlanQ;
– Group plausibility-adjusted quality score GPlanQpl.
The controversial aspect of all these ‘group scores’ is the method for deriving them from the respective individual scores.

These measures also provide the opportunity for measuring the degree of improvement achieved by a proposed plan over the ‘initial’ problem situation the plan is expected to remedy, leading to possible decision rules such as rejecting plans that do not achieve adequate improvement for some participants (people being ‘worse off’ after plan implementation), or selecting plans that achieve the greatest degree of improvement overall. This of course requires that the existing situation be included in the assessment, as the basis for comparison.
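(A sketch of such an ‘improvement over the existing situation’ rule; the scores and the ‘nobody worse off’ criterion are illustrative.)

# Improvement-based decision rule: score the existing situation like any other
# alternative, and accept a plan only if no participant ends up worse off.
def improvements(plan_scores, status_quo_scores):
    """Per-participant improvement of the plan over the existing situation."""
    return [p - s for p, s in zip(plan_scores, status_quo_scores)]

def acceptable(plan_scores, status_quo_scores):
    return all(d >= 0 for d in improvements(plan_scores, status_quo_scores))

plan       = [1.5, 0.5, -0.5]   # participants' PlanQ scores for the proposal
status_quo = [0.0, 1.0, -1.0]   # their scores for the existing situation

print(improvements(plan, status_quo))   # [1.5, -0.5, 0.5]
print(acceptable(plan, status_quo))     # False: the second participant would be worse off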

Special techniques

In the ‘basic’ version of the process, no special analysis, solution development, or evaluation procedures are provided, mainly because the FB platform does not easily accommodate the formatting needed. The goal of preparing decisions or recommendations based on contribution merit or assessed quality of solutions may make it necessary to include such tools – especially more systematic evaluation than just reviewing pro and con arguments. If such techniques are called for in a ‘Next step?’ motion, special technique teams must be formed to carry out the work involved and report the result back to the group, followed by a ‘next step’ consideration. The techniques of systematic argument assessment (see section 3) and evaluation of solution ‘goodness’ or ‘quality’ (section 4) are shown as essential tools to achieve decisions based on the merit of discourse contributions above.
Special techniques teams will have to be designated to work on these tasks ‘outside’ of the main discourse; they should be limited to small size, and will require somewhat more special engagement than the regular project participation.
Other special techniques, to be added from the literature or developed by actual project teams, will be added to the ‘manual’ of tools available for projects. Of particular interest are techniques for problem analysis and solution idea generation, as well as systems modeling and simulation (recognizing the fact that the ‘conditions’ under which the cause-effect assumption of the factual-instrumental premise of planning arguments can be assumed to hold really are the assumed state of the entire system (model) of interrelated variables and context conditions; an aspect that has not been adequately dealt with in the literature nor in the practice of systems consulting to planning projects).

6 Decision modes

For the smaller groups likely to be involved in ‘pilot’ applications of the proposed structured discourse ideas, traditional decision modes such as ‘consensus’, ‘no objection’ to a decision motion, or majority voting may well be acceptable because they are familiar tools. For large-scale planning projects spanning many ordinary ‘jurisdictions’ (which derive the legitimacy of decisions from the number of legitimate ‘residents’), these modes become meaningless. This calls for different decision modes and criteria: an urgent task that has not received sufficient attention. The following summary mentions traditional modes only for comparison, without going into details of their respective merits or demerits, but explores potential decision criteria derived from the assessment processes of argument and proposal plausibility, or of proposal quality, described above.

Voting:
Proposals are accepted when they receive an agreed-upon percentage of approval votes from the body of ‘legitimate’ voters. The approval percentage can range from a simple majority, to a specified plurality or supermajority such as 2/3 or 3/4, to full ‘consensus’ (which means that a lone dissenter has the equivalent of veto power). Variations: voting by designated bodies of representatives, determined by elections or by appointment based on qualifications of training, expertise, etc.
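For comparison only, a small sketch of the familiar threshold rule; the thresholds are those mentioned above, everything else is an invented illustration.

```python
# Hypothetical sketch of the familiar approval-threshold voting rule.
def passes(approve_votes, eligible_voters, threshold):
    """True if the share of approval votes among 'legitimate' voters
    reaches the agreed-upon threshold (0.5 majority, 2/3, 3/4, or
    1.0 for full 'consensus', where one dissenter can block)."""
    return approve_votes / eligible_voters >= threshold

print(passes(52, 100, 0.5))    # simple majority -> True
print(passes(65, 100, 2 / 3))  # supermajority   -> False
print(passes(99, 100, 1.0))    # 'consensus'     -> False (lone dissenter)
```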

Decision based on meeting (minimum) qualification rules and regulations.
Plans for building projects traditionally receive ‘approval’ upon review of whether they meet standard ‘regulations’ specified by law. Regulations describe ‘minimum’ expectations mandated by public safety concerns or zoning conventions but do not address other ‘quality’ concerns. They lead to ‘automatic’ rejection (e.g. of a building permit application) if even one regulation is not met.
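A sketch of this ‘all regulations must be met’ logic; the regulation names are invented for illustration.

```python
# Hypothetical sketch: 'automatic' rejection if even one regulation is unmet.
def permit_decision(checks):
    """checks: mapping of regulation name -> True/False (met / not met).
    Returns 'approved' only if every single regulation is met."""
    failed = [name for name, met in checks.items() if not met]
    return "approved" if not failed else "rejected (failed: " + ", ".join(failed) + ")"

print(permit_decision({"setback": True, "fire egress": True, "height limit": False}))
# -> rejected (failed: height limit)
```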

Decision based on specified performance measures
Decision-making groups can decide to select plans based on assessed or calculated ‘performance’. Thus, real estate developers look for plan versions that promise a high return on investment over a specified ‘planning horizon’. A well-known approach for public projects is the ‘Benefit/Cost’ approach, calculating the benefit minus cost difference (B - C) or the benefit-cost ratio B/C (and variations thereof).
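For example, with invented numbers (real benefit-cost studies would also discount future benefits and costs to present values, a refinement omitted here):

```python
# Hypothetical sketch of the two simple Benefit/Cost criteria mentioned:
# the difference B - C and the ratio B / C.
def benefit_cost(benefit, cost):
    return {"B - C": benefit - cost, "B / C": benefit / cost}

# Plan X: large project; Plan Y: small project.
print(benefit_cost(1200, 1000))  # {'B - C': 200, 'B / C': 1.2}
print(benefit_cost(300, 200))    # {'B - C': 100, 'B / C': 1.5}
# Note: the difference favors X, the ratio favors Y; the choice of
# measure is itself a decision about what 'performance' means.
```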

Plan proposal plausibility
The argument assessment approach described in section 3 results in (individual) measures of proposal plausibility. For the individual, the resulting proposal plausibility could meaningfully serve as a decision guide: a proposal can be accepted if its plausibility exceeds a certain threshold – e.g. the ‘so-so’ value of zero, or the plausibility value of the existing situation or ‘do nothing’ option. For a set of competing proposals: select the one with the highest plausibility.
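A minimal sketch of these two individual decision guides (threshold test and ‘highest plausibility’ selection); the pl-values are invented, and the scale is assumed to be centered on a ‘so-so’ value of zero.

```python
# Hypothetical sketch of the individual decision guides described:
# accept a proposal whose plausibility exceeds a threshold, or pick the
# most plausible of several competing proposals.

def accept(pl_proposal, pl_threshold=0.0):
    """Accept if the proposal's plausibility exceeds the threshold
    (e.g. zero, or the plausibility of the 'do nothing' option)."""
    return pl_proposal > pl_threshold

def select(proposals):
    """From competing proposals {name: pl}, pick the most plausible."""
    return max(proposals, key=proposals.get)

pl_do_nothing = 0.1
proposals = {"A": 0.45, "B": 0.30, "C": -0.2}
print(accept(proposals["A"], pl_do_nothing))  # True
print(select(proposals))                      # 'A'
```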

It is tempting but controversial to use statistical aggregation of these pl-measures as group decision criteria – for example, the mean group plausibility value GPlanpl. For various reasons (e.g. the issue of overriding minority concerns), this should be resisted. A better approach would be to develop a measure of improvement of pl-conditions for all parties compared to the existing condition, with the proviso that plans resulting in ‘negative improvement’ for some parties should be rejected (or modified until they show improvement for all affected parties).

Plausibility-adjusted ‘Quality’ assessment measures.
Similar considerations apply to the measures derived from the approach of evaluating plans for ‘goodness’ or ‘quality’ while adjusting the implied performance claims with the plausibility assessments (section 4). The resulting group statistics, again, can guide (but should not, in their pure form, determine) decisions – especially efforts to modify proposals to achieve better results for all affected parties, since the interim results pinpoint the specific areas of potential improvement.
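One possible reading of ‘adjusting the implied performance claims with the plausibility assessments’ – not necessarily the exact formula of section 4 – is to weight each quality judgment by the plausibility that the claimed performance will actually materialize. The scales and numbers below are assumptions for illustration only.

```python
# Hypothetical sketch of a plausibility-adjusted quality score. This is
# only one possible reading; the actual formula of section 4 may differ.
def adjusted_quality(judgments):
    """judgments: list of (quality q, plausibility pl, weight w) per aspect.
    Each aspect's quality judgment is discounted by the plausibility that
    the claimed performance will actually materialize."""
    return sum(q * pl * w for q, pl, w in judgments)

# Aspects: (quality judgment, plausibility, relative weight) -- invented values.
plan = [( 2.5, 0.9, 0.5),   # well-supported benefit
        ( 3.0, 0.2, 0.3),   # impressive but doubtful claim
        (-1.0, 0.8, 0.2)]   # fairly certain drawback
print(adjusted_quality(plan))  # ~1.145
```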

7 Contribution merit rewards

The proposal to offer reward points for discourse contributions is strongly suggested for the eventual overall platform, but it is difficult to implement in the pilot versions (without resorting to additional work and accounting means ‘outside’ of the main discussion). Its potential ‘side benefits’ deserve some consideration even for the ‘pilot’ version.

Participants are awarded ‘basic contribution points’ for entries to the discussion, provided that they are ‘new’ (to the respective discussion) and not mere repetitions of entries with essentially the same content that have already been made. If the discussion later uses assessment methods such as the argument plausibility evaluation, these basic ‘neutral’ credits are then modified by the group’s plausibility or importance assessment results – for example, by simply multiplying the basic credit point (e.g. ‘1’) by the group’s pl-assessment of that claim.
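A small sketch of this credit mechanism; the entry names, the basic point value, and the group pl-assessments are invented for illustration.

```python
# Hypothetical sketch of the contribution credit mechanism: a basic credit
# per new entry, later multiplied by the group's plausibility assessment of
# that entry (so poorly supported claims can end up with negative credit).
BASIC_CREDIT = 1.0

def contribution_credits(entries, group_pl):
    """entries: entry ids in order of arrival; repetitions earn nothing.
    group_pl: {entry_id: group pl-assessment}; unassessed entries keep
    the neutral basic credit."""
    credits, seen = {}, set()
    for entry in entries:
        if entry in seen:
            continue                      # repetition: no credit
        seen.add(entry)
        credits[entry] = BASIC_CREDIT * group_pl.get(entry, 1.0)
    return credits

entries = ["arg-1", "arg-2", "arg-1", "troll-1"]    # arg-1 repeated
group_pl = {"arg-1": 0.8, "arg-2": 0.4, "troll-1": -0.7}
print(contribution_credits(entries, group_pl))
# -> {'arg-1': 0.8, 'arg-2': 0.4, 'troll-1': -0.7}
```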

The immediate benefits of this are:
– Such rewards represent an incentive for participation,
– and for speedy assembly of needed information (since delayed entries of the same content will not get credit);
– They help eliminate the repetitious comments that often overwhelm discussions on social media: the same content will only be ‘counted’ and presented once;
– The prospect of later plausibility or quality assessment by the group – which can turn the credit for an ill-considered, false, or insufficiently supported claim into a negative value (by being multiplied by a negative pl-value) – will also discourage contributions of lacking or dubious merit. ‘Troll’ entries will not only be counted just once, but will then receive appropriately negative appraisal, and thus be discouraged;
– Sincere participants will be encouraged to provide adequate support for their claims.
Together with the increased discipline introduced by the assessment exercises, this can help improve the overall quality of discourse.

Credit point accounts built up in this fashion are of little value if they are not ‘fungible’, that is, if they have no value beyond participation in the discourse. This may be remedied
a) within the process: by using them to adjust the ‘weight’ of participants’ ‘votes’ or other factors in determining decisions (see the sketch following this list);
b) beyond the process: by using contribution merit accounts as additional signs of qualification for employment or public office. An idea for using such ‘currencies’ as a means of controlling power has been suggested. It acknowledges both that there are public positions calling for ‘fast’ decisions that cannot wait for the outcome of lengthy discussions, and that people seek power (‘empowerment’) almost as a kind of human need – but like most other needs, we are asked to pay for meeting it in one way or another, so the requirement would be that power decisions be ‘paid for’ with credit points. (One of the several issues for discussion.)
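A sketch of option a), credit-weighted voting, purely as material for discussion; the weighting scheme shown (vote weight equal to each participant’s credit balance, floored at a small minimum) is an invented illustration, not a recommendation.

```python
# Hypothetical sketch of option a): adjusting the 'weight' of votes by
# participants' contribution credit balances. The scheme (weight = credit
# balance, floored at a small minimum) is invented for illustration.
def weighted_approval(votes, credits, min_weight=0.1):
    """votes: {participant: True/False}; credits: {participant: balance}.
    Returns the credit-weighted approval share."""
    total = sum(max(credits.get(p, 0.0), min_weight) for p in votes)
    yes = sum(max(credits.get(p, 0.0), min_weight)
              for p, v in votes.items() if v)
    return yes / total

votes = {"ann": True, "bob": True, "cem": False}
credits = {"ann": 2.4, "bob": 0.3, "cem": 1.1}
print(round(weighted_approval(votes, credits), 2))  # 0.71
```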

—ooo—