Archive for the 'Public policy discourse' Category

‘THE GREAT RESET’

Thorbjørn Mann

The ‘Great Reset’?

Another new, evil bugaboo, if not just one more disguise or reincarnation of ‘socialist’ or ‘neoliberal’ but essentially authoritarian tyranny schemes?

I happened to listen to a lecture urging resistance against the WEF-driven ‘Great Reset’ that is using humanitarian crises like the Covid pandemic as levers for unprecedented transitions toward capitalist-state-controlled Big Brother tyranny: using well-intentioned, benevolent mass protection directives (or means that can be presented as necessary mass protection tools, like wearing facemasks, social distancing, vaccination) as opportunities for getting people used to more freedom-destroying oppression. I came away with the strong impression that these warnings and concerns are either perhaps well-intentioned but based on a thoroughly misunderstood or misrepresented view of the nature and causes of the attacked evils, or just political ‘propaganda’ messages against the current administration — the very thing they accuse the other side of.

Assuming for a moment the interpretation of well-intentioned misunderstanding, getting the direction of forces wrong: some key considerations. (Numbered for convenience in responding, not to indicate any order of importance.)

1. Must not ANY initiative for improvement — well-intentioned or just power-hungry for the sake of power — pursue some degree of POWER (‘empowerment’) to spread its ideas and get them adopted? Which also applies to any initiatives for resisting such initiatives?

2. Must not ANY adoption of ‘new’, ‘innovative’  or ‘restoring’  (repairing, returning to previous good states) initiatives and provisions at governance level (requiring adherence by all members of a community) run up against some degree of RESISTANCE by ‘opposition’ groups perceiving loss of status, power, well-being, profit from the change?  

3. Must not such opposition be expected, the more DECISIONS for adoption have been reached by decision methods  that inadvertently or deliberately ignore or override the concerns of such  segments of society, now feeling disadvantaged? Decision modes such as ‘leadership’  dictates or even majority voting, no matter how well justified as the very essence of democracy? 

4.  Are not most if not all current governance tools aiming at common ADHERENCE to agreements (‘laws’) even by disadvantaged parties, based on the notion of ‘ENFORCEMENT’ — that is, punishing violations by force (implied in the very term ‘enforcement’) or threat of force?

5.  Will such opposition resistance not have to seek and adopt reciprocal force against ‘law enforcement’ means — the more so, the more the very decision modes for law adoption prevent, distort or ignore other means of expression of concerns by the disadvantaged parties? (Does this not include the ‘propaganda’ means of recklessly disputing and misrepresenting each other’s intelligence, honesty, civil-mindedness, ethics, patriotism, etc.?)

6.  Will this reliance on force and counter-force not lead to a continuing escalation of the tools (weaponry) of ‘enforcement’ and ‘resistance’?  Escalation that can lead to internal civil war and revolution, and, given the increasing destructiveness of modern weaponry,  utterly ‘MAD’  outcomes on the larger, international level? 

7. Do these mechanisms not, potentially, apply to ALL historical and current forms of governance — not just to ‘socialist’ or ‘fascist’, ‘Chinese communist’ or ‘Chinese capitalist’, but also to the ‘democratic’ regimes that are increasingly bought by the big corporations and oligarchs, or taken over by the military? The common denominator being the LACK OF EFFECTIVE CONTROLS OF POWER?

     Note that this conclusion does not imply nor justify the wholesale rejection of power: there are many situations in which effective public decisions will have to be made ‘fast’, without the benefit of thorough public discourse. On a ship encountering an iceberg in the ocean, one decision must be made ‘fast’ — pass the iceberg on the port or starboard side, with all necessary intermediate means for adopting the new course being followed by all affected members of the crew.

8. Regardless of the answers to these questions, does criticism of current ways of doing things not imply some responsibility to engage in and encourage a better PUBLIC DISCOURSE, supporting, even requiring, efforts to develop and discuss alternative, better ways? Should mere complaints and attacks on ongoing or proposed change, without concrete suggestions of better ways to deal with the problems, just be seen as political ‘propaganda’ in the interest of gaining political power, but under the same basic conditions that generated the problems?

9. It would be presumptuous and preposterous for any single person to claim to have all the  answers. It can be argued, instead, that as a collective species, the global humanity as much as smaller local communities, WE DO NOT HAVE A CONVINCING, UNIVERSALLY ACCEPTABLE MODEL FOR SURVIVAL – YET.  It could even be argued  that humans are a designing, planning species  with every generation wanting to develop its own ‘NEW’ definition, vision, design, plan for what it means to be human, and that it should be ‘empowered’ to do so, and that any ultimate ‘RESET’ model would be the wrong answer. 

     So my own attempts to offer some thoughts should be seen as efforts to respond to that responsibility of #8 above: as encouragements to develop and engage in the necessary public discourse, and as initial contributions and proposals to it, not as any ultimate panacea. Some urgently needed considerations and efforts:

10. There are many efforts, theories, initiatives, experiments and proposed ‘models’ already being developed and implemented all over the world. They are diverse, not all agreeing on the same principles and assumptions, and arguably not communicating well either with similar initiatives or with a wider public. However: should they not be encouraged and supported by a global community? Perhaps on some conditions, such as:

10.1  Remaining ‘local’ (in the sense of respecting, tolerating neighboring and existing systems) — until common larger, even global agreements have been achieved by satisfactory and peaceful means;

10.2  Comprehensibly sharing their ideas and experiences (successes, obstacles, and failures), as well as proposals for wider adoption, in a global repository for mutual learning, discussion and evaluation;

10.3  Refraining from any form of violent, deceitful, or otherwise coercive attempts to impose their provisions on other parties.

11.  Encouraging the development of a ‘PUBLIC PLANNING DISCOURSE SUPPORT PLATFORM’, both to house and facilitate access to the repository of innovation / restoration initiatives, and to host the discussion of necessary ‘global’ agreements (common ‘road rules’, akin to the decision to drive on the right or left side of the road…)

12.  Development of a PUBLIC (potentially global as well as ‘local’) PLANNING DISCOURSE SUPPORT PLATFORM aiming at common decisions based on the quality and merit of information and contributions to the discourse, containing:

12.1  INCENTIVES for wide and speedy public participation;

12.2   Standard INFORMATION SUPPORT (similar incentives, research etc.);

12.3   TECHNIQUES AND PROCEDURES for structured discourse without excessive repetition or disruptive and flawed contributions, but with a concise, effective overview of the whole spectrum of contributions;

12.4  Optional provisions for SYSTEMATIC EVALUATION of contribution merit (e.g. the merit of proposals or proposal improvement ideas, or of arguments pro or con proposals);

12.5  Development of provisions for DECISION-MAKING (recommendations, agreements) based on contribution merit (rather than on shortcuts such as majority voting, which systematically disregards minority concerns, and which is in itself inapplicable to projects and problems transgressing the traditional boundaries of governance entities within which the number of eligible voters can be meaningfully defined…)

13.  Development of NEW tools for ENSURING ADHERENCE to decisions and agreements, as much as possible based on automatic prevention of violations (triggered by the very attempt of violation) rather than violent or coercive ‘enforcement’.

14.  Development of better provisions for the CONTROL OF POWER, aiming at preventing the escalation of power and power tools and the corresponding intensity of opposition.

Tentative ideas for innovative techniques and tools related to items 10, 11, 12, 13 and 14 above have been proposed for discussion in my papers on Academia.edu, FB, LI, in books, and on the Abbeboulah.com blog; PDF files can be sent by email to interested people upon request (by LI message).

EVALUATION IN THE PLANNING DISCOURSE — AI SUPPORT OF EVALUATION IN PLANNING

Part of a series of  issues to clarify the role of deliberative evaluation in the planning and policy-making process. Thorbjørn Mann, February 2020.

The necessity of information technology assistance

A planning discourse support platform aiming at accommodating projects that cannot be handled by small F2F ‘teams’ or deliberation bodies, must use current (or yet-to-be developed) advanced information technology, if only just to handle communication. The examination of evaluation tasks in such large project discourse, so far, also has shown that serious, thorough deliberation and evaluation can become so complex that information technology assistance for many tasks will seem unavoidable, whether in form of simple data management or more sophisticated ‘artificial intelligence‘.

So the question arises what role advanced Artificial or Augmented Intelligence tools might play in such a platform. A first cursory examination will begin by surveying the simpler data management (‘house-keeping’) aspects that have no direct bearing on actual ‘intelligence’ or ‘reasoning’ and evaluation in planning thinking, and then exploring possible expansion of the material being assembled and sorted, into the intelligence assistance realm. It will be important to remain alert to the concern of where the line between assistance to human reasoning and substituting machine calculation results for human judgment should be drawn.

‘House-keeping’ tasks

a. File maintenance. A first ‘simple’ data management task will of course be to gather and store the contributions to the discourse, for record-keeping, retrieval and reference. This will apply to all entries, in their ‘verbatim‘ form, most of which will be in conversational language. They may be stored in simple chronological order as they are entered, with date and author information. A separate file will keep track of authors and cross-reference them with entries and other actions. A log of activities may also be needed.

b. ‘Ordered’, or ‘formatted’ files. For a meaningfully orchestrated evaluation in the discourse, it will be necessary to check for and eliminate duplication of essentially the same information, and to sort the entries, for example according to issues, proposals, arguments, factual information — perhaps already in some formatted manner — and to keep the resulting files updated. This may already involve some formatting of the content of ‘verbatim’ entries.

c.  Preparation of displays, for overview. This will involve displays of ‘candidates’ for decision, the resulting agenda of accepted candidates, ‘issue maps’ of the evolving discussion, and evaluation and decision results and statistics.

d. Preparation of evaluation worksheets.

e. Tabulating, aggregating evaluation results for statistics and displays.
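To make these ‘house-keeping’ tasks a bit more concrete: a minimal sketch, in Python, of the kind of record-keeping items a, b and e describe: verbatim entries stored chronologically with author and date, an author cross-reference, and an activity log. All names and fields here are illustrative assumptions, not features of any existing platform.

from dataclasses import dataclass
from datetime import datetime, timezone

@dataclass
class DiscourseEntry:
    # one verbatim contribution (task a)
    entry_id: int
    author: str
    text: str
    timestamp: datetime
    entry_type: str = "unsorted"   # later sorted into 'issue', 'proposal', 'argument', 'fact' (task b)

class DiscourseRepository:
    """Illustrative store: chronological file, author index, activity log."""
    def __init__(self):
        self.entries = []        # the chronological 'verbatim' file
        self.by_author = {}      # author -> list of entry ids
        self.log = []            # log of activities

    def add_entry(self, author, text):
        entry = DiscourseEntry(len(self.entries) + 1, author, text,
                               datetime.now(timezone.utc))
        self.entries.append(entry)
        self.by_author.setdefault(author, []).append(entry.entry_id)
        self.log.append((entry.timestamp, "add_entry", author, entry.entry_id))
        return entry

    def classify(self, entry_id, entry_type):
        # the sorting / formatting step of task b
        self.entries[entry_id - 1].entry_type = entry_type
        self.log.append((datetime.now(timezone.utc), "classify", entry_id, entry_type))

Simple tabulations for task e (counts of entries per type or per author) can then be computed directly from these records.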

‘Analysis’ tasks, examples

f. Translation. Verbatim entries submitted in different languages and their formatted ‘content’ will have to be translated into the languages of all participants. Also, entries expressed in ‘discipline jargon’ will have to be translated into conversational language.

g. Entries will have to be checked for duplication of essentially identical content expressed in different words (to avoid counting the same content twice in evaluation procedures).

h. Standard information search (‘googling’) for available pertinent information already documented by existing research, databases, case studies etc. This will require the selection of search terms and the assessment of the relevance of found items, which are then entered into a separate section of the ‘verbatim’ file.

i. Entered items (verbal contributions and researched material) will have to be formatted for evaluation; arguments with unstated (‘taken for granted’) premises must be completed with all premises stated explicitly; evaluation aspects, sub-aspects etc. must be ordered into coherent ‘aspect trees’. (Optional: information claims found in searches may be combined to form ‘new’ arguments that have not been made by human participants.)

j. Identifying the argument patterns (inference rules) of arguments and checking them, to alert participants to validity problems and contradictions.

k. Normalization of weight assignments, aggregation of judgments and arguments, and displays of the different aggregation results (from different aggregation functions), as well as their effects on different decision criteria, will have to be prepared.

l. More sophisticated support examples would be the development of systems models of the ‘system’ at hand (for example, constructing cause-effect connections and loops for the factual-instrumental premises in arguments) to predict the performance of proposed solutions and to simulate the behavior of the resulting system in its environment over time.
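As an illustration of item l only: a tiny, hypothetical sketch (in Python) of a two-variable cause-effect loop simulated over time. The variables, coefficients and time step are invented for the example; a real systems model would be built from the factual-instrumental premises actually raised in the discourse.

def simulate(steps=20, dt=1.0):
    # Hypothetical loop: a plan intervention raises a benefit B,
    # B drives a side-effect S, and S in turn dampens the growth of B.
    B, S = 0.0, 0.0
    intervention = 1.0           # assumed constant effect of the plan
    history = []
    for t in range(steps):
        dB = (intervention - 0.3 * S) * dt    # S dampens growth of B
        dS = (0.2 * B - 0.1 * S) * dt         # B drives S, S decays slowly
        B, S = B + dB, S + dS
        history.append((t, round(B, 2), round(S, 2)))
    return history

for t, B, S in simulate():
    print(f"t={t:2d}  benefit={B:6.2f}  side_effect={S:6.2f}")

Even this toy loop shows the kind of non-obvious behavior over time that such models are meant to expose before a plan is evaluated.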

The boundary between human and machine judgments

It should be clear from preceding sections that general algorithms should not be used to generate evaluative judgments (unless there are criteria expressed in regulations, laws, or norms, to expressly substitute for human judgment.) Any calculated statistics of participant judgments should be clearly identified as ‘statistics’ of individuals’ judgments, not as ‘group judgments’. The boundary issue may be illustrated with the examination of the idea of complete ‘objectification’ or explanation of a person’s basis of judgment, with the ‘formal evaluation’ process explained in that segment. Complete description of judgment basis would require description of criterion functions for all aspect judgments, the weighting of all aspects and sub-aspects etc., and the estimates of plausibility (probability) for a plan to meet the performance expectations involved. This would allow a person A to make judgments on behalf of another person B, while not necessarily sharing B’s basis of judgment. Imagining a computer doing the same thing is meaningful only if all those values of B’s judgment basis can be given to the computer. The judgments would then be ‘deliberated’ and fully explained (not necessarily justified or mandatory for all to share).
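A minimal sketch of what such a fully ‘objectified’ judgment basis might look like as data, assuming (as described above) a criterion function, a relative weight and a plausibility qualifier per aspect; the aspects, the functions and the simple weighted aggregation are invented purely for illustration.

# Hypothetical 'objectified' judgment basis of a person B: one criterion function,
# one relative weight, and one plausibility qualifier per evaluation aspect.
judgment_basis_B = {
    "cost":    {"criterion": lambda x: max(-1.0, 1.0 - x / 1_000_000), "weight": 0.4, "plausibility": 0.6},
    "comfort": {"criterion": lambda x: min(1.0, (x - 50) / 50),        "weight": 0.6, "plausibility": 0.8},
}

def overall_judgment(basis, predicted_performance):
    """Aggregate weighted, plausibility-qualified aspect judgments (scale -1..+1)."""
    total = 0.0
    for aspect, b in basis.items():
        aspect_score = b["criterion"](predicted_performance[aspect])   # -1..+1
        total += b["weight"] * b["plausibility"] * aspect_score
    return total

# Anyone (person A, or a machine) holding B's explicit basis could now 'judge on behalf of B':
print(overall_judgment(judgment_basis_B, {"cost": 800_000, "comfort": 75}))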

In practice, doing that even for another person is too cumbersome to be realistic. People usually shortcut such complete objectification, making decisions with ‘offhand’ intuitive judgments — judgments that they do not or cannot explain. That step cannot be performed by a machine, by definition: the machine must base its simulation of our judgment basis on some explanation. (Admittedly, it could be simulating the human equivalent of tossing a coin: deciding randomly, though most humans would resent having their intuitive judgments called ‘random’.) And vague reference is usually made to ‘common sense’ or otherwise societally accepted values, obscuring and sidestepping the problem of dealing with the reality of significantly different values and opinions.

Where would the machine get the information for making such judgments, if not from a human? Any algorithm for this would be written by a human programmer, including the specifics for obtaining the ‘factual’ information needed to develop even the crudest criterion function. A common AI argument would be that the machine can be designed to observe (gather the needed factual information) and ‘learn’ to assemble a basis of judgment, for measurable and predictable objectives such as ‘growth’ or stability (survival) of the system. The trouble is that the ‘facts’ involved in evaluating the performance and advisability of plans are not ‘facts’ at all: they are estimates, predictions of future facts, so they cannot be ‘observed’ but must be extrapolated from past observations by means of some program. And we can deceive ourselves into accepting information about the desirability of the ‘ought’ or ‘goodness’ aspects of a plan as ‘factual’ data only by looking at statistics (also extrapolated into the future) or at legal requirements — which must have been adopted by some human agent or agency.

To be sure: these observations are not intended to dismiss the usefulness of AI (which here should be read as augmented intelligence) for the planning discourse. They are trying to call attention to the question of where to draw the boundary between human and machine ‘judgment’. Ignoring this issue can easily lead to the development of processes in which machine ‘judgment’ — presented to the public as non-partisan, ‘objective’, and therefore more ‘correct’ than human decisions, but inevitably programmed to represent some party’s intentions and values — becomes a source of serious mistakes, and a tool of oppression. This brief sketch can only serve as encouragement to more thorough discussion.


— o —

EVALUATION IN THE PLANNING DISCOURSE — THE DIMINISHING PLAUSIBILITY PARADOX

Thorbjørn Mann,  February 2020

THE DIMINISHING PLAUSIBILITY PARADOX

Does thorough deliberation increase or decrease confidence in the decision?

There is a curious effect of careful evaluation and deliberation that may appear paradoxical to people involved in planning decision-making, who expect such efforts to lead to greater certainty and confidence in the validity of their decisions. There are even consulting approaches that derive measures of such confidence from the ‘breadth’ and ‘depth’ achieved in the discourse.

The effect is the observation that with a well-intentioned, honest effort to give due consideration and even systematic evaluation to all concerns — as expressed e.g. by the pros and cons of proposed plans perceived by affected and experienced people — the degree of certainty or plausibility for a proposed plan actually seems to decrease, moving towards a central ‘don’t know’ point on a +1 to -1 plausibility scale. Specifically: the more carefully breadth (meaning coverage of the entire range of aspects or concerns) and depth (understood as the thorough examination of the support — evidence and supporting arguments — of the premises of each ‘pro’ and ‘con’ argument) are evaluated, the more the degree of confidence felt by evaluators moves from initial high support (or opposition) towards the central point ‘zero’ on the scale, meaning ‘don’t know; can’t decide’.

This is, of course, the opposite of what the advice to ‘carefully evaluate the pros and cons’ seems to promise, and of what approaches striving for breadth and depth actually appear to achieve. This creates a suspicion that either the method for measuring the plausibility of all the pros and cons must be faulty, or that the approaches relying on the degree of breadth and depth directly as equivalent to greater support are making mistakes. So it seems necessary to take a closer look at this apparently counterintuitive phenomenon.

The effect was first observed in the course of the journal review of an article on the structure and evaluation of planning arguments [1] — several reviewers pointed out what they thought must be a flawed method of calculation.

Explanation of the effect

The crucial steps of the method (also explained in the section on planning argument assessment) are the following:

– All pro and con arguments are converted from their often incomplete, missing-premises state to the complete pattern explicitly stating all premises (e.g. “Yes, adopt plan A because 1) A will lead to effect B given conditions C, and 2) B ought to be aimed for, and 3) conditions C will be present”).

– Each participant will assign plausibility judgments to each premise, on the +1 / -1 scale, where +1 stands for complete certainty or plausibility, -1 for complete certainty that the claim is not true, or totally implausible (in the judgment of the individual participant), and the center point of zero expresses inability to judge: ‘don’t know; can’t decide’. Since in the planning argument all premises are estimates or expectations of future states — effects of the plan, applicability of the causal rule that connects future effects or ‘consequences’ with actions of the plan, and the desirability or undesirability of those consequences — complete certainty assessments (pl = +1 or -1) for the premises must be considered unreasonable; so all the plausibility values will be somewhere between those extremes.

– Deriving a plausibility value for the entire argument from these premise plausibility judgments can be done in different ways. One extreme is to assign the lowest premise plausibility judgment prempl to the entire argument, expressing an attitude like ‘the strength of a chain is equal to the strength of its weakest link’. Or the premise plausibility values can be multiplied, giving the argument plausibility for argument i:

            Argpl(i) = ∏ prempl(i,j)   (the product over all premises j of argument i)

Either way, the resulting argument plausibility cannot be higher than the premise plausibilities.

– Since arguments do not carry the same ‘weight’ in determining the overall plausibility judgment, it is necessary to assign some weight factor to each argument plausibility judgment. That weight will depend on the relative importance of the ‘deontic’ (ought) premises, and is approximately expressed by assigning each of the deontic claims in all the arguments a weight between zero and +1, such that all the weights add up to +1. So the weight of argument i will be the plausibility of argument i times the weight of its deontic premise: Argw(i) = Argpl(i) x w(i).

– A plausibility value for the entire plan will have to be calculated from all the argument weights. Again, there are different ways to do that (discussed in the section on aggregation), but an aggregation function such as adding all the argument weights (as derived by the preceding steps) will yield a plan plausibility value on the same scale as the initial premise and argument plausibility judgments. It will also be the result of considering all the arguments, both pro and con; and since the weights of arguments considered ‘con’ arguments in the view of individual participants will be subtracted from the summed-up weight of ‘pro’ arguments, it will be nowhere near the complete certainty value of +1 or -1, unless of course the process revealed that there were no arguments carrying any weight at all on the pro or the con side. Which is unlikely, since e.g. all plans have been conceived from some expectation of generating some benefit, and will carry some cost or effort, etc.
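A minimal sketch, in Python, of the calculation just described, using the ‘multiply the premise plausibilities’ variant for argument plausibility and simple addition of the argument weights for the plan; all numbers are invented for illustration.

from math import prod

# Each argument: its premise plausibilities (-1..+1) and the weight of its deontic
# premise; the deontic weights across all arguments are assumed to sum to 1.
# A 'con' argument simply ends up with a negative argument plausibility here
# (e.g. its deontic premise is judged undesirable, pl < 0).
arguments = [
    {"premise_pl": [0.8, 0.7, 0.9],  "deontic_weight": 0.5},   # pro
    {"premise_pl": [0.6, -0.8, 0.7], "deontic_weight": 0.3},   # con
    {"premise_pl": [0.5, 0.6, 0.4],  "deontic_weight": 0.2},   # weak pro
]

def argument_plausibility(premise_pl):
    # Argpl(i) = product of the premise plausibilities prempl(i, j)
    return prod(premise_pl)

plan_pl = sum(argument_plausibility(a["premise_pl"]) * a["deontic_weight"]
              for a in arguments)

for i, a in enumerate(arguments, 1):
    print(f"Argpl({i}) = {argument_plausibility(a['premise_pl']):+.3f}")
print(f"Plan plausibility = {plan_pl:+.3f}")

Even with mostly favorable judgments, the resulting plan plausibility (here roughly +0.18) lands well between the extremes, which is exactly the tendency discussed below.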

This approach as described thus far can be considered a ‘breadth-only’ assessment, justly so if there is no effort to examine the degree of support of the premises. But of course the same reasoning can be applied to any of the premises — to any degree of ‘depth’ demanded by participants from each other. The effect of overall plan plausibility tending toward the center point of zero (‘don’t know’ or ‘undecided’), compared with initial offhand convinced ‘yes, apply the plan!’ or ‘no, reject it!’ reactions, will be the same — unless there are ‘principle’-based or logical or physical ‘impossibility’ considerations, in plans that arguably should not even have reached the stage of collective decision-making.

Explanation of the opposite effect in ‘breadth/depth’ based approaches

So what distinguishes this method from approaches that claim to use degrees of ‘breadth and depth’ of deliberation as measures justifying the resulting plan decisions, and that, in the process, increase the team’s confidence in the ‘rightness’ of its decision?

One obvious difference — which must be considered a definite flaw — is that the degree of deliberation, measured by the mere number of comments and arguments, of ‘breadth’ or ‘depth’, does not include assessment of the plausibility (positive or negative) of the claims involved, nor of their weights of relative importance. Just having talked about a number of considerations, without that distinction, cannot already be a valid basis for decisions, even if Popper’s advice about the degree of confidence in scientific hypotheses we are entitled to hold is not considered applicable to design and planning. (“We are entitled to tentatively accept a hypothesis to the extent we have given our best effort to test it, to refute it, and it has withstood all those tests”…)

Sure, in planning we don’t have ‘tests’ that definitively refute a hypothesis (or ‘null hypothesis’); we have to apply the advice as best we can, and planning decisions don’t stand or fall on the strength of single arguments or hypotheses. All we have are arguments explaining our expectations, speculations about the future resulting from our planning actions — but we can adapt Popper’s advice to planning: “We can accept a plan as tentatively justified to the extent we have tried our best to expose it to counterarguments (cons) and have seen that those arguments are either flawed (not sufficiently plausible) or outweighed by the arguments in its favor.”

And if we do this, honestly admitting that we really can’t be very certain about all the claims that go into the arguments, pro or con, and look at how all those uncertainties come together in totaling up the overall plausibility of the plan, the tendency of that plausibility to go towards the center point of the scale looks more reasonable.

Could these considerations be the key to understanding why approaches relying on mere breadth and depth measurements may result in increased confidence of the participants in such projects? There are two kinds of extreme situations in which it is likely that even extensive breadth and depth discussions can ignore or marginalize one side or the other of the necessary ‘pro’ or ‘con’ arguments.

One is the typical ‘problem-solving’ team assembled for the purpose of developing a ‘solution’ or recommendation. The enthusiasm of the collective creative effort itself (but possibly also the often-invoked ‘positive thinking’, the advice to defer judgment so as not to disrupt the creative momentum, as well as the expectation of a ‘consensus’ decision?) may focus the thinking of team members on ‘pro’ arguments justifying the emerging plan — while neglecting or diverting attention from counterarguments. Finding sufficiently good reasons for the plan is taken to be enough to make a decision?

An opposite type of situation is the ‘protest’ demonstration, or events arranged for the express purpose of opposing a plan. Disgruntled citizens outraged by how a big project will change their neighborhood: counting up all the damaging effects: Must we not assume that there will be a strong focus on highlighting the plan’s negative effects or potential consequences: assembling a strong enough ‘case’ to reject it? In both cases, there may be considerable and even reasonable deliberation in breadth and depth involved — but also possible bias due to neglect of the other side’s arguments.

Implications of the possibility of decreasing plan plausibility?

So, pending some more research into this phenomenon — if it is found to be common enough to worry about — it may be useful to look at what it means: what adjustments to common practice it would suggest, and what ‘side-stepping’ stratagems may have evolved due to the mere sentiment that more deliberation might shake any undue, undeserved expectations in a plan. Otherwise, cynical observers might recommend throwing up our hands and leaving the decision to the wisdom of ‘leaders’ of one kind or another, in the extreme to oracle-like devices — artificial intelligence from algorithms whose rationales remain as unintelligible to the lay person as the medieval ‘divine judgment’ validated by mysterious rituals (but otherwise amounting to tossing coins?).

Besides the above-mentioned research into the question, examining common approaches on the consulting market for their potential vulnerability to this tendency, or for provisions that merely paper over it, would be a first step. For example, adding plausibility assessment to the approaches using depth and breadth criteria would be necessary to make them more meaningful.

The introduction of more citizen participation into the public planning process is an increasingly common move that has been urged — among other undeniable advantages such as getting better information about how problems and the plans proposed to solve them actually affect people — to also make plans more acceptable to the public because the plans then are felt to be more ‘their own’. As such, could this make the process vulnerable to the above first fallacy of overlooking negative features? If so, the same remedy of actually including more systematic evaluation into the process might be considered.

A common temptation of promoters of ‘big’ plans can’t be overlooked: to resort to ‘big’ arguments that are so difficult to evaluate that made-up ‘supporting’ evidence can’t be distinguished from predictions based on better data and analysis (following the old quip that the bigger the lie, the more likely people are to buy it…). Many people are already suggesting that we should return to smaller (local) governance entities that can’t offer big lies.

Again: this issue calls for more research.

[1]  Thorbjørn Mann, “The Structure and Evaluation of Planning Arguments”, Informal Logic, December 2010.

— o —

EVALUATION IN THE PLANNING DISCOURSE — PROCEDURAL AGREEMENTS

An effort to clarify the role of deliberative evaluation in the planning and policy-making process.  Thorbjørn Mann,  February 2020

PROCEDURAL AGREEMENTS FOR EVALUATION

The need for procedural agreements

Any group, team or assembly having decided to embark upon a common evaluation / deliberation task aimed at a recommendation or decision about a plan, will have to adopt a set of agreements about the procedure to be followed, explicitly or implicitly. These rules can become quite detailed and complicated. Even the familiar ‘rules of order’ of standard parliamentary procedure, aiming at simple yea/nay decisions on ‘motions’ for the assembly to accept or reject, will become book-length guides (like ‘Robert’s Rules of Order’) that the chairpersons of such processes may have to consult when disputes arise. For simplified versions based on the expected simplicity of ending the discussions with a majority vote, and citizens’ familiarity with basic rules, agreements can even be tacitly taken for granted, without recourse to written guides. However, this no longer applies when the decision-making body engages in more detailed and systematic deliberation aiming at making the decisions more transparently justified by the evaluative judgments made on the comments in the discourse.

General overall agreements versus procedures for ‘special techniques’

This could be seen as a call for a general procedure that includes the necessary procedural rules, as an extension of the familiar parliamentary procedure. Would such a one-size-fits-all solution be appropriate? As the preceding sections of this study show, we now see not only a great variety of different evaluation tasks and context situations, but also a variety of different ‘approaches’ for such processes on the ‘market’ — especially as they are assisted by new technology. Each one comes with different assumptions about the rules or ‘procedural agreements’ guiding the process. So it seems that the question is less one of developing and adopting one general-purpose pattern than one of providing a ‘toolkit’ of different approaches that the participants in a planning process could choose from as the task at hand requires. That opportunity-step for choice must be embedded in a general and flexible overall process that participants either would already be familiar with, or could easily learn and agree to.

Once a special technique is selected, as decided by the group, its procedural steps and decision rules should be explicitly agreed upon at the very beginning of the specific process — the more so, the ‘newer’ the approach, tools and techniques — so as to avoid disruption of the actual deliberation by disagreements about procedure later on. Such quibbles could easily become quite destructive and polarizing, and even their in-process resolution can introduce significant bias into the actual assessment work itself. It may be necessary to change some rules as the participants learn more about the nature of the problem at hand; that process should be governed by rules set out in the initial agreements. A provision such as the ‘Next step’ proposed in the process for the overall planning discourse platform would offer that opportunity. (See ‘PDSS-REVISED’.)

This seemingly matter-of-course step can become controversial because different ‘special techniques’ may involve different concepts and corresponding vocabulary to be used: even ‘systems’ approaches of different ‘generations’ are likely to use different labels for essentially the same things, which can result in miscommunication and misunderstanding or worse. New techniques and tools may require different responsibilities, behavior, decision modes, replacing rules still taken for granted: must new agreements be set ‘upfront’ to prevent later conflicts?

The main agreements — possibly different rules for different project types — will then cover the basic procedural steps; the ‘stopping rules’ for deciding when a decision can be said to have been accepted (since one of the key properties of ‘wicked problems’ is that there is nothing in the nature of the problem itself that tells problem-solvers that a solution has been reached and the work can stop); and the decision criteria and modes according to which this should be done. For the details of the evaluation part itself, the kinds of judgments and judgment scales will have to be agreed upon — so that e.g. a judgment score will have the same meaning for all participants. (These issues will be addressed in separate sections.)

An argument can be made that efforts should be made to preserve consistency between the overall approach, with its frame of reference and vocabulary, and any ‘special techniques’ for evaluation used within that process along the way.

Doing without cumbersome procedural rules?

There will be attempts to escape procedures felt to be too ‘cumbersome’ or bureaucratic, with an easier route to a decision. Majority voting itself can be seen as such an escape. Even easier are decision criteria such as ‘consent’ — declared, for example, by the chair that there are ‘no more objections’ combined with ‘time’s up’ — which may indicate that the congregation has become exhausted, rather than convinced of the advantages of a proposed plan, or dissuaded from voicing more ‘critical’ questions. But aren’t the conditions leading to ‘consent’ outcomes in some approaches — group size, seating arrangements, sequences of steps and phases — themselves procedural provisions?

Examples of aspects calling for agreements

Examples of different procedural agreements are the above-mentioned ‘rules of order’; the steps for determining the ‘Benefit/Cost Ratio’ of plans; provisions for a ‘formal evaluation’ process of the ‘quality’ of a proposed plan, or for the evaluation of a set of alternative proposals; the agreements needed for evaluating the plausibility of a plan by systematic assessment of argument plausibility; and the guides for a ‘Pattern Language’ approach to planning. (Some of these will be described in separate segments.)

The procedural agreements cover aspects such as the following:
– The conceptual frame of reference and its vocabulary and corresponding techniques and displays;
– Proper ‘etiquette’ and behavior;
– The process steps (sequence), participant rights and responsibilities;
– Formatting of entries as needed for evaluation;
– For the evaluation tasks: judgment scales and units, the meaning of the scores;
– The aggregation functions to be used to derive overall judgments from partial judgment scores and from individual participant scores to ‘group’ statistics and decision rules;
– Decision criteria and decision modes;
– The stopping rule(s) for the process.
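As a purely illustrative example of how such agreements might be recorded explicitly at the start of a project, a small sketch of an ‘agreements’ record in Python; every value shown is an invented placeholder to be decided by the participants, not a recommendation.

# Hypothetical procedural-agreement record for one project; all values are placeholders.
procedural_agreements = {
    "frame_of_reference": "standard planning argument vocabulary",
    "etiquette": ["no personal attacks", "one point per entry"],
    "process_steps": ["raise issues", "collect proposals", "argue pros and cons",
                      "evaluate", "decide"],
    "entry_format": {"premises_explicit": True, "language": "conversational"},
    "judgment_scale": {"min": -1.0, "max": 1.0, "midpoint_meaning": "don't know"},
    "aggregation": {"argument": "product of premise plausibilities",
                    "plan": "weighted sum of argument plausibilities",
                    "group_statistic": "mean of individual plan scores"},
    "decision_rule": "adopt if the group statistic exceeds the agreed threshold",
    "decision_threshold": 0.3,
    "stopping_rule": "no new entries for two rounds, then final evaluation and decision",
}

def decision(group_statistic):
    """Apply the agreed (hypothetical) decision rule to a group score."""
    return group_statistic >= procedural_agreements["decision_threshold"]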

Specific agreements for different evaluation ‘approaches’ and special techniques must then be discussed in the sections describing those methods.


–o–

On the style of government architecture

Thorbjørn Mann, February 2020

The current administration of the U.S.  Federal Government has proposed that buildings for federal government use should be designed in the ‘classical’ style of ancient Greek and Roman architecture; this has led to some passionate objections, e.g. from the American Institute of Architects.

Both the desire to set some general rules for designing government (at least ‘federal’) architecture and the particular choice of style, as well as the reaction to that government move, are understandable, though the rationales for both deserve some discussion.

In traditional societies, it was almost a matter of course that buildings were designed in a way that made them recognizable as to their role or function or purpose: A house (for living in) was a house, distinct from the barn or the stable or the storehouse, a church, a temple or synagogue or mosque were recognizable as what they were even to children, a store was a store, and a government building was a government building — a city hall, a ruler’s palace. Even in societies changed by the industrial revolution, a factory or a railway station were recognizable to the citizens as what they were and what they were for.

For government buildings, the design or style carried additional expectations: what kind of government, what kind of societal order did they represent? At one time, a ruler would live in a fortress — ostensibly for protection from exterior enemies, but as a convenient side-effect also protection from the ruler’s own subjects who didn’t like the taxes and what he used them for, or other edicts. More ‘democratic’ or ‘republican’ governance systems favored more ‘civil’ connotations, say, like a ‘marketplace of ideas’ for how to run their lives; the issue of designing suitable places that told the governance folks that they were ‘servants of the people’ but also told visitors how great their cities or nations were, became a delicate challenge. This also affected the design of residences of oligarchs who ‘ran’ government from their own palaces, but wished to insist on the right to do so by their wealth and erudition and good taste. (1) Their administrations — bureaucracies — could no longer use the fortress symbols to keep the citizenry in line, but architects helped the rulers to find other means to do that; the sheer size and complexity of rule-based designs of administrative institutions were intimidating, sorry ‘inspiring’ enough?

That clarity and comprehensibility of buildings has been lost in recent architecture: we see many kinds of clients, governmental and commercial and in-between institutions, trying to impress the public and each other by means of the size and novelty supplied by architectural creativity with their buildings. This is leading to a ‘diversity’ of the public visual environment that many find refreshing and interesting, but that others are beginning to resent as disturbing and boring, since as a whole it expresses a different kind of uninspiring uniformity: the common desire to impress by means of size (who’s got the tallest building and most brilliant plumage?) or of ‘different’ signature architecture. Coming across as more puerile than ‘inspiring’: is that who we are as a society?

So the question of whether at least some clear distinction between governmental architecture and other buildings should be re-established, is not an entirely meaningless one. But insisting that the issue should be the sole domain of architects to decide rather than the government is also missing just that point: what is it that architecture tells us about who we — and our government — are, or ought to be? Just big and impressively ‘imperial’ — like the Roman or other empires that ended up collapsing under their own weight and corruption that all the marble couldn’t hide? The ‘inspiration’ being mainly the same kind of puerile awe of its sheer power but also — and not just incidentally: fear? What is the kind of architecture that would inspire us to cooperate, through our government, towards a more ‘perfect’ just, free, creative but kind and peaceful society?

Part of the problem is that we do not have a good forum for the discussion of these issues. The government itself, in most countries, has lost the standing of being that forum, for various reasons. The forms of ‘classical’ architecture won’t bring it back — they have too easily been adopted by commercial and other building clients: the example of an insane asylum with a classical portico, an old standard joke in architecture schools that advocated more modern styles, is beginning to give us a new chilling feeling… So where: Books? Movies? TV? Ah: Twitter? Is that who we are? Just asking…

(1) I have written about this issue (under the heading of the role of ‘occasion’ and ‘image’ in the built environment) in some articles and books, using the example of government architecture in Renaissance Florence (where we can see, in close proximity, buildings showing the dramatic evolution of the image of government), and about the forum for discussion of public policy. I consider the design and organization of that ‘forum’ one of the urgent challenges of our time.

EVALUATION IN THE PLANNING DISCOURSE — TIME AND EVALUATION OF PLANS

An effort to clarify the role of deliberative evaluation in the planning and policy-making process. Thorbjørn Mann, February 2020

TIME AND EVALUATION OF PLANS  (Draft, for discussion)

Inadequate attention to time in current common assessment approaches

Considering that evaluations of plans (especially ‘strategic’ plans) and policy proposals are by their very nature concerned with the future, it is curious that the role of time has not received more attention, even with the development of simulation techniques that aim at tracking the behavior of key variables of systems over many years into the future. The neglect of this question, for example in the education of architects, can be seen in the practice of judging students’ design project presentations on the basis of their drawings and models.

The exceptions — for example in building and engineering economics — look at very few performance variables, with quite sophisticated techniques: expected cost of building projects, ‘life cycle cost’, return on investment etc., to be put into relation to expected revenues and profit. Techniques such as ‘Benefit/Cost Analysis’, which in its simplest form considers those variables as realized immediately upon implementation, can also apply this kind of analysis to forecasting costs and benefits and comparing them over time, by methods for converting initial amounts (of money) to ‘annualized’ or future equivalents, or vice versa.
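For reference, a minimal sketch of the standard conversions alluded to here (present worth of a future amount, and the annualized equivalent of a present amount), assuming a constant discount rate; the rate and amounts are illustrative only.

def present_worth(future_amount, rate, years):
    """Discount a future amount to its present equivalent: P = F / (1 + r)^n."""
    return future_amount / (1 + rate) ** years

def annualized(present_amount, rate, years):
    """Spread a present amount into n equal annual payments:
    A = P * r * (1 + r)^n / ((1 + r)^n - 1)."""
    f = (1 + rate) ** years
    return present_amount * rate * f / (f - 1)

# e.g. a hypothetical 1,000,000 replacement cost due in 20 years, discounted at 5%,
# and a 500,000 initial cost spread over a 20-year life:
print(round(present_worth(1_000_000, 0.05, 20)))   # about 376,889
print(round(annualized(500_000, 0.05, 20)))        # about 40,121 per year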

Criticism of such approaches amounts to pointing out problems such as having to convert ‘intangible’ performance aspects (like public health, satisfaction, loss of lives) into money amounts to be compared (raising serious ethical questions), or, for entities like nations, the fact that the money amounts drawn from or entering the national budget hide controversies such as inequities in the distribution of the costs and benefits. Looking at the issue from the point of view of other evaluation approaches might at least identify the challenges in the consideration of time in the assessment of plans, and help guide the development of better tools.

A first point to note is that from the perspective of the formal evaluation process, for example (see e.g. the previous section on the Musso/Rittel approach), measures like the present value of future cost or profit, or the benefit-cost ratio, must be considered ‘criteria’ (measures of performance) for more general evaluation aspects, among a set of (goodness) evaluation aspects that each evaluator must weight for their relative importance to make up overall ‘goodness’ or quality judgments. (See the segments on evaluation judgments, criteria and criterion functions, and aggregation.) As such, the use of these measures alone as decision criteria must be considered incomplete and inappropriate. However, in those approaches the time factor is usually not treated with even the attention expressed in the above tools for discounting future costs and benefits to comparable present worth: for example, pro or con arguments in a live verbal discussion about expected economic performance often amount to mere qualitative comparisons or claims like ‘over the budget’ or ‘more expensive in the long run’.

Finally, in approaches such as the Pattern Language (which makes valuable observations about the ‘timeless’ quality of built environments, but does not consider explicit evaluation a necessary part of the process of generating such environments), there is no mention or discussion of how time considerations might influence decisions: the quality of designs is guaranteed by having been generated by the use of patterns, but the efforts to describe that quality do not include consideration of the effects of solutions over time.

Time aspects calling for attention in planning

Assessments of undesirable present or future states ‘if nothing is done’

The implementation of a plan is expected to bring about changes in a state of affairs that is felt to pose ‘problems’ (things not being as they ought to be) or ‘challenges’ and ‘opportunities’ calling for better, improved states of affairs. Many plans and policies aim at preventing future developments from occurring, either as distinctly ‘sudden’ events or as developments over time. Obviously, the degree of undesirability depends on the expected severity of these developments; they are matters of degree that must be predicted in order for the plan’s effectiveness to be judged.

The knowledge that goes into the estimates of future change comes from experience: observation of the pattern and rate of change in the past (even if that knowledge is taken to be well enough established to be considered a ‘law’). But not all such change tracks have been well enough observed and recorded in the past, so much estimating and judgment goes already into the assumptions about the changes over time in the past.

Individual assessments of future plan performance

Our forecasts for future changes ‘if nothing is done’, resting on such shaky past knowledge, must be considered less than 100% reliable. Should our confidence in the application of that knowledge to estimates of a plan’s future ‘performance’ then not be acknowledged as at best equally, and arguably less, certain — expressed as deserving a lower ‘plausibility’ qualifier? This would be expressed, for example, with the pl — plausibility — judgment for the relationship claimed in the factual-instrumental premise of an argument about the desirability of the plan effects: “Plan A will result (by virtue of the law or causal relationship R) in producing effect B”.

This argument should be (but is often not) qualified by adding the assumption ‘given the conditions C under which the relationship R will hold’: the conditions which the third (factual claim) premise of the ‘standard planning argument’ claims is — or will be — ‘given’.

Note: ‘Will be’: since the plan will be implemented in the future, this premise also involves a prediction. And to the extent the condition is not a stable, unchanging one but also a changing, evolving phenomenon, the degree of the desirable or undesirable effect B must be expected to change. And, to make things even more interesting and complex: as explained in the sections on argument assessment and systems modeling: the ‘condition’ is never adequately described by a single variable, but actually represents the  evolving state of the entire ‘system’ in which the plan will intervene.

This means that when two people exchange their assumptions and judgments, their opinions, about the effectiveness of the plan by citing its effect on B, they may well have very different degrees (or performance measures) in mind, occurring under very different assumptions about both R and C — and at different times.

Things become more fuzzy when we consider the likelihood that the desired or undesired effects are not expected to change things overnight, but gradually, over time. So how should we make evaluation judgments about competing plan alternatives when, for example, one plan promises rapid improvement soon after implementation (as measured by one criterion), but then slows down or even starts declining, while the other will improve at a much slower but more consistent rate? A mutually consistent evaluation must be based on agreed-upon measures of performance: measured at what future time? Over what future time period, aka ‘planning horizon’? And this question applies just to the prediction of the performance criterion — what about the plausibility and weight-of-importance judgments we need for a complete explanation of our judgment basis? Is it enough to apply the same plausibility factor to forecasts of trends decades in the future as to near-future predictions? As discussed in the segment on criteria, the crisp, fine forecast lines we see in simulation printouts are misleading: the line should really be a fuzzy track widening more and more the farther out in time it extends. Likewise: is it meaningful to use the same weight of relative importance for the assessment of effects at different times?
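To make the question concrete: a small sketch (with invented numbers) of two hypothetical plans whose yearly performance tracks differ. Which one ‘performs better’ depends entirely on the planning horizon chosen, and a plausibility factor that shrinks for years farther out makes the far-future differences count for even less; both the decay convention and the simple averaging are assumptions for illustration only.

# Hypothetical yearly performance scores for two competing plans (invented numbers):
# Plan X improves fast, then declines; Plan Y improves slowly but steadily.
plan_X = [8, 9, 8, 6, 4, 3, 2, 1, 1, 0]
plan_Y = [1, 2, 4, 5, 6, 7, 8, 9, 9, 9]

def horizon_score(track, horizon, pl_decay=0.95):
    # Assumed convention: average performance over the horizon, each year's value
    # qualified by a plausibility factor that shrinks the farther out it lies.
    qualified = [value * (pl_decay ** year) for year, value in enumerate(track[:horizon])]
    return sum(qualified) / horizon

for horizon in (3, 5, 10):
    print(f"horizon {horizon:2d} years:  X = {horizon_score(plan_X, horizon):.2f}"
          f"   Y = {horizon_score(plan_Y, horizon):.2f}")

With a short horizon Plan X looks clearly better; over the full ten years the ranking reverses.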

These considerations apply, so far, only to the explanation of individual judgments, and already show that it would be almost impossible to construct meaningful criterion functions and aggregation functions to get adequately ‘objectified’ overall deliberated judgment scores for individual participants in evaluation procedures.

Aggregation issues for group judgment indicators

The time-assessment difficulties described for individual judgments do not diminish in the task of constructing decision guides for groups based on the results of individual judgment scores. Reminder: to meet the ideal ‘democratic’ expectation that the community decision about a plan should be based on due consideration of ‘all’ concerns expressed by ‘all’ affected parties, the guiding indicator (‘decision guide’ or criterion) should be an appropriate aggregation statistic of all individual overall judgments. The above considerations show, to put it mildly, that it would be difficult enough to aggregate individual judgments into overall judgment scores, but even more so to construct group indicators that are based on the same assumptions about the time qualifiers entering the assessments.

This makes it understandable (but not excusable) why decision-makers in practice tend either to screen out the uncomfortable questions about time in their judgments, or to resort to vague ‘goals’ measured by vague criteria to be achieved within arbitrary time periods: “carbon-emission neutrality by 2050”, for example. How to choose between different plans or policies whose performance simulation forecasts do not promise 100% achievement of the goal, but only ‘approximations’ with different interim performance tracks, at different costs and other side-effects in society? But 2050 is far enough in the future to ensure that none of the decision-makers for today’s plans will be held responsible for today’s decisions…

‘Conclusions’?

The term ‘conclusion’ is obviously inappropriate if it is taken to refer to definitive answers to the questions discussed. These issues have just been raised, not resolved, which means that more research, experiments and discussion are called for to find better answers and tools. For the time being, the best recommendation that can be drawn from this brief exploration is that the decision-makers for today’s plans should routinely be alerted to these difficulties before making decisions, carry out the ‘objectification’ process for the concerns expressed in the discourse (of course facilitating a discourse with participation adequate to the severity of the challenge of the project), and then admit that any high degree of ‘certainty’ for proposed decisions is not justified. Decisions about ‘wicked problems’ are more like ‘gambles’, for which responsibility, ‘accountability’, must be assumed. If official decision-makers cannot assume that responsibility — as expressed in ‘paying’ for mistaken decisions — should they seek supporters to share that responsibility?

So far, this kind of talk is just that: mere empty talk, since there is at best only the vague and hardly measurable ‘reputation’ available as the ‘account’ from which ‘payment’ can be made — in the next election, or in history books. This does not prevent reckless mistakes in planning decisions: there should be better means for making the concept of ‘accountability’ more meaningful. (Some suggestions for this are sketched in the sections on the use of ‘discourse contribution credit points’, earned by decision-makers or contributed by supporters from their credit point accounts, and made the required form of ‘investment payment’ for decisions.) The needed research and discussion of these issues will have to consider new connections between the factors involved in evaluation for public planning.



— o —

EVALUATION IN THE PLANNING DISCOURSE — SYSTEMS THINKING, MODELING AND EVALUATION IN PLANNING

An effort to clarify the role of deliberative evaluation in the planning and policy-making process. Thorbjørn Mann , February 2020. (DRAFT)

SYSTEMS THINKING / MODELING AND EVALUATION IN PLANNING

 

Evaluation and Systems in Planning  — Overview

The contribution of systems perspective and tools to planning.

In just about any discourse about improving approaches to planning and policy-making, there will be claims containing references to ‘systems’: ‘systems thinking’, ‘systems modeling and simulation’, the need to understand ‘the whole system’, the counterintuitive behavior of systems. Systems thinking as a whole mental framework is described as ‘humanity’s currently best tool for dealing with its problems and challenges’. There are by now so many variations, sub-disciplines, approaches and techniques, even definitions of systems and systems approaches on the academic as well as the consulting market, that even a cursory description of this field would become a book-length project.

The focus here is the much narrower issue of the relationship between this ‘systems perspective’ and various evaluation tasks in the planning discourse. This sketch will necessarily be quite general, not doing adequate justice to many specific ‘brands’ of systems theory and practice. However, looking at the subject from the planning / evaluation perspective will identify some significant issues that call for more discussion.

Evaluation judgments at many stages of systems projects and planning

A survey of many ‘systems’ contributions reveals that ‘evaluation’ judgments are made at many stages of projects claiming to take a systems view – much like the finding that evaluation takes place at the various stages of planning projects whether explicitly guided by systems views or not. Those judgments are often not even acknowledged as ‘evaluation’, and are made according to very different patterns of evaluation (as described in the sections exploring the variety of evaluation judgment types and procedures).

The similar aims of systems thinking and evaluation in planning

Systems practitioners feel that their work contributes well (or ‘better’ than other approaches) to the general aims of planning, such as:
– to understand the ‘problem’ that initiates planning efforts;
– to understand the ‘system’ affected by the problem, as well as
– the larger ‘context’ or ‘environment’ system of the project;
– to understand the relationships between the components and agents, especially the ‘loops’ of such relationships that generate the often counterintuitive and complex systems behavior;
– to understand and predict the effects (costs, benefits, risks) and performance of proposed interventions in those systems (‘solutions’) over time, both ‘desired’ outcomes and potentially ‘undesirable’ or even unexpected side- and after-effects;
– to help planners develop ‘good’ plan proposals,
– and to reach recommendations and/or decisions about plan proposals that are based on due consideration of all concerns for parties affected by the problem and proposed solutions, and of the merit of ‘all’ the information, contributions, insights and understanding brought into the process.
– To the extent that those decisions and their rationale must be communicated to the community for acceptance, these investigations and judgment processes should be represented in transparent, accountable form.

Judgment in early versus late stages of the process

Looking at these aims, it seems that ‘systems-guided’ projects tend to focus on the ‘early’ information (data) gathering and ‘understanding’ aspects of planning, more than on the decision-making activities. These ‘early’ activities do involve judgments of many kinds, aiming at understanding ‘reality’ based on the gathering and analysis of facts and data. The validity of these judgments is drawn from standards of what may loosely be called ‘scientific method’: proper observation, measurement, statistical analysis. There is no doubt that systems modeling, looking at the components of the ‘whole’ system and the relationships between them, and the development of simulation techniques have greatly improved the degree of understanding both of the problems and of the context that generates them, as well as the prediction of the effects (performance) of proposed interventions: of ‘solutions’. Less attention seems to be given to the evaluation processes leading up to decisions in the later stages. Some justifications and guiding attitudes can be distinguished that explain this:

Solution quality versus procedure-based legitimization of decisions

One attitude, building on the ‘scientific method’ tools applied in the data-gathering and model-building phases, aims at finding ‘optimal’ (ideally, or at least ‘satisficing’) solutions described by performance measures from the models. Sophisticated computer-assisted models and simulations are used to do this; the performance measures (which must be quantifiable in order to be calculated) are derived from ‘client’ goal statements or from surveys of affected populations, as interpreted by the model-building consultants: experts. On the one hand, their expert status is then used to assert the validity of the results. On the other hand, this practice is increasingly criticized for its lack of transparency to the lay populations affected by problems and plans, questioning the experts’ legitimacy to make judgments ‘on behalf of’ affected parties. If there are differences of opinion or conflicts about model assumptions, these are ‘settled’ – must be settled – by the model builders in order for the programs to yield consistent results.

This practice (which Rittel and other critics called the ‘first generation systems approach’) was seen as a superior alternative to traditional ways of generating planning decisions: the discussions in assemblies of people or their representatives, characterized by raising questions and debating the ‘pros and cons’ of proposed solutions – but then making decisions by majority voting or by accepting the decisions of designated or self-designated leaders. Both of these decision modes obviously do not meet all of the postulated expectations in the list above: voting implies dominance of the interests of the ‘majority’ and potential disregard of the concerns of the minority; leaders’ decisions could lack transparency (much like expert advice), leading to public distrust of the leader’s claim of having given due consideration to ‘all’ concerns affecting people.

There were then some efforts to develop procedures (e.g. formal evaluation procedures) or tools, such as the widely used but also widely criticized ‘Benefit-Cost’ analysis, that tried to extend the ‘calculation-based’ development of valid performance measures into the stage of criteria based on the assessment of solution quality, to guide decisions. These were not equally widely adopted, for various reasons such as the complicated and burdensome procedures, again requiring experts to facilitate the process but arguably making public participation more difficult. A different path is the tendency to make basic ‘quality’ considerations ‘mandatory’ as regulations and laws, or ‘best practice’ standards. Apart from tending to set ‘minimum’ quality levels as requirements, e.g. for building permits, this represents a movement to combine or entirely replace quality-based planning decision-making with decisions that draw their legitimacy from having been generated by following prescribed procedures.

This trend is visible in approaches that specify procedures to generate solutions by using ‘valid’ solution components or features postulated by a theory (or by laws): having followed those steps then validates the solution generated and removes the necessity to carry out any complicated evaluation procedure. An example of this is Alexander’s ‘Pattern Language’ – though the ‘systems’ aspect is not as prevalent in that approach. Interestingly, the same stratagem is visible in movements that focus on processes aimed at the mindsets of groups participating in special events: ‘increasing awareness’ of the nature and complexity of the ‘whole system’, but then relying on solutions ‘emerging’ from the resulting greater awareness and understanding, and aiming at consensus acceptance in the group for the results generated, which then do not need further examination by more systematic, quantity-focused deliberation procedures. The invoked ‘whole system’ consideration, together with a claimed scientific understanding of the true reality of the situation calling for planning intervention, is part of inducing that acceptance and legitimacy. A telltale feature of these approaches is that debate, argument, and the reasoned scrutiny of supporting evidence involving opposing opinions tend to be avoided or ‘screened out’ in the procedures generating collective ‘swarm’ consensus.

The controversy surrounding the role of ‘subjective’, feeling-based, intuitive judgments versus ‘objective’ measurable, scientific facts (not just opinions) as the proper basis for planning decisions also affects the role of systems thinking contributions to the planning process.

None of the ‘systems’ issues related to evaluation in the planning process can be considered ‘settled’ and needing no further discussion. The very basic ‘systems’ diagrams and models of planning may need to be revised and expanded to address the role and significance of evaluation, as well as argumentation, the assessment of the merit of arguments and other contributions to the discourse, and the development of better decision modes for collective planning decision-making.

–o–

EVALUATION IN THE PLANNING DISCOURSE: PROCEDURE EXAMPLE 2: EVALUATION OF PLANNING ARGUMENTS


An effort to clarify the role of deliberative evaluation in the planning and policy-making process. Thorbjørn Mann, January 2020. (Draft)

PROCEDURE EXAMPLE 2:
EVALUATION OF PLANNING ARGUMENTS (PROS & CONS)

Argument evaluation in the planning discourse

Planning, like design, can be seen as an argumentative process (Rittel): ideas and proposals are generated, and questions are raised about them. The typical planning issues — especially the ‘deontic’ (ought-) questions about what the plan ought to be and how it can be achieved — generate not only answers but arguments: the proverbial ‘pros and cons’. The information needed to make meaningful decisions, based on ‘due consideration’ of all concerns by all parties affected by the problem the plan is aiming to remedy as well as by any solution proposals, often comes mainly via those pros and cons. Taking this view seriously, it becomes necessary to address the question of how those arguments should be evaluated or ‘weighed’. After all, those arguments support contradictory conclusions (claims), so just ‘considering’ them is not quite enough.

Argumentation as a cooperative rather than adversarial interaction

The very concept of the ‘argumentative view’ of planning is somewhat controversial, because many people misunderstand ‘argument’ itself as a nasty adversarial, combative, uncooperative phenomenon, a ‘quarrel’. (I have suggested the label ‘quarrgument’ for this.) But ‘argument’ is originally understood as a set of claims (premises) that together support another claim, the ‘conclusion’. For planning, arguments are items of reasoning that explore the ‘pros and cons’ about plans; and an important underlying assumption is that we ‘argue’ (that is, exchange arguments with others) because we believe that the other will accept or consider the position about the plan we are talking about because the other already believes or accepts the premises we offer, or will do so once we offer the additional support we have for them. It is unfortunate that even recent research on computer-assisted argumentation seems to be stuck in the ‘adversarial’ view of arguments, seeing arguments as ‘attacks’ on opposing positions rather than as a cooperative search for a good planning response to problems or visions for a better future.

‘Planning arguments’

There is another critical difference between the arguments discussed in traditional logic textbooks and the kinds I call ‘planning arguments’: the traditional concern of argumentation was to establish the truth or falsity of claims about the world, with the expectation that the discussion — the assessment of arguments — will ‘settle’ that question in favor of one or the other. This does not apply to planning arguments: the planning decision does not rest on single ‘clinching’ arguments but on the assessment of the entire set of pros and cons. There are always real expected benefits and real expected costs, and as the proverbial saying has it, they must be ‘weighed’ against one another to lead to a decision. There has not been much concern about how that ‘weighing’ can or should be done, and how that process might lead to a reasoned judgment about whether to accept or reject a proposed plan. I have tried to develop a way to do this — a way to explain what our judgments are based on — beginning with an examination of the structure of ‘planning arguments’.

The structure of planning arguments and their different types of premises

I suggest that planning arguments can be represented in the following general ‘standard planning argument’ form, the simplest version being this ‘pro’ argument pattern:

Proposal ‘ought’ claim (‘conclusion’):  Proposal PLAN A ought to be adopted
because
1. Factual-instrumental premise:         Implementing PLAN A will lead to outcome B
                                                                     given conditions C
and
2. Deontic premise:                                  Outcome B ought to be pursued;
and
3. Factual premise:                                  Conditions C are (or will be) given.

This form is not conclusively ‘valid’ in the formal logic sense; it is considered ‘inconclusive’ and ‘defeasible’. There are usually many such pros and cons supporting or questioning a proposal: no single argument (other than evidence pointing out flaws of logical inconsistency or lacking feasibility, leading to rejection) will be sufficient to make a decision. Any evaluation of planning arguments must therefore be embedded in a ‘multi-criteria’ analysis and aggregation of judgments into the overall decision.
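As a reading aid only, the following minimal sketch (in Python; all names are hypothetical, and nothing here is prescribed by the approach described in this text) shows how a single ‘standard planning argument’ with its three premises might be recorded for later plausibility and weight assignments.

from dataclasses import dataclass

@dataclass
class PlanningArgument:
    conclusion: str            # "Proposal PLAN A ought to be adopted"
    factual_instrumental: str  # "Implementing PLAN A will lead to outcome B, given conditions C"
    deontic: str               # "Outcome B ought to be pursued"
    factual: str               # "Conditions C are (or will be) given"

example = PlanningArgument(
    conclusion="Plan A ought to be adopted",
    factual_instrumental="Implementing Plan A will lead to outcome B, given conditions C",
    deontic="Outcome B ought to be pursued",
    factual="Conditions C are (or will be) given",
)
print(example.deontic)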

It will become evident that all the judgments people make will be personal ‘subjective’ judgments, not only about the deontic (ought) premise but even about the validity and salience of the ‘factual’ premises: they are all estimates about the future — not yet validated by observation and measurement.

The judgment types of planning argument premises:
‘plausibility’ and weight of importance

There are two kinds of judgments that will be needed. The first is an assessment of the ‘plausibility’ of each claim. The term ‘plausibility’ here includes the familiar ‘truth’ (or degree of certainty or probability about the truth) of a claim, and the advisability, acceptability, desirability of the deontic claim. It can be expressed as a judgment on a scale of, e.g., -1 to +1, with -1 meaning complete implausibility, +1 expressing ‘total plausibility’ or virtual certainty, and the center point of zero meaning ‘don’t know, can’t judge’. The second is a judgment about the ‘weight of relative importance’ of the ‘ought’ aspect. It can be expressed, e.g., by a score between zero (meaning ‘totally unimportant’) and +1 (meaning ‘totally important’, overriding all other aspects); the sum of all the weights of deontic premises must be equal to +1.

Argument plausibility

The first step would be the assessment of plausibility of the entire single argument, which would be a function of all three premise plausibility scores to result in an ‘Argument plausibility’ score.

For example, an argument i with pl(1) = 0.5, pl(2) = 0.8, and pl(3) = 0.9 might get an argument plausibility of Argpl(i) = 0.5 x 0.8 x 0.9 = 0.36.
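For readers who want to see the arithmetic spelled out, here is a minimal sketch in Python of the multiplication rule just described; the function name is illustrative only, and the numbers are those of the example above.

# Sketch: argument plausibility as the product of the premise
# plausibilities, each judged on the -1..+1 scale described above.
def argument_plausibility(premise_plausibilities):
    result = 1.0
    for pl in premise_plausibilities:
        result *= pl
    return result

# The example from the text: pl(1) = 0.5, pl(2) = 0.8, pl(3) = 0.9
print(round(argument_plausibility([0.5, 0.8, 0.9]), 2))  # prints 0.36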

Argument weight of relative importance

The second step would be to assess the ‘argument weight’ of each argument, which can be done by multiplying the weight of relative importance of its deontic premise (premise 2 in the pattern above) with the argument plausibility:    Argw(i) = Argpl(i) x w(i).
That weight will again be a value between zero (meaning ‘totally unimportant’) and +1 (meaning ‘all-important’, i.e. overriding all other considerations). It should be the result of the establishment of a ‘tree’ of deontic concerns (similar to the ‘aspects’ of the ‘Formal evaluation’ procedure in procedure example 1) that gives each deontic claim its proper place as a main aspect, sub-aspect, sub-sub-aspect or ‘criterion’ in the aspect tree, and assigns weights between 0 and 1 such that these add up to 1 at each level.

A deontic claim located at the second level of the aspect tree, having been assigned a weight of 0.8 at that level, and being a sub-aspect of an aspect at the first level with a weight of 0.4 at that level, would have a premise weight of w = 0.8 x 0.4 = 0.32. The argument weight, with an argument plausibility of 0.36, would then be Argw(i) = 0.36 x 0.32 = 0.1152 (rounded to 0.12).
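The same arithmetic, continued for the argument weight: again a minimal Python sketch using the numbers of the example above, with no claim that this is the only way to implement the step.

# Sketch: the deontic premise weight is the product of the weights along
# its path in the aspect tree; the argument weight is that premise weight
# multiplied by the argument plausibility.
def premise_weight(path_weights):
    w = 1.0
    for level_weight in path_weights:
        w *= level_weight
    return w

w = premise_weight([0.4, 0.8])       # first-level weight 0.4, second-level weight 0.8
arg_weight = 0.36 * w                # argument plausibility 0.36 from the earlier example
print(round(w, 2), round(arg_weight, 2))   # prints 0.32 0.12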

Plan plausibility

All the argument weights could then be aggregated into the overall ‘plan plausibility’ score, for example by adding up all argument weights:
Planpl = ∑ Argw(i) for all argument weights i (of an individual participant)

Of course, there are other possible aggregation forms. (See the sections on ‘Aggregation’ and ‘Decision Criteria’.) Which one of these should be used in any specific case must be specified — agreed upon — in the ‘procedural agreements’ governing each planning project.

It should be noted that in a worksheet simply listing all arguments with their premises for plausibility and weight assignments, there is no need to identify arguments as ‘pro’ or ‘con’, as intended by their respective authors. Any argument given a negative premise plausibility by a participant will automatically end up getting a negative argument weight and thus becoming a ‘con’ argument for that participant — even if the argument was intended by its author as a ‘pro’ argument. This makes it obvious that all such assessments are individual, subjective judgments, even if the factual and factual-instrumental premises of arguments are considered ‘objective-fact’ matters.
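To show how such a worksheet might be aggregated for one participant, here is a minimal Python sketch with purely hypothetical rows; note the second row, whose negative premise plausibility automatically turns it into a ‘con’ argument for that participant, as described above.

# Sketch of one participant's worksheet: each row holds the premise
# plausibilities of one argument and the weight of its deontic premise
# (the deontic weights sum to 1, as required above).
worksheet = [
    ([0.5, 0.8, 0.9],   0.32),   # a 'pro' argument
    ([-0.6, 0.7, 0.9],  0.20),   # intended as 'pro', but one premise judged implausible
    ([0.8, 0.9, 0.95],  0.48),
]

def plan_plausibility(rows):
    total = 0.0
    for plausibilities, deontic_weight in rows:
        arg_pl = 1.0
        for pl in plausibilities:
            arg_pl *= pl                  # Argpl(i): product of premise plausibilities
        total += arg_pl * deontic_weight  # Argw(i) = Argpl(i) x w(i)
    return total                          # Planpl = sum of Argw(i)

print(round(plan_plausibility(worksheet), 3))   # about 0.368 for this participant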

The process of evaluation of planning arguments within the overall discourse

The diagram below shows the argument assessment process as it will be embedded in an overall discourse. Its central feature is the ‘Next Step?’ decision, invoked after each major activity. It lets the participants in the effort decide — according to rules specified in those procedural agreements — how deeply into the deliberation process they wish to proceed: they could decide to go ahead with a decision after the first set of overall offhand judgments, skipping the detailed premise analysis and evaluation if they feel sufficiently certain about the plan.

Process of argument assessment within the overall discourse

The use of overall plan plausibility scores:
Group statistics of the set of individual plan plausibility scores.

It may be tempting to use the overall plan plausibility scores directly as decision guides or determinants. For example, to determine a statistic such as the average of all individual scores Planpl(j) for the participants j in the assessment group, as an overall ‘group plausibility score’ GPlanpl, e.g. GPlanpl = 1/n ∑ Planpl(j) for all n members of the panel.

And in evaluating a set of competing plan alternatives: to select the proposal with the highest ‘group plausibility’ score.
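As a purely illustrative computation of the statistic just described (which, as the next paragraph argues, should not be used mechanically as a decision criterion), here is a minimal Python sketch with hypothetical scores.

# Sketch: group plausibility as the mean of individual plan plausibility
# scores, and its (tempting but questionable) use to rank alternatives.
def group_plausibility(individual_scores):
    return sum(individual_scores) / len(individual_scores)

alternatives = {
    "Plan A": [0.37, 0.12, -0.05],   # Planpl(j) of the n panel members j
    "Plan B": [0.20, 0.25, 0.18],
}

for name, scores in alternatives.items():
    print(name, round(group_plausibility(scores), 3))

best = max(alternatives, key=lambda name: group_plausibility(alternatives[name]))
print("highest group plausibility:", best)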
Such temptations should be resisted, for a number of reasons, such as: whether a discussion has succeeded in bringing in all pertinent items that should be given ‘due consideration’; the concern that planning arguments tend to be of a ‘qualitative’ nature and often don’t easily address quantitative measures of performance; questions regarding principles and the time frame of expected plan effects and consequences; whether and how issues of ‘quality’ of a plan are adequately addressed in the form of arguments; and the question of the appropriate ‘social aggregation’ criterion to be applied to the problem and plan in question. In short, many open questions remain:

Open questions

Likely incompleteness of the discussion
It is argued that participation of all affected parties and a live discussion will be more likely to bring out the concerns people are actually worried about than, e.g., reliance on general textbook knowledge by panels or surveys made up by experts who ‘don’t live there’. But even the assumption that the discussion guarantees complete coverage is unwarranted. For example, is somebody likely to raise an issue about a plan feature that they know will affect another party negatively (when they expect the plan to be good for their own faction), if the other party isn’t aware enough of this effect and does not raise it? Likewise, some things may be considered so much a matter ‘of course’ that nobody considers it necessary to mention them. So unless the overall process includes several different means of getting such information — systems modeling, simulation, extensive scrutiny of other cases etc. — the argumentative discussion alone can’t be assumed to be sufficient to bring up all needed information.

Quantitative aspects in arguments.
The typical planning argument will usually be framed in ‘qualitative’ terms rather than quantitative measures. For example, in an argument that ‘The plan will be more sustainable than the current situation’, this matters for the plausibility assessment: the claim can be seen as quite plausible as long as there is some evidence of sustainability improvement, so participants may be inclined to give it a high pl-score close to +1. By comparison, if somebody makes the same argument but now claims a specific ‘sustainability’ performance measure — one that others may consider too optimistic, and therefore assign a plausibility score closer to zero or even slightly negative — how will that affect the overall assessment? What procedural provisions would be needed to adequately deal with this question?

The issue of ‘quality’ or ‘goodness’ of a proposed solution.
It is of course possible for a discussion to examine the quality or ‘goodness’ of a plan in detail, but as mentioned above, this will likely also be in general, qualitative terms, and it is often even avoided because of the general acceptance of sayings like ‘you can’t argue about beauty’. So the discussion will have some difficulty in this respect, if it mentions beauty at all, or spiritual value, or the appropriateness of the resulting image. Likewise, requirements for the implementation of the plan, such as meeting regulations, may not be discussed.

The decreasing plausibility ‘paradox’
Arguably, all ‘systematic’ reasoning efforts, including discussion and debate, aim at giving decision-makers a higher degree of certainty about their final judgment than, say, fast offhand intuitive decisions. However, it turns out that the greater the depth and breadth of the discussion, the more the final plausibility judgment scores will tend to end up closer to the ‘zero’ or ‘don’t know’ plausibility — if the plausibility assessment is done honestly and seriously, and the aggregation method suggested above is used: multiplying the plausibility assessments for the various premises (which for the factual premises will be probability estimates). These judgments being all about future expectations, they cannot honestly be given +1 (‘total certainty’) scores or even scores close to it, the less so, the farther out in the future the effects are projected. This result can be quite disturbing and even disappointing to many participants when final scores are compared with initial ‘offhand’ judgments.
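A small numerical illustration of this effect: a Python sketch with an assumed, uniform plausibility of 0.8 per premise, chosen only for the example, showing how honest multiplication drives the result toward the ‘don’t know’ value of zero.

# Sketch of the 'decreasing plausibility' effect: each additional premise,
# honestly judged at less than total certainty, shrinks the product.
pl = 1.0
for n in range(1, 11):
    pl *= 0.8
    print(n, round(pl, 3))
# After 10 premises at 0.8 each, the product is about 0.107 -- far closer
# to 'don't know' (zero) than to the individual judgments of 0.8.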
Other issues related to time have often been inadequately dealt with in evaluation of any kind:

Estimates of plan consequences over time
All planning arguments express people’s expectations of the plan’s effects in the future. Of course, we know that there are relatively few cases in which a plan or action will generate results that materialize immediately upon implementation and then stay that way. So what do we mean when we offer an argument that a plan ‘will improve society’s overall health’ — even resorting to ‘precise’ statistical indices like mortality rates or life expectancy? We know that these figures will change over time; one proposed policy will bring more immediate results than another, but the other will have a better effect in the long run; and again, the farther into the future we look, the less certain we must be about our prediction estimates. These things are not easily expressed in even carefully crafted arguments supported by the requisite statistics: how should we score their plausibility?

Tentative insights, conclusions?

These ‘not fully resolved / more work needed’ questions may seem to strengthen the case for evaluation approaches other than trying to draw support for planning decisions from discourse contributions, even with more detailed assessment of arguments than shown here (examining the evidence and support for each premise). However, the problems emerging from the examination of the argumentative process affect other evaluation tools as well, and I have not seen approaches that resolve them all more convincingly. So some first tentative conclusions are that planning debate and discourse — too familiar and accessible to experts and lay people alike to be dismissed in favor of other methods — would benefit from enhancements such as the argument assessment tools; but also, opportunities and encouragement should be offered to draw upon other tools, as called for by the circumstances of each case and the complexity of the plans.

These techniques and methods should be made available for use by experts and lay discourse participants in a ‘toolkit’ part of a general planning discourse support platform — not as mandatory components of a general-purpose, one-size-fits-all planning method, but as a repository of tools for creative innovation and expansion. Because plans, as well as the processes that generate plans, define those involved as ‘the creators of that plan’, there will be a need to ‘make a difference’, to make it theirs: by changing, adapting, expanding and using the tools in new and different ways, besides inventing new tools in the process.

References:
Rittel, Horst: “APIS: A Concept for an Argumentative Planning Information System”. Institute of Urban and Regional Development, University of California at Berkeley, 1980. A report about research activities conducted for the Commission of European Communities, Directorate General XIIA.
–o–

 

 

EVALUATION IN THE PLANNING DISCOURSE: SAMPLE EVALUATION PROCEDURES EXAMPLE 1: FORMAL ‘QUALITY‘ EVALUATION

Thorbjørn Mann,  January 2020

In the following segments, a few example procedures for evaluation by groups will be discussed, to illustrate how the various parts of the evaluation process are selectively assembled into a complete process aiming at a decision (or a recommendation for a decision) about a proposed plan or policy, and to facilitate understanding of the way the different provisions and choices related to the evaluation task that are reviewed in this study can be assembled into practical procedures for specific situations. The examples are not intended to be universal recommendations for use in all situations. They all will — arguably — call for improvement as well as adaptation to the specific project and situation at hand.

A common evaluation situation is that of a panel of evaluators comparing a number of proposed alternative plan solutions to select or recommend the ‘best’ choice for adoption, or — if there is only one proposal — to determine whether it is ‘good enough’ for implementation. It is usually carried out by a small group of people assumed to be knowledgeable in the specific discipline (for example, architecture) and reasonably representative of the interests of the project client (which may be the public). The rationale for such efforts, besides aiming for the ‘best’ decision, is the desire to ensure that the decision will be based on good expert knowledge, but also on transparency, legitimacy and accountability of the process — to justify the decision. The outcome will usually be a recommendation to the actual client decision-makers rather than the actual adoption or implementation decision, based on the group’s assessment of the ‘goodness’ or ‘quality’ of the proposed plan, documented in some form. (It will be referred to as a ‘Formal Quality Evaluation’ procedure.)

There are of course many possible variations of procedures for this task. The sample procedure described in the following is based on the Musso-Rittel (1) procedure for the evaluation of the ‘goodness’ or quality of buildings.

The group will begin by agreeing on the procedure itself and its various provisions: the steps to be followed (for example, whether evaluation aspects and weighting should be worked out before or after presentation of the plan or plan alternatives), general vocabulary, judgment and weighting scales, aggregation functions both for individual overall judgments and group indices, and decision rules for determining its final recommendation.

Assuming that the group has adopted the sequence of first establishing the evaluation aspects and criteria against which the plan (or plans) will be judged, the first step will be a general discussion of the aspects and sub-aspects to be considered, resulting in the construction of the ‘aspect tree’ of aspects, sub-aspects, sub-sub-aspects etc. (ref. the section on aspects and aspect trees) and criteria (the ‘objective’ measures of performance; ref. the section on evaluation criteria). The resulting tree will be displayed and become the basis for scoring worksheets.

The second step will be the assignment of aspect weights, on a scale of zero to 1 and such that at each level of the ‘tree’ the sum of weights at that level will be 1. Panel members will develop their own individual weightings. This phase can be further refined by applying ‘Delphi Method’ steps: establishing and displaying the mean / median and extreme weighting values, then asking the authors of extremely low or high weights to share and discuss their reasoning for these judgments, and giving all members the chance to revise their weights.
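A minimal Python sketch of how this weighting step might be supported on a worksheet; the member names, the weights, and the tolerance are all hypothetical, and the Delphi-style display simply shows the group’s spread of weights for a single aspect.

# Sketch: check that one member's weights at a tree level sum to 1,
# and display the group's spread of weights for one aspect so that
# authors of extreme values can be asked to explain their reasoning.
from statistics import mean, median

member_1_level_1 = [0.40, 0.35, 0.25]            # one member's first-level weights
assert abs(sum(member_1_level_1) - 1.0) < 1e-9   # weights at a level must sum to 1

weights_for_aspect = {"member 1": 0.40, "member 2": 0.25, "member 3": 0.70}
values = list(weights_for_aspect.values())
print("mean:", round(mean(values), 2), "median:", round(median(values), 2),
      "min:", min(values), "max:", max(values))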

Once the weighted evaluation aspect trees have been established, the next step will be the presentation of the plan proposal or competing alternatives.

Each participant will assign a first ‘overall offhand’ quality score (on the agreed-upon scale, e.g. -3 to +3) to each plan alternative.

The group’s statistics of these scores are then established and displayed. This may help to decide whether any further discussion and detailed scoring of aspects will be needed: there may be a visible consensus for a clear ‘winner’. If there are disagreements, the group decides to go through with the detailed evaluation, and the initial scores are kept for later comparison with the final results, using common worksheets or spreadsheets of the aspect tree for panel members to fill in their weighting and quality scores. This step may involve the drawing of ‘criterion functions’ (ref. the section on evaluation criteria and criterion functions) to explain how each participant’s quality judgments depend on (objective) criteria or performance measures. These diagrams may be discussed by the panel. They should be considered each panel member’s subjective basis of judgment (or representation of the interests of factions in the population of affected parties). However, some such functions may be mandated by official regulations (such as building regulations). The temptation to urge adoption of common (group) functions (for ‘simplicity’ and expression of ‘common purpose’) should be resisted, to avoid possible bias towards the interests of some parties at the expense of others.
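To make the idea of a ‘criterion function’ concrete, here is a minimal Python sketch of one participant’s mapping from an objective performance measure to a quality score on the -3 to +3 scale; the criterion (walking time to the nearest park) and all anchor values are hypothetical, not taken from the procedure described above.

# Sketch of a piecewise-linear criterion function: quality judgment as a
# function of an objective performance measure (minutes of walking time).
def quality_score(walk_minutes):
    best, worst = 5.0, 30.0      # 5 min or less scores +3; 30 min or more scores -3
    if walk_minutes <= best:
        return 3.0
    if walk_minutes >= worst:
        return -3.0
    # linear interpolation between the two anchor points
    return 3.0 - 6.0 * (walk_minutes - best) / (worst - best)

for minutes in (5, 10, 17.5, 25, 35):
    print(minutes, round(quality_score(minutes), 2))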

Each group member will then fill in the scores for all aspects and sub-aspects etc. The results will be compiled, and the statistics compared; extreme differences in the scoring will be discussed, and members given the chance to change their assessments. This step may be repeated as needed (e.g. until there are no further changes in the judgments).

The results are calculated and the group recommendation determined according to the agreed-upon decision criterion. The ‘deliberated’ individual overall scores are compared with the members’ initial ‘offhand’ scores. The results may cause the group to revise the aspects, weights, or criteria (e.g. upon discovering that some critical aspect has been missed), or to call for changes in the plan, before determining the final recommendation or decision (again, according to the initial procedural agreements).
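For illustration, a minimal Python sketch of the calculation step for one panel member: quality scores on the -3 to +3 scale, weighted through a two-level aspect tree whose weights sum to 1 at each level. The tree, its labels, and all numbers are hypothetical.

# Sketch: one member's overall quality score as the weighted sum of
# sub-aspect scores, rolled up through the aspect tree.
aspect_tree = {                     # aspect: (weight, {sub-aspect: (weight, score)})
    "function":  (0.5, {"workspace quality": (0.6, 2.0), "circulation": (0.4, 1.0)}),
    "economy":   (0.3, {"first cost": (0.7, -1.0), "operating cost": (0.3, 2.0)}),
    "image":     (0.2, {"appearance": (1.0, 3.0)}),
}

def overall_score(tree):
    total = 0.0
    for weight, subaspects in tree.values():
        sub_total = sum(w * score for w, score in subaspects.values())
        total += weight * sub_total
    return total

print(round(overall_score(aspect_tree), 2))   # about 1.37 on the -3..+3 scale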

The steps are summarized in the following ‘flow chart’.

Evaluation example 1: Steps of a ‘Group Formal Quality Evaluation’

Questions related to this version of a formal evaluation process may include the issue of potential manipulation of weight assignments by changing the steepness of the criterion function.
Ostensibly, the described process aims at ‘giving due consideration’ to all legitimately ‘pertinent’ aspects, while eliminating or reducing the role of ‘hidden agenda’ factors. Questions may arise whether such ‘hidden’ concerns might be hidden behind other plausible but inordinately weighted aspects. A question that may arise from discussions and argumentation about controversial aspects of a plan, and from the examination of how such arguments should be assessed (ref. the section on a process for Evaluation of Planning Arguments), is the role of plausibility judgments about the premises of such arguments: especially the probability of assumption claims that a plan will actually result in a desired or undesired outcome (an aspect). Should the ‘quality assessment’ process include a modification of quality scores based on plausibility / probability scores, or should this concern be explicitly included in the aspect list?

The process may of course seem ‘too complicated’, and if done by ‘experts’, it invites critical questions whether the experts really can overcome their own interests, biases and preconceptions to adequately consider the interests of other, less ‘expert’ groups. The procedure obviously assumes a general degree of cooperativeness in the panel, which sometimes may be unrealistic. Are more adequate provisions needed for dealing with incompatible attitudes and interests?

Other questions? Concerns? Missing considerations?

–o–

EVALUATION IN THE PLANNING DISCOURSE: ASPECTS and ‘ASPECT TREES’

An effort to clarify the role of deliberative evaluation in the planning and policy-making process.  Thorbjørn Mann,  January 2020

The questions surrounding the task of assembling ‘all’ aspects calling for ‘due consideration’.

 

ASPECTS AND ASPECT TREE DISPLAYS

Once an evaluation effort begins to get serious about its professed aims of deliberating, of making overall judgments a transparent function of partial judgments, of ‘weighing all the pros and cons’, of trying not to forget anything significant and to avoid missing things that could lead to ‘unexpected’ adverse consequences of a plan (but that could be anticipated with some care), the people involved will begin to create ‘lists’ of items that ‘should be given due consideration’ before making a decision. One label for these things is ‘aspects’, originally meaning just looking at the object (plan) to be decided upon from different points of view.

A survey of different approaches to evaluation shows that there are many different such labels ‘on the market’ for these ‘things to be given due consideration’. And many of them — especially the many evaluation, problem-solving and systems-change consultant brands that compete for commissions to help companies and institutions cope with their issues — come with very different recommendations for the way this should be done. The question for the effort to develop a general public planning discourse support platform for dealing with projects and challenges that affect people in many governmental and commercial ‘jurisdictions’ — ultimately: ‘global’ challenges — then becomes: How can and should all these differences in the way people talk about these issues be accommodated in a common platform?

Whether a common ground for this can be found — or a way to accommodate all the different perspectives, if a common label can’t be agreed upon — depends upon a scrutiny of the different terms and their procedural implications. This is a significant task in itself, one for which I have not seen much in the way of inquiry and suggestions (other than the ‘brands’ recommendations for adopting ‘their’ terms and approach.) So raising this question might be the beginning of a sizable discussion in itself (or a survey of existing work I haven’t seen). Pending the outcome of such an investigation, many of the issues raised for discussion in this series of evaluation issues will continue to use the term ‘aspect’, with apologies to proponents of other perspectives.

This question of diversity of terminology is only one reason for needed discussion, however. Another reason has to do with the possibility of bias in the very selection of terms, depending on the underlying theory or method, or on whether the perspective is focused on some ‘movement’ that by its very nature puts one main aspect at the center of attention (‘competitive strength and growth’, ‘sustainability’, ‘regeneration’, ‘climate change’, ‘globalization’ versus ‘local culture’, etc.). There are many efforts to classify or group aspects — starting with Vitruvius’ three main aspects ‘firmness, convenience and delight’, to the simple ‘cost, benefit, and risk’ grouping, or the recent efforts that encourage participants to explore aspects from different groups of affected or concerned parties, mixed in with concepts such as ‘principles’ and best and worst expected outcomes, shown in a ‘canvas’ poster for orientation. Are these efforts encouraging the contribution of information from the public, or are they giving the impression of adequate coverage while inadvertently missing significant aspects? It seems that any classification scheme of aspects is likely to end up neglecting or marginalizing some concerns of affected parties.

Comparatively minor questions are about potential mistakes in applying the related tools: listing preferred or familiar means of plan implementation as aspects representing goals or concerns, for example, or listing essentially the same concern under different labels (and thus weighing it twice…). The issue of functional relationships between different aspects — a main concern of systems views of a problem situation — is one that is often not well represented in the evaluation work tools. A major potential controversy is, of course, the question of who is doing the evaluation, whose concerns are represented, and what source of information a team will draw upon to assemble the aspect list.

It may be useful to look at the expectations for the vocabulary and its corresponding tools: Is the goal to ensure ‘scientific’ rigor, or to make it easy for lay participants to understand and to contribute to the discussion? To simplify things or to ensure comprehensive coverage? Which vocabulary facilitates further explanation (sub-aspects etc) and ultimately showing how valuation judgments relate to objective criteria — performance measures?

Finally: given the number of different ‘perspectives’ , how should the platform deal with the potential of biased ‘framing’ of discussions by the sequence in which comments are entered and displayed — or is this concern one that should be left to the participants in the process, while the platform itself should be as ‘neutral’ as possible — even with respect to potential bias or distortions?

The ‘aspect tree’ of some approaches refers to the hierarchical ‘tree’ structure emerging in a display of main aspects, each further explained by ‘sub-aspects’, sub-sub-aspects etc. The outermost ‘leaves’ of the aspect tree would be the‘criteria’ or objective performance variables, to which participants might carry their explanations of their judgment basis. (See the later section on criteria and criterion functions.) Is the possibility of doing that a factor in the insistence on the part of some people to ‘base decisions on facts’ — only — thereby eliminating ‘subjective’ judgments that can be explained only by listing more subjective aspects?

An important warning was made by Rittel in discussing ‘Wicked Problems’ long ago: The more different perspectives, explanations of a problem, potential solutions are entered into the discussion, the more aspects will appear claiming ‘due consideration’. The possible consequences of proposed solutions alone extend endlessly into the future. This makes it impossible for a single designer or planner, even a team of problem-solvers, to anticipate them all: the principle of assembling ‘all’ such aspects is practically impossible to meet. This is both a reminder to humbly abstain from claims to comprehensive coverage, and a justification of wide participation on logical (rather than the more common ideological-political) grounds: inviting all potentially affected parties to contribute to the discourse as the best way to get that needed information.

The need for more discussion of this subject, finally, should be shown by the presence of approaches or attitudes that deny the need for evaluation ‘methods’ altogether. This takes different forms, ranging from calls for ‘awareness’ or general adoption of a new ‘paradigm’ or approach — like ‘systems thinking’, holism, relying on ‘swarm’ guidance etc, to more specific approaches like Alexander’s Pattern Language which suggests that using valid patterns (solution elements, not evaluation aspects) to develop plans, will guarantee their validity and quality, thus making evaluation unnecessary.

One source of heuristic guidance to justify ‘stopping rules’ in the effort to assemble evaluation aspects may be seen in the weighting of relative importance given (as subjective judgments by participants) to the different aspects: if the assessment of a given aspect will not make a significant difference in the overall decision because that aspect is given too low a weight, is this a legitimate ‘excuse’ for not giving it a more thorough examination? (A later section will look at the weighting or preference ranking issue).
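One way such a ‘stopping rule’ might be made explicit is sketched below in Python; the threshold logic (comparing an aspect’s maximum possible contribution against the current margin between alternatives) and all numbers are hypothetical assumptions, not part of the procedures described above.

# Sketch of the 'stopping rule' heuristic: if an aspect's weight times the
# full score range cannot change the current ranking margin between two
# alternatives, a detailed examination of that aspect may arguably be skipped.
def can_change_decision(aspect_weight, score_range, current_margin):
    max_swing = aspect_weight * score_range   # largest possible contribution shift
    return max_swing >= current_margin

# an aspect weighted at 0.02, scores on a -3..+3 scale (range 6), with the
# leading alternative currently ahead by 0.5 overall points:
print(can_change_decision(0.02, 6.0, 0.5))    # False -> low potential impact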

–o–