
EVALUATION IN THE PLANNING DISCOURSE — THE DIMINISHING PLAUSIBILITY PARADOX

Thorbjørn Mann,  February 2020

THE DIMINISHING PLAUSIBILITY PARADOX

Does thorough deliberation increase or decrease confidence in the decision?

There is a curious effect of careful evaluation and deliberation that may appear paradoxical to people involved in planning decision-making, who expect such efforts to lead to greater certainty and confidence in the validity of their decisions. There are even consulting approaches that derive measures of such confidence from the ‘breadth’ and ‘depth’ achieved in the discourse.

The effect is the observation that with a well-intentioned, honest effort to give due consideration and even systematic evaluation to all concerns — as expressed e.g. by the pros and cons of proposed plans perceived by affected and experienced people — the degree of certainty or plausibility for a proposed plan actually seems to decrease, or move towards a central ‘don’t know’ point on a +1 to -1 plausibility scale. Specifically: the more carefully breadth (meaning coverage of the entire range of aspects or concerns) and depth (understood as the thorough examination of the support — evidence and supporting arguments — for the premises of each ‘pro’ and ‘con’ argument) are evaluated, the more the degree of confidence felt by evaluators moves from initial high support (or opposition) towards the central point ‘zero’ on the scale, meaning ‘don’t know; can’t decide’.

This is, of course, the opposite of what the advice to ‘carefully evaluate the pros and cons’ seems to promise, and of what approaches striving for breadth and depth actually appear to achieve. This creates a suspicion that either the method for measuring the plausibility of all the pros and cons must be faulty, or that the approaches treating the degree of breadth and depth directly as equivalent to greater support are making mistakes. So it seems necessary to take a closer look at this apparently counterintuitive phenomenon.

The effect was first observed in the course of the review of an article on the structure and evaluation of planning arguments for journal publication [1] — several reviewers pointed out what they thought must be a flawed method of calculation.

Explanation of the effect

The crucial steps of the method (also explained in the section on planning argument assessment) are the following:

– All pro and con arguments are converted from their often incomplete, missing-premises state to the complete pattern explicitly stating all premises (e.g. “Yes, adopt plan A because 1) A will lead to effect B given conditions C, and 2) B ought to be aimed for, and 3) conditions C will be present”).

– Each participant will assign plausibility judgments to each premise, on the +1/-1 scale, where +1 stands for complete certainty that the claim is true or plausible, -1 for complete certainty that the claim is not true or totally implausible (in the judgment of the individual participant), and the center point of zero expresses inability to judge: ‘don’t know; can’t decide’. Since in the planning argument all premises are estimates or expectations of future states — effects of the plan, applicability of the causal rule that connects future effects or ‘consequences’ with actions of the plan, and the desirability or undesirability of those consequences — complete certainty assessments (pl = +1 or -1) for the premises must be considered unreasonable; so all the plausibility values will be somewhere between those extremes.

– Deriving a plausibility value for the entire argument from these premise plausibility judgments can be done in different ways. One extreme is to assign the lowest premise plausibility judgment prempl to the entire argument, expressing an attitude like ‘the strength of a chain is equal to the strength of its weakest link’. Or the plausibility values can be multiplied, so that the plausibility of argument i is

            Argpl(i) = ∏j prempl(i,j)   (the product of the plausibilities of all premises j of argument i)

Either way, the resulting argument plausibility cannot be higher than the premise plausibilities.

– Since arguments do not carry the same ‘weight’ in determining the overall plausibility judgment, it is necessary to assign some weight factor to each argument plausibility judgment. That weight will depend on the relative importance of the ‘deontic’ (ought) premises; it is approximately expressed by assigning each of the deontic claims in all the arguments a weight between zero and +1, such that all the weights add up to +1. So the weight of argument i will be the plausibility of argument i times the weight of its deontic premise: Argw(i) = Argpl(i) x w(i)

– A plausibility value for the entire plan will have to be calculated from all the argument weights. Again, there are different ways to do that (discussed in the section on aggregation), but an aggregation function such as adding all the argument weights (as derived by the preceding steps) will yield a plan plausibility value on the same scale as the initial premise and argument plausibility judgments. It will also be the result of considering all the arguments, both pro and con; and since the argument weights of arguments considered ‘con’ arguments in the view of individual participants will be subtracted from the summed-up weight of ‘pro’ arguments, it will be nowhere near the complete certainty value of +1 or -1, unless, of course, the process revealed that there were no arguments carrying any weight at all on the pro or the con side. That is unlikely, since e.g. all plans have been conceived from some expectation of generating some benefit, and will carry some cost or effort, etc. (A schematic sketch of these calculation steps follows below.)
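To make the arithmetic of these steps concrete, here is a minimal sketch in Python (not the author's software; the premise plausibilities and deontic weights are invented for illustration), using the multiplication variant for argument plausibility and simple summation of argument weights for the plan:

```python
# Minimal illustrative sketch of the calculation steps described above:
# Argpl(i) = product of premise plausibilities; Argw(i) = Argpl(i) * w(i);
# plan plausibility = sum of argument weights. Numbers are hypothetical.

from math import prod

# Judgments of one hypothetical participant, on the -1..+1 plausibility scale.
# A 'con' argument is represented here by a negative premise plausibility,
# so its weight ends up being subtracted in the sum.
arguments = [
    {"premise_pl": [0.8, 0.7, 0.6], "deontic_weight": 0.5},   # a 'pro' argument
    {"premise_pl": [0.9, 0.5, 0.7], "deontic_weight": 0.3},   # another 'pro'
    {"premise_pl": [-0.6, 0.8, 0.9], "deontic_weight": 0.2},  # a 'con'
]

plan_pl = 0.0
for i, arg in enumerate(arguments, start=1):
    arg_pl = prod(arg["premise_pl"])          # Argpl(i): product of premise plausibilities
    arg_w = arg_pl * arg["deontic_weight"]    # Argw(i) = Argpl(i) * w(i)
    plan_pl += arg_w                          # plan plausibility: sum of argument weights
    print(f"Argument {i}: Argpl = {arg_pl:+.3f}, Argw = {arg_w:+.3f}")

print(f"Plan plausibility: {plan_pl:+.3f}")   # about +0.18 for these numbers
```

Even with fairly confident individual premise judgments, the products and the offsetting ‘con’ weight pull the resulting plan plausibility well toward the center of the scale.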

This approach as described thus far can be considered a ‘breadth-only’ assessment, justly so if there is no effort to examine the degree of support of the premises. But of course the same reasoning can be applied to any of the premises, to any degree of ‘depth’ demanded by participants from each other. The effect of overall plan plausibility tending toward the center point of zero (‘don’t know’ or ‘undecided’), compared with initial offhand convincing ‘yes: apply the plan!’ or ‘no: reject!’ reactions, will be the same — unless there are completely ‘principle’-based or logical or physical ‘impossibility’ considerations, in plans that arguably should not even have reached the stage of collective decision-making.

Explanation of the opposite effect in ‘breadth/depth’ based approaches

So what distinguishes this method from approaches that claim to use degrees of ‘breadth and depth’ deliberation as measures justifying the resulting plan decisions, and that, in the process, increase the team’s confidence in the ‘rightness’ of their decision?

One obvious difference — one that must be considered a definite flaw — is that the degree of deliberation, measured by the mere number of comments and arguments, of ‘breadth’ or ‘depth’, does not include assessment of the plausibility (positive or negative) of the claims involved, nor of their weights of relative importance. Just having talked about a number of considerations, without those distinctions, cannot already be a valid basis for decisions, even if Popper’s advice about the degree of confidence in scientific hypotheses we are entitled to hold is not considered applicable to design and planning. (“We are entitled to tentatively accept a hypothesis to the extent we have given our best effort to test it, to refute it, and it has withstood all those tests”…)

Sure, in planning we don’t have ‘tests’ that definitively refute a hypothesis (or ‘null hypothesis’); we have to apply this advice as best we can, and planning decisions don’t stand or fall on the strength of single arguments or hypotheses. All we have are arguments explaining our expectations, speculations about the future resulting from our planning actions — but we can adapt Popper’s advice to planning: “We can accept a plan as tentatively justified to the extent we have tried our best to expose it to counterarguments (cons) and have seen that those arguments are either flawed (not sufficiently plausible) or outweighed by the arguments in its favor.”

And if we do this, honestly admitting that we really can’t be very certain about all the claims that go into the arguments, pro or con, and look at how all those uncertainties come together in totaling up the overall plausibility of the plan, the tendency of that plausibility to go towards the center point of the scale looks more reasonable.

Could these considerations be the key to understanding why approaches relying on mere breadth and depth measurements may result in increased confidence of the participants in such projects? There are two kinds of extreme situations in which it is likely that even extensive breadth and depth discussions can ignore or marginalize one side or the other of the necessary ‘pro’ and ‘con’ arguments.

One is the typical ‘problem-solving’ team assembled for the purpose of developing a ‘solution’ or recommendation. The enthusiasm of the collective creative effort itself (but possibly also the often invoked ‘positive’ thinking, the injunction to defer judgment so as not to disrupt the creative momentum, as well as the expectation of a ‘consensus’ decision?) may focus the thinking of team members on ‘pro’ arguments justifying the emerging plan — but neglecting or diverting attention from counterarguments: is finding sufficiently good reasons for the plan considered enough to make a decision?

An opposite type of situation is the ‘protest’ demonstration, or events arranged for the express purpose of opposing a plan: disgruntled citizens outraged by how a big project will change their neighborhood, counting up all the damaging effects. Must we not assume that there will be a strong focus on highlighting the plan’s negative effects or potential consequences, on assembling a strong enough ‘case’ to reject it? In both cases, there may be considerable and even reasonable deliberation in breadth and depth involved — but also possible bias due to neglect of the other side’s arguments.

Implications of the possibility of decreasing plan plausibility?

So, pending some more research into this phenomenon — if it is found to be common enough to worry about — it may be useful to look at what it means: what adjustments to common practice it would suggest, and what ‘side-stepping’ stratagems may have evolved due to the mere sentiment that more deliberation might shake any undue, undeserved expectations in a plan. Otherwise, cynical observers might recommend throwing up our hands and leaving the decision to the wisdom of ‘leaders’ of one kind or another, in the extreme to oracle-like devices — artificial intelligence from algorithms whose rationales remain as unintelligible to the lay person as the medieval ‘divine judgment’ validated by mysterious rituals (but otherwise amounting to tossing coins?).

Besides the above-mentioned research into the question, a first step would be to examine common approaches on the consulting market for provisions that may be vulnerable to, or even overplay, this tendency. For example, adding plausibility assessment to the approaches using depth and breadth criteria would be necessary to make them more meaningful.

The introduction of more citizen participation into the public planning process is an increasingly common move. It has been urged — among other undeniable advantages, such as getting better information about how problems and the plans proposed to solve them actually affect people — as a way to make plans more acceptable to the public, because the plans are then felt to be more ‘their own’. As such, could this make the process vulnerable to the first fallacy above, of overlooking negative features? If so, the same remedy of including more systematic evaluation in the process might be considered.

A common temptation of promoters of ‘big’ plans can’t be overlooked: to resort to ‘big’ arguments that are so difficult to evaluate that made-up ‘supporting’ evidence can’t be distinguished from predictions based on better data and analysis (following Machiavelli’s quip that the bigger the lie, the more likely people will buy it…). Many people are already suggesting that we should return to smaller (local) governance entities that can’t offer big lies.

Again: this issue calls for more research.

[1]  Thorbjoern Mann, “The Structure and Evaluation of Planning Arguments,” Informal Logic, December 2010.

— o —

EVALUATION IN THE PLANNING DISCOURSE — TIME AND EVALUATION OF PLANS

An effort to clarify the role of deliberative evaluation in the planning and policy-making process. Thorbjørn Mann, February 2020

TIME AND EVALUATION OF PLANS  (Draft, for discussion)

Inadequate attention to time in current common assessment approaches

Considering that the evaluation of plans (especially ‘strategic’ plans) and policy proposals is by its very nature concerned with the future, it is curious that the role of time has not received more attention, even with the development of simulation techniques that aim at tracking the behavior of key variables of systems over many years into the future. The neglect of this question, for example in the education of architects, can be seen in the practice of judging students’ design project presentations on the basis of their drawings and models.

The exceptions — for example in building and engineering economics — look at very few performance variables, with quite sophisticated techniques: expected cost of building projects, ‘life cycle cost’, return on investment etc., to be put into relation to expected revenues and profit. Techniques such as ‘Benefit/Cost Analysis’, which in its simplest form considers those variables as realized immediately upon implementation, can also apply this kind of analysis to forecasting costs and benefits and comparing them over time, by methods for converting initial amounts (of money) to ‘annualized’ or future equivalents, or vice versa.
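As a reminder of what those conversions do, here is a minimal sketch of the standard discounting formulas; the 4% rate, 30-year horizon and amounts are purely illustrative assumptions:

```python
# Minimal sketch of the conversions mentioned above: discounting a future
# amount to present worth, and spreading a present amount into an equivalent
# uniform annual series. Rate, horizon and amounts are illustrative only.

def present_value(future_amount: float, rate: float, years: int) -> float:
    """Present worth of an amount received or paid 'years' from now."""
    return future_amount / (1 + rate) ** years

def annualized(present_amount: float, rate: float, years: int) -> float:
    """Equivalent uniform annual amount over 'years' (capital recovery factor)."""
    return present_amount * rate * (1 + rate) ** years / ((1 + rate) ** years - 1)

rate, horizon = 0.04, 30
print(present_value(1_000_000, rate, horizon))  # ~308,000: a benefit 30 years out counts far less today
print(annualized(1_000_000, rate, horizon))     # ~57,800 per year: annual equivalent of a 1,000,000 first cost
```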

Criticisms of such approaches amount to pointing out problems such as having to convert ‘intangible’ performance aspects (like public health, satisfaction, loss of lives) into money amounts to be compared (raising serious ethical questions), or, for entities like nations, the way money amounts drawn from or entering the national budget hide controversies such as inequities in the distribution of the costs and benefits. Looking at the issue from the point of view of other evaluation approaches might at least identify the challenges in the consideration of time in the assessment of plans, and help guide the development of better tools.

A first point is that, from the perspective of a formal evaluation process (see e.g. the previous section on the Musso/Rittel approach), measures like the present value of future cost or profit, or the benefit-cost ratio, must be considered ‘criteria’ (measures of performance) for more general evaluation aspects: aspects that, among the whole set of (goodness) evaluation aspects, each evaluator must weight for their relative importance to make up overall ‘goodness’ or quality judgments. (See the segments on evaluation judgments, criteria and criterion functions, and aggregation.) As such, the use of these measures alone as decision criteria must be considered incomplete and inappropriate. However, in those approaches the time factor is usually not treated with even the attention expressed in the above tools for discounting future costs and benefits to comparable present worth: for example, pro or con arguments in a live verbal discussion about expected economic performance often amount to mere qualitative comparisons or claims like ‘over the budget’ or ‘more expensive in the long run’.

Finally, in approaches such as the Pattern Language (which makes valuable observations about the ‘timeless’ quality of built environments, but does not consider explicit evaluation a necessary part of the process of generating such environments), there is no mention or discussion of how time considerations might influence decisions: the quality of designs is guaranteed by their having been generated through the use of patterns, but the efforts to describe that quality do not include consideration of the effects of solutions over time.

Time aspects calling for attention in planning

Assessments of undesirable present or future states ‘if nothing is done’

The implementation of a plan is expected to bring about changes in states of affairs that are felt to be ‘problems’ (things not being as they ought to be), or ‘challenges’ and ‘opportunities’ calling for better, improved states of affairs. Many plans and policies aim at preventing future developments from occurring, either as distinctly ‘sudden’ events or as developments over time. Obviously, the degree of undesirability depends on the expected severity of these developments; they are matters of degree that must be predicted in order for the plan’s effectiveness to be judged.

The knowledge that goes into estimates of future change comes from experience: observation of the pattern and rate of change in the past (even if that knowledge is taken to be well enough established to be considered a ‘law’). But not all such tracks of change have been well enough observed and recorded in the past, so much estimation and judgment already goes into the assumptions about past changes over time.

Individual assessments of future plan performance

Our forecasts of future changes ‘if nothing is done’, resting on such shaky knowledge of the past, must be considered less than 100% reliable. Should our confidence in the application of that knowledge to estimates of a plan’s future ‘performance’ then not be acknowledged as at best equally, and arguably less, certain — expressed as deserving a lower ‘plausibility’ qualifier? This would be expressed, for example, with the pl (plausibility) judgment for the relationship claimed in the factual-instrumental premise of an argument about the desirability of the plan’s effects: “Plan A will result (by virtue of the law or causal relationship R) in producing effect B”.

This argument should be (but often is not) qualified by adding the assumption ‘given the conditions C under which the relationship R will hold’: the conditions which the third premise (the factual claim) of the ‘standard planning argument’ claims are — or will be — ‘given’.

Note: ‘Will be’: since the plan will be implemented in the future, this premise also involves a prediction. And to the extent the condition is not a stable, unchanging one but also a changing, evolving phenomenon, the degree of the desirable or undesirable effect B must be expected to change. And, to make things even more interesting and complex: as explained in the sections on argument assessment and systems modeling: the ‘condition’ is never adequately described by a single variable, but actually represents the  evolving state of the entire ‘system’ in which the plan will intervene.

This means that when two people exchange their assumptions, judgments and opinions about the effectiveness of the plan by citing its effect on B, they may well have very different degrees (or performance measures) in mind, occurring under very different assumptions about both R and C, and at different times.

Things become fuzzier when we consider the likelihood that the desired or undesired effects will not change things overnight, but gradually, over time. So how should we make evaluation judgments about competing plan alternatives when, for example, one plan promises rapid improvement soon after implementation (as measured by one criterion) but then slows down or even starts declining, while the other will improve at a much slower but more consistent rate? A mutually consistent evaluation must be based on agreed-upon measures of performance: measured at what future time? Over what future time period, aka ‘planning horizon’? And this question applies just to the prediction of the performance criterion — what about the plausibility and weight-of-importance judgments we need to offer a complete explanation of our judgment basis? Is it enough to apply the same plausibility factor to forecasts of trends decades in the future as the one we use for near-future predictions? As discussed in the segment on criteria, the crisp, fine forecast lines we see in simulation printouts are misleading: shouldn’t the line really be a fuzzy track, widening more and more the farther out in time it extends? Likewise: is it meaningful to use the same weight of relative importance for the assessment of effects at different times?
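One possible way to make the question about time-dependent plausibility concrete is sketched below; the decay function and its rate are assumptions introduced here purely for illustration, not a method proposed in this text:

```python
# Illustrative assumption (not a method from the text): let the plausibility
# attached to a forecast decay with the forecast horizon, so far-future
# performance claims count for less than near-future ones.

def horizon_plausibility(base_pl: float, years_ahead: float, decay: float = 0.05) -> float:
    """Plausibility of a forecast 'years_ahead' in the future, decaying from base_pl toward zero."""
    return base_pl * (1 - decay) ** years_ahead

for years in (1, 5, 10, 30):
    print(years, round(horizon_plausibility(0.8, years), 3))
# 1 -> 0.76, 5 -> 0.619, 10 -> 0.479, 30 -> 0.172  (illustrative numbers only)
```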

These considerations apply, so far, only to the explanation of individual judgments, and already show that it would be almost impossible to construct meaningful criterion functions and aggregation functions to get adequately ‘objectified’ overall deliberated judgment scores for individual participants in evaluation procedures.

Aggregation issues for group judgment indicators

The time-assessment difficulties described for individual judgments do not diminish in the task of constructing decision guides for groups based on the results of individual judgment scores. Reminder: to meet the ideal ‘democratic’ expectation that the community decision about a plan should be based on due consideration of ‘all’ concerns expressed by ‘all’ affected parties, the guiding indicator (‘decision guide’ or criterion) should be an appropriate aggregation statistic of all individual overall judgments. The above considerations show, to put it mildly, that it would be difficult enough to aggregate individual judgments into overall judgment scores, but even more so to construct group indicators that rest on the same assumptions about the time qualifiers entering the assessments.
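For illustration only, here is a minimal sketch of a few candidate group ‘decision guide’ statistics computed from hypothetical individual overall scores; which of them (if any) deserves to guide the decision is precisely the open question:

```python
# Minimal sketch of candidate group 'decision guides' aggregated from
# hypothetical individual overall plan scores on the -1..+1 scale.

from statistics import mean, median

individual_scores = [0.4, 0.1, -0.2, 0.3, 0.05]   # hypothetical deliberated overall judgments

print("mean:  ", round(mean(individual_scores), 3))    # 'average' satisfaction
print("median:", round(median(individual_scores), 3))  # less sensitive to outliers
print("min:   ", min(individual_scores))               # 'protect the worst-off participant'
```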

This makes it understandable (but not excusable) that decision-makers in practice tend either to screen out the uncomfortable questions about time in their judgments, or to resort to vague ‘goals’ measured by vague criteria to be achieved within arbitrary time periods: “carbon-emission neutrality by 2050”, for example. How to choose between different plans or policies whose performance simulation forecasts do not promise 100% achievement of the goal, but only ‘approximations’ with different interim performance tracks, at different costs and other side-effects in society? But 2050 is far enough in the future to ensure that none of the decision-makers for today’s plans will be held responsible for today’s decisions…

‘Conclusions’?

The term ‘conclusion’ is obviously inappropriate if it refers to expected answers to the questions discussed. These issues have just been raised, not resolved, which means that more research, experiments and discussion are called for to find better answers and tools. For the time being, the best recommendation that can be drawn from this brief exploration is that the decision-makers for today’s plans should routinely be alerted to these difficulties before making decisions, carry out the ‘objectification’ process for the concerns expressed in the discourse (of course facilitating a discourse with wide participation, adequate to the severity of the challenge of the project), and then admit that any high degree of ‘certainty’ for proposed decisions is not justified. Decisions about ‘wicked problems’ are more like ‘gambles’ for which responsibility (‘accountability’) must be assumed. If official decision-makers cannot assume that responsibility — as expressed in ‘paying’ for mistaken decisions — should they seek supporters to share that responsibility?

So far, this kind of talk is just that: mere empty talk, since there is at best only the vague and hardly measurable ‘reputation’ available as the ‘account’ from which ‘payment’ can be made — in the next election, or in the history books. Since this does not prevent reckless mistakes in planning decisions, there should be better means for making the concept of ‘accountability’ more meaningful. (Some suggestions for this are sketched in the sections on the use of ‘discourse contribution credit points’ earned by decision-makers or contributed by supporters from their credit point accounts, and made the required form of ‘investment payment’ for decisions.) The needed research and discussion of these issues will have to consider new connections between the factors involved in evaluation for public planning.



— o —

EVALUATION IN THE PLANNING DISCOURSE — SYSTEMS THINKING, MODELING AND EVALUATION IN PLANNING

An effort to clarify the role of deliberative evaluation in the planning and policy-making process. Thorbjørn Mann, February 2020. (DRAFT)

SYSTEMS THINKING / MODELING AND EVALUATION IN PLANNING

 

Evaluation and Systems in Planning  — Overview

The contribution of systems perspective and tools to planning.

In just about any discourse about improving approaches to planning and policy-making, there will be claims containing references to ‘systems’: ‘systems thinking’, ‘systems modeling and simulation’, the need to understand ‘the whole system’, the counterintuitive behavior of systems. Systems thinking as a whole mental framework has been described as ‘humanity’s currently best tool’ for dealing with its problems and challenges. There are by now so many variations, sub-disciplines, approaches and techniques, even definitions, of systems and systems approaches on the academic as well as the consulting market that even a cursory description of this field would become a book-length project.

The focus here is the much narrower issue of the relationship between this ‘systems perspective’ and various evaluation tasks in the planning discourse. This sketch will necessarily be quite general, not doing adequate justice to many specific ‘brands’ of systems theory and practice. However, looking at the subject from the planning / evaluation perspective will identify some significant issues that call for more discussion.

Evaluation judgments at many stages of systems projects and planning

A survey of many ‘systems’ contributions reveals that ‘evaluation’ judgments are made at many stages of projects claiming to take a systems view – much like the finding that evaluation takes place at the various stages of planning projects whether explicitly guided by systems views or not. Those judgments are often not even acknowledged as ‘evaluation’, and they follow very different patterns of evaluation (as described in the sections exploring the variety of evaluation judgment types and procedures).

The similar aims of systems thinking and evaluation in planning

Systems practitioners feel that their work contributes well (or ‘better’ than other approaches) to the general aims of planning, such as:
– to understand the ‘problem’ that initiates planning efforts;
– to understand the ‘system’ affected by the problem, as well as
– the larger ‘context’ or ‘environment’ system of the project;
– to understand the relationships between the components and agents, especially the ‘loops’ of such relationships that generate the often counterintuitive and complex systems behavior;
– to understand and predict the effects (costs, benefits, risks) and performance of proposed interventions in those systems (‘solutions’) over time, both ‘desired’ outcomes and potentially ‘undesirable’ or even unexpected side- and after-effects;
– to help planners develop ‘good’ plan proposals,
– and to reach recommendations and/or decisions about plan proposals that are based on due consideration of all concerns for parties affected by the problem and proposed solutions, and of the merit of ‘all’ the information, contributions, insights and understanding brought into the process.
– To the extent that those decisions and their rationale must be communicated to the community for acceptance, these investigations and judgment processes should be represented in transparent, accountable form.

Judgment in early versus late stages of the process

Looking at these aims, it seems that ‘systems-guided’ projects tend to focus on the ‘early’ information (data) gathering and ‘understanding’ aspects of planning – more than on the decision-making activities. These ‘early’ activities do involve judgment of many kinds, aiming at understanding ‘reality’ based on the gathering and analysis of facts and data. The validity of these judgments is drawn from standards of what may loosely be called ‘scientific method’ – proper observation, measurement, statistical analysis. There is no doubt that systems modeling, looking at the components of the ‘whole’ system and the relationships between them, and the development of simulation techniques have greatly improved the degree of understanding both of the problems and of the context that generates them, as well as the prediction of the effects (performance) of proposed interventions: of ‘solutions’. Less attention seems to be given to the evaluation processes leading up to decisions in the later stages. Some justifications and guiding attitudes can be distinguished to explain this:

Solution quality versus procedure-based legitimization of decisions

One attitude, building on the ‘scientific method’ tools applied in the data-gathering and model-building phases, aims at finding ‘optimal’ (ideally, or at least ‘satisficing’) solutions described by performance measures from the models. Sophisticated computer-assisted models and simulations are used to do this, with the performance measures (which must be quantifiable to be calculated) derived from ‘client’ goal statements or from surveys of affected populations, interpreted by the model-building consultants: experts. On the one hand, their expert status is then used to assert the validity of results. On the other hand, this is increasingly criticized for its lack of transparency to the lay populations affected by problems and plans, questioning the experts’ legitimacy to make judgments ‘on behalf of’ affected parties. If there are differences of opinion or conflicts about model assumptions, these are ‘settled’ – must be settled – by the model builders in order for the programs to yield consistent results.

This practice (which Rittel and other critics called the ‘first generation systems approach’) was seen as a superior alternative to traditional ways of generating planning decisions: the discussions in assemblies of people or their representatives, characterized by raising questions and debating the ‘pros and cons’ of proposed solutions, but then making decisions by majority voting or by accepting the decisions of designated or self-designated leaders. Both of these decision modes obviously do not meet all of the postulated expectations in the list above: voting implies dominance of the interests of the ‘majority’ and potential disregard of the concerns of the minority; leaders’ decisions can lack transparency (much like expert advice), leading to public distrust of the leader’s claim of having given due consideration to ‘all’ the concerns of affected people.

There were then some efforts to develop procedures (e.g. formal evaluation procedures) or tools, such as the widely used but also widely criticized ‘Benefit-Cost’ analysis, that tried to extend the ‘calculation-based’ development of valid performance measures into the stage of decision criteria based on the assessment of solution quality. These were not equally widely adopted, for various reasons such as the complicated and burdensome procedures, which again require experts to facilitate the process but arguably make public participation more difficult. A different path is the tendency to make basic ‘quality’ considerations ‘mandatory’, as regulations and laws, or as ‘best practice’ standards. Apart from tending to set ‘minimum’ quality levels as requirements, e.g. for building permits, this represents a movement to combine or entirely replace quality-based planning decision-making with decisions that draw their legitimacy from having been generated by following prescribed procedures.

This trend is visible in approaches that specify procedures to generate solutions by using ‘valid’ solution components or features postulated by a theory (or laws): having followed those steps then validates the solution generated and removes the necessity to carry out any complicated evaluation procedure. An example of this is Alexander’s ‘Pattern Language’ – though the ‘systems’ aspect is not as prevalent in that approach. Interestingly, the same stratagem is visible in movements that focus on processes aimed at the mindsets of groups participating in special events, ‘increasing awareness’ of the nature and complexity of the ‘whole system’, but then relying on solutions ‘emerging’ from the resulting greater awareness and understanding – solutions that aim at consensus acceptance in the group and then do not need further examination by more systematic, quantified deliberation procedures. The invoked ‘whole system’ consideration, together with a claimed scientific understanding of the true reality of the situation calling for planning intervention, is part of inducing that acceptance and legitimacy. A telltale feature of these approaches is that debate, argument, and the reasoned scrutiny of supporting evidence involving opposing opinions tend to be avoided or ‘screened out’ in the procedures generating collective ‘swarm’ consensus.

The controversy surrounding the role of ‘subjective’, feeling-based, intuitive judgments versus ‘objective’ measurable, scientific facts (not just opinions) as the proper basis for planning decisions also affects the role of systems thinking contributions to the planning process.

None of the ‘systems’ issues related to evaluation in the planning process can be considered ‘settled’ and needing no further discussion. The very basic ‘systems’ diagrams and models of planning may need to be revised and expanded to address the role and significance of evaluation, as well as argumentation, the assessment of the merit of arguments and other contributions to the discourse, and the development of better decision modes for collective planning decision-making.

–o–

EVALUATION IN THE PLANNING DISCOURSE: SAMPLE EVALUATION PROCEDURES EXAMPLE 1: FORMAL ‘QUALITY‘ EVALUATION

Thorbjørn Mann,  January 2020

In the following segments, a few example procedures for evaluation by groups will be discussed, to illustrate how the various parts of the evaluation process are selectively assembled into a complete process aiming at a decision (or recommendation for a decision) about a proposed plan or policy, and to facilitate understanding of the way the different provisions and choices related to the evaluation task that are reviewed in this study can be assembled into practical procedures for specific situations. The examples are not intended to be universal recommendations for use in all situations. They will all — arguably — call for improvement as well as adaptation to the specific project and situation at hand.

A common evaluation situation is that of a panel of evaluators comparing a number of proposed alternative plan solutions to select or recommend the ‘best’ choice for adoption, or — if there is only one proposal — to determine whether it is ‘good enough’ for implementation. It is usually carried out by a small group of people assumed to be knowledgeable about the specific discipline (for example, architecture) and reasonably representative of the interests of the project client (which may be the public). The rationale for such efforts, besides aiming for the ‘best’ decision, is the desire to ensure that the decision will be based on good expert knowledge, but also the desire for transparency, legitimacy and accountability of the process — to justify the decision. The outcome will usually be a recommendation to the actual client decision-makers rather than the actual adoption or implementation decision, based on the group’s assessment of the ‘goodness’ or ‘quality’ of the proposed plan, documented in some form. (It will be referred to as a ‘Formal Quality Evaluation’ procedure.)

There are of course many possible variations of procedures for this task. The sample procedure described in the following is based on the Musso-Rittel (1) procedure for the evaluation of the ‘goodness’ or quality of buildings.

The group will begin by agreeing on the procedure itself and its various provisions: the steps to be followed (for example, whether evaluation aspects and weighting should be worked out before or after presentation of the plan or plan alternatives), general vocabulary, judgment and weighting scales, aggregation functions both for individual overall judgments and group indices, and decision rules for determining its final recommendation.

Assuming that the group has adopted the sequence of first establishing the evaluation aspects and criteria against which the plan (or plans) will be judged, the first step will be a general discussion of the aspects and sub-aspects to be considered, resulting in the construction of the ‘aspect tree’ of aspects, sub-aspects, sub-sub-aspects etc. (ref. the section on aspects and aspect trees) and criteria (the ‘objective’ measures of performance; ref. the section on evaluation criteria). The resulting tree will be displayed and become the basis for scoring worksheets.

The second step will be the assignment of aspect weights (on a scale of zero to 1, such that at each level of the ‘tree’ the sum of weights at that level will be 1). Panel members will develop their own individual weightings. This phase can be further refined by applying ‘Delphi Method’ steps: establishing and displaying the mean/median and extreme weighting values, asking the authors of extremely low or high weights to share and discuss their reasoning for these judgments, and then giving all members the chance to revise their weights.
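A minimal sketch of this weighting step, with invented member names, aspects and numbers: each member’s raw weights at one level of the tree are normalized to sum to 1, and the group spread is displayed so that the authors of extreme values can be asked to explain their reasoning:

```python
# Illustrative sketch: normalize each member's weights at one level of the
# aspect tree to sum to 1, then show the group spread (Delphi-style display).
# Member names, aspects and raw weights are hypothetical.

from statistics import median

raw_weights = {   # member -> raw weights for the top-level aspects
    "A": {"function": 4, "cost": 3, "image": 1},
    "B": {"function": 2, "cost": 5, "image": 3},
    "C": {"function": 5, "cost": 1, "image": 1},
}

normalized = {
    member: {aspect: w / sum(ws.values()) for aspect, w in ws.items()}
    for member, ws in raw_weights.items()
}

for aspect in ["function", "cost", "image"]:
    values = [normalized[m][aspect] for m in normalized]
    print(aspect, "median:", round(median(values), 2),
          "range:", round(min(values), 2), "-", round(max(values), 2))
```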

Once the weighted evaluation aspect trees have been established, the next step will be the presentation of the plan proposal or competing alternatives.

Each participant will assign a first ‘overall offhand’ quality score (on the agreed-upon scale, e.g. -3 to +3) to each plan alternative.

The group’s statistics of these scores are then established and displayed. This may help to decide whether any further discussion and detailed scoring of aspects will be needed: there may be a visible consensus for a clear ‘winner’. If there are disagreements, the group decides to go through with the detailed evaluation, and the initial scores are kept for later comparison with the final results, using common worksheets or spreadsheets of the aspect tree for panel members to fill in their weighting and quality scores. This step may involve the drawing of ‘criterion functions’ (ref. the section on evaluation criteria and criterion functions) to explain how each participant’s quality judgments depend on (objective) criteria or performance measures. These diagrams may be discussed by the panel. They should be considered each panel member’s subjective basis of judgment (or representation of the interests of factions in the population of affected parties). However, some such functions may be mandated by official regulations (such as building regulations). The temptation to urge adoption of common (group) functions (‘for simplicity’ and as an expression of ‘common purpose’) should be resisted, to avoid possible bias towards the interests of some parties at the expense of others.

Each group member will then fill in the scores for all aspects and sub-aspects etc. The results will be compiled, and the statistics compared; extreme differences in the scoring will be discussed, and members given the chance to change their assessments. This step may be repeated as needed (e.g. until there are no further changes in the judgments).

The results are calculated and the group recommendation determined according to the agreed-upon decision criterion. The ‘deliberated’ individual overall scores are compared with the members’ initial ‘offhand’ scores. The results may cause the group to revise the aspects, weights, or criteria, (e.g. upon discovering that some critical aspect has been missed), or call for changes in the plan, before determining the final recommendation or decision (again, according to the initial procedural agreements).
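For illustration, here is a minimal sketch of how one panel member’s ‘deliberated’ overall score might be aggregated from leaf quality judgments (on the -3 to +3 scale) through a weighted aspect tree; the tree and numbers are invented, and the weighted sum is only one of the possible aggregation functions discussed in the section on aggregation:

```python
# Illustrative sketch of one member's overall quality score: leaf judgments
# on the -3..+3 scale, combined upward by a weighted sum, with weights at
# each level of the aspect tree summing to 1. Tree and numbers are invented.

def aggregate(node):
    """Return a node's quality score: leaves carry a 'score',
    branches return the weighted sum of their children's scores."""
    if "score" in node:
        return node["score"]
    return sum(child["weight"] * aggregate(child) for child in node["children"])

aspect_tree = {"children": [
    {"weight": 0.5, "children": [                 # e.g. 'function'
        {"weight": 0.6, "score": 2.0},            # sub-aspect judged +2
        {"weight": 0.4, "score": -1.0},           # sub-aspect judged -1
    ]},
    {"weight": 0.3, "score": 1.5},                # e.g. 'cost'
    {"weight": 0.2, "score": -2.0},               # e.g. 'image'
]}

print("Overall quality score:", aggregate(aspect_tree))   # 0.4 + 0.45 - 0.4 = 0.45
```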

The steps are summarized in the following ‘flow chart’.

[Flow chart: Evaluation example 1: Steps of a ‘Group Formal Quality Evaluation’]

Questions related to this version of a formal evaluation process may include the issue of potential manipulation of weight assignments by changing the steepness of the criterion function.
Ostensibly, the described process aims at ‘giving due consideration’ to all legitimately ‘pertinent’ aspects, while eliminating or reducing the role of ‘hidden agenda’ factors. Questions may arise whether such ‘hidden’ concerns might be hidden behind other plausible but inordinately weighted aspects. A question that may arise from discussions and argumentation about controversial aspects of a plan, and from the examination of how such arguments should be assessed (ref. the section on a process for Evaluation of Planning Arguments), is the role of plausibility judgments about the premises of such arguments: especially the probability of the claims assuming that a plan will actually result in a desired or undesired outcome (an aspect). Should the ‘quality assessment’ process include a modification of quality scores based on plausibility/probability scores, or should this concern be explicitly included in the aspect list?

The process may of course seem ‘too complicated’, and if done by ‘experts’, it invites critical questions as to whether the experts really can overcome their own interests, biases and preconceptions to adequately consider the interests of other, less ‘expert’ groups. The procedure obviously assumes a general degree of cooperativeness in the panel, which may sometimes be unrealistic. Are more adequate provisions needed for dealing with incompatible attitudes and interests?

Other questions? Concerns? Missing considerations?

–o–

EVALUATION IN THE PLANNING DISCOURSE: ASPECTS and ‘ASPECT TREES’

An effort to clarify the role of deliberative evaluation in the planning and policy-making process.  Thorbjørn Mann,  January 2020

The questions surrounding the task of assembling ‘all’ aspects calling for ‘due consideration’.

 

ASPECTS AND ASPECT TREE DISPLAYS

Once an evaluation effort begins to get serious about its professed aims: deliberating, making overall judgments a transparent function of partial judgments, ‘weighing all the pros and cons’, trying not to forget anything significant, avoiding missing things that could lead to ‘unexpected’ adverse consequences of a plan (but that could be anticipated with some care), the people involved will begin to create ‘lists’ of items that ‘should be given due consideration’ before making a decision. One label for these things is ‘aspects’, originally meaning just the different points of view from which the object (plan) to be decided upon is looked at.

A survey of different approaches to evaluation shows that there are many different such labels ‘on the market’ for these ‘things to be given due consideration’. And many of them — especially the many evaluation and problem-solving, systems change consultant brands that compete for commissions to help companies and institutions to cope with their issues — come with very different recommendations for the way this should be done. The question for the effort to develop a general public planning discourse support platform for dealing with projects and challenges that affect people in many governmental and commercial ‘jurisdictions’ — ultimately: ‘global’ challenges — then becomes: How can and should all these differences of the way people talk about these issues be accommodated in a common platform?

Whether a common ground for this can be found — or a way to accommodate all the different perspectives, if a common label can’t be agreed upon — depends upon a scrutiny of the different terms and their procedural implications. This is a significant task in itself, one for which I have not seen much in the way of inquiry and suggestions (other than the ‘brands’ recommendations for adopting ‘their’ terms and approach.) So raising this question might be the beginning of a sizable discussion in itself (or a survey of existing work I haven’t seen). Pending the outcome of such an investigation, many of the issues raised for discussion in this series of evaluation issues will continue to use the term ‘aspect’, with apologies to proponents of other perspectives.

This question of diversity of terminology is only one reason for needed discussion, however. Another reason has to do with the possibility of bias in the very selection of terms, depending on the underlying theory or method, or on whether the perspective is focused on some ‘movement’ that by its very nature puts one main aspect at the center of attention (‘competitive strength and growth’, ‘sustainability’, ‘regeneration’, ‘climate change’, ‘globalization’ versus ‘local culture’, etc.). There are many efforts to classify or group aspects — starting with Vitruvius’ three main aspects ‘firmness, convenience and delight’, to the simple ‘cost, benefit, and risk’ grouping, or the recent efforts that encourage participants to explore aspects from different groups of affected or concerned parties, mixed in with concepts such as ‘principles’ or best and worst expected outcomes, shown in a ‘canvas’ poster for orientation. Are these efforts encouraging the contribution of information from the public, or giving the impression of adequate coverage while inadvertently missing significant aspects? It seems that any classification scheme of aspects is likely to end up neglecting or marginalizing some concerns of affected parties.

Comparatively minor questions concern potential mistakes in applying the related tools: listing preferred or familiar means of plan implementation as aspects representing goals or concerns, for example; or listing essentially the same concern under different labels (and thus weighing it twice…). The issue of functional relationships between different aspects — a main concern of systems views of a problem situation — is one that is often not well represented in the evaluation work tools. A major potential controversy is, of course, the question of who is doing the evaluation, whose concerns are represented, and what sources of information a team will draw upon to assemble the aspect list.

It may be useful to look at the expectations for the vocabulary and its corresponding tools: Is the goal to ensure ‘scientific’ rigor, or to make it easy for lay participants to understand and to contribute to the discussion? To simplify things, or to ensure comprehensive coverage? Which vocabulary facilitates further explanation (sub-aspects etc.) and, ultimately, showing how evaluation judgments relate to objective criteria — performance measures?

Finally: given the number of different ‘perspectives’, how should the platform deal with the potential of biased ‘framing’ of discussions by the sequence in which comments are entered and displayed — or is this a concern that should be left to the participants in the process, while the platform itself should be as ‘neutral’ as possible, even with respect to potential bias or distortions?

The ‘aspect tree’ of some approaches refers to the hierarchical ‘tree’ structure emerging in a display of main aspects, each further explained by ‘sub-aspects’, sub-sub-aspects, etc. The outermost ‘leaves’ of the aspect tree would be the ‘criteria’ or objective performance variables, to which participants might carry the explanations of their judgment basis. (See the later section on criteria and criterion functions.) Is the possibility of doing that a factor in the insistence on the part of some people to ‘base decisions on facts’ — only — thereby eliminating ‘subjective’ judgments that can be explained only by listing more subjective aspects?

An important warning was made by Rittel in discussing ‘Wicked Problems’ long ago: The more different perspectives, explanations of a problem, potential solutions are entered into the discussion, the more aspects will appear claiming ‘due consideration’. The possible consequences of proposed solutions alone extend endlessly into the future. This makes it impossible for a single designer or planner, even a team of problem-solvers, to anticipate them all: the principle of assembling ‘all’ such aspects is practically impossible to meet. This is both a reminder to humbly abstain from claims to comprehensive coverage, and a justification of wide participation on logical (rather than the more common ideological-political) grounds: inviting all potentially affected parties to contribute to the discourse as the best way to get that needed information.

The need for more discussion of this subject, finally, is shown by the presence of approaches or attitudes that deny the need for evaluation ‘methods’ altogether. This takes different forms, ranging from calls for ‘awareness’ or general adoption of a new ‘paradigm’ or approach — ‘systems thinking’, holism, relying on ‘swarm’ guidance, etc. — to more specific approaches like Alexander’s Pattern Language, which suggests that using valid patterns (solution elements, not evaluation aspects) to develop plans will guarantee their validity and quality, thus making evaluation unnecessary.

One source of heuristic guidance to justify ‘stopping rules’ in the effort to assemble evaluation aspects may be seen in the weighting of relative importance given (as subjective judgments by participants) to the different aspects: if the assessment of a given aspect will not make a significant difference in the overall decision because that aspect is given too low a weight, is this a legitimate ‘excuse’ for not giving it a more thorough examination? (A later section will look at the weighting or preference ranking issue).
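One way to make such a stopping heuristic concrete (an illustration, not a rule endorsed in the text) would be to treat the product of the weights along an aspect’s path from the root of the aspect tree as its ‘effective weight’, and to defer deeper examination of branches falling below an agreed threshold:

```python
# Illustrative sketch of a weight-based 'stopping rule': an aspect's effective
# weight is taken as the product of the level weights along its path from the
# root; branches below an agreed threshold may be deferred. The threshold,
# aspect names and weights are hypothetical.

def effective_weight(path_weights):
    """Product of the level weights from the root down to a sub-aspect."""
    result = 1.0
    for w in path_weights:
        result *= w
    return result

THRESHOLD = 0.02   # agreed cutoff, assumed for illustration

candidates = {
    "cost / maintenance / cleaning": [0.3, 0.2, 0.1],
    "function / daylight": [0.5, 0.4],
}

for aspect, path in candidates.items():
    ew = effective_weight(path)
    verdict = "examine further" if ew >= THRESHOLD else "may be deferred"
    print(f"{aspect}: effective weight {ew:.3f} -> {verdict}")
```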

–o–