Archive for the 'Public policy discourse' Category

EVALUATION IN THE PLANNING DISCOURSE: SAMPLE EVALUATION PROCEDURES. EXAMPLE 1: FORMAL ‘QUALITY’ EVALUATION

Thorbjørn Mann,  January 2020

In the following segments, a few example procedures for evaluation by groups will be discussed, to illustrate how the various parts of the evaluation process are selectively assembled into a complete process aiming at a decision (or recommendation) about a proposed plan or policy, and to show how the different provisions and choices related to the evaluation task that are reviewed in this study can be assembled into practical procedures for specific situations. The examples are not intended to be universal recommendations for use in all situations. They all will — arguably — call for improvement as well as adaptation to the specific project and situation at hand.

A common evaluation situation is that of a panel of evaluators comparing a number of proposed alternative plan solutions to select or recommend the ‘best’ choice for adoption, or, if there is only one proposal, to determine whether it is ‘good enough’ for implementation. It is usually carried out by a small group of people assumed to be knowledgeable in the specific discipline (for example, architecture) and reasonably representative of the interests of the project client (which may be the public). The rationale for such efforts, besides aiming for the ‘best’ decision, is the desire to ensure that the decision will be based on good expert knowledge, but also to provide transparency, legitimacy and accountability of the process — to justify the decision. The outcome will usually be a recommendation to the actual client decision-makers rather than the adoption or implementation decision itself, based on the group’s assessment of the ‘goodness’ or ‘quality’ of the proposed plan, documented in some form. (It will be referred to as a ‘Formal Quality Evaluation’ procedure.)

There are of course many possible variations of procedures for this task. The sample procedure described in the following is based on the Musso-Rittel (1) procedure for the evaluation of the ‘goodness’ or quality of buildings.

The group will begin by agreeing on the procedure itself and its various provisions: the steps to be followed (for example, whether evaluation aspects and weighting should be worked out before or after presentation of the plan or plan alternatives), general vocabulary, judgment and weighting scales, aggregation functions both for individual overall judgments and group indices, and decision rules for determining its final recommendation.

Assuming that the group has adopted the sequence of first establishing the evaluation aspects and criteria against which the plan (or plans) will be judged, the first step will be a general discussion of the aspects and sub-aspects to be considered, resulting in the construction of the ‘aspect tree’ of aspects, sub-aspects, sub-sub-aspects etc. (ref. the section on aspects and aspect trees) and criteria (the ‘objective’ measures of performance; ref. the section on evaluation criteria). The resulting tree will be displayed and become the basis for scoring worksheets.

The second step will be the assignment of aspect weights (on a scale of zero to 1, such that at each level of the ‘tree’ the sum of weights at that level will be 1). Panel members will develop their own individual weightings. This phase can be further refined by applying ‘Delphi Method’ steps: establishing and displaying the mean / median and extreme weighting values, asking the authors of extremely low or high weights to share and discuss their reasoning for these judgments, and giving all members the chance to revise their weights.
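
As a minimal sketch of how such a weighted aspect tree and the Delphi-style weight statistics might be represented (the class name, the aspects and all numbers below are illustrative assumptions, not part of the procedure itself):

```python
# Minimal sketch (hypothetical names): an aspect tree whose sibling weights
# must sum to 1.0 at each level, plus simple Delphi-style statistics over
# the panel members' individual weight assignments for one aspect.
from dataclasses import dataclass, field
from statistics import mean, median
from typing import List


@dataclass
class Aspect:
    name: str
    weight: float = 1.0                      # relative weight among siblings (0..1)
    children: List["Aspect"] = field(default_factory=list)

    def check_weights(self) -> None:
        """Verify that at every level the sibling weights sum to 1."""
        if self.children:
            total = sum(c.weight for c in self.children)
            if abs(total - 1.0) > 1e-6:
                raise ValueError(f"Weights under '{self.name}' sum to {total}, not 1")
            for c in self.children:
                c.check_weights()


def delphi_summary(weights: List[float]) -> dict:
    """Mean, median and extreme values of one aspect's weights across the panel."""
    return {"mean": mean(weights), "median": median(weights),
            "low": min(weights), "high": max(weights)}


# Example: a small two-level tree and the panel's weights for one sub-aspect.
tree = Aspect("Overall quality", children=[
    Aspect("Function", 0.4, children=[Aspect("Circulation", 0.5), Aspect("Flexibility", 0.5)]),
    Aspect("Image/Delight", 0.35),
    Aspect("Cost", 0.25),
])
tree.check_weights()
print(delphi_summary([0.3, 0.4, 0.45, 0.7, 0.35]))  # authors of 0.7 / 0.3 would be asked to explain
```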

Once the weighted evaluation aspect trees have been established, the next step will be the presentation of the plan proposal or competing alternatives.

Each participant will assign a first ‘overall offhand’ quality score (on the agreed-upon scale, e.g. -3 to +3) to each plan alternative.

The group’s statistics of these scores are then established and displayed. This may help to decide whether any further discussion and detailed scoring of aspects will be needed: there may be a visible consensus for a clear ‘winner’. If there are disagreements, the group decides to go through with the detailed evaluation, and the initial scores are kept for later comparison with the final results. The detailed evaluation uses common worksheets or spreadsheets of the aspect tree, in which panel members fill in their weighting and quality scores. This step may involve the drawing of ‘criterion functions’ (ref. the section on evaluation criteria and criterion functions) to explain how each participant’s quality judgments depend on (objective) criteria or performance measures. These diagrams may be discussed by the panel. They should be considered each panel member’s subjective basis of judgment (or representation of the interests of factions in the population of affected parties). However, some such functions may be mandated by official regulations (such as building regulations). The temptation to urge adoption of common (group) functions (for ‘simplicity’ and expression of ‘common purpose’) should be resisted, to avoid possible bias towards the interests of some parties at the expense of others.
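
For illustration only, a criterion function of this kind might take a shape like the following sketch; the criterion chosen, the anchor values and the linear form are hypothetical, meant only to show how an objective performance measure could be translated into a quality score on the agreed -3 to +3 scale:

```python
# Hypothetical 'criterion function' sketch: mapping an objective performance
# measure (e.g. walking distance to the nearest exit, in meters) onto the
# agreed quality scale of -3 ('couldn't be worse') to +3 ('couldn't be better').
def quality_from_distance(distance_m: float,
                          best: float = 10.0,    # at or below this value: +3
                          worst: float = 60.0    # at or above this value: -3
                          ) -> float:
    """Linear interpolation between the 'best' and 'worst' anchor values."""
    if distance_m <= best:
        return 3.0
    if distance_m >= worst:
        return -3.0
    # linear slope between the two anchor points
    return 3.0 - 6.0 * (distance_m - best) / (worst - best)


print(quality_from_distance(10))   # +3.0
print(quality_from_distance(35))   #  0.0
print(quality_from_distance(60))   # -3.0
```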

Each group member will then fill in the scores for all aspects and sub-aspects etc. The results will be compiled, and the statistics compared; extreme differences in the scoring will be discussed, and members given the chance to change their assessments. This step may be repeated as needed (e.g. until there are no further changes in the judgments).

The results are calculated and the group recommendation determined according to the agreed-upon decision criterion. The ‘deliberated’ individual overall scores are compared with the members’ initial ‘offhand’ scores. The results may cause the group to revise the aspects, weights, or criteria (e.g. upon discovering that some critical aspect has been missed), or call for changes in the plan, before determining the final recommendation or decision (again, according to the initial procedural agreements).
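
Assuming the common weighted-sum form of aggregation (a choice the group would have had to agree on beforehand; the scores, composite weights and decision rule below are illustrative assumptions only), the calculation might look like this:

```python
# Sketch: each member's leaf-aspect scores are aggregated into an overall score
# as the weighted sum of leaf scores, using the product of the weights along
# each branch of the aspect tree as the leaf's composite weight.
from statistics import mean

# composite weights = product of branch weights; they must sum to 1 overall
composite_weights = {"Circulation": 0.4 * 0.5, "Flexibility": 0.4 * 0.5,
                     "Image/Delight": 0.35, "Cost": 0.25}

# each panel member's quality scores (-3..+3) per leaf aspect, per alternative
scores = {
    "member A": {"Plan 1": {"Circulation": 2, "Flexibility": 1, "Image/Delight": 3, "Cost": -1}},
    "member B": {"Plan 1": {"Circulation": 1, "Flexibility": 2, "Image/Delight": 0, "Cost": 1}},
}

def overall(member_scores: dict) -> float:
    """Weighted-sum aggregation of one member's leaf scores."""
    return sum(composite_weights[a] * s for a, s in member_scores.items())

individual = {m: overall(plans["Plan 1"]) for m, plans in scores.items()}
print(individual)                                  # 'deliberated' overall scores per member
print("group mean:", mean(individual.values()))    # one possible group statistic
# decision rule (agreed beforehand), e.g.: recommend Plan 1 if the group mean >= +1.0
```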

The steps are summarized in the following ‘flow chart’.

Evaluation example 1: Steps of a ‘Group Formal Quality Evaluation’

Questions related to this version of a formal evaluation process may include the issue of potential manipulation of weight assignments by changing the steepness of the criterion function.
Ostensibly, the described process aims at ‘giving due consideration’ to all legitimately ‘pertinent’ aspects, while eliminating or reducing the role of ‘hidden agenda’ factors. Questions may arise whether such ‘hidden’ concerns might be hidden behind other plausible but inordinately weighted aspects. A question that may arise from discussions and argumentation about controversial aspects of a plan and the examination of how such arguments should be assessed (ref. the section on a process for Evaluation of Planning Arguments) is the role of plausibility judgments about the premises of such arguments: especially the probability of assumption claims that a plan will actually result in a desired or undesired outcome (an aspect). Should the ‘quality’ assessment process include a modification of quality scores based on plausibility / probability scores, or should this concern be explicitly included in the aspect list?
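
Purely to make that question concrete, and without recommending either answer, one of the options asked about, modifying a quality score by a plausibility factor, could look like this minimal sketch:

```python
# Illustrative option only: discount each aspect's quality score by the judge's
# plausibility (0..1) that the plan will actually produce that outcome, so an
# 'excellent but doubtful' result counts for less than a 'good, near-certain' one.
def discounted_score(quality: float, plausibility: float) -> float:
    """quality on -3..+3; plausibility on 0..1 (1 = virtually certain)."""
    if not 0.0 <= plausibility <= 1.0:
        raise ValueError("plausibility must be between 0 and 1")
    return quality * plausibility


print(discounted_score(+3.0, 0.5))   # 1.5: great outcome, but only half plausible
print(discounted_score(+2.0, 0.9))   # 1.8: slightly better overall
```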

The process may of course seem ‘too complicated’, and if done by ‘experts’, invite critical questions whether the experts really can overcome their own interests, biases and preconceptions to adequately consider the interests of other, less ‘expert’ groups. The procedure obviously assumes a general degree of cooperativeness in the panel, which sometimes may be unrealistic. Are more adequate provisions needed for dealing with incompatible attitudes and interests?

Other questions? Concerns? Missing considerations?

–o–

EVALUATION IN THE PLANNING DISCOURSE: ASPECTS and ‘ASPECT TREES’

An effort to clarify the role of deliberative evaluation in the planning and policy-making process.  Thorbjørn Mann,  January 2020

The questions surrounding the task of assembling ‘all’ aspects calling for ‘due consideration’.

 

ASPECTS AND ASPECT TREE DISPLAYS

Once an evaluation effort begins to get serious about its professed aims (deliberating, making overall judgments a transparent function of partial judgments, ‘weighing all the pros and cons’, trying not to forget anything significant, avoiding omissions that could lead to ‘unexpected’ adverse consequences of a plan but that could be anticipated with some care), the people involved will begin to create ‘lists’ of items that ‘should be given due consideration’ before making a decision. One label for these things is ‘aspects’, a term originally meaning just looking at the object (plan) to be decided upon from different points of view.

A survey of different approaches to evaluation shows that there are many different such labels ‘on the market’ for these ‘things to be given due consideration’. And many of them — especially those of the many evaluation, problem-solving and systems-change consultant brands that compete for commissions to help companies and institutions cope with their issues — come with very different recommendations for the way this should be done. The question for the effort to develop a general public planning discourse support platform for dealing with projects and challenges that affect people in many governmental and commercial ‘jurisdictions’ — ultimately: ‘global’ challenges — then becomes: How can and should all these differences in the way people talk about these issues be accommodated in a common platform?

Whether a common ground for this can be found — or a way to accommodate all the different perspectives, if a common label can’t be agreed upon — depends upon a scrutiny of the different terms and their procedural implications. This is a significant task in itself, one for which I have not seen much in the way of inquiry and suggestions (other than each ‘brand’s’ recommendation to adopt ‘their’ terms and approach). So raising this question might be the beginning of a sizable discussion in itself (or of a survey of existing work I haven’t seen). Pending the outcome of such an investigation, many of the issues raised for discussion in this series of evaluation issues will continue to use the term ‘aspect’, with apologies to proponents of other perspectives.

This question of diversity of terminology is only one reason why discussion is needed, however. Another reason has to do with the possibility of bias in the very selection of terms, depending on the underlying theory or method, or on whether the perspective is focused on some ‘movement’ that by its very nature puts one main aspect at the center of attention (‘competitive strength and growth’, ‘sustainability’, ‘regeneration’, ‘climate change’, ‘globalization’ versus ‘local culture’, etc.). There are many efforts to classify or group aspects — starting with Vitruvius’ three main aspects ‘firmness, convenience and delight’, to the simple ‘cost, benefit, and risk’ grouping, or the recent efforts that encourage participants to explore aspects from different groups of affected or concerned parties, mixed in with concepts such as ‘principles’, best and worst expected outcomes, etc., shown in a ‘canvas’ poster for orientation. Do these efforts encourage contribution of information from the public, or do they give the impression of adequate coverage while inadvertently missing significant aspects? It seems that any classification scheme of aspects is likely to end up neglecting or marginalizing some concerns of affected parties.

Comparatively minor questions are about potential mistakes in applying the related tools: listing preferred or familiar means of plan implementation as aspects representing goals or concerns, for example, or listing essentially the same concern under different labels (and thus weighting it twice…). The issue of functional relationships between different aspects — a main concern of systems views of a problem situation — is one that is often not well represented in the evaluation work tools. A major potential controversy is, of course, the question of who is doing the evaluation, whose concerns are represented, and what sources of information a team will draw upon to assemble the aspect list.

It may be useful to look at the expectations for the vocabulary and its corresponding tools: Is the goal to ensure ‘scientific’ rigor, or to make it easy for lay participants to understand and contribute to the discussion? To simplify things, or to ensure comprehensive coverage? Which vocabulary facilitates further explanation (sub-aspects etc.) and, ultimately, showing how valuation judgments relate to objective criteria — performance measures?

Finally: given the number of different ‘perspectives’, how should the platform deal with the potential for biased ‘framing’ of discussions by the sequence in which comments are entered and displayed — or is this a concern that should be left to the participants in the process, while the platform itself remains as ‘neutral’ as possible, even with respect to potential bias or distortions?

The ‘aspect tree’ of some approaches refers to the hierarchical ‘tree’ structure emerging in a display of main aspects, each further explained by ‘sub-aspects’, sub-sub-aspects etc. The outermost ‘leaves’ of the aspect tree would be the ‘criteria’ or objective performance variables, down to which participants might carry the explanations of their basis of judgment. (See the later section on criteria and criterion functions.) Is the possibility of doing that a factor in the insistence on the part of some people to ‘base decisions on facts’ — only — thereby eliminating ‘subjective’ judgments that can be explained only by listing more subjective aspects?

An important warning was made by Rittel in discussing ‘wicked problems’ long ago: the more different perspectives, explanations of a problem, and potential solutions are entered into the discussion, the more aspects will appear claiming ‘due consideration’. The possible consequences of proposed solutions alone extend endlessly into the future. This makes it impossible for a single designer or planner, or even a team of problem-solvers, to anticipate them all: the principle of assembling ‘all’ such aspects is practically impossible to meet. This is both a reminder to humbly abstain from claims of comprehensive coverage, and a justification of wide participation on logical (rather than the more common ideological-political) grounds: inviting all potentially affected parties to contribute to the discourse is the best way to get that needed information.

The need for more discussion of this subject, finally, is shown by the presence of approaches or attitudes that deny the need for evaluation ‘methods’ altogether. This takes different forms, ranging from calls for ‘awareness’ or general adoption of a new ‘paradigm’ or approach — such as ‘systems thinking’, holism, or relying on ‘swarm’ guidance — to more specific approaches like Alexander’s Pattern Language, which suggests that using valid patterns (solution elements, not evaluation aspects) to develop plans will guarantee their validity and quality, thus making evaluation unnecessary.

One source of heuristic guidance to justify ‘stopping rules’ in the effort to assemble evaluation aspects may be found in the weighting of relative importance given (as subjective judgments by participants) to the different aspects: if the assessment of a given aspect will not make a significant difference in the overall decision because that aspect is given too low a weight, is this a legitimate ‘excuse’ for not giving it a more thorough examination? (A later section will look at the weighting or preference ranking issue.)
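
As a purely illustrative sketch of such a heuristic, assuming a weighted-sum aggregation on a -3 to +3 judgment scale (the numbers are hypothetical), the maximum influence an aspect could have on the outcome might be estimated like this:

```python
# Heuristic sketch for a 'stopping rule' (assumes weighted-sum aggregation on a
# -3..+3 scale): an aspect with composite weight w can shift an alternative's
# overall score by at most w * (range of the judgment scale). If even that
# maximum swing cannot change the ranking of the alternatives, further detailed
# examination of the aspect could arguably be skipped.
def max_swing(composite_weight: float, scale_min: float = -3.0, scale_max: float = 3.0) -> float:
    return composite_weight * (scale_max - scale_min)

lead = 0.8                   # current overall-score lead of the front-running alternative
w = 0.05                     # composite weight of the aspect in question
print(max_swing(w))          # approx. 0.3: even the best/worst case cannot close a 0.8 gap
print(max_swing(w) < lead)   # True: a candidate for the 'excuse' the text asks about
```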

–o–

The Agenda of Many Important but Connected Issues

Are the agenda platforms of governance candidates that consist of single ‘highest priority’ issues realistic? Aren’t all the issues so tightly connected that none can be resolved without the others?
Attempting to understand, I see this chain:

1 Humanity is confronted by many unprecedented challenges to its survival.

2 There is little if any agreement about how these problems should be addressed.

3 There is a growing sense that current systems of governance are inadequate to address and convincingly resolve these problems: Calls are raised for ‘systemic change’ and ‘a new system’.

4 While there are many well-intentioned theories, initiatives, experiments already underway, to develop new ways of doing things in many domains,

5 There is little if any agreement about what such a ‘new system’ should look like, and very different ideas are promoted in ways that seem more polarizing than unified. We — humanity — do not yet know what works and what does not work: some major ‘systems’ that were tried over recent centuries have turned into dramatic failures.

6 There is much promotion of the many ‘new’ and old ideas, but not enough communication and sharing of experiences among the initiatives for discussion, evaluation and cooperative adoption. Meanwhile, the crises intensify.

So, before attempting another grand system based on inadequate understanding and acceptance, whose failure we cannot afford, it seems that a number of steps are needed:

7 Encouraging the many diverse (usually small scale, local) initiatives and experiments;

8 Supporting these efforts (financially and with information and other resources) regardless of their differences, on condition of their acceptance of some agreements:
a) to avoid getting in each other’s way;
b) to share information about their experiences: successes and failures, for systematic discussion and evaluation, into a common resource repository;
c) to cooperate in a common discourse aiming at necessary (even if just intermediate) decisions — the common ‘rules of the road’ to avoid conflict and facilitate mutual aid in emergencies and system failures.

9 To facilitate the aims in point 8, it will be necessary to develop
a) a common ‘global’ discourse platform accessible to all parties affected by an issue or problem
b) with a system of meaningful incentives for participation, to access all information and concerns that must be given ‘due consideration’ in decisions;
c) with adequate translation support not only between different natural languages but also for disciplinary ‘jargon’ into conversational language;
d) new tools for assessment of the merit of information,
e) and new decision-making criteria and procedures based on the merit of contributions (since traditional voting will be inapplicable to issues affecting many parties in different ways across traditional boundaries that define voting rights).

10 It will also be necessary to develop
a) new means for ensuring that common agreements reached will actually be adhered to. Especially at the global level, these tools cannot be based on coercive ‘enforcement’ (which would require an entity endowed with greater power and force than any potential violator — a force which then would itself become vulnerable to the temptation of abuse of power that arguably is one of the global challenges). Instead, development should aim at
b) preventive sanctions triggered by the very attempt at violation, and
c) other innovative means of control of power.

I submit that all of these considerations will have to be pursued simultaneously: without them, any attempt to resolve or mitigate the crises and problems (point 1) is likely to fail. The agenda of governance agencies and candidates for public office should include the entire set of interlinked aspects, not just isolated ‘priority’ items. Of course I realize that the practice of election campaign posters, 30-second ads or Twitter posts effectively prevents the communication of comprehensive platforms of this nature. What can we realistically hope for?

EVALUATION IN THE PLANNING PROCESS: EVALUATION TASKS


An effort to clarify the role of deliberative evaluation in the planning and policy-making process

Thorbjoern Mann

EVALUATION TASKS / SITUATIONS

The necessity for this review of evaluation practices and tools arises from the fact that evaluation tasks, judgments and related activities occur at many stages of planning projects. A focus on the most common task, the evaluation of a proposed plan or a set of plan alternatives in preparation for the final decision, may hide the role and impact of many judgments along the way, where explicitly or implicitly not only different labels but also very different vocabularies, tools and principles are involved. Is it necessary to look at these differences, to ask whether there should be more of an effort at coordination and a common vocabulary in the set of working agreements for a project?

This section will at least raise that question and begin to explore the different disguises of evaluation acts throughout the planning process, as a first step towards answering it.

Many plans are started as extensions of routine ‘maintenance’ activities on existing processes and systems, using established performance measures as indicators of a need for extraordinary steps to ensure the continued desirable function of the system in question. In such tasks, the selected performance criteria, their threshold values demanding action and most of the expected remedial steps and means, are part of the factual ‘current conditions’ data basis of further planning.

To what extent are these data understood as part of the planning project — either as ‘given’ aspects or as needing revision, discussion, change — when the situation is so unprecedented as to call for activities going beyond the routine maintenance concerns? Such situations are often referred to as ‘problems’, which tends to trigger a very different way of talking. There are many different ‘definitions’, views and understandings of problems, as well as different problem types. To what extent is an evaluation group’s decision to talk about the situation as a problem, or as a specific problem type, already an evaluative act? This holds even for adopting a view of ‘problem’ as a discrepancy, perceived by somebody(!), between an existing ‘IS’ state of affairs and a view of what that state ‘OUGHT’ to be, calling for ideas about ‘HOW’ to get from the IS to the OUGHT.

Judgments about what ‘is’ the case do call for judgments, perhaps even measurements, of current conditions: assessments of factual matters, even as those are perceived — again, by whom? — as ‘NOT-Ought’. Judgments specifying the OUGHT — ‘goals’, ‘visions’, ‘desirable’ states of affairs — belong to the ‘deontic’ realm, much as this is often obscured by the invocation of ‘facts’ in the form of authorities and of polls of the percentages of populations ‘wanting’ this or that ‘OUGHT’: the ‘good’ they are after. Judgments about the ‘HOW’ — the means, tools, etc. to reach those goals — may look like ‘factual-instrumental’ judgments, but they also get into the deontic realm: some possible ‘means’ are decidedly NOT what we OUGHT to do, no matter how functionally effective they seem to be.

The ‘authority’ sources of judgments that participants in planning will have to consider come in the form of laws and ‘regulations’. Accepted as ‘givens’, they may be helpful in defining and constraining the ‘solution space’ for the development of the plan. But they often ‘don’t fit the circumstances’ of a current planning situation, and raise questions about whether to apply for a ‘variance’, an exception to a rule. Of course, any regulation is itself the outcome of an evaluation or judgment process — one that may be acknowledged but usually not thoroughly examined by the planners of a specific project. The temptation is, of course, to ‘accept’ such regulations as the critical performance objective (‘to get the permit’), conveniently forgetting that such regulations usually specify m i n i m a l performance expectations. They usually focus on meaningful concerns such as safety and conformance to setback and functional performance conventions, while neglecting or drawing attention away from other issues such as aesthetics, sustainability, or the environmental or mental health impact of the resulting ‘permitted’ but in many other ways quite mediocre or outright undesirable solutions.

Other guidance tools for the development of the plan — buildings, urban environments, but also general societal policies and policy implementation efforts — are the ‘programs’ (‘briefs’) and equivalent statements about the desired outcome. One main consideration of such statements is to describe the scope of the plan (for buildings: how many spaces, their sizes and functions, etc.) in relation to the constraint of the budget. In many cases, such descriptions are in turn guided by ‘standards’ and norms for similar uses, in each case moving responsibility for the evaluation judgments onto a different agency: asking for the basis of judgment behind such expectations becomes a complex task in itself.

The ‘participation’ demand for involving the eventual users, citizens, and affected parties in these processes seems to take two main forms: one being general surveys, asking the participants to fill out questionnaires that try to capture expectations and preferences; the other being ‘hearings’ in connection with the presentation of in-progress ‘option’ decisions or final plans. Do the different methodological bases and treatments of these otherwise laudable efforts raise questions about their ultimate usefulness in nurturing the production of ‘quality’ plans?

The term ‘quality’ is a key concern of a very different approach to design and planning — one that explicitly denies the very need for ‘method’ in the form of systematic evaluation procedures. This is the key feature (from the current point of view) of the ‘Pattern Language’ by Christopher Alexander. Its promise (stated briefly, at the risk of unfair distortion) is that using ‘patterns’ such as the design precepts for building and town planning in his book ‘A Pattern Language’ in the development of the plan will ‘guarantee’ an outcome that embodies the ‘quality without a name’ — including many of the aspects not addressed by the ‘usual’ design process and its regulation- and function-centered constraints.

This move seems to be very appealing to designers (surprisingly, even more so in other domains such as computer programming than in architecture) — any outcome produced in the proper way with the proper patterns is thereby ‘good’ (‘has the quality’) and does not need further evaluation. Not discussed, as far as I can see, is the fact that the evaluation issue is merely moved to the process of suggesting and ‘validating’ the patterns — in the building case, by Alexander and his associates, as assembled in the book. Is the admirable and very necessary effort to bring those missing quality issues back into the design and planning process and discussion undercut by the removal of the evaluation problem from that discussion?

The Pattern Language example should make it very clear how drastically the treatment of the evaluation question could influence the process and decision-making in the planning process.

Comments: Missing items / issues? Wrong question?

–o–

EVALUATION, DELIBERATION IN THE PLANNING DISCOURSE

An effort to clarify the role of deliberative evaluation in the planning and policy-making process
Thorbjoern Mann

EVALUATION / DELIBERATION

‘Evaluation’ and its related term ‘deliberation’ are understood in many different ways. A simple view is just the act of making a value judgment about something — about a plan: is it ‘worth’ implementing? To many, it evokes a somewhat cumbersome, bureaucratic process that itself constitutes a problem. Seen from the perspective of theories like the Pattern Language, for example, it is a ‘method’ from which the Pattern Language ‘frees’ the designer: not needed, even ‘part of the problem’ of misguided design and planning processes. So does the idea need some clarification and discussion?

Some answers to this question might be found by examining the reasons people feel such efforts are necessary. They begin with trying to make up one’s own mind when facing a somewhat complicated situation and plan: trying to consider all pertinent aspects, all significant causes of the problem a plan is supposed to fix, as well as its possible consequences, its ‘pros and cons’; trying not to forget important details, expected benefits, and the costs and risks if things don’t turn out quite as we might wish.

Such ‘mulling’ about the task in order for an individual person to arrive at a judgment may not require a very systematic and orderly process. Things may be somewhat different when we are then asked to explain or justify our judgments to others, and even more so when participants in a project discourse try to get other parties not only to become aware of their concerns and judgments, but even to give them ‘due consideration’ in making decisions. Or when clients or users ask designers, planners and ultimate decision-makers to make the decisions in developing the plan ‘on their behalf’: the burden of explanation (of what they would consider a viable answer to their needs or wishes) falls first on the former, and then on the latter, in pointing out how the plan’s features will meet those expectations. The common denominator: explaining the basis of one’s judgment to others, for the purpose of justification or persuasion — to accept the plan. The basic pattern in that process is to show how o v e r a l l judgments or quality scores depend on various p a r t i a l judgments, or ultimately on some ‘objective’, quantifiable features (‘criteria’) of the plan. (The very term ‘objective’, used to assert its distinction from ‘subjective’ judgments and ‘opinions’, is of course itself a major controversy, to be dealt with in a later segment.)

The shift of burden of explanation mentioned above is an indicator of a fact that is often overlooked in discussions about evaluation issues: that evaluation occurs in many different shapes and forms, in many different stages all along the planning process, not just in the final occasions of accepting or rejecting a proposed plan, or selecting ‘the best’ of a set of proposed alternatives by a competition jury. Should a better coordination be developed between those different events, and the often very different terms used?

The claims and arguments used in the different evaluation tasks use different terms, and draw on different sources and methods for obtaining the ‘evidence’ for claims and arguments. The near obsession with ‘data’ (or ‘facts’) in this connection overshadows the problems associated with the relationships between facts describing the current ‘problem’ situations to be remedied, the ‘facts’ about the expectations, concerns, wishes, needs of different groups in the affected populations (which themselves are not ‘facts’ …yet) and the ‘facts’ (but also just estimates, predictions) generated by systems models about the ‘whole system’ in which current problem, plans and future consequences are embedded.

A final aspect should be mentioned in this connection. There will be, in real life, many situations in which people, leaders and others, will be called upon to make quick decisions, with no time for lengthy public discourse. These decisions will be ‘intuitive’, often ‘offhand’ decisions for which there is insufficient information upon which they could reasonably be based. We expect that such decisions must be made by people whose (intuitive?) judgment can be trusted. This suggests that we think some people have ‘better’ intuitive judgment than others. So where does better intuition, better judgment, come from? Experience with similar situations is one likely source. There are claims that having experienced the process of organized, systematic deliberation and evaluation may also contribute to improving decision-makers’ quality of intuitive judgment. What is the evidence for this, and what implications, if any, should be considered?

Given the speculative nature of many of these considerations, it seems that there is a need for more thorough study and discussion of these issues: what are the implications of the assumptions we make for the design of better planning discourse platforms? What other aspects should be added to the picture?

–o–

Abbé Boulah’s Hack-Rigged Funding Scheme

In the Fog Island Tavern:

Hey Vodçek — has Abbé Boulah been in today?

And a good morning to you, too, Bog-Hubert. No, haven’t seen him yet. What’s stirring your urgency to see him?

Well, it’s a mystery. I was out in the Gulf trying to get to Rigatopia — you know, the new refugee society on that abandoned rig — when my GPS conked out and I had to navigate by compass, the old-fashioned way. Turns out I’d forgotten to re-set the compass declination to the new position of that wandering magnetic North Pole. So I ran into a different rig nearby, but was warned off by radio not to go anywhere near it. Secret prison rehab project or something. Near Rigatopia? Sounded like another one of Abbé Boulah’s crazy schemes: do you know anything about it?

Ah, Bog-Hubert: You’ve been over in your Tate’s Hell bog cooking stuff too long. Yes, it’s Abbé Boulah’s new project. He’s gotten another abandoned oil rig for it. But this one has prison inmates on it, working on a new kind of ‘community service’ to try to get reduced sentences.

Doesn’t surprise me. Abbé Boulah again. Prison inmates? What kind of community service — there’s no community out there? And why secret?

Patience. Remember Abbé Boulah and his friend up in town working on that global planning discourse project?

Of course:  I was working on that one too.

Oh yes, I forgot. Well, one day, here, Abbé Boulah was talking about it with a guy who turned out to be a bit of a planning-discourse-cum-argument-evaluation sceptic — a ‘NAA’, a ‘never-argue arguer’, Abbé Boulah calls them. This one thought it’d be impossible to get any serious mainstream company to write the programs for that kind of public platform, or to fund the implementation even for small local prototypes. So Abbé Boulah sat there fuming for a while, using up a good part of my Sonoma Zinfandel supply, and came up with this idea to get imprisoned hackers to work on the project. You know, some of those brilliant computer freaks who were caught showing humanity how naively vulnerable our precious IT systems have become.

Brilliant guys, eh — but not brilliant enough to avoid getting caught?

Well, turns out some of them were sold out by their peer hackers. You know there’s fierce and unfair competition even in that murky kingdom too. And I think the FBI has hired some such experts…

But hey, sounds like an interesting idea?

We’ll see how it works out. Anyway, he got some judges convinced that incarcerating these people at great expense to society is a sinful waste of brilliant minds and public money, and got them to set up a program offering these guys reductions of their sentences if they worked on writing the programs needed for this project, and for similar projects. So Abbé Boulah got a friend of his — a brilliant fellow, once a student of his buddy up in town, who’s been busy getting people whose lives have been disrupted by all the stupid wars in the Middle East to learn programming so they can get well-paying jobs — to rehab another abandoned rig, where these people can be kept safely to work on that project.

Hmm. Sounds a little like putting the fox to work on guarding the hen-house, though?

Well, the things they come up with will be thoroughly tested, of course.

Tested, how?

Easy. They put separate hackers or hacker teams to work on trying to hack the system designs. Promising those guys rewards — three years of their sentence off — if they can break the competition’s system… And vice versa. Anyway, it’s an inexpensive way to get those programs written, putting those minds to productive use on work no other company wants to do. And it possibly gives those people not only a chance of keeping their skills honed but also a better chance of a rehabilitated, legitimate existence once released.

So Abbé Boulah is out there on that rig now, is that what you are saying?

Not sure. He’s a difficult fellow to keep track of. Of course somebody has to tell those guys what the platform is supposed to do. And he’s getting some sailing and fishing in on his breaks…

I knew it. Having a great time doing interesting stuff… I’ll drink to that.

Cheers!

—o—

EVALUATION IN THE PLANNING DISCOURSE: ISSUES, CONTROVERSIES, (OVERVIEW)

Thorbjoern Mann

An effort to clarify the role of deliberative evaluation in the planning and policy-making process.

Many aspects of evaluation-related tasks in familiar approaches and practice call for some re-assessment and improvement, even for practical applications in current situations. These will be discussed in more detail in sections addressing requirements and tools for practical application. Others are more significant in that they end up questioning the entire concept of deliberative evaluation in planning on a ‘philosophical’ level, or resist the adoption of smaller, detailed improvements of the first (practical) kind because they may mean abandoning familiar habits based on tradition and even constitutional provisions.

The very concept of deliberative evaluation — as materialized in procedures and practices that look too cumbersome, bureaucratic and elitist (‘expert-model’) to many — is an example of a fundamental issue that can significantly flavor and complicate planning discourse. The desire to do without such ‘methods’ is theoretically and emotionally supported by appeals to civic or patriotic consensus and unity of purpose, and even by ideas such as swarm behavior or the ‘wisdom of crowds’ that claim to produce ‘good’ solutions and community behavior more effortlessly. A related example is the philosophy behind Christopher Alexander’s ‘Pattern Language’. Its claim is that using patterns declared ‘valid’ and ‘good’ (having the ‘Quality Without a Name’, ‘QWAN’) in developing plans and solutions, e.g. for buildings and neighborhoods, will produce overall solutions that are ‘automatically’ valid and good, and thus require no evaluation ‘method’ at all to validate them. Does that claim hold?

A related issue is the one about ‘objective’ measurement, facts and ‘laws’ (akin to natural laws) as opposed to ‘subjective’ opinion. Discussion, felt to consist mainly of the latter, ‘mere opinions’, difficult to measure and thus lacking reliable tools for the resolution of disagreement, is seen as too unreliable a basis for important decisions.

On a more practical level, there is the matter of ‘decision criteria’ that are assumed to legitimize decisions. Simple tools such as voting ratios — even for votes following the practice of debating the pros and cons of proposed plans — in reality result in the concerns of significant parts of affected populations (the minority) being effectively ignored, even though the practice is accepted as eminently ‘democratic’ (and used by authoritarian regimes as a smokescreen). Is the call for reaching decisions better and more transparently based on the merit of discourse contributions and ‘due consideration’ of all aspects promising, but in need of different tools? What would those look like?

An understanding of ‘deliberation’ as the process of making an overall judgment (of goodness, value, acceptability etc.) a function of partial judgments raises questions of ‘aggregation’: how do or should we convert the many partial judgments into overall judgments? How should the many individual judgments of members of a community be ‘aggregated’ into overall ‘group’ judgments, or into indicators of the distribution of individual judgments, that can guide the community’s decision on an issue? Here, too, traditional conventions need reconsideration.
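
To make that choice concrete, here is a small sketch (the aspects, weights and numbers are purely illustrative assumptions) contrasting a compensatory weighted average, where a high partial judgment can offset a low one, with a non-compensatory ‘weakest link’ rule, and a group mean with a ‘worst-off member’ indicator:

```python
# Illustrative aggregation choices: compensatory weighted average vs. a
# non-compensatory 'weakest link' rule for one person's partial judgments,
# and two ways of turning individual overall judgments into a group indicator.
from statistics import mean

partials = {"safety": -2.0, "cost": 3.0, "delight": 3.0}   # scores on -3..+3
weights  = {"safety": 0.4, "cost": 0.3, "delight": 0.3}     # sum to 1

weighted_avg = sum(weights[a] * s for a, s in partials.items())  # approx. +1.0: low safety is offset
weakest_link = min(partials.values())                            # -2.0: low safety is not offset
print(weighted_avg, weakest_link)

individual_overalls = [1.2, 0.8, -2.5, 2.0]        # four members' overall judgments
print("group mean:", mean(individual_overalls))    # compensatory across members
print("group min :", min(individual_overalls))     # protects the worst-off member
```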

These issues and controversies need to be examined not only individually but also in terms of how they relate to one another and how they should guide evaluation procedures in the planning discourse. The diagram shows a number of them and some relationships adding to the complexity; there are probably more that should be added to the list.

Additions, connections, comments?
–o–