EVALUATION IN PLANNING DISCOURSE: DECISION CRITERIA

Thorbjørn Mann, January 2020

DECISION CRITERIA

The term ‘decision criteria’ needs explanation, so as not to be confused with the ‘evaluation criteria’ used for the task of explaining one’s subjective ‘goodness’ (or ‘quality’) judgment about a plan or object by showing how it relates to an ‘objective’ criterion or performance measure (in section /post …). The criteria that actually determine or guide decisions may be very different from those ‘goodness’ evaluation criteria, even though the aim of the entire effort here is to reach decisions that are based more on the merit of the discourse contributions that clarify ‘goodness’.

For discourse aiming at actual actions to achieve changes in the real world we inhabit: when discussion stops, after all aspects etc. have been assessed and individual quality judgment scores have been aggregated into individual overall scores and into group statistics about the distribution of those individual scores, a decision or recommendation has to be made. The question then arises: what should guide that decision? The aim of “reaching decisions based on the merit of discourse contributions” can be understood in many different ways, of which actual ‘group statistics’ are only one, not least because there are several such statistical indicators. (It is advisable not to use the term ‘group judgment’ for this: the group or set of participants may make a collective decision, but there may be several factions within the group for which any single statistic is not representative; and the most familiar decision criterion in use is the ratio of votes for or against a plan proposal, which may have little if any relation to the group members’ judgments about the plan’s quality.)

The following is an attempt to survey the range of different group decision criteria or guiding indicators that are used in practice, in part to show why the planning discourse for projects that affect many different governance entities (and, ultimately, decisions of a ‘global’ nature) calls for different decision guides than familiar tools such as majority voting.

A first distinction must be made between decision guides we may call ‘plan quality’-based, and those that are more concerned with the discourse process.

Examples of plan quality-based indicators are of course the different indicators derived from the quality-based evaluation scores:
–  Averaged scores of all ‘Quality’ or ‘Plausibility’ (or combined) judgment scores of participating members;
–  ‘Weighted average’ scores (where the manner of weighting becomes another controversial issue: degree of ‘affectedness’ of different parties? Number of people represented by participating group representatives? Number of stock certificates held by stockholders?…)
–  As the extreme form of ‘weighting’ participant judgments: the ‘leader’s’ judgment;
–  The judgment of ‘worst-off’ participants or represented groups (the ‘Max-min’ criterion for a set of alternatives);
–  The Benefit-Cost Ratio;
–  The criterion of having met all ‘regulation rules’ — which usually are just ‘minimal’ expectation considerations (‘to get the permit’) or thresholds of performance, such as ‘coming in under the budget’;
–  Successive elimination of alternatives that show specific weaknesses for certain aspects, such that the remaining alternative will become the recommended decision. A related criterion applied during the plan development would be the successive reduction of the ‘solution space’ until there is only one remaining solution with ‘no alternative’ remaining.
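The quantitative indicators in the list above can be made concrete with a small sketch. This is only an illustration: the scoring scale, participant weights, and plan names are invented here, and the essay itself stresses that the choice among these indicators (and of the weights) is precisely what is controversial.

```python
# Illustrative sketch of several 'plan quality'-based decision
# indicators from the list above. All data here is invented.

def average_score(scores):
    """Plain average of all participants' overall judgment scores."""
    return sum(scores) / len(scores)

def weighted_average(scores, weights):
    """'Weighted average': the choice of weights (affectedness,
    people represented, shares held...) is itself a controversial issue."""
    return sum(s * w for s, w in zip(scores, weights)) / sum(weights)

def maximin_choice(alternatives):
    """'Max-min' criterion: pick the alternative whose worst-off
    participant judgment is the least bad."""
    return max(alternatives, key=lambda name: min(alternatives[name]))

def benefit_cost_ratio(benefit, cost):
    """Benefit-Cost Ratio for a single plan."""
    return benefit / cost

# Three participants judging two alternative plans on a -3..+3 scale:
plans = {"Plan A": [2.0, 1.0, -1.0], "Plan B": [1.0, 1.0, 0.5]}

print(average_score(plans["Plan A"]))            # mean of A's scores
print(weighted_average(plans["Plan A"], [3, 1, 1]))  # (6+1-1)/5 = 1.2
print(maximin_choice(plans))                     # B: worst score 0.5 beats A's -1.0
```

Note how the simple average favors Plan A while the max-min criterion favors Plan B: the indicators can point to different decisions for the same set of judgments, which is the essay's point about why the choice of criterion matters.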

Given the burdensome complexity of more systematic evaluation procedures, many ‘process-based’ criteria are preferred in practice:

– Majority voting, in various forms, with ‘consensus’ (i.e. 100% approval) as the extreme;
– ‘Consent’, understood not so much as approval but as acceptance with reservations that are either not voiced or do not convince a majority. (Sometimes only achieved / invoked in live meetings by determinations such as ‘time’s up’ or ‘no more objections to the one proposed motion’);
– ‘Depth and breadth’ of the discussion (but without assessment of the validity or merit of the contributions making up that breadth or depth);
– ‘All parties having been heard / given a chance to voice their concerns’;
– Agreed-upon (or institutionally mandated) procedures and presentation requirements having been followed, legitimating approval, or violated, leading to rejection, e.g. of competing alternatives. (‘Handed in late’ means ‘failed assignment’…)
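The voting-based rules in this list differ only in the approval threshold they demand. A minimal sketch, with invented votes and thresholds, shows the spectrum from simple majority to full consensus; note that, as argued above, these votes need have no relation to the voters' quality judgments about the plan.

```python
# Illustrative sketch of threshold-based process criteria:
# majority voting in various forms, with 'consensus' (100%
# approval) as the extreme. Vote data is invented.

def approval_ratio(votes):
    """votes: list of True (for) / False (against) the proposal."""
    return sum(votes) / len(votes)

def passes(votes, threshold=0.5):
    """threshold=0.5: simple majority; 2/3: supermajority;
    1.0: full consensus (every vote must be in favor)."""
    if threshold >= 1.0:
        return all(votes)
    return approval_ratio(votes) > threshold

votes = [True, True, True, False]    # 75% approval

print(passes(votes))                 # simple majority: True
print(passes(votes, threshold=2/3))  # supermajority: True (0.75 > 0.667)
print(passes(votes, threshold=1.0))  # consensus: False
```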

Of course, combinations of these criteria are possible. Does the variety of possible resulting decision criteria emphasize the need for more explicit and careful agreements: establishing clear, agreed-upon procedural rules at the outset of the process? And for many projects, there is a need for better decision criteria. A main reason is that in many important projects affecting populations beyond traditional governance boundaries (e.g. countries), traditional decision determinants such as voting become inapplicable, not only because votes may be based on inadequate information and understanding of the problem, but simply because the number of people having the ‘right to vote’ becomes indeterminate.

A few main issues or practical concerns can be seen to guide the selection of decision criteria: the principle of ‘coverage’ of ‘all aspects that should be given due consideration’ on the one hand, and the desire for simplicity, speed and clarity on the other. The first is aligned with either trust or demonstration (‘proof’) of fair coverage: ‘accountability’; the second with expediency. Given the complexity of ‘thorough’ coverage of ‘all’ aspects, explored in previous segments, it should be obvious that full adherence to this principle would call for a decision criterion based on the fully explained (i.e. completed) evaluation worksheet results of all parties affected by the project in any way, properly aggregated into an overall statistic accepted by all.

This is clearly not only impossible to define but practically impossible to apply, and equally clearly situated at the opposite end of an ‘expediency’ (speed, simplicity to understand and apply) scale. These considerations also show why there is a plausible tendency to use ‘procedural compliance criteria’ to lend the appearance of legitimacy to decisions: ‘All parties have been given the chance to speak up; now time’s up and some decision must be made (whether it meets all parties’ concerns or not).’

It seems to follow that some compromise or ‘approximation’ solution will have to be agreed upon for each case, as opposed to proceeding without such agreements, relying on standard assumptions of ‘usual’ procedures that later lead to procedural quarrels.

For example, one conceivable ‘approximation’ version might be to arrange for a thorough discussion, with all affected parties encouraged to voice and explain their concerns, but with only the ‘leader’ or official responsible for actually making the decision required to complete the detailed evaluation worksheets, and to publish them to ‘prove’ that all aspects have been entered, addressed (with criterion functions for explanation) and given acceptable weights, and that the resulting overall judgment, aggregated with acceptable aggregation functions, corresponds with the leader’s actual decision. (One issue in this version will be how ‘side payments’ or ‘logrolling’ provisions, to compensate parties that do not benefit fairly from the decision but whose votes in traditional voting procedures would be ‘bought’ to support the decision, should be represented in such ‘accounts’.)
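The published worksheet in this ‘approximation’ version might, in its simplest form, look like the following sketch. The aspects, weights, judgment scale and the weighted-sum aggregation function are all assumptions for illustration; the essay leaves the choice of aggregation function open, and a real worksheet would also carry the criterion functions explaining each judgment.

```python
# Illustrative sketch of a leader's published evaluation worksheet:
# each aspect gets an agreed weight and an explained judgment, and
# the aggregation function is published with the result.
# Aspect names, weights and scores are invented.

worksheet = {
    # aspect: (weight, judgment on a -3..+3 'goodness' scale)
    "cost":        (0.4,  1.0),
    "environment": (0.3, -1.0),
    "safety":      (0.3,  2.0),
}

def aggregate(ws):
    """Weighted-sum aggregation (one of many possible aggregation
    functions); assumes the weights sum to 1."""
    return sum(weight * judgment for weight, judgment in ws.values())

overall = aggregate(worksheet)
print(round(overall, 2))  # 0.4*1.0 + 0.3*(-1.0) + 0.3*2.0 = 0.7
```

Publishing the worksheet and the aggregation rule together is what allows others to check that the overall judgment actually corresponds with the leader's decision, rather than being asserted after the fact.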

This topic may call for a separate, more detailed exploration of a ‘morphology‘ of possible decision criteria for such projects, and an examination of evaluation criteria for decision guides or modes to help participants in such projects agree on combinations suited to the specific project and circumstances.

Questions? Missing aspects? Wrong question? Ideas, suggestions?

Suggestions for ‘best answers’ given current state of understanding:
– Ensure better opportunity for all parties affected by problems or plans to contribute their ideas, concerns, and judgments: (Planning discourse platform);
– Focus on improved use of ‘quality/plausibility’-based decision guides, using ‘plausibility-weighted quality evaluation’ procedures explained and accepted in initial ‘procedural agreements’;
– Reduce reliance on ‘process-based’ criteria.

Evalmap ‘Decision criteria’: overview of decision criteria (indices to guide decisions)

–o–
