Archive for the 'Pattern Language' Category

EVALUATION IN THE PLANNING DISCOURSE — TIME AND EVALUATION OF PLANS

An effort to clarify the role of deliberative evaluation in the planning and policy-making process. Thorbjørn Mann, February 2020

TIME AND EVALUATION OF PLANS  (Draft, for discussion)

Inadequate attention to time in current common assessment approaches

Considering that evaluation of plans (especially ‘strategic’ plans) and policy proposals is, by its very nature, concerned with the future, it is curious that the role of time has not received more attention, even with the development of simulation techniques that aim at tracking the behavior of key variables of systems over many years into the future. The neglect of this question, for example in the education of architects, can be seen in the practice of judging students’ design project presentations on the basis of their drawings and models.

The exceptions — for example in building and engineering economics — look at very few performance variables, albeit with quite sophisticated techniques: expected cost of building projects, ‘life cycle cost’, return on investment etc., to be set in relation to expected revenues and profit. Techniques such as ‘Benefit/Cost Analysis’, which in its simplest form treats those variables as realized immediately upon implementation, can also extend this kind of analysis to forecast costs and benefits and compare them over time, using methods for converting initial amounts (of money) into ‘annualized’ or future equivalents, or vice versa.
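As an illustration of the kind of conversion these techniques rely on, here is a minimal sketch in Python; the 5% discount rate, the ten-year period and the money amounts are made-up assumptions, not recommendations:

```python
def present_value(future_amount, rate, years):
    """Discount a single future amount back to its present worth."""
    return future_amount / (1 + rate) ** years

def annualized_equivalent(present_amount, rate, years):
    """Spread a present amount into equal annual payments (capital recovery factor)."""
    crf = rate * (1 + rate) ** years / ((1 + rate) ** years - 1)
    return present_amount * crf

# Hypothetical example: a benefit of 100,000 expected ten years out, discounted at 5% per year,
# and an initial cost of 50,000 converted to its annualized equivalent over the same period.
print(round(present_value(100_000, 0.05, 10)))         # ~61,391
print(round(annualized_equivalent(50_000, 0.05, 10)))  # ~6,475
```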

Criticism of such approaches amounts to pointing out problems such as having to convert ‘intangible’ performance aspects (like public health, satisfaction, loss of lives) into money amounts to be compared, which raises serious ethical questions, and, for entities like nations, the fact that money amounts drawn from or entering the national budget hide controversies such as inequities in the distribution of the costs and benefits. Looking at the issue from the point of view of other evaluation approaches might at least identify the challenges in the consideration of time in the assessment of plans, and help guide the development of better tools.

A first point is that, from the perspective of a formal evaluation process (see e.g. the previous section on the Musso/Rittel approach), measures like the present value of future cost or profit, or the benefit-cost ratio, must be considered ‘criteria’ (measures of performance) for more general evaluation aspects; they belong among a set of (goodness) evaluation aspects that each evaluator must weight for relative importance, to make up overall ‘goodness’ or quality judgments. (See the segments on evaluation judgments, criteria and criterion functions, and aggregation.) As such, the use of these measures alone as decision criteria must be considered incomplete and inappropriate. However, in those broader approaches, the time factor is usually not treated with even the attention given by the above tools for discounting future costs and benefits to comparable present worth: for example, pro or con arguments in a live verbal discussion about expected economic performance often amount to mere qualitative comparisons or claims like ‘over the budget’ or ‘more expensive in the long run’.
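To make the point concrete, here is a minimal sketch of the kind of weighted aggregation such a formal approach calls for; the aspect names, the -3..+3 judgment scale, the weights and the simple weighted-sum rule are illustrative assumptions, not the definitive form of the Musso/Rittel procedure:

```python
# One evaluator's 'goodness' judgments per aspect (hypothetical, on a -3..+3 scale)
# and relative weights of importance (summing to 1):
aspect_judgments = {
    "present value of cost": +1.0,
    "public health effects": -2.0,
    "aesthetic quality":     +2.5,
}
aspect_weights = {
    "present value of cost": 0.5,
    "public health effects": 0.3,
    "aesthetic quality":     0.2,
}

# Overall deliberated judgment as the weighted sum of partial judgments:
overall = sum(aspect_judgments[a] * aspect_weights[a] for a in aspect_judgments)
print(round(overall, 2))  # 0.4 -- a weakly positive overall 'goodness' score
```

The present-worth or benefit-cost figure enters only as the criterion behind one of these aspects, not as the decision rule itself.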

Finally, in approaches such as the Pattern Language (which makes valuable observations about the ‘timeless’ quality of built environments, but does not consider explicit evaluation a necessary part of the process of generating such environments), there is no mention or discussion of how time considerations might influence decisions: the quality of designs is guaranteed by their having been generated by the use of patterns, but the efforts to describe that quality do not include consideration of the effects of solutions over time.

Time aspects calling for attention in planning

Assessments of undesirable present or future states ‘if nothing is done’

The implementation of a plan is expected to bring about changes in states of affairs that are felt to be ‘problems’ (things not being as they ought to be) or ‘challenges’ and ‘opportunities’ calling for better, improved states of affairs. Many plans and policies aim at preventing future developments from occurring, either as distinctly ‘sudden’ events or as developments over time. Obviously, the degree of undesirability depends on the expected severity of these developments; they are matters of degree that must be predicted in order for the plan’s effectiveness to be judged.

The knowledge that goes into the estimates of future change comes from experience: observation of the pattern and rate of change in the past (even if that knowledge is taken to be well enough established to be considered a ‘law’). But not all such change tracks have been well enough observed and recorded in the past, so much estimation and judgment already goes into the assumptions about the changes over time in the past.

Individual assessments of future plan performance

Our forecasts of future changes ‘if nothing is done’, resting on such shaky knowledge of the past, must be considered less than 100% reliable. Should our confidence in the application of that knowledge to estimates of a plan’s future ‘performance’ then not be acknowledged as equally (at best) or arguably less certain, expressed as deserving a lower ‘plausibility’ qualifier? This would be expressed, for example, with the pl (plausibility) judgment for the relationship claimed in the factual-instrumental premise of an argument about the desirability of the plan’s effects: “Plan A will result (by virtue of the law or causal relationship R) in producing effect B”.

This argument should be (but often is not) qualified by adding the assumption ‘given the conditions C under which the relationship R will hold’: the conditions which the third (factual claim) premise of the ‘standard planning argument’ claims are, or will be, ‘given’.

Note ‘will be’: since the plan will be implemented in the future, this premise also involves a prediction. And to the extent that the condition is not a stable, unchanging one but a changing, evolving phenomenon, the degree of the desirable or undesirable effect B must be expected to change. And, to make things even more interesting and complex, as explained in the sections on argument assessment and systems modeling, the ‘condition’ is never adequately described by a single variable but actually represents the evolving state of the entire ‘system’ in which the plan will intervene.

This means that when two people exchange their assumptions and judgments (opinions) about the effectiveness of the plan by citing its effect on B, they may well have very different degrees (or performance measures) in mind, occurring under very different assumptions about both R and C, and at different times.

Things become more fuzzy when we consider the likelihood that the desired or undesired effects will not change things overnight, but gradually, over time. How should we make evaluation judgments about competing plan alternatives when, for example, one plan promises rapid improvement soon after implementation (as measured by one criterion) but then slows down or even starts declining, while another will improve at a much slower but more consistent rate? A mutually consistent evaluation must be based on agreed-upon measures of performance: measured at what future time? Over what future time period, aka ‘planning horizon’? This question does not just apply to the prediction of the performance criterion: what about the plausibility and weight-of-importance judgments we need to offer a complete explanation of our judgment basis? Is it enough to apply the same plausibility factor to forecasts of trends decades in the future as the one we use for near-future predictions? As discussed in the segment on criteria, the crisp fine forecast lines we see in simulation printouts are misleading: the line should really be a fuzzy track, widening more and more the farther out in time it extends. Likewise: is it meaningful to use the same weight of relative importance for the assessment of effects at different times?
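One way to picture the difficulty is to attach a separate plausibility factor and importance weight to each forecast period rather than a single figure. The following sketch does that for two hypothetical plans; the scores, the declining plausibility factors and the period weights are all invented purely for illustration:

```python
# Forecast criterion scores per period (say, five-year steps) for two competing plans:
plan_A = [3.0, 2.5, 1.5, 0.5]   # rapid early improvement, then decline
plan_B = [1.0, 1.5, 2.0, 2.5]   # slower but steadier improvement

# The plausibility of a forecast shrinks the farther out it reaches (the widening 'fuzzy track'),
# and the relative importance given to distant periods may differ from near ones:
plausibility  = [0.9, 0.7, 0.5, 0.3]
period_weight = [0.4, 0.3, 0.2, 0.1]

def time_qualified_score(track):
    """Sum of period scores, each discounted by its plausibility and importance weight."""
    return sum(s * p * w for s, p, w in zip(track, plausibility, period_weight))

print(round(time_qualified_score(plan_A), 2))  # 1.77
print(round(time_qualified_score(plan_B), 2))  # 0.95
# Other, equally defensible plausibility or weight profiles can reverse this ranking.
```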

These considerations apply, so far, only to the explanation of individual judgments, and already show that it would be almost impossible to construct meaningful criterion functions and aggregation functions to get adequately ‘objectified’ overall deliberated judgment scores for individual participants in evaluation procedures.

Aggregation issues for group judgment indicators

The time-assessment difficulties described for individual judgments do not diminish in the task of constructing decision guides for groups, based on the results of individual judgment scores. Reminder: to meet the ideal ‘democratic’ expectation that the community decision about a plan should be based on due consideration of ‘all’ concerns expressed by ‘all’ affected parties, the guiding indicator (‘decision guide’ or criterion) should be an appropriate aggregation statistic of all individual overall judgments. The above considerations show, to put it mildly, that it is difficult enough to aggregate individual judgments into overall judgment scores, but even more so to construct group indicators that are based on the same assumptions about the time qualifiers entering the assessments.
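The choice of that ‘aggregation statistic’ is itself a judgment. The sketch below contrasts two common candidates, the mean and the score of the worst-affected participant, on the same set of hypothetical individual overall judgments; the scores and scale are assumptions for illustration only:

```python
from statistics import mean

# Hypothetical individual overall judgment scores for one plan, on a -3..+3 scale:
individual_scores = [+2.5, +1.0, +0.5, -2.0, -2.5]

group_mean = mean(individual_scores)  # 'average' satisfaction
group_min  = min(individual_scores)   # judgment of the most negatively affected party

print(round(group_mean, 2), group_min)  # -0.1  -2.5
# A mean near zero can hide strong disagreement; a min-based guide gives the worst-affected
# party a near-veto. Neither statistic settles whose time assumptions the scores rest on.
```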

This makes it understandable (but not excusable) why decision-makers in practice tend either to screen out the uncomfortable questions about time in their judgments, or to resort to vague ‘goals’ measured by vague criteria to be achieved within arbitrary time periods: “Carbon-emission neutrality by 2050”, for example. How to choose between different plans or policies whose performance simulation forecasts do not promise 100% achievement of the goal, but only ‘approximations’ with different interim performance tracks, at different costs and other side-effects in society? But 2050 is far enough in the future to ensure that none of the decision-makers for today’s plans will be held responsible for today’s decisions…

‘Conclusions’?

The term ‘conclusion’ is obviously inappropriate if it refers to expected answers to the questions discussed. These issues have just been raised, not resolved, which means that more research, experiments and discussion are called for to find better answers and tools. For the time being, the best recommendation that can be drawn from this brief exploration is that the decision-makers for today’s plans should routinely be alerted to these difficulties before making decisions, carry out the ‘objectification’ process for the concerns expressed in the discourse (of course facilitating a discourse with participation adequate to the severity of the challenge of the project), and then admit that any high degree of ‘certainty’ for proposed decisions is not justified. Decisions about ‘wicked problems’ are more like ‘gambles’ for which responsibility, ‘accountability’, must be assumed. If official decision-makers cannot assume that responsibility, as expressed in ‘paying’ for mistaken decisions, should they seek supporters to share that responsibility?

So far, this kind of talk is just that: mere empty talk, since there is at best only the vague and hardly measurable ‘reputation’ available as the ‘account’ from which ‘payment’ can be made, whether in the next election or in the history books. Since this does not prevent reckless mistakes in planning decisions, there should be better means for making the concept of ‘accountability’ more meaningful. (Some suggestions for this are sketched in the sections on the use of ‘discourse contribution credit points’, earned by decision-makers or contributed by supporters from their credit point accounts, and made the required form of ‘investment payment’ for decisions.) The needed research and discussion of these issues will have to consider new connections between the factors involved in evaluation for public planning.


Overview

— o —

EVALUATION IN THE PLANNING PROCESS: EVALUATION TASKS


An effort to clarify the role of deliberative evaluation in the planning and policy-making process

Thorbjoern Mann

EVALUATION TASKS / SITUATIONS

The necessity for this review of evaluation practices and tools arises from the fact that evaluation tasks, judgments and related activities occur at many stages of planning projects. A focus on the most common task, the evaluation of a proposed plan or a set of plan alternatives in preparation for the final decision, may hide the role and impact of many judgments along the way, where explicitly or implicitly not only different labels but also very different vocabularies, tools and principles are involved. Is it necessary to look at these differences, to ask whether there should be more of an effort at coordination and common vocabulary in the set of working agreements for a project?

This section will at least raise the question and begin to explore the different disguises of evaluation acts throughout the planning process, to answer these questions.

Many plans are started as extensions of routine ‘maintenance’ activities on existing processes and systems, using established performance measures as indicators of a need for extraordinary steps to ensure the continued desirable function of the system in question. In such tasks, the selected performance criteria, their threshold values demanding action and most of the expected remedial steps and means, are part of the factual ‘current conditions’ data basis of further planning.

To what extent are these data understood as part of the planning project — either as ‘given’ aspects or as needing revision, discussion, change — when the situation is so unprecedented as to call for activities going beyond the routine maintenance concerns? Such situations are often referred to as ‘problems’, which tends to trigger a very different way of talking. There are many different ‘definitions’, views and understandings of problems, as well as different problem types. To what extent is an evaluation group’s decision to talk about the situation as a problem, or as a specific problem type, already an evaluative task? This holds even for adopting a view of a ‘problem’ as a discrepancy perceived (by somebody!) between an existing ‘IS’ state of affairs and a view of what that state ‘OUGHT’ to be, calling for ideas about ‘HOW’ to get from the IS to the OUGHT.

Judgments about what ‘is’ the case do call for judgments, perhaps even measurements, of current conditions: assessments of factual matters, even as those are perceived — again, by whom? — as ‘NOT-OUGHT’. Judgments specifying the OUGHT — ‘goals’, ‘visions’, ‘desirable’ states of affairs — belong to the ‘deontic’ realm, much as this is often obscured by the invocation of ‘facts’ in the form of authorities and of polls reporting percentages of populations ‘wanting’ this or that ‘OUGHT’: the ‘good’ they are after. The judgments about the ‘HOW’ — the means, tools, etc. to reach those goals — may look like ‘factual-instrumental’ judgments, but they also reach into the deontic realm; some possible ‘means’ are decidedly NOT what we OUGHT to do, no matter how functionally effective they seem to be.

The ‘authority’ sources of judgments that participants in planning will have to consider come in the form of laws and ‘regulations’. Examined as ‘givens’, they may be helpful in defining and constraining the ‘solution space’ for the development of the plan. But they often ‘don’t fit the circumstances’ of a current planning situation, and raise questions about whether to apply for a ‘variance’, an exception to a rule. Of course, any regulation is itself the outcome of an evaluation or judgment process — one that may be acknowledged but is usually not thoroughly examined by the planners of a specific project. The temptation is, of course, to ‘accept’ such regulations as the critical performance objective (‘to get the permit’), conveniently forgetting that such regulations usually specify m i n i m a l performance expectations. They usually focus on meaningful concerns such as safety and conformance to setback and functional performance conventions, while neglecting or drawing attention away from other issues such as aesthetics, sustainability, and the environmental or mental health impact of the resulting ‘permitted’ but in many other ways quite mediocre and outright undesirable solutions.

Other guidance tools for the development of the plan — for buildings and urban environments, but also for general societal policy and policy implementation efforts — are the ‘programs’ (‘briefs’) and equivalent statements about the desired outcome. One main consideration of such statements is to describe the scope of the plan (in buildings: how many spaces, their sizes and functions, etc.) in relation to the constraint of the budget. In many cases, such descriptions are in turn guided by ‘standards’ and norms for similar uses, in each case moving responsibility for the evaluation judgments onto a different agency: asking for the basis of judgment behind such expectations becomes a complex task in itself.

The ‘participation’ demand for involving the eventual users, citizens, and affected parties in these processes seems to take two main forms: one being general surveys, asking the participants to fill out questionnaires that try to capture expectations and preferences; the other being ‘hearings’ in connection with the presentation of in-progress option decisions or final plans. Do the different methodological bases and treatments of these otherwise laudable efforts raise questions about their ultimate usefulness in nurturing the production of ‘quality’ plans?

The term ‘quality’ is a key concern of a very different approach to design and planning, one that explicitly denies the very need for ‘method’ in the form of systematic evaluation procedures. This is the key feature (from the current point of view) of the ‘Pattern Language’ of Christopher Alexander. Its promise (stated briefly, and arguably unfairly distorted here) is that using ‘patterns’, such as the design precepts for building and town planning in his book ‘A Pattern Language’, in the development of the plan will ‘guarantee’ an outcome that embodies the ‘quality without a name’, including many of the aspects not addressed by the ‘usual’ design process and its regulation- and function-centered constraints.

This move seems to be very appealing to designers (surprisingly, even more so in other domains such as computer programming than in architecture): any outcome produced in the proper way with the proper patterns is thereby ‘good’ (has the ‘quality’) and does not need further evaluation. Not discussed, as far as I can see, is the fact that the evaluation issue is merely moved to the process of suggesting and ‘validating’ the patterns — in the building case, by Alexander and his associates, assembled in the book. Is the admirable and very necessary effort to bring those missing quality issues back into the design and planning process and discussion undercut by the removal of the evaluation problem from that discussion?

The Pattern Language example should make it very clear how drastically the treatment of the evaluation question could influence the process and decision-making in the planning process.

Comments: Missing items / issues? Wrong question?

–o–

EVALUATION IN THE PLANNING DISCOURSE: ISSUES, CONTROVERSIES, (OVERVIEW)

Thorbjoern Mann

An effort to clarify the role of deliberative evaluation in the planning and policy-making process.

Many aspects of evaluation-related tasks in familiar approaches and practice call for some re-assessment and improvement, even for practical applications in current situations. These will be discussed in more detail in sections addressing requirements and tools for practical application. Others are more significant in that they end up questioning the entire concept of deliberative evaluation in planning on a ‘philosophical’ level, or resist the adoption of smaller, detail improvements of the first (practical) kind because these may mean abandoning familiar habits based on tradition and even constitutional provisions.

The very concept of deliberative evaluation — as materialized in procedures and practices that look too cumbersome, bureaucratic and elitist (‘expert-model’) to many — is an example of a fundamental issue that can significantly flavor and complicate planning discourse. The desire to do without such ‘methods’ is theoretically and emotionally supported by concepts such as the civic, patriotic call and need for consensus and unity of purpose, and even by ideas such as swarm behavior or the ‘wisdom of crowds’ that claim to produce ‘good’ solutions and community behavior more effortlessly. A related example is the philosophy behind Christopher Alexander’s ‘Pattern Language’. Does its claim hold that using patterns declared ‘valid’ and ‘good’ (having the ‘Quality Without a Name’, QWAN) in developing plans and solutions, e.g. for buildings and neighborhoods, will produce overall solutions that are ‘automatically’ valid and good, and thus require no evaluation ‘method’ at all to validate them?

A related issue is that of ‘objective’ measurement, facts and ‘laws’ (akin to natural laws) as opposed to ‘subjective’ opinion. Discussion, felt to consist mainly of the latter (‘mere opinions’, difficult to measure and thus lacking reliable tools for resolving disagreement), is seen as too unreliable a basis for important decisions.

On a more practical level, there is the matter of ‘decision criteria’ that are assumed to legitimize decisions. Simple tools such as voting ratios, even for votes following the practice of debating the pros and cons of proposed plans (a practice accepted as eminently ‘democratic’, and used even by authoritarian regimes as a smokescreen), in reality result in the concerns of significant parts of affected populations (the minority) being effectively ignored. Is the call for reaching decisions better and more transparently based on the merit of discourse contributions and ‘due consideration’ of all aspects promising, but in need of different tools? What would they look like?

An understanding of ‘deliberation’ as the process of making overall judgments (of goodness, value, acceptability etc.) a function of partial judgments raises questions of ‘aggregation’: how do or should we convert the many partial judgments into overall judgments? How should the many individual judgments of members of a community be ‘aggregated’ into overall ‘group’ judgments, or into indicators of the distribution of individual judgments, that can guide the community’s decision on an issue? Here, too, traditional conventions need reconsideration.

These issues and controversies need to be examined not only individually but also in terms of how they relate to one another and how they should guide evaluation procedures in the planning discourse. The diagram shows a number of them and some relationships adding to the complexity; there are probably more that should be added to the list.

Additions, connections, comments?
–o–

Artificial Intelligence for the Planning Discourse?

The discussion about whether and to what extent Artificial Intelligence technology can meaningfully support the planning process with contributions similar or equivalent to human thinking is largely dominated by controversies about what constitutes thinking. An exploration of the reasoning patterns in the various phases of human planning discourse could produce examples for that discussion, leaving the determination of the label ‘thinking’ open for the time being.

One specific example (only one of several different and equally significant aspects of planning):
People propose plans for action, e.g. to solve problems, and then engage in discussion of the ‘pros and cons’ of those plans: arguments. A typical planning argument can be represented as follows:
“Plan A should be adopted for implementation, because
i) Plan A will produce consequences B, given certain conditions C, and
ii) Consequences B ought to be pursued (are desirable); and
iii) Conditions C are present (or will be, at implementation).”

Question 1: could such an argument be produced by automated technological means?
This question is usually followed up by question 2: Would or could the ‘machine’ doing this be able (or should it be allowed) to also make decisions to accept or reject the plan?

Can meaningful answers to these questions be found? (Currently, or definitively?)

Beginning with question 1: Formulating such an argument in their minds, humans draw on their memory — or on explanations and information provided during the discourse itself — for items of knowledge that could become premises of arguments:

‘Factual-instrumental’ knowledge of the form “FI(A –> X | C)”, for example “A will cause X, given conditions C”;
‘Deontic’ knowledge of the form “D(X)”, or “X ought to be (is desirable)”; and
‘Factual’ knowledge of the form “F(C)”, or “Conditions C are given”.
‘Argumentation-pattern knowledge’: recognition that any of the three knowledge items above can be inserted into an argument pattern of the form
D(A) <– (FI(A –> X | C) & D(X) & F(C)).

(There are of course many variations of such argument patterns, depending on assertion or negation of the premises, and different kinds of relations between A and X.)

It does not seem to be very difficult to develop a Knowledge Base (collection) of such knowledge items and a search-and-match program that would assemble ‘arguments’ of this pattern.
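A minimal sketch of what such a knowledge base and search-and-match routine might look like; the data layout, the item contents and the matching rule are assumptions made up for illustration, not a proposed standard:

```python
# Knowledge base: items stored as simple tagged tuples.
factual_instrumental = [("A", "X", "C")]   # FI(A -> X | C): plan A produces effect X under conditions C
deontic = [("X",)]                         # D(X): effect X ought to be (is desirable)
factual = [("C",)]                         # F(C): conditions C are (or will be) given

def assemble_arguments(plan):
    """Match FI, D and F items sharing the same effect and condition,
    and emit instances of the standard planning argument pattern."""
    arguments = []
    for (a, x, c) in factual_instrumental:
        if a == plan and (x,) in deontic and (c,) in factual:
            arguments.append(f"D({a}) <- FI({a} -> {x} | {c}) & D({x}) & F({c})")
    return arguments

print(assemble_arguments("A"))  # ['D(A) <- FI(A -> X | C) & D(X) & F(C)']
```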

Any difficulties would arguably be related more to the task of recognizing and suitably extracting such items (‘translating’ them into a form recognizable to the program) from the humanly recorded and documented sources of knowledge than to the mechanics of the search-and-match process itself. Interpretation of meaning: is an item expressed in different words equivalent to the terms that appear in the other potential premises of an argument?

Another slight quibble relates to the question whether and to what extent the consequence qualifies as one that ‘ought to be’ (or not) — but this can be dealt with by reformulating the argument as follows:
“If (FI(A –> X | C) & D(X) & F(C)) then D(A)”.

(It should be accompanied by the warning that this formulation that ‘looks’ like a valid logic argument pattern is in fact not really applicable to arguments containing deontic premises, and that a plan’s plausibility does not rest on one single argument but on the weight of all its pros and cons.)

But assuming that these difficulties can be adequately dealt with, the answer to question 1 seems obvious: yes, the machine would be able to construct such arguments. Whether that already qualifies as ‘thinking’ or ‘reasoning’ can be left open; the significant realization is equally obvious: such contributions could be potentially helpful to the discourse. For example, by contributing arguments human participants had not thought of, they could help meet the aim of ensuring, as much as possible, that the plan will not have ‘unexpected’ undesirable side- and after-effects. (One important part of H. Rittel’s very definition of design and planning.)

The same cannot as easily be said about question 2.

The answer to that question hinges on whether the human ‘thinking’ activities needed to make a decision to accept or reject the proposed plan can be matched by ‘the machine’. The reason is, of course, that not only will the plausibility of each argument have to be ‘evaluated’, judged (by assessing the plausibility of each premise), but the arguments must also be weighed against one another. (A method for doing this has been described, e.g., in ‘The Fog Island Argument’ and several papers.)
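The arithmetic involved can be sketched roughly as follows; the particular combination rules here (taking an argument’s plausibility as the product of its premise plausibilities, and the plan’s plausibility as the weighted sum of argument plausibilities) are only one simple reading of the approach referred to above, chosen for illustration:

```python
# Each argument: the plausibilities of its three premises, each on a -1..+1 scale
# (negative values speak against the premise), and a weight of relative importance (0..1).
arguments = [
    {"premise_pl": (0.8, 0.9, 0.7), "weight": 0.5},   # a fairly plausible 'pro' argument
    {"premise_pl": (0.6, -0.8, 0.9), "weight": 0.3},  # a 'con': its deontic premise is rejected
    {"premise_pl": (0.4, 0.5, 0.6), "weight": 0.2},   # a weak 'pro'
]

def argument_plausibility(premise_pl):
    """Illustrative rule: the product of premise plausibilities, so one implausible
    or negated premise pulls the whole argument down (or flips it into a 'con')."""
    result = 1.0
    for p in premise_pl:
        result *= p
    return result

plan_plausibility = sum(argument_plausibility(a["premise_pl"]) * a["weight"] for a in arguments)
print(round(plan_plausibility, 2))  # 0.15 -- weakly positive, far from certainty
```

The crucial point is not the formula but where the premise plausibility and weight judgments come from, which is the question pursued in the following paragraphs.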

So a ‘search and match’ process as the first part of such a judgment process would have to look for those judgments in the data base, and the difficulty here has to do with where such judgments would come from.

The prevailing answers for factual-instrumental premises as well as for fact-premises — premises i) and iii) — draw on ‘documented’ and commonly accepted truth, probability, or validity. Differences of opinion about claims drawn from ‘scientific’ and technical work, if any, are decided by a version of ‘majority voting’: ‘prevailing knowledge’ accepted by the community of scientists or domain experts, ‘settled’ controversies, or conclusions derived from sufficiently ‘big data’ (“95% of climate scientists…”) can serve as the basis of such judgments. It is often overlooked that the premises of planning arguments, however securely based on ‘past’ measurements, observations etc., are inherently predictions. So any certainty about their past truth must at least be qualified with a somewhat lesser degree of confidence that they will be equally reliably true in the future: will the conditions under which the A –> X relationships are assumed to hold be equally likely to hold in the future? Including the conditions that may be — intentionally or inadvertently — changed as a result of future human activities pursuing different aims than those of the plan?

The question becomes even more controversial for the deontic (ought-) premises of the planning arguments. Where do the judgments come from by which their plausibility and importance can be determined? Humans can be asked to express their opinions — and prevalent social conventions consider the freedom to not only express such judgments but to have them given ‘due consideration’ in public decision-making (however roundabout and murky the actual mechanisms for realizing this may be) as a human right.

Equally commonly accepted is the principle that machines do not ‘have’ such rights. Thus, any judgment about deontic premises that might be used by a program for evaluating planning arguments would have to be based on information about human judgments that can be found in the data base the program is using. There are areas where this is possible and even plausible. Not only is it prudent to assign a decidedly negative plausibility to deontic claims whose realization contradicts natural laws established by science (and considered still valid…like ‘any being heavier than air can’t fly…’). But there also are human agreements — regulations and laws, and predominant moral codes — that summarily prohibit or mandate certain plans or parts of plans; supported by subsequent arguments to the effect that we all ought not break the law, regardless of our own opinions. This will effectively ‘settle’ some arguments.

And there are various approaches in design and planning that seem to aim at finding — or establishing — enough such mandates or prohibitions that, taken together, would make it possible to ‘mechanically’ determine at least whether a plan is ‘admissible’ or not — e.g. for buildings, whether its developer should get a building permit.

This pattern is supported in theory by branches of modal logic that seek to resolve deontic claims on the basis of ‘true/false’ judgments (that must have been made somewhere, by some authority) of ‘obligatory’, ‘prohibited’, ‘permissible’ etc. It can be seen to be extended by at least two different ‘movements’ that must be seen as sidestepping the judgment question.

One is the call for society as a whole to adopt (collectively agree upon) moral, ethical codes whose function is equivalent to ‘laws’ — from which the deontic judgment about plans could be derived by mechanically applying the appropriate reasoning steps — invoking ‘Common Good’ mandates supposedly accepted unanimously by everybody. The question whether and how this relates to the principle of granting the ‘right’ of freely holding and happily pursuing one’s own deontic opinions is usually not examined in this context.

Another example is the ‘movement’ of Alexander’s ‘Pattern Language’. Contrary to claims that it is a radically ‘new’ theory, it stands in a long and venerable tradition of many trades and disciplines of establishing codes and collections of ‘best practice’ rules or ‘patterns’, learned by apprentices in years of observing the masters, or compiled in large volumes of proper patterns. The basic idea is that of postulating ‘elements’ (patterns) of the realm of plans, and relationships between them, by means of which plans can be generated. The ‘validity’ or ‘quality’ of the generated plan is then guaranteed by the claim that each of the patterns (rules) is ‘valid’ (‘true’, or having that elusive ‘quality without a name’). This is supported by showing examples of environments judged (by intuition, i.e. needing no further justification) to exhibit ‘quality’, produced by applications of the patterns. The remaining ‘solution space’ left open by, e.g., the different combinations of patterns then serves as the basis for claims that the theory offers ‘participation’ by prospective users. However, it hardly needs pointing out that individual ‘different’ judgments — e.g. about the appropriateness of a given pattern or relationship — are effectively eliminated by such approaches. (This assessment should not be seen as a wholesale criticism of the approach, whose unquestionable merit is to introduce quality considerations into the discourse about the built environment that ‘common practice’ has neglected.)

The relevance of discussing these approaches for the two questions above now becomes clear: if a ‘machine’ (which could of course just be a human, an untiringly pedantic bureaucrat assiduously checking plans for adherence to rules or patterns) were able to draw upon a sufficiently comprehensive data base of factual-instrumental knowledge and ‘patterns or rules’, it could conceivably be able to generate solutions. And if the deontic judgments have been inherently attached to those rules, it could claim that no further evaluation (i.e. no inconvenient intrusion of differing individual judgments) would be necessary.

The development of ‘AI’ tools for automated support of the planning discourse will have to make a choice. It could follow this vision of a ‘common good’ and of valid truths of solution elements, universally accepted by all members of society. Or it could accept the challenge of a view that it should either refrain from intruding on the task of making judgments, or go to the trouble of obtaining those judgments from the human participants in the process before using them in the task of deriving decisions. Depending on which course is followed, I suspect the agenda and tasks of current and further research, development and programming will be very different. This is, in my opinion, a controversial issue of prime significance.