The discussion about whether and to what extent Artificial Intelligence technology can meaningfully support the planning process with contributions similar or equivalent to human thinking is largely dominated by controversies about what constitutes thinking. An exploration of the reasoning patterns in the various phases of human planning discourse could produce examples for that discussion, while leaving the definition of the label ‘thinking’ open for the time being.
One specific example (only one of several different and equally significant aspects of planning):
People propose plans for action, e.g. to solve problems, and then engage in discussion of the ‘pros and cons’ of those plans: arguments. A typical planning argument can be represented as follows:
“Plan A should be adopted for implementation, because
i) Plan A will produce consequences B, given certain conditions C, and
ii) Consequences B ought to be pursued (are desirable); and
iii) Conditions C are present (or will be, at implementation).”
Question 1: Could such an argument be produced by automated technological means?
This question is usually followed up by question 2: Would or could the ‘machine’ doing this be able (or should it be allowed) to also make decisions to accept or reject the plan?
Can meaningful answers to these questions be found (currently, or definitively)?
Beginning with question 1: Formulating such an argument in their minds, humans draw on their memory — or on explanations and information provided during the discourse itself — for items of knowledge that could become premises of arguments:
‘Factual-instrumental’ knowledge of the form “FI(A –> X | C)”, for example “A will cause X, given conditions C”;
‘Deontic’ knowledge of the form “D(X)”, or “X ought to be (is desirable)”; and
‘Factual’ knowledge of the form “F(C)”, or “Conditions C are given”.
‘Argumentation-pattern knowledge’: the recognition that the three knowledge items above can be inserted into an argument pattern of the form
D(A) <– (FI(A –> X | C) & D(X) & F(C)).
(There are of course many variations of such argument patterns, depending on assertion or negation of the premises, and different kinds of relations between A and X.)
It does not seem to be very difficult to develop a Knowledge Base (collection) of such knowledge items and a search-and-match program that would assemble ‘arguments’ of this pattern.
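To make this concrete, here is a minimal sketch of what such a collection of knowledge items and a search-and-match routine might look like. It is not a description of any existing system; the class and function names (FI, D, F, Argument, assemble_arguments) are hypothetical, and the matching is deliberately naive.

```python
# A minimal sketch, not any existing system: a small knowledge base of
# factual-instrumental (FI), deontic (D), and factual (F) items, and a
# search-and-match routine assembling arguments of the pattern
#   D(A) <- (FI(A -> X | C) & D(X) & F(C)).
# All class and function names are hypothetical.

from dataclasses import dataclass

@dataclass(frozen=True)
class FI:                 # "Action A will produce consequence X, given conditions C"
    action: str
    consequence: str
    conditions: str

@dataclass(frozen=True)
class D:                  # "X ought to be pursued (is desirable)"
    state: str

@dataclass(frozen=True)
class F:                  # "Conditions C are (or will be) present"
    conditions: str

@dataclass(frozen=True)
class Argument:           # conclusion D(action), supported by the three premises
    action: str
    premises: tuple

def assemble_arguments(fi_items, d_items, f_items):
    """For every FI item, look for a deontic item endorsing its consequence and
    a factual item asserting its conditions; each complete match is one argument."""
    desirable = {d.state for d in d_items}
    given = {f.conditions for f in f_items}
    found = []
    for fi in fi_items:
        if fi.consequence in desirable and fi.conditions in given:
            found.append(Argument(fi.action, (fi, D(fi.consequence), F(fi.conditions))))
    return found

# Example: "Plan A should be adopted, because it will produce B given C,
# B is desirable, and C is (or will be) given."
arguments = assemble_arguments([FI("Plan A", "B", "C")], [D("B")], [F("C")])
for arg in arguments:
    print(f"D({arg.action}) supported by: {arg.premises}")
```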
Any difficulties would arguably relate more to the task of recognizing and suitably extracting such items (‘translating’ them into a form recognizable to the program) from the recorded and documented sources of human knowledge than to the mechanics of the search-and-match process itself. This is a problem of interpreting meaning: is an item expressed in different words equivalent to the terms appearing in the other potential premises of an argument?
Another slight quibble relates to the question of whether, and to what extent, the consequence qualifies as one that ‘ought to be’ (or not) — but this can be dealt with by reformulating the argument as follows:
“If (FI(A –> X | C) & D(X) & F(C)) then D(A)”.
(It should be accompanied by the warning that this formulation, although it ‘looks’ like a valid logical argument pattern, is in fact not deductively valid for arguments containing deontic premises, and that a plan’s plausibility does not rest on any single argument but on the weight of all its pros and cons.)
But assuming that these difficulties can be adequately dealt with, the answer to question 1 seems obvious: yes, the machine would be able to construct such arguments. Whether that already qualifies as ‘thinking’ or ‘reasoning’ can be left open; the significant realization is equally obvious: such contributions could be genuinely helpful to the discourse. For example, by supplying arguments that human participants had not thought of, they could help meet the aim of ensuring — as much as possible — that the plan will not have ‘unexpected’ undesirable side- and after-effects. (Avoiding such effects is an important part of H. Rittel’s very definition of design and planning.)
The same cannot as easily be said about question 2.
The answer to that question hinges on whether the human ‘thinking’ activities needed to make a decision to accept or reject the proposed plan can be matched by ‘the machine’. The reason is, of course, that not only will the plausibility of each argument have to be evaluated, or judged (by assessing the plausibility of each of its premises), but the arguments must also be weighed against one another. (A method for doing this has been described, e.g., in ‘The Fog Island Argument’ and several related papers.)
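As a rough illustration only (an assumed, simplified scheme, not necessarily the method described in ‘The Fog Island Argument’), suppose each premise receives a plausibility judgment on a -1 to +1 scale, each argument’s plausibility is taken as the product of its premise plausibilities, and the plan’s overall plausibility is a weighted sum over all pro and con arguments, the weights expressing the relative importance attached to the deontic premises:

```python
# An assumed, simplified evaluation scheme for illustration only; not
# necessarily the method of 'The Fog Island Argument'.

def argument_plausibility(premise_plausibilities):
    """Product of the premise plausibilities (each judged on a -1..+1 scale)."""
    result = 1.0
    for p in premise_plausibilities:
        result *= p
    return result

def plan_plausibility(arguments):
    """arguments: list of (premise_plausibilities, weight) pairs; weights are
    assumed to sum to 1 across all arguments, pro and con."""
    return sum(argument_plausibility(ps) * w for ps, w in arguments)

# One 'pro' argument judged fairly plausible, and one 'con' argument
# (here crudely modeled by a negative FI plausibility) judged weaker:
pro = ([0.9, 0.8, 0.7], 0.6)    # plausibilities of FI, D, F premises; weight
con = ([-0.6, 0.9, 0.8], 0.4)
print(plan_plausibility([pro, con]))   # positive result: leaning toward adoption
```

The point of the sketch is only that the arithmetic is trivial; the judgments feeding it are not, and that is where the real question lies.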
So a ‘search-and-match’ process, as the first part of such a judgment process, would have to look for those judgments in the data base, and the difficulty here is where such judgments would come from.
The prevailing answers for factual-instrumental premises as well as for fact-premises — premises i) and iii) — draw on ‘documented’ and commonly accepted truth, probability, or validity. Differences of opinion about claims drawn from ‘scientific’ and technical work, if any, are decided by a version of ‘majority voting’: ‘prevailing knowledge’ accepted by the community of scientists or domain experts, ‘settled’ controversies, and conclusions derived from sufficiently ‘big data’ (“95% of climate scientists…”) can serve as the basis of such judgments. It is often overlooked, however, that the premises of planning arguments, however securely based on ‘past’ measurements, observations, etc., are inherently predictions. So any certainty about their past truth must at least be qualified by a somewhat lesser degree of confidence that they will be equally reliable in the future: will the conditions under which the A –> X relationships are assumed to hold be equally likely to hold in the future? Including the conditions that may be — intentionally or inadvertently — changed as a result of future human activities pursuing aims different from those of the plan?
The question becomes even more controversial for the deontic (‘ought’) premises of planning arguments. Where do the judgments come from by which their plausibility and importance can be determined? Humans can be asked to express their opinions — and prevalent social conventions consider it a human right not only to express such judgments but to have them given ‘due consideration’ in public decision-making (however roundabout and murky the actual mechanisms for realizing this may be).
Equally commonly accepted is the principle that machines do not ‘have’ such rights. Thus, any judgment about deontic premises that a program might use to evaluate planning arguments would have to be based on information about human judgments found in the data base the program is using. There are areas where this is possible and even plausible. It is prudent, for instance, to assign a decidedly negative plausibility to deontic claims whose realization contradicts natural laws established by science (and still considered valid… like ‘any being heavier than air can’t fly…’). And there are human agreements — regulations and laws, and predominant moral codes — that summarily prohibit or mandate certain plans or parts of plans, supported by the further argument that we all ought not break the law, regardless of our own opinions. This will effectively ‘settle’ some arguments.
And there are various approaches in design and planning that seem to aim at finding — or establishing — enough such mandates or prohibitions that, taken together, would make it possible to ‘mechanically’ determine at least whether a plan is ‘admissible’ or not — e.g. for buildings, whether its developer should get a building permit.
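Such a ‘mechanical’ admissibility check is easy to sketch; the prohibitions, mandates, and plan features below are invented illustrations, not drawn from any actual building code or regulation:

```python
# A minimal sketch of the 'mechanical admissibility' idea: a plan is checked
# against summary prohibitions and mandates. All rule and feature names here
# are hypothetical illustrations.

def is_admissible(plan_features, prohibited, mandated):
    """Admissible only if no prohibited feature is present
    and every mandated feature is included."""
    violations = plan_features & prohibited
    missing = mandated - plan_features
    return not violations and not missing

plan = {"two exits", "5m setback", "basement apartment"}
prohibited = {"basement apartment"}           # e.g. a zoning prohibition
mandated = {"two exits", "5m setback"}        # e.g. fire and zoning mandates
print(is_admissible(plan, prohibited, mandated))   # False: a prohibited feature is present
```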
This approach is supported in theory by branches of modal logic that seek to resolve deontic claims on the basis of ‘true/false’ judgments (which must have been made somewhere, by some authority) of what is ‘obligatory’, ‘prohibited’, ‘permissible’, etc. It can be seen as extended by at least two different ‘movements’ that, in effect, sidestep the judgment question.
One is the call for society as a whole to adopt (collectively agree upon) moral and ethical codes whose function is equivalent to that of ‘laws’ — from which the deontic judgment about plans could be derived by mechanically applying the appropriate reasoning steps — invoking ‘Common Good’ mandates supposedly accepted unanimously by everybody. The question of whether and how this relates to the principle of granting the ‘right’ to freely hold and happily pursue one’s own deontic opinions is usually not examined in this context.
Another example is the ‘movement’ of Alexander’s ‘Pattern Language’. Contrary to claims that it is a radically ‘new’ theory, it stands in a long and venerable tradition, in many trades and disciplines, of establishing codes and collections of ‘best practice’ rules or ‘patterns’ — learned by apprentices in years of observing the masters, or compiled in large volumes of proper patterns. The basic idea is to postulate ‘elements’ (patterns) of the realm of plans, and relationships between them, by means of which plans can be generated. The ‘validity’ or ‘quality’ of the generated plan is then guaranteed by the claim that each of the patterns (rules) is ‘valid’ (‘true’, or having that elusive ‘quality without a name’). This is supported by showing examples of environments judged (by intuition, i.e. needing no further justification) to exhibit ‘quality’ as a result of applying the patterns. The remaining ‘solution space’ left open by, e.g., the different possible combinations of patterns then serves as the basis for claims that the theory offers ‘participation’ to prospective users. However, it hardly needs pointing out that individual ‘different’ judgments — e.g. about the appropriateness of a given pattern or relationship — are effectively eliminated by such approaches. (This assessment should not be read as a wholesale criticism of the approach, whose unquestionable merit is to introduce quality considerations into the discourse about the built environment that ‘common practice’ has neglected.)
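The generative mechanism itself can be caricatured in a few lines of code. The toy sketch below is emphatically not Alexander’s actual pattern language; the pattern names and the compatibility relation are invented for illustration. It only shows how candidate plans might be enumerated from ‘valid’ patterns and permitted relationships, with no step at which an individual’s differing judgment could enter:

```python
# A toy caricature of the generative idea; not Alexander's actual
# pattern language. Pattern names and compatibilities are invented.

from itertools import combinations

patterns = {"courtyard", "arcade", "window seat", "light on two sides"}
compatible = {                                   # permitted pairwise relationships
    frozenset({"courtyard", "arcade"}),
    frozenset({"arcade", "window seat"}),
    frozenset({"window seat", "light on two sides"}),
}

def generate_plans(size):
    """Enumerate candidate 'plans': subsets of patterns whose members are
    pairwise compatible. The result is deemed acceptable solely because each
    pattern is deemed 'valid'; no individual judgment enters anywhere."""
    for combo in combinations(sorted(patterns), size):
        if all(frozenset(pair) in compatible for pair in combinations(combo, 2)):
            yield combo

print(list(generate_plans(2)))
```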
The relevance of discussing these approaches for the two questions above now becomes clear: if a ‘machine’ (which could of course just be a human, untiringly pedantic bureaucrat assiduously checking plans for adherence to rules or patterns) were able to draw upon a sufficiently comprehensive data base of factual-instrumental knowledge and ‘patterns or rules’, it could conceivably generate solutions. And if the deontic judgments have been inherently attached to those rules, it could claim that no further evaluation (i.e. no inconvenient intrusion of differing individual judgments) would be necessary.
The development of ‘AI’ tools for automated support of the planning discourse will have to make a choice. It could follow this vision of a ‘common good’ and of solution elements whose validity is universally accepted by all members of society. Or it could accept the challenge of the opposing view: that it should either refrain from intruding on the task of making judgments, or go to the trouble of obtaining those judgments from the human participants in the process before using them to derive decisions. Depending on which course is followed, I suspect the agenda and tasks of current and future research, development, and programming will be very different. This is, in my opinion, a controversial issue of prime significance.