Archive for March, 2018

Artificial Intelligence for the Planning Discourse?

The discussion about whether, and to what extent, Artificial Intelligence technology can meaningfully support the planning process with contributions similar or equivalent to human thinking is largely dominated by controversies about what constitutes thinking. An exploration of the reasoning patterns in the various phases of the human planning discourse could produce examples for that discussion, leaving the definition of the label ‘thinking’ open for the time being.

One specific example (only one of several different and equally significant aspects of planning):
People propose plans for action, e.g. to solve problems, and then engage in discussion of the ‘pros and cons’ of those plans: arguments. A typical planning argument can be represented as follows:
“Plan A should be adopted for implementation, because
i) Plan A will produce consequences B, given certain conditions C, and
ii) Consequences B ought to be pursued (are desirable); and
iii) Conditions C are present (or will be, at implementation).”

Question 1: Could such an argument be produced by automated technological means?
This question is usually followed up by question 2: Would or could the ‘machine’ doing this be able (or should it be allowed) to also make decisions to accept or reject the plan?

Can meaningful answers to these questions be found? (Currently, or definitively?)

Beginning with question 1: Formulating such an argument in their minds, humans draw on their memory — or on explanations and information provided during the discourse itself — for items of knowledge that could become premises of arguments:

‘Factual-instrumental’ knowledge of the form “FI((A –> X)|C)”, for example “A will cause X, given conditions C”;
‘Deontic’ knowledge of the form “D(X)”, or “X ought to be (is desirable)”; and
‘Factual’ knowledge of the form “F(C)”, or “Conditions C are given”.
‘Argumentation-pattern’ knowledge: recognition that the three knowledge items above can be inserted into an argument pattern of the form
D(A) <– FI((A –> X)|C) & D(X) & F(C).

(There are of course many variations of such argument patterns, depending on assertion or negation of the premises, and different kinds of relations between A and X.)

It does not seem to be very difficult to develop a Knowledge Base (collection) of such knowledge items and a search-and-match program that would assemble ‘arguments’ of this pattern.
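
A minimal sketch, in Python, of how such a knowledge base and search-and-match program might look. The dataclass representation of the knowledge items and the plain-string matching rule are simplifying assumptions for illustration, not a description of any existing system:

```python
from dataclasses import dataclass

# The three kinds of knowledge items, mirroring the premise types above.
@dataclass(frozen=True)
class FI:              # 'factual-instrumental': plan produces consequence, given condition
    plan: str
    consequence: str
    condition: str

@dataclass(frozen=True)
class D:               # 'deontic': this state of affairs ought to be (is desirable)
    claim: str

@dataclass(frozen=True)
class F:               # 'factual': this condition is (or will be) given
    condition: str

# A small, entirely hypothetical knowledge base.
knowledge_base = [
    FI("Plan A", "reduced traffic congestion", "stable fuel prices"),
    D("reduced traffic congestion"),
    F("stable fuel prices"),
    D("lower noise levels"),   # no matching FI and F items: yields no argument
]

def assemble_arguments(kb):
    """Search-and-match: yield an argument for every (FI, D, F) triple
    instantiating the pattern  D(A) <- FI((A -> X)|C) & D(X) & F(C)."""
    desired = {k.claim for k in kb if isinstance(k, D)}
    given = {k.condition for k in kb if isinstance(k, F)}
    for fi in (k for k in kb if isinstance(k, FI)):
        if fi.consequence in desired and fi.condition in given:
            yield (f"{fi.plan} ought to be adopted, because it will produce "
                   f"{fi.consequence} given {fi.condition}; {fi.consequence} "
                   f"ought to be pursued; and {fi.condition} is given.")

for argument in assemble_arguments(knowledge_base):
    print(argument)
```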

Any difficulties arguably would be related more to the task of recognizing and suitably extracting such items (‘translating’ them into a form recognizable to the program) from the recorded and documented sources of human knowledge than to the mechanics of the search-and-match process itself. This is a matter of interpreting meaning: is an item expressed in different words equivalent to the terms appearing in the other potential premises of an argument?

Another quibble relates to the question of whether and to what extent the consequence qualifies as one that ‘ought to be’ (or not) — but this can be dealt with by reformulating the argument as follows:
“If (FI((A –> X)|C) & D(X) & F(C)) then D(A)”.

(It should be accompanied by the warning that this formulation, though it ‘looks’ like a valid logical argument pattern, is in fact not deductively valid for arguments containing deontic premises, and that a plan’s plausibility does not rest on any single argument but on the weight of all its pros and cons.)

But assuming that these difficulties can be adequately dealt with, the answer to question 1 seems obvious: yes, the machine would be able to construct such arguments. Whether that already qualifies as ‘thinking’ or ‘reasoning’ can be left open; the significant realization is equally obvious: such contributions could potentially be helpful to the discourse. For example, by contributing arguments human participants had not thought of, they could help meet the aim of ensuring — as much as possible — that the plan will not have ‘unexpected’ undesirable side- and after-effects. (One important part of H. Rittel’s very definition of design and planning.)

The same cannot as easily be said about question 2.

The answer to that question hinges on whether the human ‘thinking’ activities needed to make a decision to accept or reject the proposed plan can be matched by ‘the machine’. The reason is, of course, that not only will the plausibility of each argument have to be ‘evaluated’, judged (by assessing the plausibility of each of its premises), but the arguments must also be weighed against one another. (A method for doing this has been described, e.g., in ‘The Fog Island Argument’ and several papers.)
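
A much-simplified sketch of that kind of judgment process, in Python. The [-1, +1] plausibility scale, the product rule for an argument’s plausibility, and the weighted sum over arguments are simplifying assumptions for illustration; they are not the full method of the cited papers:

```python
# Each premise gets a plausibility judgment on a scale from -1 (totally
# implausible) to +1 (totally plausible). Here an argument's plausibility
# is the product of its premise plausibilities, and the plan's overall
# plausibility is a weighted sum over all pro and con arguments.

def argument_plausibility(fi_pl: float, d_pl: float, f_pl: float) -> float:
    """Plausibility of one argument from its three premise judgments."""
    return fi_pl * d_pl * f_pl

def plan_plausibility(judged_arguments) -> float:
    """judged_arguments: (argument_plausibility, weight) pairs, where the
    weights express the relative importance of each 'ought' concern and
    are assumed to sum to 1 across all arguments."""
    return sum(pl * weight for pl, weight in judged_arguments)

# One pro and one con argument, as judged by a single hypothetical participant:
pro = argument_plausibility(0.8, 0.9, 0.7)    # plausible premises, desirable outcome
con = argument_plausibility(0.9, -0.8, 0.9)   # plausible premises, undesirable outcome
print(plan_plausibility([(pro, 0.6), (con, 0.4)]))  # ~0.04: weakly leaning to acceptance
```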

So a ‘search-and-match’ process, as the first part of such a judgment process, would have to look for those judgments in the data base; the difficulty here has to do with where such judgments would come from.

The prevailing answers for factual-instrumental premises as well as for fact-premises — premises i) and iii) — draw on ‘documented’ and commonly accepted truth, probability, or validity. Differences of opinion about claims drawn from ‘scientific’ and technical work, if any, are decided by a version of ‘majority voting’: ‘prevailing knowledge’ accepted by the community of scientists or domain experts, ‘settled’ controversies, derived from sufficiently ‘big data’ (“95% of climate scientists…”), can serve as the basis of such judgments. It is often overlooked that the premises of planning arguments, however securely based on ‘past’ measurements, observations etc., are inherently predictions. So any certainty about their past truth must at least be qualified with a somewhat lesser degree of confidence that they will be equally reliably true in the future: will the conditions under which the A –> X relationships are assumed to hold be equally likely to hold in the future? Including the conditions that may be — intentionally or inadvertently — changed as a result of future human activities pursuing different aims than those of the plan?
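
A toy illustration of this ‘prediction discount’, with hypothetical numbers: however well-confirmed a premise is for the past, its plausibility as a forecast should be reduced by a judgment of how stable the conditions C will remain:

```python
# Hypothetical judgments, for illustration only.
past_plausibility = 0.95     # how well the A -> X claim is confirmed by past observation
condition_stability = 0.7    # judged likelihood that conditions C still hold at implementation

# The premise's plausibility *as a prediction* is weaker than its past record:
future_plausibility = past_plausibility * condition_stability
print(future_plausibility)   # 0.665 -- noticeably less certain than 0.95
```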

The question becomes even more controversial for the deontic (‘ought’) premises of planning arguments. Where do the judgments come from by which their plausibility and importance can be determined? Humans can be asked to express their opinions — and prevalent social conventions consider the freedom not only to express such judgments but also to have them given ‘due consideration’ in public decision-making (however roundabout and murky the actual mechanisms for realizing this may be) to be a human right.

Equally commonly accepted is the principle that machines do not ‘have’ such rights. Thus, any judgment about deontic premises that might be used by a program for evaluating planning arguments would have to be based on information about human judgments that can be found in the data base the program is using. There are areas where this is possible and even plausible. For one, it is prudent to assign a decidedly negative plausibility to deontic claims whose realization contradicts natural laws established by science (and still considered valid… like ‘any being heavier than air can’t fly’). But there are also human agreements — regulations and laws, and predominant moral codes — that summarily prohibit or mandate certain plans or parts of plans, supported by subsequent arguments to the effect that we all ought not to break the law, regardless of our own opinions. This will effectively ‘settle’ some arguments.

And there are various approaches in design and planning that seem to aim at finding — or establishing — enough such mandates or prohibitions that, taken together, would make it possible to ‘mechanically’ determine at least whether a plan is ‘admissible’ or not — e.g. for buildings, whether its developer should get a building permit.
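
A toy sketch of such a ‘mechanical’ admissibility check, in Python; the features, prohibitions, and mandates are invented for illustration, not drawn from any actual building code:

```python
# Invented regulations: sets of plan features that are summarily
# prohibited or mandated, regardless of anyone's individual judgment.
PROHIBITED = {"windowless bedroom", "single exit above four stories"}
MANDATED = {"smoke detectors", "accessible entrance"}

def admissible(plan_features: set) -> bool:
    """A plan is admissible iff it contains no prohibited feature
    and every mandated feature."""
    return PROHIBITED.isdisjoint(plan_features) and MANDATED <= plan_features

print(admissible({"smoke detectors", "accessible entrance", "balcony"}))  # True
print(admissible({"smoke detectors", "windowless bedroom"}))              # False
```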

This pattern is supported in theory by branches of modal logic that seek to resolve deontic claims on the basis of ‘true/false’ judgments (that must have been made somewhere, by some authority) of ‘obligatory’, ‘prohibited’, ‘permissible’, etc. It can be seen to be extended by at least two different ‘movements’ that must be seen as sidestepping the judgment question.

One is the call for society as a whole to adopt (collectively agree upon) moral or ethical codes whose function is equivalent to that of ‘laws’ — from which the deontic judgment about plans could be derived by mechanically applying the appropriate reasoning steps — invoking ‘Common Good’ mandates supposedly accepted unanimously by everybody. The question of whether and how this relates to the principle of granting the ‘right’ of freely holding and happily pursuing one’s own deontic opinions is usually not examined in this context.

Another example is the ‘movement’ of Alexander’s ‘Pattern Language’. Contrary to claims that it is a radically ‘new’ theory, it stands in a long and venerable tradition, in many trades and disciplines, of establishing codes and collections of ‘best practice’ rules or ‘patterns’ — learned by apprentices in years of observing the masters, or compiled in large volumes of proper patterns. The basic idea is that of postulating ‘elements’ (patterns) of the realm of plans, and relationships between them, by means of which plans can be generated. The ‘validity’ or ‘quality’ of the generated plan is then guaranteed by the claim that each of the patterns (rules) is ‘valid’ (‘true’, or having that elusive ‘quality without a name’). This is supported by showing examples of environments judged (by intuition, i.e. needing no further justification) to exhibit ‘quality’, by applications of the patterns. The remaining ‘solution space’ left open by, e.g., the different combinations of patterns then serves as the basis for claims that the theory offers ‘participation’ by prospective users. However, it hardly needs pointing out that individual ‘different’ judgments — e.g. about the appropriateness of a given pattern or relationship — are effectively eliminated by such approaches. (This assessment should not be seen as a wholesale criticism of the approach, whose unquestionable merit is to introduce quality considerations into the discourse about the built environment that ‘common practice’ has neglected.)

The relevance of discussing these approaches for the two questions above now becomes clear: if a ‘machine’ (which could of course just be a human, an untiringly pedantic bureaucrat assiduously checking plans for adherence to rules or patterns) were able to draw upon a sufficiently comprehensive data base of factual-instrumental knowledge and ‘patterns or rules’, it could conceivably be able to generate solutions. And if the deontic judgments have been inherently attached to those rules, it could claim that no further evaluation (i.e. no inconvenient intrusion of differing individual judgments) would be necessary.

The development of ‘AI’ tools for automated support of the planning discourse will have to make a choice. It could follow this vision of a ‘common good’ and of a valid truth of solution elements, universally accepted by all members of society. Or it could accept the challenge of the view that it should either refrain from intruding on the task of making judgments, or go to the trouble of obtaining those judgments from the human participants in the process before using them in the task of deriving decisions. Depending on which course is followed, I suspect the agenda and tasks of current and further research, development, and programming will be very different. This is, in my opinion, a controversial issue of prime significance.

Embracing contradictions — A Fog Island Tavern conversation.

– Vodçek, tell me, do you have a feeling Abbé Boulah has lost it?
– Bog-Hubert, my friend: you do look worried — did you overdo the testing of your new batch of Eau d’Hole? What in the world makes you ask such questions?
– Well, I just overheard him talk on the phone with his buddy up in town. Out on the deck while I was coming up the ramp; he didn’t seem to mind at all. And he was going on about how we ought to embrace contradictions, of all things. Speaking Latin and quoting Libbett, whoever that was.
– Libbett? Never heard of him. Ah, wait: Libbett’s not a person — at least not one our friend was talking about. Could he have been invoking the old logic rule of “ex contradictione sequitur quodlibet”?
– Was that what he kept shouting? What does it mean?
– It means “From a contradiction you can infer, conclude whatever”. Right, professor?
– Yes indeed. It’s also known as the rule ‘ex falso quodlibet’. It’s a warning about the fallacy of allowing contradictory claims in a system, because any statement can be proven from them: anything follows, looking like ‘proof’, even though it’s nonsense.
– I don’t understand.
– Okay. Consider a statement (1) “Wine is good for you, or Abbé Boulah is a Lower Lugubrian spy”. Now we know that wine is good for you — even my doctor tells me that, so that part is true, which means the whole claim (1) is true. But a statement (2) to the effect that ‘Wine is not good for you’ can also be seen as true, since there are wines so vile they make you sick. Not in my tavern, of course. But if (2) is true, the first part of statement (1) is not true, which would mean that its second part must be true, since otherwise the whole sentence would not be true. This ‘proves’ that Abbé Boulah must be a Lower Lugubrian spy. Or whatever else you’d put in that second part.
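
(For readers who want to check the professor’s reasoning mechanically: a minimal brute-force entailment checker in Python; the propositions and helper names are made up for illustration.)

```python
from itertools import product

def entails(premises, conclusion, variables):
    """True iff the conclusion holds in every truth assignment
    that satisfies all the premises (brute-force truth table)."""
    for values in product([False, True], repeat=len(variables)):
        env = dict(zip(variables, values))
        if all(p(env) for p in premises) and not conclusion(env):
            return False
    return True

# W: 'wine is good for you'; S: 'Abbe Boulah is a Lower Lugubrian spy'.
statement_1 = lambda e: e["W"] or e["S"]   # (1) wine is good, or he's a spy
statement_2 = lambda e: not e["W"]         # (2) wine is not good for you
print(entails([statement_1, statement_2], lambda e: e["S"], ["W", "S"]))  # True

# With an outright contradiction as premises, *anything* follows, because
# no assignment satisfies both -- entailment holds vacuously:
print(entails([lambda e: e["W"], lambda e: not e["W"]],
              lambda e: e["S"], ["W", "S"]))   # True: ex falso quodlibet
```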
– They teach that at la Sorbonne?
– Sure do. At least they did when William of Soissons came up with this rule in the 12th century.
– But is there even a place called Lower Lugubria?
– Right, Renfroe, I mean no. Not yet, at least. But the logic proves it, so it must be true, eh?
– Well, I’ll be dipped in Cajun hot sauce. So how come Abbé Boulah can call for anybody to ’embrace’ such nonsense? The government is going to deport him, and where will they send him if there ain’t no such place?
– Bears some looking into. Could it be that he means something else by the ‘quodlibet’ thingy?
– Interesting idea. Yes, I think he could have meant several things. You know he’s been working with his buddy on the planning discourse support platform idea for a while, where one of the problems has to do with whether there should be an ‘expert system’ or Artificial Intelligence component that could look at both the discussion entries and other data bases, and come up with useful comments to the discourse. And their concern has been, quite reasonably, that the planning discourse contains all kinds of contradictory information. Almost by definition — all the ‘pros and cons’, remember? So how can it draw any meaningful conclusions from all that?
– I get it. It’ll all be Lower Lugubrian… So?
– So they have been saying that unless the AI folks can come up with a meaningful way of dealing with the contradictions in the discourse — and in the other data bases, too, I’d guess, — there will be big questions about AI support for the planning discourse.
– Why is that?
– Well, normally, you’d trust whatever such a system comes up with because you think it is trustworthy and reliable — which depends on two things. One, that all the data from which it draws conclusions are true, or at least come with trustworthy probability estimates. And two, that the reasoning rules it uses to draw conclusions are ‘valid’ — meaning that they will actually result in true conclusions if all their premises are true. So if you now admit contradictions into the data gumbo, how can you trust the conclusions?
– Well, isn’t the point of such systems to calculate, based on valid reasoning rules and checked-out facts, which of two contradictory claims is true and which one is wrong?
– That would be nice, wouldn’t it, Sophie. But it isn’t that easy to tell what’s what. Take the big controversy about climate change, and whether it’s all caused by human activity. There are many studies that show how warming trends seem to follow the amounts of CO2 that human activities emit. Using good, reliable data (meaning the data have been confirmed by other independent studies) and reliable scientific and statistical methods, ‘reasoning’ patterns, if you like. So those studies seem to ‘prove’ that human emissions have contributed to global warming. But there are also studies that use data you can’t dismiss as ‘false’, and equally valid statistical methods — that come up with conclusions that say there is no such connection.
– What do you mean, Professor?
– Well, Sophie: it turns out that much depends on what kind of data you are using — for example, whether you are measuring temperature changes on the earth’s surface (on land), or over the ocean, or in the ocean, or in the air, and how high up you are measuring it. And it also depends on what time period you are looking at. So each of the studies may actually be quite ‘valid’ in itself — reliable data and respectable methods — just looking at different data, and coming up with different results. So it takes some close scrutiny to evaluate those studies: the ‘system’ may report all of them as ‘reliable’ because it doesn’t realize that the kinds of data and the time period may make all the difference. So you still need to take a critical look at the studies.
– Okay. Let’s not get caught up in that climate issue right now. You mentioned that there were several possible reasons for Abbé Boulah’s strange call?
– Yes, thanks for reminding me. At least two, besides simply saying that we need to acknowledge all the contradictions in the discourse, and actively encourage people to express them for discussion and evaluation. Not sweeping them under the rug by pushing for ‘consensus’ or relying on ‘expert’ reports.
– Yes. We’ve discussed that — but people seem to be stuck on unified visions and leadership and consensus; well, I guess it takes time to sink in. But you weren’t done yet, were you, sorry?
– Right. We should also improve our tools for dealing with all those contradictions. Because there are actually some claims — the ‘deontic’ ones, having to do with what people feel ‘ought to be’: goals, objectives, concerns about things they don’t like in the current problem situation, or visions of how life might be different — don’t we all wish that we could ‘make a difference’ in our lives? Those premises of planning arguments are not ‘true’ or ‘false’, not even properly assessed as ‘probable’; so logic has some serious trouble with them. Its valid syllogisms don’t apply. And we accept that people have the right to pursue happiness in different goals. Just look at the common views of the ‘costs’ and ‘benefits’ of proposed solutions: they aren’t the same for all affected parties. Some of the ‘costs’ that some want to reduce are actually ‘benefits’ (income, profits) for others, who want more…
– Hmm. You’re making things difficult here. So the government making decisions based on the famous ‘Benefit/Cost Ratio’ is actually… how should we say…
– Sweeping those issues under the rug? Yes. We need different decision tools.
– But there still could be more behind Abbé Boulah’s attitude?
– Well, maybe he is simply referring to the ‘Systematic Doubt’ method for analyzing a problem and generating solution ideas.
– What in ninety nonsequiturs’ names are you talking about, Professor?
– You never heard about this technique? It’s actually based on a nifty piece of logic — De Morgan’s second theorem — which says that the negation of a statement consisting of two or more claims joined by ‘and’ is equivalent to the statement of the negated parts joined by ‘or’:
~(a ^ b) = (~a v ~b)
(Here, ~ means negation, ^ means ‘and’, and v means ‘or’.)
– So where does the contradiction come in?
– Impatient wench, Sophie. Say you have a problem and try to find a way, a plan, to fix it — eliminate it. Now make a statement consisting of several necessary conditions for that problem to exist: “a ^ b ^ c …” (De Morgan’s theorem works for any number of conditions.) You want the problem to go away? That can be expressed as ‘~(a ^ b ^ c …)’. The equivalence of that statement to ‘~a v ~b v ~c …’ means that if only one of those necessary conditions could be made to go away — negated, c o n t r a d i c t e d, get it? — the whole problem would no longer exist.
– So?
– Ah Renfroe — this gives rise to a beautiful approach to finding many possible solution ideas, don’t you see? The steps are:
* First, you look at the problem and find all its necessary conditions.
* You state those conditions in plain assertive sentences. This way of talking about problems takes a bit of getting used to; we usually just complain about them in negative sentences, such as ‘there isn’t any money in my bank account’ or ‘there’s no place to park’.
* Then you take each of the conditions and negate it: contradict it. Write them out, one by one.
* Now you look for ways to make these negated statements come true. There may be no way to do that for some statements, but often there are two or more. They may not all make sense — some can be plain nonsense or hilariously unfeasible; the process can be a lot of fun — but there may just be one or more good ideas for solving the problem in there, so you can pick up and elaborate on those.
You will have been nudged to look at the problem from many different viewpoints, improving the chances of finding a good, feasible solution. All by contradicting the necessary conditions for the problem to exist.
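
(A minimal sketch, in Python, of the steps the professor just listed; the example problem and its necessary conditions are invented for illustration.)

```python
# Systematic Doubt: state the necessary conditions of a problem as plain
# assertions, then negate each in turn. By De Morgan's theorem,
# ~(a ^ b ^ c) = (~a v ~b v ~c): making any single negation true
# dissolves the whole problem.
problem = "There's no place to park downtown"
necessary_conditions = [
    "I drive my own car downtown",
    "Every parking space downtown is occupied",
    "My errands must be done downtown in person",
]

def systematic_doubt(conditions):
    """Yield one solution prompt per negated necessary condition."""
    for c in conditions:
        yield f"NOT ({c}) -- what would make this true?"

print("Problem:", problem)
for prompt in systematic_doubt(necessary_conditions):
    print(" *", prompt)
```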
– So Vodçek, you think this is what Abbé Boulah was talking about? It does make some sense now.
– Who knows, Sophie; we may just have to wait until next time he shows up here to ask him about that. But I can think of one more possibility of what he meant.
– Oh? As interesting as this Systematic Doubt one?
– Well, I’ll let you be the judge of that. It has to do with the meaning of ‘quodlibet’ in that old logic rule. Perhaps the professor can explain it better?
– Ah, I think I see what you are getting at, Vodçek. Yes, it would be quite in style with Abbé Boulah’s sometimes, hmm, unusual way of thinking. Let’s see: ‘quodlibet’ comes from ‘quod’, meaning ‘what’, and ‘libet’, meaning ‘like’ or ‘please’. So maybe he wasn’t referring to the meaningless ‘whatever’, the random nonsense. He could have been trying to say that by ‘embracing the contradictions’ in a planning situation, and using them with some imagination, we could end up with more creative, imaginative, but also pleasing solutions?
– As compared to just dwelling on the negative aspects about the problem, the complaints?
– Right. You agree, Bog-Hubert? You were the one who heard him talk to his friend — did that sound like it had something to do with Vodçek’s theory?
– Well, it sure would explain the last comment I heard him make, which I didn’t understand at all at the time.
– What was that, again?
– I think he mentioned something about misquoting, or changing one of Winston Churchill’s lesser known sayings — but what I heard was: “The price of ‘quodlibet’ is responsibility.”

– Is that what they are saying in Lower Lugubria?