
Artificial Intelligence for the Planning Discourse?

The discussion about whether and to what extent Artificial Intelligence technology can meaningfully support the planning process, with contributions similar or equivalent to human thinking, is largely dominated by controversies about what constitutes thinking. An exploration of the reasoning patterns in the various phases of human planning discourse could produce examples for that discussion, leaving the definition of the label ‘thinking’ open for the time being.

One specific example (only one of several different and equally significant aspects of planning):
People propose plans for action, e.g. to solve problems, and then engage in discussion of the ‘pros and cons’ of those plans: arguments. A typical planning argument can be represented as follows:
“Plan A should be adopted for implementation, because
i) Plan A will produce consequences B, given certain conditions C, and
ii) Consequences B ought to be pursued (are desirable); and
iii) Conditions C are present (or will be, at implementation).”

Question 1: could such an argument be produced by automated technological means?
This question is usually followed up by question 2: Would or could the ‘machine’ doing this be able (or should it be allowed) to also make decisions to accept or reject the plan?

Can meaningful answers to these questions be found? (Currently, or definitively?)

Beginning with question 1: Formulating such an argument in their minds, humans draw on their memory — or on explanations and information provided during the discourse itself — for items of knowledge that could become premises of arguments:

‘Factual-instrumental’ knowledge of the form “FI(A –> X | C)”, for example “A will cause X, given conditions C”;
‘Deontic’ knowledge of the form “D(X)”, or “X ought to be (is desirable)”; and
Factual knowledge of the form “F(C)”, or “Conditions C are given”.
‘Argumentation-pattern knowledge’: recognition that the three knowledge items above can be inserted into an argument pattern of the form
D(A) <– (FI((A –> X) | C) & D(X) & F(C)).

(There are of course many variations of such argument patterns, depending on assertion or negation of the premises, and different kinds of relations between A and X.)

It does not seem to be very difficult to develop a Knowledge Base (collection) of such knowledge items and a search-and-match program that would assemble ‘arguments’ of this pattern.
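As an illustration only, a toy version of such a knowledge base and search-and-match routine might look like the following sketch (the representation and all item names are invented for this example, not taken from any existing system):

```python
# Toy knowledge base of typed claims, and a search-and-match routine that
# assembles planning arguments of the pattern
#   D(A) <- (FI((A -> X) | C) & D(X) & F(C)).
# All plan names, consequences and conditions are invented examples.

# Factual-instrumental knowledge: (action A, consequence X, condition C)
FI = [
    ("build bypass road", "reduced downtown traffic", "commuters use it"),
    ("raise parking fees", "reduced downtown traffic", "transit available"),
]

# Deontic knowledge: consequences held to be desirable
D = {"reduced downtown traffic"}

# Factual knowledge: conditions asserted to hold
F = {"commuters use it"}

def assemble_arguments(fi_items, deontic, facts):
    """Match FI items whose consequence is desirable and whose condition
    is asserted as fact; emit each match as an argument string."""
    args = []
    for action, consequence, condition in fi_items:
        if consequence in deontic and condition in facts:
            args.append(
                f"D({action!r}) because FI({action!r} -> {consequence!r} "
                f"| {condition!r}) & D({consequence!r}) & F({condition!r})"
            )
    return args

for argument in assemble_arguments(FI, D, F):
    print(argument)
```

Only the first FI item matches here, since the condition of the second is not in the factual knowledge; the point is merely that the assembly step itself is mechanical once the knowledge items are in a recognizable form.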

Any difficulties arguably would be related more to the task of recognizing and suitably extracting such items (‘translating’ them into a form recognizable to the program) from the humanly recorded and documented sources of knowledge than to the mechanics of the search-and-match process itself. Interpretation of meaning: is an item expressed in different words equivalent to the terms appearing in the other potential premises of an argument?

Another slight quibble relates to the question whether and to what extent the consequence qualifies as one that ‘ought to be’ (or not) — but this can be dealt with by reformulating the argument as follows:
“If (FI(A –> X | C) & D(X) & F(C)) then D(A)”.

(It should be accompanied by the warning that this formulation that ‘looks’ like a valid logic argument pattern is in fact not really applicable to arguments containing deontic premises, and that a plan’s plausibility does not rest on one single argument but on the weight of all its pros and cons.)

But assuming that these difficulties can be adequately dealt with, the answer to question 1 seems obvious: yes, the machine would be able to construct such arguments. Whether that already qualifies as ‘thinking’ or ‘reasoning’ can be left open; the significant realization is equally obvious: such contributions could potentially be helpful to the discourse. For example, by contributing arguments human participants had not thought of, they could help to ensure — as much as possible — that the plan will not have ‘unexpected’ undesirable side-and-after-effects. (One important part of H. Rittel’s very definition of design and planning.)

The same cannot as easily be said about question 2.

The answer to that question hinges on whether the human ‘thinking’ activities needed to make a decision to accept or reject the proposed plan can be matched by ‘the machine’. The reason is, of course, that not only will the plausibility of each argument have to be ‘evaluated’, judged (by assessing the plausibility of each premise), but the arguments must also be weighed against one another. (A method for doing that has been described, e.g., in ‘The Fog Island Argument’ and several papers.)
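As a rough illustration of those two steps only — not the exact formulas of the cited sources — one could derive each argument's plausibility from the plausibility of its premises and then combine the weighted pros and cons; all numbers below are invented:

```python
# Illustrative sketch, not the method of the cited papers: pl-values lie
# in [-1, +1], where +1 is 'virtually certain' and -1 'certainly false'.

def argument_plausibility(premise_pls):
    """An argument is no more plausible than its premises allow;
    multiplying the premise values is one simple way to express this."""
    pl = 1.0
    for p in premise_pls:
        pl *= p
    return pl

def plan_plausibility(arguments):
    """arguments: list of (premise plausibilities, weight of importance).
    Pro arguments yield positive products, con arguments negative ones;
    the plan's overall plausibility is the weighted sum."""
    return sum(argument_plausibility(pls) * w for pls, w in arguments)

# Invented example: two pro arguments and one con argument.
args = [
    ([0.9, 0.8, 0.7], 0.5),   # strong pro
    ([0.6, 0.5, 0.9], 0.2),   # weaker pro
    ([-0.7, 0.8, 0.9], 0.3),  # con: a plausible but undesirable consequence
]
print(round(plan_plausibility(args), 3))
```

The design choice to multiply premise plausibilities is just one plausible aggregation rule (taking the minimum would be another); the essential point is that the plan's merit rests on all pros and cons together, not on any single argument.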

So a ‘search and match’ process as the first part of such a judgment process would have to look for those judgments in the data base, and the difficulty here has to do with where such judgments would come from.

The prevailing answers for factual-instrumental premises as well as for fact-premises — premises i) and iii) — draw on ‘documented’ and commonly accepted truth, probability, or validity. Differences of opinion about claims drawn from ‘scientific’ and technical work, if any, are decided by a version of ‘majority voting’: ‘prevailing knowledge’ accepted by the community of scientists or domain experts, ‘settled’ controversies, conclusions derived from sufficiently ‘big data’ (“95% of climate scientists…”) can serve as the basis of such judgments. It is often overlooked, however, that the premises of planning arguments, however securely based on ‘past’ measurements, observations etc., are inherently predictions. So any certainty about their past truth must at least be qualified with a somewhat lesser degree of confidence that they will be equally reliably true in the future: will the conditions under which the A –> X relationships are assumed to hold be equally likely to hold in the future? Including the conditions that may be — intentionally or inadvertently — changed as a result of future human activities pursuing different aims than those of the plan?

The question becomes even more controversial for the deontic (ought-) premises of the planning arguments. Where do the judgments come from by which their plausibility and importance can be determined? Humans can be asked to express their opinions — and prevalent social conventions consider the freedom to not only express such judgments but to have them given ‘due consideration’ in public decision-making (however roundabout and murky the actual mechanisms for realizing this may be) as a human right.

Equally commonly accepted is the principle that machines do not ‘have’ such rights. Thus, any judgment about deontic premises that might be used by a program for evaluating planning arguments would have to be based on information about human judgments that can be found in the data base the program is using. There are areas where this is possible and even plausible. Not only is it prudent to assign a decidedly negative plausibility to deontic claims whose realization contradicts natural laws established by science (and considered still valid…like ‘any being heavier than air can’t fly…’). But there also are human agreements — regulations and laws, and predominant moral codes — that summarily prohibit or mandate certain plans or parts of plans; supported by subsequent arguments to the effect that we all ought not break the law, regardless of our own opinions. This will effectively ‘settle’ some arguments.

And there are various approaches in design and planning that seem to aim at finding — or establishing — enough such mandates or prohibitions that, taken together, would make it possible to ‘mechanically’ determine at least whether a plan is ‘admissible’ or not — e.g. for buildings, whether its developer should get a building permit.

This pattern is supported in theory by branches of modal logic that seek to resolve deontic claims on the basis of ‘true/false’ judgments (which must have been made somewhere, by some authority) of ‘obligatory’, ‘prohibited’, ‘permissible’ etc. It can be seen to be extended by at least two different ‘movements’ that must be seen as sidestepping the judgment question.

One is the call for society as a whole to adopt (collectively agree upon) moral, ethical codes whose function is equivalent to ‘laws’ — from which the deontic judgment about plans could be derived by mechanically applying the appropriate reasoning steps — invoking ‘Common Good’ mandates supposedly accepted unanimously by everybody. The question whether and how this relates to the principle of granting the ‘right’ of freely holding and happily pursuing one’s own deontic opinions is usually not examined in this context.

Another example is the ‘movement’ of Alexander’s ‘Pattern Language’. Contrary to claims that it is a radically ‘new’ theory, it stands in a long and venerable tradition of many trades and disciplines of establishing codes and collections of ‘best practice’ rules or ‘patterns’ — learned by apprentices in years of observing the masters, or compiled in large volumes of proper patterns. The basic idea is that of postulating ‘elements’ (patterns) of the realm of plans, and relationships between these, by means of which plans can be generated. The ‘validity’ or ‘quality’ of the generated plan is then guaranteed by the claim that each of the patterns (rules) is ‘valid’ (‘true’, or having that elusive ‘quality without a name’). This is supported by showing examples of environments judged (by intuition, i.e. needing no further justification) to exhibit ‘quality’, by applications of the patterns. The remaining ‘solution space’ left open by, e.g., the different combinations of patterns then serves as the basis for claims that the theory offers ‘participation’ by prospective users. However, it hardly needs pointing out that individual ‘different’ judgments — e.g. about the appropriateness of a given pattern or relationship — are effectively eliminated by such approaches. (This assessment should not be seen as a wholesale criticism of the approach, whose unquestionable merit is to introduce quality considerations into the discourse about the built environment that ‘common practice’ has neglected.)

The relevance of discussing these approaches for the two questions above now becomes clear: if a ‘machine’ (which could of course just be a human, an untiringly pedantic bureaucrat assiduously checking plans for adherence to rules or patterns) were able to draw upon a sufficiently comprehensive data base of factual-instrumental knowledge and ‘patterns or rules’, it could conceivably be able to generate solutions. And if the deontic judgments have been inherently attached to those rules, it could claim that no further evaluation (i.e. no inconvenient intrusion of differing individual judgments) would be necessary.

The development of ‘AI’ tools for automated support of the planning discourse will have to make a choice. It could follow this vision of ‘common good’ and valid truth of solution elements, universally accepted by all members of society. Or it could accept the challenge of the view that it should either refrain from intruding on the task of making judgments, or go to the trouble of obtaining those judgments from human participants in the process before using them in the task of deriving decisions. Depending on which course is followed, I suspect the agenda and tasks of current and further research, development and programming will be very different. This is, in my opinion, a controversial issue of prime significance.

Some problems with the systematic assessment of planning arguments.

(Ref. e.g. the article ‘The Structure and Evaluation of Planning Arguments’, Informal Logic, Dec. 2010; also, slightly revised, in Academia.edu.)

In an effort to explore phenomena (shortcomings and errors) that can be seen as arguments against too-ready acceptance of the argumentative model of planning, I ran into a well-intentioned article full of claims and arguments that did not fit the simple, clean basic model of the planning argument and would cause some problems in their analysis and plausibility assessment. Briefly, there are three aspects of concern.

The first is the liberal use of verbs denoting the relationship between concepts that — in the basic planning argument — would be seen as plan features that cause outcomes or consequences. Reminder: the argumentative view shares the focus on cause-effect relationships with much of the systems modeling perspective: the ‘loops’ of systems networks are generated by changes in components / variables causing positive or negative changes in other variables. So the relationship constituting the ‘factual-instrumental’ premise of planning arguments is mostly seen as a cause-effect relationship.

Now the survey of arguments in the article mentioned above (not identified, to protect the author until proven guilty, and because the practice is actually quite common) hardly ever actually uses the terms ‘cause’ and ‘effect’ or their equivalents in arguments that clearly advocate certain policies and actions. Instead, one finds terms such as ‘reflects’, ‘advances’ (an adaptive response), ‘reinforces’, ‘seeks to…’, ‘codifies’, ‘is wired to…’, ‘erodes’, ‘comes to terms with…’, ‘speaks to…’, ‘retreats into…’, ‘crystallizes…’, ‘promotes’, ‘cross-fertilizes…’, ‘embraces’, ‘cuts across’, ‘rooted in…’, ‘deeply embedded’, ‘leverages’, ‘co-create’ and ‘co-design’, ‘highlights’, ‘re-ignites’. (Once the extent of such claims was realized in that article, which was trying to make a case for ‘disrupting’ the old system and its propaganda, it became clear that the article itself was heavily engaged in the art of propaganda, slightly saddening the reader who was initially tending toward sympathetic endorsement of that case…)

This wealth of relationship descriptions is apt to throw the blind-faith promoter of the simple planning argument pattern into serious self-recrimination: what is the point of thorough analysis of these kinds of argument if they never appear in their pristine form in actual discourse? (The basic ‘standard planning argument’ pattern is the following: “Proposed plan X ought to be adopted because X will produce consequence Y given conditions C, and consequence Y ought to be pursued, and conditions C are or will be given.”) True, it was always pointed out that there were other kinds of relationships than ‘will produce’ or ‘causes’ at work in that basic pattern: ‘part-whole’, for example, or ‘association’, ‘acting as catalyst’, ‘being identical or synonymous with’. But those were never seen as serious obstacles to their evaluation by the proposed process of argument assessment, as the above examples appear to be. How can they be evaluated with the same blunt tool as the arguments with plain cause-effect premises?

Secondly, the problems these verbs cause for assessment are exacerbated by the fact that they are often qualified with expressions like ‘probably’, ‘likely to’, ‘may be seen as’ and other means of retreating from complete certainty regarding the underlying claims. The effect of these qualification moves is that the entire claim ‘probably x’ or ‘x is likely to advance y’ can now be evaluated as a fully plausible claim and given a pl-value of +1 (‘completely plausible, virtually certain’) by a listener — since the premise obviously, honestly, does not claim complete certainty. This obscures the fact that the actual underlying suggestion, ‘x (actually) will advance y’, is far from completely plausible, and thus lends more plausibility and weight to the argument of which it is a premise.
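This inflation effect can be sketched numerically. The pl-scale of -1 to +1 follows the text; the specific qualifier values below are invented for illustration:

```python
# Hypothetical illustration of how a hedging qualifier can inflate an
# argument's plausibility. pl-values range from -1 (certainly false)
# to +1 (virtually certain), as in the text; the numbers are invented.

QUALIFIER_STRENGTH = {"probably": 0.7, "likely to": 0.6, "may": 0.3}

def pl_of_hedged_claim_as_whole(qualifier):
    """'x will probably advance y' honestly claims only a probability,
    so a listener may rate the WHOLE statement as virtually certain,
    whatever the qualifier is."""
    return 1.0

def pl_of_underlying_claim(qualifier):
    """Separating the qualifier: the underlying claim 'x will advance y'
    deserves only the plausibility the qualifier itself concedes."""
    return QUALIFIER_STRENGTH[qualifier]

whole = pl_of_hedged_claim_as_whole("probably")  # rated as a single claim
separated = pl_of_underlying_claim("probably")   # qualifier separated out
# The gap between the two values is the unearned plausibility the
# argument gains when the hedged premise is evaluated as one claim.
```

The worksheet remedy discussed later amounts to asking participants to supply the `separated` value rather than the `whole` one.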

A third problem is that, upon closer inspection, many of the relationship claims are not just honest, innocent expressions of factual or functional relationships between real entities or forces. They are often themselves ‘laden’ with deontic content — subjective expressions of ‘good’ or ‘bad’: ‘x threatens y’, or ‘relativizes’, or ‘manipulates’ are valuing relationship descriptions, judgments about ‘ought’-aspects that the proposed method reserved for the clearly deontic premises of planning arguments: the purported outcomes or consequences of plans.

What are the implications of these considerations for the proposal of systematic argument assessment in the planning discourse? (Other than the necessary acknowledgement that this very comment is itself a piece of propaganda…)

Apart from the option of giving up on the entire enterprise and leaving the subjective judgments by discourse participants unexamined, one response would be to devise ways of ‘separating’ the qualifying terms from the basic claims in the evaluation work sheets given to participants. They would be asked to assess the probability or plausibility of the basic premise claim, perhaps using the qualifying statements as a ‘guide’ to their plausibility judgment (like any other supporting evidence). This seems possible with some additional refinement and simplification of the proposed process.

It is less clear how the value-contamination of relationship descriptors could be dealt with. Changing the representation of arguments to the condensed form of the basic ‘standard planning argument’ pattern is already a controversial suggestion, requiring considerable ‘intelligent’ extraction of an argument’s ‘core’ from its ‘verbatim’ version, both to get it ‘right’ and to avoid turning it into a partisan interpretation. The ‘intelligent computation’ needed to add the suggested separation of value terms from relationship terms to this already severely manipulated argument representation will require some more research — but doing that may be asking too much?

And it is not clear how these considerations can help participants deal with insidious argument patterns such as a recent beauty alleging a media coverup of terrorist incidents in Sweden, and then using the objection that there was no evidence of such an incident as a ‘clinching’ argument for the coverup: ‘see how cleverly they are covering it up?’

Some speculations regarding the possibility of a moral code without religion.

On a Linked-In forum, the question was raised whether a moral code without religion could be developed. My effort to look into ways to achieve better decisions for planning, design and policy-making issues suggests that it is indeed possible to develop at least a partial system of agreements (for which ‘moral code’ would be an unnecessarily pretentious term) that has some of the same features. Consider problems, conflicts of interest, or proposed actions or projects that require the consent and cooperation of more than one individual (this does not cover all situations in which moral codes apply). As soon as parties realize that ‘resolutions’ based on coercion of any kind either will not really improve the situation or are fraught with unacceptable risks (the other guy might have a bigger club; or even one’s own nuclear weapon would be so damaging to one’s own side that its use would be counterproductive), the basic situation becomes one of negotiation or, as I call it, ‘planning discourse’. Such situations can be sustained and brought to success only on the basis of the expectation that parties will accept and behave according to some agreements. The set of such agreements can be seen as (part of) an ethical or moral code. For the planning discourse, a rough sketch of the first underlying ‘agreements’ or code elements is the following:

**1 Instead of attempting to resolve the problem by coercion — imposing one side’s preferred solution over those of other parties — let us talk, discuss the situation.

**2 The discussion will consist of each side describing that side’s preferred outcome, and attempting to convince the other side (other parties) of the advantages –or disadvantages — of the proposal.

**3 All sides will have the opportunity to do this, and all sides implicitly promise to listen to the other’s description and arguments before making a decision.

**4 The decision will (should) be based on the arguments brought forward in the discussion.

*4.1 The description of proposals should be truthful and avoid deception — all its relevant features should be described, none hidden; no pertinent aspects omitted.

*4.2 The arguments should equally truthful, avoiding deception and exaggeration, and be open to scrutiny and challenge, which means that participants should be willing to answer questions for further support of the claims made in the descriptions and arguments.

Simplified ‘planning arguments’ consist of three types of claims:
a) the factual-instrumental claim
‘proposal A will bring about Result B, given conditions C’
b) the factual claim
‘Conditions C are (or will be) given’;
c) the ‘deontic’ or ‘ought-claim’
‘Consequence B of the proposal ought to be pursued’;
and also
d) the ‘pattern’ or inference rule of the argument (that is, the specific constellation of assertions, negation of claims and relations between A and B) is ‘plausible’.
While such arguments (just like the ‘inductive’ reasoning that plays such a significant role in science) are not ‘valid’ from a formal logic point of view, they are nevertheless used and considered all the time, their plausibility deriving from their particular constellation of claims and their ‘fit’ to the specific situation.
The plan proposal A is itself a ‘deontic’ (ought-) claim.

*4.3 The support for claims of type (a) and (b) takes the form of ‘evidence’ provided and bolstered by what we might loosely call the ‘scientific’ perspective and method.

*4.4 Support for claims of type c) will take further arguments of the ‘planning argument’ kind and pattern, containing further factual and deontic claims in support of the desirability of B.
The deontic claims of such further support arguments can refer to previous agreements, accepted laws or treaties that imply acceptance of a disputed claim, claims of desirability or undesirability for any party affected by the proposed plan, even moral rules derived from religious domains.

**5 Individual participants’ (preliminary) decision should be based on that participant’s individual assessment of the plausibility of all the arguments pro and con that have been brought up in the discussion.
That assessment should not be superseded by considerations extraneous to the plan proposal discussion itself — such as party voting discipline — but be a function of the plausibility and weights assigned by the individual to the arguments and their supporting claims.

**6 A collective decision will be based on the overall ‘decisions’ or opinions of individual participants.
(The current predominant ‘majority voting’ methods for reaching decisions do not meet the expectation #4 above of guaranteeing that the decision be based on due consideration of all expressed concerns: here, a new method is sorely needed).

A decision to adopt a plan by the participants (parties affected by the proposed plan) in such a discussion should only be taken (agreed upon) if all participants’ assessment of the plan is positive or at least ‘not worse’ than the existing problem situation that precipitated the discussion.

**7    Discussion should be continued until all parties feel that all relevant concerns have been voiced. Ideally, the discussion would lead to consensus regarding acceptance or rejection of the proposed plan. If this is the case, a decision can be taken and the plan accepted for implementation.
Realistically, there may be differences of opinion: some parties will support, others oppose the plan. The options for this case are either to abandon the process (to do nothing), to attempt to modify the plan to remove specific features that cause opponents’ concerns; or to prepare a different proposal altogether and start a new discussion about it.

**8 Individual parties’ ‘decision’ (e.g. vote) contribution to the common decision should match the party’s expressed assessment of the arguments and argument premises.
For example: if a participant agrees with all the ‘pro’ arguments and disagrees with the ‘con’ arguments (or expresses lesser weight for the ‘con’ arguments), the participant’s overall vote should be positive. Conversely, if the participant’s assessment of the arguments is negative, the overall ‘vote’ should be negative. Participants should be expected to offer additional explanation of any discrepancy between argument assessment and overall decision.
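The expectation in this agreement can be sketched as a simple consistency check; this is illustrative only, and the plausibility values and weights below are invented:

```python
# Illustrative consistency check for agreement **8: does a participant's
# overall vote match the sign of their own weighted argument assessment?

def expected_vote(assessments):
    """assessments: list of (plausibility in [-1, +1], weight >= 0) pairs,
    one per argument (pros positive, cons negative).
    Returns +1, -1 or 0 according to the weighted sum of the judgments."""
    total = sum(pl * w for pl, w in assessments)
    return (total > 0) - (total < 0)

# A participant who finds the pro arguments plausible and the con weak:
judgments = [(+0.8, 0.5), (+0.4, 0.3), (-0.2, 0.2)]  # two pros, one con
vote = expected_vote(judgments)
# A cast vote differing from `vote` would be the kind of discrepancy
# for which the participant should offer additional explanation.
```
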

**9 A common decision to accept a proposed plan implies obligations (specified in the plan) for all parties to contribute to implementation and adherence to the decision provisions.

**10 The plan may include provisions to ensure adherence and contributions by the parties. Such provisions may include ‘sanctions’, understood as (punitive) measures taken against parties guilty of violating plan agreements.
There undoubtedly might be more agreements needed for a viable planning ‘ethic’. It is clear that some of the above provisions are not easy to ‘live up to’ — but what moral system has ever been? And for some provisions, the necessary tools for their successful application are still not available. For many societal decisions, access to the discussion (to be able to voice concerns) is lacking even in so-called advanced democracies. Some expectations may sound like wishful thinking: the tools for a transparent linkage between argument assessment and overall (individual) decision, and even more for the linkage between arguments and collective decision, are still not available. The approach to systematic and transparent argument assessment (my article in the Dec. 2010 issue of Informal Logic, ‘The Structure and Evaluation of Planning Arguments’) suggests that such a link would be feasible and practical, if somewhat more cumbersome than current voting and opinion polling practices. However, its application would require some changes in the organization of the planning discourse and support system, as well as in decision-making methods.

These observations were made mainly in response to the question whether a ‘moral code’ not based on religious tenets would be possible (and meaningful). That question may ultimately be taken to hinge on item #10 above — the sanction issue. The practical difficulties of specifying and imposing effective sanctions to ensure adherence to moral rules may lead many to accept or postulate sanctions and rewards administered by an entity in the hereafter. But it would seem reasonable to continue to explore such agreement systems, including sanctions in the ‘here and now’, beyond current practices, since both non-religious and religion-based systems arguably have not been successful enough in reducing the level of violations of their rules.

On Experts and Expert Judgment

Experts

Reading a recent issue of Critical Review Journal (Vol. 22, #4, 2010) which is devoted to reviews of Philip Tetlock’s book Expert Political Judgment (2005), I was encouraged by what seemed to be a long overdue examination of the role of experts in society in general, and politics in particular. I had not read the book, and was looking at the articles in the journal mainly for material related to my own major interest, that of arguments we use in planning, design, political discourse, and of the evaluation of such arguments.

I was a little disappointed to see that the book appears (judging by the discussion in the six papers reviewing it) to have focused on one specific issue — that of the reliability of experts’ predictions of political and other developments in society. I am not in the least disputing the value of such an investigation, nor am I in the position of judging the validity of the methods used by Tetlock, and thus not of his findings. They seem to be quite critical of expert judgments, perhaps even too critical, in the opinion of some of the reviewers.

The issue forced me to re-examine my own views of the role of experts, specifically in planning, where I had long supported the position of complementing (at least) expert contributions to the planning discourse with participation by the people affected by a plan. This position was not so much based on empirical evidence as on theoretical and logical considerations: the information about consequences of plans — the way a plan affects people — is only partially predictable on the basis of expert (i.e. textbook and previous-experience) knowledge, being distributed ‘out there’; and the judgments about the deontic premises involved — whether such consequences are desirable or undesirable, and to what degree — are plainly not a matter for the experts to decide, unless they are also among those affected. But such affectedness is usually grounds for rejecting judgments as not sufficiently ‘disinterested’ and as tainted by ‘conflict of interest’.

So there are good reasons to look at this issue in some more detail. The first question might be: what is the purpose of expert judgment? The answer could range from simply “to provide specialized information not available to the average person and political decision-maker” to “helping the average citizen to take positions, vote, or support political decision-makers, on important issues”: a kind of shortcut of judgment in the spirit of division of labor on which society depends so pervasively. This raises the question of what makes an expert and expert’s judgment reliable, trustworthy. The issue becomes significant not only because of the importance of issues for which we are bombarded with ‘expert’ advice in the media, but also because of the less savory spectacle of selection of experts in courts not on the basis of the trustworthiness of expertise but on their willingness to support the respective prosecution or defense positions.

The list of criteria for expert advice trustworthiness is long and varied:
– evidence of training, and possession of degrees in the subject matter;
– evidence of experience in official government, academic or private enterprise positions related to the subject matter;
– the kind and prestige of the respective institution;
– sometimes the position itself (editor, department director…);
– age;
– gender;
– political, ethnic, ideological group association;
– ‘being on TV’;
– extent of followership, election margin or polling results;
– stated position (and ‘voting record’) on issues.

The last item, ‘voting record’, finally begins to approach a really meaningful basis of judgment trustworthiness, one aspect of which was the target of Tetlock’s investigation: how reliable did an expert’s judgment turn out to be in a variety of significant situations, as supported by evidence? Unfortunately — or merely disappointingly for me, because his studies are an undeniably important start — Tetlock chose to investigate this for only one type of judgment involved in policy-making and political decisions: that of predictions, in his case of the economic development of different states as measured by GDP.

Seen from the perspective of my analysis of planning arguments, this kind of judgment is only one of the various types of judgments involved in those arguments. And it may be useful to examine these types in some detail, because it may help to clarify what one might expect from expert advice.

The ‘standard planning argument’ in its basic form has the following structure: it offers support for a proposed plan X, or for opposition to X, by means of the claims
‘Yes, X should be adopted (or not adopted)
because
(It is true that) X will result in consequence Y
and
Y ought to be pursued (is desirable).’

in more formal notation:

D(X) (a ‘deontic’ i.e. ‘ought’ claim)
because
FI( X REL Y) (a factual-instrumental claim)
and
D(Y) (a deontic claim).

REL can be one of many different types of relations; causation being the most common, but ‘being instrumental to’, (hence the general label ‘factual-instrumental’ that admittedly does not cover the range of such relationships) association, similarity, analogy, class-inclusion, property-assignment, logical implication and other relationships can occur.
The argument pattern can be expanded by providing information about the conditions C under which the relationship FI(X REL Y) is thought to hold:

D(X)
because
FI(X REL Y | C)
and
D(Y)
and
F(C);

perhaps even

D(X)
because
FI(X REL Y | C1)
and
D(Y | C2)
and
F(C1)
and
F(C2)

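The expanded pattern above lends itself to a simple data representation. The following is a minimal sketch (the class and field names are my own, introduced purely for illustration) of how the premise types D, FI, and F, with their attached conditions, could be captured and rendered back into the notation used here:

```python
from dataclasses import dataclass, field

@dataclass
class Claim:
    """A single premise: kind is 'D' (deontic), 'FI' (factual-instrumental),
    or 'F' (factual); conditions holds any attached C-conditions."""
    kind: str
    content: str
    conditions: list[str] = field(default_factory=list)

    def __str__(self):
        cond = f" | {', '.join(self.conditions)}" if self.conditions else ""
        return f"{self.kind}({self.content}{cond})"

@dataclass
class PlanningArgument:
    """The 'standard planning argument': a deontic conclusion D(X)
    supported by FI-, D-, and F-premises."""
    conclusion: Claim
    premises: list[Claim]

    def render(self) -> str:
        lines = [str(self.conclusion), "because"]
        lines.append("\nand\n".join(str(p) for p in self.premises))
        return "\n".join(lines)

# The pattern D(X) because FI(X REL Y | C1) and D(Y | C2) and F(C1) and F(C2):
arg = PlanningArgument(
    conclusion=Claim("D", "X"),
    premises=[
        Claim("FI", "X REL Y", ["C1"]),
        Claim("D", "Y", ["C2"]),
        Claim("F", "C1"),
        Claim("F", "C2"),
    ],
)
print(arg.render())
```

Such a representation would let a discourse platform attach expert judgments, evidence, or challenges to each premise individually, which is the point pursued below.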
This structure can now serve to examine the different types of expertise that will support such arguments: an expert may be knowledgeable about ALL of the above claims, or just one, or a selective combination of claims.
For example:
An expert may just know all about the plan X and its details, workings, and provisions. More likely, the expertise is expected to pertain to the REL claim between X and Y: based, for example, on knowledge about the laws (natural or human) behind the cause-effect link. ‘Complete’ expertise for this claim type would have to cover all possible relationships between X and Y, but also, for the discussion as a whole, all possible relationships between X and all possible consequences Y. Even more exacting: the ‘wrong question’ objection to such an argument may point out that while Y may be a desirable objective, there are other means, other potential X-plans, that might be more effective in achieving Y. This quickly becomes quite demanding expertise, especially if the effects are described in terms of their impact on various affected parties: people, and other creatures or conditions, the set of which is in reality quite difficult to ascertain.

Or the expert may be merely knowledgeable about the conditions C, or C1 and C2, under which the relationship REL is expected to hold: the facts and data about these conditions. Such knowledge is based on empirical evidence (‘data’) for current conditions (actually, most of these rest on past data); and predictions about the conditions that would have to hold for the plan’s future consequences will have to rely on other REL-relationships. Interestingly, these are the kinds of judgments examined by Tetlock, and his results seem to indicate (again, according to the reviews) that expert judgments about these are no better than mere statistical extrapolations of past and current data. The discussion of such conditions should distinguish between C1 and C2 conditions: whether a factual condition for a natural causal relationship exists is a different issue than the question of the conditions under which parties affected by a plan will consider its consequences desirable or undesirable.

The common criticism leveled against experts is of course that of making the jump from expertise in any or all of these ‘factual’ questions to the expert’s right to make claims and recommendations pertaining to deontic claims: ought-claims. This includes not only the consequence Y but also the very issue under discussion: Plan X.

Nevertheless, such expertise is claimed. What would it be based on? There are several possibilities.
There is the possibility, or tendency, of turning this question too into one of fact, by reference to:
– pre-existing laws and constitutions, and perhaps logical implications of the claim Y from laws or constitutional principles;
– election or polling results that specifically express general (or majority) acceptance of, or preference for, the goal Y in question; thereby also positing the general acceptance of the majority decision rule, of course.

The legitimacy of treating deontic questions as questions of fact has been the subject of much discussion in the literature. There is no question that some experts’ attempts to posit the desirability of plan consequences on behalf of those affected, or ‘in the general interest’ or for the ‘common good’, run into considerable difficulties of legitimacy and justification; and it should be obvious that expertise in the various factual issues listed above does not confer expertise in judging deontic claims on behalf of others. (The U.S. Declaration of Independence recognizes this implicitly in the provision that government actions derive their legitimacy (‘just powers’) from the consent of the governed.)

Whenever deontic claims have to be justified by further argument, these arguments will inevitably make reference to further deontic claims, on the assumption that those will be accepted or left unchallenged by questioners. Such claims include the validity of previous decisions (precedents), and general principles of a moral or ethical nature (often backed by religious doctrine), for which experts may claim expertise based on study of and familiarity with the respective texts. There is a common tendency to argue for a plan on the grounds that it pursues noble and valid ‘principles’ or ‘ideals’, even when there are valid questions about whether the proposed plan will actually achieve this (or conform to the principles). Such arguments are proposed with great conviction by ‘experts’ on the deontics in question, and often carry considerable weight in the discourse even when their factual-instrumental and factual claims stand on wobbly legs.

An interesting kind of claim is that of the ‘vision’ of the future situation (sometimes called a ‘scenario’) thought to result from the plan. The appeal is to the desirability, beauty, or preferability of that situation, or to the ‘image’ (‘self-image’) of the people inhabiting that future and having contributed to it.

It should be obvious that few if any experts can convincingly demonstrate expertise in all these areas. They may be ‘experts’ with respect to one question type or a combination of them. A thorough demonstration of expertise would arguably be preferable if based less on the indicators of expertise listed above than on actual arguments about the question at hand, each argument being adequately (that is, to the satisfaction of affected parties) supported by evidence and further arguments for each premise. This, of course, would not serve the purpose of providing ‘shortcuts’ to judgment very well, in that it would require marshaling all the subject-area knowledge pertinent to each argument, and require the other participants to make judgments about those in turn (which they may not be willing, able, or qualified to do).

Can we draw any conclusions from these considerations? I would consider the following: It would be useful to clearly distinguish, if not the experts themselves, then the types of expert judgment they contribute. It would then be possible to devise surveys and experiments measuring the reliability or validity of experts’ judgments for each type. For example, the questions Tetlock used to test experts’ reliability in making predictions should be modified to include specification of the conditions C under which the predicted results can be expected to materialize, and some confidence judgment (probability) about whether they will occur (persist from the present, or emerge). For other types of claims, different kinds of tests would be necessary; and such tests should be developed.
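One way such a modified test might be scored is sketched below. The record fields and the use of a Brier score are my assumptions for illustration, not Tetlock’s actual protocol: each prediction is recorded together with its stated probability and whether its stated conditions C actually held, and only the predictions whose conditions materialized are scored (lower scores are better):

```python
def brier_score(forecasts):
    """Mean squared error between stated probability and the 0/1 outcome,
    computed only over cases whose stated conditions C actually held."""
    applicable = [f for f in forecasts if f["conditions_held"]]
    if not applicable:
        return None  # no prediction was 'activated'; nothing to score
    return sum((f["probability"] - f["occurred"]) ** 2
               for f in applicable) / len(applicable)

# Hypothetical records: each prediction states a confidence and whether
# its conditions C turned out to hold.
records = [
    {"probability": 0.8, "occurred": 1, "conditions_held": True},
    {"probability": 0.3, "occurred": 0, "conditions_held": True},
    {"probability": 0.9, "occurred": 0, "conditions_held": False},  # C failed: excluded
]
print(round(brier_score(records), 3))  # 0.065
```

Separating the conditions-held question from the prediction itself would let such a test distinguish an expert who misjudged the conditions from one who misjudged the causal relationship.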

It would be useful to structure public discourse in such a way that the different claims constituting the arguments pro and con proposed plans are clearly visible to all, and the expert judgment pertinent to each type of claim is clearly and visibly associated with the respective claims. The discourse should provide the opportunity to request further backing for expert judgments. And finally, for decisions of great importance, there should be provisions for a process of evaluating the arguments (pro and con) in detail, considering each of the premise types separately, before taking a decision. (Such provisions were discussed, e.g., in my book ‘The Fog Island Argument’ and the Informal Logic article ‘The Structure and Evaluation of Planning Arguments’.)
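A minimal sketch of such a premise-by-premise evaluation follows. The multiplicative aggregation rule and the plausibility scale used here are illustrative assumptions for this sketch, not the procedure prescribed in the cited article: each participant assigns a plausibility judgment to each premise, and an overall argument weight is derived from those judgments.

```python
def argument_weight(premise_plausibilities, conclusion_importance=1.0):
    """Aggregate one participant's plausibility judgments on an argument's
    premises into a single argument weight. Multiplying the judgments means
    a single implausible premise weakens the whole argument; this particular
    aggregation rule is illustrative, not prescribed by the source."""
    w = conclusion_importance
    for p in premise_plausibilities:
        w *= p
    return w

# One participant's judgments (0 = wholly implausible, 1 = wholly plausible)
# on the premises of: D(X) because FI(X REL Y | C) and D(Y) and F(C)
judgments = {"FI(X REL Y | C)": 0.9, "D(Y)": 0.8, "F(C)": 0.5}
print(round(argument_weight(judgments.values()), 3))  # 0.36
```

The point of the sketch is structural rather than numerical: because each premise is judged separately, a discourse platform could show exactly which premise (the causal claim, the goal, or the conditions) is dragging an argument’s weight down, and direct requests for expert backing there.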