The various crises facing humanity will require significant changes in current practices, habits, and behaviors. Such changes cannot be imposed by governments or other authorities without risking resentment, resistance, and possibly violent confrontation, adding to the dangers. The decisions to be taken must arise from a participatory discourse accessible to all parties potentially affected by a plan or decision, in which all contributions, questions, suggestions, and arguments are heard, and in which the merit of such contributions has a visible impact on the decisions taken. Current governance practice does not provide this. The missing elements are, first, a platform or framework for such a discourse, and second, a way of measuring the merit of contributions, the merit of arguments. Without such a measure, decisions can all too easily ignore or even go against the result of discussion; the perception that this is the case even in current ‘democratic’ regimes explains voter ‘apathy’ — the declining participation in elections: the sense that one’s vote does not really make a difference in the decisions made by the people elected.
There are various commendable efforts and programs on the market that aim at improving planning, policy-making, and political discourse. A common concern is ‘argument mapping’ or ‘debate mapping’ — the effort to provide a convenient overview of a discussion through graphic representations of the relationships between its elements: issues, claims, proposals, arguments. The tools currently on the market do not yet meet the requirements for systematic and transparent evaluation. To encourage their further development, it may be helpful to summarize these requirements; the following is a first attempt to do so.
The arguments we use in such planning discussions have not received the attention from logic (even informal logic) or rhetoric that one would expect given their ubiquity: humanity quarrels about ‘what we ought to do’ as much if not more than about the ‘facts’ of the world. The arguments used in such discussions are of a type I have called ‘design arguments’ or ‘planning arguments’. Even in informal logic textbooks, where they are discussed, for example, as ‘proposal arguments’, their structure is not analyzed well enough to permit systematic evaluation. An approach for such evaluation of planning arguments has been presented, for example, in the article ‘The Structure and Evaluation of Planning Arguments’ in Informal Logic, December 2010. Elaborating on that discussion, the following is a brief exploration of how planning arguments should be represented — in argument maps, for example — so as to facilitate evaluation.
Recapping: The typical planning argument can be described as follows:
The proposed plan or decision — denoted here as ‘x’ —
is supported (or attacked) by the argument:
‘x ought to be adopted (implemented)’ (the ‘conclusion’)
because
x is related to effect y (the ‘factual-instrumental’ premise)
and
y ought to be pursued (the ‘deontic’ premise).
A more elaborate version might include some qualifications, say, of conditions c under which the relationship between x and y holds, and an assertion that those conditions are indeed (or are not) present, now condensed in a form that uses the symbol ‘F’ for a factual premise and ‘D’ for the deontic (ought) premise:
F(x REL y | c)
and D(y)
and F(c)
therefore: D(x)
The relation REL is a common label for any of the usual links between x and y: a ‘categorical’ link or claim (e.g. ‘x IS y’), a causal claim (‘x CAUSES y’), or a ‘resemblance’ claim (‘x is LIKE y’); depending on the case at hand, there may be variations or other connections invoked.
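This structure can be made concrete as a small data type. The following is a minimal sketch (the class and field names are illustrative, not from the source) showing how a planning argument’s conclusion and its three premises can be represented explicitly:

```python
from dataclasses import dataclass

@dataclass
class PlanningArgument:
    """Illustrative sketch of the planning-argument structure."""
    plan: str          # x: the proposed plan or decision
    effect: str        # y: the effect the plan is claimed to produce
    relation: str      # REL: e.g. 'CAUSES', 'IS', 'is LIKE'
    conditions: str    # c: conditions under which x REL y holds

    def premises(self) -> list[str]:
        # All three premises are stated explicitly, never left implicit.
        return [
            f"F({self.plan} {self.relation} {self.effect} | {self.conditions})",
            f"D({self.effect})",
            f"F({self.conditions})",
        ]

    def conclusion(self) -> str:
        return f"D({self.plan})"

arg = PlanningArgument("x", "y", "CAUSES", "c")
print(arg.premises())    # ['F(x CAUSES y | c)', 'D(y)', 'F(c)']
print(arg.conclusion())  # D(x)
```

Keeping each premise as a distinct, addressable element is what later allows challenges and plausibility judgments to be attached to the right claim.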
In textbook discussions of ‘proposal arguments’, this structure is usually not presented completely. Thus, an argument may be rendered as ‘x should be adopted because it causes y’, or ‘x ought to be adopted because its effect y is desirable’. In both cases, only one premise is explicitly stated. The practice of omitting premises that ‘can be taken for granted’ (resulting in an ‘enthymeme’ — an incomplete argument) is common, as Aristotle already made clear. But such an argument can be opposed on very different grounds: an opponent of x may not be convinced that x will indeed result in y. Another opponent may agree that x does cause y but not consider y desirable. A third participant may feel that yes, y might be a good thing, and even agree that x may be helpful in getting y, but only if certain conditions are present; since they are not, this participant holds that implementing x is not warranted. Yet another observer may simply feel that x is not the best way to get y: a different plan should be considered. These objections are aimed at different premises, some of which are not explicitly stated.
This means that if the argument is to be evaluated in any meaningful way, the elements at which these opinions are directed must all be stated explicitly, visibly. This is the first of several ‘rules’ needed to ensure meaningful evaluation:
The Premise Completeness Rule:
All premises of a planning argument
— the factual-instrumental premise, including qualifying conditions as applicable;
— the deontic premise;
— the factual premise regarding qualifying conditions
must be stated explicitly.
It is necessary to clarify that some claims — often part of the argument-pattern representations in popular textbooks — should NOT be included in the display of a single planning argument, because they really belong to arguments about ‘successor issues’: issues arising from challenges to the main argument’s premises. Even the widely accepted representation of arguments by Toulmin (The Uses of Argument, 1958) makes this mistake: his argument diagram
D (Datum) ————— so, Q (Qualifier) ————> C (Conclusion)
                       |
                 since W (Warrant)
                       |
            on account of B (Backing)
though not a planning argument, is an example of selective inclusion of premises that really belong to successor-issue arguments. Here, the warrant W is the premise making the connection between D and C; the backing B is the arguer’s preventive move in anticipation of a challenge to that premise. But any premise can usually be challenged on several kinds of grounds, not only one. So either the backing should properly include all those grounds (which would of course make the argument unwieldy and complicated), or the inclusion of one such ground to bolster the warrant is a selective complication of the main argument with one partial argument for the successor issue: is the warrant W true (or plausible — the preferred term for argument evaluation)? For that matter, isn’t it possible to also challenge the datum D? Could the argument not then contain another claim supporting the veracity or validity of the data claim? The upshot is that for a useful representation of arguments in a map, or in a tool for evaluation, the argument itself should be reduced to its basic structure. For the planning argument, a resulting ‘map’ would look like this:
Issue / argument map, generic
The Overall Argument Completeness Rule
The generic map above shows only three arguments, which may be all that have actually been entered in a discussion. In argumentation textbooks, the emphasis is usually on the analysis of individual arguments — just as in formal logic, or even in scientific method, the truth or falsity of a claim is taken to be adequately established by a single valid argument with true premises. It is curious that the ‘careful weighing of pros and cons’ so familiar from official speeches is not reflected in the academic analysis of the arguments that constitute such pros and cons, specifically in the examination of how such weighing might actually be done. The practice of argumentation in the political arena looks even less reassuring: political advertising tends to focus on only a few ‘key’ issues and arguments, and on the relentless repetition of those points in TV and radio spots.
A modest amount of reflection shows that for a thorough deliberative evaluation of the merit of pro and con arguments to reach a meaningful decision, all pro and con arguments must be included in the evaluation. That is, all potential effects of a proposed plan should be examined and evaluated. The rationale for greater citizen participation in public planning and policy-making is in part the fact that knowledge of all such effects is distributed among the citizenry: the people who are affected have that knowledge, so they must be called upon to bring it into the discussion. Reliance on experts (who are usually not affected, or affected very differently, by government plans) cannot guarantee that all such pertinent knowledge is brought to bear on the decision. The only area where a thorough examination of all aspects is attempted is the practice of ‘benefit/cost analysis’ applied to large government or business plans. But this technique is invariably carried out by experts; public participation is mostly precluded by the specialized terminology and technique.
The implication is that the discourse about public plans must be carefully orchestrated to ensure that all ‘pros and cons’ are actually raised and identified so that they can be included in the evaluation. On the one hand, people must be encouraged to contribute that information; on the other, the ‘overview’ representation of the set of aspects should not be obscured by repetition and rhetorical embroidery. Both requirements are difficult to satisfy. Some participants may not wish to reveal advantages a plan would bestow upon them that others might consider unfair, or to identify disadvantages to other parties (that these are not aware of) if doing so would require remedies reducing their own benefits. This has led me to suspect that the discourse must be considered systemically incomplete (and therefore that evaluation results should not be used directly as decision criteria). Nevertheless, the aim must be for all pros and cons to be brought out for consideration.
For the map representation of a discussion, this raises the question whether maps should ‘suggest’ issues that might be important to examine — even if they have been raised not by actual human participants but by some enhanced search engine, for example. Maps might show such ‘potential issues’ in shades of grey, as distinct from highlighted issues that have actually been raised. The systematic generation of issues, and even the construction of potential arguments by artificial intelligence programs drawing on information stored in databases, are both within reach of technological feasibility and should be discussed carefully. That is a topic for a different investigation, however.
Besides other criticisms of the methodology — for example, the difficulty of assigning monetary costs or benefits to ‘intangible’ aspects — a key problem inherent in cost-benefit analysis is that the effects of a plan must be declared (by the experts) to be costs or benefits as perceived by some entity (e.g. the government funding the analysis) — an entity that is just one party, one side in the controversy. This is the subject of the next point:
The Pro / Con Identification Rule
In cost-benefit studies, as well as in most if not all argument mapping programs, aspects and arguments are identified as ‘pro’ or ‘con’ (‘benefits’ and ‘costs’) — a practice that on the surface seems crucial for anyone trying to carefully review all the pros and cons in order to reach a deliberated decision. And in discussions, arguments are certainly entered by participants as supporting or opposing a proposed plan. So it seems eminently plausible that the maps should reflect this.
However, this practice hides the fact that the effects of plans may not be beneficial for all the people affected; indeed, one person’s ‘benefit’ (and thus ‘pro’ argument) may be another person’s ‘cost’ (and thus a ‘con’ argument). In addition, once the evaluation process begins, people will assign different weights and expressions of agreement or disagreement to different premises. These can have the effect of turning an argument intended as a ‘pro’ argument — and even initially accepted as such by the evaluator — into a ‘con’ argument for that person: I may look at an argument meant to support plan x by pointing out that it will cause effect y given conditions c, and find that while I indeed believe that x will produce y, upon reflection y does not seem such a good idea. Or I may believe both that x will cause y under conditions c and that y is a worthy goal, but find that conditions c are not present, which makes the effort to implement x a futile one. But seeing the argument identified in a map as a ‘pro’ argument may make it look like an established point, and that I have made a mistake: the map is ‘taking sides’ in the evaluation, as it were — the side of the agency funding the analysis, or simply the side of the participant entering that particular argument.
For that reason, it is better to refrain from accepting the intended ‘pro’ or ‘con’ label of arguments in the map. Whether an argument is a pro or a con reason for a specific person is the result of that person’s assessment, not of the proponent’s intention. Therefore, in the list or collection of arguments, in evaluation forms, and in argument maps, the labeling of arguments as supporting or opposing should be avoided. (This is a main reason for my rejection of most ‘debate-mapping’ and ‘argument-mapping’ programs and techniques on the market today.)
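The point can be made concrete with a hedged sketch. Assume (as an illustrative convention, not the source’s definitive scale or rule) that each evaluator judges every premise on a plausibility scale from -1 (wholly implausible) to +1 (fully plausible), and that an argument’s overall plausibility for that evaluator is the product of the premise judgments. Then whether the argument counts as ‘pro’ or ‘con’ falls out of the evaluator’s own assessment, not the proponent’s label:

```python
def argument_plausibility(factual_rel: float, deontic: float, conditions: float) -> float:
    """One simple (assumed) combination rule: product of premise plausibilities,
    each on a -1..+1 scale. Positive result reads as 'pro', negative as 'con'."""
    return factual_rel * deontic * conditions

# The proponent intended this as a 'pro' argument for plan x:
proponent = argument_plausibility(0.9, 0.8, 0.7)

# An evaluator who agrees x causes y, but thinks y is undesirable (deontic < 0),
# turns the very same argument into a 'con' reason -- for that evaluator:
skeptic = argument_plausibility(0.9, -0.6, 0.7)

print(proponent > 0, skeptic < 0)  # True True
```

The same structure shows the futility of a fixed label: the map entry is identical in both cases; only the evaluators’ premise judgments differ.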
The Rule of Rejecting some Arguments
(e.g. characterization, ad hominem, authority arguments, ‘meta-arguments’)
The previous ‘completeness’ rule may be misunderstood as advocating the admission of all kinds of arguments into maps and into the evaluation process. There are some important exceptions: for instance, arguments or premises that merely characterize a plan or claim but don’t offer a reason for that characterization. The remark “This is a crazy idea” is indeed a forceful statement of opposition to a proposal. But it is not really an argument — and therefore should be entered into neither formal evaluation forms nor argument maps. The same is true for positive expressions of support (‘like’, or “wow, what a beautiful, creative proposal”). They have the same status as ad hominem arguments (‘the author of the plan is a crook’) or arguments from authority (‘the principle goes back to Aristotle!’) — they suggest that the number of supporters, the character of proponents, or the fame of a philosopher who endorsed a concept are adequate reasons to accept a claim. Once stated fully as such, the fallacy usually becomes obvious. Now, sure, we agree that denigrating the messenger because of his flawed character is not by itself a good indication of the quality of the message — but isn’t the citing of authorities a common practice, even a condition of respectability, in scientific work? How can it be wrong or inadmissible?
To the extent that such expressions do have a legitimate place in the discourse and evaluation process, they are recommendations about how we should evaluate the plausibility of individual claims of an argument; they are not arguments about the plan x itself. We accept an argument from a scientific authority because we assume that such a famous scientist would have very good reasons (evidence, data, valid calculations, measurements) to back up the claim. Even so, such arguments often deteriorate into silly disputes not about the evidence for a claim but about the reliability of the authority’s judgment, hurling stories about the many other silly, untrue things that person also believed against the authority’s otherwise unchallenged record — all having nothing to do with the merit of the claim itself. So the venerable academic practice of citing sources belongs in the body of arguments and evidence of successor issues, not in the main argument about a plan, nor in the maps showing the relationships between issues and claims:
The first-level arguments about a plan should not contain
– arguments of characterization;
– ad hominem arguments (positive or negative);
– arguments from authority.
The same reservations hold for ‘meta’-arguments that make claims about the set of arguments in the discussion, or even about arguments in principle: “There is no reason to support this proposal”; “All the arguments of the opponent are fallacious”; “We haven’t heard any quantitative evidence questioning the validity of the proposal…”; and the like. This is not to say that such observations have no place in discussions. They can serve an important purpose, such as reminding participants to provide substantial evidence, data, and support for their arguments. But these meta-arguments talk about the state of the discourse, not about the proposed plan — and therefore should likewise be omitted from representations of the discussion, argument maps, or evaluation tools for the plan itself. Perhaps there should be a separate ‘commentator’ rubric for such observations about the state and quality of the discussion.
The Rule of Rewarding Participation
The last observation above raises another important issue: that of the degree and sincerity of participation in the discussion. Just as with the phenomenon of ‘voter apathy’ held responsible for low voter turnout in elections, experience with efforts to engage online discussion participants has been disappointing when it comes to ratcheting up their contributions from merely exchanging comments to the more demanding task of collaboratively writing comprehensive summaries or reports on the results of their discourse. Even the extra effort of switching to a different platform, one without the normal length limits of online discussion posts and permitting the inclusion of visual material (maps, pictures), has been ‘too much’ for discussion participants normally quite eager to exchange arguments and share material researched on the web.
It is misplaced to accuse such people of ‘apathy’, or of being motivated merely by the excitement of the online discussion (the nature of this motivation may not yet be well understood). The reason for voter apathy, and for this reluctance of discussion participants, might more properly be seen in the lack of meaningful rewards for such engagement. Voters who perceive — with or without justification — that their votes do not have a significant impact on government decisions will be less eager to vote; discussion participants who don’t see what difference a summary of their contributions would make in the larger scheme of things will not be eager to go beyond venting their frustrations and exchanging opinions. Most online discussions ‘die down’ after some time without having reached any meaningful resolution of the subject debated.
Online social networks have tried to respond to this phenomenon with features such as counts of ‘friends’ or ‘network connections’, or simple evaluation devices in the form of ‘like’ and ‘dislike’ (thumbs up or down) buttons. These efforts turn into quite meaningless numbers competitions, which suggests nothing more than how meaningless they are (how many ‘friends’ do we have on Facebook whom we wouldn’t even recognize if we met them in the street?) — but they are encouraged by the networks because they help the advertising side of the enterprise.
It turns out that the suggested tool of argument evaluation for the discourse framework might offer a better approach to the problem of rewarding participants for their contributions. Going beyond a mere count of posts in a discussion, the plausibility and weight of a participant’s planning arguments (an argument’s weight being its plausibility modified by the weight of relative importance of its deontic premise), as evaluated by the entire group of participants in the evaluation exercise, can be used directly as a measure of the value of that participant’s contributions. (The scoring is developed in more detail in a paper on a proposed argumentative planning and argument evaluation game; a draft is available on request.)
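The scoring idea above can be sketched in a few lines. The scales and combination rule here are illustrative assumptions (premise plausibilities combined by product on a -1..+1 scale; deontic importance weights in 0..1); the source’s cited paper develops the actual scoring in detail:

```python
from statistics import mean

def argument_weight(premise_plausibilities: list[float], deontic_importance: float) -> float:
    """Argument plausibility (here assumed: product of premise plausibilities,
    each -1..+1) modified by the relative importance (0..1) of its deontic premise."""
    pl = 1.0
    for p in premise_plausibilities:
        pl *= p
    return pl * deontic_importance

# Three evaluators judge the same contributed argument independently:
group_scores = [
    argument_weight([0.8, 0.9, 0.7], 0.5),
    argument_weight([0.6, 0.4, 0.9], 0.3),
    argument_weight([0.9, -0.2, 0.8], 0.6),  # this evaluator rejects the deontic premise
]

# The group-deliberated value of this contribution, usable as a reward measure:
contribution_score = mean(group_scores)
print(round(contribution_score, 3))  # 0.077
```

The key design point is that the score comes from the whole group’s evaluation of the argument’s merit, not from a raw count of posts or ‘likes’.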
This feature opens the possibility of building up reputation records for different types of contributors: for example, the participant who contributed (through modification) to the development of the plan eventually adopted or recommended; the ‘creative’ contributor supplying innovative solution ideas; the solid ‘researcher’ finding information pertinent to the discussion on the net; the ‘influential’ participant whose arguments lead other participants to change their minds; the ‘thorough, in-depth’ deliberator who delves more deeply into the evidence and support for argument premises in successor issues; the person with the most reliable offhand judgment, whose initial assessment turns out to be closest to the final result deliberated by the entire group; and so on.
The possibility of building up such cooperative contribution records — which might be included in a person’s resume for job applications, or in a profile for public office positions — could provide the needed reward mechanism for constructive participation in discussions of significant public issues.
The Rule of Improving Proposed Plans rather than forcing a decision
One aspect of the purpose of public discourse deserves special consideration. There are various reasons for the widespread perception of argumentation as an adversarial, divisive activity. For example: the spectacle of many ‘debates’ between candidates for public office, where the aim of each debater is to make the opponent look less fit for the job by refuting the opponent’s arguments, or by goading the opponent into making foolish assertions (that can then be used in ‘attack ads’). Even more so: the decision mechanism applied both in elections and in the ‘decision-making bodies’ of government and private enterprise — majority voting. It will produce a decision, which may be convenient or even critical in some cases, but at the expense of ignoring the arguments and concerns of a significant minority of participants. The practice of enforcing ‘party discipline’ in parliamentary voting obviates discussion entirely: if the majority party has the votes, no debate is necessary. The victory celebrations of the winners of such votes overshadow the fact that the quality of the plans or policies voted upon has disappeared from the process entirely.
The introduction of merit-of-discourse measures into such discussions could help reverse this problem: the contribution rewards to individual participants could — and should — be structured to favor the development of ‘better’ proposals. By this is meant plans modified step by step from the initial proposal, by amendments or changes in response to concerns expressed by participants, with the aim of achieving a greater degree of approval from a larger group of participants, and at least acceptance as ‘not making things worse than before’ by the adversely affected minorities. The goal of ‘complete consensus’ is an ideal that may be too difficult to achieve in many cases, and it may tempt lone dissenting holdouts to adopt a position of de facto ‘dictating’ no action. But a discourse participation reward structured to encourage the improvement of plan proposals, rather than mere majority vote decisions, may help improve not only the discourse about public issues but the resulting decisions as well.