‘CONNECTING THE DOTS’ OF SOME GOVERNANCE PROBLEMS
Posted: May 9, 2018 Filed under: control of power, Design discourse, Discourse contribution rewards, Evaluation of plans, Governance, Public policy discourse, Systems Thinking, Uncategorized | Tags: democracy, discourse, evaluation of planning arguments, Governance, planning discourse

There is much discussion about flaws of ‘democratic’ governance systems, which supposedly lead to increasingly threatening crises. Calls for ‘fixing’ these challenges tend to focus on single problems and urge single ‘solutions’. Even recommendations to apply ‘systems thinking’ tools seem fixated on the ‘problem understanding’ phase of the process, while promotions of AI (artificial / augmented intelligence) suggest that solutions will be found through improved collection and analysis of data and of information in existing ‘knowledge bases’. Little effort seems devoted to actually ‘connecting the dots’: linking the different aspects and problems, and making key improvements that serve multiple purposes. The following attempt, proposed for discussion, is an example of such an effort to develop comprehensive ‘connecting the dots’ remedies, one that itself arguably would help realize the ambitious dream of democracy. A selection (not a comprehensive account) of some often invoked problems, briefly:
“Voter apathy”
The problem of diminishing citizen participation in political discourse, decisions, and elections, leading to unequal representation of all citizens’ interests;
“Getting all needed information”
The problem of eliciting and assembling all pertinent ‘documented’ information (‘data’) as well as critical ‘distributed’ information, especially for ‘wicked problems’; but at the same time:
“Avoiding information overload”
The phenomenon of ‘too much information’, much of which may be repetitive, overly rhetorical, judgmental, misleading (untruthful) or irrelevant;
“Obstacles to citizens’ ability to voice concerns”
The constraints to citizens’ awareness of problems, plans, overview of discourse, ability to voice concerns;
“Understanding the problem”
Social problems are increasingly complex, interconnected, and ill-structured, explained in different, often contradictory ways, without ‘true’ (‘correct’) or ‘false’ answers, and thus hard to understand; this leads to solution proposals that may have unexpected consequences which can even make the situation worse;
“Developing better solutions”
The problem of effectively applying all available tools to the development of better (innovative) solutions;
“Meaningful discussion”
The problem of conducting meaningful (less ‘partisan’ and vitriolic, more cooperative, constructive) discussion of proposed plans and their pros and cons;
“Better evaluation of proposed plans”
The task of meaningful evaluation of proposed plans;
“Developing decisions based on the merit of discourse contributions”
Current decision methods do not guarantee ‘due consideration’ of all citizens’ concerns, but tend to ignore and override the contributions and concerns of as much as half of the population (the voting minority);
“The lack of meaningful measures of merit of discourse contributions”
Lack of convincing measures of the merit of discourse contributions: ideas, information, strength of evidence, weight of arguments and judgments;
“Appointing qualified people to positions of power”
Finding qualified people for positions of power to make decisions that cannot be determined by lengthy public discourse, especially those charged with ensuring:
“Adherence to decisions / laws / agreements”
The problem of ‘sanctions’ ensuring adherence to decisions reached or issued by governance agencies: ‘enforcement’ (requiring government ‘force’ greater than that of potential violators, leading to ‘force’ escalation);
“Control of power”
To prevent people in positions of power from falling victim to temptations of abusing their power, better controls of power must be developed.
Some connections and responses:
Details of possible remedies / responses to problems, using information technology, aiming at having specific provisions (‘contribution credits’) work together with new methodological tools (argument and quality evaluation) to serve multiple purposes:
“Voter apathy”
Participation and contribution incentives: for example, offering ‘credit points’ for contributions to the planning discourse, saved in participants’ ‘contribution credit accounts’ as mere contribution or participation markers (to be evaluated for merit later).
“Getting all needed information”
A public projects ‘bulletin board’ announcing proposed projects / plans, inviting interested and affected parties to contribute comments and information, drawing not only on knowledge bases of ‘documented’ information (supported by technology) but also on ‘distributed’, not yet documented information from parties affected by the problem and proposed plans.
“Avoiding information overload”
Credit points are given only for the ‘first’ entry of a given content, and only for entries relevant to the topic.
(This also encourages speedy contribution and assembly of information.)
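The ‘first entry only’ rule could be supported by software. A minimal sketch follows; the class and function names are illustrative assumptions, and the crude text normalization stands in for the semantic-similarity detection a real platform would need, since ‘the same content’ rarely means literally identical wording:

```python
# Sketch: credit only the first entry with a given normalized content,
# to discourage repetitive contributions ('information overload').
import hashlib

def normalize(text: str) -> str:
    """Crude normalization: lowercase and collapse whitespace.
    A real system would need semantic similarity, not exact matching."""
    return " ".join(text.lower().split())

class ContributionLog:
    def __init__(self):
        self.seen = set()      # hashes of already-credited content
        self.credits = {}      # participant -> contribution credit points

    def submit(self, participant: str, entry: str) -> bool:
        """Record an entry; award a credit point only if its content is new."""
        digest = hashlib.sha256(normalize(entry).encode()).hexdigest()
        if digest in self.seen:
            return False       # duplicate content: no credit point
        self.seen.add(digest)
        self.credits[participant] = self.credits.get(participant, 0) + 1
        return True
```

For example, a second participant resubmitting the same point in different capitalization would receive no credit under this rule.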
“Obstacles to citizens’ ability to voice concerns”
The public planning discourse platform accepts entries in all media; entries are displayed on public, easily accessible, non-partisan media, updated regularly (ideally in real time).
“Understanding the problem”
The platform encourages representation of the project’s problem, intent and ‘explanation’ from different perspectives. Systems models contribute visual representation of relationships between the various aspects, causes and consequences, agents, intents and variables, supported by translation not only between different languages but also from discipline ‘jargon’ to natural conversational language.
“Developing better solutions”
Techniques of creative problem analysis and solution development (carried out by ‘special techniques’ teams reporting results to the main platform), as well as information about precedents and scientific and technological knowledge, support the development of solutions for discussion.
“Meaningful discussion”
While all entries are stored for reference in the ‘Verbatim’ repository, the discussion process will be structured according to topics and issues, with contributions condensed to ‘essential content’, separating information claims from judgmental characterization (evaluation to be added separately, below) and rhetoric, for overview display (‘IBIS’ format, issue maps) and facilitating systematic assessment.
“Better evaluation of proposed plans”
Systematic evaluation procedures facilitate assessment of plan plausibility (argument evaluation) and quality (formal evaluation in which participants mutually explain the basis of their judgments), or combined plausibility-weighted quality assessment.
“Meaningful measures of merit”
The evaluation procedures produce ‘judgment based’ measures of plan proposal merit that guide individual and collective decision judgments. The assessment results are also used to add merit judgments (veracity, significance, plausibility, quality of proposal) to individuals’ first ‘contribution credit’ points in their ‘public credit accounts’.
“Decision based on merit”
For large public (at the extreme, global) planning projects, new decision modes and criteria are developed to replace traditional tools (e.g. majority voting).
“Qualified people to positions of power”
Not all public governance decisions need to or can wait for the result of lengthy discourse; thus, people will have to be appointed (or elected) to positions of power to make such decisions. The ‘public contribution credits’ of candidates are used as additional qualification indicators for such positions.
“Control of power”
Better controls of power can be developed using the results of the procedures proposed above: having decision makers ‘pay’ for the privilege of making power decisions, using their contribution credits as the currency for ‘investments’ in their decisions. Good decisions will ‘earn’ future credits based on public assessment of outcomes; poor decisions will reduce the credit accounts of officials, forcing their resignation if depleted. ‘Supporters’ of officials can transfer credits from their own accounts to an official’s account, supporting the official’s ability to make important decisions that require credits exceeding the official’s own account. They can also withdraw such contributions if the official’s performance has disappointed them.
This provision may help reduce the detrimental influence of money in governance, and corresponding corruption.
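The credit-payment mechanism can be illustrated in code. Everything here (the class names, and the payout rule under which a +1 public assessment returns double the stake, 0 returns just the stake, and -1 forfeits it) is an illustrative assumption, not a worked-out specification:

```python
class Official:
    """An office holder whose power decisions are 'paid for' with contribution credits."""
    def __init__(self, name: str, credits: float):
        self.name = name
        self.credits = credits

    def make_decision(self, stake: float) -> bool:
        """Pay 'stake' credits for the privilege of deciding; refuse if the account is too low."""
        if stake > self.credits:
            return False       # depleted account: no further power decisions
        self.credits -= stake
        return True

    def outcome_assessed(self, stake: float, public_score: float) -> None:
        """public_score in [-1, +1]: good outcomes earn credits back, poor ones forfeit the stake."""
        self.credits += stake * (1 + public_score)

def support(supporter: 'Official', official: 'Official', amount: float) -> bool:
    """A supporter transfers credits to an official's account (and could later withdraw them)."""
    if amount > supporter.credits:
        return False
    supporter.credits -= amount
    official.credits += amount
    return True
```

The design choice worth noting is that the sanction is automatic: an official with a depleted account simply cannot ‘pay’ for further decisions, so no enforcement step is needed.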
“Adherence to decisions / laws / agreements”
One of the duties of public governance is ‘enforcement’ of laws and decisions. The very word indicates the narrow view of the tools for this: force, coercion. Since government force must necessarily exceed that of any would-be violator to be effective, this contributes both to the temptation of corruption (the abuse of power because there is no greater power to prevent it) and to the escalation of enforcement means (weaponry) by enforcers and violators alike. For the problem of global conflicts, treaties, and agreements, this becomes a danger of the use of weapons of mass destruction if not defused. The possibility of using the ‘credit account’ provisions to develop ‘sanctions’ that do not have to be ‘enforced’ but are triggered automatically by the very attempt of violation might help with this important task.
Artificial Intelligence for the Planning Discourse?
Posted: March 26, 2018 Filed under: Argument patterns, Design discourse, Discourse contribution rewards, Evaluation of plans, Pattern Language, Public policy discourse, Rittel, Uncategorized | Tags: argument evaluation, Argumentation, Artificial Intelligence, evaluation of planning arguments, Planning arguments

The discussion about whether and to what extent Artificial Intelligence technology can meaningfully support the planning process, with contributions similar or equivalent to human thinking, is largely dominated by controversies about what constitutes thinking. An exploration of the reasoning patterns in the various phases of human planning discourse could produce examples for that discussion, leaving the definition of the label ‘thinking’ open for the time being.
One specific example (only one of several different and equally significant aspects of planning):
People propose plans for action, e.g. to solve problems, and then engage in discussion of the ‘pros and cons’ of those plans: arguments. A typical planning argument can be represented as follows:
“Plan A should be adopted for implementation, because
i) Plan A will produce consequences B, given certain conditions C, and
ii) Consequences B ought to be pursued (are desirable); and
iii) Conditions C are present (or will be, at implementation).”
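In a machine-readable form, this pattern could be encoded as a simple data structure. The sketch below shows one possible encoding; the class and field names are assumptions for illustration, not established notation:

```python
from dataclasses import dataclass, field

@dataclass
class PlanningArgument:
    """The typical planning argument: adopt plan A because it produces B given C,
    B ought to be pursued, and C is (or will be) present."""
    plan: str                 # A: the proposed plan (its adoption is the conclusion)
    consequence: str          # B: the consequence the plan is claimed to produce
    conditions: list = field(default_factory=list)   # C: conditions assumed present

    def premises(self) -> list:
        """The three premise claims, rendered as readable sentences."""
        cond = " and ".join(self.conditions) if self.conditions else "no special conditions"
        return [
            f"{self.plan} will produce {self.consequence}, given {cond}",  # premise i)
            f"{self.consequence} ought to be pursued",                     # premise ii)
            f"{cond} are present (or will be, at implementation)",         # premise iii)
        ]

arg = PlanningArgument("Plan A", "consequences B", ["conditions C"])
```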
Question 1: could such an argument be produced by automated technological means?
This question is usually followed up by question 2: Would or could the ‘machine’ doing this be able (or should it be allowed) to also make decisions to accept or reject the plan?
Can meaningful answers to these questions be found, currently or definitively?
Beginning with question 1: Formulating such an argument in their minds, humans draw on their memory — or on explanations and information provided during the discourse itself — for items of knowledge that could become premises of arguments:
‘Factual-instrumental’ knowledge of the form “FI(A –> X | C)”, for example “A will cause X, given conditions C”;
‘Deontic’ knowledge of the form “D(X)”, or “X ought to be (is desirable)”; and
‘Factual’ knowledge of the form “F(C)”, or “Conditions C are given”.
‘Argumentation-pattern knowledge’: recognition that the three knowledge items above can be inserted into an argument pattern of the form
D(A) <– (FI(A –> X | C) & D(X) & F(C)).
(There are of course many variations of such argument patterns, depending on assertion or negation of the premises, and different kinds of relations between A and X.)
It does not seem to be very difficult to develop a Knowledge Base (collection) of such knowledge items and a search-and-match program that would assemble ‘arguments’ of this pattern.
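A toy version of such a search-and-match program might look like this. The knowledge items and their triple representation are invented for illustration, and the hard part (extracting such items from natural-language sources) is glossed over entirely:

```python
# Sketch: assemble candidate planning arguments of the pattern
# D(A) <- (FI(A -> X | C) & D(X) & F(C)) from a small knowledge base.

# Factual-instrumental knowledge: (action A, consequence X, condition C) triples
FI = [("build levee", "less flooding", "soil is stable"),
      ("build levee", "higher costs", "budget is limited")]
D = {"less flooding"}        # deontic knowledge: outcomes judged desirable
F = {"soil is stable"}       # factual knowledge: conditions taken as given

def assemble_arguments():
    """Yield every argument all of whose premises are found in the knowledge base."""
    for action, consequence, condition in FI:
        if consequence in D and condition in F:
            yield (f"{action} ought to be adopted, because it will produce "
                   f"{consequence} given that {condition}; {consequence} ought "
                   f"to be pursued; and {condition}.")

for argument in assemble_arguments():
    print(argument)
```

Note that only one of the two FI items above yields an argument, because ‘higher costs’ appears in no deontic claim; a con argument would require a negated deontic premise, one of the pattern variations mentioned earlier.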
Any difficulties arguably would be related more to the task of recognizing and suitably extracting such items from the human recorded and documented sources of knowledge (‘translating’ them into a form recognizable to the program) than to the mechanics of the search-and-match process itself. Interpretation of meaning poses the harder problem: is an item expressed in different words equivalent to the terms appearing in the other potential premises of an argument?
Another slight quibble relates to the question whether and to what extent the consequence qualifies as one that ‘ought to be’ (or not) — but this can be dealt with by reformulating the argument as follows:
“If (FI(A –> X | C) & D(X) & F(C)) then D(A)”.
(It should be accompanied by the warning that this formulation that ‘looks’ like a valid logic argument pattern is in fact not really applicable to arguments containing deontic premises, and that a plan’s plausibility does not rest on one single argument but on the weight of all its pros and cons.)
But assuming that these difficulties can be adequately dealt with, the answer to question 1 seems obvious: yes, the machine would be able to construct such arguments. Whether that already qualifies as ‘thinking’ or ‘reasoning’ can be left open; the significant realization is equally obvious: such contributions could be potentially helpful to the discourse. For example, by contributing arguments human participants had not thought of, they could help to ensure, as much as possible, that the plan will not have ‘unexpected’ undesirable side-and-after-effects. (Avoiding these is one important part of H. Rittel’s very definition of design and planning.)
The same cannot as easily be said about question 2.
The answer to that question hinges on whether the human ‘thinking’ activities needed to make a decision to accept or reject the proposed plan can be matched by ‘the machine’. The reason is, of course, that not only must the plausibility of each argument be evaluated and judged (by assessing the plausibility of each premise), but the arguments must also be weighed against one another. (A method for doing this has been described, e.g., in ‘The Fog Island Argument’ and several papers.)
So a ‘search and match’ process, as the first part of such a judgment process, would have to look for those judgments in the database; the difficulty here has to do with where such judgments would come from.
The prevailing answers for factual-instrumental premises as well as for fact premises — premises i) and iii) — draw on ‘documented’ and commonly accepted truth, probability, or validity. Differences of opinion about claims drawn from ‘scientific’ and technical work, if any, are decided by a version of ‘majority voting’: ‘prevailing knowledge’ accepted by the community of scientists or domain experts, and ‘settled’ controversies derived from sufficiently ‘big data’ (“95% of climate scientists…”), can serve as the basis of such judgments. It is often overlooked that the premises of planning arguments, however securely based on ‘past’ measurements and observations, are inherently predictions. So any certainty about their past truth must at least be qualified with a somewhat lesser degree of confidence that they will be equally reliably true in the future: will the conditions under which the A –> X relationships are assumed to hold be equally likely to hold in the future? Including the conditions that may be — intentionally or inadvertently — changed as a result of future human activities pursuing different aims than those of the plan?
The question becomes even more controversial for the deontic (ought-) premises of the planning arguments. Where do the judgments come from by which their plausibility and importance can be determined? Humans can be asked to express their opinions — and prevalent social conventions consider the freedom to not only express such judgments but to have them given ‘due consideration’ in public decision-making (however roundabout and murky the actual mechanisms for realizing this may be) as a human right.
Equally commonly accepted is the principle that machines do not ‘have’ such rights. Thus, any judgment about deontic premises that might be used by a program for evaluating planning arguments would have to be based on information about human judgments that can be found in the data base the program is using. There are areas where this is possible and even plausible. Not only is it prudent to assign a decidedly negative plausibility to deontic claims whose realization contradicts natural laws established by science (and considered still valid…like ‘any being heavier than air can’t fly…’). But there also are human agreements — regulations and laws, and predominant moral codes — that summarily prohibit or mandate certain plans or parts of plans; supported by subsequent arguments to the effect that we all ought not break the law, regardless of our own opinions. This will effectively ‘settle’ some arguments.
And there are various approaches in design and planning that seem to aim at finding — or establishing — enough such mandates or prohibitions that, taken together, would make it possible to ‘mechanically’ determine at least whether a plan is ‘admissible’ or not — e.g. for buildings, whether its developer should get a building permit.
This pattern is supported in theory by branches of modal logic that seek to resolve deontic claims on the basis of ‘true/false’ judgments (that must have been made somewhere, by some authority) of ‘obligatory’, ‘prohibited’, ‘permissible’, etc. It can be seen to be extended by at least two different ‘movements’ that must be seen as sidestepping the judgment question.
One is the call for society as a whole to adopt (collectively agree upon) moral, ethical codes whose function is equivalent to ‘laws’ — from which the deontic judgment about plans could be derived by mechanically applying the appropriate reasoning steps — invoking ‘Common Good’ mandates supposedly accepted unanimously by everybody. The question whether and how this relates to the principle of granting the ‘right’ of freely holding and happily pursuing one’s own deontic opinions is usually not examined in this context.
Another example is the ‘movement’ of Alexander’s ‘Pattern Language’. Contrary to claims that it is a radically ‘new’ theory, it stands in a long and venerable tradition of many trades and disciplines to establish codes and collections of ‘best practice’ rules of ‘patterns’ — learned by apprentices in years of observing the masters, or compiled in large volumes of proper patterns. The basic idea is that of postulating ‘elements’ (patterns) of the realm of plans, and relationships between these, by means of which plans can be generated. The ‘validity’ or ‘quality’ of the generated plan is then guaranteed by the claim that each of the patterns (rules) are ‘valid’ (‘true’, or having that elusive ‘quality without a name’). This is supported by showing examples of environments judged (by intuition, i.e. needing no further justification) to be exhibiting ‘quality’, by applications of the patterns. The remaining ‘solution space’ left open by e.g. the different combinations of patterns, then serves as the basis for claims that the theory offers ‘participation’ by prospective users. However, it hardly needs pointing out that individual ‘different’ judgments — e.g. based on the appropriateness of a given pattern or relationship — are effectively eliminated by such approaches. (This assessment should not be seen as a wholesale criticism of the approach, whose unquestionable merit is to introduce quality considerations into the discourse about built environment that ‘common practice’ has neglected.)
The relevance of discussing these approaches for the two questions above now becomes clear: if a ‘machine’ (which could of course just be a human, an untiringly pedantic bureaucrat assiduously checking plans for adherence to rules or patterns) were able to draw upon a sufficiently comprehensive database of factual-instrumental knowledge and ‘patterns or rules’, it could conceivably generate solutions. And if the deontic judgments have been inherently attached to those rules, it could claim that no further evaluation (i.e. no inconvenient intrusion of differing individual judgments) would be necessary.
The development of ‘AI’ tools for automated support of planning discourse will have to make a choice. It could follow this vision of ‘common good’ and universally valid truth of solution elements, accepted by all members of society. Or it could accept the challenge of the view that it should either refrain from intruding on the task of making judgments, or go to the trouble of obtaining those judgments from human participants in the process before using them in the task of deriving decisions. Depending on which course is followed, I suspect the agenda and tasks of current and future research, development, and programming will be very different. This is, in my opinion, a controversial issue of prime significance.
Combining systems modeling maps with argumentative evaluation maps: a general template
Posted: March 9, 2015 Filed under: Uncategorized | Tags: argument evaluation, evaluation of planning arguments, Planning discourse map, systems modeling, Systems Thinking

Many tools and platforms have been suggested to help humanity overcome the various global problems and crises, each with claims of superior ability or adequacy for addressing the ‘wickedness’ of the problems.
Two of the main perspectives I have studied – the general group of models labeled ‘systems thinking’ and ‘systems modeling and simulation’, and the ‘argumentative model of planning’ proposed by H. Rittel (who incidentally saw his ideas as part of a ‘second generation’ systems approach) – have been shown to fall somewhat short of those claims: specifically, they have so far not demonstrated the ability to adequately accommodate each other’s key concerns. The typical systems model seems to assume that all disagreements regarding its model assumptions have been ‘settled’; it leaves no room for argument, discussion, or disagreement. Meanwhile, the key component of the argumentative model – the typical ‘pro’ or ‘con’ argument of the planning discourse, the ‘standard planning argument’ – does not connect more than two or three of the many elements of a more elaborate systems model of the respective situation, and thus fails to properly accommodate the complexity and multiple loops of such models.
It is of course possible that a different perspective and approach will emerge that can better resolve this discrepancy. However, it will have to acknowledge and then properly address the difficulty we can now only express with the vocabulary of the two perspectives. This essay explores the problem of showing how the elements of the two selected approaches can be related in maps that convey both the respective system’s complexity and the possible disagreements and assessment of the merit of arguments about system assumptions.
A first step is the following simplified diagram template that shows a ‘systems model’ in the center, with arguments both about how the proposal for intervention in the system (consisting of suggested actions upon specific system elements) should be evaluated, and about the degree of certainty – the suggested term is ‘plausibility’ – about assumptions regarding individual elements.
A key aspect of the integration effort is the insight that the ‘system’ will have to include all the features discussed in the discourse under the term ‘plan proposal’: its details of initial conditions; proposed actions (what to do, by whom, using what tools and resources, and the conditions for their availability); the ‘problem’ a solution aims at remedying, described (at least) by specifying its current ‘IS’ state, the desired ‘OUGHT’ state or planning outcome, and the means by which the transition from is-state to ought-state can be achieved; and the potential consequences of implementing the plan, including possible ‘unexpected’ side-and-after-effects. Conversely, the assessment of arguments (the “careful weighing of pros and cons”) will have to explicitly address the system model elements and their interactions – elements that should be (but mostly are not) specified in the argument as ‘conditions under which the plan or one of its features is assumed to effectively achieve the specific outcome or goal referenced by the argument’.
For the sake of simplicity, the diagram only shows two arguments or reasons for or against a proposed plan. In reality, there always will be at least two arguments (benefit and cost of a plan), but usually many more, based on assessment of the multiple outcomes of the plan and actions to implement it, as well as of conditions (feasibility, availability, cost and other resources) for its implementation. The desirability assessments of different parties will be different; the argument seen as ‘pro’ by one party can be a ‘con’ argument for another, depending on the assessment of the premises. Therefore, arguments are not shown as pro or con in the diagram.
The diagram uses abbreviated notations for conciseness and convenient overview; they are explained in the legend below, which presents some key (but by no means exhaustive) concepts of both perspectives.
* PLAN or P Plan or proposal for a plan or plan aspects
* R Argument or ‘reason’. Used both for an entire ‘pro’ or ‘con’ argument about the plan or an issue (the entire set of premises supporting the ‘conclusion’ claim, usually the plan proposal), and for the relationship claimed, in the factual-instrumental premise, to connect the plan with an effect, usually a goal or a negative consequence of plan implementation.
The ‘standard planning argument’ pattern prevailing in planning discourse has the general form:
D(PLAN) Plan P ought to be adopted (deontic ‘conclusion’)
because
FI(PLAN –> R –> O) | {C} P has relationship R with outcome O, given conditions {C} (factual-instrumental premise)
and
D(O) Outcome O ought to be pursued (deontic premise)
and
F{C} Conditions {C} are given (true)
The relationship R is most often a causal connection, but also stands for a wide variety of relationships that constitute the basis for pro or con arguments: part-whole, identity, similarity, association, analogy, catalyst, logical implication, being a necessary or sufficient condition for, etc. In an actual application, these relationships may be distinguished and identified as appropriate.
* O or G Outcome or goal to be pursued by the plan, but also used for other effects including negative consequences
* M — the relationship of P ‘being a means’ to achieve O
* C or {C} The set of conditions c under which the claimed relationship M between P and O is assumed to hold
* pl ‘Plausibility’ judgments about the plan, arguments, and argument premises, expressed as values on a scale from +1 (completely plausible) to -1 (completely implausible), with the midpoint zero understood as ‘so-so’ or ‘don’t know, can’t decide’; used in combination with the abbreviations for those items:
* plPLAN or plP plausibility judgment of the PLAN; this is some individual’s subjective judgment.
* plM plausibility of P being effective in achieving O;
* plO plausibility of an outcome O or goal;
* pl{C} plausibility (probability) of conditions {C} being present;
* plc plausibility of condition c being present;
* plR plausibility of argument or reason R;
* plPLANGROUP a group judgment of plan plausibility
* wO weight of relative importance of outcome O (0 ≤ wO ≤ 1; ∑wO = 1)
* WR Argument weight or weight of reason
Functions F between plausibility values:
* F1 Group plausibility aggregation function:
plPLANGROUP = F1 (plPLANq), aggregated over all n members q = 1, 2, … n of the group
* F2 Plan plausibility function:
plPLANq = F2 (WRi), over all m reasons Ri, i = 1, 2, … m, by person q
* F3 Argument weight function:
WRi = F3 (plRi, wOj)
* F4 Argument plausibility function:
plRi = F4 (pl(P –> Mi –> Oi | {Ci}), pl(Oi), pl{Ci})
The plausibility of argument Ri is a function of all its premise plausibility judgments.
* F5 Condition set plausibility function:
pl{C} = F5 (plck), over all conditions ck, k = 1, 2, … in the set: the plausibility of the set {C} is a function of the plausibility judgments of all c in the set.
* F6 Weight of relative importance of outcome Oi:
wOi = 1/n ∑ vOi (the n individual importance votes vOi, averaged),
subject to the conditions 0 ≤ wOi ≤ 1 and ∑wOi = 1.
* System S The system S is the network of all variables describing both the initial conditions c (the IS-state of the problem the plan is trying to remedy), the means M involved in implementing the plan, the desired ‘end’ conditions or goals G of the plan, and the relationships and loops between these.
The diagram does not yet show a number of additional variables that will play a role in the system: the causes of initial conditions (which will also affect the outcome or goal conditions); the variables describing the availability, effectiveness, costs, and acceptability of means M; and the potential consequences of both M and O of the proposed plan. Clearly, these conditions and their behavior over time (both during the period needed for implementation and over the assumed planning horizon or life expectancy of the solution) will or should be given due consideration in evaluating the proposed plan.
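The concrete forms of the functions F1 through F6 are largely left open above. As a numeric sketch only, the following assumes a weakest-link (minimum) rule for F4 and F5, a product for F3, a plain sum for F2, and arithmetic means for F1 and F6; these choices are assumptions for illustration, not the proposed method:

```python
# Illustrative plausibility arithmetic. Plausibilities lie in [-1, +1];
# outcome weights lie in [0, 1] and sum to 1.

def F5(cond_pls):
    """Condition-set plausibility: here, the weakest condition dominates."""
    return min(cond_pls)

def F4(pl_fi, pl_o, pl_c):
    """Argument plausibility from its three premise plausibilities (weakest link)."""
    return min(pl_fi, pl_o, pl_c)

def F3(pl_r, w_o):
    """Argument weight: plausibility of reason R times importance of outcome O."""
    return pl_r * w_o

def F6(votes):
    """Outcome weights: average the raw importance votes per outcome, then normalize."""
    means = [sum(v) / len(v) for v in votes]   # votes: one list of votes per outcome
    total = sum(means)
    return [m / total for m in means]

def F2(weights):
    """Plan plausibility for one person: sum of argument weights (pro +, con -)."""
    return sum(weights)

def F1(plan_pls):
    """Group plan plausibility: here, the mean of individual judgments."""
    return sum(plan_pls) / len(plan_pls)

# Worked example: one person weighing one pro and one con argument.
w = F6([[0.8, 0.6], [0.4, 0.2]])               # weights of two outcomes: [0.7, 0.3]
pro = F3(F4(0.9, 0.8, F5([0.7])), w[0])        # pro argument: 0.7 * 0.7 = 0.49
con = F3(F4(0.6, -0.5, F5([0.9])), w[1])       # con (negative deontic): -0.5 * 0.3 = -0.15
plan_pl = F2([pro, con])                       # this person's plan plausibility: 0.34
```

The worked example shows how a single strongly implausible deontic premise pulls an entire argument negative, while the outcome weight limits how much damage it does to the overall plan plausibility.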
Some speculations regarding the possibility of a moral code without religion.
Posted: April 2, 2013 Filed under: Design discourse, Public policy discourse, Uncategorized | Tags: ethics, evaluation of planning arguments, Morals, Planning arguments, Planning ethic, sanctions for violation of agreements and laws

On a LinkedIn forum, the question was raised whether a moral code without religion could be developed. My effort to look into ways to achieve better decisions for planning, design, and policy-making issues suggests that it is indeed possible to develop at least a partial system of agreements — for which ‘moral code’ would be an unnecessarily pretentious term — but which has some of the same features.

For problems, conflicts of interest, or proposed actions or projects that require the consent and cooperation of more than one individual (this does not cover all situations in which moral codes apply), as soon as parties realize that ‘resolutions’ based on coercion of any kind either will not really improve the situation or are fraught with unacceptable risks (the other guy might have a bigger club; even one’s own nuclear weapon would be so damaging to one’s own side that its use would be counterproductive), the basic situation becomes one of negotiation or, as I call it, ‘planning discourse’. Such situations can be sustained and brought to success only on the basis of the expectation that parties will accept and behave according to some agreements. The set of such agreements can be seen as (part of) an ethical or moral code. For the planning discourse, a rough sketch of first underlying ‘agreements’ or code elements is the following:
**1 Instead of attempting to resolve the problem by coercion — imposing one side’s preferred solution over those of other parties — let us talk, discuss the situation.
**2 The discussion will consist of each side describing that side’s preferred outcome, and attempting to convince the other side (other parties) of the advantages –or disadvantages — of the proposal.
**3 All sides will have the opportunity to do this, and all sides implicitly promise to listen to the other’s description and arguments before making a decision.
**4 The decision will (should) be based on the arguments brought forward in the discussion.
*4.1 The description of proposals should be truthful and avoid deception — all its relevant features should be described, none hidden; no pertinent aspects omitted.
*4.2 The arguments should be equally truthful, avoiding deception and exaggeration, and be open to scrutiny and challenge; that is, participants should be willing to answer questions for further support of the claims made in the descriptions and arguments.
Simplified ‘planning arguments’ consist of three types of claims:
a) the factual-instrumental claim
‘proposal A will bring about Result B, given conditions C’
b) the factual claim
‘Conditions C are (or will be) given’;
c) the ‘deontic’ or ‘ought-claim’
‘Consequence B of the proposal ought to be pursued’;
and also
d) the ‘pattern’ or inference rule of the argument (that is, the specific constellation of assertions, negation of claims and relations between A and B) is ‘plausible’.
While such arguments (just like the ‘inductive’ reasoning that plays such a significant role in science) are not ‘valid’ from a formal logic point of view, they are nevertheless used and considered all the time, their plausibility deriving from their particular constellation of claims and their ‘fit’ to the specific situation.
The plan proposal A is itself a ‘deontic’ (ought-) claim.
*4.3 The support for claims of type (a) and (b) takes the form of ‘evidence’ provided and bolstered by what we might loosely call the ‘scientific’ perspective and method.
*4.4 Support for claims of type c) will take further arguments of the ‘planning argument’ kind and pattern, containing further factual and deontic claims in support of the desirability of B.
The deontic claims of such further support arguments can refer to previous agreements, accepted laws or treaties that imply acceptance of a disputed claim, claims of desirability or undesirability for any party affected by the proposed plan, even moral rules derived from religious domains.
**5 Individual participants’ (preliminary) decision should be based on that participant’s individual assessment of the plausibility of all the arguments pro and con that have been brought up in the discussion.
That assessment should not be superseded by considerations extraneous to the plan proposal discussion itself — such as party voting discipline — but be a function of the plausibility and weights assigned by the individual to the arguments and their supporting claims.
**6 A collective decision will be based on the overall ‘decisions’ or opinions of individual participants.
(The current predominant ‘majority voting’ methods for reaching decisions do not meet the expectation #4 above of guaranteeing that the decision be based on due consideration of all expressed concerns: here, a new method is sorely needed).
A decision to adopt a plan by the participants (parties affected by the proposed plan) in such a discussion should only be taken (agreed upon) if all participants’ assessment of the plan is positive or at least ‘not worse’ than the existing problem situation that precipitated the discussion.
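The acceptance criterion just described (adopt only if every affected participant judges the plan positive or at least ‘not worse’ than the problem situation) can be sketched in a few lines of code. This is my own illustration, not part of the original proposal; the numeric scale, with the existing situation at 0 and higher scores meaning ‘better’, is an assumption for the example:

```python
def accept_plan(participant_scores, status_quo=0.0):
    """Accept a plan only if every affected participant judges it positive,
    or at least 'not worse' than the existing problem situation
    (represented here by the status_quo score of 0)."""
    return all(score >= status_quo for score in participant_scores)

# A participant judging the plan exactly 'not worse' (0.0) does not block it:
print(accept_plan([0.6, 0.1, 0.0]))   # True
# A single participant judging the plan worse than the status quo blocks it:
print(accept_plan([0.6, -0.2]))       # False
```

Note how this differs from majority voting: a single negative assessment is enough to send the plan back for modification rather than being outvoted.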
**7 Discussion should be continued until all parties feel that all relevant concerns have been voiced. Ideally, the discussion would lead to consensus regarding acceptance or rejection of the proposed plan. If this is the case, a decision can be taken and the plan accepted for implementation.
Realistically, there may be differences of opinion: some parties will support, others oppose the plan. The options for this case are either to abandon the process (to do nothing), to attempt to modify the plan to remove specific features that cause opponents’ concerns; or to prepare a different proposal altogether and start a new discussion about it.
**8 Individual parties’ ‘decision’ (e.g. vote) contribution to the common decision should be matching the party’s expressed assessment of the arguments and argument premises.
For example: if a participant agrees with all the ‘pro’ arguments and disagrees with the ‘con’ arguments (or assigns lesser weight to the ‘con’ arguments), the participant’s overall vote should be positive. Conversely, if the participant’s assessment of the arguments is negative, the overall ‘vote’ should be negative. Participants should be expected to offer additional explanation of any discrepancy between argument assessment and overall decision.
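The expected consistency between a participant’s argument assessments and that participant’s overall vote can be sketched as follows. This is a minimal sketch of mine, not the procedure of the cited Informal Logic article: I assume plausibility judgments on a -1 to +1 scale and relative-importance weights for each argument:

```python
def overall_vote(assessments):
    """Aggregate one participant's argument assessments into an overall score.

    assessments: (plausibility, weight) pairs, one per argument, where
    plausibility ranges from -1 (wholly implausible) to +1 (wholly
    plausible) and weight is the argument's relative importance.
    The sign of the result indicates the 'consistent' overall vote.
    """
    return sum(plausibility * weight for plausibility, weight in assessments)

# A participant finding two 'pro' arguments plausible and weighty,
# and one 'con' argument only mildly plausible and less important:
score = overall_vote([(0.8, 0.5), (0.6, 0.3), (-0.4, 0.2)])
print(score > 0)  # True: a positive overall vote would be consistent
```

A vote whose sign contradicts this score is the ‘discrepancy’ for which a participant would owe the group an explanation.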
**9 A common decision to accept a proposed plan implies obligations (specified in the plan) for all parties to contribute to implementation and adherence to the decision provisions.
**10 The plan may include provisions to ensure adherence and contributions by the parties. Such provisions may include ‘sanctions’, understood as (punitive) measures taken against parties guilty of violating plan agreements.
There undoubtedly might be more agreements needed for a viable planning ‘ethic’. It is clear that some of the above provisions are not easy to ‘live up to’, but what moral system has ever been easy? And for some provisions, the necessary tools for their successful application are still not available. For many societal decisions, access to the discussion (being able to voice concerns) is lacking even in so-called advanced democracies. Some expectations may sound like wishful thinking: tools providing a transparent linkage between argument assessment and the overall (individual) decision, and even more between arguments and the collective decision, are still not available. The approach to systematic and transparent argument assessment described in my article ‘The Structure and Evaluation of Planning Arguments’ (Informal Logic, December 2010) suggests that such a link would be feasible and practical, if somewhat more cumbersome than current voting and opinion polling practices. However, its application would require some changes in the organization of the planning discourse and support system, as well as in decision-making methods.
These observations were made mainly in response to the question whether a ‘moral code’ not based on religious tenets would be possible (and meaningful). That question may ultimately be taken to hinge on item #10 above, the sanction issue. The practical difficulties of specifying and imposing effective sanctions to ensure adherence to moral rules may lead many to accept or postulate sanctions and rewards administered by an entity in the hereafter. But it would seem reasonable to continue to explore agreement systems that include sanctions in the ‘here and now’ beyond current practices, since both non-religious and religion-based systems arguably have not been successful enough in reducing the level of violations of their rules.
Some rules for effective evaluation and mapping of planning arguments.
Posted: April 20, 2012 Filed under: Uncategorized | Tags: argument mapping, Argumentation, evaluation of planning arguments
The various crises facing humanity will require significant changes in current practice, habits, behaviors. Such changes cannot be imposed by governments or other authorities without running the risk of creating resentment, resistance and possible violent confrontation, adding to the dangers. The decisions to be taken must arise from a participatory discourse that is accessible to all parties potentially affected by a plan or decisions, in which all contributions, questions, suggestions and arguments are heard, and in which the merit of such contributions will have a visible impact on the decisions taken. Current governance practice does not provide this. The missing elements are first, a platform or framework for such a discourse, and second, a way of measuring the merit of contributions, the merit of arguments. Without such a measure, decisions can all too easily ignore or even go against the result of discussion; the perception that this is the case even in current ‘democratic’ regimes explains the voter ‘apathy’ — the declining participation in elections: the sense that one’s vote does not really make a difference in the decisions made by the people elected.
There are various commendable efforts and programs on the market that aim at improving planning and policy-making, political discourse. A common concern is ‘argument mapping’, ‘debate mapping’ — the effort to provide a convenient overview of the discussion through graphic representations of the relationships between the discussion elements: issues, claims, proposals, arguments. The tools currently on the market do not yet meet the requirements for a systematic and transparent evaluation. To encourage the further development of these tools, it may be helpful to summarize these requirements: the following is a first attempt to do so.
The arguments we use in such planning discussions have not received the attention from logic, even informal logic, or rhetoric that one would expect given their ubiquity: humanity quarrels about ‘what we ought to do’ as much if not more than about the ‘facts’ of the world. The arguments used in such discussions are of a type I have called ‘design arguments’ or ‘planning arguments’. Even in informal logic textbooks, where they are discussed, for example, as ‘proposal arguments’, their structure is not analyzed sufficiently well to permit a systematic evaluation. An approach to such evaluation of planning arguments has been presented, e.g., in the article ‘The Structure and Evaluation of Planning Arguments’ in Informal Logic, December 2010. Elaborating on that discussion, the following is a brief exploration of how planning arguments should be represented, and presented in argument maps, for example, so as to facilitate evaluation.
Recapping: The typical planning argument can be described as follows:
The proposed plan or decision — denoted here as ‘x’
is supported (or attacked) by the argument:
‘x ought to be adopted (implemented)’ (the ‘conclusion’)
because
x is related to effect y (the ‘factual-instrumental’ premise)
and
y ought to be pursued. (the deontic premise)
A more elaborate version might include some qualifications, say, of conditions c under which the relationship between x and y holds, and an assertion that those conditions are indeed (or are not) present. This can be condensed in a form that uses the symbol ‘F’ for the factual premises and ‘D’ for the deontic (ought) premise:
D(x)
because
F(x REL y | c)
and
D(y)
and
F(c)
The relation REL is a common label for any of the usual links between x and y: a ‘categorical’ link or claim (e.g.: ‘x IS y’); a causal claim (‘x CAUSES y’) or a ‘resemblance claim’ (‘x is LIKE y’); according to each case at hand, there may be variations or other connections invoked.
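The full pattern, D(x) because F(x REL y | c) and D(y) and F(c), can also be represented as a small data structure, which is what a mapping or evaluation tool would work with. The following Python sketch is my own illustration; the class and field names are hypothetical, not from the article:

```python
from dataclasses import dataclass

@dataclass
class Claim:
    text: str   # the claim in plain language
    kind: str   # "factual", "factual-instrumental", or "deontic"

@dataclass
class PlanningArgument:
    """The pattern: D(x) because F(x REL y | c) and D(y) and F(c)."""
    conclusion: Claim            # D(x): the plan x ought to be adopted
    factual_instrumental: Claim  # F(x REL y | c)
    deontic: Claim               # D(y)
    condition: Claim             # F(c)

    def premises(self):
        """All premises that must be stated explicitly for evaluation
        (the Premise Completeness Rule discussed below)."""
        return [self.factual_instrumental, self.deontic, self.condition]

# Example: "Build the bike lane (x) because it reduces accidents (y),
# given sufficient street width (c)."
arg = PlanningArgument(
    conclusion=Claim("The bike lane ought to be built", "deontic"),
    factual_instrumental=Claim(
        "A bike lane reduces accidents, given sufficient street width",
        "factual-instrumental"),
    deontic=Claim("Reducing accidents ought to be pursued", "deontic"),
    condition=Claim("Sufficient street width is given", "factual"),
)
print(len(arg.premises()))  # 3
```

Keeping the three premises as separate, explicit objects is what allows each to be challenged, and evaluated, individually.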
In textbook discussions of ‘proposal arguments’, this structure is usually not presented completely. Thus, an argument may be rendered as ‘x should be adopted because it causes y’, or ‘x ought to be because its effect y is desirable’. In both cases, only one premise is explicitly stated. The practice of omitting premises that ‘can be taken for granted’ (resulting in an ‘enthymeme’, an incomplete argument) is common, as Aristotle already made clear. But such an argument can be opposed on very different grounds: an opponent of ‘x’ may not be convinced that x will indeed result in y. Another opponent may agree that x does cause y, but not consider y desirable. A third participant may feel that yes, y might be a good thing, and even agree that x may be helpful in getting y, but only if certain conditions are present, and, since they are not, hold that implementing x is not warranted. Yet another observer may simply feel that x is not the best way to get y: a different plan should be considered. These objections are aimed at different premises, some of which are not explicitly stated.
This means that if the argument is to be evaluated in any meaningful way, the elements at which these opinions are directed must all be stated explicitly, visibly. This is the first of several ‘rules’ needed to ensure meaningful evaluation:
The Premise Completeness Rule:
All premises of a planning argument
— the factual-instrumental premise, including qualifying conditions as applicable;
— the deontic premise;
— the factual premise regarding qualifying conditions
must be stated explicitly.
It is necessary to clarify that some claims of arguments — that are often part of argument pattern representations in popular textbooks — should NOT be included in the display of a single planning argument because they are really arguments about ‘successor issues’: issues arising from challenges to main argument premises. Even the widely accepted representation of arguments by Toulmin (The Uses of Argument, 1958) makes this mistake: his argument diagram
D (Datum) ----------------------> Q (Qualifier) ----> C (Conclusion)
                 |
               since
                 |
            W (Warrant)
                 |
              because
                 |
            B (Backing)
though not a planning argument, is an example of selective inclusion of premises that really are parts of successor-issue arguments. Here, the warrant W is the premise making the connection between D and C; the backing B is the arguer’s preventive move in anticipation of a challenge to that premise. But any premise can usually be challenged on several kinds of grounds, not only one. So either the backing should properly include all those grounds (which of course would make the argument unwieldy and complicated), or the inclusion of one such ground to bolster the warrant is a selective complication of the main argument with one partial argument for the successor issue: is the warrant W true (or plausible, the preferred term for argument evaluation)? For that matter, isn’t it possible to also challenge the datum D? Could the argument not then contain another claim supporting the veracity or validity of the data claim? The upshot is that for a useful representation of the arguments in a map, or in a tool for evaluation, the argument itself should be reduced to its basic structure. For the planning argument, a resulting ‘map’ would look like this:
Issue / argument map, generic
The Overall Argument Completeness Rule
The generic map above shows only three arguments, which may be all that have actually been entered in a discussion. In argumentation textbooks, the emphasis is usually on the analysis of individual arguments, just as in formal logic, or even in scientific method, the truth or falsity of a claim is taken to be adequately established by means of one single valid argument with true premises. It is curious that the familiar ‘careful weighing of pros and cons’ invoked in official speeches is not reflected in the academic analysis of the arguments that constitute such pros and cons, specifically in the examination of how such weighing might actually be done. The practice of argumentation in the political arena looks even less reassuring: political advertising tends to focus on only a few ‘key’ issues and arguments, relentlessly repeated in TV and radio spots.
A modest amount of reflection should show that for a thorough deliberative evaluation of the merit of pro and con arguments to reach a meaningful decision, all pro and con arguments should be included in the evaluation. That is, all potential effects of a proposed plan should be looked at and evaluated. The rationale for greater citizen participation in public planning and policy-making is in part the fact that the information about all such effects is distributed in the citizenry: the people who are affected have that knowledge, so they must be called upon to bring it into the discussion. Reliance on experts (who are usually not, or very differently, affected by government plans) cannot guarantee that all such pertinent knowledge is brought to bear on the decision. The only area where a thorough examination of all aspects is attempted is the practice of ‘benefit/cost analysis’ applied to big government or business planning. But this technique is invariably carried out by experts; public participation is mostly precluded by the specialized terminology and technique.
The implication of this issue is that the discourse about public plans must be carefully orchestrated to ensure that all ‘pros and cons’ are actually raised and identified so that they can be included in the evaluation. On the one hand, people must be encouraged to contribute that information; on the other hand, the ‘overview’ representation of the set of aspects should not be obscured by repetition and rhetorical embroidery. Both requirements are difficult to satisfy. Some participants may not wish to reveal advantages a plan would bestow upon them that others might consider unfair, or to identify disadvantages to other parties (of which these are not aware) if doing so would require remedies reducing their own benefits. This has led me to suspect that the discourse must be considered systemically incomplete (and that therefore evaluation results should not be used directly as decision criteria). Nevertheless, the aim must be for all pros and cons to be brought out to be considered.
For the map representation of a discussion, this raises the question whether maps should ‘suggest’ issues that might be important to examine — even if they haven’t been raised by actual human participants but by some enhanced search engine, for example. Maps might show ‘potential issues’ in shades of grey as compared to highlighted issues that have actually been raised. The systematic generation of issues, even the construction of potential arguments by artificial intelligence programs based on information stored in data banks are both within reach of technological feasibility, and should be discussed carefully. This is a topic for a different investigation, however.
Besides other criticisms of the methodology (for example, the difficulty of assigning monetary costs or benefits to ‘intangible’ aspects), a key problem inherent in cost-benefit analysis is that the effects of a plan must be declared as costs or benefits by the experts, as perceived by some entity (e.g., the government funding the analysis), an entity that is just one party, one side in the controversy. This is the subject of the next point:
The Pro / Con Identification Rule
In cost-benefit studies as well as in most if not all argument mapping programs, aspects and arguments are identified as ‘pro’ or ‘con’ (‘costs’ and ‘benefits’) — a practice that on the surface seems crucial for anyone trying to carefully review all the pros and cons in order to reach a deliberated decision. And in discussions, arguments are certainly entered by participants as supporting or opposing a proposed plan. So it seems eminently plausible that the maps should reflect this.
However, this practice hides the fact that the effects of plans may not be beneficial for all people affected; indeed, one person’s ‘benefit’ (and thus ‘pro’ argument) may be another person’s ‘cost’ (and thus a ‘con’ argument). In addition, once the evaluation process begins, people will assign different weights and expressions of agreement or disagreement to different premises. These can have the effect of turning an argument intended as a ‘pro’ argument, and even initially accepted as such by the evaluator, into a ‘con’ argument for that person: I may look at an argument meant to support plan x by pointing out that it will cause effect y given conditions c, and find that while I indeed believe that x will produce y, upon reflection y does not seem such a good idea. Or I may believe both that x will cause y under conditions c and that y is a worthy goal, but find that conditions c are not present, which makes the effort to implement x a futile one. But seeing the argument identified in a map as a ‘pro’ argument may make it look like an established point, and as if I have made a mistake: the map is ‘taking sides’ in the evaluation, as it were: the side of the agency funding the analysis, or simply the side of the participant entering that particular argument.
For that reason, it is better to refrain from accepting the intended ‘pro’ and ‘con’ label of arguments in the map. Whether an argument is a pro or con reason for a specific person is a result of that person’s assessment, not the proponent’s intention. Therefore, both in the list or collection of arguments, in evaluation forms and in argument maps, the labeling of arguments as supporting or opposing should be avoided. (This is a main reason for my rejection of most ‘debate-mapping’ and ‘argument mapping’ programs and techniques on the market today.)
The Rule of Rejecting some Arguments
(e.g. characterization, ad hominem, authority arguments, ‘meta-arguments’)
The previous ‘completeness’ rule may be misunderstood as advocating the admission of all kinds of arguments into maps and into the evaluation process. There are some important exceptions: for instance, arguments or premises that merely characterize a plan or claim but don’t offer a reason for such characterization. The remark ‘This is a crazy idea’ is indeed a forceful opposition statement against a proposal. But it is not really an argument, and therefore should not be entered into either formal evaluation forms or argument maps. The same is true for positive expressions of support (‘like’, or ‘wow, what a beautiful, creative proposal’). They have the same status as ad hominem arguments (‘the author of the plan is a crook’) or arguments from authority (‘the principle goes back to Aristotle!’): they suggest that the number of supporters, the character of proponents, or the fame of a philosopher who endorsed a concept are adequate reasons to accept a claim. Once stated fully as such, the fallacy usually becomes obvious. Now, sure, we agree that denigrating the messenger because of his flawed character is not by itself a good indication of the quality of the message; but is the citing of authorities not a common practice, even a condition for respectability, in scientific work? How can it be wrong or inadmissible?
To the extent such expressions do have a legitimate place in the discourse and evaluation process, they are recommendations of how we should evaluate the plausibility of individual claims of an argument; they are not arguments about the plan x itself. We accept an argument from a scientific authority because we assume that such a famous scientist would have very good reasons, evidence, data, valid calculations, and measurements to back up his claim. Even so, such arguments often deteriorate into silly discussions not about that evidence for the claim, but about the reliability of the authority’s judgment, hurling stories about the many other silly, untrue things that person also believed against the authority’s otherwise unchallenged record, all having nothing to do with the merit of the claim itself. So the venerable academic practice of citing sources belongs in the body of arguments and evidence of successor issues, not in the main argument about a plan, nor in the maps showing the relationships between the issues and claims:
The first-level arguments about a plan should not contain
– arguments of characterization;
– ad hominem arguments (positive or negative);
– arguments from authority;
The same reservations hold for ‘meta’-arguments that make claims about the set of arguments in the discussion, or even in principle: “There is no reason to support this proposal”; “All the arguments of the opponent are fallacious”; “We haven’t heard any quantitative evidence questioning the validity of the proposal…” and the like. This is not to say that such observations do not have a place in discussions. They can serve an important purpose — such as to remind participants to provide substantial evidence, data, and support for their arguments. But these meta-arguments talk about the state of the discourse, not about the proposed plan — and therefore should likewise be omitted from representations of the discussion, argument maps, or evaluation tools of that plan itself. Perhaps there should be a separate ‘commentator’ rubric for such observations about the state and quality of the discussion itself.
The Rule of Rewarding Participation
The last observation above raises another important issue: that of the degree and sincerity of participation in the discussion. Much like the phenomenon of ‘voter apathy’ held responsible for low voter turnout in elections, the experience of trying to move online discussion participants from merely exchanging comments to the more demanding task of collaboratively writing comprehensive summaries or reports on the results of their discourse has been disappointing. Even the extra effort of switching to a different platform without the usual length limits of online discussion posts, and permitting the inclusion of visual material (maps, pictures), has been ‘too much’ for discussion participants normally quite eager to exchange arguments and share material researched on the web.
It is misplaced to accuse such people of ‘apathy’ or of being motivated merely by the excitement of the online discussion (the nature of this motivation may not yet be well understood). The reason for voter apathy, and for this reluctance of discussion participants, might more properly be seen in the lack of meaningful rewards for such engagement. Voters who perceive, with or without justification, that their votes do not have a significant impact on government decisions will be less eager to vote; discussion participants who don’t see what difference a summary of their contributions would make in the larger scheme of things will not be eager to go beyond the venting of their frustrations and the exchange of opinions. Most online discussions ‘die down’ after some time without having reached any meaningful resolution of the subject debated.
Online social networks have tried to respond to this phenomenon with features such as the count of ‘friends’ or ‘network connections’, or simple evaluation devices in the form of ‘like’ and ‘dislike’ (thumbs up or down) buttons. These efforts turn into quite meaningless competitive numbers games, which suggests nothing more than how meaningless they are (how many ‘friends’ do we have on Facebook whom we wouldn’t even recognize if we met them in the street?), but they are encouraged by the networks because they help the advertising part of their enterprise.
It turns out that the suggested tool of argument evaluation for the discourse framework might offer a better approach to the problem of rewarding participants for their contributions. Going beyond the mere count of posts in a discussion, the evaluation of argument plausibility and argument weight (the argument’s plausibility modified by the weight of relative importance of its deontic premise), as judged by the entire group of participants in the evaluation exercise, can be used directly as a measure of the value of a participant’s contributions. (The details of scoring are developed further in a paper on a proposed argumentative planning and argument evaluation game; draft available on request.)
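A minimal sketch of such a contribution measure, under simplifying assumptions of my own (each argument’s weight, its plausibility times its importance, is averaged over all evaluators and then summed over the participant’s arguments; the function name is hypothetical, and the actual scoring in the cited paper may differ):

```python
def contribution_score(argument_weights_by_evaluator):
    """Credit a contributor with the group's assessment of their arguments.

    argument_weights_by_evaluator: for each argument the participant
    contributed, a list of the weights (plausibility x importance)
    assigned by all evaluating participants. The score averages each
    argument's weights across evaluators, then sums over arguments.
    """
    return sum(
        sum(weights) / len(weights)
        for weights in argument_weights_by_evaluator
        if weights
    )

# Two arguments contributed by one participant, each judged by three evaluators:
score = contribution_score([[0.5, 0.3, 0.4], [0.2, -0.1, 0.2]])
print(round(score, 2))  # roughly 0.5 (0.4 + 0.1)
```

Unlike a ‘like’ count, this score reflects the group’s deliberated judgment of the arguments themselves, not mere popularity.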
This feature leads to the possibility of building up reputation records for different types of contributors: for example, the participant contributing to the development (through modification) of the plan eventually adopted or recommended; the ‘creative’ contributor supplying innovative solution ideas; the solid ‘researcher’ finding information pertinent to the discussion on the net; the ‘influential’ participant whose arguments lead other participants to change their minds; the ‘thorough, in-depth deliberating’ participant who delves more deeply into the evidence and support for argument premises in successor issues; the person with the most reliable offhand judgment, whose initial assessment turns out to be closest to the final deliberated result of the entire group; and so on.
The possibility of building up such cooperative contribution records — that might be included in a person’s resume for job applications or profile for public office positions — could provide the needed reward mechanism for constructive participation in discussions about significant public issues.
The Rule of Improving Proposed Plans rather than forcing a decision
One aspect of the purpose of public discourse deserves some special consideration. There are various reasons for the widespread perception of argumentation as an adversarial, divisive activity. For example: the spectacle of many ‘debates’ between candidates for public office, where the aim of each debater is to make the opponent look less fit for the job by refuting the opponent’s arguments, or by goading the opponent into making foolish assertions (that can then be used in ‘attack ads’). Even more so, the decision mechanism applied both in elections and in ‘decision-making bodies’ in government and private enterprise: majority voting. It will provide a decision, which may be convenient or even critical in some cases, but at the expense of ignoring the arguments and concerns of a significant minority of participants. The practice of enforcing ‘party discipline’ in voting in parliamentary bodies entirely obviates discussion: if the majority party has the votes, no debate is necessary. The victory celebrations of the winners of such votes overshadow the fact that the quality of the plans or policies voted upon has totally disappeared from the process.
The introduction of merit-of-discourse measures into such discussions could help reverse this problem: the contribution rewards to individual participants could, and should, be structured to favor the development of ‘better’ proposals. By this is meant plans modified step by step from the initial proposal, through amendments or changes in response to concerns expressed by participants, with the aim of achieving a greater degree of approval from a larger group of participants, and at least acceptance as ‘not making things worse than before’ by adversely affected minorities. The goal of ‘complete consensus’ is an ideal that may be too difficult to achieve in many cases, and may tempt lone dissenting holdouts to adopt a position of de facto ‘dictating’ no action. But a discourse participation reward structured to encourage the improvement of plan proposals, rather than mere majority vote decisions, may help improve not only the discourse about public issues but the resulting decisions as well.
===