There is much discussion about flaws of ‘democratic’ governance systems, supposedly leading to increasingly threatening crises. Calls for ‘fixing’ these challenges tend to focus on single problems, urging single ‘solutions’. Even recommendations for applying ‘systems thinking’ tools seem fixated on the ‘problem understanding’ phase of the process, while promotions of AI (artificial / augmented intelligence) suggest that solutions are likely to be found by improved collection and analysis of data and of information in existing ‘knowledge bases’. Little effort seems devoted to actually ‘connecting the dots’ – linking the different aspects and problems, and making key improvements that serve multiple purposes. The following attempt is an example of such an effort to develop comprehensive ‘connecting the dots’ remedies – one that itself arguably would help realize the ambitious dream of democracy – proposed for discussion. A selection (not a comprehensive account) of some often-invoked problems, briefly:
“Voter apathy”
The problem of diminishing citizen participation in political discourse and decisions / elections, leading to unequal representation of all citizens’ interests;
“Getting all needed information”
The problem of eliciting and assembling all pertinent ‘documented’ information (‘data’) as well as critical ‘distributed’ information, especially for ‘wicked problems’ – but:
“Avoiding information overload”
The phenomenon of ‘too much information’, much of which may be repetitive, overly rhetorical, judgmental, misleading (untruthful) or irrelevant;
“Obstacles to citizens’ ability to voice concerns”
The constraints to citizens’ awareness of problems, plans, overview of discourse, ability to voice concerns;
“Understanding the problem”
Social problems are increasingly complex, interconnected, ill-structured, explained in different, often contradicting ways, without ‘true’ (‘correct’) or ‘false’ answers, and thus hard to understand, leading to solution proposals which may result in unexpected consequences that can even make the situation worse;
“Developing better solutions”
The problem of effectively utilizing all available tools for the development of better (innovative) solutions;
The problem of conducting meaningful (less ‘partisan’ and vitriolic, more cooperative, constructive) discussion of proposed plans and their pros and cons;
“Better evaluation of proposed plans”
The task of meaningful evaluation of proposed plans;
“Developing decisions based on the merit of discourse contributions”
Current decision methods do not guarantee ‘due consideration’ of all citizens’ concerns but tend to ignore and override the contributions and concerns of as much as half of the population (the voting minority);
“The lack of meaningful measures of merit of discourse contributions”
Lack of convincing measures of the merit of discourse contributions: ideas, information, strength of evidence, weight of arguments and judgments;
“Appointing qualified people to positions of power”
Finding qualified people for positions of power to make decisions that cannot be determined by lengthy public discourse – especially those charged with ensuring:
“Adherence to decisions / laws / agreements”
The problem of ‘sanctions’ ensuring adherence to decisions reached or issued by governance agencies: ‘enforcement’ (requiring government ‘force’ greater than that of potential violators, leading to ‘force’ escalation);
“Control of power”
To prevent people in positions of power from falling victim to temptations of abusing their power, better controls of power must be developed.
Some connections and responses:
Details of possible remedies / responses to problems, using information technology, aiming at having specific provisions (‘contribution credits’) work together with new methodological tools (argument and quality evaluation) to serve multiple purposes:
Participation and contribution incentives: for example, offering ‘credit points’ for contributions to the planning discourse, saved in participants’ ‘contribution credit account’ as mere ‘contribution’ or participation markers (to be evaluated for merit later).
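As a sketch of how such ‘contribution credit accounts’ might be represented, the following minimal Python model records each entry first as a mere participation marker, replaced by merit points once the later evaluation has occurred. All names, and the one-point participation marker, are illustrative assumptions, not details given in the proposal:

```python
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class Contribution:
    entry_id: str
    topic: str
    participation_points: float = 1.0      # provisional marker, awarded on entry (assumed value)
    merit_points: Optional[float] = None   # assigned only after later merit evaluation

@dataclass
class ContributionAccount:
    participant: str
    contributions: List[Contribution] = field(default_factory=list)

    def record(self, contribution: Contribution) -> None:
        self.contributions.append(contribution)

    def balance(self) -> float:
        # Until evaluated for merit, an entry counts only as a participation marker.
        return sum(
            c.merit_points if c.merit_points is not None else c.participation_points
            for c in self.contributions
        )
```

For example, an account with one unevaluated entry and one entry later judged worth 2.5 merit points would show a balance of 3.5.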
“Getting all needed information”
A public projects ‘bulletin board’ announcing proposed projects / plans, inviting interested and affected parties to contribute comments and information – not only from knowledge bases of ‘documented’ information (supported by technology) but also ‘distributed’, not yet documented information from parties affected by the problem and proposed plans.
“Avoiding information overload”
Points are given only for the ‘first’ entry of a given content, and only for entries relevant to the topic.
(This also encourages speedy contribution and assembly of information.)
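A minimal sketch of the ‘first entry only’ rule, assuming a crude literal normalization of entries; recognizing genuinely equivalent but re-worded content would require semantic comparison that is simply assumed away here:

```python
import hashlib

def normalize(entry: str) -> str:
    # Crude normalization so trivially re-worded duplicates collide;
    # real duplicate detection would need semantic comparison (assumed away).
    return " ".join(entry.lower().split())

class FirstEntryCredit:
    def __init__(self) -> None:
        self._seen: set = set()

    def submit(self, entry: str) -> bool:
        """Return True (credit awarded) only for the first entry with this content."""
        digest = hashlib.sha256(normalize(entry).encode()).hexdigest()
        if digest in self._seen:
            return False        # duplicate: no credit point
        self._seen.add(digest)
        return True             # first entry: credit point awarded
```

Here only the first submission of a given (normalized) content earns a point; later identical entries earn nothing.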
“Obstacles to citizens’ ability to voice concerns”
The public planning discourse platform accepts entries in all media, with entries displayed on public, easily accessible, non-partisan media that are regularly (ideally in real time) updated.
“Understanding the problem”
The platform encourages representation of the project’s problem, intent and ‘explanation’ from different perspectives. Systems models contribute visual representation of relationships between the various aspects, causes and consequences, agents, intents and variables, supported by translation not only between different languages but also from discipline ‘jargon’ to natural conversational language.
“Developing better solutions”
Techniques of creative problem analysis and solution development (carried out by ‘special techniques’ teams reporting results to the main platform), as well as information about precedents and scientific and technological knowledge, support the development of solutions for discussion.
While all entries are stored for reference in the ‘Verbatim’ repository, the discussion process will be structured according to topics and issues, with contributions condensed to ‘essential content’, separating information claims from judgmental characterization (evaluation to be added separately, below) and rhetoric, for overview display (‘IBIS’ format, issue maps) and facilitating systematic assessment.
“Better evaluation of proposed plans”
Systematic evaluation procedures facilitate assessment of plan plausibility (argument evaluation) and quality (formal evaluation to mutually explain participants’ basis of judgment) or combined plausibility-weighted quality assessment.
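The combined ‘plausibility-weighted quality assessment’ mentioned here can be sketched numerically. The scales (quality judgments on a −3..+3 range, plausibility on −1..+1) and the simple weighted-sum combination below are assumptions for illustration, not the definitive procedure:

```python
def weighted_quality(aspects):
    """aspects: list of (weight, quality, plausibility) triples.
    Each aspect's quality judgment is discounted by the plausibility
    that the plan actually achieves that aspect; weights sum to 1.
    Scales and combination rule are illustrative assumptions."""
    assert abs(sum(w for w, _, _ in aspects) - 1.0) < 1e-9
    return sum(w * q * pl for w, q, pl in aspects)

# Example: two evaluation aspects of a hypothetical plan
aspects = [
    (0.6, +2.0, 0.9),   # important aspect, judged good, quite plausible
    (0.4, -1.0, 0.5),   # less important, somewhat negative, uncertain
]
score = weighted_quality(aspects)   # 0.6*2.0*0.9 + 0.4*(-1.0)*0.5 = 0.88
```

The point of such a measure is only that it makes participants’ bases of judgment explicit and comparable, not that the numbers are ‘objective’.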
“Meaningful measures of merit”
The evaluation procedures produce ‘judgment based’ measures of plan proposal merit that guide individual and collective decision judgments. The assessment results also are used to add merit judgments (veracity, significance, plausibility, quality of proposal) to individuals’ first ‘contribution credit’ points, added to their ‘public credit accounts’.
“Decision based on merit”
For large public (at the extreme, global) planning projects, new decision modes and criteria are developed to replace traditional tools (e.g. majority voting).
“Qualified people to positions of power”
Not all public governance decisions need to or can wait for the result of lengthy discourse, thus, people will have to be appointed (elected) to positions of power to make such decisions. The ‘public contribution credits’ of candidates are used as additional qualification indicators for such positions.
“Control of power”
Better controls of power can be developed using the results of procedures proposed above: having decision makers ‘pay’ for the privilege of making power decisions, using their contribution credits as the currency for ‘investments’ in their decisions. Good decisions will ‘earn’ future credits based on public assessment of outcomes; poor decisions will reduce the credit accounts of officials, forcing their resignation if depleted. ‘Supporters’ of officials can transfer credits from their own accounts to an official’s account, supporting the official’s ability to make important decisions requiring credits exceeding their own account. They can also withdraw such contributions if the official’s performance has disappointed the supporter.
This provision may help reduce the detrimental influence of money in governance, and corresponding corruption.
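The ‘credits as decision currency’ provision could be modeled roughly as follows. All rules here (staking, payout proportional to the outcome assessment, irrevocable transfers) are illustrative assumptions, not a worked-out specification:

```python
class CreditLedger:
    """Toy model of contribution credits as decision currency.
    All rules are illustrative assumptions."""

    def __init__(self) -> None:
        self.accounts: dict = {}

    def deposit(self, holder: str, amount: float) -> None:
        self.accounts[holder] = self.accounts.get(holder, 0.0) + amount

    def transfer(self, supporter: str, official: str, amount: float) -> None:
        # A supporter moves credits into an official's account
        # (the text also allows withdrawal; omitted here for brevity).
        if self.accounts.get(supporter, 0.0) < amount:
            raise ValueError("insufficient credits")
        self.accounts[supporter] -= amount
        self.deposit(official, amount)

    def decide(self, official: str, stake: float, outcome_score: float) -> float:
        # The official must be able to 'pay' the stake; later public
        # assessment (outcome_score in -1..+1, an assumed scale) changes
        # the balance by stake * outcome_score: good decisions earn,
        # poor ones lose, down toward forced resignation at depletion.
        if self.accounts.get(official, 0.0) < stake:
            raise ValueError("cannot afford this decision")
        self.accounts[official] += stake * outcome_score
        return self.accounts[official]
```

In this toy model, an official with 13 credits who stakes 10 on a decision later judged a failure (score −1) is left with 3 credits.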
“Adherence to decisions / laws / agreements”
One of the duties of public governance is ‘enforcement’ of laws and decisions. The very word indicates the narrow view of tools for this: force, coercion. Since government force must necessarily exceed that of any would-be violator to be effective, this contributes both to the temptation of corruption – abusing power because there is no greater power to prevent it – and to the escalation of enforcement means (weaponry) by enforcers and violators alike. For the problem of global conflicts, treaties, and agreements, this becomes a danger of use of weapons of mass destruction if not defused. The possibility of using the ‘credit account’ provisions to develop ‘sanctions’ that do not have to be ‘enforced’ but are triggered automatically by the very attempt at violation might help with this important task.
(Ref. e.g. the article ‘The Structure and Evaluation of Planning Arguments’, Informal Logic, Dec. 2010; also, slightly revised, on Academia.edu.)
In an effort to explore phenomena – identifying shortcomings and errors – that can be seen as arguments against too-ready acceptance of the argumentative model of planning, I ran into a well-intentioned article full of claims and arguments that did not fit the simple, clean basic model of the planning argument, and would cause some problems in their analysis and plausibility assessment. Briefly, there are three aspects of concern.
The first is the liberal use of verbs denoting the relationship between concepts that — in the basic planning argument — would be seen as plan features that cause outcomes or consequences. Reminder: the argumentative view shares the focus on cause-effect relationships with much of the systems modeling perspective: the ‘loops’ of systems networks are generated by changes in components / variables causing positive or negative changes in other variables. So the relationship constituting the ‘factual-instrumental’ premise of planning arguments is mostly seen as a cause-effect relationship.
Now the survey of arguments in the article mentioned above (not identified, to protect the author until proven guilty, and because the practice is actually quite common) hardly ever actually uses the terms ‘cause’ and ‘effect’ or their equivalents in arguments that clearly advocate certain policies and actions. Instead, one finds terms such as ‘reflects’, ‘advance’ (an adaptive response), ‘reinforce’, ‘seeks to…’, ‘codifies’, ‘is wired to…’, ‘erodes’, ‘come to terms with…’, ‘speaks to…’, ‘retreats into…’, ‘crystallizes…’, ‘promotes’, ‘cross-fertilizes…’, ‘embraces’, ‘cuts across’, ‘rooted in…’, ‘deeply embedded’, ‘leverages’, ‘co-create’ and ‘co-design’, ‘highlight’, ‘re-ignite’. (Once the extent of such claims was realized in that article – which was trying to make a case for ‘disrupting’ the old system and its propaganda – it became clear that the article itself was heavily engaged in the art of propaganda… slightly saddening the reader, who was initially inclined toward sympathetic endorsement of that case…)
This wealth of relationship descriptions is apt to throw the blind-faith promoter of the simple planning argument pattern into serious self-recrimination: what is the point of thorough analysis of these kinds of argument, if they never appear in their pristine form in actual discourse? (The basic ‘standard planning argument’ pattern is the following: “Proposed plan X ought to be adopted, because X will produce consequence Y given conditions C; and consequence Y ought to be pursued; and conditions C are or will be given.”) True, it was always pointed out that there were other kinds of relationships than ‘will produce’ or ‘causes’ at work in that basic pattern: ‘part-whole’, for example, or ‘association’, ‘acting as catalyst’, ‘being identical or synonymous with’. But those were never seen as serious obstacles to their evaluation by the proposed process of argument assessment, as the above examples appear to be. How can they be evaluated with the same blunt tool as the arguments with plain cause-effect premises?
Secondly, the problems these verbs cause for assessment are exacerbated by the fact that they are often qualified with expressions like ‘probably’, ‘likely to’, ‘may be seen as’, and other means of retreating from complete certainty regarding the underlying claims. The effect of these qualification moves is that the entire claim ‘probably x’ or ‘x is likely to advance y’ can now be evaluated as a fully plausible claim and given a pl-value of +1 (‘completely plausible, virtually certain’) by a listener – since the premise obviously, honestly, does not claim complete certainty. This obscures the fact that the actual underlying suggestion – that ‘x (actually) will advance y’ – is far from completely plausible, and thus lends more plausibility and weight to the argument of which it is a premise.
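The inflation effect can be made concrete with a toy calculation. The product rule for combining premise plausibilities, and the specific numbers, are assumptions here – one simple option among several – with pl-values on the −1..+1 scale used above:

```python
def argument_plausibility(premise_pls):
    # Combine premise plausibilities by simple multiplication (an assumed
    # rule, one option among several); pl-values range over -1..+1.
    p = 1.0
    for pl in premise_pls:
        p *= pl
    return p

# The unhedged premise 'x will advance y', honestly rated only 0.6 plausible:
unhedged = argument_plausibility([0.6, 0.9, 0.8])   # 0.432
# The hedged premise 'x is LIKELY to advance y', rated +1.0 as a whole claim:
hedged = argument_plausibility([1.0, 0.9, 0.8])     # 0.72
assert hedged > unhedged   # the qualifier inflates the argument's plausibility
```

Rating the honestly hedged claim at +1 nearly doubles the whole argument’s plausibility here, which is exactly the distortion described above.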
A third problem is that, upon closer inspection, many of the relationship claims are not just honest, innocent expressions of factual or functional relationships between real entities or forces. They are often themselves ‘laden’ with deontic content – subjective expressions of ‘good’ or ‘bad’: ‘x threatens y’, or ‘relativizes’, or ‘manipulates’ are valuing relationship descriptions: judgments about ‘ought’-aspects that the proposed method reserved for the clearly deontic premises of planning arguments: the purported outcomes or consequences of plans.
What are the implications of these considerations for the proposal of systematic argument assessment in the planning discourse? (Other than the necessary acknowledgement that this very comment is itself a piece of propaganda…)
Apart from the option of giving up on the entire enterprise and leaving the subjective judgments by discourse participants unexamined, one response would be to devise ways of ‘separating’ the qualifying terms from the basic claims in the evaluation work sheets given to participants. They would be asked to assess the probability or plausibility of the basic premise claim, perhaps using the qualifying statements as a ‘guide’ to their plausibility judgment (like any other supporting evidence). This seems possible with some additional refinement and simplification of the proposed process.
It is less clear how the value-contamination of relationship descriptors could be dealt with. Changing the representation of arguments to the condensed form of the basic ‘standard planning argument’ pattern is already a controversial suggestion, requiring considerable ‘intelligent’ extraction of arguments’ ‘core’ from their ‘verbatim’ version, both to get it ‘right’ and to avoid turning it into a partisan interpretation. The ‘intelligent computation’ needed to add the suggested separation of value from relationship terms to the already severely manipulated argument representation will require some more research – but doing that may be asking too much?
And it is not clear how these considerations can help participants deal with insidious argument patterns such as the recent ‘beauty’ alleging media coverup of terrorist incidents in Sweden, and then using the objection that there was no evidence of such an incident as a ‘clinching’ argument for the coverup: ‘see how clever they are at covering it up?’
(From a letter to a friend who has been working, writing and publishing on the problems of ‘design’.) Thorbjoern Mann, May 2015
I have been busy trying to communicate with the systems folks on LinkedIn about the role of argumentation in systems modeling — there seems to be an obstinate blind spot (or hole?) in their oh so holistic minds about that. I have yet to see a systems diagram in which the various issues (contentious questions, for example regarding assumptions of the model variables and parameters, about which people might disagree) are not somehow assumed to be ‘settled’. No more discussion. Curious, it is making me feel a little like someone trying to fill those open minds (they insist) with the precious grains of my speculations only to see them run out of the bottoms of those minds (there are holes top and bottom, and the bottom ones are larger?) like ocean sand.
So every once in a while I resort to wise books like the Designology volume you graciously sent me, for reassurance that the design perspective is one to be valued, respected, and further explored. I especially am fascinated by the heroic efforts in that book — and elsewhere — to identify and locate the proper role of design in the academic landscape of disciplines and departments. And the more I think about it, I sense how much of a monster this thing must look like from the point of view of, say, a ministry of education confronted with demands for proper designation of funds and personnel and labels (department names) let alone assignment of leadership roles to this ‘design’ phenomenon.
For it seems to be a little like that curious object some people have used to test prospective designers’ visual imagination: the thing that has a square profile if seen from one side, a triangle from another, and a circle from a third direction. Design indeed looks like a handful of different disciplines, depending on the angle from which it is seen. The literature is replete with complaints about the difficulty of agreeing on a common definition of design.
For example: Let’s say we start, arbitrarily, from some proposed explanation that design has something to do with problem solving. Looking at a problem as a discrepancy between a state of affairs as it IS (or will be, if nothing is done), and as it OUGHT to be, raising the need or desire to find out HOW it may be transformed from the former to the latter. A closer attention to the IS part may get us to look not only at the facts of the current situation and their adequate determination and description, but also at the causes that made things get this way, trying to understand the forces and laws at work in that process. This may have to do with physical aspects of reality, suggesting an approach like the scientific method of natural sciences to validate and understand it: Does this not look like Science? But not only science in the sense of the ‘hard’ natural sciences, because physical conditions and artifacts involved in problems have effects on people, their minds (psychological, physiological) and relations: Social science. The designer must have some adequate understanding of both ‘kinds’ of science in order to deal with the challenge of doing something meaningful about it.
Looking at the other end, though, the OUGHT aspect, a first impression that it also has a social sciences flavor — user needs, for example — soon gives way to a sense that there may be more esoteric aspects at work: vision, dreams, desires, imagination, aesthetics: aspects for which either science label clearly is not appropriate. In fact, the label OUGHT evokes connections to quite different disciplines: those that explore the good, morality, ethics, norms. So should design actually be situated in the philosophy department?
This is not a very common idea. Rather, it is the imagination aspect, or more specifically, the need to use visual images to communicate about the proposed results of this activity, that has led many to see the essence of design in the tools we have to help our own and the audience’s understanding and ability to ‘see’ proposed solutions: drawing, model-building, perspective, rendering, with their closeness to painting and sculpture. Obviously: it’s (a kind of) an Art? Even given more recent tools of computer programs for virtual visual walk-through presentations. This is historically a more widely embraced notion.
However, there are more, less ‘artistic’ tools designers need to persuasively present solution ideas to clients and the public. Proofs of validity, affordability, safety: diagrams, calculations. More like the tools engineers are using?
Wait: persuasion? Yes, designers will have to spend some effort trying to convince others of the advantages of the solution — mainly the ones who are expected to pay for its implementation. This is partly the stuff of ‘storytelling’ many design teachers admonish their students to cultivate — what will it be like to live in this great proposed solution? But also, when things are heating up, of argumentation: exploring, discussing the pros and cons of the proposals.
Arguments? Doesn’t that have to do with logic, rhetoric? But the disciplines in charge of argumentation haven’t paid much attention to the kinds of arguments we are using all the time in the design and planning discourse, so they do not have much room for the concerns of design in their curricula – but it’s argumentation, all right. Even the structure of these ‘planning arguments’ clearly indicates the multifaceted nature of the concerns involved:
“We ought to adopt proposal X, because:
1) implementing X will result in consequence Y, provided conditions C are given;
2) we ought to pursue consequence Y; and
3) conditions C are indeed present.”
This ubiquitous argument pattern (of course there are many variations due to different assertion / negation of terms, and different relations between X and Y) contains at least two or three different kinds of premises: the factual-instrumental premise 1, the deontic premise 2, and the factual premise claim 3. If questioned, each of these will have to be supported with very different kinds of reasons: for premises 1 and 3, the kind of evidence we could loosely call scientific method, based on conceptual agreements about the meaning of the things we are talking about – reasons which employ arguments found in the familiar catalogues of reliable logical and statistical inference, observation, data-gathering, measurement. A closer scrutiny of the catch-all premise 3 might reveal that the conditions C include all the variables, values, and relationship parameters of a systems model. The ‘Systems Thinking’ community (referring to a variety of different emerging ‘brands’ of systems studies) would thus argue that holistic understanding and modeling of the systems into which designers are intervening is a necessity, and that this is the concern of premises 1 and especially 3.
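The pattern and its three premise kinds can be written down as a simple data structure. The field names below follow the premise labels used in the text; everything else is an illustrative sketch, not established notation:

```python
from dataclasses import dataclass

@dataclass
class PlanningArgument:
    proposal: str                # X: the plan being advocated
    factual_instrumental: str    # premise 1: 'X will result in Y, given C'
    deontic: str                 # premise 2: 'Y ought to be pursued'
    factual: str                 # premise 3: 'conditions C are given'

    def conclusion(self) -> str:
        return f"Proposal {self.proposal} ought to be adopted"

arg = PlanningArgument(
    proposal="X",
    factual_instrumental="Implementing X will result in consequence Y, given C",
    deontic="We ought to pursue consequence Y",
    factual="Conditions C are indeed present",
)
```

The value of making the premise kinds explicit is that each can then be questioned and supported separately, with the different kinds of reasons each requires.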
But for premise 2, the supporting arguments will be of the same kind of ‘planning argument’ type. From the point of view of formal logic, these arguments are not ‘valid’ in the sense of deductive syllogisms whose conclusions must be accepted as true if all the premises are true. They are merely ‘inconclusive’ at best, no matter how recklessly we use and accept this kind of reasoning in everyday planning discourse. That very recklessness being a strong argument in favor of designers studying such reasoning more carefully than is currently the case… What to call this perspective?
Coming back to the impression that design is more like engineering. There is good evidence for this: the question of HOW to transform the unpleasant IS condition to the desirable OUGHT requires the application of scientific knowledge – science, again – to the task of putting together tools, processes, resources to generate solutions and to evaluate them, test them to see if they will meet the requirements and withstand damaging forces. And in the production of modern architecture, there are many different kinds of engineers involved – engineering had to divide itself into many different sub-disciplines, each drawing on their own branch of science. The available and needed knowledge has become too rich and complex for any single professional to master them all. This means that effective coordination of all these activities in the design process requires at least an adequate understanding of the different engineering branches and their vocabulary, concerns, criteria, to make sense of it all. Ideally. So perhaps it was appropriate for many architecture schools to be located in Institutes of Technology rather than in art schools such as the Beaux Arts?
The successful practitioners of this kind of art, though, (the ones who consistently win commissions for significant work) find themselves facing a quite different challenge: that of running a business. And some of the well-known sources of jokes about architects refer to their frequent troubles of this kind. For example: meeting deadlines: time management, and even more seriously, staying within the budget. A case for including more management, business and economics material in the education of designers?
What, besides an understanding of engineering, business, and economics, — we might as well throw in the various disciplines exploring the aspect of sustainability and ecological impact of their buildings — does this mean for the poor architects? The ones who got through architecture school even in spite of the required structures courses that gave their artistic minds so much trouble? It becomes a very different activity: to guide and orchestrate — the word is very apt for the assembly of different disciplines and professions — the activities of all these people in the design process. Not only there, but of course also in the subsequent implementation process, with different professionals. The architect there has to become a project manager — if he hasn’t given up that role to yet another, different profession. But a good design has to take the implementation process into account as an important determining factor: if it can’t be built, if it takes too long, if there are too many possibilities of accidents or failures along the way, his prospects for successful creation of solutions are slim.
Creating, designing, then, involves all these considerations and skills. And while this little sketch considered only the architect of buildings (the word ‘architect’ has been taken over by many other ‘designing’ roles such as software developers and even turned into a verb; old Vitruvius must be rotating in his grave) it should be easy to see how this multiple perspective feature applies to many other areas of modern life. Yes: for the academic department designer, ‘design’ is a monster, and the proper role and placement of design education is a very wicked problem.
It raises a number of important questions for how research (the science of design) and education for all the professions that will have to deal with design ought to be organized, funded and guided. The current confused attitude and treatment – best characterized by Senator Moynihan’s infamous ‘benign neglect’ quip about race relations – perhaps has the advantage that many different people in many different realms are forced to creatively deal with it. But it can’t, by any measure, be called a convincing, efficient design. This very point, in my opinion, is calling for increased attention and discussion. Perhaps a conference? A research project (if research is the proper word, after all these questions…)? A ‘design’ competition? A large online public planning discourse?
A Fog Island Tavern discussion
Sjutusensjuhundreochsytti-sju jäklar, beim heiligen Kyrill von Drögenpütt!
– Bog-Hubert, you’ve got to quit drinking that Slovenian stuff, it makes you cuss in incomprehensible Balkan dialects. I can’t even tell whether I should kick you out of here for inappropriate language.
– Ah, Vodçek, pour me another one. It’s actually some kind of Swedish and German this time. I think.
– Cross-cultural cussing, oh my. What in the world gets you so upset? Anything in your notebook that would have made you rich if you’d thought of it a week ago?
– Huh? You’re confusifying me. No, it’s Abbé Boulah.
– Good grief. What’s he done now?
– It’s not what he’s done but what he hasn’t.
– Well, aren’t we all guilty of some of that sin. I should have paid my utility bill several days ago. But explain.
– Well, you know how he and his buddy have been working on this scheme for a planning discourse support system. On the basis of the old Argumentative Model of Planning, you remember?
– Do I remember? Your ramblings about that one have kept me up beyond too many last calls I care to count. But isn’t it actually a good idea, basically? What’s wrong with it now?
– Well, we are all still working on straightening out some details. But Abbé Boulah and his buddy won’t get moving on those problems. I don’t know whether it’s because they don’t think they are serious enough to fix, or because they don’t know how.
– What problems?
– It’s this misunderstanding that some people have about the argumentative model — that it’s ‘too rational’ and doesn’t allow for feelings and emotions. So in a few of the first application experiments, the people didn’t even get started on working with it. Well, Abbé Boulah and his buddy are insisting that the model allows for any subject and concern to be brought up in the discussion — as Rittel said, anything can be dealt with as questions and arguments and answers, it’s the most general framework anybody has come up with. So they won’t change anything about the basic concept.
– And you think that those critics are right? That the argumentative model does not — how do they put it — accommodate feelings and emotions?
– They are right! Some people are just put off or intimidated by the pretense of logic and rationality in the term ‘argumentative model’, which seems to ignore emotions.
– Huh, Sophie, good morning. You’ve got a point there. I don’t care whether they are right or wrong. The fact that they are put off by what they think it is when they hear ‘argumentative model’ is the problem. It’s real. So I think that needs to be dealt with, somehow.
– I agree. But what do you think they should do? Let’s assume those folks are right. That feelings and emotions should play some significant role in planning discussions. Why do they think that?
– Some people are mentioning recent research that seems to show that when people make decisions, the regions in the brain that deal with emotions are showing significant activity some time — they are talking about fractions of seconds — before the thinking and reasoning areas of the brain are signaling that a decision has been made. So they conclude that the emotional side has actually made the decision before the thinking part has, or processed the reasons for it.
– Hmm. So what are they saying: because the emotions are calling the shots, the decision is better than what the reasoning part would have come up with?
– I don’t know if they actually believe or are claiming that. Though it does sound like it when they come up with that old bit of ‘going with your gut feelings’. And I don’t really care about that either…
– Wait: isn’t there some good explanation for that? That there may be some piece of information about the situation that the brain has picked up only in the subconscious — some rustling in the forest that the ears have barely registered — but the conscious brain hasn’t yet interpreted and processed yet? But the unconscious has produced the gut feeling that there may be a dangerous predator sneaking up on you? That seems like a very good reason to pay attention to that gut feeling, don’t you think?
– Yes: So why don’t you care about that?
– Sophie, I do care about those feelings. I have gone by my gut feelings many times myself. And it often turned out that they were right – that there actually was a piece of information that called for attention and influenced the decision. But hey, there were also many times when there wasn’t anything to be concerned about. So often that people around me began to think I was overly paranoid. The issue is: how do I know when the gut feeling is right and when it’s not?
– So that’s another reason to care about it, isn’t it?
– Sure. But does that whole issue apply to the problem of planning discourse about public issues? Even if it’s just you and me discussing a plan. My gut feeling says do A, but your gut tells you something else — what should we do about that?
– I see what you are saying. Unless your gut also tells you to hit me over the head – yeah, yeah, for my own protection or good — we need to talk about it.
– Right. It has to be brought out in the discussion. It’s not enough to say ‘my gut tells me to do, or not to do this’ — when there are different gut feeling signals, they need to be made explicit and explored, discussed. And for large public issues, there is even a legitimate question, in Abbé Boulah’s opinion, whether individual people’s feelings should play a role in the decisions. Not that he says that they shouldn’t be voiced if participants in the discussion feel they are important — but merely private, individual feelings without explanation should not be allowed to determine decisions that affect many people over a long time.
– You don’t agree with that?
– I think there is a case to be made that people who insist that feelings should play a role even in decisions about large scale plans, should offer some evidence that their feelings are shared by a significant number of other people. But in principle: aren’t plans and planning discussions meant to produce solutions that people agree with? That they like, and feel good about? Future situations of their lives that they expect will be emotionally satisfactory? Help their pursuit of happiness?
– I can’t disagree with that, Sophie. But isn’t there a difference between ‘respecting’ someone’s feelings, and accepting them sight unseen as a reason for rejecting or accepting a public decision? So if we accept that emotions should play a role even in large-scale public decisions: what role should they play?
– You mean, other than just being brought up in the discussion and examined?
– Well, yes.
– In other words, it seems you are staying within the assumption that there is, or should be, a discussion. A discourse. And that it consists of questions, issues, and — among other things: arguments? Or do you think you can keep people from arguing in discussions?
– I see, Bog-Hubert. Yes, we are still talking argumentative model. Or what other models are such critics proposing to use as the basis for public planning?
– Alternative models? To my knowledge, they tend to stay silent on that question. At least, I haven’t heard any alternative proposals in those situations. ‘It’s too rational’ or ‘It doesn’t acknowledge emotions’ — that’s usually the end of it. Of course there are a number of other approaches to problem solving and planning. But they don’t engage the issue of argumentation very well either.
– What are those?
– Well Sophie, there is the whole realm of ‘Systems Thinking’ approaches — where the approach is to develop models and diagrams of the ‘whole’ system or problem situation, with all its factors and relationships. Very powerful and useful, if done right, in revealing the complexity of systems and their sometimes counter-intuitive behavior.
– I agree. But?
– Think about it, Vodçek: there is hardly ever any talk about how they get all the information that goes into the models (other than ‘research’, which may take the form of opinion, ‘user need’ or customer preference surveys or some such tools, usually done too early on, to begin the model development work). Nor about how they resolve any disagreements about those assumptions. It simply isn’t talked about. In the finished model diagrams, it seems that all controversies and disagreements are assumed to have been settled.
– True – I have been wondering about that myself. Which means that what the modeler-analysts have settled upon are their own perspectives or prejudices?
– Don’t let them hear such heretical thoughts. To be fair, they are trying; and convinced that their data support those views.
– Well. Let’s just keep the question unsettled for now. Any other approaches? Examples?
– Sure. Just some examples: there are approaches like the ‘Pattern Language’, — you know that one?
– Yes, — the ‘Timeless Way of Building’ books by Alexander? But isn’t that mainly about buildings?
– Yes. Buildings, construction, urban design. In my opinion, that Pattern Language essentially aims at developing a collection of recipes or guidelines — ‘pattern’ sounds a little less than the rules they really are — that guarantee a good solution if they are applied properly, and therefore don’t need to be discussed or evaluated in any formal sense. No discussion or arguments there either.
– So what role do feelings and emotions play in those approaches?
– I guess the same accusation of not accommodating feelings could be raised against many systems models. ‘Stocks and flows’, variables and rates etc. don’t exactly sound like having to do with emotions. Nor does the statistical analysis of data – even when they deal with opinion surveys. Though the systems people would argue like Rittel does for the argumentative model, that if anybody wants to make a model of emotions and what influences those, say, they can do that in the systems vocabulary too.
– And the pattern language?
– The language Alexander developed for building consists of a number of patterns that he and his collaborators found when they looked at places they liked, so they claimed that these patterns solve problems and conflicts inherent in the situation, and make people feel good. ‘If you aren’t using the pattern, you aren’t addressing the problem’ is one of their admonitions. Many of those recipes are quite good, I agree; better than some of the things we see in buildings by other people using different theories, if you can call them that. But he also used the stratagem of the ‘quality without a name’ that can’t be explained. That cuts off the discussion right quickly: nobody wants to be told that ‘if you have to ask, you simply don’t understand it…’
– I see. If you can’t feel it, you are just one of those unfeeling folks…
– And when the patterns are applied, there is no more talk about feelings or emotions, or arguments, pros or cons, either.
– So I take it, we have the same problems with those approaches too? It seems we are back to discussion, discourse, argument, the minute we even begin to examine whether any alternative approach works, and how. So what do you think should be done with the argumentative discourse system you guys are working on, if you are going to stick with it?
– Good question. That’s what I was cussing about. Do you have any suggestions for that problem? Vodçek? Sophie?
– You are asking lil’ ol’ me? Let me think about it. Vodçek looks like he has some ideas: do you, Vodçek?
– Well, if I were bothered by the ‘argumentative’ label – which I’m not, mind you: in my experience around here, it makes people thirsty, you know what I mean? – but if I were, I’d start by changing that label. Isn’t your ‘planning discourse support system’ good enough? Well, it’s a bit long, and doesn’t make a catchy acronym; I’d work on that. And leave the reference to the argumentative model to the academic treatises.
– Okay, that’s just the label, the name. Is that enough to change the reaction of those emotional advocates?
– Maybe not. It might help if the discussion process could be started with some questions that de-emphasize the quarrelsome kind of argument part of the discussion. Starting up with questions about what folks would like to see in the solution or intervention to a problem situation: what would please other groups affected by the situation or potential solutions? What would make them feel good?
– So as to make them focus on things they can agree on right from the start, instead of bickering about proposals they don’t like? Okay: how would you frame that? And how would you keep people from starting out on – or falling into – an adversarial track right from the start? For example, if somebody starts out with some pet proposal of a solution that raises the hackles of everybody else?
– It might take some procedural manipulation, eh?
– Bad idea, if you ask me. Wouldn’t that really aggravate people and get them upset?
– All right. Suppose we start out by agreeing on some sequence beforehand – before any specific proposals are presented, and simply ask what such a proposal would or should look like if it were to make everybody happy? And agree that any ‘preconceived’ solutions be held back until they have been amended and modified with any suggestions brought up in that first phase of discussion?
– I don’t think that any restrictions should be placed on the order or sequence in which people contribute their ideas to the discourse. So whatever is being brought in will have to be accepted and recorded as it comes in. I am assuming a system that is being run not in a meeting, but mainly on some platform with contributions in different media. All entries should be kept as they have been stated, in what we called the ‘Verbatim’ file. But your suggestion could be useful when the material is sorted out and presented in the files and especially maps, structured according to topics and questions or issues. This could be shown in a sequence that encouraged constructive ideas, a gradual building up of solutions towards results that are acceptable to everybody, rather than having a proposal plunked down initially, take-it-or-leave-it style, that people have to argue about.
– Sounds like something you should try out.
– Would it help if during that phase, the display of ideas and comments could be kept ‘anonymous’?
– Why, Sophie?
– Well, I have noticed that often, arguments get nasty not because the proposals are bad or controversial, but because of who made them. Jealousy, revenge for past slights, not wanting to give the other guy credit for an idea, or partisanship: ‘anything those guys are proposing we’ll turn down’ – you’ve seen those things, haven’t you?
– Yes, the news media are full of them.
– You don’t seem too excited about that idea. I think it gets in the way of the other provisions of your system – the evaluation part, doesn’t it? But you can still run that system of merit points ‘behind the curtain’ of the system, can’t you, so that people don’t evaluate ideas because of who proposed them?
– I guess so. It might actually help the concern somebody mentioned, that the evaluation of contributions could be deliberately skewed because of such personal or partisanship jealousies. Yes, ideas might be rewarded more fairly for their own merit if you don’t know whose they are.
– We’ll see. Sometimes the ideas are so obviously partisan that everybody can guess whose they are.
– Well, back to the issue here: what about feelings and emotions? So far, what you have suggested is aimed more at defusing or minimizing extraneous feelings about other participants than about the problem and solution proposals?
– You are right. Again: there could be nudges, suggestions about how to bring those into the discussion. For example, rather than asking participants to state their feelings or concerns outright, those considerations could be phrased as questions like this: ‘Would the proposed solution detail make people feel … ? And if so, what might be done about that?’
– Are you suggesting a rule about how participants should be wording their comments? What if they don’t?
– No, that’s not what I’m saying. The original ‘verbatim’ record entries are worded in whatever way they choose. I’m talking about how they would be displayed in the maps. But of course, that very feature may lead people to formulate their comments in this way, both less ‘argumentative’ and less personal – as you suggested earlier, in a way that indicates a more common feeling than a purely individual one.
– Hey, this all sounds very nice and friendly and cooperative. Well-intentioned. But are we looking at this in the right way? I mean, can all feelings, all emotions be treated the same way? Aren’t some more, let’s say, more ‘legitimate’ than others?
– Good question. What are all the feelings those do-gooders want the system to accommodate?
– I think the judgment about whether they are legitimate or not must be left to the people participating in the discussion, don’t you agree? And it may be very different for different cases and situations? But yes, it may be useful to look at various kinds of emotions, to see whether they require different rules. Yeah, yeah: ‘nudges’, I see you’re frowning at the term ‘rule’, Sophie. Is there a rule against it?
– Can we go with ‘encouragements’ for the time being?
– Sure. If it makes you happy…
– We may have to ask some of the people raising this concern about feelings in the discourse, what kinds of feelings they have in mind. For example: I see many papers and blogposts complaining about other people’s resistance to change. Is that an issue we should look into?
– Ah, the current obsession with change. I suspect that’s often just a fad, something all the management consultants have to promote so they can help management push for their particular brand of change in their organizations. The Starbucks syndrome: try to order a straightforward coffee these days – bad boy: You aren’t honoring the change, effort of innovation and increase in choices. As if you couldn’t just mix them up yourself to your own taste if they put out the ingredients. No: You’d get upset – there’s an emotion for you – if the recipe for your plain coffee were changed.
– Hey, calm down, Sophie. Here’s a plain coffee for you. Sumatra. Cream? Sugar? Lemon? Red pepper? Brandy? French? Spanish? For recipes that aren’t on the Starbucks menu yet? But you are right, Bog-Hubert: Resistance to change is a common reaction. And it can be caused by many different emotions. Fear? Irritation over the reduced degree of certainty about the stability of conditions for your own plans? After all, your plans for whatever change or success you pursue are based on some context conditions being predictable and constant, so if those conditions change, you have to hustle and change your plans. Aggravations galore, right? Jealousy? Because the change will reduce your income while increasing that of the ‘change agent’ and other people?
– This all sounds very negative, guys. Aren’t there positive emotions too, that might play a role? Excitement, a sense of adventure, even risk and danger: some people like and thrive on things that elevate their adrenaline levels? Hope? Empathy? Love?
– Hold on, Sophie. You are right, we should consider positive emotions – but isn’t this getting into a whole range of different topics? Attitudes, values, beliefs, habits, personality likes and dislikes? Social pressures and demands. Boredom, curiosity, pride, group affinity and allegiances. Why should a planning discourse platform make special provisions for all of those? Can’t it be left to the people doing the planning in each specific case how they want to deal with such issues?
– I think you are right. But the problem is still that the folks who need to run such discussions or to participate in them don’t see how that is possible in the current version of the approach, the way it is presented. It may boil down to getting the story across, perhaps finding better ways of making people familiar and comfortable with this way of thinking.
– I see where you are headed, Vodçek. Games, am I right?
– Yes. And good examples, stories. But yes, I think games are a good way to familiarize people with new ideas and ways to work together. You remember the weird experiment we did here some time ago – on the bus system issue? I think things like that would help. We should look at that issue again, see if we can develop some different versions, — some simple ones, for kids, and some more advanced ones that can actually be used as entertainment versions of planning and problem-solving tools for real cases. And the issue of how to deal with emotions in those might take the form of trying to make them exciting and fun to play.
– Sounds like a plan… just don’t mention the word ‘argument’?
– Yes. And whenever it does slip into the discussion and people object to it, ask them what other approach they suggest, for developing a better tool? Perhaps they might actually come up with some useful ideas?
– Don’t get your hopes up. They’ll just vote you down.
– Three cheers for the optimist. Yes, I say give them a chance to make some positive and practical contributions. We might learn something. Let’s go to work on it.
Some issues regarding the role of emotions and feelings in the planning discourse
Re-examining various efforts and proposals on discourse support over time, I have tried to identify and address some key issues or problems that require attention and rethinking. Briefly, the list of issues includes the following (in no particular order of importance):
• The question of the appropriate Conceptual Framework for the discourse support system;
• The preparation and use of discourse, issue and argument maps, including the choice of map ‘elements’ (questions, issues, arguments, concepts or topics…);
• The design of the organizational framework: the ‘platform’;
• The Software problem: Specifications for discourse support software;
• Questions of appropriate process;
• The role and design of discourse mapping;
• The aspect of distributed information;
• The problem of complexity of information (complexity of linear verbal or written discussion, complex reports, systems model information);
• The role of experts;
• Negative associations with the term ‘argument’;
• The problem of ‘framing’ the discourse;
• Inappropriate focus on insignificant issues;
• The role of media;
• Appropriate Discussion representation;
• Incentives / motivation for participation (‘Voter apathy’)
• The ‘familiar’ (comfortable?) linear format of discussions versus the need (?) for structuring discourse contributions;
• The need for overview of the number of issues / aspects of the problem and their relationships;
• The effect of ‘last word’ contributions (e.g. speeches) on collective decisions; or mere ‘rhetorical brilliance’;
• Linking discussion merit / argument merit with eventual decisions;
• The issue of maps ‘taking sides’;
• The problem of evaluation: of proposals, arguments, discussion contributions;
• The role of ‘systems models’ information in common (verbal, linear, including ‘argumentative’) discourse
• The question of argument reconstruction.
• The appropriate formalization or condensation needed for concise map representation;
• Differences between requirements for e.g. ‘argument maps’ as used in e.g. law or science versus planning;
• The deliberate or inadvertent ‘authoritative’ effect of e.g. argument representation as ‘valid’; (i.e. the extent of evaluative content of maps);
• The question of appropriate sequence of map generation and updating;
• The question of representation of qualifiers in evaluation forms.
In previous work on the structure and evaluation of ‘planning arguments’ within the overall framework of the ‘Argumentative Model of Planning’ (as proposed by Rittel), I have been making various assumptions with regard to these questions — assumptions differing from those made in other studies and proposed discourse support tools. Such assumptions, for example regarding the conceptual framework, as manifested in the choice of vocabulary, — adopted as a mostly unquestioned matter of course in my proposals as well as in others’ work, — have significant implications for the development of such discourse support tools. They therefore should be raised as explicit issues for discussion and re-examination.
A first step in such a re-examination might begin with an attempt to explicitly state my current position, for discussion. This position is the result, to date, of experience with my own ideas as well as the study of others’ proposals. Not all of the issues listed above will be addressed in the following. Some position items still are, in my mind, more ‘questions’ than firm convictions, but I will try to state them as ‘provocatively’ as possible, for discussion and questioning.
1 The development of a global support framework for the discussion of global planning and policy agreements, based on wide participation and assessment of concerns, is a matter of increasingly critical concern; it should be pursued with high priority.
No such system can be expected to achieve definitive universal validity and acceptance, so many different efforts for further development of alternative approaches should be encouraged; still, there is a clear need for some global agreements and decisions that must be based on wide participation as well as thorough evaluation of concerns and information (evidence).
The design of a global framework will not be structurally different from the design of such systems for smaller entities, e.g. local governments. The differences would be mainly ones of scale. Therefore, experimental systems can be developed and tested at smaller scales to gain sufficient experience before engaging in the investments that will be needed for a global framework. By the same token, global systems for initially very narrow topics would serve the same purpose of incremental development and implementation.
2 The design of such a framework must be based on — and accommodate — currently familiar and comfortable habits and practices of collective discussion.
While there are analytical techniques and tools with plausible claims of greater effectiveness, ability to deal with the amount and complexity of data etc., the use of these tools in discourse situations with wide participation of people of different educational achievement levels would either be prohibitive of wide participation, or require implausibly massive information/education programs for which precisely the needed tools for reaching agreement on the selection of method / approach (among the many competing candidates) are currently not available.
3 Even with the growing use of new information technology tools, the currently most familiar and comfortable discourse pattern seems to be that of the traditional ‘linear discussion’ (sequential exchange of questions and answers or arguments) — the pattern that has been developed in e.g. the parliamentary tradition, the agreement of giving all parties a chance to speak, air their concerns, their pros and cons to proposed collective actions, before making a decision.
This form of discourse, originally based on the sequential exchange of verbal contributions, is of course complemented and represented by written documents, reports, books, and communications.
4 A first significant attempt to enhance the ‘parliamentary’ tradition with systematic information system, procedural and technology support was Rittel’s ‘Argumentative Model of Planning’. It is still a main candidate for the common framework.
Rittel’s main argument for the general acceptance of this model was the insight that its basic, general conceptual framework of ‘questions’, ‘issues’ (controversial questions), ‘answers’, and ‘arguments’ could in principle accommodate the content of any other framework or approach, and thus become a bridge or common forum for planning at all levels. This still seems to be a valid claim not matched by any other theoretical approach.
5 However, there are sufficiently worrisome ‘negative associations’ with the term ‘argument’ of Rittel’s model to suggest at least a different label and selection of more neutral key concepts and terms for the general framework.
The main options are to only refer to ‘questions’, ‘responses’ and ‘claims’, and to avoid ‘argument’ as well as the concepts of ‘pros’ and ‘cons’ — arguments in favor of and opposed to plan proposals or other propositions.
Argumentation can be seen as the mutually cooperative (positive) effort of discussion participants to point out premises that support their positions, but that are also already believed to be true or plausible by the ‘opponent’ (or will be accepted by the opponent upon presentation of evidence or further arguments). But the more common, apparently persistent view is that of argumentation as a ‘nasty’, adversarial, combative ‘win-lose’ endeavor. While undoubtedly discourse by any other label will produce arguments, pros and cons etc., the question is whether these should be represented as such in support tools, or in a more neutral vocabulary.
Experiments should be carried out with representations of discourse contributions — in overview maps and evaluation forms — as ‘questions’ and ‘answers’.
6 Any re-formatting, reconstruction, condensing of discussion contributions carries the danger of changing the meaning of an entry as intended by its author.
Regardless of the choice of such formatting — which should be the subject of discussion — the framework must preserve all original entries in their ‘verbatim’ form for reference and clarification as needed. Ideally, any reformatting of an entry should be checked with its author to ensure that it represents its intended meaning. (Unfortunately, this is not possible for entries whose authors cannot be reached, e.g. because they are dead.)
7 The framework should provide for translation services not only for translation between natural languages, but also from specialized discipline ‘jargon’ entries to natural language.
8 While researchers in several disciplines are carrying out significant and useful efforts towards the development of discourse support tools, and some of these efforts seem to claim to produce universally applicable tools, such claims are overly optimistic.
The requirements for different disciplines are different, and lead to different solutions that cannot comfortably be transferred to other realms. Specifically, the differences between scientific, legal, and planning reasoning call for quite different approaches and discourse support systems. However, they are not independent: the planning discourse contains premises from all these realms that must be supported with the tools pertinent to those differences. The diagram suggests how different discourse and argument systems are related to planning:
(Sorry, diagram will be added later)
9 Analysis and problem-solving approaches can be distinguished according to the criteria they recommend as the warrant for solution decisions:
– Voting results (government, management decision systems, supported by experts);
– ‘Backwards-looking’ criteria: ‘root cause’ (root cause analysis), ‘necessary conditions’, contributing factors (‘Systematic Doubt’ analysis), historical data (systems models);
– ‘Process/Approach’ criteria (“the ‘right’ approach guarantees the solution”);
– solutions legitimized by participation vote or authority position, or argument merit;
– ‘Forward-looking’ criteria: Expected result performance, Benefit-Cost Ratio, simulated performance of selected variables over time, or stability of the system, etc.
It should be clear that the framework must accommodate all these approaches, or preferably, be based on an approach that could integrate all these perspectives, as applicable to the context and characteristics of the problem. There is, to my knowledge, currently no approach matching this expectation, though some claim to do so (e.g. ‘Multi-level Systems Analysis’, which, however, looks only at approaches deemed to fit within the realm of ‘Systems Thinking’).
10 While the basic components of the overall framework should be as few, general, and simple as possible, — for example ‘topic’, ‘question’ and ‘claim’ or ‘response’, — actual contributions in real discussions can be lengthy and complex, and must be accommodated as such (in ‘verbatim’ reference files). However, for the purposes of overview by means of visual relationship mapping, or systematic evaluation, some form of condensed formatting or formalization will be necessary.
The needed provisions for overview mapping and evaluation are slightly different, but should be as similar as possible for the sake of simplicity.
11 Provisions for mapping:
a. Different detail levels of discourse maps should be distinguished: ‘Topic maps’, ‘Issue maps’ (or ‘question maps’), and ‘argument maps’ or ‘reasoning maps’.
– Topic maps merely show the general topics or concepts and their relationship as linked by discussion entries. Topics are conceptually linked (simple line) if they are connected by a relationship claim in a discussion entry.
– Issue or question maps show the relationships between specific questions raised about topics. Questions can be identified by type: e.g. factual, deontic, explanatory, instrumental questions. There are two main kinds of relationships: one is the ‘topic family’ relation (all questions raised about a specific topic); the other is the relationship of a question (a ‘successor’ question) having been raised as a result of challenging or query for clarification of an element (premise) of another (‘predecessor‘) question.
– Argument or reasoning maps show the individual claims (premises) making up an answer or argument about an issue (question), and the questions or issues having been raised as a result of questioning any such element (e.g. challenging or clarifying, or calling for additional support for an argument premise).
b. Reasoning maps (argument maps) should show all the claims making up an argument, including claims left not expressed in the original ‘verbatim’ entry as assumed to be ‘taken for granted’ and understood by the audience.
Reasoning maps aiming at encouraging critical examination and thinking about a controversial subject might show ‘potential’ questions (for example the entire ‘family’ of issues for a topic) that could or should be raised about an answer or argument. These might be shown in gray or faint shades, or a different color from actually raised questions.
c. Reasoning maps should not identify answers or arguments as ‘pro’ and ‘con’ a proposal or position (unless it is made very clear that this is only the author’s intended function).
The reason is that other participants might disagree with one or several of the premises of an intended ‘pro’ argument, in which case the set of premises (now with the respective participant’s negation) can constitute a ‘con’ argument — but the map showing it as ‘pro’ would in fact be ‘taking sides’ in the assessment. This would violate the principle of the map serving as a neutral, ‘impartial’ support tool.
d. For the same reason, reasoning maps should not attempt to identify and state the reasoning pattern (e.g. ‘modus ponens’ or modus tollens’ etc.) of the argument. Nor should they ‘reconstruct’ arguments into such (presumably more ‘logical’, even ‘deductively valid’) forms.
Again, if in a participant’s opinion one of the premises of such an argument should be negated, the pattern (reasoning rule) of the set of claims becomes a different one. By showing the pattern as the one originally intended by the author (however justified it may seem to map preparers by its inherent nature and the validity of its premises), the map would inadvertently or deliberately be ‘taking sides’ in the assessment of the argument.
e. Topic, issue and reasoning maps should link to the respective elements in the verbatim and any formalized records of the discussion, including to source documents, and illustrations (pictures, diagrams, tables).
f. The ‘rich image’ fashion (fad?) of adding icons and symbols (thumbs up or down, plus or minus signs) or other decorative features to the maps — moving bubbles, background imagery, etc. — serves as a distraction more than as a well-intended user-friendly device, and should be avoided.
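The distinctions in these mapping provisions — topic, issue, and reasoning levels, typed questions, predecessor/successor links, and a back-link from every map element to the ‘Verbatim’ record — can be sketched as data structures. This is a minimal, hypothetical Python sketch; all class and field names are illustrative assumptions, not part of any existing implementation.

```python
from dataclasses import dataclass, field

@dataclass
class VerbatimEntry:
    entry_id: str
    author: str              # may be hidden when entries are displayed anonymously
    text: str                # preserved exactly as submitted, never rewritten

@dataclass
class Topic:
    name: str
    related: set = field(default_factory=set)     # topics linked by relationship claims

@dataclass
class Issue:
    issue_id: str
    topic: str               # 'topic family' relation: all issues raised about a topic
    kind: str                # e.g. 'factual', 'deontic', 'explanatory', 'instrumental'
    text: str
    predecessor: str = None  # issue whose element this one challenges or queries
    verbatim: list = field(default_factory=list)  # ids of underlying verbatim entries

@dataclass
class Premise:
    text: str
    raised_issues: list = field(default_factory=list)  # successor issues about this claim

@dataclass
class Argument:
    argument_id: str
    about_issue: str
    premises: list = field(default_factory=list)  # including 'taken for granted' claims
    verbatim: list = field(default_factory=list)

# Example: a deontic issue, an argument about it, and a successor (factual)
# issue raised by challenging one of the argument's premises.
i1 = Issue("I1", "bus-system", "deontic", "Should the city adopt plan A?")
a1 = Argument("A1", "I1", [Premise("Plan A will reduce congestion"),
                           Premise("Reducing congestion is desirable")])
i2 = Issue("I2", "bus-system", "factual",
           "Will plan A actually reduce congestion?", predecessor="I1")
a1.premises[0].raised_issues.append("I2")
```

Note that nothing in the structure labels the argument ‘pro’ or ‘con’, or assigns it a reasoning pattern — in line with provisions c and d above, those judgments are left to the participants.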
12 Current discourse-based decision approaches exhibit a significant shortcoming in that there is no clear, transparent, visible link between the ‘merit’ of discussion contributions and the decision.
Voting blatantly permits disregarding discussion results entirely. Other approaches (e.g. Benefit-Cost Analysis, or systems modeling) claim to address all concerns voiced e.g. in preparatory surveys, but disregard any differences of opinion about the assumptions entering the analysis. (For example: some entities in society would consider the ‘cost’ of government project expenditures as ‘benefits’ if they lead to profits for those entities (e.g. industries) from government contracts).
The proposed expansion of the Argumentative Model with Argument Evaluation (TM 2010) provides an explicit link between the merit of arguments (as evaluated by discourse participants) and the decision, in the form of measures of plan proposal plausibility. This approach should be integrated into an approach dropping the ‘argumentative’ label, even though it requires explicit or implicit evaluation of argument premises.
13 Provisions for evaluation.
In discussion-based planning processes, three main evaluation tasks should be distinguished: the comparative assessment of the merit of alternative plan proposals (if there are more than one); the evaluation of a single plan proposal or proposition, as a function of the merit of arguments; and the evaluation of the merit of individual contributions (items of information, arguments) to the discussion.
For all three, the basic principle is that evaluation judgments must be understood as subjective judgments, by individual participants, about quality, plausibility, goodness, validity, desirability, etc. While traditional assessments, e.g. of the truth of argument premises and conclusions, aimed at absolute, objective truth, the practical working assumption here is that while we all strive for such knowledge, we must acknowledge that we have no more than (utterly subjective) estimate judgments of it, and it is on the strength of those estimates that we have to make our decisions. The discussion is a collective effort to share and hopefully improve the basis of those judgments.
The first task above is often approached by means of a ‘formal evaluation’ procedure that develops ‘goodness’ or performance judgments about the quality of the plan alternatives, resulting in an overall judgment score as a function of partial judgments about the plans’ performance with respect to various aspects, sub-aspects, etc. Such procedures are well documented; the discourse may be the source of the aspects, but more often the aspects are assembled (by experts) by a different procedure.
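Such a ‘formal evaluation’ procedure might be sketched as follows; the aspects, weights, and partial judgment scores here are invented illustration values, not drawn from any actual evaluation or from TM 2010.

```python
# Minimal sketch of a 'formal evaluation' of alternative plan proposals:
# partial 'goodness' judgments per aspect, combined into an overall score
# by weights of relative importance. All names and values are illustrative.
aspects = {"cost": 0.4, "environment": 0.35, "equity": 0.25}  # weights sum to 1

plans = {
    "Plan A": {"cost": 0.7, "environment": 0.4, "equity": 0.6},
    "Plan B": {"cost": 0.5, "environment": 0.8, "equity": 0.7},
}

def overall_score(partial_scores):
    """Overall judgment score: weighted sum of the partial aspect judgments."""
    return sum(weight * partial_scores[aspect] for aspect, weight in aspects.items())

for name, partial in plans.items():
    print(name, round(overall_score(partial), 3))
```

Note that the result is only as meaningful as the (expert- or discourse-supplied) aspect list and weights entering it.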
The following suggestions explore the approach of developing a plausibility score for a plan proposal based on the plausibility and weight assessments of the (pro and con) arguments and argument premises (following TM 2010, with some adaptations).
a. Judgment criterion: Plausibility.
All elements to be ‘evaluated’ are assessed with the common criterion of ‘plausibility’, on an agreed-upon scale of +n (‘completely plausible’) to -n (‘completely implausible’), the midpoint score of zero meaning ‘don’t know’ or ‘neither plausible nor implausible’.
While many argument assessment approaches aim at establishing the (binary) truth or falsity of claims, ‘truth’ (and not even ‘degree of certainty’ or probability about the truth of a claim) does not properly apply to deontic (ought-) claims, the desirability of goals, etc. The plausibility criterion or judgment type applies to all types of claims: factual, deontic, explanatory, etc.
b. Weights of relative importance
Deontic claims (goals, objectives) are not equally important to people. To express these differences in importance, individuals assign ‘weight of relative importance’ judgments to the deontics in the arguments, on an agreed-upon scale of zero to 1, such that all weights relative to an overall judgment add up to 1.
c. All premises of an argument are assigned premise plausibility judgments ppl; the deontic premises are also assigned their weight of relative importance pw.
d. The argument plausibility argpl of an argument is a function of the plausibility values of all its premises.
e. Argument weight argw is a function of argument plausibility argpl and the weight pw of its deontic premise.
f. Individual Plan or Proposal plausibility PLANpl is a function of all argument weights.
g. ‘Group’ assessments or indicators of plan plausibility GPLANpl can be expressed as some function of all individual PLANpl scores.
However, ‘group scores’ should only be used as a decision guide, together with added consideration of degrees of disagreement (range, variance), not as a direct decision criterion. The decision may have to be taken by traditional means, e.g. voting. But the correspondence or difference between deliberated plausibility scores and the final vote adds an ‘accountability’ provision: a participant who assigned a positive deliberated plausibility score to a plan but voted against it will face strong demands for explanation.
h. A potential ‘by-product’ of such an evaluation component of a collective deliberation process is the possibility of rewarding participants for discussion contributions not only with reward points for making contributions — and making such contributions speedily, (since only the ‘first’ argument making the same point will be included in the evaluation) — but modifying these contribution points with the collective assessments of their plausibility. Thus, participants will have an incentive — and be rewarded for — making plausible and meritorious contributions.
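As a rough sketch, the chain of judgments in steps c through g might be implemented as follows. The specific aggregation functions used here (simple product for argument plausibility, weighted sum for plan plausibility, arithmetic mean for the group indicator) and all judgment values are illustrative assumptions only; TM 2010 leaves the choice of functions open for discussion.

```python
# Sketch of the evaluation chain (steps c-g): premise plausibilities ppl,
# deontic weight pw, argument plausibility argpl, argument weight argw,
# plan plausibility PLANpl, group indicator GPLANpl.
# The aggregation functions are assumptions for illustration, not fixed rules.

def argument_plausibility(premise_pls):
    """argpl: here, the simple product of premise plausibilities (each in [-1, +1])."""
    argpl = 1.0
    for ppl in premise_pls:
        argpl *= ppl
    return argpl

def argument_weight(argpl, pw):
    """argw: argument plausibility scaled by the weight (0..1) of its deontic premise."""
    return argpl * pw

def plan_plausibility(arguments):
    """PLANpl: weighted sum over all arguments; 'con' judgments enter as negative plausibilities."""
    return sum(argument_weight(argument_plausibility(ppls), pw) for ppls, pw in arguments)

def group_plan_plausibility(individual_scores):
    """GPLANpl: one possible group indicator -- the mean of individual PLANpl scores."""
    return sum(individual_scores) / len(individual_scores)

# One participant's (invented) judgments: two 'pro' arguments and one 'con'
# argument, whose disputed premise receives a negative plausibility.
arguments = [
    ([0.8, 0.9], 0.5),   # pro: quite plausible premises, deontic weight 0.5
    ([0.6, 0.7], 0.3),   # pro
    ([-0.7, 0.9], 0.2),  # con: one premise negated by this participant
]
print(round(plan_plausibility(arguments), 3))  # prints 0.36: a modest net 'pro' leaning
```

Note how the deontic weights (0.5, 0.3, 0.2) sum to 1, as required in step b, and how the negated premise turns its argument’s contribution negative without any ‘pro’/‘con’ label being imposed by the map.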
14 The process for deliberative planning discourse with evaluation of arguments and other discourse contributions will be somewhat different from current forms of participatory planning, especially if much or all of it is to be carried out online.
The main provisions for the design of the process pose no great problems, and small experimental projects can be carried out ‘by hand’ with human facilitators and support staff using currently available software packages. But for larger applications, adequate integrated software tools will first have to be developed.
15 The development of ‘civic merit accounts’ based on evaluated contributions to public deliberation projects may help alleviate the problem of citizen reluctance to participate in such discourse (often referred to as ‘voter apathy’).
However, such rewards will only be effective incentives if they can become a fungible ‘currency’ for other exchanges in society. One possibility is to use the built-up account of such ‘civic merit points’ as one part of qualification for public office — positions of power to make decisions that do not need or cannot wait for lengthy public deliberation. At the same time, the legitimization for power decisions must be ‘paid for’ with appropriate sums of credit points — a much-needed additional form of control of power.
16 An important, yet unresolved ‘open question’ is the role of complex ‘systems modeling’ information in any form of argumentative planning discourse with the kind of evaluation sketched above.
Just as disagreement and argumentation about model assumptions are currently not adequately accommodated in systems models, the information of complex systems models and e.g. simulation results is difficult to present in coherent form in traditional arguments, and almost impossible to represent in argument maps and evaluation tools. Since systems models arguably are currently the most important available tools for detailed and systematic analysis and understanding of problems and system behavior, the integration of these tools in the discourse framework for wide public participation must be seen as a task of urgent and high priority.
17 Another unresolved question regarding argument evaluation (and perhaps also mapping) is the role of statement qualifiers.
The question is whether arguments stated with qualifiers (‘possibly’, ‘perhaps’, ‘tends to’, etc.) in the original ‘verbatim’ version should show such qualifiers in the statements (premises) to be evaluated. Arguably, qualifiers can be seen as statements about how an unqualified, categorical claim should be evaluated; the proponent of a claim qualified with a ‘possibly’ does not ask for a complete 100% plausibility score. This means that the qualifier belongs to a separate argument about how the main categorical claim should be assessed, and thus should not be included in the ‘first-level’ argument to be evaluated. The problem is that the qualified claim can be evaluated — as qualified — as quite, even 100%, plausible, but that plausibility can then (in the aggregation function) be counted as 100% for the unqualified claim. Unless the author can be persuaded to add an actual suggested plausibility value in lieu of the verbal qualifier, one that other evaluators can view and perhaps modify according to their own judgment (unlikely and probably impractical), it would seem better to enter only unqualified claims in the evaluation forms, even though this may be seen as misrepresenting the author’s real intended meaning.
18 Examples of topic, issue, and argument maps according to the preceding suggestions.
a. A ‘topic map’ of the main topics addressed in this article:
Map of topics discussed
b. An issue map for one of the topics:
Argument mapping issues
c. A map of the ‘first level’ arguments in a planning discourse: the overall plan plausibility as a function of plausibility and weight assessments of the planning arguments (pro and con) that were raised about the plan.
The overall hierarchy of plan plausibility judgments
Hierarchy map of argument evaluation judgments, with successor issues
Argument map for the mapping issue ‘Should argument maps show ‘pro’ and ‘con’ labels?’
Mann, T. (2010) “The Structure and Evaluation of Planning Arguments” Informal Logic, Dec. 2010.
Rittel, H. (1972) “On the Planning Crisis: Systems Analysis of the ‘First and Second Generations’.” BedriftsØkonomen. #8, 1972.
– (1977) “Structure and Usefulness of Planning Information Systems”, Working Paper S-77-8, Institut für Grundlagen der Planung, Universität Stuttgart.
– (1980) “APIS: A Concept for an Argumentative Planning Information System”. Working Paper No. 324. Berkeley: Institute of Urban and Regional Development, University of California.
– (1989) “Issue-Based Information Systems for Design”. Working Paper No. 492. Berkeley: Institute of Urban and Regional Development, University of California.
Suggestions made by proponents of ‘systems thinking’ or systems analysis to discussions we might call ‘planning or policy discourse’ often take the form of recommendations to construct models of the ‘whole system’ in question, and to use these to guide policy decisions.
A crude explanation of what such system models are and how they are used might be the following: The ‘model’ is represented as a network of all the parts (variables, components; e.g. ‘stocks’) in the ‘whole’ system. What counts as the whole system is the number of such parts that have some significant relationship (for example, ‘flows’) to one another — such that changes in the state or properties of some part will produce changes in other parts. Of particular interest to system model builders are the ‘loops’ of positive or negative ‘feedback’ in the system — such that changes in part A will produce changes in part B, but those changes will, after a small or large circle of further changes, come back to influence A. Over time, these changes will produce behaviors of the system that would be impossible to track with simple assumptions e.g. about causal relationships between individual pairs of variables such as A and B.
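As a crude illustration of such feedback behavior, the following sketch simulates two hypothetical stocks A and B, where growth in A feeds B and B’s level in turn dampens A’s growth (a negative feedback circle A -> B -> A). All coefficients and starting values are arbitrary illustration assumptions, not taken from any real model.

```python
# Hypothetical two-stock feedback loop: A grows on its own, feeds B,
# and B's accumulated level dampens A's further growth.
# Coefficients are arbitrary illustration values.
def simulate(steps, a, b):
    for _ in range(steps):
        da = 0.05 * a - 0.01 * a * b  # A's own growth, dampened by B's level
        db = 0.02 * a - 0.03 * b      # B fed by A's level, with some decay
        a, b = a + da, b + db
    return a, b

a, b = simulate(10, 100.0, 10.0)
print(round(a, 1), round(b, 1))  # A shrinks while B grows, until the loop balances
```

Even this toy loop behaves in a way that simple pairwise cause-and-effect reasoning about A and B alone would not predict, which is the point of the ‘whole system’ view.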
The usefulness of such system models simply means the degree of reliability with which simulation runs of those changes over time will produce predictions that would come true if the ‘real system’ represented by the model could be made to run according to the same assumptions. The confidence in the trustworthiness of model predictions thus relies on a number of assumptions (equally simplistically described):
– the number of ‘parts’ (variables, components, forces, ‘stocks’) included;
– the nature and strength of relationships between the system variables;
– the magnitudes (values) of the initial system variables, e.g. stocks.
System models are presented as ‘decision-making tools’ that allow the examination of the effects of various possible interventions in the system (that is, introduction of changes in system variables that can be influenced by human decision-makers), given various combinations of conditions in variables that cannot be influenced but must be predicted, as well as assumptions about the strength of interactions. All in order to achieve certain desirable states or system behaviors (the ‘goals’ or objectives, measured by performance criteria of the system). System modelers usually refrain from positing goals; they either assume them as ‘given’ by assumed social consensus or by directives of the authorities funding the study (a habit that has come in for considerable criticism), or leave it up to the decision-maker ‘users’ of the system to define the goals and use the simulations to experiment with different action variables until the desired results are achieved.
Demonstrations of the usefulness or reliability of a model rest on simulation runs for past system states (for which the data about context and past action conditions can be determined): the model is deemed reliable and valid if it can produce results that match observable ‘current’ conditions. If the needed data for this can be produced and the relationships can be adjusted with sufficient accuracy to actually produce matching outcomes, the degree of confidence we are invited to invest in such models can be quite high: very close to 100% (with qualifications such as ‘a few percentage points plus or minus’).
The usual planning discourse — that is, discussion about what actions to take to deal with situations or developments deemed undesirable by some (‘problems’) or desirable improvements of current conditions (‘goals’) — unfortunately uses arguments that are far from acknowledging such ‘whole system’ complexity. Especially in the context of citizen or user participation currently called for, the arguments mostly take a form that can be represented (simplified) by the following pattern, say, about a proposal X put forward for discussion and decision:
(1) “Yes, proposal X ought to be implemented,
implementing X will produce effect (consequence) Y
Y ought to be aimed for.”
(This is of course a ‘pro’ argument; a counterargument might sound like:
(2) “No, X should NOT be implemented,
Implementing X will produce effect Z
Z ought to be avoided.”
Of course, other forms of ‘con’ arguments are possible, targeting either the claim that X will produce Y, granting that Y is desirable; or the claim that Y is desirable, granting that X will indeed produce Y…)
A more ‘sophisticated’ version of this typical (‘standard’) planning argument would perhaps include consideration of some conditions under which the relationship X — Y holds:
(3) “Yes, X ought to be implemented,
Implementing X will produce Y if conditions C are present;
Y ought to be aimed for;
conditions C are present.”
While ‘conditions C’ are mostly thought of as simple, one-variable phenomena, the systems thinker will recognize that ‘conditions C’ should include all the assumptions about the state of the whole system in which action X is one variable that can indeed be manipulated by decision-makers (while many others are context conditions that cannot be influenced). So from this point of view, the argument should be modified to include the entire set of assumptions of the whole system. The question of how a meaningful discourse should be organized to take this expectation into account while still accommodating participation by citizens — non-experts — is a challenge that has yet to be recognized and taken on.
Meanwhile, however, the efforts to improve the planning discourse consisting of the simpler pro and con arguments might shed some interesting lights on the issue of the reliability of system models for predicting outcomes of proposed plans over time.
The improvements of the planning discourse in question have to do with the proposals I have made for a more systematic and transparent assessment of the planning argument — in response to the common claim of having public interest decisions made ‘on the merit of arguments’. The approach I developed implies that the plausibility of a planning argument of the types 1, 2, 3 above (in the mind of an individual evaluator) will be a function of the plausibility of all the premises. I am using the term ‘plausibility’ to apply both to the ‘factual’ premises claiming the relationship X –> Y and the presence of conditions C (which traditionally are represented as ‘probability’ or degree of confidence), and to the deontic premise ‘Y ought to be aimed for’, which is not adequately characterized by ‘probability’, much less by the ‘truth’ or ‘falsity’ that is the stuff of traditional argument assessment. The scale on which such plausibility assessments are expressed must range from an agreed-upon value such as -1 (meaning ‘totally implausible’) to +1 (meaning ‘totally plausible’, entirely certain), with a midpoint of zero (meaning ‘don’t know’, ‘can’t tell’, or even ‘don’t care’).
The plausibility of such an argument, I suggest, will be some function of the plausibilities assigned to each of the premises, and arguably also to the implied claim that the argument pattern itself, i.e. the inference rule

“D(X)
because FI(X –> Y) | C
and D(Y)
and F(C)”,

applies meaningfully to the situation at hand. (D prefixes denote deontic claims, FI factual-instrumental claims, F factual claims.)
(The weight of each argument among the many pro and con arguments comes one step later: it will be a function of the argument’s plausibility and of the weight of relative importance of the goals, concerns, and objectives referred to in its deontic premise.)
This means that the argument plausibility will decrease quite rapidly as the plausibilities for each of these premises deviate from 100% certainty. Experiments with a plausibility function consisting of the simple product of those plausibilities have shown that the resulting overall argument plausibility often shrinks to a value much closer to zero than to +1; and the overall proposal plausibility (e.g. a sum of all the weighted argument plausibilities) will also be far from the comfortable certainty (decisively ‘pro’ or decisively ‘con’) hoped for by many decision-makers.
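The shrinking effect is easy to demonstrate with a small sketch, assuming the simple-product plausibility function mentioned above and an invented uniform premise plausibility of 0.9:

```python
# How argument plausibility, computed as the simple product of premise
# plausibilities, falls towards zero as the number of (individually quite
# plausible) premises grows. The 0.9 value is an invented illustration.
premise_pl = 0.9

for n_premises in (2, 5, 10, 20):
    argument_pl = premise_pl ** n_premises
    print(n_premises, round(argument_pl, 3))
# 2 premises -> 0.81; 10 premises -> ~0.349; 20 premises -> ~0.122
```

Even with every single premise judged quite plausible, an argument resting on twenty such premises ends up with an overall plausibility near zero, which is exactly the situation a ‘whole system’ argument would face.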
These points will require some further study and discussion in the proposed approach to systematic argument assessment. For the moment, the implication of this tendency of argument plausibility towards zero for the issue of enhancing arguments with proper recognition of ‘all’ the condition assumptions of the ‘whole’ system deserves some comment.
For even when a model can be claimed to represent past system behavior with a plausibility close to 1, the projection of those assumptions into the future must always be made with a prudent dose of qualification: all predictions are only more or less probable (plausible); none are 100% certain. The deontic premises as well are less than totally plausible; indeed, they usually express legitimate opposing claims by people affected in different ways by a proposed plan, differences we are asked to acknowledge and consider instead of insisting that ‘our’ interests be pursued with total certainty. We might even be quite mistaken about what we ask for… So when the argument plausibility function must include the uncertainty-laden plausibility assessments of all the assumptions about relationships and variable values over time in the future, the results (with the functions used thus far, for which there are plausible justifications, but which are admittedly still up for discussion) must be expected to decline towards zero even faster than for the simple arguments examined in previous studies.
So as the systems view of the problem situation becomes more detailed, holistic, and sophisticated, the degree of confidence in our plan proposals that we can derive from arguments including those whole-system insights is likely to get lower, not higher. This nudge towards humility, even about the degree of confidence we might derive from honest, careful and systematic argument assessment, may be a disappointment to leaders whose success in leading depends to some extent on such a degree of confidence. Then again, this may not be a bad thing.