A hypothetical ‘perfect’ artificial argumentative systems planner — D R A F T

A tavern discussion looking at the idea of an artificial planning discourse participant from the perspectives of the argumentative model and of systems thinking, expanding both (or mutually patching up their shortcomings), and inadvertently stumbling upon potential improvements upon the concept of democracy.

Customers and patrons of a fogged-in island tavern with nothing better to do,
awaiting news on progress on the development of a better planning discourse
begin an idly speculative exploration of the idea of an artificial planner:
would such a creature be a better planning discourse participant?

– Hey Bog-Hubert: Early up and testing Vodçek’s latest incarnation of café cataluñia forte forte? The Fog Island Tavern mental alarm clock for the difficult-to-wake-up?

– Good morning, professor. Well, have you tried it? Or do you want to walk around in a fogged-in-morning daze for just a while longer?

– Whou-ahmm, sorry. Depends.

– Depends? On what?

– Whether this morning needs my full un-dazed attention yet.

– Makes sense. Okay. Let me ask you a question. I hear you’ve been up in town. Did you run into Abbé Boulah, by any chance? He’s been up there for a while, sorely neglecting his Fog Island Tavern duties here, ostensibly to help his buddy at the university with the work on his proposals for a better planning discourse system. Hey, Sophie: care to join us?

– Okay, good morning to you too. What’s this about a planning system?

– I’m not sure if it’s a ‘system’. I was asking the professor if he has heard whether Abbé Boulah and his buddy have made any progress on that. It’s more like a discourse platform than a ‘system’ – if by ‘system’ you mean something like an artificial planning machine – a robot planner.

– Oh, I’m relieved to hear that.

– Why, Sophie?

– Why? Having a machine make our plans for our future? That would be soo out of touch. Really. Just when we are just beginning to understand that WE have to take charge, to redesign the current ‘MeE’ system, from a new Awareness of the Whole, of our common place on the planet, in the universe, our very survival as a species? That WE have to get out from under that authoritarian, ME-centered linear machine systems thinking, to emerge into a sustainable, regenerative NEW SYSTEM?

– Wow. Sounds like we are in more trouble than I thought. So who’s doing that, how will we get to that New System?

– Hold on, my friends. Let’s not get into that New System issue again – haven’t we settled that some time ago here – that we simply don’t know yet what it should be like, and should try to learn more about what works and what doesn’t, before starting another ambitious grand experiment with another flawed theory?

– Okay, Vodçek, good point. But come to think of it – to get there, I mean to a better system with a better theory – wouldn’t that require some smart planning? You can’t just rely on everybody coming to that great awareness Sophie is talking about, for everything just to fall into place? So wouldn’t it be interesting to just speculate a bit about what your, I mean Abbé Boulah’s buddy’s planning machine, would have to do to make decent plans?

– You mean the machine he doesn’t, or, according to Sophie, emphatically shouldn’t even think about developing?

– That’s the one.

– Glad we have that cleared up… Well, since we haven’t heard anything new about the latest scandals up in town yet, it might be an interesting way to pass the time.

– Hmm.

– I hear no real objections, just an indecisive Hmm. And no, I don’t have any news from Abbé Boulah either – didn’t see him. He tends to stay out of public view. So it’s agreed. Where do we start?

– Well: how about at the beginning? What triggers a planning project? How does it start?

Initializing triggers for planning?

– Good idea, Sophie. Somebody having a problem – meaning something about the way things are that is perceived as unsatisfactory, hurtful, ugly, whatever: not the way it ought to be?

– Or: somebody just has a bright idea for doing something new and interesting?

– Or there’s a routine habit or institutional obligation to make preparations for the future – to lay in provisions for a trip, or heating material for the winter?

– Right: there are many different things that could trigger a call for ‘doing something about it’ – a plan. So what would the machine do about that?

– You are assuming that somebody – a human being – is telling the machine to do something? Or are you saying that it could come up with a planning project on its own?

– It would have to be programmed to recognize a discrepancy between what IS and what OUGHT to be, about a problem or need, wouldn’t it? And some human would have had to tell him that. Because it’s never the machine (or the human planner working on behalf of people) that’s hurting if there’s a problem; it’s only people who have problems.

– So it’s a Him already?

– Easy, Sophie. Okay: A She? You decide. Give her, him, it a name. So we can get on with it.

– Okay. I’d call it the APT – Abominable Planning Thing. And it’s an IT, a neuter.

– APT it is. Nicely ambiguous… For a moment I thought you meant Argumentative Planning Tool. Or Template.

– Let’s assume, for now, that somebody told it about a problem or a bright idea. So what would that APT do?

Ground rules, Principles?
Due consideration of all available information;
Whole system understanding guiding decisions
towards better (or at least not worse) outcomes
for all affected parties

– Wait: Shouldn’t we first find out some ground rules about how it’s going to work? For example, it wouldn’t do to just come up with some random idea and say ‘this is it’?

– Good point. You have any such ground rules in mind, professor?

– Sure. I think one principle is that it should try to gather and ‘duly consider’ ALL pertinent information that is available about the problem situation. Ideally. Don’t you agree, Sophie? Get the WHOLE picture? Wasn’t that part of the agenda you mentioned?

– Sounds good, professor. But is it enough to just ‘have’ all the information? Didn’t someone give a good description of the difference between ‘data’ (just givens, messages, numbers etc.) and ‘information’ – the process of data changing someone’s state of knowledge, insight, understanding?

– No, you are right. There must be adequate UNDERSTANDING – of what it means and how it all is related.

– I see a hot discussion coming up about what that really means: ‘understanding’… But go on.

– Well, next: wouldn’t we expect that there needs to be a process of developing or drawing a SOLUTION or a proposed PLAN – or several – from that understanding? Not just from the stupid data?

– Det er da svært så fordringsfull du er i dag, Sophie: Now you are getting astoundingly demanding here. Solutions based on understanding?

– Oh, quit your Norwegian bickering. I’ll go even more demanding: mustn’t there be a way to CONNECT all that understanding – all the concerns, data, facts, arguments – with any proposed DECISION, especially the final one that leads to action, implementation? If we ever get to that?

– Are you considering that all the affected folks will expect that the decision should end up making things BETTER for them? Or at least not WORSE than before? Would that be one of your ground rules?

– Don’t get greedy here, Vodçek. The good old conservative way is to ask some poor slobs to make some heroic Sacrifices for the Common Good. “Mourir pour des idées, d’accord, mais de mort lente…” – to die for ideas, agreed, but of a slow death – as Georges Brassens complains. But you are right: ideally, that would be a good way to put the purpose of the effort.

– All right, we have some first principles or expectations. We’ll probably add some more of those along the way, but I’d say it’s enough for a start. So what would our APT gizmo do to get things moving?

Obtaining information
Sources?

– I’d say it would start to inquire and assemble information about the problem’s IS state, first. Where is the problem, who’s hurting and how, etc. What caused it? Are there any ideas for how to fix it? And what would be the OUGHT part – for a problem as well as for a bright idea as the starting point?

– Sounds good, Bog-Hubert. Get the data. I guess there will be cases where the process actually starts with somebody having a bright idea for a solution. But that’s a piece of data too, put it in the pile. Where would it get all that information?

– Many sources, I guess. First: from whoever is hurting or affected in any way.

– By the problem, Vodçek? Or the solutions?

– Uh, I guess both. But what if there aren’t any solutions proposed yet?

– It means that the APT will have to check and re-check that whenever someone proposes a solution — throughout the whole process, doesn’t it? It’s not enough to run a single first survey of citizen preferences, like they usually do to piously meet the mandate for ‘citizen participation’. Information gathering, research, re-research, analysis will accompany the whole process.

– Okay. It’s a machine, it won’t get tired of repeated tasks.

– Ever heard of devices overheating, eh? But to go on, there will be experts on the particular kind of problem. There’ll be documented research, case studies from similar events, the textbooks, newspapers, letters to the editor, petitions, the internet. The APT would have to go through everything. And I guess there might have to be some actual ‘observation’, data gathering, measurements.

Distinctions, meaning
Understanding

– So now it has a bunch of stuff in its memory. Doesn’t it have to sort it somehow, so it can begin to do some real work on it?

– You don’t think gathering that information is work, Sophie?

– Sure, but just a bunch of megabytes of stuff… what would it do with it? Don’t tell me it can magically pull the solution from that pile of data!

– Right. Some seem to think they can… But you’ll have to admit that having all the information is part of the answer to our first expectation: to consider ALL available information. The WHOLE thing, remember? The venerable Systems Thinking idea?

– Okay. If you say so. So what do you mean by ‘consider’ – or ‘due consideration’? Just staring at the pile of data until understanding blossoms in your mind and the solution jumps out at you like the bikini-clad girl out of the convention cake? Or Aphrodite rising out of the data ocean?

– You are right. You need to make some distinctions, sort out things. What you have now, at best, are a bunch of concepts, vague, undefined ideas. The kind of ‘tags’ you use to google stuff.

– Yeah. Your argumentation buddy would say you’d have to ask for explanations of those tags – making sure it’s clear what they mean, right?

– Yes. Now he’d also make the distinction that some of the data are actual claims about the situation. Of different types: ‘fact’ claims about the current situation; ‘ought’ claims about what people feel the solution should be. And claims of ‘instrumental’ knowledge about what caused things to become what they are, and thus about what will happen when we do this or that: connecting some action on a concept ‘x’ with another concept ‘y’ – an effect. Useful when we are looking for ‘x’s to achieve the desired ‘y’s that we want – the ‘ought’ ideas – or to avoid the proverbial ‘unexpected / undesirable’ side- and after-effect surprises of our grand plans: ‘how’ to do things.

– You’re getting there. But some of the information will also consist of several claims arranged into arguments. Like: “Yes, we should do ‘x’ (as part of the plan) because it will lead to ‘y’, and ‘y’ ought to be…” And counterarguments: “No, we shouldn’t do ‘x’ because x will cause ‘z’ which ought not to be.”

– Right. You’ve been listening to Abbé Boulah’s buddy’s argumentative stories, I can tell. Or even reading Rittel? Yes, there will be differences of opinion – not only about what ought to be, but about what we should do to get what we want, about what causes what, even about what IS the case. Is there an old sinkhole on the proposed construction site? And if so, where? That kind of issue. And different opinions about those, too. So the data pile will contain a lot of contradictory claims of all kinds. Which means, for one thing, that we – even Spock’s relative APT – can’t draw any deductively valid conclusions from contradictory items in the data. ‘Ex contradictione sequitur quodlibet’, remember – from a contradiction you can conclude anything whatever. So APT can’t be a reliable ‘artificial intelligence’ or ‘expert system’ that gives you answers you can trust to be correct. We discussed that too once, didn’t we – there was an old conference paper from the 1990s about it. Remember?

– But don’t we argue about contradictory opinions all the time – and draw conclusions about them too?

– Sure. Living recklessly, eh? All the time, and especially in planning and policy-making. But it means that we can’t expect to draw ‘valid’ conclusions that are ‘true or false’ from our planning arguments. Just more or less plausible. Or ‘probable’ – for claims that are appropriately labeled that way.

Systems Thinking perspective
Versus Argumentative Model of Planning?

– Wait. What about the ‘Systems Thinking’ perspective — systems modeling and simulation? Isn’t that a better way to meet the expectation of ‘due consideration’ of the ‘whole system’? So should the APT develop a systems model from the information it collected?

– Glad you brought that up, Vodçek. Yes, it’s claimed to be the best available foundation for dealing with our challenges. So what would that mean for our APT? Is it going to have a split robopersonality between Systems and the Argumentative Model?

– Let’s look at both and see? There are several levels we can distinguish there. The main tenets of the systems approach have to do with the relationships between the different parts of a system – a system is a set of parts or entities, components, that are related in different ways – some say that ‘everything is connected / related to everything else’ – but a systems modeler will focus on the most significant relationships, and try to identify the ‘loops’ in that network of relationships. Those are the ones that will cause the system to behave in ways that can’t be predicted from the relationships between any of the individual pairs of entities in the network. Complexity; nonlinearity. Emergence.

– Wow. You’re throwing a lot of fancy words around there!

– Sorry, Renfroe; good morning, I didn’t see you come in. Doing okay?

– Yeah, thanks. Didn’t get hit by a nonlinearity, so far. This a dangerous place now, for that kind of thing?

– Not if you don’t put too much brandy in that café cataluñia Vodçek is brewing here.

– Hey, let’s get back to your systems model. Can you explain it in less nonlinear terms?

– Sure, Sophie. Basically, you take all the significant concepts you’ve found, put them into a diagram, a map, and draw the relationships between them. For example, cause-effect relationships; meaning increasing ‘x’ will cause an increase in ‘y’. Many people think that fixing a system can best be done by identifying the causes that brought about the state of affairs that we now see as a problem. This will add a number of new variables to the diagram, to the ‘understanding’ of the problem.

– They also look for the presence of ‘loops’ in the diagram, don’t they? – Where cause-effect chains come back to previous variables.

– Right, Vodçek. This is an improvement over a simple listing of all the pro and con arguments, for example – they also talk about relationships x – y, but only one at a time, so you don’t easily see the whole network, and the loops, in the network. So if you are after ‘understanding the system’, seeing the network of relationships will be helpful. To get a sense of its complexity and nonlinearity.
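The ‘network with loops’ idea can be sketched in code – a toy illustration with an invented three-variable diagram, not anything from an actual APT:

```python
# A cause-effect diagram as a directed graph, with a simple loop finder.
# The variable names and links are invented for illustration only.

def find_loops(links):
    """Return each loop as a sorted tuple of the variables it contains."""
    loops = []

    def walk(node, path):
        for nxt in links.get(node, []):
            if nxt in path:                      # came back to an earlier variable
                loops.append(path[path.index(nxt):])
            else:
                walk(nxt, path + [nxt])

    for start in links:
        walk(start, [start])
    # keep each loop once, regardless of which variable we started from
    return {tuple(sorted(cycle)) for cycle in loops}

# Illustrative diagram: price -> demand -> production -> price (a loop)
diagram = {
    "price": ["demand"],
    "demand": ["production"],
    "production": ["price"],
    "weather": ["demand"],       # feeds the loop but is not part of it
}
print(find_loops(diagram))
```

The loop (price, demand, production) is what makes the system’s behavior hard to predict from any single pair of links; a plain pro/con argument list never exposes it.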

– I think I understand: you understand a system when you recognize that it’s so loopy and complex and nonlinear that its behavior can’t be predicted so it can’t be understood?

– Renfroe… Professor, can you straighten him out?

– Sounds to me like he’s got it right on, Sophie. Going on: of course, to be really helpful, the systems modeler will tell you that you should find a way to measure each concept, that is, find a variable – a property of the system that can be measured in precise units.

– What’s the purpose of that, other than making it look more scientific?

– Well, Renfroe, remember the starting point, the problem situation. Oh, wait, you weren’t here yet. Okay: say there’s a problem. We described it as a discrepancy between what somebody feels Is the case and what Ought to be. Somebody complains about it being too hot in here. Now just saying ‘it’s too hot; it ought to be cooler’ is a starting point, but in order to become useful, you need to be able to say just what you mean by ‘cooler’. See, you are stating the Is/Ought problem in terms of the same variable, ‘temperature’. So to even see the difference between Is and Ought, you have to point to the levels of each. 85 degrees F? Too hot. Better: cool it to 72. Different degrees or numbers on the temperature scale.

– Got it. So now we have numbers, math in the system. Great. Just what we need. This early in the morning, too.

– I was afraid of that too. It’s bound to get worse…nonlinear. So in the argumentative approach – the arguments don’t show that? Is that good or bad?

– Good question. Of course you can get to that level, if you bug them enough. Just keep asking more specific questions.

– Aren’t there issues where degrees of variables are not important, or where variables have only two values: Present or not present? Remember that the argumentative model came out of architectural and environmental design, where the main concerns were whether or not to provide some feature: ‘should the entrance to the building be from the east, yes or no?’ or ‘Should the building structure be of steel or concrete?’ Those ‘conceptual’ planning decisions could often be handled without getting into degrees of variables. The decision to go with steel could be reached just with the argument that steel would be faster and cheaper than concrete, even before knowing just by how much. The arguments and the decision were then mainly yes or no decisions.

– Good points, Vodçek. Fine-tuning, or what they call ‘parametric’ planning comes later, and could of course cause much bickering, but doesn’t usually change the nature of the main design that much. Just its quality and cost…

Time
Simulation of systems behavior

– Right. And they also didn’t have to worry too much about the development of systems over time. A building, once finished, will usually stay that way for a good while. But for policies that would guide societal developments or economies, the variables people were concerned about will change considerably over time, so more prediction is called for, trying to beat complexity.

– I knew it, I knew it: time’s the culprit, the snake in the woodpile. I never could keep track of time…

– Renfroe… You just forget winding up your old alarm clock. Now, where were we? Okay: in order to use the model to make predictions about what will happen, you have to allocate each relationship step to some small time unit: x to y during the first time unit; y to z in the second, and so on. This will allow you to track the behavior of the variables of the system over time, given some initial setting, and make predictions about the likely effects of your plans. The APT computer can quickly calculate predictions for a variety of planning options.
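The time-stepped prediction idea lends itself to a quick sketch – a minimal, invented two-variable model, not a real simulation engine:

```python
# Discrete-time simulation: each relationship is applied once per time
# step, so variable values can be tracked over time. The two-variable
# model below is a made-up illustration.

def simulate(state, rules, steps):
    """Advance a dict of variables by applying the update rules per step."""
    history = [dict(state)]
    for _ in range(steps):
        # all updates are computed from the current state, then applied together
        state = {var: rule(state) for var, rule in rules.items()}
        history.append(dict(state))
    return history

# Illustrative rules: y follows x with a one-step delay; x grows with y.
rules = {
    "x": lambda s: s["x"] + 0.1 * s["y"],
    "y": lambda s: s["x"],
}
run = simulate({"x": 1.0, "y": 0.0}, rules, steps=3)
for t, s in enumerate(run):
    print(t, s)
```

Each run produces one trajectory for one initial setting; trying several plan options just means re-running with different initial values or rules.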

– I’ve seen some such simulation predictions, yes. Amazing. But I’ve always wondered how they can make such precise forecasts – those fine crisp lines over several decades: how do they do that, when for example our meteorologists can only make forecasts of hurricane tracks of a few days only, tracks that get wider like a fat trumpet in just a few days? Are those guys pulling a fast one?

– Good point. The answer is that each simulation only shows the calculated result of one specific set of initial conditions and settings of relationships equations. If you make many forecasts with different numbers, and put them all on the same graph, you’d get the same kind of trumpet track. Or even a wild spaghetti plate of tracks.
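That ‘trumpet’ effect is easy to demonstrate – a toy sketch with an invented growth model, re-running the same crisp forecast from slightly different starting values:

```python
# One run of a model gives one crisp forecast line; many runs with
# slightly different initial values give a spreading "trumpet" of
# outcomes. The growth model and numbers are arbitrary illustrations.
import random

def forecast(x0, steps, growth=0.05):
    """One crisp forecast line for a simple compound-growth model."""
    line = [x0]
    for _ in range(steps):
        line.append(line[-1] * (1 + growth))
    return line

random.seed(0)
# 50 forecasts, each starting within +/-2% of the nominal value 100
runs = [forecast(100 * (1 + random.uniform(-0.02, 0.02)), steps=20)
        for _ in range(50)]

spread_start = max(r[0] for r in runs) - min(r[0] for r in runs)
spread_end = max(r[-1] for r in runs) - min(r[-1] for r in runs)
print(f"spread at t=0: {spread_start:.2f}, at t=20: {spread_end:.2f}")
```

A small initial uncertainty widens steadily over the forecast horizon; plotting all fifty lines together would give exactly the fat trumpet (or spaghetti plate) Vodçek describes.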

– I am beginning to see why those ‘free market’ economists had such an advantage over people who wanted to gain some control of the economy. They just said: the market is unpredictable. It’s pointless to make big government plans and laws and regulations. Just get rid of all the regulations, let the free market play it out. It will control and adapt and balance itself by supply and demand and competition and creativity.

– Yeah, and if something goes wrong, blame it on the remaining regulations of big bad government. Diabolically smart and devious.

– But they do appreciate government research grants, don’t they? Wait. They get them from the companies that just want to get rid of some more regulations. Or from think tanks financed by those companies.

– Hey, this is irresponsibly interesting but way off our topic, wouldn’t you say?

– Right, Vodçek. Are you worried about some government regulation – say, about the fireworks involved in your café catastrofia? But okay. Back to the issue.

– So, to at least try to be less irresponsible, our APT thing would have systems models and be able to run simulations. A simulation, if I understand what you were saying, would show how the different variables in the system would change over time, for some assumed initial setting of those variables. That initial setting would be different from the ‘current’ situation, though, wouldn’t it? So where does the proposed solution in the systems model come from? Where are the arguments? Does the model diagram show what we want to achieve? Or just the ‘current state’?

Representation of plan proposals
and arguments in the systems model?
Leverage points

– Good questions, all. They touch on some critical problems with the systems perspective. Let’s take one at a time. You are right: the usual systems model does not show a picture of a proposed solution. To do that, I think we’ll have to expand a little upon our description of a plan: would you agree that a plan involves some actions by some actors, using some resources, acting upon specific variables in the system? Usually not just one variable but several. So a plan would be described by those variables, and the additional concepts of actions, actors, resources etc. Besides the usual sources of plans – somebody’s ‘brilliant idea’, some result of a team brainstorming session, or just an adaptation of a precedent, a ‘tried and true’ known solution with a little new twist – the systems modeler may have played around with his model and identified some ‘leverage points’ in the system: variables where modest and easy-to-do changes can bring about significant improvement elsewhere in the system. Those are suggested starting points for solution ideas.

– So you are saying that the systems tinkerer should get with it and add all the additional solution description to the diagram?

– Yes. And that would raise some new questions. What are those resources needed for the solution? Where would they come from, are they available? What will they cost? And more: wouldn’t just getting all that together cause some new effects, consequences, that weren’t in the original data collection, and that some other people than those who originally voiced their concerns about the problem would now be worried about? So your data collection component will have to go back to do some more collecting. Each new solution idea will need its own new set of information.

– There goes your orderly systematic procedure all right. That may go on for quite some time, eh?

– Right. Back and forth, if you want to be thorough. ‘Parallel processing’. And it will generate more arguments that will have to be considered, with questions about how plausible the relationship links are, how plausible the concerns about the effects – the desirable / undesirable outcomes. More work. So it will often be shouted down with the usual cries of ‘analysis paralysis’.

Intelligent analysis of data:
Generating ‘new’ arguments?

– Coming to think of it: if our APT has stored all the different claims it has found – in the literature, the textbooks, previous cases, and in the ongoing discussions, would it be able to construct ‘new’ arguments from those? Arguments the actual participants haven’t thought about?

– Interesting idea, Bog-Hubert. – It’s not even too difficult. I actually heard our friend Dexter explain that recently. It would take the common argument patterns – like the ones we looked at – and put claim after claim into them, to see how they fit: all the if-then connections to a proposal claim would generate more arguments for and against the proposal. Start looking at an ‘x’ claim of the proposal. Then search for (‘google’)  ‘x→ ?’:  any ‘y’s in the data that have been cited as ‘caused by x’. If a ‘y’ you found was expressed somewhere else as ‘desirable or undesirable’ – as a deontic claim, — it makes an instant ‘new’ potential argument. Of course, whether it would work as a ‘pro’ or a ‘con’ argument in some participant’s mind would depend on how that participant feels about the various premises.
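The pattern-matching recipe the professor attributes to Dexter can be sketched directly – all the claims below are invented placeholders, just to show the mechanics:

```python
# Pair stored "x causes y" claims with stored "y is desirable/undesirable"
# claims to assemble candidate arguments nobody has stated yet.
# Every claim here is an invented placeholder.

factual_links = [            # instrumental claims: (x, y) meaning x -> y
    ("bike lanes", "less car traffic"),
    ("bike lanes", "slower emergency response"),
    ("tolls", "less car traffic"),
]
deontic_claims = {           # deontic claims voiced somewhere in the discourse
    "less car traffic": "desirable",
    "slower emergency response": "undesirable",
}

def generate_arguments(links, deontics):
    """Yield (proposal, outcome, valuation) triples as candidate arguments."""
    return [(x, y, deontics[y]) for x, y in links if y in deontics]

for x, y, v in generate_arguments(factual_links, deontic_claims):
    print(f"Candidate: do '{x}' because it leads to '{y}', which is {v}.")
```

Whether each generated candidate ends up working as ‘pro’ or ‘con’ for a given participant is a separate question, settled by that participant’s own plausibility judgments about the premises.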

– What are you saying, professor? This doesn’t make sense. A ‘pro’ argument is a ‘pro’ argument, and ‘con’ argument is a ‘con’ argument. Now you’re saying it depends on the listener?

– Precisely. I know some people don’t like this. But consider an example. People are discussing a plan P; somebody A makes what he thinks is a ‘pro’ argument: “Let’s do P because P will produce Q; and Q is desirable, isn’t it?” Okay, for A it is a pro argument, no question. Positive plausibility, he assumes, for P→Q as well as for Q; so it would get positive plausibility pl for P. Now for curmudgeon B, who would also like to achieve Q but is adamant that P→Q won’t work, (getting a negative pl) that set of premises would produce a negative pl for P, wouldn’t it? Similarly, for his neighbor C, who would hate for Q to become true, but thinks that P→Q will do just that, that same set of premises also is a ‘con’ argument.
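The listener-relative point can be made concrete with a tiny sketch – the plausibility numbers for listeners A, B and C are invented, and the sign-of-product rule is a simplifying assumption:

```python
# The same premises "P -> Q" and "Q ought to be" read as 'pro' or 'con'
# depending on the listener's own plausibility judgments, each on a
# scale from -1 (totally implausible) to +1 (totally plausible).

def argument_direction(pl_link, pl_outcome):
    """Sign of (P -> Q plausibility) x (Q desirability) decides pro/con."""
    score = pl_link * pl_outcome
    return "pro" if score > 0 else "con" if score < 0 else "neutral"

# Same premises, three listeners (invented judgments):
print(argument_direction(+0.8, +0.9))  # A: believes P -> Q, wants Q
print(argument_direction(-0.7, +0.9))  # B: wants Q, doubts P -> Q
print(argument_direction(+0.8, -0.9))  # C: believes P -> Q, hates Q
```

So the ‘direction’ of the argument is not a property of its premises at all, but of the listener’s assessment of them, which is exactly why labeling arguments pro or con by authorial intent is misleading.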

– So what you’re saying is that all the programs out there, that show ‘dialogue maps’ identifying all arguments as pro or con, as they were intended by their authors, are patently ignoring the real nature and effects of arguments?

– I know some people have been shocked – shocked! – by these heretical opinions – they have been written up. But I haven’t seen any serious rebuttals; those companies, if they have heard of them, have chosen to ignore them. Haven’t changed their evil ways, though…

– So our devious APT could be programmed to produce new arguments. More arguments. Just what we need. The arguments can be added to the argument list, but I was going to ask you before: how would the deontic claims, the ‘oughts’, be shown in the model?

– You’d have to add another bubble to each variable bubble, right? Now, we have the variable itself, the value of each variable in the current IS condition, the value of the variable if it’s part of a plan intervention, and the desired value – hey: at what time?

– You had to put the finger on the sore spot, Vodçek. Bad boy. Not only does this make the diagram a lot less clean, simple, and legible. Harder to understand. And showing what somebody means by saying what the solution ought to achieve, when all the variables are changing over time, now becomes a real challenge. Can you realistically expect that a desired variable should stay ‘stable’ at one desired value all the time, after the solution is implemented? Or would people settle for something like: remaining within a range of acceptable values? Or, if a disturbance has occurred, return to a desired value after some reasonably short specified time?

– I see the problem here. Couldn’t the diagram at least show the central desired value, and then let people judge whether a given solution comes close enough to be acceptable?

– Remember that we might be talking about a large number of variables that represent measures of how well all the different concerns have been met by a proposed solution. But if you don’t mind complex diagrams, you could add anything to the systems model. Or you can use several diagrams. Understanding can require some work, not just sudden ‘aha!’ enlightenment.

Certainty about arguments and predictions
Truth, probability, plausibility and relative importance of claims

– And we haven’t even talked about the question of how sure we can be that a solution will actually achieve a desired result.

– I remember our argumentative friends at least claimed to have a way to calculate the plausibility of a plan proposal based on the plausibility of each argument and the weight of relative importance of each deontic, each ought concern. Would that help?

– Wait, Bog-hubert: how does that work, again? Can you give us the short explanation? I know you guys talked about that before, but…

– Okay, Sophie: The idea is this: a person would express how plausible she thinks each of the premises of an argument is, on some plausibility scale of, say, +1, meaning ‘totally plausible’, to -1, meaning ‘totally implausible’, with a midpoint of zero meaning ‘don’t know, can’t tell’. These plausibility values together will then give you an ‘argument plausibility’ – on the same scale, either by multiplying them or by taking the lowest score as the overall result. The weakest link in the chain, remember. Then: multiplying that plausibility with the weight of relative importance of the ought-premise in the argument, which is a value between zero and +1, such that the weights of all the ‘oughts’ in all the arguments about the proposal add up to +1. That will give you the ‘argument weight’ of each argument; and all the argument weights together will give you the proposal plausibility – again, on the same scale of +1 to -1, so you’d know what the score means. A value higher than zero means it’s somewhat plausible; a value lower than zero and close to -1 means it’s so implausible that it should not be implemented. But we aren’t saying that this plausibility could be used as the final decision measure.
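Bog-Hubert’s recipe can be written out as a small sketch – using the ‘lowest score’ variant of the weakest-link rule, with invented numbers:

```python
# Premise plausibilities on [-1, +1] combine into an argument
# plausibility (weakest link = lowest score); each is multiplied by the
# ought-premise weight, and the weighted scores sum to a proposal
# plausibility on the same scale. All numbers are invented.

def argument_plausibility(premise_pls):
    """'Weakest link' rule: the lowest premise plausibility."""
    return min(premise_pls)

def proposal_plausibility(arguments):
    """Sum of argument plausibility x ought-weight over all arguments.

    arguments: list of (premise_plausibilities, ought_weight);
    the ought-weights are assumed to add up to 1 across all arguments.
    """
    return sum(argument_plausibility(pls) * w for pls, w in arguments)

# Invented example: two pro-ish arguments, one strong counterargument.
args = [
    ([0.8, 0.9], 0.5),    # plausible link, important concern
    ([0.6, 0.4], 0.2),    # weaker argument, minor concern
    ([-0.7, 0.9], 0.3),   # implausible premise makes this count against
]
print(round(proposal_plausibility(args), 2))
```

A result above zero means the proposal comes out somewhat plausible on balance; near -1, it should not be implemented. As the dialogue stresses, this score is a deliberation aid, not the final decision measure.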

– Yeah, I remember now. So that would have to be added to the systems model as well?

– Yes, of course – but I have never seen one that does that yet.

‘Goodness’ of solutions
not just plausibility?

– But is that all? I mean: ‘plausibility’ is fine. If there are several proposals to compare: is plausibility the appropriate measure? It doesn’t really tell me how good the plan outcome will be? Even comparing a proposed solution to the current situation: wouldn’t the current situation come up with a higher plausibility — simply because it’s already there?

– You’ve got a point there. Hmm. Let me think. You have just pointed out that both these illustrious approaches – the argumentative model, at least as we have discussed it so far, as well as the systems perspective, for all its glory – have grievously sidestepped the question of what makes a solution, a systems intervention, ‘good’ or ‘bad’. The argument assessment work, because it was just focused on the plausibility of arguments, as the first necessary step that had not been looked at yet. And the systems modeling, focusing on the intricacies of the model relations and simulation, leaving the decision and its preparatory evaluation, if any, to the ‘client’. Fair enough; they are both meritorious efforts, but it leaves both approaches rather incomplete. Not really justifying the claims of being THE ultimate tools to crack the wicked problems of the world. It makes you wonder: why didn’t anybody call the various authors on this?

– But haven’t there long been methods, procedures, for people to evaluate the presumed ‘goodness’ of plans? Why wouldn’t they have been added to either approach?

– They have, just as separate, detached and not really integrated extra techniques. Added, cumbersome complications, because they represent additional effort and preparation, even for small groups. And never even envisaged for large public discussions.

– So would you say there are ways to add the ‘goodness’ evaluation into the mix? We’ve already brought systems and arguments closer together? You say there are already tools for doing that?

– Yes, there are. For example, as part of a ‘formal’ evaluation procedure, you can ask people to explain the basis of their ‘goodness’ judgment about a proposed solution by specifying a ‘criterion function’ that shows how that judgment depends on the values of a system variable. The graph of it looks like this: on one axis it would have positive (‘like’, ‘good’, ‘desirable’) judgment values on the positive side, and ‘dislike’, ‘bad’, ‘undesirable’ values on the negative one, with a midpoint of ‘neither good nor bad’ or ‘can’t decide’. And the specific system variable on the other axis – for example, that temperature scale from our example a while ago. So by drawing a line in the graph that touches the ‘best possible’ judgment score at the person’s most comfortable temperature, and curves down towards ‘so-so’, and on down to ‘very bad’ and ultimately ‘intolerable, couldn’t get worse’, a person could ‘explain’ the ‘objective’, measurable basis of her subjective goodness judgment.
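A toy version of such a criterion function: it maps a measurable system variable (room temperature, the earlier example) onto a goodness judgment on a -1 to +1 scale. The tent shape, the 21-degree optimum, and the tolerance are purely illustrative assumptions about one person’s judgment:

```python
# Hypothetical criterion function: subjective goodness as a
# function of one measurable system variable (temperature in C).

def temperature_criterion(temp_c, best=21.0, tolerance=12.0):
    """+1 at the most comfortable temperature, falling off linearly
    and clamped at -1 ('intolerable, couldn't get worse')."""
    deviation = abs(temp_c - best) / tolerance
    return max(-1.0, 1.0 - 2.0 * deviation)

# 'best possible' at 21 C, 'neither good nor bad' at 27 C,
# 'intolerable' from 33 C on:
judgments = [temperature_criterion(t) for t in (21.0, 27.0, 40.0)]
```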

– But that’s just one judgment out of many others she’d have to make about all the other system variables that have been declared ‘deontic’ targets? How would you get to an overall judgment about the whole plan proposal?

– There are ways to ‘aggregate’ all those partial judgments into an overall deliberated judgment. All worked out in the old papers describing the procedure. I can show you that if you want. But that’s not the real problem here – you don’t see it?

– Huh?

The problem of ‘aggregation’
of many different personal, subjective judgments
into group or collective decision guides

– Well, tell me this, professor: would our APTamajig have the APTitude to make all those judgments?

– Sorry, Bog-Hubert: No. Those judgments would be judgments of real persons. The APT machine would have to get those judgments from all the people involved.

– That’s just too complicated. Forget it.

– Well, commissioner, — you’ve been too quiet here all this time – remember: the expectation was to make the decision based on ‘due consideration’ of all concerns. Of everybody affected?

– Yes, of course. Everybody has the right to have his or her concerns considered.

– So wouldn’t ‘knowing and understanding the whole system’ include knowing how everybody affected feels about those concerns? Wasn’t that, in a sense, part of your oath of office – to serve all members of the public to the best of your knowledge and abilities? So now that we have a way to express that, you don’t want to know about it because it’s ‘too complicated’?

– Cut the poor commissioner some slack: the systems displays would get extremely crowded trying to show all that. And adding all that detail will not really convey much insight.

– It would, professor, if the way that it’s being sidestepped wasn’t actually a little more tricky, almost deceptive. Commissioner, you guys have some systems experts on your staff, don’t you? So where do they get those pristine performance track printouts of their simulation models?

– Ah. Huh. Well, that question never came up.

– But you are very concerned about public opinion, aren’t you? The polls, your user preference surveys?

– Oh, yeah: that’s a different department – the PR staff. Yes, they get the Big Data about public opinions. Doing a terrific job at it too, and we do pay close attention to that.

– But – judging just from the few incidents in which I have been contacted by folks with such surveys – those are just asking general questions, like ‘How important is it to attract new businesses to the city?’ Nobody has ever asked me to do anything like those criterion functions the professor was talking about. So if you’re not getting that: what’s the basis for your staff recommendations about which new plan you should vote for?

– Best current practice: we have those general criteria, like growth rate, local or regional product, the usual economic indicators.

– Well, isn’t that the big problem with those systems models? They have to assume some performance measure to make a recommendation. And that is usually one very general aggregate measure – like the quarterly profit for companies. Or your Gross National Product, for countries. The one all the critics now are attacking, for good reasons, I’d say, — but then they just suggest another big aggregate measure that nobody really can be against – like Gross National Happiness or similar well-intentioned measures. Sustainability. Systemicity. Whatever that means.

– Well, what’s wrong with those? Are you fixin’ to join the climate change denier crowd?

– No, Renfroe. The problem with those measures is that they assume that all issues have been settled, all arguments resolved. But the reality is that people still do have differences of opinions, there will still be costs as well as benefits for all plans, and those are all too often not fairly distributed. The big single measure, whatever it is, only hides the disagreements and the concerns of those who have to bear more of the costs. Getting shafted in the name of overall social benefits.

Alternative criteria to guide decisions?

– So what do you think should be done about that? And what about our poor APT? It sounds like most of the really important stuff is about judgments it isn’t allowed or able to make? Would even a professional planner named APT – ‘Jonathan Beaujardin APT, Ph.D M.WQ, IDC’ — with the same smarts as the machine, not be allowed to make such judgments?

– As a person, an affected and concerned citizen, he’d have the same right as everybody else to express his opinions, and bring them into the process. As a planner, no. Not claiming to judge ‘on behalf’ of citizens – unless they have explicitly directed him to do that, and told him how… But now the good Commissioner says he wouldn’t even need to understand his own basis of judgment, much less make it count in the decision?

– Gee. That really explains a lot.

– Putting it differently: Any machine – or any human planner, for that matter, however much they try to be ‘perfect’ – trying to make those judgments ‘on behalf’ of other people, is not only imperfect but wrong, unless it has somehow obtained knowledge about those feelings about good or bad of others, and has found an acceptable way of reconciling the differences into some overall common ‘goodness’ measure. Some people will argue that there isn’t any such thing: judgments about ‘good’ or ‘bad’ are individual, subjective judgments; they will differ, and there’s no method by which those individual judgments can be aggregated into a ‘group’ judgment that wouldn’t end up taking sides, one way or the other.

– You are a miserable spoilsport, Bog-Hubert. Worse than Abbé Boulah! He probably would say that coming to know good and bad, or rather thinking that you can make meaningful judgments about good or bad IS the original SIN.

– I thought he’s been excommunicated, Vodçek? So does he have any business saying anything like that? Don’t put words in his mouth when he’s not here to spit them back at you. Still, even if Bog-Hubert is right: if that APT is a machine that can process all kinds of information faster and more accurately than humans, isn’t there anything it can do to actually help the planning process?

– Yes, Sophie, I can see a number of things that can be done, and might help.

– Let’s hear it.

– Okay. We were assuming that APT is a kind of half-breed argumentative-systems creature, except we have seen that it can’t make up new claims, plausibility judgments, or goodness judgments on its own. It must get them from humans; only then can it use them for things like making new arguments. If it does that – it may take some bribery to get everybody to make and give those judgments, mind you – it can of course store them, analyze them, and come up with all kinds of statistics about them.
One kind of information I’d find useful would be to find out exactly where people disagree, and how much, and for what reasons. I mean, people argue against a policy for different reasons – one because he doesn’t believe that the policy will be effective in achieving the desired goal – the deontic premise that he agrees with – and the other because she disagrees with the goal.

– I see: Some people disagree with the US health plan they call ‘Obamacare’ because they genuinely think it has some flaws that need correcting, and perhaps with good reasons. But others can’t even name any such flaws and just rail against it, calling it a disaster or a trainwreck, simply because – when you strip away all the reasons they can’t substantiate – it’s Obama’s.

– Are you saying Obama should have called it Romneycare, since it was alleged to be very similar to what Romney did in Massachusetts when he was governor there? Might have gotten some GOP support?

– Let’s not get into that quarrgument here, guys. Not healthy. Stay with the topic. So our APT would be able to identify those differences, and other discourse features that might help decide what to do next – get more information, do some more discussion, another analysis, whatever. But so far, its systems alter ego hasn’t been able to show any of that in the systems model diagram, to make that part of the holistic information visible to the other participants in the discourse.

– Wouldn’t that require that it become fully conscious of its own calculations, first?

– Interesting question, Sophie. Conscious. Hmm. Yes: my old car wouldn’t show me a lot of things on the dashboard that were potential problems – whether a tire was slowly going flat or the left rear turn indicator was out – so you could say it wasn’t aware enough, — even ‘conscious?’ — of those things to let me know. The Commissioner’s new car does some of that, I think. Of course my old one could be very much aware but just ornery enough to leave me in the dark about them; we’ll never know, eh?

– Who was complaining about running off the topic road here just a while ago?

– You’re right, Vodçek: sorry. The issue is whether and how the system could produce a useful display of those findings. I don’t think it’s a fundamental problem, just work to do. My guess is that all that would need several different maps or diagrams.

Discourse-based criteria guiding collective decisions?

– So let’s assume that not only all those judgments could be gathered, stored, analyzed and the results displayed in a useful manner. All those individual judgments, the many plausibility and judgment scores and the resulting overall plan plausibility and ‘goodness’ judgments. What’s still open is this: how should those determine or at least guide the overall group’s decision? In a way that makes it visible that all aspects, all concerns were ‘duly considered’, and ending up in a result that does not make some participants feel that their concerns were neglected or ignored, and that the result is – if not ‘the very best we could come up with’ then at least somewhat better than the current situation and not worse for anybody?

– Your list of aspects there already throws out a number of familiar decision-making procedures, my friend. Leaving the decision to authority, which is what the systems folks have cowardly done, working for some corporate client, (who also determines the overall ‘common good’ priorities for a project, that will be understood to rank higher than any individual concerns) – that’s out. Not even pretending to be transparent or connected to the concerns expressed in the elaborate process. Even traditional voting, that has been accepted as the most ‘democratic’ method, for all its flaws. Out. And don’t even mention ‘consensus’ or the facile ‘no objection?‘ version. What could our APT possibly produce that can replace those tools? Do we have any candidate tools?

– If you already concede that ‘optimal’ solutions are unrealistic and we have to make do with ‘not worse’ – would it make sense to examine possible adaptations of one of the familiar techniques?

– It may come to that if we don’t find anything better – but I’d say let’s look at the possibilities for alternatives in the ideas we just discussed, first? I don’t feel like going through the pros and cons about our current tools. It’s been done.

– Okay, professor: Could our APT develop a performance measure made up of the final scores of the measures we have developed? Say, the overall goodness score modified by the overall plausibility score a plan proposal achieved?

– Sounds promising.

– Hold your horses, folks. It sounds good for individual judgment scores – may even tell a person whether she ought to vote yes or no on a plan – but how would you concoct a group measure from all that – especially in the kind of public asynchronous discourse we have in mind? Where we don’t even know what segment of the whole population is represented by the participants in the discourse and its cumbersome exercises, and how they relate to the whole public populations for the issue at hand?

– Hmm. You got some more of that café catawhatnot, Vodçek?

– Sure – question got you flummoxed?

– Well, looks like we’ll have to think for a while. Think it might help?

– What an extraordinary concept!

– Light your Fundador already, Vodçek, and quit being obnoxious!

– Okay, you guys. Let’s examine the options. The idea you mentioned, Bog-Hubert, was to combine the goodness score and the plausibility score for a plan. We could do that for any number of competing plan alternatives, too.

– It was actually an idea I got from Abbé Boulah some time ago. At the time I just didn’t get its significance.

– Abbé Boulah? Let’s drink to his health. So we have the individual scores: the problem is to get some kind of group score from them. The mean – the average – of those scores is one; we discussed the problems with the mean many times here, didn’t we? It obscures the way the scores are distributed on the scale: you get the same result from a bunch of scores tightly grouped around that average as you’d get from two groups of extreme scores at opposite ends of the scale. Can’t see the differences of opinion.

– That can be somewhat improved upon if you calculate the variance – it measures the extent of disagreement among the scores. So if you get two alternatives with the same mean, the one with the lower variance will be the less controversial one. The range is a crude version of the same idea – just take the difference between the highest and the lowest score; the better solution is the one with a smaller range.
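A sketch of those group statistics: two sets of individual scores with the same mean, where the variance and range expose the hidden disagreement. The sample numbers are made up for illustration:

```python
# Same mean, very different controversy: variance and range
# distinguish a consensus from a polarized split.

def mean(scores):
    return sum(scores) / len(scores)

def variance(scores):
    m = mean(scores)
    return sum((s - m) ** 2 for s in scores) / len(scores)

def score_range(scores):
    return max(scores) - min(scores)

consensus = [0.1, 0.2, 0.15, 0.25, 0.3]    # grouped near the mean
polarized = [-0.8, 1.0, -0.9, 1.0, 0.7]    # two camps at the extremes
# both means come out at 0.2, but variance and range differ sharply
```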

– What if there’s only one proposal?

– Well, hmm; I guess you’d have to look at the scores and decide if it’s good enough.

– Let’s go back to what we tried to do – the criteria for the whole effort: wasn’t there something about making sure that nobody ends up in worse shape in the end?

– Brilliant, Sophie – I see what you are suggesting. Look at the lowest scores in the result and check whether they are lower or higher than, than …

– Than what, Bog-Hubert?

– Let me think, let me think. If we had a score for the assessment of the initial condition for everybody (or for the outcome that would occur if the problem isn’t taken care of) then an acceptable solution would simply have to show a higher score than that initial assessment, for everybody. Right? The higher the difference, even something like the average, the better.

– Unusual idea. But if we don’t have the initial score?

– I guess we’d have to set some target threshold for any lowest score – no lower than zero (not good, not bad) or at least a +0.5 on a +2/-2 goodness scale, for the worst-off participant score? That would be one way to take care of the worst-off affected folks. The better-off people couldn’t complain, because they are doing better, according to their own judgment. And we’d have made sure that the worst-off outcomes aren’t all that bad.
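A sketch of that acceptance test, under stated assumptions: each participant has scored the plan outcome (and, if available, the initial situation) on their own goodness scale; the function name and the +0.5 default threshold are illustrative choices:

```python
# 'Nobody worse off' test sketched from the two variants above.

def acceptable(outcome_scores, initial_scores=None, threshold=0.5):
    """With initial scores: every participant must judge the outcome
    better than their own initial situation. Without them: the
    worst-off participant's score must clear the threshold."""
    if initial_scores is not None:
        return all(o > i for o, i in zip(outcome_scores, initial_scores))
    return min(outcome_scores) >= threshold

# everybody improves on their own initial assessment -> acceptable:
improves_for_all = acceptable([0.6, 1.2, 0.9],
                              initial_scores=[0.0, 1.0, -0.5])
# no initial scores, and the worst-off score (0.4) misses +0.5:
clears_threshold = acceptable([0.6, 1.2, 0.4])
```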

– You’re talking as if ‘we’ or that APT thing is already set up and doing all that. The old Norwegian farmer’s rule says: Don’t sell the hide before the bear is shot! It isn’t that easy though, is it? Wouldn’t we need a whole new department, office, or institution to run those processes for all the plans in a society?

– You have a point there, Vodçek. A new branch of government? Well now that you open that Pandora’s box: yes, there’s something missing in the balance.

– What in three twisters name are you talking about, Bog-Hubert?

– Well, Sophie. We’ve been talking about the pros and cons of plans. In government, I mean the legislative branch that makes the laws, that’s what the parties do, right? Now look at the judicial branch. There, too, they are arguing – prosecutor versus defense attorney – like the parties in the House and Senate. But then there’s a judge and the jury: they are looking at the pros and cons of both sides, and they make the decision. Where is that jury or judge ‘institution’ in the legislature? Both ‘chambers’ are made up of parties, who too often look like they are concerned about gaining or keeping their power, their majority, their seats, more than the quality of their laws. Where’s the jury? The judge? And to top that off: even the Executive is decided by the party, in a roundabout process that looks perfectly designed to blow the thinking cap off every citizen. A spectacle! Plenty of circenses but not enough panem. Worse than old Rome…

– Calm down, Bog-Hubert. Aren’t they going to the judiciary to resolve quarrels about their laws, though?

– Yes, good point. But you realize that the courts can only make decisions based on whether a law complies with the Constitution or prior laws – issues of fact, of legality. Not about the quality, the goodness of the law. What’s missing is just what Vodçek said: another entity that looks at the quality and goodness of the proposed plans and policies, and makes the decisions.

– What would the basis of judgment of such an entity be?

– Well, didn’t we just draw up some possibilities? The concerns are those that have been discussed, by all parties. The criteria that are drawn from all the contributions of the discourse. The party ‘in power’ would only use the criteria of its own arguments, wouldn’t it? Just like they do now… Of course the idea will have to be discussed, thought through, refined. But I say that’s the key missing element in the so-called ‘democratic’ system.

– Abbé Boulah would be proud of you, Bog-Hubert. Perhaps a little concerned, too? Though I’m still not sure how it all would work, for example considering that the humans in the entity or ‘goodness panel’ are also citizens, and thus likely ‘party’. But that applies to the judge and jury system in the judicial as well. Work to do.

– And whatever decision they come up with, that worst-off guy could still complain that it isn’t fair, though?

– Better than having 49% of the population peeved and feeling taken advantage of? Commissioner: what do you say?

– Hmmm. That one guy might be easier to buy off than the 49%, yes. But I’m not sure I’d get enough financing for my re-election campaign with these ideas. The money doesn’t come from the worst-off folks, you know…

– Houston, we have a problem …

A paradoxical effect of thorough examination of planning pros and cons

In the Fog Island Tavern:

– Bog-Hubert, I hear you had a big argument in here with Professor Balthus last night? Sounds like I missed a lot of fun?
– Well, Sophie, I’m not sure it was all fun; at least the good prof seemed quite put out about it.
– Oh? Did you actually admit you haven’t read his latest fat book yet?
– No. Well, uh, I haven’t read the book yet. And he knows it. But it actually was about one of Abbé Boulah’s pet peeves, or should I say his buddy’s curious findings, that got him all upset.
– Come on, do tell. What about those could upset the professor — I thought he was generally in favor of the weird theories of Abbé Boulah’s buddy?
– Yes — but it seems he had gotten some hopes up about some of their possibilities — mistakenly, as I foolishly started to point out to him. He thought that the recommendations about planning discourse and argument evaluation they keep talking about might help collective decision-making achieve more confidence and certainty about the issues they have to resolve, the plans they have to adopt or reject.
– Well, isn’t that what they are trying to do?
– Sure — at least that was what the research started out to do, from what I know. But they ran into a kind of paradoxical effect: It looks like the more carefully you try to evaluate the pros and cons about a proposed plan, the less sure you end up being about the decision you have to make. Not at all the more certain.
– Huh. That doesn’t sound right. And the professor didn’t straighten you out on that?
– I don’t think so. Funny thing: I started out agreeing that he must be right: Don’t we all expect decision-makers to carefully examine all those pros and cons, how people feel about a proposed plan, until they become confident enough — and can explain that to everybody else — that the decision is the right one? But when I began to explain Abbé Boulah’s concern — as he had mentioned it to me some time ago — I became more convinced that there’s something wrong with that happy expectation. And that is what Abbé Boulah’s research seems to have found out.
– You are speaking strangely here: on examination, you became more convinced that the more we examine the pros and cons, the less convinced we will get? Can you have it both ways?
– Yeah, it’s strange. Somebody should do some research on that — but then again, if it’s right, will the research come up with anything to convince us?
– I wish you’d explain that to me. I’ll buy you a glass of Zinfandel…
– Okay, maybe I need to rethink the whole thing again myself. Well, let me try: Somebody has proposed a plan of action, call it A, to remedy some problem or improve some condition. Or just to do something. Make a difference. So now you try to decide whether you’d support that plan, or if you were king, whether you’d go ahead with it. What do you do?
– Well, as you said: get everybody to tell you what they see as the advantages and disadvantages of the plan. The pros and cons.
– Right. Good start. And now you have to examine and ‘weigh’ them, carefully, like your glorious leaders always promise. You know how to do that? Other than to toss a coin?
– Hmm. I never heard anybody explain how that’s done. Have to think about it.
– Well, that’s what Abbé Boulah’s buddy had looked at and developed a story about how it could be done more thoroughly. He looked at the kinds of arguments people make, and found the general pattern of what he calls the ‘standard planning argument’.
– I’ve read some logic books back in school, never heard about that one.
– That’s because logic never looked at, identified, let alone studied those. Not sure why, in all the years since ol’ Aristotle…
– What do they look like?
– You’ve used them all your life, just like you’ve spoken prose all your life and didn’t know it. The basic pattern is something like this: Say you want to argue for a proposed plan A: You start with the ‘conclusion’ or proposal:
“Yes, let’s implement plan A
because
1. Plan A will result in outcome B — given some conditions C;
and we assume that
2. Conditions C will be present;
and
3. We ought to aim for outcome B.”
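One way a program might represent the pattern just spelled out; the class and field names here are mine, not from the original papers:

```python
# Hypothetical data structure for the 'standard planning argument'.

from dataclasses import dataclass

@dataclass
class PlanningArgument:
    conclusion: str            # "Yes, let's implement plan A"
    factual_instrumental: str  # premise 1: A will result in B, given C
    factual: str               # premise 2: conditions C will be present
    deontic: str               # premise 3: we ought to aim for outcome B
    pro: bool = True           # a 'con' argument negates the conclusion
                               # and/or some of the premises

arg = PlanningArgument(
    conclusion="Let's implement plan A",
    factual_instrumental="Plan A will result in outcome B, given conditions C",
    factual="Conditions C will be present",
    deontic="We ought to aim for outcome B",
)
```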
– It sounds a little more elaborate than…
– Than what you probably are used to? Yes, because you usually don’t bother to state the premises you think people already accept so you ‘take them for granted’.
– Okay, I understand and take it for granted. And that argument is a ‘pro’ one; I assume that a ‘con’ argument is basically using the same pattern but with the conclusion and some premises negated. So?
– What you want to find out is whether the decision ‘Do A’ is plausible. Or better: whether or to what extent it is more plausible than not to do A. And you are looking at the arguments pro and con because you think that they will tell you which one is ‘more plausible’ than the other.
– Didn’t you guys talk about a slightly different recipe a while back — something about an adapted Poppa’s rule about refutation?
– Amazing: you remember that one? Well, almost: it was about adapting Sir Karl Raimund Popper’s philosophy of science principle to planning: that we are entitled to accept a scientific hypothesis as tentatively supported or ‘corroborated’, as they say in the science lab, to the extent we have done our very best to refute it — show that it is NOT true — and it has resisted all those attempts and tests. Since no ‘supporting evidence’ can ever conclusively ‘prove’ the hypothesis, but one true observation of the contrary can conclusively disprove it. It’s the hypothesis that all swans are white — never proved by any number of white swans you see, but conclusively shot down by just one black swan.
– So how does it get adapted to planning? And why does it have to be adapted, not just adopted?
– Good question. In planning, your proposed plan ‘hypothesis’ isn’t true or false — just more or less plausible. So refutation doesn’t apply. But the attitude is basically the same. So Abbé Boulah’s buddy’s adapted rule says: “We can accept a plan proposal as tentatively supported only to the extent we have not only examined all the arguments in its favor, but more importantly, all the arguments against it — and all those ‘con’ arguments have been shown to be less plausible or outweighed by the ‘pro’ arguments.”
– Never heard that one before either, but it sounds right. But you keep saying ‘plausible’? Aren’t we looking for ‘truth’? For ‘correct’ or ‘false’?
– That’s what Abbé Boulah and his buddy are railing against — planning decisions just are not ‘true’ or ‘false’. We are arguing about plans precisely because they aren’t ‘true’ or ‘false’ — yet. Nor ‘correct’ or ‘incorrect’, like a math problem. Planning problems are ‘wicked problems’; the decisions are not right or wrong, they are ‘good or bad’. Or, to use a term that applies to all the premises: more or less plausible — which can be interpreted as ‘true or false’ only for the rare purely factual claims, as more or less ‘probable’ for the factual-instrumental premise 1 and the factual premise 2, but as just plausible, or good or bad, for the ought premise 3 and the ‘conclusion’.
– Okay, I go along with that. For now. It sounds… plausible?
– Ahh. Getting there, Sophie; good. It’s also a matter of degrees, like probability. If you want to express how ‘sure’ you are about the decision or about one of the premises, just the terms ‘plausible’ and ‘implausible’ don’t express that degree at all. You need a scale with more judgments. One that goes from ‘totally plausible’ on one side to ‘totally implausible’ on the other, with some ‘more or less’ scores in-between. One with a midpoint of ‘don’t know, can’t decide’. For example, a scale from +1 to -1 with midpoint zero.
– Hmm, It’s a lot to swallow, all at once. But go on. I guess the next task is to make some of your ‘plausibility’ judgments about each of the premises, to see how the plausibility of the whole argument depends on those?
– Couldn’t have said it better myself. Now consider: if the argument as a whole is to be ‘totally plausible’ — with a plausibility value of +1 — wouldn’t that require that all the premise plausibility values also were +1?
– Okay…
– Well — and if one of those plausibility values turns out to be less than ‘totally plausible’, let’s say with a pl value of 0.9 — wouldn’t that reduce the overall argument plausibility?
– Stands to reason. And I guess you’ll say that if one of them had a negative value, the overall argument plausibility value would turn negative as well?
– Very good! If someone assigns a -.8 plausibility value to the premise 1 or 3, for example, in the above argument that is intended as a ‘pro’ argument, that argument would turn into a ‘con’ argument — for that person. So to express that as a mathematical function, you might say that the argument plausibility is equal to either the lowest of the premise plausibility values, or a product of all those values. (Let’s deal with the issue of what to do with cases of several negative plausibilities later on, to keep things simple. Also, some people might have questions about the overall ‘validity’ or plausibility of the entire argument pattern, and how it ‘fits’ the case at hand; so we might have to assign a pl-value to the whole pattern; but that doesn’t affect the issue of the paradox that much here.)
– So, Bog-Hubert, lets get back to where you left off. Now you have argument plausibility values; okay. Weren’t we talking about argument ‘weight’ somewhere? Weighing the arguments? Where does that come in?
– Good question! Okay — consider just two arguments, one ‘pro’ and one ‘con’. You may even assume that they both have good overall plausibilities, so that they come close to +1 (for the ‘pro’ argument) and -1 (for the ‘con’ argument). You might consider how important they are, by comparison, and thus how much of a ‘weight’ each should have towards the overall plan plausibility. It’s the ‘ought’ premise — the goal or concern of the consequence of implementing the plan — that carries the weight. You decide which one is more important than the other, and give it a higher weight number.
– Something like ‘is it more important to get the benefit, the advantage of the plan, than to avoid the possible disadvantage?
– Right. And to express that difference in importance, you could use a scale from zero to +1, and a rule that all the weight numbers add up to +1. The ‘+1’ simply means that it carried the whole decision judgment.
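The weighting rule just described, as a sketch: rate each ‘ought’ concern for importance, then normalize so all the weights add up to +1. The raw ratings below are invented for illustration:

```python
# Normalizing raw importance ratings into weights that sum to +1.

def normalize_weights(raw_importances):
    total = sum(raw_importances)
    return [r / total for r in raw_importances]

weights = normalize_weights([0.8, 0.4, 0.4])  # one benefit, two risks
# -> [0.5, 0.25, 0.25], summing to 1
```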
– That’s a whole separate operation, isn’t it? And wouldn’t each person doing this come up with different weights? And, come to think of it, different plausibility values?
– Yes: All those judgments are personal, subjective judgments. I know that many people will be quite disappointed by that — they want ‘objective’ measures of performance, about which there’s no quibbling. Sorry. But that’s a different issue, too — we’ll have to devote another evening and a good part of Vodçek’s Zinfandel supply for that one.
– Okay, so what you are saying is that, subjective or objective, we’re heading for the same paradox?
– Right again. First, let’s review the remaining steps in the assessments. We have the argument plausibility values — each person separately — and the weight or relative importance for each of the ‘ought’ premises. We can multiply the argument plausibility with the weight of the goal or concern in the ‘ought’ premise, and you have your argument weight. Adding them all up — remember that all the ‘con’ arguments will have negative plausibility values — will give you one measure of ‘plan plausibility’. You might then use that as a guide to making the decision — for example: to be adopted, a plan should have at least a positive pl-value, or at least a pl-value you’ve specified as a minimum threshold value for plan adoption.
– And that’s better than voting?
– I think so — but again, that’s a different issue too, also worth serious discussion. Depending on the problem and the institutional circumstances, decisions may have to be made by traditional means such as voting, or left to a ‘leader’ person in authority to make decisions. A plan-pl value would then just be a guide to the decision.
– So what’s the problem, the paradox?
– The problem is this: It turns out that the more arguments you consider in such a process, the more you examine each of the premises of the arguments (by applying the same method to the premises) and the more honest you are about your confidence in the plausibility of all the premises — they’re all about the future, remember, none can be determined to be 100% certain — the closer the overall pl-result will approach the midpoint ‘don’t know’ value, close to zero.
– That’s what the experiments and simulations of such evaluations show?
– Yes. You could see that already with our example above of just two arguments, equally plausible but one pro and the other con. If they also have the same weight, the plan plausibility would be zero, point blank. Not at all what the dear professor wanted to get from such a thorough analysis; very disappointing.
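[Another aside: the two-argument cancellation is easy to check, and a small simulation — an illustration of the tendency, not a reproduction of the experiments mentioned — shows how larger sets of honestly scattered judgments drift toward the ‘don’t know’ midpoint.]

```python
import random

def plan_plausibility(args):
    """Sum of argument weights: plausibility times weight per argument."""
    return sum(pl * w for pl, w in args)

# The two-argument case from the text: equally plausible pro and con
# arguments with equal weights cancel exactly.
two_args = [(0.8, 0.5), (-0.8, 0.5)]
assert plan_plausibility(two_args) == 0.0

# Illustrative tendency (an assumption, not the author's experiments):
# many arguments whose plausibilities scatter honestly over the scale
# pull the weighted sum toward the 'don't know' midpoint near zero.
random.seed(1)

def simulate(n):
    pls = [random.uniform(-1, 1) for _ in range(n)]
    w = 1.0 / n                     # equal weights, summing to 1
    return plan_plausibility((pl, w) for pl in pls)
```

With a large n, simulate(n) tends to land near zero, mirroring the ‘paradox’ discussed above.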
– Ahh. I see. Is he one of those management consultants who advise companies how to deal with difficult problems, and get the commissions by having to promise that his approaches will produce decisively convincing results?
– Oh Sophie — Let’s not go there…
– So the professor, he’s in denial about that?
– At least in a funk…
– Does he have any ideas about what to do about this? Or how to avoid it?
– Well, we agreed that the only remedy we could think of so far is to tweak the plan until it has fewer features that people will perceive as ‘con’ arguments: until the plan-pl is at least more visibly on the plus side of the scale.
– Makes you wonder whether in the old days, when people relied on auspices and ‘divine judgments’ to tip the scales, they had a wiser attitude about this.
– At least they were smart enough to give those tricks a sense of mystery and ritual — more impressive than just rolling dice — which some folks can see as a kind of prosaic, crude divine judgment?
– Hmm. If they first made sure to deal with all the concerns that lead affected people to object to a plan, what would be wrong with that?
– Other than that you’d have to load the dice — and worry about being found out? What’s the matter, Vodçek?
– You guys — I’ll have to cut you off…


Systems Models and Argumentation in the Planning Discourse

The following study will try to explore the possibility of combining the contribution of ‘Systems Thinking’ 1 — systems modeling and simulation — with that of the ‘Argumentative Model of Planning’ 2 expanded with the proposals for systematic and transparent evaluation of ‘planning arguments’.
Both approaches have significant shortcomings in accommodating each other’s features and concerns. Briefly: while systems models do not accommodate or show any of the argumentation (the ‘pros and cons’) involved in planning, and appear to assume that any differences of opinion have been ‘settled’, individual arguments used in planning discussions do not adequately convey the complexity of the ‘whole system’ that systems diagrams try to convey. Thus, planning teams relying on only one of these approaches to problem-solving and planning (or any other single approach exhibiting similar deficiencies) risk making significant mistakes and missing important aspects of the situation.
This mutual discrepancy suggests resolving it either by developing a different model altogether, or by combining the two in some meaningful way. The exercise will try to show how some of the mutual shortcomings could be alleviated — by procedural means of successively feeding information drawn from one approach to the other, and vice versa. It does not attempt to conceive a substantially different approach.

Starting from a very basic situation: Somebody complains about some current ‘Is’-state of the world (IS) he does not like: ‘Somebody do something about IS!’

The call for Action (a plan is desired) raises a first set of questions besides the main one (D: Should the plan be adopted for implementation?):
(Questions / issues will be italicized. The prefixes distinguish different question types: D for deontic or ‘ought’-questions; E for explanatory questions; I for instrumental or actual-instrumental questions; F for factual questions. The same notation can be applied to individual claims):

E(IS –> OS)?              What kind of action should that be?
which can’t really be answered before other questions are clarified, e.g.:
E(IS)?                Description of the IS-state?
E(OS)?              What is the ‘ought’-state (OS) that the person feels ought to be? Description?
(At this point, no concrete proposal has been made — just some action called for.)
D(OS)?              Should OS become the case?
(This question calls for ‘pros and cons’ about the proposed state OS), and
I(IS –> OS)?    How can IS be changed to OS?
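For readers who think in data structures, the question-type notation above might be captured in a small sketch like the following; the class and field names are hypothetical, not part of the notation itself:

```python
from dataclasses import dataclass
from enum import Enum

class QType(Enum):
    D = "deontic ('ought') question"
    E = "explanatory question"
    I = "instrumental / actual-instrumental question"
    F = "factual question"

@dataclass
class Issue:
    qtype: QType
    subject: str   # e.g. "IS -> OS"
    text: str

# The issues raised so far, expressed in this structure:
issues = [
    Issue(QType.E, "IS", "Description of the IS-state?"),
    Issue(QType.E, "OS", "What is the ought-state? Description?"),
    Issue(QType.D, "OS", "Should OS become the case?"),
    Issue(QType.I, "IS -> OS", "How can IS be changed to OS?"),
]
```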

Traditional approaches at this stage recommend doing some ‘research’. This might include both the careful gathering of data about the IS situation, as well as searching for tools, ‘precedents’ of the situation, and possible solutions used successfully in the past.

At this point, a ‘Systems Thinking’ (ST) analyst may suggest that, in order to truly understand the situation, it should be looked at as a system, and a ‘model’ representing that system be developed. This would begin by identifying the ‘elements’ or key variables V of the system, and the relationships R between them. Since so far, very little is known about the situation, the diagram of the model would be trivially simple:

(IS) –> REL –> (OS)

or, more specifically, representing the IS and OS states as sets of values of variables:

{VIS} –> REL(IS/OS) –> {VOS}

(The {…} brackets indicate that there may be a set of variables describing the state).

So far, the model simply shows the IS-state and the OS-state, as described by a variable V (or a set of variables), and the values for these variables, and some relationship REL between IS and OS.

Another ST consultant suggests that the situation — the discrepancy between the situation as it IS and as it ought to be (OS), as perceived by a person P1 — may be called a ‘problem’ IS/OS, and that a way to resolve it may be found by identifying its ‘root cause’ RC:

E(RC of IS)?       What is the root cause of IS?
and
F(RC of IS)?       Is RC indeed the root cause of IS?

Yet another consultant might point out that any causal chain is potentially infinitely long (any cause has yet another cause…), and that it may be more useful to look for ‘necessary conditions’ NC for the problem to exist, and perhaps for ‘contributing factors’ CF that aggravate the problem once it occurs (but don’t ‘cause’ it):

E(NC of IS/OS)?     What are the necessary conditions for the problem to exist?
F(NC of IS/OS)?     Is the suggested condition actually a NC of the problem?
and
E(CF of IS/OS)?     What factors contribute to aggravate the problem once it occurs?
F(CF of IS/OS)?     Is the suggested factor actually a CF of the problem?

These suggestions are based on the reasoning that if a NC can be identified and successfully removed, the problem ceases to exist, and/or if a CF can be removed, the problem could at least be alleviated.

Either form of analysis is expected to produce ideas for potential means or Actions to form the basis of a plan to resolve the problem, which can then be put up for debate. As soon as such a specific plan of action is described, it raises the questions:

E(PLAN A)?        Description of the plan?
and
D(PLAN A)?        Should the plan be adopted / implemented?

The ST model-builder will have to include these items in the systems diagram, with each factor impacting specific variables or system elements V.

RC       –> REL(RC-IS)      –> {V(IS)}
{NC}   –> REL(NC-IS)      –> {V(IS)}     –> REL    –> {V(OS)}
{CF}    –> REL(CF-IS)      –> {V(IS)}

Elements in ‘{…}’ brackets denote sets of items of that type. It is of course possible that one such factor influences several or all system elements at the same time, rather than just one. Of course, Plan A may include aspects of NC, CF, or RC. If these consist of several variables with their own specific relationships, they will have to be shown in the model diagram as such.
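One way to picture what the model-builder maintains at this stage is a simple mapping from factors to the labelled relationships and variables they influence. This is a hypothetical sketch, not a prescribed representation:

```python
# The emerging model as a mapping: each factor points to a list of
# (relationship label, influenced variable) pairs, mirroring the
# diagram lines above. All labels follow the text's notation.
model = {
    "RC":    [("REL(RC-IS)", "V(IS)")],
    "NC":    [("REL(NC-IS)", "V(IS)")],
    "CF":    [("REL(CF-IS)", "V(IS)")],
    "V(IS)": [("REL", "V(OS)")],
}

# A factor influencing several elements would simply carry more pairs
# in its list, e.g. model["CF"].append(("REL(CF-OS)", "V(OS)")).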

An Argumentative Model (AM) consultant will insist that a discussion be arranged, in which questions may be raised about the description of any of these new system elements and whether and how effectively they will actually perform in the proposed relationship.

Having invoked causality, questions will be raised about what further effects, ‘consequences’ CQ, the OS-state will have once achieved; what these will be like, and whether they should be considered desirable, undesirable (the proverbial ‘unexpected consequences’ or side-effects), or merely neutral. To be as thorough as the mantra of Systems Thinking demands, to consider ‘the whole system’, that same question should be raised about the initial actions of PLAN A: these may have side-effects not considered in the desired problem-solution OS. Should they be included in the examination of the desired ‘ought’-state? So:

For {OS} –> {CQ of OS}:

E(CQ of OS)?        (What is/are the consequences? Description?)
D(CQ of OS)?        (Is the consequence desirable / undesirable?)

For {A} –> {CQ of A}:

E(CQ of A)?
and
D(CQ of A)?

For the case that any of the consequence aspects are considered undesirable, additional measures might be suggested, to avoid or mitigate these effects, which then must be included in the modified PLAN A’, and the entire package be reconsidered / re-examined for consistency and desirability.

The systems diagram would now have to be amended with all these additions. The great advantage of systems modeling is that many constellations of variable values can be considered as potential ‘initial settings’ of a system simulation run (plan alternatives), and the development of each variable can be tracked (simulated) over time. In any system with even moderate complexity and number of loops — variables later in a chain of relationships feeding back into variables ‘earlier’ in the chain — the outcomes will become ‘nonlinear’ and quite difficult and ‘counter-intuitive’ to predict. Both the possibility of inspecting the diagram showing ‘the whole system’ and the exploration of different alternatives contribute immensely to the task of ‘understanding the system’ as a prerequisite to taking action.
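A toy example can illustrate why loops make outcomes hard to anticipate. The two coupled variables and coefficients below are invented for illustration; even this minimal system produces an oscillation of growing amplitude that is hard to guess from the coefficients alone:

```python
def simulate_loop(v1, v2, steps, a=0.9, b=-0.5):
    """Synchronous stepwise update: each time unit, V1 is pushed by V2
    and V2 is pushed back by V1 (a simple feedback loop)."""
    history = [(v1, v2)]
    for _ in range(steps):
        v1, v2 = v1 + a * v2, v2 + b * v1   # right-hand side uses old values
        history.append((v1, v2))
    return history

run = simulate_loop(1.0, 1.0, 20)
# The trajectory oscillates between positive and negative values with
# growing amplitude -- 'counter-intuitive' behavior from two coefficients.
```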

While systems diagrams do not usually show ‘root’ causes, ‘necessary conditions’, or ‘contributing factors’ of each of the elements in the model, these will now have to be included, as well as the actions and needed resources of PLANS setting the initial conditions to simulate outcomes. A simplified diagram of the emerging model, with possible loops, is the following:

(Outside uncontrolled factors / context)

      |          |          |          |          |

PLAN –> REL –> (RC, NC, CF) –> REL –> (IS) –> REL –> (OS) –> REL –> (CQ)

      |          |          |          |          |

(forward and backward loops)

A critical observer might call attention to a common assumption in simulation models — a remaining ‘linearity’ feature that may not be realistic. In the network of variables and relationships, the impact of a change in one variable V1 on the connected ‘next’ variable V2 is assumed to occur stepwise during one time unit i of the simulation, the change in the following variable V3 in the following time unit i+1, and so on. Delays in these effects may be accounted for. But what if the information about that change in time unit i is distributed throughout the system much faster — even ‘almost instantaneously’ — compared to the actual and possibly delayed substantial effects (e.g. ‘flows’) the diagram shows with its explicit links? This might have the effect that actors and decision-makers, concerned about variables elsewhere in the system for reasons unrelated to the problem at hand, take ‘preventive’ steps that could change the expected simulated transformation. Of course, such actors and decision-makers are not shown…

Systems diagrams ordinarily do not acknowledge that — to the extent there are several parties involved in the project, and affected in different ways by either the initial problem situation or by proposed solutions and their outcomes — those different parties will have significantly different opinions about the issues arising in connection with all the system components, if the argumentation consultant manages to organize a discussion. The system diagram represents only one participant’s view or perspective of the situation. It appears to assume that what ‘counts’ in making any decisions about the problem are only the factual, causal, functional relationships in the system, as determined by one (set of) model-builder(s). Thus, those responsible for making decisions about implementing the plan must rely on a different set of provisions and perspectives to convert the gained insights and ‘understanding’ of the system and its workings into sound decisions.

Several types of theories and corresponding consultants offer suggestions for how to do this. Given the particular way their expertise is currently brought into planning processes, they usually reflect just the main concerns of the clients they are working for. In business, the decision criterion is, obviously, the company’s competitive advantage resulting in reliable earnings: profit, over time. Thus for each ‘alternative’ plan considered (different initial settings in the system), and the actions and resources needed to achieve the desired OS, the ‘measure of performance’ associated with the resulting OS will be profit — earnings minus costs. For government consultants (striving to ‘run government like a business’?) the profit criterion may have to be labeled somewhat differently — say, ‘benefit’ and ‘cost’ of government projects, and a relationship between them such as B-C or the more popular benefit-cost ratio B/C. For overall government performance, the Gross National Product (GNP) is the equivalent measure. The shortcomings and problems associated with such approaches have led to calls for using ‘quality of life‘ or ‘happiness‘ or Human Development Indices instead, along with criteria for sustainability and ecological aspects. All or most such approaches still suffer from the shortcoming of constructing overall measures of performance: shortcomings because they inevitably represent only ONE view of the problems or projects — differences of opinion and significant conflicts are made invisible.

In the political arena, any business and economic considerations are overlaid if not completely overridden by the political decision criteria — voting percentages. Most clearly expressed in referenda on specific issues, alternatives are spelled out, more or less clearly, so as to require a ‘yes’ or ‘no’ vote, and the decision criterion is the percentage of those votes. Estimates of such percentages are increasingly produced by opinion surveys sampling just a small but ‘representative’ number of the entire population, and these aim to have a similar effect on decision-makers.

Both Systems Thinkers and advocates of the extended Argumentative Model are disheartened by the fact that in these business and governance habits, all the insight produced by their respective analysis efforts seems to have little if any visible connection with the simple ‘yes/no’, opinion poll, or referendum votes. Rightfully so, and their concern should properly be with constructing better mechanisms for making that connection. From the Argumentative Model side, such an effort has been made with the proposed evaluation approach for planning arguments, though with clear warnings against using the resulting ‘measures’ of plan plausibility as convenient substitutes for decision criteria. The reasons for this have to do with the systemic incompleteness of the planning discourse: there is no guarantee that all the concerns that influence a person’s decision about a plan and should be given ‘due consideration’ — and therefore should be included in the evaluation — actually can and will be made explicit in the discussion.

To some extent, this is based on the different attitudes discourse participants bring to the process. The straightforward assumption of mutual trust and cooperativeness aiming at mutually beneficial outcomes — ‘win-win’ solutions — obviously does not apply to all such situations, though there are many well-intentioned groups and initiatives that try to instill and grow such attitudes, especially when it comes to global decisions about issues affecting all humanity such as climate, pollution, disarmament, global trade and finance. The predominant business assumption is that of competition, seeing all parties as pursuing their own advantages at the expense of others, resulting in zero-sum outcomes: win-lose solutions. A number of different situations can be distinguished according to whether the parties share or differ in their attitudes in the same discourse, the ‘extreme’ positions being completely sharing the same attitude, and holding attitudes at opposite ends of the scale. In between lies something that might be called indifference to the other side’s concerns — as long as they don’t intrude on one’s own, in which case the attitude likely shifts to the win-lose position, at least for that specific aspect.

The effect of these issues can be seen by looking at the way a single argument about some feature of a proposed plan might be evaluated by different participants, and how the resulting assessments would change decisions. Consider, for the sake of simplicity, the argument in favor of a Plan A by participant P1:

D(PLAN A)!         Position (‘Conclusion’): Plan A ought to be adopted;
because
F((VA –> REL(VA–>VO) –> VO) | VC)      Premise 1: Variable VA of plan A will result in (e.g. cause) variable VO, given condition VC;
and
D(VO)                   Premise 2: Variable VO ought to be aimed for;
and
F(VC)                     Premise 3: Condition VC is the case.

Participant P1 may be quite confident (but still open to some doubt) about these premises, and of being able to supply adequate evidence and support arguments for them in turn. She might express this by assigning the following plausibility values to them, on the plausibility scale of -1 to + 1, for example:
Premise 1: +0.9
Premise 2: +0.8
Premise 3: +0.9
One simple argument plausibility function (multiplying the plausibility judgments) would result in an argument plausibility of +0.648: a not completely ‘certain’ but still comfortable result supporting the plan. Another participant P2 may agree with premises 1 and 2, assigning the same plausibility values to those as P1, but have considerable doubt as to whether the condition VC is indeed present to guarantee the effect of premise 1, expressed by the low plausibility score of +0.1, which would yield an argument plausibility of +0.072: a result that can be described as too close to ‘don’t know whether VA is such a good idea’. If somebody else — participant P3 — disagrees with the desirability of VO, and therefore assigns a negative plausibility of, say, -0.5 to premise 2 while agreeing with P1 about the other premises, his result would be -0.405, using the same crude aggregation formula. (These formulas are of course up for discussion.) The issue of weight assignment has been left aside here: since only the one argument is being considered, the weight of its deontic premise is 1, for the sake of simplicity. The difference in these assessments raises not only the question of how to obtain a meaningful common plausibility value for the group, as a guide for its decision. It might also cause P1 to worry whether P3 would consider taking ‘corrective’ (in P1’s view ‘subversive’?) actions to mitigate the effect of VA should the plan be adopted — e.g. by majority rule, or by following the result of some group plan plausibility function such as taking the average of the individual argument plausibility judgments as a decision criterion (which is not recommended by the theory).
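The three assessments can be verified with the crude multiplicative aggregation function mentioned above. The averaging at the end is shown only to make the point; as noted, it is not recommended by the theory:

```python
def arg_plausibility(premise_pls):
    """Crude aggregation from the text: multiply the premise plausibilities."""
    p = 1.0
    for pl in premise_pls:
        p *= pl
    return p

p1 = arg_plausibility([0.9, 0.8, 0.9])    # +0.648: comfortable support
p2 = arg_plausibility([0.9, 0.8, 0.1])    # +0.072: close to 'don't know'
p3 = arg_plausibility([0.9, -0.5, 0.9])   # -0.405: leaning against the plan

# Averaging as a group measure (illustration only; NOT recommended
# by the theory, as the text notes):
group = (p1 + p2 + p3) / 3                # about +0.105
```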
And finally: should these assessments, with their underlying assumptions of cooperative, competitive, or neutral, disinterested attitudes, and the potential actions of individual players in the system to unilaterally manipulate the outcome, be included in the model and its diagram or map?

While a detailed investigation of the role of these attitudes on cooperative planning decision-making seems much needed, this brief overview already makes it clear that there are many situations in which participants have good reasons not to contribute complete and truthful information. In fact, the prevailing assumption is that secrecy, misrepresentation, misleading and deceptive information and corresponding efforts to obtain such information from other participants — spying — are part of the common ‘business as usual’.

So how should systems models and diagrams deal with these aspects? The ‘holistic’ claim of showing all elements so as to offer a complete picture and understanding of a system arguably would require doing so ‘as completely as possible’. But how? By admitting that a complete understanding of many situations is actually not possible? What a participant does not contribute to the discourse, the model diagram can’t show. Should it (cynically?) announce that this ‘may’ be the case — and that participants therefore should not base their decisions only on the information it shows? To truly ‘good faith’ cooperative participants, sowing distrust this way may seem somewhat offensive, and may itself actually interfere with the process.

The work on systems modeling faces another significant unfinished task here. Perhaps another look at the way we are making decisions as a result of planning discussions can help somewhat.

The discussion itself assumes that it is possible and useful for reaching better decisions — presumably, better than decisions made without the information it produces. It does not, inherently, condone the practice of sticking to a preconceived decision no matter what is being brought up (nor the arrogant attitude behind it: ‘my mind is made up, no matter what you say…’). The question has two parts. One relates to the criteria we use to convert the value of information into decisions. The other concerns the process itself: the kinds of steps taken, and their sequence.

It is necessary to quickly go over the criteria issue first — some criteria were already discussed above. Those used by the single decision-maker at the helm of a business enterprise (which of course is a simplified picture) — profit, ROI, and their variants arising from planning horizon, sustainability and PR considerations — are single measures of performance attached to the alternative solutions considered. The rule for such a decision ‘under certainty’ is: select the solution having the ‘best’ (highest, maximized) value. (‘Value’ here is understood simply as the number of the criterion.) That picture is complicated for decision situations under risk, where outcomes have different levels of probability, or under complete uncertainty, where outcomes are governed neither by predictable laws nor even probability, but by other participants who may attempt to anticipate the designer’s plans and will actively seek to oppose them. This is the domain of decision and game theory, whose analyses may produce guidelines and strategies for decisions — but again, different decisions or strategies for different participants in the planning. The factors determining these strategies are arguably significant parts of the environment or context that designers must take into account — and systems models should represent — to produce a viable understanding of the problem situation. The point to note is that systems models permit simulation of criteria such as profit, life-cycle economic cost or performance, ecological damage or sustainability, because they are single measures, presumably collectively agreed upon (which is at least debatable). But once the use of plausibility judgments as measures of performance is considered as a possibility — even as aggregated group measures — the ability of systems models and diagrams to accommodate them becomes very questionable, to say the least. It would require the input of many individual (subjective) judgments, which are generated as the discussion proceeds, and some of which will not be made explicit even if there are methods available for doing this.
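The ‘decision under certainty’ rule described above amounts to a one-line selection; the plan names and criterion values below are invented for illustration:

```python
# Alternatives with a single agreed-upon measure of performance
# (e.g. simulated profit); names and numbers are illustrative.
alternatives = {"Plan A": 1.2e6, "Plan B": 0.9e6, "Plan C": 1.5e6}

# Decision 'under certainty': select the alternative with the best value.
best = max(alternatives, key=alternatives.get)
```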

This shift of criteria for decision-making raises concerns about the second question, the process: the kinds of steps taken, by which participants, according to what rules, and in what sequence. If this second aspect seems not to need or deserve much attention — the standard systems diagrams again do not show it — consider the significance given to it by such elaborate rule systems as parliamentary procedure and ‘rules of order’ volumes, even for entities where the criterion for decisions is the simple voting percentage. Any change of criteria will necessarily have procedural implications.

By now, the systems model for even the simple three-variable system we started out with has become so complex that it is difficult to see how it might be represented in a diagram. The challenges of accounting for the additional aspects discussed above — the discourse with controversial issues; the conditions and subsequent causal and other relationships of plan implementation requirements and further side-effects; and the attitudes and judgments of the individual parties involved in and affected by the problem and proposed plans — complicate the modeling and diagram display tasks to an extent where they are likely to lose their ability to support the process of understanding and arriving at responsible decisions. I do not presume to have any convincing solutions for these problems and can only point to them as urgent work to be done.

ST-AM 4: Evolving ‘map’ of ‘system’ elements and relationships, and related issues

Meanwhile, from a point of view that acknowledges these difficulties but tries, for now, to ‘do the best we can with what we have’, it seems that systems models and diagrams should continue to serve as tools to understand the situation and to predict the performance of proposed plans — if some of the aspects discussed can be incorporated into the models. The construction of the model must draw upon the discourse that elicits the pertinent information (through the ‘pros and cons’ about the proposal). The model-building work therefore must accompany the discourse — it cannot precede or follow the discussion as a separate step. Standard ‘expert’ knowledge-based analysis — conventional ‘best practice’ and research-based regulations, for example — will be as much a part of this as the ‘new’, ‘distributed’ information that is to be expected in any unprecedented ‘wicked’ planning problem, which can only be brought out in the discourse with affected parties.

The evaluation preparing for decision — whether following a customary formal evaluation process or a process of argument evaluation — will have to be a separate phase. Its content will now draw upon and reflect the content of the model. The analysis of its results — identifying the specific areas of disagreement leading to different overall judgments, for example — may lead to returning to previous design and model (re-)construction stages: to modify proposals for more general acceptability, or better overall performance, and then return to the evaluation stage supporting a final decision. Procedures for this process have been sketched in outline but remain to be examined and refined in detail, and described concisely so that they can be agreed upon and adopted by the group of participants in any planning case before starting the work, as they must, so that quarrels about procedure will not disrupt the process later.

Looking at the above map again, another point must be made: once again, the criticism of systems diagrams seems to have been ignored that the diagram still only expresses one person’s view of the problem. The system elements called ‘variables’, for example, are represented as elements of ‘reality’, and the issues and questions about them are expected to produce ‘real’ (that is, real for all participants) answers and arguments. Taking the objection seriously, would we not have to acknowledge that ‘reality’ is known to us only imperfectly, if at all, and that each of us has a different mental ‘map’ of it? Thus, each item in the systems map should perhaps be shown as multiple elements referring to the same thing we think we know and agree about: one bubble of the item for each participant in the discourse. And these bubbles will possibly, even likely, not be congruent but only overlapping at best, and at worst cover totally different content and meaning — content that is then expected to be explained and explored in follow-up questions. Systems Thinking has acknowledged this issue in principle — that ‘the map (the systems model and diagram) is NOT the landscape‘ (the reality). But this insight should itself be represented in a more ‘realistic’ diagram — realistic in the sense that it acknowledges that all the detailed information contributed to the discourse and the diagram will be assembled in different ways by each individual into different, only partially overlapping ‘maps’. An objection might be that the system model should ‘realistically’ focus on those parts of reality that we can work with (control? or at least predict?) — with some degree of ‘objectivity’ — the overlap we strive for with the ‘scientific’ methods of replicable experiments, observations, measurements, logic, and statistical confirmation.
And that the concepts different participants carry around in their minds to make up their different maps are just ‘subjective’ phenomena that should ‘count’ in our discussions about collective plans only to the extent they correspond (‘overlap’) to the objective, measurable elements of our observable system? The answer is that such subjective elements — individual perspectives about the nature of the discourse as cooperative or competitive, and so on — are phenomena that do affect the reality of our interactions. Mental concepts are ‘real’ forces in the world — so should they not be acknowledged as ‘real’ elements with ‘real’ relationships in the relationship network of the system diagram?

We could perhaps state the purpose of the discourse as that of bringing those mental maps into sufficiently close overlap for a final decision to become sufficiently congruent in meaning and acceptability for all participants. What is ‘sufficient’ for this, though? And does that apply to all aspects of the system? Are not all our plans in part also meant to help us pursue our own — that is, our different — versions of happiness? We all want to ‘make a difference’ in our lives — some more than others, of course — and each in our own way. The close, complete overlap of our mental maps is a goal and obsession of societies we call ‘totalitarian’. If that is not what we wish to achieve, should not the principle of plan outcomes leaving and offering (more? better?) opportunities for differences in the way we live and work in the ‘ought-state’ of problem solutions be an integral element of our system models and diagrams? The outcome would then be represented as a description consisting of ‘possibility’ circles that have ‘sufficient’ overlap, sure, but also a sufficient degree of non-overlapping ‘difference’ opportunity outside the overlapping area. Our models and diagrams and system maps don’t even consider that. So is Systems Thinking, proudly claimed as ‘the best foundation for tackling societal problems’ by the Systems Thinking forum, truly able to carry the edifice of future society yet? For its part, the Argumentative Model claims to accommodate questions from all kinds of perspectives, including questions such as these — but the mapping and decision-making tools for arriving at meaningful answers and agreements are still very much unanswered questions. The maps, for all their crowded data, have large undiscovered areas.

The picture that emerges of what a responsible planning discourse and decision-making process for the societal challenges we call ‘wicked problems’ would look like, with currently available tools, is not a simple, reassuring and appealing one. But the questions raised by this important work-in-progress should not, in my opinion, be ignored or dismissed because they are difficult. There are understandable temptations to remain with traditional, familiar habits — the ones that arguably are often responsible for the problems? — or to revert to even simpler shortcuts, such as placing our trust in the ability and judgment of ‘leaders’ to understand and resolve tasks we cannot even model and diagram properly. For humanity to give in to those temptations (again?) would seem to qualify as a very wicked problem indeed.


Notes:

1 The understanding of ‘systems thinking’ (ST) here is based on the predominant use of the term in the ‘Systems Thinking World’ Network on LinkedIn.

2 The Argumentative Model (AM) of Planning was proposed by H. Rittel, e.g. in the paper ‘APIS: A Concept for an Argumentative Planning Information System’, Working paper 324, Institute of Urban and Regional Development, University of California, 1980. It sees the planning activity as a process in which participants raise issues – questions to which there may be different positions and opinions – and support their positions with evidence, answers and arguments. From the ST point of view, AM might just be considered a small, somewhat heretical sect within ST…


Towards adding argumentation information to systems maps and systems complexity to argument maps.

This brief exploration assumes that discussions, as well as any systems analysis and modeling, are essentially part of human efforts to deal with some problem: to achieve some change of conditions in a situation — a change that is expected to be different from how that situation would exist or change on its own, without a planning intervention.

1       Adding questions and arguments to systems diagrams.

Focusing on a single component of a typical systems diagram: two elements (variables)
A and B are linked by a connection / relationship R(AB):

A ———R———> B

For convenience, these elements are listed vertically in the following, to allow adding the questions people might ask about them, and about whose possible answers they may hold different opinions.

A          What is A?
|              What is the current value (description) of A? (at time i)
|              How will A change (e.g. what will the value of A be at time i+j)?
|              What causes / caused A?
|              Should changing A be a part of a policy / plan?
|                  If so: What action steps S (Sequence? Times? Actors?) and
|                  what means / resources M will be needed?
|              Are the means, actors etc. available? Able? Willing?
|              What will be the consequences KA of changing A?
|              Who would be affected by KA? In what way?
|              Is consequence KAj desirable? Undesirable?
|              Q: Is A the appropriate concept for the problem at hand?
|                  (and are the questions about A the appropriate questions?)
|
R(AB)   What is the relationship R(AB)?
|              What is the direction of R?
|              Should there be a relation R(AB)?
|              What is the (current) rate of R? (Other parameters, e.g. strength?)
|              What should the rate of R be?
|
B          What is B?
               What is the current state / value of B?
               Should B be the aim / goal G of a policy / plan?
               Are there other (alternative) means for attaining B?
               What should be the desired state / value of B? (At what time?)
               What factors (other than A) are influencing B?
               What would be the consequences KB of attaining G?
               Who would be affected by KB? In what way?
               Is consequence KBj desirable? Undesirable?
               Q: Is B the appropriate concept for the problem at hand?
                   (and are the questions about B the appropriate questions?)
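Read as a data structure, the vertical listing above simply attaches an open question list to each diagram element. A minimal sketch of that idea — the `Element` class and its fields are illustrative assumptions, not part of any existing systems-modeling tool:

```python
from dataclasses import dataclass, field

@dataclass
class Element:
    """A node or link of a systems diagram, annotated with the open
    questions participants have raised about it."""
    label: str
    questions: list = field(default_factory=list)

    def ask(self, text: str) -> None:
        self.questions.append(text)

# The A ---R---> B fragment, with a few of the questions listed above.
a, r, b = Element("A"), Element("R(AB)"), Element("B")
a.ask("What is the current value of A at time i?")
a.ask("Should changing A be part of a policy / plan?")
r.ask("What is the direction of R?")
b.ask("Should B be the aim / goal G of a policy / plan?")

# A simple overview: how many unsettled issues each element carries.
open_issues = {e.label: len(e.questions) for e in (a, r, b)}
print(open_issues)  # {'A': 2, 'R(AB)': 1, 'B': 1}
```

A diagram that carries its questions with it invites discussion of the model assumptions, rather than presenting them as settled.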

Most systems models and diagrams do not show such questions and arguments. My impression is that they either assume that differences of opinion about the underlying assumptions have been ‘settled’ in the respective latest version of the model, or that the modeler’s understanding of those assumptions is the best or valid one (on the authority of having constructed the model?). They thereby arguably discourage discussion. Nor do they easily accommodate the complete description of plans or policies: they adopt a kind of ‘refraining from committing to solutions’ attitude of just ‘objectively’ conveying the simulated consequences of different policies, while limiting the range of policy or plan options by omitting the aspects addressed by the questions and arguments.

2             Adding systems complexity information to argument maps

Typically, the planning discourse will consist of a growing set of ‘pro’ and ‘con’ arguments about plan proposals; any decision should be based on ‘due consideration’ of all these arguments. In the common practice of discussion (even in carefully structured participatory events) the individual typical planning argument can be represented as follows:
“Plan P ought to be adopted and implemented
because
Implementing the plan P will have relationship R (e.g. lead to) consequence K, given conditions C
and
Consequence K ought to be pursued (is a goal G)
and
Conditions C are present.”

This argument, in which several premises that in reality are often omitted as ‘taken for granted’ have already been added, can be represented in more concise formal ways, for example as follows:

D(P)                      (Deontic claim: conclusion, proposal to be supported)
because
FI((P –R–> K) | C)        (Factual-instrumental premise)
and
D(K)                      (Deontic premise)
and
F(C)                      (Factual premise)
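This formal structure can be encoded directly — roughly what a ‘structured file’ in an IBIS-style system would store for each argument. A sketch following the notation above (the class and field names are my own illustration):

```python
from dataclasses import dataclass

@dataclass
class Claim:
    kind: str   # 'D' = deontic, 'F' = factual, 'FI' = factual-instrumental
    text: str

@dataclass
class PlanningArgument:
    conclusion: Claim   # D(P)
    premises: tuple     # (FI((P -R-> K)|C), D(K), F(C))

arg = PlanningArgument(
    conclusion=Claim("D", "Plan P ought to be adopted and implemented"),
    premises=(
        Claim("FI", "Implementing P leads to consequence K, given conditions C"),
        Claim("D", "Consequence K ought to be pursued (is a goal G)"),
        Claim("F", "Conditions C are present"),
    ),
)

# Each premise is itself a potential issue: challenging any one of them
# challenges the conclusion.
print([p.kind for p in arg.premises])  # ['FI', 'D', 'F']
```

Making each premise an explicit object is what lets the questions below be attached to the specific claim they challenge.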

The argumentative process, in the view of Rittel’s ‘Argumentative Model of Planning’, consists of asking questions (in the case of controversial questions: ‘raising issues’) for the purpose of clarifying, challenging or supporting the various premises. This serves to increase participants’ understanding of the situation and its complexity, which, from the point of view of the ‘Systems Perspective’, may be represented only crudely and qualitatively, and thus inadequately, in the arguments of a ‘live’ discussion. Some potential questions about the above premises are the following:

D(P)         Description, explanation of the plan and its details:
Problem addressed?
Current condition / situation?
Causes, necessary conditions for problem to exist, contributing factors?
Aims / goals?
Available means?
Other possible means of addressing problem?
Q: wrong question: wrong way of looking at the problem?
Implementation details? Steps, actions? Sequence?
Actors / responsibilities?
Means and resources needed? Availability? Costs?

FI((P –R–> K) | C):   Does the relationship hold? Currently? In the future?

R(P,K)      Explanation: Type of relationship?

(Causal, analogy, part-whole, logical implication…)
Existence and direction of relationship? Reverse? Spurious?
Strength of relationship?
Conditions under which the relationship can exist / function?

D(K)       Should consequence K be pursued?
Explanation / description of K: details?
What other factors (than the provisions of plan P) affect / influence K?
Other (alternative) means of achieving K?

F(C)         Are the conditions C (under which relationship R holds) present?
Will they be present in future?
What are the conditions C?
What factors (other than those activated by plan P) affect / influence C?
If conditions C are NOT reliably present,
what provisions must be made to secure them? (Plan additions?)

These questions (which arguably should be better accommodated in systems diagrams) can be taken up and addressed in the normal discussion process. Their sequence and orderly treatment, and especially the provision of adequate overview, could be significantly improved by better representation of the variety and complexity of the additional elements introduced by the questions raised.

This is especially true with respect to the question about Conditions C under which the claimed relationship R is assumed to hold. A more careful examination of this question (i.e. more careful than the common qualification ‘everything else being equal’: what IS that ‘everything else’ – and IS it ‘equal’?) will reveal that there are many conditions, and that they are interrelated in different, complex ways, with behaviors over time that we have trouble fully understanding. In other words, they constitute a ‘systems network’ of elements, factors and relationships including positive and negative feedback loops – precisely the kind of network shown in systems diagrams.
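A toy numerical illustration of why ‘everything else’ is rarely ‘equal’: even two interacting conditions, one reinforcing and one dampening, produce behavior over time that single-step arguments miss. The variables and coefficients here are purely illustrative:

```python
def simulate(a: float, b: float, steps: int,
             k_ab: float = 0.1, k_ba: float = 0.05):
    """Two 'conditions' in a feedback loop: a pushes b up (reinforcing
    link), b pushes a down (dampening link). Returns the values of a
    and b after the given number of time steps."""
    for _ in range(steps):
        # Simultaneous update: both right-hand sides use the old values.
        a, b = a - k_ba * b, b + k_ab * a
    return a, b

print(simulate(1.0, 0.0, 1))   # (1.0, 0.1)
print(simulate(1.0, 0.0, 50))  # the loop keeps shifting both values
```

Arguing ‘P leads to K, all else equal’ quietly assumes such loops stay frozen, which is exactly the assumption a systems diagram makes visible and questionable.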

Thus, it must be argued that in order to live up to the sensible principle that decisions to adopt or reject plans should be made on the basis of due consideration (i.e. understanding) of all the pro and con arguments, the assessment of those arguments should include adequate understanding of the systems networks referred to in all the pro and con arguments.

3          Conclusion

The implication of the above considerations is, I think, fairly clear: common practice of systems modeling and diagramming does not adequately accommodate questions and arguments about model assumptions, and common representations of the argumentative discourse (issue and argument maps) do not adequately accommodate systems complexity. This means the task of developing better means of meeting both requirements is quite urgent: the development of effective global discourse support platforms for addressing the global crises we face will depend on acceptable solutions to this question. But this is still a vague goal: I have not yet seen anything in the way of specific means of achieving it. Work to do.


A Less Adversarial Planning Discourse Support System

A Fog Island Tavern conversation
about defusing the adversarial aspect of the Argumentative Model of Planning

Thorbjoern Mann 2015

(The Fog Island Tavern: a figment of imagination
of a congenial venue for civilized conversations
about issues, plans and policies of public interest)

– Hi Vodçek, how’s the Tavern life this morning? Fog lifting yet?
– Hello Bog-Hubert, good to see you. Coffee?
– Sure, the usual, thanks. What’s with those happy guys over there — they must be drinking something else already; I’ve never seen them have such a good time here.
– No, they are just having coffee too. But you should have seen their glum faces just a while ago.
– What happened?
– Well, they were talking about the ideas of our friend up at the university, about this planning discourse platform he’s proposing. They were bickering about whether the underlying perspective — the argumentative model of planning — should be used for that, or some other theory, systems thinking or pattern language approaches. You should have been there, isn’t that one of your pet topics too?
– Yes, sorry I missed it. Did they get anywhere with that? What specifically did they argue about?
– It was about those ambitious claims they are all making, about their approach being the best foundation for developing tools to tackle those global wicked problems we are all facing. Each feels that the other’s claims are, well, a little exaggerated, and accuses the other’s pet approach of being far less effective and universally applicable than its proponents think: each one missing just the concerns the other feels are the most important features of their tool. And both lament the fact that neither seems to be as widely accepted and used as they think it deserves.
– Did they have any ideas why that might be?
– One main point seemed to be a mutual blind spot: in the view of the systems guys, the Argumentative Model, besides being too ‘rational’ and argumentative for some people and not acknowledging emotions and feelings, did not accommodate the complexity and holistic perspective of systems modeling; while from the point of view of your argumentative friends, the systems models did not seem to have any room for disagreements and argumentation.
– Right. I am familiar with those complaints. I don’t think they are all justified, but the perceptions that they are need to be addressed. We’ve been working on that.
– Good. Another main issue they were all complaining about — both sides — was that there currently isn’t a workable platform for the planning discourse, even with all the cool technology we now have. And therefore some people were calling for a return to simple tools that can be used in actual meeting places where everybody can come and discuss problems, plans, issues, policies. The ‘design tavern’ that Abbé Boulah kept talking about, remember?
– Yes. It seemed like a good idea, but only for small communities that can meet and interact meaningfully in ‘town hall’- kind places. Like his Rigatopia thing, as long as that community stays small enough.
– Well, they seemed to get stuck in gloom about that issue for a while, couldn’t decide which way to go, and lamenting the state of technology for both sides. That’s when Abbé Boulah showed up for a while, and turned things around.
– How did he do that?
– He just reminded them of the incredible progress computing and communication technology has seen in the last few decades, and suggested that that progress might have been focused on the wrong problems, or simply hadn’t yet gotten around to the real task of their topic — planning discourse support. He told them to explore some opportunities of the technology: possibilities already realized by tools on the market, or just as feasible but not yet produced. He bought them a round of his favorite Slovenian firewater and told them to brainstorm crazy ideas for new inventions for that cause, to be applied first in his Rigatopia community experiment on that abandoned oil rig. That’s what set them off. Boy, they are still having fun doing that.
– Did they actually come up with some useful concepts?
– Useful? Don’t know about that. But there were some wild and interesting ideas I heard them toss around. Strangely, most of them seemed to be about tech gizmos. They seem to think that the technical problem of global communication is just about solved — messages and information can be exchanged instantaneously all over the world, and concepts like Rittel’s IBIS provide an appropriate basis for organizing, storing and retrieving that information — and that the missing pieces have to do with the representation, display and processing of the contributions for decision-making: analysis and evaluation.
– Do you have an example of ideas they discussed?
– Plenty. For the display issue, there was the invention of the solar-powered ‘Googleglass-Sombrero’ — taking the Google glass idea a step further by moving the internet-connected display farther away from the eye, to the rim of a wide sombrero, so that several display maps can be seen and scanned side by side, not sequentially. Overview, see? Which we know today’s cell-phones or tablets don’t do so well. There was the abominable ‘Rollupyersleeve-watch’. It is actually a smartphone, but would have an expandable screen that can be rolled up to your elbow so you can see several maps simultaneously. Others were still obsessed with making real places for people to actually meet and discuss issues, where the overall discourse information is displayed on the walls, and where they would be able to insert their own comments to be instantly added and the display updated. ‘Democracy bars’, in the tradition of the venerable sports bars. Fitted with ‘insect-eye’ projectors to simultaneously project many maps on the walls of the place, with comments added on their own individual devices and uploaded to the central system.
– Abbé Boulah’s ‘Design Tavern’ brought into the 21st IT age. Okay!
– Yes, that one was immediately grabbed by the corporate – economy folks: Supermarkets offering such displays in the cafe sections, with advertisement, as added P/A attractions…
– Inevitable, I guess. Raises some questions about possible interference with the content?
– Yes, of course. Somebody suggested a version of the old equal-time rule: that any such ad had to be immediately accompanied by a counter-ad of some kind, to ‘count’ as a P/A message.
– Hmm. I’d see a lot of fruitless lawsuits coming up about that.
– Even the evaluation function generated its innovative gizmos: there was a proposal for a pen (for typing comments) with a sliding up-down button that instantly lets you send your plausibility assessment of proposed plans or claims. It was instantly countered by another idea: equipping smartphones with a second ‘selfie-camera’ that would read and interpret your facial expressions when you read a comment or argument — not only nodding for agreement and shaking your head to signal disagreement, but also raised eyebrows, frowns, smiles, confusion — instantly sending them to the system, as instant opinion polls. That system would then compute the assessment level of the entire group of participants in a discussion and send it back to the person who made the comment, suggesting more evidence, better justification, etc.
– Yes, there are some such possibilities that a kind of ‘expert system’ component could provide: not only doing some web research on the issues discussed, but actually taking part in the discussion, as it were. Didn’t we discuss the idea of such a system scanning both the record of discussion contributions and the web, for instance for similar cases? I remember Abbé Boulah explaining how a ‘research service’ of such a system could scan the database for pertinent claims and put them together into pro and con arguments the participants hadn’t even thought of yet. Plus, of course, suggesting candidate questions about those claims that should be answered, or for which support and evidence should be provided, so people could make better-informed assessments of their plausibility.
– I’m glad you said ‘people’ making such assessments. Because contrary to the visions of some Artificial Intelligence enthusiasts, I don’t think machines, or the system, should be involved in the evaluation part.
– Hey, so all their prowess in drawing logical conclusions from data and stored claims should be kept from making valuable contributions: are you a closet retro-post-neoluddite? Of course I agree, though: especially regarding the ought-claims of the planning arguments, the system has no business making judgments. But the system would be ‘involved’, wouldn’t it? Processing and calculating participants’ evaluation results? Taking the plausibility and importance judgments and calculating the resulting argument plausibility, argument weights and conclusion plausibility, as well as the statistics of those judgments for the entire group of participants?
– You are right. But those results should always just be displayed for people to make their own final judgments in the end, wasn’t that the agreement? Those calculation results should never be used as the final decision criterion?
– Yes, we always emphasized that; but in a practical situation it’s a fine balancing act. Just like decision-makers were always tempted to use some arbitrary performance measure as the final decision criterion, just because it was calculated from a bunch of data, and the techies said it was ‘optimized’. But hey, we’re getting into a different subject here, aren’t we: How to put all those tools and techniques into a meaningful design for the platform, and a corresponding process?
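The calculation the two are describing — argument plausibility from premise plausibilities, then a weighted overall score across pro and con arguments — can be sketched as follows. The specific aggregation rules here (product of premise plausibilities, signed weighted sum over arguments) are one plausible illustrative choice, not necessarily the formulas the approach prescribes:

```python
from math import prod

def argument_plausibility(premises):
    """An argument is only as plausible as its premises taken together:
    here, the product of premise plausibilities on a 0..1 scale."""
    return prod(premises)

def proposal_score(arguments):
    """Signed weighted sum: +1 for 'pro', -1 for 'con'; the weights
    express the relative importance of the arguments and sum to 1."""
    return sum(sign * weight * argument_plausibility(premises)
               for sign, weight, premises in arguments)

arguments = [
    (+1, 0.5, [0.9, 0.8, 1.0]),  # pro: FI, D, F premise plausibilities
    (-1, 0.3, [0.6, 0.9, 0.7]),  # con
    (+1, 0.2, [0.5, 0.5, 1.0]),  # weaker pro
]
score = proposal_score(arguments)
print(round(score, 4))  # 0.2966
```

Crucially, as the conversation insists, such a score would be displayed as an aid to individual judgment, never used as the final decision criterion.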
– Good point. Work to do. Do you think we’re ready to sketch out a first draft blueprint of that platform, even if it would need tools that still have to be developed and tested?
– Worth a try, even if all we learn is where there are still holes in the story. Hey guys, why don’t you come over here, let’s see if we can use your ideas to make a whole workable system out of it: a better Planning Discourse Support System?
– Hi Bog-Hubert. Okay, if you feel that we’ve got enough material lined up now?
– We’ll see. How should we start? Does your Robert’s Rules expert have any ideas? Commissioner?
– Well, thanks for the confidence. Yes, I do think it would be smart to use the old parliamentary process as a skeleton for the process, if only because it’s fairly familiar to most folks living in countries with something like a parliament. Going through the steps from raising an issue to a final decision, to see what system components might be needed to support each of those steps along the way, and then adding what we feel are missing parts.
– Sounds good. As long as Vodçek keeps his bar stocked, we can always go back to square one and start over if we get stuck. So how does it start?
– I think there are several possible starting points: Somebody could just complain about a problem, or already make a proposal for how to deal with it, part of a plan. Or just raise a question that’s part of those.
– Could it just be some routine agency report, monitoring an ongoing process, — people may just accept it as okay, no special action needed, or decide that something should be done to improve its function?
– Yes, the process could start with any of those. Can we call it a ‘case’, as a catchall label, for now? But whatever the label, there needs to be a forum, a place, a medium to alert people that there is a candidate case for starting the process. A ‘potential case candidate listing’, for information. Anybody who feels there is a need to do something could post such a potential case. It may be something a regular agency is already working on or should address by law or custom. But as soon as somebody else picks it up as something out of the ordinary, significant enough to warrant a public discussion, the system will ‘open’ the case, which means establishing a forum corner, a venue or ‘site’ for its discussion, and invite public contributions to that discussion.
– Yeah, and it will get swamped immediately with all kinds of silly and irrelevant posts. How does the system deal with that? Trolls, blowhards, just people out to throw sticks into the wheels?
– Good question. The problem is how to sort out the irrelevant stuff — but who is to decide what’s what? And throw out what’s irrelevant?
– Yes, that itself could lead to irrelevant and distracting quarrels. I think it’s necessary to have a first file where everything is kept in its original form, a ‘Verbatim’ depository, for reference, and to deal with the decision about what’s relevant by other means, for example through the process of assessing the merit of contributions. First, everybody who makes a contribution will get a ‘basic contribution credit point’, a kind of ‘present’ score, which is initially just ‘empty’. If the contribution is the first item of some significance for the discussion, that point will get filled with an adjustable but still neutral score; mere repetitions will stay ‘noted’ but empty.
– Good idea! That will be an incentive to post significant information fast, and keep people from filling the system with the same stuff over and over.
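The crediting rule just described can be stated precisely: the first significant (non-duplicate) contribution earns a fillable, initially neutral credit point, while repetitions are recorded but earn nothing. A minimal sketch — duplicate detection by exact text match is a simplifying assumption; a real system would need fuzzier matching:

```python
def credit_contributions(posts):
    """posts: list of (author, content) pairs, in posting order.
    Returns one (author, status) pair per post: 'neutral' means a
    fillable credit point, 'empty' means a noted repetition."""
    seen = set()
    ledger = []
    for author, content in posts:
        if content in seen:
            ledger.append((author, "empty"))    # noted, but no credit
        else:
            seen.add(content)
            ledger.append((author, "neutral"))  # adjustable, still neutral
    return ledger

posts = [("ann", "Plan P floods the east side"),
         ("bob", "Plan P floods the east side"),  # repetition
         ("ann", "Costs fall on renters")]
print(credit_contributions(posts))
# [('ann', 'neutral'), ('bob', 'empty'), ('ann', 'neutral')]
```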
– Yes. But then you’ll need some sorting out of all that material, won’t you?
– True. You might consider that as part of an analysis service, determining whether a post contains claims that are ‘pertinent’ to the case. That may just consist of matching a term — a ‘topic’ or subject — that’s part of the initial case description, or that provides a link to a contribution already posted. Each term or topic is then listed as the content subject of a number of possible questions or issues: the ‘potential issue family’ of factual, explanatory, instrumental and deontic (ought-) questions that can be raised about the concept. This can be done according to the standard structure of an IBIS (issue-based information system): a ‘structured’ or formalized file that consists of the specific questions and the respective answers and arguments to them. Of course somebody or something must be doing this — an ‘Analysis’ or ‘Formalizing’ component, either some human staff or an automated system that still needs to be developed. Ideally, the participants will learn to do this structuring or formalizing themselves, to make sure the formalized version expresses their real intent.
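Generating the ‘potential issue family’ for a topic is mechanical enough to automate. The four templates below follow the factual / explanatory / instrumental / deontic question types just named, though the exact wording of the templates is my own illustration:

```python
QUESTION_TEMPLATES = {
    "factual":      "What is the current state of {topic}?",
    "explanatory":  "What causes or explains {topic}?",
    "instrumental": "How could {topic} be changed or achieved?",
    "deontic":      "What ought {topic} to be?",
}

def issue_family(topic: str) -> dict:
    """The 'potential issue family' for one topic in the structured file."""
    return {kind: tmpl.format(topic=topic)
            for kind, tmpl in QUESTION_TEMPLATES.items()}

family = issue_family("downtown parking")
print(family["deontic"])  # What ought downtown parking to be?
```

Each generated question becomes a slot in the structured file to which answers and arguments can then be attached.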
– And that ‘structured’ file will be accessible to everybody, as well as the ‘verbatim’ file?
– Yes. Both should be publicly accessible as a matter of principle. But access ‘in principle’ is not yet very useful. Such files aren’t very informative or interesting to use. Most importantly, they don’t provide the overview of the discussion and of the relationship between the issues. This is where the provision and rapid updating of discourse maps becomes important. There should be maps of different levels of detail: topic maps, just showing the general topics and their relationships, issue maps that provide the connections between the issues, and argument maps that show the answers or arguments for a specific issue, with the individual premises and their connections to the issues raised by each premise.
– So what do we have now: a support system with several storage and display files, and the service components to shuffle and sort the material into the proper slots. Al, I see you’ve drawn a little diagram there?
– Yes – I have to doodle all this in visual form to understand it:


Figure 1 — The main discourse support system: basic content components

– Looks about right, for a start. You agree, Sophie?
– Yes, but it doesn’t look that much different from the argumentative or IBIS type system we know and started from. What happened to the concern about the adversarial flavor of this kind of system? Weren’t we trying to defuse that? But how? Get rid of arguments?
– Well, I don’t think you can prevent people from entering arguments — pros and cons about proposed plans or claims. Every plan has ‘pros’ – the benefits or desirable results it tries to produce – and ‘cons’ – its costs, and any undesirable side- and after-effects. And I don’t think anybody can seriously deny that they must be brought up, to be considered and discussed. So they must be acknowledged and accommodated, don’t you think?
– Yes. And the evaluation of pro and con merit of plan proposals, based on the approach we’ve been able to develop so far, will depend on establishing some argument plausibility and argument weight.
– I agree. But isn’t there a way in which the adversarial flavor can be diminished, defused?
– Let’s see. I think there are several ways that can be done. First, in the way the material is presented: the basic topic maps don’t show content as adversarial, and the issue maps can de-emphasize any underlying pro-and-con partisanship by the way the issues are phrased. Whether argument maps should be shown with complete pro and con arguments is a matter of discussion, perhaps best dealt with in each specific case by the participants. This applies most importantly to the way the entire discourse is framed, and the ‘system’ could suggest forms of framing that avoid the expectation of an adversarial win-lose outcome. If a plan is introduced as a ‘take-it-or-leave-it’ proposal to be approved or rejected, inevitably some participants will see themselves as the intended or unintended losing party, which generates adversarial attitudes. If, instead, the discourse is started as an invitation to contribute to the generation of a plan that avoids placing the costs or disadvantages unfairly on some affected folks, and the process explicitly includes the expectation of plan modification and improvement, that attitude will be different.
– So the participants in this kind of process will have to get some kind of manual of proper or suggested behavior, is that right? How to express their ideas?
– I guess that would be helpful. Suggestions, yes; not rules, if possible.
– Also, if I understand the evaluation ideas right, the reward system for contributions can include giving people points for information items that aren’t clearly supporting one party or the other, so individual participants can ‘gain’ by offering information that might benefit ‘the other’ party. Would that help to generate a more cooperative attitude?
– Good point. Before we get to the evaluation part though, there is another aspect — one of the ‘approach shortcomings’, that I think we need to address.
– Right, I’ve been waiting for that: the systems modeling question. How to represent complex relationships of systems models in the displays presented to the participants? Is that what you are referring to?
– Yes indeed.
– So do you have any suggestions for that? It seems that it is so difficult — or so far off the argumentative planners’ radar — that it hasn’t been discussed or even acknowledged, let alone solved, yet?
– Sure, it almost looks like a kind of blind spot. I think there are two ways this might, or should, be dealt with. One is that the system’s research component — here I mean the discourse support system — can have a service that makes searches in the appropriate databases to find information about similar cases where systems models may have been developed, and enters the systems descriptions, equations and diagrams — most importantly, the diagrams — into the structured file and the map displays. In the structured file, questions about the model assumptions and data can then be added — this is the element that is usually missing in systems diagrams. But the diagrams themselves do offer a different and important way for participants to gain the needed overview of the problem they are dealing with.
– So far, so good. But usually the argumentative discussion and the systems models speak different languages, have different perspectives, with different vocabularies. What can we do about that?
– I was coming to that — it was the second way I mentioned. But the first step, remember, is that the systems diagrams are now becoming part of the discussion, and any different vocabulary can be questioned and clarified together with the assumptions of the model. That’s looking at it from the systems side. The other entry, from the argumentative side, can be seen when we take a closer look at specific arguments. The typical planning argument is usually only stated incompletely — just like other arguments. It leaves out premises the arguer feels can be ‘taken for granted’. A more completely stated planning argument would spell out these three premises of the ‘conclusion-claim’, that
‘Proposal or Plan P should be adopted,
          because
          P will lead to consequence or result R, (given conditions C)
           and
          Result R ought to be pursued
          (and
           conditions C are present)’.

The premise in parenthesis, about conditions C, is the one that’s most often not spelled out, or just swept under the rug with phrases such as ‘all else being equal’. But take a closer look at that premise. Those conditions — the ones under which the relationship between P and R can be expected to hold or come true — refer to the set of variables we might see in a systems diagram, interacting in a number of relationship loops. It’s the loops that make the set a true system, in the minds of the systems thinkers.
– Okay, so what?
– What this suggests is, again, a twofold recommendation, that the ‘system’ (the discourse system) should offer as nudges or suggestions for the participants to explore.
– Not rules, I hope?
– No: suggestions and incentives. The first is to use existing or proposed system diagrams as possible sources for aspects — or argument premises — to study and include in the set of concerns that should be given ‘due consideration’ in a decision about the case. In other words, turn them into arguments. Of the defused kind, Sophie. The second ‘nudge’ is that the concerns expressed in the arguments or questions of people affected by the problem at hand, or by proposed solutions, should be used as material for the very construction of the model of the problem situation by the system modeler for the case at hand.
– Right. For the folks who are constructing systems models for the case at hand.
– Yes, that would likely be part of the support system service, but there might be other participants getting involved in it too.
– I see: reminders, as in ‘do you think this premise refers to a variable that should be entered into the systems model?’
– Good suggestion. This means that the construction of the system model is a process accompanying the discourse. One cannot precede the other without remaining incomplete. It also requires a constant ‘service’ of translation between ordinary language and the jargon of the systems model — the ‘systems’ vocabulary as well as the special vocabulary of the discipline within which the studied system is located. And of course, translation between different natural languages, as needed. For now, let’s assume that would be one of the tasks of the ‘sorting’ department; we should have mentioned that earlier.
– Oh boy. All this could complicate things in that discourse.
– Sure — but only to the extent that there are concepts that need to be translated, and aspects that are significantly different as seen from ordinary ‘argumentative’ or ‘parliamentary’ planning discussion perspective as opposed to a systems perspective, don’t you agree?
– So let’s see: now we have some additional components in your discourse support system: the argument analysis component, the systems modeling component, the different translation desks, and the mapping and display component. What’s next?
– That would be the evaluation function. From what we know about evaluation — in this case, evaluating the merit of discussion contributions — the process of clarifying, testing, and improving our initial offhand judgments into more solidly founded, deliberated judgments requires that we make the deliberated overall judgments a function of, that is, dependent on, the many ‘partial’ judgments provided in the discussion and in the models. And we have talked about the need for a better connection between the discourse contribution merit and the decision judgment. This is the purpose of the discourse, after all, right?
– Yes. And the reason we think there needs to be a distinct ‘evaluation’ step or function is that quite often, the link between the merit of discussion contributions and the decision is too weak, perhaps short-circuited, prejudiced, or influenced by hidden, improper agenda considerations, and needs to be more systematic and transparent. In other words, the decisions should be more ‘accountable’.
– That’s quite a project. Especially the ‘accountability’ part — perhaps we should keep that one separate to begin with? Let’s just start with the transparency aspect?
– Hmm. You don’t seem too optimistic about accountability? But without that, what use is transparency? If decision makers, whoever they might be in a specific case, don’t have to be accountable for their decision, does it matter how transparent they are? But okay, let’s take it one item at a time.
– Seems prudent and practical. Can you provide some detail about that evaluation process?
– Let me see. We ask the participants in the process to express their judgments about various claims in the process, on some agreed-upon scale. The evaluation process of our friend suggests a plausibility scale. It applies to judgments about how certain we are that a claim is true, or how probable it is — or how plausible, if neither truth nor probability really apply, as in ought-claims. It ranges from some positive value, agreed to mean ‘couldn’t be more plausible’, to the corresponding negative value, ‘couldn’t be less plausible’, with a midpoint of zero expressing ‘don’t know’ or ‘can’t judge’.
– What about those ‘ought’ claims in the planning argument? Just ‘plausible’ doesn’t really express the ‘weighing’ aspect we are talking about?
– Right: for ought-claims — goals, objectives — there must be a preference ranking or a scale expressing weight of relative importance. The evaluation ‘service’ system component will prepare some kind of form or instrument people can use to express and enter those judgments. This is an important step where I think the adversarial factor can be defused to some extent: if argument premises are presented for evaluation individually, not as part of the arguments in which they may have been entered originally, and without showing who was the original author of a claim, can we expect people to evaluate them more according to their intrinsic merit and evidence support, and less according to how they bolster this or that adversarial party?
– I’d say it would require some experiments to find out.
– Okay: put that on the agenda for next steps.
– Can you explain how the evaluation process would continue?
– Sure. First let me say that the process should ideally include assessment during all phases of the discourse. If there is a proposal for a plan or a plan detail, for example, participants should assign a first ‘offhand’ overall plausibility score to it. That score can then be compared to the final ‘deliberated’ judgment, as an indicator of whether the discussion has achieved a more informed judgment, and what difference that made. Now, for the details of the process. To get an overall deliberated plausibility judgment, people only need to provide plausibility scores and importance weights for the individual premises of the pro and con planning arguments. For each individual participant, the ‘system’ can then calculate the plausibility of each argument, its argument weight — based on the weight the person has assigned to its deontic premise — and the person’s deliberated proposal plausibility, as a function of all the argument weights.
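The calculation just described can be sketched in a few lines. The specific aggregation functions below (multiplying premise plausibilities, summing weighted arguments) are only one plausible choice made up for the illustration; the dialogue deliberately leaves the actual functions open for discussion and experiment:

```python
# A minimal sketch of one participant's deliberated proposal plausibility.
# The aggregation functions are illustrative assumptions, not settled rules.

def argument_plausibility(premise_plausibilities):
    """Combine premise plausibilities (each in [-1, +1]) into one score.
    Here: their product, so one implausible premise weakens the whole argument."""
    result = 1.0
    for p in premise_plausibilities:
        result *= p
    return result

def argument_weight(arg_plausibility, deontic_weight):
    """Weight the argument by the importance assigned to its deontic (ought) premise."""
    return arg_plausibility * deontic_weight

def proposal_plausibility(argument_weights):
    """Deliberated proposal plausibility: here, simply the sum of argument weights
    (pro arguments contribute positive values, con arguments negative ones)."""
    return sum(argument_weights)

# One participant's judgments for one pro and one con argument (values made up):
pro = argument_weight(argument_plausibility([0.8, 0.9]), deontic_weight=0.7)
con = argument_weight(argument_plausibility([0.5, -0.6]), deontic_weight=0.3)
print(round(proposal_plausibility([pro, con]), 3))  # prints 0.414
```

Note the design consequence of the product rule: a single premise judged near zero (‘don’t know’) pulls the whole argument’s plausibility toward zero, which matches the intuition that an argument is no stronger than its shakiest premise.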
– I seem to remember that there were some questions about how all those judgments should be assembled and aggregated into the next deliberated value?
– Yes, there should be some more discussion and experiments about that. But I think those are mostly technical details that are solved in principle, and can be decided upon by the participants to fit the case.
– And the results are then posted or displayed to the group for review?
– Yes. This may lead to more questions, of course, or to requests for more research and discussion, if there are claims that don’t seem to have enough support to make reasonable assessments, or for which the evidence is disputed. I see you are getting worried, Sophie: will this go on forever? There’s a kind of stopping rule: when there are no more questions or arguments, the process can stop and proceed to the decision phase.
– I think the old parliamentary tradition of ‘calling the question’ when the talking has gone on for too long should be kept in this system.
– Sure, but remember, that one was needed mainly because there was no other filter for endless repetition of the same points wrapped in different rhetoric. The rule of adding the same point only once into the set of claims to be evaluated will put a damper on that, don’t you think?
– So Al, did you add the evaluation steps to your diagram?
– Yes. Here’s what it looks like now:

[Image: AM wo ADV 14c]

Figure 2 — The discourse support system with added evaluation components

– Here is another suggestion we might want to test, and add to the picture – coming back to the idea of the reward system helping to reduce the adversarial aspect: We now have some real measures — not only for the individual claims or information items that make up the answers and arguments to questions, but also for the plausibility of plan proposals that are derived from those judgments. So we can use those as part of a reward mechanism to get participants more interested in working out a final solution and decision that is more acceptable to all parties, not just to ‘win’ advantages for their ‘own side’.
– You have to explain that, Bog-Hubert.
– Sure. Remember the contribution credit points that were given to everybody, for making a contribution, to encourage participation? Okay: in the process of plausibility and importance assessment we were asking people to do, to deliberate their own judgments more carefully, they were assessing the plausibility and weight of relative importance of those contributions, weren’t they? So if we now take some meaningful group statistic of those assessments, we can modify those initial credits by the value or merit the entire group was assigning to a given item.
– ‘Meaningful’ statistic? What are you saying here? You mean, not just the average or weighted average?
– No, some indicator that also takes account of the degree of support presented for a claim, and the degree of agreement or disagreement in the group. That needs to be discussed. In this way, participants will build up their ‘contribution merit credit account’. You could then also earn merit credits for information that — from a narrow partisan point of view — would be part of an argument for ‘the other side’: credit for information that serves the whole group.
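A minimal sketch of such a credit adjustment, under stated assumptions: the ‘meaningful’ statistic chosen here (median plausibility, discounted by the spread of scores as a crude disagreement measure) is purely illustrative, since the dialogue says the choice still needs to be discussed:

```python
from statistics import median, pstdev

# Illustrative group-merit statistic and credit adjustment.
# Both formulas are assumptions invented for this sketch.

def group_merit(plausibility_scores):
    """Group assessment of one contribution: the median score,
    discounted when the group disagrees strongly about it."""
    m = median(plausibility_scores)
    disagreement = pstdev(plausibility_scores)
    return m * (1.0 - min(disagreement, 1.0))

def adjusted_credit(initial_credit, plausibility_scores):
    """Modify the flat participation credit by the group's merit judgment.
    Negatively judged contributions reduce the account."""
    return initial_credit * group_merit(plausibility_scores)
```

A claim the group judges implausible yields a negative adjustment, shrinking the author’s account, which is the intended damper on trivial or false postings.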
– Ha! Now I understand what you said initially about the evaluation function also serving to reduce the amount of trivial, untrue, and plain irrelevant stuff people might post in such discussions: if their information is judged negatively on the plausibility scale, that will reduce their credit accounts. A way to reward good information that can be well supported, and to discourage BS and false information… I like that.
– Good. In addition to that, people could also get credit points for the quality of the final solution — assuming that the discourse includes efforts to modify initial proposals some people find troublesome, to become more acceptable — more ‘plausible’ — to all parties. And the credit you earn may be in part determined by your own contribution to that result. So there are some possibilities for such a system to encourage more constructive cooperation.
– Sounds good. As you said, we should try to do some research to see whether this would work, and how the reward system should be calibrated.
– So the reward mechanism adds another couple of components to your diagram, Al?
– Yes. Bog-Hubert said that the evaluation process should really be going on throughout the entire process, so the diagram, which shows it just after the main evaluation of the plan is completed, is a little misleading. I tried to keep it simple. And there’s really just one component that will have to keep track of the different steps:

 

[Image: AM wo ADV 14d]

Figure 3 — The process with added contribution reward component

 

– Looks good, thanks, Al. But what I don’t see there yet is how it connects with the final decision. I think you got derailed from finishing your explanation of the evaluation process, Bog-Hubert?
– Huh? What did I miss?
– You explained how each participant got a deliberated proposal plausibility score. Presumably one that’s expressed on the same plausibility scale as the initial premise plausibility judgments, so we can understand what the number means. Okay. Then what? How do you get from that to a common decision by the entire community of participants?
– You are right; I didn’t get to that. Well…
– Why doesn’t the system calculate an overall group proposal plausibility score from the individual scores?
– I guess there are some problems with that step, Vodçek — if you mean something like the average plausibility score. Are you saying that it should be the deciding criterion?
– Well… why not? It’s like all those opinion polls, only better, isn’t it? And definitely better than just voting?
– No, friends, I don’t think the judgment about the final decision should be ‘usurped’ by such a score. For one, unless there are several proposals that have all been evaluated in this way, so you could say ‘pick the one with the highest group plausibility score’, you’d have to agree on a kind of threshold plausibility a solution would have to achieve to get accepted. And that would just be another controversial issue. Also, a simple group average could gloss over or hide serious differences of opinion, and, like majority voting, just override the concerns of minority groups. So such statistics should always be accompanied by measures of the degree of consensus and disagreement, at the very least.
– Couldn’t there be a rule that a proposal is acceptable if all the individual final plan plausibility scores are better than the existing problem situation? Ideally, of course, all on the positive side of the plausibility scale, but in a pinch at least better than before?
– That’s another subject for research, experiments, and agreements in each situation. But in reality, decisions are made according to established (e.g. constitutional) rules and conventions, habits, or ad hoc agreements. Sure, the discourse support system could provide some useful suggestions or advice to the decision-makers, based on the analysis of the evaluation results — a ‘decision support component’. One kind of advice might be to delay the decision if the overall plausibility for a proposal is too close to the midpoint (‘zero’) value of the plausibility scale — indicating the need for more discussion, more research, or more modification and improvement. Similarly, if there is too much disagreement in the overall assessment — if a group of participants shows very different results from the majority, even if the overall ‘average’ result looks like there is sufficient support — the suggestion may be to look at the reasons for the disagreement before adopting a solution. Back to the drawing board…
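The advice rules just described can be sketched as follows. The numeric thresholds are invented placeholders, since the dialogue says such values would have to be agreed upon in each case:

```python
from statistics import mean, pstdev

# Illustrative 'decision support component' advice rules.
# Threshold values are arbitrary assumptions, not settled figures.

def decision_advice(individual_plausibilities,
                    indecision_threshold=0.2,
                    disagreement_threshold=0.35):
    overall = mean(individual_plausibilities)
    spread = pstdev(individual_plausibilities)
    if abs(overall) < indecision_threshold:
        return "delay: more discussion, research, or improvement needed"
    if spread > disagreement_threshold:
        return "delay: examine the reasons for disagreement first"
    return ("proceed: sufficient and sufficiently shared support"
            if overall > 0
            else "reject: shared assessment is negative")
```

A rule like the one Vodçek suggests next, that every individual score should at least beat the status quo, could be added as a further check before the ‘proceed’ branch.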
– Getting back to the accountability aspect you promised to discuss: Now I see how that may be using the evaluation results and credit accounts somehow — but can you elaborate how that would work?
– Yes, that’s a suggestion thrown around by Abbé Boulah some time ago. It uses the credit point account idea as a basis of qualification for decision-making positions, and the credit points as a form of ‘ante’ or performance bond for making a decision. There are decisions that must be made without a lot of public discourse, and people in those positions ‘pay’ for the right to make decisions with an appropriate amount of credit points. If the decision works out, they earn the credits back, or more. If not, they lose them. Of course, important decisions may require more points than any individual has compiled; so others can transfer some of their credits to the person, unrestricted, or dedicated to specific decisions. So they have a stake — their own credit account — and lose their credits if they make or support poor decisions. This also applies to decisions made by bodies of representatives: they too must put up the bond for a decision, and the size of that bond may be larger if the plausibility evaluations by discourse participants show significant differences, that is, disagreements. They take a larger risk in making decisions about which some people have significant doubts. But I’m sorry, this is getting away from the discussion here, about the discourse support system.
– Another interesting idea that needs some research and experiments before the kinks are worked out.
– Certainly, like many other components of the proposed system — proposed for discussion. But a discussion that is very much needed, don’t you agree? Al, do you have the complete system diagram for us now?
– So far, what I have is this — for discussion:

[Image: AM wo ADV 14]

Figure 4 — The Planning Discourse Support System — Components

– So, Bog-Hubert: should we make a brief list of the research and experiments that should be done before such a system can be applied in practice?
– Aren’t the main parts already sufficiently clear so that experimental application for small projects could be done with what we have now?
– I think so, Vodçek — but only for small projects with a small number of participants and for problems that don’t have a huge amount of published literature that would have to be brought in.
– Why is that, Bog-Hubert?
– See, Sophie: the various steps have been worked through and described to explain the concept, but it had to be done with different common, simple software programs that are not integrated: the content from one component in Al’s diagram has to be transferred ‘by hand’ to the next. For a small project, that can be done by a small support staff with a little training. And that may be sufficient to do a few of the experiments we mentioned to fine-tune the details of the system. But for larger projects, what we’d need is a well-integrated software program that could do most of the transferring work from one component to the next ‘automatically’.
– Including creating and updating the maps?
– Ideally, yes. And I haven’t seen any programs on the market that can do that yet. So that should be the biggest and top-priority item on the research ‘to do’ list. Do you remember the other items we should mention there?
– Well, there were a lot of items you guys mentioned in passing without going into much detail – I don’t know if that was because any questions about those aspects had been worked out already, or because you didn’t have good answers for them? For example, the idea of building ‘nudging’ suggestions into the system to encourage participants to put their comments and questions into a form that encourages cooperation and discourages adversarial attitudes?
– True, that whole issue should be looked into more closely.
– What about the issue of ‘aggregation functions’ – wasn’t that what you called them? The way participants’ plausibility and importance judgments about individual premises of arguments, for example, get assembled into argument plausibility, argument weights, and proposal plausibility?
– Not to forget the problem of getting a reasonable measure of group assessment from all those individual judgment scores.
– Right. It may well end up being a multivariable measure, not just a single one. Like the weather, we need several variables to describe it.
– Then there is the whole idea of those merit points. It sounds intriguing, and the suggestion to link them to the group’s plausibility assessments makes sense, but I guess there are a lot of details to be worked out before it can be used for real problems.
– You say ‘real problems’ – I guess you are referring to the way they could be used in a kind of game, just like the one we ran here in the Tavern last year about the bus system, where the points are just part of the game rules, as opposed to real cases. I think the detailed development of this kind of game should be on the list too, since games may be an important tool to make people familiar with the whole approach. How to get these ideas out there may take some thinking too, and several different tools. But using these ideas for real cases is a whole different ball game, I agree. Work to do.
– And what about the link between all those measures of merit of people’s information and arguments and the final decision? Isn’t that going to need some more work as well? Or will it be sufficient to just have the system sound an alarm if there is too much of a discrepancy between the evaluation results and, say, a final vote?
– We’ll have to find out – as we said, run some experiments. Finally, to come back to our original problem of trying to reduce the adversarial flavor of such a discourse: I’d like to see some more detail about the suggestion of using the merit point system to encourage and reward cooperative behavior. Linking the individual merit points to the overall quality of the final decision — the plan the group is ending up adopting — sounds like another good idea that needs more thought and specifics.
– I agree. And this may sound like going way out of our original discussion: we may end up finding that the decision methods themselves may need some rethinking. I know we said to leave this alone, accept the conventional, constitutional decision modes just because people are used to them. But don’t we agree that simple majority voting is not the ultimate democratic tool it is often held out to be, but a crutch, a discussion shortcut, because we don’t have anything better? Well, if we have the opportunity to develop something better, shouldn’t it be part of the project to look at what it could be?
– Okay, okay, we’ll put it on the list. Even though it may end up making the list a black list of heresy against the majesty of the noble idea of democracy.
– Now there’s a multidimensional mix of metaphors for you. Well, here’s the job list for this mission; I hope it’s not an impossible one:
– Developing the integrated software for the platform
– Developing better display and mapping tools, linked to the formalized record (IBIS)
– Developing ‘nudge’ phrasing suggestions for questions and arguments that minimize adversarial potential
– Clarifying questions about aggregation functions in the evaluation component
– Improving the linkage between evaluation results (e.g. argument merit) and decision
– Clarifying, elaborating the discourse merit point system
– Adding improvement / modification options for the entire system
– Developing alternative decision modes using the contribution merit evaluation results.
– That’s enough for today, Bog-Hubert. Will you run it by Abbé Boulah to see what he thinks about it?
– Yeah, he’ll just take it out to Rigatopia and have them work it all out there. Cheers.


On the role of feelings and emotions in the Planning Discourse Support System


A Fog Island Tavern discussion

Sjutusensjuhundreochsytti-sju jäklar, beim heiligen Kyrill von Drögenpütt!
– Bog-Hubert, you’ve got to quit drinking that Slovenian stuff, it makes you cuss in incomprehensible Balkan dialects. I can’t even tell whether I should kick you out of here for inappropriate language.
– Ah, Vodçek, pour me another one. It’s actually some kind of Swedish and German this time. I think.
– Cross-cultural cussing, oh my. What in the world gets you so upset? Anything in your notebook that would have made you rich if you’d thought of it a week ago?
– Huh? You’re confusifying me. No, it’s Abbé Boulah.
– Good grief. What’s he done now?
– It’s not what he’s done but what he hasn’t.
– Well, aren’t we all guilty of some of that sin. I should have paid my utility bill several days ago. But explain.
– Well, you know how he and his buddy have been working on this scheme for a planning discourse support system. On the basis of the old Argumentative Model of Planning, you remember?
– Do I remember? Your ramblings about that one have kept me up beyond too many last calls I care to count. But isn’t it actually a good idea, basically? What’s wrong with it now?
– Well, we are all still working on straightening out some details. But Abbé Boulah and his buddy won’t get moving on those problems. I don’t know whether it’s because they don’t think they are serious enough to fix, or because they don’t know how.
– What problems?
– It’s this misunderstanding that some people have about the argumentative model — that it’s ‘too rational’ and doesn’t allow for feelings and emotions. So in a few of the first application experiments, the people didn’t even get started on working with it. Well, Abbé Boulah and his buddy are insisting that the model allows for any subject and concern to be brought up in the discussion — as Rittel said, anything can be dealt with as questions and arguments and answers, it’s the most general framework anybody has come up with. So they won’t change anything about the basic concept.
– And you think that those critics are right? That the argumentative model does not — how do they put it — accommodate feelings and emotions?
– They are right! Some people are just put off or intimidated by the pretense of logic and rationality in the term ‘argumentative model’, which seems to ignore emotions.
– Huh, Sophie, good morning. You’ve got a point there. I don’t care whether they are right or wrong. The fact that they are put off by what they think it is when they hear ‘argumentative model’ is the problem. It’s real. So I think that needs to be dealt with, somehow.
– I agree. But what do you think they should do? Let’s assume those folks are right. That feelings and emotions should play some significant role in planning discussions. Why do they think that?
– Some people mention recent research that seems to show that when people make decisions, the regions of the brain that deal with emotions show significant activity some time — fractions of seconds — before the thinking and reasoning areas signal that a decision has been made. So they conclude that the emotional side has actually made the decision before the thinking part has even processed the reasons for it.
– Hmm. So what are they saying: because the emotions are calling the shots, the decision is better than what the reasoning part would have come up with?
– I don’t know if they actually believe or are claiming that. Though it does sound like it when they come up with that old bit of ‘going with your gut feelings’. And I don’t really care about that either…
– Wait: isn’t there some good explanation for that? That there may be some piece of information about the situation that the brain has picked up only subconsciously — some rustling in the forest that the ears have barely registered — but that the conscious brain hasn’t yet interpreted and processed? But the unconscious has produced the gut feeling that there may be a dangerous predator sneaking up on you? That seems like a very good reason to pay attention to that gut feeling, don’t you think?
– Yes: So why don’t you care about that?
– Sophie, I do care about those feelings. I have gone by my gut feelings many times myself. And it often turned out that they were right — that there actually was a piece of information that called for attention and influenced the decision. But hey, there were also many times when there wasn’t anything to be concerned about. So often that people around me began to think I was overly paranoid. The issue is: how do I know when the gut feeling is right and when it’s not?
– So that’s another reason to care about it, isn’t it?
– Sure. But does that whole issue apply to the problem of planning discourse about public issues? Even if it’s just you and me discussing a plan. My gut feeling says do A, but your gut tells you something else — what should we do about that?
– I see what you are saying. Unless your gut also tells you to hit me over the head – yeah, yeah, for my own protection or good — we need to talk about it.
– Right. It has to be brought out in the discussion. It’s not enough to say ‘my gut tells me to do, or not to do this’ — when there are different gut feeling signals, they need to be made explicit and explored, discussed. And for large public issues, there is even a legitimate question, in Abbé Boulah’s opinion, whether individual people’s feelings should play a role in the decisions. Not that he says that they shouldn’t be voiced if participants in the discussion feel they are important — but merely private, individual feelings without explanation should not be allowed to determine decisions that affect many people over a long time.
– You don’t agree with that?
– I think there is a case to be made that people who insist that feelings should play a role even in decisions about large scale plans, should offer some evidence that their feelings are shared by a significant number of other people. But in principle: aren’t plans and planning discussions meant to produce solutions that people agree with? That they like, and feel good about? Future situations of their lives that they expect will be emotionally satisfactory? Help their pursuit of happiness?
– I can’t disagree with that, Sophie. But isn’t there a difference between ‘respecting’ someone’s feelings, and accepting them sight unseen as a reason for rejecting or accepting a public decision? So if we accept that emotions should play a role even in large-scale public decisions: what role should they play?
– You mean, other than just being brought up in the discussion and examined?
– Well, yes.
– In other words, it seems you are staying within the assumption that there is, or should be, a discussion. A discourse. And that it consists of questions, issues, and — among other things: arguments? Or do you think you can keep people from arguing in discussions?
– I see, Bog-Hubert. Yes, we are still talking argumentative model. Or what other models are such critics proposing to use as the basis for public planning?
– Alternative models? To my knowledge, they tend to stay silent on that question. At least, I haven’t heard any alternative proposals in those situations. ‘It’s too rational’ or ‘It doesn’t acknowledge emotions’ — that’s usually the end of it. Of course there are a number of other approaches to problem solving and planning. But they don’t engage the issue of argumentation very well either.
– What are those?
– Well Sophie, there is the whole realm of ‘Systems Thinking’ approaches — where the approach is to develop models and diagrams of the ‘whole’ system or problem situation, with all its factors and relationships. Very powerful and useful, if done right, in revealing the complexity of systems and their sometimes counter-intuitive behavior.
– I agree. But?
– Think about it, Vodçek: there is hardly ever any talk about how they get all the information that goes into the models (other than ‘research’, which may take the form of opinion, ‘user need’, or customer preference surveys or some such tools, usually done too early on, at the beginning of the model development work). Nor about how they resolve any disagreements about those assumptions. It simply isn’t talked about. In the finished model diagrams, it seems that all controversies and disagreements are assumed to have been settled.
– True – I have been wondering about that myself. Which means that what the modeler-analysts have settled upon are their own perspectives or prejudices?
– Don’t let them hear such heretical thoughts. To be fair, they are trying; and convinced that their data support those views.
– Well. Let’s just keep the question unsettled for now. Any other approaches? Examples?
– Sure. Just some examples: there are approaches like the ‘Pattern Language’, — you know that one?
– Yes — the ‘Timeless Way of Building’ books by Alexander? But isn’t that mainly about buildings?
– Yes. Buildings, construction, urban design. In my opinion, that Pattern Language essentially aims at developing a collection of recipes or guidelines — ‘pattern’ sounds a little less prescriptive than the rules they really are — that guarantee a good solution if they are applied properly, and therefore don’t need to be discussed or evaluated in any formal sense. No discussion or arguments there either.
– So what role do feelings and emotions play in those approaches?
– I guess the same accusation of not accommodating feelings could be raised against many systems models. ‘Stocks and flows’, variables and rates etc. don’t exactly sound like having to do with emotions. Nor does the statistical analysis of data – even when they deal with opinion surveys. Though the systems people would argue like Rittel does for the argumentative model, that if anybody wants to make a model of emotions and what influences those, say, they can do that in the systems vocabulary too.
– And the pattern language?
– The language Alexander developed for building consists of a number of patterns that he and his collaborators found when they looked at places they liked, so they claimed that these patterns solve problems and conflicts inherent in the situation, and make people feel good. ‘If you aren’t using the pattern, you aren’t addressing the problem’ is one of their admonitions. Many of those recipes are quite good, I agree; better than some of the things we see in buildings by other people using different theories, if you can call them that. But he also used the stratagem of the ‘quality without a name’ that can’t be explained. That cuts off the discussion right quickly: nobody wants to be told that ‘if you have to ask, you simply don’t understand it…’
– I see. If you can’t feel it, you are just one of those unfeeling folks…
– And when the patterns are applied, there is no more talk about feelings or emotions, or arguments, pros or cons, either.
– So I take it, we have the same problems with those approaches too? It seems we are back to discussion, discourse, argument, the minute we even begin to examine whether any alternative approach works, and how. So what do you think should be done with the argumentative discourse system you guys are working on, if you are going to stick with it?
– Good question. That’s what I was cussing about. Do you have any suggestions for that problem? Vodçek? Sophie?
– You are asking lil’ ol’ me? Let me think about it. Vodçek looks like he has some ideas: do you, Vodçek?
– Well, if I were bothered by the ‘argumentative’ label – which I’m not, mind you: in my experience around here, it makes people thirsty, you know what I mean? – but if I were, I’d start by changing that label. Isn’t your ‘planning discourse support system’ good enough? Well, it’s a bit long, and doesn’t make a catchy acronym; I’d work on that. And leave the reference to the argumentative model to the academic treatises.
– Okay, that’s just the label, the name. Is that enough to change the reaction of those emotional advocates?
– Maybe not. It might help if the discussion process could be started with some questions that de-emphasize the quarrelsome kind of argument part of the discussion. Starting up with questions about what folks would like to see in the solution or intervention to a problem situation: what would please other groups affected by the situation or potential solutions? What would make them feel good?
– So as to make them focus on things they can agree on right from the start, instead of bickering about proposals they don’t like? Okay: how would you frame that? And how would you keep people from starting out on – or falling into – an adversarial track right from the start? For example, if somebody starts out with some pet proposal of a solution that raises the hackles of everybody else?
– It might take some procedural manipulation, eh?
– Bad idea, if you ask me. Wouldn’t that really aggravate people and get them upset?
– All right. Suppose we start out by agreeing on some sequence beforehand – before any specific proposals are presented – and simply ask what such a proposal would or should look like if it were to make everybody happy? And agree that any ‘preconceived’ solutions be held back until they have been amended and modified with any suggestions brought up in that first phase of discussion?
– I don’t think that any restrictions should be placed on the order or sequence in which people contribute their ideas to the discourse. So whatever is being brought in will have to be accepted and recorded as it comes in. I am assuming a system that is being run not in a meeting, but mainly on some platform with contributions in different media. All entries should be kept as they have been stated, in what we called the ‘Verbatim’ file. But your suggestion could be useful when the material is sorted out and presented in the files and especially maps, structured according to topics and questions or issues. This could be shown in a sequence that encourages constructive ideas, a gradual building up of solutions towards results that are acceptable to everybody, rather than having a proposal plunked down initially, take-it-or-leave-it style, that people have to argue about.
– Sounds like something you should try out.
– Would it help if during that phase, the display of ideas and comments could be kept ‘anonymous’?
– Why, Sophie?
– Well, I have noticed that often, arguments get nasty not because the proposals are bad or controversial, but because of who made them. Jealousy, revenge for past slights, not wanting to give the other guy credit for an idea, or partisanship: ‘anything those guys are proposing we’ll turn down’ – you’ve seen those things, haven’t you?
– Yes, the news media are full of them.
– You don’t seem too excited about that idea. I think it gets in the way of the other provisions of your system – the evaluation part, doesn’t it? But you can still run that system of merit points ‘behind the curtain’ of the system, can’t you, so that people don’t evaluate ideas because of who proposed them?
– I guess so. It might actually help the concern somebody mentioned, that the evaluation of contributions could be deliberately skewed because of such personal or partisanship jealousies. Yes, ideas might be rewarded more fairly for their own merit if you don’t know whose they are.
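A toy sketch of how such ‘anonymous display, merit points behind the curtain’ bookkeeping might work. This is purely illustrative: the class, method names, and data layout are assumptions, not features of any system described here.

```python
import uuid

class DiscoursePlatform:
    """Toy sketch: entries are displayed without authors; merit points
    accrue to an author kept in a hidden map 'behind the curtain'."""

    def __init__(self):
        self.entries = []    # verbatim record: (entry id, text), in order of arrival
        self._authors = {}   # entry id -> author, never shown to evaluators
        self.merit = {}      # author -> accumulated merit points

    def submit(self, author, text):
        entry_id = str(uuid.uuid4())
        self.entries.append((entry_id, text))  # displayed part: id and text only
        self._authors[entry_id] = author       # authorship kept behind the curtain
        return entry_id

    def award_merit(self, entry_id, points):
        author = self._authors[entry_id]       # resolved privately, after evaluation
        self.merit[author] = self.merit.get(author, 0) + points

platform = DiscoursePlatform()
eid = platform.submit("Sophie", "Reroute the bus line past the harbor.")
platform.award_merit(eid, 3)                   # evaluators never saw 'Sophie'
print(platform.merit)                          # {'Sophie': 3}
```

The point of the design is simply that evaluation happens against entry ids, so ideas are judged on their own merit, while the hidden author map still lets the system credit contributors afterwards.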
– We’ll see. Sometimes the ideas are so obviously partisan that everybody can guess whose they are.
– Well, back to the issue here: what about feelings and emotions? So far, what you have suggested is aimed more at defusing or minimizing extraneous feelings about other participants than at feelings about the problem and solution proposals?
– You are right. Again: there could be nudges, suggestions about how to bring those into the discussion. For example, rather than asking participants to state their feelings or concerns outright, those considerations could be phrased as questions like this: ‘Would the proposed solution detail make people feel … ? And if so, what might be done about that?’
– Are you suggesting a rule about how participants should be wording their comments? What if they don’t?
– No, that’s not what I’m saying. The original ‘verbatim’ record entries are worded in whatever way they choose. I’m talking about how they would be displayed in the maps. But of course, that very feature may lead people to formulate their comments in this way, both less ‘argumentative’ and less personal – as you suggested earlier, in a way that indicates a more common feeling than a purely individual one.
– Hey, this all sounds very nice and friendly and cooperative. Well-intentioned. But are we looking at this in the right way? I mean, can all feelings, all emotions be treated the same way? Aren’t some more, let’s say, more ‘legitimate’ than others?
– Good question. What are all the feelings those do-gooders want the system to accommodate?
– I think the judgment about whether they are legitimate or not must be left to the people participating in the discussion, don’t you agree? And it may be very different for different cases and situations? But yes, it may be useful to look at various kinds of emotions, to see whether they require different rules. Yeah, yeah: ‘nudges’, I see you’re frowning at the term ‘rule’, Sophie. Is there a rule against it?
– Can we go with ‘encouragements’ for the time being?
– Sure. If it makes you happy…
– We may have to ask some of the people raising this concern about feelings in the discourse, what kinds of feelings they have in mind. For example: I see many papers and blogposts complaining about other people’s resistance to change. Is that an issue we should look into?
– Ah, the current obsession with change. I suspect that’s often just a fad, something all the management consultants have to promote so they can help management push for their particular brand of change in their organizations. The Starbucks syndrome: try to order a straightforward coffee these days – bad boy: you aren’t honoring the change, the effort of innovation, the increase in choices. As if you couldn’t just mix them up yourself to your own taste if they put out the ingredients. No: You’d get upset – there’s an emotion for you – if the recipe for your plain coffee were changed.
– Hey, calm down, Sophie. Here’s a plain coffee for you. Sumatra. Cream? Sugar? Lemon? Red pepper? Brandy? French? Spanish? For recipes that aren’t on the Starbucks menu yet? But you are right, Bog-Hubert: Resistance to change is a common reaction. And it can be caused by many different emotions. Fear? Irritation over the reduced degree of certainty about the stability of conditions for your own plans? After all, your plans for whatever change or success you pursue are based on some context conditions being predictable and constant, so if those conditions change, you have to hustle and change your plans. Aggravations galore, right? Jealousy? Because the change will reduce your income while increasing that of the ‘change agent’ and other people?
– This all sounds very negative, guys. Aren’t there positive emotions too, that might play a role? Excitement, a sense of adventure, even risk and danger: some people like and thrive on things that elevate their adrenaline levels? Hope? Empathy? Love?
– Hold on, Sophie. You are right, we should consider positive emotions – but isn’t this getting into a whole range of different topics? Attitudes, values, beliefs, habits, personality likes and dislikes? Social pressures and demands. Boredom, curiosity, pride, group affinity and allegiances. Why should a planning discourse platform make special provisions for all of those? Can’t it be left to the people doing the planning in each specific case how they want to deal with such issues?
– I think you are right. But the problem is still that the folks who need to run such discussions or to participate in them don’t see how that is possible in the current version of the approach, the way it is presented. It may boil down to getting the story across, perhaps finding better ways of making people familiar and comfortable with this way of thinking.
– I see where you are headed, Vodçek. Games, am I right?
– Yes. And good examples, stories. But yes, I think games are a good way to familiarize people with new ideas and ways to work together. You remember the weird experiment we did here some time ago – on the bus system issue? I think things like that would help. We should look at that issue again, see if we can develop some different versions, — some simple ones, for kids, and some more advanced ones that can actually be used as entertainment versions of planning and problem-solving tools for real cases. And the issue of how to deal with emotions in those might take the form of trying to make them exciting and fun to play.
– Sounds like a plan… just don’t mention the word ‘argument’?
– Yes. And whenever it does slip into the discussion and people object to it, ask them what other approach they suggest, for developing a better tool? Perhaps they might actually come up with some useful ideas?
– Don’t get your hopes up. They’ll just vote you down.
– Three cheers for the optimist. Yes, I say give them a chance to make some positive and practical contributions. We might learn something. Let’s go to work on it.

AM-PDSS Feelings

Some issues regarding the role of emotions and feelings in the planning discourse


Does Logic Settle The Issue?

Bog-Hubert, entering the Fog Island Tavern, tries to get the attention of Tavern-keeper Vodçek, who is bent over a piece of paper on the counter, scribbling notes in its margin.

– So my friend, are you embarking on a new career of literary critic, or editor? What august publishing entity are you working for?

= Huh? Oh, sorry, didn’t hear you come in. What’s that you say about career? Or did you mean a beer?

– No, thanks, coffee would be fine. I was curious about your editing work there.

= Oh, this? It’s just a letter to my grand-aunt that came back as ‘undeliverable’.

– Your grand-aunt? Hasn’t she been dead for quite some time already? The one and only Aurelia Fryermouth? Or do you have another equally grand aunt?

= No, that’s the one. And yes, she died many years ago. Here’s your coffee.

– And your letter took this long to get back to you? I knew the postal service to that country was kind of, well, unpredictable, but this…

= No, I wrote this about a month ago, and it just came back.

– Of course, if she’s dead. Stands to reason. But now you have me seriously worried. Why in three twister’s name did you write her when she’s dead?

= Oh, I do that all the time. I used to write her whenever I’ve written something I’m not quite sure about, and she always sent me useful, insightful comments back. So now I do the same thing, and when the letters come back after a while, I imagine her comments and write them in the margin, with my comments and rebuttals. Using a four-color Bic to keep track of what’s what. Very useful.

– The Bic? Okay. But this strange habit?

= Don’t knock it, it has kept me out of a lot of trouble. It should be required of everybody who’s writing, especially folks who write all those comments in social media discussions.
Now I admit, not everybody has an aunt Aurelia whose wisdom, even of the imagined kind, can be of such profound quality and assistance. She had a way of cutting through the distractions and BS, and put her finger on the real sore spots, like no teacher I ever had. But just imagining what she would say — just like those so-called conservatives who keep parroting ‘what would Reagan do?’; they really should look for somebody more … well, let’s not get into that — is immensely helpful. Not to mention the time delay. Remember the old advice to ‘sleep on it’ before jumping into action? One night is not enough, my friend. Looking at your impulsive writing after several weeks, during which you may also have gained some extra insights and wisdom, however infinitesimal, given your age (compared to aunt Aurelia’s), can be a very sobering experience.

– Ah. I see. It explains the wide margins you’ve left in the letter. But how do you ensure that your margin entries are not as impulsively imprudent as the original writing?

= Good point. I can only say there is a marked marginal improvement, if you’ll excuse the puns. And I indeed have at times resorted to sending her my comments back for review and revision… She does not mind that, unlike live editors who have a tendency to react with irritated and, if I may say so, rather impolite retorts to even the slightest challenges of their authority.

– Hmm. I admit, it sounds like a wise routine. Widely adopted, it would save humanity from a lot of, — what did you call it? — ‘impulsive’ writing, I agree. But now you have made me curious: what are those profound questions you have her comment upon from the Great Aurelian Beyond?

= Don’t know about profound. This last one was just about the puzzlements I felt about the offer by the climate scientist Dr. Keating, to pay a considerable sum of money to any ‘denier’ (his term) of man-made climate change who could provide a proof, via scientific method, that man-made climate change does not occur.

– I heard something about that, yes. Did he get any such proof?

= Several dozen, as far as I know. So he had a big discussion on his blog about why the proofs didn’t hold water, and responding to all the folks who didn’t think the challenge was serious, or ill-stated etc.

– So what about your puzzlement?

= Well, there were several. One was about why some people seem very reluctant to accept the idea of man-made climate change (MMCC), for reasons they couldn’t really discuss because Dr. Keating insisted on it all being ‘scientific’. But even if the reasons weren’t scientific, does that mean they were totally illegitimate and nonsensical? For example, could it be that — for Dr. Keating and others — accepting the MMCC hypothesis would be seen as also accepting some implications, sight unseen, that might very well be worthy of discussion?

– Okay: seems worth looking into. The other issue?

= That was a strange one. In the discussion, both parties were at times insisting that they had valid logical reasoning on their side, and that the other side was guilty of violating logic. Now somebody found this a bit curious, not to say logically questionable. But upon investigating the matter, he found out that if you just looked at the logical validity of the arguments people put forward — not the truth or probability of the premises — it is entirely possible for both sides to propose quite logically valid arguments for their case, while also being vulnerable to accusations of using arguments that are not deductively valid but perhaps merely ‘plausible’ and logically inconclusive.

– Huh. Can you explain that?

= Sure. Take the main scientific argument about a hypothesis H — in this case, that MMCC is true. You examine the hypothesis and find that if it is true, then we should be able to find some evidence E that must occur as a consequence. That makes the first premise “If H then E must be observed” or H –> E. Now we observe E. Does this ‘prove’ that H is true?
No: it is the inductive reasoning scheme
((H –> E) & E) –> H
which is logically inconclusive, not deductively valid. It’s what they call a ‘just another white swan’ argument: observing any number of white swans — E — does not prove that the hypothesis that all swans are white is true. (If true, it implies that all swans observed will be white.) You can test that with a truth table: there is one case among all the possible states of the world involving H and E that makes the main implication ‘false’. So if that is your main argument for H, you can be accused of using less than deductively valid logic. But observing just one black swan (or even a pink one, for a more colorful discussion) ~E, deductively, validly refutes the hypothesis:
((H –>E) & ~E) –> ~H
This is a perfectly valid deductive argument (called modus tollens by the logicians).
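Both schemes can be checked mechanically by enumerating the truth table, exactly as suggested above. A minimal Python sketch (the function and variable names are ours, purely for illustration):

```python
from itertools import product

def implies(p, q):
    """Material implication: p -> q is false only when p is true and q is false."""
    return (not p) or q

def valid(scheme):
    """A scheme is deductively valid iff it holds in every row of the truth table."""
    return all(scheme(h, e) for h, e in product([True, False], repeat=2))

# 'Just another white swan': ((H -> E) & E) -> H  (affirming the consequent)
white_swan = lambda h, e: implies(implies(h, e) and e, h)
# 'Black swan' refutation:   ((H -> E) & ~E) -> ~H  (modus tollens)
black_swan = lambda h, e: implies(implies(h, e) and not e, not h)

print(valid(white_swan))  # False: the row H false, E true falsifies it
print(valid(black_swan))  # True: holds in all four rows
```

Running this shows exactly the one falsifying row the dialogue mentions for the ‘white swan’ scheme, and no falsifying row at all for modus tollens.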

– I remember that now, yes. But in science, they have developed that trick with the ‘null hypothesis Ho’ — the hypothesis put on its head — haven’t they? And use the same modus tollens argument showing that if E is observed, Ho can’t possibly be true?

= Yes, at least for questions involving large numbers of data observations, where Ho is understood as, e.g., that climate changes happen at random, unrelated to human activities. Then the argument is not claiming total refutation, just that it is so unlikely (having such a low probability) that E could be observed if Ho is true, that Ho is rejected, and provisionally H is accepted instead. This is accepted as valid scientific reasoning.
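A toy illustration of that rejection step. The numbers here (60 years, 45 ‘warm’ years, the 0.05 threshold) are invented for the example, not data from the discussion:

```python
from math import comb

def binom_tail(n, k, p=0.5):
    """P(X >= k) for X ~ Binomial(n, p): how likely the evidence is if Ho is true."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

# Ho: 'warm' and 'cool' years occur at random, each with probability 0.5.
# Hypothetical evidence E: 45 warmer-than-average years out of 60.
p_value = binom_tail(60, 45)
print(p_value < 0.05)  # True: E is very unlikely under Ho, so Ho is rejected
                       # and H is provisionally accepted instead
```

Note that this is the weaker, probabilistic cousin of modus tollens: Ho is not proven false, only shown to make the observed evidence implausibly rare.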

– So if they can produce such evidence and arguments, doesn’t that settle it?

= Not so fast. For one, the argument scheme is a different one, depending on whether you accept or reject the premises. Which are of course part of the controversy. And the evidence E is not a simple observation or experiment result, but consists of a ‘body of evidence’ that starts with the definition and understanding of the things you are discussing. Say: what qualifies as ‘climate change’, what human activities are influencing climate. Then selecting appropriate variables for those concepts, that must be measured: temperature, okay, or CO2 — but of what? Air? Water? Land? Some combination? Measured how, over what time period, where (e.g. on the surface, or in the stratosphere, or somewhere in-between)? Then there must be some distinctive and significant correlation between the measures for climate change and human shenanigans, and some provisions that the correlation actually indicates causation and not the other way around.

– Wait a minute: ‘the other way around’ — what do you mean by that?

= Oh, maybe somebody claims that human activities increase CO2 levels in the air, which change the climate. And somebody else says: wait — the climate is actually cooling — the winters are getting colder, which causes humans to do more heating, which maybe increases CO2 somewhat, but the cause of that is really climate cooling? Even if the argument doesn’t make sense to you, you can’t just dismiss it as illogical; you have to make sure that if you see a correlation between man-made CO2 and climate change, you have the cause and effect going in the proper direction.

– All that puts quite a burden on the scientists who claim there is a connection between climate change and human activities.

= Right. They have to provide solid evidence and arguments for all the components of that body of evidence. And it makes it relatively easy for anybody to challenge that hypothesis: they only have to cast reasonable doubt on one single component of that chain of evidence, to turn the corroborating argument H –> E and E into the modus tollens
((H –> E) & ~E) –> ~H (‘black swan’) argument ‘refuting’ H. Allowing the ‘denier’ to claim a deductively valid argument.
But: what if somebody came up with an argument like this one: “((E –> H) & E) –> H”
(“If we see evidence E, this must mean that H is true; now we observe E, so H is true”)

– Huh? Is that the way science works?

= That may be up for discussion. You could argue that this is precisely the way scientists come up with — conjecture — the hypothesis: they see some things E that suggest H. Science of course also insists that such observations must be repeatable and confirmed by other observers, etc. But if somebody makes such a case, they can claim a perfectly logical and deductively valid argument — a respectable modus ponens. Remember, whether the conclusion is true depends on both the validity of the argument scheme and the truth or plausibility of the premises. To claim logical validity you don’t have to also claim truth of premises. But of course you can’t jump to any specific conclusions yet.
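The same truth-table check confirms that this reversed-premise scheme is deductively valid; validity, as the dialogue stresses, says nothing about whether the premise E –> H is itself true. A minimal sketch:

```python
from itertools import product

def implies(p, q):
    """Material implication: p -> q is false only when p is true and q is false."""
    return (not p) or q

# ((E -> H) & E) -> H : the 'respectable modus ponens' with the reversed premise E -> H
rows = [implies(implies(e, h) and e, h)
        for h, e in product([True, False], repeat=2)]
print(all(rows))  # True: the scheme holds in every row of the truth table
```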

– I see the problem here: both sides can claim to have logic on their side. So logic by itself does not settle the controversy. Now if you accept that, shouldn’t both sides agree that the final conclusion will rest on both logical validity and true premises, and to then refrain from trying to clinch the case by just claiming valid logic?

= You’d think so. And you’d think that the scientist would make that clear in stating his case, wouldn’t you?

– Sure. So?

= So part of my puzzlement was the reaction by Dr. Keating to somebody pointing out this story of both sides claiming logic. He just dismissed this, writing that ‘logic is just a tool’; what counts is valid science. Here little ol’ me always thought that valid logical reasoning, together with confirmed observation, correct measurement, calculations etc., was an integral part of scientific method, the science toolkit. What do I know…

– I can see where this might be a puzzlement for you. So what does your grand-aunt Aurelia have to say about all this?

= She jumped right on the first one of my puzzlements: the other, perhaps ‘illegitimate’, or non-scientific reasons people might have for hesitating to accept the hypothesis that human activities are screwing up the global climate. That perhaps the context of challenges and claims, like the one by Dr. Keating, subtly or not so subtly implies acceptance of some conclusions that are quite partisan and political, but that can’t be entered in this discussion because they are not ‘scientific’.

– I’m sure that may not be intended by Dr. Keating and other climate scientists?

= Sure — but it may be in the minds of some folks out there. And that makes them look for any little chinks in the body of evidence they can find.

– What are some of those implications — did your revered grand-aunt suggest some?

= Her main point was this: if man-made climate change is true, it raises the question whether we actually can do something meaningful about it, and if so, what. But they suspect that the scientists — calling them ‘alarmists’ in return for being called ‘deniers’ by Keating and others — already have an agenda of proposed strategies and rules up their sleeve. And that those will be very expensive. Even worse: that many people will have to change some of their cherished habits regarding energy use. And –psst– that some folks who are now making fine profits from conventional energy sources and life habits will lose those profits. The worst, though: that those new strategies will allow o t h e r guys, not them, to now make more profit. Utterly unacceptable, that one.

– Ahh. Of course. It may also be the fact that the costs of the new strategies will have to be paid ‘now’ or ‘soon’, obviously by people who now have or make money, by way of taxes — but that the profits or benefits will manifest themselves much later, in terms not of cash revenues but avoided disaster. So does that answer your concerns?

= You mean can I sleep better at night for these insights? Don’t think so. But I think that it might be better if those issues would also be put on the table and discussed, negotiated. Perhaps such questions could be more productively dealt with if they were stated differently.

– What do you mean — does it matter how a problem is stated to answer what we should do about it? A problem is a problem is a problem, after all…

= No, I think the way they are thrown up for discussion does matter. For example, consider raising a challenge about climate change in the following way: Look at a table showing — I’m simplifying now, perhaps dangerously so, but just to make it clear — the possible answers to the MMCC question as columns: is MMCC real, or is it not (or so insignificant that we don’t have to worry about it), and our strategies as rows: do we do something about it, or do we not?
There will be four main outcomes, the boxes 1,2,3,4. For each one, there are three major questions that should be answered: a) what will happen? What will be the consequences? b) What, if anything, will be done? and c) depending on what is done, what is the likely result? That would allow the discussion to address each question separately and more explicitly, and perhaps make it easier to reach some decisions. If decisions are needed. And if they are, avoid wasting more time by quibbling about issues like proof or disproof of MMCC (which is a wrong question in itself because it’s not a yes-no question but one of relative significance and relationships between many variables).

– So what does your table look like?

= Here’s a first simple draft, for filling in the boxes and discussion: What if:

____________________________________________________________________________
MMCC is:                       real & significant     not real or insignificant
____________________________________________________________________________

We decide to                            1                          2
take steps
____________________________________________________________________________

We do nothing, or                       3                          4
continue what we do
____________________________________________________________________________
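The four boxes invite a simple expected-outcome comparison. A toy sketch with purely invented payoff numbers — none of these costs come from the discussion; they only illustrate the mechanics of weighing the boxes:

```python
# Illustrative costs for the four boxes of the 2x2 table (all numbers assumed):
#   box 1: act, MMCC real        -> pay mitigation cost, avoid disaster
#   box 2: act, MMCC not real    -> pay mitigation cost for nothing
#   box 3: do nothing, real      -> suffer disaster cost
#   box 4: do nothing, not real  -> no cost
cost = {
    ("act", "real"): 10,       # box 1
    ("act", "not_real"): 10,   # box 2
    ("wait", "real"): 100,     # box 3
    ("wait", "not_real"): 0,   # box 4
}

def expected_cost(strategy, p_real):
    """Expected cost of a strategy, given probability p_real that MMCC is real."""
    return (p_real * cost[(strategy, "real")]
            + (1 - p_real) * cost[(strategy, "not_real")])

# With these toy numbers, 'act' becomes cheaper in expectation once p_real > 0.1:
for p in (0.05, 0.5, 0.95):
    better = min(("act", "wait"), key=lambda s: expected_cost(s, p))
    print(p, better)
```

The point is only that the table reframes a yes/no quarrel about ‘proof’ as questions about probabilities and relative costs, which can be discussed box by box.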

– You might add another question there, my friend.

= Sure, there will be many more as people start talking more thoroughly about it. What’s the question you have in mind?

– It has to do with responsibility. Or accountability, if you wish: Who will take on the responsibility for decisions? And be accountable — whatever that means, which should be discussed more carefully — if it’s the ‘wrong’ decision?

= Huh. I need to send this back to aunt Aurelia. With wider margins…

—–