In the Fog Island Tavern:
– Bog-Hubert, I hear you had a big argument in here with Professor Balthus last night? Sounds like I missed a lot of fun?
– Well, Sophie, I’m not sure it was all fun; at least the good prof seemed quite put out about it.
– Oh? Did you actually admit you haven’t read his latest fat book yet?
– No. Well, uh, I haven’t read the book yet. And he knows it. But it actually was one of Abbé Boulah’s pet peeves, or should I say his buddy’s curious findings, that got him all upset.
– Come on, do tell. What about those could upset the professor — I thought he was generally in favor of the weird theories of Abbé Boulah’s buddy?
– Yes — but it seems he had gotten some hopes up about some of their possibilities — mistakenly, as I foolishly started to point out to him. He thought that the recommendations about planning discourse and argument evaluation they keep talking about might help collective decision-making achieve more confidence and certainty about the issues they have to resolve, the plans they have to adopt or reject.
– Well, isn’t that what they are trying to do?
– Sure — at least that was what the research started out to do, from what I know. But they ran into a kind of paradoxical effect: It looks like the more carefully you try to evaluate the pros and cons about a proposed plan, the less sure you end up being about the decision you have to make. Not at all the more certain.
– Huh. That doesn’t sound right. And the professor didn’t straighten you out on that?
– I don’t think so. Funny thing: I started out agreeing that he must be right: Don’t we all expect decision-makers to carefully examine all those pros and cons, how people feel about a proposed plan, until they become confident enough — and can explain that to everybody else — that the decision is the right one? But when I began to explain Abbé Boulah’s concern — as he had mentioned it to me some time ago — I became more convinced that there’s something wrong with that happy expectation. And that is what Abbé Boulah’s research seems to have found out.
– You are speaking strangely here: on examination, you became more convinced that the more we examine the pros and cons, the less convinced we will get? Can you have it both ways?
– Yeah, it’s strange. Somebody should do some research on that — but then again, if it’s right, will the research come up with anything to convince us?
– I wish you’d explain that to me. I’ll buy you a glass of Zinfandel…
– Okay, maybe I need to rethink the whole thing again myself. Well, let me try: Somebody has proposed a plan of action, call it A, to remedy some problem or improve some condition. Or just to do something. Make a difference. So now you try to decide whether you’d support that plan, or if you were king, whether you’d go ahead with it. What do you do?
– Well, as you said: get everybody to tell you what they see as the advantages and disadvantages of the plan. The pros and cons.
– Right. Good start. And now you have to examine and ‘weigh’ them, carefully, like your glorious leaders always promise. You know how to do that? Other than to toss a coin?
– Hmm. I never heard anybody explain how that’s done. Have to think about it.
– Well, that’s what Abbé Boulah’s buddy had looked at, and he developed a story about how it could be done more thoroughly. He looked at the kinds of arguments people make, and found the general pattern of what he calls the ‘standard planning argument’.
– I’ve read some logic books back in school, never heard about that one.
– That’s because logic never looked at those, let alone identified or studied them. Not sure why, in all the years since ol’ Aristotle…
– What do they look like?
– You’ve used them all your life, just like you’ve spoken prose all your life and didn’t know it. The basic pattern is something like this: Say you want to argue for a proposed plan A: You start with the ‘conclusion’ or proposal:
“Yes, let’s implement plan A,
because
1. Plan A will result in outcome B — given some conditions C;
and we assume that
2. Conditions C will be present;
3. We ought to aim for outcome B.”
– It sounds a little more elaborate than…
– Than what you probably are used to? Yes, because you usually don’t bother to state the premises you think people already accept so you ‘take them for granted’.
– Okay, I understand and take it for granted. And that argument is a ‘pro’ one; I assume that a ‘con’ argument is basically using the same pattern but with the conclusion and some premises negated. So?
– What you want to find out is whether the decision ‘Do A’ is plausible. Or better: whether or to what extent it is more plausible than not to do A. And you are looking at the arguments pro and con because you think that they will tell you which one is ‘more plausible’ than the other.
– Didn’t you guys talk about a slightly different recipe a while back — something about an adapted Poppa’s rule about refutation?
– Amazing: you remember that one? Well, almost: it was about adapting Sir Karl Raimund Popper’s philosophy of science principle to planning: that we are entitled to accept a scientific hypothesis as tentatively supported or ‘corroborated’, as they say in the science lab, to the extent we have done our very best to refute it — to show that it is NOT true — and it has resisted all those attempts and tests. No amount of ‘supporting evidence’ can ever conclusively ‘prove’ the hypothesis, but one true observation of the contrary can conclusively disprove it. It’s like the hypothesis that all swans are white — never proved by any number of white swans you see, but conclusively shot down by just one black swan.
– So how does it get adapted to planning? And why does it have to be adapted, not just adopted?
– Good question. In planning, your proposed plan ‘hypothesis’ isn’t true or false — just more or less plausible. So refutation doesn’t apply. But the attitude is basically the same. So Abbé Boulah’s buddy’s adapted rule says: “We can accept a plan proposal as tentatively supported only to the extent we have not only examined all the arguments in its favor, but more importantly, all the arguments against it — and all those ‘con’ arguments have been shown to be less plausible or outweighed by the ‘pro’ arguments.”
– Never heard that one before either, but it sounds right. But you keep saying ‘plausible’? Aren’t we looking for ‘truth’? For ‘correct’ or ‘false’?
– That’s what Abbé Boulah and his buddy are railing against — planning decisions just are not ‘correct’ or ‘incorrect’, not ‘true’ or ‘false’. We are arguing about plans precisely because they aren’t ‘true’ or ‘false’ — yet. Nor ‘correct’ or ‘false’, like a math problem. Planning problems are ‘wicked problems’; the decisions are not right or wrong, they are good or bad. Or, to use a term that applies to all the premises: more or less plausible — which can be interpreted as ‘true or false’ only for the rare purely factual claims, as ‘probable’ for the factual-instrumental premise 1 and the factual premise 2, but as just plausible, or good or bad, for the ought premise 3 and the ‘conclusion’.
– Okay, I go along with that. For now. It sounds… plausible?
– Ahh. Getting there, Sophie; good. It’s also a matter of degrees, like probability. If you want to express how ‘sure’ you are about the decision or about one of the premises, the bare terms ‘plausible’ and ‘implausible’ don’t express that degree at all. You need a scale with more judgments: one that goes from ‘totally plausible’ on one side to ‘totally implausible’ on the other, with ‘more or less’ scores in-between, and a midpoint of ‘don’t know, can’t decide’. For example, a scale from +1 to -1 with midpoint zero.
– Hmm, it’s a lot to swallow, all at once. But go on. I guess the next task is to make some of your ‘plausibility’ judgments about each of the premises, to see how the plausibility of the whole argument depends on those?
– Couldn’t have said it better myself. Now consider: if the argument as a whole is to be ‘totally plausible’ — with a plausibility value of +1 — wouldn’t that require that all the premise plausibility values also were +1?
– Well — and if one of those plausibility values turns out to be less than ‘totally plausible’, let’s say with a pl-value of 0.9 — wouldn’t that reduce the overall argument plausibility?
– Stands to reason. And I guess you’ll say that if one of them had a negative value, the overall argument plausibility value would turn negative as well?
– Very good! If someone assigns a plausibility value of -0.8 to premise 1 or 3, for example, in the above argument that was intended as a ‘pro’ argument, the argument would turn into a ‘con’ argument — for that person. So to express that as a mathematical function, you might say that the argument plausibility is equal to either the lowest of the premise plausibility values, or the product of all those values. (Let’s deal with the question of what to do with several negative plausibilities later on, to keep things simple. Also, some people might have questions about the overall ‘validity’ or plausibility of the entire argument pattern, and how it ‘fits’ the case at hand, so we might have to assign a pl-value to the whole pattern; but that doesn’t affect the paradox much here.)
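The two candidate rules Bog-Hubert just named (take the lowest premise plausibility, or the product of all of them) can be sketched in Python. This is a minimal illustration; the function names are invented here, not part of the dialogue.

```python
from math import prod

# Plausibility values range from -1 ('totally implausible')
# through 0 ('don't know') to +1 ('totally plausible').

def argument_pl_min(premise_pls):
    """Weakest-link rule: the argument is only as plausible
    as its least plausible premise."""
    return min(premise_pls)

def argument_pl_product(premise_pls):
    """Product rule: every premise short of total plausibility
    drags the argument plausibility toward zero."""
    return prod(premise_pls)

# A 'pro' argument whose ought-premise one participant rates at -0.8:
pls = [0.9, 1.0, -0.8]
print(argument_pl_min(pls))                # -0.8: the argument turns 'con'
print(round(argument_pl_product(pls), 2))  # -0.72
```

Either rule makes a single strongly doubted premise flip the whole argument's sign, which is the point of the example.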
– So, Bog-Hubert, let’s get back to where you left off. Now you have argument plausibility values; okay. Weren’t we talking about argument ‘weight’ somewhere? Weighing the arguments? Where does that come in?
– Good question! Okay — consider just two arguments, one ‘pro’ and one ‘con’. You may even assume that they both have good overall plausibilities, close to +1 for the ‘pro’ argument and -1 for the ‘con’ argument. You might then consider how important they are, by comparison, and thus how much of a ‘weight’ each should have towards the overall plan plausibility. It’s the ‘ought’ premise — the goal or concern about the consequence of implementing the plan — that carries the weight. You decide which one is more important than the other, and give it a higher weight number.
– Something like ‘is it more important to get the benefit, the advantage of the plan, than to avoid the possible disadvantage?’
– Right. And to express that difference in importance, you could use a scale from zero to +1, with a rule that all the weight numbers add up to +1. A weight of ‘1’ would simply mean that this one concern carries the whole decision judgment.
– That’s a whole separate operation, isn’t it? And wouldn’t each person doing this come up with different weights? And, come to think of it, different plausibility values?
– Yes: All those judgments are personal, subjective judgments. I know that many people will be quite disappointed by that — they want ‘objective’ measures of performance, about which there’s no quibbling. Sorry. But that’s a different issue, too — we’ll have to devote another evening and a good part of Vodçek’s Zinfandel supply for that one.
– Okay, so what you are saying is that, subjective or objective, we’re heading for the same paradox?
– Right again. First, let’s review the remaining steps in the assessment. We have the argument plausibility values — each person’s separately — and the weight or relative importance of each ‘ought’ premise. Multiply each argument’s plausibility by the weight of the goal or concern in its ‘ought’ premise, and you have your argument weight. Adding them all up — remember that all the ‘con’ arguments will have negative plausibility values — gives you one measure of ‘plan plausibility’. You might then use that as a guide to making the decision — for example: to be adopted, a plan should have at least a positive pl-value, or a pl-value above some minimum threshold you’ve specified for plan adoption.
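The chain of judgments just summarized (premise plausibilities, argument plausibility, ‘ought’ weights summing to 1, and the weighted sum) might be sketched like this, using the product rule for argument plausibility; the numbers are made up purely for illustration.

```python
from math import prod

def plan_plausibility(arguments):
    """arguments: list of (premise_pls, weight) pairs, one per argument.
    The weights of the 'ought' premises are assumed to sum to 1;
    'con' arguments show up with negative plausibility values."""
    assert abs(sum(w for _, w in arguments) - 1.0) < 1e-9
    return sum(prod(pls) * w for pls, w in arguments)

# One fairly plausible 'pro' argument weighted 0.6,
# one 'con' argument (negative ought-premise) weighted 0.4:
args = [
    ([0.9, 0.8, 0.9], 0.6),
    ([-0.7, 0.8, 0.9], 0.4),
]
print(round(plan_plausibility(args), 4))  # 0.1872: barely on the plus side
```

Note how even a decisively ‘pro’-leaning set of judgments lands much nearer the ‘don’t know’ midpoint than to +1 once the con argument is honestly counted in.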
– And that’s better than voting?
– I think so — but again, that’s a different issue too, also worth serious discussion. Depending on the problem and the institutional circumstances, decisions may have to be made by traditional means such as voting, or left to a ‘leader’ person in authority to make decisions. A plan-pl value would then just be a guide to the decision.
– So what’s the problem, the paradox?
– The problem is this: It turns out that the more arguments you consider in such a process, the more you examine each of the premises of the arguments (by applying the same method to the premises) and the more honest you are about your confidence in the plausibility of all the premises — they’re all about the future, remember, none can be determined to be 100% certain — the closer the overall pl-result will approach the midpoint ‘don’t know’ value, close to zero.
– That’s what the experiments and simulations of such evaluations show?
– Yes. You could see that already with our example above of just two arguments, equally plausible but one pro and the other con. If they also have the same weight, the plan plausibility would be zero, point blank. Not at all what the dear professor wanted to get from such a thorough analysis; very disappointing.
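That pull toward the midpoint can be shown with a toy simulation (my own construction, not one of the experiments mentioned in the dialogue): equally weighted pro and con arguments whose premises are rated honestly short of certainty.

```python
import random
from math import prod

def simulated_plan_pl(n_args, n_premises=3, seed=1):
    """Plan plausibility for n_args equally weighted arguments,
    alternating pro and con, with premise plausibilities drawn
    honestly short of certainty (between 0.5 and 0.95)."""
    rng = random.Random(seed)
    total = 0.0
    for i in range(n_args):
        pls = [rng.uniform(0.5, 0.95) for _ in range(n_premises)]
        sign = 1 if i % 2 == 0 else -1     # alternate pro / con
        total += sign * prod(pls) / n_args  # equal weights of 1/n
    return total

# Two perfectly opposed arguments of equal plausibility p cancel exactly:
# (+p) * 0.5 + (-p) * 0.5 == 0, whatever p is.
for n in (2, 10, 50):
    print(n, round(simulated_plan_pl(n), 3))
```

With equal weights of 1/n, each argument's contribution shrinks as more arguments are considered, so the pro and con sides keep nearly cancelling and the sum hovers near the ‘don’t know’ value of zero.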
– Ahh. I see. Is he one of those management consultants who advise companies how to deal with difficult problems, and get the commissions by having to promise that his approaches will produce decisively convincing results?
– Oh Sophie — Let’s not go there…
– So the professor, he’s in denial about that?
– At least in a funk…
– Does he have any ideas about what to do about this? Or how to avoid it?
– Well, we agreed that the only remedy we could think of so far is to tweak the plan until it has fewer features that people will raise as ‘con’ arguments: until the plan-pl is at least more visibly on the plus side of the scale.
– Makes you wonder whether people in the old days, who relied on auspices and ‘divine judgments’ to tip the scales, had a wiser attitude about this.
– At least they were smart enough to give those tricks a sense of mystery and ritual — more impressive than just rolling dice — which some folks can see as a kind of prosaic, crude divine judgment?
– Hmm. If they made sure to consider all the concerns that lead affected people to worry about a plan, what would be wrong with that?
– Other than that you’d have to load the dice — and worry about being found out? What’s the matter, Vodçek?
– You guys — I’ll have to cut you off…