On a LinkedIn forum, the question was raised whether a moral code without religion could be developed. My work on ways to achieve better decisions in planning, design, and policy-making suggests that it is indeed possible to develop at least a partial system of agreements — for which 'moral code' would be an unnecessarily pretentious term — that has some of the same features. Consider problems, conflicts of interest, or proposed actions and projects that require the consent and cooperation of more than one individual (this does not cover all situations in which moral codes apply). As soon as the parties realize that 'resolutions' based on coercion of any kind either will not really improve the situation or are fraught with unacceptable risks (the other guy might have a bigger club; or even one's own nuclear weapon would be so damaging to one's own side that its use would be counterproductive), the basic situation becomes one of negotiation or, as I call it, 'planning discourse'. Such situations can be sustained and brought to success only on the basis of the expectation that parties will accept and behave according to some agreements. The set of such agreements can be seen as (part of) an ethical or moral code. For the planning discourse, a rough sketch of the first underlying 'agreements' or code elements is the following:
**1 Instead of attempting to resolve the problem by coercion — imposing one side’s preferred solution over those of other parties — let us talk, discuss the situation.
**2 The discussion will consist of each side describing that side's preferred outcome, and attempting to convince the other parties of the advantages or disadvantages of the proposal.
**3 All sides will have the opportunity to do this, and all sides implicitly promise to listen to the other’s description and arguments before making a decision.
**4 The decision will (should) be based on the arguments brought forward in the discussion.
*4.1 The description of proposals should be truthful and avoid deception: all relevant features should be described, none hidden, no pertinent aspects omitted.
*4.2 The arguments should be equally truthful, avoiding deception and exaggeration, and be open to scrutiny and challenge: participants should be willing to answer questions and provide further support for the claims made in the descriptions and arguments.
Simplified 'planning arguments' consist of three types of claims, plus an inference pattern:
a) the factual-instrumental claim
‘proposal A will bring about Result B, given conditions C’
b) the factual claim
‘Conditions C are (or will be) given’;
c) the ‘deontic’ or ‘ought-claim’
‘Consequence B of the proposal ought to be pursued’;
d) the ‘pattern’ or inference rule of the argument (that is, the specific constellation of assertions, negation of claims and relations between A and B) is ‘plausible’.
While such arguments (just like the 'inductive' reasoning that plays such a significant role in science) are not 'valid' from a formal logic point of view, they are nevertheless used and accepted all the time, their plausibility deriving from their particular constellation of claims and their 'fit' to the specific situation.
The plan proposal A is itself a ‘deontic’ (ought-) claim.
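The anatomy of such a simplified planning argument can be captured in a small data structure. This is a purely illustrative sketch: the class and field names, and the levee example, are my own inventions, not notation from the text.

```python
from dataclasses import dataclass

@dataclass
class PlanningArgument:
    """One simplified 'planning argument' about a plan proposal A."""
    factual_instrumental: str  # (a) 'A will bring about result B, given conditions C'
    factual: str               # (b) 'Conditions C are (or will be) given'
    deontic: str               # (c) 'Consequence B ought to be pursued'
    pro: bool                  # True if the argument supports A, False if it opposes A

# A hypothetical 'pro' argument for a levee proposal:
arg = PlanningArgument(
    factual_instrumental="The levee (A) will prevent flooding (B), given continued storm surges (C)",
    factual="Continued storm surges (C) must be expected",
    deontic="Preventing flooding (B) ought to be pursued",
    pro=True,
)
```

Each field can then be questioned and supported separately, which is the point of distinguishing the claim types in the first place.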
*4.3 The support for claims of type (a) and (b) takes the form of ‘evidence’ provided and bolstered by what we might loosely call the ‘scientific’ perspective and method.
*4.4 Support for claims of type c) will take further arguments of the ‘planning argument’ kind and pattern, containing further factual and deontic claims in support of the desirability of B.
The deontic claims of such further support arguments can refer to previous agreements, accepted laws or treaties that imply acceptance of a disputed claim, claims of desirability or undesirability for any party affected by the proposed plan, even moral rules derived from religious domains.
**5 Individual participants’ (preliminary) decision should be based on that participant’s individual assessment of the plausibility of all the arguments pro and con that have been brought up in the discussion.
That assessment should not be superseded by considerations extraneous to the plan proposal discussion itself — such as party voting discipline — but be a function of the plausibility and weights assigned by the individual to the arguments and their supporting claims.
**6 A collective decision will be based on the overall ‘decisions’ or opinions of individual participants.
(The current predominant ‘majority voting’ methods for reaching decisions do not meet the expectation #4 above of guaranteeing that the decision be based on due consideration of all expressed concerns: here, a new method is sorely needed).
A decision to adopt a plan by the participants (parties affected by the proposed plan) in such a discussion should only be taken (agreed upon) if all participants’ assessment of the plan is positive or at least ‘not worse’ than the existing problem situation that precipitated the discussion.
**7 Discussion should be continued until all parties feel that all relevant concerns have been voiced. Ideally, the discussion would lead to consensus regarding acceptance or rejection of the proposed plan. If this is the case, a decision can be taken and the plan accepted for implementation.
Realistically, there may be differences of opinion: some parties will support, others oppose the plan. The options for this case are either to abandon the process (to do nothing), to attempt to modify the plan to remove specific features that cause opponents’ concerns; or to prepare a different proposal altogether and start a new discussion about it.
**8 Individual parties' 'decision' (e.g. vote) contribution to the common decision should match that party's expressed assessment of the arguments and argument premises.
For example: if a participant agrees with all the 'pro' arguments and disagrees with the 'con' arguments (or assigns lesser weight to the 'con' arguments), the participant's overall vote should be positive. Conversely, if the participant's assessment of the arguments is negative, the overall 'vote' should be negative. Participants should be expected to offer additional explanations of any discrepancy between argument assessment and overall decision.
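The expected match between argument assessment and overall vote can be illustrated with a toy calculation. This is my own simplified sketch, not the evaluation procedure from the article the text refers to; the scale and the weighted-sum rule are assumptions. Each argument receives a plausibility score between -1 (complete disagreement) and +1 (complete agreement) and a relative weight; the sign of the weighted sum then suggests the participant's overall vote.

```python
def overall_vote(assessments):
    """Aggregate a participant's argument assessments into one score.

    assessments: list of (plausibility, weight) pairs, where plausibility
    is in [-1.0, +1.0] (-1 = full disagreement, 0 = undecided, +1 = full
    agreement) and the weights express relative importance (assumed here
    to sum to 1). A positive result suggests a 'yes' vote, a negative
    result a 'no' vote.
    """
    return sum(plausibility * weight for plausibility, weight in assessments)

# Two plausible 'pro' arguments and one weak 'con' argument
# (scores and weights are invented for illustration):
score = overall_vote([(0.8, 0.5), (0.6, 0.3), (-0.4, 0.2)])
```

A participant whose pro-argument scores outweigh the con-argument scores thus ends up with a positive overall vote, making the link between assessment and decision transparent rather than left to unexplained judgment.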
**9 A common decision to accept a proposed plan implies obligations (specified in the plan) for all parties to contribute to implementation and adherence to the decision provisions.
**10 The plan may include provisions to ensure adherence and contributions by the parties. Such provisions may include ‘sanctions’, understood as (punitive) measures taken against parties guilty of violating plan agreements.
There undoubtedly might be more agreements needed for a viable planning 'ethic'. It is clear that some of the above provisions are not easy to 'live up to' — but what moral system has ever been? And for some provisions, the necessary tools for their successful application are still not available. For many societal decisions, access to the discussion (to be able to voice concerns) is lacking even in so-called advanced democracies. Some expectations may sound like wishful thinking: the tools for a transparent linkage between argument assessment and overall (individual) decision, and even more between arguments and collective decision, do not yet exist in practice. The approach to systematic and transparent argument assessment described in my article 'The Structure and Evaluation of Planning Arguments' (Informal Logic, December 2010) suggests that such a link would be feasible and practical, if somewhat more cumbersome than current voting and opinion polling practices. However, its application would require some changes in the organization of the planning discourse and support system, as well as in decision-making methods.
These observations were made mainly in response to the question whether a 'moral' code not based on religious tenets would be possible (and meaningful). That question may ultimately be taken to hinge on item #10 above — the sanction issue. The practical difficulties of specifying and imposing effective sanctions to ensure adherence to moral rules may lead many to accept or postulate sanctions and rewards administered by an entity in the hereafter. But it would seem reasonable to continue to explore agreement systems, including sanctions in the 'here and now' beyond current practices, since both non-religious and religion-based systems arguably have not been successful enough in reducing the level of violations of their rules.
NYRB: H. Allen Orr on Sam Harris’ book “The Moral Landscape”.
– Hey Abbé Boulah, your coffee’s getting cold. What are you reading there?
– Ah, Bog-Hubert: and a good morning to you too. What I am reading while I am waiting for somebody to talk to in this deserted Fog Island Tavern? Something about the old quarrel between science and ethics: another round. In the New York Review of Books.
– Another round of that? Sounds like it’s all settled in your mind. So why are you even bothering with it?
– Trying to keep it open, that's all. The mind, not the controversy. This time, the reviewer — H. Allen Orr — takes on the new book by Sam Harris about 'The Moral Landscape'.
– Have you read the book?
– Nah. And I don’t think I’m going to. The review seems to summarize its main points nicely enough for me to see that I don’t agree with them. Nor with the reviewer either, for that matter.
– Your mind opened a little and slammed shut again so soon? Not like you. And on both the author and the reviewer? How did that happen?
– You're right to wag your finger at the possibility that I'm jumping to unsupported conclusions. Especially about morals and neuroscience, of which I admittedly know nothing. But wait till you hear this. The insights of neuroscience — research about how the brain works when dealing with questions of scientific fact and moral issues — lead Harris to the claim that a science of ethics is possible. A science! And that this science, though as yet an undeveloped branch, can discover objective moral truths. That the widely accepted distinction between 'is' and 'ought' claims (the view that one cannot derive moral 'ought' truths from scientific facts) is a 'nonproblem', that the split between these claims is an illusion, and that the objective basis for a science of morals is the 'correct' conception of the good as the well-being of conscious creatures.
– I can see how you get aggravated by those views — I remember your tirades about how truth doesn’t apply to moral claims. And the reviewer bought all that?
– No. Orr remains unconvinced about the validity of these claims but is open to Harris's belief that a science of morals might be possible — he just doesn't think Harris has made his case. Yet.
– Maybe that’s just a polite way of saying that it looks like BS but that he’ll keep his mind open, eh? Just in case?
– No, I think they are both barking up the wrong tree. Wrong question. As long as they keep arguing from within the paradigms of science (and true facts about the world) and ethics or morals (and moral truths).
– But if that’s what they are worried about, what else should they be discussing?
– You’ve got a point there. But you see, they are stuck in their way of looking at the issue. And that way of looking at it makes it seem quite unlikely that the controversy will ever be resolved to the satisfaction of either camp — science and ethics, especially the religious-based ethics.
– So what’s the better way of looking at it, then? Are you going to get in the middle of that?
– Well, I might enter the fray with some observations about the concepts of truth, or about the human urge to find moral precepts. But, I admit, those are not backed by any superior expertise or authority of mine either as a scientist or as a philosopher of ethics, or even a religious expert, none of which I claim to be. Just by some stupid commonsense questions.
– I agree, common sense is often uncommonly stupid. So what are those questions?
– Starting with truth, doesn’t it seem obvious that if there IS a real world out there, it would be useful to know what it IS like, really: to know the truth about what the world is like and how it works?
– Right: that’s the job of science, is that what you are saying? So far, I don’t see the controversy.
– Not yet. To all that, the criterion ‘truth’ applies: we would like to know it — even though people may have expressed different opinions about what that truth is. Science is supposed to try to sort that out. And it seems equally obvious that there exists a human desire for moral or ethical guidance about what we ought to do. I don’t know about you, but it holds for a lot of people.
– Hey, what are you insinuating here? Better watch it all the time, buddy.
– Sorry, I didn’t mean to insinuate anything. It’s too early for insinuating. Coffee hasn’t even kicked in yet. Now: The precepts we are offered are often called ‘moral truths’.
– Ain’t that the truth. Just got on a radio station this morning that was full of that. Full of it, I tell you.
– I know what you mean. Well, while many of them look quite convincing, isn’t there a difference between such claims and the claims about reality?
– They both refer to fat books: what’s the difference?
– I think there is a big difference. You know it. The descriptions by science about the real world refer to the past and present — what IS. Well, and sometimes carelessly about the future: predictions about what will be, based on what we know about what was and what is. Later, the predictions are then seen as having come true (resulted in true facts) or false (when things didn't turn out as predicted). The test is the observed reality at that time. In contrast, aren't precepts about what we OUGHT to do made precisely because they are not true (yet)? And because there is the possibility that we might not heed them and do something contrary to the recommended moral rule?
– You are right: the preacher on that radio sounded like most people are breaking the moral rules all the time. Got all worked up about it, too…
– So if we want to use the same term ‘truth’ for both the scientific and the moral claims, shouldn’t we make a distinction between the kinds of ‘truth’ involved: ‘reality-truth’ and ‘moral truth’? And then Hume’s warning that we can’t derive the latter from the former kind still applies — or would have to be convincingly resolved.
– Whose warning?
– Hume's.
– Who's Hume?
– David Hume. An old Scottish philosopher who first stated that you can't logically derive ought-claims from facts. And it's something that many smart people have pretty much accepted. And logic too.
– Oh. And now this Harris Neurowhatnot is saying that’s not true? And you are saying Harris is wrong?
– I am not in a position to make any comments about whether neuroscience can resolve that issue. I don't believe it can, but what I do suggest is that it's the wrong question. And that a resolution of a kind is possible from a different perspective — that of design, planning, policy-making.
– And you can prove that?
– I think it’s a better story. Proof, I don’t know: not about ought-claims. Which is the point.
– So what’s the story?
– It begins with the observation that humans (possibly other species too, but we know about humans for sure) act, and make plans for actions, for changing the environment they find themselves in. The purpose for such action is survival: food, shelter, procreation etc. at the basic level.
– And happiness, don’t forget.
– Right. Happiness, I guess. Now they find, unhappily, that such plans sometimes conflict with the plans of other people. They also find that they then have several options available, several possible actions among which they have to decide. For example: they can decide to just try to get rid of the guy with the other plan and go ahead with their own. Chase him, make him an offer he can’t refuse, hit him over the head with a blunt object, you know the routines. Or they might recognize the possibility of being gotten rid of by the other guy — perhaps he has a bigger club — and run away, forgetting about their plan. Or, to look for some way of reconciling the differences between the plans (all the while seeking to keep up the appearance of having the bigger club…). They may also entertain a vision that perhaps by joining forces in pursuing a common plan, they might achieve an outcome that would be even preferable to that of either individual plan, and opt for cooperation.
– I see. I think. So what should I do when that happens?
– Ah! You put the finger on it!
– I put the finger on what? The guy with the bigger club who’s interfering with my plan? Don’t think so.
– No: Think! You put the finger on the origin of basic ought – questions of the kind ‘what should I do?‘ Which of the options of fighting, fleeing or surrendering, or negotiating to choose. And of the more elaborate questions of the kinds of agreements that must be entered to ensure successful cooperative planning, if you select the last option of negotiating a common plan.
– What kind of agreements are you talking about?
– Good question. First, there is something of a vague acknowledgment that the outcome of the plan must be better than the existing (or predicted) situation, or at least acceptable, for both parties. And secondly, that finding out what that mutually acceptable plan should look like requires communication: talking, negotiation. It also requires some mutual assurance that the clubs will be left outside the negotiating hall. The talk must aim at clarifying the features of the common plan; the application or threat of force or coercion doesn't relate to the quality of the plan, and would therefore immediately revert the situation to the fight/flight option. So the agreement to abstain from force is a basic, necessary element of such situations.
– Okay, I can see the need for those agreements. Now are you saying those are moral truths?
– Not truths: agreements. This distinction is important: these starting agreements or rules for cooperative planning are not truths — they are mutual understandings, commitments, promises. Even the concept of 'thieves' honor' expresses this: two scoundrels may know and acknowledge that they are scoundrels when negotiating a deal — but also know that certain commitments will have to be honored if the negotiating option is to be maintained — and that they can always switch to one of the other options. To call the agreements, the promises, 'truths' is not doing justice to the concept of truth, don't you think? But they work just as well, don't they?
– Yeah. At least I see that they are very different kinds of truths than the facts of reality. So truth is gone from planning, then?
– No: This does not mean that truth is absent from the process. Quite the contrary: In explaining to each other what features the plan should have or not have, the parties make proposals of the kind: ‘the plan should have feature x’ — and try to convince the other party with an argument justifying that suggestion. The argument will take something like the form: “The plan should have feature x, because having x will lead to (cause) a situation with feature y, and feature y is desirable”. Such an argument will ‘work’ in persuading the other party only if that ‘opponent’ or planning partner feels / is convinced that the claim ‘x will cause y’ is true. So at least for that part of the argument, we are looking for truth: ‘reality-truth’, if you go along with the distinction we made.
– Okay: but what about the part that says we ought to have y?
– Right. The problem is that we talk about this in different ways, some of which look like the term ‘truth’ applies, even though we saw that it doesn’t, or that it is a different kind of truth.
– You are confusing me here.
– It can be confusing. Let’s see. You agreed, didn’t you, that saying ‘we ought to have y’ is not true in the same way as ‘doing x will produce y’. If only because different people may have legitimate and obvious disagreements about whether we should have y — one person’s benefit is another person’s cost, remember? But they should eventually come to an agreement whether x causes y: it depends on the nature of x and y, not on whether they like it or not. I’m not saying that is always easy to pin down. But the confusion comes when we speak about the effect or goal y like this: “Having y is desirable” or “Y is a good thing”. Notice the use of the word ‘is’ here? That’s the source of the confusion.
– It sounds more like a statement about reality that’s independent of how we feel about it — so people use ‘truth’ for such statements as if they were also ‘reality-truths’. Blurring the difference.
– Okay: So what b… blurring difference does that make in deciding what we should do?
– Patience. There will usually be a number of such arguments being bandied back and forth, some supporting the plan having feature x, some against. Each party must decide whether and when to end the discussion by agreeing to the plan or ending the cooperation, based on some ‘weighing’ of all these pros and cons: a decision. What does this mean? It means, for one, that the decision is made on the strength of the person’s perception of the truth of the claims ‘x will lead to y’ etc. in all the arguments. More specifically, the decision is not made on the basis of the actual truth of the claims — but on the person’s degree of confidence that the claim is true. An important distinction: we never know with complete certainty whether x will cause y; it is a prediction that may turn out not true (due to all kinds of unforeseen circumstances) even if we know with reasonable certainty that in the past x has always caused y. But even that isn’t always very certain. What about the claim that y is desirable? Again: y may be desirable to one party but not so much for the other. So is ‘true’ the proper term for whatever level of confidence we have for such claims?
– Well, you convinced me that it isn't quite the same. But what else do you suggest?
– I suggest that we use something like 'plausible' instead — with degrees of plausibility ranging all the way from complete agreement or conviction to complete disagreement, with an in-between point of 'don't know' or 'undecided'. And all the 'pro' and 'con' arguments (of which there is always at least one, pertaining to the cost or effort involved in getting the desired outcome, plus any other disadvantages) must be weighed against one another according to their relative importance for each party.
– The short and long of all this is that the planning argument contains two different kinds of premisses (at least two; there may be qualifying claims added, or statements about the conditions under which x causes y, and whether those conditions are present). But the two key premisses are these: one 'factual' or 'factual-instrumental' premiss, which will have to be justified and supported by means of what we might loosely call the 'scientific' approach — observation, logic, calculation — aiming at 'objectivity': our judgments should conform to the properties of the reality we are judging, not to what we would like it to be or how we feel about it. But that latter kind of judgment is precisely what we have to make about the second kind, the 'deontic' premiss: 'we ought to achieve y'. Both premisses can of course be challenged: the former will call for 'scientific' evidence in support, as I said — but the latter can only be supported with more arguments of the same kind: 'y should be pursued because y will lead to z, and z is desirable'. Students of argument will have recognized this pattern as inconclusive from a formal logic point of view — more arguments that cannot be decided by 'scientific' means, if only because their deontic premisses in turn may be desirable to one party but unacceptable to the other.
– This is getting kind of complicated. How does all this relate to a science of morality?
– If you think about it for a while, it will sort itself out. But here is where it gets back to morals and ethics. Some such discussions may end up invoking deontic claims, principles, rules that are accepted, even seen as 'evident' or 'self-evident', by all participants. Is the search for such universally accepted claims and rules what morality, ethics, is all about?
– From what I know about it, yeah. Obviously, it would be useful to have such a set of precepts that could help settle disagreements about what we ought to do.
– I quite agree. And the very planning discourse itself embodies some such rules: for example, we must assume, for a truly cooperative discussion towards a mutually desirable plan, that the claims we make are ‘true’, in the sense that we do not make claims which we are convinced are not true: our claims should have a reasonable degree of plausibility. We shouldn’t make deceptive or knowingly untrue claims; they would jeopardize the quality of the plan.
– That makes sense. But hey, not lying and not telling the whole truth can be different things, can’t they? Should we also be obliged not to hold back knowledge we have reason to believe would constitute weighty arguments for or against, for the other party?
– You are getting it, my friend. What about explicitly spelling out reasons — for and against some proposal — that some may feel are so obvious that they should be taken for granted as being known and taken into account by the other? What about mentioning possible effects of the plan that would be desirable for us, but undesirable for the other party — but that the other party is not aware of?
– Well how does all that relate to the claims of that book? Does he have an answer for these questions?
– To be honest, I’m not sure; since I haven’t read the book, only the review. But there seems to be a claim for some ultimate answer in there, that I have trouble accepting. It is the claim by Harris regarding the ‘correct conception of the good‘ being the well-being of conscious creatures. Ultimately, the deontic premisses that must be accepted as ‘self-evident‘ and not requiring further debate would rest on the identification of such well-being — in the planning case, of all parties involved in the planning discussion because they might be affected by the outcome in some way.
– And the author thinks science can do that?
– Apparently. Sure: If science could clarify what is required for such well-being, this would indeed provide us with at least a workable set of ultimately and commonly acceptable deontic premisses for the planning discourse: morality. This would then be described by the scientists who — as Orr seems to accept — may be in a position to do so because of their expertise in neuroscience. If neuroscience has those answers — which is another question.
– You sound like you don’t think so? And I imagine there would be other people who don’t like where this is going?
– You are probably right. Another round of quarreling. There is, from the planning perspective, at least one good reason for the visceral reaction one must expect to this vision, in my opinion.
– I’m glad you have more than another visceral reaction. What’s the reason?
– The notion of ‘well-being’. Doesn’t it look like a rather static concept: one set of circumstances that produces the optimal constellation of neural responses in the human brain? What if it cannot be determined with any degree of certainty?
– I’d suspect it won’t be that easy…
– Right — not just because of its complexity. The real difficulty is that it isn’t a static, constant condition. Don’t humans, at least many humans, have an innate desire to change not only their environment to increase their well-being, but essentially themselves?
– What do you mean? All I ever hear is about people wanting to find out ‘who they are’?
– That’s a distraction people have been brainwashed into accepting. Well, I guess it’s somewhat justified in that it helps people get out from under what other people keep telling them they are. No, the real issue is who they want to be. Think about it. Expressions such as ‘make a difference’ are one indication. People want to stand out, be recognizable as individuals, not as indistinguishable specimens of the same species. They’d be very unhappy if they found out that what they really are is just a cog in the wheel, one of millions of indistinguishable worker ants. They try to ‘live up to’ certain images, visions of what they could be. Don’t you agree?
– Now that you explain it — was that what Bob Dylan was saying in his song, 'I've got nothing, Ma, to live up to'?
– Right. And just the basic two or three choices of the original planning situation described above demonstrate that different images lead to very different 'moral' rules. Some people may have a stronger tendency (perhaps even on a genetic basis: that would be something for science to study) to deal with conflicts according to the 'fight' option. The 'warrior ethic' has some very demanding rules, internal consistency, and ethical precepts that command acknowledgment and even grudging admiration from people who themselves are more inclined to the 'cooperation, mutual assistance' attitude with its very different ethical implications. And people throughout history have designed very different visions of who they wanted to be and to become, to be seen as — expressed in their art, their architecture, their manners, and their moral rule systems.
– To the casual observer such as myself, this may look like just baseless, what do they call it, moral relativism, without any firm foundation. Isn’t there a desire, a need for something more, something more universal, timeless?
– You have been listening to too much talk radio. No, the question is a valid one. Where does this desire for a firm foundation for morality come from? The individual, certainly, confronted with choices, must make decisions and justify them to others, with arguments resting on deontic premisses that are acceptable to others: this is one if not the major basis for a desire for morality. Yes: people want to 'do the right thing' — as acknowledged even by the others from whom they also want to distinguish themselves ('stand out'…). A bit of a dilemma, right? But there are other motivations. For people whose existence involves intense interaction with other people — and especially for anybody aiming at leadership roles or positions of power in society — the predictability of the moral rules of others is a significant aspect of their own planning: so there is a strong incentive to try to influence people to adopt and adhere to a consistent set of moral or ethical rules.
– Are you saying they are trying to brainwash us to toe the line?
– Would I ever say any such thing?
– No, you sly devil, you trick me into saying it…
– Well… And to ensure adherence by imposing sanctions for violating them. If this conflicts with assumptions about avoiding ‘enforcement’ by application of force, the strategy has been to invoke supernatural beings who will carry out the requisite enforcement sanctions or rewards, if not in this life, then in the hereafter…
– I think you’d better watch it all the time. You are making yourself a tad unpopular here.
– Why, even in this mythical fogged-in tavern? I guess it can’t be helped: the questions just keep coming up. Can humanity find a workable balance between its members’ desire to invent and live up to ever new and different images of who we might be, encourage their creativity and ability to devise inspiring, noble, beautiful visions of what humans can be — and the need for predictability of the resulting ethic rules of each of those visions?
– Gee, don’t ask me, Abbé Boulah.
– Why not?
(These comments were triggered by Ronald Dworkin’s article in the New York Review of Books of January 2011 on ‘What Is a Good Life?’ and its moral and ethical implications. They try to put my insights on the theory of design and planning into a meaningful relation to morality and ethics.)
We find ourselves in the world, dealing with our needs, desires, and reality’s challenges to meeting them. Whether we call all that ‘pursuit of happiness’ or ‘problem-solving’, or anything else, a common feature is that we make plans: plans to act in those pursuits.
Our plans can be made as individuals (‘my plan’) or as groups of people. Either way, as soon as ‘my plan’ begins to relate to and affect others’ plans (‘your plan’), the effort becomes ‘our plan’.
The natural expectation for any plan is that implementing it will result in a situation that is ‘better’ (1) than if it were not implemented.
This expectation must be extended to any participant in the effort; any person affected by the plans: a ‘good’ plan is one that is perceived to be ‘better’ or at least not worse, for all affected.
Plans whose acceptance is achieved by coercion (2) are not ‘better’ in this understanding.
The determination of what constitutes a ‘good’ plan must be sought and achieved by means of communication. This mostly takes the form of ‘argument’ understood as the common exploration of the ‘pros and cons’ — the advantages and disadvantages — of a proposed plan.
The resulting expectation is therefore that the decision about acceptance, or rejection or modification of the plan (towards a greater chance of acceptance) should be based on the ‘merit of the arguments’.
This raises the question of how such arguments should be evaluated: how their ‘merit’ ought to be established, so as to plausibly support the decision.
The tradition of argument assessment as studied by the disciplines of logic, rhetoric, and critical thinking has not treated the evaluation of planning arguments adequately. The reason is the focus of analysis on individual arguments (3) rather than on the entire array of pros and cons, on the ‘validity’ of argument patterns, and on the ‘truth’ of argument premises and conclusions (4).
The lessons from traditional logic apply only to the validation and verification of some of the premises in planning arguments:
The prototypical planning argument can be rephrased as follows (5):
Proposal x ought to be accepted (conclusion, a deontic claim)
because (it is a fact that) x has a relationship REL to some effect y (factual-instrumental premise, e.g. causal)
and y is desirable (ought to be aimed for) (deontic premise) (6).
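To make the pattern concrete, the sketch below represents one such argument as a small data structure. The plan, claims, and field names here are invented for illustration only; they are not drawn from the text.

```python
from dataclasses import dataclass

@dataclass
class PlanningArgument:
    """One 'pro' or 'con' argument about a proposed plan,
    following the pattern: D(x) because FI(x REL y) and D(y)."""
    proposal: str           # x: the plan at issue (deontic conclusion: 'x ought to be accepted')
    relation: str           # REL: the claimed factual-instrumental link
    effect: str             # y: the consequence claimed to follow from x
    effect_desirable: bool  # D(y): whether y is claimed to be desirable

# Hypothetical example: a 'pro' argument for a bike-lane plan.
arg = PlanningArgument(
    proposal="add a bike lane on Main Street",
    relation="will lead to",
    effect="reduced car traffic downtown",
    effect_desirable=True,
)

# Spelling the argument out in the pattern of the text:
print(f"{arg.proposal!r} ought to be accepted, because it "
      f"{arg.relation} {arg.effect}, and that effect is "
      f"{'desirable' if arg.effect_desirable else 'undesirable'}.")
```

Negating any field yields one of the pattern variations mentioned in note (5): a ‘con’ argument, for instance, claims an undesirable effect, or denies the factual-instrumental link.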
The merit of such arguments rests — in the subjective assessment of individual participants — on the following aspects:
– the plausibility (7) of each of the premise claims,
– the plausibility of the entire argument pattern.
The plausibility of an individual argument of this type will be a function of the plausibility values of the premises and the argument pattern.
Furthermore, the ‘weight’ of an individual argument within the entire set of pros and cons raised about a proposed plan must be seen in relation to all the other arguments: specifically, it will depend on its own assessed plausibility and on the relative importance of the deontic concern it appeals to, among all the deontic concerns of the entire argument set.
The question of how the arguments together support or don’t support the ‘conclusion’ to accept or reject the proposed plan is a separate issue, discussed for example in Mann (2010).
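A minimal numeric sketch of one possible aggregation is given below. It is in the spirit of this discussion but with assumed functions: the product rule for combining premise plausibilities, a plausibility scale from -1 (implausible) through 0 (don’t know) to +1 (fully plausible), and a weighted sum over arguments are illustrative choices, not the published method of Mann (2010).

```python
def argument_plausibility(premise_pls, pattern_pl):
    """Plausibility of one argument: here taken (as an assumed rule)
    to be the product of the plausibilities of its premises and of
    the argument pattern itself, each on a scale from -1 to +1."""
    result = pattern_pl
    for pl in premise_pls:
        result *= pl
    return result

def plan_plausibility(arguments):
    """Aggregate plan plausibility: a weighted sum of argument
    plausibilities, where each weight expresses the relative importance
    of the deontic (ought-) concern the argument appeals to.
    Weights are assumed to sum to 1 across the whole argument set."""
    return sum(weight * argument_plausibility(premises, pattern)
               for premises, pattern, weight in arguments)

# Hypothetical example: two 'pro' arguments and one 'con'.
# Each entry: ([premise plausibilities], pattern plausibility, weight).
args = [
    ([0.9, 0.8], 1.0, 0.5),   # strong pro, most important concern
    ([0.6, 0.7], 1.0, 0.2),   # weaker pro, minor concern
    ([0.8, -0.9], 1.0, 0.3),  # con: the deontic premise is judged negative
]
print(round(plan_plausibility(args), 3))  # → 0.228
```

The sign convention does the work here: a negatively assessed deontic premise makes the whole argument count against the plan, so the ‘cons’ subtract from, rather than add to, the overall plausibility.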
The question of morality and ethics arises with respect to the issue of necessary assumptions and agreements for a constructive planning discourse.
In addition to explicit and agreed-upon basic agreements (8), there are unspoken but important assumptions such as the following:
– The expectation that my arguments are given due consideration rests on the assumption that the information I present in them is a true (or plausible) representation of my actual beliefs: that I don’t misrepresent or distort what I believe to be the truth or desirable goals. In other words, it rests on the assumption that I am seen as trustworthy by other participants. If not, I can’t expect them to pay attention to my arguments. This expectation may be mutually ‘granted’ up front as a good-faith assumption. But it must be sustained by consistent performance, and will be damaged, sometimes irreparably, by revelation of violations in the form of deliberate misrepresentation, distortion, untruthful claims, or intentional omission or withholding of critical information.
It is easily seen that this is the equivalent of the moral injunction ‘thou shalt not lie’; the difference is that it is couched not in ‘shalt not’ terms but in terms of a positive effort of truthful, honest, constructive sharing of information. In the same sense, the agreement to refrain from the use or threat of force is the equivalent of the commandment ‘thou shalt not kill’, but now phrased in the positive terms of seeking a commonly acceptable, ‘good’ plan: a plan that includes the killing of a participant who does not see it as all that beneficial is not living up to the expectation of ‘good’ for all concerned.
Similarly, the expectation of ‘giving due consideration’ to all arguments put forward — even those dealing with aspects of the plan that are mainly or exclusively beneficial or detrimental to other participants — implies some degree of empathy, compassion, desire to care for others besides oneself; mirrored by the expectation that other participants harbor at least some similar feelings about others’ concerns even if they don’t affect themselves that much. Arguably, these are considerations that can be called moral, with the difference that they are not postulated as ‘categorical’ or imposed by some earthly or supernatural authority, to be adhered to on penalty of displeasing that authority (and incurring penalties here or in the hereafter) but simply as conditions for making reasonable plans with others in the here and now.
It is interesting, in this connection, to examine some of the deontic concerns that play a role in planning discussions, even though one might claim that these are not always, indeed not even as a rule, made explicit. Consider the argument that implementation of ‘plan x’ will establish or strengthen the image of the implementers of the plan. (9) Here, ‘image’ refers to something like ‘who we are’, or ‘who we would like to be’ (or become). Some of these are quite general, and therefore easy to include in a general moral canon: fairness, in considering others’, indeed everyone’s, concerns equitably in evaluating the merit of arguments; compassion, in considering the suffering of others; consistency, in one’s adherence to and observation of principles and guidelines, an element of predictability (and hence trustworthiness).
But there are other aspects of image that play a role in making plan decisions, sometimes alluded to in comments such as ‘that’s just not me’ or ‘that’s who I am’: we all, some more than others, wish to ‘make a difference’ in the world of our existence. That includes not only leaving artifacts and memories of memorable acts behind, but precisely not being just like everybody else, like all that came before. Again, the image concepts guiding such decisions can be standard societal roles: the warrior, the healer, the ruler, the humble servant, the wise man and teacher (guru). Sometimes, people or entire societies get hung up on trying to live up to images established in earlier times; the fascination with heroic figures of historic, even mythical periods has repeatedly gripped entire nations. But there is always a quest, hidden or explicit, for new, unheard-of images. What are the criteria that govern such ideas? Well, there is the ‘new’, and in architecture it sometimes seems to be the only criterion for making a difference. The innovative, here in the sense of new ways of dealing with old and current problems, plays a part; being ‘creative’ is very much on people’s minds these days, it seems. These pursuits sometimes require courage, given traditional attitudes and constraints, and so courage is very much a part of the image quest; it must be demonstrated in acts of standing up against resistance and reaction, and it can be combined with the heroic into the image of the tragic hero (suffering, ultimately defeated, but for a worthy cause). What about ‘appealing’? Is beauty an aspect of image to which people might aspire? Appeal these days often seems debased to ‘sex appeal’ and physical appearance, to which considerable amounts of money are devoted, sometimes ending up as travesties or even caricatures of more coherent concepts of beauty, of which integrity and genuineness are essential ingredients.
The point of enumerating such examples (by no means exhaustively) is that each such image will carry its own requirements for one’s corresponding conduct: ‘according to the image’. Internal coherence and consistency are important for each such image, but the specific criteria do not necessarily have to match those of other images. We might respect and appreciate the ethics of the warrior, as one arguably quite coherent design of who we might be, even if we are personally pursuing the virtues of the healer, the builder, the artist, or the teacher. And the question is: what are the precepts guiding our dealings with all these different image pursuits when our concerns begin to get in each other’s way?
These considerations offer, as the heading implies, a different perspective on ethics and morality. They do not seek to replace or deny the validity of theories that try to offer a more universal, timeless, general basis for human morality and ethics. But they might be of some significance, and perhaps of help, for those who have trouble accepting specific religious or political authorities as the arbiters and foundations of human rules of behavior.
1) ‘Better’: understood as an improvement of a current situation perceived as not sufficiently satisfactory, or as the prevention of a problem that would have resulted in a worse situation.
2) Coercion must be understood as any form of application of force (violence) as well as the introduction or threat of introduction of disagreeable conditions to participants who do not (yet) consider the plan acceptable. Economic constraints, psychological pressure, social pressure, all fall into this category. Their common denominator is that the features introduced into the discussion (‘An offer you can’t refuse’) are not features or qualities of the plan itself but of extraneous circumstances designed to extort acceptance from a less powerful participant. It is a question whether misrepresentation, omission of pertinent information, or distortion of true facts should be seen as forms of coercion; but they certainly are assumed to be equally inadmissible.
3) The discussion of argument assessment in logic is focused exclusively on single arguments, understood as a sequence of claims (premises, usually only two or three) that are listed in support of the truth or falsity of a conclusion.
4) The concept of validity of an argument in traditional logic, especially formal logic, is restricted to arguments involving factual claims; an argument is understood to be ‘valid’ if there is no way the conclusion can be false if all the premises are true. There have been various attempts to extend this view of validity to arguments involving deontic or ‘ought’ claims (modal logic, deontic logic), but these have all approached the task by a kind of ‘begging the question’ tactic: positing claims such as ‘permitted’ or ‘forbidden’ as ‘true’ and then as the basis for ought-conclusions following from them. None of these approaches adequately deals with the nature of the desirable or undesirable advantages or disadvantages of plans.
5) The pattern presented here has multiple variation forms derived from various combinations of assertion or negation of the premises, and of the relationship type claimed — in the factual-instrumental premise — to hold between the proposed plan (or plan detail) and the consequence claimed to be desirable or undesirable in the deontic (ought-) premise.
6) Expressed in formal notation, with ‘D’ standing for a deontic claim, ‘F’ for a fact-claim, ‘FI’ for a factual-instrumental claim, and ‘REL’ for one of the various relationship claims, the pattern is:
D(x)
because FI(x REL y)
and D(y).
The argument is sometimes extended (qualified) to include assertions about certain conditions c under which the relationship REL holds; the pattern then looks like this:
D(x)
because FI(x REL y given c)
and D(y)
and F(c).
7) While formal logic aims at establishing the ‘truth’ of premises as the condition for the truth of a conclusion, the predicates ‘true’ or ‘false’ apply only to the factual and factual-instrumental premises, not to the deontic claim. This is in contrast to colloquial usage referring, e.g., to ‘moral truths’: a desirable aim of a plan is discussed precisely because it is NOT yet true (though it may be true that it is desired by the proponent of an argument). Furthermore, even claims about hypotheses, such as that x will cause y, are not universally accepted as true; science has long adopted the custom of describing such claims with the ‘probability’ predicate, which also does not fit the deontic premise well. The suggestion is therefore to use the terms ‘plausible’ and ‘plausibility’, expressed on some agreed-upon scale, for all claims as well as for the question of the entire argument pattern and its fit or applicability to the case at hand.
8) Basic necessary conditions for constructive planning discourse include agreements such as these: to talk before making a decision; to abstain from the use of force or threats of coercion; to give each party to the discussion a chance to be heard; to listen to the arguments and give them due consideration; and to abide by certain decision rules, to be agreed upon, such as the outcome of a vote or the decision of a referee in case no consensus or clear decision results from the vote.
9) ‘Image’ here refers to a coherent concept of a societal role or life style; in plans for a building, for example, the building may through its forms and details convey such societal roles (ref. Mann …. ). In social relations, images may refer to character, skills, or orientation: the ‘warrior’, the ‘healer’, the ‘ruler’, the ‘friend’.
Mann, T.: ‘The Structure and Evaluation of Planning Arguments’, Informal Logic, Dec. 2010.
— ‘Programming for Innovation: The Case of the Planning for Santa Maria del Fiore in Florence’, EDRA (Environmental Design Research Association) Meeting, Black Mountain, 1989; Design Methods and Theories, Vol. 24, No. 3, 1990.
— ‘Images of Government: A Comparative Analysis of Government Buildings in Renaissance Florence’, 1993; presentation at EDRA, Boston, 1995.
— ‘Notes on the Value of Buildings’, Proceedings, 28th Annual Conference of the Environmental Design Research Association (EDRA), Montreal, 1997.
— ‘User Survey on Image Preferences for a School of Architecture’, 30th Annual Conference of EDRA, Orlando, FL, 1999.