AGENDA FOR A ‘SCAFFOLD’ OF TRANSFORMATION PROJECTS: A PERSONAL VIEW

The following is an attempt to articulate, for discussion, basic premises and assumptions about various proposed efforts to respond to crises and problems, the corresponding proposals, the resulting agenda, and the parts I might be able to work on.
(Numbered for ease of commenting.)

1 There is widespread concern about serious crises and problems threatening humanity.

2 The existing systems of governance and economic relations do not seem to be able to remedy or prevent these crises effectively; in fact, it seems that some crises are actually caused and exacerbated by those systems.

3 Thus, the calls for 'change', 'transformation', for a new 'model for survival' are both urgent and plausible. However, there is no common agreement about what the remedies should be, what that new model should look like, and how it should be brought about.

4 The models and visions (in many current discussions) appear to mean some overall ‘global’ political-economic governance model to replace existing structures. There are reasons to question whether this search for ‘one model’ is the appropriate approach.

5 The next question, 'how to achieve this?', would draw on
a) possible precedents, models; and / or
b) plausible information, knowledge, ideas (visions) for new solutions;
c) available tools, methods, 'approaches' for dealing with the tasks.

6 Extensive efforts by many people and organizations (for example, a several-year-long discussion on LinkedIn by members of a group calling itself 'systems thinkers' and subscribing to the claim that 'systems thinking offers the best foundation for meeting the challenges / solving the problems facing humanity') did not yield a consensus about even which direction to choose in developing such a model, or which of the available tools to use, let alone a convincing model proposal itself.

7 Neither the examination of historical precedents nor the discussion of new methods and techniques supports the notion that 'we' (humanity, our leaders, systems thinkers or other analysts) currently know enough and have sufficient agreements to confidently develop a global model for survival — one that could replace or override the existing systems without causing violent opposition, wars, upheavals, while guaranteeing a better future. Thus, in my opinion, the discussion should – while not giving up the search for such a global model – focus more decisively on a more incremental strategy.

8 We need more information – not just 'data' about current conditions, but information about what would work, supported both by methodological research and by systematically evaluated experiments with different models: experiments not on a global scale (we can't afford global failure of another 'grand scale' global model) but at small, local scale, where 'failure' can become important information about what works and what does not, while the effects of failure on the group where it was tried can be remedied by global support and aid. 'Failures' should not be despised, shunned and punished; their informational results should be rewarded just as much as those of 'successes'.

9 The above-mentioned LinkedIn discussion, for example, revealed that there are already many experiments and initiatives for alternative ways of doing things underway in many areas of life and in many countries. Most are not supported but are struggling to survive opposition from traditional attitudes and structures that might feel threatened by these innovations; and there seems to be little communication and exchange of information between them. Nor is there a coherent system in place for harvesting the insights from these efforts for evaluation of what works and what doesn't.

10 A few counteracting forces can be seen that must be acknowledged and considered in responding to the issue:
a) The right to plan (in the ‘pursuit of happiness’ in all forms) is acknowledged as a human right. All planning must necessarily rely on the assumption of some ‘context’ conditions remaining sufficiently stable to guarantee the reliability of predictions of plan success — even while pursuing change;
b) The pursuit of change is driven not only by the desire to ‘solve problems’ but by an inherent desire to ‘make a difference’ in life.
(Both these forces are currently active; both must be adequately accommodated in whatever eventual ‘model’ is going to be adopted.)
c) Both pursuits also inherently rely on some set of unchanging, stable conditions – conditions that guarantee the effectiveness of actions to achieve the desired change. They include not only natural laws such as gravity or the properties of materials, but also assumptions about laws, habits, agreements that rule human behavior. (The relentless calls for ‘change’, ‘transformation’, even ‘destructive creativity’ seem to ignore this fact, raising the possibility of resulting in a state of continuous chaotic change.)
d) These counteracting forces tend to generate conflicts even in relationships characterized by a genuine desire for cooperation. Resolution of conflicts by force or coercion, historically dominant, is increasingly recognized as ineffective, let alone immoral, and as intensifying conflict, making it a recurring and escalating problem. Thus, the development of better, effective nonviolent means of conflict resolution is becoming more urgent. In general, such means will result in social 'agreements' (laws, treaties). Such agreements must cover all 'local' domains affected by the potential conflict: they will have to connect those domains, becoming nonlocal: 'global'.

(Example: we drive cars to destinations determined by our individual needs and desires: all different. But in doing so, we rely on a commonly accepted agreement: to drive on the ‘right’ (agreed-upon) side. It is arbitrary which side, as rules in different countries demonstrate; the important thing is that it is agreed upon, and that adequate provisions are in place to ensure that the agreements are adhered to.)

11 The implication of the above assumptions is that we 'need' both the right and opportunities to pursue our different individual plans (individually or in groups) AND common agreements for behavior when plans might interact in conflicting ways. The acknowledgement of individuals' and groups' rights to pursue plans is the first needed global agreement.

12 Both the common need for diverse experiments (to gain information about what works) and the individual need for ‘making a difference’ can be met by a commonly agreed-upon policy to allow and support a variety of experiments and initiatives.

13 This agreement should include a common provision or entity – a forum — for coordinating the diverse initiatives, keeping track of their experiences and performance, and for negotiating necessary ‘global’ (or inter-initiative) agreements and ‘rules’.

14 The provisions for ensuring that agreements (laws) are adhered to have in the past relied on 'enforcement' – responding to violations by means of force. To be effective, the entities designated to do this have to be more 'forceful' – that is, more powerful – than any would-be violator. This makes the enforcement entities extremely vulnerable to the temptations of power: by definition, there is no 'more powerful' entity to prevent the powerful from violating the very rules and agreements they are supposed to enforce. The control of power – important at all levels of society – becomes critical at the level of global governance. Traditional tools for the control of power are arguably losing their effectiveness; the search for better means of power control should be given highest priority.

15 Since new initiatives tend to threaten, or tend to be perceived as threatening or competing with, existing institutions, infrastructure, processes and powers, an initial strategy for implementing such policies should seek out 'new' projects aiming at creating newly needed entities rather than replacing existing ones, in domains where there are no existing power structures that might feel threatened and therefore obstruct experiments. One good (if not the best) way to do this is to encourage such initiatives and experiments ('innovation zones') in areas (geographical or societal, non-territorial) where existing social, technological and governance infrastructure has been destroyed — e.g. by natural or man-made disasters — or has not yet developed in response to innovations in technology, science, or human vision. Initiatives would be run on a volunteer basis, in return for agreement to certain conditions (below).

16 The support of such initiatives in disaster areas would consist mainly of the share of humanitarian aid normally given to disaster relief that would otherwise be used to reconstruct the existing (old) infrastructure, in addition to other philanthropic support for the applicable causes; this share would instead be devoted to starting and supporting the innovation initiative.

17 Such designated support would be provided on condition of
a) Presentation of a plan outline for the experiment, including some indication of what would be considered a measure or indication of success. These conditions should be carefully kept 'unbureaucratic'.
b) Agreement to reasonably carefully log, monitor and record the effort and its outcomes, whether success or failure to meet its intended goals. (This could be done by the coordinating entity.) It must be emphasized that even records of what does NOT work are what is needed overall, for the development of larger coherent models and plans.
c) The plan may include transition provisions for either expanding the key features of the experiment into other areas (or neighboring regions) if successful, or reversion to existing or other conditions in the case of failure.

18 The implementation of the proposed strategy would require a 'platform', forum or 'scaffold' – organization? – with the following aims and components (a rough sketch of a possible database record follows the list):

A) A data base of
a) currently existing initiatives and experiments;
(including past items to the extent adequate information can be found)
b) the growing list of proposed new experiments;
c) a ‘tool kit’ of techniques, methods, procedures, etc. that are or can be used by the initiatives, and shared with others.
d) a set of 'innovation experiment templates' that can serve groups in setting up experiments and quickly applying for support (esp. in case of emergencies); they can be adapted and modified in response to conditions and ideas;
e) a network of governmental and other providers of support for such initiatives;

B) A ‘planning discourse support platform’ for the following tasks:
a) development and discussion of proposed initiative templates;
b) discussion and evaluation of proposed initiatives to be supported;
c) the development and discussion of common, ‘global’ agreements;
d) drawing recommendations from the experiences that might lead to promising ideas for the eventual development of larger ‘global’ models.
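
(As a rough illustration only, not a specification: a record in the proposed database of initiatives (item 18 A) might contain fields like the following Python sketch; all names are my own assumptions, offered for the sake of discussion.)

# Illustrative sketch of a possible record in the database of initiatives (18 A).
# All field names are hypothetical assumptions, not a proposed standard.
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class InitiativeRecord:
    name: str
    domain: str                    # e.g. "energy", "housing", "local currency"
    area: str                      # geographical or societal ("non-territorial")
    status: str                    # "existing", "proposed", "running", "concluded"
    plan_outline: str              # condition 17 a): plan outline
    success_measure: str           # condition 17 a): what would count as success
    log: List[str] = field(default_factory=list)          # condition 17 b): monitored outcomes
    outcome: Optional[str] = None  # "success" or "failure" -- both are information
    tools_used: List[str] = field(default_factory=list)   # links into the 'tool kit' (A c)

# Hypothetical example entry:
example = InitiativeRecord(
    name="Highway right-of-way biomass pilot",
    domain="energy / land use",
    area="median strips, region X",
    status="proposed",
    plan_outline="Replace grass cover with biomass-yielding plants",
    success_measure="Fuel yield sufficient for maintenance equipment",
)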

19 The design of the platform or ‘scaffold’ may be guided by the models currently seen by its designers as the most promising tools. However, it should be presented and operated in the most generally understood terms, avoiding the ‘jargon’ associated with current approaches as much as possible – as well as avoiding the semblance of requirements to adopt any such ‘paradigm’ as a prerequisite for contributions and participation. The currently most general conceptual framework for all application domains is that of questions and answers, action or plan proposals (‘solutions’) to address perceived problems or aims (goals), and the ‘pro and con’ arguments about the plans, leading to common acceptance of decisions to adopt and implement.

20 My past work and interest would suggest my main contribution as aiming at item 18B as a first step.

21 Some example sketches of initiatives that meet the above criteria of ‘innovation zones’ in domains or areas where the initiatives would not have to ‘fight’ existing infrastructure and power networks (drawn from ideas in response to previous discussions):

a) Areas in which natural or man-made disasters have destroyed roads, buildings and other essential infrastructure as well as businesses. Such areas would be prime candidates for a number of technological, energy-generating, agricultural, housing and community design experiments.

b) The opportunities originally conceived as 'highway right-of-way biomass projects': gradually replacing the grass cover of highway median and right-of-way areas with plants that would be more effective in yielding biomass to produce fuel for the maintenance equipment. Extending these projects to include e.g. flowers and other non-food plants could turn them into revenue-generating projects that in turn might experiment with different business plans while offering employment opportunities. Adding higher hedges and trees at the edges of these areas could help prevent wind erosion of adjacent agricultural areas and improve the local microclimate. These projects could utilize the 'grey water' of rest areas and other nearby buildings, reducing the detrimental effect of emptying wastewater into rivers or lakes.

c) Efforts to 'revitalize' downtown and other areas in cities where 'monoculture' land use has destroyed urban vitality and appeal as well as the diversity of services: establishing 'cartmart' markets on vacant lots where aging, no longer economically viable buildings have been demolished, and where the residential structures and small businesses that made urban streets appealing and provided essential services have disappeared. Such areas, provided with common civic services (e.g. public bathrooms, bus and taxi stops, information booths), can accommodate kiosks or carts offering 'daytime-specific' wares and services for limited periods during the day, making room for other carts at other times. These small businesses could be ideal for part-time owners and employees, but if run out of vehicles (vans) they might also form small 'instant markets' in suburban areas, reducing the need for residents there to drive to the nearest supermarket for small errands. 'Big box' stores might be enticed to support such businesses by supplying them with merchandise at cost (in return for a share of advertising space on the kiosks or vans, and for the permits to locate their big-box facilities in areas outside downtown). They would also be ideal outlets for locally grown or produced wares.
The basic idea offers a variety of different opportunities for business models, as well as information for the review and revision of municipal land use regulations – e.g. regulations restricting businesses at sidewalk level to a smaller scale and a minimum average visitor frequency. Such markets can easily be introduced as 'temporary' and 'experimental' until more information about better ways to revitalize such areas has accumulated.

d) There have been various efforts to introduce alternative currencies in response to a range of conditions, such as inflation of the national currency or lending restrictions by the larger financial institutions. One different 'currency' concept is a 'by-product' of planning participation projects that involve rewarding participation with 'civic credit points' (weighted by the assessment of their merit by the entire group), which can then be used to 'qualify' people for various positions and decision-making roles in the respective community. (Besides aiming at a better linkage of planning decisions to the concerns and arguments of affected parties, offering such incentives would help counter the much-lamented 'voter apathy' of citizens urged to participate at their own expense of time and effort but without assurance that their contributions will 'count' in a perceptible way.) The idea likewise allows for a great range of variations and arrangements that need to be tested by trying them out in selected small-scale experiments.


Systems Models and Argumentation in the Planning Discourse

The following study will try to explore the possibility of combining the contribution of ‘Systems Thinking’ 1 — systems modeling and simulation — with that of the ‘Argumentative Model of Planning’ 2 expanded with the proposals for systematic and transparent evaluation of ‘planning arguments’.
Both approaches have significant shortcomings in accommodating each other's features and concerns. Briefly: systems models do not accommodate or show the argumentation (the 'pros and cons') involved in planning and appear to assume that any differences of opinion have been 'settled', while individual arguments used in planning discussions do not adequately convey the complexity of the 'whole system' that systems diagrams try to convey. Thus, planning teams relying on only one of these approaches to problem-solving and planning (or any other single approach exhibiting similar deficiencies) risk making significant mistakes and missing important aspects of the situation.
This mutual discrepancy suggests resolving it either by developing a different model altogether, or by combining the two in some meaningful way. The exercise will try to show how some of the mutual shortcomings could be alleviated by procedural means: successively feeding information drawn from one approach into the other, and vice versa. It does not attempt to conceive a substantially different approach.

Starting from a very basic situation: Somebody complains about some current ‘Is’-state of the world (IS) he does not like: ‘Somebody do something about IS!’

The call for action (a plan is desired) raises a first set of questions besides the main one, D: should the plan be adopted for implementation?
(Questions / issues will be italicized. The prefixes distinguish different question types: D for 'deontic' or ought-questions; E for explanatory questions; I for instrumental or factual-instrumental questions; F for factual questions; the same notation can be applied to individual claims):

E(IS –> OS)?        What kind of action should that be?
which can't really be answered before other questions are clarified, e.g.:
E(IS)?              Description of the IS-state?
E(OS)?              What is the 'ought'-state (OS) that the person feels ought to be the case? Description?
(At this point, no concrete proposal has been made — just some action called for.)
D(OS)?              Should OS become the case?
(This question calls for 'pros and cons' about the proposed state OS), and
I(IS –> OS)?        How can IS be changed to OS?
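
(For illustration only, the question types and the issues raised so far might be represented in code; the following Python sketch is my own assumption, not part of the notation itself.)

# Illustrative sketch of the issue notation: each issue has a type prefix
# (D = deontic / 'ought', E = explanatory, I = instrumental, F = factual)
# and a subject. All names are hypothetical.
from dataclasses import dataclass

@dataclass
class Issue:
    itype: str    # "D", "E", "I", or "F"
    subject: str  # what the question is about
    text: str     # the question in plain language

issues = [
    Issue("E", "IS",       "Description of the IS-state?"),
    Issue("E", "OS",       "What is the ought-state OS? Description?"),
    Issue("D", "OS",       "Should OS become the case?"),
    Issue("I", "IS -> OS", "How can IS be changed to OS?"),
]

for i in issues:
    print(f"{i.itype}({i.subject})?  {i.text}")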

Traditional approaches at this stage recommend doing some ‘research’. This might include both the careful gathering of data about the IS situation, as well as searching for tools, ‘precedents’ of the situation, and possible solutions used successfully in the past.

At this point, a ‘Systems Thinking’ (ST) analyst may suggest that, in order to truly understand the situation, it should be looked at as a system, and a ‘model’ representing that system be developed. This would begin by identifying the ‘elements’ or key variables V of the system, and the relationships R between them. Since so far, very little is known about the situation, the diagram of the model would be trivially simple:

(IS) –> REL –> (OS)

or, more specifically, representing the IS and OS states as sets of values of variables:

{V(IS)} –> REL(IS/OS) –> {V(OS)}

(The {…} brackets indicate that there may be a set of variables describing the state).

So far, the model simply shows the IS-state and the OS-state, as described by a variable V (or a set of variables), and the values for these variables, and some relationship REL between IS and OS.
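
(A minimal sketch of this trivially simple model in code may make the notation concrete; the variable values are arbitrary assumptions, for illustration only.)

# Minimal sketch of the model {V(IS)} -> REL(IS/OS) -> {V(OS)}:
# a set of variable values describing the IS-state, a set describing the OS-state,
# and some (still unspecified) relationship between them.
V_IS = {"v1": 10.0}   # values describing the current ('is') state
V_OS = {"v1": 25.0}   # values describing the desired ('ought') state

def REL(is_values, os_values):
    """Placeholder for the relationship between IS and OS: here simply the
    discrepancy per variable, which is all that is known at this stage."""
    return {name: os_values[name] - is_values[name] for name in is_values}

print(REL(V_IS, V_OS))   # {'v1': 15.0}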

Another ST consultant suggests that the situation — the discrepancy between the situation as it IS and as it ought to be (OS), as perceived by a person P1 — may be called a 'problem' IS/OS, and proposes to look for a way to resolve it by identifying its 'root cause' RC:

E(RC of IS)?       What is the root cause of IS?
and
F(RC of IS)?       Is RC indeed the root cause of IS?

Yet another consultant might point out that any causal chain is really potentially infinitely long (any cause has yet another cause…), and that it may be more useful to look for ‘necessary conditions’ NC for the problem to exist, and perhaps for ‘contributing factors’ CF that aggravate the problem once occurring (but don’t ’cause’ it):

E(NC of IS/OS)?     What are the necessary conditions for the problem to exist?
F(NC of IS/OS)?     Is the suggested condition actually a NC of the problem?
and
E(CF of IS/OS)?     What factors contribute to aggravating the problem once it occurs?
F(CF of IS/OS)?     Is the suggested factor actually a CF of the problem?

These suggestions are based on the reasoning that if a NC can be identified and successfully removed, the problem ceases to exist, and/or if a CF can be removed, the problem could at least be alleviated.

Either form of analysis is expected to produce ideas for potential means or actions to form the basis of a plan to resolve the problem, which can then be put up for debate. As soon as such a specific plan of action is described, it raises the questions:

E(PLAN A)?        Description of the plan?
and
D(PLAN A)?        Should the plan be adopted / implemented?

The ST model-builder will have to include these items in the systems diagram, with each factor impacting specific variables or system elements V.

RC      –> REL(RC-IS)   –> {V(IS)}
{NC}    –> REL(NC-IS)   –> {V(IS)}   –> REL   –> {V(OS)}
{CF}    –> REL(CF-IS)   –> {V(IS)}

Elements in ‘{…}’ brackets denote sets of items of that type. It is of course possible that one such factor influences several or all system elements at the same time, rather than just one. Of course, Plan A may include aspects of NC, CF, or RC. If these consist of several variables with their own specific relationships, they will have to be shown in the model diagram as such.

An Argumentative Model (AM) consultant will insist that a discussion be arranged, in which questions may be raised about the description of any of these new system elements and whether and how effectively they will actually perform in the proposed relationship.

Having invoked causality, questions will be raised about what further effects — 'consequences' CQ — the OS-state will have once achieved; what these will be like, and whether they should be considered desirable, undesirable (the proverbial 'unexpected consequences' or side-effects), or merely neutral. To be as thorough as the mantra of Systems Thinking demands — to consider 'the whole system' — that same question should be raised about the initial actions of PLAN A: it may have side-effects not considered in the desired problem-solution OS. Should they be included in the examination of the desired ought-state? So:

For {OS} –> {CQ of OS}:

E(CQ of OS)?        (what is/are the consequences? Description?)
D(CQ of OS)?        (is the consequence desirable / undesirable?)

For PLAN A –> {CQ of A}:

E(CQ of A)?
and
D(CQ of A)?

For the case that any of the consequence aspects are considered undesirable, additional measures might be suggested to avoid or mitigate these effects; these must then be included in the modified PLAN A', and the entire package reconsidered / re-examined for consistency and desirability.

The systems diagram would now have to be amended with all these additions. The great advantage of systems modeling is that many constellations of variable values can be considered as potential 'initial settings' of a system simulation run (plan alternatives), and the development of each variable can be tracked (simulated) over time. In any system with even moderate complexity and number of loops — variables in a chain of relationships having causal effects on other variables 'earlier' in the chain — the outcomes become 'nonlinear' and quite difficult and 'counter-intuitive' to predict. Both the possibility of inspecting the diagram showing 'the whole system' and the exploration of different alternatives contribute immensely to the task of 'understanding the system' as a prerequisite to taking action.
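
(A toy illustration of this point, with two coupled variables and entirely arbitrary coefficients of my own choosing: even this minimal loop, still linear, produces oscillating trajectories that are hard to anticipate by inspection alone; real system-dynamics models add nonlinear relationships and delays.)

# Toy simulation sketch: two variables coupled in a loop.
# y reinforces x, while x dampens y; coefficients and initial settings are
# arbitrary assumptions, for illustration only.
def simulate(x0, y0, steps=5, a=0.8, b=0.3, decay=0.1):
    x, y = x0, y0
    history = [(round(x, 2), round(y, 2))]
    for _ in range(steps):
        # simultaneous update: both expressions use the previous x and y
        x, y = x + a * y - decay * x, y - b * x
        history.append((round(x, 2), round(y, 2)))
    return history

# Different 'initial settings' (plan alternatives) yield quite different,
# non-obvious trajectories:
for x0, y0 in [(1.0, 0.0), (1.0, 1.0), (0.0, 1.0)]:
    print((x0, y0), "->", simulate(x0, y0))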

While systems diagrams do not usually show either ‘root’ causes, ‘necessary conditions’, or ‘contributing factors’ of each of the elements in the model, these will now have to be included, as well as the actions and needed resources of PLANS setting the initial conditions to simulate outcomes. A simplified diagram of the emerging model, with possible loops, is the following:

(outside uncontrolled factors: context)
    |          |          |          |          |

PLAN -> REL -> (RC, NC, CF) -> REL -> (IS) -> REL -> (OS) -> REL -> (CQ)

    |          |          |          |          |
       (forward and backward loops among these elements)

A critical observer might call attention to a common assumption in simulation models — a remaining 'linearity' feature that may not be realistic. In the network of variables and relationships, the impact of a change in one variable V1 on the connected 'next' variable V2 is assumed to occur stepwise during one time unit i of the simulation, the change in the following variable V3 in the following time unit i+1, and so on. Delays in these effects may be accounted for. But what if the information about that change in time unit i is distributed throughout the system much faster — even 'almost instantaneously' — compared to the actual and possibly delayed substantial effects (e.g. 'flows') the diagram shows with its explicit links? This might have the effect that actors or decision-makers concerned about variables elsewhere in the system, for reasons unrelated to the problem at hand, take 'preventive' steps that could change the expected simulated transformation. Of course, such actors and decision-makers are not shown…

Systems diagrams ordinarily do not acknowledge that — to the extent there are several parties involved in the project, and affected in different ways by either the initial problem situation or by proposed solutions and their outcomes — those different parties will have significantly different opinions about the issues arising in connection with all the system components, if the argumentation consultant manages to organize a discussion. The system diagram represents only one participant's view or perspective of the situation. It appears to assume that what 'counts' in making any decisions about the problem are only the factual, causal, functional relationships in the system, as determined by one (set of) model-builder(s). Thus, those responsible for making decisions about implementing the plan must rely on a different set of provisions and perspectives to convert the gained insights and 'understanding' of the system and its workings into sound decisions.

Several types of theories and corresponding consultants offer suggestions for how to do this. Given the particular way their expertise is currently brought into planning processes, they usually reflect just the main concerns of the clients they are working for. In business, the decision criterion is, obviously, the company's competitive advantage resulting in reliable earnings: profit, over time. Thus for each 'alternative' plan considered (different initial settings in the system), and the actions and resources needed to achieve the desired OS, the 'measure of performance' associated with the resulting OS will be profit — earnings minus costs. For government consultants (striving to 'run government like a business'?) the profit criterion may have to be labeled somewhat differently — say, 'benefit' and 'cost' of government projects, and their relationship such as B-C or the more popular B/C, the benefit-cost ratio. For overall government performance, the Gross National Product (GNP) is the equivalent measure. The shortcomings and problems associated with such approaches led to calls for using 'quality of life' or 'happiness' or Human Development Indices instead, and criteria for sustainability and ecological aspects. All or most such approaches still suffer from the shortcoming of constructing overall measures of performance: shortcomings because they inevitably represent only ONE view of the problems or projects — differences of opinion or significant conflicts are made invisible.

In the political arena, any business and economic considerations are overlaid if not completely overridden by the political decision criteria — voting percentages. Most clearly expressed in referenda on specific issues, alternatives are spelled out, more or less clearly, so as to require a ‘yes’ or ‘no’ vote, and the decision criterion is the percentage of those votes. Estimates of such percentages are increasingly produced by opinion surveys sampling just a small but ‘representative’ number of the entire population, and these aim to have a similar effect on decision-makers.

Both Systems Thinkers and advocates of the extended Argumentative Model are disheartened by the fact that in these business and governance habits, all the insight produced by their respective analysis efforts seems to have little if any visible connection with the simple 'yes/no', opinion poll or referendum votes. Rightfully so, and their concern should properly be with constructing better mechanisms for making that connection. From the Argumentative Model side, such an effort has been made with the proposed evaluation approach for planning arguments, though with clear warnings against using the resulting 'measures' of plan plausibility as convenient substitutes for decision criteria. The reasons for this have to do with the systemic incompleteness of the planning discourse: there is no guarantee that all the concerns that influence a person's decision about a plan — concerns that should be given 'due consideration' and therefore should be included in the evaluation — actually can and will be made explicit in the discussion.

To some extent, this is based on the different attitudes discourse participants bring to the process. The straightforward assumption of mutual trust and cooperativeness aiming at mutually beneficial outcomes — 'win-win' solutions — obviously does not apply to all such situations, though there are many well-intentioned groups and initiatives that try to instill and grow such attitudes, especially when it comes to global decisions about issues affecting all humanity such as climate, pollution, disarmament, global trade and finance. The predominant business assumption is that of competition, seeing all parties as pursuing their own advantages at the expense of others, resulting in zero-sum outcomes: win-lose solutions. A number of different situations can be distinguished according to whether the parties share or hold different attitudes in the same discourse, the 'extreme' positions being completely sharing the same attitude, or holding attitudes at opposite ends of the scale; or something in between, which might be called indifference to the other side's concerns — as long as those do not intrude on one's own concerns, in which case the attitudes likely shift to the win-lose position, at least for that specific aspect.

The effect of these issues can be seen by looking at the way a single argument about some feature of a proposed plan might be evaluated by different participants, and how the resulting assessments would change decisions. Consider, for the sake of simplicity, the argument in favor of a Plan A by participant P1:

D(PLAN A)!         Position ('conclusion'): Plan A ought to be adopted.
because
F((VA –> REL(VA –> VO) –> VO) | VC)      Premise 1: Variable VA of plan A will result in (e.g. cause) variable VO, given condition VC;
and
D(VO)                   Premise 2: Variable VO ought to be aimed for;
and
F(VC)                   Premise 3: Condition VC is the case.

Participant P1 may be quite confident (but still open to some doubt) about these premises, and about being able to supply adequate evidence and support arguments for them in turn. She might express this by assigning the following plausibility values to them, on a plausibility scale of -1 to +1, for example:
Premise 1: +0.9
Premise 2: +0.8
Premise 3: +0.9
One simple argument plausibility function (multiplying the plausibility judgments) would result in an argument plausibility of +0.648: a not completely 'certain' but still comfortable result supporting the plan. Another participant P2 may agree with premises 1 and 2, assigning the same plausibility values to those as P1, but have considerable doubt as to whether the condition VC is indeed present to guarantee the effect of premise 1, expressed by the low plausibility score of +0.1, which would yield an argument plausibility of +0.072 — a result that can be described as too close to 'don't know if VA is such a good idea'. If somebody else — participant P3 — disagrees with the desirability of VO, and therefore assigns a negative plausibility of, say, -0.5 to premise 2 while agreeing with P1 about the other premises, his result would be -0.405, using the same crude aggregation formula. (These functions are of course up for discussion.) The issue of weight assignment has been left aside here: only one argument is being considered, so the weight of its deontic premise is 1, for the sake of simplicity. The difference in these assessments raises not only the question of how to obtain a meaningful common plausibility value for the group, as a guide for its decision. It might also cause P1 to worry whether P3 would consider taking 'corrective' (in P1's view 'subversive'?) actions to mitigate the effect of VA should the plan be adopted, e.g. by majority rule, or by following the result of some group plan plausibility function such as taking the average of the individual argument plausibility judgments as a decision criterion. (This is not recommended by the theory.) And finally: should these assessments, with their underlying assumptions of cooperative, competitive, or neutral, disinterested attitudes, and the potential actions of individual players in the system to unilaterally manipulate the outcome, be included in the model and its diagram or map?
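
(The arithmetic of this example can be spelled out in a few lines of Python; the numbers are the plausibility values assumed above, and the product is the same crude aggregation function discussed in the text.)

# Argument plausibility as the product of premise plausibilities
# (the simple aggregation function used in the example above).
def argument_plausibility(premise_plausibilities):
    result = 1.0
    for p in premise_plausibilities:
        result *= p
    return result

judgments = {
    "P1": [0.9, 0.8, 0.9],    # confident about all three premises
    "P2": [0.9, 0.8, 0.1],    # doubts that condition VC is present
    "P3": [0.9, -0.5, 0.9],   # considers VO undesirable
}

for participant, plaus in judgments.items():
    print(participant, round(argument_plausibility(plaus), 3))
# -> P1 0.648, P2 0.072, P3 -0.405

# A group measure such as the simple average (not recommended by the theory
# as a decision criterion) would be:
values = [argument_plausibility(p) for p in judgments.values()]
print("average:", round(sum(values) / len(values), 3))    # -> 0.105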

While a detailed investigation of the role of these attitudes on cooperative planning decision-making seems much needed, this brief overview already makes it clear that there are many situations in which participants have good reasons not to contribute complete and truthful information. In fact, the prevailing assumption is that secrecy, misrepresentation, misleading and deceptive information and corresponding efforts to obtain such information from other participants — spying — are part of the common ‘business as usual’.

So how should systems models and diagrams deal with these aspects? The ‘holistic’ claim of showing all elements so as to offer a complete picture and understanding of a system arguably would require this: ‘as completely as possible’. But how? Admitting that a complete understanding of many situations actually is not possible? What a participant does not contribute to the discourse, the model diagram can’t show. Should it (cynically?) announce that such ‘may’ be the case — and that therefore participants should not base their decisions only on the information it shows? To truly ‘good faith’ cooperative participants, sowing distrust this way may be perceived as somewhat offensive, and itself actually interfere with the process.

The work on systems modeling faces another significant unfinished task here. Perhaps another look at the way we are making decisions as a result of planning discussions can help somewhat.

The discussion itself rests on the assumption that it is possible and useful in reaching better decisions — presumably, better than decisions made without the information it produces. It does not, inherently, condone the practice of sticking to a preconceived decision no matter what is being brought up (nor the arrogant attitude behind it: 'my mind is made up, no matter what you say…'). The question has two parts. One is related to the criteria we use to convert the value of information into decisions. The other concerns the process itself: the kinds of steps taken, and their sequence.

It is necessary to quickly go over the criteria issue first — some criteria were already discussed above. The criteria for business decision-makers, which can be assumed to be used by the single decision-maker at the helm of a business enterprise (which of course is a simplified picture) — profit, ROI, and their variants arising from planning horizon, sustainability and PR considerations — are single measures of performance attached to the alternative solutions considered: the rule for decision 'under certainty' is: select the solution having the 'best' (highest, maximized) value. ('Value' here is understood simply as the number of the criterion.) That picture is complicated for decision situations under risk, where outcomes have different levels of probability, or under complete uncertainty, where outcomes are not governed by predictable laws, nor even probability, but by other participants' possible attempts to anticipate the designer's plans and to actively oppose them. This is the domain of decision and game theory, whose analyses may produce guidelines and strategies for decisions — but again, different decisions or strategies for different participants in the planning. The factors determining these strategies are arguably significant parts of the environment or context that designers must take into account — and systems models should represent — to produce a viable understanding of the problem situation. The point to note is that systems models permit simulation of these criteria — profit, life-cycle economic cost or performance, ecological damage or sustainability — because they are single measures, presumably collectively agreed upon (which is at least debatable). But once the use of plausibility judgments as measures of performance is considered as a possibility — even as aggregated group measures — the ability of systems models and diagrams to accommodate them becomes very questionable, to say the least. It would require the input of many individual (subjective) judgments, which are generated as the discussion proceeds, and some of which will not be made explicit even if there are methods available for doing this.
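
(The contrast between a single agreed-upon measure of performance and many individual plausibility judgments can be made concrete with a small sketch; all numbers here are invented for illustration.)

# Decision 'under certainty' with a single, agreed-upon measure of performance:
# select the alternative with the best (highest) criterion value.
alternatives = {"Plan A": 120.0, "Plan B": 95.0, "Plan C": 140.0}   # e.g. profit
best = max(alternatives, key=alternatives.get)
print("single-criterion choice:", best)    # -> Plan C

# With plausibility judgments, each participant produces their own value for
# each alternative; there is no obviously 'correct' single number to maximize.
plausibilities = {
    "Plan A": {"P1": 0.648, "P2": 0.072, "P3": -0.405},
    "Plan B": {"P1": 0.30,  "P2": 0.55,  "P3": 0.20},
}
for plan, judgments in plausibilities.items():
    print(plan, judgments)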

This shift of criteria for decision-making raises the concerns about the second question, the process: the kinds of steps taken, by what participants, according to what rules, and their sequence. If this second aspect does not seem to need or require much attention — the standard systems diagrams again do not show it — consider the significance given to it by such elaborate rule systems as parliamentary procedure, ‘rules of order’ volumes, even for entities where the criterion for decisions is the simple voting percentage. Any change of criteria will necessarily have procedural implications.

By now, the systems diagram for even the simple three-variable system we started out with has become so complex that it is difficult to see how it might be represented in a diagram. Adding the challenges of accounting for the additional aspects discussed above — the discourse with controversial issues, the conditions and subsequent causal and other relationships of plan implementation requirements and further side-effects, and the attitudes and judgments of individual parties involved in and affected by the problem and proposed plans — complicates the modeling and diagram display tasks to an extent where they are likely to lose their ability to support the process of understanding and arriving at responsible decisions. I do not presume to have any convincing solutions for these problems and can only point to them as urgent work to be done.

[Figure ST-AM 4: Evolving 'map' of 'system' elements and relationships, and related issues]

Meanwhile, from the point of view of acknowledging these difficulties but trying, for now, to 'do the best we can with what we have', it seems that systems models and diagrams should continue to serve as tools to understand the situation and to predict the performance of proposed plans — if some of the aspects discussed can be incorporated into the models. The construction of the model must draw upon the discourse that elicits the pertinent information (through the 'pros and cons' about proposals). The model-building work therefore must accompany the discourse — it cannot precede or follow the discussion as a separate step. Standard 'expert' knowledge-based analysis — conventional 'best practice' and research-based regulations, for example — will be as much a part of this as the 'new', 'distributed' information that is to be expected in any unprecedented 'wicked' planning problem, and that can only be brought out in the discourse with affected parties.

The evaluation preparing for decision — whether following a customary formal evaluation process or a process of argument evaluation — will have to be a separate phase. Its content will now draw upon and reflect the content of the model. The analysis of its results — identifying the specific areas of disagreement leading to different overall judgments, for example — may lead to returning to previous design and model (re-)construction stages: to modify proposals for more general acceptability, or better overall performance, and then return to the evaluation stage supporting a final decision. Procedures for this process have been sketched in outline but remain to be examined and refined in detail, and described concisely so that they can be agreed upon and adopted by the group of participants in any planning case before starting the work, as they must, so that quarrels about procedure will not disrupt the process later.

Looking at the above map again, another point must be made: once again, the criticism of systems diagrams seems to have been ignored — the diagram still only expresses one person's view of the problem. The system elements called 'variables', for example, are represented as elements of 'reality', and the issues and questions about them are expected to yield 'real' (that is, real for all participants) answers and arguments. Taking the objection seriously, would we not have to acknowledge that 'reality' is known to us only imperfectly, if at all, and that each of us has a different mental 'map' of it? Thus, each item in the systems map should perhaps be shown as multiple elements referring to the same thing labeled as something we think we know and agree about: one bubble of the item for each participant in the discourse. And these bubbles will possibly, even likely, not be congruent but only overlapping, at best, and at worst cover totally different content and meaning — the content that is then expected to be explained and explored in follow-up questions. Systems Thinking has acknowledged this issue in principle — that 'the map (the systems model and diagram) is NOT the landscape' (the reality). But this insight should itself be represented in a more 'realistic' diagram — realistic in the sense that it acknowledges that all the detail information contributed to the discourse and the diagram will be assembled in different ways by each individual into different, only partially overlapping 'maps'. An objection might be that the system model should 'realistically' focus on those parts of reality that we can work with (control? or at least predict?) — with some degree of 'objectivity' — the overlap we strive for with the 'scientific' method of replicable experiments, observations, measurements, logic, statistical confirmation? And that the concepts different participants carry around in their minds to make up their different maps are just 'subjective' phenomena that should 'count' in our discussions about collective plans only to the extent they correspond ('overlap') to the objective, measurable elements of our observable system? The answer is that such subjective elements as individual perspectives about the nature of the discourse as cooperative or competitive etc. are phenomena that do affect the reality of our interactions. Mental concepts are 'real' forces in the world — so should they not be acknowledged as 'real' elements with 'real' relationships in the relationship network of the system diagram?

We could perhaps state the purpose of the discourse as that of bringing those mental maps into sufficiently close overlap for a final decision to become sufficiently congruent in meaning and acceptability for all participants, the resulting 'maps' along the way having a sufficient degree of overlap. What is 'sufficient' for this, though? And does that apply to all aspects of the system? Are not all our plans in part also meant to help us pursue our own, that is, our different versions of happiness? We all want to 'make a difference' in our lives — some more than others, of course — and each in our own way. The close, complete overlap of our mental maps is a goal and obsession of societies we call 'totalitarian'. If that is not what we wish to achieve, should the principle of plan outcomes leaving and offering (more? better?) opportunities for differences in the way we live and work in the 'ought-state' of problem solutions be an integral element of our system models and diagrams? That would be represented as a description of the outcome consisting of 'possibility' circles that have 'sufficient' overlap, sure, but also a sufficient degree of non-overlap 'difference' opportunity outside of the overlapping area. Our models and diagrams and system maps don't even consider that. So is Systems Thinking, proudly claimed as being 'the best foundation for tackling societal problems' by the Systems Thinking forum, truly able to carry the edifice of future society yet? For its part, the Argumentative Model claims to accommodate questions from all kinds of perspectives, including questions such as these — but the mapping and decision-making tools for arriving at meaningful answers and agreements are still very much open questions. The maps, for all their crowded data, have large undiscovered areas.

The emerging picture of what a responsible planning discourse and decision-making process for the social challenges we call 'wicked problems' would look like, with currently available tools, is not a simple, reassuring and appealing one. But the questions that have been raised for this important work-in-progress should not, in my opinion, be ignored or dismissed because they are difficult. There are understandable temptations to remain with traditional, familiar habits — the ones that arguably often are responsible for the problems? — or to revert to even simpler shortcuts such as placing our trust in the ability and judgments of 'leaders' to understand and resolve tasks we cannot even model and diagram properly. For humanity to give in to those temptations (again?) would seem to qualify as a very wicked problem indeed.


Notes:

1 The understanding of ‘systems thinking’ (ST) here is based on the predominant use of the term in the ‘Systems Thinking World’ Network on LinkedIn.

2 The Argumentative Model (AM) of Planning was proposed by H. Rittel, e.g. in the paper 'APIS: A Concept for an Argumentative Planning Information System', Working paper 324, Institute of Urban and Regional Development, University of California, 1980. It sees the planning activity as a process in which participants raise issues – questions to which there may be different positions and opinions – and support their positions with evidence, answers and arguments. From the ST point of view, AM might just be considered a small, somewhat heretical sect within ST…


Another bedtime gun control story

What are you thinking about so somberly, Abbé Boulah?

Old stories.

Tell me. It’s almost bedtime.

Okay. Not sure they’ll live happily ever after on this story though.

Once upon a time, in the dark ages, life was hard and dangerous. Wild animals in the forests often attacked and killed humans. So they invented tools they’d carry around to protect themselves against the animals, and kill them instead. They also used these tools against other – ‘bad’ — humans who occasionally liked to take their women or livestock or fermented fruit juices, or because they looked or talked funny. Or because there weren’t enough wild animals around anymore. Even for no particular reason. Now soon, those ‘bad guys’ would invent bigger and more effective tools.  So people had to invent even bigger and more deadly tools — they called them ‘weapons’ — and things they’d wear to protect themselves — shields, mail coats, body armor, helmets. Inevitably, the bad guys would develop even more effective weapons, better armor, and start their attacks from big horses they also protected with armor. So the good people got even bigger horses and longer lances for their protection. This went on until all the stuff they had to put on and carry became so heavy that they needed several servants to even get up on their horse. If they fell off the horse in the course of an altercation, they were totally helpless. Not good.

No. Sounds more like insanity.

Yes. So they decided to try a different tack. They agreed that — at least in everyday civil life — nobody would carry any weapons. And that they'd settle disagreements by talking, or calling upon referees or an assembly of neighbors, explaining their grievances and abiding by their decision. Of course, they had to designate some people — guardians of the peace — to make sure everybody adhered to these agreements. In some places, these folks did not carry weapons — they were only protected by the people's promise that if anybody did use weapons or force against them, they would face very unpleasant consequences.

Sounds better. Did it work?

Not everywhere, Bog-Hubert: not everybody believed it would, so they didn't even try it. Other places followed a different reasoning. They thought that guardians who had to ensure that nobody would use force to violate the agreements they called 'laws' must necessarily be stronger, and have more force at their disposal, than any would-be violator. So they gave their peacekeepers weapons.

Not entirely unreasonable, huh?

Arguable. But not only did this start that escalating process all over again between the police and the bad guys; it also tempted some people in charge of 'government' to enact, and have the peacekeepers enforce, laws that benefited themselves more than the citizens, which was easy as long as the citizens didn't have any weapons. (Those people most likely had come into power by not-so-peaceful means themselves.)

Those pesky unexpected consequences, eh?

Right. The citizens did not like that, and in some places found ways to kick those people off their thrones, and try a different set of rules. First, they would designate people from their own ranks to be the government. People they trusted — but only to some extent, so they were allowed to govern only for some time. And who could be recalled if they didn't govern the way the people wanted. And secondly: citizens would be allowed to keep weapons themselves — to protect themselves and their homes against bad guys if the peacekeepers couldn't get there fast enough, but also to ensure that the government wouldn't start any funny business. Weapons were needed, of course, because the government was in charge of peacekeepers who had weapons — according to the abovementioned reasoning. What happened now was that the bad guys were getting more effective weapons than the peacekeepers, who inevitably had to respond by getting better weapons themselves, plus protective gear and vehicles. This left the average citizen behind: the old muskets wouldn't do the job for either of the two protection reasons we mentioned. So they had to get better weapons too.

Sounds familiar. Back to the drawing board?

Not yet. What was new was that this was very good business for the people who made all those weapons and protective gear. They provided both — or rather, all three — parties in this game with ever-increasingly effective weapons and ammunition. And they became rich. So they, or some of them, anyway, were tempted to buy themselves some government.

Buy the government? I’m shocked, shocked…

Of course they wouldn’t crudely say that out loud. Wouldn’t be proper. Not good PR. They would just finance the ‘democratic’ election campaigns for candidates who wanted to govern — in return for a promise that they would not pass any laws that could hurt their business. And business was very good. Everybody got weapons. And the weapons got better — more deadly and destructive — every year, so everybody had to keep buying better weapons. And whenever somebody would fall into the strange temptation to start killing other people for no particular good reason…

Or because they had lost their job, or gotten a bad grade in math, because their parents hadn’t bought them the toys they wanted when they were kids, or because the girl they liked didn’t like them back, or, so cowardly, because their victims did not have any weapons, like school kids, and were easy to kill, or to get on TV even if they got killed themselves?

Like I said, no particular good reason. Then the business and government leaders would solemnly pray for the victims and argue for more people getting and carrying weapons around everywhere, for their protection. And then, for the peacekeepers to get more effective weapons than anybody else, of course. Business, you know. Elections coming up, you know.

Ahh.

These are of course much more advanced and enlightened times than those old days we call the dark ages, when the heroes in shining armor kept falling helplessly off their horses. Today, many more people get killed much more efficiently. And business is good.

Hmm.

What’s that you say? You’d think that some of all the creativity and invention and hard work that goes into the making of better weapons might be used to figure out other ways to make this problem go away? Other than just to make guns illegal, and lock up people who have any? Or let everybody get more and better guns every year? You’d kill for it, you say? Well. Figure of speech, I know you didn’t mean it that way. Based on experience, you’d get laughed out of the room to argue for that. Or get killed. Humans are funny about that gun business.

So what would you do?

Me? I have some ideas. Didn’t we talk about that some time ago? But … Experience, you know?


On the role of design and education of designers.

(From a letter to a friend who has been working, writing and publishing on the problems of ‘design’.) Thorbjoern Mann, May 2015

I have been busy trying to communicate with the systems folks on LinkedIn about the role of argumentation in systems modeling — there seems to be an obstinate blind spot (or hole?) in their oh so holistic minds about that. I have yet to see a systems diagram in which the various issues (contentious questions, for example regarding assumptions about the model variables and parameters, about which people might disagree) are not somehow assumed to be 'settled'. No more discussion. Curiously, it makes me feel a little like someone trying to fill those open minds (they insist) with the precious grains of my speculations, only to see them run out of the bottoms of those minds (there are holes top and bottom, and the bottom ones are larger?) like ocean sand.

So every once in a while I resort to wise books like the Designology volume you graciously sent me, for reassurance that the design perspective is one to be valued, respected, and further explored. I am especially fascinated by the heroic efforts in that book — and elsewhere — to identify and locate the proper role of design in the academic landscape of disciplines and departments. And the more I think about it, the more I sense how much of a monster this thing must look like from the point of view of, say, a ministry of education confronted with demands for proper designation of funds and personnel and labels (department names), let alone assignment of leadership roles to this 'design' phenomenon.

For it seems to be a little like that curious object some people have used to test prospective designers’ visual imagination: the thing that has a square profile if seen from one side, a triangle from another, and a circle from a third direction. Design indeed looks like a handful of different disciplines, depending on the angle from which it is seen. The literature is replete with complaints about the difficulty of agreeing on a common definition of design.

For example: let's say we start, arbitrarily, from some proposed explanation that design has something to do with problem solving. Looking at a problem as a discrepancy between a state of affairs as it IS (or will be, if nothing is done) and as it OUGHT to be raises the need or desire to find out HOW it may be transformed from the former to the latter. Closer attention to the IS part may get us to look not only at the facts of the current situation and their adequate determination and description, but also at the causes that made things get this way, trying to understand the forces and laws at work in that process. This may have to do with physical aspects of reality, suggesting an approach like the scientific method of the natural sciences to validate and understand it: does this not look like Science? But not only science in the sense of the 'hard' natural sciences, because physical conditions and artifacts involved in problems have effects on people, their minds (psychological, physiological) and relations: social science. The designer must have some adequate understanding of both 'kinds' of science in order to deal with the challenge of doing something meaningful about it.

Looking at the other end, though, the OUGHT aspect, a first impression that it also has a social sciences flavor — user needs, for example — soon gives way to a sense that there may be more esoteric aspects at work: vision, dreams, desires, imagination, aesthetics: aspects for which either science label clearly is not appropriate. In fact, the label OUGHT evokes connections to quite different disciplines: those that explore the good, morality, ethics, norms. So should design actually be situated in the philosophy department?

This is not a very common idea. Rather, it is the imagination aspect, or more specifically, the need to use visual images to communicate about the proposed results of this activity, that has led many to see the essence of design in the tools we have to help our own and the audience’s understanding and ability to ‘see’ proposed solutions: Drawing, model-building, perspective, rendering, with their closeness to painting and sculpture: Obviously: it’s (a kind of) an Art? Even given more recent tools of computer programs for virtual visual walk-through presentations. This is historically a more widely embraced notion.

However, there are more, less ‘artistic’ tools designers need to persuasively present solution ideas to clients and the public. Proofs of validity, affordability, safety: diagrams, calculations. More like the tools engineers are using?

Wait: persuasion? Yes, designers will have to spend some effort trying to convince others of the advantages of the solution — mainly the ones who are expected to pay for its implementation. This is partly the stuff of ‘storytelling’ many design teachers admonish their students to cultivate — what will it be like to live in this great proposed solution? But also, when things are heating up, of argumentation: exploring, discussing the pros and cons of the proposals.

Arguments? Doesn’t that have to do with logic, rhetoric? But the disciplines in charge of argumentation haven’t paid much attention to the kinds of arguments we are using all the time in the design and planning discourse, so they do not have much room for the concerns of design in their curricula — but it’s argumentation, all right. Even the structure of these ‘planning arguments’ clearly indicates the multifaceted nature of the concerns involved:

“We ought to adopt proposal X
because
1) implementing X will result in consequence Y provided conditions C are given;
and
2) we ought to pursue consequence Y,
and
3) conditions C are indeed present?”

This ubiquitous argument pattern (of course there are many variations due to different assertion / negation of terms, and different relations between X and Y) contains at least two or three different kinds of premises: the factual-instrumental premise 1, the deontic premise 2, and the factual premise 3. If questioned, each of these will have to be supported with very different kinds of reasons: the kind of evidence we could loosely call scientific method for premises 1 and 3, but based on conceptual agreements about the meaning of the things we are talking about — reasons which employ the arguments found in the familiar catalogues of reliable logical and statistical inference, observation, data-gathering and measurement. A closer scrutiny of the catch-all premise 3 might reveal that the conditions C include all the variables, values, and relationship parameters of a systems model. The ‘Systems Thinking’ community (referring to a variety of different emerging ‘brands’ of systems studies) would thus argue that holistic understanding and modeling of the systems into which designers are intervening is a necessity, and that this is the concern of premises 1 and especially 3.

But for premise 2, the supporting arguments will be of the same kind of ‘planning argument’ type. From the point of view of formal logic, these arguments are not ‘valid’ in the sense of deductive syllogisms whose conclusions must be accepted as true if all the premises are true. They are merely ‘inconclusive’ at best, no matter how recklessly we use and accept this kind of reasoning in everyday planning discourse. That very recklessness being a strong argument in favor of designers studying such reasoning more carefully than is currently the case… What to call this perspective?

Coming back to the impression that design is more like engineering. There is good evidence for this: the question of HOW to transform the unpleasant IS condition to the desirable OUGHT requires the application of scientific knowledge — science, again — to the task of putting together tools, processes, resources to generate solutions and to evaluate them, test them to see if they will meet the requirements and withstand damaging forces. And in the production of modern architecture, there are many different kinds of engineers involved — engineering had to divide itself into many different sub-disciplines, each drawing on its own branch of science. The available and needed knowledge has become too rich and complex for any single professional to master it all. This means that effective coordination of all these activities in the design process requires at least an adequate understanding of the different engineering branches and their vocabulary, concerns, and criteria, to make sense of it all. Ideally. So perhaps it was appropriate for many architecture schools to be located in Institutes of Technology rather than in art schools such as the Beaux Arts?

The successful practitioners of this kind of art, though, (the ones who consistently win commissions for significant work) find themselves facing a quite different challenge: that of running a business. And some of the well-known sources of jokes about architects refer to their frequent troubles of this kind. For example: meeting deadlines: time management, and even more seriously, staying within the budget. A case for including more management, business and economics material in the education of designers?

What, besides an understanding of engineering, business, and economics, — we might as well throw in the various disciplines exploring the aspect of sustainability and ecological impact of their buildings — does this mean for the poor architects? The ones who got through architecture school even in spite of the required structures courses that gave their artistic minds so much trouble? It becomes a very different activity: to guide and orchestrate — the word is very apt for the assembly of different disciplines and professions — the activities of all these people in the design process. Not only there, but of course also in the subsequent implementation process, with different professionals. The architect there has to become a project manager — if he hasn’t given up that role to yet another, different profession. But a good design has to take the implementation process into account as an important determining factor: if it can’t be built, if it takes too long, if there are too many possibilities of accidents or failures along the way, his prospects for successful creation of solutions are slim.

Creating, designing, then, involves all these considerations and skills. And while this little sketch considered only the architect of buildings (the word ‘architect’ has been taken over by many other ‘designing’ roles such as software developers and even turned into a verb; old Vitruvius must be rotating in his grave) it should be easy to see how this multiple perspective feature applies to many other areas of modern life. Yes: for the academic department designer, ‘design’ is a monster, and the proper role and placement of design education is a very wicked problem.

It raises a number of important questions for how research (the science of design) and education for all the professions that will have to deal with design ought to be organized, funded and guided. The current confused attitude and treatment — best characterized by Senator Moynihan’s infamous ‘benign neglect’ quip about race relations — perhaps has the advantage that many different people in many different realms are forced to creatively deal with it. But it can’t, by any measure, be called a convincing, efficient design. This very point, in my opinion, is calling for increased attention and discussion. Perhaps a conference? A research project (if research is the proper word, after all these questions…)? A ‘design’ competition? A large online public planning discourse?

 


Combining systems modeling maps with argumentative evaluation maps: a general template

Many suggested tools and platforms have been proposed to help humanity overcome the various global problems and crises, each with claims of superior ability or adequacy for addressing the ‘wickedness’ of the problems.

Two of the main perspectives I have studied – the general group of models labeled ‘systems thinking’ or ‘systems modeling and simulation’, and the ‘argumentative model of planning’ proposed by H. Rittel (who incidentally saw his ideas as part of a ‘second generation’ systems approach) – have been shown to fall somewhat short of those claims: specifically, they have so far not demonstrated the ability to adequately accommodate each other’s key concerns. The typical systems model seems to assume that all disagreements regarding its model assumptions have been ‘settled’; it shows no room for argument, discussion or disagreement. The key component of the argumentative model – the typical ‘pro’ or ‘con’ argument of the planning discourse, the ‘standard planning argument’ – connects no more than two or three of the many elements of a more elaborate systems model of the respective situation, and thus fails to properly accommodate the complexity and multiple loops of such models.

It is of course possible that a different perspective and approach will emerge that can better resolve this discrepancy. However, it will have to acknowledge and then properly address the difficulty we can now only express with the vocabulary of the two perspectives. This essay explores the problem of showing how the elements of the two selected approaches can be related in maps that convey both the respective system’s complexity and the possible disagreements and assessment of the merit of arguments about system assumptions.

A first step is the following simplified diagram template that shows a ‘systems model’ in the center, with arguments both about how the proposal for intervention in the system (consisting of suggested actions upon specific system elements) should be evaluated, and about the degree of certainty – the suggested term is ‘plausibility’ – about assumptions regarding individual elements.

A key aspect of the integration effort is the insight that the ‘system’ will have to include all the features discussed in the discourse under the term ‘plan proposal’: its details of initial conditions; the proposed actions (what to do, by whom, using what tools and resources, and the conditions for their availability); the ‘problem’ a solution aims at remedying, described (at least) by its current ‘IS’ state and the desired ‘OUGHT’ state or planning outcome; the means by which the transition from IS- to OUGHT-state can be achieved; and the potential consequences of implementing the plan, including possible ‘unexpected’ side- and after-effects. Conversely, the assessment of arguments (the “careful weighing of pros and cons”) will have to explicitly address the system model elements and their interactions – elements that should be (but mostly are not) specified in the argument as the ‘conditions’ under which the plan or one of its features is assumed to effectively achieve the specific outcome or goal referenced by the argument.

For the sake of simplicity, the diagram only shows two arguments or reasons for or against a proposed plan. In reality, there always will be at least two arguments (benefit and cost of a plan), but usually many more, based on assessment of the multiple outcomes of the plan and actions to implement it, as well as of conditions (feasibility, availability, cost and other resources) for its implementation. The desirability assessments of different parties will be different; the argument seen as ‘pro’ by one party can be a ‘con’ argument for another, depending on the assessment of the premises. Therefore, arguments are not shown as pro or con in the diagram.

 

Figure AMSYST 1 — General diagram template: systems model in the center, with arguments about the evaluation of the proposed intervention and about the plausibility of assumptions regarding individual system elements
The diagram uses abbreviated notations, for conciseness and convenient overview, that are explained in the legend below; the legend presents some key (but by no means exhaustive) concepts of both perspectives.

*  PLAN or P Plan or proposal for a plan or plan aspects

*  R    Argument or ‘reason’. It is used both for an entire ‘pro’ or ‘con’ argument about the plan or an issue — the entire set of premises supporting the ‘conclusion’ claim (usually the plan proposal) — and for the relationship claimed, in the factual-instrumental premise, to connect the Plan with an effect, usually a goal or a negative consequence of plan implementation.
The ‘standard planning argument’ pattern prevailing in planning discourse has the general form:
D(PLAN) Plan P ought to be adopted (deontic ‘conclusion’)
because
FI (PLAN –> R –> O)|{C}    P has relationship R with outcome O, given conditions {C} (Factual-instrumental premise)
and
D(O) Outcome O ought to be pursued (Deontic premise)
and
F{C} Conditions {C} are given (true)

The relationship R is most often a causal connection, but also stands for a wide variety of relationships that constitute the basis for pro or con arguments: part-whole, identity, similarity, association, analogy, catalyst, logical implication, being a necessary or sufficient condition for, etc. In an actual application, these relationships may be distinguished and identified as appropriate.

*    O or G   Outcome or goal to be pursued by the plan, but also used for other effects including negative consequences

*    M —   the relationship of P ‘being a means’ to achieve O

*     C or {C}     The set of conditions c under which the claimed relationship M between P and O is assumed to hold

*     pl     ‘Plausibility’ judgments about the plan, arguments, and argument premises, expressed as values on a scale of +1 (completely plausible) to -1 (completely implausible), with a midpoint ‘zero’ understood as ‘so-so’ or ‘don’t know, can’t decide’; used in combination with the abbreviations for those items:
*       plPLAN or plP plausibility judgment of the PLAN,
this is some individual’s subjective judgment.
*       plM plausibility of P being effective in achieving O;
*       plO plausibility of an outcome O or goal;
*       pl{C} plausibility (probability) of conditions {C} being present;
*       plc plausibility of condition c being present;
*       plR plausibility of argument or reason R;
*       pl PLAN GROUP a group judgment of plan plausibility

*       wO weight of relative importance of outcome O ( 0 ≤ w ≤ 1; ∑w = 1)

*       WR Argument weight or weight of reason

Functions F between plausibility values:

*      F1     Group plausibility aggregation function:
plPLANGROUP = F1 (plPLANq),   for all n members q of the group

*      F2     Plan plausibility function:
plPLANq = F2 (WRi),   for all m reasons R considered by person q

*      F3     Argument weight function:
WRi = F3 (plRi) * wOi
(the weight of argument Ri is its plausibility, modified by the weight of relative importance of the outcome Oi it refers to)

*      F4     Argument plausibility function:
plRi = F4 {pl((P –> Mi –> Oi)|{Ci}), pl(Oi), pl{Ci}}
(the plausibility of argument Ri is a function of all its premise plausibility judgments)

*      F5     Condition set plausibility function:
pl{C} = F5 (plck),   for all conditions ck in the set {C}
(the plausibility of the set {C} is a function of the plausibility judgments of all conditions c in the set)

*      F6     Weight of relative importance of outcome Oi:
wOi = 1/n ∑ vOi   (the mean of the n individual judgments vOi of the relative importance of outcome Oi),
subject to 0 ≤ wOi ≤ 1 and ∑ wOi = 1

(A small computational sketch of these functions, under explicitly stated assumptions, follows at the end of this section.)

*    System S     The system S is the network of all variables describing the initial conditions c (the IS-state of the problem the plan is trying to remedy), the means M involved in implementing the plan, the desired ‘end’ conditions or goals G of the plan, and the relationships and loops between these.

The diagram does not yet show a number of additional variables that will play a role in the system: the causes of the initial conditions (which will also affect the outcome or goal conditions); the variables describing the availability, effectiveness, costs and acceptability of the means M; and the potential consequences of both M and O of the proposed plan. Clearly, these conditions and their behavior over time (both during the period needed for implementation and over the assumed planning horizon or life expectancy of the solution) will or should be given due consideration in evaluating the proposed plan.
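
(As announced above: a minimal computational sketch, in Python, of the functions F1–F6. The specific aggregation choices made here – normalized means for F6, a plain mean for F1, a ‘weakest premise’ rule for F4 and F5, a product for F3, and a sum of argument weights for F2 – are illustrative assumptions only; which functions are actually appropriate is one of the open questions the following sections keep returning to.)

    # Illustrative sketch only. Plausibility values range from -1 (completely
    # implausible) to +1 (completely plausible); outcome weights sum to 1.

    def F6_outcome_weights(importance_votes_by_outcome):
        # F6: mean of the individual importance judgments vO for each outcome,
        # normalized so that the weights of all outcomes sum to 1.
        means = {o: sum(v) / len(v) for o, v in importance_votes_by_outcome.items()}
        total = sum(means.values())
        return {o: m / total for o, m in means.items()}

    def F5_condition_set_plausibility(pl_conditions):
        # F5: plausibility of the condition set {C}; assumption: the weakest condition governs.
        return min(pl_conditions)

    def F4_argument_plausibility(pl_rel, pl_outcome, pl_conditions):
        # F4: argument plausibility as a function of all premise plausibilities (assumption: minimum).
        return min(pl_rel, pl_outcome, F5_condition_set_plausibility(pl_conditions))

    def F3_argument_weight(pl_argument, w_outcome):
        # F3: argument weight = argument plausibility times the weight of the outcome it refers to.
        return pl_argument * w_outcome

    def F2_plan_plausibility(argument_weights):
        # F2: one person's deliberated plan plausibility; assumption: sum of the signed argument weights.
        return sum(argument_weights)

    def F1_group_plan_plausibility(individual_plan_plausibilities):
        # F1: group statistic of the individual plan plausibilities (a plain mean here,
        # to be read together with measures of disagreement, as discussed later).
        return sum(individual_plan_plausibilities) / len(individual_plan_plausibilities)

Under these assumptions, a person who judges the three premises of a single pro argument at 0.8, 0.9 and 0.7, with the referenced outcome carrying a weight of 0.6, arrives at an argument plausibility of 0.7 and an argument weight of 0.42.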


Towards adding argumentation information to systems maps and systems complexity to argument maps.

This brief exploration assumes that discussions, as well as any systems analysis and modeling, are essentially part of human efforts to deal with some problem, to achieve some change of conditions in a situation — a change that is expected to be different from how that situation would exist or change on its own, without a planning intervention.

1       Adding questions and arguments to systems diagrams.

Focusing on a single component of a typical systems diagram: two elements (variables)
A and B are linked by a connection / relationship R(AB) :

A ———R———> B

For convenience, these elements are listed vertically in the following, to allow adding the questions people might ask about them and about whose possible answers they might hold different opinions. (A small data-structure sketch of how such questions could be attached to diagram elements follows after the list.)

A What is A?
|           What is the current value (description) of A? (at time i)
|           How will A change (e.g. what will the value of A be at time i+j)?
|           What causes / caused A?
|           Should changing A be a part of a policy / plan?
|                  If so: What action steps S (Sequence? Times? Actors?) and
|                            What Means / resources M will be needed?
|             Are the means actors etc. available? Able? Willing?
|             What will be the consequences KA of changing A?
|            Who would be affected by KA? In what way?
|             Is consequence KAj desirable? Undesirable?
|           Q: Is A the appropriate concept for the problem at hand?
|               (and the questions about A the appropriate questions?)
|
R(AB)   What is the relationship R(AB)?
|            What is the direction of R?
|            Should there be a relation R(AB)?
|            What is the (current) rate of R? (Other parameters? E.g. strength)?
|            What should the rate of R be?
|
B          What is B?
.            What is the current state / value of B?
.            Should B be the aim / goal G of a policy / plan?
.             Are there other (alternative) means for attaining B?
.            What should be the desired state / value of B? (At what time?)
.             What factors (other than A) are influencing B?
.            What would be the consequences K of attaining G?
.            Who would be affected by K? In what way?
.            Is consequence KBj desirable? Undesirable?
.            Q: Is B the appropriate concept for the problem at hand?
.            (and the questions about B the appropriate questions?)
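
(A minimal sketch, in Python, of how such questions could be attached to the elements of a systems diagram as data; the class names and fields – Element, Relationship, Issue – are invented for illustration and do not refer to any existing tool.)

    from dataclasses import dataclass, field
    from typing import List

    @dataclass
    class Issue:
        # A question raised about an element or link, with the differing answers offered so far.
        question: str
        answers: List[str] = field(default_factory=list)

    @dataclass
    class Element:
        # A variable (node) of the systems diagram, e.g. A or B, with its attached issues.
        name: str
        issues: List[Issue] = field(default_factory=list)

    @dataclass
    class Relationship:
        # A link R(A,B) of the diagram, also carrying its own issues.
        source: Element
        target: Element
        issues: List[Issue] = field(default_factory=list)

    # Example: the link A --R--> B with some of the questions listed above attached.
    A = Element("A", issues=[Issue("What is the current value of A?"),
                             Issue("Should changing A be part of the policy / plan?")])
    B = Element("B", issues=[Issue("Should B be the aim / goal G of the policy / plan?")])
    R_AB = Relationship(A, B, issues=[Issue("What is the direction of R(A,B)?"),
                                      Issue("What should the rate of R(A,B) be?")])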

Most systems models and diagrams do not show such questions and arguments – it is my impression that they either assume that differences of opinion about the underlying assumptions have been ‘settled’ in the most recent version of the model, or that the modeler’s understanding of those assumptions is the best or valid one (on the authority of having constructed the model?). They thereby arguably discourage discussion. They also do not easily accommodate the complete description of plans or policies: they adopt a kind of ‘refraining from committing to solutions’ attitude of just ‘objectively’ conveying the simulated consequences of different policies, while limiting the range of policy or plan options by omitting the aspects addressed by the questions and arguments above.

2             Adding systems complexity information to argument maps

Typically, the planning discourse will consist of a growing set of ‘pro’ and ‘con’ arguments about plan proposals; any decision should be based on ‘due consideration’ of all these arguments. In the common practice of discussion (even in carefully structured participatory events) the individual typical planning argument can be represented as follows:
“Plan P ought to be adopted and implemented
because
Implementing the plan P will have relationship R with (e.g. will lead to) consequence K, given conditions C
and
Consequence K ought to be pursued (is a goal G)
and
Conditions C are present.”

This argument, in which several premises have already been added that in reality are often omitted as ‘taken for granted’, can be represented in more concise formal ways, for example as follows:

D(P)                           (Deontic claim: conclusion, proposal to be supported)
Because
FI((P –R—>K)|C)    (Factual-instrumental premise)
and
D(K)                           (Deontic premise)
and
F(C)                            (Factual premise)
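
(A minimal sketch, in Python and with invented names and example content, of how such a formalized planning argument could be stored in a structured discourse file; the three premise fields follow the pattern above, and the ‘conditions’ field is the one the following paragraphs argue should point into a systems model.)

    from dataclasses import dataclass, field
    from typing import List, Optional

    @dataclass
    class PlanningArgument:
        plan: str                 # D(P): the proposal the argument supports or opposes
        relationship: str         # R in FI((P -R-> K)|C), e.g. "leads to", "is a means for"
        consequence: str          # K
        consequence_ought: bool   # D(K): should K be pursued (True) or avoided (False)?
        conditions: List[str] = field(default_factory=list)  # F(C): conditions claimed to hold
        author: Optional[str] = None

    # Hypothetical example, invented for illustration:
    arg = PlanningArgument(
        plan="Rebuild the Main Street drainage",
        relationship="leads to",
        consequence="reduced downtown flooding",
        consequence_ought=True,
        conditions=["rainfall intensity stays within the design range",
                    "the maintenance budget is actually funded"])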

The argumentative process, in the view of Rittel’s ‘Argumentative Model of Planning’, consists of asking questions (in the case of controversial questions, ‘raising issues’) for the purpose of clarifying, challenging or supporting the various premises. This serves to increase participants’ understanding of the situation and its complexity, which, from the point of view of the ‘Systems Perspective’, may be only crudely, qualitatively and thus inadequately represented in the arguments of a ‘live’ discussion. Some potential questions for the above premises are the following:

D(P)         Description, explanation of the plan and its details:
Problem addressed?
Current condition / situation?
Causes, necessary conditions for problem to exist, contributing factors?
Aims / goals?
Available means?
Other possible means of addressing problem?
Q: wrong question: wrong way of looking at the problem?
Implementation details? Steps, actions? Sequence?

Actors / responsibilities?
Means and resources needed? Availability? Costs?

FI((P –R–>K)|C) :   Does the relationship hold? Currently? Future?

R(P,K)      Explanation: Type of relationship?

(Causal, analogy, part-whole, logical implication…)
Existence and direction of relationship? Reverse? Spurious?
Strength of relationship?
Conditions under which the relationship can exist / function?

D(K)       Should consequence K be pursued?
Explanation / description of K: details?
What other factors (than the provisions of plan P) affect / influence K?
Other (alternative) means of achieving K?

F(C)         Are the conditions C (under which relationship R holds) present?
Will they be present in future?
What are the conditions C?
What factors (other than those activated by plan P) affect / influence C?
If conditions C are NOT reliably present,
what provisions must be made to secure them? (Plan additions?)

These questions (which arguably should be better accommodated in systems diagrams) can be taken up and addressed in the normal discussion process. Their sequence and orderly treatment, and especially the provision of adequate overview, could be significantly improved by better representation of the variety and complexity of the additional elements introduced by the questions raised.

This is especially true with respect to the question about Conditions C under which the claimed relationship R is assumed to hold. A more careful examination of this question (i.e. more careful than the common qualification ‘everything else being equal’: what IS that ‘everything else’ – and IS it ‘equal’?) will reveal that there are many conditions, and that they are interrelated in different, complex ways, with behaviors over time that we have trouble fully understanding. In other words, they constitute a ‘systems network’ of elements, factors and relationships including positive and negative feedback loops – precisely the kind of network shown in systems diagrams.

Thus, it must be argued that in order to live up to the sensible principle that decisions to adopt or reject plans should be made on the basis of due consideration (i.e. understanding) of all the pro and con arguments, the assessment of those arguments should include adequate understanding of the systems networks referred to in all the pro and con arguments.

3          Conclusion

The implication of the above considerations is, I think, fairly clear: neither does common practice of systems modeling or diagramming adequately accommodate questions and arguments about model assumptions, nor do common representations (issue and argument maps) of the argumentative discourse adequately accommodate systems complexity. Which means that the task of developing better means of meeting that requirement is quite urgent; the development of effective global discourse support platforms for addressing the global crises we are facing will depend on acceptable solutions for this question. But this is still a vague goal: I have not seen anything in the way of specific means of achieving it yet. Work to do.


A Less Adversarial Planning Discourse Support System

A Fog Island Tavern conversation
about defusing the adversarial aspect of the Argumentative Model of Planning

Thorbjoern Mann 2015

(The Fog Island Tavern: a figment of imagination
of a congenial venue for civilized conversations
about issues, plans and policies of public interest)

– Hi Vodçek, how’s the Tavern life this morning? Fog lifting yet?
– Hello Bog-Hubert, good to see you. Coffee?
– Sure, the usual, thanks. What’s with those happy guys over there — they must be drinking something else already; I’ve never seen them having such a good time here?
– No, they are just having coffee too. But you should have seen their glum faces just a while ago.
– What happened?
– Well, they were talking about the ideas of our friend up at the university, about this planning discourse platform he’s proposing. They were bickering about whether the underlying perspective — the argumentative model of planning — should be used for that, or some other theory, systems thinking or pattern language approaches. You should have been there, isn’t that one of your pet topics too?
– Yes, sorry I missed it. Did they get anywhere with that? What specifically did they argue about?
– It was about those ambitious claims they are all making, about their approach being the best foundation for developing tools to tackle those global wicked problems we are all facing. They feel that those claims are, well, a little exaggerated, while accusing each other’s pet approach of being far from as effective and universally applicable as they think. Each one missing just the main concerns the other feels are the most important features of their tool. And lamenting the fact that neither one seems to be as widely accepted and used as they think it deserves.
– Did they have any ideas why that might be?
– One main point seemed to be the mutual blind spot that the Argumentative Model, besides being too ‘rational’ and argumentative for some people, and not acknowledging emotions and feelings, did not accommodate the complexity and holistic perspective of systems modeling (in the view of the systems guys), while the systems models did not seem to have any room for disagreements and argumentation, from the point of view of your argumentative friends.
– Right. I am familiar with those complaints. I don’t think they are all justified, but the perceptions that they are need to be addressed. We’ve been working on that.
– Good. Another main issue they were all complaining about — both sides — was that there currently isn’t a workable platform for the planning discourse, even with all the cool technology we now have. And therefore some people were calling for a return to simple tools that can be used in actual meeting places where everybody can come and discuss problems, plans, issues, policies. The ‘design tavern’ that Abbé Boulah kept talking about, remember?
– Yes. It seemed like a good idea, but only for small communities that can meet and interact meaningfully in ‘town hall’- kind places. Like his Rigatopia thing, as long as that community stays small enough.
– Well, they seemed to get stuck in gloom about that issue for a while, couldn’t decide which way to go, and lamenting the state of technology for both sides. That’s when Abbé Boulah showed up for a while, and turned things around.
– How did he do that?
– He just reminded them of the incredible progress the computing and communication technology has seen in the last few decades, and suggested that they might think about how that progress might have been focused on the wrong problems, or simply not getting around to the real task of their topic — planning discourse support — yet. Told them to explore some opportunities of the technology – possibilities already realized by tools already on the market or just as feasible but not yet produced. He bought them a round of his favorite Slovenian firewater and told them to brainstorm crazy ideas for new inventions for that cause, to be applied first in his Rigatopia community experiment on that abandoned oil rig. That’s what set them off. Boy, they are still having fun doing that.
– Did they actually come up with some useful concepts?
– Useful? Don’t know about that. But there were some wild and interesting ideas I heard them toss around. Strangely, most of them seemed to be about tech gizmos. They seem to think that the technical problem of global communication is just about solved — messages and information can be exchanged instantaneously all over the world — and that concepts like Rittel’s IBIS provide an appropriate basis for organizing, storing and retrieving that information, and that the missing things have to do with the representation, display, and processing of the contributions for decision-making: analysis and evaluation.
– Do you have an example of ideas they discussed?
– Plenty. For the display issue, there was the invention of the solar-powered ‘Googleglass-Sombrero’ — taking the Google glass idea a step further by moving the internet-connected display farther away from the eye, to the rim of a wide sombrero, so that several display maps can be seen and scanned side by side, not sequentially. Overview, see? Which we know today’s cell-phones or tablets don’t do so well. There was the abominable ‘Rollupyersleeve-watch’. It is actually a smartphone, but would have an expandable screen that can be rolled up to your elbow so you can see several maps simultaneously. Others were still obsessed with making real places for people to actually meet and discuss issues, where the overall discourse information is displayed on the walls, and where they would be able to insert their own comments to be instantly added and the display updated. ‘Democracy bars’, in the tradition of the venerable sports bars. Fitted with ‘insect-eye’ projectors to simultaneously project many maps on the walls of the place, with comments added on their own individual devices and uploaded to the central system.
– Abbé Boulah’s ‘Design Tavern’ brought into the 21st IT age. Okay!
– Yes, that one was immediately grabbed by the corporate – economy folks: Supermarkets offering such displays in the cafe sections, with advertisement, as added P/A attractions…
– Inevitable, I guess. Raises some questions about possible interference with the content?
– Yes, of course. Somebody suggested a version of the old equal-time rule: that any such ad had to be immediately accompanied by a counter-ad of some kind, to ‘count’ as a P/A message.
– Hmm. I’d see a lot of fruitless lawsuits coming up about that.
– Even the evaluation function generated its innovative gizmos: There was a proposal for a pen (for typing comments) with a sliding up-down button that instantly lets you send your plausibility assessment of proposed plans or claims. It was instantly countered by another idea, of equipping smartphones with a second ‘selfie-camera’ that would read and interpret your facial expressions when reading a comment or argument: not only nodding for agreement or shaking your head to signal disagreement, but also raised eyebrows, frowns, smiles, confusion, instantly sent to the system as an instant opinion poll. That system would then compute the assessment level of the entire group of participants in a discussion, and send it back to the person who made a comment, suggesting more evidence, or better justification etc.
– Yes, there are some such possibilities that a kind of ‘expert system’ component could provide: not only doing some web research on the issues discussed, but actually taking part in the discussion, as it were. For example, didn’t we discuss the idea of such a system scanning both the record of discussion contributions and the web, for example for similar cases? I remember Abbé Boulah explaining how a ‘research service’ of such a system could scan the data base for pertinent claims and put them together into pro and con arguments the participants hadn’t even thought of yet. Plus, of course, suggesting candidate questions about those claims that should be answered, or for which support and evidence should be provided, so people could make better-informed assessments of their plausibility.
– I’m glad you said ‘people’ making such assessments. Because contrary to the visions of some Artificial Intelligence enthusiasts, I don’t think machines, or the system, should be involved in the evaluation part.
– Hey, all their prowess in drawing logical conclusions from data and stored claims should be kept from making valuable contributions: are you a closet retro-post-neoluddite? Of course I agree: especially regarding the ought-claims of the planning arguments, the system has no business making judgments. But the system would be ‘involved’, wouldn’t it? Processing and calculation of participants’ evaluation results? In taking the plausibility and importance judgments, and calculating the resulting argument plausibility, argument weights, and conclusion plausibility, as well as the statistics of those judgments for the entire group of participants?
– You are right. But those results should always just be displayed for people to make their own final judgments in the end, wasn’t that the agreement? Those calculation results should never be used as the final decision criterion?
– Yes, we always emphasized that; but in a practical situation it’s a fine balancing act. Just like decision-makers were always tempted to use some arbitrary performance measure as the final decision criterion, just because it was calculated from a bunch of data, and the techies said it was ‘optimized’. But hey, we’re getting into a different subject here, aren’t we: How to put all those tools and techniques into a meaningful design for the platform, and a corresponding process?
– Good point. Work to do. Do you think we’re ready to sketch out a first draft blueprint of that platform, even if it would need tools that still have to be developed and tested?
– Worth a try, even if all we learn is where there are still holes in the story. Hey guys, why don’t you come over here, let’s see if we can use your ideas to make a whole workable system out of it: a better Planning Discourse Support System?
– Hi Bog-Hubert. Okay, if you feel that we’ve got enough material lined up now?
– We’ll see. How should we start? Does your Robert’s Rules expert have any ideas? Commissioner?
– Well, thanks for the confidence. Yes, I do think it would be smart to use the old parliamentary process as a skeleton for the process, if only because it’s fairly familiar to most folks living in countries with something like a parliament. Going through the steps from raising an issue to a final decision, to see what system components might be needed to support each of those steps along the way, and then adding what we feel are missing parts.
– Sounds good. As long as Vodçek keeps his bar stocked, we can always go back to square one and start over if we get stuck. So how does it start?
– I think there are several possible starting points: Somebody could just complain about a problem, or already make a proposal for how to deal with it, part of a plan. Or just raise a question that’s part of those.
– Could it just be some routine agency report, monitoring an ongoing process, — people may just accept it as okay, no special action needed, or decide that something should be done to improve its function?
– Yes, the process could start with any of those. Can we call it a ‘case’, as a catchall label, for now? But whatever the label, there needs to be a forum, a place, a medium to alert people that there is a candidate case for starting the process. A ‘potential case candidate listing’, for information. Anybody who feels there is a need to do something could post such a potential case. It may be something a regular agency is already working on or should address by law or custom. But as soon as somebody else picks it up as something out of the ordinary, significant enough to warrant a public discussion, the system will ‘open’ the case, which means establishing a forum corner, a venue or ‘site’ for its discussion, and inviting public contributions to that discussion.
– Yeah, and it will get swamped immediately with all kinds of silly and irrelevant posts. How does the system deal with that? Trolls, blowhards, just people out to throw sticks into the wheels?
– Good question. The problem is how to sort out the irrelevant stuff — but who is to decide what’s what? And throw out what’s irrelevant?
– Yes, that itself could lead to irrelevant and distracting quarrels. I think it’s necessary to have a first file where everything is kept in its original form, a ‘Verbatim’ depository, for reference. And deal with the decision about what’s relevant by other means, for example the process of assessment of the merit of contributions. First, everybody who makes a contribution will get a kind of ‘basic contribution credit point’, a kind of ‘present’ score, which is initially just ‘empty’. If it’s the first item of some significance for the discussion, it will get filled with an adjustable but still neutral score — mere repetitions will stay ‘noted’ but empty.
– Good idea! This will be an incentive to post significant information fast, and it will keep people from filling the system with the same stuff over and over.
– Yes. But then you need some sorting out of all that material, won’t you?
– True. You might consider that as part of an analysis service, determining whether a post contains claims that are ‘pertinent’ to the case. It may just consist of matching a term — a ‘topic’ or subject — that’s part of the initial case description, or that provides a link to a subsequent contribution already posted. Each term or topic is now listed as the content subject of a number of possible questions or issues — the ‘potential issue family’ of factual, explanatory, instrumental, and deontic (ought-) questions that can be raised about the concept. This can be done according to the standard structure of an IBIS (issue based information system), a ‘structured’ or formalized file that consists of the specific questions and the respective answers and arguments to those. Of course somebody or something must be doing this — an ‘Analysis’ or ‘Formalizing’ component — either some human staff, or an automated system which needs to be developed. Ideally, the participants will learn to do this structuring or formalizing themselves, to make sure the formalized version expresses their real intent.
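
(To make the distinction between the ‘Verbatim’ depository and the ‘structured’ IBIS-style file more concrete: a minimal sketch in Python. The four question types and the pointer from each formalized claim back to the verbatim entry come from the description above; the names and the sample case are invented.)

    from dataclasses import dataclass, field
    from typing import List

    QUESTION_TYPES = ("factual", "explanatory", "instrumental", "deontic")

    @dataclass
    class Claim:
        author: str
        text: str            # the formalized answer or argument premise
        verbatim_ref: int    # index of the untouched original entry in the Verbatim file

    @dataclass
    class IbisIssue:
        topic: str           # the term or subject the issue belongs to
        qtype: str           # one of QUESTION_TYPES
        question: str
        answers: List[Claim] = field(default_factory=list)

    @dataclass
    class Case:
        title: str
        verbatim: List[str] = field(default_factory=list)      # everything as originally posted
        issues: List[IbisIssue] = field(default_factory=list)  # the structured / formalized file

    # Example: opening a case and formalizing one contribution into an issue plus answer.
    case = Case("Downtown flooding")
    case.verbatim.append("Somebody should finally fix the drainage on Main Street!")
    case.issues.append(IbisIssue("drainage", "deontic",
                                 "Ought the Main Street drainage be rebuilt as part of the plan?",
                                 [Claim("participant 7", "Rebuild the drainage", verbatim_ref=0)]))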
– And that ‘structured’ file will be accessible to everybody, as well as the ‘verbatim’ file?
– Yes. Both should be publicly accessible as a matter of principle. But access ‘in principle’ is not yet very useful. Such files aren’t very informative or interesting to use. Most importantly, they don’t provide the overview of the discussion and of the relationship between the issues. This is where the provision and rapid updating of discourse maps becomes important. There should be maps of different levels of detail: topic maps, just showing the general topics and their relationships, issue maps that provide the connections between the issues, and argument maps that show the answers or arguments for a specific issue, with the individual premises and their connections to the issues raised by each premise.
– So what do we have now: a support system with several storage and display files, and the service components to shuffle and sort the material into the proper slots. Al, I see you drew a little diagram there?
– Yes – I have to doodle all this in visual form to understand it:

Figure 1 — The main discourse support system: basic content components

– Looks about right, for a start. You agree, Sophie?
– Yes, but it doesn’t look that much different from the argumentative or IBIS type system we know and started from. What happened to the concern about the adversarial flavor of this kind of system? Weren’t we trying to defuse that? But how? Get rid of arguments?
– Well, I don’t think you can prevent people from entering arguments — pros and cons about proposed plans or claims. Every plan has ‘pros’ – the benefits or desirable results it tries to produce – and ‘cons’, its costs, and any undesirable side-and after-effects. And I don’t think anybody can seriously deny that they must be brought up, to be considered and discussed. So they must be acknowledged and accommodated, don’t you think?
– Yes. And the evaluation of pro and con merit of plan proposals, based on the approach we’ve been able to develop so far, will depend on establishing some argument plausibility and argument weight.
– I agree. But isn’t there a way in which the adversarial flavor can be diminished, defused?
– Let’s see. I think there are several ways that can be done. First, in the way the material is presented. For example, the basic topic maps don’t show content as adversarial, and the issue maps can de-emphasize the underlying pro-and-con partisanship, if any, by the way the issues are phrased. Whether argument maps should be shown with complete pro and con arguments is a matter of discussion, perhaps best dealt with in each specific case by the participants. This applies most importantly to the way the entire discourse is framed, and the ‘system’ could suggest forms of framing that avoid the expectation of an adversarial win-lose outcome. If a plan is introduced as a ‘take-it-or-leave-it’ proposal to be approved or rejected, inevitably some participants can see themselves as the intended or unintended losing party, which generates the adversarial attitudes. Instead, if the discourse is started as an invitation to contribute to the generation of a plan that avoids placing the costs or disadvantages unfairly on some affected folks, and the process explicitly includes the expectation of plan modification and improvement, that attitude will be different.
– So the participants in this kind of process will have to get some kind of manual of proper or suggested behavior, is that right? How to express their ideas?
– I guess that would be helpful. Suggestions, yes, not rules, if possible.
– Also, if I understand the evaluation ideas right, the reward system for contributions can include giving people points for information items that aren’t clearly supporting one party or the other, so individual participants can ‘gain’ by offering information that might benefit ‘the other’ party, would that help to generate a more cooperative attitude?
– Good point. Before we get to the evaluation part though, there is another aspect — one of the ‘approach shortcomings’, that I think we need to address.
– Right, I’ve been waiting for that: the systems modeling question. How to represent complex relationships of systems models in the displays presented to the participants? Is that what you are referring to?
– Yes indeed.
– So do you have any suggestions for that? It seems that it is so difficult — or so far off the argumentative planners’ radar – that it hasn’t been discussed or even acknowledged let alone solved yet?
– Sure, it almost looks like a kind of blind spot. I think there are two ways this might, or should be, dealt with. One is that the system’s research component — here I mean the discourse support system — can have a service that searches the appropriate databases to find information about similar cases, where systems models may have been developed, and adds the systems descriptions, equations and diagrams — most importantly, the diagrams — to the structured file and the map displays. In the structured file, questions about the model assumptions and data can then be added — this was the element that is usually missing in systems diagrams. But the diagrams themselves do offer a different and important way for participants to gain the needed overview of the problem they are dealing with.
– So far, so good. Usually, the argumentative discussion and the systems models speak different languages, have different perspectives, with different vocabularies. What can we do about that?
– I was coming to that — it was the second way I mentioned. But the first step, remember, is that the systems diagrams are now becoming part of the discussion, and any different vocabulary can be questioned and clarified together with the assumptions of the model. That’s looking at it from the systems side. The other entry, from the argumentative side, can be seen when we take a closer look at specific arguments. The typical planning argument is usually only stated incompletely — just like other arguments. It leaves out premises the arguer feels can be ‘taken for granted’. A more completely stated planning argument would spell out these three premises of the ‘conclusion-claim’, that
‘Proposal or Plan P should be adopted,
          because
          P will lead to consequence or result R (given conditions C)
           and
          Result R ought to be pursued
          (and
           conditions C are present)’.

The premise in parenthesis, about conditions C, is the one that’s most often not spelled out, or just swept under the rug with phrases such as ‘all else being equal’. But take a closer look at that premise. Those conditions — the ones under which the relationship between P and R can be expected to hold or come true — refer to the set of variables we might see in a systems diagram, interacting in a number of relationship loops. It’s the loops that make the set a true system, in the minds of the systems thinkers.
– Okay, so what?
– What this suggests is, again, a twofold recommendation, that the ‘system’ (the discourse system) should offer as nudges or suggestions for the participants to explore.
– Not rules, I hope?
– No: suggestions and incentives. The first is to use existing or proposed system diagrams as possible sources for aspects — or argument premises — to study and include in the set of concerns that should be given ‘due consideration’ in a decision about the case. In other words, turn them into arguments. Of the defused kind, Sophie. The second ‘nudge’ is that the concerns expressed in the arguments or questions by people affected by the problem at hand, or by proposed solutions, should be used as material for the very construction of the model of the problem situation by the system modeler for the case at hand.
– Right. For the folks who are constructing systems models for the case at hand.
– Yes, that would likely be part of the support system service, but there might be other participants getting involved in it too.
– I see: Reminders: as in ‘do you think this premise refers to a variable that should be entered into the systems model?’
– Good suggestion. This means that the construction of the system model is a process accompanying the discourse. One cannot precede the other without remaining incomplete. It also requires a constant ‘service’ of translation between the ordinary language of the discourse and any disciplinary jargon of the systems model — the ‘systems’ vocabulary as well as the special vocabulary of the discipline within which the studied system is located. And of course, translation between different natural languages, as needed. For now, let’s assume that would be one of the tasks of the ‘sorting’ department; we should have mentioned that earlier.
– Oh boy. All this could complicate things in that discourse.
– Sure — but only to the extent that there are concepts that need to be translated, and aspects that are significantly different as seen from ordinary ‘argumentative’ or ‘parliamentary’ planning discussion perspective as opposed to a systems perspective, don’t you agree?
– So let’s see: now we have some additional components in your discourse support system: the argument analysis component, the systems modeling component, the different translation desks, and the mapping and display component. What’s next?
– That would be the evaluation function. From what we know about evaluation, in this case evaluating the merit of discussion contributions, the process of clarifying, testing and improving our initial offhand judgments into more solidly founded, deliberated judgments requires that we make the deliberated overall judgments a function of – that is, dependent on – the many ‘partial’ judgments provided in the discussion and in the models. And we have talked about the need for a better connection between the discourse contribution merit and the decision judgment. This is the purpose of the discourse, after all, right?
– Yes. And the reason we think there needs to be a distinct ‘evaluation’ step or function is that quite often, the link between the merit of discussion contributions and the decision is too weak, perhaps short-circuited, prejudiced, or influenced by ‘hidden agenda’ — improper, illicit agenda considerations, and needs to be more systematic and transparent. In other words, the decisions should be more ‘accountable’.
– That’s quite a project. Especially the ‘accountability’ part — perhaps we should keep that one separate to begin with? Let’s just start with the transparency aspect?
– Hmm. You don’t seem too optimistic about accountability? But without that, what use is transparency? If decision makers, whoever they might be in a specific case, don’t have to be accountable for their decision, does it matter how transparent they are? But okay, let’s take it one item at a time.
– Seems prudent and practical. Can you provide some detail about that evaluation process?
– Let me see. We ask the participants in the process to express their judgments about various concepts in the process, on some agreed-upon scale. The evaluation approach of our friend suggests a plausibility scale. It applies to judgments about how certain we are that a claim is true, or how probable it is — or how plausible it is, if neither truth nor probability really apply, as in ought-claims. It ranges from some positive number to a corresponding negative one, agreed to mean ‘couldn’t be more plausible’ and ‘couldn’t be less plausible’, respectively, with a midpoint of zero expressing ‘don’t know’, ‘can’t judge’.
– What about those ‘ought’ claims in the planning argument? ‘Just ‘plausible’ doesn’t really express the ‘weighing’ aspect we are talking about?
– Right: for ought-claims — goals, objectives — there must be a preference ranking or a scale expressing weight of relative importance. The evaluation ‘service’ system component will prepare some kind of form or instrument people can use to express and enter those judgments. This is an important step where I think the adversarial factor can be defused to some extent: if argument premises are presented for evaluation individually, not as part of the arguments in which they may have been entered originally, and without showing who was the original author of a claim, can we expect people to evaluate them more according to their intrinsic merit and evidence support, and less according to how they bolster this or that adversarial party?
– I’d say it would require some experiments to find out.
– Okay: put that on the agenda for next steps.
– Can you explain how the evaluation process would continue?
– Sure. First let me say that the process should ideally include assessment during all phases of the process. If there is a proposal for a plan or a plan detail, for example, participants should assign a first ‘offhand’ overall plausibility score to it. That score can then be compared to the final ‘deliberated’ judgment, as an indicator of whether the discussion has achieved a more informed judgment, and what difference that made. Now, for the details of the process. To get an overall deliberated plausibility judgment, people only need to provide plausibility scores and importance weights for the individual premises of the pro and con planning arguments. For each individual participant, the ‘system’ can now calculate the argument plausibility and the argument weight of each argument, based on the weight the person has assigned to its deontic premise, and the person’s deliberated proposal plausibility, as a function of all the argument weights.
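
(A small worked sketch of the calculation just described, reusing the illustrative assumptions from the template section earlier: weakest-premise rule for argument plausibility, product with the outcome weight for the argument weight, sum over arguments for the deliberated plan plausibility. The numbers are invented.)

    # One participant's judgments for two arguments about plan P (values on the -1..+1 scale).
    # pl_fi: factual-instrumental premise; pl_ought: deontic premise; pl_cond: conditions C;
    # w_outcome: weight of relative importance assigned to the argument's outcome.
    arguments = [
        {"pl_fi": 0.8, "pl_ought": 0.9,  "pl_cond": 0.7, "w_outcome": 0.6},  # leans 'pro'
        {"pl_fi": 0.6, "pl_ought": -0.8, "pl_cond": 0.9, "w_outcome": 0.4},  # leans 'con'
    ]

    plan_plausibility = 0.0
    for a in arguments:
        pl_argument = min(a["pl_fi"], a["pl_ought"], a["pl_cond"])  # assumed: weakest premise
        weight = pl_argument * a["w_outcome"]                       # assumed: plausibility * outcome weight
        plan_plausibility += weight                                 # assumed: sum of argument weights

    print(round(plan_plausibility, 2))   # 0.1 for these invented numbers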
– I seem to remember that there were some questions about how all those judgments should be assembled and aggregated into the next deliberated value?
– Yes, there should be some more discussion and experiments about that. But I think those are mostly technical details that are solved in principle, and can be decided upon by the participants to fit the case.
– And the results are then posted or displayed to the group for review?
– Yes. This may lead to more questions and discussion, of course, or to requests for more research and discussion, if there are claims that don’t seem to have enough support to make reasonable assessments, or for which the evidence is disputed. I see you are getting worried, Sophie: will this go on forever? There’s a kind of stopping rule: when there are no more questions or arguments, the process can stop and proceed to the decision phase.
– I think the old parliamentary tradition of ‘calling the question’ when the talking has gone on for too long should be kept in this system.
– Sure, but remember, that one was needed mainly because there was no other filter for endless repetition of the same points wrapped in different rhetoric. The rule of adding the same point only once into the set of claims to be evaluated will put a damper on that, don’t you think?
– So Al, did you add the evaluation steps to your diagram?
– Yes. Here’s what it looks like now:

Figure 2 — The discourse support system with added evaluation components

– Here is another suggestion we might want to test, and add to the picture – coming back to the idea of the reward system helping to reduce the adversarial aspect: We now have some real measures — not only for the individual claims or information items that make up the answers and arguments to questions, but also for the plausibility of plan proposals that are derived from those judgments. So we can use those as part of a reward mechanism to get participants more interested in working out a final solution and decision that is more acceptable to all parties, not just to ‘win’ advantages for their ‘own side’.
– You have to explain that, Bog-Hubert.
– Sure. Remember the contribution credit points that were given to everybody, for making a contribution, to encourage participation? Okay: in the process of plausibility and importance assessment we were asking people to do, to deliberate their own judgments more carefully, they were assessing the plausibility and weight of relative importance of those contributions, weren’t they? So if we now take some meaningful group statistic of those assessments, we can modify those initial credits by the value or merit the entire group was assigning to a given item.
– ‘Meaningful’ statistic? What are you saying here? You mean, not just the average or weighted average?
– No, some indicator that also takes account of the degree of support presented for a claim, and the degree of agreement or disagreement in the group. That needs to be discussed. In this way, participants will build up their ‘contribution merit credit account’. You could then also earn merit credits for information that — from a narrow partisan point of view — would be part of an argument for ‘the other side’ — credit for information that serves the whole group.
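
(A minimal sketch of how such a ‘contribution merit credit account’ update could work, given the group’s plausibility assessments of each contribution. The particular statistic used here – the group mean, damped by the spread of the judgments – is exactly the kind of choice the conversation says still needs to be discussed and tested.)

    from statistics import mean, pstdev

    def group_merit(judgments):
        # Group assessment of one contribution: mean plausibility, damped by disagreement.
        # (Illustrative assumption; other statistics are possible and should be tested.)
        return mean(judgments) * (1 - min(1.0, pstdev(judgments)))

    def update_credit(account, base_credit, judgments):
        # Modify the initially 'empty' (neutral) contribution credit by the group's merit score;
        # negatively judged contributions reduce the account, well-supported ones increase it.
        return account + base_credit * group_merit(judgments)

    # Example: one participant's account after two contributions are assessed by the group.
    account = 0.0
    account = update_credit(account, base_credit=1.0, judgments=[0.8, 0.7, 0.9, 0.6])  # useful item
    account = update_credit(account, base_credit=1.0, judgments=[-0.6, -0.8, -0.5])    # dubious item
    print(round(account, 2))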
– Ha! Now I understand what you said initially about the evaluation function also serving to reduce the amount of trivial, untrue, and plain irrelevant stuff people might post in such discussions: if their information is judged negatively on the plausibility scale, that will reduce their credit accounts. A way to reward good information that can be well supported, and to discourage BS and false information… I like that.
– Good. In addition to that, people could also get credit points for the quality of the final solution — assuming that the discourse includes efforts to modify initial proposals some people find troublesome, to become more acceptable — more ‘plausible’ — to all parties. And the credit you earn may be in part determined by your own contribution to that result. So there are some possibilities for such a system to encourage more constructive cooperation.
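(One conceivable way, again only an assumption for discussion, to tie part of the reward to the quality of the adopted plan and to each person’s accumulated contribution merit:)

    # Sketch: a bonus pool proportional to the final group plausibility of the adopted
    # plan, shared in proportion to each participant's accumulated contribution merit.
    # The pool size and the sharing rule are assumptions for discussion.
    def solution_quality_bonus(final_plausibility, merit_accounts, pool=100.0):
        total_merit = sum(merit_accounts.values())
        if total_merit <= 0:
            return {name: 0.0 for name in merit_accounts}
        bonus_pool = max(final_plausibility, 0.0) * pool
        return {name: bonus_pool * merit / total_merit
                for name, merit in merit_accounts.items()}

    print(solution_quality_bonus(0.6, {'Sophie': 5.0, 'Al': 3.0, 'Vodçek': 2.0}))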
– Sounds good. As you said, we should try to do some research to see whether this would work, and how the reward system should be calibrated.
– So the reward mechanism adds another couple of components to your diagram, Al?
– Yes. Bog-Hubert said that the evaluation process should really be going on throughout the entire process, so the diagram that shows it just after the main evaluation of the plan is completed is a little misleading. I tried to keep it simple. And there’s really just one component that will have to keep track of the different steps:

 

Figure 3 — The process with added contribution reward component

 

– Looks good, thanks, Al. But what I don’t see there yet is how it connects with the final decision. I think you got derailed from finishing your explanation of the evaluation process, Bog-Hubert?
– Huh? What did I miss?
– You explained how each participant got a deliberated proposal plausibility score. Presumably one that’s expressed on the same plausibility scale as the initial premise plausibility judgments, so we can understand what the number means. Okay. Then what? How do you get from that to a common decision by the entire community of participants?
– You are right; I didn’t get to that. Well…
– Why doesn’t the system calculate an overall group proposal plausibility score from the individual scores?
– I guess there are some problems with that step, Vodçek, if you mean something like the average plausibility score. Are you saying that it should be the deciding criterion?
– Well… why not? It’s like all those opinion polls, only better, isn’t it? And definitely better than just voting?
– No, friends, I don’t think the judgment about the final decision should be ‘usurped’ by such a score. For one, unless there are several proposals that have all been evaluated in this way, so you could say ‘pick the one with the highest group plausibility score’, you’d have to agree on a kind of threshold plausibility a solution would have to achieve to get accepted. And that would just be another controversial issue. Also, a simple group average could gloss over and hide serious differences of opinion, and, like majority voting, just override the concerns of minority groups. So such statistics should always be accompanied by measures of the degree of consensus and disagreement, at the very least.
– Couldn’t there be a rule that a proposal is acceptable if all the individual final plan plausibility scores are better than those for the existing problem situation? Ideally, of course, all on the positive side of the plausibility scale, but in a pinch at least better than before?
– That’s another subject for research and experiments, and agreements in each situation. But in reality, decisions are made according to established (e.g. constitutional) rules and conventions, habits or ad hoc agreements. Sure, the discourse support system could provide some useful suggestions or advice to the decision-makers, based on the analysis of the evaluation results. A ‘decision support component’. One kind of advice might be to delay the decision if the overall plausibility for a proposal is too close to the midpoint (‘zero’) value of the plausibility scale — indicating the need for more discussion, more research, or more modification and improvement. Similarly, if there is too much disagreement in the overall assessment – if a group of participants shows very different results from the majority, even if the overall ‘average’ result looks like there is sufficient support, the suggestion may be to look at the reasons for the disagreement before adopting a solution. Back to the drawing board…
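(A sketch of the kind of advice such a ‘decision support component’ might generate from the participants’ deliberated proposal plausibility scores. The thresholds are placeholders that would have to be agreed upon or established by experiment.)

    # Sketch of advice derived from individual proposal plausibility scores (-1..+1).
    # The thresholds are placeholders, to be agreed upon or set by experiment.
    from statistics import mean, pstdev

    def decision_advice(scores, status_quo=0.0,
                        min_plausibility=0.25, max_disagreement=0.5):
        group_score = mean(scores)
        disagreement = pstdev(scores)
        if disagreement > max_disagreement:
            return 'examine the reasons for disagreement before adopting a solution'
        if group_score <= status_quo:
            return 'proposal judged no better than the existing problem situation'
        if group_score < min_plausibility:
            return "too close to 'zero': more discussion, research, or modification"
        return 'sufficient support; proceed to decision under the agreed rules'

    print(decision_advice([0.6, 0.7, 0.5, 0.8]))
    print(decision_advice([0.9, 0.8, -0.7, -0.6]))   # the average hides a split group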
– Getting back to the accountability aspect you promised to discuss: Now I see how that may be using the evaluation results and credit accounts somehow — but can you elaborate how that would work?
– Yes, that’s a suggestion thrown around by Abbé Boulah some time ago. It uses the credit point account idea as a basis of qualification for decision-making positions, and the credit points as a form of ‘ante’ or performance bond for making a decision. There are decisions that must be made without a lot of public discourse, and people in those positions ‘pay’ for the right to make decisions with an appropriate amount of credit points. If the decision works out, they earn the credits back, or more. If not, they lose them. Of course, important decisions may require more points than any individual has compiled; so others can transfer some of their credits to the person, unrestricted, or dedicated for specific decisions. So they have a stake – their own credit account – and lose their credits if they make or support poor decisions. This also applies to decisions made by bodies of representatives: they too must put up the bond for a decision, and the size of that bond may be larger if the plausibility evaluations by discourse participants show significant differences, that is, disagreements. They take a larger risk when making decisions about which some people have significant doubts. But I’m sorry, this is getting away from the discussion here, about the discourse support system.
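(A rough sketch of the ‘decision bond’ idea as described: the required ante grows with the disagreement shown in the evaluation results, supporters can transfer credits, and the bond is returned with a bonus or lost depending on how the decision works out. The bond formula, the bonus, and all numbers are placeholders.)

    # Rough sketch of the 'decision bond' idea; all numbers are placeholders.
    def required_bond(base_bond, disagreement, scale=2.0):
        # More disagreement among discourse participants -> a larger stake is required.
        return base_bond * (1.0 + scale * disagreement)

    class CreditAccount:
        def __init__(self, points):
            self.points = points

        def transfer_from(self, other, amount):
            # Supporters can put some of their own credits behind a decision-maker.
            other.points -= amount
            self.points += amount

        def post_bond(self, amount):
            if amount > self.points:
                raise ValueError('not enough credit to take on this decision')
            self.points -= amount
            return amount

        def settle(self, bond, decision_worked_out, bonus=0.25):
            # The bond is returned with a bonus if the decision works out;
            # otherwise it stays lost (it was already deducted when posted).
            if decision_worked_out:
                self.points += bond * (1.0 + bonus)

    maker, supporter = CreditAccount(40), CreditAccount(100)
    maker.transfer_from(supporter, 30)                 # dedicated support for one decision
    bond = maker.post_bond(required_bond(40, 0.3))     # evaluation showed some disagreement
    maker.settle(bond, decision_worked_out=True)
    print(maker.points, supporter.points)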
– Another interesting idea that needs some research and experiments before the kinks are worked out.
– Certainly, like many other components of the proposed system — proposed for discussion. But a discussion that is very much needed, don’t you agree? Al, do you have the complete system diagram for us now?
– So far, what I have is this — for discussion:

Figure 4 — The Planning Discourse Support System – Components

– So, Bog-Hubert: should we make a brief list of the research and experiments that should be done before such a system can be applied in practice?
– Aren’t the main parts already sufficiently clear so that experimental application for small projects could be done with what we have now?
– I think so, Vodçek — but only for small projects with a small number of participants and for problems that don’t have a huge amount of published literature that would have to be brought in.
– Why is that, Bog-Hubert?
– See, Sophie: the various steps have been worked through and described to explain the concept, but it had to be done with different common, simple software programs that are not integrated: the content from one component in Al’s diagram has to be transferred ‘by hand’ to the next. For a small project, that can be done by a small support staff with a little training. And that may be sufficient to do a few of the experiments we mentioned to fine-tune the details of the system. But for larger projects, what we’d need is a well-integrated software program that could do most of the transferring work from one component to the next ‘automatically’.
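(To make the ‘by hand’ point concrete: a minimal sketch of the kind of shared record format an integrated platform might pass automatically between components – verbatim record, formalized IBIS entries, maps, evaluation worksheets. The field names are assumptions for illustration.)

    # Minimal sketch of a shared record format an integrated platform could pass from
    # component to component instead of re-entering content by hand.
    # All field names are assumptions for illustration.
    from dataclasses import dataclass, field

    @dataclass
    class DiscourseEntry:
        entry_id: str
        author: str
        kind: str                 # e.g. 'issue', 'proposal', 'argument', 'question'
        text: str                 # the verbatim contribution
        relates_to: list = field(default_factory=list)              # links used by the issue maps
        plausibility_judgments: dict = field(default_factory=dict)  # participant -> score (-1..+1)

    entry = DiscourseEntry('A12', 'participant_7', 'argument',
                           'The new route would shorten most commutes.',
                           relates_to=['P3'])
    entry.plausibility_judgments['participant_2'] = 0.6
    print(entry)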
– Including creating and updating the maps?
– Ideally, yes. And I haven’t seen any programs on the market that can do that yet. So that should be the biggest and top-priority item on the research ‘to do’ list. Do you remember the other items we should mention there?
– Well, there were a lot of items you guys mentioned in passing without going into much detail – I don’t know whether that was because the questions about those aspects had been worked out already, or because you didn’t have good answers for them? For example, the idea of building ‘nudging’ suggestions into the system to encourage participants to put their comments and questions into a form that encourages cooperation and discourages adversarial attitudes?
– True, that whole issue should be looked into more closely.
– What about the issue of ‘aggregation functions’ – wasn’t that what you called them? The way participants’ plausibility and importance judgments about individual premises of arguments, for example, get assembled into argument plausibility, argument weights, and proposal plausibility?
– Not to forget the problem of getting a reasonable measure of group assessment from all those individual judgment scores.
– Right. It may end up having to be a multivariable measure, not just a single number: like the weather, we may need several variables to describe it.
– Then there is the whole idea of those merit points. It sounds intriguing, and the suggestion to link them to the group’s plausibility assessments makes sense, but I guess there are a lot of details to be worked out before it can be used for real problems.
– You say ‘real problems’ – I guess you are referring to the way they could be used in a kind of game, just like the one we ran here in the Tavern last year about the bus system, where the points are just part of the game rules, as opposed to real cases. I think the detailed development of this kind of game should be on the list too, since games may be an important tool to make people familiar with the whole approach. How to get these ideas out there may take some thinking too, and several different tools. But using these ideas for real cases is a whole different ball game, I agree. Work to do.
– And what about the link between all those measures of the merit of people’s information and arguments, and the final decision? Isn’t that going to need some more work as well? Or will it be sufficient to just have the system sound an alarm if there is too much of a discrepancy between the evaluation results and, say, a final vote?
– We’ll have to find out – as we said, run some experiments. Finally, to come back to our original problem of trying to reduce the adversarial flavor of such a discourse: I’d like to see some more detail about the suggestion of using the merit point system to encourage and reward cooperative behavior. Linking the individual merit points to the overall quality of the final decision — the plan the group ends up adopting — sounds like another good idea that needs more thought and specifics.
– I agree. And this may sound like going way out of our original discussion: we may end up finding that the decision methods themselves may need some rethinking. I know we said to leave this alone, accept the conventional, constitutional decision modes just because people are used to them. But don’t we agree that simple majority voting is not the ultimate democratic tool it is often held out to be, but a crutch, a discussion shortcut, because we don’t have anything better? Well, if we have the opportunity to develop something better, shouldn’t it be part of the project to look at what it could be?
– Okay, okay, we’ll put it on the list. Even though it may end up making the list a black list of heresy against the majesty of the noble idea of democracy.
– Now there’s a multidimensional mix of metaphors for you. Well, here’s the job list for this mission; I hope it’s not an impossible one:
– Developing the integrated software for the platform
– Developing better display and mapping tools, linked to the formalized record (IBIS)
– Developing ‘nudge’ phrasing suggestions for questions and arguments that minimize adversarial potential
– Clarifying questions about aggregation functions in the evaluation component
– Improving the linkage between evaluation results (e.g. argument merit) and decision
– Clarifying, elaborating the discourse merit point system
– Adding improvement / modification options for the entire system
– Developing alternative decision modes using the contribution merit evaluation results.
– That’s enough for today, Bog-Hubert. Will you run it by Abbé Boulah to see what he thinks about it?
– Yeah, he’ll just take it out to Rigatopia and have them work it all out there. Cheers.