Planning discourse: Integration of argumentation into systems models or systems modeling information into argumentative discourse.
Posted: March 10, 2014
Various discussions about how complex societal problems and crises can be dealt with have revealed, among other things, a mutual shortcoming of two conceptual ‘models’ held to carry the best promises for overcoming the challenges: ‘Systems Thinking’ on the one hand, and the Argumentative Model of Planning on the other. Briefly, systems modeling tools are considered the best available tools for the understanding and analysis of complex systems behavior, while a carefully orchestrated argumentative discourse with wide participation appears to offer the best – because most familiar and accessible – vehicle for assembling the ‘distributed’ information and connecting that information forward to acceptable agreements and decisions.
The problem or shortcoming is the following: The detailed information embodied in complex systems models is not accommodated in the familiar patterns of argumentative discourse, and thus difficult to adequately bring to bear on the decisions reached at the end of such discourse. On the other hand, the disagreements (and thus conflicting, inconsistent information) that characterize argumentative discourse in the form of ‘pros’ and ‘cons’ are not accommodated in the typical systems models whose assumptions regarding variables, parameters, and their values and relationships have the appearance of being either valid on the basis of scientific verification, or ‘settled’ by other means (e.g. as goals ‘given’ by the clients of analysis projects, or opinion surveys).
The consequences of decision processes adopting either ‘model’ can be equally defective: decisions based on the output of model simulations, for example, run the risk of overriding critical disagreements and the interests of parties whose information has not been included in the model, or has been downplayed there, and thus lead to future conflict. Decisions reached on the basis of argumentative discourse in which the complexity of the system in question has not been fully understood, because it could not be adequately represented in the tools of the discourse, are equally likely to be flawed. This would be true even if the main shortcoming of the ‘parliamentary’ tradition were successfully resolved – the possibility of the final majority vote completely ignoring and overriding the concerns of the minority. (A possible solution for this problem has been suggested with the proposals for systematic and transparent assessment of planning arguments (Mann 2010); it will be assumed to be adopted in some form in the following.)
The mutual difficulty of these two models in accommodating each other’s content is, I consider, a main obstacle to the successful development of a viable framework for planning / policy-making, from the small-scale, local level to the scale of global crises and conflicts. It has not, to my knowledge, received sufficient attention, analysis, and discussion. The following two suggestions, exploring the possibility for each of the two models to be integrated into the other, are intended as a starting point for this much-needed discussion. The possibility of the emergence of a ‘third model’ that would resolve the difficulty is left open as a challenge for future thinking.
- A. How can Argumentation be integrated into ST tools?
Possibility (using the example of simulation models, for clarity):
i) Starting with the model diagram:
ii) Each variable and parameter in the model diagram is shown in a ‘box’ with attached expansion symbols:
E for explanatory information about item x: What is x? Also: description?
F for factual information: What is (the value of) x currently; evidence, data?
O (instead of D) for deontic / ought information and arguments: should x be set (as part e.g. of an intervention package)?
H or I for instrumental (‘How to’) information: How can x be achieved?
Clicking on the symbol will open a discussion page where the question is stated and answers / arguments are listed.
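As a rough sketch of the data structure this implies (all names hypothetical, not part of any existing tool), each item in the model diagram might carry discussion pages keyed by the four question types above:

```python
from dataclasses import dataclass, field

# Question types attached to each model item, as described above:
# E = explanatory, F = factual, O = deontic ('ought'), H = instrumental ('how to').
QUESTION_TYPES = ("E", "F", "O", "H")

@dataclass
class DiscussionPage:
    """A page stating a question and listing the answers / arguments raised."""
    question: str
    entries: list = field(default_factory=list)

@dataclass
class ModelItem:
    """A variable or parameter 'box' in the model diagram, with its
    attached expansion symbols linking to discussion pages."""
    name: str
    pages: dict = field(default_factory=dict)  # question type -> DiscussionPage

    def open_page(self, qtype: str, question: str) -> DiscussionPage:
        """Simulates clicking an expansion symbol: returns the discussion
        page for that question type, creating it if none exists yet."""
        if qtype not in QUESTION_TYPES:
            raise ValueError(f"unknown question type: {qtype}")
        if qtype not in self.pages:
            self.pages[qtype] = DiscussionPage(question)
        return self.pages[qtype]

# Example: the 'O' symbol on variable x opens its deontic discussion page.
x = ModelItem("x")
page = x.open_page("O", "Should x be set as part of an intervention package?")
page.entries.append("Pro: setting x would help achieve goal Y.")
```

A repeated click on the same symbol would return the existing page rather than create a new one, so the discussion accumulates in one place per question type.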
Plan proposals are described as packages of variable and parameter values of the model that serve as the proposed ‘intervention’ settings whose performance will be simulated over time in the model.
Main menu symbols in the ‘legend’ box of the diagram provide the links for the overall issues:
– What should be the plan proposal? (Described as initial intervention settings of model); clicking should link to follow-up questions:
– H- question: plan proposals (alternatives)?
– Evaluation worksheets for selected proposals (to develop a pl-value for the proposal based on the assessments of argument weight, argument plausibility, and the plausibility of argument premises).
– What is the critical performance variable that should be simulated with the model?
– What additional variable / parameter should be included in the model?
Subsequent additional links for the follow-up question:
– Should this item be included in the model?
– What are the values and relationships?
This information can be ‘automatically’ extracted from the discussion and shown in the model.
– Should the proposed variable be part of the intervention (plan) package?
– How can the initial / intervention variable setting be achieved (if not already in place…)?
iii) These pages should have convenient ‘back’ links to the question from where they were accessed.
iv) The pages for these questions should be complemented by, or linked to, issue maps showing the relationships between the various issues in the entire discussion (with the ‘current’ issue from which the page was linked shown bold or highlighted).
v) These requirements imply that the different functions described (model diagram, issue discussions, mapping, evaluation, etc.) must be part of a single integrated software program.
- B. How can systems modeling information be integrated into argumentative discourse platforms and maps?
Assume, as a starting point, that there is a discussion about whether a plan proposal X should be decided upon for implementation. The discussion support documentation (drawn from the ‘live’ or conventional online discussion) is organized along the principles of an adapted planning-discourse IBIS (‘issue based information system’) or APIS (‘argumentative planning information system’).
Arguments pro or con the proposal will be raised and displayed in the ‘standard’ format:
“Proposal X ought / ought not be implemented because it is / is not a fact that X will help achieve goal Y, given conditions C, and conditions C are / are not (or will be) present.” Formally:
“+/-O(X) <— (+/-FI((X–>Y)|C) & +/-O(Y) & +/-F(C))”
Here, ‘conditions C‘ stand for the set of assumed variable and parameter values of a simulation model; and the proposal X will be described as the package of such model assumptions that are under the control of planners as the starting ‘intervention‘ into the situation and for which the performance over time is to be simulated with the model.
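The three-premise pattern above lends itself to a simple structured representation. The following is a minimal sketch (names and types are my own, hypothetical choices, not a prescribed format); each premise carries a sign for the ‘is / is not’ and ‘ought / ought not’ alternatives:

```python
from dataclasses import dataclass

@dataclass
class Claim:
    kind: str       # 'O' deontic, 'F' factual, 'FI' factual-instrumental
    text: str
    sign: int = +1  # +1: 'is' / 'ought'; -1: 'is not' / 'ought not'

@dataclass
class PlanningArgument:
    conclusion: Claim       # +/-O(X): plan X ought / ought not be implemented
    fi_premise: Claim       # +/-FI((X->Y)|C): X will / won't help achieve Y, given C
    deontic_premise: Claim  # +/-O(Y): goal Y ought / ought not be pursued
    factual_premise: Claim  # +/-F(C): conditions C are / are not present

# The 'standard' pro argument for a plan X:
arg = PlanningArgument(
    conclusion=Claim("O", "Plan X ought to be implemented"),
    fi_premise=Claim("FI", "X will help achieve goal Y, given conditions C"),
    deontic_premise=Claim("O", "Goal Y ought to be pursued"),
    factual_premise=Claim("F", "Conditions C are present"),
)
```

A participant who negates any one premise (flips its sign) obtains the corresponding ‘con’ reading of the same set of claims, which is relevant to the mapping discussion below.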
Successor questions about C will be answered by displays of the entire model, listing all variables and parameters with their assumptions and relationships so that they can be discussed, within the regular format provisions of the argumentative discourse platform.
The platform will be structured according to the main considerations described e.g. in Mann (2010) including the components of the verbatim file of contributions, the topic and issue lists, the discussion files of each issue in a condensed / formalized manner, argument maps, and evaluation worksheets and analysis tools.
This requires that the platform be structured so as to
i) allow discussion of each issue as a separate thread;
ii) permit visual displays not only of issue and argument maps but also of systems model diagrams (and, ideally, provisions for running the models) within the same platform;
iii) allow convenient forward and backward linking between all its components.
Re-examining various efforts and proposals on discourse support over time, I have tried to identify and address some key issues or problems that require attention and rethinking. Briefly, the list of issues includes the following (in no particular order of importance):
• The question of the appropriate Conceptual Framework for the discourse support system;
• The preparation and use of discourse, issue and argument maps, including the choice of map ‘elements’ (questions, issues, arguments, concepts or topics…);
• The design of the organizational framework: the ‘platform’;
• The Software problem: Specifications for discourse support software;
• Questions of appropriate process;
• The role and design of discourse mapping;
• The aspect of distributed information;
• The problem of complexity of information (complexity of linear verbal or written discussion, complex reports, systems model information);
• The role of experts;
• Negative associations with the term ‘argument’;
• The problem of ‘framing’ the discourse;
• Inappropriate focus on insignificant issues;
• The role of media;
• Appropriate Discussion representation;
• Incentives / motivation for participation (‘Voter apathy’)
• The ‘familiar’ (comfortable?) linear format of discussions versus the need (?) for structuring discourse contributions;
• The need for overview of the number of issues / aspects of the problem and their relationships;
• The effect of ‘last word’ contributions (e.g. speeches) on collective decisions; or mere ‘rhetorical brilliance’;
• Linking discussion merit / argument merit with eventual decisions;
• The issue of maps ‘taking sides’;
• The problem of evaluation: of proposals, arguments, discussion contributions;
• The role of ‘systems models’ information in common (verbal, linear, including ‘argumentative’) discourse
• The question of argument reconstruction.
• The appropriate formalization or condensation needed for concise map representation;
• Differences between requirements for e.g. ‘argument maps’ as used in e.g. law or science versus planning;
• The deliberate or inadvertent ‘authoritative’ effect of e.g. argument representation as ‘valid’; (i.e. the extent of evaluative content of maps);
• The question of appropriate sequence of map generation and updating;
• The question of representation of qualifiers in evaluation forms.
In previous work on the structure and evaluation of ‘planning arguments’ within the overall framework of the ‘Argumentative Model of Planning’ (as proposed by Rittel), I have been making various assumptions with regard to these questions — assumptions differing from those made in other studies and proposed discourse support tools. Such assumptions, for example regarding the conceptual framework as manifested in the choice of vocabulary, adopted as a mostly unquestioned matter of course in my proposals as well as in others’ work, have significant implications for the development of such discourse support tools. They should therefore be raised as explicit issues for discussion and re-examination.
A first step in such a re-examination might begin with an attempt to explicitly state my current position, for discussion. This position is the result, to date, of experience with my own ideas as well as the study of others’ proposals. Not all of the issues listed above will be addressed in the following. Some position items still are, in my mind, more ‘questions’ than firm convictions, but I will try to state them as ‘provocatively’ as possible, for discussion and questioning.
1 The development of a global support framework for the discussion of global planning and policy agreements, based on wide participation and assessment of concerns, is a matter of increasingly critical concern; it should be pursued with high priority.
While no such system can be expected to achieve definitive universal validity and acceptance, and therefore many different efforts for further development of alternative approaches should be encouraged, there is a clear need for some global agreements and decisions that must be based on wide participation as well as thorough evaluation of concerns and information (evidence).
The design of a global framework will not be structurally different from the design of such systems for smaller entities, e.g. local governments. The differences would be mainly ones of scale. Therefore, experimental systems can be developed and tested at smaller scales to gain sufficient experience before engaging in the investments that will be needed for a global framework. By the same token, global systems for initially very narrow topics would serve the same purpose of incremental development and implementation.
2 The design of such a framework must be based on — and accommodate — currently familiar and comfortable habits and practices of collective discussion.
While there are analytical techniques and tools with plausible claims of greater effectiveness and of greater ability to deal with the amount and complexity of data, the use of these tools in discourse situations with wide participation of people of different educational levels would either be prohibitive of wide participation, or require implausibly massive information / education programs, for which precisely the needed tools for reaching agreement on the selection of a method / approach (among the many competing candidates) are currently not available.
3 Even with the growing use of new information technology tools, the currently most familiar and comfortable discourse pattern seems to be that of the traditional ‘linear discussion’ (the sequential exchange of questions and answers or arguments) — the pattern developed in e.g. the parliamentary tradition: the convention of giving all parties a chance to speak and to air their concerns, their pros and cons regarding proposed collective actions, before a decision is made.
This form of discourse, originally based on the sequential exchange of verbal contributions, is of course complemented and represented by written documents, reports, books, and communications.
4 A first significant attempt to enhance the ‘parliamentary’ tradition with systematic information system, procedural and technology support was Rittel’s ‘Argumentative Model of Planning’. It is still a main candidate for the common framework.
Rittel’s main argument for the general acceptance of this model was the insight that its basic, general conceptual framework of ‘questions’, ‘issues’ (controversial questions), ‘answers’, and ‘arguments’ could in principle accommodate the content of any other framework or approach, and thus become a bridge or common forum for planning at all levels. This still seems to be a valid claim not matched by any other theoretical approach.
5 However, there are sufficiently worrisome ‘negative associations’ with the term ‘argument’ of Rittel’s model to suggest at least a different label and a selection of more neutral key concepts and terms for the general framework.
The main options are to refer only to ‘questions’, ‘responses’, and ‘claims’, and to avoid ‘argument’ as well as the concepts of ‘pros’ and ‘cons’ — arguments in favor of and opposed to plan proposals or other propositions.
Argumentation can be seen as the mutually cooperative (positive) effort of discussion participants to point out premises that support their positions but are also already believed to be true or plausible by the ‘opponent’ (or will be accepted by the opponent upon presentation of evidence or further arguments). The more common, apparently persistent view, however, is that of argumentation as a ‘nasty’, adversarial, combative ‘win-lose’ endeavor. While discourse by any other label will undoubtedly still produce arguments, pros and cons, etc., the question is whether these should be represented as such in support tools, or in a more neutral vocabulary.
Experiments should be carried out with representations of discourse contributions — in overview maps and evaluation forms — as ‘questions’ and ‘answers’.
6 Any re-formatting, reconstruction, condensing of discussion contributions carries the danger of changing the meaning of an entry as intended by its author.
Regardless of the choice of such formatting — which should be the subject of discussion — the framework must preserve all original entries in their ‘verbatim’ form for reference and clarification as needed. Ideally, any reformatting of an entry should be checked with its author to ensure that it represents its intended meaning. (Unfortunately, this is not possible for entries whose authors cannot be reached, e.g. because they are dead.)
7 The framework should provide for translation services not only for translation between natural languages, but also from specialized discipline ‘jargon’ entries to natural language.
8 While researchers in several disciplines are carrying out significant and useful efforts towards the development of discourse support tools, and some of these efforts seem to claim to produce universally applicable tools, such claims are overly optimistic.
The requirements of different disciplines differ, and lead to different solutions that cannot comfortably be transferred to other realms. Specifically, the differences between scientific, legal, and planning reasoning call for quite different approaches and discourse support systems. However, these realms are not independent: the planning discourse contains premises from all of them, which must be supported with the tools pertinent to each. The diagram suggests how different discourse and argument systems are related to planning:
(Sorry, diagram will be added later)
9 Analysis and problem-solving approaches can be distinguished according to the criteria they recommend as the warrant for solution decisions:
– Voting results (government, management decision systems, supported by experts);
– ‘Backwards-looking’ criteria: ‘root cause’ (root cause analysis), necessary conditions and contributing factors (‘Systematic Doubt’ analysis), historical data (systems models);
– ‘Process / approach’ criteria (“the ‘right’ approach guarantees the solution”): solutions legitimized by participation, vote, or authority position, or by argument merit;
– ‘Forward-looking’ criteria: Expected result performance, Benefit-Cost Ratio, simulated performance of selected variables over time, or stability of the system, etc.
It should be clear that the framework must accommodate all these approaches or, preferably, be based on an approach that could integrate all these perspectives as applicable to the context and characteristics of the problem. There is, to my knowledge, currently no approach matching this expectation, though some claim to do so (e.g. ‘Multi-level Systems Analysis’, which however looks only at approaches deemed to fit within the realm of ‘Systems Thinking’).
10 While the basic components of the overall framework should be as few, general, and simple as possible, — for example ‘topic’, ‘question’ and ‘claim’ or ‘response’, — actual contributions in real discussions can be lengthy and complex, and must be accommodated as such (in ‘verbatim’ reference files). However, for the purposes of overview by means of visual relationship mapping, or systematic evaluation, some form of condensed formatting or formalization will be necessary.
The needed provisions for overview mapping and evaluation are slightly different, but should be as similar as possible for the sake of simplicity.
11 Provisions for mapping:
a. Different detail levels of discourse maps should be distinguished: ‘Topic maps’, ‘Issue maps’ (or ‘question maps’), and ‘argument maps’ or ‘reasoning maps’.
– Topic maps merely show the general topics or concepts and their relationship as linked by discussion entries. Topics are conceptually linked (simple line) if they are connected by a relationship claim in a discussion entry.
– Issue or question maps show the relationships between specific questions raised about topics. Questions can be identified by type: e.g. factual, deontic, explanatory, instrumental questions. There are two main kinds of relationships: one is the ‘topic family’ relation (all questions raised about a specific topic); the other is the relationship of a question (a ‘successor’ question) having been raised as a result of challenging, or querying for clarification of, an element (premise) of another (‘predecessor’) question.
– Argument or reasoning maps show the individual claims (premises) making up an answer or argument about an issue (question), and the questions or issues that have been raised as a result of questioning any such element (e.g. challenging or clarifying it, or calling for additional support for an argument premise).
b. Reasoning maps (argument maps) should show all the claims making up an argument, including claims left not expressed in the original ‘verbatim’ entry as assumed to be ‘taken for granted’ and understood by the audience.
Reasoning maps aiming to encourage critical examination and thinking about a controversial subject might show ‘potential’ questions (for example, the entire ‘family’ of issues for a topic) that could or should be raised about an answer or argument. These might be shown in gray or faint shades, or in a different color from questions actually raised.
c. Reasoning maps should not identify answers or arguments as ‘pro’ and ‘con’ a proposal or position (unless it is made very clear that these labels reflect only the author’s intended function).
The reason is that other participants might disagree with one or several of the premises of an intended ‘pro’ argument, in which case the set of premises (now with the respective participant’s negation) can constitute a ‘con’ argument; a map showing it as ‘pro’ would in fact be ‘taking sides’ in the assessment. This would violate the principle of the map serving as a neutral, ‘impartial’ support tool.
d. For the same reason, reasoning maps should not attempt to identify and state the reasoning pattern of an argument (e.g. ‘modus ponens’ or ‘modus tollens’). Nor should they ‘reconstruct’ arguments into such (presumably more ‘logical’, even ‘deductively valid’) forms.
Again, if in a participant’s opinion one of the premises of such an argument should be negated, the pattern (reasoning rule) of the set of claims becomes a different one. By showing the pattern as the one originally intended by the author (however justified it may seem to the map preparers by its inherent nature and the validity of its premises), the map would inadvertently or deliberately be ‘taking sides’ in the assessment of the argument.
e. Topic, issue and reasoning maps should link to the respective elements in the verbatim and any formalized records of the discussion, including to source documents, and illustrations (pictures, diagrams, tables).
f. The ‘rich image’ fashion (fad?) of adding icons and symbols (thumbs up or down, plus or minus signs) or other decorative features to the maps (moving bubbles, background imagery, etc.) serves more as a distraction than as a well-intended user-friendly device, and should be avoided.
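The two relationship kinds of the issue-map level described in (a) above can be sketched as a small data structure. This is an illustration under assumed names, not a specification of any existing mapping tool:

```python
from collections import defaultdict

class IssueMap:
    """Minimal sketch of an issue map, tracking the two relationship kinds:
    the 'topic family' (all questions raised about a topic) and the
    predecessor -> successor relation (a question raised as a result of
    challenging or querying an element of another question)."""
    def __init__(self):
        self.by_topic = defaultdict(list)    # topic -> its family of questions
        self.successors = defaultdict(list)  # question -> questions it spawned

    def raise_question(self, topic, question, predecessor=None):
        self.by_topic[topic].append(question)
        if predecessor is not None:
            self.successors[predecessor].append(question)

m = IssueMap()
q1 = "O: Should plan X be implemented?"
q2 = "F: Are conditions C present?"
m.raise_question("plan X", q1)
# q2 arises from challenging the factual premise of an answer to q1:
m.raise_question("conditions C", q2, predecessor=q1)
```

Topic maps and argument maps would be coarser and finer views over the same records: topics linked when a relationship claim connects them, and argument maps expanding each answer into its individual premises.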
12 Current discourse-based decision approaches exhibit a significant shortcoming in that there is no clear, transparent, visible link between the ‘merit’ of discussion contributions and the decision.
Voting blatantly permits disregarding discussion results entirely. Other approaches (e.g. Benefit-Cost Analysis, or systems modeling) claim to address all concerns voiced e.g. in preparatory surveys, but disregard any differences of opinion about the assumptions entering the analysis. (For example, some entities in society would consider the ‘cost’ of government project expenditures a ‘benefit’ if it leads to profits for those entities (e.g. industries) from government contracts.)
The proposed expansion of the Argumentative Model with Argument Evaluation (TM 2010) provides an explicit link between the merit of arguments (as evaluated by discourse participants) and the decision, in the form of measures of plan proposal plausibility. This approach should be integrated into an approach dropping the ‘argumentative‘ label, even though it requires explicit or implicit evaluation of argument premises.
13 Provisions for evaluation.
In discussion-based planning processes, three main evaluation tasks should be distinguished: the comparative assessment of the merit of alternative plan proposals (if more than one); the evaluation of one plan proposal or proposition, as a function of the merit of arguments; and the evaluation of the merit of single contributions, (item of information, arguments) to the discussion.
For all three, the basic principle is that evaluation judgments must be understood as subjective judgments, by individual participants, about quality, plausibility, goodness, validity, desirability, etc. While traditional assessments, e.g. of the truth of argument premises and conclusions, aimed at absolute, objective truth, the practical working assumption here is that while we all strive for such knowledge, we must acknowledge that we have no more than (utterly subjective) estimate judgments of it, and it is on the strength of those estimates that we have to make our decisions. The discussion is a collective effort to share and, hopefully, improve the basis of those judgments.
The first task above is often approached by means of a ‘formal evaluation’ procedure developing ‘goodness’ or performance judgments about the quality of the plan alternatives, resulting in an overall judgment score as a function of partial judgments about the plans’ performance with respect to various aspects, sub-aspects, etc. Such procedures are well documented; the discourse may be the source of the aspects, but more often the aspects are assembled (by experts) through a different procedure.
The following suggestions explore the approach of developing a plausibility score for a plan proposal based on the plausibility and weight assessments of the (pro and con) arguments and argument premises (following TM 2010, with some adaptations).
a. Judgment criterion: Plausibility.
All elements to be ‘evaluated’ are assessed with the common criterion of ‘plausibility’, on an agreed-upon scale from +n (‘completely plausible’) to -n (‘completely implausible’), with the midpoint score of zero meaning ‘don’t know’ or ‘neither plausible nor implausible’.
While many argument assessment approaches aim at establishing the (binary) truth or falsity of claims, ‘truth’ (and not even ‘degree of certainty’ or probability of the truth of a claim) does not properly apply to deontic (ought-) claims, the desirability of goals, etc. The plausibility criterion or judgment type applies to all types of claims: factual, deontic, explanatory, etc.
b. Weights of relative importance
Deontic claims (goals, objectives) are not equally important to people. To express these differences in importance, individuals assign ‘weight of relative importance’ judgments to the deontics in the arguments, on an agreed-upon scale from 0 to 1, such that all weights relative to an overall judgment add up to 1.
c. All premises of an argument are assigned premise plausibility judgments ppl; the deontic premises are also assigned their weight of relative importance pw.
d. The argument plausibility argpl of an argument is a function of the plausibility values of all its premises.
e. Argument weight argw is a function of argument plausibility argpl and the weight pw of its deontic premise.
f. Individual Plan or Proposal plausibility PLANpl is a function of all argument weights.
g. ‘Group’ assessments or indicators of plan plausibility GPLANpl can be expressed as some function of all individual PLANpl scores.
However, ‘group scores’ should only be used as a decision guide, together with added consideration of degrees of disagreement (range, variance), not as a direct decision criterion. The decision may have to be taken by traditional means, e.g. voting. But the correspondence or difference between deliberated plausibility scores and the final vote adds an ‘accountability’ provision: a participant who has assigned a deliberated positive plausibility score to a plan but votes against it will face strong demands for explanation.
h. A potential ‘by-product’ of such an evaluation component of a collective deliberation process is the possibility of rewarding participants for discussion contributions: not only with reward points for making contributions, and for making them speedily (since only the ‘first’ argument making a given point will be included in the evaluation), but by modifying these contribution points with the collective assessments of their plausibility. Thus, participants will have an incentive, and be rewarded, for making plausible and meritorious contributions.
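Steps a through g above can be sketched in code. Note that the text deliberately leaves the aggregation functions open (each is only said to be ‘a function of’ its inputs); the particular choices below (minimum, product, sum, mean) are assumptions for illustration only, not the functions proposed in TM 2010:

```python
from statistics import mean

def argument_plausibility(premise_pls):
    """d. argpl as a function of the premise plausibilities ppl
    (here: the 'weakest link' minimum, on a scale of -1 to +1)."""
    return min(premise_pls)

def argument_weight(argpl, deontic_weight):
    """e. argw as a function of argpl and the weight pw of the
    deontic premise (here: their product)."""
    return argpl * deontic_weight

def plan_plausibility(arg_weights):
    """f. PLANpl as a function of all argument weights (here: their sum;
    arguments judged implausible contribute negatively)."""
    return sum(arg_weights)

def group_plan_plausibility(individual_pls):
    """g. GPLANpl as some function of individual scores (here: the mean),
    reported together with the range of disagreement, per the caveat above."""
    return mean(individual_pls), max(individual_pls) - min(individual_pls)

# One participant, two arguments about a plan (deontic weights sum to 1):
pro = argument_weight(argument_plausibility([0.8, 0.6, 0.9]), 0.7)
con = argument_weight(argument_plausibility([-0.5, 0.7, 0.8]), 0.3)
my_planpl = plan_plausibility([pro, con])
```

With the minimum rule, a single premise the participant finds implausible (the -0.5 above) drags the whole argument below zero, turning the intended ‘pro’ argument into a ‘con’ in that participant’s assessment, which is exactly the effect discussed under mapping provision (c) above.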
14 The process for deliberative planning discourse with evaluation of arguments and other discourse contributions will be somewhat different from current forms of participatory planning, especially if much or all of it is to be carried out online.
The main provisions for the design of the process pose no great problems, and small experimental projects can be carried out with current tools ‘by hand’ with human facilitators and support staff using currently available software packages. But for larger applications adequate integrated software tools will first have to be developed.
15 The development of ‘civic merit accounts’ based on the evaluated contributions to public deliberation projects may help the problem of citizen reluctance (often referred to as the problem of ‘voter apathy’) to participate in such discourse.
However, such rewards will only be effective incentives if they can become a fungible ‘currency’ for other exchanges in society. One possibility is to use the built-up account of such ‘civic merit points’ as one part of qualification for public office — positions of power to make decisions that do not need or cannot wait for lengthy public deliberation. At the same time, the legitimization for power decisions must be ‘paid for’ with appropriate sums of credit points — a much-needed additional form of control of power.
16 An important, yet unresolved ‘open question’ is the role of complex ‘systems modeling’ information in any form of argumentative planning discourse with the kind of evaluation sketched above.
Just as disagreement and argumentation about model assumptions are currently not adequately accommodated in systems models, the information of complex systems models and e.g. simulation results is difficult to present in coherent form in traditional arguments, and almost impossible to represent in argument maps and evaluation tools. Since systems models arguably are currently the most important available tools for detailed and systematic analysis and understanding of problems and system behavior, the integration of these tools in the discourse framework for wide public participation must be seen as a task of urgent and high priority.
17 Another unresolved question regarding argument evaluation (and perhaps also mapping) is the role of statement qualifiers.
The question is whether arguments that are stated with qualifiers (‘possibly’, ‘perhaps’, ‘tend to’, etc.) in the original ‘verbatim’ version should show such qualifiers in the statements (premises) to be evaluated. Arguably, qualifiers can be seen as statements about how an unqualified, categorical claim should be evaluated; the proponent of a claim qualified with a ‘possibly’ does not ask for a complete 100% plausibility score. This means that the qualifier belongs to a separate argument about how the main categorical claim should be assessed, and thus should not be included in the ‘first-level’ argument to be evaluated. The problem is that the qualified claim can be evaluated, as qualified, as quite or even 100% plausible, but that plausibility can then (in the aggregation function) be counted as 100% for the unqualified claim. Unless the author can be persuaded to add an actual suggested plausibility value in lieu of the verbal qualifier, one that other evaluators can view and perhaps modify according to their own judgment (unlikely and probably impractical), it would seem better to enter only unqualified claims in the evaluation forms, even though this may be seen as misrepresenting the author’s real intended meaning.
18 Examples of topic, issue, and argument maps according to the preceding suggestions.
a. A ‘topic map’ of the main topics addressed in this article:
(Figure: Map of topics discussed)
b. An issue map for one of the topics:
(Figure: Argument mapping issues)
c. A map of the ‘first level’ arguments in a planning discourse: the overall plan plausibility as a function of plausibility and weight assessments of the planning arguments (pro and con) that were raised about the plan.
(Figure: The overall hierarchy of plan plausibility judgments)
(Figure: Hierarchy map of argument evaluation judgments, with successor issues)
(Figure: Argument map for the mapping issue ‘Should argument maps show ‘pro’ and ‘con’ labels?’)
Mann, T. (2010) “The Structure and Evaluation of Planning Arguments” Informal Logic, Dec. 2010.
Rittel, H. (1972) “On the Planning Crisis: Systems Analysis of the ‘First and Second Generations’.” Bedriftsøkonomen, #8, 1972.
– (1977) “Structure and Usefulness of Planning Information Systems”, Working Paper S-77-8, Institut für Grundlagen der Planung, Universität Stuttgart.
– (1980) “APIS: A Concept for an Argumentative Planning Information System”. Working Paper No. 324. Berkeley: Institute of Urban and Regional Development, University of California.
– (1989) “Issue-Based Information Systems for Design”. Working Paper No. 492. Berkeley: Institute of Urban and Regional Development, University of California.