Archive for February, 2023

About the Role of AI in the Planning Discourse?

Thorbjørn Mann

Concern 

The rapidly increasing use of ‘AI’ — ‘Artificial’ or ‘Augmented’ Intelligence — tools for many different kinds of tasks that require something resembling human reasoning raises the question of how such programs should be dealt with in the Planning Discourse. There is little argument that there are many information-gathering and data-analysis functions at which AI programs are impressively faster and more efficient than humans. There is also justifiable uneasiness about how the results they produce should influence the decision process.

At the extreme ends of sentiment, potentially flawed and dangerous reactions emerge. At one end, blind faith in and acceptance of AI-based solutions: the tendency, predating AI, to invoke ‘data’, ‘facts’ and science as the sole justifiable basis of public policy is strengthened by it. At the other end, the very effectiveness of these tools raises concerns that significant planning decisions determined solely by algorithms will exclude legitimate considerations by humans, which can lead to blind and violent rejection; the very concept of an organized large-scale discourse support platform can be seen as a ‘Big Brother’ power instrument that must be resisted.

So the question of how these concerns should influence policy in general and the design of the platform in particular deserves attention and discussion. 

Discussion:

The question can be approached from several plausible directions, not all of which can be explored here. One significant perspective is that of making the basis of ‘judgment’ of AI algorithms sufficiently transparent to the public, to alleviate the ‘Big Brother’ concern. This will require work both on the AI development side and, significantly, on the side of public education, so that people are able to understand and critically evaluate the explanations offered. The importance of this task cannot be overstated: mere ‘trust me’ assurances on the part of officials, purveyors of the AI programs, or even of ‘independent’ review committees installed to produce such assurances will not be enough. How ‘independent’ can review committees be if they are installed by the authorities, AI providers and their lobbyists?

Another approach to this question is the following: to look for provisions in the platform that separate, as clearly as possible, ‘reasoning’ such as data analysis, performance calculations, simulations, etc. that can best be done by AI tools, from ‘reasoning’ involving evaluation judgments that should be contributed by humans. (Of course, the transparency reviews of the AI tools should be part of these provisions as well.)

Whether such a ‘clear separation’ of reasoning, or even a reasonable approximation for practical purposes, is possible is of course up for discussion. In other words: Are there meaningful distinctions between questions that can or should be ‘answered’ by humans exclusively, and those where AI-produced results should be given equal or even exclusive consideration?

Some optimism about this issue is based on the structure of what I have called the typical ‘planning argument’. Turning the standard logic sequence of ‘premise, premise, therefore conclusion’ around yields the more colloquial ‘proposal, because premise, premise, premise’, or: («Proposal A should be adopted because A will result in B (given conditions C), and B ought to be pursued, and conditions C are/will be given.») The pattern contains two premises that can plausibly be taken to rely on AI analysis: the factual-instrumental premise ‘A will produce B, given C’ and the factual premise ‘Conditions C are/will be given’. But the premise ‘B ought to be pursued’, as well as the conclusion ‘A ought to be adopted’ itself, are ‘deontic’ or ought-claims. Is it meaningful to accept a procedural rule that assigns the ‘right’ to judge or calculate the plausibility of premises to AI only for the former (‘factual’) premises, and the right to judge the deontic claims (and to assign their ‘weights of importance’) to humans only? Or to accept the rule that AI data and plausibility calculations for fact-premises should be given ‘due consideration’ by humans, but that the aggregation of judgments of both kinds into indications of overall support for proposal A, and the resulting decision, should be based on human judgments only?
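As an illustration of what such a procedural rule might look like in a platform’s data model, here is a minimal sketch in Python. All names, and the -1 to +1 plausibility scale, are assumptions made for this example, not features of any existing platform:

from dataclasses import dataclass
from typing import List, Optional

@dataclass
class Premise:
    text: str
    kind: str                                   # 'factual' or 'deontic'
    ai_plausibility: Optional[float] = None     # AI estimate, -1..+1; factual premises only
    human_plausibility: Optional[float] = None  # an individual's judgment, -1..+1

@dataclass
class PlanningArgument:
    proposal: str            # the deontic conclusion: 'Plan A ought to be adopted'
    premises: List[Premise]

def check_division_of_labor(arg: PlanningArgument) -> None:
    """Enforce the rule under discussion: AI may estimate the plausibility
    of factual premises only; deontic premises take human judgments only."""
    for p in arg.premises:
        if p.kind == "deontic" and p.ai_plausibility is not None:
            raise ValueError(f"AI judgment not admissible for deontic premise: {p.text!r}")

arg = PlanningArgument(
    proposal="Plan A should be adopted",
    premises=[
        Premise("A will produce B, given C", "factual", ai_plausibility=0.8),
        Premise("Conditions C will be given", "factual", ai_plausibility=0.6),
        Premise("B ought to be pursued", "deontic", human_plausibility=0.7),
    ],
)
check_division_of_labor(arg)  # passes: no AI value attached to the deontic premise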

A few kinds of procedural provisions, both for the assessment of premises to form plausibility and weight judgments for individual arguments, and for the aggregation of the resulting argument weights into plausibility measures for the proposal, have been sketched for discussion. These provisions could be seen as part of the needed effort to make the reasoning ‘transparent’. It should be noted that all such results will be judgments by individuals, and thus legitimately different; they are not any abstract universal or even common ‘group’ judgment. Any AI result claiming to be based on ‘due consideration of all pertinent concerns, pros and cons’ would have to show that it has been given all the judgments of the humans affected by the problem a plan aims to solve, and by all proposed ‘solutions’. This aspect alone seems to be a strong argument in favor of separating AI-produced judgments about plans from the judgments of humans. It also suggests that the way the planning discourse will deal with this issue urgently needs more thorough discussion.
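As one concrete reading of such aggregation provisions (an assumed scheme offered for discussion, not a settled formula from these posts), the following sketch treats an individual’s plausibility for one argument as the product of that person’s premise plausibility judgments, scales it by the person’s weight of relative importance for the ought-premise, and sums the results into that individual’s overall proposal plausibility:

from math import prod

def argument_plausibility(premise_plausibilities):
    """One individual's plausibility for one argument: the product
    of that person's premise plausibility judgments (each -1..+1)."""
    return prod(premise_plausibilities)

def argument_weight(premise_plausibilities, importance_weight):
    """Argument weight: argument plausibility scaled by the individual's
    weight of relative importance for the ought-premise (weights across
    one person's arguments are assumed to sum to 1)."""
    return argument_plausibility(premise_plausibilities) * importance_weight

def proposal_plausibility(judged_arguments):
    """One individual's overall proposal plausibility: the sum of that
    person's argument weights (pro arguments positive, cons negative)."""
    return sum(argument_weight(pls, w) for pls, w in judged_arguments)

# Example: one person's judgments on two 'pro' arguments and one 'con':
judgments = [
    ([0.8, 0.6, 0.7], 0.5),   # pro
    ([0.9, 0.5, 0.6], 0.3),   # pro
    ([-0.7, 0.8, 0.9], 0.2),  # con: negative plausibility on one premise
]
print(proposal_plausibility(judgments))  # about 0.148, for this individual only

Note that the output remains one person’s judgment; whether and how to aggregate such results across all affected persons is precisely the open question raised above.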

Comments?

Wrong question?  If so:  what is the better, real question or problem? 

*  AI is ‘Inevitable’: To the extent that answers produced by AI programs are already virtually indistinguishable from comments produced by humans in response to the same questions, isn’t it futile to even consider whether such contributions to the discourse should be admissible? There will be human participants who will insist on using the tools to produce their own entries. And if such entries are indeed difficult or impossible to distinguish from genuinely human entries, any effort to prohibit them may lead to the rejection of actual human comments. So the better question: AI-based entries will be part of the discourse — but shouldn’t the way entries influence the decision be based on the merit of the entries, regardless of whether they have been constructed by humans or AI algorithms?

Comments?

PLAN P?  A PUBLIC PLANNING DISCOURSE PLATFORM:

A discussion of the concept of a Public Planning and Policy-making Discourse Support Platform 

The preceding post, ‘Counterframing’?, is part of an effort to explore the idea of a public planning discourse support platform — a platform aiming at recommendations and decisions based on the merit of contributions to the discourse. This post introduces the concept for discussion. Further posts will take up specific emerging issues for more detailed examination.

There are many well-intentioned efforts to find better ways of tackling the many local and global crises, conflicts, and emergencies facing humanity. These efforts are fueled by a growing common sense that current governmental and planning entities and their practices are proving ill-prepared and inadequate to find and implement constructive solutions — solutions that do not themselves generate new conflicts and problems. These efforts can be found on many different platforms and in many different media, explicitly or implicitly taking the form of proposals for what could or should be done. However, there is little evidence that the needed agreements for collective action will be reached soon enough to become accepted and effective.

Many suggestions for change rely on governance organizations and their corresponding decision-making patterns that have been adequate in smaller communities and government structures; and there are good reasons for many local issues to be resolved by ‘local’ governance entities. However, this ignores the fact that many of the challenges affecting communities across governance boundaries, and humanity as a whole, will require ‘cross-governance’ (cross-‘nation’) and eventually ‘global’ decisions. Neither the ‘local’ decision-making tools, even ‘democratic’ practices such as majority voting, nor the growing alternative trend towards authoritarian governance offer adequate assurance that decisions will be based on ‘due consideration’, much less on the professed ‘careful weighing of pros and cons’ by decision-makers, and will produce effective and equitable solutions. Yet there is no evidence of a commonly accepted process of discussion and coherent evaluation of proposals that can lead to viable agreements and collective action.

A key reason for this is the current lack of a well-organized and universally accepted platform for communication among these initiatives and governance entities. While many such efforts ‘advertise’ their work and approach on social media, they do not communicate well for the purpose of achieving common agreements about what should be done: there is no common platform for well-organized discussion, thorough evaluation, adaptation to concerns by other perspectives, and eventually reaching recommendations and decisions that can be adopted by affected constituencies without creating new problems and conflicts. 

While current official and social media platforms are not yet well enough suited to support such a cooperative discourse, they do show that the technology for such a platform, open to wide participation and discussion, is now available. Yet there is little evidence of efforts to forge a reasonable process of discussion and coherent evaluation of proposals that can lead to viable and accepted agreements and actions.

This post aims at starting a discussion of what a platform for a better public planning and policy-making discourse might look like. It is an urgent invitation to participants from many different disciplines, who will have to contribute their expertise and insight, to join the effort.

The task itself is an example of the projects that need such a platform — a platform that currently does not exist. So this ‘pilot’ discussion will have to start with a tentative adaptation of something like this platform, a medium that is not designed for this task. The needed adaptation agreements — which will themselves become a subject of discussion and modification — simply aim at a somewhat more structured format that may facilitate concise overview, evaluation, and eventual recommendations and decisions based on the merit of the assembled content.

Further posts will explore details of a possible ‘Approach’, any needed ‘Procedural Agreements’, and other essential features of the platform.

‘Counterframing’?

Thorbjørn Mann

The Framing Problem in the Planning Discourse  

In an ongoing process of designing the outline of a (potentially) global Public Planning Discourse Support Platform, a recurring issue is delaying the very attempt to open up a ‘pilot’ version on social media to discuss the concept and its development: the question of ‘framing’ the discourse.

The concept of ‘framing’ refers to the fact that there are always several different ‘ways of talking’ (also called ‘perspectives’ or ‘paradigms’) about a problem, emergency, or vision that some feel should become a community planning project. Further, it has been seen that the first such ‘way of talking’ introduced into the discussion — even the way the project is ‘raised’ for discussion — often tends to dominate the ensuing discourse. To the extent that the concern for the project is ‘controversial’ or involves a conflict of interests, framing can thereby become a ‘power’ tool in the search for solutions: it will favor a particular, partisan set of potential solutions. And this may result in solutions that are inequitable, unjust, or oppressive to other parts of the affected constituencies.

The implication for the design of a ‘democratic’ planning platform therefore becomes a requirement to keep the platform design itself as ‘perspective-neutral’ as possible, lest it be perceived as a power tool of the part of the community that would benefit from solutions gained from the particular perspective, and therefore not be trusted by other segments of the community.

Though the pursuit of perspective neutrality must be taken seriously, it is probably impossible to design a totally ‘perspective-neutral’ platform. But even if this could be done: would not then the very first effort by any party to start a discussion about a planning problem or project be the ‘framing’ entry the principle says we should avoid?

So it looks like framing will be inevitable. The progress of designing an outline for even a pilot version of a planning platform has become stuck in this dilemma, to the point of not even being able to reach agreement on the basic articulation of purpose, focus, and aim of the project, for fear of committing the sin of framing.  

Now, would it not be more useful to look for platform provisions that would neutralize the effects of such first framing incidents, rather than to insist on avoiding them? Are there ways of acknowledging this, and including provisions in the platform design, for defusing any potentially controversial or destructive effects?

A first option would be simply to always point out the framing essence of discourse contributions — with ubiquitous reminders like Rittel’s suggestion to end each entry in an ‘IBIS’ (Issue-Based Information System) with a «Wrong question?» or «Wrong problem?» line. It may have to be more specific, like «Wrong way of talking?»

Another, more detailed possibility, following C. West Churchman’s recommendation of ‘testing’ a systems narrative with a ‘counterplanning’ effort, would be to adopt a rule requiring that any entry of a substantial effort or proposal in the discourse be accompanied by an equally plausible but substantially different ‘counterframing’ comment before it is accepted as a topic for more in-depth and systematic discussion.
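A minimal sketch of how such an admission rule might be encoded, assuming a hypothetical entry structure and a simple ‘at least one counterframing’ threshold (both are illustrative assumptions, not an existing platform feature):

from dataclasses import dataclass, field
from typing import List

@dataclass
class Entry:
    author: str
    framing_text: str
    # substantially different 'ways of talking' offered alongside the entry:
    counterframings: List[str] = field(default_factory=list)

def admissible_for_systematic_discussion(entry: Entry) -> bool:
    """Rule sketch: admit a proposal for in-depth treatment only when at
    least one counterframing comment accompanies it."""
    return len(entry.counterframings) >= 1

e = Entry("initiator", "Frame the issue as a traffic congestion problem")
print(admissible_for_systematic_discussion(e))  # False: the framing stands alone
e.counterframings.append("Frame it instead as a land-use and transit problem")
print(admissible_for_systematic_discussion(e))  # True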

The ‘democratic’ principles of planning and policy-making, namely that decisions should be based on ‘due consideration of all concerns of all segments of a community’ and on ‘careful weighing of all pros and cons’, would seem to require that all ‘perspectives’ held by all parties in a community be expressed, articulated, and discussed. What provisions for the planning discourse would be needed to ensure this?