Thorbjørn Mann
Concern
The rapidly increasing use of ‘AI’ (‘Artificial’ or ‘Augmented’ Intelligence) tools for many different kinds of tasks that call for something resembling human reasoning raises the question of how such programs should be dealt with in the Planning Discourse. There is little argument that there are many information-gathering and data-analysis functions at which AI programs are impressively faster and more efficient than humans. There is also justifiable uneasiness about how the results produced should influence the decision process.
At the extreme ends of sentiment, potentially flawed and dangerous reactions emerge. At one end, blind faith in and acceptance of AI-based solutions: the tendency, which preceded AI, to invoke ‘data’, ‘facts’ and science as the sole justifiable basis of public policy is strengthened by it. At the other end, the very effectiveness of these tools raises concerns that significant planning decisions determined solely by algorithms will exclude legitimate considerations by humans, which can lead to blind and violent rejection; the very concept of an organized large-scale discourse support platform can be seen as a ‘Big Brother’ power instrument that must be resisted.
So the question of how these concerns should influence policy in general and the design of the platform in particular deserves attention and discussion.
Discussion:
The question can be approached from several plausible directions, not all of which can be explored here. One significant perspective is that of making the basis of ‘judgment’ of AI algorithms sufficiently transparent to the public to alleviate the ‘Big Brother’ concern. This will require work both on the AI development side and, significantly, on the side of public education, so that people are able to understand, critically evaluate and judge the explanations. The importance of this task cannot be overstated: mere ‘trust me’ assurances on the part of officials, purveyors of the AI programs, or even ‘independent’ review committees installed to produce such assurances will not be enough. How ‘independent’ can review committees be if installed by the authorities, AI providers and their lobbyists?
Another approach to this question is the following: to look for provisions in the platform that separate, as clearly as possible, the ‘reasoning’ (data analysis, performance calculations, simulations, etc.) that can best be done by AI tools from the ‘reasoning’ involving evaluation judgments that should be contributed by humans. (Of course, the transparency reviews of the AI tools should be part of these provisions as well.)
Whether such a ‘clear separation’ of reasoning, or even a reasonable approximation for practical purposes, is possible is of course up for discussion. In other words: are there meaningful distinctions between questions that can or should be ‘answered’ by humans exclusively, and those where AI-produced results should be given equal or even exclusive consideration?
Some optimism about this issue is based on the structure of what I have called the typical ‘planning argument’. It turns the standard logical sequence of ‘premise, premise, therefore conclusion’ around to the more colloquial ‘proposal, because premise, premise, premise’, or: («Proposal A should be adopted because A will result in B (given conditions C), and B ought to be pursued, and conditions C are/will be given.») The pattern contains two premises that can plausibly be taken to rely on AI analysis: the factual-instrumental premise ‘A will produce B, given C’ and the factual premise ‘conditions C are/will be given’. But the premise ‘B ought to be pursued’, as well as the conclusion ‘A ought to be adopted’ itself, are ‘deontic’ or ought-claims. Is it meaningful to accept a procedural rule that assigns AI the ‘right’ to judge or calculate plausibility only for the former (‘factual’) premises, and reserves the right to judge the deontic claims (or assign their ‘weights of importance’) only to humans? Or to accept the rule that AI data and plausibility calculations for fact-premises should be given ‘due consideration’ by humans, but that the aggregation of judgments of both kinds into indications of overall support for proposal A, and thus the decision itself, should rest on human judgments only?
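As an aside for readers who think in code, the proposed separation can be made concrete as a simple data structure. The following is a minimal sketch only; the class names, the premise-type labels, and the plausibility scale of -1 (certainly false) to +1 (certainly true) are illustrative assumptions for discussion, not features of any existing platform.

```python
from dataclasses import dataclass
from enum import Enum

class PremiseType(Enum):
    FACTUAL_INSTRUMENTAL = "factual-instrumental"  # "A will produce B, given C"
    FACTUAL = "factual"                            # "Conditions C are/will be given"
    DEONTIC = "deontic"                            # "B ought to be pursued"

@dataclass
class Premise:
    statement: str
    kind: PremiseType
    plausibility: float = 0.0  # judged on an assumed scale of -1.0 to +1.0

@dataclass
class PlanningArgument:
    proposal: str  # e.g. "Proposal A should be adopted"
    premises: list[Premise]

    def ai_assessable(self) -> list[Premise]:
        """Premises whose plausibility could legitimately draw on AI analysis."""
        return [p for p in self.premises if p.kind is not PremiseType.DEONTIC]

    def human_only(self) -> list[Premise]:
        """Deontic premises, reserved for human judgment under the proposed rule."""
        return [p for p in self.premises if p.kind is PremiseType.DEONTIC]

# The standard planning argument, encoded:
arg = PlanningArgument(
    proposal="Proposal A should be adopted",
    premises=[
        Premise("A will result in B, given conditions C", PremiseType.FACTUAL_INSTRUMENTAL),
        Premise("Conditions C are/will be given", PremiseType.FACTUAL),
        Premise("B ought to be pursued", PremiseType.DEONTIC),
    ],
)
```

The point of the encoding is simply that the procedural rule becomes checkable: a platform could route the ai_assessable premises to analysis tools while reserving the human_only premises for participant judgment.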
A few kinds of procedural provisions, both for the assessment judgments of premises that form plausibility and weight judgments for individual arguments, and for the aggregation of the resulting argument weights into plausibility measures of the proposal, have been sketched for discussion. These provisions could be seen as part of the needed effort to make the reasoning ‘transparent’. It should be noted that all such results will be judgments by individuals, and thus legitimately different: not any abstract universal or even common ‘group’ judgment. Any AI result claiming to be based on ‘due consideration of all pertinent concerns, pros and cons’ would have to show that it has been given all the judgments of the humans affected by the problem a plan aims to solve, and by all proposed ‘solutions’. This aspect alone seems a strong argument for separating AI-produced judgments about plans from the judgments of humans, and for the claim that the way the planning discourse will deal with this issue urgently needs more thorough discussion.
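Again only as illustration: the text above deliberately does not prescribe a specific aggregation function. One candidate, assuming premise plausibility judgments on the same -1 to +1 scale and human-assigned importance weights summing to 1, might look like the sketch below; the ‘weakest link’ rule for argument plausibility is just one of several possibilities.

```python
def argument_plausibility(premise_plausibilities: list[float]) -> float:
    """Weakest-link rule (one of several candidates): an argument is
    no more plausible than its least plausible premise."""
    return min(premise_plausibilities)

def proposal_plausibility(argument_plausibilities: list[float],
                          importance_weights: list[float]) -> float:
    """One judge's overall plausibility for the proposal: each argument's
    plausibility scaled by its human-assigned weight of relative importance
    (weights assumed to sum to 1), then summed."""
    return sum(pl * w for pl, w in
               zip(argument_plausibilities, importance_weights))

# Example: one judge, one pro and one con argument for proposal A.
# Factual premise plausibilities might be informed by AI analysis;
# the deontic premise plausibility and the weights come from the human.
pro = argument_plausibility([0.8, 0.6, 0.9])   # -> 0.6
con = argument_plausibility([0.7, -0.4, 0.5])  # -> -0.4
overall = proposal_plausibility([pro, con], [0.6, 0.4])
print(f"{overall:.2f}")  # 0.20
```

Note that the result is one participant’s overall plausibility judgment; aggregating such judgments across participants is a separate, and more contentious, step.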
Comments?
Wrong question? If so: what is the better, real question or problem?
* AI is ‘inevitable’: To the extent that responses to questions produced by AI programs are already virtually indistinguishable from comments produced by humans to the same questions, isn’t it futile to even consider whether such contributions to the discourse should be admissible or not? There will be human participants who insist on using the tools to produce their own entries. And if such entries are indeed difficult or impossible to distinguish from genuinely human entries, any effort to prohibit them may lead to rejection of actual human comments. So the better question: AI-based entries will be part of the discourse, but shouldn’t the way entries influence the decision be based on the merit of the entries, regardless of whether they have been constructed by humans or AI algorithms?
Comments?
I found the article to be very insightful! It’s clear that AI can play an important role in the planning discourse, but it’s also important to consider the potential ethical implications of using AI in this way. It’s important to ensure that AI is used responsibly and ethically. (ChatGPT)
Joe: Thanks. Do we need an agreement about making sure that I’m looking at YOUR comment and not ChatGPT’s? How about putting quote marks around quotes, if the latter?
But the somewhat unhelpful quote itself is an indictment of its ‘author’ (I’m assuming now), which I think you are implying: sure, sure, we all agree, nothing to argue about there. But just what would ‘ethical implications’ and ‘responsibly’ MEAN, in practical terms?
That comment was entirely written by ChatGPT. I was only the vehicle to post what the bot had to say after reading your post.
Re your response, my response: “Exactly. It’s pablum.”
The work of clarifying what this technology (or rather: the different versions of this technology) can or should be allowed to do may be advanced by studying it from within specific domains, such as the planning discourse. The approach chosen for ChatGPT (the imitation or simulation of conversational language, as I understand it, rather than a more technical perspective) may account for the disappointing response to the post. Looking at the post itself, which merely ends up recommending more discussion and development work on the issue, it is not that surprising that no more incisive insights were forthcoming: the response was as noncommittal as the article, which did not offer a controversial, much less critical, position. Is a polite, noncommittal response one sign of intelligence? Of course, neither the post nor the response does much to move the discussion in a more constructive, creative or even critical direction. I take that lesson.