Levels of assessment depth in planning discourse: A three-tier experimental (‘pilot’) version of a planning discourse support system

Thorbjoern Mann, February 2018

Overview

A ‘pilot’ version of a needed full scale Planning Discourse Support System (‘PDSS’)
to be run on current social media platforms such as Facebook

The following are suggestions for an experimental application of a ‘pilot’ version of the structured planning discourse platform that should be developed for planning projects with wide public participation, at scales ranging from local issues to global projects.

Currently available platforms do not yet offer all desirable features of a viable PDSS

The eventual ‘global’ platform will require research, development and integrated programming features that current social media platforms do not yet offer. The ‘pilot’ project aims at producing adequate material to guide further work and to attract support and funding, by running a limited ‘pilot’ version of the eventual platform on currently available platforms.

Provisions for realizing key aims of planning (wide participation; decisions based on the merit of discourse contributions; recognition of contribution merit), presented as optional add-on features, leading to a three-tier presentation of the pilot platform

One of the key aims of the overall project is the development of a planning process leading to decisions based on the assessed merit of participants’ contributions to the discourse. The procedural provisions for realizing that aim are precisely those that are not supported by current platforms, and will have to be implemented as optional add-on processes (‘special techniques’) by smaller teams, outside of the main discourse. Therefore, the proposal is presented as a set of three optional ‘levels’ of depth of analysis and evaluation. Actual projects may choose the appropriate level in consideration of the project’s complexity and importance, of the degree of consensus or controversy emerging during the discourse, and of the team’s familiarity with the entire approach and the techniques involved.

Contents:
1 General provisions
2 Basic structured discourse
3 Structured discourse with argument plausibility assessment
4 Assessment of plausibility-adjusted plan Quality
5 Sample ‘procedural’ agreements
6 Possible decision modes based on contribution merit
7 Discourse contribution merit rewards

—-
1 General Provisions

Main (e.g. Facebook) Group Page

Assuming a venue like Facebook, a new ‘group’ page will be opened for the experiment. It will serve as a forum to discuss the approach and platform provisions, and to propose and select ‘projects’ for discussion under the agreed-upon procedures.

Project proposals and selection

Group members can propose ‘projects’ for discussion. To avoid project discussions being overwhelmed by references to previous research and literature, the projects selected for this experiment should be as ‘new’ (‘unprecedented’) and limited in scope as possible. (Regrettably, this will make many urgent and important issues ineligible for selection.)

Separate Project Page for selected projects

For each selected project, a new group page will be opened to ensure sufficient hierarchical organization options within the project. There will be specific designated threads within each group, providing the basic structure of each discourse. A key feature not seen in social media discussions is the ‘Next step’ interruption of the process, in which participants can choose between several options of continuing or ending the process.

Project participants
‘Participants’ in projects will be selected from the group members who have signed up, expressed an interest in participating, and agreed to proceed according to the procedural agreements for the project.

Main Process and ‘Special Techniques’

The basic process of project discourse is the same for all three levels; the argument plausibility assessment and project quality assessment procedures are easily added to the simple sequence of steps of the ‘basic’ versions described in section 2.
In previous drafts of the proposal, these assessment tools have been described as ‘special techniques’ that would require provisions for formatting, programming and calculation. For any pilot version, they would have to be conducted by ‘special teams’ outside of the main discourse process. This also applies to the proposed three-level version and the two additional ‘levels’ of assessment presented here. Smaller ‘special techniques teams’ will have to be designated to work outside of the main group discussion (e.g. by email); they will report their results back to the main project group for consideration and discussion.

For the first implementation of the pilot experiment, only two such special techniques are considered: argument plausibility assessment, and the evaluation of plan proposal ‘quality’ (‘goodness’). They are seen as key components of the effort to link decisions to the merit of discourse contributions.


2 Basic structured discourse

Project selection

Group members post suggestions for projects (‘project candidates’) on the group’s main ‘bulletin board’. If a candidate is selected, the posting member will act as its ‘facilitator’ or co-facilitator. Selection is done by posting an agreed-upon minimum of ‘likes’ for a project candidate. By posting a ‘like’, group members signal their intention to become ‘project participants’ and actively contribute to the discussion.

Project bulletin page, Project description

For selected projects, a new page serving as introduction and ‘bulletin board’ for the project will be opened. It will contain a description of the project (which will be updated as modifications are agreed upon). For the first pilot exercise, the projects should be actual plan or action proposals.

Procedural agreements

On a separate thread, a ‘default’ version of procedural agreements will be posted. They may be modified in response to project conditions and expected level of depth, ideally before the discussion starts. The agreements will specify the selection criteria for issues, and the decision modes for reaching recommendations or decisions on the project proposals. (See section 5 for a default set of agreements).

General discussion thread (unstructured)

A ‘General discussion’ thread will be started for the project, inviting comments from all group members. For this thread, there are no special format expectations other than general ‘netiquette’.

Issue candidates
On a ‘bulletin board’ subthread of the project intro thread, participants can propose ‘issue’ or ‘thread’ candidates, about questions or issues that emerge as needing discussion in the ‘general discussion’ thread. Selection will be based on an agreed-upon number of ‘likes’, ‘dislikes’ or comments about the respective issue in the ‘general discussion’ thread.

Issue threads: For each selected issue, a separate issue thread will be opened. The questions or claims of issue threads should be stated more specifically in the expectation of clear answers or arguments, and comments should meet those expectations.

It may be helpful to distinguish different types of questions, and their expected responses:

– ‘Explanatory’ questions (explanations, descriptions, definitions);
– ‘Factual’ questions (‘factual’ claims, data, arguments);
– ‘Instrumental’ questions (instrumental claims: ‘how to do …’);
– ‘Deontic’ (‘ought’-) questions (arguments pro / con proposals).

Links and References thread

Comments containing links and references should provide brief explanations about what positions the link addresses or supports; the links should also be posted on a ‘links and references’ thread.

Visual material: diagrams and maps

Comments can be accompanied by diagrams, maps, photos, or other visual material. Comments should briefly explain the gist of the message supported by the picture. (“What is the ‘argument’ of the image?) For complex discussions, overview ‘maps’ of the evolving network of issues should be posted on the project ‘bulletin’ thread.

‘Next Step?’
Anytime participants sense that the discussion has exhausted itself or needs input of other information or analysis, they can make a motion for a ‘Next step?’ interruption, specifying the suggested next step:

– a decision on the main proposal or a part,
– call for more information, analysis;
– call for a ‘special technique’ (with or without postponement of further discussion)
– call for modifying the proposal, or
– changing procedural rules;
– continuing the discussion or
– dropping the issue, ending the discussion without decision.

These will be decided upon according to the procedural rules ‘currently’ in force.

Decision on the plan proposal

The decision about the proposed plan — or partial decisions about features that should be part of the plan — will be posted on the project’s ‘bulletin board’ thread, together with a brief report. Reports about the process experience, problems and successes, etc. will be of special interest for further development of the tool.

3    Structured discourse with argument plausibility assessment

The sequence of steps for the discourse with added argument plausibility assessment is the same as that of the ‘basic’ process described in section 2 above. At each major step, participants can make interim judgments about the plausibility of the proposed plan (for comparison with later, more deliberated judgments). At each of these steps, there also exists the option of responding to a ‘Next step?’ motion with a decision to cut the process short, based on emerging consensus or on other insights (such as ‘wrong question’) that suggest dropping the issue. Aside from these interim judgments, the sequence of steps proceeds to construct an overall judgment of proposal plausibility in ‘bottom-up’ fashion from the plausibility judgments of individual argument premises.

Presenting the proposal

The proposal for which the argument assessment is called is presented and described in as much detail as is available.
(Optional: Before having studied the arguments, participants make a first offhand, overall judgment of proposal plausibility Planploo’ on a +1 / -1 scale, for comparison with later judgments. Group statistics, e.g. GPlanploo’ (mean, range, …), are calculated and examined for consensus or significant differences.)

Displaying all pro/con arguments

The pro / con arguments that have been raised about the issue and posted in the respective ‘issue’ thread are displayed and studied, if possible with the assistance of ‘issue maps’ showing the emerging network of interrelated issues. (Optional: participants assign a second overall offhand plan plausibility judgment Planploo”, with group statistics GPlanploo”.)

Preparation of formal argument display and worksheets

For the formal argument plausibility assessment, worksheets are prepared that list
a) the deontic premises of each argument (goals, concerns), and
b) the key premises of all arguments (including those left unstated as ‘taken for granted’).

Assignment of ‘Weights of Relative Importance’ w

Participants assign ‘weights of relative importance’ w to the deontics in list (a), such that 0 ≤ wi ≤ 1 and ∑wi = 1 over all i deontic premises.

Assignment of premise plausibility judgments prempl to all argument premises

Participants assign plausibility judgments to all argument premises, on a scale from -1 (totally implausible) through 0 (don’t know) to +1 (totally plausible).

Calculation of argument plausibility Argpl

For each participant and argument, the ‘argument plausibility’ Argpl is calculated from the premise plausibility judgments, e.g. Argpli = ∏ (premplj) over all j premises of argument i.

Calculation of Argument Weight Argw

From the argument plausibility judgment and the weight of the deontic premise for that argument, the ‘weight of the respective argument’ Argw is calculated, e.g. Argwi = Argpli * wi.

Calculation of Plan plausibility Planpld

The argument weights Argw of all arguments pro and con are aggregated into the deliberated plan plausibility score Planpld for each participant, e.g. Planpld = ∑ (Argwi) over all i arguments.
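The three calculation steps above (Argpl, Argw, Planpld) can be sketched in code. This is a minimal illustration of the suggested formulas for a single participant; the judgment values are invented for the example:

```python
from math import prod

def argument_plausibility(premise_pls):
    """Argpl: product of the premise plausibility judgments, each in [-1, +1]."""
    return prod(premise_pls)

def plan_plausibility(arguments):
    """Planpld: sum of the argument weights Argw_i = Argpl_i * w_i.
    `arguments` is a list of (premise plausibilities, deontic weight) pairs;
    con arguments carry a negative plausibility product."""
    return sum(argument_plausibility(pls) * w for pls, w in arguments)

# One participant's invented judgments: two pro arguments, one con:
arguments = [
    ([0.9, 0.8], 0.5),    # pro: Argpl = 0.72, weight 0.5
    ([0.6, 0.5], 0.3),    # pro: Argpl = 0.30, weight 0.3
    ([-0.7, 0.8], 0.2),   # con: Argpl = -0.56, weight 0.2
]
print(plan_plausibility(arguments))  # ≈ 0.338
```

Note that a single totally implausible premise (plausibility near zero or negative) pulls the whole argument's plausibility down, which is the intended behavior of the product form.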

Calculating group statistics of results

Statistics of the plan plausibility judgment scores across the group (mean, median, range, min / max) are calculated and discussed. Areas of emerging consensus are identified, as well as areas of disagreement or lack of adequate information. The interim judgments designated as ‘optional’ above can serve to illustrate the learning process participants go through.
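A minimal sketch of this group statistics step, using Python's standard library; the participant scores are invented:

```python
from statistics import mean, median

def group_stats(scores):
    """Summary statistics of the participants' plan plausibility scores."""
    return {
        "mean": mean(scores),
        "median": median(scores),
        "min": min(scores),
        "max": max(scores),
        "range": max(scores) - min(scores),
    }

# Invented Planpld scores of five participants:
scores = [0.34, 0.1, -0.2, 0.5, 0.42]
stats = group_stats(scores)
print(stats)  # a wide range signals disagreement worth discussing
```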

Argument assessment team develops recommendations for decision or improvement of proposed plan

The argument assessment team reports its findings and analysis, and makes recommendations to the entire group in a ‘Next Step?’ deliberation.

4 Assessment of plausibility-adjusted plan Quality

Assigning quality judgments

Because pro / con arguments usually refer to the deontic concerns (goals, objectives) in qualitative terms, they do not generally provide adequate information about the actual quality or ‘goodness’ that may be achieved by a plan proposal. A finer-grained assessment is especially important for the comparison of several proposed plan alternatives. It should be obvious that all predictions about the future performance of plans will be subject to the plausibility qualifications examined in section 3 above, so a goodness or quality assessment may be grafted onto the respective steps of the argument plausibility assessment. The following steps describe one version of the resulting process.

Proposal presentation and first offhand quality judgment

(Optional step:) Upon presentation of a proposal, participants can offer a first overall offhand goodness or quality judgment PlanQoo, e.g. on a +3 / -3 scale, for future comparison with deliberated results.

Listing deontic claims (goals, concerns)

From the pro / con arguments compiled in the argument assessment process (section 3) the goals, concerns (deontic premises) are assembled. These represent ‘goodness evaluation aspects’ against which competing plans will be evaluated.

Adding other aspects not mentioned in arguments

Participants may want to add other ‘standard’ as well as situation-specific aspects that may not have been mentioned in the discussion. (There is no guarantee that all concerns that influence participants’ sense of quality of a plan will actually be brought up and made explicit in a discussion).

Determining criteria (measures of performance) for all aspects

For all aspects, ‘measures of performance’ will be determined that allow assessment about how well a plan will have met the goal or concern. These may be ‘objective’ criteria or more subjective distinctions. For some criteria, ‘criterion functions’ can show how a person’s ‘quality’ score depends on the corresponding criterion.
Example: plan proposals will usually be compared and evaluated according to their expected ‘cost’, and usually ‘lower cost’ is considered ‘better’ (all else being equal) than ‘higher cost’. But while participants may agree that ‘zero cost’ would be best and so deserve a +3 (‘couldn’t be better’) score, they can differ significantly about what level of cost would be ‘acceptable’, and at what level the score should become negative. In the sketch below, participant x considers a much higher cost still ‘so-so’, or acceptable, than participant o:

+3 –xo————————————————–
+2 ———–o–x—————————————
+1 —————-o—–x——————————-
+/-0 ——————o———x———————–
-1 ———————–o————x—————–
-2 ——————————o———x————-
-3 ——————————————————- ($∞ would be -3, ‘couldn’t be worse’)
$0 |       |        |        |        |         |        |        |         |  > Cost criterion function.
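A criterion function of this kind can be sketched as a piecewise-linear mapping from cost to a -3…+3 quality score. The anchor points for participants x and o below are illustrative, not read off the chart:

```python
def criterion_score(cost, anchors):
    """Piecewise-linear criterion function: maps a cost to a -3..+3
    quality score, interpolating between (cost, score) anchor points."""
    pts = sorted(anchors)
    if cost <= pts[0][0]:
        return pts[0][1]
    for (c0, s0), (c1, s1) in zip(pts, pts[1:]):
        if cost <= c1:
            return s0 + (s1 - s0) * (cost - c0) / (c1 - c0)
    return pts[-1][1]

# Both participants give $0 a +3 score but disagree where the score
# crosses zero (anchor values are invented):
participant_x = [(0, 3), (600, 0), (1200, -3)]   # tolerates higher cost
participant_o = [(0, 3), (400, 0), (800, -3)]
print(criterion_score(500, participant_x))  # still positive: 0.5
print(criterion_score(500, participant_o))  # already negative: -0.75
```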

“Weighting’ of aspects, subaspects etc.

The ‘weight’ assignments of aspects (deontics) should correspond to the weighting of deontic premises in the process of argument assessment. However, if more aspects have been added to the aspect list, the weighting established in the argument assessment process must be revised. Aspect weights are on a zero to +1 scale: 0 ≤ wi ≤ 1 and ∑wi = +1 for all i aspects. For complex plans, the aspect list may have several ‘levels’ and resemble an ‘aspect tree’. The weighting at each level should follow the same rule of 0 ≤ w ≤ 1 and ∑w = 1.
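For a multi-level ‘aspect tree’, the effective weight of each leaf aspect is the product of the weights along its path; since the weights at each level sum to 1, the effective leaf weights also sum to 1. A sketch, with a hypothetical two-level tree:

```python
def leaf_weights(tree, parent_w=1.0, path=()):
    """Flatten an aspect tree into effective leaf weights.

    `tree` maps aspect name -> (weight, subtree); a subtree of None
    marks a leaf. Weights at each level must sum to 1, so the
    effective leaf weights also sum to 1.
    """
    level_sum = sum(w for w, _ in tree.values())
    assert abs(level_sum - 1.0) < 1e-9, "weights at each level must sum to 1"
    leaves = {}
    for name, (w, sub) in tree.items():
        if sub is None:
            leaves[" / ".join(path + (name,))] = parent_w * w
        else:
            leaves.update(leaf_weights(sub, parent_w * w, path + (name,)))
    return leaves

# Hypothetical aspect tree for a building plan:
tree = {
    "cost":    (0.4, None),
    "quality": (0.6, {
        "function":   (0.5, None),
        "appearance": (0.5, None),
    }),
}
print(leaf_weights(tree))
# {'cost': 0.4, 'quality / function': 0.3, 'quality / appearance': 0.3}
```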

Assigning quality judgment scores

Each participant will assign ‘quality’ or ‘goodness’ judgments to all aspects / subaspects of the evaluation worksheet, for all competing plan proposals, on a +3 to -3 scale (+3 meaning ‘could not possibly be better’, -3 ‘couldn’t possibly be worse’, and zero meaning ‘so-so’, ‘can’t decide’, or ‘not applicable’).

Combining quality with plausibility scores into a ‘weighted plausibility-adjusted quality score’ Argqplw

Each (partial) quality score qi will be combined with the respective argument plausibility score Argpli from the process in section 3 and the aspect weight wi, resulting in a ‘weighted plausibility-adjusted quality score’ Argqplwi = Argpli * qi * wi.

Aggregating scores into Plan quality score PlanQ

The weighted partial scores can be aggregated into overall plan quality scores, e.g.:
PlanQ = ∑ (Argqplwi) over all i aspects, or
PlanQ = Min (Argqplwi), or
PlanQ = ∏ (Argqpli + 3)^wi − 3.
(The appropriateness of these functions for a given case must be discussed!)
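The three candidate aggregation functions can be sketched as follows; as stressed above, their appropriateness for a given case must be discussed, and the scores below are invented:

```python
from math import prod

def planq_sum(aspects):
    """PlanQ = sum of the weighted partial scores Argqplw_i = q_i * w_i."""
    return sum(q * w for q, w in aspects)

def planq_min(aspects):
    """Weakest-link rule: PlanQ = the lowest weighted partial score."""
    return min(q * w for q, w in aspects)

def planq_geometric(aspects):
    """Weighted geometric aggregation on the shifted 0..6 scale:
    PlanQ = prod((q_i + 3) ** w_i) - 3; a very poor aspect pulls the
    overall score down more sharply than in the weighted sum."""
    return prod((q + 3) ** w for q, w in aspects) - 3

# (plausibility-adjusted quality Argqpl_i, weight w_i) per aspect, invented:
aspects = [(2.0, 0.5), (1.0, 0.3), (-1.0, 0.2)]
print(planq_sum(aspects))        # ≈ 1.1
print(planq_geometric(aspects))  # lower: the poor aspect weighs in harder
```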

Group statistics: GArgqpl and GPlanQ

Like the statistics of the plausibility assessments, statistical analyses of these scores can be calculated. Whether a resulting measure such as Mean (PlanQ) should be accepted as a ‘group judgment’ is questionable, but such measures can become helpful guides for any decisions the group will have to make. Again, calculation of interim results can provide information about the ‘learning process’ of team members, point out ‘weaknesses’ of plans that are responsible for specific poor judgment scores, and guide suggestions for plan improvements.

Team reports results back to main forum

A team report should be prepared for presentation back to the main discussion.

5     Sample procedural agreements

The proposed platform aims at facilitating problem-solving, planning, design, and policy-making discussions that are expected to result in some form of decision or recommendation to adopt plans for action. To achieve decisions in groups, it is necessary to have some basic agreements as to how those decisions will be determined. Traditional decision modes such as voting are not appropriate for a large asynchronous online process with wide but unspecified participation (parties affected by proposed plans may be located across traditional voting eligibility boundaries: who, then, are the ‘legitimate’ voters?). The proposed approach aims at examining how decisions might be based on the quality of content contributions to the discourse rather than the mere number of voters or supporters.

Default agreements.

The following are proposed ‘default’ agreements; they should be confirmed (or adapted to circumstances) at the outset of a discourse. Later changes should be avoided as much as possible; ‘motions’ for such changes can be made as part of a ‘Next step’ pause in the discussion, and will be decided upon by an agreed-upon majority of participants who have ‘enlisted’ for the project, or by the agreements ‘currently’ in place.

Project groups.

Members of the Planning Discourse FB group (‘group members’) can propose ‘projects’ for discussion on the main group’s ‘Bulletin Board’ thread. Authors of group project proposals are assumed to moderate / facilitate the process for that project. Projects are approved for discussion if an appropriate number __ of group members sign up for ‘participation’ in the project.

Project Participants

Project participants are assumed to have read and agreed to these agreements, and expressed willingness to engage in sustained participation. The moderator may choose to limit the number of project participants, to keep the work manageable.

Discussion

Project discussion can be ‘started’ with a problem statement, a plan proposal, or a general question or issue. The project will be briefly described in the first thread. Another thread labeled ‘Project (or issue) ___ General comments’ will then be set up, for comments on the topic or issue, with questions of explanation or clarification, re-phrasing, answers, arguments and suggestions for decisions. Links or references should be accompanied by a brief statement of the answer or argument made or supported by the reference.

Candidate Issues

Participants and moderator can suggest candidate issues: potentially controversial questions about which divergent positions and opinions exist or are expected, and that should be clarified or settled before a decision is made. These will be listed in the project introduction thread as candidate issues. There, participants can enter ‘Likes’ to indicate whether they consider it necessary to ‘raise’ the issue for a detailed discussion. Likely issue candidates are questions about which members have posted significantly different positions in the ‘General comments’ thread, such that the nature of the eventual plan would significantly change depending on which positions are adopted.

‘Raised’ issues

Issue candidates receiving an agreed-upon amount of support (likes or opposing comments) are accepted and labeled as ‘Raised’. Each ‘raised’ issue will then become the subject of a separate thread, where participants post comments (answers, arguments, questions) to that issue.
It will be helpful to clearly identify the type of issue or question, so that posts can be clearly stated (and evaluated) as answers or arguments: for example:
– Explanations, definitions, meaning and details of concepts to ‘Explanatory questions’;
– Statements of ‘facts’ (data, answers, relationship claims) to Factual questions;
– Suggestions for (cause-effect or means to ends) relationships, to Instrumental questions;
– Arguments to deontic (ought-) questions or claims such as ‘Plan A should be adopted’, for example:
‘Yes, because A will bring about B given conditions C; B ought to be pursued; and conditions C are present’.

‘Next step?’ motion

At any time after the discussion has produced some entries, participants or moderator can request a ‘Next Step?’ interruption of the discussion, for example when the flow of comments seems to have dried up and a decision or a more systematic treatment of analysis or evaluation is called for. The ‘Next step’ call should specify the type of next step requested. It will be decided by an agreed-upon number of ‘likes’ out of the total number of participants. A ‘failed’ next-step motion will automatically activate the motion to continue the discussion. Failure of that motion, or a subsequent lack of new posts, will end discussion of that issue or project.

Decisions

Decisions (to adopt or reject a plan or proposition) are ‘settled’ by an agreed-upon decision criterion (e.g. a vote percentage of the total number of participants). The outcomes of ‘next step?’ motions will be recorded in the introduction thread as results, whether they lead to an adoption, modification, or rejection of the proposed measure or not.

Decision modes

As indicated before, traditional decision modes such as voting, with specified decision criteria such as percentages of ‘legitimate’ participants, are going to be inapplicable for large (‘global’) planning projects whose affected parties are not determined by e.g. citizenship or residency in defined geographic governance entities. It is therefore necessary to explore other decision modes using different decision criteria, with the notion of criteria based on the assessed merit of discourse contributions being an obvious choice to replace or complement the ‘democratic’ one-person, one-vote principle, or the principle of decisions made by elected representatives (again, by voting).
Participants are therefore encouraged to explore and adopt alternative decision modes. The assessment procedures in sections 3 and 4 have produced some ‘candidates’ for decision criteria, which cannot at this time be recommended as decisive alternatives to traditional tools, but might serve as guidance results for discussion:
– Group plan plausibility score GPlanpl;
– Group quality assessment score GPlanQ;
– Group plausibility-adjusted quality score GPlanQpl.
The controversial aspect of all these ‘group scores’ is the method for deriving them from the respective individual scores.

These measures also provide the opportunity to measure the degree of improvement a proposed plan achieves over the ‘initial’ problem situation it is expected to remedy. This leads to possible decision rules such as rejecting plans that do not achieve adequate improvement for some participants (people being ‘worse off’ after plan implementation), or selecting the plan that achieves the greatest degree of improvement overall. This of course requires that the existing situation be included in the assessment, as the basis for comparison.

Special techniques

In the ‘basic’ version of the process, no special analysis, solution development, or evaluation procedures are provided, mainly because the FB platform does not easily accommodate the formatting needed. The goal of preparing decisions or recommendations based on contribution merit or assessed quality of solutions may make it necessary to include such tools, especially more systematic evaluation than just reviewing pro and con arguments. If such techniques are called for in a ‘Next step?’ motion, special technique teams must be formed to carry out the work involved and report the results back to the group, followed by a ‘next step’ consideration. The techniques of systematic argument assessment (section 3) and evaluation of solution ‘goodness’ or ‘quality’ (section 4) are shown above as essential tools for achieving decisions based on the merit of discourse contributions.
Special techniques teams will have to be designated to work on these tasks ‘outside’ of the main discourse; they should be limited to small size, and will require somewhat more special engagement than the regular project participation.
Other special techniques, taken from the literature or developed by actual project teams, will be added to the ‘manual’ of tools available for projects. Techniques for problem analysis and solution idea generation will have a role, as will systems modeling and simulation: the ‘conditions’ premise, under which the cause-effect assumption of the factual-instrumental premise of planning arguments can be assumed to hold, is really the assumed state of the entire system (model) of interrelated variables and context conditions; an aspect that has not been adequately dealt with in the literature nor in the practice of systems consulting to planning projects.

6 Decision modes

For the smaller groups likely to be involved in ‘pilot’ applications of the proposed structured discourse, traditional decision modes such as ‘consensus’, ‘no objection’ to a decision motion, or majority voting may well be acceptable because they are familiar tools. For large-scale planning projects spanning many ordinary ‘jurisdictions’ (which derive the legitimacy of decisions from the number of legitimate ‘residents’), these modes become meaningless. This calls for different decision modes and criteria: an urgent task that has not received sufficient attention. The following summary only mentions traditional modes for comparison, without going into details of their respective merits or demerits, but explores potential decision criteria derived from the assessment processes of argument and proposal plausibility, or evaluation of proposal quality, above.

Voting:
Proposals are adopted upon receiving an agreed-upon percentage of approval votes from the body of ‘legitimate’ voters. The approval percentage can range from a simple majority, through a specified plurality or supermajority such as 2/3 or 3/4, to full ‘consensus’ (which means that a lone dissenter has the equivalent of veto power). Variations: voting by designated bodies of representatives, determined by elections, or by appointment based on qualifications of training, expertise, etc.

Decision based on meeting (minimum) qualification rules and regulations.
Plans for building projects will traditionally receive ‘approval’ upon review of whether they meet standard ‘regulations’ specified by law. Regulations describe ‘minimum’ expectations mandated by public safety concerns or zoning conventions but don’t address other ‘quality’ concerns. They will lead to ‘automatic’ rejection (e.g. of a building permit application) if even one regulation is not met.

Decision based on specified performance measures
Decision-making groups can decide to select plans based on assessed or calculated ‘performance’. Thus, real estate developers look for plan versions that promise a high return on investment (over a specified ‘planning horizon’). A well-known approach for public projects is the ‘Benefit/Cost’ approach, calculating the benefit minus cost (B − C) or the benefit/cost ratio B/C (and variations thereof).

Plan proposal plausibility
The argument assessment approach described in section 3 results in (individual) measures of proposal plausibility. For the individual, the resulting proposal plausibility could meaningfully serve as a decision guide: a proposal can be accepted if its plausibility exceeds a certain threshold – e.g. the ‘so-so’-value of ‘zero’ or the plausibility value of the existing situation or ‘do nothing’ option. For a set of competing proposals: select the one with the highest plausibility.

It is tempting but controversial to use statistical aggregations of these pl-measures as group decision criteria; for example, the mean group plausibility value GPlanpld. For various reasons (e.g. the issue of overriding minority concerns), this should be resisted. A better approach would be to develop a measure of improvement of pl-conditions for all parties compared to the existing condition, with the proviso that plans resulting in ‘negative improvement’ should be rejected (or modified until showing improvement for all affected parties).
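The improvement-based rule just described (reject plans that leave any party worse off than the existing condition; otherwise prefer the greatest overall improvement) can be sketched as follows, with invented scores:

```python
def improvements(plan_pls, baseline_pls):
    """Per-participant improvement of a plan's score over the baseline
    (the assessed existing situation)."""
    return [p - b for p, b in zip(plan_pls, baseline_pls)]

def select_plan(candidates, baseline_pls):
    """Reject plans that leave any participant worse off; among the rest,
    choose the plan with the greatest total improvement."""
    best, best_total = None, float("-inf")
    for name, pls in candidates.items():
        imps = improvements(pls, baseline_pls)
        if min(imps) < 0:          # someone would be worse off: reject
            continue
        if sum(imps) > best_total:
            best, best_total = name, sum(imps)
    return best

baseline = [0.0, 0.1, -0.2]        # assessments of the existing situation
candidates = {
    "Plan A": [0.5, 0.4, 0.1],     # improvement for every participant
    "Plan B": [0.9, 0.8, -0.4],    # larger total, but one party loses
}
print(select_plan(candidates, baseline))  # Plan A
```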

Plausibility-adjusted ‘Quality’ assessment measures.
Similar considerations apply to the measures derived from the approach of evaluating plans for ‘goodness’ or ‘quality’ while adjusting the implied performance claims with the plausibility assessments. The resulting group statistics, again, can guide (but should not, in their pure form, determine) decisions, especially efforts to modify proposals to achieve better results for all affected parties (the interim results pinpointing the specific areas of potential improvement).

7 Contribution merit rewards

The proposal to offer reward points for discourse contributions is strongly suggested for the eventual overall platform, but difficult to implement in the pilot versions (without resorting to additional work and accounting means ‘outside’ of the main discussion). Its potential ‘side benefits’ deserve some consideration even for the ‘pilot’ version.

Participants are awarded ‘basic contribution points’ for entries to the discussion, provided that they are ‘new’ (to the respective discussion) and not mere repetitions of entries offering essentially the same content. If the discussion later uses assessment methods such as the argument plausibility evaluation, these basic ‘neutral’ credits are then modified by the group’s plausibility or importance assessment results, for example by simply multiplying the basic credit point (e.g. ‘1’) by the group’s pl-assessment of that claim.
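The credit mechanism described above (credit only for the first entry of a given content, later scaled by the group's plausibility assessment) might be sketched as follows; the entries and assessments are invented:

```python
def award_credits(entries, group_pl):
    """Award the basic credit (1) only for the first entry with a given
    content, then scale each credit by the group's plausibility
    assessment of that content, in [-1, +1]."""
    seen, credits = set(), {}
    for author, content in entries:
        if content in seen:          # repetition: no credit
            continue
        seen.add(content)
        credits[author] = credits.get(author, 0.0) + group_pl.get(content, 0.0)
    return credits

# bob repeats ann's claim (no credit) and adds an implausible one:
entries = [("ann", "claim X"), ("bob", "claim X"), ("bob", "claim Y")]
group_pl = {"claim X": 0.8, "claim Y": -0.5}  # later group assessments
print(award_credits(entries, group_pl))  # {'ann': 0.8, 'bob': -0.5}
```

Note how an implausible claim turns into a negative credit, which is the intended deterrent for ‘troll’ entries.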

The immediate benefits of this are:
– Such rewards represent an incentive for participation and for speedy assembly of needed information (since delayed entries of the same content will not get credit).
– They help eliminate the repetitious comments that often overwhelm discussions on social media: the same content will only be counted and presented once.
– The prospect of later plausibility or quality assessment by the group, which can turn the credit for an ill-considered, false or insufficiently supported claim into a negative value (by being multiplied by a negative pl-value), will also discourage contributions of lacking or dubious merit. ‘Troll’ entries will not only occur just once, but will then also receive appropriate negative appraisal, and thus be discouraged.
– Sincere participants will be encouraged to provide adequate support for their claims.
Together with the increased discipline introduced by the assessment exercises, this can help improve the overall quality of discourse.

Credit point accounts built up in this fashion are of little value if they are not ‘fungible’, that is, if they have no value beyond participation in the discourse. This may be remedied:
a) within the process, by using them to adjust the ‘weight’ of participants’ ‘votes’ or other factors in determining decisions;
b) beyond the process, by using contribution merit accounts as additional signs of qualification for employment or public office. An idea for using such accounts as a means of controlling power has been suggested: it acknowledges both that there are public positions calling for ‘fast’ decisions that cannot wait for the outcome of lengthy discussions, and that people seek power (‘empowerment’) almost as a kind of human need; but, like most other needs we are asked to pay for meeting (in one way or another), it would introduce the requirement that power decisions be ‘paid for’ with credit points. (One of several issues for discussion.)

—ooo—
