User talk:Sj/mattersofpolicy

Talk: Graded permissions system

The +1 for admin flag will encourage people to obtain sysop access for the wrong reasons.

fair enough. thinking of other ways to include human verification.
Need T > 0.1 to post normally to meta/WP: pages

How does this apply across different projects? I often post to the WP namespace on other Wikipedias despite having made no edits to the article namespace there. Would such a system prevent me from doing that, or would the metrics carry across?

They should carry. This is a crude Turing test to weed out bots, combined with a time-delay to increase the leverage of the community over vandals. Assuming unified logins, this would be a property of that login. +sj+ 07:01, 2004 Apr 14 (UTC)
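
A minimal sketch of how such a gate might look, assuming the trust score T is a single global property of a unified login. Everything below (names, data shapes, threshold handling) is illustrative, not part of the proposal's text:

```python
# Hypothetical sketch: gating posts to meta/WP: pages on a global trust
# score tied to a unified login. All names here are illustrative.

TRUST_THRESHOLD = 0.1  # the "T > 0.1" figure from the proposal

def can_post_to_project_namespace(user: dict) -> bool:
    # The trust score travels with the unified login, so posting to
    # WP: pages on any Wikipedia is gated by the same global score.
    return user["trust"] > TRUST_THRESHOLD

# A contributor trusted on en: may post to WP: pages on a Wikipedia
# where they have never edited an article:
alice = {"login": "unified", "trust": 0.95}
print(can_post_to_project_namespace(alice))  # True, on every project
```
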
if the bot tried to create hundreds of accounts, this would be obvious.

It might be obvious, but it also might go unnoticed for some time on the small Wikipedias. I'm not sure this solves anything, as a spambot could still make just one edit on all 148 (?) Wikipedias, which would require a lot of effort to clean up.

The first two edits by a new user wouldn't automatically update the live version, so a bot spamming the main page of 150 WPs wouldn't ruin their usefulness. This just adds a 6-hr lead time before a new user could get instant gratification.
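
A hedged sketch of that delayed-publication rule: the two-edit and six-hour figures come from this thread, while the function and names are invented for illustration:

```python
import datetime

# Hypothetical sketch: a new account's first two edits go to a pending
# version rather than the live (googlable) one, and nothing goes live
# until the account is at least six hours old.

GRACE_EDITS = 2
GRACE_PERIOD = datetime.timedelta(hours=6)

def edit_goes_live(account_age: datetime.timedelta, prior_edits: int) -> bool:
    return account_age >= GRACE_PERIOD and prior_edits >= GRACE_EDITS

# A bot spamming the main page of 150 WPs minutes after signup never
# touches the live versions:
print(edit_goes_live(datetime.timedelta(minutes=5), 0))  # False
# Six hours and two accepted edits later, edits publish instantly:
print(edit_goes_live(datetime.timedelta(hours=7), 2))    # True
```
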
Requiring that a user with T > 0.1 edit a page after such a newbie, before changes go live, is not offensive

No, but a lot of people will claim it is anti-wiki.

I expect they will. The best response I know of is to develop a beautiful edit interface that cleanly shows both live and 'most-recent' versions, in a way respectful to the most-recent version... and discuss its wikiness after that is available for comment. +sj+ 07:01, 2004 Apr 14 (UTC)

Overall, I think there are problems with no one being available to code such a thing, and unresolved issues with sockpuppets preventing it from working. Angela. 11:54, Apr 13, 2004 (UTC)

Please elaborate on the sockpuppet issues, if you would. +sj+ 07:01, 2004 Apr 14 (UTC)

Hi Sj... my general feeling is that this is a bad idea. Any system can be gamed, and a previously good user can suddenly stop being worthy of trust. Basically, I think it's an oversimplification. I was looking at the figures. Surely if someone who has been here for 2 months and has made 700 edits manages to get themselves banned for 24 hours, they no longer have enough credit to post at all? And is it really logical to give people who have been here a long time extra trust? I assume this is supposed to reflect dedication to the project, but it rewards dedicated trolls (although this is tempered by banning = -1) and sockpuppet creation. On the one hand it feels like this system bypasses "assume good faith", and on the other I feel like it forces us to do just that even where it might not be merited (long-term sockpuppets/trolls). Contradictory ramblings end. fabiform | talk 18:34, 14 Apr 2004 (UTC)

Hi fabi, thanks for the feedback. Everyone has enough 'credit' to post, but the ability to get immediate gratification (by updating the googlable and exportable page) is, as you note, a privilege that users might lose under this system for a few months (or until they contribute more to the project) after a single tempban.
As for giving extra trust to long-time users: the point is not to reward dedication, but to increase leverage. It's okay if it takes a lot of effort to deal with dedicated trolls (who have behaved for 3 months); that at least can be handled scalably.
I am trying to support "assuming good faith" by allowing anyone to immediately update any public page after proving they're not a bot or a casual first-visit prankster... +sj+ 08:57, 2004 Apr 20 (UTC)
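
A hedged sketch of that two-tier rule: the threshold is the T > 0.1 figure from above, while the function and its return strings are my own illustration:

```python
# Hypothetical sketch: nobody is prevented from posting; trust only
# decides whether an edit updates the live (googlable, exportable)
# page immediately or waits in a pending version.

PUBLISH_THRESHOLD = 0.1

def handle_edit(trust: float) -> str:
    if trust > PUBLISH_THRESHOLD:
        return "published to live version immediately"
    return "saved to pending version until a trusted user vets it"

print(handle_edit(1.01))  # an established user: instant gratification
print(handle_edit(0.01))  # e.g. after a tempban: posting still works
```
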

The idea of splitting the version of the page displayed to anons from the most recent version is a good one. It's been implemented on other wikis, too. A related idea is StableCopy.

Neat! Is there support on WP for this? Older discussions of it somewhere in the Mediawiki universe? +sj+ 08:57, 2004 Apr 20 (UTC)
Sure, it's been discussed around the place - mailing lists, meta and such. No idea where, off the top of my head. It's obviously the way to go, though. Martin 17:28, 20 Apr 2004 (UTC)

The stuff about points awarded to each user, and so forth - I don't think that's a good idea. Better to make it so that for every user, there's some delay period before edits affect the version seen by anons, google, etc. I also believe you need to consider more deeply the distinction between an account and an IP address, and also that between a troll and a vandal. Finally, maximising "Leverage" is the wrong metric (Nupedia, under that definition, had infinite leverage... and near-zero progress). A better metric is to maximise the net total improvement in Wikipedia quality per day. Martin 22:17, 14 Apr 2004 (UTC)

Well, you have to somehow distinguish between changes which at some point affect the version seen by google/anons and those which never affect such versions. I'm not suggesting actually assigning points to users; any system of linearly-graded trust can be described in terms of such a 'point' system, so I'm using it as a shorthand. [You could also see this as a combination of four flags:
  • a "non-bot" flag (+0.01 pts, for waiting 6 hrs)
  • a "non-vandal" flag (+0.1 pts, for a string of acceptable edits)
  • a "non-sockpuppet" flag (+0.9 pts, for putting time into an account)
  • a "heavy contributor" flag (+1 pt, for extraordinary dedication)
Again, note that each of these flags tells us something successively more informative about the user in terms of leverage: it is difficult to produce multiple accounts with the non-sockpuppet flag set, and very difficult to produce multiple heavy-contributor accounts (though Lir, Wik, Mav and Olivier could each have managed it).] +sj+ 08:57, 2004 Apr 20 (UTC)
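
A minimal sketch of how those four flags might compose into one score, assuming they simply sum and each ban subtracts 1; the weights are the point values listed above, the rest is illustrative:

```python
# Hypothetical composition of the four flags into a single trust score.
# Treating the flags as a plain sum, with each ban costing -1, is an
# assumption made here for illustration.

WEIGHTS = {
    "non_bot": 0.01,           # waited 6 hrs
    "non_vandal": 0.1,         # a string of acceptable edits
    "non_sockpuppet": 0.9,     # time invested in the account
    "heavy_contributor": 1.0,  # extraordinary dedication
}

def trust(flags: set, bans: int = 0) -> float:
    return sum(WEIGHTS[f] for f in flags) - bans

# fabiform's case above: ~2 months and 700 edits plausibly earns the
# first three flags (1.01 points); one tempban drops that to ~0.01 --
# still able to post, but below the 0.1 instant-publish bar.
print(trust({"non_bot", "non_vandal", "non_sockpuppet"}, bans=1))
```
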
I am still thinking about account/IP and troll/vandal distinctions... and I agree that the goal is not to maximize leverage -- but it is a useful metric, not least because it is actually measurable. (Currently, the leverage of the WP system is under 0.1 in the worst cases -- much lower than it should be.) +sj+ 09:06, 2004 Apr 20 (UTC)
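
"Leverage" is never pinned down on this page; one reading consistent with the 0.1 figure is sketched below, as the ratio of the effort a vandal spends doing damage to the effort the community spends undoing it. This definition is an assumption, not something the thread states:

```python
# Assumed definition: leverage = vandal effort / community cleanup
# effort. Leverage under 0.1 would then mean the community spends more
# than ten times the vandal's effort to undo the damage.

def leverage(vandal_effort_minutes: float, cleanup_effort_minutes: float) -> float:
    return vandal_effort_minutes / cleanup_effort_minutes

# Worst case today: a minute of page-blanking across many projects can
# cost a dozen minutes of reverts:
print(leverage(1, 12))  # ~0.083, i.e. under 0.1
```
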

Such a distinction is easy: if a version is reverted or edited before it becomes the version seen by google/anons, then that change will never affect the version seen by google/anons.
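
In code terms, that rule might look like the following sketch; the pending-queue representation and all names are invented here:

```python
# Hedged sketch of the rule above: a pending revision that is reverted
# or edited over before promotion never reaches the google/anon version.

def ever_affects_live(pending_revisions: list, revision: str) -> bool:
    # Only the surviving head of the pending queue gets promoted;
    # anything superseded while still pending is dead.
    return bool(pending_revisions) and pending_revisions[-1] == revision

print(ever_affects_live(["spam", "revert of spam"], "spam"))  # False
print(ever_affects_live(["good edit"], "good edit"))          # True
```
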

I'm interested in where you get the 0.1 figure for the current system's "leverage". This seems to me largely a user interface issue. For me, it is far better to discourage negative edits in the first place than to fix them efficiently when they happen.

You may care to read Gaming the system: How moderation tools can backfire, if you haven't already. It's an interesting perspective on dangers your system (and any system) may need to avoid. Martin 17:28, 20 Apr 2004 (UTC)