User talk:Schaefer

From Wikipedia, the free encyclopedia


Welcome to Wikipedia

Here are some links I thought useful:

Feel free to contact me personally with any questions you might have. The Wikipedia:Village pump is also a good place to go for quick answers to general questions. You can sign your name by typing 4 tildes, like this: ~~~~.

Be Bold!

Sam [Spade] 22:20, 15 Nov 2004 (UTC)

Technological singularity article

Congratulations on the great work you're doing on the technological singularity article. Maybe one day it will be a featured article, who knows :)

Singularity edits

I hope you'll keep the changes I put back on the Technological singularity article. I understand you want sources for everything, but sometimes statements are such common knowledge (for example, 2+2=4) that they shouldn't need sources. In my case, I put down words that are common knowledge to any economist. I've provided a source now, to show that the view is held by important economists. MShonle 20:09, 21 July 2005 (UTC)

You kinda gutted the intro. Singularity has two distinct meanings - or two LEVELS of meaning. The transhumanist singularity is not the ONLY kind of singularity. In fact, you can make cases for there being PAST technological singularities: the culture after the development of agriculture, and the development of written language, would be incomprehensible to paleolithic man.
I don't mind people tightening up and reorganizing things to be more efficient. That's a good thing. But I start getting owly when people rip out meaning and distinctions that should be made.
As for moving things into the main body, I believe that a "stub" should be left in the intro. I view the intro as a "brief synopsis" of the entire article: what someone with 30 seconds to look something up will read and take away. However, I think that you're correct in your (apparent) attitude that if it's mentioned in the intro, it should be in the main article body. They should mirror each other; I had updated the intro but hadn't yet updated the main article.
I'll have another look at the intro. I think that the distinction between general and transhumanist technological singularities is an important one, but I think you're quite right that it should be more concise than the half page that was there.
Beowulf314159 03:52, 16 January 2006 (UTC)

Someone reverted the intro. Just to let you know it was not me - Beowulf314159 15:20, 16 January 2006 (UTC)

.

Foreign languages but not linguistics? ~ Dpr 20:22, 23 July 2005 (UTC)

Latest Yudkowsky edit

Hi there. The "major writer" part was a good compromise, thanks for adding that. The reason for my preferring "has not yet published any articles in a peer reviewed journal" to "presently has not submitted any journal articles, saying that he is still trying to solve the problems he has become known for identifying (such as the problem of building a Friendly AI)" is that, first, I'm not convinced the latter is true. After all, one of his articles is to be published in a Springer volume, so presumably that has been submitted somewhere at some point. Also, there's just no need to write about his motivations for not submitting papers, motivations which are dubious anyhow (after all, he is encouraging people to read his articles; it's just that he publishes them on the web instead of in journals). Regarding the "has become known for identifying" -- I don't think it's appropriate (or common) to write that way about any researcher unless it's someone very famous. Yudkowsky is not "known" at all, save to a very small specialized crowd (i.e. people like you and me), and the focus of his work is described at many other points in the article. But none of this is any major complaint, and I won't revert the article if you like it this way. I just wanted to point out the kind of things that I don't like about Yudkowsky's own edits, and at the same time take the opportunity to ask you why you prefer his wording to mine. I sincerely saw nothing POV in the sentence you removed. Miai 02:25, 7 October 2005 (UTC)

I agree with your point regarding "known for identifying". I'll reword. -- Schaefer 02:36, 7 October 2005 (UTC)

Featured article for December 25th

I noticed that you have listed yourself as an atheist Wikipedian. You will probably be interested to know that Brian0918 has nominated Omnipotence paradox as the front page article for December 25th. You can vote on this matter here. The other suggestion being supported by others for that date is Christmas, although Raul654 has historically been against featuring articles on the same day as their anniversary/holiday. AngryParsley (talk) (contribs) 08:13, 28 November 2005 (UTC)

Greg Egan

Thanks for your simple rejig of my reorganisation - two columns makes it look much prettier! Ppe42 13:48, 27 December 2005 (UTC)

Thanks, references?

Hi there -- thx for all your good work; i keep seeing you around the transhumanist/singularity realms. Since i'm just learning about these things, and since it's easy to find the "pro" side of SIAI, i was wondering if you'd mind sharing references for "There are Singularitarians who openly denounce SIAI." Or any other thoughts would be great, but i don't want to put you out. Thx, and see ya 'round, "alyosha" (talk) 23:26, 30 December 2005 (UTC)

Wow, i'm sorry i've let so much time pass without getting back to you. Thank you for the very helpful resp to my request above. I kept thinking i was going to write back soon...and then you know how it goes. And now, only if you feel like it, i have another question and a half. The half is that i'm still gradually trying to get a sense of the range of opinion re Singularity, incl not only beyond SIAI but also beyond SL4. But i'm finding a good amt of that -- tho if you had any faves to pass along i'd love to see them. But where you could really help me is by recommending places presumably on SL4 where the premises of the SIAI/SL4 discussion are debated (besides the Hibbard stuff). Eg, the ease/hardware requirements of seed AI (such that SIAI could get it before the government/military/corporate/university complex), the speed of progress (quick leap vs a gradual approach thru mouse-level, dog-level, etc), and so on. In a little bit of looking, i haven't found this in SL4; and what i find in Yudkowsky's writings (eg AI-box) isn't what i mean by real critique/debate. Make sense? If this is a bother, then don't bother; but in any case thx again. "alyosha" (talk) 04:02, 22 February 2006 (UTC)

[Note: this is a quick wrap-up of the draft that was lying around when the other stuff happened. My resp to that is in the appropriate section below.]
Well, thx yet again. Much of what you said was familiar to me from biology/AI/etc; but much also clarified your view of things. The SL4 references you offered confirmed that, while i feel closer to eg Ben Goertzel than others, i'm just not very SL4-ish in my approach to these matters. Some of why will be brought out below. (Also note that i'm focusing on the specific topic at hand, not all the interesting stuff on the list.)
Re "dog-level", etc: the main thing to say is that, with apologies for my hasty self-editing, i was only waving my hand in the direction of "premises of the SIAI/SL4 discussion", specifically "speed of progress", not trying to describe those premises, or my own views (of either evolution or AI). My words did come from ideas i'd recently read in Moravec's _Robot_ (see also Kurzweil, etc), using biological cases to exemplify degrees of "personhood" (my word, = ability to sense and interpret one's environment; possessing internal goals and means of evaluation of internal and external states; various modes of reasoning; world-models incl the past and future, both of oneself and the world incl other agents; etc). Suffice it to say that such shorthand doesn't imply that those exemplars evolved in a linear or teleological fashion. But our increasing success at a chosen goal does create linear progress different from the spreading bush of evolution.
In any case, my point was just to ask about the SIAI/SL4 premises, which i'll now categorize more clearly: 1] How close are we to seed AI, and how accidentally could we do it?, which incl 2] What hardware/etc resources will be needed? And what knowledge of various fields? 3] How advanced must seed AI be in AGI? And what else would it need? 4] How much self-improvement is possible within given hardware and other constraints? 5] How gradually will/could we approach seed AI? (that's all i really meant by the "level" talk) -- which is different from 6] How soft/hard will the takeoff be? 7] What kinds of pre-seed precautions could work? 8] What kinds of post-seed containment? 9] What could an escaped super-AI do (in a given world, such as our very near future), incl w/o broad human cooperation, incl re increasing its scientific knowledge and technological abilities? Then there are background issues such as 10] How MNT will work out, plus 11] All the aspects of how to program safe/friendly AI, and finally 12] Evaluations of other approaches to ensuring friendly AI, such as political action.
As far as i can tell, the answers to these need to be in specific ranges for the SIAI approach to be the best way to go. I know i don't know about all these things, but we're writing because i'm researching it, and i have to live by what i see in the meanwhile; and my sense of maybe all the above areas doesn't fit SIAI/SL4, so far as i can tell. Since i agree about how important this issue is, and i'm eager to learn from those who've worked on it so much more than i have, i wrote you looking for critical discussion of these premises within the SIAI/Y's essays/SL4. I now feel confirmed that there isn't much -- not that satisfies me, anyway. I'll keep on looking into it, and i'll be interested in anything else you may want to send my way.
Thx again; hope this answers some things in turn, "alyosha" (talk) 00:54, 3 March 2006 (UTC)

Mentifex

I'm being rude to him because I've seen this guy's handiwork elsewhere, and I wish to cut him off at the pass, so to speak. --maru (talk) contribs 23:28, 19 February 2006 (UTC)

I have too. He used to post to SL4 and a few other transhumanist mailing lists I read. -- Schaefer 23:33, 19 February 2006 (UTC)

Please cite policies

Please cite/link the policies that support what you and others are doing on my talk page. I will do my own research when i have the time. Thx, "alyosha" (talk) 06:30, 25 February 2006 (UTC)

What i've found so far, eg WP:RPA supports, at most, the rewording of the attacking portion, not the whole content. ? Thx for any help, "alyosha" (talk) 07:17, 25 February 2006 (UTC)

Most of that was written as a guideline about what to do about other people's personal attacks. I am the author of the comment I removed text from, so I don't have to worry about misrepresenting myself or pissing myself off, etc. The sentence serves no useful purpose, since the AI project it mentions has since been cancelled. Left in, it's not only uncivil, but misinformative. -- Schaefer 07:23, 25 February 2006 (UTC)
Hello again. I sure regret the complications that have cropped up recently. I was just getting close to done with a resp to your again-helpful SIAI ms, when the first change was made to my user talk page. I even added a heads-up PS to you about it, as a courtesy. Now things are a bit further down the road, and i'm writing to put this peaceably behind us, as succinctly as i can. I also went ahead and posted my continuation re SIAI above.
I apologize for not writing like this earlier. I've been very busy/tired/stressed and even sick lately, so it was a bad time for this all to happen, and i've resp'd so far with a quickie here or there that now obviously wasn't enough. I wish i had earlier said the main thing (btwn us), which is this: if you now regret having written something on my talk page and want to take it back, and wish my agreement in removing it, i agree -- no problem. I don't want to make trouble for you, and i would have resp'd this way at any time if i had been asked, tho i will say that the way things actually went did not feel so good to me. But i feel no need to go into that: what i could have done better, what would have felt better to me coming from others, how i interpreted this or that, our apparent disagreements about wikipedia practice (and all i have to learn about it), etc. On the other hand, if it would be conflict-resolving or otherwise constructive to go into these things, i'd be willing to, so in that case let me know -- but i'd rather just move on, myself.
Another thing i'd like to offer is to email about this, if that would help anything. I actually am curious about all that's happening at once in this case. Maybe there's a chat we could have more securely that would help me understand/sympathize more? Up to you.
Finally, a heads-up that i will resp to User_talk:Sannse#please_cite_policies re the freedom of information/expression aspect of this situation. But this is not about wanting to put the material back up, since your request resolves that. Rather it only involves Sannse's resp to the complaint and legal threat.
Ok, that's it for now; open to any feedback; hope this helps, "alyosha" (talk) 00:54, 3 March 2006 (UTC)

Online IQ Tests

I re-added a link that was removed. You've correctly surmised that online IQ tests are, for the most part, bunk. However, I included a link to a website that expertly addresses this issue. I'm including it in Introduction to Intelligence Testing for my graduate students this year, for what it's worth, and encourage you to ask me questions if you want further information. Thanks. BrainDoc 20:39, 26 August 2006 (UTC)

Your edit

Thanks for your edit - I can't spell for nuts, never could, never will --Michael Johnson 00:51, 12 November 2006 (UTC)

Could you restore the POV tag in Dawkins?

Hi Schaefer. user:Sparkhead has removed the POV tag during a vote on whether this is POV!! If I restore (again) he'll try to get me blocked under 3RR. Could you please? Many thanks. 23:30, 17 November 2006 (UTC)