Wikipedia:Reference desk/Archives/Computing/2008 May 2
Computing desk
< May 1 | << Apr | May | Jun >> | May 3 >
Welcome to the Wikipedia Computing Reference Desk Archives
The page you are currently viewing is an archive page. While you can leave answers for any questions shown below, please ask new questions on one of the current reference desk pages.
May 2
I have a problem signing up for AIM?
I sign up correctly for AIM. I enter the right password to sign up and sign in, and I enter the code in the image correctly. However, it keeps rejecting my registration and sign-ins. When I try signing in on http://www.aim.com/ and I enter the right username and password, it rejects it and thinks I'm entering the wrong password, but my password is right. I'm tired of this site rejecting me and my account. It has a bug. It has nothing to do with Caps Lock; it's just the website. I'm using Ubuntu GNU/Linux 7.10 and I don't have AIM installed. My usernames are fastjet123 and fastjet1233. What should I do? Jet (talk) 00:06, 2 May 2008 (UTC)
- Is there a "forgot password" link? Sorry if this seems obvious, but you know how it is ;) ... What about contacting them? I have a similar problem in the client, but I think in my case I've just forgotten my password, since I never used the account anyway. Try a different browser. Without looking, I will say that I agree the site might just be really buggy. If so, then maybe they only tested it in a badly behaved browser, or the site is written with a lot of proprietary code. If you can't do it in Konqueror, try Opera; perhaps Opera knows how to cope with the quirks of whichever browser the page was written for. Konqueror is available through "sudo apt-get install konqueror"; Opera is available through Opera.com. 125.236.211.165 (talk) 23:20, 8 May 2008 (UTC)
IBM Kittyhawk vs Storm botnet
The Kittyhawk would make websites it runs invulnerable to Denial-of-service attacks, unless they're powerful enough to take out the whole of Kittyhawk. DoS attacks are used heavily by the Storm botnet. If Kittyhawk gets off the ground, will the Storm botnet DoS attack IBM while it still can? — DanielLC 02:33, 2 May 2008 (UTC)
- As far as I can tell from glancing at the white paper, Kittyhawk is an attempt to get large web service providers like Google to replace their commodity PCs with IBM hardware. It wouldn't do anything new, just (supposedly) do it cheaper. I doubt they're going to find many takers, but even if they do I don't see how the switch would have any effect on vulnerability to DDoS. It's not as though Google is very vulnerable to DDoS right now. Also, it doesn't make sense for Storm's controllers to "attack IBM while they still can". They wouldn't attack IBM unless they stood to gain something from it. -- BenRG (talk) 01:10, 4 May 2008 (UTC)
Horizontal scrolling with touchpad
I can't seem to get horizontal scrolling with my touchpad in some programs, like Firefox. A USB mouse with a scroll wheel works fine, so I know it's not that the applications don't make use of horizontal scrolling. MS Word works with the touchpad, so there's nothing wrong with the touchpad itself or its driver. Some forums mentioned forward/back paging replacing the horizontal scrolling, but I'm not even getting that.
I'm running Vista on a Sony SZ750 with an Apoint touchpad. 216.165.243.128 (talk) 06:32, 2 May 2008 (UTC) zyrmpg
html / js / css is holding back the web?!
I have a feeling that html / js / css is holding back the web?! Is it true anyways? --V4vijayakumar (talk) 08:53, 2 May 2008 (UTC)
- If by "holding back" you mean "limiting the potential of", then you are getting warm. If HTML, JavaScript, and CSS were true standards that every web browser implemented flawlessly, then there would be no problem. Every designer and programmer would be sharing code and pushing the web to its true limits. Unfortunately, making things work seamlessly in just the top 3 popular browsers is not easy. Making things work in just the 3 popular versions of Internet Explorer is not easy. The blame falls squarely on the web browser designers. It is not up to them to decide to do things their own way; they should ensure that their browser operates exactly the same as every other web browser. That will never happen. So, you can push a single version of a web browser to its limits, but you can't push the web to the limits of its potential.
- On a side note - a huge problem right now is that a web browser is designed to be a "web page renderer", not a "web-based application engine". What we need is a WWA (World-Wide-Application) that sits alongside the WWW. Then, we need application browsers that browse the applications. Trying to force a web browser to be an application engine causes a lot of problems that would be easily solved if the program were designed to host applications from the start. Of course, we won't get a WWA. We're just going to get more and more Ajax-rich pages that try to mimic applications. -- kainaw™ 12:09, 2 May 2008 (UTC)
- This almost asks for a definition of what exactly "js", "css" and such _are_. I mean, I know what I think they are, and I fling the words around sometimes like they're going out of fashion - but are you asking about the things I'm thinking of? For example, consider JavaScript. When I talk about JavaScript I'm normally also talking about JScript, which is arguably something completely different. They're both ECMA-262 implementations (or _something_). So by "js" do we mean that? It's really only a selection of methods and syntax, which I think for the most part are very similar if not identical to those used in "proper" computer programming (apologies)... A lot of high-level languages allow programmers to call Date() methods, use regular expressions, and set variables as strings. I don't know if the NASA computer programmers use Brainfuck, C, Commodore 64 BASIC or enchanted rune stones, but I strongly suspect that they are using a programming language. If we called JavaScript a programming language, we wouldn't be far wrong, and therefore if we're using the same sort of thing to build web pages now as what NASA might be using to send men into space, then we're probably not doing too badly in that department. (JavaScript: Lisp with C-like syntax?) In fact, I would say that JavaScript is a dangerous technology, because once you get your head around working with it, it's so tempting to just put in a lot of fancy effects and leave the page totally unusable for up to sixty percent of your users (and hope that nobody notices).
XML, of course, is by its very nature extensible. HTML cannot be holding back the web. Even if HTML is no good, not everyone makes web pages in HTML any more (technically), so it can't hold back the web as a whole. I don't think that anything IS wrong with it, but even if there were, many now write pages in the much more easily parsed XHTML - a true flavour of XML, which brings us SVG (Scalable Vector Graphics), for example. Note that some browsers don't load SVGs at all, and some have limited implementations. The standard is there and it is being adopted. If anything, although the standards seem to evolve painfully slowly, they still advance more rapidly than browsers can implement them.
CSS was designed for web pages, but is now being adopted wherever style needs to be controlled in a variety of applications. Why? Because it's so easy, so powerful, and so effective. The Context Browser in Amarok is one example of CSS within an application not strictly made for web browsing. JavaScript, of course, plays very nicely with web pages. That document.something.something.something notation is generally used in web pages to point at parts of the Document Object Model, or DOM, which is very easy to create and understand in terms of XML. CSS and JavaScript are able to directly target DOM nodes (separately or collectively) by their location or attributes. Also note that CSS degrades gracefully - it only affects the rendered style of a page, and not the content or structure (maybe?)... CSS, then, is inherently and perfectly extensible.
All in all, to varying degrees, the technologies you've named are highly extensible and interoperable, easy to use and widely supported. So, like Kainaw, I think that the standards and such are totally adequate; I think that they are _more_ than adequate. What's holding back the web is how slowly the standards are implemented (a very popular browser recently caught up to be only about five or ten years behind the standards): the browsers themselves, market forces, the users, and the people who use the technologies. I think that the standards you mentioned are the life-blood of the internet, and that the influences I mention in this sentence are in fact fighting and working against the ones you named - at enmity with the standards. Sorry for the insanely long paragraph. One more time: those technologies make the web, they don't hold it back. The real problem lies in our attitudes toward those technologies, and our implementations of them, in my opinion. 125.236.211.165 (talk) 16:14, 8 May 2008 (UTC)
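To make the DOM-targeting point above a bit more concrete, here is a minimal JavaScript sketch (the element id "status" and the styling choices are invented purely for illustration, not taken from the discussion):

// Assume the page contains a hypothetical element: <div id="status">Loading...</div>
var node = document.getElementById("status");     // target one node by its id
node.firstChild.nodeValue = "Done";               // change its text content
node.style.color = "green";                       // adjust its rendered style through the DOM

var links = document.getElementsByTagName("a");   // or target nodes collectively, by element type
for (var i = 0; i < links.length; i++) {
    links[i].style.textDecoration = "none";       // strip underlines from every link on the page
}

CSS can select the same nodes declaratively (for example, #status { color: green; }), which is what "targeting DOM nodes by their location or attributes" amounts to in practice.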
- Verily, I think that those three technologies are not holding back the web. I believe that web applications such as GMail could be as feature-rich and responsive now as "proper" desktop-based applications; the standards give them more than enough power to match their desktop equivalents. What's holding them back, in my humble opinion, is that if you're making a fancy page you have to write at least two versions of it. If you write the page to W3C standards, you then generally find that it needs minor tweaking to get it to work in all the other browsers, and major hacking to get it to work in Internet Explorer. On the other hand, if you write a web page using only deprecated or proprietary code (i.e. "MSHTML"), then you find that it takes major hacking to get it working in Firefox... and then more major work to get it going in Opera... and then even more hacking to get it going in Safari. The reason the second method is so much extra work is that IE (and some other browsers, no doubt) is very, very far behind in implementing the standards, and the "modern" browsers all react differently to IE's shortcomings. I was a big fan of IE until I reluctantly tried a proper browser and got hooked on its speed. The sad thing is that (although it is a great product and does deserve a lot of attention) IE is tied into the operating system. Most people don't know or care that the internet could be a hundred times better tomorrow, so they continue to use "that browser" - the browser that is holding back the internet in a big way and makes web design difficult (well, it does for me, anyway; I'm sick of writing two copies of every page!! But this is why CSS, for example, is a VERY good thing).
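For readers wondering what that cross-browser "hacking" looks like in practice, here is a minimal JavaScript sketch of the feature-detection approach commonly used at the time (the helper name addClick is invented for the example):

// Attach a click handler in a way that works in standards browsers and in old Internet Explorer.
function addClick(element, handler) {
    if (element.addEventListener) {               // W3C DOM standard (Firefox, Opera, Safari, ...)
        element.addEventListener("click", handler, false);
    } else if (element.attachEvent) {             // proprietary IE 6/7 equivalent
        element.attachEvent("onclick", handler);
    } else {
        element.onclick = handler;                // last-resort fallback
    }
}

// Usage: addClick(document.getElementById("save"), function () { alert("clicked"); });

Detecting the feature rather than sniffing the browser name lets one copy of the page serve every browser, at the cost of little branches like this scattered through the script.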
- Holding back the web from what? It really depends on what you imagine the web to be. Personally I think the simplicity of those technologies is their real long-term benefit. Consider how much cheaper and easier it is to launch a website than it is to, say, put a single episode of a show together for broadcast media. A single programmer with only moderate experience can put together a pretty functional web page, but putting together something as "rich" as a news feed on CNN takes the work of hundreds of people, thousands of dollars, etc. I also think the web does well as a hybrid between print and video—I think a well-laid-out web page in HTML/CSS is a thousand times better than one in, say, Flash. It's easier to use, more reliable, has a standard interface, etc. But my point here is not to say that I'm right and you're wrong, but rather to say that you've got an unarticulated vision of what you consider the "progress" of the web to be, and it's not necessarily the same one at all that other people have. --Captain Ref Desk (talk) 15:01, 2 May 2008 (UTC)
- The web is like what the web was 10 years back (plus some bug fixes). Let me ask this way: is the web ready to replace the operating system? I just want to have a system that contains nothing but a web browser. I don't mind how much memory this browser uses, or what offline content it stores on my hard disk. --V4vijayakumar (talk) 00:59, 3 May 2008 (UTC)
- Is my car holding back agriculture? There has been back and forth on your idea, and all that's old is new again. Topics that may be of interest: Centralized computing, Thin client (I recommend reading the latter, then the former - the former's article is a good springboard to other topics). All things, however, are limited by what resources are available (and consequently, by their costs). Consider this - Steam (content delivery) still "caches" entire games rather than being thin-client, as I believe you are alluding to. This has little to nothing to do with web standards and their extensibility (or lack thereof). However, speaking more directly to that topic, XML and related technologies such as XSLT speak to maneuverable, agile standards. AJAX is also of interest. The topic is really quite large, and what was it the famed historian said? I shall endeavour to present the facts, and let the reader draw their own conclusions. -- Ironmandius (talk) 03:42, 3 May 2008 (UTC)
virus
If you download an exe file that has a virus in it, will your computer only become infected if you double click on the file? If you never use the file will the virus just remain dormant? xxx User:Hyper Girl 09:15, 2 May 2008 (UTC)
- It depends on the application sometimes. I once had such a file that was left untouched, but the virus was automatically activated when I moved the file somewhere else. If you want to be safe, change the file extension. Chenzw (talk · contribs) 09:18, 2 May 2008 (UTC)
- Wrong, you're fine as long as you don't run it .froth. (talk) 12:40, 2 May 2008 (UTC)
- Would it be possible for a virus to be created such that the OS (or any other app) runs it without you (explicitly) telling it to do so? Zain Ebrahim (talk) 15:02, 2 May 2008 (UTC)
- Yes, Microsoft OSs are famous for gratuitously running things. See autorun. Atlant (talk) 17:02, 2 May 2008 (UTC)
- It's only a problem if it gets executed - generally, this means if you double-click it but as Atlant says, autorun is a problem. It should be noted that even if you view virus code without executing it, say, in Notepad, it won't infect you (unless a virus is bound to Notepad, which is a different matter altogether). x42bn6 Talk Mess 19:31, 2 May 2008 (UTC)
- That's frightening! Zain Ebrahim (talk) 09:05, 3 May 2008 (UTC)
- AutoRun isn't exclusively the problem. While avoiding technical details, there was an issue with how Windows handled showing pictures. As a result, one could embed into a picture some code (a virus), send that picture to someone in email, and then when they look at that email (which, if it was your first mail of the day, would be automatic) they are infected. This was also a major problem about ten years ago, when we moved from plain text e-mail to fancy-pants pretty e-mail: some features leapt ahead of security concerns, and some e-mail clients would allow embedded script code in emails and execute it trustingly. I think that was Melissa (computer worm) or one of her near predecessors, putting it almost precisely a decade behind us, though. (These particular examples have been fixed, but the principle remains available.) -- Ironmandius (talk) 14:22, 4 May 2008 (UTC)
This is more of a question than an answer. Possibly a matter of definition, and partly of course of computer science and stuff. Does Windows Explorer perform MIME-type sniffing? Here are some examples of situations where I think a virus could be executed without your consent.
First up, a macro in Microsoft Office. Yes, you get a warning about the macro, and I do think that most people don't even let them run now. Does this qualify?
Next up, can I rename an .exe to a .jpg and have the thumbnailer attempt to execute it? (I doubt it, but hey.) Because if I could, that would open the doorway for infection.
Another might be Internet Explorer. Consider VBScript, ActiveX and so on and so forth. You can create scripts that alter the configuration of the browser from within the web page, installing BHOs (Browser Helper Objects, like toolbars) and such without your permission... Alarmingly, I used Internet Explorer once last year (as a joke), and there was a toolbar nobody had installed - Norton Antivirus missed it, so did AVG; NOD32 found a lot of interesting downloaders on my system... NOT a shameless product plug!!! Anyway, as soon as we reach this point, someone is sending information about our computer back to their website somewhere. So they could possibly know which sites we visited, and when, and how long for, and maybe our passwords and/or credit card numbers... The joyful thing here is that in IE (last tried this in IE6) you can install BHOs without actually clicking anything! (I blame IE. Maybe I downloaded something bad after all?) You only have to visit a page. I believe that browser helper objects can modify your registry too, and parts of your filesystem - which all equates to privilege escalation - allowing them to download trojan horses, worms, and any kind of malware they feel like at the time. Suddenly every site you visit has popup ads and interstitial pages (even Wikipedia!) and maybe you even get popups when your browser is closed :-(
I know that Internet Explorer is a fantastic product and has been worked on by a lot of great and skillful people, but I strongly recommend that you do not use it for surfing the net. So does the Department of Homeland Security. If nothing else, consider that with so many users it has a nice big attack surface, that antivirus and firewall software is not a hundred percent reliable, and that most users are capable of extracting a superior experience from other software _as_well_. Just something to chew on - just speculation and opinion, wondering how many of these would be reasonable ways to try to attack someone's computer security. Never thought this would be such a long comment.
After all this... If you download a virus intentionally (please don't do this, I don't advocate doing it ever) then it will probably be fine. If it can't get onto your computer without your consent then it's probably not going to run itself without your consent either. You could argue that it is safe to intentionally download a virus. It doesn't even actually exist as far as the computer's concerned, until you try running it, anyway, right?
There are some very advanced things like crashes and buffer overflows, which hackers might be able to use to run arbitrary code on your system (read: install bad things)... But most users (including me) are largely oblivious to all that sort of thing, and it's best left to programmers and hackers to fix those vulnerabilities IMO. 125.236.211.165 (talk)
The world's first software program?
I want to know what the world's first computer software program was, and what it was all about. Was it something to do with scientific calculations or business calculations...?--deostroll (talk) 10:15, 2 May 2008 (UTC)
- The first computer program is generally considered to have been Ada Lovelace's algorithm for computing Bernoulli numbers. Of course, the computer in question never actually got finished. Algebraist 10:45, 2 May 2008 (UTC)
- Using the definition of "computer" as an electronic device that accepts, stores, computes, and displays numerical values (which is a very limiting definition and places the first computers long after Lovelace's work), the first computers were used for simple mathematics - such as addition, subtraction, multiplication, and division. When they got big, they were being used as code-cracking machines. I am fairly certain that they got the name "computer" at that time because they were augmenting (and replacing) the original computers, which were people (mostly women) who did the code-cracking by hand. -- kainaw™ 12:02, 2 May 2008 (UTC)
- Just a note, but "computers" pre-computer were just anybody who was in charge of rote mathematical operations. They were usually, but not always, women, and were used in fields as varied as statistics and astronomy in the 19th and early 20th centuries. --Captain Ref Desk (talk) 14:53, 2 May 2008 (UTC)
- The predecessors of the modern computers are unit record equipment, which is just a tabulator of information, used for doing census records as well as business records. These same machines were then later used for all sorts of complicated mathematics, often for military purposes (calculating tables of ballistics, calculating fission simulations for the Manhattan Project, code-cracking, etc.). None of these looked a whole lot like a modern computer, and none of these early ones were generalized computers—they usually did one specific sort of task and that's all they did. (And none of them, by modern standards, were very impressive. Your cell phone contains more processing power.) Whirlwind is often cited as the oldest computer that looks like a modern computer, bringing together a number of concepts (real-time operation, video displays, electronic circuits) for the first time. It was developed to be a flight simulator originally but ended up being the basis of quite a number of other machines. --Captain Ref Desk (talk) 14:53, 2 May 2008 (UTC)
- I think I'd argue that one of Whirlwind's (and earlier, the Manchester Mark I's?) breakthroughs was the use of random-access memory (Williams tubes and later core memory). Previous machines usually used some sort of sequential memory, greatly restricting the programming "style".
Using a definition of "software" as a stored program that can be loaded onto a machine and executed, the first piece of software would be a factorization program loaded onto and run on the Manchester Small-Scale Experimental Machine in 1948. --Delirium (talk) 15:08, 3 May 2008 (UTC)
choosing a laptop
Do you know of any site where I can input the features to search for a laptop? (I am not searching for a site that offers reviews, but for a tool to compare features). 217.168.3.246 (talk) 14:59, 2 May 2008 (UTC)
- Most laptop manufacturers provide a means to compare features, but only with their own products. Some stores provide a similar comparison, but only with products they stock, and in my experience their feature coverage is patchy at best. Magazines sometimes do a "group test" where they compare a selection of different manufacturers' laptops that meet some specification (e.g. Core 2 Duo laptops under €700). Astronaut (talk) 19:59, 2 May 2008 (UTC)
- Yes, I found some partial information but I am trying to find something similar to this comparison of digital cameras but for laptops. 217.168.3.246 (talk) 20:29, 2 May 2008 (UTC)
- CNet's site does, but it's not great as its selection of laptops is slightly limited. 206.126.163.20 (talk) 00:36, 5 May 2008 (UTC)
Downloading link in Flash: Avoid the innumerable scrapers on the web
I'm trying to create a Flash movie that will have one button that users will click to download material for free from my server.
The current solution I'm using is:
on(release){ getURL("http://webaddress.com/music.mp3"); }
I'm basically looking for a solution that would conceal/encrypt that information from the user, so that when they click to download the material they will not know the address on the server. I have a problem with a scraper automatically downloading loads of educational MP3s from my site.
So basically, I want a clean way to transfer material using Flash without giving away the location on the server. —Preceding unsigned comment added by 217.168.3.246 (talk) 15:12, 2 May 2008 (UTC)
- Well, you can't really do that cleanly—at some level the browser has to know what server it is if it's going to download from it. The only ways I can think of to stop automated downloading are to implement a CAPTCHA for each download (annoying, but not out of the question) or to try to filter by the self-reported user-agent of the browser (problematic both because you might end up discriminating against new/uncommon browsers and because any programmer worth his salt could make the scraper look like whatever he/she wanted it to). But I have to admit I find the whole thing a little silly anyway—who cares if a scraper gets it? Bandwidth is pretty cheap these days last time I checked (and if it is not for you, you might want to find a different host), and erecting impediments to access is going to hurt your own users more than it will the scrapers. --140.247.240.135 (talk) 17:41, 2 May 2008 (UTC)
- If the material you provide is free, why are you concerned about scrapers stealing your material automatically? Frankly, it's annoying as hell to jump through all kinds of hoops to download free material one small piece at a time, when some scraper can do it all automatically for you. Astronaut (talk) 19:53, 2 May 2008 (UTC)
- You could use a CGI script that renames the file (a rough sketch of the idea follows below):
- The original file is located in an inaccessible folder, but one that the CGI script can read.
- The Flash calls the CGI script with a code referencing the file, preferably without using its name.
- The CGI script copies the file to a temporary version in an accessible area and sends it to the client. It then deletes the temporary version once the download is complete.
- @ Astronaut: If someone wanted to keep track of the number of users, where they are, or keep statistics for advertising, etc., they'd have a very valid reason to do as the poster is asking. RoadieRich (talk) 19:17, 7 May 2008 (UTC)
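The suggestion above is a CGI script; purely as an illustration of the same indirection idea, here is a minimal sketch in JavaScript (Node.js style, which is my own assumption - the token, file path and port are invented, and it streams the hidden file rather than copying it to a temporary location):

// Serve a hidden file through an opaque token, so the real path never appears in the page.
var http = require("http");
var fs = require("fs");

// The real files live outside the web root; the Flash button only ever sees the token.
var files = { "a1b2c3": "/srv/private/music.mp3" };    // hypothetical token -> path map

http.createServer(function (req, res) {
    var token = req.url.replace("/get/", "");          // e.g. the Flash calls getURL("/get/a1b2c3")
    var path = files[token];
    if (!path) {
        res.writeHead(404);
        return res.end("Not found");
    }
    res.writeHead(200, { "Content-Type": "audio/mpeg" });
    fs.createReadStream(path).pipe(res);               // stream the hidden file to the client
}).listen(8080);

Note that this only hides the real location on the server; once a scraper sees a token URL it can still fetch that URL, so it is obscurity rather than access control.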
Excel if function text criteria difficulties
Hi all, I'm trying to nail down some gas expenses out of an Excel credit card statement. I'm running into what seems like a very simple (and frustrating) problem. When I try to use this formula, it doesn't work (it returns 0):
=SUM(IF(B1:B103="SHELL*",C1:C103))
But when I try subbing in an example of the value in column B as the test value, it does work! (returns the sum of expenses at that gas station)
=SUM(IF(B1:B103="SHELL 3596 WEST 41 AVENUEVANCOUVER",C1:C103))
Obviously, I want to know how much I spend at different Shell stations all over the place, so what am I doing wrong in the first formula? I am pressing Ctrl-Shift-Enter for both of the above to enter them as array formulas, and once I get this figured out I intend to add an OR statement to include other gas stations. Thanks, -24.82.140.138 (talk) 21:00, 2 May 2008 (UTC)
- Why don't you just use SUMIF instead? I'm not sure either SUM or IF can be used in the way you want them to. Excel really sucks at the kind of things you are trying to get it to do (multiple conditionals for multiple rows). --98.217.8.46 (talk) 23:50, 2 May 2008 (UTC)
- As I mentioned, I want to add an OR statement to get different kind of companies all together. I can't do that with SUMIF, but I guess I may as well try doing a multi-cell operation with SUMIF in the meantime. -24.82.140.138 (talk) 23:55, 2 May 2008 (UTC)
- Hello. I am quite sure your wildcard won't work in that context. Try something like this:
{=SUM(IF(ISERR(SEARCH("[whatever you're searching for, and you can use wildcards with SEARCH]",B1:B103)),0,C1:C103))}
- that is,
{=SUM(IF(ISERR(SEARCH("SHELL",B1:B103)),0,C1:C103))}
- SEARCH will return a #VALUE! error if it doesn't find the text, so the formula will return 0 if it doesn't find the text, and the appropriate number from column C if it does find the text. SEARCH supports wildcards, but you don't need a wildcard to evaluate whether "SHELL" is in the text. There may be simpler ways of reaching the goal, but I'm answering your specific formula problem. –Outriggr § 00:52, 3 May 2008 (UTC)
- And I don't think you can do SEARCH over a whole array of cells. Again, I don't think the built-in functions can deal with this. I would write a VBA function to do it. It's not worth the hassle—this sort of thing is the sort of thing that Excel just sucks at. --98.217.8.46 (talk) 01:16, 3 May 2008 (UTC)
- =SUM(IF(B1:B103="SHELL*",C1:C103)) This looks to me as though you are missing brackets because to read the range (B1:B103) you have to specify that in brackets or you get an error. Just an idea. --Lisa4edit (talk) 10:05, 3 May 2008 (UTC)
- Like Outriggr mentioned, I also suspect the problem is that Excel doesn't allow use of a * wildcard to compare text. If the word SHELL is always the first 5 characters, try using the LEFT function: IF(LEFT(B1:B103,5)="SHELL", ... ) --Bavi H (talk) 14:38, 4 May 2008 (UTC)
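For completeness, here is roughly how the two suggestions above might look as full formulas (a sketch only, not tested against the original spreadsheet):

{=SUM(IF(LEFT(B1:B103,5)="SHELL",C1:C103,0))}

=SUMIF(B1:B103,"SHELL*",C1:C103)

The first is the LEFT-based version, entered with Ctrl-Shift-Enter as an array formula; the second works because SUMIF (unlike a plain = comparison) does accept * wildcards, and several SUMIFs for different companies can simply be added together.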
Viruses
What programming language(s) are viruses written in and what executes them (the OS)? Zrs 12 (talk) 21:03, 2 May 2008 (UTC)
- Viruses can be written in almost any known programming language, but I would speculate that the most common languages are C++ and assembly (perhaps Java and VB for the less sophisticated ones). Many viruses are executed by the OS; however, lots of viruses have an ActiveX, JavaScript, or CGI enabler that executes them, allowing them to spread via users' web browsers. Some viruses are standalone executables that rely on the user to execute them, while the more common (and sophisticated) ones exploit network holes to spread or attach themselves as riders on other executables. -24.82.140.138 (talk) 21:37, 2 May 2008 (UTC)
- Only if it is compiled (i.e. is an .EXE or .COM file). If you are asking what kinds of programs (including viruses) can be executed uncompiled, it is mostly scripts that the machine already has the ability to read (e.g. already has an interpreter installed), which confines it primarily to batch scripts (.BAT) and various MS Office files that can embed Visual Basic for Applications (VBA) in them (.DOC, .XLS, .PPT, .MBS, .VBS), which many people already have installed, as well as the aforementioned ActiveX and JavaScript. --98.217.8.46 (talk) 23:52, 2 May 2008 (UTC)
- The infamous Morris worm was partially written in C which was recompiled on the fly on each target machine. This was possible because of the ubiquity of C compilers on Unix machines, and desirable because of the heterogeneity of the hardware. (I originally thought that it was written entirely in C, but according to our article the bulk of it was machine code.) -- BenRG (talk) 16:23, 3 May 2008 (UTC)
- To respectfully disagree with 24.82, I would hazard that most viruses (by quantity and volume) are written in VB (ease of exploiting the largest market-share targets - IE/OE). To be definitive, one would have to pore over List of computer viruses's links. To sidebar, C++ is a language. Think of programming languages this way - if I hire a carpenter to make me a cabinet, it doesn't matter if we discuss it in meters or feet; the final product is a cabinet. -- Ironmandius (talk) 14:16, 4 May 2008 (UTC)
Hot linking
Am I allowed to hotlink images from Wikimedia? I'm thinking it's not allowed? Sliver Slave (talk) 21:10, 2 May 2008 (UTC)
- Just about everything EXCEPT the Wikipedia logo (it's copyrighted). Just check the rationale on the image's page to make sure it's not being used under fair use. If it is, you might not be able to use it, depending on the circumstances. For future reference, I think questions about Wikipedia go on the Help Desk. Paragon12321 (talk) 21:22, 2 May 2008 (UTC)
- I think we can answer in the general case, though. IMHO creating a plain link to http://en.wikipedia.org/wiki/Image:Example.jpg is fine. However, using http://upload.wikimedia.org/wikipedia/en/a/a9/Example.jpg in an img tag is definitely not OK, because that would be a GFDL violation unless you provide a link to the GFDL declaration on the Image:Example.jpg page. --Kjoonlee 08:42, 3 May 2008 (UTC)
- It doesn't technically matter if you link to the original GFDL statement; you just need the GFDL statement listed somewhere on the page where it is being displayed. Though the way you describe would be most concise (just link back to the original Wikipedia image and its copyright information). --98.217.8.46 (talk) 14:49, 3 May 2008 (UTC)
- Nonsense. You can put whatever URLs you want in an img tag. That's not redistributing it, that's just telling people where it is! Telling someone where they might go to find an image does not obligate you to pay any attention to the licensing of the image. DMCA lawsuits have reached into the "you can't even tell anybody where to get stuff" area, but GNU (the G in GFDL) is fundamentally opposed to DMCA anyway. It would be beyond shameful for Wikimedia to attempt to enforce such evil RIAA/MPAA-like restrictions. --tcsetattr (talk / contribs) 08:58, 3 May 2008 (UTC)
- Huh? That's reuse and redistribution; if you include a GFDL image on a webpage, then the webpage becomes GFDL. According to the GFDL, you must mention all relevant GFDL dedications. --Kjoonlee 10:22, 3 May 2008 (UTC)
- Notice I said img tag, not a href. --Kjoonlee 10:22, 3 May 2008 (UTC)
- And guess what, the obligation to quote the GFDL declaration is meant to promote Free Cultural Works. --Kjoonlee 10:25, 3 May 2008 (UTC)
- And look, "Not all restrictions on the use or distribution of works impede essential freedoms. In particular, requirements for attribution, for symmetric collaboration (i.e., "copyleft"), and for the protection of essential freedom are considered permissible restrictions." --Kjoonlee 10:26, 3 May 2008 (UTC)
- And we have precedents. The "featured article" star at Wikipedia was mistakenly labeled as GFDL (or was it GPL?) and was used without the link; it was correctly labeled as LGPL to avoid such problems. The logo of WP:SIGNPOST, previously a GFDL image, was being used without the link; it was switched with a PD image. The logos at WP:RD were being used without links; they are mentioned separately at the bottom now. --Kjoonlee 10:31, 3 May 2008 (UTC)
- Uh, yeah, sorry tcsetattr, but you don't seem to understand copyleft very well. It's not public domain, it's not a lack of copyright. There are restrictions and requirements. They're just not the usual ones. There are ways in which some copyleft licenses are incredibly non-free, in my opinion (personally I think releasing to the public domain is more ideal in many circumstances, if the goal is maximum re-use). --98.217.8.46 (talk) 14:49, 3 May 2008 (UTC)
- The point that tcsetattr is making is that when you "include" an image in a web page the HTML source doesn't literally contain the image data, it only contains a URL where the image can be found. It's the reader's browser, not the author of the document, that fetches the image. You certainly don't want to hold the reader responsible for this, and the author hasn't actually copied anything. It's an interesting gray area of copyright law. It has a name, of course—transclusion—and I'm pretty sure Ted Nelson wrote about its legal ramifications long before the Web existed. To tcsetattr I'd like to point out that if a judge did agree with your interpretation of this practice, the precedent would probably have a disastrous effect on the GPL. If you can claim that your software product isn't a derivative work simply because you've put the GPLed code into a separate executable file and "transcluded" it via some sort of IPC interface, the protections of the GPL become meaningless. It's because of problems like this that people are afraid of testing the GPL in court. -- BenRG (talk) 19:56, 3 May 2008 (UTC)
- Um, okay...I stopped being able to tell what people meant about 3 comments up. 24.77.21.240 (talk) 20:21, 3 May 2008 (UTC)
- Damn it wikipedia, keep me logged in. That comment above should be attributed to me. —Preceding unsigned comment added by Sliver Slave (talk • contribs) 20:25, 3 May 2008 (UTC)
- BenRG, but inclusion makes the web page a derived work of the image, rendering the web page GFDL. --Kjoonlee 20:41, 3 May 2008 (UTC)
- "Inclusion" is just not happening. It's just saying "those guys over there have an image which would look good here. If you ask them for it, they might give it to you." --tcsetattr (talk / contribs) 21:36, 3 May 2008 (UTC)
- I'm pretty sure the combined web page with the image data included is eligible for copyright, and it clearly is a derivative work, but that's not the issue here. Our author doesn't want to assert copyright on the combined web page. All he wants to do (and, let's say, all he does do) is run a web server which distributes his HTML file to whoever asks for it. The HTML file contains an img tag with a URL identifying a copyrighted image. URLs themselves are not copyrightable, and the rest of the HTML file is original work. Has the author created a derivative work? I'm not saying he has, I'm not saying he hasn't, I'm saying that this doesn't seem to be adequately addressed by the current copyright code. Whether it's been addressed in any judicial decisions I have no idea. -- BenRG (talk) 22:55, 3 May 2008 (UTC)
- Again, that's reuse, under the GFDL. --Kjoonlee 01:18, 4 May 2008 (UTC)
- Point to which clause in the license contains this "reuse" definition, or any other official statement from FSF to support it. http://www.gnu.org/copyleft/fdl.html doesn't actually contain the word "reuse". It's all about copying and modifying. There doesn't seem to be any part of it which even remotely resembles what you're imagining. It makes no attempt to regulate the dissemination of information on where to find a GFDL'ed document. Such an attempt would look quite out of place. Copyleft is about creative uses of copyright to "promote progress" as it was supposed to be in the first place, not about using it as a tool of oppression. --tcsetattr (talk / contribs) 02:02, 4 May 2008 (UTC)
- Hotlinking via an img tag is the same as linking via a href tag for copyright purposes, as the image itself is not physically on the linker's server nor being served by the linker's server. That said, any method of linking is vulnerable to DMCA/copyright-circumvention lawsuits (in the U.S.), as people who've linked to "private", but non-encrypted windows media streams have found out. 206.126.163.20 (talk) 00:24, 5 May 2008 (UTC)
- Just to elaborate slightly with an analogy: "You can find the book in the library on main street" is by no means the same as "Here's the book: (insert contents)". 206.126.163.20 (talk) 00:31, 5 May 2008 (UTC)
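To spell out the mechanical distinction being debated above, this is what the two kinds of linking look like in HTML, using the example URLs from earlier in the thread (a bare illustration, not legal advice):

<!-- hotlinking: the visitor's browser fetches the image file straight from Wikimedia's server -->
<img src="http://upload.wikimedia.org/wikipedia/en/a/a9/Example.jpg" alt="Example">

<!-- plain linking: the visitor only follows a link to the image description page -->
<a href="http://en.wikipedia.org/wiki/Image:Example.jpg">Example image</a>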
Can't boot computer after attempting to add RAM
Hello, I'm having a problem with booting my computer... I attempted to add a stick of RAM from another computer that I just realized wasn't compatible, but before I figured this out I added the stick then tried to turn on my computer. The first time I got three slow beeps and no response from the monitor (everything was replugged). I've since removed the stick I attempted to add but when I put the computer back to the way it was the monitor still wouldn't display anything. If it helps, it's a Gateway computer and I have Windows XP on it. Should I try to boot the computer with the Windows XP disk or try something else? I should make sure to add that there is absolutely nothing displayed on the monitor even though the computer appears to be running- no Windows error message or anything. Thanks for your help in advance... —Preceding unsigned comment added by 68.54.42.126 (talk) 22:14, 2 May 2008 (UTC)
- Might be too obvious but is the old stick you put back in seated correctly? 161.222.160.8 (talk) 22:46, 2 May 2008 (UTC)
I'm almost 100% certain I did, and even if I didn't the first time I have in subsequent tries because I flipped the stick a few ways to make sure there weren't any other ways that fit. —Preceding unsigned comment added by 68.54.42.126 (talk) 22:58, 2 May 2008 (UTC)
Update: I'm definitely certain I have it in the right way now but I still have the problem of 3 consecutive beeps when I try to start up the computer, and the monitor still isn't displaying anything- not even a Windows error message. The best I can say is that it's a Gateway... anything I can do to fix this? —Preceding unsigned comment added by 68.54.42.126 (talk) 23:04, 2 May 2008 (UTC)
- 3 beeps at POST certainly isn't good. According to this link from the POST article, it could be bad RAM or a dead motherboard. Are you able to test the old RAM in another computer? If so, and it tests OK, it would seem to be the mobo. If the RAM test fails, you need some new RAM. Memtest86 is a quality free RAM tester if you need one. Of course there may be something I am missing, so don't do anything too drastic (no hammers, yet). 161.222.160.8 (talk) 23:28, 2 May 2008 (UTC)
- There's always the unfortunate chance that you zapped something with static electricity. Did you take any precautions against electrostatic discharge as you did the work?
- Actually, after hours of panic I just figured out the problem. It was, ironically enough, what the first poster mentioned... I had the RAM plugged in WAY too loosely for it to be read because I was worried about messing it up. A few firm pushes got it locked in and allowed me to type this on the original computer! Thanks for the help you offered, though... I'll definitely make sure to ground myself carefully and place hardware more firmly in the future! —Preceding unsigned comment added by 68.54.42.126 (talk) 00:49, 3 May 2008 (UTC)