Wikipedia:Reference desk/Archives/Computing/2007 July 20
From Wikipedia, the free encyclopedia
Computing desk
< July 19 | << Jun | July | Aug >> | July 21 >
Welcome to the Wikipedia Computing Reference Desk Archives
The page you are currently viewing is an archive page. While you can leave answers for any questions shown below, please ask new questions on one of the current reference desk pages.
July 20
DVD to YouTube
How do I transfer something from a DVD to YouTube? —Preceding unsigned comment added by 75.111.190.135 (talk • contribs) 01:56, 20 July 2007
- You have to rip it off the DVD first. Then edit out the clip you want (I recommend using QuickTime Pro, as it makes this sort of simple editing really easy and can deal with a million formats, but that's just me). Then upload it to YouTube. --24.147.86.187 02:02, 20 July 2007 (UTC)
- Before uploading it to YouTube, I'd suggest resizing it and encoding it in Xvid or something similar. It will massively reduce the time it takes to upload. A Very Noisy Lolcat 07:24, 22 July 2007 (UTC)
WinXP SP2 object block when downloading
Any idea how to overcome the "object block" problem when downloading an .exe file in Windows XP SP2? 59.92.241.163
- Right-click on the information bar and choose Download. Or hold down Ctrl+Alt when clicking on the link. --soum talk 08:04, 20 July 2007 (UTC)
Digital Camera Lenses
Compact digital cameras nowadays seem to come with a variety of lens sizes. For example, this camera from the Sony T-series has a very small lens, whereas this one has a much larger lens. However, the difference in picture quality is negligible. How does this work, and what advantages do bigger lenses offer?
Also, what do people think about the Samsung NV3 vs. the Samsung NV10?
Many thanks,
--Fadders 08:57, 20 July 2007 (UTC)
- It depends what type of lens it is. For example, a telephoto lens is a lot bigger than others, but has much better zoom, and can therefore make flowers etc. really stand out. Other lenses have different zooms etc. For effects, there is a wide range of filters available. Hope this helps. Adamlonsdale 09:25, 20 July 2007 (UTC)
ASCII compression?
What is a good compression method for compressing ASCII text (less than 50 characters) on an extremely slow network (maximum 5 bytes/s) that is relatively easy to implement and doesn't require much (decoding) hardware? --antilivedT | C | G 11:09, 20 July 2007 (UTC)
- For less than 50 characters, a dictionary based compression system seems overkill. How about delta encoding? --soum talk 11:17, 20 July 2007 (UTC)
- Some kind of delta scheme might work, if the individual messages are very like one another (say the outputs from a remote weather station, which follow a fixed rubric). Dictionary compression might just work if the messages have a very fixed set of symbols (again the weather station example, if messages look like "WIND:20.4;WINDDIR:NNE;TEMP:13.0;HUM:66"), such that the dictionary can be fixed (baked into both parties, rather than being generated dynamically and transmitted - obviously the dictionary, like the Huffman tree below, would be bigger than the message). If that's the case you'd probably have to figure out the dictionary manually. -- Finlay McWalter | Talk 12:29, 20 July 2007 (UTC)
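For illustration, a minimal sketch of byte-wise delta encoding under the assumptions above (fixed-rubric messages, with both ends holding a copy of the previous message); all class and method names here are hypothetical:

```java
import java.util.ArrayList;
import java.util.List;

class DeltaCodec {
    // Encode: emit (index, newByte) pairs for every position that differs
    // from the previous message. If consecutive messages are near-identical,
    // this list is much shorter than the message itself.
    static List<int[]> encode(byte[] prev, byte[] curr) {
        List<int[]> changes = new ArrayList<>();
        for (int i = 0; i < curr.length; i++) {
            if (i >= prev.length || prev[i] != curr[i]) {
                changes.add(new int[]{i, curr[i]});
            }
        }
        return changes;
    }

    // Decode: apply the pairs to the receiver's own copy of the previous
    // message to reconstruct the current one.
    static byte[] decode(byte[] prev, List<int[]> changes, int length) {
        byte[] out = new byte[length];
        System.arraycopy(prev, 0, out, 0, Math.min(prev.length, length));
        for (int[] change : changes) out[change[0]] = (byte) change[1];
        return out;
    }
}
```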
- Assuming the data is natural language (e.g. English) text, then a simple variable-length coding scheme should reduce that down to around half (but with such small packets, probably less efficiently than that). You can analyse a reasonable test corpus of representative messages and produce a fixed Huffman tree which both ends use for encoding and decoding. The nice thing is that the (marginally more) complex stuff is in the initial analysis, which you do on your desktop machine just once. The code the two parties to the actual live communication have (to encode and decode) is pretty trivial. This all falls apart, however, if your packets don't exhibit the skewed letter distribution that a natural language has, or if the distributions in the actual packets differ markedly from those in your test corpus. -- Finlay McWalter | Talk 11:35, 20 July 2007 (UTC)
- Yes, the data is natural language (song information, actually) and Huffman coding seems quite good. However, I don't quite get what Huffman encoding does, except maybe using a dictionary to reduce the number of bits used for commonly found symbols? Also, how can I generate a Huffman tree, and what software do I use (on Linux)? --antilivedT | C | G 21:55, 20 July 2007 (UTC)
- Yes, all the Huffman coding does is assign short bit sequences to the most common letters, with longer sequences for uncommon letters - that's in contrast to the regular ASCII coding, which assigns as many bits (7) to '<' or '$' or 'Q' as it does to 'e' and 'a'. [Note incidentally that the Huffman article rightly talks about "symbols", rather than just "letters"; I don't know of a straightforward way whereby you'd deal with symbols longer than a letter, for this particular problem.] The process for building the tree is straightforward - you perform a frequency analysis of your test corpus (just count how many of each letter), sort them into frequency order, and then go through the queue-based algorithm described at Huffman coding#Basic technique, which builds a binary tree. Later, encoding using the tree just means finding the desired letter in the tree and reporting the sequence of 0 and 1 ("go left", "go right") steps necessary to get to it (in practice you'd probably build a lookup table, like the one in the right-hand box in the Huffman article). The decoder is just a simple FSM where you push each bit in and it makes a "cursor" descend that tree - when it lands on a letter node it emits the letter and resets the FSM cursor to the root (in practice there are lots of fancier implementations, depending on how you represent the tree in the decoder). On thinking more about your test data, I'm not sure you'll see compression that's really worth the bother. In cryptographer's English (the language that consists only of 'a..z' and maybe a space) you'd hope for something around 50% compression - but the more additional letters you allow in the input, the less effective the compression will be. There's obviously a cost for allowing uppercase (not an entire bit's worth, on average, maybe 0.5 bits), and more cost for newlines, punctuation chars, digits, etc. The more restrictive the input set (e.g. discard punctuation and convert all chars to lowercase) the better the resulting compression. -- Finlay McWalter | Talk 22:30, 20 July 2007 (UTC)
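As a rough illustration of the decoder FSM described above (a sketch only; Node and HuffmanDecoder are hypothetical names, and real implementations often use flattened tables instead of pointer-chasing):

```java
// Leaves carry a letter; internal nodes only have children.
class Node {
    Node left, right;
    char letter;
    boolean isLeaf() { return left == null && right == null; }
}

class HuffmanDecoder {
    private final Node root;
    private Node cursor;

    HuffmanDecoder(Node root) { this.root = root; this.cursor = root; }

    // Push one bit in: 0 means "go left", 1 means "go right". When the
    // cursor lands on a leaf, emit its letter and reset to the root;
    // otherwise return null and wait for more bits.
    Character push(int bit) {
        cursor = (bit == 0) ? cursor.left : cursor.right;
        if (cursor.isLeaf()) {
            char c = cursor.letter;
            cursor = root;
            return c;
        }
        return null;
    }
}
```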
- No, LZW produces a single output symbol from a varying number of input symbols. Huffman coding produces a varying number of output symbols from a single input symbol. Most general-purpose lossless compression these days uses a hybrid of the two (Lempel-Ziv-Huffman; I'm not sure if this has a Wikipedia article, but see deflate). But LZ-Huffman won't work for your application because it's fundamentally adaptive.
- Your best choice depends on how important bandwidth is compared to encoding/decoding complexity on each end. You will probably get substantially better results by using a so-called order-1 model, which means that instead of encoding each symbol using a fixed Huffman tree, you encode it using one of several fixed Huffman trees, with the tree selected based on the previous encoded/decoded symbol. There's a lot of correlation between adjacent characters in English text. You will also get substantially better results if you use arithmetic coding instead of Huffman coding. -- BenRG 16:48, 22 July 2007 (UTC)
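A minimal sketch of that order-1 idea, assuming a hypothetical codes table built offline (one code table per previous-character context, e.g. 128 × 128 entries for plain ASCII):

```java
class Order1Encoder {
    // codes[prev][c] holds the bit string for character c when the
    // previous character was prev; built offline from the test corpus.
    private final String[][] codes;

    Order1Encoder(String[][] codes) { this.codes = codes; }

    String encode(String message) {
        StringBuilder bits = new StringBuilder();
        char prev = 0;  // fixed start-of-message context
        for (char c : message.toCharArray()) {
            bits.append(codes[prev][c]);  // table chosen by the previous symbol
            prev = c;
        }
        return bits.toString();
    }
}
```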
- I think I should just do a single fixed Huffman tree from a sufficiently large sample of data, since I have no idea what arithmetic coding is about or how it works. However, how should I go about generating a Huffman tree? What command should I use to count all the characters? --antilivedT | C | G 06:22, 23 July 2007 (UTC)
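For what it's worth, a minimal sketch of that offline analysis step in Java: count the character frequencies in a sample corpus, then build the tree with a priority queue as described at Huffman coding#Basic technique (the file name is hypothetical, and this skips edge cases such as a one-character alphabet):

```java
import java.nio.file.Files;
import java.nio.file.Paths;
import java.util.HashMap;
import java.util.Map;
import java.util.PriorityQueue;

class Node {
    Node left, right;
    char letter;   // meaningful on leaves only
    int weight;    // frequency of this letter (or subtree total)
}

class BuildTree {
    public static void main(String[] args) throws Exception {
        String corpus = new String(Files.readAllBytes(Paths.get("corpus.txt")));

        // Count how many of each character appear in the corpus.
        Map<Character, Integer> freq = new HashMap<>();
        for (char c : corpus.toCharArray()) freq.merge(c, 1, Integer::sum);

        // Queue-based algorithm: repeatedly merge the two least
        // frequent nodes under a new parent until one tree remains.
        PriorityQueue<Node> queue =
            new PriorityQueue<>((a, b) -> Integer.compare(a.weight, b.weight));
        for (Map.Entry<Character, Integer> e : freq.entrySet()) {
            Node leaf = new Node();
            leaf.letter = e.getKey();
            leaf.weight = e.getValue();
            queue.add(leaf);
        }
        while (queue.size() > 1) {
            Node parent = new Node();
            parent.left = queue.poll();
            parent.right = queue.poll();
            parent.weight = parent.left.weight + parent.right.weight;
            queue.add(parent);
        }

        printCodes(queue.poll(), "");  // dump each letter's bit sequence
    }

    // "0" = go left, "1" = go right, as in the decoder described above.
    static void printCodes(Node n, String prefix) {
        if (n.left == null && n.right == null) {
            System.out.println(n.letter + " -> " + prefix);
            return;
        }
        printCodes(n.left, prefix + "0");
        printCodes(n.right, prefix + "1");
    }
}
```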
Why no $2 computers?
I recently bought a "6 in 1 Casino Game" for one pound - about two dollars. It plugs into the TV and offers six different games including Texas Hold'em with computer opponents and 'music'. The hand-held console has ten different buttons in total.
Now if that game can do all that for £1, why can't I buy a computer for a pound? Or at least a programmable calculator for £1? Thanks 80.2.192.45 11:19, 20 July 2007 (UTC)
- One can now buy new, for a pound, a scientific (non-programmable) calculator which cost £30 twenty years ago. DuncanHill 11:21, 20 July 2007 (UTC)
- The game probably has very little or no memory or storage capacity, and its instructions are burned onto a chip that can be mass-produced cheaply. A programmable computer needs to have both storage capacity and a more complex interface, which would increase its price. Plus, you have economics working against you. The demand for the casino game is (sadly) higher than the demand for programmable calculators. -- JSBillings 11:37, 20 July 2007 (UTC)
- So why can't we have $20 pocket computers? I saw a USB flash drive with 512 MB of memory for a few pounds recently; LCD displays are common, small keyboards are common. This reminds me that we do have little programmable computers in some mobile phones, but it would be nice to just have the computer part. 80.0.108.224 15:41, 20 July 2007 (UTC)
- It also depends on how little you want to pay the workers. A coworker purchased a tool set for $1 (I don't know how many pounds that is). It included five common tools (Phillips head, flat head, pliers, etc.). It said "Made in China". So, we discussed it. Retail markup is 300%. So, assuming the dollar store paid a huge amount for it, we can estimate that they paid 50 cents. For that 50 cents, someone was paid for the metal/plastic supplies, someone was paid to stamp the plastic case and the metal tools, someone was paid to box up the tools, someone was paid to take it to the ship, someone was paid to ship it across the ocean, someone was paid to drive it across the U.S., and finally someone was paid to put it on the shelf in the store. Obviously, a lot of people were paid almost nothing so he could buy those tools for $1. -- Kainaw(what?) 12:03, 20 July 2007 (UTC)
- Comparative advantage? Automated production lines? Economies of scale? A million product items shipped in one container? Personally I think buying cheap goods from China is a way of moving western wealth to the third world - twelve hours on a production line is a marginally better job to choose than twelve hours in a muddy field in the cold. 80.0.108.224 15:42, 20 July 2007 (UTC)
- A (presumably US?) dollar is about 49p at current prices. Algebraist 13:56, 20 July 2007 (UTC)
- If you open up your game, it will probably look something like the top image at right, with just one or two microchips on it. If you open up a standard PC, it will look something like the bottom image, with dozens of components. Each item costs money, and you have to pay designers to figure out how to connect all those bits together, and you have to pay a sophisticated manufacturer to create the multi-layer boards, and the complexity means that the designers probably won't get it right the first time, so you have to get the manufacturer to do it again, etc. It's the complexity that's expensive. With single-chip systems, a guy could wire it up by hand in his basement. --TotoBaggins 16:19, 20 July 2007 (UTC)
I'm still sceptical that a pocket computer couldn't be produced as cheaply as a transistor radio, especially when you consider things like a computer on a chip. 80.0.105.59 19:02, 20 July 2007 (UTC)
- The One Laptop Per Child people made an enormous effort to put their machine out at US$100 each, and didn't manage. You really have to define what a "pocket computer" is (e.g., a $1 calculator certainly qualifies under some definitions), and go from there. --TotoBaggins 19:26, 20 July 2007 (UTC)
I think a pocket computer could have an LCD screen, flash memory, a USB socket, and a miniature keyboard. You can buy for a few pounds cheap 'personal organisers' that record appointments and addresses and have most or all of these features, so it shouldn't be a difficult job to upgrade this to a programmable computer of some kind. I believe you can buy things like what I've described already, I just don't see why they need to be so expensive. 80.2.202.130 20:32, 20 July 2007 (UTC)
Any MP3 player you buy is a computer. They are available for US$5 around here quite often. The key to the cost right now is really the I/O devices - especially the screen - and the human costs (you want a general-purpose one instead of an MP3 player - imagine the tech support calls you'd get!) --206.79.158.100 22:47, 20 July 2007 (UTC)
This does not seem to be a problem with mobile phones / cell phones. I would expect it to have a built-in operating system rather than one you can change, and perhaps a built-in programming language too. Something like one of the early personal computers such as the Commodore 64, with a lot more memory, would be fine. 80.0.133.53 09:08, 21 July 2007 (UTC)
- Yeah, but cell phones are not free either - the components all cost money, the tech support costs money, all of it costs money. The only reason you often get them for free is because you lock into plans with the companies. You don't seem to have a very good grasp of how economies work. You can't just wave away the start-up costs, the capital investments, the R&D, the labor force, etc. These things all factor into the cost of the final product, and are why something as tiny as a CPU chip can cost hundreds of dollars. --24.147.86.187 15:08, 21 July 2007 (UTC)
- I do understand how economies work, thanks, and I've got the degrees to prove it! You could make the same arguments regarding transistor radios, yet you can buy those for £1. 80.2.201.9 09:26, 23 July 2007 (UTC)
Google search for webservers on a chip. Like [this]. As far as I know these are pretty small, cheap and rather programmable. You can also check Wireless Sensor Networks but these are not yet released to market so they are still expensive. racergr 18:52, 21 July 2007 (UTC)
Bandwidth of Wikipedia
I am trying to provide my employer with examples of the bandwidth used by large web sites, in an attempt to prove to him that our server requirements are fairly minimal no matter how large a site he intends to build or how many users we anticipate. I was able to find that the total database of Wikipedia is 1.2 TB and that there are 120 servers. Does anybody have even an estimate of the bandwidth usage of a site like Wikipedia?
Thanks 96.224.27.39 15:17, 20 July 2007 (UTC) AP
- Wikipedia:Statistics has some graphs that may be useful. The traffic graphs under "Automatically updated statistics" show how many bits are served by all Wikimedia clusters (it's around 1-2 Gbits per second, which equates to between 7.5 and 15 gigabytes per minute). — Matt Eason (Talk • Contribs) 15:36, 20 July 2007 (UTC)
- Woah! 15 gigabytes a minute! Good thing there are so many servers... -Mary
- Assume 100 million people use Wikipedia regularly, that they visit on average 10 pages per day, and that such a page is on average 1 MB in size (just one full-size photograph would cause that). That would be 1 petabyte per day. With about 1,440 minutes in a day, that would be roughly 0.7 TB per minute. A bit of an overestimate. I suppose there are not that many regular users ... yet. Plenty of room for growth. DirkvdM 09:47, 22 July 2007 (UTC)
- To the original question - are you sure Wikipedia is a good example? It's one of the busiest sites in existence. Might even be bigger than Google in this respect since Google doesn't usually deliver content (only when cached pages are viewed). DirkvdM 09:47, 22 July 2007 (UTC)
As the original poster of the question, I wanted to thank you all. No, I do not believe that Wikipedia is a realistic analogy, but my boss does, so that's what I was working with :) To take the scenario further, would the server requirements differ if the data itself was being pushed to many users, but instead of a 1 MB page at a time, many sequential 100 K snippets at a time? The whole project is tough to explain, but instead of accessing a single big page, we envision serving lots of little bits of info. I guess my question is, is bandwidth bandwidth regardless of how it is being used? Is 1 MB to ten users in a minute the same as 100 K to 100 users in a minute? The data will come from an SQL database, if that makes any difference. Thanks again for the help. 96.224.97.113 21:40, 24 July 2007 (UTC) AP
Google's (and Yahoo's) descriptions of Wikipedia pages
If I search for, e.g., "France" on Google or Yahoo, I get a description of the article, instead of a snippet of the article, which is what used to show (I believe). The descriptions are not always the same: to wit, Google: "Hyperlinked encyclopedia article covers the country's history, government and politics, geography, economy, demographics, language and culture." Yahoo: "Entry covering the western European country of France. Shows its flag, its coat of arms, demographic information, and information on its government and military."
Where are these descriptions coming from? If I look at the wiki text I don't see this stuff written. Are admins writing this? And if I view the webpage's source, I see a bunch of keywords (e.g. "France,Basse-Normandie" -- where did those come from?), but I don't see this description. How does Google know what to put in?
Thanks! --Mary
- I think they come from the Google Directory. Splintercellguy 19:20, 20 July 2007 (UTC)
Linux and Razr
I recently purchased a new phone, and I'd like to experiment with Linux on my old Razr. I understand it won't make calls anymore, but that's not important. Any help is greatly appreciated :)
- If there isn't a project to run Linux on the Razr, you'd basically have to do it yourself from scratch. You could try using iPod Linux as a base, but I doubt you'd get anywhere with it. In other news, according to Google, the company that makes the RAZR is planning on making a smartphone version that uses Linux --Laugh! 20:24, 20 July 2007 (UTC)
- I heard about that, but I just bought a new phone. How would I go about putting iPod Linux on my Razr? They use the same type of processor, right?
- As far as I know, not only is there no Linux for the Razr, but the bootloader (which is probably in a built-in ROM) only accepts signed binaries (IIRC, I saw this info on some site which had the layout of the Razr flash files, a long time ago). This was probably done because, AFAIK, the operating system runs on the same CPU which runs the GSM stack, and they do not want you messing with it. The phones the OpenEZX people run Linux on have two processors: the one which runs the GSM stack (which AFAIK is the same as the RAZR's only processor!), and a common ARM CPU which runs the operating system. --cesarb 01:17, 21 July 2007 (UTC)
Java Complex Matrix Class
Hi all:
I'm looking for a Java class that handles complex matrices. Mainly, I need eigenvalues and eigenvectors, and basic operations like multiplication, etc. I found a class for real matrices here, but it's not enough, sadly. Anybody know where I can find one? Hopefully from a reliable source. Thanks! --Waldsen 21:15, 20 July 2007 (UTC)
- Subclass and extend your own. If you know the concepts, you may be able to implement them.
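To illustrate that suggestion, a minimal sketch of a hand-rolled complex matrix type (all names hypothetical; this covers multiplication only - complex eigendecomposition is genuinely hard to get numerically right, so for eigenvalues a tested library is still worth hunting down):

```java
class Complex {
    final double re, im;
    Complex(double re, double im) { this.re = re; this.im = im; }
    Complex add(Complex o) { return new Complex(re + o.re, im + o.im); }
    // (a+bi)(c+di) = (ac - bd) + (ad + bc)i
    Complex mul(Complex o) {
        return new Complex(re * o.re - im * o.im, re * o.im + im * o.re);
    }
}

class ComplexMatrix {
    final Complex[][] a;
    ComplexMatrix(Complex[][] a) { this.a = a; }

    // Standard O(n^3) product: c[i][j] = sum over k of a[i][k] * b[k][j].
    ComplexMatrix times(ComplexMatrix b) {
        int rows = a.length, cols = b.a[0].length, inner = b.a.length;
        Complex[][] c = new Complex[rows][cols];
        for (int i = 0; i < rows; i++)
            for (int j = 0; j < cols; j++) {
                Complex sum = new Complex(0, 0);
                for (int k = 0; k < inner; k++)
                    sum = sum.add(a[i][k].mul(b.a[k][j]));
                c[i][j] = sum;
            }
        return new ComplexMatrix(c);
    }
}
```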