Talk:Color blindness/Archive1
Misconceptions, request for references
I have tried to get rid of most of the errors in this page (and put in a warning against recreating them, though I don't suppose it will work: people have got it so firmly into their heads that there is a red cone and that people have primary colours). Can anyone now document:
- the claim that in emergency situations everyone is colour blind? (from first principles, I don't actually believe this, but I suppose it might be true)
- the high frequency of colour blindness in small-gene-pool (basically, inbred) communities? (I am fairly sure this is right, from individuals I've talked to, but we really need a proper reference).
- For monochromats, this is the subject of Oliver Sacks's book The Island of the Colorblind. Not the most scientific source, but good enough. I dunno about dichromats, but it's pretty believable.
seglea 21:58, 14 Apr 2004 (UTC)
In addition, "Some color-blind people have better night vision than those with normal color vision.", seems to be pretty strongly refuted by http://vision.psychol.cam.ac.uk/jdmollon/papers/hudibras.pdf. You could say that monochromats are better off in the first few minutes after sudden dark onset, but is this worth saying?
On the other hand, "Color blind hunters are better at picking out prey against a confusing background, and the military have found that color blind soldiers can sometimes see through camouflage that fools everyone else" appears to be valid, if anyone was wondering (I was): http://www.hubmed.org/display.cgi?uids=1354367.
- I, too, would be interested in seeing a reference for the claim that everyone is colour blind in emergency situations. If it can't be backed up, it should be removed. Edwardian 06:52, 31 July 2005 (UTC)
The link to "color blind people can see through camouflage" is misrepresented and given undue prominence for an obscure reference which had only 7 people in the study. Also, that article refers to textures, which is quite different from color vision. This statement should be removed unless you can find a reference that contradicts this: http://www.airborneranger.com/forums/lofiversion/index.php/t17.html and other sites which say that normal color vision is mandatory for SOTIC (sniper).
Removed images
Since neither of the images on the pages were working for their intended purpose, I've removed them. I could definitely see "WIKI" in the first and "WIKIPEDIA" in the second. - UtherSRG 13:28, 5 May 2004 (UTC)
On my talk, Hankwang replies:
- In case you're not watching Talk:color blindness: we have been discussing how to construct our own color blindness tests over the past week, and if you are color blind, we would welcome your input on the talk page. Do you see a difference between the various spectra shown at the bottom of the talk page? Do you know whether you are protanopic or deuteranopic? (Both are forms of red-green color blindness, as you probably already know).
- I haven't been following along. It smacks of primary research to me. It would be far more appropriate to give two sample Ishihara plates (one for control, and one that dichromats and all three anomalous trichromats will fail on). This could probably be considered fair use. I do not know if I'm deuteran- or protanomalous, but I do know I'm a trichromat and not a dichromat. - UtherSRG 13:54, 5 May 2004 (UTC)
- Our "primary research" is an effort to make sense out of the published data that is not documented very clearly on the web. Unfortunately, we lack books on the topic.
- It is fundamentally impossible to detect a protanomalous or deuteranomalous person with an Ishihara-like plate that is reproduced with normal CMY print or on an RGB display.
- I don't believe you are correct... the plates work for me (ie I can see some but not others) on online images, although they don't work as well as the true plates. However, that's not my point. My point is that we should provide a sample of what exists (secondary research) not create our own version purely from descriptions and technical manuals (primary research). Shorter: The article should not include a valid working test, it should show what a test looks like. - UtherSRG 14:24, 5 May 2004 (UTC)
- It might be interesting to take a look at Wikiresearch, which is an effort to set up a wiki for original research. 82.117.135.78 13:40, 19 May 2004 (UTC)
Color blindness test
[discussion started on user talk pages]
Are you sure that this picture is a good color blindness test? In each of the green channel, the red channel, and the luminosity channel, the letters are recognizable. Update: my protano(p|mal)ic colleague said that he couldn't really tell whether the picture is green or red, but he clearly sees two bright letters (the green ones) and two dark letters (the red ones).
Such a picture should look more or less like monochromatic intensity noise to a color-blind person, while the information about the letters "WIKI" is in the color balance. Remember that for someone with normal color vision, green stimulates all three receptors and red stimulates the green and red receptors. Someone who is missing the red receptor can still distinguish green from red because of the difference in blue stimulus. Hence, you need to mix a bit of blue into the red to make the two indistinguishable. More precisely, R, G, and B on a CRT stimulate the receptors in (roughly) the ratios (0.7,0.3,0), (0.3,0.65,0.05), and (0.15,0.02,0.83) - see the CIE diagram on the color page. The trick is to find RGB values that have the same stimulus ratio for the green/blue receptors, so that to a color-blind viewer they appear only as intensity differences. And then you have to keep in mind that the intensities on your screen are proportional to the RGB values to the power 2.5 (see gamma correction). It's not trivial...
Han-Kwang (talk) 14:41, 26 Apr 2004 (UTC)
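As an aside, here is a minimal C sketch of the conversion just described, using only the rough primary-to-receptor ratios and the gamma exponent quoted in the comment above (the numbers are the rough estimates from that comment, not calibrated values, and the function name is just for illustration):

#include <math.h>

/* Rough CRT primary -> receptor stimulation ratios quoted above
   (rows: R, G, B primaries; columns: "red", "green", "blue" receptors). */
static const double primary_to_cone[3][3] = {
    {0.70, 0.30, 0.00},   /* R primary */
    {0.30, 0.65, 0.05},   /* G primary */
    {0.15, 0.02, 0.83},   /* B primary */
};

/* Convert 8-bit framebuffer RGB to approximate receptor stimuli:
   undo the display gamma (~2.5) first, then mix the primaries. */
void rgb_to_cones(const unsigned char rgb[3], double cones[3])
{
    double linear[3];
    int p, c;
    for (p = 0; p < 3; ++p)
        linear[p] = pow(rgb[p] / 255.0, 2.5);   /* screen intensity, not pixel value */
    for (c = 0; c < 3; ++c) {
        cones[c] = 0.0;
        for (p = 0; p < 3; ++p)
            cones[c] += linear[p] * primary_to_cone[p][c];
    }
}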
Based on the information you gave, if I understood and got everything right (I could easily have made a mistake or 15), then the WIK, IPE and DIA in the new image should be detectable respectively by the red, green and blue receptors only. It's many years since I saw an actual colour test, so I don't know if it looks like normal tests or not. (I think they normally have dots instead of crystals.) Does your colleague read "WIKIPEDIA" or "IPEDIA" now? Κσυπ Cyp 18:53, 26 Apr 2004 (UTC)
If I had known that you would immediately make a new version of the picture, then I would have checked my facts better. :-) Anyway, my test person was not available today, but one can always construct theories. You have now made the letters such that they are visible in only one of the three (R/G/B) channels; that was not what I meant.
According to this Color FAQ, a typical CRT has the following stimuli, which are a bit different from the ones that I roughly estimated:
              R      G      B    | white (6500 K)
x (L cones)  0.640  0.300  0.150 | 0.3127
y (M cones)  0.330  0.600  0.060 | 0.3290
z (S cones)  0.030  0.100  0.790 | 0.3582
I forgot to mention that the values of R/G/B are scaled such that RGB=(1,1,1) gives a 6500 K (D65) white on a properly-calibrated CRT display. Judging from your user page, you shouldn't be too scared of a bit of math. :-) So, the above table says that (x,y,z) = A.(R,G,B), where A is the 3x3 matrix above. Find scaling factors (a1,a2,a3) such that A.(a1,a2,a3) = (x_w,y_w,z_w) (the white point). Then you can construct a new matrix B, with B_ij = A_ij·a_i. Update: this should be B_ij = A_ij·a_j (not a_i)! Then the tristimulus values can be directly evaluated from the computer screen RGB values as (x,y,z) = B.(R,G,B), except that the numbers in your paint program are proportional to the power (1/2.5) of the true RGB intensities.
Now you have to find a metameric black for someone missing the x (long-wavelength) receptors, that is, an RGB vector (r',g',b') that you can add to any RGB vector without changing the perception for that person. Construct a 2x3 matrix C from the rows of B corresponding to y and z. Solve
- C.(r',g',b') = (0,0)
Any vector in the solution space can be added to any RGB vector without a protanope seeing the difference, of course with the restriction that (R+r',G+g',B+b') stays within the allowed range of positive values. You appear to have taken (p,0,0) (any value of p) as a solution, but it would look more like p(+1,-0.5,+0.1) (roughly; I'm too lazy to do the matrix inversions now).
Now you can construct an image with random, not too-saturated, colors, and add an appropriate amount of (r',g',b') to all color spots that you want to highlight for a person with normal color vision. Don't forget the gamma exponent to convert between computer RGB values and actual intensities.
Actually, it is probably easier to steal the palette from any other color-blindness test on the web (e.g. here) than to calculate the color palette. :-)
On a TFT display, the coefficients are probably different. People with protanomaly instead of protanopia need different colors as well.
Han-Kwang (talk) 12:35, 27 Apr 2004 (UTC)
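As an illustration of the null-space step described above, here is a minimal C sketch. It uses the unscaled chromaticity table quoted earlier (the white-point column scaling that turns A into B is skipped for brevity), so the numbers are only rough; for a 2x3 system the solution direction is simply the cross product of the two rows a protanope can still see. The function name is hypothetical.

/* Rough CRT chromaticity table from the Color FAQ excerpt above
   (rows: x ("L"), y ("M"), z ("S"); columns: R, G, B).  Illustrative only;
   the white-point column scaling described above is omitted here. */
static const double A[3][3] = {
    {0.640, 0.300, 0.150},   /* x */
    {0.330, 0.600, 0.060},   /* y */
    {0.030, 0.100, 0.790},   /* z */
};

/* Metameric black for a protanope: an RGB direction (r',g',b') that leaves
   the y and z stimuli unchanged, i.e. a vector orthogonal to rows y and z.
   For two rows in three dimensions, that is just their cross product. */
void protan_metameric_black(double d[3])
{
    const double *y = A[1], *z = A[2];
    d[0] = y[1]*z[2] - y[2]*z[1];
    d[1] = y[2]*z[0] - y[0]*z[2];
    d[2] = y[0]*z[1] - y[1]*z[0];
    /* Any multiple of d can be added to a colour's RGB values without a
       protanope seeing a difference, as long as the result stays in range;
       with these rough numbers d comes out proportional to about (1, -0.55, 0.03). */
}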
Whatever, I decided to install Octave (a Matlab clone), learn how to use it, and implement the stuff above; see colorblindpalette.m (description), with a suggestion for color palettes to use. However, it is untested. I suggest that you create your mosaic with random colors 1-10, add 10 to the palette index values for the first "red" word, 20 to the palette index values for the second "green" word, and 30 for the third "blue" word.
Han-Kwang (talk) 14:08, 27 Apr 2004 (UTC)
I've updated the matrix in the program I wrote to convert from LMS to RGB colour, and after finding out that .bmp files are not only stored upside down, but also use BGR colour (!!!) instead of RGB colour, fixed the program. Maybe it looked like I just had each line in each screen-primary colour because of the RGB/BGR confusion, so the colours weren't "mixed" correctly. In any case, it now correctly uses the (updated) inverse matrix. (My pocket calculator conveniently does matrix inversion.)
Here's the actual program, if you can read C, and feel like checking for bugs or using it for anything... A crash means it can't find the file - I was lazy. Κσυπ Cyp 16:41, 27 Apr 2004 (UTC)
#include <math.h>
#include <stdio.h>

unsigned char asdf[54], rgb[3];

/*double mat[3][3]={
  { 1.768472906, -.7931034483, -.3004926108},
  {-.8177339901,  1.908045977,  .1018062397},
  { .0492610837, -.1149425287,  1.198686371 }}; */
double mat[3][3]={
  { 2.088353414, -.9906291834, -.3212851406},
  {-1.155287818,  2.236055332,  .0495314592},
  { .0669344043, -.245426149,   1.271753681 }};

int main()
{
  FILE *i, *o;
  double R, G, B, L, S, M;
  /* input file has L "red", M "green", and S "blue" stimuli values */
  i=fopen("c:\\projects2\\colours.bmp", "rb");
  /* we will calculate the RGB values that generate the above stimuli */
  o=fopen("c:\\projects2\\colours.out.bmp", "wb");
  fread(asdf, 54, 1, i);
  fwrite(asdf, 54, 1, o);
  while(1==fread(rgb, 3, 1, i)) {
    /* rgb[] contains LSM values, scaled such that (255,255,255) = white point */
    /* Convert to unscaled LSM - I think M and S were previously swapped here. */
    L=rgb[2]*.3127/255; S=rgb[1]*.329/255; M=rgb[0]*.3582/255;
    /* calculate RGB from LMS (again, not LSM) */
    R=L*mat[0][0]+M*mat[0][1]+S*mat[0][2];
    G=L*mat[1][0]+M*mat[1][1]+S*mat[1][2];
    B=L*mat[2][0]+M*mat[2][1]+S*mat[2][2];
    /* Scaling so LMS(.3127,.329,.3582)==RGB(1,1,1) (LMS?) */
    R/=.2120267738; G/=.3921458724; B/=.3957273539;
    /* Gamma correction and scaling to 0..255 */
    R=pow(R, .4); G=pow(G, .4); B=pow(B, .4);
    rgb[2]=R*255.; rgb[1]=G*255.; rgb[0]=B*255.;
    fwrite(rgb, 3, 1, o);
  };
  fclose(i);
  fclose(o);
}
The above picture is no better than the first one. In fact, I can read it much more easily than the first one! (I'm deuteranopic; I normally fail to see anything on most of the real test pictures).
I have come to believe that part of the problem is that you are mixing the three tests together. But even if I don't see the first three letters very well, I see the continuation of the string and the string is obvious. Then my brain helps my eyes to see the WIK. So someone has to a) find a better palette and b) make different tests with different strings for different purposes.
If you need help, please contact me on hanke at volny dot cz and we can cooperate to create a better picture.
I'm moving here my comment from above:
It seems to me that the image on this page doesn't illustrate the problem very well. I'm deuteranopic (meaning I have trouble distinguishing red and green) and I fail most of the commonly used tests that are available on the internet. Still, I can distinguish the red and the green letters on the image in this wiki-page quite well. I mean, color-blindness is defined more strictly than that. But you could try to ask at http://members.aol.com/protanope/card1.html if they would permit Wikipedia as an encyclopedia to use one of their images.
I wonder if the use of "polygons" rather than the usual dots is making the tests not work as well as they should. The eye is much better at noticing an edge between two slightly dissimilar shades than at comparing two shades separated by white space (the white background of the dots in a typical colour-blindness test). So that may be causing the problem, even if the colours are very close to being correct. -- DrBob 18:10, 27 Apr 2004 (UTC)
Another attempt, not sure how to do it properly without writing my own drawing program... What are the letters/numbers there, and are they easy or hard to read? The letter picture is aimed at protanopia, the number picture at deuteranopia. Κσυπ Cyp 19:38, 27 Apr 2004 (UTC)
To Cyp: I made a mistake in my equations (see Update text), but you seem to have used common sense instead of blindly copying equations. :) About your C program: I added comments to what I believe it does. Clever to start with a file that has LMS instead of RGB values; however, you seem to use LSM (long=red, short=blue, medium=green), which I suppose you didn't mean to do, since then you'd need a different matrix.
About the new "spots on white background" pictures: I see a greenish 8 and a hard-to-distinguish greyish 4 on a brownish background in one picture. The other one clearly shows a red P and a blue G on a green background. I played a bit with the color balance on my CRT, but to no avail. I assume that that's because of the LSM bug. I suppose that you could make the background a bit more noisy, both in intensity and in hue. -- Han-Kwang (talk) 21:29, 27 Apr 2004 (UTC)
This one is much better than the previous picture. I'm deuteranopic and I'm not sure what there really is on these pictures, but after a little time of looking at it, it seems to me that there is a 4 over an 8 in the left picture. One of the letters is a G (but I didn't notice the P, if there really is one). It still needs some work, because on most of the tests I was given by my doctor (the ones relevant to my illness, of course) I didn't see *anything*, not even after looking at them for a longer time.
Another thing: a reference image that can be seen by all (even those with an eye defect like mine) should be made in the same style (but with obvious colors) and put as the first image of the test, just as a reference. This is how it's usually done, and it's important so that the test subject knows what it is that he should see. (If you don't have any reference, you could for example think that you have seen *something* and that this is all; but after you have seen a reference image, you know the rules of the game and you know that unless you clearly see a number or letter, your vision is altered.) But it's important to put a different letter or number on the reference image so as not to ruin the test itself. -- hhanke
I called them rgb or RGB, then changed the letters to LSM (I guess I was thinking Long, Short and Micro or something...) just before copying the code here. The LSM/LMS mixup doesn't affect the code semantics.
The left picture looks to me like a (relatively) clear 8, with a very easy to miss 4 in the background, and the right picture looks like a very clear P (clearer than the 8) with a slightly-easier-to-see-than-intended G in the background. Since you can't see the P but can see the 8, I'm tempted to either declare that you are actually protanopic, not deuteranopic, or to redefine the terms, to make the results be as expected...
I suppose that out of the images I've drawn, the P/G image would be the best to have on the page, then... (Even though it doesn't test for the same thing it was intended to...)
I'm curious how the L, M and S receptors are stimulated as a function of light frequency... (Might make it possible to draw a more realistic spectrum at colour.) Κσυπ Cyp 22:00, 30 Apr 2004 (UTC) ---
- See cone cell for the spectral response curves which I added recently - they are quite surprising -- DrBob 22:38, 30 Apr 2004 (UTC)
- I saw that graph before. Do you know what the function for the graph is, or what data was used to draw it? Κσυπ Cyp 13:00, 1 May 2004 (UTC)
---
- Well, I'm not entirely sure I'm suffering from deuteranopia. I was only told by my doctor that I have a red-green deficiency, which could equally be deuteranopia or protanopia. But on this site: [[1]] I see the images "normal" and "deuteranopia" as identical, and the "protanopia" one is clearly different for me. So I think I'm deuteranopic, if these images aren't switched. --hhanke
It must have been late when I wrote my previous comment. I'm not sure what I was thinking. Apparently I got so confused by the permutation of the variable names that I thought you used the wrong coefficients for the white-point normalization. :-) Also, my earlier remark about protanomalics/deuteranomalics is wrong; you can't test for that with a 3-phosphor CRT.
Re the spectra of the receptors: the spectra for a "Standard Observer" are specified by the CIE. The hard part is to find a plot on the web. This page provides the raw data (under "Standard observer"). It's weird that these calculated pictures do not work. It could be the color balance of the CRT on which it is viewed (it should be 6500 K). TFT screens have a different color rendering. Maybe it is the display gamma? There was something about Macintoshes having a gamma of 2.2 under certain conditions. Or does the standard CRT RGB matrix deviate too much from reality?
The data in the link I provided give an L-curve that has two maxima, quite different from the spectrum in the cone cell page.
Han-Kwang (talk) 23:01, 30 Apr 2004 (UTC)
Huh, I plotted the CIE data and it's indeed rather different. Reading the reference I got my graph from, it seems its data comes directly from spectrophotometry of the cone cells themselves, but it says that there's a whole bunch of processing that goes on after that, in the retinal membrane and the brain. This colour vision stuff is more complex than it looks. -- DrBob 00:29, 1 May 2004 (UTC)
Cone fundamentals and generating a spectrum
[Continued from previous discussion]
It turns out that the functions in my link are not cone spectra but rather color matching functions (explanation). If I get it right, CMFs are a linear combination of the cone spectra, and are for some reason practical to use in calculations if you know what you're doing (which does not apply to us, apparently :-). The cone spectra we are interested in are called cone fundamentals. Those look more like DrBob's spectra.
Maybe it's time for an article that goes into the mathematics of color vision?
-- Han-Kwang (talk) 13:25, 3 May 2004 (UTC)
The cone fundamentals text page looks like what I was looking for, except each cone has a different scale.
Here's what I got, although the L, M and S cones are probably incorrectly scaled, and I've just used the same matrix assuming it was still correct. At least the regular spectrum looks a bit like a real spectrum, unlike most rainbows on the internet...
Do any pairs of spectra appear identical? (Or at least, rather similar?) Κσυπ Cyp 12:03, 4 May 2004 (UTC)
And the program: (There must already be a spectrum.bmp, in 24 bit colour, 441x80 pixels. Contents of spectrum.bmp otherwise irrelevant and will be erased.)
#include <math.h>
#include <stdio.h>

unsigned char asdf[54], rgb[3];

/*double mat[3][3]={
  { 1.768472906, -.7931034483, -.3004926108},
  {-.8177339901,  1.908045977,  .1018062397},
  { .0492610837, -.1149425287,  1.198686371 }}; */
double mat[3][3]={
  { 2.088353414, -.9906291834, -.3212851406},
  {-1.155287818,  2.236055332,  .0495314592},
  { .0669344043, -.245426149,   1.271753681 }};

double table[441][3]; // wavelength-390, SML
double unused;
int zero=0;

int main()
{
  FILE *i, *o;
  double R, G, B, L, M, S, f;
  int x, y;

  if(!(i=fopen("c:\\projects2\\spectrum.bmp", "rb"))) {printf("File not found.\n");return(0);}
  fread(asdf, 54, 1, i); fclose(i);

  if(!(i=fopen("c:\\projects2\\ss2_10e_1.txt", "rt"))) {printf("File2 not found.\n");return(0);}
  // read cone SML spectra (convert from log to linear)
  for(x=0;x<226;++x) fscanf(i, "%lf, %lf, %lf, %lf", &unused, &table[x][0], &table[x][1], &table[x][2]);
  for( ;x<441;++x) { fscanf(i, "%lf, %lf, %lf,", &unused, &table[x][0], &table[x][1]); table[x][2]=-1e99; }
  for(x=0;x<441;++x) for(y=0;y<3;++y) table[x][y]=exp(table[x][y]);
  for(x=0,L=M=S=0;x<441;++x) { L+=table[x][0]; M+=table[x][1]; S+=table[x][2]; } //Integrate
  //Correct proportion, but keep overall intensity about the same (otherwise image would either appear
  //black, or so bright the computer screen catches on fire (at least, if it didn't overflow first...))
  L=1/L; M=1/M; S=1/S; f=(L+M+S)/3; L/=f; M/=f; S/=f;
  for(x=0;x<441;++x) { table[x][0]*=L; table[x][1]*=M; table[x][2]*=S; }
  fclose(i);

  // create new bitmap (loop over x,y coordinates)
  o=fopen("c:\\projects2\\spectrum.bmp", "wb");
  fwrite(asdf, 54, 1, o);
  for(y=79;~y;--y) {
    if(y<20) f=0;                                                              // black
    else if(y<30) f=1-cos((y-20)*(3.1415626935897932384626433832795/10));      // fuzzy transition black-bright
    else f=2;                                                                  // full brightness
    for(x=0;x<441;++x) {
      L=table[x][0]/20.; M=table[x][1]/20.; S=table[x][2]/20.;  //Scale values to a sensible range
      L*=f; M*=f; S*=f;
      x+=390; // wavelength offset
      // ruler
      if(y>=55&&x% 5) L=M=S=0;
      if(y>=60&&x% 10) L=M=S=0;
      if(y>=65&&x% 50) L=M=S=0;
      if(y>=70&&x%100) L=M=S=0;
      if(y>=75&&x%500) L=M=S=0;
      x-=390;
      //Add background colour to entire image
      #define sp(c,i,s) (i*exp((double)(x-c)*(double)(c-x)/(double)(s*s)))
      L+=.009+sp(55,.0346,50); //Flat background isn't as good at keeping final values between 0 and 1...
      M+=.009+sp(55,.0346,50);
      S+=.009+sp(55,.0346,50);
      // L+=.5; M+=.5; S+=.5;
      // uncomment one of these lines to generate color-blind versions. The
      // missing stimulus is an (arbitrary) weighted sum of the other two.
      //L= M*.9+S* .1;  //Pro
      //M=L*.95 +S*.05; //Deu
      //S=L*.5 +M*.5 ;  //Tri
      if(R<0||R>1) R=((x^y)&2)/2;
      if(G<0||G>1) G=((x^y)&2)/2;
      if(B<0||B>1) B=((x^y)&2)/2; //Sanity check - checkered pattern means values out of range.
      R=L*mat[0][0]+M*mat[0][1]+S*mat[0][2];
      G=L*mat[1][0]+M*mat[1][1]+S*mat[1][2];
      B=L*mat[2][0]+M*mat[2][1]+S*mat[2][2];
      //Scaling so LMS(.3127,.329,.3582)==RGB(1,1,1)
      R/=.2120267738; G/=.3921458724; B/=.3957273539;
      // gamma correction and write to BMP
      R=pow(R, .4); G=pow(G, .4); B=pow(B, .4);
      rgb[2]=R*255.; rgb[1]=G*255.; rgb[0]=B*255.;
      fwrite(rgb, 3, 1, o);
    }
    fwrite(&zero, 1, 1, o); /*Dumb rounding line length up to 4 bytes...*/
  }
  fclose(o);
  //for(x=0;x<441;++x) printf("%d, %ld, %ld, %lf
}
The cone spectra on the mentioned webpage are normalized to amplitude 1. They should be normalized to equal area, such that a flat light spectrum results in lms=(0.33,0.33,0.33), or a 6500 K color temperature in (0.313,0.329,0.3582). The S spectrum especially needs significant scaling. That also means that you'll need to prevent overflow/underflow (RGB values must be within the 0..255 range). I'm actually not sure why that isn't a problem now; 410 nm will give something like LMS=(0.24,0.01,0.01). After your matrix multiplication, that would result in G=-0.25. On my computer (linux/glibc-2.3), pow(-0.2,0.4) will return NaN. NaN times 255 and converted to unsigned char happens to be 0 on my computer, but I wouldn't consider that an elegant way to deal with colors that cannot be reproduced in RGB. ;)
You have a very, uhh, compact way of programming. I stopped coding like that after a few occasions where I had no clue what the code did after a few months. ;-) The cosine is to make a fuzzy top border, I presume. (Why?)
Han-Kwang (talk) 13:13, 4 May 2004 (UTC)
The fuzzy border was just to look pretty.
The background isn't actually black, that's why there aren't any negative numbers causing problems.
I'll try integrating the spectrum and normalizing in a while. Κσυπ Cyp 13:31, 4 May 2004 (UTC)
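For what it's worth, a minimal C sketch of that equal-area normalization (scale each sampled cone curve so it integrates to the same total, so a flat spectrum gives equal L, M and S); the overall brightness scaling still has to be chosen separately, as in the program above, and the function name is just for illustration:

/* Normalize each of the three sampled cone curves to unit area, so that a
   flat light spectrum yields lms ~ (1/3, 1/3, 1/3) after the usual
   l = L/(L+M+S) style normalization.  n samples per curve, assumed
   equally spaced in wavelength. */
void normalize_equal_area(double table[][3], int n)
{
    double sum[3] = {0.0, 0.0, 0.0};
    int i, c;
    for (i = 0; i < n; ++i)
        for (c = 0; c < 3; ++c)
            sum[c] += table[i][c];        /* crude rectangle-rule integral */
    for (i = 0; i < n; ++i)
        for (c = 0; c < 3; ++c)
            table[i][c] /= sum[c];        /* each curve now sums to 1 */
}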
By now I am utterly confused. I think that the matrix I provided was not correct; it transformed from RGB to xyz, not to LMS. The normalization condition applies to the xyz coordinates. I still don't grasp what the xyz coordinates represent. I remember from my course in human color vision the phrase "The CIE diagram seems to be designed to confuse beginners in the most thorough way, but once one gets used to it, it is actually quite convenient". Once this gets sorted out, quite a few Wikipedia pages will need rewriting (dominant color, color, CIE, gamut). :-( Han-Kwang (talk) 14:07, 4 May 2004 (UTC)
I've just rescaled, so the proportion of L, M and S should be correct now. (At least, before being multiplied by the matrix in question...) I also scaled the values, so they are within the correct order of magnitude instead of overflowing so they coincidentally look almost right...
Control+Shift+click-on-reload might be required to reload the images in crazy browsers. Κσυπ Cyp 16:20, 4 May 2004 (UTC)
Just looked at a link above, and it says x=X/(X+Y+Z), y=Y/(X+Y+Z), z=Z/(X+Y+Z). Doesn't seem as if X, Y and Z are L, M and S, either.
So I suppose what I need to know is what the relative L, M and S stimulus values are for some well-defined spectrum (such as dE/dλ = constant for wavelength λ > 0 and dE/dλ = 0 for λ < 0, or dE/dω = constant for frequency ω > 0 and dE/dω = 0 for ω < 0 - if it's the second, I think I would need to divide by a factor λ² before integrating), and also, in the same L, M and S system, a matrix to transform between R, G, B and L, M, S, where R=G=B=1 is the brightest a typical monitor can show...
(Tried searching the internet, but all I can find is sites that have copies of Wikipedia pages...) Κσυπ Cyp 18:38, 4 May 2004 (UTC)
I'm deuteranopic (I think) and the normal and deuteranopia spectra seem different to me. In the normal one, there is a clear red (this is much clearer than the kind of red that I can't distinguish from green), while it seems to me that there is no red in the deuteranopic one. The deuteranopic and protanopic spectra seem nearly the same to me, only the protanopic one is shrunk (I don't know why it's shorter; it might be that I don't see the last part, or it's really shorter). Hope it helps in some way.
Note that all the images I've seen for doing such comparisons are real images (photos) converted to these palettes. I have given a link above. Maybe it's easier for the brain to compare spectra or something, and I can distinguish them even if I wouldn't distinguish them so clearly in a real picture. Or your algorithm is just plain wrong. I really can't tell; I can only tell what I see.
The more I think about it, the more difficult it seems to create a good test for this.
--hhanke 21:50, 4 May 2004 (UTC)
I think I'm starting to grasp it, after some studying of the CVRL pages. The problem with all these different color-matching functions (CMFs) is that it is impossible to measure the cone spectra directly. The XYZ quantities from the CIE diagram are closely related to the empirical data, and dependent on the measurement procedure (field of view, hence the 2 deg and 10 deg variants; test persons; light intensity, etc.). From a measurement, one can estimate cone spectra. The cone spectra from the above website (10 deg, Stiles/Burch) are a linear combination of the XYZ CMFs (from CIE 1964). The following matrix converts XYZ to LMS (the tabulated values, without further normalization):
XYZ2LMS =
   2.2286e-01   8.3347e-01  -3.8906e-02
  -4.2965e-01   1.1990e+00   9.6003e-02
   4.6299e-04  -1.1542e-03   4.9692e-01
and the inverse:
LMS2XYZ =
   1.9174e+00  -1.3325e+00   4.0755e-01
   6.8710e-01   3.5638e-01  -1.5056e-02
  -1.9053e-04   2.0693e-03   2.0120e+00
I constructed these matrices by comparing the XYZ and LMS values at 450, 550, and 650 nm, and the CIE 1964 CMFs give the best fit (compared to CIE1931 and corrected CIE1931). I leave it now to Cyp to write a new version of his program that, finally, adequately converts the LMS values to computer RGB values. Note the difference between the above XYZ (uppercase) coordinates and the normalized lowercase xyz coordinates. The Poynton Color FAQ lists the following conversion matrix from RGB (on a CRT, D65 white) to XYZ:
RGB2XYZ =
   0.412453   0.357580   0.180423
   0.212671   0.715160   0.072169
   0.019334   0.119193   0.950227
You'll probably need to scale LMS to get RGB into the range 0..1. I probably did something wrong again; RGB=(1,0,0) gives negative LMS values. But I think it's time for a sleep now. :-)
I read somewhere in either the FAQ or the CVRL pages that although a CRT should produce a 6500 K color temperature, in practice they often are 9500 K (too much blue). One may wish to fudge down the blue component in order to get a compromise that more-or-less works on both 6500 K and 9500 K displays.
Does anyone have a suggestion on which page we can put all this stuff about CMFs and cone functions? It's too technical for color and color blindness. Maybe on CIE?
Han-Kwang (talk) 23:40, 4 May 2004 (UTC)
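A minimal C sketch of chaining the two matrices quoted above, going from linear (already gamma-decoded) CRT RGB through XYZ to the tabulated LMS; the final scaling of LMS into the 0..1 range is left out, as discussed, and the function names are only illustrative:

/* RGB (linear, D65 CRT) -> XYZ, from the Poynton Color FAQ table above. */
static const double RGB2XYZ[3][3] = {
    {0.412453, 0.357580, 0.180423},
    {0.212671, 0.715160, 0.072169},
    {0.019334, 0.119193, 0.950227},
};

/* XYZ -> LMS (Stiles/Burch 10-deg fundamentals vs CIE 1964), as fitted above. */
static const double XYZ2LMS[3][3] = {
    { 2.2286e-01,  8.3347e-01, -3.8906e-02},
    {-4.2965e-01,  1.1990e+00,  9.6003e-02},
    { 4.6299e-04, -1.1542e-03,  4.9692e-01},
};

static void mat_vec(const double m[3][3], const double v[3], double out[3])
{
    int r, c;
    for (r = 0; r < 3; ++r)
        for (out[r] = 0.0, c = 0; c < 3; ++c)
            out[r] += m[r][c] * v[c];
}

/* Linear RGB components in 0..1 -> unscaled LMS. */
void rgb_to_lms(const double rgb[3], double lms[3])
{
    double xyz[3];
    mat_vec(RGB2XYZ, rgb, xyz);
    mat_vec(XYZ2LMS, xyz, lms);
}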
Trying to use the LMS and XYZ data, I got a slightly different XYZ->LMS matrix; maybe I was looking at a different page. Here are the matrices I used. (I used the RGB->XYZ matrix given above.)
A        450         550        650
L    .0498639,   .940198,   .16141
M    .0870524,   .977193,   .015448
S    .955393,    .00195896, 0

B        450         550        650
X    .370702,    .529826,   .268329
Y    .089456,    .991761,   .107633
Z   1.9948,      .003988,   0

XYZ->LMS = A.B^-1
           X                 Y                Z
L    .2814879424,      .7978837522,     -.0630939103
M   -.42961038,       1.214543157,       .0690102539
S   -.00002518893483,  .00006279599838,  .4789436134
Here is the version of the program I used. I won't try to merge it with the above version (might miss some changes while merging), so it might be hard to read... Contains some dead code.
#include <math.h>
#include <stdio.h>

unsigned char asdf[54], rgb[3];

/*double mat[3][3]={
  { 1.768472906, -.7931034483, -.3004926108},
  {-.8177339901,  1.908045977,  .1018062397},
  { .0492610837, -.1149425287,  1.198686371 }}; */
/*double mat[3][3]={
  { 2.088353414, -.9906291834, -.3212851406},
  {-1.155287818,  2.236055332,  .0495314592},
  { .0669344043, -.245426149,   1.271753681 }}; */
/*double mat[3][3]={
  { 4.783887884, -4.408367044,  .2244948933},
  {-.5422530806,  1.900847365, -.2585584114},
  {-.0293065676, -.1488607757,  2.225178275 }}; */
double mat[3][3]={
  { 5.157293474, -4.866693366,  .3407510271},
  {-.5694932273,  1.960161505, -.3396528051},
  {-.0336993532, -.1446765756,  2.153031303 }};

double table[441][3];
double unused;
int zero=0;

int main()
{
  FILE *i, *o;
  double R, G, B, L, M, S, f, g, //ll=0,mm=0,ss=0,
         ll=.1296229615, mm=.1156955995, ss=.0698320795,
         //ll=.0377025928,mm=.0150533162,ss=.0236552135,//ll=0*.0274029192,mm=0*.0122894124,ss=0*.0222623102,
         //ll=.0893632596, mm=.0797616083, ss=.0481428766,//ll=.0712274589, mm=.062982337, ss=.0372662419,
         //ll=.0241, mm=.011, ss=.021,
         rng[9]={1,1,1,0,0,0};
  int x, y, error=0, best=1323, besty;

restart:
  error=0;
  if(!(i=fopen("c:\\projects2\\spectrum.bmp", "rb"))) {printf("File not found.\n");return(0);}
  fread(asdf, 54, 1, i); fclose(i);

  if(!(i=fopen("c:\\projects2\\ss10e_1.txt", "rt"))) {printf("File2 not found.\n");return(0);}
  for(x=0;x<226;++x) fscanf(i, "%lf, %lf, %lf, %lf", &unused, &table[x][0], &table[x][1], &table[x][2]);
  for( ;x<441;++x) { fscanf(i, "%lf, %lf, %lf,", &unused, &table[x][0], &table[x][1]); table[x][2]=-1e99; }
  for(x=0;x<441;++x) for(y=0;y<3;++y) table[x][y]=exp(table[x][y]);
  //for(x=0,L=M=S=0;x<441;++x) { L+=table[x][0]; M+=table[x][1]; S+=table[x][2]; } //Integrate
  //L=1/L; M=1/M; S=1/S; f=(L+M+S)/3; L/=f; M/=f; S/=f; //Correct proportion, but keep overall intensity
  //  about the same (otherwise image would either appear black, or so bright the computer screen
  //  catches on fire (at least, if it didn't overflow first...))
  //for(x=0;x<441;++x) { table[x][0]*=L; table[x][1]*=M; table[x][2]*=S; }
  fclose(i);

  o=fopen("c:\\projects2\\spectrum.bmp", "wb");
  fwrite(asdf, 54, 1, o);
  for(y=79;~y;--y) {
    if(y<20) f=0;
    else if(y<30) f=1-cos((y-20)*(3.1415626935897932384626433832795/10));
    else f=2;
    for(x=0;x<441;++x) {
      //L=table[x][0]*.3127; M=table[x][1]*.329; S=table[x][2]*.3582;
      L=table[x][0]/5.; M=table[x][1]/5.; S=table[x][2]/5.;
      L*=f; M*=f; S*=f;
      x+=390;
      if(y>=55&&x% 5) L=M=S=0;
      if(y>=60&&x% 10) L=M=S=0;
      if(y>=65&&x% 50) L=M=S=0;
      if(y>=70&&x%100) L=M=S=0;
      if(y>=75&&x%500) L=M=S=0;
      x-=390;
      //L+=.615; M+=.615; S+=.615;
      //L+=.5+.05*exp(-(x-55)*(x-55)/5000.); M+=.5; S+=.5-.05*exp(-(x-55)*(x-55)/5000.);
      //L+=.3+.05*exp(-(x-55)*(x-55)/5000.)+.2*exp(-(x-100)*(x-100)/50000.); M+=.3+.2*exp(-(x-100)*(x-100)/50000.); S+=.3-.05*exp(-(x-55)*(x-55)/5000.)+.2*exp(-(x-100)*(x-100)/50000.);
      //L+=.5; M+=.5; S+=.5;
      #define sp(c,i,s) (i*exp((double)(x-c)*(double)(c-x)/(double)(s*s)))
      //g=.01+sp(55,.04,30)+sp(210,.01,60);//sp(45,.09,65)+sp(300,.01,100);//.015;//+.03*exp(-(x-55)*(x-55)/9000.)+.01*exp(-(x-105)*(x-105)/50000.);
      //L+=.009;//+sp(55,.0346/*.07*/,50);//+sp(210,.01,60);
      //M+=.009;//+sp(55,.0346/*.05*/,50);//+sp(210,.01,60);
      //S+=.009;//+sp(55,.0346/*.03*/,50);//+sp(210,.01,60);
      L+=ll;//.035;
      M+=mm;//.022;
      S+=ss;//.022;
      // L+=g; // M+=g; // S+=g;
      //L= M*.95+S*.05 ;  //Pro
      //M=L*.65 +S*.05 ;  //Deu
      //S=L*-.1+M*1;      //Tri (Very hard to get in range... Need to add exactly .028...)
      //S=L*-.77+M*1.86 +.028; //Tri (Very hard to get in range... Need to add exactly .028...)
      // S=L*(-.78+.2*(y/40/40.))+M*(1.75+.2*(y%40/40.))+.028;//L*-.2+M*0.7;//L*-.7+M*1.6 ; //Tri
      // S=L*-.77+M*1.86+.028;//L*-.2+M*0.7;//L*-.7+M*1.6 ; //Tri
      R=L*mat[0][0]+M*mat[0][1]+S*mat[0][2];
      G=L*mat[1][0]+M*mat[1][1]+S*mat[1][2];
      B=L*mat[2][0]+M*mat[2][1]+S*mat[2][2];//R=L;G=M;B=S;
      //R/=.2120267738; G/=.3921458724; B/=.3957273539; //Scaling so LMS(.3127,.329,.3582)==RGB(1,1,1)
      #define dr(x) {x rng[6]=L; rng[7]=M; rng[8]=S;}
      if(R<0||G<0||B<0||R>1||G>1||B>1) { ++error; }
      if(R<rng[0]) rng[0]=R;
      if(G<rng[1]) rng[1]=G;
      if(B<rng[2]) rng[2]=B;
      if(R>rng[3]) dr(rng[3]=R;)
      if(G>rng[4]) rng[4]=G;
      if(B>rng[5]) rng[5]=B;
      if(R<0) R=0; if(G<0) G=0; if(B<0) B=0;
      if(R>1) R=1; if(G>1) G=1; if(B>1) B=1;
      if(R<0||R>1) R=((x^y)&2)/2;
      if(G<0||G>1) G=((x^y)&2)/2;
      if(B<0||B>1) B=((x^y)&2)/2;
      R=pow(R, .4); G=pow(G, .4); B=pow(B, .4);
      rgb[2]=R*255.; rgb[1]=G*255.; rgb[0]=B*255.;
      fwrite(rgb, 3, 1, o);
    }
    fwrite(&zero, 1, 1, o); /*Dumb rounding line length up to 4 bytes...*/
    if(best>error) {best=error;besty=y;}
  }
  fclose(o);
  if(error) { printf("Warning: Out of range.\n"); }
  //else { rr-=.0001; goto restart;}printf("%lf\n", rr+.0001);
  printf("Range:\nRed: %lf to %lf\nGreen: %lf to %lf\nBlue: %lf to %lf\n", rng[0], rng[3], rng[1], rng[4], rng[2], rng[5]);
  printf("LMS: %lf, %lf, %lf\n", rng[6], rng[7], rng[8]);
  //for(x=0;x<441;++x) printf("%d, %ld, %ld, %lf
}
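For reference, a minimal self-contained sketch of the A.B^-1 fit behind the table above: sample the LMS and XYZ curves at 450, 550 and 650 nm, invert the 3x3 XYZ sample matrix, and multiply. Cofactor inversion is used here just to keep the sketch self-contained; a pocket calculator or Octave does the same job, and the numbers are the samples quoted in the table.

#include <stdio.h>

/* Invert a 3x3 matrix by cofactors; returns 0 on singular input. */
static int invert3(const double m[3][3], double inv[3][3])
{
    double det =
        m[0][0]*(m[1][1]*m[2][2] - m[1][2]*m[2][1]) -
        m[0][1]*(m[1][0]*m[2][2] - m[1][2]*m[2][0]) +
        m[0][2]*(m[1][0]*m[2][1] - m[1][1]*m[2][0]);
    if (det == 0.0) return 0;
    inv[0][0] =  (m[1][1]*m[2][2] - m[1][2]*m[2][1]) / det;
    inv[0][1] = -(m[0][1]*m[2][2] - m[0][2]*m[2][1]) / det;
    inv[0][2] =  (m[0][1]*m[1][2] - m[0][2]*m[1][1]) / det;
    inv[1][0] = -(m[1][0]*m[2][2] - m[1][2]*m[2][0]) / det;
    inv[1][1] =  (m[0][0]*m[2][2] - m[0][2]*m[2][0]) / det;
    inv[1][2] = -(m[0][0]*m[1][2] - m[0][2]*m[1][0]) / det;
    inv[2][0] =  (m[1][0]*m[2][1] - m[1][1]*m[2][0]) / det;
    inv[2][1] = -(m[0][0]*m[2][1] - m[0][1]*m[2][0]) / det;
    inv[2][2] =  (m[0][0]*m[1][1] - m[0][1]*m[1][0]) / det;
    return 1;
}

int main(void)
{
    /* Columns: samples at 450, 550, 650 nm (values from the table above). */
    double A[3][3] = {                 /* LMS samples */
        {.0498639, .940198,   .16141},
        {.0870524, .977193,   .015448},
        {.955393,  .00195896, 0},
    };
    double B[3][3] = {                 /* XYZ samples */
        {.370702,  .529826,   .268329},
        {.089456,  .991761,   .107633},
        {1.9948,   .003988,   0},
    };
    double Binv[3][3], M[3][3];
    int r, c, k;
    if (!invert3(B, Binv)) return 1;
    for (r = 0; r < 3; ++r)            /* M = A.B^-1 maps XYZ -> LMS */
        for (c = 0; c < 3; ++c)
            for (M[r][c] = 0.0, k = 0; k < 3; ++k)
                M[r][c] += A[r][k] * Binv[k][c];
    for (r = 0; r < 3; ++r)
        printf("%14.10f %14.10f %14.10f\n", M[r][0], M[r][1], M[r][2]);
    return 0;
}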
How do the following images look now, after reloading the page and images? (Control+Shift+click-on-reload, perhaps.) (The regular spectrum looks better to me now, there seems to be violet at the left end.) Κσυπ Cyp 12:27, 5 May 2004 (UTC)
I've not yet looked at your code, but the uppermost picture looks very realistic, although the yellow/orange band seems a tiny bit too wide - I built myself a toy spectrometer out of a piece of compact disc and cardboard, so I know approximately what it should look like, although it doesn't have a wavelength scale. A few reference wavelengths:
- 632 nm (He-Ne laser) should be bright red, but is shown as orange. Actually the whole red part of the spectrum seems too orange-like.
- 590 nm (sodium street lighting) should be yellow/orange (approximately as it is at 600-610 nm in the current figure), but is now yellow/green.
- 514 nm (argon laser) should be hard green, seems alright here.
- 400 nm looks good.
The eye is most sensitive to the wavelength around 600 nm (transition from red to green). My impressions may be affected by the reddish background color that you used in order to prevent negative RGB values. Didn't it work out with a dark grey background?
The difference between your matrix and mine is that you used the 2-degree cone fundamentals instead of the 10-degree ones. The angle refers to the field of view; the yellow spot (macula) on the retina obviously affects the color perception to some extent. If you use your conversion matrix, then the converted xyz CMFs and the LMS CMFs do of course exactly match at 450, 550, and 650 nm, but at other wavelengths, especially between 440 and 540 nm, they are a bit different. (With CIE1931 instead of CIE1964, it's even worse.) Hence, I recommend using the 10-deg LMS table together with the CIE1964.
Han-Kwang (talk) 13:33, 5 May 2004 (UTC)
If I'm (as a deuteranope) supposed to see the normal and the deuteranopic spectrum the same, this is still *not* the case. The normal one is clearly red on the right while the deuteranopic one isn't. I feel sorry that I'm not able to help you with the math and can only provide a description of what I see :( Also, it might be of interest to you that I have severe problems distinguishing the red links on Wikipedia (normal, not visited) from the green ones (the article doesn't exist). (Just for reference, this isn't a problem in Wikipedia, as I can solve it easily by letting it display question marks instead of green links.) I'm using galeon with the default Wikipedia skin. --hhanke 20:03, 5 May 2004 (UTC)
I've used the 10° data completely now, not using any 2° data. Doesn't seem much different, but who knows... I've re-uploaded the images, again...
Having a grey background requires a rather bright background compared to having a purple background. The orange on the attempted deuteranopic image might be very slightly brighter now, but I'm not sure.
← This one still has gamma 2.5. Κσυπ Cyp 23:09, 5 May 2004 (UTC)
This image should have 'pro', 'deu' and 'tri' written on it, each equally easy to see (and in an equally hard-to-read font). Κσυπ Cyp 23:15, 5 May 2004 (UTC)
Deu is still visible to me. I liked the images with non-connected surfaces much better. You could try to convert it to that style. --hhanke 16:09, 6 May 2004 (UTC)
Pro is not visible to me (protanope). --Chinasaur 04:27, 30 Sep 2004 (UTC)
My monitor has gamma 1.5, not 2.5. What gamma do other people have? (See image at gamma correction to check.) I changed the gamma of the images to 1.5 and reuploaded them. Κσυπ Cyp 22:50, 16 May 2004 (UTC)
← Gamma 1.65, supposed to test for protanopia...
← Gamma 1.65, supposed to test for deuteranopia...
← Gamma 1.65, supposed to test for tritanopia...
← Gamma 1.65, supposed to test for complete blindness...
- I want 1.92 γ for my Powerbook G4. lysdexia 14:48, 22 Nov 2004 (UTC)
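For reference, a minimal sketch of the gamma re-encoding being discussed, assuming the simple model used throughout this thread (displayed intensity = pixel value raised to the display gamma); the function name is hypothetical:

#include <math.h>

/* Re-encode a pixel value v (0..1) prepared for a display with gamma g_old so
   that it produces the same intensity on a display with gamma g_new:
   v^g_old = v_new^g_new  =>  v_new = v^(g_old/g_new). */
double regamma(double v, double g_old, double g_new)
{
    return pow(v, g_old / g_new);
}

E.g. regamma(v, 2.5, 1.5) would re-encode an image generated for gamma 2.5 for a gamma-1.5 display.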
Great! I don't see anything on the deuteranopic image (and I'm supposed to be deuteranopic)! I think this is the best test for deuteranopia of the ones offered here. I can see the other two, but those are for other illnesses. So if the second image is clearly visible to you but invisible to me, great! Could you additionally make a reference image, in colours distinguishable by all (by all I mean even by monochromats), for example blue-yellow, in the same style, to accompany these real tests? --hhanke 07:24, 19 May 2004 (UTC)
I am protanopic and cannot see the first number at all. The second number is difficult to make out, but I got it eventually. Feel free to post to my talk page if you have any more protanope testing needed. The spectrum is also working for protanope me. Not sure what gamma I have set. --Chinasaur 04:23, 30 Sep 2004 (UTC)
The three (now four) images appear approximately equally clear to me. Although I can't guarantee that the new one can be seen by people with only one colour cone, it is intended to be seen by everyone. (At least, by everyone who isn't actually blind, not just colour-blind.)
For reference, the numbers are supposed to be , , and . (Click 'edit this page' and read backwards to read the numbers.) Κσυπ Cyp 09:00, 19 May 2004 (UTC)
Note to self, for future creation of these images:
Corel Photo-Paint 9
Solid fill, 50% black
Effects/Noise/Add Noise: Gaussian, Level=100, Density=100, Color mode=Random
Effects/Blur/Gaussian blur: Radius = I think about 2 pixels...
Draw something on a colour channel: Paint mode=add, Nib=Round-30px, Soft edge=100, Anti-aliasing=on, smoothing=10, Dab spacing=25 (ratio for #83: 43/70/120)
Draw on mask: Texture fill: Vapor 2C, #=11257, Softness=50, Density=0, Brightness=0, Background=black, Vapor=white
Invert mask
Solid fill, 30% black
Save, use program
--- Good, but the last image (the one that should be easy for everyone) is harder for me than, for example, the protanopic one. Maybe you could make it even easier. This one doesn't test for anything, so it doesn't have to be too close to the other ones, in my opinion. It's just for reference, so that everyone knows he's looking for some number and that it's displayed in such and such a way. This is at least how these reference images in the tests usually are: very clear, just to show what I'm looking for. But anyway, great work.
Just one more hint. Please forgive me for bitching about the very good work that you are doing here when I'm not able to help you. I think it would be better to use simple geometric figures (like a star, a square, a triangle, a circle) instead of numbers. The reason is that you could then test even small children who don't know the alphabet and the numerals yet. I've seen this point somewhere as a critique of the (still widely accepted) Ishihara tests.
I had the idea that maybe I could ask our doctor to test these images on other people whom she diagnoses with colorblindness based on some other (official) tests. It doesn't require much from her, just to show one or two more pictures. I think she might understand our good intentions. This way, we could get feedback (hopefully approval) from more people. I have yet to figure out how to print the images with the correct gamma factor, though. --hhanke 15:22, 19 May 2004 (UTC)
Red-Yellow-Green
If yellow is a mixture of pure red and pure green, but red-green color blindness makes both red and green look gray, does red-yellow-green then become gray-yellow-gray, or gray-gray-gray? 66.245.6.12 00:03, 25 Aug 2004 (UTC)
- There are different forms of red-green color blindness (see the article), but essentially any red-green deficiency will also cause trouble in distinguishing yellows from greens and reds. However, what happens to make these colors mixed up is not as simple as just turning them all gray. Normal trichromatic vision is based on opponent processes (try googling it): basically any physical light spectrum input is degraded (in normal vision) into two simple color dimensions: "how red versus green is it?", and "how blue versus anything else is it?". So perhaps in red-green color blindness you simply lose the "red versus green" dimension, and now all the complicated light spectrum input is being degraded into only one color dimension: the "blue versus anything else".
- This might be true, but cannot be a complete description of the deficiency, because there is further loss of information even before the opponent process level (e.g. someone lacking the red cone will mix up blue and purple, although this would be detected with only a "blue versus everything else" opponent process). The best way to understand the pre-opponent-process deficiencies is from the perspective of photoreceptors and copunctal points (try googling it): if the red cone is missing, then addition of light spectra that would normally stimulate that cone will not be noticed. So if you add some red light to some blue light, making purple for normal vision, someone missing the red cone sees only blue. --Chinasaur
- (This discussion has focused on color processing, so we have ignored the third dimension of information, the brightness of the light, but this does not affect perceived color very much.)
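For what it's worth, a toy C sketch of the opponent-process reduction described above, using one common textbook weighting (L-M for the red/green channel, S minus the L/M average for the blue/yellow channel); the exact weights differ between models and are only an assumption here:

/* Collapse LMS cone signals into two opponent channels plus a brightness
   term, in the spirit of the description above.  Weights are illustrative only. */
struct opponent { double red_green; double blue_yellow; double luminance; };

struct opponent lms_to_opponent(double L, double M, double S)
{
    struct opponent o;
    o.red_green   = L - M;               /* the dimension lost in red-green deficiency */
    o.blue_yellow = S - 0.5 * (L + M);   /* "blue versus everything else" */
    o.luminance   = L + M;               /* brightness, the third dimension mentioned above */
    return o;
}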
Explaining dichromat perception to the layman
Here is a summary of the points of agreement coming out of this dispute, which basically boiled down to: "how should we best explain the experience of a dichromat or other color-deficient person to a layman trichromat?". For simplicity all specifics in this summary are stated in terms of protanopia. Many of these points have been made before, but at least this is shorter than trying to read the whole argument...:
- To a layman, the easiest way to explain a dichromat's experience is in terms of what colors will be confused: "Purple appears the same as blue", "green appears the same as yellow", etc. These are important points to make to avoid common misconceptions such as that a red-green colorblind person simply sees gray in the place of red and green. There are additionally two more complicated ways to address this:
- Getting into what a dichromat actually sees, e.g. "only in white, black, blue, and yellow". This question is a natural extension of describing what a dichromat confuses, and is interesting to many laymen. However, what a dichromat actually sees cannot be determined with certainty, and the existing data are not easy to explain (especially not compared to the simplicity of the areas of confusion data). Much of the explanation of the higher perception of color to a dichromat is based on opponent process models. We can tell that the opponent process argument has merit because it agrees to some extent with personal reports from unilaterally color blind individuals. But it is more complicated than the standard blue-yellow, green-red opponency model would suggest, and we can not be fundamentally certain that unilateral color blind accounts bear on dichromatic experience ([2]). This is worth getting into once the initial simple description of what is confused is out of the way.
- One point is that around the neutral point, there are monochromatic stimuli that are colored in trichromats but appear gray to dichromats. However, as noted above, this does not mean that red-green color blind individuals see only gray in place of red or green. A better analysis is that dichromats replace much of the hue discrimination of trichromats with color saturation gradations.
- Getting into the details of color blindness at the photoreceptor level: metamerism, copunctal points, just noticeable differences. This is the nitty-gritty of exactly what will be confused. Photoreceptors determine what input goes into opponent processes and higher perception. Much of it can be covered in other articles and linked, but it should be covered.
- We should discourage (as is already done) the misleading association of color names with wavelengths, spectra, or even particular human cones. We must be careful our language is rigorous when writing in the formal article. There are three levels of distinction important in the discussion of color transduction:
- The physical level, in which we talk about frequencies, wavelengths, spectra, etc.
- The high perceptual level, in which we use color names. At this level we can say that yellow has nothing to do with red and green; it has no redness or greenness to it. It makes no sense on a perceptual level to describe yellow as greenish-red.
- The photoreceptor level. This must be distinct from the physical level, since information has made a huge shift from a myriad of physical light spectra into simple tristimulus values. However, it must also be distinct from the high perceptual level, in which tristimulus values are further transformed into opponent process information. The best language for this level is LMS cones, not the traditional language of RGB cones. RGB cone terms are too easily confused with the names of actual colors, which exist only at a higher perceptual level.
- This is the source of many linguistic ambiguities, such as "red and green make yellow". An accurate, rigorous rewording is that in trichromatic primates (at least humans and macaques as far as I know), a particular ratio of L and M cone stimulation (without "significant" S cone stimulation) will lead to a perception of yellow through opponent processes. However, the popular idea that "red and green make yellow" probably stems from light mixing demonstrations, in which combining any light spectrum that humans perceive as red with any light spectrum that humans perceive as green will result in a light spectrum that is perceived as some shade of yellow (possibly beige or brown depending on the saturation/brightness). (This is a direct result of the LM cone argument.)
Summary modified/created by:
- Hear, hear. Especially point 2. There are essentially no popular explanations of 'color' that keep these things straight like they ought to, and it makes a mess of things (as evidenced even by some of the questions that get asked on this very page). Let's make Wikipedia the first explanation of color not to suck. eritain 05:48, 16 February 2006 (UTC)
- It's red and green, not blood and hunter. lysdexia 14:48, 22 Nov 2004 (UTC)
Hm...
I am not color-blind (always passed the tests OK) but I have some difficulty seeing clearly the second digit of Gamma 1.65, supposed to test for deuteranopia. The shape seems to be rather irregular...
Question
A person with a red/green defect will confuse green for which color? White, Red, Blue, or Yellow
And what will they confuse red for?
- It depends on what type of red/green deficiency is present. The confusion lines on the plots here: [3] indicate which colors tend to be confused by protanopes and which by deuteranopes. All colors along the lines will tend to look the same hue to the color blind person. If you are not familiar with the horseshoe shaped color space, see International Commission on Illumination; the image there should help you figure out which colors the lines are running over. --Chinasaur
Image subtitles
The images on the article's page should be subtitled what number a normal-seeing person sees, and what color-deficient persons see. --Abdull 11:00, 23 Mar 2005 (UTC)
I disagree. The test would be meaningless if everyone knew what they should see on it. It's much easier to recognize (or make yourself think you recognize) a number if you know it in advance. The principle is explained well (even giving the numerical value 83) in the first sample image. --hhanke 19:07, 24 Mar 2005 (UTC)
Moreover, if you click on the images, you get the number a normal-seeing person sees. --Celsius
Either my monitor is pretty bad or the images aren't really great.
OK, I'm not colorblind, as I've had at least a couple of different tests online and offline, but the article, especially the 3rd image, is dubious. Now it could be my monitor, but the fact is I can make out that the first number is 4, but the second number is where the confusion steps in, as it looks like the numbers 1, 4, or 7 and not 9. This is more to do with fuzziness than with color blindness. I suggest that this image be improved, as a few others I know who have normal vision have said the same thing when I gave them the "test" on my PC. Other online tests are 100% accurate, and they include the Ishihara color blindness test. --Idleguy 14:16, Jun 18, 2005 (UTC)
- That one's iffy. If you got that first 4, and can see the second digit at all, you're fine. The second digit is definitely not clear. If someone has access to a different image, great, otherwise, I think we just need to deal with it. Maybe add a note about that one being fuzzy in the caption. -- Dpark 04:42, 23 Jun 2005 (UTC)
- Blah, I went ahead and just added the note myself. If we get a newer image, we can always remove the note. -- Dpark 04:48, 23 Jun 2005 (UTC)
I agree with this. I'm not colorblind and never have any trouble seeing the numbers at the optometrist's, but I could barely make out the numbers in these images (except 37). It took me a while to realize the last one was 56 (right?). Colorblind people do not need such ridiculously undersaturated colors to not see them.
Green yellow and greenish yellow
"their peak sensitivities in the blue-violet, green-yellow, and greenish-yellow regions of the spectrum, respectively"
This excerpt from "Causes of color blindness" doesn't make any sense. Green-yellow and greenish-yellow? I am not very knowledgeable about this subject, but I hope that someone else can fix this. Apatterno 04:46, 7 August 2005 (UTC)
- yes, that was pretty good nonsense. Someone trying to correct a phrasing they hadn't read properly, and then being pedantic about the colours. I've fixed it for now, but I don't guarantee it won't get messed up again; this is a subject on which schoolchildren (and not only schoolchildren) are taught with great confidence entirely bogus information, so it is susceptible to mis-editing by people who believe in good faith they know what is going on, but don't. seglea 17:39, 8 August 2005 (UTC)
Monochromacy rate of Pohnpei
This article originally stated that roughly 1/12 of the population of Pohnpei suffers from achromatopsia; in actuality, it is the island of Pingelap, which is located 180 miles from Pohnpei and is a part of the Pohnpei State, that has this rate. --Marco Passarani 05:14, 21 August 2005 (UTC)