Color Blind Simulator
I was driving, and as always happens when I tell someone I am color blind, they immediately started testing me on everything around us. I always have to explain it in different ways and use my surroundings to come up with the fastest explanation. So I thought: what if I could take an image and put a filter over it to show what it would look like to me? It would save time, and I could keep a few pre-thought-out examples ready. It is a little tough to pick something on the spot that fully explains it.
I included two documents. They are what I found when researching a little more after I had finished. I also found out that tools to do what I was trying to accomplish already exist: https://colororacle.org/links.html. Personally, being able to showcase the software and verify its accuracy from a color-blind perspective was the key point that really made it worthwhile.
The first and most important decision: what programming language? I had already built a few things messing with images, so Python was the obvious choice for me. I wrote a little bit of code to read in the file, and it ran smoothly as expected.
I added a little extra code with a try/except for the expected file errors, and that was the starting code. Now, moving on to the project: how was I going to translate the image over?
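The original starting code isn't shown here, but a minimal sketch of that kind of loader, assuming Pillow as the imaging library (the file name is just a placeholder), might look like this:

```python
from PIL import Image, UnidentifiedImageError

def load_image(path):
    """Open an image and return it as an RGB Pillow Image, or None on failure."""
    try:
        img = Image.open(path)
        return img.convert("RGB")  # normalize to 3-channel RGB
    except FileNotFoundError:
        print(f"File not found: {path}")
    except UnidentifiedImageError:
        print(f"Not a readable image: {path}")
    return None

img = load_image("example.jpg")  # hypothetical file name
if img is not None:
    print(img.size)
```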
First I thought of a mask, but creating a green mask made no sense, and how the heck would I even create one? Then I thought of mapping, but let's do the math.
255^3 = 16,581,375, which is a huge table to map. Also, how would I even calculate the mapping, when the internet provides nothing but theories? Then all of a sudden, "I done did it": I found an algorithm on a website. Let me just say that when I tried it, a huge step was taken backward. The images came out all blue. Like, entirely blue, and that was no good.
I tweaked it, but in the end the algorithm just stripped out the blue, and that was not accurate.
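I don't know exactly which algorithm that website described, but the common published approach is a 3×3 matrix applied to each pixel's RGB values. A rough sketch of that idea, using a protanopia matrix quoted from the Machado et al. (2009) simulation set (treat the exact coefficients as something to verify against the original paper; the file names are placeholders):

```python
from PIL import Image

# Protanopia simulation matrix (full severity), quoted from the
# Machado et al. (2009) set -- verify the coefficients before relying on them.
PROTANOPIA = [
    [ 0.152286,  1.052583, -0.204868],
    [ 0.114503,  0.786281,  0.099216],
    [-0.003882, -0.048116,  1.051998],
]

def simulate(img, matrix):
    """Apply a 3x3 color-deficiency matrix to every pixel of an RGB image."""
    out = img.copy()
    pixels = out.load()
    for y in range(out.height):
        for x in range(out.width):
            r, g, b = pixels[x, y]
            new = []
            for row in matrix:
                v = row[0] * r + row[1] * g + row[2] * b
                new.append(max(0, min(255, int(round(v)))))
            pixels[x, y] = tuple(new)
    return out

simulate(Image.open("example.jpg").convert("RGB"), PROTANOPIA).save("protan.png")
```

Strictly speaking, matrices like these are defined for linear RGB, so applying them directly to sRGB pixel values is only an approximation, which may be one reason a naive attempt produces colors that look obviously wrong.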
After a lot more research I found a table. I downloaded it and cleaned it up, but a huge issue with the data came up very quickly.
If you look at the size, it is easily noticeable that not all the colors are present. I eventually found a different table and it was much better, but with more data comes another issue: time. Mapping all 16.6M colors to 3 different deficiency types would mean a long load time, and even if I got it all into a table, the amount of space it would take would be insane. The best option seemed to be converting the values to hex, keying on that, and hanging a linked list off each entry. So now we have:
9 one-digit values + 90 two-digit values + 155 three-digit values => 654 digits per channel, times 3 color types => 1,962 combinations for each entry... Yeah, let me just stop there. Too big is the point. So I asked: what is the smallest sample I can take with the least loss in the result?
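To make the "too big" point concrete, here is a tiny back-of-the-envelope script (my own arithmetic restated, not code from the project):

```python
# Full color space the table would have to cover (as counted above)
full_colors = 255 ** 3
print(full_colors)              # 16,581,375 source colors

# Each entry stores one remapped color per deficiency type,
# e.g. a 6-character hex string like "ff8800".
types = 3
hex_chars_per_entry = 6
total_chars = full_colors * types * hex_chars_per_entry
print(total_chars / 1_000_000)  # roughly 300 million characters of hex
```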
The first constraint: the step size has to be a factor of 255, which would be
1, 3, 5, 15, 17, 51, 85, 255
I can automatically knock out the first and last: a step of 1 makes no difference to the table size, and 255 would be a 100% loss. 51 still loses a lot of pixel detail, so I knocked that one out along with 85, leaving 3, 5, 15, and 17. This could be an issue. I tested 15, since 15 × 17 = 255, and if 15 looked good, 17 would be pushing it.
From that result, I would say 15 wouldn't work.
To choose between 3 and 5, I had to go to people with normal color vision and test how much the difference in step size actually mattered. The verdict was that 3 looked better, but speed-wise 5 was worth it, since only 1 out of 10 people could see a difference (I tested 6 girls and 4 guys for a reasonable spread). I determined this by showing 10 sets of 3 photos each: in 2 sets, 2 photos were original and 1 used the 5 step; in 2 sets, 2 photos were original and 1 used the 3 step; in 2 sets, 1 photo was original and 2 used the 3 step; in 2 sets, 1 photo was original and 2 used the 5 step; and in 2 sets, 2 photos used the 3 step and 1 used the 5 step. Math-wise: 255 / 3 = 85, and 85^3 => 614,125; 255 / 5 = 51, and 51^3 => 132,651.
So with all those numbers, I went with the smaller table and BAM, my new table size.
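A minimal sketch of the quantization idea, assuming a step of 5 per channel (the function and variable names are my own, not from the project):

```python
STEP = 5  # chosen channel step size; 255 / 5 = 51 steps per channel

def quantize(value, step=STEP):
    """Snap a 0-255 channel value down onto a multiple of the step."""
    return (value // step) * step

def key_for(r, g, b):
    """Hex key for a quantized color, used to index the lookup table."""
    return f"{quantize(r):02x}{quantize(g):02x}{quantize(b):02x}"

# 51^3 = 132,651 distinct keys as counted above
# (52^3 if you also count the extra bucket at 255).
print(key_for(123, 200, 37))  # -> '78c823'
```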
At first, I was naming them "types" instead of "type", which caused a few issues. Other than that, it was an easy mapping and an easy write to a file.
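I don't have the original table-building code, but a rough sketch of the mapping-and-write step, assuming one CSV file per deficiency type and a hypothetical remap(r, g, b, kind) function that produces the simulated color, could look like this:

```python
import csv

TYPES = ["protanopia", "deuteranopia", "tritanopia"]
STEP = 5

def remap(r, g, b, kind):
    """Placeholder for the actual color-deficiency remapping."""
    return r, g, b  # identity stand-in; the real project maps to simulated colors

for kind in TYPES:
    with open(f"table_{kind}.csv", "w", newline="") as f:
        writer = csv.writer(f)
        for r in range(0, 256, STEP):
            for g in range(0, 256, STEP):
                for b in range(0, 256, STEP):
                    writer.writerow([f"{r:02x}{g:02x}{b:02x}", *remap(r, g, b, kind)])
```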
Then I ran into the issue that it was very slow. If I wanted to do it in real time, that was nowhere near possible, as it was now taking 10 seconds or so for one 2 MB iPhone photo. I tried to use NumPy... that failed, so I just kept it at that speed and said FUCK IT.
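I'm not sure where my NumPy attempt went wrong, but for the record, a vectorized version of the per-pixel lookup usually looks something like this (the quantization step, the palette layout, and the file names are assumptions, not the project's actual code):

```python
import numpy as np
from PIL import Image

STEP = 5

def apply_table(img, palette):
    """Remap an image through a palette indexed by quantized (r, g, b).

    palette: uint8 array of shape (52, 52, 52, 3) holding the simulated
    color for each quantized source color (an assumed layout).
    """
    arr = np.asarray(img.convert("RGB"))
    idx = arr // STEP                      # quantize every channel at once
    out = palette[idx[..., 0], idx[..., 1], idx[..., 2]]
    return Image.fromarray(out.astype(np.uint8))

# a palette that just returns the quantized color, to demonstrate the indexing
levels = np.arange(0, 256, STEP, dtype=np.uint8)
palette = np.stack(np.meshgrid(levels, levels, levels, indexing="ij"), axis=-1)
apply_table(Image.open("example.jpg"), palette).save("out.png")
```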
We have advanced so far in technology these days that there are now built-in features to help people like me. Video games often have a color-blind mode that makes the details easier to process... I know that on the iPhone, under Accessibility, there is a feature that shifts all the colors to make them stand out. Everything looked so different that I was skeptical, but then I took a color blindness test with it on and it was so easy. So I know it works.