The Argus

Happy Monday, dear reader of our blog!

And what better way to start the week than with some Greek mythology? Have you heard of the Gigantes? They were Greek mythological creatures, closely related to the gods, with superhuman size and strength. For example, there was Polyphemos, the shepherd cyclops whose eye was poked out by Odysseus. There was Orion, the hunter who could walk on water, and whom you can still see as a constellation in the sky. And there was Argus.
Argus slew many evildoers in his lifetime: snakes, bulls and satyrs alike. What he was most renowned for, however, was that he had a hundred eyes covering his whole body, which were never all asleep at the same time. This made him an excellent guardian, which is the job Hera, wife of Zeus, employed him for when she found a cow she suspected was Io, one of her husband's lovers, whom Zeus had transformed to keep her safe from Hera's wrath. She was right, but she could not exact her wrath on Io after all: Zeus sent Hermes to free his mistress, slaying poor Argus in the process.

Argus being slain by Hermes

But what does all of this have to do with the theme of this blog? Little, I admit, except for the giants' connection to sight, which led the company Second Sight to name a device after him.

The Argus

The Argus is a device targeted at people who have lost their vision due to damaged photoreceptors in the retina, which can be caused by the disease retinitis pigmentosa. It consists of a retinal implant, a pair of glasses that holds a camera, and an external video processing unit. The glasses capture video, which the processing unit converts into instructions that are sent to the implant. These instructions are meant to replace (at least to a certain degree) the signals that would normally come from the photoreceptors. The device effectively bypasses the damaged photoreceptors and sends signals directly to the brain.
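Second Sight's actual signal processing is proprietary, but the core idea – turning a camera frame into a coarse grid of stimulation levels, one per electrode – can be sketched in a few lines. The grid size and number of levels below are made-up illustrations, not the real specifications:

```python
import numpy as np

def frame_to_stimulation(frame, grid_shape=(6, 10), levels=16):
    """Downsample a grayscale camera frame to a coarse electrode grid.

    Each electrode gets a stimulation level proportional to the average
    brightness of its patch of the image. Grid shape and level count are
    illustrative assumptions, not the device's actual parameters.
    """
    h, w = frame.shape
    gh, gw = grid_shape
    # Crop so the frame divides evenly into electrode patches.
    frame = frame[:h - h % gh, :w - w % gw]
    patches = frame.reshape(gh, frame.shape[0] // gh, gw, frame.shape[1] // gw)
    brightness = patches.mean(axis=(1, 3))          # one value per electrode
    return np.round(brightness / 255 * (levels - 1)).astype(int)

# Fake frame: a left-to-right brightness gradient.
frame = np.tile(np.linspace(0, 255, 120), (60, 1))
stim = frame_to_stimulation(frame)   # 6x10 grid of stimulation levels
```

Anything finer-grained (edge enhancement, contrast stretching, per-patient calibration) would happen in the external processing unit before the levels reach the implant.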

The retinal implant, which bypasses the photo-receptors


Now, sadly, this does not mean that the user regains all of their vision, but they will at least be able to see some shapes and outlines. This is where the fact that people with this implant were once able to see fully (in the case of retinitis pigmentosa, at least) comes in handy: they can use their memory of what things looked like to form complete images from the limited information they get through the device. And regardless, it is an improvement over being almost, or entirely, blind.

You could see this more as a "restoring vision" device than a "replacing vision" one, but I found that this kind of tool deserves some highlighting as well. That does bring us to an interesting question, however. If you could choose what to invest your money in, would you rather invest in a solution that cures blindness – one that restores vision to people completely – or in one that circumvents blindness – essentially changing the world to be more adaptive towards blind people? The first would remove the problem completely, but keep in mind that there are many different causes of blindness, and equally many solutions that would have to be found. The second would help people accept the disability, but would ultimately not solve the problem at its roots. What are your thoughts? I'm curious.



Brainport V100

A wonderful day to you, dear reader!


You come to this page, see a fancy science-fiction-looking title, and naturally wonder what on earth it could be. Well, it is one of the sensory substitutions we haven't discussed yet: it replaces vision by sensing with your tongue. That is not quite the same as replacing it by taste, but I'll explain that later. We've had multiple applications that substitute vision by touch (albeit most often on the braille side), and we've had some that substitute it by hearing (we're even writing our own thesis about it), but this one might seem a little less likely to give you the same results.

And partially, that is correct. The Brainport V100 device is meant as a vision “assistant”: it helps a visually impaired person gain some information about shapes of items in front of them, and helps with their orientation, but it does not have the accuracy that some of the technologies previously discussed on this blog have.

The Brainport V100

How does it work? Not unlike our own thesis, the device makes use of a camera mounted on a pair of goggles. The data the camera reads in is processed, and the shapes of the recorded objects are sent to the output. And the output is what really makes this special: a 3×3 cm array of 400 electrodes that are individually pulsed according to the recorded image. The array is connected to the goggles via a wire and is meant to be placed on the tongue of the user. They can – though not without practice – learn to recognize the signals and use them to gain information about their surroundings. It's no major inconvenience either: the signal is perceived as a fizzy feeling on your tongue, and, as user feedback suggests, can be quite pleasant.
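To give an idea of how an image could drive such an array, here is a minimal sketch that pulses electrodes where the image contains edges, so the user would feel outlines of objects rather than raw brightness. The edge detector, the threshold and the 20×20 grid mapping are my own illustrative choices, not the device's actual processing:

```python
import numpy as np

def image_to_tongue_array(img, grid=(20, 20), threshold=10):
    """Map a grayscale image to on/off pulses on a 20x20 electrode array.

    Electrodes pulse where the image has edges, giving a tactile outline.
    All parameters here are made up for illustration.
    """
    # Gradient magnitude via simple finite differences.
    gy, gx = np.gradient(img.astype(float))
    edges = np.hypot(gx, gy)
    # Downsample the edge map onto the electrode grid by block-averaging.
    h, w = edges.shape
    gh, gw = grid
    edges = edges[:h - h % gh, :w - w % gw]
    blocks = edges.reshape(gh, edges.shape[0] // gh, gw, edges.shape[1] // gw)
    strength = blocks.mean(axis=(1, 3))
    return strength > threshold   # True = electrode pulses

# A white square on a black background: only its outline should pulse.
img = np.zeros((200, 200))
img[60:140, 60:140] = 255
pulses = image_to_tongue_array(img)
```

With the square image above, the electrodes along the square's border fire while its interior and the background stay silent, which matches the "feel the shape" behavior the device aims for.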

The electrode array

As is by definition the case with sensory substitution, gaining vision is traded off against the (temporary) loss of another sense, in a certain location. Indeed, it will be difficult at best to taste anything while using the device, and, perhaps more importantly, speech and (substituted) vision are also mutually exclusive in this solution. This is of course a trade-off the users themselves have to make, and as discussed before, just having the option to use either one is a big step forward already. The device itself also makes it easy to switch between using it and not, which is an advantage in that regard.

And speaking of options, the choice of which assistive device to use is also a decision users can make for themselves, depending on their own personal experience. And there are several quite different options available. Just scroll through our previous posts :).




More info? Go to the producer's site, read some articles about it in the media, or even have a look at the device's manual.


Hey guys! Welcome to another blog post!


Apart from all the technological tools discussed here before, braille has long been the main way of reading for blind people, and it remains very important today. That's why the main focus this time around is a simple but cleverly thought-out tool to teach braille to young children. Why would you need an extra tool marketed especially towards children? Well, the problem lies in the fact that, apart from learning a language, a child still has to learn about the world. As a comparison: most non-visually-impaired readers probably had, as toddlers, some books with pictures of animals and objects, with their names spelled out underneath.

Learning to read

Fittle, a concept that sprouted from an MIT workshop, tries to do just that, but replaces the medium of vision with the medium of touch. And this goes not only for the letters, but also for the pictures themselves. It is an interactive learning tool that consists of sets of blocks in particular shapes that fit together. If you fit all the blocks of a set together, the result is a block shaped like an animal, tool or object, much like the pictures in children's books. On each block, a braille symbol indicates a letter of the name of the object you're building. After building, you'll be able to read out the entire word.

Fittle blocks portraying the word “Fish”

This makes for a fun and interactive way for children to learn how to read, and it gives a much more (literally) tangible idea of what the object you just read about is. Fittle has even made the models of the blocks available for download on their site, so everyone can use a 3D printer to create them themselves. If you have a printer in the vicinity (like Fablab, for my fellow students at KULeuven), you can head over there right now and create these for very little money. They ran an Indiegogo fundraiser a little while ago that raised only a fraction of what they aimed for, but the project is still being worked on.

This is one of those rare tools that is beautiful in its simplicity, yet extremely powerful and empowering. It is unfortunate that I didn't learn of it sooner; I would definitely have backed it.


Navigation tool for blind people using ultrasound sensors

Welcome back!

In this post, I'd like to take a look at a tool that hits a little closer to home – that is, one that is fairly closely related to our thesis and at least partially follows the general idea we have.

In 2007, Mounir Bousbia-Salah and Mohamed Fezari of the University of Annaba developed a tool to help blind people navigate. Their idea was to build a simple, non-intrusive way to get from one place to another, while continuously checking for obstacles around the user. To do this, they have a computer voice tell you which ways you can go at intersections, or, as they call them, "decision points". This points the user in the right direction and leaves them alone until the next decision point. The user can decide for themselves where to go, and the system keeps track of this. In further research, this is supposed to happen using GPS, but that has not been implemented yet. Right now, the distance the person walks is measured in a rather complicated way, through accelerometers and a "footswitch" that detects when the user starts a step.
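As a toy illustration of the decision-point idea, the sketch below turns a stream of detected steps into an estimated distance walked and fires a prompt each time a decision point is passed. The stride length and the decision-point distances are invented numbers, and real step detection (the footswitch plus accelerometers of the paper) is abstracted away to a stream of events:

```python
DECISION_POINTS_M = [25.0, 60.0, 110.0]   # made-up distances along a route

def announcements(step_stream, stride_m=0.7, points=DECISION_POINTS_M):
    """Yield a spoken-style prompt whenever the estimated walked distance
    passes a decision point. Each item in step_stream is one detected
    step; distance is crudely estimated as steps * stride length."""
    walked = 0.0
    remaining = list(points)
    for _ in step_stream:
        walked += stride_m
        while remaining and walked >= remaining[0]:
            yield f"Decision point at {remaining.pop(0):.0f} m: choose a direction"

# 100 steps at 0.7 m each is about 70 m, so the first two
# decision points (25 m and 60 m) are announced.
prompts = list(announcements(range(100)))
```

The appeal of this structure is exactly what the authors describe: between prompts the generator yields nothing at all, so the user is left alone.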

The second part of their work, the one that most closely resembles our thesis, is the obstacle detection. To achieve this, they use ultrasound sensors connected to vibrating elements placed on the shoulders of the user. The sensors detect the closest obstacle, and a vibration is generated accordingly. They have also implemented this solution in the walking cane, where obstacles are detected in the same way. This amounts to an extension, albeit not a physical one, of the cane.
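The ultrasound-to-vibration mapping itself is simple enough to sketch: a ranger measures the round-trip time of a pulse, and the vibration gets stronger as the obstacle gets closer. The linear mapping and the 3 m cutoff below are illustrative guesses, not the authors' calibration:

```python
SPEED_OF_SOUND = 343.0  # m/s in air at about 20 degrees C

def echo_to_distance(echo_time_s):
    """An ultrasound ranger measures the round-trip time of a pulse;
    the obstacle distance is half the path travelled."""
    return SPEED_OF_SOUND * echo_time_s / 2

def vibration_level(distance_m, max_range_m=3.0):
    """Map distance to a 0.0-1.0 vibration intensity: the closer the
    obstacle, the stronger the vibration. The linear ramp and 3 m
    cutoff are made-up example values."""
    if distance_m >= max_range_m:
        return 0.0
    return 1.0 - distance_m / max_range_m

d = echo_to_distance(0.01)    # a 10 ms echo is roughly 1.7 m away
level = vibration_level(d)    # roughly a mid-strength vibration
```

With two sensors and two shoulder-mounted vibrators, running this mapping per side would already reproduce the basic left/right obstacle cueing the paper describes.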

The idea of their solution embodies, to me, what an ideal solution would look like. It tries to be as unintrusive as possible, only notifying the user when absolutely necessary (for the positional navigation), or using a sense that you generally are not using while walking (for the obstacle avoidance). All the while, it does not require a huge amount of extra hardware (at least not in the solution they eventually want to create), which makes adaptation a bit easier for the user. Another element that adds to that is that they extend the use of a very common tool for blind people: the white cane.

Of course, there are still flaws, in the sense that a decision still has to be made about what should count as a "decision point", since this significantly affects the flexibility for the user. Also, even if the GPS solution were implemented, the device would only be useful in the Americas, since the signal on other continents is much less accurate and might cause trouble.

As always, remarks and comments are highly encouraged, especially since this is a subject close to our own work, and any criticism could lead us to change our view on aspects of our thesis!



Tactile surfaces

Hello my dear reading audience!

We’ve written a post about a braille smartphone before, but there are other types of tactile surfaces, not specifically targeted towards visually impaired people, that follow the same reasoning: being able to interact with a device without having to see the screen. The one I’d like to talk about today comes from a rather uncommon source: the Disney research labs.

Among many projects, they have developed what they call the “Tesla Touch” surface. The goal is a tangible touchscreen – not in the sense of having actual buttons pop out of it, as other projects have explored before, but in the sense of being able to feel different types of surfaces depending on what is on the screen. The technology is based on an oscillating electric field in the screen, which changes how much resistance you feel when you slide your finger across it. The technical details are written down in their 2010 research paper on the topic.

The more obvious way to use this type of technology for blind people would be to let them sense when they hover over icons, receiving a special type of sensation depending on what the icon is. It is not accurate enough to display braille, since you can only create a single electric field across the entire surface, depending on where touch is registered (which also means there is a single resistive signal for all fingers you use). Still, it can give some sense of feedback, through touch, of the visual contents of the screen. Couple this with a screen reader, and the device gains a lot of extra functionality for the visually impaired.
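Because the whole surface carries one field, the driver logic essentially reduces to "look up what is under the current touch point and set one friction level for everything". A toy sketch of that lookup, with made-up icon positions and texture levels:

```python
# Hypothetical screen layout: each icon has a bounding box
# (x0, y0, x1, y1) and a distinct texture level, i.e. how
# "rough" the surface should feel over that icon.
ICONS = {
    "phone":    ((0, 0, 100, 100), 0.9),
    "mail":     ((120, 0, 220, 100), 0.5),
    "settings": ((0, 120, 100, 220), 0.2),
}

def friction_for_touch(x, y, icons=ICONS, background=0.0):
    """Return the single friction level for the whole surface.

    Because Tesla Touch drives one electric field across the entire
    screen, only the position of the registered touch can be used:
    every finger feels the same resistance. The icon names, boxes and
    levels are invented examples, not a real API."""
    for name, ((x0, y0, x1, y1), level) in icons.items():
        if x0 <= x <= x1 and y0 <= y <= y1:
            return level
    return background

level = friction_for_touch(50, 50)   # finger over the "phone" icon
```

This single-lookup limitation is precisely why braille output is out of reach with the current hardware: there is only one output value per instant, not one per dot.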

Also, it is not entirely unthinkable that the accuracy could be improved, since capacitive touchscreens nowadays already use multiple electrodes. Similar setups might be able to generate multiple local electric fields, which might enable braille. But that is speculation, of course.

Surface using Tesla Touch technology

The big advantage of this type of screen over reformable ones (such as the braille smartphone, where buttons pop out of the screen) is that it obviously involves no mechanical movement, which improves maintenance, developer-friendliness and power consumption considerably.

If screens like this were built into regular tablets and smartphones, you’d have a single device usable by both visually impaired and non-visually-impaired people, which could once more decrease stigmatization a little. It offers functionality for both, and it can be adapted to react differently depending on the user: people with full sight need less feedback, but can still use it to simulate, for example, the feeling of a keyboard, while visually impaired people can use the functionality in a broader way.

If you can think of other applications for this type of screen, or if you have a preference between this and the formable screens, let us know!


Cone cell gene therapy

Hi all!

We’ve written a post on color blindness before, but there’s still a lot to say, so here’s another one! In 2009, researchers at the University of Washington in Seattle developed a cure for color blindness in squirrel monkeys that may also be applicable to humans. They cured Daltonism, red-green color blindness, in the monkeys by injecting them with a virus carrying the corrective gene for their defective cone cells (photoreceptors in the retina) as a payload.

Squirrel monkeys

After spending a large amount of time training the monkeys to react to colors, the researchers could finally determine that the monkeys were in fact aware of colors, in contrast to their behavior before the treatment.

Why is this so important, though? Daltonism is by far the most common vision defect in humans, and it stems from similarly defective cone cells. Considering that about 5% of human males suffer from it, a cure like this could make a huge difference once it is made available for humans. But it has even more potential than that. Many other human eye diseases stem from problems with these same cone cells, and those diseases cause partial or complete blindness. If it is possible to cure one cone cell disease using gene therapy, solutions for these even more severe problems might come into range.

We are not there yet, though, as the therapy is still in its testing phase for humans, and for that it has to go through the many legislative procedures required for testing medicines. Once that is done, and if it is considered safe for humans, we can look forward to a very potent new weapon in humanity’s ever-increasing disease-combating arsenal.

For more info, see this article from The Guardian, or go to the website of (one of the) authors!


Google Car

Welcome back to our little corner of the Internet!

We’ve talked a lot about tools and devices specifically designed for visually impaired people, but we cannot lose sight of other pieces of research and technology that are developed for a more general audience, but which might provide some – if not a lot of – use for the visually impaired.

One such research topic is Google’s driverless car. The project is designed with safer traffic in mind for all audiences, but it would also hold a specifically great advantage for blind people: they would be able to have a personal vehicle that can be controlled without needing another person to drive it for them. This would work as a major “enabling” tool for the mobility of blind and otherwise visually impaired people. The reactions of a legally blind person testing the vehicle can be seen in this video:

Even though many argue that, in order to be more ecologically responsible, it is better to leave personal vehicles behind in favor of public transportation – which is already very accessible for blind people – this evolution does not seem to be happening rapidly enough to make the driverless personal vehicle useless for blind people. Also, considering that this innovation, when it happens, will spread to more and more people, the driverless car will actually counteract stigmatization, which can occur when using other types of tools for the visually impaired, as one of our readers pointed out in the comments section of one of our previous posts.

Don’t take it for an innovation that might happen very soon, though: there are still many legal hurdles to allowing a driverless vehicle to be produced. This is mostly because legislation usually focuses on the driver of a vehicle, which would obviously be a problem in this case.

But of course, as promising as this technology may look, and although Google is well on its way to proving the opposite, there might always be some imperfections that lead to faulty behavior. In that case, vision would once more become a useful tool for the user to correct the car. So far, however, no accidents with driverless cars (in driverless mode) have been recorded. Do you think, based on this, that this is a legitimate concern? Or do you have an opinion on Google’s project, or driverless cars in general? Let us know in the comments!


Further info: