The Argus

Happy Monday, dear reader of our blog!

And what better way to start the week than with some Greek mythology? Have you heard of the Gigantes? They were Greek mythological creatures, closely related to the gods, with superhuman size and strength. For example, there was Polyphemus, the shepherd cyclops who had his eye poked out by Odysseus. There was Orion, the hunter who could walk on water, and whom you can still see as a constellation in the sky. And there was Argus.
Argus slew many evildoers in his lifetime, snakes and bulls and satyrs alike. What he was most renowned for, however, was the fact that he had a hundred eyes, covering all of his body, which were never all asleep at the same time. This made him an excellent guardian, which is the job Hera, wife of Zeus, employed him for when she found a cow she suspected was Io, one of her husband’s lovers, whom Zeus had transformed to keep her safe from Hera’s wrath. She was right, but she could not exert her wrath on Io after all: Zeus sent Hermes to free his mistress, slaying poor Argus in the process.

Argus being slain by Hermes

But what does all of this have to do with the theme of this blog? Little, I admit, except for the giant’s connection to sight, which has led the company Second Sight to name a device after him.

The Argus

The Argus is a device targeted at people who have lost their vision due to damaged photoreceptors in the retina, as is the case in the disease retinitis pigmentosa. It consists of a retinal implant, a pair of glasses that holds a camera, and an external video processing unit. The glasses capture video, which the processing unit converts into instructions that are sent to the implant. These instructions are meant to (at least to a certain degree) replace the signals that would be coming from the photoreceptors. The device effectively bypasses the damaged photoreceptors by stimulating the remaining retinal cells, which pass the signal on towards the brain.
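To make that pipeline a little more concrete, here is a minimal sketch of the kind of conversion the external processing unit could perform. Everything below is an assumption for illustration: the grid size, the number of stimulation levels and the function name are made up, and the real device’s processing is certainly far more sophisticated.

import numpy as np

def frame_to_stimulation(frame, grid_shape=(6, 10), levels=8):
    # Average the pixels that fall into each electrode's region,
    # then quantize the brightness into a few pulse strengths.
    rows, cols = grid_shape
    h, w = frame.shape
    blocks = frame[:h - h % rows, :w - w % cols].reshape(
        rows, h // rows, cols, w // cols).mean(axis=(1, 3))
    return (blocks / 256 * levels).astype(int)

# A synthetic frame with one bright square yields a stimulation
# pattern that is strongest on the electrodes "seeing" the square.
frame = np.zeros((480, 640))
frame[200:280, 280:360] = 255.0
print(frame_to_stimulation(frame))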

The retinal implant, which bypasses the photo-receptors

 

Now, sadly, this does not mean that the user regains all of their vision, but they will at least be able to see some shapes and outlines. This is where it comes in handy that people with this implant were once able to see fully (in the case of retinitis pigmentosa, at least): they can use their memory of what things looked like to form complete images from the limited information they are getting through the device. And regardless, it is an improvement over being almost, or entirely, blind.

You could see this more as a “restoring vision” device than a “replacing vision” one, but I find that this kind of tool deserves some highlighting as well. However, that does bring us to an interesting question. If you could choose what to invest your money in, would you rather invest in a solution that “cures” blindness – one that restores vision to people completely – or in one that circumvents blindness – basically changing the world to be more adaptive towards blind people? The former would remove the problem completely, but keep in mind that there are many different causes of blindness, and equally many solutions that would have to be found. The latter would help people live with the disability, but would ultimately not solve the problem at its roots. What are your thoughts? I’m curious.

 

Koenraad


BrainPort V100

A wonderful day to you, dear reader!

 

You come to this page, see a fancy science-fiction-looking title, and obviously wonder what on earth it could be. Well, it is one of the sensory substitutions we haven’t discussed yet: it replaces vision by sensing with your tongue. That is not quite the same as replacing it by taste, but I’ll explain that later. We’ve had multiple applications that substitute vision by touch (albeit most often on the braille side), and we’ve had some that substitute it by hearing (we’re even writing our own thesis about it), but this one might seem a little less likely to give you the same results.

And partially, that is correct. The BrainPort V100 is meant as a vision “assistant”: it helps a visually impaired person gain some information about the shapes of items in front of them, and helps with orientation, but it does not have the accuracy of some of the technologies previously discussed on this blog.

The BrainPort V100

How does it work? Not unlike our own thesis, the device makes use of a camera mounted on a pair of goggles. The data the camera reads in is processed, and the shapes of the recorded objects are sent to the output. And the output is what really makes this special: a 3×3 cm array of 400 electrodes that are individually pulsed according to the recorded image. The array is connected to the goggles via a wire, and is meant to be placed on the tongue of the user. The user can – though not without some practice – recognize the signals and use them to gain information about their surroundings. It’s no major inconvenience either: the signal is perceived as a fizzy, tingling feeling on your tongue, and, as user feedback suggests, can be quite pleasant.
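Since 400 electrodes in a 3×3 cm square amounts to a 20×20 grid, we can get a feel for what the tongue “sees” with a toy simulation. To be clear, the mapping below is my own guess for illustration, not the actual BrainPort processing:

import numpy as np

def tongue_display(frame, grid=20):
    # Downsample a grayscale frame to a 20x20 grid of pulse strengths
    # and print it as ASCII art: bright pixels pulse the strongest.
    h, w = frame.shape
    cells = frame[:h - h % grid, :w - w % grid].reshape(
        grid, h // grid, grid, w // grid).mean(axis=(1, 3))
    ramp = " .:-=+*#%@"  # weak pulse -> strong pulse
    for row in cells:
        print("".join(ramp[int(v / 256 * len(ramp))] for v in row))

# A vertical bar in front of the camera shows up as a column of
# strong pulses in the middle of the tongue array.
frame = np.zeros((200, 200))
frame[40:160, 90:110] = 255.0
tongue_display(frame)

Even this crude rendering suggests why some training is needed: a 20×20 image contains outlines, not details.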

The electrode array

As is by definition the case with sensory substitution, the gained vision is traded off against the (temporary) loss of another sense in a certain location. Indeed, it will be difficult at best to taste anything while using the device, and, perhaps more importantly, speech and vision are mutually exclusive in this solution as well. This is of course a trade-off the users themselves have to make, and as discussed before, just having the option to use either one is already a big step forward. The device also makes it easy to switch between using it and not using it, which is an advantage in that regard.

And speaking of options, the choice of which assistive device to use is also a decision a user can make based on their own personal experience. And there are several, quite different options available. Just scroll through our previous posts :).

 

Koenraad

 

More info? Go to the site of the producer, read some articles in the media about it, or even have a look at the manual of the device.

Fittle

Hey guys! Welcome to another blog post!


Apart from all the technological tools discussed here before, braille has long been the main way of reading for blind people, and it remains very important today. That’s why the main focus this time is a simple but cleverly thought-out tool for teaching braille to young children. Why would you need an extra tool marketed especially towards children? Well, the problem lies in the fact that apart from learning a language, a child still has to learn about the world. As a comparison, most sighted readers probably had, as toddlers, some books with pictures of animals and objects, with their names spelled out underneath.

Learning to read

Fittle, a concept that sprouted from an MIT workshop, tries to do just that, but replaces the medium of vision with the medium of touch. And this goes not only for the letters, but also for the pictures themselves. It is an interactive learning tool consisting of sets of blocks in particular shapes that fit together. If you fit all the blocks of a set together, the result is a block shaped like an animal, tool or object, much like the pictures in children’s books. Each block carries a braille symbol for one letter of the name of the object you’re building, so after building it, you’ll be able to read out the entire word.

Fittle blocks portraying the word “Fish”

This makes for a fun and interactive way for children to learn how to read, and it gives you a much more (literally) tangible idea of what the object you just read is. Fittle has even made the models of the blocks available for download on their site, so anyone with access to a 3D printer can create them. If you have a printer in the vicinity (like the Fablab for my fellow students at KU Leuven), you can head over there right now and create these for very little money. They ran an Indiegogo fundraiser a little while ago that only raised a fraction of what they aimed for, but the project is still being worked on.

This is one of those rare tools that is beautiful in its simplicity, yet extremely powerful and empowering. It is unfortunate that I didn’t learn of it sooner; I would definitely have backed it.

Koenraad

Navigation tool for blind people using ultrasound sensors

Welcome back!

In this post, I’d like to take a look at a tool that hits a little closer to home – that is, one that is fairly closely related to our thesis and at least partially follows the same general idea.

In 2007, Mounir Bousbia-Salah and Mohamed Fezari of the University of Annaba developed a tool to help blind people navigate. Their idea was to offer a simple, non-intrusive way to get from one place to another, while also continuously checking for obstacles around the user. To do this, they have a computer voice tell you which ways you can go at intersections, or, as they call them, “decision points”. This points the user in the right direction, then leaves them alone until the next decision point. The user can decide for themselves where to go, and the system will keep track of this. In further research this is supposed to happen using GPS, but that has not been implemented as of yet. Right now, the distance the person walks is measured in a rather complicated way, through accelerometers and a “footswitch” that detects when the user starts a step.
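As an aside, the basic bookkeeping behind this is easy to sketch. The snippet below is not the authors’ algorithm – they derive the step length from the accelerometer measurements, while I simply assume a fixed average step length – but it shows the idea of footswitch-based distance estimation:

def walked_distance(step_timestamps, step_length_m=0.7):
    # Each footswitch event marks the start of one step; the total
    # distance is just the step count times an (assumed) step length.
    return len(step_timestamps) * step_length_m

# 42 detected steps at ~0.7 m each -> roughly 29.4 m walked.
print(walked_distance(list(range(42))))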

The second part of their work, the one that most resembles our thesis, is the obstacle detection. To achieve this, they use ultrasound sensors connected to vibrating elements placed on the shoulders of the user. The sensors detect the closest obstacle, and a vibration is generated accordingly. They have also implemented this solution in the walking cane, where obstacles are detected in the same way. This amounts to an extension, albeit not a physical one, of the cane.
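Purely as an illustration, the obstacle-to-vibration mapping could look something like the sketch below. The range and the linear scaling are assumptions of mine, not values taken from the paper:

def vibration_level(distance_m, max_range_m=3.0):
    # Map the distance to the nearest obstacle to a vibration strength
    # between 0.0 (nothing within range) and 1.0 (obstacle very close).
    if distance_m >= max_range_m:
        return 0.0
    return 1.0 - distance_m / max_range_m

# An obstacle at 0.5 m vibrates much harder than one at 2.5 m:
print(vibration_level(0.5))  # 0.833...
print(vibration_level(2.5))  # 0.166...

Whether the strength should scale linearly or, say, pulse faster as obstacles get closer is exactly the kind of design choice such a system has to make.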

To me, their solution embodies what an ideal solution should look like. It tries to be as unintrusive as possible, only notifying the user when absolutely necessary (for the positional navigation), or using a sense that you generally do not rely on while walking (for the obstacle avoidance). All the while, it does not require a huge amount of extra hardware (at least not in the solution they eventually want to create), which makes adoption a bit easier for the user. Another element that adds to that is that they extend the usage of a very common tool for a blind person, the white cane.

Of course, there are still flaws, in the sense that a decision still has to be made about what counts as a “decision point”, since this significantly alters the flexibility for the user. Also, even if the GPS solution were implemented, the device would only be useful in the Americas, since the signal is much less accurate on other continents and might cause trouble.

As always, remarks and comments are highly encouraged, especially since this subject is close to our own work, and any criticism could lead us to change our view on aspects of our thesis!

 

Koenraad

Google Car

Welcome back to our little corner of the Internet!

We’ve talked a lot about tools and devices specifically designed for visually impaired people, but we cannot lose sight of other pieces of research and technology that are developed for a more general audience, but which might provide some – if not a lot of – use for the visually impaired.

One such research topic is Google’s driverless car. The project is designed with safer traffic in mind for all audiences, but it would also have one especially great advantage for blind people: they would be able to have a personal vehicle they can use without needing another person to drive it for them. This would work as a major “enabling” tool for the mobility of blind and otherwise visually impaired people. The reactions of a legally blind person testing the vehicle can be seen in this video:

Many argue that, in order to be more ecologically responsible, it is better to leave personal vehicles behind in favor of public transportation, which is already very accessible for blind people. But that evolution does not seem to be happening at a rapid enough pace to make the driverless personal vehicle useless for blind people. Also, considering that this innovation, when it happens, will spread to more and more people, the driverless car will actually counteract the stigmatizing that might come with using other types of tools for the visually impaired, as one of our readers pointed out in the comments section of one of our previous posts.

Don’t take it for an innovation that might happen very soon, though: there are still many legal problems with allowing a driverless vehicle to be produced. This is mostly because legislation usually focuses on the driver of a vehicle, which would obviously be a problem in this case.

But of course, as promising as this technology may look, and although Google is well on its way to proving the opposite, there might always be some imperfections that lead to faulty behavior. In that case, vision would once more become a useful tool for the user to correct the car. So far, however, no accidents with driverless cars (in driverless mode) have been recorded. Do you think, based on this, that this is a legitimate concern? Or do you have an opinion on Google’s project or driverless cars in general? Let us know in the comments!

Koenraad

Further info: http://googleblog.blogspot.be/2012/08/the-self-driving-car-logs-more-miles-on.html

Human Echolocation

Hello dear readers!

So far on this blog, we have written a bunch of posts about how technology is used to help people overcome their lack of vision in a number of ways. But this time, we will focus on something that does not require any external tools. It is a skill that every human is able to master, but also one that is largely unknown to the public: human echolocation.

If your first reaction is somewhere along the lines of “that’s something for bats, not humans”, then you reacted the same way I did when I first found out about it. But it is very much possible, and several people have already managed to master it, such as the boy in this video:

What he does is called “active echolocation”: he produces a sound himself, and relies on differences in the echoes he hears to determine whether there is anything nearby, and even what it is. Any sound will do for this, but some work better than others. The clicking sound the boy in the video makes is one of the more efficient ones. It has a relatively high frequency, meaning it (or its echoes) will not easily be drowned out by environmental sounds. Also, since clicking does not require you to exhale, it allows you to use the technique for longer without getting a completely dry mouth, which would not only be unpleasant, but would also make it harder to keep producing a sound of the same frequency.
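The arithmetic behind it is simple, by the way: sound travels at roughly 343 m/s in air, and an echo has to travel to the object and back. A quick back-of-the-envelope calculation:

SPEED_OF_SOUND = 343.0  # m/s in air at room temperature

def obstacle_distance(echo_delay_s):
    # Distance to a reflecting object, given the time between the
    # click and its echo; the sound travels there and back, hence /2.
    return SPEED_OF_SOUND * echo_delay_s / 2

# An echo arriving 10 ms after the click means a wall ~1.7 m away:
print(obstacle_distance(0.010))  # 1.715

Of course, nobody consciously computes this; the brain learns to interpret those tiny delays (and volume and timbre differences) directly, which is what the training is for.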

People can learn this skill regardless of whether they are blind or not: it takes only a few hours of practice per day, over the span of several weeks, to start perceiving whether there is an object in front of you. Further refinement is possible of course, as the video shows. And although it is often said that blind people have better hearing, people with perfect eyesight have been known to teach themselves this skill to a certain degree.

Not convinced? Want proof? Have proof: you’re using it already. Passive echolocation, as it’s called, is the counterpart of active echolocation, where you do not make a sound yourself, but instead listen to sounds from the environment to determine what that environment looks like. I am sure you’re familiar with the sound of footsteps in an empty church, or music played in a concert hall, as opposed to talking to another person in your room. These might be simple things, but they’re based on the same principles as echolocation.

Although its use for visually impaired people is clear, the question remains what a person with good eyesight can gain from this skill, other than a party trick. It has been suggested that professions that lead you into low-light environments, like firefighting, might have a use for it, but that seems somewhat up for debate. Perhaps you, dear reader, have some ideas and suggestions? Let us know in the comments!

Koenraad

Further reading:
http://learnecholocation.blogspot.be/
http://www.medicalnewstoday.com/articles/226653.php