BrainPort V100

A wonderful day to you, dear reader!


You come to this page seeing a fancy, science-fiction-looking title, and you probably wonder what on earth it could be. Well, it is one of the sensory substitutions we haven't discussed yet: it replaces vision by sensing with your tongue. That is not quite the same as replacing it by taste, but I'll explain that later. We've seen multiple applications that substitute vision with touch (albeit mostly on the braille side), and some that substitute it with hearing (we're even writing our own thesis about it), but this one might seem a little less likely to give you the same results.

And that is partially correct. The BrainPort V100 is meant as a vision "assistant": it helps a visually impaired person gain some information about the shapes of objects in front of them and helps with their orientation, but it does not have the accuracy of some of the technologies previously discussed on this blog.

The BrainPort V100

How does it work? Not unlike our own thesis, the device uses a camera mounted on a pair of goggles. The data the camera reads in are processed, and the shapes of the recorded objects are sent to the output. And the output is what really makes this special: a 3×3 cm array of 400 electrodes that are individually pulsed according to the recorded image. The array is connected to the goggles via a wire and is meant to be placed on the tongue of the user. They can – though not without practice – recognize the signals and use them to gain information about their surroundings. It's no major inconvenience either: the signal is perceived as a fizzy feeling on your tongue and, as user feedback suggests, can be quite pleasant.
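To get a feel for the camera-to-tongue step, here is a minimal sketch of one plausible way to reduce a camera frame to 400 per-electrode intensities. The block-averaging approach, the 20×20 grid, and the function names are my own assumptions for illustration; the actual BrainPort processing pipeline is proprietary.

```python
# Hypothetical sketch: reduce a grayscale camera frame to a 20x20 grid of
# stimulation intensities (0-255), one value per electrode of a 400-point
# array. Block averaging is an assumption, not BrainPort's actual algorithm.

def frame_to_electrodes(frame, grid=20):
    """frame: 2D list of grayscale pixels (0-255); returns a grid x grid
    list of block-averaged intensities."""
    rows, cols = len(frame), len(frame[0])
    bh, bw = rows // grid, cols // grid  # pixels per electrode block
    out = []
    for gy in range(grid):
        row = []
        for gx in range(grid):
            block = [frame[y][x]
                     for y in range(gy * bh, (gy + 1) * bh)
                     for x in range(gx * bw, (gx + 1) * bw)]
            row.append(sum(block) // len(block))  # mean brightness of block
        out.append(row)
    return out

# Usage: a 100x100 frame that is bright only in the top-left quadrant
frame = [[255 if x < 50 and y < 50 else 0 for x in range(100)]
         for y in range(100)]
intensities = frame_to_electrodes(frame)
```

Each value would then drive the pulse strength of one electrode, so a bright object in the camera's view becomes a stronger "fizz" in the corresponding region of the tongue array.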

The electrode array

As is by definition the case with sensory substitution, gaining vision is traded off against the (temporary) loss of another sense in a certain location. Indeed, it will be difficult at best to taste anything while using the device, and, perhaps more importantly, speech and vision are also mutually exclusive in this solution. This is of course a trade-off the users themselves have to make, and as discussed before, just having the option to use either one is already a big step forward. The device also makes it easy to switch between using and not using it, which is another advantage in that regard.

And speaking of options, the choice of which assisting device to use is also a decision a user can make, depending on their own personal experience. And there are several, quite different options available. Just scroll through our previous posts :).




More info? Go to the site of the producer, read some articles in the media about it, or even have a look at the manual of the device.



Hey guys! Welcome to another blog post!


Apart from all the technological tools discussed here before, braille has long been the main way of reading for blind people, and it still remains very important today. That's why the main focus this time is a simple but cleverly thought-out tool to teach braille to young children. Why would you need an extra tool marketed especially towards children? Well, the problem is that apart from learning a language, a child still has to learn about the world. As a comparison, most non-visually-impaired readers probably had, as toddlers, some books with pictures of animals and objects, with their names spelled out underneath.

Learning to read

Fittle, a concept that sprouted from an MIT workshop, tries to do just that, but replaces the medium of vision with the medium of touch. And this applies not only to the letters, but also to the pictures themselves. It is an interactive learning tool that consists of sets of blocks in particular shapes that fit together. If you fit all the blocks of a set together, the result is a block shaped like an animal, tool, or object, much like the pictures in children's books. On the blocks, braille symbols indicate the letters of the name of the object you're building. After building, you'll be able to read out the entire word.

Fittle blocks portraying the word “Fish”

This makes for a fun and interactive way for children to learn how to read, and it gives you a much more (literally) tangible idea of what the object you just read is. Fittle has even made the models of the blocks available for download on their site, so anyone with access to a 3D printer can create them. If you have a printer in the vicinity (like Fablab for my co-students at KU Leuven), you can head over there right now and create these for very little money. They ran an Indiegogo fundraiser a little while ago that only raised a fraction of its goal, but the project is still being worked on.

This is one of those rare tools that is beautiful in its simplicity, yet extremely powerful and empowering. It is unfortunate that I didn't learn of it sooner; I would definitely have backed it.


Navigation tool for blind people using ultrasound sensors

Welcome back!

In this post, I'd like to take a look at a tool that hits a little closer to home, in the sense that it is fairly closely related to our thesis and follows, at least partially, the same general idea.

In 2007, Mounir Bousbia-Salah and Mohamed Fezari of the University of Annaba developed a tool to help blind people navigate. Their idea was to build a simple, unobtrusive way to get from one place to another while continuously checking for obstacles around the user. To do this, they have a computer voice tell you which ways you can go at intersections, or, as they call them, "decision points". This points the user in the right direction and leaves them alone until the next decision point. Users can decide for themselves where to go, and the system will keep track of this. In further research this is supposed to happen using GPS, but that has not been implemented yet. Right now, the distance the person walks is measured in a rather complicated way through accelerometers, plus a "footswitch" to detect when the user starts a step.
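The distance-tracking idea can be sketched very simply: the footswitch marks each step, and the walked distance is accumulated stride by stride. In the sketch below the stride length is a fixed assumed constant; the actual system derives it from accelerometer data, so treat this as an illustration of the bookkeeping, not their implementation.

```python
# Hedged sketch of dead-reckoning distance between decision points:
# a footswitch fires once per detected step, and distance accumulates
# with an assumed fixed stride length (the real system estimates the
# stride from accelerometer readings).

STRIDE_M = 0.7  # assumed average stride length in metres (illustrative)

def walked_distance(step_events, stride=STRIDE_M):
    """step_events: booleans, one per footswitch sample window;
    True means a step was detected. Returns metres walked."""
    return sum(stride for hit in step_events if hit)

# Usage: 10 detected steps since the last decision point
distance = walked_distance([True] * 10)  # about 7 metres
```

When this accumulated distance reaches the known distance to the next decision point, the voice prompt would fire.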

The second part of their work, the one that most closely resembles our thesis, is the obstacle detection. To achieve this, they use ultrasound sensors connected to vibrating elements placed on the shoulders of the user. The sensors detect the closest obstacle, and a vibration is generated accordingly. They have also implemented this solution in the walking cane, where obstacles are detected in the same way. This leads to an extension, albeit not a physical one, of the cane.
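The core of the obstacle detection is a straightforward chain: echo time to distance, distance to vibration strength. The sketch below shows that chain under my own assumptions (a 3 m maximum range and a linear closer-is-stronger mapping); the authors' actual mapping may differ.

```python
# Sketch (my assumptions, not the authors' code): convert an ultrasound
# round-trip echo time to a distance, then to a vibration level that
# grows as the obstacle gets closer.

SPEED_OF_SOUND = 343.0  # m/s in air at room temperature

def echo_to_distance(echo_s):
    """Round-trip echo time in seconds -> obstacle distance in metres
    (halved because the pulse travels out and back)."""
    return SPEED_OF_SOUND * echo_s / 2.0

def vibration_level(distance_m, max_range=3.0):
    """0.0 (nothing within range) .. 1.0 (obstacle at the sensor).
    The linear mapping and 3 m range are illustrative choices."""
    if distance_m >= max_range:
        return 0.0
    return 1.0 - distance_m / max_range

# Usage: an echo after ~8.75 ms means an obstacle at ~1.5 m,
# which here maps to roughly half-strength vibration
d = echo_to_distance(0.00875)
level = vibration_level(d)
```

The shoulder-mounted vibrators would then be driven at this level, one per sensor, so the user feels which side the obstacle is on and roughly how close it is.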

The idea of their solution embodies, to me, what an ideal solution would look like. It tries to be as unintrusive as possible, only notifying the user when absolutely necessary (for the positional navigation), or using a sense that you generally are not using while walking (for the obstacle avoidance). All the while, it does not require a huge amount of extra hardware (at least not in the solution they eventually want to create), which makes adoption a bit easier for the user. Another element that adds to that is that they extend the usage of a very common tool for blind people: the white cane.

Of course, there are still flaws, in the sense that a decision still has to be made about what should count as "decision points", since this significantly affects the flexibility for the user. Also, even if the GPS solution were implemented, the device would only be useful in the Americas, since the signal on other continents is much less accurate and might cause trouble.

As always, remarks and comments are highly encouraged, especially since this subject is close to our own work, and any criticism could lead us to change our view on aspects of our thesis!



Dans le noir?

‘Dans le noir’ is the name of a restaurant in Paris. Now, you probably wonder what food has to do with our topic. I am not going to give you a review of this restaurant, since we haven’t visited it, but I am going to talk about its concept.

On the website, they describe their restaurant as

“A diving trip into your imagination to “reset” your senses and meet other people. A unique experience that our divisions Ethik Event and Ethik Management try to share with large corporations or institutions in order to develop a positive perception of the difference and to improve the integration of disabilities at work.”

The visitors have dinner in pitch darkness, guided and served by blind guides. The goal of this experience is to take away the visitors’ vision and let them use their other senses to experience the world of blind people. To enhance the experience, the visitors choose from a surprise menu, which means they don’t know what they will get. After the dinner, the detailed menu is revealed, which can be a real surprise for some of the visitors.

I really like the concept of this restaurant. It not only raises awareness of blind people, but also employs them in a way that makes the visitors dependent on them.

What do you think about this concept? Would you visit this restaurant if you had the chance? Do you think it will change the visitors’ views?



Tactile surfaces

Hello my dear reading audience!

We’ve written a post about a braille smartphone before, but there are other types of tactile surfaces, not specifically targeted at visually impaired people, that follow the same reasoning: being able to interact with a device without having to see the screen. The one I’d like to talk about today comes from a rather uncommon source: the Disney research labs.

Among many projects, they have done research to develop what they call the “TeslaTouch” surface. The goal is a tangible touchscreen, not in the sense of having actual buttons pop out of it, like other projects have explored before, but in the sense of being able to feel different types of surfaces depending on the screen content. The technology is based on applying a changing, oscillating electric field to the screen, which affects how much friction you feel when you slide your finger across it. The technical details are written down in their 2010 research paper on the topic.

The more obvious way to use this type of technology for blind people would be to let them sense when they hover over icons, and give them a particular type of sensation depending on what the icon is. It is not accurate enough to display braille, since you can only create a single electric field across the entire surface depending on where touch is registered (which also means you get a single friction signal for all fingers you use), but it can give some sense of feedback from the visual contents of the screen through touch. Couple this with screen readers, and the device gains a lot of extra functionality when targeted towards the visually impaired.
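Because only one global field can be active at a time, the software side reduces to: find which UI element the touch point is over, then drive the whole surface with that element's friction waveform. The sketch below shows that idea; the element names, frequency/amplitude values, and hit-test layout are illustrative assumptions, not TeslaTouch's actual parameters.

```python
# Speculative sketch: pick one friction waveform for the whole surface
# based on which UI element the touch point is over. One global field
# means one texture at a time, as noted in the post. All texture values
# and the hit-test layout below are made-up illustrative assumptions.

TEXTURES = {  # element type -> (waveform frequency in Hz, amplitude 0..1)
    "button":     (400, 0.8),  # fine, "sticky" texture
    "slider":     (80, 0.5),   # coarse, rumbling texture
    "background": (0, 0.0),    # no friction modulation
}

def element_at(x, y):
    """Hypothetical hit-test: a button occupies the left half of a
    320-pixel-wide screen; everything else is background."""
    return "button" if x < 160 else "background"

def waveform_for_touch(x, y):
    """Return the (frequency, amplitude) to drive the surface with."""
    return TEXTURES[element_at(x, y)]

# Usage: a touch at x=100 lands on the button and selects its texture
freq, amp = waveform_for_touch(100, 50)
```

Paired with a screen reader announcing the element's label, this would let a blind user feel that they are over something interactive before hearing what it is.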

Also, it is not entirely unthinkable that the accuracy could be improved, since capacitive touchscreens nowadays already use multiple electrodes. Similar setups might be able to generate multiple local electric fields, which might enable braille. But that is speculation, of course.

Surface using TeslaTouch technology

The big advantage of this type of screen over deformable ones (where buttons pop out, like the braille smartphone) is that it obviously involves no mechanical movement, which improves maintenance, developer-friendliness, and power consumption considerably.

If screens like this were built into regular tablets and smartphones, you’d have a single device usable by both visually impaired and non-visually-impaired people, which could once more decrease the stigmatization a little. It offers functionality for both, and it can be adapted to react differently depending on the user: people with full sight need less feedback but can, for example, still use it to simulate the feeling of a keyboard, while visually impaired people can use the functionality in a broader way.

If you can think of other applications for this type of screen, or if you have a preference between this and the deformable screens, let us know!


Raise awareness for blind people

The Norwegian Association of the Blind created some advertisement videos to raise awareness of blind people. There are two types of videos: one about the guide dog and the other about hiring blind people. In total there are six different advertisements.

Guide dog

Hiring blind people

Do you think they should also do something like this in Belgium or your own country? Do you have any other examples of raising awareness of blind people? With the upcoming federal elections in Belgium and the European elections in May, do you think the different parties should include ‘disabled people’ on their agenda?


The other advertisements.

Braille e-book

Nowadays, you would think that everyone has easy access to a good book. But for part of the population, it is not so easy to find a book to read, let alone a good one. For blind people who read braille, the number of books translated into braille is very limited. With today’s technology, you would think it should not be difficult to make a braille e-reader that renders books easily, when braille smartphones already exist. The problem is not that nobody thought of it; on the contrary, you can find some concepts dating back to 2009 here and here.

So why can’t we find braille e-readers yet? Because companies don’t consider them profitable enough to make.

Do you think more should be invested in the development of such an e-reader? Would you sponsor someone who wants to buy such a device?