The World Wide Web began life as a text-based technology, with dynamic links from one information source to another, while the first ‘killer app’ of the mobile age was the text message.
Over the past decade, however, our communications have become dominated by the camera. Still images, videos, Vines, animations, and more are taking over the internet, especially on social platforms – on the basis that a picture is worth a thousand words.
Unified communications and collaboration tools are no different and are also becoming increasingly focused on vision, rather than on voice or text. In the future, that trend will only deepen with the mass uptake of 3D headsets and virtual or augmented reality systems, together with Google Glass-style innovations.
The sight challenge
There is a problem, though: blind and partially sighted people have learned to navigate the text-based world via text-to-speech technologies and screen readers, but they are fast being left behind by a communications world that focuses more and more on vision.
In the UK alone, over two million people (nearly one in 30) live with some form of sight loss, while 360,000 are registered as blind or partially sighted. It took nearly two decades for many websites to take text accessibility and ease of navigation seriously, and there is a risk that the battle will have to be fought all over again in an image-centric world.
A number of social networks are now recognising the problem – Twitter, for example, by allowing users to add descriptive text to images. And today the largest of them, Facebook, launches an AI-based object-recognition and image-description service that can describe pictures to its blind and partially sighted users via their screen readers.
The development has been spearheaded by Facebook engineer Matt King, who lost his own sight to the degenerative eye condition retinitis pigmentosa.
“On Facebook, a lot of what happens is extremely visual,” he told the BBC. “And, as somebody who’s blind, you can really feel like you’re left out of the conversation, like you’re on the outside.”
“Our artificial intelligence has advanced to the point where it’s practical for us to try to get computers to describe pictures in a meaningful way. This is in its very early stages, but it’s helping us move in the direction of that goal of including every single person who wants to participate in the conversation,” he added.
As with similar technologies that are being used in humanoid robotics, the system will improve over time as it learns to recognise different types of objects and scenes.
However, the service may not have an easy ride in the medium term: the social network’s strategy is for the system to recognise people and identify them by name, via the ever-expanding image-tagging system and database that is already in place.
This raises a complex set of issues: creating a global database of people who can be identified by name, on sight, is something that privacy campaigners – and countless others – have deep concerns about, not least because of the risk of misidentification.
Those risks are very real: Facebook’s system relies not on a verified database but on ad hoc user tagging, and it is common for people to be tagged in images in which they do not appear – sometimes even maliciously. People put inordinate faith in context-free data and tags to identify and accurately describe fallible human beings, and algorithms are written on the basis of that faith.
But King is quite right that this information – correct or otherwise – should not be withheld from one group of people because of their disability. “I feel I have a right to that information,” King told the BBC. “I am asking for information that is already available to other people to be revealed to me. So I see it as a matter of fairness.”
Either way, Facebook – and other networks that are working towards improving navigation and accessibility – should be congratulated for recognising the problems facing people with disabilities online and working so quickly towards building a long-term solution.