London, UK (BBN) - Scientists have developed two new smartphone-based systems for driverless cars that can identify a user’s location and the components of a road scene, such as street signs, pedestrians and buildings, in places where GPS does not function.
The systems can perform the same job as sensors costing tens of thousands of pounds, the researchers said, PTI reports.
The separate but complementary systems have been designed by researchers from the University of Cambridge in the UK.
Although the systems cannot currently control a driverless car, the ability to make a machine ‘see’ and accurately identify where it is and what it is looking at is a vital part of developing autonomous vehicles and robotics.
The first system, called SegNet, can take an image of a street scene it has not seen before and classify it, sorting objects into 12 different categories – such as roads, street signs, pedestrians, buildings and cyclists – in real time.
It can deal with light, shadow and night-time environments, and currently labels more than 90 per cent of pixels correctly.
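For readers who want a feel for what this per-pixel labelling involves, the sketch below shows a toy SegNet-style encoder-decoder in PyTorch that assigns one of 12 classes to every pixel of an image. It is illustrative only, not the Cambridge code: the tiny architecture, the 224×224 input and the class names beyond the five mentioned above are assumptions made for the example.

```python
# Toy SegNet-style per-pixel scene labelling (illustrative assumptions only).
import torch
import torch.nn as nn

# Five classes are named in the article; the remaining seven are assumed here.
CLASSES = ["road", "street sign", "pedestrian", "building", "cyclist",
           "sky", "tree", "pavement", "car", "fence", "pole", "lane marking"]

class TinySegNet(nn.Module):
    def __init__(self, num_classes=len(CLASSES)):
        super().__init__()
        # Encoder: shrink the image while extracting features.
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        # Decoder: grow back to input resolution and score every class.
        self.decoder = nn.Sequential(
            nn.Upsample(scale_factor=2, mode="bilinear", align_corners=False),
            nn.Conv2d(32, 16, 3, padding=1), nn.ReLU(),
            nn.Upsample(scale_factor=2, mode="bilinear", align_corners=False),
            nn.Conv2d(16, num_classes, 3, padding=1),
        )

    def forward(self, x):
        return self.decoder(self.encoder(x))

model = TinySegNet().eval()
image = torch.rand(1, 3, 224, 224)   # stand-in for a street photo
with torch.no_grad():
    scores = model(image)            # (1, 12, 224, 224) class scores
labels = scores.argmax(dim=1)        # one class index per pixel
print(labels.shape)                  # torch.Size([1, 224, 224])
```

A real system would use a far deeper network, but the shape of the computation is the same: downsample to extract features, upsample back to full resolution, then take the highest-scoring class at each pixel.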
Previous systems using expensive laser- or radar-based sensors have not been able to reach this level of accuracy while operating in real time.
Users can upload an image or search for any city or town in the world, and the system will label all the components of the road scene.
The system has been successfully tested on both city roads and motorways.
For the driverless cars currently in development, radar and laser-based sensors are expensive – in fact, they often cost more than the car itself.
In contrast with expensive sensors, which recognise objects through a mixture of radar and LIDAR (a remote sensing technology), SegNet learns by example – it was ‘trained’ by the researchers, who manually labelled every pixel in each of 5,000 images.
Once the labelling was finished, it took the researchers two days to ‘train’ the system before it was put into action.
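The sketch below is a hedged illustration of that ‘training by example’ step, again in PyTorch: the network’s per-pixel predictions are compared against a hand-made label map with a cross-entropy loss, and the weights are adjusted to reduce the disagreement. The stand-in network, optimiser settings and image sizes are assumptions, not the researchers’ actual setup.

```python
# Sketch of supervised training on hand-labelled images (assumed setup).
import torch
import torch.nn as nn

NUM_CLASSES = 12

# Stand-in network: anything mapping (B, 3, H, W) images to per-pixel
# class scores (B, 12, H, W) would slot in here (see the earlier sketch).
model = nn.Sequential(
    nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
    nn.Conv2d(16, NUM_CLASSES, 3, padding=1),
)
optimiser = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()  # per-pixel classification loss

def train_step(image, pixel_labels):
    """image: (B, 3, H, W); pixel_labels: (B, H, W) hand-labelled class map."""
    optimiser.zero_grad()
    scores = model(image)                 # (B, 12, H, W) class scores
    loss = loss_fn(scores, pixel_labels)  # compares every pixel to its label
    loss.backward()
    optimiser.step()
    return loss.item()

# One dummy step; random tensors stand in for a hand-labelled street image.
img = torch.rand(2, 3, 64, 64)
lbl = torch.randint(0, NUM_CLASSES, (2, 64, 64))
print(train_step(img, lbl))
```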
“It’s remarkably good at recognising things in an image, because it’s had so much practice,” said Alex Kendall, a PhD student in the Department of Engineering at Cambridge.
A separate but complementary system uses images to determine both precise location and orientation.
This localisation system runs on a similar architecture to SegNet, and is able to localise a user and determine their orientation from a single colour image in a busy urban scene.
The system is far more accurate than GPS and works in places where GPS does not, such as indoors, in tunnels, or in cities where a reliable signal is not available.
The localisation system uses the geometry of a scene to learn its precise location, and is able to determine, for example, whether it is looking at the east or west side of a building, even if the two sides appear identical.
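As a rough illustration of how a single colour image can yield both a position and an orientation, here is a final PyTorch sketch in the spirit of such pose regression. The article does not describe the actual architecture, so the layer sizes, the metre-scale x, y, z output and the unit-quaternion heading are all assumptions for the example.

```python
# Toy pose-regression net: one photo in, location and heading out
# (illustrative assumptions only; not the Cambridge architecture).
import torch
import torch.nn as nn

class TinyPoseNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        self.position = nn.Linear(32, 3)     # assumed x, y, z in metres
        self.orientation = nn.Linear(32, 4)  # assumed quaternion heading

    def forward(self, x):
        f = self.features(x)
        q = self.orientation(f)
        q = q / q.norm(dim=1, keepdim=True)  # normalise to a unit quaternion
        return self.position(f), q

model = TinyPoseNet().eval()
photo = torch.rand(1, 3, 224, 224)           # a single colour image
with torch.no_grad():
    xyz, quat = model(photo)
print(xyz, quat)  # estimated location and orientation
```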
BBN/SK/AD