This is technology that can be delivered now, while we wait for GPS-linked systems able to go anywhere new. It is a huge advance on cruise control and takes advantage of the fact that most of our driving is repetitive. It can be implemented now on the same basis as cruise control, in which the driver remains engaged. I would expect to see it show up quickly from a first adopter, which will drive competitors to follow.
The customer really wants to get into his car and tell it to go to work, to the mall, or home. This is doubly true for rush-hour commutes.
It may take some time, but regulators will become comfortable enough to allow hands-off work to be conducted in the self-driving car. It will also make rush hour hugely safer.
FEBRUARY 19, 2013
Oxford University's Mobile Robotics Group (MRG) RobotCar is a modified Nissan LEAF. Lasers and cameras are subtly mounted around the vehicle, and taking up some of the boot space is a computer which performs all the calculations necessary to plan, control speed and avoid obstacles. Externally it's hard to tell this car apart from any other on the road. It is designed to take over driving while traveling on frequently used routes.
RobotCar constantly monitors the road ahead to look for pedestrians, cars or anything else that could pose a danger. If an obstacle is detected, the vehicle comes to a controlled stop and waits until the obstacle has moved out of the way. Once clear, the car simply accelerates and continues its journey.
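To make that behaviour concrete, here is a minimal sketch of the stop-and-wait logic as a small state machine. It is an illustration only - the state names and the `obstacle_ahead`/`speed` inputs are my assumptions, not RobotCar's actual software.

```python
from enum import Enum

class DriveState(Enum):
    CRUISING = 1   # driving the learned route
    STOPPING = 2   # obstacle seen, braking to a controlled stop
    WAITING = 3    # stationary until the obstacle clears

def step(state: DriveState, obstacle_ahead: bool, speed: float) -> DriveState:
    """One control tick: obstacle_ahead would come from the laser/camera
    perception stack; here it is just a boolean placeholder."""
    if state is DriveState.CRUISING and obstacle_ahead:
        return DriveState.STOPPING
    if state is DriveState.STOPPING and speed == 0.0:
        return DriveState.WAITING
    if state is DriveState.WAITING and not obstacle_ahead:
        return DriveState.CRUISING  # accelerate and continue the journey
    return state
```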
There are three computers on board: the iPad, the LLC (Low Level Controller) and the MVC (Main Vehicle Computer). The iPad runs the user interface and demands constant attention from the LLC. If any of these computers disagree, the driver will not be able to start autonomous driving. If at any point there is a problem while the car is in control, the human driver is prompted to take over; if they fail to do so, the car is automatically brought to a stop.
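That "constant attention" between the computers is essentially a heartbeat check: autonomy is only allowed while every node keeps reporting in. A toy version might look like this (the node names and the 0.2-second timeout are invented for illustration):

```python
import time

class Watchdog:
    """Toy heartbeat monitor: if any computer stops beating, autonomous
    driving is refused (or, if already engaged, the stop procedure begins)."""

    def __init__(self, timeout_s: float = 0.2):
        self.timeout_s = timeout_s
        self.last_beat = {}

    def beat(self, node: str) -> None:
        # Each computer calls this periodically to prove it is alive.
        self.last_beat[node] = time.monotonic()

    def all_alive(self, nodes=("ipad", "llc", "mvc")) -> bool:
        now = time.monotonic()
        return all(now - self.last_beat.get(n, float("-inf")) < self.timeout_s
                   for n in nodes)
```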
The sensors and computers build up a three-dimensional map of the route. This is augmented by semantic information such as the location and type of road markings, traffic signs, traffic lights and lane information, as well as aerial images. Since such things can change, the system can also access the internet for updates. Only when the system has enough data and has been sufficiently trained will it offer to drive the car.
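One way to picture that layered map is a structure along these lines. This is purely illustrative - the field names are mine, not the group's data model:

```python
from dataclasses import dataclass, field

@dataclass
class SemanticMap:
    """A 3D map of a route augmented with semantic layers."""
    point_cloud: list = field(default_factory=list)     # 3D structure
    road_markings: list = field(default_factory=list)
    traffic_signs: list = field(default_factory=list)
    traffic_lights: list = field(default_factory=list)
    lanes: list = field(default_factory=list)

    def apply_updates(self, updates: dict) -> None:
        """Merge changes fetched online; keys name the layer to update."""
        for layer, items in updates.items():
            getattr(self, layer).extend(items)
```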
They have modified the base Nissan LEAF systems to allow complete fly-by-wire control. Everything from the steering to the indicators can be manipulated by the main vehicle computer in the boot. RobotCar senses the world in two main ways. The first uses a pair of stereo cameras to assess the road and navigate, much like a human driver's eyes. The second is a little different and uses several lasers mounted around the vehicle. These sensors assess the 3D structure of the world and also improve performance at night.
The MRG team sees an immediate future in production cars modified for autonomous driving only part of the time, on frequently driven routes. They estimate that the cost of the system can be brought down from its current £5,000 (US$7,700) to only £100 (US$155).
How It Works
Infrastructure-Free Navigation
Already, robots carry goods around factories and manage our ports, but these are constrained, controlled and highly managed workspaces. Here, the navigation task is made simple by installing reflective beacons or guide wires. Our goal is to extend the reach of robot navigation to truly vast scales without the need for such expensive, awkward and inconvenient modification of the environment. It is about enabling machines to operate for, with and beside us in the multitude of spaces in which we live and work.
Why Not Use GPS?
Even when GPS is available, it does not offer the accuracy required for robots to make decisions about how and when to move safely. Even if it did, it would say nothing about what is around the robot, and that has a massive impact on autonomous decision-making.
Our Approach
We use the mathematics of probability and estimation to allow the computers in robots to interpret data from sensors like cameras, radars and lasers, aerial photos and on-the-fly internet queries. We use machine learning techniques to build and calibrate mathematical models which can explain the robot's view of the world in terms of prior experience (training), prior knowledge (aerial images, road plans and semantics) and automatically generated web queries. We wish to produce technology which allows robots always to know precisely where they are and what is around them.
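In that spirit, here is a toy recursive Bayes filter localising a vehicle along a discretised route: a prediction step for motion, and a measurement update that fuses the sensor's likelihood with the prior. The cell count and the Gaussian "sensor" are invented for illustration:

```python
import numpy as np

def motion_update(belief: np.ndarray, shift: int) -> np.ndarray:
    # Predict: the car moved `shift` cells along the route
    # (np.roll wraps around, which is fine for a toy loop-shaped route).
    return np.roll(belief, shift)

def measurement_update(belief: np.ndarray, likelihood: np.ndarray) -> np.ndarray:
    # Bayes rule: posterior is proportional to prior times likelihood.
    posterior = belief * likelihood
    return posterior / posterior.sum()

# Toy usage: 100 cells along a taught route, uniform prior,
# and a sensor reading that says "you are near cell 42".
belief = np.full(100, 1.0 / 100)
likelihood = np.exp(-0.5 * ((np.arange(100) - 42) / 3.0) ** 2)
belief = measurement_update(motion_update(belief, 1), likelihood)
print(int(belief.argmax()))  # -> 42, the most probable cell
```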
Why Cars?
Perhaps the ultimate application is civilian transport systems. We are not condemned to a future of congestion and accidents. We will eventually have cars that can drive themselves, interacting safely with other road users and using roads efficiently, thus freeing up our precious time. But to do this the machines need life-long, infrastructure-free navigation, and that is the focus of this work.
Learning to Drive
Although the car itself only moves in 2D, it senses in 3D. It only offers the driver autonomy if the 3D impression it forms as it moves matches the one it has stored in its memory. So before the car can operate, it must learn what its environment looks like. As an example, below is a video of the car learning/discovering what Woodstock town centre looks like.
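A crude way to express that "does the live 3D impression match memory?" test is to ask what fraction of live points lie close to the stored map. The thresholds below are invented, and the real system is far more sophisticated:

```python
import numpy as np

def matches_memory(live_pts: np.ndarray, stored_pts: np.ndarray,
                   tol: float = 0.2, min_inlier_frac: float = 0.8) -> bool:
    """live_pts, stored_pts: (N, 3) and (M, 3) arrays of 3D points.
    Autonomy is only offered if enough live points fall within `tol`
    metres of some stored point (a naive nearest-neighbour check)."""
    dists = np.linalg.norm(live_pts[:, None, :] - stored_pts[None, :, :], axis=2)
    inlier_frac = (dists.min(axis=1) < tol).mean()
    return inlier_frac >= min_inlier_frac
```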
Computer Vision
We can use discrete stereo cameras to figure out the trajectory of the vehicle relative to routes it has been driven on before. This movie shows the vehicle interpreting live images in the context of its memory of our test site at Begbroke Science Park. We can also use these cameras to detect the presence of obstacles - although vision is not great at night, so we also use lasers.
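For a feel of how camera-based trajectory estimation works, here is a sketch of frame-to-frame visual odometry using OpenCV. To hedge: OpenCV and ORB features are my choices, not necessarily MRG's, and this simplified version uses a single camera, so it recovers translation only up to scale - in the real system the stereo pair provides metric scale:

```python
import cv2
import numpy as np

def relative_pose(img_prev, img_curr, K):
    """Estimate camera rotation R and (unit-scale) translation t between
    two consecutive greyscale frames. K is the 3x3 intrinsic matrix."""
    orb = cv2.ORB_create(2000)
    kp1, des1 = orb.detectAndCompute(img_prev, None)
    kp2, des2 = orb.detectAndCompute(img_curr, None)
    # Match features between the frames, then estimate motion robustly.
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = matcher.match(des1, des2)
    p1 = np.float32([kp1[m.queryIdx].pt for m in matches])
    p2 = np.float32([kp2[m.trainIdx].pt for m in matches])
    E, mask = cv2.findEssentialMat(p1, p2, K, method=cv2.RANSAC)
    _, R, t, _ = cv2.recoverPose(E, p1, p2, K, mask=mask)
    return R, t
```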
Lasers
Tucked under the front and rear bumpers of the vehicle are two scanning lasers. These lasers allow us to sense the 3D structure of the car's environment - from this we can figure out the car's location and orientation on the road.
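Conceptually, laser localisation means finding the pose under which the current scan best lines up with the stored structure. Here is a brute-force 2D toy version; the real system is far more efficient:

```python
import numpy as np

def match_scan(scan_xy, map_xy, candidate_poses):
    """Return the (x, y, heading) candidate whose transformed scan lies
    closest to the stored map, by mean nearest-neighbour distance.
    scan_xy: (N, 2) laser returns; map_xy: (M, 2) stored map points."""
    best_pose, best_err = None, np.inf
    for x, y, theta in candidate_poses:
        R = np.array([[np.cos(theta), -np.sin(theta)],
                      [np.sin(theta),  np.cos(theta)]])
        pts = scan_xy @ R.T + np.array([x, y])  # scan in map frame
        dists = np.linalg.norm(pts[:, None, :] - map_xy[None, :, :], axis=2)
        err = dists.min(axis=1).mean()
        if err < best_err:
            best_pose, best_err = (x, y, theta), err
    return best_pose
```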
Perception and Environment Understanding
Knowing what is where is pivotal for safe and robust operation. Here the car has knowledge of anything you as a driver might find useful - and more. The vehicle's situational awareness is made up of static and dynamic environment features.
Dynamic Obstacle Detection
Dynamic information relates to potential obstacles which are either moving or stationary: cars, bicycles, pedestrians, etc. Knowing where they are - and where they are likely to be in the near future - with respect to the planned vehicle trajectory is crucial for safe operation as well as for appropriate trajectory planning. Dynamic obstacles are detected and tracked using an off-the-shelf laser scanner. The system scans an 85-degree field of view ahead of the car 13 times a second to detect obstacles up to 50 metres ahead.
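A minimal sketch of that detect-and-anticipate loop: group neighbouring laser returns into obstacles, then extrapolate each one a scan ahead. The 0.5 m clustering gap is an invented threshold; the 13 Hz period comes from the figures above:

```python
import numpy as np

def cluster_returns(points, gap=0.5):
    """Group consecutive laser returns into obstacles: a new cluster
    starts whenever two neighbouring returns are more than `gap` metres
    apart. Returns one centroid per detected obstacle."""
    clusters, current = [], [points[0]]
    for p, q in zip(points, points[1:]):
        if np.linalg.norm(q - p) > gap:
            clusters.append(np.mean(current, axis=0))
            current = []
        current.append(q)
    clusters.append(np.mean(current, axis=0))
    return clusters

def predict(position, velocity, dt=1.0 / 13):
    # Constant-velocity guess at where an obstacle will be one scan
    # (1/13 s) from now, for checking against the planned trajectory.
    return position + velocity * dt
```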
Static World
Static information consists of semantic information like the location and type of road markings and traffic signs, traffic lights, lane information, where curbs are, etc. This kind of information rarely changes, and so a fairly accurate model can be built before the vehicle actually goes out. And it will last. Of course, you don't really want to blindly believe such a prior map for all time - after all, things do change when roadworks are carried out, for example - but knowing where you can expect to find certain things in the world is already incredibly helpful. The prior semantic map will get updated over time with information the vehicle actually gathers out there in the real world.
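The update rule that last sentence hints at can be as simple as confidence bookkeeping: features the car re-observes gain confidence, features it fails to see decay, and anything that drops low enough (a repainted junction after roadworks, say) falls out of the map. All the constants here are illustrative:

```python
def update_prior_map(prior, observed, boost=0.2, decay=0.9, drop_below=0.1):
    """prior: dict mapping a semantic feature to a confidence in (0, 1].
    observed: set of features the vehicle actually saw on this drive."""
    updated = {}
    for feature, confidence in prior.items():
        if feature in observed:
            confidence = min(1.0, confidence + boost)  # confirmed again
        else:
            confidence *= decay                        # not seen this time
        if confidence >= drop_below:
            updated[feature] = confidence
    for feature in observed:
        updated.setdefault(feature, 0.5)  # brand-new feature, medium trust
    return updated
```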