
1 GOOGLE DRIVERLESS CAR

2 INTRODUCTION The Google Driverless Car is a project by Google to develop technology for driverless cars. The project is led by Google engineer Sebastian Thrun, whose team at Stanford created the robotic vehicle Stanley, which won the 2005 DARPA Grand Challenge and its US$2 million prize from the U.S. Department of Defense. The team developing the system consisted of 15 engineers working for Google, including Chris Urmson, Mike Montemerlo, and Anthony Levandowski, who had worked on the DARPA Grand and Urban Challenges. The U.S. state of Nevada passed a law in June 2011 concerning the operation of driverless cars in Nevada; Google had been lobbying for driverless-car laws. The Nevada law went into effect on March 1, 2012, and the Nevada Department of Motor Vehicles issued the first license for a self-driven car in May 2012. The license was issued to a Toyota Prius modified with Google's experimental driverless technology.

3 OVERVIEW  The system combines information gathered from Google Street View with artificial intelligence software that fuses input from video cameras inside the car, a LIDAR sensor on top of the vehicle, radar sensors on the front of the vehicle, and a position sensor attached to one of the rear wheels that helps locate the car's position on the map. As of 2010, Google had tested several vehicles equipped with the system, driving 1,609 kilometres (1,000 mi) without any human intervention, in addition to 225,308 kilometres (140,000 mi) with occasional human intervention. Google expects that the increased accuracy of its automated driving system could help reduce the number of traffic-related injuries and deaths, while using energy and space on roadways more efficiently.

4  The project team has equipped a test fleet of at least eight vehicles, consisting of six Toyota Priuses, an Audi TT, and a Lexus RX450h, each accompanied in the driver's seat by one of a dozen drivers with unblemished driving records and in the passenger seat by one of Google's engineers. The cars have traversed San Francisco's Lombard Street, famed for its steep hairpin turns, and driven through city traffic. The vehicles have driven over the Golden Gate Bridge and on the Pacific Coast Highway, and have circled Lake Tahoe. The system drives at the speed limit it has stored on its maps and maintains its distance from other vehicles using its system of sensors. The system provides an override that allows a human driver to take control of the car by stepping on the brake or turning the wheel, similar to cruise control systems already in cars.
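The override behaviour described above can be sketched as a simple mode check. This is an illustrative sketch only; the function name, the steering-torque threshold, and its value are assumptions, not part of Google's actual control software.

```python
def control_mode(brake_pressed, steering_torque_nm, torque_threshold=3.0):
    """Decide who controls the car: the autonomous system by default,
    or the human once they press the brake or turn the wheel firmly enough."""
    if brake_pressed or abs(steering_torque_nm) > torque_threshold:
        return "manual"
    return "autonomous"

print(control_mode(False, 0.5))  # -> autonomous (light grip, no brake)
print(control_mode(True, 0.0))   # -> manual (driver pressed the brake)
```

The same pattern underlies conventional cruise control: any deliberate driver input immediately wins over the automated system.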

5 ACCIDENTS  In August 2011, a Google driverless car was involved in the project's first crash near Google headquarters in Mountain View, California. Google stated that the car was being driven manually at the time of the accident.

6 LIDAR (Light Detection And Ranging)  LIDAR (also LADAR) is an optical remote-sensing technology that can measure the distance to, or other properties of, a target by illuminating the target with light, often using pulses from a laser. LIDAR technology has applications in geomatics, archaeology, geography, geology, geomorphology, seismology, forestry, remote sensing and atmospheric physics, as well as in airborne laser swath mapping (ALSM), laser altimetry and LIDAR contour mapping.

7  The acronym LADAR (Laser Detection and Ranging) is often used in military contexts. The term "laser radar" is sometimes used, even though LIDAR does not employ microwaves or radio waves and is therefore not radar in the strict sense of the word.

8  LIDAR uses ultraviolet, visible, or near-infrared light to image objects and can be used with a wide range of targets, including non-metallic objects, rocks, rain, chemical compounds, aerosols, clouds and even single molecules. A narrow laser beam can be used to map physical features with very high resolution.  LIDAR has been used extensively for atmospheric research and meteorology. Downward-looking LIDAR instruments fitted to aircraft and satellites are used for surveying and mapping, a recent example being the NASA Experimental Advanced Research Lidar. In addition, LIDAR has been identified by NASA as a key technology for enabling autonomous precision safe landing of future robotic and crewed lunar landing vehicles.  Wavelengths in a range from about 10 micrometres down to the UV (ca. 250 nm) are used to suit the target. Typically, light is reflected via backscattering. Different types of scattering are used for different LIDAR applications; the most common are Rayleigh scattering, Mie scattering and Raman scattering, as well as fluorescence. Based on the kind of backscattering exploited, a LIDAR is accordingly called a Rayleigh, Mie, Raman, or Na/Fe/K fluorescence LIDAR, and so on. Suitable combinations of wavelengths can allow remote mapping of atmospheric contents by looking for wavelength-dependent changes in the intensity of the returned signal.
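The core of pulsed LIDAR ranging is the round-trip time of a light pulse: range = (speed of light x round-trip time) / 2. A minimal sketch, with an illustrative pulse time:

```python
# Time-of-flight ranging: a pulse travels to the target and back,
# so the one-way distance is half the round-trip path length.
C = 299_792_458.0  # speed of light in a vacuum, m/s

def range_from_tof(round_trip_seconds):
    """Return target distance in metres for a measured round-trip time."""
    return C * round_trip_seconds / 2.0

# A return detected 400 ns after the pulse left corresponds to ~60 m.
print(round(range_from_tof(400e-9), 1))  # -> 60.0
```

The short round-trip times involved (nanoseconds per metre) are why LIDAR receivers need very fast timing electronics.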

9 DESIGN  In general there are two kinds of lidar detection schemes: "incoherent" or direct-energy detection (which is principally an amplitude measurement) and coherent detection (which is best for Doppler or phase-sensitive measurements). Coherent systems generally use optical heterodyne detection, which, being more sensitive than direct detection, allows them to operate at much lower power, at the expense of more complex transceiver requirements.  In both coherent and incoherent LIDAR, there are two types of pulse models: micropulse lidar systems and high-energy systems. Micropulse systems have developed as a result of the ever-increasing amount of computer power available, combined with advances in laser technology. They use considerably less energy in the laser, typically on the order of one microjoule, and are often "eye-safe," meaning they can be used without safety precautions. High-power systems are common in atmospheric research, where they are widely used for measuring many atmospheric parameters: the height, layering and densities of clouds; cloud particle properties (extinction coefficient, backscatter coefficient, depolarization); temperature; pressure; wind; humidity; and trace gas concentrations (ozone, methane, nitrous oxide, etc.).
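Because coherent detection is phase-sensitive, it can recover the Doppler shift of the return and hence the target's radial velocity via v = wavelength x frequency-shift / 2 (the factor of two accounts for the round trip). A sketch with illustrative numbers:

```python
def radial_velocity(wavelength_m, doppler_shift_hz):
    """Radial velocity of a target from the Doppler shift measured by a
    coherent (heterodyne) lidar. The 1/2 accounts for the round trip:
    the light is shifted once on the way out and once on the way back."""
    return wavelength_m * doppler_shift_hz / 2.0

# A 1550 nm coherent lidar measuring a 12.9 MHz shift sees a target
# approaching at roughly 10 m/s.
v = radial_velocity(1550e-9, 12.9e6)
```

An incoherent (direct-energy) system cannot make this measurement, since amplitude alone carries no frequency-shift information.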

10 LASER  600–1000 nm lasers are most common for non-scientific applications. They are inexpensive, but since they can be focused by and easily absorbed by the eye, the maximum power is limited by the need to make them eye-safe; eye-safety is a requirement for most applications. A common alternative, 1550 nm lasers, are eye-safe at much higher power levels, since this wavelength is not focused by the eye, but the detector technology is less advanced, so these wavelengths are generally used at longer ranges with lower accuracies.

11 SCANNER AND OPTICS  How fast images can be developed is also affected by the speed at which they are scanned. There are several options for scanning the elevation, including dual oscillating plane mirrors, a combination with a polygon mirror, and a dual-axis scanner (see laser scanning). The choice of optics affects the angular resolution and the range that can be detected. A hole mirror or a beam splitter are options for collecting the return signal.
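Each scanner return is a range plus the mirror's azimuth and elevation angles at that instant; converting these polar readings to Cartesian coordinates is how the point cloud is built up. A minimal sketch of that conversion:

```python
import math

def polar_to_cartesian(r, azimuth_rad, elevation_rad):
    """Convert one lidar return (range, azimuth, elevation) into an
    (x, y, z) point in the sensor's frame. x points along azimuth 0."""
    x = r * math.cos(elevation_rad) * math.cos(azimuth_rad)
    y = r * math.cos(elevation_rad) * math.sin(azimuth_rad)
    z = r * math.sin(elevation_rad)
    return x, y, z

# A 10 m return straight ahead at zero elevation lands at (10, 0, 0).
print(polar_to_cartesian(10.0, 0.0, 0.0))  # -> (10.0, 0.0, 0.0)
```

The scanner's angular step size directly sets the spacing between neighbouring points, which is the angular resolution mentioned above.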

12 PHOTODETECTOR  Two main photodetector technologies are used in lidars: solid-state photodetectors, such as silicon avalanche photodiodes, and photomultipliers. The sensitivity of the receiver is another parameter that must be balanced in a LIDAR design.

13 POSITION AND NAVIGATION SYSTEM  LIDAR sensors mounted on mobile platforms such as airplanes or satellites require instrumentation to determine the absolute position and orientation of the sensor. Such devices generally include a Global Positioning System receiver and an Inertial Measurement Unit (IMU).
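GPS gives absolute but infrequent, noisy position fixes, while the IMU gives fast motion updates that drift over time. A common way to combine them is a complementary filter: dead-reckon with the IMU between fixes, then nudge the estimate toward each GPS fix. The following is a deliberately simplified one-dimensional sketch; the function, the 10 Hz rate and the blending gain are illustrative assumptions, not a real navigation stack.

```python
def fuse_step(position, velocity, accel, dt, gps_fix=None, gain=0.2):
    """One step of a 1-D complementary filter: integrate the IMU
    acceleration, then blend toward the GPS fix when one arrives."""
    velocity += accel * dt                       # integrate IMU acceleration
    position += velocity * dt                    # dead-reckon the position
    if gps_fix is not None:
        position += gain * (gps_fix - position)  # pull estimate toward GPS
    return position, velocity

pos, vel = 0.0, 10.0                 # start at the origin, moving at 10 m/s
for _ in range(10):                  # 1 s of IMU-only dead reckoning at 10 Hz
    pos, vel = fuse_step(pos, vel, accel=0.0, dt=0.1)
# A GPS fix at 11.5 m arrives; the estimate is pulled partway toward it.
pos, vel = fuse_step(pos, vel, accel=0.0, dt=0.1, gps_fix=11.5)
```

Production systems use a Kalman filter rather than a fixed gain, but the division of labour between the two sensors is the same.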

14

15 FACTS  Cruise control used to be a luxury item for car buyers. More recently, vehicles that could parallel park themselves or steer themselves through skids could inspire envy or respect. But Google's self-driving car, which has traveled more than 190,000 miles without human assistance in all kinds of traffic and over all kinds of terrain, has put these previous advances to shame.  Equipped with a Velodyne 64-beam laser imaging system on its roof, radar systems on each bumper, and a forward-looking camera, the car constructs an image of the world and of the objects moving all around it; a GPS system and inertial measurement units help it keep track of its precise location on maps in its memory. But the real key to its capabilities is its data-processing systems, only some of which are in the car. Others are part of a cloud-computing network of servers within wireless contact that Google uses to supplement the brains on board.  Building on advances pioneered under DARPA's Grand Challenge competitions of the last decade to build an autonomous vehicle, Google has leaped beyond them by applying the usual Google formula for success: collect mountains of well-structured data, analyze them with nearly limitless processing resources, and watch what comes out.

16  In its statements, Google emphasizes that its self-driving car (which the company calls an "autonomous vehicle") is still at the experimental stage, so similar vehicles are still years of development and testing away from sale to consumers. Perfecting the technology, moreover, is only the first step. Psychological, legal, and economic obstacles may be at least as formidable, postponing the era when human drivers will be obsolete.  The first and most obvious problems are that people like to drive, and they may be afraid to put their lives in the hands of a computer while on the road. Of course, people entrust their lives to computers all the time, for example during landings of modern airliners. But they may feel more nakedly exposed to that risk inside a car, where they are used to having control.  Even when consumers want the cars, a legal framework for allowing them on the road needs to be in place. Will auto manufacturers want to produce the cars if they would become liable for serious accidents resulting from malfunctions of the self-piloting system, even if those errors involved cloud-computing glitches? Currently, Nevada is the only state where cars without human drivers would be legal, thanks to legislation passed in June 2011. The other 49 states might need to follow suit. (Google's early tests of its car were legal because the car always had a person behind the wheel prepared to take control in an emergency, as the cars will in Nevada, at least in the near future.)

17

