From CCJ
Though the driver of a Tesla sedan, which was speeding and operating (against Tesla’s advice) in a fledgling version of an autonomous mode, was mostly to blame for a fatal May 2016 crash in Williston, Florida, the truck driver involved in the crash also shares some of the blame, according to the final crash report released Tuesday by the National Transportation Safety Board.
Photo from NTSB’s preliminary report of the Tesla Model S involved in the crash.
NTSB in preliminary reports said the Tesla driver’s reliance on the autonomous system was mostly to blame for the high-profile crash. However, in its final report, NTSB concluded the driver of the tractor-trailer involved in the crash failed to yield the right of way to the oncoming Tesla sedan. A post-crash drug test also revealed the truck operator had used marijuana before the crash, according to NTSB’s report.
The driver of the Tesla died in the crash. The truck operator survived.
Tesla in a statement after the crash said its autonomous system, still in Beta form, failed to detect the tractor-trailer crossing the road in front of it because the trailer blended in with the bright sky behind it. NTSB’s report, released Tuesday, confirmed this.
Tesla also said after the crash that its Autopilot mode isn’t meant to replace drivers’ control of the vehicle — only to assist the driver and prevent a crash, if possible. Tesla says it warns drivers via a visual message to maintain awareness and to be ready to steer and brake if necessary.
NTSB said Tuesday the driver had a “pattern of use of the Autopilot system…and a lack of understanding of the system limitations.”
The truck involved in the crash was a 2014 Freightliner Cascadia daycab, towing a 53-foot dry van trailer.
Driverless cars on public highways?
From the Los Angeles Times
Go for it! In essence, that’s the Trump administration’s new directive on driverless-car development.
Under those guidelines, automakers and technology companies will be asked to submit safety assessments to the U.S. Department of Transportation, but submission is entirely voluntary.
And states are being advised to use a light regulatory hand.
At a driverless-car test track in Ann Arbor, Mich., Transportation Secretary Elaine Chao painted a near future of greater safety, fewer deaths, higher productivity and more time spent with loved ones as robots increasingly take over the tasks of driving and commuters are freed for other activities.
She unveiled a document titled “Vision for Safety 2.0” and delivered a speech that was strong on vision and light on regulation.
“More than 35,000 people perish every year in vehicle crashes,” she said — 94% of those through driver error. After years of decline, fatalities are growing, she said. “Automated driving systems hold the promise of significantly reducing these errors and saving tens of thousands of lives in the process.”
Although the Vision document is vague, Congress is likely to pack on some meat. Last week, the House of Representatives passed a bill that eventually would let automakers each put as many as 25,000 cars on the road even if some features don’t meet current safety standards set by the National Highway Traffic Safety Administration. The cap would rise over a four-year period, allowing each automaker to field 275,000 driverless cars by the end of that period.
The House bill would require safety assessments, but permission to test would not be required. States would be required to follow federal regulations.
The Senate is considering a similar bill, though the Commerce Committee will consider at a Wednesday hearing whether to exempt trucks from the law. Labor unions fear that driverless technology could lead to job losses. Chao, who has expressed similar concerns in the past, said she’s working closely with Congress on the matter.
She was joined at Tuesday’s announcement by Mark Riccobono, president of the National Federation of the Blind, who said fully autonomous vehicles offer “an unprecedented opportunity to bring equal access to people with disabilities.”
Although widespread use of driverless cars is at least several years away, automakers and technology companies are making rapid progress, and features — such as automatic braking and adaptive cruise control — are already available on many new vehicles.
Tesla’s Autopilot feature, for example, enables the vehicle to pass cars automatically on the freeway. An option on the new Cadillac CT6 enables drivers to cruise along a freeway lane for hours without driver intervention. Even models from relatively inexpensive makers such as Hyundai, Mazda, Kia and Subaru offer automatic braking to avoid rear-ending the car ahead.
Not everyone was happy with Chao’s announcement. Some consumer groups, which already thought the Obama administration’s standards were too lax, criticized a further pullback from government regulation.
“This isn’t a vision for safety,” said John M. Simpson, Consumer Watchdog’s privacy project director. “It’s a road map that allows manufacturers to do whatever they want, wherever and whenever they want, turning our roads into private laboratories for robot cars with no regard for our safety.”
Two House Democrats, Frank Pallone Jr. of New Jersey and Jan Schakowsky of Illinois, issued a statement that calls Chao’s move a step backward: “The administration chose to cave to industry and pressure the states into not acting.”
But driverless-vehicle proponents cheered Chao’s presentation. “This is great news. Over-regulating autonomous vehicles will slow down the adoption of a technology which will create millions of new high-paying jobs across the United States and make roads safer for all Americans,” driverless industry consultant Grayson Brulte said.
Mitch Bainwol, chief executive of the Alliance of Automobile Manufacturers lobby group, appeared at the Chao event and said, “The future is not something we should be afraid of or try to slow down.”
The new standards replace guidelines published by the Obama administration in September 2016 that asked automakers to voluntarily submit reports on a 15-point “safety assessment.” They were also urged, but not required, to defer to federal rules on safety. Chao did not criticize those guidelines, but called them “Vision for Safety 1.0.”
“The new policy adjusts the tone but continues much of the substance of (the Obama administration) document,” said Bryant Walker Smith, law professor at the University of South Carolina. “It clearly reflects the input of the traditional automotive industry but doesn’t exclude potential new entrants such as Waymo.”
The previous approach, however, didn’t eliminate a patchwork of state-by-state regulations. California’s regulations, for example, are considered fairly strict. Florida, Michigan and Arizona barely regulate driverless cars.
The new “Vision for Safety” advises state officials to remain technology-neutral and not favor traditional automakers over technology companies; to remove regulatory barriers that keep driverless cars off the roads; and to make the federal Transportation Department’s voluntary recommendations into law.
New legislation that emerges from Congress, however, could have more serious implications for state regulations. Under the House bill, California and other states could not bar driverless cars allowed under federal law.
How that might affect a new set of driverless regulations that California officials plan to unveil by the end of the year is unclear. The state Department of Motor Vehicles, which regulates driverless cars, said in a prepared statement that it is reviewing the new federal guidelines.
Transportation officials from both administrations consider driver-assist technology and autonomous cars to be essential safety features that could dramatically reduce collisions, injuries and deaths.
The vast majority of traffic collisions are caused by human driver error, federal safety statistics show. Fatalities have been rising in recent years as cellphones and other distracting devices have become more popular.
In 2016, U.S. highway traffic deaths rose 6%, to about 40,000.
Why self-driving cars need superhuman senses
From Wired
More than any other benefit, self-driving vehicles promise to save lives.
Cutting out the human error that causes 90 percent of crashes could start to save some of the 35,000 lives lost on American roads every year. Manufacturers are convinced that people will happily use at least partially autonomous cars when they’re proven to be safer than human drivers, but that’s a pretty low bar. The ultimate goal is to eliminate crashes altogether, and to do that, cars will need to perfectly perceive and understand the world around them—they’ll need superhuman senses.
Pretty much every AV now in testing uses some combination of cameras, radars, and lidar laser systems. But now, an Israeli startup wants to add a new tool to the mix: heat-detecting infrared cameras that can pick out pedestrians from hundreds of feet away.
A fully driverless car, after all, will need to see the world in a wide variety of lighting and weather conditions. “Existing sensors and cameras available today can’t meet this need on their own,” said AdaSky CEO Avi Katz in a statement. So this morning, his company announced its plan to offer automakers what it calls Viper, a long-distance infrared camera and accompanying computer vision system.
Today’s sensors offer a detailed view of the world in 360 degrees, but each has its weak points. Cameras don’t work well at night or in dazzling sunlight. Lidar has trouble with rain, fog, and dust, because the laser bounces off the particles in the atmosphere. Radar can be confused by small but highly reflective metal objects, like a soda can in the street.
Even systems that combine data from all three sensors can struggle with images of humans on billboards or in adverts on other vehicles, as recently shown by Cognata, which simulates training environments for driverless car brains. That’s where AdaSky thinks its sensor can pitch in. If a human-shaped object is giving off heat, it’s probably a real person, not a picture.
“When we have heat radiating from something, and you figure out that it’s a person or an animal, then that tells you there’s the potential for unpredictable behavior,” says Jeff Miller, who studies autonomous vehicles at USC. Instead of just knowing there’s an object on the right-hand side of the road, a car that perceived it was a deer would proceed more cautiously.
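The billboard-versus-person distinction above boils down to a simple fusion rule: trust a human-shaped detection only when the thermal channel agrees. A minimal sketch of that rule (the function name and temperature thresholds are hypothetical illustrations, not AdaSky’s actual system):

```python
def is_likely_real_pedestrian(shape_is_human: bool, ir_temp_c: float) -> bool:
    """Treat a human-shaped detection as a real person only if the
    infrared camera also reads body-like heat. The 20-45 C band is an
    illustrative threshold, not a value from any real product."""
    return shape_is_human and 20.0 <= ir_temp_c <= 45.0


# A figure on a billboard is human-shaped but sits at ambient temperature,
# so the thermal check rejects it; a warm body passes.
print(is_likely_real_pedestrian(True, 10.0))   # billboard image -> False
print(is_likely_real_pedestrian(True, 33.0))   # warm pedestrian -> True
```

In a real fusion stack this decision would weigh confidence scores from several sensors rather than a single hard threshold, but the underlying idea is the same: a second, independent sensing modality vetoes false positives that fool the camera alone.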
(The system might even help cars figure out those kid-shaped, terribly-dressed bollards that recently showed up in the UK.)
Perception isn’t just about sight, or even the outside world. Waymo, née Google’s self-driving project, recently announced it now uses upgraded microphones to listen for police sirens, for example. Cadillac stuck an infrared camera on the steering wheel to monitor the driver’s state of awareness when its cars are in semi-autonomous mode.
This process of finding the right mix of sensors will likely never stop evolving, as new technologies become available and car companies puzzle over factors like cost, availability, and durability. Because until the day that cars are 100 percent safe, passengers will have to be convinced that the vehicle they’re climbing into is at least better at handling any situation than a human driver. And when it comes to making cars superhuman, the better they see, the better they drive.