Wednesday 16 May 2018

Tesla chief Elon Musk defended self-driving car technology on Tuesday

    Musk lamented what he portrayed as an unfair focus on mishaps
    40,000 people dying in accidents in past year got no coverage: Musk
    "Model S hit a fire truck at 60mph and the driver only broke an ankle"

Tesla chief Elon Musk defended self-driving car technology on Tuesday after reports about the latest crash involving one of the electric carmaker's vehicles.

Musk lamented on Twitter what he portrayed as an unfair focus on mishaps rather than the benefits of autonomous vehicles, which have the potential to make roads safer.

"It's super messed up that a Tesla crash resulting in a broken ankle is front page news and the (approximately) 40,000 people who died in US auto accidents alone in past year get almost no coverage," Musk said in a tweet.

"What's actually amazing about this accident is that a Model S hit a fire truck at 60mph and the driver only broke an ankle."

It remained to be confirmed whether the Autopilot feature was engaged when a Model S collided with the rear of a stopped fire truck in the US state of Utah on May 11.

According to local media, police said the woman at the wheel of the car claimed it was in a self-driving mode and that her attention was on her phone.

Musk complained in a recent earnings call that accidents involving self-driving cars get sensational headlines while the potential for the technology to save lives is downplayed or ignored.

Among the accidents to make headlines was a fiery March 23 crash in California involving Tesla's "Autopilot" feature.

The US National Transportation Safety Board is investigating the accident, which led to the death of a 38-year-old father of two, Walter Huang.

Google Worker Rebellion Against Military Project Grows

    An internal petition called for Google to stay out of "the business of war"
    The petition was gaining support Tuesday
    About 4,000 Google employees were said to have signed it

An internal petition calling for Google to stay out of "the business of war" was gaining support Tuesday, with some workers reportedly quitting to protest a collaboration with the US military.

About 4,000 Google employees were said to have signed a petition that began circulating about three months ago urging the Internet giant to refrain from using artificial intelligence to make US military drones better at recognising what they are monitoring.

Tech news website Gizmodo reported this week that about a dozen Google employees are quitting to take an ethical stand.

The California-based company did not immediately respond to inquiries about what was referred to as Project Maven, which reportedly uses machine learning and engineering talent to distinguish people and objects in drone videos for the Defense Department.

"We believe that Google should not be in the business of war," the petition reads, according to copies posted online.

"Therefore, we ask that Project Maven be cancelled, and that Google draft, publicise and enforce a clear policy stating that neither Google nor its contractors will ever build warfare technology."

'Step away' from killer drones
The Electronic Frontier Foundation, an Internet rights group, and the International Committee for Robot Arms Control (ICRAC) were among those weighing in with support.

While reports indicated that artificial intelligence findings would be reviewed by human analysts, the technology could pave the way for automated targeting systems on armed drones, ICRAC reasoned in an open letter backing the Google employees opposed to the project.

"As military commanders come to see the object recognition algorithms as reliable, it will be tempting to attenuate or even remove human review and oversight for these systems," ICRAC said in the letter.

"We are then just a short step away from authorising autonomous drones to kill automatically, without human supervision or meaningful human control."

Google has gone on the record saying that its work to improve machines' ability to recognise objects is not for offensive uses, but published documents show a "murkier" picture, the EFF's Cindy Cohn and Peter Eckersley said in an online post last month.

"If our reading of the public record is correct, systems that Google is supporting or building would flag people or objects seen by drones for human review, and in some cases this would lead to subsequent missile strikes on those people or objects," said Cohn and Eckersley.

"Those are hefty ethical stakes, even with humans in the loop further along the 'kill chain.'"

The EFF and others welcomed internal Google debate, stressing the need for moral and ethical frameworks regarding the use of artificial intelligence in weaponry.

"The use of AI in weapons systems is a crucially important topic and one that deserves an international public discussion and likely some international agreements to ensure global safety," Cohn and Eckersley said.

"Companies like Google, as well as their counterparts around the world, must consider the consequences and demand real accountability and standards of behaviour from the military agencies that seek their expertise - and from themselves."