When Stanford Roboticists Review Tesla Autopilot, They Don’t Send Their Best

3 Jun


The usual storm of clickbait was pierced by a lightning bolt of ignorance this week, when a Stanford roboticist demonstrated a shocking level of misunderstanding of both Tesla Autopilot and the nomenclature around autonomous cars.

Heather Knight, who works at Stanford’s Department of Mechanical Engineering, claims her research is “half social robots, half autonomous driving.” Based on her May 27th post on Medium, “Tesla Autopilot Review: Bikers will die”, she’s contributing to the very problem one would hope she’s trying to solve.

Degrees don’t bestow wisdom, nor an understanding of the tragic power of titles in a world of TL;DR.

Dear Stanford: if Journalism 101 isn’t a PhD requirement, make it one. Also, please discourage clickbait.

You don’t need to be a Stanford brainiac to know that a headline like “Bikers will die” will become the story. Incredibly, Knight actually claimed to like Tesla Autopilot in a comment posted 48 hours after initial publication, but the damage had been done. Whatever analysis of human-machine interaction (HMI) she hoped to share was buried as the story was widely reposted.

Beyond the title, Knight’s amateurish post has so many errors and omissions it has to be deconstructed line-by-line to comprehend its naïveté. Let’s begin:

“My colleague and I got to take a TESLA Autopilot test drive on highways, curvy California roads, and by the ocean.”

Knight would seem to be off to a good start. California’s highways are the ideal place to use Tesla Autopilot. Curvy roads? Not so much. Does Knight read the news? My 74-year-old mother knows not to “test” Autopilot anywhere but on a highway or in stop-and-go traffic.

Then Knight commits credibility suicide.

Read the rest over at The Drive
