For years, the CleanTechnica reader community has amazed me. The quality of thought, diversity of thought, and depth of information are astounding. When I published a story last night on Tesla Full Self Driving (FSD) progress, I didn’t really know what to expect. It’s a true grey area: the technology displays tremendous progress in AI, but it also continues to have glaring flaws and has progressed much more slowly than people initially bullish on it (like me) expected. For several years, I promoted Tesla as having the best approach to developing robotaxis. I have been disappointed with FSD and don’t see it as anywhere close to where, back in 2020, I expected it to be today, but I’m also clearly open to and eager to see it improve. It finally did make a notable improvement (to my eyes) in version 11, but I actually spent most of the article explaining its shortcomings, so I didn’t know how readers would respond.
Well, after a few comments, I saw a great long one that I thought should be a standalone article, and then I saw another that deserved to be featured, and another. … The comment thread is full of great perspectives, and I won’t add all of those here, but I’m going to repost several that add a bit of perspective to my initial piece.
From Mark H:
I am truly amazed by it all.
I am amazed that our human brains are so successful at processing what our optic nerves send them, so that we can drive as successfully as we have.
I am amazed that not-ready-for-prime-time beta software, which can truly harm the driver and others at any time, is safer than the human and his hubris. I certainly could not have imagined this at this early stage of development.
I am amazed that it is still that safe after being released to a non-professional test fleet of over 400,000 drivers.
Whether it forces drivers to pay more attention, I don’t know and don’t care. I am simply amazed that it is this safe. I know NHTSA is dying to shut it down due to public fears of computer killing machines. So far, they have not, due to the data.
Those of us testing are aware of the many, many flaws, which are documented so well. Add to that the video creators looking to gather likes by blurring the line between “watch out” and “watch this.” I am amazed that it is doing better than humans and our hubris. We would not be allowed to monitor the tools on the path to autonomous driving if it weren’t for this. It is so very important to remember that getting us from point A to point B is only one of the goals. And it is secondary to the primary goal summed up by Musk in two words: “Don’t crash.” Honestly, I have to thank some of my fellow humans for not crashing into me after a false slowdown. While I try to disengage when people approach, on a handful of occasions I have been caught in an instant with someone tailgating me followed by a false slowdown.
What the FSD boosters fail to recognize is that any overall safety improvement while in development is completely irrelevant. That is because the one thing FSD enforces better than anything is intense vigilance over what the car is doing, since it keeps proving over and over that it can do something unexpected, stupid, and dangerous at any time.
It forces the driver, who is ultimately responsible, to drive more safely. Of course the safety stats will be impressive.
The great test will come when FSD is good enough that people start letting their guard down. THAT is when it needs to make *zero* mistakes, because the press will have a field day with reports of some defenseless passenger losing their life because the autonomous vehicle made an unexpected, stupid, and dangerous move that no human with a functioning brain would ever make.
You know that day is coming.
From Paul Fosse:
My feelings exactly. This latest beta is a real improvement, but I am still left with the feeling that it could take 1 year or 10 years to finish the software. Why? Because the progress has been so inconsistent. We all know that highway Autopilot/FSD has been excellent and pretty much unchanged (except it just changed to the new stack recently) for about 5 years. Why didn’t Tesla make that hands-free, take liability for the driving, and let people relax on long trips and do other things? It would just have to give us 5 seconds’ notice when it was getting confused and needed attention. That would have been a huge step forward, and Tesla could have done it 4 or 5 years ago. Then work on the city driving.
People will expect a lot more from a robotaxi. Just consider what people expect from normal taxis and you will realize that Tesla is far off.
When I take a taxi or an Uber, I expect the driver to make no errors at all; any error will make me extremely uncomfortable and wish I hadn’t taken a taxi. The vast majority of rides meet my expectations.
Once, an Uber driver made two errors on me; at the second error, I asked him to drop me off on the spot and asked Uber to refund my ride.
It has happened to me a few times that I wasn’t comfortable in a taxi because the driver wouldn’t drive like I wanted. Whether too abrupt or too sluggish at red lights, it reduced my comfort, and I would certainly not take that driver another time if I had the choice. So not only does a taxi driver need to make no errors, but he also needs to drive “normally,” smoothly enough and assertively enough.
It is techno idolatry to expect people to be more tolerant of robotic cars just because they are managed by software. The bar for accepting being driven by an AI is much higher than you seem to realize.
From Daniel Wilder:
Totally agree with the article. Love my Tesla, no regrets about buying FSD in 2019, and I love it on the highway. But using it around town is stressful, and it annoys other drivers with its non-human-like actions. Still, I am enjoying watching it improve and look forward to every new release.
From Baird Edmonds:
This is an honest description of my experience with FSD over the past 2 1/2 years with my M3LR. Still many errors in quite simple conditions, really too many to list here. I love the car and have great admiration for what Tesla has achieved but FSD is far from ready for autonomous driving.
I’ve had FSD for four years, and I don’t think it’s ever going to work. The latest beta is still terrible. Tesla painted themselves into a corner by claiming that the hardware available in 2016 was sufficient to perform Level 5 self-driving and selling it with the promise that they would get there with just software improvements. If they had just sold it as Enhanced Autopilot and left the FSD claims for the future, when better sensors became available, they would have made more progress. Cameras have been getting better, radar has been getting better, FLIR cameras are available, and lidar is getting reasonably cheap. Tesla will have better cameras and a new radar in HW4, and maybe that will help, but it can’t be retrofitted into the million cars that already have FSD, so they are stuck trying to do the impossible: make it work in cars that have inadequate sensor suites. At what point will they admit that they can’t get there with the hardware they have?
That captures pretty much every useful perspective I can think of, and, of course, many I didn’t. But keep the conversation going. There is surely more to say.