CEO of Waymo John Krafcik is leaving (waymo.com)
149 points by mgreg on April 2, 2021 | 240 comments


Having read hundreds of these sorts of CEO leaving letters I'm a bit jaded and cynical. If you're interested in what I see when I read that message, I rewrote it[1] in more plain language.

Google demands a lot of its "other bets" companies. Some might say it asks too much. Mostly it seems to me that they want a 'moonshot' company that has the original profitability of search advertising, full stop. And while it would be great for them if they found it, there is a lot to be said for having a bunch of businesses that just make anywhere from a few million to a billion dollars a year in revenue.

I don't know if the attitude has changed since I was there, but people creating $20M/year business revenue streams were not considered "successful" back in the day. I found that somewhat self-defeating.

[1] https://gist.github.com/ChuckM/ff5fc8c800c7fe9160483b68ec45a...


I understand the sentiment, but $20M/year really is a waste of time for a $200B/year business. I have a hard time thinking of a way it wouldn't be a loss given the added organizational complexity having those kinds of projects would bring.

I thought the entire purpose of "other bets" was to pursue ideas that have the potential to become $XXB/year revenue streams. So of course they want 'moonshot' companies.


> $20M/year really is a waste of time for a $200B/year business.

That's the thing though, it isn't a waste of time.

One can hear very similar logic from people with investments. They will say "The stock market is returning X% / year and is way better than those bonds with only 3%/year. Investing in bonds is a waste of time."

They say that because they have yet to internalize the value of having diversified their investments. Not everything goes up all the time.

It only makes sense for Google if the Search Ads business never, ever loses its profitability. And yet, it is losing its profitability. As a result, Google has to aggressively cut back on expenses, remove projects, end-of-life products, etc. as that cash cow slowly deflates.

Consider then the alternative where there are 10, 20, even 30 business lines within Google generating $10 - $30M of profit each. 30 businesses at $30M is only $900M, less than 5% of their revenue, but those businesses are SOLID and provide a supply of management talent, consistency, and some bucks to keep the lights on elsewhere.
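For concreteness, the arithmetic behind that claim can be sketched in a few lines. This is a rough back-of-envelope using the hypothetical figures assumed in the comment, not actual Alphabet financials:

```python
# Back-of-envelope math for the diversification argument above.
# All inputs are the comment's assumed figures, not real financials.

n_businesses = 30
profit_each = 30e6          # $30M per business line
core_revenue = 200e9        # ~$200B/year core business

portfolio_total = n_businesses * profit_each
share_of_core = portfolio_total / core_revenue

print(f"Portfolio total: ${portfolio_total / 1e6:.0f}M")   # $900M
print(f"Share of core revenue: {share_of_core:.2%}")       # 0.45%
```

Small relative to the core, as the comment says, but it is $900M of revenue that does not depend on the ads business.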

That is diversification of execution risk. It works the same way investment diversification works: it adds other, lower-margin businesses to the portfolio that are all revenue-positive.

A company like Google can use those businesses to experiment with alternate user support models, management schemes, policies, and communications. All of that helps the "main" company to mature in its thinking about how to be a business. Sadly, executives who have never had any experience other than one wildly successful business tend to think exactly like you do, "Why would I waste time on piddling little products when I've got more cash than I know what to do with being pumped out by my main business?"

Short answer: "Things change."


Do you have any idea how insanely hard it is to create 30 separate $30M businesses from scratch?

Even with the weight of the Google brand, creating new businesses is HARD.

It'd be roughly 1,000X easier to squeeze an extra $900M in revenue out of search than it would be to incubate 30 new mid-size companies.

Instead of going on an insane boondoggle where your brand image is trashed by creating literally thousands of failed companies (the only way you're going to end up with 30 successful ones over the $30M hurdle rate)...why wouldn't Google just buy those 30 companies? They have enough cash on hand to buy 99.9% of Silicon Valley startups outright.

And even then, would the 30 companies they buy grow faster than their core business...or even the S&P 500 at 9% a year? Because otherwise they might as well just dump that money in existing products or return it to shareholders.

If Google buys a bunch of businesses that grow slower, then their valuation and stock price drops dramatically. If investors wanted to own a random sampling of 100 mid-size companies, they'd buy the appropriate index fund! They buy Google because they want a concentrated bet, not an index fund.

This is nowhere near as easy or simple as you think it is.

For all of Google's PR efforts around moonshots and only hiring "the best talent," the vast majority of their revenue still comes from only one product they incubated on their own, the Google search engine. The next biggest bucket comes from external acquisitions (DoubleClick, YouTube, Android).

I think the fact that Google hasn't incubated any big success in the last decade is a good thing! It leaves more room for others to take their place. Why would we want one company to dominate everything forever?


I am aware of the difficulty. Google X is hardly "from scratch" however.

First, you have all of the infrastructure for a business already in place. Even when I was there it was straightforward to get resources allocated to a project.

Next, you have billions of "seats": users the world over are already using Google-branded services every day, have a reasonably good impression of the brand, and typically face a low barrier to trying out something new.

Finally, you have a tremendous number of smart, experienced people you can call on for advice for free! I know a lot of people have left, but when I was there it wasn't uncommon to have an argument about a thing settled by the person who invented the thing weighing in. While it became clear to me that Google and I were not compatible long term, it was still intoxicating to walk around and bounce ideas off some really, really sharp people who could trim months off idea research and development.

As a result, starting businesses within Google was akin to scoring in baseball when you get to start at third base (or maybe second base if it was a longer stretch). That is significantly easier than starting from scratch.


$900M/year is <1% of Alphabet's yearly revenue. I know you mentioned profits, but the OP mentioned revenue so I want to keep the same units because they're very different things. It could very well be that if it was $20M/year in profits, then those projects would not be considered failures.

If something happened to their core business, it's unlikely that those tiny projects (<1% of revenues combined) would save them. The more likely thing is that many of those small projects fail over time and it becomes death by a thousand cuts.

What you said is mostly correct and is exactly what they are doing. The only problem is that at the scale of a trillion dollar company, they need 10, 20, even 30 business lines generating $XB - $XXB of revenue each.


In the case that I am completely familiar with, the business was returning $20M/year in net profit on roughly $180M/year in revenue. Google threw it away.

Their reasoning was that the resource usage to net profit numbers wasn't "good enough." The comparison was always search advertising.


Would you be at liberty to describe at least generally what sort of a business it was?

Thanks!


Yup, these are also the final throes of a dying company. Slowly but surely Google will die, and I can't wait to see the other companies that are born. It's just evolution.


> Consider then the alternative where there are 10, 20, even 30 business lines within Google

They could not even succeed with one of their messengers, there are so many of them out there. What makes you think they will succeed with 30 business lines?


I think this is the wrong approach. If you don't want to deal with an $XM/year company, you can sell it. Google isn't too big to sell stuff, are they? They still sell advertising space, after all. So take it to an IPO, collect the proceeds, and use them to reinvest in more moonshots.

Shutting down a profitable company makes as much sense as throwing away gold. That's literally what you are doing, and no one is too big to throw away money.


Selling off is the obvious — and correct — answer.

But these companies were founded inside Alphabet and grew within Alphabet's infrastructure. This is how they gained a decent chunk of their initial bootstrapping advantage, as described upthread by Chuck. Decoupling them into something suitable for sale is extra time and cost, and beancounters never like time and cost.

So, ironically, part of what makes these attempts possible at all is part of what ends up killing them.


How many XXB/year companies start off making XXB/year or have clear visibility to XXB/year near-term?

For example, I doubt the Airbnb founders, early on or even 5-10 years ago, imagined they'd have multiple billions of revenue.


Amazon would disagree. They have built a decent company out of a bunch of $XM-$XXM businesses.


I give it less than 5 years before Google offloads or shutters Waymo entirely.


This is fantastic.

You accurately describe the Other Bets climate when I left X in 2015, and I hear it's even worse these days from friends who recently left when Loon shut down.


“Mostly it seems to me that they want a 'moonshot' company that has the original profitability of search advertising, full stop.”

Well, when Larry Page wrote “we also like that it means alpha-bet”[1], a lot of readers in tech probably took him at his word. When it comes to investing, “alpha” means winner. As in somebody else loses. As in what gets optimized is _the investor_, not necessarily the community. The analogy to investment return above a benchmark is a trademark Googley delusion: it’s rationalizing a position using math or science even when the position (applauding investing culture) is exclusionary or even toxic to a substantial segment of the population (e.g. the 99% who either don’t invest or who form the ETF benchmark). Larry isn’t talking about excellence, he’s talking about dominating, and slipping that through with a cute abc.xyz domain name.

While it’s very interesting to hear these stories about Google, perhaps the lesson learned is just how blind people are to Google’s greed when technical adventures are dangled in front of them.

[1] https://blog.google/alphabet/google-alphabet/


Your rewrite was extremely well done.

It’s too bad CEOs don’t write their PR like you have done. Like when they drive companies into the ground, lay everyone off, and then they exit with millions.


they do, it just usually gets rewritten to hell in about 30 very anxious email rounds


that's capitalism


I swear that there is a resource curse for businesses with a singular "successful" product line. In order to expand the business and not have people sucked into the "successful" part, the teams need to be split. Once the teams are split the "successful" team wonders why the "new" team sucks.

In the case that the successful business is Ads/Search... it's tough to compete. I'd imagine even within Ads/Search there is a resource curse between AdX/AdWords/Youtube and every other ad business.


Sounds to me like what they should be doing, instead of trying to "reinvest" profits from ads, is pay them out to shareholders, who can more efficiently redistribute them to multiple smaller businesses.


But that would shrink the empire.

The kind of people drawn to "chief" positions are not interested in shrinking their empires.


That is exactly what they should do, but the fact that a majority of voting shares are held by 2 managers gives the company a huge conflict of interest, one that favors growing more and more rather than pulling cash out.


They are doing share buybacks which is usually deemed more tax efficient.


Are you familiar with the Innovator's Dilemma? Because it sounds like you're describing it. https://en.wikipedia.org/wiki/The_Innovator%27s_Dilemma

And yes, splitting is part of a solution. I think discipline is another part.


The fact is that in a company in Google’s position, management attention is a scarcer resource than money. The CEO can only manage about 10 people so each of them has to manage at least $20 billion in business and grow it $2 billion a year. So at 2000 work hours a year, every meeting they take needs to have at least $10 million of impact. Typically the only such meetings will be regarding the company’s main business line, because creating $2 billion of business in a year is impossible unless you are tweaking a $200 billion business. Which is what causes big companies to keep their heads straight ahead focused on what’s already working for them, and what creates opportunity for the next generation of startups.
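The chain of estimates above can be written out explicitly. This sketch uses the comment's own assumed figures; the ~200 substantive meetings per year is my added assumption to reconcile the per-meeting number with the per-hour math:

```python
# Rough reconstruction of the management-attention arithmetic above.
# All inputs are the comment's assumptions, not real figures.

total_revenue = 200e9       # ~$200B/year business
direct_reports = 10         # people a CEO can realistically manage
growth_target = 0.10        # grow the business ~10%/year
work_hours = 2000           # work hours per year

revenue_per_exec = total_revenue / direct_reports        # $20B each
growth_per_exec = revenue_per_exec * growth_target       # $2B/year each

impact_per_hour = growth_per_exec / work_hours           # $1M/hour
# The "$10M per meeting" figure follows if only ~200 of those
# 2000 hours are substantive decision-making meetings (assumption):
impact_per_meeting = growth_per_exec / 200

print(f"${impact_per_hour / 1e6:.0f}M of impact per work hour")  # $1M
print(f"${impact_per_meeting / 1e6:.0f}M of impact per meeting") # $10M
```

Under those assumptions, anything that can't plausibly move the needle by millions of dollars never earns an hour of executive attention, which is the mechanism the comment describes.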


Maybe businesses don't need revenue streams. I wonder by how much the moonshot program increased Google's market cap.


This is a fair question. After all, an increase in market cap means, as an exec, your paper wealth is that much greater.

The challenge is that market cap isn't durable. Specifically, businesses that just provide net revenue, even if it is only a "small" amount relative to the overall company, are helpful in mitigating shocks to the market.

So it may be intoxicating that the stock going up 10% adds a few billion dollars to your net worth, but it can also go down 10 or 20% and that wealth goes poof![1]

[1] The reality is that paper wealth is not worth anything, only after it has been diversified can you really count it as part of your net worth. Again, this is my lived experience :-).


The Moonshot program created a lot of optimism about Google for many years.

Over that long time, I'm sure there were benefits for the executives' personal stock (don't CEOs diversify every few years?), and for Google in general (e.g. better talent).


Nice rewrite, now I no longer need to bother reading the original!


Man, this is spot on and great insight. I'd love it if people were really free to be this honest without repercussion. Thanks for doing it.


This is the funniest thing I've read in a while. And potentially true.


> I don't know if the attitude has changed since I was there, but people creating $20M/year business revenue streams were not considered "successful" back in the day. I found that somewhat self-defeating.

It's the "I stumbled onto this giant pile of cash, and if you don't stumble onto one too, you obviously aren't stumbling hard enough" attitude of management.


Could you translate the last section too? Would love to read that as well.


"Please don't hate me." ?


What about the response?


Probably some error in my filters but I checked it twice and couldn't find any actual semantic content :-)


I am 32. I could have written this 7 years ago. I no longer accept job offers at any salary in this industry.

In fact, all my emails and communications are written in this style all the time with everyone - what shall I do now wise one? (serious question)

You wanna start a band or whatever. Maybe we can start like an Elks club thing?


Are you the CEO or the non-executive employee? One of the reasons I put my email in my profile is that it brings me interesting questions. :-)


I'm gainfully unemployed and like Luke you can't make me go back!

I've job hopped looking for a place where people don't engage in the Kabuki, but apparently all the world's a stage. Maybe I'll make one of those lil ships in a bottle this year.


Perhaps you will find, as I did, that the secret here is having a purpose. I know a number of people in the same situation, they don't "have" to work but when they don't have full time employment they just kind of wander this way and that. This becomes a problem when they realize that there is no 'win' in life, you just die. So the time wasted wandering is just that, time they never had to truly live.

Given my experience and my interest in systems, I have become something of a 'journeyman' engineer in the traditional sense. I like helping folks and I like figuring things out. I charge money for that because it weeds out the freeloaders, but I don't charge a lot of money. My terms are that if it's not working out, we'll part friends, but we will part. As a result I've managed to meet some manager types whose only tool in their manager toolbox is "I'll fire you if you don't do that" (which is a pretty weak toolbox).

It is an exceptional thing to be where you are, at the age you are. How you choose to spend the years can be very satisfying and impactful, or a source of anger and regret.


Thank you.


that was hilarious :)


All of the big players in self-driving have shown that while they can try to solve the tech problem, nobody can solve the team problem. Self-driving brings together people with very diverse backgrounds (perception, planning, controls, hardware, safety, rideshare product, etc), very diverse incentives (established automakers vs start-up founders vs VC investors), and throws them in a pot with a huge amount of money and greed. And does this to serve a public who largely doesn't trust self-driving AI today.

Waymo has had a ton of notable departures:

* Urmson made a good amount of money and left for Aurora.

* The founders of Nuro cashed in $40m each and left for their own thing.

* Levandowski made nearly a quarter billion and took off.

* Drago worked on Streetview and more Google-centric things, then went to Zoox to make about $100k (lol), then returned to Waymo.

* Now the era of Krafcik is coming to an end.

The perhaps unique thing about the self-driving problem is that all of the above individuals made tons of money without having delivered equitable value to end-users. At least not today. When Google was bleeding headcount to Facebook, both companies were making bank. It's surreal to see people minted with money for life and yet deliver so little value. That sort of arbitrage usually only happens on Wall Street.

I think it's worth reflecting on the era of Krafcik as a general success: he was brought in to do hard work and he generally did a good job. But by no means did he (nor any of his predecessors) solve the "team" problem. Krafcik himself couldn't stick with the team he helped shape, nor the userbase he helped grow, at least not long enough to actually deliver widespread value on the scale of his own compensation.


This seems like a cause and effect confusion.

Because no one can solve the tech problem, no one wants to stick around for the point where the failure is obvious.

Especially, for a while, self-driving cars have been at the level that impressive demos are possible but actual deployment isn't. So a fine career move is sticking around for long enough to create a demo and then leaving and blaming the failure on your successor.

The reason the tech can't be deployed is that while 99% of driving problems involve just a complex, adaptive system, the remaining 1% or 0.01% involve "understanding what's happening", a far higher bar, one that requires a system well beyond what exists today.


This describes 99% of ML projects in industry. There is a reason the incentives are such that a science team creates a solution and then, in theory, an engineering team deploys it. The science team gets credit for the analysis/prototype, but the prototype is so delayed getting to production that only the engineers take the blame, or the scientists have moved on.


> This describes 99% of ML projects in industry.

This may well be true given how fragile a construct most deep learning systems are.

But there are still substantially different deployment issues for different systems. Google can deploy a question-answerer or a search-by-image-description system and lowered accuracy in practice doesn't really do much damage. Google can't deploy a self-driving car with low safety in the same way.


I'm willing to longbets $5000/20 years:

We won't have self driving cars that replace 30% or more of driving on existing roads and highways by 2040.

The problem is too hard and everyone's drunk the ML kool-aid. (Just like everyone is drinking the NFT kool-aid right now.)

Hype, unrealistic dreams, and spin.

We'll be on Mars before we have generally available self-driving cars that do not require humans.


I think it will happen, but in China. We are trying to solve the wrong problem.

When cars were introduced, people kept getting hit in the street. "Jaywalking" as a crime was created to shift the blame away from the technology (cars) to the victims (pedestrians).

For self-driving vehicles, it suffices that we outlaw the edge cases, and enforce it to make them insignificant. The Uber ATC crash? Blame the jaywalker. Kid chasing a ball in the street? Blame the parents. Treat self-driving cars the same way that we treat trains: get out of the way, or you've no one to blame but yourself.

What that leaves practically is preventing crashes with the static environment, self-driving cars, and "normal" road users (i.e. ones that can't be blamed if a collision occurs). This is much easier than the plastic bags and animals and cyclists and jaywalkers and wacky drivers that form most of the remaining unhandled 1%.

The only country that can pull this off, IMO, is China. The mix of authoritarianism plus technical prowess allows the missing 1% of cases to be bulldozed in the name of progress.


The ruling party can’t even keep the factories north of Beijing from dumping pollution into the capital city’s air (and thus the lungs of the leadership and their families) for more than a few weeks at a time. I wouldn’t count on them here.


I'd consider taking the bet, but only against, "We'll be on Mars before we have generally available self-driving cars that do not require humans."

Both seem unlikely.


I think we should throw commercial fusion in there for the trifecta.


Last .01% = “assume AGI…”


>It's surreal to see people minted with money for life and yet deliver so little value. That sort of arbitrage usually only happens on Wall Street.

This must be a joke? The multi-million-dollar exit with zero actual earnings is an almost uniquely tech phenomenon. There are Wall Streeters who earned high pay for generating high earnings that many years later were found to be value-destructive, but at least at the time they were paid, they were cash-generating. Wall Street jobs paid so much because comp was structurally a function of cash generated.


The difference between “selling a growth story” and “selling current profits booked by incurring future liabilities” seems entirely like a semantic one. What’s the practical difference?


Sometimes the code monkeys actually code something.


What evidence is there of a team problem? Building cars requires all those things, and companies manage to do it all the time. Adding the self-driving tech in there, OK, but these cars already have teams upon teams of tech already working on them. Each part of the onboard computer has a different tech team which works with a different company it's outsourced to. In fact, sometimes different teams work with the same outsourcing company for different parts. That seems very much a solved problem.

It also seems like you're not really factoring in how few places self-driving cars are approved in. It takes approximately 5 years to get a standard car to market; for a self-driving car that has been tested in just 10 states, it's unreasonable to expect the value to end users you're expecting to see here. Getting the car to market seems to be the actual problem, and that seems more of a legal issue of no one wanting to say yes on a major scale. This is a very slow-moving industry: electric cars are basically a solved problem, yet we'll be seeing people buying combustion-engine cars for decades.

How long has this project been going on? 12 years? Seeing management change over that amount of time seems normal. It would be notable if we were seeing 1 or 2 people a month leave for 6 months.


Well, I think that they wouldn't be able to solve the tech problem even if they completely solved any and all team problems, so as far as the end users are concerned it doesn't really matter much whether they can solve the team problem. I think that full self driving almost certainly requires artificial general intelligence, and I don't think humanity is anywhere close to creating that, so all current self-driving car projects will end in failure.

I wonder to what extent the leaders of these self-driving car projects understand that their efforts to create the technology have almost no chance of success. How many of them understand this but carry on due to either wild optimism, or to just make money, or on the principle of "it's not up to me to decide whether to try, someone else is paying me to try"?

The degree to which self-driving car hype seized even the minds of many otherwise smart people in the last decade has been strange for me to see.


Don't these facts prove the opposite of your claim? You want to build a company with a culture that extends beyond individuals. All of these people left Waymo and the project is still robust. That's exactly what you want! Did you think Google was dead when Craig Silverstein left?


A lot of Googlers left Google for Facebook and yet Search is still robust. And Google still has revenue! Though Facebook is growing faster...

I’m not saying Waymo is dead, I’m saying creating a sustainable work culture for self-driving is really hard, and Waymo evidently doesn't have a solution for that. I actually thought Krafcik would be a balancer, but it seems things didn't work out. That said, Waymo has made some other major HR mistakes... they hired (and soon fired) Tawni Cranz, one of the most toxic HR heads you could find...


I was talking with a friend who worked at IBM in its heyday. He mentioned that a lot of the time, it wasn't so much that talent got used; it was that talent was kept from being used elsewhere. Stifling innovation is just as effective as breeding it when you're already on top.


Bell Labs was pretty explicit about this being the reason they existed.

It was also the reason they filed patents. Like the answering machine patent: to prevent it from existing (commercially) for as long as possible.


I'm almost convinced many huge tech companies have massive staff simply to keep everyone else from hiring them. They do these "senior software engineer retention programs" paying employees to stay busy on insignificant projects. If none of the smart people are working for scrappy startups, nobody is going to invent something that disrupts the industry.


The problem is a venture/industry problem, not a teams-and-individuals one. VCs have been heavily investing in anything remotely promising because, let's face it, since the smartphone, tech hasn't found its solid step to the next level. VCs have the money, and they throw it at promising companies. Lots of people get rich in the process, but the end result is delayed: either no working product or no working business (unit economics). Examples are all these ride-hailing apps, scooter apps, and food delivery companies, where billions have been spent, losses are recorded annually, and everyone is happy to move the ball down the field and make it tomorrow's problem.


I hear you, in that since the smart-phone there has not been a new consumer platform shift, with all the value creation for new B2C companies that a platform shift implies.

That said, other parts of the ecosystem are having incredible bursts of innovation and new products: B2B SaaS, cloud providers, space (e.g. Starlink), biotech (e.g. Moderna and mRNA technology), semiconductor/chip companies (TSMC, Nvidia, and many others).

If you're wondering why VCs are eager to back new companies with small revenue but fast growth, it's because interest rates are very low and the returns to winning are very high.


Does Waymo have product market fit? I know you can ride around in one in Phoenix and they are learning how to manage those vehicles with remote assistance as necessary but is that sufficient?


Just wanted to say, from the outside, John Krafcik seemed to me like a strong leader.

I used to live in Mountain View, and at one point I saw a few Waymo vans speeding on El Camino (this was 3-4 years ago, when they were more active). It was just barely speeding, and they were being manually driven at the time, but even though I work for a competitor I was concerned that if an accident happened it'd be a bad publicity event; i.e., Waymo vehicles should always be extra, extra careful. It just made me wonder what was going on. So I connected with him on LinkedIn, and then ended up reaching out to some QA folks, I believe.

Anyway, John was super responsive and I always appreciated that. From what I could see (just one anecdote), he was a CEO who dived deep into customer concerns. There's a lot to be said for that.

My two cents is it's a very delicate space. One bad event and the whole industry can be set back. So it's an area (despite extremely cutting-edge tech; deep learning, etc.) that has to roll out gradually.


From the outside, he doesn't seem like a strong leader at all.

Besides him not leaving a clear successor[1][2], I think his communication is disingenuous. The kind of management-speak that aims to hide poor decision-making behind a facade of cheer and care.

People are scared of "might makes right", but I've found "nice makes right" to be far more insidious and pernicious in our society.

1. https://news.ycombinator.com/item?id=26674228

2. https://a16z.com/2013/07/03/shared-command-2/


I talked to someone who was working with BMW 5 years ago and they had a similar story: automotive companies hated Tesla because they all understood that one screw-up on self-driving probably meant the death of their brand in terms of safety, and probably a massive setback for the entire industry. That's something a startup can afford, but for a large car manufacturer it's an existential threat.


I still think it was a mistake to hire this guy and partner with the auto industry dinosaurs. Should have kept going down the path of being independent and building their own cars without steering wheels. I never liked "Waymo" as a brand either.

Maybe just retrofitting off-the-shelf cars made sense if they thought they needed hundreds of thousands of cars ASAP, but it turns out they would have had plenty of time to develop the car and even build factories for it while waiting for the technology to mature. And they didn't really need to partner with any manufacturers just to retrofit existing cars anyway.


You are very, very wrong. I saw the sausage being made. The "koala" car was ... not a success.

Building a car is so hard. And Google mostly sucks at hardware manufacturing. There are good reasons why their market share of Chromebooks, Android phones, etc. is vanishingly small.


Not that building a car is easy, but there are contract manufacturers that can build cars and even design cars for you. And they won't have quite as much of a conflict of interest, in that their business model isn't selling cars direct to consumers, which Waymo is explicitly trying to disrupt.

The koala cars may not have been successful but that was many years ago, years that could have been spent making something better.


Do you know how insanely expensive building cars is? Waymo is already bleeding money. If they were making their own cars, they would be bleeding billions, maybe billions per quarter.

Are you following the car industry? Car startups get crushed regularly, and almost none are successful. Pretty much all of them bleed money like crazy.

Tesla went almost bankrupt over and over again.


This is a highly underrated take. The reason they didn't do so was industry pressure/competition.

The counter point could be that the technology is there, but the regulation is not.

Why must everything be retrofitted versus a complete system overhaul? The initial costs may be immense but the return on investment will be far greater.

Just my two cents.


Partnering with an existing car company helps a lot. They know how to build vehicles at scale and they also can provide a lot of support when it comes to interfacing with the control systems for the car. Controlling the car (engine, doors, wheels, wipers, lights, etc) is a surprisingly difficult task on its own. It requires a lot of control systems knowledge, a complex test infrastructure, and a lot of specialized talent that's probably hard to get. It could've been seen as too difficult to take on in addition to figuring out how to actually do all the self driving specific tasks (mapping, planning, detection, etc.).


> They know how to build vehicles at scale

The actual car companies also outsource a lot of stuff to other manufacturers, e.g. lights, wiring, etc. I would imagine understanding and maintaining those relationships would also be a hassle for Google; it's just better that a car company does all that.


This is why full self driving coming from a company like Tesla matters. Its vertical integration with hardware, manufacturing, battery and chip production means it doesn't need any of the OEM legacy auto companies. They are already building the best electric cars available. Nobody comes close. When FSD 9.x is released the zero-to-one moment happens and there'll be no going back. The other companies are going to do nothing but play catch-up and many will go bankrupt.


Tesla's FSD doesn't work and they basically admit it is a level 2 system and may never be more. It's been Elon with the football and Charlie Brown running for the kick over and over.


The only thing they said is that in California it's a Level 2 system for now. Their long-term strategy is the same as it has always been.

The idea that they believe it will never be more than Level 2 is factually wrong. You might believe that, but they don't.


That's sad.

I must be the only one who thinks full self driving will not happen before the year 2100. Is there an example of anything that's completely automated that's dangerous? I cannot think of a single thing. At best things are partially automated and still need humans.

The thing is, if you have "partial" self driving, you might as well just drive yourself, IMHO. The consequence of being wrong with a car is literally your life. I know people like comparing this with airplanes, trains, or the elevator, but driving in the USA is just too chaotic an environment. It's nothing like any of those.


> Is there an example of anything that's completely automated that's dangerous?

I think this is just the usual goal post moving, if a machine does it then we decide it wasn't really dangerous for a machine to do it.

Garmin Autoland will take a plane which is otherwise working and is in the sky and put it back on the ground because the pilot has a problem (e.g. your plane nut husband finally has another stroke† while taking you and the kids cross country, you've never learned to fly but you just press the button like you were taught and the plane will tell you to remain calm while it figures out where to land and gets itself back down). It isn't cheap (the system needs to control flight surfaces, engines, radios, brakes, more or less everything on a plane) and of course it can't fix a broken plane, but a random non-pilot is going to do a much worse job and "land an aeroplane" seems like it counts as dangerous to me.

> The thing is, if you have "partial" self driving, you might as well just drive yourself IMHO.

If you live in the relatively small service zone for Waymo, you just get in the car and it drives you elsewhere in the zone, like a taxi, except the car is driving. Waymo doesn't want you "partially" driving anything, it doesn't want you near the wheel or pedals, it would prefer if you watch a video or read a book.

† In some countries private pilots have to be fit, but the US has gradually decided that eh, it's your plane, your risk, the main obstacle today for medically unfit private pilots is getting insurance.


> Garmin Autoland will take a plane which is otherwise working and is in the sky and put it back on the ground because the pilot has a problem...

Indeed. For reasonable values of "land" and "on the ground." It's better than a non-trained pilot, but it also relies heavily on the concept of "emergency" in aviation, in which everything else is put on hold and everyone else gets out of the way.

It has basically nothing to do with driving a car.

First, the sky is large - "Big Sky Theory" is the term for it, and it generally means that you can fly around however and wherever you want without hitting anyone. Yes, there are controlled airspace segments, various levels for flying around, but the reality is that if you just ignored all of that and flew a plane from A to B, you wouldn't hit anyone (almost always). It breaks down a bit around airports, and there have been the occasional "plane A not talking to anyone and plane B not talking to anyone colliding" events, but they are exceedingly rare.

Second, large airports are going to be (almost by definition) controlled airspace, with ground response crews. And there are common airbands for radio communication. The Garmin system relies on these things.

If you hit the "Oh Crap!" button, yes. It will control an otherwise operational plane down to landing at some airport (I believe it prefers controlled airspace and emergency services, though I don't know details), and it will clear the road in front of it by setting the appropriate emergency transponder code and broadcasting prerecorded messages on the appropriate frequency that basically amount to "Get out of my way, I'm coming in for a landing at this airport on this runway." Which, for an incapacitated pilot, is absolutely the right thing to do. I'm fairly sure it won't clear the active runway, though. That's a problem for the humans on the ground.

But - literally everyone else in the sky will get out of their way, and ATC will ensure that. If there's an A380 on approach and this system is triggered near the airport, if ATC needs the A380 to get out of the way, they'll tell them to go around and go hold somewhere until the Cirrus is dealt with.

It's really a very, very different class of problem than self driving cars.


> I think this is just the usual goal post moving, if a machine does it then we decide it wasn't really dangerous for a machine to do it.

I've seen a lot of goalpost moving in the other direction on this one, to be honest - Self Driving Cars started as "never have an accident again!" and have slowly migrated to "look, you suck at driving, the robot sucks less, so stop bitching when the robot runs over grandma"


*runs over grandma who is illegally and suddenly trying to cross the street right in front of the car


Yep, turns out that's the sorta shit you have to deal with when operating a motor vehicle.


Is your bar that the environment should adapt to the tech rather than the other way around? If so, I’d encourage you to take a course in human factors engineering


*runs into a stationary median which is literally bolted into the road, in broad daylight


Have you seen the aerial pictures of the area? There's pretty clearly a path through the median that's inviting pedestrians to cross there.


*in the dark, on a highway


It is so, so much easier to have a computer land a plane than to travel through a road infrastructure made for humans.

There are no roads, no obstacles, usually not even other planes constraining your plane for any autopilot situation. With autolanding, "other planes" might start becoming an issue, but it's still a vastly different level from the momentary and immediate coordination necessary between cars on public roadways.


Also, the operating environment is rather stable for aircraft. You won't find things like construction cones blocking off airspace that was once available. Even if airspace availability changed, it's not likely to create an immediate hazard (with some caveats, like military testing ranges).


Autoland is impressive but it really is more equivalent to a system (in a car) that automatically applies the brakes than to a full self driving system.

Autopilot is simple and a solved problem. Self driving is neither.


I'd very much not want to be in an airplane whose landing strategy was "automatically apply the brakes". Navigating to an airport, lining up with the runway, and gently bringing down the plane seems significantly harder than auto-braking.


Garmin Autoland is much more than "apply the brakes", as I understand. But it's still guiding your plane through an essentially empty environment, not something in the same ballpark as autonomous driving.


Well, more or less; the "apply the brakes" system still has to detect obstacles and calculate the optimal way to brake, so it has to deal with dynamic environments. Autoland, not so much.


> I think this is just the usual goal post moving, if a machine does it then we decide it wasn't really dangerous for a machine to do it.

I strongly disagree. Most machines are not capable of killing you if they make a mistake.

Do you have an example of something humans did that could kill them that was automated completely that still can kill them, but generally does not anymore? An elevator is one example, but in the case of an elevator it's a very bounded problem to solve, and even still hundreds die for unnecessary elevator deaths yearly.

Not to mention an elevator is not an entropic environment. A full self driving car would have to be able to deal with ice suddenly on the road, people crashing around it, etc.


Fully automated train systems? https://en.wikipedia.org/wiki/List_of_automated_train_system...

Not nearly as complex a problem as self driving cars, but I'd still rather not get hit by a train


Yes, not nearly. Not anywhere close to the complexity needed for self driving cars. Trains run on closed, limited, well-signaled infrastructure, which (comparably) makes it fairly easy to "avoid other trains" (as long as the signaling, coordinated externally, works--if it doesn't the train is likely programmed to just stop until it does again).


My understanding of trains is that most of the safety advances have come from better signaling, not the automation of the trains themselves.


> hundreds die for unnecessary elevator deaths yearly

Elevator deaths are extremely rare in industrialized countries (a couple dozen per year in the USA), and most are of people working on the elevator (e.g. accidentally falling down the elevator shaft), not passengers riding in an elevator. I think construction elevators are also quite a bit less safe than ordinary passenger elevators.

Typical elevators are one of the safest forms of transportation, substantially safer than stairs or ladders.


You mean, like the entire rest of his comment? Garmin Autoland...


The Garmin Autoland is not equivalent to full self driving? That's like self parking which has existed for a while now. In any case planes are inherently orders of magnitude safer than cars to begin with due to the lack of obstacles/chaos.


It frustrates me to no end when people compare autolanding (even the Garmin kind), or worse, general autopilots, with self-driving cars.

Anyone who does that has either never traveled in an airplane (even just as a passenger), or just never observed and thought about the vastly different levels of interaction planes have with their environment.


Did you forget what you originally wrote? You said:

> Do you have an example of something humans did that could kill them that was automated completely that still can kill them, but generally does not anymore?

And I pointed out that the parent comment had already provided an example. Now you are arguing with me that the example given, about Garmin Autoland, isn't comparable to self-driving... Which isn't at all the example you asked for in the first place. Maybe focus and read a little more, write a little less. You'll be more easily understood.


> Do you have an example of something humans did that could kill them that was automated completely that still can kill them, but generally does not anymore?

My understanding from looking at the research is that airplanes flown by experienced pilots rarely killed people... and they still don't, even when (partially) automated. There doesn't appear to be definitive evidence that autopilot is safer [2], as many of the innovations in safety have been conflated with procedural improvements.

In fact, there are studies [1] that suggest autopilot is actually making flights less safe, as pilots become complacent and airplanes do not fly themselves completely from point to point, including taxiing.

In other words, no, plane automation is not an example as it does not satisfy the criteria of "generally does not" compared to the baseline of no automation.

[1] https://journals.sagepub.com/doi/abs/10.1177/001872088502700...

[2] https://www.eurocontrol.int/sites/default/files/publication/...

Try not to be so condescending, sheesh.


You wrote:

> The Garmin Autoland is not equivalent to full self driving?

This has nothing to do with your argument criteria of:

> Do you have an example of something humans did that could kill them that was automated completely that still can kill them, but generally does not anymore?

> Try not to be so condescending, sheesh.

I didn't mean that to be condescending out of turn, but if you are going to engage in an argument by posting a strong disagreement and an opinion:

> I strongly disagree. Most machines are not capable of killing you because it makes a mistake.

You cannot change the criteria for an example that you asked for simply because you don't feel like arguing that the example did or did not meet your criteria, nor can you do so simply because you didn't bother to read it carefully.

So no, I'm not being condescending out of turn, you really needed some correction on that. Sheesh.


How is it the goalposts moving? Dangerous in the context of that question doesn't mean the solution is dangerous. Dangerous there means that a screwup would be dangerous. Or maybe where a 'simple' screwup would be dangerous for some subjective value of 'simple.'

Garmin autoland seems like a perfect example of something dangerous that's completely automated.


No point in comparing planes to cars.


You can, today, go to Phoenix, Arizona, download the Waymo One app, and summon a fully self-driving vehicle. This happened quietly at the end of 2020 and was overshadowed by the ongoing pandemic doom and gloom. https://blog.waymo.com/2020/10/waymo-is-opening-its-fully-dr...

On the subject of danger. Literally everything we do is dangerous to some degree. Everything. Self-driving does not need to be perfect, but only better than human drivers. We're likely already at that point with Waymo vehicles.

On the subject of automation. Humanity is on track to have the computational power to perform whole brain emulation decades before 2100. Even if we don't solve the general AI problem through other pathways, we will solve it through this path. Ethical problems aside, once this is achieved, everything will be automatable at a human level of competence.


> On the subject of danger. Literally everything we do is dangerous to some degree.

I don't disagree with your take and I'm a self driving car proponent, but I'm worried about what process we take to get there.

One thing I've taken away from the pandemic is that people seem to have no problem imposing their tolerance for risk on others. Seems like we are on a path to play this dynamic out again in how self-driving cars come to market unless that safety profile is really well controlled and understandable.

Even if at a population-level self-driving is slightly safer statistically than person-driving, there are enough edge cases to give me pause right now, and at the individual-level it may raise my risk either as a pedestrian or driver and certainly changes what is predictable behavior [1].

[1]: https://arstechnica.com/cars/2021/04/why-its-so-hard-to-prov...


> On the subject of automation. Humanity is on track to have the computational power to perform whole brain emulation decades before 2100. Even if we don't solve the general AI problem through other pathways, we will solve it through this path. Ethical problems aside, once this is achieved, everything will be automatable at a human level of competence.

Is this actually true? I haven't heard of this. You're saying we will have an AI as good as a grown educated adult before 2100? I can't believe this at all - do you have a citation?


This is the kind of thing that is very misleadingly true. If you simply take growth in computing power and extrapolate to the end of the century, and compare it to the current best estimate of human brain computing power, then yes, we'll get there.

But 1) that is an estimate of human brain computing power, based on number of neurons and possible connections. We have no idea if that is really a valid unit of computing "power." Do brain only compute things via voltage thresholds across synapses? Then yes, maybe this is an accurate estimate. But there are a whole lot of subcellular signal cascades happening at molecular levels we can't even begin to count and we have no idea whether or not those are also computing something accessible to the rest of the brain. 2) Matching the computing power of a brain doesn't mean you can emulate it. Emulating something requires knowing the target architecture and software. It is possible we can figure out exactly what a brain is doing to an accurate enough level to emulate it by the end of the century, but we certainly don't know how to do that right now, even if we had the computing power. Remote imaging techniques are not nearly good enough (because again, so many of the processes are subcellular) and you can't open up a brain without destroying it. This is actually a much broader problem in biology and we have figured out ways to partially dissect some animals while still keeping them alive long enough to figure out how a system works in vivo, but doing that with a brain is not something we have ever come remotely close to doing and there isn't really any ethical way to even think of how we can try to figure it out.


Yes, and the methodology I've described (whole brain emulation/WBE) is the worst-case, brute-force, but guaranteed approach. The following diagram captures the expected growth rate of computing power, and contrasts it with several thresholds (in blue) for emulation fidelity ultimately required: https://en.wikipedia.org/wiki/Mind_uploading#/media/File:Who...

The human brain works, therefore this technique will work, it's just a matter of having enough computing power.

I will reiterate, though: we are pursuing various alternative pathways to artificial intelligence, and modern machine learning has already demonstrated superhuman performance within constrained domains. It's my personal belief that we will achieve human-level general intelligence long before WBE becomes practical.


This sounds bonkers to me. We don't even understand fully what it would take to emulate a brain right now. I don't believe this for one second.


Emulating a brain doesn't necessarily mean you can tell it what to do. After all, you can't always tell humans what to do, and this might somehow be a necessary component of human-level intelligence.


"There are more synapses in each human brain than stars in the known universe". [1]

I don't believe there is anything guaranteed about "whole [human] brain emulation" in this century.

[1] This isn't quite true: we estimate there are around 10^22 to 10^24 stars in the universe, while estimates for the number of synapses range from 10^14 to 10^15.
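A quick order-of-magnitude check, using only the estimates quoted in the footnote above:

```python
# Compare the most synapse-friendly pairing of the footnote's estimates.
stars_low = 1e22        # low estimate: stars in the observable universe
synapses_high = 1e15    # high estimate: synapses in one human brain

# Even then, stars win by about 7 orders of magnitude,
# so the popular claim has it backwards.
ratio = stars_low / synapses_high
print(f"stars / synapses >= {ratio:.0e}")  # 1e+07
```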


Would it not be feasible yet for very simple organisms?


It's currently being worked on [1]. It's only recently that we've managed to capture the complete nervous system connectivity of C. elegans, one of the simplest organisms with a nervous system. [2]

[1] https://en.wikipedia.org/wiki/OpenWorm

[2] https://www.nature.com/articles/s41586-019-1352-7


No, it's not remotely true. Roughly equating neurons to petaflops, counting flops, drawing the best fitting straight line, and then announcing a date where the line gets high enough is not a remotely reasonable way to estimate a "worst case guarantee" date.

It's not impossible that some such thing could happen, but anyone who tells you that it can't possibly fail to be achieved in the next 80 years is lying, either to you or to themselves.


Please suggest an alternative estimation technique then.

There is a certain # of calculations/s necessary to meaningfully emulate a neuron. We currently have no idea what level of fidelity is required, but we can make guesses for each of the levels of fidelity that might be required.

The estimates that assume the highest (and likely unnecessary) level of fidelity currently top out at 2100.

For the record, I never claimed that failure was impossible, I used the phrase "on track". It's totally possible that there might be some surprise quantum weirdness going on that would be intractable to emulate with classical computers, but we've found no evidence of that to date.


We have no idea what kind of "whole brain emulation" it would take to produce what we understand as intelligence, so all this seems highly speculative.


We know it only takes 20W of power for the real thing to work.

Let that sink in. We know it isn’t impossible to run a general intelligence computer on 20W because every single one of us is living proof that it is possible. There is no reason why something man made shouldn’t be able to do the same thing. How to do that is of course a different matter, but it isn’t speculative that it can be done, it’s a direct consequence of our own existence.


> You can, today, go to Phoenix, Arizona, download the Waymo One app, and summon a fully self-driving vehicle.

Consider Google's (okay, Alphabet's) standards for success though.

My guess is that a metro area with less than 0.5% of the US population does not qualify as a successful product. Maybe as a successful small beta test.

Also, given Arizona's overly cozy relationship with self-driving [1], I would not necessarily trust a program too much on the basis that it's operating there.

-------------------------------------------------------

[1] https://www.theguardian.com/technology/2018/mar/28/uber-ariz...


Can you really? I remember that announcement and there was a lot of fudged language. You have to be a "member of the public service". You can download the app because it's in the app store, which is not the same thing as hailing a driverless vehicle.

It's not clear that members of the general public can actually sign up for it. Has anyone done that and taken a driverless ride in Phoenix? Happy to get more updated info / be corrected.

(And if this seems like an unreasonable amount of suspicion, I invite others to go back and read their press releases from the last 3 or 4 years, and tell me if that gives you an accurate picture of where the service is today.)


In that 50 sq mile Chandler area, anyone can download the app and hail a ride. I don’t live in AZ, but I’ve seen several YouTube videos and r/selfdrivingcars posts/comments that can confirm it.


>On the subject of danger. Literally everything we do is dangerous to some degree. Everything. Self-driving does not need to be perfect, but only better than human drivers.

The problem is this statement is written like a technocrat wrote it rather than someone who makes public policy. From an engineering perspective, it’s true that it would only need to be better than a human driver. To be implemented though, it needs approval in the public sphere and not just engineers. This presents a very real publicity problem.

You are correct that everything in life contains risk. Risk is defined as severity x probability. While the severity may be the same, I think humans judge the probability very differently between human and autonomous drivers.

I think it’s rooted in the need for humans to understand what’s under the hood (no pun intended) to trust the decision making capability. We already have this with human drivers through the tool of empathy evolved over millions of years. We can reasonably assume we know what humans will do. (Incidentally it’s also why witnessing someone with mental illness puts us on edge). We have no such ability to decipher an autonomous car, especially for the layman. So this distorts the uncertainty in the risk assessment and any accident can disproportionately cause our assumption of risk to elevate.
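The severity x probability framing above can be sketched in a few lines. The rates here are purely hypothetical, chosen only to illustrate how a perception gap distorts the assessment:

```python
# Toy model of risk = severity x probability.
# All numeric values are hypothetical, for illustration only.
def risk(severity: float, probability: float) -> float:
    return severity * probability

severity = 1.0            # same worst case either way: a fatal crash
actual_av = 0.5e-6        # suppose AVs are measurably safer per mile...
perceived_av = 5e-6       # ...but are *perceived* as far riskier
actual_human = 1e-6

print(risk(severity, actual_av) < risk(severity, actual_human))     # True
print(risk(severity, perceived_av) > risk(severity, actual_human))  # True
```

The same event can thus come out as lower risk on measured rates yet higher risk on perceived ones, which is the publicity problem in a nutshell.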


For me personally, your point really resonates. I trust human drivers in part because our incentives are aligned: neither one of us wants to get into an accident. The incentives of a machine learning model are harder to introspect on, as you point out.

I think there's another issue here, too. It relates to the observation that folks have a fear of flying, but not really a fear of driving, despite driving having about a 1 in 98 chance of killing you over your lifetime, while for aircraft it's more like 1 in 7178 (2008 data from USA Today...hopefully directionally correct). It's been discussed in research extensively[0], and seems to relate to whether or not the person feels they have a sense of personal control. I think your point is well-taken that the technological risks and the publicity challenge are independent barriers to the widespread adoption of AVs.

[0]: https://oxfordre.com/communication/view/10.1093/acrefore/978...


There have been a few downvotes without elaboration on the disagreement, so I'll add: if you think this is an incorrect take, spend some time thinking about how well policy and science were received during the pandemic. Policy will never be completely data-driven because humans do not intuitively think in statistical terms.


Is there any update on how that truly driverless program is going for Waymo?


2100 is almost 80 years from now. Think back to 1940 and how different our world was then. Cars will certainly be automated to some degree. And that will cause a great reduction in the almost 1.35 million people killed globally by motor vehicle accidents.

A US statistic: in 2010, there were an estimated 5,419,000 crashes, 30,296 of them fatal, killing 32,999 people and injuring 2,239,000. It is hard for me to imagine a scenario where automated driving is as unsafe as that.
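For what it's worth, quick arithmetic on those quoted 2010 figures gives the per-crash rates:

```python
# Back-of-the-envelope rates from the 2010 US figures quoted above.
crashes = 5_419_000
fatal_crashes = 30_296
injured = 2_239_000

print(f"fatal:  {fatal_crashes / crashes:.2%} of crashes")  # 0.56%
print(f"injury: {injured / crashes:.1%} of crashes")        # 41.3%
```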

We will truly look back and think, "Did we really trust others to operate multi ton metal machinery at exceedingly high speeds at each other day after day?"


As a whole this is true and could probably be bested. But it would have to be by many orders of magnitude for people to accept it. There's a feeling, not necessarily true, that one can control their destiny while driving and avoid being a statistic. Now, we know that isn't always true, as you are at the mercy of many other elements. But even if autonomous cars had 90% fewer fatalities, it would still be hard to get humans to give up control. It would have to be in the realm of air travel safety.


> Cars will certainly be automated to some degree. And that will cause a great reduction in the almost 1.35 million people killed globally by motor vehicle accidents.

They already are. I'm talking about full self driving: a car without a steering wheel. Cruise control (arguably the first car automation) has existed since the 1900s, potentially earlier depending on how you define it.


It really depends on what you mean by "full self driving".

Eg, I wouldn't consider the stuff Tesla is doing self driving at all. I have significant worry because it reduces attention while still requiring it for safety.

If you're talking "Hail robocab, don't touch a steering wheel, arrive at your destination", then Waymo is already there: https://blog.waymo.com/2020/10/waymo-is-opening-its-fully-dr...

You might mean "does everything everyone uses a vehicle for today", however I feel that's perhaps moving the goalposts to unattainable. Eg, replacing NASCAR racers with self-driving cars would defeat the point of NASCAR, so that won't happen (though it might be an interesting side event).

Between the two extremes of public pilot in limited area and handling every situation, there's a very large space for tremendous value. Just serving urban areas would significantly reduce the need for vehicles. There are many people who only use public transit, or could if they had more economical ways to handle sporadic trips such as grocery shopping and visiting friends across town.


Waymo has to learn the exact space it's going to drive in ahead of time, and then have high-resolution lidar maps generated to assist. It's currently limited to grid-like, pre-mapped areas like Phoenix, AZ. Tesla is building a general learning solution so that you can take the vehicle anywhere, without the requirement that the area be pre-mapped or 'learned' by Tesla beforehand. Tesla has orders of magnitude more data and mileage driven than Waymo could possibly ever imagine having. Waymo is not going to scale. CEO departures like this are a big red flag. Pin this comment. Google will shutter Waymo in less than five years.


The problem is, Tesla does not posses magic.

We know (from introspection) how trained skills work. Everyone knows the feeling when that bunch of neurons you trained to semi-autonomously do something for you (e.g. switch gears, write letters, type words, observe traffic, stay in an imaginary lane, etc.) raises its hand so that Daddy AGI (our conscious mind) can come have a look at that weird situation it doesn't know how to handle.

Tesla can't do that, either, and literally no one on this planet has the faintest idea how to do anything like it.


I think you're highlighting the difference in approach instead of pointing out a fatal flaw in Waymo's strategy.

Tesla is focusing on data because they believe they can machine learn through the problem.

Waymo has a lot of data, but is also focusing on better sensors, augmenting them with maps, and ML perception feeding into more traditional programing doing the actual control and decision making.

It's kind of like Tesla and Waymo are both learning basketball. Tesla believes that just playing a lot of games of basketball is the best strategy. Waymo thinks that playing some games is good, but it should also do things like practice free throws and weight train.

Also, a few misconceptions:

1. Waymo currently only has public service in Arizona, AFAIK. However they are testing in San Fran, Seattle area, and Michigan. Easiest first doesn't mean later is impossible.

2. Pre-mapping is NOT a blocker to automated service. Consider how much effort goes into making each section of road; driving a car by a few times is a trivial cost in comparison. This is further shown by the fact that Google has been doing Street View for well over a decade. Also, I'd like to see Tesla demonstrate they can drive with the road covered in snow and still know where they are without a map.

3. CEO departures can be a red flag. They can also be a completely benign changeover, or even a green flag that the board identified someone who would perform better in the CEO position. The leaving announcements are always fluff about spending time with family, so it's really hard to tell what's going on from the outside. If this is followed by more leadership leaving Waymo, then I'd be concerned.


Yeah. Andrej Karpathy said in a recent interview that at this point it's almost completely a data problem, instead of a machine learning research problem. Most of their work is in figuring out the best way to collect and annotate lots of good data. Waymo has a few hundred cars and Tesla has millions, and in this problem space, scale wins over more precise sensors.

Here's the interview: https://open.spotify.com/episode/0IuwH7eTZ3TQBfU8XsMaRr?si=6...


Yep, Waymo's approach is brittle


An interesting analogy is the Washington DC Metro, which was automated but now isn't, due to a significant 2009 accident that killed 9 people. That's a train on rails. We have much less tolerance for computers killing people than for people killing people.


You would think that, but then that Uber car ran over a pedestrian in the middle of a typical ultra-wide deserted American thoroughfare with perfect lighting and... kinda nothing happened? My pulse still rises when I think of that dashcam video they released, which predictably shows nothing to blame, just the lady their negligence killed.

Maybe the common denominator is just cars. The regulators are fully absent - they care about the shape of your headlights, but do they care that SUVs have them mounted at the height of every other car's mirrors? Do they care that cars have absurdly overpowered engines with no imaginable use? Do they care that trucks have fronts so high it's impossible to see a kid walking on a crosswalk, and that when you hit one, they're hit at head height and thrown under the wheels?

None of these things would ever fly on a train or airplane.


That incident basically ended Uber ATG. I wouldn’t say that nothing happened.


The DC metro was once automated? Interesting


BART in SF was too.


Full self driving would be far easier if they weren't all trying to cover all cases at once. I, along with others, have mentioned that a far simpler problem to solve is freeway travel.

HOV and Express lanes make this easy. They have well established markings, limited access, and traffic goes in one direction. So industry and regulators would work together to solve standardization of markings for travel and entry and exiting.

Then you take it to the rest of the interstate system and then to limited access highways. Eventually you get down to the neighborhoods and city driving.

I see real promise in Tesla's beta system, which has a few thousand drivers. What it tracks is far more advanced than what the current software in my 3 does. They finally went to a persistent model and far better labeling. Where my car shows only cones around a parked vehicle, the beta shows the vehicle as well. It shows all parked and stationary objects, something the current software does inconsistently, if at all.

Just within the US, the lane markings, signage, and even signaling are not consistent from state to state. Worse, the rules for what counts as an acceptable lane are not consistent either. So the first step there is to standardize it all across the nation.


Tesla is 100% going to win in this space. And as Musk and Karpathy have said, the general solution will be in place and work nearly everywhere, but a long tail of edge scenarios will have to be ironed out over time. This is nascent, world changing technology solving hard problems.

As a M3 owner you get to see the feature set growing over time, but the zero-to-one moment is when that FSD is finally rolled out in 9.x

I can't wait. It's going to be amazing.


The advancements in AI tend to happen in intermittent large leaps, but people tend to make linear projections based on the most recent leap. That’s why there’s always a big hype cycle followed by disillusionment and some sort of AI winter. I personally think 2100 is extremely pessimistic, but all those projections of full self-driving by 2025 were certainly even more extreme in their optimism.


Does anyone else think that maybe fully autonomous flying of aircraft will be easier to make practical than fully autonomous driving of automobiles?


I thought this was obvious. The complexity that cars deal with during travel is many orders of magnitude higher than the complexity aircraft deal with, and cars actually have an effect on the complexity of the environment they travel in, unlike aircraft. For the most part, aircraft only have ONE object that they have to avoid hitting, and it's the size of a planet, and engages in zero direction changes. Aircraft are simple.


Yes. It’s a much more regulated industry that already has mandatory equipment that helps solve the problem (e.g., transponders), there’s often more time to perceive and mitigate a risk (outside of takeoff and landing), less variable environments etc


The upside is much smaller, though. There are a lot more drivers than pilots, and plane crashes are much less common.


> Is there an example of anything that's completely automated that's dangerous?

Monorail trains (like at the airport)

Combine Harvesters

Airplane Autopilot/autoland

Elevators


Literally any operational (and unattended) machine in a modern factory


Factory automation usually has fail-stop behaviour (in process automation like chemical plants, things get more interesting).

And resumption of operation after an emergency stop happens only after a human check and an intentional operator acknowledgement.

So you're technically right, but the circumstances are vastly different. Nobody will buy a car that does a full emergency brake whenever a leaf falls off a tree. The environment is so much more difficult on the street than in a factory.


There’s a decent amount of factory automation that is essentially fully autonomous with the exception of a “big red emergency shutdown” button for a human to press if all the other safeties fail. Many operate without continuous oversight and in the cases where a human is positioned to watch its usually for economic reasons rather than safety (e.g., to stop the operation if there is a quality issue that will render the product unusable)

Edit: downvoting is fine but please at least add to the discussion by stating why you disagree. I’m speaking only from my personal experience working in factory automation and realize there’s probably a lot of differing experience


My understanding is that factory automation is usually designed so that the human is not in danger (usually they aren't even present at all). Are there factories that are automated in a way where a human is exposed to a potentially lethal injury?


>Are there factories that exist that are automated in a way where the human is exposed to a potentially lethal injury?

Yes, but probably to a lesser extent than driving a car. Everything has sensors that are meant to identify a risk to a human and stop whatever operation poses that risk.

E.g., there may be robots picking thousands of pounds of parts and driving them around a facility, but they stop if they sense a human in their path. Same for welding, stamping, etc. All those operations can injure or kill, but unlike vehicles they can rely on the mitigation of “stop and wait”. It’s a much easier problem than self-driving


> I must be the only one who thinks full self driving will not happen before year 2100.

Nope. Me as well. I think self driving is AGI-complete, and we are as far from AGI now as when we were living in caves, banging rocks together.


I wonder: perhaps we can't get self-driving cars, but maybe we can get self-driving train carts and move completely away from cars?


This is basically where we will end up when the tech industry gets its head out of the clouds.

The carpool lanes will be converted to rail tracks. You can pay extra for a rail-compatible car if you want and sleep during your commute.

As implausible as it sounds this is still way more likely than driverless wheeled vehicles becoming both safe and useful.


self driving light rail would be interesting


I don't know about 2100, but 2 years ago I made some bets that self-driving cars won't be available in the next 5 years.


The metric to beat isn't "is self driving dangerous, yes or no?"

The metric to beat is "is self driving on average substantially safer than human driving?". Once you beat that metric, it would be insane not to allow self driving. This won't take 100 years.
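As a back-of-the-envelope sketch of that break-even comparison: the commonly cited US human baseline is roughly 1.1 fatalities per 100 million vehicle-miles, and everything else below (the AV fleet's mileage, fatality count, and the 2x "substantially safer" margin) is hypothetical:

```python
# Rough comparison of human vs. autonomous fatality rates per mile.
# The human baseline (~1.1 deaths per 100M vehicle-miles) is the commonly
# cited US figure; the AV numbers and the 2x margin are hypothetical.

HUMAN_FATALITIES_PER_MILE = 1.1 / 100_000_000

def av_is_substantially_safer(av_fatalities, av_miles, margin=2.0):
    """True if the AV fleet's observed fatality rate beats the
    human baseline by at least `margin`x."""
    av_rate = av_fatalities / av_miles
    return av_rate * margin <= HUMAN_FATALITIES_PER_MILE

# Hypothetical fleet: 2 fatalities over 500M autonomous miles (4e-9/mile).
print(av_is_substantially_safer(2, 500_000_000))   # True
# A worse hypothetical fleet: 10 fatalities over the same mileage (2e-8/mile).
print(av_is_substantially_safer(10, 500_000_000))  # False
```

The real regulatory question is of course messier (confidence intervals on rare events, differing road mixes), but the shape of the argument is just this per-mile rate comparison.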


How many of these completely automated things have lots of humans interacting with them and operate outside a closed area?


Even for airplanes and trains, we still have pilots and humans behind the controls with 100% attention.


There's plenty of examples of driverless trains throughout the world.

> https://en.wikipedia.org/wiki/List_of_automated_train_system...


Fair enough. Thanks for correcting me.

But still, guided tracks are hardly a good comparison for this.


We don't have fully automated planes because they're already safe enough and the economics aren't there.

The probability of crashing with a plane is extremely low, even factoring in the higher risk of takeoffs and landings. The entire process is coordinated with Air Traffic Control.

Since planes are so safe already and pilots are relatively inexpensive, there's no strong financial incentive to fully automate planes.


Planes are also substantially regulated. I mean if there was a Vehicular Traffic Control on every intersection the same way there's Air Traffic Control we probably wouldn't have any accidents now, negating the main purported benefit of self driving to begin with.


Disagree on the economics. Pilots cost significant money and since the airline industry runs on low margins this can be a huge plus.

But the main reason we won't see planes without pilots is political and related to trust. I think for the same reasons we are less likely to see truly driverless cars.


> we still have pilots and humans behind the controller with 100% attention.

That really isn't true. Pilots, in particular, have huge challenges keeping their attention up between takeoff and landing. Likewise for trains: it turns out that if you require humans to do nothing but watch... it is much harder than if they were actually doing the flying/driving themselves.

The Air Force is moving to unpiloted drones, though there is someone at a center who can intervene as needed. It is only a matter of time before they decide that cargo can be moved this way (think: FedEx).


> Is there an example of anything that's completely automated that's dangerous?

Flying an airplane. They put pilots in there to make people comfortable, but if need be, a modern passenger plane can fly itself, including take off and landing.

Driving a subway. BART in San Francisco launched as self driving in the 60s. It freaked people out so much that they put a human in front. But if you sit up there you'll see the human doesn't do much. They press a button to close the doors, but that could easily be automated.

Also, I took a ride in a self driving car in SF a few years ago. It had a safety driver, but he didn't do anything (he grabbed the wheel once, but then they looked at the data and found the car was about to do the right thing anyway). It handled things like cars going the wrong way, double parked trucks, trash bags in the street, jaywalkers, etc.


I think there's a crucial issue with sensing. Companies are tackling the problem mainly using sensors that are contained within the car. Even with LIDAR (and not just cameras) it is very hard to "sense" the full scene[1].

What if our roads were built to communicate with the cars? What if there were only AVs on the street and they communicated with each other? What if every bicycle, school backpack, shopping cart did so too, identifying itself in the process?

I think the problem of self-driving is harder right now, in this transitional period, because AVs have to coexist with human drivers while essentially "emulating" how a human driver gathers information (not really, but you get my meaning)

I'm not sure it won't take decades, but I feel like there will be a magical few-months period in which we go from "can it be done?" to "oh, ok, we got it"

[1] shoot, I lost the reference. But Cruise apparently assumes "use whatever is available until it works, we'll make it cost-effective later"


Oh definitely, I've been convinced sensors distributed in the environment are the only way forward for FSD with current tech since about 2017. Do a test run with a dedicated road/lane for a while, gather data, improve the system and then gradually roll it out elsewhere.

It's clearer and clearer that self contained 'autopilot' units are going to keep failing in scenarios involving unpredictable people & situations. They don't have enough data and never will (without AGI, anyway). So we need to spoon feed them that data by tagging everything & everyone around them.


That's a really good idea. You probably wouldn't need too many sensors either. Now that I think about it, a few strategically placed cameras around blind corners would be pretty useful for human drivers too!


Then we can just build trains instead. Perfectly automated today.


No, the physics of that don't work at all. Trains for example cannot handle gradients that cars manage with ease.


Not true. Not every train uses steel wheels, you know.


> What if every bicycle, school backpack, shopping cart did so too, identifying itself in the process?

That sounds like a Nazi-state dream - easily track every backpack, car, bicycle, etc...

no thank you


Waymo is full self driving, not partial


Waymo is little more than a Jurassic Park jeep ride. Only works on the tracks. Take a Waymo vehicle anywhere their engineering staff hasn't spent countless hours driving around back and forth with those ridiculous spinning lidar sensors and roof rack full of who-knows-what and see what's left of their 'self driving'.


I’m very skeptical about achieving FSD in the next few years, but I don’t think you’re giving them enough credit. Even if you’re limiting yourself to a known city, you still have to deal with human drivers and pedestrians doing weird shit all the time. That’s no small feat, even though it’s far from the ultimate goal of being able to pass out drunk in the back of the car while it gets you home safely.


So a 50 square mile city with live traffic, pedestrians and objects is a "little more than a Jurassic Park jeep ride"?

What are you going to say when they go to SF next? Just another Jurassic Park ride?


*Waymo is trying to develop full self driving, not partial


It'll be interesting to see where he goes next and if any info comes out about why he's leaving.

Brief summary of his jobs:

  - NUMMI (GM/Toyota), 1984-1986
  - MIT, 1986-1990
  - Ford, 1990-2004
  - Hyundai America, 2004-2013, left as president/CEO
  - TrueCar, 2014-2015
  - Google/Waymo, 2015-2021


Apparently Apple is rumored to be working on a car...


I'm curious about the co-CEO thing. That seems like it is universally something that doesn't work and is just what happens when nobody wants to make a tough decision. Is there any context on why it might be the right move here?


I can only think of The Office: "It doesn’t take a genius to know that any organization thrives when it has two leaders. Go ahead, name a country that doesn’t have two presidents. A boat that sets sail without two captains. Where would Catholicism be without the Popes?"


From the Wall Street Journal:

"The company said Friday that it is promoting its chief technology and operating officers, Dmitri Dolgov and Tekedra Mawakana, to lead a decade-old effort to make self-driving cars a reality. They will share the title of co-chief executive... Mr. Dolgov is one of the founders of Google’s self-driving car project. He joined the program when it began in 2009 and led the development of Waymo’s autonomous system, known as Waymo Driver. He studied physics and math at the Moscow Institute of Physics and Technology before earning a doctorate in computer science from the University of Michigan. As chief operating officer, Ms. Mawakana has led the effort to commercialize Waymo’s self-driving system. She has a law degree from Columbia University and previously worked at other tech companies such as eBay Inc. and Yahoo."

https://www.wsj.com/articles/waymo-ceo-john-krafcik-is-leavi...

My guess it that traditionally the chief operating officer would have been promoted and the chief technology officer would stay as the chief technology officer--but that there was a significant risk of the chief technology officer leaving if he wasn't promoted. So you end up with co-CEOs.


My suspicion here is simple: they couldn't find a proper candidate, from among insiders or outsiders, who can demonstrate both technological and operational leadership. You could simply promote either of them (usually the COO suits better, though), but that risks the departure of the other candidate, so this compromise had to be made. Anyway, this might be okay for Waymo in the short term; unlike at other companies, even if the two CEOs don't agree on a specific matter, there's an escalation path to Sundar (and ultimately Larry and Sergey).


This is a huge red flag. Especially when Waymo's had no significant milestones and when the CEO is using language like 'spend time with friends and family'. That usually means they're being pushed out, or that he's made the realization it's not going to work out and is jumping ship.


Am I the only one to read this as an acknowledgement that Waymo didn’t achieve what it was supposed to do and that he was asked to go for lack of performance?


I read it as a state of self-driving cars in general. The technology isn't developing as fast as people had hoped. Waymo is still in the lead, even if this guy failed to deliver (he announced Waymo would be a public service for 2018, and that obviously didn't happen).


Definitely google some Tesla FSD beta videos and watch them. Waymo doubled down on a technology set and data methodology that is not scalable. Tesla is going to win in this space 100%


>The technology isn't developing as fast as people had hoped.

I can’t help but think people conflated what the problem statement of self driving really is. We’ve made huge strides in perception in the last two decades, but self-driving is a much more complex problem than just accurately perceiving the world around us.


No, he wants to spend more time with friends and family and travel the world.


Maybe. But a lot of us are cynical when we hear this kind of language from departing CEOs.


I'm pretty sure the person you were responding to was being sarcastic. Then again it's the internet, so who knows?


That's exactly how I read it. It's a huge red flag.


The CEO lists several accomplishments in the press brief. It doesn't appear to say that he's being let go specifically for performance concerns.


It rarely does, unless there's been a very public screw-up on the part of the executive that results in bad PR for the company.


Well, they never say that. I personally think it's because of that horrendous "Waymonauts" moniker.


Some comments are saying self-driving cars are not currently possible and that Waymo isn't successful, but aren't they currently available to the public in some areas in Phoenix, with no safety drivers?

There may be many limitations, but self-driving in sunny flat suburban areas would already cover a decent portion of the American market. Sure they might not be ready for chaotic, busy city centres or regions with harsh weather conditions, but that's letting the perfect be the enemy of the good - those living in areas suitable for self-driving cars would certainly appreciate them, and who's to say there can't be a gradual retooling of infrastructure to accommodate them as they expand to more and more areas?


More context on the "no safety driver" issue.

"Eliminating the safety driver is an important step toward making Waymo's service profitable. But it may not be enough on its own because Waymo says the cars still have remote overseers.

These Waymo staffers never steer the vehicles directly, but they do send high-level instructions to help vehicles get out of tricky situations. For example, a Waymo spokeswoman told me, "if a Waymo vehicle detects that a road ahead may be closed due to construction, it can pull over and request a second set of eyes from our fleet response specialists." The fleet response specialist can then confirm that the road is closed and instruct the vehicle to take another route."

https://arstechnica.com/cars/2020/10/waymo-finally-launches-...


The language seems to be chosen very carefully.

They say that they don't do one very specific thing, and they give one very specific example of something they do. Leaving it open to interpretation.

I'll remain very skeptical about their no safety driver claims.

We'll probably have to wait until the first fatality or serious injury to know for certain. The lawsuits and police/NTSB investigations will expose a lot of the inner workings.


Every case of remote assistance I've seen in Waymo videos tends to look like this:

https://youtu.be/D1sZnbORfAE?t=830

Stuck in a parking lot, with an unexpected massive obstacle (Christmas tree display) blocking the "road". "RAD MONITORING" shows up on passenger screen. Objects get labelled and vehicle starts making progress.

It seems to be happening a lot less in recent videos.


As long as you can get a good ratio, say 10:1, that actually sounds workable as a robotaxi.
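To sketch why the ratio matters (all dollar figures here are made up, not Waymo's actual costs): one fleet-response specialist overseeing N vehicles spreads their wage across N vehicle-hours, so per-vehicle labor cost falls as 1/N and quickly drops below paying a driver per car:

```python
# Hypothetical per-vehicle monitoring cost as a function of the
# overseer-to-vehicle ratio. All wage figures are assumptions.

def monitoring_cost_per_vehicle_hour(overseer_hourly_wage, vehicles_per_overseer):
    """One overseer's wage amortized across the vehicles they watch."""
    return overseer_hourly_wage / vehicles_per_overseer

DRIVER_HOURLY_WAGE = 20.0  # hypothetical cost of a conventional driver

for ratio in (1, 5, 10, 50):
    cost = monitoring_cost_per_vehicle_hour(30.0, ratio)
    print(f"1:{ratio} overseer ratio -> ${cost:.2f}/vehicle-hour "
          f"(vs ${DRIVER_HOURLY_WAGE:.2f} for a human driver)")
```

Under these assumed numbers, a 1:1 ratio is more expensive than a driver, while 10:1 is already an order of magnitude cheaper.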


I encourage you to watch a video of a current Waymo ride in Phoenix. In one ride I watched, the very first turn it made out of a parking lot should have been a left turn, but it turned right instead and went through residential streets to get back in the right direction.

It is nice that they have deployed to the public, but I doubt anyone capable of driving themselves will use it any time soon if it can't even make left turns.


I don't understand what you're saying. Obviously the cars are capable of turning left. Are you saying it took a longer route when there was a shorter path to leave the parking lot? I don't know what video you're referring to (there's quite a few), but that seems to concern the high-level path planning, not any driving capabilities (e.g. a human following GPS instructions might have chosen the same route). Or perhaps the navigation planning system prefers to drive through lower traffic residential areas.


I took a waymo in Chandler that had no safety driver and it was able to make unprotected left turns just fine.


Chandler, not Phoenix. Went over to try the cars recently. They sometimes drop you off or pick you up from a different location than you wanted if they can't get to the location you provided.

If they actually launched in Phoenix that would be encouraging. Instead they're stuck in the suburbs.


Everyone except Tesla is approaching self-driving in the wrong way.

1. This is a looooong term bet, so you need revenue NOW to sustain your company until you get to full self driving.

2. The self driving car should be a practical product, to be purchased and operated by actual common people who drive cars today. So you need to avoid $10k LIDAR spinners on top of the car.

3. Self driving is not going to be a binary thing. It will be a gradual increasing capability. You need to let people use it in every intermediate stage and feed telemetry back to you so you can incorporate it in your development.

4. Self driving cannot rely on extremely detailed mapping of all public roads that exist on the planet. The system will have to be smart enough to figure out what to do, without specialised external help.

If you don't follow these, be ready for a long haul (10-20 years) without any revenue, because that's what it is going to take. And google does not seem to be the long-haul type of operation that this needs, judging from https://killedbygoogle.com/.


As Elon said, 'everyone chasing LIDAR is doomed to fail. DOOOOOOMED.'

Waymo is just one big cope party filled with HBS grads who can't fathom that they didn't succeed at something.


Disclaimer, I work for GM, not on SDC; anything expressed here is solely my opinion.

I think SDCs (self driving cars) are great, and will be available in some capacity in the very near future, BUT I think we are at least one more paradigm change away from the dream of SDC.

Take for instance liability. It seems quite likely that manufacturers will want to disclaim liability for SDCs. It is easy to see the roots of that in current driver assist features. They are very careful to call out in the documentation (if not in the naming and presentation) that the human driving is responsible for safety.

If this liability was not an issue, SDCs in hands-off, eyes-off, brains-off mode would be available today, right now.

So how will the paradigm of liability change? Or will it?


> It seems quite likely that manufacturers will want to disclaim liability for SDCs.

Why though? They could just accept the liability, charge an insurance premium slightly lower than legally mandated liability insurance for human drivers and then self-insure (and potentially re-insure against bigger risks).

This would not only solve the liability problem, but also allow them to make a nice profit because the car presumably gets into at-fault accidents much less often than the human drivers.
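The argument can be put in rough numbers. A minimal sketch, with every figure (premiums, accident rate, average claim) purely hypothetical:

```python
# Hypothetical self-insurance math: if the AV's at-fault accident rate
# is lower than a human's, the maker can undercut human liability
# premiums and still keep a margin. All numbers are made up.

def annual_underwriting_profit(premium, at_fault_rate, avg_claim):
    """Expected profit per car per year: premium minus expected claim payouts."""
    return premium - at_fault_rate * avg_claim

HUMAN_PREMIUM = 600.0   # hypothetical yearly liability premium for a human driver
av_premium = 500.0      # priced slightly below the human premium
av_at_fault_rate = 0.01 # hypothetical 1% at-fault accident chance per year
avg_claim = 20_000.0    # hypothetical average claim payout

profit = annual_underwriting_profit(av_premium, av_at_fault_rate, avg_claim)
print(round(profit, 2))  # 300.0
```

Under these assumptions the manufacturer pockets the gap between the premium and the expected claims, while the owner still pays less than a human driver's policy.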


On one hand this kind of makes sense to me - his experience in the actual auto industry must have seemed invaluable when he was hired in 2015 and people thought self driving would be brought to mass market in a couple years.

But now that everyone assumes we're in for the long haul (Aurora says 2025 for L4 in some cities), it doesn't make sense for him to want to stay.


I don't wanna be 'that guy' but the news of this on the same day as Tesla's Q1 p/d beat speaks volumes about who's winning in this space. Watch the FSD beta vids, especially the Waymo & Tesla comparison in Phoenix. It's clear who's going to dominate this space.


"To start, I’m looking forward to a refresh period, reconnecting with old friends and family, and discovering new parts of the world."

I suppose that is a nice way of saying wait out my non-compete....


Is Waymo not headquartered in California?


Yes you are correct and it probably is unenforceable if it exists. My mistake. It just reeked of that.


Even without a non-compete, taking a couple months off between jobs is nice. But yeah, at that level I imagine non-competes reach to the full legal extent and then some ;)


I believe I read that he's moving to Austin, so Tesla could be a next step.


I hear Anthony Levandowski is available, having been pardoned by Trump in his last day in office on January 20, 2021.

https://en.wikipedia.org/wiki/Anthony_Levandowski#Criminal_c...


I don't think either Google or Uber would want him back.


This reads like an obituary


For fully autonomous driving to happen, all cars must exchange data with each other the way robots do on a factory floor. Image and depth perception can solve some problems, but in a crash situation a vehicle can maneuver in any direction. If two self-driving cars need to make a decision at an intersection, how are they going to do it?


Do you need to exchange data with every other driver when you're on the road?


Isn’t this what turn signals, horns and brake lights are for? Not to mention making eye contact with pedestrians at crossings.


Self driving cars have turn signals, horns, and brake lights too. And can also read that data from other cars. And some of them even replicate eye-contact now.


Do factory floor robots communicate with each other in any meaningful way? I'm under the impression that they mostly just follow pre-programmed paths and all you need for that is clock synchronization.


Self driving is not possible without changes to infrastructure. We know this because the NHS tried in the 1960s and was able to succeed with this approach. Government will have to solve the problem first. How? By installing sensors on our highways that cars can use. Anything this big in scale can't be solved by the private sector alone; you also need better infrastructure.


>Self driving is not possible without changes to infrastructure

Humans seem to be perfectly capable of doing it though.


36,000 US driving deaths annually would beg to differ.


Humans are probably the single worst possible candidate for handling a car


Except for all the other options.


Humans come with their own liability :(


There were a lot of things that weren't possible in the 1960's that are possible now. Highway driving is the easiest type of driving. If Waymo wasn't limiting their driving to specific areas, I have no doubt their Driver could easily drive through the US interstates without issue.


That just isn’t true. Computer technology hasn’t come as far as you want to believe. 140 characters isn’t a technological revolution.


This combined with the recent announcement of Waymo pivoting to monetize their research in what's basically a garage sale (https://waymo.com/lidar) tells me that Waymo is soon to be interred on killedbygoogle.com.


That lidar has been for sale for years, it's not really a pivot. See e.g. this older post on HN: https://news.ycombinator.com/item?id=19319233



