Ex-CDOT Chief Klein Discusses Proposed Ban on Self-Driving Cars in Chicago

A driverless car on a test course. Photo: Wikipedia

The 2016 National Shared Mobility Summit takes place in Chicago from October 17-19, bringing together leaders in the fields of bike-sharing, car-sharing, ride-sharing, microtransit and more, hosted by the Chicago-based Shared-Use Mobility Center. You can register for the event here and use the promo code STREETSBLOG to receive 10 percent off the cost of registration.

Three former Chicago Department of Transportation heavy-hitters will be convening for a panel called “Connecting the DOTs – City Commissioners on Shared Mobility.” The panel will be moderated by ex-Chicago transportation chief Gabe Klein and will also include his former CDOT deputies Leah Treat and Scott Kubly.

Klein is part of the new transportation consulting firm CityFi and serves on the board of several transportation-related organizations (including OpenPlans, the parent organization of Streetsblog). Treat and Kubly currently lead the Portland and Seattle DOTs, respectively. Earlier this decade, the three of them launched the Divvy bike-share system, as well as initiatives like the construction of 100 miles of buffered and protected bike lanes, the Bloomingdale Trail, and the Chicago Riverwalk.

Their panel, which takes place from 12:45-2:00 p.m. on Tuesday, October 18, will look at how cities are responding to the challenges of aging infrastructure, changing regulatory demands, and emerging transportation developments to become “hubs of innovation, entrepreneurship, and growth.” They’ll discuss best practices as well as the path forward for shared mobility technologies.

I recently caught up with Klein by phone to get his take on a new proposal by Chicago aldermen to ban autonomous vehicles from the city, a stance some local commentators have blasted as reactionary.

JG: In response to Uber testing self-driving cars in Pittsburgh, Chicago aldermen are saying they don’t want to allow self-driving cars in our city until it’s a proven technology. Aldermen Anthony Beale and Ed Burke are the sponsors of the proposed ordinance. Those guys have been anti-ride-share – they’ve been defenders of the taxi industry. So it appears that they don’t want more competition for taxi drivers. What do you think about the issue of cities preemptively banning self-driving cars?

GK: What people have to keep in mind is that our situation right now with people-driven cars is completely unacceptable. Unfortunately, for most of us the frame of reference is that this is the way it’s been since we’ve been alive: people have been driving cars around and running each other over, in huge numbers. Last year 1.25 million people died worldwide in car crashes – it’s the number one killer of young people.

So, with all due respect to Alderman Beale and Alderman Burke, and I know they mean well, the idea that self-driving cars are going to be less safe is almost impossible. Human error causes 94 percent of car crashes, so the faster we can get people out from behind the wheel of [multi-ton] hunks of metal next to pedestrians and cyclists, the safer our cities will be, the more people will want to live in our cities, the safer and healthier our children will be, because they’ll start walking and biking to school again.

So I’m a huge fan of the technology. I’m very wary of how it’s utilized, because I don’t want to see it cause sprawl and I don’t want to see it hurt the quality of transit systems. On the other hand, I can’t imagine a worse situation than what we have now, because of the number of people driving and the number of deaths that we have, something like 55 million injuries [worldwide annually].

So I think this is more about the taxi lobby, and I think it’s shortsighted. The taxis are actually in a unique position, with the future of point-to-point ground transportation likely becoming completely commoditized. The taxis – if they can hang on – are one of the only government-sanctioned systems for that type of service. So you would actually think that they would want to be the first to market with autonomous vehicles, not the last.

Perhaps the industry itself misunderstands how this technology may play out and the distinct advantage that they have, or could have. But my guess is they’re such a fractured industry that they don’t think about these things as a collective.

JG: You mentioned the issue of driverless cars possibly contributing to sprawl. What can we do to make sure that driverless cars are a net positive for cities – that they lead to less car dependency, not more, that they don’t encourage people to live farther away from work, and that they don’t lead to a drop in transit service?

GK: I think urbanists and the people who care about the health of our cities have a very healthy skepticism about autonomous cars. But what I would encourage them to do is to separate the tool from how it’s used. We have cars that are driven by people now, poorly, and we’ll have some version of cars tomorrow that will hopefully not be driven by humans.

The technology itself is not the problem. The problem that we see with humans is that we get lazy. We say, hey, the combustion engine was invented. That looks like it’s more convenient and flexible than the streetcar. Let’s kill the streetcar and make everything buses with rubber tires. We’ll sell lots of cars and we’ll sell lots of tires.

So sometimes the shiny object, the new technology, causes really bad policy decisions to be made in favor of technology. And actually what I’ve been focusing on with my new firm CityFi is to get people to focus on the outcomes that they want for their cities. The outcomes that we want are happy citizens. One of my partners, Ashley Hand, just finished the transportation technology strategy for Los Angeles, and it’s really good. And one of the basic tenets that they focus on is “Transportation Happy” – how happy are you when you’re in your car stuck in traffic? How happy when you’re on your bike or you’re walking and you’re getting exercise and fresh air?

So we could look at the outcomes that we want in cities, and how the various technologies, like autonomous vehicles, fit into it. And I think that autonomous vehicles could be a huge boon to transit. They could be a huge boon to the safety of our citizens. They could help people who didn’t previously have mobility options to get mobility options at very low costs – it could be more equitable. It could get more people to jobs.

But if we don’t set the right policies in government – and we won’t everywhere, that’s a given – in places like Chicago and D.C., and Portland, yeah, you could encourage people to live further out and just use an autonomous car. Because there’s no vehicle-miles-traveled tax, there’s no taxation on an owned vehicle.

But if you look at the [National Association of City Transportation Officials] policy for autonomous vehicles that we put out a couple months ago, it calls for shared-use vehicles, not owned vehicles. It calls for a 25 mph maximum speed in an urban context, not a Tesla speed. And it calls for Level Four autonomy only, meaning no human involvement, no turning it on and off.

Klein shows off a solar-and-pedal-powered ELF trike by Organic Transit in 2014. Photo: John Greenfield

And I would add to that, just personally, not for NACTO, that I would like to see much smaller format, lighter vehicles. You took a picture of me a couple of years ago with my little Organic Transit ELF. I think we overbuild cars. I love what Elon Musk has done at Tesla and the adoption of electric vehicle technology that we’re seeing. But do we really need a 7,000-pound Model X to get around the city? We don’t. The ELF is like 160 pounds and my bike is 25 pounds.

Our vehicles are out of scale with our cities. I look at what Barcelona is proposing with their superblock strategy and it’s really great. It’s really interesting and provocative. And I think that through the right policies we can push people towards smaller vehicles, more energy-efficient vehicles, pedal-electric vehicles, and autonomy can play a role in all of that.

I think we also have to take a look at the use case. If you need to go from one neighborhood to another, that’s a trip that you used to make on a streetcar. And now you might make it on bike-share. And perhaps the autonomous vehicle is for the trip that’s more than a neighborhood away. So we set policies and incentives and disincentives so that those vehicles are utilized for those kinds of trips.

And the idea that the aldermen have, that this is going to be competition for taxis, my response is, why aren’t the taxis on this? This is perfect for them. They could be outcompeting everybody, if they were the first to market it.

JG: How would taxi drivers play into it if the cars were autonomous?

GK: Well, there are a couple of scenarios. One is that they don’t play a role at all. But in the cities where they own their own taxis, where they have some sort of relationship with the taxi company, or they own the medallion – it’s a different situation in every city. But in those situations I think there’s an opportunity for more of a co-op model where each vehicle, or blocks of vehicles, are owned by individuals. It’s going to be very interesting to see how it shakes out.

There’s this thinking out there that, oh, Uber’s just going to be running our ground transportation system. I don’t necessarily think that’s how it’s going to shake out. I see them rushing to market. And I like what they’re doing. I like that they’re testing on streets, I like that they’re showing people what the technology can do, that they’re learning.

So I’m a fan of what they’re doing. But I think it’s a long shot to say that one company is going to run autonomous vehicles. Lots of people will be running them. And taxis are set up in a perfect place to be that service. But I think it’s sad that they don’t see themselves that way.

  • rohmen

    Not saying I support a reactionary ban on self-driving cars, but I do think this is a much grayer issue than Klein is letting on.

    Somewhere, a person writing code for an autonomous driving system has to make ethical choices about what to do when the car faces a no-win situation. For example: does the car hit a pedestrian who jaywalks if it continues straight, or does it brake hard and swerve, which risks injuring the people in the car? The question then becomes whether we leave how those ethical questions are handled (which in a sense goes to how the systems are developed) in the hands of a company like Uber, which has a proven track record of taking an arguably aggressive, hands-off, profit-first approach to regulation and safety. Honestly, I think Uber has made its bed a bit in this situation by being so resistant to any safety regulation in past interactions with major cities.

    Those are important questions that need to be discussed before self-driving cars hit the streets, and something Klein only vaguely addresses by alluding to ensuring they’re safe.

  • 1976boy

    But the scenario you present is not the dilemma you say it is. With a driver facing that situation, were they traveling at or under the speed limit and exercising proper caution, they could stop. An autonomous car would never speed. In areas with pedestrian traffic they would be going maybe 15-20 mph.

    Almost all crashes and fatalities are human error.

    The default situation is that the car travels slowly and defers to the road condition. That is actually what opponents do not really want. Human drivers speed and ignore safety rules. It’s just a simple fact. I see it every day.

    Autonomous cars can’t come soon enough for me.
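    The stopping-distance intuition above can be sanity-checked with basic kinematics. Below is a rough sketch; the friction coefficient and reaction times are illustrative assumptions, not measured values for any real vehicle:

```python
# Back-of-envelope stopping distances. Illustrative assumptions only:
# dry-pavement friction ~0.7, human reaction ~1.5 s, automated ~0.2 s.

G = 9.81   # gravitational acceleration, m/s^2
MU = 0.7   # assumed tire-road friction coefficient, dry pavement

def stopping_distance_m(speed_mph, reaction_s):
    """Reaction distance plus kinematic braking distance, in meters."""
    v = speed_mph * 0.44704            # mph -> m/s
    reaction = v * reaction_s          # ground covered before brakes apply
    braking = v ** 2 / (2 * MU * G)    # braking distance: v^2 / (2*mu*g)
    return reaction + braking

for mph in (20, 25, 40):
    human = stopping_distance_m(mph, 1.5)
    robot = stopping_distance_m(mph, 0.2)
    print(f"{mph} mph: human ~{human:.0f} m, automated ~{robot:.0f} m")
```

    Under these assumptions, the automated system’s shorter reaction time cuts the total stopping distance roughly in half at city speeds, which is the gap 1976boy is pointing at.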

  • rohmen

    If a pedestrian steps out into traffic to jaywalk, and a car is traveling at 25 mph, I disagree that a self-driving car is always going to be able to stop without heavy braking or swerving that might cause an accident and injure the passengers inside. In turn, someone has to code the system as to how to handle that ethical situation (protect the pedestrian who acted illegally, or put the passengers in harm’s way).

    It’s not a matter of whether self-driving cars would be safer (they would be), it’s a matter of ensuring they’re developed to be as safe as they’re capable of being (something many, including myself, do not necessarily want to leave in the hands of private industry).
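    rohmen’s point – that someone has to write down how the car trades one risk against another – can be made concrete with a toy sketch. Everything here (the maneuver names, the probability estimates, the cost weights) is hypothetical; a real planner is vastly more complex, but the weights are still a policy choice somebody encodes:

```python
# Toy illustration: when no maneuver is risk-free, the ranking of
# outcomes is a policy decision baked into the code. All maneuvers,
# probabilities, and weights below are made up for illustration.

from dataclasses import dataclass

@dataclass
class Maneuver:
    name: str
    p_pedestrian_injury: float  # estimated chance of injuring the pedestrian
    p_occupant_injury: float    # estimated chance of injuring the occupants

# Whoever sets these weights has answered the ethical question,
# whether they meant to or not.
PEDESTRIAN_WEIGHT = 1.0
OCCUPANT_WEIGHT = 1.0

def expected_cost(m: Maneuver) -> float:
    return (PEDESTRIAN_WEIGHT * m.p_pedestrian_injury
            + OCCUPANT_WEIGHT * m.p_occupant_injury)

options = [
    Maneuver("brake hard, stay in lane", 0.30, 0.05),
    Maneuver("brake and swerve", 0.05, 0.20),
]

choice = min(options, key=expected_cost)
print(choice.name)
```

    Change PEDESTRIAN_WEIGHT or OCCUPANT_WEIGHT and the chosen maneuver can flip – which is exactly the decision rohmen argues shouldn’t be left to a private vendor alone.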

  • Fred

    On the other hand, an early self-driving Uber fatality would be devastating for their future, so it would be in Uber’s best interest (at least at the beginning) for the cars to be coded to be as defensive as possible.

    Self driving cars are coming and there’s nothing any local government can do about it. There should certainly be regulations in place before they get here, but bans are silly.

  • rohmen

    Two things: one, private industry (especially the auto industry) has always had a checkered history with safety concerns when profit is the sole enforcer. Mitsubishi got busted in the 90s for essentially using a formula to decide when a recall was worth it vs. just paying out the wrongful death claims. Uber I’d say has a pretty poor ethical track record, and having them as the potential gatekeeper is scary.

    Second, I don’t support outright bans, but I believe local governments (and government period) have a much larger role to play in it from day one than Uber likely agrees with, as ironically some day Uber (or whomever else wins the race) will need government support to create a truly autonomous system. It’ll be the government’s role to literally ban human driven cars from the roads to create such a system (if we ever get that far), and demanding a spot at the table from day one if that future request is going to be made isn’t unreasonable.

  • Fred

    One, the way to get around that is to tie the two together. If the car company is liable for deaths, and it is very expensive for them to kill someone, then it is in their best financial interest not to kill people. There are a number of ways you could make that happen via carrot or stick.

    Second, I agree that governments should be out ahead and regulating self-driving cars before they get here. While self-driving cars are imminent, we are still quite a while (decades) away from fully eliminating drivers.

  • rohmen

    But who gets the carrot and stick, that’s the question?

    The liability issue is going to be huge, and is something I’m actually paid to pay attention to. When an accident occurs, who do you hold liable: the owner of the autonomous vehicle; the company who built the car; the software developer for the driving system itself; all of the above (the most likely answer)? Liability has always been pretty clear in the past, it was either driver error or a product defect, meaning you either held the driver responsible or the auto manufacturer. Self-driving cars present massive liability and regulatory issues that will need to be hashed out before they hit the road in huge numbers.

    Not to say they’re not coming, or that they shouldn’t come; I just feel like most articles on self-driving cars gloss over these points (and they’re massive obstacles). And that’s leaving aside the Google Glass-like cultural issues, where a very sizable portion of U.S. culture may not view the loss of human driving as a net positive and will fight against what they see as an erosion of a norm.

    My guess—in our lifetime, we’ll see massive gains in autonomous trucking and rail, and you’ll see autonomous livery service, but I’m guessing I will not live to see a fully-autonomous road network. Could be wrong, though.

  • Mcass777

    Do any of you worry about these cars being hacked? https://www.wired.com/2016/03/fbi-warns-car-hacking-real-risk/

    So will the future see a population where people are sitting ducks (literally) inside moving cabins with no controls to override?

  • rohmen

    Personally, I worry more about planes on that front in terms of terrorism, but yeah it’s a concern. Though it’s going to be a concern in cars period going forward given the level of computer integration.

  • Mcass777

    The worry should be for both, and yet we interact way more with cars and trucks per day than planes. I think the argument about how coders approach the moral dilemma of a conflict exposes the dark side of driverless anythings. Could a coder miss something? Yes. Could a coder go rogue? Yep. Could multiple vehicles be taken hold of simultaneously? Sure – they have the same insides. I am not a prepper, but I have to say there is scary potential out there.

  • rohmen

    I don’t disagree, but I think you play the metrics. Hacking a plane and taking it down causes a lot more damage than hacking a truck. That said, autonomous driving would have the biggest benefit in transporting toxic/hazardous waste, as you eliminate driver error and avoid having to pay drivers a premium to take on the risk. Unfortunately, that creates a juicier target for hackers. But that scary potential is already here really. Computers are already so integrated that it’s just a matter of resources and ability to cause damage at this point. And having a human driver on board likely doesn’t matter if a hacker disables the air brakes, etc.

  • Fred

    A developer going rogue shouldn’t ever be an issue. I say this as a software engineer. I write code for a living – for a product that has no life-or-death consequences – and multiple people look at my code before it ever hits production. If an auto company’s software organization is so dysfunctional that a single engineer can get rogue code onto the road without anyone noticing, then they deserve fines starting in the 11-figure ($10s of billions) range and every C-level exec should be fired for gross negligence.

  • Fred

    I think the only answer for liability is the car company. The car company’s software is the driver, so it is therefore liable. The owner of the vehicle really had no role. Software development is a multi-team effort, so trying to nail down blame on a single person would be difficult; e.g., do you blame the product person who wrote the requirements? The engineer(s) who wrote the code? The QA team who verified it? Suing an individual engineer who makes $60k/yr for millions in a wrongful death suit seems not worth anyone’s time. And if suing engineers is going to become a thing, custom software development is going to get even more expensive than it is, and malpractice insurance is going to become a thing.

    In your scenario where a pedestrian jumps in front of a car and a driver has to choose between hitting the pedestrian and swerving onto the sidewalk, is that driver error or product defect?

    I don’t understand how the liability conversation changes. Either the driver made a decision, or the product failed. Either the driver is a human, or the driver is software (is the car company).

  • rohmen

    In autonomous car situations, essentially everything is going to be a product defect suit, and that’s where the complexity sets in. The injured person will still likely just sue the vehicle owner, but the owner will then sue the manufacturer – and the “manufacturer” may be Volvo, while the software that actually failed may come from a different company. Depending on how the agreements (and the indemnification provisions) between the companies that put the product on the road are structured, it can become a mess. It adds a lot of complication to liability determinations in a court setting (and to insurance coverage from an underwriting perspective, given the insurer will want to seek subrogation for whatever it pays) that wasn’t present in traditional car accident cases, given how extensive vertical integration had become in cars (most parts in a Toyota that could fail are made by Toyota or a Toyota subsidiary, etc.).

    I’d say pre-programmed solutions to ethical dilemmas are likely to be viewed as software defects. But again, traditional insurance covered the driver based on driver error. Insurance for product defects is an entirely different ballgame, and will require entirely new products and regulatory schemes to ensure proper coverage is in place. It’s just background stuff many never think about but will have to be determined before the system goes wide-scale.

  • Mcass777

    Excellent point – I should have said a bug in the code vs. a malicious coder.

  • Mcass777

    Most of us would freak out and society would come to a grinding halt if there were a rash of hacked car accidents. The emotional perception would overcome the statistical reality of being involved in such an incident and people would be afraid to leave home. Our future is scary!

  • Fred

    I chalk this up to one of those things that can only happen in theory, at least for now. Sort of like having your identity stolen via RFID. I have never heard an actual story of anyone having their identity stolen this way, even though the news would have you believe that it’s a huge deal. Until it actually starts happening, I’m not losing any sleep over it.

  • Fred

    Certainly a code bug is possible, but I would hope the standard for testing life or death software would be such that only crazy edge cases get through. Software cannot be expected to be prepared for every. single. scenario any more than a human driver could be expected to.

  • Fred

    “I’d say pre-programmed solutions to ethical dilemmas are likely to be viewed as software defects.”

    Are ethical dilemmas currently considered driver error?

    I assume liability will be sorted out fairly quickly either by some legislation, or by the precedent set by courts after the first incident.

  • rohmen

    Largely yes. If you swerve to avoid hitting someone, and that in turn causes an accident with property damage or bodily injury, you’re probably going to get sued by the person you hit. Now you might be able to sue the pedestrian (and have them held primarily/solely liable), but if they don’t have insurance, etc., the liability is likely going to fall on you (you did leave your lane and hit someone). It’s a true no-win, and something that will continue.

    The point is a small subset of accidents are unavoidable, even with self-driving cars. And someone will be liable under the law. It’s all stuff that can be figured out, but it does need to be dealt with, and shows implementing this stuff isn’t fast or simple. And it will require legislation as you say, meaning while bans are stupid, local/state government does need to get involved sooner rather than later for it all to work right.

  • Mcass777

    I can hardly wait for the salesman to say that at the close!

  • Mcass777

    Hacking cars has happened, although it is rare, because the Bluetooth systems in cars connecting safety, diagnostics, navigation, and entertainment operations are just coming online. This wireless infrastructure is the foundation for driverless cars. These systems will need to be bulletproof, to say the least. I just hope this is part of the conversation at the DOT.

  • NIMBY7666666

    Why doesn’t Gabe trot out his “if a pedestrian is hit by a car going 40 MPH, they are more likely to die than if going 20 MPH”? It seems to be the only thing he or Scott Kubly can say. And how much of this is to personally benefit Gabe, like the Divvy bike-share system was?

  • cozzyd

    Maybe the self-driving cars won’t stop to pick up or drop off passengers in bus lanes and bus stops, but of course that will probably be “too hard.”

  • Wells

    A guy gets into a self-driving car and says, “Take me to the hamburger drive-thru on Cosmopolitan Boulevard.” The robocar voice answers, “No. Every time you’re taken there, you toss the greasy paper wrappings and unfinished soda cup in the back seat, a violation of the lease agreement. Initiating ejection device. You have 10 seconds to leave before a major unpleasantness will spoil your day.”

  • Wells

    Ralph Nader is a firm skeptic, and I draw the line at the safety features: computer emergency braking and routine regulation of speed limits (speeds below such limits remain the driver’s option) are desirable and possible. Total computer control – autonomous operation – is NOT possible, nor desirable. The ease of making up self-driving car jokes shows exactly why the technology is too ridiculous to take seriously. Just deal with it in some other way than ‘censoring’ that opposing viewpoint.

