Working in automotive cybersecurity, I've seen how rapidly the attack surface has shifted from individual components to entire connected fleets. My biggest concern is fleet-scale disruption. As vehicles become connected through cloud services, APIs, and AI interfaces, a single point of compromise can influence how thousands of vehicles behave simultaneously. We've already seen early signals: coordinated ride requests causing dozens of autonomous vehicles to converge on the same location, effectively creating a physical DDoS on city streets. The challenge is no longer just building a safer car. It is securing a transportation system that runs on software, because you cannot pause a city's streets while patching a vulnerability.
My biggest concern is what happens to safety when autonomy meets cross-border reality: parking garages, ports, customs lots, and dense European streets where signage, lane markings, and "human negotiation" are inconsistent. In my world (30+ years moving cars and household goods from the US to Poland/Europe), I see how even perfectly fine vehicles get thrown into situations their sensors and maps weren't built around. One concrete observation: vehicles sit for weeks in staging yards and then get moved in tight spaces by different handlers--ramps, forklifts, steel containers, low light, wet ground. I've watched a loaded SUV misjudge a short ramp angle and scrape hard enough to tear a bumper cover; that's with a human driving slowly, and it's exactly the kind of low-speed, high-consequence geometry an autonomous stack can struggle with when it loses clean lane references. A personal "this is why I worry" moment came from prepping cars for ocean transport: we require the battery to be charged and fuel kept low, and sometimes the car's state-of-charge drops after sitting. If an autonomous vehicle depends on stable power and sensor calibration, that "it sat too long in a yard" variable becomes a safety factor--especially when it's restarted and immediately asked to maneuver in a chaotic port. If you want a brand example: Teslas are common in US-to-Poland shipments because people buy them in the US market, and the hardware is capable--but the operational environment changes fast once it's off the nice suburban test loop. My worry isn't highway cruising; it's the messy handoffs and degraded conditions between purchase, port, ocean, customs, and first drives on unfamiliar roads.
My biggest concern with these cars is this: we're at a weird point where the tech is good enough to let you space out while driving, but not good enough to actually handle everything without human input. I've seen this in action at tech shows, and I've talked to the people who fix these things. You're cruising down the highway when, out of nowhere, there's a weird construction zone or a storm front, and suddenly this car needs YOU to take over in two seconds. Good luck with that if you've been staring at a phone screen. And let's be real, after all these years in the automotive space, I know how people behave in these things. We're lucky if we're paying attention when we're supposed to be driving. Now we're going to add the idea that we don't need to pay attention at all? That's a recipe for disaster. Snow, heavy rain, sun glare: sensors still struggle with conditions regular drivers deal with daily.
E-bikes are basically invisible to most AV systems right now, and that's a real problem. The sensors on these vehicles are built to spot cars and pedestrians. An e-bike on motor assist doesn't fit either box. We're moving faster than a typical cyclist, sometimes hitting 40 km/h, but we take up a fraction of the space a car does. In my experience watching how traffic behaves around e-bikes, most drivers already struggle to judge our speed and intentions. An AV doing the same job with a camera and a decision algorithm is not going to do better. The part that actually worries me is what happens when a rider tries to anticipate the vehicle. With a human driver you get hints. They slow slightly, they look over, something tells you they've noticed you. With an AV there is nothing. No feedback at all. So riders are left making assumptions about a machine that may or may not have registered them as a real obstacle. E-bike numbers are growing fast and AV pilots are rolling out in the same cities at the same time. That overlap needs to be taken seriously before it produces a pattern of incidents that forces the conversation. The technology to address it exists. Right now it just isn't being prioritized.
My biggest concern is autonomous vehicles confidently transporting people who are medically or cognitively unsafe to ride without a sober, responsible human in the loop--especially post-procedure sedation, relapse risk, or acute withdrawal. I run a physician-led residential detox for high-functioning professionals, and I've seen how quickly orientation, judgment, and impulse control can shift hour-to-hour even in people who "look fine." The personal observation: we do daily clinical re-evaluations and 24/7 monitoring because alcohol/opioid withdrawal, sleep deprivation, and anxiety can create sudden confusion, agitation, or fainting risk. I've watched a stable, articulate executive in the morning become disoriented by afternoon and attempt to leave care; in a normal car, a staff member can redirect, assess, and stop them from making a dangerous choice--an AV can't read "I'm about to panic and bolt" the way trained humans do in real time. Safety-wise, the weak point isn't the driving task--it's the handoff of responsibility when the passenger is impaired, dissociated, or determined to self-harm. If the system's only tools are "continue," "pull over," or "call support," you can end up with a vulnerable person stranded in an unsafe place or being transported somewhere they shouldn't be without anyone validating consent, capacity, or destination. A brand example: Waymo. I'd want hard safeguards around rider verification and "medical vulnerability modes" (e.g., verified caregiver ride, limited destination changes, rapid connection to a live trained responder) because in early recovery, privacy and autonomy matter--but so does not giving a medically unstable person a frictionless way to make a catastrophic decision.
My biggest concern about the safety of autonomous vehicles is how they handle unpredictable human behavior in real-world traffic, particularly when pedestrians, cyclists, or other drivers act erratically. I've observed early pilot programs where AVs struggle to make split-second decisions in complex urban environments, sometimes hesitating or overcompensating in situations a human driver would navigate instinctively. While the technology is improving rapidly, edge cases remain a challenge, and even a single failure in judgment could have serious consequences. This makes rigorous testing in diverse conditions, continuous learning algorithms, and clear safety protocols essential before widespread adoption.

Abhishek Bhatia, CEO, ShadowGPS
LinkedIn: [https://www.linkedin.com/in/abhatia02/](https://www.linkedin.com/in/abhatia02/)
My biggest concern is the gap between what autonomous systems are tested to handle and what actually happens on real roads. At Benzel-Busch, we've been deep in the Mercedes-Benz ecosystem long enough to watch driver-assistance technology evolve from basic cruise control to near-autonomous systems -- and the edge cases still catch people off guard. The clearest pattern I've seen: drivers over-trust the system the moment it works flawlessly a few dozen times in a row. That false confidence is dangerous. A customer once told me their vehicle "just knew" to stop -- they didn't understand they were still the last line of defense. The liability question also keeps me up at night. When something goes wrong, is it the manufacturer, the software company, the dealer who delivered the car? That chain is still legally murky, and as a dealer, I sit right in the middle of that relationship with the customer. From the Dealer Board Chair role, I've pushed hard for clearer consumer education standards around these features. Selling the technology without properly explaining its limits isn't selling a promise -- it's setting someone up for failure.
My biggest concern with autonomous vehicle safety is inconsistent quality in overseas-sourced components like sensors and ECUs, where factories skip multi-point inspections and deliver defects that real-world vibrations expose. With 40+ years manufacturing automotive products through global factories for Fortune 500s at Altraco, I've seen this firsthand--a Vietnamese supplier once shipped deformed metal housings for vehicle mounts due to poor first-article checks, mirroring how AV sensors could fail under stress. We fixed it with supplier scorecards and in-process audits, hitting 99.6% on-time delivery, but without that vigilance, a single defect in AV lidar calibration could cascade into collisions. Diversify factories and demand documented multi-stage testing upfront to catch these risks before they hit roads.
After 30 years of protecting Utah homes from 240 MPH winds and massive snow loads, I worry autonomous vehicles like Waymo aren't programmed to handle "overhead" kinetic threats. My team uses HOVER 3D visualization to map roof pitches because we know exactly how dangerous a 500-pound "snow bomb" can be when it slides off steep shingles. In my experience, sensors focus on the horizontal plane, but a major Utah thaw turns residential eaves into unpredictable launch pads for ice and debris. I've seen ice dams rip heavy-duty gutters clean off a house, and an AV idling in a driveway wouldn't recognize that the roof above is a structural hazard about to fail. We install self-regulating heat cables to prevent these disasters, but without that specific mitigation, the exterior environment becomes a vertical obstacle course. My concern is that an autonomous system might accurately detect a pedestrian while failing to calculate the trajectory of a sliding snow mass that can crush a vehicle's roof in seconds.
Lead - Collaboration Engineering at Baltimore City Office of Information and Technology
My biggest concern is that an autonomous vehicle’s onboard AI agents could be compromised or behave in harmful ways without clear human oversight. In my work addressing insider risk I have seen AI bots act like superusers, moving data and taking actions without a human clicking approve. That experience taught me you cannot rely on blocking specific components; you must watch behavior to detect intent and anomalies. I believe vehicle safety must include continuous behavioral monitoring of onboard agents and clear ways to surface suspicious actions to human teams.
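The behavioral monitoring described above can be sketched in a few lines: instead of blocking specific components, watch each onboard agent's activity rate against its own historical baseline and flag sharp deviations for human review. This is a minimal illustrative sketch; the function name, baseline data, and z-score threshold are all assumptions, not anything from the original.

```python
from statistics import mean, stdev

def is_anomalous(history, current, z_threshold=3.0):
    """Flag an agent whose per-minute action count deviates sharply
    from its historical baseline (illustrative sketch only)."""
    if len(history) < 2:
        return False  # not enough baseline data to judge
    mu = mean(history)
    sigma = stdev(history)
    if sigma == 0:
        return current != mu  # flat baseline: any change is suspect
    return abs(current - mu) / sigma > z_threshold

# Hypothetical agent that normally issues about 10 actions per minute
baseline = [9, 10, 11, 10, 9, 10, 11, 10]
print(is_anomalous(baseline, 10))   # ordinary load -> False
print(is_anomalous(baseline, 120))  # superuser-like burst -> True
```

A real system would track richer features than raw counts (destinations of data movement, privilege escalations, time-of-day patterns), but the principle is the same: surface the anomaly to a human team rather than silently trusting the agent.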
Coaching high school football, I'm constantly thinking about split-second decisions under pressure -- and that's exactly where my concern about autonomous vehicles lives. These systems can't read a chaotic Friday night parking lot after a Perry Hall game the way an experienced human can. My real worry is environmental unpredictability. A sensor calibrated for highway driving behaves very differently on rural Harford County roads in a November rainstorm. I see that disconnect regularly driving between Bel Air and away games. At ProMD, we use AI technology (our Entity Med Simulator) but always pair it with a trained clinician making the final call. Autonomous vehicles are being sold without that same human checkpoint built into the decision loop -- and that asymmetry concerns me more than the technology itself.
My biggest concern is that autonomous vehicle systems can act on detected patterns without the full context a human would have, which can create unsafe situations. In my work a tool analyzed three years of campaign data and correctly predicted a two-week delay, but it missed the simple fact that the client's CEO was on vacation. That taught me machines can surface accurate trends while still lacking crucial situational awareness that a human provides. For vehicles, that gap between pattern recognition and human context is where safety issues are most likely to arise, so human interpretation and clear communication must remain central.
My biggest safety concern for autonomous vehicles is that AI will amplify underlying operational and data gaps, producing failures in rare or unexpected driving situations. When we added an AI agent to handle customer questions, scaling up exposed onboarding and knowledge base gaps we had not noticed. That experience taught me that AI systems reveal and depend on the quality of the processes and data beneath them. If similar gaps exist in vehicle data, sensors, or procedures, they could lead to unsafe outcomes on the road.
My biggest concern is the lack of clear, timely transparency from companies when an autonomous vehicle incident or safety concern occurs. In my work with consumer brands I have seen that when safety issues or recalls arise, staying silent feels like avoidance and erodes public trust. That pattern applied to AVs would slow adoption and invite heavier regulation rather than build confidence. My observation leads me to push for straightforward, factual updates and clear disclosures to protect users and preserve trust.
As someone who places medical professionals and relocating families in luxury furnished apartments near Chicago's top hospitals like Shirley Ryan AbilityLab and Northwestern Memorial, I've seen how transport reliability directly affects recovery and work. My biggest concern is autonomous vehicles' ability to safely accommodate passengers with mobility challenges--such as wheelchair users or those with medical equipment--ensuring secure loading, stable rides, and precise drop-offs at complex hospital entrances. A client recovering at SRA shared how standard rideshares fumbled her powered wheelchair during a snowy transfer, delaying therapy by 45 minutes; we secured her Atwater Apartment stay nearby, but it highlighted AVs needing specialized accessibility protocols. Chicago's medical district demands this--Reddit travelers, verify AV providers' ADA compliance ratings before booking extended stays.
Our biggest concern is the legal and ethical ambiguity surrounding decision making in unavoidable accident scenarios. Autonomous vehicles rely on probabilistic models that prioritize certain outcomes, yet society has not fully agreed on acceptable tradeoffs. In high speed situations, milliseconds determine how algorithms interpret risk and assign priority. Without clear standards, liability and public trust become fragile. This concern grew as we followed early deployment trials where incidents sparked debates over responsibility between manufacturers, software providers, and operators. Observing how quickly narratives shifted after isolated crashes revealed how sensitive public perception remains. Even statistically safer systems can lose legitimacy if accountability frameworks lag behind innovation. For autonomous vehicles to scale responsibly, governance must evolve alongside technical capability.
As a cybersecurity engineer who built high-availability systems at IBM, I view autonomous vehicles as mobile data centers where a software vulnerability translates directly into physical catastrophe. My primary concern is the integrity of Over-the-Air (OTA) update pipelines, which could allow a single supply-chain breach to weaponize an entire fleet simultaneously. In my current work with AI Readiness assessments, I frequently find that complex systems "melt" because of poor access controls and a lack of immutable logging. If a hacker compromised the update server for a vehicle like the Tesla Model 3, they could push malicious code to thousands of cars before a security team even registers the anomaly. Our monitoring data shows that cybercriminals launch their most aggressive strikes during holiday "out-of-office" seasons when response times are slowest. A coordinated fleet-wide hack during a peak travel window is a nightmare scenario that current infrastructure--which often treats IT as a secondary liability--is simply not prepared to handle.
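The OTA integrity concern above comes down to one gate: a vehicle should refuse any update image whose signature fails to verify against a trusted key. A minimal sketch follows; real pipelines use asymmetric signatures (e.g. Ed25519) so vehicles hold only a public key, and the HMAC with a shared secret here is purely an illustrative stand-in. All names and the demo key are assumptions.

```python
import hashlib
import hmac

# Stand-in for a signing key; production OTA pipelines would use an
# asymmetric key pair so the private key never leaves the signer.
SIGNING_KEY = b"demo-key-not-for-production"

def sign_update(firmware: bytes) -> str:
    """Produce a signature for a firmware image (HMAC-SHA256 stand-in)."""
    return hmac.new(SIGNING_KEY, firmware, hashlib.sha256).hexdigest()

def verify_and_apply(firmware: bytes, signature: str) -> bool:
    """Apply an update only if its signature verifies; reject otherwise."""
    expected = sign_update(firmware)
    # Constant-time comparison avoids timing side channels
    if not hmac.compare_digest(expected, signature):
        return False  # tampered or unsigned image: refuse to install
    # ... hand off to the installer only after verification ...
    return True

image = b"firmware-v2.1"
sig = sign_update(image)
print(verify_and_apply(image, sig))         # genuine image -> True
print(verify_and_apply(b"malicious", sig))  # tampered image -> False
```

The point of the sketch is the failure mode described above: if the *signing server itself* is breached, verification still passes, which is why immutable logging and access controls around the pipeline matter as much as the cryptography.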
My biggest concern is "edge-case" decision-making in mixed human/robot traffic--especially when road cues are ambiguous or the safest human choice is to break a rule. In catastrophic injury cases, I've seen how one weird variable (glare, fatigue, a sudden lane shift) turns a routine drive into a life-altering event, and autonomous systems still struggle when the world doesn't look like their training set. One observation that sticks with me: the same kind of "looks safe until it isn't" design problem we see with speed humps and other traffic-calming devices. A flat, one-color surface can be visually misleading, and people with mobility or vision issues get hurt because the environment lies to them; AVs face a similar trap when markings are faded, work zones change overnight, or a pedestrian's movement doesn't match a neat prediction. The case type that makes me most wary is underride-style truck crashes, which are often fatal and nearly always preventable. When you mix autonomous passenger cars with massive commercial vehicles that have known safety issues (blind spots, sudden stops, imperfect guards), you're betting that the AV will always read intent correctly and choose the right evasive move in a fraction of a second. The brand example I watch closely is Tesla--because real-world use means real-world corner cases, not just ideal conditions. If an AV confidently misclassifies a scenario (like a trailer crossing, a dark roadside obstruction, or a confusing merge), the crash physics don't care how "smart" the system is; the injuries look the same in my files.
One thing that worries me about autonomous vehicles is the weird gray zone between human expectations and machine behavior. Humans drive with a lot of unwritten social rules. Eye contact at a four-way stop, a little wave to let someone merge, that subtle "I see you, go ahead" moment. Autonomous systems are great at following rules, but roads are full of messy human improvisation that doesn't always fit clean logic. I noticed this firsthand riding in a semi-autonomous car during heavy city traffic. The system was technically driving correctly, but it hesitated constantly in situations where a human would make a quick judgment call. It felt safe in the strict sense, but also oddly awkward and unpredictable to everyone around it. When that hesitation happens in a busy intersection or during aggressive urban traffic, it can actually create new kinds of risk. My bigger concern is that people assume autonomy means perfection. In reality it just shifts the type of mistakes that happen. Humans make impulsive mistakes, while machines tend to fail in strange edge cases that nobody anticipated. The challenge over the next decade isn't just making the cars smarter, it's figuring out how humans and machines share the road without constantly confusing each other.
My biggest concern about autonomous vehicle safety is the handoff problem, that dangerous moment when the car decides it cannot handle a situation and suddenly asks the human driver to take over. Living in South Texas where we deal with unpredictable weather, construction zones that change daily, and rural roads with no lane markings, I have watched enough dashcam footage and read enough incident reports to know that the edge cases are where these systems fail most dangerously. What led me to this concern was a personal observation driving on Highway 77 between Harlingen and Brownsville. The road has stretches where lane markings are worn away, shoulders are unpaved, and you regularly encounter slow-moving agricultural equipment pulling onto the highway without warning. I thought about how a system trained primarily on well-marked urban and suburban roads would handle that environment, and the honest answer is it probably would not handle it well. The deeper issue is that autonomous vehicle companies are optimizing for the ninety-five percent of driving scenarios that are predictable while the five percent that actually kills people remains incredibly difficult to solve. A human driver uses judgment, context, and even eye contact with other drivers and pedestrians to navigate ambiguous situations. Current autonomous systems do not have reliable equivalents for those capabilities. I am not opposed to the technology. I think it will eventually save lives on a massive scale. But the current marketing that implies these vehicles are safer than human drivers in all conditions is misleading and creates a false sense of security that could lead to more accidents during the transition period when humans and autonomous vehicles are sharing the same roads with very different decision-making capabilities.