Artificial intelligence (AI) has evolved from theoretical concepts to powering real-world applications. In transport, AI has led to the development of cutting-edge applications that promise better mobility globally. There is now a multitude of AI applications across the transport ecosystem, ranging from traffic management to autonomous vehicles.
However, using AI in transport also raises ethical questions, since the algorithms that power these applications rely on data. Misrepresentative, poor-quality, or skewed data could lead to accidents with potentially fatal consequences; hence, stakeholders must be cautious when implementing AI.
AI is powerful, and with great power comes great responsibility. This article explores the delicate balance of ethical considerations and the progress of AI in transport.
The Need for Ethical AI in Transportation
AI-driven systems consume large volumes of data, and some require it in real time. As such, significant consideration must be given before undertaking AI-driven projects. Their implementation must be human-centric, operating within the legal framework and upholding moral principles.
AI-driven software works through integration with other technologies, such as the cloud and IoT sensors. This requires careful management of the different platforms that work together to productionize AI solutions.
Protecting personal data privacy and preventing misuse
Transportation forms the core of modern life, moving people and goods daily and generating vast amounts of data continually. For instance, autonomous vehicles (AVs) capture data from cameras, radar, lidar, GPS, and multiple other sensors. Estimates from Intel indicate that a single AV can generate 4 TB of data daily [1].
With the use of edge computing, vehicles have become data centers on wheels. This data is captured from passengers, pedestrians, cars, cyclists, and the surrounding environment. However, data commercialization and unrestrained collection mean that people have less and less control over their personal information.
Personal data collected may include location information and biometrics. According to Russell Ruben, a marketing director at Western Digital, some of this data is processed in real time and then discarded, while other datasets are stored for training AI models [2]. Personally owned AVs also hold further data, including user preferences and financial information.
Sending data to the cloud for storage, processing, and model training opens up avenues for data leaks, breaches, and unauthorized access. A prime example involved Microsoft AI researchers who exposed 38 TB of data by misconfiguring a cloud server [3].
A survey by the cybersecurity company HiddenLayer indicated that about 77% of the businesses surveyed had suffered breaches of their AI solutions in 2023 [4]. According to the company’s CEO, Chris Sestito, AI is the most vulnerable technology ever deployed in production systems [5].
Data collected for AI must be used within accepted legal frameworks, and companies processing it need to be transparent about how they do so. For instance, authorities using AI for traffic management must not repurpose it for surveillance.
Developers building AI models for transport solutions must also integrate privacy by design, making privacy an integral component of the AI model lifecycle rather than an afterthought. Strategies such as data encryption, robust cybersecurity practices, and the anonymization of personal data should be at the forefront of building public-facing, AI-driven transport solutions.
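To make this concrete, here is a minimal sketch of pseudonymizing vehicle records before they leave the edge, assuming an operator-managed HMAC key; the identifiers and helper names are hypothetical, not any vendor's actual API:

```python
import hashlib
import hmac

# Hypothetical pseudonymization helpers: raw identifiers are replaced
# with keyed hashes, and GPS coordinates are coarsened before upload.

SECRET_KEY = b"replace-with-a-managed-secret"  # assumed to live in a KMS

def pseudonymize_id(vehicle_id: str) -> str:
    """Replace a raw vehicle/user ID with a keyed, irreversible token."""
    return hmac.new(SECRET_KEY, vehicle_id.encode(), hashlib.sha256).hexdigest()

def coarsen_location(lat: float, lon: float, decimals: int = 2) -> tuple:
    """Round coordinates to roughly 1 km precision to limit re-identification."""
    return (round(lat, decimals), round(lon, decimals))

record = {
    "vehicle_id": pseudonymize_id("AV-1234"),
    "location": coarsen_location(48.137154, 11.576124),
}
print(record)
```

The point of the sketch is the ordering: the raw identifier and precise location never leave the vehicle, so a cloud-side breach like the ones described above exposes far less.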
Sound privacy-focused regulatory frameworks help developers create AI systems consistent with privacy-by-design principles. The EU’s GDPR and California’s CCPA are good examples of frameworks addressing personal data privacy, although much still needs to be done.
Addressing algorithmic bias and promoting fairness
Bias in AI solutions can emanate from unrepresentative or flawed data that perpetuates inequalities. If unchecked, these biases can lead to AI systems making decisions that may disadvantage specific demographics.
A Georgia Tech study found that object detection models used in AVs performed worse at detecting people with darker skin tones, which could put these demographics at a higher risk of accidents [6].
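One practical countermeasure is disaggregated evaluation: measuring detection performance separately for each demographic group rather than only in aggregate. A minimal sketch, assuming a labeled evaluation set with a (hypothetical) group attribute:

```python
from collections import defaultdict

# Toy detection records: (group label, whether the pedestrian was detected).
# In practice these would come from a large, labeled evaluation set.
results = [
    ("lighter", True), ("lighter", True), ("lighter", False),
    ("darker", True), ("darker", False), ("darker", False),
]

hits = defaultdict(int)
totals = defaultdict(int)
for group, detected in results:
    totals[group] += 1
    hits[group] += detected

for group in totals:
    recall = hits[group] / totals[group]
    print(f"{group}: recall = {recall:.2f}")

# A large gap between per-group recalls is a red flag that the training
# data or model needs rebalancing before deployment.
```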
Bias in automated speech recognition applications
AI-based voice-activated systems are finding their way into vehicles for onboard and external functionalities. Automated speech recognition (ASR) solutions developed with skewed or biased data can produce inaccuracies, and difficulties may arise with diverse speech patterns and accents, or with recognizing context [7].
A Stanford University study highlighted significant racial disparities in ASR systems. According to this research, voice assistant applications with NLP capabilities misidentified 35% of words from Black speakers, compared with only 19% from white speakers [8].
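Disparities like these are typically quantified with the word error rate (WER). A minimal sketch of a per-group WER audit using the open-source jiwer library, with made-up transcripts purely for illustration:

```python
import jiwer  # pip install jiwer

# Hypothetical reference transcripts and ASR outputs, grouped by speaker
# demographic. Real audits would use large, balanced evaluation sets.
groups = {
    "group_a": {
        "reference": ["turn on the headlights", "navigate to the airport"],
        "hypothesis": ["turn on the headlights", "navigate to the airport"],
    },
    "group_b": {
        "reference": ["turn on the headlights", "navigate to the airport"],
        "hypothesis": ["turn on the headlight", "navigate to airport"],
    },
}

for name, data in groups.items():
    wer = jiwer.wer(data["reference"], data["hypothesis"])
    print(f"{name}: WER = {wer:.2%}")

# A persistent WER gap between groups signals that the training data
# under-represents some speech patterns or accents.
```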
Voice-activated features are a game changer for the car industry, boosting accessibility across transport ecosystems. However, given cultural nuances and regional diversity, ASR systems built without diverse training data and rigorous testing may fall short of expectations for certain demographics.
Biases in training data can lead to skewed AI systems that limit societal acceptance and trust. An AI that automates and perpetuates stereotypes can harm people in transport operations.
Stakeholders should proactively address any issues that contribute to bias and inequality. Local AI-based transport solutions must balance the needs of the local population with those of visitors.
Navigating Ethical Dilemmas in AI Decision-Making
AI identifies hidden patterns and correlations in data, helping systems and operators make decisions. However, these systems are often viewed as complex and opaque, lacking transparency in decision-making, which amplifies concerns regarding user trust and accountability.
Deep learning systems are criticized for being “opaque boxes” because of the lack of transparency in how their outputs are reached. For instance, AVs are expected to hit the brakes when they detect a pedestrian; however, if an AV instead hits a pedestrian, we cannot trace the reasoning behind its decision [9].
Associate Professor Samir Rawashdeh of the University of Michigan-Dearborn emphasizes the challenges of ensuring the robustness of perception systems in AVs. He suggests that if an accident occurred because the system missed a pedestrian, it would likely be attributed to the system encountering a novel situation.
However, Rawashdeh questions the feasibility of obtaining training data that covers every possible scenario, given the infinite number of permutations in real-world driving conditions, such as varying weather, road treatments, and lighting.
Strategies to address the opaque box problem include developing explainable AI (XAI), a framework that helps interpret and understand AI models by characterizing their accuracy, transparency, and outcomes.
Explainable AI can help developers check whether model output is accurate, for example by running simulations against expected outputs to verify prediction accuracy. Additionally, traceability techniques such as DeepLIFT trace how individual inputs contribute to a model’s outputs [10].
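As a rough illustration of how such attribution works in practice, here is a minimal sketch using Captum’s DeepLift implementation on a toy PyTorch classifier; the model and input are placeholder assumptions, not a real perception stack:

```python
import torch
import torch.nn as nn
from captum.attr import DeepLift  # pip install captum

# Toy stand-in for a perception model: 4 input features -> 2 classes.
model = nn.Sequential(nn.Linear(4, 8), nn.ReLU(), nn.Linear(8, 2))
model.eval()

# One hypothetical input (e.g., preprocessed sensor features).
x = torch.tensor([[0.5, -1.2, 0.3, 0.8]], requires_grad=True)

# DeepLift assigns each input feature a contribution score for the
# chosen output class, relative to a baseline (zeros by default).
dl = DeepLift(model)
attributions = dl.attribute(x, target=1)
print(attributions)  # positive scores pushed the model toward class 1
```

The contribution scores give developers a traceable link between what went into the model and what came out, which is exactly what the opaque box otherwise hides.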
Decision-making dilemmas in autonomous vehicles
Due to the unpredictable and dynamic nature of our transportation systems, AVs may encounter complex ethical dilemmas on the road. One of the best-known thought experiments used to illustrate this challenge is the “trolley problem.” In the context of AVs, this dilemma arises when the vehicle must make a split-second choice between two potentially harmful outcomes.
For example, an AV might find itself in a scenario where it must decide between colliding with a group of pedestrians or swerving and putting its occupants at risk. This raises questions about the ethical principles and decision-making frameworks that should be embedded into the AI systems governing these vehicles. Should AVs prioritize the safety of their passengers over that of pedestrians, or should they be programmed to minimize overall harm?
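To make the “minimize overall harm” framing concrete, here is a deliberately simplified sketch of an expected-harm comparison; the probabilities and weights are purely illustrative assumptions, and real AV planning stacks do not reduce to a single scalar like this:

```python
# A deliberately simplified, purely illustrative expected-harm comparison.

def expected_harm(people_at_risk: int, injury_probability: float,
                  severity_weight: float) -> float:
    """Crude expected-harm score for one candidate maneuver."""
    return people_at_risk * injury_probability * severity_weight

# Two hypothetical options in a forced-choice scenario.
stay_course = expected_harm(people_at_risk=3, injury_probability=0.9,
                            severity_weight=1.0)   # group of pedestrians
swerve = expected_harm(people_at_risk=1, injury_probability=0.4,
                       severity_weight=1.0)        # occupant at risk

choice = "swerve" if swerve < stay_course else "stay on course"
print(choice)
# The hard ethical question is who sets these weights and probabilities,
# not the arithmetic itself.
```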
Balancing these competing priorities and determining the appropriate trade-offs between the safety of different groups is a complex challenge that requires careful consideration and ongoing dialogue among ethicists, policymakers, and industry stakeholders.
Job displacement and workforce impact
A report by PwC indicates that AI and automation could add more than $15 trillion to the world economy by 2030 [11]. However, this growth comes at the cost of job displacement and workforce disruption: the World Economic Forum states that AI could lead to 75 million job displacements globally by 2025 [12].
There is consensus across the industry that job losses will occur once AVs achieve complete autonomy. For instance, taxi and ride-hailing drivers face the threat of job loss due to the introduction of robotaxis.
Baidu, the search engine giant, runs a robotaxi service in Wuhan, China. Due to its rapid adoption and low base fares, drivers in the city see a threat to their jobs. Beyond drivers, road inspectors and driving schools may face the same fate [13].
With the threat of displacement looming, workers in the transport industry will have to upskill. At the same time, relevant authorities must ensure that AI projects in transport remain human-centric, with AI augmenting human capabilities rather than simply eliminating jobs.
Safety concerns
In June 2022, a Cruise robotaxi was involved in a crash that injured several passengers; the accident investigation concluded that the vehicle’s decision-making was “unacceptably risky” [14]. In another incident, a Cruise robotaxi hit a bus from behind due to an error in its prediction model [15].
These and several other incidents have brought the safety of AVs into sharp focus. AVs rely on sophisticated technology that captures and processes data before making decisions; their neural network engines are trained on vast numbers of images for object detection and other tasks. However, these engines can struggle in uncommon situations, leading to questionable decisions.
The safety of public-facing AI systems is a make-or-break matter. Any AI solution that harms or negatively disrupts people will breed mistrust. AI-driven solutions must therefore be rigorously tested and subjected to continuous simulation so that the systems keep learning.
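As one illustration of what continuous scenario testing can look like, here is a minimal pytest-style sketch; the scenario set and the detect_pedestrian function are hypothetical stand-ins for a simulator-backed harness, not any AV vendor’s actual test suite:

```python
import pytest

def detect_pedestrian(scenario: dict) -> bool:
    """Placeholder: a real harness would run the perception model
    against a rendered simulation of this scenario."""
    return scenario["visibility"] > 0.2

# Hard scenarios collected from simulation and field incidents.
EDGE_CASES = [
    {"name": "night_rain", "visibility": 0.3},
    {"name": "low_sun_glare", "visibility": 0.25},
    {"name": "snow_covered_lanes", "visibility": 0.4},
]

@pytest.mark.parametrize("scenario", EDGE_CASES, ids=lambda s: s["name"])
def test_pedestrian_detected(scenario):
    # Each hard scenario becomes a permanent regression test; new model
    # versions must pass all of them before deployment.
    assert detect_pedestrian(scenario)
```

The design point is that every uncommon situation the system has ever mishandled is captured as a scenario and re-run against every new model version, rather than trusting aggregate accuracy numbers.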
Balancing Benefits and Risks
AI is revolutionizing the transport sector through applications such as route optimization and improved traffic management. However, ethical concerns, including personal privacy, underscore the need for prudent implementation of AI projects. Appropriate strategies must therefore be employed to ensure that the risks do not outweigh the benefits.
For example, a suitable legislative framework can provide greater clarity and a better balance between AI projects and their associated risks. Currently, however, laws lag behind the transformative pace of AI advancement.
Moreover, authorities and companies must develop public-centric projects grounded in ethical considerations. Standards, frameworks, and strategies should be established covering stakeholder involvement, user education, traceability, and explainable AI. Ultimately, ethical AI applications will enjoy greater acceptance and trust.
Conclusion
AI in transportation can potentially improve efficiency, safety, and sustainability, but addressing ethical considerations is crucial for its successful implementation. A proactive approach to mitigating privacy, algorithmic bias, and safety risks is essential to building public trust and overcoming existing concerns. By establishing robust ethical frameworks, standards, and guidelines, AI system developers can ensure that the technology is deployed in a manner that respects individual rights, promotes fairness, and prioritizes the safety of all transport stakeholders.
References

1. Data is the New Oil in the Future of Automated Driving | Intel Newsroom
2. How Data Unlocks the Future of Autonomous Vehicles
3. Types of Risks in AI – Data Leakage
4. HiddenLayer’s 2024 AI Threat Landscape Report
5. Study: 77% of Businesses Have Faced AI Security Breaches
6. Research Reveals Possibly Fatal Consequences of Algorithmic Bias | College of Computing
7. Voice-activated Devices: AI’s Epic Role in Speech Recognition
8. Understanding algorithmic bias and how to build trust in AI: PwC
9. AI’s mysterious ‘black box’ problem, explained | University of Michigan-Dearborn
10. What is Explainable AI (XAI)? | IBM
11. PwC’s Global Artificial Intelligence Study: Sizing the prize
12. The Impact of AI on Job Roles, Workforce, and Employment: What You Need to Know
13. Super cheap robotaxi rides spark widespread anxiety in China – CNN Business
14. Cruise Unacceptably Risky – Part 1 — Retrospect
15. Why we do AV software recalls | Cruise