The 1/10th-scale Connected and Automated Vehicle (CAV), developed by M-TRAIL, is a cost-effective and highly functional platform designed to replicate the sensing, communication, and control capabilities of full-scale CAVs in a compact form. Built with embedded LiDAR, a Wi-Fi communication unit, high-definition cameras, inertial measurement units, and onboard computing resources, this miniature vehicle supports a wide range of autonomous driving tasks, including perception, localization, path planning, and vehicle control. The platform runs on an open-source, modular software architecture that allows seamless integration of custom algorithms for cooperative perception, sensor fusion, and traffic interaction experiments. With its realistic dynamics and real-time processing capabilities, the 1/10th-scale CAV serves as an ideal tool for testing new CAV technologies in a safe and controlled lab or field setting. It significantly lowers the barriers to entry for CAV research and education, offering universities, community colleges, and research institutions a hands-on, scalable testbed for rapid prototyping, STEM outreach, and algorithm validation.
Assembling the 1tenth CARMA car
Completed car model
3D environment construction
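To illustrate how a custom algorithm can plug into this modular stack, below is a minimal sketch of a safety-stop node, assuming a ROS 2 installation with the ackermann_msgs drive interface common on 1/10th-scale platforms; the topic names and thresholds are illustrative, not the platform's actual configuration.

```python
# Minimal sketch (assumes ROS 2 + ackermann_msgs; topics/thresholds illustrative).
import rclpy
from rclpy.node import Node
from sensor_msgs.msg import LaserScan
from ackermann_msgs.msg import AckermannDriveStamped


class SafetyStopNode(Node):
    """Stops the car when the LiDAR sees an obstacle directly ahead."""

    def __init__(self):
        super().__init__('safety_stop')
        self.sub = self.create_subscription(LaserScan, '/scan', self.on_scan, 10)
        self.pub = self.create_publisher(AckermannDriveStamped, '/drive', 10)

    def on_scan(self, scan):
        n = len(scan.ranges)
        ahead = scan.ranges[n // 2 - 10: n // 2 + 10]  # narrow forward window
        cmd = AckermannDriveStamped()
        cmd.drive.speed = 0.0 if min(ahead) < 0.5 else 1.0  # stop within 0.5 m
        self.pub.publish(cmd)


def main():
    rclpy.init()
    rclpy.spin(SafetyStopNode())


if __name__ == '__main__':
    main()
```

Because each behavior lives in its own node, students can swap this placeholder for their own perception or control logic without touching the rest of the stack.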
The Level 4 Connected and Automated Vehicle (CAV), developed on a hybrid Chrysler Pacifica minivan platform, is a fully integrated research vehicle equipped with a comprehensive suite of sensors, including LiDARs, radars, high-resolution cameras, and a Mobileye system, for robust perception and environment sensing. It features a drive-by-wire control system that enables full automation of throttle, braking, and steering, as well as high-performance, industry-grade onboard computers capable of running advanced algorithms for perception, planning, and control. The vehicle is also equipped with an onboard unit (OBU) for Vehicle-to-Everything (V2X) communication, supporting real-time data exchange with infrastructure and other vehicles. A modular software architecture allows for the seamless integration of custom applications, enabling real-time data processing and experimentation with cooperative perception, sensor fusion, and trajectory optimization. This platform offers a scalable and versatile testbed for universities, public agencies, and industry partners to validate emerging CAV technologies in real-world environments.
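To make the OBU's role concrete, here is a deliberately simplified, hypothetical example of broadcasting a Basic Safety Message (BSM)-style payload over UDP. Production OBUs use DSRC or C-V2X radios with SAE J2735 encoding, so the fields, port number, and JSON encoding below are illustrative assumptions only.

```python
# Hypothetical, simplified V2X broadcast: a BSM-like JSON payload over UDP.
# Real OBUs use DSRC/C-V2X radios with SAE J2735 encoding; all fields and the
# port number here are illustrative assumptions.
import json
import socket
import time

def broadcast_bsm(sock, lat, lon, speed_mps, heading_deg, port=5005):
    msg = {
        'type': 'BSM',                        # simplified basic safety message
        'timestamp': time.time(),
        'position': {'lat': lat, 'lon': lon},
        'speed_mps': speed_mps,
        'heading_deg': heading_deg,
    }
    sock.sendto(json.dumps(msg).encode(), ('255.255.255.255', port))

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.setsockopt(socket.SOL_SOCKET, socket.SO_BROADCAST, 1)
broadcast_bsm(sock, 38.8977, -77.0365, speed_mps=12.5, heading_deg=90.0)
```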
Our work addresses the eco-trajectory planning problem, a particularly challenging subdomain of trajectory planning characterized by high non-linearity and non-convex fuel consumption objectives, by introducing an innovative optimization-free approximation (OFA) framework. This two-stage system divides the planning process into an offline module, where an optimal batch of fuel-efficient trajectories is pre-computed under a wide range of initial and terminal conditions, and an online module, which rapidly selects, adjusts, and stitches together feasible trajectories in real time based on actual signal timing and traffic dynamics. By eliminating the need for real-time optimization, our approach significantly reduces computational overhead, making it practical for CAV platforms with limited onboard processing capacity. The framework incorporates mechanisms for dynamic translation, truncation, and smoothing of trajectories, allowing robust adaptation to unexpected behaviors from human-driven vehicles and signal phase changes. Extensive simulation results demonstrate that the OFA framework delivers substantial fuel savings, smooth vehicle trajectories, and sub-millisecond processing times, even under mixed traffic scenarios with varying CAV market penetration rates.
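The minimal Python sketch below conveys the two-stage idea; the trajectory library keys, the linear placeholder speed profile, and the function names are our illustrative assumptions, not the framework's actual implementation, which pre-computes fuel-optimal profiles offline.

```python
# Illustrative two-stage OFA sketch; keys, grids, and the placeholder
# speed profile are assumptions, not the framework's actual library.
import numpy as np

def build_library(v0_grid, vf_grid, T_grid, dt=0.1):
    """Offline stage: pre-compute speed profiles over a grid of boundary
    conditions. A real library would store fuel-optimal profiles instead
    of the linear placeholder used here."""
    library = {}
    for v0 in v0_grid:
        for vf in vf_grid:
            for T in T_grid:
                t = np.arange(0.0, T + dt, dt)
                library[(v0, vf, T)] = (t, v0 + (vf - v0) * t / T)
    return library

def select_and_adapt(library, v0, vf, T, t_now, t_cutoff):
    """Online stage: nearest-neighbor selection, dynamic translation to the
    current clock, and truncation at the next signal phase change."""
    key = min(library,
              key=lambda k: abs(k[0] - v0) + abs(k[1] - vf) + abs(k[2] - T))
    t, v = library[key]
    t = t + t_now             # dynamic translation
    keep = t <= t_cutoff      # truncation at the phase change
    return t[keep], v[keep]
```

Because the online stage reduces to a table lookup plus array operations, its cost is essentially constant, which is what makes sub-millisecond processing times attainable on modest onboard hardware.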
We are advancing the frontier of autonomous vehicle (AV) technology by addressing the unique challenges of AV deployment in rural environments, where sparse infrastructure, unpredictable hazards, limited connectivity, and adverse weather conditions pose substantial operational barriers. Our research tackles these issues through a multifaceted approach combining advanced Vision-Language Models (VLMs) for semantic perception, reinforcement learning (RL) for adaptive control, and infrastructure-aware localization techniques. Specifically, we have developed a robust hazard detection framework—INSIGHT—that integrates semantic and visual inputs to detect and interpret rural-specific challenges such as wildlife crossings, agricultural machinery, poorly marked curves, and narrow bridges. To enhance CAV decision-making, we also implemented deep reinforcement learning algorithms (DDPG and TD3) trained on high-fidelity simulations in CARLA, enabling safe longitudinal control and emergency response in edge-case scenarios, such as multi-vehicle pileups and sudden braking events. These models are supported by advanced sensor calibration pipelines, adaptive sensor fusion for adverse visibility, and edge computing strategies that ensure autonomy in low-connectivity settings. Our unified control framework integrates adaptive cruise control and emergency braking, significantly outperforming conventional ADAS in rural testbeds.
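To illustrate one ingredient of this pipeline, the sketch below shows the TD3 target computation (clipped double-Q with target policy smoothing) in PyTorch; the network objects, longitudinal state/action encoding, and hyperparameters are assumptions for illustration rather than our trained configuration.

```python
# Illustrative TD3 target computation (PyTorch). The actor/critic networks,
# state/action definitions, and hyperparameters are assumed for illustration.
import torch

def td3_targets(actor_t, q1_t, q2_t, next_state, reward, done,
                gamma=0.99, noise_std=0.2, noise_clip=0.5, a_max=1.0):
    with torch.no_grad():
        # Target policy smoothing: clipped Gaussian noise on the target action.
        a_next = actor_t(next_state)
        noise = (torch.randn_like(a_next) * noise_std).clamp(-noise_clip,
                                                             noise_clip)
        a_next = (a_next + noise).clamp(-a_max, a_max)
        # Clipped double-Q: pessimistic minimum of the twin target critics.
        q_next = torch.min(q1_t(next_state, a_next), q2_t(next_state, a_next))
        return reward + gamma * (1.0 - done) * q_next
```

The twin critics and smoothed target action are what distinguish TD3 from DDPG: they curb the value overestimation that can make a learned braking policy dangerously optimistic in rare, safety-critical events.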
We are leading a multidisciplinary research effort to develop a next-generation cyberinfrastructure toolkit that enhances algorithmic capabilities in traffic simulation and deepens foundational knowledge in computational modeling. This project centers on the creation of a stochastic simulation platform designed to evaluate the safety performance of Autonomous Vehicles (AVs) and their Automated Driving Systems (ADS) under adverse winter driving conditions, such as icy or snowy roads. Recognizing the critical need to model stochastic vehicle behaviors and accurately predict crash risks, the platform integrates a CARLA-SUMO co-simulation framework, combining CARLA’s high-fidelity AV testing environment with SUMO’s scalable traffic flow simulation capabilities. The co-simulation operates in CARLA’s Town04 and incorporates dynamic weather and road friction variability, creating a highly realistic and flexible environment for safety assessment. By leveraging this hybrid architecture, the platform enables rigorous validation of vehicle dynamics and control algorithms across diverse and hazardous scenarios.
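As a rough sketch of how such conditions can be injected programmatically, the snippet below uses the CARLA Python API to set snow-like weather and reduce tire friction; the parameter values are illustrative, and it assumes a vehicle has already been spawned in the world.

```python
# Hedged sketch using the CARLA Python API; parameter values are illustrative
# and a vehicle is assumed to have been spawned already.
import carla

client = carla.Client('localhost', 2000)
client.set_timeout(10.0)
world = client.load_world('Town04')

# Dynamic weather: approximate heavy snowfall with precipitation, surface
# deposits, and fog (CARLA has no native snow model).
world.set_weather(carla.WeatherParameters(
    cloudiness=90.0, precipitation=80.0, precipitation_deposits=60.0,
    fog_density=40.0, sun_altitude_angle=15.0))

# Road-friction variability: lower tire friction to emulate an icy surface.
ego = world.get_actors().filter('vehicle.*')[0]
physics = ego.get_physics_control()
wheels = physics.wheels
for wheel in wheels:
    wheel.tire_friction = 1.0   # well below CARLA's dry-asphalt defaults
physics.wheels = wheels
ego.apply_physics_control(physics)
```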
More information is available at https://github.com/M-trail/NSF_OAC
Cooperative Perception (CP) systems represent a transformative advancement in transportation safety and efficiency by enabling vehicles to perceive beyond their own sensor limitations through real-time data exchange with nearby vehicles and infrastructure via Vehicle-to-Everything (V2X) communication. Unlike traditional Advanced Driver Assistance Systems (ADAS) and Automated Driving Systems (ADS), which are constrained by line-of-sight and environmental conditions, CP significantly enhances situational awareness, allowing vehicles to anticipate and respond to hidden or distant hazards, particularly in complex urban environments. While research in this area is progressing rapidly, challenges remain for large-scale deployment, including the need for reliable prototypes, cross-platform interoperability, and consistent performance under diverse real-world conditions. At M-TRAIL, our multidisciplinary team is addressing these challenges by developing advanced CP prototypes that seamlessly integrate vehicle-mounted sensors with intelligent infrastructure, with a strong focus on interoperability, functional safety, and readiness for deployment in operational environments.
Case 1: CAV Reacted to a Pedestrian - No CP Needed
A pedestrian is standing in the middle of the street. The Connected and Automated Vehicle (CAV) is able to directly detect the pedestrian from a sufficient distance using its onboard sensors. As a result, Cooperative Perception (CP) is not required for the vehicle to respond safely and appropriately.
Case 2: CAV Stopped Safely with CP Assistance
A pedestrian is initially obscured by a roadside obstruction, making it impossible for the approaching CAV to detect the individual using its onboard sensors. As the pedestrian begins to cross the street, they suddenly appear in the CAV's path. A roadside camera detects the pedestrian in advance and transmits the information to the CAV, enabling it to anticipate the pedestrian's presence and perform a timely stop, avoiding a potential collision.
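The decision logic in this case can be summarized with a toy sketch: detections from the roadside camera are fused with onboard detections before the stopping check, so the occluded pedestrian enters the planner's horizon early. All message fields, thresholds, and function names below are illustrative assumptions.

```python
# Toy sketch of the Case 2 decision logic; fields, thresholds, and names
# are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class Detection:
    x: float        # distance ahead of the CAV, in meters
    source: str     # 'onboard' or 'roadside'

def plan_speed(detections, v_ego, a_brake=4.0, margin=5.0):
    """Return a commanded speed; 0.0 triggers a stop."""
    if not detections:
        return v_ego
    nearest = min(d.x for d in detections)             # fuse all sources
    stop_dist = v_ego ** 2 / (2.0 * a_brake) + margin  # kinematic stopping distance
    return 0.0 if nearest <= stop_dist else v_ego

# With CP, the roadside camera reports the occluded pedestrian at 28 m,
# inside the ~33 m stopping envelope at 15 m/s, so the CAV stops in time.
# Without CP, the list stays empty until the pedestrian emerges (Case 3).
detections = [Detection(x=28.0, source='roadside')]
print(plan_speed(detections, v_ego=15.0))   # -> 0.0
```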
Case 3: CAV Crashed into Pedestrian Without CP
This scenario mirrors the conditions of Case 2, where a pedestrian is obscured by a roadside obstruction and begins crossing the street. However, in this case, no CP system is deployed. Without CP, the CAV relies solely on its onboard sensors, which cannot detect the pedestrian until it is too late to take corrective action. As a result, the CAV fails to stop in time and collides with the pedestrian.