Note: This article was first published on TechNode China (in Chinese).
China has been making strides in vehicle electrification for some time, with an eye to digitizing its entire automotive industry. As a key part of this shift, Chinese EV makers are currently competing to produce the most comprehensive assisted driving systems, endeavoring to turn their offerings into key selling points as the market matures.
Here, TechNode takes a look at the assisted driving software of three leading players in the Chinese EV sector.
Xpeng’s NGP advanced driver assistance system
The Advanced Driver Assistance System (ADAS) is the standout feature of Xpeng’s new model, the G6. Xpeng pitches it as possibly the most advanced autonomous driving technology in China: with 31 smart sensors, the G6 carries a denser sensor suite than its direct competitors. Dual forward-facing LiDAR units, millimeter-wave radars, cameras, and ultrasonic sensors distributed around the body give the vehicle 360-degree awareness of its surroundings.
In urban settings, the City NGP (Navigation Guided Pilot) smart navigation-assisted driving tool enables seamless travel along accessible city roads. Once a user inputs a destination and activates the tool, the vehicle holds its lane, changes lanes and overtakes when needed, merges on and off roads, steers around stationary vehicles and other obstacles, recognizes and passes through traffic-light intersections, handles loop roads, avoids construction zones, and gives way to pedestrians and non-motorized vehicles on its way from A to B.
The G6 comes with Lane Centering Control (LCC), a LiDAR-based adaptive cruise and lane-centering feature that helps the car maintain an optimal cruising speed. Linked to Xpeng’s XNet neural network, the system processes 4D information on dynamic targets, including the size, distance, position, and speed of vehicles and two-wheelers, as well as 3D information on static targets such as lane lines and road edges, viewed from a bird’s-eye perspective.
Compared to Xpeng’s first-generation visual perception architecture, XNet uses neural networks in place of manual post-processing, enabling end-to-end optimization of the algorithm. It offers enhanced 360-degree perception covering more than eight lanes laterally, which Xpeng says improves lane-change success rates. The G6’s recognition and display capabilities are vision-based, making it, by Xpeng’s account, the first vehicle in the industry not to rely on mapping. The system renders a detailed visual representation of traffic participants and road infrastructure around the vehicle, so drivers can see lane markings and nearby vehicles on the in-car map. XNet also recognizes and displays traversable areas, traffic lights, and turn signals, setting what Xpeng calls a new industry standard.
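To make this concrete, here is a minimal sketch of the kind of structured output an end-to-end perception network like XNet might emit directly, with no manual post-processing stage in between; every class and field name below is hypothetical, not Xpeng’s actual API:

```python
from dataclasses import dataclass

# Hypothetical output types for a BEV perception network; purely
# illustrative, not Xpeng's real data structures.

@dataclass
class DynamicTarget:
    """A moving road user, described with '4D' information (geometry plus motion)."""
    kind: str                            # e.g. "car", "two_wheeler"
    size: tuple[float, float, float]     # length, width, height (m)
    position: tuple[float, float]        # x, y in the ego vehicle's BEV frame (m)
    distance: float                      # straight-line distance to ego (m)
    speed: float                         # speed along heading (m/s)

@dataclass
class StaticTarget:
    """A static road element, described in 3D from a bird's-eye view."""
    kind: str                              # e.g. "lane_line", "road_edge"
    polyline: list[tuple[float, float]]    # sampled points in the BEV frame (m)

@dataclass
class PerceptionFrame:
    """One network output per time step, consumed directly by planning."""
    dynamic: list[DynamicTarget]
    static: list[StaticTarget]
```

In a first-generation pipeline, a hand-written post-processing layer would reconcile raw detections into structures like these; an end-to-end network learns to produce them directly, which is what allows the whole stack to be optimized jointly.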
On highways, the system can execute autonomous lane changes, lane selection, and overtaking by assessing the surrounding environment and the driving task at hand, such as avoiding restricted lanes and adhering to speed limits. It also handles on- and off-ramp transitions smoothly while switching between high-speed driving modes, with improved straight-line stability and cornering.
Li Auto’s City NOA smart driving
By directing strategic effort toward smart software and electric powertrains, Li Auto has made huge strides in smart space (SS) R&D, smart driving, and high-voltage fully electric platforms. With its own large model, called Mind GPT, Li Auto will soon begin testing its City NOA smart driving system.
Li Auto’s smart driving system doesn’t depend on high-precision maps; instead, it uses a bird’s eye view (BEV) large model to perceive and comprehend road structure in real time. The BEV large model has undergone extensive training, enabling it to generate stable road structure data for most roads and intersections as the car drives. For complex intersection patterns, the system adds a Neural Prior Net (NPN): a set of learned neural network parameters that encode an intersection’s layout. The parameters themselves are hard for humans to interpret directly, but the large model can decipher the patterns they capture. Compared to high-precision maps, NPN replaces hand-written rules with network models, making better use of environmental information.
For complicated intersections, NPN features must be extracted in advance. On a vehicle’s second approach to an intersection, the previously extracted NPN features are retrieved and combined with the BEV feature layer from the vehicle’s large-scale perception model, producing what the company says is an optimal perception outcome.
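A minimal sketch of that retrieve-and-fuse step, assuming a simple feature cache keyed by intersection and element-wise averaging as a stand-in for the real, unpublished fusion operator; all function and variable names here are hypothetical:

```python
import numpy as np

# Hypothetical cache of pre-extracted NPN features, keyed by intersection ID.
npn_cache: dict[str, np.ndarray] = {}

def remember_intersection(intersection_id: str, bev_features: np.ndarray) -> None:
    """First pass: extract and store the intersection's features as a prior."""
    npn_cache[intersection_id] = bev_features.copy()

def perceive(intersection_id: str, live_bev_features: np.ndarray) -> np.ndarray:
    """Later passes: fuse the stored prior with the live BEV feature layer."""
    prior = npn_cache.get(intersection_id)
    if prior is None:
        return live_bev_features               # first visit: live perception only
    return 0.5 * (prior + live_bev_features)   # stand-in for the real fusion step
```

The idea is that features extracted on an earlier pass can supplement live perception at complex intersections, exactly where a single pass is most likely to be ambiguous.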
Beyond road structure, the “AI driver” must also understand the traffic light rules at each intersection, another challenge on urban streets. The prevailing method is to write a rule-based algorithm that interprets traffic lights and road-use intentions; Li Auto instead relies on a large model. To navigate complex urban roads, it trained a Traffic Intention Net (TIN) that removes the need for software to interpret pre-set traffic rules or even know the exact position of a traffic light. The system feeds video footage into the TIN, which directly outputs the appropriate maneuver: turn left or right, go straight, or stop and wait. The TIN model is refined by analyzing how large numbers of human drivers react to signal changes at intersections. To make the “AI driver” emulate human judgment and driving patterns, Li Auto trained it on a huge amount of real driver behavior data, making NOA’s decision-making and planning more human-like while maintaining safety and adherence to traffic regulations.
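In spirit, this turns intersection handling into a classification problem over maneuvers. The toy PyTorch sketch below shows the shape of such a model; every layer, size, and name is invented for illustration and bears no relation to Li Auto’s actual network:

```python
import torch
import torch.nn as nn

MANEUVERS = ["turn_left", "go_straight", "turn_right", "stop_and_wait"]

class ToyTrafficIntentionNet(nn.Module):
    """Toy stand-in for a TIN: maps a short video clip to a maneuver."""

    def __init__(self, frames: int = 8):
        super().__init__()
        # Per-frame feature extractor; a real system would use a far richer backbone.
        self.backbone = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=5, stride=4),
            nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
            nn.Flatten(),
        )
        self.head = nn.Linear(16 * frames, len(MANEUVERS))

    def forward(self, clip: torch.Tensor) -> torch.Tensor:
        # clip: (batch, frames, channels, height, width)
        b, t, c, h, w = clip.shape
        feats = self.backbone(clip.reshape(b * t, c, h, w)).reshape(b, -1)
        return self.head(feats)                # logits over MANEUVERS

clip = torch.randn(1, 8, 3, 128, 128)          # one 8-frame dashcam clip
logits = ToyTrafficIntentionNet()(clip)
print(MANEUVERS[logits.argmax(dim=1).item()])  # e.g. "go_straight"
```

Training such a network on recordings of how human drivers respond to signal changes, as described above, is what lets it sidestep hand-written traffic-light rules.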
NOA is designed to cover more than 95% of owners’ commuting situations. As owners use NOA on their commutes, the underlying models will receive continual updates and training. In the second half of the year, Li Auto plans to introduce the NOA commuting feature and expand urban NOA coverage, with the goal of letting early adopters commute using NOA’s navigation-assisted driving.
Huawei Aito’s second-generation autonomous driving system
The M5 smart drive edition released by Huawei’s automotive brand Aito marks the debut of the telecom giant’s second-generation autonomous driving system, ADS 2.0, a comprehensive fusion perception system in which multiple sensors work together to provide 360-degree coverage. The suite consists of 1 LiDAR, 3 millimeter-wave radars, 11 cameras, and 12 ultrasonic radars, allowing for distance detection of up to 200 meters. The Aito M5 employs network technology based on fused BEV perception that can identify objects outside the standard obstacle whitelist. Paired with a road topology inference network, the Aito M5 is designed to drive efficiently with or without a map, able to see, understand, and navigate regardless.
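The obstacle whitelist point is worth unpacking: a conventional detector reports only the object classes it was trained to recognize, whereas a general obstacle (occupancy-style) perception network flags anything that occupies drivable space. A minimal sketch of the difference, with all names hypothetical:

```python
# A conventional detector keeps only classes on a fixed whitelist; a
# BEV occupancy-style network can flag anything that takes up space,
# even classes never seen in training (debris, a fallen tree, etc.).
OBSTACLE_WHITELIST = {"car", "truck", "pedestrian", "cyclist"}

def whitelist_filter(detections: list[dict]) -> list[dict]:
    """Conventional pipeline: unknown object classes are silently dropped."""
    return [d for d in detections if d["kind"] in OBSTACLE_WHITELIST]

def occupancy_filter(detections: list[dict]) -> list[dict]:
    """General obstacle perception: keep anything solid in the drivable area."""
    return [d for d in detections if d["occupies_drivable_space"]]

scene = [
    {"kind": "car", "occupies_drivable_space": True},
    {"kind": "unknown_debris", "occupies_drivable_space": True},
]
assert len(whitelist_filter(scene)) == 1   # misses the debris
assert len(occupancy_filter(scene)) == 2   # flags it anyway
```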
The Aito M5 can handle changing light conditions in tunnels and minimize the impact of nighttime glare, accurately identifying pedestrians, vehicles, and obstacles. On urban roads, the car actively maneuvers around obstructions caused by other vehicles, and the company claims it can cope with car doors opened carelessly into its path and cyclists emerging unexpectedly from a blind spot. Even in the most challenging conditions, such as intense glare at night, the Aito M5 can still perform emergency braking at speeds of up to 50 km/h.
With assisted driving engaged, the M5 can merge onto and off highway ramps with a claimed 98.86% success rate. The long-distance piloting system averages up to 114 km between driver takeovers, a figure known as MPI (miles per intervention, here quoted in kilometers), which Huawei says rivals experienced drivers.
The Huawei ADS 2.0 package comes with 19 features as standard, such as high-speed Lane Centering Control (LCC), urban LCC, and high-speed Navigation-based Cruise Assist (NCA). An optional advanced package adds urban NCA, Automated Valet Parking Assist (AVP), and enhanced LCC for urban areas.
Huawei’s Aito is the first car brand to achieve high-speed and urban smart driving capabilities without relying on high-precision maps, bringing the assisted driving experience significantly closer to L3 autonomy. According to Huawei’s roadmap, the mapless functionality will be introduced in 15 cities, including Shanghai, Guangzhou, and Shenzhen, during the third quarter of 2023, and will expand to 45 cities by the fourth quarter.
A sophisticated race to autonomy
The race to launch assisted driving in the Chinese market is well underway, and more carmakers and self-driving solution providers can be expected to join. As the competition intensifies, China’s Ministry of Industry and Information Technology (MIIT) plans to introduce an updated standard system guide for smart, network-connected vehicles.