The automotive navigation of tomorrow is highly connected and capable of dynamically using high-resolution map information and vehicle and environmental data from the cloud. It is an enabler for powerful driver assistance, intelligent e-mobility and autonomous driving. As a global provider of software engineering services for the mobility industry, Intellias is involved in many of these developments.
Although obtaining mapping and navigation data is easier today than it was 10 years ago — thanks to dashcams, UAVs and satellites — collecting this data is still labor-intensive. Even though most corners of the world are already recorded in public and private geographic information systems (GIS), maps still need regular maintenance. Data accuracy and timeliness are the two biggest challenges in the mobility industry, followed by coverage, as the physical world is constantly evolving. To meet these challenges, the evolution of navigation and digital mapping is gathering pace. The following six technology and deployment trends will drive automotive mapping and navigation in the coming years.
1. Enriching mapping data with AI
Satellite imagery was a breakthrough for map creation. The wrinkle, however, is that most mapping software cannot work directly with satellite photos. Visual data first needs to be codified into comprehensive navigation datasets in a suitable format such as the Navigation Data Standard (NDS). Then map owners must keep it up to date. Both processes are costly and labor-intensive, making them great use cases for artificial intelligence (AI) in mapping.
AI algorithms improve the speed and precision of digital map building by offering the ability to update maps more regularly and map new areas faster. They can classify objects in satellite images — buildings, roads, vegetation — to create enriched 2D digital maps as well as multi-layer 3D map models. With precise maps, you can delight users with better ETAs, detailed fuel or energy usage estimates, and richer point-of-interest information.
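As a toy illustration of the classification step, the sketch below assigns each pixel of an RGB tile to the nearest of three hand-picked class colours. A production system would use a trained semantic segmentation network; the centroid colours and the tiny tile here are invented for the example:

```python
import numpy as np

# Illustrative only: real pipelines use trained CNNs for semantic
# segmentation. Here each pixel is assigned to the nearest class
# "centroid" colour to show the map-enrichment idea.
CLASS_CENTROIDS = {          # hypothetical mean RGB values per class
    "building":   np.array([120, 110, 100]),
    "road":       np.array([ 90,  90,  90]),
    "vegetation": np.array([ 60, 140,  70]),
}

def classify_tile(tile: np.ndarray) -> np.ndarray:
    """Return an H x W array of class labels for an H x W x 3 RGB tile."""
    names = list(CLASS_CENTROIDS)
    centroids = np.stack([CLASS_CENTROIDS[n] for n in names])      # (C, 3)
    # Squared distance from every pixel to every class centroid.
    dists = ((tile[..., None, :] - centroids) ** 2).sum(axis=-1)   # (H, W, C)
    return np.array(names, dtype=object)[dists.argmin(axis=-1)]

# A tiny 2x2 "satellite tile": vegetation-green and road-grey pixels.
tile = np.array([[[60, 140, 70], [90, 90, 90]],
                 [[58, 138, 72], [92, 88, 91]]])
labels = classify_tile(tile)
print(labels[0, 0], labels[0, 1])  # vegetation road
```

The per-pixel label raster is what then gets vectorized into map features (building footprints, road polygons) for a standard such as NDS.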
Apart from facilitating the collection of mapping data, AI can also help with generating such data. Researchers from MIT and the Qatar Computing Research Institute (QCRI) recently released RoadTagger. This neural network can automatically predict the road type (residential or highway) and number of lanes even with visual obstructions present, such as a tree or building. The model was tested on occluded roads from digital maps of 20 U.S. cities. It correctly predicted the number of lanes with 77% accuracy and predicted road types with 93% accuracy.
That said, sensor data collection from connected vehicles isn’t going anywhere. OEMs are increasingly relying on their fleets to collect new insights for digital map creation, and this process is becoming easier with advances in machine learning. HERE Technologies recently presented UniMap — a new AI-driven technology for faster sensor data processing and map creation. The new solution can effectively extract map features in 2D and 3D formats, then combine them with earlier map versions. This unified map content data model allows new digital maps to be available in 24 hours.
2. NDS.Live: From offline databases to distributed map data systems
Conventional onboard navigation systems are designed, developed and integrated with proprietary databases, which become obsolete with every new product generation. NDS.Live is the new global standard for map data in the automotive ecosystem, promoting the transition from offline to hybrid/online navigation. It minimizes the complexities of supporting different data models, storage formats, interfaces and protocols with one flexible specification. NDS.Live is not a database, but a distributed map data system.
NDS.Live was co-developed by global OEMs and tech leaders; Intellias, Daimler, HERE, Denso, Renault and TomTom are among those who have already adopted it. For example, second-generation Mercedes-Benz User Experience (MBUX) systems are powered by NDS.Live. The distributed map data system provides fresh information for the driver assistance system, which gets visualized as augmented reality (AR) instructions on the head-up display (HUD). NDS.Live can help massively improve the navigation experience for EVs and regular connected vehicles. It also helps OEMs deploy value-added subscriptions for assisted driving and navigation.
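In spirit, the hybrid model works like a client that serves map tiles from an onboard cache and falls back to an online service for data it doesn't hold. The sketch below is a deliberate simplification with invented class names and tile IDs — it is not the actual NDS.Live interface:

```python
from typing import Callable, Dict

# Hypothetical sketch of hybrid (offline + online) tile access in the
# spirit of a distributed map data system; NOT the real NDS.Live API.
class HybridTileStore:
    def __init__(self, fetch_online: Callable[[int], bytes]):
        self._cache: Dict[int, bytes] = {}   # onboard tile cache
        self._fetch_online = fetch_online    # cloud/edge tile service

    def get_tile(self, tile_id: int) -> bytes:
        # Serve from the local cache when possible (offline operation);
        # otherwise pull the tile from the online service and keep it.
        if tile_id not in self._cache:
            self._cache[tile_id] = self._fetch_online(tile_id)
        return self._cache[tile_id]

# Simulated online service: tile IDs mapped to (dummy) map payloads.
cloud = {545370: b"geometry+lanes", 545371: b"geometry+poi"}
store = HybridTileStore(lambda tid: cloud[tid])

print(store.get_tile(545370))  # fetched online, then cached
print(store.get_tile(545370))  # served from the onboard cache
```

The real standard additionally versions tiles and splits content into layers (routing, display, ADAS), so a client can refresh exactly the layers it needs.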
3. 3D and HD map generation
3D maps enable accurate rendering of physical objects in a three-dimensional form. High-definition (HD) maps feature detailed information about road features (lane placements, road boundaries) and terrain type (severity of curves, gradient of the road surface). Both types of maps are essential for launching advanced ADAS features and, ultimately, ushering in the era of autonomous driving.
3D maps define how the vehicle moves and help it interpret the data it receives from onboard sensors. Since most sensors have a limited range, HD maps assist by providing the navigation system with extra information on road features, terrain and other traffic-relevant objects.
The bottleneck of both HD and 3D mapping is collecting and rendering data. In the case of 3D maps, you need to capture video in real time from multiple cameras, plan for interference due to vibration, temperature and hardware issues, and then repeat the process across billions of kilometers of roads worldwide. Rather than doing this huge task alone, mobility players and OEMs join forces:
• HERE and Mobileye, for example, partnered to crowdsource HD mapping data collection, with VW joining later. Mobileye developed a compact, high-performance computer vision system-on-chip called EyeQ. Installed by more than 50 OEMs across 300 vehicle models, the system supplies Mobileye with ample visual data that it can then render into maps with the help of partners.
• TomTom, in turn, teamed up with Qualcomm Technologies to crowdsource HD mapping insights from its users. Qualcomm provides the underlying cloud-based platform for making and maintaining HD maps from various sources, including swarms of connected vehicles.
4. Autonomous driving simulations
Autonomous vehicles require extensive road and track tests to pass safety validation. Manufacturers also need to simulate near-crash events without putting anyone in danger. Hyper-realistic virtual worlds can be much safer testbeds for autonomous vehicles (AVs) — especially as virtualization technology improves.
A group of researchers released an open-source, data-driven simulation engine for building photorealistic environments for AV training. The engine can simulate complex sensor types including 2D RGB cameras and 3D lidar, as well as generate dynamic scenarios with several vehicles present. With the new engine, users can simulate complex driving tasks such as overtaking and following.
Waymo takes a similar approach of using real-world data collected from vehicle cameras and sensors to create highly detailed virtual testbeds. The Waymo team has built virtual replicas of several intersections complete with identical dimensions, lanes, curbs and traffic lights. During simulations, Waymo algorithms can be trained to perform the most challenging interactions thousands of times, using the same or different driving conditions and different vehicles from its fleet.
To perfect the performance of the algorithm, the team uses a fuzzing technique. During training sessions, engineers vary the speed of other vehicles, traffic light timing and the presence or absence of zig-zagging joggers and casual cyclists. Once the Waymo algorithm learns the trick of driving through a specific intersection with a flashing yellow arrow, the “skill” becomes part of the knowledge base, shared with every vehicle across the fleet.
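A minimal sketch of that fuzzing loop, with made-up parameter names and ranges standing in for a real simulator's scenario description:

```python
import random

# Sketch of scenario "fuzzing": replay one intersection many times while
# randomising the parameters the text mentions (vehicle speeds, signal
# timing, presence of joggers/cyclists). All names/ranges are illustrative.
def fuzz_scenario(rng: random.Random) -> dict:
    return {
        "other_vehicle_speed_kmh": rng.uniform(20, 70),
        "yellow_arrow_period_s":   rng.uniform(2, 6),
        "jogger_present":          rng.random() < 0.3,
        "cyclist_present":         rng.random() < 0.3,
    }

def run_training(num_runs: int, seed: int = 0) -> list:
    rng = random.Random(seed)   # seeded, so a run can be reproduced exactly
    results = []
    for _ in range(num_runs):
        scenario = fuzz_scenario(rng)
        # A real simulator would drive the planner through `scenario`
        # here and score the outcome; we just record the variation.
        results.append(scenario)
    return results

runs = run_training(1000)
speeds = {round(r["other_vehicle_speed_kmh"]) for r in runs}
print(len(speeds))  # many distinct speeds, i.e. broad scenario coverage
```

Seeding the generator matters in practice: when a fuzzed scenario exposes a failure, engineers need to replay exactly the same conditions to debug it.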
The new generation of high-fidelity 3D environments can be built with data from different sensor types to effectively convey all details of the material world to the algorithm. Existing 3D visual databases already include realistic details for traffic signs, pavement markings and road textures. With machine learning and deep learning algorithms, complex ADAS/AD scenarios can simulate close to real-life conditions.
5. Digital twins of road infrastructure
While OEMs leverage dashcam data collection for building better navigation systems, transportation managers use the same intelligence to digitize road infrastructure. A digital twin is an interactive, virtual representation of physical assets or systems such as a smart traffic light network or smart parking facilities. Powered by real-time data, digital twins of road infrastructure can enable advanced urban planning scenarios. This includes dynamic traffic signal optimization to reduce congestion, prioritized public and service transport management, and accurate traffic predictions that inform planning, signage and construction work schedules.
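As a toy example of what a digital twin enables, the sketch below replays assumed arrival rates through a one-line queue model to choose the green-time split at a single intersection. Both the rates and the model are illustrative, not a real traffic engineering method:

```python
# Toy digital-twin experiment: use measured arrival rates to pick the
# green split that minimises total queue growth at one intersection.
def queue_growth(arrival_rate, service_rate, green_fraction):
    # Vehicles arriving minus vehicles served during the green share.
    return max(0.0, arrival_rate - service_rate * green_fraction)

def best_green_split(rate_ns, rate_ew, service_rate=1.0, steps=100):
    # Try every candidate split and keep the one with the least total
    # queue growth across the north-south and east-west approaches.
    candidates = [i / steps for i in range(steps + 1)]
    return min(
        candidates,
        key=lambda g: queue_growth(rate_ns, service_rate, g)
                      + queue_growth(rate_ew, service_rate, 1 - g),
    )

# North-south traffic twice as heavy as east-west:
print(best_green_split(0.6, 0.3))  # 0.6: NS gets the larger green share
```

A real twin would feed live detector counts into a far richer simulation, but the loop is the same: replay observed demand, search over control settings, deploy the best one.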
Low latency is crucial for autonomous driving. Yet 3D map generation on the edge requires substantial computing power. Moreover, vehicles cannot store all mapping data on their route and need to constantly receive over-the-air updates. A group of researchers has proposed placing compact map distribution devices on roadside edges to facilitate point cloud data (PCD) map delivery on the go. The results show that autonomous vehicles can perform self-localization while downloading PCD maps. This system allows autonomous vehicles to receive dynamic new maps for each new destination instead of storing tremendous data records onboard.
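The delivery scheme can be sketched as a small onboard cache that streams upcoming PCD tiles from the nearest edge node and evicts the tiles behind the vehicle; the identifiers and payloads below are invented:

```python
from collections import OrderedDict

# Illustrative sketch of on-the-go point cloud (PCD) map delivery from
# roadside edge nodes: the vehicle keeps a small LRU cache of tiles and
# streams upcoming ones instead of storing the whole map onboard.
class EdgeMapClient:
    def __init__(self, edge_fetch, cache_size: int = 4):
        self._fetch = edge_fetch          # nearest roadside edge node
        self._cache = OrderedDict()       # tile_id -> PCD payload
        self._cache_size = cache_size

    def tile_for(self, tile_id: int) -> bytes:
        if tile_id in self._cache:
            self._cache.move_to_end(tile_id)      # mark recently used
        else:
            self._cache[tile_id] = self._fetch(tile_id)
            if len(self._cache) > self._cache_size:
                self._cache.popitem(last=False)   # evict oldest tile
        return self._cache[tile_id]

# Simulated edge node holding PCD tiles along one route.
edge = {i: f"pcd-tile-{i}".encode() for i in range(10)}
client = EdgeMapClient(lambda tid: edge[tid], cache_size=4)
for tid in range(8):          # drive along the route, tile by tile
    client.tile_for(tid)
print(sorted(client._cache))  # only the last 4 tiles kept onboard
```

Localization keeps running against the cached tiles while the next ones download, which is what lets the vehicle take a new destination without carrying the whole map.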
6. AR in HUD navigation products
The latest vehicles have an upgraded human-machine interface (HMI) design, featuring new hardware and software elements that allow for AR navigation. AR in HUDs can deliver all standard information from static displays (driving speed, status of the ADAS system, fuel or charge levels), alongside dynamic routing instructions, including information on traffic signs, speed limits, construction work alerts and ETAs.
Overall, AR navigation systems can help drivers make better decisions on the road. A recent comparative study found that drivers using AR-augmented HUDs made fewer errors and drove faster on average than those using conventional HUDs. Participants also rated AR HUD instructions as more useful and easier to understand.
The next advance in navigation will be holographic displays, offering AR instructions in 3D. Advances in lidar technologies already allow for projecting ultra-HD holographic representations of road objects in real time into the driver’s field of view. Such systems can enable shorter obstacle visualization times and reduce driving-related stress, according to Tech Explore.