Android Devices Terminology

Android Automotive uses the following terms and acronyms.

Android Application Package (APK) The archive (package) file format used by the Android operating system to distribute applications.
Android Auto Smartphone projection developed by Google to allow mobile devices running Android 5.0 or higher to project applications into the car.
Android Automotive Embedded operating system and platform on which to develop automotive applications.
Android Open Source Project (AOSP) Repository for the Android software stack. Led by Google, the AOSP repository offers the information and source code for creating custom variants of the Android stack, porting devices and accessories to the Android platform, and ensuring Android devices meet compatibility requirements.
Application Programming Interface (API) Set of protocols that enable users to programmatically access tools and services and create software applications.
Audio Video Bridging over Ethernet (Ethernet AVB) Set of extensions to the core IEEE 802.1 standards that provide time-synchronized low-latency streaming services.
Automotive Safety Integrity Level (ASIL) Risk classification scheme defined by the ISO 26262 (Functional Safety for Road Vehicles) standard.
Automotive Test Suite (ATS) Test suite designed for verifying Android Automotive implementations work as expected. For example, ATS tests might exercise Car*Manager APIs to verify vehicle HVAC integration.
Board Support Package (BSP) SoC-specific firmware for a device.
Controller Area Network (CAN) Vehicle bus standard that allows microcontrollers and devices to communicate with each other.
Compatibility Definition Document (CDD) Document that enumerates the software and hardware requirements of a compatible Android device. For details, refer to Android Compatibility.
Compatibility Test Suite (CTS) Suite of tests to establish compatibility with the upstream Android Platform. For details, refer to Compatibility Test Suite.
Critical User Journey (CUJ) The path users take to achieve a critical (important) goal.
Digital Audio Broadcasting (DAB) and Terrestrial-DAB (T-DAB) Audio broadcasting in which analog audio is converted into a digital signal and transmitted on an assigned channel in the AM or (more usually) FM frequency range.
Digital Rights Management (DRM) System for protecting the copyrights of data circulated on the Internet or other digital media by enabling secure distribution and/or disabling illegal distribution of the data.
Digital Signal Processor (DSP) Specialized microprocessor (or a SIP block), with architecture optimized for the operational needs of digital signal processing. Designed to measure, filter, and/or compress continuous real-world analog signals.
Driver-Distraction (DD) Driving while engaged in activities that take the driver’s attention away from the road.
Google Automotive Services (GAS) Google Mobile Services (GMS) for automotive implementations. Provides a set of Google services and apps that can be integrated into Android Automotive devices.
Hardware Abstraction Layer (HAL) Software layer that all other higher level modules must interact with to access hardware functionality. Only the HAL can directly call the device drivers for the various hardware components on the device.
Head Unit (HU) Computing unit that powers the main display in the vehicle center console.
Heating, Ventilation and Air Conditioning (HVAC) Set of mechanical infrastructure functions designed to maintain a specific operating environment. HVAC systems perform activities such as warming homes, cooling data centers, and controlling fan speed in vehicles.
In-Vehicle Infotainment (IVI) Set of vehicle hardware and software functions that provide audio and/or video entertainment. Often used synonymously with Head Unit (HU) when describing the user-facing functionality of an Android Automotive device.
Key Performance Indicators (KPI) Business metrics for evaluating factors crucial to the success of an organization.
Local Interconnect Network (LIN) Serial network protocol used for communication between components in vehicles.
Original Equipment Manufacturer (OEM) Automakers (or their suppliers) who create integrated IVI software for vehicles.
Real-Time Operating System (RTOS) OS for real-time applications that process data on receipt with minimal or no buffering delays. Processing time requirements (including OS delays) are measured in tenths of seconds or shorter increments of time.
Service-Level Agreements (SLAs) Service contract between two parties that defines an agreement about the provided service in measurable terms such as performance, availability, reliability, etc.
System on Chip (SoC) Integrated circuit that combines all components of a computer or other electronic system into a single chip.
Trusted Execution Environment (TEE) Environment created by a small OS that runs beneath the regular kernel and is supported by special hardware. This OS can run special apps that are kept safe from each other and from the regular OS and programs (even when the regular OS is controlling the regular hardware). It can access cryptographic credentials in hardware to let specific programs prove their identity, either over the network or to secure storage hardware.
Vehicle HAL Interface that defines the properties OEMs can implement and contains property metadata (for example, whether the property is an int and which change modes are allowed).
Vehicle Mapping Service (VMS) In-vehicle data exchange service supporting advanced driver assistance systems (ADAS). Enables the sharing of road and navigation data with other vehicle systems, allowing many vehicle components and systems to behave more intelligently as they gain awareness of the road around them.
Vehicle Network Service (VNS) Controls the vehicle HAL with built-in security. Access is restricted to system components only (non-system components such as third-party apps should use the Car API instead).
Park, Reverse, Neutral, Drive and Low (PRNDL) Gears available in most vehicles.

Exposing a Gesture Sensor in Android Automotive OS

As humans, we often wonder how we can improve our well-being. Digitalisation has been developing rapidly over the years, and we have yet to discover the new, amazing things that will make our everyday lives more meaningful. This includes the automotive industry, which has been seeking to bring the most innovative technology to market for drivers and passengers alike.

However, digitalisation efforts in the automotive industry have not seen much improvement over the years. If we look at the technology available in a 2020 car, we cannot see much difference compared to a 2015 car, for instance. Most functionality has been there for years and, despite many improvements, its full potential is yet to be unlocked.

But how can we achieve this? The answer is quite simple: allocate more resources to research and digitalisation efforts, but also work on and contribute to open-source projects.

That’s right. Open source is the answer! Open source has been present in the software development field for decades and has forever changed the course of the industry. Discussing the advantages of open source goes, however, beyond the scope of this article. But mentioning open source is important because Android, a popular and beloved mobile operating system, has been part of this family ever since its initial release in 2008.

We are all more or less aware of how Android gained popularity on mobile phones, or “smartphones” as we like to call them. No doubt, whenever someone mentions the word “Android”, the first thing people picture is probably a smartphone running Android. But many are not yet aware that Android has been expanding into something more ever since the rise of smartphones. Android can run on many embedded systems, and this opens up huge potential for a lot of industries, the automotive industry among them.

The popularity of Android on mobile phones has undoubtedly created a need for this operating system on other devices, such as the in-vehicle infotainment (IVI) systems in modern cars. As expected, this need has been identified by manufacturers and vendors who wish to bring Android into the car market. This opens a new world in the software development field and pushes the digitalisation of cars further than ever before. Add all the advantages of open source to that and notice the difference.

As a result, new and innovative use cases keep appearing. It is very clear that users, and especially drivers, want as little physical interaction with the car as possible. In other words, drivers want to control their cars mostly through voice commands and other means that allow them to stay focused on the road.

While it may still need improvement, the voice command feature has been around for a while and is a well-researched area. We can imagine a future in which you control your car using your favorite voice assistant. Amazon, for instance, has already created something that goes in this direction: it’s called Alexa Auto.

But there has been so much focus on voice commands that other potential use cases have been more or less neglected. Moreover, voice control does not work at all for people who are hearing- or speech-impaired. Therefore, we have identified two possible scenarios that are more inclusive towards this group of people:

  1. Using a hand gesture detection sensor

  2. Using a rotary pad (covered in a follow-up article)

In this article, we will discuss how to expose a hand gesture detection sensor in Android Automotive. We will focus on the different methods used to achieve this.

The availability of gesture sensors is still limited at the time of writing. There are a couple of radar-based gesture sensors on the market, but all of them are proprietary solutions and some require extensive knowledge of radar technology and physics. There are also quite a few simple and cheap variants, which are easier to use but less accurate. For brevity, we will pick a very simple scenario in which we have a basic APDS-9960 sensor accompanied by an Arduino board.

Let’s assume that the board supports two types of gesture events:

SWIPE_LEFT (0x0)
SWIPE_RIGHT (0x1)

This would be the raw data coming from the sensor, and our final goal is to make sure we can transfer this data to the user apps on an embedded Android Automotive system, for instance an In-Vehicle Infotainment (IVI) system that runs Android Automotive.

In a traditional Android app, the process would be fairly straightforward: we would use the Native Development Kit (NDK) to call native code through JNI. However, this is not optimal, and each individual user application (project) would need to bundle the NDK.

In the above diagram, we can see how the boards would communicate with our Android app, but with Android Automotive, things look a little different. It is important to remember that Android Automotive provides some abstraction functionality out of the box, and this can simplify development.

In an automotive scenario, we have an extra player in the game: you guessed it, the CAN bus.
The Controller Area Network (CAN) is a network topology designed to allow microcontrollers to communicate with one another without requiring a host computer. In an automotive context, this allows for more seamless communication between ECUs. Diving into how data is transmitted over CAN is, however, beyond the scope of this article. For now, let’s assume that the CAN bus acts as a relay for the raw data signal and that we don’t even know what lies beyond it. In other words, we receive just some plain 0s and 1s, without knowing that they come from a basic APDS-9960 sensor.

So how are those tiny little bits going to reach our end user app and be interpreted as SWIPE_LEFT and SWIPE_RIGHT, respectively? The answer is that there are multiple approaches and solutions.

To understand what we are talking about here, we need some knowledge of Android’s internals and what each architectural layer does.

System and user apps have limited access to the underlying layers, and this can be a problem in use cases where additional access is required. Currently, we know that we have JNI to access the native layer, and from there we can write native code to satisfy our needs. But every developer would have to know how this is done, and using the NDK is not very efficient here; in fact, there is so much overhead that it is a poor choice in the long run. This would be the traditional way of achieving our result, and that is not something we want. We want each of our Android apps, be it a system or user app, to be able to listen to the gesture sensor’s events without additional boilerplate code. Simply put, we just want to ‘subscribe’ to the events.

Fortunately, the Android Automotive architecture brings us new ways to achieve this.

We can observe the following additions:

  • Car API (Framework Level). This is the “entry point” of the API. It contains all the classes that provide Android Automotive functionality, the most important and relevant for this article being CarPropertyManager, which we will talk about a little later. The full package list is available in the Android Automotive documentation.

  • Car Service. The Car Service is the underlying service responsible for interacting with the system services and the vehicle network service. It provides an easy way to access these underlying services and facilitates access to the lower layers.

  • Vehicle Network Service. Controls the vehicle HAL with built-in security. Access is restricted to system components only (non-system components such as third-party apps should use the Car API instead).

  • The Vehicle HAL. Interface that defines the vehicle properties OEMs can implement. Contains property metadata (for example, whether the vehicle property is an int and which change modes are allowed).

The Car API provides functionality through which most underlying sensors become accessible at a higher layer of the architecture.

Let’s have a closer look at how this is done using CarPropertyManager, which extends the CarManagerBase class. Other classes that extend CarManagerBase are, for instance, CarSensorManager (deprecated in favor of CarPropertyManager), CarHvacManager, CarBluetoothManager, etc. We can therefore see that several car functionalities can be controlled more or less out of the box.

Through CarPropertyManager, vehicle properties can be obtained by subscribing with the relevant property ID. They are delivered in the form of CarPropertyEvent callbacks. The CarPropertyEventCallback interface contains two methods:

void onChangeEvent(CarPropertyValue value);
void onErrorEvent(int propId, int zone);

They are pretty much self-explanatory: onChangeEvent is triggered when a property is updated, and onErrorEvent is called when an error with the property occurs. This can be used to listen to particular sensor events.

For instance, let’s assume that we want to pass a SWIPE_LEFT event coming from our gesture detection sensor. If no errors occur during the process, onChangeEvent will be invoked with the appropriate CarPropertyValue. CarPropertyValue holds an abstract type, so the value can be passed as an object.
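
To make this more concrete, here is a minimal sketch of how an app could subscribe to such events. VENDOR_GESTURE_EVENT is a hypothetical vendor property ID which we assume the OEM has defined in its Vehicle HAL; the Car API calls themselves (Car.createCar, getCarManager, registerCallback) are the standard ones.

import android.car.Car;
import android.car.hardware.CarPropertyValue;
import android.car.hardware.property.CarPropertyManager;
import android.content.Context;

public class GestureEventListener {
    // Hypothetical vendor property ID; the real one would come from the OEM's VHAL.
    private static final int VENDOR_GESTURE_EVENT = 0x21400001;
    private static final int SWIPE_LEFT = 0x0;
    private static final int SWIPE_RIGHT = 0x1;

    public void subscribe(Context context) {
        Car car = Car.createCar(context);
        CarPropertyManager manager =
                (CarPropertyManager) car.getCarManager(Car.PROPERTY_SERVICE);
        manager.registerCallback(new CarPropertyManager.CarPropertyEventCallback() {
            @Override
            public void onChangeEvent(CarPropertyValue value) {
                // The raw sensor value arrives wrapped in a CarPropertyValue.
                int gesture = (Integer) value.getValue();
                if (gesture == SWIPE_LEFT) {
                    // handle a left swipe
                } else if (gesture == SWIPE_RIGHT) {
                    // handle a right swipe
                }
            }

            @Override
            public void onErrorEvent(int propId, int zone) {
                // Something went wrong while reading or setting the property.
            }
        }, VENDOR_GESTURE_EVENT, CarPropertyManager.SENSOR_RATE_ONCHANGE);
    }
}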

So far so good, but this is not the only way to do it. We can also map those 0s and 1s we talked about earlier onto key input events.

To simplify things, we can map a key input event to pass our events to the Vehicle HAL and on to the CarInputService.

In our case, the “Hardware Button” here would be the gesture data (SWIPE_LEFT and SWIPE_RIGHT), which can be mapped as input actions in our VHAL:

VEHICLE_HW_KEY_INPUT_ACTION_DOWN = 0,
VEHICLE_HW_KEY_INPUT_ACTION_UP = 1,

In our scenario, VEHICLE_HW_KEY_INPUT_ACTION_DOWN translates to SWIPE_LEFT and VEHICLE_HW_KEY_INPUT_ACTION_UP translates to SWIPE_RIGHT.

Afterwards, at a higher level, we can handle the event like so:

public class MyClusterRenderingService extends InstrumentClusterRenderingService {
    @Override
    protected void onKeyEvent(KeyEvent keyEvent) {
        System.out.println(keyEvent.getAction());
    }
}

This is a simple example of how it would work in an instrument cluster rendering service, but you can use the same approach in any other case.
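
Extending the snippet above, the handler could translate the key action back into the gesture it encodes, following the hypothetical mapping we chose (as in the original snippet, the service's other required methods are omitted for brevity):

import android.car.cluster.renderer.InstrumentClusterRenderingService;
import android.view.KeyEvent;

public class MyClusterRenderingService extends InstrumentClusterRenderingService {
    @Override
    protected void onKeyEvent(KeyEvent keyEvent) {
        // Reverse the mapping defined in the VHAL above.
        if (keyEvent.getAction() == KeyEvent.ACTION_DOWN) {
            handleSwipeLeft();   // VEHICLE_HW_KEY_INPUT_ACTION_DOWN
        } else if (keyEvent.getAction() == KeyEvent.ACTION_UP) {
            handleSwipeRight();  // VEHICLE_HW_KEY_INPUT_ACTION_UP
        }
    }

    private void handleSwipeLeft() { /* e.g. move focus to the previous item */ }
    private void handleSwipeRight() { /* e.g. move focus to the next item */ }
}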

Android Automotive and Physical Car Interaction

In order to be able to physically interact with the car it is running on, Android Automotive needs access to the in-vehicle networks (IVN). What these IVNs are and how Android handles the connection to them is the subject of this article.

In-vehicle Networks

In-vehicle networks (IVN) make up the car’s internal nervous system and are responsible for the communication between the various electronic control units (ECUs), as shown in the figure below.

In order to gain access to or control over these ECUs, Android needs access to these IVNs. One of these IVNs is the Controller Area Network, the main focus of this article.

Controller Area Network

The Controller Area Network (CAN) is a network designed to allow microcontrollers and devices, or ECUs in general, to communicate with each other without the need for a host computer. The communication takes place in a peer-to-peer fashion. Before CAN, the individual ECUs within cars were connected through individual cables carrying analog signals. With an increasing number of ECUs, this added significant cost and weight to the car, which led Bosch to start developing the CAN standard in 1986 as a solution to this problem. In 1991, the ISO 11898 standard was established. CAN replaces the old system by connecting each component to a single uniform data line on which data is transmitted sequentially.

Nowadays, the Controller Area Network can be found in road vehicles, hospitals, elevators and many other systems where reliable communication is crucial.

Physical Architecture

[Figure: CAN bus physical architecture. Image licensed under CC BY-SA 3.0, https://creativecommons.org/licenses/by-sa/3.0/deed.en]

Data within the CAN is transmitted via two wires: CAN high and CAN low. Both data lines are either in a dominant state (representing a digital 0) or in a recessive state (representing a digital 1). The voltage of both data lines is typically 2.5V during the recessive state, and 3.5V (CAN high) and 1.5V (CAN low) during the dominant state. In order to transfer data, two or more nodes are required. These nodes are set up as shown below and consist of three main components.

Transceiver

Responsible for transmitting and receiving data as electrical signals. It is directly connected to both the CAN high and the CAN low data lines.

CAN controller

Responsible for exchanging data as serial bits with the transceiver and as CAN frames with the microcontroller.

Microcontroller

Responsible for processing received data and deciding which data to transmit to the CAN bus.

Sending and requesting data

Within the CAN bus, all participating nodes can send four types of frames: the data frame, the remote frame, the error frame and the overload frame. Relevant for us are the data frame, which is used to transmit data, and the remote frame, with which data can be requested from another node.

Both data and remote frames are preceded by a unique identifier (the 11-bit arbitration field, discussed below). This ID indicates the type of data being transmitted. The following bit specifies whether the frame is a data frame (0) or a remote frame (1). In the case of a remote frame, the node able to process the request responds with the requested data by transmitting a data frame with the same ID.

The data length is defined by the 4-bit data length code (DLC) in the control field and ranges from 0 to 8 bytes (64 bits). Even though the DLC could encode values up to 15, the data size is limited to 8 bytes by definition. Thus, only short messages can be sent in a single frame. The following 15 bits are reserved for the CRC, which ensures that no errors occurred during the transmission. Once a node has correctly received the frame, it sets the acknowledge bit to the dominant state.
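
For orientation, the fields of a standard CAN data frame (base format) line up as follows:

SOF (1 bit) | Identifier (11 bits) | RTR (1 bit) | Control (6 bits, incl. 4-bit DLC) | Data (0 to 8 bytes) | CRC (15 bits + delimiter) | ACK (1 bit + delimiter) | EOF (7 bits)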

[Figure: CAN frame layout. Image licensed under CC BY-SA 3.0, https://creativecommons.org/licenses/by-sa/3.0/deed.en]

ID Arbitration

Since CAN is a multi-master network, there is no single instance that decides which of the nodes is allowed to talk at a given point in time. So the question arises: how is this eventually determined?

Every data frame transmitted by a node is preceded by a unique identifier that also reflects the priority of the message to be transmitted. This ID is used in the arbitration process. In simple terms, a message with a lower ID has priority over a message with a higher ID, meaning that the message with the lowest ID will always be transmitted first.

To understand how this works within the CAN network, we first have to understand how data is transmitted on the physical layer: the ISO standard differentiates between a “recessive” and a “dominant” state on the data line, where a recessive state represents a logical 1 and a dominant state a logical 0. Within the CAN network, a logical 0 will always dominate a logical 1.

Now let us assume that two nodes are transmitting a frame of data at the same time. Node 1 (N1) transmits a frame with ID 127 and node 2 (N2) transmits a frame with ID 128.
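
In binary (11-bit identifiers, most significant bit first), the two IDs look like this:

N1 (ID 127): 0 0 0 0 1 1 1 1 1 1 1
N2 (ID 128): 0 0 0 1 0 0 0 0 0 0 0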

Each bit of the frame is sent at the same time by both N1 and N2. Simultaneously, both nodes read back the actual value present on the bus. Looking at the table above, we can see that N2 transmits a 1 at the fourth bit where N1 transmits a 0. N2 then checks the CAN bus and (since a 0 dominates a 1) reads back a 0. This tells N2 that another node is currently transmitting a frame with a lower ID and higher priority, so N2 immediately stops transmitting. N1, on the other hand, reads back the 0 it transmitted and therefore continues. By the end of the 11 arbitration bits, N1 is the only winner and gains bus access without any bit loss or delay. N2 responds to its failure to gain bus access by automatically switching to receive mode; it then repeats the transmission attempt as soon as the bus is free again.

To understand how Android handles the physical interaction with the car by connecting to its IVNs, it is worth first taking a look at Android’s architecture layers:

The Hardware Abstraction Layer (HAL) plays the most important role here. It acts as a separation layer between the physical hardware devices and the Android Framework by simply abstracting their functionality into software interfaces.

Before the introduction of Android 8, these interfaces were simply defined as C header files. The hardware-specific implementation was then written in C/C++. Communication between the Android Framework code written in Java and the C++ code took place via JNI. The major disadvantage of this approach was that the header files were not versioned, which required drivers to be rewritten whenever an interface definition within the HAL changed, eventually slowing down the Android upgrade process.

To solve this problem, Google introduced the HAL Interface Definition Language (HIDL) with the release of Android 8.0.

HIDL

The HAL Interface Definition Language (HIDL) is used within the Hardware Abstraction Layer to define the interfaces describing the individual hardware devices used in combination with Android. The main advantage HIDL has over the previous approach of using header files is that HIDL interfaces are backwards compatible. To achieve this, each change to an interface is signed, published and versioned. Once published, a specific version of the interface is immutable. Implementations of old versions are expected to keep working as long as the signature of that interface is supported by future versions of Android.

The implementation of a HIDL interface typically runs in a separate process, so that the system service talks to the hardware driver implementation using Binder, Android’s built-in inter-process communication (IPC) mechanism. One exception is the passthrough mode for compatibility with already existing legacy HAL implementations, which instead runs within the process of the calling system service.

The difference between the legacy HAL and HIDL-based HAL definition can be found below

struct vibrator_device;
typedef struct vibrator_device {
    struct hw_device_t common;
    //…
    int (*vibrator_on)(struct vibrator_device* vibradev, unsigned int timeout_ms);
    int (*vibrator_off)(struct vibrator_device* vibradev);
    //…
} vibrator_device_t;

hardware/libhardware/include/hardware/vibrator.h

interface IVibrator {
    //…
    on(uint32_t timeoutMs) generates (Status vibratorOnRet);
    off() generates (Status vibratorOffRet);
    //…
};

hardware/interfaces/vibrator/1.0/IVibrator.hal

Android supports a wide selection of hardware devices responsible for audio output, reading sensor data and much more. For the connection to the car’s IVNs, Android Automotive includes the Vehicle HAL.

The Vehicle HAL (VHAL)

The Vehicle HAL (VHAL) is one of the many HALs available within Android (Automotive). It is, as mandated by Android 8.0, defined in HIDL. It describes the communication with the in-vehicle networks (IVN) using the functions shown below. Data is transferred using VehiclePropValues.

interface IVehicle {
    //…
    get(VehiclePropValue requestedPropValue) generates (StatusCode status, VehiclePropValue propValue);
    set(VehiclePropValue propValue) generates (StatusCode status);
    subscribe(IVehicleCallback callback, vec<SubscribeOptions> options) generates (StatusCode status);
    unsubscribe(IVehicleCallback callback, int32_t propId) generates (StatusCode status);
    //…
}

hardware/interfaces/automotive/vehicle/2.0/IVehicle.hal

The VHAL is made available to the application layer by the CarPropertyManager, which exposes the API to register for, get and set VehiclePropValues. An overview of the communication stack between the hardware-specific driver implementation and the application layer can be found below.

It is worth noting that Android does not specify the data transfer standard or the IVN to be used. This has to be specified within the implementation of the VHAL. This allows not only a connection to the CAN bus but also to other car-internal networks such as the Local Interconnect Network (LIN) or future in-vehicle communication standards.

Detailed Flow and Responsibilities

For a more detailed understanding of the components involved, please refer to the abbreviated class diagram below. It includes all classes and interfaces relevant for providing access to the in-vehicle network. As mentioned previously, the CarPropertyManager plays the main role in providing the application layer with access to the VHAL.

When subscribing to a vehicle property (VehiclePropValue), the following steps take place within the Android operating system (a code sketch follows the list):

  1. The app invokes CarPropertyManager#registerCallback in order to register for a vehicle property, providing the CarPropertyEventCallback, the property ID of the vehicle property and the rate (update frequency)

  2. CarPropertyManager talks to the CarPropertyService via Binder, calling ICarProperty#registerListener and providing an ICarPropertyEventListener as the callback

  3. CarPropertyService checks whether the required permission is granted. If so, it saves the callback in a map, keyed by the property ID. It then passes the subscription on to the PropertyHalService using PropertyHalService#setListener and PropertyHalService#subscribeProperty.

  4. PropertyHalService verifies that the property ID to be subscribed to exists and validates the update frequency, then passes the subscription on to VehicleHal#subscribeProperty

  5. VehicleHal checks whether the property is actually subscribable and, if so, passes it on to HalClient#subscribe

  6. HalClient then talks directly to IVehicle
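
From the application’s point of view, only step 1 is visible in code. As a minimal sketch, assuming the standard PERF_VEHICLE_SPEED property and one of CarPropertyManager’s predefined rate constants (error handling omitted; reading the speed requires the android.car.permission.CAR_SPEED permission), the registration could look like this:

import android.car.Car;
import android.car.VehiclePropertyIds;
import android.car.hardware.CarPropertyValue;
import android.car.hardware.property.CarPropertyManager;
import android.content.Context;

public class SpeedMonitor {
    public void start(Context context) {
        Car car = Car.createCar(context);
        CarPropertyManager manager =
                (CarPropertyManager) car.getCarManager(Car.PROPERTY_SERVICE);
        // Step 1 of the flow above; steps 2 to 6 happen inside the framework.
        manager.registerCallback(new CarPropertyManager.CarPropertyEventCallback() {
            @Override
            public void onChangeEvent(CarPropertyValue value) {
                // The framework delivers the value once the subscription
                // has travelled down to the VHAL and back.
                float speedMetersPerSecond = (Float) value.getValue();
            }

            @Override
            public void onErrorEvent(int propId, int zone) {
                // Delivery of the property failed.
            }
        }, VehiclePropertyIds.PERF_VEHICLE_SPEED, CarPropertyManager.SENSOR_RATE_NORMAL);
    }
}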

Android Automotive OS Whitepaper

Authors:

Peter Gessler (Android Automotive Architect),
Tino Müller (Mobility Solutions),
Marius Mailat (CTO)

peter.gessler@p3-group.com
tino.mueller@p3-group.com
marius.mailat@p3-group.com

Abstract

Google’s operating system Android Automotive OS for connected in-vehicle infotainment (IVI) systems is already disrupting the traditional automotive infotainment landscape. In this technical white paper, we give an overview of Android Automotive OS features and architecture to support the decision-making process of Original Equipment Manufacturers (OEMs) and Tier 1 suppliers concerning their future infotainment strategies.

Keywords

Android Automotive, Operating System (OS), Google Automotive Services (GAS), Human-Machine Interface (HMI), In-Vehicle Infotainment (IVI), User Experience (UX)

I. Introduction

Today’s users demand from cars’ IVIs and connected services the same intuitive and exciting experience they are used to from their favorite consumer electronic devices, apps and cloud services. Furthermore, they expect their personal application ecosystem to be integrated into the vehicle. All of this can now be achieved much more easily with Google’s Android Automotive OS.

Worldwide, vehicle manufacturers are today carefully evaluating the benefits of Android Automotive OS. Some have already chosen to enter a formal partnership with Google to co-create their next-generation IVI including Google Automotive Services (GAS). Others are using the open-source project AOSP, including the car extensions, to build an Android Automotive system independently, while a third group is still hesitating, mostly due to concerns regarding dependencies and data ownership.

For those currently at the decision-making crossroads, we want to shed light on some of the more technical aspects of Google’s Android Automotive OS. The paper does not attempt to be exhaustive in tackling all technical aspects but rather aims at giving an overview. For a deeper discussion, we recommend our Android Automotive Base Training.

II. Feature Overview

In order to understand the individual components and the added value of the operating system, we want to give a brief overview of its structure.

Figure 1 shows the abstract layer architecture of Android Automotive with the division into four layers. In this section, we focus on GAS and the built-in applications.

Google Automotive Services (GAS).
GAS describes a set of customer-specific and technical services that are precompiled by Google and provided through a licensing model. The most important services are:

  • Google Maps & Navigation: For navigation from point A to point B with intelligent address, route, petrol station and charging station search.
  • Google Assistant: Personal voice assistant for controlling various vehicle functionalities (can be extended) or giving additional information to the user.
  • Google Play Store: Provision and management of third-party applications that are tailored to be used in the vehicle.
  • SetupWizard: Creation of vehicle user profile accounts and connectivity setup.
  • Automotive Keyboard: A keyboard adapted for the automotive industry to operate the touchscreen and support various languages.

The OEM receives access to GAS through an associated partnership with Google. This provides close communication and support, extended technical documentation, as well as the quarterly platform release (QPR) versions with new updates and upgrades.

Non-GAS describes a platform variant that does not require the integration of GAS. The OEM simply downloads the freely available AOSP source code with car extensions and integrates its own applications and services. You would choose this variant, for example, for a planned launch in China, due to the non-availability of Google services in this market, or if you are a Tier 1 supplier without an OEM contract, as Google currently only partners directly with OEMs.

Hero applications. Besides GAS, Google is developing applications such as

  • Media Center. Skeleton for the integration of media sources such as the LocalMediaPlayer. The skeleton is fully integrated and interacts seamlessly with the Notification Center and the Dialer.
  • Dialer. The central telephone application, which allows the contacts of the connected smartphone to be managed and calls to be made.
  • Car Settings. Management of various system settings such as Time & Languages, User Management and Connectivity.
  • Notification Center. Brief system notifications for the user and interactions to start applications.

These applications are available at android.googlesource.com. In addition to these vehicle-specific applications, numerous other applications are available under …/packages/apps/…

III. Frameworks & Libraries

In order to create a holistic HMI, various frameworks and libraries are required to integrate the applications and implement general rules and restrictions that apply to all system and user applications.

UI Frameworks. The SystemUI / CarSystemUI manage the general structure of the central screen. The OEM can customize these if necessary and change the individual fragments of the bars and their content (e.g. the StatusBar at the top of the screen, the global NavigationBar at the bottom, as well as the main fragment and the HVAC bar). Furthermore, the OEM/Tier 1 can manage the theming (use of colors, fonts and styles) and the display of pop-ups via the SystemUI.

Google defines the SystemUI as “…a persistent process that provides UI for the system but outside of the system_server process” [2]. The SystemUIApplication extends the SystemUI with a defined set of services, for example the SystemBars and PowerUI, or self-designed services that work in an isolated way. These services are a major part of the system user interface and start with the boot process.

One of the most important extensions in Android Automotive is the DrivingUxRestrictions framework, which is already integrated into the applications provided by Google. The framework uses the configuration file specified by the OEM to prevent touch interaction by the end customer in certain driving situations, so that the driver is not distracted. The OEM can extend and customize the existing framework.

Car-lib. In addition to the functions described for the HMI, countless others are provided by Google on the other layers. We want to point out three services [2] which saved us a lot of work.

CarInfoManager. Depending on the development strategy, the OEM may want to manage multiple vehicle variants with one platform version. The CarInfoManager can be used to dynamically adapt the HMI. As a proxy component, it provides static information regarding the vehicle model, variant and other relevant vehicle properties.

CarPowerManager. The behavior of the infotainment system and its applications largely depends on the system state of the vehicle. The system and its applications communicate via the CarPowerManager with the Vehicle HAL and the Vehicle Microcontroller Unit (VMCU) based on a generic state machine, which is displayed in Figure 2.

Figure 2: Google’s car power state machine. Based on [3].

The applications can therefore perform an individual action in the event of a specific state or state change. This is necessary, for example, when switching services such as Bluetooth or Wi-Fi on or off.
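
As an illustration, a privileged system application could observe these state changes roughly as follows. This is a hedged sketch rather than a definitive implementation: CarPowerManager is a system API whose exact listener signatures have varied between Android versions, and using it requires the corresponding car power permission.

import android.car.Car;
import android.car.hardware.power.CarPowerManager;
import android.content.Context;

public class PowerStateObserver {
    public void observe(Context context) {
        Car car = Car.createCar(context);
        CarPowerManager powerManager =
                (CarPowerManager) car.getCarManager(Car.POWER_SERVICE);
        // Invoked on every transition of the power state machine (Figure 2).
        powerManager.setListener(state -> {
            // e.g. switch Bluetooth or Wi-Fi off when a shutdown state is entered
        });
    }
}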

CarProjectionManager. The efficient integration and handling of different projection technologies is a key requirement for today’s infotainment systems. The user should be free to choose between Android Auto, Apple CarPlay or other mirroring technologies. With CarProjectionManager, Google enables the development of an application that guarantees the same system behavior when establishing a connection, managing smartphones and closing the connection.

IV. Android Automotive OS Architecture

The Android platform (AOSP) can be generically divided into the components displayed in Figure 3. Those are:

  • Application framework and applications
  • Android Automotive system service and Binder IPC
  • Hardware Abstraction Layer
  • Linux Kernel

Figure 3: Android system architecture [4].

Google extended its AOSP system with

  • Car system applications
  • Car APIs
  • Car Services
  • Vehicle Hardware Abstraction Layer

to provide a fully functional, vehicle-agnostic in-vehicle infotainment operating system (refer to Figure 1). The source code distribution of the IVI generally consists of:

OEM and 3rd party applications. A set of Android applications, including the HMI and application background services, in the /product partition.

Android Open Source Project (AOSP). Includes all the git-tree packages from the generic system applications and the application framework down through the system services to the HAL interfaces; these belong in the /system partition.

Board Support Package (BSP). Includes the Linux kernel image with the HAL implementation for the given hardware. The BSP is System on Chip (SoC) dependent and part of the /vendor partition.

The OEM can extend the existing source code with self-developed automotive or non-automotive applications and system services, e.g. head-up display (HUD) management, tire pressure monitoring, charge program management, and others, to broaden the functionality of its infotainment system.

Due to the architecture change carried out in Project Treble and the expansion of the available partitions, not only the HMI layer but also the Android framework or the BSP and the hardware can be replaced in the future (see Figure 4).

Figure 4: Platform-based operating system architecture [5].

The following section provides an overview of the responsibilities and tasks of the respective system layers:

Application Framework. Commonly called the “HMI layer”, the Application Framework contains the system and user applications. Our recommendation is to design the applications so that they are only responsible for visualization, including small calculations that do not block the main UI thread, and to move the core business logic to the system services in the Service Layer. Furthermore, applications manage their own translation labels and notifications using background services. This design allows for easy updates in the future and multiple HMI designs, e.g. for different car brands.

Service Layer. System services are included in the Service Layer and started by the SystemServer. They run as a system process, which gives them additional privileges that normal Android services do not have. This approach provides an opportunity for OEMs to develop other applications that can use the services without source code duplication. Furthermore, OEMs can use the services as an additional security layer to avoid direct communication between the applications and the Hardware Abstraction Layer.

Vehicle HAL. The role of the Vehicle HAL is to expose car-specific interfaces to the system services in an extendable, vehicle-agnostic manner. These interfaces include:

  • Access to signals to / from the ECUs in the vehicle
  • Access to signals generated from the vehicle microcontroller unit to the IVI OS
  • Access to service-oriented functions available on the vehicle network (e.g. SOME/IP)

The described layers are the core elements of the platform and are responsible for the data exchange between the applications and the vehicle ECUs. A detailed architecture is displayed in Figure 5.

Figure 5: Detailed software component architecture view with extensions.
The processes will run top-down and bottom-up between the different components and layers.

V. Summary and Conclusion

In this technical white paper, we have provided some insights into Android Automotive OS (AAOS), which is continuously being developed by Google and is publicly available at android.googlesource.com. In addition to the basic features, frameworks and libraries, we have explained the layered architecture and described how the system can be expanded by the OEM.

We consider Android Automotive an effective platform that includes all necessary core features. It requires lower development, integration and maintenance costs for connected infotainment systems. The system can be fully customized; however, any deviation from the original source code increases the OEM’s development and maintenance effort. Another benefit is that Google releases regular patches and annual major upgrades with added features, extended functionalities and other improvements.

In our experience, the IVI development time can be cut by two years compared to the usual four-year development cycle. In that case, Android Automotive OS was deployed including GAS, and a fully customized HMI was developed. The implementation of a non-GAS system will require additional time for development and integration.

For more technical information on Android Automotive OS, we recommend our Android Automotive OS workshops to dive deeper into the technical details.

VI. References

  1. Android Automotive
  2. Android Automotive SystemUI
  3. Android Automotive Power Management
  4. Android Automotive Android Architecture
  5. Project Treble. What Makes Android 8 different?