Optimising high level operating systems for embedded applications

Yannick Chammings
CEO and founder
Yannick Chammings of Witekio looks at improving the user experience of many software-driven products: one key method applicable to many systems is reducing the boot time.

Why optimise boot time?

Embedded developers who work predominantly in the hardware domain are used to things happening concurrently. In the software domain, things tend to happen sequentially, but at speeds that can give the impression of concurrency.

For deeply embedded devices using a small amount of software, sequential execution isn't an issue, thanks predominantly to the speed of modern microcontrollers. However, it is becoming increasingly common for embedded devices to use software executives such as kernels, or even fully featured operating systems.

Operating systems developed specifically for embedded devices are normally extremely resource-efficient, but the advent of high performance, low power and low-cost 32-bit processors means operating systems originally developed for enterprise applications are making their way into the embedded industry.

For embedded developers unfamiliar with large operating systems such as Linux and Android, the time it takes from applying power to seeing a welcome screen can be agonisingly long. In reality, it isn’t, but it can feel that way if you are expecting the same ‘instant on’ experience possible with other, smaller embedded operating systems. Even for hardware platforms with extremely high performance processors, the boot time for an embedded system running Linux or Android is inarguably longer than one running, say, FreeRTOS. There are reasons for the extended boot time and, fortunately for embedded developers, there are also ways to improve it.

Fast forwarding Linux boot…

Let’s start by looking at the boot-up sequence for a Linux-based system. At power-up the boot loader initialises the hardware and then (normally) copies the kernel image from some form of persistent (or ‘non-volatile’) memory to the system’s RAM. The kernel is then decompressed, takes over and starts to initialise the entire system. Once the system is fully initialised, the init script is invoked, ultimately launching a shell and the application program.
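The first step in shortening this sequence is measuring where the time actually goes. A minimal sketch, assuming a kernel built with CONFIG_PRINTK_TIME=y so every log line carries a timestamp in seconds since power-on; on a real board the input would come from `dmesg`, and the log below is fabricated sample data standing in for it:

```shell
# Fabricated dmesg-style sample; real data comes from `dmesg` with
# CONFIG_PRINTK_TIME=y enabled in the kernel configuration.
cat > /tmp/boot.log <<'EOF'
[    0.000000] Booting Linux on physical CPU 0x0
[    0.812345] VFS: Mounted root (ext4 filesystem) readonly on device 179:2.
[    2.104567] Run /sbin/init as init process
EOF

# Print the time elapsed between consecutive boot milestones, so the
# slowest stages stand out as optimisation candidates.
awk -F'[][]' '{ t = $2 + 0
                if (NR > 1) printf "%.3f s since previous milestone\n", t - prev
                prev = t }' /tmp/boot.log
```

On this sample the gap between root-mount and init launch (about 1.3 s) would be the first place to look.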

This entire sequence may seem very rigid and inter-dependent, but it isn’t. There are techniques that can be used to improve (that is to say, reduce) the boot time for a system running a high level operating system such as Linux or Android.

For example, if system resources allow it, storing the kernel in an uncompressed format will improve boot time, as the system wouldn’t need to decompress the kernel when moving it from persistent memory to RAM. Another option is to adopt an Execute in Place (XIP) approach, which allows the kernel to start executing from persistent memory (typically Flash).
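On ARM, for example, both choices are made at kernel build and configuration time. A hedged sketch, using option names from the mainline Linux kernel; the flash address is a hypothetical placeholder, and XIP support depends on the architecture and board:

```
# Uncompressed kernel: on ARM, `make Image` produces an uncompressed
# kernel image, whereas `make zImage` produces a compressed one that
# must be unpacked at every boot.

# Execute in Place: run the kernel directly from NOR flash instead of
# copying it to RAM first.
CONFIG_XIP_KERNEL=y
CONFIG_XIP_PHYS_ADDR=0x08000000   # hypothetical flash address of the kernel
```

The trade-off is memory: an uncompressed image needs more persistent storage, and XIP typically requires NOR flash that supports random-access reads.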

Both these methods have a direct impact on the size or type of memory used in the embedded application, without making too many changes to the operating system itself. For applications that have rigid memory requirements it is still possible to improve boot time by removing unnecessary initialisation routines, particularly for hardware features that aren’t present, or aren’t required for the kernel to run.
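One way to find removable initialisation routines is to boot with `initcall_debug` on the kernel command line, which makes the kernel log how long each initcall took. A minimal sketch; the log here is a fabricated sample standing in for real `dmesg` output:

```shell
# Fabricated sample of `initcall_debug` output; on a real system this
# comes from `dmesg` after booting with initcall_debug on the command line.
cat > /tmp/initcalls.log <<'EOF'
[    0.410000] initcall usb_init+0x0/0x80 returned 0 after 310000 usecs
[    0.520000] initcall mmc_init+0x0/0x40 returned 0 after 12000 usecs
[    0.530000] initcall dummy_sound_init+0x0/0x60 returned 0 after 250000 usecs
EOF

# Rank initcalls by elapsed time (microseconds) to find candidates for
# removal or deferral: the duration is the next-to-last field, the
# function name is the fourth.
grep 'initcall' /tmp/initcalls.log \
  | awk '{ print $(NF-1), $4 }' \
  | sort -rn \
  | head -n 3
```

In this sample, a USB stack costing 310 ms and a sound driver costing 250 ms would be obvious candidates to drop from the configuration if the hardware isn’t present.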

We’re now getting into the realms of embedded Linux optimisations and techniques identified by the working group looking at boot-up time. For example, the working group has defined the term ‘Deferred’ to refer to operations that can be modified so they don’t start running until later in the boot sequence. This allows ‘mission critical’ functions to be executed sooner, thereby giving the impression that the system has booted more quickly.
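The same idea can be applied in userspace from the init script itself: launch the mission-critical application first and push non-essential work into the background. A minimal sketch, where the echo commands are hypothetical placeholders for real services:

```shell
# Hedged sketch of a 'deferred' init fragment; the echoes stand in for
# hypothetical services, and the log file is only there for inspection.
{
  echo "critical: application started"   # mission-critical work runs first

  # Deferred: non-essential initialisation runs in a background subshell
  # (after a delay) so it never holds up the critical path.
  ( sleep 1; echo "deferred: housekeeping started" ) &

  echo "critical path complete"
  wait                                    # collect the background job
} > /tmp/deferred_boot.log

cat /tmp/deferred_boot.log
```

The critical-path messages appear immediately; the deferred one lands in the log a second later, without delaying anything before it.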

The same principle applies to operations that would normally occur in a specific sequence at boot time, which could perhaps be rearranged to provide key features quicker. The working group refers to these as De-serialised operations.

This optimisation could be extended to include functions that enable the larger system to start operating sooner, such as sending a CAN message within seconds of power being applied, but before full boot has completed. After a cold start completes, a Linux system may be able to optimise reboot times by using techniques such as hibernate or suspend (both terms defined by the working group) instead of total power down.

The same principles can be used to improve the boot time of an Android-based system. The time taken to boot Android can also be significantly reduced by using UBIFS (Unsorted Block Image File System) in combination with the U-Boot boot loader, for example.
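A hedged sketch of what booting from UBIFS under U-Boot can look like; the partition name, volume name, load address and file path below are hypothetical placeholders, and board-specific details such as device-tree handling are omitted:

```
# Kernel command line: mount the root filesystem from a UBI volume.
setenv bootargs 'console=ttyS0,115200 ubi.mtd=rootfs root=ubi0:rootfs rootfstype=ubifs'

# Attach the UBI partition, mount the UBIFS volume, load and boot the kernel.
setenv bootcmd 'ubi part rootfs; ubifsmount ubi0:rootfs; ubifsload ${loadaddr} /boot/zImage; bootz ${loadaddr}'
saveenv
```

Because UBIFS mounts without scanning the whole flash device, it typically attaches much faster than older flash filesystems such as JFFS2, which is where the boot-time saving comes from.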

In conclusion: from embedded to IoT

Embedded systems are changing rapidly. The IoT is bringing with it the need for more communication protocols, a better user experience and more ‘enterprise’ like features. Linux has successfully transitioned from the enterprise space to the embedded domain because it offers these features, aided by the increasing availability of embedded processors capable of running Linux and Android in resource-limited designs.

However, the technical ability to run Linux still needs to be complemented by commercial benefits, particularly when the end user is interacting directly with the device, something that is increasingly common now that all things are connected. Optimising operating systems such as Linux and Android can help minimise boot time and provide the kind of experience users expect from their embedded devices.
