Monday, September 3, 2012

x86 vs arm. lies, blatant lies and standards

I wanted to publish a rant about the x86 vs ARM architectures long ago but never had the time to do it properly. Therefore, I'm just publishing a slightly edited draft written back in spring.

In short: x86 sucks.

The last two decades of personal computing were dominated by the x86 architecture, while embedded hardware, like cell phones, media players and microwave ovens, has traditionally used simpler and less power-hungry designs, namely the ARM, MIPS and SH3 architectures.

Ever since the first 'smart' devices (PDAs/phones on which you could install your own custom applications) appeared, it has been the dream of many people to close the gap in both computing power and capabilities between these devices and desktop computers. There have been a lot of community projects to run full-blown linux/BSD distros on handheld devices, but it was not until recently that major players like M$ and Apple turned their attention to non-x86 architectures. 2012 is going to see (besides the end of the world) the appearance of a lot of ARM-based laptops, since Windows 8 is being ported to this architecture.

Migration to (or, rather, adoption of) a new architecture has caused a lot of FUD and confusion. Let's consider some common myths in the x86 vs non-x86 debate and analyze the strong and weak points of both camps.

Myth 1: x86 has a huge software base and you cannot replace it.
Reality: most commercial vendors (including M$, Adobe etc.) are porting or have already ported their software to the new architecture. Open-source software, thanks to the efforts of major distributions like Debian, has already been ported to virtually anything that can compute.

Of course, there are tons of legacy software for which even the source code may be lost. But that's just because someone was stupid enough to make their software closed-source from the start. Even though this appeals to managers, from an engineering point of view closed source is the worst sin: it reduces interoperability with other pieces of software, causes a lot of problems when the API/ABI breaks (at major updates), and of course you cannot trust binaries when you cannot see the code.

One approach to the problem of legacy software is what the industry has been doing for ages - virtualization. Hardware emulation, to be more precise. While it can reduce performance by an order of magnitude or more, it may be a savior in some extreme cases - for example, when you need your ancient business app once every two months but want to enjoy your cool and light ARM laptop the rest of the time.
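
To see where that cost comes from, here is a toy sketch in C (the 'guest' opcodes, register file and program are entirely made up) of the fetch/decode/dispatch loop a pure emulator runs in software for every single guest instruction; real emulators such as QEMU lean on dynamic binary translation precisely to avoid paying this per-instruction overhead.

    /* A toy sketch of a software emulator's inner loop.  The "guest"
     * instruction set, register file and program below are invented for
     * illustration only; real emulators are far more elaborate and use
     * dynamic binary translation to cut this dispatch overhead. */
    #include <stdint.h>
    #include <stdio.h>

    enum { OP_LOAD_IMM, OP_ADD, OP_HALT };      /* hypothetical guest opcodes */

    struct insn { uint8_t op, dst, src; int32_t imm; };

    int main(void)
    {
        int32_t regs[4] = {0};

        /* guest "program": r0 = 2; r1 = 40; r0 += r1; halt */
        const struct insn program[] = {
            { OP_LOAD_IMM, 0, 0, 2  },
            { OP_LOAD_IMM, 1, 0, 40 },
            { OP_ADD,      0, 1, 0  },
            { OP_HALT,     0, 0, 0  },
        };

        for (size_t pc = 0; ; pc++) {               /* fetch */
            const struct insn *i = &program[pc];
            switch (i->op) {                        /* decode + dispatch */
            case OP_LOAD_IMM: regs[i->dst] = i->imm;         break;
            case OP_ADD:      regs[i->dst] += regs[i->src];  break;
            case OP_HALT:     printf("r0 = %d\n", regs[0]);  return 0;
            }
        }
    }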

Myth 2: x86 is standardized. Even without vendor support, a generic set of drivers works anywhere. ARM, where is your ACPI?
Reality: even though standards exist, they are rarely implemented properly. Even on x86, things like ACPI, hybrid graphics and function keys are often broken or non-standard, which means chances are your laptop will not work properly until you install the customized set of drivers from the OEM. Or it will work, but drain the battery like crazy.

Myth 3: ARM will never be as fast as x86.
Reality: while it is true that modern ARM SoCs typically lag almost a decade behind x86 in terms of processing power, it is perfectly possible to build a high-performance RISC SoC. The problem is that increasing performance by reducing the die size raises power consumption exponentially. Emulating the x86 CISC instruction set on top of a RISC core increases the complexity of the CPU and therefore reduces performance per watt. In practice, ARM CPUs are mainly used in portable hardware - laptops, smartphones, tablets - where power saving is preferred over raw performance.

The real problem with x86 is that it is a collective term used to refer to a whole bunch of hardware, dating back from calculators of the prehistoric era to the newest 'Ivy Bridge' systems. One notable problem of this architecture is the presence of three incompatible execution modes: 16-bit real mode, 32-bit protected mode and 64-bit long mode. Legacy software tends to be a mix of these, and the resulting mode switches increase latency and potentially weaken security.
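
As a small illustration of the mode mess (the messages are mine; the predefined macros are the usual GCC/Clang and MSVC ones), the same C source ends up with different pointer widths and calling conventions depending on which mode it is compiled for, which is why code built for one mode cannot simply call into code built for another:

    /* The same source, different ABIs: pointer width and calling
     * convention depend on which x86 mode the code is built for. */
    #include <stdio.h>

    int main(void)
    {
    #if defined(__x86_64__) || defined(_M_X64)
        puts("built for 64-bit long mode: 8-byte pointers");
    #elif defined(__i386__) || defined(_M_IX86)
        puts("built for 32-bit protected mode: 4-byte pointers");
    #else
        puts("built for something else (e.g. 16-bit real mode via a DOS toolchain)");
    #endif
        printf("sizeof(void *) = %u\n", (unsigned)sizeof(void *));
        return 0;
    }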

While x86 is ridden with a multitude of remnants of the past, it still has some advantages over the ARM/MIPS designs typically used in embedded hardware:

  • Hardware virtualization support (hypervisor). ARM, for example, is yet to release the Cortex A15 core and the ARMv8 instruction set, which add virtualization extensions. While it is possible to use a TrustZone-based hypervisor to virtualize hardware, it should be noted that there are absolutely no standards describing what APIs a TrustZone OS must provide, and TrustZone is designed to keep the user from tampering with their own hardware, not to protect the user's data from a misbehaving execution environment.
  • Standardized BIOS/EFI support for common hardware: the VESA framebuffer, SATA/EHCI/XHCI controllers, the i8042 keyboard controller etc. While these interfaces typically do not expose the advanced computational and power-saving capabilities of the hardware, they let you get a custom kernel running on a new board that lacks documentation from the hardware vendor (see the framebuffer sketch below).
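
Here is a minimal, Linux-specific sketch of that second point (error handling trimmed, pixel format ignored): with nothing but the firmware-provided framebuffer exposed through the generic fbdev interface, you can already put pixels on the screen without any vendor GPU driver.

    /* Paint the firmware-provided framebuffer through /dev/fb0, which on
     * x86 is typically backed by the VESA/EFI framebuffer set up by the
     * BIOS/UEFI - no vendor GPU driver involved. */
    #include <fcntl.h>
    #include <linux/fb.h>
    #include <stdint.h>
    #include <stdio.h>
    #include <string.h>
    #include <sys/ioctl.h>
    #include <sys/mman.h>
    #include <unistd.h>

    int main(void)
    {
        int fd = open("/dev/fb0", O_RDWR);
        if (fd < 0) { perror("open /dev/fb0"); return 1; }

        struct fb_var_screeninfo var;
        struct fb_fix_screeninfo fix;
        if (ioctl(fd, FBIOGET_VSCREENINFO, &var) < 0 ||
            ioctl(fd, FBIOGET_FSCREENINFO, &fix) < 0) {
            perror("ioctl"); return 1;
        }

        size_t size = (size_t)fix.line_length * var.yres;
        uint8_t *fb = mmap(NULL, size, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
        if (fb == MAP_FAILED) { perror("mmap"); return 1; }

        memset(fb, 0x80, size);     /* fill the visible screen with a mid-grey */

        munmap(fb, size);
        close(fd);
        return 0;
    }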


So, the question is: Is migration to a new architecture needed?
The answer is: yes and no.

What is needed is adapting software to the new computing paradigm. The 80s and 90s were dominated by the impression that CPU frequency could be increased indefinitely and that RAM was expensive. Recently, however, RAM has become very affordable, while CPUs have hit a frequency barrier imposed by the manufacturing technology and power-saving constraints (well, there are some alternatives, like optical interconnects or going back to germanium semiconductors, but we'll eventually hit the bottleneck again in a couple of years). Modern computing involves asymmetric multiprocessor systems with heterogeneous architectures and the use of vector processors (SSE, NEON, CUDA) to exploit data-level parallelism.
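
To make that last point concrete, here is a small sketch (plain C with compiler intrinsics; the function name and signature are mine) of the same element-wise addition written against SSE on x86 and NEON on ARM, with a scalar loop doubling as the fallback and tail handler; a production version would also worry about alignment and runtime CPU feature detection.

    /* Data-level parallelism: add two float arrays four elements at a
     * time using SSE (x86) or NEON (ARM) intrinsics, falling back to a
     * plain scalar loop otherwise. */
    #include <stddef.h>

    #if defined(__SSE__)
    #include <xmmintrin.h>
    #elif defined(__ARM_NEON) || defined(__ARM_NEON__)
    #include <arm_neon.h>
    #endif

    void add_arrays(float *dst, const float *a, const float *b, size_t n)
    {
        size_t i = 0;
    #if defined(__SSE__)
        for (; i + 4 <= n; i += 4) {                     /* 4 floats per step */
            __m128 va = _mm_loadu_ps(a + i);
            __m128 vb = _mm_loadu_ps(b + i);
            _mm_storeu_ps(dst + i, _mm_add_ps(va, vb));
        }
    #elif defined(__ARM_NEON) || defined(__ARM_NEON__)
        for (; i + 4 <= n; i += 4) {                     /* 4 floats per step */
            float32x4_t va = vld1q_f32(a + i);
            float32x4_t vb = vld1q_f32(b + i);
            vst1q_f32(dst + i, vaddq_f32(va, vb));
        }
    #endif
        for (; i < n; i++)                               /* scalar fallback / tail */
            dst[i] = a[i] + b[i];
    }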
