How Apple is taking ARM ahead of the curve…

Pritam Pratik Agrawal
7 min read · Jan 14, 2021


Apple launched an iPad with specs on par with desktop computing back in June 2020, and the world went crazy over how and why Apple launched it just a month after their MacBook Pro came out, with the former outclassing the performance of the latter's Intel i5 chips.

Yes, Apple did go on stage at the WWDC 2020 event to announce their own SoC (System on a Chip) and a two-year transition phase, thereby creating an Osborne effect. This pre-announcement practically killed sales of their freshly launched Intel iMacs. So Apple decided to place the last piece of the Intel puzzle before completely dismantling it, launching the 13" MacBook Pro just to keep their annual product rollout going while pushing R&D on their own ARM-based SoC, thereby setting the field for the next big thing, which is now a revolution in desktop computing.

Now coming to the architecture, which is the underlying reason for all this competitive chaos.

Intel launched their first successful microprocessor, the 8086, in 1978; it was a 16-bit microprocessor. It took the electronics market by storm and gave rise to what we today call the x86 architecture. Eventually, a 32-bit x86 chip was developed, and that architecture is still used on many computers.

AMD Rolling Up Its Sleeves…

Then came AMD to up the game, developing the x86-64 architecture under its x86 license from Intel. This is more popularly called the x64 architecture: a 64-bit extension of x86 that allowed more power-packed performance for higher-end computing. (This is why we still have Program Files and Program Files (x86) folders on Windows, to differentiate 64-bit applications from 32-bit ones.)

ARM: An Unnoticed Dark Horse in the 1980s

Amidst all this research and development of the power- and performance-hungry x86 architecture, ARM (Acorn RISC Machine) was slowly trying to gain a foothold in the chip industry, with years of research driven by one prime goal: creating a completely power-efficient SoC or SoM (System on a Module).

Difference between x86 and ARM

OK, here is the point where the tech gets a bit nasty, so stay with me. The CPU is regarded as the brain of the computer, but is it really 'smart'? Ponder over it…

A CPU works only when it is given specific instructions from its instruction set. These instructions tell the processor where data should move, whether between memory and registers or into the execution units that perform calculations.

Ever thought about where your application actually runs, and in what form it gets there? Let's explore…

Applications are written in various high-level programming languages (Java, C++, Python, JavaScript) and then compiled for the specific instruction sets that run on ARM or x86 CPUs, depending on how closely each component (memory, I/O chips, RAM, cache, USB controller) talks to the others. So why not write applications directly in CPU instructions, saving power and improving performance across all these components? That would be mass chaos, given that today's cross-platform applications must run on the wide variety of chips available in the market. Some code is written directly in CPU instructions, but only for single-purpose CPUs whose data never needs to change outside their firmware.
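The idea of "compiling a high-level language down to an instruction set" can be seen without any special hardware: CPython compiles Python source into its own bytecode instruction set, which is a reasonable analogy (not real machine code, but the same compile-then-execute pattern) for a C++ compiler targeting x86 or ARM:

```python
import dis

def add(a, b):
    return a + b

# CPython has compiled this function's source into a sequence of
# bytecode instructions: its own "instruction set", analogous to the
# machine instructions a compiler emits for x86 or ARM.
instructions = list(dis.Bytecode(add))
for ins in instructions:
    print(ins.opname)
```

The exact opcode names vary between Python versions, which is itself a small illustration of why compiled binaries are tied to one instruction set.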

The compiled instructions are then decoded into micro-operations (popularly, micro-ops) inside the CPU, and the decoder doing this is a silicon-space- and power-hungry component of your computing device. So the calculation becomes easy: if you want a lower-power CPU, keep the instruction set simple, as in the RISC (Reduced Instruction Set Computer) approach the ARM architecture is based on; if you want a more performance-oriented CPU, pack in more complex hardware and instructions at the expense of power, as in the CISC (Complex Instruction Set Computer) approach x86 is based on.
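The "instructions moving data between memory, registers, and execution units" described above can be sketched with a toy RISC-like machine. This is a simulation only; the register names and opcodes are invented for illustration, not real ARM encodings:

```python
# A toy RISC-style machine: a few registers, a small memory, and three
# simple fixed-format instructions (LOAD, ADD, STORE). Real RISC cores
# keep instructions similarly simple and uniform, which is what makes
# the decoder small and power-efficient.

def run(program, memory):
    regs = {"r0": 0, "r1": 0, "r2": 0}
    for op, *args in program:
        if op == "LOAD":            # LOAD rX, addr  -> rX = memory[addr]
            rd, addr = args
            regs[rd] = memory[addr]
        elif op == "ADD":           # ADD rX, rY, rZ -> rX = rY + rZ
            rd, ra, rb = args
            regs[rd] = regs[ra] + regs[rb]
        elif op == "STORE":         # STORE rX, addr -> memory[addr] = rX
            rs, addr = args
            memory[addr] = regs[rs]
    return memory

mem = {0: 7, 1: 35, 2: 0}
program = [
    ("LOAD", "r0", 0),
    ("LOAD", "r1", 1),
    ("ADD", "r2", "r0", "r1"),
    ("STORE", "r2", 2),
]
print(run(program, mem)[2])  # -> 42
```

A CISC machine would instead offer a single complex instruction doing the load-add-store in one step, at the cost of a much more elaborate decoder.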

The deeper differences between CISC and RISC computing (CPU time per program, cycles per instruction, clock frequency) are a topic for another day.

Now Apple, I know you need the Limelight. Coming Back to You…

The biggest hurdle Apple faced was creating a desktop computer with more performance without hurting battery life. That is a genuinely difficult task, as performance and battery life don't go hand in hand; at least, that is what we had seen and known over the years.

Apple never talks about RAM frequency or memory capacity, because these things matter less when Apple builds applications so closely linked to the hardware that its CPU doesn't need to fire up all of its power-hungry cores to hit that performance level, which Intel chips so readily do.

Intel, meanwhile, kept adding instructions to its CPUs' silicon in the name of increased performance, creating bloat under the hood, as many of these instructions were hardly ever used by the rest of the system.

On the other side of the story ("most of invention is accident"), the beauty of the ARM architecture is that during its initial development it was accidentally discovered that the ARM-based CPU used so little power that the ammeter (used for measuring current) read zero: the chip was running off residual power, as no power supply was even connected to it.

The outcome of this great invention (or rather, this discovery within an invention) is the current mobile computing industry, which is slowly taking over the desktop computing industry.

Apple’s History with Chips

In 1991, Apple, in collaboration with IBM and Motorola (the alliance popularly called AIM), developed the RISC-based PowerPC chips. But the collaboration didn't work out well, as each company wanted to pursue its own research in the semiconductor industry. So, in 2005, Apple moved to Intel chips, collaborating with them on its desktop computing devices. Apple also offered Intel the chance to make a power-efficient chip for the iPhone, but Intel's then-CEO declined the offer. So Apple went ahead with ARM for the iPhone's RISC-based chips and signed a long-term ARM architecture license (sorry, Intel, you missed a fortune here).

In 2012, Apple released its first fully customized chip and CPU, the A6, codenamed Swift and used in the iPhone 5. This chip featured an Apple-designed dual-core CPU implementing the ARMv7-A architecture, in contrast with the off-the-shelf ARM CPU cores used before.

Revolution with Apple A7

In 2013, Apple became the first company to bring a 64-bit architecture (so devices could address more than 4 GB of RAM) to mobile devices with the iPhone 5S, beating ARM's own CPU designs to it. Apple sold this 64-bit design under the name of a desktop-class architecture, which most people back then dismissed as Apple just bragging. But the wave was yet to come…
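The 4 GB ceiling mentioned above falls straight out of pointer width, and is easy to check:

```python
# A 32-bit pointer can distinguish 2**32 byte addresses; a 64-bit
# pointer can distinguish 2**64. Divide by bytes-per-GiB to see the
# addressable-memory ceiling of each.
GiB = 1024 ** 3
print(2 ** 32 // GiB)  # -> 4           (the 4 GB limit of 32-bit CPUs)
print(2 ** 64 // GiB)  # -> 17179869184 (16 EiB: effectively unlimited)
```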

This brought huge revenue to Apple, which it invested in R&D to bring its own chips to desktop-class performance, with one goal: increasing performance while consuming less power. With the Apple A14 Bionic, they finally surpassed Intel's 10th-generation Core i9-10900K in single-core performance. See below for the analysis by AnandTech:

In 8 years, while Intel only managed a 42% bump in performance, Apple managed to improve its designs' performance by a whopping 300%.

Source: Intel

From the chart above, the Intel i9-10900K draws a massive 125 W; imagine that kind of power draw in a portable computer. Yet the Apple iPhone 12's A14 Bionic manages to outperform that chip while consuming about 5 W, a full 25 times less power for the SoC.
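The 25x figure is simple arithmetic on the two power draws quoted above:

```python
# Power figures as cited in the text (chip package power, not whole-system).
intel_watts = 125  # Intel Core i9-10900K
a14_watts = 5      # Apple A14 Bionic
print(intel_watts / a14_watts)  # -> 25.0 times less power for the A14
```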

This brings us to the point where, in 2019, the iPad Pro outperformed the Intel-powered MacBook Pro of 2018.

But what will be the selling point of a MacBook if it is outdone by an iPad?

Credits: Apple

But the plan was already set. In November 2020, Apple released its first desktop silicon for the Mac, the M1, with 16 billion transistors (the A14 has 11.8 billion). Because its base is the same ARM architecture, the new Macs can also run iPhone and iPad applications, and all this power is packed into a 5-nanometer process. To give an overview of the performance difference, I have added a Geekbench snip from my current 2016 laptop with an Intel 6th-generation i7-6500U processor and 8 GB of DDR3L RAM.

And this is my MacBook Pro M1 16GB RAM score below:

Credits: Geekbench

From the scores above, it is outstanding to see that the M1's single-core score outright beats the old machine's multi-core score, though admittedly the comparison is with a four-generations-old Intel i7 chip.

Apple is definitely achieving milestones with its new custom in-house chips. The M1 falls short of the latest AMD Zen 3 desktop-class chips, such as the Ryzen 9 5900X, in multi-core performance, with the M1's 8 cores up against the 5900X's 12, but the M1 definitely wins in single-core performance.

Credits: CPU Monkey, Geekbench

AMD's crown might be dethroned in the coming months by the much-speculated M1X or M2 chips for the 16" MacBook Pro and higher-end iMacs. The way Apple is moving with this tech is really raising the bar for the competition. Hopefully it also refreshes the look of the Macs (those thick bezels and the dated design) sometime soon :)
