Why the success of the Apple M1 series is hard to replicate, and why x86 is here to stay


When the Apple M1 series came out, many people claimed that x86 was doomed, that Intel and AMD would soon be out of business, and that every big tech company would design its own ARM-based processor. In my opinion, these claims are quite detached from reality and show a lack of understanding of the semiconductor industry. Here's why:

  1. It takes a lot of money and time to build a world-class chip-design team: Apple started shipping its in-house designed, ARM-based silicon with the iPhone 4S, released in 2011, and must have been working on the design for at least two years before that, which means it took Apple more than ten years to get to the M1 series. Almost every generation of Apple's in-house designs has had a large efficiency advantage over the standard ARM cores, which suggests Apple's design team is considerably more capable than the teams at ARM and Qualcomm. Even so, it still took them more than a decade to accumulate the technical legacy and experience behind the M1 series. Few other companies can afford to spend that much money and time building a design team as capable as Apple's, and even for those that could, it might still be cheaper to simply buy processors from Intel or AMD.
  2. It takes even more time and money to rewrite software for a different architecture: processors are expensive, but the time of competent software engineers is even more expensive. Migrating software to a different ISA and getting it to perform well requires good engineers, and those are rare and costly. For most companies it makes more financial sense to put those engineers on other work than to rewrite software that already runs fine on existing hardware. On top of that, most companies don't have the kind of complete control over their platform that Apple has over macOS and iOS, which makes the task even harder, more expensive, and more time-consuming. Hand-optimized, ISA-specific code is a typical pain point; see the sketch after this list.
  3. Few companies can afford the most advanced semiconductor manufacturing node: Apple started using TSMC's N5 process more than a year before almost everyone else, which is a significant contributor to the efficiency advantage of the M1 series. Even datacenter products with extremely high margins, such as the AMD EPYC and the Nvidia A100, are one process node behind. Designing a processor on the leading node costs an enormous amount of money: hundreds of millions of dollars for N5, and billions are expected for N3. If processor vendors like AMD and Nvidia can't race Apple to the most advanced node, processors built for in-house use by Google, Amazon, or Microsoft definitely won't have the volume to justify it.
  4. The M1 series is actually quite transistor-inefficient: the M1 Max contains 57 billion transistors; by comparison, the RTX 3080 contains about 28 billion, the Ryzen 9 5900X about 19 billion, and even the Nvidia A100 contains “only” roughly 54 billion. While the M1 Max is powerful, it is certainly not as powerful as an RTX 3080 and a 5900X combined, even though it contains more transistors than the two together. Considering fabrication cost alone, one M1 Max is more expensive to manufacture than two RTX 3080s, since cost per transistor stopped dropping after 28nm (a back-of-the-envelope check follows this list). The M1 design trades transistor efficiency for energy efficiency, which makes sense for mobile products, but for HPC applications transistor efficiency matters more. AMD and Nvidia products are still more efficient in terms of computing power per manufacturing dollar.
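To make point 2 concrete, here is a minimal, hypothetical sketch of why an ISA migration eats engineering time: a hand-tuned x86 SSE2 routine shares no code with its ARM NEON counterpart, so every such hot path has to be rewritten and re-validated. The example below is illustrative only, not taken from any real codebase.

```c
/* Summing four floats with SIMD intrinsics: the x86 and ARM versions
 * share nothing, so porting means rewriting every hot path like this. */
#include <stdio.h>

#if defined(__x86_64__) || defined(_M_X64)
#include <emmintrin.h>                        /* x86 SSE2 */
static float sum4(const float *v) {
    __m128 x  = _mm_loadu_ps(v);              /* [v0 v1 v2 v3] */
    __m128 hi = _mm_movehl_ps(x, x);          /* [v2 v3 v2 v3] */
    __m128 s2 = _mm_add_ps(x, hi);            /* [v0+v2 v1+v3 . .] */
    /* SSE2 has no horizontal add; shuffle and add the low lanes. */
    __m128 s1 = _mm_add_ss(s2, _mm_shuffle_ps(s2, s2, 1));
    return _mm_cvtss_f32(s1);
}
#elif defined(__aarch64__)
#include <arm_neon.h>                         /* ARM NEON */
static float sum4(const float *v) {
    return vaddvq_f32(vld1q_f32(v));          /* AArch64 horizontal add */
}
#else
static float sum4(const float *v) {           /* portable fallback */
    return v[0] + v[1] + v[2] + v[3];
}
#endif

int main(void) {
    const float v[4] = {1.0f, 2.0f, 3.0f, 4.0f};
    printf("sum = %.1f\n", sum4(v));          /* prints 10.0 on any target */
    return 0;
}
```

Intrinsics are only the most visible case; inline assembly, memory-ordering assumptions, and JIT backends all need the same per-ISA treatment, which is where the engineering cost piles up.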
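And to put rough numbers on point 4: assuming cost per transistor really has been flat since 28nm (the premise stated above, not a foundry quote), die cost scales with transistor count, and the published counts quoted in the text give quick ratios:

```c
/* Back-of-the-envelope cost comparison, assuming flat cost per
 * transistor (so die cost ~ transistor count). Counts are the
 * published figures quoted in point 4. */
#include <stdio.h>

int main(void) {
    const double m1_max  = 57e9;    /* Apple M1 Max      */
    const double rtx3080 = 28e9;    /* Nvidia RTX 3080   */
    const double r5900x  = 19e9;    /* AMD Ryzen 9 5900X */

    printf("M1 Max vs RTX 3080:       %.2fx\n", m1_max / rtx3080);
    printf("M1 Max vs 3080 + 5900X:   %.2fx\n", m1_max / (rtx3080 + r5900x));
    printf("M1 Max vs two RTX 3080s:  %.2fx\n", m1_max / (2.0 * rtx3080));
    return 0;
}
```

This prints roughly 2.04x, 1.21x, and 1.02x: the "more expensive than two RTX 3080s" claim holds, though only by a hair, and real wafer pricing varies with node, yield, and die size, so treat these strictly as order-of-magnitude checks.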

In conclusion, the M1 series is the result of Apple pouring a tremendous amount of resources into chip design over a long time, which most companies can't afford to do. And even for those that could, it would probably make more financial sense to simply buy processors from existing vendors.

