4 Comments
Brent

Apple's success with the M1 has made me wonder whether x86 chips need to exist at all. Apple's first-generation ARM chips run x86 apps (through their Rosetta 2 translation layer) better than comparable x86 chips run them natively.

Is it crazy to think that before long Azure, GCP, and AWS will have their own ARM chips capable of running x86 workloads through a layer of emulation/translation, with better outright performance and price-performance than a comparable x86 chip? If that's possible, what is the raison d'être of x86?

Maybe I am missing something fundamental. Curious to get your thoughts.

Liberty

When datacenters go ARM, they'll run recompiled ARM code rather than emulation (except maybe for some stuff where it does make sense, like rarely used code that isn't worth porting but that you still want around just in case), because emulation would eat up a lot of the benefit.
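
To make "recompiled ARM code" concrete, here's a minimal sketch. Portable C like this needs no changes at all to run on ARM; you just build it with an ARM compiler target (the aarch64-linux-gnu-gcc toolchain name below is only an example and varies by distro):

/* hello.c -- nothing architecture-specific here, so moving to ARM
 * is a recompile, not a rewrite or a Rosetta-style translation layer.
 *
 * Native x86-64 build:      gcc -O2 hello.c -o hello_x86
 * ARM64 cross-build (e.g.): aarch64-linux-gnu-gcc -O2 hello.c -o hello_arm
 */
#include <stdio.h>

int main(void) {
#if defined(__aarch64__)
    printf("Hello from ARM64\n");   /* macro predefined by GCC/Clang on ARM64 */
#elif defined(__x86_64__)
    printf("Hello from x86-64\n");  /* macro predefined by GCC/Clang on x86-64 */
#else
    printf("Hello from some other architecture\n");
#endif
    return 0;
}

Same source, two native binaries; the translation layer only has to exist for code you can't (or won't) rebuild.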

But generally, there's a clear movement towards ARM in the datacenter. Intel and AMD's best server chips are still extremely powerful -- crazy numbers of cores, very wide memory buses, and all that -- but ARM's performance per watt is hard to beat, and many companies are working on better ARM server chips.

But on the consumer side (laptops and the like), I think the final nails are being driven into the coffin, and within a few years I wouldn't be surprised to see Microsoft support ARM much better, which would allow a lot of OEMs to start shipping ARM products.

But of course, Intel and AMD are not sitting still, and could come out with chips that leap forward on performance per watt. We'll see.

Brent

I appreciate the response.

Is there an obvious reason why an ARM chip can't be as powerful, with lots of cores and wide memory buses? I guess what I'm thinking here is: suppose Apple were also in the hyperscale cloud provider business. Is it possible we could see them announce a server chip that competes with the best that Intel and AMD have to offer?

Chip architecture is an area where I have no expertise but a genuine interest. I'm asking these questions in an honest effort to gain knowledge.

Liberty

There's no reason why it can't happen. It hasn't so far because life is trade-offs, and so far, ARM chips have made trade-offs in the direction of being as low-power as possible.

But if you decide that power constraints aren't the main thing you want to optimize for and you want much better absolute performance, I think it's very possible to do that, and many companies are working on it (some semi design companies and the hyperscalers themselves).
