Apple announces M1 Ultra with 20-core CPU and 64-core GPU

08 March 2022
It is a combination of two M1 Max chips, joined by a new interconnect architecture called UltraFusion.


  Chores, 09 Mar 2022

RMS Vaule, 09 Mar 2022: "This was a bs statement. Lol. Other than Apple favour optim..."

Ikr. I couldn't get mine to do the vacuuming and washing up, I didn't even ask it to do the ironing. Such simple tasks.

    ya, 09 Mar 2022

    Ho ho, Intel, AMD, Nvidia, Qualcomm... Apple is currently 5 years ahead.

      Goddess Lyrienne, 09 Mar 2022

      kevinmcmurtrie, 09 Mar 2022: "800MB/s of memory bandwidth? Hopefully that's a typo ..."

      Typo. The bandwidth is 800GB/s.

        -Trix-, 09 Mar 2022: "All that power is for nothing in the end, if it's no..."

        This, honestly. Hell, the Mac Studio, the Mac that the M1 Ultra is premiering in, is made for those exact kinds of people.

          Nickname optional, 09 Mar 2022: "All that power is for nothing in the end, if it's not ..."

          "All that power is for nothing in the end, if it's not put to good use."

          It is put to good use. This isn't a product for gamers; it's a product for content creators, graphic designers, video makers, 3D artists, music producers and so on. A product in this price range isn't meant for regular people, it's for the very top pro users.

            Anonymous, 09 Mar 2022

            Nickname optional, 09 Mar 2022: "All that power is for nothing in the end, if it's not ..."

            All that power is for super zippy, instantaneous response when opening pages, and for rendering in half a second or less. People drive a Ferrari, Lambo or Porsche for super turbo instantaneous power and torque. Don't speak for others just because you can't afford it.

              Hamburger, 09 Mar 2022: "Sure there are other metrics to determine GPU performance. ..."

              Ok. In GFXBench the RTX 2060 scored over 30k both on-screen and off-screen, where the M1 Max scored about 120 and 1200. Good day.

                Hamburger, 09 Mar 2022

                [deleted post]

                Sure there are other metrics to determine GPU performance. And FP32 is generally used as the default across GPUs of different architectures as well.

                For the whole picture, just look at the benchmarks of applications that require intensive GPU performance. The Max is no slouch, and definitely not behind the RTX 2060.

                  Hamburger, 09 Mar 2022

                  Goddess Lyrienne, 09 Mar 2022: "While it sounds amazing and all... Combining 2 processors i..."

                  Processors have been combined for decades already. From the old MIPS, DEC Alpha, Intel, AMD, etc., they were put in multi-socket configs, from 2P to 4P to 8P. Crazy stuff! Common dual-processor systems back in the day were the Pentium Pro and the Athlon, if you remember.

                  So Apple putting 2 dies into 1 package is nothing new. But they use super fast interconnects to join the 2 dies, which is way faster in speed and latency than the old socket methods.

                  I say KUDOS TO APPLE! Intel and AMD are now exploring chiplet methods using super fast interconnects. I'm not complaining! Good times ahead for more computing power at low prices! Especially looking forward to Intel and AMD SoC solutions in the coming years!

                    RMS Vaule, 09 Mar 2022: "You're the guy who comments so carefully and less enga..."

                    I don't remember ever talking to anyone with your username, so I have no idea what you mean by "this time", but I'm not as deeply interested in or knowledgeable about computer architecture as you are, and if you look at my original comment I'm obviously not trying to claim "uH mAc iS tHe bEsT wInDoWs SuCkS" either (I have a Windows machine right in front of me, in fact). So I'm just going to ask you some questions for clarification:

                    1. Is it wrong for Apple to apply a concept (similar to what Nvidia and AMD came up with) to its own product when it doesn't infringe their patents?
                    2. Has either of them achieved a similar level of link bandwidth (2.5TB/s) on consumer-grade products using MCM?

                    I actually didn't know about NVLink on Nvidia's Ampere generation GPUs achieving 1.8 TB/s of bandwidth, thanks for informing me about it. Also, if you're correcting somebody, you don't need to play the heel and throw insults like "amateur" at them - I'm pretty sure you know all of this because you're enthusiastic about the topic, and NOT because you enjoy looking down on others in an anonymous forum.

                      Hamburger, 09 Mar 2022

                      RMS Vaule, 09 Mar 2022: "Here comes another no brain Apple guy. Tell me the signific..."

                      The RTX 2060's FP32 is 6.45 TFLOPs. The Max is 10.2 TFLOPs, and the Ultra is double that.

                      The RTX 2060's memory bandwidth is 336GB/s. The Max has 400GB/s, and the Ultra is again double that.

                      I don't know what you're smoking to say the RTX 2060 is way ahead...
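The TFLOPs figures traded above are easy to sanity-check with the usual peak-FP32 estimate: shader ALU count × clock × 2 FLOPs per fused multiply-add. A minimal sketch (the 1.68 GHz and 1.30 GHz clocks are approximate published figures, not numbers from this thread):

```python
def peak_fp32_tflops(alus: int, clock_ghz: float) -> float:
    """Peak FP32 throughput: ALUs x clock x 2 (one FMA = 2 FLOPs)."""
    return alus * clock_ghz * 2 / 1000.0

# RTX 2060: 1920 CUDA cores at ~1.68 GHz boost
rtx_2060 = peak_fp32_tflops(1920, 1.68)   # ~6.45 TFLOPs

# M1 Max: 32-core GPU = 4096 ALUs at ~1.30 GHz
m1_max = peak_fp32_tflops(4096, 1.30)     # ~10.65 TFLOPs

print(f"RTX 2060: {rtx_2060:.2f} TFLOPs, M1 Max: {m1_max:.2f} TFLOPs")
```

The first result matches the 6.45 TFLOPs quoted in the comment; the second lands slightly above the quoted 10.2 because the clock here is an estimate. Peak FLOPs says nothing about real-world game or application performance, which is the other half of this argument.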

                        AnonD-844232, 09 Mar 2022

                        Yeah, but it's useless because you can't run Windows on it.

                          Hamburger, 09 Mar 2022

                          AnonD-1030581, 09 Mar 2022: "What are you talking about? The apple silicon doesnt have ..."

                          Go check out the 16" MBP and reviews of it. The boost is exclusive to the 16" only, due to its higher thermal headroom. When it boosts, power draw goes up to about 93W.

                            Anonymous, 09 Mar 2022

                            RMS Vaule, 09 Mar 2022: "Here comes another no brain Apple guy. Tell me the signific..."

                            It's no problem to make a dedicated graphics card that pulls 200W alone. It's another thing when an entire M1 Mac Mini pulls 39W of power (that's CPU + GPU + memory + motherboard + cooling fan). It's a significant power difference.

                            As for complaining that Macs are bad at certain things: they are focused on content creators immersed in Apple's ecosystem, and within that they are not just optimized but hyper-optimized. Clearly, if you want to do tasks outside of that, you should consider a different device, if it involves Windows and general-compute applications that don't benefit from the M1's feature set. That's like running tasks that require AVX512 on a CPU that doesn't support it. It's no different here with the M1.

                              [deleted post]

                              Alright, see you tomorrow.

                                Nickname optional, 09 Mar 2022

                                All that power is for nothing in the end, if it's not put to good use.
                                I am just hoping Apple will enter the gaming world more seriously. They released Apple Arcade in 2019 with such high hopes and praise, and now it's basically just a service for mobile games without ads or IAPs, similar to Google Play Pass.
                                They don't necessarily need high-end games, but they should at least make some games that don't look so childish and boring, now that they have the hardware to do it. And they definitely have the money to try something like this.

                                  AnonD-1030581, 09 Mar 2022

                                  Kids these days don't know that Nvidia has been using the same old tech for 20 years... if they want more GPU power, they reach for higher power consumption.
                                  The M1 Ultra will be on par with a 3080 or even a 3090 under the Metal API, consuming at peak 110W, while an Nvidia 3090 uses 350W... yes, 3 5 0 W.
                                  Even without the Metal API, the M1 Ultra at 110W can be on the level of a 3070 that needs around 220W.
                                  So for Nvidia to reach the performance of Apple's GPU, it needs at least DOUBLE the power consumption... THIS IS DINOSAUR-age tech.
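Taking the wattage figures in this comment at face value (they are claims from this thread, not measurements), the implied efficiency gap is simple arithmetic, assuming roughly equal performance at the claimed wattages:

```python
def efficiency_ratio(watts_a: float, watts_b: float) -> float:
    """How many times more efficient chip A is than chip B,
    assuming both deliver roughly the same performance."""
    return watts_b / watts_a

# Claimed: M1 Ultra ~3090-level under Metal at 110 W vs the 3090's 350 W.
print(f"vs 3090: {efficiency_ratio(110, 350):.1f}x")  # ~3.2x

# Claimed: ~3070-level without Metal at 110 W vs the 3070's ~220 W.
print(f"vs 3070: {efficiency_ratio(110, 220):.1f}x")  # 2.0x
```

Whether those performance-parity claims hold up in independent benchmarks is exactly what the replies below dispute.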

                                    AnonD-1030581, 09 Mar 2022

                                    RMS Vaule, 09 Mar 2022: "Here comes another no brain Apple guy. Tell me the signific..."

                                    Stop lying, man... the M1 Max can keep up with a 3060 while using a non-Metal API, and using the Metal API it can keep up with a 3080 discrete GPU.
                                    Please leave your cave... or stop playing games. With this M1 Ultra and its 2.5TB/s UltraFusion link between the 2 SoCs, Apple in just 1 year managed to do something Intel and AMD have been trying for decades: properly link 2 dies with almost no latency... but I guess I have to stop talking with someone who can't even do a little research. Bye.

                                      Hamburger, 09 Mar 2022: "High end computers cost money. This Studio has similar p..."

                                      ...I definitely wouldn't say that the graphics in the M1 Pro/Max/Ultra are on par with an RTX 3070 - or an RTX 2000 series card, for that matter - but it's still quite impressive to see this much integrated graphics power in a CPU, I won't lie.
                                      Although Apple's benchmarks say that the M1 Ultra's graphics can beat a Radeon Pro W6900X, I wouldn't hold my breath for those to be 100% accurate, since they were probably done in controlled environments.