
Wow. For me, the big takeaway from this review is the benchmarks showing the iMac and MBPs beating out the Mac Pro. That shows that this machine is really for niche markets like professional video editing. Makes you wonder why Apple even bothers.


It's a "forward looking" architecture (to steal the line from the iPhone 5S pitch.) They're betting big on GPU compute. I wouldn't say it is for "niche markets"; though today it is perhaps only useful to smaller markets, this computer is going to get faster and faster over time relative to the 2013 iMac and MBPs as more apps take advantage of the GPUs. It's kind of a unique phenomenon, and it makes the benchmarks misleading. That said, it remains to be seen if this bet will pay off -- we could end up 5 years from now with the same small subset of apps taking advantage of GPUs as there are now.


As a mostly-hobbyist 3D artist, I'm already feeling really left out when I look at my relatively poor GPU vs. the capabilities of the rendering software that I use. If this fever has already started to seep down to my level, I certainly wouldn't predict against continued growth of the GPU computing world.


As someone doing GPGPU, I just hope that they'll release an NVIDIA option. Scientific GPGPU relies heavily on CUDA support; OpenCL just isn't there yet, if it ever will be.


I would argue that at least the current model actually is for a niche market, i.e. applications that use GPUs right now. By the time more non-gaming applications outside of media editing are using GPUs more extensively, the top end mobile graphics chips will have the power of this dual GPU setup.

It's a bold bet on a possible trend, which I like a lot, since it would mean that non-gamers would profit from the GPUs that would otherwise bore themselves to death on their machines. Also, it would give AMD a better position, maybe keeping x86 from becoming a complete Intel monopoly.


This has kind of been the claim for years now, though: a burgeoning market of compute apps just around the corner. Only it isn't so easy. Compute applications only fit a specific set of problems, not just because of the GPU-geared restrictions and architecture of these designs, but because the islands of memory force endless copies back and forth. Unified memory should go a long way toward making compute more generally usable, though of course that does nothing for the person paying $6000 for this unit.


The person paying $6000 for this unit is a delighted Final Cut Pro user who has read the reviews and understands how the machine is tailored for them.

For more conventional, but still pro workloads, most of us will be much better off with a $3000-4500 model.

It's going to be interesting to see how pro apps end up tailored for this architecture.


I wouldn't be surprised if Apple develops a library to help facilitate GPU usage, similar to how they developed Grand Central Dispatch to help developers utilize multicore CPUs more effectively.


Even better: what if Apple developed a whole language for GPU compute? They could eventually get other vendors to participate and make it an open standard. How about "Open Compute Language"? Nah, too verbose. How about "OpenCL"? ... =)


Doesn't GCD do OpenCL?

Or do you mean something more seamless and auto-magical?


No - GCD is only about distributing workloads across CPU cores, and doesn't involve the GPU.

OpenCL uses a special programming model, so you can't use it for general application code. It's good for doing repetitive operations on large arrays -- e.g. image or signal processing, or machine learning. OpenCL code will run on the CPU if there is no GPU or if the overhead of shipping the data to the GPU is too high.


In general, the Mac Pro loses out in single-thread performance only, since Xeons are typically one architecture behind. When the Haswell-based Xeons come out, Apple will refresh the Mac Pro line, and you'll get single-thread parity.


Except that the consumer lines will have moved on by then.


Do we know that though? Historically speaking, the Mac Pro doesn't get updated nearly as often as Intel releases a new architecture.


It actually did for a long time; then they skipped a few iterations. Maybe they'll do that again, maybe they won't; we'll see.


What are the odds the Haswell Xeons will use the same socket as the current Xeons?


Zero, since Haswell-EP uses DDR4.


That being said, I wonder what the chances are that Apple will use the same socket for the Haswell-EP CPU daughterboard. Is there anything obvious that'd prevent them from doing that (e.g., chipset compatibility)?


Yes, I would expect the system design to stay as similar as possible for the next five years or so.


"Makes you wonder why Apple even bothers."

The line of video professionals happy to pay $10k/box to get their renders done faster screaming "TAKE MY MONEEY NAOW" might have something to do with it...

Also, since you missed all the charts showing the MP demolishing everything else by 2.5x+ on multithreaded workloads (ya know, the thing that people buy MPs for) you may want to verify your consumption of what the kids call "h8rade".


I think that is exactly the kind of niche they are looking to capitalize on. Think about the reasons to buy a powerful workstation computer. Games? You are probably going to build your own custom machine and put Windows on it. Server? Call up Dell and get a rack, or cobble some basement stuff together and throw Linux on it. The other case for needing a powerful workstation today is professional work like video editing, rendering, music production, and Apple already does well in that market. It's clearly one of their lowest priorities, as can be seen by the fact that they updated every other product line multiple times before they came back around to the Mac Pro, but at least they seem to understand the market. If they were marketing this as a gaming machine or a server platform, then they would be making a mistake.


The Mac Pro has been pretty niche for years, and has been losing out to the top-end iMac for a while on poorly-threaded stuff. To a large extent this is due to Intel's product cycle, where the many-cored high-bandwidth Xeon is at least one core iteration, and sometimes two, behind the iX.


align your silicon choices to your workload

The MacOS market in general might be described as niche, but it seems to have sustained itself over the years.


I think many of the design/art fields that they cater to have a need for this sort of MP-capable machine. That, and it's probably a neat exercise for technology development - I'm sure there's some nice engineering data and expertise that came out of this that might "trickle down" somehow.


I think the video editing market is growing fairly rapidly, as it gets easier to take and share high quality video. Of course, most of that growth is entry level youtube-channel stuff, but as those barriers fall so will those to the professional level.



