Gigahertz Mac Finally SPEC'd

FrkyD writes "c't magazine published a story with the results of a test they ran using a Mac OS X adaptation of CPU2000, the benchmark suite from the Standard Performance Evaluation Corporation (SPEC). SPEC makes it possible to compare against the Intel competition within a common framework. They compared a G4/1 GHz running Mac OS X with a PIII/1 GHz (Coppermine) running Windows and Linux."
  • gee, who'd have thought...

    Now, where are the PowerPC chips made on IBM's new process and running at 40 GHz?... :)
  • by Strog ( 129969 ) on Thursday March 07, 2002 @11:05AM (#3124415) Homepage Journal
    Until a more optimized compiler comes along, it looks like clock speed will mostly be the indicator of performance for general-purpose applications.

    Having said that, there will always be applications that are optimized enough to kick some butt on a G4, like Photoshop. If you are a programmer, it is nice not to be limited on registers, as on a RISC CPU. Choose the right tool for the job. If it comes down to a push, use your favorite. :P

    • Unfortunately, SPEC is optimized for Intel processors and is NOT written to even see AltiVec. Plus, it doesn't even see the G4's extra pipelines or registers. In short, it's not coded to exploit any advantage of the G4, while just about any "Hello World" program actually compiled for PPC would run laps around the SPEC app.
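
      To make that concrete, here is a minimal sketch of the kind of loop AltiVec flies on but portable scalar C never spells out. This assumes GCC's AltiVec extensions (-maltivec), 16-byte-aligned arrays, and n a multiple of 4; the function names are mine:

        /* Minimal sketch, assuming GCC's AltiVec extensions (-maltivec).
           saxpy_scalar is what SPEC-style portable C looks like to the
           compiler; saxpy_altivec hand-packs four floats per instruction. */
        #include <altivec.h>

        void saxpy_scalar(float *y, const float *x, float a, int n)
        {
            int i;
            for (i = 0; i < n; i++)
                y[i] = a * x[i] + y[i];
        }

        void saxpy_altivec(float *y, const float *x, float a, int n)
        {
            /* assumes x and y are 16-byte aligned and n is a multiple of 4 */
            vector float va = (vector float){a, a, a, a};
            int i;
            for (i = 0; i < n; i += 4) {
                vector float vx = vec_ld(0, &x[i]);
                vector float vy = vec_ld(0, &y[i]);
                vec_st(vec_madd(va, vx, vy), 0, &y[i]);
            }
        }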

  • Linux vs. Windows (Score:3, Interesting)

    by crow ( 16139 ) on Thursday March 07, 2002 @11:18AM (#3124456) Homepage Journal
    I found this passage from the article interesting:

    With a SPECint_base value of 306, Apple's 1 GHz machine under Mac OS X ran almost head to head with the equally clocked Pentium III combined with Linux and GCC, which reached a SPECint_base value of 309. Under Windows, the poor quality of Microsoft's run-of-the-mill compiler pushed the system down to a SPECint_base value of 236.

    That means Linux is over 30% faster than Windows!

    Too bad they didn't give similar floating point numbers (or at least I didn't find them in the article), especially seeing as how the Mac is faring so poorly against the Linux PIII in that area.
    • by foobar104 ( 206452 ) on Thursday March 07, 2002 @11:32AM (#3124520) Journal
      That means Linux is over 30% faster than Windows!

      Of course it doesn't. It means that GCC is somewhat better at compiling the SPECint_base benchmark than Visual Studio is.

      I won't pretend to be educated about the inner workings of SPECint, but one would suppose that, because it's purported to be a hardware benchmark rather than an OS benchmark, it is completely independent of the standard C library, or any other OS-level service. One would expect the compiled benchmark to just run pure code inside the CPU, without any system calls or any of that stuff.

      So the same benchmark compiled with the same compiler but run under two different OSs should return exactly the same result, within a certain statistical margin.

      Somebody with more time on their hands could either test this hypothesis, or confirm that it's already been done by somebody else.
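
      A quick way to test it, as a toy sketch rather than SPEC itself: time a purely compute-bound kernel that makes no system calls inside the timed region, build it with the same compiler on both OSs, and compare.

        /* Toy compute-bound kernel, not SPEC: no syscalls inside the timed
           region, so on the same CPU and compiler the OS should barely
           matter.  The constants are a standard linear congruential RNG. */
        #include <stdio.h>
        #include <time.h>

        int main(void)
        {
            volatile unsigned long x = 1;  /* volatile keeps the loop honest */
            unsigned long i;
            clock_t t0, t1;

            t0 = clock();
            for (i = 0; i < 500000000UL; i++)
                x = x * 1664525UL + 1013904223UL;
            t1 = clock();

            printf("%.2f s (x=%lu)\n", (double)(t1 - t0) / CLOCKS_PER_SEC, x);
            return 0;
        }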
      • Of course, everything on a standard Linux machine (kernel, libraries, X, etc.) is compiled with GCC, while I imagine everything Windows runs is compiled with Visual Studio's compiler. So if GCC consistently turns out code that's faster than Visual Studio, we could extrapolate that Linux is significantly faster than Windows.

        • we could extrapolate that Linux is significantly faster than Windows.

          No, you couldn't. Code written in C generally runs faster than code written in Java. However, surely you've seen a C program do something slower than a Java program; compare starting up Nautilus to running a hello-world program in Java. The example is extreme, but the point holds: there is a big difference in what hello world in Java and Nautilus are doing, and the same goes for Linux and Windows. The speed of the C compiler says absolutely nothing about the amount of abstraction, the amount of bloat, the amount of optimization, the sheer amount of code, or the quality of the code written as part of either Linux or Windows. Among plenty of other factors.
        • [I]f GCC consistently turns out code that's faster than Visual Studio, we could extrapolate that Linux is significantly faster than Windows.

          Maybe you could, if GCC could be thus characterized. But there's no evidence in this article that points to that conclusion. Rather, this article says that GCC did a better job of compiling the SPEC benchmarks. As everybody knows-- or should know-- benchmarks are to real applications as fish are to bicycles.
    • Re:Linux vs. Windows (Score:3, Interesting)

      by Graymalkin ( 13732 )
      Actually it means GCC is 30% faster than Visual Studio's compiler, which is notoriously shitty. You're also basing your argument on too few details. You don't know which compiler flags were used, so you can't compare -O3-optimized code to VS-optimized code. The VS compiler is not the world's greatest compiler, and I think they should have gone with the Borland C++ compiler for Windows. Your compiler makes a big difference in the speed at which code is going to run.
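
      For reference, the build lines would look something like the following. These are hypothetical (the article doesn't publish its flags), but they show how much room the comparison leaves:

        /* Hypothetical build lines -- the article doesn't list the flags used.
         * Same C source, very different optimizers and settings:
         *
         *   gcc -O3 -march=i686 -o bench bench.c    (Linux, GCC)
         *   cl /O2 /G6 bench.c                      (Windows, Visual C++ 6)
         *
         * Without knowing these, a "30% faster" result compares compiler
         * configurations at least as much as it compares anything else. */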
  • by jdb8167 ( 204116 ) on Thursday March 07, 2002 @11:23AM (#3124475)
    I know people are going to claim that the SPEC marks aren't susceptible to bias, but the SPEC suite only tests traditional architectures. As far as I know, it doesn't test SIMD vector processing like AltiVec.

    No one ever claimed that the FP unit alone on the G4 was at supercomputer status, just that the G4 in conjunction with AltiVec could crunch FLOPs at "supercomputer" speeds.

    Keep in mind that OS X is hardly optimized for this kind of test. OS X has only recently reached the point where it is useful as a general-purpose platform. But Apple is making a big push in the scientific computing area, so I expect you will see vast improvements in its SPEC FP results in the future.
    • As far as I understand the problem, AltiVec is very fast, but it only handles single-precision floats, not doubles.

      While single-precision floats are more than enough for multimedia processing (filters, compression, etc.), number crunching is generally done in double precision, and the floating-point tests of SPEC reflect this. You don't always need doubles for scientific calculations, but that is altogether another discussion.

      Maybe one day we'll see a multimedia component of SPEC, or AltiVec will support double-precision numbers (the author even mentions this at the end of the article), but until then AltiVec is out, and this has nothing to do with any bias on the author's part.

      As for OS X being optimised for this kind of stuff, we are talking applications that nearly never call the OS for anything, so the impact of OS X is probably nil. The truth is, floating-point calculation is not really important for most users, and both Intel and PowerPC processors are optimised for integer calculations. There was a good article about this [arstechnica.com] on Ars Technica.

      One reason I could see to explain the large difference lies in the compiler: there has been much more work on gcc to optimise for the Intel instruction set than for the PPC instruction set. As with most RISC processors, a PPC's performance is hugely influenced by the compiler.
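
      To illustrate the precision point, a sketch (again assuming GCC's AltiVec extensions): the G4's vector unit only offers a four-way single-precision type, so a double-precision kernel like the SPECfp codes use falls back to the scalar FPU.

        /* Sketch of the precision gap, assuming GCC's AltiVec extensions.
           AltiVec's 128-bit registers split into 4 x 32-bit float lanes;
           there is no "vector double" type on the G4. */
        #include <altivec.h>

        vector float four_floats;    /* fine: four single-precision lanes */
        /* vector double two_doubles;   no such type -- will not compile  */

        /* A double-precision dot product, the SPECfp-style case, can only
           use the scalar FPU: */
        double dot(const double *a, const double *b, int n)
        {
            double s = 0.0;
            int i;
            for (i = 0; i < n; i++)
                s += a[i] * b[i];
            return s;
        }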

      • Are you sure about the OS not having much impact on the test? I've read that there is a floating-point library in Mac OS X that is significantly slower than the equivalent in OS 9. Also, the Virtual PC people have spent months getting VPC's speed up on OS X because of issues in OS X with priorities and time-slicing.

        I don't know much about how these benchmarks are written or how the compilers actually generate FP code, but if they use a standard OS library that isn't particularly optimized, that would show up in the SPEC FP tests.

        SPEC doesn't just measure CPU speed; it measures the CPU in conjunction with the complete system being used to run the test. Unless they've changed their charter, this was always acknowledged by the SPEC consortium.

        I would love to know what kind of impact OS X has on the benchmarks. Has anyone done the equivalent study using Yellow Dog Linux?
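
        The library dependence is easy to picture. A sketch (hypothetical kernel, not one of the SPEC codes): if a hot loop leans on libm, the quality of the OS's math library shows up in the "CPU" number.

          /* Hypothetical kernel, not from SPEC: the timed work is dominated
             by exp(), which is resolved by the OS's math library (libm).
             A slow libm under OS X would surface in results like these. */
          #include <math.h>

          double sum_of_exps(const double *x, int n)
          {
              double s = 0.0;
              int i;
              for (i = 0; i < n; i++)
                  s += exp(x[i]);
              return s;
          }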

        • They said they ran in single-user mode, no GUI, on OS X and Linux. That probably means it was basically a text-console FreeBSD system. With no other processes running, the OS would have to waste time on purpose to slow down the benchmark!
      • As for OS X being optimised for this kind of stuff, we are talking applications that nearly never call the OS for anything, so the impact of OS X is probably nil.

        That doesn't sound right. Most Unix systems, OS X included (and NT, FYI), don't allow direct hardware access. You can only access system resources through operating system APIs.

        DOS is the only system I know of that lets you access the hardware directly. (I think NT lets you access graphics hardware directly too, but that has nothing to do with this test.)

        Vanguard
    • by Anonymous Coward
      The whole idea of SPEC is that it tests a number of highly optimized real-world codes written in standard programming languages.

      The rules are simple: you can do anything you want to your system, compiler, libraries, and optimization flags, but you are NOT allowed to touch the code.

      This is *GOOD*, since it means any optimization introduced by the hardware vendor or compiler authors will benefit all programs, not only hand-tuned assembly.

      So it's completely OK to use vector processing (and some of the benchmarks would benefit from it), but you must do it in the compiler, not by hand-tuning each executable.
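
      In other words, a SPEC-legal speedup has to come from the toolchain. A loop like the sketch below is the classic candidate: the source stays untouched, and a vectorizing compiler is free to emit AltiVec (or SSE) for it on its own.

        /* Sketch: SPEC's rules forbid rewriting source like this by hand,
           but a vectorizing compiler may legally turn the loop into
           AltiVec or SSE instructions on its own -- an optimization every
           program compiled afterwards inherits. */
        void add_arrays(float *dst, const float *a, const float *b, int n)
        {
            int i;
            for (i = 0; i < n; i++)
                dst[i] = a[i] + b[i];
        }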

  • All tests indicated that the GCC compiler produced better results than Microsoft's C compiler. It is not clearly indicated whether the GCC results came from the Linux machines, but I presume so.

    gus
  • by gouldtj ( 21635 ) on Thursday March 07, 2002 @11:40AM (#3124589) Homepage Journal
    I guess I am surprised by the results. Just USING the various machines, I would say that there is about a 1:2 ratio between the MHz of a PPC and an x86 processor. But then I started to think about it. In reality, the processors aren't what makes the difference; it is everything else. Macs don't tend to do as much in software, and they make better use of the GPU, so the processor is left to do more 'processing'. I guess there will never be a completely perfect 'user' test; I'll just have to go with feel. But in my experience those SPEC numbers don't tell the whole story.

    It might be interesting to see a comparison with Linux running on both machines... Anyone have one of these?

  • On the SPEC reports I've seen, they usually provide the list of compiler flags and libraries.

    I don't see that info here.

    Is it possible that unoptimized libraries like libm would hobble the Mac's results under OS X?

  • For those interested, arstechnica had some great articles a while ago on the processor families and the different ways they handle instructions.

    Part I [arstechnica.com].

    Part II [arstechnica.com].

  • my own experience (Score:3, Interesting)

    by jchristopher ( 198929 ) on Thursday March 07, 2002 @03:02PM (#3126047)
    My own experience tells me that a 500 MHz Mac (iBook, 640 MB RAM) runs OS X and common apps (browser, mail, newsreader, IM) at roughly the speed of a Pentium II 300 with Windows 2000. That's terrible.

    Even a low-end PC these days ($700 or so) will run Windows FAST, whereas Apple's low end runs OS X slowly.

    Most of the Mac's "speed problems" lie in the OS, not the hardware. Linux on the iBook described above flies.

    • Sure, but that's not really relevant to the article. SPEC is a measure of the capability of a CPU+Memory Subsystem+compiler in performing Real Work, where Real Work is number/symbol crunching. You know - linear algebra, compiling, those kinds of things. SPEC couldn't care less about measuring how responsive a GUI is, and rightly so. Assuming the OS has decent process+priority management (as it should for Mach+BSD), the GUI bloat should not be relevant.

      I was actually quite surprised by the poor performance of the G4. Although if the other posters are correct in stating that SPEC doesn't really test single-precision floats or SIMD, it's probably not a very good test of the capabilities of the CPU. Of course, Apple has also crippled their systems with PC133 memory. That doesn't help either.
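
      The memory point is testable, too. A STREAM-style triad (a sketch, not one of the SPEC codes) is limited by memory bandwidth long before the core saturates, which is exactly where PC133 hurts a 1 GHz G4:

        /* STREAM-style triad, a sketch rather than a SPEC code: each
           iteration moves roughly 24 bytes of data for one multiply-add,
           so PC133 SDRAM, not the 1 GHz core, sets the ceiling. */
        void triad(double *a, const double *b, const double *c,
                   double s, int n)
        {
            int i;
            for (i = 0; i < n; i++)
                a[i] = b[i] + s * c[i];
        }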
      • If I may add something... there's a long thread on exactly this subject in comp.arch right now. Look for the thread called "SPEC2k results for G4". There are some very interesting comments from people who mostly seem to know what they're talking about.
    • Re:my own experience (Score:2, Informative)

      by Llywelyn ( 531070 )
      Your experience, in essence, is not the norm for Mac users.

      My own experience is that a 300 MHz G3 will blow a 500 MHz Pentium out of the water, and that's running Mac OS 9.

      System configurations matter, memory matters, &c.
      • That's right, but I'm talking about OS X, not OS 9.

        OS 9 is not a fair comparison - it's more like Windows 98 than 2000. Mac users who want a stable system must use OS X. The equivalent PC operating system is Windows 2000.

        Windows 2000 is far faster than OS X with the same amount of memory.

  • If compilers are such a big obstacle to obtaining accurate benchmarks, why doesn't someone hand-code some assembly for these machines?

    It's not even reasonable to take readings when you KNOW your data will be inaccurate. Sheesh. Anyone who can code VB will call themselves a "computer scientist" these days...

  • Why a P3? (Score:3, Interesting)

    by george399 ( 537785 ) on Thursday March 07, 2002 @04:17PM (#3126650) Homepage
    Call me crazy, but why is the benchmark between a PIII and a G4?
    Wouldn't a P4 be a better test?
  • by Johnny Mnemonic ( 176043 ) <mdinsmore&gmail,com> on Thursday March 07, 2002 @04:36PM (#3126799) Homepage Journal

    Buried in the article is this note: "and switched off the second supporting processor of the dual machines." That means the dual 1 GHz machines were run as single 1 GHz machines, and would therefore be much faster in the real world, so cost comparisons should be made accordingly.
    • Some apps can't take advantage of 2 CPUs.
      • I wouldn't necessarily consider this a bad thing. My P4 1.8 quite frequently makes me miss my dual P3 500. While I agree that 2 GHz > 2x 1 GHz on a raw performance level (given the same chip), the usability of the latter often exceeds that of the former. That is, I out-multitask my computer all the freakin' time.
      • But you are ignoring the fact that Mac OS X itself supports dual processors and will distribute processes and threads between the two. Thus, it will run faster than a single-processor system.

        Now, whether the SPEC benchmarks would reflect that is another question entirely. Not knowing anything about how they are written, I can't make any predictions. However, if they are simply single-threaded tests, then, no, they won't show much of an improvement on the dual systems.
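
        Unless the benchmark itself spawns threads, the second processor just idles. A minimal sketch of the difference, using POSIX threads (the names and split are mine):

          /* Minimal sketch: a second CPU only helps if the program makes
             work for it.  Here a loop is split across two POSIX threads.
             Build with -lpthread. */
          #include <pthread.h>

          #define N 1000000
          static double a[N], b[N], out[N];

          static void *half(void *arg)
          {
              long part = (long)arg;               /* 0 or 1 */
              long lo = part * (N / 2), hi = lo + N / 2;
              long i;
              for (i = lo; i < hi; i++)
                  out[i] = a[i] * b[i];
              return NULL;
          }

          int main(void)
          {
              pthread_t t;
              pthread_create(&t, NULL, half, (void *)0); /* one half there */
              half((void *)1);                           /* other half here */
              pthread_join(t, NULL);
              return 0;
          }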
  • They should have loaded a BSD or Linux (same kernel version) onto each computer, just to rule out OS differences. Not to mention using similar graphics cards, RAM, hard disks, etc. This is one of the worst benchmarks I've ever seen. What a joke!
