NVidia cheats on Benchmarks!

  • Andrew Pratt
    Moderator Emeritus
    • Aug 2000
    • 16507

    NVidia cheats on Benchmarks!

    Details

    Cheating in graphics card benchmarks has been around since the very inception of graphics benchmarks. Over a decade ago, manufacturers such as S3 were discovered to be hardcoding into their graphics card BIOS and drivers the very text strings used by a widely-used industry graphics benchmark, so the driver could recognize when that benchmark was running. Since then we've had the Quack vs. Quake controversy, where ATI was found guilty of shipping a driver that detected whether you were running a Quake III benchmark and then adjusted itself to produce erroneously high scores. Now evidence has surfaced that nVidia may be guilty of fixing its GeForce FX drivers to give misleadingly high 3DMark2003 scores, according to ExtremeTech.

    ExtremeTech noted that while the GeForce FX performed very well--in fact, suspiciously well--on the 3DMark2003 benchmark, other benchmarks did not show the same marked improvement. However, ExtremeTech had access to something that nVidia itself did not: the developer version of 3DMark2003. Using the developer version, ExtremeTech was able to alter some of the key tests within 3DMark2003 by slightly changing the camera path. The results were quite startling, with graphical glitches everywhere. Since such glitches should not happen if nVidia's drivers were calculating the scene as they would for any other game, the conclusion is that nVidia has cooked its drivers, optimizing them for the specific camera path of 3DMark2003. Because the driver "knows" in advance what will be rendered along that fixed camera path, it can use precomputed values for certain render parameters instead of calculating them on the fly, thus boosting scores. But if the camera path is changed, those fixed values are wrong, resulting in graphical glitches exactly like the ones ExtremeTech observed.
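
    Purely to illustrate the mechanism described above (a hypothetical sketch, not nVidia's actual driver code), a driver could special-case a benchmark whose camera path never varies by swapping in values that were computed offline:

        // Hypothetical sketch only -- not actual driver code.
        // If the running application is recognized as a benchmark with a
        // fixed, known camera path, substitute canned per-frame values for
        // work that would normally be done every frame. The moment the
        // camera path deviates, the canned values are wrong -- which shows
        // up as exactly the kind of glitches described above.
        #include <cstddef>
        #include <iostream>
        #include <string>
        #include <vector>

        struct FrameParams {
            float nearClip;
            float farClip;
            bool  cullOffscreenGeometry;  // only safe if the camera never deviates
        };

        // Canned per-frame parameters, produced offline by profiling the
        // benchmark's fixed camera path (dummy values here).
        static const std::vector<FrameParams> kCannedFrames = {
            {1.0f, 500.0f, true},
            {1.0f, 350.0f, true},
        };

        static bool IsKnownBenchmark(const std::string& exeName) {
            // Detection by executable name (the name here is illustrative).
            return exeName == "3DMark03.exe";
        }

        static FrameParams GetFrameParams(const std::string& exeName,
                                          std::size_t frameIndex,
                                          const FrameParams& computed) {
            if (IsKnownBenchmark(exeName) && frameIndex < kCannedFrames.size()) {
                return kCannedFrames[frameIndex];  // skip the real computation
            }
            return computed;  // normal path: calculate everything per frame
        }

        int main() {
            FrameParams computed{1.0f, 1000.0f, false};
            FrameParams used = GetFrameParams("3DMark03.exe", 0, computed);
            std::cout << "far clip actually used: " << used.farClip << "\n";
        }

    Run against any other executable name, the same code falls through to the normally computed parameters, which is why this kind of shortcut only pays off on the one benchmark it targets.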

    It's not an open-and-shut case this time, though. Kyle Bennett, webmaster of [H]ardOCP, has pointed out that ExtremeTech was excluded from the general release party for the newest GeForce FX card, and that it's possible ExtremeTech is simply retaliating against nVidia for the perceived snub by jumping to conclusions. His conversations with nVidia indicate that nVidia believes the graphical glitches were caused by a driver bug, not a feature, and that the bug will be corrected in a future release.




  • TonyPTX
    Member
    • Apr 2003
    • 39

    #2
    Quite a disturbing fact to find out... I've been an NVidia fan ever since my old Voodoo 2 and Banshee became obsolete. I think a public apology is in order.




    "Those that don't know, don't know they don't know."
    "Those that don't know, don't know they don't know."


    • Gordon Moore
      Moderator Emeritus
      • Feb 2002
      • 3188

      #3
      Not quite cheating. It's been shown time and time again that benchmarks like 3DMark are worthless numbers because you can code around the benchmark. In Nvidia's case I believe this was a bug. The story goes deeper than that: Nvidia does not have the source because they no longer pay FutureMark's really high licensing fees, so they've been snubbed, as it were, by Futuremark (which apparently shows ATI a lot of love these days).

      HardOCP no longer uses FutureMark's bench as a "real world" measure. Game timedemos are more accurate and give a true measure of how you will stack up against your neighbor... I would tend to agree that 3DMark is in NO WAY a real world measure of how you stack up against another manufacturer. It's okay for testing against like groups ON THE SAME DRIVER SET. That part is important because sometimes a different driver will inflate or deflate your 3DMark score even though, in the real world, it has neither hampered nor improved performance the way 3DMark would suggest.

      As always, benchmarks should be taken with a grain of salt.

      --Gord

      BTW, John Carmack spoke out on this:

      "Rewriting shaders behind an application's back in a way that changes the output under non-controlled circumstances is absolutely, positively wrong and indefensible.

      Rewriting a shader so that it does exactly the same thing, but in a more efficient way, is generally acceptable compiler optimization, but there is a range of defensibility from completely generic instruction scheduling that helps almost everyone, to exact shader comparisons that only help one specific application. Full shader comparisons are morally grungy, but not deeply evil.

      The significant issue that clouds current ATI / Nvidia comparisons is fragment shader precision. Nvidia can work at 12 bit integer, 16 bit float, and 32 bit float. ATI works only at 24 bit float. There isn't actually a mode where they can be exactly compared. DX9 and ARB_fragment_program assume 32 bit float operation, and ATI just converts everything to 24 bit. For just about any given set of operations, the Nvidia card operating at 16 bit float will be faster than the ATI, while the Nvidia operating at 32 bit float will be slower. When DOOM runs the NV30 specific fragment shader, it is faster than the ATI, while if they both run the ARB2 shader, the ATI is faster.

      When the output goes to a normal 32 bit framebuffer, as all current tests do, it is possible for Nvidia to analyze data flow from textures, constants, and attributes, and change many 32 bit operations to 16 or even 12 bit operations with absolutely no loss of quality or functionality. This is completely acceptable, and will benefit all applications, but will almost certainly induce hard to find bugs in the shader compiler. You can really go overboard with this -- if you wanted every last possible precision savings, you would need to examine texture dimensions and track vertex buffer data ranges for each shader binding. That would be a really poor architectural decision, but benchmark pressure pushes vendors to such lengths if they avoid outright cheating. If really aggressive compiler optimizations are implemented, I hope they include a hint or pragma for "debug mode" that skips all the optimizations.

      John Carmack"
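
      To put the precision point in concrete terms, here is a minimal sketch (hypothetical, not any vendor's actual shader compiler) of the data-flow idea Carmack describes: if every operand of a fragment operation is known to come from a low-precision source, the operation can be demoted from 32-bit to 16-bit float with no visible change in a 32-bit framebuffer.

          // Hypothetical sketch -- not any vendor's shader compiler.
          // Decides whether a fragment-shader operation can safely run at
          // 16-bit float, based only on where its operands come from.
          #include <iostream>
          #include <vector>

          enum class Operand {
              Texture8BitPerChannel,   // values already quantized to 8 bits
              SmallConstant,           // constant representable exactly in FP16
              FullPrecisionAttribute   // interpolated value that may need FP32
          };

          static bool CanLowerToFp16(const std::vector<Operand>& operands) {
              for (Operand op : operands) {
                  if (op == Operand::FullPrecisionAttribute) {
                      return false;  // keep the full 32-bit path
                  }
              }
              return true;  // every input already fits comfortably in FP16
          }

          int main() {
              std::cout << std::boolalpha
                        << CanLowerToFp16({Operand::Texture8BitPerChannel,
                                           Operand::SmallConstant}) << "\n"          // true
                        << CanLowerToFp16({Operand::Texture8BitPerChannel,
                                           Operand::FullPrecisionAttribute}) << "\n"; // false
          }

      As Carmack notes, doing this exhaustively (tracking texture dimensions and vertex data ranges per shader binding) is where the compiler complexity and the hard-to-find bugs would come from.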




      "A RONSTER!"
      Sell crazy someplace else, we're all stocked up here.


      • Bing Fung
        Ultra Senior Member
        • Aug 2000
        • 6521

        #4
        Both ATI and Nvidia were found to have shipped code that detects the benchmark and then runs its own routines in the drivers to enhance perceived performance. Nvidia's was something like a 24% gain in the benchmark and ATI's was 1.9%; however, cheating is cheating (if that's what you consider it) no matter what the value was.




        Bing


        • Trevor Schell
          Moderator Emeritus
          • Aug 2000
          • 10935

          #5
          My only comment on this is:

          Bring back the VOODOO!!
          Those were great cards in their time.
          Especially in SLI configuration.




          Trevor
          My HomeTheater S.E.
          Sonically Enhanced
          C5
