In
the past year we have seen the graphics market
consolidate to the point where there are only two
main players at the top: ATI and Nvidia. Earlier
this year, ATI responded to the GeForce 2
juggernaut with their highly successful Radeon
chipset, which combined a flurry of new DirectX 8
compatible features with some of the best 2D and 3D
image quality gamers have ever seen. New bandwidth-saving techniques helped ATI excel at high-resolution, 32-bit graphics and become a popular
fixture in gaming machines. The GeForce faithful
held firm, patiently waiting for Nvidia to answer
back with their next generation product. The time
has now arrived, and Nvidia has released the
GeForce 3. Not only has it arrived on the PC, but
the GeForce 3 chipset will be the core graphics
engine for the upcoming Microsoft Xbox, which
throws a few interesting wrinkles into the mix. How
well can Nvidia meet the needs of both platforms?
Will one lose out to the other, or will they both
succeed in the eyes of their dedicated public? We
were lucky enough to receive a GeForce 3 reference
board directly from Nvidia and have run it through
our standard array of tests. To find out how well
it holds up and what it has to offer, feast your
eyes on our latest review.
Features
We won't go into a super-technical analysis of the features that the GeForce 3 brings to the table; frankly, we don't want to overwhelm or bore anyone with too much raw data. However, here are the key technologies that Nvidia has introduced in their new product and how they will benefit you.
Lightspeed
Memory Architecture
These functions
are designed to help minimize bandwidth
requirements and are very similar to the Hyper-Z
technology in the ATI Radeon. The Nvidia solution
consists of four primary functions:
Optimized Memory Control
Hidden Surface Removal
Lossless Z-Buffer Compression
Fast Z-Buffer Clearing
The optimized
memory controller allows data to be broken into
smaller segments when needed. This means that
instead of sending data in fixed chunks of 128 or
256 bits, it is possible to send data in smaller
clusters of 32, 64 or 96 bits, for example. Less
wasted data is sent over the memory bus, making the
entire process faster and more
efficient.
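To picture why finer granularity matters, here is a small conceptual C++ sketch that compares bus traffic when every request is padded out to a fixed 256-bit chunk against a controller that can issue transfers in 32-bit multiples. The request sizes and the arithmetic are purely illustrative; this is not a model of Nvidia's actual memory controller.

// Conceptual sketch only: compares bus traffic for a fixed 256-bit
// transfer granularity against a controller that can issue smaller
// 32-bit-multiple transfers. Numbers and names are illustrative.
#include <cstdio>
#include <vector>

int main() {
    // Payload sizes (in bits) of a handful of hypothetical requests.
    std::vector<int> requests = {96, 32, 160, 64, 224};

    int fixedTraffic = 0, flexibleTraffic = 0;
    for (int bits : requests) {
        // Fixed controller: every request is rounded up to 256 bits.
        fixedTraffic += ((bits + 255) / 256) * 256;
        // Flexible controller: rounded up to the next 32-bit segment.
        flexibleTraffic += ((bits + 31) / 32) * 32;
    }
    std::printf("fixed 256-bit chunks: %d bits on the bus\n", fixedTraffic);
    std::printf("32-bit granularity:   %d bits on the bus\n", flexibleTraffic);
    return 0;
}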
The hidden surface removal feature analyzes the 3D scene and skips the processing of pixels that never actually appear on screen because they are blocked by objects in the foreground. The chip does not spend time computing what it does not have to, and as a result, less data has to be sent over the memory bus and less computing power is used in calculating the display.
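The basic idea can be sketched in a few lines of C++: fragments whose depth lies behind what the Z-buffer already holds are rejected before any shading work is done. The tiny buffer and the made-up depths below are only there to illustrate the savings; they are not the GeForce 3's actual pipeline.

// Conceptual sketch of occlusion-based rejection: pixels whose depth
// is behind what the Z-buffer already holds are skipped before any
// (expensive) shading work is done.
#include <cstdio>
#include <vector>

struct Fragment { int x, y; float depth; };

int main() {
    const int W = 4, H = 4;
    std::vector<float> zbuffer(W * H, 1.0f);   // 1.0 = farthest

    // A near quad already written into the Z-buffer at depth 0.3.
    for (int i = 0; i < W * H; ++i) zbuffer[i] = 0.3f;

    // Fragments of a farther object at depth 0.7: all occluded.
    int shaded = 0, rejected = 0;
    for (int y = 0; y < H; ++y) {
        for (int x = 0; x < W; ++x) {
            Fragment f{x, y, 0.7f};
            if (f.depth >= zbuffer[y * W + x]) {
                ++rejected;                    // hidden: skip shading entirely
            } else {
                zbuffer[y * W + x] = f.depth;  // visible: shade and update Z
                ++shaded;
            }
        }
    }
    std::printf("shaded %d fragments, rejected %d without shading\n",
                shaded, rejected);
    return 0;
}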
Lossless Z-Buffer
compression allows data to be compressed so that it
takes up less space in memory. This concept is very
similar to how files are compressed into ZIP
archives. No data is actually lost; it is simply squeezed down so that it can be transferred faster, then decompressed when it arrives at its destination.
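Nvidia has not published the exact compression scheme, but any lossless coder illustrates the point. The conceptual C++ sketch below run-length encodes a tile of depth values with large uniform regions, shrinking it considerably while remaining exactly recoverable.

// Conceptual sketch of lossless compression applied to depth data,
// here a simple run-length encoder. The real hardware scheme is not
// publicly documented; the point is only that identical information
// can occupy fewer bytes in transit and be restored exactly.
#include <cstdio>
#include <cstdint>
#include <utility>
#include <vector>

// Encode runs of identical 16-bit depth values as (value, count) pairs.
std::vector<std::pair<uint16_t, uint16_t>> rle(const std::vector<uint16_t>& z) {
    std::vector<std::pair<uint16_t, uint16_t>> out;
    for (uint16_t v : z) {
        if (!out.empty() && out.back().first == v)
            ++out.back().second;
        else
            out.push_back({v, 1});
    }
    return out;
}

int main() {
    // A tile of depth values with large uniform regions (typical of sky
    // or a wall filling much of the view).
    std::vector<uint16_t> tile(64, 0xFFFF);
    for (int i = 20; i < 28; ++i) tile[i] = 0x1234;

    auto packed = rle(tile);
    std::printf("raw: %zu bytes, packed: %zu bytes\n",
                tile.size() * sizeof(uint16_t),
                packed.size() * sizeof(packed[0]));
    return 0;
}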
Fast Z-Buffer clearing is a highly efficient method of data-dumping that helps get the Z-Buffer ready to receive new data quickly. Timing is what this is all about, since you don't want data to be flushed before it has been properly rendered on screen. It is basically a dynamic method of clearing data jams, somewhat like the way traffic lights work. Some lights run on a timer, turning green every three minutes or so to let traffic move at predetermined intervals. In high-traffic areas, however, lights may be driven dynamically by sensors that can tell when traffic is particularly heavy and adjust the delay accordingly: when little traffic is present, lanes can be flushed quickly, and when traffic is heavy, lights stay green a little longer. It is a complicated procedure that relies heavily on the efficiency of the driver code.
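One rough way to picture a fast clear in software terms: instead of writing the clear value into every Z-buffer entry, each tile keeps a "cleared" flag that is resolved lazily on the next read. The tile size and layout in this C++ sketch are invented for illustration; the real mechanism is not documented here.

// Conceptual sketch of a "fast clear": a per-tile flag replaces
// per-pixel clear writes and is resolved lazily on the next read.
#include <algorithm>
#include <cstdio>
#include <vector>

struct Tile {
    bool cleared = true;             // flag instead of per-pixel writes
    std::vector<float> depth;
    explicit Tile(int n) : depth(n, 1.0f) {}

    float read(int i) {
        if (cleared) {               // materialize the clear on first use
            std::fill(depth.begin(), depth.end(), 1.0f);
            cleared = false;
        }
        return depth[i];
    }
    void clear() { cleared = true; } // one flag per tile, no memory traffic
};

int main() {
    std::vector<Tile> zbuffer(256, Tile(64)); // 256 tiles of 64 depths each

    // "Clearing" the whole buffer touches one flag per tile rather than
    // 256 * 64 individual depth values.
    for (Tile& t : zbuffer) t.clear();
    std::printf("first depth after clear: %.1f\n", zbuffer[0].read(0));
    return 0;
}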
High
Resolution Anti-Aliasing
This is a feature
that is new to the market and a step above anything
the competition has put forth to date. Like the system on 3dfx's Voodoo5 5500, this anti-aliasing is based in hardware, but it is designed to be much more efficient. The GeForce 3 uses two different computational methods. The first is multi-sampling, which is different from the
traditional super-sampling in that it is designed
to eliminate redundancies, something akin to the
hidden surface removal concept. The second is a
proprietary algorithm called Quincunx, where pixels
are blended via a cross-comparison. This algorithm
is more efficient than other established methods
and results in a picture very similar to 4x
anti-aliasing with much less computational
overhead.
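The quincunx idea itself is easy to visualize: each output pixel is a weighted blend of its own sample and four diagonal neighbors, the five points of a quincunx. The C++ sketch below uses the commonly quoted 1/2 and 1/8 weights on a toy image with a hard edge; Nvidia's exact hardware filter may differ.

// Conceptual sketch of a quincunx-style filter: each output pixel is a
// weighted blend of its own sample and four diagonally neighboring
// samples. The 1/2 + 4 x 1/8 weights are a common description of the
// idea, not a statement of the exact hardware filter.
#include <algorithm>
#include <cstdio>
#include <vector>

int main() {
    const int W = 4, H = 4;
    // A toy grayscale image with a hard vertical edge.
    std::vector<float> src = {
        0, 0, 1, 1,
        0, 0, 1, 1,
        0, 0, 1, 1,
        0, 0, 1, 1,
    };
    std::vector<float> dst(W * H);

    auto at = [&](int x, int y) {
        // Clamp to the border so the filter stays defined at the edges.
        x = std::max(0, std::min(W - 1, x));
        y = std::max(0, std::min(H - 1, y));
        return src[y * W + x];
    };

    for (int y = 0; y < H; ++y)
        for (int x = 0; x < W; ++x)
            dst[y * W + x] = 0.5f * at(x, y)
                           + 0.125f * (at(x - 1, y - 1) + at(x + 1, y - 1)
                                     + at(x - 1, y + 1) + at(x + 1, y + 1));

    for (int y = 0; y < H; ++y) {
        for (int x = 0; x < W; ++x) std::printf("%.2f ", dst[y * W + x]);
        std::printf("\n");
    }
    return 0;
}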
Improved
DVD Playback
PC DVD playback
is becoming a fairly common thing, but these
improvements seem targeted more towards the Xbox
crowd, since every console will come with a
built-in DVD player. The key items that Nvidia has included in the GeForce 3 to enhance DVD playback are:
High Definition
Video Processor (full screen, full frame
playback)
Hardware Motion Compensation
Sub-Pixel Alpha Blending / Composition
Hardware Scaling (up and down)
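Of those, hardware scaling is the easiest to illustrate. The C++ sketch below performs a simple bilinear resample of a tiny grayscale frame, which is the kind of work a video processor takes off the CPU during playback; it is a software stand-in, not the chip's actual video engine.

// Conceptual sketch of video scaling: a bilinear resample of a small
// grayscale frame to a larger one. Illustrative only.
#include <algorithm>
#include <cstdio>
#include <vector>

std::vector<float> scaleBilinear(const std::vector<float>& src,
                                 int sw, int sh, int dw, int dh) {
    std::vector<float> dst(dw * dh);
    for (int y = 0; y < dh; ++y) {
        for (int x = 0; x < dw; ++x) {
            // Map the destination pixel back into source coordinates.
            float fx = x * (sw - 1) / float(dw - 1);
            float fy = y * (sh - 1) / float(dh - 1);
            int x0 = int(fx), y0 = int(fy);
            int x1 = std::min(x0 + 1, sw - 1);
            int y1 = std::min(y0 + 1, sh - 1);
            float tx = fx - x0, ty = fy - y0;
            // Blend the four surrounding source pixels.
            float top = src[y0 * sw + x0] * (1 - tx) + src[y0 * sw + x1] * tx;
            float bot = src[y1 * sw + x0] * (1 - tx) + src[y1 * sw + x1] * tx;
            dst[y * dw + x] = top * (1 - ty) + bot * ty;
        }
    }
    return dst;
}

int main() {
    std::vector<float> frame = {0, 1, 1, 0};     // a 2x2 "frame"
    auto big = scaleBilinear(frame, 2, 2, 4, 4); // upscaled to 4x4
    std::printf("sample of upscaled frame: %.2f\n", big[2 * 4 + 1]);
    return 0;
}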
We do not include DVD playback as part of our normal video testing, but as a point of comparison we did conduct visual quality assessments using The Matrix on DVD. We found
video playback on the GeForce 3 to be improved over
that of the GeForce 2 Pro, but lagging noticeably
behind the outstanding playback offered by the ATI
Radeon. During playback at 1600x1200x32, the
GeForce 3 had some noticeable blur during the fight
scene in the dojo and at times there were jerky
hesitations. The Radeon exhibited none of these
anomalies using WinDVD or PowerDVD, yet they were
present on the GeForce 3 in both applications.
Perhaps future versions of the PC playback software
will utilize the features that the GeForce 3 has to
offer to a greater degree.
nfiniteFX
Engine
With most
graphics cards, such as the ATI Radeon, complex
effects are coded directly into the video hardware;
applications can realize a dramatic increase in
performance by calling these built-in hardware functions as opposed to writing them in software.
The downside is that applications are usually
limited to only those functions that are built
directly into the hardware. Nvidia has taken a
somewhat different approach with the GeForce 3. The
nfiniteFX engine is designed to be fully programmable, with developer-written functions receiving hardware assistance where possible. While somewhat slower than dedicated hardware functions, programmable effects that are hardware-assisted are still substantially faster than effects processed solely in software.
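The distinction is easier to see in code. In the conceptual C++ sketch below, the fixed-function path can only call effects the hardware designer built in, while the programmable path accepts an arbitrary per-vertex routine supplied by the developer. The function names and the "effect" itself are invented for illustration; this is not the nfiniteFX instruction set.

// Conceptual sketch: fixed built-in effect versus a developer-supplied
// per-vertex program that the pipeline executes.
#include <cstdio>
#include <functional>

struct Vertex { float x, y, z; };

// Fixed-function path: the only transforms available are the ones the
// hardware designer built in.
Vertex builtinScale(Vertex v) { return {v.x * 2, v.y * 2, v.z * 2}; }

// Programmable path: the developer hands the pipeline an arbitrary
// per-vertex routine, which the hardware assists in running.
Vertex runVertexProgram(Vertex v, const std::function<Vertex(Vertex)>& prog) {
    return prog(v);
}

int main() {
    Vertex v{1.0f, 2.0f, 3.0f};

    Vertex a = builtinScale(v);   // limited to canned effects
    Vertex b = runVertexProgram(v, [](Vertex in) {
        // A custom effect the hardware vendor never anticipated,
        // e.g. a simple shear along x driven by depth.
        return Vertex{in.x + 0.5f * in.z, in.y, in.z};
    });

    std::printf("builtin: (%.1f, %.1f, %.1f)\n", a.x, a.y, a.z);
    std::printf("custom:  (%.1f, %.1f, %.1f)\n", b.x, b.y, b.z);
    return 0;
}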
The two key areas
of programmability are the vertex processor and the pixel processor. These systems can help to
dramatically increase performance for complex
vertex and pixel manipulation without limiting
programmers to a fixed set of functions. This
programmability may be key to attracting developers
from different platforms with different gaming
engines. Allowing each developer to custom-configure their environment lets them build upon existing projects while opening the door to more advanced and more complex environments. Nvidia is providing
development kits to help speed acceptance, but it
will still take some time for developers to take
advantage of this new flexibility. However, since
the GeForce 3 is at the heart of the upcoming Xbox
gaming console, it is very possible that PC gamers
may see a flood of innovative titles coming their
way sooner than they might otherwise expect. Some
games may be ported directly to the PC from the
Xbox, while others may be developed using the
engines created for the Xbox console. Either way,
it could be a boon for gamers and developers
alike.
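For the pixel side, the same principle applies per fragment. The C++ sketch below treats a simple diffuse lighting calculation as a developer-supplied "pixel program"; the lighting model is just an example workload, not a description of the GeForce 3's pixel pipeline.

// Conceptual sketch of a per-pixel program: each pixel's final color is
// computed by a routine the developer writes rather than a fixed blend mode.
#include <algorithm>
#include <cstdio>

struct Vec3 { float x, y, z; };

float dot(Vec3 a, Vec3 b) { return a.x * b.x + a.y * b.y + a.z * b.z; }

// A "pixel program": given this pixel's surface normal and base color,
// return the lit color. In a programmable pipeline this logic is the
// developer's to write.
float shadePixel(Vec3 normal, Vec3 lightDir, float baseColor) {
    float diffuse = std::max(0.0f, dot(normal, lightDir));
    return baseColor * (0.2f + 0.8f * diffuse);  // ambient + diffuse
}

int main() {
    Vec3 light{0.0f, 0.0f, 1.0f};                // light toward the viewer
    Vec3 facing{0.0f, 0.0f, 1.0f};               // pixel facing the light
    Vec3 angled{0.0f, 0.8f, 0.6f};               // pixel tilted away

    std::printf("facing pixel: %.2f\n", shadePixel(facing, light, 1.0f));
    std::printf("angled pixel: %.2f\n", shadePixel(angled, light, 1.0f));
    return 0;
}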
The bulk of the
features mentioned above help bring Nvidia neck and
neck with their main competitor -- the ATI Radeon
-- in terms of features and functionality. The
exciting nfiniteFX Engine and their improved anti-aliasing technology push the envelope even further, establishing them as a leader in more than just raw power. Nvidia has sought to
round out their development, combining finesse with
brute force. How successful they will be depends on
how quickly these new features are adopted and how
easy they are to implement. But with Nvidia's track
record, it will be hard to bet against
them.