Original Link: https://www.anandtech.com/show/551
Last week, we offered a sort of "re-introduction" to Rambus DRAM, a very controversial and often hated technology. Before diving into Part 2 of our investigation on this technology, let's quickly recap what the original article was intended to convey about Rambus and their Direct RDRAM technology:
1) More memory bandwidth is necessary in order for the systems of tomorrow to be able to perform at their full potential.
2) Direct RDRAM offers more bandwidth than any currently available competing memory technology; unfortunately, it is at a price point that is unrealistic for most users.
3) There is a definite need for a lower pin count regarding our memory devices. If a solution with a lower pin count isn't sought out, motherboards will begin to grow in terms of price.
4) Both AMD and Intel currently hold licenses to Rambus' interface technology, although as of now, only Intel is advocating moving to the standard.
5) While Intel has a vested interest in Rambus succeeding, they do not own the company. Part of Intel's "interest" in Rambus happens to be that they are given attractive stock options in the company.
6) The yields on RDRAM are not as low as rumored. If yields were truly in the 10 – 20% range, Toshiba would have had to manufacture at least 20 million PC800 RDRAM chips just to deliver the 4 million working chips Sony needed for the initial batch of PlayStation 2 systems. That's simply not realistic.
But perhaps the most important point and most commonly overlooked point of our original article on RDRAM was the following:
7) Direct RDRAM is not a solution for today's desktop, workstation or server computers; DDR SDRAM offers a much more realistic price point while addressing the issue of increasing memory bandwidth. If you didn't get this out of the original article, please take a look at the summary on Page 11.
In spite of the seven points covered in the initial article, it lacked something you've all become used to seeing (sometimes in excruciatingly large quantities ;)…) at AnandTech: benchmarks.
A sort of rule of thumb we've always kept around AnandTech is that real world performance should always be the final judge when it comes to recommending or denouncing a particular product. For those of you that took Part 1 of our Rambus DRAM investigation to be a recommendation for RDRAM or denunciation for DDR SDRAM, that was not the intent of the article. Rather, it was intended to establish the basis for the argument that there is a need for a higher bandwidth memory solution and that RDRAM is capable of filling that role in the future.
But now comes the time to evaluate whether there is a tangible use for RDRAM in the future and to reiterate that RDRAM is currently not a viable option for consumers, as it is easily outclassed by technology that has been in systems for much longer.
Heat – Not an Issue
In the first article, we attempted to illustrate that, contrary to popular belief, RDRAM wasn't getting hot enough to warrant active cooling measures in order to maintain stable operation.
The point of the heat spreader that rests on top of the RDRAM chips is to distribute, more evenly, the heat generated by the single active RDRAM device on the module. A single active RDRAM device (chip) generates more heat than a single active SDRAM device (chip); the difference between the two is that on an SDRAM module all devices can be active at once, whereas on an RDRAM module only one device can be active at a time. All of the other devices on an RDRAM module draw no more than 250mW of power (compared to 900mW for a device on an SDRAM module), so the only one that really requires attention is the active device, which draws upwards of 1.1W of power.
So you have one device on an RDRAM module generating more heat than a device on an SDRAM module, but the rest of the devices on the module generating less heat. The end result is that although the RDRAM module will run warmer than an SDRAM module, the heat that it generates is not something that you need to worry about.
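To put rough numbers on that, the per-device figures above can be turned into a module-level sketch. This is only an illustration: the 8-device module size is our assumption, and the SDRAM figure is a theoretical all-devices-active ceiling rather than a typical load.

```python
# Rough module-level power sketch from the per-device figures quoted above.
# The 8-device module is an illustrative assumption, not a measured spec.
DEVICES = 8

# RDRAM: only one device is ever active (~1.1 W); the rest idle at <= ~0.25 W each.
rdram_module_w = 1.1 + (DEVICES - 1) * 0.25      # ~2.85 W, concentrated in one chip

# SDRAM: every device can be active at once (~0.9 W each), so this is a
# theoretical worst case rather than what the module draws in practice.
sdram_ceiling_w = DEVICES * 0.9                  # ~7.2 W, spread across all chips

print(f"RDRAM module estimate:  {rdram_module_w:.2f} W (1.1 W hot spot on the active device)")
print(f"SDRAM module ceiling:   {sdram_ceiling_w:.2f} W (0.9 W per device, evenly spread)")
```

The totals aren't the interesting part; what matters is that the RDRAM module concentrates over a watt in a single chip, which is exactly what the heat spreader is there to even out.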
Still not convinced? We measured the temperature of a PC800 RDRAM module and compared it to that of a PC133 SDRAM module using an external thermistor mounted in the following fashion:
Since all of the chips on an SDRAM module draw the same power, each chip produces the same amount of heat. We simply mounted the thermistor on a chip on the side opposite the CPU fan, eliminating any external cooling factors from affecting the test.
On the RDRAM module, we placed the thermistor at the center, left and right of the heat spreader, as well as on the back of the module, which isn't covered by any heat spreader. For the back measurement, we positioned the thermistor directly behind a physical RDRAM device in order to obtain the highest possible temperature.
Temperature Comparison

| | °C | °F |
|---|---|---|
| PC133 SDRAM | 34 | 93.2 |
| PC800 RDRAM (Temp measured in the Center) | 37 | 98.6 |
| PC800 RDRAM (Temp measured on the Left) | 37 | 98.6 |
| PC800 RDRAM (Temp measured on the Right) | 37 | 98.6 |
| PC800 RDRAM (Temp measured on the Back) | 37 | 98.6 |
As you can tell by the heat comparison, the PC800 RDRAM came up as being only 3°C warmer than PC133 SDRAM; while that is a definite increase in temperature, it isn't great enough to warrant any sort of active cooling.
For comparison's sake, we've included the temperatures of two video cards that don't use a fan and are used by quite a few users:
Temperature Comparison

| | °C | °F |
|---|---|---|
| PC133 SDRAM | 34 | 93.2 |
| PC800 RDRAM (Temp measured in the Center) | 37 | 98.6 |
| Voodoo3 3500TV | 66 | 151 |
| Voodoo3 3000 | 56 | 132 |
So while the 37°C temperature of the RDRAM is greater than the 34°C we measured on the PC133 SDRAM, it isn't high enough to require a fan and is definitely less than the 66°C that a fan-less Voodoo3 3500TV weighed in at.
Latency
The whole point of the first Rambus article was to attack the topic of memory bandwidth, but we failed to mention a very important characteristic of memory that would determine how effective RDRAM would be in terms of real world performance: latency.
It has been known from the start that RDRAM has a higher latency (the time before data transfer can actually occur) than SDRAM, and certainly higher than DDR SDRAM. Recently, however, Rambus has argued that RDRAM actually has a lower latency than SDRAM.
Since today's applications, games and benchmarks aren't entirely memory bandwidth limited (disk transfer speeds, bus, processor and other limitations come into play before memory bandwidth becomes the primary limiting factor), latency becomes a much more influential factor in the performance of a particular type of memory.
So who is right? Does RDRAM have a higher latency than SDRAM, as most hardware enthusiasts believe, or does it have a lower one, meaning Rambus has been telling us the truth all along?
The answer is: both. Intel has long admitted that RDRAM has a higher latency than SDRAM, and that the Apollo Pro 133A with PC133 SDRAM actually features a lower latency than the i820 chipset with PC800 RDRAM. Sound hard to believe?
Take a look at the following slide from an Intel presentation:
This is where the latency argument begins to make sense. Remember the 3.7GB/s of memory bandwidth we calculated in the first article as the amount needed by the next generation of PCs? We're not at that point just yet; in fact, we're only beginning to reach the limits of PC100 SDRAM's 800MB/s in our everyday tasks.
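As a quick refresher on where those peak numbers come from, here is the arithmetic, assuming the standard bus widths (a 64-bit data bus for SDRAM, a 16-bit Direct RDRAM channel with transfers on both clock edges):

```python
# Peak bandwidth arithmetic behind the figures used throughout this article.
pc100_sdram = 100e6 * 8          # 64-bit bus, 1 transfer/clock      -> 800 MB/s
pc133_sdram = 133e6 * 8          #                                   -> ~1.06 GB/s
pc800_rdram = 400e6 * 2 * 2      # 16-bit channel, 2 transfers/clock -> 1.6 GB/s
dual_pc800  = 2 * pc800_rdram    # i840/Tehama style dual channel    -> 3.2 GB/s

for name, bw in (("PC100 SDRAM", pc100_sdram), ("PC133 SDRAM", pc133_sdram),
                 ("PC800 RDRAM", pc800_rdram), ("Dual-channel PC800", dual_pc800)):
    print(f"{name:<20} {bw / 1e9:.2f} GB/s peak")
```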
At the start of 1998, we had just begun to saturate the 66MHz memory bus offered by Intel's LX chipset. That is part of the reason that, when the transition to the 100MHz FSB/memory bus occurred, there was virtually no performance difference between the newly released Pentium II 350/100 and the older Pentium II 333/66 beyond the gain from the increase in clock speed. It was also evident that we hadn't really begun to saturate the 66MHz memory bus because the 66MHz FSB Celerons were performing quite comparably to their 100MHz FSB Pentium II counterparts.
Now look at the performance of the 66MHz FSB Celerons in comparison to the 100MHz FSB Pentium II/IIIs with today's benchmarks; the performance difference is much more noticeable:
What are we getting at?
Today's applications and games do not require 1.6 – 3.2GB/s of memory bandwidth. If they did, our PC133 SDRAM would be the limiting factor in every single benchmark we ran. Instead, we are still at a relatively low level of bandwidth utilization, illustrated by the lower part of the graph above. And as you can clearly see, as Intel themselves have illustrated, the i820 platform with PC800 RDRAM does have a higher latency than the VIA Apollo Pro 133A with PC133 SDRAM.
Intel's chart also places the BX chipset with PC100 SDRAM at an almost identical latency level to the i820 chipset with PC800 RDRAM, and the BX line even drops below the 820 line, meaning Intel itself admits that the BX chipset with older PC100 SDRAM is capable of outperforming the i820 chipset and its more expensive RDRAM.
It can also be assumed that a BX chipset overclocked to the 133MHz FSB would easily outperform the 820 platform. Since we know that the BX has a higher performing memory controller than the Apollo Pro 133A, we can safely predict that the BX at a 133MHz FSB/memory bus would outperform the Apollo Pro 133A at 133MHz, indicating an even lower latency. So while a BX/PC133 line is not present on the above chart (for obvious reasons: Intel doesn't exactly condone overclocking), it is safe to assume that it would start out below the Apollo Pro 133A/PC133 line, which is already very close to the bottom of the graph, indicating a very low latency at current bandwidth usage levels.
So how can Rambus say that RDRAM actually has a lower latency than SDRAM? Look further on in the graph.
As the bandwidth usage increases, the latency for PC100 and PC133 SDRAM on the various SDRAM platforms increases dramatically. In high bandwidth situations, the latency of RDRAM is actually lower than that of SDRAM because SDRAM is approaching its memory bandwidth limits while RDRAM still has quite a ways to go. But remember that we aren't constantly running an application or performing a task that requires such a large amount of memory bandwidth. Because of that, we don't see PC100/PC133 limiting our performance since at the lower bandwidth utilization points, the latency of SDRAM is actually lower than that of RDRAM.
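If you want an intuition for why those curves eventually cross, a toy queueing sketch reproduces the shape. The idle latencies below are invented purely for illustration (they are not measurements, and this is not Intel's data); the only point is that the lower-bandwidth memory saturates first, and its effective latency shoots up as it does.

```python
# Toy model: effective latency grows as a memory bus approaches saturation.
# latency ~ idle_latency / (1 - utilization) is the classic queueing shape;
# the idle latencies here are made up for illustration only.
def loaded_latency_ns(idle_ns, demand_gbs, peak_gbs):
    utilization = min(demand_gbs / peak_gbs, 0.99)   # cap to avoid dividing by zero
    return idle_ns / (1.0 - utilization)

platforms = {
    "PC133 SDRAM (lower idle latency)":  (40, 1.06),   # (idle ns, peak GB/s), hypothetical
    "PC800 RDRAM (higher idle latency)": (60, 1.60),
}

for demand in (0.4, 0.8, 1.0):                         # GB/s actually being consumed
    print(f"--- {demand:.1f} GB/s of bandwidth in use ---")
    for name, (idle_ns, peak_gbs) in platforms.items():
        print(f"  {name:<36} ~{loaded_latency_ns(idle_ns, demand, peak_gbs):5.0f} ns")
```

At low demand the SDRAM line sits below the RDRAM line; push the demand toward PC133's 1.06GB/s ceiling and the ordering flips, which is exactly the argument Rambus is making.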
The X Factor – DDR SDRAM
Imagine for a minute what would happen if the lines representing the BX/PC100 or Apollo Pro 133A/PC133 configurations were not affected by the 800MB/s or 1.06GB/s peak bandwidth limitations of their respective memory types. Instead, imagine those lines having a peak bandwidth equal to that of RDRAM, 1.6GB/s, or possibly even greater.
While those lines aren't present on that particular graph, what you would be imagining is the latency versus bandwidth utilization curve of DDR SDRAM. It would be a lower latency solution than RDRAM on an 820 chipset and would carry a much smaller cost premium, since DDR SDRAM is expected to cost only a few percent more than SDRAM, far below the 20%+ premium RDRAM will carry when DDR SDRAM becomes readily available later this year.
This is why we made it a point to mention that, in spite of the lower pin count of RDRAM, DDR SDRAM is currently the way to go: it offers all of the bandwidth benefits of RDRAM with the low latency of SDRAM that we're used to, without the price premium of RDRAM.
In the future, when the pin count of 64-bit DDR SDRAM becomes an issue and adding a second channel isn't exactly feasible, then RDRAM will perhaps be primed and ready to take over, but not until then. That was the intended aim of our first article on Rambus.
DDR SDRAM's Competition
We were just comparing DDR SDRAM to the i820 + PC800 RDRAM combination, but in reality it won't be pitted against that platform, and if it is, DDR SDRAM will most likely come out on top provided that the memory controller is designed properly. We can make that assumption because, if we take the BX/PC100 performance curve and increase its peak memory bandwidth, we can essentially estimate what PC1600 DDR SDRAM (100MHz x 2) would do with a memory controller like the one found in Intel's BX chipset. Considering that PC2100 DDR SDRAM (133MHz x 2) should also become available, we can predict a lower latency, and thus a lower curve, indicating better overall performance than the i820 + RDRAM setup.
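The same back-of-the-envelope arithmetic applies to the DDR flavors mentioned above (same 64-bit bus as SDRAM, but two transfers per clock):

```python
# Peak bandwidth for the DDR SDRAM speeds discussed above.
pc1600_ddr = 100e6 * 2 * 8     # 64-bit bus, 2 transfers/clock -> 1.6 GB/s (matches one PC800 channel)
pc2100_ddr = 133e6 * 2 * 8     #                               -> ~2.1 GB/s

print(f"PC1600 DDR SDRAM: {pc1600_ddr / 1e9:.2f} GB/s peak")
print(f"PC2100 DDR SDRAM: {pc2100_ddr / 1e9:.2f} GB/s peak")
```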
The real question is how it will stack up against dual channel PC800 RDRAM, since that is what Willamette's Tehama chipset will use, and the only current implementation we have to go by is the i840 chipset.
The i840 chipset is actually a bit more advanced than the i820 in that it does help to reduce the latency of RDRAM in low bandwidth usage situations, and not simply by adding a second RDRAM channel (that only increases the peak bandwidth); however, we'll save the i840 versus i820 comparison for another review.
The main thing to realize here is that PC1600 DDR SDRAM will offer a similar if not lower latency than PC800 RDRAM across the board, and PC2100 DDR SDRAM should offer an even higher level of performance thanks to an even lower latency.
We have known this from the start. The only problem with DDR SDRAM is that, because of its higher pin count relative to RDRAM, creating a two channel DDR SDRAM memory interface would be quite difficult and costly, which leaves the fate of SDRAM resting on the ability of Quad Data Rate (QDR) technology to take off, or on a potentially higher bandwidth DDR-II standard.
820/PC800 vs. BX/PC100 Again
Before we start diving into the mounds of benchmarks, let's take a quick look at the 820/PC800 versus BX/PC100 latency versus bandwidth graphs. If you notice, in today's range of bandwidth utilization, the two feature almost identical graphs. But why is it that in some benchmarks you see the BX on top and in others the 820 on top?
One explanation for this variation in performance brings us back to the fact that only one RDRAM device on a module can be "active" at any given time. As we mentioned in Part 1, the devices that aren't "active" (transferring data) can be put in one of three modes: Power Down, Nap, or Standby. For a desktop system, the devices should be put into Standby mode, since you aren't really concerned with conserving power. The only reason to put the devices into Nap mode is if you're running in a limited power situation, such as a laptop, because a device in Nap mode consumes as little as 10mW of power whereas a device in Standby consumes around 250mW.
The problem with Nap mode is that, when a device switches from Nap to Active, it incurs a 100ns latency penalty. That is fine in a laptop, where power conservation matters more than performance, but it isn't acceptable in a desktop system, where power conservation isn't the primary goal.
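The trade-off boils down to the handful of numbers above; a minimal sketch (the Standby entry simply reflects that it avoids the 100ns Nap exit penalty, not that re-activation is otherwise free):

```python
# Power vs. latency trade-off for idle RDRAM devices, per the figures above.
IDLE_STATES = {
    #  state        idle power per device   extra penalty when re-activated
    "Standby": {"power_mw": 250, "extra_wakeup_ns": 0},    # avoids the Nap exit penalty
    "Nap":     {"power_mw": 10,  "extra_wakeup_ns": 100},  # ~10 mW, but +100 ns to wake
}

for state, figures in IDLE_STATES.items():
    print(f"{state:<8} ~{figures['power_mw']:>3} mW per idle device, "
          f"+{figures['extra_wakeup_ns']} ns to become active")
```

On a desktop, the 240mW saved per idle device buys you nothing you care about, while the 100ns penalty is paid every time a napping device has to wake up; hence the recommendation to force Standby.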
Unfortunately, most motherboard manufacturers either don't expose a BIOS setting that controls this option or set the non-active RDRAM state to Nap by default. In order to properly benchmark RDRAM, it is important that all non-active RDRAM devices be set to assume a Standby state rather than defaulting to Nap mode. This setting can be controlled in the BIOS of AOpen's AX6C and AX6C-L as well as ASUS' P3C-E; on the AOpen boards it is called the RDRAM Napdown setting, while on the ASUS it is the Pool B RDRAM Device setting.
Does the removal of this 100ns latency penalty change things for RDRAM? Let's take a look at the benchmarks to find out.
Placing the inactive RDRAM devices in Standby mode results in a 1% performance improvement over keeping them in Nap mode, which isn't exactly what we'd call a significant gain.
Here there is absolutely no performance improvement at all.
An additional 1.7 fps at 640 x 480 and no gain at all at the higher quality setting lead us to conclude that the performance difference isn't significant.
While there was a small performance difference under SYSMark 2000, it isn't major at all, and still doesn't make up for RDRAM's high latency in situations of "low" bandwidth usage, which is where we're currently at.
The Three Flavors of RDRAM
As we've known since last November, there are three flavors of RDRAM: PC800, PC700 and PC600, all running at different clock speeds: 400MHz for PC800, 356MHz for PC700, and 266MHz for PC600. As you can probably guess, the performance of the latter two will be lower than that of PC800; the question is, by how much?
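Before getting to the benchmarks, those clock speeds translate directly into peak bandwidth (again assuming the 16-bit Direct RDRAM channel with transfers on both clock edges):

```python
# Peak bandwidth of the three RDRAM flavors from the clock speeds listed above.
flavors_mhz = {"PC800": 400, "PC700": 356, "PC600": 266}

pc800_bw = flavors_mhz["PC800"] * 1e6 * 2 * 2          # 2 bytes wide, 2 transfers/clock
for name, clock_mhz in flavors_mhz.items():
    bw = clock_mhz * 1e6 * 2 * 2
    print(f"{name}: {bw / 1e9:.2f} GB/s peak ({bw / pc800_bw:.0%} of PC800)")
```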
In order to find out, we used the ASUS P3C-E, one of the three motherboards we've seen that allow for manual adjustment of the RDRAM clock frequency (the other two being the Intel OR840 and the AOpen AX6C/L), and ran a few benchmarks.
The difference between PC800 and PC700 RDRAM doesn't seem to be too great, but once you hit PC600 RDRAM the performance drop is definitely noticeable and unacceptable.
In this case, PC800 offers a 5% performance improvement over PC600. While that may not seem like a lot, for the price you're paying for RDRAM, PC600 simply doesn't come close enough.
Under Quake III Arena we see a huge performance difference between the slowest PC600 RDRAM and the fastest PC800: 15.8 fps is quite a bit, even if it is only at 640 x 480.
At 1024 x 768 x 32, other limitations kick in before the difference between PC600 and PC800 RDRAM in terms of latency and bandwidth can take their toll. This explains the relatively small difference between the three memory speeds.
The Test
Windows 98 SE Test System

| Hardware | |
|---|---|
| CPU(s) | Intel Pentium III 800EB |
| Motherboard(s) | AOpen AX6BC Pro Gold II, ASUS P3C-E, Intel OR840, ASUS P3V4X, Tyan Thunder 2400 |
| Memory | 256MB PC133 Corsair SDRAM or 256MB PC800 Samsung RDRAM, depending on platform |
| Hard Drive | IBM Deskstar DPTA-372050 20.5GB 7200 RPM Ultra ATA 66 |
| CDROM | Phillips 48X |
| Video Card(s) | NVIDIA GeForce 2 GTS 32MB |
| Ethernet | Linksys LNE100TX 100Mbit PCI Ethernet Adapter |

| Software | |
|---|---|
| Operating System | Windows 98 SE |
| Video Drivers | |

| Benchmarking Applications | |
|---|---|
| Gaming | idSoftware Quake III Arena demo001.dm3 |
| Productivity | BAPCo SYSMark 2000, Ziff Davis Content Creation Winstone 2000 |
Windows 2000 Test System

| Hardware | |
|---|---|
| CPU(s) | Intel Pentium III 800EB |
| Motherboard(s) | AOpen AX6BC Pro Gold, ASUS P3C-E, Intel OR840, ASUS P3V4X, Tyan Thunder 2400 |
| Memory | 256MB PC133 Corsair SDRAM or 256MB PC800 Samsung RDRAM, depending on platform |
| Hard Drive | IBM Deskstar DPTA-372050 20.5GB 7200 RPM Ultra ATA 66 |
| CDROM | Phillips 48X |
| Video Card(s) | NVIDIA GeForce 2 GTS 32MB |
| Ethernet | Linksys LNE100TX 100Mbit PCI Ethernet Adapter |

| Software | |
|---|---|
| Operating System | Windows 2000 Professional |
| Video Drivers | |

| Benchmarking Applications | |
|---|---|
| Gaming | idSoftware Quake III Arena demo001.dm3 |
| Productivity | BAPCo SYSMark 2000, Ziff Davis Content Creation Winstone 2000 |
| Professional | Ziff Davis High End Winstone 99, Ziff Davis Dual Processor Inspection Tests Winstone 99, SPECviewperf 6.1.1 |
BX at 133MHz?
The BX chipset has been around for quite some time now, almost two years to be exact. After its debut in May of 1998, one of the most difficult challenges was pushing the chipset to its limits by attempting to run the 133MHz FSB setting reliably. At the time of the chipset's release, neither the memory nor the AGP cards were capable of handling that 133MHz setting: most memory had difficulty running above 124MHz, and the 2/3 AGP clock divider left the AGP bus running at 89MHz, or approximately 35% out of spec.
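For reference, here is where that 89MHz, roughly 35% out of spec figure comes from; the BX simply has no AGP divider lower than 2/3:

```python
# AGP clock when a BX board is run at a 133MHz FSB with its lowest (2/3) divider.
AGP_SPEC_MHZ = 66              # nominal AGP clock
fsb_mhz = 133
agp_mhz = fsb_mhz * 2 / 3      # -> ~89 MHz
over_spec = (agp_mhz - AGP_SPEC_MHZ) / AGP_SPEC_MHZ   # ~34-35% depending on rounding

print(f"AGP clock: {agp_mhz:.0f} MHz ({over_spec:.0%} above the {AGP_SPEC_MHZ} MHz spec)")
```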
Luckily, things have come a long way in the past two years. Motherboard manufacturers have tweaked their BX designs beyond belief, and graphics card manufacturers have made their boards more tolerant of extreme conditions. The result is that it is now possible to run quite a few BX based motherboards at the 133MHz FSB setting without any problems.
Two motherboards we've highly recommended have been able to hit the magic 133MHz mark: the Microstar BXMaster and the AOpen AX6BC Pro Gold, the latter of which we used in this comparison. Keep in mind that not all BX motherboards are capable of running at the 133MHz FSB setting, especially the older ones, and it takes more than a solid motherboard to run at that frequency: your AGP card and memory must be capable of running at the overclocked setting as well.
AOpen AX6BC Pro Gold
Microstar BXMaster
i840 with SDRAM?
Another interesting addition to our testing lineup is an i840 with SDRAM platform. The board we used is the Tyan Thunder 2400, which features the i840 chipset along with two MRH-S chips, which are essentially what the i840 uses to convert its RDRAM channels to SDRAM channels.
This combination theoretically allows for a peak bandwidth of 1.6GB/s on an SDRAM platform; unfortunately, you have to take into account the extreme performance penalty incurred by the RDRAM to SDRAM translation process within the MRH-S chips.
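That 1.6GB/s ceiling follows directly from the channel arithmetic; a quick sketch, assuming one PC100 SDRAM channel behind each of the two MRH-S chips (which is how the figure above works out):

```python
# Why the i840 + MRH-S SDRAM configuration tops out at 1.6 GB/s:
# each of the two memory channels ends up behind a PC100 SDRAM interface.
per_channel = 100e6 * 8            # PC100: 64-bit bus, 1 transfer/clock -> 800 MB/s
total = 2 * per_channel            # two channels -> 1.6 GB/s

print(f"Dual-channel PC100 via MRH-S: {total / 1e9:.2f} GB/s peak "
      f"(vs. 3.2 GB/s for dual-channel PC800 RDRAM)")
```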
Unfortunately the Thunder 2400 is far from the most stable solution and wouldn't even complete some of our test runs, thus it is absent from some of the benchmark comparisons.
SYSMark 2000 used to be a benchmark completely dominated by the i820 + RDRAM platform, but a few changes have taken place since the benchmark's introduction.
One of the most important changes is that CAS2 PC133 SDRAM has become available, which allows the Apollo Pro 133A to pull ahead of the i820 + RDRAM platform.
If you look back at that latency vs bandwidth graph we used earlier, SYSMark 2000 would be a perfect example of where the BX/PC100 and i820/RDRAM lines overlap and where the i840/PC800 and VIA 133A/PC133 lines overlap as well.
The BX chipset at 133MHz comes out on top, which we predicted earlier because of the low latency of SDRAM in low bandwidth usage situations and because of the excellent SDRAM memory controller found in the BX's North Bridge.
Here the BX and Apollo Pro 133A setups fell to the bottom of the performance chart while the RDRAM equipped 840 and 820 platforms and the dual channel SDRAM i840 platform managed to come out on top.
Once again, the low latency SDRAM and the BX's high performing memory controller combined with the 1.06GB/s of memory bandwidth provided by the 133MHz memory bus keeps the advantage in the 440BX's court.
The Detonator 5.16 drivers improve performance on the VIA 133A chipset giving it the advantage over the workstation i840 platform with its dual channel RDRAM. This is a perfect example where the peak bandwidth (3.2GB/s) of the i840's RDRAM channels is meaningless because the bandwidth utilization in this particular case isn't high enough to take advantage of it.
In this case and in most of the benchmarks we ran, latency matters more than peak bandwidth.
The limitation here lies outside of the system memory which explains the relatively small performance difference that exists between the slowest i840/PC100 combo and the fastest BX133/PC133 setup.
Windows 2000 Performance
Under Windows 2000 the only problem we encountered was that our ASUS P3V4X would not reliably complete some tests with the Detonator 5.16 drivers loaded. We are currently investigating the problem which hasn't been solved by the latest release of VIA chipset drivers for Windows 2000. Because of this, the Apollo Pro 133A chipset is absent from this comparison under Windows 2000.
The BX at 133MHz comes out on top yet again; low latency combined with just enough memory bandwidth for today's applications and benchmarks makes for quite a deadly combination, at least as far as the i820/840 + RDRAM platforms are concerned.
The BX133 comes out on top yet again; the only interesting thing here is that the i840 managed to come out slightly below the i820. The most likely reason is that the Intel OR840 board isn't as optimized for performance as our ASUS P3C-E, which is reasonable since Intel motherboards have always been geared more towards stability and have generally been incapable of outperforming their Taiwanese counterparts.
This helps to illustrate the fact that we are currently at a point where we can't use 3.2GB/s of memory bandwidth, at least on a desktop system, thus giving the i820 platform the opportunity to step ahead of the i840 setup because of differences in the way the motherboard's BIOS is tweaked.
Even High End Winstone 2000, which you would think would go to the 840 platform, was actually faster on the BX at 133MHz. At this level of bandwidth usage, there is no need for any higher bandwidth memory solutions. Unfortunately, things won't always remain this way, as the next generation of applications and games promises to be much more demanding than what we currently benchmark with.
The Dual Processor Inspection tests under Winstone 99 are simply a set of multithreaded tests that work on both single and dual processor systems and are designed to illustrate any performance advantage that is present when moving to an SMP setup. The tests also serve our purpose as they stress the memory subsystem fairly nicely.
Once again, our Intel OR840 board which we used for the 840/PC800 RDRAM tests managed to fall behind the 820/PC800 RDRAM setup indicating that the 840's memory bandwidth advantage wasn't of any use in the tests. While we won't be able to say the same thing 8 - 10 months from now, for today's users, this helps to illustrate the point that having a lot of memory bandwidth isn't absolutely necessary and not always the key to greater performance.
The Quake III Arena comparison under Windows 2000 is much like the one under Windows 98 SE, although we can make one semi-off-topic observation: the gaming performance of all of the platforms actually improved under Win2K compared to Windows 98. But we'll save that comparison for another article...
We mentioned that in the future we will need more memory bandwidth than PC100/PC133 can give us, and SPECviewperf is the perfect example of that. The AWadvs-03 viewset is generally limited by the fill rate of video cards, but in this case you can see a clear performance difference between the SDRAM platforms and the RDRAM platforms.
While performance under the AWadvs-03 viewset isn't representative of what most AnandTech readers use their computers for, this benchmark does help to point out that the added bandwidth is there and can be taken advantage of. It would be interesting to see how a DDR SDRAM platform performs under this test, our guess would be that it would give the i820/PC800 a run for its money.
While the 840/PC800 platform takes the lead yet again, this time it is challenged by the BX133 platform rather than the more expensive 820/PC800 setup. The DRV-06 viewset most likely represents the middle of the latency vs. bandwidth graph we provided earlier, where the bandwidth limitations of PC133 SDRAM begin to kick in, but the latency of the PC800 RDRAM on the 820 platform holds it back from overpowering the competition.
Things return to "normal" once again under the DX-05 viewset, with the 840 taking the lead yet again. This time the 840/PC100 combo actually outpaces the old BX/PC100 setup in spite of the extremely slow MRH-S chips needed to adapt SDRAM to the 840's RDRAM memory controller.
The BX133 and 820/PC800 duke it out yet again for second place, but this time the BX133 setup manages to take it. A DDR SDRAM platform would almost certainly take the lead away from the 840/PC800 platform, simply because of the obvious effect of low latency SDRAM on the Light-03 viewset test.
Nothing too different here, another illustration of a situation in which having a large amount of available memory bandwidth is important. This situation would be represented by the latter part of the latency vs bandwidth graph.
Conclusion
Consider Part 1 of this article an indication of the need for more memory bandwidth in the future, and consider Part 2 an assessment of exactly where we currently stand and what our memory bandwidth needs actually are.
There is no question about it: DDR SDRAM will be the memory technology of choice upon its introduction in both Pentium III and Athlon systems later this year. The real question is, how long will DDR SDRAM stick around after that?
Let the analysts play with their crystal balls; for you, the end user, what really matters is what will give you the best bang for your buck. Right now, that's obviously a BX platform running at 133MHz, but in the future you may find yourself lusting after a higher performing DDR solution.
Let's just hope the chipset manufacturers can implement a decent memory controller into their designs; it is kind of sad when a two year old creation (BX) manages to topple a handful of next-generation chipsets at a frequency it was never intended to run at.