Original Link: https://www.anandtech.com/show/2336



For the past several years, reducing power consumption in datacenters has been one of the most talked-about issues, and some progress has been made. PSUs have become a lot more efficient at AC/DC conversion, with conversion efficiency going up from around 70% to as high as 90%. CPUs dissipate less heat even at full load and have become quite efficient at idle thanks to technologies like Intel's EIST and Demand Based Switching (DBS). Memory DIMMs (some of them at least) are able to save a bit of power too. But there is still a lot more power that can be saved.
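
To put those PSU numbers in perspective, here is a quick back-of-envelope sketch. Only the 70% and 90% efficiency points come from the discussion above; the 400W DC load is an assumed figure for illustration.

```python
# Back-of-envelope: AC power drawn at the wall to feed a fixed DC load
# at different PSU efficiencies. The 400W DC load is an assumed figure.

def wall_power(dc_load_w, efficiency):
    """AC power needed at the wall for a given DC load and PSU efficiency."""
    return dc_load_w / efficiency

dc_load = 400.0  # assumed DC load of one server, in watts

for eff in (0.70, 0.80, 0.90):
    ac = wall_power(dc_load, eff)
    print(f"{eff:.0%} efficient PSU: {ac:.0f}W at the wall, {ac - dc_load:.0f}W lost as heat")

# 70% efficient PSU: 571W at the wall, 171W lost as heat
# 80% efficient PSU: 500W at the wall, 100W lost as heat
# 90% efficient PSU: 444W at the wall, 44W lost as heat
```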


Intel's first idea is to turn off a lot of motherboard components when they are not in use. As you can see, in the current platform many components are always on, whether they are actually being used or not. In the picture below you can see which components Intel is trying to turn off, or at least make sure that they run in a lower power state.


To do this, Intel wants to shift away from OS power management toward hardware power management, somewhat similar to how AMD's Barcelona and Intel's newest Core CPUs regulate their own power states. The reason OS management is not desirable is that it is not accurate enough: each transition from a low power state to a higher power state (or vice versa) costs a bit of power, so you want to make sure you are not switching between them too quickly.
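
To illustrate why transition timing matters, here is a minimal sketch of the break-even reasoning. The transition energy and idle savings below are assumed, illustrative numbers, not Intel figures.

```python
# Minimal sketch: entering a low-power state only pays off if the energy
# saved during the idle period exceeds the energy spent on the round-trip
# transition. Both constants below are assumed, illustrative values.

TRANSITION_ENERGY_J = 0.02  # assumed energy cost of entering + leaving the low-power state
IDLE_SAVINGS_W = 2.0        # assumed power saved while in the low-power state

def worth_entering(expected_idle_s):
    """True if the expected idle period is long enough to recoup the transition cost."""
    return expected_idle_s * IDLE_SAVINGS_W > TRANSITION_ENERGY_J

break_even_s = TRANSITION_ENERGY_J / IDLE_SAVINGS_W
print(f"Break-even idle period: {break_even_s * 1000:.0f} ms")  # 10 ms

print(worth_entering(0.050))  # 50 ms of expected idle time: True
print(worth_entering(0.001))  #  1 ms of expected idle time: False
```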

Intel is also taking a hard look at reducing the VRMs and capacitors that are needed for the DC/DC conversions between the power supply and the motherboard. These improvements are desirable for both servers and notebooks, and Intel expects up to 30% better performance/watt from these modifications. However, it is clear that even these improvements alone are not going to save the datacenters which are short on power right now. The complete chain, from the power entering the datacenter to the CPUs/systems using that power, must become more efficient.


Some current datacenters are already powered by 48V DC, which saves quite a bit of power by cutting out heat-producing AC to DC conversions. Rackable Systems has been promoting DC-powered servers for quite some time now. However, 48V DC power also requires much thicker cables (7 to 17 times bigger) that are harder to place.
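
A quick sketch of why those cables get so thick: for the same power, a lower distribution voltage means proportionally more current, and the conductor cross-section has to grow with the current. The 10kW rack load is an assumption for illustration.

```python
# Current needed to deliver the same rack power at different DC voltages.
# The 10kW rack load is an assumed figure for illustration.

rack_power_w = 10_000  # assumed power draw of one rack

for volts in (48, 400):
    amps = rack_power_w / volts
    print(f"{volts}V distribution: {amps:.0f}A for a 10kW rack")

# 48V distribution: 208A for a 10kW rack  -> very thick, hard-to-place cables
# 400V distribution: 25A for a 10kW rack  -> much thinner cabling
```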

That is why Intel feels the industry should take a good, long look at a 400V DC power infrastructure. At this point in time a 400V DC infrastructure would be incredibly expensive, and there are no industry standards yet. But as you can see in the picture above, about 75% of the power that enters the datacenter would actually be used to perform useful work on your servers, compared to only about 50% now.
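
Translating those percentages into a rough example (the 1MW useful load is an assumed figure):

```python
# How much power has to enter the datacenter to deliver 1MW of useful
# power to the servers, at the two end-to-end efficiencies mentioned above.

useful_mw = 1.0  # assumed useful load at the servers

for label, efficiency in (("current infrastructure (~50%)", 0.50),
                          ("400V DC infrastructure (~75%)", 0.75)):
    at_the_meter = useful_mw / efficiency
    print(f"{label}: {at_the_meter:.2f}MW at the meter for {useful_mw:.1f}MW of useful work")

# current infrastructure (~50%): 2.00MW at the meter for 1.0MW of useful work
# 400V DC infrastructure (~75%): 1.33MW at the meter for 1.0MW of useful work
```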

There's a lot more to this subject, but for now we'll just leave it at: To be continued...



FB-DIMMs and Servers

IBM's newest x3950 M2 server is an innovative design: it includes a 4GB flash drive (the same as in some of their blade servers) which runs the ESX 3i virtualization layer. The new X4 chipset tries to combine the best of FB-DIMMs (high capacity, easy routing of the serial bus) with the advantages of normal DDR2 DIMMs (low power) by using one buffer - which is serially connected to the main chipset - for up to eight DDR2 DIMMs.


Intel is going to introduce DDR2 support in its DP server line later this year. We talked to Diane M. Bryant, Vice President of the Digital Enterprise Group, and she expressed her firm belief that FB-DIMMs still have a bright future ahead. The power consumption of the AMB has already gone down from 4-5W to 3W.

Qimonda, the former memory division of Infineon, has another solution to drive the power requirements of FB-DIMMs down further.


They propose using quad-rank (QR) FB-DIMMs. The price of a 4GB QR FB-DIMM should be lower than that of two dual-rank 2GB DIMMs. More importantly, for a given capacity you only need half the number of AMBs, which reduces power consumption significantly.
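
A quick sanity check on that claim, using the roughly 3W AMB figure mentioned earlier; the 32GB target capacity is an assumed example.

```python
# Every FB-DIMM carries its own AMB, so halving the DIMM count for a
# given capacity also halves the buffer power.

AMB_POWER_W = 3.0        # per-AMB power quoted above
TARGET_CAPACITY_GB = 32  # assumed capacity for the comparison

dr_dimms = TARGET_CAPACITY_GB // 2  # 2GB dual-rank FB-DIMMs -> 16 modules
qr_dimms = TARGET_CAPACITY_GB // 4  # 4GB quad-rank FB-DIMMs ->  8 modules

print(f"Dual-rank: {dr_dimms} AMBs -> {dr_dimms * AMB_POWER_W:.0f}W of buffer power")
print(f"Quad-rank: {qr_dimms} AMBs -> {qr_dimms * AMB_POWER_W:.0f}W of buffer power")

# Dual-rank: 16 AMBs -> 48W of buffer power
# Quad-rank: 8 AMBs -> 24W of buffer power
```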



Dense Xeon MP Servers

Intel put the spotlight on Jim Fowler, executive vice president of Sun, as he showed off a mystery 2U server which contained four Xeon "Tigerton" 7300 CPUs. As always, Sun has produced a very slick and well-finished design, and we took a closer look.


This new Sun server places a full "rack" of no fewer than 32 DIMMs on top of the Xeon MPs.


Here you can see what the server looks like once you have installed the DIMMs.


For those of you who think that four Xeon MPs in a 2U server is not dense enough, Supermicro has developed the X7QC3 board. This board, which supports Xeon MP 73xx CPUs and 24 DIMMs, fits in a 1U server.


That is quite amazing, as even the 130W 2.93GHz Xeon MPs can find a home in this server (update: the 1U version supports up to 80W per CPU; 2U and larger versions support 130W CPUs). Remember that the Xeon MP 71xx CPUs are commonly found in 4U and 6U servers.

Other Random Tidbits

As many have previously speculated, Nehalem will have a smaller cache than Penryn, as its integrated memory controller reduces the need for a huge cache. A smaller cache means a smaller die, and this can result in higher clock speeds.

Solid state drives can deliver 10 to 50 times more I/O operations per second than traditional hard drives. It is only a matter of time before we see much higher TPC-C scores with an excellent performance per dollar ratio. Gone will be the days when you needed 70 to 100 disks per core to run TPC-C. While that is good news for the TPC-C benchmark people, it might also be excellent news for all those who are running heavily used databases... provided those databases don't require too much actual capacity.
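
Some rough spindle math behind that claim; the per-disk IOPS, core count, and speedup factor below are assumed ballpark numbers, not benchmark results.

```python
# How many SSDs could replace a TPC-C disk farm, IOPS-wise.
# All constants are assumed ballpark figures for illustration.

DISK_IOPS = 200      # assumed random IOPS of a 15K RPM hard drive
SSD_SPEEDUP = 25     # midpoint of the 10-50x range quoted above
CORES = 8            # assumed core count of a small TPC-C setup
DISKS_PER_CORE = 80  # middle of the 70-100 disks per core quoted above

disk_count = CORES * DISKS_PER_CORE
needed_iops = disk_count * DISK_IOPS
ssds_needed = needed_iops / (DISK_IOPS * SSD_SPEEDUP)

print(f"Disks needed: {disk_count}")                      # 640
print(f"SSDs matching the same IOPS: {ssds_needed:.0f}")  # 26 (capacity permitting)
```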
