ATWindsor - Sunday, February 3, 2008 - link
It's about time. I'm tired of VGA not being integrated into more high-end boards, despite them having so many other features. Not every high- or mid-end rig is a gamer rig, so built-in VGA is a welcome change, even if you're not going to use Hybrid SLI.

nubie - Monday, January 14, 2008 - link
Hi, silly people. Most (I mean all) of the cheapest motherboards include integrated graphics already. To insinuate that this "will make them more expensive" is STUUUPID.

Have you priced the cheapest motherboards? They are all integrated. I think this will lower nVidia's cost by standardizing the entire line-up, including the drivers and BIOS, thus allowing for reduced operating, design, production, and engineering costs.
I agree that the display switching and connector issues need work, and I am happy nVidia is working on them. Since the display is to be routed through the motherboard in this iteration, I foresee any problems being worked out, if possible.
Let's not forget that the move is toward DDR3. When/if that happens, we may see socketed GPUs, hopefully with their own RAM. Just think: four RAM slots for DDR3, with more expensive low-latency modules for the video and cheaper high-latency modules for the system.
I am righteously upset at the current practice of the board hosting at least half of the system's heat production in the current form factor; a move to dual-socket systems with heat-pipe cooling would clearly be better. Integrated CPU/GPUs would be interesting as well, and wouldn't preclude a secondary GPU socket if this technology is perfected.
Knowname - Friday, January 11, 2008 - link
I agree with the OP; it seems to me the disadvantages of this, in desktop use, FAR outweigh the advantages. Seems like this'll be yet another driver option we'll have to tick saying 'no! we don't want to use this feature!' Heck, just the increased reliance (is that a word??) on system RAM for all things 2D, ON AN ENTHUSIAST SYSTEM??, is bad enough. Frankly I hope they do this, give me another excuse to go AMD.

In fact, even in notebooks it just seems like too much too soon, or not even needed at all. I mean, what notebook costs less than 3 grand and DOESN'T use an integrated GPU anyway??? WHAT'S THE POINT?!! Even so, the savings isn't that big. We'll have to buy those big, inefficient PSUs anyway, planning for the worst possible scenario; you can't always have the cake ;p. It just all sounds pretty dumb. I mean, even AMD's implementation sounds pretty dumb... but at least they don't make a big deal about it... or is that the fault of AnandTech... woopsie ^.^
Sharpie - Thursday, January 10, 2008 - link
But I don't plan on switching to Vista unless forced.

roadrun777 - Thursday, January 10, 2008 - link
Hopefully, Nvidia will release some open source code that allows other OSes to take advantage of their features. Otherwise they are just being ignorant of the fact that 2008 will be the year the penguin goes through puberty and becomes a real desktop alternative.

It's just too bad no one has created a Linux gaming developer platform kit to allow for easy porting of games. *Hint*

I think that would be a major reason why desktop Linux would succeed.
ark2008 - Tuesday, January 8, 2008 - link
What will happen when future CPUs come with onboard graphics?

roadrun777 - Thursday, January 10, 2008 - link
[quote]What will happen when future CPUs come with onboard graphics?[/quote]

Exactly. Seems to me they are just laying the groundwork now, so there is less work later. They may actually switch to drop-in socketed GPU replacements (like CPUs are now) first, before they mold them into the CPU.
Doctorweir - Monday, January 7, 2008 - link
I am also leaning towards the "totally stupid idea" comments, because:

1) Why heat up the motherboard more and add features that cost money and that no one needs?

2) And why make concurrent use of the memory bus...? Especially during gaming the memory bus should be exclusive to game data... shoving gigs of framebuffer data over it concurrently shouldn't help the fps...

3) They should rather invest in powering down the primary GPU. Why can a CPU consume so little power at idle while a GPU cannot? Just invest more in throttling mechanisms and idle states with reduced clocks; low power consumption should also be achievable for a high-end card (e.g. reduce frequencies and voltage, shut down shader units, shut down 3/4 of the RAM, etc. etc.) — see the sketch after this comment.

4) And if they are not able to do this, why not implement a "mini-GPU" on the graphics board? Then all the switching can be done on the board and you are platform independent... ummm... whoops... my bad. Then I wouldn't need to buy an nVidia mobo... no, that's not an option then... :-P
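A minimal sketch of what such an idle/throttling scheme could look like, purely as an illustration of point 3 above: the state names, thresholds, clock and voltage values, and the pick_power_state function are all made up for this example, not anything NVIDIA has described.

```python
from dataclasses import dataclass

@dataclass
class GpuPowerState:
    name: str
    core_mhz: int
    voltage_v: float
    shaders_on: int        # active shader units, out of 128
    vram_banks_on: int     # powered VRAM banks, out of 4

# Hypothetical states roughly mirroring the suggestion above: full speed,
# a reduced-clock 2D state, and a deep idle state with most shader units
# and 3/4 of the VRAM powered off.
P0 = GpuPowerState("3D full",   600, 1.20, 128, 4)
P1 = GpuPowerState("2D clocks", 300, 1.00,  32, 2)
P2 = GpuPowerState("deep idle", 100, 0.85,   8, 1)

def pick_power_state(gpu_load_pct: float) -> GpuPowerState:
    """Toy governor: pick a power state from recent GPU utilisation."""
    if gpu_load_pct > 50:
        return P0
    if gpu_load_pct > 5:
        return P1
    return P2

for load in (90, 20, 1):
    s = pick_power_state(load)
    print(f"{load:>3}% load -> {s.name}: {s.core_mhz} MHz @ {s.voltage_v} V, "
          f"{s.shaders_on}/128 shaders, {s.vram_banks_on}/4 VRAM banks")
```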
roadrun777 - Thursday, January 10, 2008 - link
[quote]1) Why heat up the motherboard more and add features that cost money and that no one needs?[/quote]

They are prepping the production line for complete integration; this has more to do with what's going on down the line than right now.

[quote]2) And why make concurrent use of the memory bus...? Especially during gaming the memory bus should be exclusive to game data...[/quote]

Eventually the entire system will be virtualized, eliminating any delays, and the graphics scaling will be transparent; this eliminates the need for complex software rewriting. If they are going to do this, they need the memory bus shared between all components. The more memory controllers you have, the more latency between each access.

[quote]3) They should rather invest in powering down the primary GPU. Why can a CPU consume so little power at idle while a GPU cannot?[/quote]

I suspect it has more to do with the capabilities of their fabrication plants and the eventual redesign of the entire system. They are not going to invest any engineers into making "design type C" video cards power efficient, because they know they will have to integrate everything eventually, so why not work on that now and release older "design type C" video cards just to fill the buffer until the other tech is ready? Just a guess.

[quote]4) And if they are not able to do this, why not implement a "mini-GPU" on the graphics board? Then all the switching can be done on the board and you are platform independent...[/quote]

Yes, that probably has something to do with it, but I suspect it has more to do with the fact that the entire PCB may be eliminated in the future and your GPU will just be a drop-in replacement (complete with socket), similar to what CPUs are now.

I for one am tired of seeing a slower memory system for the CPU and faster ones for the GPU. Why not let them both share the same bandwidth, memory, and controllers? It takes care of a lot of performance issues.
kevinkreiser - Monday, January 7, 2008 - link
They should put an SLI-like connector on the motherboard itself. Then we could connect the add-in card directly to the motherboard's IGP, so the onboard and add-in cards can talk directly to one another. It'd probably require extra memory on the onboard card, but who cares, as long as it doesn't tie up other system resources.

johnsonx - Monday, January 7, 2008 - link
They've done this backwards. They need to set things up so that the monitor connects to the discrete graphics card, and add an internal cable connection so that when the discrete GPU is powered down, the on-board GPU can get its signal to the monitor. This way there's no requirement to copy the frame buffer from the discrete GPU back to main RAM for output. They could then eliminate the main-board VGA and DVI connectors as well; just use a stub card with monitor ports if there isn't a discrete GPU card installed.

Yes, they would have to change more to make it work this way. It would also work a hell of a lot better. Yes, every card and motherboard would have to include the extra 'Hybrid SLI' connection and appropriate signal-switching logic... so what? NVidia can require that just like every other feature and connector they've required in the past.
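To make the routing difference concrete, here is a minimal sketch. The names (Mode, DisplayPath, nvidia_scheme, commenter_scheme) are hypothetical, and this is only a toy model of the two ideas: the article's scheme (scanout always from the onboard GPU, so the discrete frame buffer must be copied into system RAM) versus the proposal above (scanout always from the discrete card's connector, with the idle-mode IGP signal routed over an internal link, so no copy is needed).

```python
from enum import Enum, auto

class Mode(Enum):
    IDLE_2D = auto()     # desktop work, discrete GPU powered down
    GAMING_3D = auto()   # discrete GPU rendering

class DisplayPath(Enum):
    ONBOARD_CONNECTOR = auto()   # monitor plugged into the motherboard
    DISCRETE_CONNECTOR = auto()  # monitor plugged into the add-in card

def nvidia_scheme(mode: Mode) -> tuple[DisplayPath, bool]:
    """Hybrid SLI as described in the article: scanout always happens
    from the onboard GPU, so 3D frames must be copied to system RAM."""
    copy_framebuffer_to_system_ram = (mode is Mode.GAMING_3D)
    return DisplayPath.ONBOARD_CONNECTOR, copy_framebuffer_to_system_ram

def commenter_scheme(mode: Mode) -> tuple[DisplayPath, bool]:
    """The alternative proposed above: scanout always happens from the
    discrete card's connector; the IGP signal rides an internal link
    when the discrete GPU is powered down, so no copy is ever needed."""
    return DisplayPath.DISCRETE_CONNECTOR, False

for mode in Mode:
    print(mode.name, nvidia_scheme(mode), commenter_scheme(mode))
```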
JonnyDough - Monday, January 7, 2008 - link
The whole graphics card and PCI-E bus concept is stupid. Why not just have replaceable dedicated processors ON the motherboard that share system memory and turn parts of the processor on/off as needed? Or at LEAST have cards that can support adding processor cores/memory. That way you can still build a computer in a square box.

What I want is to be able to increase graphics processing power by adding memory or a processor to my system. Period. When will the day come that I can upgrade my computer by just adding another processor, instead of having to swap one out? We have the ability to add a graphics card now for increased performance. The problem is that they don't turn off when not in use. SLI is fine, just make it so the system doesn't use my $500 video card when it isn't needed. Crazy and radical ideas like making me purchase an additional onboard GPU are silly.

For NVidia to say that it won't increase the cost of my motherboard is just plain outright retarded. Maybe with new process technologies that save NVidia money I won't see an INCREASE, but I certainly won't see a DECREASE in the cost of the board with smaller chipset processes either. NVIDIA, quit being a jerk, trying to keep chipset prices high and making us buy extra components from you. Send the savings on down to us, please.
roadrun777 - Thursday, January 10, 2008 - link
I think you're missing the point. They are preparing for all-in-one cores, so of course they are going to be making integrated GPUs to "train" their fabrication engineers. Eventually it will move into the CPU, or at least on top of the CPU with some kind of socket, along with the memory controller. The CPU should share the same memory and controller the GPU does, and they should be very close to one another to reduce voltage, latency, and timing problems.

EateryOfPiza - Monday, January 7, 2008 - link
A lot of these concerns are based on the understanding that the integrated GPU seems to be the "primary GPU". That's the understanding I got from the article.

1. Old games that are not actively developed and don't support SLI: which GPU will they use? If these old games use only the integrated GPU, can we still expect good frame rates?
2. What about Windows XP or Linux? Will users on these OSes be limited to using the integrated GPU? Or will there be a BIOS setting that can enable/disable Hybrid SLI?
3. This is going to take up a lot of space on the I/O panel, space that is sorely needed for USB, audio, eSATA, Ethernet, etc. Every bit of space on my current I/O panel is used; I don't want to miss out on extra USB or Ethernet ports because of some monitor connections.
4. This will make the NB run way hotter with all the extra traffic running through it.
5. Seems like this plan will be limited by RAM speeds. High-performance RAM, higher bus speeds, and lower CLs will finally make a significant difference. I guess those memory manufacturers are happy. (I don't think this is good for laptops though; a lot of OEMs deliberately buy low-quality, slow RAM to cut costs.)
6. What about using multiple GPUs to run more than two monitors? Will that ability be taken away? (i.e. using two graphics cards to run four monitors.)
emilyek - Monday, January 7, 2008 - link
In summary:

1. Onboard GPU allows the discrete GPU to shut off and save power, especially helpful for notebooks.
2. Allows the onboard chip to go 'SLI' with a low-end discrete card.
Not a lot for the enthusiast here, as far as I can tell. Sounds more like a marketing scheme to get the uninformed who buy onboard graphics to have a moment where they say: 'It will be SLI if I buy this low-end card!11one!'
A crap idea. More problematic junk on the motherboard we don't need. Make single-card solutions that aren't power hogs? That would be nice.
Spuke - Monday, January 7, 2008 - link
So none of these features work in XP?

tcool93 - Monday, January 7, 2008 - link
Notice how Nvidia copies most things that ATI does. Apparently they can't think up anything themselves.

Cygni - Monday, January 7, 2008 - link
Could that extra GPU, when bypassed during gaming by the higher-end video card, perhaps be used for Nvidia's hardware physics acceleration? If so, Nvidia may have an ace up their sleeve in a future software revision...

shabby - Monday, January 7, 2008 - link
Next-gen GPUs in Q1 :)

chizow - Monday, January 7, 2008 - link
There better be a way to disable this...

Honestly, this sounds like a horrible plan; the last thing NV needs is an integrated GPU mucking up their already buggy and underwhelming MCP. Not only will the MCP now run hotter with all that extra traffic going through the north bridge, performance will be worse as well, since the frame buffer is forced to go through system RAM. And that's before considering the potential system bandwidth problems noted in the article.
I may be wrong here, but didn't we learn our lesson with some of the earliest nForce integrated GPU designs, like the GF2 MX series? Some head-to-head comparisons basically showed that system RAM was simply not fast enough to keep up with GDDR, and it ultimately became the bottleneck when comparing comparable integrated vs. discrete graphics.
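For a rough sense of the gap being described, a back-of-the-envelope comparison might look like this. The figures are era-typical assumptions of mine (dual-channel DDR2-800 system memory versus 8800 GTX-class GDDR3), not numbers from the article.

```python
# Peak bandwidth in GB/s = bus width in bytes * transfer rate.
def bandwidth_gbs(bus_bits: int, transfers_per_sec: float) -> float:
    return (bus_bits / 8) * transfers_per_sec / 1e9

# Dual-channel DDR2-800 system RAM: 128-bit effective bus, 800 MT/s (assumed).
system_ram = bandwidth_gbs(128, 800e6)        # ~12.8 GB/s, shared with the CPU

# GeForce 8800 GTX GDDR3: 384-bit bus, 1800 MT/s effective (assumed).
discrete_vram = bandwidth_gbs(384, 1800e6)    # ~86.4 GB/s, dedicated to the GPU

print(f"system RAM : {system_ram:5.1f} GB/s (shared)")
print(f"GDDR3 VRAM : {discrete_vram:5.1f} GB/s (dedicated)")
print(f"ratio      : {discrete_vram / system_ram:.1f}x")
```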
NV should really focus on making a chipset that can compete with Intel's P35/X38/X48 etc., or maybe even support Penryn to start. Instead we get re-hashed boards with Tri-SLI and mGPU. That's all fine and good, but running Quad-SLI-BoostForce with only Conroe/Kentsfield support isn't going to cut it when Nehalem is out and cruising along at 466-500MHz FSB speeds.
Rasterman - Monday, January 7, 2008 - link
I totally agree; the NB/SB on my 680i board are hot as hell, there is no way you can add any more heat to them. This seems like the totally opposite way to go about fixing the power problem: they are lowering power by duplicating components, adding yet more specs and connectors to a board, and adding another area for bugs to appear. It's insane.

Why don't they simply put another chip on their new graphics cards? Then it is totally transparent to the system and works on any motherboard. If you have more than one card, disable them and run off the dedicated new chip; this would seem like a much better solution.
roadrun777 - Thursday, January 10, 2008 - link
[Quote]Why don't they simply put another chip on their new graphics cards, then it is totally transparent to the system and works on any motherboard. If you have more than 1 card disable them, and run off the dedicated new chip, this would seem like a much better solution.[/Quote]

There is a reason chip makers keep trying to integrate the chips into one chip. Cost, for one. Secondly, speed! It is so much easier to have tighter timings when the interconnects are so close to each other.
I think they are moving this way because everyone in the industry knows that the merging of GPU/CPU onto the same core is inevitable. This gives the engineers a few generations to get it right before moving the entire chipset core onto the CPU. This is the easiest way to reduce latency.
I predict that it will happen because the power consumption is getting out of hand. I also predict that they will eventually virtualize the whole graphics card / bus, so that it appears to be one video card to the OS, while in reality it could be several GPUs that scale as needed.
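As a thought experiment on what that virtualization might look like to software, here is a minimal sketch. The names (PhysicalGPU, VirtualGPU, render_frame) and the scaling heuristic are hypothetical; this only illustrates the idea of several physical GPUs being presented as one device that scales with load, not any real driver interface.

```python
from dataclasses import dataclass, field

@dataclass
class PhysicalGPU:
    name: str
    powered: bool = False

@dataclass
class VirtualGPU:
    """Presents a pool of physical GPUs to the OS as a single device."""
    pool: list[PhysicalGPU] = field(default_factory=list)

    def render_frame(self, workload: float) -> None:
        # Toy heuristic: roughly one physical GPU per 1.0 "units" of
        # workload, at least one active, at most the whole pool.
        needed = max(1, min(len(self.pool), round(workload)))
        for i, gpu in enumerate(self.pool):
            gpu.powered = i < needed        # unused GPUs are powered down
        active = [g.name for g in self.pool if g.powered]
        print(f"workload={workload:.1f} -> rendering on {active}")

vgpu = VirtualGPU([PhysicalGPU("IGP"), PhysicalGPU("discrete0"), PhysicalGPU("discrete1")])
vgpu.render_frame(0.2)   # desktop work: only the IGP stays on
vgpu.render_frame(2.7)   # heavy game: all three GPUs are powered
```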
I also think that possibly this year you will see GPU chips manufactured so that they are suspended in a non-conductive liquid and sandwiched in a housing to drastically increase heat dissipation from the chip.
        chip |
             v
       _
      |    -------
------|   /   |   \   <-- interconnections all around the chip
|     |_
|             ^
|     very small mounting pole
|
--- Non-conductive liquid compressed around the chip.
DigitalFreak - Monday, January 7, 2008 - link
They need some reason to keep the chipset/GPU tie-in around.

knowom - Monday, January 7, 2008 - link
"Because of the switching among GPUs, the motherboard will now be the location of the various monitor connections, with NVIDIA showing us boards that have a DVI and a VGA connector attached."Makes me wonder if NVIDIA will get rid of the on board DVI/VGA connectors all together for future discrete video cards. I'm sure doing so could be beneficial to NVIDIA and the consumer I'd think it would help lower costs, complexity, heat, and power plus with everyone having integrated graphics at least that's what they're moving towards they could probably set it up so integrated graphics could double as a physics processor.
Olaf van der Spek - Monday, January 7, 2008 - link
Why would removing connectors reduce power usage? It'd merely ensure the cards can't be used in other motherboards.
Shark Tek - Monday, January 7, 2008 - link
They should look for ways to reduce power consumption even more while producing more processing power. Their video cards (ATI/NVIDIA) are still very power hungry. At least the CPU makers (AMD/Intel) have been looking for ways to further reduce the power requirements of their chips in the last couple of years, and they are aiming to cut those numbers even more.

Can you imagine an upcoming mainstream card (9600GT) that will require at least a 400W PSU with 26A on the 12V rail?
That sucks pretty bad.
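For context, a quick bit of arithmetic on that requirement, as a rough sketch; the 26 A and 400 W figures are simply the ones quoted above.

```python
# Power available on the 12 V rail alone, from the figures quoted above.
rail_voltage_v = 12.0
rail_current_a = 26.0
rail_power_w = rail_voltage_v * rail_current_a
print(f"12 V rail headroom: {rail_power_w:.0f} W")                    # 312 W

# Share of a 400 W PSU that the 12 V rail requirement represents.
psu_rating_w = 400.0
print(f"Fraction of PSU rating: {rail_power_w / psu_rating_w:.0%}")   # 78%
```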
nullpointerus - Monday, January 7, 2008 - link
...ActiveArmor and the accelerated audio/storage chips...

Why waste time on this stuff?
High-end GPU users are worried about idle power savings...
...when they're missing the next-gen replacement for the 8800 GTX?
Granted, the low end stuff *might* end up being useful and popular enough that nVidia isn't just raising false hopes again.
Normal SLI needs a fundamental fix so that the high end solutions are less frustrating. After so many years now, game compatibility with SLI should not have to be hacked in after the fact. Why can't nVidia, ATI, and Microsoft just work together to change the damned APIs? They all know multi-GPU rendering is here to stay.
RamarC - Monday, January 7, 2008 - link
Why do car companies build prototypes that will never see the light of day? Why do electronics firms make 108" TVs that will never ship commercially?

Every major manufacturer needs to keep pushing the envelope even if there's no valid reason to do so. It keeps the pressure on competitors and keeps their brand top-of-mind. And the lessons learned from tech explorations will eventually surface in mainstream products.
postwarscars - Friday, January 11, 2008 - link
You're comparing unreleased technology to currently-available "solutions", which should be considered false reasoning. When you compare something, it helps for it to be a valid comparison.

nullpointerus - Monday, January 7, 2008 - link
Normal SLI is still not fixed. Why add new modes when multi-GPU rendering still has fundamental flaws?
Olaf van der Spek - Monday, January 7, 2008 - link
Sharp will ship its 108" TV commercially... ;)
Rebel44 - Monday, January 7, 2008 - link
My gaming PC runs 24/7, so I would really love this option to switch from the high-end GPU to onboard when I am not gaming. I hope all manufacturers (Intel, NV and AMD) will offer something like this soon.

Olaf van der Spek - Monday, January 7, 2008 - link
quote: Further we're getting the impression that this is going to add another frame of delay between rendering and displaying (basically this could operate in a manner similar to triple buffering) which would be a problem for users who start getting input lagged to the point where it's affecting the enjoyment of their game.

Why would this take another full frame?
Copying the buffer from video to system memory is much, much faster than rendering a frame, so the delay should be short (and fixed).
A 32-bit 1920 x 1200 buffer is about 9 MB. At 85 Hz that'd consume about 1.5 GB/s of system memory bandwidth. Not a small amount.
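A quick check of those numbers, as a rough sketch; the doubling to roughly 1.5 GB/s is my own assumption that each frame is both written into system RAM and then read back out of it for display scanout.

```python
width, height = 1920, 1200
bytes_per_pixel = 4                      # 32-bit color
refresh_hz = 85

frame_bytes = width * height * bytes_per_pixel
print(f"frame size: {frame_bytes / 1e6:.1f} MB")           # ~9.2 MB

# One copy per refresh into system RAM...
write_bw = frame_bytes * refresh_hz
# ...plus (assumption) a read back out of system RAM for display scanout.
total_bw = 2 * write_bw
print(f"copy-in traffic  : {write_bw / 1e9:.2f} GB/s")     # ~0.78 GB/s
print(f"with scanout read: {total_bw / 1e9:.2f} GB/s")     # ~1.57 GB/s
```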
Olaf van der Spek - Monday, January 7, 2008 - link
Why would that framebuffer have to be copied though? Can't it be streamed instead?