Custom Video Card?

#1 Ryan 3000

  • Members
  • 834 posts
  • Gender:Male
  • Location:Maryland

Posted 04 June 2007 - 08:19 PM

Hey guys, I got hit with a huge realization today. By all means, shoot me down if I'm wrong, but I think I'm onto something. Why don't they make a video card that works like a motherboard: plug in your own processor and memory, and install your own HSF and cooling options? Think about it: a dual-core (even quad-core) video card with up to 8 GB of memory! What's the problem with this idea? Sounds good to me, and it also sounds like it would pwn anything out right now for a fraction of the price.

Edited by Ryan 3000, 04 June 2007 - 08:19 PM.

No pessimist ever discovered the secrets of the stars, or sailed to an uncharted land.

#2 protozero

  • Members
  • 447 posts
  • Gender:Male
  • Location:Quebec, Canada

Posted 05 June 2007 - 08:25 AM

That sounds a lot more pricey. I would think there'd be a lot of compatibility problems. A lot of a video card's features are built into the board itself, like the ring bus, and it would force NVIDIA and ATI to make just plain GPUs in one standard package.
Programming today is a race between software engineers striving to build bigger and better idiot-proof programs, and the Universe trying to produce bigger and better idiots. So far, the Universe is winning.

#3 Rabid Ferrets of DOOM!!!

  • Members
  • 74 posts

Posted 05 June 2007 - 11:17 AM

And a GPU functions very differently from a CPU, which is most apparent in the fact that a GPU requires specific drivers to work properly. If you want dual core, hook up SLI. If you want quad core, get two dual-GPU cards and hook them up in SLI. For that extra RAM you want, you can have the GPU share RAM with the system.

#4 Ryan 3000 (Topic Starter)

  • Members
  • 834 posts
  • Gender:Male
  • Location:Maryland

Posted 05 June 2007 - 12:13 PM

Hey guys, about Rabid Ferret's comment: I can set any graphics card (not just TurboCache ones) to draw from my system RAM? So if I had a good GPU and a high-speed memory interface, I would have limitless speed (to some degree)? Also, I never quite understood this: do CPUs/GPUs have moving parts?

Edited by Ryan 3000, 05 June 2007 - 12:30 PM.

No pessimist ever discovered the secrets of the stars, or sailed to an uncharted land.

#5 Mr Alpha

  • Members
  • 1,875 posts
  • Gender:Male
  • Location:Finland

Posted 05 June 2007 - 12:33 PM

That is not as dumb an idea as it might at first seem. A graphics card is essentially a motherboard with a graphics processor and some memory, so in theory I think it could be doable. But there are some severe technical drawbacks.

A simple one is space: graphics cards are quite cramped already. Then there is memory bandwidth and latency. You get around 10 GB/s of bandwidth from the RAM interface we are using on motherboards now. My graphics card has 86 GB/s of bandwidth, and some go up to 140 GB/s. So a graphics card with add-in memory would have an enormous memory-bandwidth bottleneck.
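To put numbers on that gap, here is a minimal back-of-the-envelope sketch: peak bandwidth is just transfer rate times bus width in bytes. The specific clock and bus-width figures below are my assumptions for typical 2007-era parts, not figures from this thread.

```python
# Peak memory bandwidth = transfers/sec * bus width in bytes (rough sketch).

def bandwidth_gb_s(transfers_per_sec, bus_width_bits):
    return transfers_per_sec * (bus_width_bits / 8) / 1e9

# Assumed: dual-channel DDR2-800 on a motherboard (800 MT/s, 2 x 64-bit channels)
system_ram = bandwidth_gb_s(800e6, 128)     # ~12.8 GB/s

# Assumed: GDDR3 on an 8800 GTX-class card (1800 MT/s, 384-bit bus)
graphics_ram = bandwidth_gb_s(1800e6, 384)  # ~86.4 GB/s

print(f"System RAM:   {system_ram:.1f} GB/s")
print(f"Graphics RAM: {graphics_ram:.1f} GB/s ({graphics_ram / system_ram:.1f}x)")
```

The wide, soldered-down memory bus is exactly what a socketed memory slot can't give you, and that is where the bottleneck would come from.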

As for the quad-core and 8 GB of memory: it isn't nearly as impressive as it first sounds. A CPU normally has one or two (or four) big, complex, general-purpose cores. A GPU is a specialized processor and has no need of big general-purpose cores; my graphics card has 128 small, simple stream processors. Compared to that, four cores isn't much.
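A rough throughput comparison makes the point. Treat the figures as assumptions: a ~1.35 GHz shader clock with a 2-flop multiply-add per stream processor for the GPU, and 4 single-precision SSE flops per cycle at 2.4 GHz per CPU core.

```python
# Many simple cores vs. a few complex ones: peak single-precision throughput.
# All clock and flops-per-cycle figures are assumptions for 2007-era hardware.

gpu_flops = 128 * 1.35e9 * 2   # 128 stream processors, 2 flops (MAD) per cycle
cpu_flops = 4 * 2.4e9 * 4      # 4 cores, 4 SSE flops per cycle

print(f"GPU: ~{gpu_flops / 1e9:.0f} GFLOPS")  # ~346
print(f"CPU: ~{cpu_flops / 1e9:.0f} GFLOPS")  # ~38
```

Peak numbers only, of course: the GPU reaches them on embarrassingly parallel pixel work, not on general-purpose code.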

So a graphics card with a plug-in GPU and memory would be bigger, slower, and more expensive than what we have today. Then there are a bunch of other business issues in the way.

Looking to the future, things may change. Technology will improve, and someday we will have a plug-in interface with hundreds of gigabytes per second of bandwidth. But by then the whole point of a separate graphics card will be moot. Instead we will have stuff like AMD's Fusion project, which is about replacing a couple of the cores in the multi-core CPUs of the future with specialized graphics cores, so that the motherboard becomes the graphics card.
"Anyone who cannot form a community with others, or who does not need to because he is self-sufficient [...] is either a beast or a god." Aristotle
Intel Core 2 Quad | XFX 780i SLI | 8GB Corsair | Gigabyte GeForce 8800GTX | Auzentech X-Fi Prelude| Logitech G15 | Logitech MX Revolution | LG Flatron L2000C | Logitech Z-5500 Digital

#6 Ryan 3000 (Topic Starter)

  • Members
  • 834 posts
  • Gender:Male
  • Location:Maryland

Posted 05 June 2007 - 12:36 PM

Still, why not at least have a video card where I can plug in my own GPU instead of having to buy a whole new card? Wouldn't that be less expensive and more efficient?
No pessimist ever discovered the secrets of the stars, or sailed to an uncharted land.

#7 Rabid Ferrets of DOOM!!!

  • Members
  • 74 posts

Posted 05 June 2007 - 02:02 PM

A graphics card is very finely tuned across all its parts. If you were to swap in some other GPU, it'd be like pairing DDR-800 RAM with DDR-900 chips, one with a CAS latency of 4 and the other with a CAS latency of 5. You'd get massive bottlenecks, as was said before, and lose more than you gain.
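To see why mismatched timings bite, you can convert CAS latency from cycles to nanoseconds. The sketch below assumes the usual DDR convention that "800"/"900" are transfer rates (so the I/O clock is half that), and that mixed sticks fall back to the slowest shared speed and loosest timing:

```python
# Absolute CAS latency in nanoseconds (a sketch; assumes standard DDR timing).

def cas_ns(cas_cycles, transfers_per_sec):
    io_clock = transfers_per_sec / 2   # DDR: two transfers per I/O clock
    return cas_cycles / io_clock * 1e9

fast = cas_ns(4, 800e6)   # DDR-800 CL4 -> 10.0 ns
slow = cas_ns(5, 900e6)   # DDR-900 CL5 -> ~11.1 ns

# Mixed sticks typically run at the slowest shared speed with the loosest
# timing, so the whole bus pays the worst case on both axes:
mixed = cas_ns(5, 800e6)  # CL5 at 800 MT/s -> 12.5 ns
print(f"{fast:.1f} ns vs {slow:.1f} ns; mixed: {mixed:.1f} ns")
```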

Think about why a GPU costs so much. Most of the parts don't cost any significant amount to produce; the cost comes from the time, effort, and very smart engineers needed to design an efficient GPU. If we were able to assemble them ourselves, the costs would go way down, since less development work would be required.

With today's GPUs you shouldn't need to upgrade too often. I'm running everything at max settings on a 7900 GT. The only reason to upgrade, if you have a decent card now, is DirectX 10 (which is why I'm saving up. XD). But there are just too many things that can, and most likely would, go wrong with a socketed GPU.

Now, about the shared RAM: it doesn't bottleneck the system, because it only adds shared RAM on top of the dedicated memory the card already has. That is very different from a card that has nothing but shared RAM (why do you think laptop GPUs are so much slower?). What OS are you running?

#8 Rabid Ferrets of DOOM!!!

  • Members
  • 74 posts

Posted 05 June 2007 - 02:41 PM

Anyway, I don't know why I asked what OS you're using; I think I'm just an idiot. :thumbsup: To change the shared amount, you go into your BIOS and set it there. I'm assuming it's a PCI Express card; with AGP you change the aperture size, not the shared amount.

A lot of people say sharing will slow things down, but I disagree: it really depends on your system. If you have enough RAM it should be fine, though I wouldn't do it unless you're running DDR2 rather than DDR1. It only slows down when the card is actually hitting the shared RAM under heavy load, and running at DDR2 speeds is almost always faster than not having enough video memory at all.

I run 256 MB of RAM on my GPU plus an additional 128 MB of shared DDR2 out of my 2 GB. Until recently I was doing the same with 1 GB of DDR1, and I get triple the FPS now; it does run faster than with no shared RAM. So I'd try it with no sharing, then with half shared, and go with whichever is faster.
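For a sense of why the DDR1-to-DDR2 jump matters so much for shared video memory, here is a quick sketch. The module speeds and channel counts are my assumptions for typical setups of the time, not figures from this thread:

```python
# Peak bandwidth the GPU can pull from shared system RAM (rough sketch;
# module speeds and channel counts below are assumed, not from the thread).

def shared_gb_s(mt_per_sec, channels, width_bits=64):
    return mt_per_sec * channels * (width_bits / 8) / 1e9

ddr1 = shared_gb_s(400e6, 1)   # single-channel DDR-400  -> ~3.2 GB/s
ddr2 = shared_gb_s(800e6, 2)   # dual-channel DDR2-800   -> ~12.8 GB/s

print(f"DDR-400:  {ddr1:.1f} GB/s")
print(f"DDR2-800: {ddr2:.1f} GB/s ({ddr2 / ddr1:.0f}x)")
```

And even the best case there is well below dedicated GDDR3, which is why sharing only works as a top-up on a card that already has its own memory.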



