Graphics Question


7 replies to this topic

#1 pcpunk

  • Members
  • 5,663 posts
  • Gender:Male
  • Location:Florida
Posted 02 January 2016 - 10:22 PM

I am trying to understand why some computers have one or two graphics cards in them. Or is one of them not really a card, just a controller? Like this:

Does someone know of a good wiki or site that explains this? I notice some PCs have switches that control which card is being used. Maybe the example below isn't a good one, but it's all I could find.

 
Card-1: Intel 4th Gen Core Processor Integrated Graphics Controller bus-ID: 00:02.0
Card-2: NVIDIA GK107GLM [Quadro K1100M] bus-ID: 02:00.0
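
Side note: here is a minimal Python sketch that pulls the same kind of adapter list. It assumes a Linux machine with the standard lspci utility installed; the helper function name is just made up for illustration.

# Minimal sketch (assumes Linux with the standard `lspci` tool available).
# Lists VGA/3D controllers much like the output quoted above.
import subprocess

def list_graphics_adapters():
    """Return (bus_id, description) pairs for graphics controllers."""
    lspci_output = subprocess.run(
        ["lspci"], capture_output=True, text=True, check=True
    ).stdout
    adapters = []
    for line in lspci_output.splitlines():
        # Typical line: "00:02.0 VGA compatible controller: Intel ..."
        if "VGA compatible controller" in line or "3D controller" in line:
            bus_id, description = line.split(" ", 1)
            adapters.append((bus_id, description))
    return adapters

if __name__ == "__main__":
    for bus_id, description in list_graphics_adapters():
        print(f"Card: {description}  bus-ID: {bus_id}")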





#2 dejavou42

  • Members
  • 11 posts

Posted 02 January 2016 - 11:35 PM

The two main scenarios that come to mind are:

 

1. Gaming or other graphics-intensive applications, where the integrated graphics are not powerful enough.

2. The need for additional graphics ports to drive additional displays.



#3 the_patriot11

    High Tech Redneck

  • BC Advisor
  • 6,755 posts
  • Gender:Male
  • Location:Wyoming USA

Posted 02 January 2016 - 11:44 PM

I'm not sure about the above cards, but there are various applications for two video cards. I used to run what is called Crossfire, where you can link between 2 and 4 video cards together; I only had 2 ATI 4890s. One was the primary, the other was hooked to it, and both worked together to deliver better graphics. Nvidia has a similar setup, except they call their system SLI. SLI also lets you use a second card as a separate entity for 3D rendering.

 

An old-school trick people used to do to allow their computer to use more than 2 monitors was to run 2 separate video cards on their own (instead of in tandem like SLI or Crossfire), but that severely slowed the system down. With newer cards, you can do that either with one card using Eyefinity (or the Nvidia equivalent, no idea what it's called) or with Crossfire/SLI cards.




#4 Captain_Chicken

  • BC Advisor
  • 1,354 posts
  • Gender:Male

Posted 03 January 2016 - 11:01 AM

Card one is, yes, your integrated graphics on the processor; the other is a dedicated GPU.


#5 pcpunk

  • Topic Starter
  • Members
  • 5,663 posts
  • Gender:Male
  • Location:Florida

Posted 03 January 2016 - 05:11 PM

Thanks guys that helps a lot!




#6 Ram4x4

  • Members
  • 228 posts
  • Gender:Male
  • Location:Pennsylvania

Posted 08 January 2016 - 08:17 PM

What the heck, I'll add my 0.02 worth :-)

 

Not all, but most CPUs in relatively recent times come with built-in graphics. This means you don't have to purchase a separate "discrete" video card. The downside to built-in graphics is that it typically isn't powerful enough to run modern games well, isn't suitable for specialized graphics needs like CAD, and generally doesn't support multiple monitors.

 

In the specific example you gave, the first entry is the built-in graphics on the CPU (Intel 4th Gen Core Processor Integrated Graphics Controller), and the second is a discrete video card that was added to a slot on the motherboard (NVIDIA GK107GLM [Quadro K1100M]).

 

Based on that scenario, this is most likely from a "workstation" class PC.  The Nvidia Quadro graphics cards are designed specifically for CAD/vector graphics and/or multi-display setups...which the built-in graphics can't do.

 

Another scenario is using more than one discrete video card in the system, connected together, to boost graphics performance above what a single card can do (think of it like putting a second engine into your car). You also have to use a pair (or more) of the same model of card...you can't mix and match.

 

The two big graphics card companies (AMD and Nvidia) each have their own name and method for connecting multiple graphics cards in one system. AMD calls it "Crossfire" and Nvidia calls it "SLI" (Scalable Link Interface). The motherboard you use has to support at least one of those methods in order to use multiple linked graphics cards. It is not unknown these days for some folks to use 3 or even 4 video cards ("Tri-SLI" or "Quad SLI"). Of course, that can be very expensive, as high end graphics cards today can cost more than the rest of the system combined. The Nvidia Titan Z, for example, is upwards of $1,200...just for one video card.

 

Typically, hardcore gaming fanatics will run SLI or Crossfire rigs to boost frame rates in their favorite games, since maximum graphics settings in modern games take a ton of graphics processing horsepower. Add to that increasing display resolutions, which only add to the need for more graphics oomph.

 

Due to the nature of Crossfire and SLI, having two of the same video card does not equal double the performance. There is some overhead in the linking process, plus other technical reasons (CPU bottlenecking, for one example). Some games see larger performance increases than others. It's all a balancing act of graphics card, CPU and $$$.
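
To put rough numbers on the "not double" point (purely illustrative; the 70% scaling figure below is an assumption, not benchmark data):

# Illustrative arithmetic only -- the 0.70 scaling figure is an assumption,
# not a measured benchmark.
def multi_gpu_fps(single_card_fps, extra_cards, scaling_efficiency=0.70):
    """Estimate frame rate when additional cards scale imperfectly."""
    return single_card_fps * (1 + extra_cards * scaling_efficiency)

print(multi_gpu_fps(60, 1))  # ~102 fps from two cards, not the "ideal" 120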

 

It's common today to refer to the graphics card as a "GPU" (Graphics Processing Unit) because, in reality, a GPU is an incredibly powerful processor. In fact, a higher end GPU can be hundreds, if not thousands, of times more "powerful" than a CPU (the main computer processor) at its particular job. It's just that GPUs only do one thing, but they do it extremely well: massive parallel processing. Nvidia pioneered "GPU accelerated processing" back around 2007. The idea is to offload computationally intensive portions of code onto the GPU while the CPU continues to run its sequential code. GPU-accelerated programs are used a lot in computational finance, data science and analytics, and so on. GPUs do this well because they are massively parallel (lots of cores processing in parallel) versus a CPU with a few cores running sequentially/serially.
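
To make "offloading onto the GPU" concrete, here is a rough sketch using NumPy on the CPU and the optional CuPy library (a NumPy-like array library for Nvidia GPUs). It assumes CuPy and a CUDA-capable card are installed, which goes beyond anything in this thread.

# Sketch of GPU offload: the same matrix multiply done on the CPU (NumPy)
# and on the GPU (CuPy).  Assumes the optional cupy package and an Nvidia
# CUDA-capable card are available.
import numpy as np
import cupy as cp

n = 2048
a = np.random.rand(n, n).astype(np.float32)
b = np.random.rand(n, n).astype(np.float32)

# CPU: the multiply runs on a handful of cores.
c_cpu = a @ b

# GPU: copy the arrays to the card, let thousands of small GPU cores
# chew on it in parallel, then copy the result back to main memory.
c_gpu = cp.asnumpy(cp.asarray(a) @ cp.asarray(b))

# The two results agree to within float32 rounding error.
print(np.max(np.abs(c_cpu - c_gpu)))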



#7 the_patriot11

    High Tech Redneck

  • BC Advisor
  • 6,755 posts
  • Gender:Male
  • Location:Wyoming USA

Posted 08 January 2016 - 09:01 PM

Ram's advice is right on. Though I have seen some areas that were doubled with an SLI/Crossfire rig, it definitely was a rare case and didn't happen every time; it really depends on the rig. And oftentimes you would see remarkable performance increases in one area, but not another. When I ran Crossfire, I noticed nearly double the framerate at the same graphics level as one card, but my image quality was not doubled; it only went up by maybe a third in most cases.
 

Edited by the_patriot11, 09 January 2016 - 11:42 AM.



#8 Ram4x4

  • Members
  • 228 posts
  • Gender:Male
  • Location:Pennsylvania

Posted 09 January 2016 - 04:29 AM

And to continue a little...(I drank some coffee earlier, so I'm still wide awake, lol)...

 

Adding tons of graphics processing alone doesn't necessarily equate to uber performance. A multi-GPU setup has to be paired with a powerful CPU and fast system memory as well. Remember, "computing" requires an entire system, and if one component is particularly weak compared to the others, you'll get poor performance.

 

The two main scenarios that keep SLI or Crossfire from delivering double the frame rate are:

 

   1. CPU bottlenecking - this occurs when the program you are running (typically a game) is "heavy" on CPU processing requirements. The effect is that your multiple GPUs are loafing, waiting for the CPU to do its processing. If you already have the current highest-end CPU, then there is really nothing you can do at that point...adding another GPU to the mix won't change a thing.

 

   2. The code (the program itself and the way it is written) doesn't make efficient use of multiple GPUs or CPU cores. Some folks incorrectly assume that simply having more GPUs, or more cores in the CPU, automatically means improved performance, and that's not the case. The software HAS to be written so that it will use the extra cores and/or make efficient use of multiple GPUs. It's called "multi-threading". Depending on the purpose of the software, it can be difficult, if not impossible, to multi-thread it. The operating system won't multi-thread a program for you, although it does play a part in scheduling the threads of software that is written that way.

 

Some multi-threaded software can only use 2 cores, so even if you have a 4-core CPU, it's only going to use 2 of them. Games are notorious for being single-threaded, or maybe dual-threaded at most today (although that will change over time), and that is the primary reason hardcore gamers today will pick, say, an i5 or i7 CPU over one of the 6- or 8-core Haswells.
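
As a small illustration of "the software has to be written for it" (a generic Python sketch, nothing to do with any particular game): the same CPU-bound work only spreads across cores if the code explicitly farms it out to workers.

# Generic sketch: the same CPU-bound work run on one core, then farmed out
# to a pool of worker processes (one per core).  Extra cores only help in
# the second case, because the code was written to use them.
import time
from multiprocessing import Pool

def busy_work(n):
    """Deliberately CPU-bound task."""
    total = 0
    for i in range(n):
        total += i * i
    return total

if __name__ == "__main__":
    jobs = [2_000_000] * 8

    start = time.perf_counter()
    serial = [busy_work(n) for n in jobs]      # single core does everything
    print(f"serial:   {time.perf_counter() - start:.2f}s")

    start = time.perf_counter()
    with Pool() as pool:                       # one worker per CPU core
        parallel = pool.map(busy_work, jobs)
    print(f"parallel: {time.perf_counter() - start:.2f}s")

    assert serial == parallel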

 

An i7-4790K has 4 cores and a native clock speed of 4.0GHz. This CPU is currently considered the "king" for building a gaming rig. I personally have an i7-5820K, which has 6 cores but only runs at 3.3GHz native. In a gaming environment, that 4-core i7-4790K beats my 6-core, because games use at most 2 cores and the cores on the 4790K have a 0.7GHz clock speed advantage. The 4-core wins in that scenario...strictly speaking in terms of absolute frame rate in games...but realistically, the 5820K paired with a good GPU will play every game the 4790K will, at the same settings, well enough that the differences are negligible. This is where all the hoopla in forums and "bragging rights" come into play, and where many "fanboy" arguments occur about whose system is most badass at playing <insert your favorite game here>. It's sort of the modern rendition of whose hotrod car is the fastest and most powerful.

 

But...step outside games and start running virtual machines, Photoshop, or some other software that will use all the CPU cores, and I'll leave them in the dust.

 

Additionally, Intel uses "Hyper-Threading" technology. In essence, it allows two threads to be assigned to a single core. The effect is that the computer system "thinks" it has double the number of cores it physically has. So the i7-4790K, with Hyper-Threading, effectively "sees" 8 cores. My i7-5820K "sees" 12 cores.
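
You can actually see that doubling from software. A tiny sketch (psutil is a third-party package, not part of the standard library):

# Sketch: logical cores (what the OS "sees", including Hyper-Threading)
# versus physical cores.  psutil is a third-party package.
import os
import psutil

logical = os.cpu_count()                    # e.g. 8 on an i7-4790K
physical = psutil.cpu_count(logical=False)  # e.g. 4 on an i7-4790K

print(f"physical cores: {physical}, logical cores: {logical}")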

 

So, if I were to set up a virtual machine (basically a computer running within a computer), I have the option to assign any number of cores to it. I could assign 6 cores to my local host (the main PC and native OS) and 6 cores to the VM. The 4790K can only do 4 and 4 at most. Now run, say, Photoshop on the native machine and in the VM simultaneously, and guess what? My 5820K will smoke the 4790K!

 

For the average user needing basic day-to-day computing and gaming, a quad-core i5 CPU is probably the best bang for the buck (though it does not support Hyper-Threading). The 6- and 8-core Haswell family processors are a fair bit more expensive, and unless you have a need for 6+ cores, you're just throwing away money. The i7s do support Hyper-Threading, but they cost more than the i5s, and unless you really need Hyper-Threading they are a waste of money too...unless you just have to have the fastest gaming CPU around.

 

The 8-core i7-5960X is a touch over $1,000 just for the CPU (and it only runs at a 3.0GHz clock speed). After that, you have to step up to the enterprise-class CPUs (the vaunted Xeon processors), and the upper-level Xeons cost several thousand dollars. Those really are specialized CPUs intended for use in servers or very high-end workstations.

 

Beyond all those numbers, the good old hardcore gamers typically get into "overclocking". Basically, they tweak the various clock multipliers and voltage settings to run the CPU, memory and GPUs at higher clock speeds than stock in order to get more processing power from them, all in the name of higher frame rates in their games (again, the modern equivalent of hotrodding your car). This is specifically why, if you go to, say, Newegg and search for CPU heatsinks, you'll find a vast array of incredible solutions...I mean, some of these aftermarket coolers are simply HUGE! They are designed to absorb and dissipate all that heat when you overclock (because overclocking will cause your CPU to generate a lot more heat). It has spawned the liquid cooling industry as well.

 

I used to water cool back before water cooling became all the rage :-)  In the late 90s, when the first "Covington" Celerons came out, I had one of the 300MHz versions. I discovered that if I sawed off the square knob on top of a threaded 4" PVC pipe cap, it fit perfectly over the CPU slug (IHS). Some silicone secured it to the CPU. A couple of holes tapped in the side, some tubing and fittings, an aquarium pump and a bucket of water, and I was keeping my little Celeron, jacked up to 464MHz, plenty cool. Then my kids started walking, and well...the thought of a kicked-over bucket wasn't good. Besides, by then I had moved up to a "real" CPU and never really found the need for overclocking anymore (and I have never liquid cooled a CPU since). I almost dabbled in "submersed motherboards": the idea is to literally put your entire motherboard, with CPU, memory, etc., into a case of de-ionized water or oil. I know several folks who did it successfully, but it was a maintenance nightmare: constant leaks, etc. I just never found the need for such extremes.

 

Overclocking in the world of gamers has become mainstream. Many CPU or GPU reviews you see online contain data on "OC'd" setups. The reality is that you seriously risk damaging your components or shortening their lifespan, and the gains are not worth it...but kids these days just have to have the fastest and baddest of the bad ;-P





