[mythtv-users] Which NVidia card

Steve Smith st3v3.sm1th at gmail.com
Tue Jan 22 15:16:06 UTC 2008


On 21/01/2008, Michael T. Dean <mtdean at thirdcontact.com> wrote:
>
> On 01/21/2008 02:14 PM, Rich West wrote:
> > Michael T. Dean wrote:
> >
> >> On 01/21/2008 09:34 AM, Steve Smith wrote:
> >>> On 21/01/2008, Cal <cal at graggrag.com> wrote:
> >>>> Marius Schrecker wrote:
> >>>>> After a Loooong break I'm trying to breathe new life into my
> >>>>> MythTV box.
> >>>>>
> >>>>> Why didn't I listen to you all before when you warned about ATI
> >>>>> cards? I'd have saved myself a lot of grief! Anyway:
> >>>>>
> >>>>> Now I'm going to get a card that does proper XvMc (Nvidia the
> >>>>> only choice here??)
> >>>> Forget about XvMC. It's a relic of a past era where our poor
> >>>> struggling sub-2GHz single-core CPUs _needed_ some graphics
> >>>> processing to be offloaded in order to survive. So, the poor little
> >>>>  graphics card endured constantly cooking up a storm doing XvMC,
> >>>> and without even a fan to cool its brow. Now, I believe you're
> >>>> better off letting a nice little dual core do the hard work. It's
> >>>> better equipped to deal with the heat output too.
> >>>>
> >>>> Worst of all, XvMC comes with some tedious artifacts. You don't get
> >>>>  anything for free. My suggestion is have the graphics card do no
> >>>> more than it has to, and provide the necessary grunt via cpu
> >>>> instead. I have a 6200 and a 7300 (two systems), and I really can't
> >>>>  pick between them. They're both excellent as long as I don't try
> >>>> to include XvMC in the mix.
> >>>  That's an interesting take on the use of XvMC.
> >> I completely agree with Cal.
> >>>  My opinion is use a specialized tool for a specialized job.
> >> But the problem is that the specialized tool was designed only for
> >> video decoding.  It was not designed to be specialized for MythTV.
> > What shouldn't be forgotten is that one size (of mythtv) does not fit
> > all.  MythTV can be set up in any number of ways (combined FE/BE,
> > separate FE & BE, multiple FE's and multiple BE's, etc., etc.), and it
> > doesn't make a lot of sense to generalize.
> >
> > Off-loading tasks like XvMC to the video card relieves the stress on
> > the CPU and, thus, reduces the requirements.  In other words, you
> > don't *need* that dual-core processor in a measly frontend-only system
> > if all that system is doing is displaying video.
> >
> > In my setup, I don't have a need for high-powered frontend systems
>
> And you don't have a need for color OSD and you don't have a need for
> timestretch.  (And does timestretch even work with XvMC?)  And those
> using the PVR-350's TV out have their own (similar--though probably
> larger) list of things that Myth can provide that they don't need.
>
> Oh, and there's the whole issue that XvMC can only be as good as the
> driver developer makes it.  (I.e. when using the NVIDIA proprietary
> drivers, you're stuck with what they give you.  When using MythTV to
> decode, you can fix anything yourself (or get someone to help fix issues).)
>
> > Anyhow, this particular discussion boils down
> > to the debate between hardware performance vs. software performance.
> >
>
> Hardware is simply permanent software.  Anything implemented in software
> can be implemented in hardware and vice versa.  It all comes down to
> which is most efficient, which is a question of what you're trying to
> optimize (capabilities, flexibility, power usage, CPU usage,
> upgradeability, ...).  And, for the vendors, there's production cost,
> implementation cost, update/upgrade cost.
>
> > With regard to the original poster's question: Nvidia is still the
> > easiest way to go with regard to the choice of a graphics card.
>
> I definitely agree with that--whether using XvMC or not, NVIDIA cards
> are the way to go today.  That may change in a few years as ATI
> continues to open specs for its cards (or NVIDIA may follow suit and
> open their specs once large numbers of "alternative" OS users start to
> jump ship and buy ATI cards).  But, for today, I wouldn't consider using
> anything but an NVIDIA card in my (non-EPIA/non-specialized-hardware)
> Myth boxes.
>
> However, I'd love to see some real-world power/noise comparisons.  I
> would expect there to be little difference with today's Core 2 or Athlon
> 64 X2 CPU's.  When video decoding requires nearly 100% of a core, there
> may be a difference, but as processors become more powerful, the
> benefits of "hardware acceleration" tend to disappear (just like you
> won't really see a power/noise benefit using XvMC for decoding standard
> def video on a modern processor--and may actually get a power/noise
> disadvantage using something like a PVR-350's "hardware" decoder--which
> is really still using the software provided by the firmware).  And,
> since modern CPU's tend to be focusing a lot on decreased power usage
> (compared to the Athlon XP/Pentium 4 days), the more power-efficient and
> more-powerful processor may actually hold its own against an older CPU with
> XvMC GPU assist.  (Though this is all speculation as I haven't done any
> testing myself.)
>
> I may change my mind and go back to GPU-assisted video decoding when VA
> API is usable, but only if it's truly flexible enough that I don't lose
> capabilities by using it (and, at this point, I don't expect that to be
> the case).  However, I will /never/ use XvMC as losing capability and
> getting no benefits is a no-brainer decision (that is, no benefits /for
> me/).  That said, I have a setup that allows my frontend to be as loud and
> as ugly as I want (and it's both loud and ugly) since it's nowhere near
> my TV/speakers/viewing room and it can be shut down when I'm not using
> it since backends are separate.  Note, also, that my frontend is /not/ a
> "measly" frontend.  It's significantly more powerful than either of my
> backends (which can't even decode the video they capture in real time).
> I'm a believer in the idea that the backend is where you skimp on
> hardware and the frontend is where you put the power.
>
> Mike

Mike,

I see your point about things like the PVR-350; I only included that as a
demonstration of the reduced power requirements of specialized devices.
(Although modern CPUs probably have a small enough feature size to match the
350's older dedicated hardware at the same power or less.)

Actually I quite like the XvMC approach (which on my system still allows
timestretch etc.) as a kind of halfway house between full hardware decoding
and full software decoding. Effectively, using XvMC treats the graphics card
as a co-processor, in the same way CPUs used to have a separate floating
point co-processor. Now no one would consider a CPU without a built-in FPU.

Modern GPUs are just an extension of the same idea: specialized processors
designed to excel at graphics processing. Sony have taken this once again to
the next level with the PS3's Cell processor, which puts several specialized
vector co-processors (the SPEs) on the same die as the general-purpose CPU
core. (In fact the SPEs are general-purpose enough to be used for all sorts
of jobs, so physics researchers are using clusters of PS3s as
supercomputers!)

Of course, as you point out, this can all fall apart in the real world when
drivers get in the way and stop the co-processor (the XvMC circuitry) from
working correctly. In these cases it's often much easier to throw some CPU
grunt at the job.
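
For what it's worth, here's a rough, untested sketch in plain C against
libXv/libXvMC (nothing MythTV-specific, and the exact link flags are a guess
on my part) of how you could ask the driver what XvMC support it actually
exposes before betting your decode path on it:

/*
 * Rough sketch: probe the X server/driver for XvMC support.
 * Build with something like:
 *   cc -o xvmc_probe xvmc_probe.c -lX11 -lXv -lXvMC
 * (link flags vary by distro; some setups go through the libXvMCW wrapper).
 */
#include <stdio.h>
#include <X11/Xlib.h>
#include <X11/extensions/Xvlib.h>
#include <X11/extensions/XvMClib.h>

int main(void)
{
    Display *dpy = XOpenDisplay(NULL);
    if (!dpy) {
        fprintf(stderr, "cannot open display\n");
        return 1;
    }

    int ev_base, err_base;
    if (!XvMCQueryExtension(dpy, &ev_base, &err_base)) {
        printf("No XvMC extension -- stick with CPU decoding\n");
        return 0;
    }

    int major = 0, minor = 0;
    XvMCQueryVersion(dpy, &major, &minor);
    printf("Driver reports XvMC %d.%d\n", major, minor);

    /* Walk the Xv adaptors and count the XvMC surface types each port
     * offers; a driver that exposes no surfaces won't accelerate anything,
     * whatever the silicon could do. */
    unsigned int nadaptors = 0;
    XvAdaptorInfo *adaptors = NULL;
    if (XvQueryAdaptors(dpy, DefaultRootWindow(dpy),
                        &nadaptors, &adaptors) != Success) {
        fprintf(stderr, "XvQueryAdaptors failed\n");
        return 1;
    }

    for (unsigned int i = 0; i < nadaptors; i++) {
        for (unsigned long p = 0; p < adaptors[i].num_ports; p++) {
            XvPortID port = adaptors[i].base_id + p;
            int nsurf = 0;
            XvMCSurfaceInfo *surf = XvMCListSurfaceTypes(dpy, port, &nsurf);
            if (surf && nsurf > 0)
                printf("port %lu: %d XvMC surface type(s)\n",
                       (unsigned long)port, nsurf);
            if (surf)
                XFree(surf);
        }
    }

    XvFreeAdaptorInfo(adaptors);
    XCloseDisplay(dpy);
    return 0;
}

If the extension is missing, or no port lists any surface types, the driver
isn't going to accelerate anything no matter what the card can do -- which
is exactly the "drivers get in the way" case.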

Cheers

Steve
