[mythtv-users] Back-end Virtualization

Zarthan South zarthan at gmail.com
Fri May 11 17:19:44 UTC 2012


On Fri, May 11, 2012 at 10:51 AM, Raymond Wagner <raymond at wagnerrp.com> wrote:

> On 5/9/2012 21:22, Nathan Hawkins wrote:
>
>> Is ‘Anyone’ virtualizing their backend?! I can’t believe I’m the only
>> person out there who wants to do this…
>>
>
> I've stated my views on virtualization on this mailing list multiple times
> in the past, but new people keep joining and asking, so here goes...
>
> Why do you want to virtualize your backend? "Because I think it
> would be interesting" is a perfectly valid reason here.  "Because the
> industry uses it and VM vendors tell me it magically makes everything
> better" is not.  So why does industry use it?
>
> The first reason would be security.  There are many mechanisms to isolate
> different processes on a single system, but none of them can be as complete
> or absolute as full system virtualization.  You can run different
> application servers, or servers for multiple departments with different
> user access rights on one physical system, and if one gets compromised,
> there is (almost) no risk to the rest.  It can be flushed and restored from
> a clean copy with no harm to the other services.  If you are looking to
> virtualize MythTV for security purposes, your journey ends here.  If you
> are not running MythTV on a safe, trusted network, you either shouldn't be
> running MythTV, or you should be looking into adding cryptographic
> authentication into all the various communication interfaces MythTV uses.

The first reason is to make fuller use of physical resources. More than 90
percent of physical machines use less than 10 percent of their CPU. Today it
is very difficult to right-size a server for a single workload; for the vast
majority of servers, even a single-core Atom CPU would be overkill. In a
virtual environment I can tailor resources to the exact needs of a particular
workload. If I need no more than 848 MB of RAM, I can assign exactly that. If
I require 3 CPU cores, I can assign that. I can also balance the load across
all the servers to further maximize resource utilization.
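
With KVM/libvirt, for example, that kind of right-sizing takes only a couple
of lines in the guest's domain definition (a sketch only; the guest name is
hypothetical, and the values are the ones from the paragraph above):

```xml
<!-- Fragment of a libvirt domain definition, not a complete file.
     RAM and vCPU counts are set to exactly what the workload needs. -->
<domain type='kvm'>
  <name>mythbackend-vm</name>        <!-- hypothetical guest name -->
  <memory unit='MiB'>848</memory>    <!-- exactly 848 MB of RAM -->
  <vcpu placement='static'>3</vcpu>  <!-- exactly 3 CPU cores -->
</domain>
```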

Until recently, the security of inter-VM network traffic was very much
lacking. Even today, unless you spend extra money on additional security
measures, you have no means of protecting or monitoring inter-VM traffic.

>
> The second reason is high availability.  The virtual machine allows you to
> save the state of the machine, and in the event of a failure, resume that
> state on another piece of hardware.  This is really only a crude route to
> high availability, as such capability is much more effectively and
> efficiently performed by the application itself, such as MySQL clustering
> and replication servers.  It becomes a question of how valuable the
> application is to your needs, and whether it is valuable enough to warrant
> the time and expense of developing native support.  If your interest
> is in HA, you've already thrown far more money at virtual machines than
> someone who would be asking a basic question such as this on an open source
> mailing list.
>
> And that's it.  Those are the two good reasons to run virtual machines in
> a production environment.
>
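
As an aside, the MySQL replication Raymond mentions needs only a few lines of
configuration to get started (a sketch; server IDs and log names are
illustrative):

```ini
# my.cnf fragment on the master (illustrative values)
[mysqld]
server-id = 1
log-bin   = mysql-bin

# my.cnf fragment on the replica
[mysqld]
server-id = 2
```

The replica is then pointed at the master with CHANGE MASTER TO and started
with START SLAVE.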

A third reason is reduced datacenter space, power, and cooling. A server with
dual multicore CPUs can easily handle a 20-to-1 consolidation ratio, and
virtual desktops can reach 100-to-1. During off-peak hours I can migrate VMs
onto fewer physical servers and shut down the extras. When needs change, I
can bring the extra physical machines back up and migrate the VMs back.
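
With libvirt, for instance, consolidating guests is a single live migration
per VM (a sketch; the guest and host names are hypothetical, shared storage
between the hosts is assumed, and a running hypervisor is required):

```shell
# Live-migrate a running guest to a second physical host, after which
# the now-idle source machine can be powered down.  Names are illustrative.
virsh migrate --live mythbackend-vm qemu+ssh://host2/system
```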



> -- Counter Argument 1 --
> What about the need to run on a different hardware architecture or
> operating system?  Don't.  Find software that suits your needs that can all
> run on one OS.  If it's something critical that you cannot do without,
> perhaps that makes it worth dedicating another machine to.
>
I am primarily a Linux guy, but I still have a Windows server or two. Getting
a separate machine just for the Windows servers would be totally wasteful.

>
> -- Counter Argument 2 --
> What about software that refuses to run on anything outside a pre-defined
> hardware set, so you define it against a virtual machine to allow the VM to
> be mobile?  Shame on the company for putting such restrictive licensing
> measures on their software.  Double shame for botching it in such a manner
> that it could be bypassed by simply using a VM.  From an idealist
> standpoint, shame on the customer for patronizing such an abusive developer.
>
Virtualization isn't as unusual as it once was, and most workloads are fully
acceptable and supported in a virtual environment. Legacy applications that
only run on, say, NT 4, Windows 95, or even DOS can be migrated (cloned from
physical hardware) and run as virtual machines, allowing very old hardware to
be retired. This is not at all uncommon.



> -- Counter Argument 3 --
> What about software that could crash and take out a system?  If you're
> doing development, this is a great thing.  Any application that you expect
> to crash with destructive results should not be used in a production
> environment.
>
Virtualization in the PC world began as a development tool for exactly that
reason. Today it is used not only for development but also for large-scale
testing.


> -- Counter Argument 4 --
> What about ease of management?  When trying to run multiple applications
> and servers on a single system, you may run into dependency conflicts. You
> may update one library for one application, only to find it has broken
> another application.  Virtual machines let you run multiple independent
> installs, with independent dependency sets, to avoid these issues.  This is
> the big one, and I believe the reason most people improperly use virtual
> machines.
>

It comes down to right-sizing the resources of a particular workload. When
all the software runs in a single OS instance, a less important process can
hinder the needs of a more important one, and it used to take a lot of work
to fine-tune applications to cooperate well together. Instead, I make tiny
VMs for unimportant processes and don't give them a high priority. If they
aren't needed, they can be paused or migrated to another physical host.
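
Pausing an idle guest is similarly trivial with libvirt, for example (a
sketch; the domain name is hypothetical, and a running hypervisor is
required):

```shell
# Freeze a low-priority guest in place and later resume it.
# "mythtest-vm" is an illustrative domain name.
virsh suspend mythtest-vm   # guest CPUs stop; memory stays allocated
virsh resume  mythtest-vm   # guest continues where it left off
```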



> This has NOTHING TO DO with virtualization.  Virtual machines require this
> behavior, but this behavior does not require virtual machines.  You can do
> the SAME EXACT THING with a simple chroot, without all the unnecessary
> complexity and overhead of running a fully virtualized hardware instance.
>  All your required libraries would be in the self-contained chroot.  The
> only thing you would have to match for binary compatibility moving from one
> machine to the next to the next is the kernel interfaces, and those
> interfaces retain backwards compatibility for a long time.  If you really
> wanted, you could even implement these on opaque disk images, that would
> get loopback mounted where ever you wanted to run them, just like virtual
> machines.


I use chrooted images, and while useful (I run BackTrack in a chroot on my
phone), they don't satisfy most of the reasons people virtualize. There is no
real isolation, and if the kernel or a module crashes I can lose everything.
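
For reference, the loopback-image approach Raymond describes might look like
this in practice (a sketch only; paths and sizes are illustrative, and root
privileges are required):

```shell
# Create an opaque disk image holding a self-contained chroot, then
# mount and enter it -- portable to any machine with compatible kernel
# interfaces.  Paths and sizes are illustrative only.
dd if=/dev/zero of=/srv/app.img bs=1M count=1024   # 1 GB image file
mkfs.ext4 -F /srv/app.img                          # filesystem inside the image
mount -o loop /srv/app.img /mnt/app                # loopback mount
# ... populate /mnt/app with a minimal root filesystem ...
chroot /mnt/app /bin/sh                            # run inside the chroot
```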

The ability to isolate access to resources and tailor that access for
differing needs and differing workloads is the prime reason to virtualize.
In my opinion that applies in the home as well as in the enterprise.