

Hyper-V Linux P2V migrations using MondoRescue

International part:

MondoRescue is very useful for Hyper-V Linux P2V migrations

From Russian MS Forum

Migrating CentOS to Hyper-V

==

Please help with migrating a CentOS 5.5 server into a virtual environment.

How can this be done without rebuilding everything from scratch?

==

~

~

Mondo

~

~

http://www.mondorescue.org/downloads.shtml

~

~

==

Linux P2V migrations using MondoRescue, by Hewlett-Packard

MondoRescue is a backup and recovery tool for Linux. It's packaged for various
distributions and supports common architectures (i386, x86_64 and ia64). It
allows online and offline backups to network storage, local disks, CD/DVD and
tape. It supports a large variety of filesystems (including but not limited to
ext2, ext3, ReiserFS and XFS) and partition/disk layouts (software RAID,
hardware RAID and LVM1/2). During a restore MondoRescue will also resize
partitions to fit the new disk geometry. Those coming from an HP-UX
background may liken MondoRescue to HP Ignite-UX.

The methods MondoRescue uses to archive and restore a machine mean it's
well suited for use as a P2V/P2P (physical-to-virtual/physical-to-physical)
migration tool.

==
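For reference, a typical archive run might look like the sketch below: a minimal example based on standard mondoarchive options, with the destination path and media size as placeholders to adjust.

==
# back up the running system to bootable ISO images:
#  -O  perform a backup     -i  write the backup as ISO image(s)
#  -d  destination directory for the ISOs
#  -E  paths to exclude (here, the destination itself, to avoid recursion)
#  -N  skip network filesystems (NFS/SMB mounts)
#  -s  maximum size of each ISO
mondoarchive -Oi -d /var/backup/mondo -E /var/backup/mondo -N -s 4480m
==

The matching restore is done by booting from the first ISO and running mondorestore.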

~

~

A lot of interesting practical material:

~

Linux P2V migrations using MondoRescue by Lester Wade

http://www.mondorescue.org/docs.shtml

http://www.mondorescue.org/docs/p2v.pdf

~

~

==

CentOS 5.5

==

~

~

I very strongly advise upgrading the operating system to CentOS 5.9

~

~

EL v5.9 has the Hyper-V drivers integrated

~

~

P.S.

Funny: I found this by searching for hv_storvsc_load=YES ( which is actually from FreeBSD)

http://blogs.sysadminz.ru/blog/DrNight/59.html

==

. . .

I create the file /etc/yum.repos.d/mainrepo.repo with the following content:
[MainRepo]
name=MainRepo
baseurl=http://mirror.centos.org/centos/5/os/i386/
enabled=1
gpgcheck=0


. . .

Testing:
#yum clean all
#yum list

This should print the list of packages in the repository.

I update the system. To do this I add the following line to /etc/yum.conf:

exclude=centos-release-* perl-* initscripts-* openldap*
The first three components could not be updated right away, and I didn't dig any further. The updated openldap is incompatible with the current version of PGP, so we keep the old one.

I start the update:
#yum update

Despite the nominal CentOS 5.5 version, after the update all components are at the CentOS 5.9 level, which has built-in Hyper-V support.
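As a quick sanity check (standard EL commands, nothing specific to this setup), you can confirm that the package level really moved while centos-release stayed excluded:

==
# still reports 5.5, because centos-release-* was excluded
cat /etc/redhat-release
# the installed kernels should now include the 2.6.18-348.el5 (5.9-level) builds
rpm -q kernel kernel-PAE
==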

. . .

I check /etc/grub.conf for the version of the newly installed kernel and the paths to the boot files.

splashimage=(hd0,0)/grub/splash.xpm.gz
hiddenmenu
title CentOS (2.6.18-348.el5PAE)
root (hd0,0)
kernel /vmlinuz-2.6.18-348.el5PAE ro root=/dev/sda2 noexec=off
initrd /initrd-2.6.18-348.el5PAE.img
title CentOS (2.6.18-348.el5)
root (hd0,0)
kernel /vmlinuz-2.6.18-348.el5 ro root=/dev/sda2 noexec=off
initrd /initrd-2.6.18-348.el5.img


I enable the virtualization support modules:
#mkinitrd /boot/initrd-2.6.18-348.el5PAE.img 2.6.18-348.el5PAE --preload hv_storvsc --preload hv_vmbus --preload hv_utils --preload hv_netvsc --preload hid-hyperv --preload hid-base-hv -f
#mkinitrd /boot/initrd-2.6.18-348.el5.img 2.6.18-348.el5 --preload hv_storvsc --preload hv_vmbus --preload hv_utils --preload hv_netvsc --preload hid-hyperv --preload hid-base-hv -f

#shutdown -h now

I remove the legacy network adapter and add a regular (synthetic) one.

After powering on, I verify that the modules loaded successfully:
#lsmod | grep hv
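Before rebooting, you can also verify that the modules actually made it into the rebuilt initrd. A minimal sketch, assuming the standard EL5 gzip-compressed cpio initrd format:

==
# list the initrd contents and filter for the Hyper-V modules
zcat /boot/initrd-2.6.18-348.el5.img | cpio -t 2>/dev/null | grep hv
==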

. . .

==


~

~
==

and VMDK2VHD,

==

~

~

Or take the free utility from StarWind:

StarWind V2V Image Converter

VMware VMDK to Hyper-V VHD Converter

It converts VMDK to VHD files and VHD to VMDK, and also to IMG files.

http://ru.starwindsoftware.com/converter

~

~

P.S.

Naturally, _first_ you need to obtain an image of the physical machine's disks; as a rule, that will still be a .IMG file.

~

~

P.P.S.

An interesting method:

ssh IP_of_the_physical_system "dd if=/dev/sda" | dd of=/dev/sda

In essence, P2V is a dump of the physical system's disk into a virtual one. But for the system to come up afterwards, there must be a disk driver for the new SCSI controller (LSI Logic, BusLogic) that the physical box never had.

With Linux everything is very simple: add the driver to the initrd before/after the dump. With Windows, special converters are needed to reconfigure the system; otherwise it crashes with a blue screen.

That is:
1. Create a VM with a disk of the required size.
2. On the physical system, add the driver: mkinitrd --with=mptspi … (for LSI Logic)
3. Make a bit-for-bit copy of the partitions into the virtual disk. Boot the VM from some LiveCD and run (a compressed variant is sketched after this block):
ssh IP_of_the_physical_system "dd if=/dev/sda" | dd of=/dev/sda

I haven't tried this with Hyper-V, but I have done it repeatedly with ESX and XenServer. With Hyper-V it should be just the same, only the disk driver differs.

There may be nuances in the parameters when RAID or LVM is involved, but I hope the idea is clear.
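For larger disks it usually pays to compress the stream in flight. A hedged variant of the same idea (device names and block size are illustrative; double-check of= before running, since dd overwrites the target):

==
# run inside the VM booted from a LiveCD; pulls the physical disk over ssh
ssh root@IP_of_the_physical_system "dd if=/dev/sda bs=1M | gzip -c" | gzip -dc | dd of=/dev/sda bs=1M
==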

==




~

~

==

And what if there is no VMware infrastructure?


==

~

~

And what's the problem with deploying one? ;-)

~

Maybe you can get by with VMware Workstation.

But if VMware vCenter Converter requires an actual VMware ESX ( OK, "vSphere"), then

— it can be installed on a physical machine

— or VMware ESX can be run inside VMware Workstation

~

~

Или:

~

VirtualBox — emulates real IDE controllers, network cards, etc. quite faithfully.

Plus it understands .VHD natively.

~

~

But first, try loading the .VHD ( obtained from the .IMG) into Hyper-V.

If the synthetic LAN card is not detected ( CentOS 5.5 ships without the drivers), set 1 CPU and a Legacy LAN card.
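One hedged way to get from a raw .IMG to such a .VHD is qemu-img; "vpc" is its name for the VHD format, and the file names here are placeholders:

==
# convert a raw disk image to a Hyper-V-compatible VHD
qemu-img convert -f raw -O vpc physical-disk.img physical-disk.vhd
==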

~

~

~

~

==

For Linux P2V you can use PlateSpin

==

~

~

Here is an article with a usage example:

~

About PlateSpin Migrate 9.3, July 31, 2013

https://www.netiq.com/documentation/platespin_migrate_9/readme/data/readme.html

==

New hypervisor (virtualization platform) support: This release introduces support for two new virtualization platforms:

Microsoft Windows Server 2012 with Hyper-V

==

~

~

From the same page:

==

Discontinued: The following features have been discontinued in this release:

Support for Microsoft Windows Server 2008 R2 with Hyper-V (semi-automated) and Citrix XenServer 5.5 as target virtualization platforms (fully automated).

Support for Windows Clusters workloads.

Microsoft WinPE as the temporary pre-execution environment for migration.

Temporarily Discontinued: The following features have been temporarily discontinued in this release.

Support for imaging (X2I and I2X scenarios).

==

So it's probably a good thing that I managed to download PlateSpin Migrate 9.2 in time.

~

~

~

~

~

~

— — — — — — — — — — — —

~


~

TO psix :

— decide whether, in your situation, you can:

— upgrade CentOS to 5.9

— "in place", i.e. before the migration

— or after the migration

— or whether there are specific kernel modules ( *.ko) or the like, making an upgrade impossible

— and whether MondoRescue suits you ( it served me very well migrating Debian 6.X to Hyper-V)

~

~

~

~

FreeBSD 10 on Hyper-V

2013-10-10:

Good news: _both_ FastIDE and CD-ROM work in FreeBSD 10 on Hyper-V

http://svnweb.freebsd.org/base?view=…revision=256304
==
Revision 256304
Modified Thu Oct 10 22:46:49 2013 UTC
. . .
Allow the legacy CDROM device to be accessed in a FreeBSD guest,
while still using enlightened drivers for other block devices.
. . .
==

Details, and some info for Hyper-V sysadmins:

Microsoft hyperv dev team:

One of the issues with Hyper-V is that it does not virtualize the CD device and therefore
we rely on the ATA driver in the guest operating system to manage CDROMs.
What we would like to do is disable the ATA driver for all device types except the CDROM in the presence of Hyper-V.



vvm :

>> Disabling both the primary and secondary ATA controllers prevents use of the
>> CD-ROM device in the VM ( guest),
>>
>> because the "synthetic storage driver" does not handle CD-ROM devices
>> ( IMHO, even more: the Hyper-V host works with the CD-ROM only as a "PCI bus" device)

mav@ :

A.M.> Then maybe the possibility of blocking/hiding specific ATA channels or
A.M.> devices could be investigated.


mav@ :
>> Unfortunately, the CAM subsystem used for both the ATA and SCSI stacks in
>> FreeBSD 9.x and above is mostly unaware of the "NewBus" infrastructure
>> used for driver probe and attachment. That is why you can’t replace the
>> driver for a single disk the same way you replaced the driver for
>> the whole controller. The highest level present in "NewBus" is the ATA
>> channel. So if the disk and CD-ROM always live on different channels,
>> you can create a dummy driver not for the whole controller (atapciX),
>> but for a single hardcoded ATA channel (ataX).



Microsoft hyperv dev team:

>
> We have decided to have the DVD and the root device on different channels
> as that appears to be the simplest way to address this problem.
. . .
> . . . dummy driver for a single hardcoded ATA channel.
>The boot device has to be on the first channel,
> this would require the ISO device to be on the second channel.
>

mav@ :
==
Revision 256304 (view) . . .- [select for diffs]
Modified Thu Oct 10 22:46:49 2013 UTC
File length: 5783 byte(s)

Allow the legacy CDROM device to be accessed in a FreeBSD guest, while
still using enlightened drivers for other block devices.

Submitted by:	Microsoft hyperv dev team, mav@
Approved by:	re@

==



From: vvm
To: mav@
==
Thanks for the patch!
==

Microsoft hyperv dev team:
==
Thanks Alexander.
==


Microsoft hyperv dev team:
==
We tested the patch and it seems to work fine.
We observed that the CDROM is not accessible on channel 0 but is accessible on channel 1 as intended.
The solution is good enough as the Hyper-V UI in general biases a user
to attach the root device to channel 0
and the CDROM to channel 1
and we can further reinforce this for FreeBSD users.
I think this solution is good enough for now and we can explore more later.
==

vvm:
==
Yes: "this solution" (
"the root device to channel 0" with syntectic ATA driver ,
"CDROM accessible on channel 1" with non-Hyper-V specific ATA driver
) "is good enough for now"

I.e.:
1) "this solution" "will let the ATA driver install FreeBSD"
2) and is "way to enable FastIDE"

==


Microsoft hyperv dev team:
==
Many many thanks for getting this to work.
==




2013-10-08:

ToDo: check this with

FreeBSD-10.0-ALPHA5-amd64-memstick.img

2013-10-07:

FreeBSD in Hyper-V R2
==

Convert FreeBSD-10.0-ALPHA4-amd64-memstick.img to .VHD

Attach it as a disk to the secondary ATA channel, boot and install:

— the synthetic LANCard works ( you may need to run "dhclient hn0" )

==

P.S.

Sorry for the short post: I'm not sure whether I may publish some private e-mail messages;
it's best to ask about details in a comment or by e-mail

~

~

—– Original Message —–

From: "Victor Miasnikov"
To: "Abhishek Gupta (LIS)" ; "Karl Pielorz"; freebsd-virtualization (at) freebsd.org
Sent: Thursday, September 19, 2013 3:49 PM
Subject: Re: turn off 220V on UPS device =} file system got corrupted Re: Hyper-V 2012 Cluster / Failover – supported? – Any known issues?

A.G.> if high availability failover scenarios will work for FreeBSD VMs on Hyper-V.
A.G.>if the power plug is pulled from the Hyper-V server
A.G.>then would the FreeBSD VM failover and restart without any issues on the failover server.

Karl, do you want this behavior:
==
you walk up and yank the power cord out of the back of the server the secondary mirror will take over with zero client
downtime
==
or?


Karl, do you use an entry-level fault-tolerant system such as the ftServer 2600 by Stratus Technologies? Or an analog?

If not, then read some info about real Hyper-V Fault Tolerance:

. . .
==

~

Info:

~

http://freebsd.1045724.n5.nabble.com/Hyper-V-2012-Cluster-Failover-supported-Any-known-issues-td5844967.html

==

—–Original Message—–
From: Victor Miasnikov
Sent: Wednesday, September 18, 2013 8:46 AM
To: Abhishek Gupta (LIS); Karl Pielorz;
Subject: turn off 220V on UPS device =} file system got corrupted Re: Hyper-V 2012 Cluster / Failover – supported? – Any known issues?

Hi!

K.P.> – Pulling the power on the active node hosting both VM’s (i.e. Windows
K.P.> guest, and FreeBSD guest) – this showed the remaining node trying to bring
K.P.> up the VM’s (of which Windows came up OK, and FreeBSD [file system] got corrupted).


A.G.> Yes, it should work.
A.G.>My understanding is that the failover should be agnostic to the guest OS but there could be some integration
component that we might have missed.


What _exactly_ "should work"?


1) This issue is not related to the Hyper-V cluster itself
2) When "pulling the power", i.e. turning off 220V in Europe ( or 110V in the USA) at the UPS device, _both_ FAT on Windows and
the FreeBSD [file system] got corrupted

( "Windows came up OK" apparently because the file system on that VM is NTFS )


K.P.> Hyper-V correctly see’s the node fail, and restarts both VM’s on the
K.P.> remaining node. Windows 7 boots fine (says it wasn’t shut down correctly –
K.P.> which is correct) – but FreeBSD doesn’t survive.
K.P.>
K.P.> At boot time we get a blank screen with "-" on it (i.e. the first part of
K.P.> the boot ’spinner’) – and nothing else.
K.P.>
K.P.> Booting to a network copy of FreeBSD and looking at the underlying virtual
K.P.> disk – it appears to be trashed. You can mount it (but it understandably
K.P.> warns it’s not clean) – however, any access leads to an instant panic (’bad
K.P.> dir ino 2 at offset 0: mangled entry’).
K.P.>
K.P.> Trying to run fsck against the file system throws up an impressive amounts
K.P.> of ’bad magic’ errors and ’rebuild cylinder group?’ prompts.

To Karl: I asked you about some details . . .
Did you see the related e-mail?


Best regards, Victor Miasnikov

Abhishek Gupta (LIS)
Sep 18, 2013; 9:18pm RE: turn off 220V on UPS device =} file system got corrupted Re: Hyper-V 2012 Cluster

Karl is asking if high availability failover scenarios will work for FreeBSD VMs on Hyper-V. He was specifically interested in knowing if the power plug is pulled from the Hyper-V server then would the FreeBSD VM failover and restart without any issues on the failover server.

My response was that yes the above scenario should work.
Thanks,
Abhishek


==

~

~

http://blogs.msdn.com/b/clustering/archive/2010/10/06/10072013.aspx

==

Evaluating High-Availability (HA) vs. Fault Tolerant (FT) Solutions

10-06-2010 4:09 AM

High Availability Solutions

High availability solutions traditionally consist of a set of loosely coupled servers which have failover capabilities. Each system is independent and self-contained, yet the servers are health-monitoring each other, and in the event of a failure, applications will be restarted on a different server in the pool of the cluster. Windows Server Failover Clustering is an example of an HA solution. HA solutions provide health monitoring and fault recovery to increase the availability of applications. A good way to think of it is that if a system crashes (like the power cord was pulled), the application very quickly restarts on another system. HA systems can recover in the magnitude of seconds, and can achieve five 9’s of uptime (99.999%)… but they realistically can’t deliver zero downtime for unplanned failures. They also are flexible in that they enable recovery of any application running on any server in the cluster.

Fault Tolerant Solutions

Fault tolerant solutions traditionally consist of a pair of tightly coupled systems which provide redundancy. Generally speaking this involves running a single copy of the operating system, and the application within, running consistently on two physical servers. The two systems are in lock step, so when any instruction is executed on one system, it is also executed on the secondary system. A good way to think of it is that you have two separate machines that are mirrored. In the event that the main system has a hardware failure, the secondary system takes over and there is zero downtime.

HA vs. FT

So which solution is right for you? Well, the initial and obvious conclusion most instantly come to is that ’no’ downtime is better than ’some’ downtime, so FT must be preferred over HA! Zero downtime is also the ultimate IT utopia which we all strive to achieve, which is goodness. Also FT is pretty cool from a technology perspective, so that tends to get the geek in all of us excited and interested.

However, it is important to understand they protect against different types of scenarios… and the key aspect to understand is which are the most important to you and your business requirements. It is true that FT solutions provide great resilience to hardware faults, such as if you walk up and yank the power cord out of the back of the server… the secondary mirror will take over with zero client downtime. However, remember that FT solutions are running a common operating system across those systems. In the event that there is a software fault (such as a hang or crash), both machines are affected and the entire solution goes down. There is no protection from software fault scenarios, and at the same time you are doubling your hardware and maintenance costs. At the end of the day, while a FT solution may promise zero downtime for unplanned failures, it is in reality only for a small set of failure conditions. With a loosely coupled HA solution such as Failover Clustering, in the event of a hang or blue screen from a buggy driver or leaky application, the application will failover and recover on another independent system.

==

~

~

http://social.technet.microsoft.com/Forums/windowsserver/en-US/d965a9d9-4324-4da8-a326-3159fb48c43d/vms-rebooting-after-simulating-failover

==

I’ve got a two-node server cluster, WS 2008R2 x64, Hyper-V and CSV. Everything seems to be working fine, along with live migration.

I am currently testing the functionality of the setup; here is my current layout:

Node A: VM 1
Node B: VM 2

When I simulate a host failure on node A, VM 1 transfers over to Node B but reboots the virtual machine before bringing it back up.

Is this normal behavior for Clustering with CSV? I have another cluster set up in the same manner but without CSV enabled. It’s been a while, but I’m sure that when this was tested the virtual machine that failed over didn’t reboot.

Is this a difference between High availability and Fault tolerance?

If any of you guys can shed some light, it would be of great help…

Thanks :)


Yes, it works as expected. "High availability" does not mean "no downtime". If you want to have zero (OK, close to zero) downtime then you need to either configure a guest VM cluster or use your app's built-in clustering features. If your app has none and it's not cluster-aware, consider moving to VMware to use its Fault Tolerance feature (no equivalent for Hyper-V so far).

. . .

As VR38DETT says, what you are seeing is normal. Live Migration, which moves a VM from one host to another, is a planned action. You tell the clustering software to move the machine. This gives the software time to copy the contents of the memory on the currently hosting machine to the memory of the destination machine. In a failover environment, there is no time for the memory on the failed machine to get copied. Therefore, all that can happen is to start the VM with a boot to get the memory loaded into the destination machine. That’s a pretty typical definition of high availability.

Now with 2012 coming out, there is a capability the Microsoft engineers have built in called ’replica’. It keeps a copy of the memory of a running virtual machine on another virtual machine. However, it is asynchronous, so it is not always up to the second current. But it gets much closer to what you are asking for.

Or, there are third parties, such as Stratus, that create a mirrored environment between two systems in order to keep two copies up to date. As you can imagine, there are additional costs involved in such a solution as this, so you need to make the business case for 100% availability.

And, as VR38DETT says, with additional capabilities, like clustering the VMs at their operating system/application level, you can provide a different sort of [near] continuous operation. I say [near] because it is definitely dependent upon the software you are running within the VM. For example, if you are running a particular type of SQL Server, you have SQL running on both nodes of a pair of clustered VMs, and if the Hyper-V host fails, the SQL will continue operating on the surviving VM. But there may be a very brief period of unavailability while the surviving SQL VM takes ownership and starts serving out requests. Neither the OS nor SQL would have to restart in this environment, but it does take just a bit of time to transfer ownership of the resources.

==

~

~

http://windowsitpro.com/hyper-v/q-are-there-any-fault-tolerant-solutions-hyper-v

==

Q: Are there any fault-tolerant solutions for Hyper-V?

| Windows IT Pro Aug. 4, 2012

A: Fault tolerance allows a virtual machine (VM) to carry on running without interruption, even in unplanned host failures such as a host crashing. This is different from high availability, part of Hyper-V, which in a host failure moves VMs to another host but has to restart the VMs, incurring a small outage to the VM. This is also different from planned outages, which allow a VM to be moved between hosts with no downtime using technologies such as Live Migration.

This fault tolerance is achieved by the VM running on multiple hosts with changes from the master replicated in real time to the slave. It should be noted that fault tolerance protects only from a crash of the host; any problem within the guest OS isn’t protected by fault tolerance as any guest problem would just replicate to the copy.

Hyper-V doesn’t have a built-in fault tolerant solution, but there are some options from third parties you can evaluate. However, typically fault tolerance of an application is better handled through application-aware solutions or guest clustering, which provide protection from guest OS crashes. (A good discussion of this can be found at this MSDN blog.) The two main third-party solutions are as follows:

==

==

Stratus Launches ’Mission-Critical Hyper-V’ across …

MAYNARD, Mass., USA and LONDON, UK, Oct. 6, 2010 -Expanding the options for businesses seeking affordable uptime reliability for demanding virtual workloads, Stratus Technologies today announced support for Microsoft Hyper-V across its entire ftServer line of fault-tolerant platforms. Mission
sourcewire.com/news/59737/stratus-launches-mission-cri…

==

==

. . .

the company went a step further by announcing support for Hyper-V. This means that the Microsoft hypervisor gets out-of-the-box the famous Stratus’ 99.999% uptime.

Specifically, Stratus now supports Windows Server 2008 R2 on its entry level fault tolerant system ftServer 2600.

. . .

==

Linux Hyper-V Dynamic Memory hot add

2013-09-13: Good “hot news”:
VVM>> OK: when can we use Dynamic Memory hot-add in RHEL ?
> RHEL-6.5 will add auto enable of hotplug memory for the balloon driver

http://www.phoronix.com/scan.php?page=news_item&px=MTMyMjU
===
The set of six new patches enhances their memory balloon driver to add in support for memory hot-add. System memory to Linux guests is dynamically managed at run-time and now implemented for the Windows Dynamic Memory protocol, which is a combination of ballooning and hot-add for the dynamic balancing of available memory across competing virtual machines.
===

Dynamic Memory hot-add works ( at least once :-) )

==

SLES_v11.3_x64 Dynamic Memory 598Mb Uptime 00-55-15

==
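A quick in-guest way to watch the hot-added memory arrive (plain procfs, nothing Hyper-V-specific):

==
# total memory as currently seen by the guest kernel
grep MemTotal /proc/meminfo
free -m
==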

I tried to add a comment on the page

http://www.aidanfinn.com/?p=15458

but:
==
Sorry, comments are closed for this item.
==

See the text of this comment at:

http://www.aidanfinn.com/?p=15458&cpage=1#comment-75033

====

http://vvm.blog.tut.by/2013/09/11/linux-hyper-v-dynamic-memory-hot-add/

and on

http://social.technet.microsoft.com/Forums/windowsserver/en-US/8e1994b9-9ca1-4411-ad8e-25e6b1ee28e1/dynamic-memory-on-linux-vm

( some external URL: )
===
The set of six new patches enhances their memory balloon driver to add in support for memory hot-add. System memory to Linux guests is dynamically managed at run-time and now implemented for the Windows Dynamic Memory protocol, which is a combination of ballooning and hot-add for the dynamic balancing of available memory across competing virtual machines.
===

Dynamic memory hot-add works

Details:
==
Image with SLES 11.3
Startup Memory 512MB
Memory 512MB
Assigned Memory 598MB

==

====


Artenmonk said: 2013.09.10 14:22

Aidan, can you show the system load ( the ‘w’ command)? When I enable dynamic memory, I see load 1. When the system does not work, the load should be 0. I think there is a bug here.


John Spade said: 2013.09.11 01:50

I believe what Artenmonk meant was something I experienced on WS 2012 (R1) and Centos 6.4. I noticed that dynamic memory was functioning and it seemed to adjust reasonably well. However, when you looked at an idle system, the load via ‘w’ or ‘uptime’ was always 1.00 rather than 0.00, but only when dynamic memory was enabled. Inside the WS2012 host, there did not appear to be any CPU load. I also experienced a few crashes of that VM. Disabling dynamic memory made it function normally and stable. I hope to re-test it with R2 soon.


2013-05-23

http://www.mail-archive.com/linux-kernel@vger.kernel.org/msg439101.html

==
To the SUSE Hyper-V for Linux Team:

Olaf, I received your e-mail message ( the answer to my earlier message)

Thanks for the answer.
Unfortunately, your ( SUSE/Novell) bug tracker does not open in IE on Windows Server editions.
I tried it from an OpenSUSE v12.3 LiveCD, but I cannot boot from it in a Hyper-V VM when “Dynamic Memory” is turned on.

See details on
{ this page }

Or ask the SUSE Support Team to write to me by e-mail for details.
==

2013-05-18

OpenSUSE 13.1 Milestone 1 Build0466 on Hyper-V

First impression:

Even the KDE LiveCD includes GParted 0.16.1 ( with support for LVM2 PV Resize/Move )

uname -a
==
Linux linux.site 3.9.0-1-desktop #1 SMP PREEMPT Tue May 7 08:14:56 UTC 2013 (d6e99fd) x86_64 x86_64 x86_64 GNU/Linux
==
lsmod
==
hv_utils 13647 0
hv_netvsc 31301 0
hv_storvsc 17568 0

hyperv_fb 17606 2
fb_sys_fops 12703 1 hyperv_fb
sysimgblt 12674 1 hyperv_fb
sysfillrect 12701 1 hyperv_fb
syscopyarea 12529 1 hyperv_fb
hid_hyperv 13059 0
hv_vmbus 51328 5 hyperv_fb,hv_utils,hv_netvsc,hv_storvsc,hid_hyperv
==

No hv_balloon :-(
( Maybe only in the LiveCD environment? )

dmesg
==
[ 0.000000] Linux version 3.9.0-1-desktop (geeko@buildhost) (gcc version 4.7.3 (SUSE Linux) ) #1 SMP PREEMPT Tue May 7 08:14:56 UTC 2013 (d6e99fd)

[ 2.082841] Disabled vesafb on Hyper-V.

[ 14.779219] hv_vmbus: registering driver hyperv_fb
[ 14.794243] hyperv_fb: Screen resolution: 1152x864, Color depth: 16
==

==
[ 2.492437] ata_piix 0000:00:07.1: Hyper-V Virtual Machine detected, ATA device ignore set
==

==
[ 7.530438] input: Microsoft Vmbus HID-compliant Mouse as /devices/virtual/input/input3
[ 7.530519] hid-generic 0006:045E:0621.0001: input: HID v0.01 Mouse [Microsoft Vmbus HID-compliant Mouse] on
==

From source code:

==
> +#if IS_ENABLED(CONFIG_HYPERV_FB)
> + /*
> + * On Hyper-V both the emulated and synthetic video devices are
> + * available. To avoid conflicts, we disable vesafb for the emulated
> + * video if hyperv_fb is configured.
> + */
> + if (is_hyperv()) {
> + pr_info("Disabled vesafb on Hyper-V.\n");
> + return -ENODEV;
> + }
> +#endif
==

===
+ * Hyper-V Synthetic Video Frame Buffer Driver
+ *
+ * This is the driver for the Hyper-V Synthetic Video, which supports
+ * screen resolution up to Full HD 1920x1080 with 32 bit color on Windows
+ * Server 2012, and 1600x1200 with 16 bit color on Windows Server 2008 R2
+ * or earlier.
+ *
+ * It also solves the double mouse cursor issue of the emulated video mode.
+ *
+ * The default screen resolution is 1152x864, which may be changed by a
+ * kernel parameter:
+ * video=hyperv_fb:<width>x<height>
+ * For example: video=hyperv_fb:1280x1024
+ *
+ * Portrait orientation is also supported:
+ * For example: video=hyperv_fb:864x1152
===

“solves the double mouse cursor” — yes, solved

2013-05-22 {{

RAM usage of OpenSUSE LiveCD[s]:

OpenSUSE 13.1-Milestone1 Rescue-CD x86_64,
based on XFCE 4.10, needs 1140MB .. 1200MB

SuSE 13.X x64 KDE LiveCD ( i.e. OS and GUI without ASSP):
1500MB — the _absolute_ minimum
2220MB — recommended by the Hyper-V MMC

}}

back up Linux Hyper-V guest virtual machines (VMs) through Volume Shadow Copy Service (VSS)

2013-07-11 :

Back-link:

Error 4096 Hyper-V VSS ( topic on technet.microsoft.com Forums )
===

I am getting the following error (see below) on an Hyper-V box with a ( VVM: Linux ) guest – I have the online backup integration service disabled as this is not supported but still get this error.

Log Name: Microsoft-Windows-Hyper-V-Integration-Admin
Source: Microsoft-Windows-Hyper-V-Integration-VSS
Date: 27/03/2012 08:41:06
Event ID: 4096

Description:
The Volume Shadow Copy integration service is either not enabled, not running, or not initialized. (Virtual machine)

===
Answer:

2013-05-13 :

Looks like VSS has been implemented in Linux on Hyper-V since Linux kernel v3.10-rc1, 2013-05-12

See
hv_vss_daemon.c
==
An implementation of the host initiated guest snapshot for Hyper-V.
==

http://www.altaro.com/hyper-v/linux-on-hyper-v/

==
Backup

Hyper-V’s built-in method of enabling back up of Windows virtual machines while they’re powered on involves the tried-and-true Volume Shadow Copy Service (VSS) that Microsoft has employed since Windows 2000. VSS is triggered at the hypervisor level and it communicates through the integration components with VSS inside the virtual machine. If you’re interested, we provided an earlier series that covered this in some detail. Since VSS is a proprietary Microsoft technology, your Linux virtual machines won’t have it. As a general rule, Hyper-V backups will not be able to take a live backup of Linux guests. That’s because if VSS in the hypervisor is unable to communicate with VSS through integration components, its default behavior is to save the virtual machine, take a VSS snapshot, and then bring the virtual machine back online. Some backup applications, notably Altaro Hyper-V Backup (Disclaimer: this blog is run by Altaro), can override this behavior and back up a Linux guest without interruption. Even with this capability, nothing can escape the fact that Linux does not have VSS. These backups will be taken live, but the backed up image will be crash-consistent, not application-consistent. If you’re not sure what that means, please reference the VSS article linked earlier.
==


http://windowsitpro.com/virtualization/q-can-i-back-linux-hyper-v-guest-virtual-machines-vms-through-volume-shadow-copy-serv

==

Q. Can I back up Linux Hyper-V guest virtual machines (VMs) through Volume Shadow Copy Service (VSS), avoiding the need to stop the VMs?

A. VSS consists of several components, including VSS writers that are provided by application authors to enable a consistent backup to be taken of application and OS data without stopping the application or OS. These backups work by making a VSS request. The VSS writers are notified of an imminent backup, so they make sure data is flushed to disk and further activity is cached, ensuring the data on disk that’s being backed up is in a consistent state and is restorable.

Hyper-V extends this backup functionality by allowing a VSS backup to be taken at the Hyper-V host level. The VSS request is actually passed through the integration services to the OS of Windows VMs, which then notifies the registered VSS writers in the VM of the backup. So backups can be initiated at the Hyper-V host level and VM backups will still be consistent and usable, without actually doing anything in the guest OS.

Certain versions of Linux are also supported on Hyper-V, but Linux OSs don’t support VSS. So a backup taken on the Hyper-V host can’t tell the Linux OS in a guest VM to put itself in a backup-consistent state. To back up a Linux OS, either stop the VM while you take the backup or, if you can’t have downtime, perform the backup from within the Linux VM instead of at the Hyper-V host level.
==
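For the "from within the Linux VM" route, one common pattern is an LVM snapshot plus tar. A minimal sketch, assuming the guest's data lives on LVM in a volume group vg0 with a logical volume root (both names are placeholders):

==
# freeze a point-in-time view, archive it, then drop the snapshot
lvcreate --size 1G --snapshot --name root_snap /dev/vg0/root
mkdir -p /mnt/snap
mount -o ro /dev/vg0/root_snap /mnt/snap
tar czf /backup/root-backup.tar.gz -C /mnt/snap .
umount /mnt/snap
lvremove -f /dev/vg0/root_snap
==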

RHEL 6.4 on Hyper-V

RHEL 6.4 / CentOS 6.4 include support for Hyper-V drivers

https://access.redhat.com/knowledge/docs/en-US/Red_Hat_Enterprise_Linux/6/html-single/6.4_Release_Notes/index.html

==

8.2. Hyper-V

. . .

Inclusion of, and Guest Installation Support for, Microsoft Hyper-V Drivers

Integrated Red Hat Enterprise Linux guest installation, and Hyper-V para-virtualized device support in Red Hat Enterprise Linux 6.4 on Microsoft Hyper-V allows users to run Red Hat Enterprise Linux 6.4 as a guest on top of Microsoft Hyper-V hypervisors. The following Hyper-V drivers and a clock source have been added to the kernel shipped with Red Hat Enterprise Linux 6.4:
  • a network driver (hv_netvsc)
  • a storage driver (hv_storvsc)
  • an HID-compliant mouse driver (hid_hyperv)
  • a VMbus driver (hv_vmbus)
  • a util driver (hv_util)
  • an IDE disk driver (ata_piix)
  • a balloon driver (hv_balloon)
  • a clock source (i386, AMD64/Intel 64: hyperv_clocksource)
Red Hat Enterprise Linux 6.4 also includes support for Hyper-V as a clock source and a guest Hyper-V Key-Value Pair (KVP) daemon (hypervkvpd) that passes basic information, such as the guest IP, the FQDN, OS name, and OS release number, to the host through VMbus. An IP injection functionality is also provided which allows you to change the IP address of a guest from the host via the hypervkvpd daemon.

. . .

Hyper-V balloon Driver

On Red Hat Enterprise Linux 6.4 guests, the balloon driver, a basic driver for the dynamic memory management functionality supported on Hyper-V hosts, was added. The balloon driver is used to dynamically remove memory from a virtual machine. Windows guests support Dynamic Memory with a combination of ballooning and hot adding. In the current implementation of the balloon driver for Linux, only the ballooning functionality is implemented, not the hot-add functionality.

==

==

RHEL 6.4 -- lsmod with hv_balloon

==

http://social.technet.microsoft.com/Forums/en-US/linuxintegrationservices/thread/c0df70a8-b65c-4400-bc16-2b2985c941ef

==

Andy Schmidt

Wednesday, January 23, 2013 7:45 PM

I’m new to the Unix world. I was able to install two virtual machines, one hosting an Apache web server and one a MySQL server. The problem is that the Integration Services are flawed:

a) They disable the ability to mount CDROMs to that virtual machine:
http://support.microsoft.com/kb/2600152
I’m not sure if the INSMOD work-around will disable the faster disk driver? The command doesn’t seem to be specific to the CDROM devices…

b) Worse, I’m unable to install any security fixes to the kernel, because of:
http://support.microsoft.com/kb/2387594
Unfortunately, the DKMS work-around is based on IC 2.1 – when the ISO still contained the source. The 3.4 (and prior) are RPMs.

There is also a work-around floating around which patches the uname string to the hardcoded new kernel name before calling "MAKE". It claims to work with IC 3.2, but when I checked THAT ISO, it too had NO source – so I can’t imagine that working.
http://www.acumen-corp.com/Blog/tabid/298/entryid/19/HOWTO-Upgrading-the-CentOS-kernel-for-a-Hyper-V-virtual-machine-with-Linux-Integration-Components.aspx

I wonder if there is a way to "uninstall" the integration services, "re-enable" the necessary native drivers, then run the kernel updates, and after rebooting, re-install the 3.4 integration services?

In general, I’m a bit surprised that the lack of CD/DVD support and the inability to run kernel updates hasn’t bubbled to the top of the priority list after so many months/years – as I would have expected every single user to encounter it?


==

==

{{{

a) They disable the ability to mount CDROMs to that virtual machine:
http://support.microsoft.com/kb/2600152

=={{

CAUSE:

This issue occurs because the Hyper-V Linux Integration Services unloads the ata_piix driver

[ VVM: "unloads" looks more like "prevents loading" :

=={

install ata_piix { /sbin/modprobe hv_storvsc 2>&1 || /sbin/modprobe --ignore-install ata_piix; }

==}

]

in order to provide an optimized IDE driver (hv_blkvsc [ VVM: hv_blkvsc is RIP; hv_storvsc handles _both_ IDE and SCSI disks since kernel v3.2, or with LIC/LIS v3.4 ] ) for the root [ VVM, to all sysadmins: on IDE only the boot loader ( GRUB or syslinux) and /boot need to be placed; everything _else_ ( including / ) is best placed on SCSI. But because the current RHEL does not ship hv_storvsc on the install CD-ROM, we need to use "Mondo Rescue", restoring after a Mondo backup ] file system.

WORKAROUND:

To mount an ISO file in the virtual machine, the following command must be run before executing the mount command:

# insmod /lib/modules/$(uname -r)/kernel/drivers/ata/ata_piix.ko

WORKAROUND N2:

Alternatively, copy the ISO file into the virtual machine and mount it using the -o loop option.

[ VVM: Bingo! 8-/ But . . .

What about the kernel for a LiveCD? In any case a patched ata_piix is needed ]

==}}

I’m not sure if the INSMOD work-around will disable the faster disk driver? The command doesn’t seem to be specific to the CDROM devices…


~

Yes: it does not disable the "faster disk driver", because that driver is hv_storvsc

Yes #2: this command is not "specific to the CDROM devices", and afterwards

both the unpatched ata_piix and hv_storvsc are loaded

You can see the results by running blkid and looking in its output ( and in _fact_) for duplicates(!) of the IDE disks

~

~

Again: the _true_ solution is to "backport the ata_piix patches related to Hyper-V"

==
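To illustrate the double-attach symptom just described, a hedged check (plain blkid; device names will vary): the same filesystem UUID appearing under both /dev/sdX (via hv_storvsc) and /dev/hdX (via ata_piix) is the tell-tale sign.

==
# duplicated UUIDs across /dev/sd* and /dev/hd* reveal the doubly-attached IDE disks
blkid | sort
==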

==

Solutions ( ordered from least problematic to most):

— use RHEL 6.4

==

Shortly:

Debian Wheezy (7.0) contains Hyper-V drivers backported from kernel 3.4.
But the Hyper-V kernel modules ( as a minimum,

hv_vmbus
hv_utils

hv_storvsc
hv_netvsc

) are missing from installer images of Debian Wheezy created before 2012-11-13

Solution:
You need to use an .iso created after 2012-11-14
( or the .iso by Arnaud Patard )

You'll be able to use the paravirt NIC and SCSI disks;
additionally you'll get mouse integration and support for more than 1 vCPU.

Debian v7.0 on Hyper-V
or: farewell, Legacy LAN card


Or how adding a single line
==
hyperv-modules
==

to the right file solves a host of problems when installing Debian v7.0 on Hyper-V


Use the .iso by Arnaud Patard for installation:

http://www.hupstream.com/~rtp/azure/monolithic/mini.iso

and forget about:
— switching the network card to Legacy and back
— the ata_piix.prefer_ms_hyperv=0 boot parameter
— reattaching HDDs to IDE
— the ceiling of 3 HDDs


As a result:

==
SCSI , 10Gb LanCard hupstream.com-monolithic-mini.iso-2012-10-16-.iso _
==

==
hupstream.com-monolithic-mini.iso-2012-10-16-_.iso-Image-2[1]
==


International part:

——————–

From: "Victor Miasnikov" <vvm (at) tut (dot) by>
To: "Arnaud Patard"
Cc: "Mathieu Simon"; "Bernhard Schmidt" ; <690978@bugs.debian.org>; <684283@bugs.debian.org>
Sent: Wednesday, October 31, 2012 5:59 PM
Subject: All work as need with 0001-Add-Hyper-V-modules-to-netboot-and-cdrom.patch Fw: debian-installer: d-i unable to find disk storage on Hyper-V Fw: the Hyper-V kernel modules ( as minimum, hv_vmbus hv_utils hv_storvsc hv_netvsc )


Hi!


==
From: Arnaud Patard apatard (at) hupstream (dot) com
To: Debian Bug Tracking System
Subject: debian-installer: d-i unable to find disk storage on Hyper-V
Date: Fri, 19 Oct 2012 20:35:01 +0200

install testing on a Hyper-V VM through CD . . . Hyper-V drivers

An installation has succeeded with the attached patch and a d-i monolithic
iso.


0001-Add-Hyper-V-modules-to-netboot-and-cdrom.patch
==
Commit cd006086fa5d91414d8ff9ff2b78fbb593878e3c:

. . .

build/pkg-lists/cdrom/isolinux/amd64.cfg | 2 ++
build/pkg-lists/cdrom/isolinux/i386.cfg | 1 +
build/pkg-lists/netboot/amd64.cfg | 1 +
build/pkg-lists/netboot/i386.cfg | 1 +

. . .

--- a/build/pkg-lists/ <All> / <All> .cfg
+++ b/build/pkg-lists/ <All> / <All> .cfg

. . .

+hyperv-modules-${kernel:Version}


==

==



Good job!

The .patch implements this


==
But the _true_ solution is to

add the Hyper-V kernel modules ( as a minimum,

hv_vmbus
hv_utils

hv_storvsc
hv_netvsc

) to the initramfs of the installer .ISO
==
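A rough illustrative sketch of that "true solution" done by hand (the official fix ended up being the udeb patch quoted above; the kernel version and paths are hypothetical and must match the installer's own kernel):

==
# unpack the installer initrd, add the four Hyper-V modules, repack
mkdir /tmp/initrd && cd /tmp/initrd
zcat /path/to/install.amd/initrd.gz | cpio -id
KVER=3.2.0-4-amd64   # hypothetical; use the installer's own kernel version
for m in hv/hv_vmbus.ko hv/hv_utils.ko net/hyperv/hv_netvsc.ko scsi/hv_storvsc.ko; do
  install -D /lib/modules/$KVER/kernel/drivers/$m lib/modules/$KVER/kernel/drivers/$m
done
find . | cpio -o -H newc | gzip -9 > /tmp/initrd.gz
==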


When booting, use
http://www.hupstream.com/~rtp/azure/monolithic/mini.iso


Everything works as needed:

SCSI, 10Gb LanCard:

Image 1
==

==

Image 2
==

==


Best regards, Victor Miasnikov
Blog: http://vvm.blog.tut.by/



P.S.

==
Why would you use a workaround on command line
while we have a udeb containing the right module, which means that one
can install in Hyper-V out of the box ?
==

As a _temporary_ solution ( see the example later)



P.P.S.

But this .iso
{
Debian v7.X _x64 Daily build 2012-10-31
http://cdimage.debian.org/cdimage/daily-builds/daily/arch-latest/amd64/iso-cd/

debian-testing-amd64-netinst.iso

}

does not contain, in the "initrd" file from
install.amd/initrd.gz, the modules

kernel/drivers/hv/hv_vmbus.ko
kernel/drivers/hv/hv_utils.ko
kernel/drivers/net/hyperv/hv_netvsc.ko
kernel/drivers/scsi/hv_storvsc.ko

This is very bad for Hyper-V admins :-(


P.P.P.S.

(

http://bugs.debian.org/cgi-bin/bugreport.cgi?bug=684283
Date: Wed, 8 Aug 2012 12:51:01 UTC

+
http://bugs.debian.org/cgi-bin/bugreport.cgi?bug=690978
Date: Fri, 19 Oct 2012 18:54:04 UTC

)

=>

MS Excel thinks
2012-10-31 – 2012-08-08 = 24.03.1900

:-)

2012-11-02 :

http://bugs.debian.org/cgi-bin/bugreport.cgi?bug=690978

—————-
From: Cyril Brulebois
To: Arnaud Patard , 690978@bugs.debian.org
Subject: Re: Bug#690978: debian-installer: d-i unable to find disk storage on Hyper-V
Date: Fri, 2 Nov 2012 11:54:25 +0100

> . . . with the attached patch . . .

Thanks, I’ve just applied your patch, it should be part of beta 4.

Mraw,
KiBi.


2012-11-30:

– — –
From: Cyril Brulebois
To: 690978-close@bugs.debian.org
Subject: Bug#690978: fixed in debian-installer 20121114
Date: Wed, 14 Nov 2012 16:32:45 +0000

. . .

Changes:
debian-installer (20121114)

. . .

[ Cyril Brulebois ]
* Apply patch from Arnaud Patard to include Hyper-V linux kernel udebs
on cdrom and netboot images for amd64 and i386 (Closes: #690978). This
is needed after a kernel change on the ata_piix side (cd006086fa in
mainline).

FIXed

Tested with

2012-11-30
debian-testing-amd64-netinst.iso
SHA1 b544066bbdd40c4a561007064dafc359a750e4e4

"Open/Public letter":
Erik Scholten ( in http://www.vmguru.nl/wordpress/2012/09/new-enterprise-hypervisor-comparison-2/#disqus_thread ):
If you have improvements, please mention it in the comments or write me an e-mail specifying which value needs to be altered and some proof of your statement, an MS website, video, etc.

{
www.vmguru.nl — Enterprise Hypervisor comparison v4.2 2012-10-01
==
Hypervisor comparison
Version 4.2
A feature comparison of the three main competitors in the hypervisor space, Citrix, Microsoft and VMware.
Updated to the latest two versions, Citrix XenServer 5.6 & 6.0, Microsoft Hyper-V 2008 R2 SP1 & 2012 and VMware vSphere 5.0 & 5.1.
©2012, VMGuru.nl
Erik Scholten
1 October 2012

At manufacturer’s websites and in the blogosphere there are many hypervisor comparisons which only compare hypervisors based on a single driver (performance, features or cost). In my opinion it’s a bit more complicated than that. After the everlasting discussion on make-believe cheaper Microsoft Hyper-V and Citrix XenServer implementations, I spend a fair deal of my time explaining to colleagues and clients that this is a hoax and that cost is not the only reason to base their decision on. Especially in the case of XenServer the choice and the long term effects make it a little bit more complicated.

Now you probably think ‘These VMGuru.nl guys are VMware fans so here we go again‘ but the opposite is true. Like Chris I think every situation has its own ideal solution and you should select the hypervisor based on well-considered selection criteria and because my employer, Imtech ICT, focuses on clients with 500+ workstations/employees these criteria are Enterprise-class hypervisor selection criteria.

==

}

{

I did not find the word "dynamic" ( or any info about Dynamic Memory in MS Hyper-V) in Enterprise Hypervisor comparison v4.2 2012-10-01 ( 2012/09/Hypervisor-comparison.pdf 01 Oct 2012 17:55:04 GMT )

==

Victor Miasnikov:

Hyper-V 2008 R2 _supports_ "Hot-add memory"

Erik Scholten:
With SP1 for Hyper-V R2 Microsoft added a feature that is called Dynamic Memory, which adjusts the amount of memory
available to virtual machines depending on the needs of each virtual machine. Which is much like but not entirely
similar to hot-add/remove.
Victor Miasnikov:
Dynamic Memory can increase/decrease _without_ reboot etc., in real time

<=>
in _practice_ it is similar ( _very_ similar) to hot-add/remove
==

Erik Scholten ( in Enterprise Hypervisor comparison v4.2 2012-10-01 ( 2012/09/Hypervisor-comparison.pdf 01 Oct 2012 17:55:04 GMT ) :
==
Now you probably think ‘These VMGuru.nl guys are VMware fans so here we go again‘ but the opposite is true.

==
In fact, it is true, i.e. the VMGuru.nl guys are VMware fans
}

IMHO, more comments are needed:

{
Solaris ( and _all_ OS _without_ IC , for example see later Win NT 4.0 )
==
. . .
Quite some OS lacking Hyper-V drivers (like BSD) do run on Hyper-V but with quite of a performance penalty if at all:

  • Only legacy (emulated) 100Mbit Ethernet
  • Only 4 emulated IDE disks which are then quite slow, no paravirt SCSI controller
  • 1 vCPU and no dynamic memory (Linux doesn’t support the latter one today, only Windows guests) . . .

. . .

==

}
{
Windows Server 2000
==
Friday, 6 March, 2009

We have need for a temporary legacy setup here at the shop.

We used a Windows Server 2000 Standard SP2 CD ISO to install the base OS.

We have an ISO based DVD with every conceivable Microsoft service pack and needed critical update on it that we use to service our VMs. We needed to mount that ISO and update to Service Pack 4 before we could get the Hyper-V Integration Services installed.

==

but only 1 CPU :
==
Configure virtual machines running Windows 2000 Server with 1 virtual processor.
==

}
again, IMHO, more comments are needed:
{
Windows NT 4.0
==

. . .

Legacy operating system support

Q. The virtual machine settings include a processor option which limits processor functionality to run an older operating system such as Windows NT on the virtual machine. What does this feature actually do?
A. This feature is designed to allow backwards compatibility for older operating systems such as Windows NT 4.0 (which performs a CPUID check and, if CPUID returns more than three leaves, it will fail). By selecting the processor functionality check box Hyper-V will limit CPUID to only return three leaves and therefore allow Windows NT 4.0 to successfully install. It is possible that other legacy operating systems could have a similar issue.

Q. Does this mean that Windows NT 4.0 is supported on Hyper-V?
A. Absolutely not. Windows NT 4.0 is outside its mainstream and extended support lifecycle and is not supported on Hyper-V and no integration components will be released for Windows NT 4.0.

Q. But one of the stated advantages for virtualisation is running legacy operating systems where hardware support is becoming problematic. Does this mean I can’t virtualise my remaining Windows NT computers?
A. The difference here is between “possible” and “supported”. Many legacy (and current) operating systems will run on Hyper-V (with emulated drivers) but are not supported.

. . .

==
}
{
Novell NetWare
==
Ben Armstrong ( Microsoft)
NetWare 5.0 worked, but 6.0 / 6.5 did not.
==
}
{

Ubuntu 12.04 has Hyper-V *.ko drivers backported from kernel v3.4.3; they work fine, and this is supported.
Debian v7.X has Hyper-V *.ko drivers backported from kernel v3.4.3.
}
{
RHEL 6.X , 5.X ( and CentOS, SL, OEL , etc. ) :
the Linux Integration Services 3.4 release for Hyper-V adds support for the following guest operating systems:

  • RHEL 5.7 (x86 and x64)
  • RHEL 5.8 (x86 and x64)
  • RHEL 6.3 (x86 and x64)
  • CentOS 5.7 (x86 and x64)
  • CentOS 5.8 (x86 and x64)
  • CentOS 6.3 (x86 and x64)
==
}
{
RHEL 5.9 Beta:
http://www.microsoftnow.com/2012/09/red-hat-includes-built-in-integration-services-for-hyper-v-in-rhel-5-9.html
==
Microsoft have made strategic investments in interoperability that continues to reap rewards and here’s another big one TODAY with Red Hat. Today, Red Hat has announced the beta of RHEL 5.9 which includes the Linux Integration Services for Hyper-V built-in.
==

==
This means that RHEL will include the following Linux Integration Components for Hyper-V “inbox”:

1. Driver support: Linux Integration Services supports the network controller and the IDE and SCSI storage controllers that were developed specifically for Hyper-V.

2. Fastpath Boot Support for Hyper-V: Boot devices now take advantage of the block Virtualization Service Client (VSC) to provide enhanced performance.

3. Timesync: The clock inside the virtual machine will remain accurate by synchronizing to the clock on the virtualization server via Timesync service, and with the help of the pluggable time source device.

4. Integrated Shutdown: Virtual machines running Linux can be shut down from either Hyper-V Manager or System Center Virtual Machine Manager by using the “Shut down” command.

5. Heartbeat: This feature allows the Hyper-V to detect whether the virtual machine is running and responsive.

6. Key Value Pair (KVP) Exchange: Information about the running Linux virtual machine can be obtained by using the Key Value Pair exchange functionality on the Hyper-V host.

7. Integrated Mouse Support: Linux Integration Services provides full mouse support for Linux guest virtual machines.
==

The Microsoft press release
http://blogs.technet.com/b/openness/archive/2012/09/21/windows-server-hyper-v-drivers-supported-in-red-hat-enterprise-linux.aspx

https://access.redhat.com/knowledge/docs/en-US/Red_Hat_Enterprise_Linux/5-Beta/html/5.9_Release_Notes/virtualization.html
==
Chapter 9. Virtualization

Inclusion of, and Guest Installation Support for, Microsoft Hyper-V Drivers
Integrated Red Hat Enterprise Linux guest installation, and Hyper-V para-virtualized device support in Red Hat Enterprise Linux 5.9 on Microsoft Hyper-V allows users to run Red Hat Enterprise Linux 5.9 as a guest on top of Microsoft Hyper-V hypervisors. The following Hyper-V drivers and a clock source have been added to the kernel shipped with Red Hat Enterprise Linux 5.9:

a network driver ( hv_netvsc)

a storage driver ( hv_storvsc)

an HID-compliant mouse driver ( hid_hyperv)

a VMbus driver ( hv_vmbus)

a util driver ( hv_util)

a clock source (i386: hyperv_clocksource, AMD64/Intel 64: HYPER-V timer)

Red Hat Enterprise Linux 5.9 also includes a guest Hyper-V Key-Value Pair (KVP) daemon ( hypervkvpd) that passes basic information, such as the guest IP, the FQDN, OS name, and OS release number, to the host through VMbus.

==

}
{
FreeBSD v9.X , v8.3 – v8.2

http://blogs.technet.com/b/port25/archive/2012/08/09/windows-server-hyper-v-is-now-a-hypervisor-for-freebsd.aspx
==
Microsoft and partners NetApp and Citrix are excited to announce the availability of FreeBSD support for Windows Server Hyper-V

8,500 lines of code released under the BSD license, is the result of collaboration between Microsoft, NetApp, and Citrix to enable FreeBSD to run as a first-class guest on Windows Server Hyper-V.
==

http://blogs.technet.com/b/openness/archive/2012/08/09/available-today-freebsd-support-for-windows-server-hyper-v.aspx
==
enable FreeBSD to run on Hyper-V with high performance. This release includes 8,500 lines of code submitted under the BSD license, supporting FreeBSD 8.2 on Windows Server 2008 R2.
. . .

Analysis is currently underway to assess customer demand and partner capacity to extend support to FreeBSD 9.0 on Windows Server 2012.

==

FreeBSD enlightened device drivers for Hyper-V/Azure with FreeBSD source tree

http://freebsdonhyper-v.github.com/

https://lists.launchpad.net/freeonhyper-v/msg00000.html
==
From: Chris Knight stryqx (at) DOMAIN.HIDDEN
Date: Mon, 13 Aug 2012 20:34:57 +1000

I’ve pulled down a git clone and created patchsets for FreeBSD 8.2,
8.3, 9.0 and 9.1-BETA1. They can be found here:
==

http://blog.chrisara.com.au/2012/08/hyper-v-integration-components-for_13.html
==
Monday, August 13, 2012

Hyper-V Integration Components for FreeBSD – Patchfiles

Call me old fashioned, but I’d much prefer a patchset than having to install a version control package and suck down a source code check out. So please find a patchset for the Hyper-V integration components for the following versions of FreeBSD:

FreeBSD 8.2 Hyper-V Integration Components Patchset

FreeBSD 8.3 Hyper-V Integration Components Patchset

FreeBSD 9.0 Hyper-V Integration Components Patchset

FreeBSD 9.1-BETA1 Hyper-V Integration Components Patchset

Download the patchset, then issue:

patch -p -d /usr/src <

to patch the source tree, followed by:

cd /usr/src; make kernel KERNCONF=HYPERV_VM INSTKERNNAME=kernel.HYPERV

to install the Hyper-V enabled kernel to /boot/kernel.HYPERV.

Before booting to the Hyper-V enabled kernel it’s best to use GEOM labels to mount the partitions. Follow the instructions here to do this. This makes it easy for you to quickly swap between a Hyper-V enabled kernel and a non-Hyper-V enabled kernel – the reason being the Fast IDE storage driver presents itself as a SCSI driver, changing the device node path which prevents /etc/fstab from working correctly.

. . .
==
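For the GEOM-label step the post refers to, a minimal sketch (assuming a UFS root on ad0s1a; the label name is arbitrary, and tunefs must run against an unmounted or read-only filesystem):

==
# label the filesystem, then refer to /dev/ufs/<label> in /etc/fstab
tunefs -L rootfs /dev/ad0s1a
# the /etc/fstab entry would then read:
# /dev/ufs/rootfs   /   ufs   rw   1   1
==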

}
