Veeam MultiHost SureReplica v7 - Demystified


After months of eagerly waiting to post about the new features in Veeam Backup & Replication v7, I can finally go ahead. If you read through my blog, you will notice that I love to talk about SureBackup, as I think we take a very interesting approach to how we separate the isolated network from the production network.

One of the new features in v7 is SureReplica. I think it is a great feature with great benefits:

  • You can test the replicas at the DR site and verify that they actually work. Another checkbox in your DR plan that gets checked automatically.
  • More interesting is the fact that you can use the resources at the DR site as a test environment. The great thing is that the storage at the other side is probably a copy, or has similar performance characteristics, so your lab runs at the same speed as the VMs in production. It also allows you to create bigger sandboxes. You could even use replicas purely to create lab environments (replicating maybe only once a month, or manually, to refresh to the latest data), not specifically for DR scenarios.
One challenge with this setup is of course that not all replicas may land on the same ESXi host. In v6.5 this would have been a problem, as a virtual lab (and specifically its network part) is created on only one ESXi host.

In v7 we have specifically added the multi-host setup. Instead of creating the lab on a standard vSwitch, you will need to have a dvSwitch in place (which you can select during setup, as shown below).

Now one of the tricky parts is that a dvSwitch of course has uplinks. This is good, as it means that VMs on different hosts will be able to talk to each other. The tricky part, however, is now the VLAN configuration.

If you have a single-host setup, the VLAN for the isolated network does not really matter: the vSwitch has no uplinks, so you won't connect it to production anyway. With a multi-host setup you need to watch out, as you will have uplinks:
  • For every isolated network, make sure that you use a VLAN ID that is not in use in production.
  • Make sure your physical switches know this VLAN and are forwarding its packets from one ESXi host to another.
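The two checks above can be sketched as a small helper that validates a proposed isolated-network VLAN ID against a list of production VLANs and a list of VLANs trunked between the hosts. Both sets below are illustrative placeholders, not real data: substitute the VLANs from your own network documentation.

```python
# Sanity-check a proposed isolated-network VLAN ID.
# PRODUCTION_VLANS and TRUNKED_VLANS are assumed example values;
# fill these in from your own environment.

PRODUCTION_VLANS = {10, 20, 30}          # VLANs in use in production (example)
TRUNKED_VLANS = {10, 20, 30, 900, 901}   # VLANs trunked between ESXi hosts (example)

def check_isolated_vlan(vlan_id):
    """Return a list of problems with a proposed isolated-network VLAN ID."""
    problems = []
    if vlan_id in PRODUCTION_VLANS:
        problems.append(f"VLAN {vlan_id} is already used in production")
    if vlan_id not in TRUNKED_VLANS:
        problems.append(f"VLAN {vlan_id} is not trunked between the ESXi hosts")
    return problems

print(check_isolated_vlan(900))  # [] -> safe to use for an isolated network
print(check_isolated_vlan(10))   # clashes with production
```

An empty list means the VLAN ID is safe to hand to the virtual lab wizard under these assumptions.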
Other than that, the setup is similar. Port groups will be created automatically on the dvSwitch with the correct VLAN IDs.

SureBackup and Multi-Host

An interesting question came into my mailbox this week: can I use the multi-host setup for SureBackup as well? My first thought was: yes, of course you can. Then it hit me: you cannot select a cluster for a virtual lab. So although the network spans multiple hosts, Instant VM Recovery will always be done to one host, namely the host you selected during virtual lab creation.

So I thought during testing: why not try to vMotion the VMs while they are powered on? Well, it turns out there are a few things you need to take into account:
  • Make sure the vPower NFS datastore is mounted on all your ESXi hosts. You can do this manually or by initiating an Instant VM Recovery to every individual host. To do it manually, check out http://www.veeam.com/kb1055
  • When you back up a VM, make sure all CD-ROM and floppy drives are disconnected. This avoids having local "CD-ROMs" connected, and is a best practice for VMware environments anyway.
  • vMotion with snapshots is supported starting from vSphere 4.0: http://kb.vmware.com/kb/1035550
  • Make sure your I/O redirection datastore is a shared one that all your ESXi hosts have mounted.
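The CD-ROM/floppy check from the list above is easy to automate. This is a minimal sketch that works on plain dictionaries standing in for the device data you would actually read via the vSphere API or PowerCLI; the field names here are my own illustrative choices, not real API properties.

```python
# Flag VMs that still have removable media connected before backup.
# The vm_devices structure below is a stand-in for real vSphere device
# data; "label", "type" and "connected" are assumed example fields.

def connected_removable_media(devices):
    """Return labels of CD-ROM/floppy devices that are still connected."""
    return [d["label"] for d in devices
            if d["type"] in ("cdrom", "floppy") and d["connected"]]

vm_devices = [
    {"label": "CD/DVD drive 1", "type": "cdrom",  "connected": True},
    {"label": "Floppy drive 1", "type": "floppy", "connected": False},
    {"label": "Hard disk 1",    "type": "disk",   "connected": True},
]

print(connected_removable_media(vm_devices))  # ['CD/DVD drive 1']
```

Anything this returns is a device you want to disconnect before the backup job runs, so that the restored VM does not drag a host-local "CD-ROM" along and block vMotion.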
If you start the SureBackup job, the VMs will be started on the selected ESXi host. However, if you have set up DRS to balance your cluster, the VMs will be balanced automatically, provided they can be vMotioned. If you are working in a completely separate lab environment, this might be one of those times when you want to lower the DRS migration threshold so that balancing happens faster.

Then, when you fire up your lab, you should see this happen if the load on your initial ESXi server gets too high.

If it does not, just try a manual vMotion to check why the system is not able to vMotion the VM.
