2013/11/25

After the backup copy job, the auto import

For me personally, the most interesting feature in Veeam v7 is the backup copy job. Why? Well, it solves one of the most important challenges Veeam users have had with backup policies.

First of all, you can now tier your backups. Start by creating fast backups on fast disks with a limited number of restore points, then use the backup copy job to copy the data to slower disks for longer retention. Before, this was also possible ... with scripts!

Second of all, you can now apply GFS-like retention policies. I like that GFS is only available on the backup copy job. It forces people to think about tiering in combination with GFS to slow disks, so that you still have a (limited) number of fast restore points to do Instant VM Recovery and Surebackup. Before, this was also possible ... with scripts!

Lastly, WAN acceleration is now built into the product. People were trying to get their backups off-site with previous versions, but maybe not always in the most successful way. They were of course using scripts around RSYNC and Robocopy to get the backups shipped to a second location, but didn't always like how much bandwidth this required or the manual actions they had to take. Now you can get your backups off-site over very small connections. Best of all, you no longer require somebody to manually export the tapes and take them home every day to keep your corporate data safe.

But what about those off-site copies? Getting your restore points off-site is the easy part: you can just use the backup copy job. Actually, there are two strategies for getting your backups off-site.

In the push strategy, you install the Veeam B&R Management Server at the source location. Backups are made by a source proxy to a source repository, and the backup copy job then sends the data to the remote location. This is mostly used when your IT staff works at the source location. One disadvantage of this scenario is that you cannot share WAN accelerators between management servers in v7. Since every location runs its own Management Server, you will have to install a separate Windows server for each WAN accelerator instance. On the other hand, connection outages won't stop backups from running locally.

In the pull strategy, you install the Veeam B&R Management Server at the remote location (or HQ in a ROBO design). You still have a source proxy and a source repository: local backups are made locally but scheduled from the remote location. The backup copy job then copies the data to the remote location. This scenario is mostly used when you have centralized IT and very stable connections between HQ and your branch offices. Because the Management Server runs centrally, you don't need to deploy a Windows server for each pair of WAN accelerators. In fact, you can benefit from the fact that when you set up a new WAN accelerator pair, the server will copy the cache from an existing WAN accelerator at HQ.

In the push strategy, your local and remote backups are visible as restore points at the source side. This is good when you want to do local restores. However, what if you want to do a file level recovery at the remote location? In this case you could have a clean install of the backup management server there and import the backups.

In the pull strategy, restoring at the remote side or HQ is easy. However, restoring locally is hard, because you would need a clean install of the backup management server and have to import the backups locally just to do a file level recovery.

How to solve it? Well, in both scenarios I would start by installing a local, empty management server. Please don't forget to install the PowerShell snap-in as well, as we need it for the auto import. From a licensing perspective, you can reuse your license because this empty install won't be backing up any VMs, so you won't consume any sockets. Notice that both Veeam servers should run the same version, or at least the importing server should be at the same level as or newer than the one creating the backups.

Once you have installed the empty server, you can start by adding the repository directory as a new repository on the cleanly installed server. When you add the repository, give it a name that starts with the prefix "WANBCJ". Alternatively, you can alter the script I am providing later.


In the last step you can then click to automatically import existing backups.


After this is done you should see your backups appear as "imported", and you can start FLR or Instant VM Recovery easily in the regular way.


One thing that won't happen, however, is that your repository will be automatically rescanned for new restore points. So if, a couple of weeks later, you need to restore a freshly copied restore point, you will have to manually rescan your repository. It's quite easy to do: just go to the repository under Backup Infrastructure, right-click it and choose Rescan.


Well, now here is the fun part: you can easily automate this. Just open up a PowerShell window via the main menu (the blue button in the top left corner of the GUI).


OK, so the basic script is actually one line of code:
Get-VBRBackupRepository | where { $_.name -match "^WANBCJ" } | ForEach-Object { Sync-VBRBackupRepository -Repository $_ | out-null}
You can see why WANBCJ is required as a prefix: the code will match any repository whose name starts with WANBCJ. Then, for each of these repositories, we trigger a rescan. You can see the result popping up in the History tab.


Now let's automate this code. The easiest way is to create a .ps1 file and add the following code to it:
Add-PSSnapin -Name "VeeamPSSnapin"
Get-VBRBackupRepository | where { $_.name -match "^WANBCJ" } | ForEach-Object { Sync-VBRBackupRepository -Repository $_ | out-null}
Notice the added line that loads the VeeamPSSnapin. When you launch PowerShell via the Veeam menu, this is done automatically; however, the Windows Task Scheduler won't do it for you, so the script has to load it itself. In my case I have saved the script under c:\vbrscripts\syncrepo.ps1


Now, in the Windows Task Scheduler, you can schedule a new task to run this script on a daily basis (or more frequently if you prefer). You can see the program is "powershell" and the argument is the path to the script enclosed in quotation marks.


If you want to test that it works, just hit the run button and see if you can see the event in the history tab.


The nice thing about this script is that if you add another repository whose name starts with WANBCJ, you won't have to do anything, as it will be rescanned automatically!

2013/09/24

Getting the most out of Windows 2012 Deduplication with Veeam

With the release of Windows 2012, Microsoft allows you to do deduplication in software. This feature can potentially save you a lot of storage space without having to buy specialized hardware.

For example, if you have multiple Veeam backup jobs, storing the data on a common repository can give you global deduplication with Windows 2012. You can find a good blog article about this on the Veeam web site: http://www.veeam.com/blog/how-to-get-unbelievable-deduplication-results-with-windows-server-2012-and-veeam-backup-replication.html . With the release of v7, I think you can build even more interesting scenarios where the primary repository is a volume without deduplication, for fast backup and fast restore. You can then use the backup copy job to copy the backups from the primary volume to a dedup volume in combination with GFS. Because GFS will create multiple full backups, this should lead to interesting dedup ratios.

A couple of days ago I got an interesting question about in-guest deduplication and file level recovery with Veeam. I was pretty confident it would work, because Veeam presents the disk to Windows via a proprietary driver. I figured that one of the requirements would be that the backup server runs on Windows Server 2012.

However, when I tried it I got the following error: "Browsing deduplicated volumes requires that backup server is installed on Windows Server 2012"


I thought it was a bug, because my B&R server was running on 2012. After opening a case with support, it turned out that you just need to enable the deduplication role on the backup server (File and Storage Services > File and iSCSI Services > Data Deduplication).


Once you do that, FLR will just work out of the box.


So not only can you use deduplication on the backup server, you can also use it in-guest, knowing that Veeam can successfully recover files from it.

2013/08/16

Veeam MultiHost SureReplica v7 - Demystified

After months of eagerly waiting to post about new features in Veeam Backup & Replication v7, I can finally go ahead. If you read through my blog posts you will notice that I love to talk about Surebackup, as I think we take a very interesting approach in how we separate the isolated network from the production network.

One of the new features in v7 is SureReplica. I think it is a great feature with great benefits:

  • You will be able to test the replicas at the other side and see if they work successfully. Again, another checkbox in your DR plan that can be checked automatically.
  • More interesting is the fact that you will be able to use the resources at the other side as a test environment. The great thing is that the storage at the other side will probably be a copy or have similar performance characteristics, so that your lab runs at the same speed as the VMs in production. It will also allow you to create bigger sandboxes. You could even use replicas just to create lab environments (replicating maybe only once a month, or manually to refresh the data), not specifically for DR scenarios.
One challenge with this setup is of course that not all replicas may land on the same ESXi host. In v6.5 this would have been a problem, as a virtual lab (and specifically its network part) is created on only one ESXi host.

In v7 we have added the multi-host setup specifically for this. Instead of creating the lab on a vSwitch, you will need to have a dvSwitch in place (which you will be able to select during setup, as shown below).


Now, one of the tricky parts is that a dvSwitch of course has uplinks. This is good, as it means the VMs on different hosts will be able to talk to each other. The tricky part, however, is now the VLAN configuration.


If you have a single-host setup, the VLAN for the isolated network does not really matter: the switch has no uplinks, so you won't connect it to production anyway. With a multi-host setup you need to watch out, as you will have uplinks:
  • For every isolated network, make sure that you use a VLAN ID that is not in use in production
  • Make sure your physical switches know this VLAN and forward its packets from one ESXi host to another.
Other than that, the setup is similar. Port groups will be created automatically on the dvSwitch with the correct VLAN ID.

Surebackup and Multi Host

An interesting question came into my mailbox this week: can I use multi-host for Surebackup as well? My first thought was: yes, of course you can. Then it hit me: you cannot select a cluster for a virtual lab. So although the network is multi-host, Instant VM Recovery will always be done to one host, namely the host you selected during vLab creation.

So I thought during testing: why not try to vMotion the VMs while they are powered on? Well, it turns out there are a few things you need to take into account.
  • Make sure that vPower is mounted on all your ESXi hosts. You can do this manually, or initiate an Instant VM Recovery to every individual host. To do it manually, check out http://www.veeam.com/kb1055
  • When you back up a VM, make sure all CD-ROMs and floppies are disconnected. This avoids having local "cdroms" connected. This is a best practice for VMware environments anyway.
  • vMotion with snapshots should work starting from vSphere 4.0: http://kb.vmware.com/kb/1035550
  • Make sure your I/O redirection datastore is shared and mounted on all your ESXi hosts.
If you start the Surebackup job, the VMs will be started on the selected ESXi host. However, if you set up DRS to balance your cluster, the VMs will be balanced automatically if they are vmotionable (I'm not sure that is a word, but ok :)). If you are actually working in a totally separated lab environment, this might be one of those times when you want to change the DRS recommendation settings so that balancing happens faster.


Then, when you fire up your lab, you should see this happen if the load gets too high on your initial ESXi server.

If it does not, just try a manual vMotion to check why the system is not able to do one.



2013/06/12

Protect your Veeam backups from physical access to your repository

A feature that is not in Veeam is encrypted backups. It is not one of the top requested features, like tape, but still, every now and then I get a mail asking how you can store your backups in an encrypted way with Veeam. The short answer is: it is not possible. However, with Linux repositories you can do some pretty neat stuff.

This blog article continues on my previous article "Veeam and Linux Repository". What I will show you here is how to encrypt the home volume so that all your backups are stored in an encrypted way. If someone were to steal your server, the data would be worthless without the key, thus protecting you from physical access.

So let's continue. Just after you have configured the firewall, you can create the repo group
groupadd repos;
echo "%repos ALL=(root) NOPASSWD: ALL" >> /etc/sudoers.d/repos;
However, just before you create the repo01 user, we will encrypt the home volume. To do this, you will need to take the home volume offline. Encrypting the volume will also destroy all the data, so do this before you put the server in production, or migrate the data first.

To take the home volume offline, go to the console and switch to runlevel 1 so that all remote and other users are disconnected. This should clear all the file locks, but it will also disable networking, so you really need to do this on the console and not via SSH. Afterwards we switch back to runlevel 4.
telinit 1
umount /home
telinit 4
Now check your /etc/fstab file and look for the logical volume that you want to encrypt. In my case it is /dev/mapper/vg_repo-lv_repository


Then you can use shred to clear any existing data on the disk. If you are using thin provisioning in VMware this is not recommended, as writing to every block will fully inflate the thin disk.
shred -v --iterations=1 /dev/mapper/vg_repo-lv_repository
Then you can encrypt the disk with cryptsetup and open it. This will create a new disk under /dev/mapper
cryptsetup --verbose --verify-passphrase luksFormat /dev/mapper/vg_repo-lv_repository;
cryptsetup luksOpen /dev/mapper/vg_repo-lv_repository encrypted_home;

You can check if the disk is properly mapped:
fdisk -l /dev/mapper/encrypted_home
Now that the disk is under /dev/mapper/encrypted_home, you can format the disk with ext4
mkfs.ext4 /dev/mapper/encrypted_home
Finally you will need to add some lines to crypttab and fstab so that the disk is mounted at boot
echo "encrypted_home /dev/mapper/vg_repo-lv_repository none" >> /etc/crypttab 
echo "/dev/mapper/encrypted_home /home                   ext4    defaults        1 2" >> /etc/fstab
You will also have to comment out or remove the line in /etc/fstab that is responsible for mounting the old unencrypted volume /dev/mapper/vg_repo-lv_repository
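If you prefer not to edit fstab by hand, the old entry can be commented out with sed. Here is a minimal sketch, shown against a stand-in file so you can try it safely first; on the real system you would point the sed line at /etc/fstab, and your volume name may differ from vg_repo-lv_repository:

```shell
# Stand-in for /etc/fstab so the edit can be tried safely first
cat > /tmp/fstab.demo <<'EOF'
/dev/mapper/vg_repo-lv_root       /      ext4    defaults        1 1
/dev/mapper/vg_repo-lv_repository /home  ext4    defaults        1 2
EOF
# Prefix the old unencrypted /home line with '#' so it is ignored at boot
sed -i 's|^/dev/mapper/vg_repo-lv_repository|#&|' /tmp/fstab.demo
cat /tmp/fstab.demo
```

Once you are happy with the result on the copy, run the same sed line against /etc/fstab itself.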


Now you can execute "mount -a" to mount the encrypted volume, or just reboot the machine. During boot, the machine will ask for the passphrase to unlock the encrypted volume:



Now that you have an encrypted home volume, you can create the user and add the repository to Veeam
useradd -m -G repos repo01;
echo "repo01:repo01" | chpasswd;

Now you are able to write backups to your encrypted volume

In my test the repository was not the bottleneck; however, I only have a limited lab environment, so there might be some overhead when you try Instant VM Recovery or while running backups. If in doubt, add more CPU and memory :)


2013/05/31

Veeam FLR and Linux searches

Veeam has an excellent framework for searching and restoring files on Windows. The one-click file restore is a feature well appreciated by users. However, the functionality is not available for Linux servers. I thought about this and came up with some possible solutions.

Using the Veeam FLR

The first possible solution is using the Veeam FLR appliance. In the end it is just a Linux appliance and, guess what, you can just log on to it. You can find all the info you need in the KB article (http://www.veeam.com/kb1447).

Once you are logged in, you can use the "mount" command to find out where Veeam mounts the partitions. This seems to be the pretty standard location "/media". Then you can dive into those partitions and use "find" to locate your file. In the example below you can see I used the FLR appliance to search for the file /tmp/processfollow


Once you know the path, you can go back to the Explorer and do the restore operation

Using the native mlocate method

Linux has a standard tool for indexing. This tool, called mlocate, can easily be installed on Red Hat systems by executing
$ yum install mlocate

To create your initial database, just run the updatedb command. If you later want to refresh the database, simply run updatedb again
$ updatedb

The great thing is that CentOS, for example, automatically creates a daily cron job so that this updating happens automatically. Just take a look at "/etc/cron.daily/mlocate.cron". In the script you will also see that CentOS uses renice and ionice so that the indexing process does not take all the available resources.

Once you have a database, you can use "locate" to find a file. For example
$ locate findthisfile 


Multiple local versions of the index

The index or database is just a flat file you can find under "/var/lib/mlocate/mlocate.db". In fact, you could adjust your cron job so that it copies the index file and renames it using the current date. You can then use locate to find a file that might already be deleted from disk. You can see an example below. The copy statement is

$ cp /var/lib/mlocate/mlocate.db  /var/lib/mlocate/mlocate-$(date +%y%m%d).db
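That copy statement can be wrapped in a small script for the daily cron job. Here is a sketch, using a temp directory so you can try it without root (the real database lives in /var/lib/mlocate, and the 30-day retention is my own assumption, tune it to taste):

```shell
# Keep dated copies of the mlocate database and prune old ones
DBDIR=/tmp/mlocate-demo            # use /var/lib/mlocate on a real system
mkdir -p "$DBDIR"
: > "$DBDIR/mlocate.db"            # stand-in for the real index file
cp "$DBDIR/mlocate.db" "$DBDIR/mlocate-$(date +%y%m%d).db"
# prune dated copies older than 30 days
find "$DBDIR" -name 'mlocate-*.db' -mtime +30 -delete
ls "$DBDIR"
```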

Then you can use the "-d" parameter to find a file in an older index
$ locate -d /var/lib/mlocate/mlocate-<date>.db findthisfile

In the screenshot below you will see a trial run, where you can see that I am unable to find a file in the current db but I am able to find the file in an old index


Multiple versions of the index on a remote server

The other great thing is that you can copy those indexes to a central server and use locate there to search for files that are, or were, on a specific server.

In my example I have a central server at 192.168.149.45. I used the following statement to copy my index to this central server
$ scp /var/lib/mlocate/mlocate.db index@192.168.149.45:/home/index/$(date +%y%m%d)-$(hostname).db
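To run this every night, you could add a crontab entry (via "crontab -e") on each client. The schedule below is a hypothetical example, and it assumes key-based SSH authentication for the index user so that no password prompt blocks the job. Note that % has a special meaning in crontab and must be escaped:

```shell
# ship the index to the central server every night at 23:30
30 23 * * * scp /var/lib/mlocate/mlocate.db index@192.168.149.45:/home/index/$(date +\%y\%m\%d)-$(hostname).db
```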

Then I created a small script on this central server called lsearch
#!/bin/sh
# lsearch: search every dated index on the central server for a file name
INDEXDIR=/home/index
for curdb in $(ls -t $INDEXDIR/*.db)
do
        echo ">>>>> $curdb"
        locate -d "$curdb" "$1"
done

After making it executable with chmod +x, I am able to search for a file using ./lsearch
$./lsearch findthisfile


Of course, I am not sure about the performance with bigger machines or with more servers. The script itself is very, very basic, but I do hope it might inspire some people to create nicer and better implementations.


2013/05/07

Veeam and Linux Repository

**** This blog article was tested with CentOS 6. However, on CentOS 7 you might need to install an extra package via "yum install perl-Data-Dumper". You can follow the discussion on the Veeam forum. Credit goes to Tom Sightler for this update ****

This post is rather trivial, but I just wanted to create a reference that you can use if you are in doubt about which distribution could work. I also have some other blog posts planned for which this will serve as an excellent start guide.

If you want to create a Linux repository and you look at the manual, it will only state
"Any major Linux distribution"
The FAQ on the forum gives a bit more detail stating
"Any storage directly attached to, or mounted on a Linux server (x86 and x64 of all major distributions are supported, must have SSH and Perl installed). The storage can be local disks, directly attached disk based storage (such as USB hard drive), NFS share, or iSCSI/FC SAN LUN in case the server is connected into the SAN fabric."

So I decided to test just one major distribution: CentOS. The good thing is that they have a minimal version which only installs the bare essentials. In this post I used "CentOS-6.4-x86_64-minimal". When you boot from the CD you can just go through the installer, which is pretty self-explanatory, so I'll only discuss some of the steps.


At boot I like to press Tab and add the kernel parameter resolution=1024x768. I've noticed that on a virtual console you miss part of the screen if you do not do this.


I advise you to set the network settings via the GUI. You can do it afterwards directly in the config file, but it takes a bit more time. I also enable "Connect automatically", so that after the install I can SSH directly to the new server.


For the storage option, select review and modify in the bottom left corner so that you have more control.


In this step you can see that I downsized the root volume (/) to only 5GB. You will see that this is plenty; in fact, I think even 3GB would suffice. I created another volume (/home) to store all the backups in. When your installation is configured, it should only install a couple of packages and then you are ready to go.

When your machine is installed and rebooted, you should be able to SSH to it, if you configured the network. One thing I like to do is edit the network config file and change NM_CONTROLLED=yes to NM_CONTROLLED=no. You can edit the file via
vi /etc/sysconfig/network-scripts/ifcfg-eth0


Once this is done, you should install the required software. First, update the system
yum update -y
Now install the OpenSSH client and server. This should already be done, but just in case
yum install -y openssh openssh-clients openssh-server
You will need sudo for elevating rights. Again, this should already be installed
yum install -y sudo
Finally install perl. This is not included in the base install
yum install -y perl
If you are installing in a VM for a test, this is the moment you can install VMware Tools. I use this one-liner for installing VMware Tools.
mount /dev/cdrom /mnt;tar -xzf /mnt/VMware*.tar.gz -C /usr/src/;/usr/src/vmware-tools-distrib/vmware-install.pl

Now we will configure the firewall. The Veeam requirements say that you need to open certain ports. Mainly this should be

  • 22 tcp (ssh)
  • 2500 - 5000 tcp

The easiest way at this point (so that you don't have to install extra components) is to edit the iptables file directly. Just use vi to edit "/etc/sysconfig/iptables" and add the following line between the ssh rule (--dport 22) and the "-j REJECT" rule. The order is important!
-A INPUT -m state --state NEW -m tcp -p tcp --dport 2500:5000 -j ACCEPT
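If you prefer to script the edit instead of opening vi, GNU sed can insert the rule just before the REJECT line, which is what matters for the ordering. Here is a sketch against a minimal stand-in for the rules file; once you trust it, apply the same sed line to /etc/sysconfig/iptables:

```shell
# Minimal stand-in for the stock iptables rules file
cat > /tmp/iptables.demo <<'EOF'
-A INPUT -m state --state NEW -m tcp -p tcp --dport 22 -j ACCEPT
-A INPUT -j REJECT --reject-with icmp-host-prohibited
EOF
# GNU sed: insert the Veeam port range on the line before the REJECT rule
sed -i '/-j REJECT/i -A INPUT -m state --state NEW -m tcp -p tcp --dport 2500:5000 -j ACCEPT' /tmp/iptables.demo
cat /tmp/iptables.demo
```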

Then you can restart the iptables service
service iptables restart
For this tutorial we will create a separate user for the repository. Of course, since you allow the user to elevate via sudo it is not 100% secure, but it offers a bit of separation. If you want real separation, I advise you to create multiple systems or multiple chroot environments.

In the following step we create a group called repos. Then we create a user repo01 that is part of this repos group and set the user's password. Finally, we add the repos group to a sudoers file, so that you don't have to modify the original file, nor do you need to allow Veeam to manipulate it.

groupadd repos;
useradd -m -G repos repo01;
echo "repo01:repo01" | chpasswd;
echo "%repos ALL=(root) NOPASSWD: ALL" >> /etc/sudoers.d/repos;


Now you should be able to add the repository to Veeam. By default you will see that we are not using too much space: in my example, only 28% of lv_root was used (1.3GB of 4.9GB).

Some interesting screenshots while adding the repository. You can see I don't use the root account, because repo01 can elevate its rights. You don't need to allow Veeam to alter the sudoers file, as this has already been done.

You can also see that we have opened up all the required ports


In the repository step, I just used the home folder of repo01 so that the backups are stored nicely in this separate container.


During backups, you will see that Veeam will automatically push and start the necessary agents



2013/04/04

Surebackup Network Deepdive

In this post I will take a deep dive into the Surebackup networking part. I can imagine that this might be too deep for some people, but at least it should serve as a good reference for how the Surebackup vLab works.

But first things first: if you don't know what SNAT, DNAT or NETMAP means, I advise you to read my previous post. Understanding it is crucial for understanding how Surebackup works internally. Even if you know those concepts, I advise you to read the last part of that post, describing how to connect overlapping subnet ranges, as this is the basis for Surebackup.

The simple network

We will start by creating a very simple Surebackup network. This particular Veeam customer has only one network, his production range. So let's put the network parameters in a clean table. This is very interesting information, as it contains everything we need to set up the Virtual Lab

Name       | Portgroup  | VLAN | Subnet        | Netmask       | Default Gateway | DNS
Production | VM Network | 0    | 192.168.149.x | 255.255.255.0 | 192.168.149.2   | 192.168.149.20

Now, for every production network you want to use in Surebackup, you will need a subnet range that is not being used and has the same subnet size. How do you know which production networks you are going to use? Take a look at the VMs you want to test in Surebackup and write down to which network they are connected. In this case it is only one network

Subnet Mask   | Production Subnet | Subnet not in use anywhere on the network
255.255.255.0 | 192.168.149.x     | 192.168.150.x

You could also use a range in a different private address space. This will allow you to make a clear distinction between production and your Surebackup network. For example:

Subnet Mask   | Production Subnet | Subnet not in use anywhere on the network
255.255.255.0 | 192.168.149.x     | 172.16.149.x
255.255.255.0 | 192.168.129.x     | 172.16.129.x

The Goal

Before I explain how to configure everything, let me first explain what our final goal is. I think 99% of all problems with Virtual Labs occur because users don't know what the final result should be.

With Surebackup we start up virtual machines directly from a backup in an isolated network. So what is an isolated network? An isolated network is a network that mimics a production network. It means that VMs will reuse the same network settings as listed in our table earlier. This portgroup or network will be created automatically by Veeam on a vSwitch without any uplinks. You could say that Veeam creates a network sandbox.

Imagine a server SF0006 with IP 192.168.149.36, running in the production network. When Surebackup is configured, a copy of this network is created: the isolated network. The backup copy we want to test will be started in this isolated network. The result is something like this. Notice that I added the IP of the Veeam Backup Server

VM Network (has uplinks on vSwitch)
  • SF0006 : 192.168.149.36
  • Default gateway : 192.168.149.2
  • Veeam Backup Server : 192.168.149.22

vLab VM Network (has no uplinks on vSwitch)
  • SF0006_Backup_Copy : 192.168.149.36
This is already a great start. The only thing is that we cannot talk to our "SF0006_Backup_Copy" machine, as it is isolated. This is where vLab networking comes into play. Veeam will deploy a small Linux NAT router that sits between the production network and the isolated vLab VM Network.

In the production network, the vLab router just needs any available IP. You can use a DHCP address, but I recommend using a fixed IP. In my case I chose 192.168.149.50. So let me update the VM Network

VM Network (has uplinks on vSwitch)
  • SF0006 : 192.168.149.36
  • Default gateway : 192.168.149.2
  • Veeam Backup Server : 192.168.149.22
  • vLab router interface 0 : 192.168.149.50
In the isolated network we also have to choose an IP address. However, the choice is very easy. If a virtual machine wants to contact the outside world, it will use its default gateway. The backup copy is not aware that it is started in an isolated environment; when it runs, it will also want to talk to the default gateway to send out traffic. So in the isolated environment, our vLab router will mimic the default gateway that is running in production. Our vLab VM Network now looks like:

vLab VM Network (has no uplinks on vSwitch)
  • vLab router interface 1 : 192.168.149.2
  • SF0006_Backup_Copy : 192.168.149.36

Now, if the Veeam server or a production machine wants to talk to the backup copy, it will just send packets to vLab router interface 0. The vLab router can then forward or route the packets to the isolated environment. What makes this more interesting is that you have overlapping subnets, so you will need some form of NAT to hide this isolated environment behind another subnet.

Let's see how it looks in VMware. First of all, you will see 3 running machines. The first one is the production machine SF0006. The second one is the backup copy; notice that it won't use the name sf0006_backup_copy, but rather sf0006_insertveryrandomhashhere. The last one is the vLab router appliance

Now let's take a look at SF0006. As stated before, it has the IP 192.168.149.36 and is connected to the VM Network

The backup copy of SF0006 also uses the IP 192.168.149.36. However it is connected to the vLab VM Network

The vLab router itself has 2 IPs. One of those IPs is 192.168.149.50 in the VM Network. You can also see that it is connected to the vLab VM Network

If we look at the ESX networking, you can see clearly that the isolated network has no way to talk with the outside world, except via the vLab router

The Configuration

So now that we know what the result should be, let's see how to configure it. Go to the backup infrastructure and add a new virtual lab

Give the Virtual Lab a name. I like vLab because it is short

The vLab router is a Linux appliance. It is a virtual machine, so you will need to select a vSphere host to run it on. Important: all the networking is created only on this vSphere host, and all the machines that will be tested will be powered on on this host.

Select a datastore to store the Linux appliance on

Now configure the router so that it has an IP in your production network. Again, this is the free IP we reserved for the appliance.

Instead of using DHCP, we choose the static IP 192.168.149.50. This IP will be set on interface 0 and will be the entry point for all packets coming from production.


Then, in the next step, choose manual configuration. If you made it this far, you know what you are doing.

Now let's create an isolated network. In our case we only need to mimic one portgroup, called VM Network. However, if you have multiple networks (Production, DMZ, ...) and you need them to test your VMs, you will need to create a copy, or isolated network, of each production network. I will show you this in a later post.

For each isolated network you will have to define a vNIC or interface in that isolated environment. Remember that our Linux router will mimic the default gateway of production.

So in the settings for the isolated network vLab VM Network, configure the production default gateway 192.168.149.2. The vLab router will then mimic the gateway in this isolated network. Interestingly enough, you will now also have to configure a "Mask". Basically you configure a subnet that does not exist in your environment and that will hide away, or mask, your isolated environment. What you are actually doing is creating a NETMAP rule in the Virtual Lab router.

I chose 192.168.150.x because I have only one network. However, I can not stress this enough: this mask should be unique in your network to avoid problems.

Just skip static mapping for now. I will cover it later.

And you are set up.

The Result

When you are running a Surebackup job, the result will be something like this (our goal):


VM Network (has uplinks on vSwitch)

  • SF0006 : 192.168.149.36
  • Default gateway : 192.168.149.2
  • Veeam Backup Server : 192.168.149.22
  • vLab router interface 0 : 192.168.149.50

vLab VM Network (has no uplinks on vSwitch)
  • vLab router interface 1 : 192.168.149.2
  • SF0006_Backup_Copy : 192.168.149.36

Remember that we masked, or did a NETMAP for, the vLab network. So any production VM that wants to talk to SF0006_Backup_Copy from the VM Network will not use its regular IP 192.168.149.36 but its masked version 192.168.150.36. In the screenshot you can see that I am able to ping the machine successfully.

There is only one problem. The default gateway of the Veeam Backup Server (VBS) is set to 192.168.149.2. So if it wants to talk to 192.168.150.36, it would send the traffic to that default gateway. The default gateway is not aware of the situation and simply drops the packets.

So how do we fix this? Well, it is fixed automatically. If you run a Surebackup job or an U-AIR wizard, Veeam will automatically add static routes on the Veeam Backup Server (VBS) or on the machine running the U-AIR wizard. You can see this in a console window using the command "route print".

Veeam has added a static route saying that traffic for subnet 192.168.150.0/24 should be forwarded to 192.168.149.50, which is our vLab router interface 0. When the VBS wants to talk to 192.168.150.36, it will send the packet to our vLab router and the traffic will be translated.
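On a Windows VBS, that route would look roughly like this (illustration only; Veeam adds and removes it for you, so you never have to run this yourself):

```shell
# Hypothetical equivalent of the route Veeam adds on the VBS
# (Windows "route" syntax, run as administrator). Veeam manages this
# automatically; shown only to clarify what "route print" reveals.
route add 192.168.150.0 mask 255.255.255.0 192.168.149.50
```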

Deepdive

I've shown you how to configure the vLab router, but let's see what happens underneath. If you have not read my previous post, please do so.

One thing we discussed in the NAT post is how you can manage overlapping subnets. This is exactly how the vLab router solves the same challenge. So let's take a look under the hood. If we run ifconfig, we can see the IPs set on the interfaces. In this case:

  • Interface 0 : eth0 : 192.168.149.50
  • Interface 1 : eth1 : 192.168.149.2




First let's look at the NAT rules. There are two important ones: the NETMAP rule and the Masquerade rule.

The NETMAP rule in the Pre Routing stage:
  • When: Pre Routing
  • Type: NETMAP
  • Coming from interface: eth0 / Interface 0
  • Exit interface: * (any)
  • Original destination: 192.168.150.x/24
  • Translated destination: 192.168.149.x/24

The Masquerade rule in the Post Routing stage:
  • When: Post Routing
  • Type: Masquerade
  • Coming from interface: * (any)
  • Leaving on interface: eth1 / Interface 1
  • Original source: 0.0.0.0/0 (everything)
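In iptables terms, those two rules can be sketched roughly like this (assumed syntax and interface names based on this post; the actual appliance rules may differ):

```shell
# Sketch of the vLab router NAT rules, assuming iptables and the
# interface/subnet names used in this post. Requires root on the router.

# NETMAP in PREROUTING: rewrite destinations 192.168.150.x to 192.168.149.x,
# keeping the host part of the address intact
iptables -t nat -A PREROUTING -i eth0 -d 192.168.150.0/24 \
  -j NETMAP --to 192.168.149.0/24

# MASQUERADE in POSTROUTING: rewrite the source of everything leaving eth1
# to the IP configured on eth1 (192.168.149.2)
iptables -t nat -A POSTROUTING -o eth1 -j MASQUERADE
```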

So let's follow a packet coming from our backup server 192.168.149.22. It enters on interface 0 and matches the NETMAP rule. The destination is translated:

IP Packet:
  • Source: 192.168.149.22
  • Destination: 192.168.150.36 → 192.168.149.36
  • Message: Hello from Backup Server
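The destination rewrite keeps the host part of the address and swaps the network part. A tiny shell sketch of that translation (a hypothetical helper for illustration, not part of the appliance):

```shell
# Mimic the NETMAP destination rewrite for a /24: keep the last octet,
# replace the masked network 192.168.150.x with the real 192.168.149.x.
netmap_dest() {
  host=${1##*.}                 # host part, e.g. 36
  echo "192.168.149.${host}"
}

netmap_dest 192.168.150.36      # prints 192.168.149.36
```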

Again, the routing is solved by marking; I will show this later on. The packet is forwarded to interface 1. There the Post Routing Masquerade rule kicks in: it replaces the source IP with the IP set on interface 1.

IP Packet:
  • Source: 192.168.149.22 → 192.168.149.2
  • Destination: 192.168.150.36 → 192.168.149.36
  • Message: Hello from Backup Server


SF0006_backup_copy will be able to receive the message and respond back to the router. The router will reverse the whole sequence and the packet is delivered.

So how is the marking set up? Well this is a bit tricky and I hope I am explaining it right.

When a packet enters on interface 0 (eth0), it will be marked with value 0x6 and bitmask 0xffffffff (32-bit) if the destination is 192.168.150.x/24. Remember, this rule is applied before the NAT rules.
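As a sketch, such a mark rule would look something like this in iptables (assumed syntax):

```shell
# Mark packets arriving on eth0 for the masked subnet with 0x6.
# The mangle PREROUTING chain runs before the nat PREROUTING chain,
# so the destination is still the untranslated 192.168.150.x here.
iptables -t mangle -A PREROUTING -i eth0 -d 192.168.150.0/24 \
  -j MARK --set-mark 0x6/0xffffffff
```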

Then, if we look at the IP routing table, we will see that all the traffic should leave via interface 0 / eth0, with the production router configured as the default gateway.

If we look at the IP rules, we will see that an fwmark is set. It says that all traffic matching 0x2 with bitmask 0x2 should use an alternative table 2.

Now this is a bit tricky, as you would expect the rule to look for mark 6. However, the fwmark rule implies a bitmask 0x2. This bitmask works like a filter, only allowing bits to pass if the bitmask has that bit set to 1.

So let's convert the last byte to binary:
0x6            = 0000 0110
bitmask 0x2    = 0000 0010
Result 0x6/0x2 = 0000 0010

0x2            = 0000 0010
bitmask 0x2    = 0000 0010
Result 0x2/0x2 = 0000 0010


You can see that 0x6/0x2 now matches 0x2/0x2, and so routing table 2 is chosen.
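You can verify this mask arithmetic with plain shell arithmetic:

```shell
# The fwmark test: (packet mark & mask) must equal (rule value & mask).
# 0x6 & 0x2 = 0x2, and 0x2 & 0x2 = 0x2, so the marked packet matches.
if [ $(( 0x6 & 0x2 )) -eq $(( 0x2 & 0x2 )) ]; then
  echo "match: routing table 2 is used"
fi
```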

If we take a look at this alternative table 2, we will see that it indeed says to forward traffic for 192.168.149.0/24 via interface 1 / eth1, towards our isolated network.
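The policy routing pieces described above boil down to something like this (assumed iproute2 syntax, shown for illustration):

```shell
# If the fwmark matches 0x2/0x2, consult the alternative table 2...
ip rule add fwmark 0x2/0x2 lookup 2

# ...which sends 192.168.149.0/24 out via eth1 into the isolated network
ip route add 192.168.149.0/24 dev eth1 table 2
```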

Static Mapping

Now that we know how the basic settings work, we can look at static mapping. Static mapping is an alternative on top of the NETMAP. Let's look, for example, at our server SF0006_backup_copy (192.168.149.36). If we want to reach it, we have to connect to 192.168.150.36. Our computer knows, thanks to the static routes, that it must send packets for 192.168.150.36 to our vLab router 192.168.149.50.

If other clients in the same subnet want to talk to this server, they will have to add the static routes manually. There is, however, another way. If, for example, you have a free IP 192.168.149.136 in production, you can map this IP to our server in the isolated environment. Other clients can simply connect to 192.168.149.136 and the router will do the translation to SF0006_backup_copy (192.168.149.36).

Static mapping is part of the virtual lab configuration. You can see I have enabled it

I added a mapping in VM Network so that the production IP 192.168.149.136 will be mapped to the isolated IP 192.168.149.36.

The result is that SF0006_backup_copy is reachable on 2 addresses:
  • 192.168.150.36 via the Netmap rule
  • 192.168.149.136 via Static mapping


If we look under the hood, a couple of extra rules are added.


Static mapping adds 2 rules, one DNAT & one SNAT.

The DNAT rule in the Pre Routing stage:
  • When: Pre Routing
  • Type: DNAT
  • Coming from interface: eth0 / Interface 0
  • Exit interface: * (any)
  • Original destination: 192.168.149.136
  • Translated destination: 192.168.149.36

The SNAT rule in the Post Routing stage:
  • When: Post Routing
  • Type: SNAT
  • Enter interface: * (any)
  • Exit interface: eth1 / Interface 1
  • Original source: * (any)
  • Translated source: 192.168.149.2
  • Destination: 192.168.149.36
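Sketched in iptables terms (assumed syntax, for illustration only), the static-mapping pair would look roughly like:

```shell
# DNAT: clients connect to the reserved production IP 192.168.149.136,
# which is rewritten to the isolated server 192.168.149.36
iptables -t nat -A PREROUTING -i eth0 -d 192.168.149.136 \
  -j DNAT --to-destination 192.168.149.36

# SNAT: traffic leaving eth1 towards 192.168.149.36 gets the router's
# isolated-side IP as source, so replies come back through the router
iptables -t nat -A POSTROUTING -o eth1 -d 192.168.149.36 \
  -j SNAT --to-source 192.168.149.2
```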

In 2 stages the source and destination will be rewritten:

IP Packet:
  • Source: 192.168.149.22 → 192.168.149.2
  • Destination: 192.168.149.136 → 192.168.149.36
  • Message: Hello from Backup Server

An additional mark rule will be created so that not only traffic going to 192.168.150.x is marked with 0x6, but also traffic going to 192.168.149.x.


