Adding AD/OU users to VBO365 via PowerShell

For the Veeam fans out there, you must have been living under a rock if you don't know that there is a new backup product for Office 365 (Veeam Backup for Office 365, or VBO365 for short). It allows you to back up mailbox items like mail, calendar items, etc.

While 1.0 has already been released, 1.5 is currently in public beta. One of the cool things it brings is scalability, which a lot of users have been asking for. However, it also brings full automation support in the form of a complete REST API and a complete PowerShell module. In this blog post I want to show you the power you get with the new PowerShell module.

Quite often I get asked how to add only a selected set of users to a job. For example, a company has 4000 mailboxes but only wants to protect a certain subset of mailboxes in a certain job. This makes even more sense with v1.5, since you can define multiple repositories with different retentions. So maybe for the helpdesk guys you don't really want to keep backups too long, but for the managers, you want to keep the mails backed up for 8 years. Handpicking those users per job can be a tedious task.

With the new PowerShell module, you can automate this task. There is a new cmdlet called "Add-VBOJob" that allows you to define a new job. It takes the following parameters:

  • Organization (Get-VBOOrganization)
  • Target Repository (Get-VBORepository)
  • Mailboxes (Get-VBOOrganizationMailbox)
  • Schedule Policy (New-VBOJobSchedulePolicy)
  • Name

To see it in action, I made a sample script that queries Active Directory and gets all users in a certain OU. Then, based on those users, you can build a list of email addresses that you want to add. Armed with that list, you can use "Get-VBOOrganizationMailbox" to select the correct mailboxes.

You can find the script here. It should be quite straightforward. Here are some screenshots of it in action.

First of all, the module is in "C:\Program Files\Veeam\Backup365\Veeam.Archiver.PowerShell". So you can just execute "Import-Module 'C:\Program Files\Veeam\Backup365\Veeam.Archiver.PowerShell'". However, the $installpath trick in this script tries to find the installation directory even if you did not install VBO365 in the default location.

Now as you can see from the output, it found 3 users in the OU:

  • bbols@x.local
  • ppeeters@x.local
  • tbruyne@x.local

The script "builds" the email address list based on the SamAccountName, but of course, if you have a different policy, you can change the example. For instance, I imagine quite a few companies have something like FirstName.LastName@company.com. By the way, if you are wondering: "x.local" isn't a real DNS name, so how does that work with VBO365? Well, it seems that 1.5 will also support on-premises Exchange and hybrid deployments.

After building the email list, the script creates the job.

If we check the job, you will see those email addresses (mailboxes) were successfully added to the job.

Well, in this case it was only 3 users in my test lab, but I can imagine that if you need to add 500 users, you will be grateful not having to add them one by one. Also, you could do this in a for loop, going over multiple OUs and creating multiple jobs. Finally, if you are going to use this in production once it is GA, I would recommend validating that you have the same number of users in the OU as in the job. In this example, the script simply checks all the mailboxes (Get-VBOOrganizationMailbox) and verifies whether the email address associated with a mailbox is in the initial email list. If it is, the mailbox is added to the job.
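The whole flow described in this post can be condensed into a short PowerShell sketch. Treat it as an illustration only: the OU path, organization name, job name and repository name are placeholders, and the parameter names follow the bullet list above, so verify everything against the module shipped with your beta build before running it.

```powershell
# Illustration only -- OU path, organization and repository names are placeholders.
Import-Module 'C:\Program Files\Veeam\Backup365\Veeam.Archiver.PowerShell'

# 1. Get the users of one OU and build the email list (SamAccountName policy)
$users  = Get-ADUser -Filter * -SearchBase 'OU=Helpdesk,DC=x,DC=local'
$emails = $users | ForEach-Object { '{0}@x.local' -f $_.SamAccountName }

# 2. Keep only the organization mailboxes whose email address is in that list
$org       = Get-VBOOrganization -Name 'x.local'
$mailboxes = Get-VBOOrganizationMailbox -Organization $org |
             Where-Object { $emails -contains $_.Email }

# 3. Create the job with those mailboxes
$repo = Get-VBORepository -Name 'Default Backup Repository'
Add-VBOJob -Name 'Helpdesk OU' -Organization $org -Repository $repo -Mailbox $mailboxes
```

As a sanity check afterwards, compare the number of mailboxes in the job with the number of users found in the OU.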


Gathering your Veeam Agents in Veeam Backup & Replication

So with the new upcoming version of Veeam Agent for Windows, you will be able to back up directly to a Backup & Replication server. Not everybody knows this, but you will also be able to back up to Veeam Backup & Replication Free Edition, provided you have a license for the Veeam Agent for Windows. This might be important for smaller shops that have only a couple of machines and do not have a Veeam Backup & Replication license.

The steps to enable this are not so difficult, but without the GA product there is no documentation, so it might be hard to figure out how it all ties together. So here are the 7 steps you need to take to get it all working. Special thanks to Clint Wyckoff, who shared these instructions internally.

Step 1: Start by installing Veeam Backup & Replication

A fairly simple start. Download Veeam Backup & Replication Free Edition, mount the ISO on your target server, and click the install button to start the installation. Basically, in this example, we did a next-next-next-finish install. If you are doing this in production, it might actually be good to read what you are doing. Notice that in the license step I did not assign any license, so the free mode will be installed.

Step 2: Enable full functionality view

The next step is where one of my partners got stuck. You need to enable the "Full Functionality" view to get through the next steps. By default, you get the "Free Functionality" view, which shows you only the options you can use in Free mode. However, if you add a Veeam Agent for Windows license, you will unlock more functionality than is available by default in the free mode for your agent backups.

To enable it, go to the main menu, select View and then select the "Full Functionality" mode.

Step 3: Add the Veeam Agent for Windows license to Backup & Replication

This might also be confusing, but you do not need to add the license during the Veeam Agent for Windows install. Rather, you add it to Veeam Backup & Replication; then, when you connect a Veeam Agent for Windows to VBR, it will acquire the license from the VBR server. This is good because you get a central place to manage the license.

Go to the main menu, but this time choose "License". In the popup, click the install license button and select the Veeam Agent for Windows license file (.lic) in the file browser. The result should be that the license is installed, but the VBR server itself remains in Free mode.

Step 4: Define permissions

The next step is to define the permissions on your repository. Go to the "Backup Infrastructure" section and click the "Backup repositories" node. Then select the repository you want to assign rights to. Finally, click "Agent Permissions". In the popup window, you will be able to assign permissions.

For this tutorial, I made a separate user called "user1", just to show you that you can set very granular permissions.

Step 5: Install the client

Installing the agent on another machine should be fairly trivial. However, in this setup, we chose not to configure the backup during the install nor to create a recovery medium. I would, however, highly recommend creating a recovery medium so that you can execute bare-metal recoveries if needed.

Step 6: Configure the client

Once the product is installed, we can configure it. To open the control panel, go to your system tray. A new icon should have appeared with a green V. Because we have not configured anything yet, it should also have a small blue question mark on it. Right-click it and select control panel.

When the control panel appears, ignore the fact that it does not have a license (click no). Click configure backup to start the configuration.

Finally, in the backup wizard, select Veeam Backup & Replication repository as the target. Specify the FQDN/IP and the credentials. When you click next, the permissions are checked and the license is acquired from the backup server. In the next step, you are able to select the repository.

By the way, if a user connects without permissions on the repository, the configuration wizard will refuse to go to the next step.

Step 7: Run the backup

With the configuration done, you are ready to run the backup. You can see the backup job and the backup itself from the Veeam Backup & Replication console.

In the Veeam Agent for Windows, if you click the license tab, you will also see that the agent is licensed through Veeam Backup & Replication.

So what's next?

Well, you can explore what other functionality is enabled when you back up to a free edition. One cool feature is the ability to "backup copy" your job to a second location. For example, in the following screenshot, I defined a repository on another drive and then ran a backup copy job to that second location.


Under the hood: How ReFS block cloning works

In the latest version of Veeam, 9.5, there is a new feature called ReFS block cloning integration. It seems that the ReFS block cloning limitations confuse a lot of people, so I decided to look a bit under the hood. It turns out that most limitations are just basic limitations of the API itself.

To understand better how it all works, I made a small project called refs-fclone. The idea is to give it an existing source file and then duplicate that file to a non-existing target file with the API. Basically, creating a synthetic full from an existing VBK.

It turns out that the idea was not so original. During my Google quests for more information (because some parts didn't work), it appeared that a fellow hacker had made the exact same tool. I must admit that I reused some of his code, so you can find his original code here.

Nevertheless, I finished the project, just to figure out for myself how it all works. In the end, the API is pretty "easy", I would say. I don't want to go over the complete code, but I will highlight some important bits. If you don't understand the C++ code, just ignore it and read the text underneath it. I tried to put the important parts in bold.


Before I even got started, my initial code did not want to compile. I couldn't figure it out because I had the correct references in place, but for some reason it could not find "FSCTL_DUPLICATE_EXTENTS_TO_FILE". So I started looking into my project settings. It turned out the project was set to compile with Windows 8.1 as a target, and when I changed it to 10.0.10586.0, all of a sudden it could find all references.

This shows an important lesson. This code is not meant to run on Windows 2012 because that OS simply does not support the API call. Many customers have been asking whether the ReFS integration will work on Windows 2012, and the answer is simple: no. At the time it was developed, the API call didn't exist. Also, you will need to have the underlying volume formatted with Windows 2016 because, again, the ReFS version in 2012 does not support this API call.

So let's look at the code. First, before you clone blocks, there are some requirements which I want to highlight in the code itself:
// preallocate the target: set its logical size to the source file's size
FILE_END_OF_FILE_INFO preallocsz = { filesz };
SetFileInformationByHandle(tgthandle, FileEndOfFileInfo, &preallocsz, sizeof(preallocsz));
This bit of code defines the end of the file. Basically, it tells Windows how big the file should be. In this case the size is filesz, which is the original file's size. Why is that important? Well, to use the block clone API, we need to tell it where it should copy its data to: basically a starting point plus how much data we want to copy. But this starting point has to exist, so if we want to make a complete copy, we have to resize the target to be as big as the original.

// if the source file is sparse, make the target sparse as well
if (filebasicinfo.FileAttributes & FILE_ATTRIBUTE_SPARSE_FILE) {
    FILE_SET_SPARSE_BUFFER sparse = { true };
    DeviceIoControl(tgthandle, FSCTL_SET_SPARSE, &sparse, sizeof(sparse), NULL, 0, dummyptr, NULL);
}
Next is the sparse part. The "if" statement checks whether the source file is a sparse file, and if it is, we make the target (tgthandle) sparse as well. So what is a sparse file? Well, if a file is not sparse, it allocates all its data on disk when you resize it, even if you haven't written anything to it yet. A sparse file only allocates space when you write non-zero data somewhere. So even if it looks like it is 15GB big, it might only consume 100MB on disk, because the rest is not really allocated.

Why is that important? Well, again, the API requires that source and target files have the same setting. This code actually runs before the resizing part. The reason is simple: if you do not make the target sparse first, the resize will allocate all the space on disk, even though we haven't written to it. Not a great way to make space-less fulls.
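You can see the same apparent-size vs. allocated-size behavior on any filesystem with sparse file support. A small Python illustration (not ReFS-specific, just to make the concept tangible):

```python
import os
import tempfile

# Create an empty file and resize it to 1 GiB without writing any data.
# On a filesystem with sparse support, this allocates (almost) nothing.
fd, path = tempfile.mkstemp()
os.close(fd)
os.truncate(path, 1 << 30)

st = os.stat(path)
apparent = st.st_size            # logical size: 1 GiB
allocated = st.st_blocks * 512   # bytes actually allocated on disk: far less
print(apparent, allocated)
os.remove(path)
```

The file reports a 1 GiB size while barely touching the disk, which is exactly why the tool marks the target sparse before resizing it.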

// read the integrity stream setting from the source and apply it to the target
if (DeviceIoControl(srchandle, FSCTL_GET_INTEGRITY_INFORMATION, NULL, 0, &integinfo, sizeof(integinfo), &written, NULL)) {
    DeviceIoControl(tgthandle, FSCTL_SET_INTEGRITY_INFORMATION, &integinfo, sizeof(integinfo), NULL, 0, dummyptr, NULL);
}
Finally, this bit gets the integrity stream information from the source file and copies it to the target file. Again, both have to be the same for the code to allow block cloning.

This shows that the source and target file basically have to be pretty much the same. This partially explains why you need an active full on your chain before block cloning starts to work: the old backup files might not have been created with ReFS in mind!

Also, for integrity streams to work, we don't need to do anything fancy. We just need to tell ReFS that this file should be checked.

The Cool Part

for (LONGLONG cpoffset = 0; cpoffset < filesz.QuadPart; cpoffset += CLONESZ) {
    LONGLONG cpblocks = CLONESZ;
    // the last chunk only copies what is left of the file
    if ((cpoffset + cpblocks) > filesz.QuadPart) {
        cpblocks = filesz.QuadPart - cpoffset;
    }
    DUPLICATE_EXTENTS_DATA clonestruct = { 0 };
    clonestruct.FileHandle = srchandle;
    clonestruct.ByteCount.QuadPart = cpblocks;
    clonestruct.SourceFileOffset.QuadPart = cpoffset;
    clonestruct.TargetFileOffset.QuadPart = cpoffset;
    DeviceIoControl(tgthandle, FSCTL_DUPLICATE_EXTENTS_TO_FILE, &clonestruct, sizeof(clonestruct), NULL, 0, dummyptr, NULL);
}
That's it. That is all that is required to do the real cloning. So how does it work? Well, first there is a for loop that goes over all the chunks of data in the source file. There is one limitation of the block clone API: you can only copy a chunk of up to 4GB at a time. In this project, CLONESZ is defined as 1GB to be on the safe side.

So imagine you have a file of 3.5GB. The for loop calculates that the first chunk starts at 0 bytes and that the amount of data to copy is 1GB. The next time, it calculates that the next chunk starts at 1GB and we again need to copy 1GB, and so on.

However, the fourth time, it detects that only 0.5GB remains, and instead of copying 1GB, we copy only what is left (file size minus where we are now).

But how do we call the API? Well, first we need to create a struct (think of it as a set of variables). The first field references the original file. ByteCount says how much data we want to copy (mostly 1GB). Finally, the source and target file offsets are filled in with the correct starting point. Since we want a duplicate, the starting point for the block clone is the same in both files.

Finally, we just tell Windows to execute "FSCTL_DUPLICATE_EXTENTS_TO_FILE" on the target file, which invokes the API, and we give it the struct we filled in. So the clone API call itself plus filling in the variables is only 5 lines of code.
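To make the chunking arithmetic concrete, here is the same logic in a few lines of Python (an illustration of the loop above, not code from the tool):

```python
CLONESZ = 1 << 30  # copy at most 1 GiB per call; the API limit is about 4 GiB

def clone_regions(filesz):
    """Return the (offset, bytecount) pairs the loop above would pass to the API."""
    regions = []
    for cpoffset in range(0, filesz, CLONESZ):
        # the final chunk only covers what remains of the file
        cpblocks = min(CLONESZ, filesz - cpoffset)
        regions.append((cpoffset, cpblocks))
    return regions

GiB = 1 << 30
# a 3.5 GiB file: three full 1 GiB chunks, then the remaining 0.5 GiB
for offset, count in clone_regions(3 * GiB + GiB // 2):
    print(offset, count)
```

For a 3.5 GiB file this yields four regions: three full 1 GiB chunks and a final 0.5 GiB remainder, exactly as described above.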

The important bit here is that you cannot just copy files on a ReFS volume and expect ReFS to do the block cloning. An application really has to tell ReFS to clone data from one file to the other, and both files have to be on the same volume.

This has one big advantage though: the API just clones data, even if Veeam has compressed or encrypted that data. Since Veeam actively tells ReFS to clone the data, ReFS doesn't have to figure out which data is duplicate; it just does the job. That is a major advantage over deduplication: you can still secure and compress your files. Also, since the clone is just a simple call during the backup, it doesn't require any post-processing. And no post-processing means no exorbitant CPU usage or extra I/O to execute the call.

Seeing it in action

This is how E:\CP looks before executing refs-fclone. Nothing special: an empty directory, and the ReFS volume has 23GB free.

Now let's copy a VBK to E:\CP with the tool. It shows that the source file is around 15GB and that it is cloning 1GB at a time. Interestingly enough, you can see that the last run just copies the remainder of the data.

This run took around 5 seconds at most to execute this "copy". It seems like nothing really happened. However, if we check the result on disk, we see something interesting:

The free disk space is still 23GB. However, we can see that a new file was created that is 15GB+. Checksumming both files gives exactly the same result.

Why is this result significant? Well, it shows that the interface to the block clone API is pretty straightforward. It also means that although it looks like Veeam is cloning the data, it is actually ReFS that manages everything under the hood. From a Veeam perspective (and also an end-user perspective), the end result looks exactly like a complete full on disk. So once the block clone API call is made, there is no way to undo it or to get statistics about it. All of the complexity is hidden.

Why do we need aligned blocks?

Finally, I want to share this result with you.

In the beginning, I made a small file with some random text like this. In this example, it has 10 letters in it, which means it is 10 bytes on disk. When I tried the tool on it, it didn't work (as you can see), but the tool did work on Veeam backup files.

So why doesn't it work? Well, the clone API has another important limitation: your clone regions must cover a complete set of clusters. By default the cluster size is 4KB (although for Veeam it is strongly recommended to use 64KB to avoid some issues). So if I want to make a call, the starting point has to be a multiple of 4KB. Zero is, in a sense, a multiple of 4KB, so that's OK. However, the number of bytes you want to copy also has to be a multiple of 4KB, and 10 bytes clearly is not. When I padded the file to be exactly 4KB (adding another 4086 characters), everything worked again.
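The cluster rule is easy to express in code. A short Python illustration using the default 4KB cluster size:

```python
CLUSTER = 4096  # default ReFS cluster size (Veeam recommends formatting with 64KB)

def is_clonable(offset, bytecount):
    """A clone region is only valid if it starts on a cluster boundary
    and spans a whole number of clusters."""
    return offset % CLUSTER == 0 and bytecount % CLUSTER == 0

def padding_needed(size):
    """How many bytes of padding grow a file to the next cluster boundary."""
    return (-size) % CLUSTER

print(is_clonable(0, 10))     # the 10-byte text file: not clonable
print(padding_needed(10))     # 4086 bytes of padding reach exactly 4KB
print(is_clonable(0, 4096))   # the padded file: clonable
```

This is exactly what happened with the 10-byte test file: offset 0 was fine, but the byte count was not a cluster multiple until the file was padded to 4KB.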

This shows a very important limitation. For block cloning to work, the data has to be aligned, since you cannot copy unaligned data. Veeam backup files are by default not aligned; thus it is required to run an active full before the block clone API can be used. To give you a visual idea of what this means: on top, a default Veeam backup file; at the bottom, an aligned file, which is required for the ReFS integration.

Due to compression, data blocks are not always the same size. So, to save space, they can just be appended one after another. However, for the block clone API, we need to align the blocks. The result is that we sometimes have to pad a cluster with empty data. So why do we need to align; can we not just clone more data? After all, it doesn't consume more space.

Well, take for example the third block. Unaligned, it spans clusters 2, 3 and 4. Aligned, it sits only in clusters 3 and 4. So because of the aligned blocks, we have to clone less data. You might think: why does it matter, since cloning does not take extra space?

Well, first of all, it keeps the files more manageable without filling them with junk data. If you clone clusters 2 and 4 from the unaligned file, you basically add data that is not required. Next, if you delete the original file, that data does keep using space on disk: by referencing it, you basically tell ReFS not to delete the data blocks as long as they are referenced by some file. So the longer these chains continue, the more junk data you might accumulate.

So this is the reason why you need an active full. A full backup has to be created with ReFS in mind; otherwise the blocks are not aligned, and in that case Veeam refuses to use the API.

If you want to read more about block sizes, I recommend this article from my colleague Luca.

One more thing

Here is a fun idea. You could use the tool together with a post-process script to create GFS points on a primary chain. Although not recommended, you could for example run a script every month that "clones" the last VBK to a separate folder. The clone is instant, so it doesn't take a lot of time or much extra space. You could script your own retention or manually delete files. Clearly this is not really supported, but it would be a cool way to keep, for example, one full VBK as a monthly full for a couple of years.


Recovery Magic with Veeam Agent for Linux

This week the new Veeam Agent for Linux was released. It includes file-level recovery but also bare-metal recovery via a Live CD. If you just want to do a bare-metal recovery, it is fairly easy to use. But you can do more than just a 1-to-1 restore: you also have the option to switch to the command line and change your recovered system before (re)booting into it.

You might wonder why. Well, because it gives a lot of interesting opportunities. In this example, I have a CentOS 7 installation that I want to restore. However, the system was running LVM, and during the restore I decided not to restore the root as an LVM volume but rather directly to disk. Maybe the other way around would make more sense, but it is just for the fun of showing you the chrooting process.

Basically, I did a full system recovery (restore whole) but, just before restoring, I selected the LVM setup, deleted it, and restored the LVM volume directly back to /dev/sda2. Here is the result.

I have a GIF of the whole process here, but browsers do not seem to like it. You can download it to see the whole setup.

Because we altered the partitions, the system will be unbootable. If you try to boot, you might see the kernel load, but it will fail because it cannot find its filesystem. Here is, for example, a screenshot of such a system that fails to boot because we did not correct for the changes described below. Again, this is only needed when you change the partition setup drastically. If you do a straight restore, you can just reboot the system without any manual edits.

Once restored, I went back to the main menu and selected "Switch to command line".

Once we are there, we need a couple of things. Basically, we will mount our new system and chroot into it. You can start by checking whether your disk was correctly restored with "fdisk -l /dev/sda", for example. It shows you the layout, which makes the next commands easier. Execute the following commands, but do adapt them to your system (you might have a different layout than I do). Make sure to mount your root filesystem before any other filesystem.

mkdir -p /chroot
mount /dev/sda2 /chroot
mount /dev/sda1 /chroot/boot
mount -t proc none /chroot/proc
mount -o bind /dev /chroot/dev
mount -t sysfs sys /chroot/sys
chroot /chroot

The output should be your system's shell.

OK, so we are in the shell. For CentOS 7 we have to do two things: first, change /etc/fstab, and second, update the grub2 config. Fstab is quite straightforward. Use "vi /etc/fstab" to edit the file with vi. Then update the line that mounts your root "/". In my case, I had to change "/dev/mapper/centos-root" to "/dev/sda2".

Now we have to update grub2 (and this is why we actually need the chroot). Use "vi /etc/default/grub" to edit the default grub config. Then remove rd.lvm.lv=centos/root. Here are before and after screenshots. If you are going the other way, you might have to add LVM detection.



Now we still need to apply the defaults by running "grub2-mkconfig -o /boot/grub2/grub.cfg".

Now exit the chroot by typing "exit". Then unmount all the (pseudo)filesystems we mounted earlier, and you are ready to reboot. You can use "mount" without arguments to check the mounted filesystems. Make sure to unmount /chroot last.

umount /chroot/proc
umount /chroot/sys
umount /chroot/dev
umount /chroot/boot
umount /chroot

Now reboot and see your system transformed. You can type "exit" to go back to the "GUI" interface and reboot from there, or just type "reboot" directly from the command line.


Figuring out SureBackup and a remote virtual lab

The Idea

If you want to set up a SureBackup job, the most difficult part is setting up the virtual lab. In the past, great articles have been written about how to set them up, but a common challenge is that the backup server and the virtual lab router have to be in the same network. In this article, I want to write out a small experiment I did the other day to see if I could easily get around this. This question pops up once in a while, and now at least I can tell that it is possible.

In this example, the virtual lab is called "remotelab". A Linux appliance has been made, called "remotevlab", which sits in the same network as the backup server. It routes requests from the backup server to a bubble network called "remotevlab VM Network". This mimics the production range and reuses the same IP range. To let you communicate with that segment anyway, the appliance uses masquerading. In my example, I used a masquerade range of 192.168.5.x: to access the AD in the bubble, I contact its 192.168.5.x address, and the router translates it when the packet passes.

For those who have already set up virtual labs, this is probably not rocket science. However, for this scheme to work, the backup server needs to be aware that it should push IP packets for the 192.168.5.x range to the remotevlab router. So when you start a SureBackup job, it automatically creates a static route on the backup server. When the backup server and the remotevlab router are in the same production network, all is good.

However, when they are in different networks, it suddenly doesn't work anymore. That is because your gateway is probably not aware of the 192.168.5.x segment. So when the packet is sent to that gateway, it just drops it or routes it to its own default gateway (which in turn might drop it). One way to resolve the issue is to create those static routes in the uplink router(s), but network admins are not per se the infrastructure admins, and most of the time they are quite reluctant to add static routes for appliances they do not control (most of the time they are quite reluctant to execute any infra admin request, but on a sunny day they might consider opening some ports). So let's look at the following experiment.

In my small home lab, I don't really have two locations connected via MPLS or different routers. So, to emulate it, I created an "internalnet" which is not connected to a physical NIC. I connected the remotevlab there (the production side of the virtual lab router). In this network, I use a small range, 192.168.4.x.

The Connection Broker

OK, so far so good. Now we need a way for the v95backup server to talk to the remotevlab. To do this, a small Linux VM was created with CentOS 7 Minimal. It has two virtual network adapters. I call them eno1 and eno3, but these are just truncated names, as you will see in the config. eno1 is assigned an IP in a production range. In this case it is the same range as the v95backup server, but you will soon see that this doesn't have to be the case. The other adapter, eno3, is connected to the same network as the remotevlab, and this is by design: in fact, it acts as the default gateway for that segment. Here are some copies of the configuration:

# [root@vlabcon network-scripts]# cat ifcfg-eno16780032
# [root@vlabcon network-scripts]# cat ifcfg-eno33559296
You will also need to set up routing (forwarding) and a static route, so that the appliance is aware of the masqueraded networks. This is fairly simple: create a route script
#[root@vlabcon network-scripts]# cat route-eno33559296 via dev eno33559296
and change the correlated kernel parameter. You can check with "sysctl -a" whether net.ipv4.ip_forward is already set to forwarding (on a clean install it should not be).
# enable forwarding persistently via /etc/sysctl.d/90-forward.conf
echo "net.ipv4.ip_forward = 1" > /etc/sysctl.d/90-forward.conf
sysctl -p /etc/sysctl.d/90-forward.conf
# check with: sysctl -a | grep net.ipv4.ip_f
So basically we set up yet another router. So how do we talk to the appliance without having to add a static route towards it? Well, we can use a layer 2 VPN. Any VPN software will do, but in this example I chose PPTPD. You might argue that it is not that secure, but this is not really about security; it is just about getting a tunnel. Plus, I'm not really a network expert, and PPTPD seemed extremely easy to set up. Finally, because the protocol is quite universal, you don't have to install any VPN software on the backup server: it is built into Windows. I followed this tutorial: https://www.digitalocean.com/community/tutorials/how-to-setup-your-own-vpn-with-pptp . Although it was written for CentOS 6, most of it applies to CentOS 7.

The first thing we need to do is download PPTPD. It is hosted in the EPEL repository, so you might need to add those repositories if you have not done so yet.
rpm -Uvh https://dl.fedoraproject.org/pub/epel/epel-release-latest-7.noarch.rpm
Then install the software
yum install pptpd
Now the first thing you need to do is assign addresses to the ppp0 adapter and to the clients logging in. To do so, configure localip (ppp0) and remoteip (clients) in /etc/pptpd.conf.
#added at the end of /etc/pptpd.conf
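The actual address lines were lost from the listing above. Purely as an illustration, and assuming the tunnel uses the 192.168.3.x range mentioned further down, the additions to /etc/pptpd.conf would look something like this (adapt the range to your lab):

```
# appended to /etc/pptpd.conf -- example addresses only, adapt to your lab
# localip = the appliance's ppp0 address, remoteip = pool for dial-in clients
localip 192.168.3.1
remoteip 192.168.3.100-150
```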
The next step is to create a client login. By default it uses a plaintext password. Again, since this is not really about security (we are not building tunnels over the internet here), that is quite OK. You set logins up in /etc/ppp/chap-secrets: "surebackup" is the login and "allyourbase" the password. "pptpd" is just the default server configuration name. The "*" means that any IP can use this login, so if you want, you can still add a bit of security by specifying only the backup server's IP.
#added at the end of /etc/ppp/chap-secrets
surebackup pptpd allyourbase *
I did not add the DNS config to /etc/ppp/options.pptpd, as we don't really need it. So now the only thing left to do is start the service and configure it to start at boot.
systemctl enable pptpd
systemctl restart pptpd

v95Backup configuration

With the server side done, we can now head over to the backup server. You can just add a new VPN connection and set the type to PPTP.

So the connection is called robo1 and we use a PPTP connection. Specify the username surebackup and password allyourbase. I also changed the adapter settings. By default, the PPTP connection will create a default route. That means that you would not be able to connect to other networks anymore once you connect to the appliance. To fix that, you can disable this behavior.

In the adapter settings > Networking tab > IPv4 > Advanced, uncheck "Use default gateway". I also turned the automatic metric off and gave it the number 5. Now, because you disabled the default gateway, the backup server no longer uses this connection except for the 192.168.3.x range. So it can no longer talk to the vlab router. To fix that, add a persistent route so that the remotevlab router can be reached.
route -p add mask metric 3 if 24
It should be straightforward, except for "if 24". Basically, this says: route it over interface 24, which in this example was the robo1 interface, as shown below (use "route print" to discover your interface number).
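Filled in with the ranges used in this setup, the persistent route would look something like the line below. Note that the gateway address (the appliance's ppp0 address in the 192.168.3.x range) is an assumption on my part; substitute your own ranges, gateway and interface number.

```
route -p add 192.168.4.0 mask 255.255.255.0 192.168.3.1 metric 3 if 24
```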

Now you have to make sure that the connection is up when you start the SureBackup test. One way to do this is to schedule a task that constantly checks the connection and redials it if it failed. For my test, I just disabled the SureBackup schedule and made a small PowerShell script. It basically dials the connection and then starts the job:

asnp veeampssnapin
rasdial robo1 surebackup allyourbase
get-vsbjob -name "surebackup job 2" | start-vsbjob -runasync

You may see a strange scheduling time; that is because I configured the task 10 minutes later and then restarted the backup server, just to see if it would work when nobody is logged in. Very importantly, as with other scheduled tasks, make sure that you have the correct rights to start the vsbjob. I configured the task to run with administrator rights.

The result: it just works. Here are some screenshots:

The Virtual Lab configuration. You can see that it is connected to the internalnet. Very important is that you point to the connection broker as the default gateway.

The vSphere network

Robo1 connected

The routing table on the backup server. Here you can see the static route sending 192.168.4.x to the PPTP connection. What is even nicer is that, because of the persistent static route we defined for 192.168.4.x, when SureBackup adds the 192.168.5.x route, Windows routes that traffic correctly to the network as well.

Finally, a successful test


The lab setup works and is relatively easy to build. If you made an OVF or live CD of the appliance, it would be pretty easy to duplicate this setup across multiple locations. You might want to consider smaller IP ranges though.

PPTP might not be the best protocol, so other VPN solutions could be considered. For example, you could drop another subnet if you were able to bridge the VPN port to the internal network directly, or if you could create a stretched layer 2 connection. However, my test was more about finding out what needs to be done to get this working. What I liked most is the good compatibility between Windows and Linux, and that I didn't need to set up any special software on the backup server.

One other use case: you could also allow other laptops in the network to access the virtual lab for testing. If they don't really need internet access (or you set up the correct masquerading/DNS in iptables/pptpd), they could just connect to the lab network with a predefined VPN connection in Windows, even if they are not in the same segment as the backup server (something the network team would really appreciate).

Appendix : Hardening with IPTables

For a bit more hardening, and to document the ports, I also enabled iptables (instead of firewalld). On my install firewalld was not installed/configured, but on yours you might need to remove it first. Check out http://www.faqforge.com/linux/how-to-use-iptables-on-centos-7/
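On a default CentOS 7 install that does ship with firewalld, disabling it before switching to iptables-services would look roughly like this:

# Stop firewalld and make sure it cannot come back at boot
systemctl stop firewalld
systemctl disable firewalld
# Optional: masking prevents other units from pulling it back in
systemctl mask firewalld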

I based the iptables configuration on the Arch Linux documentation found here: https://wiki.archlinux.org/index.php/PPTP_server#iptables_firewall_configuration

First we need to install the service and enable it at boot:
yum install iptables-services
systemctl enable iptables
Then I modified the config. Here is a dump of /etc/sysconfig/iptables:
#[root@vlabcon ~]# cat /etc/sysconfig/iptables
# sample configuration for iptables service
# you can edit this manually or use system-config-firewall
# please do not ask us to add additional ports/services to this default configuration
*filter
:INPUT ACCEPT [0:0]
:FORWARD ACCEPT [0:0]
:OUTPUT ACCEPT [0:0]
-A INPUT -m state --state RELATED,ESTABLISHED -j ACCEPT
-A INPUT -p icmp -j ACCEPT
-A INPUT -i lo -j ACCEPT
-A INPUT -p tcp -m state --state NEW -m tcp --dport 22 -j ACCEPT
-A INPUT -p tcp -m state --state NEW -m tcp --dport 1723 -j ACCEPT
-A INPUT -p 47 -j ACCEPT
-A FORWARD -i ppp+ -o eno33559296 -j ACCEPT
-A FORWARD -o ppp+ -i eno33559296 -j ACCEPT
-A INPUT -j REJECT --reject-with icmp-host-prohibited
-A FORWARD -j REJECT --reject-with icmp-net-unreachable
COMMIT

Basically I added two INPUT rules for the PPTP connection: open up TCP port 1723, and also open up the GRE protocol (-p 47). This shows a weakness of PPTP: you need to ask your firewall guys to open these up, and more importantly, GRE will probably not survive NAT/PAT. The good thing is that the overhead should be minimal, although that is not so important for these simple tests. To allow routing to occur, forwarding must be allowed between the ppp connections and the eno33559296 interface. Simply start iptables with:
systemctl start iptables
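Once the service is up, you can sanity-check that the rules were actually loaded and that the kernel is willing to forward between the interfaces (ip_forward must be 1 for the FORWARD rules above to have any effect; it should already be set as part of the pptpd setup):

# List the active rules with packet counters to confirm they loaded
iptables -nvL
# Forwarding must be enabled, otherwise the ppp+/eno rules never match
sysctl net.ipv4.ip_forward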
If everything is configured correctly, the setup should still work, but people who are able to reach the connection broker can no longer get to the virtual lab directly: they first need to make the PPTP connection.

Notice that no masquerading has been set up towards the remotevlab router (as there is in the Arch Linux doc). That is because the remotevlab router uses the connection broker as its default gateway, so its replies will always be sent back through the connection broker anyway.