2016/09/22

Figuring out Surebackup and a remote virtual lab

The Idea

If you want to set up a Surebackup job, the most difficult part is setting up the virtual lab. In the past, great articles have been written about how to set one up, but a common challenge is that the backup server and the virtual lab router have to be in the same network. In this article, I want to write out a small experiment I did the other day, to see if I could easily get around this. The question pops up once in a while, and now at least I can say that it is possible.


In this example, the virtual lab is called "remotelab". A Linux appliance called "remotevlab" has been created, which sits in the same network as the backup server. It routes requests from the backup server to a bubble network called "remotevlab VM Network", which mimics the production network and reuses the same IP range. To let you communicate with that segment, the appliance uses masquerading. In my example, I used a masquerade range of 192.168.5.x: if I want to access the AD server, I contact 192.168.5.103, and the router translates that to 192.168.1.103 as the packet passes through.
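Conceptually, the translation the appliance performs is similar to a 1:1 iptables NETMAP rule; this is only an illustration of the idea, not the appliance's actual configuration:
# illustration only: map the 192.168.5.x masquerade range 1:1 onto the production range
iptables -t nat -A PREROUTING -d 192.168.5.0/24 -j NETMAP --to 192.168.1.0/24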

For those who have already set up virtual labs, this is probably not rocket science. However, for this scheme to work, the backup server needs to know that it should route IP packets for the 192.168.5.x range to the remotevlab router. So when you start a Surebackup job, it automatically creates a static route on the backup server. When the backup server and the remotevlab router are in the same production network, all is good.
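The route it creates looks roughly like this, where 192.168.1.200 is a hypothetical production-side IP for the remotevlab router:
rem hypothetical appliance IP, used for illustration only
route add 192.168.5.0 mask 255.255.255.0 192.168.1.200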

However, when they are in different networks, it suddenly doesn't work anymore. That is because your gateway is probably not aware of the 192.168.5.x segment. So when a packet is sent to the router, it just drops it or routes it to its default gateway (which in turn might drop it). One way to resolve the issue is to create those static routes in the uplink router(s), but network admins are not per se the infrastructure admins, and most of the time they are quite reluctant to add static routes to routers they do not control (most of the time they are quite reluctant to execute any infra admin request at all, but on a sunny day they might consider opening some ports). So let's look at the following experiment.


In my small home lab, I don't really have two locations connected via MPLS or separate routers. To emulate that, I created an "internalnet" which is not connected to a physical NIC, and connected the remotevlab there (the production side of the virtual lab router). In this network, I use a small range, 192.168.4.x.

The Connection Broker

OK, so far so good. Now we need a way for the v95backup server to talk to the remotevlab. To do this, I created a small Linux VM with CentOS 7 minimal. It has two virtual network adapters. I call them eno1 and eno3, but these are just truncated names, as you will see in the config. eno1 is assigned an IP in a production range. In this case it is the same range as the v95backup server, but you will soon see that this doesn't have to be the case. The other adapter, eno3, is connected to the same network as the remotevlab, and this is by design: it acts as the default gateway for that segment. Here are copies of the configuration:

eno1:
# [root@vlabcon network-scripts]# cat ifcfg-eno16780032
TYPE=Ethernet
BOOTPROTO=static
IPADDR=192.168.1.199
NETMASK=255.255.255.0
GATEWAY=192.168.1.1
DNS1=8.8.8.8
DEFROUTE=yes
PEERDNS=yes
PEERROUTES=yes
IPV4_FAILURE_FATAL=no
IPV6INIT=no
IPV6_AUTOCONF=yes
IPV6_DEFROUTE=yes
IPV6_PEERDNS=yes
IPV6_PEERROUTES=yes
IPV6_FAILURE_FATAL=no
NAME=eno16780032
UUID=1874c74a-6882-435f-a465-f5fb11c60901
DEVICE=eno16780032
ONBOOT=yes
eno3:
# [root@vlabcon network-scripts]# cat ifcfg-eno33559296
TYPE=Ethernet
BOOTPROTO=static
IPADDR=192.168.4.1
NETMASK=255.255.255.0
DEFROUTE=no
PEERDNS=no
PEERROUTES=no
IPV4_FAILURE_FATAL=no
IPV6INIT=no
IPV6_AUTOCONF=no
IPV6_DEFROUTE=no
IPV6_PEERDNS=no
IPV6_PEERROUTES=no
IPV6_FAILURE_FATAL=no
NAME=eno33559296
DEVICE=eno33559296
ONBOOT=yes
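After editing both files, the settings can be applied without a reboot; this assumes the classic network service manages the ifcfg scripts, as on a minimal CentOS 7 install:
# reload the ifcfg scripts (assumption: legacy network service is in use)
systemctl restart network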
You will also need to set up routing (forwarding) and a static route, so that the appliance knows about the masqueraded network. This is fairly simple: create a route script
#[root@vlabcon network-scripts]# cat route-eno33559296
192.168.5.0/24 via 192.168.4.2 dev eno33559296
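The script is picked up at the next network restart; to load the route immediately, the same route can be added by hand:
# same route as in the script, applied to the running system
ip route add 192.168.5.0/24 via 192.168.4.2 dev eno33559296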
You also need to change the corresponding kernel parameter. You can check with sysctl -a whether net.ipv4.ip_forward is already set to 1 (on a clean install it should not be).
# enable forwarding via a drop-in file under /etc/sysctl.d
echo "net.ipv4.ip_forward = 1" > /etc/sysctl.d/90-forward.conf
sysctl -p /etc/sysctl.d/90-forward.conf
# check with sysctl -a | grep net.ipv4.ip_f
So basically we set up yet another router. Now, how do we talk to the appliance without having to add static routes to the uplink routers? We can use a layer 2 VPN. Any VPN software will do, but in this example I chose PPTPD. You might argue that it is not that secure, but this is not really about security; it is just about getting a tunnel. Plus, I'm not really a network expert, and PPTPD seemed extremely easy to set up. Finally, because the protocol is quite universal, you don't have to install any VPN software on the backup server: the client is built into Windows. I followed this tutorial: https://www.digitalocean.com/community/tutorials/how-to-setup-your-own-vpn-with-pptp . Although it was written for CentOS 6, most of it applies to CentOS 7.

The first thing we need to do is download PPTPD. It is hosted in the EPEL repository, so you might need to add that repository if you have not done so yet.
rpm -Uvh https://dl.fedoraproject.org/pub/epel/epel-release-latest-7.noarch.rpm
Then install the software
yum install pptpd
The first configuration step is to assign addresses to the ppp0 adapter and to the clients logging in. To do so, configure localip (ppp0) and remoteip (clients) in /etc/pptpd.conf:
#added at the end of /etc/pptpd.conf
localip 192.168.3.1
remoteip 192.168.3.2-9
The next step is to create a client login. By default it uses a plaintext password. Again, since this is not really about security (we are not building tunnels over the internet here), that is quite OK. You set logins up in /etc/ppp/chap-secrets: "surebackup" is the login, "allyourbase" the password, and "pptpd" is just the default server name. The * means the client may use any IP, so if you want, you can still add a bit of security by restricting that last field.
#added at the end of /etc/ppp/chap-secrets
surebackup pptpd allyourbase *
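For example, the last field can be a specific tunnel IP from the remoteip pool instead of *:
#restrict the login to one tunnel IP from the pool defined earlier
surebackup pptpd allyourbase 192.168.3.2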
I did not add the DNS config to /etc/ppp/options.pptpd, as we don't really need it. Now the only thing left to do is start the service and configure it to start at boot.
systemctl enable pptpd
systemctl restart pptpd
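A quick sanity check that the daemon is running and listening on the PPTP control port:
systemctl status pptpd
ss -tln | grep 1723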

v95Backup configuration

With the server side done, we can now head over to the backup server. You can just add a new VPN connection and set the type to PPTP.
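If you prefer scripting over clicking, newer Windows versions also ship the VpnClient PowerShell module; a minimal sketch, using the broker's production IP from above (the -SplitTunneling switch achieves the same as the default gateway tweak described below):
# sketch: create the PPTP connection from PowerShell instead of the GUI
Add-VpnConnection -Name robo1 -ServerAddress 192.168.1.199 -TunnelType Pptp -SplitTunneling -AllUserConnection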


The connection is called robo1 and uses PPTP. Specify the username surebackup and the password allyourbase. I also changed the adapter settings: by default, the PPTP connection will create a default route, which means that once you are connected to the appliance, you will no longer be able to reach other networks. To fix that, you can disable this behavior.


In the adapter settings > Networking tab > IPv4 > Advanced, uncheck "Use default gateway on remote network". I also turned off the automatic metric and entered the value 5. Now, because you disabled the default gateway, the backup server only uses this connection for the 192.168.3.x range, so it can no longer talk to the vlab router. To fix that, add a persistent route so that the remotevlab router can be reached.
route -p add 192.168.4.0 mask 255.255.255.0 192.168.3.1 metric 3 if 24
This should be straightforward, except for "if 24". Basically, this says: route it over interface 24, which in this example is the robo1 interface, as shown below (use "route print" to discover your interface number).
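Alternatively, the interface indexes (the Idx column) can be listed without printing the whole routing table; the PPP adapter shows up once the connection is dialed:
netsh interface ipv4 show interfaces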


Now you have to make sure that the connection is up when the Surebackup test starts. One way to do this is to schedule a task that constantly checks the connection and redials it if it has dropped. For my test, I just disabled the Surebackup schedule and made a small PowerShell script. It dials the connection and then starts the job:

# load the Veeam PowerShell snap-in (asnp = Add-PSSnapin)
asnp VeeamPSSnapin
# dial the PPTP connection to the connection broker
rasdial robo1 surebackup allyourbase
# start the Surebackup job without waiting for it to finish
Get-VSBJob -Name "surebackup job 2" | Start-VSBJob -RunAsync
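For the always-on variant mentioned above, a minimal checker sketch could be scheduled every few minutes (rasdial without arguments lists the active connections):
# sketch: redial robo1 only if the PPTP connection has dropped
$active = rasdial | Out-String
if ($active -notmatch "robo1") {
    rasdial robo1 surebackup allyourbase
}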

You may notice a strange scheduling time; that is because I configured the task to run 10 minutes later and then restarted the backup server, just to see if it would work with nobody logged in. Very important: as with other scheduled tasks, make sure you have the correct rights to start the vsbjob. I configured the task to run with administrator rights.
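For reference, such a task can also be registered from the command line; the script path and task name here are hypothetical:
rem hypothetical path/name - run the dial-and-start script elevated at boot
schtasks /create /tn "SurebackupDial" /tr "powershell.exe -File C:\scripts\dial-and-run.ps1" /sc onstart /ru SYSTEM /rl HIGHEST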



The result: it just works. Here are some screenshots:


The virtual lab configuration. You can see that it is connected to the internalnet. It is very important that you point to the connection broker (192.168.4.1) as the default gateway.


The vSphere network

Robo1 connected


The routing table on the backup server. Here you can see the static route for 192.168.4.x going to the PPTP connection. What is even nicer: because we defined the 192.168.4.x route, when Surebackup adds the 192.168.5.x route, Windows routes it correctly over the 192.168.3.x tunnel thanks to the persistent static route.


Finally, a successful test


Conclusion

The lab setup works, and the setup is relatively easy. If you made an OVF or live CD of the appliance, it would be pretty easy to duplicate this setup across multiple locations. You might need to consider smaller IP ranges.

PPTP might not be the best protocol, so other VPN solutions could be considered. For example, you might be able to remove another subnet if you could bridge the VPN port directly to the internal network, or if you could create a stretched layer 2 connection. However, my test was more about seeing what needs to be done to get this working at all. What I liked most is the good compatibility between Windows and Linux: I didn't need to install any special software on the backup server.

Another use case: you could also allow other laptops in the network to access the virtual lab for testing. If they don't really need internet access (or if you set up the correct masquerading/DNS in iptables/pptpd), they could just connect with a predefined VPN connection in Windows, even if they are not in the same segment as the backup server (something those network admins would also really appreciate).

Appendix: Hardening with IPTables

For a bit more hardening, and to document the ports, I also enabled iptables (instead of firewalld). On my install firewalld was not installed/configured, but you might need to remove it first. Check out http://www.faqforge.com/linux/how-to-use-iptables-on-centos-7/

I based the iptables configuration on the Arch Linux documentation found here: https://wiki.archlinux.org/index.php/PPTP_server#iptables_firewall_configuration

First we need to install the service and enable it at boot
yum install iptables-services
systemctl enable iptables
Then I modified the config. Here is a dump of it:
#[root@vlabcon ~]# cat /etc/sysconfig/iptables
# sample configuration for iptables service
# you can edit this manually or use system-config-firewall
# please do not ask us to add additional ports/services to this default configuration
*filter
:INPUT ACCEPT [0:0]
:FORWARD ACCEPT [0:0]
:OUTPUT ACCEPT [0:0]
-A INPUT -m state --state RELATED,ESTABLISHED -j ACCEPT
-A INPUT -p icmp -j ACCEPT
-A INPUT -i lo -j ACCEPT
-A INPUT -p tcp -m state --state NEW -m tcp --dport 22 -j ACCEPT
-A INPUT -p tcp -m state --state NEW -m tcp --dport 1723 -j ACCEPT
-A INPUT -p 47 -j ACCEPT
-A FORWARD -i ppp+ -o eno33559296 -j ACCEPT
-A FORWARD -o ppp+ -i eno33559296 -j ACCEPT
-A INPUT -j REJECT --reject-with icmp-host-prohibited
-A FORWARD -j REJECT --reject-with icmp-net-unreachable
COMMIT

Basically I added two input rules for the PPTP connection: open TCP port 1723 and allow the GRE protocol (-p 47). This shows a weakness of PPTP: you need to ask your firewall guys to open these, and more importantly, it will probably not survive NAT/PAT. The good thing is that the overhead should be minimal, although that is not so important for these simple tests. To allow routing to occur, forwarding must be allowed between the ppp connection and the eno3 interface. Simply start iptables with
systemctl start iptables
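To confirm that the rules are active (and to watch the hit counters on the PPTP and GRE rules):
iptables -L -n -v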
If everything is configured correctly, the setup should still work, but people who can reach the connection broker can no longer get to the virtual lab directly: they first need to establish the PPTP connection.

Notice that no masquerading has been set up towards the remotevlab router (as there is in the Arch Linux doc). That is because the remotevlab router uses the connection broker as its default gateway, so when it replies, the traffic always comes back through the connection broker anyway.
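For reference, the masquerade rule from the Arch Linux documentation that is deliberately omitted here would have looked something like this:
# not needed here: the remotevlab router replies via its default gateway, the connection broker
iptables -t nat -A POSTROUTING -o eno33559296 -j MASQUERADE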