2016/06/10

Veeam repository in an LXC container

In the past, I have always wanted to investigate the concept of a Linux repository with extra safety, like a chroot. However, I never took the time to work on an example. With the rise of Docker, I was wondering: could I make a repository out of a Docker container? After some research, I understood that Docker brings a whole storage abstraction layer which I really didn't need. Next to that, Docker containers run in privileged mode. So I decided to try out LXC and go a bit deeper into the container technology.

Now before we start, this is just an experimental setup in a lab, so do proper testing before you put this in production. It is just an idea and not a "reference architecture", so I imagine improvements can be made. Here is a diagram of the setup which we will discuss in this blog.


The first thing you might notice is that we will set up a bridge between the host ethernet card and the containers' virtual network ports. In some LXC setups, NAT is preferred between the real world and a disconnected virtual bridge. I guess that works for web servers, but in this example, Veeam needs to be able to connect to each container via SSH, and you might need to open up multiple ports. When you bridge your containers' virtual network ports and your outside ethernet port, you basically create a software switch. This gives the advantage that you can assign an individual IP to every container, as if it were a standalone machine.
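As a quick illustration of that software switch, the same thing can be built by hand with iproute2 (a throwaway sketch, not needed for the persistent setup below; note that enslaving eth0 over an SSH session will cut you off until the host IP is moved to br0):
ip link add name br0 type bridge
ip link set dev eth0 master br0
ip link set dev br0 up
# the host IP now has to live on br0 instead of eth0
ip addr add 192.168.204.7/24 dev br0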

Next to that, we will make a separate LVM volume group. You might notice that the root of every tenant is colored green. That's because we will use lxc-clone with its snapshot functionality. That means you set up the root (container) once, and then clone it via an LVM snapshot so you can instantly have a new container/tenant. Finally, an LVM volume with the "_repo" suffix is assigned to each individual container and mounted under /mnt. This is where the backups themselves will be stored, separated from the root system.

The first thing is of course to install Debian. I'm not going to show that, as it is basically a matter of following the wizard, which is straightforward. I do want to say that I assigned 5GiB for the root, but it turns out that, after all the configuration, I only use 1.5GiB. So if you want to save a few GiB, you could assign, for example, only 3GiB.

One important note: the volume group for storing the container roots needs to be named differently from the container itself in order for lxc-clone to work correctly. I ran into the issue where cloning did not work because of it. So, for example, call the volume group "vg_tenstore" and the containers/logical volumes "tenant". During the initial install, only set up the volume group. The logical volumes will be made by LXC during configuration.
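If you did not create the volume group during the installer, you can still add it afterwards. A sketch, assuming the dedicated disk shows up as /dev/sdb (adapt to your device):
pvcreate /dev/sdb
vgcreate vg_tenstore /dev/sdb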

So after the install, I just installed drivers and updates by executing the following. If you don't run it in VMware, you of course do not need the tools. You might also go lightweight by not installing the dkms version.
apt-get update
apt-get upgrade
apt-get install open-vm-tools-dkms
apt-get install openssh-server
reboot
After the system has rebooted, you are able to start an SSH session to it. Now let's install the LXC software. (based on https://wiki.debian.org/LXC)
apt-get install lxc bridge-utils libvirt-bin debootstrap
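Before continuing, you can verify that the kernel has everything LXC needs by running the checker that ships with the lxc package:
lxc-checkconfig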
With that done, let's set up the bridge. For this, edit /etc/network/interfaces. Here is a copy of my configuration:
# This file describes the network interfaces available on your system
# and how to activate them. For more information, see interfaces(5).
source /etc/network/interfaces.d/*
# The loopback network interface
auto lo
iface lo inet loopback
# The primary network interface
allow-hotplug eth0
#auto eth0
#iface eth0 inet static
#       address 192.168.204.7
#       netmask 255.255.255.0
#       network 192.168.204.0
#       broadcast 192.168.204.255
#       gateway 192.168.204.1
#       # dns-* options are implemented by the resolvconf package, if installed
#       dns-nameservers 192.168.204.2
#       dns-search t.lab
auto br0
iface br0 inet static
        bridge_fd 0
        bridge_stp off
        bridge_maxwait 0
        bridge_ports eth0
        address 192.168.204.7
        netmask 255.255.255.0
        broadcast 192.168.204.255
        gateway 192.168.204.1
        # dns-* options are implemented by the resolvconf package, if installed
        dns-nameservers 192.168.204.2
        dns-search t.lab
Notice that I kept the old configuration in comments. You can see that the whole address configuration is assigned to the bridge (br0), so the bridge literally gets the host IP. After modifying the file, I restarted the OS, just to check if the network settings would stick, but you can probably also restart the networking via "/etc/init.d/networking restart". The result should be something like this:



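You can also verify from the shell that the bridge is up and that eth0 is attached to it (bridge-utils was installed earlier for exactly this):
brctl show br0
ip addr show br0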
OK, so that was rather easy. Now let's add some lines to the default config so that every container will be connected to this bridge by default. To do so, edit /etc/lxc/default.conf and add:
lxc.network.type = veth
lxc.network.flags = up
lxc.network.link = br0
In fact, if you look at the previous screenshot, you can see that there are already 2 containers running, as 2 "vethXXXXX" interfaces already exist.

Now let's set up the root for our container. If you forgot your volume group name, use "lvm vgdisplay" to display all your volume groups. Then execute the create command:
lxc-create -n tenant -t debian -B lvm --lvname tenant --vgname vg_tenstore
This will create a new container named "tenant" based on the Debian template. A template is a preconfigured script that builds a certain flavor of container. In this case I used the Debian flavor, but you can find other flavors in "/usr/share/lxc/templates". The backing store is LVM, and the logical volume that will be created is called "tenant" and lives in the volume group "vg_tenstore". Keep the name of the container and the logical volume the same, as mentioned before.

Now that the container is created, you can actually edit its environment before starting it. I did a couple of edits so that everything boots smoothly. First, mount the filesystem:
mount /dev/vg_tenstore/tenant /mnt
I started with the networking file /mnt/etc/network/interfaces and set a static IP:
#....not the whole file
iface eth0 inet static
address 192.168.204.6
netmask 255.255.255.0
gateway 192.168.204.1
dns-nameservers 192.168.204.2
Then I edited /mnt/etc/resolv.conf
search t.lab
nameserver 192.168.204.2
Finally I made a huge security hole by allowing root login via SSH in /mnt/etc/ssh/sshd_config
PermitRootLogin yes
You might want to avoid this, but I wanted something quick and dirty for testing. Now unmount the filesystem and let's boot the parent container. I used -d to daemonize so you don't end up in an interactive container you can't escape:
umount /dev/vg_tenstore/tenant
lxc-start --name tenant -d
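To confirm that the container actually came up, you can query its state from the host:
lxc-info --name tenant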
Once started, use lxc-attach to attach directly to the console as root, and execute passwd to set up a password. You can get out by typing exit after you have set the password:
lxc-attach --name=tenant
passwd
exit
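If you prefer to avoid the PermitRootLogin hole altogether, you could instead create a dedicated user inside the container and register that one in Veeam (just a sketch; "veeamsvc" is a name I made up):
lxc-attach --name=tenant -- useradd -m -s /bin/bash veeamsvc
lxc-attach --name=tenant -- passwd veeamsvc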
Now test your password by going into the console. While you are there, you can check if everything is OK and then halt the container:
lxc-console --name tenant
#test what you want after login
halt
You can also halt a container from the host by using "lxc-stop -n tenant". Now that this is done, we can actually create a clone. I made a small wrapper script, but you can of course run the commands manually. First I made a file "maketenant" and used "chmod +x maketenant" to make it executable. Here is the content of the script:
#!/bin/bash
# maketenant <name> <last-octet>: clones the base "tenant" container
# and gives it its own repository volume and IP

tenant=$1
ipend=$2

# basic sanity checks: a non-empty, non-reserved name plus an IP suffix
if [ ! -z "$tenant" -a "$tenant" != "tenstore" -a ! -z "$ipend" ]; then
        trepo=$tenant"_repo"
        echo "Creating $tenant"
        lxc-clone -s tenant $tenant
        lvm lvcreate -L 10g -n"$trepo" vg_tenstore
        mkfs.ext4 "/dev/vg_tenstore/$trepo"
        # the mount point is relative to the container root, so "mnt" means /mnt
        echo "/dev/vg_tenstore/$trepo mnt ext4 defaults 0 0" >> /var/lib/lxc/$tenant/fstab
        mount "/dev/vg_tenstore/$tenant" /mnt
        sed -i "s/address [0-9.]*/address 192.168.204.$ipend/" /mnt/etc/network/interfaces
        umount /mnt
        echo "lxc.start.auto = 1" >> /var/lib/lxc/$tenant/config
        echo "Starting"
        lxc-start --name $tenant -d
        #check if repo is mounted
        echo "Check if repo is mounted"
        lxc-attach --name $tenant -- df -h | grep repo
        echo "Check ip"
        lxc-attach --name $tenant -- cat /etc/network/interfaces | grep "address"
else
        echo "Supply a tenant name and the last octet of its IP"
fi
OK, so what does it do:
  • lxc-clone -s tenant tenantname : makes a clone of our base container based on an LVM snapshot
  • lvm lvcreate -L 10g -n"tenantname_repo" vg_tenstore : creates a new logical volume for the tenant to store its backups in
  • mkfs.ext4 /dev/vg_tenstore/tenantname_repo : formats that volume with ext4
  • echo "/dev/vg_tenstore/tenantname_repo mnt ext4 defaults 0 0" >> /var/lib/lxc/tenantname/fstab : tells LXC to mount the volume in the container under /mnt
  • We then mount the filesystem of the new tenant again to change the IP
    • sed -i "s/address [0-9.]*/address 192.168.204.$ipend/" /mnt/etc/network/interfaces : replaces the IP with a unique one for the tenant
  • echo "lxc.start.auto = 1" >> /var/lib/lxc/tenantname/config : tells LXC that we want to start the container at boot time
Finally, the script starts the container and checks that the repository was indeed mounted and the IP was correctly edited.
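Creating a tenant is then a single call with a name and the last octet of its IP (values here are just an example):
./maketenant tenant2 8
The result: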

Our new tenant (tenant2) is up and running. Let's check the logical volumes. You can see that tenant2 is running from a snapshot and that a new logical volume was made as a repository.


And the container is properly bridged to our br0


Now you can add it to Veeam based on its IP. If you populate it, you will see only what you assigned to the container.


You can then select the /mnt point


And you are all set up. Let's do a backup copy job to it


And let's check the result with "lxc-attach --name tenant2 -- find /mnt/"


That's it, you can now make more containers to have separate chroots. You can also think about extra security like iptables or AppArmor, but since the container is not privileged, it should already give far better separation than just separate users / "/home" directories with sudo rights.
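For instance, a minimal iptables sketch, run inside a container, that only leaves SSH and a data port range open (2500:5000 is Veeam's default data mover range; adjust to your environment, and note that iptables must be available in the container):
iptables -A INPUT -i lo -j ACCEPT
iptables -A INPUT -m state --state ESTABLISHED,RELATED -j ACCEPT
iptables -A INPUT -p tcp --dport 22 -j ACCEPT
iptables -A INPUT -p tcp --dport 2500:5000 -j ACCEPT
iptables -P INPUT DROP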

2016/06/06

RPS 0.3.3

It took some time to release this version because it packs a lot of changes which hopefully make the tool more useful. This release focuses on more "export" functionality.

The first major change is that the "Simulate" button has been moved down. This makes more sense, as you will probably first put in the info and then run the simulation. But the main reason it was moved is the additional export functionality: you will now see a couple of checkboxes next to the Simulate button.


The first checkbox is the "export" functionality. When you check it and run the simulation, an input field will appear with a URL. If you click somewhere in the field, the complete text should be selected, which you can then easily copy with, for example, ctrl+c. When you reuse the URL, your simulation will automatically execute with all the previous inputs. This way you can share your simulation without having to screenshot everything. Make sure to push the Simulate button before copy-pasting, as this refreshes the URL field.

But what if you still want a clean screenshot of the result? I can't tell you how many screenshots of the RPS output I have already seen in mails, documentation, etc. However, screenshotting the output can be challenging. First of all, you need specific software to cut it out. Next, if the simulation is longer, you have to screenshot multiple times and then stitch the results together. So in this release I'm introducing "Canvas rendering". This renders the result in a hidden HTML5 canvas and then replaces a visible image with a copy of the canvas output. The result should be a cleaner output that you can use in documentation. I also opted to reduce the amount of info on the output, as dates etc. make little sense when a partner wants to send a result to an end customer.



If you are running Firefox or Chrome, you will be able to save the picture by clicking the Download link. The advantage is that the link will suggest a formatted name with the current date and time. However, if you are not using one of those (but, for example, IE or Safari), you should still be able to right-click it and save it as a picture. The reason why the download link does not work in every browser is that I'm using the "download" attribute, which is not supported everywhere yet. Let's hope that in the future Edge, IE, Safari, etc. will also support it.


Another new feature is support for the Active Fulls that were added in Veeam v9. This feature required some testing to check whether the results made sense. Hopefully I got it right, and I would love to hear your feedback!

Finally, a very small request that has been implemented is Replica support. In this first version, I only added support for VMware, but this might change in the future.