2020/07/09

Light Weight Reporting for B&R

Currently I have 2 customers where existing solutions like Veeam Service Provider Console (ex-VAC) or Veeam Enterprise Manager do not really work. One of the customers has a highly secure environment where only 2 protocols are allowed in and out of each environment: FTP and email (because they monitor everything going in and out of those servers). The second one has extremely low bandwidth (satellite links) where every byte counts. They resorted to parsing the daily emails, but those contain a lot of styling.

So unless you have a very specific use case like this, you probably don't want to use this project. VSPC makes more sense if you have a distributed multi-tenant environment, while Enterprise Manager makes more sense if you are a one-team shop.

So what is LWR? In reality it is just 2 PowerShell scripts you can find on github: one that generates a (compressed) JSON report and one that reads those reports (potentially for multiple servers). The most important part is that LWR doesn't define how you transfer these files. They just need to get from one server to another via some mechanism and be dropped in a folder. For this blog post, I used FTP (including sample scripts), but they are just samples, not the actual definition.

So how does it work? Run the site-local script on each remote branch on a regular basis. Edit the script to change the unique ID per site, or use the param statement to modify it on the fly. The result will be a file in a directory every time you run it:
"C:\veeamlwr\99570f44-c050-11ea-b3de-0242ac13000x\99570f44-c050-11ea-b3de-0242ac13000x_1594283630.lwr"

The first directory is the repository, the second directory is the unique ID, and the last part is the <uniqueid>_<timestamp>.lwr data file. This is a gzipped JSON file. You can use -zip $false to disable compression, but then you need to be consistent across all the sites and the main site; it's mostly there for troubleshooting or if you want to easily parse the result yourself. Every time you run the script, a new version is created which you must transfer to the main site. For this blog article, I just made a task in the Windows Task Scheduler that runs the site-local script and then the ftpsync-sitelocal script (FTP sample) to transfer the data to the central site.
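As an illustration, a scheduled run on a branch site could look like the two lines below. The script and parameter names are assumptions based on the description above, so check the param block of the scripts you actually downloaded from github:

# collect the local data; add -zip $false only if you want plain JSON for troubleshooting
.\lwr-sitelocal.ps1 -uniqueid "99570f44-c050-11ea-b3de-0242ac13000x"
# push the newest .lwr file to the central site, e.g. with the FTP sample
.\ftpsync-sitelocal.ps1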


On the central site, you might want to run the central sync script manually just to get an update, and then run the central script to get the output.
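For example (again, the script names here are assumptions; use the names of the scripts you pulled from github):

# pull the newest .lwr files from the branches, e.g. with the FTP sample
.\ftpsync-central.ps1
# read the latest report of every unique ID and print the overview
.\lwr-central.ps1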


So here you see the sync + first run. You can see that the script downloaded multiple files, but central will only use the latest version (this is why the unique ID + the timestamp in the filename is important). Clean-up at the central site is something you should do yourself, as I can imagine use cases where you want to keep those files for a longer history (for example, to see how license usage expanded over time).
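If you do want to prune at some point, a minimal sketch could keep only the newest files per site; the repository path and the retention count below are just examples. Sorting on the filename works because the epoch timestamp is part of the name:

# keep the 30 most recent .lwr files per site directory, remove the rest
Get-ChildItem "C:\veeamlwr" -Directory | ForEach-Object {
    Get-ChildItem $_.FullName -Filter *.lwr | Sort-Object Name -Descending | Select-Object -Skip 30 | Remove-Item
}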

Let's fail a job, run a new upload from the site-local script, and resync the latest .lwr file to central.



Now you can see there are 2 failed jobs, reflecting the latest status

If you want to verify your license usage, you can also check the licenses with the -mode parameter
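For example (the script name and mode value are hypothetical; check the param block for the real ones):

.\lwr-central.ps1 -mode license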


That's it. There is nothing more to show or to do. Again, you probably want to use other alternatives like VSPC or Enterprise Manager, but in case both are not feasible due to network/security restrictions, feel free to use and extend this to your liking!

2020/07/03

Livemount MySQL with Data Integration API

New in v10 is that you can mount disks from a backup via iSCSI to any other machine. In the lab, I was recently playing with MySQL and wondered if I could live mount its backup to another server. It turns out you can. So what are the use cases? Well, you could test if the databases are mountable without having to spin up a lab. You could allow your devs to access real production data without accessing the production servers. But of course, feel free to find other use cases 😏

To get started, you need another machine with MySQL installed. So this is a plain Ubuntu server with ssh + MySQL installed as the target for the live mount. You can see that I'm missing my production martini database.


The next step is to publish the restore point that you backed up. You can find a sample in the documentation.
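A minimal sketch of publishing the latest restore point from PowerShell could look like this. The job name, credential name and target server name are assumptions for illustration, and the documentation sample remains the reference, so double-check the parameter names against your version:

# find the latest restore point of the MySQL server backup (job name is an assumption)
$backup = Get-VBRBackup -Name "mysql-backup"
$rp = Get-VBRRestorePoint -Backup $backup | Sort-Object CreationTime -Descending | Select-Object -First 1
# credentials for the Linux target the disks will be published to
$creds = Get-VBRCredentials -Name "root"
# publish the disks of that restore point to the target server over iSCSI
$session = Publish-VBRBackupContent -RestorePoint $rp -TargetServerName "ubuntu-mysql-target" -TargetServerCredentials $creds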




OK, with that done, we can go to the Linux machine. You do need to install the open-iscsi tools.
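On Ubuntu that is just one package:

sudo apt-get install open-iscsi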


Once that is done, you can connect to the iSCSI volume. For this you need two commands.
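A sketch of those two commands, assuming the target is published by the Veeam mount server (replace the IP with yours; discovery will show the exact target name):

# discover the targets published by the mount server
sudo iscsiadm -m discovery -t sendtargets -p 172.18.87.100
# log in to the discovered target(s) so the disks show up as block devices
sudo iscsiadm -m node --login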




Here is a screenshot of the discovery process (finding the volumes) and the login. By doing an fdisk -l before and after, you can see /dev/sdb showing up. The most important part(ition) is of course /dev/sdb2. I also included a logout to show that it goes away again, but of course at this point you only need to do the login part.


Now let's mount /dev/sdb2. For this, we make a temporary directory under /mnt and mount /dev/sdb2 there.
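For example (the mount point name is arbitrary):

sudo mkdir /mnt/veeamlivemount
sudo mount /dev/sdb2 /mnt/veeamlivemount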



At this point you want to stop the MySQL service, mount the MySQL data directory from the backup over the correct location, and restart the DB. I did have to correct the permissions so that the directory and all its files are owned by mysql after the mount.
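A minimal sketch, assuming a default Ubuntu/MySQL layout where the data directory lives in /var/lib/mysql both on the target and inside the backup; adjust the paths to your setup:

sudo systemctl stop mysql
# bind-mount the data directory from the published backup over the live data directory
sudo mount --bind /mnt/veeamlivemount/var/lib/mysql /var/lib/mysql
# fix ownership so the mysql service can read the files
sudo chown -R mysql:mysql /var/lib/mysql
sudo systemctl start mysql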




And there you have it, the martini database is back.

Once you are done, you need to clean up stuff
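In this case that means stopping MySQL again, undoing the mounts and logging out of the iSCSI session (paths match the sketch above):

sudo systemctl stop mysql
sudo umount /var/lib/mysql        # only if you used a bind mount
sudo umount /mnt/veeamlivemount
sudo iscsiadm -m node --logout
sudo systemctl start mysql        # back on the target's own data directory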



At this point, you can unpublish the session.
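From the backup server, this is a single cmdlet against the $session object returned by the publish sketch above (the parameter name may differ slightly per version):

Unpublish-VBRBackupContent -Session $session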


And voila, everything is cleaned up.

You can also automate the whole process; check the complete code on VeeamHUB.

2020/05/19

Make your own tags in Veeam Backup & Replication

A very common question is how to assign a fixed pool of proxies to certain jobs. If you are talking about proxies towards repositories, you can use proxy affinity (a cool feature that not a lot of people seem to know about).

But what if you want to do something more custom? This has been playing in my head for a long time, and it turns out it is fairly simple to set up. First things first: add a tag to the job description in the form of [location:mylocation]. Why the location specifier? Because you might want to add different tags (e.g. jobtype, production level, etc.) if you want to use the grouping mechanism for other purposes.

Here are some examples in my lab



Then you can use the following script/gist. The script could be made a bit more generic, which is why it made more sense to just add it as a gist. The output is fairly simple.


Here is where PowerShell's Group-Object really shines. It is really easy to group certain objects together and then just loop over each group without having to re-filter over and over again. The regex should be fairly simple to adapt if you want to do another grouping.
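The gist remains the reference, but the idea boils down to something like this sketch (the grouping and output here are simplified, not the actual gist code):

# parse the [location:...] tag out of every job description and group by it
$regex = '\[location:(?<location>[^\]]+)\]'
$tagged = foreach ($job in Get-VBRJob) {
    if ($job.Description -match $regex) {
        [PSCustomObject]@{ Location = $Matches['location']; Job = $job }
    }
}
$tagged | Group-Object -Property Location | ForEach-Object {
    # $_.Group now contains all jobs for this location, ready for custom handling
    "{0} : {1} job(s)" -f $_.Name, $_.Count
}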

Hope it is an inspiration to other people

2020/04/10

NAS Multiplex

If you ever tried to add multiple shares from a root share in the new Veeam B&R v10, you might have noticed that you have to map them one by one. During the beta, I already experimented with some sample code on how to add those shares in bulk. So for the GA, I decided to build a GUI around it.

First of all you can find the script on https://github.com/veeamhub/powershell/tree/master/BR-NASMultiplex .

So after that, it is just a matter of running the script by right-clicking it and choosing "Run with PowerShell". You might need to change your execution policy if you downloaded it. Also make sure to run this on the backup server, as it loads the VeeamPSSnapin in the background.

So let's look at an example. In my lab, there is a server with 4 shares: share1, share2, share3 and share4. Before running the script, 2 are mapped and added to 1 job.


Now when you start NASMultiplex, you can select that job, and select one of the shares as a template or a sample for the newly created shares. Basically the new shares will be added with all the settings of the sample share.


"Base" is basically a prefix field. In the big field below, you can see that per line,  a share can be added (e.g share3 and share4). The concats base+share to form the full path. It does just concatenate the strings so you could either choose to have base ending in "\" or if you don't you must add the separator in front of every share.

Finally there is "Altpath". If your sample share has storage snapshots enabled, it shows you the pattern it uses to fill in the storage alternate path for the new share. {0} will be replaced with base and {1} will be replaced with the share name. So for this example, share3 would become \\172.18.87.231\share3\.snapshot . If altpath is disabled, that means your sample share uses VSS snapshots or just direct copy.
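To make the concatenation concrete, this is all the path building amounts to (illustrative only, not the actual script code):

$base  = "\\172.18.87.231\"
$share = "share3"
$path    = $base + $share                        # \\172.18.87.231\share3
$altpath = "{0}{1}\.snapshot" -f $base, $share   # \\172.18.87.231\share3\.snapshot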

When you are ready to go, click add shares and the script will run in the background until a popup appears


At that point, the shares should be added to the system and to the job.



Final notes: this has only been tested with SMB, so with NFS it might not work, but it should be pretty easy to get it fixed if it doesn't. It was also built in one evening, so it would be great to hear your feedback on what works and what definitely doesn't.

2019/06/12

Compiling martini-cli from source

Just a small post for those wanting to compile martini-cli from source (for example if you want to run the cli from Mac or Windows).

First you need to install the Go compiler. It depends a bit on the OS but should be straightforward in most cases: https://golang.org/dl/ . For Ubuntu, the package golang is in the repository, so you can just install it via the package manager.

sudo apt-get install golang


Then once the go compiler is installed, you can just download the code and compile it with the go command
go get -d github.com/tdewin/martini-cli
go install github.com/tdewin/martini-cli


And that's it! You should now have a binary in go/bin called martini-cli. You can find and edit the source in go/src and recompile with the go install command. Happy hacking!

Installing Project Martini

At VeeamON, my colleague Niels and I presented Project Martini, a kind of Manager of Managers for Veeam Backup for Office 365. Today I'm writing a small post for those who want to go ahead and install the solution.

The first thing you need is a Linux server. It could be virtual or physical. We strongly recommend Ubuntu, as this is what we used in AWS, and the screenshots in this blog post are from an on-premises Ubuntu installation (18.04 LTS).

Once you have installed Ubuntu, you can download the martini-cli. I'll make another small blog post on how to compile it from source but for this post, you can just download the binary. So let's make sure we have the right tools to unzip and get the binary first

sudo apt-get update
sudo apt-get install unzip wget


Now you can go ahead and install the binary

wget https://dewin.me/martini/martini-cli.zip
sudo unzip martini-cli.zip -d /usr/bin
sudo chmod +x /usr/bin/martini-cli


Once you have the binary in place, you are ready to start the installation. If you are running Ubuntu, the setup should detect this and propose to install the prerequisites.

sudo martini-cli setup


When asked if you want to install the prerequisites, answer "y". It should download all of them. Then it will prompt you to install terraform and martini-pfwd; answer "y" to continue. Once this is finished, it will prompt you to create the MySQL database. MySQL is installed automatically, but you might want to secure it a little bit more.


So start the mysql prompt and create the database. The cli gives you a very simple example to go ahead and create the database. Please make sure you remember the password.

mysql -u root -p
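A generic example of what you would run at that prompt (the database, user and password are placeholders; the cli prints the exact statements it expects, so follow those):

CREATE DATABASE martini;
CREATE USER 'martini'@'localhost' IDENTIFIED BY 'yourstrongpassword';
GRANT ALL PRIVILEGES ON martini.* TO 'martini'@'localhost';
FLUSH PRIVILEGES;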


Finally, rerun the setup wizard, but this time don't install the prereqs (as this is already done). This will download the latest martini-www release from github and ask you for the database connection parameters.

sudo martini-cli setup


Once you went through the whole process, martini is installed. Make sure to remember the admin password you entered during the installation; you need it to authenticate the cli or the web GUI. You also might need to chown the config file if you ran the process with sudo. In my case, my user is timothy, but that might be different on your system. Once everything is done, you can test the connection with martini-cli.

sudo chown timothy:timothy .martiniconfig
martini-cli --server http://localhost/api connect 


You can now run the cli, for example martini-cli tenant list. Since there is no tenant yet, you should just see a lot of # signs.

If you browse to the IP of the server, you should now also be able to log in with admin and the password. In my screenshots, I used port 999 to map to an internal server, so you can just ignore that and use the regular port 80 if you have direct access to your machine.



2019/01/30

Get your offline RPS version

Some people have asked me in the past: is there a way to get RPS offline or to export its results? In previous versions I tried to add canvas rendering (basically generating a PNG) or generating a URL that you can share with your colleagues. However, for a lot of people this does not seem to be enough. Enter RPS-offline...

RPS-offline can be downloaded here. The source code itself is on github. If you have a Mac or Linux, you should be able to compile it with Go(lang). Once you have downloaded it, my recommendation would be to extract the tool in c:\rps-offline. This is not a strict requirement, but you will see why later.


The first time you run the exe (or the bat file), it will download a zip (latest version from github) and start a local server, redirecting your browser to it


The next time the binary runs, it will detect the zip and it should not require any network connection.

But of course that's not all. When you have done a simulation, you can hit the c key (CSV) to export the data to a CSV file. You can also push the j key (JSON), which will export the data to a JSON file. Why JSON? Well, it allows exporting more data fields in a predictable way, and a lot of scripting languages can easily import it. That's also why you can run a post-script after the JSON is generated.


And this is where the .bat file comes into play. It tells rps-offline to run a script called Word.ps1. I'm not really planning on extending this script; Word.ps1 is merely a sample that you can delete, change, modify, etc. It does show you one of the possible things you could do, e.g. generate a Word document with PowerShell. Of course, this will not work on Linux or Mac, but you are free to run whatever command you want. That's the beauty of not embedding all the code inside rps-offline itself. You will also see that the .bat and .ps1 scripts refer to the fixed path c:\rps-offline, so if you want to store it on another path, make sure to edit the .bat and .ps1 files.
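If you want to write your own post-script instead of Word.ps1, the skeleton is simply reading the JSON back in. This sketch assumes the export lands in c:\rps-offline; how the file path is actually handed to the script may differ, so check how the .bat invokes Word.ps1:

# pick up the most recent JSON export and turn it into objects
$json = Get-ChildItem "c:\rps-offline" -Filter *.json | Sort-Object LastWriteTime -Descending | Select-Object -First 1
$data = Get-Content -Raw -Path $json.FullName | ConvertFrom-Json
# from here you can push it anywhere: a database, a report, another file format
$data | Format-List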


When you are done, push q (quit) to stop the program and the local web-service.

This is of course a first version, but I'm curious whether people find it useful or whether people make other conversion scripts. I could imagine a scenario where you take the JSON output and import it into a DB as well. So please let me know what your thoughts are on twitter @tdewin! Hope you guys enjoy!