2018/02/23

vCoffee: Looking at the VAC REST API Integration

In this blog post, we will take a look at the VAC REST API integration. First of all, all the code specific to VAC can be found on VeeamHub in app/backend/restIntegration-vac.js. You will see that this file is only 100 lines long, but still packs all the functionality needed to talk to the REST API. For this blog post, I'm using slightly altered code, which can be found here.


Step 1 : Understanding fetch (JavaScript Specific)

If you want, you can play with this code by installing NodeJS (v6.11 in this demo). The demo depends on one module, fetch, which is automatically included in NativeScript (the framework used to build the app) but is not installed by default in NodeJS. So the first thing we need to do is install fetch and check that it works. Installing fetch can be done with "npm install node-fetch" once you have installed NodeJS.


Once you install node-fetch, you can start node and execute the JS code to test the module. I have to say that the first time I saw the fetch code, it was quite confusing to me. Fetch uses a "pretty new" JavaScript feature called a Promise. Promises are used when you want to execute some code asynchronously: when it's done, either the resolve code (everything went OK) or the reject code runs. This feels weird at first, but it means that fetch doesn't block the main thread. It also means that if you try to fetch a URL that doesn't exist, at first it looks like nothing is happening, and only after the timeout has passed will your reject code run. So be patient, and wait for the error if it looks like nothing happened.

In the app, the GUI basically passes a success function (for example, moving to the next screen) and a fail function (alerting the user that something went wrong). For this example, we will just print some messages to the console.
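Here is a minimal sketch of such a test, assuming node-fetch was installed as described above (the URL is the sample jobs file used later in this post):

    // minimal node-fetch test: download a page and print the body
    const fetch = require("node-fetch");

    fetch("http://dewin.me/vcoffee/jobs.json")
        .then(function (res) { return res.text(); })    // promise: parse the body as text
        .then(function (body) { console.log(body); })   // success: print the body
        .catch(function (err) { console.error(err); }); // fail: runs on a bad URL, after the timeout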

What is also pretty weird is the statement ".then(function(res) { return res.text() })". Basically, this part creates a new Promise that succeeds if we can parse the body content and, if so, runs the next then clause. This seems pretty trivial when you are parsing text, but if you try to parse JSON, for example, errors might occur while parsing. So this chains the promises: first you get the "Promise" that the webpage is downloaded, or the fail code runs; then you get the "Promise" that the body is parsed, or otherwise the fail code runs.


In the app I use the shorthand arrow functions, which are also quite "new" in JS. Basically, they allow you to create an anonymous function in a more compact way: compare "function(res) { return res.text() }" with the roughly equivalent "res => res.text()".

If this sounds like Chinese to you, that's fine; focus mostly on what happens between the fetch call and the second then. The result is quite clear: we get "http://dewin.me/vcoffee/jobs.json" and then print out the body to the console.

Now the nice thing about fetch is that if we just alter the first then and change text() to json(), fetch will parse the JSON body into JavaScript objects. So the second then does not receive plain text but real objects, if parsing was successful.
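The same sketch with JSON parsing; only the first then changes:

    fetch("http://dewin.me/vcoffee/jobs.json")
        .then(res => res.json())           // parse the body as JSON instead of text
        .then(obj => console.dir(obj))     // success: real objects, not plain text
        .catch(err => console.error(err));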

Step 2 : Logging into VAC, the real deal

OK, so let's look at the VAC REST API. To log in, the GUI passes an object with the URL, username and password. The goal is to log in and get a token. Once we have a token, we are basically logged in, and the next time we want to fire a request, we can just use the token instead of sending a username and password. The theory is described in the VAC REST API documentation, but let's put it into practice.


The maincontext object is something that is created in the GUI and then passed to the backend; here we are just creating it on the command line. In this code, there is a first fetch to "/v2/jobs", which you can ignore: it is basically some probe code to check that the server really is a VAC server before trying to authenticate. More important is the second fetch.

Here we do a fetch of apiurl+"/token". But instead of just doing a plain download of the webpage, we alter the request a bit. First of all, we need to use the "POST" method. Secondly, we add some headers to the request: Content-Type should be set to "application/json", and we add the "Authorization" header set to "Bearer", as specified by the API documentation.

Finally, in the body of the request, we tell the API that we want to authenticate with a password (grant_type), and we supply the username and password. This is supplied in a URL-encoded format, and honestly, the way the code is written in this example is not that clean. It would have been safer to use something that takes a JavaScript object and parses it to a URL-encoded format.

If all went well, you should get some JSON back. Again we can parse the text, and then we can extract the access_token. The code prints the access token to the console and sets it to maincontext.sessionid for later reuse.
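Put together, the login roughly looks like this. This is a sketch only: the host and port are hypothetical, and maincontext simply mimics the object the GUI would pass to the backend.

    // sketch of the VAC login with node-fetch; the server address is made up
    const fetch = require("node-fetch");

    const maincontext = {
        apiurl: "https://vac.example.com:1281/api",  // hypothetical VAC server
        username: "user",
        password: "pass",
        sessionid: ""
    };

    fetch(maincontext.apiurl + "/token", {
            method: "POST",
            headers: { "Content-Type": "application/json", "Authorization": "Bearer" },
            // URL-encoded body; as noted above, a real encoder would be cleaner
            body: "grant_type=password&username=" + encodeURIComponent(maincontext.username) +
                  "&password=" + encodeURIComponent(maincontext.password)
        })
        .then(res => res.json())
        .then(reply => {
            maincontext.sessionid = reply.access_token;  // store the token for reuse
            console.log(maincontext.sessionid);
        })
        .catch(err => console.error("login failed", err));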

I don't want to go into too much detail, but your access_token expires every hour. In the same reply, there is also a refresh_token you can use to get a new access_token before it expires. I'm not going to cover this, but if you want your application to stay logged in, you would need some timed code that runs every, let's say, 45 minutes and renews your token. In that case, your grant_type is not password but refresh; this is also covered in the documentation. Finally, once you are done, you should also log out. Again, we will not cover this in this blog post.

Step 3 : Getting the jobs

Now that we are logged in, we can actually do some work. Every time we make a request, we should specify that we are using JSON and that we are authenticating via the Bearer access token. Since we need to do this for every request, let's make a function that just builds the headers object from the access_token, so we can reuse it over and over again. Basically, we need to set the "Authorization" header to "Bearer "+access_token. What is really important, if you are building this code in another language, is that there is a space between the word "Bearer" and the access_token.
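A sketch of such a helper (the function name is mine, not necessarily what the app uses):

    // builds the common headers for every authenticated request
    function makeHeaders(access_token) {
        return {
            "Content-Type": "application/json",
            "Authorization": "Bearer " + access_token  // mind the space after "Bearer"
        };
    }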

Then, let's get a list of the jobs. Following the documentation, we should be able to get this by executing a "GET" to /v2/Jobs (watch out for caps). However, since other types like "Tenants" or "BackupServers" are requested in exactly the same way, just with a different URI ending, we will make a generic "getList" function and then a small wrapper function that uses it.

Here is where the VAC API in combination with JavaScript really shines, in my humble opinion. We get the page using the authentication header, we parse the JSON text into objects and then we run success. In success, we receive the array of jobs. We can select the first element using jobs[0] and use its properties to print the name and the id. To me this code is quite compact and extremely easy to read.
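Roughly, the generic function and its jobs wrapper could look like this (a sketch, reusing maincontext and makeHeaders from above; the restIntegration-vac.js file on VeeamHub is the reference):

    // generic GET of a list; uri is for example "/v2/Jobs" (watch out for caps)
    function getList(maincontext, uri, success, fail) {
        fetch(maincontext.apiurl + uri, { headers: makeHeaders(maincontext.sessionid) })
            .then(res => res.json())
            .then(success)
            .catch(fail);
    }

    // thin wrapper; "Tenants" or "BackupServers" would work exactly the same way
    function getJobs(maincontext, success, fail) {
        getList(maincontext, "/v2/Jobs", success, fail);
    }

    // usage: print the name and id of the first job
    getJobs(maincontext,
        jobs => console.log(jobs[0].name, jobs[0].id),
        err => console.error("failed", err));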


Step 4 : Starting a job

The last step is to start the job. Be careful: this code will actually start the job! Following the API documentation, we need to do a "POST" to /v2/Jobs/<id>/action. For example, in the previous screenshot, we can see that "Replication Job 1" has an id of 16, so if we want to start it, we need to do a POST to /v2/Jobs/16/action.

Now we also have to specify the request body (which is also JSON). To start the job, the documentation states we need to send '{"start":"null"}'. Again, this scenario can be made generic for other actions like start, stop, disable, etc., by just replacing the word start. So let's build another, more generic function called actionJob. We will use the jobaction parameter to modify the JSON text we are sending. Again, this is not the cleanest example.

You should actually take a JS object and stringify it to JSON. In fact, I should update the code to something like "JSON.stringify({[jobaction]:null})", but this is just to keep the example as simple as possible.
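A sketch of actionJob under those assumptions, again reusing maincontext and makeHeaders:

    // generic job action; jobaction is for example "start", "stop" or "disable"
    function actionJob(maincontext, jobid, jobaction, success, fail) {
        fetch(maincontext.apiurl + "/v2/Jobs/" + jobid + "/action", {
                method: "POST",
                headers: makeHeaders(maincontext.sessionid),
                body: '{"' + jobaction + '":"null"}'  // e.g. {"start":"null"} per the docs
            })
            .then(res => res.json())
            .then(success)
            .catch(fail);
    }

    // careful: this really starts job 16
    actionJob(maincontext, 16, "start",
        reply => console.dir(reply),
        err => console.error("failed", err));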


In the success function, we can now use the console.dir function to dump the content of the reply object. If the job started successfully, you will get a reply like "Action Started".


Finally: where to go from here

Well, first of all, the API documentation is really good. I'm pretty sure that by modifying this sample code, you can get a long way. I hope it also shows that there is no real rocket science involved, especially if you want to use the API to dump status data to another system, for example. I could imagine using this sample code to get the data out of VAC and automatically create tickets for failed jobs in your helpdesk system.

2018/02/15

vCoffee : Drink coffee and check your backup jobs from your smartphone

For some months now, I have had the idea to make an app for Veeam Availability Console (VAC) and/or Veeam Backup & Replication (VBR). While getting a coffee in the morning, I noticed people spend quite a lot of time queuing or waiting for the machine to deliver their coffee.

That's why I'm glad to introduce vCoffee today. It is a small app, currently available as an alpha, which you need to install manually on your Android device. The app itself is rather simple: you log in, get an overview of all jobs and see their latest state. If required, you can click a job and start it directly from the app. As a bonus, you also get an "RPO" indicator, which tells you whether the job started in the last 24h. So if the job was successful but didn't run for the last 5 days, this will also be indicated on the first screen.

What I'm also really excited about is that it covers both the VAC REST API and the VBR REST API. This means that, as a partner, you can check all your customers or tenants from the app. As a customer, you are able to monitor your VBR server via the Enterprise Manager. Do note that the REST API is part of the Enterprise Plus licensing.

One thing that is really important: Android seems to be quite paranoid about security, which means you cannot use self-signed certificates. For VAC, I don't think this is a big issue, but for Enterprise Manager, you might have used a self-signed certificate. That's why I would like to refer to my colleague Luca Dell'Oca, who wrote an excellent article about "Let's Encrypt". I used that article for both VAC and Enterprise Manager. For Enterprise Manager, if you have already installed it, there is an excellent article that explains how to replace the certificate.

Here is a small demo of the app. I can tell you that on my actual phone it works a lot faster; due to the Android emulation, it looks quite slow in the demo.

The code is released under the MIT license on VeeamHub, the Veeam community project that gets contributions from Veeam employees as well as external consultants and Veeam enthusiasts. This means that everybody can contribute and reuse the code as he or she likes. It also means that no responsibility is taken and you cannot contact Veeam Support for help. Basically, this app was not developed by Veeam R&D and has not been checked by Veeam QA.

In a follow-up article, I'm planning to discuss the VAC REST API, because I was amazed at how simple it really is. This is because the app itself is JavaScript code, and the JSON support of the VAC REST API makes parsing the objects extremely simple.

Finally, you can download a debug build here, as long as I'm not running out of bandwidth.

2017/11/23

RPS Workspace

Many people have been using http://rps.dewin.me, and honestly it gives me great pleasure that people like it and use it so often. I tried to make the tool as straightforward as possible, but one thing people do not seem to understand is the "Work Space" line. So on a regular basis I get the question: what the hell is "Workspace" and how is it calculated?

In the early days of RPS, it didn't have this Workspace line. However, during some discussions, some fellow SEs were concerned that there was no buffer space for:

  • Occasionally running a manual full
  • Not filling the filesystem to 100%, because that is just not best practice
  • Space that is used during the backup process itself


The first two, I hope, are pretty clear. The third one is not always obvious. So imagine that you are running a forever incremental job. You configured 3 points, and that is what you will get after the backup is done. However, during the backup, the first thing that happens is that an incremental point is created. After the incremental backup is done, the merge process happens. That means that during this "working period", you actually have 4 restore points on disk (1 full + 3 incrementals). Thus you need to have some extra space.

That hopefully explains the why. Now the how, which is a bit more complicated. The initial workspace calculation was pretty simple: add one additional full backup. While this is great in smaller environments, we pretty soon came to the conclusion that if you have 200TB of "full data" (all fulls together), you probably do not need 200TB of workspace, especially because there is typically not one humongous job that covers the complete environment. You have probably split the configuration into a couple of jobs, and those jobs are probably not all running at exactly the same time.

So the workspace uses a kind of bucket system, where the first bucket has a higher rate than the last one. Once the first bucket is filled, it overflows into the next one. This means that the workspace does not grow linearly with the amount of used space.

Here are the buckets themselves:
0-10 TB = source data is compressed and then multiplied by a factor of 1.05
10-20 TB = source data is compressed and then multiplied by a factor of 0.66
20-100 TB = source data is compressed and then multiplied by a factor of 0.4
100-500 TB = source data is compressed and then multiplied by a factor of 0.25
500+ TB = source data is compressed and then multiplied by a factor of 0.10

Let me give you some examples. If you have 5TB of source data, that 5TB fits entirely in the first bucket, so the calculation is rather easy. If you use a compression factor of 50% (the default), you get:
5TB x 50/100 x 1.05 =~ 2.6 TB Workspace

If you have 50TB of source data, however, it does not fit in the first bucket; it has to be split over 3 buckets: the first 10TB in the first bucket, the next 10TB in the second bucket and the last 30TB in the third bucket. Thus the calculation would be roughly:
10 TB x 50/100 x 1.05 + 10 TB x 50/100 x 0.66 + 30 TB x 50/100 x 0.4 =~ 5 + 3 + 6 = 14TB Workspace

You can verify that here:
http://rps.dewin.me/?m=1&s=51200&r=14&c=50&d=10&i=D&dgr=10&dgy=1&dg=0&e

Finally, if you have a big customer (or are one) with 500TB, you will see a split of 10, 10, 80, 400. Thus the calculation would be:
10 TB x 50/100 x 1.05 + 10 TB x 50/100 x 0.66 + 80 TB x 50/100 x 0.4 + 400 TB x 50/100 x 0.25 =~ 5 + 3 + 16 + 50 = 74TB Workspace

You can verify that here:
http://rps.dewin.me/?m=1&s=512000&r=14&c=50&d=10&i=D&dgr=10&dgy=1&dg=0&e

So instead of saying that with 500TB you need 250TB of workspace, it is drastically lowered to 74TB. And again, that makes sense: the environment will be split into multiple jobs, those jobs will not all run at the same time, and you will probably not run an active full on all of them at the same time.

For those who want to play with it and see those buckets in action, I created a small jsfiddle here:
https://jsfiddle.net/btyzvxen/2/

Just change the workspaceTB variable and click run to update the output.
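If you prefer reading code, here is a minimal sketch of the same bucket logic (my own reimplementation to illustrate the idea, not the exact jsfiddle code):

    // bucket upper limits (TB of source data) and the factor applied per bucket
    const buckets = [
        { limit: 10,       factor: 1.05 },
        { limit: 20,       factor: 0.66 },
        { limit: 100,      factor: 0.4  },
        { limit: 500,      factor: 0.25 },
        { limit: Infinity, factor: 0.10 }
    ];

    function workspaceTB(sourceTB, compressionPct) {
        let remaining = sourceTB, prevLimit = 0, total = 0;
        for (const b of buckets) {
            // portion of the source data that falls into this bucket
            const inBucket = Math.min(remaining, b.limit - prevLimit);
            total += inBucket * (compressionPct / 100) * b.factor;
            remaining -= inBucket;
            prevLimit = b.limit;
            if (remaining <= 0) break;
        }
        return total;
    }

    // the examples above: roughly 2.6, 14.5 and 74.5 TB
    console.log(workspaceTB(5, 50), workspaceTB(50, 50), workspaceTB(500, 50));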

2017/10/03

Self Service Demo with Veeam Backup for Office 365 using the REST API

Just today, Veeam released the 1.5 GA version of Veeam Backup for Office 365. This version ships with a proxy-repository model, introducing scalability for the bigger shops and service providers. It also features a complete REST API, which I personally love: it means that the community has a chance to extend the product without any limitations. (For those just getting started, know that the REST API is not enabled by default, so you should enable it in the main options. You can find the main menu in the top-left corner under the "hamburger" icon.)


And just to show how powerful it is, I already made a small demo. It basically allows you to start up a self-service recovery wizard, where a user can log in with his LDAP/AD credentials and then restore his own mails independently from the admin. This is quite a common request I get in the field, where admins don't really feel comfortable poking around in an end user's mailbox, even if they don't have bad intentions.

The source of the self-service demo, aka "Mail Maestro", can be found on VeeamHub. A compiled Windows version can be found here. Besides the source code, the GitHub page also shows how you can use certificates to "secure" the connection between the end user and the server. By the way, the code only works with an on-premises Exchange server and a local LDAP connection, simply because I didn't have the time to set up an Office 365 account etc. Most of the wizard will probably still work; I'm just assuming that during restore, the credentials that are used (by default, the ones used to log in) might not work.

OK, so let's try it out. When you download the compiled version, you get the binary and the config file. Start by editing the JSON file with, for example, Notepad. I removed the "vbomailbox" argument because I will supply it on the command line.


Maybe some side notes. The LDAP server setting is, of course, a reference to the LDAP/AD server. To look up the user that you want to allow to do his self-service restore, we temporarily need to bind to it and look up his account, email address and distinguished name. You can use a read-only user for this. The rest should be quite self-explanatory, except maybe for "LocalStop". If you enable LocalStop, you can type "stop" on the command line to cleanly close the session from the server side. The user himself will also be able to stop the wizard from the portal after logging in, to indicate that he is done. Both will clean up the restore session in VBO365 (a headless Veeam Explorer).
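To make that lookup step concrete, here is a hedged sketch in JavaScript using the ldapjs module. This is purely illustrative of the bind-and-search idea, not Mail Maestro's actual code, and the server, account and base DN are made up:

    // illustrative LDAP lookup with ldapjs (v2.x API): bind with a read-only
    // account, then search for the user's mail address and distinguished name
    const ldap = require("ldapjs");

    const client = ldap.createClient({ url: "ldap://dc01.x.local" }); // hypothetical DC

    client.bind("readonly@x.local", "password", err => {
        if (err) throw err;
        client.search("DC=x,DC=local", {
            scope: "sub",
            filter: "(mail=bbols@x.local)",            // the self-service user
            attributes: ["mail", "distinguishedName"]  // what the wizard needs
        }, (err, res) => {
            if (err) throw err;
            res.on("searchEntry", entry => console.dir(entry.object));
            res.on("end", () => client.unbind());
        });
    });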

So let's go to the command line and pass the config file. Since we removed vbomailbox, Mail Maestro will complain that it does not know which user you want to use in this session. You can supply it on the command line by using -vbomailbox.


Let's supply a user that is being backed up

Great, the process is starting. Mail Maestro is able to find the user, start the headless Veeam Explorer session and locate the mailbox in the backups. You can also see that it is serving on http://localhost:4123. Open the firewall port and replace localhost with the server IP to grant remote access.

So when the user logs in with his email address, he will be authenticated against LDAP, and from there the wizard should hopefully be quite self-explanatory.


Let's log in to the mailbox and delete all the mails in the inbox.

Now let's restore them from Mail Maestro by clicking the green restore button next to the Email Box

... and the mails are back


When the user is done, he can stop the portal via the button in the top-right corner of the portal. I noticed that if the browser window is too small, the button might not show up. Anyway, you can always stop the wizard by typing "stop" on the command line.

Final notes: as with many of my projects, this is just a demo. If you feel like you could use this in a production environment, please evaluate the code. It is published under the MIT license, so basically you can do whatever you want with it, at your own risk. I hope, however, that this shows how powerful the new API is and what you can do with it. I can only imagine that in the future, service providers will be able to build their own backup portal and offer Backup as a Service. In fact, I know my colleague Niels Engelen has been working on such a demo in PHP.

2017/09/12

Adding AD/OU users to VBO365 via PowerShell

For the Veeam fans out there: you must have been living under a rock if you don't know that there is a new backup product for Office 365 (Veeam Backup for Office 365, or VBO365 for short). It allows you to back up mailbox items like mail, calendar items, etc.

While 1.0 has already been released, 1.5 is currently in public beta. One of the cool things it brings is scalability, which a lot of users have been asking for. It also brings full automation support in the form of a complete REST API and a complete PowerShell module. In this blog post, I want to show you the power you get with the new PowerShell module.

Quite often I get asked how to add only a selected set of users to a job. For example, a company has 4000 mailboxes but only wants to select a certain number of mailboxes for protection in a certain job. This makes even more sense with v1.5, since you can define multiple repositories with different retention. So maybe for the helpdesk guys you don't really want to keep backups that long, but for the managers you want to keep the mails backed up for 8 years. Handpicking those users per job can be a tedious task.


With the new PowerShell module, you can automate this task. There is a new cmdlet called "Add-VBOJob" that allows you to define a new job. It takes the following parameters:

  • Organization (Get-VBOOrganization)
  • Target Repository (Get-VBORepository)
  • Mailboxes (Get-VBOOrganizationMailbox)
  • Schedule Policy (New-VBOJobSchedulePolicy)
  • Name

To see it in action, I made a sample script that queries Active Directory and gets all users in a certain OU. Then, based on those users, you can build a list of email addresses that you want to add. Armed with that list, you can use "Get-VBOOrganizationMailbox" to select the correct mailboxes.
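Condensed, the flow might look like this. Treat it as a sketch only: the parameter and property names follow the list above as I understand the documentation, the OU, organization and repository names are made-up examples, and the linked script below is the reference.

    # sketch: build a job from the mailboxes of one OU; names are examples
    Import-Module ActiveDirectory
    Import-Module "C:\Program Files\Veeam\Backup365\Veeam.Archiver.PowerShell"

    # get all users in a certain OU and build the email list from SamAccountName
    $users  = Get-ADUser -SearchBase "OU=Sales,DC=x,DC=local" -Filter *
    $emails = $users | ForEach-Object { "$($_.SamAccountName)@x.local" }

    # select only the mailboxes whose address is in that list
    $org       = Get-VBOOrganization -Name "x.local"
    $mailboxes = Get-VBOOrganizationMailbox -Organization $org |
        Where-Object { $emails -contains $_.Email }

    # create the job with a daily schedule
    $repo   = Get-VBORepository -Name "Default Backup Repository"
    $policy = New-VBOJobSchedulePolicy -Type Daily -DailyTime "22:00" -DailyType Everyday
    Add-VBOJob -Name "Sales OU" -Organization $org -Repository $repo -Mailboxes $mailboxes -SchedulePolicy $policy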

You can find the full script here. It should be quite straightforward. Here are some screenshots of it in action.

First of all, the module is in "C:\Program Files\Veeam\Backup365\Veeam.Archiver.PowerShell", so you can just execute "Import-Module 'C:\Program Files\Veeam\Backup365\Veeam.Archiver.PowerShell'". However, the $installpath trick in this script tries to find the installation directory even if you did not install VBO365 in the default location.

Now, as you can see from the output, it found 3 users in the OU:

  • bbols@x.local
  • ppeeters@x.local
  • tbruyne@x.local
The scripts "builds" the email address list based on the SamAccountName, but of course if you have a different policy, you can change the example. For example, I imagine quite a few companies having something like FirstName.LastName@company.com. Btw if you are wondering, "x.local" isn't a real DNS name, so how does that work with VBO365? Well it seems that 1.5 will also support on premise Exchange and Hybrid Deployments.

After building the email list, the script creates the job.


If we check the job, you will see those email addresses (mailboxes) were successfully added to the job.


Well, in this case it was only 3 users in my test lab, but I can imagine that if you need to add 500 users, you will be grateful not to have to add them one by one. Also, you could do this in a for loop, going over multiple OUs and creating multiple jobs. Finally, if you are going to use this in production once it is GA, I would recommend that you validate that you have the same number of users in the OU as in the job. In this example, the script simply checks all the mailboxes (Get-VBOOrganizationMailbox) and verifies whether the email address associated with a mailbox is in the initial email list. If it is, it is added to the job.

2017/04/18

Gathering your Veeam Agents in Veeam Backup & Replication

So with the new upcoming version of Veeam Agent for Windows, you will be able to back up directly to a Backup & Replication server. Not everybody knows this, but you will also be able to back up to Veeam Backup & Replication Free Edition, provided you have a license for Veeam Agent for Windows. This might be important for smaller shops that have only a couple of machines and do not have a Veeam Backup & Replication license.

The steps to enable this are not so difficult, but without the GA product there is no documentation, so it might be difficult to figure out how it all ties together. So here are the 7 steps you need to take to get it all working. Special thanks to Clint Wyckoff, who shared these instructions internally.

Step 1: Start by installing Veeam Backup & Replication

Fairly simple start. Download Veeam Backup & Replication Free Edition, mount the ISO on your target server and click the install button to start the installation. Basically, in this example we did a next-next-next-finish install. If you are doing this in production, it might actually be good to read what you are doing. Notice that in the license step I did not assign any license, so the free mode will be installed.

Step 2: Enable full functionality view

The next step is where one of my partners got stuck. You need to enable the "Full Functionality" view to get through the next steps. By default, you get the "Free Functionality" view, which shows you only the options you can use in Free mode. However, if you add a Veeam Agent for Windows license, you will unlock more functionality for your agent backups than is available by default in the free mode.

To enable it, go to the main menu, select View and then select the "Full Functionality" mode.


Step 3: Add the Veeam Agent for Windows license to Backup & Replication

This might also be confusing, but you do not need to add the license during the Veeam Agent for Windows install. Rather, you add it to Veeam Backup & Replication, and when you connect a Veeam Agent for Windows to VBR, it will acquire the license from the VBR server. This is good because you get a central place to manage the licenses.

Go to the main menu, but this time choose "License". In the popup, click the install license button and select the Veeam Agent for Windows license file (.lic file) in the file browser. The result should be that the license is installed but the VBR server itself remains in Free mode.

Step 4: Define permissions

The next step is to define the permissions on your repository. Go to the "Backup Infrastructure" section and click the "Backup Repositories" node. Then select the repository you want to assign rights to, and finally click "Agent Permissions". In the popup window, you will be able to assign the permissions.

For this tutorial, I made a separate user called "user1", just to show you that you can set very granular permissions.

Step 5: Install the client

Installing the agent on another machine should be fairly trivial. However, in this setup we chose not to configure the backup during the install, nor to create a recovery medium. I would, however, highly recommend that you create a recovery medium, so that you can execute bare-metal recoveries if needed.

Step 6: Configure the client

Once the product is installed, we can configure it. To open the control panel, go to your system tray. A new icon with a green V should have appeared. Because we have not configured anything yet, it should also have a small blue question mark on it. Right-click it and select control panel.

When the control panel appears, ignore the fact that it does not have a license (click no). Click configure backup to start the configuration.

Finally, in the backup wizard, select Veeam Backup & Replication repository as the target. Specify the FQDN/IP and the credentials. When you click next, the permissions are checked and the license is acquired from the backup server. In the next step, you are able to select the repository.

By the way, if a user connects without permissions on the repository, the configuration wizard will refuse to go to the next step.

Step 7: Run the backup

With the configuration done, you are ready to run the backup. You can see the backup job and the backup itself in the Veeam Backup & Replication console.

In the Veeam Agent for Windows, if you click the license tab, you will also see that the agent is licensed through Veeam Backup & Replication

So what's next?

Well, you can explore what other functionality is enabled when you back up to a Free Edition server. One cool feature is the ability to "backup copy" your job to a second location. For example, in the following screenshot, I defined a repository on another drive and then ran a backup copy job to the second location.