2016/04/13

Veeam RESTful API via Powershell

In this blog post I'll show you how you can play around with the Veeam RESTful API via Powershell. This post will show you how to find a job and start it. You might wonder why you would do such a thing? Well, in my case it is to showcase the interaction with the API (per line of code), very similar to what you would do with wget or curl. If you want an interactive way of playing with the API, know that you can always replace /api with /web/#/api/ (for example http://localhost:9399/web/#/api/) to get an interactive browser. However, via Powershell you get the real sense that you are interacting with the API, and all methods used here should be portable to any other language. That is why I've not chosen to use "Invoke-RestMethod", but rather a raw HTTP call.


So the first thing (which might not be required) is to ignore the self-signed certificate. If you access the API via the FQDN on the server itself, the certificate should be trusted, but that would make my code less generic.
add-type @"
    using System.Net;
    using System.Security.Cryptography.X509Certificates;
    public class TrustAllCertsPolicy : ICertificatePolicy {
        public bool CheckValidationResult(
            ServicePoint srvPoint, X509Certificate certificate,
            WebRequest request, int certificateProblem) {
            return true;
        }
    }
"@
[System.Net.ServicePointManager]::CertificatePolicy = New-Object TrustAllCertsPolicy
So with that code executed, you have told dotnet to trust everything. The next step is to get the API version:
$r_api = Invoke-WebRequest -Method Get -Uri "https://localhost:9398/api/"
$r_api_xml = [xml]$r_api.Content
$r_api_links = @($r_api_xml.EnterpriseManager.SupportedVersions.SupportedVersion | ? { $_.Name -eq "v1_2" })[0].Links
With the first request, we basically do a GET request to the API page. The Veeam REST API uses XML instead of JSON, so we can just convert the content itself to XML. Once that is done, we can browse the XML. The cool thing about Powershell is that it allows you to browse the structure with autocompletion. Just execute $r_api_xml and you will get the root element. By adding a dot and the name of a child element, you can see what's underneath this node. You can repeat this process to "explore" the XML (or you can just print out $r_api.Content without conversion to see the plain XML).

Under the root container (EnterpriseManager), we have a list of all supported versions. By applying a filter we get the v1_2 (v9) API version. This one has one link which indicates how you can log on:
PS C:\Users\Timothy> $r_api_links.Link | fl

Href : https://localhost:9398/api/sessionMngr/?v=v1_2
Type : LogonSession
Rel  : Create
So the Href shows the link we have to follow. The Type tells us the name of the link and, finally, Rel tells us which HTTP method we should use. Create means we need to do a POST.

Most of the time:
  • the GET method is used if you want to get details but don't want to do a real action
  • the POST method is used if you want to do a real action
  • the PUT method is used if you want to update
  • the DELETE method is used if you want to destroy something
When in doubt, check the manual: https://helpcenter.veeam.com/backup/rest/requests.html
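To make that mapping concrete, here is a small helper of my own (a sketch, not something the API ships): it translates a link's Rel value into the HTTP verb to use. The action names in it are assumptions based on the links shown in this post; check the manual for the authoritative list.

```powershell
# Rough Rel -> HTTP verb mapping (my own sketch; consult the manual when in doubt)
function Get-MethodForRel {
    param([string]$Rel)
    switch ($Rel) {
        "Create" { "Post" }    # real action, e.g. creating a logon session
        "Edit"   { "Put" }     # update an existing resource
        "Delete" { "Delete" }  # destroy, e.g. logging off
        { $_ -in "Start","Stop","Retry" } { "Post" }  # job actions are real actions too
        default  { "Get" }     # Down/Up/Alternate/Related are plain reads
    }
}

Get-MethodForRel "Create"  # -> Post
```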

OK, for authentication we have to do something special: we need to send the credentials via basic authentication. This is a pure HTTP standard, so I'll show you two ways to do it.
$r_login = Invoke-WebRequest -method Post -Uri $r_api_links.Link.Href -Credential (Get-Credential -Message "Basic Auth" -UserName "rest")

#even more raw

$auth = "Basic " + [System.Convert]::ToBase64String([System.Text.Encoding]::UTF8.GetBytes("mylogin:myadvancedpassword"))
$r_login = Invoke-WebRequest -method Post -Uri $r_api_links.Link.Href -Headers @{"Authorization"=$auth}
Well, the first method uses Powershell's built-in functionality for doing basic authentication. The second method actually shows what is really going on in the HTTP request: "username:password" is encoded as base64, then "Basic " and this encoded string are concatenated. The result is then set in the Authorization header of the request.

The result is that (if we logged on successfully) we get a logon session, which has links to almost all main resources. Before we go any further, we do need to analyze the response a bit.
if ($r_login.StatusCode -lt 400) {
If you do a call, you can check the StatusCode or return code. You are expecting a number between 200 and 204, which means success. If you want to know the exact meaning of the HTTP return codes in the Veeam REST API: https://helpcenter.veeam.com/backup/rest/http_response_codes.html
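One thing to be aware of: Invoke-WebRequest throws an exception on 4xx/5xx responses instead of returning them, so if you want to inspect failures you have to catch them. A minimal sketch against the same local API endpoint used throughout this post:

```powershell
try {
    $r = Invoke-WebRequest -Method Get -Uri "https://localhost:9398/api/"
    # only reached on success responses
    Write-Host "success: $($r.StatusCode)"
} catch {
    # on an HTTP error, the response object hangs off the exception
    $response = $_.Exception.Response
    if ($response -ne $null) {
        Write-Host "request failed with HTTP $([int]$response.StatusCode)"
    } else {
        Write-Host "request failed: $($_.Exception.Message)"
    }
}
```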

The next thing is to extract the REST session id. Instead of sending over the username and password the whole time, you send over this header to authenticate. The header is returned after you successfully log in.
    #get session id which we need to do subsequent request
    $sessionheadername = "X-RestSvcSessionId"
    $sessionid = $r_login.Headers[$sessionheadername]
So now we have the session id extracted, let's do something useful:
    #content
    $r_login_xml = [xml]$r_login.Content
    $r_login_links = $r_login_xml.LogonSession.Links.Link
    $joblink = $r_login_links | ? { $_.Type -eq "JobReferenceList" }
First we take the logon session, convert it to XML and browse the links. We are looking for the link with type "JobReferenceList". Let's follow this link. In the process, don't forget to configure the session id in your header.
    #get jobs with id we have
    $r_jobs = Invoke-WebRequest -Method Get -Headers @{$sessionheadername=$sessionid} -Uri $joblink.Href
    $r_jobs_xml = [xml]$r_jobs.Content
    $r_job = $r_jobs_xml.EntityReferences.Ref | ? { $_.Name -Match "myjob" }
    $r_job_alt = $r_job.Links.Link | ? { $_.Rel -eq "Alternate" }
The first line is just getting and converting the XML. The page which we requested is a list of all the jobs in reference format. The reference format is a compacted way of representing the object that you requested, basically showing the name and the ID of the job and some links. If you add "?format=Entity" to such a list (or to a request for an object/object list), you get the full details of the job.

So why the reference representation? Well, it is a pretty similar concept to the GUI. If you open the Backup & Replication GUI and you select the job list, you don't get the complete details of all the jobs. That would be kind of overwhelming. But when you click a specific job and try to edit it, you get all the details. Similarly, if you want to build an overview of all the jobs, you wouldn't want the API to give you all the unnecessary details, as this would make the processing time of the request much longer (downloading the data, parsing it, extracting what you need, ...).

So in the third line, what we do is look for the job (or rather the reference to a job) whose name matches "myjob". We then take a look at the links of this job and look for the alternate link. Basically, this is the job id + "?format=Entity" to get the complete details of the job. Here is the output of $r_job_alt:
PS C:\Users\Timothy> $r_job_alt | fl
Href : https://localhost:9398/api/jobs/f7d731be-53f7-40ca-9c45-cbdaf29e2d99?format=Entity
Name : myjob
Type : Job
Rel  : Alternate
Now ask for the details of the job:
    $r_job_entity = Invoke-WebRequest -Method Get -Headers @{$sessionheadername=$sessionid} -Uri $r_job_alt.Href
    $r_job_entity_xml = [xml]$r_job_entity.Content
    $r_job_start = $r_job_entity_xml.Job.Links.Link | ? { $_.Rel -eq "Start" }
By now, the first two lines should be well understood. In the third line we are looking for a link on this object with Rel "Start". This is basically the method we need to execute to start the job. Start is a real action, and if you look it up, you will see that you need to use a POST method to call it.
    #start the job
    $r_start = Invoke-WebRequest -Method Post -Headers @{$sessionheadername=$sessionid} -Uri $r_job_start.Href
    $r_start_xml =  [xml]$r_start.Content

    #check if the command is successfully delegated
    while ( $r_start_xml.Task.State -eq "Running") {
        $r_start = Invoke-WebRequest -Method Get -Headers @{$sessionheadername=$sessionid} -Uri $r_start_xml.Task.Href
        $r_start_xml =  [xml]$r_start.Content
        write-host $r_start_xml.Task.State
        Start-Sleep -Seconds 1
    }
    write-host $r_start_xml.Task.Result
OK, so that is a bunch of code, but I still wanted to post it. First we follow the start method and parse the XML. The result is actually a "Task". A Task represents a process that is running on the RESTful API, which you can refresh to check the actual status of the process. What is important: it is the REST process, not the backup server process. That means that if a task is finished for REST, it doesn't necessarily mean that the action at the backup server is finished. What is finished is that the API has passed your command to the backup server.

So in this example, while the task is in the "Running" state, we refresh the task, write out the State, sleep one second, and then do the whole thing again. Once it is finished, we write out the Task.Result. Again, if this task is Finished, it does not mean the job is finished, but the backup server has, hopefully successfully, started the job.

So finally we need to log out. That is rather easy. In the logon session, you will find the URL to do so. Since you are logging out, or deleting your session, you need to use the DELETE method. You can check that by looking at the relationship of the link.
    $logofflink = $r_login_xml.LogonSession.Links.Link | ? { $_.type -match "LogonSession" }
    Invoke-WebRequest -Method Delete -Headers @{$sessionheadername=$sessionid} -Uri $logofflink.Href
I have uploaded the complete code here:
https://github.com/tdewin/veeampowershell/blob/master/restdemo.ps1 
There is some more code in that example that does the actual follow-up on the process. You can skip or analyze that code if you want.

2016/01/14

Extending Surebackup in v9

Now that everybody has posted their favorite new features in Veeam v9, I want to take the time to highlight one particular feature: the credential manager part in Surebackup. This extra tab can be found when you configure your VM in the application group.


So why this extra tab? Well, you can read my Surebackup Sharepoint validation script and instantly see the biggest problem. Storing your credentials in a secure way is 60% of the whole blog article. This is because in v8, all scripts are started by the backup service and thus inherit its account and permissions.

Enter v9, where the credentials tab is added. My first assumption was that all scripts would run under the configured account. That turned out to be incorrect. The script is still started with the backup service account, but the network credentials are changed. This has one big advantage: even if your backup server is not in the domain, you can still use these credentials. Think of it as using "runas /netonly" to start up an application (this is how Product Management explained it to me). The credentials are only applied when connecting to a remote server.

So for the fun of it, I have already looked into some example scripts. They might not all be that stable and it is better to change them to your liking, but they should give you an idea of where to start.

First of all, you can find an updated version of the Sharepoint script. The only parameters to set are:
  •     -server [yourserver, %vm_ip% for example]
  •     -path [to content you want to check, by default : /Shared%20Documents/contenttest.txt]
  •     -content [the content to check, by default  "working succesfully"]
If you then correctly set up an account that can access the webservice, it will authenticate successfully with the network credentials, download the file and match the content. The real magic? "$web.UseDefaultCredentials = $true"
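Stripped to its essence, the check looks roughly like the sketch below. The URL and expected string are placeholders matching the defaults above; the point is the UseDefaultCredentials line, which makes the WebClient reuse the network credentials that Surebackup injected.

```powershell
# Placeholder values; in the real script these come in via -server/-path/-content
$url      = "http://mysharepoint/Shared%20Documents/contenttest.txt"
$expected = "working succesfully"

$web = New-Object System.Net.WebClient
# the real magic: reuse the (network) credentials of the process Surebackup started
$web.UseDefaultCredentials = $true
$content = $web.DownloadString($url)

if ($content -match $expected) {
    Write-Host "content matched, Sharepoint test OK"
    exit 0
} else {
    Write-Host "content did not match"
    exit 1
}
```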

But the fun doesn't stop there. I also tried to make a SQL script. You only need to pass:
  • -server [yourserver, %vm_ip% for example]
  • -instance [by default MSSQLSERVER]
It will log on to the instance, issue a "use" and query the tables of all databases. Finally it checks the state of the databases in "sys.databases". The use statement makes sure that SQL Server actually tries to mount the database. But the cool thing is, you can easily alter the example to execute a full blown SQL query and then check if the output satisfies your needs. The real magic? "Server=$instancefull;Integrated Security=True;"
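The heart of that SQL check can be sketched as follows; the server and instance names are placeholders, and the connection string with Integrated Security is what makes the network credentials kick in.

```powershell
# Placeholders; in the real script -server/-instance are parameters
$server   = "mysqlserver"
$instance = "MSSQLSERVER"
# default instance = just the host name, named instance = host\instance
$instancefull = if ($instance -eq "MSSQLSERVER") { $server } else { "$server\$instance" }

$conn = New-Object System.Data.SqlClient.SqlConnection
# the real magic: integrated security authenticates with the network credentials
$conn.ConnectionString = "Server=$instancefull;Integrated Security=True;"
$conn.Open()

# check the state of every database as reported by SQL Server itself
$cmd = $conn.CreateCommand()
$cmd.CommandText = "SELECT name, state_desc FROM sys.databases"
$reader = $cmd.ExecuteReader()
while ($reader.Read()) {
    Write-Host ("{0} : {1}" -f $reader["state_desc"].ToString().ToLower(), $reader["name"])
}
$reader.Close()
$conn.Close()
```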

I also added a template for just any plain Powershell script. This might look trivial (it doesn't do anything but log on and write the hostname), but I spent some time figuring out that you need "-Authentication Negotiate" and that there is no need to set up SSL. However, do check if the firewall allows remote connections from outside the domain if you want to use this one.
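For reference, the skeleton of such a template could be as small as this; the target address is a placeholder (in Surebackup you would pass %vm_ip% via an argument), and Negotiate is the authentication piece mentioned above.

```powershell
# Placeholder target; in Surebackup pass "-server %vm_ip%" as an argument
$server = "192.168.1.50"

# Negotiate lets you authenticate (NTLM) to a machine outside the domain,
# and no HTTPS/SSL listener setup is required on the target
$session = New-PSSession -ComputerName $server -Authentication Negotiate
Invoke-Command -Session $session -ScriptBlock { hostname }
Remove-PSSession $session
```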

So no more excuses for writing those extensive application test scripts!

One final tip: if you are customizing these examples, you can do a Powershell "write-host" at any time. The output can be found in the matching Surebackup log. By default in:
  %programdata%\veeam\Backup\\Job..log

For example, for the SQL script, you would find something like:
[08.01.2016 15:57:59]  Info     [SureBackup] [SQLandSP] [ScriptTests] [Console] online : master

2015/11/30

Extending Surebackup with custom scripts : Sharepoint

Often I visit customers and ask them about their restore tests. The most common answer? We test the backups when we do the actual restores. To the question why not test more frequently, the most common answer would be "time and resources".

A couple of months ago, I actually visited a customer that tried to do a restore from backup. It failed: B&R was able to restore the backup, but the data inside seemed to be corrupt. The SQL server refused to mount the database. Exploring multiple restore points gave the same issue. It was a strange issue because all backup files were consistent (no storage corruption), and the backup job did not have any failed states. The conclusion was Changed Block Tracking corruption. In light of the recent bugs in CBT, I wanted to emphasize again how critical it is to validate your backups. If the customer had tested his backups with, for example, the SQL test script included in v8, they might have caught the error before the actual restore failed.

This shows another thing I want to highlight. Surebackup is a framework, but your verification is only as good as your test. By default, Surebackup application tests are just portscans. This tells you that the service has started (it was able to bind to the port and it is answering), but doesn't tell you anything about how well the service is performing. For example, the SQL service/instance could start, but maybe some databases could not be mounted inside the instance.

Few people visit this topic, but you can actually extend the framework. The fact that it supports Powershell makes it quite simple to write more extensive tests.

So here is a small test for Sharepoint. I hacked it together today, so please reread the whole script to "Surebackup" my coding skills. It is rather basic, but you could actually use it for any kind of webservice. It simply reads the content of a txt file in a certain site. If the content matches a predefined value, you know that:
a) The database was mounted inside the instance
b) Sharepoint is able to talk to the instance and query it
c) The webservice is responding to requests

So how do you get started? Well, first upload a txt file with some content. In my case, I uploaded the file contenttest.txt with the content "sharepoint is working succesfully" as shown below:


You can right click the link and copy its location. Test if you can really access it this way, as shown below.


Now get the Powershell goodness from https://github.com/tdewin/veeampowershell/blob/master/suresharepoint.ps1 and put it somewhere on the backup server. Then edit the file.



First of all, you can see that everything can be passed as a parameter (e.g. on the command line, use -server "ip" to change the IP address). Change the username and plaintext password to the user that will be used to authenticate against Sharepoint. Preferably an account with read-only rights and not the administrator as in my screenshot; this way you are sure it doesn't break anything ;).

You might wonder, do I need to provide the password in plaintext? No, you don't have to actually; you can also follow this procedure, but it might make things more complex. Instead of plaintext passwords, you can use Powershell encrypted passwords, but understand that if you want to decrypt the password, you need to be the same user as the one that encrypted it (the whole point of encrypting it, right?). When Surebackup runs, the script is actually being run by the backup service. So the account that is being used to decrypt the password is the service account used to run this service (as shown in the screenshot below).



If this is not the Local System account but a service account, you can use the following cmd script to create an encrypted password:
https://github.com/tdewin/veeampowershell/blob/master/encryptedpasstoclip/encryptedpasstoclip.cmd

Change the username in the bat file, run it, enter the password for the service account and finally enter the password for the account you want to use to authenticate to Sharepoint. The result should be that an encrypted password is put on your clipboard. Replace the whole password statement in the file, for example:
$pass = "01000000d08c9ddf0115d1118c7a00c04fc297eb01000000c9b320ead0059d409978380353923e8000000000020000000000106600000001000020000000b1816dffef13bc70672b55dfcee25a41488d5bb395ae28242b70afeb90938db9000000000e8000000002000020000000bd7da1d0d06893bed8b035c411c34f181b000aa9f0e4f46658eb3efe3e73c06840000000948652774f7f82848ba3065af8193c23fe25b773cea3ecf65957bdc12cdcc71868a82ba11d0475e65b321056a900d0571a05184b89132c0f21452642033c918340000000e8fcabb194c06c78ad01ee2192b73bf7ba799630adfedb6091dc1a629dc9d5a2a6025a64fcf74fe8a89d4a579a54c3538928ee0d22a57f22f6e50da240deaa62"
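Under the hood, that cmd script relies on standard Powershell secure string handling. Here is a sketch of the round trip (the plaintext value is obviously a placeholder); DPAPI ties the encrypted blob to the user who created it, which is why it must be generated as the service account:

```powershell
# Encrypt: run this as the account that will later decrypt (the service account)
$plain     = "myadvancedpassword"   # placeholder
$secure    = ConvertTo-SecureString $plain -AsPlainText -Force
$encrypted = $secure | ConvertFrom-SecureString   # long blob, safe to store in the script

# Decrypt inside the test script (only works for the same user that encrypted it)
$secure2 = $encrypted | ConvertTo-SecureString
$cred    = New-Object System.Management.Automation.PSCredential("mylogin", $secure2)
```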
If you got this far (or you skipped the whole password encryption part because your backup server is a Fort Knox anyway), we can now configure the script. Go to Veeam B&R and configure the test script as shown below in the application group (or in the linked jobs part):



Notice that I also configured "Arguments" as "-server %vm_ip%". This will pass the sandbox IP to the script directly.

Before you actually start up Surebackup, you can test the script against your production environment. If it doesn't work against the production environment, it will probably fail against your lab environment. In case you configured an encrypted password with another account, you can temporarily override it with the following command (in case you did not, you can just run script.ps1 -server )
PS C:\Users\demo2> C:\scripts\suresp.ps1 -server -password (read-host -assecurestring -prompt "pass" | convertfrom-securestring)


Now, if everything is green and you got a match, run Surebackup and validate that you get the same output in your lab.


If it failed, you can actually check the logs for the output the script gave. Go to "%programdata%\Veeam\Backup\". It should contain a folder named after your job. In this folder, there should be a log called Job.. You can open it with notepad.


Scroll all the way down in the log and look for "[console]"


This should give you the output of the console. In this case, everything was ok!

2015/11/27

Veeam Application Report

As many of you know, although Veeam B&R has an agentless approach, it still makes sure that all the applications are consistently flushed just before the backup starts. To do this, Veeam B&R leverages VSS. One thing it also does is try to detect which applications are installed in which VM. This data is collected so that during restore, you don't have to figure out which VM is holding which application and where exactly the application database is stored inside the VM (for example for Exchange, it will detect the path(s) leading to the EDB(s)).

Now a fellow SE colleague requested to add this "application detection" to the main GUI. They wanted to leverage the detection to sort out which VMs have which application installed. Adding it to the main GUI would however make it more complex, but you can actually leverage the data via Powershell.

So here is a sample script you can use as a starting point:
https://raw.githubusercontent.com/tdewin/veeampowershell/master/veeam-per-app-detect.ps1

It generates a nice clean report with all the VMs that have detected applications (yes, even Oracle, so it is v9 ready), grouped per application. The output should look something like the screenshot below:


Enjoy!

2015/10/09

RPS 0.3.2

Just a small update (which required some re-engineering under the hood).

First of all, when you click the backup file size, you get a small pop-up window. This will tell you the uncompressed and compressed bandwidth usage over multiple intervals. It should help you to understand how much processing power you need for a certain amount of input. The first two lines show the uncompressed data in bytes and bits; the next two lines show the compressed data in bytes and bits. The columns indicate your time window. Notice that clicking on the full file or the incremental file gives different output, so you can compare full runs vs incremental runs really easily.


The second feature is available when you click the total file size. It will give you a table overview of the output, which you can easily copy to Excel or Calc. The numbers are all in GB so they give a predictable output.


The final feature is a very small one, but I really like it because it took literally 10 minutes to code and will remove some frustration. During a recent conf call with my colleague Johan Huttenga, I noticed he was struggling with inputting 30TB of data. He needed to calculate it by multiplying 1024 by 30 to get exactly 30TB and not 29.99TB. So in this version, you can input 30TB and it will be automatically converted to "30720". Same for 1PB to "1048576". The input is case insensitive, so tb, Tb, TB, pb, PB, pB, etc. should all work. For example, fill in 1TB as shown below.
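RPS itself is JavaScript, but for the curious, the conversion logic boils down to a small function in any language. A Powershell sketch:

```powershell
# Convert inputs like "30TB", "1pb" or plain "500" to GB, case-insensitively
function Convert-ToGB {
    param([string]$Value)
    if ($Value -match '^\s*([\d\.]+)\s*(tb|pb)?\s*$') {
        $num = [double]$Matches[1]
        switch (($Matches[2] + '').ToLower()) {
            'tb'    { $num * 1024 }          # 1 TB = 1024 GB
            'pb'    { $num * 1024 * 1024 }   # 1 PB = 1048576 GB
            default { $num }                 # already GB
        }
    }
}

Convert-ToGB "30TB"   # -> 30720
Convert-ToGB "1pb"    # -> 1048576
```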


As soon as you push enter, tab to another input or click on the simulate button, the input will be dynamically changed.

2015/08/25

Global backup report in Powershell

A lot of partners try to go that extra step and also manage onsite Veeam backup environments. They mostly want a report with all job statuses from all Veeam backup servers, instead of plowing through hundreds of emails sent by multiple backup servers.

Enterprise Manager allows you to do that. You can add multiple backup servers and it will give you a global overview. However, it also acts as a license manager. So if you have different customers with different licenses, you cannot add them all together in one Enterprise Manager.

A way around it would be to create some RESTful API integration. That would be the cleanest way to do it, in my humble opinion. However, if you want a quick hack, you can also do it by using Powershell. Just launch a remote session to all those backup servers and collect the data.

Now a lot of people just need a small sample script, just to get started. So here is a basic sample. It is surely not feature complete and has very poor error handling. But it can get you started.

So the first part defines the instances. Granted, it would be cleaner to take the table, convert it to a CSV, and then import it at the beginning of the script. The instances table consists of objects that define: the customer, the backup server, the username and the password in an encrypted form. Not sure how to get the password in an encrypted form? Just use the code at the top to generate what you need. However, make sure that the whole password doesn't have any line breaks when you copy-paste!
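In rough strokes, the collection loop boils down to the sketch below. The server name, username and job properties are placeholders of my own, not the actual sample:

```powershell
# Hypothetical instance table; cleaner would be importing this from a CSV
$instances = @(
    New-Object PSObject -Property @{
        Customer = "ACME"
        Server   = "veeam01.acme.local"
        Username = "acme\veeamadmin"
        EncPass  = "01000000d08c9ddf..."  # blob produced by ConvertFrom-SecureString
    }
)

$globaljob = @()
foreach ($i in $instances) {
    $cred = New-Object System.Management.Automation.PSCredential(
        $i.Username, ($i.EncPass | ConvertTo-SecureString))
    # run the Veeam cmdlets remotely on each backup server
    $jobs = Invoke-Command -ComputerName $i.Server -Credential $cred -ScriptBlock {
        Add-PSSnapin VeeamPSSnapin
        Get-VBRJob | Select-Object Name, @{n="LastResult";e={$_.GetLastResult()}}
    }
    # tag every row with the customer so the central table stays readable
    $globaljob += $jobs | Select-Object @{n="Customer";e={$i.Customer}}, Name, LastResult
}
```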


Resulting in the pre-created code


After correct copy/pasting and removing line breaks, you should get something like this


If you then run the code, it should connect to all the instances, execute some Veeam PS code, build a table and then collect it centrally. The end result? A $globaljob table, which you can then use to build a CSV report, an HTML report, one big email, etc. Hope it can be useful to somebody as a starting point!

2015/08/17

Getting the correct input into RPS

In a first article about the Restore Point Simulator (RPS), I talked about the history of RPS and why it was created. I now want to take the time to explain the correct input parameters. Although I added some tool-tips in 0.3.1, people are sometimes confused about how it all ties together.

TL;DR? The RPS GUI maps quite directly to the Veeam interface, check the screenshots to see how, if you already understand how Veeam works.

For those that didn't read the previous article, I'll repeat the formula here, which we use at Veeam to do rough estimations. Why? Because RPS is directly based on it:
Backup size = C * (F*Data + R*D*Data)
Data = sum of processed VMs size by the specific job (actually used, not provisioned)
C = average compression/dedupe ratio (depends on too many factors, compression and dedupe can be very high, but we use 50% - worst case)
F = number of full backups in retention policy (1, unless backup mode with periodic fulls is used)
R = number of rollbacks (or increments) according to retention policy (14 by default)
D = average amount of VM disk changes between cycles in percent (we use 10% right now, but will change it to 5% in v5 based on feedback... reportedly for most VMs it is just 1-2%, but active Exchange and SQL can be up to 10-20% due to transaction logs activity - so 5% seems to be good average)
This formula and RPS have three parameters in common. Data would be the first one and it maps to "Used Size GB". To give you an example, if you would have a VM with one VMDK, the used size would be the amount of blocks the VM has already written to. So if you would have a thick provisioned VMDK of 50GB but you only use 20GB inside the guest, the used size would be around 20GB. A more correct definition would be what the VM would actually use if it were thin provisioned. Because Veeam backs up at the block level, a thin provisioned disk would be exactly what Veeam needs to process during a full backup (plus some metadata).

D, or delta, is the amount of data that changes between backups. In RPS the parameter is called change rate. This depends on two things. First of all, the frequency of backups, which in most companies is daily. Secondly, the application. Don't set this parameter too low. Veeam backs up at the block level. A small update can flag a bigger block at the VMDK level than you estimated. If you fill up a disk sequentially, like a file server, you won't notice this so much, because 10 sequential changes could flag only one block. However, if you are doing a lot of small random I/O in various locations, the number can quickly rise. So from my experience, a change rate of 5% is fairly optimistic while 10% is rather conservative. I personally prefer the more conservative approach.

Finally C, or compression, which is called "Data left after reduction". I have had tons of people discussing this parameter with me because you can interpret it in various ways. However, look at the formula and it will all make sense. The formula basically says:
Backup Data = C * (Total Data In)
So C is a factor that should make "Total Data In" smaller. The smaller the number, the smaller the backup. The compression factor is thus a percentage that tells how much data is left after the reduction has done its job. If you define 40% (40/100) and consider a "data in" of 100, the backup data would be 40 = (40/100)*100. If you define 60%, you actually tell the engine that you expect worse results, because the end result would be 60 = (60/100)*100. So the lower the compression value, the better the compression.

For some people this feels counter-intuitive. If you prefer compression factors like 2x or 3x instead, you can easily convert those by taking 1/(compression factor). So for example 2x would be 1/2 = 50%. If we put that in the previous formula, we would get 50 backup data = 50%*(100 total data in). So data was compressed 2x. For 3x, you would get 1/3, or 33% =~ 35%. Again, if we fill this into the formula, you would get 35 = 35%*(100). Use this link if you really want to use 33%.
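You can sanity-check both the formula and the factor conversion with a few lines of arithmetic (values picked purely for illustration):

```powershell
# Backup size = C * (F*Data + R*D*Data), with illustrative values
$Data = 1000    # GB used
$C    = 0.5     # 50% left after reduction, i.e. 2x compression
$F    = 1       # one full backup in the chain
$R    = 14      # 14 increments
$D    = 0.10    # 10% change rate between cycles
$backupSize = $C * ($F * $Data + $R * $D * $Data)
$backupSize   # 0.5 * (1000 + 1400) = 1200 GB

# Converting an "Nx" compression factor to "data left after reduction"
100 / 2                   # 2x -> 50 (%)
[math]::Round(100 / 3)    # 3x -> 33 (%)
```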

If you want to disable compression, set data reduction to 100%.

The formula has two parameters left: F and R. As discussed in the previous article, these are hard to calculate, and that is exactly what RPS does for you. So for retention points, you should just define your policy. If you need 14 daily restore points, put in 14 for "Retention points" and daily for "Interval". You will see that in some cases, you might end up with more restore points than you configured. This can be normal, as Veeam considers your retention policy but also the dependencies between fulls and incrementals. So don't try to adjust the retention points because you feel it miscalculated; instead, take a look at the example given in the previous blog article.

What also influences the parameters is the style parameter. This maps directly to the Veeam GUI. For example, "Backup Copy Job" (BCJ) should be quite easy to understand. It refers directly to a Backup Copy Job. Selecting it should also show you BCJ specific configurations.


For the other styles, called Incremental and Reverse, the settings map mainly to the advanced settings of a regular job, except for "Retention Points".


If you select Incremental without any active fulls or synthetic fulls, you get a forever incremental job. It also maps directly to the GUI, but more specifically to the advanced settings of a regular job. Granted, the checkboxes are under the buttons "Days" and "Months". Also, there is no global "enable" checkbox for synthetic or active fulls, but do understand that checking one of the boxes enables "Active" or "Synthetic". Finally, "Monthly" has preference over "Weekly". In the Veeam GUI, you can select only one; here you can enable checkboxes for both, but weekly will be ignored when you enable a month.


So if you want a weekly synthetic backup, it would look like shown below. I did not implement "Transform", and with good reason: to this day, I have received zero requests for it.


A monthly active backup would look like shown below. Important here is to know that this does not define GFS policies. Those can only be defined in a Backup Copy Job. I once had a guy that checked only January because he wanted to have a "yearly full backup for archiving". He was quite surprised by the result. Basically, you just told the engine that you want a yearly chain which only gets reset in January.


Finally, "Reverse" incremental can be found in the same place.


That leaves only one parameter left, and that is the growth simulation. It is a recent addition to RPS, and personally I think it is one of the coolest things added to it. Let me explain what it does and how it works. If you need to size for the following 3 years, and you know that you have a growth rate of 10% on a yearly basis, you can just add that to RPS. What it does is take your used data and grow it on a daily basis via: Future Used Data = Used Data * (1 + 10%) ^ (Day X/365). Thus on the last day of the simulation, after 3 years, it would put Future Used Data = Used Data * (1 + 10%) ^ (1095/365).

So let's imagine you have chosen reverse incremental, a 3 year simulation, 10% year-over-year data growth and 1000GB of data. For simplicity, let's disable compression. Calculating this manually would give you roughly 1000*(1+10%)^(1095/365) = 1000*(110%)^3 = 1000*(1.10)^3 = 1000*1.10*1.10*1.10 = 1331. Now check the full backup in the configured example. Also interesting is that the increment from 2 days ago (retention point 3) has a smaller "Future Used Data" set. It uses 3y - 2 days in the formula, thus Future Used Data = 1000*(1+10%)^(1093/365) =~ 1330.30.
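That arithmetic is easy to verify yourself:

```powershell
# Future Used Data = Used Data * (1 + growth)^(day/365)
$used   = 1000   # GB
$growth = 0.10   # 10% year over year

# last day of a 3 year simulation (day 1095)
[math]::Round($used * [math]::Pow(1 + $growth, 1095/365), 2)   # 1331

# two days earlier (day 1093)
[math]::Round($used * [math]::Pow(1 + $growth, 1093/365), 2)   # ~1330.3
```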


The differences are small in regular jobs, but once you go GFS, the growth calculation over time can have a dramatic impact which is hard to calculate with Excel.


With this, I have discussed all parameters. Some might ask: what about "Quick Presets", what do they do? Well, they are just quickly preconfigured scenarios that you can use. For example, if you want to have a monthly active backup, you can click all 12 select boxes, or you can just select the matching style + monthly active full to quickly configure this scenario.

If you made it this far, thanks for reading and enjoy playing with RPS.