Timo's Techie Corner ("Blogging about random geek stuff") by Timothy Dewin

Light Weight Reporting for B&R (2020-07-09)<div dir="ltr" style="text-align: left;" trbidi="on">
Currently I have 2 customers where existing solutions like Veeam Service Provider Console (ex-VAC) or Veeam Enterprise Manager do not really work. One of the customers has a highly secure environment with only 2 allowed protocols per environment: FTP and email (because they monitor everything going in and out of those servers). The second one has extremely low bandwidth (satellite links) where every byte counts. They resorted to parsing the daily emails, but those contain a lot of styling.<div>
<br /></div>
<div>
So unless you have a very specific use case like this, you probably don't want to use this project. VSPC probably makes more sense if you have a distributed multi-tenant environment, while Enterprise Manager makes more sense if you are a one-team shop.</div>
<div>
<br /></div>
<div>
So what is LWR? In reality it is just <a href="https://github.com/tdewin/powershell/tree/master/BR-LightWeightReporting" rel="nofollow">2 Powershell scripts you can find on github</a>: one that generates a (compressed) JSON report and one that reads those reports (potentially from multiple servers). The most important part is that LWR doesn't define how you transfer these files. They just need to get from one server to another via some mechanism and be dumped in a folder. For this blog post, I used FTP (sample scripts included), but they are just samples, not part of the definition. </div>
<div>
<br /></div>
<div>
So how does it work? Run the site-local script at each remote branch on a regular basis. Edit the script to change the unique id per site, or use the param statement to modify it on the fly. The result is a new file in a directory every time you run it:</div>
<div>
"C:\veeamlwr\99570f44-c050-11ea-b3de-0242ac13000x\99570f44-c050-11ea-b3de-0242ac13000x_1594283630.lwr"</div>
<div>
<br /></div>
<div>
The first directory is the repository, the second directory is the unique id, and the last part is the &lt;uniqueid&gt;_&lt;unixtime&gt;.lwr data file. This is a gzipped JSON file. You can use -zip $false to disable compression, but then you need to be consistent across all the sites and the main site; it's mainly there for troubleshooting or if you want to easily parse the result yourself. Every run creates a new version, which you must transfer to the main site. For this blog article, I made a task in the Windows scheduler that runs the sitelocal script and then ftpsync-sitelocal (the FTP sample) to transfer the data to the central site.</div>
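Because the .lwr file is by default nothing more than gzip-compressed JSON, you can peek inside one with standard tools. A minimal sketch; the JSON content below is made up purely for illustration (the real schema is whatever the sitelocal script emits):

```shell
# An .lwr file is plain gzip-compressed JSON (unless -zip $false was used),
# so standard tools can look inside. We fabricate one here for illustration;
# point this at a real <uniqueid>_<unixtime>.lwr file instead.
printf '{"server":"site-a","jobs":[]}' | gzip -c > /tmp/demo.lwr
report=$(gzip -dc /tmp/demo.lwr)
printf '%s\n' "$report"
```

The point is only that there is no proprietary container around the JSON, so any tooling that speaks gzip can parse the reports.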
<div>
<br /></div>
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgNgmIhwlAvKbPJcqhWwuSwJ-WU_ecbtHKfj_crtnbi-jozDbrX28C9RQsXI-PZ3yflGy3eR_43aoQaLjMBOIXMWCOB59dV1lURqIGs9eZjconRCIvgNrFiPAVr3jthoK7dspzIfN0s_-A/s1600/sitelocal.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="515" data-original-width="669" height="246" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgNgmIhwlAvKbPJcqhWwuSwJ-WU_ecbtHKfj_crtnbi-jozDbrX28C9RQsXI-PZ3yflGy3eR_43aoQaLjMBOIXMWCOB59dV1lURqIGs9eZjconRCIvgNrFiPAVr3jthoK7dspzIfN0s_-A/s320/sitelocal.png" width="320" /></a></div>
<div class="separator" style="clear: both; text-align: center;">
<br /></div>
<div>
On the central site, you can run the sync script manually just to get an update, and then run the central script to get the output.</div>
<div>
<br /></div>
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiwtYU1A6KpT5L2KenWPs7NrOMjYPBElYJhHeKMR4q4TwYHDeo9B-cNdDAddX0ySm5b2T3vpa3HK4cnky1WdjAdGwAISFq_mm2rhzVyhYw-flW2gVnogJX1qUHZwP5zbGp_aQV7rcLMJE4/s1600/prejob.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="530" data-original-width="992" height="170" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiwtYU1A6KpT5L2KenWPs7NrOMjYPBElYJhHeKMR4q4TwYHDeo9B-cNdDAddX0ySm5b2T3vpa3HK4cnky1WdjAdGwAISFq_mm2rhzVyhYw-flW2gVnogJX1qUHZwP5zbGp_aQV7rcLMJE4/s320/prejob.png" width="320" /></a></div>
<div class="separator" style="clear: both; text-align: center;">
<br /></div>
<div>
So here you see the sync + first run. You can see that the script downloaded multiple files, but central will only use the latest version (this is why the unique id + the timestamp in the filename is important). Cleanup at the central site is something you should do yourself, as I can imagine use cases where you want to keep those files for a longer history (for example, to see how license usage expanded over time).</div>
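The "latest version wins" selection is easy to reproduce if you ever parse the repository yourself: sort on the unix timestamp embedded in the filename. A sketch with fabricated file names following the uniqueid_unixtime.lwr convention (the central script already does this selection for you):

```shell
# Pick the newest .lwr file for one site by the unix timestamp in its name.
# File names are fabricated samples; adjust the repository path as needed.
site=99570f44-c050-11ea-b3de-0242ac13000x
mkdir -p /tmp/veeamlwr/$site
touch "/tmp/veeamlwr/$site/${site}_1594283630.lwr" \
      "/tmp/veeamlwr/$site/${site}_1594370030.lwr"
# Strip everything up to the last underscore, drop the extension, sort numerically
latest=$(ls /tmp/veeamlwr/$site/*.lwr | sed 's/.*_\([0-9]*\)\.lwr$/\1/' | sort -n | tail -1)
echo "$latest"
```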
<div>
<br /></div>
<div>
Let's fail a job, run a new upload from the site local, and resync the latest lwr to central</div>
<div>
<br /></div>
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEg0DCNTsIFdF3wUgG2aXzqOsGM8eLX7cX4fRXqk1OKANlb8KHAAquyeJ4gZ73meobqd07_CdVFAhLStrwjcPdYTMsYyzPycjSOHdfRM1ixnFMq5QkYydEDD-BazxFM21sccdmipwiF-i7o/s1600/failedjob.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="82" data-original-width="1038" height="25" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEg0DCNTsIFdF3wUgG2aXzqOsGM8eLX7cX4fRXqk1OKANlb8KHAAquyeJ4gZ73meobqd07_CdVFAhLStrwjcPdYTMsYyzPycjSOHdfRM1ixnFMq5QkYydEDD-BazxFM21sccdmipwiF-i7o/s320/failedjob.png" width="320" /></a></div>
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgarxqQIeyACCgHxylF61q2fevl72DTA0xktckLWYfGeJcoIsw4vK_405gaP5Nfdi11bDoIBm3qk9UhgdXqBkfoOC1YtfE5JylQe07YBNVj_AztHLkPJ8XRSMAXH_00gDKGlTpAGVKKJ8w/s1600/failedjobpost.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="529" data-original-width="997" height="169" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgarxqQIeyACCgHxylF61q2fevl72DTA0xktckLWYfGeJcoIsw4vK_405gaP5Nfdi11bDoIBm3qk9UhgdXqBkfoOC1YtfE5JylQe07YBNVj_AztHLkPJ8XRSMAXH_00gDKGlTpAGVKKJ8w/s320/failedjobpost.png" width="320" /></a></div>
<div>
<br /></div>
<div>
Now you can see there are 2 failed jobs, reflecting the latest status</div>
<div>
<br /></div>
<div>
If you want to verify your license usage, you can also check the licenses with the -mode parameter</div>
<div>
<br /></div>
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjq5Xn0VoiVYAn3Z_3VSYMdV46EF3CXm_w30j_g0nQKQvSPfN2Quz83MPz9D_LZfvkHaD6dbKhq3i8LMxzmWrV3rRhNuGCfcocbgYntKqL1Wo3M9V8Saxg2RV4cyaOLRBH5FLas_v-MPSU/s1600/licenseusage.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="523" data-original-width="987" height="169" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjq5Xn0VoiVYAn3Z_3VSYMdV46EF3CXm_w30j_g0nQKQvSPfN2Quz83MPz9D_LZfvkHaD6dbKhq3i8LMxzmWrV3rRhNuGCfcocbgYntKqL1Wo3M9V8Saxg2RV4cyaOLRBH5FLas_v-MPSU/s320/licenseusage.png" width="320" /></a></div>
<div>
<br /></div>
<div>
That's it. There is nothing more to show or to do. Again, you probably want to use alternatives like VSPC or Enterprise Manager, but in case both are not feasible due to network/security restrictions, feel free to use this and extend it to your liking!</div>
</div>
Livemount MySQL with Data Integration API (2020-07-03)<div dir="ltr" style="text-align: left;" trbidi="on">
New in v10 is that you can mount backed-up disks via iSCSI to any other machine. In the lab, I was recently playing with MySQL and wondered if I could live mount its backup to another server. It turns out you can. So what are the use cases? Well, you could test whether the databases are mountable without having to spin up a lab. You could allow your devs to access real production data without accessing the production servers. But of course, feel free to find other use cases 😏<br />
<br />
To get started, you need another machine with MySQL installed. So this is a plain Ubuntu server with ssh + MySQL installed as a target for the live mount. You can see that I'm missing my production martini database.<br />
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgPTY05URzSzjo8bLou2U6CLaAp0q9eehQIfHTBMRzE1ublg6nV3YjsKwu3_IJojSIQdHMd1lFL32BCKfcwadjCL7Ai3klEEU0lsM4eZS4WPZWkRBOp3dIWbpWtpRAdUpr7mALEzyOSKEk/s1600/showdatabasesbefore.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="529" data-original-width="1235" height="137" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgPTY05URzSzjo8bLou2U6CLaAp0q9eehQIfHTBMRzE1ublg6nV3YjsKwu3_IJojSIQdHMd1lFL32BCKfcwadjCL7Ai3klEEU0lsM4eZS4WPZWkRBOp3dIWbpWtpRAdUpr7mALEzyOSKEk/s320/showdatabasesbefore.png" width="320" /></a></div>
<br />
<div>
The next step is to publish the restore point that you backed up. You can find a <a href="https://helpcenter.veeam.com/docs/backup/powershell/publish-vbrbackupcontent.html?ver=100" rel="nofollow">sample in the documentation</a>.</div>
<br />
<div style="text-align: center;">
<textarea cols="80" rows="9">$targetserver = "172.17.193.194" #where you want to mount
$name = "mysql"
asnp veeampssnapin
$latestrp = Get-VBRRestorePoint -Name $name | Sort-Object -Property completiontimeutc -Descending | select -First 1
$iscsipublish = Publish-VBRBackupContent -RestorePoint $latestrp -AllowedIps $targetserver
Get-VBRPublishedBackupContentInfo -Session $iscsipublish
Get-NetIPAddress -AddressFamily IPv4 | select ipaddress #backupserveriplist
</textarea>
</div>
<br />
<div class="separator" style="clear: both; text-align: center;">
<br /></div>
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhLvDQ_FAdzx-Ie5L3sMvo5goKBM3zYxQ1nyswPoRJVMpAi2doFbVRlAx0KMDraqjUicIbu_4M6Nw8uqBvjNm5nQ8YR0q3NATp2Hi_z0rGhK-IoKpfJIFCRKepCiWcWQX7gpBOTXrpktuk/s1600/codemount2.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="441" data-original-width="1187" height="118" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhLvDQ_FAdzx-Ie5L3sMvo5goKBM3zYxQ1nyswPoRJVMpAi2doFbVRlAx0KMDraqjUicIbu_4M6Nw8uqBvjNm5nQ8YR0q3NATp2Hi_z0rGhK-IoKpfJIFCRKepCiWcWQX7gpBOTXrpktuk/s320/codemount2.png" width="320" /></a></div>
<br />
OK, with that done, we can go to the Linux machine. You do need to install the open-iscsi tools.<br />
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiU2Eu3Tp-B71FePL21TSWJGYmlEOan-MZJCxHP96n8x79PjgGARNcfi7xX7iteh9OiVNjgKCWYd-OmooMUGoIzJ_tkDtC_dETTmfgWW6OrpPLQ7gAdMcb1uI6cdASBst1qvUapwlpq4yM/s1600/openiscsi.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="426" data-original-width="673" height="202" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiU2Eu3Tp-B71FePL21TSWJGYmlEOan-MZJCxHP96n8x79PjgGARNcfi7xX7iteh9OiVNjgKCWYd-OmooMUGoIzJ_tkDtC_dETTmfgWW6OrpPLQ7gAdMcb1uI6cdASBst1qvUapwlpq4yM/s320/openiscsi.png" width="320" /></a></div>
<br />
Once that is done, you can mount the iSCSI volume. For this, you need two commands:<br />
<br />
<div style="text-align: center;">
<textarea cols="80" rows="6">sudo iscsiadm --mode discovery -t sendtargets --portal [backupserver]
sudo iscsiadm --mode node --targetname [targetfromfirstcmd] --portal [backupserver] --login
</textarea>
</div>
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEg_6NDfDHAIgPWMR2SN_wAY-myOpCkr9nD4WM9rp8fVcEKLRScyG-32H-U2IibVHnNtaBd8pTlzxGygzgED4hyphenhyphenK3r7128mPbgL20_EovYdxB2g-y4HFmthdCjFewD2SsG5bRKfDzMnMW4Q/s1600/discoverylogin.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="80" data-original-width="1206" height="21" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEg_6NDfDHAIgPWMR2SN_wAY-myOpCkr9nD4WM9rp8fVcEKLRScyG-32H-U2IibVHnNtaBd8pTlzxGygzgED4hyphenhyphenK3r7128mPbgL20_EovYdxB2g-y4HFmthdCjFewD2SsG5bRKfDzMnMW4Q/s320/discoverylogin.png" width="320" /></a></div>
<br />
<br />
Here is a screenshot of the discovery process (finding the volumes) and the login. By doing an fdisk -l before and after, you can see that /dev/sdb shows up. The most important part(ition) is of course /dev/sdb2. I also included a logout to show that it goes away again, but at this point you only need the login part.<br />
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEj4X1mD36q2LpHYFF7vcycZ81kl6-BMsR789bn-0cYIvsxuajBZ0HLvw_3VuWeIklc3vPdyfOH6JMsS1nru7TiHH89gh09OhQYD3fMRN1qXgk8Kq70loH6lezDhZO-Byl_qAUDyhrnaElc/s1600/diskshow.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="468" data-original-width="1189" height="125" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEj4X1mD36q2LpHYFF7vcycZ81kl6-BMsR789bn-0cYIvsxuajBZ0HLvw_3VuWeIklc3vPdyfOH6JMsS1nru7TiHH89gh09OhQYD3fMRN1qXgk8Kq70loH6lezDhZO-Byl_qAUDyhrnaElc/s320/diskshow.png" width="320" /></a></div>
<br />
Now let's mount /dev/sdb2. For this, we make a temporary directory under /mnt and mount /dev/sdb2<br />
<br />
<div style="text-align: center;">
<textarea cols="80" rows="2">sudo mkdir /mnt/mysqlrecovery
sudo mount /dev/sdb2 /mnt/mysqlrecovery
</textarea>
</div>
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiibWewm5lPqocXjw2k-YxmrdwBKGxPtIYKns7xI-s_rAK706SlJAuiNtPNBvLEfNcb0M66vJvEIDGZa2Yff6HaphmP8ryq7qGgGqgrxt8k__Vsrcqf80QGryqhF2k2q7W19-w1t5KdMqw/s1600/mountdisk.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="115" data-original-width="1132" height="32" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiibWewm5lPqocXjw2k-YxmrdwBKGxPtIYKns7xI-s_rAK706SlJAuiNtPNBvLEfNcb0M66vJvEIDGZa2Yff6HaphmP8ryq7qGgGqgrxt8k__Vsrcqf80QGryqhF2k2q7W19-w1t5KdMqw/s320/mountdisk.png" width="320" /></a></div>
<br />
<div>
At this point you want to stop the MySQL service, mount the MySQL data directory to the correct location, and restart the DB. I did have to correct the permissions after the mount so that the directory and all its files are owned by mysql.</div>
<div>
<br /></div>
<div style="text-align: center;">
<textarea cols="80" rows="6">sudo systemctl stop mysql
sudo mv /var/lib/mysql /var/lib/mysql.backup
sudo mkdir /var/lib/mysql
sudo mount --bind /mnt/mysqlrecovery/var/lib/mysql /var/lib/mysql
sudo chown -R mysql:mysql /var/lib/mysql
sudo systemctl start mysql
</textarea>
</div>
<div>
<br /></div>
<div>
<br /></div>
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjDZZE-WllItTrMBaLu0RRRSXDvRe36wwQ30oeBvXedezp0DROMQ4LVfNJnVcPb0ZsjL5zKrm_9WOVQgNhlzmc0-g3uHgKQag34qgEkNQd0kyTjl3O4gAeIznuO1s26ovpTGflA_C3IS38/s1600/mountdatabase.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="528" data-original-width="918" height="184" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjDZZE-WllItTrMBaLu0RRRSXDvRe36wwQ30oeBvXedezp0DROMQ4LVfNJnVcPb0ZsjL5zKrm_9WOVQgNhlzmc0-g3uHgKQag34qgEkNQd0kyTjl3O4gAeIznuO1s26ovpTGflA_C3IS38/s320/mountdatabase.png" width="320" /></a></div>
<div>
<br /></div>
And there you have it, the martini database is back.<br />
<br />
Once you are done, you need to clean things up.<br />
<br />
<div style="text-align: center;">
<textarea cols="80" rows="6">sudo systemctl stop mysql
sudo umount /var/lib/mysql
sudo umount /mnt/mysqlrecovery
sudo iscsiadm --mode node --targetname [target] --portal [backupserver] --logout
</textarea></div>
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEilRYAlWuiy1Rb0O0ocaxO48A8zftxQPNtNm397GYVcFOq2fTYawrjlb5ebx5DVRBZH04f7h3pa3iMUxLWOaD2M8KQ-YEWdtzSzMZzrKVQNwh0f9axtrapzA7IBZ2949zer-kIvHZuPZos/s1600/cleanup.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="132" data-original-width="1300" height="32" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEilRYAlWuiy1Rb0O0ocaxO48A8zftxQPNtNm397GYVcFOq2fTYawrjlb5ebx5DVRBZH04f7h3pa3iMUxLWOaD2M8KQ-YEWdtzSzMZzrKVQNwh0f9axtrapzA7IBZ2949zer-kIvHZuPZos/s320/cleanup.png" width="320" /></a></div>
<br />
At this point, you can unpublish the session.<br />
<br />
<div style="text-align: center;">
<textarea cols="80" rows="1">Unpublish-VBRBackupContent -Session $iscsipublish
</textarea>
</div>
<br />
And voila, everything is cleaned up.<br />
<br />
You can also automate the whole process, check the complete code on <a href="https://github.com/VeeamHub/powershell/tree/master/BR-DataIntegrationAPI-Mysqldevmount">VeeamHUB</a></div>
Make your own tags in Veeam Backup & Replication (2020-05-19)<div dir="ltr" style="text-align: left;" trbidi="on">
A very common question is how to pin a fixed proxy pool to certain jobs. If you are talking about proxies towards repositories, you can use proxy affinity (a cool feature that not a lot of people seem to know about).<br />
<br />
But what if you want to do some more custom things? This has been playing in my head for a long time, and it turns out it is fairly simple to set up. First things first: add a tag to the description in the form [location:mylocation]. Why the location specifier? Because you might want to add different tags (e.g. job type, production level, etc.) if you want to use the grouping mechanism for other purposes.<br />
<br />
Here are some examples in my lab<br />
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEggux9iejzxLHIqLNL-5Ton4du3pIYan03maur8Ij18XU9n-H87ejpA8cIH-oPjajg1A5FU4bM8P1zv3mgexE9kB6Q0ia3s5EKiPxDp6JVdb3vSQvEVenadPEQSTKzYZ3Eh7S3vkG7US8o/s1600/2020-05-19+19_00_27.png" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="547" data-original-width="759" height="230" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEggux9iejzxLHIqLNL-5Ton4du3pIYan03maur8Ij18XU9n-H87ejpA8cIH-oPjajg1A5FU4bM8P1zv3mgexE9kB6Q0ia3s5EKiPxDp6JVdb3vSQvEVenadPEQSTKzYZ3Eh7S3vkG7US8o/s320/2020-05-19+19_00_27.png" width="320" /></a></div>
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgqtlMaLUUGGdZSl-r9FSgqn4YL2gt6jk8H9vb6N3vF1FbwLd90bxYdbl8GoZJ1F9nGmV1UShqD_xsmkTB-CoTy8Kbfq29-jvPE5SD2IAqUR8_bKbxYNFP-jzgM0rEr6wyiGn0bfW6rYv8/s1600/2020-05-19+19_01_09.png" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="545" data-original-width="761" height="229" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgqtlMaLUUGGdZSl-r9FSgqn4YL2gt6jk8H9vb6N3vF1FbwLd90bxYdbl8GoZJ1F9nGmV1UShqD_xsmkTB-CoTy8Kbfq29-jvPE5SD2IAqUR8_bKbxYNFP-jzgM0rEr6wyiGn0bfW6rYv8/s320/2020-05-19+19_01_09.png" width="320" /></a></div>
<br />
Then you can run the following <a href="https://gist.github.com/tdewin/b7a21d5e7a20c984e69dadf07c32dda1">script/gist</a>. The script could be a bit more generic; that's why it made more sense to just add it as a gist. The output is fairly simple.<br />
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhc0AOc7NMqT3HiZUNOZ29AVYYXAhNJOWbY8gpe9fNDlUZUkG_Yq5-Vg7HaptbiBoD3s_Gul4kqfVp0PH5ZHrcU7VgSELwIwl5eRlzIosZ-Hfj6CcKHQOKUxaGCV26GdsxR3Gv9RZ68ziE/s1600/2020-05-19+19_01_38.png" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="736" data-original-width="640" height="320" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhc0AOc7NMqT3HiZUNOZ29AVYYXAhNJOWbY8gpe9fNDlUZUkG_Yq5-Vg7HaptbiBoD3s_Gul4kqfVp0PH5ZHrcU7VgSELwIwl5eRlzIosZ-Hfj6CcKHQOKUxaGCV26GdsxR3Gv9RZ68ziE/s320/2020-05-19+19_01_38.png" width="278" /></a></div>
<br />
Here is where PowerShell's Group-Object really shines. It is really easy to group objects together and then just loop over each group without having to re-filter over and over again. The regex should be fairly simple to adapt if you want to do another grouping.<br />
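To make the tag idea concrete, here is a minimal sketch of just the extraction step, with a made-up description string (the gist does the equivalent with a regex match in PowerShell before handing the results to Group-Object):

```shell
# Pull the [location:...] value out of a job description with a regex.
# The description is fabricated; the real script reads it from the job object.
desc='Nightly VM backup [location:brussels]'
loc=$(printf '%s' "$desc" | sed -E 's/.*\[location:([^]]+)\].*/\1/')
echo "$loc"
```

Swapping `location` for another specifier (e.g. `jobtype`) is all it takes to group on a different tag.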
<br />
Hope it is an inspiration to other people</div>
NAS Multiplex (2020-04-10)<div dir="ltr" style="text-align: left;" trbidi="on">
If you ever tried to add multiple shares from a root share in the new Veeam B&R v10, you might have noticed that you have to map them one by one. During the beta, I already experimented with some sample code to add those shares in bulk. So for the GA, I decided to build a GUI around it.<br />
<br />
First of all you can find the script on <a href="https://github.com/veeamhub/powershell/tree/master/BR-NASMultiplex">https://github.com/veeamhub/powershell/tree/master/BR-NASMultiplex</a> .<br />
<br />
After that, it is just a matter of running the script by right-clicking it and choosing "Run with PowerShell". You might need to change your execution policy if you downloaded it. Also make sure to run this on the backup server, as it loads the VeeamPSSnapin in the background.
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEg1nfQ6ZWToBGfkATDibWggoMLnx2L0qHXqv4NyoJFa8RrLy0GLnXVMIXOvtO8oyHXV1QXMtLi6WneR1_K2vr3Pvkqd4HrdQi6OI1nTck-fP28G36jCDhj8iUdCJQlklFrfWn12LQN39OM/s1600/runas.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="411" data-original-width="598" height="219" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEg1nfQ6ZWToBGfkATDibWggoMLnx2L0qHXqv4NyoJFa8RrLy0GLnXVMIXOvtO8oyHXV1QXMtLi6WneR1_K2vr3Pvkqd4HrdQi6OI1nTck-fP28G36jCDhj8iUdCJQlklFrfWn12LQN39OM/s320/runas.png" width="320" /></a></div>
<br />
So let's look at an example. In my lab, there is a server with 4 shares: share1, share2, share3 and share4. Before running the script, two are mapped and added to one job.
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhu4Kth6m_yyIGHu5DfICo6nn_X_d0ON646QcZyScdUVk66FquuUVbw7ONOGaC2b3NigCsnP792CnsQZSus-nRRPFFfz089nV2972_6TMIsDTKcEWs9D4oVXYr_FjVACF53oxXdfOgqgUI/s1600/pre.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="442" data-original-width="912" height="155" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhu4Kth6m_yyIGHu5DfICo6nn_X_d0ON646QcZyScdUVk66FquuUVbw7ONOGaC2b3NigCsnP792CnsQZSus-nRRPFFfz089nV2972_6TMIsDTKcEWs9D4oVXYr_FjVACF53oxXdfOgqgUI/s320/pre.png" width="320" /></a></div>
<br />
Now when you start NASMultiplex, you can select that job, and select one of the shares as a template or a sample for the newly created shares. Basically the new shares will be added with all the settings of the sample share.<br />
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgGxMfMQTW-Y924Y2tc0r0yDZGs85mUHlV3z7FkYrqILQPLBVqGwxCpWxoiETxavXdsPeUQ8Zj6AfYnJSs0w-7dqAAF8xeCys2Obs-10n5ZbXSjkA8SZ1f6bykBoInAJ5ZZ_M0eKj6KJ0c/s1600/nasmultiplex.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="462" data-original-width="809" height="182" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgGxMfMQTW-Y924Y2tc0r0yDZGs85mUHlV3z7FkYrqILQPLBVqGwxCpWxoiETxavXdsPeUQ8Zj6AfYnJSs0w-7dqAAF8xeCys2Obs-10n5ZbXSjkA8SZ1f6bykBoInAJ5ZZ_M0eKj6KJ0c/s320/nasmultiplex.png" width="320" /></a></div>
<br />
"Base" is basically a prefix field. In the big field below, you can see that per line, a share can be added (e.g share3 and share4). The concats base+share to form the full path. It does just concatenate the strings so you could either choose to have base ending in "\" or if you don't you must add the separator in front of every share.<br />
<br />
Finally there is "Altpath". If you're sample share has storage snapshots enabled, it show you the pattern it use to fill in the storage alternate path for the new share. {0} will be replace with base and {1} will be replaced with the share name. So for this example, share 3 would become \\172.18.87.231\share3\.snapshot . If altpath is disabled, than it should mean that your sample share uses VSS snapshots or just direct copy<br />
<br />
When you are ready to go, click add shares and the script will run in the background until a popup appears<br />
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhNQQr9EbBRtArjHitPkP3NCQ5hgr6m3Jb4hR_PDRs-AtBxyR_N1GRr9RPh-ktOxZsOTTm6-5Rlnx9Hgkx2Mt9WTi9M7g4Qp1ai3xSq42s8vzFhDgbAHw7E8rCz6_JW_qyBDbFaTDHui1o/s1600/gui.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="462" data-original-width="809" height="182" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhNQQr9EbBRtArjHitPkP3NCQ5hgr6m3Jb4hR_PDRs-AtBxyR_N1GRr9RPh-ktOxZsOTTm6-5Rlnx9Hgkx2Mt9WTi9M7g4Qp1ai3xSq42s8vzFhDgbAHw7E8rCz6_JW_qyBDbFaTDHui1o/s320/gui.png" width="320" /></a></div>
<br />
At that point, the shares should be added to the system and to the job.
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjxoldvIuwUhOI3xpiA-rJe9fHetHDTdN1awH65G3Y27xloeP91MT19QAUb4o39oGApe1RtQj3CtyFu-aqCnBmnOFz3_RhTB1wHxOgBYHQQm8gXE54HE8WgpDE3z21rNLUyUgVgTO3rS4w/s1600/jobadded.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="549" data-original-width="762" height="230" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjxoldvIuwUhOI3xpiA-rJe9fHetHDTdN1awH65G3Y27xloeP91MT19QAUb4o39oGApe1RtQj3CtyFu-aqCnBmnOFz3_RhTB1wHxOgBYHQQm8gXE54HE8WgpDE3z21rNLUyUgVgTO3rS4w/s320/jobadded.png" width="320" /></a></div>
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEifZrO7pI6DqBSWG8Ah4Bz2Hju-cImkPZgggC7ayj-7EjAFXSj92eEOc1sxQLSbPAe1pbFahqTFSOwr3qFKiXXnl0XueXir-PqSZ9I04MprKe-tK7Kab2Tc_D1pFYo3-OVhbsI4vExDhq4/s1600/sharesadded.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="443" data-original-width="934" height="151" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEifZrO7pI6DqBSWG8Ah4Bz2Hju-cImkPZgggC7ayj-7EjAFXSj92eEOc1sxQLSbPAe1pbFahqTFSOwr3qFKiXXnl0XueXir-PqSZ9I04MprKe-tK7Kab2Tc_D1pFYo3-OVhbsI4vExDhq4/s320/sharesadded.png" width="320" /></a></div>
<br />
Final notes: this has only been tested with SMB, so with NFS it might not work, but it should be pretty easy to get it fixed if it doesn't. It was also built in one evening, so it would be great to hear your feedback on what works and what definitely doesn't.</div>
Compiling martini-cli from source (2019-06-12)<div dir="ltr" style="text-align: left;" trbidi="on">
Just a small post for those wanting to compile martini-cli from source (for example, if you want to run the CLI on Mac or Windows).<br />
<br />
First you need to install the Go compiler. It depends a bit on the OS but should be straightforward in most cases: <a href="https://golang.org/dl/">https://golang.org/dl/</a>. On Ubuntu, the golang package is in the repository, so you can install it via the package manager<br />
<br />
<blockquote class="tr_bq">
sudo apt-get install golang</blockquote>
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEi8f0FoIIcirB0_ZVyd6CtCZJMoFFIAF9gJS5hbAZFMpaJClCoUpqmOicxtJ4eCVIz9bHZX_DMQty4_nXq4f2iWP_OVRdj6Br3fM9IBBgMuXUHDgCusDZkNxSUd_mHATOri_Jqfg9qVDmg/s1600/a01golang.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="600" data-original-width="800" height="240" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEi8f0FoIIcirB0_ZVyd6CtCZJMoFFIAF9gJS5hbAZFMpaJClCoUpqmOicxtJ4eCVIz9bHZX_DMQty4_nXq4f2iWP_OVRdj6Br3fM9IBBgMuXUHDgCusDZkNxSUd_mHATOri_Jqfg9qVDmg/s320/a01golang.png" width="320" /></a></div>
<br />
Then once the go compiler is installed, you can just download the code and compile it with the go command<br />
<blockquote class="tr_bq">
go get -d github.com/tdewin/martini-cli<br />go install github.com/tdewin/martini-cli</blockquote>
<div>
<br /></div>
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEj06lgMyRWo2Us9zrTyrUYK5aV9xlTsOZsb9mu67wJMTYWkz1iEZty8aaGFqaLHaKbnjmeXIALBf7Z79D-q6d1r7A9j2HgG-Q7ifSiGH5aQRmu0R9mJKQgcCMOHPqsi59dFnpiJmZSmXqk/s1600/a03getinstall.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="600" data-original-width="800" height="240" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEj06lgMyRWo2Us9zrTyrUYK5aV9xlTsOZsb9mu67wJMTYWkz1iEZty8aaGFqaLHaKbnjmeXIALBf7Z79D-q6d1r7A9j2HgG-Q7ifSiGH5aQRmu0R9mJKQgcCMOHPqsi59dFnpiJmZSmXqk/s320/a03getinstall.png" width="320" /></a></div>
<br />
And that's it! You should now have a binary in go/bin called martini-cli. You can find and edit the source in go/src and recompile with the go install command. Happy hacking!</div>
Installing Project Martini (2019-06-12)<div dir="ltr" style="text-align: left;" trbidi="on">
At VeeamON, my colleague Niels and I presented Project Martini, a kind of manager of managers for Veeam Backup for Office 365. Today I'm writing a small post for those who want to go ahead and install the solution.<div>
<br /></div>
<div>
The first thing you need is a Linux server. It could be virtual or physical. We strongly recommend Ubuntu, as this is what we used in AWS, and the screenshots in this blog post are from an on-premises Ubuntu installation (18.04 LTS).</div>
<div>
<br /></div>
<div>
Once you have installed Ubuntu, you can download the martini-cli. I'll write another small blog post on how to compile it from source, but for this post you can just download the binary. So let's first make sure we have the right tools to download and unzip it</div>
<div>
<br /></div>
<blockquote class="tr_bq">
sudo apt-get update<br />sudo apt-get install unzip wget</blockquote>
<div>
<br /><div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgEXIyC_T4PBHdoKTbGXHLwxrWXHArgJxnlicHFheYtRlLbREhyphenhyphenaLBlBuIAPviFM-vVbqsQgYPHUTGgjQo9kVEHL0Yuk7pGDkLXU7VUESwcqo0lIDxTQXdM_ZX4Cv4FK5VH8bMDE7B8NPI/s1600/02download.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="599" data-original-width="800" height="239" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgEXIyC_T4PBHdoKTbGXHLwxrWXHArgJxnlicHFheYtRlLbREhyphenhyphenaLBlBuIAPviFM-vVbqsQgYPHUTGgjQo9kVEHL0Yuk7pGDkLXU7VUESwcqo0lIDxTQXdM_ZX4Cv4FK5VH8bMDE7B8NPI/s320/02download.png" width="320" /></a></div>
</div>
<div>
<br /></div>
<div>
Now you can go ahead and install the binary</div>
<div>
<br /></div>
<blockquote class="tr_bq">
wget https://dewin.me/martini/martini-cli.zip<br />sudo unzip martini-cli.zip -d /usr/bin<br />sudo chmod +x /usr/bin/martini-cli</blockquote>
<div>
<br /></div>
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEj7wbf_RGwTSsXY_pQrv8Avj2ccVebJwhtZYfBfRIwNxVt5xmMLEbsX1bBE5n-R0ZBfyCDbX7cOtbARZkN8yL8d3-c9tWaQfbqR6FXtlZHJ05-Y6x7Ylw0mfb5ncdZOdB7nfiOg2HiekIs/s1600/03download.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="600" data-original-width="800" height="240" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEj7wbf_RGwTSsXY_pQrv8Avj2ccVebJwhtZYfBfRIwNxVt5xmMLEbsX1bBE5n-R0ZBfyCDbX7cOtbARZkN8yL8d3-c9tWaQfbqR6FXtlZHJ05-Y6x7Ylw0mfb5ncdZOdB7nfiOg2HiekIs/s320/03download.png" width="320" /></a></div>
<div>
<br /></div>
<div>
Once you have the binary in place, you are ready to start the installation. If you are running Ubuntu, the setup should detect this and offer to install the prerequisites</div>
<div>
<br /></div>
<blockquote class="tr_bq">
sudo martini-cli setup</blockquote>
<div>
<br /></div>
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiLWX6r8HK-oggxHOEVBIwKqRpwjFgmKWSE9EKBfE7lg_bEZkKMnbhpj7R8pjRJaaQ2zknzK-IWhlUGrJ7T53gqIJV9vFOBw8gchdx0ukWSRKqwbboFqRKJZR7A-NmbgObrO_rJCRFdrUQ/s1600/04install.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="600" data-original-width="800" height="240" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiLWX6r8HK-oggxHOEVBIwKqRpwjFgmKWSE9EKBfE7lg_bEZkKMnbhpj7R8pjRJaaQ2zknzK-IWhlUGrJ7T53gqIJV9vFOBw8gchdx0ukWSRKqwbboFqRKJZR7A-NmbgObrO_rJCRFdrUQ/s320/04install.png" width="320" /></a></div>
<div>
<br /></div>
<div>
When asked whether you want to install the prerequisites, answer "y". The setup should download all of them and then prompt you to install terraform and martini-pfwd; answer "y" again to continue. Once this is finished, it will prompt you to create the MySQL database. MySQL itself is installed automatically, but you might want to secure it a little bit more.</div>
<div>
<br /></div>
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgp0QHwuT4Rsydkz60IliTc4JfxZAeQrVF4Ju989_KUnt717R1eeuhdPxAjMuB3h2wnsCI3peYGuawkYGMne_y0QOYvq5FREkd6zyY_NpZ_efqwPEkhguqPhI4XvmNOow3eCK6wBW7zK_c/s1600/06martini.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="600" data-original-width="800" height="240" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgp0QHwuT4Rsydkz60IliTc4JfxZAeQrVF4Ju989_KUnt717R1eeuhdPxAjMuB3h2wnsCI3peYGuawkYGMne_y0QOYvq5FREkd6zyY_NpZ_efqwPEkhguqPhI4XvmNOow3eCK6wBW7zK_c/s320/06martini.png" width="320" /></a></div>
<div>
<br /></div>
<div>
So start the MySQL prompt and create the database. The CLI gives you a very simple example to create it. Please make sure you remember the password.</div>
<div>
<br /></div>
<blockquote class="tr_bq">
mysql -u root -p</blockquote>
<div>
<br /></div>
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgpYQKLVYp53YKcSR0KOHulKT9lvDUNmeRnhJg0Pn3tH2GMFCP5Uh7SJVyRisqmhUWPNVslIJFfgwR3u4nGEwYotEc6zG-Jv_UyRxE3wDyySiDArnkz5uhyphenhyphenAYbqDd4fVNx3an3iI71l-cs/s1600/07mysql.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="600" data-original-width="800" height="240" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgpYQKLVYp53YKcSR0KOHulKT9lvDUNmeRnhJg0Pn3tH2GMFCP5Uh7SJVyRisqmhUWPNVslIJFfgwR3u4nGEwYotEc6zG-Jv_UyRxE3wDyySiDArnkz5uhyphenhyphenAYbqDd4fVNx3an3iI71l-cs/s320/07mysql.png" width="320" /></a></div>
<div>
<br /></div>
<div>
Finally, rerun the setup wizard, but this time don't install the prerequisites (as this is already done). This will download the latest martini-www release from GitHub and ask you for the database connection parameters</div>
<div>
<br /></div>
<blockquote class="tr_bq">
sudo martini-cli setup</blockquote>
<div>
<br /></div>
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjynKEt5SdQQ76Sbvx3XnKsP4jHLosgSLt1u1rjtqtiwlzqMAwTDRoFX9EDvMwvUuIk7zipqJdadrPE-cBcYXRLGr-GutPGCOfMgOOr1aukNRwkeX2blaSBkX5vs6iYAMt_IZ0d_e8ENBs/s1600/08noprereq.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="600" data-original-width="800" height="240" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjynKEt5SdQQ76Sbvx3XnKsP4jHLosgSLt1u1rjtqtiwlzqMAwTDRoFX9EDvMwvUuIk7zipqJdadrPE-cBcYXRLGr-GutPGCOfMgOOr1aukNRwkeX2blaSBkX5vs6iYAMt_IZ0d_e8ENBs/s320/08noprereq.png" width="320" /></a></div>
<div>
<br /></div>
<div>
Once you have gone through the whole process, Martini is installed. Make sure to remember the admin password you entered during the installation; you need it to authenticate the CLI or the web GUI. You might also need to chown the config file if you ran the setup with sudo. In my case my user is timothy, but that might be different on your system. Once everything is done, you can test the connection with martini-cli</div>
<div>
<br /></div>
<blockquote class="tr_bq">
sudo chown timothy:timothy .martiniconfig</blockquote>
<blockquote class="tr_bq">
martini-cli --server http://localhost/api connect </blockquote>
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEg04GAIuui9vIR3kbH5xZ5pzdS-0XhFNmNPKl604IAuRVZTcUp-T4KtqGwE294LOkRaqjY4hoP7pN1hhNX44N_DG5Pi61-lO5XSYTKlLG5sHSyiOjkFEr9MVlUsCgcgHUVc8gFU0CtViC4/s1600/10chmodandconnect.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="600" data-original-width="800" height="240" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEg04GAIuui9vIR3kbH5xZ5pzdS-0XhFNmNPKl604IAuRVZTcUp-T4KtqGwE294LOkRaqjY4hoP7pN1hhNX44N_DG5Pi61-lO5XSYTKlLG5sHSyiOjkFEr9MVlUsCgcgHUVc8gFU0CtViC4/s320/10chmodandconnect.png" width="320" /></a></div>
<div class="separator" style="clear: both; text-align: center;">
<br /></div>
You can now run the CLI, for example martini-cli tenant list. Since there are no tenants yet, you should just see a lot of # signs.<br />
<br />
If you browse to the IP of the server, you should now also be able to log in with admin and the password. In my screenshots I used port 999 to map to an internal server; you can ignore that and use the regular port 80 if you have direct access to your machine<br />
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiEeycL2O6qr8MQopVJRTaYlXSkdIxn3FC298TLJBz_vjTctwKkMzjtHLkKpjdEZKrL0k9o8j6wdwepVfClKTGk22iynfjuxBMBP-7iBBAYH3qR1hezeO1E-g67eif1rmAsKtbvvI2fRUA/s1600/11connect.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="668" data-original-width="1005" height="212" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiEeycL2O6qr8MQopVJRTaYlXSkdIxn3FC298TLJBz_vjTctwKkMzjtHLkKpjdEZKrL0k9o8j6wdwepVfClKTGk22iynfjuxBMBP-7iBBAYH3qR1hezeO1E-g67eif1rmAsKtbvvI2fRUA/s320/11connect.png" width="320" /></a></div>
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEg4rnQ8nITJPyv8ABgr-pIqTorQghBtIg0P65TxR1073msm2skn7laySC_h-HIDHGxMkistSK2wdArWklgCvuE8g3FMzswgO6WlmQ7PAe-F_6qmAm4bT3_NxrIwYeWXx5Ydux3QdoqL7Gw/s1600/11login.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="668" data-original-width="1005" height="212" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEg4rnQ8nITJPyv8ABgr-pIqTorQghBtIg0P65TxR1073msm2skn7laySC_h-HIDHGxMkistSK2wdArWklgCvuE8g3FMzswgO6WlmQ7PAe-F_6qmAm4bT3_NxrIwYeWXx5Ydux3QdoqL7Gw/s320/11login.png" width="320" /></a></div>
<br /></div>
<h2 style="text-align: left;">
Get your offline RPS version (2019-01-30)</h2>
<div dir="ltr" style="text-align: left;" trbidi="on">
Some people have asked me in the past whether there is a way to get RPS offline or to export its results. In previous versions I tried to add canvas rendering (basically generating a PNG) and generating a URL that you can share with your colleagues. However, for a lot of people this does not seem to be enough. Enter RPS-offline...<br />
<br />
RPS-offline can be downloaded <a href="http://rps.dewin.me/rps-offline.zip">here</a>. The source code itself is on <a href="https://github.com/tdewin/rps-offline">github</a>. If you are on Mac or Linux, you should be able to compile it with Go(lang). Once you have downloaded it, my recommendation is to extract the tool to c:\rps-offline. This is not a strict requirement, but you will see why later<br />
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiN24hGe08UgBJGE8sEClboeMvfv2G656U9jKQh9B4Csp0Iq7_L70AF9fGhZzxBs8KGXNmNbFJITkk8y85kUTp0hFySqPWacSn9ScLqgOf_lBkrjoZpuyoKCJpQxPmYLSb4icPHXmLHGkc/s1600/extract.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="348" data-original-width="810" height="137" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiN24hGe08UgBJGE8sEClboeMvfv2G656U9jKQh9B4Csp0Iq7_L70AF9fGhZzxBs8KGXNmNbFJITkk8y85kUTp0hFySqPWacSn9ScLqgOf_lBkrjoZpuyoKCJpQxPmYLSb4icPHXmLHGkc/s320/extract.png" width="320" /></a></div>
<br />
The first time you run the exe (or the bat file), it will download a zip (latest version from github) and start a local server, redirecting your browser to it<br />
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiSpkagRBBSyPgL-RWHno-pG52LhNCeQDDEL21UGBDKAeTZ0UVrty1WUt4GQXlUygSAGdCFMT1Ps9nJpYtISbKzDjm_FPf-4NdIeK0xMgDhFvKiopv5FOfuC8__Bs8lE2BgVwBCWgPKQFI/s1600/download.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="749" data-original-width="1383" height="173" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiSpkagRBBSyPgL-RWHno-pG52LhNCeQDDEL21UGBDKAeTZ0UVrty1WUt4GQXlUygSAGdCFMT1Ps9nJpYtISbKzDjm_FPf-4NdIeK0xMgDhFvKiopv5FOfuC8__Bs8lE2BgVwBCWgPKQFI/s320/download.png" width="320" /></a></div>
<br />
The next time the binary runs, it will detect the zip and it should not require any network connection.<br />
<br />
But of course that's not all. When you have done a simulation, you can hit the c key (CSV) to export the data to a CSV file. You can also push the j key (JSON), which will export the data to a JSON file. Why JSON? It allows exporting more data fields in a predictable way, and a lot of scripting languages can easily import it. That's also why you can run a postscript after the JSON is generated<br />
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgA3wTmKd4Y3kOHRlPIicD0xbn5aPzcvOM_rQS2G4Y7ywfnUgUX7nAL_LF-VjTeaqjbYb4D_lalyQtZxDdtkKjD2bcULr2d6osUQc5uRfD4vbEHydcYGRHiDtIO8ddH-PGzHtEP1caj0Z0/s1600/csv.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="749" data-original-width="860" height="278" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgA3wTmKd4Y3kOHRlPIicD0xbn5aPzcvOM_rQS2G4Y7ywfnUgUX7nAL_LF-VjTeaqjbYb4D_lalyQtZxDdtkKjD2bcULr2d6osUQc5uRfD4vbEHydcYGRHiDtIO8ddH-PGzHtEP1caj0Z0/s320/csv.png" width="320" /></a></div>
<br />
And this is where the .bat file comes into play. It tells rps-offline to run a script called Word.ps1. I'm not really planning on extending this script; Word.ps1 is merely a sample that you can delete, change, modify, etc. It does show one of the possible things you could do, e.g. generate a Word document with PowerShell. Of course this will not work on Linux or Mac, but you are free to run whatever command you want. That's the beauty of not embedding all the code inside rps-offline itself. You will also see that the .bat and .ps1 scripts refer to the fixed path c:\rps-offline, so if you want to store the tool on another path, make sure to edit both files<br />
<br />
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhnF5UEdO6tUWiVaDtnmxAm5ocHMzQtT66hiFZbtrZXM6FwTTsuzS2VNefeRsyoSjy5bvRaF-lI5krUMlCXhddQZrUKPab6DCLI3N_qDSUVS3lIVjATTzaMoB9KMHS3U6PmuZ6AFCxQYN4/s1600/2019-01-30+13_18_06-rps-projects.docx+-+Word.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="622" data-original-width="688" height="289" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhnF5UEdO6tUWiVaDtnmxAm5ocHMzQtT66hiFZbtrZXM6FwTTsuzS2VNefeRsyoSjy5bvRaF-lI5krUMlCXhddQZrUKPab6DCLI3N_qDSUVS3lIVjATTzaMoB9KMHS3U6PmuZ6AFCxQYN4/s320/2019-01-30+13_18_06-rps-projects.docx+-+Word.png" width="320" /></a></div>
When you are done, push q (quit) to stop the program and the local web-service.<br />
<br />
This is of course a first version, but I'm curious whether people find it useful and whether people make other conversion scripts. I could imagine a scenario where you take the JSON output and import it into a DB as well. So please let me know your thoughts on Twitter @tdewin! Hope you enjoy it!</div>
<h2 style="text-align: left;">
Show me your moves, VBO365 (2018-09-25)</h2>
<div dir="ltr" style="text-align: left;" trbidi="on">
Maybe one of the biggest new hidden gems in VBO365 V2.0 is in the Powershell cmdlets: you can now move data from one repository to another. Why is that important? Well, retention is defined at the repository level. Imagine Alex. Alex is just a regular employee but recently got promoted to upper management. In terms of backups, that means Alex's emails become super important! So instead of 2 years, we now need to keep Alex's emails forever.<br />
<br />
In previous versions, you could have just excluded Alex from one job and included him in the appropriate job. However, that would mean downloading all the data again. In v2, there is a simple command to do this called Move-VBOEntityData.<br />
<br />
Before we can use this command, we first need to prepare some variables. First we get the source repository and the target repository, in my case repo2y and repoinf. Then we get Alex with Get-VBOEntityData.<br />
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEi9uIjIWPdb6-eJJsvEAxZxY-5ZloYUtaN9ZxUtVGtU1LUb9GkaflK41415irTJorIMJZGtR11DN3zSyMUpFRsITt82Kwi0WjRD7mAGeCAgukv6El9_qIA_Nn5KAH21HQggjDyyKoIRJgE/s1600/getalex.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="663" data-original-width="650" height="320" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEi9uIjIWPdb6-eJJsvEAxZxY-5ZloYUtaN9ZxUtVGtU1LUb9GkaflK41415irTJorIMJZGtR11DN3zSyMUpFRsITt82Kwi0WjRD7mAGeCAgukv6El9_qIA_Nn5KAH21HQggjDyyKoIRJgE/s320/getalex.png" width="313" /></a></div>
<br />
Now we can actually move Alex to a new job. Make sure the source and target jobs are not running before you move the data.<br />
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjT5PqZDsFUSAY-sXo-0998z8BRR6JXWlhOo9ZT-VqTgs-zbXdZZTGM0kStcMX9o59Mhmb___rfPEn9c1FXwwnqQbuUN31t7WoX4QrtQ5xHAUPsc_0hAipN5_18DELCg1stdOGKdMCcMdA/s1600/removefromsource.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="624" data-original-width="925" height="215" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjT5PqZDsFUSAY-sXo-0998z8BRR6JXWlhOo9ZT-VqTgs-zbXdZZTGM0kStcMX9o59Mhmb___rfPEn9c1FXwwnqQbuUN31t7WoX4QrtQ5xHAUPsc_0hAipN5_18DELCg1stdOGKdMCcMdA/s320/removefromsource.png" width="320" /></a></div>
<br />
Alex is in the job<br />
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEghrCKuTwuynzLUj3BZM1x3gISpfvYuSUZPG6L4dxOJb2YOYmvQ8ChBBHKShxQn8ZGm36V6UDS5ACDtofw5EIevDpsiA9hkjp2LY1sQnZP2pY1qGyDGNsviY3qtjlSLph9j-n-bgjT34Xg/s1600/gone.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="624" data-original-width="925" height="215" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEghrCKuTwuynzLUj3BZM1x3gISpfvYuSUZPG6L4dxOJb2YOYmvQ8ChBBHKShxQn8ZGm36V6UDS5ACDtofw5EIevDpsiA9hkjp2LY1sQnZP2pY1qGyDGNsviY3qtjlSLph9j-n-bgjT34Xg/s320/gone.png" width="320" /></a></div>
<br />
And now Alex is gone!<br />
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhy-CW3ciUDpkrEdMaCpk4IIP6XWEKx7ULxhLXiy7SnFJzvQpi-tZLEh__ouaCc_c8zhrLFuamktwBPdreKxoNsklvapPvHM6WpATpSTOMP-BZVnGBKrMS4UlvuQpiBH7dgQ2GzFVd-7As/s1600/newjob.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="624" data-original-width="925" height="215" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhy-CW3ciUDpkrEdMaCpk4IIP6XWEKx7ULxhLXiy7SnFJzvQpi-tZLEh__ouaCc_c8zhrLFuamktwBPdreKxoNsklvapPvHM6WpATpSTOMP-BZVnGBKrMS4UlvuQpiBH7dgQ2GzFVd-7As/s320/newjob.png" width="320" /></a></div>
<br />
A brand new VIP job for Alex<br />
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhsUwfQKn-Kj-oGRRkabimfPNjUQ8LdYsHr1AltiuU1f5-XykPYbLMjsCx4iXSzMNrOLdVD0f_natIu3OZ6P8IsEs5EMfopJvTZ2TFLSbl3_8gfAbt7E1-zALPPY1Zz4gKqN1pepQZksd0/s1600/addalex.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="624" data-original-width="925" height="215" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhsUwfQKn-Kj-oGRRkabimfPNjUQ8LdYsHr1AltiuU1f5-XykPYbLMjsCx4iXSzMNrOLdVD0f_natIu3OZ6P8IsEs5EMfopJvTZ2TFLSbl3_8gfAbt7E1-zALPPY1Zz4gKqN1pepQZksd0/s320/addalex.png" width="320" /></a></div>
<br />
Pointing to the new repository<br />
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhtup-ub580CcNeQf-Oz78YiqcSo0Ov5k0vVKudLFss8FlfFtOkpe3FqjUK3DZN5VkXj0DwIj6XAA8oneNu8K9c63d2bqyGKAqnVT6mY2nmBXJqmZ0mQIMhUG4uGx7OkQ5knSEoeZpUqiw/s1600/target+repo.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="624" data-original-width="925" height="215" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhtup-ub580CcNeQf-Oz78YiqcSo0Ov5k0vVKudLFss8FlfFtOkpe3FqjUK3DZN5VkXj0DwIj6XAA8oneNu8K9c63d2bqyGKAqnVT6mY2nmBXJqmZ0mQIMhUG4uGx7OkQ5knSEoeZpUqiw/s320/target+repo.png" width="320" /></a></div>
<br />
Don't run the new job!<br />
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjPyh9qNePzb2Rs06Z0k34PnUoG3eHlX6ZTmJ4Avf6jopKrv2pUiPoWOjBRlGMvHO5tVL6WVoQwBOdEd3-SWO_MCZO0a1MBZK90rx9rNx1ZJY2j_fQoJT60uqcqBr79KWu1FK7SxfWSosQ/s1600/dontrun.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="624" data-original-width="925" height="215" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjPyh9qNePzb2Rs06Z0k34PnUoG3eHlX6ZTmJ4Avf6jopKrv2pUiPoWOjBRlGMvHO5tVL6WVoQwBOdEd3-SWO_MCZO0a1MBZK90rx9rNx1ZJY2j_fQoJT60uqcqBr79KWu1FK7SxfWSosQ/s320/dontrun.png" width="320" /></a></div>
<br />
Let's now execute the move passing the source, target repository and Alex<br />
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEg-fkN-GwJTZpc1MnMY_I96oM3F6PEBA_F82Bq8AGvw1AUrQZnTvIfMf8pBFCbSjfK09g1Ph8O7IdGB1FKkc4WY-cgAcN5PqPjtX5Pqll-vrk4TeUWaM-bR_osMmtskjcn4Hf1QQdz1b1c/s1600/movedata.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="663" data-original-width="650" height="320" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEg-fkN-GwJTZpc1MnMY_I96oM3F6PEBA_F82Bq8AGvw1AUrQZnTvIfMf8pBFCbSjfK09g1Ph8O7IdGB1FKkc4WY-cgAcN5PqPjtX5Pqll-vrk4TeUWaM-bR_osMmtskjcn4Hf1QQdz1b1c/s320/movedata.png" width="313" /></a></div>
<br />
If you now start the job, you will notice that it processes the items but doesn't have to write anything to disk, because the data is already there. Basically, it checks that all the data is present but doesn't download it again<br />
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjeGnfTwwVimetIDgThs3Rd1uEbDHYe0daKfqxH14pV5B4NtoPZ4TLDwyERz5Ak3OEqRRGLA3HpdKQ2VEwyn8GvRFtbQptXpH8kgblAfJc55mB_srOMabHN4go_0B7gPGU0WqWOdXG5WDA/s1600/ran.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="624" data-original-width="925" height="215" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjeGnfTwwVimetIDgThs3Rd1uEbDHYe0daKfqxH14pV5B4NtoPZ4TLDwyERz5Ak3OEqRRGLA3HpdKQ2VEwyn8GvRFtbQptXpH8kgblAfJc55mB_srOMabHN4go_0B7gPGU0WqWOdXG5WDA/s320/ran.png" width="320" /></a></div>
<br />
And we can actually see in the logs themselves (default location "C:\ProgramData\Veeam\Backup365\Logs") that nothing is being downloaded<br />
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgq3T2byfY-kbeDUoeSWAr4mq5_GvaWetKuW3bZzsRRvCVBhRPF9iicYbF91aLH0lHeY2gNp4U28x9kPFMWoXaSTBNeYL8JP_eIzMhCzP0eExNoiimZs3DUHbXTXz396GXXNB_GVUuK4eY/s1600/zlog.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="577" data-original-width="1024" height="180" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgq3T2byfY-kbeDUoeSWAr4mq5_GvaWetKuW3bZzsRRvCVBhRPF9iicYbF91aLH0lHeY2gNp4U28x9kPFMWoXaSTBNeYL8JP_eIzMhCzP0eExNoiimZs3DUHbXTXz396GXXNB_GVUuK4eY/s320/zlog.png" width="320" /></a></div>
<br />
For those looking to remove data instead of moving it, notice there is also a <a href="https://helpcenter.veeam.com/docs/vbo365/powershell/remove-vboentitydata.html?ver=20">Remove-VBOEntityData</a> now. For those wanting the complete code (although pretty trivial), you can check out <a href="https://gist.github.com/tdewin/085c449f0c1bde6035de62e26b57b13c#file-move-vboentitydata-sample-ps1">this gist on github</a><br />
<br />
<br /></div>
<h2 style="text-align: left;">
vCoffee: Looking at the VAC REST API Integration (2018-02-23)</h2>
<div dir="ltr" style="text-align: left;" trbidi="on">
In this blog post, we will take a look at the VAC REST API integration. First of all, all the code specific to VAC can be found on <a href="https://github.com/VeeamHub/vCoffee/blob/master/app/backend/restIntegration-vac.js">VeeamHub</a> in app/backend/restIntegration-vac.js. You will see that this file is only about 100 lines long, but it still packs all the functionality to talk to the REST API. For this blog post, I'm using slightly altered code which can be found <a href="http://dewin.me/vcoffee/restIntegration-vac.html">here</a><br />
<br />
<br />
<h2 style="text-align: left;">
Step 1 : Understanding fetch (JavaScript Specific)</h2>
If you want, you can play with this code by installing NodeJS (in this demo v6.11). The demo depends on one module called fetch, which is automatically included in NativeScript (the framework used to build the app) but is not installed by default in NodeJS. So the first thing we need to do is install fetch and check that it works. Once NodeJS is installed, this can be done with "npm install node-fetch"<br />
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjVP3A1dZ43soW3WmQY9nMfMKvrLfJi376cvUwo_MQipXID1UaJ84nYvYez77Jrl2fbaM3HH-PAcanfrUH3AVG42PCDJa7dXUe37Hus6sGhuRpHU54-nE8dWXb0YOH38Uykr4gS6pzVvb0/s1600/01-install-fetch.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="713" data-original-width="1152" height="198" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjVP3A1dZ43soW3WmQY9nMfMKvrLfJi376cvUwo_MQipXID1UaJ84nYvYez77Jrl2fbaM3HH-PAcanfrUH3AVG42PCDJa7dXUe37Hus6sGhuRpHU54-nE8dWXb0YOH38Uykr4gS6pzVvb0/s320/01-install-fetch.png" width="320" /></a></div>
<br />
Once you install node-fetch, you can start node and execute the JS code to test the module. I have to say that the first time I saw the Fetch code, it was quite confusing to me. Fetch uses a "pretty new" JavaScript feature called a Promise. Promises are used when you want to execute some code asynchronously; when it's done, either the resolve code (everything went OK) or the reject code is run. This feels strange at first, but it means that Fetch doesn't block the main thread. It also means that if you try to fetch a URL that doesn't exist, at first it looks like nothing is happening, and only after the timeout has passed will your reject code be run. So be patient, and wait for the error if it looks like nothing happened<br />
<br />
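That behaviour can be shown with a minimal, self-contained sketch that doesn't need any network at all; slowDownload below is just a stand-in for fetch:

```javascript
// Minimal Promise sketch: the executor runs asynchronously, and later exactly
// one of the two callbacks passed to then() fires: resolve code or reject code.
function slowDownload(shouldFail) {
  return new Promise(function (resolve, reject) {
    setTimeout(function () {
      if (shouldFail) {
        reject(new Error("timeout, server did not answer"));
      } else {
        resolve("<html>body of the page</html>");
      }
    }, 100); // the main thread is NOT blocked during these 100 ms
  });
}

slowDownload(false).then(
  function (body) { console.log("ok:", body); },        // resolve code
  function (err) { console.log("fail:", err.message); } // reject code
);

slowDownload(true).then(
  function (body) { console.log("ok:", body); },
  function (err) { console.log("fail:", err.message); }
);
```

Note that node prints nothing for the first 100 ms: just like a fetch against a dead server, the result (or the error) only shows up once the asynchronous work finishes.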
In the app, the GUI basically passes a success function (for example, moving to the next screen) and a fail function (alerting the user that something went wrong). For this example, we will just print some messages to the console.<br />
<br />
What is also pretty strange at first is the statement ".then(function(res) { return res.text() })". This part creates a new Promise that succeeds if we can parse the body content, and if so, runs the next then clause. That seems trivial when you are parsing plain text, but if you try to parse JSON, for example, errors might occur while parsing. So this chains the promises: first you get the "Promise" that the webpage was downloaded, or the fail code is run; then you get the "Promise" that the body was parsed, or otherwise the fail code will be run.<br />
<br />
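The chaining can also be sketched offline; fakeFetch below mimics the text()/json() shape of a fetch response (an illustration of the mechanism, not the app's actual code):

```javascript
// Sketch of the chained promises behind fetch(...).then(res => res.json()).
// fakeFetch stands in for a real network call.
function fakeFetch(rawBody) {
  // first promise: "the page was downloaded"
  return Promise.resolve({
    text: function () { return Promise.resolve(rawBody); },
    // second promise: "the body was parsed". JSON.parse can throw, which
    // rejects this promise and sends the chain to the catch handler instead
    json: function () {
      return new Promise(function (resolve) { resolve(JSON.parse(rawBody)); });
    }
  });
}

fakeFetch('{"jobs": 3}')
  .then(function (res) { return res.json(); })
  .then(function (obj) { console.log("parsed jobs:", obj.jobs); })
  .catch(function (err) { console.log("fail:", err.message); });

fakeFetch("not json at all")
  .then(function (res) { return res.json(); })
  .then(function (obj) { console.log("parsed:", obj); })
  .catch(function (err) { console.log("parsing failed:", err.message); });
```

The first chain reaches the second then with a real object; the second chain skips it entirely and lands in the catch, exactly like a fetch whose body turned out not to be valid JSON.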
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhQ4wRRgtr8WcFXy7PhbnjRyXoTw0qEbLmHXhssAPEn_8JS-7x_0zZ8AQQXxKejfqXxF3biuP11T_wOPfq4ejqoyGAedHjYL_yZMetBeYIoo7viABz6R5xyibA5ZFzEWmPPMqa625h1r7s/s1600/02-short-hand-version.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="713" data-original-width="1152" height="198" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhQ4wRRgtr8WcFXy7PhbnjRyXoTw0qEbLmHXhssAPEn_8JS-7x_0zZ8AQQXxKejfqXxF3biuP11T_wOPfq4ejqoyGAedHjYL_yZMetBeYIoo7viABz6R5xyibA5ZFzEWmPPMqa625h1r7s/s320/02-short-hand-version.png" width="320" /></a></div>
<br />
In the app I use the shorthand <a href="https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Functions/Arrow_functions">arrow functions</a>, which are also quite "new" in JS. They let you create an anonymous function in a more compact way: "function(res) { return res.text() }" roughly corresponds to "res => res.text()"<br />
<br />
If this all sounds confusing, that's fine; focus mostly on what happens between the fetch call and the second then. The result is quite clear: we get "<a href="http://dewin.me/vcoffee/jobs.json">http://dewin.me/vcoffee/jobs.json</a>" and then print the body to the console.<br />
<br />
Now the nice thing about Fetch is that if we just alter the first then and change text() to json(), fetch will parse the JSON body into a JavaScript object. The second then no longer receives plain text but real objects, provided parsing was successful.<br />
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjfDtQNNQvIRblb32a47toWJsMRbHGB2NHH4AQyMtf2c90_xNWvEr81Ltx3PSBhO-8RQgGG9SI-4dSMeyxZoT2wgVtFdxvEwGwgjEaL8CPHhJEMCAyPSxmO4UuMsbzO0HU7sxpxMtrNO3Q/s1600/03-parse-json.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="713" data-original-width="1152" height="198" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjfDtQNNQvIRblb32a47toWJsMRbHGB2NHH4AQyMtf2c90_xNWvEr81Ltx3PSBhO-8RQgGG9SI-4dSMeyxZoT2wgVtFdxvEwGwgjEaL8CPHhJEMCAyPSxmO4UuMsbzO0HU7sxpxMtrNO3Q/s320/03-parse-json.png" width="320" /></a></div>
<br />
<br />
<h2 style="text-align: left;">
Step 2 : Logging into VAC, the real deal</h2>
OK, so let's look at the VAC REST API. To log in, the GUI passes an object with the URL, username and password. The goal is to log in and get a token. Once we have a token, we are basically logged in, and the next time we want to fire a request, we can just use the token instead of sending a username and password. The theory is defined in the <a href="https://helpcenter.veeam.com/docs/vac/rest/authentication.html">VAC REST API documentation</a>, but let's put this into practice<br />
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjfyje4NFjCIuoQ4YPr9Injw-bhlSQ6CL9NmjMnGDWDYfJqVEKGXsXvGSIiDbWD8zYTX-5igAV-3wpIolhfYyhLTWOaNKfvJAFFYVJDxo84QSgaEWKIzUkxJHaMAfMeUzQL51Sd1hfburg/s1600/04-get-token-line.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="800" data-original-width="1152" height="222" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjfyje4NFjCIuoQ4YPr9Injw-bhlSQ6CL9NmjMnGDWDYfJqVEKGXsXvGSIiDbWD8zYTX-5igAV-3wpIolhfYyhLTWOaNKfvJAFFYVJDxo84QSgaEWKIzUkxJHaMAfMeUzQL51Sd1hfburg/s320/04-get-token-line.png" width="320" /></a></div>
<br />
The maincontext object is something that is created in the GUI and then passed to the backend; here we are just creating it on the command line. In this code, there is a first fetch to "/v2/jobs" which you can ignore. It is basically probe code to check that the server really is a VAC server before trying to authenticate. More important is the second fetch.<br />
<br />
Here we do a fetch of apiurl+"/token". But instead of just doing a plain download of the page, we alter the request a bit. First of all, we need to use the "POST" method. Secondly, we add some headers to the request: Content-Type is set to "application/json" and "Authorization" is set to "Bearer", as specified by the API documentation.<br />
<br />
Finally, in the body of the request, we tell the API that we want to authenticate with a password (grant_type), and supply the username and password. This is supplied in URL-encoded format, and honestly, the way the code is written in this example is not that clean. It would have been safer to use something that takes a JavaScript object and serializes it to URL-encoded format.<br />
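The request could be sketched roughly like this. Note this is my own sketch, not the code from the screenshot: the function names and placeholder values are mine, and URLSearchParams is the cleaner object-to-URL-encoding approach suggested above.

```javascript
// Hedged sketch of the token request described above. Function names
// and placeholder values are my own, not the code from the screenshot.
function buildTokenBody(username, password) {
  // Serialize a plain object to URL-encoded form instead of
  // concatenating strings by hand.
  return new URLSearchParams({
    grant_type: "password",
    username: username,
    password: password,
  }).toString();
}

async function login(apiurl, maincontext) {
  const resp = await fetch(apiurl + "/token", {
    method: "POST",
    headers: {
      "Content-Type": "application/json",
      "Authorization": "Bearer",
    },
    body: buildTokenBody(maincontext.username, maincontext.password),
  });
  const json = await resp.json();
  console.log(json.access_token);
  maincontext.sessionid = json.access_token; // reused for later requests
  return json;
}
```

Nothing here is specific to the browser: the same code runs under Node.js 18+, which ships both fetch and URLSearchParams.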
<br />
If all went well, you should get some JSON back. Again we can parse the text and then extract the access_token. The code prints the access token to the console and sets it to maincontext.sessionid for later reuse.<br />
<br />
I don't want to go into too much detail, but your access_token expires after an hour. In the same reply, there is also a <a href="https://helpcenter.veeam.com/docs/vac/rest/authentication.html?ver=20">refresh_token</a> you can use to get a new access_token before it expires. I'm not going to cover this, but if you want your application to stay logged in, you would need some timed code that runs every, let's say, 45 minutes and renews your token. In that case, your grant_type is not password but refresh; this is also covered in the documentation. Finally, once you are done, you should also log out. Again, we will not cover this in this blog post.<br />
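If you do want to keep a session alive, the renewal could look roughly like this. This is a sketch only: the exact grant name and the maincontext property names are my assumptions, so verify them against the authentication documentation linked above.

```javascript
// Hedged sketch of renewing the access_token with the refresh_token.
// The grant name and the maincontext property names are assumptions;
// check them against the VAC REST API documentation for your version.
async function renewToken(apiurl, maincontext) {
  const resp = await fetch(apiurl + "/token", {
    method: "POST",
    headers: {
      "Content-Type": "application/json",
      "Authorization": "Bearer",
    },
    body: new URLSearchParams({
      grant_type: "refresh_token",
      refresh_token: maincontext.refreshtoken,
    }).toString(),
  });
  const json = await resp.json();
  maincontext.sessionid = json.access_token;
  maincontext.refreshtoken = json.refresh_token;
  return maincontext;
}

// Run it on a timer, e.g. every 45 minutes, well before the 1 hour expiry:
// setInterval(() => renewToken(apiurl, maincontext), 45 * 60 * 1000);
```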
<br />
<h2 style="text-align: left;">
Step 3 : Getting the jobs</h2>
<div>
Now that we are logged in, we can actually do some work. Every time we make a request, we have to specify that we are using JSON and that we are authenticating via the Bearer access token. Since we need to do this for every request, let's make a function that builds the headers object from the access_token, so we can reuse it over and over again. Basically we need to set the "Authorization" header to "Bearer "+access_token. If you are building this code in another language, note that there really has to be a space between the word "Bearer" and the access_token.</div>
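Such a helper is only a few lines (a minimal sketch; the function name is mine, the real code is in the screenshot below):

```javascript
// Minimal sketch of the header helper described above (the function
// name is my own). Note the space between "Bearer" and the token.
function authHeaders(accessToken) {
  return {
    "Content-Type": "application/json",
    "Authorization": "Bearer " + accessToken,
  };
}
```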
<div>
<br /></div>
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhYIi6kEVUKa0l121Z-zRQpnmIE4xEG1Xq40MURFvgpQPmiWUU1tsIA7O2aFTPIMGuOY3y-n-aynpBQ5hbMd124SHoAI_QWyiq8-fxtstgJk_laeUFFGa5nBAXo6Uu0saOpFtiJq3YvTkY/s1600/05-get-job.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="800" data-original-width="1152" height="222" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhYIi6kEVUKa0l121Z-zRQpnmIE4xEG1Xq40MURFvgpQPmiWUU1tsIA7O2aFTPIMGuOY3y-n-aynpBQ5hbMd124SHoAI_QWyiq8-fxtstgJk_laeUFFGa5nBAXo6Uu0saOpFtiJq3YvTkY/s320/05-get-job.png" width="320" /></a></div>
<div>
<br /></div>
<div>
<br /></div>
<div>
<br /></div>
<div>
Then, let's get a list of the jobs. Following the documentation, we should be able to get this by executing a <a href="https://helpcenter.veeam.com/docs/vac/rest/get_jobs.html?ver=20">"GET" to /v2/Jobs</a> (watch out for caps). However, since other types like <a href="https://helpcenter.veeam.com/docs/vac/rest/get_tenants.html?ver=20">"Tenants"</a> or <a href="https://helpcenter.veeam.com/docs/vac/rest/get_backup_servers.html?ver=20">"BackupServers"</a> are requested in exactly the same way, just with a different URI ending, we will make a generic "getList" function and then small wrapper functions that use it. </div>
<div>
<br /></div>
<div>
Here is where the VAC API in combination with JavaScript really shines, in my humble opinion. We get the page using the authentication header, we parse the JSON text into objects, and then we run the success callback. In success, we receive an array of jobs. We can select the first element with jobs[0] and use its properties to print the name and the id. For me this code is quite compact and extremely easy to read.</div>
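Putting that together, the generic fetcher could look roughly like this. Again a sketch with my own names, the real code is in the screenshot above:

```javascript
// Hedged sketch of the generic "getList" described above; function
// names are my own, the real code is shown in the screenshot.
function getList(apiurl, accessToken, resource, success) {
  // "Jobs", "Tenants" and "BackupServers" only differ in the URI ending
  fetch(apiurl + "/v2/" + resource, {
    headers: { "Authorization": "Bearer " + accessToken },
  })
    .then((resp) => resp.json())
    .then(success);
}

// thin wrappers per resource type
const getJobs = (apiurl, token, success) => getList(apiurl, token, "Jobs", success);
const getTenants = (apiurl, token, success) => getList(apiurl, token, "Tenants", success);

// usage:
// getJobs(apiurl, maincontext.sessionid, (jobs) =>
//   console.log(jobs[0].name, jobs[0].id));
```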
<div>
<br />
<br /></div>
<h2 style="text-align: left;">
Step 4 : Starting a job</h2>
<div>
The last step is to start the job. Be careful, this code will actually start the job! Following the API documentation, we need to do a "POST" to <a href="https://helpcenter.veeam.com/docs/vac/rest/start_job.html">/v2/Jobs/&lt;id of the job&gt;/action</a>. For example, in the previous screenshot, we can see that "Replication Job 1" has an id of 16, so if we want to start it, we need to do a POST to /v2/Jobs/16/action.</div>
<div>
<br /></div>
<div>
Now we also have to specify the request body (which is also JSON). To start the job, the documentation states we need to send '{"start":"null"}'. This scenario can be made generic for other actions like start, stop, disable, etc. by just replacing the word start. So let's build a more generic function called actionJob. We will use the jobaction parameter to modify the JSON text we are sending. Again, this is not the cleanest example.<br />
<br />
You should actually take a JS object and stringify it to JSON. In fact, I should update the code to something like "JSON.stringify({[action]:null})", but this is just to keep the example as simple as possible.</div>
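That cleaner variant could be sketched as follows (my own function names, not the code in the screenshot; the computed property name replaces the string editing). Careful: calling this with "start" really starts the job.

```javascript
// Hedged sketch of a generic actionJob using JSON.stringify with a
// computed property name; function names are my own.
function actionJob(apiurl, accessToken, jobId, jobaction, success) {
  fetch(apiurl + "/v2/Jobs/" + jobId + "/action", {
    method: "POST",
    headers: {
      "Content-Type": "application/json",
      "Authorization": "Bearer " + accessToken,
    },
    // {[jobaction]: null} becomes e.g. {"start":null}
    body: JSON.stringify({ [jobaction]: null }),
  })
    .then((resp) => resp.json())
    .then(success);
}

// usage: actionJob(apiurl, maincontext.sessionid, 16, "start", console.dir);
```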
<div>
<br /></div>
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEilDdJx495b1ZZG9XyCrxjE-dQ-FQ8wkxzFUHLtrwDr_RyJk1IVG341-HBgkteeOHifyimH69itN8VI_5N4lJwCxLyFLzCSNUkurP5CeehiKz3hG0LDHD7U9isx4tfd0pb74VZh2C4jj4o/s1600/06-job-action.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="800" data-original-width="1152" height="222" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEilDdJx495b1ZZG9XyCrxjE-dQ-FQ8wkxzFUHLtrwDr_RyJk1IVG341-HBgkteeOHifyimH69itN8VI_5N4lJwCxLyFLzCSNUkurP5CeehiKz3hG0LDHD7U9isx4tfd0pb74VZh2C4jj4o/s320/06-job-action.png" width="320" /></a></div>
<div>
<br /></div>
<div>
In the success function, we can now use the console.dir function to dump the content of the reply object. If the job started successfully, you will get a reply like "Action Started".</div>
<div>
<br />
<br /></div>
<h2 style="text-align: left;">
Finally: where to go from here</h2>
<div>
Well, first of all, the API documentation is really good. I'm pretty sure that by modifying this sample code, you can get a long way. I hope it also shows that there is no real rocket science involved, especially if you want to use the API to dump status data into another system. For example, I can imagine using this sample code to get the data out of VAC and automatically create tickets for failed jobs in your helpdesk system.</div>
</div>
Timothy Dewinhttp://www.blogger.com/profile/14126614276831882160noreply@blogger.com0tag:blogger.com,1999:blog-8345042294447404507.post-13928004325830344942018-02-15T17:30:00.000+01:002018-02-16T18:32:10.093+01:00vCoffee : Drink coffee and check your backup jobs from your smartphone<div dir="ltr" style="text-align: left;" trbidi="on">
For some months now I had the idea to make an app for Veeam Availability Console (VAC) and/or Veeam Backup & Replication (VBR). While getting a coffee in the morning, I noticed that people spend quite a lot of time queuing or waiting for the machine to deliver the coffee.<br />
<div>
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEju1X8WFOL2-7mI0VMGYp0ArjbWmD8v4RevUsHbzSegg8KGkS3VewfQLBtGX6wiELs__NL2LUZSHqvb3auFwF0VaKYaCa9fr7dvZBILDAxedUJNLx_ePaoqlJktZW12_6W48gbe-f_YGeg/s1600/vcoffee.jpg" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="1600" data-original-width="1200" height="320" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEju1X8WFOL2-7mI0VMGYp0ArjbWmD8v4RevUsHbzSegg8KGkS3VewfQLBtGX6wiELs__NL2LUZSHqvb3auFwF0VaKYaCa9fr7dvZBILDAxedUJNLx_ePaoqlJktZW12_6W48gbe-f_YGeg/s320/vcoffee.jpg" width="240" /></a></div>
<div>
<br />
<div>
<br /></div>
<div>
That's why I'm glad to introduce vCoffee today. It is a small app, currently available in alpha, which you need to install manually on your Android device. The app itself is rather simple: you log in, get an overview of all jobs and see their latest state. If required, you can click a job and start it directly from the app. As a bonus, you also get an "RPO" indicator, which tells you if the job started in the last 24h. So if the job was successful but didn't run for the last 5 days, this will also be indicated on the first screen.</div>
<div>
<br /></div>
<div>
What I'm also really excited about is that it covers both the VAC REST API and the VBR REST API. This means that as a partner, you can check all your customers or tenants from the app. As a customer, you are able to monitor your VBR server via the Enterprise Manager. Do note that the REST API is part of the Enterprise Plus licensing.</div>
<div>
<br /></div>
<div>
One thing that is really important: Android seems to be quite paranoid about security, which means that you cannot use self-signed certificates. For VAC, I don't think this is a big issue, but for Enterprise Manager, you might have used a self-signed certificate. That's why I would like to refer to my colleague Luca Dell'oca. He wrote an excellent article about <a href="https://www.virtualtothecore.com/en/use-lets-encrypt-free-certificates-in-windows-for-veeam-cloud-connect/">"Let's Encrypt"</a>. I used the article for both VAC and Enterprise Manager. For Enterprise Manager, if you have already installed it, there is an excellent article that explains <a href="https://helpcenter.veeam.com/backup/rest/ssl_encryption.html">how to replace the certificate</a>.</div>
<div>
<br /></div>
<div>
<a href="https://www.youtube.com/watch?v=aumdrz3Z_N4&t=79s">Here is a small demo of the app</a>. I can tell you that on my physical phone it works a lot faster; due to the Android emulation, it looks quite slow in the demo.</div>
<div>
<br /></div>
<div>
The code is <a href="https://github.com/VeeamHub/vCoffee">released under the MIT License on VeeamHub</a>, the Veeam community which gets contributions from Veeam employees but also from external consultants and Veeam enthusiasts. This means that everybody can contribute and reuse the code as they like. It also means that no responsibility will be taken and you cannot contact Veeam Support for help. Basically, this app was not developed by Veeam R&D and has not been checked by Veeam QA.<br />
<br />
In a follow-up article, I'm planning to discuss the VAC REST API, because I was amazed how simple it really is. This is because the app itself is JavaScript code, and the JSON support of the VAC REST API makes parsing the objects extremely simple.</div>
<div>
<br /></div>
<div>
Finally, you can download a <a href="http://dewin.me/vcoffee/vCoffee-debug.apk">debug build here</a> as long as I don't run out of bandwidth.</div>
</div>
</div>
</div>
Timothy Dewinhttp://www.blogger.com/profile/14126614276831882160noreply@blogger.com0tag:blogger.com,1999:blog-8345042294447404507.post-61313363117747821692017-11-23T13:09:00.000+01:002017-11-23T13:09:31.073+01:00RPS Workspace<div dir="ltr" style="text-align: left;" trbidi="on">
Many people have been using http://rps.dewin.me and honestly, it gives me great pleasure that people like it and use it so often. I tried to make the tool as straightforward as possible, but one thing people do not seem to understand is the "Work Space" line. So on a regular basis I get the question: what the hell is "Workspace" and how is it calculated?<br />
<br />
In the early days of RPS, it didn't have this Workspace line. However, during some discussions, some fellow SEs were concerned that there was no buffer space for:<br />
<br />
<ul style="text-align: left;">
<li>Occasionally running a manual full</li>
<li>Not filling the filesystem to 100%, because that is just not best practice</li>
<li>Space that is used during the backup process itself</li>
</ul>
<br />
<br />
So the first two, I hope, are pretty clear. The third one is not always clear. So imagine that you are running a forever incremental. You configured 3 points, and that is what you will get after the backup is done. However, during the backup, the first thing that happens is that an incremental point is created. Only after the incremental backup is done does the merge process happen. That means that during that "working period", you actually have 4 restore points on disk (1 full + 3 incrementals). Thus you need some extra space.<br />
<br />
That hopefully explains the why. Now the how. This one is a bit more complicated. The initial workspace calculation was pretty simple: reserve one additional full backup. While this is fine in smaller environments, we pretty soon came to the conclusion that if you have 200TB of "full data" (all fulls together), you probably do not need 200TB of workspace. Especially because typically there is not one humongous job that covers the complete environment. You have probably split the configuration into a couple of jobs, and those jobs are probably not all running at the exact same time.<br />
<br />
So the workspace uses a kind of bucket system where the first bucket has a higher rate than the last one. Once the first bucket is filled, it overflows into the next one. This means that the workspace does not grow linearly with the amount of used space.<br />
<br />
Here are the buckets themselves (in each bucket, the source data is first compressed and then multiplied by the factor):<br />
<ul style="text-align: left;">
<li>0-10 TB: factor 1.05</li>
<li>10-20 TB: factor 0.66</li>
<li>20-100 TB: factor 0.4</li>
<li>100-500 TB: factor 0.25</li>
<li>500 TB+: factor 0.10</li>
</ul>
<br />
Let me give you some examples. If you have 5TB of source data, that 5TB fits entirely in the first bucket, so the calculation is rather easy. With a compression factor of 50% (the default), you get:<br />
5TB x 50/100 x 1.05 =~ 2.6 TB Workspace<br />
<br />
If you have 50TB of source data however, it does not fit in the first bucket; the data has to be split over 3 buckets: the first 10TB in the first bucket, the next 10TB in the second bucket and the remaining 30TB in the third bucket. Thus the calculation would roughly be:<br />
10 TB x 50/100 x 1.05 + 10 TB x 50/100 x 0.66 + 30 TB x 50/100 x 0.4 =~ 5 + 3 + 6 = 14TB Workspace<br />
<br />
You can verify that here:<br />
<a href="http://rps.dewin.me/?m=1&s=51200&r=14&c=50&d=10&i=D&dgr=10&dgy=1&dg=0&e">http://rps.dewin.me/?m=1&s=51200&r=14&c=50&d=10&i=D&dgr=10&dgy=1&dg=0&e</a><br />
<br />
Finally, if you have a big customer (or you are one) with 500TB, you will see a split of 10, 10, 80, 400. Thus the calculation would be:<br />
10 TB x 50/100 x 1.05 + 10 TB x 50/100 x 0.66 + 80 TB x 50/100 x 0.4 + 400 TB x 50/100 x 0.25 =~ 5 + 3 + 16 + 50 = 74TB Workspace<br />
<br />
You can verify that here:<br />
<a href="http://rps.dewin.me/?m=1&s=512000&r=14&c=50&d=10&i=D&dgr=10&dgy=1&dg=0&e">http://rps.dewin.me/?m=1&s=512000&r=14&c=50&d=10&i=D&dgr=10&dgy=1&dg=0&e</a><br />
<br />
So instead of saying that with 500TB you need 250TB of workspace, it is drastically lowered to 74TB. And again, that makes sense: the environment will be split up into multiple jobs, those will not all be running at the same time, and you will probably not run an active full on all of them at the same time.<br />
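The overflow logic above fits in a few lines of JavaScript. This is a sketch with my own names, not the actual RPS source:

```javascript
// Sketch of the bucket overflow calculation (my own names, not the
// actual RPS source). Each bucket is [capacity in TB, factor].
const BUCKETS = [
  [10, 1.05],
  [10, 0.66],
  [80, 0.40],
  [400, 0.25],
  [Infinity, 0.10],
];

function workspaceTB(sourceTB, compressionPct) {
  let remaining = sourceTB;
  let workspace = 0;
  for (const [capacity, factor] of BUCKETS) {
    // fill this bucket, compress, apply its factor, then overflow
    const inBucket = Math.min(remaining, capacity);
    workspace += inBucket * (compressionPct / 100) * factor;
    remaining -= inBucket;
    if (remaining <= 0) break;
  }
  return workspace;
}

console.log(workspaceTB(5, 50));   // ~2.6 TB
console.log(workspaceTB(50, 50));  // ~14.5 TB
console.log(workspaceTB(500, 50)); // ~74.5 TB
```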
<br />
For those who want to play with it and see those buckets in action, I created a small jsfiddle here:<br />
<a href="https://jsfiddle.net/btyzvxen/2/" rel="nofollow">https://jsfiddle.net/btyzvxen/2/</a><br />
<br />
Just change the workspaceTB and click run to update the output<br />
<br /></div>
Timothy Dewinhttp://www.blogger.com/profile/14126614276831882160noreply@blogger.com0tag:blogger.com,1999:blog-8345042294447404507.post-19296519608229132462017-10-03T15:45:00.002+02:002017-10-03T15:45:49.629+02:00Self Service Demo with Veeam Backup for Office 365 using the REST API<div dir="ltr" style="text-align: left;" trbidi="on">
Just today, Veeam released the 1.5 GA version of Veeam Backup for Office 365. This version ships with a proxy-repository model, introducing scalability for the bigger shops and service providers. It also features a complete REST API, and I personally love it. It means that the community has a chance to extend the product without any limitations. (For those just getting started, know that by default the REST API is not enabled, so you should enable it in the main options. You can find the main menu in the top left corner under the "hamburger" icon.)<div>
<br /><div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgItRfzFpUZ87Thhj3rihFu_28lDQPOGmETkveI9ua5duUcHAPJTZq8X1LG0JyY8vX7XR8G16TNnAiUQOLiHvUyL0m2kdEn5xuTfvdvElZhWzR8_HGIiFGeV-NTFRFq6zqcv_yrKTiW-KI/s1600/enable-rest.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="592" data-original-width="886" height="213" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgItRfzFpUZ87Thhj3rihFu_28lDQPOGmETkveI9ua5duUcHAPJTZq8X1LG0JyY8vX7XR8G16TNnAiUQOLiHvUyL0m2kdEn5xuTfvdvElZhWzR8_HGIiFGeV-NTFRFq6zqcv_yrKTiW-KI/s320/enable-rest.png" width="320" /></a></div>
<div>
<div>
<br /></div>
<div>
And just to demo how powerful it is, I already made a small demo. It basically allows you to start a self-service recovery wizard, where a user can log in with his LDAP/AD credentials and then restore his own mails independently of the admin. This is quite a common request I get in the field, where admins don't really feel comfortable poking around in an end-user's mailbox, even if they don't have bad intentions. </div>
<div>
<br /></div>
<div>
The source of the self-service demo, aka "Mail Maestro", can be found on <a href="https://github.com/VeeamHub/mailmaestro">VeeamHub</a>. A compiled Windows version can be found <a href="http://dewin.me/mailmaestro/mailmaestro-1.0.1.zip" rel="nofollow">here</a>. Besides the source code, the GitHub page also shows how you can use certificates to secure the connection between the end user and the server. BTW, the code only works with an on-premises Exchange server and a local LDAP connection, just because I didn't have the time to set up an Office 365 account etc. Most of the wizard will probably work; I'm just assuming that during restore, the credentials that are being used (by default, the credentials used to log in) might not work. 
<div>
<br /></div>
<div>
OK, so let's try it out. When you download the compiled version, you will get the binary and the config file. Start by editing the JSON file with, for example, Notepad. I removed the "vbomailbox" argument because I will supply it on the command line.</div>
<div>
<br /></div>
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiBGqlkZ4lM_RGc_XG-fCPseBLbENKr3AYtW7cT1rDqMYI708iHxN-p5b7c6V-IQ2EKDjndznL2SKjF7JWCAD5ZbmL5OMooL0_XAcm5cIx6Et6mzRxpdKFeN4EY4X6QmhrlxdFGQxlM9xQ/s1600/json-config-mail-maestro.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="577" data-original-width="723" height="255" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiBGqlkZ4lM_RGc_XG-fCPseBLbENKr3AYtW7cT1rDqMYI708iHxN-p5b7c6V-IQ2EKDjndznL2SKjF7JWCAD5ZbmL5OMooL0_XAcm5cIx6Et6mzRxpdKFeN4EY4X6QmhrlxdFGQxlM9xQ/s320/json-config-mail-maestro.png" width="320" /></a></div>
<div>
<br /></div>
<div>
Maybe some side notes. The LDAP server is of course a reference to the LDAP/AD server. To look up the user that you want to allow to do his self-service restore, we temporarily need to bind to it and look up his account, email address and distinguished name. You can use a read-only user for this. The rest should be quite self-explanatory, except maybe for "LocalStop". If you enable LocalStop, you can type "stop" on the command line to cleanly close the session from the server side. The user himself will be able to stop the wizard from the portal after logging in, to indicate that he is done. Both will clean up the restore session in VBO365 (a headless Veeam Explorer).</div>
<div>
<br /></div>
<div>
So let's go to the command line and pass the config file. Since we removed vbomailbox, Mail Maestro will complain that it does not know which user you want to use in this session. You can supply it on the command line by using -vbomailbox.</div>
<div>
<br /></div>
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhlYkY4tdhWbYp_WpsRoUvrBMwsZpTN-TgUmvdLoG__aaJQNiZMjAIv62aS9udBfBws5j8pRvOHJNvbc_43D31-rq6HUti5U-628OsysmUkP1x8bseroAKDi_1xtCsWZIiyEf0zReZCGp8/s1600/without-user.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="512" data-original-width="979" height="167" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhlYkY4tdhWbYp_WpsRoUvrBMwsZpTN-TgUmvdLoG__aaJQNiZMjAIv62aS9udBfBws5j8pRvOHJNvbc_43D31-rq6HUti5U-628OsysmUkP1x8bseroAKDi_1xtCsWZIiyEf0zReZCGp8/s320/without-user.png" width="320" /></a></div>
<div>
<br /></div>
<div>
Let's supply a user that is being backed up</div>
<div>
<br /></div>
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhD6Jb1Tm-gyL2OT53M85qqA7_ScBws0rR84ldq93djXZNqXVi7wyK9taBXlti7aOT2HTUoEqx_qGhaDDVE6wQHwAhQ_s3lJ0izRLQk6W73QCsBaHopOJgXJ4n5BpdxS1talCvBAm1pbkU/s1600/user-backedup.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="592" data-original-width="886" height="213" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhD6Jb1Tm-gyL2OT53M85qqA7_ScBws0rR84ldq93djXZNqXVi7wyK9taBXlti7aOT2HTUoEqx_qGhaDDVE6wQHwAhQ_s3lJ0izRLQk6W73QCsBaHopOJgXJ4n5BpdxS1talCvBAm1pbkU/s320/user-backedup.png" width="320" /></a></div>
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEg1el7mzXNZcAsD4-Mpc38vgSX4IIFCfZNUJQYMcL4stdhw0aUOufECYh7KX3p2Nw_H_iftb8WeWvDEOMyAGIrDb3LfQ2hqUEP-_c6VbqiVirqGIWoHqJqPL8aVbQsWVI7sYq6Kdw9j3h0/s1600/starting-the-session.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="815" data-original-width="978" height="266" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEg1el7mzXNZcAsD4-Mpc38vgSX4IIFCfZNUJQYMcL4stdhw0aUOufECYh7KX3p2Nw_H_iftb8WeWvDEOMyAGIrDb3LfQ2hqUEP-_c6VbqiVirqGIWoHqJqPL8aVbQsWVI7sYq6Kdw9j3h0/s320/starting-the-session.png" width="320" /></a></div>
<div class="separator" style="clear: both; text-align: center;">
<br /></div>
<div>
Great, the process is starting. Mail Maestro was able to find the user, start the headless Veeam Explorer session and find the mailbox in the backups. You can also see that it is serving on http://localhost:4123. Open the firewall port and replace localhost with the server IP to grant remote access.</div>
<div>
<br /></div>
<div>
So when the user logs in with his email address, he will be authenticated against LDAP, and then hopefully the wizard will be quite self-explanatory.</div>
<div>
<br /></div>
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhchatTN8n9VxOomsmSn3yVSa1EUd0tYLB0H4T1XPqaeD0ON66XRqYXz2ph0UYb4NKEfWK3zc0TdpVNo49WNp3VXEPP6dOcY2hWXLwb83tpSxsElL14tgW8Dmzp-1SWshhOltpy87puOdY/s1600/1-login.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="730" data-original-width="971" height="240" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhchatTN8n9VxOomsmSn3yVSa1EUd0tYLB0H4T1XPqaeD0ON66XRqYXz2ph0UYb4NKEfWK3zc0TdpVNo49WNp3VXEPP6dOcY2hWXLwb83tpSxsElL14tgW8Dmzp-1SWshhOltpy87puOdY/s320/1-login.png" width="320" /></a></div>
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgb_D1QInVCIXV_6rtoROzPpgqNaHPTn6oyvO2GWCLhnzc8u7gKeBsWZki4QwzRKB9sdcZ9IAdWAJKXs5Y-ZkI4UH8tzfZHVRpMXfPHdmmxbFvSNBbDSWpOhYAPQzbixri5T9K-J6NGnbI/s1600/2-overview.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="730" data-original-width="971" height="240" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgb_D1QInVCIXV_6rtoROzPpgqNaHPTn6oyvO2GWCLhnzc8u7gKeBsWZki4QwzRKB9sdcZ9IAdWAJKXs5Y-ZkI4UH8tzfZHVRpMXfPHdmmxbFvSNBbDSWpOhYAPQzbixri5T9K-J6NGnbI/s320/2-overview.png" width="320" /></a></div>
Let's log in to the mailbox and delete all the mails in the inbox.<br />
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhJI4TDJ9x8tMzuCiJ6_HOs_xuINNjbvyJpWJZNgp_ovV_boewX37NM5Z_qSz4yhFFtT0Mn7OlJq6NuSXlTyfgJMoUu06dhomdHCgaRhdXrQzcscoGXfcd98CY4nbii3Rtto20Jgvep0VM/s1600/3-mails.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="730" data-original-width="971" height="240" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhJI4TDJ9x8tMzuCiJ6_HOs_xuINNjbvyJpWJZNgp_ovV_boewX37NM5Z_qSz4yhFFtT0Mn7OlJq6NuSXlTyfgJMoUu06dhomdHCgaRhdXrQzcscoGXfcd98CY4nbii3Rtto20Jgvep0VM/s320/3-mails.png" width="320" /></a></div>
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhDHEVZ3dYAHeTQzbT8lRkuH-9_skXrCNdpk-7FffhyLwhaba0VHPBLpAtFYp_b78X4-XkMbpSaOQ1ysfmfYpUNfhV9G3aIHYBcV9RAFTImawjUMYVxpmfjqqa6CXwMlmm8KUyrId8yKDo/s1600/4-delete.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="730" data-original-width="971" height="240" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhDHEVZ3dYAHeTQzbT8lRkuH-9_skXrCNdpk-7FffhyLwhaba0VHPBLpAtFYp_b78X4-XkMbpSaOQ1ysfmfYpUNfhV9G3aIHYBcV9RAFTImawjUMYVxpmfjqqa6CXwMlmm8KUyrId8yKDo/s320/4-delete.png" width="320" /></a></div>
<br />
Now let's restore them from Mail Maestro by clicking the green restore button next to the mailbox.<br />
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjMedM9Kv5Q14JnyHt6kJaBDJFfPjRj0Yg5zUoboNXCKngO2bFBhzTxlZmn2Tcuofe7SD0WiEfQDT28NlOTPB3YMDy6PnrTCT99Jbb81OIKgOL8xXNddKWaAPjqOhrSzbT-fnefm_EzlRg/s1600/5-restore.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="730" data-original-width="971" height="240" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjMedM9Kv5Q14JnyHt6kJaBDJFfPjRj0Yg5zUoboNXCKngO2bFBhzTxlZmn2Tcuofe7SD0WiEfQDT28NlOTPB3YMDy6PnrTCT99Jbb81OIKgOL8xXNddKWaAPjqOhrSzbT-fnefm_EzlRg/s320/5-restore.png" width="320" /></a></div>
... and the mails are back<br />
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhESTB7M6Hr1cZ9KCc6fdoGnU_dNGVj8sEa1xxCmjkDnOQnZ9oBU3Z7XAvan-_pZF7XEuGt-PCyPt8JirmZPfhsnRYzp-tMP0TVdrx6OsvcCYrA3sUIY_Vf867OlO6xwuSq6s1IYaG2zyU/s1600/6-restored.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="730" data-original-width="971" height="240" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhESTB7M6Hr1cZ9KCc6fdoGnU_dNGVj8sEa1xxCmjkDnOQnZ9oBU3Z7XAvan-_pZF7XEuGt-PCyPt8JirmZPfhsnRYzp-tMP0TVdrx6OsvcCYrA3sUIY_Vf867OlO6xwuSq6s1IYaG2zyU/s320/6-restored.png" width="320" /></a></div>
</div>
</div>
<div>
<br /></div>
<div>
When the user is done, he can stop the portal via the button in the top right corner of the portal. I noticed that if the browser window is too small, the button might not show up. Anyway, you can always stop the wizard by typing "stop" on the command line.</div>
<div>
<br /></div>
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjUbK9l96mNLl1G3zhZVNy2O2jP0sQmho_-Sd_C-MoKvFiNgjwYxwVX842UmAcowUQ40r43LxnrAxEmHoJBweIkc07UjaPl3uuHedQoegJC_tUZIXXHaKIN53NSBnBBQBLkQGqf9IvnbUs/s1600/stop-mail-maestro.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="730" data-original-width="1067" height="218" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjUbK9l96mNLl1G3zhZVNy2O2jP0sQmho_-Sd_C-MoKvFiNgjwYxwVX842UmAcowUQ40r43LxnrAxEmHoJBweIkc07UjaPl3uuHedQoegJC_tUZIXXHaKIN53NSBnBBQBLkQGqf9IvnbUs/s320/stop-mail-maestro.png" width="320" /></a></div>
<div>
Final notes: as with many of my projects, this is just a demo. If you feel like you could use this in a production environment, please evaluate the code. It is published under the MIT License, so basically you can do whatever you want with it at your own risk. I hope, however, that this shows how powerful the new API is and what you can do with it. I can only imagine that in the future, service providers will be able to build their own backup portal and offer Backup as a Service. In fact, I know my colleague <a href="https://github.com/nielsengelen/vbo365-rest">Niels Engelen</a> has been working on such a demo in PHP. </div>
</div>
Timothy Dewinhttp://www.blogger.com/profile/14126614276831882160noreply@blogger.com0tag:blogger.com,1999:blog-8345042294447404507.post-45996226226573608042017-09-12T17:12:00.002+02:002017-09-12T17:12:18.510+02:00Adding AD/OU users to VBO365 via Powershell<div dir="ltr" style="text-align: left;" trbidi="on">
For the Veeam fans out there: you must have been living under a rock if you don't know that there is a new backup product for Office 365 (Veeam Backup for Office 365, or VBO365 for short). It allows you to back up mailbox items like mail, calendar items, etc.<br />
<br />
While 1.0 has already been released, 1.5 is currently in public beta. One of the cool things it brings is scalability, which a lot of users have been asking for. However, it also brings full automation support in the form of a complete REST API and a complete PowerShell module. In this blog post I want to show you the power you get with the new PowerShell module.<br />
<br />
Quite often I get asked how to add only a selected set of users to a job. For example, a company has 4000 mailboxes but only wants to select a certain number of mailboxes for protection in a certain job. This makes even more sense with v1.5, since you can define multiple repositories with different retention. So maybe for the helpdesk guys you don't really want to keep the backups that long, but for the managers you want to keep the mails backed up for 8 years. Handpicking those users per job can be a tedious task.<br />
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEh-WAKWh0fScAgRYvDcgbLTe4uFlHNGXy0kGTADlzCPc_ftm9PwlaFfOyHP4OWIUoS9FeleFpM1N3lm_E3HnwLL3R9PFSuhHsvPJ9Tp74BvOTsocfc_E4thb44QZ38ZUCnBSiHjLhPK4X4/s1600/vbo365-repository-retention.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="720" data-original-width="1280" height="180" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEh-WAKWh0fScAgRYvDcgbLTe4uFlHNGXy0kGTADlzCPc_ftm9PwlaFfOyHP4OWIUoS9FeleFpM1N3lm_E3HnwLL3R9PFSuhHsvPJ9Tp74BvOTsocfc_E4thb44QZ38ZUCnBSiHjLhPK4X4/s320/vbo365-repository-retention.png" width="320" /></a></div>
<br />
With the new Powershell Module, you can automate this task. There is a new cmdlet called "Add-VBOJob" that allows you to define a new job. It takes the following parameters:<br />
<br />
<ul style="text-align: left;">
<li>Organization (Get-VBOOrganization)</li>
<li>Target Repository (Get-VBORepository)</li>
<li>Mailboxes (Get-VBOOrganizationMailbox)</li>
<li>Schedule Policy (New-VBOJobSchedulePolicy)</li>
<li>Name</li>
</ul>
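<div>
Put together, creating such a job might look like the sketch below. The cmdlet names come straight from the list above, but the exact parameter names and the mailbox "Email" property are assumptions based on the beta and may differ in the final 1.5 release.</div>

```powershell
# Sketch only: parameter and property names are assumptions; verify against
# the shipped v1.5 module before using.
Import-Module "C:\Program Files\Veeam\Backup365\Veeam.Archiver.PowerShell"

$org  = Get-VBOOrganization -Name "x.local"
$repo = Get-VBORepository -Name "LongRetention"

# Pick only the mailboxes we want in this job
$wanted = @("bbols@x.local", "ppeeters@x.local", "tbruyne@x.local")
$mailboxes = Get-VBOOrganizationMailbox -Organization $org |
    Where-Object { $wanted -contains $_.Email }

# Daily schedule policy and the job itself
$schedule = New-VBOJobSchedulePolicy -Type Daily
Add-VBOJob -Name "Managers" -Organization $org -Repository $repo `
    -Mailboxes $mailboxes -SchedulePolicy $schedule
```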
<br />
To see it in action, I made a sample script that queries Active Directory and gets all users in a certain OU. Based on those users, you can build a list of email addresses that you want to add. Armed with that list, you can use "Get-VBOOrganizationMailbox" to select the correct mailboxes.<br />
<br />
You can find the script <a href="https://gist.github.com/tdewin/8b787096380467aa1dc1fe8a7732ba17" rel="nofollow">here</a>. It should be quite straightforward. Here are some screenshots of it in action:<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjZTVfLEtgjzJQnCUPVFbFE9EmlbKGIz1iaTMaG2X4Jbe1_ZoDU51hREXC-bhdBpJuVvDq9JaZ_txPvyF2NgJVgEd_yAxEG70qqWKj0j6396kUrtN0n7ZE6lqFV3rMRR3SC0MnHcc3dSkk/s1600/vbo365-ps-running.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="720" data-original-width="1280" height="180" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjZTVfLEtgjzJQnCUPVFbFE9EmlbKGIz1iaTMaG2X4Jbe1_ZoDU51hREXC-bhdBpJuVvDq9JaZ_txPvyF2NgJVgEd_yAxEG70qqWKj0j6396kUrtN0n7ZE6lqFV3rMRR3SC0MnHcc3dSkk/s320/vbo365-ps-running.png" width="320" /></a></div>
<br />
First of all, the module is in "C:\Program Files\Veeam\Backup365\Veeam.Archiver.PowerShell", so you can just execute "import-module 'C:\Program Files\Veeam\Backup365\Veeam.Archiver.PowerShell'". However, the $installpath trick in this script tries to find the installation directory even if you did not install VBO365 in the default location.<br />
<br />
Now as you can see from the output, it found 3 users in the OU:<br />
<br />
<ul style="text-align: left;">
<li>bbols@x.local</li>
<li>ppeeters@x.local</li>
<li>tbruyne@x.local</li>
</ul>
<div>
The script builds the email address list based on the SamAccountName, but of course if you have a different policy, you can adapt the example. For instance, I imagine quite a few companies have something like FirstName.LastName@company.com. By the way, if you are wondering: "x.local" isn't a real DNS name, so how does that work with VBO365? Well, it seems that 1.5 will also support on-premises Exchange and hybrid deployments.</div>
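<div>
As a hedged sketch of that part of the script (the OU path and domain below are placeholders; adapt the string formatting to your own address policy):</div>

```powershell
# Sketch: build the email list from the SamAccountName of every user in an OU.
# "OU=Managers,DC=x,DC=local" and "x.local" are placeholders for your environment.
Import-Module ActiveDirectory

$ou     = "OU=Managers,DC=x,DC=local"
$domain = "x.local"

$emails = Get-ADUser -SearchBase $ou -Filter * |
    ForEach-Object { "{0}@{1}" -f $_.SamAccountName, $domain }

# A FirstName.LastName policy would look like this instead:
# "{0}.{1}@{2}" -f $_.GivenName, $_.Surname, "company.com"
```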
<div>
<br /></div>
<div>
After building the email list, the script creates the job.</div>
<div>
<br /></div>
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhhZdiNJknmD2HU82HsIru2pRHA_d-tFCbIBM1mOCWEpSVWFUdnknhrN6Aw48T1iJnzPZdAq1IAAOiGaTLOoMU-zA9unFjroQpY5EcDrj_AlvcQdXEa7CN7OVLMlhpV58iIiUmaN1AHUgs/s1600/vbo365-job-add.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="720" data-original-width="1280" height="180" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhhZdiNJknmD2HU82HsIru2pRHA_d-tFCbIBM1mOCWEpSVWFUdnknhrN6Aw48T1iJnzPZdAq1IAAOiGaTLOoMU-zA9unFjroQpY5EcDrj_AlvcQdXEa7CN7OVLMlhpV58iIiUmaN1AHUgs/s320/vbo365-job-add.png" width="320" /></a></div>
<div>
<br /></div>
<div>
If we check the job, you will see those email addresses (mailboxes) were successfully added to the job.</div>
<div>
<br /></div>
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEh67Ua5lZOSYRb2CNJbmQKHlSFjHXjMH7yq61faghNyL6juSusdOe4t_BajBRXebGjKipyzlNbgqqzPJKuV3dKRo2euuNJvXpsSwm0ZJQfpuEZGrB8eZZogkEtzp7V5NWIbGWgJwzv6iBA/s1600/vbo365-job-selected-users.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="720" data-original-width="1280" height="180" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEh67Ua5lZOSYRb2CNJbmQKHlSFjHXjMH7yq61faghNyL6juSusdOe4t_BajBRXebGjKipyzlNbgqqzPJKuV3dKRo2euuNJvXpsSwm0ZJQfpuEZGrB8eZZogkEtzp7V5NWIbGWgJwzv6iBA/s320/vbo365-job-selected-users.png" width="320" /></a></div>
<div>
<br /></div>
<div>
In this case it was only 3 users in my test lab, but I can imagine that if you need to add 500 users, you will be grateful not having to add them one by one. You could also do this in a for loop, going over multiple OUs and creating multiple jobs. Finally, if you are going to use this in production once it is GA, I would recommend that you validate that you have the same number of users in the OU as in the job. In this example, the script simply checks all the mailboxes (Get-VBOOrganizationMailbox) and verifies whether the email address associated with a mailbox is in the initial email list. If it is, it is added to the job.</div>
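<div>
Such a validation could be as simple as comparing the two counts (a sketch; "Get-VBOJob" exists in the module, but the property holding the selected mailboxes is an assumption):</div>

```powershell
# Sketch: warn if the OU and the job do not contain the same number of users.
# The SelectedItems property name is an assumption, not confirmed against the beta.
$ouCount  = (Get-ADUser -SearchBase "OU=Managers,DC=x,DC=local" -Filter *).Count
$job      = Get-VBOJob -Name "Managers"
$jobCount = ($job.SelectedItems | Measure-Object).Count

if ($ouCount -ne $jobCount) {
    Write-Warning "OU has $ouCount users but the job protects $jobCount mailboxes"
}
```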
</div>
Timothy Dewinhttp://www.blogger.com/profile/14126614276831882160noreply@blogger.com0tag:blogger.com,1999:blog-8345042294447404507.post-61454969278484635772017-04-18T14:04:00.003+02:002017-04-18T14:40:39.888+02:00Gathering your Veeam Agents in Veeam Backup & Replication<div dir="ltr" style="text-align: left;" trbidi="on">
So with the upcoming version of Veeam Agent for Windows, you will be able to back up directly to a Backup & Replication server. Not everybody knows this, but you will also be able to back up to Veeam Backup & Replication Free Edition, provided you have a license for the Veeam Agent for Windows. This might be important for smaller shops that have only a couple of machines and do not have a Veeam Backup & Replication license.<br />
<br />
The steps to enable this are not difficult, but without the GA product there is no documentation, so it might be hard to figure out how it all ties together. So here are the 7 steps you need to take to get it all working. Special thanks to Clint Wyckoff, who shared these instructions internally.<br />
<br />
<h2 style="text-align: left;">
Step 1: Start by installing Veeam Backup & Replication</h2>
<div>
Fairly simple start. Download <a href="https://www.veeam.com/virtual-machine-backup-solution-free.html" rel="nofollow">Veeam Backup & Replication Free Edition</a>. Then mount the ISO on your target server and click the install button to start the installation. Basically, in this example, we did a next-next-next-finish install. If you are doing this in production, it might actually be good to read what you are doing. Notice that in the license step I did not assign any license, so the free mode will be installed.<br />
<br /></div>
<div>
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEin8u_NWfSkpBKkovjDCIxfM1VT-mt0LdAhJ_Am_bVpiot-gCfKc5baFuAoXxl2SUrFkik_P_Po4OaMt4nbeWAdf3ZfKHq4cG4LxNyGQ46Jdp87zSV0TWdy7HQ8uk07RVCnVP4dygGa_mM/s1600/01-veeam-iso-install.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="240" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEin8u_NWfSkpBKkovjDCIxfM1VT-mt0LdAhJ_Am_bVpiot-gCfKc5baFuAoXxl2SUrFkik_P_Po4OaMt4nbeWAdf3ZfKHq4cG4LxNyGQ46Jdp87zSV0TWdy7HQ8uk07RVCnVP4dygGa_mM/s320/01-veeam-iso-install.png" width="320" /></a></div>
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjn19vfy5Y4jXAGyKk7b0vCoyqDkwvLUsU1C0xCBS-92-lLzaIiIcAQNi906bWdA2G1qCPS0HAmX0u9TW46gPUhXKlKURoVSIaX5E7thZbX7RznlfDgV3U5CPKo8Is56vS6BbEQBXmS2i4/s1600/02-nolicense.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="240" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjn19vfy5Y4jXAGyKk7b0vCoyqDkwvLUsU1C0xCBS-92-lLzaIiIcAQNi906bWdA2G1qCPS0HAmX0u9TW46gPUhXKlKURoVSIaX5E7thZbX7RznlfDgV3U5CPKo8Is56vS6BbEQBXmS2i4/s320/02-nolicense.png" width="320" /></a></div>
<span id="goog_935092046"></span><span id="goog_935092047"></span><br />
<h2 style="text-align: left;">
Step 2: Enable full functionality view</h2>
</div>
<div>
The next step is where one of my partners got stuck. You need to enable the "Full Functionality" view to get through the next steps. By default, you get the "Free Functionality" view, which shows you only the options you can use in free mode. However, if you add a Veeam Agent for Windows license, you will unlock more functionality for your agent backups than is available by default in free mode.</div>
<div>
<br /></div>
<div>
To enable it, go to the main menu, select View and then select the "Full Functionality" mode.</div>
<div>
<br /></div>
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiYjZfQYpQHs6twog_wIPxO_CPiY1KgxwdlAxgi80LxiKkf4vDj4VlTL6ghcchNyOwjaBpb671AG2Embhz3YQtykzb4uL5Y6q03cmaDJ-_Te4tJg4iHUb2SBrm5od2nMW2UltAMtTeI7Ak/s1600/03-full-funct.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="240" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiYjZfQYpQHs6twog_wIPxO_CPiY1KgxwdlAxgi80LxiKkf4vDj4VlTL6ghcchNyOwjaBpb671AG2Embhz3YQtykzb4uL5Y6q03cmaDJ-_Te4tJg4iHUb2SBrm5od2nMW2UltAMtTeI7Ak/s320/03-full-funct.png" width="320" /></a></div>
<div>
<br /></div>
<h2 style="text-align: left;">
Step 3: Add the Veeam Agent for Windows license to Backup & Replication</h2>
<div>
This might also be confusing, but you do not need to add the license during the Veeam Agent for Windows install. Rather, you add it to Veeam Backup & Replication, and then, when you connect a Veeam Agent for Windows to VBR, it will acquire the license from the VBR server. This is good because you get a central place to manage the licenses.</div>
<div>
<br /></div>
<div>
Go to the main menu, but this time choose "License". In the popup, click the install license button and select the Veeam Agent for Windows license file (.lic) in the file browser. The result should be that the license is installed, but the VBR server itself remains in Free mode.</div>
<div>
<br /></div>
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhpiyP5Rw4dFocLva_UMnOEgZ4rdX7W2CigLEV4HMhqtOln6KsdUx54yE03xKFiAGY2KEBKerPYa9Zs1hKbdMhJISXcFrUmydi0KCQp7zHDM_b2utubgXXpRrReaH5ZK8BL1RXQvuKU5Us/s1600/04-addlic.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="240" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhpiyP5Rw4dFocLva_UMnOEgZ4rdX7W2CigLEV4HMhqtOln6KsdUx54yE03xKFiAGY2KEBKerPYa9Zs1hKbdMhJISXcFrUmydi0KCQp7zHDM_b2utubgXXpRrReaH5ZK8BL1RXQvuKU5Us/s320/04-addlic.png" width="320" /></a></div>
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEg7TGVEuVKkZzk_NilAcVAB1kJi-1kzMt_jV5ByVm27c7NS1mfyun4mF-SPuWunEyrOEJgADp-vxTauydaJMjqH5einumA_YZXoeEmxtoj_w3wiJnEDsDkEu0GUr6UGR53qG6HB_FK1Az4/s1600/05-agentlicadded.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="240" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEg7TGVEuVKkZzk_NilAcVAB1kJi-1kzMt_jV5ByVm27c7NS1mfyun4mF-SPuWunEyrOEJgADp-vxTauydaJMjqH5einumA_YZXoeEmxtoj_w3wiJnEDsDkEu0GUr6UGR53qG6HB_FK1Az4/s320/05-agentlicadded.png" width="320" /></a></div>
<div>
<br /></div>
<div>
<br /></div>
<h2 style="text-align: left;">
Step 4: Define permissions</h2>
<div>
The next step is to define the permissions on your repository. Go to the "Backup Infrastructure" section and click the "Backup repositories" node. Then select the repository you want to assign rights to and click "Agent Permissions". In the popup window, you will be able to assign permissions.</div>
<div>
<br /></div>
<div>
For this tutorial, I made a separate user called "user1", just to show you that you can set very granular permissions.</div>
<div>
<br /></div>
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiYDNr62rOpXblmslkJf9AmJJTZ3UEfOxgSxqSqNod5mwGkPQ0yBGJrkiX3BOa6jyiR5qL2nlGkZdn3Ni9Vll0pjL24pjt5fFD48BkRIe0Ci48_oT_nO_e1w8kJL253L5dE7FRHrY9J2vA/s1600/06-localuser.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="240" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiYDNr62rOpXblmslkJf9AmJJTZ3UEfOxgSxqSqNod5mwGkPQ0yBGJrkiX3BOa6jyiR5qL2nlGkZdn3Ni9Vll0pjL24pjt5fFD48BkRIe0Ci48_oT_nO_e1w8kJL253L5dE7FRHrY9J2vA/s320/06-localuser.png" width="320" /></a></div>
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhvFqv9RDfUXekXIXayRQBH7XUGvggqd9fQvIxyCa6MmW3630he0ms-fRYo0ryrfclVYrffQfiYMm8tw3rvUaZc95euPOQI0l89g5qsdDHk503Rm3Dl0FIiu0JwPypzc9CqDN9vGOZ8NJk/s1600/07-repomgmt.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="240" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhvFqv9RDfUXekXIXayRQBH7XUGvggqd9fQvIxyCa6MmW3630he0ms-fRYo0ryrfclVYrffQfiYMm8tw3rvUaZc95euPOQI0l89g5qsdDHk503Rm3Dl0FIiu0JwPypzc9CqDN9vGOZ8NJk/s320/07-repomgmt.png" width="320" /></a></div>
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhqLPGokHhLDjzsF5A-5hwzYAlozI2LZwrNOUzVnkq9_4JwcmsYaTKuOUhtVSkNa42nNtbJBRhCZu8YnJu51l1WhHg2tt4kvgEbVRBFDgiZuc66UDAhvSPm2474czbhy9QqQlps3GXM9Xk/s1600/08-addperm.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="240" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhqLPGokHhLDjzsF5A-5hwzYAlozI2LZwrNOUzVnkq9_4JwcmsYaTKuOUhtVSkNa42nNtbJBRhCZu8YnJu51l1WhHg2tt4kvgEbVRBFDgiZuc66UDAhvSPm2474czbhy9QqQlps3GXM9Xk/s320/08-addperm.png" width="320" /></a></div>
<div>
<br /></div>
<h2 style="text-align: left;">
Step 5: Install the client</h2>
<div>
Installing the agent on another machine should be fairly trivial. However, in this setup we chose not to configure the backup during install, nor to create a recovery medium. I would highly recommend creating a recovery medium, though, so that you can execute bare-metal recoveries if needed.</div>
<div>
<br /></div>
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhipv3eP0_4Hm1t4aXUddPo1HYhPejRrTKtlY_o1THufNPiEnyZPSiXawPrId-B5iN7JNFVlb6dg8UTcs2ZNLHlRC_J6daptXIxJcLs-hEXb6zOO0lKy3K2iTPnVlV23TaYgsv35MDPap8/s1600/09-install.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="240" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhipv3eP0_4Hm1t4aXUddPo1HYhPejRrTKtlY_o1THufNPiEnyZPSiXawPrId-B5iN7JNFVlb6dg8UTcs2ZNLHlRC_J6daptXIxJcLs-hEXb6zOO0lKy3K2iTPnVlV23TaYgsv35MDPap8/s320/09-install.png" width="320" /></a></div>
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhyeIvvGRPpj9lYyvOZYGbdpSA08WrpL77KU2P9iCEVkylghyT8y8y9OLiwoh9bQ-dcYIbWVbAggEjaYfmP2J67h-hfhiZAO7MdQ9do4b1n3sU7vENULEfnQOHAhnsQQBqBxF2auEpi9Nw/s1600/10-noconfig.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="240" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhyeIvvGRPpj9lYyvOZYGbdpSA08WrpL77KU2P9iCEVkylghyT8y8y9OLiwoh9bQ-dcYIbWVbAggEjaYfmP2J67h-hfhiZAO7MdQ9do4b1n3sU7vENULEfnQOHAhnsQQBqBxF2auEpi9Nw/s320/10-noconfig.png" width="320" /></a></div>
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiuRI4lflzM_HhyDw-a8_g3vjxDWo7pG6Pqef_iJR8_S4VLnPKSPqWxZICXNljlI-IvHqcN608lBLaEiLu5DzROinIqi71geaP-U-Fm7K7mAaUfHpU5IIiSd-VHWEo21ir2Lj_HR6AoNhA/s1600/11-norecmedia.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="240" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiuRI4lflzM_HhyDw-a8_g3vjxDWo7pG6Pqef_iJR8_S4VLnPKSPqWxZICXNljlI-IvHqcN608lBLaEiLu5DzROinIqi71geaP-U-Fm7K7mAaUfHpU5IIiSd-VHWEo21ir2Lj_HR6AoNhA/s320/11-norecmedia.png" width="320" /></a></div>
<div class="separator" style="clear: both; text-align: center;">
<br /></div>
<h2 style="text-align: left;">
Step 6: Configure the client</h2>
<div>
Once the product is installed, we can configure it. To open the control panel, go to your system tray. A new icon with a green V should have appeared. Because we did not configure anything yet, it should also have a small blue question mark on it. Right-click it and select Control Panel.</div>
<div>
<br /></div>
<div>
When the control panel appears, ignore the fact that it does not have a license (click no). Click "Configure backup" to start the configuration.</div>
<div>
<br /></div>
<div>
Finally, in the backup wizard, select Veeam Backup & Replication Repository as the target. Specify the FQDN/IP and the credentials. When you click next, the permissions are checked and the license is acquired from the backup server. In the next step, you are able to select the repository.</div>
<div>
<br /></div>
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjQiOCZNay9I3sv0hcnMhzJcE_s7WmwLXAvt0xUN-V6Lgkru1q_nPxB1NtyCLJN9Ggh21tA2Ue4aUORg5wb3qKi2gRDr3pgHcr21IOCZ0KKkOUYJPXePyl4CMN1favAYgkgF3ejQGknDlE/s1600/12-controlpanel.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="240" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjQiOCZNay9I3sv0hcnMhzJcE_s7WmwLXAvt0xUN-V6Lgkru1q_nPxB1NtyCLJN9Ggh21tA2Ue4aUORg5wb3qKi2gRDr3pgHcr21IOCZ0KKkOUYJPXePyl4CMN1favAYgkgF3ejQGknDlE/s320/12-controlpanel.png" width="320" /></a></div>
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEh8rQKnT-O7q3gBcsVAq9C94P53ZWm3DS8mN2Ojly27Z2sZJE8oMbE4H-QYgpc6qE7eom_VooI6KbzQyTAwuvgAOENsOgAUVpJp_WMXjBuE5Wg10YBbMV-NyQ5pmTEkW5kebdFxXPo9IW8/s1600/14-ignorelic.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="240" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEh8rQKnT-O7q3gBcsVAq9C94P53ZWm3DS8mN2Ojly27Z2sZJE8oMbE4H-QYgpc6qE7eom_VooI6KbzQyTAwuvgAOENsOgAUVpJp_WMXjBuE5Wg10YBbMV-NyQ5pmTEkW5kebdFxXPo9IW8/s320/14-ignorelic.png" width="320" /></a></div>
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgnHtFJNdnUawFuG8QRNfJnEyJj3clGBrQY6fNoEL0x-R7-jQT4pcirbNnq0pk_7PxKA0MfrQsvj3li0LJKOzcdcCul_99DUw_3x6W7nESj2cRWXeB61dXJ3w-0hpy5YQZeKwmOf8cUo60/s1600/15-configurebackup.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="240" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgnHtFJNdnUawFuG8QRNfJnEyJj3clGBrQY6fNoEL0x-R7-jQT4pcirbNnq0pk_7PxKA0MfrQsvj3li0LJKOzcdcCul_99DUw_3x6W7nESj2cRWXeB61dXJ3w-0hpy5YQZeKwmOf8cUo60/s320/15-configurebackup.png" width="320" /></a></div>
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEi4xB9BbZqWClBiYBQJSmamEyDwqLANQE-RYKR5Ddm3DeRLbo7ekBhXviAyCbznWsEC3wkzJL-9q-WpQpBQ54D3eT9eGNP1cozdvcucnG5ednPVMLuneIRMQr0w2LBmUu2DznF58ZHiwg8/s1600/16-configrepo.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="240" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEi4xB9BbZqWClBiYBQJSmamEyDwqLANQE-RYKR5Ddm3DeRLbo7ekBhXviAyCbznWsEC3wkzJL-9q-WpQpBQ54D3eT9eGNP1cozdvcucnG5ednPVMLuneIRMQr0w2LBmUu2DznF58ZHiwg8/s320/16-configrepo.png" width="320" /></a></div>
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgqHIULJh71BgE5CIHN8BnnMTmsDSSrJHqcHYv35NoOmRXj0tFQFyaohuvls4VYAdbTCho4Eg9yH4SJ-QZ8gKTZ1eqdoY2VoIGaMkGmo26Jbi_QHHxHEvI1U4nZKjZ5zz299tBFXhlyp_s/s1600/17-userconfig.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="240" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgqHIULJh71BgE5CIHN8BnnMTmsDSSrJHqcHYv35NoOmRXj0tFQFyaohuvls4VYAdbTCho4Eg9yH4SJ-QZ8gKTZ1eqdoY2VoIGaMkGmo26Jbi_QHHxHEvI1U4nZKjZ5zz299tBFXhlyp_s/s320/17-userconfig.png" width="320" /></a></div>
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiIZOlZSEIvmGTY4keTeWMoJ8-vSZ5RWNvU9UpONsBfawI8-U9zREanATBp_kpqQ0l7HD5pJNDlZTiitjhSazCoB07swgO_CIx_EdQ0PJJKDdUueWFfQ6gtnbWixBLporQrbHcaR3123YU/s1600/18-retention.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="240" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiIZOlZSEIvmGTY4keTeWMoJ8-vSZ5RWNvU9UpONsBfawI8-U9zREanATBp_kpqQ0l7HD5pJNDlZTiitjhSazCoB07swgO_CIx_EdQ0PJJKDdUueWFfQ6gtnbWixBLporQrbHcaR3123YU/s320/18-retention.png" width="320" /></a></div>
<div>
<br /></div>
<div>
By the way, if a user connects without permissions on the repository, the configuration wizard will refuse to go to the next step.</div>
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhLm_syh0zkIuX-V7sNz1QZtKuC-FcgcHTBpOD3O_m9j54jIYto8KsjyAWxlxknD2v1Hw2G-qUBu616ZRbdA6KPU8F8jA-tamFML_2ELQlMuynRsOAXUkiJoSMSL1Vy6zrufm7JKpvcOQs/s1600/20-wrong.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="240" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhLm_syh0zkIuX-V7sNz1QZtKuC-FcgcHTBpOD3O_m9j54jIYto8KsjyAWxlxknD2v1Hw2G-qUBu616ZRbdA6KPU8F8jA-tamFML_2ELQlMuynRsOAXUkiJoSMSL1Vy6zrufm7JKpvcOQs/s320/20-wrong.png" width="320" /></a></div>
<div>
<br /></div>
<h2 style="text-align: left;">
Step 7: Run the backup</h2>
<div>
With the configuration done, you are ready to run the backup. You can see the backup job and the resulting backup in the Veeam Backup & Replication console.</div>
<div>
<br /></div>
<div>
In the Veeam Agent for Windows, if you click the license tab, you will also see that the agent is licensed through Veeam Backup & Replication</div>
<div>
<br /></div>
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEh_pNFRp7Ecqx73HC05l_QR_Rngi63Vfx2PLRtXOWrha13_wDd2FktVHFmP9R0Wb6KPqp3GBac6g1MwFuVLOx6N4ul_sJ-SSt_H-QbPhoeljjse50wgncCaeX2tAGz_W0iEdGUUeTKFvEE/s1600/21-result.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="240" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEh_pNFRp7Ecqx73HC05l_QR_Rngi63Vfx2PLRtXOWrha13_wDd2FktVHFmP9R0Wb6KPqp3GBac6g1MwFuVLOx6N4ul_sJ-SSt_H-QbPhoeljjse50wgncCaeX2tAGz_W0iEdGUUeTKFvEE/s320/21-result.png" width="320" /></a></div>
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgx2UQzxxhyphenhyphenN9NA4kdZP1KMJu0E-xkz2mhS53Rmiamw8Wkd_W4jfIsmZEKD1D6V1RnLT49Sjm1ajgwslnfFoTaNg6xSrl1lU8sIJZ-nlzJ6DQp4YlBJchT7DB6QznByRB0nDAXdHCLoG-U/s1600/22-result.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="240" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgx2UQzxxhyphenhyphenN9NA4kdZP1KMJu0E-xkz2mhS53Rmiamw8Wkd_W4jfIsmZEKD1D6V1RnLT49Sjm1ajgwslnfFoTaNg6xSrl1lU8sIJZ-nlzJ6DQp4YlBJchT7DB6QznByRB0nDAXdHCLoG-U/s320/22-result.png" width="320" /></a></div>
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiSAp2WMYDtUq8dCDsDHwdQVyA7VKLNPnQh9B6vXSkS4CEhS69ox8UqFuJD43C4edtXgawOf6n5DBHEVhQDxSiseW8hyphenhyphenZFTWJE1FQ-kyf8xig3YoMVPUSob2F1XlcJ4NdlP73gp8tZo1vc/s1600/23-managedlic.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="240" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiSAp2WMYDtUq8dCDsDHwdQVyA7VKLNPnQh9B6vXSkS4CEhS69ox8UqFuJD43C4edtXgawOf6n5DBHEVhQDxSiseW8hyphenhyphenZFTWJE1FQ-kyf8xig3YoMVPUSob2F1XlcJ4NdlP73gp8tZo1vc/s320/23-managedlic.png" width="320" /></a></div>
<div>
<br /></div>
<h2 style="text-align: left;">
So what's next?</h2>
<div>
Well, you can explore what other functionality is enabled when you back up to a free edition. One cool feature is to "backup copy" your job to a second location. For example, in the following screenshots, I defined a repository on another drive and then ran a backup copy job to the second location.</div>
<div>
<br /></div>
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhau5e8kEkrfYuFYVr3IW-qqivsf1zkzBaQz64cw7J61Ao5a62pOGzuitgVncfwhCTz82XWq_DL3GgtXe_j6g-ES3oCvG-_aEZPuHBT0Og8O8GtOYKJy5xamZqk97UbSIrjGeOaQjjrF4A/s1600/31-repo.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="240" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhau5e8kEkrfYuFYVr3IW-qqivsf1zkzBaQz64cw7J61Ao5a62pOGzuitgVncfwhCTz82XWq_DL3GgtXe_j6g-ES3oCvG-_aEZPuHBT0Og8O8GtOYKJy5xamZqk97UbSIrjGeOaQjjrF4A/s320/31-repo.png" width="320" /></a></div>
<div class="separator" style="clear: both; text-align: center;">
</div>
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEik3VC4f48zdL8Zsv9urrez0ZO2S9SaPZDnTtrWDijVFuHsKADltSmUEuoIbwMGsRnC6sfQ55cZkbzx1mi-ZE0CYH1f0G7sKieD2dUnRWmnhufNtBM6XlG1nV4C_7WaZDE-SYnzImB8zV8/s1600/32-repo.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="240" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEik3VC4f48zdL8Zsv9urrez0ZO2S9SaPZDnTtrWDijVFuHsKADltSmUEuoIbwMGsRnC6sfQ55cZkbzx1mi-ZE0CYH1f0G7sKieD2dUnRWmnhufNtBM6XlG1nV4C_7WaZDE-SYnzImB8zV8/s320/32-repo.png" width="320" /></a></div>
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEi0jPiqHqsRNf-JdJmpBhAFR2utBN_I38F9gKHLnJeHRajQjfVTA_51eZo_lMrnI0Dlg6FY2Cc7b_dVuIoMlSu2nzKOKgNZjXYV9TW_0WRgwpIASrvM4bAtQH0Y85wKW60Kd7NBaChNxzw/s1600/33-repo.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="240" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEi0jPiqHqsRNf-JdJmpBhAFR2utBN_I38F9gKHLnJeHRajQjfVTA_51eZo_lMrnI0Dlg6FY2Cc7b_dVuIoMlSu2nzKOKgNZjXYV9TW_0WRgwpIASrvM4bAtQH0Y85wKW60Kd7NBaChNxzw/s320/33-repo.png" width="320" /></a></div>
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEi3FgWBST2KErSME6lDOG3hcyeihjlVD0hqa8YMNvAWUlVBSvY5LfX5q68Lb39VTrNqsETAR0Ww2392Z42Itk0GBJ-457iiEGi8oZJGR07yqHs1udIMOjOZma3CVaqeQ87r1sUcASGO9_8/s1600/34-bcj.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="240" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEi3FgWBST2KErSME6lDOG3hcyeihjlVD0hqa8YMNvAWUlVBSvY5LfX5q68Lb39VTrNqsETAR0Ww2392Z42Itk0GBJ-457iiEGi8oZJGR07yqHs1udIMOjOZma3CVaqeQ87r1sUcASGO9_8/s320/34-bcj.png" width="320" /></a></div>
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEizVT-Nru-AJlQp7LiKT7JFOPo-P6GYhLja9zSUWHPNCrt2w-Ve6N_CW2LMGkJ81s_EUdM_iUAwvFl5D9VauWqAbLeIYqIaUQiLKy_tHFi7CozW7xZ-EY2z3LkU_wmciGyNcNNEYWACw6s/s1600/35-bcjjobsrc.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="240" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEizVT-Nru-AJlQp7LiKT7JFOPo-P6GYhLja9zSUWHPNCrt2w-Ve6N_CW2LMGkJ81s_EUdM_iUAwvFl5D9VauWqAbLeIYqIaUQiLKy_tHFi7CozW7xZ-EY2z3LkU_wmciGyNcNNEYWACw6s/s320/35-bcjjobsrc.png" width="320" /></a></div>
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEipT7lWWiSsnsmB7P1OL6pmJfkG2abTo-NK8ooaiI2rs_0Slqf-dug5vS_rp-QW8C2M640x3ShocbRRod9CmpBcs30Ry29_MpjK8pludG4AzuzyN8i4VTfZX1i9-B33fKatVvjIXq1HJyo/s1600/36-repo2.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="240" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEipT7lWWiSsnsmB7P1OL6pmJfkG2abTo-NK8ooaiI2rs_0Slqf-dug5vS_rp-QW8C2M640x3ShocbRRod9CmpBcs30Ry29_MpjK8pludG4AzuzyN8i4VTfZX1i9-B33fKatVvjIXq1HJyo/s320/36-repo2.png" width="320" /></a></div>
<div class="separator" style="clear: both; text-align: center;">
<br /></div>
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiBbRS2_XnvzOX5rufbloRCIg2GuH360Ou55I4TtyOPFHlIVHOgPld6oVCNSP-XxJbS2uf5Ynj8P4VeZxT8HmtwUJfGcqDveIbWm1IjiHqWgiKul_lLpj46vtaUw1-JedF3ZCmNK-eqs1k/s1600/37-copyoffsite.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="240" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiBbRS2_XnvzOX5rufbloRCIg2GuH360Ou55I4TtyOPFHlIVHOgPld6oVCNSP-XxJbS2uf5Ynj8P4VeZxT8HmtwUJfGcqDveIbWm1IjiHqWgiKul_lLpj46vtaUw1-JedF3ZCmNK-eqs1k/s320/37-copyoffsite.png" width="320" /></a></div>
<div class="separator" style="clear: both; text-align: center;">
<br /></div>
<div>
<br /></div>
</div>
Timothy Dewinhttp://www.blogger.com/profile/14126614276831882160noreply@blogger.com0tag:blogger.com,1999:blog-8345042294447404507.post-55288454197937164482017-02-22T15:56:00.002+01:002017-02-22T16:07:10.679+01:00Under the hood: How does ReFS block cloning work<div dir="ltr" style="text-align: left;" trbidi="on">
In the latest version of Veeam 9.5, there is a new feature called ReFS block cloning integration. It seems that the ReFS block cloning limitations confuse a lot of people, so I decided to look a bit under the hood. It turns out that most limitations are just basic limitations of the API itself.<br />
<div>
<br /></div>
<div>
To understand better how it all works, I made a small project called <a href="https://github.com/tdewin/refs-fclone">Refs-fclone</a>. The idea is to give it an existing source file and then duplicate that file to a non-existing target file with the API: basically creating a synthetic full from an existing VBK.</div>
<div>
<br /></div>
<div>
It turns out that idea was not so original. During my Google quests for more information (because some parts didn't work), it appeared that a fellow hacker had made the exact same tool. I must admit that I reused some of his code; you can find his original code <a href="https://github.com/0xbadfca11/reflink/">here</a>.</div>
<div>
<br /></div>
<div>
Nevertheless, I finished the project, just to figure out for myself how it all works. In the end, the API is pretty "easy", I would say. I don't want to go over the complete code, but I will highlight some important bits. If you don't understand the C++ code, just ignore it and read the text underneath it. I tried to put the important parts in bold.</div>
<div>
<br /></div>
<h2 style="text-align: left;">
Prerequisites</h2>
<div>
Before I even got started, my initial code did not want to compile. I couldn't figure it out because I had the correct references in place, but for some reason it could not find "FSCTL_DUPLICATE_EXTENTS_TO_FILE". So I started looking into my project settings. It turned out the project was set to compile with Windows 8.1 as a target, and when I changed it to 10.0.10586.0, all of a sudden it could find all the references.</div>
<div>
<br /></div>
<div>
<b>This shows an important lesson: this code is not meant to be run on Windows 2012, because that OS simply does not support the API call. Many customers have been asking whether the ReFS integration will work on Windows 2012, and the answer is simple: no. At the time Windows 2012 was developed, the API call didn't exist. Also, you will need to have the underlying volume formatted with Windows 2016, because again, the ReFS version in 2012 does not support this API call.</b></div>
<div>
<br /></div>
<div>
<br /></div>
<div>
So let's look at the code. First, before you clone blocks, there are some requirements which I want to highlight in the code itself:</div>
<blockquote class="tr_bq">
FILE_END_OF_FILE_INFO preallocsz = { filesz };<br />
SetFileInformationByHandle(tgthandle, FileEndOfFileInfo, &preallocsz, sizeof(preallocsz));</blockquote>
This bit of code defines the end of the file. Basically it tells Windows how big the file should be. In this case, the size is filesz, which is the original file size. Why is that important? Well, to use the block clone API, we need to tell it where it should copy its data to: basically a starting point plus how much data we want to copy. But this starting point has to exist, so if we want to make a complete copy, we have to resize the target to be as big as the original.<br />
<br />
<div>
<blockquote class="tr_bq">
if (filebasicinfo.FileAttributes &amp; FILE_ATTRIBUTE_SPARSE_FILE) {<br />
<span class="Apple-tab-span" style="white-space: pre;"> </span>FILE_SET_SPARSE_BUFFER sparse = { true };<br />
<span class="Apple-tab-span" style="white-space: pre;"> </span>DeviceIoControl(tgthandle, FSCTL_SET_SPARSE, &sparse, sizeof(sparse), NULL, 0, dummyptr, NULL);<br />
}</blockquote>
</div>
<div>
Next bit is the sparse part. The "if" statement basically checks if the source file is a sparse file, and if it is, we make the target (tgthandle) sparse as well. So what is a sparse file? Well, if a file is not a sparse file, resizing it will allocate all the data on disk, even if you haven't written anything to it yet. A sparse file only allocates space when you write non-zero data somewhere. So even if it looks like it is 15GB big, it might only consume 100MB on disk. </div>
<div>
<br /></div>
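To see what "sparse" means in practice, here is a quick illustration on Linux (an analogy only; the blog's tool targets ReFS on Windows, but the concept is the same): the file reports a large logical size while barely allocating anything on disk.

```shell
# create a 100 MiB sparse file without writing any data to it
truncate -s 100M sparse.img
# logical size is the full 100 MiB...
stat -c 'logical size: %s bytes' sparse.img
# ...but almost nothing is actually allocated on disk yet
du -k sparse.img
```

Only once you write non-zero data into the file do blocks get allocated.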
<div>
<b>Why is that important? Well again, the API requires that source and target files have the same setting. This code actually runs before the resizing part. The reason is simple: if you do not make the target sparse first, the resize will allocate all the space on disk, even though we never write to it. Not a great way to make space-less fulls.</b></div>
<div>
<b><br /></b></div>
<div>
<blockquote class="tr_bq">
if (DeviceIoControl(srchandle, FSCTL_GET_INTEGRITY_INFORMATION, NULL, 0, &integinfo, sizeof(integinfo), &written, NULL)) {<br />
<span class="Apple-tab-span" style="white-space: pre;"> </span>DeviceIoControl(tgthandle, FSCTL_SET_INTEGRITY_INFORMATION, &integinfo, sizeof(integinfo), NULL, 0, dummyptr, NULL);<br />
}</blockquote>
Finally this bit. Basically it gets the integrity stream information from the source file and then copies it to the target file. Again, they have to be the same for the code to allow block cloning.<br />
<br />
<b>This shows that the source and target file basically have to be pretty much the same. This partially explains why you need an Active Full on your chain before block cloning starts to work: the old backup files might not have been created with ReFS in mind!</b><br />
<b><br /></b>
<b>Also, for integrity streams to work, we don't need to do anything fancy. We just need to tell ReFS that this file should be checked. </b><br />
<b><br /></b>
<br />
<h2 style="text-align: left;">
The Cool Part</h2>
<div>
<blockquote class="tr_bq">
for (LONGLONG cpoffset = 0; cpoffset < filesz.QuadPart; cpoffset += CLONESZ) {<br />
<span class="Apple-tab-span" style="white-space: pre;"> </span>LONGLONG cpblocks = CLONESZ;<br />
<span class="Apple-tab-span" style="white-space: pre;"> </span>if ((cpoffset + cpblocks) > filesz.QuadPart) {<br />
<span class="Apple-tab-span" style="white-space: pre;"> </span>cpblocks = filesz.QuadPart - cpoffset;<br />
<span class="Apple-tab-span" style="white-space: pre;"> </span>}<br />
<span class="Apple-tab-span" style="white-space: pre;"> </span>DUPLICATE_EXTENTS_DATA clonestruct = { srchandle };<br />
<span class="Apple-tab-span" style="white-space: pre;"> </span>clonestruct.FileHandle = srchandle;<br />
<span class="Apple-tab-span" style="white-space: pre;"> </span>clonestruct.ByteCount.QuadPart = cpblocks;<br />
<span class="Apple-tab-span" style="white-space: pre;"> </span>clonestruct.SourceFileOffset.QuadPart = cpoffset;<br />
<span class="Apple-tab-span" style="white-space: pre;"> </span>clonestruct.TargetFileOffset.QuadPart = cpoffset;<br />
<span class="Apple-tab-span" style="white-space: pre;"> </span><span class="Apple-tab-span" style="white-space: pre;"> </span>DeviceIoControl(tgthandle, FSCTL_DUPLICATE_EXTENTS_TO_FILE, &clonestruct, sizeof(clonestruct), NULL, 0, dummyptr, NULL);<br />
}</blockquote>
</div>
That's it. That is all that is required to do the real cloning. So how does it work? Well, first there is a for loop that goes over all the chunks of data of the source file. There is one limitation with the block clone API: you can only copy a chunk of 4GB at a time. In this project CLONESZ is defined as 1GB to be on the safe side.<br />
<br />
So imagine you have a file of 3.5GB. The for loop calculates that the first chunk starts at 0 bytes and the amount of data we want to copy is 1GB. Next time, it calculates that the next chunk starts at 1GB and we need to copy 1GB, and so on.<br />
<br />
However, the fourth time, it detects that there is only 500MB remaining, and instead of copying 1GB, we copy only what is remaining (file size minus where we are now).<br />
<br />
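The same chunking arithmetic can be sketched outside of C++; here it is in shell for the 3.5GB example above, with the project's 1GB CLONESZ:

```shell
# walk a 3.5 GiB file in 1 GiB chunks, clamping the last chunk to the remainder
filesz=$(( 3584 * 1024 * 1024 ))    # 3.5 GiB source file
clonesz=$(( 1024 * 1024 * 1024 ))   # CLONESZ = 1 GiB
offset=0
while [ "$offset" -lt "$filesz" ]; do
    chunk=$clonesz
    if [ $(( offset + chunk )) -gt "$filesz" ]; then
        chunk=$(( filesz - offset ))
    fi
    echo "clone offset=$offset bytes=$chunk"
    offset=$(( offset + chunk ))
done
```

This prints three full 1GB chunks and a final chunk of the remaining 512MB, mirroring what the C++ loop does.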
But how do we call the API? Well, first we need to create a struct (think of it as a set of variables). The first variable references the original file. ByteCount says how much data we want to copy (mostly 1GB). Finally, the source and target file offsets are filled in with the correct starting point. Since we want duplicates, the starting point for the block clone is the same in both files.<br />
<br />
Finally we just tell Windows to execute "FSCTL_DUPLICATE_EXTENTS_TO_FILE" on the target file, which basically invokes the API. We give it the set of variables we filled in correctly. So the clone API call itself plus filling in the variables is only 5 lines of code.<br />
<br />
<b>The important bit here is that you cannot just copy files on a ReFS volume and expect ReFS to do the block cloning. An application really has to tell ReFS to clone data from one file to the other, and both files have to be on the same volume.</b><br />
<b><br /></b>
<b>This has one advantage though. The API just clones data, even if Veeam has compressed or encrypted that data. Since Veeam actively tells ReFS to clone the data, ReFS doesn't have to figure out which data is duplicate, it just does the job. That is a major advantage over deduplication: you can still secure and compress your files. Also, since the clone is just a simple call during the backup, it doesn't require any post-processing. And no post-processing means no exorbitant CPU usage or extra I/O to execute the call.</b><br />
<br />
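As an aside, Linux filesystems like btrfs and XFS expose the same idea through an explicit clone request (FICLONE), surfaced by GNU cp's --reflink option. This is only an analogy to show that cloning is always an explicit application call, never something the filesystem does behind your back (file names here are made up):

```shell
# create a small "backup" file and ask the filesystem to clone it;
# --reflink=auto requests extent sharing and silently falls back to a
# normal copy on filesystems without clone support
echo "some backup data" > original.vbk
cp --reflink=auto original.vbk clone.vbk
cmp original.vbk clone.vbk && echo "files are identical"
```

On a reflink-capable filesystem the clone is instant and consumes no extra space, exactly like the ReFS case.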
<h2 style="text-align: left;">
Seeing it in action</h2>
</div>
<div>
This is how E:\CP looks before executing refs-fclone. Nothing special: an empty directory, and the ReFS volume has 23GB free</div>
<div>
<br /></div>
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgL37xjxXEpj31Cdo-LTxTUSGzKtiiiPGUKMwuhm_lhIeTozPL2RZ_XeOm0OHtdJa9TTTTLYX0BojQos1vIos5p_qGGXCVHY6yIVXP3YNZet0uOSlGxx1cG-owpjnyrWqTp0BJ7bAQe5Ys/s1600/before.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="270" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgL37xjxXEpj31Cdo-LTxTUSGzKtiiiPGUKMwuhm_lhIeTozPL2RZ_XeOm0OHtdJa9TTTTLYX0BojQos1vIos5p_qGGXCVHY6yIVXP3YNZet0uOSlGxx1cG-owpjnyrWqTp0BJ7bAQe5Ys/s320/before.png" width="320" /></a></div>
<br />
Now let's copy a VBK to E:\CP with the tool. It shows that the source file is around 15GB big and that it is cloning 1GB at a time. Interestingly enough, you see that on the last run, it just copies the remainder of the data.<br />
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEii1m2Iqq3Ff_pmJcsucmbwxLnuR2gbd2_Yi9LVp0MMMJcFK8vSsdIB0Vb8qEzHlUK1ijilO3JonHmKPjocAxJJOIX-YRhJWsc-9ieV7PENdrngzOiDThaWS3cNMr1vH7_mDh0REMbB7KE/s1600/running.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="145" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEii1m2Iqq3Ff_pmJcsucmbwxLnuR2gbd2_Yi9LVp0MMMJcFK8vSsdIB0Vb8qEzHlUK1ijilO3JonHmKPjocAxJJOIX-YRhJWsc-9ieV7PENdrngzOiDThaWS3cNMr1vH7_mDh0REMbB7KE/s320/running.png" width="320" /></a></div>
<div>
<br /></div>
<div>
This run took around 5 seconds max to execute this "copy". It seems like nothing really happened. However, if we check the result on disk, we see something interesting: </div>
<div>
<br /></div>
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgITbmwxUr2S-B-JqUCbIUhC05ht6370zJkstEWlR7Z-a4eUJNeZbsdMDAVTY6lrFsKLiQZRb7eVVzIslhMa28EXYRJ33AghpEEqwLoOzwBsC2bQRrkKwJ2eWzBfD4W3pBVRjPxVZXKxOM/s1600/after.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="269" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgITbmwxUr2S-B-JqUCbIUhC05ht6370zJkstEWlR7Z-a4eUJNeZbsdMDAVTY6lrFsKLiQZRb7eVVzIslhMa28EXYRJ33AghpEEqwLoOzwBsC2bQRrkKwJ2eWzBfD4W3pBVRjPxVZXKxOM/s320/after.png" width="320" /></a></div>
<div>
<br /></div>
<div>
The free disk space is still 23GB. However, we can see that a new file was created that is 15GB+. Checksumming both files gives exactly the same result.</div>
<div>
<br /></div>
<div>
Why is this result significant? Well, it shows that the interface to the block clone API is pretty straightforward. It also means that although it looks like Veeam is cloning the data, it is actually ReFS that manages everything under the hood. <b>From a Veeam perspective (and also an end-user perspective), the end result looks exactly like a complete full on disk. So once the block clone API call is made, there is no way to undo it or to get statistics about it. All of the complexity is hidden.</b></div>
<div>
<b><br /></b></div>
<h2 style="text-align: left;">
<b>Why do we need aligned blocks?</b></h2>
<div>
Finally, I want to share this result with you</div>
<div>
<br /></div>
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgE9H21b58dJ0PAFriVc8vTAFMKPUGbOc9xPddkvl6xLN5dEcEfvPfZX8a127Q6Q6Zk_iJ9vsyILjlvN8s3iqlQpwon8Tmb4KPeKqq5BLKHVDU0UzBzHOhtXKznDyGIZyaH6KkAh-ElD28/s1600/doesntwork.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="180" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgE9H21b58dJ0PAFriVc8vTAFMKPUGbOc9xPddkvl6xLN5dEcEfvPfZX8a127Q6Q6Zk_iJ9vsyILjlvN8s3iqlQpwon8Tmb4KPeKqq5BLKHVDU0UzBzHOhtXKznDyGIZyaH6KkAh-ElD28/s320/doesntwork.png" width="320" /></a></div>
<div>
<br /></div>
<div>
In the beginning, I made a small file with some random text like this. In this example, it has 10 letters in it, which means it is 10 bytes on disk. When I tried the tool on it, it didn't work (as you can see), although the tool did work on Veeam backup files. </div>
<div>
<br /></div>
<div>
So why doesn't it work? Well, the clone API has another important limitation: your clone regions must cover a complete set of clusters. By default the cluster size is 4KB (although for Veeam it is strongly recommended to use 64KB to avoid some issues). So if I want to make a call, the starting point has to be a multiple of 4KB. Well, 0 is a multiple of 4KB, so that's OK. However, the amount of bytes you want to copy also has to be a multiple of 4KB, and 10 bytes clearly is not. When I padded the file to be exactly 4KB, everything worked again.</div>
<div>
<br /></div>
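The required padding is just rounding the size up to the next cluster boundary. A quick sketch of the arithmetic, using the default 4KB cluster size:

```shell
cluster=4096   # default ReFS cluster size in bytes
for size in 10 4096 5000; do
    # round the size up to the next multiple of the cluster size
    aligned=$(( (size + cluster - 1) / cluster * cluster ))
    echo "$size bytes -> pad to $aligned bytes"
done
```

So the 10-byte test file has to grow to 4096 bytes before it can be cloned, while a file that is already a multiple of 4KB needs no padding at all.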
<div>
<b>This shows a very important limitation. For block cloning to work, the data has to be aligned, since you cannot copy unaligned data. Veeam backup files are by default not aligned. Thus it is required to run an active full before the block clone API can be used. To give you a visual idea of what this means: on top, a default Veeam backup file; at the bottom, an aligned file as required for the ReFS integration</b></div>
<div>
<b><br /></b></div>
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjr7-yFN_OLhX2a9rQHKZxV75PA9y7XB82CmM26CbnIBy80yOtuFeTVsh2CEIFGL0vHQGaqdFr1XD6CoFniCkQcIQn8gtBmz_EepdTfCcnEMJOGh2rVg_zXWOQwJEyiQBgEq8NZGrtAguc/s1600/aligned.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjr7-yFN_OLhX2a9rQHKZxV75PA9y7XB82CmM26CbnIBy80yOtuFeTVsh2CEIFGL0vHQGaqdFr1XD6CoFniCkQcIQn8gtBmz_EepdTfCcnEMJOGh2rVg_zXWOQwJEyiQBgEq8NZGrtAguc/s1600/aligned.png" /></a></div>
<div class="separator" style="clear: both; text-align: center;">
<br /></div>
<div class="separator" style="clear: both; text-align: center;">
<br /></div>
<div>
Due to compression, data blocks are not always the same size. So to save space, they can simply be appended one after another. However, for the block clone API, we need to align the blocks. The result is that we sometimes have to pad a sector with empty data. So why do we need to align, can we not just clone more data? After all, it doesn't consume more space?</div>
<div>
<br /></div>
<div>
Well, take for example the third block. Unaligned, it spans clusters 2, 3 and 4. Aligned, it is only in clusters 3 and 4. So because of the aligned blocks, we have to clone less data. You might think: why does it matter, since cloning does not take extra space? </div>
<div>
<br /></div>
<div>
Well, first of all it keeps the files more manageable without filling them with junk data. If you copy clusters 2 and 4 from the unaligned file, you basically add data that is not required. Next, if you delete the original file, that data does start "using space on disk": because of the reference, you basically tell ReFS not to delete the data blocks as long as they are referenced by some file. So the longer these chains continue, the more junk data you might accumulate.</div>
<div>
<br /></div>
<div>
<b>So this is the reason why you need an active full. A full backup has to be created with ReFS in mind, otherwise the blocks are not aligned, and in that case Veeam refuses to use the API.</b></div>
<div>
<br /></div>
<div>
If you want to read more about block size, I do recommend this article from my colleague <a href="http://www.virtualtothecore.com/en/refs-cluster-size-with-veeam-backup-replication-64kb-or-4kb/">Luca</a></div>
<div>
<br /></div>
<h2 style="text-align: left;">
One more thing</h2>
<div>
Here is a fun idea. You could use the tool together with a post-process script to create GFS points on a primary chain. Although not recommended, you could for example run a script every month that "clones" the last VBK to a separate folder. The clone is instant, so it doesn't take a lot of time or much extra space. You could script your own retention or manually delete files. Clearly this is not really supported, but it would be a cool idea to keep, for example, one full VBK as a monthly full for a couple of years</div>
<div>
<br /></div>
</div>
Timothy Dewinhttp://www.blogger.com/profile/14126614276831882160noreply@blogger.com0tag:blogger.com,1999:blog-8345042294447404507.post-66000054077555367562016-12-15T12:50:00.002+01:002016-12-15T13:10:50.241+01:00Recovery Magic with Veeam Agent for Linux<div dir="ltr" style="text-align: left;" trbidi="on">
This week the new Veeam Agent for Linux was released. It includes file level recovery but also bare metal recovery via a Live CD. If you just want to do a bare metal recovery, it is fairly easy to use. But you can do more than just a 1-to-1 restore. You also have the option to switch to the command line and change your recovered system before (re)booting into it.<br />
<br />
You might wonder why? Well, because it gives a lot of interesting opportunities. In this example, I have a CentOS 7 installation which I want to restore. However, the system was running LVM, and during the restore I decided not to restore the root as an LVM volume but rather directly to disk. Maybe the other way around would make more sense, but it is just for the fun of showing you the chrooting process.<br />
<br />
Basically, I did a full system recovery (restore whole) but just before restoring, I selected the LVM setup, deleted it, and restored the LVM volume directly back to /dev/sda2. Here is the result:<br />
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjRCRIqjGsurpSXqNqh6LLCml1uWhzZeZ2baBXZMbdiZ67FGhcyHB3V9RJ1ZYBdN4I2H1N-putP1UKSXmVFTQ0TctzaOgVOdhGFly1O4BY7LDSzwjS5WyzbN19f1LsmgK4SD97j9lhFNQY/s1600/changerestore.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="229" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjRCRIqjGsurpSXqNqh6LLCml1uWhzZeZ2baBXZMbdiZ67FGhcyHB3V9RJ1ZYBdN4I2H1N-putP1UKSXmVFTQ0TctzaOgVOdhGFly1O4BY7LDSzwjS5WyzbN19f1LsmgK4SD97j9lhFNQY/s320/changerestore.png" width="320" /></a></div>
<br />
<div class="separator" style="clear: both; text-align: center;">
<br /></div>
I have a <a href="http://dewin.me/img/restorepartitionsdifferently.gif">GIF here</a> of the whole process, but browsers do not seem to like it. You can download it to see the whole setup<br />
<br />
<div class="separator" style="clear: both; text-align: center;">
</div>
<br />
Because we altered the partitions, the system will be unbootable. If you try to boot, you might see the kernel load, but it will fail because it cannot find its filesystem. Here is, for example, a screenshot of such a system that fails to boot because we did not make the corrections described below. Again, this is only needed when you change the partition setup drastically. If you do a straight restore, you can just reboot the system without any manual edits.<br />
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgs4X542AMgN4o8LT3QucBuCoeWBWXva3vw8w6t0Ff2Fsz_0ZmDaR-IfWNXzU6Ydwg7Gk8Ldau9Id8h2ArUuJS-fI6Ix2i6Mrm5IN7lb47rXZyqx-pO8l_4iQdVkGvLAkQZffeEIHYbcNM/s1600/failed.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="240" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgs4X542AMgN4o8LT3QucBuCoeWBWXva3vw8w6t0Ff2Fsz_0ZmDaR-IfWNXzU6Ydwg7Gk8Ldau9Id8h2ArUuJS-fI6Ix2i6Mrm5IN7lb47rXZyqx-pO8l_4iQdVkGvLAkQZffeEIHYbcNM/s320/failed.png" width="320" /></a></div>
<br />
<br />
Once restored I went back to the main menu and selected switch to command line<br />
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhjbga1-puv49SePdWw2MSyVq52b_nNbM9-8hDKoY3lUXXAo99m166Ougiu55A291moIydQHvsdoNMqvryvLv_PlbPkEI4qevCqn1UouFUvXO3Awo0qGyg_NN2ew1FQdZ1Qi3e3sCfyLe4/s1600/commandline.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="240" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhjbga1-puv49SePdWw2MSyVq52b_nNbM9-8hDKoY3lUXXAo99m166Ougiu55A291moIydQHvsdoNMqvryvLv_PlbPkEI4qevCqn1UouFUvXO3Awo0qGyg_NN2ew1FQdZ1Qi3e3sCfyLe4/s320/commandline.png" width="320" /></a></div>
<br />
Once we are there, we need a couple of things. Basically we will mount our new system and chroot into it. You can start by checking if your disk was correctly restored with "fdisk -l /dev/sda" for example. It shows you the layout, which makes it easier for the next commands. Execute the following commands, but do adapt them for your system (you might have a different layout than I do). Make sure to mount your root filesystem before any other filesystem.<br />
<br />
<blockquote class="tr_bq">
mkdir -p /chroot<br />
mount /dev/sda2 /chroot<br />
mount /dev/sda1 /chroot/boot<br />
mount -t proc none /chroot/proc<br />
mount -o bind /dev/ /chroot/dev<br />
mount -t sysfs sys /chroot/sys<br />
chroot /chroot</blockquote>
<br />
The output should be your systems shell<br />
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhF0HfPMBiWIaimGawBy84UE9ByRNpbAHtgAZuWO1cTrZuPpFeTlcb02VzkEHnfXNWsUmV2qimVh-ghwnTASIekwNAHjw5IuSIKsWLmt5fMmh6hfPC5Y7Yx6nsv78cnWGY92zuSbvT4JJE/s1600/buildchroot.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="240" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhF0HfPMBiWIaimGawBy84UE9ByRNpbAHtgAZuWO1cTrZuPpFeTlcb02VzkEHnfXNWsUmV2qimVh-ghwnTASIekwNAHjw5IuSIKsWLmt5fMmh6hfPC5Y7Yx6nsv78cnWGY92zuSbvT4JJE/s320/buildchroot.png" width="320" /></a></div>
<br />
Ok, so we are in the shell. For CentOS 7 we have to do 2 things: first change /etc/fstab and second update the grub2 config. Fstab is quite straightforward. Use "vi /etc/fstab" to edit the file with vi. Then update the line that mounts your root "/". In my case I had to change "/dev/mapper/centos-root" to "/dev/sda2"<br />
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhk-O0kNoCxX5ZY-xWwsrw0InwjD0LIw5XY21a14wguCHyBFQVoBJj6MiSQ_bTZhRUiMPbK8zgGSvliEl4tc5nUs9_Hd_apmEW4XrYbeeS7_T6QFWmqmNBC_tQ1htRmt0MNPcQc_M8UONU/s1600/fstabupdate.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="240" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhk-O0kNoCxX5ZY-xWwsrw0InwjD0LIw5XY21a14wguCHyBFQVoBJj6MiSQ_bTZhRUiMPbK8zgGSvliEl4tc5nUs9_Hd_apmEW4XrYbeeS7_T6QFWmqmNBC_tQ1htRmt0MNPcQc_M8UONU/s320/fstabupdate.png" width="320" /></a></div>
<br />
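If you prefer scripting the edit over vi, the same substitution can be done with sed. The sample line below is an assumption based on a stock CentOS 7 layout (xfs root); test the pattern against it before pointing sed at the real /etc/fstab:

```shell
# demonstrate the fstab substitution on a sample line first
printf '/dev/mapper/centos-root / xfs defaults 0 0\n' |
    sed 's|^/dev/mapper/centos-root|/dev/sda2|'
# once verified, the same expression can be applied in place:
#   sed -i 's|^/dev/mapper/centos-root|/dev/sda2|' /etc/fstab
```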
Now we have to update grub2 (and this is why we actually need the chroot). Use "vi /etc/default/grub" to edit the default grub config. Then remove rd.lvm.lv=centos/root. Here are before and after screenshots. If you are going the other way, you might have to add LVM detection<br />
<br />
Before:<br />
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgsLFqQagZl125uiHKSjYfzDxW0-S8gEmEt7I0J7ywhISge4amb224SDQa6A4-cDflf2L6Rx1sg9Rlom5755N3JR7PPEbBgNacaEJc9ejTMApNjv_GW76cfLiUPPJX6FPQ-IhkimIs0fOc/s1600/grubbefore.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="240" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgsLFqQagZl125uiHKSjYfzDxW0-S8gEmEt7I0J7ywhISge4amb224SDQa6A4-cDflf2L6Rx1sg9Rlom5755N3JR7PPEbBgNacaEJc9ejTMApNjv_GW76cfLiUPPJX6FPQ-IhkimIs0fOc/s320/grubbefore.png" width="320" /></a></div>
<br />
After:<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEh6ZKMVpTW1ZxFFNyQHVOHztk7yjpCTJSew1ohZyQOU6B3cEQW9cxrCxD0cYa55rEAGOG6O7me1OsNsr-hYTk_YQtsy_TBmnnruKgQ_J6IgC6Mf_ggUe489mjjzHSnfuDbyfSPdR3t61no/s1600/grubafter.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="240" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEh6ZKMVpTW1ZxFFNyQHVOHztk7yjpCTJSew1ohZyQOU6B3cEQW9cxrCxD0cYa55rEAGOG6O7me1OsNsr-hYTk_YQtsy_TBmnnruKgQ_J6IgC6Mf_ggUe489mjjzHSnfuDbyfSPdR3t61no/s320/grubafter.png" width="320" /></a></div>
<br />
Now we still need to apply the new defaults by running "grub2-mkconfig -o /boot/grub2/grub.cfg"<br />
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhMs5KZJkJ-RYabEfwx516Clf-8CPj315vRkGBD0ywuWCJ-C9NmrOlQMSyiDEHDWfX3vJn8uF0_SymbW2h6QdZ0P7cYR2tCyW6KNdZXdvJyxD0coikyII92H1ba1crCnFk_gI_RsgwdkDg/s1600/mkgrubconfig.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="240" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhMs5KZJkJ-RYabEfwx516Clf-8CPj315vRkGBD0ywuWCJ-C9NmrOlQMSyiDEHDWfX3vJn8uF0_SymbW2h6QdZ0P7cYR2tCyW6KNdZXdvJyxD0coikyII92H1ba1crCnFk_gI_RsgwdkDg/s320/mkgrubconfig.png" width="320" /></a></div>
<br />
Now exit the chroot by typing "exit". Then unmount all the (pseudo)filesystems we mounted earlier and you are ready to reboot. You can use "mount" without arguments to check the mounted filesystems. Make sure to unmount "/chroot" last<br />
<br />
<blockquote class="tr_bq">
umount /chroot/proc<br />
umount /chroot/sys<br />
umount /chroot/dev<br />
umount /chroot/boot<br />
umount /chroot</blockquote>
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhlcKtkFEuvUXc-b_mR71F3Qh86nd9Cpss3mXAXkh3ZHmpSkEkooJiGcuifV2nHNZ_GbB3oQcS6nJWjfbvpy5pDUBW0-mCKCQqVsant6vThcALYVn4sDeRZlQuOPOC3UZ1GF2KYc_3OjfM/s1600/umount.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="240" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhlcKtkFEuvUXc-b_mR71F3Qh86nd9Cpss3mXAXkh3ZHmpSkEkooJiGcuifV2nHNZ_GbB3oQcS6nJWjfbvpy5pDUBW0-mCKCQqVsant6vThcALYVn4sDeRZlQuOPOC3UZ1GF2KYc_3OjfM/s320/umount.png" width="320" /></a></div>
<br />
<br />
Now reboot and see your system transformed. You can type "exit" to go back to the "GUI" interface and reboot from there, or just type reboot directly from the command line<br />
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhJXPgjhCDJgqwyWTMBiI3km6okcFm41AVWb5DGtMVZ46pzxIV3g1DPx5ECE5gIxHwY5xVsvegWLOZMpCiiWP14BLJlcFAvMGf748N4tUopYC9jMYmwibAGtYE1GNsdjiqOLouuxtkeM6o/s1600/changed.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="240" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhJXPgjhCDJgqwyWTMBiI3km6okcFm41AVWb5DGtMVZ46pzxIV3g1DPx5ECE5gIxHwY5xVsvegWLOZMpCiiWP14BLJlcFAvMGf748N4tUopYC9jMYmwibAGtYE1GNsdjiqOLouuxtkeM6o/s320/changed.png" width="320" /></a></div>
<br /></div>
Timothy Dewinhttp://www.blogger.com/profile/14126614276831882160noreply@blogger.com0tag:blogger.com,1999:blog-8345042294447404507.post-2067488845877544412016-09-22T14:02:00.000+02:002016-09-22T14:03:17.612+02:00Figuring out Surebackup and a remote virtual lab<div dir="ltr" style="text-align: left;" trbidi="on">
<h2 style="text-align: left;">
The Idea</h2>
If you want to set up a SureBackup job, the most difficult part is setting up the virtual lab. In the past, great articles have been written about how to set them up, but a common challenge is that the backup server and the virtual lab router have to be in the same network. In this article, I wanted to take the time to write out a small experiment I did the other day, to see if I could easily get around this. This question pops up once in a while, and now at least I can tell that it is possible.<br />
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEg7qdYmda-2Xx99GPHamJTu5bbSk_Eo7fGHYK-uIiY0pL1BPpHH0n-CYBuclJZdNe6Csc0_wAUsUk9c7uszkJPimRjVqXwa3Yy3IN3a3aDvGeoZKqJG4Tb6BhWec0PvvNwq8C9vVwizRHk/s1600/nsetup.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="320" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEg7qdYmda-2Xx99GPHamJTu5bbSk_Eo7fGHYK-uIiY0pL1BPpHH0n-CYBuclJZdNe6Csc0_wAUsUk9c7uszkJPimRjVqXwa3Yy3IN3a3aDvGeoZKqJG4Tb6BhWec0PvvNwq8C9vVwizRHk/s320/nsetup.png" width="287" /></a></div>
<div class="separator" style="clear: both; text-align: center;">
<br /></div>
In this example, the virtual lab is called "remotelab". A Linux appliance has been made, called "remotevlab", which sits in the same network as the backup server. It routes requests from the backup server to a bubble network called "remotevlab VM Network". The bubble network mimics the production network and reuses the same IP range. To allow you to communicate with the segment, the appliance uses masquerading. In my example, I used a mask of 192.168.5.x, so if I want to access the AD server, I contact 192.168.5.103, and the router translates that to 192.168.1.103 when the packet passes.<br />
<br />
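To give an idea of what such masquerading boils down to, here is a hedged sketch only (the real appliance configuration is Veeam's own and may well differ): a 1:1 NAT of the masquerade range onto the bubble range, expressed with iptables:

```shell
# map the masquerade range 1:1 onto the bubble range: a packet sent to
# 192.168.5.103 is rewritten to 192.168.1.103 as it enters the appliance;
# connection tracking rewrites the replies back automatically
iptables -t nat -A PREROUTING -d 192.168.5.0/24 -j NETMAP --to 192.168.1.0/24
```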
For those who have already set up virtual labs, this is probably not rocket science. However, for this scheme to work, the backup server needs to be aware that it should push IP packets for the 192.168.5.x range to the remotevlab router. So when you start a SureBackup job, it automatically creates a static route on the backup server. When the backup server and the remotevlab router are in the same production network, all is good.
<br />
However, when they are in different networks, suddenly it doesn't work anymore. That is because your gateway is probably not aware of the 192.168.5.x segment. So when the packet is sent to that gateway, it just drops it or routes it to its default gateway (which in turn might drop it). One way to resolve the issue is to create those static routes in the uplink router(s), but network admins are not per se the infrastructure admins, and most of the time they are quite reluctant to add static routes to routers they do not control (most of the time they are quite reluctant to execute any of the infra admins' requests, but on a sunny day they might consider opening some ports). So let's look at the following experiment
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEi599zz7yfsIKcs2WyA5k9scj6v6MjrQ3a1g3wsgR5562rdZ3F3dvVrbHy_a27EqEVMr91NogqOpCO48qByEVyVEJAVcJ2poFcaKdXOHrQReHdU0VfnR5iQhTL3wCWPFmXHMPnLxaJnxqI/s1600/setup.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="320" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEi599zz7yfsIKcs2WyA5k9scj6v6MjrQ3a1g3wsgR5562rdZ3F3dvVrbHy_a27EqEVMr91NogqOpCO48qByEVyVEJAVcJ2poFcaKdXOHrQReHdU0VfnR5iQhTL3wCWPFmXHMPnLxaJnxqI/s320/setup.png" width="275" /></a></div>
<br />
In my small home lab, I don't really have 2 locations connected via MPLS or different routers. So to emulate it, I created an "internalnet" which is not connected to a physical NIC. I connected the remotevlab there (the production side of the virtual lab router). In this network, I use a small range called "192.168.4.x".<br />
<br />
<h2 style="text-align: left;">
The Connection Broker</h2>
Ok, so far so good. Now we need a way for the v95backup server to talk to the remotevlab. To do this, a small Linux VM was created with CentOS 7 minimal. It has 2 virtual network adapters. I called them eno1 and eno3, but these are just truncated names as you will see in the config. eno1 is assigned an IP in a production range. In this case it is the same range as the v95backup server, but you will soon see that this doesn't have to be the case. The other adapter, eno3, is connected to the same network as remotevlab, and this is by design. In fact, it is acting as the default gateway for that segment. Here are some copies of the configuration:<br />
<br />
eno1:<br />
<blockquote class="tr_bq">
# [root@vlabcon network-scripts]# cat ifcfg-eno16780032<br />
TYPE=Ethernet<br />
BOOTPROTO=static<br />
IPADDR=192.168.1.199<br />
NETMASK=255.255.255.0<br />
GATEWAY=192.168.1.1<br />
DNS1=8.8.8.8<br />
DEFROUTE=yes<br />
PEERDNS=yes<br />
PEERROUTES=yes<br />
IPV4_FAILURE_FATAL=no<br />
IPV6INIT=no<br />
IPV6_AUTOCONF=yes<br />
IPV6_DEFROUTE=yes<br />
IPV6_PEERDNS=yes<br />
IPV6_PEERROUTES=yes<br />
IPV6_FAILURE_FATAL=no<br />
NAME=eno16780032<br />
UUID=1874c74a-6882-435f-a465-f5fb11c60901<br />
DEVICE=eno16780032<br />
ONBOOT=yes</blockquote>
eno3:<br />
<blockquote class="tr_bq">
# [root@vlabcon network-scripts]# cat ifcfg-eno33559296<br />
TYPE=Ethernet<br />
BOOTPROTO=static<br />
IPADDR=192.168.4.1<br />
NETMASK=255.255.255.0<br />
DEFROUTE=no<br />
PEERDNS=no<br />
PEERROUTES=no<br />
IPV4_FAILURE_FATAL=no<br />
IPV6INIT=no<br />
IPV6_AUTOCONF=no<br />
IPV6_DEFROUTE=no<br />
IPV6_PEERDNS=no<br />
IPV6_PEERROUTES=no<br />
IPV6_FAILURE_FATAL=no<br />
NAME=eno33559296<br />
DEVICE=eno33559296<br />
ONBOOT=yes</blockquote>
You will also need to set up routing (forwarding) and a static route, so that the appliance knows how to reach the masquerade network. This is fairly simple: first, create a route script<br />
<blockquote class="tr_bq">
#[root@vlabcon network-scripts]# cat route-eno33559296<br />
192.168.5.0/24 via 192.168.4.2 dev eno33559296</blockquote>
Then enable the corresponding kernel parameter. You can check with sysctl -a whether net.ipv4.ip_forward is already set to 1 (on a clean install it should not be):<br />
<blockquote class="tr_bq">
# enable forwarding via a drop-in file in /etc/sysctl.d/<br />
echo "net.ipv4.ip_forward = 1" > /etc/sysctl.d/90-forward.conf<br />
sysctl -p /etc/sysctl.d/90-forward.conf<br />
# check with sysctl -a | grep net.ipv4.ip_f</blockquote>
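<div>
<br /></div>
<div>
To see that change in isolation, here is a small sketch (written under /tmp so it runs without root; on the real appliance the file lives in /etc/sysctl.d/) that creates the same one-line drop-in and parses the value back out, the way "sysctl -n net.ipv4.ip_forward" would report it:</div>

```shell
# Write the same one-line drop-in used above, but under /tmp so this
# sketch runs unprivileged.
conf=/tmp/90-forward.conf
echo "net.ipv4.ip_forward = 1" > "$conf"

# Read the value back, mimicking "sysctl -n net.ipv4.ip_forward"
awk -F' = ' '$1 == "net.ipv4.ip_forward" {print $2}' "$conf"
```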
So basically we have set up yet another router. Now, how does the backup server talk to the networks behind the appliance without adding static routes all over the production network? We can use a VPN tunnel. Any VPN software will do, but in this example I chose PPTP. You might argue that it is not that secure, but this is not really about security; it is just about getting a tunnel. Also, I'm not really a network expert, and PPTPD seemed extremely easy to set up. Finally, because the protocol is quite universal, you don't have to install any VPN client on the backup server: it is built into Windows. I followed this tutorial <a href="https://www.digitalocean.com/community/tutorials/how-to-setup-your-own-vpn-with-pptp" rel="nofollow">https://www.digitalocean.com/community/tutorials/how-to-setup-your-own-vpn-with-pptp</a>. Although it was written for CentOS 6, most of it applies to CentOS 7.<br />
<br />
The first thing we need to do is install PPTPD. It is hosted in the EPEL repository, so you might need to add that repository if you have not done so yet.<br />
<blockquote class="tr_bq">
rpm -Uvh https://dl.fedoraproject.org/pub/epel/epel-release-latest-7.noarch.rpm</blockquote>
Then install the software<br />
<blockquote class="tr_bq">
yum install pptpd</blockquote>
<div>
First, assign addresses to the ppp0 adapter and to the clients that log in. To do so, configure localip (ppp0) and remoteip (clients) in /etc/pptpd.conf.</div>
<div>
<blockquote class="tr_bq">
#added at the end of /etc/pptpd.conf<br />
localip 192.168.3.1<br />
remoteip 192.168.3.2-9</blockquote>
</div>
<div>
The next step is to create a client login. By default it uses a plaintext password. Again, since this is not really about security (we are not building tunnels over the internet here), that is quite OK. You set logins up in /etc/ppp/chap-secrets: "surebackup" is the login, "allyourbase" the password, and "pptpd" is just the default server name. The "*" means that any client IP can use this login, so if you want, you can add a bit of security by specifying only the backup server's IP.</div>
<div>
<blockquote class="tr_bq">
#added at the end of /etc/ppp/chap-secrets<br />
surebackup pptpd allyourbase *</blockquote>
</div>
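<div>
For clarity, a chap-secrets line has four whitespace-separated columns: client, server, secret, and allowed client IPs. Here is a tiny sketch (pure string handling, nothing PPTP-specific) that splits the sample line from above and labels each field:</div>

```shell
# The chap-secrets line used above, split into its four columns:
#   client  server  secret  allowed-client-IPs
line="surebackup pptpd allyourbase *"
read client server secret allowed <<EOF
$line
EOF
echo "client=$client server=$server secret=$secret allowed=$allowed"
# -> client=surebackup server=pptpd secret=allyourbase allowed=*
```

<div>
Replacing the trailing "*" with the backup server's IP is what restricts who may use the login.</div>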
<div>
I did not add the DNS config to /etc/ppp/options.pptpd, as we don't really need it. Now the only thing left to do is start the service and enable it at boot.</div>
<div>
<blockquote class="tr_bq">
systemctl enable pptpd<br />
systemctl restart pptpd</blockquote>
</div>
<h2 style="text-align: left;">
v95Backup configuration</h2>
<div>
With the server side done, we can now head over to the backup server. Add a new VPN connection and set its type to PPTP.</div>
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjLYvwI1IOucghqjwIr5SzeqxxiqUpBv4tZsqnfy8cXHOVerWGodYuLhxRxDJgF16sdcKYQN9zHjCLrTaV5U3auDGpniZD-PdoDn_jj9TNHiXsl_35CeeDtxNq6zCvPZ7IL7G-K3mTR9JA/s1600/vpn.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="222" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjLYvwI1IOucghqjwIr5SzeqxxiqUpBv4tZsqnfy8cXHOVerWGodYuLhxRxDJgF16sdcKYQN9zHjCLrTaV5U3auDGpniZD-PdoDn_jj9TNHiXsl_35CeeDtxNq6zCvPZ7IL7G-K3mTR9JA/s320/vpn.png" width="320" /></a></div>
<div>
<br /></div>
<div>
So the connection is called robo1 and uses PPTP. Specify the username surebackup and the password allyourbase. I also changed the adapter settings: by default, the PPTP connection creates a default route, which means that once you are connected to the appliance, you can no longer reach other networks. To fix that, you can disable this behavior.</div>
<div>
<br /></div>
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiQvn8NDAOpb4uEzetVS6PAS2AA2Um5XLNVkOi-NgwwPpetHvZaby2ImrX39vm2cUMkbbv7AhL0BdMQ9-ojOHOMkCeDl6zoXYBXke9jTVtKbFB56upsJ2GCVJ0PSYChuSfQGLWZHbQnT-8/s1600/defaultgw.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="300" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiQvn8NDAOpb4uEzetVS6PAS2AA2Um5XLNVkOi-NgwwPpetHvZaby2ImrX39vm2cUMkbbv7AhL0BdMQ9-ojOHOMkCeDl6zoXYBXke9jTVtKbFB56upsJ2GCVJ0PSYChuSfQGLWZHbQnT-8/s320/defaultgw.png" width="320" /></a></div>
<div>
<br /></div>
<div>
In the adapter settings > Networking tab > IPv4 > Advanced, uncheck "Use default gateway". I also turned off the automatic metric and entered the number 5. Because you disabled the default gateway, the backup server now only uses this connection for the "192.168.3.x" range, so it can no longer talk to the vlab router. To fix that, add a persistent route so that the remotevlab router can be discovered.</div>
<blockquote class="tr_bq">
route -p add 192.168.4.0 mask 255.255.255.0 192.168.3.1 metric 3 if 24</blockquote>
<div>
It should be straightforward, except for "if 24". This tells Windows to route over interface 24, which in this example is the robo1 interface, as shown below (use "route print" to discover your interface number).</div>
<div>
<br /></div>
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgSMgUTmQURYghzoAUuoHZhLs_V1onKxDJUE7Z46tXtZ9U7uTzmIyxZOl1oNHKLYIkOWoMJIonQ8wAU0hI2e_Q3CbiWczsx1YrPrWg29FpiS0zktVfp-WIAUvSDmVzWLf8RMvl2TXAyb_w/s1600/interfacenum.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="72" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgSMgUTmQURYghzoAUuoHZhLs_V1onKxDJUE7Z46tXtZ9U7uTzmIyxZOl1oNHKLYIkOWoMJIonQ8wAU0hI2e_Q3CbiWczsx1YrPrWg29FpiS0zktVfp-WIAUvSDmVzWLf8RMvl2TXAyb_w/s320/interfacenum.png" width="320" /></a></div>
<div>
<br /></div>
<div>
Now you have to make sure that the connection is up when the Surebackup test starts. One way to do this is to schedule a task that regularly checks the connection and redials it if it has dropped. For my test, I just disabled the Surebackup schedule and made a small PowerShell script that dials the connection and then starts the job:</div>
<div>
<br /></div>
<blockquote class="tr_bq">
# load the Veeam PowerShell snap-in<br />
asnp veeampssnapin<br />
# dial the robo1 VPN connection before starting the job<br />
rasdial robo1 surebackup allyourbase<br />
# start the Surebackup job asynchronously<br />
get-vsbjob -name "surebackup job 2" | start-vsbjob -runasync</blockquote>
<div>
<br /></div>
<div>
You may notice a strange scheduling time; that is because I configured the task 10 minutes ahead and then restarted the backup server, just to see if it would work when nobody is logged in. Very importantly, as with other tasks, make sure the task has the correct rights to start the vsbjob. I configured the task to run with administrator rights.</div>
<div>
<br /></div>
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjVm-JjmGYRu-YEAuFk3S-AAzCZi2lS1j5FNzTHNnFAD9w92KN6KfLlMw8FWt7kRlS3C4heMxoa31YvVXJ9K8qj8qwb8fiRsKr_L9-yFQMWL1-lZ7FwvREQSujPtTRENmkQtXOH32Y8vbk/s1600/scheduledtask.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="239" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjVm-JjmGYRu-YEAuFk3S-AAzCZi2lS1j5FNzTHNnFAD9w92KN6KfLlMw8FWt7kRlS3C4heMxoa31YvVXJ9K8qj8qwb8fiRsKr_L9-yFQMWL1-lZ7FwvREQSujPtTRENmkQtXOH32Y8vbk/s320/scheduledtask.png" width="320" /></a></div>
<div>
<br /></div>
<div>
<br /></div>
<div>
The result: it just works. Here are some screenshots:</div>
<div>
<br /></div>
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEir9lRoktAQEUMPGpABPUgO2bvNwcgGwVP9ztAQb1hTcKP_usG1xgQieLJQN4KXSJKVIzMhD7ZfnzDjitLHe3q4tywVNTLPD47KDq4ggjYrDsyAFDOL6Uh8TH4oH0MmFb9hv5M_5bw4BjI/s1600/vlab1.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="227" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEir9lRoktAQEUMPGpABPUgO2bvNwcgGwVP9ztAQb1hTcKP_usG1xgQieLJQN4KXSJKVIzMhD7ZfnzDjitLHe3q4tywVNTLPD47KDq4ggjYrDsyAFDOL6Uh8TH4oH0MmFb9hv5M_5bw4BjI/s320/vlab1.png" width="320" /></a></div>
<br />
The virtual lab configuration. You can see that it is connected to the internalnet. It is very important that you point to the connection broker (192.168.4.1) as the default gateway.<br />
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEicsNnLQ_Pmny352NsXI_QkmYW9IZucJiBKTNOMrYGD5fqY4ZGcrN_8vg7fua5bfIWDMBLRFrpaxs3sF5Q1jIl2jhUl_V5rzKfHtYRm1gCOm_o7NIIR_hUrj1gO6zSlo__y08clvi57Qq0/s1600/vlab2.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="227" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEicsNnLQ_Pmny352NsXI_QkmYW9IZucJiBKTNOMrYGD5fqY4ZGcrN_8vg7fua5bfIWDMBLRFrpaxs3sF5Q1jIl2jhUl_V5rzKfHtYRm1gCOm_o7NIIR_hUrj1gO6zSlo__y08clvi57Qq0/s320/vlab2.png" width="320" /></a></div>
<div>
<br /></div>
<div>
The vSphere network</div>
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjveLj_FJSHJIfyy5m3VatU9r1smeqaqqGWcnKaU6t2a0gb8KtWF_EW9ShBFDuqyB5cnxlqmvk0DgjpZoloBtq4DwadoIzGCy-21sKTpD1JA1oKknamWsY2x-X73Y_VBYkh50SMOaqH39g/s1600/vspnet.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="192" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjveLj_FJSHJIfyy5m3VatU9r1smeqaqqGWcnKaU6t2a0gb8KtWF_EW9ShBFDuqyB5cnxlqmvk0DgjpZoloBtq4DwadoIzGCy-21sKTpD1JA1oKknamWsY2x-X73Y_VBYkh50SMOaqH39g/s320/vspnet.png" width="320" /></a></div>
<div>
<br /></div>
<div>
Robo1 connected</div>
<div>
<br /></div>
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhK75ywAZe6JFZscgC8raA6smknYT6skH5vAwT2mIGQKNP1mTvQQcjYea55YaLtu9GnOkrzbz9E_yrAEUE8H234WXbJork57aB3kpXKubtGNeCly6kYA7yS15VmwcX8bj4j4f43oHgqRGY/s1600/robo1.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="314" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhK75ywAZe6JFZscgC8raA6smknYT6skH5vAwT2mIGQKNP1mTvQQcjYea55YaLtu9GnOkrzbz9E_yrAEUE8H234WXbJork57aB3kpXKubtGNeCly6kYA7yS15VmwcX8bj4j4f43oHgqRGY/s320/robo1.png" width="320" /></a></div>
<div>
<br /></div>
<div>
The routing table on the backup server. Here you can see the static route for 192.168.4.x going over the PPTP connection. What is even nicer: because we defined the 192.168.4.x route, when Surebackup adds the 192.168.5.x route, Windows routes it correctly over the 192.168.3.x PPTP network thanks to the persistent static route.</div>
<div>
<br /></div>
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgWdFE4mlsDd9nb_q2tfBiTJ6syO0r64bsnSQFskvj40qwWx0FAEc0OtoIK-C4i5rpOFMmEdDzWCgQoE9Yimj3zsVkd181jKFEIycVmFmw7b5qXwataY5FfO4DeC4Pjuu4v_Xoyeu8qXWI/s1600/routes.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="229" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgWdFE4mlsDd9nb_q2tfBiTJ6syO0r64bsnSQFskvj40qwWx0FAEc0OtoIK-C4i5rpOFMmEdDzWCgQoE9Yimj3zsVkd181jKFEIycVmFmw7b5qXwataY5FfO4DeC4Pjuu4v_Xoyeu8qXWI/s320/routes.png" width="320" /></a></div>
<div>
<br /></div>
<div>
Finally, a succeeded test</div>
<div>
<br /></div>
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiUaLScvNLn5OIJ_qGnw1H4vsQM7BE7vKg4OTBtcZgaFEDU_4KPjBf-cnLYo9zVFG3vG-KSNlgWD3OazUA9AgEAreb6_DvaH7CeaTvc5ONu04Wk65N6rAYefWhN5pDq5P6OJhvbOgOZxLE/s1600/test.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="320" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiUaLScvNLn5OIJ_qGnw1H4vsQM7BE7vKg4OTBtcZgaFEDU_4KPjBf-cnLYo9zVFG3vG-KSNlgWD3OazUA9AgEAreb6_DvaH7CeaTvc5ONu04Wk65N6rAYefWhN5pDq5P6OJhvbOgOZxLE/s320/test.png" width="315" /></a></div>
<div class="separator" style="clear: both; text-align: center;">
<br /></div>
<h2 style="text-align: left;">
Conclusion</h2>
<div>
The lab setup works, and the setup is relatively easy. If you made an OVF or live CD of the appliance, it would be pretty easy to duplicate this setup across multiple locations. You might need to consider smaller IP ranges.</div>
<div>
<br /></div>
<div>
PPTP might not be the best protocol, so other VPN solutions could be considered. For example, you could remove another subnet if you bridged the VPN port to the internal network directly, or if you could create a stretched layer 2 connection. However, my test was more about seeing what needs to be done to get this working at all. What I liked most is the good compatibility between Windows and Linux: I didn't need to install special software on the backup server.</div>
<div>
<br /></div>
<div>
Another use case is that you could allow other laptops in the network to access the virtual lab for testing. If they don't really need internet access (or if you set up the correct masquerading/DNS in iptables/pptpd), they could just connect with a predefined VPN connection in Windows, even if they are not on the same segment as the backup server (something the network admins would probably appreciate too).<br />
<br /></div>
<h2 style="text-align: left;">
Appendix : Hardening with IPTables</h2>
<div>
For a bit more hardening, and to document the ports, I also enabled iptables (instead of firewalld). On my install firewalld was not installed/configured, but you might need to remove it first. Check out <a href="http://www.faqforge.com/linux/how-to-use-iptables-on-centos-7/">http://www.faqforge.com/linux/how-to-use-iptables-on-centos-7/</a></div>
<div>
<br /></div>
<div>
I based the iptables configuration on the Arch Linux documentation found here: <a href="https://wiki.archlinux.org/index.php/PPTP_server#iptables_firewall_configuration">https://wiki.archlinux.org/index.php/PPTP_server#iptables_firewall_configuration</a></div>
<div>
<br /></div>
<div>
First we need to install the service and enable it at boot</div>
<blockquote class="tr_bq">
yum install iptables-services<br />
systemctl enable iptables</blockquote>
Then I modified the config. Here is a dump of the config<br />
<blockquote class="tr_bq">
#[root@vlabcon ~]# cat /etc/sysconfig/iptables<br />
# sample configuration for iptables service<br />
# you can edit this manually or use system-config-firewall<br />
# please do not ask us to add additional ports/services to this default configuration<br />
*filter<br />
:INPUT ACCEPT [0:0]<br />
:FORWARD ACCEPT [0:0]<br />
:OUTPUT ACCEPT [0:0]<br />
-A INPUT -m state --state RELATED,ESTABLISHED -j ACCEPT<br />
-A INPUT -p icmp -j ACCEPT<br />
-A INPUT -i lo -j ACCEPT<br />
-A INPUT -p tcp -m state --state NEW -m tcp --dport 22 -j ACCEPT<br />
-A INPUT -p tcp -m state --state NEW -m tcp --dport 1723 -j ACCEPT<br />
-A INPUT -p 47 -j ACCEPT<br />
-A FORWARD -i ppp+ -o eno33559296 -j ACCEPT<br />
-A FORWARD -o ppp+ -i eno33559296 -j ACCEPT<br />
-A INPUT -j REJECT --reject-with icmp-host-prohibited<br />
-A FORWARD -j REJECT --reject-with icmp-net-unreachable<br />
COMMIT</blockquote>
<br />
<div>
Basically, I added two input rules. For the PPTP connection, open TCP port 1723; you also need to allow the GRE protocol (-p 47). This shows a weakness of PPTP: you need to ask your firewall admins to open both, and more importantly, it will probably not survive NAT/PAT. The good thing is that the overhead should be minimal, although that matters little for these simple tests. To allow routing to occur, forwarding must be allowed between the ppp connection and the eno3 interface. Finally, start iptables with</div>
<blockquote class="tr_bq">
systemctl start iptables</blockquote>
<div>
If everything is configured correctly, the setup should still work, but people who can reach the connection broker cannot necessarily get to the virtual lab: they first need to make the PPTP connection.</div>
<div>
<br /></div>
<div>
Notice that no masquerading has been set up towards the remotevlab router (as in the Arch Linux doc). That is because the remotevlab router uses the connection broker as its default gateway, so when it replies, it always sends the reply back through the connection broker.</div>
<div>
<br /></div>
</div>
Timothy Dewinhttp://www.blogger.com/profile/14126614276831882160noreply@blogger.com0tag:blogger.com,1999:blog-8345042294447404507.post-84942233728582890242016-06-10T12:23:00.004+02:002016-06-10T13:31:15.918+02:00Veeam repository in an LXC container<div dir="ltr" style="text-align: left;" trbidi="on">
In the past, I have always wanted to investigate the concept of a Linux repository with extra safety, like a chroot. However, I never took the time to work on an example. With the rise of Docker, I wondered: could I make a repository out of a Docker container? After some research, I understood that Docker brings a whole storage abstraction layer which I really didn't need, and that the containers would run in privileged mode. So I decided to try out LXC and go a bit deeper into container technology.<br />
<div>
<br /></div>
<div>
Before we start: this is just an experimental lab setup, so do proper testing before you put anything like this in production. It is an idea, not a "reference architecture", so I imagine improvements can be made. Here is a diagram of the setup we will discuss in this post.</div>
<div>
<br /></div>
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhZN00FqVAC5GQ9ojVmoNbhMCcSByMwfseXtAEN_Ix6vxwv0L10RQV0u2RJFRiyDa30SWQcKG2qkdhoaKAZxWmzCUIriQvowHeIKjEHoXv-YfwWt9n0FLJhkuJuRcWbNrNSuF28w11vLAc/s1600/lxc.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="167" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhZN00FqVAC5GQ9ojVmoNbhMCcSByMwfseXtAEN_Ix6vxwv0L10RQV0u2RJFRiyDa30SWQcKG2qkdhoaKAZxWmzCUIriQvowHeIKjEHoXv-YfwWt9n0FLJhkuJuRcWbNrNSuF28w11vLAc/s320/lxc.png" width="320" /></a></div>
<div>
<br /></div>
<div>
The first thing you might notice is that we set up a bridge between the host Ethernet card and the containers' virtual network ports. In some LXC setups, NAT is preferred between the real world and a disconnected virtual bridge. I guess that works for web servers, but in this example Veeam needs to be able to connect to each container via SSH, and you might need to open up multiple ports. When you bridge the containers' virtual network ports and the outside Ethernet port, you basically create a software switch. The advantage is that you can assign an individual IP to every container, as if it were a standalone machine.</div>
<div>
<br /></div>
<div>
Next to that, we will make a separate LVM volume group. You might notice that the root of every tenant is colored green. That's because we will use lxc-clone with its snapshot functionality: set up the root (container) once, then clone it via an LVM snapshot so you instantly have a new container/tenant. Finally, an LVM volume called "_repo" is assigned to each individual container and mounted under /mnt. This is where the backups themselves are stored, separated from the root system.</div>
<div>
<br /></div>
<div>
The first step is of course installing Debian. I'm not going to show it, as it is basically following the wizard, which is straightforward. I do want to mention that I assigned 5 GiB for the root, but it turns out that after all the configuration I only use 1.5 GiB. So if you want to save some GBs, you could assign, for example, only 3 GiB.</div>
<div>
<br /></div>
<div>
One important note: the volume group for storing the container roots needs to have a different name than the container itself, in order for lxc-clone to work correctly. I ran into an issue where cloning did not work <a href="https://lists.linuxcontainers.org/pipermail/lxc-users/2014-February/006307.html" rel="nofollow">because of it</a>. So for example, call the volume group "vg_tenstore" and the containers/logical volumes "tenant". During the initial install, only set up the volume group; the logical volumes will be made by LXC during configuration.</div>
<div>
<br /></div>
<div>
After the install, I installed drivers and updates by executing the following. If you don't run on VMware, you of course do not need the tools. You can also go lightweight by not installing the dkms version.</div>
<div>
<blockquote class="tr_bq">
apt-get update<br />
apt-get upgrade<br />
apt-get install open-vm-tools-dkms<br />
apt-get install openssh-server<br />
reboot</blockquote>
</div>
<div>
After the system has rebooted, you can start an SSH session to it. Now let's install the LXC software (based on <a href="https://wiki.debian.org/LXC" rel="nofollow">https://wiki.debian.org/LXC</a>).</div>
<blockquote class="tr_bq">
apt-get install lxc bridge-utils libvirt-bin debootstrap</blockquote>
<div>
Once that is done, let's set up the bridge. For this edit /etc/network/interfaces. Here is a copy of my configuration</div>
<div>
<blockquote>
# This file describes the network interfaces available on your system<br />
# and how to activate them. For more information, see interfaces(5).<br />
source /etc/network/interfaces.d/*<br />
# The loopback network interface<br />
auto lo<br />
iface lo inet loopback<br />
# The primary network interface<br />
allow-hotplug eth0<br />
#auto eth0<br />
#iface eth0 inet static<br />
# address 192.168.204.7<br />
# netmask 255.255.255.0<br />
# network 192.168.204.0<br />
# broadcast 192.168.204.255<br />
# gateway 192.168.204.1<br />
# # dns-* options are implemented by the resolvconf package, if installed<br />
# dns-nameservers 192.168.204.2<br />
# dns-search t.lab<br />
auto br0<br />
iface br0 inet static<br />
bridge_fd 0<br />
bridge_stp off<br />
bridge_maxwait 0<br />
bridge_ports eth0<br />
address 192.168.204.7<br />
netmask 255.255.255.0<br />
broadcast 192.168.204.255<br />
gateway 192.168.204.1<br />
# dns-* options are implemented by the resolvconf package, if installed<br />
dns-nameservers 192.168.204.2<br />
dns-search t.lab</blockquote>
</div>
<div>
Notice that I kept the old configuration in comments. You can see that the whole address configuration is assigned to the bridge (br0), so the bridge literally gets the host IP. After modifying the file, I restarted the OS just to check that the network settings would stick, but you can also restart networking via "/etc/init.d/networking restart". The result should look something like this:</div>
<div>
<br /></div>
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiCzGrjXKmK2zftWkgHZlUKOU871mzayu-Qu1mWzt4-9-CHbhq5pahXogVzbEFgDSPow4mu2Xg1ISMTt1f5uOeaoS21ykhFDvwaQoWQG4Vedpiq_nc-a78m36SXwbJsyvH2CZFJqRNNIX8/s1600/ifconfig.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="320" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiCzGrjXKmK2zftWkgHZlUKOU871mzayu-Qu1mWzt4-9-CHbhq5pahXogVzbEFgDSPow4mu2Xg1ISMTt1f5uOeaoS21ykhFDvwaQoWQG4Vedpiq_nc-a78m36SXwbJsyvH2CZFJqRNNIX8/s320/ifconfig.png" width="291" /></a></div>
<div>
<br /></div>
<div>
<br /></div>
<div>
OK, that was rather easy. Now let's add some lines to the default config so that every container is connected to this bridge by default. To do so, edit /etc/lxc/default.conf and add</div>
<div>
<blockquote class="tr_bq">
lxc.network.type = veth<br />
lxc.network.flags = up<br />
lxc.network.link = br0</blockquote>
</div>
<div>
In fact, if you look at the previous screenshot, you can see that there are already 2 containers running, as 2 "vethXXXXX" interfaces already exist.</div>
<div>
<br /></div>
<div>
Now let's set up the root for our container. If you forgot your volume group name, use "lvm vgdisplay" to display all your volume groups. Then execute the create command:</div>
<blockquote class="tr_bq">
lxc-create -n tenant -t debian -B lvm --lvname tenant --vgname vg_tenstore</blockquote>
<div>
This creates a new container called tenant, based on the debian template. A template is a preconfigured script that builds a certain flavor of container; in this case I used the Debian flavor, but you can find other flavors under "/usr/share/lxc/templates". The backing store is LVM, and the logical volume that will be created is called "tenant", in the volume group "vg_tenstore". As mentioned before, keep the names of the container and the logical volume the same.</div>
<div>
<br /></div>
<div>
Now that the container is created, you can edit its environment before starting it. I made a couple of edits so that everything boots smoothly. Start by mounting the filesystem:</div>
<blockquote class="tr_bq">
mount /dev/vg_tenstore/tenant /mnt</blockquote>
<div>
First, I edited the network file /mnt/etc/network/interfaces to set a static IP:</div>
<div>
<blockquote class="tr_bq">
#....not the whole file<br />
iface eth0 inet static<br />
address 192.168.204.6<br />
netmask 255.255.255.0<br />
gateway 192.168.204.1<br />
dns-nameservers 192.168.204.2</blockquote>
</div>
<div>
Then I edited /mnt/etc/resolv.conf</div>
<div>
<blockquote class="tr_bq">
search t.lab<br />
nameserver 192.168.204.2</blockquote>
</div>
<div>
Finally, I opened a huge security hole by allowing root login via SSH in /mnt/etc/ssh/sshd_config:</div>
<blockquote class="tr_bq">
PermitRootLogin yes</blockquote>
<div>
You might want to avoid this, but I wanted something quick and dirty for testing. Now unmount the filesystem and boot the parent container. I used -d to daemonize, so you don't end up in an interactive console you can't escape:</div>
<blockquote class="tr_bq">
umount /dev/vg_tenstore/tenant<br />
lxc-start --name tenant -d</blockquote>
Once started, use lxc-attach to get a root console directly and run passwd to set a password. Type exit to leave once the password is set.<br />
<blockquote class="tr_bq">
lxc-attach --name=tenant<br />
passwd<br />
exit</blockquote>
Now test your password by going into the console. While you are there, you can check that everything is OK and then halt the container:<br />
<blockquote class="tr_bq">
lxc-console --name tenant<br />
#test what you want after login<br />
halt</blockquote>
You can also halt a container from the host with "lxc-stop -n tenant". Now that this is done, we can create a clone. I made a small wrapper script, but you can of course run the commands manually. I created a file "maketenant" and made it executable with "chmod +x maketenant". Here is the content of the script:<br />
<blockquote>
tenant=$1<br />
ipend=$2<br />
<br />
if [ ! -z "$tenant" -a "$tenant" != "tenstore" -a ! -z "$ipend" ]; then<br />
trepo=$tenant"_repo"<br />
echo "Creating $tenant"<br />
lxc-clone -s tenant $tenant<br />
lvm lvcreate -L 10g -n"$trepo" vg_tenstore<br />
mkfs.ext4 "/dev/vg_tenstore/$trepo"<br />
echo "/dev/vg_tenstore/$trepo mnt ext4 defaults 0 0" >> /var/lib/lxc/$tenant/fstab<br />
mount "/dev/vg_tenstore/$tenant" /mnt<br />
sed -i "s/address [0-9.]*/address 192.168.204.$ipend/" /mnt/etc/network/interfaces<br />
umount /mnt<br />
echo "lxc.start.auto = 1" >> /var/lib/lxc/$tenant/config<br />
echo "Starting"<br />
lxc-start --name $tenant -d<br />
#check if repo is mounted<br />
echo "Check if repo is mounted"<br />
lxc-attach --name $tenant -- df -h | grep repo<br />
echo "Check ip"<br />
lxc-attach --name $tenant -- cat /etc/network/interfaces | grep "address"<br />
else<br />
echo "Supply tenant name and ip suffix"<br />
fi</blockquote>
<div>
Ok so what does it do:</div>
<div>
<ul style="text-align: left;">
<li>lxc-clone -s tenant tenantname : makes a clone of our container based on a (LVM) snapshot</li>
<li>lvm lvcreate -L 10g -n"tenantname_repo" vg_tenstore : create a new logical volume for the tenant to store its backups in</li>
<li>mkfs.ext4 /dev/vg_tenstore/tenantname_repo : format that volume with ext4</li>
<li> echo "/dev/vg_tenstore/tenantname_repo mnt ext4 defaults 0 0" >> /var/lib/lxc/tenantname/fstab : Tells LXC to mount the volume to the container under /mnt</li>
<li>We then mount the new tenant's filesystem again to change the IP</li>
<ul>
<li>sed -i "s/address [0-9.]*/address 192.168.204.$ipend/" /mnt/etc/network/interfaces : replaces the IP with a unique one for the tenant</li>
</ul>
<li>echo "lxc.start.auto = 1" >> /var/lib/lxc/tenantname/config : tells LXC that we want to start the container at boot time</li>
</ul>
<div>
Finally, we start the container and check that the repository is indeed mounted and the IP was correctly edited. The result:</div>
</div>
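<div>
The sed line is the only fiddly step, so here it is exercised standalone on a copy of a tenant interfaces file (written under /tmp so no LVM mount is needed; the real script edits /mnt/etc/network/interfaces inside the cloned root):</div>

```shell
# Create a sample interfaces file like the one in the tenant root
cat > /tmp/interfaces <<'EOF'
iface eth0 inet static
address 192.168.204.6
netmask 255.255.255.0
gateway 192.168.204.1
EOF

# Rewrite the address exactly as maketenant does, for ipend=42
ipend=42
sed -i "s/address [0-9.]*/address 192.168.204.$ipend/" /tmp/interfaces
grep '^address' /tmp/interfaces
# -> address 192.168.204.42
```

<div>
Only the line containing "address" is touched; netmask and gateway are left alone.</div>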
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEg0BUxXWxcmqUiFyDDU_6po3SEvC5jcAQA5GYkq8BEI6VlyuytwCs91_s-BDBQ_AvgzNeDiG-BLfIOFocz4WIg4GtjsDfVdnwZdFRp2RlpjKZtXxLL522dEfhIoWMCZRXBCQ7CObXe3yxU/s1600/2016-06-10+10_08_20-root%2540repo_%257E.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="191" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEg0BUxXWxcmqUiFyDDU_6po3SEvC5jcAQA5GYkq8BEI6VlyuytwCs91_s-BDBQ_AvgzNeDiG-BLfIOFocz4WIg4GtjsDfVdnwZdFRp2RlpjKZtXxLL522dEfhIoWMCZRXBCQ7CObXe3yxU/s320/2016-06-10+10_08_20-root%2540repo_%257E.png" width="320" /></a></div>
<div>
<br /></div>
<div>
Our new tenant (tenant2) is up and running. Let's check the logical volumes. You can see that tenant2 runs from a snapshot and that a new logical volume was made as its repository.</div>
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiSz3iL7UAy_43wdPaH4Bc3SkkKxRITjSkesOWrh0CaDigZvkIryeYOyLHIY3DcvaWA3SaB1M2qbR5GIP3_499wYWs1bO6Yuze4P2G1pNwd_JdPxrpR4h-dmMrV1ocf8xswwuRxWE8ye9g/s1600/lvdisplay.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="320" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiSz3iL7UAy_43wdPaH4Bc3SkkKxRITjSkesOWrh0CaDigZvkIryeYOyLHIY3DcvaWA3SaB1M2qbR5GIP3_499wYWs1bO6Yuze4P2G1pNwd_JdPxrpR4h-dmMrV1ocf8xswwuRxWE8ye9g/s320/lvdisplay.png" width="286" /></a></div>
<div>
<br /></div>
<div>
And the container is properly bridged to our br0</div>
<div>
<br /></div>
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgP1TvoBf_owuE-0vDJhQJlJKga6fLBDtYSL5r6xTaGjNslVkfdjs9Lt1eqN4-nqz2RiQIQdnC6NYzN0kWbI-zX1WsEiB5ztYNRVf55QEsUObMNHVCUfQxgw5ZnuGR8YqA14PExmkqbRLA/s1600/br0.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="60" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgP1TvoBf_owuE-0vDJhQJlJKga6fLBDtYSL5r6xTaGjNslVkfdjs9Lt1eqN4-nqz2RiQIQdnC6NYzN0kWbI-zX1WsEiB5ztYNRVf55QEsUObMNHVCUfQxgw5ZnuGR8YqA14PExmkqbRLA/s320/br0.png" width="320" /></a></div>
<div>
<br /></div>
<div>
Now you can add it to Veeam based on its IP. If you click Populate, you will only see the storage you assigned to the container. </div>
<div>
<br /></div>
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjXKMrKp3q3X8pG99Fn44uHCi0rcoQABUK3VWzUSiIt8uqYUhgYqzLixwAxzHsMp8uYKnm91wcEzQdg0mlS5MBiaIqWfK96JSo8EObXkxR0va3rA1-QV88OflUfkHSRrQldFt4E8q107RM/s1600/addserver.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="225" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjXKMrKp3q3X8pG99Fn44uHCi0rcoQABUK3VWzUSiIt8uqYUhgYqzLixwAxzHsMp8uYKnm91wcEzQdg0mlS5MBiaIqWfK96JSo8EObXkxR0va3rA1-QV88OflUfkHSRrQldFt4E8q107RM/s320/addserver.png" width="320" /></a></div>
<div>
<br /></div>
<div>
You can then select the /mnt mount point</div>
<div>
<br /></div>
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjhOAczysirdKy9C5KG3tXrzaZw6lZsIBvK-K7VADvoh5BwDa_dPsHHo2Iupm5ZuqCUeFCbe9wxlN_QwdyvzPqk1_CNTawDW39D7BwCcOFch5v8yolUY2GfcqNHlw5Qm5rf53xKof1_MJg/s1600/mntbackup.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="225" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjhOAczysirdKy9C5KG3tXrzaZw6lZsIBvK-K7VADvoh5BwDa_dPsHHo2Iupm5ZuqCUeFCbe9wxlN_QwdyvzPqk1_CNTawDW39D7BwCcOFch5v8yolUY2GfcqNHlw5Qm5rf53xKof1_MJg/s320/mntbackup.png" width="320" /></a></div>
<div>
<br /></div>
<div>
And you are all set up. Let's do a backup copy job to it</div>
<div>
<br /></div>
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgbBFFKZ33sIhKYyGdVKN8hgYL1Vu0Ni-pQIycvE88YwVzEbeRlbP0CRTG441vWcejasb6deNsinMo9OGGPlvI03sAdRcQIWNi4nLTPhK4-YjwGg-WbxZcAPcnoIJKP44w9nj8IKEkaVLQ/s1600/bcjlxc.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="225" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgbBFFKZ33sIhKYyGdVKN8hgYL1Vu0Ni-pQIycvE88YwVzEbeRlbP0CRTG441vWcejasb6deNsinMo9OGGPlvI03sAdRcQIWNi4nLTPhK4-YjwGg-WbxZcAPcnoIJKP44w9nj8IKEkaVLQ/s320/bcjlxc.png" width="320" /></a></div>
<div>
<br /></div>
<div>
And let's check the result with "lxc-attach --name tenant2 -- find /mnt/"</div>
<div>
<br /></div>
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEh4dxkwrQ2c2UowFQ0BTpgeNuhGDP6KwrtI4RI8_UR1jnxJfc1EECl-bYw3a-EfNJpgIWXy1sdURXBYCJDuZHsJvXPsF4DH9-s_Idv34woBw0oCMBVnZB6dnKpWArXIEPReL5oRVpsemPA/s1600/reporesult.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="222" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEh4dxkwrQ2c2UowFQ0BTpgeNuhGDP6KwrtI4RI8_UR1jnxJfc1EECl-bYw3a-EfNJpgIWXy1sdURXBYCJDuZHsJvXPsF4DH9-s_Idv34woBw0oCMBVnZB6dnKpWArXIEPReL5oRVpsemPA/s320/reporesult.png" width="320" /></a></div>
<div>
<br />
That's it, you can now create more containers to get separate chroots. You can also harden security further with iptables or AppArmor, but since the container is unprivileged, it already provides far stronger separation than simply giving users their own /home directories and sudo rights.</div>
</div>
Timothy Dewinhttp://www.blogger.com/profile/14126614276831882160noreply@blogger.com0tag:blogger.com,1999:blog-8345042294447404507.post-75899341536653456022016-06-06T14:32:00.003+02:002016-06-06T14:32:54.909+02:00RPS 0.3.3<div dir="ltr" style="text-align: left;" trbidi="on">
It took some time to release this version because it packs a lot of changes that hopefully make the tool more useful. This release focuses on more "export" functionality.<br />
<br />
So the first major change is that the "Simulate" button has been moved down. This makes more sense, as you will probably first put in the info and then run the simulation. But the main reason it was moved is the additional export functionality: you will now see a couple of checkboxes next to the Simulate button.<br />
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiqYn3W-bIgZyI5jZWtUllj30Svv1xxYd8j6B-r7cIrAURo9T6RcenQSjk3B5cJomherfJDq-rxGxBNxY-rvCyy5rS1AKa3fYe5UD8vJgS8yPS4ObNK9eU_gONvNL2LJHHnLxKTK0p6cD4/s1600/export.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="186" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiqYn3W-bIgZyI5jZWtUllj30Svv1xxYd8j6B-r7cIrAURo9T6RcenQSjk3B5cJomherfJDq-rxGxBNxY-rvCyy5rS1AKa3fYe5UD8vJgS8yPS4ObNK9eU_gONvNL2LJHHnLxKTK0p6cD4/s320/export.png" width="320" /></a></div>
<br />
The first checkbox is the "export" functionality. When you check it and run the simulation, an input field with a URL will appear. If you click somewhere in the field, the complete text should be selected, which you can then easily copy with, for example, Ctrl+C. When you reuse the URL, your simulation will automatically execute with all the previous inputs. This way you can share your simulation without having to screenshot everything. Make sure to push the Simulate button before copy-pasting, as this refreshes the URL field.<br />
<br />
But what if you still want a clean screenshot of the result? I cannot tell you how many screenshots of the RPS output I have already seen in mails, documentation, etc. However, screenshotting the output can be challenging. First of all, you need specific software to cut it out. Next, if the simulation is longer, you have to screenshot multiple times and then concatenate the results. So in this release I'm introducing "canvas rendering". This renders the result in a hidden HTML5 canvas and then replaces a visible image with a copy of the canvas output. The result should be a cleaner image that you can use in documentation. I also opted to reduce the amount of info in the output, as dates etc. make little sense when a partner wants to send a result to an end customer.<br />
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgUptlIaUQ0j8XQjshhdOWEC9lcB9P4dMmB99fMM1a23YNy00yfiErvNYJB3HZqVHOtPoCwW1SH9_ux7CphrCWt5dA5bMqNjjxmnyUqFYR8NrEp82uFHd5oiQOttrYUZW3MzF0V8HOG-Xk/s1600/canvas.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="228" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgUptlIaUQ0j8XQjshhdOWEC9lcB9P4dMmB99fMM1a23YNy00yfiErvNYJB3HZqVHOtPoCwW1SH9_ux7CphrCWt5dA5bMqNjjxmnyUqFYR8NrEp82uFHd5oiQOttrYUZW3MzF0V8HOG-Xk/s320/canvas.png" width="320" /></a></div>
<br />
<br />
If you are running Firefox or Chrome, you will be able to save the picture by clicking the Download link. The advantage is that the link supplies a formatted name with the current date and time. If you are using another browser (like IE or Safari), you should still be able to right-click the image and save it as a picture. The reason the download link does not work in every browser is that I'm using an unofficial "download" attribute. Let's hope that in the future Edge, IE, Safari, etc. will also support it.<br />
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEi7-ExwaNwbxIbNoxfhG5Msum9-HZrwgyJYMCOx_wsnF18KJiStEtVBpEETE-sTAyAgRO7cbitQnTlzYdDTGKTjcSpNcl21lRfwiGkcvvsj07WlsoXO9WoEn7U01Nc-KRGIVHj9pMeZp5g/s1600/activefull.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="304" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEi7-ExwaNwbxIbNoxfhG5Msum9-HZrwgyJYMCOx_wsnF18KJiStEtVBpEETE-sTAyAgRO7cbitQnTlzYdDTGKTjcSpNcl21lRfwiGkcvvsj07WlsoXO9WoEn7U01Nc-KRGIVHj9pMeZp5g/s320/activefull.png" width="320" /></a></div>
<br />
Another new feature is support for the Active Full option that was added in Veeam v9. This feature required some testing to check whether the results made sense. Hopefully I got it right, and I would love to hear your feedback!<br />
<br />
Finally, a small request that has been implemented is Replica support. In this first version I only added support for VMware, but this might change in the future.<br />
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgcCj1TJnlRxFG9Km4x-wBTlN_KNOWVYe8o_J7gxl7MR4yYh7zg_UScuZfakFMf_7TqKSFLy98glac5qRc1bdl-omx04__LpZqgpvSe5mlk1SbpahHO09CJbafr0df5xeiYtQ69FqNhhwQ/s1600/replica.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="291" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgcCj1TJnlRxFG9Km4x-wBTlN_KNOWVYe8o_J7gxl7MR4yYh7zg_UScuZfakFMf_7TqKSFLy98glac5qRc1bdl-omx04__LpZqgpvSe5mlk1SbpahHO09CJbafr0df5xeiYtQ69FqNhhwQ/s320/replica.png" width="320" /></a></div>
<br /></div>
Timothy Dewinhttp://www.blogger.com/profile/14126614276831882160noreply@blogger.com0tag:blogger.com,1999:blog-8345042294447404507.post-48874400890264776462016-05-10T12:26:00.002+02:002016-05-11T13:40:48.590+02:00BytesPerSec from WMI Raw class via Powershell<div dir="ltr" style="text-align: left;" trbidi="on">
If you have ever tried to query data from WMI, you might have noticed that there are preformatted and raw data classes. Recently I tried to make do with the preformatted data, but after a while I saw it was pretty inaccurate for monitoring, as it only shows you that specific moment. This is especially true for disk writes, which seem to be buffered, meaning you get huge spikes and huge dips as buffers empty all at once.<br />
<br />
Well, I tried the raw classes and couldn't make sense of them at first. After my Google kung-fu did not seem to yield any results (mostly people asking the same question), I tried to make sense of the data via the good old trial-and-error method to see if I could squeeze some reasonable results out of it.<br />
<br />
The biggest issue with raw classes is that you take samples, and the values are continuously incremented over time by the system. So you get a huge number that doesn't mean anything on its own. What you need to do is take 2 samples; let's call them "old" and "new". Your real value over the interval would be<br />
<blockquote class="tr_bq">
(new val - old val) / (new timestamp - old timestamp)</blockquote>
<br />
Well, with "BytesPerSec" I could not get it to work until I realised that bytes per second is already expressed per interval. So for "BytesPerSec", it seems you have to look at "Timestamp_Sys100NS". To convert this to seconds, you multiply it by 1e-7 (google "1*1e-7 seconds to nanoseconds" to understand why). So what you get is:<br />
<blockquote class="tr_bq">
(New BytesPerSec - Old BytesPerSec) /((new Timestamp_Sys100NS - old Timestamp_Sys100NS)*1e-7)</blockquote>
That seems strange, because BytesPerSec is already in seconds. On a 1-second interval you would not need to divide, because the difference between timestamps would be around 1 anyway. However, consider a 5-second sample interval. In this case, the system seems to add 5 samples of "BytesPerSec"; by dividing by 5, you get an average over the interval. Well, it seems to be more complex than that. If you put the sample interval at 100ms, the formula still seems to work, which basically tells me the system is constantly adding to the number while keeping it scaled to "bytes per second". For example, in the script below, I sleep 900ms because that allows PowerShell to do around 100ms of querying/calculations.<br />
<br />
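To make the Timestamp_Sys100NS approach concrete, here is a minimal sketch of the idea (not the GitHub script itself). It assumes a Windows box where the Win32_PerfRawData_PerfDisk_PhysicalDisk class and its _Total instance exist, which is standard but worth verifying on your system:

```powershell
# Cook a raw BytesPerSec counter using Timestamp_Sys100NS (100ns units)
$old = Get-WmiObject Win32_PerfRawData_PerfDisk_PhysicalDisk -Filter "Name='_Total'"
Start-Sleep -Milliseconds 900
$new = Get-WmiObject Win32_PerfRawData_PerfDisk_PhysicalDisk -Filter "Name='_Total'"

# *1e-7 converts the 100ns timestamp delta to seconds
$seconds = ($new.Timestamp_Sys100NS - $old.Timestamp_Sys100NS) * 1e-7
$bytesPerSec = ($new.DiskBytesPersec - $old.DiskBytesPersec) / $seconds
"{0:N0} bytes/sec over a {1:N2}s interval" -f $bytesPerSec, $seconds
```

Compare the printed value against Task Manager while generating some disk load to sanity-check it.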
Now, my method of discovery is not very scientific (I could not correlate it to any documentation), but it does seem to add up if you compare it live to Task Manager. So below is a link to a sample PowerShell script you can use to check the values. Although I'm writing a small program in C#, I can only recommend PowerShell for exploring WMI, as it lets you experiment without recompiling all the time and discover the values via, for example, PowerShell ISE.<br />
<br />
<a href="https://github.com/tdewin/veeampowershell/blob/master/wmi_raw_data_bytespersec.ps1">https://github.com/tdewin/veeampowershell/blob/master/wmi_raw_data_bytespersec.ps1</a><br />
<br />
The script queries info from "Win32_PerfRawData_PerfDisk_PhysicalDisk" and "Win32_PerfRawData_Tcpip_NetworkInterface". If you are looking for a fitting class, you can actually use this oneliner:<br />
<blockquote class="tr_bq">
$lookfor = "network";gwmi -List | ? { $_.name -imatch $lookfor } | select name,properties | fl</blockquote>
Adjust $lookfor to whatever you are actually looking for.<br />
<br />
<br />
<b>Update: </b><br />
There is actually a "scientific method" to do it. More playing and googling turns up interesting results.<br />
<br />
First of all, look up your class and counter. So let's say I want "BytesPerSec" from "Win32_PerfRawData_PerfDisk_PhysicalDisk". You would google it, and hopefully you get to this class page:<br />
<br />
<a href="https://msdn.microsoft.com/en-us/library/aa394308%28v=vs.85%29.aspx" rel="nofollow">https://msdn.microsoft.com/en-us/library/aa394308%28v=vs.85%29.aspx</a><br />
<br />
This would tell you about the counter:<br />
DiskBytesPerSec<br />
Data type: uint64<br />
Access type: Read-only<br />
Qualifiers: <b>CounterType (272696576)</b> , DefaultScale (-4) , PerfDetail (200)<br />
<br />
If you click enough, you will end up on this page:<br />
<a href="https://msdn.microsoft.com/en-us/library/aa389383%28v=vs.85%29.aspx" rel="nofollow">https://msdn.microsoft.com/en-us/library/aa389383%28v=vs.85%29.aspx</a><br />
<br />
That type is actually 272696576, i.e. <b>PERF_COUNTER_BULK_COUNT</b>. If you google "<b>PERF_COUNTER_BULK_COUNT</b>", you might end up here:<br />
<br />
<a href="https://msdn.microsoft.com/en-us/library/ms804018.aspx" rel="nofollow">https://msdn.microsoft.com/en-us/library/ms804018.aspx </a><br />
<br />
This would tell you to use the following formula: <br />
<i>(N<span class="sub">1</span> - N<span class="sub">0</span>) / ((D<span class="sub">1</span> - D<span class="sub">0</span>) / F), where the numerator (N) represents the number of operations performed during the last sample interval, the denominator (D) represents the number of ticks elapsed during the last sample interval, and the variable F is the frequency of the ticks.</i><br />
<i><br /></i>
<i>This counter type shows the average number of operations completed
during each second of the sample interval. Counters of this type measure
time in ticks of the system clock. The variable F represents the number
of ticks per second. The value of F is factored into the equation so
that the result is displayed in seconds. This counter type is the same
as the <a href="https://technet.microsoft.com/en-us/library/cc740048%28v=ws.10%29.aspx">PERF_COUNTER_COUNTER</a> type, but it uses larger fields to accommodate larger values.</i><br />
<br />
This might still not be trivial, but the numerator should be fairly clear: it is the same value we used before, namely new value - old value. The denominator is ((D<span class="sub">1</span> - D<span class="sub">0</span>) / F), which would be (new ticks - old ticks) / frequency. This turns out to translate to:<br />
<blockquote class="tr_bq">
($new.Timestamp_PerfTime - $old.Timestamp_PerfTime)/($new.Frequency_PerfTime).</blockquote>
<br />
Interestingly enough, "$new.Frequency_PerfTime" is always the same, because it is actually the Hz speed of your processor. So it basically tells you how many ticks it can handle per second. Timestamp_PerfTime is, I guess, how many ticks have already passed. So by subtracting the old value from the new one, you get the number of ticks that occurred between your samples. If you divide that by the frequency, you get how many seconds have passed (which can be a float). That means you don't have to convert to nanoseconds, and you can use the formula directly like this:<br />
<blockquote class="tr_bq">
$time = ($new.Timestamp_PerfTime - $old.Timestamp_PerfTime)/($new.Frequency_PerfTime)</blockquote>
<br />
So the total formula would be<br />
<blockquote class="tr_bq">
$dbs = $new.DiskBytesPersec - $old.DiskBytesPersec<br />
$time = ($new.Timestamp_PerfTime - $old.Timestamp_PerfTime)/($new.Frequency_PerfTime)<br />
$cookeddbs = $dbs/$time</blockquote>
<br />
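Put together, a self-contained sketch of the PERF_COUNTER_BULK_COUNT cooking looks like this (again a sketch, assuming a Windows box and the _Total disk instance):

```powershell
# (N1 - N0) / ((D1 - D0) / F) applied to DiskBytesPersec
$old = Get-WmiObject Win32_PerfRawData_PerfDisk_PhysicalDisk -Filter "Name='_Total'"
Start-Sleep -Seconds 1
$new = Get-WmiObject Win32_PerfRawData_PerfDisk_PhysicalDisk -Filter "Name='_Total'"

$dbs  = $new.DiskBytesPersec - $old.DiskBytesPersec                                    # N1 - N0
$time = ($new.Timestamp_PerfTime - $old.Timestamp_PerfTime) / $new.Frequency_PerfTime  # (D1-D0)/F
$cookeddbs = $dbs / $time
"{0:N0} bytes/sec" -f $cookeddbs
```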
Running the method mentioned in the script and this method gives you almost the same results; I guess the tiny differences have to do with rounding. Anyway, the method in this update should be the most accurate, as this is what Microsoft describes using themselves for cooking up the data. It should also give you a more "stable" way to calculate other values, instead of trial and error. </div>
Timothy Dewinhttp://www.blogger.com/profile/14126614276831882160noreply@blogger.com0tag:blogger.com,1999:blog-8345042294447404507.post-10528832926598242942016-04-13T13:52:00.003+02:002016-04-13T13:57:10.385+02:00Veeam RESTful API via Powershell<div dir="ltr" style="text-align: left;" trbidi="on">
In this blog post I'll show you how you can play around with the Veeam RESTful API via PowerShell. This post will show you how to find a job and start it. You might wonder why you would do such a thing. Well, in my case it is to showcase the interaction with the API (per line of code), very similar to what you would do with wget or curl. If you want an interactive way of playing with the API, know that you can always replace the /api with /web/#api/ (for example http://localhost:9399/web/#/api/) to get an interactive browser. However, via PowerShell you get the real sense that you are interacting with the API, and all methods used here should be portable to any other language. That is why I've chosen not to use "Invoke-RestMethod", but rather a raw HTTP call.<br />
<br />
<br />
So the first thing (which might not be required) is to ignore the self-signed certificate. If you access the API via FQDN on the server itself, the certificate should be trusted, but that would make my code less generic.<br />
<blockquote class="tr_bq">
<span style="color: #93c47d;">add-type @"</span><br />
<span style="color: #93c47d;"> using System.Net;</span><br />
<span style="color: #93c47d;"> using System.Security.Cryptography.X509Certificates;</span><br />
<span style="color: #93c47d;"> public class TrustAllCertsPolicy : ICertificatePolicy {</span><br />
<span style="color: #93c47d;"> public bool CheckValidationResult(</span><br />
<span style="color: #93c47d;"> ServicePoint srvPoint, X509Certificate certificate,</span><br />
<span style="color: #93c47d;"> WebRequest request, int certificateProblem) {</span><br />
<span style="color: #93c47d;"> return true;</span><br />
<span style="color: #93c47d;"> }</span><br />
<span style="color: #93c47d;"> }</span><br />
<span style="color: #93c47d;">"@</span><br />
<span style="color: #93c47d;">[System.Net.ServicePointManager]::CertificatePolicy = New-Object TrustAllCertsPolicy</span></blockquote>
With that code executed, you have told .NET to trust everything. The next step is to get the API version.<br />
<blockquote class="tr_bq">
<span style="color: #93c47d;">$r_api = Invoke-WebRequest -Method Get -Uri "https://localhost:9398/api/" </span><br />
<span style="color: #93c47d;">$r_api_xml = [xml]$r_api.Content</span><br />
<span style="color: #93c47d;">$r_api_links = @($r_api_xml.EnterpriseManager.SupportedVersions.SupportedVersion | ? { $_.Name -eq "v1_2" })[0].Links</span></blockquote>
With the first request, we basically do a GET request to the API page. The Veeam REST API uses XML in favor of JSON, so we can just convert the content itself to XML. Once that is done, we can browse the XML. The cool thing about PowerShell is that it allows you to browse the structure and autocompletes. Just execute $r_api_xml and you will get the root element. By adding a dot and the start element, you can see what's underneath this node. You can repeat this process to "explore" the XML (or you can just print out $r_api.Content without conversion to see the plain XML).<br />
<br />
Under the root container (EnterpriseManager), we have a list of all supported versions. By applying a filter we get the v1_2 (v9) API version. This one has one link, which indicates how you can log on.<br />
<blockquote class="tr_bq">
PS C:\Users\Timothy> $r_api_links.Link | fl<br />
<br />
Href : https://localhost:9398/api/sessionMngr/?v=v1_2<br />
Type : LogonSession<br />
Rel : Create</blockquote>
So the Href shows the link we have to follow. The Type tells us the name of the link, and finally Rel tells us which HTTP method we should use: Create means we need to do a POST.<br />
<br />
Most of the time:<br />
<ul style="text-align: left;">
<li>the GET method is used if you want to retrieve details but don't want to perform a real action </li>
<li>the POST method is used if you want to perform a real action</li>
<li>the PUT method is used if you want to update something </li>
<li>the DELETE method is used if you want to destroy something</li>
</ul>
When in doubt, check the manual : <a href="https://helpcenter.veeam.com/backup/rest/requests.html" rel="nofollow">https://helpcenter.veeam.com/backup/rest/requests.html </a><br />
<br />
OK, for authentication we have to do something special: we need to send the credentials via basic authentication. This is a pure HTTP standard, so I'll show you two ways to do it.<br />
<blockquote class="tr_bq">
<span style="color: #93c47d;">$r_login = Invoke-WebRequest -method Post -Uri $r_api_links.Link.Href -Credential (Get-Credential -Message "Basic Auth" -UserName "rest")</span><br />
<span style="color: #93c47d;"><br /></span>
<span style="color: #93c47d;">#even more raw</span><br />
<span style="color: #93c47d;"><br /></span>
<span style="color: #93c47d;">$auth = "Basic " + [System.Convert]::ToBase64String([System.Text.Encoding]::UTF8.GetBytes("mylogin:myadvancedpassword"))</span><br />
<span style="color: #93c47d;">$r_login = Invoke-WebRequest -method Post -Uri $r_api_links.Link.Href -Headers @{"Authorization"=$auth}</span></blockquote>
The first method uses PowerShell's built-in functionality for doing basic authentication. The second method shows what is really going on in the HTTP request: "username:password" is encoded as Base64, then "Basic " and this encoded string are concatenated, and the result is set in the Authorization header of the request.<br />
<br />
The result is that (if we logged on successfully) we get a logon session, which has links to almost all main resources. Before we go any further, we need to analyze the response a bit.<br />
<blockquote class="tr_bq">
<span style="color: #93c47d;">if ($r_login.StatusCode -lt 400) {</span></blockquote>
If you do a call, you can check the StatusCode or return code. You are expecting a number between 200 and 204, which means success. If you want to know the exact meaning of the HTTP return codes in the Veeam REST API: <a href="https://helpcenter.veeam.com/backup/rest/http_response_codes.html" rel="nofollow">https://helpcenter.veeam.com/backup/rest/http_response_codes.html</a><br />
<br />
The next thing is to extract the REST session id. Instead of sending over the username and password the whole time, you send this header to authenticate. The header is returned after you successfully log in.<br />
<blockquote class="tr_bq">
<span style="color: #93c47d;"> #get session id which we need to do subsequent request</span><br />
<span style="color: #93c47d;"> $sessionheadername = "X-RestSvcSessionId"</span><br />
<span style="color: #93c47d;"> $sessionid = $r_login.Headers[$sessionheadername] </span></blockquote>
Now that we have the session id extracted, let's do something useful.<br />
<blockquote class="tr_bq">
<span style="color: #93c47d;"> #content</span><br />
<span style="color: #93c47d;"> $r_login_xml = [xml]$r_login.Content</span><br />
<span style="color: #93c47d;"> $r_login_links = $r_login_xml.LogonSession.Links.Link</span><br />
<span style="color: #93c47d;"> $joblink = $r_login_links | ? { $_.Type -eq "JobReferenceList" } </span></blockquote>
First we take the logon session, convert it to XML and browse the links, looking for the link with type "JobReferenceList". Let's follow this link. In the process, don't forget to set the session id in your header.<br />
<blockquote class="tr_bq">
<span style="color: #93c47d;"> #get jobs with id we have</span><br />
<span style="color: #93c47d;"> $r_jobs = Invoke-WebRequest -Method Get -Headers @{$sessionheadername=$sessionid} -Uri $joblink.Href</span><br />
<span style="color: #93c47d;"> $r_jobs_xml = [xml]$r_jobs.Content</span><br />
<span style="color: #93c47d;"> $r_job = $r_jobs_xml.EntityReferences.Ref | ? { $_.Name -Match "myjob" }</span><br />
<span style="color: #93c47d;"> $r_job_alt = $r_job.Links.Link | ? { $_.Rel -eq "Alternate" }</span></blockquote>
The first line is just getting and converting the XML. The page we requested is a list of all the jobs in reference format. The reference format is a compact way of representing the object you requested, basically showing the name and the ID of the job plus some links. If you add "?format=Entity" to such a list (or to a request for an object/object list), you get the full details of the job.<br />
<br />
So why the reference representation? Well, it is a pretty similar concept to the GUI. If you open the Backup & Replication GUI and select the job list, you don't get the complete details of all the jobs; that would be kind of overwhelming. But when you click a specific job and try to edit it, you get all the details. Similarly, if you want to build an overview of all the jobs, you wouldn't want the API to give you all the unnecessary details, as this would make the "processing time" of the request much bigger (downloading the data, parsing it, extracting what you need, ...).<br />
<br />
So in the third line, we look for the job (or rather the reference to a job) whose name matches "myjob". We then look through the links of this job for the alternate link. Basically, this is the job id + "?format=Entity", which gets the complete details of the job. Here is the output of $r_job_alt:<br />
<blockquote class="tr_bq">
PS C:\Users\Timothy> $r_job_alt | fl<br />
Href : https://localhost:9398/api/jobs/f7d731be-53f7-40ca-9c45-cbdaf29e2d99?format=Entity<br />
Name : myjob<br />
Type : Job<br />
Rel : Alternate</blockquote>
Now ask for the details of the job<br />
<blockquote class="tr_bq">
<span style="color: #93c47d;"> $r_job_entity = Invoke-WebRequest -Method Get -Headers @{$sessionheadername=$sessionid} -Uri $r_job_alt.Href</span><br />
<span style="color: #93c47d;"> $r_job_entity_xml = [xml]$r_job_entity.Content</span><br />
<span style="color: #93c47d;"> $r_job_start = $r_job_entity_xml.Job.Links.Link | ? { $_.Rel -eq "Start" }</span></blockquote>
By now, the first 2 lines should be well understood. In the third line we are looking for a link on this object named "Start". This is basically the method we need to execute to start the job. Start is a real action, and if you look it up, you will see that you need to use a POST method to call it.<br />
<blockquote class="tr_bq">
<span style="color: #93c47d;"> #start the job</span><br />
<span style="color: #93c47d;"> $r_start = Invoke-WebRequest -Method Post -Headers @{$sessionheadername=$sessionid} -Uri $r_job_start.Href</span><br />
<span style="color: #93c47d;"> $r_start_xml = [xml]$r_start.Content </span><br />
<span style="color: #93c47d;"><br /></span>
<span style="color: #93c47d;"> #check of command is succesfully delegated</span><br />
<span style="color: #93c47d;"> while ( $r_start_xml.Task.State -eq "Running") {</span><br />
<span style="color: #93c47d;"> $r_start = Invoke-WebRequest -Method Get -Headers @{$sessionheadername=$sessionid} -Uri $r_start_xml.Task.Href</span><br />
<span style="color: #93c47d;"> $r_start_xml = [xml]$r_start.Content</span><br />
<span style="color: #93c47d;"> write-host $r_start_xml.Task.State</span><br />
<span style="color: #93c47d;"> Start-Sleep -Seconds 1</span><br />
<span style="color: #93c47d;"> }</span><br />
<span style="color: #93c47d;"> write-host $r_start_xml.Task.Result</span></blockquote>
OK, so that's a bunch of code, but I still wanted to post it. First we follow the Start method and parse the XML. The result is actually a "Task". A task represents a process running on the RESTful API that you can refresh to check the actual status of the process. Importantly, it is the REST process, not the backup server process. That means that if a task is finished for REST, it doesn't necessarily mean that the action on the backup server is finished; what is finished is that the API has passed your command to the backup server.<br />
<br />
So in this example, while the task is in the "Running" state, we refresh the task, write out the state, sleep 1 second, and then do the whole thing again. Once it is finished, we write out the Task.Result. Again, if this task is finished, it does not mean the job is finished, but the backup server has (hopefully successfully) started the job.<br />
<br />
Finally, we need to log out. That's rather easy: in the logon session, you will find the URL to do so. Since you are logging out, or rather deleting your session, you need to use the DELETE method. You can verify that by checking the relationship of the link.<br />
<blockquote class="tr_bq">
<span style="color: #93c47d;"> $logofflink = $r_login_xml.LogonSession.Links.Link | ? { $_.type -match "LogonSession" }</span><br />
<span style="color: #93c47d;"> Invoke-WebRequest -Method Delete -Headers @{$sessionheadername=$sessionid} -Uri $logofflink.Href</span></blockquote>
I have uploaded the complete code here:<br />
<a href="https://github.com/tdewin/veeampowershell/blob/master/restdemo.ps1" rel="nofollow">https://github.com/tdewin/veeampowershell/blob/master/restdemo.ps1 </a><br />
There is some more code in that example that does the actual follow up on the process. But you can skip or analyze the code if you want. </div>
Timothy Dewinhttp://www.blogger.com/profile/14126614276831882160noreply@blogger.com0tag:blogger.com,1999:blog-8345042294447404507.post-17103719519647403422016-01-14T18:27:00.004+01:002016-01-14T18:27:39.175+01:00Extending Surebackup in v9<div dir="ltr" style="text-align: left;" trbidi="on">
Now that everybody has posted their favorite new features in Veeam v9, I want to take the time to highlight one particular feature: the credential manager part of SureBackup. This extra tab can be found when you configure your VM in the application group.<br />
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjVv7xJeQR3ebBk2A0JOT2jNsavsdOwOBL0xjEWTjaTFmpG3MCuqfq8BXXXX-K7fAJhbVpaBF51z95RmW6ya-O3HCDbnNBDkiwShshEOxAKmbXZswSdDEjLSVcyUWP_I6zV7Gikbj2SqsA/s1600/20160114+17+33+11.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="320" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjVv7xJeQR3ebBk2A0JOT2jNsavsdOwOBL0xjEWTjaTFmpG3MCuqfq8BXXXX-K7fAJhbVpaBF51z95RmW6ya-O3HCDbnNBDkiwShshEOxAKmbXZswSdDEjLSVcyUWP_I6zV7Gikbj2SqsA/s320/20160114+17+33+11.png" width="314" /></a></div>
<br />
So why this extra tab? Well, read my <a href="http://blog.dewin.me/2015/11/extending-surebackup-with-custom.html">SureBackup SharePoint validation script</a> and you will instantly see the biggest problem: storing your credentials in a secure way is 60% of the whole blog article. This is because in v8, all scripts are started by the backup service and thus inherit its account and permissions.<br />
<br />
Enter v9 and the credentials tab. My first assumption was that all scripts would run under the configured account. That turned out to be incorrect: the script is still started with the backup service account, but the network credentials are changed. This has one big advantage: even if your backup server is not in the domain, you can still use these credentials. Think of it as using "runas /netonly" to start up an application (this is how Product Management explained it to me). The credentials are only applied when connecting to a remote server.<br />
<br />
So for the fun of it, I have already looked into some example scripts. They might not be all that stable and it is better to adapt them to your liking, but they should give you an idea of where to start.<br />
<br />
First of all, you can find an updated version of the <a href="https://github.com/tdewin/veeampowershell/blob/master/suresharepoint-netcred.ps1">Sharepoint script</a>. The only parameters to set are:<br />
<ul style="text-align: left;">
<li> -server [yourserver, %vm_ip% for example] </li>
<li> -path [to content you want to check, by default : /Shared%20Documents/contenttest.txt]</li>
<li> -content [the content to check, by default "working succesfully"]</li>
</ul>
If you then set up an account that can access the webservice, it will authenticate successfully with the network credentials, download the file and match the content. The real magic? "$web.UseDefaultCredentials = $true"<br />
<br />
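The core of such a check boils down to a few lines. Here is a minimal sketch (not the actual github script; the function name, server name and defaults are placeholders — the real script is a standalone .ps1 with the same param() block):

```powershell
function Test-SureSharepoint {
    param(
        [string]$server  = "sharepoint01",                          # pass %vm_ip% from Surebackup
        [string]$path    = "/Shared%20Documents/contenttest.txt",
        [string]$content = "working succesfully"
    )
    # UseDefaultCredentials makes WebClient authenticate with the process'
    # network credentials, i.e. the ones configured in the v9 credentials tab
    $web = New-Object System.Net.WebClient
    $web.UseDefaultCredentials = $true
    $downloaded = $web.DownloadString("http://$server$path")
    if ($downloaded -match $content) {
        Write-Host "Content matched, test succeeded"
        return 0    # Surebackup treats exit code 0 as success
    }
    Write-Host "Content did not match, test failed"
    return 1
}
```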
But the fun doesn't stop there. I also tried to make a <a href="https://github.com/tdewin/veeampowershell/blob/master/suresql-netcred.ps1">SQL script</a>. You only need to pass:<br />
<ul style="text-align: left;">
<li>-server [yourserver, %vm_ip% for example] </li>
<li>-instance [by default MSSQLSERVER]</li>
</ul>
It will log on to the instance, issue a "use" against every database and query its tables. Finally it checks the state of the databases in "sys.databases". The "use" makes sure that SQL Server actually tries to mount the database. But the cool thing is, you can easily alter the example to execute a full-blown SQL query and then check whether the output satisfies your needs. The real magic? "Server=$instancefull;<b>Integrated Security=True</b>;"<br />
<br />
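A minimal sketch of what such a database state check looks like (again placeholders, not the actual github script — the function name and server name are made up):

```powershell
function Test-SureSql {
    param(
        [string]$server   = "sqlserver01",   # pass %vm_ip% from Surebackup
        [string]$instance = "MSSQLSERVER"
    )
    # A default instance connects as "server", a named instance as "server\instance"
    $instancefull = if ($instance -eq "MSSQLSERVER") { $server } else { "$server\$instance" }

    # Integrated Security=True makes SqlClient use the network credentials
    # configured in the credentials tab
    $conn = New-Object System.Data.SqlClient.SqlConnection
    $conn.ConnectionString = "Server=$instancefull;Integrated Security=True;"
    $conn.Open()

    # sys.databases tells us whether every database is actually ONLINE
    $cmd = $conn.CreateCommand()
    $cmd.CommandText = "SELECT name, state_desc FROM sys.databases"
    $reader = $cmd.ExecuteReader()
    while ($reader.Read()) {
        Write-Host ("{0} : {1}" -f $reader["state_desc"], $reader["name"])
    }
    $conn.Close()
}
```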
Also added a template for just any plain <a href="https://github.com/tdewin/veeampowershell/blob/master/surepowershell-netcred.ps1">Powershell script</a>. This might look trivial (it doesn't do anything but log on and write the hostname), but I spent some time figuring out that you need "-Authentication Negotiate" and that there is no need to set up SSL. However, do check whether the firewall allows remote connections from outside the domain if you want to use this one.<br />
<br />
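The essence of that template is a single remoting call; a minimal sketch (hypothetical function name, placeholder IP), assuming the firewall allows the connection:

```powershell
function Test-SurePowershell {
    param([string]$server = "192.168.1.10")   # pass %vm_ip% from Surebackup
    # Negotiate authentication is what makes this work from a backup server
    # outside the target's domain; no HTTPS/SSL listener is required
    Invoke-Command -ComputerName $server -Authentication Negotiate -ScriptBlock {
        # anything written here ends up in the Surebackup job log under [Console]
        Write-Host "Logged on to $env:COMPUTERNAME"
    }
}
```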
So no more excuses for writing those extensive application test scripts!<br />
<br />
Final tip: if you are customizing these examples, you can do a Powershell "write-host" at any time. The output can be found in the matching Surebackup log, by default in:<br />
%programdata%\veeam\Backup\&lt;Surebackup jobname&gt;\Job.&lt;Surebackup jobname&gt;.log<br />
<br />
For example, for the SQL script, you would find something like:<br />
<b>[08.01.2016 15:57:59] Info [SureBackup] [SQLandSP] [ScriptTests] [Console] online : master</b><br />
<br /><b> </b></div>
Timothy Dewinhttp://www.blogger.com/profile/14126614276831882160noreply@blogger.com0tag:blogger.com,1999:blog-8345042294447404507.post-51290772185191106332015-11-30T17:50:00.002+01:002015-11-30T17:50:25.468+01:00Extending Surebackup with custom scripts : Sharepoint<div dir="ltr" style="text-align: left;" trbidi="on">
Often I visit customers and ask them about their restore tests. The most common answer? "We test the backups when we do the actual restores." To the question why not test more frequently, the most common answer would be "time and resources".<br />
<br />
A couple of months ago, I actually visited a customer that tried to do a restore from backup. It failed: B&R was able to restore the backup, but the data inside seemed to be corrupt. The SQL server refused to mount the database. Exploring multiple restore points gave the same issue. It was a strange issue because all backup files were consistent (no storage corruption), and the backup job did not have any failed states. The conclusion was Changed Block Tracking corruption. In light of the recent bugs in CBT, I want to emphasize again how critical it is to validate your backups. If the customer had tested their backups with, for example, the <a href="http://www.danilochiavari.com/2014/12/01/new-sql-server-surebackup-script-in-veeam-br-v8/">SQL test script included in v8</a>, they might have caught the error before the actual restore failed.<br />
<br />
This shows another thing I want to highlight. Surebackup is a framework, but your "verification" is only as good as your test. By default, Surebackup application tests are just portscans. This tells you that the service has started (it was able to bind to the port and it is answering), but doesn't tell you anything about how well the service is performing. For example, the SQL service / instance could start, but maybe some databases could not be mounted inside the instance.<br />
<br />
Few people visit this topic, but you can actually extend the framework. The fact that it supports Powershell makes it quite simple to write more extensive tests.<br />
<br />
So here is a small test for Sharepoint. I hacked it together today, so please reread the whole script to "Surebackup" my coding skills. It is rather basic, but you could actually use it for any kind of webservice. It simply reads the content of a txt file in a certain site. If the content matches a predefined value, you know that:<br />
a) The database was mounted inside the instance<br />
b) Sharepoint is able to talk to the instance and query it<br />
c) The webservice is responding to requests<br />
<br />
So how do you get started? Well, first upload a txt file with some content. In my case, I uploaded the file contenttest.txt with the content "sharepoint is working succesfully" as shown below:<br />
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiuAcBMMl_QiaujbWVobIKG7eoWtT_-9kl7TvzYrS7fCVFTpsdS4qvcp1-fAM-riYcOVQ3qy7mu3-1HjAD4ThBd6G7A7vKZ0EgwAPs7hyphenhyphenivWFqlFCcRQDEYbuh_lstEuJUTIcgSlZKAkU4/s1600/sps.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="232" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiuAcBMMl_QiaujbWVobIKG7eoWtT_-9kl7TvzYrS7fCVFTpsdS4qvcp1-fAM-riYcOVQ3qy7mu3-1HjAD4ThBd6G7A7vKZ0EgwAPs7hyphenhyphenivWFqlFCcRQDEYbuh_lstEuJUTIcgSlZKAkU4/s320/sps.png" width="320" /></a></div>
<br />
You can right-click the link and copy its location. Test whether you can really access it this way, as shown below<br />
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEilMXis1zpLZbkJRktCwwhn8npGQy6jjETm5hPNoVKEcuKRx9taCsikhtLD9ghlXAcERDZ68ILcQLlEoSHZ7-ygoIr_bwdPzWTaoQbaX86eSc858K3hwnaZ7JQmkb7HcvH2c13rhkLCXec/s1600/content.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="86" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEilMXis1zpLZbkJRktCwwhn8npGQy6jjETm5hPNoVKEcuKRx9taCsikhtLD9ghlXAcERDZ68ILcQLlEoSHZ7-ygoIr_bwdPzWTaoQbaX86eSc858K3hwnaZ7JQmkb7HcvH2c13rhkLCXec/s320/content.png" width="320" /></a></div>
<br />
Now get the powershell goodness from <a href="https://github.com/tdewin/veeampowershell/blob/master/suresharepoint.ps1">https://github.com/tdewin/veeampowershell/blob/master/suresharepoint.ps1</a> and put it somewhere on the backup server. Then edit the file.<br />
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEg8lTHFbm26tHgV76x3zt4QNEenw_AVFWJsUcyOeFcfZsK8xyaDirIxRXCYCZLkUzWy9t5yIIefkPI5jgBMi8h6L5UQYluimMSltgLjgCDED32wq6rqKYUrjYAZWeAHyUl6ZsLMDFrRweE/s1600/editscript.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="73" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEg8lTHFbm26tHgV76x3zt4QNEenw_AVFWJsUcyOeFcfZsK8xyaDirIxRXCYCZLkUzWy9t5yIIefkPI5jgBMi8h6L5UQYluimMSltgLjgCDED32wq6rqKYUrjYAZWeAHyUl6ZsLMDFrRweE/s320/editscript.png" width="320" /></a></div>
<br />
<br />
First of all, you can see that everything can be passed as a parameter (e.g. via a commandline call, use -server "ip" to change the IP address). Change the username and plaintext password to the user that will be used to authenticate against Sharepoint. Preferably an account with read-only rights, and not the administrator as in my screenshot; this way you are sure it doesn't break anything ;)<br />
<br />
You might wonder: do I really need to provide the password in plaintext? No, you don't. You can also follow this procedure, but it might make things more complex. Instead of plaintext passwords, you can use Powershell encrypted passwords, but understand that if you want to decrypt the password, you need to be the same user as the one that encrypted it (the whole point of encrypting it, right?). When Surebackup runs, the script is actually being run by the backup service. So the account that is used to decrypt the password is the service account that runs this service (as shown in the screenshot below).<br />
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEh9p_nwA7xdnlAoIvm-3uuqp_p6eZ1GSLByULd6M8MJkoipep0eMmiWEhr8LkVtl774gudFiwAfGFpedD3WhF97JvsKcDyuJWlJSdNEa6J1KqjjihM_uYiKBaufg1IMnMWJ0Gd-oRS8gNI/s1600/20151130+17+08+46.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="320" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEh9p_nwA7xdnlAoIvm-3uuqp_p6eZ1GSLByULd6M8MJkoipep0eMmiWEhr8LkVtl774gudFiwAfGFpedD3WhF97JvsKcDyuJWlJSdNEa6J1KqjjihM_uYiKBaufg1IMnMWJ0Gd-oRS8gNI/s320/20151130+17+08+46.png" width="282" /></a></div>
<br />
<br />
If this is not the Local System account but a service account, you can use the following cmd script to create an encrypted password:<br />
<a href="https://github.com/tdewin/veeampowershell/blob/master/encryptedpasstoclip/encryptedpasstoclip.cmd">https://github.com/tdewin/veeampowershell/blob/master/encryptedpasstoclip/encryptedpasstoclip.cmd</a><br />
<br />
Change the username in the bat file, run it, enter the password for the service account and finally enter the password for the account you want to use to authenticate to Sharepoint. The result should be an encrypted password on your clipboard. Replace the whole password statement in the file, for example:<br />
<blockquote class="tr_bq">
$pass = "01000000d08c9ddf0115d1118c7a00c04fc297eb01000000c9b320ead0059d409978380353923e8000000000020000000000106600000001000020000000b1816dffef13bc70672b55dfcee25a41488d5bb395ae28242b70afeb90938db9000000000e8000000002000020000000bd7da1d0d06893bed8b035c411c34f181b000aa9f0e4f46658eb3efe3e73c06840000000948652774f7f82848ba3065af8193c23fe25b773cea3ecf65957bdc12cdcc71868a82ba11d0475e65b321056a900d0571a05184b89132c0f21452642033c918340000000e8fcabb194c06c78ad01ee2192b73bf7ba799630adfedb6091dc1a629dc9d5a2a6025a64fcf74fe8a89d4a579a54c3538928ee0d22a57f22f6e50da240deaa62"</blockquote>
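Conceptually, the encrypt/decrypt round trip looks like this (DOMAIN\spuser and the sample password are placeholders; the cmd script on github automates the same steps with a Read-Host prompt instead of a hardcoded string):

```powershell
# Encrypt: must run AS the account that will later decrypt (DPAPI is per-user).
# In Surebackup that is the service account running the Veeam Backup Service.
$plain     = ConvertTo-SecureString "MySharepointPass" -AsPlainText -Force
$encrypted = $plain | ConvertFrom-SecureString   # long hex string, safe to paste into $pass

# Decrypt (inside the test script, running as the same account):
$secure = $encrypted | ConvertTo-SecureString
$cred   = New-Object System.Management.Automation.PSCredential("DOMAIN\spuser", $secure)
$cred.GetNetworkCredential().Password            # recovers the original password
```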
If you got this far (or you skipped the whole password encryption part because your backup server is a Fort Knox anyway), we can now configure the script. Go to Veeam B&R and configure the test script as shown below in the application group (or in the linked jobs part):<br />
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEi45wAGrrhK_g38wBre2hX3CYr-URX9tytSJeZ_UDWNFzWPuKIoLXcwnQIhT-k-jqVmYKZtg_BEm-ClnQHuJoRD0yoBpAi2OI8dKkYCCcQ4cJyJuz-Rx3jqWAMKaKgdsEZKTQeC2fId1R8/s1600/configuring.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="255" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEi45wAGrrhK_g38wBre2hX3CYr-URX9tytSJeZ_UDWNFzWPuKIoLXcwnQIhT-k-jqVmYKZtg_BEm-ClnQHuJoRD0yoBpAi2OI8dKkYCCcQ4cJyJuz-Rx3jqWAMKaKgdsEZKTQeC2fId1R8/s320/configuring.png" width="320" /></a></div>
<br />
<br />
Notice that I also configured "Arguments" as "-server %vm_ip%". This will pass the sandbox IP to the script directly.<br />
<br />
Before you actually start up Surebackup, you can test the script against your production environment. If it doesn't work against the production environment, it will probably also fail against your lab environment. In case you configured an encrypted password with another account, you can temporarily override it with the following command (if you did not, you can just run the script as script.ps1 -server &lt;dnsname&gt;):<br />
<blockquote class="tr_bq">
PS C:\Users\demo2&gt; C:\scripts\suresp.ps1 -server &lt;dnsname&gt; -password (read-host -assecurestring -prompt "pass" | convertfrom-securestring)</blockquote>
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjdLWU9NHsAd8k3nMtf-NkagyCjNRNCi6_gjXxu31UmodVqc5n9rKxRq5k1cfGRPaNzTQ8ab1BSbSFqTwTxwBGD_v6ag96Nc7saEhncNzSjUDT4W-tXGk9Oe7xrd9tz3RPwpcrTr1f7NYE/s1600/dryrun.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="43" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjdLWU9NHsAd8k3nMtf-NkagyCjNRNCi6_gjXxu31UmodVqc5n9rKxRq5k1cfGRPaNzTQ8ab1BSbSFqTwTxwBGD_v6ag96Nc7saEhncNzSjUDT4W-tXGk9Oe7xrd9tz3RPwpcrTr1f7NYE/s320/dryrun.png" width="320" /></a></div>
<br />
Now, if everything is green and you got a match, run Surebackup and validate whether you get the same output in your lab.<br />
<br />
<div class="separator" dir="rtl" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgIVcNTX0YPVeghy02iJMXK_fLC6Wx6RCFXq0gho8ldQGnJAnolk-auSy2DY1SgVYz0tijFbJ9QBvVh0f1tgelyxOoo-MFlDLZE_AWFM7u1AftgjzZs9RXUagppi69K0nEQOrCyu6ltAgU/s1600/suresuccess.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="320" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgIVcNTX0YPVeghy02iJMXK_fLC6Wx6RCFXq0gho8ldQGnJAnolk-auSy2DY1SgVYz0tijFbJ9QBvVh0f1tgelyxOoo-MFlDLZE_AWFM7u1AftgjzZs9RXUagppi69K0nEQOrCyu6ltAgU/s320/suresuccess.png" width="320" /></a></div>
<br />
If it failed, you can actually check the logs for the output the script gave. Go to "%programdata%\Veeam\Backup\". It should contain a folder with your &lt;Surebackup job name&gt;. In this folder, there should be a log called Job.&lt;your Surebackup job name&gt;.log. You can open it with notepad.<br />
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEi7miwriZTAobARuq0XurZu2knu4IUdfJyLeChKJ5nxRLLsZGmmuWKnpXM_AIxS8_mbfzYoCBRKtErIssTQ4gN5ML8AtVGKhBUjyujwNIbN_0RQQQI0cJq0m17upD1MXa_PGgng43G5pcQ/s1600/20151130+16+37+48.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="109" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEi7miwriZTAobARuq0XurZu2knu4IUdfJyLeChKJ5nxRLLsZGmmuWKnpXM_AIxS8_mbfzYoCBRKtErIssTQ4gN5ML8AtVGKhBUjyujwNIbN_0RQQQI0cJq0m17upD1MXa_PGgng43G5pcQ/s320/20151130+16+37+48.png" width="320" /></a></div>
<br />
Scroll all the way down in the log and look for "[console]"<br />
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjDtVVt3eTWfQlbyiW7nzf5hv2A_m7XFLsnEsQmAfdbvLGaQoOnpvFCOgRllHOW461oGjCd3y2O4tFByVFrFiVjAq5pti5hu8NcirAdLeZjaXiVEsPbre5ZZLJ0X91hXWgGB7RkLvHMl-M/s1600/20151130+17+37+23.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="94" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjDtVVt3eTWfQlbyiW7nzf5hv2A_m7XFLsnEsQmAfdbvLGaQoOnpvFCOgRllHOW461oGjCd3y2O4tFByVFrFiVjAq5pti5hu8NcirAdLeZjaXiVEsPbre5ZZLJ0X91hXWgGB7RkLvHMl-M/s320/20151130+17+37+23.png" width="320" /></a></div>
<br />
This should give you the output of the console. In this case, everything was ok!</div>
Timothy Dewinhttp://www.blogger.com/profile/14126614276831882160noreply@blogger.com0tag:blogger.com,1999:blog-8345042294447404507.post-5817914526954707412015-11-27T12:04:00.003+01:002015-11-27T12:04:54.251+01:00Veeam Application Report<div dir="ltr" style="text-align: left;" trbidi="on">
As many of you know, although Veeam B&R has an agentless approach, it still makes sure that all applications are consistently flushed just before the backup starts. To do this, Veeam B&R leverages VSS. One thing it also does is try to detect which applications are installed in which VM. This data is collected so that during a restore, you don't have to figure out which VM is holding which application and where exactly the application database is stored inside the VM (for example, for Exchange, it will detect the path(s) leading to the EDB(s)).<br />
<br />
Now a fellow SE colleague requested to add this "application detection" to the main GUI. They wanted to leverage the detection to sort out which VMs have which application installed. Adding it to the main GUI would however make it more complex, but you can actually leverage the data via Powershell.<br />
<br />
So here is a sample script you can use as a starting point:<br />
<a href="https://raw.githubusercontent.com/tdewin/veeampowershell/master/veeam-per-app-detect.ps1">https://raw.githubusercontent.com/tdewin/veeampowershell/master/veeam-per-app-detect.ps1</a><br />
<br />
It generates a nice clean report with all the VMs that have detected applications (yes, even Oracle, so it is v9 ready), grouped per application. The output should look something like the screenshot below:<br />
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiTR9FT2Cs3u3yspKIS-drIU39eQH9PpfO_86WAxkSx-jH_5ehEYW6tk6zSUTWoD-Xr8NRcrRp12nkM10GMBD3XwIT8VkUNO6yfHjpWdpPjmxeyDS6bp7b5G0_4AcUNZ591e98SISLllbU/s1600/veeamapppsreport.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="233" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiTR9FT2Cs3u3yspKIS-drIU39eQH9PpfO_86WAxkSx-jH_5ehEYW6tk6zSUTWoD-Xr8NRcrRp12nkM10GMBD3XwIT8VkUNO6yfHjpWdpPjmxeyDS6bp7b5G0_4AcUNZ591e98SISLllbU/s320/veeamapppsreport.png" width="320" /></a></div>
<br />
Enjoy!</div>
Timothy Dewinhttp://www.blogger.com/profile/14126614276831882160noreply@blogger.com0tag:blogger.com,1999:blog-8345042294447404507.post-65491468692560959682015-10-09T15:44:00.000+02:002015-10-09T15:44:01.939+02:00RPS 0.3.2<div dir="ltr" style="text-align: left;" trbidi="on">
Just a small update (which required some re-engineering under the hood).<br />
<br />
First of all, when you click the backup file size, you get a small "pop-up window". This will tell you the uncompressed and compressed bandwidth usage over multiple intervals. It should help you understand how much processing power you need for a certain amount of input. The first two lines are the uncompressed data in bytes and bits; the second two lines are the 'compressed' data in bytes and bits. The columns indicate your time window. Notice that clicking on the full file or the incremental file gives different output, so you can check full runs vs incremental runs really easily.<br />
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjmFIZxk-5dm34oUNsij7EaTE3Bw2TlmnXT8FVmfkyo6nPe50yJEFh8pUeBeBFhhOrmaYgvFO1B2ln6YWbWuGvZ-rAY2ZC2C5w8f9Dw4o7DXE_xgDxeZIvMeBRxA2eGyz6qnHE1Q4wIiQA/s1600/rps-bandwith.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="127" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjmFIZxk-5dm34oUNsij7EaTE3Bw2TlmnXT8FVmfkyo6nPe50yJEFh8pUeBeBFhhOrmaYgvFO1B2ln6YWbWuGvZ-rAY2ZC2C5w8f9Dw4o7DXE_xgDxeZIvMeBRxA2eGyz6qnHE1Q4wIiQA/s320/rps-bandwith.png" width="320" /></a></div>
<br />
The second feature is available when you click the total file size. It will give you a table overview of the output, which you can easily copy to Excel or Calc. The numbers are all in GB so they give predictable output.<br />
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhhKm7MQnPmsYB1xeCYUCeM8WjzjQR-yZ4jXitLv74jUby_k_h9S5cWUD7BrzGpZc1uTmd1bhiFCHuloGNv-AZMKpDTUQiZ0dwaFFiw1qC8Z4dOd_c3_A3rTJFfO4badmNKWH6EbQ_m6UA/s1600/rps-totals.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="220" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhhKm7MQnPmsYB1xeCYUCeM8WjzjQR-yZ4jXitLv74jUby_k_h9S5cWUD7BrzGpZc1uTmd1bhiFCHuloGNv-AZMKpDTUQiZ0dwaFFiw1qC8Z4dOd_c3_A3rTJFfO4badmNKWH6EbQ_m6UA/s320/rps-totals.png" width="320" /></a></div>
<br />
The final feature is a very small one, but I really like it because it took literally 10 minutes to code and will remove some frustration. During a recent conf call with my colleague Johan Huttenga, I noticed he was struggling with inputting 30TB of data. He needed to multiply 30 by 1024 to get exactly 30TB and not 29.99TB. So in this version, you can input 30TB and it will be automatically converted to "30720". Same for 1PB to "1048576". The input is case insensitive so tb, Tb, TB, pb, PB, pB, etc. should all work. For example, fill in 1TB as shown below<br />
<br />
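Conceptually, the conversion does something like the following (RPS itself implements this in JavaScript in the browser; this is just a hypothetical Powershell re-implementation of the same idea):

```powershell
# Parse a size string with an optional TB/PB suffix and return the value in GB.
# -match is case insensitive by default, so "30tb" and "30TB" both work.
function Convert-ToGB([string]$value) {
    if ($value -match '^(\d+(?:\.\d+)?)\s*(TB|PB)?$') {
        $number = [double]$Matches[1]
        switch ("$($Matches[2])".ToUpper()) {
            'TB'    { return $number * 1024 }          # 30TB -> 30720
            'PB'    { return $number * 1024 * 1024 }   # 1PB  -> 1048576
            default { return $number }                 # no suffix: already GB
        }
    }
    throw "Cannot parse '$value'"
}

Convert-ToGB "30TB"   # -> 30720
```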
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEihNcHevTXVUHEmjtlwG5N1Up6phRrJAPwDDjipB-uzrJnn5vpaW1rrvVKzFXxtPtGY2uIitu7Tm9dhx9yb_Ual9m8livTnPciAiOJP8_QMWvCDmFyX9xLWLFgm_3bMlU9zBx0WOXPKC40/s1600/1tb.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="83" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEihNcHevTXVUHEmjtlwG5N1Up6phRrJAPwDDjipB-uzrJnn5vpaW1rrvVKzFXxtPtGY2uIitu7Tm9dhx9yb_Ual9m8livTnPciAiOJP8_QMWvCDmFyX9xLWLFgm_3bMlU9zBx0WOXPKC40/s320/1tb.png" width="320" /></a></div>
<br />
As soon as you push enter, tab to another input or click on the simulate button, the input will be dynamically changed.<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiPTGRSk7uS0cIHyAfrp4bBM9dVmRr4Y6JFagCrbYu6abZDShcp0PSNAEd_dAMO_e4xclBVNJZqZnjUqwpWl8LkjO2CeZMxFXb9U0r2B7l9T3Ec6IOLW64l1Ih1Z0wLUrdSz23NBW4R-W8/s1600/1tbtogb.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="83" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiPTGRSk7uS0cIHyAfrp4bBM9dVmRr4Y6JFagCrbYu6abZDShcp0PSNAEd_dAMO_e4xclBVNJZqZnjUqwpWl8LkjO2CeZMxFXb9U0r2B7l9T3Ec6IOLW64l1Ih1Z0wLUrdSz23NBW4R-W8/s320/1tbtogb.png" width="320" /></a></div>
<br /></div>
Timothy Dewinhttp://www.blogger.com/profile/14126614276831882160noreply@blogger.com0