Veeam has an excellent framework for searching and restoring files on Windows. The one-click file restore is a feature well appreciated by users. However, this functionality is not available for Linux servers. I thought about this and came up with some possible solutions.
Using the Veeam FLR
The first possible solution is to use the Veeam FLR appliance. In the end it is just a Linux appliance and, guess what, you can simply log on to it. You can find all the info you need in the KB article (http://www.veeam.com/kb1447).
Once you are logged in, you can use the "mount" command to find out where Veeam mounts the partitions. This seems to be the fairly standard location "/media". You can then dive into those partitions and use "find" to locate your file. In the example below you can see I used the FLR to search for the file /tmp/processfollow.
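As a rough sketch, assuming the backup ends up mounted under /media and the file you are after is called processfollow, the search on the appliance would look something like this:
$ mount | grep /media
$ find /media -name "processfollow"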
Once you know the path, you can go back to the Explorer and perform the restore operation.
Using the native mlocate method
Linux has a standard tool for indexing files. This tool, called mlocate, can be easily installed on a Red Hat based system by executing:
$ yum install mlocate
To create the initial database, just run the updatedb command. Whenever you want to refresh the database, you simply run the same command again:
$ updatedb
The great thing is that CentOS, for example, automatically creates a daily cron job so that this updating happens automatically. Just take a look at "/etc/cron.daily/mlocate.cron". In the script you will also see that CentOS uses renice and ionice so that the indexing process does not take all the available resources.
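On my test box the cron script boils down to something like the sketch below; the exact contents differ per distribution and version, so treat it as an approximation:
#!/bin/sh
# /etc/cron.daily/mlocate.cron (approximate contents)
# lower the CPU and I/O priority so indexing does not hog the box
renice +19 -p $$ >/dev/null 2>&1
ionice -c2 -n7 -p $$ >/dev/null 2>&1
/usr/bin/updatedb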
Once you have a database, you can use "locate" to find a file. For example
$ locate findthisfile
Multiple local versions of the index
The index or database is just a flat file that you can find under "/var/lib/mlocate/mlocate.db". In fact, you could adjust your cron job so that it copies the index file and renames it using the current date. You can then use locate to find a file that might already have been deleted from disk. You can see an example below; the copy statement is:
$ cp /var/lib/mlocate/mlocate.db /var/lib/mlocate/mlocate-$(date +%y%m%d).db
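As a minimal sketch, the adjusted daily cron job could look like this; the 30-day retention is just an assumption you would tune yourself:
#!/bin/sh
# refresh the index, then keep a dated copy for point-in-time searches
/usr/bin/updatedb
cp /var/lib/mlocate/mlocate.db /var/lib/mlocate/mlocate-$(date +%y%m%d).db
# prune dated copies older than 30 days so the archive does not grow forever
find /var/lib/mlocate -name "mlocate-*.db" -mtime +30 -delete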
Then you can use the "-d" parameter to search in an older index:
$ locate -d /var/lib/mlocate/mlocate-<yymmdd>.db findthisfile
In the screenshot below you can see a trial run, where I am unable to find the file in the current database but I am able to find it in an older index.
Multiple versions of the index on a remote server
The other great thing is that you seem to be able to copy those indexes to a central server and use locate there to search for files that are, or once were, on a specific server.
In my example I have a central server at 192.168.149.45. I used the following statement to copy my index to this central server:
$ scp /var/lib/mlocate/mlocate.db index@192.168.149.45:/home/index/$(date +%y%m%d)-$(hostname).db
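To automate this, a small nightly cron script on each client could push the index over SSH; this sketch assumes key-based authentication for the index user is already in place:
#!/bin/sh
# push the latest mlocate index to the central search server
INDEXHOST=192.168.149.45
scp /var/lib/mlocate/mlocate.db index@$INDEXHOST:/home/index/$(date +%y%m%d)-$(hostname).db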
Then I created a small script on this central server called lsearch:
#!/bin/sh
# lsearch: run the search pattern given as $1 against every saved index, newest first
INDEXDIR=/home/index
for curdb in $(ls -t "$INDEXDIR"/*.db)
do
    echo ">>>>> $curdb"
    locate -d "$curdb" "$1"
done
After making it executable with chmod, I am able to search for a file using ./lsearch:
$./lsearch findthisfile
Of course, I am not sure how well this performs on bigger machines or with a larger number of servers. The script itself is very basic, but I do hope it might inspire some people to create nicer and better implementations.
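As one possible refinement, purely as a sketch, you could accept an optional second argument so the search is limited to the indexes of a single server. This assumes the yymmdd-hostname.db naming used above:
#!/bin/sh
# lsearch with an optional host filter: ./lsearch <pattern> [hostname]
INDEXDIR=/home/index
HOST=${2:-*}
for curdb in $(ls -t $INDEXDIR/*-$HOST.db)
do
    echo ">>>>> $curdb"
    locate -d "$curdb" "$1"
done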