This guy provided the solution. For us the issue was fixed by restarting TSM, creating a new snapshot, and then choosing the "Delete All" option.
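For reference, here is a minimal PowerCLI sketch of that recipe; the VM name is a placeholder, and I'm assuming the host's TSM service can be restarted from your vCenter session:

$vm = Get-VM -Name "MyVM"        # placeholder VM name
$esx = $vm.VMHost

# Restart the TSM (Tech Support Mode) service on the host
Get-VMHostService -VMHost $esx | Where-Object { $_.Key -eq "TSM" } | Restart-VMHostService -Confirm:$false

# Create a new snapshot and then remove all snapshots ("Delete All"),
# which forces consolidation of the left-over delta disks
New-Snapshot -VM $vm -Name "consolidate-helper" | Out-Null
Get-Snapshot -VM $vm | Remove-Snapshot -RemoveChildren -Confirm:$false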
However, to quickly check all the VMs, I wrote a small PowerCLI script that identifies the machines affected by the problem. It is quite simple, but it might help you in moments of stress ;)
# Matches delta-disk file names like "examplevm-000001.vmdk"
[regex]$disksnap = "[-][0-9]{6,6}[.]vmdk"

foreach ($vm in (Get-VM))
{
    $first = $true
    foreach ($disk in (Get-HardDisk -VM $vm | Select-Object Filename))
    {
        if ($disk.Filename -match $disksnap)
        {
            if ($first)
            {
                # Print the VM header and its snapshot list only once per VM
                $first = $false
                Write-Host "-----------------------------"
                Write-Host "VM Name  : " $vm.Name
                Write-Host "Snapshots: "
                foreach ($snap in ($vm | Get-Snapshot))
                {
                    Write-Host $snap.Name
                }
            }
            Write-Host "> " $disk.Filename
        }
    }
}
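The script assumes you already have an open PowerCLI session; something like the following, where the vCenter name and the script file name are just examples:

Connect-VIServer -Server "vcenter.example.local"
.\Find-DeltaDisks.ps1    # the script above saved to a file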
In one case, restarting TSM was not enough. The fix was to restart the management agents. This seems to clear the TSM agent, after which we could consolidate the snapshots again.
(In ESXi you can do this via SSH using "services.sh restart".)
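If SSH is not already enabled on the host, you can start it from PowerCLI before connecting; a small sketch, with the host name as a placeholder:

$esx = Get-VMHost -Name "esx01.example.local"
Get-VMHostService -VMHost $esx | Where-Object { $_.Key -eq "TSM-SSH" } | Start-VMHostService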
BTW, some time ago I had some VMs whose disks were referencing delta disks. The only solution was cloning the disks via vmkfstools:
vmkfstools -i "/vmfs/volumes/Datastore/examplevm/examplevm-000001.vmdk" "/vmfs/volumes/Datastore 2/newexamplevm/newexamplevm.vmdk"
Before you delete the disks from the VM (so you can attach the cloned disks afterwards):
- Make a note of the SCSI controller type
- Make a note of the disk order on the virtual SCSI controller (disk 1 -> 0:0, disk 2 -> 0:1, ...); the sketch below can help with this
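If you prefer to collect that information with PowerCLI instead of the GUI, a rough sketch like this should list the controller type and the SCSI positions (the VM name is a placeholder):

$vm = Get-VM -Name "examplevm"

# Controller type (LSI Logic, ParaVirtual, ...)
Get-ScsiController -VM $vm | Select-Object Name, Type

# Disk order on the virtual SCSI controller (bus:unit)
foreach ($disk in (Get-HardDisk -VM $vm))
{
    $dev  = $disk.ExtensionData
    $ctrl = $vm.ExtensionData.Config.Hardware.Device | Where-Object { $_.Key -eq $dev.ControllerKey }
    "{0} -> {1}:{2}" -f $disk.Name, $ctrl.BusNumber, $dev.UnitNumber
}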