If you have ever tried to query data from WMI, you might have noticed that there are preformatted data and raw data classes. Recently I tried to make do with the preformatted data, but after a while I saw it was pretty inaccurate for monitoring, as it only shows you that specific moment. This is especially true for disk writes, which seem to be buffered, meaning you get huge spikes and huge dips as the buffers empty all at once.
Well, I tried the raw classes and couldn't make sense of them at first. After my google-kungfu did not seem to yield any results (mostly people asking the same question), I tried to make sense of the data via the good old trial-and-error method to see if I could squeeze some reasonable results out of it.
The biggest issue with raw classes is that when you take samples, the values are cumulative: the system just keeps adding to them over time. So you get a huge number that doesn't mean anything on its own. What you need to do is take two samples, let's call them "old" and "new". Your real value over the interval would be:
(new val - old val) / (new timestamp - old timestamp)
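As a minimal sketch of that pattern (using DiskReadsPersec, which in the raw class is a plain cumulative operation count despite its name, and the Timestamp_Sys100NS property covered next):

# Take two samples of the raw disk class and compute a rate from the deltas.
# Assumes a "_Total" instance exists; instance names vary per system.
$old = gwmi Win32_PerfRawData_PerfDisk_PhysicalDisk -Filter "Name='_Total'"
Start-Sleep -Seconds 1
$new = gwmi Win32_PerfRawData_PerfDisk_PhysicalDisk -Filter "Name='_Total'"

# Timestamp_Sys100NS is in 100-nanosecond units, so *1e-7 converts to seconds
$seconds = ($new.Timestamp_Sys100NS - $old.Timestamp_Sys100NS) * 1e-7
($new.DiskReadsPersec - $old.DiskReadsPersec) / $seconds  # reads per second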
Well with "BytesPerSec", I could not get it to work until I realised, bytes per sec is already written in an interval. So for "BytesPerSec", it seems you have to look at the "Timestamp_Sys100NS". To convert this to seconds, you multipy it with "*1e-7". (google "1*1e-7 seconds to nanoseconds" to understand why). So what you get is :
(New BytesPerSec - Old BytesPerSec) / ((new Timestamp_Sys100NS - old Timestamp_Sys100NS)*1e-7)
That seems strange, because BytesPerSec is already in seconds. On a 1-second interval you would not need to divide, because the difference between timestamps would be around 1 anyway. However, consider a 5-second sample interval. In this case, the system seems to add 5 samples of "bytespersec". By dividing by 5, you get an average over the interval. Well, it seems to be more complex than that. If you put the sample interval at 100ms, the formula still seems to work, which basically tells me the system is constantly adding to the number but adjusting it to "bytes per second". For example, in the script below, I sleep 900ms because that allows PowerShell to do around 100ms of querying/calculations.
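To see that averaging behaviour for yourself, here is a small sketch along the same lines (the 5-second gap is just an illustrative choice):

# Sample 5 seconds apart; the raw counter keeps accumulating, so dividing
# by the elapsed seconds yields the average rate over the whole interval.
$old = gwmi Win32_PerfRawData_PerfDisk_PhysicalDisk -Filter "Name='_Total'"
Start-Sleep -Seconds 5
$new = gwmi Win32_PerfRawData_PerfDisk_PhysicalDisk -Filter "Name='_Total'"

$seconds = ($new.Timestamp_Sys100NS - $old.Timestamp_Sys100NS) * 1e-7  # ~5
($new.DiskBytesPersec - $old.DiskBytesPersec) / $seconds  # average bytes/sec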
Now, my method of discovery is not very scientific (I could not correlate it to any documentation), but it does seem to add up if you compare it live to Task Manager. So below is a link to a sample PowerShell script you can use to check the values. Although I'm writing a small program in C#, I can only recommend PowerShell for playing with WMI, as it allows you to experiment without having to recompile all the time, and to discover the values via, for example, the PowerShell ISE.
https://github.com/tdewin/veeampowershell/blob/master/wmi_raw_data_bytespersec.ps1
The script queries info from "Win32_PerfRawData_PerfDisk_PhysicalDisk" and "Win32_PerfRawData_Tcpip_NetworkInterface". If you are looking for a suitable class, you can actually use this one-liner:
$lookfor = "network";gwmi -List | ? { $_.name -imatch $lookfor } | select name,properties | flAdjust $lookfor, for what you are actually looking for.
Update:
There is actually a "scientific method" to do it. More playing and googling turned up interesting results.
First off, look up your class and counter. So let's say I want "BytesPerSec" from "Win32_PerfRawData_PerfDisk_PhysicalDisk". You would google it, and hopefully you get to this class page:
https://msdn.microsoft.com/en-us/library/aa394308%28v=vs.85%29.aspx
This would tell you about the counter:
DiskBytesPerSec
Data type: uint64
Access type: Read-only
Qualifiers: CounterType (272696576) , DefaultScale (-4) , PerfDetail (200)
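Instead of clicking through the docs, you can also read those qualifiers straight from the class definition. A quick sketch (on some systems you may need to retrieve the class with amended qualifiers to see all of them):

# List the qualifiers (CounterType, DefaultScale, PerfDetail, ...) of the counter
$class = gwmi -List Win32_PerfRawData_PerfDisk_PhysicalDisk
$class.Properties["DiskBytesPersec"].Qualifiers | select Name,Value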
If you click enough, you would end up on this page:
https://msdn.microsoft.com/en-us/library/aa389383%28v=vs.85%29.aspx
That CounterType, 272696576, is actually PERF_COUNTER_BULK_COUNT. If you google "PERF_COUNTER_BULK_COUNT", you might end up here:
https://msdn.microsoft.com/en-us/library/ms804018.aspx
This would tell you to use the following formula:
(N1 - N0) / ((D1 - D0) / F), where the numerator (N) represents the number of operations performed during the last sample interval, the denominator (D) represents the number of ticks elapsed during the last sample interval, and the variable F is the frequency of the ticks.
This counter type shows the average number of operations completed during each second of the sample interval. Counters of this type measure time in ticks of the system clock. The variable F represents the number of ticks per second. The value of F is factored into the equation so that the result is displayed in seconds. This counter type is the same as the PERF_COUNTER_COUNTER type, but it uses larger fields to accommodate larger values.
This might still not be so trivial, but the numerator should be fairly clear. It is the same value we used before, namely newvalue - oldvalue. The denominator is actually ((D1 - D0) / F), which would be (newticks - oldticks)/frequency. This translates to:
($new.Timestamp_PerfTime - $old.Timestamp_PerfTime)/($new.Frequency_PerfTime)
Interestingly enough, "$new.Frequency_PerfTime" is always the same, because it is actually the Hz speed of your processor. So it basically tells you how many ticks it can handle per second. Timestamp_PerfTime is, I guess, how many ticks have already passed. So by deducting the old from the new, you get the number of ticks that elapsed between your samples. If you divide that by the frequency, you get how many "seconds" have passed (can be a float). That means you don't have to convert to nanoseconds, and you can use the formula directly like this:
$time = ($new.Timestamp_PerfTime - $old.Timestamp_PerfTime)/($new.Frequency_PerfTime)
So the total formula would be:
$dbs = $new.DiskBytesPersec - $old.DiskBytesPersec
$time = ($new.Timestamp_PerfTime - $old.Timestamp_PerfTime)/($new.Frequency_PerfTime)
$cookeddbs = $dbs/$time
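Putting it all together, a minimal end-to-end sketch (again assuming a "_Total" instance; 1MB is PowerShell's built-in size literal):

# Two samples, 900ms apart, cooked with the PERF_COUNTER_BULK_COUNT formula
$old = gwmi Win32_PerfRawData_PerfDisk_PhysicalDisk -Filter "Name='_Total'"
Start-Sleep -Milliseconds 900
$new = gwmi Win32_PerfRawData_PerfDisk_PhysicalDisk -Filter "Name='_Total'"

$dbs  = $new.DiskBytesPersec - $old.DiskBytesPersec
$time = ($new.Timestamp_PerfTime - $old.Timestamp_PerfTime) / $new.Frequency_PerfTime
"{0:N2} MB/s" -f (($dbs / $time) / 1MB)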
Running the method mentioned in the script and this method gives you almost the same results, but I guess the tiny differences have to do with rounding. Anyway, the method in this update should be the most accurate, as it is what Microsoft describes as using themselves for cooking up the data. It should also give you a more "stable" way to calculate other values, instead of trial and error.