Nov 30, 2010 at 6:59 PM
Edited Dec 2, 2010 at 4:53 PM
Sorry for the late reply...
The end result was that after 7 days, it was still chunking away. I ended up canceling it and moving all of the data to a server we had with 8 GB of RAM and quad dual-core CPUs. It took about 2 days there.
Behavior was similar - RAM usage sat at about 4.8 GB consistently, but obviously with that much more headroom, it still let me use the system. Again, no noticeable CPU or disk activity.
A word on what I was actually doing:
This was an Exchange 2007 server that we had about 6 days of data for. One thing I found out was that they had run perfwiz with the Exchange settings (thus recording every counter imaginable), but with the expected capture time set to 1 hour. So... I was chunking three days of data, 200 MB each - 28 files totaling about 5.6 GB of data with a RIDICULOUS number of samples (the 1-hour setting means a sample every 5 seconds). So in hindsight - not so smart....
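For scale, here's the quick math on those logs. The inputs (5-second interval, 3 days, 28 files of ~200 MB) are the figures from my setup above; the rest is just arithmetic:

```python
# Back-of-envelope math for the counter logs described above.
# Input figures come from the post; the totals are plain arithmetic.

SAMPLE_INTERVAL_S = 5   # interval implied by perfwiz's 1-hour setting
DAYS = 3                # portion of the capture being chunked
FILES = 28
FILE_SIZE_MB = 200

samples_per_counter = DAYS * 24 * 3600 // SAMPLE_INTERVAL_S
total_mb = FILES * FILE_SIZE_MB

print(samples_per_counter)  # 51840 samples per counter over 3 days
print(total_mb)             # 5600 MB, i.e. ~5.6 GB
```

And that 51,840 figure is *per counter* - with the Exchange perfwiz template recording every counter imaginable, the total sample count multiplies out fast.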
Is there any way to do this with better memory management on the part of PAL, or does it just take 4-5 GB of RAM, period? It took about the same amount on the server I moved it to. I wish I knew more about how this works or I would help... I really love the tool in general, but if this is the way of the world in 2.x land, I may have to go back to 1.x land. I will try with less data and bigger sample intervals and see what happens...