I've been using PAL for many years, since version 1.2, and it has proved itself time and time again to be probably the most valuable tool I have, along with PowerShell. Presently I'm on 2.3.2 on a dedicated HP DL380 G5 running Win7 Pro x64,
2 x Xeon X5260 CPUs, 32 GB RAM, and plenty of HDD space to keep all my TSV and report files. It also runs IIS to serve the reports to Devs and Engineers.
Most of my perfmon (TSV) log files average around 300 MB, with counters from System Overview, SQL, and BizTalk. I run all reports with 4 threads, but they take a looonnnggg time to process.
This might seem a bit awkward, but when I ran the same kind of reports on the same machine with 1 CPU and 2 threads, it would freeze to the point where I actually had to power it off and on, and then run the report again with 1
thread. By that logic, 2 CPUs should take 3 threads, and with 4 it would probably freeze up, but fortunately it doesn't. I know the Process part of the analysis takes a really long time, because of those processes that come and go, but is there
a way to put some nitro into the overall analysis?
I ran an experiment: generating a report based on the System Overview template counters and processing all counters in 10-minute slices. The perfmon TSV file is 10 MB and has 3,988 counter instances, gathered at a 5 s sampling interval
over 318 samples (26 min). For optimum performance I also excluded PowerShell from the antivirus real-time scanner. No other tweaks or changes were made.
1 thread took 00:25:51.4200000
2 threads took 00:38:54.3996000
3 threads took 00:35:29.8524000
4 threads took 00:34:06.7994723
5 threads took 00:33:27.3986307
6 threads took 00:27:26.1940614
7 threads took 00:27:23.3915625
8 threads took 00:25:09.1240301
9 threads took 00:25:13.1816888
10 threads took 00:27:31.9212992
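In case anyone wants to compare their own runs, here's a quick Python sketch that turns the times above into speedup figures relative to the 1-thread baseline (the times are just copied from my runs; nothing PAL-specific here):

```python
# Wall-clock processing times from my runs, keyed by thread count.
times = {
    1: "00:25:51.4200000",
    2: "00:38:54.3996000",
    3: "00:35:29.8524000",
    4: "00:34:06.7994723",
    5: "00:33:27.3986307",
    6: "00:27:26.1940614",
    7: "00:27:23.3915625",
    8: "00:25:09.1240301",
    9: "00:25:13.1816888",
    10: "00:27:31.9212992",
}

def to_seconds(hms: str) -> float:
    """Convert an HH:MM:SS.fffffff string to total seconds."""
    h, m, s = hms.split(":")
    return int(h) * 3600 + int(m) * 60 + float(s)

baseline = to_seconds(times[1])
for threads in sorted(times):
    secs = to_seconds(times[threads])
    print(f"{threads:2d} threads: {secs:7.1f} s  "
          f"speedup vs 1 thread: {baseline / secs:.2f}x")
```

What jumps out is that 8 threads only barely beats 1 thread (about 1.03x), and 2-5 threads are actually slower, which makes me suspect the bottleneck on my box isn't raw CPU at all.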
Are your processing times similar? Worse or better?
How can we "overclock" PAL? I know it will probably depend on a combination of factors, but I would really like to hear your views on this.