PerfView: Performance Analysis

Microsoft has rather quietly released PerfView, “a performance analysis tool focusing on ETW information (ETL files) as well as CLR memory information (heap dumps). It can collect and view ETL files as well as XPERF CSV files. Powerful grouping operators allow you to understand performance profiles in ways other tools can’t. PerfView is used internally at Microsoft by a number of teams and is the primary performance investigation tool on the .NET Runtime team.” I found it by accident while reading an article about performance improvements in Visual Studio 2010 that mentioned the publication of the PerfView performance analysis tool!

In terms of functionality it is related to the CLR Profiler, which “allows developers to see the allocation profile of their managed applications.” Both tools are useful for optimizing memory usage, except that PerfView also supports native and mixed applications.
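If you prefer the command line to the GUI, PerfView can be driven from there as well; the commands below are a rough sketch of the basic usage as I understand it (MyApp is just a placeholder – check PerfView’s built-in help for the exact options in your version):

    REM Collect a machine-wide ETW trace into an ETL file (stop collection from the dialog that opens)
    PerfView collect

    REM Launch an application and collect events only for the duration of that run
    PerfView run MyApp.exe

    REM Take a snapshot of the .NET GC heap of a running process for memory investigation
    PerfView HeapSnapshot MyApp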

Today’s Sites/Blogs

  • Ask the Performance Team (Thoughts from the EPS Windows Server Performance Team) – in their own words “… the Performance team covers a broad range of seemingly unrelated areas such as Core OS Performance, Printing, WMI and Terminal Services. Simply put – we’re a bit of a “catch-all” team. […] Because we cover such a wide spectrum of technology, we see many different types of issues – some more frequently than others. So we thought we should share with the broader technical community. We’ll be sharing troubleshooting tips and technical information on areas of our specialty that we cover.”
  • 45+ Excellent Code Snippet Resources and Repositories – it is what it says it is.

Data Access Optimization in SQL Server

The “Top 10 steps to optimize data access in SQL Server” article series at CodeProject, despite some mistakes and debatable points, is not a bad starter’s guide to SQL Server optimization.

JetBrains dotTrace Profiler and Dual Core CPU

We need to do some profiling on our project, so I decided to evaluate JetBrains dotTrace Profiler 3.1. First of all, I tried to profile the sample application included with the tool, and I was really surprised when I saw the reported times – they were “randomly distributed” over a range of plus or minus billions of milliseconds :). I couldn’t believe it! I looked for a solution on Google, but the first few pages of search results proved totally useless.

Fortunately, I got my “enlightenment” at this point – I recalled that I had had problems with WPF animations on my dual-core AMD Athlon64 X2 processor because of system timers: they were not in sync between cores, and the QueryPerformanceCounter API was returning “floating” numbers instead of a non-decreasing sequence, thus causing “jumpy” animations. The solution then was to install the AMD Dual-Core Optimizer (its description talks about gaming, RDTSC, etc., so it is not necessarily obvious that the thing can help with WPF :). A small code sketch of the underlying issue and the usual application-side workaround follows at the end of this story.

I went, downloaded, and installed the tool again, and voilà – nice times in the profiler! :) Now, the problem is that I am absolutely sure I had already installed this tool a few months ago when I hit the WPF problem. So how could it be that the tool was not working and I had to install it again? Honestly, I don’t know… Most likely, some new drivers or system updates disabled or otherwise broke it. Which brings me to the (rhetorical?) question: how can “normal” people use today’s computers (and technologies in general) with all this complexity?
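For the curious, here is a minimal C++ sketch (my own illustration, not code from dotTrace or the AMD tool) of the usual application-side workaround for unsynchronized per-core counters: pin the measuring thread to a single CPU with SetThreadAffinityMask so that QueryPerformanceCounter always reads the same core’s counter, and check that the readings never decrease:

    #include <windows.h>
    #include <cstdio>

    int main()
    {
        // Pin this thread to CPU 0 so all counter reads come from one core.
        // (On an affected machine, removing this line lets you see the jumps.)
        DWORD_PTR oldMask = SetThreadAffinityMask(GetCurrentThread(), 1);

        LARGE_INTEGER freq, prev, cur;
        QueryPerformanceFrequency(&freq);
        QueryPerformanceCounter(&prev);

        // Sample the counter repeatedly and verify it never goes backwards -
        // a backwards jump is exactly what makes elapsed-time math in a
        // profiler produce huge positive or negative values.
        for (int i = 0; i < 1000000; ++i)
        {
            QueryPerformanceCounter(&cur);
            if (cur.QuadPart < prev.QuadPart)
                std::printf("counter went backwards by %lld ticks\n",
                            prev.QuadPart - cur.QuadPart);
            prev = cur;
        }

        std::printf("counter frequency: %lld ticks/sec\n", freq.QuadPart);

        // Restore the original thread affinity.
        if (oldMask) SetThreadAffinityMask(GetCurrentThread(), oldMask);
        return 0;
    }

The AMD Dual-Core Optimizer and the affinity trick attack the same problem at different levels: the tool keeps the per-core counters in sync system-wide, while pinning the thread simply avoids ever comparing counter values taken on two different cores.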

Today’s Sites/Blogs