A client of mine had an issue where their web application (deployed across multiple servers) would randomly drive the servers to 100% CPU usage and make them unresponsive.

The first action we took was to configure IIS to automatically recycle the Application Pools whenever they use high CPU for more than a few minutes. In the example below we kill AppPools that use more than 80% CPU over a 3-minute interval.

Import-Module WebAdministration   # provides the IIS:\ drive used below

dir IIS:\AppPools | ForEach-Object {
	Write-Host  "Updating $($_.Name) ..."

	$appPoolName = $_.Name
	$appPool = Get-Item "IIS:\AppPools\$appPoolName"
	$appPool.cpu.limit = 80000               # limit is expressed in 1/1000ths of a percent, so 80000 = 80% CPU
	$appPool.cpu.action = "KillW3wp"         # kill the worker process when the limit is exceeded
	$appPool.cpu.resetInterval = "00:03:00"  # CPU usage is measured (and the counter reset) every 3 minutes
	$appPool | Set-Item
}

That solved the immediate problem (the servers stopped becoming unresponsive), but we still had to investigate what was eating all the CPU.

Here is how I proceeded with the troubleshooting:

1. Create a Memory Dump

In Task Manager, right-click the IIS Worker Process (w3wp.exe) and select “Create dump file”.

2. Install Debug Diagnostic Tool

Download and install the Debug Diagnostic Tool.

3. Run Crash & Hang Analysis for ASP.NET / IIS

Add your dump (.dmp) file, select “CrashHangAnalysis”, and click “Start Analysis”.

4. Review Analysis for Problems

The first page of the report immediately suggests that there’s a generic Dictionary being used by multiple threads, and that it is blocking one of them.

A few pages later we can find the threads that are consuming most of the CPU:

If we check those top threads we can see that both are stuck in the same call, which invokes GetVersion() on an API client wrapper. One thread is trying to Insert into the dictionary (to cache the API version), while the other is trying to look up an entry (FindEntry) in the same dictionary.

5. What was the issue?

Long Explanation:
Dictionary<TKey, TValue> is a hash table implementation, and like most hash tables it resolves collisions by chaining: entries whose keys hash to the same bucket (after taking the hash modulo the number of buckets) are stored in a linked list. The problem is that Dictionary is not thread-safe, so multiple threads modifying it concurrently can put it into an invalid state (a race condition).

Most likely different threads were trying to add the same element at the same time (Insert internally calls Resize, which rebuilds those linked chains), leaving a chain - and therefore the whole table - in an inconsistent state. A corrupted chain can end up containing a cycle, and since both Insert() and FindEntry() walk the chain, the threads end up spinning in an infinite loop at 100% CPU.
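
To make this concrete, here is a minimal sketch of the kind of code that can trigger the problem (the ApiClientWrapper / GetVersion / FetchVersionFromApi names are hypothetical, just mirroring the pattern we saw in the dump): a plain Dictionary used as a lazy cache by multiple request threads.

using System.Collections.Generic;

public class ApiClientWrapper   // hypothetical wrapper, only to illustrate the pattern
{
    // shared across all request threads - but Dictionary<,> is NOT thread-safe
    private static readonly Dictionary<string, string> _versionCache = new Dictionary<string, string>();

    public string GetVersion(string endpoint)
    {
        // Two threads can both miss the cache and write concurrently.
        // A concurrent Insert/Resize can corrupt the internal buckets/chains,
        // after which Insert()/FindEntry() may loop forever at 100% CPU.
        if (!_versionCache.TryGetValue(endpoint, out string version))
        {
            version = FetchVersionFromApi(endpoint);   // hypothetical remote call
            _versionCache[endpoint] = version;         // unsynchronized write
        }
        return version;
    }

    private string FetchVersionFromApi(string endpoint) => "1.0";   // stub for illustration
}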

Short Explanation:
Since Dictionary<TKey, TValue> is not thread-safe, multiple threads modifying the same dictionary can put it into an invalid state (race condition). So if you want to share a dictionary across multiple threads you should use a ConcurrentDictionary<TKey, TValue>, which, as the name implies, is a thread-safe class.
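
The fix suggested above is simply to swap the shared cache for a thread-safe structure. Here is a minimal sketch of the same hypothetical cache rewritten with ConcurrentDictionary:

using System.Collections.Concurrent;

public class ApiClientWrapper   // same hypothetical wrapper, now thread-safe
{
    private static readonly ConcurrentDictionary<string, string> _versionCache =
        new ConcurrentDictionary<string, string>();

    public string GetVersion(string endpoint)
    {
        // GetOrAdd never corrupts the dictionary's internal state.
        // The value factory may still run more than once under contention,
        // but for a simple cache like this that is harmless.
        return _versionCache.GetOrAdd(endpoint, e => FetchVersionFromApi(e));
    }

    private string FetchVersionFromApi(string endpoint) => "1.0";   // stub for illustration
}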

It’s a known issue that concurrent access to a Dictionary can cause an infinite loop and high CPU in IIS: Link 1, Link 2, Link 3, Link 4.

6. Advanced Troubleshooting using WinDbg

If the Debug Diagnostic Tool hadn’t given us any obvious clue about the root cause, we could have used WinDbg to inspect the memory dump (it also supports .NET/CLR). See an example here.

Windows Subsystem for Linux (WSL)

If you’re a Windows developer who needs to compile or run Linux binaries on Windows, then the Windows Subsystem for Linux (WSL) is for you. WSL is a tool for developers and sysadmins who need Linux interoperability in Windows.

The most important thing to understand about WSL is that it isn’t based on virtual machines or containers - when you download a supported Linux distro from the Microsoft Store, no virtual machine is created. What WSL provides is just a layer that maps Linux kernel system calls to Windows kernel system calls - this is what allows Linux binaries to run on Windows unmodified. WSL also exposes Windows resources, like the filesystem and networking, as devices that Linux can access.

Instead of using a VM or container, WSL virtualizes a Linux kernel interface on top of the Windows kernel. This means that running WSL requires only a minimal amount of RAM, and when not in use, the WSL driver isn’t loaded into memory - making it much more efficient than a VM-based or container-based solution.

Installing WSL and Ubuntu on Windows Server

Run this in an Administrator PowerShell prompt (you’ll have to reboot after this):

Enable-WindowsOptionalFeature -Online -FeatureName Microsoft-Windows-Subsystem-Linux

Then run this in an Administrator PowerShell prompt to download and install Ubuntu 18.04:

curl.exe -L -o ubuntu-1804.appx https://aka.ms/wsl-ubuntu-1804
Rename-Item ubuntu-1804.appx ubuntu-1804.zip
Expand-Archive ubuntu-1804.zip ubuntu1804

cd ubuntu1804
.\ubuntu1804.exe 

You’ll be asked to choose a Linux username and password.

Installing Redis on Ubuntu (under Windows Server WSL)

Invoke a bash shell as superuser (for the next commands which require root):

sudo bash

(you’ll be asked for the password of the Linux user you created earlier)

Update apt-get and install redis:

apt-get update && apt-get upgrade
apt-get install redis-server

Configuring Redis for external access (Optional)

If you’ll only use Redis from within your protected internal network, you don’t need this.

WSL only provides a translation layer between Linux applications and the Windows kernel, so some core parts of the Ubuntu system (including networking) simply aren’t there - WSL translates Linux system calls into Windows ones, so Ubuntu’s network traffic flows through the exact same TCP/IP stack as the Windows traffic.

This means that to open the Redis server to other machines (or to the public internet) you just have to configure Redis to listen on the correct interfaces and open the appropriate firewall ports (there’s no need for “port forwarding”, since this is not a VM with its own network interfaces).

By default Redis only binds to the loopback (localhost) interfaces. If you open /etc/redis/redis.conf (by running nano /etc/redis/redis.conf) you’ll find a line like bind 127.0.0.1 ::1, which means Redis listens only on the IPv4 loopback (127.0.0.1) and the IPv6 loopback (::1). Change it to bind 0.0.0.0 ::1 to make Redis listen on all interfaces (including public IPs), or, if it’s only for the internal network, list the internal IPs this server should listen on.

And in case you’re exposing it to the internet, you’ll also have to open the port in the Windows Firewall:

netsh advfirewall firewall add rule name="Redis Port 6379" dir=in action=allow protocol=TCP localport=6379

Authentication

If you’re exposing your server to the public internet you’ll have to configure a password, because by default Redis does not accept external connections without one. Still in /etc/redis/redis.conf, uncomment the requirepass line and set a very strong password next to it (Redis is fast enough that weak passwords can be brute-forced very quickly).

Start and test server

Start service:

sudo service redis-server restart

To test the service, run redis-cli and use the following commands:

AUTH yourstrongpassword  (if you have enabled authentication, above)
set user:1 "Oscar"
get user:1

To stop your server you can use:

sudo service redis-server stop

If you want your Windows Server to automatically start this Linux service on every boot, you can configure this command to run at startup (for example as a scheduled task):

C:\windows\system32\wsl.exe -u root service redis-server start


We all love the Dapper micro-ORM.

Many developers have realized that although Entity Framework offers some features that Dapper doesn’t, Dapper is still more flexible (and much, much faster) than EF. And many have realized that picking the right tool for the job sometimes means using both: EF for some things and Dapper for others. One of the areas where Dapper is king is building dynamic queries.

Dynamic SQL

Old-school developers (those who have been coding since the late 90s or early 2000s, probably in VB6, ASP3, or PHP) will probably remember code like this:

string sql = "SELECT * FROM [Product] WHERE 1=1";

if (!string.IsNullOrEmpty(Request["ProductName"]))
   sql += " AND Name LIKE '" + Request["ProductName"].Replace("'", "''") + "'"; // this replace!
if (!string.IsNullOrEmpty(Request["SubCategoryId"]))
    sql += " AND ProductSubcategoryID = " + Request["SubCategoryId"].Replace("'", "''");  // this replace!

// here is where you pray that you've correctly sanitized inputs against sql-injection
var products = cn.Query<Product>(sql);

Basically we were all sending dynamic SQL statements directly to our databases. And since user input can be malicious (see below), we had to manually sanitize it to avoid SQL injection attacks.

Hello Bobby Tables

If you weren’t sanitizing your input correctly, you were at serious risk. SQL injection is probably the most popular web-hacking technique. If you forgot to sanitize something like an “id” querystring, you could be opening a huge vulnerability and possibly exposing your whole database, not only for reading but also for modification.

Parameterized SQL

Even if you’re old school (hey, VB6 guy, I’m looking at you again) you probably know that dynamically building SQL like that is not only error-prone (vulnerable to SQL injection) but also doesn’t benefit from SQL Server’s cached execution plans. You probably used some kind of code-generation tool that helped you build code like this (maybe to invoke a stored procedure, since 3-tier architecture and “Windows DNA” were all the hype in the early 2000s):

Dim strSQL As String
Dim cmd As New ADODB.Command

strSQL = "UPDATE MyTable SET " & vbNewLine _
    & " NEEDS_ID     = @NEEDS_ID, " & vbNewLine _
    & " OBJ_ID       = @OBJ_ID, " & vbNewLine _
    & " OBJ_COMMENTS = @OBJ_COMMENTS, " & vbNewLine _
    & " TIME21_ID    = @TIME21_ID " & vbNewLine _
    & " WHERE ID     = @ID"

With cmd
    .ActiveConnection = Cn
    .CommandText = strSQL
    .Parameters.Append .CreateParameter("@NEEDS_ID", adInteger, adParamInput, 2, 12)
    .Parameters.Append .CreateParameter("@OBJ_ID", adInteger, adParamInput, 2, 23)
    .Parameters.Append .CreateParameter("@OBJ_COMMENTS", adVarChar, adParamInput, 250, "Some text")
    .Parameters.Append .CreateParameter("@TIME21_ID", adInteger, adParamInput, 2, 34)
    .Parameters.Append .CreateParameter("@ID", adInteger, adParamInput, 18, 456)
    .Execute
End With 

I hope millennial developers are not too surprised to learn that parameterized SQL already existed in the last century.

Back to modern software

Time flew by, Java and .NET emerged (and maybe submerged a little?), along came Reflection, bytecode emission, full ORMs and micro-ORMs, 3-tier was deprecated in favor of dozens of modern architectures, and now database access is much easier, right?

Now we don’t have to manually describe each one of our parameters to SQL. Our favorite ORMs will do that for us.

Dapper Example:

var dynamicParams = new DynamicParameters();

string sql = "SELECT * FROM [Product] WHERE 1=1";

if (productName != null)
{
    sql += " AND Name LIKE @productName"; 
    dynamicParams.Add("productName", productName);
}


if (subCategoryId != null)
{
    sql += " AND ProductSubcategoryID = @subCategoryId"; 
    dynamicParams.Add("subCategoryId", subCategoryId);
}

var products = cn.Query<Product>(sql, dynamicParams);

Don’t Repeat Yourself

The example above makes me a little upset for two reasons:

  • I have to pass productName twice - once in the SQL string and once in dynamicParams - and the two names must match.
  • I have to keep those two variables (the SQL statement and the list of parameters) separate, even though they are closely related to each other.

String Interpolation Internals

String interpolation was introduced in C# 6 (released in 2015).

Instead of doing code like this:

string name = "Rick";
int accesses = 10;
string output = string.Format("{0}, you've been here {1:n0} times.",
                              name, accesses);

You could do this instead:

string name = "Rick";
int accesses = 10;
string output = $"{name}, you've been here {accesses:n0} times.";

Internally, when you write an interpolated string (starting with $), the compiler can turn it into a FormattableString object, which contains both the format template (as if it were {0}, you've been here {1:n0} times.) and the list of arguments (string name and int accesses).

If your method expects a regular string, the interpolated string is simply converted into a regular string, and you get the same behavior as if you had passed the result of string.Format to your method. However, if your method expects a FormattableString, then you get access to the format and the arguments isolated from each other.
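
To illustrate (a small standalone sketch, not code from any library): the very same interpolated string can be received either as a plain string or as a FormattableString, and only in the second case do we keep the template and the arguments separate.

using System;

class Program
{
    static void TakesString(string s) =>
        Console.WriteLine(s);                       // "Rick, you've been here 10 times."

    static void TakesFormattable(FormattableString fs)
    {
        Console.WriteLine(fs.Format);               // "{0}, you've been here {1:n0} times."
        Console.WriteLine(fs.ArgumentCount);        // 2
        Console.WriteLine(fs.GetArgument(0));       // "Rick"
    }

    static void Main()
    {
        string name = "Rick";
        int accesses = 10;

        TakesString($"{name}, you've been here {accesses:n0} times.");       // lowered to string.Format
        TakesFormattable($"{name}, you've been here {accesses:n0} times.");  // compiler builds a FormattableString
    }
}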

FormattableString can be useful, for example, if we want to build parameterized SQL statements while letting users write their queries as if they were doing plain string concatenation:

QueryBuilder ParseSQL(FormattableString query)
{
    QueryBuilder myWrapper = new QueryBuilder();

    // query.Format contains the template with numbered placeholders,
    // e.g. "SELECT * FROM [Product] WHERE [Name] LIKE {0}"
    // Replace {0} by "@p0", {1} by "@p1", etc.
    myWrapper.SQL = System.Text.RegularExpressions.Regex.Replace(
        query.Format, @"\{(\d+)(:[^\}]*)?\}", m => "@p" + m.Groups[1].Value);

    // Register each interpolated value as a Dapper parameter (@p0, @p1, ...)
    var dapperArgs = new Dapper.DynamicParameters();
    var args = query.GetArguments();
    for (int i = 0; i < args.Length; i++)
        dapperArgs.Add("p" + i, args[i]);
    myWrapper.Arguments = dapperArgs;

    return myWrapper;
    // now anyone can use Dapper like
    // var pocos = connection.Query<POCO>(myWrapper.SQL, myWrapper.Arguments);
}

DapperQueryBuilder

Based on the idea above, I’ve created DapperQueryBuilder - a simple wrapper around Dapper that allows us to pass SQL parameters using string interpolation.

You can code like this:

var query = cn.QueryBuilder($"SELECT * FROM [Product] WHERE 1=1");

if (productName != null)
    query.Append($"AND Name LIKE {productName}"); 

if (subCategoryId != null)
    query.Append($"AND ProductSubcategoryID = {subCategoryId}"); 

var products = query.Query<Product>(); 

Although it looks like you’re just building dynamic SQL (with the values embedded as literals), what you actually get is parameterized SQL.

In this case, query will have this underlying statement: SELECT * FROM [Product] WHERE 1=1 AND Name LIKE @p0 AND ProductSubcategoryID = @p1, and it will also hold the parameters @p0 = productName and @p1 = subCategoryId.

To sum up: instead of using Dapper’s .Query<T> extension (which extends IDbConnection and takes a SQL string plus a list of parameters), you use the QueryBuilder() extension, which creates a QueryBuilder on which you can dynamically (and in a single statement) add new parameters together with the associated SQL clauses.

Quickstart

If you liked it and want to start using it right now,

  1. Install the NuGet package Dapper-QueryBuilder
  2. Start using it like this:
using DapperQueryBuilder;
// ...

cn = new SqlConnection(connectionString);

// If you have all your parameters at once and just want to benefit from string interpolation:
var products = cn.QueryBuilder($@"
    SELECT ProductId, Name, ListPrice, Weight
    FROM [Product]
    WHERE [ListPrice] <= {maxPrice}
    AND [Weight] <= {maxWeight}
    AND [Name] LIKE {search}
    ORDER BY ProductId").Query<Product>();

Or you can build dynamic conditions like this:

using DapperQueryBuilder;
// ...

cn = new SqlConnection(connectionString);

// If you want to dynamically add conditions
var q = cn.QueryBuilder($@"
    SELECT ProductId, Name, ListPrice, Weight
    FROM [Product]
    WHERE 1=1 ");

q.AppendLine($"AND [ListPrice] <= {maxPrice}");
q.AppendLine($"AND [Weight] <= {maxWeight}");
q.AppendLine($"AND [Name] LIKE {search}");
q.AppendLine($"ORDER BY ProductId");

var products = q.Query<Product>();

Filters list

Some people incorrectly believe that WHERE 1=1 causes a performance hit. It doesn’t. So this dummy condition is a great trick: it lets you just append additional conditions (if any), as shown in the example above.

Another option is to build your whole query up front but leave the filters to be dynamically defined (and rendered) later. DapperQueryBuilder has a special Where() method which stores the filters internally and later renders them in place of the /**where**/ placeholder.

int maxPrice = 1000;
int maxWeight = 15;
string search = "%Mountain%";

var cn = new SqlConnection(connectionString);

// You can build the query manually and just use QueryBuilder to replace "where" filters (if any)
var q = cn.QueryBuilder($@"SELECT ProductId, Name, ListPrice, Weight
    FROM [Product]
    /**where**/
    ORDER BY ProductId
    ");
    
// You just pass the parameters as if it was an interpolated string, 
// and QueryBuilder will automatically convert them to Dapper parameters (injection-safe)
q.Where($"[ListPrice] <= {maxPrice}");
q.Where($"[Weight] <= {maxWeight}");
q.Where($"[Name] LIKE {search}");

// Query() will automatically build your query and replace your /**where**/ (if any filter was added)
var products = q.Query<Product>();

You can also create OR filters like this:

q.Where(new Filters(Filters.FiltersType.OR)
{
    new Filter($"[Weight] <= {maxWeight}"),
    new Filter($"[Name] LIKE {search}")
});

Or you can mix and match OR/AND like this:

q.Where(new Filters(Filters.FiltersType.OR)
{
    new Filters(Filters.FiltersType.AND)
    {
        $"[Weight] <= {maxWeight}",
        $"[Weight] >= {minWeight}",
    },
    new Filter($"[Name] LIKE {search}")
});

You can even use arrays inside the interpolated strings (useful for IN clauses):

var categories = new string[] { "Components", "Clothing", "Accessories" };
q.Append($"WHERE c.[Name] IN {categories}");

Fluent API (Chained-methods)

For those who like the guidance of method chaining (or who let end-users build their own queries), there’s a Fluent API that allows you to build queries step by step, mimicking dynamic SQL concatenation.

So, basically, instead of starting with a full query and just appending new filters (.Where()), the QueryBuilder will build the whole query for you:

var q = cn.QueryBuilder()
    .Select($"ProductId")
    .Select($"Name")
    .Select($"ListPrice")
    .Select($"Weight")
    .From($"[Product]")
    .Where($"[ListPrice] <= {maxPrice}")
    .Where($"[Weight] <= {maxWeight}")
    .Where($"[Name] LIKE {search}")
    .OrderBy($"ProductId");
    
var products = q.Query<Product>();

You would get this query:

SELECT ProductId, Name, ListPrice, Weight
FROM [Product]
WHERE [ListPrice] <= @p0 AND [Weight] <= @p1 AND [Name] LIKE @p2
ORDER BY ProductId

Hope you enjoyed this post as much as I did when I wrote this library!

Full source code here.