Most applications face performance issues while scaling, and this is often due to thread starvation. Before getting into the thread starvation problem, we need to understand how the thread pool works.

The Thread Pool is the thread management and thread queueing mechanism in .NET. At the hardware level, we have a set of CPU cores, and with hyper-threading each core exposes two logical processors; for example, hardware with 4 cores gives us 8 logical processors. Each logical processor can execute only one thread at any given time. For this article, let's assume we are working with 8 logical processors.
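As a quick sanity check, the small sketch below (my own addition, written for a console app's Main method with using System and using System.Threading) prints the logical processor count and the thread pool's default minimum and maximum thread counts on your machine.

int workerMin, ioMin, workerMax, ioMax;
ThreadPool.GetMinThreads(out workerMin, out ioMin);   // the worker minimum roughly matches the logical processor count
ThreadPool.GetMaxThreads(out workerMax, out ioMax);
Console.WriteLine($"Logical processors: {Environment.ProcessorCount}");
Console.WriteLine($"Min worker/IO threads: {workerMin}/{ioMin}");
Console.WriteLine($"Max worker/IO threads: {workerMax}/{ioMax}");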

This is how the system works: once the application starts, the thread pool creates a set of threads and makes them available to process incoming requests. The number of threads it spawns depends on how many logical processors are available on the host system; in our case, it is 8 threads. When a new request comes in, it first waits on the task queue, and the runtime then picks it up as soon as a thread in the global thread pool becomes available, as shown in the diagram below.
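To make that queueing behaviour concrete, here is a minimal sketch I have added (not from the diagram itself; it assumes the same console-app usings as above). Each queued work item sits on the task queue until a pool thread is free to run it.

for (int i = 1; i <= 10; i++)
{
    int requestId = i;  // capture the loop variable for the closure
    ThreadPool.QueueUserWorkItem(_ =>
    {
        // Runs on a thread pool thread once one becomes available
        Console.WriteLine($"Request {requestId} handled on thread {Thread.CurrentThread.ManagedThreadId}");
        Thread.Sleep(200);  // simulate some work
    });
}
Console.ReadLine();  // keep the process alive so the queued items can run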

Let’s assume we receive a burst of 100 requests at once. These requests reside on the task queue, and our thread pool initially has 8 threads. The system picks each request from the task queue and assigns it to an available thread in the thread pool. Now there are no available threads and the remaining 92 requests are waiting on the task queue, so the runtime spawns new threads and adds them to the thread pool. But our system can only execute 8 threads at any given time, so even a newly created thread spends much of its time waiting for a processor. Thread creation and context switching are expensive, as each thread consumes a significant amount of memory. If too many threads are created and destroyed, they consume a huge amount of memory, which can bring the system to a halt and force a reboot.
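The sketch below is an illustration I have added for this 100-request burst (blocking work, same console-app assumptions as before). It queues the burst and then samples how many worker threads are busy, so you can watch the rest of the burst sit in the queue while the pool slowly injects extra threads.

ThreadPool.GetMaxThreads(out int maxWorkers, out _);

for (int i = 1; i <= 100; i++)
{
    ThreadPool.QueueUserWorkItem(_ => Thread.Sleep(1000));  // blocking work ties up a pool thread
}

for (int t = 0; t < 10; t++)
{
    ThreadPool.GetAvailableThreads(out int availableWorkers, out _);
    Console.WriteLine($"Busy worker threads: {maxWorkers - availableWorkers}");
    Thread.Sleep(500);  // sample every half second to see new threads being injected
}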

To avoid this, the .NET thread pool has a throttling mechanism. This mechanism kicks in once the minimum threshold is reached, that is, once the number of spawned threads reaches the configured minimum. Let's understand this mechanism with a simple example, continuing with our scenario of 100 requests on a system that can execute 8 threads at any given time. In general, once a request completes its processing, the assigned thread is released and becomes available in the thread pool to serve another request from the task queue. This way, we avoid having the runtime create new threads and consume expensive system resources. Every developer should make their logic asynchronous so that the thread returns to the thread pool and can be reused for another request while the system is doing I/O work. Please see my other article Working With Async/Await/Task Keywords In Depth to understand more about how asynchronous calls work.
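As a hedged illustration of that point (the httpClient field and the URL below are assumptions of mine, and the snippets assume using System.Net.Http and System.Threading.Tasks), an asynchronous method awaits the I/O call instead of blocking on it, so the pool thread is free to serve other requests while the I/O is in flight.

private static readonly HttpClient httpClient = new HttpClient();

// Blocking version: the pool thread sits idle while the I/O completes
public string GetReportBlocking()
{
    return httpClient.GetStringAsync("https://example.com/report").Result;  // ties up the thread
}

// Asynchronous version: the thread goes back to the pool during the await
public async Task<string> GetReportAsync()
{
    return await httpClient.GetStringAsync("https://example.com/report");   // no thread is blocked while waiting
}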

For the throttling mechanism, we define an integer value as the threshold limit. For example, if we set this value to 50, the runtime keeps creating new threads on demand up to 50, and from the 51st request onwards it waits about 0.5 seconds before creating each additional thread; if any existing thread returns to the thread pool during that time, it is reused and no new thread is created. So if there are 60 requests in flight, the 61st request may have to wait roughly 10 * 0.5 = 5 seconds before any thread is assigned to it (assuming all threads stay busy and none becomes available in the meantime), and this waiting is known as thread starvation.
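Here is a quick back-of-the-envelope sketch of that arithmetic (my own addition; the 0.5-second figure is the one used in the example above, and the exact injection rate is an implementation detail of the runtime).

const int minThreads = 50;              // configured threshold
const double injectDelaySeconds = 0.5;  // approximate delay per extra thread, per the example above

int concurrentRequests = 61;
int requestsBeyondMin = concurrentRequests - minThreads - 1;        // 10 requests queued ahead of the 61st
double estimatedWaitSeconds = requestsBeyondMin * injectDelaySeconds;
Console.WriteLine($"Estimated wait for request #{concurrentRequests}: {estimatedWaitSeconds} s");  // prints 5 s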

We can set a custom value as the throttling threshold limit using the statement below during application startup.

ThreadPool.SetMinThreads(50, 100);

The first parameter is the custom threshold value for worker threads, and the second parameter is the minimum I/O thread count on the I/O completion port (IOCP). For more details, please read my article Understanding Worker Thread and I/O Completion Port (IOCP).
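For a slightly safer startup, a small sketch like this (my addition) reads the current minimums first and checks the return value, since ThreadPool.SetMinThreads returns false if the requested values are rejected.

ThreadPool.GetMinThreads(out int currentWorkerMin, out int currentIoMin);
Console.WriteLine($"Current minimums - worker: {currentWorkerMin}, IO: {currentIoMin}");

// SetMinThreads returns false if the values are out of range, so verify it was applied
bool applied = ThreadPool.SetMinThreads(workerThreads: 50, completionPortThreads: 100);
if (!applied)
{
    Console.WriteLine("SetMinThreads was rejected; keeping the defaults.");
}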

Setting a higher value as the throttling threshold limit leads to longer waiting times for new requests, but it ensures the system keeps responding and does not halt. To solve these problems we need a tool such as the ConcurrencyLimiter middleware, which returns a 503 error for all new requests beyond the configured limit; this way, no extra requests pile up on the task queue while no threads are available in the thread pool.
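As a sketch of how that could look in an ASP.NET Core Startup class (assuming the Microsoft.AspNetCore.ConcurrencyLimiter package is referenced; the limits below are arbitrary values I chose for illustration), the queue policy caps concurrent requests and the middleware rejects the overflow with 503 Service Unavailable.

public void ConfigureServices(IServiceCollection services)
{
    // Allow 50 concurrent requests, queue up to 100 more, then reject with 503
    services.AddQueuePolicy(options =>
    {
        options.MaxConcurrentRequests = 50;
        options.RequestQueueLimit = 100;
    });
    services.AddControllers();
}

public void Configure(IApplicationBuilder app)
{
    app.UseConcurrencyLimiter();   // rejects excess requests instead of letting them queue indefinitely
    app.UseRouting();
    app.UseEndpoints(endpoints => endpoints.MapControllers());
}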

Happy Coding 🙂