C# Thread Synchronization

Thread synchronization refers to the act of guarding against multithreading issues such as data races, deadlocks, and starvation.

This article explains techniques for tackling thread synchronization problems and race conditions.


Below are the techniques for tackling thread synchronization problems and race conditions. Let’s see them all,

C# Interlocked

As we saw in the previous chapter, the processor carries out a variable increment, written as a single line of C# code, in three processor-level steps (three instructions): read the variable, increment it, and write the result back.

One way to tackle this problem is to carry out all three of these steps as one single atomic operation. This can be done only on data that is word-sized. Here, by atomic I mean uninterruptible, and word-sized means a value that fits in a register for the update, which a single integer does in our case.

However,

Today’s processors already provide a lock feature to carry out an atomic update on word-sized data. We can’t use these processor-specific instructions directly in C# code, but the Interlocked class in the .NET Framework is a wrapper around them, and it can be used to carry out atomic operations such as increment and decrement on word-sized data.

Let’s see an updated version of the buggy code we saw in the previous chapter,

using System;
using System.Collections.Generic;
using System.Linq;
using System.Text;
using System.Threading;
using System.Threading.Tasks;

namespace _03_Atomic_Update
{
    class Program
    {
        private static int sum;

        static void Main(string[] args)
        {
            //create thread t1 using anonymous method
            Thread t1 = new Thread(() => {
                for (int i = 0; i < 10000000; i++)
                {
                    //use threading Interlocked class for atomic update
                    Interlocked.Increment(ref sum);
                }
            });

            //create thread t2 using anonymous method
            Thread t2 = new Thread(() => {
                for (int i = 0; i < 10000000; i++)
                {
                    //use threading Interlocked class for atomic update
                    Interlocked.Increment(ref sum);
                }
            });

            //start thread t1 and t2
            t1.Start();
            t2.Start();

            //wait for thread t1 and t2 to finish their execution
            t1.Join();
            t2.Join();

            //write final sum on screen
            Console.WriteLine("sum: " + sum);

            Console.WriteLine("Press enter to terminate!");
            Console.ReadLine();
        }
    }
}

Data Partitioning

Data partitioning is a strategy where you process data by splitting it into slices, one per thread. It’s a kind of “you do this and I’ll do that” strategy.

To use data partitioning you must have some domain-specific knowledge of the data (such as an array, or multiple files to manipulate), so you can decide that one thread will process one slice of the data while another thread works on a different slice. Let’s see an example,

using System;
using System.Collections.Generic;
using System.Linq;
using System.Text;
using System.Threading;
using System.Threading.Tasks;

namespace _04_Data_Partitioning
{
    /// <summary>
    /// This program calculates the sum of array elements using 
    /// the data partitioning technique. Here we split the data into 
    /// two halves and calculate sum1 and sum2 for each half 
    /// in threads t1 and t2 respectively, then finally we print 
    /// the final sum on screen by adding sum1 and sum2.
    /// </summary>
    class Program
    {
        private static int[] array;
        //long, because the sum of 0..999999 overflows an int
        private static long sum1;
        private static long sum2;

        static void Main(string[] args)
        {
            //set length for the array size
            int length = 1000000;
            //create new array of size length
            array = new int[length];

            //initialize array elements with the value of their respective index
            for (int i = 0; i < length; i++)
            {
                array[i] = i;
            }

            //index to split on
            int dataSplitAt = length / 2;

            //create thread t1 using anonymous method
            Thread t1 = new Thread(() =>
            {
                //calculate sum1
                for (int i = 0; i < dataSplitAt; i++)
                {
                    sum1 = sum1 + array[i];
                }
            });

            //create thread t2 using anonymous method
            Thread t2 = new Thread(() =>
            {
                //calculate sum2
                for (int i = dataSplitAt; i < length; i++)
                {
                    sum2 = sum2 + array[i];
                }
            });

            //start thread t1 and t2
            t1.Start();
            t2.Start();

            //wait for thread t1 and t2 to finish their execution
            t1.Join();
            t2.Join();

            //calculate final sum
            long sum = sum1 + sum2;

            //write final sum on screen
            Console.WriteLine("Sum: " + sum);

            Console.WriteLine("Press enter to terminate!");
            Console.ReadLine();
        }
    }
}

However,

This technique can’t be adopted for every scenario. There may be situations where one slice of data depends on the output of a previous slice; one example is the Fibonacci series, where data[n] = data[n-1] + data[n-2]. In such situations data partitioning can’t be adopted.

Wait-Based Synchronization

Now,

The third technique is the wait-based technique, a more sophisticated way to handle race conditions, used in situations where the above two methods can’t be adopted easily. In this technique, a thread is blocked until someone decides it is safe for it to proceed.

Suppose there are two threads, X and Y, and both want to access some resource R.

Now, to protect this resource, we choose some lock primitive, or synchronization primitive, and call it LR.

Now when thread X wants to access resource R, it will first acquire ownership of the lock LR. Once thread X has ownership of LR, it can access resource R safely. As long as thread X holds this ownership, no other thread can acquire LR.

While X holds ownership, if Y requests ownership of lock LR, its request will block until thread X releases its ownership.
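The X and Y scenario above can be sketched with C#’s lock statement (which is built on the Monitor class we will meet below). The names R, LR, and Demo are illustrative, and the Thread.Sleep calls are only there to make X acquire the lock first:

```csharp
using System;
using System.Threading;

class Program
{
    private static readonly object LR = new object(); // lock primitive protecting R
    private static int R;                             // the shared resource

    public static int Demo()
    {
        Thread x = new Thread(() =>
        {
            lock (LR)              // X acquires ownership of LR
            {
                R = 42;            // safe: only X can touch R now
                Thread.Sleep(100); // hold the lock for a while
            }                      // X releases ownership here
        });

        int seenByY = 0;
        Thread y = new Thread(() =>
        {
            lock (LR)              // Y blocks here until X releases LR
            {
                seenByY = R;
            }
        });

        x.Start();
        Thread.Sleep(10);          // give X a head start so it owns LR first
        y.Start();
        x.Join();
        y.Join();
        return seenByY;
    }

    public static void Main()
    {
        Console.WriteLine("Y sees R = " + Demo());
    }
}
```

Because Y cannot enter the lock while X holds it, Y observes the value X wrote rather than a half-finished update.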

Wait-Based Primitives in the CLR

.NET has the following wait-based primitives that you can use to apply the wait-based technique.

They all share the same basic usage:

  • Acquire the lock ownership
  • Manipulate the protected resource
  • Release the lock ownership

C# Monitor Class

The Monitor class allows you to synchronize access to a region of code by taking and releasing a lock on a particular object by calling the Monitor.Enter, Monitor.TryEnter, and Monitor.Exit methods. Object locks provide the ability to restrict access to a block of code, commonly called a critical section. While a thread owns the lock for an object, no other thread can acquire that lock. You can also use the Monitor class to ensure that no other thread is allowed to access a section of application code being executed by the lock owner unless the other thread is executing the code using a different locked object.
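As a sketch of that acquire/manipulate/release pattern, the buggy counter from the Interlocked section could instead be protected with Monitor.Enter and Monitor.Exit. The names ParallelCount and sumLock are illustrative, not part of the Monitor API:

```csharp
using System;
using System.Threading;

class Counter
{
    private static int sum;
    private static readonly object sumLock = new object(); // object whose lock guards sum

    // increment `sum` from two threads, `perThread` times each,
    // following the acquire / manipulate / release pattern
    public static int ParallelCount(int perThread)
    {
        sum = 0;
        ThreadStart work = () =>
        {
            for (int i = 0; i < perThread; i++)
            {
                Monitor.Enter(sumLock);            // acquire lock ownership
                try { sum = sum + 1; }             // manipulate the protected resource
                finally { Monitor.Exit(sumLock); } // release, even on exception
            }
        };

        Thread t1 = new Thread(work);
        Thread t2 = new Thread(work);
        t1.Start(); t2.Start();
        t1.Join(); t2.Join();
        return sum;
    }

    public static void Main()
    {
        Console.WriteLine("sum: " + ParallelCount(1000000));
    }
}
```

The try/finally around the critical section is the standard shape here; it is exactly what the C# lock statement expands to.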

C# Mutex Class

You can use a Mutex object to provide exclusive access to a resource. The Mutex class uses more system resources than the Monitor class, but it can be marshaled across application domain boundaries, it can be used with multiple waits, and it can be used to synchronize threads in different processes. For a comparison of managed synchronization mechanisms, see Overview of Synchronization Primitives.
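A common use of the cross-process capability is a single-instance guard: every process that opens a mutex with the same name shares the same lock. The name "MyAppSingleInstance" below is illustrative; a sketch:

```csharp
using System;
using System.Threading;

class Program
{
    public static void Main()
    {
        bool createdNew;
        // a named mutex is visible to other processes on the same machine;
        // initiallyOwned: true asks for ownership at creation time
        using (Mutex mutex = new Mutex(true, "MyAppSingleInstance", out createdNew))
        {
            if (!createdNew)
            {
                // some other process already owns a mutex with this name
                Console.WriteLine("Another instance is already running.");
                return;
            }

            Console.WriteLine("Got the mutex; running exclusively.");
            // ... do the work that must not run twice at once ...

            mutex.ReleaseMutex(); // release lock ownership
        }
    }
}
```

An unnamed Mutex works only within one process, where the cheaper Monitor is usually the better choice; the named form is what justifies Mutex’s extra cost.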

C# ReaderWriterLockSlim Class

The ReaderWriterLockSlim class addresses the case where a thread that changes data, the writer, must have exclusive access to a resource. When the writer is not active, any number of readers can access the resource. When a thread requests exclusive access, subsequent reader requests block until all existing readers have exited the lock, and the writer has entered and exited the lock.
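A read-mostly cache is the classic fit for this primitive: many threads may read the dictionary concurrently, but a write takes the lock exclusively. The Cache class and its key names are illustrative:

```csharp
using System;
using System.Collections.Generic;
using System.Threading;

// A tiny read-mostly cache guarded by ReaderWriterLockSlim.
class Cache
{
    private static readonly ReaderWriterLockSlim rw = new ReaderWriterLockSlim();
    private static readonly Dictionary<string, string> data = new Dictionary<string, string>();

    public static string Read(string key)
    {
        rw.EnterReadLock();       // any number of readers may hold this together
        try
        {
            string value;
            return data.TryGetValue(key, out value) ? value : null;
        }
        finally { rw.ExitReadLock(); }
    }

    public static void Write(string key, string value)
    {
        rw.EnterWriteLock();      // exclusive: waits for current readers to exit
        try                       // and blocks new readers until the write is done
        {
            data[key] = value;
        }
        finally { rw.ExitWriteLock(); }
    }

    public static void Main()
    {
        Write("answer", "42");
        Console.WriteLine(Read("answer"));
    }
}
```

If reads and writes were equally frequent, a plain Monitor would do; ReaderWriterLockSlim pays off when readers heavily outnumber writers.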


In the next post, C# Monitor, we will learn how to use the Monitor class along with the C# lock keyword.
