User:Creature/QueueMutex


When I first started out with Synchronization Primitives, the first kind I implemented was the Spinlock. Spinlocks are useful, but they know nothing about fairness.

The QueueMutex

The Problem

Suppose you have this piece of code in a priority-based multitasking environment:

/*
 * 'Foo' is a function run by multiple threads or processes and executes some operations that need
 * to be thread-safe.
 */
void Foo()
{
   /* 'Spinlock' is some structure or class implementing a spinlock (using atomic operations). */
   static Spinlock lock;

   while(some_condition_that_isnt_met_for_a_long_time) //For example, an infinite loop.
   {
      lock.Lock(); //Waits infinitely for the spinlock to be locked.

      thread_safe_instruction();

      lock.Unlock(); //Unlocks the spinlock so other threads/tasks can use the resource.
   }
}

Suppose two threads start in this function (let's call them Thread #1 and Thread #2). It is possible that Thread #1 is the first to lock the spinlock and executes 'thread_safe_instruction()'. Thread #2 tries 'lock.Lock()' and waits until Thread #1 is done. Now suppose your priority-based multitasking gives each of these threads roughly 10 ms before the scheduler switches to the next one. If Thread #1 still has a lot of its time-slice left and the loop condition is not yet met, it can lock the spinlock again before Thread #2 even becomes active and gets a chance to see that the spinlock has been unlocked and lock it itself. This can result in situations where, for example, Thread #1 executes the loop 50 times before Thread #2 finally manages to lock the spinlock first, after which Thread #2 hogs the loop in the same way before Thread #1 can access the resource again.
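For reference, the 'Spinlock' assumed in the snippet above could be a minimal wrapper around an atomic test-and-set. This is only a sketch using GCC's __sync built-ins; the 'byte' type is assumed to be a typedef for an unsigned 8-bit integer, as in the QueueMutex implementation further down:

class Spinlock
{
protected:
	volatile byte _Lock; // 0 = unlocked, 1 = locked.

public:
	Spinlock(): _Lock(0)
	{
	}

	void Lock()
	{
		/* Spin until the atomic test-and-set flips the lock from 0 to 1 for us. */
		while(__sync_lock_test_and_set(&this->_Lock, 1) != 0) ;
	}

	void Unlock()
	{
		/* Atomically write 0 back so another thread's test-and-set can succeed. */
		__sync_lock_release(&this->_Lock);
	}
};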

Suppose there is a function 'myId()' that returns a number unique to each thread, and that 'thread_safe_instruction()' outputs something like

 Thread ID 'myId()'\n

In that case, the above situation could result in output similar to the following:

 Thread ID 1
 Thread ID 1
 Thread ID 1
 Thread ID 1
 Thread ID 1
 Thread ID 1
 ...
 Thread ID 2
 Thread ID 2
 Thread ID 2
 Thread ID 2
 Thread ID 2
 ...
 Thread ID 1
 ...

A common solution is to use something like a semaphore or another synchronization primitive that implements some sort of queue- or priority-based scheme to make access to the resource fairer for everyone.

About

I've recently 'discovered' (in quotes because it has very probably been discovered and implemented before; nonetheless, it's handy and I had never seen it before) something I like to call the 'QueueMutex' (it might already exist under other names).

The point of the QueueMutex is, like semaphores and other SPs (synchronization primitives), to provide a fairer way of giving threads or processes access to a resource that needs to be thread-safe.

Implementation

The QueueMutex is basically a bastardized spinlock. All that needs to be added is some kind of Queue/Vector object or some kind of dynamic array (it can also be a static array). Here's an example of an implementation in a C++ class:

class QueueMutex
{
protected:
	volatile byte _Lock; // Used as a normal spinlock protecting the queue itself.

	Vector<volatile const Thread *> _WaitQueue; // The wait queue; this could also be a static array or any other storage container you prefer.

public:
	QueueMutex(): _Lock(0)
	{
	}

	void Lock()
	{
		/* This part utilizes a normal spinlock; modifying the queue without thread-safety is dangerous! */
		while(__sync_lock_test_and_set(&this->_Lock, 1) != 0) ;

		/* Add a pointer to the currently active thread (or, the thread executing the Lock, */
		/* if you will) to the back of the vector (or queue, static array, or whatever sort of storage you are using). */
		_WaitQueue.PushBack(thisthread);

		__sync_lock_release(&this->_Lock); // Release the normal spinlock.

		/* Spin until the calling thread becomes the one that is first in line. */
		/* Note: this read must remain valid while other threads push to the back of the queue. */
		while(_WaitQueue[0] != thisthread) ;
	}

	void Unlock()
	{
		/* Take the spinlock again: popping the front must not race with another thread pushing to the back. */
		while(__sync_lock_test_and_set(&this->_Lock, 1) != 0) ;

		this->_WaitQueue.PopFront(); // I'm done with the resource, pop me from the front of the queue.

		__sync_lock_release(&this->_Lock);
	}
};

Note: all the '__sync_' functions are GCC-specific built-ins; I used them here for simplicity. Each and every one of them is simply an atomic operation.
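If the GCC-specific built-ins are undesirable and a C++11 compiler with the (freestanding) <atomic> header is available, the same lock/release pair can be expressed with std::atomic_flag. This is a minimal sketch and not part of the original implementation:

#include <atomic>

class PortableSpinlock
{
	std::atomic_flag _Lock = ATOMIC_FLAG_INIT;

public:
	void Lock()
	{
		// test_and_set() returns the previous value; spin until it was clear.
		while(_Lock.test_and_set(std::memory_order_acquire)) ;
	}

	void Unlock()
	{
		_Lock.clear(std::memory_order_release);
	}
};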

Note that I have tested the implementation a couple of times but it's very possible that it's still a buggy implementation.
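As a usage sketch, the 'Foo' function from the problem section only needs its Spinlock swapped for a QueueMutex (the loop condition and 'thread_safe_instruction()' are the same placeholders as before):

void Foo()
{
	/* Same function as before, but now protected by a QueueMutex instead of a plain spinlock. */
	static QueueMutex lock;

	while(some_condition_that_isnt_met_for_a_long_time)
	{
		lock.Lock();   // Queue up and spin until we are first in line.

		thread_safe_instruction();

		lock.Unlock(); // Pop ourselves off the front so the next waiter can proceed.
	}
}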

When using this bastardized spinlock in the problematic situation described above, you should get fairer results. When testing this, my result was the following:

 // Thread #1 outputs a couple of times before they start taking turns. This is because Thread #2 was started after
 // Thread #1, which resulted in Thread #1 performing a couple of loops before Thread #2 was ready to start looping.
 Thread ID 1
 Thread ID 1
 Thread ID 1
 Thread ID 1
 Thread ID 1
 Thread ID 1
 ...
 
 // As soon as both threads are properly running, they will take turns, making access to the resource more fair.
 Thread ID 1
 Thread ID 2
 Thread ID 1
 Thread ID 2
 Thread ID 1
 Thread ID 2
 ...

Advantages

  • Access to the resource is granted in a fairer, roughly first-come-first-served order.
  • Every accessor of the resource gets queued up. So if two 'elements' are waiting for the resource to be released, a third one can't just barge in and get access to the resource before the other two do; it has to wait its turn.

Disadvantages

  • If a dynamically allocated container is used for the wait queue, the allocations can be seen as a problematic overhead (see the sketch below for a counter-based alternative).
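If that overhead is a concern, the same first-come-first-served behaviour can be achieved without any container at all by keeping two counters; this is the classic 'ticket lock'. The sketch below uses the same GCC built-ins as the examples above and is not part of the original implementation:

class TicketLock
{
protected:
	volatile unsigned int _NextTicket; // Ticket number handed to the next thread that calls Lock().
	volatile unsigned int _NowServing; // Ticket number that currently owns the lock.

public:
	TicketLock(): _NextTicket(0), _NowServing(0)
	{
	}

	void Lock()
	{
		/* Atomically take a ticket; threads are served in the order in which they took one. */
		unsigned int ticket = __sync_fetch_and_add(&this->_NextTicket, 1);

		/* Spin until it is our turn. */
		while(this->_NowServing != ticket) ;
	}

	void Unlock()
	{
		/* Let the next ticket holder in. */
		__sync_fetch_and_add(&this->_NowServing, 1);
	}
};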