Deadlock

Optional readings for this topic from Operating Systems: Principles and Practice: Section 6.5.

The deadlock problem:

  • Systems usually have multiple locks.
  • Threads often need to hold multiple locks at the same time.
  • Simple example (a runnable sketch follows this list):
    Thread A               Thread B
    mutex1.lock();         mutex2.lock();
    mutex2.lock();         mutex1.lock();
    ...                    ...
    mutex2.unlock();       mutex1.unlock();
    mutex1.unlock();       mutex2.unlock();
    
  • Deadlock definition:
    • A collection of threads are all blocked.
    • Each thread is waiting for a resource owned by one of the other threads.
    • Since all threads are blocked, none can release their resources.
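
The example above can be turned into a small program that actually deadlocks. Here is a minimal C++ sketch (assuming std::thread and std::mutex; the short sleep is only there to make the bad interleaving, A holding mutex1 while B holds mutex2, almost certain to occur). Once both threads block, the program never terminates.

    #include <chrono>
    #include <mutex>
    #include <thread>

    std::mutex mutex1;
    std::mutex mutex2;

    void threadA() {
        mutex1.lock();
        // Give thread B time to acquire mutex2 before we ask for it,
        // so the circular wait almost always forms.
        std::this_thread::sleep_for(std::chrono::milliseconds(100));
        mutex2.lock();       // blocks forever: B holds mutex2 and is waiting for mutex1
        mutex2.unlock();
        mutex1.unlock();
    }

    void threadB() {
        mutex2.lock();
        std::this_thread::sleep_for(std::chrono::milliseconds(100));
        mutex1.lock();       // blocks forever: A holds mutex1 and is waiting for mutex2
        mutex1.unlock();
        mutex2.unlock();
    }

    int main() {
        std::thread a(threadA);
        std::thread b(threadB);
        a.join();            // never returns once both threads are blocked
        b.join();
    }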

Four conditions for deadlock:

  • Limited access: resources cannot be shared.
  • No preemption: once given, a resource cannot be taken away.
  • Multiple independent requests: threads don't ask for all of their resources at once; they hold some resources while waiting for others.
  • Circularity: a cycle exists in the graph of requests and ownership.

Complexities:

  • Deadlock can occur over anything that causes waiting:
    • Locks
    • Network messages
    • Disk I/O
    • Memory space exhausted
  • Deadlock can occur over discrete resources (e.g., locks) or over quantities of a more continuous resource (e.g., pages of memory).
  • In general, don't know in advance which resources a thread will need.

Solution #1: deadlock detection

  • Determine when the system is deadlocked (e.g., by finding a cycle in the wait-for graph; a sketch follows this list).
  • Break the deadlock by terminating one of the threads.
  • Usually not practical in operating systems, but often used in database systems where transactions can be aborted and retried
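
As a rough illustration of what a detector might do, here is a small C++ sketch (not from the notes) that looks for a cycle in a wait-for graph. It assumes the system records, for each blocked thread, which thread owns the resource it is waiting for; a cycle in that graph means deadlock.

    #include <unordered_map>
    #include <unordered_set>

    // waitingFor[t] = the thread that owns the resource thread t is blocked on.
    // Threads that are not blocked have no entry.
    using WaitForGraph = std::unordered_map<int, int>;

    // Deadlock exists iff following the "waiting for" edges from some thread
    // leads back to a thread already on the current path (a cycle).
    bool hasDeadlock(const WaitForGraph& waitingFor) {
        for (const auto& entry : waitingFor) {
            std::unordered_set<int> onPath;
            int t = entry.first;
            while (true) {
                auto it = waitingFor.find(t);
                if (it == waitingFor.end()) break;          // t is not blocked: no cycle along this path
                if (!onPath.insert(t).second) return true;  // t seen before on this path: cycle
                t = it->second;                             // follow the edge to the owning thread
            }
        }
        return false;
    }

A real detector would run something like this periodically (or when a lock wait times out) and then pick one thread in the cycle to terminate, or, in a database, one transaction to abort and retry.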

Solution #2: deadlock prevention: eliminate one of the necessary conditions for deadlock

  • Don't allow exclusive access? Not reasonable for most applications.
  • Create enough resources so that they never run out? May work for things like disk space, but locks for synchronization are intentionally limited in number.
  • Allow preemption? Works for some resources but not others (e.g., can't preempt a lock).
  • Require threads to request all resources at the same time; either get them all or wait for them all.
    • Tricky to implement: must wait for several things without locking any of them.
    • Inconvenient for thread: hard to predict needs in advance. May require thread to over-allocate just to be safe.
  • Prevent circularities: all threads request resources in the same order (e.g., always lock l1 before l2). This is the most common approach used in operating systems; both this fix and the request-all-at-once idea are sketched below.
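
As a sketch (using C++ std::mutex, not anything specific from the notes), here is the earlier two-thread example repaired in both ways: threadA and threadB follow a fixed global order (mutex1 before mutex2), and threadC uses std::scoped_lock, whose underlying std::lock algorithm acquires a whole set of locks without ever holding one while blocking indefinitely on another, i.e., the request-everything-at-once idea.

    #include <mutex>
    #include <thread>

    std::mutex mutex1;
    std::mutex mutex2;

    // Fix 1: prevent circularity. Every thread acquires the locks in the same
    // global order (mutex1 before mutex2), so a cycle of waits cannot form.
    void threadA() {
        mutex1.lock();
        mutex2.lock();
        // ... use both shared resources ...
        mutex2.unlock();
        mutex1.unlock();
    }

    void threadB() {
        mutex1.lock();       // same order as threadA, even though B "logically" wants mutex2 first
        mutex2.lock();
        // ... use both shared resources ...
        mutex2.unlock();
        mutex1.unlock();
    }

    // Fix 2: request all resources at once. std::scoped_lock (C++17) acquires
    // the whole set of locks with a deadlock-avoidance algorithm and releases
    // them when it goes out of scope.
    void threadC() {
        std::scoped_lock both(mutex1, mutex2);
        // ... use both shared resources ...
    }

    int main() {
        std::thread a(threadA), b(threadB), c(threadC);
        a.join();
        b.join();
        c.join();
    }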