The use of atomic operations is a beautiful and simple solution to many problems that arise in multi-threaded environments where many processes attempt to modify the same data. Unfortunately, their use does not always solve the problem.
An atomic operation is not always enough
Imagine that you want to perform an operation in a store that depends on the stock level. If exactly five products are left in stock, we might want to send the warehouse admin an SMS informing them that the product is running out. Let's try to do this:
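A naive first attempt might look like the sketch below. The callables stand in for your real storage and SMS gateway (in WooCommerce, for example, the product stock API and whatever messaging service you use); they are placeholders, not part of any real library:

```php
<?php
// Naive read-check-write sketch. $get_stock, $set_stock and $send_sms are
// placeholders for your real storage and SMS gateway.
function check_stock_and_notify(callable $get_stock, callable $set_stock, callable $send_sms): void {
    $stock = $get_stock();                 // 1. read the current stock
    if ($stock === 5) {
        $send_sms('Only 5 items left!');   // 2. maybe notify the admin
    }
    $set_stock($stock - 1);                // 3. write back the new value
}
```

The decrement in step 3 can be made atomic, but steps 1-3 together are not, and that is exactly where things go wrong.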
If two processes read the stock simultaneously, both can get the number 5 and each will send an SMS: the first process has not yet written the new value when the second one reads the old one. The administrator gets two SMSs instead of one, even though the inventory decrement itself is atomic. Thankfully it was only an SMS and not a transfer of half a million dollars.
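The interleaving can be replayed deterministically in a few lines of plain PHP (the two "processes" are simulated sequentially for illustration):

```php
<?php
// Deterministic replay of the race: both "processes" read the stock
// before either writes it back.
$stock = 5;
$smsCount = 0;

$readByA = $stock;                    // process A reads 5
$readByB = $stock;                    // process B reads 5 as well

if ($readByA === 5) { $smsCount++; }  // A sends an SMS
$stock = $readByA - 1;                // A writes 4

if ($readByB === 5) { $smsCount++; }  // B sends a second SMS
$stock = $readByB - 1;                // B also writes 4: one sale is lost
```

Note that the final stock is 4, not 3 — besides the duplicate SMS, one of the two sales has silently disappeared.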
Regardless of how the SMS is sent, we cannot easily combine it with the inventory update in the database into a single atomic operation.
We don't always work with the database
WooCommerce also allows you to change the inventory through its API. Suppose that, as part of handling a customer request, we want to read the stock value of a certain product, reduce it by one and save it. If two clients send such API requests at the same time, both first read the stock value and then save the modified one, so one of the processes may overwrite the data with an incorrect value. Because we are going through the API, we cannot combine the read and the write into a single query.
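The read-modify-write cycle might look roughly like this (a sketch against the WooCommerce REST v3 products endpoint; `$base`, `$id` and `$auth` are placeholders for your site URL, product ID and authentication arguments):

```php
<?php
// Sketch of a read-modify-write cycle via the WooCommerce REST API.
// Two clients running this concurrently can overwrite each other's update.
$response = wp_remote_get("$base/wp-json/wc/v3/products/$id", $auth);
$product  = json_decode(wp_remote_retrieve_body($response), true);

$newStock = $product['stock_quantity'] - 1;     // modify locally ...

wp_remote_request("$base/wp-json/wc/v3/products/$id", [
    'method' => 'PUT',
    'body'   => ['stock_quantity' => $newStock], // ... then write back blindly
] + $auth);
```

Between the GET and the PUT there is a window in which another client can read the same, now-stale value.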
Either of these two situations makes it impossible to solve the problem simply by converting the work into a single atomic operation.
The crux of the problem is that more than one process can execute the same code in parallel. The solution is obvious: protect the part of the code that must not run in parallel, and make subsequent processes wait until the previous one has finished it. Such a fragment, which only a single process may execute at a time, is called a critical section. But how do we make the other processes wait for the colleague who entered the section first?
The easiest way to ensure that subsequent processes do not start processing the critical section code is to use a flag shared between processes. "Shared" means that every process can read and modify its state. The first process to enter the section sets the flag with an atomic operation, notifying the other processes that the section's code is currently being executed. When a process leaves the section, it clears the flag so that the next process can enter. Such a flag is an example of a simple algorithm ensuring MUTual EXclusion, or mutex for short: if one process is in the section, it precludes subsequent processes from entering it.
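Here is a minimal sketch of such a flag, assuming all processes share a filesystem. `mkdir()` is atomic in PHP — exactly one process can create a given directory — so the directory's existence doubles as a test-and-set flag:

```php
<?php
// Shared-flag mutex sketch: the existence of a directory is the flag.
function enter_section(string $flag, int $timeoutSeconds = 5): void {
    $deadline = time() + $timeoutSeconds;
    // mkdir() is atomic: only one process can create the directory,
    // so only one process "sets the flag" and enters the section.
    while (!@mkdir($flag)) {
        if (time() >= $deadline) {
            throw new RuntimeException('Timed out waiting for the critical section');
        }
        usleep(100000); // back off for 100 ms before retrying
    }
}

function leave_section(string $flag): void {
    rmdir($flag); // clear the flag so the next process may enter
}
```

This is only an illustration of the idea; a production implementation has to deal with crashed processes, stale flags and fairness, which is exactly why it is worth reaching for an existing library.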
Implementing a mutex is not spectacularly difficult, but it is mistake-prone, so it is easiest to use tools that others have already created. A few years ago at WP Desk we needed mutexes, among other things to generate invoice numbers in a thread-safe way. Since we couldn't find an implementation that worked well with WordPress, we wrote our own, available at https://gitlab.com/wpdesk/wp-mutex
How do you use a mutex? It is straightforward: a process that reaches the acquireLock call will wait up to 5 seconds for permission to enter the critical section. If access is still not granted after 5 seconds, an exception is thrown. But what happens if we forget to clear the flag?
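A sketch of such usage follows. Apart from `acquireLock`, which the article mentions, every name here is an assumption — consult the wp-mutex repository for the actual classes and signatures — and `get_stock`, `set_stock` and `send_sms` are placeholders for your real code:

```php
<?php
// Illustrative only: names other than acquireLock are assumptions.
$mutex->acquireLock('stock-update', 5);  // wait up to 5 s, throws on timeout

$stock = get_stock();                    // placeholder for the real read
if ($stock === 5) {
    send_sms('Only 5 items left!');      // placeholder for the SMS gateway
}
set_stock($stock - 1);                   // placeholder for the write

$mutex->releaseLock('stock-update');     // let the next process in
```

Notice that if anything between acquire and release throws, the release never runs — which is precisely the danger discussed next.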
Hungry and sad processes
If the process that set the flag stays in the critical section for too long, the processes waiting to enter will throw exceptions. They will starve while waiting for the resource.
An even worse situation arises if a process hits an error while executing the code in the section: it may exit the section without clearing the flag, starving all the other processes. For this reason, it is best to wrap the entire section in a try/finally block. Then, even if sending the SMS fails, the process will still clear the flag and release the mutex.
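A minimal illustration of the pattern, with callables standing in for the real section body and lock release:

```php
<?php
// The finally block runs whether or not the section body throws,
// so the lock is always released.
function run_critical_section(callable $section, callable $release): void {
    try {
        $section();   // e.g. read stock, send the SMS, write stock
    } finally {
        $release();   // always clears the flag, even on failure
    }
}
```

Even when `$section()` throws (say, the SMS gateway is down), `$release()` still executes before the exception propagates to the caller.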
Sometimes we need more than one critical section, and that opens the door to a new problem: a process inside section B waits for permission to enter section A, but that permission will never be granted, because the process inside section A is simultaneously waiting to enter section B. The processes wait for each other forever, which we call a deadlock. In practice, deadlocks occur very rarely in PHP code itself, because the business tasks facing WordPress developers, and therefore the code we write, are pretty straightforward. However, we will come back to deadlocks in the future, especially in the context of MySQL InnoDB deadlocks, which can cause neurosis even in the most stoical of stoics.
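The recipe for a deadlock, and the standard way to avoid it, can be sketched as follows (lock names and callables are illustrative; the fix shown — agreeing on a single global lock order — is a classic technique, not something specific to wp-mutex):

```php
<?php
// Deadlock recipe:
//   Process 1: lock('A'); lock('B');   // holds A, waits for B
//   Process 2: lock('B'); lock('A');   // holds B, waits for A
// Neither can proceed: each holds the lock the other needs.
//
// Classic fix: every process acquires locks in the same global order.
function with_both_sections(callable $lock, callable $unlock): void {
    $lock('A');    // always take A first ...
    $lock('B');    // ... then B, so no circular wait can form
    // ... work that needs both sections ...
    $unlock('B');  // release in reverse order
    $unlock('A');
}
```

If every process follows the same A-before-B order, the circular wait at the heart of a deadlock becomes impossible.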
PHP code almost always runs in many processes at once, so the ability to synchronize them is a skill every programmer should have. Have I exhausted the topic of multithreading? No, this is just the beginning, but the information above is enough to deal with most of the problems that await inattentive WordPress developers :)