c++ - What does "release sequence" mean?

Tags: c++ multithreading c++11 memory-model stdatomic

I don't understand why there would be a problem without a release sequence if we have 2 threads in the example below. We have only 2 operations on the atomic variable count, and as the output shows, count is decremented in order.

From Anthony Williams's C++ Concurrency in Action:

I mentioned that you could get a synchronizes-with relationship between a store to an atomic variable and a load of that atomic variable from another thread, even when there’s a sequence of read-modify-write operations between the store and the load, provided all the operations are suitably tagged. If the store is tagged with memory_order_release, memory_order_acq_rel, or memory_order_seq_cst, and the load is tagged with memory_order_consume, memory_order_acquire, or memory_order_seq_cst, and each operation in the chain loads the value written by the previous operation, then the chain of operations constitutes a release sequence and the initial store synchronizes-with (for memory_order_acquire or memory_order_seq_cst) or is dependency-ordered-before (for memory_order_consume) the final load. Any atomic read-modify-write operations in the chain can have any memory ordering (even memory_order_relaxed).

To see what this means (release sequence) and why it’s important, consider an atomic<int> being used as a count of the number of items in a shared queue, as in the following listing.

One way to handle things would be to have the thread that's producing the data store the items in a shared buffer and then do count.store(number_of_items, memory_order_release) #1 to let the other threads know that data is available. The threads consuming the queue items might then do count.fetch_sub(1, memory_order_acquire) #2 to claim an item from the queue, prior to actually reading the shared buffer #4. Once the count becomes zero, there are no more items, and the thread must wait #3.


#include <atomic>
#include <thread>
#include <vector>
#include <iostream>
#include <mutex>
#include <chrono>

std::vector<int> queue_data;
std::atomic<int> count;
std::mutex m;
void process(int i)
{

    std::lock_guard<std::mutex> lock(m);
    std::cout << "id " << std::this_thread::get_id() << ": " << i << std::endl;
}


void populate_queue()
{
    unsigned const number_of_items = 20;
    queue_data.clear();
    for (unsigned i = 0;i<number_of_items;++i)
    {
        queue_data.push_back(i);
    }

    count.store(number_of_items, std::memory_order_release); //#1 The initial store
}

void consume_queue_items()
{
    while (true)
    {
        int item_index;
        if ((item_index = count.fetch_sub(1, std::memory_order_acquire)) <= 0) //#2 An RMW operation
        {
            std::this_thread::sleep_for(std::chrono::milliseconds(500)); //#3
            continue;
        }
        process(queue_data[item_index - 1]); //#4 Reading queue_data is safe
    }
}

int main()
{
    std::thread a(populate_queue);
    std::thread b(consume_queue_items);
    std::thread c(consume_queue_items);
    a.join();
    b.join();
    c.join();
}

Output (VS2015):
id 6836: 19
id 6836: 18
id 6836: 17
id 6836: 16
id 6836: 14
id 6836: 13
id 6836: 12
id 6836: 11
id 6836: 10
id 6836: 9
id 6836: 8
id 13740: 15
id 13740: 6
id 13740: 5
id 13740: 4
id 13740: 3
id 13740: 2
id 13740: 1
id 13740: 0
id 6836: 7

If there’s one consumer thread, this is fine; the fetch_sub() is a read, with memory_order_acquire semantics, and the store had memory_order_release semantics, so the store synchronizes-with the load and the thread can read the item from the buffer.

If there are two threads reading, the second fetch_sub() will see the value written by the first and not the value written by the store. Without the rule about the release sequence, this second thread wouldn’t have a happens-before relationship with the first thread, and it wouldn’t be safe to read the shared buffer unless the first fetch_sub() also had memory_order_release semantics, which would introduce unnecessary synchronization between the two consumer threads. Without the release sequence rule or memory_order_release on the fetch_sub operations, there would be nothing to require that the stores to the queue_data were visible to the second consumer, and you would have a data race.



What does he mean? That both threads should see the value of count as 20? But in my output, count is decremented sequentially across the threads.

Thankfully, the first fetch_sub() does participate in the release sequence, and so the store() synchronizes-with the second fetch_sub(). There's still no synchronizes-with relationship between the two consumer threads. This is shown in figure 5.7. The dotted lines in figure 5.7 show the release sequence, and the solid lines show the happens-before relationships.

Best Answer

What does he mean? That both threads should see the value of count as 20? But in my output, count is decremented sequentially across the threads.



No, he doesn't. All modifications to count are atomic, so in the given code the two reader threads will always see different values for it.

He is talking about the implications of the release-sequence rule, namely: when a given thread performs a release store, and multiple other threads then perform acquire loads of the same location, they form a release sequence, in which each subsequent acquire load has a happens-before relationship with the storing thread (that is, the completion of the store happens-before the load). This means the load operation in a reader thread is a synchronization point with the writer thread: all memory operations that precede the store in the writer must be complete and visible in the reader once its corresponding load completes.

He is saying that without this rule, only the first thread would be synchronized with the writer, and the second thread would therefore have a data race when accessing queue_data (note: not count, which is protected by the atomicity of its accesses anyway). Theoretically, without the rule, the memory operations on the data that happen before the store to count would not be guaranteed to be visible to reader thread number 2, even after it performs its own load operation on count. The release-sequence rule ensures this cannot happen.

In summary: the release-sequence rule ensures that multiple threads can synchronize their loads against a single store. The synchronization in question is that of the memory accesses to the data, not of the atomic variable itself (which, being atomic, is guaranteed to be synchronized anyway).

A note to add here: for the most part, these kinds of issues only matter on CPU architectures that are relaxed about reordering their memory operations. The Intel architecture is not one of them: it is strongly ordered, and only a few very specific cases permit memory operations to be reordered. These nuances mostly become relevant when talking about other architectures, such as ARM and PowerPC.

Regarding c++ - What does "release sequence" mean?, a similar question can be found on Stack Overflow: https://stackoverflow.com/questions/38565650/
