This is my first time using Julia for parallel computing, and I'm a bit stuck. Suppose I start Julia as follows: julia -p 4. I then declare a function on all processors and use it both with pmap and with @parallel for.
@everywhere function count_heads(n)
    c::Int = 0
    for i = 1:n
        c += rand(Bool)
    end
    n, c # tuple (input, output)
end
###### first part ######
v = pmap(count_heads, 50000:1000:70000)
println("Result first part")
println(v)

###### second part ######
println("Result second part")
@parallel for i in 50000:1000:70000
    println(count_heads(i))
end
The results are as follows.
Result first part
Counting heads function
Any[(50000,24894),(51000,25559),(52000,26141),(53000,26546),(54000,27056),(55000,27426),(56000,28024),(57000,28380),(58000,29001),(59000,29398),(60000,30100),(61000,30608),(62000,31001),(63000,31520),(64000,32200),(65000,32357),(66000,33063),(67000,33674),(68000,34085),(69000,34627),(70000,34902)]
Result second part
From worker 4: (61000, From worker 5: (66000, From worker 2: (50000, From worker 3: (56000
So the pmap function clearly works, but @parallel for either stops or doesn't give me the results. Am I doing something wrong? Thanks!
Update

If I put sleep(10) at the end of the code, it does the job correctly:

From worker 5: (66000,33182)
From worker 3: (56000,27955)
............
From worker 3: (56000,27955)
Best answer
Both of your examples run fine on my laptop, so I'm not sure, but I think this may solve your problem: it should work correctly if you add @sync before @parallel for.
From the Julia parallel computing documentation, http://docs.julialang.org/en/release-0.4/manual/parallel-computing/ :
... the reduction operator can be omitted if it is not needed. In that case, the loop executes asynchronously, i.e. it spawns independent tasks on all available workers and returns an array of RemoteRef immediately without waiting for completion. The caller can wait for the RemoteRef completions at a later point by calling fetch() on them, or wait for completion at the end of the loop by prefixing it with @sync, like @sync @parallel for.
So you are probably calling println before the RemoteRefs have completed.
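For reference, here is a sketch of the fix in current Julia (1.x), where @parallel from the 0.4-era manual quoted above was renamed Distributed.@distributed; the @sync rule is unchanged. This is my illustration, not part of the original answer:

```julia
using Distributed

@everywhere function count_heads(n)
    c = 0
    for i = 1:n
        c += rand(Bool)
    end
    n, c  # tuple (input, output)
end

# Without @sync the loop spawns tasks and returns immediately,
# so the program can exit before any worker prints its result.
# @sync blocks until every iteration has finished.
@sync @distributed for i in 50000:1000:70000
    println(count_heads(i))
end
```

With no worker processes added, @distributed simply runs the iterations locally, which makes the snippet easy to try before starting Julia with -p 4.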
On parallel-processing - Julia Parallel macro does not seem to work, see the similar question on Stack Overflow: https://stackoverflow.com/questions/39620354/