I'm using the parallel package in R to parallelize my code, wrapping mclapply, which takes a predefined number of cores as an argument. If my job is going to run for several days, is there a way to write (or wrap) my mclapply calls so that they use fewer cores during the server's peak hours and more cores during off-peak hours?
Best Answer
I think the simplest solution is to split your data into smaller chunks and run mclapply on each chunk separately. The number of cores can then be set for each mclapply call individually. This will probably work best for computations whose per-element run times show little variation.
I put together a quick-and-dirty mock-up of what this could look like:
library(parallel)
library(lubridate)

# you would have to come up with your own function
# for the number of cores to be used
determine_cores = function(hh) {
  # hh will be the hour of the day
  if (hh > 17 | hh < 9) {
    return(4)
  } else {
    return(2)
  }
}

# prepare some sample data
set.seed(1234)
myData = lapply(seq(1e-1, 1, 1e-1), function(x) rnorm(1e7, 0, x))

# calculate SD with mclapply WITHOUT splitting the data into chunks
# we need this for comparison
compRes = mclapply(myData, function(x) sd(x), mc.cores = 4)

set.seed(1234)
# this will hold the results of the separate mclapply calls
res = list()
# starting position within myData
chunk_start_pos = 1
calc_flag = TRUE
while (calc_flag) {
  # use the function defined above to determine how many cores we may use
  core_num = determine_cores(lubridate::hour(Sys.time()))
  # determine the end position of the data chunk
  chunk_end_pos = chunk_start_pos + core_num - 1
  if (chunk_end_pos >= length(myData)) {
    chunk_end_pos = length(myData)
    calc_flag = FALSE
  }
  message("Calculating elements ", chunk_start_pos, " to ", chunk_end_pos)
  # mclapply call on the data chunk (index up to chunk_end_pos,
  # so the last chunk does not run past the end of myData)
  # store the results in res
  res[[length(res) + 1]] = mclapply(myData[chunk_start_pos:chunk_end_pos],
                                    function(x) sd(x),
                                    mc.preschedule = FALSE,
                                    mc.cores = core_num)
  # calculate the new start position
  chunk_start_pos = chunk_start_pos + core_num
}
# let's compare the results
all.equal(compRes, unlist(res, recursive = FALSE))
# TRUE
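The chunked loop above can also be factored into a reusable helper that re-checks the allowed core count before every chunk. Below is a minimal sketch; the names mclapply_adaptive and off_peak_cores, and the hour thresholds, are illustrative assumptions, not part of the original answer:

```r
library(parallel)

# re-evaluates cores_fun() before each chunk and sizes the chunk accordingly
mclapply_adaptive = function(X, FUN, cores_fun, ...) {
  res = list()
  i = 1
  while (i <= length(X)) {
    n_cores = cores_fun()
    # mclapply cannot fork on Windows, so fall back to a single core there
    if (.Platform$OS.type == "windows") n_cores = 1L
    j = min(i + n_cores - 1, length(X))   # end of the current chunk
    res = c(res, mclapply(X[i:j], FUN, ..., mc.cores = n_cores))
    i = j + 1
  }
  res
}

# example scheduling function (assumed thresholds: 9-17h counts as peak time)
off_peak_cores = function() {
  hh = as.integer(format(Sys.time(), "%H"))
  if (hh > 17 || hh < 9) 4L else 2L
}

# usage: same call shape as mclapply, but with a function instead of mc.cores
squares = mclapply_adaptive(as.list(1:10), function(x) x^2, off_peak_cores)
```

Since the schedule is consulted once per chunk, a multi-day job will adapt to peak hours within roughly one chunk's run time; smaller chunks react faster but add a little scheduling overhead.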
Regarding "r - Changing the number of cores during a parallel computation in R", we found a similar question on Stack Overflow: https://stackoverflow.com/questions/33150796/