I'm running Mac OS X 10.8 and time.clock() is behaving strangely; some online resources say I should prefer time.time() for timing my code. For example:
import time
t0clock = time.clock()
t0time = time.time()
time.sleep(5)
t1clock = time.clock()
t1time = time.time()
print t1clock - t0clock
print t1time - t0time
0.00330099999999 <-- from time.clock(), clearly incorrect
5.00392889977 <-- from time.time(), correct
Why does this happen? Should I just use time.time() for reliable timing?
Accepted answer
From the documentation for time.clock:
On Unix, return the current processor time as a floating point number expressed in seconds. The precision, and in fact the very definition of the meaning of “processor time”, depends on that of the C function of the same name, but in any case, this is the function to use for benchmarking Python or timing algorithms.
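The "processor time" distinction is the key: sleeping consumes almost no CPU, while a busy loop does. A minimal sketch of this, using time.process_time() (the modern replacement, since time.clock() was deprecated in Python 3.3 and removed in 3.8):

```python
import time

# Sleeping: wall-clock time passes, but the process uses almost no CPU.
cpu0 = time.process_time()
time.sleep(0.2)
cpu_sleep = time.process_time() - cpu0

# Busy-looping for the same duration: the process actively burns CPU.
cpu0 = time.process_time()
deadline = time.perf_counter() + 0.2
while time.perf_counter() < deadline:
    pass
cpu_busy = time.process_time() - cpu0

print(cpu_sleep)  # close to 0
print(cpu_busy)   # close to 0.2
```

This is exactly the effect in the question: time.sleep(5) accrues wall-clock time but essentially no processor time, so the Unix time.clock() reports a tiny number.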
From the documentation for time.time:
Return the time in seconds since the epoch as a floating point number. Note that even though the time is always returned as a floating point number, not all systems provide time with a better precision than 1 second. While this function normally returns non-decreasing values, it can return a lower value than a previous call if the system clock has been set back between the two calls.
time.time() measures wall-clock time in seconds, while time.clock() measures the amount of CPU time the current process has used. On Windows, however, the behavior is different: there, clock() also measures wall-clock seconds.
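On Python 3.3+, the portable way to sidestep this platform difference is time.perf_counter() for wall-clock benchmarking and time.process_time() for CPU time. A sketch of the original experiment using those functions (the exact elapsed values will vary slightly from run to run):

```python
import time

t0_perf = time.perf_counter()   # monotonic wall-clock timer, meant for benchmarking
t0_cpu = time.process_time()    # CPU time consumed by this process

time.sleep(0.5)

wall_elapsed = time.perf_counter() - t0_perf
cpu_elapsed = time.process_time() - t0_cpu

print(wall_elapsed)  # about 0.5 on every platform
print(cpu_elapsed)   # near 0 on every platform: sleeping uses no CPU
```

Unlike time.time(), time.perf_counter() is guaranteed monotonic, so it cannot go backwards if the system clock is adjusted between calls.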
This answer is based on a similar question on Stack Overflow: https://stackoverflow.com/questions/17498199/