While performing simple float arithmetic, I noticed anomalous computation times. The following short program demonstrates the behavior:
#include <time.h>
#include <stdlib.h>
#include <stdio.h>

const int MAX_ITER = 100000000;

int main(int argc, char *argv[]) {
    double x = 1.0, y;
    int i;
    clock_t t1, t2;

    scanf("%lf", &y);

    t1 = clock();
    for (i = 0; i < MAX_ITER; i++)
        x *= y;
    t2 = clock();

    printf("x = %lf\n", x);
    printf("Time: %.5lfsegs\n", ((double) (t2 - t1)) / CLOCKS_PER_SEC);
    return 0;
}
Here are two different runs of the program:
y = 0.5
x = 0.000000
Time: 1.32000segs

y = 0.9
x = 0.000000
Time: 19.99000segs
I am testing the code on a laptop with the following specifications:
- CPU: Intel® Core™2 Duo CPU T5800 @ 2.00GHz × 2
- RAM: 4 GB
- OS: Ubuntu 12.04 (64-bit)
- Model: Dell Studio 1535
Can someone explain this behavior in detail? I know that x reaches 0 more slowly with y = 0.9 than with y = 0.5, so I suspect the problem is directly related to that.
Best Answer
Denormal (or subnormal) numbers often hurt performance. As in your second example, converging slowly toward 0 generates more subnormals. Read more here and here. For a more serious read, see the frequently cited (and very dense) What Every Computer Scientist Should Know About Floating-Point Arithmetic.
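This can be checked directly. The sketch below is a helper written for illustration (not from the original post); it uses C99's fpclassify to count how many multiplications by y it takes for x to first become subnormal and then underflow to zero, showing that y = 0.9 produces many more subnormal intermediate values than y = 0.5:

#include <math.h>
#include <stdio.h>

/* Illustrative helper: repeatedly multiply x by y, recording the step at
 * which x first becomes subnormal and the step at which it reaches 0. */
static void steps_until(double y, long *first_subnormal, long *zero_at) {
    double x = 1.0;
    long i = 0;
    *first_subnormal = 0;
    while (x != 0.0) {
        x *= y;
        i++;
        if (*first_subnormal == 0 && fpclassify(x) == FP_SUBNORMAL)
            *first_subnormal = i;
    }
    *zero_at = i;
}

int main(void) {
    long sub, zero;
    steps_until(0.5, &sub, &zero);
    /* prints: y = 0.5: subnormal after 1023 steps, zero after 1075 */
    printf("y = 0.5: subnormal after %ld steps, zero after %ld\n", sub, zero);
    steps_until(0.9, &sub, &zero);
    printf("y = 0.9: subnormal after %ld steps, zero after %ld\n", sub, zero);
    return 0;
}

With y = 0.5, x is exactly 2^-i, so it is subnormal for only about 52 steps before hitting zero; the slower decay of y = 0.9 keeps x in the subnormal range for several hundred multiplications.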
From the second source:
Under IEEE-754, floating point numbers are represented in binary as:
Number = signbit × mantissa × 2^exponent
There are potentially multiple ways of representing the same number. Using decimal as an example, the number 0.1 could be represented as 1×10^-1 or 0.1×10^0 or even 0.01×10^1. The standard dictates that the numbers are always stored with the first bit as a one. In decimal, that corresponds to the 1×10^-1 example.
Now suppose that the lowest exponent that can be represented is -100. So the smallest number that can be represented in normal form is 1×10^-100. However, if we relax the constraint that the leading bit be a one, then we can actually represent smaller numbers in the same space. Taking a decimal example, we could represent 0.1×10^-100. This is called a subnormal number. The purpose of having subnormal numbers is to smooth the gap between the smallest normal number and zero.
It is very important to realise that subnormal numbers are represented with less precision than normal numbers. In fact, they are trading reduced precision for their smaller size. Hence calculations that use subnormal numbers are not going to have the same precision as calculations on normal numbers. So an application which does significant computation on subnormal numbers is probably worth investigating to see if rescaling (i.e. multiplying the numbers by some scaling factor) would yield fewer subnormals, and more accurate results.
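To make the binary counterpart of this decimal analogy concrete, the boundary values for double can be printed directly. This is a sketch assuming C99 (the hex-float literal 0x1p-1074 denotes 2^-1074, the smallest positive subnormal double; DBL_MIN is 2^-1022, the smallest normal one):

#include <float.h>
#include <math.h>
#include <stdio.h>

int main(void) {
    double min_normal = DBL_MIN;      /* 2^-1022: smallest normal double */
    double min_subnormal = 0x1p-1074; /* 2^-1074: smallest positive subnormal */

    /* %a prints the exact hex-float representation, %g an approximation. */
    printf("smallest normal:    %a (%g)\n", min_normal, min_normal);
    printf("smallest subnormal: %a (%g)\n", min_subnormal, min_subnormal);

    /* Subnormals fill the gap between 0 and DBL_MIN at reduced precision;
     * halving the smallest subnormal has nowhere to go but zero. */
    printf("classified subnormal? %d\n",
           fpclassify(min_subnormal) == FP_SUBNORMAL);
    printf("half of it: %g\n", min_subnormal / 2);
    return 0;
}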
I was going to explain this myself, but the explanation above is very well written and concise.
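If the subnormal slow path matters in practice, one workaround (a sketch, not part of the original answer) is to manually flush values to zero once they drop below DBL_MIN. This abandons gradual underflow and therefore changes the numerical results, but it keeps every multiply on the fast normal/zero hardware path; on x86, the FTZ/DAZ bits of the MXCSR register achieve the same effect in hardware (gcc's -ffast-math sets them at program startup):

#include <float.h>
#include <stdio.h>

int main(void) {
    double x = 1.0, y = 0.9;
    long i;
    for (i = 0; i < 100000000; i++) {
        x *= y;
        /* Manual flush-to-zero: clamp x before it enters the subnormal
         * range, so subsequent multiplies never touch subnormal operands. */
        if (x < DBL_MIN)
            x = 0.0;
    }
    printf("x = %lf\n", x);   /* prints: x = 0.000000 */
    return 0;
}

The extra comparison per iteration is far cheaper than the microcoded subnormal handling it avoids.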
Regarding "c - Why do some arithmetic computations take more time than usual?", a similar question was found on Stack Overflow: https://stackoverflow.com/questions/12393600/