c - Execution time differences in C

Tags: c, time, segmentation-fault, command-line-arguments

I am writing vector addition code in C. When the number of elements reaches about 10 million, I get a segmentation fault.

I know that a segmentation fault occurs when we access a memory address outside the process's address space.

But I don't think that is the cause here.

My code is:

#include <stdio.h>
#include <stdlib.h>
#include <sys/time.h>
struct timeval stop, start,start1,stop1;
void add(int a[], int b[],int N);

int main(int argc, char* argv[]){

    gettimeofday(&start1, NULL);

    if(argc<2){
        printf("Please enter the value of N(number of elements)\n");
    }else{
        int i,N;
        N=atoi(argv[1]);
        N=N*1024;   
        int a[N],b[N];
        for(i=0;i<=N;i++){
            a[i]=rand()%1000;
            b[i]=rand()%1000;
        }
        gettimeofday(&start, NULL);
        add(a,b,N);
        gettimeofday(&stop, NULL);
        printf("took %lu us\n", (stop.tv_sec - start.tv_sec) * 1000000 + stop.tv_usec - start.tv_usec);
    }
    gettimeofday(&stop1, NULL);
    printf("Total took %lu us\n", (stop1.tv_sec - start1.tv_sec) * 1000000 + stop1.tv_usec - start1.tv_usec);
    return 0;
}

void add(int a[], int b[], int N){
    int c,i;
    for(i=0;i<=N;i++){
        c=a[i]+b[i];
        // printf("%d + %d = %d\n",a[i],b[i],c);
    }
}

The output is as follows (the command-line argument sets the size of the arrays):

$ ./vectorAdd 1
took 9 us
Total took 233 us

$ ./vectorAdd 10
took 93 us
Total took 918 us

$ ./vectorAdd 100
took 210 us
Total took 4974 us

$ ./vectorAdd 1000
took 2371 us
Total took 20277 us

$ ./vectorAdd 10000
Segmentation fault (core dumped)
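
Note: the crash is most likely a stack overflow. int a[N], b[N] are variable-length arrays placed on the stack, and for the failing run they need tens of megabytes. Below is a minimal sketch of the size arithmetic, assuming a 4-byte int and a typical 8 MiB default stack limit (ulimit -s); none of this code is from the original post:

#include <stdio.h>

int main(void) {
    long N = 10000L * 1024;                  /* ./vectorAdd 10000 -> 10,240,000 elements */
    long per_array = N * (long)sizeof(int);  /* ~39 MiB per VLA, assuming 4-byte int */
    long total = 2 * per_array;              /* a[] and b[] together: ~78 MiB */
    printf("each VLA : %ld bytes (~%ld MiB)\n", per_array, per_array >> 20);
    printf("both VLAs: %ld bytes (~%ld MiB)\n", total, total >> 20);
    /* A common default stack limit is 8 MiB, so ~78 MiB of stack arrays
       overflows the stack and the process receives SIGSEGV. */
    return 0;
}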

Best Answer

I tried it and it works. Thanks for all the replies; they helped me a lot. Please let me know if there are any further optimizations.

Output:

$ ./vectorAdd 10000
took 34853 us
Total took 246023 us

#include<stdio.h>
#include<stdlib.h>
#include <sys/time.h>
struct timeval stop, start,start1,stop1;
void add(int *a, int *b, int *c, int N);
int *a, *b ,*c;

int main(int argc,char* argv[]){
    gettimeofday(&start1, NULL);
    if(argc<2){
        printf("Please enter the value of N(number of elements)\n");
    }
    else
    {
        int i,N=0;
        N=atoi(argv[1]);
        N=N*1024;
        a=(int *)malloc(N * sizeof(int));
        b=(int *)malloc(N * sizeof(int));
        c=(int *)malloc(N * sizeof(int));
        for(i=0;i<N;i++){
            a[i]=rand()%1000;
            b[i]=rand()%1000;
        }
        gettimeofday(&start, NULL);
        add(a,b,c,N);
        gettimeofday(&stop, NULL);
        printf("took %lu us\n", (stop.tv_sec - start.tv_sec) * 1000000 + stop.tv_usec - start.tv_usec);
        free(a);
        free(b);
        free(c);
    }
    gettimeofday(&stop1, NULL);
    printf("Total took %lu us\n", (stop1.tv_sec - start1.tv_sec) * 1000000 + stop1.tv_usec - start1.tv_usec);
    return 0;
}

void add(int *a, int *b, int *c, int N){
    int i;
    for(i=0;i<N;i++){   /* i < N: valid indices are 0..N-1 */
        c[i]=a[i]+b[i];
        // printf("%d + %d = %d\n",a[i],b[i],c[i]);
    }
}
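
Since the answer asks about further optimizations, here is a small illustrative sketch of two common follow-ups: checking the result of malloc, and timing with clock_gettime(CLOCK_MONOTONIC), which is not affected by wall-clock adjustments the way gettimeofday can be. This assumes a POSIX system and is not part of the original answer:

#include <stdio.h>
#include <stdlib.h>
#include <time.h>

/* Microseconds elapsed between two monotonic timestamps. */
static long elapsed_us(struct timespec t0, struct timespec t1) {
    return (t1.tv_sec - t0.tv_sec) * 1000000L + (t1.tv_nsec - t0.tv_nsec) / 1000L;
}

int main(void) {
    struct timespec t0, t1;
    int N = 1000 * 1024;
    int *a = malloc((size_t)N * sizeof *a);
    int *b = malloc((size_t)N * sizeof *b);
    int *c = malloc((size_t)N * sizeof *c);
    if (a == NULL || b == NULL || c == NULL) {   /* always check allocations */
        perror("malloc");
        return 1;
    }
    for (int i = 0; i < N; i++) {                /* i < N keeps indices in bounds */
        a[i] = rand() % 1000;
        b[i] = rand() % 1000;
    }
    clock_gettime(CLOCK_MONOTONIC, &t0);
    for (int i = 0; i < N; i++)
        c[i] = a[i] + b[i];
    clock_gettime(CLOCK_MONOTONIC, &t1);
    printf("took %ld us\n", elapsed_us(t0, t1));
    free(a); free(b); free(c);
    return 0;
}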

Regarding "c - Execution time differences in C", a similar question can be found on Stack Overflow: https://stackoverflow.com/questions/60029740/
