c++ - Memory leak in the cudaDecodeGL SDK sample after porting from Windows to Linux

Tags: c++ c memory-leaks cuda valgrind

I successfully ported cudaDecodeGL from Windows to Linux, and it works fine. But after checking for memory leaks with valgrind, I found that it leaks a large amount of memory.

I went through the code to look for a fix, and I have some questions:

1) Should I delete every pointer declared in every function? In other words, does failing to delete a pointer always cause a memory leak? (See the sketch after the valgrind log below.)

2) Can porting a Windows program to Linux introduce memory leaks by itself, for example because of differences between the Linux and Windows memory management mechanisms?

3) Can you suggest a procedure for tackling a memory leak found with valgrind? I mean, if valgrind told you that you had a leak like this, what would you do? Here is part of the valgrind log file:

...
==10468== 754,864 (4,088 direct, 750,776 indirect) bytes in 1 blocks are definitely lost in loss record 136 of 137
==10468==    at 0x4A069EE: malloc (vg_replace_malloc.c:270)
==10468==    by 0x5B0A366: cuvidCreateVideoParser (in /usr/lib64/libnvcuvid.so.319.17)
==10468==    by 0x40929E: VideoParser::VideoParser(VideoDecoder*, FrameQueue*, CUctx_st**) (in /home/admin/testcuda/de_3/cudaDecodeGL/3_Imaging/cudaDecodeGL/Far_Decoder)
==10468==    by 0x4063F3: initCudaVideo() (in /home/admin/testcuda/de_3/cudaDecodeGL/3_Imaging/cudaDecodeGL/Far_Decoder)
==10468==    by 0x404E8B: initCudaResources(int, char**, int*) (in /home/admin/testcuda/de_3/cudaDecodeGL/3_Imaging/cudaDecodeGL/Far_Decoder)
==10468==    by 0x40561B: main (in /home/admin/testcuda/de_3/cudaDecodeGL/3_Imaging/cudaDecodeGL/Far_Decoder)
==10468== LEAK SUMMARY:
    ==10468==    definitely lost: 7,608 bytes in 148 blocks
    ==10468==    indirectly lost: 988,728 bytes in 907 blocks
    ==10468==      possibly lost: 2,307,388 bytes in 59 blocks
    ==10468==    still reachable: 413,278 bytes in 198 blocks
    ==10468==         suppressed: 0 bytes in 0 blocks
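
To illustrate question 1: a pointer variable itself needs no cleanup; what leaks is a heap allocation that nobody releases. A minimal, hypothetical C++ sketch (all names invented for illustration):

#include <memory>

void leaks()
{
    int *p = new int[256];   // heap block with no matching delete[]: valgrind reports it as lost
    (void)p;                 // the pointer variable itself lives on the stack and needs no cleanup
}

void doesNotLeak()
{
    int local = 42;
    int *view = &local;      // non-owning pointer to a stack object: deleting it would be an error
    (void)view;

    std::unique_ptr<int[]> owned(new int[256]);
    // RAII: the array is released automatically when 'owned' goes out of scope,
    // even on an early return or an exception.
}

So the rule is not "delete every pointer" but "release every allocation you own, exactly once"; RAII wrappers such as std::unique_ptr make that automatic.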

If you need more information, please let me know. And if you think I should add anything to clarify my question, please tell me what to do; I would really appreciate it.

Update:

VideoParser::VideoParser(VideoDecoder *pVideoDecoder, FrameQueue *pFrameQueue, CUcontext *pCudaContext): hParser_(0)
{
    assert(0 != pFrameQueue);
    oParserData_.pFrameQueue   = pFrameQueue;
    assert(0 != pVideoDecoder);
    oParserData_.pVideoDecoder = pVideoDecoder;
    oParserData_.pContext      = pCudaContext;

    CUVIDPARSERPARAMS oVideoParserParameters;
    memset(&oVideoParserParameters, 0, sizeof(CUVIDPARSERPARAMS));
    oVideoParserParameters.CodecType              = pVideoDecoder->codec();
    oVideoParserParameters.ulMaxNumDecodeSurfaces = pVideoDecoder->maxDecodeSurfaces();
    oVideoParserParameters.ulMaxDisplayDelay      = 1;  // this flag is needed so the parser will push frames out to the decoder as quickly as it can
    oVideoParserParameters.pUserData              = &oParserData_;
    oVideoParserParameters.pfnSequenceCallback    = HandleVideoSequence;    // Called before decoding frames and/or whenever there is a format change
    oVideoParserParameters.pfnDecodePicture       = HandlePictureDecode;    // Called when a picture is ready to be decoded (decode order)
    oVideoParserParameters.pfnDisplayPicture      = HandlePictureDisplay;   // Called whenever a picture is ready to be displayed (display order)
    CUresult oResult = cuvidCreateVideoParser(&hParser_, &oVideoParserParameters);
    assert(CUDA_SUCCESS == oResult);
}

As you can see, cuvidCreateVideoParser lives inside a shared library, so how can I fix this memory leak?

Best Answer

Well, documentation for nvcuvid doesn't turn up on the first page of Google results, but a quick look at nvcuvid.h reveals:

CUresult CUDAAPI cuvidCreateVideoParser(CUvideoparser *pObj, CUVIDPARSERPARAMS *pParams);
CUresult CUDAAPI cuvidParseVideoData(CUvideoparser obj, CUVIDSOURCEDATAPACKET *pPacket);
CUresult CUDAAPI cuvidDestroyVideoParser(CUvideoparser obj);

Be sure to destroy the video parser handle via cuvidDestroyVideoParser in the destructor of the VideoParser class. From the small block of code you posted, it is not clear how long a VideoParser lives; judging from the valgrind output, I suspect it has function scope and is expected to be destroyed when the function returns. If an object's cuvid resources are not destroyed properly, you get exactly this kind of memory leak.
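
A minimal sketch of such a destructor, assuming hParser_ is the CUvideoparser member initialized in the constructor above (the assert-style error handling just mirrors the constructor; adapt as needed):

VideoParser::~VideoParser()
{
    if (hParser_)
    {
        // Release the parser handle allocated by cuvidCreateVideoParser;
        // without this call, the block valgrind reports as "definitely lost"
        // inside cuvidCreateVideoParser is never freed.
        CUresult oResult = cuvidDestroyVideoParser(hParser_);
        assert(CUDA_SUCCESS == oResult);
        hParser_ = 0;
    }
}

The same acquire/release pairing applies to every other cuvid handle the sample creates: whatever a cuvid*Create call hands back must eventually be passed to the matching cuvid*Destroy call.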

Regarding "c++ - Memory leak in the cudaDecodeGL SDK sample after porting from Windows to Linux", a similar question can be found on Stack Overflow: https://stackoverflow.com/questions/19108768/
