I have an AudioInputIOProc that I'm getting an AudioBufferList from. I need to convert this AudioBufferList to a CMSampleBufferRef.

Here is the code I've written so far:
- (void)handleAudioSamples:(const AudioBufferList*)samples numSamples:(UInt32)numSamples hostTime:(UInt64)hostTime {
    // Create a CMSampleBufferRef from the list of samples, which we'll own
    AudioStreamBasicDescription monoStreamFormat;
    memset(&monoStreamFormat, 0, sizeof(monoStreamFormat));
    monoStreamFormat.mSampleRate = 44100;
    monoStreamFormat.mFormatID = kAudioFormatMPEG4AAC;
    monoStreamFormat.mFormatFlags = kAudioFormatFlagIsSignedInteger | kAudioFormatFlagsNativeEndian | kAudioFormatFlagIsPacked | kAudioFormatFlagIsNonInterleaved;
    monoStreamFormat.mBytesPerPacket = 4;
    monoStreamFormat.mFramesPerPacket = 1;
    monoStreamFormat.mBytesPerFrame = 4;
    monoStreamFormat.mChannelsPerFrame = 2;
    monoStreamFormat.mBitsPerChannel = 16;

    CMFormatDescriptionRef format = NULL;
    OSStatus status = CMAudioFormatDescriptionCreate(kCFAllocatorDefault, &monoStreamFormat, 0, NULL, 0, NULL, NULL, &format);
    if (status != noErr) {
        // really shouldn't happen
        return;
    }

    mach_timebase_info_data_t tinfo;
    mach_timebase_info(&tinfo);

    double _hostTimeToNSFactor = (double)tinfo.numer / tinfo.denom;
    uint64_t timeNS = (uint64_t)(hostTime * _hostTimeToNSFactor);
    CMTime presentationTime = CMTimeMake(timeNS, 1000000000);
    CMSampleTimingInfo timing = { CMTimeMake(1, 44100), kCMTimeZero, kCMTimeInvalid };

    CMSampleBufferRef sampleBuffer = NULL;
    status = CMSampleBufferCreate(kCFAllocatorDefault, NULL, false, NULL, NULL, format, numSamples, 1, &timing, 0, NULL, &sampleBuffer);
    if (status != noErr) {
        // couldn't create the sample buffer
        NSLog(@"Failed to create sample buffer");
        CFRelease(format);
        return;
    }

    // add the samples to the buffer
    status = CMSampleBufferSetDataBufferFromAudioBufferList(sampleBuffer,
                                                            kCFAllocatorDefault,
                                                            kCFAllocatorDefault,
                                                            0,
                                                            samples);
    if (status != noErr) {
        NSLog(@"Failed to add samples to sample buffer, error status code: %d", status);
        CFRelease(sampleBuffer);
        CFRelease(format);
        return;
    }

    [self addAudioFrame:sampleBuffer];

    NSLog(@"Original sample buf size: %ld for %d samples from %d buffers, first buffer has size %d", CMSampleBufferGetTotalSampleSize(sampleBuffer), numSamples, samples->mNumberBuffers, samples->mBuffers[0].mDataByteSize);
    NSLog(@"Original sample buf has %ld samples", CMSampleBufferGetNumSamples(sampleBuffer));
}
Now, I'm not sure how to calculate numSamples given the function definition of the AudioInputIOProc:
OSStatus AudioTee::InputIOProc(AudioDeviceID inDevice, const AudioTimeStamp *inNow, const AudioBufferList *inInputData, const AudioTimeStamp *inInputTime, AudioBufferList *outOutputData, const AudioTimeStamp *inOutputTime, void *inClientData)
This definition lives in the AudioTee.cpp file in WavTap.
The error I'm hitting is CMSampleBufferError_RequiredParameterMissing (error code -12731) when I try to call CMSampleBufferSetDataBufferFromAudioBufferList.
Update:
To clarify the question, here is the format of the audio data I'm getting from the AudioDeviceIOProc:

channels: 2, sample rate: 44100 Hz, precision: 32-bit, sample encoding: 32-bit signed integer PCM, endianness: little, reverse nibbles: no, reverse bits: no

I'm given an AudioBufferList* containing all of the audio data (for a 30-second video) that I need to convert into CMSampleBufferRefs and append, via an AVAssetWriterInput, to the 30-second video being written to disk.
Best answer
Three things look wrong:

1. You declare the format ID as kAudioFormatMPEG4AAC, but configure it as LPCM. So try

monoStreamFormat.mFormatID = kAudioFormatLinearPCM;

You also call the format "mono" when it is configured as stereo.

2. Why use mach_timebase_info, which can leave gaps in your audio presentation timestamps? Use a sample count instead:

CMTime presentationTime = CMTimeMake(numSamplesProcessed, 44100);

3. Your CMSampleTimingInfo looks wrong, and you're not using presentationTime. You set the buffer's duration to 1 sample long and its presentation time to zero, which is incorrect when the buffer's duration could be numSamples. Something like this would make more sense:

CMSampleTimingInfo timing = { CMTimeMake(numSamples, 44100), presentationTime, kCMTimeInvalid };
A few more questions:

Does your AudioBufferList have the expected 2 AudioBuffers?

Do you have a runnable version of this?
P.S. I've been guilty of this myself, but allocating memory on the audio thread is considered harmful in audio development.
Regarding "c++ - Creating a CMSampleBufferRef from an AudioInputIOProc", we found a similar question on Stack Overflow: https://stackoverflow.com/questions/49122224/