ios - Encoding PCM (CMSampleBufferRef) to AAC on iOS - how to set the frequency and bitrate?

Tags: ios audio core-audio aac audiotoolbox

I want to encode PCM (CMSampleBufferRefs arriving live from an AVCaptureAudioDataOutputSampleBufferDelegate) into AAC.

When the first CMSampleBufferRef arrives, I set up both (in/out) AudioStreamBasicDescriptions, the "out" one according to the documentation:

AudioStreamBasicDescription inAudioStreamBasicDescription = *CMAudioFormatDescriptionGetStreamBasicDescription((CMAudioFormatDescriptionRef)CMSampleBufferGetFormatDescription(sampleBuffer));

AudioStreamBasicDescription outAudioStreamBasicDescription = {0}; // Always initialize the fields of a new audio stream basic description structure to zero, as shown here: ...
outAudioStreamBasicDescription.mSampleRate = 44100; // The number of frames per second of the data in the stream, when the stream is played at normal speed. For compressed formats, this field indicates the number of frames per second of equivalent decompressed data. The mSampleRate field must be nonzero, except when this structure is used in a listing of supported formats (see “kAudioStreamAnyRate”).
outAudioStreamBasicDescription.mFormatID = kAudioFormatMPEG4AAC; // kAudioFormatMPEG4AAC_HE does not work. Can't find `AudioClassDescription`. `mFormatFlags` is set to 0.
outAudioStreamBasicDescription.mFormatFlags = kMPEG4Object_AAC_SSR; // Format-specific flags to specify details of the format. Set to 0 to indicate no format flags. See “Audio Data Format Identifiers” for the flags that apply to each format.
outAudioStreamBasicDescription.mBytesPerPacket = 0; // The number of bytes in a packet of audio data. To indicate variable packet size, set this field to 0. For a format that uses variable packet size, specify the size of each packet using an AudioStreamPacketDescription structure.
outAudioStreamBasicDescription.mFramesPerPacket = 1024; // The number of frames in a packet of audio data. For uncompressed audio, the value is 1. For variable bit-rate formats, the value is a larger fixed number, such as 1024 for AAC. For formats with a variable number of frames per packet, such as Ogg Vorbis, set this field to 0.
outAudioStreamBasicDescription.mBytesPerFrame = 0; // The number of bytes from the start of one frame to the start of the next frame in an audio buffer. Set this field to 0 for compressed formats. ...
outAudioStreamBasicDescription.mChannelsPerFrame = 1; // The number of channels in each frame of audio data. This value must be nonzero.
outAudioStreamBasicDescription.mBitsPerChannel = 0; // ... Set this field to 0 for compressed formats.
outAudioStreamBasicDescription.mReserved = 0; // Pads the structure out to force an even 8-byte alignment. Must be set to 0.

Then I create the AudioConverterRef:

AudioClassDescription audioClassDescription;
memset(&audioClassDescription, 0, sizeof(audioClassDescription));
UInt32 size;
NSAssert(AudioFormatGetPropertyInfo(kAudioFormatProperty_Encoders, sizeof(outAudioStreamBasicDescription.mFormatID), &outAudioStreamBasicDescription.mFormatID, &size) == noErr, nil);
uint32_t count = size / sizeof(AudioClassDescription);
AudioClassDescription descriptions[count];
NSAssert(AudioFormatGetProperty(kAudioFormatProperty_Encoders, sizeof(outAudioStreamBasicDescription.mFormatID), &outAudioStreamBasicDescription.mFormatID, &size, descriptions) == noErr, nil);
for (uint32_t i = 0; i < count; i++) {

    if ((outAudioStreamBasicDescription.mFormatID == descriptions[i].mSubType) && (kAppleSoftwareAudioCodecManufacturer == descriptions[i].mManufacturer)) {

        memcpy(&audioClassDescription, &descriptions[i], sizeof(audioClassDescription));

    }
}
NSAssert(audioClassDescription.mSubType == outAudioStreamBasicDescription.mFormatID && audioClassDescription.mManufacturer == kAppleSoftwareAudioCodecManufacturer, nil);
AudioConverterRef audioConverter;
memset(&audioConverter, 0, sizeof(audioConverter));
NSAssert(AudioConverterNewSpecific(&inAudioStreamBasicDescription, &outAudioStreamBasicDescription, 1, &audioClassDescription, &audioConverter) == 0, nil);

Then I convert each CMSampleBufferRef into raw AAC data:

AudioBufferList inAaudioBufferList;
CMBlockBufferRef blockBuffer;
CMSampleBufferGetAudioBufferListWithRetainedBlockBuffer(sampleBuffer, NULL, &inAaudioBufferList, sizeof(inAaudioBufferList), NULL, NULL, 0, &blockBuffer);
NSAssert(inAaudioBufferList.mNumberBuffers == 1, nil);

uint32_t bufferSize = inAaudioBufferList.mBuffers[0].mDataByteSize;
uint8_t *buffer = (uint8_t *)malloc(bufferSize);
memset(buffer, 0, bufferSize);
AudioBufferList outAudioBufferList;
outAudioBufferList.mNumberBuffers = 1;
outAudioBufferList.mBuffers[0].mNumberChannels = inAaudioBufferList.mBuffers[0].mNumberChannels;
outAudioBufferList.mBuffers[0].mDataByteSize = bufferSize;
outAudioBufferList.mBuffers[0].mData = buffer;

UInt32 ioOutputDataPacketSize = 1;

NSAssert(AudioConverterFillComplexBuffer(audioConverter, inInputDataProc, &inAaudioBufferList, &ioOutputDataPacketSize, &outAudioBufferList, NULL) == 0, nil);

NSData *data = [NSData dataWithBytes:outAudioBufferList.mBuffers[0].mData length:outAudioBufferList.mBuffers[0].mDataByteSize];

free(buffer);
CFRelease(blockBuffer);

The inInputDataProc() implementation:

OSStatus inInputDataProc(AudioConverterRef inAudioConverter, UInt32 *ioNumberDataPackets, AudioBufferList *ioData, AudioStreamPacketDescription **outDataPacketDescription, void *inUserData)
{
    AudioBufferList audioBufferList = *(AudioBufferList *)inUserData;

    ioData->mBuffers[0].mData = audioBufferList.mBuffers[0].mData;
    ioData->mBuffers[0].mDataByteSize = audioBufferList.mBuffers[0].mDataByteSize;

    return  noErr;
}
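For comparison, here is a fuller sketch of the same callback: it also reports via *ioNumberDataPackets how many input packets it supplies (for LPCM input, one packet equals one frame) and sets the count to 0 once the data is exhausted. The typedefs are local stand-ins that mirror CoreAudio's AudioBufferList layout so the sketch compiles anywhere; on iOS the real types come from CoreAudio.

```c
#include <stdint.h>
#include <stddef.h>

/* Local stand-ins mirroring CoreAudio's definitions, so this
   sketch is self-contained; on iOS use the CoreAudio headers. */
typedef int32_t OSStatus;
enum { noErr = 0 };
typedef struct { uint32_t mNumberChannels; uint32_t mDataByteSize; void *mData; } AudioBuffer;
typedef struct { uint32_t mNumberBuffers; AudioBuffer mBuffers[1]; } AudioBufferList;

typedef struct {
    AudioBufferList *bufferList;  /* the captured PCM for this call          */
    uint32_t bytesPerFrame;       /* from the input ASBD (mono 16-bit -> 2)  */
    int consumed;                 /* set once the data has been handed out   */
} InputProcContext;

/* Sketch of an AudioConverter input proc that hands out one buffer
   of LPCM and reports the packet (= frame) count it supplied. */
static OSStatus inInputDataProcSketch(void *inAudioConverter,
                                      uint32_t *ioNumberDataPackets,
                                      AudioBufferList *ioData,
                                      void **outDataPacketDescription,
                                      void *inUserData)
{
    (void)inAudioConverter;
    InputProcContext *ctx = (InputProcContext *)inUserData;

    if (ctx->consumed) {          /* nothing left: tell the converter so */
        *ioNumberDataPackets = 0;
        return noErr;
    }
    ioData->mBuffers[0].mData         = ctx->bufferList->mBuffers[0].mData;
    ioData->mBuffers[0].mDataByteSize = ctx->bufferList->mBuffers[0].mDataByteSize;
    /* LPCM: one packet per frame, so packets = bytes / bytesPerFrame */
    *ioNumberDataPackets = ctx->bufferList->mBuffers[0].mDataByteSize / ctx->bytesPerFrame;
    if (outDataPacketDescription)
        *outDataPacketDescription = NULL;  /* LPCM input needs no descriptions */
    ctx->consumed = 1;
    return noErr;
}
```

The context struct and its fields are illustrative; the point is that the callback owns the bookkeeping for how much input it has already supplied.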

Now data holds my raw AAC, which I wrap into an ADTS frame with a proper ADTS header; a sequence of these ADTS frames is a playable AAC file.
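The ADTS framing step can be sketched in plain C. Assuming AAC LC output, the 7-byte header (no CRC) follows the ISO/IEC 14496-3 layout: the profile field is the MPEG-4 audio object type minus 1, the sampling frequency index comes from the MPEG-4 table (44100 Hz maps to index 4), and the 13-bit frame length includes the 7 header bytes:

```c
#include <stdint.h>
#include <stddef.h>

/* MPEG-4 sampling frequency index table (ISO/IEC 14496-3);
   returns the 4-bit ADTS index, or -1 for an unsupported rate. */
static int adts_sampling_frequency_index(uint32_t sampleRate)
{
    static const uint32_t rates[] = {
        96000, 88200, 64000, 48000, 44100, 32000, 24000,
        22050, 16000, 12000, 11025, 8000, 7350
    };
    for (size_t i = 0; i < sizeof(rates) / sizeof(rates[0]); i++)
        if (rates[i] == sampleRate) return (int)i;
    return -1;
}

/* Build a 7-byte ADTS header (no CRC) for one raw AAC frame.
   profile: MPEG-4 audio object type minus 1 (AAC LC = object type 2 -> 1).
   freqIdx: sampling frequency index from the table above.
   chanCfg: channel configuration (mono -> 1).
   aacLength: size of the raw AAC payload in bytes. */
static void make_adts_header(uint8_t header[7], int profile, int freqIdx,
                             int chanCfg, size_t aacLength)
{
    size_t frameLength = aacLength + 7;               /* payload + header    */
    header[0] = 0xFF;                                 /* syncword 0xFFF...   */
    header[1] = 0xF1;                                 /* ...MPEG-4, no CRC   */
    header[2] = (uint8_t)((profile << 6) | (freqIdx << 2) | (chanCfg >> 2));
    header[3] = (uint8_t)(((chanCfg & 3) << 6) | (frameLength >> 11));
    header[4] = (uint8_t)((frameLength >> 3) & 0xFF);
    header[5] = (uint8_t)(((frameLength & 7) << 5) | 0x1F); /* fullness hi   */
    header[6] = 0xFC;                                 /* fullness lo, 1 frame */
}
```

For 44.1 kHz mono AAC LC with a 100-byte payload this yields the bytes FF F1 50 40 0D 7F FC.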

But I don't understand this code as well as I would like. In general, I don't understand audio... I pieced it together from blogs, forums, and the documentation over a long time, and now it works, but I don't know why, or how to change some of the parameters. So here are my questions:

  1. I need this converter while the hardware encoder is occupied (by an AVAssetWriter). That's why I create a software converter via AudioConverterNewSpecific() instead of AudioConverterNew(). But now setting outAudioStreamBasicDescription.mFormatID = kAudioFormatMPEG4AAC_HE; doesn't work: no AudioClassDescription can be found, even with mFormatFlags set to 0. What do I lose by using kAudioFormatMPEG4AAC (kMPEG4Object_AAC_SSR) instead of kAudioFormatMPEG4AAC_HE? And which should I use for a live stream, kMPEG4Object_AAC_SSR or kMPEG4Object_AAC_Main?

  2. How do I change the sample rate correctly? If I set outAudioStreamBasicDescription.mSampleRate to 22050 or 8000, for example, the audio plays back slowed down. I do set the sampling frequency index in the ADTS header to the same rate as outAudioStreamBasicDescription.mSampleRate.

  3. How do I change the bitrate? ffmpeg -i reports this for the generated AAC: Stream #0:0: Audio: aac, 44100 Hz, mono, fltp, 64 kb/s. How do I change it to, say, 16 kbps? The bitrate does drop as I lower the sample rate, but I believe that is not the only way - and as mentioned in 2, lowering the sample rate breaks playback.

  4. How do I calculate the size of the output buffer? Right now I set it with uint32_t bufferSize = inAaudioBufferList.mBuffers[0].mDataByteSize; because I believe the compressed format can't be larger than the uncompressed one... but isn't that unnecessarily large?

  5. How do I set ioOutputDataPacketSize properly? If I read the documentation correctly, I should set it to UInt32 ioOutputDataPacketSize = bufferSize / outAudioStreamBasicDescription.mBytesPerPacket; but mBytesPerPacket is 0. If I set it to 0, AudioConverterFillComplexBuffer() returns an error. If I set it to 1, it works, but I don't know why...

  6. inInputDataProc() has three "out" parameters. I only set ioData. Should I also set ioNumberDataPackets and outDataPacketDescription? Why, and how?
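On question 3: the AAC encoder's bitrate is normally set as a converter property rather than through the ASBD; in Audio Toolbox that property is kAudioConverterEncodeBitRate (a UInt32 in bits per second), set with AudioConverterSetProperty() right after creating the converter. The arithmetic tying bitrate to average packet size can be sketched in plain C (each AAC packet covers mFramesPerPacket = 1024 frames):

```c
#include <stdint.h>

/* Approximate average size in bytes of one AAC packet (1024 frames)
   at a given encoder bitrate and output sample rate. */
static uint32_t aac_avg_packet_bytes(uint32_t bitsPerSecond,
                                     uint32_t sampleRate)
{
    const uint32_t framesPerPacket = 1024;
    /* seconds per packet = framesPerPacket / sampleRate, so
       bytes per packet = bitrate * seconds / 8 */
    return (uint32_t)((uint64_t)bitsPerSecond * framesPerPacket
                      / sampleRate / 8);
}
```

On the Audio Toolbox side the call would be roughly AudioConverterSetProperty(audioConverter, kAudioConverterEncodeBitRate, sizeof(bitrate), &bitrate); note that not every bitrate is valid for a given sample rate and channel count, so check the returned OSStatus.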

Best answer

You may need to change the sample rate of the raw audio data with a resampling audio unit before feeding the audio into the AAC converter. Otherwise the AAC header and the audio data will not match.
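On questions 4 and 5 above: for a variable-packet-size output format, ioOutputDataPacketSize counts packets, not bytes - passing 1 asks the converter for at most one 1024-frame AAC packet per call, which is why it works. A tighter output buffer can be sized from the converter's kAudioConverterPropertyMaximumOutputPacketSize property (queried with AudioConverterGetProperty() after creating the converter); given that value, the sizing is plain division, sketched here:

```c
#include <stdint.h>

/* bufferSize: capacity of the output buffer in bytes.
   maxPacketSize: the converter's reported maximum output packet size
   (kAudioConverterPropertyMaximumOutputPacketSize).
   Returns the packet count to pass as ioOutputDataPacketSize. */
static uint32_t output_packet_capacity(uint32_t bufferSize,
                                       uint32_t maxPacketSize)
{
    if (maxPacketSize == 0) return 0;   /* guard: property not queried */
    return bufferSize / maxPacketSize;  /* whole packets that fit      */
}
```

Conversely, to receive N packets per call, allocate N * maxPacketSize bytes; that removes the need to assume the compressed output is never larger than the PCM input.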

A similar question can be found on Stack Overflow: https://stackoverflow.com/questions/19849509/
