I want to encode PCM (the CMSampleBufferRefs coming in live from an AVCaptureAudioDataOutputSampleBufferDelegate) into AAC.
When the first CMSampleBufferRef arrives, I set up both the (in/out) AudioStreamBasicDescriptions, the "out" one according to the documentation:
AudioStreamBasicDescription inAudioStreamBasicDescription = *CMAudioFormatDescriptionGetStreamBasicDescription((CMAudioFormatDescriptionRef)CMSampleBufferGetFormatDescription(sampleBuffer));
AudioStreamBasicDescription outAudioStreamBasicDescription = {0}; // Always initialize the fields of a new audio stream basic description structure to zero, as shown here: ...
outAudioStreamBasicDescription.mSampleRate = 44100; // The number of frames per second of the data in the stream, when the stream is played at normal speed. For compressed formats, this field indicates the number of frames per second of equivalent decompressed data. The mSampleRate field must be nonzero, except when this structure is used in a listing of supported formats (see “kAudioStreamAnyRate”).
outAudioStreamBasicDescription.mFormatID = kAudioFormatMPEG4AAC; // kAudioFormatMPEG4AAC_HE does not work. Can't find `AudioClassDescription`. `mFormatFlags` is set to 0.
outAudioStreamBasicDescription.mFormatFlags = kMPEG4Object_AAC_SSR; // Format-specific flags to specify details of the format. Set to 0 to indicate no format flags. See “Audio Data Format Identifiers” for the flags that apply to each format.
outAudioStreamBasicDescription.mBytesPerPacket = 0; // The number of bytes in a packet of audio data. To indicate variable packet size, set this field to 0. For a format that uses variable packet size, specify the size of each packet using an AudioStreamPacketDescription structure.
outAudioStreamBasicDescription.mFramesPerPacket = 1024; // The number of frames in a packet of audio data. For uncompressed audio, the value is 1. For variable bit-rate formats, the value is a larger fixed number, such as 1024 for AAC. For formats with a variable number of frames per packet, such as Ogg Vorbis, set this field to 0.
outAudioStreamBasicDescription.mBytesPerFrame = 0; // The number of bytes from the start of one frame to the start of the next frame in an audio buffer. Set this field to 0 for compressed formats. ...
outAudioStreamBasicDescription.mChannelsPerFrame = 1; // The number of channels in each frame of audio data. This value must be nonzero.
outAudioStreamBasicDescription.mBitsPerChannel = 0; // ... Set this field to 0 for compressed formats.
outAudioStreamBasicDescription.mReserved = 0; // Pads the structure out to force an even 8-byte alignment. Must be set to 0.
and the AudioConverterRef:
AudioClassDescription audioClassDescription;
memset(&audioClassDescription, 0, sizeof(audioClassDescription));
UInt32 size;
NSAssert(AudioFormatGetPropertyInfo(kAudioFormatProperty_Encoders, sizeof(outAudioStreamBasicDescription.mFormatID), &outAudioStreamBasicDescription.mFormatID, &size) == noErr, nil);
uint32_t count = size / sizeof(AudioClassDescription);
AudioClassDescription descriptions[count];
NSAssert(AudioFormatGetProperty(kAudioFormatProperty_Encoders, sizeof(outAudioStreamBasicDescription.mFormatID), &outAudioStreamBasicDescription.mFormatID, &size, descriptions) == noErr, nil);
for (uint32_t i = 0; i < count; i++) {
if ((outAudioStreamBasicDescription.mFormatID == descriptions[i].mSubType) && (kAppleSoftwareAudioCodecManufacturer == descriptions[i].mManufacturer)) {
memcpy(&audioClassDescription, &descriptions[i], sizeof(audioClassDescription));
}
}
NSAssert(audioClassDescription.mSubType == outAudioStreamBasicDescription.mFormatID && audioClassDescription.mManufacturer == kAppleSoftwareAudioCodecManufacturer, nil);
AudioConverterRef audioConverter = NULL;
NSAssert(AudioConverterNewSpecific(&inAudioStreamBasicDescription, &outAudioStreamBasicDescription, 1, &audioClassDescription, &audioConverter) == 0, nil);
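One property the setup above never touches is the encoder's output bitrate, which can be requested on the converter itself, independently of the sample rate, through kAudioConverterEncodeBitRate. This is a sketch, not part of the original code, and assumes the audioConverter created above:

```c
// Sketch: requesting an output bitrate from the converter created above.
// kAudioConverterEncodeBitRate takes bits per second (e.g. 16000 for 16 kbps).
// Not every sample-rate/bitrate combination is supported, and the converter
// may clamp the value, so reading the property back afterwards is a
// reasonable sanity check.
UInt32 outputBitRate = 64000; // bits per second
NSAssert(AudioConverterSetProperty(audioConverter,
                                   kAudioConverterEncodeBitRate,
                                   sizeof(outputBitRate),
                                   &outputBitRate) == noErr, nil);
```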
Then I convert each CMSampleBufferRef into raw AAC data:
AudioBufferList inAaudioBufferList;
CMBlockBufferRef blockBuffer;
NSAssert(CMSampleBufferGetAudioBufferListWithRetainedBlockBuffer(sampleBuffer, NULL, &inAaudioBufferList, sizeof(inAaudioBufferList), NULL, NULL, 0, &blockBuffer) == noErr, nil);
NSAssert(inAaudioBufferList.mNumberBuffers == 1, nil);
uint32_t bufferSize = inAaudioBufferList.mBuffers[0].mDataByteSize;
uint8_t *buffer = (uint8_t *)malloc(bufferSize);
memset(buffer, 0, bufferSize);
AudioBufferList outAudioBufferList;
outAudioBufferList.mNumberBuffers = 1;
outAudioBufferList.mBuffers[0].mNumberChannels = inAaudioBufferList.mBuffers[0].mNumberChannels;
outAudioBufferList.mBuffers[0].mDataByteSize = bufferSize;
outAudioBufferList.mBuffers[0].mData = buffer;
UInt32 ioOutputDataPacketSize = 1;
NSAssert(AudioConverterFillComplexBuffer(audioConverter, inInputDataProc, &inAaudioBufferList, &ioOutputDataPacketSize, &outAudioBufferList, NULL) == 0, nil);
NSData *data = [NSData dataWithBytes:outAudioBufferList.mBuffers[0].mData length:outAudioBufferList.mBuffers[0].mDataByteSize];
free(buffer);
CFRelease(blockBuffer);
The inInputDataProc() implementation:
OSStatus inInputDataProc(AudioConverterRef inAudioConverter, UInt32 *ioNumberDataPackets, AudioBufferList *ioData, AudioStreamPacketDescription **outDataPacketDescription, void *inUserData)
{
AudioBufferList audioBufferList = *(AudioBufferList *)inUserData;
ioData->mBuffers[0].mData = audioBufferList.mBuffers[0].mData;
ioData->mBuffers[0].mDataByteSize = audioBufferList.mBuffers[0].mDataByteSize;
return noErr;
}
Now data holds my raw AAC, which I wrap into an ADTS frame with a proper ADTS header; a sequence of these ADTS frames makes a playable AAC file.
But I don't understand this code as well as I would like to. Generally, I don't understand audio... I wrote it somehow by following blogs, forums and docs; it took a long time, and now it works, but I don't know why, or how to change some of the parameters. So here are my questions:
1. I need to use this converter while the HW encoder is occupied (by AVAssetWriter). That is why I create a SW converter via AudioConverterNewSpecific() and not AudioConverterNew(). But now setting outAudioStreamBasicDescription.mFormatID = kAudioFormatMPEG4AAC_HE; doesn't work: no AudioClassDescription is found, even with mFormatFlags set to 0. What am I losing by using kAudioFormatMPEG4AAC (kMPEG4Object_AAC_SSR) over kAudioFormatMPEG4AAC_HE? Which should I use for a live stream, kMPEG4Object_AAC_SSR or kMPEG4Object_AAC_Main?
2. How do I change the sample rate properly? If I set outAudioStreamBasicDescription.mSampleRate to 22050 or 8000, for example, the audio playback is slowed down. I set the sampling frequency index in the ADTS header to the same frequency as outAudioStreamBasicDescription.mSampleRate.
3. How do I change the bitrate? ffmpeg -i shows this info for the produced AAC: Stream #0:0: Audio: aac, 44100 Hz, mono, fltp, 64 kb/s. How can I change it to 16 kbps, for example? The bitrate decreases as I lower the frequency, but I believe that is not the only way? And lowering the frequency damages playback anyway, as I mentioned in 2.
4. How do I calculate the size of buffer? Right now I set it to uint32_t bufferSize = inAaudioBufferList.mBuffers[0].mDataByteSize; because I believe the compressed output won't be larger than the uncompressed input... but isn't that unnecessarily large?
5. How do I set ioOutputDataPacketSize properly? If I read the documentation right, I should set it to UInt32 ioOutputDataPacketSize = bufferSize / outAudioStreamBasicDescription.mBytesPerPacket; but mBytesPerPacket is 0. If I set it to 0, AudioConverterFillComplexBuffer() returns an error. If I set it to 1, it works, but I don't know why...
6. In inInputDataProc() there are 3 "out" parameters. I set only ioData. Should I also set ioNumberDataPackets and outDataPacketDescription? Why, and how?
Best answer
You may need to use a resampling audio unit to change the sample rate of the raw audio data before feeding it to the AAC converter. Otherwise the ADTS header and the audio data will not match.
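Along the same lines, a common source of this header/data mismatch is an ADTS sampling-frequency index that disagrees with the converter's mSampleRate. This helper (hypothetical, not from the question) maps a sample rate to its ADTS index:

```c
// Maps a sample rate in Hz to the ADTS sampling_frequency_index used in the
// ADTS header; returns -1 for rates ADTS cannot represent. The table order
// is fixed by the ADTS specification.
static int adts_freq_index(double sample_rate) {
    static const int rates[] = {96000, 88200, 64000, 48000, 44100, 32000,
                                24000, 22050, 16000, 12000, 11025, 8000, 7350};
    for (int i = 0; i < 13; i++) {
        if ((int)sample_rate == rates[i]) {
            return i;
        }
    }
    return -1; // not representable in an ADTS header
}
```

So with outAudioStreamBasicDescription.mSampleRate = 22050, the ADTS header must carry index 7, not the index for 44100 Hz.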
Regarding "ios - Encode PCM (CMSampleBufferRef) to AAC on iOS - how to set frequency and bitrate?", a similar question can be found on Stack Overflow: https://stackoverflow.com/questions/19849509/