objective-c - How to configure the frame size with AudioUnit.framework on iOS

Tags: objective-c core-audio audiounit

I have an audio application in which I need to capture microphone samples and encode them to MP3 with ffmpeg.

First, the audio is configured:

/**
     * We need to specify the format we want to work with.
     * We use linear PCM because it is uncompressed and we work on raw data.
     *
     * We want 16-bit samples, i.e. 2 bytes (one SInt16) per packet/frame, at 8 kHz.
     */
    AudioStreamBasicDescription audioFormat;
    audioFormat.mSampleRate         = SAMPLE_RATE;
    audioFormat.mFormatID           = kAudioFormatLinearPCM;
    audioFormat.mFormatFlags        = kAudioFormatFlagIsPacked | kAudioFormatFlagIsSignedInteger;
    audioFormat.mFramesPerPacket    = 1;
    audioFormat.mChannelsPerFrame   = 1;
    audioFormat.mBitsPerChannel     = 8*sizeof(SInt16); // bits per channel, independent of channel count
    audioFormat.mBytesPerPacket     = audioFormat.mChannelsPerFrame*sizeof(SInt16);
    audioFormat.mBytesPerFrame      = audioFormat.mChannelsPerFrame*sizeof(SInt16);
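
For reference, an ASBD like this is applied with the kAudioUnitProperty_StreamFormat property. The sketch below is an assumption, not code from the original post: it presumes an already-created remote I/O unit in a variable named audioUnit, and sets the format on the output scope of input bus 1, which is where the unit hands captured audio to the app:

    // Sketch: apply the ASBD above to the remote I/O unit's input bus.
    // `audioUnit` is an assumed variable; error handling is abbreviated.
    UInt32 kInputBus = 1;
    OSStatus status = AudioUnitSetProperty(audioUnit,
                                           kAudioUnitProperty_StreamFormat,
                                           kAudioUnitScope_Output,
                                           kInputBus,
                                           &audioFormat,
                                           sizeof(audioFormat));
    NSAssert(status == noErr, @"Failed to set the stream format");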

The recording callback is:

static OSStatus recordingCallback(void *inRefCon, 
                                  AudioUnitRenderActionFlags *ioActionFlags, 
                                  const AudioTimeStamp *inTimeStamp, 
                                  UInt32 inBusNumber, 
                                  UInt32 inNumberFrames, 
                                  AudioBufferList *ioData) 
{
    NSLog(@"Log record: %lu", inBusNumber);
    NSLog(@"Log record: %lu", inNumberFrames);
    NSLog(@"Log record: %lu", (UInt32)inTimeStamp);

    // the buffer that the captured samples will be rendered into
    AudioBuffer buffer;

    // holds the result status of Core Audio calls
    OSStatus status;

    /**
     This is a reference to the object that owns the callback.
     */
    AudioProcessor *audioProcessor = (__bridge AudioProcessor*) inRefCon;

    /**
     Here we define the number of channels, which is mono for the iPhone.
     The number of frames per slice is usually 512 or 1024.
     */
    buffer.mDataByteSize = inNumberFrames * sizeof(SInt16); // buffer size in bytes
    buffer.mNumberChannels = 1; // mono
    buffer.mData = malloc( inNumberFrames * sizeof(SInt16) ); // sample storage

    // we put our buffer into a bufferlist array for rendering
    AudioBufferList bufferList;
    bufferList.mNumberBuffers = 1;
    bufferList.mBuffers[0] = buffer;

    // render input and check for error
    status = AudioUnitRender([audioProcessor audioUnit], ioActionFlags, inTimeStamp, inBusNumber, inNumberFrames, &bufferList);
    [audioProcessor hasError:status:__FILE__:__LINE__];

    // process the bufferlist in the audio processor
    [audioProcessor processBuffer:&bufferList];

    // clean up the buffer
    free(bufferList.mBuffers[0].mData);

    //NSLog(@"RECORD");
    return noErr;
}
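
For context, a callback with this signature is typically attached to the remote I/O unit as an input callback. This sketch is an assumption, not code from the original post; it presumes it runs inside an AudioProcessor method where self is the AudioProcessor and audioUnit is its remote I/O unit:

    // Sketch: register recordingCallback as the input callback of the
    // remote I/O unit. Names are assumptions; error handling abbreviated.
    AURenderCallbackStruct callbackStruct;
    callbackStruct.inputProc       = recordingCallback;
    callbackStruct.inputProcRefCon = (__bridge void *)self; // the AudioProcessor
    OSStatus status = AudioUnitSetProperty(audioUnit,
                                           kAudioOutputUnitProperty_SetInputCallback,
                                           kAudioUnitScope_Global,
                                           1, // input bus
                                           &callbackStruct,
                                           sizeof(callbackStruct));
    NSAssert(status == noErr, @"Failed to set the input callback");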

The data I get is:

inBusNumber = 1

inNumberFrames = 1024

inTimeStamp = 80444304 // inTimeStamp is always the same, which is strange

However, the frame size I need for MP3 encoding is 1152. How can I configure that?

If I buffer, that means latency, which I want to avoid because this is a real-time application. With the current configuration, each buffer ends with garbage trailing samples: 1152 - 1024 = 128 bad samples. All samples are SInt16.

Best Answer

You can configure the number of frames per slice the AudioUnit uses with the kAudioUnitProperty_MaximumFramesPerSlice property. However, I think the best solution in your case is to buffer the incoming audio in a ring buffer and then signal your encoder whenever audio is available. Since you are transcoding to MP3, where a single frame at 8 kHz already spans 1152 / 8000 ≈ 144 ms of audio, I am not sure what real-time means in this context.
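
For completeness, the property mentioned above is set the same way as any other: AudioUnitSetProperty(audioUnit, kAudioUnitProperty_MaximumFramesPerSlice, kAudioUnitScope_Global, 0, &maxFrames, sizeof(maxFrames)). Note that it sets an upper bound on the slice size; it does not force the hardware to deliver exactly 1152 frames per callback.

The buffering suggestion can be sketched as a simple sample FIFO. The names SampleFIFO, FIFOPush, FIFOPopFrame, and MP3_FRAME_SIZE are illustrative, not from the answer: the recording callback pushes each 1024-frame slice in, and the encoder pops exactly 1152-sample blocks whenever enough data has accumulated, so no garbage trailing samples ever reach the encoder.

    #include <string.h>
    #include <stdbool.h>
    #include <MacTypes.h>                // SInt16

    #define MP3_FRAME_SIZE 1152          // samples per MP3 frame (mono)
    #define FIFO_CAPACITY  (4 * 1152)    // headroom for one slice plus one frame

    typedef struct {
        SInt16 data[FIFO_CAPACITY];
        size_t count;                    // samples currently buffered
    } SampleFIFO;

    // Called from the recording callback with each rendered slice
    // (e.g. 1024 samples at a time).
    static void FIFOPush(SampleFIFO *fifo, const SInt16 *samples, size_t n) {
        if (fifo->count + n > FIFO_CAPACITY) return; // overflow: drop (sketch only)
        memcpy(fifo->data + fifo->count, samples, n * sizeof(SInt16));
        fifo->count += n;
    }

    // Called by the encoder; returns true when a full 1152-sample
    // block has been copied into `out`.
    static bool FIFOPopFrame(SampleFIFO *fifo, SInt16 out[MP3_FRAME_SIZE]) {
        if (fifo->count < MP3_FRAME_SIZE) return false;
        memcpy(out, fifo->data, MP3_FRAME_SIZE * sizeof(SInt16));
        memmove(fifo->data, fifo->data + MP3_FRAME_SIZE,
                (fifo->count - MP3_FRAME_SIZE) * sizeof(SInt16));
        fifo->count -= MP3_FRAME_SIZE;
        return true;
    }

Keep in mind that the render callback runs on a real-time thread, so a production version would need a thread-safe (ideally lock-free) buffer rather than this single-threaded sketch; TPCircularBuffer is a commonly used option on iOS.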

A similar question about "objective-c - How to configure the frame size with AudioUnit.framework on iOS" can be found on Stack Overflow: https://stackoverflow.com/questions/12953157/
