iphone - iOS: Audio Unit RemoteIO not working on iPhone

Tags: iphone ios audio microphone audiounit

I am trying to build my own custom sound-effect audio unit driven by microphone input. The app takes input from the microphone and plays it out the speaker simultaneously. In the simulator I can apply the effect and everything works, but when I test on an actual iPhone I hear nothing. I'll paste my code in case anyone can help:

  // RemoteIO bus (element) numbers: 0 = output to speaker, 1 = input from microphone
  #define kOutputBus 0
  #define kInputBus  1

  - (id) init{
    self = [super init];
    if (!self) return nil;

    OSStatus status;

    // Describe audio component
    AudioComponentDescription desc;
    desc.componentType = kAudioUnitType_Output;
    desc.componentSubType = kAudioUnitSubType_RemoteIO;
    desc.componentFlags = 0;
    desc.componentFlagsMask = 0;
    desc.componentManufacturer = kAudioUnitManufacturer_Apple;

    // Get component
    AudioComponent inputComponent = AudioComponentFindNext(NULL, &desc);

    // Get audio units
    status = AudioComponentInstanceNew(inputComponent, &audioUnit);
    checkStatus(status);

    // Enable IO for recording
    UInt32 flag = 1;
    status = AudioUnitSetProperty(audioUnit,
                                  kAudioOutputUnitProperty_EnableIO,
                                  kAudioUnitScope_Input,
                                  kInputBus,
                                  &flag,
                                  sizeof(flag));
    checkStatus(status);

    // Enable IO for playback
    status = AudioUnitSetProperty(audioUnit,
                                  kAudioOutputUnitProperty_EnableIO,
                                  kAudioUnitScope_Output,
                                  kOutputBus,
                                  &flag,
                                  sizeof(flag));
    checkStatus(status);

    // Describe format
    AudioStreamBasicDescription audioFormat;
    audioFormat.mSampleRate         = 44100.00;
    audioFormat.mFormatID           = kAudioFormatLinearPCM;
    audioFormat.mFormatFlags        = kAudioFormatFlagIsSignedInteger | kAudioFormatFlagIsPacked;
    audioFormat.mFramesPerPacket    = 1;
    audioFormat.mChannelsPerFrame   = 1;
    audioFormat.mBitsPerChannel     = 16;
    audioFormat.mBytesPerPacket     = 2;
    audioFormat.mBytesPerFrame      = 2;


    // Apply format
    status = AudioUnitSetProperty(audioUnit,
                                  kAudioUnitProperty_StreamFormat,
                                  kAudioUnitScope_Output,
                                  kInputBus,
                                  &audioFormat,
                                  sizeof(audioFormat));
    checkStatus(status);
    status = AudioUnitSetProperty(audioUnit,
                                  kAudioUnitProperty_StreamFormat,
                                  kAudioUnitScope_Input,
                                  kOutputBus,
                                  &audioFormat,
                                  sizeof(audioFormat));
    checkStatus(status);


    // Set input callback
    AURenderCallbackStruct callbackStruct;
    callbackStruct.inputProc = recordingCallback;
    callbackStruct.inputProcRefCon = self;
    status = AudioUnitSetProperty(audioUnit,
                                  kAudioOutputUnitProperty_SetInputCallback,
                                  kAudioUnitScope_Global,
                                  kInputBus,
                                  &callbackStruct,
                                  sizeof(callbackStruct));
    checkStatus(status);

    // Set output callback
    callbackStruct.inputProc = playbackCallback;
    callbackStruct.inputProcRefCon = self;
    status = AudioUnitSetProperty(audioUnit,
                                  kAudioUnitProperty_SetRenderCallback,
                                  kAudioUnitScope_Global,
                                  kOutputBus,
                                  &callbackStruct,
                                  sizeof(callbackStruct));
    checkStatus(status);


    // Allocate our own buffer (1 channel, 16 bits per sample, thus 2 bytes per frame).
    // In practice the render buffers hold 512 frames; if that changes it is handled in processAudio.
    tempBuffer.mNumberChannels = 1;
    tempBuffer.mDataByteSize = 512 * 2;
    tempBuffer.mData = malloc( 512 * 2 );

    // Initialise
    status = AudioUnitInitialize(audioUnit);
    checkStatus(status);

    return self;
}

This callback is invoked when new audio data from the microphone is available. But when I test on the iPhone, execution never reaches it:

static OSStatus recordingCallback(void *inRefCon,
                                  AudioUnitRenderActionFlags *ioActionFlags,
                                  const AudioTimeStamp *inTimeStamp,
                                  UInt32 inBusNumber,
                                  UInt32 inNumberFrames,
                                  AudioBufferList *ioData) {
    AudioBuffer buffer;

    // NB: allocating inside a render callback is discouraged (this runs on a
    // real-time thread); a preallocated buffer would be safer.
    buffer.mNumberChannels = 1;
    buffer.mDataByteSize = inNumberFrames * 2; // 16-bit mono
    buffer.mData = malloc( inNumberFrames * 2 );

    // Put buffer in a AudioBufferList
    AudioBufferList bufferList;
    bufferList.mNumberBuffers = 1;
    bufferList.mBuffers[0] = buffer;

    // Then:
    // Obtain recorded samples

    OSStatus status;

    // iosAudio is the global instance of this class; pull the new
    // microphone samples into our own bufferList.
    status = AudioUnitRender([iosAudio audioUnit],
                             ioActionFlags,
                             inTimeStamp,
                             inBusNumber,
                             inNumberFrames,
                             &bufferList);
    checkStatus(status);

    // Now, we have the samples we just read sitting in buffers in bufferList
    // Process the new data
    [iosAudio processAudio:&bufferList];

    // release the malloc'ed data in the buffer we created earlier
    free(bufferList.mBuffers[0].mData);

    return noErr;
}

Best answer

I solved my problem: I just needed to initialize the AudioSession before playing/recording. I did that with the following code:

OSStatus status;

AudioSessionInitialize(NULL, NULL, NULL, self);
UInt32 sessionCategory = kAudioSessionCategory_PlayAndRecord;
status = AudioSessionSetProperty (kAudioSessionProperty_AudioCategory,
                               sizeof (sessionCategory),
                               &sessionCategory);

if (status != kAudioSessionNoError)
{
    if (status == kAudioServicesUnsupportedPropertyError) {
        NSLog(@"AudioSessionSetProperty failed: unsupportedPropertyError");
    } else if (status == kAudioServicesBadPropertySizeError) {
        NSLog(@"AudioSessionSetProperty failed: badPropertySizeError");
    } else if (status == kAudioServicesBadSpecifierSizeError) {
        NSLog(@"AudioSessionSetProperty failed: badSpecifierSizeError");
    } else if (status == kAudioServicesSystemSoundUnspecifiedError) {
        NSLog(@"AudioSessionSetProperty failed: systemSoundUnspecifiedError");
    } else if (status == kAudioServicesSystemSoundClientTimedOutError) {
        NSLog(@"AudioSessionSetProperty failed: systemSoundClientTimedOutError");
    } else {
        NSLog(@"AudioSessionSetProperty failed! %ld", (long)status);
    }
}


AudioSessionSetActive(TRUE);

...
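Note: the C-based AudioSession API used above was deprecated in iOS 7. On current SDKs the same setup is done through `AVAudioSession`; a minimal sketch of the equivalent (Objective-C, runs on device only, error handling kept deliberately simple):

```objc
#import <AVFoundation/AVFoundation.h>

NSError *error = nil;
AVAudioSession *session = [AVAudioSession sharedInstance];

// PlayAndRecord enables simultaneous microphone input and speaker output,
// matching kAudioSessionCategory_PlayAndRecord above.
if (![session setCategory:AVAudioSessionCategoryPlayAndRecord error:&error]) {
    NSLog(@"setCategory failed: %@", error);
}

// Equivalent of AudioSessionSetActive(TRUE).
if (![session setActive:YES error:&error]) {
    NSLog(@"setActive failed: %@", error);
}
```

Either way, the key point is the same: the session category must be set and the session activated before the RemoteIO unit will deliver microphone data on a device.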

A similar question about "iphone - iOS: Audio Unit RemoteIO not working on iPhone" can be found on Stack Overflow: https://stackoverflow.com/questions/13086363/
