ios - How to route iPhone audio to a Bluetooth speaker

Tags: ios iphone bluetooth

I am trying to use audioRouteOverride to take input from the mic and send the output to my Bluetooth device (a ti Speaker), but with no luck: the mic audio is still played through the iPhone's built-in speaker. I expected kAudioSessionOutputRoute_BluetoothA2DP to be the key here, but it does not work as expected.

- (id) init {
    self = [super init];

    OSStatus status;

    // Describe audio component
    AudioComponentDescription desc;
    desc.componentType = kAudioUnitType_Output;
    desc.componentSubType = kAudioUnitSubType_RemoteIO;
    desc.componentFlags = 0;
    desc.componentFlagsMask = 0;
    desc.componentManufacturer = kAudioUnitManufacturer_Apple;

    // Get component
    AudioComponent inputComponent = AudioComponentFindNext(NULL, &desc);

    // Get audio units
    status = AudioComponentInstanceNew(inputComponent, &audioUnit);
    checkStatus(status);

    // Enable IO for recording
    UInt32 flag = 1;
    status = AudioUnitSetProperty(audioUnit, 
                              kAudioOutputUnitProperty_EnableIO, 
                              kAudioUnitScope_Input, 
                              kInputBus,
                              &flag, 
                              sizeof(flag));
    checkStatus(status);

    // Enable IO for playback
    status = AudioUnitSetProperty(audioUnit, 
                              kAudioOutputUnitProperty_EnableIO, 
                              kAudioUnitScope_Output, 
                              kOutputBus,
                              &flag, 
                              sizeof(flag));
    checkStatus(status);

    // Describe format
    AudioStreamBasicDescription audioFormat;
    audioFormat.mSampleRate         = 44100.00;
    audioFormat.mFormatID           = kAudioFormatLinearPCM;
    audioFormat.mFormatFlags        = kAudioFormatFlagIsSignedInteger | kAudioFormatFlagIsPacked;
    audioFormat.mFramesPerPacket    = 1;
    audioFormat.mChannelsPerFrame   = 1;
    audioFormat.mBitsPerChannel     = 16;
    audioFormat.mBytesPerPacket     = 2;
    audioFormat.mBytesPerFrame      = 2;

    // Apply format
    status = AudioUnitSetProperty(audioUnit, 
                              kAudioUnitProperty_StreamFormat, 
                              kAudioUnitScope_Output, 
                              kInputBus, 
                              &audioFormat, 
                              sizeof(audioFormat));
    checkStatus(status);
    status = AudioUnitSetProperty(audioUnit, 
                              kAudioUnitProperty_StreamFormat, 
                              kAudioUnitScope_Input, 
                              kOutputBus, 
                              &audioFormat, 
                              sizeof(audioFormat));
    checkStatus(status);


    // Set input callback
    AURenderCallbackStruct callbackStruct;
    callbackStruct.inputProc = recordingCallback;
    callbackStruct.inputProcRefCon = self;
    status = AudioUnitSetProperty(audioUnit, 
                              kAudioOutputUnitProperty_SetInputCallback, 
                              kAudioUnitScope_Global, 
                              kInputBus, 
                              &callbackStruct, 
                              sizeof(callbackStruct));
    checkStatus(status);

    // Set output callback
    callbackStruct.inputProc = playbackCallback;
    callbackStruct.inputProcRefCon = self;
    status = AudioUnitSetProperty(audioUnit, 
                              kAudioUnitProperty_SetRenderCallback, 
                              kAudioUnitScope_Global, 
                              kOutputBus,
                              &callbackStruct, 
                              sizeof(callbackStruct));
    checkStatus(status);

    // Disable buffer allocation for the recorder (optional - do this if we want to pass in our own)
    flag = 0;
    status = AudioUnitSetProperty(audioUnit, 
                              kAudioUnitProperty_ShouldAllocateBuffer,
                              kAudioUnitScope_Output, 
                              kInputBus,
                              &flag, 
                              sizeof(flag));

    NSLog(@"%ld",(long)status);
    // Allocate our own buffers (1 channel, 16 bits per sample, thus 16 bits per frame, thus 2 bytes per frame).
    // In practice the buffers contain 512 frames; if this changes it is handled in processAudio.
    tempBuffer.mNumberChannels = 1;
    tempBuffer.mDataByteSize = 512 * 2;
    tempBuffer.mData = malloc( 512 * 2 );

    UInt32 audioCategory = kAudioSessionCategory_PlayAndRecord;
    status = AudioSessionSetProperty(kAudioSessionProperty_AudioCategory, sizeof(audioCategory), &audioCategory);
    NSLog(@"%ld",(long)status);


    UInt32 allowMixing = true;
    status = AudioSessionSetProperty(kAudioSessionProperty_OverrideCategoryMixWithOthers, sizeof(allowMixing), &allowMixing);

    UInt32 audioRouteOverride = kAudioSessionOutputRoute_BluetoothA2DP; //kAudioSessionOverrideAudioRoute_Speaker;
    AudioSessionSetProperty (kAudioSessionProperty_OverrideAudioRoute,sizeof (audioRouteOverride),&audioRouteOverride);

    status = AudioUnitInitialize(audioUnit);
    checkStatus(status);

    return self;
}
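
Note that the snippet above calls AudioSessionSetProperty without ever initializing or activating the audio session; with the (now deprecated) C audio-session API that setup is generally required before category and route properties take effect. Also, per the AudioSession.h documentation, kAudioSessionProperty_OverrideAudioRoute only accepts kAudioSessionOverrideAudioRoute_None and kAudioSessionOverrideAudioRoute_Speaker, which is likely why passing kAudioSessionOutputRoute_BluetoothA2DP has no effect. A minimal sketch of the session setup, assuming it runs before -init (the helper name PrepareAudioSession is only illustrative):

#import <Foundation/Foundation.h>
#import <AudioToolbox/AudioToolbox.h>

static void PrepareAudioSession(void)
{
    // NULL run loop / interruption listener uses the defaults.
    OSStatus status = AudioSessionInitialize(NULL, NULL, NULL, NULL);
    if (status != noErr && status != kAudioSessionAlreadyInitialized) {
        NSLog(@"AudioSessionInitialize failed: %ld", (long)status);
    }

    // The session must be active for category and route changes to take effect.
    status = AudioSessionSetActive(true);
    if (status != noErr) {
        NSLog(@"AudioSessionSetActive failed: %ld", (long)status);
    }
}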

Best answer

Try this code. It also works on iOS 7 and 8 (tested).

+ (void) startBTAudio
{
    UInt32 allowBluetoothInput = 1;
    AudioSessionSetProperty (kAudioSessionProperty_OverrideCategoryEnableBluetoothInput, sizeof(allowBluetoothInput), &allowBluetoothInput);  
}


+ (void) stopBTAudio
{
    UInt32 allowBluetoothInput = 0;
    AudioSessionSetProperty (kAudioSessionProperty_OverrideCategoryEnableBluetoothInput, sizeof(allowBluetoothInput), &allowBluetoothInput);
}
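
For newer projects, the C audio-session calls above are deprecated (since iOS 7), and a roughly equivalent setup can be done with AVAudioSession. The sketch below is only illustrative (the method name enableBluetoothAudio is made up here, not part of the original answer):

#import <AVFoundation/AVFoundation.h>

+ (void) enableBluetoothAudio
{
    NSError *error = nil;
    AVAudioSession *session = [AVAudioSession sharedInstance];

    // PlayAndRecord because the RemoteIO unit both records and plays back;
    // AllowBluetooth makes paired HFP Bluetooth devices selectable routes.
    BOOL ok = [session setCategory:AVAudioSessionCategoryPlayAndRecord
                       withOptions:AVAudioSessionCategoryOptionAllowBluetooth
                             error:&error];
    if (!ok) {
        NSLog(@"setCategory failed: %@", error);
        return;
    }

    // Activate the session so the category and route options take effect.
    if (![session setActive:YES error:&error]) {
        NSLog(@"setActive failed: %@", error);
    }
}

On iOS 10 and later there is also AVAudioSessionCategoryOptionAllowBluetoothA2DP, which allows higher-quality A2DP output rather than the HFP route used for Bluetooth input.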

A similar question on Stack Overflow: https://stackoverflow.com/questions/23787852/
