I'm porting an audio library to iOS that plays audio streams supplied via callback. The user provides a callback that returns raw PCM data, and I need to play that data. In addition, the library must be able to play multiple streams simultaneously.
I think I need AVFoundation, but AVAudioPlayer doesn't appear to support streamed audio buffers, and all the streaming documentation I can find deals with data coming straight from the network. Which API should I be using here?
Thanks in advance!
By the way, I'm not using the Apple libraries through Swift or Objective-C. However, I assume everything is still exposed, so a Swift example would be appreciated in any case!
最佳答案
You need to initialize:
The audio session, with input and output audio units:
-(SInt32) audioSessionInitialization:(SInt32)preferred_sample_rate {
    // - - - - - - Audio Session initialization
    NSError *audioSessionError = nil;
    session = [AVAudioSession sharedInstance];

    // disable AVAudioSession
    [session setActive:NO error:&audioSessionError];

    // set category (PlayAndRecord, to use both input and output session AudioUnits)
    [session setCategory:AVAudioSessionCategoryPlayAndRecord
             withOptions:AVAudioSessionCategoryOptionDefaultToSpeaker
                   error:&audioSessionError];

    double preferredSampleRate = preferred_sample_rate; // e.g. 44100
    [session setPreferredSampleRate:preferredSampleRate error:&audioSessionError];

    // enable AVAudioSession
    [session setActive:YES error:&audioSessionError];

    // Configure notification for device output change (speakers/headphones)
    [[NSNotificationCenter defaultCenter] addObserver:self
                                             selector:@selector(routeChange:)
                                                 name:AVAudioSessionRouteChangeNotification
                                               object:nil];

    // - - - - - - Create audio engine
    [self audioEngineInitialization];

    return [session sampleRate];
}
The audio engine:
-(void) audioEngineInitialization {
    engine = [[AVAudioEngine alloc] init];
    inputNode = [engine inputNode];
    outputNode = [engine outputNode];
    [engine connect:inputNode to:outputNode format:[inputNode inputFormatForBus:0]];

    // 16-bit interleaved stereo PCM at the session sample rate
    AudioStreamBasicDescription asbd_player;
    asbd_player.mSampleRate       = session.sampleRate;
    asbd_player.mFormatID         = kAudioFormatLinearPCM;
    asbd_player.mFormatFlags      = kAudioFormatFlagIsSignedInteger | kAudioFormatFlagIsPacked;
    asbd_player.mFramesPerPacket  = 1;
    asbd_player.mChannelsPerFrame = 2;
    asbd_player.mBitsPerChannel   = 16;
    asbd_player.mBytesPerPacket   = 4;
    asbd_player.mBytesPerFrame    = 4;

    OSStatus status;
    status = AudioUnitSetProperty(inputNode.audioUnit,
                                  kAudioUnitProperty_StreamFormat,
                                  kAudioUnitScope_Input,
                                  0,
                                  &asbd_player,
                                  sizeof(asbd_player));

    // Add the render callback for the ioUnit: for playing
    AURenderCallbackStruct callbackStruct;
    callbackStruct.inputProc = engineInputCallback; ///CALLBACK///
    callbackStruct.inputProcRefCon = (__bridge void *)(self);
    status = AudioUnitSetProperty(inputNode.audioUnit,
                                  kAudioUnitProperty_SetRenderCallback,
                                  kAudioUnitScope_Input, // Global
                                  kOutputBus,            // output element, typically 0
                                  &callbackStruct,
                                  sizeof(callbackStruct));
    [engine prepare];
}
The audio engine callback:
static OSStatus engineInputCallback(void *inRefCon,
                                    AudioUnitRenderActionFlags *ioActionFlags,
                                    const AudioTimeStamp *inTimeStamp,
                                    UInt32 inBusNumber,
                                    UInt32 inNumberFrames,
                                    AudioBufferList *ioData)
{
    // the reference to the audio controller where you get the stream data
    MyAudioController *ac = (__bridge MyAudioController *)(inRefCon);

    // with interleaved PCM there is typically only one buffer in the list
    for (UInt32 i = 0; i < ioData->mNumberBuffers; i++) {
        AudioBuffer buffer = ioData->mBuffers[i];

        // copy stream buffer data to the output buffer
        UInt32 size = min(buffer.mDataByteSize, ac.streamBuffer.mDataByteSize);
        memcpy(buffer.mData, ac.streamBuffer.mData, size);
        buffer.mDataByteSize = size; // indicate how much data we wrote in the buffer
    }
    return noErr;
}
Regarding ios - playing audio on iOS from an in-memory data stream, we found a similar question on Stack Overflow: https://stackoverflow.com/questions/50996398/