I'm developing an app that records speech through the built-in microphone and sends it to a server in real time, so I need to get the byte stream from the microphone while recording.
After spending some time on Google and Stack Overflow, I think I understand how it should work, but it doesn't. I figured Audio Queues might be the way to go.
Here is what I have tried so far:
func test() {
    func callback(_ a: UnsafeMutableRawPointer?, _ b: AudioQueueRef, _ c: AudioQueueBufferRef, _ d: UnsafePointer<AudioTimeStamp>, _ e: UInt32, _ f: UnsafePointer<AudioStreamPacketDescription>?) {
        print("test")
    }

    var inputQueue: AudioQueueRef? = nil

    var aqData = AQRecorderState(
        mDataFormat: AudioStreamBasicDescription(
            mSampleRate: 16000,
            mFormatID: kAudioFormatLinearPCM,
            mFormatFlags: 0,
            mBytesPerPacket: 2,
            mFramesPerPacket: 1,    // Must be set to 1 for uncompressed formats
            mBytesPerFrame: 2,
            mChannelsPerFrame: 1,   // Mono recording
            mBitsPerChannel: 2 * 8, // 2 bytes
            mReserved: 0),          // Must be set to 0 according to https://developer.apple.com/reference/coreaudio/audiostreambasicdescription
        mQueue: inputQueue!,
        mBuffers: [AudioQueueBufferRef](),
        bufferByteSize: 32,
        mCurrentPacket: 0,
        mIsRunning: true)

    var error = AudioQueueNewInput(&aqData.mDataFormat,
                                   callback,
                                   nil,
                                   nil,
                                   nil,
                                   0,
                                   &inputQueue)

    AudioQueueStart(inputQueue!, nil)
}
It compiles and the app launches, but as soon as I call test() I get an exception:
fatal error: unexpectedly found nil while unwrapping an Optional value
The exception is caused by
mQueue: inputQueue!
I understand why this happens (inputQueue has no value), but I don't know how to initialize inputQueue correctly. The problem is that Audio Queues are very sparsely documented for Swift users, and I haven't found a single working example on the internet.
Can anyone tell me what I'm doing wrong?
Best Answer
Use AudioQueueNewInput(...) (or the output equivalent) to initialize your audio queue before using it:
let sampleRate = 16000
let numChannels = 2
var inFormat = AudioStreamBasicDescription(
    mSampleRate: Double(sampleRate),
    mFormatID: kAudioFormatLinearPCM,
    mFormatFlags: kAudioFormatFlagsNativeFloatPacked,
    mBytesPerPacket: UInt32(numChannels * MemoryLayout<Float32>.size),
    mFramesPerPacket: 1,
    mBytesPerFrame: UInt32(numChannels * MemoryLayout<Float32>.size),
    mChannelsPerFrame: UInt32(numChannels),
    mBitsPerChannel: UInt32(8 * MemoryLayout<Float32>.size), // Float32 samples, matching the native-float format flag
    mReserved: UInt32(0))
var inQueue: AudioQueueRef? = nil
AudioQueueNewInput(&inFormat, callback, nil, nil, nil, 0, &inQueue)

var aqData = AQRecorderState(
    mDataFormat: inFormat,
    mQueue: inQueue!, // inQueue is initialized now and can be unwrapped
    mBuffers: [AudioQueueBufferRef](),
    bufferByteSize: 32,
    mCurrentPacket: 0,
    mIsRunning: true)
Look up the details in Apple's documentation.
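As a side note, for linear PCM the byte-count fields of the AudioStreamBasicDescription must stay consistent with one another: mBytesPerFrame is channels × bytes per sample, and mBytesPerPacket equals mBytesPerFrame when mFramesPerPacket is 1. A minimal sketch of that arithmetic in plain Swift (the LPCMLayout type is illustrative, not part of any Apple API; no AudioToolbox import is needed for it):

```swift
// Sketch: derive the interdependent byte-count fields of an LPCM
// AudioStreamBasicDescription from the primary parameters.
struct LPCMLayout {
    let sampleRate: Double
    let channels: Int
    let bytesPerSample: Int   // e.g. 4 for Float32, 2 for Int16

    // One frame holds one sample per channel (interleaved).
    var bytesPerFrame: Int { channels * bytesPerSample }
    // For uncompressed audio, one packet is one frame.
    var bytesPerPacket: Int { bytesPerFrame }
    var bitsPerChannel: Int { bytesPerSample * 8 }

    // Buffer size needed to hold `seconds` of audio.
    func bufferByteSize(seconds: Double) -> Int {
        Int(sampleRate * seconds) * bytesPerFrame
    }
}

// The questioner's format: 16 kHz mono, 16-bit integer samples.
let mono16k = LPCMLayout(sampleRate: 16000, channels: 1, bytesPerSample: 2)
print(mono16k.bytesPerFrame)                 // 2
print(mono16k.bitsPerChannel)                // 16
print(mono16k.bufferByteSize(seconds: 0.5))  // 16000

// The answer's format: 16 kHz stereo, Float32 samples.
let stereoFloat = LPCMLayout(sampleRate: 16000, channels: 2, bytesPerSample: 4)
print(stereoFloat.bytesPerPacket)            // 8
```

A bufferByteSize of 32, as in the code above, holds only a few milliseconds of audio; sizing buffers for a fraction of a second per callback is a more typical starting point.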
Regarding "ios - Get microphone input using Audio Queue in Swift 3", we found a similar question on Stack Overflow: https://stackoverflow.com/questions/43519077/