ios - AVAudioRecorder in Swift 3: Get byte stream instead of saving to file

Tags: ios swift microphone avaudiorecorder audio-processing

I am new to iOS programming and want to port an Android app to iOS using Swift 3. The core function of the app is to read the byte stream from the microphone and process that stream in real time. It is therefore not enough to store the audio in a file and process it after recording has stopped.

I have found the AVAudioRecorder class, but I don't know how to process the data stream in real time (filter it, send it to a server, etc.). The initializer of AVAudioRecorder looks like this:

AVAudioRecorder(url: filename, settings: settings)

What I need is a class where I can register an event handler, or something similar, that is called every time x bytes have been read so that I can process them.

Can AVAudioRecorder do this? If not, is there another class in the Swift libraries that lets me process an audio stream in real time? On Android I use android.media.AudioRecord, so it would be great if there is an equivalent class in Swift.

Regards

Best Answer

Use Audio Queue Services from the Core Audio framework: https://developer.apple.com/library/content/documentation/MusicAudio/Conceptual/AudioQueueProgrammingGuide/AQRecord/RecordingAudio.html#//apple_ref/doc/uid/TP40005343-CH4-SW1

static const int kNumberBuffers = 3;                            // 1
struct AQRecorderState {
    AudioStreamBasicDescription  mDataFormat;                   // 2
    AudioQueueRef                mQueue;                        // 3
    AudioQueueBufferRef          mBuffers[kNumberBuffers];      // 4
    AudioFileID                  mAudioFile;                    // 5
    UInt32                       bufferByteSize;                // 6
    SInt64                       mCurrentPacket;                // 7
    bool                         mIsRunning;                    // 8
};

Here’s a description of the fields in this structure:

1 Sets the number of audio queue buffers to use.

2 An AudioStreamBasicDescription structure (from CoreAudioTypes.h) representing the audio data format to write to disk. This format gets used by the audio queue specified in the mQueue field. The mDataFormat field gets filled initially by code in your program, as described in Set Up an Audio Format for Recording. It is good practice to then update the value of this field by querying the audio queue's kAudioQueueProperty_StreamDescription property, as described in Getting the Full Audio Format from an Audio Queue. On Mac OS X v10.5, use the kAudioConverterCurrentInputStreamDescription property instead.

For details on the AudioStreamBasicDescription structure, see Core Audio Data Types Reference.

3 The recording audio queue created by your application.

4 An array holding pointers to the audio queue buffers managed by the audio queue.

5 An audio file object representing the file into which your program records audio data.

6 The size, in bytes, for each audio queue buffer. This value is calculated in these examples in the DeriveBufferSize function, after the audio queue is created and before it is started. See Write a Function to Derive Recording Audio Queue Buffer Size.

7 The packet index for the first packet to be written from the current audio queue buffer.

8 A Boolean value indicating whether or not the audio queue is running.
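To give an idea of how this maps to Swift, here is a minimal sketch of a recording queue whose input callback hands you the raw bytes of each filled buffer instead of writing them to a file. The format values (16-bit signed mono PCM at 44.1 kHz), the fixed 4096-byte buffer size, and the processing placeholder are assumptions for illustration, not part of the Apple guide quoted above.

```swift
import AudioToolbox
import Foundation

// Assumed format for illustration: 16-bit signed mono linear PCM at 44.1 kHz.
var format = AudioStreamBasicDescription(
    mSampleRate:       44_100,
    mFormatID:         kAudioFormatLinearPCM,
    mFormatFlags:      kLinearPCMFormatFlagIsSignedInteger | kLinearPCMFormatFlagIsPacked,
    mBytesPerPacket:   2,
    mFramesPerPacket:  1,
    mBytesPerFrame:    2,
    mChannelsPerFrame: 1,
    mBitsPerChannel:   16,
    mReserved:         0)

// Input callback: the audio queue calls this each time a buffer has been
// filled with captured audio. This is where the raw byte stream arrives.
let inputCallback: AudioQueueInputCallback = { _, queue, buffer, _, _, _ in
    let byteCount = Int(buffer.pointee.mAudioDataByteSize)
    let bytes = Data(bytes: buffer.pointee.mAudioData, count: byteCount)
    // Process `bytes` here (filter it, send it to a server, etc.),
    // then hand the buffer back to the queue so it can be refilled.
    AudioQueueEnqueueBuffer(queue, buffer, 0, nil)
}

var queue: AudioQueueRef?
if AudioQueueNewInput(&format, inputCallback, nil, nil, nil, 0, &queue) == noErr,
   let queue = queue {
    // Mirrors the guide's kNumberBuffers = 3; the buffer size here is a
    // placeholder (the guide derives it from the format and a duration
    // in its DeriveBufferSize function instead).
    for _ in 0..<3 {
        var buffer: AudioQueueBufferRef?
        if AudioQueueAllocateBuffer(queue, 4096, &buffer) == noErr, let buffer = buffer {
            AudioQueueEnqueueBuffer(queue, buffer, 0, nil)
        }
    }
    AudioQueueStart(queue, nil)
    // Later: AudioQueueStop(queue, true) and AudioQueueDispose(queue, true).
}
```

On iOS you would additionally need to configure and activate an AVAudioSession with a record category and obtain microphone permission before starting the queue; this sketch omits that setup.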

Regarding "ios - AVAudioRecorder in Swift 3: Get byte stream instead of saving to file", we found a similar question on Stack Overflow: https://stackoverflow.com/questions/43473061/
