ios - Converting the Objective-C AudioUnits introduction to Swift

Tags: ios audiounit avaudiosession audiotoolbox

I have managed to translate this code to the point where the render callback gets called:
http://www.cocoawithlove.com/2010/10/ios-tone-generator-introduction-to.html

I am fairly sure my render callback is implemented incorrectly, because I either hear nothing at all or a very harsh noise from the headphones.
I also don't see the connection between the audioSession in viewDidLoad and the rest of the code.

Can anyone help me with this?

private func performRender(
    inRefCon: UnsafeMutablePointer<Void>,
    ioActionFlags: UnsafeMutablePointer<AudioUnitRenderActionFlags>,
    inTimeStamp: UnsafePointer<AudioTimeStamp>,
    inBufNumber: UInt32,
    inNumberFrames: UInt32,
    ioData: UnsafeMutablePointer<AudioBufferList>) -> OSStatus
{
    // get object
    let vc = unsafeBitCast(inRefCon, ViewController.self)
    print("callback")

    let thetaIncrement = 2.0 * M_PI * vc.kFrequency / vc.kSampleRate
    var theta = vc.theta

    // var sinValues = [Int32]()
    let amplitude : Double = 0.25

    let abl = UnsafeMutableAudioBufferListPointer(ioData)
    for buffer in abl
    {
        let val : Int32 = Int32((sin(theta) * amplitude))
        // sinValues.append(val)
        theta += thetaIncrement

        memset(buffer.mData, val, Int(buffer.mDataByteSize))
    }

    vc.theta = theta

    return noErr
}

class ViewController: UIViewController
{
let kSampleRate : Float64 = 44100
let kFrequency : Double = 440
var theta : Double = 0

private var toneUnit = AudioUnit()
private let kInputBus = AudioUnitElement(1)
private let kOutputBus = AudioUnitElement(0)

@IBAction func tooglePlay(sender: UIButton)
{
    if(toneUnit != nil)
    {
        AudioOutputUnitStop(toneUnit)
        AudioUnitInitialize(toneUnit)
        AudioComponentInstanceDispose(toneUnit)
        toneUnit = nil
    }
    else
    {
        createToneUnit()
        var err = AudioUnitInitialize(toneUnit)
        assert(err == noErr, "error initializing audiounit!")
        err = AudioOutputUnitStart(toneUnit)
        assert(err == noErr, "error starting audiooutput unit!")      
    }
}

func createToneUnit()
{
    var defaultOutputDescription = AudioComponentDescription(
        componentType: kAudioUnitType_Output,
        componentSubType: kAudioUnitSubType_RemoteIO,
        componentManufacturer: kAudioUnitManufacturer_Apple,
        componentFlags: 0,
        componentFlagsMask: 0)

    let defaultOutput = AudioComponentFindNext(nil,&defaultOutputDescription)


    let fourBytesPerFloat : UInt32 = 4
    let eightBitsPerByte : UInt32 = 8

    var err = AudioComponentInstanceNew(defaultOutput, &toneUnit)
    assert(err == noErr, "error setting audio component instance!")
    var input = AURenderCallbackStruct(inputProc: performRender, inputProcRefCon: UnsafeMutablePointer(unsafeAddressOf(self)))

    err = AudioUnitSetProperty(toneUnit, kAudioUnitProperty_SetRenderCallback, kAudioUnitScope_Global, kOutputBus, &input, UInt32(sizeof(AURenderCallbackStruct)))
    assert(err == noErr, "error setting render callback!")

    var streamFormat = AudioStreamBasicDescription(
        mSampleRate: kSampleRate,
        mFormatID: kAudioFormatLinearPCM,
        mFormatFlags: kAudioFormatFlagsNativeFloatPacked,
        mBytesPerPacket: fourBytesPerFloat,
        mFramesPerPacket: 1,
        mBytesPerFrame: fourBytesPerFloat,
        mChannelsPerFrame: 1,
        mBitsPerChannel: fourBytesPerFloat*eightBitsPerByte,
        mReserved: 0)

    err = AudioUnitSetProperty(toneUnit, kAudioUnitProperty_StreamFormat, kAudioUnitScope_Output, kInputBus, &streamFormat, UInt32(sizeof(AudioStreamBasicDescription)))
    assert(err == noErr, "error setting audiounit property!")
}

override func viewDidLoad()
{
    super.viewDidLoad()
    let audioSession = AVAudioSession.sharedInstance()

    do
    {
        try audioSession.setCategory(AVAudioSessionCategoryPlayback)
    }
    catch
    {
        print("Audio session setCategory failed")
    }

    do
    {
        try audioSession.setPreferredSampleRate(kSampleRate)
    }
    catch
    {
        print("Audio session samplerate error")
    }

    do
    {
        try audioSession.setPreferredIOBufferDuration(0.005)
    }
    catch
    {
        print("Audio session bufferduration error")
    }

    do
    {
        try audioSession.setActive(true)
    }
    catch
    {
        print("Audio session activate failure")
    }
}

Best Answer

  • vc.theta is not incremented correctly - the loop runs once per buffer, not once per frame
  • memset only uses a single byte of val, so it cannot write 32-bit samples
  • the AudioUnit expects Float samples, but you are storing Int32
  • the range of the audio data also looks odd - why not keep it in [-1, 1]?
  • there is also no need to limit theta - sin can handle that for you

  • are you sure this ever worked in Objective-C?
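
Putting those points together, here is a minimal sketch of a per-frame fill (the helper name `fillSineBuffer` and its parameters are illustrative, not from the original post): write one Float sample per frame, keep samples in [-1, 1] scaled by the amplitude, and advance theta once per frame instead of once per buffer:

```swift
import Foundation

// Illustrative helper: fill a Float buffer with a sine tone.
// One sample per frame, values in [-amplitude, amplitude],
// theta advanced once per frame (not once per buffer).
func fillSineBuffer(buffer: UnsafeMutablePointer<Float>,
                    frameCount: Int,
                    inout theta: Double,
                    thetaIncrement: Double,
                    amplitude: Double = 0.25) {
    for frame in 0..<frameCount {
        buffer[frame] = Float(sin(theta) * amplitude) // Float, not Int32
        theta += thetaIncrement                       // per-frame increment
    }
}
```

In the render callback you would bind each buffer's mData to UnsafeMutablePointer&lt;Float&gt;, call this with inNumberFrames, and only then store the advanced theta back on the view controller - memset is not usable here because it replicates a single byte.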

    About ios - Converting the Objective-C AudioUnits introduction to Swift, we found a similar question on Stack Overflow: https://stackoverflow.com/questions/33296302/
