ios - How to shift the phase of an audio unit's output by 180 degrees

Tags: ios audio core-audio audiounit phase

I am trying to take audio input from the microphone, apply a 180-degree phase shift to the input stream, and play it back out.

Here is the code I use to initialize the session and capture the audio (the sample rate is set to 44.1 kHz):

#import <AudioToolbox/AudioToolbox.h>

// Assumed definitions: the snippet references these without defining them.
// On the RemoteIO unit, bus 0 is output (speaker) and bus 1 is input (mic).
static const AudioUnitElement kOutputBus = 0;
static const AudioUnitElement kInputBus  = 1;
#define SAMPLE_RATE 44100.0

OSStatus status = noErr;

// NOTE: AudioSessionInitialize() is assumed to have been called earlier;
// the category should be set before the session is activated.
UInt32 category = kAudioSessionCategory_PlayAndRecord;
status = AudioSessionSetProperty(kAudioSessionProperty_AudioCategory, sizeof(UInt32), &category);
assert(status == noErr);

status = AudioSessionSetActive(true);
assert(status == noErr);

Float32 aBufferLength = 0.002902f; // in seconds; ~128 samples at 44.1 kHz

status = AudioSessionSetProperty(kAudioSessionProperty_PreferredHardwareIOBufferDuration,
                                 sizeof(aBufferLength), &aBufferLength);

assert(status == noErr);

AudioComponentDescription desc;
desc.componentType = kAudioUnitType_Output;
desc.componentSubType = kAudioUnitSubType_RemoteIO;
desc.componentFlags = 0;
desc.componentFlagsMask = 0;
desc.componentManufacturer = kAudioUnitManufacturer_Apple;

// find the RemoteIO audio component
AudioComponent inputComponent = AudioComponentFindNext(NULL, &desc);

// instantiate the audio unit from the component
status = AudioComponentInstanceNew(inputComponent, &_audioState->audioUnit);
assert(status == noErr);

// enable recording on the input bus (bus 1, the microphone)
UInt32 flag = 1;
status = AudioUnitSetProperty(_audioState->audioUnit,
                              kAudioOutputUnitProperty_EnableIO,
                              kAudioUnitScope_Input,
                              1, /*input*/
                              &flag,
                              sizeof(flag));
assert(status == noErr);

// enable playback on the output bus (bus 0, the speaker)
status = AudioUnitSetProperty(_audioState->audioUnit,
                              kAudioOutputUnitProperty_EnableIO,
                              kAudioUnitScope_Output,
                              0, /*output*/
                              &flag,
                              sizeof(flag));

assert(status == noErr);


// Fetch the sample rate we actually got, in case the hardware
// didn't grant exactly what we requested
Float64 achievedSampleRate;
UInt32 size = sizeof(achievedSampleRate);
status = AudioSessionGetProperty(kAudioSessionProperty_CurrentHardwareSampleRate, &size, &achievedSampleRate);
assert(status == noErr);
NSLog(@"Hardware sample rate is %f", achievedSampleRate);


// specify the stream format (16-bit signed mono PCM) used for both capture and playback
AudioStreamBasicDescription audioFormat;
audioFormat.mSampleRate = achievedSampleRate;
audioFormat.mFormatID = kAudioFormatLinearPCM;
audioFormat.mFormatFlags = kAudioFormatFlagIsPacked | kAudioFormatFlagIsSignedInteger;
audioFormat.mFramesPerPacket = 1;
audioFormat.mChannelsPerFrame = 1;
audioFormat.mBitsPerChannel = 16;
audioFormat.mBytesPerPacket = 2;
audioFormat.mBytesPerFrame = 2;

// set our format on the output side of the input bus (the captured mic data)
status = AudioUnitSetProperty(_audioState->audioUnit,
                              kAudioUnitProperty_StreamFormat,
                              kAudioUnitScope_Output,
                              kInputBus,
                              &audioFormat,
                              sizeof(audioFormat));

assert(status == noErr);

// set our format on the input side of the output bus (the data we play out)
status = AudioUnitSetProperty(_audioState->audioUnit,
                              kAudioUnitProperty_StreamFormat,
                              kAudioUnitScope_Input,
                              kOutputBus,
                              &audioFormat,
                              sizeof(audioFormat));
assert(status == noErr);

AURenderCallbackStruct callbackStruct;
memset(&callbackStruct, 0, sizeof(AURenderCallbackStruct));
callbackStruct.inputProc = RenderCallback;
callbackStruct.inputProcRefCon = _audioState;

// install the input callback, invoked when mic samples are available
status = AudioUnitSetProperty(_audioState->audioUnit,
                              kAudioOutputUnitProperty_SetInputCallback,
                              kAudioUnitScope_Global,
                              kInputBus,
                              &callbackStruct,
                              sizeof(callbackStruct));
assert(status == noErr);

callbackStruct.inputProc = PlaybackCallback;
callbackStruct.inputProcRefCon = _audioState;

// install the render callback that supplies samples for playback
status = AudioUnitSetProperty(_audioState->audioUnit,
                              kAudioUnitProperty_SetRenderCallback,
                              kAudioUnitScope_Global,
                              kOutputBus,
                              &callbackStruct,
                              sizeof(callbackStruct));
assert(status == noErr);

flag = 0;

// tell the unit NOT to allocate its own buffer for the captured
// audio on the input bus; we supply our own buffer below
status = AudioUnitSetProperty(_audioState->audioUnit,
                              kAudioUnitProperty_ShouldAllocateBuffer,
                              kAudioUnitScope_Output,
                              kInputBus,
                              &flag,
                              sizeof(flag));
assert(status == noErr);

// our own capture buffer: 256 mono 16-bit samples (512 bytes)
_audioState->audioBuffer.mNumberChannels = 1;
_audioState->audioBuffer.mDataByteSize = 256 * 2;
_audioState->audioBuffer.mData = malloc(256 * 2);

// initialize the audio unit
status = AudioUnitInitialize(_audioState->audioUnit);
assert(status == noErr);

Does anyone know a way to change the phase so as to produce a destructively interfering sine wave? I have heard about using vDSP for band-pass filtering, but I'm not sure...
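
For a broadband time-domain signal, a 180-degree phase shift at every frequency is simply a polarity inversion, i.e. negating each sample. The RenderCallback and PlaybackCallback installed above are not shown in the post; below is a minimal sketch of what the playback side might look like under that approach, assuming a hypothetical AudioState struct (matching the _audioState usage above) whose audioBuffer the input callback has already filled with the latest 16-bit signed mono samples:

// Hypothetical playback callback (not shown in the original post).
// Assumes the input callback has already copied the latest mic samples
// into audioState->audioBuffer, and that the buffer holds at least
// inNumberFrames samples.
static OSStatus PlaybackCallback(void *inRefCon,
                                 AudioUnitRenderActionFlags *ioActionFlags,
                                 const AudioTimeStamp *inTimeStamp,
                                 UInt32 inBusNumber,
                                 UInt32 inNumberFrames,
                                 AudioBufferList *ioData) {
    AudioState *audioState = (AudioState *)inRefCon; // assumed struct

    SInt16 *in  = (SInt16 *)audioState->audioBuffer.mData;
    SInt16 *out = (SInt16 *)ioData->mBuffers[0].mData;

    for (UInt32 i = 0; i < inNumberFrames; i++) {
        SInt16 s = in[i];
        // Negating flips the waveform's polarity; clamp because
        // -(-32768) does not fit in a SInt16.
        out[i] = (s == -32768) ? 32767 : -s;
    }
    return noErr;
}

As the accepted answer below explains, this inverts the signal, but by itself it is not enough to achieve acoustic cancellation.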

Best Answer

Unless you know the latency from the microphone to your input buffer, the latency from your output buffer to the speaker, the frequencies you want to cancel, and have some knowledge that those frequencies remain stationary over that interval, you cannot reliably create a 180-degree phase shift for cancellation purposes. Instead, you will be trying to cancel sound that occurred a dozen or more milliseconds earlier, and if the frequency has changed in the meantime you may end up adding to the sound rather than cancelling it. Also, if the distances between the sound source, the speaker, and the listener change by a large enough fraction of a wavelength, the speaker output could end up doubling the loudness of the source instead of cancelling it. For a 1 kHz tone, that is a movement of about 6 inches.
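
To put rough numbers on that (my arithmetic, not from the original answer): the "cancelling" output is f · Δt cycles old by the time it leaves the speaker, so even a small frequency drift over that window moves it away from antiphase:

#include <stdio.h>

int main(void) {
    double latencySec = 0.010; // assumed 10 ms mic-to-speaker round trip
    double freqs[] = { 100.0, 440.0, 1000.0 };

    for (int i = 0; i < 3; i++) {
        // number of full cycles that elapse during the latency window;
        // the inverted output is this "old" when it reaches the listener
        double cycles = freqs[i] * latencySec;
        printf("%6.0f Hz: %5.1f cycles elapse in %.0f ms\n",
               freqs[i], cycles, latencySec * 1000.0);
    }
    return 0;
}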

Active noise cancellation requires very accurate knowledge of the input-to-output time delay, including the microphone, input filter, and speaker responses, as well as ADC/DAC latencies. Apple does not specify these, and they may well differ between iOS device models.
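
The (now-deprecated) AudioSession API used elsewhere in the question does expose nominal latency figures, which could serve as a starting point, with the caveat above that they may not account for the full filter/converter chain:

// Sketch: query the latencies iOS reports for the current audio route.
// These are nominal values and may not cover every stage mentioned above.
Float32 inputLatency = 0.0f, outputLatency = 0.0f;
UInt32 latencySize = sizeof(Float32);
AudioSessionGetProperty(kAudioSessionProperty_CurrentHardwareInputLatency,
                        &latencySize, &inputLatency);
latencySize = sizeof(Float32);
AudioSessionGetProperty(kAudioSessionProperty_CurrentHardwareOutputLatency,
                        &latencySize, &outputLatency);
NSLog(@"Reported hardware latency: %f s in, %f s out",
      inputLatency, outputLatency);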

Given exact knowledge of the input-to-output latency, plus an accurate analysis (via FFTs) of the source signal's frequencies, some phase shift other than 180 degrees at each frequency may be needed in order to attempt to cancel a stationary source.
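
As a concrete reading of that last point (my illustration, not code from the answer): with a known total latency, the shift needed at each frequency is 180 degrees plus the phase the source accumulates while the latency elapses, wrapped back into range:

#include <math.h>

// Hypothetical helper: phase shift (radians) to apply at frequency
// freqHz so that, after a known input-to-output latency, the emitted
// signal arrives in antiphase with a stationary source at that frequency.
static double cancellationPhase(double freqHz, double latencySec) {
    // pi for the inversion itself, plus the phase the source advances
    // during the latency window
    double phase = M_PI + 2.0 * M_PI * freqHz * latencySec;
    // wrap into (-pi, pi]
    phase = fmod(phase, 2.0 * M_PI);
    if (phase > M_PI) phase -= 2.0 * M_PI;
    return phase;
}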

Regarding "ios - How to shift the phase of an audio unit's output by 180 degrees", we found a similar question on Stack Overflow: https://stackoverflow.com/questions/33086400/
