iphone - How do I play audio backwards?

Tags: iphone ios ipad audio core-audio

It was suggested that one could read the audio data from end to beginning, create a copy written from beginning to end, and then simply play back that reversed audio data.

Are there any existing examples of how to do this on iOS?

I found a sample project called MixerHost, which at one point uses an AudioUnitSampleType to hold the audio data that has been read from file, and assigns it to a buffer.

This is defined as:

typedef SInt32 AudioUnitSampleType;
#define kAudioUnitSampleFractionBits 24

According to Apple:

The canonical audio sample type for audio units and other audio processing in iPhone OS is noninterleaved linear PCM with 8.24-bit fixed-point samples.

In other words, it holds non-interleaved linear PCM audio data.
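
To make the 8.24 fixed-point layout concrete, here is a minimal conversion sketch (FloatToFixed824 and Fixed824ToFloat are illustrative helper names, not part of the MixerHost sample). Full scale 1.0 maps to 1 << 24; 8 integer bits give a representable range of [-128, 128):

// Minimal sketch (not from MixerHost): float <-> 8.24 fixed point.
static AudioUnitSampleType FloatToFixed824 (float sample) {
    // Full-scale 1.0f becomes 1 << kAudioUnitSampleFractionBits.
    return (AudioUnitSampleType) (sample * (float) (1 << kAudioUnitSampleFractionBits));
}

static float Fixed824ToFloat (AudioUnitSampleType sample) {
    return (float) sample / (float) (1 << kAudioUnitSampleFractionBits);
}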

But I can't figure out where this data is read in and where it is stored. Here is the code that loads the audio data and buffers it:

- (void) readAudioFilesIntoMemory {

    for (int audioFile = 0; audioFile < NUM_FILES; ++audioFile)  {

        NSLog (@"readAudioFilesIntoMemory - file %i", audioFile);

        // Instantiate an extended audio file object.
        ExtAudioFileRef audioFileObject = 0;

        // Open an audio file and associate it with the extended audio file object.
        OSStatus result = ExtAudioFileOpenURL (sourceURLArray[audioFile], &audioFileObject);

        if (noErr != result || NULL == audioFileObject) {[self printErrorMessage: @"ExtAudioFileOpenURL" withStatus: result]; return;}

        // Get the audio file's length in frames.
        UInt64 totalFramesInFile = 0;
        UInt32 frameLengthPropertySize = sizeof (totalFramesInFile);

        result =    ExtAudioFileGetProperty (
                        audioFileObject,
                        kExtAudioFileProperty_FileLengthFrames,
                        &frameLengthPropertySize,
                        &totalFramesInFile
                    );

        if (noErr != result) {[self printErrorMessage: @"ExtAudioFileGetProperty (audio file length in frames)" withStatus: result]; return;}

        // Assign the frame count to the soundStructArray instance variable
        soundStructArray[audioFile].frameCount = totalFramesInFile;

        // Get the audio file's number of channels.
        AudioStreamBasicDescription fileAudioFormat = {0};
        UInt32 formatPropertySize = sizeof (fileAudioFormat);

        result =    ExtAudioFileGetProperty (
                        audioFileObject,
                        kExtAudioFileProperty_FileDataFormat,
                        &formatPropertySize,
                        &fileAudioFormat
                    );

        if (noErr != result) {[self printErrorMessage: @"ExtAudioFileGetProperty (file audio format)" withStatus: result]; return;}

        UInt32 channelCount = fileAudioFormat.mChannelsPerFrame;

        // Allocate memory in the soundStructArray instance variable to hold the left channel, 
        //    or mono, audio data
        soundStructArray[audioFile].audioDataLeft =
            (AudioUnitSampleType *) calloc (totalFramesInFile, sizeof (AudioUnitSampleType));

        AudioStreamBasicDescription importFormat = {0};
        if (2 == channelCount) {

            soundStructArray[audioFile].isStereo = YES;
            // Sound is stereo, so allocate memory in the soundStructArray instance variable to  
            //    hold the right channel audio data
            soundStructArray[audioFile].audioDataRight =
                (AudioUnitSampleType *) calloc (totalFramesInFile, sizeof (AudioUnitSampleType));
            importFormat = stereoStreamFormat;

        } else if (1 == channelCount) {

            soundStructArray[audioFile].isStereo = NO;
            importFormat = monoStreamFormat;

        } else {

            NSLog (@"*** WARNING: File format not supported - wrong number of channels");
            ExtAudioFileDispose (audioFileObject);
            return;
        }

        // Assign the appropriate mixer input bus stream data format to the extended audio 
        //        file object. This is the format used for the audio data placed into the audio 
        //        buffer in the SoundStruct data structure, which is in turn used in the 
        //        inputRenderCallback callback function.

        result =    ExtAudioFileSetProperty (
                        audioFileObject,
                        kExtAudioFileProperty_ClientDataFormat,
                        sizeof (importFormat),
                        &importFormat
                    );

        if (noErr != result) {[self printErrorMessage: @"ExtAudioFileSetProperty (client data format)" withStatus: result]; return;}

        // Set up an AudioBufferList struct, which has two roles:
        //
        //        1. It gives the ExtAudioFileRead function the configuration it 
        //            needs to correctly provide the data to the buffer.
        //
        //        2. It points to the soundStructArray[audioFile].audioDataLeft buffer, so 
        //            that audio data obtained from disk using the ExtAudioFileRead function
        //            goes to that buffer

        // Allocate memory for the buffer list struct according to the number of 
        //    channels it represents.
        AudioBufferList *bufferList;

        bufferList = (AudioBufferList *) malloc (
            sizeof (AudioBufferList) + sizeof (AudioBuffer) * (channelCount - 1)
        );

        if (NULL == bufferList) {NSLog (@"*** malloc failure for allocating bufferList memory"); return;}

        // initialize the mNumberBuffers member
        bufferList->mNumberBuffers = channelCount;

        // initialize the mBuffers member to 0
        AudioBuffer emptyBuffer = {0};
        size_t arrayIndex;
        for (arrayIndex = 0; arrayIndex < channelCount; arrayIndex++) {
            bufferList->mBuffers[arrayIndex] = emptyBuffer;
        }

        // set up the AudioBuffer structs in the buffer list
        bufferList->mBuffers[0].mNumberChannels  = 1;
        bufferList->mBuffers[0].mDataByteSize    = totalFramesInFile * sizeof (AudioUnitSampleType);
        bufferList->mBuffers[0].mData            = soundStructArray[audioFile].audioDataLeft;

        if (2 == channelCount) {
            bufferList->mBuffers[1].mNumberChannels  = 1;
            bufferList->mBuffers[1].mDataByteSize    = totalFramesInFile * sizeof (AudioUnitSampleType);
            bufferList->mBuffers[1].mData            = soundStructArray[audioFile].audioDataRight;
        }

        // Perform a synchronous, sequential read of the audio data out of the file and
        //    into the soundStructArray[audioFile].audioDataLeft and (if stereo) .audioDataRight members.
        UInt32 numberOfPacketsToRead = (UInt32) totalFramesInFile;

        result = ExtAudioFileRead (
                     audioFileObject,
                     &numberOfPacketsToRead,
                     bufferList
                 );

        free (bufferList);

        if (noErr != result) {

            [self printErrorMessage: @"ExtAudioFileRead failure - " withStatus: result];

            // If reading from the file failed, then free the memory for the sound buffer.
            free (soundStructArray[audioFile].audioDataLeft);
            soundStructArray[audioFile].audioDataLeft = 0;

            if (2 == channelCount) {
                free (soundStructArray[audioFile].audioDataRight);
                soundStructArray[audioFile].audioDataRight = 0;
            }

            ExtAudioFileDispose (audioFileObject);            
            return;
        }

        NSLog (@"Finished reading file %i into memory", audioFile);

        // Set the sample index to zero, so that playback starts at the 
        //    beginning of the sound.
        soundStructArray[audioFile].sampleNumber = 0;

        // Dispose of the extended audio file object, which also
        //    closes the associated file.
        ExtAudioFileDispose (audioFileObject);
    }
}
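
For context on where these buffers get consumed: in the MixerHost sample, the inputRenderCallback mentioned in the comments pulls samples out of audioDataLeft/audioDataRight using the sampleNumber index. A simplified sketch of that callback (hedged; the real sample adds error handling, and soundStructPtr is MixerHost's pointer typedef for the struct shown further below):

static OSStatus inputRenderCallback (void                        *inRefCon,
                                     AudioUnitRenderActionFlags  *ioActionFlags,
                                     const AudioTimeStamp        *inTimeStamp,
                                     UInt32                       inBusNumber,
                                     UInt32                       inNumberFrames,
                                     AudioBufferList             *ioData) {

    soundStructPtr soundArray = (soundStructPtr) inRefCon;

    UInt32 frameTotal = soundArray[inBusNumber].frameCount;
    UInt32 sample     = soundArray[inBusNumber].sampleNumber;
    BOOL   isStereo   = soundArray[inBusNumber].isStereo;

    AudioUnitSampleType *dataInLeft  = soundArray[inBusNumber].audioDataLeft;
    AudioUnitSampleType *dataInRight = soundArray[inBusNumber].audioDataRight;
    AudioUnitSampleType *outLeft     = (AudioUnitSampleType *) ioData->mBuffers[0].mData;
    AudioUnitSampleType *outRight    = isStereo ? (AudioUnitSampleType *) ioData->mBuffers[1].mData : NULL;

    for (UInt32 frame = 0; frame < inNumberFrames; ++frame) {
        outLeft[frame] = dataInLeft[sample];                  // left or mono channel
        if (isStereo) outRight[frame] = dataInRight[sample];  // right channel
        if (++sample >= frameTotal) sample = 0;               // wrap around: loop the sound
    }

    soundArray[inBusNumber].sampleNumber = sample;            // remember playback position
    return noErr;
}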

Which part holds the array of audio samples that has to be reversed? Is it the AudioUnitSampleType?

bufferList->mBuffers[0].mData = soundStructArray[audioFile].audioDataLeft;

Note: audioDataLeft is declared as an AudioUnitSampleType *, i.e. a pointer to SInt32 samples, not an array type.
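
For reference, in the MixerHost sample that pointer lives in a plain C struct (reproduced approximately from MixerHostAudio.h), and it points at the calloc'd block of frameCount samples allocated in readAudioFilesIntoMemory, so it is effectively used as an array:

typedef struct {
    BOOL                 isStereo;        // set once the channel count is known
    UInt32               frameCount;      // total frames read from the file
    UInt32               sampleNumber;    // next frame the render callback will play
    AudioUnitSampleType *audioDataLeft;   // left (or mono) samples, frameCount long
    AudioUnitSampleType *audioDataRight;  // right samples; NULL for mono files
} soundStruct, *soundStructPtr;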

I found a clue on the Core Audio mailing list:

Well, nothing to do with iPh*n* as far as I know (unless some audio API has been omitted -- I am not a member of that program). AFAIR, AudioFile.h and ExtendedAudioFile.h should provide you with what you need to read or write a caf and access its streams/channels. Basically, you want to read each channel/stream backwards, so, if you don't need properties of the audio file it is pretty straightforward once you have a handle on that channel's data, assuming it is not in a compressed format. Considering the number of formats a caf can represent, this could take a few more lines of code than you're thinking. Once you have a handle on uncompressed data, it should be about as easy as reversing a string. Then you would of course replace the file's data with the reversed data, or you could just feed the audio output (or wherever you're sending the reversed signal) reading whatever stream you have backwards.

Here is what I tried, but nothing is audible when I assign the reversed buffer to the mData of both channels:

AudioUnitSampleType *leftData = soundStructArray[audioFile].audioDataLeft;
AudioUnitSampleType *reversedData = (AudioUnitSampleType *) calloc (totalFramesInFile, sizeof (AudioUnitSampleType));
UInt64 j = 0;
// Note: the loop index must be signed. With an unsigned index, a
// condition like `i > -1` never holds (-1 promotes to UINT64_MAX),
// so the copy never runs and the calloc'd buffer stays silent.
for (SInt64 i = (SInt64) totalFramesInFile - 1; i >= 0; i--) {
    reversedData[j] = leftData[i];
    j++;
}
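
Also note that the render callback reads from soundStructArray's audioDataLeft/audioDataRight, not from the temporary bufferList (which has already been freed), so the reversed samples must end up in those pointers. A hedged sketch under the same MixerHost naming (reverseSound: is an illustrative method name, not part of the sample) that reverses both channels in place and rewinds playback:

- (void) reverseSound: (int) audioFile {

    UInt64 frameCount = soundStructArray[audioFile].frameCount;
    if (frameCount < 2) return;  // nothing to swap

    AudioUnitSampleType *left  = soundStructArray[audioFile].audioDataLeft;
    AudioUnitSampleType *right = soundStructArray[audioFile].audioDataRight;

    // Swap samples from both ends toward the middle.
    for (UInt64 i = 0, j = frameCount - 1; i < j; ++i, --j) {
        AudioUnitSampleType tmp = left[i];
        left[i] = left[j];
        left[j] = tmp;

        if (soundStructArray[audioFile].isStereo) {
            tmp      = right[i];
            right[i] = right[j];
            right[j] = tmp;
        }
    }

    // Restart playback from the (new) beginning of the sound.
    soundStructArray[audioFile].sampleNumber = 0;
}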

Best Answer

I have developed a sample application which records what the user says and plays it backwards. I have used Core Audio to achieve this. Link to app code.

/* Since each sample is 16 bits (2 bytes) in size (mono channel), you
   can load one sample at a time by copying it into a different buffer,
   starting at the end of the recording and reading backwards. When you
   reach the start of the data, the data has been reversed and playback
   will be reversed as well. */

// set up output file
AudioFileID outputAudioFile;

AudioStreamBasicDescription myPCMFormat;
myPCMFormat.mSampleRate = 16000.00;
myPCMFormat.mFormatID = kAudioFormatLinearPCM ;
myPCMFormat.mFormatFlags =  kAudioFormatFlagsCanonical;
myPCMFormat.mChannelsPerFrame = 1;
myPCMFormat.mFramesPerPacket = 1;
myPCMFormat.mBitsPerChannel = 16;
myPCMFormat.mBytesPerPacket = 2;
myPCMFormat.mBytesPerFrame = 2;


AudioFileCreateWithURL((__bridge CFURLRef)self.flippedAudioUrl,
                       kAudioFileCAFType,
                       &myPCMFormat,
                       kAudioFileFlags_EraseFile,
                       &outputAudioFile);
// set up input file
AudioFileID inputAudioFile;
OSStatus theErr = noErr;
UInt64 fileDataSize = 0;

AudioStreamBasicDescription theFileFormat;
UInt32 thePropertySize = sizeof(theFileFormat);

theErr = AudioFileOpenURL((__bridge CFURLRef)self.recordedAudioUrl, kAudioFileReadPermission, 0, &inputAudioFile);

thePropertySize = sizeof(fileDataSize);
theErr = AudioFileGetProperty(inputAudioFile, kAudioFilePropertyAudioDataByteCount, &thePropertySize, &fileDataSize);

UInt32 dataSize = fileDataSize;
void* theData = malloc(dataSize);

// Read the data back-to-front, one 16-bit sample at a time, and write
// it front-to-back into the output file.
UInt32 readPoint  = dataSize;
UInt32 writePoint = 0;
while( readPoint > 0 )
{
    UInt32 bytesToRead = 2;

    // Step back one sample *before* reading, so the first read is at
    // offset dataSize - 2 and the last read is at offset 0. (Reading
    // first and decrementing afterwards would skip sample 0 and
    // attempt a read past the end of the file.)
    readPoint -= 2;

    AudioFileReadBytes( inputAudioFile, false, readPoint, &bytesToRead, theData );
    AudioFileWriteBytes( outputAudioFile, false, writePoint, &bytesToRead, theData );

    writePoint += 2;
}

free(theData);
AudioFileClose(inputAudioFile);
AudioFileClose(outputAudioFile);

Hope this helps.

Source question on Stack Overflow: https://stackoverflow.com/questions/12027003/
