iphone - Writing a buffer of audio samples to an AAC file using ExtAudioFileWrite for iOS

Tags: iphone c++ objective-c ios extaudiofile

Update: I have solved this problem and posted my solution as an answer to my own question (below).

I am trying to write a simple buffer of audio samples to a file in AAC format using ExtAudioFileWrite.

I have achieved this with the code below for writing a mono buffer to a .wav file. However, I cannot do it for stereo or for AAC files, which is what I actually want to do.

Here is what I have so far...

CFStringRef fPath;
fPath = CFStringCreateWithCString(kCFAllocatorDefault,
                                  "/path/to/my/audiofile/audiofile.wav",
                                  kCFStringEncodingMacRoman);


OSStatus err;

int mChannels = 1;
UInt32 totalFramesInFile = 100000;

Float32 *outputBuffer = (Float32 *)malloc(sizeof(Float32) * (totalFramesInFile*mChannels)); 


////////////// Set up Audio Buffer List ////////////

AudioBufferList outputData; 
outputData.mNumberBuffers = 1;
outputData.mBuffers[0].mNumberChannels = mChannels; 
outputData.mBuffers[0].mDataByteSize = 4 * totalFramesInFile * mChannels;
outputData.mBuffers[0].mData = outputBuffer;

Float32 audioFile[totalFramesInFile*mChannels];


for (int i = 0;i < totalFramesInFile*mChannels;i++)
{
    audioFile[i] = ((Float32)(rand() % 100))/100.0;
    audioFile[i] = audioFile[i]*0.2;
}

outputData.mBuffers[0].mData = &audioFile;


CFURLRef fileURL = CFURLCreateWithFileSystemPath(kCFAllocatorDefault,fPath,kCFURLPOSIXPathStyle,false);

ExtAudioFileRef audiofileRef;

// WAVE FILES

AudioFileTypeID fileType = kAudioFileWAVEType;
AudioStreamBasicDescription clientFormat;
clientFormat.mSampleRate = 44100.0;
clientFormat.mFormatID = kAudioFormatLinearPCM;
clientFormat.mFormatFlags = kAudioFormatFlagIsSignedInteger | kAudioFormatFlagIsPacked; // == 12
clientFormat.mBitsPerChannel = 16;
clientFormat.mChannelsPerFrame = mChannels;
clientFormat.mBytesPerFrame = 2*clientFormat.mChannelsPerFrame;
clientFormat.mFramesPerPacket = 1;
clientFormat.mBytesPerPacket = 2*clientFormat.mChannelsPerFrame;


// open the file for writing
err = ExtAudioFileCreateWithURL((CFURLRef)fileURL, fileType, &clientFormat, NULL, kAudioFileFlags_EraseFile, &audiofileRef);

if (err != noErr)
{
    cout << "Problem when creating audio file: " << err << "\n";
}

// tell the ExtAudioFile API what format we'll be sending samples in
err = ExtAudioFileSetProperty(audiofileRef, kExtAudioFileProperty_ClientDataFormat, sizeof(clientFormat), &clientFormat);

if (err != noErr)
{
    cout << "Problem setting audio format: " << err << "\n";
}

UInt32 rFrames = (UInt32)totalFramesInFile;
// write the data
err = ExtAudioFileWrite(audiofileRef, rFrames, &outputData);

if (err != noErr)
{
    cout << "Problem writing audio file: " << err << "\n";
}

// close the file
ExtAudioFileDispose(audiofileRef);



NSLog(@"Done!");

My specific questions are:

  • How do I set up the AudioStreamBasicDescription for AAC?
  • Why can't I get stereo working properly here? If I set the number of channels ('mChannels') to 2, I get the left channel correctly but distortion in the right channel.

Any help would be greatly appreciated. I think I've read just about every page I can find on this, but I'm none the wiser: while there are similar questions, they usually derive the AudioStreamBasicDescription parameters from some input audio file, so I can't see the resulting values. The Apple documentation is no help either.

Many thanks,

Adam

Best Answer

OK, after some exploring I've figured it out. I have wrapped it up as a function that writes random noise to a file. Specifically, it can:

  • write .wav or .m4a files
  • write mono or stereo in either format
  • write the file to a specified path

The function arguments are:

  • the path of the audio file to be created
  • the number of channels (up to 2)
  • a boolean: compress with m4a (if false, use PCM)

For a stereo M4A file, the function should be called as:

writeNoiseToAudioFile("/path/to/my/audiofile.m4a",2,true);

The source of the function follows. I've tried to comment it as much as possible. I hope it is correct; it certainly works for me, but if I've missed something please say "Adam, you did this a bit wrong". Good luck! Here is the code:

void writeNoiseToAudioFile(char *fName,int mChannels,bool compress_with_m4a)
{
OSStatus err; // to record errors from ExtAudioFile API functions

// create file path as CStringRef
CFStringRef fPath;
fPath = CFStringCreateWithCString(kCFAllocatorDefault,
                                  fName,
                                  kCFStringEncodingMacRoman);


// specify total number of samples per channel
UInt32 totalFramesInFile = 100000;      

/////////////////////////////////////////////////////////////////////////////
////////////// Set up Audio Buffer List For Interleaved Audio ///////////////
/////////////////////////////////////////////////////////////////////////////

AudioBufferList outputData; 
outputData.mNumberBuffers = 1;
outputData.mBuffers[0].mNumberChannels = mChannels;    
outputData.mBuffers[0].mDataByteSize = sizeof(Float32)*totalFramesInFile*mChannels; // Float32 matches the 32-bit float client format set below



/////////////////////////////////////////////////////////////////////////////
//////// Synthesise Noise and Put It In The AudioBufferList /////////////////
/////////////////////////////////////////////////////////////////////////////

// create an array to hold our audio (Float32, to match the 32-bit float
// format declared below with FillOutASBDForLPCM; AudioUnitSampleType is
// a fixed-point integer type on iOS and would truncate these values)
Float32 audioFile[totalFramesInFile*mChannels];

// fill the array with random numbers (white noise)
for (int i = 0;i < totalFramesInFile*mChannels;i++)
{
    audioFile[i] = ((Float32)(rand() % 100))/100.0;
    audioFile[i] = audioFile[i]*0.2;
    // (yes, I know this noise has a DC offset, bad)
}

// set the AudioBuffer to point to the array containing the noise
outputData.mBuffers[0].mData = &audioFile;


/////////////////////////////////////////////////////////////////////////////
////////////////// Specify The Output Audio File Format /////////////////////
/////////////////////////////////////////////////////////////////////////////


// the client format will describe the output audio file
AudioStreamBasicDescription clientFormat;

// the file type identifier tells the ExtAudioFile API what kind of file we want created
AudioFileTypeID fileType;

// if compress_with_m4a is true then set up for the m4a file format
if (compress_with_m4a)
{
    // the file type identifier tells the ExtAudioFile API what kind of file we want created
    // this creates a m4a file type
    fileType = kAudioFileM4AType;

    // Here we specify the M4A format
    clientFormat.mSampleRate         = 44100.0;
    clientFormat.mFormatID           = kAudioFormatMPEG4AAC;
    clientFormat.mFormatFlags        = kMPEG4Object_AAC_Main;
    clientFormat.mChannelsPerFrame   = mChannels;
    clientFormat.mBytesPerPacket     = 0;
    clientFormat.mBytesPerFrame      = 0;
    clientFormat.mFramesPerPacket    = 1024;
    clientFormat.mBitsPerChannel     = 0;
    clientFormat.mReserved           = 0;
}
else // else encode as PCM
{
    // this creates a wav file type
    fileType = kAudioFileWAVEType;

    // This function automatically generates the audio format according to certain arguments
    FillOutASBDForLPCM(clientFormat,44100.0,mChannels,32,32,true,false,false);
}



/////////////////////////////////////////////////////////////////////////////
///////////////// Specify The Format of Our Audio Samples ///////////////////
/////////////////////////////////////////////////////////////////////////////

// the local format describes the format the samples we will give to the ExtAudioFile API
AudioStreamBasicDescription localFormat;
FillOutASBDForLPCM (localFormat,44100.0,mChannels,32,32,true,false,false);



/////////////////////////////////////////////////////////////////////////////
///////////////// Create the Audio File and Open It /////////////////////////
/////////////////////////////////////////////////////////////////////////////

// create the audio file reference
ExtAudioFileRef audiofileRef;

// create a fileURL from our path
CFURLRef fileURL = CFURLCreateWithFileSystemPath(kCFAllocatorDefault,fPath,kCFURLPOSIXPathStyle,false);

// open the file for writing
err = ExtAudioFileCreateWithURL((CFURLRef)fileURL, fileType, &clientFormat, NULL, kAudioFileFlags_EraseFile, &audiofileRef);

if (err != noErr)
{
    cout << "Problem when creating audio file: " << err << "\n";
}


/////////////////////////////////////////////////////////////////////////////
///// Tell the ExtAudioFile API what format we'll be sending samples in /////
/////////////////////////////////////////////////////////////////////////////

// Tell the ExtAudioFile API what format we'll be sending samples in 
err = ExtAudioFileSetProperty(audiofileRef, kExtAudioFileProperty_ClientDataFormat, sizeof(localFormat), &localFormat);

if (err != noErr)
{
    cout << "Problem setting audio format: " << err << "\n";
}

/////////////////////////////////////////////////////////////////////////////
///////// Write the Contents of the AudioBufferList to the AudioFile ////////
/////////////////////////////////////////////////////////////////////////////

UInt32 rFrames = (UInt32)totalFramesInFile;
// write the data
err = ExtAudioFileWrite(audiofileRef, rFrames, &outputData);

if (err != noErr)
{
    cout << "Problem writing audio file: " << err << "\n";
}


/////////////////////////////////////////////////////////////////////////////
////////////// Close the Audio File and Get Rid Of The Reference ////////////
/////////////////////////////////////////////////////////////////////////////

// close the file
ExtAudioFileDispose(audiofileRef);


NSLog(@"Done!");
}

Don't forget to import the AudioToolbox framework and include the header file:

#import <AudioToolbox/AudioToolbox.h>

The original question, "iphone - Writing a buffer of audio samples to an aac file using ExtAudioFileWrite for iOS", can be found on Stack Overflow: https://stackoverflow.com/questions/12569015/
