android - Using an FFT transform with AudioRecord

Tags: android audio fft audiorecord

I am following this example to compute an FFT: http://som-itsolutions.blogspot.com.ee/2012/01/fft-based-simple-spectrum-analyzer.html . I got it running, but the results I get are very strange. If I use the transformer (from the FFT class), all I get is zeros.

@Override
protected void onCreate(Bundle savedInstanceState) {
    super.onCreate(savedInstanceState);
    setContentView(R.layout.activity_main);
    text = (TextView) findViewById(R.id.kaka);

    int bufferSize = AudioRecord.getMinBufferSize(frequency,
            channelConfiguration, audioEncoding);
    audioRecord = new AudioRecord(
            MediaRecorder.AudioSource.DEFAULT, frequency,
            channelConfiguration, audioEncoding, bufferSize);

    buffer = new short[blockSize];
    toTransform = new double[blockSize];
    try {
        audioRecord.startRecording();
    } catch (IllegalStateException e) {
        Log.e("Recording failed", e.toString());

    }
    transformer = new RealDoubleFFT(blockSize);

    final Runnable r = new Runnable() {

        public void run() {

            Log.d("Amplify","HERE");
            Toast.makeText(getBaseContext(), "Working!", Toast.LENGTH_LONG).show();
            runOnUiThread(new Runnable() {
                @Override
                public void run() {
                    int bufferReadResult = audioRecord.read(buffer, 0, blockSize);
                    for (int i = 0; i < blockSize && i < bufferReadResult; i++) {
                        toTransform[i] = (double) buffer[i] / 32768.0; // signed 16 bit
                    }
                    transformer.ft(toTransform);
                    text.setText("result:" + toTransform[10]);
                    handler.postDelayed(this, 150); // amount of delay between every cycle of volume level detection
                }
            });

        }
    };
    handler.postDelayed(r, 80);
}

I also saw some code which said you have to implement the code from the link I gave above and then add this method to do the calculation:
public static int calculate(int sampleRate, short [] audioData){
    int numSamples = audioData.length;
    int numCrossing = 0;
    for (int p = 0; p < numSamples-1; p++)
    {
        if ((audioData[p] > 0 && audioData[p + 1] <= 0) ||
                (audioData[p] < 0 && audioData[p + 1] >= 0))
        {
            numCrossing++;
        }
    }

    float numSecondsRecorded = (float)numSamples/(float)sampleRate;
    float numCycles = numCrossing / 2f; // use 2f to avoid integer division
    float frequency = numCycles/numSecondsRecorded;

    return (int)frequency;
}
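To see what kind of input `calculate` expects, here is a small self-contained sketch (plain Java, no Android) that repeats the method above and feeds it one second of a synthetic 440 Hz sine wave; the class name and test tone are my own assumptions, not part of the original answer:

```java
public class ZeroCrossingDemo {
    // Same zero-crossing estimator as quoted above
    public static int calculate(int sampleRate, short[] audioData) {
        int numSamples = audioData.length;
        int numCrossing = 0;
        for (int p = 0; p < numSamples - 1; p++) {
            if ((audioData[p] > 0 && audioData[p + 1] <= 0) ||
                (audioData[p] < 0 && audioData[p + 1] >= 0)) {
                numCrossing++;
            }
        }
        float numSecondsRecorded = (float) numSamples / (float) sampleRate;
        float numCycles = numCrossing / 2f;
        float frequency = numCycles / numSecondsRecorded;
        return (int) frequency;
    }

    public static void main(String[] args) {
        int sampleRate = 8000;
        short[] tone = new short[sampleRate]; // one second of audio
        for (int i = 0; i < tone.length; i++) {
            // 440 Hz sine scaled into the 16-bit sample range
            tone[i] = (short) (10000 * Math.sin(2 * Math.PI * 440 * i / sampleRate));
        }
        System.out.println(ZeroCrossingDemo.calculate(sampleRate, tone)); // roughly 440
    }
}
```

Feeding it the raw `buffer` from `audioRecord.read` works the same way, as long as the sample rate you pass matches the one the `AudioRecord` was created with. Note that zero-crossing counting only gives a usable estimate for a clean, single-pitch signal.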

The calculate method takes two parameters: the first is the sample rate and the other is a short[] of audio data. I tried passing in my "buffer" as that argument, but the results I get are far from what I expected.

Is anyone familiar with this example, or can someone explain to me how to get the data from audioRecord.read(...)? I understand the part where you set up the AudioRecord to record input, but what exactly happens when you .read the data is what I don't understand.

Including all of the FFT transform classes would be too long, but here is the .ft used in this example:
  public void ft(double x[]){
     if(x.length != ndim)
          throw new IllegalArgumentException("The length of data can not match that of the wavetable");
     rfftf(ndim, x, wavetable);
  }

I know this must be confusing, so I will try to summarize. My questions are:

What output does audioRecord.read(..) provide, and how do I use it?

If I am going to use the calculate method, what is the expected input there?

The FFT transform gives me an array of length 2048 in which all the values are 0.00. What should I do with it?

Maybe my approach is completely wrong and I don't need an FFT to get the frequency from the user's input. In the end I don't need to draw a graph; I just need to move an image based on frequency changes (higher/lower).

Best Answer

A few things come to mind...

Is recording allowed in your manifest file?
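For reference, recording requires the RECORD_AUDIO permission declared in AndroidManifest.xml (a standard Android declaration, sketched here rather than taken from the original answer):

```xml
<uses-permission android:name="android.permission.RECORD_AUDIO" />
```

On Android 6.0 and later this is a "dangerous" permission, so it must also be requested from the user at runtime; otherwise `audioRecord.read` silently delivers all zeros, which matches the symptom in the question.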

AudioRecord.read seems to run only once? It should be wrapped in a while() statement. If it captures 256 bytes at a time, it probably needs to run over and over again.

It would be easier to put the whole block of code into an AsyncTask and publish progress back to the UI. Take a look at my example below. The only difference is that I use bytes instead of shorts.

@Override
protected Boolean doInBackground(File... files) {

    try {
        waveOut = new FileOutputStream(files[0]);

        int minBufferSize = AudioRecord.getMinBufferSize(SAMPLE_RATE, CHANNEL_MASK, ENCODING);
        audioRecord = new AudioRecord(AUDIO_SOURCE, SAMPLE_RATE, CHANNEL_MASK, ENCODING, minBufferSize);

        writeWavHeader(waveOut, CHANNEL_MASK, SAMPLE_RATE, ENCODING);

        int bufferReadData;
        byte[] buffer1 = new byte[blockSize * 2];
        long total = 0;

        try {
            startTime = SystemClock.elapsedRealtime();
            audioRecord.startRecording();
        } catch (IllegalStateException e) {
            Log.e(TAG, " Records doInBackground: " + e.toString());
        }

        while (running) {
            // We request a 256-byte buffer; Android delivers 16-bit samples,
            // so each byte holds half of a 16-bit sample.
            bufferReadData = audioRecord.read(buffer1, 0, blockSize);

            createFFT(bufferReadData, buffer1);

            createWavFile(total, bufferReadData, buffer1);
        }

    } catch (IOException e) {
        Log.e(TAG, "Records doInBackground: " + e.toString(), e);
        stoprecording();
    } finally {
        Log.i(TAG, "Records doInBackground: called from 2nd");
        endTime = SystemClock.elapsedRealtime();
    }

    try {
        updateWavHeader(files[0]);
    } catch (IOException e) {
        Log.e(TAG, "doInBackground: ", e);
    }

    return false;
}
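The answer calls writeWavHeader and updateWavHeader without showing them. For completeness, here is a plain-Java sketch of the standard 44-byte RIFF/PCM header that such helpers would write. The byte layout follows the WAV format specification; the class and method names are my own guess, not the answerer's actual code:

```java
import java.nio.ByteBuffer;
import java.nio.ByteOrder;

public class WavHeader {
    // Minimal 44-byte PCM WAV header. dataSize may be 0 at the start of
    // recording and patched in later, which is what updateWavHeader implies.
    public static byte[] build(int sampleRate, short channels, short bitsPerSample, int dataSize) {
        int byteRate = sampleRate * channels * bitsPerSample / 8;
        short blockAlign = (short) (channels * bitsPerSample / 8);
        ByteBuffer b = ByteBuffer.allocate(44).order(ByteOrder.LITTLE_ENDIAN);
        b.put(new byte[]{'R', 'I', 'F', 'F'});
        b.putInt(36 + dataSize);            // total file size minus 8
        b.put(new byte[]{'W', 'A', 'V', 'E'});
        b.put(new byte[]{'f', 'm', 't', ' '});
        b.putInt(16);                       // fmt chunk size for PCM
        b.putShort((short) 1);              // audio format: 1 = PCM
        b.putShort(channels);
        b.putInt(sampleRate);
        b.putInt(byteRate);
        b.putShort(blockAlign);
        b.putShort(bitsPerSample);
        b.put(new byte[]{'d', 'a', 't', 'a'});
        b.putInt(dataSize);
        return b.array();
    }
}
```

Writing this array to the FileOutputStream before the first read loop, then rewriting the two size fields once recording stops, is all the two helpers need to do.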





protected void createFFT(int bufferReadData, byte[] buffer1) {
    double[] toTransform = new double[blockSize / 2];

    // Alternative: ByteBuffer.wrap(buffer1).order(ByteOrder.LITTLE_ENDIAN)
    //              .asShortBuffer().get(shorts);

    // Combine each little-endian byte pair into one 16-bit sample.
    // The "& 0xFF" masks the low byte so that sign extension does not
    // corrupt the upper half of the combined short.
    short newBuff, buffLil, buffBig, count = 0;

    for (int i = 0; i < blockSize / 2 && i < bufferReadData / 2; i++) {
        buffLil = buffer1[count];
        buffBig = buffer1[count + 1];
        newBuff = (short) (buffBig << 8 | buffLil & 0xFF);
        count += 2;
        // Divide by 32768 to normalize the signed 16-bit sample to a
        // double between -1.0 and 1.0 for input into the FFT.
        toTransform[i] = (double) newBuff / 32768.0;
    }

    transformer.ft(toTransform);

    Log.i(TAG, "createFFT: " + toTransform.length);

    publishProgress(toTransform);
}
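The manual byte-pairing above can also be done with java.nio. Here is a plain-Java sketch (the class and method names are my own, for illustration) that converts little-endian 16-bit PCM bytes into the same normalized doubles that createFFT builds:

```java
import java.nio.ByteBuffer;
import java.nio.ByteOrder;

public class PcmConvert {
    // Convert little-endian 16-bit PCM bytes into doubles in [-1.0, 1.0).
    // validBytes is the return value of audioRecord.read(), i.e. how many
    // bytes of the buffer actually contain fresh data.
    public static double[] bytesToDoubles(byte[] pcm, int validBytes) {
        short[] samples = new short[validBytes / 2];
        ByteBuffer.wrap(pcm, 0, validBytes)
                  .order(ByteOrder.LITTLE_ENDIAN)
                  .asShortBuffer()
                  .get(samples);
        double[] out = new double[samples.length];
        for (int i = 0; i < samples.length; i++) {
            out[i] = samples[i] / 32768.0; // normalize signed 16-bit range
        }
        return out;
    }
}
```

This does exactly what the shift-and-mask loop does, but lets the ByteBuffer handle endianness and sign extension, which removes the "0xFF added to fix a left padding problem??" uncertainty from the original comment.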



@Override
protected void onProgressUpdate(double[]... toTransform) {
    uiChartBuffer = toTransform;
}
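Coming back to the asker's real goal (moving an image when the pitch goes up or down): once RealDoubleFFT.ft() has run, the dominant frequency can be read off by finding the strongest bin. FFTPACK's real transform packs the in-place output as [r0, r1, i1, r2, i2, ...]; assuming that layout (the helper name and packing assumption are mine, not from the answer), a sketch:

```java
public class PeakFrequency {
    // toTransform is the in-place output of a RealDoubleFFT.ft() call,
    // assumed to use FFTPACK packing: [r0, r1, i1, r2, i2, ...].
    // Returns the frequency in Hz of the loudest bin.
    public static double dominantFrequency(double[] toTransform, int sampleRate) {
        int n = toTransform.length;
        double bestMag = 0;
        int bestBin = 0;
        for (int k = 1; k < n / 2; k++) {
            double re = toTransform[2 * k - 1];
            double im = toTransform[2 * k];
            double mag = Math.sqrt(re * re + im * im);
            if (mag > bestMag) {
                bestMag = mag;
                bestBin = k;
            }
        }
        // Bin k corresponds to k * sampleRate / n Hz.
        return (double) bestBin * sampleRate / n;
    }
}
```

Calling this inside onProgressUpdate and comparing the result against the previous frame's value would give the higher/lower signal needed to move the image, without drawing any chart.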

Regarding android - using an FFT transform with AudioRecord, we found a similar question on Stack Overflow: https://stackoverflow.com/questions/49108220/
