javascript - Web Audio API: How to use FFT to convert from time domain and use iFFT to convert the data back

Tags: javascript html audio web-audio-api

I have been trying to convert audio data to frequency-domain data, edit that data, and then reconstruct the audio from it.

I followed instructions to use:

  1. an OfflineAudioContext to get a buffer to run the analysis on,
  2. an AnalyserNode to perform the analysis, and
  3. a PeriodicWave to reconstruct the wave.

The audio rendered by the OfflineAudioContext should match the PeriodicWave's audio, but it clearly doesn't. The instructions say it should, though, so I'm obviously missing something.

(Also, I don't know how to use the PeriodicWave's real and imaginary inputs. As far as I can tell from the instructions, the real values are sines and the imaginary values are cosines, so I set all the imaginary values to 0, since I don't get cosine values from the AnalyserNode's FFT analysis and there doesn't seem to be any other way.)
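For what it's worth, the Web Audio spec defines this the other way around: in createPeriodicWave(real, imag), the real array holds the cosine terms and the imag array holds the sine terms. As a minimal illustration (using the audioCtx from the script below), a pure sine at the oscillator's frequency would be:

var real = new Float32Array([0, 0]); // cosine terms (index 0 is the DC offset)
var imag = new Float32Array([0, 1]); // sine terms (index 1 is the fundamental)
var sineWave = audioCtx.createPeriodicWave(real, imag);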

The simplest and closest result I've gotten so far is the following script (https://jsfiddle.net/k81w04qv/1/):

<!DOCTYPE html>
<html>

<head>
  <meta charset="utf-8">
  <meta http-equiv="X-UA-Compatible" content="IE=edge,chrome=1">
  <meta name="viewport" content="width=device-width">

  <title>Audio Test</title>
  <link rel="stylesheet" href="">
  <!--[if lt IE 9]>
      <script src="//html5shiv.googlecode.com/svn/trunk/html5.js"></script>
    <![endif]-->
</head>

<body>
  <h1>Audio Test</h1>
  <button id='0'>Play original sound</button>
  <button id='1'>Play reconstructed sound</button>
  <pre></pre>
</body>
<script id='script'>
  var pre = document.querySelector('pre');
  var myScript = document.getElementById('script');
  pre.innerHTML = myScript.innerHTML;

  var buttonOriginal = document.getElementById('0');
  var buttonReconstr = document.getElementById('1');
  var audioCtx = new (window.AudioContext || window.webkitAudioContext)();

  var channels = 2;
  var sampleRate = audioCtx.sampleRate;
  var frameCount = sampleRate * 2.0;

  var offlineCtx = new OfflineAudioContext(channels, frameCount, sampleRate);
  var myArrayBuffer = offlineCtx.createBuffer(channels, frameCount, sampleRate);
  var offlineSource = offlineCtx.createBufferSource();

  var analyser = offlineCtx.createAnalyser();

  var pi = Math.PI;
  var songPos = [0, 0];

  for (var channel = 0; channel < channels; channel++) {
    var nowBuffering = myArrayBuffer.getChannelData(channel);
    for (var i = 0; i < frameCount; i++) {
      songPos[channel]++;
      nowBuffering[i] = synth(channel);
    }
  }

  analyser.connect(offlineCtx.destination);
  offlineSource.connect(analyser);
  offlineSource.buffer = myArrayBuffer;
  offlineSource.start();
  offlineCtx.startRendering().then(function (renderedBuffer) {
    console.log('Rendering completed successfully');
    analyser.fftSize = 2048;
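    // frequencyBinCount is fftSize / 2, i.e. 1024 frequency bins here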
    var bufferLength = analyser.frequencyBinCount;
    var dataArray = new Float32Array(bufferLength);

    analyser.getFloatFrequencyData(dataArray);
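    // NOTE: getFloatFrequencyData fills dataArray with magnitudes in
    // decibels (no phase information), covering only the most recent
    // fftSize time-domain samples seen by the analyser.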
    console.log(dataArray);
    // Remove -infinity
    for (var i = 0; i < dataArray.length; i++) {
      if (dataArray[i] === -Infinity) dataArray[i] = -255;
    }
    /// Reconstruct
    // All-zero imaginary parts (a new Float32Array is already zero-filled)
    var imagArray = new Float32Array(bufferLength);

    var wave = audioCtx.createPeriodicWave(dataArray, imagArray, {disableNormalization: true});
    console.log(wave);


    buttonReconstr.onclick = function() {
      var wave = audioCtx.createPeriodicWave(dataArray, imagArray, {disableNormalization: true});
      var osc = audioCtx.createOscillator();
      osc.setPeriodicWave(wave);
      osc.connect(audioCtx.destination);
      osc.start();
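      // NB: stop() takes an absolute time on audioCtx's timeline,
      // not a duration; see the note below the script.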
      osc.stop(2);
      osc.onended = () => {
        console.log('Reconstructed sound finished');
      }
    }

    buttonOriginal.onclick = function() {
      var song = audioCtx.createBufferSource();
      song.buffer = renderedBuffer;
      song.connect(audioCtx.destination);
      song.start();
      song.onended = () => {
        console.log('Original sound finished');
      }
    }


  })/*.catch(function (err) {
    console.log('Rendering failed: ' + err);
    // Note: The promise should reject when startRendering is called a second time on an OfflineAudioContext
  });*/




  function freqSin(freq, time) {
    return Math.sin(freq * (2 * pi) * time);
  }
  function synth(channel) {
    var time = songPos[channel] / sampleRate;
    switch (channel) {
      case 0:
        var freq = 200 + 10 * freqSin(9, time);
        var amp = 0.7;
        var output = amp * Math.sin(freq * (2 * pi) * time);
        break;
      case 1:
        var freq = 900 + 10 * freqSin(10, time);
        var amp = 0.7;
        var output = amp * Math.sin(freq * (2 * pi) * time);
        break;
    }
    //console.log(output)
    return output;
  }


</script>

</html>

An interesting side problem with this script is that you can't play the reconstructed sound after you've played the original sound (even though you can replay the original sound as often as you like). To hear the reconstructed sound you have to play it first, and after that it only plays again on a page refresh. (It can also be played after the original sound if you start it while the original is still playing.)
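A likely explanation for that side problem: OscillatorNode.stop(when) interprets its argument as an absolute time in seconds on the AudioContext's timeline, not as a duration. Once audioCtx.currentTime has passed 2, for example after the original sound has been playing, osc.stop(2) is already in the past and the oscillator stops immediately. A minimal fix, assuming two seconds of playback is the intent:

osc.start();
osc.stop(audioCtx.currentTime + 2); // stop two seconds from now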

Best Answer

To do this, you need both the real and imaginary parts of the FFT of the time-domain signal. The AnalyserNode only gives you the magnitude; you're missing the phase component.

Sorry, this won't work.
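If you do need a full round trip, one workable route is to skip the AnalyserNode entirely and compute the transform yourself from the rendered samples, keeping both the real and imaginary parts so it can be inverted. A minimal sketch, using a naive O(n²) DFT for clarity rather than a real FFT, and assuming the renderedBuffer and audioCtx from the script above:

// Forward DFT: time-domain samples -> real/imag spectrum
function dft(samples) {
  var N = samples.length;
  var re = new Float32Array(N), im = new Float32Array(N);
  for (var k = 0; k < N; k++) {
    for (var n = 0; n < N; n++) {
      var phi = -2 * Math.PI * k * n / N;
      re[k] += samples[n] * Math.cos(phi);
      im[k] += samples[n] * Math.sin(phi);
    }
  }
  return { re: re, im: im };
}

// Inverse DFT: real/imag spectrum -> time-domain samples
function idft(re, im) {
  var N = re.length;
  var out = new Float32Array(N);
  for (var n = 0; n < N; n++) {
    for (var k = 0; k < N; k++) {
      var phi = 2 * Math.PI * k * n / N;
      out[n] += re[k] * Math.cos(phi) - im[k] * Math.sin(phi);
    }
    out[n] /= N;
  }
  return out;
}

// Round trip on a short slice of the rendered audio
var slice = renderedBuffer.getChannelData(0).slice(0, 2048);
var spectrum = dft(slice);
// ...edit spectrum.re / spectrum.im here...
var restored = idft(spectrum.re, spectrum.im);

// Play the restored slice through a plain buffer source
var buf = audioCtx.createBuffer(1, restored.length, audioCtx.sampleRate);
buf.getChannelData(0).set(restored);
var src = audioCtx.createBufferSource();
src.buffer = buf;
src.connect(audioCtx.destination);
src.start();

Because the phase is preserved in im, the inverse transform reproduces the slice exactly (up to floating-point error), which is precisely what the magnitude-only AnalyserNode data cannot do.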

About javascript - Web Audio API: How to use FFT to convert from time domain and use iFFT to convert the data back, the original question can be found on Stack Overflow: https://stackoverflow.com/questions/60156269/
