java - Java microphone TargetDataLine sensitivity / maximum input amplitude

Tags: java audio video-capture javasound xuggler

I am writing a plain Java application (JDK 11) that should record both audio and video.
After extensive trial and error with various libraries, I managed to get both running simultaneously using the deprecated Xuggler library.
However, recording audio at decent quality is still a problem.
I manage to encode the recording as short[] samples, but for some reason the TargetDataLine cuts them off at an amplitude of 127. I can boost the encoded volume by multiplying by a factor, but any recording detail above 127 is lost.
That is, I can amplify the recording after the fact, but anything that was clipped (shouting or even normal speech) is already gone.
Unfortunately I cannot control FloatControl.Type.MASTER_GAIN in Java, because the AudioSystem does not seem to support any control type (being able to would probably have solved the problem).
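(Side note: since the hardware MASTER_GAIN control is unavailable, gain can also be applied in software once the samples are decoded correctly. This is a minimal sketch, not part of the original code; the clamp keeps amplified samples inside the signed 16-bit range instead of letting them wrap around, which would sound like harsh distortion.)

```java
public class SoftwareGain {
    // Multiply each 16-bit sample by a gain factor, clamping to avoid overflow.
    public static short[] applyGain(short[] samples, double gain) {
        short[] out = new short[samples.length];
        for (int i = 0; i < samples.length; i++) {
            int v = (int) Math.round(samples[i] * gain);
            // clamp to the signed 16-bit range [-32768, 32767]
            if (v > Short.MAX_VALUE) v = Short.MAX_VALUE;
            if (v < Short.MIN_VALUE) v = Short.MIN_VALUE;
            out[i] = (short) v;
        }
        return out;
    }

    public static void main(String[] args) {
        short[] in = { 1000, -1000, 30000 };
        short[] out = applyGain(in, 2.0);
        // 30000 * 2 exceeds Short.MAX_VALUE, so it clamps to 32767
        System.out.println(out[0] + " " + out[1] + " " + out[2]);
    }
}
```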

Question:
How do I capture the full sound/sample amplitude from the TargetDataLine, without it being cut off at 127?

My research pointed to the following useful threads:
How to get Audio for encoding using Xuggler
How to set volume of a SourceDataLine in Java
Java algorithm for normalizing audio
Xuggler encoding and muxing

Here is my code:

  private static void startRecordingVideo() {
      
    // total duration of the media
    long duration = DEFAULT_TIME_UNIT.convert(1, SECONDS);
    
    // video parameters
    //Dimension size = WebcamResolution.QVGA.getSize();
    //webcam.setViewSize(size);

    BufferedImage img = webCamImageStream.get(); 
    
    final int videoStreamIndex = 0;
    final int videoStreamId = 0;
    final long frameRate = DEFAULT_TIME_UNIT.convert(2, MILLISECONDS);
    
    // audio parameters
    TargetDataLine mic = null;
    final int audioStreamIndex = 1;
    final int audioStreamId = 0;
    final int channelCount = 2; //1 mono  2Stereo
    final int sampleRate = 44100; // Hz
    final int sampleSizeInBits = 16; // bit in sample
    final int frameSizeInByte = 4;  
    final int sampleCount = 588; //CD standard (588 lines per frame) 

    // the clock time of the next frame
    long nextFrameTime = 0;

    // the total number of audio samples
    long totalSampleCount = 0;

    // create a media writer and specify the output file

    final IMediaWriter writer = ToolFactory.makeWriter("capture.mp4");

    // add the video stream
    writer.addVideoStream(videoStreamIndex, videoStreamId,
            img.getWidth(), img.getHeight());
    
    // add the audio stream
    writer.addAudioStream(audioStreamIndex, audioStreamId,
        channelCount, sampleRate);


    //define audio format
    AudioFormat audioFormat = new AudioFormat(
            AudioFormat.Encoding.PCM_SIGNED, 
            sampleRate, 
            sampleSizeInBits, 
            channelCount,
            frameSizeInByte, 
            sampleRate, 
            true);
    DataLine.Info info = new DataLine.Info(TargetDataLine.class, audioFormat);
    AudioInputStream audioInputStream = null; 
   
        try {       
            mic = (TargetDataLine) AudioSystem.getLine(info);
            //mic.open();
            mic.open(audioFormat, mic.getBufferSize());
             // Adjust the volume on the output line.
             if (mic.isControlSupported(FloatControl.Type.MASTER_GAIN)) {
                FloatControl gain = (FloatControl) mic.getControl(FloatControl.Type.MASTER_GAIN);
                gain.setValue(-10.0f); // attempt to Reduce volume by 10 dB.
             }else {
                 System.out.println("Not supported in my case :'( ");
             }
            
            mic.start();
            audioInputStream = new AudioInputStream(mic);
    
            
        } catch (Exception e) {
            e.printStackTrace();
        }
    // loop through clock time, which starts at zero and increases based
    // on the total number of samples created thus far
    long start = System.currentTimeMillis(); 
    //duration = frameRate; 
    recordingVideo = true; 
    updateUI("Recording");
    System.out.println("Audio Buffer size : " + mic.getBufferSize());
    coverImage = webCamImageStream.get();
    int frameCount = 0;

//IGNORE the complexity of this for loop*******************************************************************
    for (long clock = 0; clock < duration;  clock = IAudioSamples.samplesToDefaultPts(totalSampleCount, sampleRate)){
      // while the clock time exceeds the time of the next video frame,
      // get and encode the next video frame
      while (frameCount * clock >= nextFrameTime) {
                BufferedImage image = webCamImageStream.get();
                IConverter converter = ConverterFactory.createConverter(image, IPixelFormat.Type.YUV420P);
                IVideoPicture frame = converter.toPicture(image, (System.currentTimeMillis() - start) * 1000);
                writer.encodeVideo(videoStreamIndex, frame);
        nextFrameTime += frameRate;
      }
      
      
//##################################### Audio Recording section #######################################
      

      int factor = 2; 
      byte[] audioBytes = new byte[mic.getBufferSize() ]; // best size?
      int numBytesRead = 0;
        try {
            numBytesRead =  audioInputStream.read(audioBytes, 0, audioBytes.length);
            //error is probably here as it is only reading up to 127
        } catch (IOException e) {
            numBytesRead =  mic.read(audioBytes, 0, audioBytes.length);
            e.printStackTrace();
        }
     
        mic.flush();
          // max for normalizing
          short rawMax = Short.MIN_VALUE;
          for (int i = 0; i < numBytesRead; ++i) {
              short value = audioBytes[i];
              rawMax = (short) Math.max(rawMax, value);
          }

//127 is max input amplitude (microphone could go higher but its cut off) ###############################

        //values at and over 127 are static noises
        System.out.println("MAX = " +rawMax );
      
      // convert to signed shorts representing samples
        int volumeGainfactor = 2;
      int numSamplesRead = numBytesRead / factor;
      short[] audioSamples = new short[ numSamplesRead ];
      if (audioFormat.isBigEndian()) {
          for (int i = 0; i < numSamplesRead; i++) {
              audioSamples[i] = (short)((audioBytes[factor*i] << 8) | audioBytes[factor*i + 1]);
          }
      }
      else {
          for (int i = 0; i < numSamplesRead; i++) {
              audioSamples[i] = (short)(((audioBytes[factor*i + 1] ) << 8) |(audioBytes[factor*i])) ;
              
                    //normalization -> does not help (issue lies in Max read value) 
                    //short targetMax = 127; //maximum volume 
                    //Normalization method
                    /*
                        double maxReduce = 1 - targetMax/(double)rawMax;
                        int abs = Math.abs(audioSamples[i]);
                        double factor1 = (maxReduce * abs/(double)rawMax);
                        audioSamples[i] = (short) Math.round((1 - factor1) * audioSamples[i]); 
                    */
              //https://stackoverflow.com/questions/12469361/java-algorithm-for-normalizing-audio
          }
      }

//##################################### END Audio Recording Section #####################################  
    

      writer.encodeAudio(audioStreamIndex, audioSamples, clock, 
        DEFAULT_TIME_UNIT);
      //extend duration if video is not terminated 
      if(!recordingVideo) {break;}
      else {duration += 22675;} //should never catch up to duration 
      // 22675 = IAudioSamples.samplesToDefaultPts(588, sampleRate)
      //totalSampleCount += sampleCount;
      totalSampleCount = sampleCount; 
      frameCount++; 
    }
    
    
    // manually close the writer
    writer.close();
    mic.close();
    }
Example debug output:
 MAX = 48 (recorded sound)

 MAX = 127 (static noise)
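(The 127 ceiling in that debug output is itself telling: the rawMax loop above compares individual bytes, and a signed Java byte can never exceed 127. A minimal sketch with hypothetical data, showing how the byte-wise peak hides the true 16-bit peak that a proper little-endian decode reveals:)

```java
public class PeakDemo {
    // Peak taken over individual signed bytes -- can never exceed 127,
    // because Java's byte is a signed 8-bit type.
    public static int bytePeak(byte[] pcm) {
        int max = Integer.MIN_VALUE;
        for (byte b : pcm) max = Math.max(max, b);
        return max;
    }

    // Peak taken over byte pairs decoded as little-endian 16-bit samples:
    // low byte masked with 0xff, high byte shifted into place.
    public static int samplePeak(byte[] pcm) {
        int max = Integer.MIN_VALUE;
        for (int i = 0; i + 1 < pcm.length; i += 2) {
            short s = (short) ((pcm[i] & 0xff) | (pcm[i + 1] << 8));
            max = Math.max(max, s);
        }
        return max;
    }

    public static void main(String[] args) {
        // one sample with value 0x7D00 = 32000, stored little-endian
        byte[] pcm = { (byte) 0x00, (byte) 0x7D };
        System.out.println(bytePeak(pcm));   // byte view tops out at 125
        System.out.println(samplePeak(pcm)); // decoded sample is 32000
    }
}
```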

Best Answer

OK, it seems I managed to solve this through trial and error, with the help of this post:
reading wav/wave file into short[] array
The problem was in the conversion from byte[] (the source) to short[].

  • The AudioFormat must be constructed with bigEndian = false:
  • AudioFormat audioFormat = new AudioFormat(
                AudioFormat.Encoding.PCM_SIGNED, 
                sampleRate, 
                sampleSizeInBits, 
                channelCount,
                frameSizeInByte, 
                sampleRate, 
                false);
    
  • The conversion from bytes to shorts then needs to look like this:
  •       int factor = 2; 
          byte[] audioBytes = new byte[mic.getBufferSize() ];
          int numBytesRead = 0;
          numBytesRead =  audioInputStream.read(audioBytes, 0, audioBytes.length);
    
          mic.flush();
          
          // convert to signed shorts representing samples
          int volumeGainfactor = 2;
          int numSamplesRead = numBytesRead / factor;
          short[] audioSamples = new short[ numSamplesRead ];
          if (audioFormat.isBigEndian()) {
              for (int i = 0; i < numSamplesRead; i++) {
                  // big-endian: high byte first; the low byte must be masked
                  // with 0xff so it is not sign-extended during the OR
                  audioSamples[i] = (short)((audioBytes[factor*i] << 8) | (audioBytes[factor*i + 1] & 0xff));
              }
          }
          else {
              for (int i = 0; i < numSamplesRead; i++) {
                  // ________________________________ ISSUE WAS HERE ________________________________
                  audioSamples[i] = ( (short)( ( audioBytes[i*2] & 0xff )|( audioBytes[i*2 + 1] << 8 )) );
                  // ________________________________________________________________________________
              }
          }
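(As an alternative to the manual shifting and masking above, `java.nio.ByteBuffer` can do the same little-endian decode and handles sign extension for you. A sketch, not from the original answer:)

```java
import java.nio.ByteBuffer;
import java.nio.ByteOrder;

public class PcmDecode {
    // Decode numBytes of 16-bit little-endian PCM into signed shorts.
    // ByteBuffer takes care of byte order and sign extension.
    public static short[] bytesToShorts(byte[] pcm, int numBytes) {
        short[] samples = new short[numBytes / 2];
        ByteBuffer.wrap(pcm, 0, numBytes)
                  .order(ByteOrder.LITTLE_ENDIAN)
                  .asShortBuffer()
                  .get(samples);
        return samples;
    }

    public static void main(String[] args) {
        // -3000 is 0xF448 as a 16-bit value; little-endian bytes: 0x48, 0xF4
        byte[] pcm = { (byte) 0x48, (byte) 0xF4 };
        System.out.println(bytesToShorts(pcm, 2)[0]); // -3000
    }
}
```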
    

    About java - Java microphone TargetDataLine sensitivity / maximum input amplitude, we found a similar question on Stack Overflow: https://stackoverflow.com/questions/64062759/
