android - .wav file is empty after receiving data from Wear

Tags: android, audio, wear-os, wav, audiorecord

I am developing an Android Wear application. The app records audio locally from the smartwatch's microphone and sends it to the handheld device. The handheld receives the data and writes it to a .wav file. The file is created, but when I listen to it, it is empty: I only hear silence.

Here is the Wear code:

 public void replyAudioByByte(final byte data[]) {


            final String path = "/start_activity";
            final Byte[] text= new Byte[1024];


            GoogleApiClient client = new GoogleApiClient.Builder(getApplicationContext())
                    .addApi(Wearable.API)
                    .build();
            new Thread(new Runnable() {
                @Override
                public void run() {

                    NodeApi.GetConnectedNodesResult nodes = Wearable.NodeApi.getConnectedNodes(mApiClient).await();

                    for (Node node : nodes.getNodes()) {

                        MessageApi.SendMessageResult result = Wearable.MessageApi.sendMessage(
                                mApiClient, node.getId(),AUDIO_RECORDER, data).await();
                        if (result.getStatus().isSuccess()) {
                            Log.d("sendMessage","Message send!!");
                            for (int j = 0; j < data.length; j++) {
                                Log.v("Mobile", "Message: {" + data[j] + "} sent to: " + node.getDisplayName());
                            }

                        } else {
                            // Log an error
                            Log.v("Mobile", "ERROR: failed to send Message");
                        }
                    }

                }
            }).start();
            client.disconnect();
            Log.d("MOBILE", "send message end");
        }


        public void startRecordingAudio() {


            recorder = findAudioRecord();
            Log.d("recorder:","recorder="+recorder.toString());

            CountDownTimer countDowntimer = new CountDownTimer(8000, 1000) {
                public void onTick(long millisUntilFinished) {
                }

                public void onFinish() {
                    try {
                        //Toast.makeText(getBaseContext(), "Stop recording Automatically ", Toast.LENGTH_LONG).show();
                        Log.d("wear", "stopRecorder=" + System.currentTimeMillis());
                        recorder.stop();
                        Log.d("formato registrazione","recorderformat="+recorder.getAudioFormat()+"-----rate=");
                        Log.d("formato registrazione","recordersamplerate=" +recorder.getSampleRate());
                        isRecording=false;
                        replyAudioByByte(data);
                        for (int j=0; j< data.length;j++){
                            Log.d("watch audio registrato", "data[]="+data[j]);
                        }

                        Log.d("wear", "recorder.stop ok!");
                    } catch (Exception e) {
                        // TODO Auto-generated catch block
                        Log.e("wear", "recorder.stop catch");
                        e.printStackTrace();
                    }


                }
            };


            recorder.startRecording();
            countDowntimer.start();
            Log.d("wear", "startRecorder=" + System.currentTimeMillis());

            isRecording = true;

            recordingThread = new Thread(new Runnable() {
                public void run() {

                    while (isRecording) {
                        recorder.read(data, 0, bufferSize);
                        Log.d("WEAR", "recorder.read=" + recorder.read(data, 0, bufferSize));
                    }
                    recorder.stop();
                    recorder.release();

                    for (int i = 0; i < bufferSize; i++) {

                        Log.d("startrecording", "data=" + data[i]);
                    }


                }
            }, "AudioRecorder Thread");

            recordingThread.start();

            int a= recorder.getSampleRate();

            Log.d("formato registrazione","recorderformat="+recorder.getAudioFormat()+"-----rate="+a);
            Log.d("formato registrazione","recordersamplerate=" +recorder.getSampleRate());

        }


        public AudioRecord findAudioRecord() {
            /** The settings that I must use are not the same on every device, so I try them until one works */
            for (int rate : mSampleRates) {
                for (short audioFormat : audioF) {
                    for (short channelConfig : channelC) {
                        try {
                            //Log.d("Check", "Attempting rate " + rate + "Hz, bits: " + audioFormat + ", channel: " + channelConfig);
                            int bufferSize = AudioRecord.getMinBufferSize(rate, channelConfig, audioFormat);
                            if (bufferSize != AudioRecord.ERROR_BAD_VALUE) {
                                //It checks if it can instantiate the audiorecorder without problems
                                AudioRecord recorder = new AudioRecord(AudioSource.MIC, rate, channelConfig, audioFormat, bufferSize + 2000);
                                Log.d("AudioRecorder data","AudioSource.Default="+ AudioSource.MIC);
                                Log.d("AudioRecorder data","Rate="+ rate);
                                Log.d("AudioRecorder data","Channel.config="+ channelConfig);
                                Log.d("AudioRecorder data","AudioFormat= "+audioFormat);
                                bufferSize=bufferSize+2000;
                                Log.d("AudioRecorder data","buffersize="+ bufferSize );

                                if (recorder.getState() == AudioRecord.STATE_INITIALIZED)
                                    Log.d("audiorec", "rate=" + rate);

                                return recorder;
                            }
                        } catch (Exception e) {
                            Log.e("Check", rate + "Exception, keep trying.", e);
                        }
                    }
                }
            }
            return null;
        }
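
One thing worth double-checking: Wearable.NodeApi and Wearable.MessageApi calls only work on a connected GoogleApiClient, while in replyAudioByByte above the local client is built but never connected and the background thread uses mApiClient instead. The following is only a sketch of a connect-before-send variant (sendAudioBytes is an illustrative name; AUDIO_RECORDER and data come from the code above); note that MessageApi is meant for small payloads, so a long recording may be better suited to the ChannelApi or DataApi.

        // Sketch only: connect the GoogleApiClient off the main thread before calling await().
        // Needs com.google.android.gms.common.ConnectionResult and java.util.concurrent.TimeUnit.
        private void sendAudioBytes(final byte[] data) {
            new Thread(new Runnable() {
                @Override
                public void run() {
                    GoogleApiClient client = new GoogleApiClient.Builder(getApplicationContext())
                            .addApi(Wearable.API)
                            .build();
                    // blockingConnect() must not be called on the main thread.
                    ConnectionResult connection = client.blockingConnect(10, TimeUnit.SECONDS);
                    if (!connection.isSuccess()) {
                        Log.e("wear", "GoogleApiClient connection failed");
                        return;
                    }
                    NodeApi.GetConnectedNodesResult nodes =
                            Wearable.NodeApi.getConnectedNodes(client).await();
                    for (Node node : nodes.getNodes()) {
                        Wearable.MessageApi.sendMessage(client, node.getId(), AUDIO_RECORDER, data).await();
                    }
                    client.disconnect();
                }
            }).start();
        }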

Here is the handheld code:
public Void doInBackground(byte [] dataToWrite) {
        Log.d("doInBackground","entrato");

            byte data[] = new byte[bufferSize];
            String tempfilename = "";
            FileOutputStream os = null;
            //if(allowRecorder){
                tempfilename = getTempFilename();
                Log.d("doInBackground","getTempFilename=" +tempfilename.toString());
                try {
                    os = new FileOutputStream(tempfilename);
                    Log.d("doInBackground","os new ok" );
                } catch (FileNotFoundException e) {
                    e.printStackTrace();
                }


            dbData = new ArrayList<Double>();


            Log.d("doInBackGround", "dateToWrite.length=" + dataToWrite.length);
            for (int j = 0; j < dataToWrite.length; j++) {
                try {
                    os.write(dataToWrite);
                    Log.d("os,write", "dataToWrite");
                } catch (IOException e) {
                    e.printStackTrace();
                }
            }

                if(data[data.length-1]!=0){
                    double Db = 20 * Math.log10(Math.abs((data[data.length-1]/51805.5336) / 0.00002));
                    dbData.add(Db);
                }



                try {
                    os.close();
                    Log.d("os.close", "dataToWrite");
                    copyWaveFile(tempfilename,getFilename());
                    deleteTempFile();

                } catch (IOException e) {
                    e.printStackTrace();
                }

        return null;
    }

 private void copyWaveFile(String inFilename,String outFilename){
        FileInputStream in = null;
        FileOutputStream out = null;
        long totalAudioLen = 0;
        long totalDataLen = 0;
        long longSampleRate = 8000;
        System.out.println("SAMPLE RATE = "+longSampleRate);
        int channels = 12;
        audioFormat = 16;

        long byteRate = audioFormat * longSampleRate * channels/8;

        byte[] data = new byte[bufferSize];

        try {
            in = new FileInputStream(inFilename);
            out = new FileOutputStream(outFilename);
            totalAudioLen = in.getChannel().size();
            totalDataLen = totalAudioLen + 36;
            Log.d("RecorderRead","totalAudioLen=" +totalAudioLen);
            Log.d("RecorderRead","totalDatalen=" +totalDataLen);
            System.out.println("Temp File size: " + totalDataLen);

            Log.d("AudioRecorder data","AudioSource.Default="+ AudioSource.DEFAULT);
            Log.d("AudioRecorder data","Rate="+ longSampleRate);
            Log.d("AudioRecorder data","Channel.config="+ channels);
            Log.d("AudioRecorder data","AudioFormat= "+audioFormat);
            //bufferSize=bufferSize+2000;
            Log.d("AudioRecorder data","buffersize="+ bufferSize );


            if(totalDataLen != 36){
                writeWaveFileHeader(out, totalAudioLen, totalDataLen,
                        longSampleRate, channels, byteRate);
                Log.d("writeWAVEFILE", "chiamato");
                while(in.read(data) != -1){
                    out.write(data);
                }
                System.out.println("Wav File size: " + out.getChannel().size());
            }
            else{
                System.out.println("Non creo il file .wav");
            }

            in.close();
            out.close();
        } catch (FileNotFoundException e) {
            e.printStackTrace();
        } catch (IOException e) {
            e.printStackTrace();
        }
    }



private void writeWaveFileHeader(
            FileOutputStream out, long totalAudioLen,
            long totalDataLen, long longSampleRate, int channels,
            long byteRate) throws IOException {

        byte[] header = new byte[44];

        header[0] = 'R';  // RIFF/WAVE header
        header[1] = 'I';
        header[2] = 'F';
        header[3] = 'F';
        header[4] = (byte) (totalDataLen & 0xff);
        header[5] = (byte) ((totalDataLen >> 8) & 0xff);
        header[6] = (byte) ((totalDataLen >> 16) & 0xff);
        header[7] = (byte) ((totalDataLen >> 24) & 0xff);
        header[8] = 'W';
        header[9] = 'A';
        header[10] = 'V';
        header[11] = 'E';
        header[12] = 'f';  // 'fmt ' chunk
        header[13] = 'm';
        header[14] = 't';
        header[15] = ' ';
        header[16] = 16;  // 4 bytes: size of 'fmt ' chunk
        header[17] = 0;
        header[18] = 0;
        header[19] = 0;
        header[20] = 1;  // format = 1
        header[21] = 0;
        header[22] = (byte) channels;
        header[23] = 0;
        header[24] = (byte) (longSampleRate & 0xff);
        header[25] = (byte) ((longSampleRate >> 8) & 0xff);
        header[26] = (byte) ((longSampleRate >> 16) & 0xff);
        header[27] = (byte) ((longSampleRate >> 24) & 0xff);
        header[28] = (byte) (byteRate & 0xff);
        header[29] = (byte) ((byteRate >> 8) & 0xff);
        header[30] = (byte) ((byteRate >> 16) & 0xff);
        header[31] = (byte) ((byteRate >> 24) & 0xff);
        header[32] = (byte) (2 * 16 / 8);  // block align
        header[33] = 0;
        header[34] = (byte) audioFormat;  // bits per sample
        header[35] = 0;
        header[36] = 'd';
        header[37] = 'a';
        header[38] = 't';
        header[39] = 'a';
        header[40] = (byte) (totalAudioLen & 0xff);
        header[41] = (byte) ((totalAudioLen >> 8) & 0xff);
        header[42] = (byte) ((totalAudioLen >> 16) & 0xff);
        header[43] = (byte) ((totalAudioLen >> 24) & 0xff);

        out.write(header, 0, 44);
    }

In the Wear manifest I have:
<uses-permission android:name="android.permission.RECORD_AUDIO" />
    <uses-permission android:name="android.permission.MODIFY_AUDIO_SETTINGS" />
    <uses-permission android:name="android.permission.WRITE_EXTERNAL_STORAGE" />
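
Note that from Android 6.0 (API 23) onward RECORD_AUDIO is a dangerous permission, so declaring it in the manifest is not enough on its own; it also has to be granted at runtime. A minimal check (a sketch assuming the support/compat library; the request code 1 is arbitrary):

    // Sketch: runtime check for RECORD_AUDIO on API 23+.
    if (ContextCompat.checkSelfPermission(this, Manifest.permission.RECORD_AUDIO)
            != PackageManager.PERMISSION_GRANTED) {
        ActivityCompat.requestPermissions(this,
                new String[]{Manifest.permission.RECORD_AUDIO}, 1);
    }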

In the handheld manifest I have:
<uses-permission android:name="android.permission.WRITE_EXTERNAL_STORAGE" />

When I run the application, I get this error in the Wear app's log:
AudioRecord-JNI: Error -4 during AudioRecord native read
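
AudioRecord.read() reports failures through its return value rather than by throwing, so a read loop that checks the recorder state and the return value can help narrow down where the native error comes from. This is only a sketch that reuses the recorder, data, bufferSize and isRecording fields from the Wear code above:

    // Sketch: read loop that checks the recorder state and the read() result.
    if (recorder.getState() != AudioRecord.STATE_INITIALIZED) {
        Log.e("wear", "AudioRecord not initialized; check rate/channel/format combination");
        return;
    }
    recorder.startRecording();
    while (isRecording) {
        int read = recorder.read(data, 0, bufferSize);   // read once and keep the result
        if (read < 0) {
            Log.e("wear", "AudioRecord.read failed with code " + read);
            break;
        }
    }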

What should I do to fix it?
Can anyone help me? What is wrong? Any kind of help, code or a tutorial, is appreciated.
Thanks in advance

Best Answer

You need to set the audio encoder used for the recording:
void setAudioEncoder (int audio_encoder)
If this method is not called, the output file will not contain an audio track. Call it after setOutputFormat() but before prepare().

Try following the steps listed below (a minimal sketch of them follows the list):

  • Create a new instance of android.media.MediaRecorder.
  • Set the audio source using MediaRecorder.setAudioSource(). You will probably want to use MediaRecorder.AudioSource.MIC.
  • Set the output file format using MediaRecorder.setOutputFormat().
  • Set the output file name using MediaRecorder.setOutputFile().
  • Set the audio encoder using MediaRecorder.setAudioEncoder().
  • Call MediaRecorder.prepare() on the MediaRecorder instance.
  • To start audio capture, call MediaRecorder.start().
  • To stop audio capture, call MediaRecorder.stop().
  • When you are done with the MediaRecorder instance, call MediaRecorder.release() on it. Calling MediaRecorder.release() right away is always recommended so that the resources are freed immediately.

  • Sample code showing how to record audio and play back the recording is available here: https://developer.android.com/guide/topics/media/audio-capture.html#example
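
A minimal sketch of those steps (startMediaRecorder and outputPath are placeholder names; note that MediaRecorder produces an encoded file such as 3GP/AMR rather than a raw PCM .wav):

    // Sketch of the steps above; outputPath is a placeholder chosen by the caller.
    private MediaRecorder startMediaRecorder(String outputPath) throws IOException {
        MediaRecorder recorder = new MediaRecorder();
        recorder.setAudioSource(MediaRecorder.AudioSource.MIC);
        recorder.setOutputFormat(MediaRecorder.OutputFormat.THREE_GPP);
        recorder.setOutputFile(outputPath);
        // Without setAudioEncoder() the output file contains no audio track;
        // call it after setOutputFormat() and before prepare().
        recorder.setAudioEncoder(MediaRecorder.AudioEncoder.AMR_NB);
        recorder.prepare();
        recorder.start();
        return recorder;
    }

    // Later, to finish:  recorder.stop();  recorder.release();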

Regarding android - .wav file is empty after receiving data from Wear, a similar question can be found on Stack Overflow: https://stackoverflow.com/questions/38433769/
