I am using the Google Speech API together with NAudio (the NAudio WaveInEvent class) for speech-to-text, following the C# sample under "Performing streaming speech recognition on an audio stream" here: https://cloud.google.com/speech-to-text/docs/streaming-recognize?hl=en
If the speaker is close to the microphone, everything works quickly. But if the speaker is far from the microphone, his first 3-5 words are not recognized; after that, the remaining words are recognized just fine. (So it cannot be a general problem with distance.) It looks more like an adaptation to the distance, or perhaps NAudio is not recording at 100% input volume.
Any ideas about this problem?
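One quick check for the volume theory: a minimal sketch (not from the original post; assumes Windows and NAudio's CoreAudioApi wrapper, class name MicVolumeCheck is made up) that reads the default capture device's input volume.

using System;
using NAudio.CoreAudioApi;

class MicVolumeCheck
{
    static void Main()
    {
        var enumerator = new MMDeviceEnumerator();
        // Default capture (microphone) endpoint.
        var mic = enumerator.GetDefaultAudioEndpoint(
            DataFlow.Capture, Role.Communications);
        // MasterVolumeLevelScalar ranges from 0.0 (muted) to 1.0 (100% input volume).
        Console.WriteLine("{0}: {1:P0}",
            mic.FriendlyName,
            mic.AudioEndpointVolume.MasterVolumeLevelScalar);
    }
}

If this prints well below 100%, the Windows input level (not NAudio itself) is attenuating the signal.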
Edit: Here is the requested code:
static async Task<object> StreamingMicRecognizeAsync(int seconds)
{
    if (NAudio.Wave.WaveIn.DeviceCount < 1)
    {
        Console.WriteLine("No microphone!");
        return -1;
    }
    var speech = SpeechClient.Create();
    var streamingCall = speech.StreamingRecognize();
    // Write the initial request with the config.
    await streamingCall.WriteAsync(
        new StreamingRecognizeRequest()
        {
            StreamingConfig = new StreamingRecognitionConfig()
            {
                Config = new RecognitionConfig()
                {
                    Encoding =
                        RecognitionConfig.Types.AudioEncoding.Linear16,
                    SampleRateHertz = 16000,
                    LanguageCode = "en",
                },
                InterimResults = true,
            }
        });
    // Print responses as they arrive.
    Task printResponses = Task.Run(async () =>
    {
        while (await streamingCall.ResponseStream.MoveNext(
            default(CancellationToken)))
        {
            foreach (var result in streamingCall.ResponseStream
                .Current.Results)
            {
                foreach (var alternative in result.Alternatives)
                {
                    Console.WriteLine(alternative.Transcript);
                }
            }
        }
    });
    // Read from the microphone and stream to API.
    object writeLock = new object();
    bool writeMore = true;
    var waveIn = new NAudio.Wave.WaveInEvent();
    waveIn.DeviceNumber = 0;
    waveIn.WaveFormat = new NAudio.Wave.WaveFormat(16000, 1);
    waveIn.DataAvailable +=
        (object sender, NAudio.Wave.WaveInEventArgs args) =>
        {
            lock (writeLock)
            {
                if (!writeMore) return;
                streamingCall.WriteAsync(
                    new StreamingRecognizeRequest()
                    {
                        AudioContent = Google.Protobuf.ByteString
                            .CopyFrom(args.Buffer, 0, args.BytesRecorded)
                    }).Wait();
            }
        };
    waveIn.StartRecording();
    Console.WriteLine("Speak now.");
    await Task.Delay(TimeSpan.FromSeconds(seconds));
    // Stop recording and shut down.
    waveIn.StopRecording();
    lock (writeLock) writeMore = false;
    await streamingCall.WriteCompleteAsync();
    await printResponses;
    return 0;
}
Source: https://cloud.google.com/speech-to-text/docs/streaming-recognize?hl=en
Best Answer
Yes, this is how these engines work. The recognizer adapts to the incoming sound level; if the level is too low, it simply misses the first words and only starts recognizing once it has adapted. Accuracy will also be lower than expected.
To work around this, use a more advanced microphone array that tracks the audio source, such as ReSpeaker or Matrix, and possibly a custom speech recognition system that is more robust to rapid audio-level changes. That would also be cheaper than the Google API.
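Not a replacement for better hardware, but to illustrate the level problem: one could boost the 16-bit PCM samples in software before streaming them to the API. A hypothetical sketch (PcmGain, AmplifyPcm16, and GainFactor are made-up names; a fixed gain also amplifies noise and can clip):

static class PcmGain
{
    // Assumed tuning constant, not from the original post;
    // too high a value clips and distorts.
    public const float GainFactor = 4.0f;

    public static void AmplifyPcm16(byte[] buffer, int bytesRecorded, float gain)
    {
        for (int i = 0; i + 1 < bytesRecorded; i += 2)
        {
            // Reassemble the little-endian 16-bit sample.
            short sample = (short)(buffer[i] | (buffer[i + 1] << 8));
            // Scale and clamp to avoid integer wrap-around.
            int amplified = (int)(sample * gain);
            if (amplified > short.MaxValue) amplified = short.MaxValue;
            if (amplified < short.MinValue) amplified = short.MinValue;
            buffer[i] = (byte)(amplified & 0xFF);
            buffer[i + 1] = (byte)((amplified >> 8) & 0xFF);
        }
    }
}

In the question's code, this would be called inside the DataAvailable handler, before ByteString.CopyFrom: PcmGain.AmplifyPcm16(args.Buffer, args.BytesRecorded, PcmGain.GainFactor);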
About "c# - Google Speech/NAudio has a long delay if the speaker is far from the microphone", we found a similar question on Stack Overflow: https://stackoverflow.com/questions/55500330/