Hi, I'm trying to find a way to use the Microsoft Speech API from Angular 5, using the microsoft-speech-browser-sdk for JavaScript:
https://github.com/Azure-Samples/SpeechToText-WebSockets-Javascript
I simply imported the SDK (import * as SDK from 'microsoft-speech-browser-sdk';) and tried to use the same code as in the sample,
but I get this error: SDK.Recognizer.CreateRecognizer is not a function. I know the SDK is imported, because it executes the first function.
I also can't find an API reference. Has anyone worked with these Cognitive Services from Angular?
Best answer
I ran into the same problem. It appears to be a typo in the blog post, so I compared it against the SDK sample here: https://github.com/Azure-Samples/cognitive-services-speech-sdk/tree/master/samples/js/browser
Smael's answer is essentially the fix: remove .Recognizer from the function call, and that should fix it (also make sure the SDK reference you use has the same name as the one you imported):
import { Component } from '@angular/core';
import { environment } from 'src/environments/environment';
import * as SpeechSDK from 'microsoft-speech-browser-sdk';

@Component({
  selector: 'app-home',
  templateUrl: './home.component.html',
})
export class HomeComponent {
  speechAuthToken: string;
  recognizer: any;

  constructor() {
    this.recognizer = this.RecognizerSetup(SpeechSDK, SpeechSDK.RecognitionMode.Conversation, 'en-US',
      SpeechSDK.SpeechResultFormat.Simple, environment.speechSubscriptionKey);
  }

  RecognizerSetup(SDK, recognitionMode, language, format, subscriptionKey) {
    const recognizerConfig = new SDK.RecognizerConfig(
      new SDK.SpeechConfig(
        new SDK.Context(
          new SDK.OS(navigator.userAgent, 'Browser', null),
          new SDK.Device('SpeechSample', 'SpeechSample', '1.0.00000'))),
      recognitionMode, // SDK.RecognitionMode.Interactive (Options: Interactive/Conversation/Dictation)
      language,        // Supported languages are specific to each recognition mode. Refer to the docs.
      format);         // SDK.SpeechResultFormat.Simple (Options: Simple/Detailed)

    // Alternatively use SDK.CognitiveTokenAuthentication(fetchCallback, fetchOnExpiryCallback) for token auth
    const authentication = new SDK.CognitiveSubscriptionKeyAuthentication(subscriptionKey);

    return SpeechSDK.CreateRecognizer(recognizerConfig, authentication);
  }

  RecognizerStart() {
    this.recognizer.Recognize((event) => {
      /*
        Alternative syntax for TypeScript devs:
        if (event instanceof SDK.RecognitionTriggeredEvent)
      */
      switch (event.Name) {
        case 'RecognitionTriggeredEvent':
          console.log('Initializing');
          break;
        case 'ListeningStartedEvent':
          console.log('Listening');
          break;
        case 'RecognitionStartedEvent':
          console.log('Listening_Recognizing');
          break;
        case 'SpeechStartDetectedEvent':
          console.log('Listening_DetectedSpeech_Recognizing');
          console.log(JSON.stringify(event.Result)); // check console for other information in result
          break;
        case 'SpeechHypothesisEvent':
          // UpdateRecognizedHypothesis(event.Result.Text);
          console.log(JSON.stringify(event.Result)); // check console for other information in result
          break;
        case 'SpeechFragmentEvent':
          // UpdateRecognizedHypothesis(event.Result.Text);
          console.log(JSON.stringify(event.Result)); // check console for other information in result
          break;
        case 'SpeechEndDetectedEvent':
          // OnSpeechEndDetected();
          console.log('Processing_Adding_Final_Touches');
          console.log(JSON.stringify(event.Result)); // check console for other information in result
          break;
        case 'SpeechSimplePhraseEvent':
          // UpdateRecognizedPhrase(JSON.stringify(event.Result, null, 3));
          break;
        case 'SpeechDetailedPhraseEvent':
          // UpdateRecognizedPhrase(JSON.stringify(event.Result, null, 3));
          break;
        case 'RecognitionEndedEvent':
          // OnComplete();
          console.log('Idle');
          console.log(JSON.stringify(event)); // Debug information
          break;
      }
    })
    .On(() => {
      // The request succeeded. Nothing to do here.
    },
    (error) => {
      console.error(error);
    });
  }

  RecognizerStop() {
    // recognizer.AudioSource.Detach(audioNodeId) can also be used here. (audioNodeId is part of ListeningStartedEvent)
    this.recognizer.AudioSource.TurnOff();
  }
}
Regarding "angular - Using the Microsoft Speech API with Angular", we found a similar question on Stack Overflow: https://stackoverflow.com/questions/48124648/