I'm trying to use SFSpeechRecognizer, but I have no way to test whether I've implemented it correctly, and since it's a relatively new class I couldn't find any sample code (and I don't know Swift). Have I made any unforgivable mistakes or missed anything?
[SFSpeechRecognizer requestAuthorization:^(SFSpeechRecognizerAuthorizationStatus status) {
    if (status == SFSpeechRecognizerAuthorizationStatusAuthorized) {
        SFSpeechRecognizer *recognizer = [[SFSpeechRecognizer alloc] init];
        recognizer.delegate = self;
        SFSpeechAudioBufferRecognitionRequest *request = [[SFSpeechAudioBufferRecognitionRequest alloc] init];
        request.contextualStrings = @[@"data", @"bank", @"databank"];
        SFSpeechRecognitionTask *task = [recognizer recognitionTaskWithRequest:request resultHandler:^(SFSpeechRecognitionResult *result, NSError *error) {
            SFTranscription *transcript = result.bestTranscription;
            NSLog(@"%@", transcript);
        }];
    }
}];
Best Answer
I'm experimenting with this too, but the code below works for me. Note that SFSpeechURLRecognitionRequest and SFSpeechAudioBufferRecognitionRequest are not the same, so I think (I haven't tested this yet) you may need to request different permissions (did you ask for permission beforehand to use the microphone and speech recognition?). OK, here is the code:
// Available on iOS 10+, limited to about 1 minute of audio, and requires an
// internet connection; the audio can come from a recorded file or from the microphone.
NSLocale *local = [[NSLocale alloc] initWithLocaleIdentifier:@"es-MX"];
speechRecognizer = [[SFSpeechRecognizer alloc] initWithLocale:local];
// myDir is assumed to be the directory where the recording was saved.
NSString *soundFilePath = [myDir stringByAppendingPathComponent:@"sound.m4a"];
NSURL *url = [[NSURL alloc] initFileURLWithPath:soundFilePath];
if (!speechRecognizer.isAvailable)
    NSLog(@"speechRecognizer is not available, maybe it has no internet connection");
SFSpeechURLRecognitionRequest *urlRequest = [[SFSpeechURLRecognitionRequest alloc] initWithURL:url];
urlRequest.shouldReportPartialResults = YES; // YES to animate the transcription as partial results arrive
[speechRecognizer recognitionTaskWithRequest:urlRequest resultHandler:^(SFSpeechRecognitionResult * _Nullable result, NSError * _Nullable error) {
    if (!error) {
        NSString *transcriptText = result.bestTranscription.formattedString;
        // Note: NSLog(@"transcriptText") would print the literal string; use the format specifier.
        NSLog(@"%@", transcriptText);
    }
}];
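The code above only covers the file-based SFSpeechURLRecognitionRequest path. For the microphone path the question is actually using, the crucial step its code is missing is feeding captured audio into the SFSpeechAudioBufferRecognitionRequest. Here is a rough sketch of that flow (untested here; the property names `self.audioEngine` and `self.speechRecognizer` are assumptions, and it presumes your Info.plist already declares NSSpeechRecognitionUsageDescription and NSMicrophoneUsageDescription):

```objc
#import <Speech/Speech.h>
#import <AVFoundation/AVFoundation.h>

- (void)startListening {
    SFSpeechAudioBufferRecognitionRequest *request =
        [[SFSpeechAudioBufferRecognitionRequest alloc] init];
    request.shouldReportPartialResults = YES;

    AVAudioInputNode *inputNode = self.audioEngine.inputNode;
    AVAudioFormat *format = [inputNode outputFormatForBus:0];
    // The step the question's code omits: append captured audio
    // buffers to the request so the recognizer has something to transcribe.
    [inputNode installTapOnBus:0 bufferSize:1024 format:format
                         block:^(AVAudioPCMBuffer *buffer, AVAudioTime *when) {
        [request appendAudioPCMBuffer:buffer];
    }];

    [self.audioEngine prepare];
    NSError *engineError = nil;
    if (![self.audioEngine startAndReturnError:&engineError]) {
        NSLog(@"audio engine failed to start: %@", engineError);
        return;
    }

    [self.speechRecognizer recognitionTaskWithRequest:request
                                        resultHandler:^(SFSpeechRecognitionResult * _Nullable result,
                                                        NSError * _Nullable error) {
        if (result) {
            NSLog(@"%@", result.bestTranscription.formattedString);
        }
        // Tear down the tap when recognition finishes or fails.
        if (error || result.isFinal) {
            [self.audioEngine stop];
            [inputNode removeTapOnBus:0];
        }
    }];
}
```

When the audio stream ends (for example, the user taps a stop button), call `[request endAudio]` so the recognizer knows no more buffers are coming.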
Regarding "ios - Correct way to use SFSpeechRecognizer?", we found a similar question on Stack Overflow: https://stackoverflow.com/questions/41633036/