ios - Speech recognition errors in Swift 3 and iOS 10

Tags: ios swift speech-recognition ios10

I am using an iPhone 6s Plus. Here is the code for the speech recognition view controller:

import Speech
import UIKit

protocol SpeechRecognitionDelegate: class {
    func speechRecognitionComplete(query: String?)
    func speechRecognitionCancelled()
}

class SpeechRecognitionViewController: UIViewController, SFSpeechRecognizerDelegate {

    var textView: UITextView!

    private let speechRecognizer = SFSpeechRecognizer(locale: Locale.init(identifier: "en-US"))
    private var recognitionRequest: SFSpeechAudioBufferRecognitionRequest?
    private var recognitionTask: SFSpeechRecognitionTask?
    private let audioEngine = AVAudioEngine()
    private var query: String?
    weak var delegate: SpeechRecognitionDelegate?
    var isListening: Bool = false

    init(delegate: SpeechRecognitionDelegate, frame: CGRect) {
        super.init(nibName: nil, bundle: nil)
        self.delegate = delegate
        self.view.frame = frame
    }

    required init?(coder aDecoder: NSCoder) {
        fatalError("init(coder:) has not been implemented")
    }

    enum ErrorMessage: String {
        case denied = "To enable Speech Recognition go to Settings -> Privacy."
        case notDetermined = "Authorization not determined - please try again."
        case restricted = "Speech Recognition is restricted on this device."
        case noResults = "No results found - please try a different search."
    }



    func displayErrorAlert(message: ErrorMessage) {
        let alertController = UIAlertController(title: nil,
                                                message: message.rawValue,
                                                preferredStyle: .alert)
        let alertAction = UIAlertAction(title: "OK", style: .default, handler: nil)
        alertController.addAction(alertAction)
        OperationQueue.main.addOperation {
            self.present(alertController, animated: true, completion: nil)
        }
    }

    override func viewDidLoad() {
        super.viewDidLoad()

    }

    override func viewWillAppear(_ animated: Bool) {
        super.viewWillAppear(animated)

        speechRecognizer?.delegate = self

        //initialize textView and add it to self.view
    }

    func startListening() {
        guard !isListening else {return}
        isListening = true

        recognitionRequest = SFSpeechAudioBufferRecognitionRequest()
        guard let recognitionRequest = recognitionRequest else {
            print("SpeechRecognitionViewController recognitionRequest \(self.recognitionRequest)")
            return
        }

        recognitionRequest.shouldReportPartialResults = true

        recognitionTask = speechRecognizer?.recognitionTask(with: recognitionRequest, resultHandler: { (result, error) in
            var isFinal = false

            if result != nil {
                self.query = result?.bestTranscription.formattedString
                self.textView.text = self.query
                isFinal = (result?.isFinal)!
            }

            if error != nil || isFinal {
                print("recognitionTask error = \(error?.localizedDescription)")
                self.stopListening()
            }
        })

        let audioSession = AVAudioSession.sharedInstance()
        do {
            try audioSession.setCategory(AVAudioSessionCategoryRecord)
            try audioSession.setMode(AVAudioSessionModeMeasurement)
            try audioSession.setActive(true, with: .notifyOthersOnDeactivation)
        } catch {
            print("Audio session isn't configured correctly")
        }

        let recordingFormat = audioEngine.inputNode?.outputFormat(forBus: 0)
        audioEngine.inputNode?.installTap(onBus: 0, bufferSize: 1024, format: recordingFormat) { (buffer, time) in
            self.recognitionRequest?.append(buffer)
        }

        audioEngine.prepare()

        do {
            try audioEngine.start()
            textView.text = "Listening..."
        } catch {
            print("Audio engine failed to start")
        }
    }

    func stopListening() {
        guard isListening else {return}
        audioEngine.stop()
        audioEngine.inputNode?.removeTap(onBus: 0)
        recognitionRequest = nil
        recognitionTask = nil
        isListening = false
    }

    // MARK: SFSpeechRecognizerDelegate

    func speechRecognizer(_ speechRecognizer: SFSpeechRecognizer, availabilityDidChange available: Bool) {
        if !available {
            let alertController = UIAlertController(title: nil,
                                                    message: "Speech Recognition is currently unavailable.",
                                                    preferredStyle: .alert)
            let alertAction = UIAlertAction(title: "OK", style: .default) { (alertAction) in
                self.stopListening()
            }
            alertController.addAction(alertAction)
            present(alertController, animated: true)
        }
    }
}

This VC is embedded in another view controller. When a button in the parent view controller is tapped, startListening() is called. When the same button is pressed again, stopListening() is called.
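The toggle behavior described above can be sketched roughly as follows; the names `ParentViewController`, `speechVC`, and `micButtonTapped` are assumptions for illustration and do not appear in the original post:

```swift
import UIKit

// Hypothetical parent view controller that toggles listening on each tap.
class ParentViewController: UIViewController, SpeechRecognitionDelegate {
    var speechVC: SpeechRecognitionViewController!

    @objc func micButtonTapped(_ sender: UIButton) {
        // First tap starts listening, second tap stops it
        if speechVC.isListening {
            speechVC.stopListening()
        } else {
            speechVC.startListening()
        }
    }

    // SpeechRecognitionDelegate
    func speechRecognitionComplete(query: String?) { /* handle result */ }
    func speechRecognitionCancelled() { }
}
```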

The first speech recognition works fine. On the second attempt I get this error (I'm guessing it has something to do with grammar loading?):

recognitionTask error = Optional("The operation couldn’t be completed. (kAFAssistantErrorDomain error 209.)") 

and speech recognition no longer works. After 30 seconds I get a timeout error:

Optional(Error Domain=kAFAssistantErrorDomain Code=203 "Timeout" UserInfo={NSLocalizedDescription=Timeout, NSUnderlyingError=0x170446f90 {Error Domain=SiriSpeechErrorDomain Code=100 "(null)"}})

The original code is here: SayWhat

What am I missing?

Best answer

All I had to do was add recognitionRequest?.endAudio() when stopping listening:

func stopListening() {
    guard isListening else {return}
    audioEngine.stop()
    audioEngine.inputNode?.removeTap(onBus: 0)
    // Indicate that the audio source is finished and no more audio will be appended
    recognitionRequest?.endAudio()
    recognitionRequest = nil
    recognitionTask = nil
    isListening = false
}
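One thing worth noting alongside the fix: the `ErrorMessage` enum in the question (denied / notDetermined / restricted) suggests the app also needs to handle speech authorization. A minimal sketch using `SFSpeechRecognizer.requestAuthorization`, which is the standard API for this; the helper function itself is an illustration and not part of the original post:

```swift
import Speech

// Hypothetical helper: request speech-recognition permission and report
// back on the main queue whether the app is authorized.
func requestSpeechAuthorization(completion: @escaping (Bool) -> Void) {
    SFSpeechRecognizer.requestAuthorization { status in
        OperationQueue.main.addOperation {
            switch status {
            case .authorized:
                completion(true)
            case .denied, .restricted, .notDetermined:
                // Map these onto the ErrorMessage cases from the question
                completion(false)
            }
        }
    }
}
```

Calling this once (for example in `viewDidLoad`) before `startListening()` avoids starting the audio engine when recognition is not permitted.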

For "ios - Speech recognition errors in Swift 3 and iOS 10", we found a similar question on Stack Overflow: https://stackoverflow.com/questions/42331278/
