ios - Ducked background music is not unducked when using Text-To-Speech with postUtteranceDelay

Tags: ios iphone avfoundation avaudiosession avspeechsynthesizer

Question:

When using text-to-speech, I want the background audio to dim (or "duck"), speak an utterance, and then unduck the background audio. This mostly works, but when I try to unduck, the audio stays ducked even though deactivating the session throws no error.

Context and code:

The method that speaks an utterance:

// Create speech utterance
AVSpeechUtterance *speechUtterance = [[AVSpeechUtterance alloc]initWithString:textToSpeak];
speechUtterance.rate = instance.speechRate;
speechUtterance.pitchMultiplier = instance.speechPitch;
speechUtterance.volume = instance.speechVolume;
speechUtterance.postUtteranceDelay = 0.005;

AVSpeechSynthesisVoice *voice = [AVSpeechSynthesisVoice voiceWithLanguage:instance.voiceLanguageCode];
speechUtterance.voice = voice;

if (instance.speechSynthesizer.isSpeaking) {
    [instance.speechSynthesizer stopSpeakingAtBoundary:AVSpeechBoundaryImmediate];
}

AVAudioSession *audioSession = [AVAudioSession sharedInstance];
NSError *activationError = nil;
[audioSession setActive:YES error:&activationError];
if (activationError) {
    NSLog(@"Error activating: %@", activationError);
}

[instance.speechSynthesizer speakUtterance:speechUtterance]; 

Then deactivating the session after the utterance has finished speaking:

- (void)speechSynthesizer:(AVSpeechSynthesizer *)synthesizer didFinishSpeechUtterance:(AVSpeechUtterance *)utterance
{
    dispatch_queue_t myQueue = dispatch_queue_create("com.company.appname", nil);
    dispatch_async(myQueue, ^{
        NSError *error = nil;

        if (![[AVAudioSession sharedInstance] setActive:NO withOptions:AVAudioSessionSetActiveOptionNotifyOthersOnDeactivation error:&error]) {
            NSLog(@"Error deactivating: %@", error);
        }
    });
}

Setting the app's audio category in the App Delegate:

- (BOOL)application:(UIApplication *)application didFinishLaunchingWithOptions:(NSDictionary *)launchOptions
{
    AVAudioSession *audioSession = [AVAudioSession sharedInstance];
    NSError *setCategoryError = nil;
    [audioSession setCategory:AVAudioSessionCategoryPlayback
                  withOptions:AVAudioSessionCategoryOptionDuckOthers
                        error:&setCategoryError];
    if (setCategoryError) {
        NSLog(@"Error setting category: %@", setCategoryError);
    }
    return YES;
}

What I've tried:

Ducking/unducking works when I deactivate the AVAudioSession with a delay:

dispatch_time_t popTime = dispatch_time(DISPATCH_TIME_NOW, 0.2 * NSEC_PER_SEC);
dispatch_after(popTime, dispatch_queue_create("com.company.appname", nil), ^(void){
    NSError *error = nil;

    if (![[AVAudioSession sharedInstance] setActive:NO withOptions:AVAudioSessionSetActiveOptionNotifyOthersOnDeactivation error:&error]) {
        NSLog(@"Error deactivating: %@", error);
    }
});

However, the delay is noticeable, and I get an error message in the console:

[avas] AVAudioSession.mm:1074:-[AVAudioSession setActive:withOptions:error:]: Deactivating an audio session that has running I/O. All I/O should be stopped or paused prior to deactivating the audio session.

Question:

How do I properly combine AVSpeechSynthesizer with ducking of background audio?

Edit: Apparently the problem stems from using postUtteranceDelay on the AVSpeechUtterance, which causes the music to stay dimmed. Removing that property fixes the issue. However, some of my utterances need a postUtteranceDelay, so I've updated the title.
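One way to keep the delay where it matters (a hypothetical sketch, not from the original post) is to set postUtteranceDelay only on utterances that will be followed by another queued utterance, so the final utterance carries no trailing delay and the session can be deactivated cleanly:

```objectivec
// Hypothetical helper: apply postUtteranceDelay only when another
// utterance will follow. A trailing delay on the final utterance keeps
// the session's I/O running, which is what prevents unducking.
- (AVSpeechUtterance *)utteranceForText:(NSString *)text isLast:(BOOL)isLast
{
    AVSpeechUtterance *utterance = [[AVSpeechUtterance alloc] initWithString:text];
    utterance.postUtteranceDelay = isLast ? 0.0 : 0.005;
    return utterance;
}
```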

Best answer

Using your code while listening to Spotify, ducking worked (both starting and stopping) without any problems or errors. I'm on an iPhone 6S running iOS 9.1, so this may be an iOS 10 issue.

I'd suggest removing the dispatch wrap entirely, since it isn't necessary. That may fix the problem for you.

A working code example is below; all I did was create a new project ("Single View Application") and change my AppDelegate.m to look like this:

#import "AppDelegate.h"
@import AVFoundation;

@interface AppDelegate () <AVSpeechSynthesizerDelegate>
@property (nonatomic, strong) AVSpeechSynthesizer *speechSynthesizer;
@end

@implementation AppDelegate


- (BOOL)application:(UIApplication *)application didFinishLaunchingWithOptions:(NSDictionary *)launchOptions {
    AVAudioSession *audioSession = [AVAudioSession sharedInstance];

    NSError *setCategoryError = nil;
    [audioSession setCategory:AVAudioSessionCategoryPlayback withOptions:AVAudioSessionCategoryOptionDuckOthers error:&setCategoryError];
    if (setCategoryError) {
        NSLog(@"error setting up: %@", setCategoryError);
    }

    self.speechSynthesizer = [[AVSpeechSynthesizer alloc] init];
    self.speechSynthesizer.delegate = self;

    AVSpeechUtterance *speechUtterance = [[AVSpeechUtterance alloc] initWithString:@"Hi there, how are you doing today?"];

    AVSpeechSynthesisVoice *voice = [AVSpeechSynthesisVoice voiceWithLanguage:@"en-US"];
    speechUtterance.voice = voice;

    NSError *activationError = nil;
    [audioSession setActive:YES error:&activationError];
    if (activationError) {
        NSLog(@"Error activating: %@", activationError);
    }

    [self.speechSynthesizer speakUtterance:speechUtterance];

    return YES;
}

- (void)speechSynthesizer:(AVSpeechSynthesizer *)synthesizer didFinishSpeechUtterance:(AVSpeechUtterance *)utterance {
    NSError *error = nil;
    if (![[AVAudioSession sharedInstance] setActive:NO withOptions:AVAudioSessionSetActiveOptionNotifyOthersOnDeactivation error:&error]) {
        NSLog(@"Error deactivating: %@", error);
    }
}

@end

The only console output when running on a physical device is:

2016-12-21 09:42:08.484 DimOtherAudio[19017:3751445] Building MacinTalk voice for asset: (null)

Update

Setting the postUtteranceDelay property caused the same problem for me.

The documentation for postUtteranceDelay says this:

The amount of time a speech synthesizer will wait after the utterance is spoken before handling the next queued utterance.

When two or more utterances are spoken by an instance of AVSpeechSynthesizer, the time between periods when either is audible will be at least the sum of the first utterance’s postUtteranceDelay and the second utterance’s preUtteranceDelay.

It's clear from the documentation that this value is designed to be used only when another utterance is queued. I confirmed that adding a second utterance without postUtteranceDelay set unducks the audio:

AVAudioSession *audioSession = [AVAudioSession sharedInstance];

NSError *setCategoryError = nil;
[audioSession setCategory:AVAudioSessionCategoryPlayback withOptions:AVAudioSessionCategoryOptionDuckOthers error:&setCategoryError];
if (setCategoryError) {
    NSLog(@"error setting up: %@", setCategoryError);
}

self.speechSynthesizer = [[AVSpeechSynthesizer alloc] init];
self.speechSynthesizer.delegate = self;

AVSpeechUtterance *speechUtterance = [[AVSpeechUtterance alloc] initWithString:@"Hi there, how are you doing today?"];
speechUtterance.postUtteranceDelay = 0.005;

AVSpeechSynthesisVoice *voice = [AVSpeechSynthesisVoice voiceWithLanguage:@"en-US"];
speechUtterance.voice = voice;

NSError *activationError = nil;
[audioSession setActive:YES error:&activationError];
if (activationError) {
    NSLog(@"Error activating: %@", activationError);
}

[self.speechSynthesizer speakUtterance:speechUtterance];

// second utterance without postUtteranceDelay
AVSpeechUtterance *speechUtterance2 = [[AVSpeechUtterance alloc] initWithString:@"Duck. Duck. Goose."];
[self.speechSynthesizer speakUtterance:speechUtterance2];

A similar question was found on Stack Overflow: https://stackoverflow.com/questions/41165588/
