android - Huawei ML Kit Text To Speech error

Tags: android android-studio huawei-mobile-services huawei-developers huawei-ml-kit

I am developing a translation app, and I need to speak aloud what the user translates. Following Huawei's Text to Speech documentation, I get this error:

onError: MLTtsError{errorId=11301, errorMsg='The speaker is not supported. ', extension=7002}

 protected void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);

        setContentView(R.layout.activity_speak_and_translate);

        showInterstitialAd();
        MLApplication.getInstance().setApiKey("Your Key");


        if (deviceManufacture.equalsIgnoreCase("Huawei")) {
            setUpHuaweiTts();
        } 
        
    }
    private void setUpHuaweiTts() {
        mlTtsConfig = new MLTtsConfig()
                // Set the TTS language.
                // MLTtsConstants.TTS_EN_US: English (US).
                // MLTtsConstants.TTS_ZH_HANS: Chinese (Simplified).
                .setLanguage(MLTtsConstants.TTS_EN_US)
                // Set the speaker (timbre).
                // MLTtsConstants.TtsSpeakerFemaleEn: English female voice.
                // MLTtsConstants.TtsSpeakerMaleZh: Chinese male voice.
                .setPerson(MLTtsConstants.TTS_SPEAKER_FEMALE_ZH)
                // Set the speech speed. Range: 0.2–1.8. 1.0 indicates 1x speed.
                .setSpeed(1.0f)
                // Set the volume. Range: 0.2–1.8. 1.0 indicates 1x volume.
                .setVolume(1.0f);
        mlTtsEngine = new MLTtsEngine(mlTtsConfig);
        mlTtsEngine.setTtsCallback(new MLTtsCallback() {
            @Override
            public void onError(String s, MLTtsError mlTtsError) {
                Log.d(TAG, "onError: "+ mlTtsError);
            }

            @Override
            public void onWarn(String s, MLTtsWarn mlTtsWarn) {
                Log.d(TAG, "onWarn: ");
            }

            @Override
            public void onRangeStart(String s, int i, int i1) {
                Log.d(TAG, "onRangeStart: ");
            }

            @Override
            public void onAudioAvailable(String s, MLTtsAudioFragment mlTtsAudioFragment, int i, Pair<Integer, Integer> pair, Bundle bundle) {
                Log.d(TAG, "onAudioAvailable: ");
            }

            @Override
            public void onEvent(String s, int i, Bundle bundle) {
                // Callback method of a TTS event. eventId indicates the event name.
                switch (i) {
                    case MLTtsConstants.EVENT_PLAY_START:
                        Log.d(TAG, "onEvent: Play");
                        // Called when playback starts.
                        break;
                    case MLTtsConstants.EVENT_PLAY_STOP:
                        // Called when playback stops.
                        boolean isInterrupted = bundle.getBoolean(MLTtsConstants.EVENT_PLAY_STOP_INTERRUPTED);
                        Log.d(TAG, "onEvent: Stop");
                        break;
                    case MLTtsConstants.EVENT_PLAY_RESUME:
                        // Called when playback resumes.
                        Log.d(TAG, "onEvent: Resume");      
                        break;
                    case MLTtsConstants.EVENT_PLAY_PAUSE:
                        // Called when playback pauses.
                        Log.d(TAG, "onEvent: Pause");
                        break;

                    // Pay attention to the following callback events when you focus on only synthesized audio data but do not use the internal player for playback:
                    case MLTtsConstants.EVENT_SYNTHESIS_START:
                        // Called when TTS starts.
                        Log.d(TAG, "onEvent: SynStart");
                        break;
                    case MLTtsConstants.EVENT_SYNTHESIS_END:
                        // Called when TTS ends.
                        Log.d(TAG, "onEvent: SynEnd");
                        break;
                    case MLTtsConstants.EVENT_SYNTHESIS_COMPLETE:
                        // TTS is complete. All synthesized audio streams are passed to the app.
                        boolean isInterruptedCheck = bundle.getBoolean(MLTtsConstants.EVENT_SYNTHESIS_INTERRUPTED);
                        Log.d(TAG, "onEvent: SynComplete");
                        break;
                    default:
                        break;
                }
            }
        });
       mlTtsEngine.speak("test", MLTtsEngine.QUEUE_APPEND);
    }
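Note that `onCreate` references a `deviceManufacture` variable that is not declared in the snippet; on Android it would typically come from `android.os.Build.MANUFACTURER`. A minimal, SDK-free sketch of that check (the helper class and its name are mine, not part of the original code):

```java
public class ManufacturerCheck {
    // On a real device you would pass android.os.Build.MANUFACTURER here.
    // Manufacturer strings vary in case ("HUAWEI", "Huawei"), so compare
    // case-insensitively and guard against null.
    public static boolean isHuawei(String manufacturer) {
        return manufacturer != null && manufacturer.equalsIgnoreCase("Huawei");
    }
}
```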
For now I am passing the string "test" purely for testing; eventually I have to take the text from the model and have it spoken. I cannot find anything about this speaker error in the documentation. I have looked up the error code in Huawei's ErrorCode reference:

public static final int ERR_ILLEGAL_PARAMETER

Invalid parameter.

Constant value: 11301
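Error 11301 (`ERR_ILLEGAL_PARAMETER`) here indicates that the speaker passed to `setPerson` does not belong to the language passed to `setLanguage`. One way to avoid this class of bug is to derive the speaker from the language in a single place, so the two can never disagree. A minimal sketch, where the map keys and values are placeholder strings standing in for the corresponding `MLTtsConstants` entries (I have not verified the actual constant values):

```java
import java.util.HashMap;
import java.util.Map;

public class TtsSpeakerTable {
    // Placeholder strings; in the app these would be MLTtsConstants
    // entries such as TTS_EN_US -> TTS_SPEAKER_MALE_EN.
    private static final Map<String, String> SPEAKER_FOR_LANGUAGE = new HashMap<>();
    static {
        SPEAKER_FOR_LANGUAGE.put("TTS_EN_US", "TTS_SPEAKER_MALE_EN");
        SPEAKER_FOR_LANGUAGE.put("TTS_ZH_HANS", "TTS_SPEAKER_FEMALE_ZH");
    }

    // Returns a speaker matching the language, or throws so the mismatch
    // surfaces at configuration time rather than as runtime error 11301.
    public static String speakerFor(String language) {
        String speaker = SPEAKER_FOR_LANGUAGE.get(language);
        if (speaker == null) {
            throw new IllegalArgumentException("No speaker configured for " + language);
        }
        return speaker;
    }
}
```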


LogCat debug and error output were attached as screenshots (images not available here).

Any help would be greatly appreciated. Thank you.

Best answer

I had set the wrong person (speaker) for English. Changing this line

.setPerson(MLTtsConstants.TTS_SPEAKER_FEMALE_ZH)

to

.setPerson(MLTtsConstants.TTS_SPEAKER_MALE_EN)

makes it work fine.
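For completeness, the corrected configuration pairs the language and the speaker consistently. This is a sketch built only from the constants shown above, with the speed and volume values kept from the original (it is a Huawei-SDK configuration fragment, not standalone code):

```java
// English language paired with an English speaker,
// instead of the mismatched TTS_SPEAKER_FEMALE_ZH.
MLTtsConfig mlTtsConfig = new MLTtsConfig()
        .setLanguage(MLTtsConstants.TTS_EN_US)
        .setPerson(MLTtsConstants.TTS_SPEAKER_MALE_EN)
        .setSpeed(1.0f)
        .setVolume(1.0f);
MLTtsEngine mlTtsEngine = new MLTtsEngine(mlTtsConfig);
```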

Regarding "android - Huawei ML Kit Text To Speech error", we found a similar question on Stack Overflow: https://stackoverflow.com/questions/69539762/
