I'm really stuck right now, and I'm still new to Xamarin. I'm using Xamarin.Forms to develop an app with speech recognition.
So far I've only created a simple UI with a button and an Entry.
What works:
- pressing the button opens the popup with speech recognition
- the spoken words are read into a variable
What doesn't work:
- passing the data back to the Xamarin.Forms UI (the Entry)
StartPage.xaml.cs:
private void BtnRecord_OnClicked(object sender, EventArgs e)
{
    WaitForSpeechToText();
}

private async void WaitForSpeechToText()
{
    EntrySpeech.Text = await DependencyService.Get<Listener.ISpeechToText>().SpeechToTextAsync();
}
ISpeechToText.cs:
public interface ISpeechToText
{
    Task<string> SpeechToTextAsync();
}
This calls into the native code.
SpeechToText_Android.cs:
public class SpeechToText_Android : ISpeechToText
{
    private const int VOICE = 10;

    public SpeechToText_Android() { }

    public Task<string> SpeechToTextAsync()
    {
        var tcs = new TaskCompletionSource<string>();
        try
        {
            var voiceIntent = new Intent(RecognizerIntent.ActionRecognizeSpeech);
            voiceIntent.PutExtra(RecognizerIntent.ExtraLanguageModel, RecognizerIntent.LanguageModelFreeForm);
            voiceIntent.PutExtra(RecognizerIntent.ExtraPrompt, "Sprechen Sie jetzt");
            voiceIntent.PutExtra(RecognizerIntent.ExtraSpeechInputCompleteSilenceLengthMillis, 1500);
            voiceIntent.PutExtra(RecognizerIntent.ExtraSpeechInputPossiblyCompleteSilenceLengthMillis, 1500);
            voiceIntent.PutExtra(RecognizerIntent.ExtraSpeechInputMinimumLengthMillis, 15000);
            voiceIntent.PutExtra(RecognizerIntent.ExtraMaxResults, 1);
            voiceIntent.PutExtra(RecognizerIntent.ExtraLanguage, Java.Util.Locale.Default);
            try
            {
                ((Activity)Forms.Context).StartActivityForResult(voiceIntent, VOICE);
            }
            catch (ActivityNotFoundException a)
            {
                tcs.SetResult("Device doesn't support speech to text");
            }
        }
        catch (Exception ex)
        {
            tcs.SetException(ex);
        }
        return tcs.Task;
    }
}
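The pattern the code above is built on can be seen more clearly in isolation: a TaskCompletionSource bridges a callback-style API to an awaitable Task, and the Task only completes once SetResult is called from the callback. The sketch below is a minimal, self-contained console program; StartFakeRecognizer is a hypothetical stand-in for the Android recognizer whose result actually arrives later in OnActivityResult.

```csharp
using System;
using System.Threading;
using System.Threading.Tasks;

public class Program
{
    // Hypothetical stand-in for the native recognizer: it delivers its result
    // via a callback on another thread, much like OnActivityResult does.
    public static void StartFakeRecognizer(Action<string> onResult)
    {
        new Thread(() =>
        {
            Thread.Sleep(100);        // "user is speaking"
            onResult("hello world");  // result arrives asynchronously
        }).Start();
    }

    // Bridge the callback to an awaitable Task. Note that tcs.SetResult must be
    // called when the result arrives, otherwise the awaiting caller waits forever.
    public static Task<string> RecognizeAsync()
    {
        var tcs = new TaskCompletionSource<string>();
        StartFakeRecognizer(result => tcs.SetResult(result));
        return tcs.Task;
    }

    public static async Task Main()
    {
        string text = await RecognizeAsync();
        Console.WriteLine(text); // prints "hello world"
    }
}
```

Note that `return tcs.Task;` runs immediately; it hands back an *incomplete* task, and the `await` in the caller pauses until something completes it. In the question's code, the success path never calls SetResult, which is why the result never reaches the UI.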
MainActivity.cs:
protected override void OnActivityResult(int requestCode, Result resultVal, Intent data)
{
    if (requestCode == VOICE)
    {
        if (resultVal == Result.Ok)
        {
            var matches = data.GetStringArrayListExtra(RecognizerIntent.ExtraResults);
            if (matches.Count != 0)
            {
                string textInput = matches[0].ToString();
                if (textInput.Length > 500)
                    textInput = textInput.Substring(0, 500);
            }
            // RETURN
        }
    }
    base.OnActivityResult(requestCode, resultVal, data);
}
At first I thought I could pass the result back to the UI via
return tcs.Task;
but then I noticed that this return happens as soon as the speech-recognition popup has finished rendering. At that moment, nobody has spoken yet.
The spoken words end up in the string "textInput" inside the OnActivityResult method, but how do I pass this string back to the Xamarin.Forms UI?
Thanks, everyone!
Best Answer
I would use an AutoResetEvent to pause the return until OnActivityResult is called, i.e. until the user records something, cancels, or you time their action out via the AutoResetEvent.
Return a Task<string> from your SpeechToTextAsync method:
public interface ISpeechToText
{
    Task<string> SpeechToTextAsync();
}
Add an AutoResetEvent to pause execution:
Note: AutoResetEvent.WaitOne is wrapped in a Task.Run to avoid hanging the application looper.
public class SpeechToText_Android : Listener.ISpeechToText
{
    public static AutoResetEvent autoEvent = new AutoResetEvent(false);
    public static string SpeechText;
    const int VOICE = 10;

    public async Task<string> SpeechToTextAsync()
    {
        var voiceIntent = new Intent(RecognizerIntent.ActionRecognizeSpeech);
        voiceIntent.PutExtra(RecognizerIntent.ExtraLanguageModel, RecognizerIntent.LanguageModelFreeForm);
        voiceIntent.PutExtra(RecognizerIntent.ExtraPrompt, "Sprechen Sie jetzt");
        voiceIntent.PutExtra(RecognizerIntent.ExtraSpeechInputCompleteSilenceLengthMillis, 1500);
        voiceIntent.PutExtra(RecognizerIntent.ExtraSpeechInputPossiblyCompleteSilenceLengthMillis, 1500);
        voiceIntent.PutExtra(RecognizerIntent.ExtraSpeechInputMinimumLengthMillis, 15000);
        voiceIntent.PutExtra(RecognizerIntent.ExtraMaxResults, 1);
        voiceIntent.PutExtra(RecognizerIntent.ExtraLanguage, Java.Util.Locale.Default);

        SpeechText = "";
        autoEvent.Reset();
        ((Activity)Forms.Context).StartActivityForResult(voiceIntent, VOICE);
        await Task.Run(() => { autoEvent.WaitOne(new TimeSpan(0, 2, 0)); });
        return SpeechText;
    }
}
MainActivity OnActivityResult:
const int VOICE = 10;

protected override void OnActivityResult(int requestCode, Result resultCode, Intent data)
{
    base.OnActivityResult(requestCode, resultCode, data);
    if (requestCode == VOICE)
    {
        if (resultCode == Result.Ok)
        {
            var matches = data.GetStringArrayListExtra(RecognizerIntent.ExtraResults);
            if (matches.Count != 0)
            {
                var textInput = matches[0];
                if (textInput.Length > 500)
                    textInput = textInput.Substring(0, 500);
                SpeechToText_Android.SpeechText = textInput;
            }
        }
        SpeechToText_Android.autoEvent.Set();
    }
}
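The hand-off between SpeechToTextAsync and OnActivityResult can be sketched as a self-contained console program. Here, WaitForResultAsync plays the role of SpeechToTextAsync, and DeliverResult is a hypothetical stand-in for OnActivityResult; the 2-second timeout stands in for the 2-minute one above.

```csharp
using System;
using System.Threading;
using System.Threading.Tasks;

public class Program
{
    public static readonly AutoResetEvent autoEvent = new AutoResetEvent(false);
    public static string SpeechText = "";

    // Plays the role of SpeechToTextAsync: reset, "start the activity",
    // then wait off the calling thread until the result handler signals.
    public static async Task<string> WaitForResultAsync()
    {
        SpeechText = "";
        autoEvent.Reset();
        // WaitOne is wrapped in Task.Run so the calling (UI) thread is not
        // blocked; the timeout bounds how long we wait for a result.
        await Task.Run(() => autoEvent.WaitOne(TimeSpan.FromSeconds(2)));
        return SpeechText;
    }

    // Plays the role of OnActivityResult: store the result, then signal
    // the waiting task that it may resume.
    public static void DeliverResult(string text)
    {
        SpeechText = text;
        autoEvent.Set();
    }

    public static async Task Main()
    {
        var pending = WaitForResultAsync();
        DeliverResult("guten Tag");       // simulate the activity returning
        Console.WriteLine(await pending); // prints "guten Tag"
    }
}
```

If the user cancels or nothing is recognized, autoEvent.Set() is still called with SpeechText left empty, so the caller gets "" back rather than hanging until the timeout.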
Note: this makes use of a couple of static variables to simplify the example implementation... Some developers would call that a code smell, and I half agree, but you can't have more than one Google speech recognizer running at a time...
Hello World example:
public class App : Application
{
    public App()
    {
        var speechTextLabel = new Label
        {
            HorizontalTextAlignment = TextAlignment.Center,
            Text = "Waiting for text"
        };
        var speechButton = new Button();
        speechButton.Text = "Fetch Speech To Text Results";
        speechButton.Clicked += async (object sender, EventArgs e) =>
        {
            var speechText = await WaitForSpeechToText();
            speechTextLabel.Text = string.IsNullOrEmpty(speechText) ? "Nothing Recorded" : speechText;
        };
        var content = new ContentPage
        {
            Title = "Speech",
            Content = new StackLayout
            {
                VerticalOptions = LayoutOptions.Center,
                Children = {
                    new Label {
                        HorizontalTextAlignment = TextAlignment.Center,
                        Text = "Welcome to Xamarin Forms!"
                    },
                    speechButton,
                    speechTextLabel
                }
            }
        };
        MainPage = new NavigationPage(content);
    }

    async Task<string> WaitForSpeechToText()
    {
        return await DependencyService.Get<Listener.ISpeechToText>().SpeechToTextAsync();
    }
}
For "c# - Android speech recognition passing data back to Xamarin Forms", a similar question can be found on Stack Overflow: https://stackoverflow.com/questions/40614131/