I'm building a test app to authenticate users via Microsoft's Cognitive Speaker Recognition API. This looks straightforward, but as mentioned in the API docs, when creating an enrollment I need to send the byte[] of the audio file I recorded. Since I'm using Xamarin.Android, I am able to record audio and save it. However, Microsoft's Cognitive Speaker Recognition API has very specific requirements for the audio.
According to the API docs, the audio file format must meet the following requirements:
Container -> WAV
Encoding -> PCM
Rate -> 16K
Sample Format -> 16 bit
Channels -> Mono
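Before uploading, it can help to verify that a recording actually matches these numbers by inspecting its WAV header. This is a minimal sketch assuming the canonical 44-byte RIFF/WAVE header layout; the `WavHeader` class is my own helper, not part of any API mentioned here:

```csharp
using System;

static class WavHeader
{
    // Reads the fmt-chunk fields of a canonical 44-byte PCM WAV header and
    // checks them against the Speaker Recognition requirements:
    // PCM encoding, mono, 16 kHz, 16-bit samples.
    public static bool MeetsSpeakerRecognitionSpec(byte[] wav)
    {
        if (wav == null || wav.Length < 44)
            return false;

        short audioFormat   = BitConverter.ToInt16(wav, 20); // 1 = PCM
        short channels      = BitConverter.ToInt16(wav, 22);
        int   sampleRate    = BitConverter.ToInt32(wav, 24);
        short bitsPerSample = BitConverter.ToInt16(wav, 34);

        return audioFormat == 1
            && channels == 1
            && sampleRate == 16000
            && bitsPerSample == 16;
    }
}
```

You can feed it `File.ReadAllBytes(path)` for a saved recording; if it returns false, the service will most likely reject the file.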
Following this recipe, I managed to record audio, and after experimenting for a while with the Android documentation I was also able to apply these settings:
_recorder.SetOutputFormat(OutputFormat.ThreeGpp); // note: a 3GPP container, not WAV
_recorder.SetAudioChannels(1);
_recorder.SetAudioSamplingRate(16000); // SetAudioSamplingRate takes Hz, so 16 kHz = 16000
_recorder.SetAudioEncodingBitRate(16000);
_recorder.SetAudioEncoder((AudioEncoder) Encoding.Pcm16bit); // casting an Encoding value to AudioEncoder does not actually select PCM
This meets most of the criteria for the required audio file. However, I can't seem to save the file in an actual ".wav" container, and I have no way to verify that the file is actually PCM-encoded.
Here are my AXML and MainActivity.cs: Github Gist
I also followed this code and merged it into mine: Github Gist
The file's specs look fine, but the duration is wrong: no matter how long I record, it reports 250 ms, which makes the audio too short.
Is there any way around this? Essentially, I just want to connect to Microsoft's Cognitive Speaker Recognition API from Xamarin.Android, and I can't find any resources on this to guide myself.
Best Answer
Recording Audio
Add the Audio Recorder Plugin NuGet Package to the Android project (and to any PCL, .NET Standard, or iOS libraries, if you are using them).
Android Project Configuration
- In AndroidManifest.xml, add the following permissions:
<uses-permission android:name="android.permission.MODIFY_AUDIO_SETTINGS" />
<uses-permission android:name="android.permission.READ_EXTERNAL_STORAGE" />
<uses-permission android:name="android.permission.RECORD_AUDIO" />
<uses-permission android:name="android.permission.WRITE_EXTERNAL_STORAGE" />
<uses-permission android:name="android.permission.INTERNET" />
- In AndroidManifest.xml, add the following provider inside the <application></application> tags:
<provider
    android:name="android.support.v4.content.FileProvider"
    android:authorities="${applicationId}.fileprovider"
    android:exported="false"
    android:grantUriPermissions="true">
    <meta-data
        android:name="android.support.FILE_PROVIDER_PATHS"
        android:resource="@xml/file_paths" />
</provider>
- In the Resources folder, create a new folder named xml.
- In Resources/xml, create a new file named file_paths.xml.
- In file_paths.xml, add the following code, replacing [your package name] with your Android project's package name:
<?xml version="1.0" encoding="utf-8"?>
<paths xmlns:android="http://schemas.android.com/apk/res/android">
    <external-path name="my_images" path="Android/data/[your package name]/files/Pictures" />
    <external-path name="my_movies" path="Android/data/[your package name]/files/Movies" />
</paths>
Android Audio Recorder Code
AudioRecorderService AudioRecorder { get; } = new AudioRecorderService
{
    StopRecordingOnSilence = true,
    PreferredSampleRate = 16000
};
public async Task StartRecording()
{
AudioRecorder.AudioInputReceived += HandleAudioInputReceived;
await AudioRecorder.StartRecording();
}
public async Task StopRecording()
{
    await AudioRecorder.StopRecording();
}
async void HandleAudioInputReceived(object sender, string e)
{
AudioRecorder.AudioInputReceived -= HandleAudioInputReceived;
PlaybackRecording();
//replace [UserGuid] with your unique Guid
await EnrollSpeaker(AudioRecorder.GetAudioFileStream(), [UserGuid]);
}
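The recorder above can be driven from a simple toggle button; a minimal sketch of the wiring (the handler name, button, and `_isRecording` flag are my own, not from the original answer):

```csharp
// Hypothetical toggle handler; RecordButton is assumed to exist in the layout.
bool _isRecording;

async void OnRecordButtonClicked(object sender, EventArgs e)
{
    if (!_isRecording)
    {
        _isRecording = true;
        await StartRecording();   // subscribes the handler and starts the plugin recorder
    }
    else
    {
        _isRecording = false;
        await AudioRecorder.StopRecording();
    }
    // With StopRecordingOnSilence = true, recording may also stop on its own,
    // firing AudioInputReceived without the button ever being tapped again.
}
```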
Cognitive Services Speaker Recognition Code
static HttpClient Client { get; } = CreateHttpClient(TimeSpan.FromSeconds(10));
public static async Task<EnrollmentStatus?> EnrollSpeaker(Stream audioStream, Guid userGuid)
{
    try
    {
        var boundaryString = "Upload----" + DateTime.Now.ToString("u").Replace(" ", "");
        var content = new MultipartFormDataContent(boundaryString)
        {
            { new StreamContent(audioStream), "enrollmentData", userGuid.ToString("D") + "_" + DateTime.Now.ToString("u") }
        };

        var requestUrl = "https://westus.api.cognitive.microsoft.com/spid/v1.0/verificationProfiles/" + userGuid.ToString("D") + "/enroll";

        var result = await Client.PostAsync(requestUrl, content).ConfigureAwait(false);
        var resultStr = await result.Content.ReadAsStringAsync().ConfigureAwait(false);

        if (result.StatusCode == HttpStatusCode.OK)
        {
            var response = JsonConvert.DeserializeObject<Enrollment>(resultStr);
            return response?.EnrollmentStatus;
        }
    }
    catch (Exception)
    {
        // Swallows network/serialization errors; consider logging them in production code
    }

    return null;
}
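Note that `userGuid` must be a verification profile ID issued by the service, not an arbitrary GUID. As I recall the v1.0 REST API, you obtain one with a POST to the `verificationProfiles` endpoint; a hedged sketch reusing the `Client` above (verify the `verificationProfileId` property name against the current API reference):

```csharp
public static async Task<Guid> CreateVerificationProfile()
{
    // The v1.0 endpoint expects a JSON body specifying the locale.
    var body = new StringContent("{\"locale\":\"en-us\"}", Encoding.UTF8, "application/json");

    var result = await Client.PostAsync(
        "https://westus.api.cognitive.microsoft.com/spid/v1.0/verificationProfiles",
        body).ConfigureAwait(false);

    result.EnsureSuccessStatusCode();

    var json = await result.Content.ReadAsStringAsync().ConfigureAwait(false);
    // Expected response shape: { "verificationProfileId": "..." }
    var profile = JsonConvert.DeserializeObject<Dictionary<string, string>>(json);
    return Guid.Parse(profile["verificationProfileId"]);
}
```

The returned GUID is what you then pass to `EnrollSpeaker` as `userGuid`.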
static HttpClient CreateHttpClient(TimeSpan timeout)
{
HttpClient client = new HttpClient();
client.Timeout = timeout;
client.DefaultRequestHeaders.AcceptEncoding.Add(new StringWithQualityHeaderValue("gzip"));
client.DefaultRequestHeaders.Accept.Add(new MediaTypeWithQualityHeaderValue("application/json"));
//replace [Your Speaker Recognition API Key] with your Speaker Recognition API Key from the Azure Portal
client.DefaultRequestHeaders.Add("Ocp-Apim-Subscription-Key", [Your Speaker Recognition API Key]);
return client;
}
public class Enrollment : EnrollmentBase
{
[JsonConverter(typeof(StringEnumConverter))]
public EnrollmentStatus EnrollmentStatus { get; set; }
public int RemainingEnrollments { get; set; }
public int EnrollmentsCount { get; set; }
public string Phrase { get; set; }
}
public enum EnrollmentStatus
{
    Enrolling,
    Training,
    Enrolled
}
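For reference, the `Enrollment` class above is what the enrollment response deserializes into; a sketch with an illustrative payload (the field values here are made up for the example, not real service output):

```csharp
// Illustrative JSON shaped like an enrollment response
var json = @"{
    ""enrollmentStatus"": ""Enrolling"",
    ""remainingEnrollments"": 2,
    ""enrollmentsCount"": 1,
    ""phrase"": ""my voice is my passport verify me""
}";

var enrollment = JsonConvert.DeserializeObject<Enrollment>(json);
// The StringEnumConverter attribute maps the "Enrolling" string
// onto the EnrollmentStatus.Enrolling enum value.
```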
Audio Playback
Configuration
Add the SimpleAudioPlayer Plugin NuGet Package to the Android project (and to any PCL, .NET Standard, or iOS libraries, if you are using them).
Code
public void PlaybackRecording()
{
var isAudioLoaded = Plugin.SimpleAudioPlayer.CrossSimpleAudioPlayer.Current.Load(AudioRecorder.GetAudioFileStream());
if (isAudioLoaded)
Plugin.SimpleAudioPlayer.CrossSimpleAudioPlayer.Current.Play();
}
Regarding "c# - Connecting to Microsoft's Cognitive Speaker Recognition API via Xamarin.Android", we found a similar question on Stack Overflow: https://stackoverflow.com/questions/49294652/