When I try to call the Google Cloud Speech-to-Text API for a long-running recognition with the following configuration:
config = dict(
    languageCode='de',
    maxAlternatives=1,
    enableWordTimeOffsets=True,
    enableAutomaticPunctuation=True,
    model='default',
    encoding='ENCODING_UNSPECIFIED'
)
I get this error:

Invalid JSON payload received. Unknown name "encoding" at 'config': Proto field is not repeating, cannot start list.

How can I fix this?
Could you give us some more information... e.g., which language and library versions are you using in this part of the project?
Assuming you are using Python, you can find the official way to connect to the Google Cloud Speech-to-Text API here: https://cloud.google.com/speech-to-text/docs/basics
I am used to working with the googleapiclient Python package and a JSON data structure rather than a plain dict.
import base64
import googleapiclient.discovery

with open(speech_file, 'rb') as speech:
    # Base64 encode the binary audio file for inclusion in the JSON
    # request.
    speech_content = base64.b64encode(speech.read())

# Construct the request
service = googleapiclient.discovery.build('speech', 'v1')
service_request = service.speech().recognize(
    body={
        "config": {
            "encoding": "LINEAR16",     # raw 16-bit signed LE samples
            "sampleRateHertz": 16000,   # 16 kHz
            "languageCode": "en-US",    # a BCP-47 language tag
        },
        "audio": {
            "content": speech_content
        }
    })
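One pitfall with the snippet above under Python 3: base64.b64encode returns bytes, which the standard json encoder (used by googleapiclient when serializing the body) cannot handle, so the content should be decoded to an ASCII string first. A minimal sketch, with stand-in bytes in place of a real audio file:

```python
import base64
import json

# Stand-in for the binary audio file's contents (hypothetical bytes).
raw_audio = b"\x00\x01\x02\x03"

# b64encode returns bytes; decode to an ASCII str so the json module
# can serialize the request body.
speech_content = base64.b64encode(raw_audio).decode("ascii")

body = {"audio": {"content": speech_content}}
print(json.dumps(body))  # → {"audio": {"content": "AAECAw=="}}
```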
If you don't know how to install Python packages, see this official article: https://packaging.python.org/tutorials/installing-packages/#id13
For a long-running request, see: https://cloud.google.com/speech-to-text/docs/reference/rest/v1/speech/longrunningrecognize
In that case, the JSON structure of the request would be:
{
  "config": {
    object(RecognitionConfig)
  },
  "audio": {
    object(RecognitionAudio)
  }
}
where RecognitionConfig is a JSON object of the following shape:
{
  "encoding": enum(AudioEncoding),
  "sampleRateHertz": number,
  "languageCode": string,
  "maxAlternatives": number,
  "profanityFilter": boolean,
  "speechContexts": [
    {
      object(SpeechContext)
    }
  ],
  "enableWordTimeOffsets": boolean
}
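Filled in with concrete values (drawn from the question's German-language setup; the encoding and sample rate are assumptions about the audio file), a RecognitionConfig matching that schema could look like this sketch:

```python
import json

# Illustrative RecognitionConfig; encoding/sampleRateHertz are assumed.
recognition_config = {
    "encoding": "LINEAR16",        # enum(AudioEncoding)
    "sampleRateHertz": 16000,      # number
    "languageCode": "de-DE",       # string, a BCP-47 tag
    "maxAlternatives": 1,          # number
    "profanityFilter": False,      # boolean
    "enableWordTimeOffsets": True  # boolean
}

print(json.dumps(recognition_config, indent=2))
```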
and RecognitionAudio looks like this:
{
  // Union field audio_source can be only one of the following:
  "content": string,
  "uri": string
  // End of list of possible types for union field audio_source.
}
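Because audio_source is a union field, exactly one of "content" or "uri" may be set; a body with both (or neither) is rejected. A small helper to check that rule before sending, purely as a sketch:

```python
def is_valid_recognition_audio(audio):
    """True if exactly one of the union fields 'content'/'uri' is present."""
    return len({"content", "uri"} & audio.keys()) == 1

# Hypothetical bucket/object names, just for illustration:
print(is_valid_recognition_audio({"uri": "gs://my-bucket/clip.flac"}))       # → True
print(is_valid_recognition_audio({"content": "AAEC", "uri": "gs://x/y"}))    # → False
```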
For long-running recognition you can also refer to this link: https://developers.google.com/resources/api-libraries/documentation/speech/v1/java/latest/com/google/api/services/speech/v1/Speech.SpeechOperations.html
It shows how to use the googleapiclient.discovery Python package.
For a long-running request, just use the following method of the Python class:
...
service_request = service.speech().longrunningrecognize(
    body={
        "config": {
            "encoding": "FLAC",
            "languageCode": "en-US",
            "enableWordTimeOffsets": True
        },
        "audio": {
            "uri": 'gs://speech-clips/' + self.audio_fqid
        }
    }
)
...
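Note that longrunningrecognize returns an operation resource rather than the transcript itself; you poll it until its done flag is true and then read response.results. A sketch of pulling the transcripts out of a finished operation, assuming the v1 response shape from the docs linked above (the sample operation dict below is made up for illustration):

```python
def extract_transcripts(operation):
    """Return top-alternative transcripts from a finished operation
    resource, or None while the operation is still running."""
    if not operation.get("done"):
        return None  # still running; poll again later
    results = operation.get("response", {}).get("results", [])
    return [alt["transcript"]
            for result in results
            for alt in result.get("alternatives", [])[:1]]

# Hypothetical finished operation resource:
sample_operation = {
    "done": True,
    "response": {
        "results": [
            {"alternatives": [{"transcript": "hello world",
                               "confidence": 0.92}]}
        ]
    }
}

print(extract_transcripts(sample_operation))  # → ['hello world']
```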