Cloud Natural Language API and Cloud Speech API opened as betas for analysing large chunks of text and speech
Looking to sell customers better tools for extracting value from large sets of unstructured data, Google has released beta versions of two new machine learning APIs for its Google Cloud Platform.
The tools, the Cloud Natural Language API and Cloud Speech API, are designed for digging into gargantuan text and audio files and pulling out information on specified topics such as people, locations, dates and events.
This means organisations can run large-scale analyses of text and audio to produce fine-grained information on customers or users.
“You can use it to understand sentiment about your product on social media or parse intent from customer conversations happening in a call centre or a messaging app,” explained Google on the Cloud Natural Language API product page.
“You can analyse text uploaded in your request or integrate with your document storage on Google Cloud Storage.”
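To make the text-analysis workflow above concrete, here is a minimal sketch of the JSON body such a request would carry, based on the Natural Language API's REST interface (`documents:analyzeSentiment`). The endpoint URL and the example sentence are illustrative assumptions, not taken from the article; only Python's standard library is used.

```python
import json

# Assumed REST endpoint for sentiment analysis (versioned paths may
# differ between the beta and later releases of the API).
API_URL = "https://language.googleapis.com/v1/documents:analyzeSentiment"

def build_sentiment_request(text):
    """Build the JSON body for analysing sentiment of plain text.

    The "document" and "encodingType" fields follow the structure of
    Google's REST documentation for the Natural Language API.
    """
    return json.dumps({
        "document": {
            "type": "PLAIN_TEXT",  # inline text, rather than a Cloud Storage URI
            "content": text,
        },
        "encodingType": "UTF8",
    })

# Hypothetical example: a customer comment from social media.
body = build_sentiment_request("The delivery was fast and the produce was fresh.")
print(body)
```

For text stored in Google Cloud Storage, the `"content"` field would be replaced by a `"gcsContentUri"` pointing at the document, matching the integration the quote describes.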
British online supermarket and technology company Ocado said it is already using the Natural Language API and considers it a viable alternative to its own machine learning language analyser.
“NL API has shown it can accelerate our offering in the natural language understanding area and is a viable alternative to a custom model we had built for our initial use case,” said Ocado’s head of data Dan Nelson.
Google Cloud Speech API lets developers convert audio to text by applying neural network models in an API. Google said that the API recognises over 80 languages and variants.
“You can transcribe the text of users dictating to an application’s microphone, enable command-and-control through voice, or transcribe audio files, among many other use cases,” said Google.
“Enterprises and developers now have access to speech-to-text conversion in over 80 languages, for both apps and IoT devices. Cloud Speech API uses the voice recognition technology that has been powering your favorite products such as Google Search and Google Now.”
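As a rough illustration of the speech-to-text conversion Google describes, the sketch below builds the JSON body for a recognition request in the shape used by the Speech API's REST `speech:recognize` method. The endpoint, encoding settings and placeholder audio bytes are assumptions for illustration; a real call would send base64-encoded audio captured from a microphone or file.

```python
import base64
import json

# Assumed REST endpoint (the beta used a versioned path such as v1beta1).
API_URL = "https://speech.googleapis.com/v1/speech:recognize"

def build_recognize_request(audio_bytes, language_code="en-GB"):
    """Build a JSON body asking the API to transcribe raw audio.

    language_code selects one of the 80+ supported languages and
    variants mentioned by Google.
    """
    return json.dumps({
        "config": {
            "encoding": "LINEAR16",      # uncompressed 16-bit PCM
            "sampleRateHertz": 16000,    # sample rate of the audio below
            "languageCode": language_code,
        },
        "audio": {
            # Audio is sent base64-encoded inside the JSON payload.
            "content": base64.b64encode(audio_bytes).decode("ascii"),
        },
    })

# Stand-in audio: one second of silence at 16 kHz, 16-bit mono.
request_body = build_recognize_request(b"\x00\x00" * 16000)
print(json.loads(request_body)["config"]["languageCode"])
```

The response would contain the transcribed text with confidence scores, which is what lets an app wire dictation or voice commands to this one call.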
More than 5,000 companies signed up for Google’s Speech API alpha, including video chat app HyperConnect, which uses the Cloud Speech and Translate APIs to transcribe and translate conversations between people who speak different languages.
The Speech API also supports word hints: custom words or phrases relevant to the context can be added to API calls to improve recognition. One example is a smart TV listening for ‘rewind’ and ‘fast-forward’.