This article is brought to you by the Eden AI team. We let you test and use in production a large number of AI engines from different providers, directly through our API and platform. If you are a solution provider and want to integrate Eden AI, contact us at: email@example.com
In this article, we will see how to easily integrate a speech recognition engine into your project, and how to choose and access the right engine for your data.
Speech recognition technology turns audio content into written text. It is also called automatic speech recognition (ASR) or computer speech recognition, and is based on acoustic modeling and language modeling. Note that it is commonly confused with voice recognition: speech recognition translates speech from a verbal format to a textual one, whereas voice recognition seeks to identify an individual speaker's voice.
In 1952, Bell Laboratories designed the first speech recognition system, which could recognize a single voice speaking digits aloud. Ten years later, IBM introduced “Shoebox”, which understood and responded to 16 English words.
In the early 1970s, the U.S. Department of Defense’s ARPA funded a five-year speech understanding program; by 1976, the resulting systems could recognize just over 1,000 words.
A key turning point came with the popularization of Hidden Markov Models (HMMs) in the mid-1980s. HMMs use probability functions to determine the most likely words to transcribe.
The next big breakthrough came in the late 1980s with the introduction of neural networks, another inflection point for ASR.
You can use speech recognition in numerous fields, and specific models are sometimes trained for those fields. Here are some common use cases:
When you need a speech recognition engine, you have two options:
The only reliable way to select the right provider is to benchmark different providers’ engines with your own data and choose the best one, or to combine the results of several engines. You can also compare prices, if cost is one of your priorities, and do the same for speed.
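Benchmarking with your own data can be as simple as scoring each engine's transcript against a reference transcript with word error rate (WER). Below is a minimal sketch; the WER function is a standard word-level edit-distance computation, and the provider names and transcripts are illustrative placeholders, not real engine outputs.

```python
def wer(reference: str, hypothesis: str) -> float:
    """Word error rate: word-level edit distance divided by reference length."""
    ref, hyp = reference.split(), hypothesis.split()
    # Dynamic-programming table for Levenshtein distance over words.
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i
    for j in range(len(hyp) + 1):
        d[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,          # deletion
                          d[i][j - 1] + 1,          # insertion
                          d[i - 1][j - 1] + cost)   # substitution
    return d[len(ref)][len(hyp)] / len(ref)

# Hypothetical transcripts from two engines on the same audio sample.
reference = "the quick brown fox jumps over the lazy dog"
results = {
    "provider_a": "the quick brown fox jumps over a lazy dog",
    "provider_b": "the quick brown fox jumped over lazy dog",
}
for provider, transcript in results.items():
    print(provider, round(wer(reference, transcript), 3))
```

Run this over a representative sample of your own audio and pick the engine with the lowest average WER (or ensemble several engines).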
This method is the best in terms of performance and optimization, but it has several drawbacks:
This is where Eden AI becomes very useful. You just have to create an Eden AI account and subscribe, and you get access to many providers’ engines for many technologies, including speech recognition. The platform lets you benchmark and visualize results from different engines, and centralizes billing for the use of different providers.
Eden AI provides the same easy-to-use API, with the same documentation, for every technology. You can use the Eden AI API to call Speech-to-Text engines with the provider as a simple parameter. With only a few lines, you can set up your project in production:
Test the API:
Here is the code in Python (GitHub repo) that lets you test Eden AI for speech-to-text:
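As a sketch of what such a call looks like: the endpoint path, parameter names (`providers`, `file_url`, `language`), and provider identifiers below follow Eden AI's public documentation at the time of writing and may have changed; the API key and audio URL are placeholders you must replace with your own.

```python
API_URL = "https://api.edenai.run/v2/audio/speech_to_text_async"

def build_request(api_key: str, audio_url: str,
                  providers: str = "google,amazon",
                  language: str = "en-US") -> tuple[dict, dict]:
    """Assemble the headers and payload for a speech-to-text job."""
    headers = {"Authorization": f"Bearer {api_key}"}
    payload = {
        "providers": providers,   # comma-separated list of engines to run
        "file_url": audio_url,    # publicly reachable audio file
        "language": language,
    }
    return headers, payload

def transcribe(api_key: str, audio_url: str) -> dict:
    """Launch an async transcription job and return the raw JSON response."""
    import requests  # imported here so the payload helper stays dependency-free
    headers, payload = build_request(api_key, audio_url)
    response = requests.post(API_URL, headers=headers, json=payload)
    response.raise_for_status()
    return response.json()  # contains a job id to poll for the results

if __name__ == "__main__":
    print(transcribe("YOUR_API_KEY", "https://example.com/sample.wav"))
```

Because the endpoint is asynchronous, the response contains a job identifier rather than the transcript itself; you then poll the corresponding result endpoint (see the Eden AI documentation) until the job completes.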
Eden AI also lets you compare these engines directly in the web interface, without having to code:
There are numerous speech recognition engines available on the market: it is impossible to know all of them, or which ones perform well. The best way to integrate speech recognition technology is a multi-cloud approach, which lets you reach the best performance and prices for your data and project. This approach may seem complex, but Eden AI simplifies it for you by centralizing the best providers’ APIs.
In this article, we also explain how the mapping between the input language and the languages supported by the providers is performed, to facilitate access to our AI engines.