07 Nov 2017
Automatic Linguist uses AI to learn new languages
Speechmatics, the automatic speech recognition company, has launched Automatic Linguist, a new Artificial Intelligence-powered framework. With speech recognition being built into many new products, from mobile phones to assistants like the Amazon Alexa, televisions and much more, the technology is growing rapidly in importance.
The new Automatic Linguist significantly improves the speed at which new languages are built for use in speech-to-text transcription. This reduces the time it takes to deploy new languages and enables systems to be introduced far more quickly than before.
Automatic Linguist has the potential to learn any language in the world in a matter of days, enabling Speechmatics to expand its service offering to any region globally, even those that have previously been uneconomic to serve. The system also allows for the rapid iteration, improvement and adaptation of existing languages.
Using machine learning, Automatic Linguist can learn the initial base of a language in under a day. This is partly because it was purpose-built from the ground up to apply patterns from one language to another. For example, the production-ready Hindi system was built within two weeks after a large corporate customer challenged the company that this would not be possible. In addition, the Automatic Linguist system made 23%* fewer errors than the market leaders, which on its own was a significant advantage.
So far Automatic Linguist has learnt 28 languages, including Japanese, Hindi, Russian and Korean, in rapid succession, with the focus now shifting to languages that have fewer native speakers worldwide. Traditionally, building a new language pack takes months and is a costly, labour-intensive process. It involves gathering vast amounts of data, building a one-off system and continually refining it with input from experts in that language. This is time-consuming, expensive and difficult, meaning only the world's most widely spoken languages remain the focus.
Most languages have inherent similarities in their fundamental sounds (often represented as phonemes) and grammatical structures. Automatic Linguist (AL) can recognise patterns within and across languages and apply them to a new language build, significantly reducing the time and data required to build a new language.
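To give a flavour of why shared sounds help, here is a minimal illustrative sketch in Python. The phoneme inventories below are simplified toy examples, and the Jaccard overlap measure is an assumption chosen for illustration, not Speechmatics' actual method or data; the point is only that a high overlap between a known language and a new one means much of the acoustic groundwork already exists.

```python
# Toy phoneme inventories (simplified, illustrative only).
SPANISH = {"p", "b", "t", "d", "k", "g", "m", "n", "s", "l", "r",
           "a", "e", "i", "o", "u"}
ITALIAN = {"p", "b", "t", "d", "k", "g", "m", "n", "s", "l", "r",
           "a", "e", "i", "o", "u", "ts"}
JAPANESE = {"p", "b", "t", "d", "k", "g", "m", "n", "s", "h", "w",
            "a", "e", "i", "o", "u"}

def phoneme_overlap(lang_a, lang_b):
    """Jaccard similarity of two phoneme inventories (0.0 to 1.0)."""
    return len(lang_a & lang_b) / len(lang_a | lang_b)

# A high score suggests models trained on one language transfer well
# to the other, so less fresh data is needed for the new build.
print(f"Spanish/Italian:  {phoneme_overlap(SPANISH, ITALIAN):.2f}")   # 0.94
print(f"Spanish/Japanese: {phoneme_overlap(SPANISH, JAPANESE):.2f}")  # 0.78
```

Even the lower Spanish/Japanese figure shows substantial shared ground, which is the intuition behind transferring patterns across languages rather than starting each build from scratch.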
Benedikt von Thüngen, CEO at Speechmatics, explained: “The world is increasingly connected and technologically dependent. Serving many of our international blue-chip customers, such as Adobe, requires our product to be available in all languages. Given resource constraints we had to come up with something new. Combining our deep understanding of speech recognition systems and machine learning, we built AL and tested our hypothesis that there are sufficient similarities between languages so that computers can learn them. After building the major European languages, we tried AL on Japanese and it worked. This now enables us to pursue building any language in the world and support our global customer base.”
Tom Ash, Speech Recognition Director at Speechmatics and recent winner of the ‘Speech Luminary’ award, commented: “We are already seeing a shift to a speech-enabled future where voice is the primary form of communication. Transcription not only eases the lives of many people, but opens the door for new opportunities, especially in regions with lower literacy rates. There are over 7,000 languages in the world, and our ultimate goal is to make speech recognition technology available to as many as possible.”
By Ian Poole