French artificial intelligence (AI) developer Kyutai has introduced Moshi, an AI model with unprecedented vocal capabilities.
Kyutai unveiled its experimental prototype in Paris. This new type of technology makes it possible, for the first time, to communicate with an AI in a smooth, natural and expressive way.
Moshi has the potential to revolutionize the use of speech in the digital world. For instance, its text-to-speech capabilities are exceptional in their handling of emotion and of interaction between multiple voices.
Moshi is an audio language model that can listen and speak continuously, with no need for explicitly modelling speaker turns or interruptions. When talking to Moshi, you will notice that the UI displays a transcript of its speech. This does not come from an ASR, nor is it an input to a TTS; it is rather part of the integrated multimodal modelling of Moshi.
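To make that architectural point concrete, here is a minimal, purely illustrative Python sketch. The class, token values and generation loop below are hypothetical placeholders, not Kyutai's actual API: they only show the idea of a single autoregressive model that listens and speaks in one loop, emitting aligned audio and text tokens, so the on-screen transcript is a by-product of generation rather than the output of a separate ASR or the input to a separate TTS.

```python
# Illustrative sketch only: ToyAudioLM, JointStep and the token values are
# hypothetical stand-ins, not Kyutai's released code or API.

from dataclasses import dataclass
from typing import Iterator
import random


@dataclass
class JointStep:
    audio_token: int   # next codec token for the speech the model will utter
    text_token: str    # aligned text token: the transcript shown in the UI


class ToyAudioLM:
    """Hypothetical stand-in for an integrated audio language model."""

    VOCAB = ["hel", "lo", " there", "!"]

    def generate(self, incoming_audio: Iterator[int]) -> Iterator[JointStep]:
        # The model listens and speaks in one loop: each step conditions on
        # the incoming audio frames and its own past output, then emits an
        # (audio token, text token) pair -- no separate ASR or TTS stage.
        for i, _frame in enumerate(incoming_audio):
            text = self.VOCAB[i % len(self.VOCAB)]
            audio = random.randint(0, 1023)  # placeholder codec token
            yield JointStep(audio_token=audio, text_token=text)


if __name__ == "__main__":
    mic_frames = iter(range(8))  # stand-in for streaming microphone audio
    transcript = []
    for step in ToyAudioLM().generate(mic_frames):
        transcript.append(step.text_token)
        # step.audio_token would be decoded to sound by a neural audio codec
    print("UI transcript:", "".join(transcript))
```

In a cascaded pipeline, by contrast, the transcript would be produced by a standalone ASR model and then fed to a TTS engine, which is exactly the coupling the integrated approach avoids.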
Being compact, Moshi can also be installed locally and therefore run safely on an unconnected device. With Moshi, Kyutai intends to contribute to open research in AI and to the development of the entire ecosystem.
The code and weights of the models will soon be freely shared, which is also unprecedented for such technology. They will be useful both to researchers in the field and to developers working on voice-based products and services. This technology can therefore be studied in depth, modified, extended or specialized according to needs.
The community will in particular be able to extend Moshi’s knowledge base and factuality, which are currently deliberately limited in such a lightweight model, while exploiting its unparalleled voice interaction capabilities.