
Amrisha Prashar
on 7 July 2016

Mycroft: The open source answer to natural language platforms


We’re thrilled to be working with Mycroft, the open source answer to proprietary natural language platforms. Mycroft has adopted Ubuntu Core and Snaps to deliver their software to Mycroft hardware, and is also using Snaps to enable desktop users to install the software regardless of the Linux distribution they are using! Mycroft’s CEO, Joshua Montgomery, explains more in his piece below.

One of the best things about the open source community is that it brings in talent from unexpected places. When people think about tech they usually think of San Francisco, Tokyo or London – not Kansas. But thanks to the inclusiveness of open source, little Lawrence, Kansas is home to one of our community’s most innovative projects – Mycroft.

Mycroft is the open source answer to proprietary natural language platforms like Apple’s Siri or Amazon’s Alexa. Users can speak to the software naturally and receive a natural response. For example, if a user asks “Mycroft, how is the weather in Seattle?”, the system responds by saying “It is currently 60 degrees and raining. It is always 60 degrees and raining in Seattle.” In addition to voice responses, Mycroft can launch applications and initiate commands, so the platform can be used as a voice interface for almost anything. Developers are currently working to integrate Mycroft into devices ranging from a wireless speaker to an automobile.

Funded through a successful Kickstarter campaign, the team has been developing the software since April 2015 and released Mycroft to the public on May 20. Though the Kickstarter campaign revolved around the Mycroft reference device – a wireless speaker based on Raspberry Pi and Arduino – the software can be run on anything from a tablet to an automobile.

The Mycroft project has now released three packages – Adapt, Mimic and Mycroft Core. Adapt is an intent parser that takes in natural language and uses it to determine the user’s intent. Mimic is a text-to-speech engine based on the voice of Ubuntu community manager Alan Pope; it takes in text and converts it to audio for playback. Mycroft Core is the software that ties everything together and makes it useful. Mycroft Core includes a keyword recognition loop and a framework for deploying skills. Skills can include anything from executing a shell command to searching DuckDuckGo. The Mycroft Skills Framework makes it easy for developers to implement new abilities for the platform. Skills are limited only by a developer’s imagination and can include anything from controlling a drone to answering questions about Pokémon.
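The approach Adapt takes can be sketched, in much simplified form, as a parser that tags known vocabulary words in an utterance and matches the tagged entities against each registered intent’s requirements. The sketch below is illustrative only; the names used here (`IntentParser`, `Intent`, `parse`) are hypothetical and not the real Adapt API.

```python
from dataclasses import dataclass, field

@dataclass
class Intent:
    """A named intent with required and optional entity types."""
    name: str
    required: set
    optional: set = field(default_factory=set)

class IntentParser:
    def __init__(self):
        self.intents = []
        self.vocab = {}  # keyword -> entity type

    def register_entity(self, keyword, entity_type):
        self.vocab[keyword.lower()] = entity_type

    def register_intent(self, intent):
        self.intents.append(intent)

    def parse(self, utterance):
        # Tag each word of the utterance that matches known vocabulary
        tags = {}
        for word in utterance.lower().split():
            if word in self.vocab:
                tags[self.vocab[word]] = word
        # Return the first intent whose required entities are all present
        for intent in self.intents:
            if intent.required <= set(tags):
                matched = {t: w for t, w in tags.items()
                           if t in intent.required | intent.optional}
                return {"intent": intent.name, **matched}
        return None

parser = IntentParser()
parser.register_entity("weather", "WeatherKeyword")
parser.register_entity("seattle", "Location")
parser.register_intent(Intent("WeatherIntent", {"WeatherKeyword"}, {"Location"}))

result = parser.parse("Mycroft how is the weather in Seattle")
print(result)
# {'intent': 'WeatherIntent', 'WeatherKeyword': 'weather', 'Location': 'seattle'}
```

A skill handler would then dispatch on the returned intent name, filling in a default location when the optional entity is absent. The real Adapt library layers fuzzy matching and confidence scoring on top of this basic keyword/entity idea.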

The idea behind Mycroft is to allow users to voice enable any type of device – desktops, mobile devices, speakers, robots – anything that can benefit from natural language processing. That is why the company is adopting Ubuntu Core. By deploying Mycroft on Ubuntu Core it is easy to install and update Mycroft without worrying about the underlying environment. This frees our developers to focus on creating a superb natural user interaction and fantastic skills rather than operating system issues.

Not only are we using Ubuntu Core and Snaps to deliver our software to the Mycroft hardware, but we are also working to use Snaps to enable desktop users to install the software regardless of the Linux distribution they are using. We see Snaps as a fantastic way to ensure users get the best Mycroft experience, by not having to worry about system library version mismatches or old versions of Mycroft in a distribution’s repositories. We are confident that delivering Mycroft using Snappy will provide a positive experience for our users.

The ultimate goal of the Mycroft project is to provide an experience so natural that it is impossible for users to determine if they are talking to a human or a machine. This will enable users to interact with their technology naturally.

Of course, there is a lot of work to be done to achieve this goal. The project’s speech-to-text component, OpenSTT, needs to be completed, the Mimic engine needs support for additional languages and Mycroft Core needs enhancements. Developers interested in trying Mycroft or contributing to the project can find the source code at http://docs.mycroft.ai
