If you remember, at last year’s Robot Film Festival, Nao 1337 walked the red carpet holding my hand. I have since taken some time to document the behaviour better, improve it a bit and make a video description of it, which you can see below.
Of course, you can find the code and the project page at the Developer Program Site.
The Control behaviour is one of the first behaviours I created for Nao 1337. Although it was not very popular at the app contest, I truly believe it is the only useful one so far. It is very simple: its main goal is to let the user make Nao sit or stand, change the volume level and the language, and launch programs. This set of features may have been superseded by the current Nao Life behaviours from Aldebaran, but I have not tested those yet. One very important requirement for me is to be able to perform all these actions while away from a computer or any other device, and with no WiFi.
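For illustration, the core of such a behaviour can be sketched as a small dispatch table from recognised keywords to robot actions. This is a hypothetical sketch, not the actual code: the proxy method names (goToPosture, setOutputVolume, setLanguage) match the real NAOqi modules ALRobotPosture, ALAudioDevice and ALTextToSpeech, but the wiring, function names and keyword list here are my own assumptions.

```python
# Hypothetical sketch of a keyword-to-action dispatcher for a Control-style
# behaviour. In a real behaviour the proxy objects would come from naoqi,
# e.g. posture = ALProxy("ALRobotPosture", "127.0.0.1", 9559).

def build_dispatch(posture, audio, tts):
    """Return a mapping from recognised words to zero-argument actions."""
    return {
        "sit":     lambda: posture.goToPosture("Sit", 0.5),
        "stand":   lambda: posture.goToPosture("Stand", 0.5),
        # Volume is clamped to the 0-100 range NAOqi expects.
        "louder":  lambda: audio.setOutputVolume(
                       min(100, audio.getOutputVolume() + 10)),
        "quieter": lambda: audio.setOutputVolume(
                       max(0, audio.getOutputVolume() - 10)),
        "english": lambda: tts.setLanguage("English"),
        "french":  lambda: tts.setLanguage("French"),
    }

def handle(word, dispatch):
    """Run the action for a recognised word; ignore unknown words."""
    action = dispatch.get(word.lower())
    if action is None:
        return False
    action()
    return True
```

The point of the table is that the same dispatcher works whatever recognition engine feeds it words, which is exactly what makes the behaviour usable offline with the built-in recogniser.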
As always, you can find the code and further information about the behaviour on the project page at the Developer Program site.
This is my first cloud-robotics application and, although there are many possible improvements, it is immediately useful and shows the power of cloud robotics.
Google Chrome Speech-to-Text
I created a behaviour for Nao that uses the same speech-to-text service found in Android devices and Google Chrome to translate spoken words into text. It works better than the standard speech recognition engine and can be used for many more things. I’m eager to see what other Nao developers do with it.
See a quick demo of the speech recognition:
Nao developers can find the code here.
For those who want to try it on their (Linux) computers, here is a sequence of commands that records 5 seconds of sound, encodes it in FLAC format and sends it to Google, then writes Google’s response to a text file.
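A sketch of that pipeline, assuming the standard arecord, flac and wget tools and the legacy, unofficial speech-api v1 endpoint Chrome used at the time (Google has since shut it down, so expect the request itself to fail today; file names and sample rate are illustrative):

```shell
# Record 5 seconds of 16 kHz mono audio from the default microphone.
arecord -d 5 -r 16000 -c 1 -f S16_LE -t wav recording.wav

# Encode it to FLAC, the format the endpoint accepted (-f overwrites).
flac -f recording.wav -o recording.flac

# POST the FLAC data to the legacy speech-api v1 endpoint and save the
# JSON response to a text file. The rate in the header must match arecord's.
wget -q -U "Mozilla/5.0" \
  --post-file=recording.flac \
  --header="Content-Type: audio/x-flac; rate=16000" \
  -O response.txt \
  "https://www.google.com/speech-api/v1/recognize?lang=en-US&client=chromium"

cat response.txt
```

Changing `lang=en-US` in the query string selects the recognition language, which is what lets the behaviour reuse the same pipeline for several languages.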