Manuel Meyer told us about the different Azure Cognitive Services and demonstrated most of them. The Microsoft Cognitive Services are a collection of APIs and SDKs built on machine learning, which let you easily add intelligence to your applications: emotion detection, facial and speech recognition, and more. Find more here.
Anyway, everything sounds interesting, modern and easy to use, so I started another learning project in which I connect my Raspberry Pi to an Azure Cognitive Service. Using the Face API, I will let Azure identify the age of a person in a picture I send to the cloud. Exactly, it's magic! On pressing a button, a picture will be taken with a camera module and sent to the Face API.
To dos:
- Install Raspbian (the operating system for the Raspberry Pi).
- Configure the camera module.
- Connect the camera module.
- Register for a free trial of the Face API.
- Develop the logic to capture photos and send them to the Azure Cognitive Service. Finally, display the response.
- Moment of truth (Run all components together).
Hardware/Components used:
- Raspberry Pi 2
- Power supply
- Camera module
- Micro SD with 8 GB
- Push button
- Breadboard
- Male-to-female jumper wires
- 3rd/helping hand
Software used:
- Operating system installer NOOBS
- SD Formatter
- Python 2.7.9
The steps:
1. Get your Raspberry Pi ready:
2. Configure the camera module:
You can test your camera now by executing "raspistill -v -o testpicture.jpg" in the terminal.
-v = verbose
-o = output path or file name
Here I'm using a 3rd hand to hold the camera. Since the camera module doesn't support autofocus, it is very helpful to have something that keeps the camera steady while taking pictures. Here is what it looks like:
3. Connect the push button to the Raspberry Pi:
First, I fitted the push button into the breadboard. Then, I connected the breadboard to the Raspberry Pi's GND pin and GPIO 23 (input) using the jumper wires. Here you can see this work of art :)
I wrote a small piece of code in Python to test the push button. Here is how it looks:
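A minimal sketch of such a test script, assuming the button sits between GND and GPIO 23 (BCM numbering) and the Pi's internal pull-up resistor is enabled, could look like this:

```python
# Push button test sketch: the button is assumed to be wired between
# GND and GPIO 23, so the input reads LOW while the button is pressed.
import time
import RPi.GPIO as GPIO

BUTTON_PIN = 23

GPIO.setmode(GPIO.BCM)
GPIO.setup(BUTTON_PIN, GPIO.IN, pull_up_down=GPIO.PUD_UP)

try:
    while True:
        if GPIO.input(BUTTON_PIN) == GPIO.LOW:
            print("Button pressed!")
            time.sleep(0.3)  # simple debounce
        time.sleep(0.05)
finally:
    GPIO.cleanup()
```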
4. Register for a free trial of the Face API:
Save the keys and the endpoint, since you'll need this data later to send requests to the Face API. You can of course already try the power of the Face API on this website. You only have to enter your subscription key in the Ocp-Apim-Subscription-Key field and a URL in the request body, which points to a picture. You will find the Send button at the end of the page ;)
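If you prefer to try it from code instead of the browser, a small sketch using Python's requests library could look like the following. The endpoint region and picture URL are just placeholders, so replace them with your own values from the Azure portal:

```python
# Sketch of a Face API detect request with a picture URL in the body.
import requests

subscription_key = "YOUR_SUBSCRIPTION_KEY"
# Replace "westeurope" with the region of your own Face API resource.
face_api_url = "https://westeurope.api.cognitive.microsoft.com/face/v1.0/detect"

headers = {
    "Ocp-Apim-Subscription-Key": subscription_key,
    "Content-Type": "application/json",
}
params = {"returnFaceAttributes": "age"}
body = {"url": "https://example.com/some-picture.jpg"}  # placeholder picture

response = requests.post(face_api_url, headers=headers, params=params, json=body)
print(response.json())
```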
5. Develop the logic:
I'm using the code already developed in step 3 to receive the inputs from the push button. There is still work to be done, since I want the picture to be taken by pressing the button. The next step is to include the picamera library in the code. To be able to use this library, execute "sudo apt-get install python-picamera" in the terminal to install it. In order to get the age of the person, I passed the optional parameter "returnFaceAttributes" with the value age. In the code I published, I also use other keywords like gender, glasses, facialHair etc. It's very interesting to see what the Face API is able to recognize. The rest of the code builds on top of Microsoft's Python quickstart and includes comments so you can follow it step by step.
I saved the code as raspberrypiToAzureFaceApi.py and shared it through GitHub. Here is the entire code:
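The full script lives on GitHub; a condensed sketch of the same idea, assuming the wiring from step 3 and the key and endpoint from step 4 (both placeholders below), looks roughly like this:

```python
# Condensed sketch: wait for the button, take a picture with the camera
# module and send it to the Face API, then print the estimated age.
import time
import requests
import picamera
import RPi.GPIO as GPIO

BUTTON_PIN = 23
IMAGE_PATH = "/home/pi/capture.jpg"

SUBSCRIPTION_KEY = "YOUR_SUBSCRIPTION_KEY"
# Replace "westeurope" with the region of your own Face API resource.
FACE_API_URL = "https://westeurope.api.cognitive.microsoft.com/face/v1.0/detect"

GPIO.setmode(GPIO.BCM)
GPIO.setup(BUTTON_PIN, GPIO.IN, pull_up_down=GPIO.PUD_UP)

camera = picamera.PiCamera()


def analyze_picture(path):
    # Send the raw picture bytes to the Face API and ask for some attributes.
    headers = {
        "Ocp-Apim-Subscription-Key": SUBSCRIPTION_KEY,
        "Content-Type": "application/octet-stream",
    }
    params = {"returnFaceAttributes": "age,gender,glasses,facialHair"}
    with open(path, "rb") as image:
        response = requests.post(FACE_API_URL, headers=headers,
                                 params=params, data=image)
    return response.json()


try:
    print("Press the button to take a picture...")
    while True:
        if GPIO.input(BUTTON_PIN) == GPIO.LOW:
            camera.capture(IMAGE_PATH)
            faces = analyze_picture(IMAGE_PATH)
            for face in faces:
                print("Estimated age: {}".format(face["faceAttributes"]["age"]))
            time.sleep(1)  # simple debounce
        time.sleep(0.05)
finally:
    GPIO.cleanup()
```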
6. Moment of truth (run all components together):
Summary
Here you can see the Raspberry Pi with the camera module and the push button:
Links:
- Camera module documentation
- Azure Cognitive Service Face API pricing
- Microsoft Python SDK for the Cognitive Face API
- Try Face API
- Quickstart Face API
- Python picamera