Sunday, July 16, 2017

My Raspberry PI knows your age!!!

At Developer Week 2017 (an IT conference held in Nuremberg) I was inspired by Manuel Meyer's session "Intelligence-as-a-Service: Creating applications of tomorrow with Azure Cognitive Services".

Manuel Meyer told us about the different Azure Cognitive Services and demonstrated most of them. Microsoft Cognitive Services is a collection of APIs and SDKs built on machine learning, which lets you easily add intelligence to your applications. There are services for emotion detection, facial recognition, speech recognition etc. Find more here.

Anyway, everything sounded interesting, modern and easy to use, so I started another learning project in which I connect my Raspberry Pi to an Azure Cognitive Service. Using the Face API, I will let Azure estimate the age of a person in a picture I send to the cloud. Exactly, it's magic! On pressing a button, a picture will be taken with a camera module and sent to the Face API.

To dos:

  1. Install Raspbian (operating system for Raspberry PI).
  2. Configure the camera module.
  3. Connect the camera module.
  4. Register for a free trial of the Face API.
  5. Develop the logic to capture photos and send them to the Azure Cognitive Service. Finally, display the response.
  6. Moment of truth (Run all components together).

Hardware/Components used:

- Raspberry PI 2
- Power supply
- Camera module
- Micro SD with 8 GB
- Push button
- Breadboard
- Male to female jumper wire cable
- 3rd/helping hand

Software used:

- Operating system installer NOOBS
- SD Formatter
- Python 2.7.9

The steps:

1. Get your Raspberry ready:

A Raspberry Pi is a small computer that can be used in electronics projects, but also as a normal computer, since it supports operating systems with a graphical interface. If you haven't installed an operating system yet, you will find very good documentation on the Raspberry Pi website about how to get your Raspberry configured. I used SD Formatter to format my SD card before unzipping NOOBS onto it.

2. Configure the camera module:

The next step is to configure the camera module. If you have already configured your camera, just skip this step. In order to use the camera, you have to enable it first. Open the terminal and enter "sudo raspi-config". After pressing "Enter" the configuration tool will appear. Enable the support for the Raspberry Pi camera under "6 Enable Camera". This change requires a system reboot!

You can test your camera now by executing raspistill -v -o testpicture.jpg in the terminal.
-v = verbose
-o = path or file name
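
The same raspistill call can also be fired from Python, which comes in handy later when the capture should be triggered from code. A small sketch using only the standard library (the file name is just an example):

```python
import subprocess

def build_raspistill_cmd(filename, verbose=True):
    """Build the raspistill command line shown above."""
    cmd = ["raspistill", "-o", filename]
    if verbose:
        cmd.insert(1, "-v")  # verbose output, as in the terminal example
    return cmd

def take_picture(filename="testpicture.jpg"):
    # Runs raspistill; this only works on the Pi with the camera enabled.
    subprocess.check_call(build_raspistill_cmd(filename))

print(build_raspistill_cmd("testpicture.jpg"))
# ['raspistill', '-v', '-o', 'testpicture.jpg']
```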

Here I'm using a third hand to hold the camera. Since the camera module doesn't support autofocus, it is very helpful to have something that keeps the camera steady while taking pictures. Here is what it looks like:

3. Connect the push button with the Raspberry PI:

I thought it would be fancy to have the pictures taken after pushing a physical button, so I added this step to the project. You can learn more about push buttons by watching this video. Since there are a lot of tutorials about how to use a push button with a Raspberry Pi, and I'm not an expert on the topic, I will only briefly show how I connected the push button to the Raspberry Pi.

First, I fitted the push button into the breadboard. Then, I connected the breadboard to the Raspberry's GND and GPIO 23 (input) pins using the jumper wire cables. Here you can see this work of art :)

I wrote a small piece of code in Python to test the push button. Here is how it looks:
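
A minimal sketch of such a button test could look like this (the pull-up wiring is an assumption based on the GND/GPIO 23 connection above; adjust it to how your button is actually wired):

```python
try:
    import RPi.GPIO as GPIO  # only available on the Raspberry Pi itself
    ON_PI = True
except ImportError:
    ON_PI = False  # e.g. when reading the code on another machine

BUTTON_PIN = 23  # BCM numbering, matching the wiring described above

def main():
    GPIO.setmode(GPIO.BCM)
    # The button connects GPIO 23 to GND, so use the internal pull-up resistor:
    GPIO.setup(BUTTON_PIN, GPIO.IN, pull_up_down=GPIO.PUD_UP)
    print("Waiting for a button press...")
    GPIO.wait_for_edge(BUTTON_PIN, GPIO.FALLING)  # blocks until pressed
    print("Button pressed!")
    GPIO.cleanup()

if __name__ == "__main__" and ON_PI:
    main()
```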

4. Register for a free trial of the Face API:

At the time of writing (July 2017) there are free trials for the Cognitive Services APIs. Since I don't have an account for the Face API yet, I will register for a free 30-day trial, which includes 30,000 transactions at up to 20 per minute. You can create your own account here. Once you've registered you'll see the following window:

Save the keys and the endpoint, since you'll need this data later to send requests to the Face API. You can of course already try the power of the Face API on this website. You only have to enter your subscription key in the Ocp-Apim-Subscription-Key field and a URL in the request body which points to a picture. You will find the Send button at the end of the page ;)
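
The same detect request can also be sent from Python. A hedged sketch, assuming the westus region (use the base URL shown on your own subscription page); the sample response at the bottom only illustrates the documented response shape, not a real result:

```python
import json

def parse_ages(response_body):
    """Extract the estimated age of every detected face from a detect response."""
    faces = json.loads(response_body)
    return [face["faceAttributes"]["age"] for face in faces]

def detect_from_url(image_url, subscription_key,
                    base_url="https://westus.api.cognitive.microsoft.com/face/v1.0"):
    import requests  # third-party; install with "pip install requests"
    response = requests.post(
        base_url + "/detect",
        params={"returnFaceAttributes": "age"},
        headers={"Ocp-Apim-Subscription-Key": subscription_key,
                 "Content-Type": "application/json"},
        json={"url": image_url},
    )
    response.raise_for_status()
    return parse_ages(response.text)

# Example response body (shape as documented for the Face API):
sample = '[{"faceId": "abc", "faceAttributes": {"age": 28.0}}]'
print(parse_ages(sample))  # [28.0]
```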

5. Develop the logic:

I chose Python to develop the solution because it is installed by default in Raspbian. Actually, there isn't a lot of work to be done. Microsoft has already published documentation describing how to call the Face API from Python. It's pretty easy to use, since you just have to configure key information like your subscription key, regional base URL and an image URL.

I'm using the code already developed in step 3 to receive the input from the push button. There is still work to be done, since I want the picture to be taken when the button is pressed. The next step is to include the picamera library in the code. To be able to use this library, execute "sudo apt-get install python-picamera" in the terminal to install it.

In order to get the age of the person, I passed the optional parameter "returnFaceAttributes" with the value age. In the code I published, I also use other keywords like gender, glasses, facialHair etc. It's very interesting to see what the Face API is able to recognize. The rest of the code builds on Microsoft's Python quickstart and includes comments, so you can follow it step by step.

I saved the code and shared it on GitHub. Here is the entire code:
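
The published script isn't reproduced in this post; what follows is a minimal sketch of how the pieces could fit together (button, then capture, then Face API call). The region, subscription key placeholder and pin number are assumptions carried over from the earlier steps:

```python
import io
import json

# Attributes requested from the Face API, as described above:
ATTRIBUTES = "age,gender,glasses,facialHair"

def build_detect_request(base_url, subscription_key):
    """Return URL, query parameters and headers for a binary-image detect call."""
    return (base_url + "/detect",
            {"returnFaceAttributes": ATTRIBUTES},
            {"Ocp-Apim-Subscription-Key": subscription_key,
             "Content-Type": "application/octet-stream"})

def main():
    # Hardware and network imports, only available on the Pi:
    import RPi.GPIO as GPIO
    import picamera
    import requests

    GPIO.setmode(GPIO.BCM)
    GPIO.setup(23, GPIO.IN, pull_up_down=GPIO.PUD_UP)
    camera = picamera.PiCamera()

    print("Press the button to take a picture...")
    GPIO.wait_for_edge(23, GPIO.FALLING)

    # Capture the photo into memory instead of a file:
    stream = io.BytesIO()
    camera.capture(stream, format="jpeg")

    url, params, headers = build_detect_request(
        "https://westus.api.cognitive.microsoft.com/face/v1.0",  # use your region
        "YOUR_SUBSCRIPTION_KEY")
    response = requests.post(url, params=params, headers=headers,
                             data=stream.getvalue())
    for face in response.json():
        print(json.dumps(face["faceAttributes"], indent=2))
    GPIO.cleanup()

if __name__ == "__main__":
    main()
```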

6. Moment of truth (run all components together):

Finally, the last step is here! It's time to test the code and the external components. I started the program and took a picture. Here you can see the program's output:


It's unbelievable, I was recognized as being much older than I really am... there must be an error somewhere :) Joking aside, I'm 28 years old and the Face API actually came very close to my real age, although after taking several pictures the results ranged from 27 to 41. In my opinion, the Face API is still improving and will become much more powerful in the future.

If I had had to write the face recognition code on my own, I wouldn't have gotten this far in such a short period of time. Since Microsoft is doing a very good job with the documentation, it was very easy to implement the API! At the end of the day, it is just calling an endpoint and getting a result, so you can focus on the other parts of a project that need more attention. Azure Cognitive Services are an opportunity to improve the capabilities of your products quickly and easily.

Here you can see the Raspberry PI with the camera module and the push button:


- Raspistill documentation
- Camera module documentation
- Azure Cognitive Service Face API pricing
- Microsoft Python SDK for the Cognitive Face API
- Try Face API
- Quickstart Face API
- Python PICamera

Sunday, July 2, 2017

Writing posts in Yammer with Azure Logic Apps

Azure Logic Apps are an easy way to automate processes, mostly without code and in a user-friendly way. They support connections to different systems: SharePoint, Yammer, RSS etc. Check all connectors here.

I was very interested in learning more about this Azure service, so I planned a small learning project in which new blog articles are automatically posted to Yammer.

First, I wrote down all the workflow steps:

  1. Check for new articles in the blog from Waldek Mastykarz
  2. If Waldek publishes a new blog article which contains SharePoint in the title, the logic app must post a new article to my Yammer group Development. The message must contain the title of the new blog article and a link to it.

Then I started creating a new logic app.

1. Create new logic app:

2. Add RSS connector:

Once in the Logic App Designer, I used an RSS connector to trigger the logic app hourly and check for new blog articles from Waldek.

3. Add condition:

Then I added an if condition which checks for the word "SharePoint" in the post's title. If the word is present, the Yammer post will be created; otherwise nothing happens.
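
Under the hood, the designer stores such a check as a workflow-definition expression. Roughly like this (a sketch; the exact field name depends on the RSS trigger's output):

```
@contains(triggerBody()?['title'], 'SharePoint')
```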

4. Add action:

Finally, I added the Yammer connector to the if condition. It will post a new article to my Yammer group Development. The message will contain the title of the new blog article and a link to it.

After finishing all the above steps, I saved my logic app and waited until Waldek published a new blog article. Here is the result in my Yammer group.

Here you can see the finished logic app:


I had never used Azure Logic Apps before, and it took me less than 5 minutes to create a new logic app which writes new posts to my Yammer group. You don't need to be a developer to use Logic Apps. In my opinion, the only skill needed is logical understanding and you are ready to go!


- What are Logic Apps?
- Logic Apps Pricing