Sunday, December 10, 2017

Application lifecycle management (ALM) APIs for SharePoint Framework solutions and SharePoint add-ins

Thank you, Microsoft, for finally publishing APIs which we can now use to manage the application lifecycle of our SharePoint Framework solutions and SharePoint add-ins.

Until the end of November 2017 it was very tricky to manage the application lifecycle of SharePoint solutions, since some of the available approaches weren't supported by Microsoft. If you've ever relied on the old SharePoint web services to reach your goals, you know how vulnerable a system becomes to changes made by Microsoft. But now... that is in the past!!!

In this blog post I'll present and explain the new ALM APIs.

At the end of the day it is just about executing REST calls


It doesn't matter whether you use CSOM or PnP PowerShell to work with the new APIs, because they are all just using REST behind the scenes. These are the benefits of working with CSOM or PnP PowerShell:

  • you don't need to take care of inconvenient things like passing the X-RequestDigest header while executing the calls.
  • you don't need to remember the REST APIs or build complex strings/code.
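
For illustration, here is roughly what one of these raw calls looks like in PowerShell. This is a minimal sketch: I assume $session already carries valid SharePoint Online authentication cookies, and the site URL and app id are placeholders.

$site = "https://yourtenant.sharepoint.com/sites/dev"   # placeholder URL

# Every POST call needs a request digest, so fetch one first
$digest = (Invoke-RestMethod -Method Post -Uri "$site/_api/contextinfo" `
    -WebSession $session `
    -Headers @{ "Accept" = "application/json;odata=nometadata" }).FormDigestValue

# Install an available app (the GUID is a placeholder)
Invoke-RestMethod -Method Post `
    -Uri "$site/_api/web/tenantappcatalog/availableApps/GetById('19109eba-5efd-4e1e-a48e-eadf87f6d811')/Install" `
    -WebSession $session `
    -Headers @{ "Accept" = "application/json;odata=nometadata"; "X-RequestDigest" = $digest }

As you can see, PnP PowerShell hides the digest handling and the URL building completely.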

APIs' scope


The new APIs were designed to act in different scopes:

Site level operations: These APIs execute requests against the site collection for which you requested the client context.

App catalog operations: These APIs execute requests against the app catalog, independent of the client context you generated.

APIs' overview


Site level operations:
  • Get available apps: _api/web/tenantappcatalog/availableApps
  • Get available app by id: _api/web/tenantappcatalog/availableApps/GetById("")
  • Install available app: _api/web/tenantappcatalog/availableApps/GetById("")/Install
  • Update available app: _api/web/tenantappcatalog/availableApps/GetById("")/Upgrade
  • Uninstall available app: _api/web/tenantappcatalog/availableApps/GetById("")/Uninstall

App catalog operations:
  • Add app to app catalog: _api/web/tenantappcatalog/Add
  • Deploy available app: _api/web/tenantappcatalog/availableApps/GetById("")/Deploy
  • Retract available app: _api/web/tenantappcatalog/availableApps/GetById("")/Retract
  • Remove available app: _api/web/tenantappcatalog/availableApps/GetById("")/Remove

Testing the APIs for site level operations


Instead of directly working with REST calls, I'll demonstrate the new APIs using PnP PowerShell.

Get available apps:


Returns all apps which are available for your site collection.

Get-PnPApp | Format-Table


Get available app by id:


If the app is available, it returns the requested app.

Get-PnPApp -Identity 19109eba-5efd-4e1e-a48e-eadf87f6d811 | Format-Table


Install available app:


Installs an app on the current site collection. If the app requires admin permissions to install, the API automatically handles the approval process.

Install-PnPApp -Identity 19109eba-5efd-4e1e-a48e-eadf87f6d811

After running the command, you can see in the site contents of your site collection that the app is being installed.


Update available app:


If a new app version is available, it updates the app on the current site collection.


Running the command below will update the app from version 1.0.0.0 to version 1.0.0.1.

Update-PnPApp -Identity 19109eba-5efd-4e1e-a48e-eadf87f6d811

After running the command, you can see in the site contents of your site collection that the app is being updated.


Uninstall available app:


Uninstalls an app from the current site collection. It completely removes the app from the site collection, without moving it to the recycle bin first. So wonderful!!!

Uninstall-PnPApp -Identity 19109eba-5efd-4e1e-a48e-eadf87f6d811


Testing the APIs for app catalog operations


Instead of directly working with REST calls, I'll demonstrate the new APIs using PnP PowerShell.

Add app to app catalog:


Uploads an app file to the app catalog. Using the REST API directly, you work with binary data and can pass the overwrite parameter. At the time of this writing, the PnP cmdlet Add-PnPApp works with a file path and automatically sets the overwrite parameter to false. If the file already exists in the app catalog and the overwrite parameter is set to false, you'll get an exception.

Add-PnPApp -Path C:\search-web-part\search-web-part.sppkg | Format-Table
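
For comparison, a raw REST upload with overwrite enabled could look like the following sketch, reusing the $site, $session and $digest variables from the sketch at the beginning of this post:

$bytes = [System.IO.File]::ReadAllBytes("C:\search-web-part\search-web-part.sppkg")

# The Add endpoint takes the raw .sppkg bytes as the request body
Invoke-RestMethod -Method Post `
    -Uri "$site/_api/web/tenantappcatalog/Add(overwrite=true, url='search-web-part.sppkg')" `
    -WebSession $session `
    -Headers @{ "Accept" = "application/json;odata=nometadata"; "X-RequestDigest" = $digest } `
    -Body $bytes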


This API returns the app's unique ID, which is not the product ID. The unique app ID is stored in the hidden field UniqueId of the list Apps for SharePoint. The picture below shows the newly uploaded app. Notice that the file is still not deployed.



Deploy available app:


Deploys and trusts an available app. It is possible to set the parameter SkipFeatureDeployment. Although the PnP cmdlet is called Publish-PnPApp, it does the right job :)

Publish-PnPApp -Identity 38740267-e8a5-4179-9725-fc5155d19447


After running this command, the app is available on the "add an app" page for installation.


Retract available app:


Retracts the available app without removing it from the app catalog. Although the PnP cmdlet is called Unpublish-PnPApp, it does the right job :)

Unpublish-PnPApp -Identity 33aad64c-fe65-4f39-b873-5349632957fe


After running this command, the specified app is no longer available for installation.


Remove available app:


Removes an available app from the app catalog. Notice that installed apps will not be uninstalled, but as far as I know, client-side web parts will no longer work.

Remove-PnPApp -Identity 33aad64c-fe65-4f39-b873-5349632957fe


Summary


We finally have stable and supported APIs for managing the application lifecycle of SharePoint solutions. Processes have become faster and are easier to automate. Many thanks to everyone involved in developing these APIs.

Monday, November 13, 2017

Speaking at Office 365 & SharePoint User Group Nuremberg - Configure build and release pipelines in VSTS for SharePoint Framework

I'm looking forward to giving my first public talk together with my colleague Christian Groß!!! It will take place at the Office 365 & SharePoint User Group Nuremberg on November 16, 2017.

The event will be held at Coworking Nuremberg in Nuremberg.

At the end of this session you will be able to:
  • Create and configure a build and a release pipeline in Visual Studio Team Services for SharePoint Framework
  • Automatically upload your SharePoint Framework assets to the Azure CDN
  • Automatically upload your SharePoint Framework assets and app file to SharePoint
  • Use Gulp tasks in Visual Studio Team Services

Event information:


Sunday, September 10, 2017

Save money by automatically starting and stopping Azure VMs with Azure Automation

Azure is great! In my opinion there's no doubt about it! But... :) ...you pay as you go, which is actually a wonderful thing for both Microsoft and us, since you only pay for what you consume. In practice, however, I have seen companies spend more money than necessary because some services keep running even though they are not needed, or not needed all the time. Since Azure VMs are one of those money burners, I want to show you in this post how to automatically start and stop your Azure VMs using Azure Automation.

Azure Automation is a way to automate frequently repeated processes that are commonly executed in the cloud. It saves you time, since you don't need to execute things manually, and of course it saves you money!

Step-by-step
  1. Create an Automation Account in Azure
  2. Create a runbook 
  3. Create a scheduled task to start your VM
  4. Create a scheduled task to stop your VM
  5. Schedule Auto Shutdown

1. Create an Automation Account in Azure


The first step of this small project consists of creating an automation account, which is the foundation for managing automated processes. In order to create the account, navigate to the Azure Portal (https://portal.azure.com). After logging in, click New > Monitoring + Management > Automation, which opens the form to request an automation account.


In the request form, specify a name, subscription, resource group and location, and set Create Azure Run As account to Yes, which allows us to manage Resource Manager resources using runbooks. Then click Create! See more information about how to create an automation account here. Here is how my settings look:


2. Create a runbook


After creating the automation account, it is time to create the runbook, which contains the logic to automate processes. In the automation account you've just created, go to Runbooks > Add a runbook > Create a new runbook. This opens the form where you enter the runbook's name and type. At the time of this writing there are four runbook types:

PowerShell: Allows you to create complex logic using Windows PowerShell commands. You can edit your code either directly in the Azure Portal or offline.

Graphical: Supports a graphical interface to edit the runbook logic. Unfortunately, you can't edit graphical logic outside of the Azure Portal.

PowerShell Workflow: Similar to the PowerShell type, but based on Windows PowerShell Workflow.

Graphical Workflow: Similar to the graphical type, but based on Windows PowerShell Workflow.

I chose PowerShell as the runbook type since I'm more familiar with it. Here is what my settings look like:


Now that you've created the runbook, you can start adding code to it. Click the Edit button, which opens the integrated editing area. The logic I created can be used to either start or stop a virtual machine based on its resource group and name. I also added logic to limit execution to business days, which in my case are Monday to Friday. I didn't take federal holidays into consideration. I shared my code through GitHub and added a couple of comments to make it more understandable; a condensed sketch follows below.
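
Here is a condensed sketch of that runbook, not the full version from GitHub, just the idea: a parameterized PowerShell runbook that authenticates with the Run As account and starts or stops the VM on business days.

param(
    [Parameter(Mandatory = $true)] [string] $ResourceGroupName,
    [Parameter(Mandatory = $true)] [string] $VMName,
    [Parameter(Mandatory = $true)] [ValidateSet("Start", "Stop")] [string] $Action
)

# Limit execution to business days (Monday to Friday)
$today = (Get-Date).DayOfWeek
if ($today -eq "Saturday" -or $today -eq "Sunday") {
    Write-Output "Weekend - nothing to do."
    return
}

# Authenticate with the Run As account created together with the automation account
$connection = Get-AutomationConnection -Name "AzureRunAsConnection"
Add-AzureRmAccount -ServicePrincipal `
    -TenantId $connection.TenantId `
    -ApplicationId $connection.ApplicationId `
    -CertificateThumbprint $connection.CertificateThumbprint

# Start or stop the VM depending on the Action parameter
if ($Action -eq "Start") {
    Start-AzureRmVM -ResourceGroupName $ResourceGroupName -Name $VMName
}
else {
    Stop-AzureRmVM -ResourceGroupName $ResourceGroupName -Name $VMName -Force
}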

After adding the code to start and stop Azure VMs, you can try it using the Test pane, an integrated test area in the Azure Portal. Testing the logic is very easy, since you only have to enter the required parameters and click Start.

Once your code runs as expected, you can publish your runbook. Without publishing the runbook you won't be able to create a scheduled task!

3. Create a scheduled task to start your VM


Now that you've published your runbook, you can create the first scheduled task, which will start your VM at the desired time. To create it, go inside the runbook to Schedules > Add a schedule. Configuring the scheduled task is very simple, since you just have to create a schedule and enter the runbook's parameters. I want my VM to be started every business day at 6:00 a.m. Here is how my settings look:



4. Create a Scheduled task to stop your VM


Repeat the previous step to create a scheduled task which turns the VM off. In my case, I want the VM to be stopped every business day at 10 p.m. Here is how both of my scheduled tasks look:


5. Schedule auto shutdown


If your VM is not a classic VM, you can set the auto shutdown option, which is an alternative to step 4. You can find Auto-shutdown in your VM options.


The auto shutdown runs every day of the week. If you want more control over it, consider using Azure Automation instead.

Summary:


Azure Automation simplifies your life as an administrator, since you can automate tedious processes. It saves you time and money, and it increases the reliability of executed tasks. If you have been using PowerShell, it will be very easy for you to get started with Azure Automation :)

Links:


- Azure Automation Overview
- Azure Automation Runbook Types
- Getting Started with Azure Automation
- My first PowerShell runbook

Wednesday, August 16, 2017

Running a SharePoint framework web part with elevated privileges – Focus on Azure Functions – Part 2

In the previous post we looked at the following steps:
  1. Create an Azure account
  2. Create an Azure Function App
  3. Create the Azure Function
  4. Configure the Azure Function
    1. Support continuous deployment
    2. Enable CORS

In this post, I will continue with the steps "Configure access to SharePoint" and "Install NuGet packages", which belong to "Configure the Azure Function". Then come "Develop the code to read site collection administrators" and "Calling the Azure Function from your SPFx web part".

4.3. Configure access to SharePoint:


4.3.1 Register your function in SharePoint:


If you have developed provider-hosted SharePoint Add-ins in the past, you already know this step, since I use the same approach to register the Azure Function with the Azure Access Control Service (ACS) and the SharePoint App Management Service of the tenant. This allows the Azure Function to execute requests in SharePoint with the app-only context.

To register the Azure Function as an Add-in, navigate to "https://<your tenant>.sharepoint.com/_layouts/15/AppRegNew.aspx" and enter the required information. Generate a new client id and client secret. The Add-in title can be your function's name. Use your function's URL to configure the App Domain and Redirect URI. Here is my configuration:


After clicking Create, you will see the message "The app identifier has been successfully created." Now copy the client id and client secret, since we will need these values later.

If you want to understand more about the registering process, check out this article.

4.3.2. Grant permissions to the function:


Since we have finished the registration step, it is time to grant the function permissions to execute requests in SharePoint.

Navigate to "http://<your tenant>.sharepoint.com/_layouts/15/AppInv.aspx" and look up the Add-in you have just created using its client id. Then, enter the following code in the Permissions field to give the function full control of the site collection you are configuring it for.


As you're granting full control to an Add-in, the client secret is as important as the password for your SharePoint administration account!

Here is my configuration:


After clicking Create, you will be requested to trust the Add-in. It is similar to installing an app on your cell phone!


4.3.3. Configure App Settings in the function:


Using the client id and client secret from the step above, it is possible to authenticate to SharePoint and run our code with elevated privileges. Therefore, we need to configure the function's application settings so these values can be used in the function's code.

First, open the Application settings:


Then, add the client id and the client secret of your newly registered Add-in to the App settings:


After clicking Save, you have the essential information to run the function with the app-only context, which gives us the RunWithElevatedPrivileges effect.

4.4. Install NuGet packages:


Instead of having to upload DLLs to your function's directory using FTP or KUDU, Azure Functions offer support for NuGet packages, which makes our lives much easier. Once configured, the NuGet packages are automatically downloaded and installed to the function's directory. This happens the first time the code runs! To use the types defined in the NuGet packages, you just need to add using statements to your run.csx file.

To communicate with SharePoint, I used the PnP core components, which are much more convenient than working directly with the managed CSOM for authentication etc. Therefore, I've described below how to configure the usage of NuGet packages in your function.

The project.json file is the location to manage the NuGet packages. Add this file to the function’s directory and add a reference to SharePointPnPCoreOnline version 2.16.1706.0 to it.
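
The Azure Functions runtime of that generation resolves packages via the project.json format targeting .NET 4.6, so the file's content should look roughly like this:

{
  "frameworks": {
    "net46": {
      "dependencies": {
        "SharePointPnPCoreOnline": "2.16.1706.0"
      }
    }
  }
}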

Here’s how it looks in Visual Studio Code after committing and pushing the changes. Since I configured continuous deployment, I always need to execute this step.


Alright, the preparation steps are finished, but there is still work to be done. It never ends! :) It is finally time to develop the code to retrieve the site collection administrators.

5. Develop the code to read site collection administrators:


Let's start developing the logic :) The code below is very simple. The program requires a site collection URL in order to retrieve the administrators. It then fetches the client id and client secret from the app settings and uses them to authenticate to SharePoint. Finally, it reads the site collection administrators using the GetAdministrators method, which is available through the PnP library, and returns the administrators' display names. That's it! I added a couple of comments to the code, so it is easier to understand.
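
By the way, before wiring everything into the function, you can verify the app-only principal itself with a few lines of PnP PowerShell. This is just a sketch; the URL and ids are placeholders for the values from AppRegNew.aspx:

# Connect with the app-only principal registered via AppRegNew.aspx
Connect-PnPOnline -Url "https://yourtenant.sharepoint.com/sites/dev" `
    -AppId "<client id>" -AppSecret "<client secret>"

# List the display names of the site collection administrators
Get-PnPUser | Where-Object { $_.IsSiteAdmin } | Select-Object -ExpandProperty Title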

Note that if you change the app permission to read, you won't get any site collection administrators. There is also no error indicating that you don't have enough permissions.

Before using the function in the SPFx web part, I tried it using the integrated test area in Azure. It's just a matter of entering a JSON object with the properties the function requires and running it.


6. Calling the Azure Function from your SPFx web part:


The SPFx web part is also very simple. It builds on top of the "hello world" sample from Microsoft, which is what you get after running the Yeoman SharePoint generator. If you have never created an SPFx web part before and want to try it, I recommend the SPFx tutorial about building your first SharePoint client-side web part.

The first thing we need is the function’s URL. You will find it in the Azure Portal:


Please note that the function URL already contains the function key as a query string parameter (code). If it is not provided there, you must pass it using an x-functions-key header when calling the Azure Function.
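
Before wiring the function into the web part, you can also test it quickly from PowerShell. A sketch follows; the host name, the key and the JSON property name are placeholders, and the property must match whatever your run.csx reads from the request body:

$body = '{ "siteCollectionUrl": "https://yourtenant.sharepoint.com/sites/dev" }'

# Key passed as the code query string parameter
Invoke-RestMethod -Method Post -Body $body -ContentType "application/json" `
    -Uri "https://yourfunctionapp.azurewebsites.net/api/retrieveadmins?code=<function key>"

# Equivalent call passing the key as a header instead
Invoke-RestMethod -Method Post -Body $body -ContentType "application/json" `
    -Uri "https://yourfunctionapp.azurewebsites.net/api/retrieveadmins" `
    -Headers @{ "x-functions-key" = "<function key>" }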

Now it is time to call the function from the web part. The private method _retrieveAdministrators contains the logic to communicate with the Azure Function. Since I'm using the HttpClient object to call the Azure Function, there are a few things to consider when executing POST calls. First, I configured the Headers object. Then I configured the IHttpClientOptions, passing the Headers object and the body, which contains the current site collection URL as a JSON object. This is the most relevant part of the code! I also handled possible errors, using resp.ok to check the function's success (HTTP response status code 200). I shared the main part of the code through GitHub.


Finally, the web part is able to call the Azure Function, which returns the site collection administrators. Since I only granted the function access to a specific site collection, the function returns an error message when the web part is used in a site collection other than the one I configured. Here you can see the possible outputs:

Calling the function from the configured site collection:


Calling the function from any other site collection:


Summary:


After a few configuration steps, the Azure Function is ready to use and you can focus on more important things like developing the code, instead of programming a web service and hosting it somewhere. Since the focus of this post is the Azure Function, I spent less time on the development of the web part, but it is enough to understand the idea of elevating a user's privileges. Depending on the permissions you configure for your Add-in, you can have access to a single site collection or even full control over the entire tenant. The combination of the app-only context and the Azure Function gives you sight beyond sight (enhanced vision) in SharePoint. In other words, with the correct configuration you can do whatever you want in SharePoint from your SPFx web part!


Tuesday, August 1, 2017

Running a SharePoint framework web part with elevated privileges – Focus on Azure Functions – Part 1

SharePoint Framework (SPFx) web parts are modern, fancy and only run in the context of the current user :) So far so good! But what if I want to do more? What if I want to elevate the user's privileges, like we did in the past with SharePoint Add-ins, where we used the app-only context, or with farm solutions, where we used the static method RunWithElevatedPrivileges? The answer to all these questions is web services.

I will introduce you to Azure Functions, which, when well configured, give you sight beyond sight (enhanced vision) in SharePoint. It's like the Sword of Omens, which increases the power of the ThunderCats. There are also other options for web services, like Azure API Apps, your own web service etc. You could also use Microsoft Graph, which requires authentication using ADAL.js, or the GraphHttpClient, which should not be used in a production environment, since it is still in preview (July 2017). Anyway, I will show you how to configure an Azure Function and how to use it in a SharePoint Framework web part. Since there is a lot to show, I've split the content of this article into two blog posts.

Everything starts with a learning project:

This learning project consists of calling an Azure Function from an SPFx web part, which will return all administrators of the current site collection. All site users must be able to see the correct content, independent of their permission levels.

Software used:


Steps:

  1. Create an Azure account
  2. Create an Azure Function App
  3. Create the Azure Function
  4. Configure the Azure Function
    1. Support continuous deployment
    2. Enable CORS
    3. Configure access to SharePoint
    4. Install NuGet packages
  5. Develop the code to read site collection administrators
  6. Calling the Azure Function from your SPFx web part

1. Create an Azure account:


If your company doesn't have an Azure account yet, please go ahead and ask your boss whether your company is driving the business in the right direction. THIS IS A JOKE, DON'T DO THAT ;) In my opinion, the cloud plays a very important role in business today. Of course, you could use Amazon Web Services (AWS) or something similar, but as a SharePoint developer, and ever since Windows 3.11 :) (my first operating system), I've trusted Microsoft to be the right partner for my business.

If you already have an Azure account, skip this step. Otherwise, I'll demonstrate how to create a Microsoft Azure trial account, which gives you 200 dollars (170 euros) in credits to try any Azure service for a period of 30 days. You need to enter your credit card information, but you will never be charged unless you choose to subscribe. The credit card information is for identity verification only.

To register for an Azure account, navigate to https://azure.microsoft.com/en-us/free/ and click Start free. After authenticating with a valid Microsoft account, you will be redirected to the trial registration form. Just follow the registration steps, which are very user-friendly. This is what the registration form looks like:


Now that you have registered, you have the power of Azure with you :) Welcome to the world of endless possibilities!

2. Create an Azure Function App:


An Azure Function App is a serverless service where you don't need to worry about infrastructure. It is a container for Azure Functions and supports several programming languages like C#, JavaScript, PowerShell, Python, F# etc. You can also include DLLs (your own or others) and use them as part of your function. Depending on your needs, you can use different events to trigger your function, like HTTP, timers (making it a webjob), Azure Blob storage, Azure Storage etc. This way you can easily create your own REST API without a lot of configuration. Check the Azure Functions overview and convince yourself!

To create the Function App, navigate to the Azure Portal (https://portal.azure.com). After logging in, click New > Compute > Function App, which opens the form to request a new Function App.

Add function app

In the Function App form, specify the data required, but pay attention to the hosting plan, since it plays an important role in the way your function is going to perform.

Consumption plan: Not bound to predefined compute resources (RAM, CPU etc.). It scales out automatically, even during periods of high load, and you only pay for the execution time. So, if your function is not running, there are no costs! But be aware of the function timeout: functions on the Consumption plan are limited to an execution time of at most 10 minutes.

App Service plan: A dedicated virtual machine (VM) with predefined resources. The VM's cost is fixed; the more you pay, the better the resources you get! Consider using this plan to bypass the function timeout of the Consumption plan. It is also a chance to reuse existing VMs which are underutilized. Don't forget to activate the Always On setting, since the function app goes idle after a few minutes of inactivity.

After filling in the required data, your form should look like this:


Since I already have an underutilized VM, I will reuse it to minimize the costs of running my Azure Functions.

3. Create the Azure Function:


Once the Function App has been created, it is time to create the function itself. You can find your Function App, for instance, by navigating to All Resources and searching for its name.

Once inside, click the New Function button:


Azure offers you premade functions, but I'd recommend clicking Custom function. This way you can see all the possible triggers and supported languages.


Select Language > All and Scenario > All to see all the supported templates. Since general availability in November 2016, the list of templates has grown considerably, and new templates keep coming in. The possibilities of this Azure service are tremendous!

For this learning project, choose HttpTrigger - C#. What does that mean? The function is triggered by HTTP calls, and its programming language is C#.

Depending on your needs, you can have different authorization levels:
  • Admin: Requires master key on calls.
  • Function: Requires function key on calls.
  • Anonymous: Requires no API key on calls. Anyone can call the function!

Here is how my settings look:


After creating your function, you will see the integrated editor, which contains some sample code. You can try it by pressing Run above the editor. By default, the Azure Function contains two files:

run.csx: A .csx file is a C# script file, introduced with Roslyn. It contains a Run method, which is similar to the Main method of a console application. The focus here lies on writing the C# function itself.

function.json: This file contains the function's bindings. For instance, our input binding is of type httpTrigger and its name is req. In the function's signature you will find the parameter HttpRequestMessage req, which refers to that entry in the function.json file.
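
For reference, the function.json generated for an HTTP trigger looks roughly like this:

{
  "disabled": false,
  "bindings": [
    {
      "type": "httpTrigger",
      "direction": "in",
      "name": "req",
      "authLevel": "function"
    },
    {
      "type": "http",
      "direction": "out",
      "name": "res"
    }
  ]
}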

4. Configure the Azure Function:


In this step, I will show you how to support continuous deployment, enable CORS, register your function in SharePoint and use NuGet packages in your function.

4.1. Support continuous deployment:


This step is optional! You could actually write the entire code in the integrated editor of your Azure Function, but that is not what you will do when developing functions for production scenarios. I also like the idea of having a code editor like Visual Studio Code, which supports me during development. Hence, I'll integrate continuous deployment from Visual Studio Team Services (VSTS). Other systems like GitHub, Bitbucket, OneDrive, Dropbox etc. are supported as well.

4.1.1. Start creating a Visual Studio Team Services project:



4.1.2. Then clone it to the project's directory on your local machine:




4.1.3. Add the two Azure Function files to the project's directory:


There are different ways to obtain the two Azure Function files "run.csx" and "function.json":

  • Just copy the files' contents into new files in your project's directory.
  • Use KUDU
  • Use FTP

Very important: The function's files are hosted in the following structure:

wwwroot
 | - host.json
 | - retrieveadmins
 | | - function.json
 | | - run.csx

If you don't have the folder "retrieveadmins" in your project's directory, the integration will not work, since the files will be stored directly in the wwwroot directory.

To check whether the integration with VSTS is working, open Visual Studio Code and change some content in the Run method. I have just added another log entry: log.Info("The value passed is " + name);. The changes will not apply to your Azure Function until you've committed and pushed them. This also means that you need to commit and push every change before testing it :)


4.1.4. Configure the continuous deployment in the Azure Function:


Open the Deployment Options to configure the continuous deployment:


Click Setup, then click Configure required settings:


Finally, choose VSTS to configure it:


Fill in the required fields and click OK. Here is my configuration:


4.1.5. Check integration with VSTS:


In Visual Studio Code, you can now commit and push your changes:



After pushing the changes, navigate to the Azure Function and check the integrated editor to see the changes. It only takes a few seconds for the changes to apply.


With the steps above, we've laid the foundation for professional development. We no longer depend on the integrated editor, and we can benefit from the advantages offered by VSTS.

4.2. Enable CORS


Since the SPFx web part will call the Azure Function from JavaScript in a different domain, it is important to enable Cross-Origin Resource Sharing (CORS). Otherwise, the Azure Function will block the cross-domain requests from our SPFx web part. This includes all kinds of requests, including those made from the SharePoint Workbench.

For instance, I need to allow the following origins (domains) to make cross-origin calls to my Azure Function:
  • https://localhost:4321
  • https://example.sharepoint.com

First, open CORS to configure it:


Then, specify the origins. You need to enter every domain which will call your function from JavaScript code:


After clicking Save, the Azure Function allows calls from the SPFx web part.


You'll find the steps below in the second part of this article:
  • Configure access to SharePoint
  • Install NuGet packages
  • Develop the code to read site collection administrators
  • Calling the Azure Function from your SPFx web part

To be continued...

Sunday, July 16, 2017

My Raspberry Pi knows your age!!!

At Developer Week 2017 (an IT conference held in Nuremberg) I was inspired by Manuel Meyer's session on Intelligence-as-a-Service: Creating applications of tomorrow with Azure Cognitive Services.

Manuel Meyer told us about the different Azure Cognitive Services and demonstrated most of them. Microsoft Cognitive Services are a collection of APIs and SDKs based on machine learning, which let you easily add intelligence to your applications: emotion detection, facial and speech recognition etc. Find out more here.

Anyway, everything sounded interesting, modern and easy to use, so I started another learning project in which I connect my Raspberry Pi to an Azure Cognitive Service. Using the Face API, I will let Azure identify the age of the person in a picture I send to the cloud. Exactly, it's magic! On pressing a button, a picture is taken with a camera module and sent to the Face API.

To-dos:

  1. Install Raspbian (the operating system for the Raspberry Pi).
  2. Configure the camera module.
  3. Connect the push button.
  4. Register for a free trial of the Face API.
  5. Develop the logic to capture photos and send them to the Azure Cognitive Service. Finally, display the response.
  6. Moment of truth (run all components together).

Hardware/Components used:


- Raspberry Pi 2
- Power supply
- Camera module
- Micro SD card with 8 GB
- Push button
- Breadboard
- Male-to-female jumper wires
- 3rd/helping hand

Software used


- Operating system installer NOOBS
- SD Formatter
- Python 2.7.9

The steps:


1. Get your Raspberry Pi ready:


A Raspberry Pi is a little computer which can be used in electronics projects, but also as a normal computer, since it supports operating systems with a graphical interface. If you haven't installed an operating system yet, you will find very good documentation on the Raspberry Pi website about how to get your Raspberry Pi configured: https://www.raspberrypi.org/documentation/installation/noobs.md. I used SD Formatter to format my SD card before unzipping NOOBS onto it.

2. Configure the camera module:


The next step is to configure the camera module. If you have already configured your camera, just skip this step. In order to use the camera, you have to enable it first: open the terminal and enter "sudo raspi-config". After pressing Enter, the configuration tool appears. Enable support for the Raspberry Pi camera under "6 Enable Camera". This change requires a system reboot!

You can test your camera now by executing raspistill -v -o testpicture.jpg in the terminal.
-v = verbose
-o = path or file name

Here I'm using a 3rd hand to hold the camera. Since the camera module doesn't support autofocus, it is very helpful to have something that stabilizes the camera while taking pictures. Here is what it looks like:


3. Connect the push button to the Raspberry Pi:


I thought it would be fancy to have the picture taken after pushing a physical button, so I added this step to the project. You can improve your grasp of push buttons by watching this video. Since there are a lot of tutorials about how to use a push button with a Raspberry Pi, and I'm not an expert on the topic, I will only briefly demonstrate how I connected the push button to the Raspberry Pi.

First, I fitted the push button into the breadboard. Then, I connected the breadboard to the Raspberry Pi's GPIO pins GND and 23 (input) using the jumper wires. Here you can see this work of art :)


I wrote a small piece of code in Python to test the push button. Here is how it looks:


 4. Register for a free trial of the Face API:


At the time of writing (July 2017) there are free trials for the Cognitive Services APIs (https://azure.microsoft.com/en-us/try/cognitive-services/). Since I don't have an account for the Face API yet, I will register for a free 30-day trial, which includes 30,000 transactions, up to 20 per minute. You can create your own account here. Once you've registered, you'll see the following window:



Save the keys and the endpoint, since you'll need this data later to send requests to the Face API. You can of course already try the power of the Face API on this website. You only have to enter your subscription key in the Ocp-Apim-Subscription-Key field and a URL pointing to a picture in the request body. You will find the Send button at the end of the page ;)
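
If you prefer the console over the website, the same test can be done with a few lines of PowerShell. This is just a sketch; the region in the URL, the key and the image URL are placeholders:

$headers = @{ "Ocp-Apim-Subscription-Key" = "<your subscription key>" }
$body = '{ "url": "https://example.com/picture-of-me.jpg" }'

# Ask the Face API to detect faces and return the age attribute
Invoke-RestMethod -Method Post -Headers $headers -Body $body -ContentType "application/json" `
    -Uri "https://westus.api.cognitive.microsoft.com/face/v1.0/detect?returnFaceAttributes=age"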

5. Develop the logic:


I chose Python to develop the solution because it is installed by default on Raspbian. Actually, there isn't a lot of work to be done: Microsoft has already written documentation describing how to call the Face API using Python. It's pretty easy to use, since you just have to configure key information like your subscription key, the regional base URL and an image URL.

I'm using the code already developed in step 3 to receive the input from the push button. There is still work to be done, since I want the picture to be taken by pressing the button. The next step is to include the picamera library in the code. To be able to use this library, execute "sudo apt-get install python-picamera" in the terminal to install it. In order to get the age of the person, I passed the optional attribute "returnFaceAttributes" with the value age. In the code I published, I also use other keywords like gender, glasses, facialHair etc. It's very interesting to see what the Face API is able to recognize. The rest of the code builds on top of Microsoft's Python quickstart and includes comments, so you can follow it step by step.

I saved the code as raspberrypiToAzureFaceApi.py and shared it through GitHub. Here is the entire code:


6. Moment of truth (run all components together):


Finally, the last step is here! It's time to test the code and the external components. I started the program "raspberrypiToAzureFaceApi.py" and took a picture. Here you can see the program's output:


Summary


It's unbelievable, I was recognized as being much older than I really am... There must be an error somewhere :) I'm 28 years old, and the Face API actually came very close to my real age. Anyway, after taking several pictures the results ranged from 27 to 41. In my opinion, the Face API is still in an improvement phase and will become much more powerful in the future than it already is. If I had had to write the face recognition code on my own, I wouldn't have gotten this far in such a short period of time. Since Microsoft is doing a very good job on the documentation, it was very easy to implement the API! At the end of the day, it is just calling an endpoint and getting a result, so you can concentrate on other parts of a project which need more attention. Azure Cognitive Services are an opportunity to improve the capabilities of your products in a fast and easy manner.

Here you can see the Raspberry Pi with the camera module and the push button:


Links:


- Raspistill documentation
- Camera module documentation
- Azure Cognitive Service Face API pricing
- Microsoft Python SDK for the Cognitive Face API
- Try Face API
- Quickstart Face API
- Python PICamera