Image and Activity Processing Application with Cognitive Services, Power Apps and Microsoft Flow

There are many possibilities for embedding Cognitive Services APIs in applications such as Power Apps, with the help of Microsoft Flow.

In previous posts, I have shown how to create a face recognition application and how to create an OCR application that converts images to text.

Face Recognition Application with Power Apps, Microsoft Flow and Cognitive Service -Part 4

In this post, I am going to talk about one of the interesting APIs we have in Cognitive Services, named

Analyze an image

For a demo, just navigate to the Cognitive Services website.

Then click on Computer Vision, and then click on Scene and activity recognition in images.

 

To see how it works, click on the demo and upload a picture. I did the same and uploaded a picture of my dog posing for the camera at doggy daycare.

Look at the result on the right side.

The object list is able to identify animal –> mammal –> dog, each with a confidence score.

Tibetan Mastiff is 0.62; I would say they are a similar breed but not the same, which is why the confidence is low.

Dog is 0.92, which is totally correct.
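To show how these confidence scores might be consumed programmatically, here is a minimal Python sketch. The sample response below is fabricated for illustration, but it follows the shape of the tags section the Analyze Image API returns.

```python
import json

# A fabricated tags section, in the shape the Analyze Image API returns.
sample_response = json.loads("""
{
    "tags": [
        {"name": "dog", "confidence": 0.92},
        {"name": "Tibetan Mastiff", "confidence": 0.62},
        {"name": "mammal", "confidence": 0.97}
    ]
}
""")

def tags_above(response, threshold):
    """Return the tag names whose confidence meets the threshold."""
    return [t["name"] for t in response["tags"] if t["confidence"] >= threshold]

print(tags_above(sample_response, 0.9))  # only the high-confidence tags
```

Filtering on a threshold like this is a simple way to keep only tags you trust, e.g. dropping the uncertain breed guess while keeping "dog".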

Now let’s test another one. As you can see, this one is also able to detect objects like tennis players and tennis rackets, and even some people, like Reza Rad, who is popular on social media!

Now I am excited to use this API in my application. The process I am explaining can be applied to other web services as well.

The process is the same as for face recognition:

Create Power Apps for taking photo- Part 1- Face Recognition by Power Apps, Microsoft Flow and Cognitive Service

Create Microsoft Flow Part 2- Face Recognition by Power Apps, Microsoft Flow and Cognitive Service

Face Recognition Application with Power Apps, Microsoft Flow and Cognitive Service -Part 3

Now I will summarise the process.

1- Power Apps

Create an application in Power Apps in the mobile app format that takes a photo and passes it to Microsoft Flow.

Create Power Apps Application to Take a Photo.

For this scenario, I use the Community Plan: https://powerapps.microsoft.com/en-us/communityplan/

 

Create the first draft of the app with a Camera control and an Image control as below (for more detail, see Post 1 and Post 3).

2- Microsoft Flow

Now we need to create the flow.

To create the flow, we follow the same process:

1- Log in to Microsoft Flow.

2- Start from a blank template.

3- Add the Power Apps trigger.

4- Store the data in OneDrive or SharePoint.

5- Store it in a Compose action using the dataUriToBinary expression.

This is the same process we use to get the picture from Power Apps.
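The Compose step converts the data URI that the Power Apps camera produces into raw binary (that is what Flow's dataUriToBinary expression does). To illustrate what happens under the hood, here is a small Python sketch of the same conversion; it is not part of the Flow itself, and the sample URI is fabricated.

```python
import base64

def data_uri_to_binary(data_uri):
    """Mimic Flow's dataUriToBinary(): strip the 'data:<mime>;base64,' prefix
    and base64-decode the remainder into raw bytes."""
    header, _, encoded = data_uri.partition(",")
    if not header.startswith("data:") or not header.endswith(";base64"):
        raise ValueError("not a base64 data URI")
    return base64.b64decode(encoded)

# A tiny fabricated example: the bytes b"JPEG" stand in for a real photo.
uri = "data:image/jpeg;base64," + base64.b64encode(b"JPEG").decode("ascii")
print(data_uri_to_binary(uri))  # b'JPEG'
```

The binary output of this step is what gets stored in OneDrive or SharePoint and sent to the API.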

 

3- Two components for calling any API

Now we use two components in Flow: HTTP and Response.

By having these two nodes, you are able to call most APIs.

In this scenario, I have checked the documentation for the image tag and activity API from Cognitive Services.

First, the HTTP component:

This component is responsible for calling the API using the API URI, the key, and what it needs to return.

 

For this example, I set it up only for the image tag API, but you will see the same scenario for the Face API in Posts 1 to 4.

For the Response component, it is the same, but you need to provide the JSON schema of the output. If you do not have the schema, copy a sample of the output you have and Flow will generate the output schema for you.
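For readers who want to see what the HTTP component is doing behind the scenes, here is a hedged Python sketch of the same request: a POST with the image bytes in the body and the subscription key in a header. The endpoint URL, region, API version, and visualFeatures values below are assumptions for illustration; check the Computer Vision "Analyze Image" reference for the exact values for your subscription.

```python
import urllib.request

def build_analyze_request(endpoint, subscription_key, image_bytes):
    """Build (but do not send) the POST request the Flow HTTP action performs.
    Endpoint and query parameters here are illustrative assumptions."""
    url = endpoint + "/analyze?visualFeatures=Tags,Description"
    return urllib.request.Request(
        url,
        data=image_bytes,
        headers={
            "Ocp-Apim-Subscription-Key": subscription_key,
            "Content-Type": "application/octet-stream",
        },
        method="POST",
    )

req = build_analyze_request(
    "https://westus.api.cognitive.microsoft.com/vision/v2.0",  # hypothetical region
    "<your-key>",
    b"...image bytes from the Compose step...",
)
print(req.get_method(), req.full_url)
# To actually call the service: urllib.request.urlopen(req)
```

The Flow HTTP action fills in exactly these pieces: the URI, the key header, and the binary body from the Compose step.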

Save the flow and go back to Power Apps. In Power Apps, you need to write a line of code on the button to connect the flow to the app: first, click on the button, then click on Action, then Flows, find the flow, and write the formula for it.

Each API returns a specific data type.

For instance, in the Face API the output schema was like:

{
    "type": "array",
    "items": {
        "type": "object",
        "properties": {
            "faceId": {
                "type": "string"
            },
            "faceRectangle": {
            ...

But for the image tag API:

{
    "type": "object",
    "properties": {
        "categories": {
            "type": "array",
            "items": {
                "type": "object",
So, to fetch the data in Power Apps, you need to be careful about the shape of the objects you are getting: in the first one, we receive a collection (array) that contains objects, but for the image tag API we receive an object that contains arrays (collections).
This will impact the formula we use to fetch the information.
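The difference between the two shapes can be sketched in a few lines of Python. Both samples below are fabricated, but they follow the two schemas shown above: the Face API's root is an array, while the image analysis root is an object.

```python
import json

# Fabricated sample in the Face API shape: the root is an array.
face_output = json.loads("""
[
    {"faceId": "abc123", "faceRectangle": {"top": 10, "left": 20}}
]
""")

# Fabricated sample in the image analysis shape: the root is an object.
image_tag_output = json.loads("""
{
    "categories": [{"name": "animal_dog", "score": 0.92}],
    "description": {"captions": [{"text": "a dog posing for the camera",
                                  "confidence": 0.91}]}
}
""")

# Face API: index into the collection first, then read properties.
first_face_id = face_output[0]["faceId"]

# Image analysis: navigate properties first, then index into the nested
# array -- the same path as ImageTag.description.captions in Power Apps.
first_caption = image_tag_output["description"]["captions"][0]["text"]

print(first_face_id, "|", first_caption)
```

In Power Apps the same distinction shows up in the formula: with an array root you start from the collection, while with an object root you navigate the properties down to the nested collection.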
So here, for image tags, to get the data you need to reference
ImageTag.description.captions
So for this post, I assigned the above value to a data table and ran the flow.
In the next post, how to fetch the data in Power Apps will be discussed in more depth.
Leila Etaati
Trainer, Consultant, Mentor
Leila is the first Microsoft AI MVP in New Zealand and Australia. She has a Ph.D. in Information Systems from the University of Auckland. She is the co-director and a data scientist at RADACAD, with more than 100 clients around the world. She is the co-organizer of the Microsoft Business Intelligence and Power BI User Group (meetup) in Auckland, with more than 1200 members, and the co-organizer of three main conferences in Auckland: SQL Saturday Auckland (2015 till now) with more than 400 registrations, Difinity (2017 till now) with more than 200 registrations, and Global AI Bootcamp 2018. She is a data scientist, BI consultant, trainer, and speaker, and a well-known international speaker at conferences such as Microsoft Ignite, SQL PASS, Data Platform Summit, SQL Saturday, and the Power BI World Tour in Europe, the USA, Asia, Australia, and New Zealand. She has over ten years' experience working with databases and software systems and has been involved in many large-scale projects for big companies. She is also an AI and Data Platform Microsoft MVP and an active technical Microsoft AI blogger for RADACAD.
