Google Vision API in R – RoogleVision


Using the Google Vision API in R

Utilizing RoogleVision

After doing my post last month on OpenCV and face detection, I started looking into other algorithms used for pattern detection in images. As it turns out, Google has done a phenomenal job with their Vision API. It's absolutely incredible how much information it can spit back to you from simply sending it a picture.

Also, it's free to get started! I believe the free tier includes 1,000 images per month. Amazing!



RoogleVision
This package is not yet on CRAN.
You can install it from the cloudyr drat repository:
# latest stable version
install.packages("RoogleVision", repos = c(getOption("repos"), "http://cloudyr.github.io/drat"))

In this post I’m going to walk you through the absolute basics of accessing the power of the Google Vision API using the RoogleVision package in R.

As always, we'll start off by loading some libraries. I've included comments in the code below showing where you can install them.

# Normal Libraries
library(tidyverse)

# devtools::install_github("flovv/RoogleVision")
library(RoogleVision)
library(jsonlite) # to import credentials

# For image processing
# source("http://bioconductor.org/biocLite.R")
# biocLite("EBImage")
library(EBImage)

# For Latitude Longitude Map
library(leaflet)

Google Authentication

In order to use the API, you have to authenticate. There is plenty of documentation out there about how to set up an account, create a project, download credentials, etc. Head over to the Google Cloud Console if you don't have an account already.

# Credentials file I downloaded from the cloud console
creds = fromJSON('credentials.json')

# Google Authentication - Use Your Credentials
# options("googleAuthR.client_id" = "xxx.apps.googleusercontent.com")
# options("googleAuthR.client_secret" = "")

options("googleAuthR.client_id" = creds$installed$client_id)
options("googleAuthR.client_secret" = creds$installed$client_secret)
options("googleAuthR.scopes.selected" = c("https://www.googleapis.com/auth/cloud-platform"))
googleAuthR::gar_auth()

Now You’re Ready to Go

The function getGoogleVisionResponse takes three arguments:

  1. imagePath
  2. feature
  3. numResults

Numbers 1 and 3 are self-explanatory; "feature" has 5 options:

  1. LABEL_DETECTION
  2. LANDMARK_DETECTION
  3. FACE_DETECTION
  4. LOGO_DETECTION
  5. TEXT_DETECTION

These are fairly self-explanatory, but it's nice to see each one in action.
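
Putting the three arguments together, a call looks something like this (a minimal sketch; the file name and the numResults value are placeholders for illustration):

# Hypothetical example: request up to 10 labels for a local image file
label_results <- getGoogleVisionResponse(imagePath = "my_image.jpg",
                                         feature = "LABEL_DETECTION",
                                         numResults = 10)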

As a side note: the API also has other features that aren't (yet) included in the RoogleVision package, such as "Safe Search", which identifies inappropriate content, and "Properties", which identifies dominant colors and aspect ratios. A few others can be found on the Cloud Vision website.


Label Detection

This is used to help determine content within the photo. It can basically add a level of metadata around the image.

Here is a photo of our dog when we hiked up to Audubon Peak in Colorado:

dog_mountain_label = getGoogleVisionResponse('dog_mountain.jpg',
                                              feature = 'LABEL_DETECTION')
head(dog_mountain_label)
##            mid           description     score
## 1     /m/09d_r              mountain 0.9188690
## 2 /g/11jxkqbpp mountainous landforms 0.9009549
## 3    /m/023bbt            wilderness 0.8733696
## 4     /m/0kpmf             dog breed 0.8398435
## 5    /m/0d4djn            dog hiking 0.8352048

All 5 responses were incredibly accurate! The "score" that comes back indicates how confident the Google Vision algorithms are, so the API is roughly 92% confident that a mountain is prominent in this photo. I like "dog hiking" the best, considering that's what we were doing at the time. Almost a little too accurate…
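
Since the score is just another column in the returned data frame, you can filter on it like anything else. For example (the 0.9 cutoff is an arbitrary choice for illustration):

# Keep only the labels the API is most confident about
dog_mountain_label[dog_mountain_label$score > 0.9, ]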


Landmark Detection

This is a feature designed to specifically pick out a recognizable landmark! It provides the position in the image along with the geolocation of the landmark (in longitude and latitude).

My wife and I took this selfie at Linderhof Castle in Bavaria, Germany.

us_castle <- readImage('us_castle_2.jpg')
plot(us_castle)

The response from the Google Vision API was spot on. It returned "Linderhof Palace" as the description. It also provided a score (I reduced the resolution of the image, which hurt the score), a boundingPoly field and locations.

us_landmark = getGoogleVisionResponse('us_castle_2.jpg',
                                      feature = 'LANDMARK_DETECTION')
head(us_landmark)
##         mid      description     score
## 1 /m/066h19 Linderhof Palace 0.4665011
##                               vertices          locations
## 1 25, 382, 382, 25, 178, 178, 659, 659 47.57127, 10.96072

I plotted the polygon over the image using the coordinates returned. It does a great job (certainly not perfect) of identifying the castle. It's a bit tough to say what the actual "landmark" should be in this case, because the fountains, stairs, and grounds are certainly important and a key part of the castle.

us_castle <- readImage('us_castle_2.jpg')
plot(us_castle)
xs = us_landmark$boundingPoly$vertices[[1]][1][[1]]
ys = us_landmark$boundingPoly$vertices[[1]][2][[1]]
polygon(x=xs,y=ys,border='red',lwd=4)

Turning to the locations, I plotted this using the leaflet library. If you haven't used leaflet, start doing so immediately. I'm a huge fan of it because of its speed and simplicity. There are also a lot of customization options you can check out.

The location is spot on! While it isn't a shock to me that Google could provide the location of "Linderhof Castle", it is amazing to me that I don't have to write a web crawler to find it myself! That's just one of many little luxuries they have built into this API.

latt = us_landmark$locations[[1]][[1]][[1]]
lon = us_landmark$locations[[1]][[1]][[2]]
m = leaflet() %>%
  addProviderTiles(providers$CartoDB.Positron) %>%
  setView(lng = lon, lat = latt, zoom = 5) %>%
  addMarkers(lng = lon, lat = latt)
m


Face Detection

My last blog post showed the OpenCV package and its Haar cascade algorithm in action. I didn't dig into Google's algorithms to figure out what's under the hood, but it provides similar results. However, rather than layering in each subsequent step ("find the eyes", "find the mouth", etc.), it returns more than you ever needed to know in one call.

The likelihoods are another amazing piece of information returned! I have run about 20 images through this API and every single one has been accurate – very impressive!

I wanted to showcase the face detection and headwear detection first. Here's a picture of my wife and me at "The Bean" in Chicago (side note: it's awesome! I thought it was going to be really silly, but you can have a lot of fun with all of the angles and reflections):

us_hats_pic <- readImage('us_hats.jpg')
plot(us_hats_pic)

us_hats = getGoogleVisionResponse('us_hats.jpg',
                                      feature = 'FACE_DETECTION')
head(us_hats)
##                                 vertices
## 1 295, 410, 410, 295, 164, 164, 297, 297
## 2 353, 455, 455, 353, 261, 261, 381, 381
##                                 vertices
## 1 327, 402, 402, 327, 206, 206, 280, 280
## 2 368, 439, 439, 368, 298, 298, 370, 370
##
## landmarks ...
##   rollAngle panAngle tiltAngle detectionConfidence landmarkingConfidence
## 1  7.103324 23.46835 -2.816312           0.9877176             0.7072066
## 2  2.510939 -1.17956 -7.393063           0.9997375             0.7268016
##   joyLikelihood sorrowLikelihood angerLikelihood surpriseLikelihood
## 1   VERY_LIKELY    VERY_UNLIKELY   VERY_UNLIKELY      VERY_UNLIKELY
## 2   VERY_LIKELY    VERY_UNLIKELY   VERY_UNLIKELY      VERY_UNLIKELY
##   underExposedLikelihood blurredLikelihood headwearLikelihood
## 1          VERY_UNLIKELY     VERY_UNLIKELY        VERY_LIKELY
## 2          VERY_UNLIKELY     VERY_UNLIKELY        VERY_LIKELY
us_hats_pic <- readImage('us_hats.jpg')
plot(us_hats_pic)

xs1 = us_hats$fdBoundingPoly$vertices[[1]][1][[1]]
ys1 = us_hats$fdBoundingPoly$vertices[[1]][2][[1]]

xs2 = us_hats$fdBoundingPoly$vertices[[2]][1][[1]]
ys2 = us_hats$fdBoundingPoly$vertices[[2]][2][[1]]

polygon(x=xs1,y=ys1,border='red',lwd=4)
polygon(x=xs2,y=ys2,border='green',lwd=4)
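
If you only care about the emotion and headwear likelihoods, you can pull just those columns out of the response (the same pattern shows up in the exercises at the end of this post):

# Grab only the likelihood columns (joy, sorrow, anger, surprise,
# under-exposure, blur, headwear) from the face detection response
us_hats[grepl('Likelihood', names(us_hats))]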

Here’s a shot that should be familiar (copied directly from my last blog) – and I wanted to highlight the different features that can be detected. Look at how many points are perfectly placed:

my_face_pic <- readImage('my_face.jpg')
plot(my_face_pic)

my_face = getGoogleVisionResponse('my_face.jpg',
                                      feature = 'FACE_DETECTION')
head(my_face)
##                               vertices
## 1 456, 877, 877, 456, NA, NA, 473, 473
##                               vertices
## 1 515, 813, 813, 515, 98, 98, 395, 395
## landmarks ...
##    rollAngle  panAngle tiltAngle detectionConfidence landmarkingConfidence
## 1 -0.6375801 -2.120439  5.706552            0.996818             0.8222974
##   joyLikelihood sorrowLikelihood angerLikelihood surpriseLikelihood
## 1   VERY_LIKELY    VERY_UNLIKELY   VERY_UNLIKELY      VERY_UNLIKELY
##   underExposedLikelihood blurredLikelihood headwearLikelihood
## 1          VERY_UNLIKELY     VERY_UNLIKELY      VERY_UNLIKELY
head(my_face$landmarks)
## [[1]]
##                            type position.x position.y    position.z
## 1                      LEFT_EYE   598.7636   192.1949  -0.001859295
## 2                     RIGHT_EYE   723.1612   192.4955  -4.805475700
## 3          LEFT_OF_LEFT_EYEBROW   556.1954   165.2836  15.825399000
## 4         RIGHT_OF_LEFT_EYEBROW   628.8224   159.9029 -23.345352000
## 5         LEFT_OF_RIGHT_EYEBROW   693.0257   160.6680 -25.614508000
## 6        RIGHT_OF_RIGHT_EYEBROW   767.7514   164.2806   7.637372000
## 7         MIDPOINT_BETWEEN_EYES   661.2344   185.0575 -29.068363000
## 8                      NOSE_TIP   661.9072   260.9006 -74.153710000
...
my_face_pic <- readImage('my_face.jpg')
plot(my_face_pic)

xs1 = my_face$fdBoundingPoly$vertices[[1]][1][[1]]
ys1 = my_face$fdBoundingPoly$vertices[[1]][2][[1]]

xs2 = my_face$landmarks[[1]][[2]][[1]]
ys2 = my_face$landmarks[[1]][[2]][[2]]

polygon(x=xs1,y=ys1,border='red',lwd=4)
points(x=xs2,y=ys2,lwd=2, col='lightblue')


Logo Detection

To continue along the Chicago trip, we drove by Wrigley Field and I took a really bad photo of the sign from a moving car while it was under construction. It's a nice test because the photo has a lot of different lines and writing, and the Toyota logo isn't incredibly prominent or necessarily true to the brand colors.

This call returns:

wrigley_image <- readImage('wrigley_text.jpg')
plot(wrigley_image)

wrigley_logo = getGoogleVisionResponse('wrigley_text.jpg',
                                   feature = 'LOGO_DETECTION')
head(wrigley_logo)
##           mid description     score                               vertices
## 1 /g/1tk6469q      Toyota 0.3126611 435, 551, 551, 435, 449, 449, 476, 476
wrigley_image <- readImage('wrigley_text.jpg')
plot(wrigley_image)
xs = wrigley_logo$boundingPoly$vertices[[1]][[1]]
ys = wrigley_logo$boundingPoly$vertices[[1]][[2]]
polygon(x=xs,y=ys,border='green',lwd=4)


Text Detection

I'll continue using the Wrigley Field picture. There is text all over the place, and it's fun to see what is captured and what isn't. It appears as if the curved text at the top ("FIELD") isn't easily interpreted as text. However, the rest is caught and the words are captured.

The response sent back is a bit more difficult to interpret than the rest of the API calls – it breaks things apart by word but also returns everything as one line. Here’s what comes back:

wrigley_text = getGoogleVisionResponse('wrigley_text.jpg',
                                   feature = 'TEXT_DETECTION')
head(wrigley_text)
##   locale
## 1     en

##                                                                                                        description
## 1 RIGLEY F\nICHICAGO CUBS\nORDER ONLINE AT GIORDANOS.COM\nTOYOTA\nMIDWEST\nFENCE\n773-722-6616\nCAUTION\nCAUTION\n
## 2                                                                                                            ORDER
##                                 vertices
## 1   55, 657, 657, 55, 210, 210, 852, 852
## 2 343, 482, 484, 345, 217, 211, 260, 266
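
If you would rather have one entry per line of text than the single concatenated string, you can split the first description on its newlines (a small sketch, assuming the field stores literal newline characters as shown above):

# Split the full transcription into one element per line of text
strsplit(wrigley_text$description[1], '\n')[[1]]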

wrigley_image <- readImage('wrigley_text.jpg')
plot(wrigley_image)

for(i in 1:length(wrigley_text$boundingPoly$vertices)){
  xs = wrigley_text$boundingPoly$vertices[[i]]$x
  ys = wrigley_text$boundingPoly$vertices[[i]]$y
  polygon(x=xs,y=ys,border='green',lwd=2)
}


That’s about it for the basics of using the Google Vision API with the RoogleVision library. I highly recommend tinkering around with it a bit, especially because it won’t cost you a dime.

While I do enjoy the math under the hood and the thinking required to understand algorithms, I do think these sorts of APIs will become the way of the future for data science. Outside of specific use cases or special industries, it seems hard to imagine wanting to create algorithms that would be better than ones created for mass consumption. As long as they're fast, free and accurate, I'm all about making my life easier! From the hiring perspective, I much prefer someone who can get the job done over someone who can slightly improve performance (as always, there are many cases where this doesn't apply).

Please comment if you are utilizing any of the Google APIs for business purposes; I would love to hear about it!

As always, you can find this on my GitHub.



Image Recognition With Google Vision API Exercises

Google Cloud Vision API is a powerful, off-the-shelf tool that provides Image Recognition, Object Detection, and OCR. There is also a RoogleVision package that makes it really easy to use the service.

Answers to the exercises are available here.

If you obtained a different (correct) answer than those listed on the solutions page, please feel free to post your answer as a comment on that page.

Exercise 1
Install RoogleVision. Sign in at Google Cloud Platform. Create a project, enable billing (free up to 1,000 requests per month) and enable the Google Cloud Vision API. To be able to connect to Google Cloud, create an OAuth 2.0 client ID on the credentials tab. Finally, connect to Google Cloud.

####################
#                  #
#    Exercise 1    #
#                  #
####################

# devtools::install_github("cloudyr/RoogleVision")
library(RoogleVision)
library(jsonlite)

creds = fromJSON('credentials.json')
options("googleAuthR.client_id" = creds$installed$client_id)
options("googleAuthR.client_secret" = creds$installed$client_secret)
options("googleAuthR.scopes.selected" = c("https://www.googleapis.com/auth/cloud-platform"))
googleAuthR::gar_auth()
## Warning: option(googleAuthR.scopes.selected) not same scopes as current cached token .httr-oauth, will need reauthentication.  
##                      
## Token scopes: https://www.googleapis.com/auth/webmasters https://www.googleapis.com/auth/analytics https://www.googleapis.com/auth/analytics.readonly https://www.googleapis.com/auth/analytics.manage.users.readonly https://www.googleapis.com/auth/tagmanager.readonly https://www.googleapis.com/auth/urlshortener
## getOption(googleAuthR.scopes.selected): https://www.googleapis.com/auth/cloud-platform

Exercise 2
Detect what is on the image below.


####################
#                  #
#    Exercise 2    #
#                  #
####################
label_link <- 'https://www.r-exercises.com/wp-content/uploads/2017/10/eiffel.jpg'
label <- getGoogleVisionResponse(label_link,
                                 feature = 'LABEL_DETECTION')
label
##          mid       description     score
## 1  /m/034z7h         cityscape 0.9712378
## 2   /m/0j_s4 metropolitan area 0.9708802
## 3 /m/05_5t0l          landmark 0.9697697
## 4  /m/01d74z             night 0.9574602
## 5  /m/01fdzj             tower 0.9561583



Exercise 3
Detect the landmark on the image from exercise 2.

####################
#                  #
#    Exercise 3    #
#                  #
####################
landmark <- getGoogleVisionResponse(label_link,
                                    feature = 'LANDMARK_DETECTION')
landmark
##        mid  description     score                               vertices
## 1 /m/02j81 Eiffel Tower 0.6718224 439, 556, 556, 439, 159, 159, 560, 560
##             locations
## 1 48.858461, 2.294351



Exercise 4
Plot the image from exercise 2 with a box around the detected landmark. (You will need an additional package, for example EBImage, for this one.)

####################
#                  #
#    Exercise 4    #
#                  #
####################
library(EBImage)
plot(readImage(label_link))
xs = landmark$boundingPoly$vertices[[1]][1][[1]]
ys = landmark$boundingPoly$vertices[[1]][2][[1]]
polygon(x=xs,y=ys,border='red',lwd=4)



Exercise 5
Mark on the map the location of the landmark. (You will need an additional package, for example leaflet, for this one.)

####################
#                  #
#    Exercise 5    #
#                  #
####################
library(leaflet)
latt = landmark$locations[[1]][[1]][[1]]
lon = landmark$locations[[1]][[1]][[2]]
m = leaflet() %>%
  addProviderTiles(providers$CartoDB.Positron) %>%
  setView(lng = lon, lat = latt, zoom = 5) %>%
  addMarkers(lng = lon, lat = latt)
m

Exercise 6
Detect a logo on the picture below and mark it with a box.


####################
#                  #
#    Exercise 6    #
#                  #
####################
logo_link <- 'https://www.r-exercises.com/wp-content/uploads/2017/10/cola.jpg'
logo <- getGoogleVisionResponse(logo_link,
                                feature = 'LOGO_DETECTION')
plot(readImage(logo_link))
xs = logo$boundingPoly$vertices[[1]][1][[1]]
ys = logo$boundingPoly$vertices[[1]][2][[1]]
polygon(x=xs,y=ys,border='red',lwd=4)

Exercise 7
Detect a face on the picture below and mark it with a box.


####################
#                  #
#    Exercise 7    #
#                  #
####################
face_link <- 'https://www.r-exercises.com/wp-content/uploads/2017/10/face.jpg'
face <- getGoogleVisionResponse(face_link,
                                feature = 'FACE_DETECTION')
plot(readImage(face_link))
xs = face$boundingPoly$vertices[[1]][1][[1]]
ys = face$boundingPoly$vertices[[1]][2][[1]]
polygon(x=xs,y=ys,border='red',lwd=4)




Exercise 8
Mark face landmarks on the image.


####################
#                  #
#    Exercise 8    #
#                  #
####################
xs2 = face$landmarks[[1]][[2]][[1]]
ys2 = face$landmarks[[1]][[2]][[2]]
points(x=xs2,y=ys2,lwd=2, col='lightblue')


Exercise 9
What is the most probable emotion expressed by the face on the image?


####################
#                  #
#    Exercise 9    #
#                  #
####################
face[grepl('Likelihood', names(face))]
##   joyLikelihood sorrowLikelihood angerLikelihood surpriseLikelihood
## 1   VERY_LIKELY    VERY_UNLIKELY   VERY_UNLIKELY      VERY_UNLIKELY
##   underExposedLikelihood blurredLikelihood headwearLikelihood
## 1          VERY_UNLIKELY     VERY_UNLIKELY      VERY_UNLIKELY


Exercise 10
Get the text from the image below.


####################
#                  #
#    Exercise 10   #
#                  #
####################
text_link <- 'https://www.r-exercises.com/wp-content/uploads/2017/10/golf.jpg'
text <- getGoogleVisionResponse(text_link,
                                feature = 'TEXT_DETECTION')
text$description[1]
## [1] "Danger\nGolf in progress\nBeware of golf balls from\nboth directions\n"



