Download friends’ profile pictures on facebook!

Hello guys, before going into the topic, I want to tell you one thing. This method may not fetch all of your Facebook friends' profile pictures. But it is the most efficient way* (considering the number of pictures you get) that you can find on the whole internet (exaggerating 😛 ).
*depends on your popularity on FB.
Let me make sure you have the prerequisites (don't proceed without understanding these topics well):

  • Considerable knowledge of the Facebook Graph API. There's a nice video explaining the Graph API.
  • The third-party facebook-sdk package installed for Python, as that is what I am going to use (see the setup sketch right after this list).
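Before we start, here is a minimal setup sketch (the token string below is a placeholder; you can generate a user access token, for example from Facebook's Graph API Explorer):

# install the SDK first: pip install facebook-sdk
import facebook

ACCESS_TOKEN = "<your-user-access-token>"  # placeholder; paste your own token here
graph = facebook.GraphAPI(access_token=ACCESS_TOKEN)

# quick sanity check: fetch your own profile node
me = graph.get_object("me")
print(me["name"])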

So, with a simple Google search for getting friends' pics, you might land on a hell of a lot of pages, most of them using a call to the friends edge (/friends) on a user node (me/friends).

Okay. So, why isn’t it efficient?

Here's the catch: the permissions required to make this call, listed on this page. The 'second point' states that it will only return friends who have used the app making the request. I don't know what your case is, but for me it returned only 14 friends out of the 970 in my list. Very disappointing. Try it for yourself and let me know in the comments how many nodes ( 😛 ) it returned.

import facebook

graph = facebook.GraphAPI(access_token=ACCESS_TOKEN)  # needs a valid user access token (see setup above)
friends = graph.get_connections("me", "friends")      # call the /me/friends edge
print(len(friends["data"]))                           # how many friends came back
It returned only 15 friends in my list.

I think our aim is to fetch profile pics, not to get a friends list!

If you are thinking like that, just take a minute to re-think. To get pics through the Graph API we can call me/picture, which returns the current profile picture of the user. We can put a username or user id in place of 'me' when calling for pictures. That's why we are so obsessed with getting our friends' list: we need their user ids or usernames. But here's another catch. The ids you got above are not the real ids. 😦 See this…

You can see that for yourself. It returned some other value in the place of user id.
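Just to be concrete about the picture call itself, here is a minimal sketch of fetching the current profile picture for a given username or user id (it uses the same height and redirect parameters as the download loop later in this post):

# 'me' can be replaced with a username or user id
pic = graph.get_connections("me", "picture?height=9000&redirect=false")
print(pic["data"]["url"])  # direct URL of the profile picture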

So now we have to somehow find a way to get the user ids of our friends.

Likes! Yes, the answer is likes.

You can get your friends' user ids (it depends on their settings, but you can access about 90% of them) by using one of your own pics or statuses with the most likes. Find its id in the address bar after clicking on your pic. Now you can call <pic-id>/likes, which returns a list of the friends who liked your pic. And here's the brilliant part: FB also provides a link to the profile of everyone who liked your node (pic or status).

likes = graph.get_connections("<node-id>", "likes?fields=link,name&limit=500")
#returns only 1000 friends maximum.
#returns your friends' profiles as "https://www.facebook.com/<username>"

Now we've got our friends' usernames. Let's start downloading pics. 🙂

import urllib.request

for i in range(0, len(likes["data"])):
    myid = likes["data"][i]["id"]
    myname = likes["data"][i]["name"]
    mylink = likes["data"][i]["link"]
    username = mylink[25:]  # strip the "https://www.facebook.com/" prefix
    print(myname)
    pic = graph.get_connections(username, "picture?height=9000&redirect=false")
    urllib.request.urlretrieve(pic["data"]["url"], "fb/likers/" + myname + ".jpg")
    print("downloaded the pic")

Downloading pics in action.

In this process you may get errors fetching some friends' usernames because of their settings. In that case we check for it and skip those links.

#such links look like 'https://www.facebook.com/profile.php?id=100005090622885',
#from which we can't extract a username
check = username[:7]
if check == "profile":
    print("Can't fetch username due to their settings.")
    continue

If you are planning to download pictures of friends who liked two or more of your nodes, you may get repeated users, i.e. one of your friends may have liked both of your nodes. In that case we try not to re-download their pictures.

#needs: import os
if os.path.exists("fb/likers/" + myname + ".jpg"):
    print("Already downloaded " + myname + "'s picture!")
    continue
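Putting all the pieces together, a minimal end-to-end sketch could look like this (assuming a valid access token and the id of one of your own well-liked posts; both values below are placeholders):

import os
import urllib.request
import facebook

ACCESS_TOKEN = "<your-user-access-token>"   # placeholder
NODE_ID = "<id-of-your-most-liked-post>"    # placeholder

graph = facebook.GraphAPI(access_token=ACCESS_TOKEN)
likes = graph.get_connections(NODE_ID, "likes?fields=link,name&limit=500")

os.makedirs("fb/likers", exist_ok=True)  # make sure the output folder exists
for liker in likes["data"]:
    myname = liker["name"]
    username = liker["link"][25:]  # strip the "https://www.facebook.com/" prefix
    if username[:7] == "profile":
        print("Can't fetch username due to their settings.")
        continue
    if os.path.exists("fb/likers/" + myname + ".jpg"):
        print("Already downloaded " + myname + "'s picture!")
        continue
    pic = graph.get_connections(username, "picture?height=9000&redirect=false")
    urllib.request.urlretrieve(pic["data"]["url"], "fb/likers/" + myname + ".jpg")
    print("downloaded " + myname + "'s pic")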

OMG! omg! I got a lot of pics. So, now what to do?

You can do a lot of things. Really a lot of things. You can build a crazy face recogniser like the one here, and you can leave it running in your room so it recognises whoever enters. 😛

For the entire code, you can visit my GitHub page.

Any questions or feedback are always welcome at shine123surya@gmail.com


Car detection in MATLAB

myCarDetection

Hello guys, how's it going?

Today we are going to train a cascade detector, which returns an XML file. We can use that XML file to detect objects in an image, cars (only from the side view) in this case.

As we are going to use MATLAB, I assume you have MATLAB installed on your PC along with the Image Processing and Computer Vision toolboxes. The whole post consists of two steps:

  1. Train our cascade detector with all the data files.
  2. Use the output XML file to detect objects in a pic.

The following pic says it all.

Overview of what we are going to do in here.


Before going into the topics, let's see what we are going to build:

Detected correctly
This is the final output we are going to get by the end.
1.  Training the cascade file

First things first: to train a cascade detector we need a dataset. A dataset contains a lot of positive and negative images of a specific object. So, download the image database from here. You can see a lot of image files (.pgm) in the folders testImages and trainImages. You can get an overview by reading the 'README.txt' file in the downloaded folder. In this part we concentrate only on the trainImages folder; in the next part we get to testImages. Make new folders 'trainImagesNeg' and 'trainImagesPos' and remember their paths. Copy (or cut) and paste the pictures in the 'trainImages' folder into these new folders; as you'll know if you read that README, all the negative pictures are named neg-n.pgm and the positive ones pos-n.pgm. (A small sketch for splitting them follows.)
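Since this is just file shuffling, here is a small Python sketch for splitting the images by name (the folder paths are assumptions about where you unpacked the dataset; MATLAB's movefile would work just as well):

import glob
import os
import shutil

src = "carData/trainImages"  # assumed path to the downloaded trainImages folder
os.makedirs("carData/trainImagesPos", exist_ok=True)
os.makedirs("carData/trainImagesNeg", exist_ok=True)

for path in glob.glob(os.path.join(src, "*.pgm")):
    name = os.path.basename(path)
    if name.startswith("pos-"):    # positive images: pos-n.pgm
        shutil.copy(path, os.path.join("carData/trainImagesPos", name))
    elif name.startswith("neg-"):  # negative images: neg-n.pgm
        shutil.copy(path, os.path.join("carData/trainImagesNeg", name))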

So, here is the line to train your data:

trainCascadeObjectDetector('carTraindata4.xml', mydata, negativeFolder);

So, what's with those arguments? Where the heck are they initialized?

Here we go. The first argument is the name of the XML file that will be saved in our current directory, so that we can use it for detecting objects. You can name it as you wish, but don't forget the '.xml' extension. The next argument is actually a MATLAB struct holding the data of all positive images. It contains two fields, namely imageFilename and objectBoundingBoxes. The size of this struct is 1x(no. of pos images), 1x550 in this case as we have 550 pos images. Have a look at this:

Struct-mydata
Screenshot of struct of positive images with objectBoundingBoxes field

In the first field, the paths of all 550 pos images are entered, and in the second field the bounding boxes of our object of interest. As we got this whole image data from a datasets site rather than collecting it from the internet, we don't need to take on the huge task of manually putting the bounding-box values into the second field. (Thank God.) The values in the second field are of the form [x of top-left point, y of top-left, width, height]. All the pictures in the dataset are of size (100, 40) and are already cropped to the object of interest, so we can just select the whole pic by giving the arguments [1, 1, 100, 40]. Also add the folder trainImagesPos to the MATLAB path by right-clicking on it and choosing Add to Path.

Okay, I see where this is going. You mean I should do this 550 times? (as there are 550 pos images)

It's absolutely up to you, or you could use this for loop after initializing the struct 'mydata' (the code is self-explanatory):

mydata = struct('imageFilename', 'Just a random string', 'objectBoundingBoxes', 'Just a random string');
for i = 0:549
    mydata(i+1).imageFilename = strcat('trainImagesPos/pos-', num2str(i), '.pgm');
    mydata(i+1).objectBoundingBoxes = [1, 1, 100, 40];
end


Now the whole thing with the second argument 'mydata' is settled. As the name suggests, the third argument 'negativeFolder' is just a folder containing negative images. There is no need for bounding boxes for negative images, so no need for anything like a struct. Just assign the folder path to the variable named negativeFolder:

negativeFolder = fullfile('C:\Users\Surya Teja Cheedella\Documents\MATLAB\carDetection\carData\trainImagesNeg');

For good training, there should be a large number of negative images. As the number of negative images in the dataset is relatively low, I copied and pasted a lot of my personal images into that trainImagesNeg folder (make sure they don't contain pics of cars in side view).

You can learn more about this function trainCascadeObjectDetector here.

Now, run the code with all arguments initialized. It took around 40 minutes to complete 13 stages of training on my laptop and returned an XML file.

Stages? What do you mean by them? Where did they come from?

See THIS.

Stages while training
An overview of what it’s gonna do in various stages.


2.  Detecting objects in an image.

After successful training, we can use the xml file to detect objects (cars in this case) in a picture. These lines of code will do that for us:

%initialising the variable detector with the xml file
detector= vision.CascadeObjectDetector('carTraindata3.xml');
%reading an image file in current dir.
img= imread('sun.png');
%bounding box around detected object
box= step(detector, img);
%inserting that bounding box in given picture and showing it
figure,imshow(insertObjectAnnotation(img, 'rectangle', box,' '));

I have manually tested my trained xml file with all the pics in the testImages folder. It has an accuracy of 93%; out of 180 images, these are the statistics:

  • False Positives- 10 (single object in 120 pics and double objects in remaining)
  • True Negatives- 9

Here is the code (just a for loop) to run the detector on a large number of images and display them:

for j = 1:100
    % read each test image (test-0.pgm, test-1.pgm, ...)
    img = imread(strcat('test-', num2str(j-1), '.pgm'));
    % run the trained detector and get bounding boxes
    bbox = step(detector, img);
    % draw the boxes and show the result
    figure, imshow(insertObjectAnnotation(img, 'rectangle', bbox, ' '));
    pause(0.5);
end

As usual, my training has a small defect. You can see it in the pic below 😛

Wrongly detected images


So, Happy Training!


Surya

Face Detection using openCV in Python

faceDetection

Hello folks, how's it going!

Today I am going to introduce my face detection algorithm (not a big one though). Don't think that this is a really huge task! I am not working from scratch (meaning I am not actually gathering a huge data set of pictures, both positive and negative, i.e. having faces and not, and training my own algorithm). I have used Haar cascade files.

What the heck are they? Here we go.

You take samples of a lot of image files of both types (having faces and not having faces), train the algorithm, make it learn whenever (most probably every time 😛 ) it makes a mistake, and store the whole result in XML files. In this case I am using Haar cascade XML files. As I already said, they are huge. Wanna know their size? 35K lines in each XML file! Yes, you read that right, 35K lines of random numbers. 😛
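By the way, if you installed OpenCV via pip, the bundled cascade XML files can usually be located through cv2.data (a small sketch; it assumes a reasonably recent opencv-python build, otherwise just download the XML files and keep them next to your script as I did):

import cv2

# folder where opencv-python ships its built-in Haar cascade XML files
print(cv2.data.haarcascades)
face_cascade = cv2.CascadeClassifier(cv2.data.haarcascades + 'haarcascade_frontalface_default.xml')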

Coming to the juicy part, the code. Here it is. I have commented briefly on what each line contributes.


#import numpy as np
import cv2
#import PIL

#import the xml files. here i am using frontal face and eye.
face_cascade = cv2.CascadeClassifier('haarcascade_frontalface_default.xml')
eye_cascade = cv2.CascadeClassifier('haarcascade_eye.xml')

img = cv2.imread('5.jpg') #reading the image
gray_img = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY) #converting into grayscale (the algorithms work on grayscale images)

faces = face_cascade.detectMultiScale(gray_img, 1.3, 5) #detecting faces. lighting conditions may affect the output
print(faces) #just printing to console. this will print the boundary points of the detected face(s)

#searching for eyes only inside faces. it's easier and more efficient to search only the face region rather than the whole image
for (p,q,r,s) in faces:
    cv2.rectangle(img,(p,q),(p+r,q+s),(150,125,0),2) #drawing a rectangle indicating the face
    face_gray = gray_img[q:q+s, p:p+r] #cropping the face in the gray image
    face_color = img[q:q+s, p:p+r] #cropping the face in the color image
    eyes = eye_cascade.detectMultiScale(face_gray) #searching for eyes in the grayscale face
    for (ep,eq,er,es) in eyes:
        cv2.rectangle(face_color,(ep,eq),(ep+er,eq+es),(100,210,150),2) #drawing a rectangle for each eye

cv2.imshow("output", img)
cv2.waitKey(0) #showing the img

#this script only takes an image and shows the detected faces; it doesn't modify the image file.
#if you want to save the resulting image, use this...
#cv2.imwrite("output.jpg", img)

Wanna know what it did? I gave this image

Input image

and it returned THIS!…….

Output
Screenshot of the output


Currently, I am working on an alphabet detection thing to win a bet with my friend.

Bieeee  ^_^

Here somewhere in Milky Way

-Surya