Download friends’ profile pictures on Facebook!

Hello guys! Before getting into the topic, a quick note: this method may not fetch all of your Facebook friends’ profile pictures. But, considering the number of pictures you get, it is the most efficient way* you will find on the whole internet (exaggerating 😛 ).
*depends on your popularity on FB.
Before we start, make sure you have the following prerequisites, and don’t proceed without understanding these topics well:

  • Considerable knowledge of the Facebook Graph API. Here is a nice video explaining the Graph API.
  • The third-party facebook-sdk package for Python installed, since I am going to use Python.

So, with a simple Google search for getting friends’ pics, you might land on a whole lot of pages, most of them making a call to the friends edge (/friends) on a user node (me/friends).

Okay. So, why isn’t it efficient?

Here’s the catch: the permissions required to make this call, listed on this page. The second point states it will only return friends who have used the app making the request. I don’t know what your case is, but for me it returned only 14 of the 970 friends in my list. Very disappointing. Try it for yourself and let me know in the comments how many nodes ( 😛 ) it returned.

import facebook

# Initialize the Graph API client with your user access token
graph = facebook.GraphAPI(access_token="<your-access-token>")
friends = graph.get_connections("me", "friends")
print(len(friends["data"]))
It returned only 15 friends in my list.

I thought our aim was to fetch profile pics, not to get a friends list!

If you are thinking like that, take a minute to re-think. To get pics through the Graph API we can call me/picture, which returns the current profile picture of a user. We can put a username or user id in place of ‘me‘ when calling for pictures. That’s why we want our friends’ list: to get their user ids or usernames. But here’s another catch: the ids you got are not the real ids. 😦 See this…
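Under the hood, that picture call is just the /&lt;user&gt;/picture edge with a height and redirect=false, the same parameters used later in this post. Here is a tiny sketch of the URL it resolves to (the helper name is mine, not part of any SDK):

```python
def picture_url(user, height=9000):
    """Build the Graph API picture endpoint for a user id or username.

    With redirect=false the API returns JSON containing the image URL
    instead of redirecting straight to the image bytes.
    """
    return ("https://graph.facebook.com/%s/picture?height=%d&redirect=false"
            % (user, height))

print(picture_url("me"))
# https://graph.facebook.com/me/picture?height=9000&redirect=false
```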

You can see that for yourself. It returned some other value in the place of user id.

So, now we somehow have to find a way to get the user ids of our friends.

Likes! Yes the answer is likes.

You can get your friends’ user ids (it depends on their settings, but you can access about 90% of them) by picking any of your pics or statuses with the most likes. Find its id in the search bar by clicking on your pic. Now, you can call &lt;pic-id&gt;/likes, which returns a list of friends who liked your pic. And here’s the brilliant part: FB provides a link to the profile of everyone who liked your nodes (pics or statuses).

likes = graph.get_connections("<node-id>", "likes?fields=link,name&limit=500")
# Returns at most 1000 friends.
# Each friend's profile comes back as "https://www.facebook.com/<username>"
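Since a single call caps out (the comment above notes roughly 1000 likers at most), you may need to follow the paging cursors the Graph API returns. Here is a minimal sketch of a cursor-following loop; `fetch_page` is a hypothetical stand-in for a real call like `graph.get_connections("<node-id>", "likes", after=cursor)`:

```python
def iter_likers(fetch_page):
    """Yield every liker across all pages of a likes connection.

    fetch_page(after) must return one Graph-API-style response dict
    with a "data" list and an optional "paging" key; here it stands
    in for a real graph.get_connections() call.
    """
    after = None
    while True:
        page = fetch_page(after)
        for liker in page.get("data", []):
            yield liker
        paging = page.get("paging", {})
        if "next" not in paging:   # the last page has no "next" link
            break
        after = paging["cursors"]["after"]

# Example with two fake pages standing in for API responses:
pages = {
    None: {"data": [{"name": "A"}],
           "paging": {"cursors": {"after": "c1"}, "next": "<url>"}},
    "c1": {"data": [{"name": "B"}]},
}
names = [p["name"] for p in iter_likers(lambda after: pages[after])]
print(names)  # ['A', 'B']
```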

Now we’ve got our friends’ usernames. Let’s start downloading pics. 🙂

import urllib.request

for i in range(0, len(likes["data"])):
    myid = likes["data"][i]["id"]
    myname = likes["data"][i]["name"]
    mylink = likes["data"][i]["link"]
    # Strip the "https://www.facebook.com/" prefix (25 characters) to get the username
    username = mylink[25:]
    print(myname)
    pic = graph.get_connections(username, "picture?height=9000&redirect=false")
    urllib.request.urlretrieve(pic["data"]["url"], "fb/likers/" + myname + ".jpg")
    print("downloaded the pic")

Downloading pics in action.

In this process you may get some errors fetching a friend’s username because of their settings. In that case we check for it and skip those links.

# Such a link looks like 'https://www.facebook.com/profile.php?id=100005090622885',
# from which we can't extract a username.
check = username[:7]
if check == "profile":
    print("Can't fetch username due to their settings.")
    continue
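If you prefer not to rely on fixed character offsets, the same check can be done with the standard library’s URL parser. A small sketch (the helper name is mine):

```python
from urllib.parse import urlparse

def username_from_link(link):
    """Return the username part of a profile link, or None when the
    link is an opaque profile.php?id=... URL we can't use."""
    path = urlparse(link).path.lstrip("/")
    if path.startswith("profile.php"):
        return None
    return path

print(username_from_link("https://www.facebook.com/someuser"))
# someuser
print(username_from_link("https://www.facebook.com/profile.php?id=100005090622885"))
# None
```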

If you are planning to download pictures of friends who liked two or more of your nodes, you may get repeated users, i.e. one friend may have liked both of your nodes. In that case we avoid re-downloading their pictures.

if os.path.exists("fb/likers/" + myname + ".jpg"):
    print("Already downloaded " + myname + "’s picture!")
    continue

OMG! I got a lot of pics. So, now what?

You can do a lot of things, really a lot of things. You can build a crazy face recogniser like this one here,
put it in your room, and leave it running so it recognises whoever enters. 😛

For the entire code, you can visit my GitHub page.

Any questions, feedback is always welcome at shine123surya@gmail.com 


Donated 104,390 rice grains using tesseract OCR (MARK I)

For each answer you get right, we donate 10 grains of rice through the World Food Programme to help end hunger

That’s what the guys at freerice.com say. So, I answered 10,439 questions correctly, ending up donating 104,390 rice grains. 😀 As I mentioned in one of my previous posts, I am working on OCR (Optical Character Recognition) to win a bet with a friend. I have now completed building an OCR system and donated those grains on freerice.com in a single day under the United Nations World Food Programme. As I said, I built this just to win a bet, and this whole post is one down-and-dirty way to build a working OCR system. I know this isn’t even an efficient way to do it (that’s the reason for MARK I 😛 ). Here’s a screenshot of my score (aka rice grains donated) on that site:

I let the code run for a whole night and this is what it did by morning: 104,390 rice grains donated under the UN World Food Programme.

And a screenshot while the code is running:

Working fine, but with about 60% accuracy (594 of 1,000 questions answered correctly, to be specific).

So, let’s dive into the juicy part, the code. The whole process is divided into small steps:

  1. Take a screenshot while the required page, freerice.com in our case, is open; crop it to the area we are interested in, i.e. where the multiplication question is (in this project it only answers multiplication questions); and save the cropped image.
  2. Run that image through Tesseract OCR to get a text file of the recognised characters (saved as a .txt file next to our main program).
  3. Parse that .txt file to extract the recognised information. (We are dealing with integers, so convert the text into integers.)
  4. From the parsed text we get the question (7x4 or something like that) and check for the answer among the options.
  5. If the answer matches one of the options, move the mouse to that option’s region and click. (This works for me: it clicks on the pixel coordinates where that option sits on screen, which I found by trial and error on my laptop.)
  6. If Tesseract couldn’t find the correct answer (i.e. the answer we got by solving the first line, 7x4 for example, doesn’t match any of the parsed options), it randomly clicks one of the four options just to keep the loop going. (Loop? Where’s that? See the next point.)
  7. Embed steps 1 to 6 in a loop so it does its work while you are sleeping. 😀
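The parse-and-match part of steps 3 to 5 boils down to a small pure function. This is a simplified sketch of what the nested if/elif chain in the full script does; the function name and the exact OCR text layout are my assumptions based on the parsing code in this post:

```python
def pick_option(ocr_text):
    """Given OCR output like '7x4 24 26 28 30', compute the product of
    the question and return the 0-based index of the matching option,
    or None when Tesseract misread something and nothing matches."""
    question, *options = ocr_text.split()
    a, b = question.split('x')
    product = int(a) * int(b)
    for i, option in enumerate(options):
        if int(option) == product:
            return i
    return None  # the full script falls back to a random click here

print(pick_option("7x4 24 26 28 30"))  # 2
```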

I have briefly commented on what each line contributes to the code as a whole.

#Import the required libraries. You may need to install some of them, if you don't have them.
import cv2
import os
import pyscreenshot as ImageGrab
import numpy as np
import time
from pymouse import PyMouse
import random

#defining a function rand, which clicks one of the four options at random
def rand():
    m = PyMouse()
    #pick a random int from 1 to 4 and put it into 'do'
    do = random.randint(1, 4)
    #basic if/elif chain
    if do == 1:
        #clicking at point (395, 429); the trailing 1 means a left-click
        m.click(395, 429, 1)
    elif do == 2:
        m.click(395, 466, 1)
    elif do == 3:
        m.click(395, 505, 1)
    else:
        m.click(395, 544, 1)
    m.move(50, 50)
    print("Rand")
    #wait for 1 sec, giving the browser time to refresh
    time.sleep(1)

trails= 0
#two for-loops because I wait 5 secs after every 10 calculations just to keep the system stable
for guns in range (0,1000):
  for buns in range(0,10):
  #Using try/except to avoid crashing on errors
    try:
      img= ImageGrab.grab() #taking a screenshot
      img.save('output.png')
      pic= cv2.imread('output.png')
      pic2= pic[360:570, 380:470] #cropping the pic, works in my case
      cv2.imwrite('output.png', pic2)
      u= 'convert output.png -resize 700 output.png'
      os.system(u) #writing to terminal (re-sizing the pic)
      s= 'tesseract output.png output'
      os.system(s) #writing to terminal (running Tesseract)
      f = open('output.txt', 'r')
      #flatten newlines and collapse stray whitespace into single spaces
      string = ' '.join(f.read().split())
      first= string[:string.find('x')] #finding first integer
      second= string[string.find('x')+1:string.find(' ')] #finding second integer
      pro= int(first)*int(second)
      print(pro)
      print(string)
      m= PyMouse()
      string= string[string.find(' ')+1:]
      a= int(string[:string.find(' ')])
      #print(a)
      #checking if product is equal to any of answers and clicking on that particular option
      if pro == a:
        m.click(395, 429, 1)
        m.move(50,50) #move cursor to any random point which is not in our area of interest, avoiding tesseract to think it as some character
        print("Pass")
        time.sleep(1)
      else:
        string= string[string.find(' ')+1:]
        b= int(string[:string.find(' ')])
        #print(b)
        if pro == b:
          m.click(395, 466, 1)
          m.move(50,50)
          print("Pass")
          time.sleep(1)
        else:
          string= string[string.find(' ')+1:]
          c= int(string[:string.find(' ')])
          #print(c)
          if pro == c:
            m.click(395, 505, 1)
            m.move(50,50)
            print("Pass")
            time.sleep(1)
          else:
            d= int(string[string.find(' ')+1:])
            #print(d)
            if pro == d:
              m.click(395, 544, 1)
              m.move(50,50)
              print("Pass")
              time.sleep(1)
            else:
              rand() #tesseract can't detect 100% accurately. So, tick any option randomly in case it didn't find correct answer
              #print("haha")
    except (ValueError, NameError, TypeError):
      rand() #tick randomly in case of any errors
  trails+= 10
  print("Total= " + str(trails))
  time.sleep(5) #waiting for 5secs after every 10 loops to make my system stable.

I’ll be back soon with MARK II of the OCR system, and next time I won’t be using Tesseract OCR (target accuracy: 90%; just a thought, though). If you have any questions or some feedback, please feel free to add comments. I’d be happy to hear from you. So, happy donating, any time!

-SuryaTeja Cheedella

What have I found?!

Hello,

HaHaaa.. I received all of my summer toys from element14 for free. Yeah, you heard that right, absolutely for FREE. It’s part of a RoadTest by element14, a group dedicated to testing and reviewing new products. I submitted my idea for testing these products and they selected me, on the condition that I provide them with a neat review within two months of receiving the kits. Here they are:

My Toys
Got from element14. Hahaa

 

Briefly: an MSP430 LaunchPad from TI with an Anaren Air kit, to control a robot or a robotic car from a smartphone via Bluetooth, and a Raspberry Pi with EnOcean modules (a self-powered sensors kit) for home automation. My idea is to switch off computers or laptops when you leave your room.

After spending all these days lazily, it’s time to start working on these things; providing a review will let me spend the remaining time lazily….!

Bye for now.

Peace

S

I am happy. I am sad.

Namaste,

Now I am happy: after all these boring holidays, I did something productive (posting this) besides learning. You might think it’s insane, learning during holidays, but thank god I am not studying. I am following some Java tutorials on YouTube and trying to understand my first technical book, Head First Java. A hell of a book, that. Basically I don’t like reading books; in fact, the first book I ever finished was How I Braved Anu Aunty and Co-Founded A Million Dollar Company, which I read in two days in my second sem. The main reason I don’t like books is the way they appear: those paras, those lines in the same font and size, which eventually scare me into throwing the book away.

Head First JAVA

Coming to this book, Head First Java, I didn’t throw it away because it’s a soft copy on my laptop. I didn’t delete it because I love that book. There’s this phrase on the cover page, ‘A Brain-Friendly Guide’, and after reading the book (not the whole book, though) I realised the author really means it. I read 200 pages of 700 in two days (and, surprisingly, I understood them) and realised it really is brain-friendly. All happies. I hope my Java journey takes me further. Now, the sad part: there is something big in my mind, but I am facing difficulties in practically applying it (yeah, I know something big always comes with a lot of difficulties). I got selected for two RoadTests (reviewing new electronic products, mostly development boards and home automation systems), in which I am about to receive a Raspberry Pi with home automation modules (the fun part is that these modules are self-powered) and a LaunchPad from TI, the MSP430, with Bluetooth modules. The real sad part of living in India (no hard feelings) is that it takes a long time to ship these kits here. All the others who got selected in the RoadTests (none from India) have already received their kits. There’s a nice thing to do with those self-powered modules (sensors). But before knowing what we can do with…