Jayashree Subramanian

On a cloudy, cold morning in Chennai, India, my smart devices went berserk. Not a big deal; people going crazy on social media is as common as dogs barking. Perhaps more so. What could be crazier than people getting out of a moving car (the Kiki challenge) or going about blindfolded (the Bird Box challenge)? The 10-year challenge pales in comparison to these two; it is not crazy at all. This challenge, however, differs from other social media challenges in its purpose and its unobvious motive. The #10yearchallenge is not another marketing gimmick for customer engagement, feeding our egos and playing on the narcissism of millennials. It is part of a bigger plan: a plan to train Facebook’s facial recognition programs, to teach them how people age, to train them to simulate younger or older versions of people. Seems like a bit of a stretch?

In the last edition of engage, I discussed the fiasco created by two AI bots at Facebook. Facebook’s image recognition AI is perhaps not that intelligent either, and requires training from millions and millions of data points. Still not clear? Kate O’Neill puts it clearly on Wired.co.uk: “Imagine that you wanted to train a facial recognition program on age-related characteristics and, more specifically, on age progression (e.g., how people are likely to look as they get older). Ideally, you’d want a broad and rigorous dataset with lots of people’s pictures. It would help if you knew they were taken a fixed number of years apart—say, 10 years”. Let me explain further.

Imagine that a dangerous personality like Bin Laden were still alive, had gone underground, and resurfaced into the normal world after 10 years. One day, he gets hungry and is all out of snacks. So he goes to a nearby Walmart to get some Snickers and cola. Normal folks like us probably won’t recognize him, because all the file photos and videos we saw on TV were from 10 years ago. The CCTV cameras and facial recognition programs in their current avatar would be none the wiser, because they too would be matching the faces in the live feed against file photos from 10 years ago.

But suppose these programs were ‘taught’ how people age, how their features change over a fixed period of time, say 10 years. Then, in 2020, the programs could ‘simulate’ faces, ‘predict’ how Bin Laden would look in 2020 at the age of 62, and match the faces from the live feed against a simulated face that resembles how the person currently looks. This program would almost certainly recognize Bin Laden if he ever fancied some Snickers and walked into a surveillance zone.

There would be no need for half lockets or family songs to find lost family members or siblings. You could just pop their pictures into a computer and ask the system to simulate how the person would look in the present. A bunch of Bollywood movies would have very different endings, wouldn’t they? “Yaadon ki Baarat”, the world’s first movie in the ‘Bollywood’ genre, wouldn’t have been created had this AI program been available in 1973. Goodness, no! Jokes aside, simulating age progression is just one simple and straightforward application of such training. What is learnt from this data set could have far more use cases and applications.

So, you might think: Facebook already has pictures of me from 10 years ago. Pictures of me all through these ten years. Pictures of my dog, my siblings, my parents, my old roommates, things I don’t even remember until they come up as a ‘Memory’ on my Facebook wall. So, what does Facebook have to gain from a single picture of mine from 2009 and a picture from now?

How AI programs learn is not very different from how humans learn. Imagine you want to teach a child the names of fruits. What do you do? You won’t go for a Cosmopolitan or a Times to teach a child fruits, right? Why? Because there probably won’t be fruits in there. Even if there were, you wouldn’t find all the fruits there. There would be more unwanted information (noise) than the information you need. So you would go for a picture book with a set of pictures of fruits and clear labels. These days, there are also interactive games on your smart devices. But fundamentally, both have a clearly defined set of fruits with clear names, and both exist for the express purpose of teaching children the names of fruits.

Now, if Facebook wanted to teach its facial recognition AI how people’s features change over the years, it is not going to go for all the pictures in your profile. Firstly, they contain too much noise (pictures of your dog, your old roommates, pictures of the pasta you made last month). Secondly, you didn’t upload the pictures in chronological order, especially around 2009, when smartphones were not very prevalent. You might have uploaded a picture from 2007 in 2009, or a picture from 1991, when you were a baby, in 2013. You might have uploaded a scanned picture of your mother from the 1970s in 2009. That does not mean the picture was actually taken in 2009. You and I as humans would understand that this picture is not from 2009, but can an AI program? The answer is, most probably, not. The program might even mistake your mother for you, if you share physical similarities. Maybe you gave the picture a caption, say, “My beautiful mother”. AI could then understand that it’s a picture of your mother. But it has no way to tell that it’s an old picture from the 1970s and not from 2009. Because AI is, to be blunt, stupid. Like a small child who is yet to learn things. At least until we teach it.

AI programs learn from huge data sets, and those data sets need to be clean and labelled. Teaching an AI program age progression using the plethora of pictures on FB would be very inefficient, and there is very little chance that the program would learn what was intended. It’s like trying to teach a child the names of fruits using the Times or Cosmopolitan: there are little bits of useful data (fruits), lots of unnecessary data (anything other than fruits), and the child will most probably get distracted and confused. Enter the 10-year challenge. Millions of pictures that are exactly 10 years apart, between 2009 and 2019, along with other important data like the subject’s (your) gender, age, race and so on. Millions and millions of such pictures, available to the AI program to learn from, with context.
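
To make the ‘clean and labelled’ idea concrete, here is a minimal sketch in Python of what one training example from such a dataset might look like. Every name and label in it is hypothetical (this is not Facebook’s actual data format): a pair of pictures known to be exactly 10 years apart, plus the free context the challenge supplies, with everything else treated as noise and dropped.

```python
from dataclasses import dataclass
from typing import List

@dataclass
class AgeProgressionSample:
    """One clean, labelled training example: two photos of the same
    person, known to be exactly 10 years apart, plus context.
    All fields are illustrative, not Facebook's real schema."""
    photo_2009: str   # path to the 'then' picture (hypothetical file)
    photo_2019: str   # path to the 'now' picture (hypothetical file)
    age_in_2009: int  # the subject's age in the older picture
    gender: str
    ethnicity: str

def build_dataset(posts: List[dict]) -> List[AgeProgressionSample]:
    """Keep only posts that carry both pictures and all the labels.
    Everything else (dogs, roommates, last month's pasta) is noise."""
    required = ("photo_2009", "photo_2019", "age_in_2009", "gender", "ethnicity")
    return [AgeProgressionSample(**post) for post in posts
            if all(key in post for key in required)]

# A toy example of what one #10yearchallenge post might contribute:
posts = [
    {"photo_2009": "then.jpg", "photo_2019": "now.jpg",
     "age_in_2009": 25, "gender": "female", "ethnicity": "asian"},
    {"caption": "My beautiful mother"},  # no paired photos, no labels -> dropped
]
print(build_dataset(posts))
```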

The programs are then at liberty to measure the angle of sag of the under-eye bags, the number of lines in the crow’s feet, or their presence or absence, across millions of data points; store them, compare them, find a pattern and establish a general principle. ‘A learning’, if you will. And voila, in a few months Facebook might be able to simulate, with decent accuracy, how people change as they grow old. How 13-year-olds are likely to look when they are 22. Soon, they’ll probably be able to predict how your child would look when he or she grows up. A Bin Laden, or anybody classified as dangerous by the government, most certainly can’t step out for a snack. Able to picture the power of such an AI program now? The possibilities? Do you still think this is just another social media trend? Of course not.
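
As a back-of-the-envelope illustration, and emphatically not Facebook’s actual method, here is a toy Python sketch of that learn-then-simulate loop: reduce each face pair to a couple of made-up measurements, compute the average 10-year change per demographic group, then add that change to a new face to ‘predict’ its older self.

```python
from collections import defaultdict

# Entirely hypothetical data: each face reduced to two crude measurements,
# say under-eye sag (mm) and number of crow's-feet lines.
pairs = [
    # (measurements in 2009, measurements in 2019, demographic group)
    ([1.0, 2], [2.5, 5], "female_20s"),
    ([1.2, 3], [2.9, 7], "female_20s"),
    ([2.0, 4], [3.8, 9], "male_40s"),
]

# "Find a pattern": accumulate the change in each feature over 10 years,
# separately for each group.
totals = defaultdict(lambda: [0.0, 0.0, 0])
for then, now, group in pairs:
    totals[group][0] += now[0] - then[0]
    totals[group][1] += now[1] - then[1]
    totals[group][2] += 1

# "Establish a general principle": the average 10-year change per group.
pattern = {}
for group, (d_sag, d_lines, n) in totals.items():
    pattern[group] = (d_sag / n, d_lines / n)

def age_by_ten_years(face, group):
    """'Simulate' a face 10 years on by adding the learnt average change."""
    d_sag, d_lines = pattern[group]
    return [face[0] + d_sag, face[1] + d_lines]

print(age_by_ten_years([1.1, 2], "female_20s"))
```

A real system would of course learn from pixels rather than two hand-picked numbers, but the principle is the same: millions of labelled before-and-after pairs in, one general rule of ageing out.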

I’ve been talking as though this trend was started by FB. That’s not quite accurate. Like most other trends and challenges, it is unclear who created this “trend”, but seeing how much FB stands to gain, it could well have been created by the tech giant itself.

So, why now, in 2019? Why didn’t it happen before, say in 2018 or 2017? Some analysts opine, and I agree, that FB was simply not that popular before 2009. That was the year FB gained millions in investor funding along with millions of users. So a 10-year challenge in 2017 wouldn’t have made much sense, as there weren’t many pictures available from 2007. FB itself was founded in 2004 and was opened to the public only in 2006. So it makes perfect sense to have such a trend in 2019.

I’m not saying you should be afraid of technology giants, and I’m definitely not spreading paranoia. I’m just telling you to stay smart about all of it, and to join me in admiring the power, possibility and stupidity of AI.

The limitations that we consciously put on our AI programs are called artificial stupidity, and that could very well be the key to preventing AI from taking over. Let’s continue discussing artificial stupidity in the next edition. Stay tuned!


Quotes / page fillers:

“Because AI is, to be blunt, stupid. Like a small child who is yet to learn things. At least until we teach it.”

“The limitations that we consciously put on our AI programs are called artificial stupidity, and that could very well be the key to preventing AI from taking over”