AI Researchers Develop New Way to Reverse Engineer Recipes From Photos

Cinnamon buns dripping with gooey frosting, bubbling pizza laden with colorful toppings, decadent chocolate mousse with whipped cream — a picture may be worth a thousand words, but for aspiring chefs, a food post without a recipe is the ultimate frustration.


Until now — at least in the research lab. Facebook AI researchers have built a system that can analyze a photo of food and then create a recipe from scratch.


Snap a photo of a particular dish and, within seconds, the system can analyze the image and generate a recipe with a list of ingredients and steps needed to create the dish. It can’t look at a photo of a particular pie or pancake and determine the exact type of flour used or the skillet or oven temperature, but the system will come up with a recipe for a very credible (and tasty) approximation.


While the system is purely a research project for now, the task has proved to be an interesting challenge within the broader effort to teach machines to see and understand the world.


The researchers’ “inverse cooking” system uses computer vision, technology that extracts information from digital images and videos to give computers a high-level understanding of the visual world. Those smartphone apps that allow you to identify plant and dog species, or that scan your credit card so that you won’t have to tap in all the numbers? That’s computer vision.


But this is no ordinary computer vision system: It has extra gray matter. It uses not one but two neural networks, algorithms designed to recognize patterns in digital images, whether those patterns are fern fronds, long muzzles or embossed characters. Michal Drozdzal, a research scientist at Facebook AI Research, explains that the inverse cooking system splits the image-to-recipe problem into two parts: one neural network identifies the ingredients it sees in the dish, while the other devises a recipe from that list.
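The two-stage split described above can be sketched in a few lines of code. This is a toy illustration, not the actual system: the real pipeline uses trained neural networks, while here each stage is a simple stand-in so the data flow (image scores → ingredient set → recipe text) is visible. The vocabulary, threshold, and function names are all illustrative assumptions.

```python
# Toy sketch of the two-stage "inverse cooking" pipeline.
# Stage 1 stands in for the vision network that detects ingredients;
# stage 2 stands in for the network that writes a recipe from them.
# Everything here (vocabulary, scores, threshold) is hypothetical.

INGREDIENT_VOCAB = ["flour", "sugar", "butter", "cinnamon", "tomato", "cheese"]

def predict_ingredients(image_scores, threshold=0.5):
    """Stage 1 stand-in: turn per-ingredient confidence scores
    (what the first network would output) into an ingredient set."""
    return {ing for ing, s in zip(INGREDIENT_VOCAB, image_scores) if s >= threshold}

def generate_recipe(ingredients):
    """Stage 2 stand-in: the second network would decode instruction
    text conditioned on the predicted ingredients; here we template it."""
    steps = ["Combine " + ", ".join(sorted(ingredients)) + "."]
    steps.append("Cook until done and serve.")
    return steps

# Usage: scores as if produced by the vision stage for a cinnamon-bun photo.
scores = [0.9, 0.8, 0.7, 0.95, 0.1, 0.05]
ingredients = predict_ingredients(scores)
recipe = generate_recipe(ingredients)
```

The design point the sketch captures is the decoupling: because the recipe generator sees only the ingredient list, each half of the problem can be modeled (and trained) separately.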