(Photo by Victor Freitas via Pexels)
By Stephen Beech
A new AI scanner can instantly tell you the nutritional value of your meal from a phone photo.
American researchers are refining the state-of-the-art tech to calculate calorie count, fat content, and nutritional value from images of food taken on a smartphone.
Snap a photo of your meal, and artificial intelligence will instantly tell you its nutritional value, without the need for food diaries or guesswork.
That scenario is now much closer to reality, thanks to an AI system developed by scientists at New York University’s (NYU) Tandon School of Engineering that promises a new tool for the millions of people who want to manage their weight, diabetes and other diet-related health conditions.
The technology uses advanced deep-learning algorithms to recognize food items in images and calculate their nutritional content – including calories, protein, carbohydrates and fat.
For over a decade, NYU’s Fire Research Group, which includes the paper’s lead author Professor Prabodh Panindre and co-author Professor Sunil Kumar, has studied firefighters’ health.
(Photo by ROMAN ODINTSOV via Pexels)
Previous research has shown that more than 70% of firefighters are overweight or obese, so they face increased cardiovascular and other health risks.
The findings directly motivated the development of their AI-powered food-tracking system.
Panindre said: “Traditional methods of tracking food intake rely heavily on self-reporting, which is notoriously unreliable.
“Our system removes human error from the equation.”
Despite the apparent simplicity of the concept, developing reliable food recognition AI has stumped researchers for years.
Previous attempts struggled with three fundamental challenges that the NYU Tandon team appears to have overcome.
Kumar said: “The sheer visual diversity of food is staggering.
“Unlike manufactured objects with standardized appearances, the same dish can look dramatically different based on who prepared it.
(Photo by Ivan Samkov via Pexels)
“A burger from one restaurant bears little resemblance to one from another place, and homemade versions add another layer of complexity.”
The researchers said that earlier systems also faltered when estimating portion sizes – a crucial factor in nutritional calculations.
The NYU team’s advance is their “volumetric computation function” – which uses advanced image processing to measure the exact area each food occupies on a plate.
The system correlates the area occupied by each food item with density and macronutrient data to convert 2D images into nutritional assessments.
The team explained that the integration of volumetric computations with the AI model enables “precise” analysis without manual input, solving a long-standing challenge in automated dietary tracking.
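The paper does not spell out the exact formula, but the idea described above — correlating each food's plate area with density and macronutrient data — can be sketched roughly as follows. The density and per-gram values here are illustrative placeholders, not the NYU team's actual reference data:

```python
# Hypothetical sketch of the area-to-nutrition conversion described above.
# All numbers below are illustrative, not the researchers' real data.

FOOD_DATA = {
    # food: (grams per cm^2 of plate area, kcal/g, protein g/g, carbs g/g, fat g/g)
    "pizza": (1.0, 2.5, 0.1, 0.3, 0.1),
}

def estimate_nutrition(food, area_cm2):
    """Convert a detected food's 2D plate area into nutrition estimates."""
    density, kcal, protein, carbs, fat = FOOD_DATA[food]
    mass = area_cm2 * density  # estimated grams on the plate
    return {
        "calories": round(mass * kcal),
        "protein_g": round(mass * protein, 1),
        "carbs_g": round(mass * carbs, 1),
        "fat_g": round(mass * fat, 1),
    }
```

In this sketch, a 100 cm² slice of "pizza" would map to roughly 250 calories; the real system would draw density and macronutrient figures from a nutrition database rather than hard-coded constants.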
Kumar said the third major hurdle has been computational efficiency.
(Photo by Anna Shvets via Pexels)
Previous models required too much processing power to be practical for real-time use, often necessitating cloud processing that introduced delays and privacy concerns.
The NYU researchers used a powerful image-recognition technology called YOLOv8 with ONNX Runtime – a tool that helps AI programs run more efficiently – to build a food-identification program that runs on a website instead of as a downloadable app.
That allows people to visit the website using their phone’s web browser to analyze meals and track their diet.
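A minimal sketch of what that pipeline might look like: YOLO-family models exported to ONNX typically take a normalized 640×640 NCHW float tensor, which ONNX Runtime can then evaluate. The model filename and input name below are assumptions for illustration:

```python
# Illustrative preprocessing for a YOLOv8-style detector run through
# ONNX Runtime; model path, input name, and size are assumptions.
import numpy as np

def preprocess(image, size=640):
    """Resize (naively, without letterboxing) and normalize an HxWx3
    uint8 image into the NCHW float32 tensor YOLO-family models expect."""
    h, w, _ = image.shape
    # nearest-neighbor resize via pure NumPy index selection
    ys = (np.arange(size) * h // size).clip(0, h - 1)
    xs = (np.arange(size) * w // size).clip(0, w - 1)
    resized = image[ys][:, xs]
    tensor = resized.astype(np.float32) / 255.0   # scale pixels to [0, 1]
    return tensor.transpose(2, 0, 1)[None]        # HWC -> NCHW, add batch dim

# Inference itself would then be (model file not included here):
# import onnxruntime as ort
# sess = ort.InferenceSession("food_yolov8.onnx")
# detections = sess.run(None, {"images": preprocess(photo)})
```

Running the exported model through ONNX Runtime (or its WebAssembly build, in the browser) is what makes a no-install web app practical on a phone.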
When tested on a pizza slice, the system calculated 317 calories, 10 grams of protein, 40 grams of carbohydrates, and 13 grams of fat — nutritional values that closely matched reference standards.
It performed similarly well when analyzing more complex dishes such as idli sambhar, a South Indian specialty featuring steamed rice cakes with lentil stew, for which it calculated 221 calories, seven grams of protein, 46 grams of carbs and just one gram of fat.
Panindre added: “One of our goals was to ensure the system works across diverse cuisines and food presentations.
(Photo by Thirdman via Pexels)
“We wanted it to be as accurate with a hot dog – 280 calories according to our system – as it is with baklava, a Middle Eastern pastry that our system identifies as having 310 calories and 18 grams of fat.”
The research team solved data challenges by combining similar food categories, removing food types with too few examples, and giving extra emphasis to certain foods during training.
Those techniques helped refine their training dataset from countless initial images to a more balanced set of 95,000 instances across 214 food categories.
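The three balancing steps described above — merging similar categories, dropping under-represented ones, and up-weighting certain foods during training — could be sketched like this. The merge map, threshold, and category names are illustrative, not the team's actual choices:

```python
# Hedged sketch of the dataset-balancing steps described above;
# the merge map, threshold, and labels are illustrative assumptions.
from collections import Counter

MERGE = {"cheese_pizza": "pizza", "pepperoni_pizza": "pizza"}  # combine similar categories
MIN_EXAMPLES = 50                                              # drop classes with too few examples

def balance(labels):
    """Merge similar categories, drop rare ones, and compute
    inverse-frequency class weights for training emphasis."""
    merged = [MERGE.get(lbl, lbl) for lbl in labels]
    counts = Counter(merged)
    kept = [lbl for lbl in merged if counts[lbl] >= MIN_EXAMPLES]
    kept_counts = Counter(kept)
    total = len(kept)
    # rarer surviving classes get proportionally larger training weight
    weights = {lbl: total / (len(kept_counts) * n) for lbl, n in kept_counts.items()}
    return kept, weights
```

Inverse-frequency weighting is one common way to give extra emphasis to minority classes; the paper may use a different scheme.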
The AI can now accurately locate and identify food items around 80% of the time, even when they overlap or are partially obscured.
The system has been deployed as a web application that works on mobile devices, making it potentially accessible to anyone with a smartphone.
The NYU team described their current system as a "proof-of-concept" that could soon be refined and expanded for broader healthcare applications.