Food Quantity Detection
IMPORTANT: Before integrating this use case, we highly recommend first integrating the Regular Image Food Recognition procedure.
What You’ll Build
You’ll implement a depth-based food quantity detection flow where your app captures a set of images or videos around food items using the LogMeal Depth SDK, validates device compatibility, and submits the captured data to the Food Quantity Detection API.
This use case enables automatic estimation of food portions and volume from real-world meals, improving dietary tracking, nutritional reporting, and food waste analysis.
Prerequisites
- Plan: Recommended and above (see more info about LogMeal Plans)
- User type: 🔴 APIUser (see more info about User Types)
- Integrated LogMeal Depth SDK (iOS or Android)
- Device with depth-sensing camera (ARCore or LiDAR capable)
SDK Access
The LogMeal Depth SDK is available for both mobile platforms:
- iOS SDK – LogMeal Depth SDK for iOS: Review its documentation to understand integration, permissions, and required frameworks.
- Android SDK – LogMeal Depth SDK for Android: Review its documentation for setup, camera handling, and ARCore support.
If you are using React Native for your App implementation, you can integrate our ready-to-use LogMeal Depth React Native SDK module, which works for both iOS and Android.
Device Compatibility Check
Not all devices support depth-based quantity estimation. Before starting capture, verify device compatibility using the provided utilities:
- iOS: Use LogMealDepthUtils Compatibility Check to ensure the device meets depth capture requirements.
- Android: Use ArCoreUtils Compatibility Check to confirm ARCore support.
Devices that fail compatibility checks should default to using the Regular Image Food Recognition procedure.
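For illustration, here is a minimal TypeScript sketch of that routing decision, assuming a hypothetical `checkDepthCompatibility()` wrapper around the platform utilities (LogMealDepthUtils on iOS, ArCoreUtils on Android). Use the real method names from the Depth SDK documentation in production.

```typescript
// Minimal sketch of the capture-mode routing decision.
// checkDepthCompatibility() is a hypothetical wrapper around the platform
// checks (LogMealDepthUtils on iOS, ArCoreUtils on Android).

type CaptureMode = 'depth-quantity' | 'regular-segmentation';

// Hypothetical native-module call: resolves true when the device has a
// depth-capable camera (LiDAR / ARCore depth support).
declare function checkDepthCompatibility(): Promise<boolean>;

export async function selectCaptureMode(): Promise<CaptureMode> {
  try {
    const supported = await checkDepthCompatibility();
    return supported ? 'depth-quantity' : 'regular-segmentation';
  } catch {
    // If the check itself fails, default to the regular flow.
    return 'regular-segmentation';
  }
}
```

Routing the decision through a single helper keeps the fallback path (see the fallback flow below) easy to trigger from one place.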
Sequence Diagrams
Main Flow (Depth Image Capture + Quantity Detection)
```mermaid
sequenceDiagram
    participant User
    participant App
    participant LogMeal API
    User->>App: Initiate capture session
    App->>App: Validate device compatibility (LogMealDepthUtils / ArCoreUtils)
    App->>User: Prompt user to move device 180° around food
    User->>App: Capture image sequence (Depth SDK)
    App->>LogMeal API: POST /image/segmentation/complete/quantity <br> (Authorization + captured data)
    LogMeal API-->>App: 200 OK (imageId, segmentation results, quantity data)
    App-->>User: Display items and estimated food quantity
```
Fallback Flow (Non-Compatible Device)
```mermaid
sequenceDiagram
    participant User
    participant App
    participant LogMeal API
    App->>App: Check device compatibility (Depth SDK)
    App-->>User: Notify device not supported ⚠️
    App->>LogMeal API: POST /image/segmentation/complete <br> (Standard segmentation)
    LogMeal API-->>App: 200 OK (segmentation results)
    App-->>User: Show regular segmentation results
```
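As a rough illustration of the fallback path, the sketch below posts a single photo to the standard segmentation endpoint using `fetch` and `FormData`. The base URL, the `image` field name, and the Bearer-style Authorization header are assumptions; confirm them in the endpoint reference before use.

```typescript
// Illustrative fallback upload: one photo to the standard segmentation
// endpoint. Base URL, 'image' field name, and Bearer auth are assumptions.

const API_BASE = 'https://api.logmeal.example'; // placeholder base URL

export async function uploadRegularSegmentation(
  apiUserToken: string,
  imageUri: string,
): Promise<unknown> {
  const body = new FormData();
  // React Native's FormData accepts { uri, name, type } descriptors for files.
  body.append('image', { uri: imageUri, name: 'meal.jpg', type: 'image/jpeg' } as any);

  const response = await fetch(`${API_BASE}/image/segmentation/complete`, {
    method: 'POST',
    headers: { Authorization: `Bearer ${apiUserToken}` },
    body,
  });
  if (!response.ok) {
    throw new Error(`Segmentation request failed: ${response.status}`);
  }
  return response.json(); // segmentation results for display
}
```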
Implementation Guide
Integration and Capture Steps
1. Access the SDK – Download the appropriate SDK (iOS or Android) and carefully review its documentation before integration.
2. Check device compatibility – Ensure the user's device supports depth estimation. If not, fall back to regular segmentation.
3. Capture image sequence – Instruct users to:
   - Initiate capture manually.
   - Perform a slow 180° movement around the food (at least 150° coverage) while keeping the camera pointed toward it.
   - Wait for the SDK to signal successful capture.
4. Send captured data – Once capture completes, upload the sequence using one of the supported API endpoints (a hedged example follows below).
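A hedged sketch of the upload in step 4 is shown below. The `images` field name, the base URL, and the exact payload produced by the Depth SDK are assumptions; the endpoint reference and SDK documentation define the real request format.

```typescript
// Hedged sketch of uploading a captured frame sequence to the quantity
// endpoint. Field names, base URL, and payload shape are assumptions.

const API_BASE = 'https://api.logmeal.example'; // placeholder, as in the fallback sketch

export async function uploadQuantityCapture(
  apiUserToken: string,
  frameUris: string[], // file URIs produced by the Depth SDK capture session
): Promise<unknown> {
  const body = new FormData();
  frameUris.forEach((uri, index) => {
    body.append('images', { uri, name: `frame_${index}.jpg`, type: 'image/jpeg' } as any);
  });

  const response = await fetch(`${API_BASE}/image/segmentation/complete/quantity`, {
    method: 'POST',
    headers: { Authorization: `Bearer ${apiUserToken}` },
    body,
  });
  if (!response.ok) {
    throw new Error(`Quantity request failed: ${response.status}`);
  }
  return response.json(); // imageId, segmentation results, quantity data
}
```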
Related Endpoints
- POST /image/segmentation/complete/quantity → 🔴 Upload captured data for food quantity estimation.
- POST /waste/detection/intake → 🔴 Detect remaining quantities for waste estimation.
- POST /image/segmentation/complete → 🔴 Standard segmentation endpoint for non-depth devices.
Remember to check the applicable request limitations documented for each endpoint.
Common Pitfalls & Tips
- Always check compatibility: If a device doesn't support the Depth SDK, fall back to regular segmentation.
- User motion: Ensure users perform a full 180° arc; shorter movements reduce accuracy.
- Lighting: Recommend bright, diffuse lighting to enhance image quality.
- File size: Keep each frame below 1 MB for optimal performance (see the size-check sketch after this list).
- Communication: Clearly guide users during capture; incorrect movement patterns degrade accuracy.
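As referenced in the file-size tip, here is a small sketch for flagging oversized frames before upload; `getFileSize` is a hypothetical helper you would back with whatever file-system module your app already uses.

```typescript
// Small guard for the 1 MB-per-frame tip: flag oversized frames before
// upload so they can be re-compressed. getFileSize is a hypothetical helper.

const MAX_FRAME_BYTES = 1 * 1024 * 1024; // 1 MB

declare function getFileSize(uri: string): Promise<number>; // hypothetical helper

export async function findOversizedFrames(frameUris: string[]): Promise<string[]> {
  const oversized: string[] = [];
  for (const uri of frameUris) {
    if ((await getFileSize(uri)) > MAX_FRAME_BYTES) {
      oversized.push(uri);
    }
  }
  return oversized;
}
```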
Optional Enhancements
- Combine results with Nutritional Analysis for energy and macro estimation.
- Add pre- and post-meal captures to enable Waste Estimation workflows (see the sketch after this list).
- Integrate capture tutorials or overlay animations directly in your mobile UI.
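If you explore the waste workflow, the sketch below outlines one possible pre/post-meal call. How the pre-meal capture is referenced (here an illustrative `imageId` query parameter) and the `image` field name are assumptions; the POST /waste/detection/intake reference defines the actual contract.

```typescript
// Speculative sketch of a pre/post-meal waste estimation call. The way the
// pre-meal capture is referenced and the 'image' field name are assumptions.

const API_BASE = 'https://api.logmeal.example'; // placeholder base URL

export async function estimateRemainingQuantity(
  apiUserToken: string,
  preMealImageId: number,   // imageId returned by the pre-meal quantity call
  postMealImageUri: string, // capture taken after the meal
): Promise<unknown> {
  const body = new FormData();
  body.append('image', { uri: postMealImageUri, name: 'after.jpg', type: 'image/jpeg' } as any);

  const response = await fetch(
    `${API_BASE}/waste/detection/intake?imageId=${preMealImageId}`, // parameter name is illustrative
    {
      method: 'POST',
      headers: { Authorization: `Bearer ${apiUserToken}` },
      body,
    },
  );
  if (!response.ok) {
    throw new Error(`Waste detection request failed: ${response.status}`);
  }
  return response.json(); // remaining-quantity / waste estimation results
}
```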
Next Steps
- Integrate with the Waste Estimation feature.
- Review Plans & Limits for endpoint access and quotas.