Food Quantity Detection
IMPORTANT: Before integrating this use case we highly recommend integrating the Regular Image Food Recognition procedure.
What You’ll Build
You’ll implement a depth-based food quantity detection flow where your app:
- Validates device compatibility with the LogMeal Depth SDK.
- Captures a sequence of images or video around the food items, also using the LogMeal Depth SDK.
- Sends the captured sequence to the POST /image/segmentation/complete/quantity endpoint to create an intake and trigger the asynchronous food quantity estimation task.
- Periodically calls GET /intake/{imageId} (using the `imageId` returned by the previous call) and waits until `quantity_prediction_status` becomes `done` to obtain the final food quantity estimation.
This use case enables automatic estimation of food portions and volume from real-world meals, improving dietary tracking, nutritional reporting, and food waste analysis.
Prerequisites
- Plan: Recommend and above (see more info about LogMeal Plans)
- User type: 🔴 APIUser (see more info about User Types)
- Integrated LogMeal Depth SDK (iOS or Android)
- Device with depth-sensing camera (ARCore or LiDAR capable)
- For testing and data validation purposes, you can download the following data acquisition examples before proceeding with SDK integration:
SDK Access
The LogMeal Depth SDK is available for both mobile platforms:
- iOS SDK – LogMeal Depth SDK for iOS: Review its documentation to understand integration, permissions, and required frameworks.
- Android SDK – LogMeal Depth SDK for Android: Review its documentation for setup, camera handling, and ARCore support.
If you are using React Native for your App implementation, you can integrate our ready-to-use LogMeal Depth React Native SDK module, which works for both iOS and Android.
Device Compatibility Check
Not all devices support depth-based quantity estimation. Before starting capture, verify device compatibility using the provided utilities:
- iOS: Use LogMealDepthUtils Compatibility Check to ensure the device meets depth capture requirements.
- Android: Use ArCoreUtils Compatibility Check to confirm ARCore support.
Devices that fail compatibility checks should default to using the Regular Image Food Recognition procedure.
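The compatibility gate above amounts to a routing decision: depth-capable devices use the quantity endpoint, everything else falls back to regular segmentation. A minimal sketch of that decision in Python, for illustration only — the real check runs natively via LogMealDepthUtils (iOS) or ArCoreUtils (Android), and here it is modeled as a boolean already obtained from the platform SDK:

```python
# Illustrative routing sketch. The actual compatibility check is performed
# natively by the LogMeal Depth SDK; this function only models the decision
# your app makes once it knows the result.

DEPTH_ENDPOINT = "/image/segmentation/complete/quantity"  # depth-capable devices
FALLBACK_ENDPOINT = "/image/segmentation/complete"        # regular recognition

def choose_segmentation_endpoint(depth_supported: bool) -> str:
    """Route to the quantity endpoint only when the device passed the
    depth compatibility check; otherwise fall back to regular segmentation."""
    return DEPTH_ENDPOINT if depth_supported else FALLBACK_ENDPOINT
```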
Sequence Diagrams
Main Flow (Depth Image Capture + Asynchronous Quantity Detection)
sequenceDiagram
participant User
participant App
participant LogMeal API
User->>App: Initiate capture session
App->>App: Validate device compatibility (LogMealDepthUtils / ArCoreUtils)
App->>User: Prompt user to move device 180º around food
User->>App: Capture image sequence (Depth SDK)
App->>LogMeal API: POST /image/segmentation/complete/quantity <br> (Authorization + captured data)
LogMeal API-->>App: 200 OK (imageId, segmentation results, initial quantity, quantity_prediction_status=running)
App-->>User: Optionally display items with initial/approx. quantities retrieved from GET /intake/{imageId}
loop Poll quantity until quantity_prediction_status = "done"
App->>LogMeal API: GET /intake/{imageId}
LogMeal API-->>App: 200 OK (updated quantity, quantity_prediction_status=running|done|error)
end
App-->>User: Display final food quantity when status = "done"
Fallback Flow (Non-Compatible Device)
sequenceDiagram
participant User
participant App
participant LogMeal API
App->>App: Check device compatibility (Depth SDK)
App-->>User: Notify device not supported ⚠️
App->>LogMeal API: POST /image/segmentation/complete <br> (Standard segmentation)
LogMeal API-->>App: 200 OK (segmentation results)
App-->>User: Show regular segmentation results
Implementation Guide
Integration and Capture Steps
1. Access the SDK – Download the appropriate SDK (iOS or Android) and carefully review its documentation before integration.
2. Check device compatibility – Ensure the user’s device supports depth estimation. If not, fall back to regular segmentation.
3. Capture image sequence – Instruct users to:
   - Initiate capture manually.
   - Perform a slow 180º movement around the food (at least 150º coverage) while keeping the camera pointed toward it.
   - Wait for the SDK to signal successful capture.

   For data validation purposes, you are advised to compare the data acquired by your SDK integration with the data samples provided in the Prerequisites section above.
4. Send captured data & create the intake – Once capture completes, upload the sequence using the POST /image/segmentation/complete/quantity endpoint. The response will:
   - create the intake and return an `imageId`,
   - return segmentation results with the detected dishes, and
   - include a `quantity_prediction_status` field (typically `running` right after creation).
5. Poll intake status until final quantities are ready –
   - Call GET /intake/{imageId} using the `imageId` obtained in the previous step.
   - Inspect the `quantity_prediction_status` field in the response:
     - `running` – the quantity estimation job is still being refined. You may show the approximate quantities to the user but keep polling.
     - `done` – the asynchronous quantity estimation has finished. The same response now contains the final quantity values for each detected dish. Stop polling at this point.
     - `error` – the quantity estimation failed. Handle this case gracefully in your app (for example, by displaying the approximate quantities and allowing the user or a manager to manually edit quantities using POST /nutrition/confirm/quantity).
   - The `done` status is usually reached in about 10 seconds on average, but this can vary depending on image size, number of dishes, and system load.
   - Respect endpoint rate limits when polling (e.g. avoid polling more than once per second per 🔴 APIUser) and consider adding both a maximum polling duration and a maximum number of attempts.
Related Endpoints
- POST /image/segmentation/complete/quantity → 🔴 Upload captured data for food quantity estimation and create an intake that triggers the asynchronous quantity prediction task.
- GET /intake/{imageId} → ⚫ 🔴 🔵 Retrieve the full intake information (including `quantity_prediction_status` and the approximate/final quantity values) for a given `imageId`.
- POST /nutrition/confirm/quantity → 🔴 🔵 Confirm or manually adjust the final food quantity (serving size) once the automated estimation is complete.
- POST /waste/detection/intake → 🔴 Detect remaining quantities for waste estimation.
- POST /image/segmentation/complete → 🔴 Standard segmentation endpoint for non-depth devices.
Remember to check applicable request limitations inside each of the endpoints.
Common Pitfalls & Tips
- Handle asynchronous quantity estimation correctly: POST /image/segmentation/complete/quantity only starts the quantity estimation. Always poll GET /intake/{imageId} and wait for `quantity_prediction_status = "done"` before treating the quantities as final.
- Always check compatibility: If a device doesn’t support the SDK, fall back to the Regular Image Food Recognition procedure.
- User motion: Ensure users perform full 180º arcs; shorter movements reduce accuracy.
- Lighting: Recommend bright, diffuse lighting to enhance image quality.
- File size: Keep each frame below 1 MB for optimal performance.
- Communication: Clearly guide users during capture; incorrect movement patterns degrade accuracy.
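The file-size tip above can be enforced with a simple pre-upload guard. A minimal sketch, assuming the app holds the captured frames in memory as bytes keyed by filename (the 1 MB limit comes from this guide; how you re-encode oversized frames is app-specific):

```python
MAX_FRAME_BYTES = 1_000_000  # "keep each frame below 1 MB" (this guide)

def oversized_frames(frames: dict, limit: int = MAX_FRAME_BYTES) -> list:
    """Return the names of captured frames exceeding the recommended size,
    so the app can re-encode them (e.g. lower JPEG quality) before upload."""
    return [name for name, data in frames.items() if len(data) > limit]
```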
Optional Enhancements
- Combine results with Nutritional Analysis for energy and macro estimation.
- Add pre- and post-meal captures to enable Waste Estimation workflows.
- Integrate capture tutorials or overlay animations directly in your mobile UI.
Next Steps
- Integrate with the Waste Estimation feature.
- Review Plans & Limits for endpoint access and quotas.