AI Dish Recognition & Pricing System: Build an Automated Food Checkout Using Mind+ V2
Who's still annoyed by messed-up orders or unclear pricing when paying at restaurants? Who wouldn't want to quickly see the price of every dish when grabbing food at the cafeteria or a snack street, so you can spend with total clarity?
Today, the AI dish-recognizing pricing device is here to break the "hassle and confusion" of traditional pricing. It's using tech to shake up the whole "eating and paying" routine, and it's pretty cool!
It's like a "sharp-eyed little pricing butler." No need for manual checks or scanning each item one by one. Just plop your tray down, lay out the dishes, and let the AI camera take a quick scan. Boom! It instantly recognizes each dish's name and portion size, and automatically calculates the exact total. Whether it's complicated home-cooked meals, fancy snack platters, or self-serve dishes with varying portions, it handles it in a flash. No more awkward "overcharged by a penny" moments, and no more hassle of "checking the list one by one."

After seeing the amazing performance of this AI Dish Quick-Calc Genie, are you itching to build one yourself? Its core secret is the AI model that lets the device recognize every dish and price it correctly, and you don't have to worry about key steps like training the model or exporting the algorithm, because the all-around tool Mind+ V2 has you covered! It's like an "AI workshop" tailor-made for beginners: it turns complex processes like model training and algorithm export into something as easy as building with blocks. No complicated coding needed; just follow the steps and you'll get it done. Next, we'll start with how to use Mind+ V2, break down the build process step by step, and teach you how to train your own exclusive dish recognition model to build this practical and fun pricing device~
Based on the dish recognition function of this pricing device, you can even DIY your own projects. For example, add a custom health-needs feature. If you're in a fat-loss phase, it recommends "more protein, fewer carbs": when it recognizes stir-fried greens as a low-protein dish, it suggests high-protein options like shrimp-paste eggs. If you're controlling your sugar, it reminds you that "pairing with whole grains stabilizes blood sugar better": when it recognizes that stir-fried pork liver isn't a whole grain, it skips recommending it, helping you balance your nutrition~
What even is Mind+ V2?
Mind+ V2 is a visual model training tool made for AI newbies and creators. It simplifies the complicated AI training process into just 3 easy steps:
1. Prep your dataset: take pics with a camera or import local images, then label them super quickly.
2. Train the model: hit a button to start training and watch the accuracy and loss curves in real time.
3. Export & deploy: once the model's ready, drop it straight onto hardware like HuskyLens 2 or the UNIHIKER board, and it's good to go right away.
It supports way more model types too. Besides common ones like object detection, image classification, and gesture classification, it also handles pose classification, instance segmentation, voice classification, and text classification, covering all sorts of AI scenarios. No need to mess with tricky setup stuff; even newbies can jump into AI creation in no time.

Hardware Preparation
- Lattepanda 3 Delta
- USB camera
- 15.6-inch touchscreen
Software Preparation
- Mind+ V2

Step 1. Go to the Object Detection module in Mind+ V2
Download and install the latest version of Mind+ V2 for your operating system. Fire up Mind+ V2, pick "New Project" from the menu bar, then click "Model Training". Find "Object Detection" in the training options and hit it; that's it, the project is created!

Click "Advanced Mode" in the top right corner of the interface to switch modes. Once you've switched, the menu bar gains these new function modules: Data Settings, Annotation Settings, Model Training, and Model Validation.

Step 2. Data Settings
Switch to "Data Settings" → click "Create Dataset" in the top left corner. For example, create a dataset named "Dish Recognition". After switching to Advanced Mode, a default "Experience" dataset also appears in the dataset list; it's generated by the Quick Experience mode. You can perform the following operations on the newly created dataset: annotate, copy, import data, export, and delete.

Next, click "Import Data" for your new "Dish Recognition" dataset. The system supports two import types: unlabeled data and labeled data. For this project, we'll use labeled data.

Import Method 1: Unlabeled Data
- This works when you just want to upload raw images, like pics of scrambled eggs with tomatoes, mixed rice, that sort of thing. Make sure you have at least 20 pics for each category, though.
- Steps: pick "Unlabeled Data" as the import type → click "Click to Upload" → choose pics from your computer → hit "Confirm" to finish importing.

Import Method 2: Labeled Data
- Just upload your pre-labeled data in YOLO format (a .zip file).
- Arrange the folder structure the way the platform tells you to. Once it's uploaded, there's no need to label anything manually; you can jump straight to model training.
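For reference, YOLO format stores one .txt file per image, where each line describes one box as a class index followed by the box's center and size, all normalized to the 0–1 range by the image dimensions. Here's a minimal Python sketch of reading such a line (the class list is made up for illustration; your dataset's labels will differ):

```python
# Parse one line of a YOLO-format annotation file.
# Each line reads: "<class_id> <x_center> <y_center> <width> <height>",
# with all four coordinates normalized by the image width/height.
def parse_yolo_line(line, class_names):
    parts = line.split()
    class_id = int(parts[0])
    x_center, y_center, width, height = map(float, parts[1:])
    return {
        "label": class_names[class_id],
        "x_center": x_center,
        "y_center": y_center,
        "width": width,
        "height": height,
    }

# Hypothetical dish classes -- yours come from your own labels.
CLASSES = ["tomato_egg", "rice", "greens"]

box = parse_yolo_line("0 0.5 0.4 0.3 0.2", CLASSES)
print(box["label"])  # tomato_egg
```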

Heads up: The category names for labeled data need to be in English. Otherwise, the labels might get all garbled after uploading.
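If you want to catch non-English names before uploading, a quick pure-Python check does the trick (the sample labels here are just for illustration):

```python
# Flag label names that contain non-ASCII characters, which may come
# out garbled after uploading.
def find_bad_labels(labels):
    return [name for name in labels if not name.isascii()]

print(find_bad_labels(["rice", "tomato_egg"]))  # []
print(find_bad_labels(["rice", "番茄炒蛋"]))      # ['番茄炒蛋']
```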
Step 3. Data Labeling
Once you've successfully imported unlabeled dish images, the labeling progress bar will show the number of imported images and their corresponding labeled count. If it says "data exists but unlabeled" (like 0/1), you'll need to label the data manually.
In the "Actions" column, click "Label" to enter the dish recognition labeling setup screen. Follow the on-screen prompts to create labelsāthese will be used to mark different dish categories.
Next, start labeling the dataset. Hereās how: first click the corresponding label name, then use your mouse to click one corner of the target, drag diagonally to the opposite corner, and click once more to form a rectangle (make sure it fully frames the target).
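Under the hood, those two clicked corners fully determine the box: the tool converts them into the normalized center/size values that YOLO-format labels use. A sketch of that conversion (the function name is mine, not Mind+ V2's):

```python
# Turn two clicked corners (pixel coordinates) into normalized
# YOLO center/size values for an image of the given dimensions.
def corners_to_yolo(x1, y1, x2, y2, img_w, img_h):
    left, right = min(x1, x2), max(x1, x2)
    top, bottom = min(y1, y2), max(y1, y2)
    x_center = (left + right) / 2 / img_w
    y_center = (top + bottom) / 2 / img_h
    width = (right - left) / img_w
    height = (bottom - top) / img_h
    return x_center, y_center, width, height

# A box dragged from (100, 50) to (300, 250) in a 640x480 image.
print(corners_to_yolo(100, 50, 300, 250, 640, 480))
```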



Quick Tips for Labeling Dataset Samples:
1. When labeling, you gotta go through every single image in the dataset and finish labeling 'em one by one.
2. If there are multiple dish targets in one pic, make sure to label each and every one of 'em separately.
Step 4. Model Training & Export
Once you've labeled all the images, click the top right corner to switch over to the "Model Training" module.

Click "Create Training Task" and set things up in the pop-up window. Once the training task is created, hit "Parameter Settings" to open the training parameter config screen. You can tweak the parameters if you need to, or just use the default settings to start training right away.


Hit "Train" to start training the model! Once it's done, you can delete the trained model, export it, or check the training results right from the Actions column.


Step 5. Model Check
Switch over to the "Model Validation" module. Pick your training project (the Dish Recognition model), then choose the model: best.pt. You can tweak the other settings if you want, or just stick with the defaults; either works.
There are two ways to check the model: real-time camera test and single image test.
- Real-time camera test: The camera recognizes dishes on the fly, draws colored rectangles around them, and shows the category and confidence level.
- Single image test: Upload one picture to check if the model recognizes it right.
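In both modes, each detection comes back with a confidence score, and boxes below a threshold are typically dropped before anything is drawn. A stdlib-only sketch of that filtering step (the detection list and the 0.5 threshold are illustrative; the actual inference happens inside Mind+ V2):

```python
# Keep only detections whose confidence clears the threshold.
def filter_detections(detections, threshold=0.5):
    return [d for d in detections if d["confidence"] >= threshold]

# Hypothetical raw model output for one frame.
raw = [
    {"label": "tomato_egg", "confidence": 0.92},
    {"label": "rice", "confidence": 0.81},
    {"label": "greens", "confidence": 0.23},  # likely a false positive
]

for d in filter_detections(raw):
    print(d["label"], d["confidence"])
```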

Dish Pricing Device
Hardware Connection Demo
Back side: Lattepanda 3 Delta acts as the main unit, connected to the USB camera and 15.6-inch touchscreen.

Front side: The USB camera is placed on top, and the 15.6" touchscreen is for controlling operations.

Step 1. Import the Model
Unzip the model you exported from Mind+ V2, and you'll get the best.onnx file.

Unzip the "Dish Pricing Device.zip" from the accessories, open the "Dish Pricing Device.mp" file, and drop the best.onnx model into the project folder.

Step 2. Run.py
Plug a USB plug-and-play camera into the USB port of your Lattepanda 3 Delta. Run the Run.py file; once it's up and running, it'll serve the app at http://127.0.0.1:5000. Open a browser on your computer, head to that address, and you'll see the whole dish pricing system!


Step 3. Web Interface Operation
Put the dish tray in the camera's recognition area and click "Start Detection"; you'll see the price of each dish on the screen. Then tap the payment QR code to finish paying.
Ā
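Behind that price display, Run.py looks up each recognized label in a price table (the DISH_INFO section the FAQ below points to) and sums the results. A minimal sketch of that lookup-and-total logic; the dish names, prices, and the price_detections helper are invented for illustration, so match them to your own model's labels:

```python
# Hypothetical price table keyed by model label names; the real one
# lives in the DISH_INFO section at the top of Run.py.
DISH_INFO = {
    "tomato_egg": {"name": "Scrambled Eggs with Tomatoes", "price": 3.50},
    "rice": {"name": "Steamed Rice", "price": 1.00},
    "greens": {"name": "Stir-Fried Greens", "price": 2.00},
}

def price_detections(labels):
    """Turn recognized labels into (name, price) line items plus a total."""
    items = [(DISH_INFO[label]["name"], DISH_INFO[label]["price"])
             for label in labels]
    total = sum(price for _, price in items)
    return items, total

items, total = price_detections(["tomato_egg", "rice"])
print(total)  # 4.5
```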

Attachments
Google drive: https://drive.google.com/file/d/1EpLM5aOdI5y4EGcEbOWYHBe6QhAeSYS9/view?usp=sharing
Content of Attachments: Dish Pricing Device.mp
Content of "Dish Pricing Device.mp": templates/index.html, Mind+.png, Run.py, and best.onnx

FAQ
1. Q: Where do I tweak the HTML page for the Dish Pricing Device?
A: Just make changes in templates/index.html.

2. Q: Where do I change the dish details on the Dish Pricing Device's HTML page?
A: Just edit the DISH_INFO section at the start of Run.py; that's where the dish info lives!










