AI Dish Recognition & Pricing System: Build an Automated Food Checkout Using Mind+ V2

Who’s still annoyed by messed-up orders or unclear pricing when paying at restaurants? Who wouldn’t want to quickly see the price of every dish when grabbing food at the cafeteria or snack street—so you can spend with total clarity?


Today, the AI dish-recognizing pricing device is here to break the “hassle and confusion” of traditional pricing. It’s using tech to shake up the whole “eating and paying” routine, and it’s pretty cool!


It’s like a “sharp-eyed little pricing butler.” No need for manual checks or scanning each item one by one. Just plop your tray down, lay out the dishes, and let the AI camera take a quick scan. Boom—it instantly recognizes each dish’s name and portion size, and automatically calculates the exact total. Whether it’s complicated home-cooked meals, fancy snack platters, or self-serve dishes with varying portions, it handles it in a flash. No more awkward “overcharged by a penny” moments, and no more hassle of “checking the list one by one.”


After seeing the amazing performance of this AI Dish Quick-Calc Genie, are you itching to build one yourself? Its core secret lies in the AI model that “lets the device recognize every dish and calculate prices correctly”—and you don’t have to worry at all about key steps like training the model or exporting the algorithm, because we have the all-around tool Mind+ V2 to help! It’s like an “AI workshop” tailor-made for beginners: it turns complex processes like model training and algorithm export into something as easy as building with blocks—no complicated coding needed, just follow the steps and you’ll get it done. Next, we’ll start with how to use Mind+ V2, break down the production process step by step, and teach you how to train your own exclusive dish recognition model to build this practical and fun pricing device~


Based on the dish recognition function of this pricing device, you can even DIY your own projects. For example, add a custom health-needs feature: if you’re in a fat-loss phase, it recommends “eating more protein and fewer carbs”—when it recognizes stir-fried greens as a low-protein dish, it suggests high-protein options like shrimp paste eggs; if you’re controlling sugar, it reminds you “pairing with whole grains stabilizes blood sugar better”—since stir-fried pork liver isn’t a whole grain, it won’t recommend it, helping you balance your nutrition~
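The health-needs idea above boils down to a simple lookup: map each recognized dish, under a given dietary goal, to a suggestion. Here’s a minimal sketch in Python—the dish labels, goals, and tip texts are all illustrative placeholders, not part of the actual project code:

```python
# Hypothetical sketch of the DIY health-needs feature described above.
# Dish labels, goals, and tips are made up; adapt them to your own model's labels.

HEALTH_TIPS = {
    "fat_loss": {
        "stir_fried_greens": "Low in protein - pair it with a high-protein dish like shrimp paste eggs.",
        "shrimp_paste_eggs": "Good high-protein choice for a fat-loss phase.",
    },
    "sugar_control": {
        "stir_fried_pork_liver": "Not a whole grain - pair it with whole grains to stabilize blood sugar.",
    },
}

def suggest(dish: str, goal: str) -> str:
    """Return a health tip for a recognized dish under a given dietary goal."""
    return HEALTH_TIPS.get(goal, {}).get(dish, "No special advice for this dish.")

print(suggest("stir_fried_greens", "fat_loss"))
```

Once your model reports a dish label, a call like `suggest(label, user_goal)` is all it takes to surface a tip on the screen.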


What even is Mind+ V2?

Mind+ V2 is a visual model training tool made for AI newbies and creators. It simplifies the complicated AI training process into just 3 easy steps:
1. Prep your dataset: take pics with a camera or import local images, then label them super quickly.
2. Train the model: hit a button to start training, and watch the accuracy and loss curves in real time.
3. Export & deploy: once the model’s ready, you can drop it straight onto hardware like HuskyLens 2 or UNIHIKER Board, and it’s good to go right away.


It supports way more model types too. Besides common ones like object detection, image classification, and gesture classification, it also handles pose classification, instance segmentation, voice classification, text classification — all sorts of AI scenarios. No need to mess with tricky setup stuff; even newbies can jump into AI creation in no time.


Hardware Preparation

HARDWARE LIST
1 0.3 MegaPixels USB Camera for Raspberry Pi / NVIDIA Jetson Nano / UNIHIKER M10
1 LattePanda 3 Delta 864 - Pocket-sized Windows / Linux Single Board Computer (8GB RAM/64GB eMMC)
1 15.6 Inch 1920x1080 IPS Type-C Touch Screen & Display for Raspberry Pi / LattePanda / Jetson Nano Single Board Computer


Software Preparation

- Mind+ V2


Step 1. Go to the Object Detection module in Mind+ V2

Download and install the latest version of Mind+ V2 based on your operating system. Fire up Mind+ V2, pick “New Project” from the menu bar, then click “Model Training”. Find “Object Detection” in the training options and hit it—that’s how you finish creating the project!


Click "Advanced Mode" in the top right corner of the interface to switch modes. Once you’ve switched successfully, new function modules will appear in the menu bar: Data Settings, Annotation Settings, Model Training, and Model Validation.


Step 2. Data Settings

Switch to "Data Settings" → click "Create Dataset" in the top left corner. For example, create a dataset named "Dish Recognition". After switching to Advanced Mode, a default "Experience" dataset will appear in the dataset list—it’s generated by the Quick Experience mode. You can perform the following operations on the newly created dataset: annotate, copy, import data, export, and delete.


Go ahead and do the "Import Data" thing for your new "Dish Recognition" dataset. The system lets you import in two ways: labeled data and unlabeled data. For this project, we’ll import unlabeled images and label them ourselves in the next step.


Import Method 1: Unlabeled Data

- This works when you just want to upload raw images—like pics of scrambled eggs with tomatoes, rice mixed together, that sort of thing. Make sure you have at least 20 pics for each category, though.
- Steps to do it: Pick "Unlabeled Data" as the import type → click "Click to Upload" → choose pics from your computer → hit "Confirm" to finish importing.


Import Method 2: Labeled Data

- Just upload your pre-labeled data in YOLO format (it’s a .zip file).
- Arrange the folder structure like the platform tells you to. Once you upload it, no need to label anything manually—you can jump straight to model training.
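For the folder layout, follow the platform’s own prompt. The label files themselves use the standard YOLO convention: one `.txt` per image, one line per object, each line holding a class index plus four box values normalized to the image size. Here’s a small sketch that checks a label line—the helper function is ours for illustration, not part of Mind+:

```python
# Sketch of the standard YOLO label-line format, plus a small validity check.
# Each image gets a .txt file with one line per object:
#   <class_id> <x_center> <y_center> <width> <height>
# where the four values are normalized to 0..1 relative to the image size.

def is_valid_yolo_line(line: str) -> bool:
    """Check one label line: integer class id followed by four floats in [0, 1]."""
    parts = line.split()
    if len(parts) != 5:
        return False
    if not parts[0].isdigit():
        return False
    try:
        values = [float(p) for p in parts[1:]]
    except ValueError:
        return False
    return all(0.0 <= v <= 1.0 for v in values)

print(is_valid_yolo_line("0 0.5 0.5 0.4 0.3"))   # a centered box: valid
print(is_valid_yolo_line("0 1.5 0.5 0.4 0.3"))   # x_center out of range: invalid
```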


Heads up: The category names for labeled data need to be in English. Otherwise, the labels might get all garbled after uploading.


Step 3. Data Labeling

Once you’ve successfully imported unlabeled dish images, the labeling progress bar will show the number of imported images and their corresponding labeled count. If it says "data exists but unlabeled" (like 0/1), you’ll need to label the data manually.
In the "Actions" column, click "Label" to enter the dish recognition labeling setup screen. Follow the on-screen prompts to create labels—these will be used to mark different dish categories.
Next, start labeling the dataset. Here’s how: first click the corresponding label name, then use your mouse to click one corner of the target, drag diagonally to the opposite corner, and click once more to form a rectangle (make sure it fully frames the target).
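Under the hood, the two diagonal corner clicks you make define a rectangle that gets stored as a normalized YOLO box. A quick sketch of that mapping (the pixel coordinates and image size below are just example numbers):

```python
# Sketch: how two opposite corner clicks (in pixels) map to a normalized
# YOLO bounding box (x_center, y_center, width, height).

def corners_to_yolo(x1, y1, x2, y2, img_w, img_h):
    """Convert two opposite corners (pixels) to normalized (cx, cy, w, h)."""
    left, right = sorted((x1, x2))
    top, bottom = sorted((y1, y2))
    cx = (left + right) / 2 / img_w
    cy = (top + bottom) / 2 / img_h
    w = (right - left) / img_w
    h = (bottom - top) / img_h
    return cx, cy, w, h

# A 200x100 px box with its top-left corner at (100, 100) in a 640x480 image.
print(corners_to_yolo(100, 100, 300, 200, 640, 480))
```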


Quick Tips for Labeling Dataset Samples:

1. When labeling, you gotta go through every single image in the dataset and finish labeling ’em one by one.
2. If there are multiple dish targets in one pic, make sure to label each and every one of ’em separately.


Step 4. Model Training & Export

Once you’ve labeled all the images, click the top right corner to switch over to the "Model Training" module.


- Click "Create Training Task" and set things up in the pop-up window. Once the model training task is created, hit "Parameter Settings" to get to the training parameter config screen. You can tweak the parameters if you need to, or just use the default settings to start training right away.


Hit "Train" to start training the model! Once it’s done, you can do stuff like delete the trained model, export it, or check the training results right from the Actions column.


Step 5. Model Check

Switch over to the "Model Validation" module. Pick your training project: the Dish Recognition Model. Then choose the model: best.pt. You can tweak the other settings if you want, or just stick with the defaults—either works.


There are two ways to check the model: real-time camera test and single image test.
- Real-time camera test: The camera will recognize dishes as they go, draw colored rectangles around them, and show the category and confidence level.
- Single image test: Upload one picture to check if the model recognizes it right.
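Each detection the model draws comes with a confidence level, and a simple way to think about validation is as filtering raw detections against a threshold. A sketch of that idea—the detection tuples here are made-up sample data, not real model output:

```python
# Hypothetical post-processing sketch: keep only detections whose confidence
# clears a threshold. The (label, confidence) pairs are illustrative samples.

def filter_detections(detections, threshold=0.5):
    """detections: list of (label, confidence) pairs; keep confident ones."""
    return [(label, conf) for label, conf in detections if conf >= threshold]

sample = [("rice", 0.93), ("stir_fried_greens", 0.41), ("shrimp_paste_eggs", 0.78)]
print(filter_detections(sample))  # the 0.41 detection is dropped
```

If the camera test shows too many false boxes, raising the threshold in your own post-processing is the first knob to try.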


Dish Pricing Device

Hardware Connection Demo

Back side: Lattepanda 3 Delta acts as the main unit, connected to the USB camera and 15.6-inch touchscreen.


Front side: The USB camera is placed on top, and the 15.6" touchscreen is for controlling operations.


Step 1. Import the Model

Unzip the model you exported from Mind+ V2, and you’ll get the best.onnx file.


Unzip the "Dish Pricing Device.zip" from the accessories, open the "Dish Pricing Device.mp" file, and drop the best.onnx model into the project folder.


Step 2. Run.py

Plug a USB plug-and-play camera into the USB port of your Lattepanda 3 Delta. Run the Run.py file—once it’s up and running, it’ll serve the app at http://127.0.0.1:5000. Open a browser on your computer, head to that web address, and you’ll see the whole dish pricing system!


Step 3. Web Interface Operation

Put the dish tray in the camera's recognition area, click "Start Detection" and you’ll see the price of each dish on the screen. Then tap the payment QR code to finish paying.


Attachments

Google drive: https://drive.google.com/file/d/1EpLM5aOdI5y4EGcEbOWYHBe6QhAeSYS9/view?usp=sharing


Content of Attachments: Dish Pricing Device.mp
Content of "Dish Pricing Device.mp": templates/index.html, Mind+.png, Run.py, and best.onnx


FAQ

1. Q: Where do I tweak the HTML page for the Dish Pricing Device?
A: Just make changes in templates/index.html.


2. Q: Where do I change the dish details on the Dish Pricing Device’s HTML page?
A: Just edit the DISH_INFO section at the start of Run.py—that’s where the dish info lives!
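The exact structure of DISH_INFO in the attached Run.py isn’t reproduced here, but a plausible shape—assumed for illustration only—maps each model label to a display name and unit price, which also makes totaling the tray trivial:

```python
# Illustrative sketch only: the real DISH_INFO in Run.py may be shaped differently.
# Assumed shape here: model label -> (display name, unit price).

DISH_INFO = {
    "rice": ("Steamed Rice", 1.0),
    "stir_fried_greens": ("Stir-fried Greens", 2.5),
    "shrimp_paste_eggs": ("Shrimp Paste Eggs", 4.0),
}

def total_price(recognized_labels):
    """Sum the unit price of every recognized dish on the tray."""
    return sum(DISH_INFO[label][1] for label in recognized_labels if label in DISH_INFO)

print(total_price(["rice", "shrimp_paste_eggs"]))  # 5.0
```

Editing the dish names or prices shown on the page then amounts to editing the entries of this one dictionary.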

License
All Rights Reserved