The pancake identifier was placed at the end of the track to detect when the user picked up their order
The table was designed to provide a track for the Create 3 to travel around in a loop without bumping into the other stations.
The camera attachment used Teachable Machine image recognition to identify when the pancake was picked up by a user.
This project aimed to work with the other groups in the course on an overall autonomous pancake cafe. The overall cafe was split into several subgroups: Order, Batter, Cooking, Toppings, Plating, and Delivery. For Delivery, the main objectives were: fabricate a table to house the other subgroups, create a track for the Create 3 to drive on to pass through each station, and create a station that determines when the customer picks up their pancake to reset the system.
The table was built by placing a 4x8 plywood sheet over an existing table, with legs on three of the corners. This ensured the entire table was level, with the main support located where the other subgroups were building their stations. The plywood was sanded down to further level the track for the Create 3. A path was mapped out that offered 6 inches of clearance between two Create 3s to ensure spacing. Based on feedback from the Plating team, the corners of the track were softened to allow for easier turning.
For the pancake identifier, images were taken of the Create 3 with just the plate, and with pancakes topped in various ways. These images were fed into Google's Teachable Machine to create a model that could detect a pancake. This model was then loaded onto the Raspberry Pi, with code that told the Create 3 to move once it detected the pancake was gone.
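The detection step reduces to mapping the model's output vector onto the Teachable Machine class labels. A minimal sketch of that mapping, separated out from the full script below (the label file format follows Teachable Machine's export, e.g. "0 No Pancake"; the sample prediction vector here is illustrative, not real model output):

```python
import numpy as np

def parse_labels(lines):
    """Teachable Machine exports labels as '0 No Pancake', '1 Pancake';
    strip the leading index to recover just the class name."""
    return [line.strip().split(" ", 1)[1] for line in lines]

def top_class(prediction, class_names):
    """Return the most likely class and its confidence as a percentage."""
    index = int(np.argmax(prediction))
    return class_names[index], float(prediction[0][index]) * 100

# Illustrative prediction vector: 85% confidence in class 1 ('Pancake')
labels = parse_labels(["0 No Pancake\n", "1 Pancake\n"])
name, confidence = top_class(np.array([[0.15, 0.85]]), labels)
```

The same parse-then-argmax pattern appears in the full script, where the prediction comes from the Keras model instead of a hand-written array.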
For the identifier, the Pi camera's wires were not long enough for the Pi to remain on the ground, so a platform had to be built to house the Raspberry Pi close to the camera.
The initial pancake identification model was not accurate when the plates were swapped to a different brand. To resolve this issue, the Teachable Machine was trained on new photos of the plates, resulting in a more accurate model.
Initial drawing of table to estimate size
Sizing of the table to ensure enough space for the Create 3 to travel its path
The Teachable Machine had two classes: No Pancake and Pancake
The camera was placed at a 45-degree angle to easily see over the Create 3
Upon seeing an empty plate, the pancake identifier would set the Airtable value to 99, telling the Create 3 to move forward
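The reset handoff can be sketched as a small decision function: while the Airtable "Pickup Status" field reads 1 (pancake out for pickup), a "No Pancake" classification flips it to 99, releasing the Create 3. The field name and status codes follow the project's script; the function itself is an illustrative sketch, not the actual AirtablePancake module:

```python
def next_pickup_status(current_status, detected_class):
    """Decide the new 'Pickup Status' value from the camera classification.
    1  = pancake delivered, waiting for customer pickup
    99 = plate is empty, tell the Create 3 to move forward
    (Illustrative sketch of the handoff logic, not the AirtablePancake module.)
    """
    if current_status == 1 and detected_class == "No Pancake":
        return 99          # customer took the pancake: release the robot
    return current_status  # otherwise leave the status untouched

# The robot waits while the pancake is still on the plate
assert next_pickup_status(1, "Pancake") == 1
# Once the plate is empty, the status becomes 99
assert next_pickup_status(1, "No Pancake") == 99
```

Keeping this decision pure (status in, status out) makes it easy to test without a camera or an Airtable connection.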
Although all three objectives were reached for the Delivery subgroup, each objective could have been improved. The table could have been expanded to allow the other subgroups more space for their projects. In the initial design, the table would have had an L shape to give the user a designated space to pick up the pancake. This would have given the other subgroups more space for their builds and offered a more impressive track for the Create 3 to travel upon. Due to time constraints, the L-shaped table extension was forgone in favor of the pancake identifier. Based on feedback from the Plating team, the path for the Create 3 could be optimized to allow for easier turns. Although the corners were softened, the path could have been made closer to an oval than the final design. Finally, the pancake identifier could have been optimized so the camera wires did not need to twist. This would make the identifier more aesthetically pleasing and avoid potential damage to the camera.
Video of final pancake maker
from keras.models import load_model
import cv2
import numpy as np
from picamera2 import Picamera2
from libcamera import controls
import time
import AirtablePancake
print("Starting up")
# Disable scientific notation
np.set_printoptions(suppress=True)
# Load the Teachable Machine model
model = load_model("keras_model.h5", compile=False)
# Parse labels (remove leading index numbers like "0 No Pancake")
with open("labels.txt", "r") as file:
    class_names = [line.strip().split(" ", 1)[1] for line in file.readlines()]
print('Starting Camera')
# Set up Picamera2
picam2 = Picamera2()
picam2.set_controls({"AfMode": controls.AfModeEnum.Continuous})
# Fix: configure camera to output RGB format to avoid reshape issues
config = picam2.create_still_configuration(main={"size": (640, 480), "format": "RGB888"})
picam2.configure(config)
picam2.start()
time.sleep(1) # Camera warm-up time
at = AirtablePancake.at()
print('Ready!')
time.sleep(1)
try:
    while True:
        # Capture image from the camera (RGB format)
        image = picam2.capture_array()
        # Resize and normalize image to match model input
        image_resized = cv2.resize(image, (224, 224), interpolation=cv2.INTER_AREA)
        image_array = np.asarray(image_resized, dtype=np.float32).reshape(1, 224, 224, 3)
        image_array = (image_array / 127.5) - 1  # Normalize to [-1, 1]
        # Run prediction
        prediction = model.predict(image_array, verbose=0)
        index = np.argmax(prediction)
        class_name = class_names[index]
        confidence = prediction[0][index] * 100
        # Only act while a pancake is out for pickup (status == 1)
        pancakes = at.checkValue("Pickup Status")
        if pancakes == 1:
            if class_name == 'Pancake':
                print(f"Detected: {class_name} ({confidence:.2f}%)")
            elif class_name == 'No Pancake':
                print('Pancake Delivered')
                at.changeValue("Pickup Status", 99)
        else:
            print('No Pancakes :(')
        # Optional: wait before next capture (adjust frame rate)
        time.sleep(0.5)  # 2 FPS
except KeyboardInterrupt:
    print("Stopping...")
finally:
    picam2.stop()