What if a computer could understand a command that is unique to you? Not just a mouse click or a key press, but a specific gesture, facial expression, or sound that is easy for you to make. This is the core idea behind powerful assistive technologies like Google’s Project Euphonia, which trains AI to understand people with atypical speech, helping them communicate with the world.
Today, you will be the researcher. Your goal is to explore how to train your own simple AI model and brainstorm how it could be used to help someone.
Let’s see what happens when technology is designed for everyone.
Watch the Video: This short video (2 minutes) explains Google’s Project Euphonia.
Think About It:
Now it’s your turn to experiment. Think about a user who might have difficulty using their hands to press a button or use a keyboard. How could they communicate with a device?
Your challenge is to train a simple AI model to recognize one unique action.
Open Teachable Machine: Go to the Teachable Machine website and click “Get Started.”
Choose Your Project: Select an “Image Project” for a visual action or an “Audio Project” for a sound.
Brainstorm Your “Switch”: What will be the action your AI learns? Here are some ideas to get you started: raising your eyebrows, tilting your head, or opening your mouth for an Image Project; a whistle, a hum, or a single spoken word for an Audio Project.
Train Your Model: Create two classes: one for your unique action and one “neutral” class for your resting state or background sound. Record a set of samples for each class with your webcam or microphone, then click “Train Model.”
Test Your Model: Use the “Preview” window to see if your AI can tell the difference between your action and the neutral state. Does it work well? What makes it better or worse?
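If you are curious what happens after the Preview window, Teachable Machine lets you export your trained model and use it in a web page with Google’s @teachablemachine/image library. The sketch below is a minimal, illustrative TypeScript example, not part of today’s activity: the model URL is a placeholder for your own exported link, the class names “Action” and “Neutral” are assumptions about how you labeled your classes, and the 0.9 confidence threshold is just a demonstration value.

```typescript
// Minimal sketch: load an exported Teachable Machine image model in the
// browser and poll the webcam, treating the "Action" class as a switch.
// Assumptions: two classes named "Action" and "Neutral", and MODEL_URL
// is a placeholder for your own shareable model link.
import * as tmImage from "@teachablemachine/image";

const MODEL_URL = "https://teachablemachine.withgoogle.com/models/YOUR_MODEL_ID/";

async function run(): Promise<void> {
  // The export provides both a model file and a metadata file.
  const model = await tmImage.load(
    MODEL_URL + "model.json",
    MODEL_URL + "metadata.json"
  );

  // Set up a 200x200 webcam feed (flipped so it acts like a mirror).
  const webcam = new tmImage.Webcam(200, 200, true);
  await webcam.setup(); // asks the browser for camera permission
  await webcam.play();
  document.body.appendChild(webcam.canvas);

  const loop = async () => {
    webcam.update(); // grab the latest frame
    const predictions = await model.predict(webcam.canvas);
    const action = predictions.find((p) => p.className === "Action");
    // 0.9 is an assumed confidence threshold, not a library default.
    if (action && action.probability > 0.9) {
      console.log("Switch activated!");
    }
    window.requestAnimationFrame(loop);
  };
  window.requestAnimationFrame(loop);
}

run();
```

Notice that the code polls every frame and only fires when the model is very confident; that threshold is exactly the kind of thing your testing in the Preview window helps you tune.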
Let’s share what we discovered.
This exploration was the first and most important step. You now have an idea for a custom AI switch. In our next activity, we will take a model just like the one you built today and connect it to a physical Micro:bit to make your assistive technology idea a reality.
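As a preview, here is one possible way that connection could work; this is an assumption for illustration, not necessarily the method the next activity will use. A Chromium-based browser can send the AI’s decision to a Micro:bit over its USB cable using the Web Serial API, and the “ACTION” message below is a made-up protocol for this sketch.

```typescript
// Hypothetical sketch: forward the AI's decision to a Micro:bit over USB
// using the Web Serial API (supported in Chromium-based browsers).
// Type definitions for navigator.serial come from the
// "@types/w3c-web-serial" package.
// 115200 is the Micro:bit's default USB serial baud rate.

async function connectToMicrobit(): Promise<WritableStreamDefaultWriter<Uint8Array>> {
  // Ask the user to pick the Micro:bit from the browser's serial port list.
  const port = await navigator.serial.requestPort();
  await port.open({ baudRate: 115200 });
  return port.writable!.getWriter();
}

async function sendSwitchEvent(
  writer: WritableStreamDefaultWriter<Uint8Array>
): Promise<void> {
  // Send a simple newline-terminated message. "ACTION" is a made-up
  // protocol word for this sketch, not a Micro:bit convention.
  await writer.write(new TextEncoder().encode("ACTION\n"));
}
```

On the other end, a small program on the Micro:bit (for example in MakeCode or MicroPython) would read each serial line and trigger an output, such as lighting the LED display, whenever the message arrives.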