Facial Recognition: Getting Started

AI for Everyone: Training a Model to Understand You

What if a computer could understand a command that is unique to you? Not just a mouse click or a key press, but a specific gesture, facial expression, or sound that is easy for you to make. This is the core idea behind powerful assistive technologies like Google’s Project Euphonia, which trains AI to understand people with non-typical speech, helping them communicate with the world.

Today, you will be the researcher. Your goal is to explore how to train your own simple AI model and brainstorm how it could be used to help someone.

Today’s Goals

  • Discover how AI is being used for assistive technology through Project Euphonia.
  • Explore Google’s Teachable Machine to train a basic gesture or sound model.
  • Brainstorm ideas for how your custom AI “switch” could help a user with unique needs.

Materials You’ll Need

  • A computer with a webcam and microphone 💻
  • Internet access

Part 1: Inspiration – Seeing AI in Action

Let’s see what happens when technology is designed for everyone.

  1. Watch the Video: Watch this short video explaining Google’s Project Euphonia (2 minutes).

  2. Think About It:

    • Who does this technology help?
    • Why is it so important that AI can be trained to understand individuals?

Part 2: Your Turn to Be the Researcher

Now it’s your turn to experiment. Think about a user who might have difficulty using their hands to press a button or use a keyboard. How could they communicate with a device?

Your challenge is to train a simple AI model to recognize one unique action.

  1. Open Teachable Machine: Go to the Teachable Machine website and click “Get Started.”

  2. Choose Your Project: Select an “Image Project” for a visual action or an “Audio Project” for a sound.

  3. Brainstorm Your “Switch”: What will be the action your AI learns? Here are some ideas to get you started:

    • Gesture Idea: A head nod up and down.
    • Facial Expression Idea: Raising your eyebrows.
    • Sound Idea: A specific hum or a tongue click.

  4. Train Your Model:

    • Class 1: Your Action. Rename the first class to describe your action (e.g., “Eyebrows Up”). Use your webcam or mic to record 20-30 examples of yourself performing the action.
    • Class 2: Neutral. Rename the second class “Neutral” or “Nothing.” For an image project, record yourself with a relaxed, neutral expression; for an audio project, record the room’s quiet background noise. This step is crucial: it teaches the AI what to ignore.
    • Click “Train Model” and wait for it to finish.

  5. Test Your Model: Use the “Preview” window to see if your AI can tell the difference between your action and the neutral state. Does it work well? What makes it better or worse?
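To see why the “Neutral” class matters, here is a toy sketch of the two-class idea in Python. Real models like Teachable Machine’s learn from images or audio; in this illustration each example is just a pair of made-up numbers standing in for extracted features, and the class names and values are invented for the example.

```python
# Toy two-class classifier: pick the class whose average example
# (centroid) is closest to a new sample.

def centroid(examples):
    """Average feature vector of a class's examples."""
    n = len(examples)
    return [sum(e[i] for e in examples) / n for i in range(len(examples[0]))]

def classify(sample, classes):
    """Return the label whose centroid is closest to the sample."""
    def dist2(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(classes, key=lambda label: dist2(sample, classes[label]))

# Class 1: "Eyebrows Up" — pretend its features cluster near (0.8, 0.9)
action_examples = [[0.80, 0.90], [0.85, 0.88], [0.78, 0.92]]
# Class 2: "Neutral" — pretend its features cluster near (0.2, 0.1)
neutral_examples = [[0.20, 0.10], [0.18, 0.12], [0.25, 0.08]]

classes = {
    "Eyebrows Up": centroid(action_examples),
    "Neutral": centroid(neutral_examples),
}

print(classify([0.82, 0.90], classes))  # near the action cluster
print(classify([0.21, 0.11], classes))  # near the neutral cluster
```

Without the “Neutral” class there would be only one centroid, so every sample would be labeled as the action; recording what “nothing happening” looks like is what gives the model something to compare against.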


Part 3: Group Brainstorm & Discussion

Let’s share what we discovered.

  • What action did you train your model to recognize?
  • What was one challenge you faced while training? (e.g., inconsistent lighting, background noise, repeating the action the exact same way).
  • Imagine the model you just created could turn something on or off. What could you control with your eyebrow raise, head nod, or hum to help someone in their daily life?

Next Steps: Bringing Your Idea to Life

This exploration was the first and most important step. You now have an idea for a custom AI switch. In our next activity, we will take a model just like the one you built today and connect it to a physical micro:bit to make your assistive technology idea a reality.
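Before then, it can help to think about how a model’s output becomes a reliable switch. A model reports a confidence score many times per second, and those scores wobble, so a device should not flip on and off with every reading. One common approach, sketched below in Python with invented threshold values, is to require several consecutive high (or low) readings before changing state:

```python
# Sketch: turn a stream of model confidence scores into a stable
# on/off switch. Thresholds and readings here are made up for
# illustration — real values would be tuned by testing your model.

class ConfidenceSwitch:
    """Flips on after several high readings, off after several low ones.

    Requiring a streak of consistent readings (debouncing) stops the
    switch from flickering when the model is briefly unsure.
    """
    def __init__(self, on_threshold=0.8, off_threshold=0.4, hold=3):
        self.on_threshold = on_threshold    # confidence needed to turn on
        self.off_threshold = off_threshold  # confidence needed to turn off
        self.hold = hold                    # consecutive readings required
        self.state = False
        self._streak = 0

    def update(self, confidence):
        if not self.state and confidence >= self.on_threshold:
            self._streak += 1               # building toward "on"
        elif self.state and confidence <= self.off_threshold:
            self._streak += 1               # building toward "off"
        else:
            self._streak = 0                # inconsistent reading: reset
        if self._streak >= self.hold:
            self.state = not self.state
            self._streak = 0
        return self.state

switch = ConfidenceSwitch()
readings = [0.2, 0.9, 0.95, 0.92, 0.3, 0.1, 0.2, 0.15]
states = [switch.update(c) for c in readings]
print(states)  # [False, False, False, True, True, True, False, False]
```

The same logic, however it ends up written, is what would let an eyebrow raise or hum dependably toggle a light, a speaker, or a micro:bit output.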