The Idea

Before transformer models and large language models made chatbots astonishingly fluent, most chatbots worked through decision trees: a map of possible questions with branching paths for each possible answer. Decision trees are simple to understand, and simple enough to build on paper.

By designing their own decision tree chatbot, children learn:

  • How chatbots respond to user input
  • Why chatbots get confused by unexpected inputs
  • Why designing for conversation is hard
  • What makes modern AI chatbots different (and more impressive)

Part 1: Choose Your Chatbot’s Purpose (5 minutes)

Your chatbot needs a specific job. The more specific, the better: an open-ended chatbot would need far more paths than anyone could draw.

Good chatbot purposes for this activity:

  • Movie recommender: Asks what genre you like, whether you want something funny or serious, old or new, and recommends a movie
  • Homework helper: Guides a student to figure out what kind of help they need and where to look for it
  • Family chef: Asks what ingredients you have and how much time you have, then suggests a simple recipe
  • Mood checker: Asks how you’re feeling and offers a suggestion (book, activity, snack) to match
  • Museum guide: Walks a visitor through exhibits based on their interests

Have your child pick one. This is their chatbot’s purpose.

Part 2: Map the Conversation (20 minutes)

Step 1: Start with a greeting

Every chatbot starts with a welcome. Write it in a box at the top of your paper:

“Hi! I’m RecBot, your movie recommender. What kind of movie are you in the mood for?”

Step 2: Map the first branching point

Under the greeting, draw arrows for the main responses a user might give:

  • Action
  • Comedy
  • Drama
  • Something scary

Each becomes a branch. Write them in boxes.

Step 3: Add depth

Under each branch, the chatbot asks a follow-up question to narrow down further:

  • If “Comedy” → “Do you want something for the whole family or just for you?”
  • If “Action” → “Do you prefer realistic or superhero-style?”

Step 4: Reach recommendations

Eventually, each path should end in a recommendation (or a small set of options). Draw these as final boxes — the “leaves” of your decision tree.

Step 5: Handle the unexpected

Here’s where it gets interesting. What if the user types “I don’t know” or “something random” or “green”?

Design a fallback path: “Hmm, I didn’t quite understand that. I can help you find Action, Comedy, Drama, or Horror — which sounds closest?”
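The paper map from Steps 1–5 can also be sketched in code. Here is a minimal Python sketch, assuming a nested-dictionary tree (the branch names and recommendations are invented examples, not a real movie database):

```python
# A decision tree as nested dictionaries: each node is either a
# question with answer branches, or a final recommendation (a string).
tree = {
    "question": "What kind of movie are you in the mood for? (action/comedy)",
    "branches": {
        "action": {
            "question": "Realistic or superhero-style? (realistic/superhero)",
            "branches": {
                "realistic": "How about a classic heist movie?",
                "superhero": "Try a superhero team-up movie!",
            },
        },
        "comedy": {
            "question": "For the whole family or just for you? (family/me)",
            "branches": {
                "family": "A family-friendly animated comedy it is!",
                "me": "A sharp stand-up special could work.",
            },
        },
    },
}

def respond(node, answer):
    """Follow one branch of the tree; return (next_node, reply).
    Unknown answers trigger the fallback path instead of crashing."""
    branches = node["branches"]
    key = answer.strip().lower()
    if key in branches:
        nxt = branches[key]
        if isinstance(nxt, str):      # a leaf: final recommendation
            return None, nxt
        return nxt, nxt["question"]   # an inner node: ask the follow-up
    # Fallback path (Step 5): list the options we actually understand.
    options = ", ".join(branches)
    return node, f"Hmm, I didn't quite understand that. I can help with: {options}."

node, reply = respond(tree, "comedy")   # follow-up question
leaf, rec = respond(node, "family")     # final recommendation (leaf)
```

Notice that the fallback returns the same node, so the chatbot simply re-asks — exactly what your paper version does when the "chatbot" player points back at the current box.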

Tips for the map:

  • Use sticky notes for paths you might change — they’re easy to move
  • Use different colored markers for different levels (questions, answers, recommendations, fallbacks)
  • Don’t try to map every possibility — focus on the 3–5 main paths

Part 3: Test Your Chatbot (10–15 minutes)

Step 6: Run a test conversation

One person plays the user (the human); the other plays the chatbot, reading responses from the tree and following its paths.

The user tries to have a natural conversation. The chatbot follows the tree exactly.

Observe: Where does the conversation break down? What questions does the user ask that aren’t on the map?

Step 7: Break it on purpose

Now the user intentionally tries to break the chatbot:

  • Gives an unexpected answer
  • Asks a question outside the chatbot’s purpose
  • Gives multiple answers at once (“I want comedy and action”)
  • Types something that doesn’t make sense

How does the chatbot handle it? What does the failure look like?
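The break-tests above can be run against a toy exact-match chatbot to see why every one of them fails the same way (a hypothetical sketch; the branch names and replies are invented):

```python
# The chatbot only recognizes these four exact answers.
branches = {"action", "comedy", "drama", "horror"}

def first_reply(user_text):
    """Exact-match lookup, like pointing at boxes on the paper map."""
    key = user_text.strip().lower()
    if key in branches:
        return f"Great, {key} it is!"
    return "Fallback: I can help with action, comedy, drama, or horror."

break_tests = [
    "maybe something blue?",       # unexpected answer
    "what's the weather today?",   # outside the chatbot's purpose
    "I want comedy and action",    # multiple answers at once
    "asdfgh",                      # nonsense
]
for text in break_tests:
    print(first_reply(text))       # every one falls through to the fallback
```

"I want comedy and action" is the telling case: both words the chatbot knows are right there, but because the whole string isn't an exact branch name, it still lands in the fallback.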

Part 4: The Real AI Connection (10 minutes)

After the activity, talk about what modern chatbots like ChatGPT do differently.

What your paper chatbot can do:

  • Follow a defined path reliably
  • Give consistent answers
  • Handle expected inputs well

What your paper chatbot can’t do:

  • Understand language it wasn’t explicitly programmed for
  • Handle unexpected phrasing of the same question
  • Learn from the conversation

What modern AI chatbots do differently:

  • They don’t follow decision trees — they predict what word should come next, based on training on enormous amounts of text
  • They can handle almost any phrasing of a question
  • They can carry on conversations that seem natural and thoughtful
  • But: they can hallucinate facts, misunderstand context, and have no actual understanding of what they’re saying — they’re pattern-matching, not thinking
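The "predict the next word" idea can be shown at toy scale with simple word-pair counts (a deliberately tiny sketch using a made-up three-sentence corpus; real models use neural networks trained on billions of words, not counting):

```python
from collections import Counter, defaultdict

# A tiny "training corpus" — three short sentences.
corpus = "i like funny movies . i like scary movies . i want funny movies".split()

# Count which word follows each word (bigram counts).
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def predict_next(word):
    """Return the word most often seen after `word`, or None if unseen."""
    counts = following[word]
    return counts.most_common(1)[0][0] if counts else None

print(predict_next("i"))   # "like" (seen twice, vs "want" once)
```

Unlike the decision tree, nothing here was hand-drawn: the "paths" emerge from patterns in the text. That is the seed of the idea behind modern chatbots, scaled up almost unimaginably.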

The key insight: Even the most sophisticated AI chatbot is still doing something that can be described algorithmically. It’s much more complex than your paper chatbot, but it’s not magic and it’s not human. It has no experience, no intent, no understanding. It’s a very sophisticated pattern-completion engine.

Extension for Older Children (12+)

Design critique:

  • What assumptions did you make about the user when you designed your chatbot?
  • Who might be excluded by those assumptions?
  • How would you make your chatbot more accessible or inclusive?

Real-world connection:

  • Look up how customer service chatbots work and fail. (Reading complaint threads about chatbots online can be educational and funny.)
  • Discuss: When is a chatbot appropriate? When should there always be a human option?
