We are excited to announce the release of our first batch of AI-generated content on our YouTube channel. Children love animals, so we have started with a collection that we call AI Tails (pun intended). To kick off a broader discussion of the immense potential, and the rather scary pitfalls, of AI for early literacy, this blog post outlines the perspective informing our first steps in utilizing AI toward our mission of helping every child get ready for school.
When OpenAI released ChatGPT on November 30, 2022, it took the world by storm. Here was a machine that was uncannily human in its responses to our prompts, with massive potential for both creation and automation based on the sheer amount of data it had been trained on. In a very short span of time, ChatGPT and similar tools have established themselves as sources of ideas and first drafts for students and professionals alike. The tech industry has redoubled its efforts in developing so-called “generative AI” technologies, in which machines trained on vast amounts of data learn to create text, images, video, and audio. Businesses big and small are scrambling to figure out ways to utilize generative AI. Web search? Customer service? Drug discovery? Medical diagnoses? Pretty much any task that involves information (extraction, creation, summarization,…) is fair game.
Amidst the frenzy of activity triggered by that “ChatGPT moment” in November 2022, we at GLEN World have naturally been asking whether, and how, generative AI can help with our mission of early literacy and language acquisition. To answer this question, we must understand and acknowledge a fundamental weakness of this technology: the content generated may not align with our expectations. For queries intended to elicit factual responses (e.g., summaries from a web search), we might get “hallucinations,” or made-up facts, from a chatbot. Prompts can be engineered to elicit toxic responses, and even when the prompt is innocent, we cannot be 100% sure that the content generated will be appropriate. Generated content will inevitably reflect the data the machine was trained on, so it may, for example, lack the diversity we expect. To be fair, large and highly capable teams at all the key AI companies are working hard to understand and solve such problems, but since we aim to serve the very young, we have decided to pursue a conservative strategy in how we utilize generative AI.
In our minds, therefore, there is no question of allowing our young learners to interact directly with chatbots if there is the smallest chance of the conversation taking a weird turn. How about creating stories, though? Our GLEN Learn and GLEN Books apps contain beautifully illustrated and narrated stories for children. Can we use generative AI technologies to add to this collection? Here again, there’s a hitch…actually, multiple hitches. When we tell ChatGPT to come up with a story aimed at young children (for example, a cat making friends with a dog), we get somewhat predictable and unoriginal results. This is perhaps not surprising, given that the patterns the machine is putting out are probably a rehash of what it was trained on. The situation with image generation is also tricky. It is difficult to prompt a generative AI to produce images consistent enough to portray a character in a story doing different things: for example, a girl climbing a tree and then the same girl running. The technology for achieving such consistency is rapidly improving, but as of this moment, we cannot count on it.
We have therefore decided to utilize generative AI to develop nonfiction: AI-generated illustrated audiobooks on topics that we hope will interest young children. Since children are drawn to animals, that’s what we are starting with, but the possibilities are endless. We find that, if we ask nicely, ChatGPT is pretty good at generating nonfiction in a tone that is appealing to children. And we can easily check and correct any factual errors it might make. A little bit of prompt engineering with Microsoft Designer enables us to illustrate this content quite effectively, since we do not need to maintain consistent depictions of particular characters as we would in fiction. For example, in the audiobook entitled “How Wolves Became Dogs,” we use completely different images of wolves on different pages. Finally, ElevenLabs gives us a variety of options for text-to-audio narration that are only slightly robotic. We think that might be OK, since we are highlighting rather than hiding the fact that this content is AI-generated. The name of the YouTube playlist is AI Tails, and there’s a robot in the logo for this playlist. Naturally, the logo is also AI-generated!
We plan to continue expanding our AI-generated content via our YouTube channel, and will eventually start adding this content to our apps. We would love your feedback on our approach and the content we have put up so far. Our goal here is to quickly explore the potential of generative AI for our mission. So, in contrast to the high standards we imposed on our human-created, human-illustrated, and human-narrated stories, we are going with “good enough” for AI-based nonfiction. What do you think of the quality of the text, the images, and the narration audio? Is it actually good enough?
More broadly, what do you think of our decision to focus on nonfiction? Do you have any ideas for topics that might interest young children? We look forward to hearing from you…please reach out to us via info@glenworld.org with your comments and feedback.
As always, we are grateful for your support. If you like our work, please consider a donation.
The GLEN World Team
GLEN World is a 501(c)3 nonprofit. If your employer has a donation match program, then that’s a great way to double your gift’s impact!