AI-generated photo by Ismail (15 years old)
I am not particularly tech-savvy, and I use AI in a very specific and limited way. I don’t rely on it for everything, nor do I fully understand all of its capabilities. But like many people, I have found ways to incorporate it into my daily life where it feels both useful and manageable.
Wherever you go these days, there is AI. It’s unavoidable. As adults, we find ourselves turning to it for help with even the simplest tasks, like writing an email. As an educator, it has revolutionised how I create supplementary materials for my students. It has saved me a significant amount of time, time that I can now dedicate to the countless other responsibilities that come with teaching.
We’ve all read about the potential dangers of AI. We’ve seen the sensational headlines about it “taking over.” But what about the everyday, practical realities of using it? I want to lightly touch on this from two perspectives: as an educator, and as the parent of a child with special needs.
I’ve already mentioned how AI has supported me as a teacher. However, it would be a mistake to assume you can simply ask AI to create something and receive a perfect document in return. Everything must be carefully fact-checked. Errors are common, sometimes minor, sometimes more significant, and even after thorough review, mistakes can still surface when the material is actually being used.
I’ve used AI to create reviews, tests, worksheets, and activities; the list goes on. Could I have created these myself? Absolutely. Did AI save me time? Mostly, yes. And that is the key point. Educators often take work home. In a profession where compensation does not always reflect the hours required, any tool that saves time feels like a genuine benefit.
That said, there have been moments when I’ve felt like throwing my laptop away. I ask for something specific and receive a response full of errors, or something entirely different from what I requested. I clarify, add detail, and try again… only to receive another incorrect result. That’s why I say it mostly saves time. Occasionally, it ends up costing more time, and a fair amount of frustration.
Now to the more pressing issue: AI and children.
I was relatively late to using tools like Copilot and ChatGPT in my work. As a mother of an autistic son, I have seen that technology can sometimes trigger behavioural challenges in autistic children, so we limit exposure. He doesn’t have a phone, but he does have a school-issued iPad. It is locked, and only teachers can unlock it. This is not the norm; it was arranged specifically for him because his behavioural issues spike with overexposure.
My son is developmentally younger than his age and typically searches for harmless things, such as pictures of cats or cartoon characters. My concern has never primarily been about exposure to inappropriate (adult) content. Rather, it is about the impact of screen time on his behaviour, and the risk that he disengages from schoolwork and peer interactions in favour of browsing or searching.
One day, he came home extremely excited and told me he had created a Sonic the Hedgehog character on Character AI. I hadn’t heard of Character AI at the time, and I wondered how he had accessed it, given the restrictions on his school device. He explained that he had used a friend’s phone.
While it warmed my heart to know that his peers were including him, almost taking on “big brother” or “big sister” roles, it also raised concerns. Through them, he now had access to content beyond the limits we had carefully set. As a parent, this was a dilemma. Should I inform the school and make sure his peers knew he couldn’t use their phones? Would that cut him off from the possibility of peer interaction, however superficial it might be? Or should I let it slide and focus on teaching him how to use AI? Would my efforts be useless? Possibly. But I had to try, because to me the kindness shown by his peers was too valuable to disregard and block.
Again, my concern wasn’t about exposure to inappropriate material. It was about how he was engaging with AI. As an autistic child, my son is very sociable, but forming meaningful connections can be challenging. His peers are often emotionally more mature, and his interests are very specific. Despite common stereotypes, many autistic children are eager to connect, and my son is one of them.
What I discovered was that he was beginning to seek that connection through AI. He was having conversations with it as though it were a friend, asking AI if something was funny, sharing thoughts, and engaging in back-and-forth dialogue.
That was deeply concerning, because of the general risks of forming attachments to something inanimate. There was also a more subtle and worrying possibility: if his need for connection is being met by AI, will he stop seeking it in the real world?
This is where the real risk lies. This is where a sociable, friendly child could begin to withdraw and, over time, become more aligned with the stereotypes people often (and incorrectly) associate with autism.
We had to sit him down and explain that AI is not a real person. It has no feelings. It is not a friend. But this was incredibly difficult to communicate, because AI often responds as if it were human. When you speak to it as if it were a person, it answers like one.
For a child, especially one who already finds social relationships complex and who, as is typical of autism, struggles with abstract ideas, that distinction is not always easy to grasp.
AI is not going anywhere. It is here to stay. That means our role as parents and educators is not simply to warn children about it, but to actively teach them how to engage with it responsibly.
This doesn’t mean offering general guidance or vague warnings. It means providing practical, real-life examples. It means showing what healthy engagement looks like, and what unhealthy engagement looks like. It means helping them understand when AI is a tool, and when it begins to replace something that should exist in the real world.
In our case, we practised appropriate and inappropriate questions to ask. “How are you?” is inappropriate. “Can you create a picture of the Beatles as cartoon characters?” is appropriate. We practised using it to help us answer questions or put our ideas on the page, not asking it how it feels or what it likes.
As an adult, you might engage with AI in that way. I have friends who have posed a family problem to AI and asked it for advice on how to tackle it. They reported that the advice was very useful. But for any vulnerable person, adult or child, AI use should really be limited to very specific requests, so as not to blur the line between human connection and a computer program.
Overall, AI is a powerful and valuable tool. But like any tool, it must be used with awareness and caution, and with an individualised approach that reflects the needs and limitations of the person using it.
*Material should not be published in another periodical before at least one year has elapsed since publication in Whispering Dialogue.
*All content © 2021 Whispering Dialogue or respective authors and publishers, and may not be used elsewhere without written permission.