Navigating the AI landscape: what parents should know about children’s safety

Anna Collard, SVP of Content Strategy at KnowBe4 AFRICA, highlights both the exciting potential and the concerns for children growing up in this AI-driven world. | Supplied

Published Oct 10, 2024

Durban — Anna Collard, SVP Content Strategy & Evangelist at KnowBe4 AFRICA, has warned that growing up in an AI-powered world carries implications for children that are both exciting and concerning.

KnowBe4 is a computer security service provider.

In just two years, artificial intelligence has undergone a revolution. Generative AI tools like ChatGPT, Google’s Gemini, and Microsoft’s Copilot have rapidly become part of our daily lives.

With Meta integrating AI chatbots into popular platforms like WhatsApp, Facebook, and Instagram, the technology is more accessible than ever.

“These AI tools offer unprecedented opportunities for learning, creativity, and problem-solving. Children can use them to create art, compose music, write stories, and even learn new languages through engaging interactive methods,” Collard explained.

“The personalised nature of AI chatbots, with their ability to provide quick answers and tailored responses, makes them especially appealing to young minds.”

She said AI brings a host of potential risks that parents, teachers and policymakers must consider carefully.

The challenges are significant, from privacy concerns and the danger of overtrust to the spread of misinformation and possible psychological effects.

“As we step into this AI-driven era, we must carefully weigh the incredible potential against the genuine risks,” Collard warned.

“Our challenge is to harness AI’s power to enrich our children’s lives while simultaneously safeguarding their development, privacy, and overall well-being.”

Privacy concerns

“Parents need to know that while they seem harmless, chatbots collect data and may use it without proper consent, leading to potential privacy violations,” Collard said.

She said the extent of these privacy risks varies greatly.

According to a Canadian Standards Association report, the threats range from relatively low-stakes issues, such as using a child’s data for targeted advertising, to more serious concerns.

Since chatbots can track conversations, preferences, and behaviours, they can create detailed profiles of child users. In malicious hands, this information can enable powerful manipulative tactics to spread misinformation, drive polarisation, or facilitate grooming.

Collard pointed out that large language models were not designed with children in mind. The AI systems that power these chatbots train on vast amounts of adult-oriented data, which may not account for the special protections needed for minors’ information.

Overtrust

Another concern for parents is that children may develop an emotional connection with chatbots and trust them too much, even though, in reality, these systems are neither human nor their friends.

“The overtrust effect is a psychological phenomenon that is closely linked to the media equation theory, which states that people tend to anthropomorphise machines, meaning they assign human attributes to them and develop feelings for them,” Collard said.

“It also means that we overestimate an AI system’s capability and place too much trust in it, thus becoming complacent.”

She said overtrust in generative AI can lead children to make poor decisions because they may not verify the information.

“This can lead to a compromise of accuracy and many other potential negative outcomes,” Collard explained.

“When children rely too much on their generative AI buddy, they may become complacent in their critical thinking, and it also means they may reduce face-to-face interactions with real people.”

Inaccurate and inappropriate information

Despite their sophistication, AI chatbots are not infallible.

“When they are unsure how to respond, these AI tools may ‘hallucinate’ by making up an answer instead of simply saying they don’t know,” Collard explained.

This can lead to minor issues, such as incorrect homework answers, or more serious ones, such as giving minors a wrong diagnosis when they are feeling sick.

“AI systems are trained on information that includes biases, which means they can reinforce these existing biases and provide misinformation, affecting children’s understanding of the world,” she said.

From a parent’s perspective, the most frightening danger of AI for children is the potential exposure to harmful sexual material.

“This ranges from AI tools that can create deepfake images of them to systems that can manipulate and exploit their vulnerabilities, subliminally influencing them to behave in harmful ways,” Collard said.

Psychological impact and reduction in critical thinking

As with most new technology, overuse can lead to poor outcomes.

“Excessive use of AI tools by kids and teens can lead to reduced social interactions, as well as a reduction in critical thinking,” Collard stated.

“We’re already seeing these negative psychological side-effects in children through overuse of other technologies such as social media: a rise in anxiety, depression, social aggression, sleep deprivation and a loss of meaningful interaction with others.”

Collard said navigating their way through this brave new world is difficult for children, parents and teachers, but she believes policymakers are catching up.

“In Europe, while it doesn’t specifically relate to children, the AI Act aims to protect human rights by making sure AI systems are safer.”

Until proper safeguards are in place, parents need to monitor their children’s AI usage and counter its negative effects by introducing some family rules.

“By prioritising off-screen play and reading, parents will help boost their children’s self-esteem, as well as their critical-thinking skills,” Collard concluded.

Daily News