The notion of artificial intelligence can evoke anything from excitement to fear when it comes up in conversation these days, and both are fair responses to a technology that remains this poorly understood. No one quite knows how it will affect our lives.
On the one hand, AI is guaranteed to change our world in remarkable ways. It will be capable of filling the role of personal assistant, providing easy and affordable access to valuable advice, and making our workplaces safer, to name just a few possibilities.
But there is also no shortage of people warning us of the potential risks and dangers to mankind if AI isn’t reined in and tightly controlled. Those risks range from the eradication of human jobs to the takeover of the world.
Unfortunately, like so many other innovative technologies, AI is hitting the market far more quickly than governments can regulate it. And as is often the case, these technologies are being marketed to the very young, making effective governance all the more imperative.
Case in point: Mattel released a new Barbie doll last fall, just in time for Christmas. It’s called Hello Barbie, and unlike previous Barbies, this one is powered by an AI platform and comes equipped with a microphone to record conversations, WiFi to transfer those conversations to a computer server, voice recognition software to understand a child’s words, and an algorithm to help formulate what feel like human responses.
Hello Barbie can also recall points from its previous conversations with a child in order to get to know them, much as a sibling or friend would.
Similarly, the Green Dino is an interactive, AI-driven dinosaur toy that adapts its responses to a child’s age and developmental level, no matter how complex the questions it’s asked.
While the first set of questions posed to the Green Dino were generated by parent focus groups, the toy can learn in real-time and build answers to questions that might not have been programmed in the first place.
In the field of education, new AI programs are being developed daily. These programs can provide assistance as reading coaches or math tutors.
So whether a person falls into the “thrilled” or “fearful” camp, AI is here to stay. It will be up to us to learn how to navigate these waters safely and talk to our children about it.
AI Is Nothing New
While it may seem like AI is a relative newcomer to the scene, the earliest examples of the technology actually date back more than 70 years. In 1950, a robotic mouse named Theseus made its debut, the creation of mathematician Claude Shannon.
Powered by a bank of telephone relays, Theseus had the unique ability to solve a problem by trial and error and then remember the solution. Thus, it learned from experience, much as a living creature does.
Theseus demonstrated this ability by bumping its way through a labyrinth until it finally located the endpoint. The second time through the labyrinth, it was able to take a direct route to the finish line without making a single misstep.
Small advancements were made to the technology over the ensuing years, but it wasn’t until the turn of the century that the baby steps evolved into giant leaps.
New developments in handwriting, voice, and image recognition have taken computers from barely competing with the human brain to matching or surpassing human performance on many kinds of tests.
But if you think you’re going to keep AI technology at arm’s length until some future date, think again. Many of us have already been using AI on a daily basis for years, perhaps without realizing it.
The technology has made its way into cellphones through face and fingerprint recognition. New TVs, appliances, thermostats, and other household gadgets are equipped with so-called “smart” technology.
Digital voice assistants such as Siri, Alexa, Google Assistant, and Cortana are commonplace. And when we use social media or run internet searches, the algorithms that work behind the scenes to identify our likes and dislikes are forms of AI. We’re told it’s all part of an effort to customize our online experiences.
When we write, our spell-checkers are based on AI. When we drive, our GPS systems and driver-assist technologies also rely on it. When we bank online or make a credit card purchase, we are depending on AI fraud detection systems to protect us.
And when it’s time at the end of the day to put our feet up, even Netflix will use AI to make viewing recommendations that suit our tastes.
The Danger
There’s no question that toys like Hello Barbie are a far cry from the innocuous pull-string dolls that once came equipped with a dozen prerecorded phrases. When a toy is capable of understanding a conversation, responding intelligently, and learning and adapting in real-time, it takes on an uncanny quality that is simultaneously not quite human and more than mere machine.
The technology is expanding. Already developers are working on toys that are capable of gesture recognition. If a child looks sad, the toy might attempt to console them without any prompting from the child. These AI toys may not realistically be capable of emotion or sincere interest, but is a child capable of making that distinction?
The same question may be relevant for many adults, too.
This topic has been well-explored in science fiction over the years. In the 2013 film Her, the protagonist gets drawn into an unexpected romance with his AI virtual assistant, which is personified through a female voice.
But the warnings about AI gadgets go far deeper than how we interact with them. Child privacy advocates say that AI toys violate a child’s rights. They worry, too, about the risk of a toy getting hacked when it’s connected to the internet.
Image Generation
AI image generators have become a popular online tool. These are apps that can create a seemingly original image based on text inputs from a user.
The generator uses artificial neural networks trained on online databases containing millions of images. This is already creating legal complications.
Just last month, the Winnipeg Police Service paid a visit to Collège Béliveau, a high school in Windsor Park, after a report of child pornography. Underage female students found themselves at the centre of a humiliating controversy. Nude photos bearing their likenesses had appeared on public online forums.
In that case, upwards of 300 AI-generated images were believed to have been created using innocent photos taken from the students’ social media pages, with deeply unsettling results.
Cases like this one test the ability of law enforcement to get the upper hand on the nonconsensual sharing of intimate images online, whether those images are “real” or otherwise.
These same image generators have been accused of reinforcing harmful bias and stereotypes on the internet. The algorithms behind them can produce images replete with misogyny, violence, and bigotry.
For instance, when one AI image generator was asked to produce photos of toys in Iraq, image after image portrayed Lego-style characters or teddy bears dressed in army attire and carrying weapons.
When the input was “attractive people,” the images produced were all of the young and fair-skinned. Similarly, an input of “productive person” yielded images of white men in suits, while a request for images of people who rely on social services returned people with darker skin.
These harmful stereotypes are far from a true reflection of the real world and can warp people’s worldview. The influence of AI is far-reaching.
Chatbots
ChatGPT is a popular AI language-processing tool first launched in November 2022. Within two months, the chatbot had already reached 100 million users. ChatGPT is available to the public and free to use at its entry tier.
Chatbots like ChatGPT can answer a user’s questions, drawing from the extensive databases of raw information on which they’ve been trained. These bots can also be used as tools to help compose emails, essays, and code.
Snapchat has its own chatbot, called My AI. And it’s not alone: Google will soon launch an experimental version of its Bard chatbot geared specifically toward teens.
The problem with chatbots, many say, is that people, especially young people, tend to regard them as all-knowing entities and treat their answers as the final authority.
Nothing could be further from the truth. As with AI-generated images, tests have demonstrated that chatbots are quite capable of producing wrong answers, promoting unhealthy ideals, and generating racist responses.
So what can parents do to prepare their kids as they move into a future where AI tools will be used both in the home and workplace?
What Parents Can Teach Their Kids
Firstly, experts say that kids need to be reminded that there is no oversight built into these chatbots. No one is checking them for factual accuracy. Thus, the user needs to be the investigator and conduct their own fact-checking.
Next, kids need to be reminded that chatbots aren’t thoughtful or caring. They cannot replace real-life friends.
Many people are concerned that young people will soon use AI to fill the voids in their social circles. Snapchat’s My AI already allows users to treat it in this way. The chatbot has its own profile page that takes up space among a user’s friend list. And it’s always available to chat, day and night.
Finally, it’s important to emphasize to our kids that the information chatbots use is gathered by algorithms drawing on human-created content from all corners of the internet. We need to remind them, and ourselves, that it’s dangerous to rely on such a broad base of information without employing some good, old-fashioned human judgment.