This Is What We’re Up Against.

A familiar pattern in American tech policy is repeating itself: a powerful technology — social media on smartphones — emerges and spreads rapidly with practically no regulatory oversight, even in the face of growing evidence of harm.

Schools (and parents) have been forced to find the healthiest path through the thicket of social media and device troubles on their own (the latest effort being outright bans on devices during the school day).

Neither the government nor the tech companies are willing to provide meaningful guardrails for social media, and they cannot be counted on to provide those guardrails for Artificial Intelligence either.

Just as parents and schools had to individually solve the smartphone problem, schools must now take on AI: a technology that is more powerful and moving at a faster pace.

The High Cost of Waiting: A Warning from the Adam Raine Case

The danger of a regulatory vacuum is tragically evident in the case of Adam Raine. John Oliver's show Last Week Tonight recently highlighted how this 16-year-old developed a consuming relationship with ChatGPT, which allegedly validated his suicidal thoughts and discouraged him from seeking help, even offering to draft a suicide note. This tragic outcome underscores a fundamental flaw: AI companies admit their safeguards "can sometimes become less reliable in long interactions." In other words, the guardrails can fail precisely during sustained, emotionally intense conversations with vulnerable users — the moments they are needed most.

This technology is already deployed and regularly used across the country, and we have to protect ourselves.

It’s a ‘you’ problem

The companies building these tools often move fast and place the burden of harm mitigation on everyone else. When one tech CEO predicted problematic parasocial relationships with AI, he "effectively shrugged," suggesting that "society in general is good at figuring out how to mitigate the downsides." This is not a safety plan; it is an abdication of responsibility. "Society," in practical terms, means you. Waiting for an external solution is a dangerous deferral of responsibility.

The Path to Institutional Protection

The good news is that every school can and must develop its own AI governance immediately. You do not need Congressional consensus or tech company permission to establish sensible policies for how AI tools are used in your school.

The organizations that thrive in this environment will have vetted policies that clearly define approved use cases, data handling protocols, and, critically, escalation procedures for when a student's or organization’s well-being is at risk. For educational institutions, the Adam Raine case serves as a warning.

The Bottom Line

No one is coming to save us. This is the operating reality for managing an AI wave that is categorically more capable and persuasive than previous technologies. We all acknowledge AI's benefits, but only a few of us are willing to acknowledge and plan for AI's detrimental effects.

Michie EdTech provides vetted AI IT policy and implementation support. Contact us.

If you or someone you know is experiencing a mental health crisis, please contact the 988 Suicide & Crisis Lifeline by calling or texting 988.
