AI use in US workplaces has doubled in two years (so has trouble)

Gallup recently published a survey on AI use in the workplace:

The use of AI at work is accelerating. In the past two years, the percentage of U.S. employees who say they have used AI in their role a few times a year or more has nearly doubled, from 21% to 40%. Frequent AI use (a few times a week or more) has also nearly doubled, from 11% to 19% since Gallup’s first measure in 2023. Daily use has doubled in the past 12 months alone, from 4% to 8%.

Meanwhile, guidance and governance remain open questions in most organizations.

We talk a lot about using AI as an organization and as consultants to businesses, education and nonprofits, and we always do so from a "responsible use" perspective. In reading and exploring AI use (and abuse) just this week, I've encountered some incredibly important reasons why policies and playbooks (guidelines and guardrails, AKA governance) are more important than ever.

The genie is out of the lamp

From Gallup:

Many employees are using AI at work without guardrails or guidance. While 44% of employees say their organization has begun integrating AI, only 22% say their organization has communicated a clear plan or strategy for doing so. Similarly, 30% of employees say their organization has either general guidelines or formal policies for using AI at work.

Why would we integrate an incredibly powerful technology without a strategy for doing so?

I tell people all the time, I’m a huge proponent of AI in the right hands, with the proper foundations. And I study AI every single day. The “innocence?” / “hubris?” / “drive?” to build AI into work without clearly understanding how or why simply baffles me.

Just this morning, I read a piece by Google's first Chief Decision Scientist, Cassie Kozyrkov, in which she described AI adoption this way:

AI is a magic lamp with a genie inside.

  • The genie is the model. It’s powerful, impressive, doesn’t always do what you hoped it would—but it definitely does something.

  • The lamp is the control layer: the structure around the system. Guardrails, constraints. The thing keeping the genie on good behavior.

  • And then there’s you—the wisher. The one holding the lamp, deciding what to ask, and bracing for the consequences.

Why not just let people wish away?

In my last post, I used schools and education technology providers as two sides of a growing AI equation, showing how understanding privacy and risk management together can be both difficult and essential to protecting the communities each serves. (Yes, the post is really long, sorry about that… but there's a LOT to say!)

I’ve been in technology (and education) for decades, and I’ve worked with amazing, intelligent and incredible people in every organization. I believe the human beings working to improve education for students are a true driving force for good.

But there are criminal elements exploiting the human beings using the technology (and not just AI, by the way, but technologies we’ve been using for my entire career).

A few sobering statistics from the education space alone, from just the past 18 months:

  • The Center for Internet Security found that 82% of K-12 schools experienced at least one cyber incident between July 2023 and December 2024, with more than 9,300 confirmed incidents in that period

  • Ransomware attacks on K-12 schools increased by 92% in 2024, making education one of the most targeted sectors globally

  • The December 2024 PowerSchool breach exposed sensitive data belonging to millions of students and staff, including Social Security numbers, grades, and IEP details. Attackers used phishing and exploited third-party software vulnerabilities, remaining undetected for more than a week

    • Even after ransom payments, school districts continued to be extorted with stolen data from the PowerSchool breach. More than 100 school districts have filed lawsuits for negligence and breach of contracts

    • Exposed data led to thousands of identity fraud cases and targeted phishing attacks against parents and staff

These attacks led to forced school closures, disrupted meal programs, and blocked access to counseling and special education services, disproportionately affecting the most vulnerable students.

And schools don’t always have IT organizations to support them the way other businesses do. Which is why this final finding is so disturbing:

Exploiting our (very) human vulnerabilities - crushing trust

Criminals targeting K-12 schools focus more on exploiting human vulnerabilities—such as phishing, social engineering, and user error—than on technical flaws in IT systems.

These trends underscore the need for ongoing staff training, awareness programs, and layered security approaches (dare we say governance?) that address both human and technical risks.

If you need support in protecting your reputation through proper training, implementation and support around governance, or the management of students’ data, we are here to help. We can answer your questions about compliance in educational settings: as a group, we have supported more than 800 schools over more than 40 years in the education and technology spaces.

We need to handle AI on our terms… safely, responsibly and with an awareness of the risks and rewards of using technology in every organization. Including schools.

 

Resources from AIGG on your AI Journey

Is your organization ready to navigate the complexities of AI with confidence?

At AIGG, we understand that adopting AI isn’t just about the technology—it’s about doing so responsibly, ethically, and with a focus on protecting privacy. We’ve been through business transformations before, and we’re here to guide you every step of the way.

Whether you’re a government agency, school district, or business, our team of experts—including attorneys, anthropologists, data scientists, and business leaders—can help you craft Strategic AI Use Statements that align with your goals and values. We’ll also equip you with the knowledge and tools to build your TOS review playbooks, guidelines, and guardrails as you embrace AI.

Don’t leave your AI journey to chance.

Connect with us today for your free AI Tools Adoption Checklist, Legal and Operational Issues List, and HR Handbook policy. Or, schedule a bespoke workshop to ensure your organization makes AI work safely and advantageously for you.

Your next step is simple—reach out and start your journey towards safe, strategic AI adoption with AIGG.

Let’s invite AI in on our own terms.

Janet Johnson

Founding member, technologist, humanist who’s passionate about helping people understand and leverage technology for the greater good. What a great time to be alive!
