OPINION: Artificial Intelligence Literacy Is a Shared Responsibility

At a conference earlier this year, I overheard two attendees chatting during the lunch break. One man introduced himself as working for a company focused on using artificial intelligence (AI) to improve HR workflows. The other incredulously asked him, “Are people really using that AI stuff?” 
 
Those two extremes illustrate conversations I have every day as a tech leader. On one hand, the room generally leans in when AI comes up in a meeting, where I once got only raised eyebrows. On the other hand, there’s a drastic gap between what’s possible and what’s happening, particularly at work, and particularly in the work we do in the human resources community.

The questions I hear most often tell me something important: We’re interested, we’re curious, and we’re mostly willing. But we’re not yet confident. 

And in HR, where our work shapes careers, company culture, and productivity, that confidence matters. 

Who “Owns” AI in an Organization?

Many of you are already using AI—whether you realize it or not. Résumé screeners. Engagement surveys with “predictive insights.” Learning platforms that recommend training paths. If you don’t know how those systems work—or what their blind spots are—you’re flying blind. 

So, that’s the tech team, right? Not necessarily. 

AI itself is a “technology,” but being AI literate is not a “technical” skill. So while your company’s technology team can set up secure interfaces or advise on the features and limitations of various language models (the AI systems behind chatbots and writing assistants), they won’t necessarily be the best people to tell you how such tools can enhance your work.

That said, as a technologist, I always remind my colleagues to use AI with care. Human resources operates at the unique intersection of talent management, legal challenges, finance, operations, and the personal factors that impact every employee. It is critical not only to adapt to modern technology, but also to recognize that nothing replaces critical thought in determining what needs human review.

The fine print for most generative AI platforms (tools that create new text, images, or other content) reminds us to double-check important data. And it makes sense: AI is kind of like a human… and like humans, it can make errors in judgment or analysis. 

AI Success Is a Team Effort 

In my opinion, it’s best to share the responsibility for AI success across roles:  

  • Technology leaders like me need to provide secure, ethical, and relevant tools, and translate the technical stuff into digestible processes you can use. 
  • HR leaders need to make AI upskilling part of professional development, aligning adoption with organizational values and goals. 
  • Every HR professional needs to get hands-on: try various types of prompting, explore use cases, and learn to validate AI outputs rather than accept them blindly. 

Think of it like workplace safety: Everyone has a role to play, and everyone benefits when we all know the basics. 

AI Beyond the “Chatbot” 

When I talk to HR professionals, I hear a common assumption: AI = chatbot. 

Yes, chatbots are part of the story—but they’re just the tip of the iceberg. AI in HR is already doing things like: 

  • Predicting which employees might be at risk of leaving. 
  • Mapping skills gaps so you can plan workforce development. 
  • Suggesting learning content personalized to each employee.
  • Tracking compliance automatically across complex regulations.
  • Optimizing onboarding processes and procedures. 

Traditionally, AI waits for you to ask a question and then provides a response. But a new wave is coming: AI that can take action on its own.

Instead of asking it to write a job posting, you might have an AI tool that writes it, posts it to the right sites, screens the first wave of applicants, schedules interviews for qualified candidates, and updates your dashboard—without you touching a single form. 

That’s a huge leap forward in productivity, and it also means we need to understand the decisions AI is making along the way. Who’s accountable if it filters out great candidates, or if it shows some sort of bias? How do we make sure it understands and aligns with our organizational goals? 

Get to the Gym: Building the AI Muscle 

The difference between good results and bad (or even risky) results often comes down to—yes—AI literacy. You don’t need a massive training program to start. Here are a few steps you can take this month: 

  1. Take a crash course. Use microlearning like HRCI’s AI for HR Professionals course to brush up on the basics, especially if you’re new to the concept. 
  2. Experiment in low-stakes scenarios. Use AI to draft meeting notes, summarize articles, or brainstorm icebreaker questions for an event. 
  3. Create “AI office hours.” Give your team a dedicated time to test prompts, share tips, and ask questions. Document wins, lessons learned, and potential use cases. 
  4. Partner with IT and your technical teams early. Involve tech teams before buying or rolling out new AI tools. 

This isn’t 2001: A Space Odyssey. At HRCI, we believe in “human + AI,” not “human vs. AI.” Teams that know how to guide, question, and refine what AI delivers will develop a distinct advantage in the years to come.

So get curious, get hands-on, and bring your colleagues along. Because the future is already here.


Chris Scandlen is HRCI’s CIO, driving AI, analytics, and technology strategy to transform customer and community experiences. A former global leader at Laureate Education and PwC, he has built groundbreaking AI and data products, led high-performing tech teams, and co-founded a successful startup. 
