Please note: Over the next couple of months, we’ll be looking at influential trends to consider when planning training programs for 2026.
A post by Sanjay Nasta, CEO of Microassist, Inc.
Synthesia’s research, based on a late 2025 survey of 421 L&D professionals, found that eighty-seven percent said their teams already use artificial intelligence (AI).
Most reported AI use centers on production tasks: voice generation, content and assessment drafting, video creation, and translation. Eighty-four percent cited faster production as the primary benefit. AI is being used to do what Learning and Development already does, just faster.
What did not keep pace was governance. Policies, data controls, and quality review workflows were slow to change. As individuals experiment with personal AI accounts and unsanctioned apps, shadow AI is spreading faster than governance can keep up.
Three Risks That Became Visible in 2025
As AI moved from curiosity to production use, three governance risks stood out, all of which affect Learning and Development teams:
Inaccurate or noncompliant content. AI-generated material often misrepresents regulations or policies, creating legal and operational risk.
Data and IP exposure. Pasting proprietary code, customer data, or internal documents into public AI tools can violate contracts and create long-term exposure that is difficult to unwind.
Quality drift and accountability. Without a clear review standard, AI can accelerate production while quality grows inconsistent and no one is clearly accountable for the output.
Why the Governance Gap Isn’t Closing Quickly
Organizations are creating governance for AI use, including use by Learning and Development teams. But it takes time to develop enterprise-level AI contracts with strong data controls, and they can be expensive. Shadow AI persists even as policies emerge. Learning and Development teams continue to use personal accounts and unsanctioned tools when official options feel too slow, too limited, or too locked down.
What Leading Organizations Are Starting to Do
A small number of organizations began establishing practical governance in 2025. The best policies make the safe path the easy path. Typically, they include three layers of control:
Enterprise contracts with clear data controls. Organizations work only with vendors that document how data is used, state clearly what is and is not used for model training, and offer options to keep customer data from training shared models.
Expert review. Organizations make human oversight mandatory for regulated or public-facing content, with documentation of who reviewed what and which standards were applied (a sketch of such a record follows this list). This is especially important for compliance training, policy guidance, and any content tied to safety, benefits, or employee relations.
API-mediated usage. Organizations route AI activity through systems that enforce data retention rules, access controls, and logging, rather than allowing direct use of public interfaces (see the gateway sketch below). This improves visibility and reduces the risk of sensitive inputs landing in the wrong place.
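The "documentation" in the expert-review layer can be as lightweight as a structured record. Here is a minimal sketch in Python of what such a review record might capture; the field names and values are illustrative assumptions, not a prescribed schema.

```python
# A minimal sketch of an AI-content review record. Field names are
# illustrative assumptions, not a prescribed schema.
from dataclasses import dataclass, field
from datetime import date

@dataclass
class ContentReview:
    content_id: str                 # course or module identifier
    reviewer: str                   # a named human expert, not a team alias
    review_date: date
    standards_applied: list[str] = field(default_factory=list)
    approved: bool = False
    notes: str = ""

# Example: a record for a hypothetical compliance module.
record = ContentReview(
    content_id="compliance-privacy-2026-v3",
    reviewer="j.alvarez",
    review_date=date(2026, 1, 15),
    standards_applied=["HIPAA Privacy Rule", "internal style guide v4"],
    approved=True,
    notes="Corrected two AI-drafted statements about retention periods.",
)
```

Even this much makes it possible to answer, later, who approved a piece of content and against which standards.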
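And to make the third layer concrete, here is a minimal, hypothetical Python sketch of what API mediation can look like: one gateway function that checks the caller's role, redacts obvious sensitive patterns, and logs each request before forwarding it to an organization-approved endpoint. The endpoint URL, role names, redaction rules, and response format are all assumptions for illustration, not any specific product's API.

```python
# Hypothetical sketch of an API-mediated AI gateway. All names here
# (URL, roles, redaction rules) are illustrative assumptions.
import json
import logging
import re
import urllib.request

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("ai-gateway")

# Hypothetical organization-approved endpoint; replace with whatever
# your enterprise contract actually provisions.
GATEWAY_URL = "https://ai-gateway.example.internal/v1/complete"

# Crude redaction rules for illustration; a real deployment would
# lean on a proper data-loss-prevention service.
REDACTIONS = [
    (re.compile(r"\b\d{3}-\d{2}-\d{4}\b"), "[REDACTED-SSN]"),
    (re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"), "[REDACTED-EMAIL]"),
]

# Hypothetical roles approved for AI use under the organization's policy.
APPROVED_ROLES = {"instructional_designer", "lnd_manager"}


def mediated_completion(user: str, role: str, prompt: str) -> str:
    """Check access, redact, log, then forward the prompt."""
    if role not in APPROVED_ROLES:
        log.warning("denied: user=%s role=%s", user, role)
        raise PermissionError(f"role {role!r} is not approved for AI use")

    for pattern, replacement in REDACTIONS:
        prompt = pattern.sub(replacement, prompt)

    log.info("request: user=%s role=%s chars=%d", user, role, len(prompt))

    req = urllib.request.Request(
        GATEWAY_URL,
        data=json.dumps({"prompt": prompt}).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        # Assumes the gateway returns JSON with a "text" field.
        return json.load(resp)["text"]
```

The specifics matter less than the pattern: every request passes through one choke point where policy can be enforced and activity can be audited.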
These policies point the way for effective governance of AI use in organizations. But progress is uneven, and governance in 2026 will likely still be reactive. Organizations will be fixing issues after they surface rather than preventing them systematically.
The Cost of Waiting
Organizations that defer governance work in 2026 are not saving time or reducing risk. They are building governance debt. That debt manifests as bad data habits, inconsistent review practices, and a culture where shadow AI becomes the norm. Once that happens, implementing effective AI policy requires changing established behavior, and that is far harder than building guardrails early.
The organizations that will succeed are the ones building foundations: establishing clear data rules, aligning vendor contracts with governance standards, creating review protocols that do not bottleneck production, and reducing the incentive for workarounds by making approved tools genuinely helpful.
While some factors may be outside the purview of Learning and Development, L&D teams can still drive strong AI governance by modeling responsible use, participating in policy development, operationalizing enterprise policies, and training employees in approved practices.
Insight from Everywhere
Following this issue’s discussion on AI, here’s Pete Pachal on 10 ways he uses AI to be a better journalist. (I’ve been putting the scheduled-task approach he mentions under “beat monitoring” to use, with pleasing results.)
“So bubble or not, that shift is already here. To better articulate this and highlight tools and techniques that are broadly useful, I’ve broken down several ways I use AI in my writing, researching, and reporting.”
I found the detail and approach in Kylie Earnhardt’s discussion of Pacejam’s new branding inspiring. How much do these details matter when developing a conceptually consistent training program?
“Pacejam, because it was not a client project or pre-existing idea, had no visual identity to begin with.”
A very, very deep dive into the biggest creative trends of 2025 from Benjamin Hiorns. It’s that time of year when looking back and looking forward delivers all kinds of new insights. (And I might note that this article was found through my new AI scheduled task.)
“To help us catch our collective breath a little before the festive season truly kicks into gear, I’ll be distilling the key design trends that shaped the creative industries around the world in 2025, drawing on industry research and the insights of leading creative voices”
Because I’ve long found interactive fiction a profound source of inspiration for effective techniques in learning design and development, it’s worth noting that Microsoft has (finally) made Zork I, II, and III officially open source.
“‘Rather than creating new repositories, we’re contributing directly to history. In collaboration with Jason Scott, the well-known digital archivist of Internet Archive fame, we have officially submitted upstream pull requests to the historical source repositories of Zork I, Zork II, and Zork III.’”
Do you use exclamation points in your training or personal messages? (I resisted ending that sentence in the obvious way, although I might add that an interrobang would likely have been more appropriate.) Josie Cox explores “The High-Stakes Politics of Exclamation Points.”
“In short, exclamation points matter. They spark surprisingly strong feelings about tone, intention, and even etiquette. But according to new research, they also shape much more than just mood.”
Video of the Month

For this month’s video, Kim Bahr shares thoughts about onboarding overload and ways to minimize it.
Tips and Tricks
from Kim Bahr, Microassist Senior Instructional Designer.
Custom Shapes, Part 1: PowerPoint
It’s not uncommon, when developing a presentation or eLearning, to want a custom shape that isn’t listed as an option in your authoring tool. PowerPoint offers a few shape-merging options.
To create a custom shape:
- Add two or more shapes to a slide.
- Select the shapes you want to merge.
- On the Shape Format ribbon, select the desired merge option from the Merge Shapes dropdown.

In the examples below, notice that the front shape (the orange circle) determines the attributes of the custom shape.
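If you ever need to apply the same merge across many slides, the merge commands are also scriptable through PowerPoint’s COM object model. Below is a minimal Python sketch assuming Windows, an open presentation, and the pywin32 package; the shape names are examples, and the numeric merge-command values are assumed to match the Office MsoMergeCmd enumeration.

```python
# Minimal sketch: merging shapes via PowerPoint's COM object model.
# Assumes Windows, PowerPoint running with a presentation open, and
# pywin32 installed (pip install pywin32). Shape names are examples.
import win32com.client

# MsoMergeCmd values, assumed from the Office VBA enumeration.
MSO_MERGE_UNION = 1
MSO_MERGE_COMBINE = 2
MSO_MERGE_INTERSECT = 3
MSO_MERGE_SUBTRACT = 4
MSO_MERGE_FRAGMENT = 5

app = win32com.client.Dispatch("PowerPoint.Application")
slide = app.ActivePresentation.Slides(1)

# Grab two shapes by name (check the Selection pane for your names),
# then union them. The optional second argument to MergeShapes names
# the shape whose formatting the merged result keeps.
shapes = slide.Shapes.Range(["Oval 3", "Rectangle 4"])
shapes.MergeShapes(MSO_MERGE_UNION, slide.Shapes("Oval 3"))
```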

Before You Go…
Share the Knowledge: Know someone who would enjoy this newsletter? Feel free to forward it along.
Get Your Own Copy: If this was shared with you, you can subscribe to the Learning Dispatch to get the next issue directly in your inbox.
Start a Conversation: Have a question or a training project in mind? We’re here to help. Just reply to this email to get started.
Until next time,
Kevin
Contact our Learning Developers
Need to discuss developing e-learning? Creating curriculum for classroom training? Auditing and remediating e-learning for accessibility? Our learning developers would be glad to help.