Whenever we are asked to write about trends in web content management systems (CMS) or Digital Experience Platforms (DXP), the first thing we at CMS MAG have highlighted in recent years is artificial intelligence, simply because it is practically all anyone is talking about.
Although not everything about AI is positive (for instance, the well-known “hallucinations”), it does have tremendous appeal and offers countless applications, thanks to its massive processing capacity and the fact that anyone can access this new technology using only natural language. On top of that, when OpenAI made ChatGPT available to everyone at no cost a few years ago, it led to a historic and rapid adoption of the technology.
CMSs have not stayed on the sidelines. In fact, thanks to OpenAI’s API pricing and the ease of obtaining responses to prompts from almost anywhere, even more modest CMSs have gained a certain level of sophistication in this area. However, there’s a big difference between simply adopting AI and doing it the right way. Below, we’ll explore this topic in detail.
1. Use AI but keep data secure
A CMS must select AI companies or Large Language Models (LLMs) that guarantee data privacy and will not exploit your usage to train their models without authorization. Any company or LLM that does not fully respect copyright should be ruled out automatically. One key reason: several big tech companies are already paying massive sums to access this kind of data, so you should make sure you receive fair compensation for yours.
2. Offer a broad range of options
A CMS shouldn’t rely on just one provider; it needs to offer a variety of LLMs so the newsroom can choose whichever is best at any given time, provides higher quality, or is simply more convenient. This approach also protects the newsroom against any single provider’s downtime or problems, since there will always be another option readily available. Switching providers for economic or commercial reasons might also be advantageous.
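As a sketch of what this provider independence might look like under the hood, the following Python outline routes requests to the newsroom's preferred LLM and falls back to the next one on failure. The class names and the `complete()` interface are illustrative assumptions, not a real CMS or vendor API:

```python
# Hypothetical provider-agnostic LLM layer for a CMS.
# EchoProvider stands in for real vendors so the sketch runs without API keys.

class LLMProvider:
    """Minimal interface every configured provider must implement."""
    def complete(self, prompt: str) -> str:
        raise NotImplementedError

class EchoProvider(LLMProvider):
    """Stand-in provider that simply echoes the prompt back."""
    def __init__(self, name: str):
        self.name = name

    def complete(self, prompt: str) -> str:
        return f"[{self.name}] {prompt}"

class LLMRouter:
    """Tries providers in the newsroom's order of preference,
    falling back to the next one if a provider fails."""
    def __init__(self, providers: list[LLMProvider]):
        self.providers = providers

    def complete(self, prompt: str) -> str:
        last_error = None
        for provider in self.providers:
            try:
                return provider.complete(prompt)
            except Exception as err:  # provider down, quota hit, etc.
                last_error = err
        raise RuntimeError("all providers failed") from last_error

router = LLMRouter([EchoProvider("provider-a"), EchoProvider("provider-b")])
print(router.complete("Summarize this article"))
```

Swapping or reordering providers then becomes a configuration change rather than a code change, which is exactly what protects the newsroom from a single vendor's downtime or pricing.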
3. AI must be well-integrated and user-friendly
First, AI should be seamlessly integrated into the CMS so that editors and journalists do not run into usability issues. Additionally, access to the AI's capabilities must be governed by the CMS's system of roles and permissions.
4. AI should work as a “copilot”
- Copilot integration: Once the AI is usable, it should function as a copilot. That means it should never interfere with the CMS’s overall operations—such as the editorial workflow—and using it should require an action from the user (for example, cutting and pasting text). This helps prevent prompts from accidentally being published and going viral as a journalistic blunder.
- No autonomous publishing: AI should not be allowed to publish content on its own. News organizations can face legal and reputational risks—for instance, defamation, invasion of privacy, or child protection issues—when even a single poorly chosen word can lead to major legal or public-image crises. This is not to mention potential errors and hallucinations.
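The two points above amount to a human-in-the-loop gate, which can be sketched in a few lines of Python. The status values, role names, and function signatures below are assumptions for illustration, not any particular CMS's API:

```python
# Sketch: AI output always enters the workflow as a draft; only a human
# with an authorized role can move it to "published".

ROLES_CAN_PUBLISH = {"editor", "chief-editor"}

def submit_ai_draft(store: dict, article_id: str, text: str) -> None:
    """AI-generated text is stored as a draft, never pushed live."""
    store[article_id] = {"body": text, "status": "draft", "source": "ai"}

def publish(store: dict, article_id: str, user_role: str) -> bool:
    """Publishing requires an explicit human action with permission."""
    if user_role not in ROLES_CAN_PUBLISH:
        return False
    store[article_id]["status"] = "published"
    return True

store = {}
submit_ai_draft(store, "a1", "AI-suggested summary ...")
assert store["a1"]["status"] == "draft"  # never live on its own
publish(store, "a1", "editor")           # human editor approves
```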
5. Pricing
Ideally, the AI service should be included in the overall CMS fee so the newsroom can use it freely without any restrictions. Having multiple LLM options can also lead to cost savings. If AI usage is billed separately from the CMS, there should be a dashboard to view usage statistics and perhaps an alert system when nearing certain limits—if such limits exist.
6. Flexibility
If prompts are used to perform actions through APIs, these prompts should be easily accessible and customizable by the digital newsroom, typically in the CMS’s configuration section. This area might also allow editors to choose their preferred LLM and adjust other parameters.
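As an illustration, such a configuration section might expose the prompt templates and the preferred model roughly as follows. All keys, template texts, and the model name are hypothetical placeholders:

```python
# Sketch of a newsroom-editable AI configuration block and a helper
# that fills a chosen template with article text.

AI_CONFIG = {
    "default_model": "model-of-choice",  # newsroom-selectable LLM
    "temperature": 0.3,                  # other adjustable parameters
    "prompts": {
        "headline": "Suggest three headlines for this article:\n{body}",
        "summary": "Summarize in two sentences, neutral tone:\n{body}",
    },
}

def build_prompt(task: str, body: str) -> str:
    """Fill the newsroom-edited template for a given task."""
    return AI_CONFIG["prompts"][task].format(body=body)

print(build_prompt("summary", "Full article text ..."))
```

Because the templates live in configuration rather than code, editors can refine the wording of a prompt without waiting for a vendor release.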
7. Legal aspects
- European vs. Non-European LLMs: Some AI models are European, but many are not. European legislation is widely regarded as leading the way on AI regulation, and any LLM operating in Europe must, of course, comply with the law. Stay vigilant for unethical or dishonest companies; if discovered, they should be reported and dropped immediately. Extra caution is advised when using AI providers from countries that are not fully democratic.
- Copyright disputes: Keep in mind that some AI service providers are involved in ongoing court disputes over alleged copyright infringements.
- Respecting editors’ wishes: Certain providers also ignore publishers’ requests not to use their content for AI training—requests often stated in the robots.txt file or in a site’s usage policy or legal notice. Editors must keep an eye out to ensure big tech doesn’t overreach, which, unfortunately, happens more frequently than one would hope.
- Rapidly evolving legislation: Laws are evolving at a remarkable pace, and different countries have their own regulations around AI. What’s legal today may not be legal tomorrow and vice versa. CMSs must, in some way, anticipate future changes—for example, by marking AI-generated content as such in the backend.
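For reference, the robots.txt opt-outs mentioned in this section typically list the AI crawlers' publicly documented user-agent tokens, for example:

```
# Opt out of some well-known AI training crawlers
User-agent: GPTBot
Disallow: /

User-agent: Google-Extended
Disallow: /

User-agent: CCBot
Disallow: /
```

Note that robots.txt is a request, not an enforcement mechanism, which is precisely why publishers need to watch whether it is honored.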
8. Sustainability
Although this is improving day by day, AI can still be very energy-intensive, resulting in a substantial environmental footprint. When implementing AI, consider using it where it truly adds value as a journalist’s copilot, boosting efficiency or quality in a meaningful way. Otherwise, it might be better to use more traditional, non-AI tools that still yield acceptable results.
Finally, keep in mind that your CMS provider should have enough capacity to offer you its own AI or GPT-based system if you need it.