This best-practice AI policy framework is aligned with CCN’s values: local focus, community trust, journalistic accuracy, and transparency.
1. Editorial Oversight Comes First
AI can assist, but it cannot replace editorial judgement. All AI-generated content must be reviewed and approved by a human editor before publication or broadcast. Final accountability lies with CCN’s editorial team.
2. Transparency with Audience
CCN commits to disclosing when AI tools have been used for content creation, research, transcript summarisation, or generating automated alerts.
Example: “This image was produced with assistance from generative AI and edited by our staff.”
3. No AI-Generated News Reporting
News reporting, especially stories involving the public interest, local issues, or breaking developments, must be journalist-led.
AI can support background research or structure, but quotes, facts, and context must be human-verified.
4. Guard Against Misinformation
AI outputs must never be used to generate or spread unverified claims. All AI-assisted content is subject to the same fact-checking standards as traditional reporting.
5. Protect Privacy and Identity
AI tools must not be used to create or manipulate images, voices, or text that could mislead the public about the identity or actions of a person, especially without consent.
6. Keep the Community in the Loop
CCN will regularly review and update its AI use in line with community expectations. If AI use expands, readers and listeners will be informed.
7. Secure and Ethical Use
Only trusted, compliant AI platforms will be used. At present, we use ChatGPT and approved derivatives.
Data privacy, copyright law, and journalistic ethics will guide all use of AI.
Internal staff training will support responsible use.
8. Collaboration and Content Protection
CCN will actively pursue mutual collaboration with AI companies and associated entities, including governments. We will not, however, surrender our copyright without consent.