Don’t Blame the Bot: AI Fails Without the Fundamentals

  • Writer: Andrew Chamberlain
  • Jul 4
  • 3 min read

Yesterday, I had the privilege of facilitating an AI roundtable for members of the Royal Society of Chemistry. Our discussion quickly moved beyond the tools themselves to something more foundational: the realisation that, before any organisation can meaningfully adopt AI, it must first get the basics right.


Too many organisations are rushing to “implement AI” without ensuring the ground beneath them is stable. AI isn’t a shortcut to transformation; it’s an amplifier, and if what you amplify is dysfunction, the outcome is predictable.


Here are five first principles we discussed that must be in place before AI can be responsibly and effectively embedded across a business.


1. Skills First, Then AI

Don’t assume your team has the foundational skills for delivering good business, let alone for using AI effectively. Technical literacy, critical thinking, communication, data handling, and ethical reasoning are just as important as prompt-writing. Conduct a broad review of the team’s current skills, expertise, and knowledge, not just of AI, but of their core functions. Then invest accordingly.

Professional development should be a constant, not a one-off. This includes providing opportunities to build confidence in using AI responsibly, consistently, and in ways that align with your organisation’s mission.


2. Healthy Data, Healthy Output

AI is only as good as the data it’s trained on, or, in the case of most business use, the data it’s applied to. If your data is inconsistent, outdated, poorly structured, or incomplete, then your AI outputs will be flawed or misleading.


A data audit is a vital first step. Identify your key data sources. Assess accuracy, structure, permissions, and completeness. Make sure the inputs reflect the outcomes you want. Clean, clear, high-integrity data is the backbone of trustworthy AI use.
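Even a lightweight check can surface gaps before any AI tool touches your data. As a minimal sketch using only Python’s standard library (the field names and sample export below are hypothetical), here is one way to measure the completeness of key fields in a CSV export:

```python
import csv
import io

def audit_completeness(csv_text, required_fields):
    """Return the share of rows missing each required field."""
    rows = list(csv.DictReader(io.StringIO(csv_text)))
    report = {}
    for field in required_fields:
        # Treat empty strings and whitespace-only values as missing.
        missing = sum(1 for r in rows if not (r.get(field) or "").strip())
        report[field] = missing / len(rows) if rows else 1.0
    return report

# Hypothetical membership export with some gaps
sample = """name,email,joined
Ada,ada@example.org,2021
Grace,,2020
Alan,alan@example.org,
"""

print(audit_completeness(sample, ["name", "email", "joined"]))
```

A report like this won’t judge accuracy or permissions, but it turns “our data is probably fine” into a number you can act on.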


3. Speak the Same Language

For AI tools to be applied consistently, your organisation must use language consistently. This may sound trivial, but it’s one of the most overlooked issues. If one person says “sales” and means revenue, while another thinks “sales” refers only to income from products (not services), the AI will struggle to interpret or generate anything useful.


Agreeing on definitions, financial and otherwise, is a strategic imperative. Consistent language enables not just reliable AI outputs but better cross-functional communication, alignment, and accountability.


4. Respect and Protect Copyright, Privacy, and Intellectual Property (in both directions)

When it comes to AI, intellectual property rights, copyright, and data protection must be front and centre. Too often, organisations focus only on avoiding the misuse of others’ materials, overlooking the importance of protecting their own work and assets.


If you're using third-party content (articles, datasets, designs, or code) to train or prompt AI tools, you must ensure you have the rights to do so. Many free online tools ingest whatever they’re given without checking permissions. The liability still rests with you.


But just as important is protecting what you create. AI-generated outputs built on your original data, research, or creative content may still represent valuable intellectual property. You need to treat these outputs with the same level of care as anything produced by your team. That means registering, watermarking, or licensing your content where appropriate, and ensuring contracts and NDAs with third parties reflect your IP expectations.


Equally, any AI-generated material you publish should be properly attributed, especially if it draws on identifiable sources. This protects your credibility and helps avoid unintentional plagiarism.


The rule is simple: protect what’s yours, and respect what’s not.


5. Set the Rules: Policy, Protocol, and Governance

If everyone uses AI differently, chaos will follow. That's why an organisation-wide policy and clear protocols on responsible AI use are essential. They set the tone for what is acceptable, what is expected, and how to escalate concerns.


This isn’t just about style guides or compliance, it’s about creating a shared understanding of how AI supports your business goals, customer relationships, and brand integrity. AI should enhance your corporate voice, not dilute or fragment it. And this applies not only to content creation but to internal decision-making, ideation, and problem solving.


These principles are not about AI. They are about good business. AI simply exposes (sometimes brutally) where those basics are missing.


Before you start with AI, take a step back. Revisit your foundations. Get your skills, data, language, ethics, and policies in order. Then, and only then, will AI be able to deliver on its promise.


Let’s not get seduced by the shine of new tools. Let’s build on solid ground.


__________________________________________________


Do these issues resonate with you? Then they'll probably resonate with your membership. Let me know if you'd like me to facilitate an AI roundtable with your members so that they don't rush into using AI without solid foundations. 
