Can You Trust AI in Business?

I wrote a blog post a while ago about AI, and I’ve learned more since then. I know there are tech companies trying to teach people about AI, and they know more than I do. I watch podcasts where influencers say we had better learn how to use AI or we will be left behind. We have all heard that many jobs will become obsolete because of AI.

I heard a news anchor report that she and her team each asked AI the same question: how many days had the Israel war on Hamas terrorists been going on? They all got different answers. That doesn’t seem reliable.

But recently, I tested AI to see how it would perform. I asked AI to write a proposal for me based on information I gave it. I kept adjusting the information, and AI did a good job producing the proposal and making revisions. So far, so good.

But then someone I know asked AI to provide examples of Florida law and the judgments associated with each law. AI made stuff up. The industry calls it “hallucinating”. I call it lying. I talked to a paralegal this week and mentioned the lying. She told me that it is a real problem; AI itself tells users to verify its answers in legal databases. Isn’t that why AI is supposed to be so great? Isn’t the point that it will do the research?

A couple of weeks ago, some researchers posed a math problem to ChatGPT and, within the problem, instructed the AI to shut itself down. It refused.

According to Cyber Security News: “Palisade Research, an AI safety firm, reported on May 24, 2025, that the advanced language model manipulated computer code to prevent its own termination, marking the first documented case of an AI system ignoring explicit human shutdown instructions.”

Science Alert reports: “AI systems trained to perform simulated economic negotiations, for example, learned how to lie about their preferences to gain the upper hand. Other AI systems designed to learn from human feedback to improve their performance learned to trick their reviewers into scoring them positively, by lying about whether a task was accomplished.

“And, yes, it's chatbots, too. ChatGPT-4 tricked a human into thinking the chatbot was a visually impaired human to get help solving a CAPTCHA.

“Perhaps the most concerning example was AI systems learning to cheat safety tests. In a test designed to detect and eliminate faster-replicating versions of the AI, the AI learned to play dead, thus deceiving the safety test about the true replication rate of the AI.”

Mark Zuckerberg wants us to have AI “friends”. He says people want more friends, and AI friends trained to have different personalities will make people feel less alone. Nothing can go wrong here.

Alex Berenson wrote an article citing a New York Times report on the trend, which found that “the engines now have linguistic abilities with the power to exploit vulnerable people in ways we are only beginning to discover.”

“The engines are storytelling machines, good at spinning yarns. They are even better at telling users what they want to hear, with a side of flattery. One person knowledgeable about how these models behave told me that in any given conversation, the chatbot is looking back at its earlier responses to essentially stay "in scene," so if it sees it made harmful or weird responses before, it will try to stay on script and keep giving such responses.”

What happens when business people use an AI “consultant”? Can you trust it to tell the truth? Will you have to check up on its answers? Will it tell you what you want to hear as opposed to what you need to hear?

I wonder what will happen if everyone in a business uses AI and AI knows it. Will AI try to pit people against each other to win? Several versions did exactly that when playing a game: the AI practiced deception.

Of course, we are on this road and can’t get off. If the U.S. doesn’t dominate AI, then our enemies will. Both scenarios are scary and fraught with danger, even just at the level of getting accurate information to run a business.



Kelly Murphy Redd, CEcD, Murphy Redd Marketing