
I have 400 words to talk about AI, which is a challenge even for ChatGPT and its nicely presented lists. But I can write about this for the next four columns, which should give me time to introduce you to some of the key positive and negative consequences. The pressure is on, with so many people believing they will become AI millionaires by promising to solve our writing challenges and make our lives easier.
With new things, we tend to divide into two camps: the unbelievers and the early adopters. Both need to exist, because together they balance risk against protection and containment, so that AI drives positive societal impact held in place by the ethics of care.
AI is a tool that needs electricity to function, and like any such tool it should be handled with care. The ease of human-like text from ChatGPT, and of Gemini built into everyday computer use, is very tempting. Asking ChatGPT to write a letter to explain a problem, or to sort out a request or a complaint, seems an easy two-second fix on a very busy day, but is it? Who is the author? Who owns the intellectual property (IP)? How much data have you fed into it?
Before we all begin to sound like ChatGPT, let's consider how well you understand its power. I have not used ChatGPT for this!
Do you and your team know what AI can do? Do you have a policy that addresses how people are using it at work? Does it include action to protect you from data misuse, phishing, deepfake technologies and cyber-attacks? If you are thinking of using AI-powered tools such as chatbots, have you considered the impact on human interaction, which is essential for building trust and empathy?
Did you know the Department for Education (DfE) has £1 million to build AI tools for teachers, and that TeachScribe is currently developing one to help early years educators save time? Interesting, but who is the author, and who owns the data and the IP?
We definitely need to get to know the world of AI so we can make intelligent decisions.