Avoiding hype and binary AI mindsets
AI is all the rage right now – there’s an unspoken reason for that

Generative AI isn’t going to change everything, but it will change an awful lot. Break free from the binary and embrace a balanced approach to AI’s potential
AI is as transformative as it is disruptive.
You’d be forgiven for thinking AI is the second coming right now, given the slew of smart cookies waxing lyrical about the potential dangers of AI – or, more specifically, generative AI: the field of AI that enables people (you, or anyone) to create new content, including audio, code, images, text, simulations and videos, from text instructions known as prompts.
I say specifically because generative AI is just the tip of a very large iceberg. With an iceberg, the danger usually lurks underneath; here the danger is plain for all to see, while the opportunities get lost or receive little play in the media. Too often only a binary position is on offer, when the reality is much more complex. Generative AI will disrupt a lot of jobs, yes, but it will also create new jobs and empower people to learn new skills.
Generative AI is just the tip of a very large iceberg.
While generative AI tools like OpenAI’s ChatGPT and Google’s Bard are impressive, they are still dumb response units that just happen to be learning at breakneck speed. Will these tools become super-intelligent next week? No, they will not. In our lifetime? Almost certainly. The trouble is the tools aren’t fully formed.
ChatGPT is generally more accurate at generating text similar to the text it was trained on; Bard is more accurate at answering questions and generating text relevant to the topic at hand. Both are racing to be the de facto tool people turn to – to the point that ChatGPT has almost reached eponym status for using AI at all.
There is no official training or test before you use any of the tools, just a quick tour of the features and, blamo, the world is your oyster. Now add a bit of training, and people can write powerful prompts that get smart stuff out – which worries a lot of people in various places about being replaced, when they should see upskilling as the logical step.


From Congressional hearings to Yuval Noah Harari speculating about the future of humanity, there are a lot of cautionary tales – and rightly so. AI is as transformative as it is disruptive, and we’re still on the starting blocks; the starting gun is about to fire. Listening to the ten-cent-ers and doomsayers, you’d be forgiven for thinking that the tools we have can already tie their shoelaces and cure cancer, when the opposite is true (although AI is very good at detecting cancer).
AI is not evil by intent; it’s a technology we have created and are choosing to wield in various ways and guises. What is being confused with it is AGI, or artificial general intelligence – AI that can do anything we humans or animals can.
The argument then becomes what it ‘thinks’ of us and the systems we put it in control of. More pressing is whether we have the cognitive ability to foresee issues beforehand. Right now, no one on either side is really spending time on that element – although Google does have a privacy-first stance.
So why is everything all abuzz? Well, a few reasons, but the main one is that spinning things up fast – creating a sense of urgency and fear of missing out – makes certain people lots of money. Secondly, there is a lack of understanding, and the lust to glom onto anything that suggests a quick buck can be made will always, sadly, grab attention. The focus on fast money and disruptive elements isn’t helping people get a clear perspective on the technology.
So how do you make sure you avoid the hype and get the signal? There are a few tips that can help – and full disclosure upfront, I write an AI newsletter, so I get to see (and avoid!) all the hot takes and puff pieces. Here are my tips.
1. Find the right ‘experts’

Know that taking an online course isn’t enough, and any ‘simple’ guide is unlikely to bring much enlightenment. You therefore need to create a learning strategy that works for you and your needs. Step one is to write down the questions you want answered, then ask your network who they trust. When you hear the same name mentioned a few times, you have found the person who can probably help you – people call this the ‘expert staircase’. You might need more than one. Equally, read around the subject using tools like Flipboard with expert curators. If you head into super-techie-ville, step back and reassess whether you are getting the understanding you need.
2. Take a breath

You’re going to read a lot about potential outcomes, thoughts, conjecture and possibilities. Just keep asking and reminding yourself: “What can the tool or product actually do now, and does it help me achieve the goals I need to?”
3. Create policies

Adopt a nuanced view of AI that acknowledges both its potential benefits and its risks. There will always be at least two ways of looking at something when it comes to AI. For every positive, there will be a way that someone will be able to abuse the tool. Creating policies will guide you and others towards the best route that fits all your and society’s goals.
4. Get stuck in
AI isn’t something to shy away from or leave to others. If we let the technology centralise, it will not help as many people as it could.

One thing to think about right away is creating a policy for staff so they know how to use the tools and what is expected of them from day one. Lots of companies have blocked access; I prefer an enable-before-disable approach for the variety of businesses I work with, but in some cases a block can be a smart move. If you have IP, it’s worth protecting.
Understanding why you need to pay attention – but not panic – is easy when you look at the huge-sounding numbers that turn out to be minuscule with some context. Based on publicly available numbers that Google and OpenAI have provided to date, fewer than 3% of the world’s population is likely to have used either tool yet (let alone returned to or regularly used either). If you were to follow the media tsunami, you’d be forgiven for expecting that number to be 50% or higher.
The numbers are low at the moment, meaning you haven’t missed out, and there’s no need to dive in without armbands. AI will be a disruptive and transformative force that will undoubtedly cause issues for people, but that doesn’t mean you can’t avoid being affected. Losing a binary AI mindset is essential if we want to reap the benefits of AI while mitigating its risks.
You might also like: Have we created an AI monster?
What’s so good about this?
AI is just another change… but it’s an important one. The ability to apply AI sensibly matters, as does getting the right information without the hype so you can make the right decisions for the world. Something we can all get behind.
Paul Armstrong has always helped people understand technological change. He recently launched the comprehensive ‘What Did OpenAI Do This Week?’ – a weekly, easy-to-read rundown of everything that OpenAI (and the larger industry) does, because they are the company pushing boundaries and because of the David vs Goliath narrative that is forming. It’s $15 a month or $99 for the year; TOPIA readers can get 50% off the first year here.

Meet the writer
Disruption-lover Paul Armstrong is a leading expert on the future of technology and innovation. He runs the emerging technology advisory HERE/FORTH. His best-selling book, Disruptive Technologies, offers organisations a distinct response to emerging technologies including AI, 3D printing and blockchain, and was recently updated to include web3 and metaverse technologies. He is on the board of Global Tech Advocates, an Ambassador for Meaningful Business, and runs the TBD Group and TBD Conference.