Published: October 2025
In 2025, phishing isn’t just a security nuisance—it’s an existential threat for small businesses. And with artificial intelligence (AI) now being leveraged by cybercriminals to craft hyper-realistic lures, traditional cybersecurity defences simply aren’t enough. Businesses must adapt—not just with tools, but with people. That’s where the concept of a “human firewall” comes in.
Forget the days of poorly worded emails from foreign princes. AI-driven phishing has changed the game completely. Attackers now deploy tools like generative language models to produce error-free, highly contextualised phishing emails that mirror authentic business communication styles.
Deepfake voice calls, fake browser login pages generated on the fly, and even AI-synthesised Zoom impersonations are no longer the realm of science fiction. We’ve seen attackers simulate the tone of a CEO, use ChatGPT to mimic the email patterns of colleagues, and even generate domain-specific technical vocabulary to boost credibility.
Small and medium-sized businesses (SMBs) in New Zealand are facing a perfect storm of increased attack surface and limited internal security capability. Many lack dedicated cybersecurity staff. Others outsource IT to MSPs but still depend on staff making the right decisions at the point of click.
Worse, AI tools scrape LinkedIn, company bios, and press releases to personalise attacks. From fake “invoice overdue” notices targeting your accounts team to deepfake supplier calls, no department is immune.
Traditional security awareness training has often been dry and ineffective. To truly create a human firewall, training must be embedded in culture, policy, and daily operations. Here's how:
Executives and board members must model secure behaviour. If your CEO routinely approves finance changes by email, staff will follow suit—even if it violates policy. Establishing leadership accountability is key to enforcing behavioural standards that align with your technical controls.
Likewise, update incident response policies to reflect AI-era phishing realities. Include internal communication playbooks, executive impersonation escalation paths, and quick-turnaround investigation SOPs.
Of course, layered technical defences are still critical: advanced email filtering, multi-factor authentication, DMARC/SPF/DKIM email authentication, endpoint protection, and timely patching all raise the cost of an attack. But none of these will protect you if someone replies to an AI-generated email asking for bank details and sends $30,000 to the wrong place. People remain both the weakest and the strongest link.
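To make the "point of click" concrete, here is a minimal sketch of the kind of checks an email filter (or a trained human) applies before trusting a message: does the Reply-To domain match the From domain, is the sender domain one you actually deal with, and did the message pass DMARC? The trusted domain and the email content below are hypothetical examples, not real infrastructure.

```python
# Illustrative sketch only: flag common phishing indicators in a raw email.
# The trusted-domain list is a hypothetical placeholder.
from email import message_from_string
from email.utils import parseaddr

TRUSTED_DOMAINS = {"example.co.nz"}  # hypothetical internal domain


def phishing_indicators(raw_email: str) -> list[str]:
    """Return a list of human-readable warning flags for a raw email."""
    msg = message_from_string(raw_email)
    flags = []

    _, from_addr = parseaddr(msg.get("From", ""))
    _, reply_addr = parseaddr(msg.get("Reply-To", ""))
    from_domain = from_addr.rsplit("@", 1)[-1].lower()

    # A Reply-To pointing somewhere other than the sender's domain is a
    # classic sign of a payment-redirection lure.
    if reply_addr and reply_addr.rsplit("@", 1)[-1].lower() != from_domain:
        flags.append("Reply-To domain differs from From domain")

    if from_domain not in TRUSTED_DOMAINS:
        flags.append(f"Sender domain '{from_domain}' not on trusted list")

    # Receiving servers record DMARC results in Authentication-Results.
    auth = msg.get("Authentication-Results", "")
    if "dmarc=pass" not in auth.lower():
        flags.append("No DMARC pass recorded in Authentication-Results")

    return flags
```

A message like the fake "Jessica" email would trip at least the Reply-To and DMARC checks here, which is exactly the kind of signal staff should be trained to pause on. Tools catch the pattern; people still have to act on the warning.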
In April 2025, a local consultancy firm received an email from what appeared to be Jessica, the firm’s payroll officer, asking to update her bank account before the end-of-month payroll run. The email contained her correct signature block, tone, and even referenced a recent internal HR update.
But the message was fake. An attacker had scraped LinkedIn, modelled her writing style with AI, and even used ChatGPT to draft responses to the CFO's replies. Over $92,000 in salary payments were redirected before the fraud was discovered.
The lesson? No matter how advanced your technology, losses will occur unless staff are trained to be sceptical and to verify requests, especially those involving money, through a separate, known-good channel.
AI is a powerful tool—for both sides. You won’t stop every threat, but you can make your business a much harder target. The Human Firewall is no longer a luxury—it’s a necessity.
Don't wait until you’re on the front page of the Taranaki Daily News or explaining a breach to your board. Start building your human firewall today.