News

Data is the new uranium - incredibly powerful and amazingly dangerous

CISOs are quietly wishing they had less data, because the cost of management sometimes exceeds its value

"Data is so common it has become nearly invisible. Unless you're a CISO. For them, more data means more problems. Most security execs know they have pools of data all over the place, and that marketing departments have built massive data-gathering and analytics engines into all customer-facing systems, and acquire more data every day..."

Read it here.

In 2016, Tesla CEO Elon Musk instructed his team of engineers to 'hard code' the first demo of what would become 'Full Self Driving'. The faked video spread panic across the entire automotive sector, leading to massive (and mostly failed) investments in autonomous-driving technologies.

RNZ's Nine To Noon host Kathryn Ryan and I discuss biases in AI systems used to sift through resumes - systems that judge female names far more harshly than male names; how Mark Zuckerberg wants to put AI 'slop' into every Facebook, Instagram and Threads feed; and a new AI 'priest', CATHY, working for the Anglicans...
Listen here.

In my regular chat with RNZ's Nine To Noon host Kathryn Ryan, we discussed Meta's use of all public Facebook and Instagram posts - going back to 2007 - to train its AI models; how people believe an AI's kill-switch recommendations even when they're actually randomly generated; and how AI chatbots can help move people away from conspiracy theories.
Listen here.

AI has colonised our world - so it's time to learn the language of our new overlords

Brush up on your 'Delvish' - the lingo that flatters LLMs into a sort of submission

"As has been true all through history, the conquered are left with little choice but to learn the language of their conquerors – if for no other reason, so we can flatter them and maybe get our way...With this in mind, let me offer up a brief lesson in what is called 'Delvish' – the machine-inflected dialect of English that science fiction author Bruce Sterling recently named to illustrate the unnatural propensity for generative AI outputs to 'delve' into topics..."

Read it here.

AI stole my job and my work, and my boss didn't know or care

Everyone knows automation will happen, which is why everyone needs proof of human involvement.

"The gig I lost started as a happy and profitable relationship with COSMOS Magazine – Australia's rough analog of New Scientist. I wrote occasional features and a column that appeared every three weeks in the online edition. Everyone seemed happy with the arrangement: my editors, the readers, and myself. We'd found a groove that I believed would continue for years to come. It didn't..."

Read it here.

In my regular chat with RNZ's Nine To Noon host Paddy Gowan, we talk about a farmer whose milking system was subjected to a cyber attack, resulting in the death of a cow. I also discussed my personal connection to science mag COSMOS's decision to use AI-generated articles after it laid off its freelancers earlier in the year. And imagine a 'bug' that causes a user speaking with ChatGPT to be answered back in their OWN VOICE. Actually, don't imagine - it happened. Black Mirror, anyone?

Listen here.

Big Tech's eventual response to my LLM-crasher bug report was dire

Fixes have been made, it appears, but disclosure or discussion is invisible

"Found a bug? It turns out that reporting it with a story in The Register works remarkably well ... mostly. After publication of my "Kryptonite" article about a prompt that crashes many AI chatbots, I began to get a steady stream of emails from readers – many times the total of all reader emails I'd received in the previous decade. Those emails proved very interesting..."

Read it here.

In my regular chat with RNZ's Nine To Noon host Kathryn Ryan, we discuss problems that are cropping up in telling humans and AI apart. In one instance, writers were fired after being accused of using AI - when they hadn't! In a recent US study, participants struggled to tell who was human in a five-minute two-way text conversation with a GPT-4 model. And as the UK goes to the polls next week, 'AI Steve' is running in the seat of Brighton and Hove - while plans for a similar AI run for mayor in Wyoming have struck a legal hurdle.

Listen here.

If you’re one of the millions of Australians using Facebook or Instagram, tech giant Meta is using your data to train its artificial intelligence models, and you don’t have the ability to opt out.

Read it here.

Microsoft's Recall should be celebrated as the savior of SMEs and scourge of CEOs

Small businesses have seldom had the chance to understand how they work. A history of PC use makes it possible

"Last month, Microsoft dropped a tool that no one had expected or asked for – and which seemingly no one needs. That tool is Recall, the software that records everything done with a PC and regurgitates it...So much seemed weird about Recall that I never really saw the sense of it. My clear-eyed business partner saw through it immediately: "It's Robotic Process Automation.""

Read it here.

In my regular chat with RNZ's Nine To Noon host Kathryn Ryan, we discuss the LLM bug I found and tried to report, without success; Microsoft's new privacy-violating 'Recall' tool, which captures everything you do with your computer; and how Google's new 'AI Overviews' began telling search users that doctors recommend 'eating at least one rock a day'. Glue on a pizza, anyone?

Listen here.


No one wanted to fix this model-breaking bug

Neural nets with flaws can be harmless … yet dangerous. So why are reports of problems being roundly ignored?

"Imagine a brand new and nearly completely untested technology, capable of crashing at any moment under the slightest provocation without explanation – or even the ability to diagnose the problem. No self-respecting IT department would have anything to do with it, keeping it isolated from any core systems...What if, instead, the whole world embraced that untested and unstable tech, wiring it into billions of desktops, smartphones and other connected devices? You'd hope that as problems arose – a condition as natural as breathing – there'd be some way to deal with them, so that those poor IT departments would have someone to call when the skies began falling. I've learned differently..."

Read it here.

When AI helps you code, who owns the finished product?

It's not settled law. And it's going to mean trouble.

"In the age of AI, one thing the legal system has so far been unambiguously clear on revolves around the ownership of AI generated content: as it has not been created by a human being, it cannot be copyrighted. The AI doesn't own it, the AI's creators don't own it, and whoever prompted the AI to generate this content doesn't own it either. That code cannot have an owner. Who owns this code that I've written?"

Read it here.

In my regular chat with RNZ's Nine To Noon host Kathryn Ryan, we discuss the first known criminal case of a 'deepfake' being used to discredit a witness; Meta AI claiming to be the parent of a disabled child - in a chat group for parents of disabled children; how the new US Department of Homeland Security AI Oversight Board assigns the foxes to guard the henhouse; and what happens when you ask a chatbot to play-act as a Catholic priest.

Listen here.


The Project TV: Meet the nightmare robots taking over the world

Some commentary on 'humanoid' robots on Channel 10's The Project on 1 May 2024 - and why those robots really don't pose a threat to flesh-and-blood humans...

Devaluing content created by AI is lazy and ignores history

The answer is not to hide from ML, but be honest about it

It's taken less than eighteen months for human- and AI-generated media to become impossibly intermixed. Some find this utterly unconscionable, and refuse to have anything to do with any media that has any generative content within it. That ideological stance betrays a false hope: that this is a passing trend, an obsession with the latest new thing, and will pass. It won't.

Read it here.

Wisely AI has identified five key risks associated with the use of AI in organisations: anthropomorphising AI chatbots; malicious and commercially sensitive training data; hallucinations; privacy, data security and data sovereignty; and prompt attacks. In our 'De-Risking AI' white paper, we outline these risks and suggest mitigation techniques. It's part of Wisely AI's mission to 'help organisations use AI safely and wisely.'

Read or download the white paper here.

From a “zombie self” to the reshaping of cities and nanobots in your blood stream, here are nine ways things might be different 40 years from now.

ABC News 'What will life be like for Australians in 2064?': Interviews with a number of future-oriented thinkers - including myself - about what the world of 2064 holds for us.

Read it here.

Your PC can probably run inferencing just fine – so it's already an AI PC 

Language models are entirely happy on the desktop

What is it that makes a PC an AI PC? Beyond some vague hand-waving at the presence of "neural processing units" and other features only available on the latest-and-greatest silicon, no-one has come up with a definition beyond an attempt to market some FOMO.

Read it here.
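The claim that language models are "entirely happy on the desktop" is easy to sanity-check with back-of-envelope arithmetic. A minimal sketch, assuming roughly 0.5 bytes per weight for 4-bit quantization and about 20 percent runtime overhead for caches and buffers (both figures are my assumptions, not from the article):

```python
# Back-of-envelope check: can an ordinary PC hold a quantized LLM in memory?
# Assumptions (mine, not the article's): 4-bit quantization stores ~0.5 bytes
# per parameter, plus ~20% overhead for the KV cache and runtime buffers.

def model_memory_gb(params_billions: float, bits_per_weight: int = 4,
                    overhead: float = 0.2) -> float:
    """Rough resident-memory estimate (GiB) for running inference."""
    bytes_per_weight = bits_per_weight / 8
    weights_gb = params_billions * 1e9 * bytes_per_weight / 2**30
    return weights_gb * (1 + overhead)

for size in (7, 13, 70):
    print(f"{size}B model @ 4-bit: ~{model_memory_gb(size):.1f} GB")
```

On these assumptions, a 7B-parameter model needs roughly 4 GB of RAM - comfortably within an ordinary 16 GB desktop, which is the article's point; only the largest models push past what a typical PC can hold.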

Tech giant nVidia breaks Wall Street record after posting an enormous increase in profits

ABC News Radio 'The World Today' programme, Friday 23 February 2024 - Interview about why nVidia has suddenly become the third most valuable company in the world, and how it's literally printing money on silicon with its must-have AI chips.

Listen here.

As we grapple with 'sovereign AI', perhaps we should treat computational resources as finite and precious

COSMOS column 21 February 2024 - "While both commercial and political imperatives drive some of the explosive growth in 'foundation' large language models, more of it – much, much more – will be driven by an increasingly nuanced understanding of the value of these models...."

Read it here.

AI has a terrible energy problem. It’s about to hit crisis point

COSMOS column 14 February 2024 - "Unless something knocks us off this path, it’s reasonable to expect that by around 2030 there will be more than a billion people using AI day-to-day in their work, and perhaps another 3 or 4 billion using it..."

Read it here.

It's time we add friction to digital experiences and slow them down

Decades of obsessing about always going faster have left us in constant danger.

Read it here.

Microsoft's new subscription 'Copilot Pro' service - is it worth AUD $45 per person per month? Read our white paper to learn whether the business case makes sense for your organisation.

Read it here.

In my regular chat with RNZ's Nine To Noon host Kathryn Ryan, I talk about the huge advance in "spatial computing" introduced by Apple's Vision Pro. It comes with a hefty price tag - and will people really want to wear one?

Deepfakes keep getting better - just ask the Hong Kong employee whose company lost $40m to an alarmingly convincing scam. And what are the implications of Google no longer backing up the web?

Listen here.


Apple has botched 3D for decades, so good luck with the Vision Pro, Tim

It looks like a fine product, but it's the ecosystem that will determine success

Read it here.

It's uncertain where consumer technology is heading, but judging from CES, it smells

Our vulture spent a week in Las Vegas – here are his key takeaways

Read it here.

Second of two reports from CES 2024 for COSMOS

COSMOS column 11 January 2024 - "...the most interesting and innovative takeaways can reliably be found amongst the thousands of startups and tiny companies vying for the attentions of the 200,000+ CES attendees..."

Read it here.

First of two reports from CES 2024 for COSMOS

COSMOS column 7 January 2024 - "I’d come to CES this year expecting to see AI pretty much everywhere, integrated into pretty much everything, but I very quickly learned that this wouldn’t be the big story of 2024...."

Read it here.