A lot can happen in a year, but killer robots are old news.
The good news? DistillXRy is relaunching. So huzzah for that.

Aaron Sorkin famously ends the first season of many of his shows with an episode titled "What Kind of Day Has It Been," building up a strong conclusion to the show’s initial arcs while raising the stakes for the season to come.
So when I was getting ready to start writing DistillXRy (pronounced ‘distillery’) again, a part of me thought I should start by recapping the past year of advancements in XR technology (and in AI, its increasingly intimate bedmate).
But then I realized that was a bad idea. After all, trends are important and the future is important, but you know that already - and you’ve probably had an endless barrage of Top Trends articles thrown at you anyway.
But what we don’t often do is stop and just look at one single thing and talk about what it means about how far we have come. And that was the entire point of the original DistillXRy blog concept:
to break things down a piece at a time so we can look at them from all angles, and have some hope of understanding what they might mean.
As luck would have it, I see that my last post was about the 2023 Laval Virtual conference in France. I unfortunately could not return this year due to personal commitments, but it stands out as a real star of last year (except for its lousy WiFi, a problem gloomily repeated at most tech conferences). WiFi aside, I had a great time. And with good reason.
I love conferences. Love them, love them, love them. They are a bizarre ecosystem of show-and-tell, overstimulation, packaged talks pitched in front of half-attentive audiences, and networking breaks that flow between awkward introductions and cliquish catch-ups. They are completely weird, nothing else is quite like them, and sometimes you see something really cool.
Like a robotic dog. A dog that looks amazing as it cheerfully and dutifully prances about the small booth carpet, does tricks, and makes you think in the abstract of a future world where robots are as ubiquitous as smartphones - the original robots we invited into our pockets and let take over our lives.
Of course, I knew there was more to the story even before I saw an article tonight titled "China Shows Off Robot Dogs Armed with Machine Guns." The Agence France-Presse report covers the country’s “Golden Dragon” military exercise, conducted in conjunction with Cambodian troops over a 15-day period, with the weapon-enabled dogs included as part of the display.
But I still had to stop and sit a moment in wonder.
Robot dogs with machine guns on them, and it barely made the news.
This is not good.
-
I realize it did, in fact, make the news. I’m talking about it because Agence France-Presse told me about it, and they did that via the news.
But there’s the news, and then there’s the news. As I write this, the headline on CBS News is about missing Iranian President Ebrahim Raisi, whose helicopter has gone down in the northwestern part of the country. (Update: he is confirmed dead.) That is what we’d call ‘breaking news’.
There are also summaries of ‘the news’, such as Wikipedia’s often-helpful Current Events Portal, which summarizes any given month and year with a breakdown of some of its key happenings. You can find the May 2024 entry here, but as of publication, you won’t find any robotic dogs.
And then there’s what people will be talking about at work, which is largely going to split along lines of severe international or national tragedies, sports, entertainment gossip, and matters of local concern. (Sigh.)
That last category - local concern - is mercifully free of killer robots for the moment, and you won’t hear any complaints from me about this. But I wonder what would happen if I did go in tomorrow, sat down with some of my very smart and normal coworkers, and asked them how they felt about - for example - the fact that police officers have now used a robot to kill a suspect in the United States of America.
Even if it did not happen here, that still helps translate the killer robot concept into a topic that sits at the ‘local’ zoom level. And regardless of their reaction, I am sure most of them won’t know that I am actually hitting them with breaking news from eight years ago.
“We saw no other option but to use our bomb robot and place a device on its extension for it to detonate where the suspect was,” said Dallas Police Chief David Brown, back in July 2016, when this actually happened for the first time. The suspect in question turned out to be a US Army Reserve veteran, and he ultimately killed five police officers and injured a further 11 people using a combination of semi-automatic weapons. He also claimed to have planted explosive devices, and remained a constant threat to people on the scene. It was not a good situation.
But police are often thrown into not-good situations, and in the end the suspect was indeed killed not by the gun of some officer, but by a bomb delivered via a Mark V-A1 robot.
This isn’t limited to Dallas. San Francisco has since floated the idea of allowing for killer robots as well, though an understandable backlash threw the idea back to the committee level. So really (and a part of me cannot believe I am typing these words), we’ve been dealing with the killer robot problem for a while now, but nobody’s talking about it.
-
Just as I found it unfair to single out Dallas, I will note that the Chinese military is in good company where robots-from-hell are concerned. In fact, we are only nine days away from the three-year anniversary of an Axios article titled "The age of killer robots may have already begun."
A Janes report last year discussed the US Army’s experiments with weaponized Q-UGV robots; an Estonian company has debuted an AI-driven tank that can ‘breach enemy defensive positions with minimal risk’; and then there’s the matter of the Turkish STM Kargu-2 drone in Libya, which was “programmed to attack targets without requiring data connectivity between the operator and the munition: in effect, a true ‘fire, forget and find’ capability.”
It did what it was programmed to do, to the unfortunate plight of some fleeing human beings.
And why not? We’ve mostly been dealing with automatons all this time. They do what we ask them to do, from making our bread hot enough to be called ‘toast’, to taking our payments so we can park our cars in designated spots, to sitting on our wrists and keeping track of what time it is.
-
“It feels like AI from the movies.”
That’s one of the things OpenAI CEO Sam Altman had to say about the new voice and video interface for GPT-4o, announced just a week ago as I write this. Mr. Altman has also disbanded the internal team focused on AI risks; co-leader Jan Leike announced his resignation on the former Twitter, and followed it with a very strong series of comments saying that “safety culture and processes have taken a backseat to shiny products,” and that “We are long overdue in getting incredibly serious about the implications of AGI.”
For those not in the know, AGI, or artificial general intelligence, refers to a type of strong AI that can meet or beat human levels of performance across a wide spectrum of cognitive tasks. And Mr. Leike is correct, not only about OpenAI but about the entire human race.
It is long past time that we got incredibly serious about this. And yet when I point people to the very compelling Moritorium.AI, which posits that perhaps we should figure out how to control all of this before letting it frolic unchecked, they come back to me with resigned shrugs.
They know no one is going to stop developing AI despite the risks; they know I know that no one is going to stop; so they don’t see much point in taking up a cause in the name of it all.
They just hope that serious people somewhere are working on it, and everything will be fine and dandy. Even though they don’t think things are fine and dandy with the stuff we’ve been working on for hundreds or thousands of years, like politics, economics, and all that jazz.
-
Ten or fifteen years ago, I took a car ride to and from Boston with a coworker who was a devout Orthodox Jew, and along the ride we had a deeply open and interesting conversation about our particular views on matters including politics, economics, and all that jazz. We also discussed faith, and he said something that has stuck with me all these years, and that I thought about tonight as I wrote this blog.
“I think one question every person of faith should have to be able to answer is, ‘Why did God put us on this Earth? What was the point?’”
I’ll come back to that in a second. First, let’s take this as granted:
We’ve developed killer robots and already started deploying them. We are building artificial intelligence and are increasingly comfortable speaking with it and trusting in it, just as we have become comfortable talking to other people without actually talking to them.
Rex Woodbury’s blog Digital Native is always a must-read, and in his latest piece he talks about some of the implications of AI as it relates to human communication. He leads off with a mention of the Hello GPT-4o announcement demo video, which - if you have not seen it - you should watch. Immediately.
And then he hits us with a general prediction of what human communication could come to look like in the next fifteen years. I’ve included a link to the blog at the end of this article, and I think everyone who reads this should also read that.
Read that, and then ask yourself this question: if an AI were smart enough to effectively mimic every person in my life with texts, social media posts, and deepfake audio/video, how long would it be before I saw the actual person - physically - and discovered it was not them I had been speaking with this entire time?
I’m not asking that as a pointed question about the AI taking over. I am asking it as a standalone question. Literally, how long would it be? For my mother, days. For my brother, weeks or even months. For most of my cousins, never.
So let me come back now to the point my coworker made. It is fine to hold opinions, but they come with a responsibility: they should actually be opinions, ones we can explain well enough to articulate our point of view. (A favorite term of former Mythbusters host Adam Savage, who has used it to describe the key characteristic he looks for in art.)
So, what is your point of view?
If you think AI is inevitable, what should the rules be?
If you think widespread killer robots are inevitable, what should the rules be?
And do you plan on telling anyone how you feel, and if not, why not?
Will you just turn to ChatGPT for reassurance, and ask it to come up with a list of reasons not to worry about AI, and then go back to watching random YouTube videos until you fall asleep hours past your bedtime, like I might be doing if I weren’t writing this blog?
I’m not asking anyone to come up with a comprehensive justification for their point of view on the spot. Like anything else, it should be interrogated and allowed to evolve.
So, tomorrow at work, consider asking at least three people the questions above, and sit with them for five minutes and talk out your own answer as well. You may all decide this is out of scope for your time and energies and skills, but at least have a chat about it.
After all, the chart below - from Mr. Woodbury’s blog - would seem to indicate that the process of sitting and talking to humans in person is on its way to minority status, and right soon. Might as well get while the getting’s good.
Chart source: Rex Woodbury’s Digital Native blog.