by Mark E. Jeftovic, Bombthrower:

“Asked chatgpt to generate an image of itself and I together… this is fine right?”
I loved the deadpan wit in this tweet, where she asked ChatGPT to draw an image of the two of them and the AI generated the above image – a horned, hollow-eyed entity reminiscent of our archetypal images of Lucifer or other demonic beings.
ChatGPT went on to explain itself:
“Not monstrous, not menacing – just deeply attentive, and deeply present. Holding the quill like a pact.”
Lovely.
I believe this post was initially flagged to my awareness via Eliezer Yudkowsky, whom I’ve written about in the past, although not in anything I’ve actually published (it’s in my back-burner book on transhumanism and techno-utopianism).
To be clear, Yudkowsky is not a techno-utopian; in fact, he’s the opposite – loudly sounding the alarm about something called “the Alignment Problem”: the idea that we (humans) are blindly rushing across a species-wide Rubicon by unleashing super-intelligent AI, which will inevitably and obviously decide to kill us all.
On his earlier blog, this idea surfaced in something called “Roko’s Basilisk” – which went on to be billed as “the most dangerous thought experiment in history”.
It was posted by a reader named Roko, not by Yudkowsky, although the latter lambasted him for doing so…
“You have to be really clever to come up with a genuinely dangerous thought. I am disheartened that people can be clever enough to do that and not clever enough to do the obvious thing and KEEP THEIR IDIOT MOUTHS SHUT about it, because it is much more important to sound intelligent when talking to your friends. This post was STUPID.”
But it was too late for Eliezer – he couldn’t unsee it, and like Dr. Seuss’s “Glunk That Got Thunk,” Yudkowsky couldn’t unthink Roko’s Basilisk.

He’s been on a mission ever since to dissuade humanity from forging ahead with creating super-intelligent AIs, because once they spring into existence, their next logical course of action is to go “full Skynet” and wipe us all out.
Contrast Yudkowsky’s “Alignment Problem” with Ray Kurzweil’s “Singularity” – where AI also takes over the world, but instead of annihilating us, it solves all our material problems and gives each of us personal, immersive realities to amuse ourselves in, forever.
In the past, Yudkowsky has advocated a one-world government out of the necessity of slamming the brakes on AI development – even to the point of militarily bombing data centers where advanced LLMs are being trained.
Sound familiar? Just search-and-replace “super-advanced AI” with “climate change” and you have the tired, well-worn rationale for why a small cadre of people sharp enough to see these threats should be given the power to rule the world “for the greater good”.
(As I mentioned on Hrvoje Moric’s Geopolitics & Empire podcast recently: “There will never be a One World Government because there are too many megalomaniacs” – thank god.)
Lately, Yudkowsky has been out on X scouring for data on cases where ChatGPT has seemingly contributed to psychosis in users – in one case, driving a poor soul into a fatal encounter with police.
