SAM ALTMAN, THE OPENAI MYSTERY, AND ALL THOSE DERIVATIVES…

by Joseph P. Farrell, Giza Death Star:

NOTA BENE: You’re going to want to pay close attention to this one, especially towards the end, when the reasons I’ve filed this under “Babylon’s Banksters” will become more clear…

The OpenAI firm and the sudden firing and rehiring of Sam Altman have been so much in the alternative news lately that my email inbox was flooded with articles about it, and particularly by this one (and thanks to all of you who sent it along):

OpenAI – Q* Saga: Implications

Now, just to highlight the main argument of this massively important article, consider the following excerpts:

Without getting lost in the weeds, we know there was an unprecedented wave of turmoil within OpenAI. For those who don’t know, this is the leading AI company which most experts expect—as per a recent poll—to be the first to usher in some form of AGI, consciousness, superintelligence, etc. They’re mostly staffed by ex-Google DeepMind dropouts, and with their ChatGPT product have a distinct head start on the ever-growing pack.

As a precursor: even long before these events, rumors echoed from deep within the halls of OpenAI’s inner sanctum about AGI having been reached internally. One thing laypeople have to understand is that the systems we’re privy to are highly scaled-down consumer models which are ‘detuned’ in a variety of ways. However, some of the earliest ones which dropped in the past year or so were “let out into the wild” without the same safety nets currently seen. This resulted in a series of infamously wild interactions, like that of Bing’s “Sydney” alter ego, which convinced many shaken observers it had gained emergent properties.

What one needs to know, the article contends, is that the public consumer versions of various AI programs are deliberately given out with programmed restraints, such as limits on the amount of time a publicly available “AI” has to “solve” problems put to it; it is with that consideration in mind that the mystery of what led up to the firing and re-hiring of Sam Altman should be approached:

The biggest way these consumer versions are clipped is in the limitations placed on token allowances per conversation, as well as ‘inference’ time allowed for the program to research and answer your question. Obviously for a public-facing downgraded consumer variant, they want an easy, ergonomic system which can be responsive and have the widest appeal and usability. This means the chatbots you’re wont to use are given short inference allowances for chasing data and making calculations in a matter of seconds. Additionally, they have knowledge-base cut-off dates, which means they may not know anything past a certain point months or even a year ago. This doesn’t even cover other, more technical limitations.

In the internal “full” versions, the AI developers are able to play with systems that have incomprehensibly larger token caches and inference times, which lead to far more “intelligent” systems that can likely come close to ‘presenting as’ conscious, or at least possess the ability to abstract and reason to far greater extents.
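To make the contrast the excerpt draws a bit more concrete, here is a minimal, purely illustrative Python sketch of the kinds of knobs a provider could tighten for a consumer tier versus an internal research tier. Every field name and number below is a hypothetical placeholder, not an actual OpenAI setting.

```python
from dataclasses import dataclass

@dataclass
class ModelLimits:
    """Illustrative knobs a provider might tighten for a consumer chatbot.

    Every field name and value in this sketch is a hypothetical placeholder,
    not a real OpenAI configuration setting.
    """
    max_tokens_per_turn: int      # cap on how long each answer can be
    max_inference_seconds: float  # wall-clock "thinking" budget per query
    knowledge_cutoff: str         # training data ends here; later events unknown

# A tightly clipped public tier: fast, cheap, broadly usable.
consumer = ModelLimits(
    max_tokens_per_turn=4_096,
    max_inference_seconds=30.0,
    knowledge_cutoff="2023-04",
)

# A hypothetical internal research tier with vastly larger budgets,
# of the sort the excerpt speculates about.
internal = ModelLimits(
    max_tokens_per_turn=1_000_000,
    max_inference_seconds=86_400.0,  # a full day of inference time
    knowledge_cutoff="continually refreshed",
)

print(consumer)
print(internal)
```

The point of the sketch is simply the excerpt’s own point: the same underlying model behaves very differently depending on how much compute, context, and time it is allowed per question.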

So what has everyone wondering? Apparently there was some sort of breakthrough at OpenAI. The problem is that what is alleged about the breakthrough defies any current theory about computational speed:

In the wake of the OpenAI unraveling scandal which saw over 500 workers threaten to resign en masse, researchers began putting the threads together to find a trail laid by key execs, from Sam Altman to Ilya Sutskever himself, over the course of the past 6 months or so, which seemed to point to precisely the area of development that’s in question in the Q* saga.

It revolves around a new breakthrough in how artificial intelligence solves problems, which would potentially lead to multi-generational advancements over current models. Making the rounds were videos of Altman at recent talks forebodingly intimating that he no longer knew if they were creating a program or a ‘creature’. Other signs pointed to the revolt inside OpenAI centering on a loss of confidence and trust in Altman, who was perceived to have “lied” to the Board about the scale of recent breakthroughs, which were considered dangerous enough to spook a lot of people internally. The quote which made the rounds:

“Ahead of OpenAI CEO Sam Altman’s four days in exile, several staff researchers sent the board of directors a letter warning of a powerful artificial intelligence discovery that they said could threaten humanity, two people familiar with the matter told Reuters.”

Adding fuel to the fire was the Reddit “leak” of an alleged, heavily redacted internal letter which spoke of the types of breakthroughs this project had achieved and which, if true, would be world-changing and extremely dangerous at once.

The Reddit leak is cited in full in the main article, and it does indeed make for some breathtaking reading. But its main thrust is this:

In essence, the letter paints a picture of Q*—presumed to be referred to as ‘Qualia’ here—achieving a sort of self-referencing meta-cognition that allows it to ‘teach itself,’ and thus reach the ‘self-improvement’ milestone conceived to be part of AGI and potentially super-intelligence. The most eye-opening ‘revelation’ was that it allegedly cracked an AES-192 ciphertext in a way that surpasses even quantum computing systems’ capabilities. This is by far the most difficult claim to believe: that an ‘intractable’-level cipher with a 192-bit key (a key space of 2^192) was cracked in short order.

As this explanatory video makes clear, the time to brute-force a 192-bit key is calculated to be on the order of 10^37 years—which is basically trillions upon trillions of years.

On one hand this immediately screams fake. But on the other, if someone really were fabulating, wouldn’t they have chosen a far more believable claim to give it a semblance of reality? It’s also to be noted that many of the most significant AI developments are regularly leaked on Reddit, 4Chan, etc., via the same type of murky anonymous accounts, and are often later validated; it’s just how things work nowadays.
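For readers who want to sanity-check that 10^37-year figure, here is a short back-of-the-envelope calculation in Python. The assumed rate of 10^12 key trials per second is an arbitrary, generous guess on my part, yet the result still lands in the same ballpark the article cites.

```python
# Rough brute-force estimate for exhausting an AES-192 key space.
# The trial rate below is an arbitrary, optimistic assumption for illustration.

KEYSPACE = 2 ** 192              # number of possible 192-bit keys
TRIALS_PER_SECOND = 10 ** 12     # assumed key trials per second (hypothetical)
SECONDS_PER_YEAR = 60 * 60 * 24 * 365

expected_trials = KEYSPACE // 2  # on average, half the key space must be searched
seconds = expected_trials / TRIALS_PER_SECOND
years = seconds / SECONDS_PER_YEAR

print(f"Expected brute-force time: ~{years:.1e} years")
# Prints a figure on the order of 10**37 to 10**38 years -- "trillions upon
# trillions of years", dwarfing the age of the universe (~1.4e10 years).
```

In other words, even under absurdly generous hardware assumptions a classical exhaustive search of AES-192 is out of reach, which is exactly why the leaked claim reads as either revolutionary or fabricated.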

Now, in case you’re not following what is being said, the article spells it out for you (and Catherine Austin Fitts fans, take note, because she, I, and others have been saying this for a long time, and specifically in reference to crypto-“currencies” and central bank digital “currencies”):

Read More @ GizaDeathStar.com