Study Suggests AI Can Literally Read Your Mind, And With Extreme Accuracy.


    by Natalie Winters, The National Pulse:


    Scientists at Osaka University in Japan have discovered a way to use artificial intelligence (AI) to read our minds – quite literally.

    By combining magnetic resonance imaging (MRI) with AI technology, the researchers were able to recreate visual images directly from human brain activity. What’s more, by using widely available Stable Diffusion technology, they were able to do so at a very low cost and with relatively little effort compared to other recent attempts to do this. The researchers’ findings are reported in a pre-print study.


    Although researchers had already found ways to convert neural imaging into visual representations, doing so has generally required complex and expensive deep-learning technology that must be “trained” and carefully calibrated. Even then, the representations produced are often sketchy, with poor fidelity.

    The new method used by the Japanese researchers harnesses popular technology that has found widespread use in generating images from linguistic prompts. Over the last few months, social media platforms have been awash with images created using Stable Diffusion and other platforms like it. From just a few carefully selected words, the technology can produce compelling, sometimes hyperrealistic images – static pictures or, with some tweaking, animations in popular styles such as anime.

    While some in the art world have been supportive of this, many artists are fearful that it will replace them – and soon. Some have begun lobbying for this technology to be limited or even banned.

    In September last year, the New York Times covered the fallout from that year’s Colorado State Fair fine art competition, which was won by an entry – Théâtre D’opéra Spatial – created using Midjourney, another popular AI system.

    “Art is dead, dude,” the winner, Jason M. Allen, told the Times.


    To produce their images, the Japanese researchers followed a two-stage process. First, they decoded a visual image from the functional MRI (fMRI) signals of their test subjects. Then they used the same signals to “decode latent text representations,” which were fed into the Stable Diffusion platform, like prompts, to enhance the quality of the initial visual images retrieved.
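    The two-stage idea described above can be sketched in code. The toy example below is a loose illustration, not the researchers' actual pipeline: it uses synthetic data in place of real brain recordings, made-up latent dimensions in place of Stable Diffusion's, and a simple ridge regression as the decoder, which stands in for whatever mapping the study fitted from brain activity to the image and text latent spaces.

    ```python
    # Hypothetical sketch of two-stage fMRI decoding. All data, sizes,
    # and the ridge decoder are illustrative placeholders.
    import numpy as np

    rng = np.random.default_rng(0)

    n_trials, n_voxels = 200, 500   # simulated fMRI recordings
    d_image, d_text = 32, 16        # stand-ins for the two latent spaces

    # Synthetic "ground truth": brain activity is a noisy linear mixture
    # of the image latents and text latents we want to recover.
    z_image = rng.normal(size=(n_trials, d_image))
    z_text = rng.normal(size=(n_trials, d_text))
    mix = rng.normal(size=(d_image + d_text, n_voxels))
    fmri = (np.hstack([z_image, z_text]) @ mix
            + 0.1 * rng.normal(size=(n_trials, n_voxels)))

    def ridge_fit(X, Y, alpha=1.0):
        """Closed-form ridge regression: W = (X'X + aI)^-1 X'Y."""
        return np.linalg.solve(X.T @ X + alpha * np.eye(X.shape[1]), X.T @ Y)

    # Stage 1: decode an image latent from brain activity.
    W_img = ridge_fit(fmri[:150], z_image[:150])
    # Stage 2: decode a text latent ("latent text representation").
    W_txt = ridge_fit(fmri[:150], z_text[:150])

    pred_img = fmri[150:] @ W_img   # held-out trials
    pred_txt = fmri[150:] @ W_txt

    # In the study, the decoded image latent seeds the diffusion process
    # and the decoded text latent conditions it like a prompt; here we
    # just check that the decoded latents track the held-out targets.
    r_img = np.corrcoef(pred_img.ravel(), z_image[150:].ravel())[0, 1]
    r_txt = np.corrcoef(pred_txt.ravel(), z_text[150:].ravel())[0, 1]
    print(f"image-latent correlation: {r_img:.2f}")
    print(f"text-latent correlation:  {r_txt:.2f}")
    ```

    In the real system, the two decoded latents would then be handed to Stable Diffusion – the image latent as the starting point for its denoising process, the text latent in place of a typed prompt – which is what lets a general-purpose image generator stand in for the expensive, purpose-built decoders used in earlier work.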

    The results of this process can be seen in the series of images below.
