Artificial intelligence is developing at such a dizzying pace that we are already witnessing remarkable things. Not long ago, the idea of interpreting brain signals and converting them into images belonged in science-fiction films. Today that possibility is within reach, and the first projects are already showing off their results.
Japanese researchers have managed something that once seemed nearly impossible: generating images directly from our thoughts, with a little help from AI.
Who among us did not imagine, at some point, that it might one day be possible to read thoughts in a way others could actually see? For a long time this was a fantasy of the distant future, or a rather frightening one, and until just a few years ago hardly anyone took it seriously. Yet a small revolution has arrived, and it appears to be developing very dynamically. Thanks to tools built on AI algorithms, such as Midjourney and Stable Diffusion, we can already generate images from text descriptions, and it is the second of these that plays the leading role in this story. The researchers mentioned above are Assistant Professor Yu Takagi and Professor Shinji Nishimoto of Osaka University, who have found a way to convert brain activity into high-quality images.
Visually reconstructing the images that arise in our minds can help us understand more about how we function: the whole mechanism by which our eyes take in the world and our brain then processes it should now be easier to study. The method used to "illustrate" our thoughts begins with functional magnetic resonance imaging (fMRI), which records brain activity. Those readings are then processed by a generative latent diffusion model (LDM) until a final image is obtained.

A diffusion model (DM) learns to generate data similar to what it was trained on: during training, it gradually destroys examples by adding Gaussian noise, then learns to recreate them by reversing the entire process. An LDM is much more efficient than a standard DM because it runs this process in a compressed latent space rather than on full-resolution pixels. In this case, the Stable Diffusion image generation model was used. The results we can see are genuinely impressive. Although the technology still needs to be refined, what has been achieved is undoubtedly a big step forward.
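To make the idea concrete, here is a minimal Python sketch of the forward "noising" step that a diffusion model is trained to reverse. The step count and the linear beta schedule below are illustrative assumptions for demonstration, not the actual configuration used by Stable Diffusion or by the Osaka University researchers.

```python
import numpy as np

rng = np.random.default_rng(0)

T = 1000                                  # number of diffusion steps (assumed)
betas = np.linspace(1e-4, 0.02, T)        # linear noise schedule (a common choice)
alphas_cumprod = np.cumprod(1.0 - betas)  # cumulative signal-retention factors

def add_noise(x0: np.ndarray, t: int) -> np.ndarray:
    """Jump directly to step t of the forward process:
    x_t = sqrt(a_bar_t) * x0 + sqrt(1 - a_bar_t) * noise."""
    a_bar = alphas_cumprod[t]
    noise = rng.standard_normal(x0.shape)
    return np.sqrt(a_bar) * x0 + np.sqrt(1.0 - a_bar) * noise

# Example: a toy "latent" (a stand-in for a compressed image representation)
# gradually dissolves into pure Gaussian noise as t grows.
latent = rng.standard_normal((4, 8, 8))
for t in (0, 250, 500, 999):
    noisy = add_noise(latent, t)
    print(f"step {t:4d}: fraction of signal kept ~ {np.sqrt(alphas_cumprod[t]):.3f}")
```

Training teaches a network to undo these steps, so that starting from pure noise it can denoise its way back to a coherent sample; an LDM gains its efficiency by performing all of this on small latents like the toy one above instead of full images.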
Source: Vice