



Using data from the Allen Institute for Brain Science, we can already decode the visual scenery from visual cortex activity in awake animals, and other groups have decoded visual stimuli from fMRI data in humans.

(1) Can we train an encoder-decoder model on the awake visual-scenery-to-activity mapping, then use it to decode the "visual scenery" that occurs during sleep?
(2) Could we use DALL-E 2, Midjourney, Stable Diffusion, ControlNet, EbSynth, or any other generative tool to render video of what dreams are made of?
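A minimal sketch of idea (1), using entirely synthetic data: fit a linear decoder from neural activity to a stimulus-embedding space during "wake", then apply the same decoder to "sleep" activity to get candidate embeddings that a generative model could render. All sizes, names, and the linear-readout assumption are illustrative, not a real pipeline.

```python
import numpy as np
from numpy.linalg import lstsq

rng = np.random.default_rng(0)

# Synthetic stand-ins: 500 awake timepoints, 200 neurons,
# stimuli represented as 32-D embeddings (e.g. image features).
n_time, n_neurons, embed_dim = 500, 200, 32
stim_embed = rng.normal(size=(n_time, embed_dim))
W_true = rng.normal(size=(embed_dim, n_neurons))          # hypothetical encoding weights
activity = stim_embed @ W_true + 0.1 * rng.normal(size=(n_time, n_neurons))

# Fit the decoder on awake data: neural activity -> stimulus embedding.
W_dec, *_ = lstsq(activity, stim_embed, rcond=None)

# Apply the same decoder to "sleep" activity (here, simulated replay).
sleep_activity = rng.normal(size=(n_time, embed_dim)) @ W_true
decoded = sleep_activity @ W_dec  # candidate "dream" embeddings
print(decoded.shape)
```

The decoded embeddings would then be the input to whatever image/video generator answers question (2); a linear readout is only a baseline, and a learned encoder-decoder would replace `lstsq` in practice.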

Some neuroscience background would be useful, but not necessary.

github URL