

DreamMapper


Using data from the Allen Institute's Brain Observatory (http://help.brain-map.org/display/observatory/Documentation), visual scenery can already be decoded from visual cortex activity in awake animals (http://observatory.brain-map.org/visualcoding/stimulus/natural_movies), and other groups have decoded visual stimuli from fMRI data in humans (https://mind-vis.github.io/).
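A minimal sketch of pulling those recordings with the Allen Institute's published `allensdk` package (the stimulus name is real; picking the first matching experiment is just illustrative):

```python
# Sketch: fetch two-photon calcium traces recorded while mice watched a
# natural movie, from the Allen Brain Observatory via the allensdk package.
from allensdk.core.brain_observatory_cache import BrainObservatoryCache

boc = BrainObservatoryCache(manifest_file="boc_manifest.json")

# Find experiments whose sessions include the 'natural_movie_one' stimulus.
experiments = boc.get_ophys_experiments(stimuli=["natural_movie_one"])
data_set = boc.get_ophys_experiment_data(experiments[0]["id"])

# dF/F fluorescence traces: one row per cell, one column per imaging frame.
timestamps, dff = data_set.get_dff_traces()

# The movie frames themselves, plus a table aligning them to imaging frames,
# so stimulus and neural response can be paired for training.
movie = data_set.get_stimulus_template("natural_movie_one")
stim_table = data_set.get_stimulus_table("natural_movie_one")
```

This yields paired (stimulus frame, population response) data, which is exactly what the wake-time encoder-decoder in question (1) below needs.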

(1) Can we train an encoder-decoder model on that wake-time scenery-to-activity encoding and use it to decode the "visual scenery" that occurs during sleep? (A decoding sketch follows this list.)
(2) Could we use DALL-E 2, Midjourney, Stable Diffusion, ControlNet, EbSynth, or any other generative tool to render video of what dreams are made of? (See the rendering sketch below.)
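As a first pass at (1), here is a deliberately simple baseline, not the model we would stop at: regress wake-time population activity onto CLIP image embeddings of the frames being watched, then reuse the fitted decoder on sleep-period activity. The ridge decoder, CLIP as the embedding space, and the synthetic placeholder arrays (standing in for the aligned data from the step above) are all assumptions for illustration:

```python
# Sketch: wake-trained linear decoder from population activity to CLIP
# image embeddings, reused on sleep activity. All arrays below are
# synthetic placeholders for the aligned dff/movie data fetched above.
import numpy as np
import torch
from sklearn.linear_model import Ridge
from transformers import CLIPModel, CLIPProcessor

clip = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
proc = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

def embed_frames(frames):
    """CLIP image embeddings for uint8 grayscale movie frames (N, H, W)."""
    rgb = np.repeat(frames[..., None], 3, axis=-1)  # grayscale -> RGB
    inputs = proc(images=list(rgb), return_tensors="pt")
    with torch.no_grad():
        return clip.get_image_features(**inputs).numpy()

# Placeholders: 64 presented frames, 200 simultaneously recorded cells.
frames = np.random.randint(0, 256, (64, 304, 608), dtype=np.uint8)
X_wake = np.random.randn(64, 200)   # activity aligned to each frame
Y_wake = embed_frames(frames)       # embedding of the frame being shown

decoder = Ridge(alpha=10.0).fit(X_wake, Y_wake)

# Reuse the wake-trained decoder on sleep-period activity, then retrieve
# the nearest wake frame for each decoded embedding as a crude readout.
X_sleep = np.random.randn(32, 200)
Y_hat = decoder.predict(X_sleep)
sims = Y_hat @ Y_wake.T             # dot-product similarity
nearest_frames = frames[sims.argmax(axis=1)]
```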
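And for (2), one way to turn those decoded frames into rendered imagery, sketched with the open-source `diffusers` implementation of Stable Diffusion (the checkpoint, prompt, and `strength` value are illustrative choices, not a settled pipeline):

```python
# Sketch: render decoded frames with Stable Diffusion img2img (diffusers).
import numpy as np
import torch
from diffusers import StableDiffusionImg2ImgPipeline
from PIL import Image

pipe = StableDiffusionImg2ImgPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

def render(frame: np.ndarray, prompt: str = "a natural scene") -> Image.Image:
    """Use a decoded/retrieved grayscale frame as the init image and let
    the diffusion prior fill in a full scene around it."""
    init = Image.fromarray(frame).convert("RGB").resize((512, 512))
    # strength controls how far the model may drift from the init frame
    return pipe(prompt=prompt, image=init, strength=0.6).images[0]

# e.g. render(nearest_frames[0]).save("dream_frame_0.png")
```

Rendering each decoded frame this way and stitching the outputs (e.g. with EbSynth for temporal coherence, as the list above suggests) would give a first rough "dream video."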

Some neuroscience background would be useful, but not necessary.
