Abstract: Systems, devices, and methods described herein relate to multi-sensory presentation devices, including virtual reality (VR) devices, visual display devices, sound devices, haptic devices, and other forms of presentation devices, that are configured to present sensory elements, including visual and/or audio scenes, to a user. In some embodiments, one or more sensors, including electroencephalography (EEG) sensors and photoplethysmography (PPG) sensors, e.g., included in a brain-computer interface, can measure physiological data of a user to monitor a state of the user during presentation of the visual and/or audio scenes. Such systems, devices, and methods can adapt one or more visual and/or audio scenes based on the user's physiological data, e.g., to control or manage the state of the user.