Abstract: Magnetic resonance fingerprinting (“MRF”) techniques in which T1, T2, and T2* are simultaneously quantified using a combined gradient echo and spin echo acquisition with integrated B1 correction are described. The values for T2 and T2* can be estimated separately, but using the same underlying dictionary. This approach enables a smaller, easily manageable dictionary size, and also reduces error propagation. Moreover, by using echo planar imaging (“EPI”) readouts, the raw MRF images will have higher signal-to-noise ratio (“SNR”) relative to images acquired using spiral-based MRF techniques. The EPI-based images are also relatively free of artifacts. Together, these advantages lead to the need for far fewer frames, thereby enabling much faster acquisitions. Moreover, offline reconstruction is not needed, allowing for a more straightforward implementation of MRF.
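The dictionary-based estimation the abstract refers to can be illustrated with a generic MRF matching step: each dictionary entry is a simulated signal evolution for one candidate (T1, T2) pair, and a measured voxel signal is assigned the parameters of the best-matching entry. This is a minimal sketch using a toy mono-exponential signal model; the function names, signal model, and parameter grids are assumptions for illustration, not the patented sequence or its B1-corrected acquisition.

```python
import numpy as np

def build_dictionary(t1_values, t2_values, echo_times):
    """Toy dictionary: one normalized signal evolution per (T1, T2) pair."""
    entries, params = [], []
    for t1 in t1_values:
        for t2 in t2_values:
            # Crude signal model: T1-driven recovery times T2-driven decay.
            sig = (1 - np.exp(-echo_times / t1)) * np.exp(-echo_times / t2)
            entries.append(sig / np.linalg.norm(sig))
            params.append((t1, t2))
    return np.array(entries), params

def match(signal, dictionary, params):
    """Return the (T1, T2) pair whose entry has the highest inner product."""
    signal = signal / np.linalg.norm(signal)
    idx = int(np.argmax(dictionary @ signal))
    return params[idx]

echo_times = np.linspace(10, 300, 30)          # ms, assumed sampling
dictionary, params = build_dictionary(
    t1_values=np.arange(500, 2001, 100),       # ms, assumed grid
    t2_values=np.arange(20, 201, 10),          # ms, assumed grid
    echo_times=echo_times,
)
true_t1, true_t2 = 1200, 80
voxel = (1 - np.exp(-echo_times / true_t1)) * np.exp(-echo_times / true_t2)
print(match(voxel, dictionary, params))        # best-matching (T1, T2) pair
```

Because matching is a nearest-entry lookup, a single dictionary can serve multiple estimated parameters, which is consistent with the abstract's point that a shared dictionary keeps the dictionary size manageable.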
Abstract: In one aspect, the application relates to a computing system for providing data for modelling a human brain, the system comprising a database that includes a plurality of datasets (or allows access to a plurality of datasets), each dataset including at least a dynamical model of the brain with at least one node and a neurodataset from a neuroimaging modality input. The at least one node includes a representation of a local dynamic model and a parameter set of the local dynamic model.
Type:
Grant
Filed:
February 18, 2019
Date of Patent:
January 24, 2023
Assignees:
Baycrest Centre for Geriatric Care, Codebox Computerdienste GmbH, Max-Planck-Gesellschaft Zur Förderung Der Wissenschaften e.V., Universite D'Aix Marseille
Inventors:
Anthony Randal McIntosh, Jochen Mersmann, Viktor Jirsa, Petra Ritter
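The data layout described in the abstract above (a dataset pairing a dynamical brain model, whose nodes each carry a local dynamic model plus its parameter set, with a neuroimaging input) can be sketched as a simple data structure. All names and the example linear relaxation model below are illustrative assumptions, not identifiers or models from the patent.

```python
from dataclasses import dataclass
from typing import Callable, Dict, List

@dataclass
class Node:
    # Representation of a local dynamic model and its parameter set.
    local_model: Callable[[float, Dict[str, float]], float]
    parameters: Dict[str, float]

    def step(self, state: float) -> float:
        """Advance the node's state by one step of its local dynamics."""
        return self.local_model(state, self.parameters)

@dataclass
class Dataset:
    nodes: List[Node]        # dynamical model of the brain (one or more nodes)
    neurodata: List[float]   # neurodataset from a neuroimaging modality input

# Example: a linear relaxation model at each node (assumed for illustration).
def decay(state: float, p: Dict[str, float]) -> float:
    return state - p["rate"] * state

ds = Dataset(nodes=[Node(decay, {"rate": 0.1})], neurodata=[0.5, 0.4, 0.35])
print(ds.nodes[0].step(1.0))
```

Keeping the local model and its parameter set together on the node is one way to let many datasets share model code while varying only parameters.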
Abstract: Systems and methods are disclosed for providing visual feedback to a subject during magnetic resonance imaging, where the visual feedback is associated with input provided by the subject to a magnetic resonance compatible touch panel. A video camera is employed to record video images of the interaction between the subject and the touch panel, and the video images are processed to generate a real-time video signal including a rendering of the input provided to the touch panel and the interaction between the subject's hands and the touch panel. The real-time video signal is provided to the subject as visual feedback, and is displayed within a time duration that is sufficiently fast to avoid the detection of the visual feedback as an error signal by the subject's brain in relation to the sense of proprioception. A measurement of the force applied to the touch panel by the subject may be recorded and employed when rendering the real-time video.
Type:
Grant
Filed:
May 9, 2014
Date of Patent:
March 10, 2020
Assignees:
SUNNYBROOK RESEARCH INSTITUTE, BAYCREST CENTRE FOR GERIATRIC CARE
Inventors:
Simon James Graham, Tom A. Schweizer, Stephen Strother, Fred Tam, Mahta Karimpoor
Abstract: The present invention provides methods and systems for assessing cognitive function by comparing a subject's eye movements within and across distinct classes of images.
Abstract: This invention relates to an apparatus and method for assessing a subject's hearing by recording steady-state auditory evoked responses. The apparatus generates a steady-state auditory evoked potential stimulus, presents the stimulus to the subject, senses potentials while simultaneously presenting the stimulus, and determines whether the sensed potentials contain responses to the stimulus. The stimulus may include an optimum vector combined amplitude modulation and frequency modulation signal adjusted to evoke responses with increased amplitudes, an independent amplitude modulation and frequency modulation signal, and a signal whose envelope is modulated by an exponential modulation signal. The apparatus is also adapted to reduce noise in the sensed potentials by employing sample weighted averaging. The apparatus is also adapted to detect responses in the sensed potentials via the phase-weighted t-test or phase zone technique.
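The sample weighted averaging mentioned in the abstract can be illustrated generically: each recorded sweep is weighted inversely to an estimate of its noise power before averaging, so noisier sweeps contribute less to the evoked-response estimate. This is a minimal sketch under the assumption that per-sweep variance serves as the noise estimate; it is not the patent's specific weighting scheme.

```python
import numpy as np

def weighted_average(sweeps: np.ndarray) -> np.ndarray:
    """Average sweeps with weights proportional to 1 / estimated noise variance."""
    noise_var = sweeps.var(axis=1)               # per-sweep noise estimate (assumed)
    weights = 1.0 / np.maximum(noise_var, 1e-12)
    weights /= weights.sum()                     # normalize to sum to 1
    return weights @ sweeps                      # weighted sum across sweeps

rng = np.random.default_rng(0)
t = np.linspace(0, 1, 200)
response = np.sin(2 * np.pi * 40 * t)            # toy 40 Hz steady-state response
# Three sweeps: two moderately noisy, one very noisy (std 0.5, 0.5, 5.0).
sweeps = response + rng.normal(0.0, [[0.5], [0.5], [5.0]], size=(3, 200))
avg = weighted_average(sweeps)
# The very noisy third sweep is strongly down-weighted relative to a plain mean.
```

Down-weighting high-noise sweeps rather than discarding them retains all recorded data while limiting the influence of artifact-laden epochs.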