Patents by Inventor Jonathan Godwin

Jonathan Godwin has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Publication number: 20240104792
    Abstract: Embodiments are directed to aspects of extended reality environments that are selected or otherwise modified to account for distracting stimuli. Similarly, one or more metrics that reflect, or are otherwise indicative of, a user's ability to focus on a current or upcoming activity may be used to adjust a user's interaction with the extended reality environment. The extended reality environment can be generated by an extended reality system that includes a head-mounted display, a set of sensors, and a processor configured to enter a focus mode that reduces distraction in an extended reality environment. While in the focus mode, the processor can receive imaging data of a physical environment around a user using the set of sensors and generate the extended reality environment that includes a reproduction of a first region of the physical environment in which an identified object is replaced with additional content. (An illustrative sketch of this kind of focus-mode step appears after the listing.)
    Type: Application
    Filed: September 21, 2023
    Publication date: March 28, 2024
    Inventors: Grant H. Mulliken, Anura A. Patil, Brian Pasley, Christine Godwin, Eve Ekman, Jonathan Hadida, Lauren Cheung, Mary A. Pyc, Patrick O. Eronini, Raphael A. Bernier, Steven A. Marchette, Fletcher Rothkopf
  • Publication number: 20240104864
    Abstract: Embodiments are directed to aspects of extended reality environments that are selected or otherwise modified to account for distracting stimuli. Similarly, one or more metrics that reflect, or are otherwise indicative of, a user's ability to focus on a current or upcoming activity may be used to adjust a user's interaction with the extended reality environment. The extended reality environment can be generated by an extended reality system that includes a head-mounted display, a set of sensors, and a processor configured to enter a focus mode that reduces distraction in an extended reality environment. While in the focus mode, the processor can receive imaging data of a physical environment around a user using the set of sensors and generate the extended reality environment that includes a reproduction of a first region of the physical environment in which an identified object is replaced with additional content.
    Type: Application
    Filed: September 21, 2023
    Publication date: March 28, 2024
    Inventors: Grant H. Mulliken, Anura A. Patil, Brian Pasley, Christine Godwin, Eve Ekman, Jonathan Hadida, Lauren Cheung, Mary A. Pyc, Patrick O. Eronini, Raphael A. Bernier, Steven A. Marchette, Fletcher Rothkopf
  • Publication number: 20240104838
    Abstract: Embodiments are directed to aspects of extended reality environments that are selected or otherwise modified to account for distracting stimuli. Similarly, one or more metrics that reflect, or are otherwise indicative of, a user's ability to focus on a current or upcoming activity may be used to adjust a user's interaction with the extended reality environment. The extended reality environment can be generated by an extended reality system that includes a head-mounted display, a set of sensors, and a processor configured to enter a focus mode that reduces distraction in an extended reality environment. While in the focus mode, the processor can receive imaging data of a physical environment around a user using the set of sensors and generate the extended reality environment that includes a reproduction of a first region of the physical environment in which an identified object is replaced with additional content.
    Type: Application
    Filed: September 21, 2023
    Publication date: March 28, 2024
    Inventors: Grant H. Mulliken, Anura A. Patil, Brian Pasley, Christine Godwin, Eve Ekman, Jonathan Hadida, Lauren Cheung, Mary A. Pyc, Patrick O. Eronini, Raphael A. Bernier, Steven A. Marchette, Fletcher Rothkopf
  • Publication number: 20220254023
    Abstract: A method is disclosed of processing a set of images. Each image in the set has an associated counterpart image. One or more regions of interest (ROIs) are identified in one or more of the images in the set of images. For each ROI identified, a reference region is identified in the associated counterpart image. The ROIs and associated reference regions are cropped out, thereby forming cropped pairs of images 1 … n that are fed to a deep learning model trained to predict the probability of a state of the ROI, e.g., a disease state. The model generates a prediction Pi (i = 1 … n) for each cropped pair and an overall prediction P from the predictions Pi. A visualization of the set of medical images and the associated counterpart images, including the cropped pairs of images, is generated. (An illustrative sketch of this pairing-and-aggregation flow appears after the listing.)
    Type: Application
    Filed: June 16, 2020
    Publication date: August 11, 2022
    Inventors: Scott McKinney, Marcin Sieniek, Varun Godbole, Shravya Shetty, Natasha Antropova, Jonathan Godwin, Christopher Kelly, Jeffrey De Fauw
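For readers unfamiliar with the focus-mode idea in publications 20240104792, 20240104864, and 20240104838, the following is a minimal sketch of the kind of compositing step the abstracts describe: a passthrough frame of the physical environment is reproduced with an identified distracting object replaced by other content. The box format, the flat-color replacement, and the function names are assumptions for illustration only; they are not taken from the patent applications.

```python
# Minimal sketch of a focus-mode compositing step, assuming a passthrough
# frame arrives as an RGB numpy array and a detector has already produced
# bounding boxes for distracting objects. All names and the flat-color
# replacement are hypothetical, not the patented method.
import numpy as np


def apply_focus_mode(frame: np.ndarray,
                     distractor_boxes: list[tuple[int, int, int, int]],
                     replacement_rgb: tuple[int, int, int] = (120, 120, 120)) -> np.ndarray:
    """Return a copy of `frame` with each distracting region replaced.

    frame: H x W x 3 uint8 passthrough image of the physical environment.
    distractor_boxes: (top, left, bottom, right) boxes for identified objects.
    replacement_rgb: flat color standing in for generated replacement content.
    """
    out = frame.copy()
    for top, left, bottom, right in distractor_boxes:
        # Overwrite the identified object so the reproduced region appears
        # in the extended reality environment without the distracting stimulus.
        out[top:bottom, left:right] = replacement_rgb
    return out


if __name__ == "__main__":
    # Synthetic 480x640 frame and one hypothetical distractor box.
    frame = np.random.randint(0, 256, size=(480, 640, 3), dtype=np.uint8)
    composited = apply_focus_mode(frame, [(100, 200, 180, 320)])
    print(composited.shape, composited.dtype)
```

In a real system the replacement would come from generated content rather than a flat color, but the structure (receive imaging data, locate the object, composite the modified region back into the environment) is the same.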
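The abstract for publication 20220254023 describes cropping each ROI together with a reference region from a counterpart image, scoring each cropped pair with a model, and aggregating the per-pair predictions Pi into an overall prediction P. Below is a hedged sketch of that flow under assumptions of my own: images are numpy arrays, boxes are (top, left, bottom, right) tuples, `pair_model` is any callable returning a probability for a pair of crops, and the overall prediction is taken as the maximum Pi. None of these choices is claimed to match the patented method.

```python
# Sketch of ROI/reference pairing with per-pair prediction and aggregation.
# The box format, the dummy model, and the max-aggregation rule are
# illustrative assumptions, not the method claimed in 20220254023.
import numpy as np
from typing import Callable, Sequence

Box = tuple[int, int, int, int]  # (top, left, bottom, right)


def crop(image: np.ndarray, box: Box) -> np.ndarray:
    top, left, bottom, right = box
    return image[top:bottom, left:right]


def predict_overall(image: np.ndarray,
                    counterpart: np.ndarray,
                    rois: Sequence[Box],
                    reference_boxes: Sequence[Box],
                    pair_model: Callable[[np.ndarray, np.ndarray], float]) -> float:
    """Form cropped pairs 1..n, score each pair, and aggregate into P."""
    per_pair = [pair_model(crop(image, roi), crop(counterpart, ref))
                for roi, ref in zip(rois, reference_boxes)]  # P_1 .. P_n
    return max(per_pair)  # overall prediction P; max is one possible rule


if __name__ == "__main__":
    img = np.zeros((512, 512), dtype=np.float32)
    counterpart = np.zeros((512, 512), dtype=np.float32)
    dummy_model = lambda roi_crop, ref_crop: float(roi_crop.mean() + 0.1)
    p = predict_overall(img, counterpart,
                        [(10, 10, 60, 60)], [(12, 8, 62, 58)], dummy_model)
    print(f"overall prediction P = {p:.3f}")
```

The aggregation rule (here a simple maximum) and the pair model are the pieces a real implementation would train and tune; the visualization of the cropped pairs mentioned in the abstract is omitted from this sketch.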