Patents by Inventor James Andrew Youngquist

James Andrew Youngquist has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Publication number: 20240013412
    Abstract: Mediated-reality imaging systems, methods, and devices are disclosed herein. In some embodiments, an imaging system includes (i) a camera array configured to capture intraoperative image data of a surgical scene in substantially real-time and (ii) a processing device communicatively coupled to the camera array. The processing device can be configured to synthesize a three-dimensional (3D) image corresponding to a virtual perspective of the scene based on the intraoperative image data from the cameras. The imaging system is further configured to receive and/or store preoperative image data, such as medical scan data corresponding to a portion of a patient in the scene. The processing device can register the preoperative image data to the intraoperative image data, and overlay the registered preoperative image data over the corresponding portion of the 3D image of the scene to present a mediated-reality view.
    Type: Application
    Filed: July 11, 2023
    Publication date: January 11, 2024
    Inventors: Nava Aghdasi, James Andrew Youngquist
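
    The registration and overlay steps described in the abstract above can be illustrated with a small sketch. The following is a minimal, hypothetical example (not Proprio's implementation): it rigidly aligns preoperative landmark points to corresponding intraoperative landmarks with a Kabsch/SVD fit, then alpha-blends the registered preoperative layer over the synthesized view. Function names, the landmark correspondences, and the blending weight are all assumptions.

    ```python
    import numpy as np

    def rigid_register(preop_pts, intraop_pts):
        """Kabsch/SVD fit: find R, t minimizing ||(R @ p + t) - q|| over corresponding points."""
        mu_p, mu_q = preop_pts.mean(axis=0), intraop_pts.mean(axis=0)
        P, Q = preop_pts - mu_p, intraop_pts - mu_q
        U, _, Vt = np.linalg.svd(P.T @ Q)
        D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
        R = Vt.T @ D @ U.T          # proper rotation (reflection guarded against)
        t = mu_q - R @ mu_p
        return R, t

    def overlay(intraop_rgb, preop_rgb, alpha=0.4):
        """Alpha-blend the registered preoperative layer over the live 3D view."""
        return (1 - alpha) * intraop_rgb + alpha * preop_rgb

    # Toy usage: four corresponding landmarks; the preoperative set is rotated and shifted.
    preop = np.array([[0.0, 0.0, 0.0], [10.0, 0.0, 0.0], [0.0, 5.0, 0.0], [0.0, 0.0, 3.0]])
    R_true = np.array([[0.0, -1.0, 0.0], [1.0, 0.0, 0.0], [0.0, 0.0, 1.0]])
    intraop = preop @ R_true.T + np.array([2.0, 1.0, 0.5])
    R, t = rigid_register(preop, intraop)
    print(np.allclose(preop @ R.T + t, intraop))   # True: preop data now sits in the intraop frame
    ```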
  • Publication number: 20240000295
    Abstract: Systems and methods for capturing and rendering light fields for head-mounted displays are disclosed. A mediated-reality visualization system includes a head-mounted display assembly comprising a frame configured to be mounted to a user's head and a display device coupled to the frame. An imaging assembly separate and spaced apart from the head-mounted display assembly is configured to capture light-field data. A computing device in communication with the imaging assembly and the display device is configured to receive light-field data from the imaging assembly and render one or more virtual cameras. Images from the one or more virtual cameras are presented to a user via the display device.
    Type: Application
    Filed: March 23, 2023
    Publication date: January 4, 2024
    Applicant: University of Washington
    Inventors: Joshua R. Smith, Samuel R. Browd, Rufus Griffin Nicoll, James Andrew Youngquist
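
    As a rough illustration of rendering the "virtual cameras" whose images are shown on the display device, the sketch below projects a reconstructed, colored 3D point cloud through a pinhole model placed at an arbitrary virtual pose. This is a simplification under assumed names and parameters: the actual system renders from captured light-field data rather than from an explicit point cloud.

    ```python
    import numpy as np

    def render_virtual_view(points, colors, R, t, f, width, height):
        """Project a colored 3D point cloud into a virtual pinhole camera.

        points: (N, 3) world-space points; colors: (N, 3) RGB in [0, 1].
        R, t: virtual camera extrinsics (world -> camera); f: focal length in pixels.
        """
        cam = points @ R.T + t                       # world -> camera frame
        in_front = cam[:, 2] > 1e-6
        cam, colors = cam[in_front], colors[in_front]
        u = (f * cam[:, 0] / cam[:, 2] + width / 2).astype(int)
        v = (f * cam[:, 1] / cam[:, 2] + height / 2).astype(int)
        ok = (u >= 0) & (u < width) & (v >= 0) & (v < height)
        # Paint farther points first so nearer ones overwrite them (poor man's z-buffer).
        order = np.argsort(-cam[ok, 2])
        image = np.zeros((height, width, 3))
        image[v[ok][order], u[ok][order]] = colors[ok][order]
        return image

    # Toy usage: 1,000 random points roughly one meter in front of an identity-pose camera.
    rng = np.random.default_rng(0)
    pts = rng.uniform([-0.2, -0.2, 0.8], [0.2, 0.2, 1.2], size=(1000, 3))
    img = render_virtual_view(pts, rng.uniform(size=(1000, 3)),
                              np.eye(3), np.zeros(3), f=500, width=640, height=480)
    print(img.shape)  # (480, 640, 3)
    ```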
  • Publication number: 20240005596
    Abstract: Methods of determining the depth of a scene and associated systems are disclosed herein. In some embodiments, a method can include augmenting depth data of a scene captured with a depth sensor with depth data from one or more images of the scene. For example, the method can include capturing image data of the scene with a plurality of cameras. The method can further include generating a point cloud representative of the scene based on the depth data from the depth sensor and identifying a missing region of the point cloud, such as a region occluded from the view of the depth sensor. The method can then include generating depth data for the missing region based on the image data. Finally, the depth data for the missing region can be merged with the depth data from the depth sensor to generate a merged point cloud representative of the scene.
    Type: Application
    Filed: May 4, 2023
    Publication date: January 4, 2024
    Inventors: Thomas Ivan Nonn, David Julio Colmenares, James Andrew Youngquist
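
    A minimal sketch of the merging step in the abstract above, under assumed names and data layouts: the depth sensor's output is modeled as a depth map with NaN holes for occluded regions, the image-derived depth fills those holes, and the merged map is back-projected into a point cloud.

    ```python
    import numpy as np

    def depth_to_points(depth, f, cx, cy):
        """Back-project a depth map (meters) into a 3D point cloud, skipping NaNs."""
        h, w = depth.shape
        v, u = np.mgrid[0:h, 0:w]
        valid = ~np.isnan(depth)
        z = depth[valid]
        x = (u[valid] - cx) * z / f
        y = (v[valid] - cy) * z / f
        return np.column_stack([x, y, z])

    def merge_depth(sensor_depth, image_depth):
        """Fill regions the depth sensor missed (NaN) with image-derived depth."""
        merged = sensor_depth.copy()
        hole = np.isnan(merged)                  # e.g. a region occluded from the sensor's view
        merged[hole] = image_depth[hole]
        return merged

    # Toy usage: a flat 1 m scene with an occluded square, recovered from image-derived depth.
    sensor = np.full((480, 640), 1.0)
    sensor[200:280, 300:380] = np.nan            # missing region
    stereo = np.full((480, 640), 1.02)           # slightly noisier depth from the cameras
    cloud = depth_to_points(merge_depth(sensor, stereo), f=525.0, cx=320.0, cy=240.0)
    print(cloud.shape)                           # (307200, 3): every pixel now has depth
    ```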
  • Publication number: 20230403477
    Abstract: Systems, devices, and methods for collection, editing, and playback of data collected from an imaging system for generating a virtual perspective of a scene are disclosed. In one example perspective, an imaging system includes a camera array configured to capture multiple images of a scene. Each of the multiple images includes color data represented in a Bayer pattern that includes a blue channel, two green channels, and a red channel. The system also includes an image processing device configured to receive the multiple images captured by the camera array, split each of the multiple images represented in the Bayer pattern into four individual color planes, form at least one set of data by combining the four individual color planes of the multiple images, and compress the at least one set of data.
    Type: Application
    Filed: June 9, 2022
    Publication date: December 14, 2023
    Inventors: Thomas Ivan Nonn, Tze-Yuan Cheng, James Andrew Youngquist
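
    The plane-splitting step lends itself to a short sketch. Assuming an RGGB mosaic layout and using zlib purely as a stand-in codec (the publication does not specify one), the snippet below splits a Bayer frame into its four color planes, stacks them, and compresses the result.

    ```python
    import zlib
    import numpy as np

    def split_bayer_planes(raw):
        """Split an RGGB Bayer mosaic into its four color planes (R, G1, G2, B)."""
        return np.stack([raw[0::2, 0::2],   # R
                         raw[0::2, 1::2],   # G1
                         raw[1::2, 0::2],   # G2
                         raw[1::2, 1::2]])  # B

    def compress_planes(planes):
        """Compress the stacked planes; like-valued pixels now sit together, which
        tends to compress better than the interleaved mosaic."""
        return zlib.compress(np.ascontiguousarray(planes).tobytes(), level=6)

    # Toy usage: a synthetic 8-bit Bayer frame.
    rng = np.random.default_rng(0)
    raw = rng.integers(0, 256, size=(480, 640), dtype=np.uint8)
    planes = split_bayer_planes(raw)          # shape (4, 240, 320)
    blob = compress_planes(planes)
    print(planes.shape, len(blob))
    ```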
  • Publication number: 20230394707
    Abstract: Methods and systems for calibrating an imaging system having a plurality of sensors are disclosed herein. In some embodiments, a method includes initially calibrating the imaging system, operating the imaging system during an imaging procedure, and then updating the calibration during the imaging procedure to account for degradation of the initial calibration due to environmental factors, such as heat. The method of updating the calibration can include capturing image data of a rigid body having a known geometry with the sensors and determining that the calibration has drifted for a problematic one of the sensors based on the captured image data. After determining the problematic one of the sensors, the method can include updating the calibration of the problematic one of the sensors based on the captured image data.
    Type: Application
    Filed: June 1, 2023
    Publication date: December 7, 2023
    Inventors: Thomas Ivan Nonn, Nava Aghdasi, James Andrew Youngquist, Adam Gabriel Jones
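
    A minimal sketch of the drift check described above, with hypothetical names and an arbitrary pixel threshold: each camera's current calibration is used to project the known rigid-body fiducials, and a camera whose reprojection error stands out is flagged for re-calibration. The subsequent pose update (e.g., a PnP solve from the same observations) is only indicated in a comment.

    ```python
    import numpy as np

    def reprojection_error(body_pts, detections, K, R, t):
        """Mean pixel error between detected fiducials and the known rigid body
        projected through one camera's current calibration (K intrinsics, R|t extrinsics)."""
        cam = body_pts @ R.T + t
        proj = cam @ K.T
        proj = proj[:, :2] / proj[:, 2:3]
        return float(np.linalg.norm(proj - detections, axis=1).mean())

    def find_drifted_cameras(body_pts, detections_per_cam, calibs, threshold_px=2.0):
        """Return indices of cameras whose reprojection error exceeds the threshold."""
        errors = [reprojection_error(body_pts, det, *calib)
                  for det, calib in zip(detections_per_cam, calibs)]
        return [i for i, e in enumerate(errors) if e > threshold_px], errors

    # Toy usage: four fiducials on a known body, two cameras, the second with a drifted pose.
    body = np.array([[0.0, 0, 0], [0.1, 0, 0], [0, 0.1, 0], [0, 0, 0.1]])
    K = np.array([[800.0, 0, 320], [0, 800.0, 240], [0, 0, 1]])
    R_good, t_good = np.eye(3), np.array([0.0, 0.0, 1.0])
    det = (body @ R_good.T + t_good) @ K.T
    det = det[:, :2] / det[:, 2:3]                           # ideal detections
    R_bad, t_bad = R_good, t_good + np.array([0.01, 0, 0])   # ~1 cm of drift
    drifted, errs = find_drifted_cameras(body, [det, det],
                                         [(K, R_good, t_good), (K, R_bad, t_bad)])
    print(drifted)  # [1] -- this camera's extrinsics would then be re-estimated (e.g., via PnP)
    ```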
  • Publication number: 20230355319
    Abstract: Methods and systems for calibrating an instrument, such as a surgical instrument, within an imaging system are disclosed herein. In some embodiments, a method includes capturing images of the instrument with a plurality of cameras of the imaging system and identifying common features of the instrument in the captured images. The method further includes generating a three-dimensional (3D) representation of the instrument based on the common features and determining a reference frame of the instrument based on the generated 3D representation of the instrument. A first transform is determined between the reference frame of the instrument and a reference frame of the cameras. Then, a second transform between the reference frame of the instrument and a reference frame of the tracking structure can be determined based on the first transform.
    Type: Application
    Filed: May 9, 2023
    Publication date: November 9, 2023
    Inventors: Nava Aghdasi, James Andrew Youngquist
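
    The two-transform chain in the abstract above reduces to a matrix composition. The sketch below uses 4x4 homogeneous transforms with illustrative values: T_cam_from_instr stands for the first transform (instrument frame to camera frame) and T_tracker_from_cam for the known camera-to-tracking-structure relationship; all numbers and names are assumptions.

    ```python
    import numpy as np

    def make_transform(R, t):
        """Pack a rotation and translation into a 4x4 homogeneous transform."""
        T = np.eye(4)
        T[:3, :3], T[:3, 3] = R, t
        return T

    # First transform: instrument reference frame expressed in the camera frame,
    # as obtained from the 3D reconstruction of the instrument's common features.
    T_cam_from_instr = make_transform(np.eye(3), np.array([0.05, 0.00, 0.40]))

    # Known relationship between the camera frame and the tracking structure's frame.
    T_tracker_from_cam = make_transform(
        np.array([[0, -1, 0], [1, 0, 0], [0, 0, 1]]), np.array([0.00, 0.10, 0.02]))

    # Second transform: instrument frame expressed in the tracking structure's frame.
    T_tracker_from_instr = T_tracker_from_cam @ T_cam_from_instr

    tip_in_instr = np.array([0.0, 0.0, 0.15, 1.0])   # a point on the instrument (homogeneous)
    print(T_tracker_from_instr @ tip_in_instr)       # the same point in the tracker frame
    ```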
  • Publication number: 20230355309
    Abstract: Methods and systems for intraoperatively determining alignment parameters of a spine during a spinal surgical procedure are disclosed herein. In some embodiments, a method of intraoperatively determining an alignment parameter of a spine during a surgical procedure includes receiving initial image data of the spine including multiple vertebrae and identifying a geometric feature associated with each vertebra in the initial image data. The geometric features each have a pose in the initial image data and characterize a three-dimensional (3D) shape of the associated vertebra. The method further comprises receiving intraoperative image data of the spine and registering the initial image data to the intraoperative image data. The method can then update the pose of each geometric feature based on the registration and the intraoperative image data, and determine the alignment parameter based on the updated poses of the geometric features associated with two or more of the vertebrae.
    Type: Application
    Filed: May 3, 2022
    Publication date: November 9, 2023
    Inventors: Robert Bruce Grupp, David Lee Fiorella, Thomas A. Carls, Samuel R. Browd, Adam Gabriel Jones, James Andrew Youngquist, Richard Earl Simpkinson
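
    As a simplified illustration of deriving an alignment parameter from the updated poses of two vertebral geometric features, the sketch below measures the angle between the features' local z axes (treated here as endplate normals). The choice of axis and of this particular segmental angle are assumptions for illustration, not the claimed method.

    ```python
    import numpy as np

    def sagittal_angle_deg(R_upper, R_lower, axis=2):
        """Angle between two vertebrae's endplate normals, taken as one axis
        (here the local z axis) of each geometric feature's updated pose."""
        n1, n2 = R_upper[:, axis], R_lower[:, axis]
        cosang = np.clip(n1 @ n2 / (np.linalg.norm(n1) * np.linalg.norm(n2)), -1.0, 1.0)
        return float(np.degrees(np.arccos(cosang)))

    def rot_x(deg):
        """Rotation about the x axis (the sagittal tilt axis in this toy setup)."""
        a = np.radians(deg)
        return np.array([[1, 0, 0],
                         [0, np.cos(a), -np.sin(a)],
                         [0, np.sin(a),  np.cos(a)]])

    # Toy usage: two vertebral features whose updated poses differ by a 35 degree
    # sagittal tilt, giving a 35 degree segmental angle.
    print(sagittal_angle_deg(rot_x(35.0), np.eye(3)))   # 35.0
    ```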
  • Patent number: 11741619
    Abstract: Mediated-reality imaging systems, methods, and devices are disclosed herein. In some embodiments, an imaging system includes (i) a camera array configured to capture intraoperative image data of a surgical scene in substantially real-time and (ii) a processing device communicatively coupled to the camera array. The processing device can be configured to synthesize a three-dimensional (3D) image corresponding to a virtual perspective of the scene based on the intraoperative image data from the cameras. The imaging system is further configured to receive and/or store preoperative image data, such as medical scan data corresponding to a portion of a patient in the scene. The processing device can register the preoperative image data to the intraoperative image data, and overlay the registered preoperative image data over the corresponding portion of the 3D image of the scene to present a mediated-reality view.
    Type: Grant
    Filed: January 5, 2021
    Date of Patent: August 29, 2023
    Assignee: Proprio, Inc.
    Inventors: Nava Aghdasi, James Andrew Youngquist
  • Patent number: 11734876
    Abstract: A method assigns weights to physical imager pixels in order to generate photorealistic images for virtual perspectives in real-time. The imagers are arranged in three-dimensional space such that they sparsely sample the light field within a scene of interest. This scene is defined by the overlapping fields of view of all the imagers or for subsets of imagers. The weights assigned to imager pixels are calculated based on the relative poses of the virtual perspective and physical imagers, properties of the scene geometry, and error associated with the measurement of geometry. This method is particularly useful for accurately rendering numerous synthesized perspectives within a digitized scene in real-time in order to create immersive, three-dimensional experiences for applications such as performing surgery, infrastructure inspection, or remote collaboration.
    Type: Grant
    Filed: February 4, 2022
    Date of Patent: August 22, 2023
    Assignee: Proprio, Inc.
    Inventors: James Andrew Youngquist, David Julio Colmenares, Adam Gabriel Jones
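
    A minimal sketch of the weighting idea, not the patented formulation: each physical camera's contribution to a virtual-view pixel is weighted by how closely its viewing ray to the reconstructed surface point agrees with the virtual camera's ray, discounted by a geometry-error term. The Gaussian falloff, the sigma, and the error penalty are all assumed functional forms.

    ```python
    import numpy as np

    def camera_weights(surface_pt, virtual_center, cam_centers, sigma_deg=10.0, geom_error=0.0):
        """Weight each physical camera by how closely its viewing ray to a surface
        point agrees with the virtual camera's ray, discounted by geometry error."""
        v_ray = surface_pt - virtual_center
        v_ray /= np.linalg.norm(v_ray)
        c_rays = surface_pt - cam_centers
        c_rays /= np.linalg.norm(c_rays, axis=1, keepdims=True)
        ang = np.degrees(np.arccos(np.clip(c_rays @ v_ray, -1.0, 1.0)))
        w = np.exp(-(ang / sigma_deg) ** 2) / (1.0 + geom_error)
        return w / w.sum()

    # Toy usage: three physical imagers; the one nearest the virtual viewpoint dominates.
    pt = np.array([0.0, 0.0, 1.0])                      # reconstructed surface point
    virtual = np.array([0.0, 0.0, 0.0])                 # virtual camera center
    cams = np.array([[0.02, 0.0, 0.0], [0.3, 0.0, 0.0], [-0.3, 0.2, 0.0]])
    w = camera_weights(pt, virtual, cams)
    print(w.round(3))                                   # heaviest weight on the first camera
    ```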
  • Publication number: 20230196595
    Abstract: Medical imaging systems, methods, and devices are disclosed herein. In some embodiments, an imaging system includes (i) a camera array configured to capture intraoperative image data of a surgical scene in substantially real-time and (ii) a processing device communicatively coupled to the camera array. The processing device can be configured to synthesize a three-dimensional (3D) image corresponding to a virtual perspective of the scene based on the intraoperative image data from the cameras. The imaging system is further configured to receive and/or store initial image data, such as medical scan data corresponding to a portion of a patient in the scene. The processing device can register the initial image data to the intraoperative image data, and overlay the registered initial image data over the corresponding portion of the 3D image of the scene to present a mediated-reality view.
    Type: Application
    Filed: December 19, 2022
    Publication date: June 22, 2023
    Inventors: Robert Bruce Grupp, Samuel R. Browd, James Andrew Youngquist, Nava Aghdasi, Theodores Lazarakis, Tze-Yuan Cheng, Adam Gabriel Jones
  • Patent number: 11682165
    Abstract: Methods of determining the depth of a scene and associated systems are disclosed herein. In some embodiments, a method can include augmenting depth data of a scene captured with a depth sensor with depth data from one or more images of the scene. For example, the method can include capturing image data of the scene with a plurality of cameras. The method can further include generating a point cloud representative of the scene based on the depth data from the depth sensor and identifying a missing region of the point cloud, such as a region occluded from the view of the depth sensor. The method can then include generating depth data for the missing region based on the image data. Finally, the depth data for the missing region can be merged with the depth data from the depth sensor to generate a merged point cloud representative of the scene.
    Type: Grant
    Filed: January 21, 2021
    Date of Patent: June 20, 2023
    Assignee: Proprio, Inc.
    Inventors: Thomas Ivan Nonn, David Julio Colmenares, James Andrew Youngquist
  • Patent number: 11612307
    Abstract: Systems and methods for capturing and rendering light fields for head-mounted displays are disclosed. A mediated-reality visualization system includes a head-mounted display assembly comprising a frame configured to be mounted to a user's head and a display device coupled to the frame. An imaging assembly separate and spaced apart from the head-mounted display assembly is configured to capture light-field data. A computing device in communication with the imaging assembly and the display device is configured to receive light-field data from the imaging assembly and render one or more virtual cameras. Images from the one or more virtual cameras are presented to a user via the display device.
    Type: Grant
    Filed: November 24, 2016
    Date of Patent: March 28, 2023
    Assignee: University of Washington
    Inventors: Joshua R. Smith, Samuel R. Browd, Rufus Griffin Nicoll, James Andrew Youngquist
  • Publication number: 20220301195
    Abstract: Camera arrays for mediated-reality systems and associated methods and systems are disclosed herein. In some embodiments, a camera array includes a support structure having a center, and a depth sensor mounted to the support structure proximate to the center. The camera array can further include a plurality of cameras mounted to the support structure radially outward from the depth sensor, and a plurality of trackers mounted to the support structure radially outward from the cameras. The cameras are configured to capture image data of a scene, and the trackers are configured to capture positional data of a tool within the scene. The image data and the positional data can be processed to generate a virtual perspective of the scene including a graphical representation of the tool at the determined position.
    Type: Application
    Filed: May 4, 2022
    Publication date: September 22, 2022
    Inventors: David Julio Colmenares, James Andrew Youngquist, Adam Gabriel Jones, Thomas Ivan Nonn, Jay Peterson
  • Publication number: 20220265385
    Abstract: A mediated-reality system for surgical applications incorporates pre-operative images and real-time captured images of a surgical site into a visualization presented on a head-mounted display worn by a surgeon during a surgical procedure. The mediated-reality system tracks the surgeon's head position and generates real-time images of the surgical site from a virtual camera perspective corresponding to the surgeon's head position to mimic the natural viewpoint of the surgeon. The mediated-reality system furthermore aligns the pre-operative images with the real-time images from the virtual camera perspective and presents a mediated-reality visualization of the surgical site with the aligned pre-operative three-dimensional images or a selected portion thereof overlaid on the real-time images representing the virtual camera perspective.
    Type: Application
    Filed: May 13, 2022
    Publication date: August 25, 2022
    Inventors: Samuel R. Browd, James Andrew Youngquist, Adam Gabriel Jones
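
    The head-tracking behavior described above reduces to a pose chain: the tracked head pose is combined with a fixed head-to-eye offset and inverted to obtain the virtual camera's extrinsics, so the rendered viewpoint follows the surgeon's head. The sketch below uses hypothetical transform names and illustrative offsets, not the system's actual conventions.

    ```python
    import numpy as np

    def make_transform(R, t):
        """Pack a rotation and translation into a 4x4 homogeneous transform."""
        T = np.eye(4)
        T[:3, :3], T[:3, 3] = R, t
        return T

    def virtual_camera_extrinsics(T_world_from_head, T_head_from_eye):
        """Virtual camera pose that follows the tracked head: chain the tracked head
        pose with a fixed head-to-eye offset, then invert to get world -> camera."""
        T_world_from_eye = T_world_from_head @ T_head_from_eye
        return np.linalg.inv(T_world_from_eye)

    # Toy usage: head tracked 0.5 m above the surgical site, viewpoint 8 cm in front of
    # the tracked reference point; a world point maps into the virtual camera frame.
    T_wh = make_transform(np.eye(3), np.array([0.0, 0.0, 0.5]))
    T_he = make_transform(np.eye(3), np.array([0.0, 0.0, 0.08]))
    T_cw = virtual_camera_extrinsics(T_wh, T_he)
    print(T_cw @ np.array([0.0, 0.0, 0.0, 1.0]))     # the site origin seen from the viewpoint
    ```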
  • Publication number: 20220215532
    Abstract: Mediated-reality imaging systems, methods, and devices are disclosed herein. In some embodiments, an imaging system includes (i) a camera array configured to capture intraoperative image data of a surgical scene in substantially real-time and (ii) a processing device communicatively coupled to the camera array. The processing device can be configured to synthesize a three-dimensional (3D) image corresponding to a virtual perspective of the scene based on the intraoperative image data from the cameras. The imaging system is further configured to receive and/or store preoperative image data, such as medical scan data corresponding to a portion of a patient in the scene. The processing device can register the preoperative image data to the intraoperative image data, and overlay the registered preoperative image data over the corresponding portion of the 3D image of the scene to present a mediated-reality view.
    Type: Application
    Filed: January 5, 2021
    Publication date: July 7, 2022
    Inventors: Nava Aghdasi, James Andrew Youngquist
  • Patent number: 11376096
    Abstract: A mediated-reality system for surgical applications incorporates pre-operative images and real-time captured images of a surgical site into a visualization presented on a head-mounted display worn by a surgeon during a surgical procedure. The mediated-reality system tracks the surgeon's head position and generates real-time images of the surgical site from a virtual camera perspective corresponding to the surgeon's head position to mimic the natural viewpoint of the surgeon. The mediated-reality system furthermore aligns the pre-operative images with the real-time images from the virtual camera perspective and presents a mediated-reality visualization of the surgical site with the aligned pre-operative three-dimensional images or a selected portion thereof overlaid on the real-time images representing the virtual camera perspective.
    Type: Grant
    Filed: August 17, 2020
    Date of Patent: July 5, 2022
    Assignee: Proprio, Inc.
    Inventors: Samuel R. Browd, James Andrew Youngquist, Adam Gabriel Jones
  • Patent number: 11354810
    Abstract: Camera arrays for mediated-reality systems and associated methods and systems are disclosed herein. In some embodiments, a camera array includes a support structure having a center, and a depth sensor mounted to the support structure proximate to the center. The camera array can further include a plurality of cameras mounted to the support structure radially outward from the depth sensor, and a plurality of trackers mounted to the support structure radially outward from the cameras. The cameras are configured to capture image data of a scene, and the trackers are configured to capture positional data of a tool within the scene. The image data and the positional data can be processed to generate a virtual perspective of the scene including a graphical representation of the tool at the determined position.
    Type: Grant
    Filed: February 11, 2021
    Date of Patent: June 7, 2022
    Assignee: Proprio, Inc.
    Inventors: David Julio Colmenares, James Andrew Youngquist, Adam Gabriel Jones, Thomas Ivan Nonn, Jay Peterson
  • Publication number: 20220157011
    Abstract: A method assigns weights to physical imager pixels in order to generate photorealistic images for virtual perspectives in real-time. The imagers are arranged in three-dimensional space such that they sparsely sample the light field within a scene of interest. This scene is defined by the overlapping fields of view of all the imagers or for subsets of imagers. The weights assigned to imager pixels are calculated based on the relative poses of the virtual perspective and physical imagers, properties of the scene geometry, and error associated with the measurement of geometry. This method is particularly useful for accurately rendering numerous synthesized perspectives within a digitized scene in real-time in order to create immersive, three-dimensional experiences for applications such as performing surgery, infrastructure inspection, or remote collaboration.
    Type: Application
    Filed: February 4, 2022
    Publication date: May 19, 2022
    Inventors: James Andrew Youngquist, David Julio Colmenares, Adam Gabriel Jones
  • Patent number: 11303823
    Abstract: A camera array for a mediated-reality system includes a plurality of hexagonal cells arranged in a honeycomb pattern in which a pair of inner cells include respective edges adjacent to each other and a pair of outer cells are separated from each other by the inner cells. A plurality of cameras are mounted within each of the plurality of hexagonal cells. The plurality of cameras include at least one camera of a first type and at least one camera of a second type. The camera of the first type may have a longer focal length than the camera of the second type.
    Type: Grant
    Filed: March 3, 2020
    Date of Patent: April 12, 2022
    Assignee: Proprio, Inc.
    Inventors: James Andrew Youngquist, David Julio Colmenares, Adam Gabriel Jones
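
    As a loose geometric illustration of the entry above, the sketch below lays out four hexagonal cell centers in a single row (two adjacent inner cells flanked by two outer cells) and compares the horizontal fields of view implied by two focal lengths, since a longer focal length yields a narrower view. The cell spacing, sensor width, and focal lengths are illustrative values, not the patented geometry.

    ```python
    import numpy as np

    def hex_cell_centers(cell_width):
        """Centers of four hexagonal cells in a simplified in-line honeycomb row: the two
        inner cells share an edge, and the two outer cells are separated by the inner pair."""
        s = cell_width                       # flat-to-flat width = spacing between adjacent cells
        return np.array([[-1.5 * s, 0.0],    # outer
                         [-0.5 * s, 0.0],    # inner
                         [ 0.5 * s, 0.0],    # inner
                         [ 1.5 * s, 0.0]])   # outer

    def horizontal_fov_deg(focal_mm, sensor_width_mm):
        """Field of view of a pinhole camera: a longer focal length gives a narrower view."""
        return float(np.degrees(2 * np.arctan(sensor_width_mm / (2 * focal_mm))))

    print(hex_cell_centers(cell_width=0.12))   # cell centers in meters
    print(horizontal_fov_deg(50.0, 11.3))      # first camera type (longer focal length, ~13 deg)
    print(horizontal_fov_deg(25.0, 11.3))      # second camera type (wider view, ~25 deg)
    ```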
  • Patent number: 11295460
    Abstract: Mediated-reality imaging systems, methods, and devices are disclosed herein. In some embodiments, an imaging system includes (i) a camera array configured to capture intraoperative image data of a surgical scene in substantially real-time and (ii) a processing device communicatively coupled to the camera array. The processing device can be configured to synthesize a three-dimensional (3D) image corresponding to a virtual perspective of the scene based on the intraoperative image data from the cameras. The imaging system is further configured to receive and/or store preoperative image data, such as medical scan data corresponding to a portion of a patient in the scene. The processing device can register the preoperative image data to the intraoperative image data, and overlay the registered preoperative image data over the corresponding portion of the 3D image of the scene to present a mediated-reality view.
    Type: Grant
    Filed: January 4, 2021
    Date of Patent: April 5, 2022
    Assignee: Proprio, Inc.
    Inventors: Nava Aghdasi, James Andrew Youngquist