Patents by Inventor Shinichi J. Takayama

Shinichi J. Takayama has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Publication number: 20230309832
    Abstract: Novel tools and techniques are provided for presenting patient information to a user. In some embodiments, a computer system may: receive device data associated with one or more devices configured to perform a cardiac shunting procedure to change a cardiac blood flow pattern to improve cardiac blood flow efficiency or cardiac pumping efficiency; receive one or more imaging data associated with one or more imaging devices configured to generate images of one or more internal portions of the patient; analyze the device data and the imaging data; map the device data and the imaging data to a multi-dimensional representation of the one or more internal portions of the patient; generate one or more image-based outputs based at least in part on the mapping; and present, using a user experience (“UX”) device, the generated one or more image-based outputs.
    Type: Application
    Filed: May 28, 2021
    Publication date: October 5, 2023
    Inventors: Peter N. Braido, Randal C. Schulhauser, Richard J. O'Brien, Anthony W. Rorvick, Zhongping Yang, Nicolas Coulombe, David A. Anderson, Angela M. Liu, Robert Kowal, Brian D. Pederson, Angela N. Burgess, Shinichi J. Takayama
  • Publication number: 20210369394
    Abstract: Novel tools and techniques are provided for implementing an intelligent assistance (“IA”) ecosystem. In various embodiments, a computing system might receive device data associated with a device(s) configured to perform a task(s), might receive sensor data associated with sensors configured to monitor at least one of biometric, biological, genetic, cellular, or procedure-related data of a subject, and might receive imaging data associated with an imaging device(s) configured to generate images of a portion(s) of the subject. The computing system might analyze the received device data, sensor data, and imaging data (collectively “received data”), might map two or more of the received data to a 3D or 4D representation of the portion(s) of the subject based on the analysis, and might generate and present (using a user experience (“UX”) device) one or more extended reality (“XR”) images or experiences based on the mapping.
    Type: Application
    Filed: May 28, 2021
    Publication date: December 2, 2021
    Inventors: Peter N. Braido, Mina S. Fahim, Ross D. Hinrichsen, Shinichi J. Takayama, Monica M. Bolin
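Both abstracts above describe the same high-level data flow: receive device data and imaging (or sensor) data, analyze it, map the results onto a multi-dimensional representation of the patient or subject, and present image-based or XR output on a user experience ("UX") device. The sketch below is a minimal, hypothetical illustration of that flow only; it is not the patented implementation, and every class, function, and field name in it is an assumption introduced for illustration.

# Hypothetical sketch of the receive -> analyze -> map -> present flow
# described in the abstracts above. All names are illustrative assumptions,
# not APIs or methods from the patent filings.
from dataclasses import dataclass, field
from typing import Dict, List


@dataclass
class DeviceData:
    """Data reported by a procedure device (e.g., a cardiac shunting tool)."""
    device_id: str
    readings: Dict[str, float]


@dataclass
class ImagingData:
    """Data produced by an imaging device for an internal portion of the patient."""
    modality: str                  # illustrative label, e.g. "echo"
    frame: List[List[float]]       # simplified stand-in for an image frame


@dataclass
class Representation:
    """Simplified stand-in for the multi-dimensional patient representation."""
    layers: Dict[str, object] = field(default_factory=dict)


def analyze(device_data: List[DeviceData],
            imaging_data: List[ImagingData]) -> Dict[str, object]:
    """Combine device readings and imaging frames into analysis results."""
    return {
        "device_summary": {d.device_id: d.readings for d in device_data},
        "image_count": len(imaging_data),
    }


def map_to_representation(analysis: Dict[str, object]) -> Representation:
    """Map the analysis results onto the multi-dimensional representation."""
    rep = Representation()
    rep.layers["devices"] = analysis["device_summary"]
    rep.layers["images"] = analysis["image_count"]
    return rep


def present_on_ux_device(rep: Representation) -> None:
    """Stand-in for rendering image-based or XR output on a UX device."""
    for name, layer in rep.layers.items():
        print(f"[UX] layer={name}: {layer}")


if __name__ == "__main__":
    devices = [DeviceData("shunt-tool-1", {"flow_ml_per_min": 42.0})]
    images = [ImagingData("echo", [[0.0, 0.1], [0.2, 0.3]])]
    present_on_ux_device(map_to_representation(analyze(devices, images)))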