Patents Examined by Ping Y Hsieh
  • Patent number: 12196833
    Abstract: Systems and methods for generative adversarial networks (GANs) to remove artifacts from undersampled magnetic resonance (MR) images are described. The process of training the GAN can include providing undersampled 3D MR images to the generator model, providing the generated example and a real example to the discriminator model, applying adversarial loss, L2 loss, and structural similarity index measure loss to the generator model based on a classification output by the discriminator model, and repeating until the generator model has been trained to remove the artifacts from the undersampled 3D MR images. At runtime, the trained generator model of the GAN can generate artifact-free images or parameter maps from undersampled MRI data of a patient.
    Type: Grant
    Filed: May 19, 2021
    Date of Patent: January 14, 2025
    Assignees: Siemens Healthineers AG, The Regents of the University of California
    Inventors: Peng Hu, Xiaodong Zhong, Chang Gao, Vahid Ghodrati
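A minimal sketch (not the patented implementation) of a generator objective that combines the adversarial, L2, and structural-similarity terms named in the abstract of patent 12196833 above. The loss weights and the simplified whole-image SSIM are illustrative assumptions.

```python
import torch
import torch.nn.functional as F

def simplified_ssim(x, y, c1=0.01**2, c2=0.03**2):
    """Global (whole-image) SSIM approximation; practical systems use windowed SSIM."""
    mu_x, mu_y = x.mean(), y.mean()
    var_x, var_y = x.var(), y.var()
    cov_xy = ((x - mu_x) * (y - mu_y)).mean()
    return ((2 * mu_x * mu_y + c1) * (2 * cov_xy + c2)) / (
        (mu_x**2 + mu_y**2 + c1) * (var_x + var_y + c2))

def generator_loss(disc_out_fake, fake_img, real_img,
                   w_adv=0.01, w_l2=1.0, w_ssim=0.1):
    # Adversarial term: push the discriminator to classify the generated image as real.
    adv = F.binary_cross_entropy_with_logits(
        disc_out_fake, torch.ones_like(disc_out_fake))
    l2 = F.mse_loss(fake_img, real_img)              # pixel-wise fidelity
    ssim = 1.0 - simplified_ssim(fake_img, real_img)  # structural similarity term
    return w_adv * adv + w_l2 * l2 + w_ssim * ssim
```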
  • Patent number: 12198402
    Abstract: A hazard estimation unit 21 estimates a likelihood of an occurrence of an event according to a hazard function, with respect to each of a plurality of pieces of time-series data that are a series of multiple pieces of data to which an event occurrence time relevant to the data is given in advance and that include time-series data in which the event did not occur and time-series data in which the event occurred. A parameter estimation unit 22 estimates a parameter of the hazard function so as to optimize a likelihood function expressed by including the event occurrence time given with respect to each of the plurality of pieces of time-series data and the likelihood of the occurrence of the event estimated with respect to each of the plurality of pieces of time-series data.
    Type: Grant
    Filed: May 22, 2019
    Date of Patent: January 14, 2025
    Assignee: NIPPON TELEGRAPH AND TELEPHONE CORPORATION
    Inventors: Yoshiaki Takimoto, Yusuke Tanaka, Takeshi Kurashima, Shuhei Yamamoto, Maya Okawa, Hiroyuki Toda
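For the likelihood-based parameter estimation described in patent 12198402 above, here is a toy worked example under a constant-hazard assumption: sequences in which the event occurred contribute the event density, and sequences without an event contribute the survival probability. The model choice and the data are illustrative, not the patented method.

```python
import numpy as np

def neg_log_likelihood(lam, times, event_observed):
    """Constant hazard: density lam*exp(-lam*t) if the event occurred,
    survival exp(-lam*t) if the sequence ended without an event (censored)."""
    times = np.asarray(times, dtype=float)
    event_observed = np.asarray(event_observed, dtype=bool)
    log_lik = np.where(event_observed, np.log(lam) - lam * times, -lam * times)
    return -log_lik.sum()

times = [2.0, 5.0, 3.5, 8.0]              # event or censoring time per sequence (made up)
event_observed = [True, False, True, False]

# Closed-form MLE for this toy model: observed events divided by total observed time.
lam_hat = sum(event_observed) / sum(times)
print(lam_hat, neg_log_likelihood(lam_hat, times, event_observed))
```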
  • Patent number: 12198398
    Abstract: Methods and systems are disclosed for performing operations for transferring motion from one real-world object to another in real-time. The operations comprise receiving a first video that includes a depiction of a first real-world object and extracting an appearance of the first real-world object from the video. The operations comprise obtaining a second video that includes a depiction of a second real-world object and extracting motion of the second real-world object from the second video. The operations comprise applying the motion of the second real-world object extracted from the second video to the appearance of the first real-world object extracted from the first video. The operations comprise generating a third video that includes a depiction of the first real-world object having the appearance of the first real-world object and the motion of the second real-world object.
    Type: Grant
    Filed: December 21, 2021
    Date of Patent: January 14, 2025
    Assignee: Snap Inc.
    Inventors: Avihay Assouline, Itamar Berger, Nir Malbin, Gal Sasson
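A toy, hedged sketch of the motion-transfer idea in patent 12198398 above: take the appearance (a single frame) from one clip and drive it with per-frame displacements extracted from another clip. Real systems use learned appearance and motion models; here "motion" is simply the frame-to-frame shift of an object centroid.

```python
import numpy as np

def object_centroid(frame, thresh=0.5):
    ys, xs = np.nonzero(frame > thresh)
    return np.array([ys.mean(), xs.mean()]) if len(ys) else np.zeros(2)

def transfer_motion(appearance_frame, driving_frames):
    base = object_centroid(driving_frames[0])
    out = []
    for frame in driving_frames:
        dy, dx = (object_centroid(frame) - base).round().astype(int)
        out.append(np.roll(appearance_frame, shift=(dy, dx), axis=(0, 1)))
    return np.stack(out)   # "third video": appearance of clip A, motion of clip B

appearance = np.zeros((64, 64)); appearance[10:20, 10:20] = 1.0
driving = np.zeros((5, 64, 64))
for t in range(5):
    driving[t, 30:40, 5 + 4 * t:15 + 4 * t] = 1.0
print(transfer_motion(appearance, driving).shape)
```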
  • Patent number: 12190549
    Abstract: There is provided an information processing device, an image processing device, an encoding device, a decoding device, an electronic apparatus, an information processing method, or a program for processing attribute information of each point of a point cloud that represents a three-dimensional object as a set of points. The attribute information is hierarchized by recursively repeating processing of deriving, for points classified as predictive points or reference points, a difference value between the attribute information of a predictive point and a predictive value derived using the attribute information of a reference point. During this hierarchization, a first level is hierarchized using a first hierarchization method, and a second level different from the first level is hierarchized using a second hierarchization method different from the first hierarchization method.
    Type: Grant
    Filed: June 18, 2020
    Date of Patent: January 7, 2025
    Assignee: SONY GROUP CORPORATION
    Inventors: Satoru Kuma, Ohji Nakagami, Koji Yano, Hiroyuki Yasuda, Tsuyoshi Kato
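A hedged illustration (not the codec described in patent 12190549 above) of level-of-detail style attribute hierarchization: points are repeatedly split into reference and predictive sets, each predictive attribute is predicted from its nearest reference point, and only the residual is kept. Two different split rules stand in for the two hierarchization methods mentioned in the abstract.

```python
import numpy as np

def split_every_other(idx, xyz):     # stand-in for a "first hierarchization method"
    return idx[::2], idx[1::2]

def split_by_coordinate(idx, xyz):   # stand-in for a "second hierarchization method"
    order = np.argsort(xyz[idx, 0])
    half = len(idx) // 2
    return idx[order[half:]], idx[order[:half]]

def hierarchize(xyz, attr, levels=3):
    idx = np.arange(len(xyz))
    residual_layers = []
    for level in range(levels):
        split = split_every_other if level == 0 else split_by_coordinate
        ref, pred = split(idx, xyz)
        if len(pred) == 0 or len(ref) == 0:
            break
        # Predict each predictive point's attribute from its nearest reference point.
        d = np.linalg.norm(xyz[pred][:, None] - xyz[ref][None], axis=-1)
        prediction = attr[ref][d.argmin(axis=1)]
        residual_layers.append(attr[pred] - prediction)
        idx = ref                       # recurse on the reference points only
    return residual_layers, attr[idx]   # residuals per level + coarsest-level attributes

xyz = np.random.rand(32, 3)
attr = np.random.rand(32)
layers, base = hierarchize(xyz, attr)
print([len(l) for l in layers], len(base))
```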
  • Patent number: 12190508
    Abstract: Described herein are systems, methods, and instrumentalities associated with medical image enhancement. The medical image may include an object of interest and the techniques disclosed herein may be used to identify the object and enhance a contrast between the object and its surrounding area by adjusting at least the pixels associated with the object. The object identification may be performed using an image filter, a segmentation mask, and/or a deep neural network trained to separate the medical image into multiple layers that respectively include the object of interest and the surrounding area. Once identified, the pixels of the object may be manipulated in various ways to increase the visibility of the object. These may include, for example, adding a constant value to the pixels of the object, applying a sharpening filter to those pixels, increasing the weight of those pixels, and/or smoothing the edge areas surrounding the object of interest.
    Type: Grant
    Filed: April 21, 2022
    Date of Patent: January 7, 2025
    Assignee: Shanghai United Imaging Intelligence Co., Ltd.
    Inventors: Yikang Liu, Shanhui Sun, Terrence Chen, Zhang Chen, Xiao Chen
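A minimal sketch of the pixel manipulations listed in the abstract of patent 12190508 above, assuming a binary object mask is already available: raise the object's pixel values, unsharp-mask them, and feather the mask edge. The parameter values are illustrative, not from the patent.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def enhance_object(image, mask, boost=0.1, sharpen_amount=0.5, edge_sigma=2.0):
    image = image.astype(float)
    soft_mask = gaussian_filter(mask.astype(float), sigma=edge_sigma)   # smooth edge area
    sharpened = image + sharpen_amount * (image - gaussian_filter(image, 1.0))  # sharpen
    enhanced = sharpened + boost                     # constant offset on object pixels
    return (1 - soft_mask) * image + soft_mask * enhanced

img = np.random.rand(128, 128)
mask = np.zeros((128, 128), dtype=bool); mask[40:80, 40:80] = True
print(enhance_object(img, mask).shape)
```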
  • Patent number: 12190492
    Abstract: A surface defect detection method for a primary cable of an aerostat based on few-shot learning includes the following steps. A hardware and software system environment is set up. A surface image of the primary cable is acquired and processed to obtain augmented surface image data, which is labeled to construct a surface defect sample library. A defect detection network model is designed and constructed, and then is trained based on the surface defect sample library. A query set in the surface defect sample library is processed with the trained defect detection network model to obtain shallow texture features and high-level semantic features. The shallow texture features are transferred to the high-level semantic features through skip connection. The surface defect detection data under different detection operation modes are obtained at a terminal. This application also provides a surface defect detection system.
    Type: Grant
    Filed: July 5, 2024
    Date of Patent: January 7, 2025
    Assignee: 38TH RESEARCH INSTITUTE, CHINA ELECTRONICS TECHNOLOGY GROUP CORPORATION
    Inventors: Hongqi Zhang, Yue Tian, Xingyu Chen, Liangxi Chen
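A hedged sketch of one idea in patent 12190492 above: fusing shallow texture features with high-level semantic features through a skip connection. The layer sizes and the segmentation-style head are assumptions, not the patented network.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SkipFusionNet(nn.Module):
    def __init__(self, in_ch=3, num_classes=2):
        super().__init__()
        self.shallow = nn.Sequential(nn.Conv2d(in_ch, 16, 3, padding=1), nn.ReLU())
        self.deep = nn.Sequential(
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU())
        self.head = nn.Conv2d(16 + 64, num_classes, 1)

    def forward(self, x):
        texture = self.shallow(x)                     # shallow texture features
        semantic = self.deep(texture)                 # high-level semantic features
        semantic_up = F.interpolate(semantic, size=texture.shape[-2:],
                                    mode="bilinear", align_corners=False)
        fused = torch.cat([texture, semantic_up], dim=1)   # skip connection
        return self.head(fused)

print(SkipFusionNet()(torch.randn(1, 3, 64, 64)).shape)
```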
  • Patent number: 12190515
    Abstract: A method of operation of a compute system includes: detecting a skin area in a patient image; segmenting the skin area into a segmented image having an acne pimple at the center; generating a target pixel array from the segmented image, including separating a plurality of the acne pimples that are adjacent in the segmented image; identifying an acne characterization of the acne pimples, including an area of each acne pimple and an acne score; and assembling a user interface display from the acne characterization for display on a device.
    Type: Grant
    Filed: January 24, 2024
    Date of Patent: January 7, 2025
    Assignee: BelleTorus Corporation
    Inventors: Tien Dung Nguyen, Thi Thu Hang Nguyen, Léa Mathilde Gazeau, Tat Dat Tô, Dinh Van Han
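A toy sketch loosely following patent 12190515 above (not BelleTorus's method): separate adjacent lesions with connected-component labeling, then report per-lesion area and a score. The threshold and the score formula are illustrative assumptions.

```python
import numpy as np
from scipy.ndimage import label

def characterize_lesions(redness_map, thresh=0.6):
    lesion_mask = redness_map > thresh
    labeled, n = label(lesion_mask)            # separates adjacent lesions
    report = []
    for i in range(1, n + 1):
        area = int((labeled == i).sum())       # area of each lesion in pixels
        score = min(10, area // 5)             # naive severity score
        report.append({"lesion": i, "area_px": area, "score": score})
    return report

redness = np.zeros((64, 64)); redness[10:14, 10:14] = 0.9; redness[30:40, 30:36] = 0.8
print(characterize_lesions(redness))
```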
  • Patent number: 12183102
    Abstract: Optical character recognition (OCR) based systems and methods for extracting and automatically evaluating contextual and identification information and associated metadata from an image utilizing enhanced image processing techniques and image segmentation. A unique, comprehensive integration with an account provider system and other third party systems may be utilized to automate the execution of an action associated with an online account. The system may evaluate text extracted from a captured image utilizing machine learning processing to classify an image type for the captured image, and select an optical character recognition model based on the classified image type. The system may compare a data value extracted from the recognized text for a particular data type with an associated online account data value for the particular data type to evaluate whether to automatically execute an action associated with the online account linked to the image based on the data value comparison.
    Type: Grant
    Filed: July 22, 2022
    Date of Patent: December 31, 2024
    Assignee: CAPITAL ONE SERVICES, LLC
    Inventors: Ian Whitestone, Brian Chun-Lai So, Sourabh Mittal
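A hedged sketch of the decision flow described in patent 12183102 above: pick an OCR model based on a classified image type, pull one data value out of the recognized text, and compare it with the value already on the online account. The image types, stand-in OCR models, and field patterns are illustrative, not Capital One's implementation.

```python
import re

OCR_MODELS = {
    # Stand-ins for OCR models selected per classified image type.
    "utility_bill": lambda image: "Account holder: J. Doe  Amount due: $142.50",
    "pay_stub":     lambda image: "Employer: Acme  Net pay: $2,310.00",
}

def extract_amount(text):
    match = re.search(r"\$([\d,]+\.\d{2})", text)
    return float(match.group(1).replace(",", "")) if match else None

def should_execute_action(image, image_type, account_value, tolerance=0.01):
    text = OCR_MODELS[image_type](image)      # OCR model chosen by classified image type
    extracted = extract_amount(text)
    return extracted is not None and abs(extracted - account_value) <= tolerance

print(should_execute_action(image=None, image_type="utility_bill", account_value=142.50))
```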
  • Patent number: 12183090
    Abstract: According to one aspect, intersection scenario description may be implemented by receiving a video stream of a surrounding environment of an ego-vehicle, extracting tracklets and appearance features associated with dynamic objects from the surrounding environment, extracting motion features associated with dynamic objects from the surrounding environment based on the corresponding tracklets, passing the appearance features through an appearance neural network to generate an appearance model, passing the motion features through a motion neural network to generate a motion model, passing the appearance model and the motion model through a fusion network to generate a fusion output, passing the fusion output through a classifier to generate a classifier output, and passing the classifier output through a loss function to generate a multi-label classification output associated with the ego-vehicle, dynamic objects, and corresponding motion paths.
    Type: Grant
    Filed: June 30, 2022
    Date of Patent: December 31, 2024
    Assignee: HONDA MOTOR CO., LTD.
    Inventors: Nakul Agarwal, Yi-Ting Chen
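A minimal two-stream sketch echoing the structure in patent 12183090 above: an appearance network, a motion network, a fusion network, a classifier, and a multi-label loss. Feature dimensions and layer choices are assumptions, not Honda's model.

```python
import torch
import torch.nn as nn

class ScenarioClassifier(nn.Module):
    def __init__(self, appearance_dim=256, motion_dim=64, num_labels=12):
        super().__init__()
        self.appearance_net = nn.Sequential(nn.Linear(appearance_dim, 128), nn.ReLU())
        self.motion_net = nn.Sequential(nn.Linear(motion_dim, 128), nn.ReLU())
        self.fusion = nn.Sequential(nn.Linear(256, 128), nn.ReLU())
        self.classifier = nn.Linear(128, num_labels)

    def forward(self, appearance_feats, motion_feats):
        a = self.appearance_net(appearance_feats)     # appearance model
        m = self.motion_net(motion_feats)             # motion model
        fused = self.fusion(torch.cat([a, m], dim=-1))  # fusion output
        return self.classifier(fused)                 # logits per scenario label

model = ScenarioClassifier()
logits = model(torch.randn(4, 256), torch.randn(4, 64))
loss = nn.BCEWithLogitsLoss()(logits, torch.randint(0, 2, (4, 12)).float())  # multi-label
print(logits.shape, loss.item())
```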
  • Patent number: 12181856
    Abstract: A method for collecting machine data from a machine comprising the following steps: collecting image information displayed on a graphical user interface of a machine and transmitting the collected information to a computer unit; masking the collected information or information derived therefrom to define data regions; extracting alphanumeric characters from at least one data region by means of a text recognition program; writing the alphanumeric characters into a data structure; and storing or outputting the data structure.
    Type: Grant
    Filed: January 9, 2020
    Date of Patent: December 31, 2024
    Assignee: Technische Hochschule Deggendorf
    Inventors: Florian Schweiger, Ilja Fuchs, Andreas Grzemba
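A hedged sketch of the collection pipeline in patent 12181856 above: mask the captured screen image into named data regions, run a text recognizer on each crop, and write the results into a storable data structure. The region coordinates and the trivial recognizer below are placeholders for a real OCR engine.

```python
import json
import numpy as np

# Hypothetical data regions (y0, x0, y1, x1) on the machine's GUI screenshot.
DATA_REGIONS = {"spindle_speed": (10, 10, 40, 120), "part_count": (60, 10, 90, 120)}

def recognize_text(crop):
    # Placeholder: a deployment would call a text recognition program (e.g. Tesseract).
    return f"{int(crop.mean() * 1000)}"

def collect_machine_data(screen_image):
    record = {}
    for name, (y0, x0, y1, x1) in DATA_REGIONS.items():   # masking into data regions
        record[name] = recognize_text(screen_image[y0:y1, x0:x1])
    return json.dumps(record)                             # data structure for storage/output

print(collect_machine_data(np.random.rand(100, 200)))
```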
  • Patent number: 12176096
    Abstract: Approaches for analyzing an input image and providing one or more outputs related to the input image are provided. In accordance with an exemplary embodiment, an input image may be received and analyzed, using a trained machine learning model, to generate an inference related to the image. Based, at least in part, upon the generated inference, one or more reports related to the inference can be generated and provided for presentation on a user device. A user can then interact with the computer system in a conversational manner about the report to generate additional reports or insights related to the input image.
    Type: Grant
    Filed: March 25, 2024
    Date of Patent: December 24, 2024
    Assignee: Northwestern Memorial Healthcare
    Inventor: Mozziyar Etemadi
  • Patent number: 12175796
    Abstract: A method includes displaying, via a display, an environment that includes a representation of a person associated with the device. The representation includes a virtual face with virtual facial features corresponding to respective physical facial features of the person associated with the device. The method includes detecting, via a sensor, a change in a physical facial feature of the person associated with the device. The physical facial feature indicates a physical facial expression of the person. In response to determining that the physical facial expression breaches a criterion, the method includes modifying one or more virtual facial features of the virtual face so that the virtual face indicates a virtual facial expression that satisfies the criterion.
    Type: Grant
    Filed: October 7, 2021
    Date of Patent: December 24, 2024
    Assignee: APPLE INC.
    Inventor: Ian M. Richter
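A toy sketch of the idea in patent 12175796 above: when a detected physical expression breaches a criterion (here, a frown intensity above a limit), remap the avatar's virtual facial features so the rendered expression satisfies it. The blendshape names and the clamping rule are illustrative assumptions.

```python
def update_virtual_face(detected_blendshapes, frown_limit=0.4):
    virtual = dict(detected_blendshapes)              # start by mirroring the person
    if virtual.get("brow_lower", 0.0) > frown_limit:  # criterion breached
        virtual["brow_lower"] = frown_limit           # soften the virtual frown
        virtual["mouth_smile"] = max(virtual.get("mouth_smile", 0.0), 0.2)
    return virtual

print(update_virtual_face({"brow_lower": 0.8, "mouth_smile": 0.0}))
```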
  • Patent number: 12175671
    Abstract: According to the present application, a computer-implemented method of predicting thyroid eye disease is disclosed. The method comprises: preparing a conjunctival hyperemia prediction model, a conjunctival edema prediction model, a lacrimal edema prediction model, an eyelid redness prediction model, and an eyelid edema prediction model, obtaining a facial image of an object, obtaining a first processed image and a second processed image from the facial image, wherein the first processed image is different from the second processed image, obtaining predicted values for each of a conjunctival hyperemia, a conjunctival edema and a lacrimal edema by applying the first processed image to the conjunctival hyperemia prediction model, the conjunctival edema prediction model, and the lacrimal edema prediction model, and obtaining predicted values for each of an eyelid redness and an eyelid edema by applying the second processed image to the eyelid redness prediction model and the eyelid edema prediction model.
    Type: Grant
    Filed: July 14, 2023
    Date of Patent: December 24, 2024
    Assignee: THYROSCOPE INC.
    Inventors: Kyubo Shin, Jaemin Park, Jongchan Kim
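A hedged sketch of the scoring flow in patent 12175671 above: two differently processed crops of a facial image feed two groups of per-sign prediction models, and the results are combined. The crop coordinates, the stub models, and the count-of-positive-signs rule are assumptions, not THYROSCOPE's models.

```python
import numpy as np

def first_processing(face):  return face[20:60, 10:90]   # e.g. conjunctiva/lacrimal crop
def second_processing(face): return face[5:45, 10:90]    # e.g. eyelid crop

stub_model = lambda img: float(img.mean())                # stand-in for a trained model
GROUP_1 = {"conjunctival_hyperemia": stub_model, "conjunctival_edema": stub_model,
           "lacrimal_edema": stub_model}
GROUP_2 = {"eyelid_redness": stub_model, "eyelid_edema": stub_model}

def predict_signs(face_image, threshold=0.5):
    crop1, crop2 = first_processing(face_image), second_processing(face_image)
    scores = {name: m(crop1) for name, m in GROUP_1.items()}   # first processed image
    scores.update({name: m(crop2) for name, m in GROUP_2.items()})  # second processed image
    scores["positive_signs"] = sum(v > threshold for v in scores.values())
    return scores

print(predict_signs(np.random.rand(80, 100)))
```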
  • Patent number: 12175771
    Abstract: A control system for a vehicle includes a camera mounted on the vehicle and configured to take an image of an occupant of the vehicle, an anti-droplet protective equipment providing device mounted on the vehicle and configured to provide anti-droplet protective equipment to the occupant, a determination unit configured to determine whether the occupant is wearing the anti-droplet protective equipment based on the image of the occupant taken by the camera, and a provision control unit configured to provide the anti-droplet protective equipment to the occupant with the anti-droplet protective equipment providing device when the determination unit determines that the occupant is not wearing the anti-droplet protective equipment.
    Type: Grant
    Filed: March 24, 2022
    Date of Patent: December 24, 2024
    Assignee: TOYOTA JIDOSHA KABUSHIKI KAISHA
    Inventors: Ryota Tomizawa, Shozo Takaba, Ayako Shimizu, Hojung Jung, Daisuke Sato, Yasuhiro Kobatake
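An illustrative control loop only, following patent 12175771 above at a high level: a detector decides from the cabin image whether the occupant is wearing the protective equipment, and the providing device is triggered when not. The detector and dispenser here are stubs, not Toyota's implementation.

```python
def is_wearing_equipment(occupant_image) -> bool:
    # Stand-in for an image-based determination unit run on the camera frame.
    return occupant_image.get("mask_visible", False)

def control_step(occupant_image, providing_device):
    if not is_wearing_equipment(occupant_image):   # determination unit
        providing_device()                         # provision control unit triggers device

control_step({"mask_visible": False},
             providing_device=lambda: print("providing anti-droplet equipment"))
```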
  • Patent number: 12171528
    Abstract: A blood vessel wall thickness estimation method includes: obtaining behavioral information, which is numerical information about changes over time in positions of a plurality of predetermined points in a blood vessel wall, based on a video including the blood vessel wall obtained using four-dimensional angiography; generating estimation information for estimating a thickness of the blood vessel wall based on the behavioral information obtained in the obtaining; and outputting the estimation information generated in the generating. The estimation information is information in which at least one of the following is visualized: a change in displacement over time; a change in speed over time; a change in acceleration over time; a change in kinetic energy over time; a spring constant obtained from the displacement and the acceleration; and a Fourier coefficient obtained from the change in the displacement over time.
    Type: Grant
    Filed: February 19, 2020
    Date of Patent: December 24, 2024
    Assignee: OSAKA UNIVERSITY
    Inventor: Yoshie Sugiyama
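A worked sketch of the quantities that patent 12171528 above says are visualized, computed from a tracked point's displacement over time. The harmonic-motion assumption behind the spring-constant estimate and the unit mass are simplifications, not the patented estimation.

```python
import numpy as np

dt = 0.02                                            # frame interval in seconds (assumed)
t = np.arange(0, 2, dt)
displacement = 0.5 * np.sin(2 * np.pi * 1.2 * t)     # toy tracked wall motion (mm)

velocity = np.gradient(displacement, dt)             # change in speed over time
acceleration = np.gradient(velocity, dt)             # change in acceleration over time
kinetic_energy = 0.5 * 1.0 * velocity**2             # unit mass assumed
# For simple harmonic motion a = -(k/m) x, so k/m is estimated from the slope of a vs. x.
spring_constant = -np.polyfit(displacement, acceleration, 1)[0]
fourier_coeffs = np.fft.rfft(displacement)           # Fourier coefficients of x(t)

print(round(spring_constant, 2), int(np.abs(fourier_coeffs).argmax()))
```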
  • Patent number: 12175668
    Abstract: Systems and methods for determining a semantic image understanding of medical imaging studies are provided. A plurality of medical imaging studies associated with a plurality of medical imaging modalities is provided. Metadata associated with each of the plurality of medical imaging studies is generated by performing a plurality of semantic image analysis tasks using one or more machine learning based networks. The metadata associated with each of the plurality of medical imaging studies is output.
    Type: Grant
    Filed: April 14, 2022
    Date of Patent: December 24, 2024
    Assignee: Siemens Healthineers AG
    Inventors: Ingo Schmuecking, Puneet Sharma, Desiree Komuves, Tiziano Passerini, Paul Klein
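A generic sketch only, following the high-level description in patent 12175668 above: run a set of semantic image-analysis tasks over a batch of imaging studies and emit the results as per-study metadata. The task names and stub analyzers are placeholders, not Siemens Healthineers' networks.

```python
# Hypothetical semantic image-analysis tasks; real systems would wrap ML networks.
TASKS = {
    "modality": lambda study: study.get("dicom_modality", "unknown"),
    "body_part": lambda study: "chest" if study.get("rows", 0) > 256 else "unknown",
}

def generate_metadata(studies):
    """Return one metadata dict per imaging study."""
    return [{name: task(study) for name, task in TASKS.items()} for study in studies]

print(generate_metadata([{"dicom_modality": "CT", "rows": 512}]))
```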
  • Patent number: 12175723
    Abstract: Methods, systems, and apparatus, including computer programs encoded on computer storage media, for unsupervised learning of object keypoint locations in images. In particular, a keypoint extraction machine learning model having a plurality of keypoint model parameters is trained to receive an input image and to process the input image in accordance with the keypoint model parameters to generate a plurality of keypoint locations in the input image. The machine learning model is trained using either temporal transport or spatio-temporal transport.
    Type: Grant
    Filed: May 5, 2020
    Date of Patent: December 24, 2024
    Assignee: DeepMind Technologies Limited
    Inventors: Ankush Gupta, Tejas Dattatraya Kulkarni
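A hedged sketch of the feature-transport step from the published Transporter approach to unsupervised keypoint learning (Kulkarni et al., 2019), which appears closely related to the temporal transport mentioned in the abstract of patent 12175723 above: features around target-frame keypoints are copied into the source feature map, and the result would be decoded to reconstruct the target frame. The encoder, decoder, and keypoint networks are omitted; inputs are random stand-ins.

```python
import torch

def transport(source_feats, target_feats, source_heatmaps, target_heatmaps):
    """Feature maps are (B, C, H, W); heatmaps are (B, K, H, W), summed over keypoints."""
    hs = source_heatmaps.sum(dim=1, keepdim=True)
    ht = target_heatmaps.sum(dim=1, keepdim=True)
    suppressed = (1 - hs) * (1 - ht) * source_feats   # erase both sets of keypoint regions
    return suppressed + ht * target_feats             # paste in target keypoint features

B, C, K, H, W = 2, 32, 5, 16, 16
out = transport(torch.rand(B, C, H, W), torch.rand(B, C, H, W),
                torch.rand(B, K, H, W), torch.rand(B, K, H, W))
print(out.shape)   # a reconstruction loss against the target frame would follow
```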
  • Patent number: 12175650
    Abstract: A method includes obtaining an image data set that depicts semiconductor components, and applying a hierarchical bricking to the image data set. In this case, the bricking includes a plurality of bricks on a plurality of hierarchical levels. The bricks on different hierarchical levels have different image element sizes of corresponding image elements.
    Type: Grant
    Filed: July 29, 2021
    Date of Patent: December 24, 2024
    Assignee: Carl Zeiss SMT GmbH
    Inventors: Jens Timo Neumann, Abhilash Srikantha, Christian Wojek, Thomas Korb
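A hedged sketch of hierarchical bricking as summarized in patent 12175650 above: tile an image data set into bricks whose image-element (pixel block) size grows with the hierarchy level. The brick shape and the factor-of-two scaling are illustrative assumptions.

```python
import numpy as np

def brick(image, brick_size):
    h, w = image.shape
    return [image[y:y + brick_size, x:x + brick_size]
            for y in range(0, h, brick_size)
            for x in range(0, w, brick_size)]

def hierarchical_bricking(image, levels=3, base_element=1, brick_size=64):
    pyramid = {}
    for level in range(levels):
        element = base_element * 2**level          # coarser image elements per level
        reduced = image[::element, ::element]      # stand-in for proper downsampling
        pyramid[level] = brick(reduced, brick_size)
    return pyramid

img = np.random.rand(256, 256)
print({lvl: len(bricks) for lvl, bricks in hierarchical_bricking(img).items()})
```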
  • Patent number: 12175721
    Abstract: Disclosed is a multi-angle image semantic segmentation method for cadmium zinc telluride chips, belonging to the field of image quality augmentation. Firstly, construction of an n+1 dataset is performed by using acquired CZT images, and then pixel-level and latent-level knowledge representation is performed through a Pixel Aggregation Network PAN and a Latent Aggregation Network LAN in a Progressive Complementary Knowledge Aggregation network PCKA, which ultimately improves the quality and speed of CZT image segmentation. The method is suitable for applications that require multi-angle image acquisition and semantic segmentation, such as semiconductor material segmentation.
    Type: Grant
    Filed: August 27, 2024
    Date of Patent: December 24, 2024
    Assignees: Beijing Jiaotong University, Taiyuan University of Science and Technology, Shanxi Zhishi Haotai Technology Co., LTD
    Inventors: Peihao Li, Huihui Bai, Yunchao Wei, Yao Zhao, Anhong Wang, Jiapeng Jia
  • Patent number: 12175764
    Abstract: Techniques for performing deconvolution operations on data structures representing condensed sensor data are disclosed herein. Autonomous vehicle sensors can capture data in an environment that may include one or more objects. The sensor data may be processed by a convolutional neural network to generate condensed sensor data. The condensed sensor data may be processed by one or more deconvolution layers using a machine-learned upsampling transformation to generate an output data structure for improved object detection, classification, and/or other processing operations.
    Type: Grant
    Filed: November 30, 2021
    Date of Patent: December 24, 2024
    Assignee: Zoox, Inc.
    Inventors: Qian Song, Benjamin Isaac Zwiebel
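A minimal sketch in the spirit of patent 12175764 above: a convolutional backbone condenses sensor data, and learned transposed-convolution ("deconvolution") layers upsample it again before a detection-style head. Channel counts and strides are assumptions, not Zoox's architecture.

```python
import torch
import torch.nn as nn

class CondenseThenDeconv(nn.Module):
    def __init__(self, in_ch=3, num_classes=4):
        super().__init__()
        self.encoder = nn.Sequential(                         # condenses the input 4x
            nn.Conv2d(in_ch, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU())
        self.deconv = nn.Sequential(                          # learned upsampling transform
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(32, 16, 4, stride=2, padding=1), nn.ReLU())
        self.head = nn.Conv2d(16, num_classes, 1)             # per-pixel class logits

    def forward(self, x):
        return self.head(self.deconv(self.encoder(x)))

print(CondenseThenDeconv()(torch.randn(1, 3, 128, 128)).shape)
```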