Patents Examined by Nay Maung
  • Patent number: 11915336
    Abstract: The present disclosure provides a method of embedding a watermark on a Joint Photographic Experts Group (JPEG) image. The method includes: performing an entropy decoding on the JPEG image to generate quantized discrete cosine transform (DCT) coefficients; determining target bits in a bit plane of the quantized DCT coefficients on the basis of a watermark-embedding table (WET); and embedding a watermark based on metadata of the JPEG image in the target bits. Also, the present disclosure provides a method of verifying integrity of the image by using the embedded watermark.
    Type: Grant
    Filed: May 17, 2019
    Date of Patent: February 27, 2024
    Assignee: INDUSTRY-ACADEMIA COOPERATION GROUP OF SEJONG UNIVERSITY
    Inventor: Oh Jin Kwon
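A minimal sketch, in NumPy, of the bit-plane embedding step described above, assuming the quantized DCT coefficients have already been obtained by entropy decoding. The target positions and watermark bits are placeholders; the patent's watermark-embedding table (WET) and metadata handling are not reproduced.

```python
import numpy as np

def embed_bitplane(coeffs, positions, bits, plane=0):
    """Write watermark bits into the chosen bit plane at the target positions."""
    out = coeffs.copy()
    for (r, c), bit in zip(positions, bits):
        mag = int(abs(out[r, c]))
        mag = (mag & ~(1 << plane)) | (bit << plane)   # set or clear the plane bit
        out[r, c] = mag if out[r, c] >= 0 else -mag
    return out

def extract_bitplane(coeffs, positions, plane=0):
    """Recover the embedded bits, e.g. for integrity verification."""
    return [(int(abs(coeffs[r, c])) >> plane) & 1 for (r, c) in positions]

# Toy 8x8 block standing in for entropy-decoded, quantized DCT coefficients.
block = np.random.default_rng(0).integers(-30, 30, size=(8, 8))
positions = [(1, 2), (2, 1), (3, 3)]      # hypothetical target-bit locations
watermark = [1, 0, 1]                     # e.g. bits derived from JPEG metadata
stego = embed_bitplane(block, positions, watermark)
assert extract_bitplane(stego, positions) == watermark
```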
  • Patent number: 11896320
    Abstract: A system is disclosed that includes an optical tracking device and a surgical computing device. The optical tracking device includes a structured light module and an optical module that includes an image sensor and is spaced from the structured light module at a known distance. The surgical computing device includes a display device, a non-transitory computer readable medium including instructions, and processor(s) configured to execute the instructions to generate a depth map from a first image captured by the image sensor during projection of a pattern into a surgical environment by the structured light module. The pattern is projected in a near-infrared (NIR) spectrum. The processor(s) are further configured to execute the stored instructions to reconstruct a 3D surface of anatomical structure(s) based on the generated depth map. Additionally, the processor(s) are configured to execute the stored instructions to output the reconstructed 3D surface to the display device.
    Type: Grant
    Filed: January 20, 2023
    Date of Patent: February 13, 2024
    Assignees: Smith & Nephew, Inc., Smith & Nephew Orthopaedics AG, Smith & Nephew Asia Pacific Pte. Limited
    Inventors: Gaëtan Marti, Maurice Hälg, Ranjith Steve Sivagnanaselvam
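A toy illustration of the depth-map step: once the projected NIR pattern has been matched between the structured light module and the image sensor (correspondence search not shown), depth follows from triangulation over the known separation. The focal length and baseline values below are illustrative assumptions, not values from the patent.

```python
import numpy as np

def depth_from_disparity(disparity_px: np.ndarray,
                         focal_px: float,
                         baseline_m: float) -> np.ndarray:
    """Triangulate depth (metres) from projector-camera disparity (pixels)."""
    depth = np.full_like(disparity_px, np.nan, dtype=float)
    valid = disparity_px > 0
    depth[valid] = focal_px * baseline_m / disparity_px[valid]
    return depth

disparity = np.array([[40.0, 38.5], [0.0, 42.1]])   # 0 = no pattern correspondence
depth_map = depth_from_disparity(disparity, focal_px=900.0, baseline_m=0.08)
print(depth_map)   # metres; NaN where the pattern was not detected
```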
  • Patent number: 11900584
    Abstract: A method for judging freshness of cultured fish product based on eye image recognition is provided, which relates to the field of image processing and includes the following steps: obtaining eye area and eye center point of each cultured fish product; obtaining a first data category and a second data category of each gray scale change sequence according to each gray scale change sequence of each eye area; calculating a first mean value and a second mean value of each gray scale change sequence; obtaining the fish eye turbidity of each eye area according to the first mean value and the second mean value; obtaining the fish eye plumpness of each eye area according to the first data category and the second data category; and obtaining the freshness of each cultured fish product according to the fish eye turbidity and fish eye plumpness in each eye area.
    Type: Grant
    Filed: August 31, 2023
    Date of Patent: February 13, 2024
    Assignee: SHANDONG UNIVERSITY OF TECHNOLOGY
    Inventors: Lanlan Zhu, Xudong Wu, Xiuting Wei, Qingxiang Zhang, Hengjia Ni, Ruining Kang, Lei Liu
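The abstract outlines a pipeline (two data categories per gray scale change sequence, their means, then turbidity and plumpness scores) without giving the grouping rule or formulas. The snippet below only illustrates that pipeline shape with placeholder rules; it is not the patented method.

```python
import numpy as np

def split_two_categories(seq: np.ndarray):
    """Split a 1-D gray-scale change sequence around its median (placeholder rule)."""
    thresh = np.median(seq)
    return seq[seq <= thresh], seq[seq > thresh]

def eye_scores(seq: np.ndarray):
    low, high = split_two_categories(seq)
    first_mean, second_mean = low.mean(), high.mean()
    turbidity = 1.0 - (second_mean - first_mean) / 255.0   # flat profile -> more turbid
    plumpness = len(high) / len(seq)                        # share of bright samples
    return turbidity, plumpness

# Toy gray-scale change sequence sampled across one eye area.
profile = np.array([30, 35, 40, 90, 120, 150, 160, 158, 155], dtype=float)
print(eye_scores(profile))
```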
  • Patent number: 11901064
    Abstract: Systems and methods are disclosed for generating synthetic medical images, including images presenting rare conditions or morphologies for which sufficient data may be unavailable. In one aspect, style transfer methods may be used. For example, a target medical image, a segmentation mask identifying style(s) to be transferred to area(s) of the target, and source medical image(s) including the style(s) may be received. Using the mask, the target may be divided into tile(s) corresponding to the area(s) and input to a trained machine learning system. For each tile, gradients associated with a content and style of the tile may be output by the system. Pixel(s) of at least one tile of the target may be altered based on the gradients to maintain content of the target while transferring the style(s) of the source(s) to the target. The synthetic medical image may be generated from the target based on the altering.
    Type: Grant
    Filed: March 10, 2023
    Date of Patent: February 13, 2024
    Assignee: Paige.AI, Inc.
    Inventors: Rodrigo Ceballos Lentini, Christopher Kanan
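A schematic sketch of the tiling flow only: the segmentation mask selects which tiles of the target image have their pixels nudged by per-tile gradients, while unmasked areas are left untouched. The `style_gradients` function is a stand-in for the trained machine learning system's content/style gradients, not the patented model.

```python
import numpy as np

def style_gradients(tile: np.ndarray, source_tile: np.ndarray) -> np.ndarray:
    """Placeholder: pull the tile's pixels toward the source tile's local statistics."""
    return (source_tile.mean() - tile) * 0.1

def transfer_styles(target, mask, source, tile=32, steps=10):
    out = target.astype(float).copy()
    h, w = out.shape
    for y in range(0, h, tile):
        for x in range(0, w, tile):
            if not mask[y:y + tile, x:x + tile].any():
                continue                        # style not requested for this area
            for _ in range(steps):
                grad = style_gradients(out[y:y + tile, x:x + tile],
                                       source[y:y + tile, x:x + tile])
                out[y:y + tile, x:x + tile] += grad
    return out

rng = np.random.default_rng(1)
target = rng.integers(0, 256, (64, 64)).astype(float)   # target medical image (toy)
source = rng.integers(0, 256, (64, 64)).astype(float)   # source image carrying the style
mask = np.zeros((64, 64), dtype=bool)
mask[:32, :32] = True                                    # transfer style to one area only
synthetic = transfer_styles(target, mask, source)
```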
  • Patent number: 11861847
    Abstract: A background image generator (21) stores, in a storage device (3), a skeleton image obtained by calculating a feature quantity for each of multiple first thermal images (Din1) obtained by imaging by a thermal image sensor (1) in the same field of view or multiple sorted images (Dc) generated from the first thermal images, generating an average image from the first thermal images or sorted images, sharpening the average image, and then extracting a skeleton component. An image corrector (22) corrects, by using the skeleton image stored in the storage device (3), a second thermal image (Din2) obtained by imaging by the thermal image sensor in the same field of view as the first thermal images, thereby generating a corrected thermal image. It is possible to generate a sharp thermal image with a high S/N ratio.
    Type: Grant
    Filed: March 11, 2019
    Date of Patent: January 2, 2024
    Assignee: MITSUBISHI ELECTRIC CORPORATION
    Inventors: Kohei Kurihara, Koichi Yamashita, Daisuke Suzuki
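A minimal sketch of the skeleton-image idea, assuming a stack of co-registered thermal frames with the same field of view: average the frames, sharpen the average, keep the high-frequency "skeleton" component, and add it back to a later noisy frame. Filter sizes and the blending weight are arbitrary choices, not the patent's.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def build_skeleton(frames: np.ndarray, sigma: float = 2.0) -> np.ndarray:
    avg = frames.mean(axis=0)                                # temporal average suppresses noise
    sharpened = avg + (avg - gaussian_filter(avg, sigma))    # unsharp masking
    return sharpened - gaussian_filter(sharpened, sigma)     # keep edge/skeleton detail

def correct_frame(frame: np.ndarray, skeleton: np.ndarray, weight: float = 1.0):
    return frame + weight * skeleton

rng = np.random.default_rng(2)
scene = np.zeros((64, 64)); scene[20:40, 20:40] = 10.0       # warm object in the field of view
first_frames = scene + rng.normal(0, 1.0, (30, 64, 64))      # Din1: noisy stack
skeleton = build_skeleton(first_frames)
second_frame = scene + rng.normal(0, 1.0, (64, 64))          # Din2
corrected = correct_frame(second_frame, skeleton)
```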
  • Patent number: 11861881
    Abstract: Techniques for training a first electronic neural network classifier to identify a presence of a particular property in a novel supra-image while ignoring a spurious correlation of the presence of the particular property with a presence of an extraneous property are presented.
    Type: Grant
    Filed: September 22, 2021
    Date of Patent: January 2, 2024
    Assignee: PROSCIA INC.
    Inventors: Julianna Ianni, Rajath Elias Soans, Kameswari Devi Ayyagari, Saul Kohn
  • Patent number: 11854703
    Abstract: Systems and methods for providing a novel framework to simulate the appearance of pathology on patients who otherwise lack that pathology. The systems and methods include a “simulator” that is a generative adversarial network (GAN). Rather than generating images from scratch, the systems and methods discussed herein simulate the addition of disease-like appearance on existing scans of healthy patients. Focusing on simulating added abnormalities, as opposed to simulating an entire image, significantly reduces the difficulty of training GANs and produces results that more closely resemble actual, unmodified images. In at least some implementations, multiple GANs are used to simulate pathological tissues on scans of healthy patients to artificially increase the amount of available scans with abnormalities to address the issue of data imbalance with rare pathologies.
    Type: Grant
    Filed: June 10, 2019
    Date of Patent: December 26, 2023
    Assignee: ARTERYS INC.
    Inventors: Hok Kan Lau, Jesse Lieman-Sifry, Sean Patrick Sall, Berk Dell Norman, Daniel Irving Golden, John Axerio-Cilies, Matthew Joseph Didonato
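A conceptual sketch only: a generator that outputs an additive abnormality residual composited onto an existing healthy scan, rather than synthesising a whole image from scratch. The architecture, losses, training loop, and multi-GAN arrangement described in the patent are not reproduced.

```python
import torch
import torch.nn as nn

class AbnormalityGenerator(nn.Module):
    """Produce a bounded residual and add it onto a healthy scan."""
    def __init__(self, channels: int = 1):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(channels, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, channels, 3, padding=1), nn.Tanh(),   # bounded residual
        )

    def forward(self, healthy: torch.Tensor) -> torch.Tensor:
        residual = self.net(healthy)
        return torch.clamp(healthy + residual, 0.0, 1.0)        # simulated pathology

healthy_scan = torch.rand(1, 1, 128, 128)     # stand-in for a normalised scan slice
simulated = AbnormalityGenerator()(healthy_scan)
```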
  • Patent number: 11854191
    Abstract: An image processing method is provided for displaying cells from a plurality of pathology images or overall images. A respective overall image represents a respective patient tissue sample or a respective patient cell sample.
    Type: Grant
    Filed: March 2, 2021
    Date of Patent: December 26, 2023
    Assignee: EUROIMMUN Medizinische Labordiagnostika AG
    Inventors: Christian Marzahl, Stefan Gerlach, Joern Voigt, Christine Kroeger
  • Patent number: 11845464
    Abstract: Driver behavior risk assessment and pedestrian awareness may include receiving an input stream of images of an environment including one or more objects within the environment, estimating an intention of an ego vehicle based on the input stream of images and a temporal recurrent network (TRN), generating a scene representation based on the input stream of images and a graph neural network (GNN), generating a prediction of a situation based on the scene representation and the intention of the ego vehicle, and generating an influenced or non-influenced action determination based on the prediction of the situation and the scene representation.
    Type: Grant
    Filed: January 29, 2021
    Date of Patent: December 19, 2023
    Assignee: HONDA MOTOR CO., LTD.
    Inventors: Nakul Agarwal, Yi-Ting Chen
  • Patent number: 11841920
    Abstract: The technology disclosed introduces two types of neural networks: “master” or “generalists” networks and “expert” or “specialists” networks. Both master networks and expert networks are fully connected neural networks that take a feature vector of an input hand image and produce a prediction of the hand pose. Master networks and expert networks differ from each other based on the data on which they are trained. In particular, master networks are trained on the entire data set. In contrast, expert networks are trained only on a subset of the entire dataset. With regard to hand poses, master networks are trained on the input image data representing all available hand poses comprising the training data (including both real and simulated hand images).
    Type: Grant
    Filed: February 14, 2017
    Date of Patent: December 12, 2023
    Assignee: Ultrahaptics IP Two Limited
    Inventors: Jonathan Marsden, Raffi Bedikian, David Samuel Holz
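A hypothetical sketch of the master/expert arrangement: identical fully connected regressors, one notionally trained on the whole dataset and the others on pose subsets. The layer sizes, the subset names, and the routing rule from master output to expert are all assumptions; the abstract does not specify them.

```python
import torch
import torch.nn as nn

class PoseMLP(nn.Module):
    """Fully connected regressor from a hand feature vector to pose parameters."""
    def __init__(self, in_dim=64, out_dim=21 * 3):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(in_dim, 256), nn.ReLU(),
                                 nn.Linear(256, 256), nn.ReLU(),
                                 nn.Linear(256, out_dim))

    def forward(self, x):
        return self.net(x)

master = PoseMLP()                                             # trained on the full dataset
experts = {k: PoseMLP() for k in ("open", "fist", "pinch")}    # trained on pose subsets

def predict(features: torch.Tensor, route) -> torch.Tensor:
    """Coarse estimate from the master, refined by the routed expert."""
    coarse = master(features)
    return experts[route(coarse)](features)

features = torch.rand(1, 64)
pose = predict(features, route=lambda coarse: "open")   # placeholder routing rule
```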
  • Patent number: 11842274
    Abstract: A controlling method of an electronic apparatus may include: obtaining image data and metadata regarding the image data, the image data comprising a first image frame and a second image frame that is subsequent to the first image frame; obtaining information regarding a region of interest of the first image frame by inputting the first image frame to a first neural network model; obtaining a similarity between the first image frame and the second image frame based on motion vector information included in the metadata; and detecting whether there is a manipulated area in the second image frame based on the similarity.
    Type: Grant
    Filed: January 4, 2021
    Date of Patent: December 12, 2023
    Assignee: SAMSUNG ELECTRONICS CO., LTD.
    Inventors: Gilwoo Song, Jaehyun Kwon, Jinwoo Nam, Heeseung Shin, Minseok Choi
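A rough sketch of the gating idea under stated assumptions: a similarity score derived from the metadata's motion vectors decides whether the manipulation check is rerun on the whole second frame or focused on the region of interest found in the first frame. The similarity formula, threshold, and detector below are placeholders, not the patented neural network models.

```python
import numpy as np

def frame_similarity(motion_vectors: np.ndarray) -> float:
    """Map mean motion-vector magnitude to a [0, 1] similarity score (placeholder)."""
    mean_mag = np.linalg.norm(motion_vectors, axis=-1).mean()
    return float(np.exp(-mean_mag / 8.0))

def check_second_frame(first_roi, second_frame, motion_vectors,
                       detector, sim_threshold=0.7):
    if frame_similarity(motion_vectors) < sim_threshold:
        return detector(second_frame)              # scene changed: analyse the whole frame
    y0, y1, x0, x1 = first_roi                     # frames similar: inspect the prior ROI
    return detector(second_frame[y0:y1, x0:x1])

# Placeholder detector standing in for the neural-network manipulation check.
fake_detector = lambda region: {"manipulated": bool(region.std() > 50)}
rng = np.random.default_rng(3)
second = rng.integers(0, 256, (120, 160)).astype(float)
mv = rng.normal(0, 1, (15, 20, 2))                 # per-block motion vectors from metadata
print(check_second_frame((10, 60, 20, 100), second, mv, fake_detector))
```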
  • Patent number: 11832582
    Abstract: A leg (205) detection system comprising: a robotic arm (200) comprising a gripping portion (208) for holding a teat cup (203, 210) for attaching to a teat (1102, 1104, 1106, 1108, 203S, 203) of a dairy livestock (200, 202, 203); an imaging system coupled to the robotic arm (200) and configured to capture a first three-dimensional (3D) image (138, 2400, 2500) of a rearview of the dairy livestock (200, 202, 203) in a stall (402), the imaging system comprising a 3D camera (136, 138) or a laser (132), wherein each pixel of the first 3D image (138, 2400, 2500) is associated with a depth value; one or more memory (104) devices configured to store a reference 3D image (138, 2400, 2500) of the stall (402) without any dairy livestock (200, 202, 203); and a processor (102) communicatively coupled to the imaging system and the one or more memory (104) devices, the processor (102) configured to: access the first 3D image (138, 2400, 2500) and the reference 3D image (138, 2400, 2500); subtract the first 3D image
    Type: Grant
    Filed: August 17, 2017
    Date of Patent: December 5, 2023
    Assignee: Technologies Holdings Corp.
    Inventors: Mark A. Foresman, Bradley J. Prevost, Marijn Van Aart, Peter Willem van der Sluis, Alireza Janani
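A minimal sketch of the subtraction step, assuming both depth images are aligned and share the same resolution: pixels whose depth departs sufficiently from the empty-stall reference are flagged as foreground (e.g. a leg). The threshold and units are illustrative.

```python
import numpy as np

def foreground_mask(current_depth: np.ndarray,
                    reference_depth: np.ndarray,
                    min_diff_mm: float = 50.0) -> np.ndarray:
    """Flag pixels whose depth differs enough from the empty-stall reference."""
    return np.abs(reference_depth - current_depth) > min_diff_mm

# Reference: empty stall; current: same stall with something closer to the camera.
reference = np.full((240, 320), 3000.0)            # depth in millimetres
current = reference.copy()
current[120:220, 140:170] = 2100.0                 # object standing in the stall
mask = foreground_mask(current, reference)
print(mask.sum(), "foreground pixels")
```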
  • Patent number: 11836919
    Abstract: A method for capturing digital data for fabricating a dental splint involves displaying a GUI on a display of a smartphone that provides an alignment feature for a user to align a camera of the smartphone to a first position that captures teeth of a person, receiving digital video of the teeth, overlaying the alignment feature on the digital video of the teeth on the display, moving the alignment feature on the screen in a manner that causes the user to move the smartphone relative to the teeth to maintain alignment with the alignment feature, capturing digital image information of the teeth as the alignment feature is moved, the captured digital image information including depth information, and transmitting the captured digital image information, including the depth information, from the smartphone for use in fabricating a dental splint.
    Type: Grant
    Filed: March 20, 2023
    Date of Patent: December 5, 2023
    Assignee: Asesso Health Inc.
    Inventor: William C. Cliff
  • Patent number: 11825784
    Abstract: A hydroponic system is managed by obtaining hydroponic data using a plurality of sensors in a hydroponic cultivator. Data from the plurality of sensors is communicated to a data aggregator, which renders the sensed data from the plurality of sensors in a predetermined data format, as aggregator output data. The aggregator output data is communicated to a fog computing unit, which, in turn, communicates with outside computing routed over the Internet backbone or “cloud”. The fog computing unit is used to control operation of the hydroponic cultivator based on direct control commands executed through the fog computing unit and data inputs obtained by communications with outside computing routed over the Internet backbone or “cloud”.
    Type: Grant
    Filed: June 16, 2023
    Date of Patent: November 28, 2023
    Assignee: KING FAISAL UNIVERSITY
    Inventor: Suresh Sankaranarayanan
  • Patent number: 11829449
    Abstract: Techniques for determining a classification probability of an object in an environment are discussed herein. Techniques may include analyzing sensor data associated with an environment from a perspective, such as a top-down perspective, using multi-channel data. From this perspective, techniques may determine channels of multi-channel input data and additional feature data. Channels corresponding to spatial features may be included in the multi-channel input data and data corresponding to non-spatial features may be included in the additional feature data. The multi-channel input data may be input to a first portion of a machine-learned (ML) model, and the additional feature data may be concatenated with intermediate output data from the first portion of the ML model, and input into a second portion of the ML model for subsequent processing and to determine the classification probabilities. Additionally, techniques may be performed on a multi-resolution voxel space representing the environment.
    Type: Grant
    Filed: December 30, 2020
    Date of Patent: November 28, 2023
    Assignee: Zoox, Inc.
    Inventor: Samir Parikh
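A small sketch of the two-portion arrangement, assuming hypothetical channel counts: spatial multi-channel input goes through the first portion, the non-spatial feature data is concatenated with its intermediate output, and the second portion produces classification probabilities. This only illustrates the data-flow shape, not the patented model.

```python
import torch
import torch.nn as nn

class TwoStageClassifier(nn.Module):
    """First portion processes multi-channel top-down input; non-spatial features
    are concatenated with its intermediate output before the second portion."""
    def __init__(self, in_channels=8, extra_dim=4, num_classes=3):
        super().__init__()
        self.first = nn.Sequential(
            nn.Conv2d(in_channels, 32, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten())             # -> (N, 32)
        self.second = nn.Sequential(
            nn.Linear(32 + extra_dim, 64), nn.ReLU(),
            nn.Linear(64, num_classes))

    def forward(self, grid, extra):
        intermediate = self.first(grid)
        fused = torch.cat([intermediate, extra], dim=1)
        return self.second(fused).softmax(dim=1)               # classification probabilities

grid = torch.rand(2, 8, 64, 64)      # multi-channel top-down spatial features
extra = torch.rand(2, 4)             # non-spatial additional feature data
probs = TwoStageClassifier()(grid, extra)
```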
  • Patent number: 11830607
    Abstract: A system for facilitating image finding analysis includes one or more processors and one or more hardware storage devices storing instructions that are executable by the one or more processors to configure the system to perform acts such as (i) presenting an image on a user interface, the image being one of a plurality of images provided on the user interface in a navigable format, (ii) obtaining a voice annotation for the image, the voice annotation being based on a voice signal of a user, and (iii) binding the voice annotation to at least one aspect of the image, wherein the binding modifies metadata of the image based on the voice annotation.
    Type: Grant
    Filed: September 7, 2022
    Date of Patent: November 28, 2023
    Assignee: AI METRICS, LLC
    Inventors: Andrew Dennis Smith, Robert B. Jacobus, Paige Elaine Severino
  • Patent number: 11823436
    Abstract: Systems and methods are disclosed for generating a specialized machine learning model by receiving a generalized machine learning model generated by processing a plurality of first training images to predict at least one cancer characteristic, receiving a plurality of second training images, the first training images and the second training images include images of tissue specimens and/or images algorithmically generated to replicate tissue specimens, receiving a plurality of target specialized attributes related to a respective second training image of the plurality of second training images, generating a specialized machine learning model by modifying the generalized machine learning model based on the plurality of second training images and the target specialized attributes, receiving a target image corresponding to a target specimen, applying the specialized machine learning model to the target image to determine at least one characteristic of the target image, and outputting the characteristic of the target image.
    Type: Grant
    Filed: March 31, 2022
    Date of Patent: November 21, 2023
    Assignee: Paige.AI, Inc.
    Inventors: Belma Dogdas, Christopher Kanan, Thomas Fuchs, Leo Grady
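A hedged sketch of one way to "modify" a generalized model into a specialized one: freeze a shared feature extractor and fine-tune a new head on the second training images and their target specialized attributes. The backbone, head sizes, and training data below are stand-ins; the patent's actual modification strategy is not reproduced.

```python
import torch
import torch.nn as nn

# Hypothetical generalized model: a feature extractor plus a cancer-characteristic head.
backbone = nn.Sequential(nn.Flatten(), nn.Linear(64 * 64, 128), nn.ReLU())
general_head = nn.Linear(128, 2)                  # e.g. cancer / no cancer (unused below)

specialized_head = nn.Linear(128, 5)              # e.g. five target specialized attributes
for p in backbone.parameters():
    p.requires_grad = False                       # reuse the generalized features as-is

opt = torch.optim.Adam(specialized_head.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()
second_images = torch.rand(16, 1, 64, 64)          # stand-in second training images
labels = torch.randint(0, 5, (16,))                # stand-in specialized attributes
for _ in range(5):                                 # tiny fine-tuning loop
    opt.zero_grad()
    logits = specialized_head(backbone(second_images))
    loss_fn(logits, labels).backward()
    opt.step()
```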
  • Patent number: 11823404
    Abstract: A method of image processing in a structured light imaging system is provided that includes receiving a captured image of a scene, wherein the captured image is captured by a camera of a projector-camera pair, and wherein the captured image includes a binary pattern projected into the scene by the projector, rectifying the captured image, applying a filter to the rectified captured image to generate a local threshold image, wherein the local threshold image includes a local threshold value for each pixel in the rectified captured image, and extracting a binary image from the rectified captured image wherein a value of each location in the binary image is determined based on a comparison of a value of a pixel in a corresponding location in the rectified captured image to a local threshold value in a corresponding location in the local threshold image.
    Type: Grant
    Filed: May 26, 2021
    Date of Patent: November 21, 2023
    Assignee: Texas Instruments Incorporated
    Inventor: Vikram Vijayanbabu Appia
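A minimal sketch of the local-threshold extraction, assuming the captured image has already been rectified: a box filter provides a per-pixel local threshold, and comparing each pixel against it recovers the projected binary pattern despite uneven scene reflectance. The window size and bias are arbitrary choices.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def extract_binary_pattern(image: np.ndarray, window: int = 15,
                           bias: float = 2.0) -> np.ndarray:
    """Compare each pixel with a local mean threshold to recover the pattern."""
    local_threshold = uniform_filter(image.astype(float), size=window) + bias
    return image > local_threshold

# Synthetic stand-in for a rectified captured image: an 8-pixel stripe pattern
# modulated by a smooth illumination gradient across the scene.
cols = ((np.arange(320) // 8) % 2).astype(float)        # projected binary stripes
illum = np.linspace(0.3, 1.0, 240)[:, None]             # uneven scene brightness
captured = 255.0 * illum * (0.4 + 0.6 * cols)[None, :]
binary = extract_binary_pattern(captured)                # boolean pattern image
```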
  • Patent number: 11823392
    Abstract: A method, system and computer program product for segmenting generic foreground objects in images and videos. For segmenting generic foreground objects in videos, an appearance stream of an image in a video frame is processed using a first deep neural network. Furthermore, a motion stream of an optical flow image in the video frame is processed using a second deep neural network. The appearance and motion streams are then joined to combine complementary appearance and motion information to perform segmentation of generic objects in the video frame. Generic foreground objects are segmented in images by training a convolutional deep neural network to estimate a likelihood that a pixel in an image belongs to a foreground object. After receiving the image, the likelihood that the pixel in the image is part of the foreground object as opposed to background is then determined using the trained convolutional deep neural network.
    Type: Grant
    Filed: August 2, 2022
    Date of Patent: November 21, 2023
    Assignee: Board of Regents, The University of Texas System
    Inventors: Kristen Grauman, Suyog Dutt Jain, Bo Xiong
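A compact sketch of the two-stream fusion: one convolutional stream processes the appearance image, another processes the optical-flow image, and their features are joined to score per-pixel foreground likelihood. The layer sizes and fusion operator are assumptions, not the patented architecture.

```python
import torch
import torch.nn as nn

class TwoStreamSegmenter(nn.Module):
    """Fuse an appearance stream (RGB frame) and a motion stream (optical flow)
    to estimate each pixel's likelihood of belonging to a generic foreground object."""
    def __init__(self):
        super().__init__()
        def stream(in_ch):
            return nn.Sequential(nn.Conv2d(in_ch, 16, 3, padding=1), nn.ReLU(),
                                 nn.Conv2d(16, 16, 3, padding=1), nn.ReLU())
        self.appearance = stream(3)      # RGB frame
        self.motion = stream(2)          # 2-channel optical-flow image
        self.fuse = nn.Conv2d(32, 1, 1)  # joint per-pixel foreground score

    def forward(self, frame, flow):
        joint = torch.cat([self.appearance(frame), self.motion(flow)], dim=1)
        return torch.sigmoid(self.fuse(joint))

frame = torch.rand(1, 3, 128, 128)
flow = torch.rand(1, 2, 128, 128)
foreground_prob = TwoStreamSegmenter()(frame, flow)   # (1, 1, 128, 128) values in [0, 1]
```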
  • Patent number: 11823479
    Abstract: A non-transitory computer readable medium (107, 127) stores instructions executable by at least one electronic processor (101, 113) to perform a component co-replacement recommendation method (200).
    Type: Grant
    Filed: June 2, 2020
    Date of Patent: November 21, 2023
    Assignee: KONINKLIJKE PHILIPS N.V.
    Inventors: Sarif Kumar Naik, Vidya Ravi, Ravindra Balasaheb Patil, Meru Adagouda Patil