Patents Examined by Emmanuel Silva-Avina
  • Patent number: 12272162
    Abstract: Methods, systems, and techniques for utilizing image processing systems to measure damage to vehicles include utilizing an image processing system to generate a heat map of an image of a damaged vehicle, where the heat map is indicative of a damaged area of the vehicle, and determining at least one measurement of the damaged area based on the heat map and a depth of field indicator corresponding to the image. In some embodiments, the image processing system also determines one or more types of damage of the damaged area, and/or also generates a segmentation map of the depicted vehicle and utilizes the segmentation map in conjunction with the heat map to measure damaged areas and locations thereof on the vehicle depicted within the image. In some embodiments, the techniques include determining the depth of field indicator of the image or portions thereof.
    Type: Grant
    Filed: May 10, 2022
    Date of Patent: April 8, 2025
    Assignee: CCC INTELLIGENT SOLUTIONS INC.
    Inventors: Steven Penny, Mohan Liu, Bahar Radfar, Neda Hantehzadeh, Bhadresh Dhanani, Sagar Bachwani, Ranjini Vaidyanathan, Masatoshi Kato, Srinivasan Krishnaswamy, Mina Haratiannezhadi
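    Illustrative sketch (not from the patent record): a minimal Python approximation of the area-measurement idea in the abstract above, assuming the heat map is a 2-D array of per-pixel damage scores and the depth of field indicator reduces to a millimetres-per-pixel scale; the function name, threshold, and toy data are hypothetical.
      import numpy as np

      def damaged_area_mm2(heat_map, mm_per_pixel, threshold=0.5):
          """Estimate the damaged area (in mm^2) from a damage heat map.
          heat_map     : 2-D array of per-pixel damage scores in [0, 1].
          mm_per_pixel : scale implied by the depth of field indicator (assumed given).
          threshold    : score above which a pixel counts as damaged.
          """
          damaged_pixels = np.count_nonzero(heat_map >= threshold)
          return damaged_pixels * mm_per_pixel ** 2

      # Toy usage: a 100x100 heat map with a 20x30 damaged patch, 0.8 mm per pixel.
      hm = np.zeros((100, 100))
      hm[10:30, 40:70] = 0.9
      print(damaged_area_mm2(hm, mm_per_pixel=0.8))   # 600 px * 0.64 mm^2 = 384.0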
  • Patent number: 12260655
    Abstract: A method for detection of three-dimensional (3D) objects on or around a roadway by machine learning, applied in an electronic device, obtains images of a road, inputs the images into a trained object detection model, and determines categories of objects in the images, two-dimensional (2D) bounding boxes of the objects, and parallax (rotation) angles of the objects. The electronic device determines object models and 3D bounding boxes of the object models, and determines the distance from the camera to the object models according to the size of the 2D bounding boxes, image information of the detection images, and the focal length of the camera. The positions of the object models in a 3D space can be determined according to the rotation angles, the distance, and the 3D bounding boxes, and the positions of the object models are taken as the positions of the objects in the 3D space.
    Type: Grant
    Filed: June 30, 2022
    Date of Patent: March 25, 2025
    Assignee: HON HAI PRECISION INDUSTRY CO., LTD.
    Inventors: Chih-Te Lu, Chieh Lee, Chin-Pin Kuo
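    Illustrative sketch (not from the patent record): the pinhole-camera range estimate that the abstract's distance step appears to rely on, assuming a known real-world object height and a focal length expressed in pixels; names and numbers are hypothetical, and the patent additionally uses the detected category and rotation angle.
      def distance_from_bbox(bbox_height_px, real_height_m, focal_length_px):
          # Pinhole model: distance = focal_length * real_height / apparent_height.
          return focal_length_px * real_height_m / bbox_height_px

      # Toy usage: a roughly 1.5 m tall object spanning 150 px with a 1200 px focal length.
      print(distance_from_bbox(150, 1.5, 1200))   # 12.0 metres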
  • Patent number: 12260553
    Abstract: A mobile radiation generation apparatus is used with a first radiography system configured to capture a first radiographic image, has a radiation generation unit including a radiation tube configured to emit radiation, is movable on a carriage having wheels, and is driven by a battery. The apparatus includes at least one processor configured to execute: first reception processing of receiving the first radiographic image; first computer aided diagnosis processing of executing computer aided diagnosis processing on the first radiographic image; second reception processing of receiving a second radiographic image from a second radiography system different from the first radiography system; second computer aided diagnosis processing of executing the computer aided diagnosis processing on the second radiographic image; and return processing of returning a result of the second computer aided diagnosis processing to the second radiography system.
    Type: Grant
    Filed: August 25, 2022
    Date of Patent: March 25, 2025
    Assignee: FUJIFILM CORPORATION
    Inventors: Kazuhiro Makino, Takeyasu Kobayashi
  • Patent number: 12260598
    Abstract: Embodiments described herein relate to methods, devices, and computer-readable media to determine a compression setting. An input image may be obtained where the input image is associated with a user account. One or more features of the input image may be determined using a feature-detection machine-learning model. A compression setting for the input image may be determined using a user-specific machine-learning model personalized to the user account based on the one or more features in the input image. The input image may be compressed based on the compression setting.
    Type: Grant
    Filed: June 13, 2020
    Date of Patent: March 25, 2025
    Assignee: Google LLC
    Inventors: Jonathan D. Hurwitz, Punyabrata Ray
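    Illustrative sketch (not from the patent record): a pipeline stand-in for the abstract's flow, where a generic feature extractor and a user-specific model (both hypothetical callables here) pick a JPEG quality for compression with Pillow.
      from io import BytesIO
      from PIL import Image

      def compress_for_user(img, feature_model, user_model):
          features = feature_model(img)        # e.g. {"faces": 2, "text": False}
          quality = int(user_model(features))  # personalized compression setting
          buf = BytesIO()
          img.convert("RGB").save(buf, format="JPEG", quality=quality)
          return buf.getvalue()

      # Toy usage with trivial stand-ins for the two machine-learning models.
      img = Image.new("RGB", (64, 64), "gray")
      data = compress_for_user(img,
                               feature_model=lambda im: {"faces": 0},
                               user_model=lambda f: 90 if f["faces"] else 60)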
  • Patent number: 12254667
    Abstract: A multiple scenario-oriented item retrieval method and system. The method includes the steps of: extracting, by hashing learning, image features from an image training set to train a pre-built item retrieval model; when an image is in a scenario of hard samples, introducing an adaptive similarity matrix, optimizing the similarity matrix by an image transfer matrix, and constructing an adaptive similarity matrix objective function in combination with an image category label; constructing a loss quantization objective function between the image and a hash code according to the image transfer matrix; when the image is in a scenario of zero samples, introducing an asymmetric similarity matrix and constructing an objective function by taking the image category label as supervisory information in combination with equilibrium and decorrelation constraints of the hash code; and training the item retrieval model based on the above objective functions to obtain a retrieved result of a target item image.
    Type: Grant
    Filed: August 5, 2022
    Date of Patent: March 18, 2025
    Assignee: Shandong Jianzhu University
    Inventors: Xiushan Nie, Yang Shi, Jie Guo, Xingbo Liu, Yilong Yin
  • Patent number: 12254674
    Abstract: A method for recognizing arteries and veins on a fundus image includes: executing a pre-process operation on the fundus image, so as to obtain a pre-processed fundus image; generating a fundus spectral reflection dataset associated with pixels of the pre-processed fundus image, based on the pre-processed fundus image and a spectral transformation matrix; obtaining a plurality of principal component scores associated with the pixels of the pre-processed fundus image, respectively; and determining, for each of the pixels of the pre-processed fundus image that has been determined as a part of a blood vessel, whether the pixel belongs to a part of an artery or a part of a vein.
    Type: Grant
    Filed: July 22, 2022
    Date of Patent: March 18, 2025
    Assignee: National Chung Cheng University
    Inventors: Hsiang-Chen Wang, Yu-Ming Tsao, Yong-Song Chen, Yu-Sin Liu, Shih-Wun Liang
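    Illustrative sketch (not from the patent record): the spectral-transform and principal-component steps of the abstract, with random stand-ins for the device-specific transformation matrix and the vessel pixels, and a made-up decision rule on the first component score; the patent's actual criterion is not given in the abstract.
      import numpy as np

      rng = np.random.default_rng(0)
      rgb_pixels = rng.random((500, 3))     # hypothetical vessel pixels (N, 3)
      spectral_T = rng.random((3, 16))      # hypothetical RGB-to-reflectance matrix

      reflectance = rgb_pixels @ spectral_T            # fundus spectral reflection data
      centered = reflectance - reflectance.mean(axis=0)
      _, _, vt = np.linalg.svd(centered, full_matrices=False)
      pc_scores = centered @ vt.T                      # principal component scores

      # Made-up artery/vein split on the first principal component score.
      is_artery = pc_scores[:, 0] > 0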
  • Patent number: 12243192
    Abstract: An apparatus to facilitate video motion smoothing is disclosed. The apparatus comprises one or more processors including a graphics processor, the one or more processors including circuitry configured to receive a video stream, decode the video stream to generate a motion vector map and a plurality of video image frames, analyze the motion vector map to detect a plurality of candidate frames, wherein the plurality of candidate frames comprise a period of discontinuous motion in the plurality of video image frames and the plurality of candidate frames are determined based on a classification generated via a convolutional neural network (CNN), generate, via a generative adversarial network (GAN), one or more synthetic frames based on the plurality of candidate frames, insert the one or more synthetic frames between the plurality of candidate frames to generate up-sampled video frames and transmit the up-sampled video frames for display.
    Type: Grant
    Filed: June 22, 2021
    Date of Patent: March 4, 2025
    Assignee: Intel Corporation
    Inventors: Satyam Srivastava, Saurabh Tangri, Rajeev Nalawadi, Carl S. Marshall, Selvakumar Panneer
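    Illustrative sketch (not from the patent record): a skeleton of the up-sampling flow, where is_discontinuous stands in for the CNN-based candidate classifier and synthesize stands in for the GAN frame generator; both callables and the toy blending rule are hypothetical.
      import numpy as np

      def smooth_motion(frames, is_discontinuous, synthesize):
          out = []
          for f0, f1 in zip(frames, frames[1:]):
              out.append(f0)
              if is_discontinuous(f0, f1):
                  out.append(synthesize(f0, f1))   # insert a synthetic in-between frame
          out.append(frames[-1])
          return out

      # Toy usage: flag pairs with a large mean jump, "synthesize" by blending.
      frames = [np.full((4, 4), v, dtype=float) for v in (0, 1, 10, 11)]
      up = smooth_motion(frames,
                         is_discontinuous=lambda a, b: abs(float(b.mean() - a.mean())) > 5,
                         synthesize=lambda a, b: (a + b) / 2)
      print(len(up))   # 5: one synthetic frame inserted at the 1 -> 10 jump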
  • Patent number: 12236584
    Abstract: A computer-implemented method for facilitating opportunistic screening for cardiomegaly includes obtaining a set of computed tomography (CT) images. The set of CT images captures at least a portion of a heart of a patient, and the set of CT images is captured for a purpose independent of assessing cardiomegaly. The method further includes using the set of CT images as an input to an artificial intelligence (AI) module configured to determine a heart measurement based on CT image set input. The method also includes obtaining heart measurement output generated by the AI module and, based on the heart measurement output, classifying the patient into one of a plurality of risk levels for cardiomegaly. The classification is operable to trigger additional action based on the corresponding risk level for the patient.
    Type: Grant
    Filed: December 7, 2021
    Date of Patent: February 25, 2025
    Assignees: AI METRICS, LLC, THE UAB RESEARCH FOUNDATION
    Inventors: Andrew Dennis Smith, Robert B. Jacobus, Jr., Paige Elaine Severino
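    Illustrative sketch (not from the patent record): the risk-level classification step, assuming the AI module's heart measurement is a cardiothoracic ratio; the measurement choice, cut-offs, and level names are hypothetical, as the abstract does not specify them.
      def cardiomegaly_risk(cardiothoracic_ratio):
          if cardiothoracic_ratio < 0.50:
              return "low risk"
          if cardiothoracic_ratio < 0.57:
              return "intermediate risk"
          return "high risk"   # classification can trigger follow-up action

      print(cardiomegaly_risk(0.61))   # high risk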
  • Patent number: 12216187
    Abstract: The disclosure relates to a method for correcting a movement of an object occurring during an MR image acquisition. The method includes: determining a motion model describing possible movements of the object based on a defined number of degrees of freedom; detecting a motion of a marker provided on the object with a motion sensor; determining a description of the motion model in a common coordinate system; determining the motion of the marker in the common coordinate system; determining a first motion of the object in the common coordinate system using the description of the motion model, the first motion being the motion that best matches the determined motion of the marker in the common coordinate system using the defined number of degrees of freedom; and correcting the movement of the object based on the determined first motion in order to determine at least one motion corrected MR image.
    Type: Grant
    Filed: April 8, 2021
    Date of Patent: February 4, 2025
    Assignee: Siemens Healthineers AG
    Inventors: Randall Kroeker, Daniel Kraus, Michael Roas-Löffler, Wilfried Schnell, Daniel Nicolas Splitthoff
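    Illustrative sketch (not from the patent record): finding the motion-model parameters (limited degrees of freedom) that best match an observed marker motion, taking "best match" to mean a linear least-squares fit; the matrix J and the measurements are made-up placeholders for the model/marker geometry in a common coordinate system.
      import numpy as np

      # J maps the 3 allowed motion parameters to the marker displacement.
      J = np.array([[1.0, 0.0, 0.2],
                    [0.0, 1.0, 0.1],
                    [0.0, 0.0, 1.0]])
      observed_marker_motion = np.array([0.8, -0.3, 0.5])   # from the motion sensor

      params, *_ = np.linalg.lstsq(J, observed_marker_motion, rcond=None)
      fitted_object_motion = J @ params   # motion used to correct the MR image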
  • Patent number: 12190484
    Abstract: The present disclosure relates to systems, methods, and non-transitory computer readable media for accurately, efficiently, and flexibly generating modified digital images utilizing a guided inpainting approach that implements a patch match model informed by a deep visual guide. In particular, the disclosed systems can utilize a visual guide algorithm to automatically generate guidance maps to help identify replacement pixels for inpainting regions of digital images utilizing a patch match model. For example, the disclosed systems can generate guidance maps in the form of structure maps, depth maps, or segmentation maps that respectively indicate the structure, depth, or segmentation of different portions of digital images. Additionally, the disclosed systems can implement a patch match model to identify replacement pixels for filling regions of digital images according to the structure, depth, and/or segmentation of the digital images.
    Type: Grant
    Filed: March 15, 2021
    Date of Patent: January 7, 2025
    Assignee: Adobe Inc.
    Inventors: Sohrab Amirghodsi, Lingzhi Zhang, Zhe Lin, Connelly Barnes, Elya Shechtman
  • Patent number: 12175639
    Abstract: A video quality improvement method may comprise: inputting a structure feature map, converted from a current target frame by a first convolution layer, to a first multi-task unit and a second multi-task unit connected to an output side of the first multi-task unit, among the plurality of multi-task units; inputting, to the first multi-task unit, a main input obtained by adding the structure feature map to a feature space converted by a second convolution layer from a result of concatenating, in the channel dimension, a previous target frame and a correction frame of the previous frame; and inputting the current target frame to an Nth multi-task unit connected to an end of the output side of the second multi-task unit, wherein the Nth multi-task unit outputs a correction frame of the current target frame, and machine learning of the video quality improvement model is performed using an objective function calculated from the correction frame of the current target frame.
    Type: Grant
    Filed: October 8, 2021
    Date of Patent: December 24, 2024
    Assignee: POSTECH RESEARCH AND BUSINESS DEVELOPMENT FOUNDATION
    Inventors: Seung Yong Lee, Jun Yong Lee, Hyeong Seok Son, Sung Hyun Cho
  • Patent number: 12169620
    Abstract: A method, apparatus and system for video display and a camera are disclosed. The camera includes one wide-field lens assembly and a wide-field sensor corresponding to the wide-field lens assembly; at least one narrow-field lens assembly and a narrow-field sensor corresponding to the narrow-field lens assembly, wherein an angle of view of the wide-field lens assembly is greater than an angle of view of the narrow-field lens assembly, and, for a same target, the definition (resolution) of the wide-field sensor is lower than that of the narrow-field sensor; and a processor configured for performing human body analysis on the wide-field image and performing face analysis, head and shoulder analysis or human body analysis on at least one frame of narrow-field image. The methods, apparatuses and systems can reduce the workload of installing and adjusting the cameras during monitoring, the performance requirements for the server, and monitoring costs.
    Type: Grant
    Filed: January 16, 2020
    Date of Patent: December 17, 2024
    Assignee: HANGZHOU HIKVISION DIGITAL TECHNOLOGY CO., LTD.
    Inventor: Wenwei Li
  • Patent number: 12165294
    Abstract: A computer-implemented method for high resolution image inpainting comprising the following steps: providing a high resolution input image; providing at least one inpainting mask; selecting at least one rectangular sub-region of the input image and at least one aligned rectangular sub-region of the inpainting mask such that the rectangular sub-region of the input image encompasses at least one set of pixels to be removed and synthesized, the at least one sub-region of the input image and its corresponding aligned sub-region of the inpainting mask having the identical minimum possible size and a position for which a calculated information gain does not decrease; processing the sub-region of the input image and its corresponding aligned sub-region of the inpainting mask by a machine learning model; and generating an output high resolution image comprising the inpainted sub-region.
    Type: Grant
    Filed: March 24, 2020
    Date of Patent: December 10, 2024
    Assignee: TCL RESEARCH EUROPE SP. Z O. O.
    Inventors: Michal Kudelski, Tomasz Latkowski, Filip Skurniak, Lukasz Sienkiewicz, Piotr Frankowski, Bartosz Biskupski
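    Illustrative sketch (not from the patent record): selecting a rectangular sub-region that encloses all masked pixels and growing it to a minimum side length, clipped to the image; the minimum size and centring rule are assumptions, and the patent's information-gain criterion is not modelled here.
      import numpy as np

      def select_subregion(mask, min_size=256):
          ys, xs = np.nonzero(mask)               # pixels to be removed and synthesized
          top, bottom = ys.min(), ys.max() + 1
          left, right = xs.min(), xs.max() + 1
          h, w = mask.shape

          def grow(lo, hi, limit):                # pad each side up to min_size, in-bounds
              pad = max(min_size - (hi - lo), 0)
              return max(lo - pad // 2, 0), min(hi + (pad - pad // 2), limit)

          top, bottom = grow(top, bottom, h)
          left, right = grow(left, right, w)
          return top, bottom, left, right         # crop image and mask with this box

      mask = np.zeros((1080, 1920), dtype=np.uint8)
      mask[500:540, 900:980] = 1
      print(select_subregion(mask))               # a ~256x256 window around the patch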
  • Patent number: 12165298
    Abstract: A method may include determining, based at least on an image of a document, a plurality of text bounding boxes enclosing lines of text present in the document. A machine learning model may be trained to determine, based at least on the coordinates defining the text bounding boxes, the coordinates of a document bounding box enclosing the text bounding boxes. The document bounding box may encapsulate the visual aberrations that are present in the image of the document. As such, one or more transformations may be determined based on the coordinates of the document bounding box. The image of the document may be deskewed by applying the transformations. One or more downstream tasks may be performed based on the deskewed image of the document. Related methods and articles of manufacture are also disclosed.
    Type: Grant
    Filed: January 7, 2022
    Date of Patent: December 10, 2024
    Assignee: SAP SE
    Inventors: Marek Polewczyk, Marco Spinaci
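    Illustrative sketch (not from the patent record): the deskewing step expressed as a perspective warp with OpenCV, assuming the four corners of the model-predicted document bounding box are already available; the corner ordering and helper name are hypothetical.
      import cv2
      import numpy as np

      def deskew(image, doc_box):
          # doc_box: (4, 2) corners ordered top-left, top-right, bottom-right, bottom-left.
          tl, tr, br, bl = doc_box.astype(np.float32)
          w = int(max(np.linalg.norm(tr - tl), np.linalg.norm(br - bl)))
          h = int(max(np.linalg.norm(bl - tl), np.linalg.norm(br - tr)))
          dst = np.array([[0, 0], [w - 1, 0], [w - 1, h - 1], [0, h - 1]], np.float32)
          M = cv2.getPerspectiveTransform(doc_box.astype(np.float32), dst)
          return cv2.warpPerspective(image, M, (w, h))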
  • Patent number: 12154382
    Abstract: An eye state detecting method, applied to an electronic apparatus with an image sensor, comprises: (a) acquiring a detecting image via the image sensor; (b) defining a face range on the detecting image; (c) defining a determining range on the face range; and (d) determining if the determining range comprises an open-eye image or a closed-eye image.
    Type: Grant
    Filed: October 29, 2020
    Date of Patent: November 26, 2024
    Assignee: PixArt Imaging Inc.
    Inventor: Guo-Zhen Wang
  • Patent number: 12148223
    Abstract: A method for generating a dense light detection and ranging (LiDAR) representation by a vision system includes receiving, at a sparse depth network, one or more sparse representations of an environment. The method also includes generating a depth estimate of the environment depicted in an image captured by an image capturing sensor. The method further includes generating, via the sparse depth network, one or more sparse depth estimates based on receiving the one or more sparse representations. The method also includes fusing the depth estimate and the one or more sparse depth estimates to generate a dense depth estimate. The method further includes generating the dense LiDAR representation based on the dense depth estimate and controlling an action of a vehicle based on identifying a three-dimensional object in the dense LiDAR representation.
    Type: Grant
    Filed: April 28, 2022
    Date of Patent: November 19, 2024
    Assignees: TOYOTA RESEARCH INSTITUTE, INC., TOYOTA JIDOSHA KABUSHIKI KAISHA
    Inventors: Arjun Bhargava, Chao Fang, Charles Christopher Ochoa, Kun-Hsin Chen, Kuan-Hui Lee, Vitor Guizilini
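    Illustrative sketch (not from the patent record): a crude stand-in for the learned fusion in the abstract, scaling a dense monocular depth estimate to agree with sparse depth points and then trusting the sparse values where they exist; the median-scaling rule is an assumption, not the patented method.
      import numpy as np

      def fuse_depth(mono_depth, sparse_depth):
          valid = sparse_depth > 0
          scale = np.median(sparse_depth[valid] / mono_depth[valid])
          dense = mono_depth * scale
          dense[valid] = sparse_depth[valid]
          return dense   # dense depth from which a dense LiDAR-like representation follows

      mono = np.full((4, 4), 2.0)                    # toy monocular estimate
      sparse = np.zeros((4, 4))
      sparse[1, 1], sparse[2, 3] = 5.0, 4.0          # toy sparse (e.g. LiDAR) returns
      print(fuse_depth(mono, sparse))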
  • Patent number: 12125294
    Abstract: A vehicle position determination device mountable to a vehicle, the vehicle position determination device including an acquisition unit that acquires a surrounding image that is road information for identifying a position of the vehicle and is represented by a small displacement with respect to the vehicle, and a control unit that compares the road information with road characteristic information indicating an absolute position of a predetermined point and determines a vehicle position according to a result of the comparison; the road information includes at least one of road shape information indicating a shape of a road surface in a direction of travel of the vehicle and road pattern information indicating a pattern on a road surface.
    Type: Grant
    Filed: September 30, 2021
    Date of Patent: October 22, 2024
    Assignee: DENSO CORPORATION
    Inventors: Takahisa Yokoyama, Noriyuki Ido
  • Patent number: 12118741
    Abstract: The present invention provides a processing apparatus (20) including a first generation unit (22) that generates, from a plurality of time-series images, three-dimensional feature information indicating a time change of a feature at each position in each of the plurality of images, a second generation unit (23) that generates person position information indicating a position in which a person is present in each of the plurality of images, and an estimation unit (24) that estimates person behavior indicated by the plurality of images, based on the time change of the feature indicated by the three-dimensional feature information at the position, indicated by the person position information, in which the person is present.
    Type: Grant
    Filed: June 13, 2019
    Date of Patent: October 15, 2024
    Assignee: NEC CORPORATION
    Inventors: Jianquan Liu, Junnan Li
  • Patent number: 12094084
    Abstract: Image processing methods are provided. One of the methods includes: obtaining to-be-processed multi-channel feature maps; obtaining multi-channel first output feature maps and multi-channel second output feature maps by processing the multi-channel feature maps through a pointwise convolution and a non-pointwise operation in parallel, where the non-pointwise operation describes a spatial feature of each channel and an information exchange between the feature maps; and fusing the multi-channel first output feature maps and the multi-channel second output feature maps to obtain a multi-channel third output feature map.
    Type: Grant
    Filed: July 29, 2021
    Date of Patent: September 17, 2024
    Assignee: UBTECH ROBOTICS CORP LTD
    Inventors: Bin Sun, Mingguo Zhao, Youjun Xiong
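    Illustrative sketch (not from the patent record): one reading of the parallel pointwise/non-pointwise structure, using a 3x3 depthwise convolution as the non-pointwise branch and elementwise addition as the fusion; both choices are assumptions, since the abstract does not fix them.
      import torch
      import torch.nn as nn

      class ParallelPointwiseBlock(nn.Module):
          def __init__(self, channels):
              super().__init__()
              self.pointwise = nn.Conv2d(channels, channels, kernel_size=1)
              self.depthwise = nn.Conv2d(channels, channels, kernel_size=3,
                                         padding=1, groups=channels)

          def forward(self, x):
              # Fuse the two parallel branches' output feature maps.
              return self.pointwise(x) + self.depthwise(x)

      # Toy usage on a batch of 8-channel feature maps.
      y = ParallelPointwiseBlock(8)(torch.randn(1, 8, 32, 32))
      print(y.shape)   # torch.Size([1, 8, 32, 32])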
  • Patent number: 12076869
    Abstract: A method and system for calculating a minimum distance from a robot to dynamic objects in a robot workspace. The method uses images from one or more three-dimensional cameras, where edges of objects are detected in each image, and the robot and the background are subtracted from the resultant image, leaving only object edge pixels. Depth values are then overlaid on the object edge pixels, and distance calculations are performed only between the edge pixels and control points on the robot arms. Two or more cameras may be used to resolve object occlusion, where each camera's minimum distance is computed independently and the maximum of the cameras' minimum distances is used as the actual result. The use of multiple cameras does not significantly increase computational load, and does not require calibration of the cameras with respect to each other.
    Type: Grant
    Filed: November 29, 2021
    Date of Patent: September 3, 2024
    Assignee: FANUC CORPORATION
    Inventors: Chiara Landi, Hsien-Chung Lin, Tetsuaki Kato
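    Illustrative sketch (not from the patent record): the distance logic described in the abstract, taking the minimum edge-pixel-to-control-point distance per camera and then the maximum of those per-camera minima; point formats and the toy data are assumptions.
      import numpy as np

      def robot_clearance(per_camera_edge_points, robot_control_points):
          per_camera_min = []
          for edge_points in per_camera_edge_points:   # (N, 3) points per camera
              d = np.linalg.norm(edge_points[:, None, :] - robot_control_points[None, :, :],
                                 axis=-1)
              per_camera_min.append(d.min())           # this camera's minimum distance
          return max(per_camera_min)                   # max over cameras resolves occlusion

      # Toy usage: two cameras, one of which sees the object partly occluded.
      cam1 = np.array([[0.5, 0.0, 1.0], [0.6, 0.1, 1.0]])
      cam2 = np.array([[0.9, 0.0, 1.0]])
      controls = np.array([[0.0, 0.0, 1.0], [0.0, 0.2, 1.0]])
      print(robot_clearance([cam1, cam2], controls))   # ~0.9, from the clearer view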