Patents Examined by Edward F. Urban
  • Patent number: 11967121
    Abstract: A difference detection device includes a difference detection unit configured to detect a difference between a third image and a fourth image, captured at different times and showing a substantially identical space, based on an association between a first image and a second image, likewise captured at different times and showing a substantially identical space, and on encoding information of each of the first image and the second image. The encoding information is acquired from data including the encoded first image and data including the encoded second image, before inverse transform processing is executed in the decoding processing performed on each of the first image and the second image.
    Type: Grant
    Filed: October 3, 2019
    Date of Patent: April 23, 2024
    Assignee: NIPPON TELEGRAPH AND TELEPHONE CORPORATION
    Inventors: Motohiro Takagi, Kazuya Hayase, Atsushi Shimizu
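The abstract above describes comparing images using encoding information taken before the inverse transform in decoding (i.e., in the transform-coefficient domain). A minimal sketch of that idea, assuming quantized transform coefficients arranged per block and an L1 block distance with an arbitrary threshold (both illustration choices, not the patent's method):

```python
import numpy as np

def encoded_domain_difference(coeffs_a: np.ndarray, coeffs_b: np.ndarray,
                              threshold: float = 10.0) -> np.ndarray:
    """Flag changed blocks directly from transform coefficients obtained
    before the inverse transform step of decoding.

    coeffs_a, coeffs_b: (num_blocks, coeffs_per_block) arrays.
    Returns a boolean array, True where a block is judged to differ."""
    block_dist = np.abs(coeffs_a - coeffs_b).sum(axis=1)  # L1 per block
    return block_dist > threshold
```

Working in the coefficient domain avoids a full decode of both images before comparison.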
  • Patent number: 11961328
    Abstract: An eye detecting device is configured to: acquire a color image including a face of a person taken by an image taking device; generate a grayscale image by multiplying each of a red component value, a green component value, and a blue component value of each pixel of the color image by a predetermined ratio according to characteristics of a lens of glasses that the person wears; detect an eye of the person from the grayscale image; and output eye information on the eye of the person.
    Type: Grant
    Filed: May 31, 2022
    Date of Patent: April 16, 2024
    Assignees: SWALLOW INCUBATE CO., LTD., PANASONIC HOLDINGS CORPORATION
    Inventor: Toshikazu Ohno
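The claimed grayscale conversion weights each color channel by a ratio matched to the lens of the wearer's glasses. A minimal sketch, where the specific weights are a hypothetical example (the patent leaves the ratio to the lens characteristics):

```python
import numpy as np

# Hypothetical per-channel ratios for a tinted lens; the actual values
# would be chosen from the lens characteristics.
LENS_WEIGHTS = (0.2, 0.3, 0.5)  # (R, G, B)

def to_lens_adapted_grayscale(color_image: np.ndarray,
                              weights=LENS_WEIGHTS) -> np.ndarray:
    """Collapse an (H, W, 3) RGB image to grayscale by multiplying each
    channel by a lens-specific ratio, then summing."""
    r, g, b = weights
    gray = (color_image[..., 0] * r +
            color_image[..., 1] * g +
            color_image[..., 2] * b)
    return np.clip(gray, 0, 255).astype(np.uint8)
```

Eye detection would then run on the resulting single-channel image.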
  • Patent number: 11961165
    Abstract: An image acquisition unit acquires a plurality of projection images corresponding to a plurality of radiation source positions in a case of tomosynthesis imaging. A reconstruction unit reconstructs all or a part of the plurality of projection images to generate a tomographic image on each of a plurality of tomographic planes of a subject. A feature point detecting unit detects at least one feature point from a plurality of the tomographic images. A positional shift amount derivation unit derives a positional shift amount between the plurality of projection images with the feature point as a reference on a corresponding tomographic plane corresponding to the tomographic image in which the feature point is detected. The reconstruction unit reconstructs the plurality of projection images by correcting the positional shift amount to generate a corrected tomographic image.
    Type: Grant
    Filed: February 8, 2021
    Date of Patent: April 16, 2024
    Assignee: FUJIFILM CORPORATION
    Inventor: Junya Morita
  • Patent number: 11961224
    Abstract: The system for the qualitative evaluation of human organs includes: a camera (101) for capturing an image of the organ, the organ being in the donor's body, already collected, or placed in a hypothermic, normothermic and/or subnormothermic graft infusion machine at the time of image capture; an image processor (103, 104) configured to extract at least a portion of the organ image from the captured image; and an estimator (103, 104) for estimating, from the extracted image, the health of the organ. In some embodiments, the system also includes means for introducing into the donor's body at least one optical window of the image capture means, as well as a light source to illuminate the donor's organ, while maintaining the sterility of the surgical field. In some embodiments, the image processor applies a clipping mask to the captured image.
    Type: Grant
    Filed: May 22, 2019
    Date of Patent: April 16, 2024
    Inventor: Clément Labiche
  • Patent number: 11961621
    Abstract: A method includes receiving patient health data; determining a score using a trained machine learning model; determining a threshold value using an adaptive threshold tuning learning model; comparing the score to the threshold value; and generating an alarm. A computing system includes a processor; and a memory having stored thereon instructions that, when executed by the processor, cause the computing system to: receive patient health data; determine a score using a trained machine learning model; determine a threshold value using an adaptive threshold tuning learning model; compare the score to the threshold value; and generate an alarm. A non-transitory computer readable medium includes program instructions that when executed, cause a computer to: receive patient health data; determine a score using a trained machine learning model; determine a threshold value using an adaptive threshold tuning learning model; compare the score to the threshold value; and generate an alarm.
    Type: Grant
    Filed: February 10, 2023
    Date of Patent: April 16, 2024
    Assignee: REGENTS OF THE UNIVERSITY OF MICHIGAN
    Inventors: Christopher Elliot Gillies, Daniel Francis Taylor, Kevin R. Ward, Fadi Islim, Richard Medlin
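The claimed flow is: score the patient data with one trained model, derive a threshold with a separate adaptive threshold tuning model, compare, and alarm. A minimal sketch of that pipeline, where both model interfaces are assumptions (here just callables), not the patent's API:

```python
from dataclasses import dataclass
from typing import Callable, Sequence

@dataclass
class AlarmPipeline:
    """Risk scoring with an adaptively tuned alarm threshold."""
    score_model: Callable[[Sequence[float]], float]      # data -> risk score
    threshold_model: Callable[[Sequence[float]], float]  # data -> threshold

    def evaluate(self, patient_data: Sequence[float]) -> bool:
        """Return True when an alarm should be generated."""
        score = self.score_model(patient_data)
        threshold = self.threshold_model(patient_data)
        return score > threshold
```

Tuning the threshold with its own model (rather than fixing it) lets the alarm rate adapt to the patient population.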
  • Patent number: 11960787
    Abstract: A vehicle and control method of the vehicle are provided. The vehicle includes a camera provided on the vehicle and configured to capture an image of an object outside the vehicle, a controller configured to determine a photographing position required for facial recognition from the captured image, a guide configured to guide the photographing position, and a display configured to display a result of the facial recognition.
    Type: Grant
    Filed: June 22, 2021
    Date of Patent: April 16, 2024
    Assignees: HYUNDAI MOTOR COMPANY, KIA CORPORATION
    Inventors: Yun Sup Ann, Hyunsang Kim
  • Patent number: 11954824
    Abstract: An image pre-processing method and an image processing apparatus for a fundoscopic image are provided. A region of interest (ROI) is obtained from a fundoscopic image to generate a first image. The ROI is focused on an eyeball in the fundoscopic image. A smoothing process is performed on the first image to generate a second image. A value difference between neighboring pixels in the second image is increased to generate a third image.
    Type: Grant
    Filed: April 21, 2021
    Date of Patent: April 9, 2024
    Assignee: Acer Medical Inc.
    Inventors: Yi-Jin Huang, Chin-Han Tsai, Ming-Ke Chen
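The pre-processing above is a three-step pipeline: crop the eyeball ROI (first image), smooth it (second image), then increase the value difference between neighboring pixels (third image). A sketch under assumed choices — a 3x3 mean filter for smoothing and an unsharp-mask-style enhancement, neither of which the abstract specifies:

```python
import numpy as np

def preprocess_fundoscopic(image: np.ndarray, roi: tuple) -> np.ndarray:
    """roi = (top, bottom, left, right) bounding the eyeball."""
    top, bottom, left, right = roi
    first = image[top:bottom, left:right].astype(np.float64)  # ROI crop

    # Second image: 3x3 mean filter (border pixels kept as-is for brevity).
    second = first.copy()
    second[1:-1, 1:-1] = sum(
        first[1 + dy:first.shape[0] - 1 + dy,
              1 + dx:first.shape[1] - 1 + dx]
        for dy in (-1, 0, 1) for dx in (-1, 0, 1)) / 9.0

    # Third image: amplify local detail relative to the smoothed image,
    # which increases the value difference between neighboring pixels.
    third = np.clip(second + 1.5 * (first - second), 0, 255)
    return third.astype(np.uint8)
```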
  • Patent number: 11947668
    Abstract: In some embodiments, an apparatus includes a memory and a processor. The processor is configured to extract a set of features from a potentially malicious file and provide the set of features as an input to a normalization layer of a neural network. The processor is configured to implement the normalization layer by calculating a set of parameters associated with the set of features and normalizing the set of features based on the set of parameters to define a set of normalized features. The processor is further configured to provide the set of normalized features and the set of parameters as inputs to an activation layer of the neural network such that the activation layer produces an output based on the set of normalized features and the set of parameters. The output can be used to produce a maliciousness classification of the potentially malicious file.
    Type: Grant
    Filed: October 12, 2018
    Date of Patent: April 2, 2024
    Assignee: Sophos Limited
    Inventor: Richard Harang
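The distinctive step above is that the activation layer receives not only the normalized features but also the normalization parameters themselves. A sketch assuming the parameters are a per-feature mean and standard deviation, and a hypothetical activation that consumes both (the patent does not specify either form):

```python
import numpy as np

def normalization_layer(features: np.ndarray):
    """Return normalized features plus the parameters used to normalize."""
    mean = features.mean(axis=0)
    std = features.std(axis=0) + 1e-8  # avoid division by zero
    normalized = (features - mean) / std
    return normalized, (mean, std)

def activation_layer(normalized: np.ndarray, params) -> np.ndarray:
    """Hypothetical activation that uses the normalization parameters as
    an additional input alongside the normalized features."""
    mean, std = params
    return np.maximum(0.0, normalized) * std + mean
```

Passing the parameters forward lets later layers see the scale information that normalization would otherwise discard.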
  • Patent number: 11948356
    Abstract: A context-based object classifying model is applied to a set of object location representations (12, 14), derived from an object detection applied to a frame (10) of a video stream, to obtain a context-adapted classification probability for each object location representation (12, 14). Each object location representation (12, 14) defines a region of the frame (10) and each context-adapted classification probability represents a likelihood that the region comprises an object (11, 13). The model is generated based on object location representations from previous frames of the video stream. It is determined whether the region defined by the object location representation (12, 14) comprises an object (11, 13) based on the context-adapted classification probability and a detection probability. The detection probability is derived from the object detection and represents a likelihood that the region defined by the object location representation (12, 14) comprises an object (11, 13).
    Type: Grant
    Filed: November 21, 2018
    Date of Patent: April 2, 2024
    Assignee: TELEFONAKTIEBOLAGET LM ERICSSON (PUBL)
    Inventors: Volodya Grancharov, Arvind Thimmakkondu Hariraman
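The final decision above fuses two probabilities per region: the context-adapted classification probability (from the model built over previous frames) and the detector's own probability. A minimal sketch; the product fusion and the 0.5 threshold are assumptions — the abstract says only that both probabilities are used:

```python
def is_object(context_prob: float, detection_prob: float,
              threshold: float = 0.5) -> bool:
    """Decide whether a region contains an object by combining the
    context-adapted probability with the detection probability."""
    return context_prob * detection_prob >= threshold
```

With product fusion, a high-confidence detection in a region that historically contains objects passes, while the same detection in an implausible context is suppressed.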
  • Patent number: 11941911
    Abstract: The present teaching relates to method, system, medium, and implementations for detecting liveness. When an image is received with visual information claimed to represent a palm of a person, a region of interests (ROI) in the image that corresponds to the palm is identified. Each of a plurality of fake palm detectors individually generates an individual decision on whether the ROI corresponds to a specific type of fake palm that the fake palm detector is to detect. Such individual decisions from the plurality of fake palm detectors are combined to derive a liveness detection decision with respect to the ROI.
    Type: Grant
    Filed: January 28, 2022
    Date of Patent: March 26, 2024
    Assignee: ARMATURA LLC
    Inventors: Zhinong Li, Xiaowu Zhang
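The structure above is an ensemble: each fake palm detector specializes in one spoof type and votes independently, and the votes are combined into a liveness decision. A sketch assuming an "all clear" combination rule — the abstract says only that the individual decisions are combined:

```python
from typing import Callable, Sequence

def detect_liveness(roi, detectors: Sequence[Callable]) -> bool:
    """Each detector returns True if it judges the ROI to be its specific
    type of fake palm. The ROI is judged live only if no detector fires."""
    individual_decisions = [detector(roi) for detector in detectors]
    return not any(individual_decisions)
```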
  • Patent number: 11941898
    Abstract: A three-dimensional position and posture recognition device speeds up estimation of the position and posture, and of the gripping-coordinate posture, of a gripping target product. The device includes: a sensor unit configured to capture an image of an object and measure the distance to the object; and a processing unit configured to identify the object type included in the image, read model data for each object from an external memory, create structured model data with a resolution set for each object from the model data, generate measurement point cloud data at a plurality of resolutions from the distance information, perform a K-neighborhood point search using the structured model data and the measurement point cloud data, and perform three-dimensional position recognition of the object by rotation and translation estimation on points obtained from the K-neighborhood point search.
    Type: Grant
    Filed: December 19, 2019
    Date of Patent: March 26, 2024
    Assignee: HITACHI, LTD.
    Inventors: Atsutake Kosuge, Takashi Oshima, Yukinori Akamine, Keisuke Yamamoto
  • Patent number: 11941080
    Abstract: A system and method for learning human activities from video demonstrations using video augmentation is disclosed. The method includes receiving original videos from one or more data sources and processing the received original videos using one or more video augmentation techniques to generate a set of augmented videos. Further, the method includes generating a set of training videos by combining the received original videos with the generated set of augmented videos, and generating a deep learning model based on the generated set of training videos. The method then learns one or more human activities performed in the received original videos by deploying the generated deep learning model, and outputs the learned human activities.
    Type: Grant
    Filed: May 20, 2021
    Date of Patent: March 26, 2024
    Assignee: Retrocausal, Inc.
    Inventors: Quoc-Huy Tran, Muhammad Zeeshan Zia, Andrey Konin, Sanjay Haresh, Sateesh Kumar
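The training-set construction above is simple to state concretely: the training videos are the originals plus every augmented copy. A minimal sketch where videos are represented as plain frame sequences and augmentations as callables (both simplifications for illustration):

```python
from typing import Callable, List, Sequence

def augment_videos(original_videos: List[Sequence],
                   augmentations: List[Callable]) -> List[Sequence]:
    """Build the training set by combining the original videos with one
    augmented copy per (video, augmentation) pair."""
    augmented = [aug(video)
                 for video in original_videos
                 for aug in augmentations]
    return list(original_videos) + augmented
```

The deep learning model would then be trained on the combined set rather than on the (typically few) original demonstrations.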
  • Patent number: 11941805
    Abstract: The present disclosure relates to systems and methods for image processing. The methods may include obtaining imaging data of a subject, generating a first image based on the imaging data, and generating at least two intermediate images based on the first image. At least one of the at least two intermediate images may be generated based on a machine learning model. And the at least two intermediate images may include a first intermediate image and a second intermediate image. The first intermediate image may include feature information of the first image, and the second intermediate image may have lower noise than the first image. The methods may further include generating, based on the first intermediate image and at least one of the first image or the second intermediate image, a target image of the subject.
    Type: Grant
    Filed: July 17, 2021
    Date of Patent: March 26, 2024
    Assignee: SHANGHAI UNITED IMAGING HEALTHCARE CO., LTD.
    Inventor: Yang Lyu
  • Patent number: 11941783
    Abstract: Methods and systems include receiving, at de-ringing circuitry of a display pipeline of an electronic device, scaled image content based on image data. The de-ringing circuitry also receives a fallback scaler output and determines whether the image data has a change frequency greater than a threshold. In response to the change frequency being above the threshold, the de-ringing circuitry determines a weight and blends the scaled image content with the fallback scaler output based at least in part on the weight.
    Type: Grant
    Filed: January 14, 2021
    Date of Patent: March 26, 2024
    Assignee: Apple Inc.
    Inventors: Mahesh B. Chappalli, Vincent Z. Young
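The de-ringing step above can be sketched as a conditional linear blend: high-frequency content (where ringing appears) is pulled toward the fallback scaler output, otherwise the scaled content passes through. The linear blend is a natural reading of the abstract; the weight policy itself is device-specific and not specified:

```python
import numpy as np

def dering_blend(scaled: np.ndarray, fallback: np.ndarray,
                 change_frequency: float, threshold: float,
                 weight: float) -> np.ndarray:
    """Blend ring-prone scaled content with a fallback scaler output
    only when the content's change frequency exceeds the threshold."""
    if change_frequency <= threshold:
        return scaled  # low-frequency content: no ringing risk
    return weight * scaled + (1.0 - weight) * fallback
```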
  • Patent number: 11935254
    Abstract: System, methods, and other embodiments described herein relate to improving depth prediction for objects within a low-light image using a style model. In one embodiment, a method includes encoding, by a style model, an input image to identify content information. The method also includes decoding, by the style model, the content information into an albedo component and a shading component. The method also includes generating, by the style model, a synthetic image using the albedo component and the shading component. The method also includes providing the synthetic image to a depth model.
    Type: Grant
    Filed: June 9, 2021
    Date of Patent: March 19, 2024
    Assignee: Toyota Motor Engineering & Manufacturing North America, Inc.
    Inventors: Rui Guo, Xuewei Qi, Kentaro Oguchi, Kareem Metwaly
  • Patent number: 11935242
    Abstract: The present disclosure provides for crop yield estimation by identifying, via image processing, a field in which a crop is grown; identifying a plurality of regions within the field; identifying, by processing growth metrics via a model, a plurality of data collection points in the plurality of regions, wherein a given data collection point of the plurality of data collection points within a given region of the plurality of regions is identified by multivariate analysis as representative of growing conditions in the given region; receiving in-field data linked to the plurality of data collection points; and predicting a yield for the crop in the field based on the in-field data.
    Type: Grant
    Filed: March 9, 2021
    Date of Patent: March 19, 2024
    Assignee: International Business Machines Corporation
    Inventors: Bruno Silva, Renato Luiz De Freitas Cunha, Ana Paula Appel, Eduardo Rocha Rodrigues
  • Patent number: 11936843
    Abstract: Techniques are described for converting a 2D map into a 3D mesh. The 2D map of the environment is generated using data captured by a 2D scanner. Further, a set of features is identified from a subset of panoramic images of the environment that are captured by a camera. Further, the panoramic images from the subset are aligned with the 2D map using the features that are extracted. Further, 3D coordinates of the features are determined using 2D coordinates from the 2D map and a third coordinate based on a pose of the camera. The 3D mesh is generated using the 3D coordinates of the features.
    Type: Grant
    Filed: May 20, 2021
    Date of Patent: March 19, 2024
    Assignee: FARO Technologies, Inc.
    Inventors: Mark Brenner, Aleksej Frank, Ahmad Ramadneh, Mufassar Waheed, Oliver Zweigle
  • Patent number: 11928799
    Abstract: An electronic device includes a plurality of cameras and at least one processor connected to the plurality of cameras. The at least one processor is configured to: based on a first user command to obtain a live view image, segment an image frame obtained via a camera among the plurality of cameras into a plurality of regions based on a brightness of pixels and an object included in the image frame; obtain a plurality of camera parameter setting value sets, each including a plurality of parameter values with respect to the plurality of regions; based on a second user command to capture the live view image, obtain a plurality of image frames using the plurality of camera parameter setting value sets and at least one camera among the plurality of cameras; and obtain an image frame by merging the plurality of obtained image frames.
    Type: Grant
    Filed: June 4, 2021
    Date of Patent: March 12, 2024
    Assignee: SAMSUNG ELECTRONICS CO., LTD.
    Inventors: Ashish Chopra, Bapi Reddy Karri
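The final merging step above combines frames that were each captured with the parameter set tuned for one region. A sketch assuming hard per-region masks, which is a simplification for illustration (a real pipeline would feather the seams between regions):

```python
import numpy as np
from typing import List

def merge_region_exposures(frames: List[np.ndarray],
                           region_masks: List[np.ndarray]) -> np.ndarray:
    """Compose the output frame by taking each region's pixels from the
    frame captured with that region's camera parameter setting values."""
    merged = np.zeros_like(frames[0])
    for frame, mask in zip(frames, region_masks):
        merged[mask] = frame[mask]  # boolean mask selects the region
    return merged
```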
  • Patent number: 11921494
    Abstract: An automated line clearance inspection system enables fast and accurate inspection of packaging equipment lines to reduce or prevent product intermixing. The system includes a set of image capturing devices controlled via a central processing unit, whereby end-run images are compared with control images to determine whether a line is clear.
    Type: Grant
    Filed: October 8, 2020
    Date of Patent: March 5, 2024
    Inventor: Raymond H Scherzer
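The core comparison above — end-run image against control image — can be sketched as a per-pixel difference check. The absolute-difference metric and the tolerance value are assumptions; the patent does not specify the comparison method:

```python
import numpy as np

def line_cleared(end_run_image: np.ndarray, control_image: np.ndarray,
                 tolerance: float = 5.0) -> bool:
    """Return True when the end-of-run image matches the control image of
    the empty line to within a per-pixel tolerance (i.e., nothing left)."""
    diff = np.abs(end_run_image.astype(np.float64) -
                  control_image.astype(np.float64))
    return bool(diff.max() <= tolerance)
```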
  • Patent number: 11922320
    Abstract: A dual variational autoencoder-generative adversarial network (VAE-GAN) is trained to transform a real video sequence and a simulated video sequence by inputting the real video data into a real video encoder and a real video decoder and inputting the simulated video data into a synthetic video encoder and a synthetic video decoder. Real loss functions and synthetic loss functions are determined based on output from a real video discriminator and a synthetic video discriminator, respectively. The real loss functions are backpropagated through the real video encoder and the real video decoder to train them, and the synthetic loss functions are backpropagated through the synthetic video encoder and the synthetic video decoder to train them.
    Type: Grant
    Filed: June 9, 2021
    Date of Patent: March 5, 2024
    Assignee: Ford Global Technologies, LLC
    Inventors: Nikita Jaipuria, Eric Frankel