Patents Examined by Amara Abdi
  • Patent number: 11113566
    Abstract: An image processing system includes an image acquisition unit that acquires a captured image obtained by imaging the outside of a vehicle, a dictionary storage unit that stores dictionary data to be referred to in specifying an object included in the captured image, a specification unit that specifies the object based on the dictionary data, a behavior information acquisition unit that acquires behavior information indicating a behavior state of the vehicle, and a classification unit that classifies, based on the behavior information of the vehicle with respect to an unspecifiable object (an object left unspecified by the specification unit), whether or not the vehicle needs to avoid the unspecifiable object. Image data of the unspecifiable object is used for creating the dictionary data along with the classification result of the classification unit.
    Type: Grant
    Filed: July 9, 2019
    Date of Patent: September 7, 2021
    Assignee: TOYOTA JIDOSHA KABUSHIKI KAISHA
    Inventors: Kazuya Nishimura, Yoshihiro Oe, Hirofumi Kamimaru
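    The behavior-based labeling step described in the abstract can be sketched as follows. This is a minimal illustration, not the patent's actual implementation; the behavior signals (`braked`, `steered_around`, `speed_drop`) and the 10 km/h threshold are hypothetical.

    ```python
    def label_unspecified_object(behavior):
        """Label an unrecognized object as avoid/no-avoid from vehicle behavior.

        behavior: dict with hypothetical keys 'braked', 'steered_around',
        and 'speed_drop' (km/h), recorded while the vehicle passed the object.
        """
        # If the vehicle braked, steered around the object, or slowed sharply,
        # assume the object needed to be avoided; the labeled image can then
        # feed the creation of new dictionary data.
        if behavior["braked"] or behavior["steered_around"] or behavior["speed_drop"] > 10:
            return "avoid"
        return "no_avoid"
    ```

    The labeled image data would then be paired with this classification result when building updated dictionary data.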
  • Patent number: 11106904
    Abstract: A method for modeling crowd movement includes obtaining a temporal sequence of images of a physical venue and, for each of the images, subdividing the respective image into a respective set of logical pixels according to a predetermined mapping. For each logical pixel of each image, the method computes a respective crowd density representing a respective number of mobile objects per unit of area in the physical venue at the logical pixel, thereby forming a temporal sequence of crowd density maps that corresponds to the temporal sequence of images. The method then uses successive pairs of crowd density maps to train a model on spatiotemporal changes in crowd density at the physical venue. A method of predicting future crowd density maps at physical venues using a current image of the physical venue and the trained model is also disclosed.
    Type: Grant
    Filed: November 20, 2019
    Date of Patent: August 31, 2021
    Assignee: Omron Corporation
    Inventors: Ryo Yonetani, Mai Kurose
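    The density-map construction and the pairing of successive maps for training can be sketched as below; the grid size and the use of detected object positions as input are illustrative assumptions, not the patent's specified method.

    ```python
    def crowd_density_map(points, image_shape, grid=(4, 4)):
        """Bin object positions (pixel coords) into a coarse grid of
        'logical pixels'; each cell holds the object count per cell."""
        h, w = image_shape
        gh, gw = grid
        density = [[0.0] * gw for _ in range(gh)]
        for y, x in points:
            gy = min(int(y * gh / h), gh - 1)
            gx = min(int(x * gw / w), gw - 1)
            density[gy][gx] += 1
        return density

    def training_pairs(density_maps):
        """Successive (input, target) map pairs used to fit a model of
        spatiotemporal change in crowd density."""
        return list(zip(density_maps[:-1], density_maps[1:]))
    ```

    A trained model would then map a current density map to a predicted future one.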
  • Patent number: 11107349
    Abstract: Systems, devices, and methods for detection of a scooter riding environment are described. A scooter alarm and pedestrian walkway detection system includes an alarm, a movement sensor configured to detect motion of a scooter and transmit motion data indicative of the motion to a processor, and an image capture device configured to detect features of a riding environment of the scooter and transmit riding environment data indicative of the features to the processor. The processor is configured to determine whether the riding environment of the scooter is a pedestrian walkway based on the motion data and the riding environment data. The processor is configured to activate the alarm in response to determining that the riding environment of the scooter is a pedestrian walkway.
    Type: Grant
    Filed: October 16, 2019
    Date of Patent: August 31, 2021
    Assignee: AitronX Inc.
    Inventors: Xinlu Tang, Alexandra Li, Chen Zhong, Fang Jiang, Quan Zhang, Yong-Gang Sun
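    The fusion of motion data and riding-environment features into an alarm decision can be sketched as follows; the specific feature names (`pavement_tiles`, `lane_markings`) are hypothetical placeholders for whatever features the image capture device detects.

    ```python
    def is_pedestrian_walkway(environment):
        # environment: hypothetical dict of detected visual features,
        # e.g. {'pavement_tiles': True, 'lane_markings': False}
        return environment.get("pavement_tiles", False) and not environment.get("lane_markings", False)

    def update_alarm(motion, environment):
        """Activate the alarm only when the scooter is moving AND the
        riding environment is classified as a pedestrian walkway."""
        moving = motion.get("speed_kmh", 0.0) > 0.0
        return moving and is_pedestrian_walkway(environment)
    ```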
  • Patent number: 11100314
    Abstract: A device, system and method is provided for estimating movement based on a human movement model in a virtual, augmented or mixed reality environment. In an offline phase, a human movement model may be stored that assigns a non-uniform probability of spatiotemporal representations of movements that occur in a human body. In an online phase, a user's movements may be recorded. The user's movements may be estimated by spatiotemporal representations of a plurality of (N) degrees of freedom (DOF) that maximize a joint probability comprising a first probability that the measured movements match the estimated movements and a second probability that the human movement model assigns to the spatiotemporal representations of the estimated movements. A virtual, augmented or mixed reality image may be displayed that is rendered based on the NDOF spatiotemporal representations of the estimated movements.
    Type: Grant
    Filed: November 6, 2018
    Date of Patent: August 24, 2021
    Assignee: Alibaba Technologies (Israel) LTD.
    Inventors: Matan Protter, Efrat Rotem
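    The joint-probability maximization in the abstract can be illustrated with a discrete toy version: pick the candidate pose that maximizes the product of a measurement likelihood and the human-movement-model prior. The Gaussian likelihood and discrete candidate set are simplifying assumptions for illustration only.

    ```python
    import math

    def estimate_pose(measured, candidates, prior, noise_sigma=1.0):
        """Return the candidate N-DOF pose maximizing
        P(measured | candidate) * P_prior(candidate), in log space."""
        def log_likelihood(cand):
            # Isotropic Gaussian measurement noise (assumed).
            return -sum((m - c) ** 2 for m, c in zip(measured, cand)) / (2 * noise_sigma ** 2)
        return max(candidates, key=lambda c: log_likelihood(c) + math.log(prior[c]))
    ```

    With a strongly non-uniform prior, the estimate can differ from the raw measurement, which is exactly the regularizing effect the human movement model provides.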
  • Patent number: 11096674
    Abstract: A method and means to utilize machine learning to train a device to generate a confidence level indicator (CLI). The device is a CAD system that has been initially trained using initial machine learning to recommend classifications for image features presented to the device. Probabilistic classification is utilized to incorporate intermediate values given by a human operator to better indicate a level of confidence of the CAD system's recommendations as to what classes should be associated with certain image features.
    Type: Grant
    Filed: August 11, 2017
    Date of Patent: August 24, 2021
    Assignee: Koios Medical, Inc.
    Inventors: Christine I. Podilchuk, Ajit Jairaj, Lev Barinov, William Hulbert, Richard Mammone
  • Patent number: 11094059
    Abstract: The present disclosure discloses a vulnerable plaque identification method. The method includes: receiving a cardiovascular image sent by a terminal device; transforming the cardiovascular image from its original Cartesian coordinate system into a polar coordinate system to form a polarization image; constructing a faster RCNN architecture and completing its training; inputting the polarization image into the trained faster RCNN architecture to identify the polarization image, and outputting an image with the vulnerable plaques marked; and feeding the image with the marked vulnerable plaques back to the terminal device. The present disclosure also discloses an application server and a computer-readable medium. The vulnerable plaque identification method, the application server, and the computer-readable medium can quickly and correctly confirm the position of a vulnerable plaque.
    Type: Grant
    Filed: June 12, 2018
    Date of Patent: August 17, 2021
    Assignee: PING AN TECHNOLOGY (SHENZHEN) CO., LTD.
    Inventors: Jianzong Wang, Tianbo Wu, Lihong Liu, Xinhui Liu, Jing Xiao
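    The Cartesian-to-polar resampling step can be sketched as below. This is a nearest-neighbor toy version operating on a 2D list; the patent does not specify the interpolation scheme or sampling resolution, so those are assumptions here.

    ```python
    import math

    def to_polar(image, center, n_r, n_theta):
        """Resample a square image (2D list) from Cartesian to polar
        coordinates: output[ri][ti] samples the input at radius index ri
        and angle index ti (nearest-neighbor, no interpolation)."""
        h, w = len(image), len(image[0])
        cy, cx = center
        # Largest radius that stays inside the image bounds.
        r_max = min(cy, cx, h - 1 - cy, w - 1 - cx)
        out = [[0] * n_theta for _ in range(n_r)]
        for ri in range(n_r):
            r = r_max * ri / max(n_r - 1, 1)
            for ti in range(n_theta):
                theta = 2 * math.pi * ti / n_theta
                y = int(round(cy + r * math.sin(theta)))
                x = int(round(cx + r * math.cos(theta)))
                out[ri][ti] = image[y][x]
        return out
    ```

    The resulting polarization image would then be fed to the trained detector.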
  • Patent number: 11094070
    Abstract: The disclosure discloses a visual multi-object tracking method based on a multi-Bernoulli filter with YOLOv3 detection, belonging to the fields of machine vision and intelligent information processing. The disclosure introduces a YOLOv3 detection technology under a multi-Bernoulli filtering framework. Objects are described by using anti-interference convolution features, and detection results and tracking results are interactively fused to realize accurate estimation of video multi-object states with an unknown and time-varying object number. In the tracking process, matched detection boxes are combined with object tracks and object templates to determine new objects and re-recognize occluded objects in real time. Meanwhile, by considering the identity information of detected objects and estimated objects, identity recognition and track tracking of the objects are realized, so that the tracking accuracy of occluded objects can be effectively improved and track fragmentation is reduced.
    Type: Grant
    Filed: May 8, 2020
    Date of Patent: August 17, 2021
    Assignee: JIANGNAN UNIVERSITY
    Inventors: Jinlong Yang, Xiaoxue Cheng, Guangnan Zhang, Jianjun Liu, Yuan Zhang, Hongwei Ge
  • Patent number: 11079334
    Abstract: A food inspection apparatus includes: a conveyance unit; a light irradiation unit; an imaging unit capturing an image of an inspection object A; a wavelength emphasis unit emphasizing a wavelength characteristic specific to a foreign substance F from light having a wavelength of 300 nm to 1100 nm by using a first and/or second optical filter or a wavelength-specific light source; and an identification processing device identifying the foreign substance and including: a lightening unit normalizing the captured image to 256 or fewer gradations as lightened data; and an identification unit, trained by deep learning to identify the foreign-substance-specific wavelength from the lightened data, that identifies the foreign substance or a good item S in line from the lightened data obtained by capturing the image during conveyance of the inspection object.
    Type: Grant
    Filed: August 22, 2017
    Date of Patent: August 3, 2021
    Assignees: KEWPIE CORPORATION, BRAINPAD INC.
    Inventors: Kotaro Furihata, Takeshi Ogino, Kenji Suzuki, Hiromu Suzuki, Taketoshi Yamamoto, Mitsuhisa Ota, Yoshimitsu Imazu, Alejandro Javier Gonzalez Tineo, Yuta Yoshida, Yohei Sugawara
  • Patent number: 11080839
    Abstract: A system is provided for identifying damages of a vehicle. During operation, the system can obtain a set of digital images associated with a set of tagged digital images as training data. Each tagged digital image in the set of tagged digital images may include at least one damage object. The system can train a damage identification model based on the training data. When training the damage identification model, the system may identify at least a damage object in the training data based on a target detection technique. The system may also generate a set of feature vectors for the training data. The system can use the set of feature vectors to optimize a set of parameters associated with the damage identification model to obtain a trained damage identification model. The system can then apply the trained damage identification model to obtain a damage category prediction result.
    Type: Grant
    Filed: August 28, 2019
    Date of Patent: August 3, 2021
    Assignee: Advanced New Technologies Co., Ltd.
    Inventor: Juan Xu
  • Patent number: 11073619
    Abstract: Efficient and scalable three-dimensional point cloud segmentation. In an embodiment, a three-dimensional point cloud is segmented by adding points to a spatial hash. For each unseen point, a cluster is generated, the unseen point is added to the cluster and marked as seen, and, for each point that is added to the cluster, the point is set as a reference, a reference threshold metric is computed, all unseen neighbors are identified based on the reference threshold metric, and, for each identified unseen neighbor, the unseen neighbor is marked as seen, a neighbor threshold metric is computed, and the neighbor is added or not added to the cluster based on the neighbor threshold metric. When the cluster reaches a threshold size, it may be added to a cluster list. Objects may be identified based on the cluster list and used to control autonomous system(s).
    Type: Grant
    Filed: February 2, 2021
    Date of Patent: July 27, 2021
    Assignee: APEX.AI, INC.
    Inventor: Christopher Ho
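    The spatial-hash clustering loop described in the abstract can be sketched as a Euclidean clustering pass in which each point probes only its own grid cell and the neighboring cells, rather than scanning all points. Cell size, distance metric, and the stack-based expansion are illustrative choices; the patent's threshold metrics are more general.

    ```python
    from collections import defaultdict

    def dist2(a, b):
        return sum((p - q) ** 2 for p, q in zip(a, b))

    def cluster_points(points, cell=1.0, min_size=1):
        """Cluster 3D points: hash points into grid cells, then grow each
        cluster by probing the 27 cells around every reference point."""
        grid = defaultdict(list)
        for i, (x, y, z) in enumerate(points):
            grid[(int(x // cell), int(y // cell), int(z // cell))].append(i)

        seen, clusters = set(), []
        for start in range(len(points)):
            if start in seen:
                continue
            cluster, stack = [], [start]
            seen.add(start)
            while stack:
                ref = stack.pop()
                cluster.append(ref)
                x, y, z = points[ref]
                key = (int(x // cell), int(y // cell), int(z // cell))
                # Probe neighboring cells for unseen points within `cell`.
                for dx in (-1, 0, 1):
                    for dy in (-1, 0, 1):
                        for dz in (-1, 0, 1):
                            for j in grid[(key[0] + dx, key[1] + dy, key[2] + dz)]:
                                if j not in seen and dist2(points[j], points[ref]) <= cell * cell:
                                    seen.add(j)
                                    stack.append(j)
            if len(cluster) >= min_size:
                clusters.append(cluster)
        return clusters
    ```

    The cluster list could then drive downstream object identification for autonomous systems, as the abstract describes.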
  • Patent number: 11062178
    Abstract: An image processing system includes an image acquisition unit that acquires a captured image obtained by imaging the outside of a vehicle, a dictionary storage unit that stores dictionary data to be referred to in specifying an object included in the captured image, a specification unit that specifies the object based on the dictionary data, a behavior information acquisition unit that acquires behavior information indicating a behavior state of the vehicle, and a classification unit that classifies, based on the behavior information of the vehicle with respect to an unspecifiable object (an object left unspecified by the specification unit), whether or not the vehicle needs to avoid the unspecifiable object. Image data of the unspecifiable object is used for creating the dictionary data along with the classification result of the classification unit.
    Type: Grant
    Filed: July 9, 2019
    Date of Patent: July 13, 2021
    Assignee: TOYOTA JIDOSHA KABUSHIKI KAISHA
    Inventors: Kazuya Nishimura, Yoshihiro Oe, Hirofumi Kamimaru
  • Patent number: 11062147
    Abstract: A system includes a sensor, a weight sensor, and a tracking subsystem. The tracking subsystem receives an image feed of top-view images generated by the sensor and weight measurements from the weight sensor. The tracking subsystem detects an event associated with an item being removed from a rack in which the weight sensor is installed. The tracking subsystem determines that a first and second person may be associated with the event. After the item exits the rack, the subsystem tracks the item and calculates a velocity of the item as it is moved through the space. The subsystem identifies, based on the calculated velocity, a frame in which the velocity of the item is less than a threshold velocity. The subsystem determines whether the first or second person is nearer the item in the identified frame. If the first person is nearer, the item is assigned to the first person.
    Type: Grant
    Filed: October 25, 2019
    Date of Patent: July 13, 2021
    Assignee: 7-Eleven, Inc.
    Inventors: Shahmeer Ali Mirza, Sailesh Bharathwaaj Krishnamurthy, Sarath Vakacharla, Deepanjan Paul
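    The velocity-thresholding and nearest-person assignment can be sketched as follows; frame rate, positions, and threshold values are illustrative, and the real system works from top-view image feeds rather than pre-extracted coordinates.

    ```python
    def assign_item(item_positions, fps, person_a, person_b, v_threshold):
        """Track item positions (one (x, y) per frame after the item leaves
        the rack), find the first frame where its speed drops below
        v_threshold, and assign the item to the nearer of two candidates."""
        for i in range(1, len(item_positions)):
            (x0, y0), (x1, y1) = item_positions[i - 1], item_positions[i]
            speed = ((x1 - x0) ** 2 + (y1 - y0) ** 2) ** 0.5 * fps
            if speed < v_threshold:
                da = (x1 - person_a[0]) ** 2 + (y1 - person_a[1]) ** 2
                db = (x1 - person_b[0]) ** 2 + (y1 - person_b[1]) ** 2
                return "a" if da <= db else "b"
        return None  # item never settled below the threshold velocity
    ```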
  • Patent number: 11062444
    Abstract: The invention relates to an artificial intelligence cataract analysis system, including a pattern recognition module for recognizing the photo mode of an input eye image, wherein the photo mode is determined by the slit width of the illuminating slit during photographing of the eye image and/or whether a mydriatic treatment was carried out; and a preliminary analysis module for selecting a corresponding deep learning model for each photo mode, analyzing the characteristics of the lens in the eye image by using the deep learning model, and further performing classification in combination with the cause and severity of the disease. The invention can perform intelligent cataract analysis on eye images with different photo modes by using deep learning models, improving analysis accuracy.
    Type: Grant
    Filed: August 8, 2019
    Date of Patent: July 13, 2021
    Assignee: ZHONGSHAN OPHTHALMIC CENTER OF SUN YAT-SEN UNIVERSITY
    Inventors: Haotian Lin, Xiaohang Wu, Weiyi Lai
  • Patent number: 11055557
    Abstract: The system and method described herein provide for a machine-learning model to automate determination of product attributes for a product based on images associated with the product. The product attributes can be used in online commerce to facilitate product selection by a customer. In accordance with this disclosure, the product attributes may be determined using machine-learning technology by processing images associated with the product (including product packaging). The machine-learning technology is trained using product-related vocabulary and potential attributes that can be discovered by analyzing the images associated with the product.
    Type: Grant
    Filed: May 17, 2018
    Date of Patent: July 6, 2021
    Assignee: Walmart Apollo, LLC
    Inventors: Anirban Chatterjee, Bodhisattwa Prasad Majumder, Gayatri Pal, Rajesh Shreedhar Bhat, Sumanth S. Prabhu, Vignesh Selvaraj
  • Patent number: 11055878
    Abstract: A person counting method and a person counting system are provided. The method includes extracting a group of person images to obtain a first image set; dividing the first image set into first and second subsets based on whether a related image exists in a second image set, and reusing a person ID of the related image; estimating posture patterns of images in the first subset, and storing the images in the first subset into an image library based on person IDs and the posture patterns; and selecting a target image whose similarity to an image in the second subset is highest from the image library, reusing a person ID of the target image when the similarity is greater than a threshold, and assigning a new person ID and incrementing a person counter by 1 when the similarity is not greater than the threshold.
    Type: Grant
    Filed: October 17, 2019
    Date of Patent: July 6, 2021
    Assignee: RICOH COMPANY, LTD.
    Inventors: Hong Yi, Haijing Jia, Weitao Gong, Wei Wang
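    The ID-reuse-or-increment logic at the end of the abstract can be sketched as below; cosine similarity over feature vectors and the 0.8 threshold are assumptions, and the posture-pattern indexing of the image library is omitted for brevity.

    ```python
    def cosine(a, b):
        num = sum(x * y for x, y in zip(a, b))
        den = (sum(x * x for x in a) * sum(y * y for y in b)) ** 0.5
        return num / den if den else 0.0

    def count_person(query_feature, library, counter, threshold=0.8):
        """Return (person_id, counter): reuse the ID of the most similar
        library image when similarity exceeds `threshold`, otherwise
        assign a new ID and increment the person counter by 1."""
        best_id, best_sim = None, -1.0
        for person_id, feature in library.items():
            sim = cosine(query_feature, feature)
            if sim > best_sim:
                best_id, best_sim = person_id, sim
        if best_id is not None and best_sim > threshold:
            return best_id, counter
        new_id = counter + 1
        library[new_id] = query_feature
        return new_id, counter + 1
    ```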
  • Patent number: 11042758
    Abstract: A computer, including a processor and a memory, the memory including instructions to be executed by the processor to generate a synthetic image and corresponding ground truth and generate a plurality of domain adapted synthetic images by processing the synthetic image with a variational auto encoder-generative adversarial network (VAE-GAN), wherein the VAE-GAN is trained to adapt the synthetic image from a first domain to a second domain. The instructions can include further instructions to train a deep neural network (DNN) based on the domain adapted synthetic images and the corresponding ground truth and process images with the trained deep neural network to determine objects.
    Type: Grant
    Filed: July 2, 2019
    Date of Patent: June 22, 2021
    Assignee: Ford Global Technologies, LLC
    Inventors: Nikita Jaipuria, Gautham Sholingar, Vidya Nariyambut Murali, Rohan Bhasin, Akhil Perincherry
  • Patent number: 11037316
    Abstract: An apparatus comprises: an acquisition unit that captures an image of a measurement target and acquires an original image pair having a parallax; a reduction unit that reduces a size of the original image pair; and a calculator that calculates, from an image pair obtained by the reduction unit, a parallax map using predetermined search ranges and search windows in the respective regions. In order to obtain a parallax map that has a predetermined number of pieces of data, the calculator calculates parallax maps in a plurality of hierarchies in which a hierarchy where a parallax map is calculated using an image pair with a lowest magnification is set as a lowest hierarchy, and the predetermined search ranges for calculating the parallax maps in the respective hierarchies are determined based on a parallax map in an immediately lower hierarchy.
    Type: Grant
    Filed: March 26, 2019
    Date of Patent: June 15, 2021
    Assignee: CANON KABUSHIKI KAISHA
    Inventor: Makoto Oigawa
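    The coarse-to-fine idea, in which each hierarchy's search range is determined from the parallax map of the hierarchy immediately below, can be illustrated with a 1D block-matching toy: match at half resolution over the full range, then refine at full resolution in a narrow range around the upsampled coarse result. Window sizes, SAD cost, and decimation are illustrative choices only.

    ```python
    def sad(a, b):
        return sum(abs(x - y) for x, y in zip(a, b))

    def disparity_1d(left, right, window, search):
        """Per-pixel disparity for 1D signals by SAD block matching over
        per-pixel candidate offsets given in `search`."""
        half = window // 2
        disp = []
        for i in range(len(left)):
            lo, hi = max(i - half, 0), min(i + half + 1, len(left))
            patch = left[lo:hi]
            best_d, best_cost = 0, float("inf")
            for d in search[i]:
                if lo - d < 0 or hi - d > len(right):
                    continue
                cost = sad(patch, right[lo - d:hi - d])
                if cost < best_cost:
                    best_d, best_cost = d, cost
            disp.append(best_d)
        return disp

    def hierarchical_disparity(left, right, max_d=8):
        """Coarse level: full search at half resolution. Fine level: search
        only a narrow band around the upsampled coarse disparity."""
        left2, right2 = left[::2], right[::2]
        coarse = disparity_1d(left2, right2, 3, [range(0, max_d // 2 + 1)] * len(left2))
        search = [range(max(2 * coarse[i // 2] - 1, 0), 2 * coarse[i // 2] + 2)
                  for i in range(len(left))]
        return disparity_1d(left, right, 3, search)
    ```

    The narrowed fine-level search is what makes the multi-hierarchy scheme cheaper than a full-resolution full-range search.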
  • Patent number: 11034357
    Abstract: Systems and techniques for scene classification and prediction is provided herein. A first series of image frames of an environment from a moving vehicle may be captured. Traffic participants within the environment may be identified and masked based on a first convolutional neural network (CNN). Temporal classification may be performed to generate a series of image frames associated with temporal predictions based on a scene classification model based on CNNs and a long short-term memory (LSTM) network. Additionally, scene classification may occur based on global average pooling. Feature vectors may be generated based on different series of image frames and a fusion feature vector may be obtained by performing data fusion based on a first feature vector, a second feature vector, a third feature vector, etc. In this way, a behavior predictor may generate a predicted driver behavior based on the fusion feature.
    Type: Grant
    Filed: June 11, 2019
    Date of Patent: June 15, 2021
    Assignee: Honda Motor Co., Ltd.
    Inventors: Athmanarayanan Lakshmi Narayanan, Isht Dwivedi, Behzad Dariush
  • Patent number: 11030455
    Abstract: The present disclosure belongs to the fields of 3D gaze point recognition and computer vision, and more particularly discloses a pose recognition method, device and system for an object of interest to human eyes. The method identifies the centers of the left and right pupils of a user by using the left-eye and right-eye cameras on an eye tracker to extract information about the user's eyes; maps the obtained pupil centers to a left scene camera to obtain a 2D gaze point; extracts bounding boxes of objects in the left scene camera by using target recognition and tracking algorithms, and then determines the object of interest to the user according to the positional relationship between the 2D gaze point and the bounding boxes of the objects; performs 3D reconstruction and pose estimation of the object of interest to obtain its pose in the left scene camera; and converts that pose to a pose in the world coordinate system.
    Type: Grant
    Filed: May 14, 2019
    Date of Patent: June 8, 2021
    Assignee: HUAZHONG UNIVERSITY OF SCIENCE AND TECHNOLOGY
    Inventors: Caihua Xiong, Shikai Qiu, Quanlin Li
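    The step that selects the object of interest from the positional relationship between the 2D gaze point and the bounding boxes can be sketched as a point-in-box test; the tie-breaking by box-center distance is an assumption, since the patent does not specify how overlapping boxes are resolved.

    ```python
    def object_of_interest(gaze_point, boxes):
        """Return the label of the bounding box containing the 2D gaze
        point; among overlapping boxes, prefer the one whose center is
        nearest the gaze point. Boxes are (x0, y0, x1, y1)."""
        gx, gy = gaze_point
        hits = []
        for label, (x0, y0, x1, y1) in boxes.items():
            if x0 <= gx <= x1 and y0 <= gy <= y1:
                cx, cy = (x0 + x1) / 2, (y0 + y1) / 2
                hits.append(((gx - cx) ** 2 + (gy - cy) ** 2, label))
        return min(hits)[1] if hits else None
    ```

    The selected object would then go on to 3D reconstruction and pose estimation.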
  • Patent number: 11024046
    Abstract: Systems and methods for depth estimation in accordance with embodiments of the invention are illustrated. One embodiment includes a method for estimating depth from images. The method includes steps for receiving a plurality of source images captured from a plurality of different viewpoints using a processing system configured by an image processing application, generating a target image from a target viewpoint that is different to the viewpoints of the plurality of source images based upon a set of generative model parameters using the processing system configured by the image processing application, and identifying depth information of at least one output image based on the predicted target image using the processing system configured by the image processing application.
    Type: Grant
    Filed: February 6, 2019
    Date of Patent: June 1, 2021
    Assignee: FotoNation Limited
    Inventor: Kartik Venkataraman