Patents Examined by Amara Abdi
  • Patent number: 11094059
    Abstract: The present disclosure provides a vulnerable plaque identification method. The method includes: receiving a cardiovascular image sent by a terminal device; transforming the image from its original Cartesian coordinate system into a polar coordinate system to form a polar-coordinate image; constructing and training a Faster R-CNN architecture; inputting the polar-coordinate image into the trained Faster R-CNN to identify vulnerable plaques and outputting an image with the vulnerable plaques marked; and feeding the marked image back to the terminal device. The present disclosure also provides an application server and a computer-readable medium. The vulnerable plaque identification method, the application server, and the computer-readable medium can quickly and accurately confirm the position of a vulnerable plaque.
    Type: Grant
    Filed: June 12, 2018
    Date of Patent: August 17, 2021
    Assignee: PING AN TECHNOLOGY (SHENZHEN) CO., LTD.
    Inventors: Jianzong Wang, Tianbo Wu, Lihong Liu, Xinhui Liu, Jing Xiao
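    Illustrative sketch: the abstract above hinges on remapping the vessel image from Cartesian to polar coordinates before Faster R-CNN detection. A minimal NumPy version of that resampling step might look as follows (function name, bin counts, and nearest-neighbour sampling are assumptions for illustration, not details from the patent).
      import numpy as np

      def to_polar(image, radial_bins=256, angular_bins=360):
          """Resample a square grayscale image from Cartesian to polar coordinates.

          Output rows correspond to angles, columns to radii measured from the
          image centre (nearest-neighbour sampling).
          """
          h, w = image.shape
          cy, cx = (h - 1) / 2.0, (w - 1) / 2.0
          max_radius = min(cx, cy)
          radii = np.linspace(0, max_radius, radial_bins)
          angles = np.linspace(0, 2 * np.pi, angular_bins, endpoint=False)
          rr, aa = np.meshgrid(radii, angles)      # shape (angular_bins, radial_bins)
          ys = np.clip(np.round(cy + rr * np.sin(aa)).astype(int), 0, h - 1)
          xs = np.clip(np.round(cx + rr * np.cos(aa)).astype(int), 0, w - 1)
          return image[ys, xs]

      if __name__ == "__main__":
          demo = np.random.rand(512, 512)          # stand-in for a vessel image
          print(to_polar(demo).shape)              # (360, 256)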
  • Patent number: 11094070
    Abstract: The disclosure provides a visual multi-object tracking method based on a multi-Bernoulli filter with YOLOv3 detection, belonging to the fields of machine vision and intelligent information processing. The disclosure introduces YOLOv3 detection under a multi-Bernoulli filtering framework. Objects are described using anti-interference convolution features, and detection results and tracking results are interactively fused to achieve accurate estimation of video multi-object states whose number is unknown and time-varying. In the tracking process, matched detection boxes are combined with object tracks and object templates to determine new objects and re-recognize occluded objects in real time. Meanwhile, by considering the identity information of detected objects and estimated objects, identity recognition and track tracking of the objects are realized, so that the tracking accuracy of occluded objects is effectively improved and track fragments are reduced.
    Type: Grant
    Filed: May 8, 2020
    Date of Patent: August 17, 2021
    Assignee: JIANGNAN UNIVERSITY
    Inventors: Jinlong Yang, Xiaoxue Cheng, Guangnan Zhang, Jianjun Liu, Yuan Zhang, Hongwei Ge
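    Illustrative sketch: the abstract above fuses YOLOv3 detection boxes with tracked object states to confirm existing objects and flag new ones. A simplified greedy IoU association, standing in for that interactive fusion, is sketched below (the threshold and helper names are assumptions, not taken from the patent).
      import numpy as np

      def iou(a, b):
          """Intersection-over-union of two [x1, y1, x2, y2] boxes."""
          x1, y1 = max(a[0], b[0]), max(a[1], b[1])
          x2, y2 = min(a[2], b[2]), min(a[3], b[3])
          inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
          area_a = (a[2] - a[0]) * (a[3] - a[1])
          area_b = (b[2] - b[0]) * (b[3] - b[1])
          return inter / (area_a + area_b - inter + 1e-9)

      def fuse(tracks, detections, thresh=0.5):
          """Greedily match detections to tracks; unmatched detections become candidate new objects."""
          matches, new_objects, used = [], [], set()
          for d_idx, det in enumerate(detections):
              scores = [iou(det, trk) if t not in used else 0.0
                        for t, trk in enumerate(tracks)]
              best = int(np.argmax(scores)) if scores else -1
              if best >= 0 and scores[best] >= thresh:
                  matches.append((best, d_idx))
                  used.add(best)
              else:
                  new_objects.append(d_idx)
          return matches, new_objects

      if __name__ == "__main__":
          tracks = [[10, 10, 50, 50], [100, 100, 160, 180]]
          detections = [[12, 11, 52, 49], [300, 300, 340, 360]]
          print(fuse(tracks, detections))          # ([(0, 0)], [1])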
  • Patent number: 11079334
    Abstract: A food inspection apparatus includes: a conveyance unit; a light irradiation unit; an imaging unit capturing an image of an inspection object A; a wavelength emphasis unit emphasizing a foreign-substance-specific wavelength characteristic of a foreign substance F from light having a wavelength of 300 nm to 1100 nm, by using a first and/or second optical filter or a wavelength-specific light source; and an identification processing device identifying the foreign substance, including: a lightening unit normalizing the captured image into lightened data with 256 or fewer gradations; and an identification unit, trained by deep learning to identify the foreign-substance-specific wavelength from the lightened data, that identifies the foreign substance F or a good item S in line from the lightened data obtained by capturing images during conveyance of the inspection object.
    Type: Grant
    Filed: August 22, 2017
    Date of Patent: August 3, 2021
    Assignees: KEWPIE CORPORATION, BRAINPAD INC.
    Inventors: Kotaro Furihata, Takeshi Ogino, Kenji Suzuki, Hiromu Suzuki, Taketoshi Yamamoto, Mitsuhisa Ota, Yoshimitsu Imazu, Alejandro Javier Gonzalez Tineo, Yuta Yoshida, Yohei Sugawara
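    Illustrative sketch: the "lightening unit" above normalizes the captured image into data with 256 or fewer gradations before identification. A possible NumPy reading of that step (the scaling choices are assumptions, not from the patent):
      import numpy as np

      def lighten(image, gradations=256):
          """Normalize a captured image of arbitrary dynamic range into `gradations` levels."""
          img = image.astype(np.float64)
          lo, hi = img.min(), img.max()
          if hi == lo:                                  # flat image: map everything to level 0
              return np.zeros(img.shape, dtype=np.uint8)
          scaled = (img - lo) / (hi - lo) * (gradations - 1)
          return np.round(scaled).astype(np.uint8)

      if __name__ == "__main__":
          raw = np.random.randint(0, 4096, size=(64, 64))    # e.g. 12-bit sensor data
          print(lighten(raw).max())                           # <= 255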
  • Patent number: 11080839
    Abstract: A system is provided for identifying damage to a vehicle. During operation, the system can obtain a set of digital images associated with a set of tagged digital images as training data. Each tagged digital image in the set of tagged digital images may include at least one damage object. The system can train a damage identification model based on the training data. When training the damage identification model, the system may identify at least a damage object in the training data based on a target detection technique. The system may also generate a set of feature vectors for the training data. The system can use the set of feature vectors to optimize a set of parameters associated with the damage identification model to obtain a trained damage identification model. The system can then apply the trained damage identification model to obtain a damage category prediction result.
    Type: Grant
    Filed: August 28, 2019
    Date of Patent: August 3, 2021
    Assignee: Advanced New Technologies Co., Ltd.
    Inventor: Juan Xu
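    Illustrative sketch: the abstract above trains a damage identification model on feature vectors generated from tagged training images. A toy stand-in using random feature vectors and an off-the-shelf classifier is shown below; the feature dimensionality, category count, and choice of logistic regression are assumptions, not the patent's model.
      import numpy as np
      from sklearn.linear_model import LogisticRegression

      rng = np.random.default_rng(0)
      X_train = rng.normal(size=(200, 64))      # stand-in features for 200 detected damage regions
      y_train = rng.integers(0, 3, size=200)    # three hypothetical damage categories

      model = LogisticRegression(max_iter=1000).fit(X_train, y_train)

      X_new = rng.normal(size=(5, 64))          # features from regions of a new claim photo
      print(model.predict(X_new))               # predicted damage category per region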
  • Patent number: 11073619
    Abstract: Efficient and scalable three-dimensional point cloud segmentation. In an embodiment, a three-dimensional point cloud is segmented by adding points to a spatial hash. For each unseen point, a cluster is generated, the unseen point is added to the cluster and marked as seen, and, for each point that is added to the cluster, the point is set as a reference, a reference threshold metric is computed, all unseen neighbors are identified based on the reference threshold metric, and, for each identified unseen neighbor, the unseen neighbor is marked as seen, a neighbor threshold metric is computed, and the neighbor is added or not added to the cluster based on the neighbor threshold metric. When the cluster reaches a threshold size, it may be added to a cluster list. Objects may be identified based on the cluster list and used to control autonomous system(s).
    Type: Grant
    Filed: February 2, 2021
    Date of Patent: July 27, 2021
    Assignee: APEX.AI, INC.
    Inventor: Christopher Ho
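    Illustrative sketch: the abstract above grows clusters over a spatial hash of the point cloud. The sketch below uses a fixed neighbour radius in place of the patent's per-point threshold metrics (cell size, radius, and minimum cluster size are illustrative assumptions).
      import numpy as np
      from collections import defaultdict, deque

      def spatial_hash(points, cell):
          """Bucket 3-D points by integer cell coordinates."""
          grid = defaultdict(list)
          for i, p in enumerate(points):
              grid[tuple((p // cell).astype(int))].append(i)
          return grid

      def neighbours(grid, points, idx, cell, radius):
          """Indices of points within `radius` of points[idx], found via the hash."""
          key = tuple((points[idx] // cell).astype(int))
          out = []
          for dx in (-1, 0, 1):
              for dy in (-1, 0, 1):
                  for dz in (-1, 0, 1):
                      for j in grid.get((key[0] + dx, key[1] + dy, key[2] + dz), []):
                          if j != idx and np.linalg.norm(points[j] - points[idx]) <= radius:
                              out.append(j)
          return out

      def segment(points, cell=1.0, radius=1.0, min_size=5):
          """Grow clusters from unseen points; keep clusters above min_size."""
          grid = spatial_hash(points, cell)
          seen = np.zeros(len(points), dtype=bool)
          clusters = []
          for s in range(len(points)):
              if seen[s]:
                  continue
              seen[s] = True
              cluster, queue = [s], deque([s])
              while queue:
                  ref = queue.popleft()
                  for n in neighbours(grid, points, ref, cell, radius):
                      if not seen[n]:
                          seen[n] = True
                          cluster.append(n)
                          queue.append(n)
              if len(cluster) >= min_size:
                  clusters.append(cluster)
          return clusters

      if __name__ == "__main__":
          pts = np.vstack([np.random.rand(50, 3), np.random.rand(50, 3) + 10.0])
          print([len(c) for c in segment(pts)])    # roughly two clusters of ~50 points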
  • Patent number: 11062178
    Abstract: An image processing system includes an image acquisition unit that acquires a captured image obtained by imaging the outside of a vehicle, a dictionary storage unit that stores dictionary data to be referred to in specifying an object included in the captured image, a specification unit that specifies the object based on the dictionary data, a behavior information acquisition unit that acquires behavior information indicating a behavior state of the vehicle, and a classification unit that classifies, based on the behavior information of the vehicle with respect to an unspecifiable object (an object that the specification unit could not specify), whether or not the vehicle needs to avoid the unspecifiable object. Image data of the unspecifiable object, along with the classification result of the classification unit, is used for creating the dictionary data.
    Type: Grant
    Filed: July 9, 2019
    Date of Patent: July 13, 2021
    Assignee: TOYOTA JIDOSHA KABUSHIKI KAISHA
    Inventors: Kazuya Nishimura, Yoshihiro Oe, Hirofumi Kamimaru
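    Illustrative sketch: the classification unit above labels an unspecifiable object as needing avoidance or not based on how the vehicle behaved around it. A toy rule along those lines (the fields and thresholds are hypothetical, not from the patent):
      from dataclasses import dataclass

      @dataclass
      class Behavior:
          """Hypothetical behavior record logged while passing an unspecifiable object."""
          steering_change_deg: float   # magnitude of the steering correction
          decel_mps2: float            # peak deceleration

      def needs_avoidance(b, steer_thresh=5.0, decel_thresh=2.0):
          """Did the vehicle's behavior indicate that the object had to be avoided?"""
          return b.steering_change_deg > steer_thresh or b.decel_mps2 > decel_thresh

      # The image data of the object plus this label would then feed dictionary creation.
      print(needs_avoidance(Behavior(steering_change_deg=12.0, decel_mps2=0.5)))  # True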
  • Patent number: 11062147
    Abstract: A system includes a sensor, a weight sensor, and a tracking subsystem. The tracking subsystem receives an image feed of top-view images generated by the sensor and weight measurements from the weight sensor. The tracking subsystem detects an event associated with an item being removed from a rack in which the weight sensor is installed. The tracking subsystem determines that a first and second person may be associated with the event. After the item exits the rack, the subsystem tracks the item and calculates a velocity of the item as it is moved through the space. The subsystem identifies, based on the calculated velocity, a frame in which the velocity of the item is less than a threshold velocity. The subsystem determines whether the first or second person is nearer the item in the identified frame. If the first person is nearer, the item is assigned to the first person.
    Type: Grant
    Filed: October 25, 2019
    Date of Patent: July 13, 2021
    Assignee: 7-Eleven, Inc.
    Inventors: Shahmeer Ali Mirza, Sailesh Bharathwaaj Krishnamurthy, Sarath Vakacharla, Deepanjan Paul
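    Illustrative sketch: the abstract above tracks the item's velocity, finds a frame where it drops below a threshold, and assigns the item to whichever person is nearer in that frame. A NumPy sketch of that decision (frame rate, threshold, and the 2-D position representation are assumptions):
      import numpy as np

      def assign_item(item_track, person1_track, person2_track, fps=30.0, v_thresh=0.2):
          """Each *_track is a list of (x, y) positions per frame, in metres."""
          item = np.asarray(item_track, dtype=float)
          speeds = np.linalg.norm(np.diff(item, axis=0), axis=1) * fps     # speed per frame, m/s
          slow = np.flatnonzero(speeds < v_thresh)
          frame = int(slow[0]) + 1 if slow.size else len(item) - 1         # fall back to the last frame
          d1 = np.linalg.norm(np.asarray(person1_track[frame]) - item[frame])
          d2 = np.linalg.norm(np.asarray(person2_track[frame]) - item[frame])
          return ("person1", frame) if d1 <= d2 else ("person2", frame)

      if __name__ == "__main__":
          item = [(0, 0), (0.1, 0), (0.2, 0), (0.201, 0)]    # item slows on the last step
          p1 = [(0.3, 0)] * 4
          p2 = [(2.0, 0)] * 4
          print(assign_item(item, p1, p2))                    # ('person1', 3)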
  • Patent number: 11062444
    Abstract: The invention relates to an artificial intelligence cataract analysis system including: a pattern recognition module for recognizing the photo mode of an input eye image, wherein photo modes are distinguished according to the slit width of the illuminating slit during photographing of the eye image and/or whether a mydriatic treatment was carried out; and a preliminary analysis module for selecting a corresponding deep learning model for each photo mode, analyzing the characteristics of the lens in the eye image using the selected deep learning model, and further performing classification in combination with the cause and severity of the disease. The invention can perform intelligent cataract analysis on eye images captured in different photo modes using deep learning models, so that analysis accuracy is improved.
    Type: Grant
    Filed: August 8, 2019
    Date of Patent: July 13, 2021
    Assignee: ZHONGSHAN OPHTHALMIC CENTER OF SUN YAT-SEN UNIVERSITY
    Inventors: Haotian Lin, Xiaohang Wu, Weiyi Lai
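    Illustrative sketch: the abstract above selects a different deep learning model for each photo mode before analyzing the lens. A minimal dispatch pattern, with stub models and made-up mode labels standing in for the trained networks:
      # Hypothetical registry mapping photo mode -> trained model; real models would be
      # loaded from disk, here stubs return fixed example results.
      MODELS = {
          ("slit", "mydriatic"):     lambda img: "nuclear grade II",
          ("slit", "non-mydriatic"): lambda img: "nuclear grade I",
          ("diffuse", "mydriatic"):  lambda img: "cortical opacity",
      }

      def analyze(eye_image, slit_mode, pupil_mode):
          """Pick the model matching the recognized photo mode, then analyze the lens."""
          model = MODELS.get((slit_mode, pupil_mode))
          if model is None:
              raise ValueError(f"no model trained for mode {(slit_mode, pupil_mode)}")
          return model(eye_image)

      print(analyze(eye_image=None, slit_mode="slit", pupil_mode="mydriatic"))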
  • Patent number: 11055557
    Abstract: The system and method described herein provide for a machine-learning model to automate determination of product attributes for a product based on images associated with the product. The product attributes can be used in online commerce to facilitate product selection by a customer. In accordance with this disclosure, the product attributes may be determined using machine-learning technology by processing images associated with the product (including product packaging). The machine-learning technology is trained using product-related vocabulary and potential attributes that can be discovered by analyzing the images associated with the product.
    Type: Grant
    Filed: May 17, 2018
    Date of Patent: July 6, 2021
    Assignee: Walmart Apollo, LLC
    Inventors: Anirban Chatterjee, Bodhisattwa Prasad Majumder, Gayatri Pal, Rajesh Shreedhar Bhat, Sumanth S. Prabhu, Vignesh Selvaraj
  • Patent number: 11055878
    Abstract: A person counting method and a person counting system are provided. The method includes extracting a group of person images to obtain a first image set; dividing the first image set into first and second subsets based on whether a related image exists in a second image set, and reusing a person ID of the related image; estimating posture patterns of images in the first subset, and storing the images in the first subset into an image library based on person IDs and the posture patterns; and selecting a target image whose similarity to an image in the second subset is highest from the image library, reusing a person ID of the target image when the similarity is greater than a threshold, and assigning a new person ID and incrementing a person counter by 1 when the similarity is not greater than the threshold.
    Type: Grant
    Filed: October 17, 2019
    Date of Patent: July 6, 2021
    Assignee: RICOH COMPANY, LTD.
    Inventors: Hong Yi, Haijing Jia, Weitao Gong, Wei Wang
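    Illustrative sketch: the counting step above reuses an existing person ID when the best match in the image library exceeds a similarity threshold, and otherwise assigns a new ID and increments the counter. A compact NumPy version (cosine similarity and the 0.8 threshold are assumptions, not the patent's metric):
      import numpy as np

      def cosine_similarity(a, b):
          a, b = np.asarray(a, float), np.asarray(b, float)
          return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-9))

      def count_person(feature, library, counter, threshold=0.8):
          """library maps person ID -> representative feature vector."""
          if library:
              best_id = max(library, key=lambda pid: cosine_similarity(feature, library[pid]))
              if cosine_similarity(feature, library[best_id]) > threshold:
                  return best_id, counter                    # reuse the matched person ID
          new_id = f"person_{counter + 1}"                   # otherwise register a new person
          library[new_id] = np.asarray(feature, float)
          return new_id, counter + 1

      if __name__ == "__main__":
          lib, n = {}, 0
          for feat in ([1.0, 0.0], [0.99, 0.05], [0.0, 1.0]):
              pid, n = count_person(feat, lib, n)
              print(pid, n)          # person_1 1 / person_1 1 / person_2 2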
  • Patent number: 11042758
    Abstract: A computer, including a processor and a memory, the memory including instructions to be executed by the processor to generate a synthetic image and corresponding ground truth and generate a plurality of domain-adapted synthetic images by processing the synthetic image with a variational autoencoder-generative adversarial network (VAE-GAN), wherein the VAE-GAN is trained to adapt the synthetic image from a first domain to a second domain. The instructions can include further instructions to train a deep neural network (DNN) based on the domain-adapted synthetic images and the corresponding ground truth, and to process images with the trained deep neural network to determine objects.
    Type: Grant
    Filed: July 2, 2019
    Date of Patent: June 22, 2021
    Assignee: Ford Global Technologies, LLC
    Inventors: Nikita Jaipuria, Gautham Sholingar, Vidya Nariyambut Murali, Rohan Bhasin, Akhil Perincherry
  • Patent number: 11037316
    Abstract: An apparatus comprises: an acquisition unit that captures an image of a measurement target and acquires an original image pair having a parallax; a reduction unit that reduces a size of the original image pair; and a calculator that calculates, from an image pair obtained by the reduction unit, a parallax map using predetermined search ranges and search windows in the respective regions. In order to obtain a parallax map that has a predetermined number of pieces of data, the calculator calculates parallax maps in a plurality of hierarchies in which a hierarchy where a parallax map is calculated using an image pair with a lowest magnification is set as a lowest hierarchy, and the predetermined search ranges for calculating the parallax maps in the respective hierarchies are determined based on a parallax map in an immediately lower hierarchy.
    Type: Grant
    Filed: March 26, 2019
    Date of Patent: June 15, 2021
    Assignee: CANON KABUSHIKI KAISHA
    Inventor: Makoto Oigawa
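    Illustrative sketch: the abstract above derives the search ranges at each hierarchy level from the parallax map of the level immediately below it. One way to read that (the upsampling factor and +/- margin are assumptions, not the patent's values):
      import numpy as np

      def search_ranges_from_coarse(coarse_parallax, scale=2, margin=2):
          """Per-pixel search ranges for the next (finer) hierarchy level.

          The coarse parallax map is upsampled by `scale`, its values are scaled to the
          finer resolution, and +/- `margin` pixels around each predicted parallax
          becomes the search range at that pixel.
          """
          up = np.kron(coarse_parallax, np.ones((scale, scale))) * scale   # nearest-neighbour upsample
          return up - margin, up + margin                                  # (low, high) per pixel

      if __name__ == "__main__":
          coarse = np.array([[3.0, 4.0],
                             [3.0, 5.0]])             # parallax at the lowest-magnification level
          lo, hi = search_ranges_from_coarse(coarse)
          print(lo.shape)                              # (4, 4): one range per pixel of the finer level
          print(lo[0, 0], hi[0, 0])                    # 4.0 8.0 -> search 4..8 px instead of the full range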
  • Patent number: 11034357
    Abstract: Systems and techniques for scene classification and prediction are provided herein. A first series of image frames of an environment may be captured from a moving vehicle. Traffic participants within the environment may be identified and masked based on a first convolutional neural network (CNN). Temporal classification may be performed to generate a series of image frames associated with temporal predictions using a scene classification model based on CNNs and a long short-term memory (LSTM) network. Additionally, scene classification may occur based on global average pooling. Feature vectors may be generated from different series of image frames, and a fusion feature vector may be obtained by performing data fusion on a first feature vector, a second feature vector, a third feature vector, etc. In this way, a behavior predictor may generate a predicted driver behavior based on the fusion feature vector.
    Type: Grant
    Filed: June 11, 2019
    Date of Patent: June 15, 2021
    Assignee: Honda Motor Co., Ltd.
    Inventors: Athmanarayanan Lakshmi Narayanan, Isht Dwivedi, Behzad Dariush
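    Illustrative sketch: the fusion feature vector above can be formed by simple concatenation of the per-branch embeddings before the behavior predictor. A toy example with made-up dimensions (the concatenation scheme is an assumption, not necessarily the patent's fusion):
      import numpy as np

      # Hypothetical per-branch embeddings for one clip: masked-frame CNN features,
      # LSTM temporal features, and globally average-pooled scene features.
      cnn_feat  = np.random.rand(512)
      lstm_feat = np.random.rand(256)
      gap_feat  = np.random.rand(128)

      # Late fusion by concatenation; the fused vector would feed the behavior predictor.
      fusion_feature = np.concatenate([cnn_feat, lstm_feat, gap_feat])
      print(fusion_feature.shape)    # (896,)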
  • Patent number: 11030455
    Abstract: The present disclosure belongs to the fields of 3D gaze point recognition and computer vision, and more particularly discloses a pose recognition method, device and system for an object of interest to human eyes. The method identifies the centers of the left and right pupils of a user by using a left eye camera and a right eye camera on an eye tracker to extract information about the user's eyes; maps the obtained pupil centers to a left scene camera to obtain a 2D gaze point; extracts bounding boxes of objects in the left scene camera using target recognition and tracking algorithms, and then determines the object of interest to the user according to the positional relationship between the 2D gaze point and the bounding boxes of the objects; performs 3D reconstruction and pose estimation of the object of interest to obtain its pose in the left scene camera; and converts the pose of the object of interest in the left scene camera to a pose in the world coordinate system.
    Type: Grant
    Filed: May 14, 2019
    Date of Patent: June 8, 2021
    Assignee: HUAZHONG UNIVERSITY OF SCIENCE AND TECHNOLOGY
    Inventors: Caihua Xiong, Shikai Qiu, Quanlin Li
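    Illustrative sketch: the abstract above picks the object of interest from the positional relationship between the 2D gaze point and the object bounding boxes. A minimal containment test (the tie-break by smallest box is an assumption, not the patent's rule):
      def object_of_interest(gaze_xy, boxes):
          """boxes: dict name -> (x1, y1, x2, y2) in left-scene-camera pixels."""
          gx, gy = gaze_xy
          hits = [(name, (x2 - x1) * (y2 - y1))
                  for name, (x1, y1, x2, y2) in boxes.items()
                  if x1 <= gx <= x2 and y1 <= gy <= y2]
          # Smallest enclosing box wins if several contain the gaze point.
          return min(hits, key=lambda h: h[1])[0] if hits else None

      boxes = {"cup": (100, 120, 180, 220), "laptop": (50, 50, 400, 300)}
      print(object_of_interest((150, 160), boxes))   # 'cup'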
  • Patent number: 11024046
    Abstract: Systems and methods for depth estimation in accordance with embodiments of the invention are illustrated. One embodiment includes a method for estimating depth from images. The method includes steps for receiving a plurality of source images captured from a plurality of different viewpoints using a processing system configured by an image processing application, generating a target image from a target viewpoint that is different from the viewpoints of the plurality of source images based upon a set of generative model parameters using the processing system configured by the image processing application, and identifying depth information of at least one output image based on the predicted target image using the processing system configured by the image processing application.
    Type: Grant
    Filed: February 6, 2019
    Date of Patent: June 1, 2021
    Assignee: FotoNation Limited
    Inventor: Kartik Venkataraman
  • Patent number: 11017508
    Abstract: An image matching method and apparatus are disclosed. The method according to the present embodiment comprises: a step of acquiring source images; a step of acquiring transformation images by converting the display mode of the source images into a top view and adjusting their brightness through auto exposure (AE); a step of acquiring image information; and a step of generating a target image comprising a first area and a second area in an overlap area, in which pixels of a corrected transformation image are disposed on the basis of the image information, wherein the correction comprises gradation correction of the pixels of an image disposed in the first area and of the pixels of an image in the second area.
    Type: Grant
    Filed: October 12, 2017
    Date of Patent: May 25, 2021
    Assignee: LG INNOTEK CO., LTD.
    Inventors: Dong Gyun Kim, Sung Hyun Lim
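    Illustrative sketch: the gradation correction above adjusts pixels across the overlap so the two top-view images blend smoothly. A simple linear cross-fade over an equal-sized overlap strip (the weighting scheme is an assumption, not the patent's correction):
      import numpy as np

      def blend_overlap(first_view, second_view):
          """Blend two equally sized top-view crops of the shared overlap area."""
          h, w = first_view.shape[:2]
          alpha = np.linspace(1.0, 0.0, w).reshape(1, w)   # weight 1 at the first-area edge
          if first_view.ndim == 3:
              alpha = alpha[..., None]
          return (alpha * first_view + (1.0 - alpha) * second_view).astype(first_view.dtype)

      if __name__ == "__main__":
          a = np.full((4, 8), 200, dtype=np.uint8)   # brighter camera
          b = np.full((4, 8), 100, dtype=np.uint8)   # darker camera
          print(blend_overlap(a, b)[0])              # values ramp from 200 down to 100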
  • Patent number: 11017531
    Abstract: Methods of and systems for reconstructing a vascular tree shape from vascular segments imaged in a single source 2-D projection image are described. A structuring shape comprising spatial positions of reference anatomical elements is defined, such as vascular segments in the definition of a 3-D surface model corresponding to a surface defined by an anatomical structure such as a body organ (e.g., heart). The 3-D surface model is used to create a 3-D model of anatomical elements (e.g., additional vascular segments of a cardiac vasculature) imaged in a source 2-D projection image, by back-projection to the 3-D surface model. The 3-D surface model is optionally aligned by first aligning the source 2-D projection image to the structuring shape. In some embodiments, the source 2-D projection image is registered to the 3-D surface model through the structuring shape by the source image's initial use in defining the structuring shape.
    Type: Grant
    Filed: March 8, 2018
    Date of Patent: May 25, 2021
    Assignee: CathWorks Ltd
    Inventors: Omri Harish, Ofek Shilon, Guy Lavi
  • Patent number: 11010592
    Abstract: In one embodiment, example systems and methods relate to a manner of generating 3D representations from monocular 2D images. A monocular 2D image is captured by a camera. The 2D image is processed to create one or more feature maps. The features may include depth features, or object labels, for example. Based on the image and the feature map, regions-of-interest corresponding to vehicles in the image are determined. For each region-of-interest a lifting function is applied to the region-of-interest to determine values such as height and width, camera distance, and rotation. The determined values are used to create an eight-point box that is a 3D representation of the vehicle depicted by the region-of-interest. The 3D representation can be used for a variety of purposes such as route planning, object avoidance, or as training data, for example.
    Type: Grant
    Filed: February 6, 2019
    Date of Patent: May 18, 2021
    Assignee: Toyota Research Institute, Inc.
    Inventors: Wadim Kehl, Fabian Manhardt
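    Illustrative sketch: the lifting function above regresses dimensions, camera distance, and rotation, from which the eight-point 3D box is built. One way to assemble the eight corners (the axis conventions and yaw-only rotation are assumptions):
      import numpy as np

      def lift_to_box(center, width, height, length, yaw):
          """Eight corners of a 3-D box from a camera-frame center, dimensions, and yaw (rad)."""
          dx, dy, dz = length / 2.0, height / 2.0, width / 2.0
          corners = np.array([[sx * dx, sy * dy, sz * dz]
                              for sx in (-1, 1) for sy in (-1, 1) for sz in (-1, 1)], dtype=float)
          c, s = np.cos(yaw), np.sin(yaw)
          rot_y = np.array([[c, 0.0, s],           # rotation about the vertical (y) axis
                            [0.0, 1.0, 0.0],
                            [-s, 0.0, c]])
          return corners @ rot_y.T + np.asarray(center, dtype=float)

      if __name__ == "__main__":
          box = lift_to_box(center=(2.0, 0.5, 10.0), width=1.8, height=1.5, length=4.2, yaw=np.pi / 6)
          print(box.shape)                         # (8, 3): the eight-point box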
  • Patent number: 11004192
    Abstract: A computer-implemented method for selecting aerial images for image processing to identify Energy Infrastructure (EI) features is provided. The method includes performing image processing on aerial images of a portion of global terrain captured at different times to determine differences in terrain content between the captured images. Aerial images are selected for further image processing according to the identified differences in terrain content. The selected images are image processed via an EI feature recognition type to identify EI features within the images.
    Type: Grant
    Filed: June 28, 2019
    Date of Patent: May 11, 2021
    Assignee: SOURCEWATER, INC.
    Inventor: Joshua Adler
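    Illustrative sketch: the abstract above selects image pairs for further EI-feature processing when the terrain content has changed between capture times. A toy change test on co-registered tiles (the mean-absolute-difference metric and threshold are assumptions):
      import numpy as np

      def select_for_processing(image_t0, image_t1, change_thresh=0.05):
          """True when the mean absolute pixel difference of two co-registered 8-bit tiles exceeds the threshold."""
          a = image_t0.astype(float) / 255.0
          b = image_t1.astype(float) / 255.0
          return float(np.mean(np.abs(a - b))) > change_thresh

      if __name__ == "__main__":
          before = np.zeros((100, 100), dtype=np.uint8)
          after = before.copy()
          after[30:70, 30:70] = 255                     # e.g. a new well pad appears
          print(select_for_processing(before, after))   # True: worth running EI feature recognition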
  • Patent number: 10997721
    Abstract: A microbe scanning device and methods are disclosed. The device includes a housing that includes a sensor(s), output device(s) that convey text/audio/images, and control circuit(s) coupled to the sensor(s) and output device(s). The sensor captures first ASD and second ASD. The first and second ASD each include an image of an appendage captured using one or more of radio waves, visible light ("VL"), and infrared light ("IR"). The control circuit is configured to determine, using ASD, whether a user is present; determine, using ASD, whether hands are present when the user is present; determine, using ASD, whether microbial material is present on the hands; generate a first notification when the microbial material is present on the hands; transmit, via the output device, the first notification; generate a second notification when the microbial material is not present on the hands; and transmit the second notification when generated.
    Type: Grant
    Filed: April 7, 2020
    Date of Patent: May 4, 2021
    Inventor: Beth Allison Lopez
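    Illustrative sketch: the control flow above checks for a user, then hands, then microbial material, and conveys the matching notification. A compact reading of that logic (the message strings are placeholders, not from the patent):
      def scan_step(user_present, hands_present, microbes_detected):
          """Return the notification the output device should convey, or None if nothing applies yet."""
          if not user_present or not hands_present:
              return None
          if microbes_detected:
              return "Microbial material detected - please wash your hands."
          return "No microbial material detected."

      print(scan_step(user_present=True, hands_present=True, microbes_detected=True))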