Patents Examined by Nancy Bitar
  • Patent number: 11514558
    Abstract: A method for automatically enhancing an image from a device includes obtaining a first image using an imaging device. Recognition software is configured to recognize an object or individual in the first image. An initial image profile is configured based on the first image. Editing software is used to edit at least one attribute of the initial image profile. At least one subsequent image is taken or received. The recognition software is used to recognize the object or individual in the at least one subsequent image. The at least one attribute of the at least one subsequent image is automatically edited based on the initial image profile.
    Type: Grant
    Filed: February 2, 2018
    Date of Patent: November 29, 2022
    Inventor: Edward C. Meagher
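    A rough Python sketch of the workflow described in this entry: a profile of edit attributes is built from the first image and automatically re-applied to later images of the same subject. The `recognize_subject` helper and the brightness/contrast attributes are hypothetical stand-ins, not the patented implementation.

```python
# Minimal sketch (assumptions): `recognize_subject` is a hypothetical stand-in for the
# recognition software; profiles map a subject ID to edit attributes learned from the
# first image. Plain NumPy brightness/contrast edits serve as example attributes.
import numpy as np

profiles = {}  # subject_id -> {"brightness": float, "contrast": float}

def recognize_subject(image: np.ndarray) -> str:
    """Hypothetical recognizer; returns an ID for the person/object in the frame."""
    return "subject_0"

def build_profile(first_image: np.ndarray, brightness: float, contrast: float) -> None:
    """Store the user's edits for the subject found in the first image."""
    profiles[recognize_subject(first_image)] = {"brightness": brightness, "contrast": contrast}

def auto_edit(image: np.ndarray) -> np.ndarray:
    """Apply the stored profile to any later image containing the same subject."""
    p = profiles.get(recognize_subject(image))
    if p is None:
        return image                      # no profile yet; leave the image untouched
    out = image.astype(np.float32)
    out = (out - 128.0) * p["contrast"] + 128.0 + p["brightness"]
    return np.clip(out, 0, 255).astype(np.uint8)
```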
  • Patent number: 11507615
    Abstract: A method and apparatus for image searching based on artificial intelligence (AI) are provided. The method includes obtaining first feature information by extracting features from an image based on a first neural network, obtaining second feature information corresponding to a target area of a query image by processing the first feature information based on a second neural network and at least two filters having different sizes, and identifying an image corresponding to the query image according to the second feature information.
    Type: Grant
    Filed: January 29, 2020
    Date of Patent: November 22, 2022
    Assignee: Samsung Electronics Co., Ltd.
    Inventors: Zhonghua Luo, Jiahui Yuan, Wei Wen, Zuozhou Pan, Yuanyang Xue
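    A minimal PyTorch sketch of the two-stage idea in this entry, under stated assumptions: a small CNN stands in for the first network, and parallel convolutions with different kernel sizes stand in for the "at least two filters having different sizes" of the second network; retrieval is plain cosine similarity. This is an illustration, not Samsung's architecture.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class FirstNetwork(nn.Module):           # extracts generic feature maps
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU())
    def forward(self, x):
        return self.features(x)

class SecondNetwork(nn.Module):          # multi-size filters over the first features
    def __init__(self):
        super().__init__()
        self.branch3 = nn.Conv2d(64, 64, 3, padding=1)
        self.branch5 = nn.Conv2d(64, 64, 5, padding=2)
        self.proj = nn.Linear(128, 128)
    def forward(self, fmap):
        a = F.adaptive_avg_pool2d(self.branch3(fmap), 1).flatten(1)
        b = F.adaptive_avg_pool2d(self.branch5(fmap), 1).flatten(1)
        return F.normalize(self.proj(torch.cat([a, b], dim=1)), dim=1)

first, second = FirstNetwork(), SecondNetwork()
query = torch.randn(1, 3, 128, 128)
gallery = torch.randn(8, 3, 128, 128)
q = second(first(query))
g = second(first(gallery))
best = torch.argmax(g @ q.t())           # gallery image most similar to the query
```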
  • Patent number: 11501869
    Abstract: Systems and methods are disclosed for predicting a resistance index associated with a tumor and surrounding tissue, comprising receiving one or more digital images of a pathology specimen, receiving additional information about a patient and/or a disease associated with the pathology specimen, determining at least one target region of the one or more digital images for analysis and removing a non-relevant region of the one or more digital images, applying a machine learning system to the one or more digital images to determine a resistance index for the target region of the one or more digital images, the machine learning system having been trained using a plurality of training images to predict the resistance index for the target region using a plurality of images of pathology specimens, and outputting the resistance index corresponding to the target region.
    Type: Grant
    Filed: October 5, 2021
    Date of Patent: November 15, 2022
    Assignee: PAIGE.AI, Inc.
    Inventors: Leo Grady, Christopher Kanan, Jorge S. Reis-Filho
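    A minimal Python sketch of the pipeline shape described in this entry: mask out non-relevant regions, then score the remaining target region with a trained model. Both `tissue_mask` and `resistance_model` are hypothetical placeholders, not PAIGE.AI's system.

```python
import numpy as np

def tissue_mask(slide: np.ndarray) -> np.ndarray:
    """Hypothetical relevance mask: keep non-background (non-white) pixels."""
    return (slide.mean(axis=-1) < 220).astype(np.float32)

def resistance_model(region: np.ndarray) -> float:
    """Hypothetical trained model; returns a resistance index in [0, 1]."""
    return float(region.mean() / 255.0)

def predict_resistance_index(slide: np.ndarray) -> float:
    mask = tissue_mask(slide)                         # remove non-relevant regions
    region = slide * mask[..., None]                  # keep only the target region
    return resistance_model(region)                   # score the target region

slide = np.random.randint(0, 256, (512, 512, 3), dtype=np.uint8)
print(predict_resistance_index(slide))
```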
  • Patent number: 11501452
    Abstract: Techniques for detecting motion of a vehicle are disclosed. Optical flow techniques are applied to the entirety of the received images from an optical sensor mounted to a vehicle. Motion detection techniques are then imposed on the optical flow output to remove image portions that correspond to objects moving independent from the vehicle and determine the extent, if any, of movement by the vehicle from the remaining image portions. Motion detection can be performed via a machine learning classifier. In some aspects, motion can be detected by extracting the depth of received images in addition to optical flow. In additional or alternative aspects, the optical flow and/or motion detection techniques can be implemented by at least one artificial neural network.
    Type: Grant
    Filed: August 10, 2020
    Date of Patent: November 15, 2022
    Assignee: Honeywell International Inc.
    Inventors: Vegnesh Jayaraman, Sai Krishnan Chandrasekar, Andrew Stewart, Vijay Venkataraman
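    A minimal OpenCV sketch of the approach outlined in this entry: dense optical flow over the whole frame, a simple flow-magnitude outlier mask standing in for the learned removal of independently moving objects, and a median-magnitude test for vehicle motion. The thresholds are illustrative assumptions.

```python
import cv2
import numpy as np

def vehicle_is_moving(prev_bgr, curr_bgr, motion_thresh=0.5):
    prev = cv2.cvtColor(prev_bgr, cv2.COLOR_BGR2GRAY)
    curr = cv2.cvtColor(curr_bgr, cv2.COLOR_BGR2GRAY)
    flow = cv2.calcOpticalFlowFarneback(prev, curr, None,
                                        0.5, 3, 15, 3, 5, 1.2, 0)
    mag = np.linalg.norm(flow, axis=2)
    # Drop pixels whose motion is far from the global trend (independent movers).
    consistent = np.abs(mag - np.median(mag)) < 2.0 * (mag.std() + 1e-6)
    return float(np.median(mag[consistent])) > motion_thresh
```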
  • Patent number: 11497723
    Abstract: The invention relates to a composition for induction of activity of the nuclear receptor PPARγ and inhibition of HDAC in a subject in need thereof, which comprises a synergistic combination of benzoate and phenylbutyrate and/or phenylacetate in association with a pharmaceutical carrier.
    Type: Grant
    Filed: June 2, 2020
    Date of Patent: November 15, 2022
    Inventor: Tony Antakly
  • Patent number: 11501548
    Abstract: The present disclosure discloses a method and an object determination system for determining one or more target objects in an image. The image is segmented by the object determination system into one or more segments based on visual attributes in a first set. Morphological operations are performed on the one or more segments to obtain one or more morphed segments. One or more candidates of target objects are identified based on visual attributes in a second set corresponding to each of the one or more morphed segments. The object determination system identifies at least one of a true positive and a false positive from the one or more candidates, which indicates the presence or absence of the one or more target objects respectively, based on neighborhood information associated with the one or more candidates. The present disclosure facilitates determining target objects in a document automatically, thereby eliminating manual intervention in identifying target objects in the document.
    Type: Grant
    Filed: July 1, 2019
    Date of Patent: November 15, 2022
    Assignee: EDGEVERVE SYSTEMS LIMITED
    Inventors: Niraj Kunnumma, Rajeshwari Ganesan, Anmol Chandrakant Khopade, Akash Gaur
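    A minimal OpenCV sketch of the segment-morph-filter flow in this entry, under stated assumptions: Otsu thresholding stands in for the first set of visual attributes, an area check for the second set, and a proximity test against hypothetical `expected_neighbors` boxes for the neighborhood-based true/false-positive decision.

```python
import cv2
import numpy as np

def find_target_objects(gray: np.ndarray, expected_neighbors: list) -> list:
    _, seg = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY_INV + cv2.THRESH_OTSU)
    kernel = np.ones((3, 3), np.uint8)
    morphed = cv2.morphologyEx(seg, cv2.MORPH_CLOSE, kernel)      # morphological ops
    contours, _ = cv2.findContours(morphed, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    targets = []
    for c in contours:
        x, y, w, h = cv2.boundingRect(c)
        if w * h < 100:                                           # second-set attribute: size
            continue
        # Neighborhood check: keep as a true positive only if a known neighbor box is nearby.
        near = any(abs(x - nx) < 2 * w and abs(y - ny) < 2 * h
                   for nx, ny, _, _ in expected_neighbors)
        if near:
            targets.append((x, y, w, h))
    return targets
```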
  • Patent number: 11501871
    Abstract: A system includes a microscope configured to magnify a pathology sample, a camera positioned to record magnified pathology images from the microscope, and a display configured to show the magnified pathology images. A processing apparatus is coupled to the camera and the display, and the processing apparatus includes instructions that, when executed by the processing apparatus, cause the system to perform operations including: identifying, using a machine learning algorithm, one or more regions of interest in the magnified pathology images; and alerting, using the display, a user of the microscope to the one or more regions of interest in the magnified pathology images while the pathology sample is being magnified with the microscope.
    Type: Grant
    Filed: February 28, 2019
    Date of Patent: November 15, 2022
    Assignee: Verily Life Sciences LLC
    Inventor: Joëlle K. Barral
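    A minimal Python sketch of the alerting step described in this entry: regions of interest returned by a model are drawn onto the live frame so the display flags them to the microscope user. The `roi_model` function is a hypothetical placeholder for the trained algorithm.

```python
import cv2
import numpy as np

def roi_model(frame: np.ndarray) -> list:
    """Hypothetical ML model; returns (x, y, w, h) boxes for regions of interest."""
    return [(50, 60, 120, 80)]

def annotate_frame(frame: np.ndarray) -> np.ndarray:
    out = frame.copy()
    for x, y, w, h in roi_model(frame):
        cv2.rectangle(out, (x, y), (x + w, y + h), (0, 0, 255), 2)   # highlight ROI
        cv2.putText(out, "ROI", (x, y - 5), cv2.FONT_HERSHEY_SIMPLEX,
                    0.5, (0, 0, 255), 1)
    return out

frame = np.zeros((480, 640, 3), dtype=np.uint8)
display_image = annotate_frame(frame)    # shown on the pathologist's display
```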
  • Patent number: 11501118
    Abstract: A digital model repair method includes: providing a point cloud digital model of a target object as input to a generative network of a trained generative adversarial network ‘GAN’, the input point cloud comprising a plurality of points erroneously perturbed by one or more causes, and generating, by the generative network of the GAN, an output point cloud in which the erroneous perturbation of some or all of the plurality of points has been reduced; where the generative network of the GAN was trained using input point clouds comprising a plurality of points erroneously perturbed by said one or more causes, and a discriminator of the GAN was trained to distinguish point clouds comprising a plurality of points erroneously perturbed by said one or more causes and point clouds substantially without such perturbations.
    Type: Grant
    Filed: May 26, 2020
    Date of Patent: November 15, 2022
    Assignee: Sony Interactive Entertainment Inc.
    Inventors: Nigel John Williams, Fabio Cappello
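    A minimal PyTorch sketch of the GAN-based repair idea in this entry: tiny per-point MLPs stand in for the generator (which predicts a corrective offset for each perturbed point) and the discriminator (which scores whole clouds as clean vs. perturbed). The network sizes and noise model are illustrative assumptions.

```python
import torch
import torch.nn as nn

class Generator(nn.Module):
    def __init__(self):
        super().__init__()
        self.mlp = nn.Sequential(nn.Linear(3, 64), nn.ReLU(), nn.Linear(64, 3))
    def forward(self, pts):                       # pts: (N, 3) noisy point cloud
        return pts + self.mlp(pts)                # denoised cloud = points + offsets

class Discriminator(nn.Module):
    def __init__(self):
        super().__init__()
        self.point_mlp = nn.Sequential(nn.Linear(3, 64), nn.ReLU())
        self.head = nn.Linear(64, 1)
    def forward(self, pts):
        return self.head(self.point_mlp(pts).max(dim=0).values)  # cloud-level score

gen, disc = Generator(), Discriminator()
clean = torch.rand(1024, 3)
noisy = clean + 0.02 * torch.randn_like(clean)    # erroneously perturbed points
repaired = gen(noisy)
realism = torch.sigmoid(disc(repaired))           # discriminator's clean/perturbed score
```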
  • Patent number: 11494937
    Abstract: Provided are systems and methods that perform multi-task and/or multi-sensor fusion for three-dimensional object detection in furtherance of, for example, autonomous vehicle perception and control. In particular, according to one aspect of the present disclosure, example systems and methods described herein exploit simultaneous training of a machine-learned model ensemble relative to multiple related tasks to learn to perform more accurate multi-sensor 3D object detection. For example, the present disclosure provides an end-to-end learnable architecture with multiple machine-learned models that interoperate to reason about 2D and/or 3D object detection as well as one or more auxiliary tasks. According to another aspect of the present disclosure, example systems and methods described herein can perform multi-sensor fusion (e.g.
    Type: Grant
    Filed: October 16, 2019
    Date of Patent: November 8, 2022
    Assignee: UATC, LLC
    Inventors: Raquel Urtasun, Bin Yang, Ming Liang
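    A minimal PyTorch sketch of the multi-task, multi-sensor shape of this entry: two small encoders stand in for the camera and LiDAR branches, concatenation stands in for fusion, and separate heads cover 3D detection plus one auxiliary task trained jointly. The feature sizes, heads, and loss are illustrative assumptions.

```python
import torch
import torch.nn as nn

class FusionDetector(nn.Module):
    def __init__(self):
        super().__init__()
        self.image_enc = nn.Sequential(nn.Linear(256, 128), nn.ReLU())
        self.lidar_enc = nn.Sequential(nn.Linear(256, 128), nn.ReLU())
        self.det_head = nn.Linear(256, 7)      # 3D box: x, y, z, w, l, h, yaw
        self.aux_head = nn.Linear(256, 1)      # auxiliary task, e.g. ground height
    def forward(self, img_feat, lidar_feat):
        fused = torch.cat([self.image_enc(img_feat), self.lidar_enc(lidar_feat)], dim=-1)
        return self.det_head(fused), self.aux_head(fused)

model = FusionDetector()
boxes, aux = model(torch.randn(4, 256), torch.randn(4, 256))
loss = boxes.square().mean() + 0.1 * aux.square().mean()   # joint multi-task loss (dummy targets)
```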
  • Patent number: 11488393
    Abstract: Systems and corresponding methods are provided for moving object predictive locating, reporting, and alerting. An exemplary method includes receiving moving object data corresponding to a moving object; receiving sensor data from a sensor; merging the received moving object data and received sensor data into a set of merged data; and based thereon, automatically determining one or more of: a predicted location or range of locations for the moving object, a potential path of travel or area for the moving object, and a potential for interaction between the moving object and subject objects. The method can include automatically generating and providing alerts based on the determining. Alerts can be configured for users having potential for interaction with the moving object. A method may include receiving sensor data from third parties and providing information generated by the system pertaining to moving objects to other third parties.
    Type: Grant
    Filed: November 13, 2018
    Date of Patent: November 1, 2022
    Inventor: Joshua May
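    A minimal Python sketch of the predict-and-alert step in this entry: merged reports are reduced to timestamped positions, a constant-velocity fit predicts the next location, and an alert fires when the predicted location falls within a subject object's radius. The constant-velocity model and radius are illustrative assumptions.

```python
import numpy as np

def predict_location(times, positions, t_future):
    """Fit position = p0 + v * t per axis and extrapolate to t_future."""
    times, positions = np.asarray(times), np.asarray(positions)
    pred = []
    for axis in range(positions.shape[1]):
        v, p0 = np.polyfit(times, positions[:, axis], 1)
        pred.append(p0 + v * t_future)
    return np.array(pred)

def should_alert(predicted, subject_position, radius=50.0):
    return np.linalg.norm(predicted - np.asarray(subject_position)) < radius

pred = predict_location([0, 1, 2], [[0, 0], [10, 5], [20, 10]], t_future=4)
print(pred, should_alert(pred, subject_position=[42, 22]))
```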
  • Patent number: 11488382
    Abstract: Various user-presence/absence recognition techniques based on deep learning are provided. More specifically, these user-presence/absence recognition techniques include building/training a CNN-based image recognition model including a user-presence/absence classifier based on training images collected from the user-seating area of a surgeon console under various clinically-relevant conditions/cases. The trained user-presence/absence classifier can then be used during teleoperation/surgical procedures to monitor/track users in the user-seating area of the surgeon console, and continuously classify the real-time video images of the user-seating area as either a user-presence state or a user-absence state. In some embodiments, the user-presence/absence classifier can be used to detect a user-switching event at the surgeon console when a second user is detected to have entered the user-seating area after a first user is detected to have exited the user-seating area.
    Type: Grant
    Filed: September 10, 2020
    Date of Patent: November 1, 2022
    Assignee: VERB SURGICAL INC.
    Inventor: Meysam Torabi
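    A minimal Python sketch of the monitoring logic described in this entry: a per-frame presence/absence decision feeds a small state machine that flags a user-switching event when presence follows an absence that itself followed presence. The `presence_classifier` function is a hypothetical stand-in for the trained CNN.

```python
import numpy as np

def presence_classifier(frame: np.ndarray) -> bool:
    """Hypothetical CNN output: True = user present in the seating area."""
    return frame.mean() > 0.5

def detect_user_switch(frames) -> bool:
    state, saw_exit_after_user = "absent", False
    for frame in frames:
        present = presence_classifier(frame)
        if state == "present" and not present:
            saw_exit_after_user = True          # first user left the seating area
        if state == "absent" and present and saw_exit_after_user:
            return True                         # a (possibly different) user sat down
        state = "present" if present else "absent"
    return False
```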
  • Patent number: 11482317
    Abstract: Systems and methods are disclosed for predicting a resistance index associated with a tumor and surrounding tissue, comprising receiving one or more digital images of a pathology specimen, receiving additional information about a patient and/or a disease associated with the pathology specimen, determining at least one target region of the one or more digital images for analysis and removing a non-relevant region of the one or more digital images, applying a machine learning system to the one or more digital images to determine a resistance index for the target region of the one or more digital images, the machine learning system having been trained using a plurality of training images to predict the resistance index for the target region using a plurality of images of pathology specimens, and outputting the resistance index corresponding to the target region.
    Type: Grant
    Filed: September 27, 2021
    Date of Patent: October 25, 2022
    Assignee: PAIGE.AI, Inc.
    Inventors: Leo Grady, Christopher Kanan, Jorge S. Reis-Filho
  • Patent number: 11482016
    Abstract: An apparatus for recognizing a division line on a road from an image captured by a camera includes: a processing area setting unit to set a processing area in the image; a statistics calculation unit to calculate statistics of the image in the processing area; a threshold value setting unit to set a plurality of threshold values, including a white line threshold value and a road surface threshold value, on the basis of the statistics; a division line feature point extraction unit to classify a plurality of pixels contained in the image on the basis of the white line threshold value and the road surface threshold value and to extract feature points of the division line on the basis of the classification results of the plurality of pixels; and a division line decision unit configured to decide the division line on the basis of the feature points extracted by the division line feature point extraction unit.
    Type: Grant
    Filed: February 28, 2019
    Date of Patent: October 25, 2022
    Assignee: Faurecia Clarion Electronics Co., Ltd.
    Inventors: Kimiyoshi Machii, Takehito Ogata, Naoki Shimizu, Ayano Miyashita
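    A minimal NumPy sketch of the statistics-driven thresholding in this entry: the mean and standard deviation of the processing area set the two thresholds, pixels are classified as white-line or road-surface, and row-wise centres of bright runs stand in for the extracted feature points. The threshold formulas are illustrative assumptions.

```python
import numpy as np

def division_line_points(gray: np.ndarray, roi: tuple) -> list:
    """Return candidate feature points of a painted division line inside `roi`."""
    y0, y1, x0, x1 = roi
    area = gray[y0:y1, x0:x1].astype(np.float32)
    mean, std = area.mean(), area.std()             # statistics of the processing area
    white_thresh = mean + 1.5 * std                 # bright painted-line pixels
    road_thresh = mean + 0.5 * std                  # ordinary road-surface pixels
    is_line = area > white_thresh
    is_road = area < road_thresh
    points = []
    for row in range(area.shape[0]):
        cols = np.where(is_line[row])[0]
        # Feature point: centre of a bright run in a row that is otherwise road surface.
        if cols.size and is_road[row].mean() > 0.5:
            points.append((x0 + int(cols.mean()), y0 + row))
    return points
```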
  • Patent number: 11475681
    Abstract: The present application discloses an image processing method, apparatus, electronic device, and computer-readable storage medium. The image processing method comprises detecting a text region in an image to be processed and recognizing the text region to obtain a text recognition result. In this application, text recognition in the image to be processed is achieved, the manner of recognizing text in the image is simplified, and the recognition performance for the text is improved.
    Type: Grant
    Filed: December 3, 2019
    Date of Patent: October 18, 2022
    Inventors: Xiaobing Wang, Yingying Jiang, Xiangyu Zhu, Hao Guo, Yi Yu, Pingjun Li, Zhenbo Luo
  • Patent number: 11471114
    Abstract: The present invention concerns a medical system for mapping of action potential data comprising an elongated medical mapping device (1) suitable for intravascular insertion having an electrode assembly (80) located at a distal portion (3) of the mapping device (1), a data processing and control unit (15) for processing data received from the mapping device (1), the data processing and control unit including a model generator for visualizing a 3-dimensional heart model based on one of electrical navigation system, MRI or CT scan data of a heart, a data output unit (16) for displaying both the 3-dimensional heart model and the processed data of the mapping device (1) simultaneously in a single visualization, wherein the model generator is configured to structure 3D scan data of the heart into 6 directions (a, b, c, d, e or f) of a cube, each direction being associated with a separate Cartesian coordinate system with X(a, b, c, d, e or f), Y(a, b, c, d, e or f), Z(a, b, c, d, e or f) coordinates, wherein for assign
    Type: Grant
    Filed: January 19, 2016
    Date of Patent: October 18, 2022
    Assignee: Ablacon Inc.
    Inventor: Peter Ruppersberg
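    A minimal Python sketch of the cube structuring mentioned in this entry: each 3D sample point is assigned to one of six cube directions (a-f) by its dominant axis relative to the model centre, giving it face-local 2D coordinates. This is a generic cube-mapping illustration, not the patented assignment scheme (whose details are truncated in the abstract above).

```python
import numpy as np

FACES = ["a", "b", "c", "d", "e", "f"]   # +x, -x, +y, -y, +z, -z (assumed ordering)

def cube_face_coords(point, centre):
    v = np.asarray(point, float) - np.asarray(centre, float)
    axis = int(np.argmax(np.abs(v)))                 # dominant axis picks the face
    face = FACES[2 * axis + (0 if v[axis] >= 0 else 1)]
    others = [i for i in range(3) if i != axis]
    u, w = v[others] / (abs(v[axis]) + 1e-9)         # face-local coordinates in [-1, 1]
    return face, (float(u), float(w))

print(cube_face_coords([10.0, 2.0, -1.0], centre=[0.0, 0.0, 0.0]))  # ('a', (0.2, -0.1))
```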
  • Patent number: 11475611
    Abstract: The present disclosure relates to a system and method for generating an image. At least one processor, when executing instructions, may perform one or more of the following operations. When raw data relating to an object is retrieved, an image may be generated based thereon. A first voxel of the image is identified based on a first geometric parameter relating to the first voxel; a second voxel of the image is identified based on a second geometric parameter relating to the second voxel; the image is reconstructed using an iterative reconstruction process, during which the calculation relating to the first voxel is based on a first number of sub-voxels, and the calculation relating to the second voxel is based on a second number of sub-voxels.
    Type: Grant
    Filed: October 14, 2019
    Date of Patent: October 18, 2022
    Assignee: SHANGHAI UNITED IMAGING HEALTHCARE CO., LTD.
    Inventors: Lu Wang, Patrick Kling
  • Patent number: 11461924
    Abstract: Systems and methods are provided for: receiving an image containing a code that has one or more visual qualities that fail to satisfy respective thresholds; applying a trained machine learning model to find a rough location of the code by generating a bounding box and cropping out the portion of the image; applying another trained machine learning model to the portion of the image to estimate key point locations of the code depicted in the portion of the image; aligning the portion of the image that depicts the code based on the estimated key point locations; and decoding, by the other trained machine learning model, the aligned portion of the image that depicts the code.
    Type: Grant
    Filed: September 24, 2020
    Date of Patent: October 4, 2022
    Assignee: Snap Inc.
    Inventors: Shree K. Nayar, Jian Wang, Wenzheng Chen
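    A minimal OpenCV sketch of the crop-align-decode flow in this entry: `rough_detector` and `keypoint_model` are hypothetical placeholders for the two trained models, and alignment is a perspective warp from the four estimated corner key points, after which a decoder can read the rectified code.

```python
import cv2
import numpy as np

def rough_detector(image):
    """Hypothetical model: bounding box (x, y, w, h) around the low-quality code."""
    return 100, 100, 200, 200

def keypoint_model(crop):
    """Hypothetical model: four corner key points of the code inside the crop."""
    h, w = crop.shape[:2]
    return np.float32([[5, 5], [w - 5, 8], [w - 8, h - 5], [6, h - 6]])

def align_code(image, out_size=128):
    x, y, w, h = rough_detector(image)
    crop = image[y:y + h, x:x + w]                    # crop the rough location
    src = keypoint_model(crop)
    dst = np.float32([[0, 0], [out_size, 0], [out_size, out_size], [0, out_size]])
    M = cv2.getPerspectiveTransform(src, dst)         # align via the key points
    return cv2.warpPerspective(crop, M, (out_size, out_size))

aligned = align_code(np.zeros((480, 640, 3), dtype=np.uint8))
```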
  • Patent number: 11458987
    Abstract: A system and method for predicting driving actions based on intent-aware driving models that include receiving at least one image of a driving scene of an ego vehicle. The system and method also include analyzing the at least one image to detect and track dynamic objects located within the driving scene and to detect and identify driving scene characteristics associated with the driving scene and processing an ego-thing graph associated with the dynamic objects and an ego-stuff graph associated with the driving scene characteristics. The system and method further include predicting a driver stimulus action based on a fusion of representations of the ego-thing graph and the ego-stuff graph and a driver intention action based on an intention representation associated with driving intentions of a driver of the ego vehicle.
    Type: Grant
    Filed: August 28, 2020
    Date of Patent: October 4, 2022
    Assignee: HONDA MOTOR CO., LTD.
    Inventors: Chengxi Li, Yi-Ting Chen
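    A minimal PyTorch sketch of the graph-based prediction in this entry: one round of mean-neighbour message passing stands in for each graph (ego-thing over dynamic objects, ego-stuff over scene characteristics), the pooled graph features are fused to predict the stimulus action, and a separate intention vector predicts the intention-driven action. Dimensions, adjacency, and action classes are illustrative assumptions.

```python
import torch
import torch.nn as nn

class GraphPool(nn.Module):
    def __init__(self, dim=32):
        super().__init__()
        self.msg = nn.Linear(dim, dim)
    def forward(self, nodes, adj):                     # nodes: (N, dim), adj: (N, N)
        agg = adj @ self.msg(nodes) / (adj.sum(1, keepdim=True) + 1e-6)
        return torch.relu(nodes + agg).mean(dim=0)     # pooled graph representation

thing_graph, stuff_graph = GraphPool(), GraphPool()
stimulus_head = nn.Linear(64, 4)                       # e.g. stop / slow / turn / go
intention_head = nn.Linear(32, 4)

things, stuff = torch.randn(5, 32), torch.randn(7, 32)
adj_t, adj_s = torch.ones(5, 5), torch.ones(7, 7)
fused = torch.cat([thing_graph(things, adj_t), stuff_graph(stuff, adj_s)])
stimulus_action = stimulus_head(fused)                 # driven by what is in the scene
intention_action = intention_head(torch.randn(32))     # driven by the driver's intent
```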
  • Patent number: 11455788
    Abstract: A method and apparatus for positioning a description statement in an image includes: analyzing a to-be-analyzed description statement and a to-be-analyzed image to obtain a plurality of statement attention weights of the to-be-analyzed description statement and a plurality of image attention weights of the to-be-analyzed image; obtaining a plurality of first matching scores based on the plurality of statement attention weights and a subject feature, a location feature and a relationship feature of the to-be-analyzed image; obtaining a second matching score between the to-be-analyzed description statement and the to-be-analyzed image based on the plurality of first matching scores and the plurality of image attention weights; and determining a positioning result of the to-be-analyzed description statement in the to-be-analyzed image based on the second matching score.
    Type: Grant
    Filed: March 24, 2020
    Date of Patent: September 27, 2022
    Assignee: BEIJING SENSETIME TECHNOLOGY DEVELOPMENT CO., LTD.
    Inventors: Xihui Liu, Jing Shao, Zihao Wang, Hongsheng Li, Xiaogang Wang
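    A minimal NumPy sketch of the two-stage scoring described in this entry: toy first matching scores for the subject, location, and relationship features are blended with statement attention weights, then weighted by image attention weights to give the second (final) matching score. All numbers are illustrative.

```python
import numpy as np

def second_matching_score(stmt_attn, img_attn, subject_s, location_s, relation_s):
    """stmt_attn: weights over (subject, location, relationship); img_attn: per-region weights."""
    first_scores = np.stack([subject_s, location_s, relation_s])     # (3, n_regions)
    per_region = stmt_attn @ first_scores                            # blend the three first scores
    return float(img_attn @ per_region)                              # pool over image regions

stmt_attn = np.array([0.6, 0.3, 0.1])             # statement attention weights
img_attn = np.array([0.2, 0.5, 0.3])              # image attention weights (3 regions)
score = second_matching_score(stmt_attn, img_attn,
                              subject_s=np.array([0.9, 0.4, 0.1]),
                              location_s=np.array([0.2, 0.8, 0.3]),
                              relation_s=np.array([0.1, 0.6, 0.2]))
matched = score > 0.5                              # positioning decision from the second score
```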
  • Patent number: 11455817
    Abstract: A method for detecting a finger at a fingerprint sensor includes detecting a presence of an object at a fingerprint sensor and, in response to detecting the presence of the object, acquiring image data for the object based on signals from the fingerprint sensor. The method further includes, for each subset of one or more subsets of the image data, calculating a magnitude value for a spatial frequency of the subset, and identifying the object as a finger based on comparing the magnitude value to a threshold.
    Type: Grant
    Filed: July 18, 2019
    Date of Patent: September 27, 2022
    Assignee: Cypress Semiconductor Corporation
    Inventors: Andriy Ryshtun, Oleksandr Rohozin, Viktor Kremin, Oleg Kapshii
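    A minimal NumPy sketch of the spatial-frequency test in this entry: for each image subset, the 2D FFT magnitude is averaged inside a ring of spatial frequencies where fingerprint ridges typically fall, and the object is treated as a finger when most subsets exceed a threshold. Block size, band, and threshold are illustrative assumptions.

```python
import numpy as np

def is_finger(image: np.ndarray, block=32, ridge_band=(3, 8), thresh=50.0) -> bool:
    """image: 2D grayscale capture from the fingerprint sensor."""
    h, w = image.shape
    votes = []
    for y in range(0, h - block + 1, block):
        for x in range(0, w - block + 1, block):
            sub = image[y:y + block, x:x + block].astype(np.float32)
            spectrum = np.abs(np.fft.fftshift(np.fft.fft2(sub - sub.mean())))
            cy, cx = block // 2, block // 2
            yy, xx = np.ogrid[:block, :block]
            r = np.hypot(yy - cy, xx - cx)                  # radial spatial frequency
            band = (r >= ridge_band[0]) & (r < ridge_band[1])
            votes.append(spectrum[band].mean() > thresh)    # ridge-like energy present?
    return bool(votes) and (np.mean(votes) > 0.5)           # majority of subsets agree
```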