Patents Examined by Yosef Kassa
  • Patent number: 11978180
    Abstract: A system and a method for processing an image. The method includes the steps of: receiving a source image having at least one sensitive portion arranged to present intelligible sensitive information; and processing, with a predetermined redacting method, the at least one sensitive portion of the source image to generate an output image in which the intelligible sensitive information is transformed into an unintelligible form; wherein the sensitive information in the unintelligible form is adapted to be restored to an intelligible form.
    Type: Grant
    Filed: May 19, 2021
    Date of Patent: May 7, 2024
    Assignee: IQ Works Limited
    Inventors: Rustom Adi Kanga, Lai Chun Li
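The "predetermined redacting method" in the abstract above is not specified; the sketch below illustrates one way a redacted region can be made unintelligible yet restorable, using an XOR keystream as an assumed stand-in. The function names and the key-based scheme are illustrative, not taken from the patent.

```python
# Minimal sketch of key-based reversible redaction (illustrative only; the
# patent's "predetermined redacting method" is not specified here).
import numpy as np

def _keystream(shape, key: int) -> np.ndarray:
    # Deterministic pseudo-random bytes derived from the key.
    rng = np.random.default_rng(key)
    return rng.integers(0, 256, size=shape, dtype=np.uint8)

def redact(image: np.ndarray, box, key: int) -> np.ndarray:
    """XOR the sensitive region with a keystream, making it unintelligible."""
    y0, y1, x0, x1 = box
    out = image.copy()
    region = out[y0:y1, x0:x1]
    out[y0:y1, x0:x1] = region ^ _keystream(region.shape, key)
    return out

def restore(image: np.ndarray, box, key: int) -> np.ndarray:
    """XOR is its own inverse, so the same operation restores the region."""
    return redact(image, box, key)

if __name__ == "__main__":
    img = np.random.default_rng(0).integers(0, 256, (64, 64, 3), dtype=np.uint8)
    masked = redact(img, (10, 30, 10, 30), key=1234)
    assert np.array_equal(restore(masked, (10, 30, 10, 30), key=1234), img)
```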
  • Patent number: 11972601
    Abstract: The present disclosure relates to the technical field of artificial intelligence and provides a method for selecting image samples and related equipment. The method trains an instance segmentation model with first image samples and trains a score prediction model with third image samples. An information quantum score of second image samples is calculated through the score prediction model, and feature vectors are extracted. The second image samples are clustered according to their feature vectors to obtain sample clusters. Target image samples are selected from the second image samples according to the information quantum score and the sample clusters, and are submitted for labelling, improving the accuracy of sample selection.
    Type: Grant
    Filed: August 28, 2020
    Date of Patent: April 30, 2024
    Assignee: Ping An Technology (Shenzhen) Co., Ltd.
    Inventors: Jun Wang, Peng Gao
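As an illustration of the selection logic described above (an informativeness score combined with clustering), the following sketch clusters feature vectors and keeps the highest-scored sample from each cluster. The score values and features are random stand-ins, and the pairing of the scores with k-means is an assumption rather than the patented method.

```python
# Illustrative sketch: combine an informativeness score with feature clustering
# to pick diverse, informative samples for labelling.
import numpy as np
from sklearn.cluster import KMeans

def select_samples(features: np.ndarray, scores: np.ndarray, n_clusters: int):
    labels = KMeans(n_clusters=n_clusters, n_init=10, random_state=0).fit_predict(features)
    selected = []
    for c in range(n_clusters):
        members = np.flatnonzero(labels == c)
        selected.append(members[np.argmax(scores[members])])  # best-scored per cluster
    return selected

rng = np.random.default_rng(0)
feats = rng.normal(size=(200, 32))      # feature vectors of unlabeled samples
scores = rng.uniform(size=200)          # "information quantum" scores (stand-in)
print(select_samples(feats, scores, n_clusters=10))
```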
  • Patent number: 11972591
    Abstract: In an image processing apparatus 100, an image acquiring portion 11 acquires image information. A position acquiring portion 12 acquires the image capturing position and image capturing direction of the image information. An image-capturing propriety information acquiring portion 13 acquires image-capturing propriety information associating the position of an information terminal 200 with image-capturing propriety setting information. An image editing portion 14 applies a masking process to a range based on the information terminal 200 in the acquired image information, in accordance with the image-capturing propriety setting information, when the position of the information terminal 200 falls within the image capturing range determined from the image capturing position and image capturing direction acquired by the position acquiring portion 12.
    Type: Grant
    Filed: May 16, 2019
    Date of Patent: April 30, 2024
    Assignee: MAXELL, LTD.
    Inventors: Kazuhiko Yoshizawa, Satoru Takashimizu
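The masking decision described above can be pictured as a geometric test (is the terminal within the capture range?) followed by an in-image mask. The sketch below is a simplified 2-D version under assumed names and units; it is not the apparatus's actual processing.

```python
# Illustrative sketch: decide whether an information terminal lies inside the
# camera's capture range (position + direction + field of view) and, if so,
# mask a region of the frame. Geometry is simplified to 2-D; all names assumed.
import math
import numpy as np

def in_capture_range(cam_pos, cam_dir_deg, fov_deg, terminal_pos) -> bool:
    dx, dy = terminal_pos[0] - cam_pos[0], terminal_pos[1] - cam_pos[1]
    bearing = math.degrees(math.atan2(dy, dx))
    diff = (bearing - cam_dir_deg + 180) % 360 - 180   # signed angle difference
    return abs(diff) <= fov_deg / 2

def mask_region(frame: np.ndarray, box) -> np.ndarray:
    y0, y1, x0, x1 = box
    out = frame.copy()
    out[y0:y1, x0:x1] = int(out[y0:y1, x0:x1].mean())  # crude flattening mask
    return out

frame = np.zeros((120, 160), dtype=np.uint8)
if in_capture_range((0, 0), cam_dir_deg=45, fov_deg=60, terminal_pos=(3, 4)):
    frame = mask_region(frame, (40, 80, 60, 100))
```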
  • Patent number: 11966229
    Abstract: Provided is a robot, including: a plurality of sensors; a processor; a tangible, non-transitory, machine-readable medium storing instructions that, when executed by the processor, effectuate operations including: capturing, with an image sensor, images of a workspace as the robot moves within the workspace; identifying, with the processor, at least one characteristic of at least one object captured in the images of the workspace; determining, with the processor, an object type of the at least one object based on characteristics of different types of objects stored in an object dictionary, wherein possible object types comprise a type of clothing, a cord, a type of pet bodily waste, and a shoe; and instructing, with the processor, the robot to execute at least one action based on the object type of the at least one object.
    Type: Grant
    Filed: May 22, 2023
    Date of Patent: April 23, 2024
    Assignee: AI Incorporated
    Inventors: Ali Ebrahimi Afrouzi, Soroush Mehrnia
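A minimal sketch of the object-dictionary lookup implied by the abstract above: a detected object type selects a robot action. The dictionary entries, the classifier stub, and the actions are invented examples, not the patented behavior.

```python
# Illustrative sketch: map a detected object type to a robot action using an
# object dictionary; the classifier is a stub, object types follow the abstract.
OBJECT_DICTIONARY = {
    "sock":       {"type": "clothing",         "action": "avoid_and_notify"},
    "power_cord": {"type": "cord",             "action": "avoid"},
    "pet_waste":  {"type": "pet bodily waste", "action": "stop_and_alert"},
    "shoe":       {"type": "shoe",             "action": "avoid"},
}

def classify(characteristics: dict) -> str:
    # Stand-in for the image-based classifier: match on a single characteristic.
    return characteristics.get("label", "unknown")

def act_on_detection(characteristics: dict) -> str:
    entry = OBJECT_DICTIONARY.get(classify(characteristics))
    return entry["action"] if entry else "continue"

print(act_on_detection({"label": "power_cord"}))   # -> "avoid"
```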
  • Patent number: 11961261
    Abstract: A scheme for modifying an image is disclosed, which includes receiving a source image having a first image configuration; determining a second image configuration for a target image; providing the received source image to an AI engine trained to identify, based on a set of rules related to visual features, candidate regions from the source image; generating regional proposal images based on the candidate regions, respectively; determining, based on prior aesthetic evaluation data, an aesthetic value of each regional proposal image; selecting, based on the determined aesthetic value of each regional proposal image, one of the regional proposal images as the target image; extracting, from the AI engine, the target image; and causing the target image to be displayed via a display of a user device.
    Type: Grant
    Filed: October 29, 2021
    Date of Patent: April 16, 2024
    Assignee: Microsoft Technology Licensing, LLC
    Inventors: Ji Li, Xiao Sun, Qi Dai, Han Hu
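As a rough illustration of the proposal-and-selection flow above, the sketch below enumerates candidate crops for a target aspect ratio and keeps the highest-scoring one. The aesthetic scorer is a random stand-in for the trained AI engine, and the candidate generation rule is an assumption.

```python
# Illustrative sketch: generate candidate regions for a target aspect ratio,
# score each with an "aesthetic" function, and keep the best-scoring crop.
import numpy as np

def candidate_regions(h, w, target_ratio, steps=5):
    ch = min(h, int(w / target_ratio))
    cw = int(ch * target_ratio)
    for y in np.linspace(0, h - ch, steps, dtype=int):
        for x in np.linspace(0, w - cw, steps, dtype=int):
            yield (int(y), int(y) + ch, int(x), int(x) + cw)

def aesthetic_score(image, box) -> float:
    return float(np.random.default_rng(hash(box) % 2**32).uniform())  # stand-in

def best_crop(image, target_ratio):
    h, w = image.shape[:2]
    boxes = list(candidate_regions(h, w, target_ratio))
    return max(boxes, key=lambda b: aesthetic_score(image, b))

img = np.zeros((480, 640, 3), dtype=np.uint8)
print(best_crop(img, target_ratio=1.0))
```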
  • Patent number: 11959861
    Abstract: An agricultural machine is disclosed that includes an NIR sensor configured to detect NIR spectra of plant material and output them as raw data, an evaluation unit configured to derive at least one parameter of the plant material in real time from the raw data, and an interface for data traffic with at least one data processing unit external to the agricultural machine, the interface being configured to transmit the raw data to the data processing unit.
    Type: Grant
    Filed: September 28, 2021
    Date of Patent: April 16, 2024
    Assignee: CLAAS Selbstfahrende Erntemaschinen GmbH
    Inventors: Björn Stremlau, Carsten Grove, Michael Roggenland, Frank Claussen, Maximilian von Nordheim, Jeremias Hagel, Jörg Wesselmann
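The division of labour described above (real-time on-machine evaluation plus transmission of the raw spectra to an external unit) might look roughly like the sketch below; the spectral index and the "interface" are assumptions, not the machine's actual implementation.

```python
# Illustrative sketch of the data flow: derive a plant-material parameter in
# real time from raw NIR spectra while forwarding the unmodified raw data.
import json
import numpy as np

def derive_parameter(spectrum: np.ndarray) -> float:
    # Stand-in evaluation: a simple band ratio as a crude moisture-like index.
    return float(spectrum[120:140].mean() / spectrum[40:60].mean())

def transmit_raw(spectrum: np.ndarray, endpoint: list) -> None:
    # Stand-in interface: append a JSON record instead of a real network link.
    endpoint.append(json.dumps({"raw": spectrum.round(4).tolist()}))

external_unit = []                                  # stands in for the remote unit
spectrum = np.random.default_rng(0).random(256)     # one raw NIR spectrum
print(derive_parameter(spectrum))                   # real-time on-machine parameter
transmit_raw(spectrum, external_unit)               # raw data sent off-machine
```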
  • Patent number: 11961281
    Abstract: Techniques for training a machine-learning model are described. In an example, a computer generates a first pseudo-label indicating a first mask associated with a first object detected by a first machine-learning model in a first training image. A transformed image of the first training image can be generated using a transformation. Based on the transformation, a second pseudo-label indicating a second mask detected in the transformed image and corresponding to the first mask can be determined. A second machine-learning model can be trained using the second pseudo-label. The trained second machine-learning model can then detect a third mask associated with a second object based on a second image.
    Type: Grant
    Filed: December 8, 2021
    Date of Patent: April 16, 2024
    Assignee: Amazon Technologies, Inc.
    Inventors: Taewan Kim, Jesse Norman Clark, Onkar Jayant Dabeer
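The pseudo-label bookkeeping described above can be sketched as: predict a mask with the first model, apply the same transformation to both image and mask, and train the second model on the transformed pair. The models below are stubs and the flip transformation is only an example, not the patent's specific choice.

```python
# Illustrative sketch: derive a second pseudo-label by applying the same
# transformation to the first model's mask; only the label bookkeeping is shown.
import numpy as np

def first_model_predict(image: np.ndarray) -> np.ndarray:
    # Stand-in for the first machine-learning model: threshold as a fake mask.
    return (image > image.mean()).astype(np.uint8)

def transform(array: np.ndarray) -> np.ndarray:
    return np.fliplr(array)            # example transformation: horizontal flip

def make_training_pair(image: np.ndarray):
    mask = first_model_predict(image)          # first pseudo-label
    return transform(image), transform(mask)   # transformed image + second pseudo-label

image = np.random.default_rng(0).random((64, 64))
x, y = make_training_pair(image)               # feed (x, y) to the second model
```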
  • Patent number: 11954878
    Abstract: A device to determine a height disparity between features of an image includes a memory including instructions and processing circuitry. The processing circuitry is configured by the instructions to obtain an image including a first repetitive feature and a second repetitive feature. The processing circuitry is further configured by the instructions to determine a distribution of pixels in a first area of the image, where the first area includes an occurrence of the repetitive features, and to determine a distribution of pixels in a second area of the image, where the second area includes another occurrence of the repetitive features. The processing circuitry is further configured by the instructions to evaluate the distribution of pixels in the first area and the distribution of pixels in the second area to determine a height difference between the first repetitive feature and the second repetitive feature.
    Type: Grant
    Filed: March 15, 2023
    Date of Patent: April 9, 2024
    Assignee: Raven Industries, Inc.
    Inventors: Yuri Sneyders, John D. Preheim
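One way to read the evaluation step above is as a comparison of the pixel distributions from the two areas to find their relative offset. The cross-correlation below is a generic stand-in for that comparison, not the patented evaluation.

```python
# Illustrative sketch: compare the pixel distributions of two image areas that
# each contain an occurrence of a repetitive feature and estimate the offset
# between them as a height-disparity proxy.
import numpy as np

def row_profile(area: np.ndarray) -> np.ndarray:
    return area.mean(axis=1)                     # distribution of pixels by row

def estimate_offset(area_a: np.ndarray, area_b: np.ndarray) -> int:
    a = row_profile(area_a) - row_profile(area_a).mean()
    b = row_profile(area_b) - row_profile(area_b).mean()
    corr = np.correlate(a, b, mode="full")
    return int(np.argmax(corr) - (len(b) - 1))   # shift with maximum correlation

rng = np.random.default_rng(0)
base = rng.random((100, 40))
shifted = np.roll(base, 7, axis=0)               # simulate a vertical offset of 7
print(estimate_offset(shifted, base))            # ~7
```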
  • Patent number: 11954175
    Abstract: Disclosed herein is an improvement to prior-art feature pyramids for general object detection. The improvement inserts a simple norm calibration (NC) operation between the feature pyramid and the detection head to alleviate and balance the norm bias caused by the feature pyramid network (FPN), and leverages an enhanced multi-feature selective strategy (MS) during training to assign the ground truth to one or more levels of the feature pyramid.
    Type: Grant
    Filed: July 28, 2021
    Date of Patent: April 9, 2024
    Assignee: Carnegie Mellon University
    Inventors: Fangyi Chen, Chenchen Zhu, Zhiqiang Shen, Han Zhang, Marios Savvides
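The norm-calibration idea above can be illustrated by rescaling each pyramid level so its average feature norm matches a shared reference before the detection head. The calibration rule and shapes below are assumptions, not the paper's exact formulation.

```python
# Illustrative sketch of norm calibration across feature-pyramid levels.
import numpy as np

def norm_calibrate(pyramid: list) -> list:
    # Each level: (channels, height, width); per-location feature norm over channels.
    mean_norms = [np.linalg.norm(level, axis=0).mean() for level in pyramid]
    reference = float(np.mean(mean_norms))
    return [level * (reference / m) for level, m in zip(pyramid, mean_norms)]

rng = np.random.default_rng(0)
fpn = [rng.normal(scale=s, size=(256, 32 // 2**i, 32 // 2**i))
       for i, s in enumerate([1.0, 2.0, 4.0])]         # levels with biased norms
calibrated = norm_calibrate(fpn)
print([round(float(np.linalg.norm(l, axis=0).mean()), 2) for l in calibrated])
```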
  • Patent number: 11948479
    Abstract: Provided herein are methods, systems and computer program products for detecting tampering, comprising a sealing process and a seal verification process. The sealing process comprises analyzing a seal applied to seal an object as a tamper evident element, recording one or more manufacturing defects of the seal identified based on the analysis, each of the one or more manufacturing defects comprising one or more non-reproducible deviations from the seal generation instructions used to produce the seal, and generating a signature comprising the one or more manufacturing defects. The seal verification process comprises obtaining the signature, analyzing the seal sealing the object, and determining whether the object has been tampered with based on a comparison between the analyzed seal and the signature.
    Type: Grant
    Filed: March 18, 2020
    Date of Patent: April 2, 2024
    Assignee: NEC Corporation Of America
    Inventor: Tsvi Lev
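A minimal sketch of the sign-then-verify flow above: record defect features as a signature, then check that each recorded defect is re-observed within a tolerance. Representing a defect as an (x, y, size) tuple is an assumption made purely for illustration.

```python
# Illustrative sketch: store the seal's non-reproducible manufacturing defects
# as a signature, then verify each recorded defect is re-observed within tolerance.
import numpy as np

def make_signature(defects: list) -> np.ndarray:
    return np.array(defects, dtype=float)            # one row per defect

def verify(signature: np.ndarray, observed: list, tol: float = 2.0) -> bool:
    obs = np.array(observed, dtype=float)
    for defect in signature:
        distances = np.linalg.norm(obs - defect, axis=1)
        if distances.min() > tol:                    # recorded defect not found again
            return False                             # -> possible tampering
    return True

sig = make_signature([(12, 40, 1.5), (80, 22, 0.8)])
print(verify(sig, [(12.4, 39.7, 1.6), (79.8, 22.1, 0.9)]))   # True: seal intact
print(verify(sig, [(50.0, 50.0, 1.0)]))                      # False: seal differs
```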
  • Patent number: 11941172
    Abstract: A method for training an eye tracking model is disclosed, as well as a corresponding system and storage medium. The eye tracking model is adapted to predict eye tracking data based on sensor data from a first eye tracking sensor. The method comprises receiving sensor data obtained by the first eye tracking sensor at a time instance and receiving reference eye tracking data for the time instance generated by an eye tracking system comprising a second eye tracking sensor. The reference eye tracking data is generated by the eye tracking system based on sensor data obtained by the second eye tracking sensor at the time instance. The method comprises training the eye tracking model based on the sensor data obtained by the first eye tracking sensor at the time instance and the generated reference eye tracking data.
    Type: Grant
    Filed: July 6, 2022
    Date of Patent: March 26, 2024
    Assignee: Tobii AB
    Inventors: Carl Asplund, Patrik Barkman, Anders Dahl, Oscar Danielsson, Tommaso Martini, Mårten Nilsson
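The training setup above reduces to fitting a model on pairs of first-sensor data and reference gaze produced by the second, already-calibrated system at the same time instance. The ridge regressor below is a stand-in for the eye tracking model; all data are synthetic.

```python
# Illustrative sketch: pair first-sensor data with reference gaze from a second
# eye-tracking system at the same time instance, then fit a model on the pairs.
import numpy as np
from sklearn.linear_model import Ridge

rng = np.random.default_rng(0)
sensor1_features = rng.normal(size=(500, 16))      # first sensor, per time instance
true_map = rng.normal(size=(16, 2))
reference_gaze = sensor1_features @ true_map       # reference from the second system

model = Ridge(alpha=1.0).fit(sensor1_features, reference_gaze)   # train on pairs
print(model.predict(sensor1_features[:1]))         # predicted (x, y) gaze point
```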
  • Patent number: 11934771
    Abstract: The present invention concerns a method for transforming an unstructured set of data representing a standardized form into a structured set of data. The method comprises a processing phase with the steps of determining a plurality of data blocks in the unstructured set of data using learning parameters determined from a plurality of samples, each data block corresponding to a visual pattern on the standardized form and being categorized into a known class; processing the data in each data block; and forming a structured set of data from the processed data of each data block, according to the class of that block.
    Type: Grant
    Filed: September 11, 2020
    Date of Patent: March 19, 2024
    Assignee: Ivalua SAS
    Inventors: Aurélien Coquard, Christopher Bourez
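The class-driven assembly step above might be sketched as follows, with block detection and text extraction stubbed out; the field classes and the per-class processing are invented examples, not Ivalua's implementation.

```python
# Illustrative sketch: turn detected, classified blocks of a standardized form
# into a structured record; only the class-driven assembly step is shown.
def process_block(block: dict) -> str:
    return block["text"].strip()                 # stand-in for per-class processing

def to_structured(blocks: list) -> dict:
    structured = {}
    for block in blocks:                         # each block carries a known class
        structured[block["class"]] = process_block(block)
    return structured

detected_blocks = [
    {"class": "invoice_number", "text": " INV-2024-001 "},
    {"class": "supplier_name",  "text": "ACME GmbH"},
    {"class": "total_amount",   "text": "1 234,50 EUR"},
]
print(to_structured(detected_blocks))
```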
  • Patent number: 11933898
    Abstract: A system and method of adjusting a field of view in an imaging system include transmitting light across a transmission optical path and defining a field of view encompassing both uniform and spatially tenuous target objects within the transmitted light. A sensor within a return optical path of reflected light from at least a portion of one of the target objects allows a data processing computer to compile an image from a series of data outputs from the sensor. The image is analyzed to determine a region of interest within the image, and, by dynamically adjusting the light source, the computer changes the field of view of the light source such that the image includes higher resolution and/or signal intensity for the region of interest. The region of interest may include at least one spatially tenuous target object.
    Type: Grant
    Filed: December 30, 2022
    Date of Patent: March 19, 2024
    Assignee: University of South Florida
    Inventor: Dennis Karl Killinger
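As a rough illustration of the region-of-interest step above, the sketch below thresholds the compiled image, bounds the detected region, and derives a narrower field of view so the light source could be re-aimed there. The thresholding analysis and all parameters are assumptions.

```python
# Illustrative sketch: locate a region of interest in the compiled image and
# derive a narrower field of view centred on it for higher resolution/signal.
import numpy as np

def region_of_interest(image: np.ndarray, threshold: float):
    ys, xs = np.nonzero(image > threshold)          # pixels from a tenuous target
    if ys.size == 0:
        return None
    return (ys.min(), ys.max(), xs.min(), xs.max())

def narrowed_fov(full_fov_deg: float, image_shape, roi) -> float:
    y0, y1, x0, x1 = roi
    frac = max((y1 - y0) / image_shape[0], (x1 - x0) / image_shape[1])
    return full_fov_deg * max(frac, 0.05)           # never collapse to zero

img = np.zeros((200, 200))
img[80:95, 120:150] = 0.9                           # faint return from a target
roi = region_of_interest(img, threshold=0.5)
print(roi, narrowed_fov(30.0, img.shape, roi))      # re-aim light source here
```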
  • Patent number: 11927965
    Abstract: Provided is a method for operating a robot, including: capturing images of a workspace; capturing movement data indicative of movement of the robot; capturing LIDAR data as the robot performs work within the workspace; comparing at least one object from the captured images to objects in an object dictionary; identifying a class to which the at least one object belongs; generating a first iteration of a map of the workspace based on the LIDAR data; generating additional iterations of the map based on newly captured LIDAR data and newly captured movement data; actuating the robot to drive along a trajectory that follows along a planned path by providing pulses to one or more electric motors of wheels of the robot; and localizing the robot within an iteration of the map by estimating a position of the robot based on the movement data, slippage, and sensor errors.
    Type: Grant
    Filed: August 16, 2021
    Date of Patent: March 12, 2024
    Assignee: AI Incorporated
    Inventors: Ali Ebrahimi Afrouzi, Lukas Fath, Andrew Fitzgerald, Amin Ebrahimi Afrouzi, Brian Highfill
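The localization step above (estimating position from movement data, slippage, and sensor error) is sketched below as simple differential-drive dead reckoning with an assumed slippage factor; it is a stand-in, not the patented estimator, and all constants are invented.

```python
# Illustrative sketch: estimate robot pose from wheel encoder pulses, applying
# an assumed slippage factor, as a simplified stand-in for the localization step.
import math

PULSES_PER_METER = 2000.0
SLIPPAGE_FACTOR = 0.97          # assumed fraction of commanded motion realised
WHEEL_BASE = 0.25               # metres between the drive wheels

def update_pose(pose, left_pulses, right_pulses):
    x, y, heading = pose
    d_left = SLIPPAGE_FACTOR * left_pulses / PULSES_PER_METER
    d_right = SLIPPAGE_FACTOR * right_pulses / PULSES_PER_METER
    distance = (d_left + d_right) / 2.0
    heading += (d_right - d_left) / WHEEL_BASE          # differential-drive turn
    return (x + distance * math.cos(heading),
            y + distance * math.sin(heading),
            heading)

pose = (0.0, 0.0, 0.0)
for left, right in [(200, 200), (150, 250), (200, 200)]:   # encoder pulse counts
    pose = update_pose(pose, left, right)
print(pose)
```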
  • Patent number: 11928844
    Abstract: A three-dimensional data encoding method includes: encoding geometry information of each of three-dimensional points based on one of a first geometry information encoding method of encoding using octree division and a second geometry information encoding method of encoding without using octree division; and generating a bitstream including the geometry information encoded and a geometry information flag indicating whether the encoding was performed based on the first geometry information encoding method or the second geometry information encoding method. In the generating of the bitstream: when the encoding is performed based on the first geometry information encoding method, the bitstream including a parameter set used in octree division is generated; and when the encoding is performed based on the second geometry information encoding method, the bitstream not including the parameter set used for octree division is generated.
    Type: Grant
    Filed: September 7, 2021
    Date of Patent: March 12, 2024
    Assignee: PANASONIC INTELLECTUAL PROPERTY CORPORATION OF AMERICA
    Inventors: Noritaka Iguchi, Toshiyasu Sugio
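A minimal sketch of the conditional bitstream layout above: a one-byte flag selects the geometry encoding method, and the octree parameter set is written only when the octree-based method is used. Field sizes and the byte-level layout are assumptions, not the standardized syntax.

```python
# Illustrative sketch: geometry information flag plus a parameter set that is
# present only for the octree-based encoding method.
import struct

def build_bitstream(use_octree: bool, geometry_payload: bytes,
                    octree_params: bytes = b"") -> bytes:
    header = struct.pack("<B", 1 if use_octree else 0)   # geometry information flag
    body = octree_params if use_octree else b""          # parameter set only for octree
    return header + body + geometry_payload

def parse_flag(bitstream: bytes) -> bool:
    return struct.unpack_from("<B", bitstream, 0)[0] == 1

stream = build_bitstream(use_octree=True, geometry_payload=b"\x10\x20",
                         octree_params=b"\x08")          # e.g. an octree depth byte
print(parse_flag(stream))                                # True -> first method
```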
  • Patent number: 11922659
    Abstract: A coordinate calculation apparatus 10 includes: an image selection unit 11 configured to select, when a specific portion is designated in an object, two or more images including the specific portion from the images of the object; a three-dimensional coordinate calculation unit 12 configured to specify, for each of the selected images, the location of points corresponding to each other at the specific portion, and to calculate a three-dimensional coordinate of the specific portion by using the location of the point specified for each of the images and the camera matrix calculated in advance for each of the images; and a three-dimensional model display unit 13 configured to display, using the point cloud data of the object, a three-dimensional model of the object on a screen, and to display the designated specific portion on the three-dimensional model based on the calculated three-dimensional coordinate.
    Type: Grant
    Filed: November 27, 2019
    Date of Patent: March 5, 2024
    Assignee: NEC Solution Innovators, Ltd.
    Inventor: Yoshihiro Yamashita
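The three-dimensional coordinate calculation above, given corresponding points and precomputed camera matrices, can be illustrated with standard linear (DLT-style) two-view triangulation; this is a generic stand-in, not necessarily the apparatus's exact method.

```python
# Illustrative sketch: triangulate a 3-D coordinate from pixel locations in two
# images using each image's camera matrix (linear, DLT-style triangulation).
import numpy as np

def triangulate(P1: np.ndarray, P2: np.ndarray, pt1, pt2) -> np.ndarray:
    x1, y1 = pt1
    x2, y2 = pt2
    A = np.vstack([
        x1 * P1[2] - P1[0],
        y1 * P1[2] - P1[1],
        x2 * P2[2] - P2[0],
        y2 * P2[2] - P2[1],
    ])
    _, _, vt = np.linalg.svd(A)
    X = vt[-1]
    return X[:3] / X[3]                       # homogeneous -> 3-D coordinate

# Two simple camera matrices: identity pose and a 1-unit shift along x.
P1 = np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = np.hstack([np.eye(3), np.array([[-1.0], [0.0], [0.0]])])
point = np.array([0.5, 0.2, 4.0, 1.0])
uv1 = (P1 @ point)[:2] / (P1 @ point)[2]
uv2 = (P2 @ point)[:2] / (P2 @ point)[2]
print(triangulate(P1, P2, uv1, uv2))          # ~[0.5, 0.2, 4.0]
```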
  • Patent number: 11915512
    Abstract: A three-dimensional sensing system includes a plurality of scanners, each emitting a light signal to a scene to be sensed and receiving a reflected light signal from which depth information is obtained. Only one scanner transmits its light signal and receives the corresponding reflected light signal at a time.
    Type: Grant
    Filed: October 14, 2021
    Date of Patent: February 27, 2024
    Assignee: Himax Technologies Limited
    Inventors: Ching-Wen Wang, Cheng-Che Tsai, Ting-Sheng Hsu, Min-Chian Wu
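The one-scanner-at-a-time constraint above amounts to time multiplexing; the round-robin sketch below illustrates it with a stand-in scanner class rather than any actual hardware interface.

```python
# Illustrative sketch: a round-robin schedule so only one scanner emits and
# receives at a time; the Scanner class is a stand-in, not a hardware API.
import time

class Scanner:
    def __init__(self, name):
        self.name = name

    def scan(self):
        # Stand-in for one emit/receive exposure slot of a real scanner.
        time.sleep(0.001)
        return f"{self.name}: depth frame"

def round_robin(scanners, cycles=2):
    frames = []
    for _ in range(cycles):
        for scanner in scanners:             # exactly one scanner active at a time
            frames.append(scanner.scan())
    return frames

print(round_robin([Scanner("A"), Scanner("B"), Scanner("C")]))
```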
  • Patent number: 11915292
    Abstract: An apparatus, a system, and a method for providing a customized clothing recommendation service are provided. The method, performed by a neckline recommendation service providing server to recommend clothing products with a neckline suited to a user, comprises: constructing a database storing clothing product information that matches neckline type information to clothing products; receiving an image of the user's upper body; detecting a face from the received upper-body image and determining a face type of the detected face; determining neckline type information matching the determined face type and detecting one or more clothing products matching the determined neckline type information; and providing the detected clothing products to the user.
    Type: Grant
    Filed: November 19, 2020
    Date of Patent: February 27, 2024
    Assignee: NHN CLOUD CORPORATION
    Inventor: Ohk Yeob Heo
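A minimal sketch of the matching step above: a determined face type selects allowed neckline types, which then filter the product database. The face types, the neckline mapping, and the products are invented examples, not the patent's actual data.

```python
# Illustrative sketch: face type -> matching neckline types -> filtered products.
FACE_TO_NECKLINES = {
    "round":  ["v-neck", "scoop"],
    "oval":   ["crew", "boat"],
    "square": ["v-neck", "cowl"],
}

PRODUCTS = [
    {"id": 1, "name": "Basic tee",    "neckline": "crew"},
    {"id": 2, "name": "Summer top",   "neckline": "v-neck"},
    {"id": 3, "name": "Knit sweater", "neckline": "cowl"},
]

def recommend(face_type: str) -> list:
    allowed = set(FACE_TO_NECKLINES.get(face_type, []))
    return [p for p in PRODUCTS if p["neckline"] in allowed]

print(recommend("square"))   # products with v-neck or cowl necklines
```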
  • Patent number: 11915444
    Abstract: Methods and systems for image analysis are provided, and in particular for identifying a set of base-calling locations in a flow cell for DNA sequencing. These include capturing flow cell images after each sequencing step performed on the flow cell, and identifying candidate cluster centers in at least one of the flow cell images. Intensities are determined for each candidate cluster center in a set of flow cell images. Purities are determined for each candidate cluster center based on the intensities. Each candidate cluster center with a purity greater than the purity of the surrounding candidate cluster centers within a distance threshold is added to a template set of base-calling locations.
    Type: Grant
    Filed: June 30, 2022
    Date of Patent: February 27, 2024
    Assignee: Element Biosciences, Inc.
    Inventors: Chunhong Zhou, Semyon Kruglyak, Francisco Garcia, Minghao Guo, Haosen Wang, Ryan Kelly
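The selection rule above reads as a purity-based non-maximum suppression over candidate cluster centers; the sketch below applies that rule to synthetic positions and purities as a stand-in for the image-derived values.

```python
# Illustrative sketch: keep a candidate cluster center only if its purity exceeds
# that of every other candidate within a distance threshold.
import numpy as np

def select_centers(positions: np.ndarray, purities: np.ndarray, radius: float):
    keep = []
    for i, (pos, purity) in enumerate(zip(positions, purities)):
        dists = np.linalg.norm(positions - pos, axis=1)
        neighbors = (dists < radius) & (dists > 0)
        if not np.any(purities[neighbors] >= purity):   # locally purest candidate
            keep.append(i)
    return keep

rng = np.random.default_rng(0)
pos = rng.uniform(0, 100, size=(50, 2))        # candidate cluster centers (x, y)
purity = rng.uniform(0.5, 1.0, size=50)        # purity from per-step intensities
print(select_centers(pos, purity, radius=10.0))
```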
  • Patent number: 11915421
    Abstract: Implementations are described herein for auditing performance of large-scale tasks. In various implementations, one or more ground-level vision sensors may capture a first set of one or more images that depict an agricultural plot prior to an agricultural task being performed in the agricultural plot, and a second set of one or more images that depict the agricultural plot subsequent to the agricultural task being performed in the agricultural plot. The first and second sets of images may be processed in situ using edge computing device(s) based on a machine learning model to generate respective pluralities of pre-task and post-task inferences about the agricultural plot. Performance of the agricultural task may include comparing the pre-task inferences to the post-task inferences to generate operational metric(s) about the performance of the agricultural task in the agricultural plot. The operational metric(s) may be presented at one or more output devices.
    Type: Grant
    Filed: September 7, 2021
    Date of Patent: February 27, 2024
    Assignee: MINERAL EARTH SCIENCES LLC
    Inventors: Zhiqiang Yuan, Elliott Grant
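The pre-task/post-task comparison above can be sketched as a per-plot metric computed over model inferences; the weed-count metric below is an assumed example, not the patent's specific operational metric.

```python
# Illustrative sketch: compare per-plot inferences made before and after an
# agricultural task to produce an operational metric (inferences are stubbed).
def operational_metrics(pre_inferences: dict, post_inferences: dict) -> dict:
    metrics = {}
    for plot_id, pre in pre_inferences.items():
        post = post_inferences.get(plot_id, {})
        removed = pre.get("weed_count", 0) - post.get("weed_count", 0)
        metrics[plot_id] = {
            "weeds_removed": removed,
            "removal_rate": removed / max(pre.get("weed_count", 1), 1),
        }
    return metrics

pre = {"plot_7": {"weed_count": 120}}
post = {"plot_7": {"weed_count": 18}}
print(operational_metrics(pre, post))   # {'plot_7': {'weeds_removed': 102, 'removal_rate': 0.85}}
```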