Patents Examined by Dhaval V Patel
  • Patent number: 11386568
    Abstract: A method for determining the quality of a surface in the surroundings of a transportation vehicle, wherein three-dimensional surface coordinates of the surface are generated using a sensor assembly. In the method, an approximation of the course of the curvature of the surface in at least one direction is obtained from the surface coordinates, and the surface coordinates are classified to characterize the quality of the surface using the course of the curvature and/or the vertical distances of the curvature approximation from the three-dimensional surface coordinates. A device for carrying out the method is also disclosed (see the sketch after this entry).
    Type: Grant
    Filed: June 28, 2018
    Date of Patent: July 12, 2022
    Inventors: Dominik Maximilian Martin Vock, Marc-Michael Meinecke, Fabian Warnecke
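    A minimal sketch of the classification idea above, assuming a 1-D height profile sampled along one direction and a quadratic fit standing in for the approximation of the course of the curvature; the function name and the 0.02 m threshold are illustrative assumptions, not values from the patent:
```python
import numpy as np

def classify_surface(x, z, threshold=0.02):
    """Classify surface points as smooth or rough.

    x: positions along one direction (m); z: measured heights (m).
    A quadratic fit stands in for the course of the curvature; points whose
    vertical distance from the fit exceeds `threshold` are flagged as rough.
    """
    coeffs = np.polyfit(x, z, deg=2)        # curvature approximation (illustrative)
    z_fit = np.polyval(coeffs, x)
    residuals = np.abs(z - z_fit)           # vertical distances from the approximation
    curvature = 2.0 * coeffs[0]             # constant curvature term of the quadratic
    labels = np.where(residuals > threshold, "rough", "smooth")
    return labels, curvature

# Mostly flat profile with one bump around x = 5 m
x = np.linspace(0.0, 10.0, 50)
z = 0.001 * x**2 + np.where(np.abs(x - 5.0) < 0.3, 0.05, 0.0)
labels, curvature = classify_surface(x, z)
print(f"curvature ~ {curvature:.4f} 1/m, rough points: {int(np.sum(labels == 'rough'))}")
```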
  • Patent number: 11386705
    Abstract: According to one embodiment, a feature amount management apparatus includes a data generation unit, an ID generation unit, a storage unit, and a deletion unit. The data generation unit generates, from an image, feature amount data indicating a feature amount of biometric information of a person. The ID generation unit generates identification information including expiration date information used for determining an expiration date of the feature amount data. The storage unit stores the feature amount data in correlation with the identification information. The deletion unit deletes the feature amount data when the feature amount data passes its expiration date (see the sketch after this entry).
    Type: Grant
    Filed: June 24, 2020
    Date of Patent: July 12, 2022
    Assignee: TOSHIBA TEC KABUSHIKI KAISHA
    Inventor: Atsushi Okamura
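    A minimal sketch of the idea above, assuming the expiration date is embedded as a date prefix in the generated ID; the class and method names are hypothetical, not from the patent or any library:
```python
import uuid
from datetime import datetime, timedelta

class FeatureStore:
    """Hypothetical store for feature amount data keyed by an ID that embeds an expiration date."""

    def __init__(self):
        self._records = {}                      # ID -> feature amount data

    def add(self, feature, days_valid=30):
        # The generated ID carries the expiration date, so no extra metadata is needed.
        expires = (datetime.utcnow() + timedelta(days=days_valid)).strftime("%Y%m%d")
        record_id = f"{expires}-{uuid.uuid4().hex}"
        self._records[record_id] = feature
        return record_id

    def purge_expired(self, now=None):
        today = (now or datetime.utcnow()).strftime("%Y%m%d")
        expired = [rid for rid in self._records if rid.split("-", 1)[0] < today]
        for rid in expired:
            del self._records[rid]              # delete data past its expiration date
        return len(expired)

store = FeatureStore()
store.add([0.12, 0.87, 0.33], days_valid=0)
print(store.purge_expired(now=datetime.utcnow() + timedelta(days=1)))   # -> 1
```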
  • Patent number: 11386676
    Abstract: A passenger state analysis method and apparatus, a vehicle, an electronic device and a storage medium are provided. The method includes: obtaining a video stream of a rear seat area in a vehicle; performing face and/or body detection on at least one image frame in the video stream; determining state information of a passenger in the rear seat area according to a face and/or body detection result; and, in response to the state information of the passenger satisfying a predetermined condition, outputting prompt information to a driver area or a specified device in the vehicle (see the sketch after this entry).
    Type: Grant
    Filed: May 26, 2020
    Date of Patent: July 12, 2022
    Assignee: SHANGHAI SENSETIME INTELLIGENT TECHNOLOGY CO., LTD
    Inventors: Chengming Yi, Guanhua Liang, Yang Wei, Renbo Qin, Chendi Yu
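    A minimal sketch of the monitoring loop described above; the detection and state functions are hypothetical placeholders standing in for real face/body detection and state estimation, and the "passenger not detected" condition is only an illustrative example:
```python
# Hypothetical placeholders: a real system would use trained face/body detectors
# and a learned state classifier here.
def detect_faces_and_bodies(frame):
    return []                                   # list of detections in the rear-seat area

def infer_passenger_state(detections):
    return "not_detected" if not detections else "normal"

def monitor_rear_seat(frames, notify):
    """Analyse rear-seat frames and notify the driver area when a condition is met."""
    for frame in frames:
        state = infer_passenger_state(detect_faces_and_bodies(frame))
        # Example predetermined condition (illustrative): passenger no longer detected.
        if state == "not_detected":
            notify("Check rear seat: passenger not detected")

monitor_rear_seat(frames=[object(), object()], notify=print)
```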
  • Patent number: 11380102
    Abstract: The method includes, for each of a plurality of successive images of a video stream from a camera, searching for at least one person present in the image and defining, for each person found, a zone in the image, known as a person zone, at least partially surrounding this person; for each of at least one person, grouping into one tracklet several person zones that come from successive images and at least partially surround this same person; and, for each tracklet, identifying the person in the tracklet from its person zones, determining a moment at which the line is crossed by the identified person from the person zones, and adding the name found and the determined moment of crossing to at least some of the images containing those person zones (see the sketch after this entry).
    Type: Grant
    Filed: July 21, 2020
    Date of Patent: July 5, 2022
    Assignee: BULL SAS
    Inventors: Rémi Druihle, Cécile Boukamel-Donnou, Benoit Pelletier
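    A minimal sketch of the line-crossing determination described above, assuming a tracklet is already available as a sequence of person-zone centres; the grouping of person zones into tracklets and the identification step are omitted:
```python
def side_of_line(point, a, b):
    """Signed cross product: which side of the line a->b the point lies on."""
    return (b[0] - a[0]) * (point[1] - a[1]) - (b[1] - a[1]) * (point[0] - a[0])

def crossing_moment(tracklet, line_a, line_b):
    """Frame index at which the tracklet of person-zone centres crosses the line, or None."""
    sides = [side_of_line(c, line_a, line_b) for c in tracklet]
    for i in range(1, len(sides)):
        if sides[i - 1] * sides[i] < 0:         # sign change => the line was crossed
            return i
    return None

# Person-zone centres over successive frames, crossing a vertical line at x = 5
tracklet = [(2.0, 1.0), (4.0, 1.1), (6.0, 1.2), (8.0, 1.3)]
print(crossing_moment(tracklet, line_a=(5.0, 0.0), line_b=(5.0, 10.0)))   # -> 2
```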
  • Patent number: 11373454
    Abstract: There is provided an information processing apparatus, comprising: a processor configured to: cause a display to display image data received from a camera; provide a user with a first prompt to place an object in a shooting area of the camera; analyze the received image data; provide the user with a second prompt, in response to the received image data satisfying a predetermined condition, to reduce a distance between the object and the information processing apparatus for near field communication; receive information read from an IC chip on the object by the near field communication, the information being (i) personal information or (ii) information used to access the personal information stored in a database; and perform a predetermined process using the personal information.
    Type: Grant
    Filed: February 26, 2020
    Date of Patent: June 28, 2022
    Assignee: FUJIFILM Business Innovation Corp.
    Inventor: Ikumi Kaede
  • Patent number: 11373519
    Abstract: Systems and methods to perform traffic signal management for operation of autonomous vehicles involve obtaining vehicle data from two or more vehicles at an intersection with one or more traffic lights. The vehicle data includes vehicle location, vehicle speed, and image information or images. A method includes determining at least one of three types of information about the one or more traffic lights based on the vehicle data. The three types of information include a location of the one or more traffic lights, a signal phase and timing (SPaT) of the one or more traffic lights, and a lane correspondence of the one or more traffic lights. The method also includes providing the at least one of the three types of information about the one or more traffic lights for the operation of autonomous vehicles.
    Type: Grant
    Filed: November 11, 2019
    Date of Patent: June 28, 2022
    Assignee: GM GLOBAL TECHNOLOGY OPERATIONS LLC
    Inventors: Donald K. Grimm, Fan Bai, Bo Yu, Vivek Vijaya Kumar
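    One possible reading of the aggregation described in the entry above, sketched under the assumption that each vehicle reports a light's observed position, signal phase, and timestamp; averaging positions and timing phase changes stand in for the patent's location and SPaT determination:
```python
from collections import defaultdict
from statistics import mean

def aggregate_light_reports(reports):
    """Combine (light_id, lat, lon, phase, timestamp) reports from many vehicles.

    Returns a rough position estimate per light and the observed duration of each
    phase, a simple stand-in for SPaT estimation.
    """
    positions = defaultdict(list)
    phase_changes = defaultdict(list)           # light_id -> [(timestamp, phase), ...]
    for light_id, lat, lon, phase, ts in sorted(reports, key=lambda r: r[4]):
        positions[light_id].append((lat, lon))
        if not phase_changes[light_id] or phase_changes[light_id][-1][1] != phase:
            phase_changes[light_id].append((ts, phase))

    result = {}
    for light_id, pos in positions.items():
        changes = phase_changes[light_id]
        durations = [(changes[i][1], changes[i + 1][0] - changes[i][0])
                     for i in range(len(changes) - 1)]
        result[light_id] = {
            "position": (mean(p[0] for p in pos), mean(p[1] for p in pos)),
            "phase_durations": durations,
        }
    return result

reports = [("L1", 42.0001, -83.0002, "red", 0.0), ("L1", 42.0003, -83.0001, "red", 10.0),
           ("L1", 42.0002, -83.0003, "green", 30.0)]
print(aggregate_light_reports(reports))
```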
  • Patent number: 11373389
    Abstract: Image processing techniques are described to select and crop a region of interest from an image obtained from a camera located on or in a vehicle, such as an autonomous semi-trailer truck. The region of interest can be identified by selecting one or more reference points and determining one or more positions of the one or more reference points on the image obtained from the camera. As an example, the locations of two reference points may be 500 meters and 1000 meters in front of the location of the autonomous vehicle, where the front of the autonomous vehicle is the area toward which the autonomous vehicle is being driven (see the sketch after this entry).
    Type: Grant
    Filed: June 23, 2020
    Date of Patent: June 28, 2022
    Assignee: TUSIMPLE, INC.
    Inventors: Lingting Ge, Siyuan Liu, Chenzhe Qian, Yijie Wang, Zehua Huang, Xiaodi Hou
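    A minimal sketch of deriving a crop from reference points 500 m and 1000 m ahead, assuming a simple pinhole camera model; the intrinsics and camera height used here are illustrative assumptions, not values from the patent:
```python
import numpy as np

def project_point(point_cam, fx, fy, cx, cy):
    """Pinhole projection of a 3-D point in camera coordinates (x right, y down, z forward)."""
    x, y, z = point_cam
    return fx * x / z + cx, fy * y / z + cy

def roi_from_reference_distances(fx=1000.0, fy=1000.0, cx=960.0, cy=540.0,
                                 cam_height=1.8, distances=(500.0, 1000.0),
                                 image_width=1920):
    """Crop box bounded by the image rows of road points at the given forward distances."""
    rows = [project_point((0.0, cam_height, d), fx, fy, cx, cy)[1] for d in distances]
    top, bottom = min(rows), max(rows)
    return (0, int(top), image_width, int(np.ceil(bottom)))    # x0, y0, x1, y1

print(roi_from_reference_distances())
```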
  • Patent number: 11366986
    Abstract: A vehicle includes a vehicle body having a camera and at least one ego part connection. An ego part is connected to the vehicle body via the ego part connection. A collision detection system is communicatively coupled to the camera and is configured to receive a video feed from the camera. The collision detection system is configured to identify an exclusion region of each frame corresponding to the ego part, perform object detection on the remainder of each frame, and generate a collision detection warning in response to an object being detected (see the sketch after this entry).
    Type: Grant
    Filed: March 6, 2020
    Date of Patent: June 21, 2022
    Assignee: Orlaco Products, B.V.
    Inventors: Milan Gavrilovic, Andreas Nylund, Pontus Olsson
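    A minimal sketch of the exclusion-region idea above, assuming detections and the ego-part region are axis-aligned boxes; the coordinates are purely illustrative:
```python
def overlaps(box_a, box_b):
    """Axis-aligned overlap test for boxes given as (x0, y0, x1, y1)."""
    return not (box_a[2] <= box_b[0] or box_b[2] <= box_a[0] or
                box_a[3] <= box_b[1] or box_b[3] <= box_a[1])

def filter_detections(detections, exclusion_region):
    """Drop detections that fall on the ego-part exclusion region; warn on the rest."""
    kept = [d for d in detections if not overlaps(d, exclusion_region)]
    if kept:
        print(f"collision warning: {len(kept)} object(s) detected")
    return kept

# Exclusion region covering a mirror arm in the lower-left corner of the frame.
exclusion = (0, 800, 400, 1080)
detections = [(50, 850, 200, 1000),     # inside the exclusion region -> ignored
              (900, 400, 1100, 700)]    # genuine object -> triggers the warning
print(filter_detections(detections, exclusion))
```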
  • Patent number: 11367218
    Abstract: The aim is to detect a discrimination error in the type of an object. A calculation system includes a first device and a second device. The first device includes: a first object map generation unit configured to calculate, using first image information that is image information acquired by the first device, a first object map indicating a type of an object and a position of the object; and a first communication unit configured to transmit the first object map to the second device. The second device includes: a second object map generation unit configured to calculate, using second image information that is image information acquired by the second device, a second object map indicating a type of an object and a position of the object; and a comparison unit configured to compare the first object map and the second object map (see the sketch after this entry).
    Type: Grant
    Filed: November 24, 2020
    Date of Patent: June 21, 2022
    Assignee: HITACHI, LTD.
    Inventors: Hiroaki Itsuji, Takumi Uezono, Tadanobu Toba, Kenichi Shimbo, Yutaka Uematsu
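    A minimal sketch of the comparison step above, assuming each object map is a list of (x, y, type) entries; nearest-neighbour matching and the 2 m gate are illustrative assumptions:
```python
import math

def compare_object_maps(map_a, map_b, max_dist=2.0):
    """Flag type disagreements between two object maps.

    Objects are matched to the nearest counterpart within `max_dist`; a matched
    pair with differing types is reported as a possible discrimination error.
    """
    errors = []
    for xa, ya, ta in map_a:
        best = min(map_b, key=lambda o: math.hypot(o[0] - xa, o[1] - ya), default=None)
        if best and math.hypot(best[0] - xa, best[1] - ya) <= max_dist and best[2] != ta:
            errors.append(((xa, ya), ta, best[2]))
    return errors

map_a = [(10.0, 5.0, "pedestrian"), (20.0, 3.0, "car")]
map_b = [(10.3, 5.1, "pole"), (20.1, 2.9, "car")]
print(compare_object_maps(map_a, map_b))    # -> one mismatch at (10.0, 5.0)
```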
  • Patent number: 11366981
    Abstract: Providing localization data includes obtaining a first image of a scene associated with a first condition, determining one or more target conditions, and applying an appearance transfer network to the first image to obtain one or more synthesized images comprising the scene, wherein the scene is associated with the one or more target conditions in the synthesized images. A first patch is selected from the first image, wherein the first patch comprises a keypoint, and an image location is determined for the first patch. One or more additional patches can then be obtained using the synthesized images and the image location. A descriptor network may be trained to provide localization data based on the first patch and the one or more additional patches (see the sketch after this entry).
    Type: Grant
    Filed: December 3, 2020
    Date of Patent: June 21, 2022
    Assignee: Apple Inc.
    Inventor: Lina M. Paz-Perez
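    A minimal sketch of the patch-collection step above, assuming the appearance transfer network has already produced the condition-transferred images (random arrays stand in for them here); training the descriptor network is not shown:
```python
import numpy as np

def extract_patch(image, center, size=32):
    """Extract a square patch around (row, col), clipped to the image bounds."""
    r, c = center
    half = size // 2
    r0, c0 = max(r - half, 0), max(c - half, 0)
    return image[r0:r0 + size, c0:c0 + size]

# The first patch comes from the original image; additional patches reuse the same
# image location in each synthesized image (e.g. night, rain, snow).
original = np.random.rand(480, 640)
synthesized = [np.random.rand(480, 640) for _ in range(3)]
keypoint = (120, 300)
patches = [extract_patch(original, keypoint)] + [extract_patch(s, keypoint) for s in synthesized]
print(len(patches), patches[0].shape)
```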
  • Patent number: 11363909
    Abstract: A processor of a sensor device receives a plurality of images capturing a scene that depicts at least a portion of a conveyor entering a treatment area of a food processing system. The processor processes one or more images, among the plurality of images, to detect one or more characteristics in the scene. Processing the one or more images includes detecting presence or absence of a product on the at least the portion of the conveyor depicted in the scene, and classifying the scene as having one or more characteristics among a predetermined set of characteristics. The sensor device provides characteristics information indicating the one or more characteristics detected in the scene to a controller. The characteristics information is to be used by the controller to control operation of one or both of the conveyor and the treatment area of the food processing system (see the sketch after this entry).
    Type: Grant
    Filed: April 15, 2020
    Date of Patent: June 21, 2022
    Assignee: Air Products and Chemicals, Inc.
    Inventors: Reed Jacob Hendershot, Avishek Guha, Shawn Haupt, Ankit Naik, Michael Robert Himes, Erdem Arslan
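    A minimal sketch of the presence/absence detection above, assuming a fixed camera and an empty-belt reference image; the difference threshold and region are illustrative assumptions:
```python
import numpy as np

def classify_conveyor_scene(frame, empty_reference, region, diff_threshold=25.0):
    """Classify a conveyor scene as product present or empty belt.

    frame, empty_reference: greyscale images as 2-D arrays
    region: (row0, row1, col0, col1) covering the part of the conveyor in view
    """
    r0, r1, c0, c1 = region
    diff = np.abs(frame[r0:r1, c0:c1].astype(float) -
                  empty_reference[r0:r1, c0:c1].astype(float))
    present = diff.mean() > diff_threshold
    return {"product_present": bool(present), "mean_difference": float(diff.mean())}

empty = np.full((240, 320), 60, dtype=np.uint8)
loaded = empty.copy()
loaded[100:140, 120:200] = 200              # a product sitting on the belt
print(classify_conveyor_scene(loaded, empty, region=(80, 160, 100, 220)))
```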
  • Patent number: 11361545
    Abstract: A monitoring device and an operation method thereof are provided to detect whether an object of interest appears in a video stream. The monitoring device includes a motion calculation circuit, a motion region determination circuit and a computing engine. The motion calculation circuit performs motion calculation on a current frame in the video stream to generate a motion map. The motion region determination circuit determines a motion region in the current frame according to the motion map and notifies the computing engine of the motion region in the current frame. The computing engine performs object-of-interest detection on the motion region in the current frame of the video stream to generate a detection result. The motion region determination circuit determines whether to ignore the motion region in a subsequent frame after the current frame according to the detection result (see the sketch after this entry).
    Type: Grant
    Filed: November 20, 2020
    Date of Patent: June 14, 2022
    Assignee: HIMAX TECHNOLOGIES LIMITED
    Inventors: Chin-Kuei Hsu, Ti-Wen Tang
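    A minimal sketch of the motion map and motion region computation above, using absolute frame differencing; the threshold is illustrative, and the object-of-interest detector itself is not shown:
```python
import numpy as np

def compute_motion_region(prev_frame, curr_frame, threshold=20):
    """Return a motion map and the bounding box of moving pixels, or None if static."""
    motion_map = np.abs(curr_frame.astype(int) - prev_frame.astype(int)) > threshold
    rows, cols = np.nonzero(motion_map)
    if rows.size == 0:
        return motion_map, None
    return motion_map, (rows.min(), cols.min(), rows.max(), cols.max())

prev = np.zeros((120, 160), dtype=np.uint8)
curr = prev.copy()
curr[40:60, 70:90] = 255                    # something moved here
motion_map, region = compute_motion_region(prev, curr)
print(region)                               # -> (40, 70, 59, 89)
# An object-of-interest detector would now run only on `region`; if nothing of
# interest is found, the same region can be ignored in subsequent frames.
```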
  • Patent number: 11354904
    Abstract: Techniques for generating a grounded video description (GVD) for a video input are provided. A Hierarchical Attention-based Spatial-Temporal Graph-to-Sequence Learning framework for producing a GVD is provided by generating an initial graph representing a plurality of object features in a plurality of frames of a received video input and generating an implicit graph for the plurality of object features in the plurality of frames using a similarity function. The initial graph and the implicit graph are combined to form a refined graph, and the refined graph is processed using attention processes to generate an attended hierarchical graph of the plurality of object features for the plurality of frames. The grounded video description is generated for the received video input using at least the hierarchical graph of the plurality of object features (see the sketch after this entry).
    Type: Grant
    Filed: July 10, 2020
    Date of Patent: June 7, 2022
    Assignee: International Business Machines Corporation
    Inventors: Lingfei Wu, Liana Fong
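    A minimal sketch of the implicit-graph step above, using cosine similarity between object features and an element-wise maximum as one possible way to merge the initial and implicit graphs; the similarity threshold is an illustrative assumption:
```python
import numpy as np

def implicit_graph(features, threshold=0.8):
    """Adjacency matrix linking object features whose cosine similarity exceeds a threshold."""
    normed = features / np.linalg.norm(features, axis=1, keepdims=True)
    similarity = normed @ normed.T
    adjacency = (similarity > threshold).astype(float)
    np.fill_diagonal(adjacency, 0.0)
    return adjacency

def refine_graph(initial, implicit):
    """Combine the initial and implicit graphs (here: element-wise maximum)."""
    return np.maximum(initial, implicit)

features = np.random.rand(6, 16)                # 6 object features across frames
initial = np.eye(6, k=1) + np.eye(6, k=-1)      # e.g. temporal links between neighbours
print(refine_graph(initial, implicit_graph(features)))
```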
  • Patent number: 11354903
    Abstract: Techniques related to training and implementing a bidirectional pairing architecture for object detection are discussed. Such techniques include generating a first enhanced feature map for each frame of a video sequence by processing the frames in a first direction, generating a second enhanced feature map for each frame by processing the frames in a second direction opposite the first, and determining object detection information for each frame using the first and second enhanced feature maps for that frame (see the sketch after this entry).
    Type: Grant
    Filed: December 18, 2018
    Date of Patent: June 7, 2022
    Assignee: Intel Corporation
    Inventors: Yan Hao, Zhi Yong Zhu, Lu Li, Ciyong Chen, Kun Yu
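    A minimal sketch of the bidirectional pairing above, using an exponential moving average run forward and then backward over the frames as a stand-in for the patent's feature enhancement; the blending weight is an illustrative assumption:
```python
import numpy as np

def directional_enhance(features, alpha=0.5):
    """Blend each frame's features with the previously enhanced frame in processing order."""
    enhanced = [features[0]]
    for f in features[1:]:
        enhanced.append(alpha * f + (1.0 - alpha) * enhanced[-1])
    return enhanced

def bidirectional_features(features):
    """Pair a forward-enhanced and a backward-enhanced feature map for every frame."""
    forward = directional_enhance(features)
    backward = directional_enhance(features[::-1])[::-1]
    return list(zip(forward, backward))

frames = [np.random.rand(8, 8) for _ in range(5)]
pairs = bidirectional_features(frames)
print(len(pairs), pairs[0][0].shape, pairs[0][1].shape)
```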
  • Patent number: 11348431
    Abstract: An in-vehicle monitoring device is configured to monitor an interior of a vehicle with reference to sensor information from a living body detection sensor. The in-vehicle monitoring device includes: a sensor information acquisition unit that acquires the sensor information from the living body detection sensor; a visual recognition information generation unit that generates visual recognition information indicating at least one of a detection range and a detection accuracy of the living body detection sensor with reference to the sensor information acquired by the sensor information acquisition unit; and a visual recognition information output unit that outputs the visual recognition information to a display device.
    Type: Grant
    Filed: March 11, 2020
    Date of Patent: May 31, 2022
    Assignee: NIDEC MOBILITY CORPORATION
    Inventor: Hideyuki Ohara
  • Patent number: 11348371
    Abstract: A person detection system of the present invention includes: a person extraction unit that extracts person information from image information; a group determination unit that extracts behavior information from the image information and determines group information; a first person identification unit that identifies a first person from the image information, based on the person information and the behavior information; a second person identification unit that identifies, from the image information, a second person belonging to the same group as the first person, based on the person information of the person identified as the first person and the group information; and a position identification unit that identifies a position of the first person and a position of the second person, based on position information of the security cameras that captured the image information from which the first person and the second person were identified, respectively.
    Type: Grant
    Filed: December 12, 2017
    Date of Patent: May 31, 2022
    Assignee: NEC CORPORATION
    Inventor: Ryuta Niino
  • Patent number: 11348242
    Abstract: A prediction apparatus includes a learning section that performs machine learning in which, with respect to a combination of different types of captured images obtained by imaging the same subject, one captured image is set as an input and another captured image is set as an output to generate a prediction model; and a controller that performs control for inputting a first image to the prediction model as an input captured image and outputting a predicted second image that is a captured image of a type different from that of the input captured image.
    Type: Grant
    Filed: April 1, 2019
    Date of Patent: May 31, 2022
    Assignee: FUJIFILM Corporation
    Inventor: Yoshiro Kitamura
  • Patent number: 11335012
    Abstract: An object tracking method includes generating a feature map of a search image and generating a feature map of a target image, obtaining an object classification result and a basic bounding box based on the feature map of the search image and the feature map of the target image, obtaining an auxiliary bounding box based on the feature map of the search image, obtaining a final bounding box based on the basic bounding box and the auxiliary bounding box, and tracking an object based on the object classification result and the final bounding box.
    Type: Grant
    Filed: April 12, 2021
    Date of Patent: May 17, 2022
    Assignee: Samsung Electronics Co., Ltd.
    Inventors: Seung Wook Kim, Hyunjeong Lee, Changbeom Park, Changyong Son, Seohyung Lee
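    A minimal sketch of combining the basic and auxiliary bounding boxes described in the entry above; a weighted coordinate average is one possible combination, and the weight is an illustrative assumption:
```python
def fuse_boxes(basic, auxiliary, weight=0.7):
    """Weighted average of two boxes given as (x0, y0, x1, y1); `weight` favours the basic box."""
    return tuple(weight * b + (1.0 - weight) * a for b, a in zip(basic, auxiliary))

basic = (100.0, 80.0, 220.0, 300.0)      # from the classification/regression branch
auxiliary = (110.0, 90.0, 230.0, 310.0)  # from the search-image feature map alone
print(fuse_boxes(basic, auxiliary))      # -> (103.0, 83.0, 223.0, 303.0)
```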
  • Patent number: 11336316
    Abstract: An apparatus comprising: a sampler for over-sampling an input signal to produce a sampled input signal; a delta-sigma modulator for modulating the sampled input signal to produce a modulated signal; and a filter for filtering the modulated signal, the filter comprising: a conductive patch and a ground plane separated by a dielectric wherein the ground plane comprises a band-gap periodic structure.
    Type: Grant
    Filed: February 20, 2020
    Date of Patent: May 17, 2022
    Assignee: Nokia Solutions and Networks Oy
    Inventor: Eric Wantiez
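    A minimal sketch of the over-sampling and delta-sigma modulation stages described in the entry above; a first-order modulator is used for illustration, and a simple moving average stands in for the patent's analog conductive-patch/ground-plane filter, which cannot be captured in software:
```python
import math

def delta_sigma_first_order(samples):
    """First-order delta-sigma modulation: integrate the input-minus-feedback error, quantize to +/-1."""
    integrator, feedback, bits = 0.0, 0.0, []
    for x in samples:
        integrator += x - feedback          # accumulated error (the "delta")
        bit = 1.0 if integrator >= 0 else -1.0
        bits.append(bit)
        feedback = bit                      # 1-bit feedback
    return bits

def moving_average(bits, window=8):
    """Crude digital stand-in for the output filter, purely for illustration."""
    return [sum(bits[max(0, i - window + 1): i + 1]) / min(window, i + 1)
            for i in range(len(bits))]

# Over-sampled sine input in [-1, 1]
oversampling = 64
samples = [0.5 * math.sin(2 * math.pi * i / (16 * oversampling)) for i in range(1024)]
filtered = moving_average(delta_sigma_first_order(samples))
print(filtered[:4])
```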
  • Patent number: 11325145
    Abstract: A system for determining spraying information used for spraying a 3D object using a spray tool is provided. The system includes a 3D image capturing device and a computing device. The 3D image capturing device is configured to capture a 3D image of the 3D object. The computing device is configured to determine a plurality of border data points of the 3D object based on the 3D image, to determine a plurality of inside points positioned on a surface of the 3D object within a range defined among the border data points according to a spray width with which the spray tool is to spray the 3D object, and to output the border data points and the inside points as the spraying information for spraying the 3D object.
    Type: Grant
    Filed: December 5, 2019
    Date of Patent: May 10, 2022
    Assignee: ORISOL TAIWAN LIMITED
    Inventors: Yu-Fong Yang, Yen-Te Lee, Ching-Wei Wu, Wei-Hsin Hsu
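    A minimal sketch of the inside-point generation described in the entry above, simplified to 2-D: grid points spaced by the spray width are kept if they fall inside the polygon formed by the border data points; the rectangular border is purely illustrative:
```python
def point_in_polygon(x, y, polygon):
    """Ray-casting test: is (x, y) inside the polygon given as a list of (x, y) vertices?"""
    inside = False
    n = len(polygon)
    for i in range(n):
        x1, y1 = polygon[i]
        x2, y2 = polygon[(i + 1) % n]
        if (y1 > y) != (y2 > y):
            x_cross = x1 + (y - y1) * (x2 - x1) / (y2 - y1)
            if x < x_cross:
                inside = not inside
    return inside

def spray_points(border, spray_width):
    """Grid of inside points spaced by the spray width within the border polygon (2-D simplification)."""
    xs = [p[0] for p in border]
    ys = [p[1] for p in border]
    points = []
    y = min(ys)
    while y <= max(ys):
        x = min(xs)
        while x <= max(xs):
            if point_in_polygon(x, y, border):
                points.append((x, y))
            x += spray_width
        y += spray_width
    return points

border = [(0.0, 0.0), (10.0, 0.0), (10.0, 6.0), (0.0, 6.0)]   # border data points
print(len(spray_points(border, spray_width=1.0)))
```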