Patents Examined by Amara Abdi
-
Patent number: 11960005
Abstract: An object-tracking method using a LiDAR sensor includes generating current shape information about a current tracking box at a current time from an associated segment box, using history shape information accumulated prior to the current time with respect to a target object that is being tracked, and updating information on a previous tracking box at a time prior to the current time, contained in the history shape information, using the current shape information and the history shape information and determining a previous tracking box having the updated information to be a final output box containing information on the shape of the target object.
Type: Grant
Filed: August 30, 2021
Date of Patent: April 16, 2024
Assignees: Hyundai Motor Company, Kia Corporation
Inventor: Hyun Ju Kim
-
Patent number: 11954930
Abstract: The present disclosure relates to advanced image signal processing technology including: i) rapid localization for machine-readable indicia including, e.g., 1-D and 2-D barcodes; and ii) barcode reading and decoders. One claim recites: an image processing method comprising: obtaining 2-dimensional (2D) image data representing a 1-dimensional (1D) barcode within a first image area; generating a plurality of scanlines across the first image area; for each of the plurality of scanlines, synchronizing the scanline, including decoding an initial set of numerical digits represented by the scanline, in which said synchronizing provides a scale estimate for the scanline; using a path decoder to decode remaining numerical digits within the scanline, the path decoder decoding multiple numerical digits in groups, in which the scale estimate is adapted as the remaining numerical digits are decoded; and providing decoded numerical digits as an identifier represented by the scanline.
Type: Grant
Filed: February 7, 2022
Date of Patent: April 9, 2024
Assignee: Digimarc Corporation
Inventors: Brett A. Bradley, Tomas Filler, Vojtech Holub
-
Patent number: 11944043
Abstract: Described herein are systems and methods for capturing images of a field and performing agricultural data analysis of the images. In one embodiment, a computer system for monitoring field operations includes a database for storing agricultural image data including images of at least one stage of crop development that are captured with at least one of an apparatus and a remote sensor moving through a field. The computer includes at least one processing unit that is coupled to the database. The at least one processing unit is configured to execute instructions to analyze the captured images, to determine relevant images that indicate a change in at least one condition of the crop development, and to generate a localized view map layer for viewing the field at the at least one stage of crop development based on at least the relevant captured images.
Type: Grant
Filed: November 30, 2021
Date of Patent: April 2, 2024
Assignee: Climate LLC
Inventors: Phil Baurer, Justin Koch, Doug Sauder, Brad Stoller
-
Patent number: 11948183
Abstract: A method of detecting a cart-based loss incident in a retail store includes decoding one or more video frames of a video stream to obtain one or more motion vectors therefrom, detecting motion of a shopping cart within a cash register lane bounded by pre-defined tracking start and end points based on the one or more motion vectors, tracking a location of the shopping cart till the shopping cart reaches the pre-defined tracking end point, dynamically classifying the shopping cart in one of a plurality of classification statuses based on recognition of one or more items present in the shopping cart till the shopping cart reaches the pre-defined tracking end point, and generating an alert signal when the shopping cart is classified in a pre-defined classification status from the plurality of classification statuses at an alert threshold point between the pre-defined tracking start and end points.
Type: Grant
Filed: October 11, 2021
Date of Patent: April 2, 2024
Assignee: Everseen Limited
Inventors: Milutin Cerovic, Djordje Nedeljkovic, Irena Veljanovic, Marko Milanovic
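The alert decision described in the abstract — fire when the cart's dynamically assigned status matches a pre-defined status at a threshold point between the tracking start and end points — can be sketched as follows. This is an illustrative reading only; the function, status names, and position convention are assumptions, not the patented implementation.

```python
def should_alert(track, alert_point, end_point, alert_status="unscanned"):
    """Decide whether to raise a cart-based loss alert.

    `track` is a list of (position, status) samples taken as the cart
    moves along the lane; position grows from the tracking start point
    (0.0) toward the end point. An alert fires if the cart carries the
    pre-defined alert status at or past the alert threshold point but
    before reaching the tracking end point.
    """
    for position, status in track:
        if alert_point <= position < end_point and status == alert_status:
            return True
    return False

# A cart classified "unscanned" at position 0.7, past the 0.6 threshold,
# triggers the alert before reaching the end of the lane at 1.0.
track = [(0.1, "empty"), (0.4, "unscanned"), (0.7, "unscanned")]
alert = should_alert(track, alert_point=0.6, end_point=1.0)
```

The point of evaluating the status at a threshold point short of the lane's end is to give staff time to intervene before the cart leaves the register lane.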
-
Patent number: 11948312
Abstract: In order to minimize the impact of a delay (if any) that occurs when a process for detecting an object from a video takes time, and thereby achieve accurate tracking, the object detection/tracking device according to the present invention is provided with: an acquisition unit which acquires a video; a tracking unit which tracks an object in the video; a detection unit which detects an object in the video; an association unit which associates the same objects that have been detected and tracked in the same image in the video; and a correction unit which corrects the position of the tracked object using the position of the detected object, from among the associated objects.
Type: Grant
Filed: April 17, 2019
Date of Patent: April 2, 2024
Assignee: NEC CORPORATION
Inventor: Ryoma Oami
-
Patent number: 11941842
Abstract: The present invention relates to a device (10) for determining the position of stents (34, 36) in an image of vasculature structure, the device (10) comprising: an input unit (12); a processing unit (14); and an output unit (16); wherein the input unit (12) is configured to receive a sequence of images (24) of a vasculature structure (38) comprising at least one vessel branch (44, 46); wherein the processing unit (14) is configured to: detect positions of at least two markers (26, 28, 30, 32) for identifying a stent position (50, 52) in at least one of the images (24); detect at least one path indicator (64, 66, 74, 76) for the at least one vessel branch (44, 46) in at least one of the images (24) of the vasculature structure (38) at least for vessel regions in which the positions of the markers (26, 28, 30, 32) are detected; associate the at least two markers (26, 28, 30, 32) to the at least one path indicator (64, 66, 74, 76) based on the detected positions of the markers (26, 28, 30, 32) and the location o
Type: Grant
Filed: January 28, 2019
Date of Patent: March 26, 2024
Assignee: KONINKLIJKE PHILIPS N.V.
Inventors: Peter Maria Johannes Rongen, Markus Johannes Harmen Den Hartog, Javier Olivan Bescos, Thijs Elenbaas, Iris Ter Horst
-
Patent number: 11941831
Abstract: An image processing system to estimate depth for a scene. The image processing system includes a fusion engine to receive a first depth estimate from a geometric reconstruction engine and a second depth estimate from a neural network architecture. The fusion engine is configured to probabilistically fuse the first depth estimate and the second depth estimate to output a fused depth estimate for the scene. The fusion engine is configured to receive a measurement of uncertainty for the first depth estimate from the geometric reconstruction engine and a measurement of uncertainty for the second depth estimate from the neural network architecture, and use the measurements of uncertainty to probabilistically fuse the first depth estimate and the second depth estimate.
Type: Grant
Filed: July 23, 2021
Date of Patent: March 26, 2024
Assignee: Imperial College Innovations Limited
Inventors: Tristan William Laidlow, Jan Czarnowski, Stefan Leutenegger
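The uncertainty-aware probabilistic fusion described above can be sketched, per pixel, as an inverse-variance weighted combination of the two depth estimates — the standard way to fuse two independent Gaussian measurements. This is an illustrative assumption about the fusion rule, not the patent's specific method.

```python
def fuse_depth(d_geo, var_geo, d_net, var_net):
    """Fuse two depth estimates by weighting each with its inverse variance.

    d_geo, var_geo: depth and uncertainty from geometric reconstruction.
    d_net, var_net: depth and uncertainty from the neural network.
    Treating both as independent Gaussians, the fused variance is never
    larger than either input variance, so fusion only gains confidence.
    """
    w_geo = 1.0 / var_geo
    w_net = 1.0 / var_net
    fused_depth = (w_geo * d_geo + w_net * d_net) / (w_geo + w_net)
    fused_var = 1.0 / (w_geo + w_net)
    return fused_depth, fused_var

# The more certain source dominates: the network estimate (variance 0.1)
# pulls the fused depth close to 2.0 despite the geometric estimate of 3.0.
depth, var = fuse_depth(d_geo=3.0, var_geo=0.9, d_net=2.0, var_net=0.1)
```

Weighting by uncertainty is what lets the system lean on geometry where texture is good and on the learned prior where geometry fails (e.g., textureless walls).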
-
Patent number: 11935253
Abstract: A method for automatically splitting visual sensor data comprising consecutive images, the method being executed by at least one processor of a host computer, the method comprising: a) assigning a scene number to each image, wherein a scene comprises a plurality of images taken in a single environment, wherein assigning a scene number to each image is performed based on a comparison between consecutive images; b) determining an accumulated effort for the images in each scene, wherein the accumulated effort is determined based on the number of objects in the images of the scene, wherein the number of objects is determined using one or more neural networks for object detection; and c) creating packages of images, wherein the images with the same scene number are assigned to the same package unless the accumulated effort of the images in the package surpasses a package threshold.
Type: Grant
Filed: August 31, 2021
Date of Patent: March 19, 2024
Assignee: DSPACE GMBH
Inventor: Tim Raedsch
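Step c) above — grouping same-scene images into a package until its accumulated effort would surpass the package threshold — can be sketched as a single pass over the ordered images. The function name, tuple layout, and the choice to start a new package at every scene boundary are illustrative assumptions, not the patented implementation.

```python
def create_packages(images, package_threshold):
    """Group images into packages by scene number and effort budget.

    `images` is a list of (scene_number, effort) tuples in capture order;
    effort stands in for the per-image annotation cost derived from the
    object count. Images sharing a scene number go into the same package
    unless adding the next image would push the package's accumulated
    effort past the threshold, in which case a new package is started.
    """
    packages = []
    current, current_effort, current_scene = [], 0, None
    for scene, effort in images:
        new_scene = scene != current_scene
        over_budget = current_effort + effort > package_threshold
        if current and (new_scene or over_budget):
            packages.append(current)
            current, current_effort = [], 0
        current.append((scene, effort))
        current_effort += effort
        current_scene = scene
    if current:
        packages.append(current)
    return packages

# Scene 1 (efforts 2, 2, 3) splits when the budget of 5 would be exceeded;
# scene 2 always opens a fresh package.
pkgs = create_packages([(1, 2), (1, 2), (1, 3), (2, 1)], package_threshold=5)
```

Capping packages by effort rather than by image count keeps downstream workloads (e.g., labeling jobs) roughly equal in size even when object density varies between scenes.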
-
Patent number: 11928839
Abstract: A method for photogrammetric measurement includes providing a bedsheet having one or more patterns printed thereon in accordance with a pattern template, which defines respective locations of the one or more patterns in a template coordinate frame. An image is received, in an image coordinate frame, of a person lying on a bed, which is covered by the bedsheet. The image is processed in order to identify the one or more patterns in the image and to match the one or more patterns identified in the image to the one or more patterns in the pattern template. A transformation is computed, based on the matched patterns, between the image coordinate frame and the template coordinate frame. A dimension of the person is measured by applying the computed transformation to the image of the person.
Type: Grant
Filed: July 21, 2021
Date of Patent: March 12, 2024
Assignee: UDISENSE INC.
Inventors: Assaf Glazer, Tor Ivry, Amnon Karni, Dror Porat, Yanai Victor Ankri, Sivan Hurvitz, Natalie Barnett
-
Patent number: 11928895
Abstract: An electronic device and a control method therefor are disclosed. The present invention comprises: a camera; a memory for storing user face authentication information and an authentication pattern; and a control unit for recognizing a face from an image acquired through the camera, performing a first authentication that determines whether the recognized face matches the face authentication information, tracking the movement of a gaze of the recognized face when the first authentication is completed, and performing a second authentication that determines whether the movement of the gaze matches the authentication pattern. According to the present invention, a dual authentication step using the face authentication information and the movement of the gaze of the user can be conveniently performed.
Type: Grant
Filed: January 22, 2018
Date of Patent: March 12, 2024
Assignee: LG ELECTRONICS INC.
Inventor: Bongsang Kim
-
Patent number: 11928793
Abstract: A video quality assessment apparatus and method are provided. The video quality assessment apparatus includes a memory storing one or more instructions; and a processor configured to execute the one or more instructions stored in the memory to: identify whether a frame included in a video is a fully-blurred frame or a partially-blurred frame based on a blur level of the frame; obtain, in response to the frame being the fully-blurred frame, an analysis-based quality score with respect to the fully-blurred frame; obtain, in response to the frame being the partially-blurred frame, a model-based quality score with respect to the partially-blurred frame; and process the video based on at least one of the analysis-based quality score or the model-based quality score to obtain a processed video.
Type: Grant
Filed: June 22, 2021
Date of Patent: March 12, 2024
Assignee: SAMSUNG ELECTRONICS CO., LTD.
Inventors: Anant Baijal, Hoshin Son, Eunae Cho, Sangshin Park, Seungwon Cha
-
Patent number: 11915438
Abstract: The method of determination of a depth map of a scene comprises generation of a distance map of the scene obtained by time of flight measurements, acquisition of two images of the scene from two different viewpoints, and stereoscopic processing of the two images taking into account the distance map. The generation of the distance map includes generation of distance histograms acquisition zone by acquisition zone of the scene, and the stereoscopic processing includes, for each region of the depth map corresponding to an acquisition zone, elementary processing taking into account the corresponding histogram.
Type: Grant
Filed: September 17, 2021
Date of Patent: February 27, 2024
Assignee: STMicroelectronics France
Inventors: Manu Alibay, Olivier Pothier, Victor Macela, Alain Bellon, Arnaud Bourge
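One plausible way the per-zone distance histogram can inform the stereoscopic processing is by narrowing the disparity search: the dominant time-of-flight distance Z for a zone implies an expected disparity d = f · B / Z, and matching can be restricted to a small window around it. This is an assumed reading of the abstract, with illustrative names and numbers, not the patent's actual elementary processing.

```python
def disparity_window(zone_histogram, focal_px, baseline_m, margin_px=3.0):
    """Derive a disparity search window for one acquisition zone.

    `zone_histogram` maps candidate distance (meters) to the number of
    time-of-flight returns in that bin for the zone. The dominant
    distance Z yields an expected stereo disparity d = f * B / Z
    (focal length in pixels, baseline in meters), and matching is
    confined to [d - margin, d + margin] instead of the full range.
    """
    dominant_distance = max(zone_histogram, key=zone_histogram.get)
    expected = focal_px * baseline_m / dominant_distance
    return max(0.0, expected - margin_px), expected + margin_px

# Most ToF returns in this zone are at 2 m, so with f = 800 px and a
# 5 cm baseline the expected disparity is 20 px; search only 17-23 px.
hist = {1.0: 4, 2.0: 120, 4.0: 10}
lo, hi = disparity_window(hist, focal_px=800.0, baseline_m=0.05)
```

Restricting the search range both speeds up matching and suppresses false correspondences in repetitive or low-texture regions.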
-
Patent number: 11915434
Abstract: Systems and methods are provided for object re-identification. In many scenarios, it would be useful to be able to monitor the movement, actions, etc. of an object, such as a person, moving into and between different camera views of a monitored space. A network of cameras and client edge devices may detect and identify a particular object, and when that object re-appears in another camera view, a comparison can be performed between the data collected regarding the object upon its initial appearance and upon its re-appearance to determine if they are the same object. In this way, data regarding an object can be captured, analyzed, or otherwise processed despite moving from one camera view to another.
Type: Grant
Filed: May 27, 2021
Date of Patent: February 27, 2024
Assignee: ALWAYSAI, INC.
Inventor: Stephen Bottos
-
Patent number: 11908245
Abstract: A body language system for determining a body language message of a living being in a context comprising an artificial intelligence system, said AI system running a computer program that: retrieves an image of said living being showing body language; labels said living being in said image, resulting in a labeled living being; determines said context from said image using a trained machine learning model; determines a baseline body language of said labeled living being from said image using a trained machine learning model; adapts a trained machine learning model of said AI system using said baseline body language and said context; applies the adapted trained machine learning model of said AI system to the one image for categorizing said body language resulting in a category; and applies said category for determining said body language message.
Type: Grant
Filed: September 12, 2022
Date of Patent: February 20, 2024
Assignee: KEPLER VISION TECHNOLOGIES B.V.
Inventors: Henricus Meinardus Gerardus Stokman, Marc Jean Baptist Van Oldenborgh, Fares Alnajar
-
Patent number: 11906625
Abstract: A surround multi-object tracking and surround vehicle motion prediction framework is provided. A full-surround camera array and LiDAR sensor based approach provides for multi-object tracking for autonomous vehicles. The multi-object tracking incorporates a fusion scheme to handle object proposals from the different sensors within the calibrated camera array. A motion prediction framework leverages the instantaneous motion of vehicles, an understanding of motion patterns of freeway traffic, and the effect of inter-vehicle interactions. The motion prediction framework incorporates probabilistic modeling of surround vehicle trajectories. Additionally, subcategorizing trajectories based on maneuver classes leads to better modeling of motion patterns. A model takes into account interactions between surround vehicles for simultaneously predicting each of their motions.
Type: Grant
Filed: January 8, 2019
Date of Patent: February 20, 2024
Assignee: The Regents of the University of California
Inventors: Akshay Rangesh, Mohan M. Trivedi, Nachiket Deo
-
Patent number: 11908177
Abstract: The learning device 10D is trained to extract a moving image feature amount Fm, which is a feature amount relating to the moving image data Dm, when the moving image data Dm is inputted thereto, and is trained to extract a still image feature amount Fs, which is a feature amount relating to the still image data Ds, when the still image data Ds is inputted thereto. The first inference unit 32D performs a first inference regarding the moving image data Dm based on the moving image feature amount Fm. The second inference unit 34D performs a second inference regarding the still image data Ds based on the still image feature amount Fs. The learning unit 36D performs learning of the feature extraction unit 31D based on the results of the first inference and the second inference.
Type: Grant
Filed: May 29, 2019
Date of Patent: February 20, 2024
Assignee: NEC CORPORATION
Inventors: Shuhei Yoshida, Makoto Terao
-
Patent number: 11900616
Abstract: A method for analyzing an object. The method includes capturing, using a camera device, a sequence of images of a scene comprising a light source attached to a first element of a plurality of elements comprised in an object, detecting, by a hardware processor based on a pattern of local light change across the sequence of images, the light source in the scene, determining, by the hardware processor, a location of the light source in at least one image of the sequence of images, generating, by the hardware processor based on the location of the light source and a dynamic model of the object, a region-of-interest for analyzing the object, and generating an analysis result of the object based on the region-of-interest, wherein a pre-determined task is performed based on the analysis result.
Type: Grant
Filed: January 8, 2019
Date of Patent: February 13, 2024
Assignee: HANGZHOU TARO POSITIONING TECHNOLOGY CO., LTD.
Inventor: Hao Qian
-
Patent number: 11900679
Abstract: Methods and systems for image-based abnormal event detection are disclosed. An example method includes obtaining a sequential set of images captured by a camera; generating a set of observed features for each of the images; generating a set of predicted features based on a portion of the sets of observed features that excludes the set of observed features for a last image in the sequential set of images; determining that a difference between the set of predicted features and the set of observed features for the last image in the sequential set of images satisfies abnormal event criteria; and in response to determining that the difference between the set of predicted features and the set of observed features for the last image in the sequential set of images satisfies abnormal event criteria, classifying the set of sequential images as showing an abnormal event.
Type: Grant
Filed: November 25, 2020
Date of Patent: February 13, 2024
Assignee: ObjectVideo Labs, LLC
Inventors: Jangwon Lee, Gang Qian, Allison Beach, Donald Gerard Madden
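The predicted-versus-observed comparison above can be sketched as thresholding the distance between a feature vector predicted from the earlier frames and the observed features of the last frame. The predictor here is a naive linear extrapolation of the last two feature vectors, chosen purely for illustration; the patent does not specify this predictor, and a deployed system would likely use a learned model.

```python
import math

def is_abnormal(feature_history, observed_last, threshold):
    """Flag a sequence as abnormal when the last frame deviates from
    the prediction built from earlier frames.

    `feature_history` holds the observed feature vectors of all frames
    except the last; `observed_last` is the last frame's features. The
    prediction is a linear extrapolation of the final two history
    vectors (an illustrative stand-in for the real predictor).
    """
    prev, last = feature_history[-2], feature_history[-1]
    predicted = [2 * b - a for a, b in zip(prev, last)]
    distance = math.dist(predicted, observed_last)
    return distance > threshold

# A steady rightward drift predicts [0.2, 0.0]; a matching observation
# is normal, while a sudden jump to [0.2, 2.0] exceeds the threshold.
history = [[0.0, 0.0], [0.1, 0.0]]
normal = is_abnormal(history, [0.2, 0.0], threshold=0.5)
jump = is_abnormal(history, [0.2, 2.0], threshold=0.5)
```

The key property, however the predictor is built, is that only the *discrepancy* is thresholded, so ordinary motion that the model can anticipate never trips the detector.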
-
Patent number: 11893807
Abstract: A method for determining a state of vigilance of a driver in a vehicle using a predetermined image-analyzing algorithm. The method notably includes a step of executing the predetermined algorithm on the generated sequence of images in order to detect a series comprising at least one movement of the head of the driver, a step of determining the speed and/or the amplitude of each identified movement, a step of detecting dynamic and static periods of the head of the driver and of measuring the frequency and duration of each period, and a step of determining a state of vigilance of the driver from the speed and/or the amplitude determined for each identified movement and from the frequency and duration of each detected period.
Type: Grant
Filed: December 13, 2019
Date of Patent: February 6, 2024
Inventors: Martin Petrov, Alain Giralt
-
Patent number: 11887384
Abstract: A method of describing a temporal event, including receiving a video sequence of the temporal event, extracting at least one physical characteristic of an at least one occupant within the video sequence, extracting at least one action of the at least one occupant within the video sequence, extracting at least one interaction of the at least one occupant with a secondary occupant within the video sequence, determining a safety level of the temporal event within a vehicle based on at least one of the at least one action and the at least one interaction, and describing the at least one physical characteristic of the at least one occupant and at least one of the at least one action and the at least one interaction of the at least one occupant.
Type: Grant
Filed: February 2, 2021
Date of Patent: January 30, 2024
Assignee: Black Sesame Technologies Inc.
Inventors: Zilong Hu, Lei Zhang, Qun Gu