Patents Examined by Pinalben Patel
-
Patent number: 12361694
Abstract: An autonomous system includes: a first semantic segmentation model trained based on a training dataset including images and labels for the images, the first semantic segmentation model configured to generate a first segmentation map based on an image from a camera; a second semantic segmentation model of the same type of semantic segmentation model as the first semantic segmentation model, the second semantic segmentation model configured to generate a second segmentation map based on the image from the camera; an adaptation module configured to selectively adjust one or more first parameters of the second semantic segmentation model; and a reset module configured to: determine a first total number of unique classifications included in the first segmentation map; determine a second total number of unique classifications included in the second segmentation map; and selectively reset the first parameters to previous parameters, respectively, based on the first and second total numbers.
Type: Grant
Filed: January 31, 2022
Date of Patent: July 15, 2025
Assignee: NAVER CORPORATION
Inventors: Riccardo Volpi, Diane Larlus, Gabriela Csurka Khedari
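A minimal sketch of the reset logic described in the abstract, assuming numpy arrays of per-pixel class IDs for the two segmentation maps and a dictionary of model parameters; the class-count threshold and parameter names are illustrative assumptions, not values from the patent.

```python
import numpy as np

def maybe_reset_parameters(first_map, second_map, current_params, previous_params,
                           max_class_drop=2):
    """Reset adapted parameters when the adapted model collapses to fewer classes.

    first_map / second_map: 2-D arrays of per-pixel class IDs produced by the
    original and the continually adapted segmentation models, respectively.
    max_class_drop is an illustrative threshold, not a value from the patent.
    """
    # Count how many distinct classes each segmentation map contains.
    first_total = len(np.unique(first_map))
    second_total = len(np.unique(second_map))

    # If the adapted model predicts far fewer classes than the original model,
    # treat it as a sign of degenerate adaptation and roll the parameters back.
    if first_total - second_total > max_class_drop:
        return {name: prev.copy() for name, prev in previous_params.items()}, True
    return current_params, False


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    original = rng.integers(0, 10, size=(64, 64))    # 10 classes predicted
    adapted = rng.integers(0, 3, size=(64, 64))      # collapsed to 3 classes
    params = {"head.weight": np.ones((3, 3))}
    saved = {"head.weight": np.zeros((3, 3))}
    new_params, was_reset = maybe_reset_parameters(original, adapted, params, saved)
    print("reset performed:", was_reset)
```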
-
Patent number: 12354374
Abstract: Systems, methods, models, and training data for models are discussed, for determining vehicle positioning, and in particular identifying tailgating. Simulated training images showing vehicles following other vehicles, under various conditions, are generated using a virtual environment. Models are trained to determine the following distance between two vehicles. Trained models are used in the detection of tailgating, based on the determined distance between the two vehicles. Results of tailgating detection are output to warn a driver, or to provide a report on driver behavior.
Type: Grant
Filed: March 28, 2024
Date of Patent: July 8, 2025
Assignee: Geotab Inc.
Inventors: Joy Mazumder, Shashank Saurav, Javed Siddique, Mohammed Sohail Siddique
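A hedged sketch of the final decision step only: once a model has estimated the following distance, tailgating can be flagged against a speed-dependent safe distance. The two-second time-gap rule below is an illustrative heuristic, not a criterion stated in the patent.

```python
def is_tailgating(following_distance_m: float, speed_mps: float,
                  min_time_gap_s: float = 2.0) -> bool:
    """Flag tailgating when the estimated following distance is shorter than
    the distance covered in `min_time_gap_s` at the current speed.

    The two-second time gap is an illustrative heuristic, not from the patent.
    """
    safe_distance_m = speed_mps * min_time_gap_s
    return following_distance_m < safe_distance_m


if __name__ == "__main__":
    # e.g. a model predicts 12 m behind the lead vehicle at 25 m/s (~90 km/h)
    print(is_tailgating(following_distance_m=12.0, speed_mps=25.0))  # True
```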
-
Patent number: 12354340
Abstract: A system implemented as computer programs on one or more computers in one or more locations that implements a computer vision model is described. The computer vision model includes a positional local self-attention layer that is configured to receive an input feature map and to generate an output feature map. For each input element in the input feature map, the positional local self-attention layer generates a respective output element for the output feature map by generating a memory block including neighboring input elements around the input element, generating a query vector using the input element and a query weight matrix, performing, for each neighboring element in the memory block, positional local self-attention operations to generate a temporary output element, and generating the respective output element by summing the temporary output elements of the neighboring elements in the memory block.
Type: Grant
Filed: May 22, 2020
Date of Patent: July 8, 2025
Assignee: Google LLC
Inventors: Jonathon Shlens, Ashish Teku Vaswani, Niki J. Parmar, Prajit Ramachandran, Anselm Caelifer Levskaya, Irwan Bello
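A minimal numpy sketch of one positional local self-attention step for a single pixel: build the memory block of neighbors, form a query, score each neighbor, and sum the attention-weighted values. The single-head formulation and the additive relative-position embedding on the keys are assumptions for illustration; the patent's exact formulation may differ.

```python
import numpy as np

def local_self_attention_pixel(feature_map, row, col, wq, wk, wv, rel_pos_emb, k=3):
    """Compute one output element of a positional local self-attention layer.

    feature_map: (H, W, C) input feature map.
    wq, wk, wv: (C, C) query/key/value weight matrices.
    rel_pos_emb: (k, k, C) relative-position embeddings added to the keys
    (one simple way to inject position; an illustrative choice).
    """
    h, w, c = feature_map.shape
    half = k // 2
    query = feature_map[row, col] @ wq                      # query for the center pixel

    scores, values = [], []
    for dr in range(-half, half + 1):
        for dc in range(-half, half + 1):
            r, cc = min(max(row + dr, 0), h - 1), min(max(col + dc, 0), w - 1)
            key = feature_map[r, cc] @ wk + rel_pos_emb[dr + half, dc + half]
            scores.append(query @ key)                      # similarity to each neighbor
            values.append(feature_map[r, cc] @ wv)

    scores = np.asarray(scores)
    weights = np.exp(scores - scores.max())
    weights /= weights.sum()                                # softmax over the memory block
    # Output element = attention-weighted sum of the neighbors' value vectors.
    return np.sum(weights[:, None] * np.stack(values), axis=0)


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    fmap = rng.normal(size=(8, 8, 16))
    wq, wk, wv = (rng.normal(size=(16, 16)) for _ in range(3))
    rel = rng.normal(size=(3, 3, 16))
    print(local_self_attention_pixel(fmap, 4, 4, wq, wk, wv, rel).shape)  # (16,)
```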
-
Patent number: 12354409
Abstract: A system for motion capture of human body movements includes sensor nodes configured for coupling to respective portions of a human subject. Each sensor node generates inertial sensor data as the human subject engages in a physical activity session and processes the inertial sensor data according to a first machine-learned model to generate a set of local motion determinations. One or more computing devices receive the sets of local determinations and process them according to a second machine-learned model to generate a body motion profile. The computing device(s) provide an animated display of an avatar moving according to the body motion profile and generate training data based on input received from a viewer in response to the animated display. The computing device(s) modify at least one of the first machine-learned model or the second machine-learned model based at least in part on the training data.
Type: Grant
Filed: March 22, 2022
Date of Patent: July 8, 2025
Assignee: GOOGLE LLC
Inventors: Nicholas Gillian, Daniel Lee Giles
-
Patent number: 12347141
Abstract: A method with object pose estimation includes: obtaining an instance segmentation image and a normalized object coordinate space (NOCS) map by processing an input single-frame image using a deep neural network (DNN); obtaining a two-dimensional and three-dimensional (2D-3D) mapping relationship based on the instance segmentation image and the NOCS map; and determining a pose of an object instance in the input single-frame image based on the 2D-3D mapping relationship.
Type: Grant
Filed: November 30, 2021
Date of Patent: July 1, 2025
Assignee: Samsung Electronics Co., Ltd.
Inventors: Weiming Li, Jiyeon Kim, Hyun Sung Chang, Qiang Wang, Sunghoon Hong, Yang Liu, Yueying Kao, Hao Wang
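A hedged sketch of turning a predicted NOCS map into a pose: every masked pixel gives a 2D-3D correspondence (pixel coordinate to normalized object coordinate), which a generic PnP solver can use. The OpenCV RANSAC PnP call is an illustrative solver choice, not necessarily the patent's method.

```python
import numpy as np
import cv2  # OpenCV, used here only for the PnP solver


def pose_from_nocs(instance_mask, nocs_map, camera_matrix):
    """Recover an object pose from a NOCS map via 2D-3D correspondences.

    instance_mask: (H, W) boolean mask of one object instance.
    nocs_map: (H, W, 3) normalized object coordinates predicted per pixel.
    camera_matrix: (3, 3) pinhole intrinsics.
    """
    ys, xs = np.nonzero(instance_mask)
    image_points = np.stack([xs, ys], axis=1).astype(np.float64)   # 2-D pixel coords
    object_points = nocs_map[ys, xs].astype(np.float64)            # matching 3-D coords

    ok, rvec, tvec, inliers = cv2.solvePnPRansac(
        object_points, image_points, camera_matrix, distCoeffs=None)
    if not ok:
        return None
    rotation, _ = cv2.Rodrigues(rvec)                              # 3x3 rotation matrix
    return rotation, tvec


if __name__ == "__main__":
    # Synthetic check: project known 3-D points with a known pose, then recover it.
    k = np.array([[500.0, 0, 320], [0, 500.0, 240], [0, 0, 1]])
    pts3d = np.random.default_rng(0).uniform(-0.5, 0.5, size=(200, 3))
    tvec_true = np.array([0.0, 0.0, 2.0])
    proj = (pts3d + tvec_true) @ k.T
    pix = (proj[:, :2] / proj[:, 2:]).round().astype(int)

    mask = np.zeros((480, 640), dtype=bool)
    nocs = np.zeros((480, 640, 3))
    mask[pix[:, 1], pix[:, 0]] = True
    nocs[pix[:, 1], pix[:, 0]] = pts3d
    rot, tvec = pose_from_nocs(mask, nocs, k)
    print("recovered translation:", tvec.ravel())  # close to (0, 0, 2)
```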
-
Patent number: 12340543
Abstract: A three-dimensional data encoding method includes: encoding items of attribute information corresponding to respective three-dimensional points using a parameter; and generating a bitstream including the items of attribute information encoded and the parameter. Each of the items of attribute information belongs to one of at least one layer. Each of the three-dimensional points belongs to one of at least one region. The parameter is determined based on a layer to which an item of attribute information to be encoded in the encoding belongs, and a region to which a three-dimensional point having the item of attribute information to be encoded in the encoding belongs. The parameter included in the bitstream includes a predetermined reference value, a first difference value determined for each of the at least one layer, and a second difference value determined for each of the at least one region.
Type: Grant
Filed: March 11, 2022
Date of Patent: June 24, 2025
Assignee: Panasonic Intellectual Property Corporation of America
Inventors: Noritaka Iguchi, Toshiyasu Sugio
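A small sketch of how a decoder might reconstruct the per-point parameter from the signaled reference value plus the per-layer and per-region difference values; the function and argument names are illustrative, and the summing rule is the straightforward reading of the abstract rather than the patent's exact syntax.

```python
def attribute_parameter(reference_value: int,
                        layer_deltas: list[int],
                        region_deltas: list[int],
                        layer_index: int,
                        region_index: int) -> int:
    """Reconstruct the coding parameter for one attribute value.

    The bitstream carries a reference value, a first difference per layer, and a
    second difference per region, so the effective parameter for a point is taken
    here as the sum of the three (illustrative reading of the abstract).
    """
    return (reference_value
            + layer_deltas[layer_index]
            + region_deltas[region_index])


if __name__ == "__main__":
    # e.g. reference parameter 32, coarser layers offset upward, one region tightened
    print(attribute_parameter(32, layer_deltas=[0, 2, 4], region_deltas=[0, -3],
                              layer_index=1, region_index=1))  # 31
```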
-
Patent number: 12333734
Abstract: This disclosure relates generally to speckle image analysis, and, more particularly, to a system and method for imaging of localized and heterogenous dynamics using laser speckle. Existing speckle analysis techniques cannot simultaneously capture a dynamic phenomenon that carries over a specific time duration and localize the extent of the activity at a single, chosen instant of time. The present disclosure records an image stack consisting of N speckle images sequentially over a period, divides the image stack into a spatial window and a temporal window, converts the speckle intensity data comprised in the spatial window into a column vector, constructs a diagonal matrix and extracts a plurality of singular values from the diagonal matrix, then defines a speckle intensity correlation metric using the plurality of singular values, defines a speckle activity, and generates a speckle contrast image by graphically plotting the speckle activity values.
Type: Grant
Filed: March 7, 2022
Date of Patent: June 17, 2025
Assignee: TATA CONSULTANCY SERVICES LIMITED
Inventors: Parama Pal, Earu Banoth
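A numpy sketch of the windowed singular-value analysis described above: for each pixel, the intensities in a spatial window are flattened into one column per frame, the columns over a temporal window form a matrix, and its singular values yield a correlation metric. The specific metric below (dominant singular value over the sum of singular values) and the window sizes are illustrative choices, not taken from the patent.

```python
import numpy as np

def speckle_activity_map(stack, spatial=5, temporal=10):
    """Compute a per-pixel speckle activity map from a speckle image stack.

    stack: (N, H, W) array of sequentially recorded speckle images.
    """
    n, h, w = stack.shape
    half = spatial // 2
    t = min(temporal, n)
    activity = np.zeros((h, w))

    for row in range(half, h - half):
        for col in range(half, w - half):
            # One column per frame: the flattened spatial window around the pixel.
            windows = stack[:t, row - half:row + half + 1, col - half:col + half + 1]
            matrix = windows.reshape(t, -1).T            # (spatial*spatial, t)
            s = np.linalg.svd(matrix, compute_uv=False)  # singular values
            correlation = s[0] / (s.sum() + 1e-12)       # near 1 => static speckle
            activity[row, col] = 1.0 - correlation       # higher => more activity
    return activity


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    frames = rng.random((12, 32, 32))                    # fully decorrelated speckle
    print(speckle_activity_map(frames).max())
```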
-
Patent number: 12333617
Abstract: A method of operation of a computer for managing a plurality of pieces of drawing data representing a set of components includes: determining whether target drawing data included in the plurality of pieces of drawing data is related to existing drawing data that has been already stored; acquiring record information when the record information associated with the existing drawing data is stored in a case in which a result of the determining is affirmative; and transmitting record display information for displaying the record information to a terminal. The determining includes determining whether the existing drawing data represents a component having an attribute identical with an attribute of a component represented by the target drawing data. The attribute represents a material, a surface treatment, a processing method, or a combination of two or more of them.
Type: Grant
Filed: June 26, 2024
Date of Patent: June 17, 2025
Assignee: CADDi, Inc.
Inventor: Yushiro Kato
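A tiny sketch of the attribute-identity check used to relate target drawing data to existing drawing data; the field names and the "any attribute matches" rule are illustrative assumptions.

```python
from dataclasses import dataclass

@dataclass
class DrawingAttributes:
    material: str
    surface_treatment: str
    processing_method: str

def is_related(target: DrawingAttributes, existing: DrawingAttributes) -> bool:
    """Treat two drawings as related when the represented components share an
    identical attribute: material, surface treatment, or processing method."""
    return (target.material == existing.material
            or target.surface_treatment == existing.surface_treatment
            or target.processing_method == existing.processing_method)


if __name__ == "__main__":
    a = DrawingAttributes("SUS304", "anodizing", "milling")
    b = DrawingAttributes("SUS304", "plating", "turning")
    print(is_related(a, b))  # True: same material
```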
-
Patent number: 12332118
Abstract: A method for achieving a target white chromaticity value and a target white light output value on a display device includes measuring a chromaticity value of a first primary color, a chromaticity value of a first secondary color, and a chromaticity value of a second secondary color on the display device using a photometer. The method also includes measuring a current white chromaticity value and a current white light output value on the display device using the photometer. The method also includes generating a plot of a color gamut triangle based at least partially upon the measured chromaticity value of the first primary color, the measured chromaticity value of the first secondary color, the measured chromaticity value of the second secondary color, and the measured current white chromaticity value.
Type: Grant
Filed: April 5, 2022
Date of Patent: June 17, 2025
Assignee: THE BOEING COMPANY
Inventor: Steven T. Beetz
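A short sketch of the plotting step: the three measured chromaticities span a gamut triangle and the measured white point is drawn inside it. CIE 1931 (x, y) coordinates and the example values are assumptions for illustration.

```python
import matplotlib.pyplot as plt

def plot_gamut_triangle(primary_xy, secondary1_xy, secondary2_xy, white_xy):
    """Plot the color gamut triangle spanned by three measured chromaticities
    together with the measured current white point (CIE 1931 x, y assumed)."""
    xs, ys = zip(primary_xy, secondary1_xy, secondary2_xy, primary_xy)  # close the loop
    fig, ax = plt.subplots()
    ax.plot(xs, ys, marker="o", label="measured gamut")
    ax.plot(*white_xy, marker="x", color="k", label="current white")
    ax.set_xlabel("CIE x")
    ax.set_ylabel("CIE y")
    ax.legend()
    return fig


if __name__ == "__main__":
    # Illustrative chromaticities only, not values from the patent.
    fig = plot_gamut_triangle((0.64, 0.33), (0.30, 0.60), (0.15, 0.06), (0.31, 0.33))
    fig.savefig("gamut_triangle.png")
```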
-
Patent number: 12322189
Abstract: The invention relates to a method for generating disturbed input data for a neural network for analyzing sensor data, in particular digital images, of a driver assistance system. A first metric is defined which indicates how the magnitude of a change in sensor data is measured, and a second metric is defined which indicates where a disturbance of sensor data is directed. An optimization problem is generated from a combination of the first metric and the second metric, and the optimization problem is solved by means of at least one solution algorithm, wherein the solution indicates a target disturbance of the input data. Disturbed input data is then generated from sensor data for the neural network by means of the target disturbance.
Type: Grant
Filed: June 12, 2020
Date of Patent: June 3, 2025
Assignees: VOLKSWAGEN AKTIENGESELLSCHAFT, DSPACE GMBH
Inventors: Fabian Hüger, Peter Schlicht, Nico Maurice Schmidt, Felix Assion, Florens Fabian Gressner
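A hedged sketch of one concrete instance of the framework: the first metric is an L-infinity bound on the change, the second metric pushes the output toward a chosen target class, and a single signed-gradient step (an FGSM-style update) serves as the solution algorithm. The toy linear soft-max model is purely illustrative; the patent covers the general method, not this particular solver.

```python
import numpy as np

def disturbed_input(x, w, b, target_class, epsilon=0.03):
    """Generate disturbed input data for a toy linear soft-max classifier.

    First metric (magnitude): L-infinity size of the change, capped at epsilon.
    Second metric (direction): push the output toward `target_class`.
    """
    logits = x @ w + b
    probs = np.exp(logits - logits.max())
    probs /= probs.sum()
    one_hot = np.zeros_like(probs)
    one_hot[target_class] = 1.0
    # Gradient of the cross-entropy toward the target class w.r.t. the input.
    grad_x = w @ (probs - one_hot)
    # Step against the gradient (toward the target), bounded in L-infinity norm.
    return np.clip(x - epsilon * np.sign(grad_x), 0.0, 1.0)


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    x = rng.random(16)                         # toy "sensor data"
    w, b = rng.normal(size=(16, 3)), np.zeros(3)
    x_adv = disturbed_input(x, w, b, target_class=2)
    print("max change per element:", np.abs(x_adv - x).max())
```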
-
Patent number: 12322159
Abstract: Embodiments of the present application provide a medical image acquisition apparatus and method. The apparatus comprises: a preprocessing unit, for preprocessing an original image signal to obtain a first number of input images; and a determination unit, for using a training model to determine an analytic relationship between respective pixels at the same position in the first number of input images, and determining, according to the analytic relationship, a second number of medical functional parameter diagrams corresponding to the original image signal. The embodiments achieve fast calculation of SyMRI function parameter diagrams and high scalability.
Type: Grant
Filed: April 13, 2022
Date of Patent: June 3, 2025
Assignee: GE Precision Healthcare LLC
Inventors: Jialiang Ren, Jingjing Xia, Zhoushe Zhao
-
Patent number: 12293535
Abstract: A data capture stage includes a frame at least partially surrounding a target object, a rotation device within the frame and configured to selectively rotate the target object, a plurality of cameras coupled to the frame and configured to capture images of the target object from different angles, a sensor coupled to the frame and configured to sense mapping data corresponding to the target object, and an augmentation data generator configured to control a rotation of the rotation device, to control operations of the plurality of cameras and the sensor, and to generate training data based on the images and the mapping data.
Type: Grant
Filed: August 3, 2021
Date of Patent: May 6, 2025
Assignee: Intrinsic Innovation LLC
Inventors: Agastya Kalra, Rishav Agarwal, Achuta Kadambi, Kartik Venkataraman, Anton Boykov
-
Patent number: 12277745
Abstract: A training apparatus includes a processor which acquires an image to which identification information of a subject included in the image is attached as a first correct label, the image being included in an image dataset for training and being an image for use in supervised training. The processor further attaches, to the acquired image, classification information based on a feature amount of the acquired image as a second correct label, trains a classifier using the acquired image and the second correct label attached thereto, and updates training content used in the training using the acquired image and the first correct label after training the classifier.
Type: Grant
Filed: January 6, 2022
Date of Patent: April 15, 2025
Assignee: CASIO COMPUTER CO., LTD.
Inventor: Yoshihiro Teshima
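A hedged sketch of the two-label training loop: a second correct label is derived from the feature amounts (here by KMeans clustering), the network is first trained on that label, and training then continues on the original first correct label. The clustering choice, the tiny MLP, and the swapped classification head are all illustrative assumptions, not the patent's prescribed components.

```python
import numpy as np
import torch
import torch.nn as nn
from sklearn.cluster import KMeans

def two_stage_training(features, true_labels, n_pseudo_classes=5, n_true_classes=3, epochs=50):
    """Train on a feature-derived second label, then update training with the first label."""
    x = torch.tensor(features, dtype=torch.float32)
    y_true = torch.tensor(true_labels, dtype=torch.long)

    # Second correct label: cluster assignment computed from the feature amounts.
    pseudo = KMeans(n_clusters=n_pseudo_classes, n_init=10, random_state=0).fit_predict(features)
    y_pseudo = torch.tensor(pseudo, dtype=torch.long)

    backbone = nn.Sequential(nn.Linear(x.shape[1], 32), nn.ReLU())
    head_pseudo = nn.Linear(32, n_pseudo_classes)
    head_true = nn.Linear(32, n_true_classes)
    loss_fn = nn.CrossEntropyLoss()

    # Stage 1: train the backbone and pseudo head on the second correct label.
    opt = torch.optim.Adam(list(backbone.parameters()) + list(head_pseudo.parameters()), lr=1e-2)
    for _ in range(epochs):
        opt.zero_grad()
        loss_fn(head_pseudo(backbone(x)), y_pseudo).backward()
        opt.step()

    # Stage 2: update training with the first correct label, reusing the backbone.
    opt = torch.optim.Adam(list(backbone.parameters()) + list(head_true.parameters()), lr=1e-2)
    for _ in range(epochs):
        opt.zero_grad()
        loss_fn(head_true(backbone(x)), y_true).backward()
        opt.step()
    return backbone, head_true


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    feats = rng.normal(size=(200, 16)).astype("float32")
    labels = rng.integers(0, 3, size=200)
    two_stage_training(feats, labels)
```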
-
Patent number: 12272093
Abstract: An information processing apparatus (2000) detects one or more candidate regions (22) from a captured image (20) based on an image feature of a target object. Each candidate region (22) is an image region that is estimated to represent the target object. The information processing apparatus (2000) detects a person region (26) from the captured image (20) and detects an estimation position (24) based on the detected person region (26). The person region (26) is a region that is estimated to represent a person. The estimation position (24) is a position in the captured image (20) where the target object is estimated to be present. Then, the information processing apparatus (2000) determines an object region (30), which is an image region representing the target object, based on each candidate region (22) and the estimation position (24).
Type: Grant
Filed: December 5, 2023
Date of Patent: April 8, 2025
Assignee: NEC CORPORATION
Inventor: Jun Piao
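A small sketch of one way to combine the two cues: choose the candidate region whose center is closest to the estimation position derived from the person region. The nearest-center rule is an illustrative combination, not necessarily the one claimed.

```python
import math

def determine_object_region(candidate_regions, estimated_position):
    """Pick the object region from the candidates using the estimation position.

    candidate_regions: list of (x, y, w, h) boxes estimated to represent the object.
    estimated_position: (x, y) point derived from the detected person region.
    """
    def center(box):
        x, y, w, h = box
        return (x + w / 2.0, y + h / 2.0)

    # Nearest candidate to the estimated position wins (illustrative rule).
    return min(candidate_regions,
               key=lambda box: math.dist(center(box), estimated_position))


if __name__ == "__main__":
    candidates = [(10, 10, 20, 20), (100, 40, 30, 30), (200, 200, 25, 25)]
    print(determine_object_region(candidates, estimated_position=(110, 60)))
```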
-
Patent number: 12272155
Abstract: There is provided a method for detecting a vehicle including receiving continuously captured front images, setting a search area of the vehicle in a target image based on a location of the vehicle or a vehicle area detected from a previous image among the front images, detecting the vehicle in the search area according to a machine learning model, and tracking the vehicle in the target image by using feature points of the vehicle extracted from the previous image according to a vehicle detection result based on the machine learning model. Since the entire image is not used as the vehicle detection area, the processing speed may be increased, and a forward vehicle tracked in augmented reality navigation may be continuously displayed without interruption, thereby providing a stable service to the user.
Type: Grant
Filed: December 19, 2023
Date of Patent: April 8, 2025
Assignee: THINKWARE CORPORATION
Inventor: Shin Hyoung Kim
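A sketch of the search-area step: expand the previous frame's vehicle box by a margin, run the detector only on that crop, and map the result back to full-frame coordinates. The margin factor is an illustrative value, and `detector` is left as any callable so no specific model API is assumed.

```python
def set_search_area(prev_box, frame_shape, margin=0.5):
    """Expand the previous frame's vehicle box to form this frame's search area.

    prev_box: (x, y, w, h) vehicle box from the previous image.
    margin: fraction of the box size added on every side (illustrative value).
    """
    x, y, w, h = prev_box
    height, width = frame_shape[:2]
    x0 = max(int(x - margin * w), 0)
    y0 = max(int(y - margin * h), 0)
    x1 = min(int(x + w + margin * w), width)
    y1 = min(int(y + h + margin * h), height)
    return x0, y0, x1, y1


def detect_in_search_area(frame, prev_box, detector):
    """Run the vehicle detector only on the search-area crop, then map the
    detection back to full-frame coordinates. `detector` is any callable that
    returns (x, y, w, h) relative to the crop, or None if nothing is found."""
    x0, y0, x1, y1 = set_search_area(prev_box, frame.shape)
    result = detector(frame[y0:y1, x0:x1])
    if result is None:
        return None
    dx, dy, dw, dh = result
    return (x0 + dx, y0 + dy, dw, dh)


if __name__ == "__main__":
    import numpy as np
    frame = np.zeros((480, 640, 3), dtype=np.uint8)
    dummy_detector = lambda crop: (5, 5, 40, 30)   # stand-in for the ML model
    print(detect_in_search_area(frame, (300, 200, 60, 40), dummy_detector))
```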
-
Patent number: 12272111
Abstract: Presented herein are systems and methods for the employment of machine learning models for image processing. A method may include a capture of a video feed including image data of a document at a client device. The client device can provide the video feed to another computing device. The method can include, by the client device or the other computing device, object recognition for recognizing a type of document and capturing an image of the document that exceeds a quality threshold from among the frames within the video feed. The method may further include the execution of other image processing operations on the image data to improve the quality of the image or features extracted therefrom. The method may further include anti-fraud detection or scoring operations to determine an amount of risk associated with the image data.
Type: Grant
Filed: September 19, 2024
Date of Patent: April 8, 2025
Assignee: Citibank, N.A.
Inventors: Ashutosh K. Sureka, Venkata Sesha Kiran Kumar Adimatyam, Miriam Silver, Daniel Funken
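A hedged sketch of the frame-selection step: scan the video feed and return the first frame whose quality score exceeds a threshold, falling back to the best frame seen. The variance-of-Laplacian sharpness score and the threshold value are illustrative choices, not the quality metric specified in the patent.

```python
import cv2

def best_document_frame(frames, quality_threshold=150.0):
    """Return (frame, score) for the first frame exceeding the quality threshold.

    frames: iterable of BGR images from the video feed.
    The variance of the Laplacian is used as an illustrative sharpness score.
    """
    best_frame, best_score = None, -1.0
    for frame in frames:
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        score = cv2.Laplacian(gray, cv2.CV_64F).var()    # higher => sharper image
        if score > best_score:
            best_frame, best_score = frame, score
        if score >= quality_threshold:
            return frame, score
    return best_frame, best_score                        # fall back to the sharpest frame seen
```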
-
Patent number: 12266179
Abstract: A processor-implemented method for processing a video includes receiving the video as an input at an artificial neural network (ANN). The video includes a sequence of frames. A set of features of a current frame of the video and a prior frame of the video is extracted. The set of features includes a set of support features for a set of pixels of the prior frame to be aligned with a set of reference features of the current frame. A similarity between a support feature for each pixel in the set of pixels of the set of support features of the prior frame and a corresponding reference feature of the current frame is computed. An attention map is generated based on the similarity. An output including a reconstruction of the current frame is generated based on the attention map.
Type: Grant
Filed: March 16, 2022
Date of Patent: April 1, 2025
Assignee: QUALCOMM INCORPORATED
Inventors: Davide Abati, Amirhossein Habibian, Amir Ghodrati
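A minimal numpy sketch of the similarity-to-attention step: per-pixel cosine similarity between the prior-frame support features and the current-frame reference features, squashed into an attention map. The cosine measure, the sigmoid mapping, and the assumption that the two feature maps are already spatially aligned are illustrative simplifications.

```python
import numpy as np

def attention_map(support_feats, reference_feats, temperature=0.1):
    """Build an attention map from prior-frame support features and
    current-frame reference features.

    support_feats, reference_feats: (H, W, C) feature maps, assumed aligned.
    """
    num = np.sum(support_feats * reference_feats, axis=-1)
    den = (np.linalg.norm(support_feats, axis=-1)
           * np.linalg.norm(reference_feats, axis=-1) + 1e-8)
    similarity = num / den                                    # (H, W) cosine similarity
    return 1.0 / (1.0 + np.exp(-similarity / temperature))   # attention values in (0, 1)


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    prior = rng.normal(size=(16, 16, 8))
    current = prior + 0.05 * rng.normal(size=(16, 16, 8))    # nearly static content
    print(attention_map(prior, current).mean())              # close to 1
```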
-
Patent number: 12266145
Abstract: Presented herein are systems and methods for the employment of machine learning models for image processing. A method may include a capture of a video feed including image data of a document at a client device. The client device can provide the video feed to another computing device. The method can include, by the client device or the other computing device, object recognition for recognizing a type of document and capturing an image of the document that exceeds a quality threshold from among the frames within the video feed. The method may further include the execution of other image processing operations on the image data to improve the quality of the image or features extracted therefrom. The method may further include anti-fraud detection or scoring operations to determine an amount of risk associated with the image data.
Type: Grant
Filed: April 8, 2024
Date of Patent: April 1, 2025
Assignee: CITIBANK, N.A.
Inventors: Ashutosh K. Sureka, Venkata Sesha Kiran Kumar Adimatyam, Miriam Silver, Daniel Funken
-
Patent number: 12260640
Abstract: Described herein are techniques for receiving, from a first device, first data about a physical object located in a physical space. The techniques further include storing, in a catalog of objects generated for the physical space, information about the physical object and indicating at least a first location of the physical object in the physical space. The techniques include receiving, from a second device, second data about the physical space, the second data sent from the second device upon or after an occurrence of an event. The techniques include determining, based on the second data, at least an impact area of the event to the physical space and determining, based on the catalog of objects and the second data, whether the physical object is impacted by the event. The techniques include causing an output indicating whether the physical object is impacted by the event.
Type: Grant
Filed: March 5, 2024
Date of Patent: March 25, 2025
Assignee: Lowe's Companies, Inc.
Inventors: Mason E. Sheffield, Josh Shabtai
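A small sketch of the impact check: compare each cataloged object's stored location against the impact area determined from the second data. The rectangular impact area and the 2-D point locations are illustrative simplifications of the catalog described in the abstract.

```python
from dataclasses import dataclass

@dataclass
class CatalogedObject:
    name: str
    x: float   # stored location in the physical space (illustrative planar coordinates)
    y: float

def impacted_objects(catalog, impact_area):
    """Return the cataloged objects whose stored location falls inside the impact area.

    impact_area: (x_min, y_min, x_max, y_max) rectangle determined after the event.
    """
    x_min, y_min, x_max, y_max = impact_area
    return [obj for obj in catalog
            if x_min <= obj.x <= x_max and y_min <= obj.y <= y_max]


if __name__ == "__main__":
    catalog = [CatalogedObject("sofa", 2.0, 3.5), CatalogedObject("television", 6.0, 1.0)]
    print([o.name for o in impacted_objects(catalog, impact_area=(0, 0, 4, 4))])  # ['sofa']
```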
-
Patent number: 12254622
Abstract: Systems and methods are described for calculating an emission rate of a fugitive gas based on a gas density image of the fugitive gas. In an example, a computing device receives a gas density image of a fugitive gas from a camera. The computing device determines how to optimize the view of the fugitive gas in the camera's field of view and instructs the camera to adjust its bearing and zoom accordingly. The camera captures one or more additional images of the fugitive gas, and the computing device stitches the images together where appropriate. The computing device then calculates the emission rate by delineating the fugitive gas in the image and determining a flux of the gas using one of various calculation methods.
Type: Grant
Filed: October 3, 2023
Date of Patent: March 18, 2025
Assignee: SCHLUMBERGER TECHNOLOGY CORPORATION
Inventors: Andrew J. Speck, Manasi Doshi, Lukasz Zielinski
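A hedged sketch of one common flux calculation on a delineated gas column-density image: integrate the column density along a cross-section transect near the source and multiply by a plume transport speed. The density units, threshold, transect choice, and constants are illustrative assumptions; the patent describes this only as one of various possible calculation methods.

```python
import numpy as np

def emission_rate(gas_density, plume_speed_mps, pixel_size_m, density_threshold=0.0):
    """Estimate an emission rate from a gas column-density image.

    gas_density: (H, W) array of column density, assumed in kg per square metre.
    Returns an estimate in kg/s under the illustrative transect-flux model.
    """
    plume = np.where(gas_density > density_threshold, gas_density, 0.0)  # delineate the gas
    columns_with_gas = np.nonzero(plume.sum(axis=0))[0]
    if columns_with_gas.size == 0:
        return 0.0
    transect_col = columns_with_gas[0]                              # cross-section nearest the source
    line_integral = plume[:, transect_col].sum() * pixel_size_m     # kg per metre along the transect
    return line_integral * plume_speed_mps                          # kg per second


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    image = np.clip(rng.normal(0.0, 0.01, size=(100, 200)), 0, None)
    image[40:60, 20:180] += 0.05                                    # synthetic plume
    print(f"{emission_rate(image, plume_speed_mps=3.0, pixel_size_m=0.5):.2f} kg/s")
```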