Target Tracking Or Detecting Patents (Class 382/103)
  • Patent number: 12386059
    Abstract: A method for monitoring surroundings of a first sensor system. The method includes: providing a temporal sequence of data of the first sensor system for monitoring the surroundings; generating an input tensor including the temporal sequence of data of the first sensor system, for a trained neural network; the neural network being configured and trained to identify, on the basis of the input tensor, at least one subregion of the surroundings, in order to improve the monitoring of the surroundings with the aid of a second sensor system; generating a control signal for the second sensor system with the aid of an output signal of the trained neural network, in order to improve the monitoring of the surroundings in the at least one subregion.
    Type: Grant
    Filed: November 3, 2021
    Date of Patent: August 12, 2025
    Assignee: ROBERT BOSCH GMBH
    Inventors: Sebastian Muenzner, Alexandru Paul Condurache, Claudius Glaeser, Fabian Timm, Florian Drews, Florian Faion, Jasmin Ebert, Lars Rosenbaum, Michael Ulrich, Rainer Stal, Thomas Gumpp
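As a toy illustration of the final step (the mapping from network output to control signal is a hypothetical interface, not taken from the patent), the trained network's output can be treated as an attention map over subregions, with the highest-scoring cell steering the second sensor system:

```python
def aim_second_sensor(attention_map):
    """Return a (row, col) pointing command for the second sensor,
    aimed at the highest-scoring cell of the network's output
    attention map (a list of rows of scores)."""
    best_r, best_c = max(
        ((r, c) for r, row in enumerate(attention_map) for c in range(len(row))),
        key=lambda rc: attention_map[rc[0]][rc[1]],
    )
    return {"row": best_r, "col": best_c}
```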
  • Patent number: 12387419
    Abstract: A computer-implemented method includes rendering a plurality of two-dimensional views of a three-dimensional object generated by a generative model, using a two-dimensional landmark regressor to process the plurality of two-dimensional views to generate respective sets of two-dimensional landmarks, fitting a set of three-dimensional landmarks to the respective sets of two-dimensional landmarks. The method includes processing at least a first two-dimensional view of the object using a three-dimensional landmark regressor to determine a candidate set of three-dimensional landmarks for the first two-dimensional view of the object, and updating the three-dimensional landmark regressor based at least in part on a loss function comprising a term that evaluates a deviation between the candidate set of three-dimensional landmarks and the fitted set of three-dimensional landmarks.
    Type: Grant
    Filed: December 13, 2023
    Date of Patent: August 12, 2025
    Assignee: Flawless Holdings Limited
    Inventors: David Ferman, Pablo Garrido, Gaurav Bharaj
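The deviation term in the loss function can be sketched as a mean squared distance between the candidate and fitted landmark sets (a generic formulation; the patent's exact loss may differ):

```python
def landmark_consistency_loss(candidate, fitted):
    """Mean squared deviation between the candidate 3D landmarks
    predicted by the 3D landmark regressor and the landmarks fitted
    from the multi-view 2D landmark sets. Both arguments are lists of
    (x, y, z) points of equal length."""
    n = len(candidate)
    return sum(
        (cx - fx) ** 2 + (cy - fy) ** 2 + (cz - fz) ** 2
        for (cx, cy, cz), (fx, fy, fz) in zip(candidate, fitted)
    ) / n
```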
  • Patent number: 12387447
    Abstract: Methods and systems are disclosed for performing operations comprising: receiving an image that includes a depiction of a face of a user; generating a plurality of landmarks of the face based on the received image; removing a set of interfering landmarks from the plurality of landmarks resulting in a remaining set of landmarks of the plurality of landmarks; obtaining a depth map for the face of the user; and computing a real-world scale of the face of the user based on the depth map and the remaining set of landmarks.
    Type: Grant
    Filed: December 19, 2022
    Date of Patent: August 12, 2025
    Assignee: Snap Inc.
    Inventors: Avihay Assouline, Itamar Berger, Jean Luo, Matan Zohar
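A common way to recover real-world scale from a depth map and 2D landmarks (a generic pinhole-camera sketch, not necessarily the patented computation) is to back-project a pair of landmarks into camera space using their depths and the camera intrinsics `fx, fy, cx, cy`, then measure their metric separation:

```python
import math

def landmark_metric_distance(p1, p2, depth1, depth2, fx, fy, cx, cy):
    """Back-project two 2D landmarks (pixel coordinates) into camera
    space using their depths and pinhole intrinsics, then return their
    Euclidean distance in metric units."""
    x1, y1 = (p1[0] - cx) * depth1 / fx, (p1[1] - cy) * depth1 / fy
    x2, y2 = (p2[0] - cx) * depth2 / fx, (p2[1] - cy) * depth2 / fy
    return math.sqrt((x1 - x2) ** 2 + (y1 - y2) ** 2 + (depth1 - depth2) ** 2)
```

Comparing such a metric distance (e.g. between two eye landmarks) against a canonical anatomical value yields a scale factor for the face.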
  • Patent number: 12386417
    Abstract: Configurations are disclosed for presenting virtual reality and augmented reality experiences to users. An augmented reality display system comprises a handheld component housing an electromagnetic field emitter that emits a known magnetic field; a head mounted component, for which a head pose is known, coupled to one or more electromagnetic sensors that detect the magnetic field emitted by the electromagnetic field emitter housed in the handheld component; and a controller communicatively coupled to the handheld component and the head mounted component, the controller receiving magnetic field data from the handheld component and sensor data from the head mounted component, and determining a hand pose based at least in part on the received magnetic field data and the received sensor data.
    Type: Grant
    Filed: February 14, 2023
    Date of Patent: August 12, 2025
    Assignee: Magic Leap, Inc.
    Inventor: Michael Woods
  • Patent number: 12387503
    Abstract: A method for improving 3D object detection via object-level augmentations is described. The method includes recognizing, using an image recognition model of a differentiable data generation pipeline, an object in an image of a scene. The method also includes generating, using a 3D reconstruction model, a 3D reconstruction of the scene from the image including the recognized object. The method further includes manipulating, using an object level augmentation model, a random property of the object by a random magnitude at an object level to determine a set of properties and a set of magnitudes of an object manipulation that maximizes a loss function of the image recognition model. The method also includes training a downstream task network based on a set of training data generated based on the set of properties and the set of magnitudes of the object manipulation, such that the loss function is minimized.
    Type: Grant
    Filed: October 12, 2022
    Date of Patent: August 12, 2025
    Assignees: TOYOTA RESEARCH INSTITUTE, INC., TOYOTA JIDOSHA KABUSHIKI KAISHA
    Inventors: Rares Andrei Ambrus, Sergey Zakharov, Vitor Guizilini, Adrien David Gaidon
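The search for a loss-maximizing object manipulation can be approximated by random search over properties and magnitudes (the interface `loss_fn(prop, magnitude)` is an assumption made for illustration, not the patent's pipeline):

```python
import random

def find_adversarial_augmentation(loss_fn, properties, n_trials=100, seed=0):
    """Randomly sample (property, magnitude) pairs and keep the one that
    maximizes the recognition model's loss. loss_fn(prop, magnitude)
    returning a float is an assumed interface."""
    rng = random.Random(seed)
    best, best_loss = None, float("-inf")
    for _ in range(n_trials):
        prop = rng.choice(properties)
        magnitude = rng.uniform(-1.0, 1.0)
        loss = loss_fn(prop, magnitude)
        if loss > best_loss:
            best, best_loss = (prop, magnitude), loss
    return best, best_loss
```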
  • Patent number: 12386058
    Abstract: A system of controlling operation of a vehicle includes a lidar unit and a radar unit configured to obtain measured lidar datapoints and a measured radar signal, respectively. A command unit is adapted to receive the measured lidar datapoints and the measured radar signal, the command unit including a processor and tangible, non-transitory memory on which instructions are recorded. The command unit is configured to identify respective objects in the measured lidar datapoints and assign a respective radar reflection intensity to the measured lidar datapoints in the respective objects. A synthetic radar signal is generated based in part on the radar reflection intensity. The command unit is configured to obtain an enhanced radar signal by adjusting the measured radar signal based on the synthetic radar signal.
    Type: Grant
    Filed: October 11, 2022
    Date of Patent: August 12, 2025
    Assignee: GM Global Technology Operations LLC
    Inventors: Oded Bialer, Yuval Haitman
  • Patent number: 12386071
    Abstract: Disclosed herein are systems, methods, and computer program products for operating a sensor system. The methods comprise: receiving, by a computing device, a track for an object; classifying, by the computing device, the track as an infant track or a mature track based on a type of sensor detection used to generate the track, a total number of cycles in which sensor detections were generated, a total number of sensor detections included in the track, an object type associated with the track, an object speed, and/or a distance between the object and the sensor system; and using, by the computing device, radar data to modify a speed of the track in response to the track being classified as an infant track.
    Type: Grant
    Filed: December 12, 2022
    Date of Patent: August 12, 2025
    Assignee: Ford Global Technologies, LLC
    Inventors: Shubhendra Chauhan, Xiufeng Song
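A minimal sketch of the infant/mature classification, with purely illustrative thresholds (the patent lists the factors but this listing gives no specific values):

```python
def classify_track(num_cycles, num_detections, distance_m,
                   min_cycles=3, min_detections=5, max_distance_m=80.0):
    """Classify a track as 'mature' once it has enough sensing history
    and the object is close enough to be well observed; otherwise it is
    an 'infant' track whose speed may be corrected using radar data."""
    if (num_cycles >= min_cycles
            and num_detections >= min_detections
            and distance_m <= max_distance_m):
        return "mature"
    return "infant"
```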
  • Patent number: 12381788
    Abstract: A traffic prediction method includes performing traffic autonomous zone division on a to-be-predicted geographic area based on geographic information data of the geographic area and crowd flow data of the geographic area to obtain a plurality of sub-areas; determining, for any sub-area, a crowd flow motif in the sub-area based on geographic information data of the sub-area and crowd flow data of the sub-area, where the crowd flow motif indicates a multi-point crowd motion pattern in the sub-area; determining a crowd flow feature of the sub-area based on the crowd flow motif; and predicting data traffic of the sub-area based on the crowd flow feature of the sub-area to obtain a data traffic prediction result of the sub-area.
    Type: Grant
    Filed: October 30, 2023
    Date of Patent: August 5, 2025
    Assignee: HUAWEI TECHNOLOGIES CO., LTD.
    Inventors: Yinsheng Zhou, Qian Huang, Liyan Xu, Junxian Mo, Yaoli Wang
  • Patent number: 12380964
    Abstract: Classification of a cancer condition, among a plurality of different cancer conditions, for a species is provided, in which, for each training subject in a plurality of training subjects, there is obtained a cancer condition and a genotypic data construct including genotypic information for the respective training subject. Genotypic constructs are formatted into corresponding vector sets comprising one or more vectors. The vector sets are provided to a network architecture including a convolutional neural network path, comprising at least a first convolutional layer associated with a first filter that comprises a first set of filter weights, and a scorer. Scores, corresponding to the input of vector sets into the network architecture, are obtained from the scorer. Comparison of respective scores to the corresponding cancer condition of the corresponding training subjects is used to adjust the filter weights, thereby training the network architecture to classify cancer condition.
    Type: Grant
    Filed: August 31, 2023
    Date of Patent: August 5, 2025
    Assignee: GRAIL, Inc.
    Inventors: Virgil Nicula, Anton Valouev, Darya Filippova, Matthew H. Larson, M. Cyrus Maher, Monica Portela dos Santos Pimentel, Robert Abe Paine Calef, Collin Melton
  • Patent number: 12380667
    Abstract: A reference state deciding device (10) according to the present disclosure includes a feature calculation unit (11) that calculates an object feature related to a target object of state determination included in a real space and an imaged space feature related to the real space on the basis of past image data of the real space imaged in the past and target image data of the real space imaged at the time of the state determination, and a reference state deciding unit (12) that decides a reference state to be used for the state determination on the basis of a relation between the object feature and the imaged space feature calculated from the past image data and the target image data.
    Type: Grant
    Filed: August 19, 2020
    Date of Patent: August 5, 2025
    Assignee: NEC CORPORATION
    Inventors: Kenta Ishihara, Shoji Nishimura
  • Patent number: 12380706
    Abstract: The invention provides a system for detecting obstacle states and an operating method thereof, comprising an image capturing module, a semantic segmentation module, a feature extraction module, an object detection module, and a distance table calibration module. The invention delivers semantic segmentation information to a model for self-learning, and selects an output at the original image size as a carrier for an attention mechanism.
    Type: Grant
    Filed: May 30, 2023
    Date of Patent: August 5, 2025
    Assignee: NATIONAL YANG MING CHIAO TUNG UNIVERSITY
    Inventors: Jiun-In Guo, Zhe-Lun Hu
  • Patent number: 12380592
    Abstract: A computer-implemented method of determining a pose of each of a plurality of objects includes, for each given object: using image data and associated depth information to estimate a pose of the given object. The method includes iteratively updating the estimated poses by: sampling, for each given object, a plurality of points from a predetermined model of the given object transformed in accordance with the estimated pose of the given object; determining first occupancy data for each given object dependent on positions of the points sampled from the predetermined model, relative to a voxel grid containing the given object; determining second occupancy data for each given object dependent on positions of the points sampled from the predetermined models of the other objects, relative to the voxel grid containing the given object; and updating the estimated poses of the plurality of objects to reduce an occupancy penalty.
    Type: Grant
    Filed: September 13, 2022
    Date of Patent: August 5, 2025
    Assignee: Imperial College Innovations Limited
    Inventors: Kentaro Wada, Edgar Antonio Sucar Escamilla, Stephen Lloyd James, Daniel James Lenton, Andrew Davison
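The occupancy penalty can be sketched by voxelizing the points sampled from each transformed model and counting voxels claimed by both this object and the others (voxel size and the counting scheme are illustrative):

```python
import math

def occupancy_penalty(points_self, points_other, voxel_size=0.05):
    """Voxelize the points sampled from this object's model and from
    the other objects' models; the number of voxels occupied by both
    is the overlap penalized during pose refinement."""
    def voxels(points):
        return {tuple(math.floor(c / voxel_size) for c in p) for p in points}
    return len(voxels(points_self) & voxels(points_other))
```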
  • Patent number: 12380609
    Abstract: An image sensor suitable for use in an augmented reality system to provide low latency image analysis with low power consumption. The augmented reality system can be compact, and may be small enough to be packaged within a wearable device such as a set of goggles or mounted on a frame resembling ordinary eyeglasses. The image sensor may receive information about a region of an imaging array associated with a movable object and selectively output imaging information for that region. The region may be updated dynamically as the image sensor and/or the object moves. Such an image sensor provides a small amount of data from which object information used in rendering an augmented reality scene can be developed. The amount of data may be further reduced by configuring the image sensor to output indications of pixels for which the measured intensity of incident light changes.
    Type: Grant
    Filed: January 3, 2024
    Date of Patent: August 5, 2025
    Assignee: Magic Leap, Inc.
    Inventors: Martin Georg Zahnert, Alexander Ilic, Erik Fonseka
  • Patent number: 12377863
    Abstract: A system and method for future forecasting using action priors that include receiving image data associated with a surrounding environment of an ego vehicle and dynamic data associated with dynamic operation of the ego vehicle. The system and method also include analyzing the image data to classify dynamic objects as agents and to detect and annotate actions that are completed by the agents that are located within the surrounding environment of the ego vehicle and analyzing the dynamic data to process an ego motion history that is associated with the ego vehicle that includes vehicle dynamic parameters during a predetermined period of time. The system and method further include predicting future trajectories of the agents located within the surrounding environment of the ego vehicle and a future ego motion of the ego vehicle within the surrounding environment of the ego vehicle based on the annotated actions.
    Type: Grant
    Filed: November 16, 2022
    Date of Patent: August 5, 2025
    Assignee: Honda Motor Co., Ltd.
    Inventors: Srikanth Malla, Chiho Choi, Behzad Dariush
  • Patent number: 12380668
    Abstract: The present invention relates to a method, a computing device, and a computer-readable medium for providing guide information on the contour information of an object included in an image in crowdsourcing. When a worker who receives an image corresponding to a task through crowdsourcing performs labeling by setting the region of an object included in the image, guide information for the region set by the worker is generated and displayed on the image, thereby guiding the worker to the level of accuracy required when setting the region of the object.
    Type: Grant
    Filed: January 21, 2022
    Date of Patent: August 5, 2025
    Inventor: Munhwi Jeon
  • Patent number: 12380699
    Abstract: An object tracking apparatus, method and computer-readable medium for detecting an object from output information of sensors, tracking the object on a basis of a plurality of detection results, generating tracking information of the object represented in a common coordinate system, outputting the tracking information, and detecting the object on a basis of the tracking information.
    Type: Grant
    Filed: August 24, 2023
    Date of Patent: August 5, 2025
    Assignee: NEC CORPORATION
    Inventors: Ryoma Oami, Hiroyoshi Miyano
  • Patent number: 12380597
    Abstract: The present application is applicable to the field of video processing. Provided are a target tracking method for a panoramic video, a readable storage medium, and a computer device. The method comprises: using a tracker to track and detect a target to be tracked to obtain a predicted tracking position of said target in the next panoramic video frame, calculating the reliability of the predicted tracking position, and using an occlusion detector to calculate an occlusion score of the predicted tracking position; determining whether the reliability of the predicted tracking position is greater than a preset reliability threshold value, and determining whether the occlusion score of the predicted tracking position is greater than a preset occlusion score threshold value; and using a corresponding tracking strategy according to the reliability and the occlusion score.
    Type: Grant
    Filed: January 8, 2021
    Date of Patent: August 5, 2025
    Assignee: ARASHI VISION INC.
    Inventors: Rui Xu, Wenjie Jiang
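The strategy selection described above can be sketched as a two-threshold decision (threshold values and strategy names are illustrative, not from the patent):

```python
def select_tracking_strategy(reliability, occlusion_score,
                             reliability_threshold=0.6,
                             occlusion_threshold=0.5):
    """Pick a tracking strategy from the predicted tracking position's
    reliability and occlusion score."""
    if occlusion_score > occlusion_threshold:
        return "redetect_target"          # likely occluded: fall back to detection
    if reliability > reliability_threshold:
        return "use_predicted_position"   # confident, unoccluded prediction
    return "expand_search_region"         # low confidence: search more widely
```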
  • Patent number: 12382248
    Abstract: Implementations disclosed describe techniques and systems for efficient determination and tracking of trajectories of objects in an environment of a wireless device. The disclosed techniques include, among other things, determining multiple sets of sensing values that characterize one or more radio signals received, during a respective sensing event, from an object in an environment of the wireless device. Multiple likelihood vectors may be obtained using the sensing values and characterizing a likelihood that the object is at a certain distance from the wireless device. A likelihood tensor may be generated, based on the likelihood vectors, that characterizes a likelihood that the object is moving along one of a set of trajectories. The likelihood tensor may be used to determine an estimate of the trajectory of the object.
    Type: Grant
    Filed: April 29, 2022
    Date of Patent: August 5, 2025
    Assignee: Cypress Semiconductor Corporation
    Inventors: Igor Kolych, Kiran Uln
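Under a simple independence assumption (an illustrative simplification of the likelihood-tensor construction), the score of a candidate trajectory is the sum of per-event log-likelihoods of the distance bins it visits:

```python
import math

def trajectory_log_likelihood(likelihood_vectors, trajectory):
    """likelihood_vectors[t][d] is the likelihood that the object is in
    distance bin d at sensing event t; a trajectory is the sequence of
    bins it visits. Summing log-likelihoods scores the trajectory."""
    return sum(math.log(likelihood_vectors[t][d])
               for t, d in enumerate(trajectory))

def best_trajectory(likelihood_vectors, candidate_trajectories):
    """Return the candidate trajectory with the highest score."""
    return max(candidate_trajectories,
               key=lambda tr: trajectory_log_likelihood(likelihood_vectors, tr))
```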
  • Patent number: 12380687
    Abstract: This application provides example object detection methods and apparatuses. This application relates to the field of artificial intelligence, and specifically, to the field of computer vision. One example method includes obtaining a to-be-detected image and performing convolution processing on the to-be-detected image to obtain an initial image feature of a to-be-detected object in the to-be-detected image. Based on knowledge graph information, an enhanced image feature of the to-be-detected object is determined. A candidate frame and a classification of the to-be-detected object is determined based on the initial image feature and the enhanced image feature of the to-be-detected object. The enhanced image feature indicates semantic information of a different object category corresponding to another object associated with the to-be-detected object.
    Type: Grant
    Filed: December 16, 2021
    Date of Patent: August 5, 2025
    Assignee: Huawei Technologies Co., Ltd.
    Inventors: Hang Xu, Zhenguo Li
  • Patent number: 12374216
    Abstract: The invention relates to mobile surveillance and, more particularly, to systems and methods for policing traffic violations, such as illegal cell phone use while driving. The surveillance system may include a mobile unit and an imaging device configured to monitor, detect, and record vehicles moving in the same direction or in an opposite direction of the mobile unit. The imaging device may capture license plate information and other data, such as characteristics corresponding to the driver of the vehicle. Advantageously, the surveillance system can record, analyze, detect, predict, and/or communicate moving violations effectively and efficiently.
    Type: Grant
    Filed: December 21, 2023
    Date of Patent: July 29, 2025
    Assignee: VIG Vehicle Intelligence Group LLC
    Inventor: Greg Horn
  • Patent number: 12371010
    Abstract: Disclosed herein are system, method, and computer program product aspects for enabling an autonomous vehicle (AV) to react to objects posing a risk to the AV. The system can monitor an object within a vicinity of the AV. A plurality of trajectories predicting paths the object will take can be generated, the plurality of trajectories being based on a plurality of inputs indicating current and past characteristics of the object. Using a learned model, a forecasted position of the object at an instance in time can be generated. An error value representing how accurate the forecasted position is versus an observed position of the object can be stored. Error values can be accumulated over a period of time. A risk factor can be assigned for the object based on the accumulated error values. A maneuver for the AV can be performed based on the risk factor.
    Type: Grant
    Filed: December 28, 2021
    Date of Patent: July 29, 2025
    Assignee: Ford Global Technologies, LLC
    Inventors: John Lepird, Pragati Satpute, Ramadev Burigsay Hukkeri
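The error-accumulation step can be sketched as a sliding-window reduction of forecast errors to a scalar risk factor (averaging and the window size are illustrative choices):

```python
def risk_factor(forecast_errors, window=10):
    """Accumulate forecast-vs-observed position errors over a sliding
    window and reduce them to a scalar risk factor; higher values mean
    the object behaves less predictably."""
    recent = forecast_errors[-window:]
    return sum(recent) / len(recent) if recent else 0.0
```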
  • Patent number: 12375342
    Abstract: Monitoring systems and methods for use in security, safety, and business process applications utilizing a correlation engine are disclosed. Sensory data from one or more sensors are captured and analyzed to detect one or more events in the sensory data. The events are correlated by a correlation engine, optionally by weighing the events based on attributes of the sensors that were used to detect the primitive events. The events are then monitored for an occurrence of one or more correlations of interest, or one or more critical events of interest. Finally, one or more actions are triggered based on a detection of one or more correlations of interest, one or more anomalous events, or one or more critical events of interest. A hierarchical storage manager, having access to a hierarchy of two or more data storage devices, is provided to store data from the one or more sensors.
    Type: Grant
    Filed: March 12, 2024
    Date of Patent: July 29, 2025
    Assignee: SecureNet Solutions Group LLC
    Inventors: John J Donovan, Daniar Hussain
  • Patent number: 12374156
    Abstract: An accessibility determination device may include an imaging device that captures images of the surrounding environment of a user at time intervals to acquire a plurality of captured images, and a facial organ detector that analyzes the captured images to detect a facial organ of a person. The device makes a notification to the user when the accessibility determined by a processor satisfies a predetermined criterion and a target detector detects that the person is a predetermined target person. The processor may change the accessibility or the predetermined criterion depending on the distance between the imaging device and the person.
    Type: Grant
    Filed: February 10, 2021
    Date of Patent: July 29, 2025
    Assignee: OMRON CORPORATION
    Inventors: Tomohiro Yabuuchi, Kazuo Yamamoto, Naoto Iwamoto, Endri Rama
  • Patent number: 12374127
    Abstract: Systems and methods to enhance vehicle object detection capability are provided. The vehicle may include a first sensor coupled with a body of the vehicle, the first sensor having a first field of view and the first sensor comprising a polarizer. The vehicle may include a second sensor coupled with the body of the vehicle, the second sensor having a second field of view. The first field of view and the second field of view can at least partially overlap. The vehicle may include a processor coupled with memory. The processor can receive a first image captured by the first sensor and a second image captured by the second sensor. The processor can determine a luminance of light ratio associated with the first image and the second image, and can modify an image processing technique.
    Type: Grant
    Filed: December 28, 2022
    Date of Patent: July 29, 2025
    Assignee: Rivian IP Holdings, LLC
    Inventors: Elaine Wenying Jin, Vidya Rajagopalan
  • Patent number: 12373988
    Abstract: The present disclosure relates to systems, non-transitory computer-readable media, and methods for detecting changes to a point of interest between a selected version and a previous version of a digital image and providing a summary of the changes to the point of interest. For example, the disclosed system provides for display a selected version of a digital image and detects a point of interest within the selected version of the digital image. The disclosed system determines image modifications to the point of interest (e.g., tracks changes to the point of interest) to generate a summary of the image modifications. Moreover, the summary can indicate further information concerning image modifications applied to the selected point of interest, such as timestamp, editor, or author information.
    Type: Grant
    Filed: May 3, 2022
    Date of Patent: July 29, 2025
    Assignee: Adobe Inc.
    Inventors: Amol Jindal, Ajay Bedi
  • Patent number: 12369568
    Abstract: An automated temperature screening system and process for monitoring the health and wellbeing of animals, including livestock, domesticated animals, and wildlife, are disclosed. The system and process provide a stand-alone automated temperature screening solution for the health and wellbeing of livestock, wildlife, and domesticated animals.
    Type: Grant
    Filed: February 2, 2022
    Date of Patent: July 29, 2025
    Inventor: Jarret Mason New
  • Patent number: 12373979
    Abstract: A method for determining the orientation of a physical object in space which determines the orientation of the object from the images captured by a set of cameras distributed around an image capture space. The method includes a set of steps which comprise: capturing multiple images of the physical object at the same time instant, machine learning modules determining the estimate of the orientation of the physical object for each of the cameras using the captured images, and a central processing unit determining the orientation of the physical object with respect to a global reference by combining the different estimated orientations.
    Type: Grant
    Filed: April 18, 2022
    Date of Patent: July 29, 2025
    Assignee: INSTITUTO TECNOLÓGICO DE INFORMÁTICA (ITI)
    Inventors: Juan Carlos Pérez Cortés, Omar Del Tejo Catalá, Javier Pérez Soler, Jose Luis Guardiola García, Alberto J. Pérez Jiménez, David Millán Escrivá
  • Patent number: 12373971
    Abstract: Systems and methods for tracking inventory items in an area of real space are disclosed. The method includes receiving a signal generated in dependence on sensors. The signal indicates a change to a portion of an image of an area of real space. The method includes, in response to receiving the signal, implementing a trained location detection model to determine, based on inputs, whether an inventory item identified in the portion of the image has changed a position in the area of real space. The method includes implementing a trained item classification model to determine a classification of the inventory item. The method includes updating an inventory database with inventory item data determined in dependence on the classification of the inventory item to provide an updated map of the area of real space as a result of the received signal indicating the change to the portion of the image.
    Type: Grant
    Filed: September 8, 2022
    Date of Patent: July 29, 2025
    Assignee: STANDARD COGNITION, CORP.
    Inventors: Jordan Fisher, Nairwita Mazumder, Arpan Ghosh, Mohit Deep Singh, David Woollard, Michael Huang
  • Patent number: 12374161
    Abstract: This application relates to the field of electronic technologies, and provides an action recognition method and apparatus, a terminal device, and a motion monitoring system. Characteristic extraction and action recognition are performed based on motion data collected by data collection apparatuses; a gait characteristic, a swing gesture characteristic, and an image action characteristic of a user are recognized by using a plurality of pieces of motion data; and a type of a hitting action of a player is determined based on the gait characteristic, the swing gesture characteristic, and the image action characteristic. In this way, the hitting action of the player in a motion process can be accurately recognized. This is conducive to performing comprehensive analysis on a comprehensive sports capability of the player, and is more helpful to formulating a personalized training plan for the player.
    Type: Grant
    Filed: September 14, 2021
    Date of Patent: July 29, 2025
    Assignee: Honor Device Co., Ltd.
    Inventors: Teng Xu, Xiaohan Chen
  • Patent number: 12374162
    Abstract: Methods, systems, and apparatus, including computer programs encoded on computer-storage media, for visual authentication. In some implementations, a method may include obtaining images of a person at a property; detecting a discontinuity in an appearance of the person in the images of the person at the property; determining that the discontinuity does not correspond to a known occlusion; and providing an indication of a potential spoofing attack.
    Type: Grant
    Filed: April 9, 2024
    Date of Patent: July 29, 2025
    Assignee: Alarm.com Incorporated
    Inventors: Stephen Scott Trundle, Daniel Todd Kerzner, Allison Beach, Babak Rezvani, Donald Gerard Madden
  • Patent number: 12367608
    Abstract: A method for estimating a gaze direction of a user includes acquiring an image of a face of the user, determining an approximate gaze direction based on a current head pose and a relationship between head pose and gaze direction, determining an estimated gaze direction based on detected eye features, determining a precise gaze direction based on glint position and eye features, and combining the approximate gaze direction and at least one of the estimated gaze direction and the precise gaze direction to provide a corrected gaze direction.
    Type: Grant
    Filed: September 3, 2020
    Date of Patent: July 22, 2025
    Assignee: Smart Eye AB
    Inventor: Torsten Wilhelm
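Combining the gaze estimates can be sketched as a confidence-weighted average of unit direction vectors, renormalized to unit length (the weighting scheme is an assumption, not the patented combination rule):

```python
import math

def combine_gaze(directions, weights):
    """Weighted average of unit gaze vectors (x, y, z), renormalized to
    unit length; weights are illustrative confidences for, e.g., the
    approximate, estimated, and precise gaze directions."""
    total = [0.0, 0.0, 0.0]
    for d, w in zip(directions, weights):
        for i in range(3):
            total[i] += w * d[i]
    norm = math.sqrt(sum(c * c for c in total))
    return tuple(c / norm for c in total)
```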
  • Patent number: 12367592
    Abstract: Methods, computer systems, and apparatus, including computer programs encoded on computer storage media, for generating label data for one or more target objects in an environment. The system obtains first data characterizing the environment, wherein the first data includes position data characterizing a position of the target object. The system obtains second data including one or more three-dimensional (3D) frames characterizing the environment. The system determines, based on the first data, a guide feature for locating the target object in the 3D frames of the second data. The system receives a first user input that specifies at least an object position in the selected 3D frame, and generates label data for the target object based on the first user input.
    Type: Grant
    Filed: May 19, 2022
    Date of Patent: July 22, 2025
    Assignee: Waymo LLC
    Inventors: Maya Kabkab, Yulai Shen, Congyu Gao, Sakshi Madan
  • Patent number: 12367595
    Abstract: A processor-implemented method with virtual object rendering includes: determining a plurality of predictive trajectories of a first object according to a Gaussian random path based on a high-level model that is trained by hierarchical reinforcement learning; determining direction information of a second object according to subgoals corresponding to the predictive trajectories based on a low-level model that is trained by hierarchical reinforcement learning; determining direction information of the second object according to a subgoal corresponding to one of the predictive trajectories based on an actual trajectory of the first object; and rendering the second object, which is a virtual object, based on the determined direction information.
    Type: Grant
    Filed: May 9, 2022
    Date of Patent: July 22, 2025
    Assignees: Samsung Electronics Co., Ltd., Korea University Research and Business Foundation
    Inventors: Joongheon Kim, SooHyun Park, Won Joon Yun, Youn Kyu Lee, Soyi Jung
  • Patent number: 12365344
    Abstract: In various examples, systems and methods are disclosed that detect hazards on a roadway by identifying discontinuities between pixels on a depth map. For example, two synchronized stereo cameras mounted on an ego-machine may generate images that may be used to extract depth or disparity information. Because a hazard's height may cause an occlusion of the driving surface behind the hazard from a perspective of a camera(s), a discontinuity in disparity values may indicate the presence of a hazard. For example, the system may analyze pairs of pixels on the depth map and, when the system determines that a disparity between a pair of pixels satisfies a disparity threshold, the system may identify the pixel nearest the ego-machine as a hazard pixel.
    Type: Grant
    Filed: October 31, 2023
    Date of Patent: July 22, 2025
    Assignee: NVIDIA Corporation
    Inventors: Minwoo Park, Yue Wu, Cheng-Chieh Yang
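The disparity-discontinuity idea in the abstract above can be sketched as follows: because a raised hazard occludes the road surface behind it, disparity jumps sharply between a hazard pixel and the road pixel above it in the image. The vertical (row-wise) pairing and the threshold value are assumptions for illustration, not the patented implementation.

```python
import numpy as np

def find_hazard_pixels(disparity, threshold=4.0):
    """Flag pixels where the disparity jumps relative to the pixel
    one row above (assumed pairing). Disparity is larger for nearer
    points, so a large positive jump marks the nearer pixel of the
    pair as a candidate hazard pixel."""
    diff = disparity[1:, :] - disparity[:-1, :]
    hazard = np.zeros(disparity.shape, dtype=bool)
    hazard[1:, :] = diff > threshold  # lower pixel is nearer the ego-machine
    return hazard

# Toy depth map: flat road (disparity 0) with a near obstacle below.
dmap = np.zeros((6, 4))
dmap[4:, :] = 10.0
mask = find_hazard_pixels(dmap, threshold=4.0)
```

Only the first obstacle row is flagged, since the discontinuity occurs at the road/hazard boundary rather than within the obstacle itself.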
  • Patent number: 12367636
    Abstract: A method for virtual object pose rendering includes receiving a digital image of a real-world object in a real-world environment. An imaged pose of the real-world object is estimated based at least in part on a neural radiance model encoding 3D (three-dimensional) spatial data for the real-world object, the neural radiance model derived from a plurality of previously captured digital images of the real-world object. A view of a virtual object corresponding to the real-world object is rendered based at least in part on the neural radiance model, such that the virtual object has a virtual pose consistent with the imaged pose of the real-world object in the digital image.
    Type: Grant
    Filed: January 12, 2023
    Date of Patent: July 22, 2025
    Assignee: Microsoft Technology Licensing, LLC
    Inventors: Ishani Chakraborty, Jonathan Carl Hanzelka, Vibhav Vineet, Pedro Urbina Escos, Vicente A. Rivera, Gavin Jancke
  • Patent number: 12367609
    Abstract: Provided is an information processing apparatus which includes a line-of-sight estimator, a correction amount calculator, and a registration determination section. The line-of-sight estimator calculates an estimation vector obtained by estimating a direction of a line of sight of a user. The correction amount calculator calculates a correction amount related to the estimation vector on the basis of at least one object that is within a specified angular range that is set using the estimation vector as a reference. The registration determination section determines whether to register, in a data store, calibration data in which the estimation vector and the correction amount are associated with each other, on the basis of a parameter related to the at least one object within the specified angular range.
    Type: Grant
    Filed: December 24, 2020
    Date of Patent: July 22, 2025
    Assignee: SONY GROUP CORPORATION
    Inventors: Takuro Noda, Ryouhei Yasuda
  • Patent number: 12367600
    Abstract: Depth images acquired during a scan of one or more portions of a user's body may be used to determine a full-body three-dimensional (3D) representation of the user. Depth imaging data may be used to determine the full-body shape and contours of the user when rotating in front of a user device. Machine learning methods for pose detection may be used to determine the portion of the user's body being scanned, as well as the position and pose of the user's body. A point cloud alignment process that leverages knowledge of user behavior during the scan is used to determine parameters for a full-body representation of a user, which may be based on a plurality of partial scans of the user's body. A full-body representation of a user may be output and/or displayed along with accurate measurements determined for various portions of the user's body.
    Type: Grant
    Filed: May 9, 2022
    Date of Patent: July 22, 2025
    Assignee: Tam Technologies, Inc.
    Inventors: Margaret H. Peterson, Young Ha Yoo
  • Patent number: 12367648
    Abstract: An offloading schedule for offloading image frames that are to be generated by an AR device in a subsequent time period is determined, the offloading schedule identifying certain image frames generated in the subsequent time period that are to be offloaded and certain image frames generated in the subsequent time period that are not to be offloaded, the offloading schedule being selected from a plurality of offloading schedules based on a tracking stride of the offloading schedule. The AR device sends to a computing device at least some of the image frames generated in the subsequent time period in accordance with the offloading schedule.
    Type: Grant
    Filed: May 5, 2023
    Date of Patent: July 22, 2025
    Assignee: Charter Communications Operating, LLC
    Inventors: Dell Lawrence Wolfensparger, Viviane Espinoza McLandrich, Christopher Glen Turner, Yu Charlie Hu, Zhaoning Kong, Pranab Dash
  • Patent number: 12365334
    Abstract: Based on a recognition result of a recognition sensor installed in a mobile body, a mobile body control system calculates a first target path for moving to a destination while avoiding a risk around the mobile body. Further, based on the first target path, the mobile body control system calculates a second target path having higher granularity than the first target path. Then, the mobile body control system controls the mobile body so as to follow the second target path. The mobile body control system determines in which field the mobile body moves, a normal field or a specific field having more risks than the normal field. When the mobile body moves in the specific field, the mobile body control system reduces a frequency of update of the first target path compared with that when the mobile body moves in the normal field.
    Type: Grant
    Filed: January 9, 2023
    Date of Patent: July 22, 2025
    Assignees: TOYOTA JIDOSHA KABUSHIKI KAISHA, NATIONAL UNIVERSITY CORPORATION TOKYO UNIVERSITY OF AGRICULTURE AND TECHNOLOGY
    Inventors: Hidenari Iwai, Shintaro Inoue, Pongsathorn Raksincharoensak
  • Patent number: 12367586
    Abstract: The present disclosure relates to systems, methods, and non-transitory computer readable media for panoptically guiding digital image inpainting utilizing a panoptic inpainting neural network. In some embodiments, the disclosed systems utilize a panoptic inpainting neural network to generate an inpainted digital image according to a panoptic segmentation map that defines pixel regions corresponding to different panoptic labels. In some cases, the disclosed systems train a neural network utilizing a semantic discriminator that facilitates generation of digital images that are realistic while also conforming to a semantic segmentation. The disclosed systems generate and provide a panoptic inpainting interface to facilitate user interaction for inpainting digital images. In certain embodiments, the disclosed systems iteratively update an inpainted digital image based on changes to a panoptic segmentation map.
    Type: Grant
    Filed: October 3, 2022
    Date of Patent: July 22, 2025
    Assignee: Adobe Inc.
    Inventors: Zhe Lin, Haitian Zheng, Elya Shechtman, Jianming Zhang, Jingwan Lu, Ning Xu, Qing Liu, Scott Cohen, Sohrab Amirghodsi
  • Patent number: 12367233
    Abstract: One or more three-dimensional models that correspond to at least one two-dimensional image are determined by receiving image data that corresponds to the at least one two-dimensional image, generating features based on the image data corresponding to the at least one two-dimensional image, generating a representation vector for the at least one two-dimensional image by transforming the features into a predetermined amount of numerical representations corresponding to the features, and outputting the representation vector for the at least one two-dimensional image to facilitate a search query for the one or more three-dimensional models associated with the at least one two-dimensional image.
    Type: Grant
    Filed: August 11, 2023
    Date of Patent: July 22, 2025
    Assignee: Trimble Inc.
    Inventors: Robert Banfield, Aristodimos Komninos, Jacques Harvent, Michael Tadros, Karolina Torttila
  • Patent number: 12367677
    Abstract: Embodiments are disclosed for real-time event detection using edge and cloud AI. An event monitoring system can receive live video data from one or more video capture devices at a surveillance location. A first machine learning model identifies a first portion of the live video data as depicting an event. The first portion of the live video data is provided to a second machine learning model. The second machine learning model identifies the first portion of the live video data as depicting the event. An event notification corresponding to the event is then sent to a user device.
    Type: Grant
    Filed: January 16, 2025
    Date of Patent: July 22, 2025
    Assignee: Coram AI, Inc.
    Inventors: Peter Ondruska, Ashesh Jain, Balazs Kovacs, Luca Bergamini
  • Patent number: 12367566
    Abstract: One or more systems, devices, computer program products, and/or computer-implemented methods provided herein relate to accurate anomaly detection in images using patched features. According to an embodiment, an extraction component can extract multiple layers of features from one or more patches of an image using a pretrained convolutional neural network (CNN). A feature mapping component can concatenate the features from the multiple layers to generate a tensor feature map comprising a one-dimensional feature vector for respective patches. A cropping component can perform center cropping on the tensor feature map. A calculation component can calculate a distance to a feature distribution mean for respective patches.
    Type: Grant
    Filed: October 11, 2022
    Date of Patent: July 22, 2025
    Assignee: INTERNATIONAL BUSINESS MACHINES CORPORATION
    Inventors: Haoxiang Qiu, Tadanobu Inoue, Takayuki Katsuki, Ryuki Tachibana
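The patched-feature scoring described in the abstract above can be sketched with a simple stand-in: per-patch feature vectors (here random arrays in place of concatenated CNN layer activations) are scored by their distance to the feature distribution mean of normal patches. The Euclidean distance and the random feature dimensions are assumptions for illustration, not the patented pipeline.

```python
import numpy as np

rng = np.random.default_rng(0)

# "Normal" reference patches: 100 patches with 64-dim concatenated
# feature vectors (stand-ins for multi-layer CNN features per patch).
normal_feats = rng.normal(0.0, 1.0, size=(100, 64))
feat_mean = normal_feats.mean(axis=0)

def anomaly_score(patch_feat, mean):
    """Distance of a patch's feature vector to the distribution mean;
    larger distances indicate likelier anomalies."""
    return float(np.linalg.norm(patch_feat - mean))

normal_patch = rng.normal(0.0, 1.0, size=64)
anomalous_patch = rng.normal(5.0, 1.0, size=64)  # shifted distribution
```

An anomalous patch drawn from a shifted distribution scores far from the mean, while a normal patch scores close to it, which is the core of the per-patch detection idea.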
  • Patent number: 12366924
    Abstract: This application is directed to a method for controlling user experience (UX) operations on an electronic device that executes an application. A touchless UX operation associated with the application has an initiation condition including at least detection of a presence and a gesture in a required proximity range with a required confidence level. The electronic device then determines from a first sensor signal the proximity of the presence with respect to the electronic device. In accordance with a determination that the determined proximity is in the required proximity range, the electronic device determines from a second sensor signal a gesture associated with the proximity of the presence and an associated confidence level of the determination of the gesture. In accordance with a determination that the determined gesture and associated confidence level satisfy the initiation condition, the electronic device initializes the touchless UX operation associated with the application.
    Type: Grant
    Filed: January 2, 2024
    Date of Patent: July 22, 2025
    Assignee: Google LLC
    Inventors: Ashton Udall, Andrew Christopher Felch, James Paul Tobin
  • Patent number: 12361099
    Abstract: New software systems and capabilities that facilitate the rapid development, evaluation, and deployment of advanced inspection and detection systems using high-fidelity synthetic imagery. The present invention generates high-fidelity synthetic imagery for detection systems that analyze data across the electromagnetic spectrum in an automated, random, directed, or semi-directed manner.
    Type: Grant
    Filed: February 6, 2024
    Date of Patent: July 15, 2025
    Assignee: Cignal LLC
    Inventor: Eric M. Fiterman
  • Patent number: 12361565
    Abstract: Systems, methods, and computer-readable storage devices are disclosed for improving markerless motion analysis. One method including: receiving position data of joint centers of a body in motion captured by at least one camera; enhancing, using model equations, three-dimensional (3D) angular kinematic data of the position data of the joint centers of the body, wherein the enhanced 3D angular kinematic data includes increased measurement accuracy of the position data of the joint centers of the body; and providing the enhanced 3D angular kinematic data for display to evaluate motion performance.
    Type: Grant
    Filed: February 17, 2022
    Date of Patent: July 15, 2025
    Assignee: GOLFTEC Enterprises, LLC
    Inventors: Michael Decker, Craig Simons
  • Patent number: 12361652
    Abstract: Among other things, embodiments of the present disclosure improve the functionality of computer imaging software and systems by facilitating the manipulation of virtual content displayed in conjunction with images of real-world objects and environments. Embodiments of the present disclosure allow different virtual objects to be moved onto different physical surfaces, as well as manipulated in other ways.
    Type: Grant
    Filed: January 4, 2023
    Date of Patent: July 15, 2025
    Assignee: Snap Inc.
    Inventors: Ozi Egri, David Ben Ezra, Andrew James McPhee, Qi Pan, Eyal Zak
  • Patent number: 12361723
    Abstract: An object recognition device to be mounted to a vehicle includes a cluster point group formation unit and an object recognition unit. The cluster point group formation unit is configured to form a cluster point group by executing clustering for a plurality of reflecting points. The object recognition unit recognizes the cluster point group as an object. The object recognition unit is configured to, in response to determining that an outer shape of an upper surface that is an outer shape of the recognized object viewed from the top has a concave section when viewed from a vehicle side, divide the cluster point group into a plurality of quasi-cluster point groups based on a position of the concave section and recognizes each of the plurality of quasi-cluster point groups as an object.
    Type: Grant
    Filed: September 2, 2022
    Date of Patent: July 15, 2025
    Assignee: DENSO CORPORATION
    Inventor: Masanari Takaki
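The cluster division described in the abstract above can be sketched for the top view: once a concave section of the recognized object's outline has been located, the cluster point group is divided at its position into quasi-cluster point groups. Detecting the concavity itself is omitted here, and splitting on a single x-coordinate is an illustrative assumption, not the patented method.

```python
import numpy as np

def split_at_concavity(points, concavity_x):
    """Divide a top-view cluster point group into two quasi-cluster
    point groups at the x-position of a detected concave section
    (concavity detection is assumed to have happened upstream)."""
    points = np.asarray(points, dtype=float)
    left = points[points[:, 0] < concavity_x]
    right = points[points[:, 0] >= concavity_x]
    return left, right

# Toy top-view reflecting points with a gap (concavity) near x = 1.0.
pts = [[0.0, 1.0], [0.5, 1.1], [2.0, 1.0], [2.5, 0.9]]
left, right = split_at_concavity(pts, 1.0)
```

Each resulting quasi-cluster would then be recognized as a separate object, as the abstract describes.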
  • Patent number: 12359939
    Abstract: A method, apparatus and computer program product are provided for learning to generate maps from raw geospatial observations from sensors traveling within an environment.
    Type: Grant
    Filed: March 1, 2022
    Date of Patent: July 15, 2025
    Assignee: HERE GLOBAL B.V.
    Inventor: Tero Juhani Keski-Valkama
  • Patent number: 12361730
    Abstract: Various embodiments of the teachings herein include a method for the digital capture of spaces of a building. In some embodiments, the method includes: scanning a corresponding space in the building by a scanning apparatus; capturing the corresponding space in a digital point cloud and/or by an image capture; performing an object recognition, based on the digital point cloud and/or the image capture, using artificial intelligence; mapping, after the object recognition is performed, the digital point cloud and/or the image capture in a digital building model; and in the case of the capture of defined objects in the building, capturing the respective defined object in a dedicated manner by the scanning apparatus. Attributes are allocated to the respective defined object by a voice input.
    Type: Grant
    Filed: April 9, 2021
    Date of Patent: July 15, 2025
    Assignee: SIEMENS SCHWEIZ AG
    Inventor: Christian Frey