Target Tracking Or Detecting Patents (Class 382/103)
-
Patent number: 12293264
Abstract: An information processing system that carries out a specified processing based on a learning model comprises: a processor; a programmable logic device that rewrites logic data and reconstitutes a circuit; a machine learning processing unit that carries out machine learning and generates a new learning model for the specified processing; a convertor that converts the new learning model into the logic data that is operable in the programmable logic device; and a controller that enables the processor to carry out the specified processing based on the new learning model during the time the new learning model is being converted into the logic data by the convertor.
Type: Grant
Filed: August 2, 2021
Date of Patent: May 6, 2025
Assignee: KONICA MINOLTA, INC.
Inventor: Yuji Okamoto
-
Patent number: 12293589
Abstract: Systems and methods for controlling a vehicle. The systems and methods receive image data from at least one camera of the vehicle, detect a 2D measurement location of a static object in an image plane of the camera using the image data, receive an input vector of measurements from a sensor system of the vehicle, predict a 3D location of the static object using an Unscented Kalman Filter (UKF) that incorporates a motion model for the vehicle, the 2D measurement location of the static object, and the input vector, and control at least one vehicle feature based on the predicted 3D location of the static object.
Type: Grant
Filed: August 4, 2022
Date of Patent: May 6, 2025
Assignee: GM GLOBAL TECHNOLOGY OPERATIONS LLC
Inventors: Yonattan Menaker, Sharon Ayzik, Ido Amit, Max Fomin
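The core of this entry is the unscented transform that a UKF uses to push a Gaussian state estimate through a nonlinear motion model. The sketch below is a minimal, generic implementation of that transform with numpy; the function names, weight parameters, and the linear test model are illustrative assumptions, not taken from the patent.

```python
import numpy as np

def merwe_sigma_points(mean, cov, alpha=0.1, beta=2.0, kappa=0.0):
    """Generate Merwe scaled sigma points and weights for an n-dim Gaussian."""
    n = len(mean)
    lam = alpha**2 * (n + kappa) - n
    S = np.linalg.cholesky((n + lam) * cov)  # columns are scaled principal axes
    pts = [mean] + [mean + S[:, i] for i in range(n)] + [mean - S[:, i] for i in range(n)]
    wm = np.full(2 * n + 1, 1.0 / (2 * (n + lam)))
    wc = wm.copy()
    wm[0] = lam / (n + lam)
    wc[0] = wm[0] + (1.0 - alpha**2 + beta)
    return np.array(pts), wm, wc

def unscented_predict(mean, cov, f, Q):
    """UKF prediction step: propagate sigma points through motion model f,
    then re-estimate mean and covariance (plus process noise Q)."""
    pts, wm, wc = merwe_sigma_points(mean, cov)
    fp = np.array([f(p) for p in pts])
    new_mean = wm @ fp
    diff = fp - new_mean
    new_cov = (wc[:, None] * diff).T @ diff + Q
    return new_mean, new_cov
```

For a linear motion model the unscented transform is exact, which makes it easy to sanity-check: a pure translation shifts the mean and adds only the process noise to the covariance.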
-
Patent number: 12292509
Abstract: An embodiment method for classifying objects around a vehicle includes generating a dynamic occupancy grid map including a plurality of cells including point data corresponding to each of a plurality of objects located around the vehicle and velocity vector information of the point data, based on LiDAR data received from a LiDAR sensor of the vehicle and information related to movement of the vehicle, determining a cluster corresponding to each of the plurality of objects on the dynamic occupancy grid map using a clustering technique, and classifying an object corresponding to the cluster into a static object or a dynamic object, based on a region size of the cluster and velocity vector information included in cells belonging to the cluster.
Type: Grant
Filed: July 25, 2022
Date of Patent: May 6, 2025
Assignees: Hyundai Motor Company, Kia Corporation, Kookmin University Industry Academy Cooperation Foundation
Inventors: Se Jong Heo, Kyung Jae Ahn, Yun Jung Kim, Yeon Sik Kang, Ju Yeon Cho
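The final classification step described above — static vs. dynamic from a cluster's region size and its cells' velocity vectors — can be sketched as a simple rule. The thresholds, cell size, and function name below are hypothetical placeholders; the patent's actual decision criteria are not given in the abstract.

```python
import numpy as np

# Illustrative thresholds, not from the patent.
MIN_DYNAMIC_SPEED = 0.5   # m/s: mean cell speed above this suggests motion
MAX_DYNAMIC_AREA = 30.0   # m^2: very large clusters are treated as static structure

def classify_cluster(cell_velocities, cell_area=0.04):
    """cell_velocities: list of (vx, vy) vectors for the grid cells of one cluster.
    Returns 'dynamic' or 'static' from region size and velocity information."""
    v = np.asarray(cell_velocities, dtype=float)
    region_size = len(cell_velocities) * cell_area          # area covered by the cluster
    mean_speed = np.linalg.norm(v, axis=1).mean()           # average cell speed
    if mean_speed > MIN_DYNAMIC_SPEED and region_size < MAX_DYNAMIC_AREA:
        return "dynamic"
    return "static"
```

A moving pedestrian-sized cluster would come out "dynamic", while a stationary wall-sized cluster with near-zero cell velocities comes out "static".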
-
Patent number: 12293561
Abstract: Embodiments herein disclose methods and devices for waste management using an artificial intelligence based waste object categorizing engine. The method includes acquiring at least one image and detecting at least one waste object from the at least one acquired image. Additionally, the method determines that the at least one detected waste object matches a pre-stored waste object and identifies a type of the detected waste object using the pre-stored waste object. Furthermore, the method includes displaying the type of the detected waste object based on the identification.
Type: Grant
Filed: May 3, 2022
Date of Patent: May 6, 2025
Inventors: Wolfgang Decker, Maithreya Chakravarthula, Cristopher Luce, Christopher Heney, Clifton Luce
-
Patent number: 12290939
Abstract: The invention relates to a sensor arrangement and to a method for safeguarding a monitored zone at a machine.
Type: Grant
Filed: August 2, 2022
Date of Patent: May 6, 2025
Assignee: SICK AG
Inventors: Markus Hammes, Nicolas Löffler
-
Patent number: 12293580
Abstract: Systems and methods for displaying augmented reality content relative to an identified object are disclosed. The systems and methods can include a set of operations. An operation can include providing a first image. An operation can include determining interest points in the first image. An operation can include identifying an object in the first image. An operation can include identifying augmented reality content associated with the object. An operation can include determining a first transformation for displaying the augmented reality content in the first image relative to the identified object. An operation can include providing the interest points and the first transformation. An operation can include determining a second transformation for displaying the augmented reality content in a second image relative to the identified object using, at least in part, the first transformation and the interest points. An operation can include displaying, by a user device, the augmented reality content.
Type: Grant
Filed: November 15, 2023
Date of Patent: May 6, 2025
Assignee: Geenee GmbH
Inventors: Alexander Goldberg, Davide Mameli, Matthias Emanuel Thömmes, Andrii Tkachuk
-
Patent number: 12293029
Abstract: Head-mounted augmented reality (AR) devices can track pose of a wearer's head to provide a three-dimensional virtual representation of objects in the wearer's environment. An electromagnetic (EM) tracking system can track head or body pose. A handheld user input device can include an EM emitter that generates an EM field, and the head-mounted AR device can include an EM sensor that senses the EM field (e.g., for determining head pose). The generated EM field may be distorted due to nearby electrical conductors or ferromagnetic materials, which may lead to error in the determined pose. Systems and methods are disclosed that measure the degree of EM distortion, as well as correct for the EM distortion. The EM distortion correction may be performed in real time by the EM tracking system without the need for additional data from imaging cameras or other sensors.
Type: Grant
Filed: October 19, 2021
Date of Patent: May 6, 2025
Assignee: Magic Leap, Inc.
Inventor: Richmond B. Chan
-
Patent number: 12292298
Abstract: A device is provided to extract scan line data from images of a route, said scan line data comprising only a portion of an image that extends along a scan line, and to store this data in memory together with respective positional data pertaining to the position of the device traveling along the route in a navigable network. This scan line and positional data can be obtained from a plurality of devices in a network and transmitted to computing apparatus, for example, for use in the production of a realistic view of a digital map.
Type: Grant
Filed: August 24, 2023
Date of Patent: May 6, 2025
Assignee: TomTom Global Content B.V.
Inventors: Henk van der Molen, Jeroen Trum, Jan Willem van den Brand
-
Patent number: 12293602
Abstract: In some examples, a system can access video data collected from one or more image sensors, the video data showing a region of interest proximate to a machine. The system can execute an object detection model to detect that a person is within the region of interest proximate to the machine based on the video data. The system can detect a motion status of a component of the machine. The system can execute a pose estimation model on the video data to estimate a pose of the person with respect to the machine. The system can detect a safety rule violation based on the pose of the person with respect to the machine, and the motion status of the machine. The system can transmit a signal to a controller of the machine in response to detecting the safety rule violation.
Type: Grant
Filed: October 23, 2024
Date of Patent: May 6, 2025
Assignee: SAS INSTITUTE INC.
Inventors: Kedar Shriram Prabhudesai, Hardi Desai, Jonathan James McElhinney, Jonathan Lee Walker, Sanjeev Shyam Heda, Andrey Matveenko, Varunraj Valsaraj, Rik Peter de Ruiter
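The decision logic combining detection, pose, and machine motion status can be sketched as a simple rule function. Everything here — the `Pose` fields, the hazard radius, and the rule itself — is a hypothetical illustration; the abstract does not specify the actual safety rules.

```python
from dataclasses import dataclass

@dataclass
class Pose:
    # Illustrative single keypoint (e.g. a hand) in machine-frame metres.
    hand_x: float
    hand_y: float

def violates_safety_rule(person_in_roi, machine_moving, pose, hazard_radius=1.0):
    """Hypothetical rule: a violation is flagged only when a person is inside
    the region of interest, the machine component is moving, and the tracked
    hand keypoint lies within the hazard radius of the machine origin."""
    if not (person_in_roi and machine_moving):
        return False
    dist = (pose.hand_x ** 2 + pose.hand_y ** 2) ** 0.5
    return dist < hazard_radius
```

A system like the one described would evaluate such a rule per frame and signal the machine controller (e.g. to slow or stop) when it returns True.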
-
Patent number: 12293529
Abstract: A method for prioritizing feature extraction for object re-identification in an object tracking application. Regions of interest (ROIs) for object feature extraction are determined based on motion areas in the image frame. Each object detected in an image frame that at least partly overlaps an ROI is associated with that ROI. A list of candidate objects for feature extraction is determined by, for each ROI associated with two or more objects, adding each of the two or more objects that does not overlap any of the other objects among the two or more objects by more than a threshold amount. From the list of candidate objects, at least one object is selected, and image data of the image frame depicting the selected object is used for determining a feature vector for the selected object.
Type: Grant
Filed: March 26, 2024
Date of Patent: May 6, 2025
Assignee: AXIS AB
Inventors: Niclas Danielsson, Christian Colliander, Amanda Nilsson, Sarah Laross
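The candidate-selection step — keep only objects that do not overlap any other object in the same ROI by more than a threshold — can be sketched with a plain IoU check. Function names and the 0.1 threshold are illustrative assumptions, not values from the patent.

```python
def iou(a, b):
    """Intersection-over-union of two (x1, y1, x2, y2) boxes."""
    ix = max(0.0, min(a[2], b[2]) - max(a[0], b[0]))
    iy = max(0.0, min(a[3], b[3]) - max(a[1], b[1]))
    inter = ix * iy
    area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
    union = area(a) + area(b) - inter
    return inter / union if union else 0.0

def candidates_for_roi(objects, overlap_threshold=0.1):
    """From the objects associated with one ROI, keep only those that do not
    overlap any other associated object by more than the threshold."""
    keep = []
    for i, a in enumerate(objects):
        if all(iou(a, b) <= overlap_threshold for j, b in enumerate(objects) if i != j):
            keep.append(a)
    return keep
```

The intuition is that an unoccluded, well-separated object yields a cleaner appearance feature vector for re-identification than two overlapping ones.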
-
Patent number: 12291229
Abstract: A method includes an operation to collect radar signals reflected from objects in a field of view. Range-angle-doppler bins representing three-dimensional objects in the field of view are formed. A local median operation is used across a selected dimension of the range-angle-doppler bins to eliminate background noise in the range-angle-doppler bins. Low energy peak regions are masked by removing radial velocity values in the selected dimension to form a sparse range-angle two-dimensional grid. The radar signals reflected from objects in the field of view are processed to extract reflection point detections. Reflection point detections are tracked in accordance with short-term filter rules to form tracked reflection point detections. The tracked reflection point detections are formed into clusters. The clusters are processed with long-term filter rules.
Type: Grant
Filed: January 28, 2022
Date of Patent: May 6, 2025
Assignee: Symeo GmbH
Inventors: Johannes Traa, Andrew Schweitzer, Atulya Yellepeddi
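The local-median background suppression step can be sketched in a few lines of numpy: take the median across one dimension of the range-angle-doppler cube and zero out cells that do not stand sufficiently above it. The thresholding rule (a fixed multiple of the median) is a hypothetical simplification of whatever the patent actually claims.

```python
import numpy as np

def remove_background(rad_cube, axis=2, scale=1.5):
    """rad_cube: range x angle x doppler energy array. Estimate background
    as the median along `axis` and keep only cells exceeding scale * median
    (illustrative thresholding, not the patent's exact rule)."""
    med = np.median(rad_cube, axis=axis, keepdims=True)
    return np.where(rad_cube > scale * med, rad_cube, 0.0)
```

After this masking, summing or arg-maxing over the doppler axis yields the sparse range-angle grid mentioned in the abstract, from which reflection points can be extracted.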
-
Patent number: 12293575
Abstract: A method includes generating, by a neural network having a plurality of layers, final feature vectors of one or more frames of a plurality of frames of an input video, while sequentially processing each of the plurality of frames, and generating image information corresponding to the input video based on the generated final feature vectors. For each of the plurality of frames, the generating of the final feature vectors comprises determining whether to proceed with or stop a corresponding sequenced operation through layers of the neural network for generating a final feature vector of a corresponding frame, and generating the final feature vector of the corresponding frame in response to the corresponding sequenced operation completing a final stage of the corresponding sequenced operation.
Type: Grant
Filed: July 15, 2022
Date of Patent: May 6, 2025
Assignees: SAMSUNG ELECTRONICS CO., LTD., SNU R&DB FOUNDATION
Inventors: Bohyung Han, Jonghyeon Seon, Jaedong Hwang
-
Patent number: 12293014
Abstract: An information processing apparatus includes a processor that processes the image data of the image stored in a memory in order to detect a face area with a face captured therein and an orientation of the face from the image, and controls the screen brightness of a display unit based on the orientation of the face detected by the processing. When the detected face orientation falls within a preset first angle range, the processor determines that the face orientation is a first orientation, and even when the detected face orientation is out of the first angle range, the processor determines that the face orientation is the first orientation depending on the amount of change in the detected orientation of the face changing in a direction of the first orientation.
Type: Grant
Filed: October 26, 2023
Date of Patent: May 6, 2025
Assignee: Lenovo (Singapore) Pte. Ltd.
Inventors: Masashi Nishio, Yuji Wada
-
Patent number: 12289405
Abstract: The present disclosure provides a graphical watermark, a method and an apparatus for generating a graphical watermark, and a method and an apparatus for authenticating a graphical watermark. The graphical watermark includes: a plurality of graphical markers carrying position and pose information, and identity information of the graphical watermark; and a watermark pattern provided between a pair of graphical markers.
Type: Grant
Filed: July 18, 2022
Date of Patent: April 29, 2025
Assignee: I-SPRINT INNOVATIONS PTE LTD
Inventors: Chin Phek Ong, Wai Keung Ching, Tat Kwong Simon Leung
-
Patent number: 12289567
Abstract: A method is provided of parallel gathering of structured light patterns in multi-projector systems, comprising sequentially projecting images from each of a plurality of projectors on a surface; capturing the sequentially projected images on the surface via at least one sensing device; creating one or more sensing device/projector masks for limiting the images projected by each of the plurality of projectors to portions that do not conflict with images projected by other ones of the plurality of projectors; using the one or more projector masks to create a graph of projectors whose projections conflict; and creating from the graph a plurality of gather groups wherein projectors within a group do not conflict with each other for simultaneously gathering structured light patterns from projectors in each of the gather groups.
Type: Grant
Filed: September 13, 2022
Date of Patent: April 29, 2025
Assignee: CHRISTIE DIGITAL SYSTEMS USA, INC.
Inventors: Matthew Post, Peter Anthony Van Eerd, Anda Achimescu, Huck Kim
-
Patent number: 12288385
Abstract: A learning device 10 includes a first learning unit 20. The first learning unit 20 includes a first supervised learning unit 22 and a first self-supervised learning unit 24. The first supervised learning unit 22 learns a first object detection network 30 using learning data 40 so as to reduce a first loss between an output of the first object detection network 30 for detecting an object from target image data and supervised data 40B. Using image data 40A and self-supervised data 40C generated from the image data 40A, the first self-supervised learning unit 24 learns the first object detection network 30 so as to reduce a second loss of a feature amount of a corresponding candidate area P between the image data 40A and the self-supervised data 40C, the second loss being derived by the first object detection network 30.
Type: Grant
Filed: August 24, 2022
Date of Patent: April 29, 2025
Assignee: KABUSHIKI KAISHA TOSHIBA
Inventor: Daisuke Kobayashi
-
Patent number: 12288358
Abstract: A method includes: selecting, from among a series of images, a candidate image that was captured at an image-capturing time instant corresponding to a point-cloud-generating time instant at which a point cloud was generated; generating a two-dimensional data set from the point cloud; superimposing the two-dimensional data set on the candidate image to result in a superimposed image; obtaining a derived distance inconsistency between the candidate image and the two-dimensional data set in the superimposed image; feeding the derived distance inconsistency into a conversion model to obtain a derived time difference; calculating a target time instant based on the derived time difference and the image-capturing time instant of the candidate image; and selecting, from among the series of images that have been received from the image capturing device, a target image that was captured at a time instant the closest to the target time instant.
Type: Grant
Filed: December 2, 2022
Date of Patent: April 29, 2025
Assignee: Automotive Research & Testing Center
Inventors: Wei-Xiang Huang, Yi-Jie Lin
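The last two steps — converting a distance inconsistency into a time difference, then picking the image captured closest to the resulting target time — can be sketched as below. The linear conversion model and its gain are hypothetical stand-ins; the patent's actual conversion model is not described in the abstract.

```python
def target_time(capture_t, inconsistency_m, gain_s_per_m=0.02):
    """Hypothetical linear conversion model: the derived time difference is
    proportional to the measured distance inconsistency (seconds per metre)."""
    return capture_t + gain_s_per_m * inconsistency_m

def closest_image(images, t):
    """images: list of (timestamp_s, frame_id) pairs; pick the one nearest t."""
    return min(images, key=lambda im: abs(im[0] - t))
```

With a 4 m inconsistency on a frame captured at t = 0.1 s, the target time becomes 0.18 s, so a frame at 0.2 s would be selected over the original candidate.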
-
Patent number: 12288400
Abstract: A method and system for carrying out iterative appearance searching is disclosed. The method includes employing at least one first reference image to carry out a first instance, computer vision-driven appearance search of first video data captured across a respective first set of a plurality of first security cameras. The method also includes obtaining a second reference image, having a respective relevance confidence that satisfies a confidence threshold condition, from the portion of the first video data. The method also includes employing the second reference image for a second instance, computer vision-driven appearance search of second video data captured across a respective second set of a plurality of second security cameras.
Type: Grant
Filed: September 20, 2023
Date of Patent: April 29, 2025
Assignee: MOTOROLA SOLUTIONS, INC.
Inventors: Kenney Koay, Rosnah Antong, Nur Diyana Mohd Asri, Poh Imm Goh
-
Patent number: 12287405
Abstract: Techniques for determining attributes associated with objects represented in temporal sensor data. In some examples, the techniques may include receiving sensor data including a representation of an object in an environment. The sensor data may be generated by a temporal sensor of a vehicle and, in some instances, a trajectory of the object or the vehicle may contribute to a distortion in the representation of the object. For instance, a shape of the representation of the object may be distorted relative to an actual shape of the object. The techniques may also include determining an attribute (e.g., velocity, bounding box, etc.) associated with the object based at least in part on a difference between the representation of the object and another representation of the object (e.g., in other sensor data) or the actual shape of the object.
Type: Grant
Filed: June 22, 2022
Date of Patent: April 29, 2025
Assignee: Zoox, Inc.
Inventor: Scott M. Purdy
-
Patent number: 12282989
Abstract: A method includes determining an altitude of a camera of an aerial vehicle, determining a field of view (FOV) of the camera, generating a localized map, determining a relative position of the aerial vehicle on the localized map, and determining a relative heading of the aerial vehicle.
Type: Grant
Filed: July 6, 2023
Date of Patent: April 22, 2025
Assignee: Skydio, Inc.
Inventor: Joseph Anthony Enke
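Altitude and FOV together determine how much ground a downward-looking camera covers, which is the standard geometric input to building a localized map. This sketch shows that relationship (w = 2h·tan(FOV/2) for a nadir-pointing camera); the function name is illustrative and the patent may use a different formulation for tilted cameras.

```python
import math

def ground_footprint_width(altitude_m, fov_deg):
    """Width of ground (metres) covered by a nadir-pointing camera at a given
    altitude: w = 2 * h * tan(fov / 2)."""
    return 2.0 * altitude_m * math.tan(math.radians(fov_deg) / 2.0)
```

For example, at 100 m altitude a 90-degree FOV covers a 200 m wide swath, so the map tile size can be scaled directly from the altimeter reading.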
-
Patent number: 12283112
Abstract: Vehicle perception techniques include applying a 3D DNN to a set of inputs to generate 3D detection results including a set of 3D objects, transforming the set of 3D objects onto a set of images as a first set of 2D bounding boxes, applying a 2D DNN to the set of images to generate 2D detection results including a second set of 2D bounding boxes, calculating mean average precision (mAP) values based on a comparison between the first and second sets of 2D bounding boxes, identifying a set of corner cases based on the calculated mAP values, and re-training or updating the 3D DNN using the identified set of corner cases, wherein the performance of the 3D DNN is thereby increased without the use of expensive additional manually and/or automatically annotated training datasets.
Type: Grant
Filed: August 4, 2022
Date of Patent: April 22, 2025
Assignee: FCA US LLC
Inventors: Dalong Li, Rohit S Paranjpe, Benjamin J Chappell
-
Patent number: 12283056
Abstract: A computing system identifies player tracking data and event data corresponding to a match. The match includes a first team and a second team. The player tracking data includes coordinate positions of each player during the event. The event data defines events that occur during the match. The computing system divides the player tracking data into a plurality of segments based on the event information. For each segment of the plurality of segments, the computing system learns a first formation associated with a respective team in possession. For each segment of the plurality of segments, the computing system learns a second formation associated with a respective team not in possession. The computing system maps each first formation to a first class of known formation clusters. The computing system maps each second formation to a second class of known formation clusters.
Type: Grant
Filed: February 11, 2022
Date of Patent: April 22, 2025
Assignee: STATS LLC
Inventors: Thomas Seidl, Michael Stöckl, Patrick Joseph Lucey
-
Patent number: 12282089
Abstract: Perception systems and methods include use of occlusion constraints for resolving tracks from multiple types of sensors. An occlusion constraint is applied to an association between a radar track and vision track to indicate a probability of occlusion. The perception systems and methods refrain from evaluating occluded and collected radar tracks and vision tracks. The probability of occlusion is utilized for deemphasizing pairs of radar tracks and vision tracks with a high likelihood of occlusion and therefore, not useful for tracking. Improved perception data is provided and more closely represents multiple complex data sets for a vehicle to prevent a collision with an occluded object as the vehicle operates in an environment.
Type: Grant
Filed: December 6, 2021
Date of Patent: April 22, 2025
Assignee: Aptiv Technologies AG
Inventors: Ganesh Sevagamoorthy, Syed Asif Imran
-
Patent number: 12281882
Abstract: A system for managing a clay shooting session is provided. The system includes hardware and software control means and a user interface operatively connected to the hardware and software control means. The user interface has a lost clay target button, a second barrel shot button, and a no-target button. In the event of pressing the no-target button, the hardware and software control means are configured to check whether a data set stored before pressing the no-target button includes at least one of data relating to a shot taken signal, data relating to pressing the second barrel shot button, and data relating to pressing the lost clay target button; if so, the data set stored before pressing the no-target button is stored and the data set relating to the shot after the no-target button is pressed is identified as a repeated shot.
Type: Grant
Filed: October 28, 2020
Date of Patent: April 22, 2025
Assignee: FABBRICA D'ARMI PIETRO BERETTA S.P.A.
Inventors: Giacomo Ziliani, Riccardo Zanardelli
-
Patent number: 12284146
Abstract: Systems and methods herein describe generating automatic reactions in an augmented reality messenger system. The claimed systems and methods generate a contextual trigger defining a set of conditions at a first computing device, detect that at least one of the set of conditions has been satisfied, cause presentation of an augmented reality content item at a second computing device, generate a user reaction in response to the presentation of the augmented reality content item, and transmit the user reaction to the first computing device.
Type: Grant
Filed: September 16, 2021
Date of Patent: April 22, 2025
Assignee: Snap Inc.
Inventors: Brian Anthony Smith, Yu Jiang Tham, Rajan Vaish
-
Patent number: 12280799
Abstract: A travel controller detects objects from an environmental image representing surroundings of a vehicle capable of traveling under autonomous driving control satisfying a predetermined safety standard; detects a looking direction of a driver of the vehicle from a face image of the driver; identifies an object in the looking direction of the driver out of the objects; stores the identified object and a situation condition indicating the situation at detection of the identified object in a memory in association with each other when a danger avoidance action performed by the driver is detected during travel of the vehicle; and changes the predetermined safety standard so that the driver can feel safer, in the case where an object stored in the memory is detected during travel of the vehicle under the autonomous driving control and where the situation at detection of the object satisfies the situation condition.
Type: Grant
Filed: December 22, 2022
Date of Patent: April 22, 2025
Assignees: TOYOTA JIDOSHA KABUSHIKI KAISHA, AISIN CORPORATION
Inventors: Yuki Horiuchi, Taku Mitsumori, Yuji Yamanishi, Yuki Takeda, Masahiro Ishihara
-
Patent number: 12279837
Abstract: Systems, methods, and instrumentalities are disclosed for identification of image shapes based on situational awareness of a surgical image and annotation of shapes or pixels. A surgical video associated with a surgical procedure may be obtained. Surgical context data for the surgical procedure may be obtained. Elements in the video frames may be identified based on the surgical context data using image processing. Annotation data may be determined and generated for the video frames, for example, based on the surgical context data and the identified element(s).
Type: Grant
Filed: May 18, 2022
Date of Patent: April 22, 2025
Assignee: Cilag GmbH International
Inventor: Frederick E. Shelton, IV
-
Patent number: 12277655
Abstract: Generally described, one or more aspects of the present application relate to capturing and generating viewpoints of any given space. Pixel averaging and camera configurations, including microlens cameras, may be implemented to generate and capture viewpoints of any given space.
Type: Grant
Filed: March 31, 2023
Date of Patent: April 15, 2025
Inventor: Periannan Senapathy
-
Patent number: 12277671
Abstract: Systems and methods for image processing are described. Embodiments of the present disclosure include an image processing apparatus configured to efficiently perform texture synthesis (e.g., increase the size of, or extend, texture in an input image while preserving a natural appearance of the synthesized texture pattern in the modified output image). In some aspects, the image processing apparatus implements an attention mechanism with a multi-stage attention model where different stages (e.g., different transformer blocks) progressively refine image feature patch mapping at different scales, while utilizing repetitive patterns in texture images to enable network generalization. One or more embodiments of the disclosure include skip connections and convolutional layers (e.g., between transformer block stages) that combine high-frequency and low-frequency features from different transformer stages and unify attention to micro-structures, meso-structures and macro-structures.
Type: Grant
Filed: November 10, 2021
Date of Patent: April 15, 2025
Assignee: ADOBE INC.
Inventors: Shouchang Guo, Arthur Jules Martin Roullier, Tamy Boubekeur, Valentin Deschaintre, Jerome Derel, Paul Parneix
-
Patent number: 12279129
Abstract: An electronic device that provides installation instructions is described. During operation, the electronic device may automatically generate a radio-frequency project plan for a network in an environment using a radio-frequency model and a model corresponding to at least a portion of the environment. Then, based at least in part on the radio-frequency project plan, the electronic device may interactively provide the installation instructions to an installer while a given access point in the access points is being installed in the environment. Moreover, after the given access point is installed, the electronic device may validate the given access point. Next, the electronic device may perform automatic testing of the given access point. Furthermore, the electronic device may provide a comparison of estimated communication performance of the given access point in the radio-frequency project plan and measured communication performance of the given access point.
Type: Grant
Filed: September 10, 2022
Date of Patent: April 15, 2025
Assignee: Shasta Cloud, Inc.
Inventors: Steve A. Martin, Doron Givoni
-
Patent number: 12277717
Abstract: The embodiments of the present disclosure disclose an object detection method and system, and a non-transitory computer-readable medium. In the object detection method, multiple candidate bounding boxes of an interest object in a current image frame are obtained. Based on a determined bounding box of the interest object in a previous image frame, the multiple candidate bounding boxes are filtered to obtain a determined bounding box of the interest object in the current image frame.
Type: Grant
Filed: May 3, 2022
Date of Patent: April 15, 2025
Assignee: GUANGDONG OPPO MOBILE TELECOMMUNICATIONS CORP., LTD.
Inventors: Yang Zhou, Jie Liu
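A common way to filter current-frame candidates against the previous frame's determined box is temporal association by overlap. The sketch below keeps the candidate with the highest IoU against the previous box; this is one plausible association rule, as the abstract does not specify the patent's exact filtering criterion.

```python
def filter_candidates(candidates, prev_box):
    """Pick the candidate (x1, y1, x2, y2) box most consistent with the
    previous frame's determined box, measured by intersection-over-union."""
    def iou(a, b):
        ix = max(0.0, min(a[2], b[2]) - max(a[0], b[0]))
        iy = max(0.0, min(a[3], b[3]) - max(a[1], b[1]))
        inter = ix * iy
        union = ((a[2] - a[0]) * (a[3] - a[1])
                 + (b[2] - b[0]) * (b[3] - b[1]) - inter)
        return inter / union if union else 0.0
    return max(candidates, key=lambda c: iou(c, prev_box))
```

This exploits the assumption that the object of interest moves only a little between consecutive frames, so a far-away candidate is likely a distractor.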
-
Patent number: 12277843
Abstract: Disclosed herein are an apparatus, method, and computer-readable medium for detecting an anomalous behavior event for an environment and transmitting an alert of the event. An implementation may comprise detecting, in a plurality of image frames captured by a camera, when a person enters a region of interest, tracking movements of the person in the region of interest by comparing images of the person in the plurality of image frames, determining attributes of the person, determining environmental contexts for the region of interest, determining whether or not an anomalous behavior is detected based on the movements and the attributes of the person and the environmental contexts, generating an alert when the anomalous behavior is detected, and transmitting the generated alert to user devices of one or more predetermined recipients.
Type: Grant
Filed: January 20, 2022
Date of Patent: April 15, 2025
Assignee: Tyco Fire & Security GmbH
Inventor: Matthew Julien
-
Patent number: 12277195
Abstract: A method, computer program product, and system include a processor(s) that continuously obtains data from one or more sensor devices, generates, from the data, frames comprising images, identifies, utilizing the frames, entities within a pre-defined vicinity of a visual display unit at a first time, and determines, based on applying a classification model, whether each identified entity within the pre-defined vicinity at the first time is objectionable. When the processor(s) determines that at least one identified entity is objectionable, the processor(s) initiates a security action on the visual display unit to prevent the objectionable identified entity from viewing the content on the visual display unit.
Type: Grant
Filed: October 29, 2021
Date of Patent: April 15, 2025
Assignee: Kyndryl, Inc.
Inventors: Saravanan Devendran, Thangadurai Muthusamy
-
Patent number: 12277782
Abstract: An artificial intelligence range hood including a main body; a camera disposed at a lower end of the main body and configured to capture an image of a cooktop located under the main body; a display located on a front surface of the main body; a memory storing an object recognition model; and a processor configured to control the camera to capture the image of the cooktop, generate object recognition information for recognizing cooking objects included in the captured image of the cooktop using the object recognition model stored in the memory, set a user region of interest of the captured image corresponding to a recognized cooking object designated by the generated object recognition information, control an operation of the camera to photograph the user region of interest of the captured image, and control the display to display the photographed user region of interest of the captured image.
Type: Grant
Filed: February 1, 2023
Date of Patent: April 15, 2025
Assignee: LG ELECTRONICS INC.
Inventors: Seoyoung Jeong, Seongik Jeon, Haesol Baek, Jeehoon Lee
-
Patent number: 12276622
Abstract: A state change tracking device includes: a hardware processor that non-destructively tracks a state change of an inspection target using a plurality of reconstructed images acquired over time by imaging the inspection target, placed under a specific environment, with an X-ray Talbot imaging device.
Type: Grant
Filed: October 20, 2021
Date of Patent: April 15, 2025
Assignee: Konica Minolta, Inc.
Inventors: Tadashi Arimoto, Ikuma Ota
-
Patent number: 12277721
Abstract: This application provides a method for optimizing a depth estimation model. The method includes obtaining a video of an object and capturing a first image and a second image from the video. An initial depth estimation model is obtained. An updated depth estimation model is obtained by performing an optimization process on the initial depth estimation model, and the optimization process is repeatedly performed on the updated depth estimation model. Once the updated depth estimation model meets predetermined requirements, it is determined as the target depth estimation model.
Type: Grant
Filed: August 26, 2022
Date of Patent: April 15, 2025
Assignee: HON HAI PRECISION INDUSTRY CO., LTD.
Inventors: Tsung-Wei Liu, Chin-Pin Kuo
-
Patent number: 12277770Abstract: A system and method for receiving, using one or more processors, image data, the image data including a first training video representing performance of one or more steps on a first workpiece; applying, using the one or more processors, a first set of labels to the first training video based on user input; performing, using the one or more processors, extraction on the image data, thereby generating extracted information, the extracted information including first extracted image information associated with the first training video; and training, using the one or more processors, a process monitoring algorithm based on the extracted information and the first set of labels.Type: GrantFiled: September 28, 2021Date of Patent: April 15, 2025Assignee: Invisible AI Inc.Inventors: Prateek Sachdeva, Eric Danziger
-
Patent number: 12270908Abstract: A system in a vehicle includes a lidar system to transmit incident light and receive reflections from one or more objects as a point cloud of points. The system also includes processing circuitry to identify planar points and to identify edge points of the point cloud. Each set of planar points forms a linear pattern and each edge point is between two sets of planar points, and the processing circuitry identifies each point of the point cloud as being within a virtual beam among a set of virtual beams. Each virtual beam of the set represents a horizontal band of the point cloud.Type: GrantFiled: November 1, 2021Date of Patent: April 8, 2025Assignee: GM GLOBAL TECHNOLOGY OPERATIONS LLCInventors: Yao Hu, Xinyu Du, Wende Zhang, Hao Yu
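Binning points into virtual beams (horizontal bands) can be sketched by quantizing each point's elevation angle. The beam count, angular range, and function name are assumptions for illustration, not the patented scheme:

```python
import numpy as np

def assign_virtual_beams(points, n_beams=16, min_deg=-15.0, max_deg=15.0):
    """Assign each lidar point to a virtual beam: a horizontal band of the
    point cloud selected by the point's elevation angle."""
    x, y, z = points[:, 0], points[:, 1], points[:, 2]
    elev = np.degrees(np.arctan2(z, np.hypot(x, y)))
    idx = np.floor((elev - min_deg) / (max_deg - min_deg) * n_beams).astype(int)
    return np.clip(idx, 0, n_beams - 1)

pts = np.array([[10.0, 0.0, 0.0],   # elevation 0 degrees -> middle band
                [10.0, 0.0, 2.0]])  # elevation ~11.3 degrees -> upper band
beams = assign_virtual_beams(pts)
print(beams)
```

Grouping by beam index then recovers the horizontal bands in which planar and edge points can be searched per band.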
-
Patent number: 12270895Abstract: The present technology relates to an information processing apparatus, an information processing method, a program, a mobile-object control apparatus, and a mobile object that make it possible to improve the accuracy in recognizing a target object. An information processing apparatus includes a geometric transformation section that transforms at least one of a captured image or a sensor image to match coordinate systems of the captured image and the sensor image, the captured image being obtained by an image sensor, the sensor image indicating a sensing result of a sensor of which a sensing range at least partially overlaps a sensing range of the image sensor; and an object recognition section that performs processing of recognizing a target object on the basis of the captured image and sensor image of which the coordinate systems have been matched to each other. The present technology is applicable to, for example, a system used to recognize a target object around a vehicle.Type: GrantFiled: November 22, 2019Date of Patent: April 8, 2025Assignee: Sony Semiconductor Solutions CorporationInventor: Tatsuya Sakashita
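The geometric transformation that matches the sensor's coordinate system to the captured image can be sketched as a standard rigid transform followed by a pinhole projection. The intrinsic matrix values and point coordinates below are illustrative assumptions, not values from the patent:

```python
import numpy as np

def project_to_image(points_3d, K, R=np.eye(3), t=np.zeros(3)):
    """Geometric transformation: map sensor-frame 3D points into the
    camera's pixel coordinate system via extrinsics (R, t) and intrinsics K."""
    cam = points_3d @ R.T + t          # sensor frame -> camera frame
    uvw = cam @ K.T                    # camera frame -> homogeneous pixels
    return uvw[:, :2] / uvw[:, 2:3]    # perspective divide

K = np.array([[500.0, 0.0, 320.0],
              [0.0, 500.0, 240.0],
              [0.0, 0.0, 1.0]])
pts = np.array([[0.0, 0.0, 10.0],    # straight ahead -> principal point
                [1.0, 0.0, 10.0]])   # 1 m to the right at 10 m depth
uv = project_to_image(pts, K)
print(uv)
```

With both modalities expressed in the same pixel coordinates, per-pixel fusion for the downstream object recognition step becomes a simple lookup.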
-
Patent number: 12270659Abstract: Exemplary positioning method and apparatus, a device, a system, a medium and a self-driving vehicle are provided. The positioning method includes determining, according to the current frame of point cloud data collected by a to-be-positioned terminal in a travelling environment, the current key point in the current frame of point cloud data and a point cloud distribution feature of the current key point; selecting, according to point cloud distribution features associated with reference key points in a global positioning map and the point cloud distribution feature of the current key point, a target key point matching the current key point from the reference key points; and determining, according to reference pose data associated with the target key point, the current pose data of the to-be-positioned terminal.Type: GrantFiled: September 22, 2022Date of Patent: April 8, 2025Assignee: Beijing Baidu Netcom Science Technology Co., Ltd.Inventors: Xiangyu Fu, Guowei Wan, Yao Zhou, Liang Peng
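Selecting a target key point whose point cloud distribution feature matches the current key point can be sketched as a nearest-neighbor search in feature space. The descriptor values and Euclidean metric are assumptions for illustration; the patent's actual feature and matching criterion may differ:

```python
import numpy as np

def match_key_point(query_feature, reference_features):
    """Select the reference key point whose point cloud distribution
    feature is closest (Euclidean) to the current key point's feature."""
    dists = np.linalg.norm(reference_features - query_feature, axis=1)
    return int(np.argmin(dists)), float(dists.min())

# Rows: distribution features stored with reference key points in the map.
refs = np.array([[0.9, 0.1, 0.0],
                 [0.1, 0.8, 0.1],
                 [0.0, 0.1, 0.9]])
idx, dist = match_key_point(np.array([0.05, 0.85, 0.1]), refs)
print(idx)
```

The pose data associated with the matched reference key point (`refs[idx]` here) then anchors the current pose estimate.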
-
Patent number: 12272142Abstract: A monitoring system, method, and computer program product provide image projection of content of subjective interest to a person, responsive to a detected mood of the person. A controller of the monitoring system receives image stream(s) from a camera system, including a first image stream encompassing a face of a person. The controller compares facial expressions within the first image stream with facial expression trigger(s) in a visual object library. In response to determining that the first image stream includes facial expression trigger(s) among the visual object library, the controller determines one or more objects in the visual object library having an associated interest value, in a preference tracking data structure, above a threshold soothing value corresponding to the person. The controller triggers an image projector to present the object(s) within the field of view of the person to respond to the facial expression trigger.Type: GrantFiled: March 30, 2022Date of Patent: April 8, 2025Assignee: Motorola Mobility LLCInventors: Amit Kumar Agrawal, Rahul Bharat Desai
-
Patent number: 12271974Abstract: Techniques are described for generating bird's eye view (BEV) images and segmentation maps. According to one or more embodiments, a system is provided comprising a processor that executes computer executable components stored in at least one memory, comprising a machine learning component that generates a synthesized bird's eye view image from a stitched image based on removing artifacts from the stitched image present from a transformation process. The system further comprising a generator that produces the synthesized bird's eye view image and a segmentation map, and a discriminator that predicts whether the synthesized bird's eye view image and the segmentation map are real or generated.Type: GrantFiled: May 9, 2022Date of Patent: April 8, 2025Assignee: Volvo Car CorporationInventors: Sihao Ding, Ekta U. Samani
-
Patent number: 12268196Abstract: A fish counting system has: an image acquisition unit for acquiring a plurality of images wherein a fluid containing fish is imaged over time; a counting unit for counting the fish on the basis of the plurality of images; a fish count change display provision unit for providing a fish count change display, wherein a display corresponding to the number of fish counted per unit time is arranged in a time-series manner; a result provision unit for providing a counting result display in which count completion marks indicating counted fish have been added to an image; and a correction unit for receiving a correction operation and correcting the number of fish.Type: GrantFiled: September 15, 2020Date of Patent: April 8, 2025Assignee: YANMAR POWER TECHNOLOGY CO., LTD.Inventors: Yasuhiro Ueda, Yuichiro Dake, Toshiaki Sakai, Makoto Tani
-
Patent number: 12268545Abstract: Method and systems are provided for characterizing blood flow in an atrioventricular valve of the human heart, the atrioventricular valve connecting an atrium with a corresponding ventricle of the heart, the ventricle being fluidly coupled to a particular vessel that transports blood outside the ventricle, which involve obtaining image data of the heart and identifying contours of the atrium and a region within the particular vessel within the image data. A time-density curve for the atrium can be calculated from the contour of the atrium and densitometric image data derived from the image data. A time-density curve for the region of the particular vessel can be calculated from the contour of the vessel region and the densitometric image data. Data that characterizes at least one regurgitation fraction related to the atrioventricular valve can be calculated from such time-density curves.Type: GrantFiled: October 20, 2021Date of Patent: April 8, 2025Assignee: Pie Medical Imaging B.V.Inventor: Jean-Paul Aben
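The time-density curve computation can be sketched as the mean contrast density inside a contour mask per frame. The ratio-of-areas formula below is illustrative only; the patent's actual regurgitation-fraction derivation from the two curves may differ:

```python
import numpy as np

def time_density_curve(frames, mask):
    """Mean contrast density inside a contour (boolean mask), per frame."""
    return np.array([frame[mask].mean() for frame in frames])

def _trapezoid(y, t):
    """Area under a sampled curve (trapezoidal rule)."""
    return float(np.sum((y[1:] + y[:-1]) * np.diff(t)) / 2.0)

def regurgitation_fraction(atrium_tdc, vessel_tdc, t):
    """Illustrative only: ratio of the area under the atrial curve to the
    area under the vessel curve; the patent's exact formula may differ."""
    return _trapezoid(atrium_tdc, t) / _trapezoid(vessel_tdc, t)

t = np.linspace(0.0, 1.0, 50)
vessel_tdc = np.sin(np.pi * t)          # synthetic forward-flow curve
atrium_tdc = 0.25 * np.sin(np.pi * t)   # synthetic regurgitant curve
rf = regurgitation_fraction(atrium_tdc, vessel_tdc, t)
print(round(rf, 2))  # 0.25: the atrial curve carries a quarter of the area
```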
-
Patent number: 12272103Abstract: There is provided a method of matching features depicted in images, comprising: detecting a first object depicted in a first image, projecting a first epipolar line, from the first object of the first image, to a second image, selecting second objects along the first epipolar line of the second image, projecting second epipolar lines, from the second objects of the second image, to a third image, projecting a third epipolar line from the first object of the first image to the third image, identifying on the third image, a third object along an intersection of the third epipolar line and a certain second epipolar line of the second epipolar lines, and generating an indication of the first object depicted in the first image and the third object depicted in the third image as matches of a same physical object.Type: GrantFiled: March 29, 2022Date of Patent: April 8, 2025Assignee: Percepto Robotics LtdInventors: Eran Eilat, Yekutiel Katz, Ovadya Menadeva
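The core primitive, projecting an epipolar line from a point in one image into another, can be sketched with a fundamental matrix. The rectified-stereo F below (pure horizontal shift, so epipolar lines are image rows) is an assumed toy case, not geometry from the patent:

```python
import numpy as np

def epipolar_line(F, p):
    """Epipolar line in the second image of point p=(x, y) in the first,
    as homogeneous coefficients (a, b, c) with a*u + b*v + c = 0."""
    return F @ np.array([p[0], p[1], 1.0])

def point_line_distance(line, q):
    """Perpendicular distance of pixel q=(u, v) from the epipolar line."""
    a, b, c = line
    return abs(a * q[0] + b * q[1] + c) / np.hypot(a, b)

# Fundamental matrix of a rectified stereo pair (pure horizontal shift):
# each epipolar line is the matching row of the second image.
F = np.array([[0.0, 0.0, 0.0],
              [0.0, 0.0, -1.0],
              [0.0, 1.0, 0.0]])
line = epipolar_line(F, (150.0, 80.0))
print(point_line_distance(line, (400.0, 80.0)))  # same row -> on the line
```

The method's third-image intersection step amounts to finding a candidate whose distance to *both* projected lines is near zero.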
-
Patent number: 12272151Abstract: Methods, systems, and apparatuses related to autonomous vehicle object detection are described. A method can include receiving, by an autonomous vehicle, an indication that the autonomous vehicle has entered a network coverage zone generated by a base station and performing an operation to reallocate computing resources between a plurality of different types of memory devices associated with the autonomous vehicle in response to receiving the indication. The method can further include capturing, by the autonomous vehicle, data corresponding to an unknown object disposed within a sight line of the autonomous vehicle and performing, using the reallocated computing resources, an operation involving the data corresponding to the unknown object to classify the unknown object.Type: GrantFiled: September 29, 2023Date of Patent: April 8, 2025Assignee: Micron Technology, Inc.Inventor: Reshmi Basu
-
Patent number: 12272095Abstract: Localization of a user device in a mixed reality environment in which the user device obtains at least one keyframe from a server, which can reside on the user device, displays at least one of the keyframes on a screen, captures by a camera an image of the environment, and obtains a localization result based on at least one feature of at least one keyframe and the image.Type: GrantFiled: December 10, 2020Date of Patent: April 8, 2025Assignee: InterDigital CE Patent HoldingsInventors: Matthieu Fradet, Vincent Alleaume, Anthony Laurent, Philippe Robert
-
Patent number: 12270758Abstract: A method for detecting and quantifying gas emissions from a satellite image comprises obtaining an image of an area of interest, determining an amount of variation in light intensity within the image, and correlating the amount of variation in light intensity within the image to a gas concentration of a gas emission located in the area of interest.Type: GrantFiled: October 20, 2022Date of Patent: April 8, 2025Assignee: KayrrosInventors: Edouard Machover, Gabriele Facciolo, Jean-Michel Morel, Carlo De Franchis, Thibaud Ehret, Aurélien De Truchis, Matthieu Mazzolini, Thomas Lauvaux
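The two stages the abstract names, measuring variation in light intensity and correlating it to a gas concentration, can be sketched as a patch standard deviation fed through a calibration curve. The linear `gain`/`offset` model is a loud assumption; a real retrieval would be calibrated against the absorption physics of the target gas:

```python
import numpy as np

def intensity_variation(patch):
    """Amount of variation in light intensity: standard deviation of the
    pixel values inside an image patch."""
    return float(np.std(patch))

def to_concentration(variation, gain=2.5, offset=0.0):
    """Illustrative linear correlation from intensity variation to gas
    concentration; the gain and offset are hypothetical calibration values."""
    return gain * variation + offset

patch = np.array([[10.0, 10.0],
                  [10.0, 14.0]])  # a small plume-like intensity anomaly
var = intensity_variation(patch)
conc = to_concentration(var)
print(conc)
```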
-
Patent number: 12269483Abstract: Disclosed is a physique estimation device that can estimate the physique of an occupant in a vehicle cabin. The physique estimation device includes: a sensor having a transmission antenna to transmit a transmission wave, and a reception antenna to receive the transmission wave reflected by at least one target in a vehicle cabin as a received wave; a frequency analysis unit to acquire position information about a reflection point where the transmission wave is reflected using the received wave; and a physique estimation unit to estimate the physique of a non-static object present in the vehicle cabin using the position information.Type: GrantFiled: March 20, 2023Date of Patent: April 8, 2025Assignee: MITSUBISHI ELECTRIC CORPORATIONInventors: Takayuki Kitamura, Kei Suwa
-
Patent number: 12272094Abstract: The present disclosure describes approaches to camera re-localization using a graph neural network (GNN). A re-localization model includes encoding an input image into a feature map. The model retrieves reference images from an image database of a previously scanned environment based on the feature map of the image. The model builds a graph based on the image and the reference images, wherein nodes represent the image and the reference images, and edges are defined between the nodes. The model may iteratively refine the graph through autoregressive edge-updating and message passing between nodes. With the graph built, the model predicts a pose of the image based on the edges of the graph. The pose may be a relative pose in relation to the reference images, or an absolute pose.Type: GrantFiled: December 9, 2021Date of Patent: April 8, 2025Assignee: Niantic, Inc.Inventors: Mehmet Özgür Türkoǧlu, Aron Monszpart, Eric Brachmann, Gabriel J. Brostow