Motion Or Velocity Measuring Patents (Class 382/107)
-
Patent number: 12373528
Abstract: A method of identifying a person, the method comprising: acquiring spatiotemporal data for each of a plurality of anatomical landmarks associated with an activity engaged in by a person that defines a spatiotemporal trajectory of the anatomical landmark during the activity; modeling the acquired spatiotemporal data as a spatiotemporal graph (ST-Graph); and processing the ST-Graph using at least one non-local graph convolution neural network (NLGCN) to provide an identity for the person.
Type: Grant
Filed: July 30, 2021
Date of Patent: July 29, 2025
Assignee: Ramot at Tel-Aviv University Ltd.
Inventors: David Mendlovic, Menahem Koren, Lior Gelberg, Khen Cohen, Mor-Avi Azulay, Ohad Volvovitch
-
Patent number: 12354305
Abstract: A camera misalignment detection system for detecting a misalignment condition of one or more cameras of a vehicle includes one or more controllers that determine a set of matched pixel pairs, determine a feature matching ratio based on the set of matched pixel pairs, and calculate an alignment angle difference of the one or more cameras. In response to determining that the feature matching ratio, the essential matrix inlier ratio, and the alignment angle difference each exceed respective threshold values, the controllers add the alignment angle difference to a queue containing a sequence of historical alignment angle difference values. The controllers perform statistical filtering to determine the total number of historical alignment angle difference values within the queue that are inliers and determine a misalignment condition of the one or more cameras based on that inlier count.
Type: Grant
Filed: April 24, 2024
Date of Patent: July 8, 2025
Assignee: GM GLOBAL TECHNOLOGY OPERATIONS LLC
Inventors: Yao Hu, Binbin Li, Xinyu Du, Hao Yu
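The abstract does not spell out the statistical filtering applied to the queue of historical alignment angle differences. Below is a minimal sketch, assuming a median/MAD inlier test and hypothetical thresholds (max_len, mad_scale, inlier_count_threshold, angle_tolerance_deg), not the patent's exact procedure:

```python
import numpy as np
from collections import deque

def update_and_check_misalignment(queue: deque, angle_diff_deg: float,
                                  max_len: int = 100,
                                  mad_scale: float = 3.0,
                                  inlier_count_threshold: int = 50,
                                  angle_tolerance_deg: float = 1.0) -> bool:
    """Append a new alignment-angle difference and test for misalignment.

    Assumed logic: values within median +/- mad_scale * MAD count as inliers;
    if enough inliers exist and their mean exceeds the tolerance, report
    a misalignment condition.
    """
    queue.append(angle_diff_deg)
    while len(queue) > max_len:
        queue.popleft()

    values = np.asarray(queue, dtype=float)
    median = np.median(values)
    mad = np.median(np.abs(values - median)) + 1e-9
    inliers = values[np.abs(values - median) <= mad_scale * mad]

    return (len(inliers) >= inlier_count_threshold
            and abs(inliers.mean()) > angle_tolerance_deg)

# Example usage
history = deque()
misaligned = update_and_check_misalignment(history, angle_diff_deg=1.4)
```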
-
Patent number: 12351187
Abstract: A computer-implemented method estimates a vehicle position. The method provides a first position estimate based on position information from at least one first source, wherein the first position estimate is assigned a first bias value and a first statistical variance; provides a second position estimate based on position information from at least one second source; determines a third bias value and a third statistical variance, wherein the third bias value and the third statistical variance are assigned to a third position estimate which results from a combination of the first position estimate and the second position estimate; and evaluates the first position estimate and the third position estimate on the basis of a quality criterion which takes into account the bias value assigned to each position estimate and the statistical variance assigned to each position estimate.
Type: Grant
Filed: December 6, 2021
Date of Patent: July 8, 2025
Assignee: Bayerische Motoren Werke Aktiengesellschaft
Inventors: Sebastian Gruenwedel, Manus McElhone, Marc Scott Meissner, Pascal Minnerup, Peter Pedron, Sebastian Rauch, Barbara Roessle, Maxi Winter, Martin Zeman
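The abstract does not state how the two estimates are combined or how the quality criterion weighs bias against variance. A minimal sketch, assuming inverse-variance fusion and an MSE-style criterion (bias squared plus variance); the third bias value is treated as a given input because the patent does not give its formula:

```python
import numpy as np

def fuse_estimates(est1, var1, est2, var2):
    """Inverse-variance weighted combination of two position estimates
    (an assumed combination; the patent only says the third estimate
    results from combining the first two)."""
    w1, w2 = 1.0 / var1, 1.0 / var2
    est3 = (w1 * np.asarray(est1) + w2 * np.asarray(est2)) / (w1 + w2)
    var3 = 1.0 / (w1 + w2)
    return est3, var3

def quality(bias, variance):
    """Assumed MSE-style quality criterion: lower is better."""
    return bias ** 2 + variance

# Example: evaluate the first estimate against the combined (third) estimate.
est1, bias1, var1 = np.array([10.0, 2.0]), 0.50, 0.04
est2, var2 = np.array([10.3, 2.1]), 0.09
est3, var3 = fuse_estimates(est1, var1, est2, var2)
bias3 = 0.35   # hypothetical third bias value, determined elsewhere
chosen = "first" if quality(bias1, var1) < quality(bias3, var3) else "third"
```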
-
Patent number: 12347119
Abstract: A method of generating a training dataset suitable for training machine learning algorithms to estimate the motion of objects, and of training a machine learning algorithm to perform motion estimation. A plurality of pairs of synthetic images are generated from obtained objects and backgrounds, each pair having a first frame and a second frame. The first frame includes a selection of objects in first positions and first orientations superimposed on a selected background, and the second frame includes the selection of objects in second positions and second orientations superimposed on the selected background. Also provided are processing systems configured to carry out these methods.
Type: Grant
Filed: March 28, 2024
Date of Patent: July 1, 2025
Assignee: Imagination Technologies Limited
Inventors: Aria Ahmadi, David Walton, Cagatay Dikici
-
Patent number: 12333779
Abstract: A method for determining a salient object in an image based on superpixel analysis initializes superpixels using the SLIC algorithm and merges adjacent superpixels that have similar color distributions. The spatial and color distribution correlation between the merged superpixels is then calculated. Combined with statistics on the occupancy rate, the distance to the original image center, and the global contrast of the superpixels, a saliency evaluation vector is calculated for each superpixel. Finally, the saliency is interpolated for each pixel within each superpixel.
Type: Grant
Filed: November 30, 2022
Date of Patent: June 17, 2025
Assignee: VIETTEL GROUP
Inventors: Van Bang Le, Manh Hung Lu
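A rough sketch of the per-superpixel saliency scoring, using scikit-image's SLIC for initialization. The merging step is omitted, and the occupancy, center-distance, and global-contrast definitions, as well as the equal combination weights, are assumptions rather than the patent's exact formulation:

```python
import numpy as np
from skimage import img_as_float
from skimage.segmentation import slic

def superpixel_saliency(image):
    """Return a per-pixel saliency map from simple superpixel statistics."""
    img = img_as_float(image)                       # H x W x 3, RGB
    labels = slic(img, n_segments=200, compactness=10, start_label=0)
    h, w = labels.shape
    n = labels.max() + 1
    ys, xs = np.mgrid[0:h, 0:w]

    mean_color = np.zeros((n, 3))
    centroid = np.zeros((n, 2))
    occupancy = np.zeros(n)
    for k in range(n):
        mask = labels == k
        occupancy[k] = mask.mean()                  # fraction of image area
        mean_color[k] = img[mask].mean(axis=0)
        centroid[k] = [ys[mask].mean() / h, xs[mask].mean() / w]

    # Global contrast: area-weighted color distance to all other superpixels.
    contrast = np.array([
        np.sum(occupancy * np.linalg.norm(mean_color - mean_color[k], axis=1))
        for k in range(n)
    ])
    # Distance to the image center (closer to center -> more salient).
    center_dist = np.linalg.norm(centroid - 0.5, axis=1)
    center_score = 1.0 - center_dist / (center_dist.max() + 1e-9)

    # Assumed equal-weight combination into one saliency score per superpixel.
    score = (contrast / (contrast.max() + 1e-9)
             + center_score
             + occupancy / (occupancy.max() + 1e-9)) / 3.0
    return score[labels]                            # per-pixel saliency map
```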
-
Patent number: 12320687
Abstract: Described here are systems and methods that utilize visual imagery and an optical flow-based computer vision algorithm to measure river velocity in streams or other flowing bodies of water. The systems and methods described in the present disclosure overcome the barriers of conventional flow measurement techniques by providing a fast, non-intrusive, remote method to measure peak flows.
Type: Grant
Filed: January 11, 2021
Date of Patent: June 3, 2025
Assignee: Marquette University
Inventors: Henry Ponti Medeiros, Walter Miller McDonald, Jamir Shariar Jyoti, Spencer M. Sebo
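The abstract does not name the specific optical-flow algorithm. A minimal sketch using OpenCV's Farneback dense flow, assuming a known ground sampling distance (meters_per_pixel) and frame rate, and taking the median flow magnitude as the surface speed:

```python
import cv2
import numpy as np

def surface_velocity_mps(frame_prev, frame_next, meters_per_pixel, fps):
    """Estimate river surface speed (m/s) from two consecutive frames.

    Farneback flow is an assumption; the patent only states that an
    optical-flow-based computer vision algorithm is used.
    """
    g0 = cv2.cvtColor(frame_prev, cv2.COLOR_BGR2GRAY)
    g1 = cv2.cvtColor(frame_next, cv2.COLOR_BGR2GRAY)
    # Args: prev, next, flow, pyr_scale, levels, winsize, iterations,
    #       poly_n, poly_sigma, flags
    flow = cv2.calcOpticalFlowFarneback(g0, g1, None, 0.5, 3, 15, 3, 5, 1.2, 0)
    speed_px_per_frame = np.linalg.norm(flow, axis=2)   # pixels per frame
    # Median over the water region suppresses outliers from debris/glare.
    return float(np.median(speed_px_per_frame)) * meters_per_pixel * fps
```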
-
Patent number: 12300035
Abstract: Techniques for liveness detection using motion, face, and context cues. The techniques can be implemented to protect against successful presentation attacks, video injection attacks, and deepfake attacks. In some examples, the techniques encompass receiving a set of video frames from a personal computing device. A first liveness determination can be made using a motion-based model based on the received video frames. A second liveness determination can be made using a face-based model based on the received video frames. A third liveness determination can be made using a context-based model based on the received video frames. A final liveness determination can be made based on the first, second and third liveness determinations.
Type: Grant
Filed: March 30, 2022
Date of Patent: May 13, 2025
Assignee: Amazon Technologies, Inc.
Inventors: Xiang Xu, Mingze Xu, Zheng Zhang, Yuanjun Xiong, Wei Xia, Jonathan Wu, Joseph P Tighe
-
Patent number: 12293027
Abstract: A method of using a remote server to operate a first appliance of a plurality of appliances connected to an external network includes receiving one or more images from a second appliance of the plurality of appliances, analyzing the one or more images to detect a hand gesture, identifying a responsive action associated with the hand gesture, and instructing the first appliance to implement the responsive action.
Type: Grant
Filed: August 15, 2023
Date of Patent: May 6, 2025
Assignee: Haier US Appliance Solutions, Inc.
Inventors: Haitian Hu, Adam Hofmann
-
Patent number: 12283130
Abstract: A method includes receiving a first sequence of images of a first subject captured over a time period in which the relative location of the image acquisition device with respect to the first subject varies. A first image and a second image are selected as representing first and second relative locations, respectively, of the image acquisition device with respect to the first subject. A first set of points and a second set of points are generated in a three-dimensional space, using the first image and the second image as a stereo pair, the first and second sets of points representing the first subject and a background, respectively. It is determined that a difference between a first depth associated with the first set of points and a second depth associated with the second set of points satisfies a threshold condition, and in response, access to a secure system is prevented.
Type: Grant
Filed: July 24, 2019
Date of Patent: April 22, 2025
Assignee: Jumio Corporation
Inventors: Gregory Lee Storm, Reza R. Derakhshani
-
Patent number: 12282696
Abstract: Using a pre-trained and fixed Vision Transformer (ViT) model as an external semantic prior, a generator is trained given only a single structure/appearance image pair as input. Given two input images, a source structure image and a target appearance image, a new image is generated by the generator in which the structure of the source image is preserved, while the visual appearance of the target image is transferred in a semantically aware manner, so that objects in the structure image are "painted" with the visual appearance of semantically related objects in the appearance image. A self-supervised, pre-trained ViT model, such as a DINO-ViT model, is leveraged as an external semantic prior, allowing for training of the generator only on a single input image pair, without any additional information (e.g., segmentation/correspondences), and without adversarial training. The method may generate high quality results in high resolution (e.g., HD).
Type: Grant
Filed: December 18, 2022
Date of Patent: April 22, 2025
Assignee: Yeda Research and Development Co. Ltd.
Inventors: Tali Dekel, Shai Bagon, Omer Bar Tal, Narek Tumanyan
-
Patent number: 12260720
Abstract: Systems and methods of selecting a video stream resolution are provided. In one exemplary embodiment, the method is performed by a network node operationally coupled over a network to a set of optical sensor devices positioned throughout a space, each operable to send at least one of a set of image streams to the network node. The method comprises receiving a first image stream of the set of image streams of a first optical sensor device, the first image stream being selected based on both a confidence level that at least one object is correctly detected from a second image stream received from the first optical sensor and the current network bandwidth utilization, so as to maintain the current network bandwidth utilization below a network bandwidth utilization threshold, with the first and second image streams having different resolutions and the first optical sensor having a viewing angle towards the detected object.
Type: Grant
Filed: March 31, 2022
Date of Patent: March 25, 2025
Assignee: Toshiba Global Commerce Solutions, Inc.
Inventor: David J. Steiner
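A minimal sketch of one possible stream-selection policy, with assumed names and values for the confidence and bandwidth thresholds (the patent does not give concrete values or a selection formula):

```python
def select_stream_resolution(detection_confidence, bandwidth_utilization,
                             available_resolutions,
                             confidence_threshold=0.8,
                             bandwidth_threshold=0.7):
    """Pick a resolution for the next requested image stream.

    Assumed policy: if the object is already detected confidently from the
    current stream and bandwidth headroom is low, step down to the lowest
    resolution to keep utilization below the threshold; otherwise request
    the highest available resolution.
    """
    resolutions = sorted(available_resolutions, key=lambda r: r[0] * r[1])
    if (detection_confidence >= confidence_threshold
            and bandwidth_utilization >= bandwidth_threshold):
        return resolutions[0]          # lowest resolution is sufficient
    return resolutions[-1]             # otherwise request full resolution

# Example usage
choice = select_stream_resolution(0.92, 0.75,
                                  [(640, 360), (1280, 720), (1920, 1080)])
```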
-
Patent number: 12252870
Abstract: A wear detection system can be configured to receive a video stream including a plurality of images of a bucket of the work machine from a camera associated with the work machine. The bucket has one or more ground engaging tools (GET). The wear detection system can also be configured to identify a plurality of tool images from the video stream over a period of time. The plurality of tool images depict the GET at a plurality of instances over the period of time. The wear detection system can also be configured to determine a plurality of tool pixel counts from the plurality of tool images and determine a wear level for the GET based on the plurality of tool pixel counts.
Type: Grant
Filed: October 30, 2020
Date of Patent: March 18, 2025
Assignee: Caterpillar Inc.
Inventors: Peter Joseph Petrany, Shastri Ram
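One plausible way to turn pixel counts into a wear level is linear interpolation between calibrated "new" and "fully worn" counts, with a trend fit to smooth frame-to-frame noise. This is a sketch under those assumptions, with hypothetical calibration constants, not the patent's method:

```python
import numpy as np

def wear_level(pixel_counts, timestamps, count_new=5000, count_worn=2000):
    """Estimate GET wear in [0, 1] from tool pixel counts over time.

    count_new / count_worn are hypothetical calibration constants for the
    tool's pixel area when new and when fully worn, at a fixed camera geometry.
    """
    t = np.asarray(timestamps, dtype=float)
    c = np.asarray(pixel_counts, dtype=float)
    slope, intercept = np.polyfit(t, c, deg=1)      # smooth the noisy counts
    current = slope * t[-1] + intercept
    wear = (count_new - current) / (count_new - count_worn)
    return float(np.clip(wear, 0.0, 1.0))
```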
-
Patent number: 12254041
Abstract: A position recognition method and a system based on visual information processing are disclosed. A position recognition method according to one embodiment includes the steps of: generating a frame image through a camera; transmitting, to a server, a first global pose of the camera and the generated frame image; and receiving, from the server, a second global pose of the camera estimated on the basis of a pose of an object included in the transmitted frame image.
Type: Grant
Filed: April 4, 2022
Date of Patent: March 18, 2025
Assignee: NAVER CORPORATION
Inventors: Dongcheol Hur, Yeong-Ho Jeong, Sangwook Kim
-
Patent number: 12254535
Abstract: A system and method include association of imaging event data to one of a plurality of bins based on a time associated with the imaging event data, determination that the time periods of a first bin and the time periods of a second bin are adjacent-in-time, determination of whether a spatial characteristic of the imaging event data of the first bin is within a predetermined threshold of the spatial characteristic of the imaging event data of the second bin, and, based on the determination, reconstruction of one or more images based on the imaging event data of the first bin and the second bin.
Type: Grant
Filed: October 10, 2019
Date of Patent: March 18, 2025
Assignee: Siemens Medical Solutions USA, Inc.
Inventors: Inki Hong, Ziad Burbar, Paul Schleyer
-
Patent number: 12254640
Abstract: In an object tracking device, the extraction means extracts target candidates from images in a time-series. The first setting means sets a first search range based on frame information and reliability of a target in a previous image in the time-series. The tracking means searches for the target among the target candidates extracted within the first search range, using the reliability indicating similarity to a target model, and tracks the target. The second setting means sets a second search range which includes the first search range and which is larger than the first search range. The model updating means updates the target model using the target candidates extracted within the first search range and the target candidates extracted within the second search range.
Type: Grant
Filed: March 16, 2020
Date of Patent: March 18, 2025
Assignee: NEC CORPORATION
Inventor: Takuya Ogawa
-
Patent number: 12249137
Abstract: A device may capture a plurality of preview frames of a document, and for each preview frame of the plurality of preview frames, process the preview frame to identify an object in the preview frame. Processing the preview frame may include converting the preview frame into a grayscale image, generating a blurred image based on the grayscale image, detecting a plurality of edges in the blurred image, defining at least one bounding rectangle based on the plurality of edges, and determining an outline of the object based on the at least one bounding rectangle. The device may determine whether a value of an image parameter, associated with the one or more preview frames, satisfies a threshold, and provide feedback to a user of the device, or automatically capture an image of the document, based on determining whether the value of the image parameter satisfies the threshold.
Type: Grant
Filed: December 21, 2023
Date of Patent: March 11, 2025
Assignee: Capital One Services, LLC
Inventors: Jason Pribble, Daniel Alan Jarvis, Nicholas Capurso
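A compact sketch of the described preview-frame pipeline using OpenCV; the specific blur kernel, Canny thresholds, and the Laplacian-variance sharpness check are assumptions, not taken from the patent:

```python
import cv2

def detect_document_outline(frame):
    """Grayscale -> blur -> edges -> bounding rectangle of the largest contour."""
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    blurred = cv2.GaussianBlur(gray, (5, 5), 0)
    edges = cv2.Canny(blurred, 50, 150)
    # OpenCV 4.x return signature: (contours, hierarchy)
    contours, _ = cv2.findContours(edges, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    if not contours:
        return None
    largest = max(contours, key=cv2.contourArea)
    return cv2.boundingRect(largest)          # (x, y, w, h) document outline

def frame_is_acceptable(frame, sharpness_threshold=100.0):
    """Assumed image-parameter check: variance of the Laplacian as sharpness."""
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    return cv2.Laplacian(gray, cv2.CV_64F).var() >= sharpness_threshold
```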
-
Patent number: 12243256
Abstract: Systems and techniques are provided for linking subjects in an area of real space with user accounts. The user accounts are linked with client applications executable on mobile computing devices. A plurality of cameras are disposed above the area. The cameras in the plurality of cameras produce respective sequences of images in corresponding fields of view in the real space. A processing system is coupled to the plurality of cameras. The processing system includes logic to determine locations of subjects represented in the images. The processing system further includes logic to match the identified subjects with user accounts by identifying locations of the mobile computing devices executing client applications in the area of real space and matching locations of the mobile computing devices with locations of the subjects.
Type: Grant
Filed: November 6, 2023
Date of Patent: March 4, 2025
Assignee: STANDARD COGNITION, CORP.
Inventors: Jordan E. Fisher, Warren Green, Daniel L. Fischetti
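A minimal sketch of matching tracked subject locations to mobile-device locations, assuming a simple optimal-assignment formulation with a distance gate (the patent does not specify the matching algorithm):

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def match_subjects_to_devices(subject_xy, device_xy, max_distance_m=2.0):
    """Return {subject_index: device_index} pairs whose assigned distance
    is below max_distance_m (a hypothetical gating threshold)."""
    subjects = np.asarray(subject_xy, dtype=float)   # N x 2 floor-plane coords
    devices = np.asarray(device_xy, dtype=float)     # M x 2 device locations
    cost = np.linalg.norm(subjects[:, None, :] - devices[None, :, :], axis=2)
    rows, cols = linear_sum_assignment(cost)
    return {int(r): int(c) for r, c in zip(rows, cols)
            if cost[r, c] <= max_distance_m}
```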
-
Patent number: 12243192
Abstract: An apparatus to facilitate video motion smoothing is disclosed. The apparatus comprises one or more processors including a graphics processor, the one or more processors including circuitry configured to receive a video stream, decode the video stream to generate a motion vector map and a plurality of video image frames, analyze the motion vector map to detect a plurality of candidate frames, wherein the plurality of candidate frames comprise a period of discontinuous motion in the plurality of video image frames and the plurality of candidate frames are determined based on a classification generated via a convolutional neural network (CNN), generate, via a generative adversarial network (GAN), one or more synthetic frames based on the plurality of candidate frames, insert the one or more synthetic frames between the plurality of candidate frames to generate up-sampled video frames, and transmit the up-sampled video frames for display.
Type: Grant
Filed: June 22, 2021
Date of Patent: March 4, 2025
Assignee: Intel Corporation
Inventors: Satyam Srivastava, Saurabh Tangri, Rajeev Nalawadi, Carl S. Marshall, Selvakumar Panneer
-
Patent number: 12236814
Abstract: A display method and a display system for an anti-dizziness reference image are provided. The display system includes a display, a range extraction unit, an information analyzing unit, an object analyzing unit and an image setting unit. The display is used to display the anti-dizziness reference image. The range extraction unit is used to obtain a gaze background range of a user. The image setting unit is used to set an image hue, an image lightness, an image brightness, an image content or an ambient lighting display content of the anti-dizziness reference image according to background hue information, background lightness information, background brightness information, or road information of the gaze background range; or to set an image ratio between the anti-dizziness reference image and a display area of the display according to an object distance or an object area of the watched object.
Type: Grant
Filed: May 26, 2023
Date of Patent: February 25, 2025
Assignee: INDUSTRIAL TECHNOLOGY RESEARCH INSTITUTE
Inventors: Ya-Rou Hsu, Chien-Ju Lee, Hong-Ming Dai, Yu-Hsiang Tsai, Chia-Hsun Tu, Kuan-Ting Chen
-
Patent number: 12229970
Abstract: In examples, when attempting to interpolate or extrapolate a frame based on motion vectors of two adjacent frames, there can be more than one pixel value mapped to a given location in the frame. To select between conflicting pixel values for the given location, similarities between the motion vectors of source pixels that cause the conflict and global flow may be evaluated. For example, a level of similarity for a motion vector may be computed using a similarity metric based at least on a difference between an angle of a global motion vector and an angle of the motion vector. The similarity metric may also be based at least on a difference between a magnitude of the global motion vector and a magnitude of the motion vector. The similarity metric may weigh the difference between the angles in proportion to the magnitude of the global motion vector.
Type: Grant
Filed: August 15, 2022
Date of Patent: February 18, 2025
Assignee: NVIDIA Corporation
Inventors: Aurobinda Maharana, Karthick Sekkappan, Rohit Naskulwar
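The abstract describes the similarity metric only qualitatively. Below is one way to write it, with the angle term weighted in proportion to the global vector's magnitude as stated; the combination weight and normalization are assumptions:

```python
import numpy as np

def motion_vector_similarity(mv, global_mv, w_mag=1.0):
    """Higher score = candidate motion vector agrees better with global flow.

    The angle difference is weighted in proportion to the magnitude of the
    global motion vector; the exact normalization and weights are assumed.
    """
    mv = np.asarray(mv, dtype=float)
    g = np.asarray(global_mv, dtype=float)
    mag_mv, mag_g = np.linalg.norm(mv), np.linalg.norm(g)

    angle_diff = abs(np.arctan2(mv[1], mv[0]) - np.arctan2(g[1], g[0]))
    angle_diff = min(angle_diff, 2 * np.pi - angle_diff)   # wrap to [0, pi]
    mag_diff = abs(mag_mv - mag_g)

    dissimilarity = mag_g * angle_diff + w_mag * mag_diff
    return -dissimilarity

# When two source pixels map to the same target location, keep the one whose
# motion vector scores higher, e.g.:
#   best = max(candidates, key=lambda mv: motion_vector_similarity(mv, global_mv))
```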
-
Patent number: 12231764
Abstract: An image capturing method includes: providing an image capturing area on a display screen of a user device; providing an indication area in the image capturing area; marking a license plate after identifying the license plate based on at least one license plate feature in the image capturing area; determining whether the marked license plate is located in the indication area and presented in a predetermined ratio; and capturing an image including the license plate in the image capturing area after the marked license plate is located in the indication area and presented in the predetermined ratio.
Type: Grant
Filed: July 28, 2022
Date of Patent: February 18, 2025
Assignee: GOGORO INC.
Inventors: Yi-Chia Lin, Chih-Min Fu, I-Fen Shih
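A sketch of the geometric check that the marked plate lies inside the indication area and is presented at (approximately) the predetermined ratio. The ratio definition (plate area over indication area) and the tolerance are assumptions:

```python
def plate_ready_for_capture(plate_box, indication_box,
                            target_ratio=0.25, tolerance=0.05):
    """Boxes are (x, y, w, h) in screen pixels.

    Returns True when the plate box is fully inside the indication area and
    the plate-to-indication area ratio is within tolerance of target_ratio.
    """
    px, py, pw, ph = plate_box
    ix, iy, iw, ih = indication_box
    inside = (px >= ix and py >= iy and
              px + pw <= ix + iw and py + ph <= iy + ih)
    ratio = (pw * ph) / float(iw * ih)
    return inside and abs(ratio - target_ratio) <= tolerance
```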
-
Patent number: 12229972
Abstract: Methods, systems, and apparatus, including computer programs encoded on computer storage media, for training a neural network to predict optical flow. One of the methods includes obtaining a batch of one or more training image pairs; for each of the pairs: processing the first training image and the second training image using the neural network to generate a final optical flow estimate; generating a cropped final optical flow estimate from the final optical flow estimate; and training the neural network using the cropped optical flow estimate.
Type: Grant
Filed: April 14, 2022
Date of Patent: February 18, 2025
Assignee: Waymo LLC
Inventors: Daniel Rudolf Maurer, Austin Charles Stone, Alper Ayvaci, Anelia Angelova, Rico Jonschkowski
-
Patent number: 12223022
Abstract: A method for transitioning between user profiles in an electronic user device during use of the electronic user device, wherein the electronic user device comprises a fingerprint sensor operatively connected to a touch sensor of the electronic user device, is disclosed. The method comprises sensing (103), by the fingerprint sensor, at least a part of a fingerprint at a determined position and area of a detected touch, and determining (104), by a fingerprint controller configured to control the fingerprint sensor, whether the sensed part of the fingerprint corresponds to a registered user of the electronic user device.
Type: Grant
Filed: June 28, 2019
Date of Patent: February 11, 2025
Assignee: TELEFONAKTIEBOLAGET LM ERICSSON (PUBL)
Inventors: Mohammed Zourob, Alexander Hunt, Andreas Kristensson
-
Patent number: 12209960
Abstract: A non-invasive tension measuring system is provided with at least one pair of sensors positioned longitudinally and at a predetermined distance from each other along a linear system. At least one operationally connected processor executes instructions to detect, using the sensors, a transverse wave propagating along the linear system in order to determine a time delay of the transverse wave, to determine a propagation speed of the transverse wave based on the time delay, and to determine a tension of the unit length of the linear system based at least in part on the propagation speed and a mass per the unit length of the linear system.
Type: Grant
Filed: March 16, 2022
Date of Patent: January 28, 2025
Inventors: Anthony A Ruffa, Brian K Amaral
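The underlying physics is the standard transverse-wave relation for a line under tension: the wave speed is v = d/Δt for sensor spacing d and measured time delay Δt, and the tension follows from T = μ·v², where μ is the mass per unit length. A tiny sketch of that calculation (the patent's exact processing chain may differ):

```python
def tension_from_wave(delay_s: float, sensor_spacing_m: float,
                      mass_per_length_kg_m: float) -> float:
    """T = mu * v**2 with v = d / dt (standard string-wave relation)."""
    speed = sensor_spacing_m / delay_s          # propagation speed (m/s)
    return mass_per_length_kg_m * speed ** 2    # tension (N)

# Example: sensors 0.5 m apart, 4 ms delay, 0.1 kg/m line -> 1562.5 N
print(tension_from_wave(0.004, 0.5, 0.1))
```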
-
Patent number: 12197537
Abstract: A non-transitory computer-readable recording medium stores a program for causing a computer to execute a process, the process including inputting an accepted image to a first model generated through machine learning based on a composite image and information, the composite image being obtained by combining a first plurality of images each of which includes one area, the information indicating a combination state of the first plurality of images in the composite image, inputting a first image among a second plurality of images output by the first model to a second model generated through machine learning based on an image which includes one area and an image which includes a plurality of areas, and determining whether to input the first image to the first model, based on a result output by the second model in response to the inputting of the first image.
Type: Grant
Filed: January 25, 2022
Date of Patent: January 14, 2025
Assignee: FUJITSU LIMITED
Inventor: Ander Martinez
-
Patent number: 12192625
Abstract: A method for mitigating motion blur in a visual-inertial tracking system is described. In one aspect, the method includes accessing a first image generated by an optical sensor of the visual tracking system, accessing a second image generated by the optical sensor of the visual tracking system, the second image following the first image, determining a first motion blur level of the first image, determining a second motion blur level of the second image, identifying a scale change between the first image and the second image, determining a first optimal scale level for the first image based on the first motion blur level and the scale change, and determining a second optimal scale level for the second image based on the second motion blur level and the scale change.
Type: Grant
Filed: May 22, 2023
Date of Patent: January 7, 2025
Assignee: SNAP INC.
Inventors: Matthias Kalkgruber, Daniel Wolf
-
Patent number: 12182906
Abstract: Provided are a dynamic fluid display method and apparatus, an electronic device, and a readable medium. The method includes: detecting a target object on a user display interface; obtaining attribute information of the target object; determining, on the user display interface based on the attribute information of the target object, a change of a parameter of a fluid at each target texture pixel associated with the target object; and displaying a dynamic fluid on the user display interface based on the change of the parameter of the fluid.
Type: Grant
Filed: June 23, 2023
Date of Patent: December 31, 2024
Assignee: BEIJING ZITIAO NETWORK TECHNOLOGY CO., LTD.
Inventors: Qi Li, Xiaoqi Li
-
Patent number: 12172066
Abstract: Exemplary embodiments of the present disclosure are directed to systems, methods, and computer-readable media configured to autonomously track a round of golf and/or autonomously generate personalized recommendations for a user before, during, or after a round of golf. The systems and methods can utilize course data, environmental data, user data, and/or equipment data in conjunction with one or more machine learning algorithms to autonomously generate the personalized recommendations.
Type: Grant
Filed: January 10, 2022
Date of Patent: December 24, 2024
Assignee: Arccos Golf LLC
Inventors: Salman Hussain Syed, Colin David Phillips, Stephen Obsitnik, Ryan Stafford Johnson, David Thomas LeDonne, Owais Murad Hussain Syed, Fabrice Blanc
-
Information output device, camera, information output system, information output method, and program
Patent number: 12177191
Abstract: An information output device includes: a first output unit that outputs acquired information acquired by a sensor; and a second output unit that converts personal information included in the acquired information into attribute information from which identification of an individual is impossible, and outputs the attribute information.
Type: Grant
Filed: December 6, 2021
Date of Patent: December 24, 2024
Assignee: NEC CORPORATION
Inventor: Akira Kato
-
Patent number: 12169947
Abstract: Various implementations disclosed herein include devices, systems, and methods for generating body pose information for a person in a physical environment. In various implementations, a device includes an environmental sensor, a non-transitory memory and one or more processors coupled with the environmental sensor and the non-transitory memory. In some implementations, a method includes obtaining, via the environmental sensor, spatial data corresponding to a physical environment. The physical environment includes a person and a fixed spatial point. The method includes identifying a portion of the spatial data that corresponds to a body portion of the person. In some implementations, the method includes determining a position of the body portion relative to the fixed spatial point based on the portion of the spatial data. In some implementations, the method includes generating pose information for the person based on the position of the body portion in relation to the fixed spatial point.
Type: Grant
Filed: March 21, 2022
Date of Patent: December 17, 2024
Assignee: APPLE INC.
Inventors: Stefan Auer, Sebastian Bernhard Knorr
-
Patent number: 12165101
Abstract: In some embodiments, systems and methods are provided to recognize retail products, comprising: a model training system configured to: identify a customer; access an associated customer profile; access and apply a set of filtering rules to a product database based on customer data; generate a listing of products specific to the customer; access and apply a model training set of rules to train a machine learning model based on the listing of products and corresponding image data for each of the products in the listing of products; and communicate the trained model to the portable user device associated with the first customer.
Type: Grant
Filed: January 9, 2024
Date of Patent: December 10, 2024
Assignee: Walmart Apollo, LLC
Inventors: Michael A. Garner, Priyanka Paliwal
-
Patent number: 12164023
Abstract: A security inspection apparatus and a method of controlling the same are described. An example security inspection apparatus includes a bottom plate configured to carry an inspected object and a two-dimensional multi-input multi-output array panel including at least one two-dimensional multi-input multi-output sub-array. Each two-dimensional multi-input multi-output sub-array includes transmitting antennas and receiving antennas arranged such that equivalent phase centers are arranged in a two-dimensional array. The security inspection apparatus includes a control circuit configured to control the transmitting antennas to transmit a detection signal in a form of an electromagnetic wave to the inspected object in a preset order, and to control the receiving antennas to receive an echo signal from the inspected object.
Type: Grant
Filed: June 28, 2021
Date of Patent: December 10, 2024
Assignees: Tsinghua University, Nuctech Company Limited
Inventors: Zhiqiang Chen, Yan You, Ziran Zhao, Xuming Ma, Kai Wang
-
Patent number: 12154284
Abstract: Systems and methods are described for generating a three-dimensional track of a ball in a gaming environment from multiple cameras. In some examples, at least two input videos, each including frames of a ball moving in a gaming environment recorded by a camera, may be obtained, along with a camera projection matrix that maps a two-dimensional pixel space representation to a three-dimensional representation of the gaming environment. Candidate two-dimensional image locations of the ball across the plurality of frames of the at least two input videos may be identified using neural network or computer vision techniques. An optimization algorithm may be performed that uses a 3D ball physics model, the camera projection matrix and a subset of the candidate two-dimensional image locations of the ball from the at least two input videos to generate a three-dimensional track of the ball in the gaming environment.
Type: Grant
Filed: September 23, 2022
Date of Patent: November 26, 2024
Assignee: MAIDEN AI, INC.
Inventors: Vivek Jayaram, Brogan McPartland, Arjun Verma
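A sketch of the optimization step: fit a constant-gravity ball trajectory to 2D detections from calibrated cameras by minimizing reprojection error. The parameterization (initial position and velocity) and the use of scipy.optimize.least_squares are assumptions about one reasonable implementation, not the patent's exact formulation:

```python
import numpy as np
from scipy.optimize import least_squares

G = np.array([0.0, 0.0, -9.81])          # gravity in the world frame (m/s^2)

def ball_position(params, t):
    """3D positions at times t for initial position p0 and velocity v0."""
    t = np.asarray(t, dtype=float)
    p0, v0 = params[:3], params[3:]
    return p0 + v0 * t[:, None] + 0.5 * G * t[:, None] ** 2

def project(P, xyz):
    """Project world points (N x 3) with a 3x4 camera projection matrix."""
    homo = np.hstack([xyz, np.ones((len(xyz), 1))])
    uvw = homo @ P.T
    return uvw[:, :2] / uvw[:, 2:3]

def fit_track(detections):
    """detections: list of (P, t, uv) per camera, with P 3x4, t (N,), uv (N, 2)."""
    def residuals(params):
        res = []
        for P, t, uv in detections:
            res.append((project(P, ball_position(params, t)) - uv).ravel())
        return np.concatenate(res)
    x0 = np.zeros(6)                      # crude initialization
    return least_squares(residuals, x0).x # [p0 (3), v0 (3)]
```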
-
Patent number: 12154035
Abstract: A method and system of training a machine learning neural network (MLNN) in monitoring anatomical positioning. The method comprises receiving, in a first input layer, from a millimeter wave (mmWave) radar device, mmWave point cloud data representing spatial positions associated with a medical patient during successive changes in the spatial positions and corresponding durations between changes, the mmWave data being based upon detected range and reflected wireless signal strength; receiving, in a second input layer of the MLNN, attribute data for the corresponding durations, the first and second input layers being interconnected with an output layer via an intermediate layer configured with an initial matrix of weights; training an MLNN classifier using classification that establishes a correlation between the mmWave radar point cloud data, the attribute data, and a likelihood of formation of bodily pressure ulcers (BPUs) generated at the output layer; and producing a trained MLNN based on increasing the correlation.
Type: Grant
Filed: February 22, 2023
Date of Patent: November 26, 2024
Assignee: Ventech Solutions, Inc.
Inventors: Ravi Kiran Pasupuleti, Ravi Kunduru
-
Patent number: 12133725
Abstract: A gait analysis apparatus 10 includes a data acquisition unit 11 that acquires three-dimensional point cloud data of a human to be analyzed, a center of gravity location calculation unit 12 that calculates coordinates of a center of gravity location on the three-dimensional point cloud data of the human to be analyzed by using coordinates of each point constituting the acquired three-dimensional point cloud data, and a gait index calculation unit 13 that calculates a gait index of the human to be analyzed by using the calculated center of gravity location.
Type: Grant
Filed: February 12, 2020
Date of Patent: November 5, 2024
Assignee: NEC Solution Innovators, Ltd.
Inventors: Hiroki Terashima, Katsuyuki Nagai
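The center-of-gravity location in this setting is typically the coordinate-wise mean of the point cloud. A sketch of that calculation, with a simple lateral-sway gait index added as an assumed example (the abstract does not define the index):

```python
import numpy as np

def center_of_gravity(points):
    """points: N x 3 array of 3D point-cloud coordinates for one frame."""
    return np.asarray(points, dtype=float).mean(axis=0)

def lateral_sway_index(frames):
    """Assumed gait index: peak-to-peak lateral (x) motion of the center
    of gravity over a sequence of point-cloud frames."""
    cogs = np.array([center_of_gravity(f) for f in frames])
    return float(cogs[:, 0].max() - cogs[:, 0].min())
```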
-
Patent number: 12131718
Abstract: A motion detection section 720 detects a motion exceeding a permissible limit in a wide-viewing-angle image displayed on a head-mounted display 100. A field-of-view restriction processing section 750 restricts a field of view for observing the wide-viewing-angle image in a case in which the motion exceeding the permissible limit has been detected in the wide-viewing-angle image. An image provision section 760 provides, for the head-mounted display 100, the wide-viewing-angle image in which the field of view has been restricted.
Type: Grant
Filed: July 22, 2021
Date of Patent: October 29, 2024
Assignee: SONY INTERACTIVE ENTERTAINMENT INC.
Inventor: Yasushi Okumura
-
Patent number: 12094197
Abstract: A method for removing extraneous content in a first plurality of images, captured at a corresponding plurality of poses and a corresponding first plurality of times, by a first drone, of a scene in which a second drone is present includes the following steps, for each of the first plurality of captured images. The first drone predicts a 3D position of the second drone at a time of capture of that image. The first drone defines, in an image plane corresponding to that captured image, a region of interest (ROI) including a projection of the predicted 3D position of the second drone at a time of capture of that image. A drone mask for the second drone is generated, and then applied to the defined ROI, to generate an output image free of extraneous content contributed by the second drone.
Type: Grant
Filed: June 10, 2021
Date of Patent: September 17, 2024
Assignees: SONY GROUP CORPORATION, SONY CORPORATION OF AMERICA
Inventor: Cheng-Yi Liu
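A sketch of the per-frame masking step: project the predicted 3D drone position into the image with the camera's 3x4 projection matrix, build a square ROI around it, and blank that region. The ROI half-size and the zero-fill are assumptions; the patent generates and applies a drone mask, and inpainting could replace the zero fill:

```python
import numpy as np

def mask_second_drone(image, P, drone_xyz, half_size_px=40):
    """image: H x W x C array; P: 3x4 projection matrix; drone_xyz: (3,) world point."""
    uvw = P @ np.append(drone_xyz, 1.0)
    u, v = uvw[:2] / uvw[2]                       # projected pixel location
    h, w = image.shape[:2]
    x0, x1 = max(0, int(u - half_size_px)), min(w, int(u + half_size_px))
    y0, y1 = max(0, int(v - half_size_px)), min(h, int(v + half_size_px))
    out = image.copy()
    out[y0:y1, x0:x1] = 0                         # apply the drone mask to the ROI
    return out
```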
-
Patent number: 12094127
Abstract: There is provided an image processing apparatus for measuring a flow of a measurement target based on a video. A detection line indicating a position at which the flow of the measurement target in the video is measured is set. From each of a plurality of images in the video, a plurality of partial images set in a vicinity of the detection line are extracted. The flow of the measurement target passing the detection line is measured using the partial images.
Type: Grant
Filed: December 6, 2021
Date of Patent: September 17, 2024
Assignee: Canon Kabushiki Kaisha
Inventors: Hajime Muta, Yasuo Bamba
-
Patent number: 12087077
Abstract: In various examples, sensor data, such as masked sensor data, may be used as input to a machine learning model to determine a confidence for object to person associations. The masked sensor data may focus the machine learning model on particular regions of the image that correspond to persons, objects, or some combination thereof. In some embodiments, coordinates corresponding to persons, objects, or combinations thereof, in addition to area ratios between various regions of the image corresponding to the persons, objects, or combinations thereof, may be used to further aid the machine learning model in focusing on important regions of the image for determining the object to person associations.
Type: Grant
Filed: July 5, 2023
Date of Patent: September 10, 2024
Assignee: NVIDIA Corporation
Inventors: Parthasarathy Sriram, Fnu Ratnesh Kumar, Anil Ubale, Farzin Aghdasi, Yan Zhai, Subhashree Radhakrishnan
-
Patent number: 12079995
Abstract: A method of image segmentation includes receiving one or more images; determining a loss component; for each pixel of one image of the one or more images, identifying a majority class and identifying a cross-entropy loss between a network output and a target; randomly selecting pixels associated with the one image and selecting a second set of pixels to compute a super pixel loss for each pair of pixels; summing the corresponding loss associated with each pair of pixels; for each corresponding frame of the plurality of frames of the image, computing a flow loss, a negative flow loss, a contrastive optical flow loss, and an equivariant optical flow loss; computing a final loss including a weighted average of the flow loss, the cross-entropy loss, the super pixel loss, and the foreground loss; updating a network parameter; and outputting a trained neural network.
Type: Grant
Filed: September 28, 2021
Date of Patent: September 3, 2024
Assignee: Robert Bosch GmbH
Inventors: Chirag Pabbaraju, João D. Semedo, Wan-Yi Lin
-
Patent number: 12079998
Abstract: A system identifies a movement and generates prescriptive analytics of that movement. To identify a movement, the system accesses an image of an observation volume where users execute movements. The system identifies a location including an active region in the image. The active region includes a movement region and a surrounding region. The system identifies musculoskeletal points of a user in the location and determines when the user enters the active region. The system identifies a movement of the user in the active region based on the time evolution of key-points in the active region. The system determines descriptive analytics describing the movement. Based on the descriptive analytics, the system generates prescriptive analytics for the movement and provides the prescriptive analytics to the user. The prescriptive analytics may inform future and/or current movements of the user.
Type: Grant
Filed: April 15, 2021
Date of Patent: September 3, 2024
Assignee: Uplift Labs, Inc.
Inventors: Rahul Rajan, Jonathan D. Wills, Sukemasa Kabayama
-
Patent number: 12033353
Abstract: A basic pattern extracting unit (15) extracts a "basic pattern" for each human from detection points acquired by an acquiring unit (11). The "basic pattern" includes a "reference body region point" corresponding to a "reference body region type", and base body region points corresponding to base body region types that are different from the reference body region type and that are different from each other. For example, the "basic pattern" includes at least one of the following two combinations. A first combination is a combination of the reference body region point corresponding to a neck as the reference body region type and two base body region points respectively corresponding to a left shoulder and a left ear as the base body region type. A second combination is a combination of body region points corresponding to a neck, a right shoulder and a right ear.
Type: Grant
Filed: July 22, 2019
Date of Patent: July 9, 2024
Assignee: NEC CORPORATION
Inventors: Yadong Pan, Shoji Nishimura
-
Patent number: 12022054
Abstract: A volumetric display may include a two-dimensional display; a varifocal optical system configured to receive image light from the two-dimensional display and focus the image light; and at least one processor configured to: control the two-dimensional display to cause a plurality of sub-frames associated with an image frame to be displayed by the display, wherein each sub-frame of the plurality of sub-frames includes a corresponding portion of image data associated with the image frame; and control the varifocal optical system to a corresponding focal state for each respective sub-frame.
Type: Grant
Filed: January 31, 2022
Date of Patent: June 25, 2024
Assignee: Meta Platforms Technologies, LLC
Inventors: Afsoon Jamali, Yang Zhao, Wai Sze Tiffany Lam, Lu Lu, Douglas Robert Lanman
-
Patent number: 12008454
Abstract: Systems and methods for generating motion forecast data for actors with respect to an autonomous vehicle and training a machine learned model for the same are disclosed. The computing system can include an object detection model and a graph neural network including a plurality of nodes and a plurality of edges. The computing system can be configured to input sensor data into the object detection model; receive object detection data describing the location of the plurality of the actors relative to the autonomous vehicle as an output of the object detection model; input the object detection data into the graph neural network; iteratively update a plurality of node states respectively associated with the plurality of nodes; and receive, as an output of the graph neural network, the motion forecast data with respect to the plurality of actors.
Type: Grant
Filed: March 20, 2023
Date of Patent: June 11, 2024
Assignee: UATC, LLC
Inventors: Raquel Urtasun, Renjie Liao, Sergio Casas, Cole Christian Gulino
-
Patent number: 11972352
Abstract: Methods, systems, and apparatus for motion-based human video detection are disclosed. A method includes generating a representation of a difference between two frames of a video; providing, to an object detector, a particular frame of the two frames and the representation of the difference between the two frames of the video; receiving an indication that the object detector detected an object in the particular frame; determining that detection of the object in the particular frame was a false positive detection; determining an amount of motion energy where the object was detected in the particular frame; and training the object detector based on penalization of the false positive detection in accordance with the amount of motion energy where the object was detected in the particular frame.
Type: Grant
Filed: November 4, 2022
Date of Patent: April 30, 2024
Assignee: ObjectVideo Labs, LLC
Inventors: Sima Taheri, Gang Qian, Sung Chun Lee, Sravanthi Bondugula, Allison Beach
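A sketch of the motion-energy computation used to weight the false-positive penalty: frame differencing over the detection region, with the penalty scaling being an assumption (the abstract only states that penalization is "in accordance with" the motion energy):

```python
import numpy as np

def motion_energy(frame_a, frame_b, box):
    """Mean absolute intensity change inside the detection box (x, y, w, h)."""
    x, y, w, h = box
    diff = np.abs(frame_b.astype(float) - frame_a.astype(float))
    return float(diff[y:y + h, x:x + w].mean())

def false_positive_weight(energy, low=2.0, high=20.0):
    """Assumed weighting: a false detection in a region with little motion
    energy is penalized more heavily than one in a region with real motion."""
    scale = np.clip((energy - low) / (high - low), 0.0, 1.0)
    return 1.0 + (1.0 - scale)        # training-loss weight in [1, 2]
```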
-
Patent number: 11972554
Abstract: A detection device includes an acquirer that acquires a video of each of a plurality of bearings of a structure including the plurality of bearings, an extractor that extracts a dynamic feature corresponding to a plurality of degrees of freedom of each of the plurality of bearings based on the video, and an identifier that identifies, among the plurality of bearings, a bearing whose dynamic feature fails to match a dynamic feature of one or more other bearings of the plurality of bearings.
Type: Grant
Filed: December 23, 2020
Date of Patent: April 30, 2024
Assignee: PANASONIC INTELLECTUAL PROPERTY MANAGEMENT CO., LTD.
Inventors: Taro Imagawa, Akihiro Noda, Yuki Maruyama, Hiroya Kusaka
-
Patent number: 11966413
Abstract: In one embodiment, a first deep fusion reasoning engine (DFRE) agent in a network receives first sensor data from a first set of one or more sensors in the network. The first DFRE agent translates the first sensor data into symbolic data. The first DFRE agent applies, using a symbolic knowledge base maintained by the first DFRE agent, symbolic reasoning to the symbolic data to make an inference regarding the first sensor data. The first DFRE agent updates, based on the inference regarding the first sensor data, the knowledge base. The first DFRE agent propagates the inference to one or more other DFRE agents in the network.
Type: Grant
Filed: March 6, 2020
Date of Patent: April 23, 2024
Assignee: Cisco Technology, Inc.
Inventors: Hugo Latapie, Enzo Fenoglio, Carlos M. Pignataro, Nagendra Kumar Nainar, David Delano Ward
-
Patent number: 11953618
Abstract: Methods, apparatus and systems for wireless motion recognition are described. In one example, a described system comprises: a transmitter configured for transmitting a first wireless signal through a wireless multipath channel of a venue; a receiver configured for receiving a second wireless signal through the wireless multipath channel; and a processor. The second wireless signal differs from the first wireless signal due to the wireless multipath channel that is impacted by a motion of an object in the venue. The processor is configured for: obtaining a time series of channel information (TSCI) of the wireless multipath channel based on the second wireless signal, tracking the motion of the object based on the TSCI to generate a gesture trajectory of the object, and determining a gesture shape based on the gesture trajectory and a plurality of pre-determined gesture shapes.
Type: Grant
Filed: February 20, 2021
Date of Patent: April 9, 2024
Assignee: ORIGIN RESEARCH WIRELESS, INC.
Inventors: Sai Deepika Regani, Beibei Wang, Min Wu, K. J. Ray Liu, Oscar Chi-Lim Au
-
Patent number: 11954801
Abstract: A method for virtually representing human body poses includes receiving positioning data detailing parameters of one or more body parts of a human user based at least in part on input from one or more sensors. One or more mapping constraints are maintained that relate a model articulated representation to a target articulated representation. A model pose of the model articulated representation and a target pose of the target articulated representation are concurrently estimated based at least in part on the positioning data and the one or more mapping constraints. The previously-trained pose optimization machine is trained with training positioning data having ground truth labels for the model articulated representation. The target articulated representation is output for display with the target pose as a virtual representation of the human user.
Type: Grant
Filed: April 11, 2022
Date of Patent: April 9, 2024
Assignee: Microsoft Technology Licensing, LLC
Inventors: Thomas Joseph Cashman, Erroll William Wood, Federica Bogo, Sasa Galic, Pashmina Jonathan Cameron
-
Patent number: 11936847
Abstract: A video processing method includes dividing a region of a current frame to obtain a plurality of image blocks, obtaining a historical motion information candidate list, and obtaining candidate historical motion information for the plurality of image blocks according to the historical motion information candidate list. The candidate historical motion information is a candidate in the historical motion information candidate list. The method further includes performing prediction for the plurality of image blocks according to the candidate historical motion information. A size of each of the plurality of image blocks is smaller than or equal to a preset size. The same historical motion information candidate list is used for the plurality of image blocks during the prediction. The historical motion information candidate list is not updated while the prediction is being performed for the plurality of image blocks.
Type: Grant
Filed: June 29, 2021
Date of Patent: March 19, 2024
Assignee: SZ DJI TECHNOLOGY CO., LTD.
Inventors: Suhong Wang, Xiaozhen Zheng, Shanshe Wang, Siwei Ma