Motion Or Velocity Measuring Patents (Class 382/107)
  • Patent number: 12260720
    Abstract: Systems and methods of selecting a video stream resolution are provided. In one exemplary embodiment, a method is performed by a network node operationally coupled over a network to a set of optical sensor devices positioned throughout a space, each operable to send at least one of a set of image streams to the network node. The method comprises receiving a first image stream from a first optical sensor device, the first image stream being selected based on both a confidence level that at least one object is correctly detected from a second image stream received from the first optical sensor device and the current network bandwidth utilization, so as to maintain that utilization below a network bandwidth utilization threshold, with the first and second image streams having different resolutions and the first optical sensor device having a viewing angle towards the detected object.
    Type: Grant
    Filed: March 31, 2022
    Date of Patent: March 25, 2025
    Assignee: Toshiba Global Commerce Solutions, Inc.
    Inventor: David J. Steiner
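    Illustrative sketch (not part of the patent record): a minimal Python example of the kind of selection rule this abstract describes, switching to a lower-resolution stream when detection confidence is high and bandwidth headroom is small. The resolution ladder, thresholds, and function names are assumptions for illustration only.

      # Illustrative only: the resolution ladder and thresholds are assumed values.
      RESOLUTIONS = ["1080p", "720p", "480p"]  # ordered high to low

      def select_stream(confidence: float, bw_utilization: float,
                        bw_threshold: float = 0.8, conf_threshold: float = 0.9) -> str:
          """Pick a resolution that keeps network bandwidth utilization below the threshold."""
          if confidence >= conf_threshold and bw_utilization >= bw_threshold:
              return RESOLUTIONS[-1]   # object confidently detected: cheapest stream suffices
          if bw_utilization >= bw_threshold:
              return RESOLUTIONS[1]    # under bandwidth pressure: step down one level
          return RESOLUTIONS[0]        # plenty of headroom: keep full resolution

      print(select_stream(confidence=0.95, bw_utilization=0.85))  # -> 480p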
  • Patent number: 12254535
    Abstract: A system and method include association of imaging event data to one of a plurality of bins based on a time associated with the imaging event data, determination that the time periods of a first bin and the time periods of a second bin are adjacent-in-time, determination of whether a spatial characteristic of the imaging event data of the first bin is within a predetermined threshold of the spatial characteristic of the imaging event data of the second bin, and, based on the determination, reconstruction of one or more images based on the imaging event data of the first bin and the second bin.
    Type: Grant
    Filed: October 10, 2019
    Date of Patent: March 18, 2025
    Assignee: Siemens Medical Solutions USA, Inc.
    Inventors: Inki Hong, Ziad Burbar, Paul Schleyer
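    Illustrative sketch (not part of the patent record): one way to express the time-binning and adjacency/spatial gating described above in Python. The bin width, the spatial threshold, the use of mean position as the spatial characteristic, and the event layout are assumptions.

      # Assumed data layout: each imaging event is (time_s, x_mm); bins are fixed time windows.
      from collections import defaultdict

      BIN_WIDTH_S = 1.0
      SPATIAL_THRESHOLD_MM = 5.0

      def bin_events(events):
          bins = defaultdict(list)
          for t, x in events:
              bins[int(t // BIN_WIDTH_S)].append((t, x))
          return bins

      def reconstruct_together(bin_a, bin_b, idx_a, idx_b):
          """Bins qualify for joint reconstruction if adjacent in time and spatially close."""
          adjacent = abs(idx_a - idx_b) == 1
          mean_a = sum(x for _, x in bin_a) / len(bin_a)
          mean_b = sum(x for _, x in bin_b) / len(bin_b)
          return adjacent and abs(mean_a - mean_b) <= SPATIAL_THRESHOLD_MM

      bins = bin_events([(0.2, 10.0), (0.8, 11.0), (1.1, 12.0), (5.0, 40.0)])
      print(reconstruct_together(bins[0], bins[1], 0, 1))  # True: adjacent and within 5 mm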
  • Patent number: 12252870
    Abstract: A wear detection system can be configured to receive a video stream including a plurality of images of a bucket of the work machine from a camera associated with the work machine. The bucket has one or more ground engaging tools (GET). The wear detection system can also be configured to identify a plurality of tool images from the video stream over a period of time. The plurality of tool images depict the GET at a plurality of instances over a period of time. The wear detection system can also be configured to determine a plurality of tool pixel counts from the plurality of tool images and determine a wear level for the GET based on the plurality of tool pixel counts.
    Type: Grant
    Filed: October 30, 2020
    Date of Patent: March 18, 2025
    Assignee: Caterpillar Inc.
    Inventors: Peter Joseph Petrany, Shastri Ram
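    Illustrative sketch (not part of the patent record): mapping tool pixel counts to a wear level as the abstract outlines. The calibration constants (pixel counts for a new and a fully worn tool) and the linear mapping are assumptions.

      import numpy as np

      PIXELS_NEW, PIXELS_WORN = 5000, 2000   # assumed calibration for one ground engaging tool

      def wear_level(tool_pixel_counts):
          """Average the recent pixel counts and map them linearly to a wear value in [0, 1]."""
          mean_count = float(np.mean(tool_pixel_counts))
          wear = (PIXELS_NEW - mean_count) / (PIXELS_NEW - PIXELS_WORN)
          return float(np.clip(wear, 0.0, 1.0))

      counts_over_time = [4800, 4650, 4400, 4300]        # counts from successive tool images
      print(f"estimated wear: {wear_level(counts_over_time):.2f}")  # -> 0.15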
  • Patent number: 12254041
    Abstract: A position recognition method and a system based on visual information processing are disclosed. A position recognition method according to one embodiment includes the steps of: generating a frame image through a camera; transmitting, to a server, a first global pose of the camera and the generated frame image; and receiving, from the server, a second global pose of the camera estimated on the basis of a pose of an object included in the transmitted frame image.
    Type: Grant
    Filed: April 4, 2022
    Date of Patent: March 18, 2025
    Assignee: NAVER CORPORATION
    Inventors: Dongcheol Hur, Yeong-Ho Jeong, Sangwook Kim
  • Patent number: 12254640
    Abstract: In an object tracking device, the extraction means extracts target candidates from images in a time-series. The first setting means sets a first search range based on frame information and reliability of a target in a previous image in the time-series. The tracking means searches for the target among the target candidates extracted within the first search range, using the reliability indicating similarity to a target model, and tracks the target. The second setting means sets a second search range which includes the first search range and which is larger than the first search range. The model updating means updates the target model using the target candidates extracted within the first search range and the target candidates extracted within the second search range.
    Type: Grant
    Filed: March 16, 2020
    Date of Patent: March 18, 2025
    Assignee: NEC CORPORATION
    Inventor: Takuya Ogawa
  • Patent number: 12249137
    Abstract: A device may capture a plurality of preview frames of a document, and for each preview frame of the plurality of preview frames, process the preview frame to identify an object in the preview frame. Processing the preview frame may include converting the preview frame into a grayscale image, generating a blurred image based on the grayscale image, detecting a plurality of edges in the blurred image, defining at least one bounding rectangle based on the plurality of edges, and determining an outline of the object based on the at least one bounding rectangle. The device may determine whether a value of an image parameter, associated with the one or more preview frames, satisfies a threshold, and provide feedback to a user of the device, or automatically capture an image of the document, based on determining whether the value of the image parameter satisfies the threshold.
    Type: Grant
    Filed: December 21, 2023
    Date of Patent: March 11, 2025
    Assignee: Capital One Services, LLC
    Inventors: Jason Pribble, Daniel Alan Jarvis, Nicholas Capurso
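    Illustrative sketch (not part of the patent record): the grayscale/blur/edge/bounding-rectangle steps listed in the abstract, written with OpenCV. Assumes OpenCV 4 (cv2); the kernel size and Canny thresholds are arbitrary illustrative values.

      import cv2

      def document_outline(frame):
          """Return a bounding rectangle (x, y, w, h) around the largest edge contour, or None."""
          gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
          blurred = cv2.GaussianBlur(gray, (5, 5), 0)
          edges = cv2.Canny(blurred, 75, 200)
          contours, _ = cv2.findContours(edges, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
          if not contours:
              return None
          largest = max(contours, key=cv2.contourArea)
          return cv2.boundingRect(largest)   # outline of the document candidate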
  • Patent number: 12243192
    Abstract: An apparatus to facilitate video motion smoothing is disclosed. The apparatus comprises one or more processors including a graphics processor, the one or more processors including circuitry configured to receive a video stream, decode the video stream to generate a motion vector map and a plurality of video image frames, analyze the motion vector map to detect a plurality of candidate frames, wherein the plurality of candidate frames comprise a period of discontinuous motion in the plurality of video image frames and the plurality of candidate frames are determined based on a classification generated via a convolutional neural network (CNN), generate, via a generative adversarial network (GAN), one or more synthetic frames based on the plurality of candidate frames, insert the one or more synthetic frames between the plurality of candidate frames to generate up-sampled video frames and transmit the up-sampled video frames for display.
    Type: Grant
    Filed: June 22, 2021
    Date of Patent: March 4, 2025
    Assignee: Intel Corporation
    Inventors: Satyam Srivastava, Saurabh Tangri, Rajeev Nalawadi, Carl S. Marshall, Selvakumar Panneer
  • Patent number: 12243256
    Abstract: Systems and techniques are provided for linking subjects in an area of real space with user accounts. The user accounts are linked with client applications executable on mobile computing devices. A plurality of cameras are disposed above the area. The cameras in the plurality of cameras produce respective sequences of images in corresponding fields of view in the real space. A processing system is coupled to the plurality of cameras. The processing system includes logic to determine locations of subjects represented in the images. The processing system further includes logic to match the identified subjects with user accounts by identifying locations of the mobile computing devices executing client applications in the area of real space and matching locations of the mobile computing devices with locations of the subjects.
    Type: Grant
    Filed: November 6, 2023
    Date of Patent: March 4, 2025
    Assignee: STANDARD COGNITION, CORP.
    Inventors: Jordan E. Fisher, Warren Green, Daniel L. Fischetti
  • Patent number: 12236814
    Abstract: A display method and a display system for an anti-dizziness reference image are provided. The display system includes a display, a range extraction unit, an information analyzing unit, an object analyzing unit and an image setting unit. The display is used to display the anti-dizziness reference image. The range extraction unit is used to obtain a gaze background range of a user. The image setting unit is used to set an image hue, an image lightness, an image brightness, an image content or an ambient lighting display content of the anti-dizziness reference image according to background hue information, background lightness information, background brightness information, or road information of the gaze background range; or to set an image ratio between the anti-dizziness reference image and a display area of the display according to an object distance or an object area of a watched object.
    Type: Grant
    Filed: May 26, 2023
    Date of Patent: February 25, 2025
    Assignee: INDUSTRIAL TECHNOLOGY RESEARCH INSTITUTE
    Inventors: Ya-Rou Hsu, Chien-Ju Lee, Hong-Ming Dai, Yu-Hsiang Tsai, Chia-Hsun Tu, Kuan-Ting Chen
  • Patent number: 12231764
    Abstract: An image capturing method has: providing an image capturing area on a display screen of a user device; providing an indication area in the image capturing area; marking a license plate after identifying the license plate based on at least one license plate feature in the image capturing area; determining whether the marked license plate is located in the indication area and presented in a predetermined ratio; and capturing an image including the license plate in the image capturing area after the marked license plate is located in the indication area and presented in a predetermined ratio.
    Type: Grant
    Filed: July 28, 2022
    Date of Patent: February 18, 2025
    Assignee: GOGORO INC.
    Inventors: Yi-Chia Lin, Chih-Min Fu, I-Fen Shih
  • Patent number: 12229970
    Abstract: In examples, when attempting to interpolate or extrapolate a frame based on motion vectors of two adjacent frames, there can be more than one pixel value mapped to a given location in the frame. To select between conflicting pixel values for the given location, similarities between the motion vectors of source pixels that cause the conflict and global flow may be evaluated. For example, a level of similarity for a motion vector may be computed using a similarity metric based at least on a difference between an angle of a global motion vector and an angle of the motion vector. The similarity metric may also be based at least on a difference between a magnitude of the global motion vector and a magnitude of the motion vector. The similarity metric may weigh the difference between the angles in proportion to the magnitude of the global motion vector.
    Type: Grant
    Filed: August 15, 2022
    Date of Patent: February 18, 2025
    Assignee: NVIDIA Corporation
    Inventors: Aurobinda Maharana, Karthick Sekkappan, Rohit Naskulwar
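    Illustrative sketch (not part of the patent record): a similarity metric between a candidate motion vector and the global flow vector that combines an angle term and a magnitude term, with the angle term weighted in proportion to the global magnitude as the abstract describes. The exact combination and constants are assumptions.

      import numpy as np

      def similarity(mv, global_mv):
          ang_diff = abs(np.arctan2(mv[1], mv[0]) - np.arctan2(global_mv[1], global_mv[0]))
          ang_diff = min(ang_diff, 2 * np.pi - ang_diff)       # wrap into [0, pi]
          mag_diff = abs(np.linalg.norm(mv) - np.linalg.norm(global_mv))
          cost = np.linalg.norm(global_mv) * ang_diff + mag_diff
          return 1.0 / (1.0 + cost)                            # higher = more similar

      candidates = {"a": np.array([2.0, 0.1]), "b": np.array([-1.0, 3.0])}
      global_mv = np.array([2.0, 0.0])
      print(max(candidates, key=lambda k: similarity(candidates[k], global_mv)))  # "a" wins the conflict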
  • Patent number: 12229972
    Abstract: Methods, systems, and apparatus, including computer programs encoded on computer storage media, for training a neural network to predict optical flow. One of the methods includes obtaining a batch of one or more training image pairs; for each of the pairs: processing the first training image and the second training image using the neural network to generate a final optical flow estimate; generating a cropped final optical flow estimate from the final optical flow estimate; and training the neural network using the cropped optical flow estimate.
    Type: Grant
    Filed: April 14, 2022
    Date of Patent: February 18, 2025
    Assignee: Waymo LLC
    Inventors: Daniel Rudolf Maurer, Austin Charles Stone, Alper Ayvaci, Anelia Angelova, Rico Jonschkowski
  • Patent number: 12223022
    Abstract: A method for transitioning between user profiles in an electronic user device during use of the electronic user device wherein the electronic user device comprises a fingerprint sensor operatively connected to a touch sensor of the electronic user device is disclosed. The method comprises sensing (103), by the fingerprint sensor, at least a part of a fingerprint at a determined position and area of a detected touch, and determining (104), by a fingerprint controller configured to control the fingerprint sensor, whether the sensed part of the fingerprint corresponds to a registered user of the electronic user device.
    Type: Grant
    Filed: June 28, 2019
    Date of Patent: February 11, 2025
    Assignee: TELEFONAKTIEBOLAGET LM ERICSSON (PUBL)
    Inventors: Mohammed Zourob, Alexander Hunt, Andreas Kristensson
  • Patent number: 12209960
    Abstract: A non-invasive tension measuring system is provided with at least one pair of sensors positioned longitudinally and at a predetermined distance from each other along a linear system. At least one operationally connected processor executes instructions to detect, using the sensors, a transverse wave propagating along the linear system in order to determine a time delay of the transverse wave, to determine a propagation speed of the transverse wave based on the time delay, and to determine a tension of the linear system based at least in part on the propagation speed and a mass per unit length of the linear system.
    Type: Grant
    Filed: March 16, 2022
    Date of Patent: January 28, 2025
    Inventors: Anthony A Ruffa, Brian K Amaral
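    Illustrative sketch (not part of the patent record): the standard string relation behind this kind of measurement is v = sqrt(T / mu), so T = mu * v^2 with v = sensor spacing / time delay. The numbers below are made up for illustration.

      sensor_spacing_m = 0.5       # distance between the paired sensors
      time_delay_s = 0.004         # measured arrival-time difference of the transverse wave
      mass_per_length_kg_m = 0.2   # mu, mass per unit length of the linear system

      speed = sensor_spacing_m / time_delay_s          # 125 m/s
      tension_n = mass_per_length_kg_m * speed ** 2    # 0.2 * 125**2 = 3125 N
      print(f"v = {speed:.1f} m/s, T = {tension_n:.0f} N")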
  • Patent number: 12197537
    Abstract: A non-transitory computer-readable recording medium stores a program for causing a computer to execute a process, the process includes inputting an accepted image to a first model generated through machine learning based on a composite image and information, the composite image being obtained by combining a first plurality of images each of which includes one area, the information indicating a combination state of the first plurality of images in the composite image, inputting a first image among a second plurality of images output by the first model to a second model generated through machine learning based on an image which includes one area and an image which includes a plurality of areas, and determining whether to input the first image to the first model, based on a result output by the second model in response to the inputting of the first image.
    Type: Grant
    Filed: January 25, 2022
    Date of Patent: January 14, 2025
    Assignee: FUJITSU LIMITED
    Inventor: Ander Martinez
  • Patent number: 12192625
    Abstract: A method for mitigating motion blur in a visual-inertial tracking system is described. In one aspect, the method includes accessing a first image generated by an optical sensor of the visual tracking system, accessing a second image generated by the optical sensor of the visual tracking system, the second image following the first image, determining a first motion blur level of the first image, determining a second motion blur level of the second image, identifying a scale change between the first image and the second image, determining a first optimal scale level for the first image based on the first motion blur level and the scale change, and determining a second optimal scale level for the second image based on the second motion blur level and the scale change.
    Type: Grant
    Filed: May 22, 2023
    Date of Patent: January 7, 2025
    Assignee: SNAP INC.
    Inventors: Matthias Kalkgruber, Daniel Wolf
  • Patent number: 12182906
    Abstract: Provided are a dynamic fluid display method and apparatus, an electronic device, and a readable medium. The method includes: detecting a target object on a user display interface; obtaining attribute information of the target object; determining, on the user display interface based on the attribute information of the target object, a change of a parameter of a fluid at each target texture pixel associated with the target object; and displaying a dynamic fluid on the user display interface based on the change of the parameter of the fluid.
    Type: Grant
    Filed: June 23, 2023
    Date of Patent: December 31, 2024
    Assignee: BEIJING ZITIAO NETWORK TECHNOLOGY CO., LTD.
    Inventors: Qi Li, Xiaoqi Li
  • Patent number: 12172066
    Abstract: Exemplary embodiments of the present disclosure are directed to systems, methods, and computer-readable media configured to autonomously track a round of golf and/or autonomously generate personalized recommendations for a user before, during, or after a round of golf. The systems and methods can utilize course data, environmental data, user data, and/or equipment data in conjunction with one or more machine learning algorithms to autonomously generate the personalized recommendations.
    Type: Grant
    Filed: January 10, 2022
    Date of Patent: December 24, 2024
    Assignee: Arccos Golf LLC
    Inventors: Salman Hussain Syed, Colin David Phillips, Stephen Obsitnik, Ryan Stafford Johnson, David Thomas LeDonne, Owais Murad Hussain Syed, Fabrice Blanc
  • Patent number: 12177191
    Abstract: An information output device includes: a first output unit that outputs acquired information acquired by a sensor; and a second output unit that converts personal information included in the acquired information into attribute information from which identification of an individual is impossible, and outputs the attribute information.
    Type: Grant
    Filed: December 6, 2021
    Date of Patent: December 24, 2024
    Assignee: NEC CORPORATION
    Inventor: Akira Kato
  • Patent number: 12169947
    Abstract: Various implementations disclosed herein include devices, systems, and methods for generating body pose information for a person in a physical environment. In various implementations, a device includes an environmental sensor, a non-transitory memory and one or more processors coupled with the environmental sensor and the non-transitory memory. In some implementations, a method includes obtaining, via the environmental sensor, spatial data corresponding to a physical environment. The physical environment includes a person and a fixed spatial point. The method includes identifying a portion of the spatial data that corresponds to a body portion of the person. In some implementations, the method includes determining a position of the body portion relative to the fixed spatial point based on the portion of the spatial data. In some implementations, the method includes generating pose information for the person based on the position of the body portion in relation to the fixed spatial point.
    Type: Grant
    Filed: March 21, 2022
    Date of Patent: December 17, 2024
    Assignee: APPLE INC.
    Inventors: Stefan Auer, Sebastian Bernhard Knorr
  • Patent number: 12165101
    Abstract: In some embodiments, systems and methods are provided to recognize retail products, comprising: a model training system configured to: identify a customer; access an associated customer profile; access and apply a set of filtering rules to a product database based on customer data; generate a listing of products specific to the customer; access and apply a model training set of rules to train a machine learning model based on the listing of products and corresponding image data for each of the products in the listing of products; and communicate the trained model to the portable user device associated with the first customer.
    Type: Grant
    Filed: January 9, 2024
    Date of Patent: December 10, 2024
    Assignee: Walmart Apollo, LLC
    Inventors: Michael A. Garner, Priyanka Paliwal
  • Patent number: 12164023
    Abstract: A security inspection apparatus and a method of controlling the same are described. An example security inspection apparatus includes a bottom plate configured to carry an inspected object and a two-dimensional multi-input multi-output array panel, including at least one two-dimensional multi-input multi-output sub-array. Each two-dimensional multi-input multi-output sub-array includes transmitting antennas and receiving antennas arranged such that equivalent phase centers are arranged in a two-dimensional array. The security inspection apparatus includes a control circuit configured to control the transmitting antennas to transmit a detection signal in a form of an electromagnetic wave to the inspected object in a preset order, and to control the receiving antennas to receive an echo signal from the inspected object.
    Type: Grant
    Filed: June 28, 2021
    Date of Patent: December 10, 2024
    Assignees: Tsinghua University, Nuctech Company Limited
    Inventors: Zhiqiang Chen, Yan You, Ziran Zhao, Xuming Ma, Kai Wang
  • Patent number: 12154035
    Abstract: A method and system of training a machine learning neural network (MLNN) in monitoring anatomical positioning. The method comprises receiving, in a first input layer, from a millimeter wave (mmWave) radar device, mmWave point cloud data representing spatial positions associated with a medical patient during successive changes in the spatial positions and the corresponding durations between changes, the mmWave data based upon detecting range and reflected wireless signal strength; receiving, in a second input layer of the MLNN, attribute data for the corresponding durations, the input layers interconnected with an output layer via an intermediate layer, the intermediate layer configured with an initial matrix of weights; training an MLNN classifier using classification that establishes a correlation between the mmWave radar point cloud data, the attribute data and a likelihood of formation of bodily pressure ulcers (BPUs) generated at the output layer; and producing a trained MLNN based on increasing the correlation.
    Type: Grant
    Filed: February 22, 2023
    Date of Patent: November 26, 2024
    Assignee: Ventech Solutions, Inc.
    Inventors: Ravi Kiran Pasupuleti, Ravi Kunduru
  • Patent number: 12154284
    Abstract: Systems and methods are described for generating a three-dimensional track of a ball in a gaming environment from multiple cameras. In some examples, at least two input videos, each including frames of a ball moving in a gaming environment recorded by a camera, may be obtained, along with a camera projection matrix that maps a two-dimensional pixel space representation to a three-dimensional representation of the gaming environment. Candidate two-dimensional image locations of the ball across the plurality of frames of the at least two input videos may be identified using neural network or computer vision techniques. An optimization algorithm may be performed that uses a 3D ball physics model, the camera projection matrix and a subset of the candidate two-dimensional image locations of the ball from the at least two input videos to generate a three-dimensional track of the ball in the gaming environment.
    Type: Grant
    Filed: September 23, 2022
    Date of Patent: November 26, 2024
    Assignee: MAIDEN AI, INC.
    Inventors: Vivek Jayaram, Brogan McPartland, Arjun Verma
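    Illustrative sketch (not part of the patent record): one way to fit a constant-gravity ballistic track to 2D ball detections from calibrated cameras by minimizing reprojection error. The observation format, the use of scipy.optimize.least_squares, and the simple physics model are assumptions.

      import numpy as np
      from scipy.optimize import least_squares

      G = np.array([0.0, 0.0, -9.81])   # assumed physics model: constant gravity, no drag

      def project(P, X):
          """Project 3D point X with a 3x4 camera projection matrix P into pixel coordinates."""
          h = P @ np.append(X, 1.0)
          return h[:2] / h[2]

      def residuals(params, observations):
          """observations: list of (P, t, uv) with camera matrix, time, and detected pixel location."""
          p0, v0 = params[:3], params[3:]
          res = []
          for P, t, uv in observations:
              X = p0 + v0 * t + 0.5 * G * t ** 2
              res.extend(project(P, X) - uv)
          return res

      def fit_track(observations):
          sol = least_squares(residuals, np.zeros(6), args=(observations,))
          return sol.x[:3], sol.x[3:]   # estimated initial position and velocity of the ball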
  • Patent number: 12133725
    Abstract: A gait analysis apparatus 10 includes a data acquisition unit 11 that acquires three-dimensional point cloud data of a human to be analyzed, a center of gravity location calculation unit 12 that calculates coordinates of a center of gravity location on the three-dimensional point cloud data of the human to be analyzed by using coordinates of each point constituting the acquired three-dimensional point cloud data, and a gait index calculation unit 13 that calculates a gait index of the human to be analyzed by using the calculated center of gravity location.
    Type: Grant
    Filed: February 12, 2020
    Date of Patent: November 5, 2024
    Assignee: NEC Solution Innovators, Ltd.
    Inventors: Hiroki Terashima, Katsuyuki Nagai
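    Illustrative sketch (not part of the patent record): the center-of-gravity location as the mean of the point cloud coordinates, plus one possible gait index derived from it. Equal point weights and the choice of lateral axis are assumptions.

      import numpy as np

      def center_of_gravity(points_xyz: np.ndarray) -> np.ndarray:
          """points_xyz: (N, 3) point cloud of the person; returns the (3,) center-of-gravity location."""
          return points_xyz.mean(axis=0)

      def lateral_sway(cog_per_frame: np.ndarray) -> float:
          """Example gait index: peak-to-peak lateral motion of the center of gravity over a walk."""
          lateral = cog_per_frame[:, 0]      # assumes x is the lateral axis
          return float(lateral.max() - lateral.min())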
  • Patent number: 12131718
    Abstract: A motion detection section 720 detects a motion exceeding a permissible limit in a wide-viewing-angle image displayed on a head-mounted display 100. A field-of-view restriction processing section 750 restricts a field of view for observing the wide-viewing-angle image in a case in which the motion exceeding the permissible limit has been detected in the wide-viewing-angle image. An image provision section 760 provides, for the head-mounted display 100, the wide-viewing-angle image in which the field of view has been restricted.
    Type: Grant
    Filed: July 22, 2021
    Date of Patent: October 29, 2024
    Assignee: SONY INTERACTIVE ENTERTAINMENT INC.
    Inventor: Yasushi Okumura
  • Patent number: 12094127
    Abstract: There is provided an image processing apparatus for measuring a flow of a measurement target based on a video. A detection line indicating a position at which the flow of the measurement target in the video is measured is set. From each of a plurality of images in the video, a plurality of partial images set in a vicinity of the detection line are extracted. The flow of the measurement target passing the detection line is measured using the partial images.
    Type: Grant
    Filed: December 6, 2021
    Date of Patent: September 17, 2024
    Assignee: Canon Kabushiki Kaisha
    Inventors: Hajime Muta, Yasuo Bamba
  • Patent number: 12094197
    Abstract: A method for removing extraneous content in a first plurality of images, captured at a corresponding plurality of poses and a corresponding first plurality of times, by a first drone, of a scene in which a second drone is present includes the following steps, for each of the first plurality of captured images. The first drone predicts a 3D position of the second drone at a time of capture of that image. The first drone defines, in an image plane corresponding to that captured image, a region of interest (ROI) including a projection of the predicted 3D position of the second drone at a time of capture of that image. A drone mask for the second drone is generated, and then applied to the defined ROI, to generate an output image free of extraneous content contributed by the second drone.
    Type: Grant
    Filed: June 10, 2021
    Date of Patent: September 17, 2024
    Assignees: SONY GROUP CORPORATION, SONY CORPORATION OF AMERICA
    Inventor: Cheng-Yi Liu
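    Illustrative sketch (not part of the patent record): projecting the second drone's predicted 3D position into the image plane, defining a region of interest around it, and masking that region. The fixed ROI size and the pinhole projection are assumptions.

      import numpy as np

      def project_point(P, X):                    # P: 3x4 camera projection matrix
          h = P @ np.append(X, 1.0)
          return int(round(h[0] / h[2])), int(round(h[1] / h[2]))

      def mask_second_drone(image, P, predicted_xyz, half_size=40):
          u, v = project_point(P, predicted_xyz)
          h, w = image.shape[:2]
          x0, x1 = max(0, u - half_size), min(w, u + half_size)
          y0, y1 = max(0, v - half_size), min(h, v + half_size)
          out = image.copy()
          out[y0:y1, x0:x1] = 0                   # crude drone mask applied to the ROI
          return out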
  • Patent number: 12087077
    Abstract: In various examples, sensor data—such as masked sensor data—may be used as input to a machine learning model to determine a confidence for object to person associations. The masked sensor data may focus the machine learning model on particular regions of the image that correspond to persons, objects, or some combination thereof. In some embodiments, coordinates corresponding to persons, objects, or combinations thereof, in addition to area ratios between various regions of the image corresponding to the persons, objects, or combinations thereof, may be used to further aid the machine learning model in focusing on important regions of the image for determining the object to person associations.
    Type: Grant
    Filed: July 5, 2023
    Date of Patent: September 10, 2024
    Assignee: NVIDIA Corporation
    Inventors: Parthasarathy Sriram, Fnu Ratnesh Kumar, Anil Ubale, Farzin Aghdasi, Yan Zhai, Subhashree Radhakrishnan
  • Patent number: 12079995
    Abstract: A method of image segmentation includes receiving one or more images, determining a loss component, for each pixel of one image of the one or more images, identifying a majority class and identifying a cross-entropy loss between a network output and a target, randomly selecting pixels associated with the one image and selecting a second set of pixels to compute a super pixel loss for each pair of pixels, summing the corresponding loss associated with each pair of pixels, for each corresponding frame of the plurality of frames of the image, computing a flow loss, a negative flow loss, a contrastive optical flow loss, and an equivariant optical flow loss, computing a final loss including a weighted average of the flow loss, the cross-entropy loss, the super pixel loss, and the foreground loss, updating a network parameter and outputting a trained neural network.
    Type: Grant
    Filed: September 28, 2021
    Date of Patent: September 3, 2024
    Assignee: Robert Bosch GmbH
    Inventors: Chirag Pabbaraju, João D. Semedo, Wan-Yi Lin
  • Patent number: 12079998
    Abstract: A system identifies a movement and generates prescriptive analytics of that movement. To identify a movement, a system accesses an image of an observation volume where users execute movements. The system identifies a location including an active region in the image. The active region includes a movement region and a surrounding region. The system identifies musculoskeletal points of a user in the location and determines when the user enters the active region. The system identifies a movement of a user in the active region based on the time evolution of key-points in the active region. The system determines descriptive analytics describing the movement. Based on the descriptive analytics, the system generates prescriptive analytics for the movement and provides the prescriptive analytics to the user. The prescriptive analytics may inform future and/or current movements of the user.
    Type: Grant
    Filed: April 15, 2021
    Date of Patent: September 3, 2024
    Assignee: Uplift Labs, Inc.
    Inventors: Rahul Rajan, Jonathan D. Wills, Sukemasa Kabayama
  • Patent number: 12033353
    Abstract: A basic pattern extracting unit (15) extracts a “basic pattern” for each human from detection points acquired by an acquiring unit (11). The “basic pattern” includes a “reference body region point” corresponding to a “reference body region type”, and base body region points corresponding to base body region types that are different from the reference body region type and that are different from each other. For example, the “basic pattern” includes at least one of the following two combinations. A first combination is a combination of the reference body region point corresponding to a neck as the reference body region type and two base body region points respectively corresponding to a left shoulder and a left ear as the base body region type. A second combination is a combination of body region points corresponding to a neck, a right shoulder and a right ear.
    Type: Grant
    Filed: July 22, 2019
    Date of Patent: July 9, 2024
    Assignee: NEC CORPORATION
    Inventors: Yadong Pan, Shoji Nishimura
  • Patent number: 12022054
    Abstract: A volumetric display may include a two-dimensional display; a varifocal optical system configured to receive image light from the two-dimensional display and focus the image light; and at least one processor configured to: control the two-dimensional display to cause a plurality of sub-frames associated with an image frame to be displayed by the display, wherein each sub-frame of the plurality of sub-frames includes a corresponding portion of image data associated with the image frame; and control the varifocal optical system to a corresponding focal state for each respective sub-frame.
    Type: Grant
    Filed: January 31, 2022
    Date of Patent: June 25, 2024
    Assignee: Meta Platforms Technologies, LLC
    Inventors: Afsoon Jamali, Yang Zhao, Wai Sze Tiffany Lam, Lu Lu, Douglas Robert Lanman
  • Patent number: 12008454
    Abstract: Systems and methods for generating motion forecast data for actors with respect to an autonomous vehicle and training a machine learned model for the same are disclosed. The computing system can include an object detection model and a graph neural network including a plurality of nodes and a plurality of edges. The computing system can be configured to input sensor data into the object detection model; receive object detection data describing the location of the plurality of the actors relative to the autonomous vehicle as an output of the object detection model; input the object detection data into the graph neural network; iteratively update a plurality of node states respectively associated with the plurality of nodes; and receive, as an output of the graph neural network, the motion forecast data with respect to the plurality of actors.
    Type: Grant
    Filed: March 20, 2023
    Date of Patent: June 11, 2024
    Assignee: UATC, LLC
    Inventors: Raquel Urtasun, Renjie Liao, Sergio Casas, Cole Christian Gulino
  • Patent number: 11972352
    Abstract: Methods, systems, and apparatus for motion-based human video detection are disclosed. A method includes generating a representation of a difference between two frames of a video; providing, to an object detector, a particular frame of the two frames and the representation of the difference between two frames of the video; receiving an indication that the object detector detected an object in the particular frame; determining that detection of the object in the particular frame was a false positive detection; determining an amount of motion energy where the object was detected in the particular frame; and training the object detector based on penalization of the false positive detection in accordance with the amount of motion energy where the object was detected in the particular frame.
    Type: Grant
    Filed: November 4, 2022
    Date of Patent: April 30, 2024
    Assignee: ObjectVideo Labs, LLC
    Inventors: Sima Taheri, Gang Qian, Sung Chun Lee, Sravanthi Bondugula, Allison Beach
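    Illustrative sketch (not part of the patent record): a simple motion-energy measure inside a detection box, used to scale the penalty for a false positive so that detections in static regions are penalized harder. The normalization and penalty form are assumptions.

      import numpy as np

      def motion_energy(frame_a, frame_b, box):
          """box = (x0, y0, x1, y1); frames are grayscale uint8 arrays of the same size."""
          x0, y0, x1, y1 = box
          diff = np.abs(frame_a.astype(np.int16) - frame_b.astype(np.int16))
          return float(diff[y0:y1, x0:x1].mean()) / 255.0   # 0 = static, 1 = maximal change

      def false_positive_weight(energy, base_weight=1.0):
          # A false positive in a low-motion region contributes a larger training penalty.
          return base_weight * (1.0 - energy)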
  • Patent number: 11972554
    Abstract: A detection device includes an acquirer that acquires a video of each of a plurality of bearings of a structure including the plurality of bearings, an extractor that extracts a dynamic feature corresponding to a plurality of degrees of freedom of each of the plurality of bearings based on the video, and an identifier that identifies, among the plurality of bearings, a bearing whose dynamic feature fails to match a dynamic feature of one or more other bearings of the plurality of bearings.
    Type: Grant
    Filed: December 23, 2020
    Date of Patent: April 30, 2024
    Assignee: PANASONIC INTELLECTUAL PROPERTY MANAGEMENT CO., LTD.
    Inventors: Taro Imagawa, Akihiro Noda, Yuki Maruyama, Hiroya Kusaka
  • Patent number: 11966413
    Abstract: In one embodiment, a first deep fusion reasoning engine (DFRE) agent in a network receives first sensor data from a first set of one or more sensors in the network. The first DFRE agent translates the first sensor data into symbolic data. The first DFRE agent applies, using a symbolic knowledge base maintained by the first DFRE agent, symbolic reasoning to the symbolic data to make an inference regarding the first sensor data. The first DFRE agent updates, based on the inference regarding the first sensor data, the knowledge base. The first DFRE agent propagates the inference to one or more other DFRE agents in the network.
    Type: Grant
    Filed: March 6, 2020
    Date of Patent: April 23, 2024
    Assignee: Cisco Technology, Inc.
    Inventors: Hugo Latapie, Enzo Fenoglio, Carlos M. Pignataro, Nagendra Kumar Nainar, David Delano Ward
  • Patent number: 11954801
    Abstract: A method for virtually representing human body poses includes receiving positioning data detailing parameters of one or more body parts of a human user based at least in part on input from one or more sensors. One or more mapping constraints are maintained that relate a model articulated representation to a target articulated representation. A model pose of the model articulated representation and a target pose of the target articulated representation are concurrently estimated based at least in part on the positioning data and the one or more mapping constraints. The previously-trained pose optimization machine is trained with training positioning data having ground truth labels for the model articulated representation. The target articulated representation is output for display with the target pose as a virtual representation of the human user.
    Type: Grant
    Filed: April 11, 2022
    Date of Patent: April 9, 2024
    Assignee: Microsoft Technology Licensing, LLC
    Inventors: Thomas Joseph Cashman, Erroll William Wood, Federica Bogo, Sasa Galic, Pashmina Jonathan Cameron
  • Patent number: 11953618
    Abstract: Methods, apparatus and systems for wireless motion recognition are described. In one example, a described system comprises: a transmitter configured for transmitting a first wireless signal through a wireless multipath channel of a venue; a receiver configured for receiving a second wireless signal through the wireless multipath channel; and a processor. The second wireless signal differs from the first wireless signal due to the wireless multipath channel that is impacted by a motion of an object in the venue. The processor is configured for: obtaining a time series of channel information (TSCI) of the wireless multipath channel based on the second wireless signal, tracking the motion of the object based on the TSCI to generate a gesture trajectory of the object, and determining a gesture shape based on the gesture trajectory and a plurality of pre-determined gesture shapes.
    Type: Grant
    Filed: February 20, 2021
    Date of Patent: April 9, 2024
    Assignee: ORIGIN RESEARCH WIRELESS, INC.
    Inventors: Sai Deepika Regani, Beibei Wang, Min Wu, K. J. Ray Liu, Oscar Chi-Lim Au
  • Patent number: 11936847
    Abstract: A video processing method includes dividing a region of a current frame to obtain a plurality of image blocks, obtaining a historical motion information candidate list, and obtaining candidate historical motion information for the plurality of image blocks according to the historical motion information candidate list. The candidate historical motion information is a candidate in the historical motion information candidate list. The method further includes performing prediction for the plurality of image blocks according to the candidate historical motion information. A size of each of the plurality of image blocks is smaller than or equal to a preset size. The same historical motion information candidate list is used for the plurality of image blocks during the prediction. The historical motion information candidate list is not updated while the prediction is being performed for the plurality of image blocks.
    Type: Grant
    Filed: June 29, 2021
    Date of Patent: March 19, 2024
    Assignee: SZ DJI TECHNOLOGY CO., LTD.
    Inventors: Suhong Wang, Xiaozhen Zheng, Shanshe Wang, Siwei Ma
  • Patent number: 11933888
    Abstract: The invention relates to a process for monitoring vehicles on a road by a system comprising at least one radar sensor and a second sensor different from the radar sensor, wherein the second remote sensor is a time-of-flight optical sensor or optical image sensor, the process comprising a temporal readjustment and a spatial matching in order to obtain a set of measurement points each assigned to first characteristics derived from the radar data and second characteristics derived from the optical data, the determination of the radar vehicle trackings and of the optical vehicle trackings, a comparison of similarity between the radar vehicle trackings and the optical vehicle trackings, the elimination of the radar vehicle trackings for which no optical vehicle tracking is similar, the process comprising monitoring a parameter derived from first characteristics of a retained radar vehicle tracking.
    Type: Grant
    Filed: July 26, 2021
    Date of Patent: March 19, 2024
    Assignee: IDEMIA IDENTITY & SECURITY FRANCE
    Inventors: Samuel Alliot, Grégoire Carrion, Eric Guidon, Nicolas Fougeroux
  • Patent number: 11928866
    Abstract: Methods, systems, and apparatus, including computer programs encoded on computer storage media, for selecting locations in an environment of a vehicle where objects are likely centered and determining properties of those objects. One of the methods includes receiving an input characterizing an environment external to a vehicle. For each of a plurality of locations in the environment, a respective first object score that represents a likelihood that a center of an object is located at the location is determined. Based on the first object scores, one or more locations from the plurality of locations are selected as locations in the environment at which respective objects are likely centered. Object properties of the objects that are likely centered at the selected locations are also determined.
    Type: Grant
    Filed: January 3, 2022
    Date of Patent: March 12, 2024
    Assignee: Waymo LLC
    Inventors: Abhijit Ogale, Alexander Krizhevsky
  • Patent number: 11912310
    Abstract: A method includes receiving a series of road images from a side-view camera sensor of the autonomous driving vehicle. For each object from objects captured in the series of road images, a series of bounding boxes in the series of road images is generated, and a direction of travel or stationarity of the object is determined. The methods and apparatus also include determining a speed of each object for which the direction of travel has been determined and determining, based on the directions of travel, speeds, or stationarity of the objects, whether the autonomous driving vehicle can safely move in a predetermined direction. Furthermore, one or more control signals is sent to the autonomous driving vehicle to cause the autonomous driving vehicle to move or to remain stationary based on determining whether the autonomous driving vehicle can safely move in the predetermined direction.
    Type: Grant
    Filed: June 25, 2021
    Date of Patent: February 27, 2024
    Assignee: TUSIMPLE, INC.
    Inventors: Yiqian Gan, Yijie Wang, Xiaodi Hou, Lingting Ge
  • Patent number: 11908144
    Abstract: A plurality of motion vectors are acquired from consecutive images. From the acquired plurality of motion vectors, a motion vector of interest and neighboring motion vectors neighboring the motion vector of interest are selected and a degree of similarity between two motion vectors is acquired. A value related to the total number of neighboring motion vectors having high degrees of similarity to the motion vector of interest is acquired as a degree of reliability.
    Type: Grant
    Filed: November 4, 2021
    Date of Patent: February 20, 2024
    Assignee: CANON KABUSHIKI KAISHA
    Inventor: Masaaki Kobayashi
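    Illustrative sketch (not part of the patent record): reliability as the number of neighboring motion vectors that are similar to the motion vector of interest. Cosine similarity and the 0.95 threshold are assumed choices of similarity measure.

      import numpy as np

      def reliability(mv, neighbor_mvs, threshold=0.95):
          mv = np.asarray(mv, dtype=float)
          count = 0
          for n in neighbor_mvs:
              n = np.asarray(n, dtype=float)
              denom = np.linalg.norm(mv) * np.linalg.norm(n)
              if denom > 0 and float(mv @ n) / denom >= threshold:
                  count += 1
          return count   # higher count = more reliable motion vector of interest

      print(reliability([1, 0], [[1, 0.1], [0.9, 0], [0, 1]]))  # 2 similar neighbors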
  • Patent number: 11894153
    Abstract: The present disclosure relates to a narrow slit channel visualization experimental device and method under a six-degree-of-freedom motion condition. The system comprises a six-degree-of-freedom motion simulation platform, a main circulation loop, a cooling water system, an electric heating system and a bubble monitoring system, wherein the main circulation loop is composed of an S-shaped preheater, a three-surface visualization experimental section, a double-pipe condenser, a pressurizing circulating pump, a voltage stabilizer and related equipment, wherein the cooling water system is composed of the double-pipe condenser, a plate heat exchanger, a cooling tower, a cooling fan, a cooling water tank and related equipment, wherein the electric heating system is composed of a direct-current power supply, a low-voltage power controller and a transformer, and wherein the bubble monitoring system is composed of two high-speed cameras, a PIV measuring system and an electric servo module.
    Type: Grant
    Filed: April 8, 2021
    Date of Patent: February 6, 2024
    Assignee: Xi'an Jiaotong University
    Inventors: Kui Zhang, Zhixian Lai, Mingjun Wang, Chong Chen, Zhiming Zhu, Jing Zhang, Wenxi Tian, Suizheng Qiu, Guanghui Su
  • Patent number: 11877809
    Abstract: Disclosed is a computer-implemented method of adapting a biomechanical model of an anatomical body part of a patient to a current status of the patient. The method encompasses determination of a currently executed step of a workflow such as a medical intervention, the result of the determination serving as a basis for adapting and/or updating a biomechanical model of an anatomical body part to the corresponding current status of the patient. The determination of the current workflow step may also be used as a basis for controlling an imaging device for tracking entities around the patient or for imaging the anatomical body part or acquiring further data, or for urging the user to perform a specific action such as acquisition of information using a tracked instrument such as a pointer. The biomechanical model has been generated from atlas data.
    Type: Grant
    Filed: December 18, 2019
    Date of Patent: January 23, 2024
    Assignee: BRAINLAB AG
    Inventors: Stefan Vilsmeier, Andreas Blumhofer, Jens Schmaler, Patrick Hiepe
  • Patent number: 11861907
    Abstract: Methods, systems and apparatuses may provide for technology that selects a player from a plurality of players based on an automated analysis of two-dimensional (2D) video data associated with a plurality of cameras, wherein the selected player is nearest to a projectile depicted in the 2D video data. The technology may also track a location of the selected player over a subsequent plurality of frames in the 2D video data and estimate a location of the projectile based on the location of the selected player over the subsequent plurality of frames.
    Type: Grant
    Filed: August 13, 2019
    Date of Patent: January 2, 2024
    Assignee: Intel Corporation
    Inventors: Yikai Fang, Qiang Li, Wenlong Li, Haihua Lin, Chen Ling, Ming Lu, Hongzhi Tao, Xiaofeng Tong, Yumeng Wang
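    Illustrative sketch (not part of the patent record): the player-selection step, choosing the detected player whose 2D location is nearest to the projectile detection. The coordinates and player labels are made up.

      import math

      players = {"p1": (120, 340), "p2": (400, 200), "p3": (640, 360)}   # 2D player centers
      ball = (415, 230)                                                  # projectile detection

      def nearest_player(centers, projectile_xy):
          return min(centers, key=lambda name: math.dist(centers[name], projectile_xy))

      print(nearest_player(players, ball))   # "p2": tracked over subsequent frames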
  • Patent number: 11861783
    Abstract: Various methods are provided for the generation of motion vectors in the context of 3D computer-generated images. In one example, a method includes generating, for each pixel of one or more objects to be rendered in a current frame, a 1-phase motion vector (MV1) and a 0-phase motion vector (MV0), each MV1 and MV0 having an associated depth value, to thereby form an MV1 texture and an MV0 texture, each MV0 determined based on a camera MV0 and an object MV0, converting the MV1 texture to a set of MV1 pixel blocks and converting the MV0 texture to a set of MV0 pixel blocks and outputting the set of MV1 pixel blocks and the set of MV0 pixel blocks for image processing.
    Type: Grant
    Filed: March 23, 2022
    Date of Patent: January 2, 2024
    Inventors: Hongmin Zhang, Miao Sima, Gongxian Liu, Zongming Han, Junhua Chen, Guohua Cheng, Baochen Liu, Neil Woodall, Yue Ma, Huili Han
  • Patent number: 11863891
    Abstract: An imaging apparatus acquires pixel signals for each predetermined-size partial region of an imaging plane in order to detect a region, among the regions of the imaging plane, in which a subject image has changed, the number of acquired pixel signals being less than the number of pixels included in the partial region, and controls to perform image capturing in response to detection of the region in which the subject image has changed, and to output a captured image based on the pixel signals. The imaging apparatus acquires pixel signals from each partial region of the imaging plane by, when pixel signals are to be acquired from a first partial region of the imaging plane, acquiring pixel signals from a second partial region that is adjacent to the first partial region and partially overlapped with the first partial region.
    Type: Grant
    Filed: January 14, 2022
    Date of Patent: January 2, 2024
    Assignee: CANON KABUSHIKI KAISHA
    Inventor: Koji Oshima
  • Patent number: 11845006
    Abstract: A skeleton model updating apparatus, a skeleton model updating method, and a program by which time and effort for changing the pose of a skeleton model to a known standard pose can be reduced. A target node identifying section identifies target nodes from among a plurality of nodes included in a skeleton model that is in a pose other than a standard pose. A reference node identifying section identifies a reference node that is positioned closest to the side of the target nodes, from among nodes that are connected to all of the target nodes via one or more bones. A position deciding section decides positions of the target nodes such that relative positions of the target nodes with respect to the position of the reference node are adjusted to predetermined positions.
    Type: Grant
    Filed: July 9, 2019
    Date of Patent: December 19, 2023
    Assignee: SONY INTERACTIVE ENTERTAINMENT INC.
    Inventor: Yoshinori Ohashi
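    Illustrative sketch (not part of the patent record): the position-deciding step, moving each target node so that its offset from the reference node matches a predetermined standard-pose offset. The node names and offsets are assumptions.

      import numpy as np

      STANDARD_OFFSETS = {                  # desired offsets from the reference node
          "left_hand": np.array([0.6, 0.0, 0.0]),
          "right_hand": np.array([-0.6, 0.0, 0.0]),
      }

      def reset_to_standard_pose(node_positions, reference_node, target_nodes):
          out = dict(node_positions)
          ref = node_positions[reference_node]
          for node in target_nodes:
              out[node] = ref + STANDARD_OFFSETS[node]
          return out

      pose = {"chest": np.array([0.0, 1.4, 0.0]),
              "left_hand": np.array([0.3, 0.9, 0.2]),
              "right_hand": np.array([-0.2, 0.8, 0.1])}
      print(reset_to_standard_pose(pose, "chest", ["left_hand", "right_hand"]))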