Motion Or Velocity Measuring Patents (Class 382/107)
  • Patent number: 11657319
    Abstract: As learning data, an image of a virtual space corresponding to a physical space and geometric information of the virtual space are generated. Learning processing of a learning model is performed using the learning data. A position and/or orientation of an image capturing device is calculated based on geometric information output from the learning model when a captured image of the physical space captured by the image capturing device is input to the learning model.
    Type: Grant
    Filed: January 10, 2019
    Date of Patent: May 23, 2023
    Assignee: CANON KABUSHIKI KAISHA
    Inventors: Masahiro Suzuki, Daisuke Kotake, Kazuhiko Kobayashi
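    An illustrative sketch of the pose-calculation step in the abstract above. It assumes (this is not stated in the patent) that the learned model regresses a 3D scene coordinate for each sampled pixel, so the camera pose follows from a standard perspective-n-point solve; `predict_scene_coords` and the intrinsic matrix `K` are hypothetical placeholders.
```python
# Hypothetical sketch, not the patented implementation: assume the learned model
# regresses a 3D scene coordinate for each sampled pixel of the captured image,
# then recover the camera pose with a standard PnP solve.
import numpy as np
import cv2


def estimate_pose(image, predict_scene_coords, K):
    """Return the camera rotation (3x3) and translation (3,)."""
    h, w = image.shape[:2]
    ys, xs = np.mgrid[20:h:40, 20:w:40]                      # sparse pixel grid
    pixels = np.stack([xs.ravel(), ys.ravel()], axis=1).astype(np.float64)

    # Learned "geometric information": one 3D point in the mapped space per pixel.
    points_3d = predict_scene_coords(image, pixels)          # (N, 3) float64

    ok, rvec, tvec = cv2.solvePnP(points_3d, pixels, K, None)
    if not ok:
        raise RuntimeError("PnP failed")
    R, _ = cv2.Rodrigues(rvec)
    return R, tvec.ravel()
```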
  • Patent number: 11645763
    Abstract: An image processing apparatus detects a tracking target object in an image and executes tracking processing to track it. The apparatus determines whether an attribute of an object detected from the image is a predetermined attribute. When a first state, in which the object is detected, changes to a second state, in which the object is not detected, the apparatus identifies, based on the position of the object in the first state, a given object included in the image and positioned at least partially in front of the object in the second state. The tracking processing is controlled based on a determination of whether the attribute of the given object is the predetermined attribute, and based on that result the apparatus determines whether to continue the tracking processing. When a determination to continue the tracking processing on the tracking target object for a predetermined time has been made, the tracking processing continues until at least the predetermined time has elapsed. (A simplified decision sketch follows this entry.)
    Type: Grant
    Filed: October 29, 2019
    Date of Patent: May 9, 2023
    Assignee: CANON KABUSHIKI KAISHA
    Inventor: Takuya Toyoda
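    The simplified decision sketch referenced above: whether to keep a tracker alive once the target disappears behind another detected object. The grace period, attribute check, and function shape are illustrative assumptions, not the patented implementation.
```python
# Illustrative decision logic (not the patented implementation): whether to keep
# a tracker alive when the target disappears behind another detected object.
import time


def should_continue_tracking(target_visible, occluder_attribute,
                             predetermined_attribute, lost_since,
                             grace_period_s=3.0):
    """Return True while tracking should continue."""
    if target_visible:
        return True                       # first state: object still detected
    # Second state: target lost; check the object found in front of its last position.
    if occluder_attribute == predetermined_attribute:
        # Keep the tracker alive for at least the predetermined time.
        return (time.time() - lost_since) < grace_period_s
    return False
```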
  • Patent number: 11643102
    Abstract: A vehicle dash cam may be configured to execute one or more neural networks (and/or other artificial intelligence), such as based on input from one or more of the cameras and/or other sensors associated with the dash cam, to intelligently detect safety events in real-time. Detection of a safety event may trigger an in-cab alert to make the driver aware of the safety risk. The dash cam may include logic for determining which asset data to transmit to a backend server in response to detection of a safety event, as well as which asset data to transmit to the backend server in response to analysis of sensor data that did not trigger a safety event. The asset data transmitted to the backend server may be further analyzed to determine if further alerts should be provided to the driver and/or to a safety manager.
    Type: Grant
    Filed: September 14, 2021
    Date of Patent: May 9, 2023
    Assignee: Samsara Inc.
    Inventors: Mathew Chasan Calmer, Justin Delegard, Justin Pan, Sabrina Shemet, Meelap Shah, Kavya Joshi, Brian Tuan, Sharan Srinivasan, Muhammad Ali Akhtar, John Charles Bicket, Margaret Finch, Vincent Shieh, Bruce Kellerman, Mitch Lin, Marvin Arroz, Siddhartha Datta Roy, Jason Symons, Tina Quach, Cassandra Lee Rommel, Saumya Jain
  • Patent number: 11636307
    Abstract: Systems and methods for generating motion forecast data for actors with respect to an autonomous vehicle and training a machine learned model for the same are disclosed. The computing system can include an object detection model and a graph neural network including a plurality of nodes and a plurality of edges. The computing system can be configured to input sensor data into the object detection model; receive object detection data describing the location of the plurality of the actors relative to the autonomous vehicle as an output of the object detection model; input the object detection data into the graph neural network; iteratively update a plurality of node states respectively associated with the plurality of nodes; and receive, as an output of the graph neural network, the motion forecast data with respect to the plurality of actors.
    Type: Grant
    Filed: March 12, 2020
    Date of Patent: April 25, 2023
    Assignee: UATC, LLC
    Inventors: Raquel Urtasun, Renjie Liao, Sergio Casas, Cole Christian Gulino
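    A toy numpy sketch of the iterative node-state update inside a graph neural network of this kind; the message and update functions here are random-weight stand-ins for the learned ones, and the decoding of node states into motion forecasts is omitted.
```python
# Minimal numpy sketch of iteratively updating node states in a graph network;
# the message/update functions below stand in for the learned components.
import numpy as np


def run_message_passing(node_states, edges, num_iters=3):
    """node_states: (N, D) per-actor states; edges: list of (src, dst) index pairs."""
    N, D = node_states.shape
    W_msg = np.random.randn(D, D) * 0.1      # placeholder "learned" weights
    W_upd = np.random.randn(2 * D, D) * 0.1

    for _ in range(num_iters):
        messages = np.zeros_like(node_states)
        for src, dst in edges:               # aggregate messages along edges
            messages[dst] += np.tanh(node_states[src] @ W_msg)
        # Update each node state from its previous state and incoming messages.
        node_states = np.tanh(np.concatenate([node_states, messages], axis=1) @ W_upd)
    return node_states                        # decoded downstream into motion forecasts
```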
  • Patent number: 11625836
    Abstract: When the three-dimensional position of a moving body is calculated from image data taken by a plurality of synchronized cameras, high-performance equipment is required, such as a system that synchronizes the cameras or cameras with a built-in synchronization function, and the camera positions must be fixed with high accuracy beforehand. The invention solves this issue by making it possible to calculate the trajectory of a target moving body in three-dimensional space using image data taken by a plurality of mutually non-synchronized cameras. The position of each camera in three-dimensional space is calculated from a plurality of reference points with fixed position coordinates that appear in common in the image data of the respective cameras. (A triangulation sketch follows this entry.)
    Type: Grant
    Filed: October 28, 2020
    Date of Patent: April 11, 2023
    Assignee: QONCEPT, INC.
    Inventors: Kenichi Hayashi, Shunsuke Nambu
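    The triangulation sketch referenced above, under the assumptions that each camera's 3x4 projection matrix has already been recovered from the shared fixed reference points and that observations from the unsynchronized cameras have been interpolated to a common timestamp; the DLT formulation is a standard technique, not text from the patent.
```python
# Hedged sketch: given two recovered projection matrices, a 3D point on the moving
# body is triangulated from its 2D observations by linear (DLT) triangulation.
import numpy as np


def triangulate(P1, P2, uv1, uv2):
    """P1, P2: (3, 4) projection matrices; uv1, uv2: (2,) pixel observations.
    With unsynchronized cameras, uv2 would first be interpolated to uv1's timestamp."""
    A = np.stack([
        uv1[0] * P1[2] - P1[0],
        uv1[1] * P1[2] - P1[1],
        uv2[0] * P2[2] - P2[0],
        uv2[1] * P2[2] - P2[1],
    ])
    _, _, Vt = np.linalg.svd(A)
    X = Vt[-1]
    return X[:3] / X[3]          # homogeneous -> Euclidean 3D point
```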
  • Patent number: 11624833
    Abstract: Provided are embodiments including a system for automatically generating a plan of scan locations for performing a scanning operation where the system includes a storage medium that is coupled to a processor. The processor is configured to receive a map of an environment, apply a distance transform to the map, wherein the distance transform determines a path through the map, wherein the path comprises a plurality of points, and identify a set of candidate scan locations based on the path. The processor is also configured to select scan locations from the set of candidate scan locations for performing 3D scans, and perform the 3D scans of the environment based on the selected scan locations. Also provided are embodiments for a method and computer program product for automatically generating a plan of scan locations for performing a scanning operation.
    Type: Grant
    Filed: May 5, 2020
    Date of Patent: April 11, 2023
    Assignee: FARO Technologies, Inc.
    Inventors: Ahmad Ramadneh, Aleksej Frank, Oliver Zweigle, Joao Santos, Simon Raab
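    A rough sketch of the candidate-selection idea, assuming a 2D occupancy map: scipy's Euclidean distance transform stands in for the patent's distance transform, and the path and spacing heuristics are illustrative simplifications.
```python
# Illustrative sketch (assumed details): use a distance transform of a 2D occupancy
# map to find well-cleared cells, then space candidate scan locations along them.
import numpy as np
from scipy.ndimage import distance_transform_edt


def candidate_scan_locations(free_space, min_spacing=25):
    """free_space: 2D bool array, True where the environment is traversable."""
    dist = distance_transform_edt(free_space)        # distance to nearest obstacle
    # Keep well-cleared cells as a crude path / medial-axis proxy.
    path_cells = np.argwhere(dist > 0.6 * dist.max())
    # Greedily select points at least min_spacing apart as candidate scan locations.
    candidates = []
    for p in path_cells:
        if all(np.linalg.norm(p - q) >= min_spacing for q in candidates):
            candidates.append(p)
    return candidates
```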
  • Patent number: 11620832
    Abstract: This disclosure relates to systems and methods of obtaining accurate motion and orientation estimates for a vehicle traveling at high speed based on images of a road surface. A purpose of these systems and methods is to provide a supplementary or alternative means of locating a vehicle on a map, particularly in cases where other locationing approaches (e.g., GPS) are unreliable or unavailable.
    Type: Grant
    Filed: June 24, 2020
    Date of Patent: April 4, 2023
    Inventors: Hendrik J. Volkerink, Ajay Khoche
  • Patent number: 11595568
    Abstract: A system configured to assist a user in scanning a physical environment in order to generate a three-dimensional scan or model. In some cases, the system may include an interface to assist the user in capturing data usable to determine a scale or depth of the physical environment and to perform a scan in a manner that minimizes gaps.
    Type: Grant
    Filed: February 18, 2021
    Date of Patent: February 28, 2023
    Assignee: Occipital, Inc.
    Inventors: Vikas M. Reddy, Jeffrey Roger Powers, Anton Yakubenko, Gleb Krivovyaz, Yury Berdnikov, George Evmenov, Timur Ibadov, Oleg Kazmin, Ivan Malin, Yuping Lin
  • Patent number: 11580663
    Abstract: A camera height calculation method that causes a computer to execute a process, the process includes obtaining one or more images captured by an in-vehicle camera, extracting one or more feature points from the one or more images, identifying first feature points that exist over a road surface from the one or more feature points, and calculating a height of the in-vehicle camera from the road surface, based on positions of the identified first feature points.
    Type: Grant
    Filed: March 3, 2021
    Date of Patent: February 14, 2023
    Assignee: FUJITSU LIMITED
    Inventors: Jun Kanetake, Rie Hasada, Haruyuki Ishida, Takushi Fujita
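    A minimal sketch of the height calculation, under the assumption (not spelled out in the abstract) that the identified road-surface feature points have already been reconstructed as 3D points in the camera frame; the camera height is then the distance from the camera center to a plane fitted to those points.
```python
# Minimal sketch: fit a plane to road-surface points given in camera coordinates
# and take the camera height as the distance from the camera origin to that plane.
import numpy as np


def camera_height_from_road_points(points):
    """points: (N, 3) road-surface feature points in camera coordinates."""
    centroid = points.mean(axis=0)
    # Plane normal = right singular vector with the smallest singular value.
    _, _, Vt = np.linalg.svd(points - centroid)
    normal = Vt[-1]
    # Distance from the origin (camera center) to the plane n . (x - centroid) = 0.
    return abs(np.dot(normal, centroid))
```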
  • Patent number: 11574408
    Abstract: The present disclosure relates to a motion estimation method, a chip, an electronic device, and a storage medium, and is beneficial for improving the accuracy of motion estimation.
    Type: Grant
    Filed: June 29, 2021
    Date of Patent: February 7, 2023
    Assignee: Amlogic (Shanghai) Co., Ltd.
    Inventors: Zheng Bao, Tao Ji, Chun Wang, Dongjian Wang, Xuyun Chen
  • Patent number: 11559037
    Abstract: A three-dimensional accelerometer in an animal tag registers a first set of acceleration parameters expressing a respective acceleration of the tag along each of three independent spatial axes. A processor in the tag derives a respective estimated gravity-related component in each parameter in the first set, and compensates for the respective estimated gravity-related components in the first set to obtain a second set of acceleration parameters representing respective accelerations of the animal tag along each of three independent spatial axes each in which the parameter is balanced around a base level with no influence of gravitation. The processor determines behavior-related data of rise-up and/or lie-down movements of an animal carrying the animal tag based on deviations in a single parameter in the second set of acceleration parameters relative to the base level.
    Type: Grant
    Filed: May 20, 2019
    Date of Patent: January 24, 2023
    Assignee: DeLaval Holding AB
    Inventor: Keld Florczak
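    An illustrative sketch of the gravity compensation and event detection: a low-pass filter estimates the gravity-related component per axis, the second set of parameters is the residual balanced around a zero base level, and large single-axis deviations flag rise-up or lie-down candidates. The filter constant and threshold are assumptions, not values from the patent.
```python
# Illustrative sketch: estimate the slowly varying gravity-related component of each
# axis with a low-pass filter, subtract it, and flag rise-up/lie-down candidates from
# large deviations of a single compensated axis around its zero base level.
import numpy as np


def compensate_gravity(acc, alpha=0.02):
    """acc: (T, 3) raw accelerations. Returns (T, 3) gravity-compensated values."""
    acc = np.asarray(acc, dtype=float)
    gravity = np.empty_like(acc)
    gravity[0] = acc[0]
    for t in range(1, len(acc)):                     # exponential moving average
        gravity[t] = (1 - alpha) * gravity[t - 1] + alpha * acc[t]
    return acc - gravity                             # balanced around a zero base level


def detect_posture_events(compensated, axis=2, threshold=2.0):
    """Return sample indices where the chosen axis deviates strongly from base level."""
    return np.where(np.abs(compensated[:, axis]) > threshold)[0]
```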
  • Patent number: 11557065
    Abstract: Example implementations described herein involve systems and methods for a mobile device to play back and record augmented reality (AR) overlays indicating gestures to be made to a recorded device screen. A device screen is recorded by a camera of the mobile device, wherein a mask is overlaid on a user hand interacting with the device screen. Interactions made to the device screen are detected based on the mask, and AR overlays are generated corresponding to the interactions.
    Type: Grant
    Filed: June 9, 2020
    Date of Patent: January 17, 2023
    Assignee: FUJIFILM Business Innovation Corp.
    Inventors: Scott Carter, Laurent Denoue, Yulius Tjahjadi, Gerry Filby
  • Patent number: 11527069
    Abstract: Poses of a person depicted within video frames may be determined. The poses of the person may be used to generate intermediate video frames between the video frames.
    Type: Grant
    Filed: January 18, 2021
    Date of Patent: December 13, 2022
    Assignee: GoPro, Inc.
    Inventors: Andrew Russell, Robert McIntosh
  • Patent number: 11513607
    Abstract: The shape or movement of a gesturing body or portion thereof in two- or three-dimensional space is ascertained from the path of the outline of the shape or the path of the movement. The method disclosed involves receiving input of data that represents a path and using artificial intelligence to recognize the meaning of the path, i.e., to recognize which of a plurality of pre-prepared meanings is the meaning of a gesture. As pre-processing for inputting location data for a point group along the path to the artificial intelligence, at least one attribute from among the location, size, and direction of the entire point group is extracted, and location data for the point group is converted to attribute invariant location data that is relative to the extracted attribute(s) but not dependent on the extracted attribute(s). Then data that includes the attribute invariant location data and the extracted attribute(s) is inputted to the artificial intelligence as input data.
    Type: Grant
    Filed: November 1, 2020
    Date of Patent: November 29, 2022
    Assignee: MARUI-PlugIn Co., Ltd.
    Inventor: Maximilian Michael Krichenbauer
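    A compact sketch of the described pre-processing, with the principal axis of the point group standing in for the "direction" attribute; the artificial-intelligence stage that consumes the attribute-invariant data plus the extracted attributes is not shown.
```python
# Hedged sketch: extract the location, size, and direction of the whole point group,
# then express the points relative to those attributes so the remaining coordinates
# are attribute-invariant.
import numpy as np


def to_attribute_invariant(points):
    """points: (N, 2) or (N, 3) locations along the gesture path."""
    location = points.mean(axis=0)                   # attribute 1: location
    centered = points - location
    size = np.linalg.norm(centered, axis=1).max()    # attribute 2: size
    scaled = centered / (size + 1e-9)
    # Attribute 3: direction, taken as the principal axis of the point group.
    _, _, Vt = np.linalg.svd(scaled)
    direction = Vt[0]
    rotated = scaled @ Vt.T                          # rotate principal axis onto first axis
    return rotated, {"location": location, "size": size, "direction": direction}
```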
  • Patent number: 11510629
    Abstract: Methods and systems are provided for detecting patient motion during a diagnostic scan. In one example, a method for a medical imaging system includes obtaining output from one or more patient monitoring devices configured to monitor a patient before and during a diagnostic scan executed with the medical imaging system, receiving a request to initiate the diagnostic scan, tracking patient motion based on the output from the one or more patient monitoring devices, and initiating the diagnostic scan responsive to patient motion being below a threshold level.
    Type: Grant
    Filed: December 26, 2018
    Date of Patent: November 29, 2022
    Assignee: General Electric Company
    Inventors: Adam Gregory Pautsch, Robert Senzig, Christopher Unger, Erik Kemper
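    The gating idea reduces to a small polling loop; `read_motion_metric` and `start_scan` below are hypothetical callbacks standing in for the patient monitoring devices and the scanner interface.
```python
# Simple sketch of the gating idea: after a scan request, keep tracking a motion
# metric from the monitoring devices and start the scan only once motion is below
# a threshold. The polled functions are hypothetical placeholders.
import time


def start_scan_when_still(read_motion_metric, start_scan,
                          threshold=0.5, poll_interval_s=0.2, timeout_s=30.0):
    deadline = time.time() + timeout_s
    while time.time() < deadline:
        if read_motion_metric() < threshold:   # patient motion below threshold level
            start_scan()
            return True
        time.sleep(poll_interval_s)
    return False                               # timed out; operator intervention needed
```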
  • Patent number: 11514623
    Abstract: A method is for providing a medical image of a patient, acquired via a computed tomography apparatus. An embodiment of the method includes acquiring first projection data of a first measurement region; acquiring second projection data of a second measurement region; registering a reference image to the at least one respiration-correlated image of the patient, wherein the reference image corresponds to the at least one functional image of the patient or is reconstructed under a second reconstruction rule from the second projection data, to produce a deformation model; applying the deformation model to the at least one functional image of the patient; combining the at least one functional image of the patient, deformed by the applying of the deformation model, with the at least one respiration-correlated image of the patient, to produce the medical image of the patient; and providing the medical image of the patient.
    Type: Grant
    Filed: January 22, 2019
    Date of Patent: November 29, 2022
    Assignees: UNIVERSITÄT ZÜRICH, SIEMENS HEALTHCARE GMBH
    Inventors: André Ritter, Christian Hofmann, Matthias Guckenberger, Stephanie Tanadini-Lang
  • Patent number: 11501531
    Abstract: A computing device captures a live video of a user. For a first frame of the live video, the computing device obtains first target positional coordinates of a first target point located a first predetermined distance from the computing device and obtains first background data. For a second frame, the computing device obtains second target positional coordinates of a second target point located a second predetermined distance from the computing device and obtains second background data. The computing device calculates a target motion vector based on the first target point and the second target point and calculates a background motion vector based on feature points in the first background data and the second background data. The computing device determines a difference value between the target motion vector and the background motion vector and determines whether the user is spoofing the computing device based on the difference value.
    Type: Grant
    Filed: March 2, 2021
    Date of Patent: November 15, 2022
    Assignee: CYBERLINK CORP.
    Inventors: Fu-Kai Chuang, Sheng-Hung Liu
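    A simplified sketch of the motion-vector comparison: the target and background motion vectors are differenced and the difference value is thresholded. The threshold and the direction of the decision are illustrative assumptions here (a flat replayed image tends to move together with its background).
```python
# Illustrative sketch: compare how much the target point moved between frames with
# how much the background moved; a near-identical motion suggests a flat replayed
# image rather than a live user.
import numpy as np


def is_spoofing(target_pt1, target_pt2, bg_pts1, bg_pts2, threshold=5.0):
    """target_pt*: (2,) target point per frame; bg_pts*: (N, 2) matched background points."""
    target_motion = np.asarray(target_pt2, dtype=float) - np.asarray(target_pt1, dtype=float)
    background_motion = np.median(np.asarray(bg_pts2, dtype=float)
                                  - np.asarray(bg_pts1, dtype=float), axis=0)
    difference = np.linalg.norm(target_motion - background_motion)
    return difference < threshold      # small difference value -> likely spoofing
```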
  • Patent number: 11494918
    Abstract: A moving state analysis device improves accuracy of moving state recognition by including a detection unit configured to detect, from image data associated with a frame, an object and a region of the object, for each of frames that constitute first video data captured in a course of movement of a first moving body, and a learning unit configured to learn a DNN model that takes video data and sensor data as input and that outputs a probability of each moving state, based on the first video data, a feature of first sensor data measured in relation to the first moving body and corresponding to a capture of the first video data, a detection result of the object and the region of the object, and information that indicates a moving state associated with the first video data.
    Type: Grant
    Filed: May 7, 2019
    Date of Patent: November 8, 2022
    Assignee: NIPPON TELEGRAPH AND TELEPHONE CORPORATION
    Inventors: Shuhei Yamamoto, Hiroyuki Toda
  • Patent number: 11482046
    Abstract: [Problem] To provide an action-estimating device with which an action of a subject appearing in a plurality of time-series images can be precisely estimated. [Solution] In the action-estimating device 1, an estimating-side detecting unit 13 detects a plurality of articulations A appearing in each time-series image Y on the basis of a reference having been stored in an estimating-side identifier 11 and serving to identify the plurality of articulations A. An estimating-side measuring unit 14 measures the coordinates and the depths of the plurality of articulations A appearing in each of the time-series images Y. On the basis of displacement in the plurality of time-series images Y of the measured coordinate and depth of each of the articulations A, a specifying unit 15 specifies, from among the plurality of articulations A, an articulation group B which belongs to a given subject.
    Type: Grant
    Filed: April 9, 2019
    Date of Patent: October 25, 2022
    Assignee: ASILLA, INC.
    Inventor: Daisuke Kimura
  • Patent number: 11466984
    Abstract: An ECU includes a memory including computer executable instructions for monitoring the condition of a ground engaging tool, and a processor coupled to the memory and configured to execute the computer executable instructions, the computer executable instructions when executed by the processor cause the processor to: acquire an image of the ground engaging tool, evaluate the image using an algorithm that compares the acquired image to a database of existing images to determine the damage, the amount of wear, or the absence of the ground engaging tool, and grade the quality of the acquired image.
    Type: Grant
    Filed: May 15, 2019
    Date of Patent: October 11, 2022
    Assignee: Caterpillar Inc.
    Inventors: John Michael Plouzek, Mitchell Chase Vlaminck, Nolan S. Finch
  • Patent number: 11463738
    Abstract: Various embodiments of methods, systems, and devices for delivering on-demand video viewing angles of an arena at a venue are disclosed. Exemplary implementations may receive images of an event taking place across a plurality of positions within the arena from a series of cameras surrounding the plurality of positions. Content of interest may be identified within the images for a select user. Also, a score may be determined for each of the images based on the identified content of interest for the select user. A highest-score position may be determined from the plurality of positions based on the determined score, and an offer to view images of the highest-score position may be transmitted to a display device of the select user for viewing video of the event.
    Type: Grant
    Filed: November 22, 2019
    Date of Patent: October 4, 2022
    Assignee: Charter Communications Operating, LLC
    Inventors: Sami Makinen, Yassine Maalej
  • Patent number: 11450010
    Abstract: Methods and systems for determining and classifying a number of repetitive motions in a video are described, and include the steps of first determining a plurality of images from a video, where the images are segmented from at least one video frame of the video. Next, performing a pose detection process on a feature of the images to generate one or more landmarks. Next, determining one or more principal component axes on points associated with a given landmark. Finally, determining at least one repetitive motion based on a pattern associated with a projection of the points onto the one or more principal components. In some embodiments, the disclosed methods can classify the repetitive motions into respective types. The present invention can be implemented for convenient use on a mobile computing device, such as a smartphone, for tracking exercises and similar repetitive motions.
    Type: Grant
    Filed: October 16, 2021
    Date of Patent: September 20, 2022
    Assignee: NEX Team Inc.
    Inventors: On Loy Sung, Qi Zhang, Keng Fai Lee, Shing Fat Mak, Daniel Dejos, Man Hon Chan
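    A small sketch of the counting step for one landmark, assuming roughly periodic motion: project the trajectory onto its first principal component and count zero crossings of the centered projection; classification of the repetition type is omitted.
```python
# Hedged sketch of repetition counting: project a landmark's trajectory onto its
# first principal component and count sign changes of the centered projection,
# which pair up into repetitions for a roughly periodic motion.
import numpy as np


def count_repetitions(landmark_xy):
    """landmark_xy: (T, 2) positions of one pose landmark over time."""
    centered = landmark_xy - landmark_xy.mean(axis=0)
    _, _, Vt = np.linalg.svd(centered)
    projection = centered @ Vt[0]                 # 1-D motion along the principal axis
    signs = np.sign(projection)
    signs[signs == 0] = 1
    zero_crossings = np.count_nonzero(np.diff(signs))
    return zero_crossings // 2                    # two crossings per full repetition
```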
  • Patent number: 11443439
    Abstract: An air-to-air background-oriented schlieren system in which reference frames are acquired concurrently with the image frames: a sensor aircraft flying in formation records a target aircraft while concurrently recording reference frames of the underlying terrain, which provide a visually textured background as a reference. This auto-referencing method improves on the original AirBOS method by allowing a much more flexible and useful measurement, reducing the flight-planning and piloting burden, and broadening the possible camera choices for imaging the visible density changes in air, caused by an airborne vehicle, that produce a refractive-index change.
    Type: Grant
    Filed: March 16, 2020
    Date of Patent: September 13, 2022
    Assignee: U.S.A. as Represented by the Administrator of the National Aeronautics and Space Administration
    Inventors: Daniel W Banks, James T Heineck
  • Patent number: 11432737
    Abstract: Systems and methods for predicting motion of a target using imaging are provided. In one aspect, a method includes receiving image data, acquired using an imaging system, corresponding to a region of interest (“ROI”) in a subject, and generating a set of reconstructed images from the image data. The method also includes processing the set of reconstructed images to obtain motion information associated with a target in the ROI, and applying the motion information in a motion prediction framework to estimate a predicted motion of the target. The method further includes generating a report based on the predicted motion estimated.
    Type: Grant
    Filed: March 17, 2018
    Date of Patent: September 6, 2022
    Assignee: The Regents of the University of California
    Inventors: Xinzhou Li, Holden H. Wu
  • Patent number: 11429229
    Abstract: An image processing apparatus according to the present disclosure includes: a position detection illumination unit; an image recognition illumination unit; an illumination control unit; an imaging unit; and an image processing unit. The position detection illumination unit outputs position detection illumination light. The position detection illumination light is used for position detection on a position detection object. The image recognition illumination unit outputs image recognition illumination light. The image recognition illumination light is used for image recognition on an image recognition object. The illumination control unit controls the position detection illumination unit and the image recognition illumination unit to cause the position detection illumination light and the image recognition illumination light to be outputted at timings different from each other.
    Type: Grant
    Filed: November 22, 2019
    Date of Patent: August 30, 2022
    Assignee: SONY GROUP CORPORATION
    Inventor: Masahiro Ando
  • Patent number: 11430308
    Abstract: A method includes obtaining, by a motion generator that has been trained to generate torque values for a plurality of joints of a rig associated with a target, a set of parameters associated with a target motion. The method includes, in response to the target motion being a first type of motion, generating a first set of torque values for the plurality of joints based on the set of parameters and a set of previous poses of the target. The method includes, in response to the target motion being a second type of motion, generating a second set of torque values for the plurality of joints based on the set of parameters and the set of previous poses of the target. The method includes triggering a movement of the target in accordance with the first set of torque values or the second set of torque values.
    Type: Grant
    Filed: April 28, 2021
    Date of Patent: August 30, 2022
    Assignee: Apple Inc.
    Inventors: Jian Zhang, Siva Chandra Mouli Sivapurapu, Aashi Manglik, Amritpal Singh Saini, Edward S. Ahn
  • Patent number: 11416507
    Abstract: Techniques for processing combinations of timeseries data and time-dependent semantic data are provided. The timeseries data can be data from one or more Internet of things (IOT) devices having one or more hardware sensors. The semantic data can be master data. Disclosed techniques allow for time dependent semantic data to be used with the timeseries data, so that semantic data appropriate for a time period associated with the timeseries data can be used. Changes to semantic data are tracked and recorded, where the changes can represent a new value to be used going forward in time or an update to a value for a prior time period. Timeseries data and semantic data can be stored with identifiers that facilitate their combination, such as date ranges, identifiers of analog world objects, or identifiers for discrete sets of semantic data values.
    Type: Grant
    Filed: October 26, 2020
    Date of Patent: August 16, 2022
    Assignee: SAP SE
    Inventors: Christian Conradi, Seshatalpasai Madala
  • Patent number: 11417115
    Abstract: An obstacle recognition device of a vehicle provided with a camera capturing an image around the vehicle, includes an acquiring unit sequentially acquiring the image captured by the camera; a feature point extracting unit extracting a plurality of feature points of an object included in the image; a calculation unit calculating each motion distance of the plurality of feature points between the image previously acquired and the image currently acquired by the acquiring unit; a first determination unit determining whether each motion distance of the feature points is larger than or equal to a first threshold; a second determination unit determining whether each motion distance of the feature points is larger than or equal to a second threshold; and an obstacle recognition unit recognizing an obstacle.
    Type: Grant
    Filed: October 13, 2020
    Date of Patent: August 16, 2022
    Assignees: DENSO CORPORATION, TOYOTA JIDOSHA KABUSHIKI KAISHA
    Inventors: Koki Osada, Takayuki Hiromitsu, Tomoyuki Fujimoto, Takuya Miwa, Yutaka Hamamoto, Masumi Fukuman, Akihiro Kida, Kunihiro Sugihara
  • Patent number: 11410546
    Abstract: Systems and methods for determining the velocity of an object associated with a three-dimensional (3D) scene may include: a LIDAR system generating two sets of 3D point cloud data of the scene from two consecutive point cloud sweeps; a pillar feature network encoding data of the point cloud data to extract two-dimensional (2D) bird's-eye-view embeddings for each of the point cloud data sets in the form of pseudo images, wherein the 2D bird's-eye-view embeddings for a first of the two point cloud data sets comprise pillar features for the first point cloud data set and the 2D bird's-eye-view embeddings for a second of the two point cloud data sets comprise pillar features for the second point cloud data set; and a feature pyramid network encoding the pillar features and performing a 2D optical flow estimation to estimate the velocity of the object.
    Type: Grant
    Filed: May 18, 2020
    Date of Patent: August 9, 2022
    Assignee: TOYOTA RESEARCH INSTITUTE, INC.
    Inventors: Kuan-Hui Lee, Matthew T. Kliemann, Adrien David Gaidon
  • Patent number: 11402495
    Abstract: The invention concerns a monitoring method that comprises coupling in an integral manner at least one electromagnetic mirror of passive type with a given target to be monitored and monitoring the given target; wherein monitoring the given target includes: acquiring, via one or more synthetic aperture radar(s) installed on board one or more satellites and/or one or more aerial platforms, SAR images of a given area of the earth's surface where the given target is located; and determining, via a processing unit, a movement of the electromagnetic mirror on the basis of the acquired SAR images.
    Type: Grant
    Filed: December 1, 2017
    Date of Patent: August 2, 2022
    Assignee: Thales Alenia Space Italia S.p.A. Con Unico Socio
    Inventors: Luca Soli, Diego Calabrese
  • Patent number: 11388555
    Abstract: Provided herein is a method for quantifying and measuring human mobility within defined geographic regions and sub-regions. Methods may include: identifying sub-regions within a region; identifying static information associated with the sub-regions from one or more static information sources; obtaining dynamic information associated with the sub-regions from one or more dynamic information sources; determining correlations between elements of the static information associated with a respective sub-region and elements of the dynamic information associated with the respective sub-regions; generating a mobility score for the respective sub-region based, at least in part, on the correlations between the elements of the static information and the elements of the dynamic information associated with the respective sub-region; and providing the mobility score to one or more clients for guiding an action relative to the mobility score.
    Type: Grant
    Filed: December 30, 2020
    Date of Patent: July 12, 2022
    Assignee: HERE GLOBAL B.V.
    Inventors: Dmitry Koval, Jerome Beaurepaire
  • Patent number: 11388385
    Abstract: Disclosed herein are primary and auxiliary image capture devices for image processing and related methods. According to an aspect, a method may include using primary and auxiliary image capture devices to perform image processing. The method may include using the primary image capture device to capture a first image of a scene, the first image having a first quality characteristic. Further, the method may include using the auxiliary image capture device to capture a second image of the scene. The second image may have a second quality characteristic. The second quality characteristic may be of lower quality than the first quality characteristic. The method may also include adjusting at least one parameter of one of the captured images to create a plurality of adjusted images for one of approximating and matching the first quality characteristic. Further, the method may include utilizing the adjusted images for image processing.
    Type: Grant
    Filed: January 3, 2021
    Date of Patent: July 12, 2022
    Assignee: 3DMedia Corporation
    Inventors: Bahram Dahi, Tassos Markas, Michael McNamer, Jon Boyette
  • Patent number: 11386288
    Abstract: A movement state recognition multitask DNN model training section 46 trains a parameter of a DNN model based on an image data time series and a sensor data time series, and based on first annotation data, second annotation data, and third annotation data generated for the image data time series and the sensor data time series. Training is performed such that the movement state recognized by the DNN model, when it is given the image data time series and the sensor data time series as input, matches the movement states indicated by the first, second, and third annotation data. This enables information to be efficiently extracted and combined from both video data and sensor data, and also enables movement state recognition to be implemented with high precision for a data set including data that does not fall into any movement state class.
    Type: Grant
    Filed: April 26, 2019
    Date of Patent: July 12, 2022
    Assignee: NIPPON TELEGRAPH AND TELEPHONE CORPORATION
    Inventors: Shuhei Yamamoto, Hiroyuki Toda
  • Patent number: 11379696
    Abstract: The present disclosure provides a pedestrian re-identification method and apparatus, computer device and readable medium. The method comprises: collecting a target image and a to-be-identified image including a pedestrian image; obtaining a feature expression of the target image and a feature expression of the to-be-identified image respectively, based on a pre-trained feature extraction model; wherein the feature extraction model is obtained by training based on a self-attention feature of a base image as well as a co-attention feature of the base image relative to a reference image; identifying whether a pedestrian in the to-be-identified image is the same pedestrian as that in the target image according to the feature expression of the target image and the feature expression of the to-be-identified image.
    Type: Grant
    Filed: March 12, 2020
    Date of Patent: July 5, 2022
    Assignee: BEIJING BAIDU NETCOM SCIENCE AND TECHNOLOGY CO., LTD.
    Inventors: Zhigang Wang, Jian Wang, Shilei Wen, Errui Ding, Hao Sun
  • Patent number: 11373367
    Abstract: A method for characterization of respiratory characteristics based on a voxel model includes: successively capturing multiple frames of depth image of a thoracoabdominal surface of human body and modelling the multiple frames of depth image in 3D to obtain multiple frames of voxel model in time series; traversing voxel units of the multiple frames of voxel model and extracting a volumetric characteristic and areal characteristic of the multiple frames of voxel model; acquiring a minimum common voxel bounding box of the multiple frames of voxel model; describing spatial distribution of the multiple frames of voxel model in the form of probability and arranging the probabilities of the minimum voxel bounding boxes of individual frames of voxel model to construct a sample space of super-high dimensional vectors; reducing the dimensions of the sample space to obtain intrinsic parameters; obtaining a characteristic variable capable of characterizing the voxel model.
    Type: Grant
    Filed: November 20, 2019
    Date of Patent: June 28, 2022
    Assignee: SOOCHOW UNIVERSITY
    Inventors: Shumei Yu, Pengcheng Hou, Rongchuan Sun, Shaolong Kuang, Lining Sun
  • Patent number: 11373411
    Abstract: A method includes obtaining a two-dimensional image, obtaining a two-dimensional image annotation that indicates presence of an object in the two-dimensional image, determining a location proposal based on the two-dimensional image annotation, determining a classification for the object, determining an estimated size for the object based on the classification for the object, and defining a three-dimensional cuboid for the object based on the location proposal and the estimated size.
    Type: Grant
    Filed: June 6, 2019
    Date of Patent: June 28, 2022
    Assignee: Apple Inc.
    Inventors: Hanlin Goh, Nitish Srivastava, Yichuan Tang, Ruslan Salakhutdinov
  • Patent number: 11368632
    Abstract: A method for processing a video includes: identifying a target object in a first video segment; acquiring a current video frame of a second video segment; acquiring a first image region corresponding to the target object in a first target video frame of the first video segment, and acquiring a second image region corresponding to the target object in the current video frame of the second video segment, wherein the first target video frame corresponds to the current video frame of the second video segment in terms of video frame time; and performing picture splicing on the first target video frame and the current video frame of the second video segment according to the first image region and the second image region to obtain a processed first video frame.
    Type: Grant
    Filed: September 18, 2020
    Date of Patent: June 21, 2022
    Assignee: Beijing Xiaomi Pinecone Electronics Co., Ltd.
    Inventor: Binglin Chang
  • Patent number: 11353476
    Abstract: Embodiments of the present disclosure provide a method and apparatus for determining a velocity of an obstacle, a device, and a medium. An implementation includes: acquiring a first point cloud data of the obstacle at a first time and a second point cloud data of the obstacle at a second time; registering the first point cloud data and the second point cloud data by moving the first point cloud data or the second point cloud data; and determining a moving velocity of the obstacle based on a distance between two data points in a registered data point pair.
    Type: Grant
    Filed: September 24, 2020
    Date of Patent: June 7, 2022
    Assignee: Apollo Intelligent Driving Technology (Beijing) Co., Ltd.
    Inventors: Hao Wang, Liang Wang, Yu Ma
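    A minimal sketch of the velocity step, with centroid alignment standing in for a full ICP-style registration of the two obstacle point clouds; the velocity is the registered displacement divided by the elapsed time.
```python
# Minimal sketch under assumptions: register the two obstacle clouds by a rigid
# translation, then derive velocity from the displacement over the time difference.
import numpy as np


def obstacle_velocity(cloud_t1, cloud_t2, t1, t2):
    """cloud_t*: (N, 3) obstacle points at the two acquisition times t1 < t2."""
    # Crude registration: move cloud_t1 so its centroid matches cloud_t2's centroid
    # (a stand-in for a proper ICP-style registration of the two clouds).
    translation = cloud_t2.mean(axis=0) - cloud_t1.mean(axis=0)
    # The displacement of a registered point pair over the elapsed time gives velocity.
    return translation / (t2 - t1)
```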
  • Patent number: 11353953
    Abstract: A method of modifying an image on a computational device is disclosed.
    Type: Grant
    Filed: August 20, 2018
    Date of Patent: June 7, 2022
    Assignee: FOVO TECHNOLOGY LIMITED
    Inventors: Robert Pepperell, Alistair Burleigh
  • Patent number: 11337652
    Abstract: A method for determining spatial information for a multi-segment articulated rigid body system having at least an anchored segment and a non-anchored segment coupled to the anchored segment, each segment in the multi-segment articulated rigid body system representing a respective body part of a user, the method comprising: obtaining signals recorded by a first autonomous movement sensor coupled to a body part of the user represented by the non-anchored segment; providing the obtained signals as input to a trained statistical model and obtaining corresponding output of the trained statistical model; and determining, based on the corresponding output of the trained statistical model, spatial information for at least the non-anchored segment of the multi-segment articulated rigid body system.
    Type: Grant
    Filed: July 25, 2017
    Date of Patent: May 24, 2022
    Assignee: Facebook Technologies, LLC
    Inventors: Patrick Kaifosh, Timothy Machado, Thomas Reardon, Erik Schomburg, Calvin Tong
  • Patent number: 11343526
    Abstract: A video processing method includes dividing a current frame into a plurality of blocks, generating a motion vector of each block of the plurality of blocks of the current frame according to the each block of the current frame and a corresponding block of a previous frame, generating a global motion vector according to a plurality of motion vectors of the current frame, generating a sum of absolute differences of pixels of each block of the current frame according to the global motion vector, generating a region with a set of blocks of the current frame, matching a distribution of the sum of absolute differences of pixels of the region with a plurality of models, identifying a best matching model, and labeling each block in the region in the current frame with a label of either a foreground block or a background block according to the best matching model.
    Type: Grant
    Filed: December 29, 2020
    Date of Patent: May 24, 2022
    Assignee: Realtek Semiconductor Corp.
    Inventors: Yanting Wang, Guangyu San
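    An illustrative sketch of the per-block labeling, with a fixed SAD threshold replacing the patent's model-matching step; frames are assumed to be single-channel arrays and motion vectors to be given per block as (dx, dy).
```python
# Illustrative per-block pipeline: derive a global motion vector from the block
# motion vectors, compute each block's SAD against the previous frame displaced by
# the global vector, and label high-SAD blocks as foreground.
import numpy as np


def label_blocks(prev_frame, cur_frame, motion_vectors, block=16, sad_thresh=800):
    """frames: 2D uint8 arrays; motion_vectors: (H//block, W//block, 2) as (dx, dy)."""
    h, w = cur_frame.shape
    global_mv = np.median(motion_vectors.reshape(-1, 2), axis=0).astype(int)
    labels = np.zeros((h // block, w // block), dtype="U10")
    for by in range(h // block):
        for bx in range(w // block):
            y, x = by * block, bx * block
            py = np.clip(y + global_mv[1], 0, h - block)   # global-motion-compensated row
            px = np.clip(x + global_mv[0], 0, w - block)   # global-motion-compensated col
            sad = np.abs(cur_frame[y:y + block, x:x + block].astype(int)
                         - prev_frame[py:py + block, px:px + block].astype(int)).sum()
            labels[by, bx] = "foreground" if sad > sad_thresh else "background"
    return labels
```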
  • Patent number: 11333794
    Abstract: Embodiments of the present invention disclose a method, a computer program product, and a computer system for generating a wind map based on multimedia analysis. A computer receives a multimedia source configuration and builds a wind scale reference database. In addition, the computer extracts and processes both wind speed data and contextual data from the multimedia. Moreover, the computer analyses temporal and spatial features, as well as generates a wind map based on the extracted context, extracted wind speed, and analysed temporal and spatial features. Lastly, the wind map generator validates and modifies the wind scale reference database.
    Type: Grant
    Filed: October 23, 2018
    Date of Patent: May 17, 2022
    Assignee: International Business Machines Corporation
    Inventors: Ivan M. Milman, Sushain Pandit, Charles D. Wolfson, Su Liu, Fang Wang
  • Patent number: 11328394
    Abstract: Provided is a deep learning based contrast-enhanced (CE) CT image contrast amplifying method and the deep learning based CE CT image contrast amplifying method includes extracting at least one component CT image between a CE component and a non-CE component for an input CE CT image with the input CE CT image as an input to a previously trained deep learning model; and outputting a contrast-amplified CT image with respect to the CE CT image based on the input CE CT image and the at least one extracted component CT image.
    Type: Grant
    Filed: July 20, 2021
    Date of Patent: May 10, 2022
    Assignees: CLARIPI INC., SEOUL NATIONAL UNIVERSITY R&DB FOUNDATION
    Inventors: Jong Hyo Kim, Hyun Sook Park, Tai Chul Park, Chul Kyun Ahn
  • Patent number: 11320529
    Abstract: A tracking device is provided, which may include a correction target area setting module configured to set an area in which an unnecessary echo tends to be generated based on a structure or behavior of a ship, as a correction target area, a correction target echo extracting module configured to extract a target object echo within the correction target area from a plurality of detected target object echoes, as a correction target echo, a scoring module configured to score a matching level between previous echo information on a target object echo and detected echo information on each of the target object echoes, based on the previous echo information, the detected echo information and the extraction result, and a determining module configured to determine a target object echo as a current tracking target by using the scored result.
    Type: Grant
    Filed: August 22, 2019
    Date of Patent: May 3, 2022
    Assignee: Furuno Electric Co., Ltd.
    Inventors: Daisuke Fujioka, Katsuyuki Yanagi, Suminori Ekuni, Yugo Kubota
  • Patent number: 11315287
    Abstract: Various implementations disclosed herein include devices, systems, and methods for generating body pose information for a person in a physical environment. In various implementations, a device includes an environmental sensor, a non-transitory memory and one or more processors coupled with the environmental sensor and the non-transitory memory. In some implementations, a method includes obtaining, via the environmental sensor, spatial data corresponding to a physical environment. The physical environment includes a person and a fixed spatial point. The method includes identifying a portion of the spatial data that corresponds to a body portion of the person. In some implementations, the method includes determining a position of the body portion relative to the fixed spatial point based on the portion of the spatial data. In some implementations, the method includes generating pose information for the person based on the position of the body portion in relation to the fixed spatial point.
    Type: Grant
    Filed: May 26, 2020
    Date of Patent: April 26, 2022
    Assignee: APPLE INC.
    Inventors: Stefan Auer, Sebastian Bernhard Knorr
  • Patent number: 11308678
    Abstract: Systems and methods for generating cartoon images or emojis of an individual from a photograph of the individual is described. The systems and methods involve transmitting a picture of the individual, such as one taken with a mobile device, to a server that generates a set of emojis showing different emotions of the individual from the picture. The emojis are then transmitted to the mobile device and are available for use by the user in messaging applications, emails, or other electronic communications. The emojis can be added to the default keyboard of the mobile device or be generated in a separate emoji keyboard and be available for selection by the user.
    Type: Grant
    Filed: July 9, 2020
    Date of Patent: April 19, 2022
    Assignee: UMOJIFY, INC.
    Inventor: Afshin Pishevar
  • Patent number: 11304173
    Abstract: Provided is a node location tracking method, including an initial localization step of estimating initial locations of a robot and neighboring nodes using inter-node measurement and a Sum of Gaussian (SoG) filter, wherein the initial localization step includes an iterative multilateration step of initializing the locations of the nodes; and a SoG filter generation step of generating the SoG filter.
    Type: Grant
    Filed: August 28, 2020
    Date of Patent: April 12, 2022
    Assignee: Korea Institute of Science and Technology
    Inventors: Doik Kim, Jung Hee Kim
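    A short sketch of the multilateration initialization as a Gauss-Newton least-squares refinement over measured ranges to known anchor nodes; the Sum of Gaussian filter generation itself is not shown.
```python
# Hedged sketch of iterative multilateration: linearize the range equations around
# the current estimate and refine by least squares (Gauss-Newton).
import numpy as np


def multilaterate(anchors, ranges, iters=20):
    """anchors: (N, 2) known node locations; ranges: (N,) measured distances."""
    x = anchors.mean(axis=0)                       # initial guess at the centroid
    for _ in range(iters):
        diffs = x - anchors                        # (N, 2)
        dists = np.linalg.norm(diffs, axis=1) + 1e-9
        J = diffs / dists[:, None]                 # Jacobian of the range function
        residual = dists - ranges
        step, *_ = np.linalg.lstsq(J, residual, rcond=None)
        x = x - step
    return x
```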
  • Patent number: 11292560
    Abstract: A supervisory propulsion controller module, a speed and position sensing system, and a communication system are incorporated on marine vessels to reduce the wave-making resistance of multiple vessels by operating them in controlled and coordinated spatial patterns that destructively cancel the transverse or divergent wave system of their Kelvin wakes through active control of the vessels' separation distance with speed. This enables improvements in vessel mobility (speed, payload, and range), improves survivability and reliability, and reduces acquisition and total ownership cost.
    Type: Grant
    Filed: August 9, 2020
    Date of Patent: April 5, 2022
    Inventors: Terrence W. Schmidt, Jeffrey E. Kline
  • Patent number: 11288824
    Abstract: A system for processing images captured by a movable object includes one or more processors individually or collectively configured to process a first image set captured by a first imaging component to obtain texture information in response to a second image set captured by a second imaging component having a quality below a predetermined threshold, and obtain environmental information for the movable object based on the texture information. The first imaging component has a first field of view and the second imaging component has a second field of view narrower than the first field of view.
    Type: Grant
    Filed: September 15, 2020
    Date of Patent: March 29, 2022
    Assignee: SZ DJI TECHNOLOGY CO., LTD.
    Inventors: Mingyu Wang, Zhenyu Zhu
  • Patent number: 11277556
    Abstract: Based on information on a tracking target, a tracking target detecting unit is configured to detect the tracking target from an image captured by an automatic tracking camera. An influencing factor detecting unit is configured to detect an influencing factor that influences the amount of movement of the tracking target and set an influence degree, based on information on the influencing factor. Based on the influence degree set by the influencing factor detecting unit and a past movement amount of the tracking target, an adjustment amount calculating unit is configured to calculate an imaging direction adjustment amount for the automatic tracking camera.
    Type: Grant
    Filed: April 1, 2020
    Date of Patent: March 15, 2022
    Assignee: JVCKENWOOD Corporation
    Inventor: Takakazu Katou