Motion Or Velocity Measuring Patents (Class 382/107)
  • Patent number: 11466984
    Abstract: An ECU includes a memory including computer executable instructions for monitoring the condition of a ground engaging tool, and a processor coupled to the memory and configured to execute the computer executable instructions, the computer executable instructions when executed by the processor cause the processor to: acquire an image of the ground engaging tool, evaluate the image using an algorithm that compares the acquired image to a database of existing images to determine the damage, the amount of wear, or the absence of the ground engaging tool, and grade the quality of the acquired image.
    Type: Grant
    Filed: May 15, 2019
    Date of Patent: October 11, 2022
    Assignee: Caterpillar Inc.
    Inventors: John Michael Plouzek, Mitchell Chase Vlaminck, Nolan S. Finch
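    Illustrative sketch: a minimal Python stand-in for the compare-and-grade steps above, assuming grayscale images; pixel contrast serves as the image-quality grade and mean absolute difference as the database-comparison metric, both assumptions rather than the patented algorithm.
      import numpy as np

      def evaluate_tool_image(image, reference_db, quality_thresh=20.0):
          """Grade an acquired ground-engaging-tool image and compare it to a
          database of reference images. Quality is graded here by simple pixel
          contrast (standard deviation); the comparison returns the label of the
          closest reference by mean absolute difference (illustrative criteria)."""
          image = np.asarray(image, float)
          quality = "good" if image.std() > quality_thresh else "poor"
          label, best = None, np.inf
          for name, ref in reference_db.items():
              score = np.abs(image - np.asarray(ref, float)).mean()
              if score < best:
                  label, best = name, score
          return quality, label

      rng = np.random.default_rng(3)
      new_tooth = rng.integers(0, 255, (64, 64))
      worn_tooth = np.clip(new_tooth * 0.6, 0, 255)       # dimmer, lower-contrast tooth
      db = {"new": new_tooth, "worn": worn_tooth}
      observed = worn_tooth + rng.normal(0, 5, (64, 64))  # noisy acquisition
      print(evaluate_tool_image(observed, db))            # -> ('good', 'worn')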
  • Patent number: 11463738
    Abstract: Various embodiments describing methods, systems, and devices for delivering on-demand video viewing angles of an arena at a venue are disclosed. Exemplary implementations may receive images of an event taking place across a plurality of positions within the arena from a series of cameras surrounding the plurality of positions. Content of interest may be identified within the images for a select user. Also, a score may be determined for each of the images based on the identified content of interest for the select user. A highest-score position may be determined from the plurality of positions based on the determined score, and an offer to view images of the highest-score position may be transmitted to a display device of the select user for viewing video of the event.
    Type: Grant
    Filed: November 22, 2019
    Date of Patent: October 4, 2022
    Assignee: Charter Communications Operating, LLC
    Inventors: Sami Makinen, Yassine Maalej
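    Illustrative sketch: scoring each camera position's image by a user's content of interest and offering the highest-scoring position; the tag-and-weight representation and the example camera names are assumptions for illustration only.
      def best_viewing_position(images_by_position, interests):
          """Score each position's image by the weighted content of interest it
          contains for the selected user, and return the highest-scoring position.
          images_by_position: {position_id: [detected content tags]};
          interests: {tag: weight} for the selected user."""
          scores = {
              pos: sum(interests.get(tag, 0.0) for tag in tags)
              for pos, tags in images_by_position.items()
          }
          best = max(scores, key=scores.get)
          return best, scores

      detections = {
          "cam-north": ["player-23", "ball"],
          "cam-court-side": ["player-23", "ball", "coach"],
          "cam-roof": ["crowd"],
      }
      user_interests = {"player-23": 1.0, "ball": 0.5, "coach": 0.3}
      print(best_viewing_position(detections, user_interests))  # -> ('cam-court-side', ...)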
  • Patent number: 11450010
    Abstract: Methods and systems for determining and classifying a number of repetitive motions in a video are described, and include the steps of first determining a plurality of images from a video, where the images are segmented from at least one video frame of the video. Next, performing a pose detection process on a feature of the images to generate one or more landmarks. Next, determining one or more principal component axes on points associated with a given landmark. Finally, determining at least one repetitive motion based on a pattern associated with a projection of the points onto the one or more principal components. In some embodiments, the disclosed methods can classify the repetitive motions into respective types. The present invention can be implemented for convenient use on a mobile computing device, such as a smartphone, for tracking exercises and similar repetitive motions.
    Type: Grant
    Filed: October 16, 2021
    Date of Patent: September 20, 2022
    Assignee: NEX Team Inc.
    Inventors: On Loy Sung, Qi Zhang, Keng Fai Lee, Shing Fat Mak, Daniel Dejos, Man Hon Chan
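    Illustrative sketch: counting repetitions by projecting a single landmark's trajectory onto its first principal component axis and counting local maxima of the projection; the synthetic trajectory and the peak criterion are assumptions, not the claimed classification step.
      import numpy as np

      def count_repetitions(points):
          """Count repetitive motions of a single tracked landmark.
          points: (N, 2) array of the landmark's (x, y) positions over N frames.
          The centered trajectory is projected onto its first principal component
          and local maxima of that 1-D signal are counted as repetitions."""
          pts = np.asarray(points, dtype=float)
          centered = pts - pts.mean(axis=0)
          _, _, vt = np.linalg.svd(centered, full_matrices=False)  # principal axes
          signal = centered @ vt[0]                                # projection on the first axis
          peaks = (signal[1:-1] > signal[:-2]) & (signal[1:-1] >= signal[2:])
          return int(peaks.sum())

      # Synthetic check: a landmark bobbing up and down 10 times.
      t = np.linspace(0.0, 10 * 2 * np.pi, 600)
      trajectory = np.stack([0.1 * np.cos(t), np.sin(t)], axis=1)
      print(count_repetitions(trajectory))  # -> 10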
  • Patent number: 11443439
    Abstract: An air-to-air background-oriented schlieren system in which reference frames are acquired concurrently with the image frames: a target aircraft is recorded from a sensor aircraft flying in formation while reference frames of the underlying terrain are recorded to provide a visually textured background as a reference. This auto-referencing method improves the original AirBOS method by allowing a much more flexible and useful measurement, reducing the flight planning and piloting burden, and broadening the possible camera choices for imaging the visible density changes in air, caused by an airborne vehicle, that produce a refractive index change.
    Type: Grant
    Filed: March 16, 2020
    Date of Patent: September 13, 2022
    Assignee: U.S.A. as Represented by the Administrator of the National Aeronautics and Space Administration
    Inventors: Daniel W Banks, James T Heineck
  • Patent number: 11432737
    Abstract: Systems and methods for predicting motion of a target using imaging are provided. In one aspect, a method includes receiving image data, acquired using an imaging system, corresponding to a region of interest (“ROI”) in a subject, and generating a set of reconstructed images from the image data. The method also includes processing the set of reconstructed images to obtain motion information associated with a target in the ROI, and applying the motion information in a motion prediction framework to estimate a predicted motion of the target. The method further includes generating a report based on the predicted motion estimated.
    Type: Grant
    Filed: March 17, 2018
    Date of Patent: September 6, 2022
    Assignee: The Regents of the University of California
    Inventors: Xinzhou Li, Holden H. Wu
  • Patent number: 11430308
    Abstract: A method includes obtaining, by a motion generator that has been trained to generate torque values for a plurality of joints of a rig associated with a target, a set of parameters associated with a target motion. The method includes, in response to the target motion being a first type of motion, generating a first set of torque values for the plurality of joints based on the set of parameters and a set of previous poses of the target. The method includes, in response to the target motion being a second type of motion, generating a second set of torque values for the plurality of joints based on the set of parameters and the set of previous poses of the target. The method includes triggering a movement of the target in accordance with the first set of torque values or the second set of torque values.
    Type: Grant
    Filed: April 28, 2021
    Date of Patent: August 30, 2022
    Assignee: Apple Inc.
    Inventors: Jian Zhang, Siva Chandra Mouli Sivapurapu, Aashi Manglik, Amritpal Singh Saini, Edward S. Ahn
  • Patent number: 11429229
    Abstract: An image processing apparatus according to the present disclosure includes: a position detection illumination unit; an image recognition illumination unit; an illumination control unit; an imaging unit; and an image processing unit. The position detection illumination unit outputs position detection illumination light. The position detection illumination light is used for position detection on a position detection object. The image recognition illumination unit outputs image recognition illumination light. The image recognition illumination light is used for image recognition on an image recognition object. The illumination control unit controls the position detection illumination unit and the image recognition illumination unit to cause the position detection illumination light and the image recognition illumination light to be outputted at timings different from each other.
    Type: Grant
    Filed: November 22, 2019
    Date of Patent: August 30, 2022
    Assignee: SONY GROUP CORPORATION
    Inventor: Masahiro Ando
  • Patent number: 11416507
    Abstract: Techniques for processing combinations of timeseries data and time-dependent semantic data are provided. The timeseries data can be data from one or more Internet of things (IOT) devices having one or more hardware sensors. The semantic data can be master data. Disclosed techniques allow for time dependent semantic data to be used with the timeseries data, so that semantic data appropriate for a time period associated with the timeseries data can be used. Changes to semantic data are tracked and recorded, where the changes can represent a new value to be used going forward in time or an update to a value for a prior time period. Timeseries data and semantic data can be stored with identifiers that facilitate their combination, such as date ranges, identifiers of analog world objects, or identifiers for discrete sets of semantic data values.
    Type: Grant
    Filed: October 26, 2020
    Date of Patent: August 16, 2022
    Assignee: SAP SE
    Inventors: Christian Conradi, Seshatalpasai Madala
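    Illustrative sketch: one way to combine timeseries data with time-dependent semantic (master) data by picking, for each reading, the master-data version valid at that timestamp; the device and location fields are hypothetical, and pandas.merge_asof stands in for the described identifier-based combination.
      import pandas as pd

      # Hypothetical sensor readings (timeseries data) keyed by a device id.
      readings = pd.DataFrame({
          "device_id": ["pump-1", "pump-1", "pump-1"],
          "ts": pd.to_datetime(["2020-01-10", "2020-02-10", "2020-03-10"]),
          "value": [3.1, 3.4, 2.9],
      })

      # Hypothetical time-dependent master data: each row is valid from `valid_from`
      # until the next row for the same device supersedes it.
      master = pd.DataFrame({
          "device_id": ["pump-1", "pump-1"],
          "valid_from": pd.to_datetime(["2020-01-01", "2020-02-01"]),
          "location": ["hall A", "hall B"],
      })

      # Attach the master-data version that was valid at each reading's timestamp.
      joined = pd.merge_asof(
          readings.sort_values("ts"),
          master.sort_values("valid_from"),
          left_on="ts",
          right_on="valid_from",
          by="device_id",
          direction="backward",
      )
      print(joined[["device_id", "ts", "value", "location"]])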
  • Patent number: 11417115
    Abstract: An obstacle recognition device of a vehicle provided with a camera capturing an image around the vehicle, includes an acquiring unit sequentially acquiring the image captured by the camera; a feature point extracting unit extracting a plurality of feature points of an object included in the image; a calculation unit calculating each motion distance of the plurality of feature points between the image previously acquired and the image currently acquired by the acquiring unit; a first determination unit determining whether each motion distance of the feature points is larger than or equal to a first threshold; a second determination unit determining whether each motion distance of the feature points is larger than or equal to a second threshold; and an obstacle recognition unit recognizing an obstacle.
    Type: Grant
    Filed: October 13, 2020
    Date of Patent: August 16, 2022
    Assignees: DENSO CORPORATION, TOYOTA JIDOSHA KABUSHIKI KAISHA
    Inventors: Koki Osada, Takayuki Hiromitsu, Tomoyuki Fujimoto, Takuya Miwa, Yutaka Hamamoto, Masumi Fukuman, Akihiro Kida, Kunihiro Sugihara
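    Illustrative sketch: computing each matched feature point's motion distance between the previous and current image and testing it against two thresholds; the coordinates and threshold values are illustrative assumptions, and the final obstacle decision is not reproduced.
      import numpy as np

      def classify_feature_points(prev_pts, curr_pts, t1, t2):
          """Classify tracked feature points by their frame-to-frame motion distance.
          prev_pts, curr_pts: (N, 2) arrays of matched feature-point coordinates in
          the previous and current image; t1 < t2 are pixel-distance thresholds.
          Returns boolean masks of points whose motion meets each threshold."""
          motion = np.linalg.norm(
              np.asarray(curr_pts, float) - np.asarray(prev_pts, float), axis=1)
          return motion >= t1, motion >= t2

      prev_pts = [[100, 100], [200, 150], [50, 80]]
      curr_pts = [[101, 100], [210, 158], [50, 81]]
      over_t1, over_t2 = classify_feature_points(prev_pts, curr_pts, t1=2.0, t2=8.0)
      # A point might be treated as a nearby obstacle only if it clears the larger threshold.
      print(over_t1, over_t2)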
  • Patent number: 11410546
    Abstract: Systems and methods for determining the velocity of an object associated with a three-dimensional (3D) scene may include: a LIDAR system generating two sets of 3D point cloud data of the scene from two consecutive point cloud sweeps; a pillar feature network encoding data of the point cloud data to extract two-dimensional (2D) bird's-eye-view embeddings for each of the point cloud data sets in the form of pseudo images, wherein the 2D bird's-eye-view embeddings for a first of the two point cloud data sets comprise pillar features for the first point cloud data set and the 2D bird's-eye-view embeddings for a second of the two point cloud data sets comprise pillar features for the second point cloud data set; and a feature pyramid network encoding the pillar features and performing a 2D optical flow estimation to estimate the velocity of the object.
    Type: Grant
    Filed: May 18, 2020
    Date of Patent: August 9, 2022
    Assignee: TOYOTA RESEARCH INSTITUTE, INC.
    Inventors: Kuan-Hui Lee, Matthew T. Kliemann, Adrien David Gaidon
  • Patent number: 11402495
    Abstract: The invention concerns a monitoring method that comprises coupling in an integral manner at least one electromagnetic mirror of passive type with a given target to be monitored and monitoring the given target; wherein monitoring the given target includes: acquiring, via one or more synthetic aperture radar(s) installed on board one or more satellites and/or one or more aerial platforms, SAR images of a given area of the earth's surface where the given target is located; and determining, via a processing unit, a movement of the electromagnetic mirror on the basis of the acquired SAR images.
    Type: Grant
    Filed: December 1, 2017
    Date of Patent: August 2, 2022
    Assignee: Thales Alenia Space Italia S.p.A. Con Unico Socio
    Inventors: Luca Soli, Diego Calabrese
  • Patent number: 11386288
    Abstract: A movement state recognition multitask DNN model training section 46 trains a parameter of a DNN model based on an image data time series and a sensor data time series, and based on first annotation data, second annotation data, and third annotation data generated for the image data time series and the sensor data time series. Training is performed such that the movement state recognized by the DNN model, when input with the image data time series and the sensor data time series, matches the movement states indicated by the first annotation data, the second annotation data, and the third annotation data. This enables information to be efficiently extracted and combined from both video data and sensor data, and also enables movement state recognition to be implemented with high precision for a data set including data that does not fall in any movement state class.
    Type: Grant
    Filed: April 26, 2019
    Date of Patent: July 12, 2022
    Assignee: NIPPON TELEGRAPH AND TELEPHONE CORPORATION
    Inventors: Shuhei Yamamoto, Hiroyuki Toda
  • Patent number: 11388555
    Abstract: Provided herein is a method for quantifying and measuring human mobility within defined geographic regions and sub-regions. Methods may include: identifying sub-regions within a region; identifying static information associated with the sub-regions from one or more static information sources; obtaining dynamic information associated with the sub-regions from one or more dynamic information sources; determining correlations between elements of the static information associated with a respective sub-region and elements of the dynamic information associated with the respective sub-regions; generating a mobility score for the respective sub-region based, at least in part, on the correlations between the elements of the static information and the elements of the dynamic information associated with the respective sub-region; and providing the mobility score to one or more clients for guiding an action relative to the mobility score.
    Type: Grant
    Filed: December 30, 2020
    Date of Patent: July 12, 2022
    Assignee: HERE GLOBAL B.V.
    Inventors: Dmitry Koval, Jerome Beaurepaire
  • Patent number: 11388385
    Abstract: Disclosed herein are primary and auxiliary image capture devices for image processing and related methods. According to an aspect, a method may include using primary and auxiliary image capture devices to perform image processing. The method may include using the primary image capture device to capture a first image of a scene, the first image having a first quality characteristic. Further, the method may include using the auxiliary image capture device to capture a second image of the scene. The second image may have a second quality characteristic. The second quality characteristic may be of lower quality than the first quality characteristic. The method may also include adjusting at least one parameter of one of the captured images to create a plurality of adjusted images for one of approximating and matching the first quality characteristic. Further, the method may include utilizing the adjusted images for image processing.
    Type: Grant
    Filed: January 3, 2021
    Date of Patent: July 12, 2022
    Assignee: 3DMedia Corporation
    Inventors: Bahram Dahi, Tassos Markas, Michael McNamer, Jon Boyette
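    Illustrative sketch: adjusting one parameter pair (a brightness offset and a contrast gain) of the auxiliary image so its statistics approximate the primary image's; a simplified stand-in for the described quality-matching adjustment, with synthetic images.
      import numpy as np

      def match_brightness_contrast(aux_image, primary_image):
          """Scale and shift the auxiliary image so its mean and standard deviation
          approximate those of the primary image (a crude quality-matching step)."""
          aux = np.asarray(aux_image, float)
          prim = np.asarray(primary_image, float)
          gain = prim.std() / max(aux.std(), 1e-6)
          adjusted = (aux - aux.mean()) * gain + prim.mean()
          return np.clip(adjusted, 0, 255).astype(np.uint8)

      rng = np.random.default_rng(4)
      primary = rng.integers(60, 200, (48, 48)).astype(np.uint8)
      auxiliary = (primary * 0.5 + 10).astype(np.uint8)     # darker, lower-contrast copy
      matched = match_brightness_contrast(auxiliary, primary)
      print(primary.mean(), matched.mean())                  # now roughly equal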
  • Patent number: 11379696
    Abstract: The present disclosure provides a pedestrian re-identification method and apparatus, computer device and readable medium. The method comprises: collecting a target image and a to-be-identified image including a pedestrian image; obtaining a feature expression of the target image and a feature expression of the to-be-identified image respectively, based on a pre-trained feature extraction model; wherein the feature extraction model is obtained by training based on a self-attention feature of a base image as well as a co-attention feature of the base image relative to a reference image; identifying whether a pedestrian in the to-be-identified image is the same pedestrian as that in the target image according to the feature expression of the target image and the feature expression of the to-be-identified image.
    Type: Grant
    Filed: March 12, 2020
    Date of Patent: July 5, 2022
    Assignee: BEIJING BAIDU NETCOM SCIENCE AND TECHNOLOGY CO., LTD.
    Inventors: Zhigang Wang, Jian Wang, Shilei Wen, Errui Ding, Hao Sun
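    Illustrative sketch: comparing the two feature expressions with cosine similarity to decide whether the images show the same pedestrian; the random embeddings and the 0.7 threshold are assumptions, since the real vectors would come from the trained feature extraction model.
      import numpy as np

      def is_same_pedestrian(target_feat, query_feat, threshold=0.7):
          """Compare two embedding vectors with cosine similarity and decide
          whether they belong to the same pedestrian (illustrative threshold)."""
          a = np.asarray(target_feat, float)
          b = np.asarray(query_feat, float)
          cos = float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))
          return cos >= threshold, cos

      rng = np.random.default_rng(0)
      target = rng.normal(size=256)
      query = target + 0.1 * rng.normal(size=256)   # a near-duplicate embedding
      print(is_same_pedestrian(target, query))       # high similarity -> same identity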
  • Patent number: 11373411
    Abstract: A method includes obtaining a two-dimensional image, obtaining a two-dimensional image annotation that indicates presence of an object in the two-dimensional image, determining a location proposal based on the two-dimensional image annotation, determining a classification for the object, determining an estimated size for the object based on the classification for the object, and defining a three-dimensional cuboid for the object based on the location proposal and the estimated size.
    Type: Grant
    Filed: June 6, 2019
    Date of Patent: June 28, 2022
    Assignee: Apple Inc.
    Inventors: Hanlin Goh, Nitish Srivastava, Yichuan Tang, Ruslan Salakhutdinov
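    Illustrative sketch: turning a 2D detection into a 3D cuboid proposal by back-projecting the box centre through pinhole intrinsics at an assumed depth and attaching a class-based size prior; the size table, depth value, and intrinsics are hypothetical.
      import numpy as np

      # Hypothetical per-class size priors (width, height, length) in metres.
      CLASS_SIZES = {"car": (1.8, 1.5, 4.5), "pedestrian": (0.6, 1.7, 0.6)}

      def cuboid_from_2d(box_center_px, depth_m, class_name, fx, fy, cx, cy):
          """Back-project a 2D detection into a 3D cuboid proposal.
          box_center_px: (u, v) centre of the 2D annotation in pixels;
          depth_m: assumed distance to the object along the camera axis;
          fx, fy, cx, cy: pinhole intrinsics.
          Returns (centre_xyz, size_whl) of an axis-aligned cuboid."""
          u, v = box_center_px
          x = (u - cx) * depth_m / fx
          y = (v - cy) * depth_m / fy
          centre = np.array([x, y, depth_m])
          size = np.array(CLASS_SIZES[class_name])
          return centre, size

      centre, size = cuboid_from_2d((700, 400), depth_m=12.0, class_name="car",
                                    fx=1000.0, fy=1000.0, cx=640.0, cy=360.0)
      print(centre, size)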
  • Patent number: 11373367
    Abstract: A method for characterization of respiratory characteristics based on a voxel model includes: successively capturing multiple frames of depth image of a thoracoabdominal surface of human body and modelling the multiple frames of depth image in 3D to obtain multiple frames of voxel model in time series; traversing voxel units of the multiple frames of voxel model and extracting a volumetric characteristic and areal characteristic of the multiple frames of voxel model; acquiring a minimum common voxel bounding box of the multiple frames of voxel model; describing spatial distribution of the multiple frames of voxel model in the form of probability and arranging the probabilities of the minimum voxel bounding boxes of individual frames of voxel model to construct a sample space of super-high dimensional vectors; reducing the dimensions of the sample space to obtain intrinsic parameters; obtaining a characteristic variable capable of characterizing the voxel model.
    Type: Grant
    Filed: November 20, 2019
    Date of Patent: June 28, 2022
    Assignee: SOOCHOW UNIVERSITY
    Inventors: Shumei Yu, Pengcheng Hou, Rongchuan Sun, Shaolong Kuang, Lining Sun
  • Patent number: 11368632
    Abstract: A method for processing a video includes: identifying a target object in a first video segment; acquiring a current video frame of a second video segment; acquiring a first image region corresponding to the target object in a first target video frame of the first video segment, and acquiring a second image region corresponding to the target object in the current video frame of the second video segment, wherein the first target video frame corresponds to the current video frame of the second video segment in terms of video frame time; and performing picture splicing on the first target video frame and the current video frame of the second video segment according to the first image region and the second image region to obtain a processed first video frame.
    Type: Grant
    Filed: September 18, 2020
    Date of Patent: June 21, 2022
    Assignee: Beijing Xiaomi Pinecone Electronics Co., Ltd.
    Inventor: Binglin Chang
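    Illustrative sketch: splicing the target-object region of the time-aligned frame from the first segment onto the current frame of the second segment using a boolean mask; the mask and the constant-colour frames are stand-ins for the described image regions.
      import numpy as np

      def splice_frames(frame_a, frame_b, mask_a):
          """Picture-splice two time-aligned frames: keep the target-object region
          from frame_a (where mask_a is True) and the rest of the picture from frame_b.
          frame_a, frame_b: (H, W, 3) uint8 images; mask_a: (H, W) boolean mask."""
          out = frame_b.copy()
          out[mask_a] = frame_a[mask_a]
          return out

      h, w = 240, 320
      frame_a = np.full((h, w, 3), 40, np.uint8)
      frame_b = np.full((h, w, 3), 200, np.uint8)
      mask_a = np.zeros((h, w), bool)
      mask_a[80:160, 120:220] = True          # region occupied by the target object
      print(splice_frames(frame_a, frame_b, mask_a).mean())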
  • Patent number: 11353953
    Abstract: A method of modifying an image on a computational device is disclosed.
    Type: Grant
    Filed: August 20, 2018
    Date of Patent: June 7, 2022
    Assignee: FOVO TECHNOLOGY LIMITED
    Inventors: Robert Pepperell, Alistair Burleigh
  • Patent number: 11353476
    Abstract: Embodiments of the present disclosure provide a method and apparatus for determining a velocity of an obstacle, a device, and a medium. An implementation includes: acquiring first point cloud data of the obstacle at a first time and second point cloud data of the obstacle at a second time; registering the first point cloud data and the second point cloud data by moving the first point cloud data or the second point cloud data; and determining a moving velocity of the obstacle based on a distance between two data points in a registered data point pair.
    Type: Grant
    Filed: September 24, 2020
    Date of Patent: June 7, 2022
    Assignee: Apollo Intelligent Driving Technology (Beijing) Co., Ltd.
    Inventors: Hao Wang, Liang Wang, Yu Ma
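    Illustrative sketch: a crude registration that aligns the two obstacle point clouds by their centroids and divides the resulting displacement by the elapsed time; centroid alignment is an assumption standing in for the described registration and point-pair distance step.
      import numpy as np

      def obstacle_velocity(cloud_t1, cloud_t2, t1, t2):
          """Estimate an obstacle's moving velocity from two point clouds.
          cloud_t1, cloud_t2: (N, 3) and (M, 3) arrays of obstacle points captured
          at times t1 and t2; the centroid displacement over the elapsed time is
          taken as the velocity vector."""
          displacement = (np.asarray(cloud_t2, float).mean(axis=0)
                          - np.asarray(cloud_t1, float).mean(axis=0))
          return displacement / (t2 - t1)

      rng = np.random.default_rng(1)
      base = rng.uniform(-1, 1, size=(500, 3))
      cloud_a = base
      cloud_b = base + np.array([0.5, 0.0, 0.0])              # obstacle moved 0.5 m in x
      print(obstacle_velocity(cloud_a, cloud_b, t1=0.0, t2=0.1))  # ~[5, 0, 0] m/s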
  • Patent number: 11337652
    Abstract: A method for determining spatial information for a multi-segment articulated rigid body system having at least an anchored segment and a non-anchored segment coupled to the anchored segment, each segment in the multi-segment articulated rigid body system representing a respective body part of a user, the method comprising: obtaining signals recorded by a first autonomous movement sensor coupled to a body part of the user represented by the non-anchored segment; providing the obtained signals as input to a trained statistical model and obtaining corresponding output of the trained statistical model; and determining, based on the corresponding output of the trained statistical model, spatial information for at least the non-anchored segment of the multi-segment articulated rigid body system.
    Type: Grant
    Filed: July 25, 2017
    Date of Patent: May 24, 2022
    Assignee: Facebook Technologies, LLC
    Inventors: Patrick Kaifosh, Timothy Machado, Thomas Reardon, Erik Schomburg, Calvin Tong
  • Patent number: 11343526
    Abstract: A video processing method includes dividing a current frame into a plurality of blocks, generating a motion vector of each block of the plurality of blocks of the current frame according to the each block of the current frame and a corresponding block of a previous frame, generating a global motion vector according to a plurality of motion vectors of the current frame, generating a sum of absolute differences of pixels of each block of the current frame according to the global motion vector, generating a region with a set of blocks of the current frame, matching a distribution of the sum of absolute differences of pixels of the region with a plurality of models, identifying a best matching model, and labeling each block in the region in the current frame with a label of either a foreground block or a background block according to the best matching model.
    Type: Grant
    Filed: December 29, 2020
    Date of Patent: May 24, 2022
    Assignee: Realtek Semiconductor Corp.
    Inventors: Yanting Wang, Guangyu San
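    Illustrative sketch: labelling blocks as foreground or background by compensating the previous frame with the global motion vector and thresholding each block's per-pixel sum of absolute differences; the fixed threshold replaces the described model-matching step and the frames are synthetic.
      import numpy as np

      def label_blocks(prev_frame, curr_frame, global_mv, block=16, thresh=8.0):
          """Label each block of the current frame as foreground or background.
          Each block is compared (sum of absolute differences, normalised per pixel)
          against the previous frame shifted by the global motion vector; blocks that
          differ strongly from the globally compensated prediction are foreground."""
          h, w = curr_frame.shape
          dy, dx = global_mv
          compensated = np.roll(prev_frame, shift=(dy, dx), axis=(0, 1)).astype(float)
          labels = np.zeros((h // block, w // block), dtype="<U10")
          for by in range(h // block):
              for bx in range(w // block):
                  ys, xs = by * block, bx * block
                  sad = np.abs(curr_frame[ys:ys + block, xs:xs + block].astype(float)
                               - compensated[ys:ys + block, xs:xs + block]).mean()
                  labels[by, bx] = "foreground" if sad > thresh else "background"
          return labels

      prev = np.zeros((64, 64), np.uint8)
      curr = np.roll(prev, (2, 3), axis=(0, 1))
      curr[16:32, 16:32] = 255                     # a moving object appears
      print(label_blocks(prev, curr, global_mv=(2, 3)))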
  • Patent number: 11333794
    Abstract: Embodiments of the present invention disclose a method, a computer program product, and a computer system for generating a wind map based on multimedia analysis. A computer receives a multimedia source configuration and builds a wind scale reference database. In addition, the computer extracts and processes both wind speed data and contextual data from the multimedia. Moreover, the computer analyses temporal and spatial features, as well as generates a wind map based on the extracted context, extracted wind speed, and analysed temporal and spatial features. Lastly, the wind map generator validates and modifies the wind scale reference database.
    Type: Grant
    Filed: October 23, 2018
    Date of Patent: May 17, 2022
    Assignee: International Business Machines Corporation
    Inventors: Ivan M. Milman, Sushain Pandit, Charles D. Wolfson, Su Liu, Fang Wang
  • Patent number: 11328394
    Abstract: Provided is a deep learning based contrast-enhanced (CE) CT image contrast amplifying method, which includes: extracting at least one component CT image, from among a CE component and a non-CE component, for an input CE CT image, with the input CE CT image as an input to a previously trained deep learning model; and outputting a contrast-amplified CT image with respect to the CE CT image based on the input CE CT image and the at least one extracted component CT image.
    Type: Grant
    Filed: July 20, 2021
    Date of Patent: May 10, 2022
    Assignees: CLARIPI INC., SEOUL NATIONAL UNIVERSITY R&DB FOUNDATION
    Inventors: Jong Hyo Kim, Hyun Sook Park, Tai Chul Park, Chul Kyun Ahn
  • Patent number: 11320529
    Abstract: A tracking device is provided, which may include a correction target area setting module configured to set an area in which an unnecessary echo tends to be generated based on a structure or behavior of a ship, as a correction target area, a correction target echo extracting module configured to extract a target object echo within the correction target area from a plurality of detected target object echoes, as a correction target echo, a scoring module configured to score a matching level between previous echo information on a target object echo and detected echo information on each of the target object echoes, based on the previous echo information, the detected echo information and the extraction result, and a determining module configured to determine a target object echo as a current tracking target by using the scored result.
    Type: Grant
    Filed: August 22, 2019
    Date of Patent: May 3, 2022
    Assignee: Furuno Electric Co., Ltd.
    Inventors: Daisuke Fujioka, Katsuyuki Yanagi, Suminori Ekuni, Yugo Kubota
  • Patent number: 11315287
    Abstract: Various implementations disclosed herein include devices, systems, and methods for generating body pose information for a person in a physical environment. In various implementations, a device includes an environmental sensor, a non-transitory memory and one or more processors coupled with the environmental sensor and the non-transitory memory. In some implementations, a method includes obtaining, via the environmental sensor, spatial data corresponding to a physical environment. The physical environment includes a person and a fixed spatial point. The method includes identifying a portion of the spatial data that corresponds to a body portion of the person. In some implementations, the method includes determining a position of the body portion relative to the fixed spatial point based on the portion of the spatial data. In some implementations, the method includes generating pose information for the person based on the position of the body portion in relation to the fixed spatial point.
    Type: Grant
    Filed: May 26, 2020
    Date of Patent: April 26, 2022
    Assignee: APPLE INC.
    Inventors: Stefan Auer, Sebastian Bernhard Knorr
  • Patent number: 11308678
    Abstract: Systems and methods for generating cartoon images or emojis of an individual from a photograph of the individual are described. The systems and methods involve transmitting a picture of the individual, such as one taken with a mobile device, to a server that generates a set of emojis showing different emotions of the individual from the picture. The emojis are then transmitted to the mobile device and are available for use by the user in messaging applications, emails, or other electronic communications. The emojis can be added to the default keyboard of the mobile device or be generated in a separate emoji keyboard and be available for selection by the user.
    Type: Grant
    Filed: July 9, 2020
    Date of Patent: April 19, 2022
    Assignee: UMOJIFY, INC.
    Inventor: Afshin Pishevar
  • Patent number: 11304173
    Abstract: Provided is a node location tracking method, including an initial localization step of estimating initial locations of a robot and neighboring nodes using inter-node measurement and a Sum of Gaussian (SoG) filter, wherein the initial localization step includes an iterative multilateration step of initializing the locations of the nodes; and a SoG filter generation step of generating the SoG filter.
    Type: Grant
    Filed: August 28, 2020
    Date of Patent: April 12, 2022
    Assignee: Korea Institute of Science and Technology
    Inventors: Doik Kim, Jung Hee Kim
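    Illustrative sketch: a single (non-iterative) multilateration step that linearises the range equations to anchors at known positions and solves them by least squares; the anchor layout is synthetic, and the Sum of Gaussian filtering stage is not shown.
      import numpy as np

      def multilaterate(anchors, ranges):
          """Estimate an unknown node position from ranges to anchors at known positions.
          ||x - a_i||^2 = r_i^2; subtracting the i = 0 equation from the others removes
          the quadratic |x|^2 term, leaving a linear least-squares problem in x.
          anchors: (N, 2) anchor coordinates; ranges: (N,) measured distances."""
          anchors = np.asarray(anchors, float)
          ranges = np.asarray(ranges, float)
          a0, r0 = anchors[0], ranges[0]
          A = 2.0 * (anchors[1:] - a0)
          b = (np.sum(anchors[1:] ** 2, axis=1) - np.sum(a0 ** 2)
               - ranges[1:] ** 2 + r0 ** 2)
          sol, *_ = np.linalg.lstsq(A, b, rcond=None)
          return sol

      anchors = [[0, 0], [10, 0], [0, 10], [10, 10]]
      true_pos = np.array([3.0, 4.0])
      ranges = [np.linalg.norm(true_pos - np.array(a)) for a in anchors]
      print(multilaterate(anchors, ranges))   # ~[3, 4]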
  • Patent number: 11292560
    Abstract: A supervisory propulsion controller module, a speed and position sensing system, and a communication system are incorporated on marine vessels to reduce the wave-making resistance of the multiple vessels by operating them in controlled and coordinated spatial patterns that destructively cancel their Kelvin wake transverse or divergent wave systems through active control of the vessels' separation distance with speed. This enables improvement in the vessels' mobility (speed, payload and range), improves survivability and reliability, and reduces acquisition and total ownership cost.
    Type: Grant
    Filed: August 9, 2020
    Date of Patent: April 5, 2022
    Inventors: Terrence W. Schmidt, Jeffrey E. Kline
  • Patent number: 11288824
    Abstract: A system for processing images captured by a movable object includes one or more processors individually or collectively configured to process a first image set captured by a first imaging component to obtain texture information in response to a second image set captured by a second imaging component having a quality below a predetermined threshold, and obtain environmental information for the movable object based on the texture information. The first imaging component has a first field of view and the second imaging component has a second field of view narrower than the first field of view.
    Type: Grant
    Filed: September 15, 2020
    Date of Patent: March 29, 2022
    Assignee: SZ DJI TECHNOLOGY CO., LTD.
    Inventors: Mingyu Wang, Zhenyu Zhu
  • Patent number: 11277556
    Abstract: Based on information on a tracking target, a tracking target detecting unit is configured to detect the tracking target from an image captured by an automatic tracking camera. An influencing factor detecting unit is configured to detect an influencing factor that influences the amount of movement of the tracking target and set an influence degree, based on information on the influencing factor. Based on the influence degree set by the influencing factor detecting unit and a past movement amount of the tracking target, an adjustment amount calculating unit is configured to calculate an imaging direction adjustment amount for the automatic tracking camera.
    Type: Grant
    Filed: April 1, 2020
    Date of Patent: March 15, 2022
    Assignee: JVCKENWOOD Corporation
    Inventor: Takakazu Katou
  • Patent number: 11265526
    Abstract: A method for automatic registration of 3D image data, captured by a 3D image capture system having an RGB camera and a depth camera, includes capturing 2D image data with the RGB camera at a first pose; capturing depth data with the depth camera at the first pose; performing an initial registration of the RGB camera to the depth camera; capturing 2D image data with the RGB camera at a second pose; capturing depth data at the second pose; and calculating an updated registration of the RGB camera to the depth camera.
    Type: Grant
    Filed: July 27, 2020
    Date of Patent: March 1, 2022
    Assignee: Occipital, Inc.
    Inventors: Patrick O'Keefe, Jeffrey Roger Powers, Nicolas Burrus
  • Patent number: 11257576
    Abstract: A method, computer program product, and computing system for tracking encounter participants is executed on a computing device and includes obtaining encounter information of a patient encounter, wherein the encounter information includes machine vision encounter information obtained via one or more machine vision systems. The machine vision encounter information is processed to identify one or more humanoid shapes.
    Type: Grant
    Filed: January 27, 2020
    Date of Patent: February 22, 2022
    Assignee: NUANCE COMMUNICATIONS, INC.
    Inventors: Donald E. Owen, Daniel Paulino Almendro Barreda, Dushyant Sharma
  • Patent number: 11256085
    Abstract: Light deflection prism for altering a field of view of a camera of a device comprising a surface with a camera aperture region defining an actual light entrance angular cone projecting from the surface. A method comprises disposing a first surface of a light deflection prism on the surface so as to overlap the angular cone; internally reflecting a central ray, entering the prism under a normal incidence angle through a second surface, at a third surface of the prism towards the first surface under a normal angle of incidence such that the central ray enters the prism at an angle of less than 90° relative to a normal of the device surface and such that a ray at one boundary of an effective light entrance angular cone defined as a result of the light reflection at the third surface is substantially parallel to the device surface.
    Type: Grant
    Filed: August 1, 2018
    Date of Patent: February 22, 2022
    Assignee: NATIONAL UNIVERSITY OF SINGAPORE
    Inventor: Mark Brian Howell Breese
  • Patent number: 11250247
    Abstract: There is provided an information processing device including a control unit to generate play event information based on a determination whether detected behavior of a user is a predetermined play event.
    Type: Grant
    Filed: September 12, 2019
    Date of Patent: February 15, 2022
    Inventors: Hideyuki Matsunaga, Kosei Yamashita
  • Patent number: 11238597
    Abstract: [Problem] To provide a suspicious or abnormal subject detecting device for detecting a suspicious or abnormal subject appearing in time-series images. [Solution] An accumulating device 2 includes a first detecting unit 23 for detecting movement of a plurality of articulations included in an action subject Z appearing in a plurality of first time-series images Y1 obtained by photographing a predetermined point; and a determining unit 24 for determining one or more normal actions at the predetermined point based on a large number of movements of the plurality of articulations detected by the first detecting unit 23.
    Type: Grant
    Filed: November 21, 2019
    Date of Patent: February 1, 2022
    Assignee: ASILLA, INC.
    Inventor: Daisuke Kimura
  • Patent number: 11232583
    Abstract: A method of determining a pose of a camera is described. The method comprises analyzing changes in an image detected by the camera using a plurality of sensors of the camera; determining if a pose of the camera is incorrect; determining which sensors of the plurality of sensors are providing the most reliable image data; and analyzing data from the sensors providing the most reliable image data.
    Type: Grant
    Filed: March 25, 2016
    Date of Patent: January 25, 2022
    Assignee: Samsung Electronics Co., Ltd.
    Inventor: Rajesh Narasimha
  • Patent number: 11227397
    Abstract: The invention relates to computation of optical flow using event-based vision sensors.
    Type: Grant
    Filed: May 29, 2018
    Date of Patent: January 18, 2022
    Assignee: UNIVERSITÄT ZÜRICH
    Inventors: Tobias Delbruck, Min Liu
  • Patent number: 11229107
    Abstract: A method includes the steps of obtaining a frame from an image sensor, the frame comprising a number of pixel values, detecting a change in a first subset of the pixel values, detecting a change in a second subset of the pixel values near the first subset of the pixel values, and determining an occupancy state based on a relationship between the change in the first subset of the pixel values and the change in the second subset of the pixel values. The occupancy state may be determined to be occupied when the change in the first subset of the pixel values is in a first direction and the change in the second subset of the pixel values is in a second direction opposite the first direction.
    Type: Grant
    Filed: February 2, 2018
    Date of Patent: January 18, 2022
    Assignee: IDEAL Industries Lighting LLC
    Inventors: Sten Heikman, Yuvaraj Dora, Ronald W. Bessems, John Roberts, Robert D. Underwood
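    Illustrative sketch: declaring occupancy when the mean change in one pixel subset and the mean change in a neighbouring subset point in opposite directions; the regions, threshold, and brighten/darken example are assumptions standing in for the described relationship.
      import numpy as np

      def occupancy_state(prev_frame, curr_frame, region_a, region_b, thresh=10.0):
          """Return "occupied" when the mean pixel value of one region rises while a
          neighbouring region falls (or vice versa), i.e. the two changes have
          opposite directions and both are significant.
          region_a, region_b: (slice, slice) index pairs selecting pixel subsets."""
          delta_a = curr_frame[region_a].mean() - prev_frame[region_a].mean()
          delta_b = curr_frame[region_b].mean() - prev_frame[region_b].mean()
          opposite = delta_a * delta_b < 0
          significant = min(abs(delta_a), abs(delta_b)) > thresh
          return "occupied" if (opposite and significant) else "unoccupied"

      prev = np.full((64, 64), 100.0)
      curr = prev.copy()
      curr[:32, :] += 40      # first subset brightens (e.g. a person enters the light)
      curr[32:, :] -= 40      # adjacent subset darkens (their shadow)
      print(occupancy_state(prev, curr,
                            (slice(0, 32), slice(None)),
                            (slice(32, 64), slice(None))))   # -> occupied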
  • Patent number: 11227169
    Abstract: A system includes a sensor, which is configured to detect a plurality of objects within an area, and a computing device in communication with the sensor. The computing device is configured to determine that one of the plurality of objects is static, determine that one of the plurality of objects is temporary, determine a geometric relationship between the temporary object and the static object, and determine whether one of the plurality of objects is a ghost object based on the geometric relationship.
    Type: Grant
    Filed: February 21, 2020
    Date of Patent: January 18, 2022
    Assignee: Continental Automotive Systems, Inc.
    Inventor: Andrew Phillip Bolduc
  • Patent number: 11219814
    Abstract: Exemplary embodiments of the present disclosure are directed to systems, methods, and computer-readable media configured to autonomously generate personalized recommendations for a user before, during, or after a round of golf. The systems and methods can utilize course data, environmental data, user data, and/or equipment data in conjunction with one or more machine learning algorithms to autonomously generate the personalized recommendations.
    Type: Grant
    Filed: June 10, 2020
    Date of Patent: January 11, 2022
    Assignee: Arccos Golf LLC
    Inventors: Salman Hussain Syed, Colin David Phillips
  • Patent number: 11216969
    Abstract: A system for managing a position of a target stores identification information for identifying a target to be managed in association with position information indicating a position of the target. The system further obtains an image from an image capture device attached to a mobile device and obtains the image captured by the image capture device at an image capture position and image capture position information indicating the image capture position. The system further locates the position of the target included in the image using the image capture position information. The system further stores the position of the target in association with the identification information of the target.
    Type: Grant
    Filed: July 23, 2019
    Date of Patent: January 4, 2022
    Assignee: OBAYASHI CORPORATION
    Inventors: Takuma Nakabayashi, Tomoya Kaneko, Masashi Suzuki
  • Patent number: 11216669
    Abstract: The invention belongs to motion detection and three-dimensional image analysis technology fields. It can be applied to movement detection or intrusion detection in the surveillance of monitored volumes or monitored spaces. It can also be applied to obstacle detection or obstacle avoidance for self-driving, semi-autonomous vehicles, safety systems and ADAS. A three-dimensional imaging system stores 3D surface points and free space locations calculated from line of sight data. The 3D surface points typically represent reflective surfaces detected by a sensor such as a LiDAR, a radar, a depth sensor or stereoscopic cameras. By using free space information, the system can unambiguously derive a movement or an intrusion the first time a surface is detected at a particular coordinate. Motion detection can be performed using a single frame or a single 3D point that was previously a free space location.
    Type: Grant
    Filed: January 16, 2020
    Date of Patent: January 4, 2022
    Assignee: Outsight SA
    Inventor: Raul Bravo Orellana
  • Patent number: 11210994
    Abstract: The present application discloses a driving method of a display panel, a display apparatus and a virtual reality device. The display panel includes a middle display region and a peripheral display region at the periphery of the middle display region. The display panel is driven such that a display resolution of the middle display region is greater than a display resolution of the peripheral display region.
    Type: Grant
    Filed: December 15, 2017
    Date of Patent: December 28, 2021
    Assignees: BOE TECHNOLOGY GROUP CO., LTD., BEIJING BOE DISPLAY TECHNOLOGY CO., LTD.
    Inventors: Yanfeng Wang, Xiaoling Xu, Yuanxin Du, Yun Qiu, Xiao Sun
  • Patent number: 11210775
    Abstract: A sequence of frames of a video can be received. For a given frame in the sequence of frames, a gradient-embedded frame is generated corresponding to the given frame. The gradient-embedded frame incorporates motion information. The motion information can be represented as disturbance in the gradient-embedded frame. A plurality of such gradient-embedded frames can be generated corresponding to a plurality of the sequence of frames. Based on the plurality of gradient-embedded frames, a neural network such as a generative adversarial network is trained to learn to suppress the disturbance in the gradient-embedded frame and to generate a substitute frame. In inference stage, anomaly in a target video frame can be detected by comparing it to a corresponding substitute frame generated by the neural network.
    Type: Grant
    Filed: September 18, 2020
    Date of Patent: December 28, 2021
    Assignee: International Business Machines Corporation
    Inventors: Bo Wu, Chuang Gan, Dakuo Wang, Rameswar Panda
  • Patent number: 11212444
    Abstract: Provided is an image processing apparatus including at least one processor configured to implement a feature point extractor that detects a plurality of feature points in a first image input from an image sensor; a first motion vector extractor that extracts local motion vectors of the plurality of feature points and selects effective local motion vectors from among the local motion vectors by applying different algorithms according to zoom magnifications of the image sensor; a second motion vector extractor that extracts a global motion vector by using the effective local motion vectors; and an image stabilizer configured to correct shaking of the first image based on the global motion vector.
    Type: Grant
    Filed: January 27, 2021
    Date of Patent: December 28, 2021
    Assignee: HANWHA TECHWIN CO., LTD.
    Inventors: Gab Cheon Jung, Eun Cheol Choi
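    Illustrative sketch: taking the median of local feature-point motion vectors as a robust global motion vector and rolling the frame back by it; the median estimator and the integer shift stand in for the zoom-dependent selection algorithms and the stabilisation step described above.
      import numpy as np

      def stabilize(frame, local_motion_vectors):
          """Correct frame shake: take the median of the local feature-point motion
          vectors as the global motion vector and shift the frame to cancel it.
          frame: (H, W) image; local_motion_vectors: (N, 2) per-feature (dy, dx)."""
          global_mv = np.median(np.asarray(local_motion_vectors, float), axis=0)
          dy, dx = np.round(global_mv).astype(int)
          return np.roll(frame, shift=(-dy, -dx), axis=(0, 1)), (dy, dx)

      rng = np.random.default_rng(2)
      frame = rng.integers(0, 255, size=(120, 160), dtype=np.uint8)
      shaken = np.roll(frame, shift=(3, -2), axis=(0, 1))
      # Most feature points report the camera shake; a few are outliers on moving objects.
      vectors = [(3, -2)] * 20 + [(15, 9), (-12, 4)]
      restored, global_mv = stabilize(shaken, vectors)
      print(global_mv, np.array_equal(restored, frame))   # (3, -2) True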
  • Patent number: 11200523
    Abstract: A method includes receiving image information with one or more processor(s) from a sensor disposed at a worksite and determining an identity of a work tool disposed at the worksite based at least partly on the image information. The method further includes receiving location information with the one or more processor(s), the location information indicating a first location of the sensor at the worksite. Additionally, the method includes determining a second location of the work tool at the worksite based at least partly on the location information. In some instances, the method includes generating a worksite map with the one or more processor(s), the worksite map identifying the work tool and indicating the second location of the work tool at the worksite, and at least one of providing the worksite map to an additional processor and causing the worksite map to be rendered via a display.
    Type: Grant
    Filed: July 1, 2019
    Date of Patent: December 14, 2021
    Assignee: Caterpillar Inc.
    Inventors: Peter Joseph Petrany, Jeremy Lee Vogel
  • Patent number: 11200684
    Abstract: Disclosed is a river flow velocity measurement device using optical flow image processing, including: an image photographing unit configured to acquire consecutive images of a flow velocity measurement site of a river; an image conversion analysis unit configured to dynamically extract frames of the consecutive images in order to normalize image data of the image photographing unit, image-convert the extracted frames, and perform homography calculation; an analysis region extracting unit configured to extract an analysis region of an analysis point; a pixel flow velocity calculating unit configured to calculate a pixel flow velocity using an image in the analysis region of the analysis point extracted by the analysis region extracting unit; and an actual flow velocity calculating unit configured to convert the pixel flow velocity calculated by the pixel flow velocity calculating unit into an actual flow velocity.
    Type: Grant
    Filed: November 29, 2018
    Date of Patent: December 14, 2021
    Assignees: HYDROSEM, REPUBLIC OF KOREA (NATIONAL DISASTER MANAGEMENT RESEARCH INSTITUTE)
    Inventors: Seo Jun Kim, Byung Man Yoon, Ho Jun You, Dong Su Kim, Tae Sung Cheong, Jae Seung Joo, Hyeon Seok Choi
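    Illustrative sketch: the unit conversion from a pixel flow velocity to an actual flow velocity, assuming a known ground sampling distance at the analysis point (obtained from the homography in the described system) and a fixed frame interval; the numbers are made up.
      def actual_flow_velocity(pixel_displacement, metres_per_pixel, frame_interval_s):
          """Convert a pixel flow velocity into an actual flow velocity.
          pixel_displacement: displacement of a tracked surface pattern between two
          consecutive frames, in pixels; metres_per_pixel: ground sampling distance
          at the analysis point; frame_interval_s: time between the frames in seconds."""
          return pixel_displacement * metres_per_pixel / frame_interval_s

      # A pattern moving 12 px between frames captured 1/30 s apart, with a 5 mm/px scale:
      print(actual_flow_velocity(12, 0.005, 1 / 30))   # -> 1.8 m/s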
  • Patent number: 11200738
    Abstract: A system may receive imaging data generated by an imaging device directed at a heart. The system may receive a first input operation indicative of a selected time-frame. The system may display images of the heart based on the intensity values mapped to the selected time-frame. The system may receive, based on interaction with the images, an apex coordinate and a base coordinate. The system may calculate, based on the apex coordinate and the base coordinate, a truncated ellipsoid representative of an endocardial or epicardial boundary of the heart. The system may generate a four-dimensional mesh comprising three-dimensional vertices spaced along the mesh. The system may overlay, on the displayed images, markers representative of the vertices. The system may receive a second input operation corresponding to a selected marker. The system may enhance the mesh by adjusting or interpolating vertices across multiple time-frames.
    Type: Grant
    Filed: June 16, 2020
    Date of Patent: December 14, 2021
    Assignee: Purdue Research Foundation
    Inventors: Craig J Goergen, Frederick William Damen
  • Patent number: 11188780
    Abstract: Briefly, embodiments disclosed herein relate to image cropping, such as for digital images.
    Type: Grant
    Filed: May 14, 2018
    Date of Patent: November 30, 2021
    Assignee: VERIZON MEDIA INC.
    Inventors: Daozheng Chen, Mihyoung Sally Lee, Brian Webb, Ralph Rabbat, Ali Khodaei, Paul Krakow, Dave Todd, Samantha Giordano, Max Chern