Motion Or Velocity Measuring Patents (Class 382/107)
  • Patent number: 11373367
    Abstract: A method for characterization of respiratory characteristics based on a voxel model includes: successively capturing multiple frames of depth images of the thoracoabdominal surface of a human body and modelling the frames in 3D to obtain multiple frames of voxel models in time series; traversing the voxel units of the voxel models and extracting a volumetric characteristic and an areal characteristic of each; acquiring a minimum common voxel bounding box of the voxel models; describing the spatial distribution of the voxel models in the form of probabilities and arranging the probabilities of the minimum voxel bounding boxes of the individual frames to construct a sample space of super-high-dimensional vectors; reducing the dimensions of the sample space to obtain intrinsic parameters; and obtaining a characteristic variable capable of characterizing the voxel model.
    Type: Grant
    Filed: November 20, 2019
    Date of Patent: June 28, 2022
    Assignee: SOOCHOW UNIVERSITY
    Inventors: Shumei Yu, Pengcheng Hou, Rongchuan Sun, Shaolong Kuang, Lining Sun
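The steps above (per-frame voxel features, then dimensionality reduction of the stacked probability space) can be sketched as follows. This is a minimal reading, assuming binary NumPy voxel grids, with SVD-based PCA standing in for the unspecified dimensionality reduction; all names are illustrative, not the patent's:

```python
import numpy as np

def voxel_features(voxel_grid):
    """Volume- and area-style features of a binary voxel grid.
    (Hypothetical feature definitions; the abstract does not give exact formulas.)"""
    volume = int(voxel_grid.sum())                     # occupied voxel count
    padded = np.pad(voxel_grid, 1)                     # zero border: edge voxels count as exposed
    core = padded[1:-1, 1:-1, 1:-1]
    interior = (padded[:-2, 1:-1, 1:-1] & padded[2:, 1:-1, 1:-1] &
                padded[1:-1, :-2, 1:-1] & padded[1:-1, 2:, 1:-1] &
                padded[1:-1, 1:-1, :-2] & padded[1:-1, 1:-1, 2:])
    area = int((core & ~interior).sum())               # voxels with at least one empty 6-neighbour
    return volume, area

def reduce_dimensions(sample_space, k=2):
    """PCA via SVD over the stacked per-frame probability vectors."""
    X = sample_space - sample_space.mean(axis=0)       # centre the sample space
    _, _, vt = np.linalg.svd(X, full_matrices=False)
    return X @ vt[:k].T                                # project onto k principal axes
```

For a fully occupied 3x3x3 grid, `voxel_features` reports a volume of 27 and an area of 26 (every voxel except the centre touches empty space).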
  • Patent number: 11368632
    Abstract: A method for processing a video includes: identifying a target object in a first video segment; acquiring a current video frame of a second video segment; acquiring a first image region corresponding to the target object in a first target video frame of the first video segment, and acquiring a second image region corresponding to the target object in the current video frame of the second video segment, wherein the first target video frame corresponds to the current video frame of the second video segment in terms of video frame time; and performing picture splicing on the first target video frame and the current video frame of the second video segment according to the first image region and the second image region to obtain a processed first video frame.
    Type: Grant
    Filed: September 18, 2020
    Date of Patent: June 21, 2022
    Assignee: Beijing Xiaomi Pinecone Electronics Co., Ltd.
    Inventor: Binglin Chang
  • Patent number: 11353953
    Abstract: A method of modifying an image on a computational device is disclosed.
    Type: Grant
    Filed: August 20, 2018
    Date of Patent: June 7, 2022
    Assignee: FOVO TECHNOLOGY LIMITED
    Inventors: Robert Pepperell, Alistair Burleigh
  • Patent number: 11353476
    Abstract: Embodiments of the present disclosure provide a method and apparatus for determining a velocity of an obstacle, a device, and a medium. An implementation includes: acquiring first point cloud data of the obstacle at a first time and second point cloud data of the obstacle at a second time; registering the first point cloud data and the second point cloud data by moving one of them; and determining a moving velocity of the obstacle based on a distance between the two data points in a registered data point pair.
    Type: Grant
    Filed: September 24, 2020
    Date of Patent: June 7, 2022
    Assignee: Apollo Intelligent Driving Technology (Beijing) Co., Ltd.
    Inventors: Hao Wang, Liang Wang, Yu Ma
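The register-then-measure recipe can be sketched minimally. Centroid alignment below is a deliberate simplification of the patent's point-pair registration, and the function name is illustrative:

```python
import numpy as np

def obstacle_velocity(cloud_t1, cloud_t2, dt):
    """Estimate obstacle velocity from two point clouds captured dt seconds apart.
    Centroid shift stands in for full point-cloud registration (an assumption)."""
    shift = np.asarray(cloud_t2).mean(axis=0) - np.asarray(cloud_t1).mean(axis=0)
    return shift / dt                                  # per-axis speed, e.g. m/s
```

A cloud that translates 1 m along x over 0.5 s yields a velocity of 2 m/s along x.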
  • Patent number: 11343526
    Abstract: A video processing method includes dividing a current frame into a plurality of blocks, generating a motion vector of each block of the plurality of blocks of the current frame according to the each block of the current frame and a corresponding block of a previous frame, generating a global motion vector according to a plurality of motion vectors of the current frame, generating a sum of absolute differences of pixels of each block of the current frame according to the global motion vector, generating a region with a set of blocks of the current frame, matching a distribution of the sum of absolute differences of pixels of the region with a plurality of models, identifying a best matching model, and labeling each block in the region in the current frame with a label of either a foreground block or a background block according to the best matching model.
    Type: Grant
    Filed: December 29, 2020
    Date of Patent: May 24, 2022
    Assignee: Realtek Semiconductor Corp.
    Inventors: Yanting Wang, Guangyu San
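Two of the steps above, per-block motion estimation and the global motion vector, might look like the sketch below. Exhaustive SAD search and the median aggregate are assumptions, not the patent's exact algorithm:

```python
import numpy as np

def block_motion_vector(cur_block, prev_frame, y, x, search=2):
    """Exhaustive SAD search around (y, x) for one block of the current frame."""
    h, w = cur_block.shape
    best_sad, best_mv = None, (0, 0)
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            yy, xx = y + dy, x + dx
            if yy < 0 or xx < 0 or yy + h > prev_frame.shape[0] or xx + w > prev_frame.shape[1]:
                continue                               # candidate falls outside the previous frame
            sad = np.abs(cur_block - prev_frame[yy:yy + h, xx:xx + w]).sum()
            if best_sad is None or sad < best_sad:
                best_sad, best_mv = sad, (dy, dx)
    return best_mv

def global_motion_vector(motion_vectors):
    """Per-component median of the block vectors, a robust global estimate."""
    return tuple(np.median(np.asarray(motion_vectors), axis=0).astype(int))
```

On a synthetic gradient frame, a block copied from one row down is recovered as the vector (1, 0).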
  • Patent number: 11337652
    Abstract: A method for determining spatial information for a multi-segment articulated rigid body system having at least an anchored segment and a non-anchored segment coupled to the anchored segment, each segment in the multi-segment articulated rigid body system representing a respective body part of a user, the method comprising: obtaining signals recorded by a first autonomous movement sensor coupled to a body part of the user represented by the non-anchored segment; providing the obtained signals as input to a trained statistical model and obtaining corresponding output of the trained statistical model; and determining, based on the corresponding output of the trained statistical model, spatial information for at least the non-anchored segment of the multi-segment articulated rigid body system.
    Type: Grant
    Filed: July 25, 2017
    Date of Patent: May 24, 2022
    Assignee: Facebook Technologies, LLC
    Inventors: Patrick Kaifosh, Timothy Machado, Thomas Reardon, Erik Schomburg, Calvin Tong
  • Patent number: 11333794
    Abstract: Embodiments of the present invention disclose a method, a computer program product, and a computer system for generating a wind map based on multimedia analysis. A computer receives a multimedia source configuration and builds a wind scale reference database. In addition, the computer extracts and processes both wind speed data and contextual data from the multimedia. Moreover, the computer analyses temporal and spatial features and generates a wind map based on the extracted context, extracted wind speed, and analysed temporal and spatial features. Lastly, the wind map generator validates and modifies the wind scale reference database.
    Type: Grant
    Filed: October 23, 2018
    Date of Patent: May 17, 2022
    Assignee: International Business Machines Corporation
    Inventors: Ivan M. Milman, Sushain Pandit, Charles D. Wolfson, Su Liu, Fang Wang
  • Patent number: 11328394
    Abstract: Provided is a deep learning based contrast-enhanced (CE) CT image contrast amplifying method, which includes: extracting at least one component CT image, between a CE component and a non-CE component, for an input CE CT image by using the input CE CT image as an input to a previously trained deep learning model; and outputting a contrast-amplified CT image with respect to the CE CT image based on the input CE CT image and the at least one extracted component CT image.
    Type: Grant
    Filed: July 20, 2021
    Date of Patent: May 10, 2022
    Assignees: CLARIPI INC., SEOUL NATIONAL UNIVERSITY R&DB FOUNDATION
    Inventors: Jong Hyo Kim, Hyun Sook Park, Tai Chul Park, Chul Kyun Ahn
  • Patent number: 11320529
    Abstract: A tracking device is provided, which may include a correction target area setting module configured to set an area in which an unnecessary echo tends to be generated based on a structure or behavior of a ship, as a correction target area, a correction target echo extracting module configured to extract a target object echo within the correction target area from a plurality of detected target object echoes, as a correction target echo, a scoring module configured to score a matching level between previous echo information on a target object echo and detected echo information on each of the target object echoes, based on the previous echo information, the detected echo information and the extraction result, and a determining module configured to determine a target object echo as a current tracking target by using the scored result.
    Type: Grant
    Filed: August 22, 2019
    Date of Patent: May 3, 2022
    Assignee: Furuno Electric Co., Ltd.
    Inventors: Daisuke Fujioka, Katsuyuki Yanagi, Suminori Ekuni, Yugo Kubota
  • Patent number: 11315287
    Abstract: Various implementations disclosed herein include devices, systems, and methods for generating body pose information for a person in a physical environment. In various implementations, a device includes an environmental sensor, a non-transitory memory, and one or more processors coupled with the environmental sensor and the non-transitory memory. In some implementations, a method includes obtaining, via the environmental sensor, spatial data corresponding to a physical environment. The physical environment includes a person and a fixed spatial point. The method includes identifying a portion of the spatial data that corresponds to a body portion of the person. In some implementations, the method includes determining a position of the body portion relative to the fixed spatial point based on the portion of the spatial data. In some implementations, the method includes generating pose information for the person based on the position of the body portion in relation to the fixed spatial point.
    Type: Grant
    Filed: May 26, 2020
    Date of Patent: April 26, 2022
    Assignee: APPLE INC.
    Inventors: Stefan Auer, Sebastian Bernhard Knorr
  • Patent number: 11308678
    Abstract: Systems and methods for generating cartoon images or emojis of an individual from a photograph of the individual are described. The systems and methods involve transmitting a picture of the individual, such as one taken with a mobile device, to a server that generates a set of emojis showing different emotions of the individual from the picture. The emojis are then transmitted to the mobile device and are available for use by the user in messaging applications, emails, or other electronic communications. The emojis can be added to the default keyboard of the mobile device or be generated in a separate emoji keyboard and be available for selection by the user.
    Type: Grant
    Filed: July 9, 2020
    Date of Patent: April 19, 2022
    Assignee: UMOJIFY, INC.
    Inventor: Afshin Pishevar
  • Patent number: 11304173
    Abstract: Provided is a node location tracking method, including an initial localization step of estimating initial locations of a robot and neighboring nodes using inter-node measurement and a Sum of Gaussian (SoG) filter, wherein the initial localization step includes an iterative multilateration step of initializing the locations of the nodes; and a SoG filter generation step of generating the SoG filter.
    Type: Grant
    Filed: August 28, 2020
    Date of Patent: April 12, 2022
    Assignee: Korea Institute of Science and Technology
    Inventors: Doik Kim, Jung Hee Kim
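A common way to initialize node locations from inter-node ranges, offered as a hedged sketch of the "iterative multilateration" step above, is linear least squares after subtracting one range equation; the function name is illustrative:

```python
import numpy as np

def multilaterate(anchors, distances):
    """Least-squares position from known anchor positions and measured ranges.
    Subtracting the first equation linearises ||x - a_i||^2 = d_i^2."""
    anchors = np.asarray(anchors, dtype=float)
    d = np.asarray(distances, dtype=float)
    a0, d0 = anchors[0], d[0]
    A = 2 * (anchors[1:] - a0)
    b = (d0 ** 2 - d[1:] ** 2
         + (anchors[1:] ** 2).sum(axis=1) - (a0 ** 2).sum())
    x, *_ = np.linalg.lstsq(A, b, rcond=None)          # solve 2(a_i - a0) . x = b_i
    return x
```

With three non-collinear anchors in 2D and exact ranges, the solver recovers the true position.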
  • Patent number: 11292560
    Abstract: A supervisory propulsion controller module, a speed and position sensing system, and a communication system are incorporated on marine vessels to reduce the wave-making resistance of the multiple vessels by operating them in controlled and coordinated spatial patterns to destructively cancel their Kelvin wake transverse or divergent wave systems through active control of the vessels' separation distance with speed. This will improve the vessels' mobility (speed, payload, and range), improve survivability and reliability, and reduce acquisition and total ownership cost.
    Type: Grant
    Filed: August 9, 2020
    Date of Patent: April 5, 2022
    Inventors: Terrence W. Schmidt, Jeffrey E. Kline
  • Patent number: 11288824
    Abstract: A system for processing images captured by a movable object includes one or more processors individually or collectively configured to process a first image set captured by a first imaging component to obtain texture information in response to a second image set captured by a second imaging component having a quality below a predetermined threshold, and obtain environmental information for the movable object based on the texture information. The first imaging component has a first field of view and the second imaging component has a second field of view narrower than the first field of view.
    Type: Grant
    Filed: September 15, 2020
    Date of Patent: March 29, 2022
    Assignee: SZ DJI TECHNOLOGY CO., LTD.
    Inventors: Mingyu Wang, Zhenyu Zhu
  • Patent number: 11277556
    Abstract: Based on information on a tracking target, a tracking target detecting unit is configured to detect the tracking target from an image captured by an automatic tracking camera. An influencing factor detecting unit is configured to detect an influencing factor that influences the amount of movement of the tracking target and set an influence degree, based on information on the influencing factor. Based on the influence degree set by the influencing factor detecting unit and a past movement amount of the tracking target, an adjustment amount calculating unit is configured to calculate an imaging direction adjustment amount for the automatic tracking camera.
    Type: Grant
    Filed: April 1, 2020
    Date of Patent: March 15, 2022
    Assignee: JVCKENWOOD Corporation
    Inventor: Takakazu Katou
  • Patent number: 11265526
    Abstract: A method for automatic registration of 3D image data, captured by a 3D image capture system having an RGB camera and a depth camera, includes capturing 2D image data with the RGB camera at a first pose; capturing depth data with the depth camera at the first pose; performing an initial registration of the RGB camera to the depth camera; capturing 2D image data with the RGB camera at a second pose; capturing depth data at the second pose; and calculating an updated registration of the RGB camera to the depth camera.
    Type: Grant
    Filed: July 27, 2020
    Date of Patent: March 1, 2022
    Assignee: Occipital, Inc.
    Inventors: Patrick O'Keefe, Jeffrey Roger Powers, Nicolas Burrus
  • Patent number: 11257576
    Abstract: A method, computer program product, and computing system for tracking encounter participants is executed on a computing device and includes obtaining encounter information of a patient encounter, wherein the encounter information includes machine vision encounter information obtained via one or more machine vision systems. The machine vision encounter information is processed to identify one or more humanoid shapes.
    Type: Grant
    Filed: January 27, 2020
    Date of Patent: February 22, 2022
    Assignee: NUANCE COMMUNICATIONS, INC.
    Inventors: Donald E. Owen, Daniel Paulino Almendro Barreda, Dushyant Sharma
  • Patent number: 11256085
    Abstract: Light deflection prism for altering a field of view of a camera of a device comprising a surface with a camera aperture region defining an actual light entrance angular cone projecting from the surface. A method comprises disposing a first surface of a light deflection prism on the surface so as to overlap the angular cone; internally reflecting a central ray, entering the prism under a normal incidence angle through a second surface, at a third surface of the prism towards the first surface under a normal angle of incidence such that the central ray enters the prism at an angle of less than 90° relative to a normal of the device surface and such that a ray at one boundary of an effective light entrance angular cone defined as a result of the light reflection at the third surface is substantially parallel to the device surface.
    Type: Grant
    Filed: August 1, 2018
    Date of Patent: February 22, 2022
    Assignee: NATIONAL UNIVERSITY OF SINGAPORE
    Inventor: Mark Brian Howell Breese
  • Patent number: 11250247
    Abstract: There is provided an information processing device including a control unit to generate play event information based on a determination whether detected behavior of a user is a predetermined play event.
    Type: Grant
    Filed: September 12, 2019
    Date of Patent: February 15, 2022
    Inventors: Hideyuki Matsunaga, Kosei Yamashita
  • Patent number: 11238597
    Abstract: [Problem] To provide a detecting device for detecting a suspicious or abnormal subject appearing in time-series images. [Solution] An accumulating device 2 includes a first detecting unit 23 for detecting movement of a plurality of articulations included in an action subject Z appearing in a plurality of first time-series images Y1 obtained by photographing a predetermined point; and a determining unit 24 for determining one or more normal actions at the predetermined point based on a large number of movements of the plurality of articulations detected by the first detecting unit 23.
    Type: Grant
    Filed: November 21, 2019
    Date of Patent: February 1, 2022
    Assignee: ASILLA, INC.
    Inventor: Daisuke Kimura
  • Patent number: 11232583
    Abstract: A method of determining a pose of a camera is described. The method comprises analyzing changes in an image detected by the camera using a plurality of sensors of the camera; determining if a pose of the camera is incorrect; determining which sensors of the plurality of sensors are providing the most reliable image data; and analyzing data from the sensors providing the most reliable image data.
    Type: Grant
    Filed: March 25, 2016
    Date of Patent: January 25, 2022
    Assignee: Samsung Electronics Co., Ltd.
    Inventor: Rajesh Narasimha
  • Patent number: 11227169
    Abstract: A system includes a sensor, which is configured to detect a plurality of objects within an area, and a computing device in communication with the sensor. The computing device is configured to determine that one of the plurality of objects is static, determine that one of the plurality of objects is temporary, determine a geometric relationship between the temporary object and the static object, and determine whether one of the plurality of objects is a ghost object based on the geometric relationship.
    Type: Grant
    Filed: February 21, 2020
    Date of Patent: January 18, 2022
    Assignee: Continental Automotive Systems, Inc.
    Inventor: Andrew Phillip Bolduc
  • Patent number: 11227397
    Abstract: The invention relates to computation of optical flow using event-based vision sensors.
    Type: Grant
    Filed: May 29, 2018
    Date of Patent: January 18, 2022
    Assignee: UNIVERSITÄT ZÜRICH
    Inventors: Tobias Delbruck, Min Liu
  • Patent number: 11229107
    Abstract: A method includes the steps of obtaining a frame from an image sensor, the frame comprising a number of pixel values, detecting a change in a first subset of the pixel values, detecting a change in a second subset of the pixel values near the first subset of the pixel values, and determining an occupancy state based on a relationship between the change in the first subset of the pixel values and the change in the second subset of the pixel values. The occupancy state may be determined to be occupied when the change in the first subset of the pixel values is in a first direction and the change in the second subset of the pixel values is in a second direction opposite the first direction.
    Type: Grant
    Filed: February 2, 2018
    Date of Patent: January 18, 2022
    Assignee: IDEAL Industries Lighting LLC
    Inventors: Sten Heikman, Yuvaraj Dora, Ronald W. Bessems, John Roberts, Robert D. Underwood
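The opposite-direction rule for the occupancy decision could be read, very minimally, as follows; the mean-change statistic, the threshold value, and the region-as-index-list API are all assumptions:

```python
def occupancy_state(prev, cur, region_a, region_b, threshold=10):
    """Occupied when region A's mean brightness moves one way while nearby
    region B's moves the other way. Regions are lists of (row, col) indices.
    (Hypothetical statistic and threshold; the patent does not fix them.)"""
    def mean_change(region):
        return sum(cur[r][c] - prev[r][c] for r, c in region) / len(region)
    da, db = mean_change(region_a), mean_change(region_b)
    # both changes significant, and in opposite directions
    return abs(da) > threshold and abs(db) > threshold and da * db < 0
```

A pixel that brightens next to one that darkens flags occupancy; no change flags vacancy.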
  • Patent number: 11219814
    Abstract: Exemplary embodiments of the present disclosure are directed to systems, methods, and computer-readable media configured to autonomously generate personalized recommendations for a user before, during, or after a round of golf. The systems and methods can utilize course data, environmental data, user data, and/or equipment data in conjunction with one or more machine learning algorithms to autonomously generate the personalized recommendations.
    Type: Grant
    Filed: June 10, 2020
    Date of Patent: January 11, 2022
    Assignee: Arccos Golf LLC
    Inventors: Salman Hussain Syed, Colin David Phillips
  • Patent number: 11216669
    Abstract: The invention belongs to motion detection and three-dimensional image analysis technology fields. It can be applied to movement detection or intrusion detection in the surveillance of monitored volumes or monitored spaces. It can also be applied to obstacle detection or obstacle avoidance for self-driving, semi-autonomous vehicles, safety systems and ADAS. A three-dimensional imaging system stores 3D surface points and free space locations calculated from line of sight data. The 3D surface points typically represent reflective surfaces detected by a sensor such as a LiDAR, a radar, a depth sensor or stereoscopic cameras. By using free space information, the system can unambiguously derive a movement or an intrusion the first time a surface is detected at a particular coordinate. Motion detection can be performed using a single frame or a single 3D point that was previously a free space location.
    Type: Grant
    Filed: January 16, 2020
    Date of Patent: January 4, 2022
    Assignee: Outsight SA
    Inventor: Raul Bravo Orellana
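The free-space idea above, that a surface appearing where free space was previously observed implies motion from a single detection, can be sketched with a voxel-key set; the class and method names are illustrative:

```python
class FreeSpaceMap:
    """Free-space bookkeeping for single-observation motion detection.
    Voxel keys are integer tuples (a sketch, not the patent's data structure)."""
    def __init__(self):
        self.free = set()

    def mark_free(self, voxel):
        """Record a voxel traversed by a sensor line of sight with no return."""
        self.free.add(tuple(voxel))

    def observe_surface(self, voxel):
        """A surface detected where free space was known implies motion,
        even the first time a surface is seen at that coordinate."""
        v = tuple(voxel)
        moved = v in self.free
        self.free.discard(v)        # the voxel is now occupied
        return moved
```

A surface at a never-observed coordinate is ambiguous and reports no motion; one at a known-free coordinate reports motion immediately.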
  • Patent number: 11216969
    Abstract: A system for managing a position of a target stores identification information for identifying a target to be managed in association with position information indicating a position of the target. The system further obtains an image from an image capture device attached to a mobile device and obtains the image captured by the image capture device at an image capture position and image capture position information indicating the image capture position. The system further locates the position of the target included in the image using the image capture position information. The system further stores the position of the target in association with the identification information of the target.
    Type: Grant
    Filed: July 23, 2019
    Date of Patent: January 4, 2022
    Assignee: OBAYASHI CORPORATION
    Inventors: Takuma Nakabayashi, Tomoya Kaneko, Masashi Suzuki
  • Patent number: 11210775
    Abstract: A sequence of frames of a video can be received. For a given frame in the sequence of frames, a gradient-embedded frame is generated corresponding to the given frame. The gradient-embedded frame incorporates motion information. The motion information can be represented as disturbance in the gradient-embedded frame. A plurality of such gradient-embedded frames can be generated corresponding to a plurality of the sequence of frames. Based on the plurality of gradient-embedded frames, a neural network such as a generative adversarial network is trained to learn to suppress the disturbance in the gradient-embedded frame and to generate a substitute frame. In inference stage, anomaly in a target video frame can be detected by comparing it to a corresponding substitute frame generated by the neural network.
    Type: Grant
    Filed: September 18, 2020
    Date of Patent: December 28, 2021
    Assignee: International Business Machines Corporation
    Inventors: Bo Wu, Chuang Gan, Dakuo Wang, Rameswar Panda
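One plausible reading of a "gradient-embedded frame" is the current frame with the temporal gradient blended in as a disturbance; the additive alpha-blend below is an assumption, since the abstract does not fix the embedding:

```python
import numpy as np

def gradient_embedded_frame(prev, cur, alpha=0.5):
    """Embed the temporal gradient (motion information) into the current frame.
    Additive blending with weight alpha is a hypothetical choice."""
    grad = np.abs(cur.astype(float) - prev.astype(float))   # per-pixel motion magnitude
    return np.clip(cur.astype(float) + alpha * grad, 0, 255)
```

Static pixels pass through unchanged, while changed pixels carry a visible disturbance for the network to suppress.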
  • Patent number: 11210994
    Abstract: The present application discloses a driving method of a display panel, a display apparatus and a virtual reality device. The display panel includes a middle display region and a peripheral display region at the periphery of the middle display region. The display panel is driven such that a display resolution of the middle display region is greater than a display resolution of the peripheral display region.
    Type: Grant
    Filed: December 15, 2017
    Date of Patent: December 28, 2021
    Assignees: BOE TECHNOLOGY GROUP CO., LTD., BEIJING BOE DISPLAY TECHNOLOGY CO., LTD.
    Inventors: Yanfeng Wang, Xiaoling Xu, Yuanxin Du, Yun Qiu, Xiao Sun
  • Patent number: 11212444
    Abstract: Provided is an image processing apparatus including at least one processor configured to implement a feature point extractor that detects a plurality of feature points in a first image input from an image sensor; a first motion vector extractor that extracts local motion vectors of the plurality of feature points and selects effective local motion vectors from among the local motion vectors by applying different algorithms according to zoom magnifications of the image sensor; a second motion vector extractor that extracts a global motion vector by using the effective local motion vectors; and an image stabilizer configured to correct shaking of the first image based on the global motion vector.
    Type: Grant
    Filed: January 27, 2021
    Date of Patent: December 28, 2021
    Assignee: HANWHA TECHWIN CO., LTD.
    Inventors: Gab Cheon Jung, Eun Cheol Choi
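A sketch of selecting effective local vectors and then extracting the global vector: the median-distance test and the radius shrinking with zoom are assumptions standing in for the patent's per-zoom algorithms, and the names are illustrative:

```python
import numpy as np

def effective_vectors(local_mvs, zoom):
    """Keep local vectors close to the median; a tighter acceptance radius at
    higher zoom (4/zoom pixels) is a hypothetical per-zoom rule."""
    mvs = np.asarray(local_mvs, dtype=float)
    med = np.median(mvs, axis=0)
    keep = np.linalg.norm(mvs - med, axis=1) <= 4.0 / zoom
    return mvs[keep]

def global_motion(local_mvs, zoom):
    """Global motion vector as the mean of the effective local vectors."""
    return effective_vectors(local_mvs, zoom).mean(axis=0)
```

An outlier vector from a moving foreground object is rejected, leaving the camera-shake consensus.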
  • Patent number: 11200684
    Abstract: Disclosed is a river flow velocity measurement device using optical flow image processing, including: an image photographing unit configured to acquire consecutive images of a flow velocity measurement site of a river; an image conversion analysis unit configured to dynamically extract frames of the consecutive images in order to normalize image data of the image photographing unit, image-convert the extracted frames, and perform homography calculation; an analysis region extracting unit configured to extract an analysis region of an analysis point; a pixel flow velocity calculating unit configured to calculate a pixel flow velocity using an image in the analysis region of the analysis point extracted by the analysis region extracting unit; and an actual flow velocity calculating unit configured to convert the pixel flow velocity calculated by the pixel flow velocity calculating unit into an actual flow velocity.
    Type: Grant
    Filed: November 29, 2018
    Date of Patent: December 14, 2021
    Assignees: HYDROSEM, REPUBLIC OF KOREA (NATIONAL DISASTER MANAGEMENT RESEARCH INSTITUTE)
    Inventors: Seo Jun Kim, Byung Man Yoon, Ho Jun You, Dong Su Kim, Tae Sung Cheong, Jae Seung Joo, Hyeon Seok Choi
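Converting the pixel flow velocity into an actual flow velocity via the homography computed earlier in the pipeline might look like this; H is assumed to map pixel coordinates to metres on the water surface, and the function name is illustrative:

```python
import numpy as np

def actual_flow_velocity(p0_px, p1_px, H, fps):
    """Real-world speed (m/s) from a pixel displacement between consecutive
    frames, projected through an assumed pixel-to-metre ground homography H."""
    def to_world(p):
        v = H @ np.array([p[0], p[1], 1.0])
        return v[:2] / v[2]                 # perspective divide
    metres = np.linalg.norm(to_world(p1_px) - to_world(p0_px))
    return metres * fps                     # displacement per frame times frame rate
```

With a scale-only homography of 1 cm per pixel, a 50-pixel displacement at 30 fps gives 15 m/s.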
  • Patent number: 11200738
    Abstract: A system may receive imaging data generated by an imaging device directed at a heart. The system may receive a first input operation indicative of a selected time-frame. The system may display images of the heart based on the intensity values mapped to the selected time-frame. The system may receive, based on interaction with the images, an apex coordinate and a base coordinate. The system may calculate, based on the apex coordinate and the base coordinate, a truncated ellipsoid representative of an endocardial or epicardial boundary of the heart. The system may generate a four-dimensional mesh comprising three-dimensional vertices spaced along the mesh. The system may overlay, on the displayed images, markers representative of the vertices. The system may receive a second input operation corresponding to a selected marker. The system may enhance the mesh by adjusting or interpolating vertices across multiple time-frames.
    Type: Grant
    Filed: June 16, 2020
    Date of Patent: December 14, 2021
    Assignee: Purdue Research Foundation
    Inventors: Craig J Goergen, Frederick William Damen
  • Patent number: 11200523
    Abstract: A method includes receiving image information with one or more processor(s) from a sensor disposed at a worksite and determining an identity of a work tool disposed at the worksite based at least partly on the image information. The method further includes receiving location information with the one or more processor(s), the location information indicating a first location of the sensor at the worksite. Additionally, the method includes determining a second location of the work tool at the worksite based at least partly on the location information. In some instances, the method includes generating a worksite map with the one or more processor(s), the worksite map identifying the work tool and indicating the second location of the work tool at the worksite, and at least one of providing the worksite map to an additional processor and causing the worksite map to be rendered via a display.
    Type: Grant
    Filed: July 1, 2019
    Date of Patent: December 14, 2021
    Assignee: Caterpillar Inc.
    Inventors: Peter Joseph Petrany, Jeremy Lee Vogel
  • Patent number: 11188780
    Abstract: Briefly, embodiments disclosed herein relate to image cropping, such as for digital images, for example.
    Type: Grant
    Filed: May 14, 2018
    Date of Patent: November 30, 2021
    Assignee: VERIZON MEDIA INC.
    Inventors: Daozheng Chen, Mihyoung Sally Lee, Brian Webb, Ralph Rabbat, Ali Khodaei, Paul Krakow, Dave Todd, Samantha Giordano, Max Chern
  • Patent number: 11175366
    Abstract: A method for acquiring magnetic resonance imaging data with respiratory motion compensation using one or more motion signals includes acquiring a plurality of gradient-delay-corrected radial readout views of a subject using a free-breathing multi-echo pulse sequence, and sampling a plurality of data points of the gradient-delay-corrected radial readout views to yield a self-gating signal. The self-gating signal is used to determine a plurality of respiratory motion states corresponding to the plurality of gradient-delay-corrected radial readout views. The respiratory motion states are used to correct respiratory motion bias in the gradient-delay-corrected radial readout views, thereby yielding gradient-delay-corrected and motion-compensated multi-echo data. One or more images are reconstructed using the gradient-delay-corrected and motion-compensated multi-echo data.
    Type: Grant
    Filed: February 5, 2020
    Date of Patent: November 16, 2021
    Assignees: Siemens Healthcare GmbH, The Regents of the University of California
    Inventors: Xiaodong Zhong, Holden H. Wu, Vibhas S. Deshpande, Tess Armstrong, Li Pan, Marcel Dominik Nickel, Stephan Kannengiesser
  • Patent number: 11150750
    Abstract: An electronic pen main body unit of an electronic pen having a function of a fountain pen includes an ink writing unit in which a cartridge housing liquid ink is fitted to a rear end portion of a pen core, and a pen body is disposed so as to be superposed on the pen core in a direction orthogonal to a coupling direction of the pen core and the cartridge, and an interaction circuit having an electronic part which, in operation, exchanges a signal with a tablet. The interaction circuit is disposed on a side of the pen core opposite the pen body in the direction orthogonal to the coupling direction of the pen core and the cartridge in a state in which the interaction circuit recedes to the cartridge side from a writing end of the pen body in the coupling direction of the pen core and the cartridge.
    Type: Grant
    Filed: July 24, 2020
    Date of Patent: October 19, 2021
    Assignee: Wacom Co., Ltd.
    Inventors: Kohei Tanaka, Kenichi Ninomiya, Takenori Kaneda, Toshihiko Horie
  • Patent number: 11141239
    Abstract: A reprocessing apparatus for cleaning and/or disinfecting a medical instrument including a fluid container for a reprocessing fluid and a reprocessing device. The reprocessing device includes: a reprocessing space in which the medical instrument is introduced for reprocessing; a fluid line for connection to at least one channel of the medical instrument, wherein the fluid line is configured to transport the reprocessing fluid to the at least one channel; a bubble introducing apparatus for introducing gas bubbles into the fluid line; and a gas bubble speed determining apparatus for determining a speed of the gas bubbles in the fluid line. The gas bubble speed determining apparatus includes a camera for capturing successive images of at least a portion of the gas bubbles in the fluid line.
    Type: Grant
    Filed: August 15, 2019
    Date of Patent: October 12, 2021
    Assignee: OLYMPUS WINTER & IBE GMBH
    Inventors: Niklas Erdmann, Sascha Eschborn, Antonia Weis
  • Patent number: 11146661
    Abstract: An endpoint system including one or more computing devices receives user input associated with an avatar in a shared virtual environment; calculates, based on the user input, motion for a portion of the first avatar, such as a hand; determines, based on the user input, a first gesture state for the first avatar; transmits first location change notifications and a representation of the first gesture state for the first avatar; receives second location change notifications and a representation of a second gesture state for a second avatar; detects a collision between the first avatar and the second avatar based on the first location change notifications and the second location change notifications; and identifies a collaborative gesture based on the detected collision, the first gesture state, and the second gesture state.
    Type: Grant
    Filed: June 28, 2017
    Date of Patent: October 12, 2021
    Assignee: Rec Room Inc.
    Inventors: Nicholas Fajt, Cameron Brown, Dan Kroymann, Omer Bilal Orhan, Johnathan Bevis, Joshua Wehrly
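The final step, identifying a collaborative gesture from a collision plus the two gesture states, can be sketched as follows. This is an illustrative toy, not the patented method: the sphere-collision test, the gesture names, and the `AvatarHand` type are all assumptions.

```python
import math
from dataclasses import dataclass

@dataclass
class AvatarHand:
    x: float
    y: float
    z: float
    gesture: str  # gesture state, e.g. "open_palm" or "fist"

def hands_collide(a, b, radius=0.12):
    """Treat each hand as a sphere of the given radius; a collision occurs
    when the centers are within two radii of each other."""
    return math.dist((a.x, a.y, a.z), (b.x, b.y, b.z)) <= 2 * radius

def collaborative_gesture(a, b):
    """Identify a collaborative gesture from a detected collision combined
    with both avatars' gesture states."""
    if not hands_collide(a, b):
        return None
    if a.gesture == "open_palm" and b.gesture == "open_palm":
        return "high_five"
    if a.gesture == "fist" and b.gesture == "fist":
        return "fist_bump"
    return None

print(collaborative_gesture(AvatarHand(0.0, 1.5, 0.0, "open_palm"),
                            AvatarHand(0.1, 1.5, 0.0, "open_palm")))  # high_five
```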
  • Patent number: 11140329
    Abstract: An image processing apparatus and an image processing method include obtaining status information of a terminal device, obtaining photographing scene information of the terminal device, determining an image processing mode based on the status information and the photographing scene information, obtaining a to-be-displayed image, and processing the to-be-displayed image based on the image processing mode.
    Type: Grant
    Filed: November 4, 2019
    Date of Patent: October 5, 2021
    Assignee: HUAWEI TECHNOLOGIES CO., LTD.
    Inventors: Jun Dai, Biying Hu, Yining Huang
  • Patent number: 11127181
    Abstract: An avatar facial expression generating system and a method of avatar facial expression generation are provided. In the method, multiple user data are obtained, each related to a sensing result of a user from one of multiple data sources. Multiple first emotion decisions are determined, one based on each user data. Whether an emotion collision occurs among the first emotion decisions is then determined; an emotion collision means that the emotion groups corresponding to the first emotion decisions do not match each other. A second emotion decision is determined from one or more emotion groups according to the result of the emotion-collision determination. Each first or second emotion decision is related to one emotion group. A facial expression of an avatar is generated based on the second emotion decision. Accordingly, a proper facial expression of the avatar can be presented.
    Type: Grant
    Filed: February 27, 2020
    Date of Patent: September 21, 2021
    Assignee: XRSPACE CO., LTD.
    Inventors: Feng-Seng Chu, Peter Chou
  • Patent number: 11127146
    Abstract: The invention relates to layered depth data. In multi-view images, there is a large amount of redundancy between images. The Layered Depth Video (LDV) format is a well-known formatting solution for multi-view images which reduces the amount of redundant information between images. In LDV, a reference central image is selected, and the information contributed by the other images of the multi-view set, mainly regions occluded in the central image, is provided. However, the LDV format contains a single horizontal occlusion layer, and thus fails to render viewpoints that uncover multiple layers of dis-occlusions. The invention uses light field content, which offers disparities in every direction and enables a change of viewpoint in a plurality of directions distinct from the viewing direction of the considered image, making it possible to render viewpoints that may uncover multiple-layer dis-occlusions, as can occur with complex scenes viewed with a wide inter-axial distance.
    Type: Grant
    Filed: July 21, 2017
    Date of Patent: September 21, 2021
    Assignee: INTERDIGITAL VC HOLDINGS, INC.
    Inventors: Didier Doyen, Guillaume Boisson, Sylvain Thiebaud
  • Patent number: 11119587
    Abstract: An image sensing system control method, comprising: (a) predicting a first velocity of the image sensor; (b) calculating a first time duration between a first frame time and a first polling time after the first frame time, wherein the image sensor captures a first frame at the first frame time and receives a first polling from the control circuit at the first polling time; and (c) calculating a first predicted motion delta of the first time duration according to the first velocity and the first time duration.
    Type: Grant
    Filed: March 25, 2020
    Date of Patent: September 14, 2021
    Assignee: PixArt Imaging Inc.
    Inventor: Shang Chan Kong
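Steps (b) and (c) of the claimed method amount to a duration times a predicted velocity. A minimal sketch under that reading; the function name and units are illustrative assumptions, not taken from the patent:

```python
def predicted_motion_delta(velocity, frame_time, polling_time):
    """Step (b): the first time duration between the frame capture time and
    the later polling time. Step (c): that duration multiplied by the
    predicted sensor velocity gives the predicted motion delta."""
    dt = polling_time - frame_time  # first time duration
    return velocity * dt            # first predicted motion delta

# A sensor predicted to move at 200 counts/s, polled 4 ms after the frame,
# is predicted to have moved about 0.8 counts by the polling time.
delta = predicted_motion_delta(200.0, 0.000, 0.004)
```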
  • Patent number: 11113526
    Abstract: A method for training a deep neural network of a robotic device is described. The method includes constructing a 3D model using images captured via a 3D camera of the robotic device in a training environment. The method also includes generating pairs of 3D images from the 3D model by artificially adjusting parameters of the training environment to form manipulated images using the deep neural network. The method further includes processing the pairs of 3D images to form a reference image including embedded descriptors of common objects between the pairs of 3D images. The method also includes using the reference image from training of the neural network to determine correlations to identify detected objects in future images.
    Type: Grant
    Filed: September 13, 2019
    Date of Patent: September 7, 2021
    Assignee: TOYOTA RESEARCH INSTITUTE, INC.
    Inventors: Kevin Stone, Krishna Shankar, Michael Laskey
  • Patent number: 11110343
    Abstract: An example method includes obtaining, from one or more sensors of a computing device, data relating to a feature in an environment. The method then includes analyzing the data to identify one or more details of the feature in the environment and determining, based on a comparison of the data to a stored dataset in a database, that the details include a detail that the stored dataset lacks. The method includes providing one or more game elements for gameplay on an interface of the computing device based on the details including the detail that the stored dataset lacks.
    Type: Grant
    Filed: April 5, 2019
    Date of Patent: September 7, 2021
    Assignee: Niantic, Inc.
    Inventors: Ryan Michael Hickman, Soohyun Bae
  • Patent number: 11116027
    Abstract: Provided is a method of storing information on a face of a passenger in a vehicle in association with a terminal of the passenger, and an electronic apparatus therefor. In the present disclosure, at least one of an electronic apparatus, a vehicle, a vehicle terminal, and an autonomous vehicle may be connected or converged with an artificial intelligence (AI) module, an unmanned aerial vehicle (UAV), a robot, an augmented reality (AR) device, a virtual reality (VR) device, a device associated with a 5G service, and the like.
    Type: Grant
    Filed: December 30, 2019
    Date of Patent: September 7, 2021
    Assignee: LG ELECTRONICS INC.
    Inventors: Poram Kim, Hyunchul Choi, Inyeop Jang, Salkmann Ji, Hyunsu Choi, Sungmin You, Taegil Cho
  • Patent number: 11096630
    Abstract: A method for generating a movement signal of a body part, of which at least a portion is undergoing a cardiac movement, includes providing a pilot tone signal acquired from the body part by a magnetic resonance receiver coil arrangement. A demixing matrix is calculated from a calibration portion of the pilot tone signal using an independent component analysis algorithm. The independent component corresponding to the cardiac movement is selected. The demixing matrix is applied to further portions of the pilot tone signal to obtain a movement signal representing the cardiac movement. An adaptive, stochastic, or model-based filter is applied to the signal representing the cardiac movement to obtain a filtered movement signal.
    Type: Grant
    Filed: June 8, 2018
    Date of Patent: August 24, 2021
    Assignee: Siemens Healthcare GmbH
    Inventors: Peter Speier, Mario Bacher
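The demixing step above can be sketched numerically. In the patent the demixing matrix comes from running ICA on a calibration portion of the pilot tone signal; in this toy the mixing matrix is known, so its inverse stands in for the ICA result, and a plain moving average stands in for the adaptive/stochastic/model-based filter. All signals and matrices here are synthetic assumptions.

```python
import numpy as np

t = np.linspace(0, 10, 1000)
cardiac = np.sin(2 * np.pi * 1.2 * t)     # ~72 bpm cardiac component
breathing = np.sin(2 * np.pi * 0.25 * t)  # respiratory component

# Two pilot-tone channels as seen by two receiver coils (hypothetical mixing)
A = np.array([[1.0, 0.6],
              [0.4, 1.0]])
X = A @ np.vstack([cardiac, breathing])

# Demixing matrix: in the patent, computed by ICA on a calibration portion
# of the pilot tone signal; here we simply invert the known mixing.
W = np.linalg.inv(A)
sources = W @ X
cardiac_est = sources[0]  # select the component corresponding to cardiac motion

# Simple moving-average stand-in for the adaptive/stochastic/model-based filter
kernel = np.ones(15) / 15
filtered = np.convolve(cardiac_est, kernel, mode="same")
```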
  • Patent number: 11093762
    Abstract: A method for validation of an obstacle candidate identified within a sequence of image frames comprises the following steps: A. for a current image frame of the sequence of image frames, determining within the current image frame a region of interest representing the obstacle candidate, dividing the region of interest into sub-regions, and, for each sub-region, determining a Time-To-Contact (TTC) based on at least the current image frame and a preceding or succeeding image frame of the sequence of image frames; B. determining one or more classification features based on the TTCs of the sub-regions determined for the current image frame; and C. classifying the obstacle candidate based on the determined one or more classification features.
    Type: Grant
    Filed: May 8, 2019
    Date of Patent: August 17, 2021
    Assignee: Aptiv Technologies Limited
    Inventors: Jan Siegemund, Christian Nunn
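A Time-To-Contact can be estimated from how fast an image region appears to grow between frames: under constant closing speed, a region that grows by scale factor s over one frame interval has TTC ≈ dt / (s − 1). The sketch below illustrates only that standard relation, not the patent's specific per-sub-region classification features:

```python
def time_to_contact(width_prev_px, width_curr_px, frame_dt):
    """Estimate TTC for an obstacle region from its apparent growth between
    a preceding and a current frame. If the region grows by scale factor
    s = width_curr / width_prev over frame_dt seconds, then under constant
    closing speed TTC is approximately frame_dt / (s - 1)."""
    s = width_curr_px / width_prev_px
    if s <= 1.0:
        return float("inf")  # shrinking or constant: not approaching
    return frame_dt / (s - 1.0)

# A region growing from 100 px to 105 px over a 50 ms frame interval
# yields a TTC of about one second.
ttc = time_to_contact(100.0, 105.0, 0.05)
```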
  • Patent number: 11080517
    Abstract: Face anti-counterfeiting detection methods and systems, electronic devices, and computer storage media include: obtaining an image or video to be detected containing a face; extracting a feature of the image or video to be detected, and detecting whether the extracted feature contains counterfeited face clue information; and determining whether the face passes the face anti-counterfeiting detection according to a detection result.
    Type: Grant
    Filed: June 25, 2019
    Date of Patent: August 3, 2021
    Assignee: BEIJING SENSETIME TECHNOLOGY DEVELOPMENT CO., LTD
    Inventors: Liwei Wu, Tianpeng Bao, Meng Yu, Yinghui Che, Chenxu Zhao
  • Patent number: 11062452
    Abstract: An image processing apparatus is configured to acquire first registration information, being registration information between a first image of interest, which is the first frame image in the frame image pair, and the first reference image; to acquire second registration information, being registration information between a second image of interest, which is the second frame image in the frame image pair, and the second reference image; to acquire reference registration information, being registration information between the first reference image and the second reference image; and to acquire third registration information, being registration information between the first image of interest and the second image of interest, based on the first registration information, the second registration information, and the reference registration information.
    Type: Grant
    Filed: April 20, 2020
    Date of Patent: July 13, 2021
    Assignees: Canon Kabushiki Kaisha, Canon Medical Systems Corporation
    Inventors: Toru Tanaka, Ryo Ishikawa
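When the registrations are rigid or affine transforms, the third registration can be obtained by composing the other three. A minimal sketch with 2D homogeneous matrices, assuming pure translations for readability (the patent does not specify the transform model):

```python
import numpy as np

def translation(tx, ty):
    """3x3 homogeneous matrix for a 2D translation (illustrative transform model)."""
    return np.array([[1.0, 0.0, tx],
                     [0.0, 1.0, ty],
                     [0.0, 0.0, 1.0]])

# Hypothetical registrations expressed as homogeneous matrices:
T1   = translation(2.0, 0.0)    # first image of interest  -> first reference image
T2   = translation(0.0, 3.0)    # second image of interest -> second reference image
Tref = translation(5.0, -1.0)   # first reference image    -> second reference image

# Third registration (first image of interest -> second image of interest):
# map into the first reference frame, across to the second reference frame,
# then invert the second registration to land in the second image of interest.
T3 = np.linalg.inv(T2) @ Tref @ T1
```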
  • Patent number: 11048914
    Abstract: Face anti-counterfeiting detection methods and systems, electronic devices, and computer storage media include: obtaining an image or video to be detected containing a face; extracting a feature of the image or video to be detected, and detecting whether the extracted feature contains counterfeited face clue information; and determining whether the face passes the face anti-counterfeiting detection according to a detection result.
    Type: Grant
    Filed: June 25, 2019
    Date of Patent: June 29, 2021
    Assignee: BEIJING SENSETIME TECHNOLOGY DEVELOPMENT CO., LTD
    Inventors: Liwei Wu, Tianpeng Bao, Meng Yu, Yinghui Che, Chenxu Zhao