Motion Or Velocity Measuring Patents (Class 382/107)
  • Patent number: 11277556
    Abstract: Based on information on a tracking target, a tracking target detecting unit is configured to detect the tracking target from an image captured by an automatic tracking camera. An influencing factor detecting unit is configured to detect an influencing factor that influences the amount of movement of the tracking target and set an influence degree, based on information on the influencing factor. Based on the influence degree set by the influencing factor detecting unit and a past movement amount of the tracking target, an adjustment amount calculating unit is configured to calculate an imaging direction adjustment amount for the automatic tracking camera.
    Type: Grant
    Filed: April 1, 2020
    Date of Patent: March 15, 2022
    Assignee: JVCKENWOOD Corporation
    Inventor: Takakazu Katou
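A minimal sketch of the kind of calculation the entry above describes: predicting the next target movement from past movement amounts and scaling it by the influence degree to obtain an imaging-direction adjustment. The function name, the mean-based prediction, and the (1 + influence) scaling are illustrative assumptions, not the patented method.
```python
import numpy as np

def adjustment_amount(past_movements, influence_degree, gain=1.0):
    """Estimate an imaging-direction adjustment (pan, tilt) for an
    auto-tracking camera from recent target displacements.

    past_movements: (N, 2) array of recent per-frame target
    displacements in degrees (pan, tilt).
    influence_degree: scalar in [0, 1]; larger values mean the detected
    influencing factor is expected to amplify the target's movement.
    """
    past_movements = np.asarray(past_movements, dtype=float)
    # Predict the next displacement from the recent average, scaled up
    # according to the influence degree.
    predicted = past_movements.mean(axis=0) * (1.0 + influence_degree)
    return gain * predicted

if __name__ == "__main__":
    recent = [(0.4, 0.1), (0.5, 0.0), (0.6, 0.1)]   # degrees per frame
    print(adjustment_amount(recent, influence_degree=0.5))   # ~[0.75, 0.1]
```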
  • Patent number: 11265526
    Abstract: A method for automatic registration of 3D image data, captured by a 3D image capture system having an RGB camera and a depth camera, includes capturing 2D image data with the RGB camera at a first pose; capturing depth data with the depth camera at the first pose; performing an initial registration of the RGB camera to the depth camera; capturing 2D image data with the RGB camera at a second pose; capturing depth data at the second pose; and calculating an updated registration of the RGB camera to the depth camera.
    Type: Grant
    Filed: July 27, 2020
    Date of Patent: March 1, 2022
    Assignee: Occipital, Inc.
    Inventors: Patrick O'Keefe, Jeffrey Roger Powers, Nicolas Burrus
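The registration in the entry above is, in the usual formulation, an extrinsic pose between the depth and RGB cameras. The sketch below shows the standard pinhole mapping of a depth pixel into the RGB image given such a pose; it is a generic illustration with assumed intrinsics and baseline, not the patent's update procedure.
```python
import numpy as np

def register_depth_to_rgb(depth_px, depth_m, K_depth, K_rgb, R, t):
    """Map a depth-camera pixel (u, v) with metric depth into the RGB
    image, given both intrinsic matrices and the depth-to-RGB
    extrinsics (R, t).  Re-estimating (R, t) at a new device pose is
    what an 'updated registration' would amount to here."""
    u, v = depth_px
    # Back-project to a 3D point in the depth camera frame.
    xyz_depth = depth_m * (np.linalg.inv(K_depth) @ np.array([u, v, 1.0]))
    # Transform into the RGB camera frame and project.
    xyz_rgb = R @ xyz_depth + t
    uvw = K_rgb @ xyz_rgb
    return uvw[:2] / uvw[2]

K = np.array([[525.0, 0.0, 320.0], [0.0, 525.0, 240.0], [0.0, 0.0, 1.0]])
R, t = np.eye(3), np.array([0.025, 0.0, 0.0])   # assumed 2.5 cm baseline
print(register_depth_to_rgb((320, 240), 1.5, K, K, R, t))   # ~[328.75, 240.0]
```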
  • Patent number: 11257576
    Abstract: A method, computer program product, and computing system for tracking encounter participants is executed on a computing device and includes obtaining encounter information of a patient encounter, wherein the encounter information includes machine vision encounter information obtained via one or more machine vision systems. The machine vision encounter information is processed to identify one or more humanoid shapes.
    Type: Grant
    Filed: January 27, 2020
    Date of Patent: February 22, 2022
    Assignee: NUANCE COMMUNICATIONS, INC.
    Inventors: Donald E. Owen, Daniel Paulino Almendro Barreda, Dushyant Sharma
  • Patent number: 11256085
    Abstract: Light deflection prism for altering a field of view of a camera of a device comprising a surface with a camera aperture region defining an actual light entrance angular cone projecting from the surface. A method comprises disposing a first surface of a light deflection prism on the surface so as to overlap the angular cone; internally reflecting a central ray, entering the prism under a normal incidence angle through a second surface, at a third surface of the prism towards the first surface under a normal angle of incidence such that the central ray enters the prism at an angle of less than 90° relative to a normal of the device surface and such that a ray at one boundary of an effective light entrance angular cone defined as a result of the light reflection at the third surface is substantially parallel to the device surface.
    Type: Grant
    Filed: August 1, 2018
    Date of Patent: February 22, 2022
    Assignee: NATIONAL UNIVERSITY OF SINGAPORE
    Inventor: Mark Brian Howell Breese
  • Patent number: 11250247
    Abstract: There is provided an information processing device including a control unit to generate play event information based on a determination whether detected behavior of a user is a predetermined play event.
    Type: Grant
    Filed: September 12, 2019
    Date of Patent: February 15, 2022
    Inventors: Hideyuki Matsunaga, Kosei Yamashita
  • Patent number: 11238597
    Abstract: [Problem] To provide a suspicious or abnormal subject detecting device for detecting a suspicious or abnormal subject appearing in time-series images. [Solution] An accumulating device 2 includes a first detecting unit 23 for detecting movement of a plurality of articulations included in an action subject Z appearing in a plurality of first time-series images Y1 obtained by photographing a predetermined point; and a determining unit 24 for determining one or more normal actions at the predetermined point based on a large number of movements of the plurality of articulations detected by the first detecting unit 23.
    Type: Grant
    Filed: November 21, 2019
    Date of Patent: February 1, 2022
    Assignee: ASILLA, INC.
    Inventor: Daisuke Kimura
  • Patent number: 11232583
    Abstract: A method of determining a pose of a camera is described. The method comprises analyzing changes in an image detected by the camera using a plurality of sensors of the camera; determining if a pose of the camera is incorrect; determining which sensors of the plurality of sensors are providing the most reliable image data; and analyzing data from the sensors providing the most reliable image data.
    Type: Grant
    Filed: March 25, 2016
    Date of Patent: January 25, 2022
    Assignee: Samsung Electronics Co., Ltd.
    Inventor: Rajesh Narasimha
  • Patent number: 11229107
    Abstract: A method includes the steps of obtaining a frame from an image sensor, the frame comprising a number of pixel values, detecting a change in a first subset of the pixel values, detecting a change in a second subset of the pixel values near the first subset of the pixel values, and determining an occupancy state based on a relationship between the change in the first subset of the pixel values and the change in the second subset of the pixel values. The occupancy state may be determined to be occupied when the change in the first subset of the pixel values is in a first direction and the change in the second subset of the pixel values is in a second direction opposite the first direction.
    Type: Grant
    Filed: February 2, 2018
    Date of Patent: January 18, 2022
    Assignee: IDEAL Industries Lighting LLC
    Inventors: Sten Heikman, Yuvaraj Dora, Ronald W. Bessems, John Roberts, Robert D. Underwood
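A toy illustration of the opposite-direction rule in the entry above: the space is flagged occupied when one pixel subset brightens while the neighbouring subset darkens. The threshold, masks, and mean-difference test are assumptions for the sketch, not the claimed algorithm.
```python
import numpy as np

def occupancy_state(prev_frame, curr_frame, region_a, region_b, threshold=10.0):
    """Return True (occupied) when region A and the nearby region B change
    brightness by a significant amount in opposite directions."""
    diff = curr_frame.astype(float) - prev_frame.astype(float)
    change_a = diff[region_a].mean()
    change_b = diff[region_b].mean()
    significant = abs(change_a) > threshold and abs(change_b) > threshold
    opposite = np.sign(change_a) == -np.sign(change_b)
    return bool(significant and opposite)

prev = np.full((8, 8), 100, dtype=np.uint8)
curr = prev.copy()
curr[:, :4] += 40          # first pixel subset brightens
curr[:, 4:] -= 40          # adjacent subset darkens
mask_a = np.zeros((8, 8), dtype=bool); mask_a[:, :4] = True
print(occupancy_state(prev, curr, mask_a, ~mask_a))   # True -> occupied
```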
  • Patent number: 11227397
    Abstract: The invention relates to computation of optical flow using event-based vision sensors.
    Type: Grant
    Filed: May 29, 2018
    Date of Patent: January 18, 2022
    Assignee: UNIVERSITÄT ZÜRICH
    Inventors: Tobias Delbruck, Min Liu
  • Patent number: 11227169
    Abstract: A system includes a sensor, which is configured to detect a plurality of objects within an area, and a computing device in communication with the sensor. The computing device is configured to determine that one of the plurality of objects is static, determine that one of the plurality of objects is temporary, determine a geometric relationship between the temporary object and the static object, and determine whether one of the plurality of objects is a ghost object based on the geometric relationship.
    Type: Grant
    Filed: February 21, 2020
    Date of Patent: January 18, 2022
    Assignee: Continental Automotive Systems, Inc.
    Inventor: Andrew Phillip Bolduc
  • Patent number: 11219814
    Abstract: Exemplary embodiments of the present disclosure are directed to systems, methods, and computer-readable media configured to autonomously generate personalized recommendations for a user before, during, or after a round of golf. The systems and methods can utilize course data, environmental data, user data, and/or equipment data in conjunction with one or more machine learning algorithms to autonomously generate the personalized recommendations.
    Type: Grant
    Filed: June 10, 2020
    Date of Patent: January 11, 2022
    Assignee: Arccos Golf LLC
    Inventors: Salman Hussain Syed, Colin David Phillips
  • Patent number: 11216969
    Abstract: A system for managing a position of a target stores identification information for identifying a target to be managed in association with position information indicating a position of the target. The system further obtains an image from an image capture device attached to a mobile device and obtains the image captured by the image capture device at an image capture position and image capture position information indicating the image capture position. The system further locates the position of the target included in the image using the image capture position information. The system further stores the position of the target in association with the identification information of the target.
    Type: Grant
    Filed: July 23, 2019
    Date of Patent: January 4, 2022
    Assignee: OBAYASHI CORPORATION
    Inventors: Takuma Nakabayashi, Tomoya Kaneko, Masashi Suzuki
  • Patent number: 11216669
    Abstract: The invention belongs to motion detection and three-dimensional image analysis technology fields. It can be applied to movement detection or intrusion detection in the surveillance of monitored volumes or monitored spaces. It can also be applied to obstacle detection or obstacle avoidance for self-driving, semi-autonomous vehicles, safety systems and ADAS. A three-dimensional imaging system stores 3D surface points and free space locations calculated from line of sight data. The 3D surface points typically represent reflective surfaces detected by a sensor such as a LiDAR, a radar, a depth sensor or stereoscopic cameras. By using free space information, the system can unambiguously derive a movement or an intrusion the first time a surface is detected at a particular coordinate. Motion detection can be performed using a single frame or a single 3D point that was previously a free space location.
    Type: Grant
    Filed: January 16, 2020
    Date of Patent: January 4, 2022
    Assignee: Outsight SA
    Inventor: Raul Bravo Orellana
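The core idea in the entry above, detecting movement from a single new 3D point because it lands where free space was previously observed, can be sketched with a set of known-empty voxels. The voxel size, hashing, and class layout are assumptions for illustration; the patent's actual data structures are not described in the abstract.
```python
import numpy as np

VOXEL = 0.2   # assumed voxel edge length in metres

def voxel_key(point):
    return tuple(np.floor(np.asarray(point, dtype=float) / VOXEL).astype(int))

class FreeSpaceMap:
    """Remembers voxels that a sensor line of sight has passed through
    (free space).  A surface later detected inside such a voxel can be
    flagged as movement from that single observation."""
    def __init__(self):
        self.free = set()

    def mark_free(self, points_along_ray):
        self.free.update(voxel_key(p) for p in points_along_ray)

    def is_intrusion(self, surface_point):
        return voxel_key(surface_point) in self.free

fsm = FreeSpaceMap()
fsm.mark_free([(0.1 * i, 0.0, 0.0) for i in range(1, 50)])   # empty corridor
print(fsm.is_intrusion((2.05, 0.0, 0.0)))   # True: surface where space was free
```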
  • Patent number: 11210994
    Abstract: The present application discloses a driving method of a display panel, a display apparatus and a virtual reality device. The display panel includes a middle display region and a peripheral display region at the periphery of the middle display region. The display panel is driven such that a display resolution of the middle display region is greater than a display resolution of the peripheral display region.
    Type: Grant
    Filed: December 15, 2017
    Date of Patent: December 28, 2021
    Assignees: BOE TECHNOLOGY GROUP CO., LTD., BEIJING BOE DISPLAY TECHNOLOGY CO., LTD.
    Inventors: Yanfeng Wang, Xiaoling Xu, Yuanxin Du, Yun Qiu, Xiao Sun
  • Patent number: 11212444
    Abstract: Provided is an image processing apparatus including at least one processor configured to implement a feature point extractor that detects a plurality of feature points in a first image input from an image sensor; a first motion vector extractor that extracts local motion vectors of the plurality of feature points and selects effective local motion vectors from among the local motion vectors by applying different algorithms according to zoom magnifications of the image sensor; a second motion vector extractor that extracts a global motion vector by using the effective local motion vectors; and an image stabilizer configured to correct shaking of the first image based on the global motion vector.
    Type: Grant
    Filed: January 27, 2021
    Date of Patent: December 28, 2021
    Assignee: HANWHA TECHWIN CO., LTD.
    Inventors: Gab Cheon Jung, Eun Cheol Choi
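A rough sketch of the pipeline in the entry above: select "effective" local motion vectors with a rule that depends on zoom, reduce them to a global motion vector, and shift the frame to compensate. The specific selection rules and the median reduction are placeholders; the abstract only states that the algorithms differ by zoom magnification.
```python
import numpy as np

def global_motion_vector(local_vectors, zoom):
    """Select effective local motion vectors (zoom-dependent rule) and
    reduce them to a single global motion vector."""
    v = np.asarray(local_vectors, dtype=float)       # shape (N, 2)
    mags = np.linalg.norm(v, axis=1)
    if zoom >= 2.0:
        keep = mags <= np.median(mags)               # high zoom: keep the smaller half
    else:
        keep = mags <= mags.mean() + 2 * mags.std()  # low zoom: drop gross outliers
    return np.median(v[keep], axis=0)

def stabilize(frame, gmv):
    """Shift the frame by the negated, rounded global motion vector."""
    dx, dy = (-int(round(c)) for c in gmv)
    return np.roll(np.roll(frame, dy, axis=0), dx, axis=1)

vectors = [(2.1, -0.9), (2.0, -1.0), (1.9, -1.1), (6.0, 3.0)]   # one outlier
print(global_motion_vector(vectors, zoom=1.0))   # ~[2.05, -0.95]; median resists the outlier
```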
  • Patent number: 11210775
    Abstract: A sequence of frames of a video can be received. For a given frame in the sequence of frames, a gradient-embedded frame is generated corresponding to the given frame. The gradient-embedded frame incorporates motion information. The motion information can be represented as disturbance in the gradient-embedded frame. A plurality of such gradient-embedded frames can be generated corresponding to a plurality of the sequence of frames. Based on the plurality of gradient-embedded frames, a neural network such as a generative adversarial network is trained to learn to suppress the disturbance in the gradient-embedded frame and to generate a substitute frame. In the inference stage, an anomaly in a target video frame can be detected by comparing it to a corresponding substitute frame generated by the neural network.
    Type: Grant
    Filed: September 18, 2020
    Date of Patent: December 28, 2021
    Assignee: International Business Machines Corporation
    Inventors: Bo Wu, Chuang Gan, Dakuo Wang, Rameswar Panda
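A simplified illustration of "gradient-embedded frames" as the entry above sketches them: the temporal gradient (difference to the previous frame) is added to the frame as a disturbance, and anomaly scoring compares a target frame with a motion-suppressed substitute. The blending weight and the mean-absolute-difference score are assumptions; the GAN itself is omitted.
```python
import numpy as np

def gradient_embedded_frame(prev_frame, frame, alpha=0.5):
    """Embed motion information by adding the temporal gradient to the
    frame as a visible disturbance."""
    gradient = frame.astype(float) - prev_frame.astype(float)
    embedded = frame.astype(float) + alpha * gradient
    return np.clip(embedded, 0, 255).astype(np.uint8)

def anomaly_score(target_frame, substitute_frame):
    """Mean absolute difference between the target frame and the
    network-generated, disturbance-suppressed substitute."""
    return float(np.mean(np.abs(target_frame.astype(float)
                                - substitute_frame.astype(float))))
```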
  • Patent number: 11200523
    Abstract: A method includes receiving image information with one or more processor(s) and from a sensor disposed at a worksite, and determining an identity of a work tool disposed at the worksite based at least partly on the image information. The method further includes receiving location information with the one or more processor(s), the location information indicating a first location of the sensor at the worksite. Additionally, the method includes determining a second location of the work tool at the worksite based at least partly on the location information. In some instances, the method includes generating a worksite map with the one or more processor(s), the worksite map identifying the work tool and indicating the second location of the work tool at the worksite, and at least one of providing the worksite map to an additional processor and causing the worksite map to be rendered via a display.
    Type: Grant
    Filed: July 1, 2019
    Date of Patent: December 14, 2021
    Assignee: Caterpillar Inc.
    Inventors: Peter Joseph Petrany, Jeremy Lee Vogel
  • Patent number: 11200738
    Abstract: A system may receive imaging data generated by an imaging device directed at a heart. The system may receive a first input operation indicative of a selected time-frame. The system may display images of the heart based on the intensity values mapped to the selected time-frame. The system may receive, based on interaction with the images, an apex coordinate and a base coordinate. The system may calculate, based on the apex coordinate and the base coordinate, a truncated ellipsoid representative of an endocardial or epicardial boundary of the heart. The system may generate a four-dimensional mesh comprising three-dimensional vertices spaced along the mesh. The system may overlay, on the displayed images, markers representative of the vertices. The system may receive a second input operation corresponding to a selected marker. The system may enhance the mesh by adjusting or interpolating vertices across multiple time-frames.
    Type: Grant
    Filed: June 16, 2020
    Date of Patent: December 14, 2021
    Assignee: Purdue Research Foundation
    Inventors: Craig J Goergen, Frederick William Damen
  • Patent number: 11200684
    Abstract: Disclosed is a river flow velocity measurement device using optical flow image processing, including: an image photographing unit configured to acquire consecutive images of a flow velocity measurement site of a river; an image conversion analysis unit configured to dynamically extract frames of the consecutive images in order to normalize image data of the image photographing unit, image-convert the extracted frames, and perform homography calculation; an analysis region extracting unit configured to extract an analysis region of an analysis point; a pixel flow velocity calculating unit configured to calculate a pixel flow velocity using an image in the analysis region of the analysis point extracted by the analysis region extracting unit; and an actual flow velocity calculating unit configured to convert the pixel flow velocity calculated by the pixel flow velocity calculating unit into an actual flow velocity.
    Type: Grant
    Filed: November 29, 2018
    Date of Patent: December 14, 2021
    Assignees: HYDROSEM, REPUBLIC OF KOREA (NATIONAL DISASTER MANAGEMENT RESEARCH INSTITUTE)
    Inventors: Seo Jun Kim, Byung Man Yoon, Ho Jun You, Dong Su Kim, Tae Sung Cheong, Jae Seung Joo, Hyeon Seok Choi
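The last step in the entry above, converting a pixel flow velocity into an actual surface velocity, reduces to multiplying by the ground distance that one pixel covers at the analysis point (obtainable in practice from the homography). A minimal sketch with assumed numbers:
```python
def pixel_to_actual_velocity(pixel_displacement, frame_interval_s, metres_per_pixel):
    """Convert a pixel displacement between consecutive frames into a
    surface flow velocity in m/s."""
    pixel_velocity = pixel_displacement / frame_interval_s   # px/s
    return pixel_velocity * metres_per_pixel                 # m/s

# A feature on the water surface moves 3 px between frames captured
# 1/30 s apart, and one pixel spans 1 cm at that analysis point.
print(pixel_to_actual_velocity(3, 1 / 30, 0.01))   # 0.9 m/s
```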
  • Patent number: 11188780
    Abstract: Briefly, embodiments disclosed herein relate to image cropping, such as for digital images.
    Type: Grant
    Filed: May 14, 2018
    Date of Patent: November 30, 2021
    Assignee: VERIZON MEDIA INC.
    Inventors: Daozheng Chen, Mihyoung Sally Lee, Brian Webb, Ralph Rabbat, Ali Khodaei, Paul Krakow, Dave Todd, Samantha Giordano, Max Chern
  • Patent number: 11175366
    Abstract: A method for acquiring magnetic resonance imaging data with respiratory motion compensation using one or more motion signals includes acquiring a plurality of gradient-delay-corrected radial readout views of a subject using a free-breathing multi-echo pulse sequence, and sampling a plurality of data points of the gradient-delay-corrected radial readout views to yield a self-gating signal. The self-gating signal is used to determine a plurality of respiratory motion states corresponding to the plurality of gradient-delay-corrected radial readout views. The respiratory motion states are used to correct respiratory motion bias in the gradient-delay-corrected radial readout views, thereby yielding gradient-delay-corrected and motion-compensated multi-echo data. One or more images are reconstructed using the gradient-delay-corrected and motion-compensated multi-echo data.
    Type: Grant
    Filed: February 5, 2020
    Date of Patent: November 16, 2021
    Assignees: Siemens Healthcare GmbH, The Regents of the University of California
    Inventors: Xiaodong Zhong, Holden H. Wu, Vibhas S. Deshpande, Tess Armstrong, Li Pan, Marcel Dominik Nickel, Stephan Kannengiesser
  • Patent number: 11150750
    Abstract: An electronic pen main body unit of an electronic pen having a function of a fountain pen includes an ink writing unit in which a cartridge housing liquid ink is fitted to a rear end portion of a pen core, and a pen body is disposed so as to be superposed on the pen core in a direction orthogonal to a coupling direction of the pen core and the cartridge, and an interaction circuit having an electronic part which, in operation, exchanges a signal with a tablet. The interaction circuit is disposed on a side of the pen core opposite the pen body in the direction orthogonal to the coupling direction of the pen core and the cartridge in a state in which the interaction circuit recedes to the cartridge side from a writing end of the pen body in the coupling direction of the pen core and the cartridge.
    Type: Grant
    Filed: July 24, 2020
    Date of Patent: October 19, 2021
    Assignee: Wacom Co., Ltd.
    Inventors: Kohei Tanaka, Kenichi Ninomiya, Takenori Kaneda, Toshihiko Horie
  • Patent number: 11141239
    Abstract: A reprocessing apparatus for cleaning and/or disinfecting a medical instrument including a fluid container for a reprocessing fluid and a reprocessing device. The reprocessing device includes: a reprocessing space in which the medical instrument is introduced for reprocessing; a fluid line for connection to at least one channel of the medical instrument, wherein the fluid line is configured to transport the reprocessing fluid to the at least one channel; a bubble introducing apparatus for introducing gas bubbles into the fluid line; and a gas bubble speed determining apparatus for determining a speed of the gas bubbles in the fluid line. The gas bubble speed determining apparatus includes a camera for capturing successive images of at least a portion of the gas bubbles in the fluid line.
    Type: Grant
    Filed: August 15, 2019
    Date of Patent: October 12, 2021
    Assignee: OLYMPUS WINTER & IBE GMBH
    Inventors: Niklas Erdmann, Sascha Eschborn, Antonia Weis
  • Patent number: 11146661
    Abstract: An endpoint system including one or more computing devices receives user input associated with a first avatar in a shared virtual environment; calculates, based on the user input, motion for a portion of the first avatar, such as a hand; determines, based on the user input, a first gesture state for the first avatar; transmits first location change notifications and a representation of the first gesture state for the first avatar; receives second location change notifications and a representation of a second gesture state for a second avatar; detects a collision between the first avatar and the second avatar based on the first location change notifications and the second location change notifications; and identifies a collaborative gesture based on the detected collision, the first gesture state, and the second gesture state.
    Type: Grant
    Filed: June 28, 2017
    Date of Patent: October 12, 2021
    Assignee: Rec Room Inc.
    Inventors: Nicholas Fajt, Cameron Brown, Dan Kroymann, Omer Bilal Orhan, Johnathan Bevis, Joshua Wehrly
  • Patent number: 11140329
    Abstract: An image processing apparatus and an image processing method include obtaining status information of a terminal device, obtaining photographing scene information of the terminal device, determining an image processing mode based on the status information and the photographing scene information, obtaining a to-be-displayed image, and processing the to-be-displayed image based on the image processing mode.
    Type: Grant
    Filed: November 4, 2019
    Date of Patent: October 5, 2021
    Assignee: HUAWEI TECHNOLOGIES CO., LTD.
    Inventors: Jun Dai, Biying Hu, Yining Huang
  • Patent number: 11127146
    Abstract: The invention relates to layered depth data. In multi-view images, there is a large amount of redundancy between images. The Layered Depth Video (LDV) format is a well-known formatting solution for multi-view images which reduces the amount of redundant information between images. In LDV, a reference central image is selected, and the information brought by the other images of the multi-view images, mainly regions occluded in the central image, is provided. However, the LDV format contains a single horizontal occlusion layer and thus fails to render viewpoints that uncover multiple-layer dis-occlusions. The invention uses light field content, which offers disparities in every direction and enables a change in viewpoint in a plurality of directions distinct from the viewing direction of the considered image, making it possible to render viewpoints that may uncover multiple-layer dis-occlusions, which can occur with complex scenes viewed with a wide inter-axial distance.
    Type: Grant
    Filed: July 21, 2017
    Date of Patent: September 21, 2021
    Assignee: INTERDIGITAL VC HOLDINGS, INC.
    Inventors: Didier Doyen, Guillaume Boisson, Sylvain Thiebaud
  • Patent number: 11127181
    Abstract: An avatar facial expression generating system and a method of avatar facial expression generation are provided. In the method, multiple user data related to the sensing result of a user are obtained from multiple data sources. Multiple first emotion decisions are determined, respectively, based on each user data. Whether an emotion collision occurs among the first emotion decisions is determined, where an emotion collision means that the corresponding emotion groups of the first emotion decisions do not match each other. A second emotion decision is determined from one or more emotion groups according to the result of the emotion collision determination. The first or second emotion decision is related to one emotion group. A facial expression of an avatar is generated based on the second emotion decision. Accordingly, a proper facial expression of the avatar can be presented.
    Type: Grant
    Filed: February 27, 2020
    Date of Patent: September 21, 2021
    Assignee: XRSPACE CO., LTD.
    Inventors: Feng-Seng Chu, Peter Chou
  • Patent number: 11119587
    Abstract: An image sensing system control method, comprising: (a) predicting a first velocity of the image sensor; (b) calculating a first time duration between a first frame time and a first polling time after the first frame time, wherein the image sensor captures a first frame at the first frame time and receives a first polling from the control circuit at the first polling time; and (c) calculating a first predicted motion delta of the first time duration according to the first velocity and the first time duration.
    Type: Grant
    Filed: March 25, 2020
    Date of Patent: September 14, 2021
    Assignee: PixArt Imaging Inc.
    Inventor: Shang Chan Kong
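Step (c) of the entry above is essentially displacement = velocity × elapsed time between the frame capture and the subsequent polling. A minimal sketch; the units and tuple layout are assumptions:
```python
def predicted_motion_delta(velocity, frame_time_s, polling_time_s):
    """Predicted sensor motion accumulated between a frame capture and
    the control circuit's next polling.

    velocity: (vx, vy) in counts per second.
    """
    duration = polling_time_s - frame_time_s
    return tuple(v * duration for v in velocity)

print(predicted_motion_delta((120.0, -40.0), frame_time_s=0.000, polling_time_s=0.008))
# (0.96, -0.32) counts accumulated over the 8 ms gap
```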
  • Patent number: 11113526
    Abstract: A method for training a deep neural network of a robotic device is described. The method includes constructing a 3D model using images captured via a 3D camera of the robotic device in a training environment. The method also includes generating pairs of 3D images from the 3D model by artificially adjusting parameters of the training environment to form manipulated images using the deep neural network. The method further includes processing the pairs of 3D images to form a reference image including embedded descriptors of common objects between the pairs of 3D images. The method also includes using the reference image from training of the neural network to determine correlations to identify detected objects in future images.
    Type: Grant
    Filed: September 13, 2019
    Date of Patent: September 7, 2021
    Assignee: TOYOTA RESEARCH INSTITUTE, INC.
    Inventors: Kevin Stone, Krishna Shankar, Michael Laskey
  • Patent number: 11116027
    Abstract: Provided is a method of storing information on a face of a passenger in a vehicle in association with a terminal of the passenger, and an electronic apparatus therefor. In the present disclosure, at least one of an electronic apparatus, a vehicle, a vehicle terminal, and an autonomous vehicle may be connected or converged with an artificial intelligence (AI) module, an unmanned aerial vehicle (UAV), a robot, an augmented reality (AR) device, a virtual reality (VR) device, a device associated with a 5G service, and the like.
    Type: Grant
    Filed: December 30, 2019
    Date of Patent: September 7, 2021
    Assignee: LG ELECTRONICS INC.
    Inventors: Poram Kim, Hyunchul Choi, Inyeop Jang, Salkmann Ji, Hyunsu Choi, Sungmin You, Taegil Cho
  • Patent number: 11110343
    Abstract: An example method includes obtaining, from one or more sensors of a computing device, data relating to a feature in an environment. The method then includes analyzing the data to identify one or more details of the feature in the environment and determining, based on a comparison of the data to a stored dataset in a database, that the details include a detail that the stored dataset lacks. The method includes providing one or more game elements for gameplay on an interface of the computing device based on the details including the detail that the stored dataset lacks.
    Type: Grant
    Filed: April 5, 2019
    Date of Patent: September 7, 2021
    Assignee: Niantic, Inc.
    Inventors: Ryan Michael Hickman, Soohyun Bae
  • Patent number: 11096630
    Abstract: A method for generating a movement signal of a body part, of which at least a portion is undergoing a cardiac movement, includes providing a pilot tone signal acquired from the body part by a magnetic resonance receiver coil arrangement. A demixing matrix is calculated from a calibration portion of the pilot tone signal using an independent component analysis algorithm. The independent component corresponding to the cardiac movement is selected. The demixing matrix is applied to further portions of the pilot tone signal to obtain a movement signal representing the cardiac movement. An adaptive, stochastic, or model-based filter is applied to the signal representing the cardiac movement to obtain a filtered movement signal.
    Type: Grant
    Filed: June 8, 2018
    Date of Patent: August 24, 2021
    Assignee: Siemens Healthcare GmbH
    Inventors: Peter Speier, Mario Bacher
  • Patent number: 11093762
    Abstract: A method for validation of an obstacle candidate identified within a sequence of image frames comprises the following steps: A. for a current image frame of the sequence of image frames, determining within the current image frame a region of interest representing the obstacle candidate, dividing the region of interest into sub-regions, and, for each sub-region, determining a Time-To-Contact (TTC) based on at least the current image frame and a preceding or succeeding image frame of the sequence of image frames; B. determining one or more classification features based on the TTCs of the sub-regions determined for the current image frame; and C. classifying the obstacle candidate based on the determined one or more classification features.
    Type: Grant
    Filed: May 8, 2019
    Date of Patent: August 17, 2021
    Assignee: Aptiv Technologies Limited
    Inventors: Jan Siegemund, Christian Nunn
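One common way to obtain a per-sub-region Time-To-Contact from two frames is from the apparent scale change of that sub-region; features such as the mean and spread of the sub-region TTCs can then feed a classifier. The sketch below uses that scale-change formulation and simple statistics as stand-ins; the entry above does not disclose the patent's exact TTC estimator or features.
```python
import numpy as np

def time_to_contact(scale_ratio, dt):
    """TTC from apparent size growth: a region that grows by factor s
    over dt seconds has TTC ~ dt / (s - 1)."""
    return dt / (scale_ratio - 1.0) if scale_ratio > 1.0 else float("inf")

def classification_features(sub_region_scales, dt):
    """Candidate-level features from the sub-region TTCs; a genuine
    upright obstacle tends to give consistent TTCs across sub-regions."""
    ttcs = np.array([time_to_contact(s, dt) for s in sub_region_scales])
    finite = ttcs[np.isfinite(ttcs)]
    if finite.size == 0:
        return {"mean_ttc": float("inf"), "ttc_std": 0.0}
    return {"mean_ttc": float(finite.mean()), "ttc_std": float(finite.std())}

print(classification_features([1.020, 1.030, 1.025, 1.028], dt=1 / 30))
```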
  • Patent number: 11080517
    Abstract: Face anti-counterfeiting detection methods and systems, electronic devices, and computer storage media include: obtaining an image or video to be detected containing a face; extracting a feature of the image or video to be detected, and detecting whether the extracted feature contains counterfeited face clue information; and determining whether the face passes the face anti-counterfeiting detection according to a detection result.
    Type: Grant
    Filed: June 25, 2019
    Date of Patent: August 3, 2021
    Assignee: BEIJING SENSETIME TECHNOLOGY DEVELOPMENT CO., LTD
    Inventors: Liwei Wu, Tianpeng Bao, Meng Yu, Yinghui Che, Chenxu Zhao
  • Patent number: 11062452
    Abstract: An image processing apparatus configured to acquire first registration information being registration information between a first image of interest, which is the first frame image in the frame image pair, and the first reference image; to acquire second registration information being registration information between a second image of interest, which is the second frame image in the frame image pair, and the second reference image; to acquire reference registration information being registration information between the first reference image and the second reference image; and to acquire third registration information being registration information between the first image of interest and the second image of interest, based on the first registration information, the second registration information and the reference registration information.
    Type: Grant
    Filed: April 20, 2020
    Date of Patent: July 13, 2021
    Assignees: Canon Kabushiki Kaisha, Canon Medical Systems Corporation
    Inventors: Toru Tanaka, Ryo Ishikawa
  • Patent number: 11048914
    Abstract: Face anti-counterfeiting detection methods and systems, electronic devices, and computer storage media include: obtaining an image or video to be detected containing a face; extracting a feature of the image or video to be detected, and detecting whether the extracted feature contains counterfeited face clue information; and determining whether the face passes the face anti-counterfeiting detection according to a detection result.
    Type: Grant
    Filed: June 25, 2019
    Date of Patent: June 29, 2021
    Assignee: BEIJING SENSETIME TECHNOLOGY DEVELOPMENT CO., LTD
    Inventors: Liwei Wu, Tianpeng Bao, Meng Yu, Yinghui Che, Chenxu Zhao
  • Patent number: 11043037
    Abstract: A method to determine the dimensions and distance of a number of objects in an environment includes providing a number of objects including a marking element; recording a visual image-dataset of at least one of the objects with a camera; and determining a parameter value from the image of a marking element in the image-dataset or from a measurement of an additional sensor at the location of the camera. The parameter value is a value depending on the distance of the object from the camera. The method further includes calculating the relative distance between the object and the camera based on the parameter value and calculating dimensions of the object from at least a part of the image of the object in the image-dataset and the calculated distance. A related device, a related system and a related control unit for a virtual reality system are also disclosed.
    Type: Grant
    Filed: March 3, 2020
    Date of Patent: June 22, 2021
    Assignee: SIEMENS HEALTHCARE GMBH
    Inventor: Anton Ebert
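When the marking element has a known physical size, the distance and dimensions described in the entry above follow from the pinhole camera model. A minimal sketch with assumed focal length and marker size:
```python
def distance_from_marker(focal_length_px, marker_size_m, marker_size_px):
    """Range estimate: a marker of known physical size appears smaller
    in the image the farther it is from the camera."""
    return focal_length_px * marker_size_m / marker_size_px

def object_dimensions(focal_length_px, distance_m, width_px, height_px):
    """Back-project the object's image extent to metric dimensions once
    its distance is known."""
    scale = distance_m / focal_length_px
    return width_px * scale, height_px * scale

d = distance_from_marker(focal_length_px=1000, marker_size_m=0.10, marker_size_px=50)
print(d)                                        # 2.0 m
print(object_dimensions(1000, d, 400, 250))     # (0.8, 0.5) m
```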
  • Patent number: 11037194
    Abstract: A favorable merging or grouping of simply connected regions into which the array of information samples is sub-divided, is coded with a reduced amount of data. To this end, a predetermined relative locational relationship is defined enabling an identifying, for a predetermined simply connected region, of simply connected regions within the plurality of simply connected regions which have the predetermined relative locational relationship to the predetermined simply connected region. Namely, if the number is zero, a merge indicator for the predetermined simply connected region may be absent within the data stream. In other embodiments, spatial sub-division is performed depending on a first subset of syntax elements, followed by combining spatially neighboring simply connected regions depending on a second subset of syntax elements, to obtain an intermediate sub-division.
    Type: Grant
    Filed: June 22, 2020
    Date of Patent: June 15, 2021
    Assignee: GE Video Compression, LLC
    Inventors: Philipp Helle, Simon Oudin, Martin Winken, Detlev Marpe, Thomas Wiegand
  • Patent number: 11035663
    Abstract: Systems and related methods are disclosed for characterizing physical phenomena. In an embodiment, the system includes a frame defining an active volume, a camera configured to capture an image of the active volume, and a controller coupled to the camera. In an embodiment, the controller is configured to: track an object within the active volume via the camera, analyze a motion of the object within the active volume, and output a visual depiction of the object and one or more vectors characterizing the motion of the object on a display.
    Type: Grant
    Filed: August 21, 2019
    Date of Patent: June 15, 2021
    Assignee: The Texas A&M University System
    Inventors: Ricardo Eusebi, Jeffrey Breitschopf, David Andrew Overton, Brian Muldoon, Sifu Luo, Zhikun Xing
  • Patent number: 11037525
    Abstract: A display system with high display quality in which display unevenness is reduced is provided. The display system includes a processing unit and a display portion. The processing unit generates second image data by using first image data. The display portion displays an image on the basis of the second image data. The processing unit includes three layers. The first image data is supplied to the first layer. The first image data contains a plurality of pieces of data. The plurality of pieces of data each correspond to any one of the plurality of pixels. The first layer generates first arithmetic data by making the number of data corresponding to one pixel larger than the number of the first image data by using the first image data. The second layer generates second arithmetic data by multiplying the first arithmetic data by a weight coefficient.
    Type: Grant
    Filed: June 14, 2018
    Date of Patent: June 15, 2021
    Inventors: Masataka Shiokawa, Natsuko Takase, Hideaki Okamoto, Kensuke Yoshizumi, Daiki Nakamura
  • Patent number: 11017242
    Abstract: A traffic monitoring system includes a first car moving on a first path; a camera having a field of vision including at least a portion of the first path; and a computing system. The computing system receives a plurality of images from the camera. The computing system has a processor. When instructed, the processor performs circling a perimeter of the first car on each of the images with a first rectangle; composing a first set of points, each point of the first set of points representing a center of the first rectangle; finding a first centroid using the first set of points, wherein the first centroid represents the first path; and calculating a speed of the first car using the first centroid.
    Type: Grant
    Filed: February 19, 2019
    Date of Patent: May 25, 2021
    Assignee: Unisys Corporation
    Inventors: Kelsey L Bruso, Dayln Limesand, James Combs
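A stripped-down version of the speed step in the entry above: take the centre of the rectangle circling the car in each frame and convert the path length of those centres into a speed. The frame rate and pixel-to-metre scale are assumed calibration inputs; the patent's centroid bookkeeping is summarised away here.
```python
import numpy as np

def box_center(x, y, w, h):
    """Centre of the rectangle drawn around the car in one image."""
    return (x + w / 2.0, y + h / 2.0)

def average_speed(centers, fps, metres_per_pixel):
    """Average speed from per-frame box centres: path length in pixels,
    converted to metres, divided by elapsed time."""
    pts = np.asarray(centers, dtype=float)
    step_lengths = np.linalg.norm(np.diff(pts, axis=0), axis=1)
    distance_m = step_lengths.sum() * metres_per_pixel
    elapsed_s = (len(pts) - 1) / fps
    return distance_m / elapsed_s

centers = [box_center(100 + 8 * i, 240, 60, 30) for i in range(10)]
print(average_speed(centers, fps=30, metres_per_pixel=0.05))   # 12.0 m/s
```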
  • Patent number: 11006042
    Abstract: An imaging device includes a plurality of image sensors, and circuitry configured to determine whether a difference between average brightness values of first image data and second image data, both captured by a same image sensor, is equal to or greater than a first threshold. The second image data is captured at a timing later than capture of the first image data. The circuitry performs one of a) output of image data captured by a rest of the plurality of image sensors excluding the one of the plurality of image sensors and b) composition of the image data captured by the rest of the plurality of image sensors, in response to a determination that the difference in average brightness value is equal to or greater than the first threshold and the average brightness value of the second image data is equal to or smaller than a second threshold.
    Type: Grant
    Filed: May 21, 2020
    Date of Patent: May 11, 2021
    Assignee: Ricoh Company, Ltd.
    Inventors: Koji Takatsu, Susumu Fujioka
  • Patent number: 11003914
    Abstract: A system for monitoring and recording and processing an activity includes one or more cameras for automatically recording video of the activity. A processor and memory associated and in communication with the camera is disposed near the location of the activity. The system may include AI logic configured to identify a user recorded within a video frame captured by the camera. The system may also detect and identify a user when the user is located within a predetermined area. The system may include a video processing engine configured to process images within the video frame to identify the user and may modify and format the video upon identifying the user and the activity. The system may include a communication module to communicate formatted video to a remote video processing system, which may further process the video and enable access to a mobile app of the user.
    Type: Grant
    Filed: May 13, 2020
    Date of Patent: May 11, 2021
    Inventor: Kevin R. Imes
  • Patent number: 10991262
    Abstract: A simulation mapping system and method for determining a plurality of performance metric values in relation to a training activity performed by a user in an interactive computer simulation, the interactive computer simulation simulating a virtual element comprising a plurality of dynamic subsystems. A processor module obtains dynamic data related to the virtual element being simulated in an interactive computer simulation station comprising a tangible instrument module. The dynamic data captures actions performed by the user on tangible instruments. The processor module constructs a dataset corresponding to the plurality of performance metric values from the dynamic data having a target time step by synchronizing dynamic data and by inferring, for at least one dynamic subsystem of the plurality of dynamic subsystems missing from the dynamic data, a new set of data into the dataset from dynamic data associated with one or more co-related dynamic subsystems.
    Type: Grant
    Filed: March 30, 2018
    Date of Patent: April 27, 2021
    Assignee: CAE Inc.
    Inventors: Jean-François Delisle, Antoine Dufour, Marc-André Proulx, Dac Toan Ho
  • Patent number: 10973581
    Abstract: Disclosed are systems and methods for obtaining a structured light reconstruction using a hybrid spatio-temporal pattern sequence projected on a surface. The method includes projecting a structured light pattern, such as a binary de Bruijn sequence, onto a 3D surface and acquiring an image set of at least a portion of this projected sequence with a camera system, and projecting a binary edge detection pattern onto the portion of the surface and acquiring an image set of the same portion of the projected binary pattern. The acquired image set of the binary pattern is processed to determine edge locations therein, and then employed to identify the locations of pattern edges within the acquired image set of the structured light pattern. The detected edges of the structured light pattern images are employed to decode the structured light pattern and calculate a disparity map, which is used to reconstruct the 3D surface.
    Type: Grant
    Filed: June 16, 2017
    Date of Patent: April 13, 2021
    Assignee: 7D SURGICAL INC.
    Inventors: Adrian Mariampillai, Kenneth Kuei-Ching Lee, Michael Leung, Peter Siegler, Beau Anthony Standish, Victor X. D. Yang
  • Patent number: 10977804
    Abstract: In accordance with an embodiment, a method of detecting moving objects via a moving camera includes receiving a sequence of images from the moving camera; determining optical flow data from the sequence of images; decomposing the optical flow data into global motion related motion vectors and local object related motion vectors; calculating global motion parameters from the global motion related motion vectors; calculating motion-compensated vectors from the local object related motion vectors and the calculated global motion parameters; compensating the local object related motion vectors using the calculated global motion parameters; and clustering the compensated local object related motion vectors to generate a list of detected moving objects.
    Type: Grant
    Filed: January 28, 2020
    Date of Patent: April 13, 2021
    Assignee: STMICROELECTRONICS S.R.L.
    Inventors: Giuseppe Spampinato, Salvatore Curti, Arcangelo Ranieri Bruna
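A compact stand-in for the decomposition described in the entry above: model camera-induced (global) motion as the robust median of the sparse flow vectors, subtract it to get object-related residuals, and keep points whose residual is too large to be camera motion. The real pipeline fits richer global motion parameters and clusters the residuals into objects; the threshold and median model here are assumptions.
```python
import numpy as np

def decompose_flow(vectors):
    """Split sparse optical-flow vectors into a global (camera) motion
    estimate and per-point compensated residuals."""
    v = np.asarray(vectors, dtype=float)
    global_motion = np.median(v, axis=0)        # camera-induced shift
    return global_motion, v - global_motion     # object-related residuals

def detect_moving(points, compensated, min_residual=2.0):
    """Keep points whose compensated motion exceeds the threshold."""
    mags = np.linalg.norm(compensated, axis=1)
    return np.asarray(points)[mags > min_residual]

pts = [(10, 10), (40, 12), (80, 15), (120, 18), (200, 120), (220, 130)]
vecs = [(3.0, 0.0), (3.1, -0.1), (2.9, 0.1), (3.0, 0.05), (9.0, 4.0), (8.5, 4.2)]
g, comp = decompose_flow(vecs)
print(g)                          # ~[3.05, 0.08]: the camera pan
print(detect_moving(pts, comp))   # the last two points: a moving object
```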
  • Patent number: 10970855
    Abstract: Provided are embodiments for a computer-implemented method. The method includes receiving a sequence of image data, transforming objects in each frame of the sequence of the image data into direction vectors, and clustering the direction vectors based at least in part on features of the objects. The method also includes mapping the direction vectors for the objects in each frame into a position-orientation data structure, and performing tracking using the mapped direction vectors in the position-orientation data structure. Also provided are embodiments of a computer program product and a system for performing object tracking.
    Type: Grant
    Filed: March 5, 2020
    Date of Patent: April 6, 2021
    Assignee: INTERNATIONAL BUSINESS MACHINES CORPORATION
    Inventors: Umar Asif, Jianbin Tang, Subhrajit Roy
  • Patent number: 10970935
    Abstract: A person who is not using a hybrid reality (HR) system communicates with the HR system without using a network communications link using a body pose. Data is received from a sensor and an individual is detected in the sensor data. A first situation of at least one body part of the individual in 3D space is ascertained at a first time and a body pose is determined based on the first situation of the at least one body part. An action is decided on based on the body pose and the action is performed on an HR system worn by a user.
    Type: Grant
    Filed: December 21, 2018
    Date of Patent: April 6, 2021
    Inventors: Anthony Mark Jones, Jessica A. F. Jones, Bruce A. Young
  • Patent number: 10956751
    Abstract: The present invention provides an external apparatus connected to an imaging apparatus over a network, the imaging apparatus including an imaging unit which captures an image of a vessel being a subject, the external apparatus including an obtaining unit which obtains image data including the vessel captured by the imaging unit, a display unit which displays the image data, an analyzing unit which extracts vessel estimation information regarding an arbitrary vessel included in the image data based on the image data, a receiving unit which receives vessel information based on a wireless communication from the vessel, and a comparing unit which compares the vessel estimation information and the vessel information, wherein, in a case where the vessel estimation information and the vessel information are not matched, the display unit displays a warning in addition to the image data.
    Type: Grant
    Filed: July 21, 2017
    Date of Patent: March 23, 2021
    Assignee: Canon Kabushiki Kaisha
    Inventor: Koji Shinohe
  • Patent number: 10952658
    Abstract: An information processing method includes, by a computer: acquiring biological information on a first person; acquiring an image obtained by imaging the first person in synchronization with acquisition timing of the biological information; identifying person identification information for identifying the first person based on the image; storing, in a storage unit, the identified person identification information, the acquired biological information, and the acquired image in association with one another; acquiring the person identification information on the first person selected by a second person different from the first person, and state information indicating a state of the first person selected by the second person; and extracting, from the storage unit, the image associated with the acquired person identification information and the biological information corresponding to the acquired state information.
    Type: Grant
    Filed: February 25, 2019
    Date of Patent: March 23, 2021
    Assignee: PANASONIC INTELLECTUAL PROPERTY CORPORATION OF AMERICA
    Inventors: Masaru Yamaoka, Mikiko Matsuo