Object Tracking Patents (Class 348/169)
  • Patent number: 9911191
    Abstract: The purpose of the present invention is to provide a state estimation apparatus that appropriately estimates the internal state of an observation target by determining likelihoods from a plurality of observations. An observation obtaining unit of the state estimation system obtains, at given time intervals, a plurality of pieces of observation data from an observable event. The observation selecting unit selects a piece of observation data from the plurality of pieces of observation data obtained by the observation obtaining unit, based on posterior probability distribution data obtained at the preceding time t−1. The likelihood obtaining unit obtains likelihood data based on the observation data selected by the observation selecting unit and on predicted probability distribution data obtained through prediction processing using the posterior probability distribution data.
    Type: Grant
    Filed: October 13, 2015
    Date of Patent: March 6, 2018
    Assignees: MegaChips Corporation, KYUSHU INSTITUTE OF TECHNOLOGY
    Inventors: Norikazu Ikoma, Hiromu Hasegawa
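    Illustrative sketch (Python), not the patented implementation: a minimal bootstrap particle filter in which one of several observations is selected per step using the previous posterior, and likelihoods are computed from the selected observation and the predicted particles. The 1-D random-walk motion model, noise levels, and nearest-to-mean selection rule are assumptions for illustration only.
      import numpy as np

      rng = np.random.default_rng(0)

      def predict(particles, motion_std=0.5):
          # Prediction step: propagate the previous posterior through a random-walk model.
          return particles + rng.normal(0.0, motion_std, size=particles.shape)

      def select_observation(particles, observations):
          # Pick the observation closest to the predicted mean (a stand-in for the
          # patent's selection "based on the posterior at time t-1").
          mean = particles.mean()
          return min(observations, key=lambda z: abs(z - mean))

      def update(particles, z, obs_std=1.0):
          # Likelihood of each predicted particle under the selected observation.
          w = np.exp(-0.5 * ((z - particles) / obs_std) ** 2)
          w /= w.sum()
          # Resample to form the new posterior.
          idx = rng.choice(len(particles), size=len(particles), p=w)
          return particles[idx]

      particles = rng.normal(0.0, 1.0, size=500)   # posterior at time t-1
      for t in range(1, 6):
          truth = 0.3 * t
          observations = [truth + rng.normal(0, 0.2), truth + rng.normal(0, 2.0)]  # two sensors
          particles = predict(particles)
          z = select_observation(particles, observations)
          particles = update(particles, z)
          print(f"t={t} estimate={particles.mean():.2f} truth={truth:.2f}")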
  • Patent number: 9907147
    Abstract: The invention provides a light management information system for an outdoor lighting network system, having a plurality of outdoor light units each including at least one sensor type, where each of the light units communicates with at least one other light unit; at least one user input/output device in communication with one or more of said outdoor light units; a central management system in communication with the light units, said central management system sending control commands and/or information to one or more of said outdoor light units in response to received outdoor light unit status/sensor information from one or more of said outdoor light units or received user information requests from said user input/output device; and a resource server in communication with said central management system, wherein the central management system uses the light unit status/sensor information and resources from the resource server to provide information to the user input/output device and/or reconfigure one or more of the light units.
    Type: Grant
    Filed: March 12, 2014
    Date of Patent: February 27, 2018
    Inventors: Hongxin Chen, Dave Alberto Tavares Cavalcanti, Kiran Srinivas Challapali, Sanae Chraibi, Liang Jia, Andrew Ulrich Rutgers, Yong Yang, Michael Alex Van Hartskamp, Dzmitry Viktorovich Aliakseyeu, Hui Li, Qing Li, Fulong Ma, Jonathan David Mason, Berent Willem Meerbeek, John Brean Mills, Talmai Brandão De Oliveira, Daniel J. Piotrowski, Yuan Shu, Neveen Shlayan, Marcin Krzysztof Szczodrak, Yi Qiang Yu, Zhong Huang, Martin Elixmann, Qin Zhao, Xianneng Peng, Jianfeng Wang, Dan Jiang
  • Patent number: 9904371
    Abstract: A hand region is detected in a captured image, and for each part of the background area, a light source presence degree indicating the probability that a light source is present is determined according to the luminance or color of that part; on the basis of the light source presence degree, a region in which the captured image is affected by a light source is estimated, and if the captured image includes a region estimated to be affected by a light source, whether or not a dropout has occurred in the hand region in the captured image is decided; on the basis of the result of this decision, an action is determined. Gesture determinations can be made correctly even when the hand region in the captured image is affected by a light source at the time of gesture manipulation input.
    Type: Grant
    Filed: May 15, 2014
    Date of Patent: February 27, 2018
    Assignee: Mitsubishi Electric Corporation
    Inventors: Nobuhiko Yamagishi, Yudai Nakamura, Tomonori Fukuta
  • Patent number: 9898650
    Abstract: Provided are a system and a method for tracking a position based on multi sensors. The system according to the present invention includes: a space management server which divides a predetermined space into a predetermined number of spaces to generate a plurality of zones and manages information of each zone; and a zone management server which manages the information of the zone generated by the space management server and provides the space management server with zone session information of an object positioned in each zone and positional information of the corresponding object sensed by a position tracking sensor every time slot.
    Type: Grant
    Filed: September 15, 2016
    Date of Patent: February 20, 2018
    Assignee: Electronics and Telecommunications Research Institute
    Inventors: Young Ho Suh, Kang Woo Lee, Sang Keun Rhee
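    Illustrative sketch (Python): one way to split a rectangular space into a grid of zones and record, per zone and time slot, which tracked objects were last reported there. The grid layout and the session-information structure are assumptions; the patent does not specify them.
      from dataclasses import dataclass, field

      @dataclass
      class ZoneManager:
          # Hypothetical sketch: divide a width x height space into rows x cols zones
          # and keep, per zone, the objects last seen there (a stand-in for "zone session info").
          width: float
          height: float
          rows: int
          cols: int
          sessions: dict = field(default_factory=dict)

          def zone_of(self, x, y):
              r = min(int(y / self.height * self.rows), self.rows - 1)
              c = min(int(x / self.width * self.cols), self.cols - 1)
              return r * self.cols + c

          def report(self, object_id, x, y, t):
              zone = self.zone_of(x, y)
              self.sessions.setdefault(zone, {})[object_id] = (x, y, t)
              return zone

      zm = ZoneManager(width=100.0, height=50.0, rows=5, cols=10)
      print(zm.report("tag-42", x=37.2, y=12.9, t=0))   # zone index for this time slot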
  • Patent number: 9898557
    Abstract: A non-transitory computer-readable storage medium is disclosed. In an embodiment, the non-transitory computer-readable storage medium includes instructions that, when executed by a computer, cause the computer to perform steps involving receiving parameters, selecting pre-configured slices from a library of slices that satisfy the parameters, and placing the selected slices to generate a configuration variant in accordance with slice placement logic.
    Type: Grant
    Filed: February 27, 2015
    Date of Patent: February 20, 2018
    Assignee: ADITAZZ, INC.
    Inventors: Richard L. Sarao, Scott Ewart
  • Patent number: 9898677
    Abstract: In one embodiment, a method of determining and tracking moving objects in a video includes detecting and extracting regions from accepted frames of the video, matching parts by using the extracted parts of a current frame and matching each part from a previous frame to a region in the current frame, tracking the matched parts to form part tracks, and determining a set of path features for each tracked part path. The determined path features are used to classify each path as that of a mover or a static object. The method includes clustering the paths of movers, including grouping parts of movers that likely belong to a single object, in order to generate one or more single moving objects and track the moving objects. Also disclosed are a system to carry out the method and a non-transitory computer-readable medium whose instructions, when executed in a processing system, cause the method to be carried out.
    Type: Grant
    Filed: October 10, 2016
    Date of Patent: February 20, 2018
    Assignee: MotionDSP, Inc.
    Inventors: Nebojša Anđelković, Edin Mulalić, Nemanja Grujić, Saša Anđelković, Vuk Ilić, Milan Marković
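    Illustrative sketch (Python): simple path features (net displacement, path length, straightness) computed from one part track and a crude mover-versus-static rule. The specific features and thresholds are assumptions, not the classifier described in the patent.
      import numpy as np

      def path_features(track):
          # track: (T, 2) array of per-frame positions for one tracked part.
          track = np.asarray(track, dtype=float)
          steps = np.diff(track, axis=0)
          net = np.linalg.norm(track[-1] - track[0])          # net displacement
          length = np.linalg.norm(steps, axis=1).sum()        # total path length
          straightness = net / length if length > 0 else 0.0
          return np.array([net, length, straightness])

      def is_mover(track, min_net=5.0, min_straightness=0.5):
          # Crude stand-in for the patent's path classifier: a part is a "mover" if it
          # travels far enough and reasonably consistently in one direction.
          net, _, straightness = path_features(track)
          return net >= min_net and straightness >= min_straightness

      walker = [(x, 0.1 * x) for x in range(20)]          # drifts steadily
      flicker = [(10 + (-1) ** i, 5) for i in range(20)]  # oscillates in place
      print(is_mover(walker), is_mover(flicker))          # True False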
  • Patent number: 9891716
    Abstract: A method and system for performing gesture recognition of a vehicle occupant employing a time of flight (TOF) sensor and a computing system in a vehicle. An embodiment of the method of the invention includes the steps of receiving one or more raw frames from the TOF sensor, performing clustering to locate one or more body part clusters of the vehicle occupant, calculating the location of the tip of the hand of the vehicle occupant, determining whether the hand has performed a dynamic or a static gesture, retrieving a command corresponding to one of the determined static or dynamic gestures, and executing the command.
    Type: Grant
    Filed: September 29, 2015
    Date of Patent: February 13, 2018
    Assignee: Microsoft Technology Licensing, LLC
    Inventor: Tarek El Dokor
  • Patent number: 9892520
    Abstract: A method and system determines flows by first acquiring a video of the flows with a camera, wherein the flows are pedestrians in a scene, wherein the video includes a set of frames. Motion vectors are extracted from each frame in the set, and a data matrix is constructed from the motion vectors in the set of frames. A low rank Koopman operator is determined from the data matrix and a spectrum of the low rank Koopman operator is analyzed to determine a set of Koopman modes. Then, the frames are segmented into independent flows according to a clustering of the Koopman modes.
    Type: Grant
    Filed: January 31, 2017
    Date of Patent: February 13, 2018
    Assignee: Mitsubishi Electric Research Laboratories, Inc.
    Inventors: Hassan Mansour, Caglayan Dicle, Dong Tian, Mouhacine Benosman, Anthony Vetro
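    Illustrative sketch (Python): a standard low-rank dynamic mode decomposition (DMD), one common way to approximate a low-rank Koopman operator from a data matrix whose columns are per-frame motion vectors; clustering the resulting modes (not shown) would then separate independent flows. The toy random data and the rank are placeholders.
      import numpy as np

      def koopman_modes(X, rank=3):
          # X: data matrix whose columns are flattened motion-vector fields per frame.
          # Exact DMD as a low-rank approximation of the Koopman operator.
          X1, X2 = X[:, :-1], X[:, 1:]
          U, s, Vh = np.linalg.svd(X1, full_matrices=False)
          U, s, Vh = U[:, :rank], s[:rank], Vh[:rank]
          A_tilde = U.conj().T @ X2 @ Vh.conj().T / s        # rank x rank operator
          eigvals, W = np.linalg.eig(A_tilde)
          modes = X2 @ Vh.conj().T / s @ W                   # Koopman/DMD modes
          return eigvals, modes

      rng = np.random.default_rng(1)
      X = rng.normal(size=(200, 40))        # toy stand-in for motion-vector data
      eigvals, modes = koopman_modes(X, rank=3)
      print(eigvals.shape, modes.shape)     # (3,) (200, 3)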
  • Patent number: 9886775
    Abstract: The disclosure relates to a method for detection of the horizontal and gravity directions of an image, the method comprising: selecting equidistant sampling points in an image at an interval of the radius of the sampling circle of an attention focus detector; placing the center of the sampling circle of the attention focus detector on each of the sampling points, and using the attention focus detector to acquire attention focus coordinates and the corresponding significant orientation angle, and all the attention focus coordinates and the corresponding significant orientation angles constitute a set ?p; using an orientation perceptron to determine a local orientation angle and a weight at the attention focus according to the gray image information, and generating a local orientation function; obtaining a sum of each of the local orientation functions as an image direction function; obtaining a function MCGCS(?), and further obtaining the horizontal and gravity identification angles.
    Type: Grant
    Filed: April 28, 2014
    Date of Patent: February 6, 2018
    Assignee: Institute of Automation Chinese Academy of Sciences
    Inventors: Zhiqiang Cao, Xilong Liu, Chao Zhou, Min Tan, Kun Ai
  • Patent number: 9888174
    Abstract: An omnidirectional camera is presented. The camera comprises: at least one lens coupled to an image sensor, a controller coupled to the image sensor, and a movement detection element coupled to the controller. The movement detection element is configured to detect a speed and direction of movement of the camera, the camera is configured to capture images via the at least one lens coupled to the image sensor, and the controller is configured to select a digital viewpoint in the captured images. A central point of the digital viewpoint is selected on the basis of direction of movement, and a field of view of the digital viewpoint is based on the speed of movement. A system and method are also presented.
    Type: Grant
    Filed: October 15, 2015
    Date of Patent: February 6, 2018
    Assignee: Microsoft Technology Licensing, LLC
    Inventors: Esa Kankaanpää, Shahil Soni
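    Illustrative sketch (Python): one possible mapping from detected movement to a digital viewpoint, where the heading sets the viewpoint centre and the speed sets the field of view. The direction of the speed-to-FOV mapping (faster movement gives a narrower view) and all constants are assumptions; the abstract only states that the FOV is based on the speed.
      import math

      def digital_viewpoint(vx, vy, min_fov=60.0, max_fov=120.0, max_speed=10.0):
          # Centre the virtual view on the heading, and narrow the field of view
          # as speed increases (the direction of this mapping is an assumption).
          heading_deg = math.degrees(math.atan2(vy, vx))
          speed = min(math.hypot(vx, vy), max_speed)
          fov = max_fov - (max_fov - min_fov) * speed / max_speed
          return heading_deg, fov

      print(digital_viewpoint(1.0, 0.0))   # looking along +x, wide view at low speed
      print(digital_viewpoint(8.0, 8.0))   # looking at 45 degrees, narrower view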
  • Patent number: 9880395
    Abstract: Movement of a display device is detected, and an image is displayed in stereoscopic display or planar display depending on the detected movement.
    Type: Grant
    Filed: August 3, 2015
    Date of Patent: January 30, 2018
    Assignees: NEC CORPORATION, Tianma Japan, Ltd.
    Inventors: Tetsusi Sato, Koji Shigemura, Masahiro Serizawa
  • Patent number: 9881380
    Abstract: Techniques and systems are described for performing video segmentation using fully connected object proposals. For example, a number of object proposals for a video sequence are generated. A pruning step can be performed to retain high quality proposals that have sufficient discriminative power. A classifier can be used to provide a rough classification and subsampling of the data to reduce the size of the proposal space, while preserving a large pool of candidate proposals. A final labeling of the candidate proposals can then be determined, such as a foreground or background designation for each object proposal, by solving for a posteriori probability of a fully connected conditional random field, over which an energy function can be defined and minimized.
    Type: Grant
    Filed: February 16, 2016
    Date of Patent: January 30, 2018
    Inventors: Alexander Sorkine Hornung, Federico Perazzi, Oliver Wang
  • Patent number: 9875431
    Abstract: To provide a training data generating device capable of easily generating a large amount of training data used for machine-learning a dictionary of a discriminator for recognizing a crowd state. A person state determination unit 72 determines a person state of a crowd according to a people state control designation as designation information on a person state of people and an individual person state control designation as designation information on a state of an individual person in the people. A crowd state image synthesis unit 73 generates a crowd state image as an image in which a person image corresponding to the person state determined by the person state determination unit 72 is synthesized with an image at a predetermined size acquired by a background extraction unit 71, and specifies a training label for the crowd state image.
    Type: Grant
    Filed: May 21, 2014
    Date of Patent: January 23, 2018
    Assignee: NEC Corporation
    Inventor: Hiroo Ikeda
  • Patent number: 9875411
    Abstract: The present disclosure relates to a video monitoring method and a video monitoring system based on a depth video. The video monitoring method comprises: obtaining video data collected by a video collecting module; determining an object as a monitored target based on pre-set scene information and the video data; extracting characteristic information of the object; and determining predictive information of the object based on the characteristic information, wherein the video data comprises video data including the depth information.
    Type: Grant
    Filed: December 29, 2015
    Date of Patent: January 23, 2018
    Inventors: Gang Yu, Chao Li, Qizheng He, Qi Yin
  • Patent number: 9874636
    Abstract: Some demonstrative embodiments include devices, systems and/or methods of orientation estimation of a mobile device. For example, a mobile device may include an orientation estimator to detect a pattern in at least one image captured by the mobile device, and based on one or more geometric elements of the detected pattern, to determine one or more orientation parameters related to an orientation of the mobile device.
    Type: Grant
    Filed: June 8, 2012
    Date of Patent: January 23, 2018
    Inventors: Uri Schatzberg, Yuval Amizur, Leor Banin
  • Patent number: 9870619
    Abstract: An operation method of an electronic device is provided. The method includes detecting motion objects in each of two or more continuously captured images, determining whether the detected motion objects are synthesizable for use as wallpaper, and providing feedback according to whether the detected motion objects are synthesizable for use as wallpaper.
    Type: Grant
    Filed: March 13, 2014
    Date of Patent: January 16, 2018
    Assignee: Samsung Electronics Co., Ltd.
    Inventors: Jae-Sik Sohn, Ki-Huk Lee, Young-Kwon Yoon, Woo-Yong Lee
  • Patent number: 9870513
    Abstract: An object-recognition method for a vehicle's driver assistance system involves obtaining a 2D image and a 3D image, forming a 3D apparent object from the 3D image, detecting one or more detected objects in a portion of the 2D image corresponding to the apparent object from the 3D image, classifying the one or more detected objects into at least one pre-defined object class, and dividing the apparent object into at least two 3D objects when the apparent object does not correspond with at least one class-specific property of the determined at least one object class.
    Type: Grant
    Filed: September 4, 2014
    Date of Patent: January 16, 2018
    Assignee: Conti Temic microelectronic GmbH
    Inventors: Robert Thiel, Alexander Bachmann
  • Patent number: 9870704
    Abstract: A method and system for camera calibration comprises configuring a calibration target comprising calibration reflectors on a test vehicle. Video of a test scene is collected. Next the test vehicle is identified as it enters the test scene and recorded as it passes through the test scene. The position of the calibration target in each frame of the video is determined and the corresponding individual position of each calibration reflector for each frame of the recorded frames is used to construct a camera calibration map to calibrate the video camera.
    Type: Grant
    Filed: June 20, 2012
    Date of Patent: January 16, 2018
    Assignee: Conduent Business Services, LLC
    Inventors: Martin Edward Hoover, David Martin Todd Jackson, Wencheng Wu, Vladimir Kozitsky
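    Illustrative sketch (Python/OpenCV): once the calibration reflectors have been located in the video frames, their pixel positions can be paired with their known positions on the target to fit a mapping such as a ground-plane homography. The point values below are made-up placeholders, and a homography is only one possible form of the "camera calibration map".
      import cv2
      import numpy as np

      # Image positions of the calibration reflectors as the test vehicle crosses the
      # scene (pixels), paired with their known road-plane coordinates (metres).
      image_pts = np.float32([[102, 410], [388, 402], [131, 295], [352, 290]])
      world_pts = np.float32([[0.0, 0.0], [3.0, 0.0], [0.0, 10.0], [3.0, 10.0]])

      H, _ = cv2.findHomography(image_pts, world_pts)   # the "camera calibration map"

      def image_to_road(u, v):
          p = H @ np.array([u, v, 1.0])
          return p[:2] / p[2]

      print(image_to_road(240, 350))    # pixel -> road-plane metres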
  • Patent number: 9852323
    Abstract: The present invention provides a facial image display apparatus that can display moving images concentrated on the face when images of people's faces are displayed. A facial image display apparatus is provided wherein a facial area detecting unit (21) detects facial areas in which faces are displayed from within a target image for displaying a plurality of faces; a dynamic extraction area creating unit (22) creates, based on the facial areas detected by the facial area detecting means, a dynamic extraction area of which at least one of position and surface area varies over time in the target image; and a moving image output unit (27) sequentially extracts images in the dynamic extraction area and outputs the extracted images as a moving image.
    Type: Grant
    Filed: November 6, 2015
    Date of Patent: December 26, 2017
    Inventors: Munetaka Tsuda, Shuji Hiramatsu, Akira Suzuki
  • Patent number: 9842276
    Abstract: A system and method for analyzing a personalized characteristic are provided. The system includes an analysis range calculator configured to calculate a plurality of analysis ranges having different analysis times from positioning data according to a lapse of time of an analysis target; an image analyzer configured to identify one or more objects from the image data corresponding to each of the analysis ranges, and analyze one or more visual characteristics from each of the identified objects; and a characteristic analyzer configured to generate personalized characteristic information of the analysis target using a characteristic analysis result of each of the analysis ranges.
    Type: Grant
    Filed: December 28, 2015
    Date of Patent: December 12, 2017
    Assignee: SAMSUNG SDS CO., LTD.
    Inventors: Min-Woo Jung, Hyun-Jung Soh, Hye-Ran Lee, Tae-Hwan Jeong, Hyun-Chul Kim
  • Patent number: 9842392
    Abstract: An input unit (20) obtains a sequence of image frames over time. A segmentation unit (22) segments image frames of the sequence of image frames. A tracking unit (24) tracks segments of the segmented image frame over time in the sequence of image frames. A clustering unit (26) clusters the tracked segments to obtain clusters representing skin of a subject by use of one or more image features of the tracked segments.
    Type: Grant
    Filed: December 2, 2015
    Date of Patent: December 12, 2017
    Inventor: Gerard De Haan
  • Patent number: 9836846
    Abstract: The present invention relates to image analysis. In particular, but not limited to, the invention relates to estimating 3D facial geometry. First, images of an object, typically a face, are acquired (205). Then a first three-dimensional (3D) geometry of the object is estimated (215) based upon at least the first image. A calibration image of the object and a calibration rig (120) is acquired (405). A scaling factor for the first 3D geometry is determined (420) based upon the calibration image, a known size of the calibration rig (120) and a predetermined spatial configuration. Finally, the first 3D geometry is scaled using the scaling factor. The invention also concerns a system and software.
    Type: Grant
    Filed: June 19, 2014
    Date of Patent: December 5, 2017
    Inventor: Simon Lucey
  • Patent number: 9838604
    Abstract: A method, system, and computer program product for stabilizing frames, the method comprising: receiving a frame sequence comprising three or more frames, including a current frame; determining salient feature points within the frames; matching the salient feature points between the frames; dropping salient feature points associated with advancing objects; dropping salient feature points associated with objects moving in shaking movements; computing a transformation between pairs of consecutive frames from amongst the at least three frames, based upon non-dropped salient feature points, thereby obtaining a multiplicity of transformations; determining a center position for the frames based upon the multiplicity of transformations; determining a stabilizing transformation from a current frame to the center position; and applying the stabilizing transformation to the current frame to obtain a stabilized frame.
    Type: Grant
    Filed: October 15, 2015
    Date of Patent: December 5, 2017
    Inventors: Markus Schlattmann, Rohit Mande
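    Illustrative sketch (Python/OpenCV): a bare-bones version of this kind of pipeline using corner tracking, RANSAC to discard points on moving objects, a cumulative translation trajectory whose mean serves as the "center position", and a stabilizing warp of each frame toward it. Rotation handling, the dedicated shaking-point filter, and all parameters are simplifications or assumptions beyond the abstract.
      import cv2
      import numpy as np

      def frame_to_frame_transform(prev_gray, curr_gray):
          # Track salient corners from the previous frame into the current one and fit
          # a similarity transform; RANSAC drops points on independently moving objects
          # (a crude stand-in for the "drop advancing/shaking points" steps).
          pts = cv2.goodFeaturesToTrack(prev_gray, maxCorners=300, qualityLevel=0.01, minDistance=10)
          nxt, status, _ = cv2.calcOpticalFlowPyrLK(prev_gray, curr_gray, pts, None)
          good_prev = pts[status.flatten() == 1]
          good_next = nxt[status.flatten() == 1]
          M, _ = cv2.estimateAffinePartial2D(good_prev, good_next, method=cv2.RANSAC)
          return M   # 2x3 matrix mapping previous-frame coords to current-frame coords

      def stabilize(frames):
          # Accumulate per-frame translations, take their mean as the center position,
          # and warp each frame toward that center.
          grays = [cv2.cvtColor(f, cv2.COLOR_BGR2GRAY) for f in frames]
          transforms = [frame_to_frame_transform(grays[i - 1], grays[i]) for i in range(1, len(grays))]
          dx = np.cumsum([m[0, 2] for m in transforms])
          dy = np.cumsum([m[1, 2] for m in transforms])
          cx, cy = dx.mean(), dy.mean()
          out = [frames[0]]
          h, w = frames[0].shape[:2]
          for i, f in enumerate(frames[1:]):
              S = np.float32([[1, 0, cx - dx[i]], [0, 1, cy - dy[i]]])
              out.append(cv2.warpAffine(f, S, (w, h)))
          return out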
  • Patent number: 9836852
    Abstract: A method includes receiving first data defining a first bounding box for a first image of a sequence of images. The first bounding box corresponds to a region of interest including a tracked object. The method also includes receiving object tracking data for a second image of the sequence of images, the object tracking data defining a second bounding box. The second bounding box corresponds to the region of interest including the tracked object in the second image. The method further includes determining a similarity metric for first pixels within the first bounding box and search pixels within each of multiple search bounding boxes. Search coordinates of each of the search bounding boxes correspond to second coordinates of the second bounding box shifted in one or more directions. The method also includes determining a modified second bounding box based on the similarity metric.
    Type: Grant
    Filed: December 11, 2014
    Date of Patent: December 5, 2017
    Assignee: QUALCOMM Incorporated
    Inventors: Christopher Li, Xin Zhong, Dashan Gao, Yingyong Qi, Kai Guo
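    Illustrative sketch (Python): comparing the pixels inside the first frame's bounding box against shifted copies of the tracker's box in the second frame, using negative mean absolute difference as the similarity metric and keeping the best shift. The metric, the +/-2 pixel search range, and the (x, y, w, h) box format are assumptions.
      import numpy as np

      def crop(img, box):
          x, y, w, h = box
          return img[y:y + h, x:x + w]

      def refine_box(first_img, first_box, second_img, second_box, shifts=(-2, -1, 0, 1, 2)):
          # Compare the pixels inside the first box against shifted copies of the
          # tracker's box in the second frame; keep the best-matching shift.
          ref = crop(first_img, first_box).astype(float)
          x, y, w, h = second_box
          best, best_score = second_box, -np.inf
          for dx in shifts:
              for dy in shifts:
                  cx, cy = x + dx, y + dy
                  if cx < 0 or cy < 0:
                      continue
                  patch = crop(second_img, (cx, cy, w, h)).astype(float)
                  if patch.shape != ref.shape:
                      continue
                  score = -np.abs(ref - patch).mean()   # negative mean absolute difference
                  if score > best_score:
                      best, best_score = (cx, cy, w, h), score
          return best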
  • Patent number: 9830395
    Abstract: A system and method for spatial data processing are described. Path bundle data packages from a viewing device are accessed and processed. The path bundle data packages identify a user interaction of the viewing device with an augmented reality content relative to and based on a physical object captured by the viewing device. The path bundle data packages are generated based on the sensor data using a data model comprising a data header and a data payload. The data header comprises a contextual header having data identifying the viewing device and a user of the viewing device. A path header having data identifies the path of the interaction with the augmented reality content. A sensor header having data identifies the plurality of sensors. The data payload comprises dynamically sized sampling data from the sensor data. The path bundle data packages are normalized and aggregated. Analytics computation is performed on the normalized and aggregated path bundle data packages.
    Type: Grant
    Filed: August 15, 2014
    Date of Patent: November 28, 2017
    Assignee: DAQRI, LLC
    Inventors: Brian Mullins, Matthew Kammerait, Frank Chester Irving, Jr.
  • Patent number: 9826202
    Abstract: The apparatus comprises a camera (10) with a digital sensor read by a mechanism of the rolling shutter type delivering video data (Scam) line by line. An exposure control circuit (22) adjusts dynamically the exposure time (texp) as a function of the level of illumination of the scene that is captured. A gyrometer unit (12) delivers a gyrometer signal (Sgyro) representative of the instantaneous variations of attitude (?, ?, ?) of the camera, and a processing circuit (18) that receives the video data (Scam) and the gyrometer signal (Sgyro) delivers as an output video data processed and corrected for artefacts introduced by vibrations specific to the apparatus. An anti-wobble filter (24) dynamically modifies the gain of the gyrometer signal as a function of the exposure time (texp), so as to reduce the gain of the filter when the exposure time increases, and vice versa.
    Type: Grant
    Filed: September 15, 2015
    Date of Patent: November 21, 2017
    Assignee: Parrot Drones
    Inventors: Pierre Eline, Eng Hong Sron
  • Patent number: 9818023
    Abstract: A method for face detection includes capturing a depth map and an image of a scene and selecting one or more locations in the image to test for presence of human faces. At each selected location, a respective face detection window is defined, having a size that is scaled according to a depth coordinate of the location that is indicated by the depth map. A part of the image that is contained within each face detection window is processed to determine whether the face detection window contains a human face. Similar methods may also be applied in identifying other object types.
    Type: Grant
    Filed: January 26, 2017
    Date of Patent: November 14, 2017
    Assignee: APPLE INC.
    Inventors: Yael Shor, Tomer Yanir, Yaniv Shaked
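    Illustrative sketch (Python): the core scaling idea under a pinhole-camera assumption, where the detection window at a given location is sized so that a typical real-world face width projects to the expected number of pixels at that depth. The focal length and face-width values are placeholders.
      def window_size_px(depth_m, focal_px=600.0, face_width_m=0.16):
          # Pinhole model: projected width in pixels = focal_px * width / depth.
          return max(int(round(focal_px * face_width_m / depth_m)), 1)

      for d in (0.5, 1.0, 2.0, 4.0):
          print(d, window_size_px(d))   # nearer locations get larger windows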
  • Patent number: 9811736
    Abstract: An information processing method includes: acquiring setting pseudo multipole information on a pseudo multipole (S10), the setting pseudo multipole information being set such that color information on poles p1 and p2 is related to color information on an object 2, the poles corresponding to a plurality of points in an image of a single predetermined frame 20; specifying a position of the pseudo multipole in an initial frame and seeking an image of a single frame in the video for the pseudo multipole (S14), the pseudo multipole having poles whose colors conform to colors in the color information on the poles in the acquired setting pseudo multipole information, a distance between the poles of the pseudo multipole being equal to or less than a predetermined distance; specifying a position of the found pseudo multipole in the image of the single frame of the video (S17); and tracking the object on the basis of the position of the pseudo multipole found from the image of the single frame and the position of the pseudo multipole specified in the initial frame.
    Type: Grant
    Filed: June 28, 2013
    Date of Patent: November 7, 2017
    Assignee: Rakuten, Inc.
    Inventor: Hiromi Hirano
  • Patent number: 9813693
    Abstract: Images captured at short distances, such as “selfies,” can be improved by addressing magnification and perspective effects present in the images. Distance information, such as a three-dimensional depth map, can be determined for an object using stereoscopic imaging or another distance measuring approach. Based on a magnification function determined for a camera lens, magnification levels for different regions of the captured images can be determined. At least some of these regions then can be adjusted or transformed in order to provide more consistent magnification levels across those regions, thereby reducing anamorphic effects. Where appropriate, gaps in the image can also be filled to enhance the image. At least some control over the amount of adjustment may be provided to users for aesthetic control.
    Type: Grant
    Filed: June 27, 2014
    Date of Patent: November 7, 2017
    Inventor: Leo Benedict Baldwin
  • Patent number: 9800834
    Abstract: A method is disclosed for detecting interaction between two or more participants in a meeting, which includes capturing at least one three-dimensional stream of data on the two or more participants; extracting a time-series of skeletal data from the at least one three-dimensional stream of data on the two or more participants; classifying the time-series of skeletal data for each of the two or more participants based on a plurality of body position classifiers; and calculating an engagement score for each of the two or more participants. In addition, a method is disclosed for improving a group interaction in a meeting, which includes calculating, for each of the two or more participants, an individual engagement state based on attitudes of the participant, wherein the individual engagement state is an engagement state of the participant to the meeting including an engaged state and a disengaged state.
    Type: Grant
    Filed: October 31, 2016
    Date of Patent: October 24, 2017
    Inventors: Maria Frank, Ghassem Tofighi, Nandita Nayak, Haisong Gu
  • Patent number: 9791264
    Abstract: A method to estimate a set of camera locations, in clockwise or counter-clockwise order, according to the videos captured by these cameras is described herein. In some embodiments, the cameras are assumed to be fixed, with no or very mild tilting angles and no rolling angles (the horizon is horizontal in each camera image). The difference of orientation (rolling angle) between each neighboring (closest) camera pair can be up to 45 degrees. Each camera is assumed to have overlapped views with at least one other camera. Each camera has one right neighboring camera and one left neighboring camera, except the first and the last cameras, which have only one neighboring camera at one side. The locations of the cameras can then be expressed as a unique counter-clockwise list. The input videos are assumed to be synchronized in time.
    Type: Grant
    Filed: February 4, 2015
    Date of Patent: October 17, 2017
    Inventors: Cheng-Yi Liu, Alexander Berestov
  • Patent number: 9789820
    Abstract: An object detection apparatus that detects an object in a vicinity of a vehicle includes: (a) an image processing circuit configured to: (i) derive vectors representing movement of feature points in captured images acquired periodically by a camera that captures images of the vicinity of the vehicle; and (ii) detect the object based on the vectors; and (b) a controller configured to (i) acquire a velocity of the vehicle; and (ii) set a parameter that affects a number of the feature points based on the velocity of the vehicle.
    Type: Grant
    Filed: October 19, 2015
    Date of Patent: October 17, 2017
    Inventor: Tetsuo Yamamoto
  • Patent number: 9785857
    Abstract: A positioning server is connected to a collection of access points, base stations, NFC stations, and image or video cameras and the collected data is used for positioning objects. A plurality of electronic devices are paired with an object by tracking the position of the object based on imaging and the position of electronic devices based on RF signals in vicinity of the object. Once a device is paired with an object, the propagation channel profile measured through the electronic device is used to develop and tune a database of channel profiles versus location. This database is used based on signature/profile matching and correlation for positioning devices and objects that do not have pairing or have poor image-based positioning accuracy or reliability. When a device is detected that cannot be paired with any object, or a device that is unpaired from a previously associated object, a theft or loss alert is generated.
    Type: Grant
    Filed: August 30, 2016
    Date of Patent: October 10, 2017
    Assignee: GOLBA LLC
    Inventor: Mehran Moshfeghi
  • Patent number: 9785744
    Abstract: The system and method disclosed herein provides an integrated and automated workflow, sensor, and reasoning system that automatically detects breaches in protocols, appropriately alarms and records these breaches, facilitates staff adoption of protocol adherence, and ultimately enables the study of protocols for care comparative effectiveness. The system provides real-time alerts to medical personnel in the actual processes of care, thereby reducing the number of negative patient events and ultimately improving staff behavior with respect to protocol adherence.
    Type: Grant
    Filed: September 13, 2011
    Date of Patent: October 10, 2017
    Inventors: Christopher Donald Johnson, Peter Henry Tu, Piero Patrone Bonissone, John Michael Lizzi, Jr., Kunter Seref Akbay, Ting Yu, Corey Nicholas Bufi, Viswanath Avasarala, Naresh Sundaram Iyer, Yi Yao, Kedar Anil Patwardhan, Dashan Gao
  • Patent number: 9785836
    Abstract: A mobile platform visually detects and/or tracks a target that includes a dynamically changing portion, or otherwise undesirable portion, using a feature dataset for the target that excludes the undesirable portion. The feature dataset is created by providing an image of the target and identifying the undesirable portion of the target. The identification of the undesirable portion may be automatic or by user selection. An image mask is generated for the undesirable portion. The image mask is used to exclude the undesirable portion in the creation of the feature dataset for the target. For example, the image mask may be overlaid on the image and features are extracted only from unmasked areas of the image of the target. Alternatively, features may be extracted from all areas of the image and the image mask used to remove features extracted from the undesirable portion.
    Type: Grant
    Filed: December 11, 2015
    Date of Patent: October 10, 2017
    Assignee: QUALCOMM Incorporated
    Inventors: Daniel Wagner, Zsolt Szabolcs Szalavari
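    Illustrative sketch (Python/OpenCV): ORB feature extraction with an image mask, so that no features are taken from a region marked as dynamic or otherwise undesirable, mirroring the "extract only from unmasked areas" variant in the abstract. The synthetic target image and the masked rectangle are placeholders.
      import cv2
      import numpy as np

      # Synthetic stand-in for the target image; in practice this would be a photo of
      # the tracking target. The rectangle below marks a hypothetical dynamic region
      # (e.g. a screen whose contents change) that should contribute no features.
      target = np.random.default_rng(0).integers(0, 255, (240, 320), dtype=np.uint8)
      mask = np.full(target.shape, 255, dtype=np.uint8)
      mask[50:150, 80:240] = 0                      # excluded ("masked") region

      orb = cv2.ORB_create(nfeatures=500)
      keypoints, descriptors = orb.detectAndCompute(target, mask)
      print(len(keypoints), "features extracted, all outside the masked region")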
  • Patent number: 9783320
    Abstract: A collision avoidance system for an airplane under tow may include a sensing device configured to capture image data of at least a portion of the airplane and an object while the airplane is being towed. The sensing device may be located remotely to both the airplane and the object. Positions of two or more features of the airplane may be determined based on the image data. A bounding box encompassing the airplane may be generated based, at least in part, on the positions of the two or more features. Additionally, based on a comparison of the position of an object relative to the bounding box, it may be determined whether the object is within a predetermined distance from the airplane.
    Type: Grant
    Filed: September 7, 2016
    Date of Patent: October 10, 2017
    Assignee: DM3 AVIATION LLC
    Inventors: Megan D. Barnes, Michael W. Delk, Richard L. Tutwiler, John R. Durkin, David E. Barnes, Jr.
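    Illustrative sketch (Python): building an axis-aligned bounding box from the positions of detected airplane features and flagging an object that comes within a chosen distance of the box. A real system would work in calibrated ground coordinates; the example points, the threshold, and the 2-D simplification are assumptions.
      import numpy as np

      def bounding_box(points):
          # Axis-aligned box enclosing the detected airplane features.
          pts = np.asarray(points, dtype=float)
          return pts.min(axis=0), pts.max(axis=0)

      def distance_to_box(point, box):
          lo, hi = box
          # Distance from the object to the box (0 if the object is inside it).
          d = np.maximum(np.maximum(lo - point, point - hi), 0.0)
          return float(np.linalg.norm(d))

      features = [(120, 340), (410, 330), (265, 60)]     # e.g. wingtips and nose
      box = bounding_box(features)
      obstacle = np.array([430.0, 300.0])
      print("alert:", distance_to_box(obstacle, box) < 50.0)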
  • Patent number: 9781363
    Abstract: A multifunctional sky camera system and techniques for the use thereof for total sky imaging and spectral irradiance/radiance measurement are provided. In one aspect, a sky camera system is provided. The sky camera system includes an objective lens having a field of view of greater than about 170 degrees; a spatial light modulator at an image plane of the objective lens, wherein the spatial light modulator is configured to attenuate light from objects in images captured by the objective lens; a semiconductor image sensor; and one or more relay lenses configured to project the images from the spatial light modulator to the semiconductor image sensor. Techniques for use of the one or more of the sky camera systems for optical flow based cloud tracking and three-dimensional cloud analysis are also provided.
    Type: Grant
    Filed: September 19, 2013
    Date of Patent: October 3, 2017
    Assignee: International Business Machines Corporation
    Inventors: Hendrik F. Hamann, Siyuan Lu
  • Patent number: 9778472
    Abstract: A stereoscopic display device is provided. The stereoscopic display device comprises: a display panel (100), comprising a plurality of first display units (101) and a plurality of second display units (102) which are alternately arranged; and a grating (200) positioned on a light exiting side of the display panel (100) and including a plurality of light-transmitting regions (a) and a plurality of light-shielding regions (b), wherein the stereoscopic display device comprises a lens (300) with a light divergence action at a position corresponding to each of the light-transmitting regions (a) of the grating (200). Therefore, when the stereoscopic display device is viewed at a short distance, a mechanical performance of the stereoscopic display device is improved.
    Type: Grant
    Filed: December 17, 2014
    Date of Patent: October 3, 2017
    Assignee: BOE Technology Group Co., Ltd.
    Inventor: Wei Wei
  • Patent number: 9773317
    Abstract: Provided are a pedestrian tracking and counting method and device for a near-front top-view monitoring video. The method includes: acquiring a video image under the current monitoring scene; comparing the acquired video image with a background image; when it is determined that the video image is a foreground image, segmenting and combining each blob in the foreground image to acquire target blobs each representing an individual pedestrian; and tracking and counting according to the center-of-mass coordinate of each target blob in a detection area to acquire the number of pedestrians under the current monitoring scene. Thus the accuracy of the counting result can be improved.
    Type: Grant
    Filed: September 3, 2013
    Date of Patent: September 26, 2017
    Inventors: Ping Lu, Jian Lu, Gang Zeng
  • Patent number: 9767378
    Abstract: Various aspects of a method and system to track one or more objects in a video stream are disclosed herein. In accordance with an embodiment, the method includes computation of a first confidence score of a first geometrical shape that encompasses at least a portion of an object in a first image frame of the video stream. The first geometrical shape is utilized to track the object in the video stream. The first geometrical shape is split into a plurality of second geometrical shapes. The split of the first geometrical shape is based on a comparison of the computed first confidence score with a pre-defined threshold score.
    Type: Grant
    Filed: August 31, 2015
    Date of Patent: September 19, 2017
    Inventors: Houman Rastgar, Alexander Berestov
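    Illustrative sketch (Python): a recursive version of the split idea, where a tracking box whose confidence score falls below a pre-defined threshold is divided into smaller boxes (here, quadrants) and small low-confidence fragments are dropped. The quadrant split, the recursion, and the toy confidence function go beyond what the abstract states and are assumptions.
      def track_shapes(shape, confidence_fn, threshold=0.6, min_size=8):
          # If the box's confidence is high enough, keep it; otherwise split it into
          # quadrants and recurse, dropping boxes that stay low-confidence at small sizes.
          x, y, w, h = shape
          if confidence_fn(shape) >= threshold:
              return [shape]
          if w < 2 * min_size or h < 2 * min_size:
              return []
          hw, hh = w // 2, h // 2
          children = [(x, y, hw, hh), (x + hw, y, w - hw, hh),
                      (x, y + hh, hw, h - hh), (x + hw, y + hh, w - hw, h - hh)]
          kept = []
          for child in children:
              kept.extend(track_shapes(child, confidence_fn, threshold, min_size))
          return kept

      # Toy confidence: only boxes fully inside the left half still match the object.
      print(track_shapes((0, 0, 64, 64), lambda b: 0.9 if b[0] + b[2] <= 32 else 0.3))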
  • Patent number: 9763615
    Abstract: The present invention provides a device and a method for monitoring the bladder volume of a subject. A device for monitoring the bladder volume of a subject comprises a sensor to be attached to a region on the exterior surface of the abdomen of the subject, the region corresponding to the bladder of the subject, the sensor being configured to obtain a sensor signal indicating the bladder volume of the subject; a controlling unit configured to generate a control action signal if it determines, based on the sensor signal, that the change of the bladder volume of the subject exceeds a predetermined amount; an ultrasound probe to be attached to the subject and configured to emit, in response to the control signal, an ultrasonic signal toward the bladder of the subject and receive echo signals from the bladder of the subject; a deriving unit configured to derive the bladder volume of the subject from the received echo signals.
    Type: Grant
    Filed: January 28, 2015
    Date of Patent: September 19, 2017
    Inventor: Jingping Xu
  • Patent number: 9767336
    Abstract: Methods, computer program products and systems for providing video tracking. The method includes receiving a first signal from a radio frequency identification (RFID) tag. A location of the RFID tag is determined in response to the first signal. An image that includes the location of the RFID tag is recorded. The location of the RFID tag is marked on the image, resulting in a marked image.
    Type: Grant
    Filed: October 3, 2016
    Date of Patent: September 19, 2017
    Assignee: AT&T Intellectual Property I, L.P.
    Inventors: Barrett Kreiner, Jonathan L. Reeves
  • Patent number: 9762956
    Abstract: In one embodiment, a mobile device analyzes frames before and after a particular frame of a real-time video to identify one or more social network objects, and selects one or more frames before and after the particular frame based on social network information for further storage in the mobile device.
    Type: Grant
    Filed: April 4, 2013
    Date of Patent: September 12, 2017
    Assignee: Facebook, Inc.
    Inventors: Andrew Garrod Bosworth, David Harry Garcia, Oswald Soleio Cuervo
  • Patent number: 9752880
    Abstract: An object linking method including: detecting each first object based on a video image, detecting each second object based on each data measured by each inertial sensor, each first object or each second object having each first state or each second state that is one of states including a moving state and a stopping state, and linking, by a computer, each first object and each second object in one-to-one, based on each first change of each first state and each second change of each second state, wherein when a specified first object of at least one first object is not linked to any of the at least one second object, the specified first object is linked to a virtual second object that is added to the at least one second object.
    Type: Grant
    Filed: January 6, 2016
    Date of Patent: September 5, 2017
    Inventor: Bin Chen
  • Patent number: 9747516
    Abstract: Disclosed embodiments facilitate keypoint selection in part by assigning a similarity score to each candidate keypoint being considered for selection. The similarity score may be based on the maximum measured similarity of an image patch associated with a keypoint in relation to an image patch in a local image section in a region around the image patch. A subset of the candidate keypoints with the lowest similarity scores may be selected and used to detect and/or track objects in subsequent images and/or to determine camera pose.
    Type: Grant
    Filed: September 4, 2015
    Date of Patent: August 29, 2017
    Assignee: QUALCOMM Incorporated
    Inventors: Daniel Wagner, Kiyoung Kim
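    Illustrative sketch (Python): a similarity score for a candidate keypoint computed as the highest normalized correlation between its patch and nearby patches, with candidates ranked so that the least self-similar (most distinctive) keypoints are kept. Patch size, search radius, sampling stride, and the assumption that keypoints lie well inside the image are all placeholders.
      import numpy as np

      def similarity_score(gray, kp, patch=8, search=24, stride=4):
          # Highest normalized correlation between the keypoint's patch and other
          # patches in a local region around it; a low score means the patch is
          # locally distinctive (keypoints assumed to lie well inside the image).
          y, x = kp
          ref = gray[y - patch:y + patch, x - patch:x + patch].astype(float)
          ref = (ref - ref.mean()) / (ref.std() + 1e-6)
          best = -1.0
          for dy in range(-search, search + 1, stride):
              for dx in range(-search, search + 1, stride):
                  if max(abs(dy), abs(dx)) < patch:
                      continue                  # skip candidates centred inside the reference patch
                  cand = gray[y + dy - patch:y + dy + patch, x + dx - patch:x + dx + patch].astype(float)
                  if cand.shape != ref.shape:
                      continue
                  cand = (cand - cand.mean()) / (cand.std() + 1e-6)
                  best = max(best, float((ref * cand).mean()))
          return best

      def select_keypoints(gray, candidates, keep=100):
          # Keep the candidates with the lowest (least redundant) similarity scores.
          return sorted(candidates, key=lambda kp: similarity_score(gray, kp))[:keep]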
  • Patent number: 9750103
    Abstract: A method for a light bulb or fixture to emit light and measure ambient light. The method includes driving solid state light sources, such as LEDs, in the bulb with a cyclical signal to repeatedly turn the solid state light sources off and on, where the light sources are turned off and on at a rate sufficient for the bulb to appear on. The method also includes measuring ambient light via a light sensor in or on the bulb during at least some times when the light sources are off, and outputting a signal related to the measured ambient light. The ambient light level signal can be used to control when the light bulb is on and an intensity of light output by the bulb.
    Type: Grant
    Filed: May 1, 2015
    Date of Patent: August 29, 2017
    Inventors: Mark G. Mathews, James F. Poch, Kayla A. McGrath, Jake D. Swensen, Blake R. Shamla, James C. Medek
  • Patent number: 9747697
    Abstract: Systems and methods are provided for generating calibration information for a media projector. The method includes tracking at least the position of a tracking apparatus that can be positioned on a surface. The media projector shines a test spot on the surface, and the test spot corresponds to a known pixel coordinate of the media projector. The system includes a computing device in communication with at least two cameras, wherein each of the cameras is able to capture images of one or more light sources attached to an object. The computing device determines the object's position by comparing images of the light sources and generates an output comprising the real-world position of the object. This real-world position is mapped to the known pixel coordinate of the media projector.
    Type: Grant
    Filed: March 16, 2016
    Date of Patent: August 29, 2017
    Assignee: CAST Group of Companies Inc.
    Inventors: Gilray Densham, Justin Eichel
  • Patent number: 9742974
    Abstract: A method and a system for controlling camera orientation in training and exhibition systems. The method and system use a control algorithm to drive the orientation of a camera system at a determined reference velocity in order to place the aim-point of the camera system following a target aim-point in a local coordinate system. In some embodiments, the position and velocity of the target aim-point in the local coordinate system are determined based on dynamically filtered position and motion of a target object, where the position and motion of the target object are measured from a local positioning system.
    Type: Grant
    Filed: February 11, 2014
    Date of Patent: August 22, 2017
    Inventors: Xueming Tang, Hai Yu
  • Patent number: 9740933
    Abstract: An image monitoring system includes a recorder that records an image captured by a camera via a network. The system is controlled to display the present image captured by the camera or a past image recorded on the recorder. A moving object is detected from the image captured by the camera, the detector including a resolution converter for generating an image with a resolution lower than the resolution of the image captured by the camera. A moving object is detected from the image generated by the resolution converter and positional information on the detected moving object is output. The positional information of the detected moving object is merged with the image captured by the camera on the basis of the positional information.
    Type: Grant
    Filed: April 30, 2015
    Date of Patent: August 22, 2017
    Inventors: Masaki Demizu, Miki Murakami
  • Patent number: 9737757
    Abstract: A golf ball launch monitor is disclosed that may be used with an alignment stick. The monitor has a default alignment and an image sensor adapted to capture an image of the alignment stick and communicate that image to a processor. The processor is configured to perform the following steps: (a) detect a horizontal edge within the image representative of the alignment stick by detecting large contrast changes; (b) convert each edge to a vector that starts at the sensor's focal point and projects into space based on the sensor's calibration; (c) locate the plane formed by the vectors by applying standard outlier removal and best fit analysis; (d) determine the intersection of the plane and an earth tangential plane; and (e) calculate an azimuth alignment angle offset based on the line and the monitor's default alignment. The calculated azimuth alignment angle can then be used to adjust ball flight trajectory calculations.
    Type: Grant
    Filed: April 29, 2017
    Date of Patent: August 22, 2017
    Assignee: WAWGD, INC
    Inventor: Chris Kiraly
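    Illustrative sketch (Python): steps (d) and (e) of the listed procedure, assuming the best-fit plane through the edge vectors is already known. The intersection of that plane with an earth-tangential plane gives the stick line, and the signed angle between that line and the monitor's default aim (taken here as the +y axis) is the azimuth offset; the folding into [-90, 90] degrees and the example plane normal are assumptions.
      import math
      import numpy as np

      def azimuth_offset(plane_normal, default_dir=(0.0, 1.0, 0.0)):
          # Intersection of the stick plane with the earth-tangential plane z = 0:
          # its direction is the cross product of the two plane normals.
          ground_normal = np.array([0.0, 0.0, 1.0])
          line_dir = np.cross(np.asarray(plane_normal, float), ground_normal)
          line_dir /= np.linalg.norm(line_dir)
          # Signed angle between the intersection line and the monitor's default aim.
          d = np.asarray(default_dir, float)
          ang = math.degrees(math.atan2(line_dir[0] * d[1] - line_dir[1] * d[0],
                                        line_dir[0] * d[0] + line_dir[1] * d[1]))
          # An alignment stick has no inherent direction, so fold into [-90, 90].
          if ang > 90:
              ang -= 180
          if ang < -90:
              ang += 180
          return ang

      print(azimuth_offset((0.2, 0.0, 0.98)))   # stick lies along the default aim: offset ~0 degrees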