Object Tracking Patents (Class 348/169)
  • Patent number: 10026286
    Abstract: A network camera includes: a camera main unit including an image sensor unit and a movable portion having a motor for swiveling the image sensor unit; and a base unit, connected detachably to the camera main unit and fixed to a camera installation surface, the base unit having a nonvolatile memory for storing network operation information including network setting information unique to the network camera.
    Type: Grant
    Filed: December 29, 2016
    Date of Patent: July 17, 2018
    Assignee: SONY CORPORATION
    Inventor: Katsumi Oosawa
  • Patent number: 10019637
    Abstract: Disclosed are systems and methods for detecting moving objects. A computer-implemented method for detecting moving objects comprises obtaining a streaming video captured by a camera; extracting an input image sequence including a series of images from the streaming video; tracking point features and maintaining a set of point trajectories for at least one of the series of images; measuring a likelihood for each point trajectory to determine whether it belongs to a moving object using constraints from multi-view geometry; and determining a conditional random field (CRF) on an entire frame to obtain a moving object segmentation.
    Type: Grant
    Filed: November 11, 2016
    Date of Patent: July 10, 2018
    Assignee: Honda Motor Co., Ltd.
    Inventors: Sheng Chen, Alper Ayvaci
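The multi-view-geometry constraint mentioned in the abstract above can be illustrated with the classic epipolar residual: a point on a static scene element should land near its epipolar line in the next view, while an independently moving object produces a large residual. The sketch below is illustrative only (plain-Python, fundamental matrix `F` assumed given); it is not the patented CRF pipeline.

```python
import math

def epipolar_distance(F, x, x_prime):
    """Distance from point x_prime (view 2) to the epipolar line F @ x.

    F is a 3x3 fundamental matrix (nested lists); x and x_prime are
    homogeneous image points (x, y, 1). A static scene point should lie
    close to its epipolar line; a large residual hints at independent
    motion.
    """
    # l = F @ x : the epipolar line in the second view
    l = [sum(F[i][j] * x[j] for j in range(3)) for i in range(3)]
    num = abs(sum(l[i] * x_prime[i] for i in range(3)))
    return num / math.hypot(l[0], l[1])

def moving_score(F, trajectory):
    """Average epipolar residual over consecutive points of a trajectory;
    higher scores suggest the tracked point belongs to a moving object."""
    residuals = [epipolar_distance(F, a, b)
                 for a, b in zip(trajectory, trajectory[1:])]
    return sum(residuals) / len(residuals)
```

In the patent this per-trajectory likelihood would feed a CRF over the whole frame; here the score alone already separates points that violate the static-scene geometry.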
  • Patent number: 10021305
    Abstract: An angular velocity of an image capture apparatus and a motion vector between images are detected. An object velocity is computed based on a comparison between a change amount of the angular velocities and a change amount of the motion vectors. Then, by changing an optical axis based on the object velocity during exposure, a panning assistance function capable of dealing with various panning operations can be provided.
    Type: Grant
    Filed: April 7, 2016
    Date of Patent: July 10, 2018
    Assignee: Canon Kabushiki Kaisha
    Inventor: Hiroaki Kuchiki
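The panning-assist idea above can be sketched numerically: the gyro gives the camera's pan rate, and the subject's residual motion vector on the sensor measures how far the pan lags or leads the subject. The function below is a hypothetical reading of that comparison (variable names, units, and the sign convention are assumptions, not the patented computation).

```python
import math

def object_angular_velocity(cam_omega_deg_s, motion_vector_px,
                            focal_length_px, frame_interval_s):
    """Estimate the subject's angular velocity for panning assistance.

    The residual pixel shift between frames is converted to an angle via
    the focal length, turned into a rate over the frame interval, and
    added to the camera's own pan rate. The resulting subject rate is
    what an optical-axis shift could follow during exposure.
    """
    residual_angle = math.degrees(math.atan2(motion_vector_px,
                                             focal_length_px))
    return cam_omega_deg_s + residual_angle / frame_interval_s
```

When the pan tracks the subject perfectly the motion vector is zero and the estimate collapses to the gyro reading, matching the intuition in the abstract.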
  • Patent number: 10019635
    Abstract: An objective of the present invention is to obtain a vehicle-mounted recognition device capable of recognizing an object to be recognized in a shorter processing time than previously possible.
    Type: Grant
    Filed: January 7, 2015
    Date of Patent: July 10, 2018
    Assignee: Hitachi Automotive Systems, Ltd.
    Inventors: Hideaki Kido, Takeshi Nagasaki, Shinji Kakegawa
  • Patent number: 10007330
    Abstract: A sensor manager provides dynamic input fusion using thermal imaging to identify and segment a region of interest. Thermal overlay is used to focus heterogeneous sensors on regions of interest according to optimal sensor ranges and to reduce ambiguity of objects of interest. In one implementation, a thermal imaging sensor locates a region of interest that includes an object of interest within predetermined wavelengths. Based on the thermal imaging sensor input, the region on which each of the plurality of sensors is focused, and the parameters each sensor employs to capture data from a region of interest, are dynamically adjusted. The thermal imaging sensor input may be used during data pre-processing to dynamically eliminate or reduce unnecessary data and to dynamically focus data processing on sensor input corresponding to a region of interest.
    Type: Grant
    Filed: June 21, 2011
    Date of Patent: June 26, 2018
    Assignee: Microsoft Technology Licensing, LLC
    Inventors: Raghu Murthi, Steven Bathiche, John Allen Tardif, Nicholas Robert Baker
  • Patent number: 9996769
    Abstract: Techniques for detecting usage of copyrighted video content using object recognition are provided. In one example, a computer-implemented method comprises determining, by a system operatively coupled to a processor, digest information for a video, wherein the digest information comprises objects appearing in the video and respective times at which the objects appear in the video. The method further comprises comparing, by the system, the digest information with reference digest information for reference videos, wherein the reference digest information identifies reference objects appearing in the reference videos and respective reference times at which the reference objects appear in the reference videos. The method further comprises determining, by the system, whether the video comprises content included in one or more of the reference videos based on a degree of similarity between the digest information and reference digest information associated with one or more of the reference videos.
    Type: Grant
    Filed: June 8, 2016
    Date of Patent: June 12, 2018
    Assignee: INTERNATIONAL BUSINESS MACHINES CORPORATION
    Inventors: Christopher J. Hardee, Steven Robert Joroff, Pamela Ann Nesbitt, Scott Edward Schneider
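The digest comparison described above lends itself to a compact sketch: represent a digest as a mapping from object label to the times it appears, then score two digests by the fraction of appearance events in one that have a matching label at a nearby time in the other. The structure and scoring here are illustrative assumptions, not the patented similarity measure.

```python
def digest_similarity(digest_a, digest_b, time_tolerance=2.0):
    """Score two video digests of the form {object_label: [times]}.

    Returns the fraction of (object, time) appearance events in digest_a
    that have an event with the same label within time_tolerance seconds
    in digest_b. A high score suggests shared content.
    """
    events_a = [(obj, t) for obj, times in digest_a.items() for t in times]
    if not events_a:
        return 0.0
    matched = sum(
        1 for obj, t in events_a
        if any(abs(t - t2) <= time_tolerance for t2 in digest_b.get(obj, []))
    )
    return matched / len(events_a)
```

Comparing a video against each reference digest and thresholding this score gives the "degree of similarity" decision the abstract describes.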
  • Patent number: 9998731
    Abstract: A 3-dimensional displaying apparatus includes an image displaying panel having a plurality of pixels and a backlight panel spaced apart from one surface of the image displaying panel. The backlight panel includes a first line source set having a plurality of line sources arranged at regular intervals and a second line source set having line sources arranged spaced apart from the respective line sources of the first line source set by a predetermined interval. The first line source set and the second line source set are driven alternately. Thus, in a case where a horizontal location of an observer varies, the change of brightness of image information and the crosstalk between adjacent visual fields are minimized, and pseudo-stereoscopic vision is prevented. Also, irregularity of the brightness distribution in a visual field may be reduced.
    Type: Grant
    Filed: November 11, 2015
    Date of Patent: June 12, 2018
    Assignee: Samsung Electronics Co., Ltd.
    Inventors: Sung Kyu Kim, Ki Hyuk Yoon
  • Patent number: 9990546
    Abstract: An example target acquisition method includes obtaining, according to a global feature of each video frame of a plurality of video frames, a target pre-estimated position of each scale in the video frame; clustering the target pre-estimated position in each video frame to obtain a corresponding target candidate region; and determining a target actual region in the video frame according to all the target candidate regions in each video frame in combination with confidence levels of the target candidate regions and corresponding scale processing. The techniques of the present disclosure quickly and effectively acquire one or multiple targets, and, more particularly, achieve accurately distinguishing and acquiring the multiple targets.
    Type: Grant
    Filed: February 3, 2016
    Date of Patent: June 5, 2018
    Assignee: Alibaba Group Holding Limited
    Inventor: Xuan Jin
  • Patent number: 9981198
    Abstract: A distance detecting device is mounted on a user's vehicle to detect a distance between the user's vehicle and a leading vehicle moving in front of the user's vehicle. The distance detecting device communicates with a processing device, which applies race based distance rules to determine whether the user's vehicle is close to or incurring a penalty based at least on the distance between the user's vehicle and the leading vehicle. An indication can be given to the user regarding an impending penalty and/or when a penalty is incurred. Similarly, the device can be read to apply a penalty. Various modifications can be made to allow for recording, display, and transmission of racing and penalty statistics, enabling and disabling recordation of penalty occurrences, tailoring the penalty determinations to a given race, and mounting the distance detecting device to a vehicle.
    Type: Grant
    Filed: July 5, 2016
    Date of Patent: May 29, 2018
    Assignee: Zebra Innovations, LLC
    Inventors: Robbie Ventura, Michael D. Paley, Robby Ketchell
  • Patent number: 9977967
    Abstract: The object detection device 1 detects an object being recognized (such as a pedestrian) in a frame image 42, and identifies an area 42a where a detected object which is detected in the frame image 42 is present. A frame image 43 is input after the frame image 42. The object detection device 1 detects the object being recognized in the frame image 43, and identifies an area 43a where a detected object which is detected in the frame image 43 is present. When a distance from center coordinates 42b of the area 42a to center coordinates 43b of the area 43a is smaller than a reference distance, the object detection device 1 determines that the detected object which is detected in the frame image 43 is identical to the detected object which is detected in the frame image 42.
    Type: Grant
    Filed: April 13, 2017
    Date of Patent: May 22, 2018
    Assignee: MegaChips Corporation
    Inventors: Taizo Umezaki, Yuki Haraguchi, Hiromu Hasegawa
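The identity rule in the abstract above reduces to a one-line geometric test: two detections in consecutive frames are the same object when their center coordinates are closer than a reference distance. A minimal sketch (box format and names are illustrative):

```python
import math

def box_center(box):
    """Center (x, y) of an axis-aligned box given as (x0, y0, x1, y1)."""
    x0, y0, x1, y1 = box
    return ((x0 + x1) / 2, (y0 + y1) / 2)

def same_object(box_prev, box_curr, reference_distance):
    """Identity test across consecutive frames, per the abstract's rule:
    the two detections are treated as the same object when the distance
    between their center coordinates is below the reference distance."""
    (xa, ya), (xb, yb) = box_center(box_prev), box_center(box_curr)
    return math.hypot(xb - xa, yb - ya) < reference_distance
```

A small inter-frame displacement keeps the track alive; a jump beyond the reference distance starts a new track.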
  • Patent number: 9978180
    Abstract: Motion vector estimation is provided for generating and displaying images at a frame rate that is greater than a rendering frame rate. The displayed images may include late stage graphical adjustments of pre-rendered scenes that incorporate motion vector estimations. A head-mounted display (HMD) device may determine a predicted pose associated with a future position and orientation of the HMD, render a current frame based on the predicted pose, determine a set of motion vectors based on the current frame and a previous frame, generate an updated image based on the set of motion vectors and the current frame, and display the updated image on the HMD. In one embodiment, the HMD may determine an updated pose associated with the HMD subsequent to or concurrent with generating the current frame, and generate the updated image based on the updated pose and the set of motion vectors.
    Type: Grant
    Filed: January 25, 2016
    Date of Patent: May 22, 2018
    Assignee: Microsoft Technology Licensing, LLC
    Inventors: Jeffrey Neil Margolis, Matthew Crisler
  • Patent number: 9971939
    Abstract: In accordance with one embodiment, an image processing apparatus which is accessible to at least one storage device, the apparatus includes a decision unit, a generation unit and a determination unit. The decision unit estimates a separation distance from the first camera to the display shelf. The decision unit decides a search distance, based on the separation distance. The generation unit generates a template image by converting the number of pixels of a single-item image, acquired by photographing the product as a single piece and stored in the storage device, with a magnification corresponding to a ratio between the search distance and a known photographing distance. The determination unit determines an area, which is similar to the template image, within a shelf image stored in the storage device.
    Type: Grant
    Filed: September 30, 2015
    Date of Patent: May 15, 2018
    Assignee: TOSHIBA TEC KABUSHIKI KAISHA
    Inventor: Takayuki Sawada
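The template-scaling step above follows from apparent size falling off linearly with distance. A hedged sketch (assuming the magnification is the known photographing distance divided by the estimated search distance; the abstract leaves the direction of the ratio open, and all names here are illustrative):

```python
def template_size(single_item_px, known_distance, search_distance):
    """Scale a stored single-item image to its expected on-shelf size.

    Apparent size is proportional to 1/distance, so an item photographed
    at known_distance and searched for at search_distance is resized by
    known_distance / search_distance before template matching.
    """
    magnification = known_distance / search_distance
    w, h = single_item_px
    return (round(w * magnification), round(h * magnification))
```

For example, a 100x200-pixel item photographed at 1 m and searched for on a shelf 2 m away would be matched with a 50x100-pixel template.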
  • Patent number: 9959468
    Abstract: A method for classifying at least one object of interest in a video is provided. The method includes accessing, using at least one processing device, a frame of the video, the frame including at least one object of interest to be classified, performing, using the at least one processing device, object detection on the frame to detect the object of interest, tracking, using the at least one processing device, the object of interest over a plurality of frames in the video over time using a persistent tracking capability, isolating, using the at least one processing device, a segment of the frame that includes the object of interest, classifying, using the at least one processing device, the object of interest by processing the segment using deep learning, and generating an output that indicates the classification of the object of interest.
    Type: Grant
    Filed: November 6, 2015
    Date of Patent: May 1, 2018
    Assignee: The Boeing Company
    Inventors: Aaron Y. Mosher, David Keith Mefford
  • Patent number: 9961315
    Abstract: A “Concurrent Projector-Camera” uses an image projection device in combination with one or more cameras to enable various techniques that provide visually flicker-free projection of images or video, while real-time image or video capture is occurring in that same space. The Concurrent Projector-Camera provides this projection in a manner that eliminates video feedback into the real-time image or video capture. More specifically, the Concurrent Projector-Camera dynamically synchronizes a combination of projector lighting (or light-control points) on-state temporal compression in combination with on-state temporal shifting during each image frame projection to open a “capture time slot” for image capture during which no image is being projected. This capture time slot represents a tradeoff between image capture time and decreased brightness of the projected image.
    Type: Grant
    Filed: December 15, 2014
    Date of Patent: May 1, 2018
    Assignee: Microsoft Technology Licensing, LLC
    Inventors: Sasa Junuzovic, William Thomas Blank, Steven Bathiche, Anoop Gupta, Andrew D. Wilson
  • Patent number: 9959905
    Abstract: In accordance with example embodiments, the method and system for 360-degree video post-production generally makes use of points of view (POVs) to facilitate the 360-degree video post-production process. A POV is a rectilinear subset view of a 360-degree composition based on a particular focal length, angle of view, and orientation for each frame of the 360-degree composition. Video post-production editing can be applied to a POV by the user, using rectilinear video post-production methods or systems. The rectilinear video post-production editing done on the POV is integrated back into the 360-degree environment of the 360-degree composition. In accordance with example embodiments, the method and system for 360-degree video post-production comprises identifying a point of view in a 360-degree composition; applying video post-production editing to the point of view; and aligning a new layer containing the video post-production editing with the point of view in the 360-degree composition.
    Type: Grant
    Filed: May 5, 2017
    Date of Patent: May 1, 2018
    Assignee: Torus Media Labs Inc.
    Inventor: Michel Sevigny
  • Patent number: 9953421
    Abstract: A disappearing direction determination device and method, a video camera calibration apparatus and method, a video camera and a computer program product are provided. The device comprises: a moving target detecting unit for detecting in the video image a moving target area where a moving object is located; a feature point extracting unit for extracting at least one feature point on the moving object in the detected moving target area; a moving trajectory obtaining unit for tracking a movement of the feature point in a predetermined number of video image frames to obtain a movement trajectory of the feature point; and a disappearing direction determining unit for determining, according to the movement trajectories of one or more moving objects in the video image, a disappearing direction indicated by the major moving direction of the moving objects. Thus, a disappearing direction and video camera gesture parameters can be determined accurately.
    Type: Grant
    Filed: March 24, 2015
    Date of Patent: April 24, 2018
    Assignee: SONY CORPORATION
    Inventors: Yuyu Liu, Zhi Nie
  • Patent number: 9955059
    Abstract: Examples of an electronic device according to an embodiment include an electronic device in which a user can see through at least a transparent part of a first display area when the electronic device is worn on a body of the user. The electronic device includes: a processor configured to transmit a first part included in a first position on an image imaged by the camera and first information regarding the first part and receive second information relating to a processing result on the first part after a first term has passed since transmission of the first information; and display circuitry configured to display the second information at a third position on the first display area, the third position determined according to a second position of the first part on an image imaged by the camera after the first term has passed since transmission of the first information.
    Type: Grant
    Filed: June 11, 2015
    Date of Patent: April 24, 2018
    Assignee: KABUSHIKI KAISHA TOSHIBA
    Inventor: Hiroaki Komaki
  • Patent number: 9953247
    Abstract: A method of determining eye position information includes identifying an eye area in a facial image; verifying a two-dimensional (2D) feature in the eye area; and performing a determination operation including, determining a three-dimensional (3D) target model based on the 2D feature; and determining 3D position information based on the 3D target model.
    Type: Grant
    Filed: January 28, 2016
    Date of Patent: April 24, 2018
    Assignee: SAMSUNG ELECTRONICS CO., LTD.
    Inventors: Mingcai Zhou, Jingu Heo, Tao Hong, Zhihua Liu, DongKyung Nam, Kang Xue, Weiming Li, Xiying Wang, Gengyu Ma, Haitao Wang
  • Patent number: 9946957
    Abstract: Examples of the present disclosure relate to a method, apparatus, computer program and system for image analysis. According to certain examples, there is provided a method comprising causing, at least in part, actions that result in: receiving orientation information of an image capturing device; receiving one or more features detected from an image captured by the image capturing device; and selecting a clustering model for clustering the features, wherein the clustering model is selected, at least in part, in dependence upon the orientation information.
    Type: Grant
    Filed: June 17, 2016
    Date of Patent: April 17, 2018
    Assignee: Nokia Technologies Oy
    Inventor: Francesco Cricri
  • Patent number: 9946951
    Abstract: Embodiments are directed to an object detection system having at least one processor circuit configured to receive a series of image regions and apply to each image region in the series a detector, which is configured to determine a presence of a predetermined object in the image region. The object detection system performs a method of selecting and applying the detector from among a plurality of foreground detectors and a plurality of background detectors in a repeated pattern that includes sequentially selecting a selected one of the plurality of foreground detectors; sequentially applying the selected one of the plurality of foreground detectors to one of the series of image regions until all of the plurality of foreground detectors have been applied; selecting a selected one of the plurality of background detectors; and applying the selected one of the plurality of background detectors to one of the series of image regions.
    Type: Grant
    Filed: August 12, 2015
    Date of Patent: April 17, 2018
    Assignee: INTERNATIONAL BUSINESS MACHINES CORPORATION
    Inventors: Russell P. Bobbitt, Rogerio S. Feris, Chiao-Fe Shu, Yun Zhai
  • Patent number: 9939642
    Abstract: Provided is a glass type terminal and a control method thereof. The glass type terminal includes: a frame unit configured to be wearable on a user's head; a display unit; and a control unit configured to turn on power of a camera when preset conditions are met, analyze an image captured by the camera to produce image capture guide information, and control the display unit to output the produced image capture guide information.
    Type: Grant
    Filed: April 9, 2015
    Date of Patent: April 10, 2018
    Assignee: LG ELECTRONICS INC.
    Inventors: Yujune Jang, Taeseong Kim, Taekyoung Lee, Jeongyoon Rhee
  • Patent number: 9940524
    Abstract: A system comprising a computer-readable storage medium storing at least one program and a method for identifying and monitoring vehicles in motion is presented. The method may include accessing sets of pixel coordinates defining a location of a vehicle within a first image and a second image. The method may further include translating the sets of pixel coordinates to sets of global coordinates defining a first and second geospatial location of the vehicle. The method further includes determining the vehicle is in motion based on a comparison of the first and second geospatial locations of the vehicle. The method further includes causing presentation of a user interface that includes a geospatial map with visual indicators of the first and second geospatial location of the vehicle.
    Type: Grant
    Filed: April 14, 2016
    Date of Patent: April 10, 2018
    Assignee: General Electric Company
    Inventors: Lokesh Babu Krishnamoorthy, Francisco de Leon, Madhavi Vudumula, Aileen Margaret Hackett
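The pixel-to-geospatial translation and motion test above can be sketched under deliberately simple assumptions: a north-aligned overhead view with a uniform ground sampling distance, so pixels convert linearly to metres and then to degrees. This is a stand-in for the patent's translation step, not its actual method; constants and names are illustrative.

```python
import math

M_PER_DEG_LAT = 111_320.0  # approximate metres per degree of latitude

def pixel_to_geo(px, py, origin_lat, origin_lon, meters_per_px):
    """Map image pixel coordinates to (lat, lon) for a north-aligned
    overhead view whose top-left pixel sits at the given origin."""
    lat = origin_lat - (py * meters_per_px) / M_PER_DEG_LAT
    lon = origin_lon + (px * meters_per_px) / (
        M_PER_DEG_LAT * math.cos(math.radians(origin_lat)))
    return lat, lon

def in_motion(geo_a, geo_b, threshold_m=1.0):
    """Declare the vehicle in motion when two geospatial fixes are farther
    apart than a threshold (flat-earth small-angle approximation)."""
    dlat = (geo_b[0] - geo_a[0]) * M_PER_DEG_LAT
    dlon = (geo_b[1] - geo_a[1]) * M_PER_DEG_LAT * math.cos(
        math.radians(geo_a[0]))
    return math.hypot(dlat, dlon) > threshold_m
```

Translating the vehicle's pixel coordinates from two images and comparing the resulting fixes yields the motion determination the abstract describes; the fixes can also be plotted directly on the geospatial map.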
  • Patent number: 9942486
    Abstract: For cameras that capture several images in a burst mode, some embodiments of the invention provide a method that presents one or more of the captured images differently than the remaining captured images. The method identifies at least one captured image as dominant image and at least another captured image as a non-dominant image. The method then displays each dominant image different from each non-dominant image in a concurrent presentation of the images captured during the burst mode. The dominant images may appear larger than non-dominant images, and/or appear with a marking that indicates that the images are dominant.
    Type: Grant
    Filed: April 4, 2016
    Date of Patent: April 10, 2018
    Assignee: APPLE INC.
    Inventors: Claus Mølgaard, Mikael Rousson, Vincent Yue-Tao Wong, Brett M. Keating, Jeffrey A. Brasket, Karl C. Hsu, Todd S. Sachs, Justin Titi, Elliott B. Harris
  • Patent number: 9942477
    Abstract: An image processing apparatus comprises: a dividing unit that divides two frame images into a plurality of divided areas; a determination unit that determines a representative point for each of the divided areas in one of the two frame images; a setting unit that, for each of the two frame images, sets image portions for detecting movement between the two frame images, based on the representative points; and a detection unit that detects movement between the two frame images based on correlation values of image signals in the set image portions, wherein for each of the divided areas, the determination unit determines a feature point of the divided area or a predetermined fixed point as the representative point of the divided area in accordance with a position of the divided area in the one frame image.
    Type: Grant
    Filed: September 29, 2016
    Date of Patent: April 10, 2018
    Assignee: Canon Kabushiki Kaisha
    Inventors: Minoru Sakaida, Tomotaka Uekusa, Hiroyuki Yaguchi, Soichiro Suzuki
  • Patent number: 9939909
    Abstract: A hand region is detected in a captured image, and for each part of the background area, a light source presence degree indicating the probability that a light source is present is determined according to the luminance or color of that part; on the basis of the light source presence degree, a region in which the captured image is affected by a light source is estimated, and if the captured image includes a region estimated to be affected by a light source, whether or not a dropout has occurred in the hand region in the captured image is decided; on the basis of the result of this decision, an action is determined. Gesture determinations can be made correctly even when the hand region in the captured image is affected by a light source at the time of gesture manipulation input.
    Type: Grant
    Filed: May 15, 2014
    Date of Patent: April 10, 2018
    Assignee: Mitsubishi Electric Corporation
    Inventors: Nobuhiko Yamagishi, Yudai Nakamura, Tomonori Fukuta
  • Patent number: 9933921
    Abstract: A system and method for providing interactive media content with explorable content on a computing device that includes rendering a field of view within a navigable media content item; rendering at least one targetable object within the media content item; through a user input mechanism, receiving a navigation command; navigating the field of view within the media based at least in part on the received navigation command; detecting a locking condition based, at least in part, on the targetable object being in the field of view and entering an object-locked mode with the targetable object; and in the object-locked mode, automatically navigating the field of view to substantially track the targetable object of the object-locked mode.
    Type: Grant
    Filed: March 13, 2014
    Date of Patent: April 3, 2018
    Assignee: Google Technology Holdings LLC
    Inventors: Baback Elmieh, Brian M. Collins, Jan J. Pinkava, Douglas Paul Sweetland
  • Patent number: 9934428
    Abstract: The invention relates to a method for detecting a pedestrian (27) moving in an environmental region of a motor vehicle relatively to the motor vehicle based on a temporal sequence of images (18) of the environmental region, which are provided by means of a camera of the motor vehicle (1), wherein characteristic features are extracted from the images (18) and a plurality of optical flow vectors is determined to the characteristic features of at least two consecutively captured images of the sequence by means of an image processing device of the motor vehicle, which indicate a movement of the respective characteristic features over the sequence, wherein for detecting the pedestrian (27), several confidence metrics are determined based on the characteristic features and the optical flow vectors, and based on the confidence metrics, it is examined if a preset plausibility check criterion required for the detection of the pedestrian (27) is satisfied, wherein the pedestrian (27) is supposed to be detected if the plausibility check criterion is satisfied.
    Type: Grant
    Filed: July 30, 2014
    Date of Patent: April 3, 2018
    Assignee: Connaught Electronics Limited
    Inventors: James McDonald, John McDonald
  • Patent number: 9936112
    Abstract: The present application relates to a camera where a non-wide-angle lens and a wide-angle lens project images onto different regions of one and the same image sensor. The non-wide-angle lens images a part of a periphery of the wide-angle lens image, and in this way an overview image with an improved-quality peripheral region can be achieved.
    Type: Grant
    Filed: March 8, 2016
    Date of Patent: April 3, 2018
    Assignee: Axis AB
    Inventors: Jonas Hjelmström, Stefan Lundberg, Andreas Karlsson Jägerman
  • Patent number: 9936170
    Abstract: A content analysis engine receives video input and performs analysis of the video input to produce one or more gross change primitives. A view engine coupled to the content analysis engine receives the one or more gross change primitives from the content analysis engine and provides view identification information. A rules engine coupled to the view engine receives the view identification information from the view engine and provides one or more rules based on the view identification information. An inference engine performs video analysis based on the one or more rules provided by the rules engine and the one or more gross change primitives.
    Type: Grant
    Filed: November 25, 2015
    Date of Patent: April 3, 2018
    Assignee: AVIGILON FORTRESS CORPORATION
    Inventors: Weihong Yin, Andrew J. Chosak, John I. W. Clark, Geoffrey Egnal, Matthew F. Frazier, Niels Haering, Alan J. Lipton, Donald G. Madden, Michael C. Mannebach, Gary W. Myers, James S. Sfekas, Peter L. Venetianer, Zhong Zhang
  • Patent number: 9934580
    Abstract: Enhanced contrast between an object of interest and background surfaces visible in an image is provided using controlled lighting directed at the object. Exploiting the falloff of light intensity with distance, a light source (or multiple light sources), such as an infrared light source, can be positioned near one or more cameras to shine light onto the object while the camera(s) capture images. The captured images can be analyzed to distinguish object pixels from background pixels.
    Type: Grant
    Filed: May 3, 2017
    Date of Patent: April 3, 2018
    Assignee: Leap Motion, Inc.
    Inventors: David S. Holz, Hua Yang
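The falloff-based segmentation above exploits reflected brightness dropping roughly with 1/d² from a light source near the camera, so an object much closer than the background appears far brighter. A minimal sketch (the midpoint threshold and all constants are illustrative assumptions, not the patented analysis):

```python
def classify_pixels(brightness, object_distance, background_distance,
                    source_intensity=255.0):
    """Split pixels into object (True) vs background (False).

    Expected brightness at each depth follows an inverse-square falloff
    from the light source; a threshold midway between the two expected
    levels then separates object pixels from background pixels.
    """
    expected_obj = source_intensity / object_distance ** 2
    expected_bg = source_intensity / background_distance ** 2
    threshold = (expected_obj + expected_bg) / 2
    return [b >= threshold for b in brightness]
```

With an object at 1 m and a background at 4 m, the background reflects only 1/16 of the object's brightness, so even a generous threshold cleanly separates the two.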
  • Patent number: 9936166
    Abstract: The invention relates to a method for planning or for checking the planning of a dental and/or a maxillofacial treatment, wherein at least one video recording of an object (3) is recorded by means of at least one video camera (1). A patient model (4) is available which comprises image data of the object (3), wherein the video recording is virtually coupled to the patient model (4) in such a way that a viewing direction (13) of the view of the patient model (4) is changed in dependence on a changing recording direction (9, 46, 47, 48, 49, 50) of the video recording when the video camera (1) is moved in relation to the object (3).
    Type: Grant
    Filed: November 19, 2013
    Date of Patent: April 3, 2018
    Assignee: Sirona Dental Systems GmbH
    Inventors: Kai Lindenberg, Ciamak Abkai
  • Patent number: 9928707
    Abstract: A surveillance system (10) for monitoring movement in a region of interest (18) is described. The surveillance system (10) includes: An image capturing system (12) having a field of view (16) including a region of interest (18), and adapted to capture an image of the region of interest (18). An image processing system (78) configured to process a time-sequential series of images of the field of view from the image capturing system such that at least a portion (52, 53, 54) of each processed image (50) is analyzed for detecting movement within the region of interest (18). The image capturing system (12) is configured to capture, in each image, different apparent magnifications of respective zones (20, 22, 24) within the region of interest (18) in the field of view (16) that are at different distances from the image capturing system (12) such that an object (60) in at least one position in at least two of said zones has substantially the same apparent size in the images.
    Type: Grant
    Filed: May 16, 2012
    Date of Patent: March 27, 2018
    Assignee: GARRETT THERMAL SYSTEMS LIMITED
    Inventor: Matthew John Naylor
  • Patent number: 9928716
    Abstract: A system for monitoring a lifestyle of a person in a living space includes sensors for detecting a presence of the person and/or an activity of the person, and a processor controlled system, coupled to the sensors, for deriving events caused by the person and times when the events occur, for detecting event-free periods of first lengths in which no events of a predetermined group occur, for deriving an estimate of in-bed time on the basis of the event-free period of at least the first length detected in a first observation period beginning before a usual in-bed time, and for determining an estimate of the out-bed time on the basis of the events, and event-free periods of at least a second length, detected in a second observation period ending after a usual out-bed time.
    Type: Grant
    Filed: July 30, 2015
    Date of Patent: March 27, 2018
    Assignee: Dutch Domotics B.V.
    Inventor: Ireneusz Piotr Karkowski
  • Patent number: 9924363
    Abstract: A wireless audio device includes one or more electroacoustic transducers and a first wireless transceiver. The device also includes a processor configured to (a) process audio signals received by the transceiver in a first received signal and communicate the processed audio signals to the transducer(s) to cause the transducer(s) to output sound pressure waves, (b) determine that a characteristic of a second received signal independent of the first received signal surpassed at least one threshold, and (c) change a state of the audio device based on the determining.
    Type: Grant
    Filed: June 21, 2016
    Date of Patent: March 20, 2018
    Assignee: Bose Corporation
    Inventors: Douglas Warren Young, David A. Howley, Misha K. Hill, Douglas C. Moore
  • Patent number: 9924088
    Abstract: A camera module including a main body having a spherical shape and including a camera; a first arm connected to the main body and including a first motor configured to rotate the main body; a second arm connected to the first arm and including a second motor configured to rotate the first arm; a fixing member connected to the second arm and including a third motor for rotating the second arm; and a controller configured to independently rotate the main body, the first arm and the second arm to allow omnidirectional capturing in a three-dimensional space where the camera is placed.
    Type: Grant
    Filed: June 13, 2016
    Date of Patent: March 20, 2018
    Assignee: LG ELECTRONICS INC.
    Inventors: Peter Helm, James Khatiblou, Timothy Seward
  • Patent number: 9911191
Abstract: The purpose of the present invention is to provide a state estimation apparatus that appropriately estimates the internal state of an observation target by determining likelihoods from a plurality of observations. An observation obtaining unit of the state estimation system obtains, at given time intervals, a plurality of pieces of observation data from an observable event. The observation selecting unit selects a piece of observation data from the plurality of pieces of observation data obtained by the observation obtaining unit, based on posterior probability distribution data obtained at a preceding time t−1. The likelihood obtaining unit obtains likelihood data based on the observation data selected by the observation selecting unit and predicted probability distribution data obtained through prediction processing using the posterior probability distribution data.
    Type: Grant
    Filed: October 13, 2015
    Date of Patent: March 6, 2018
    Assignees: MegaChips Corporation, KYUSHU INSTITUTE OF TECHNOLOGY
    Inventors: Norikazu Ikoma, Hiromu Hasegawa
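The predict/likelihood/posterior cycle described here follows the standard particle-filter pattern. A minimal 1-D sketch of generic sequential importance resampling, not the patent's observation-selection scheme; the random-walk motion model and the Gaussian noise parameters are assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)

def particle_filter_step(particles, weights, observation, motion_std=1.0, obs_std=2.0):
    """One predict/update/resample cycle for a 1-D random-walk state."""
    # Predict: propagate the posterior at t-1 through the motion model to get the prior at t.
    particles = particles + rng.normal(0.0, motion_std, size=particles.shape)
    # Update: weight each particle by the Gaussian likelihood of the observation.
    weights = weights * np.exp(-0.5 * ((observation - particles) / obs_std) ** 2)
    weights /= weights.sum()
    # Resample when the effective sample size collapses below half the particle count.
    if 1.0 / np.sum(weights ** 2) < 0.5 * len(particles):
        idx = rng.choice(len(particles), size=len(particles), p=weights)
        particles, weights = particles[idx], np.full(len(particles), 1.0 / len(particles))
    return particles, weights

particles = rng.normal(0.0, 5.0, size=500)   # diffuse prior
weights = np.full(500, 1.0 / 500)
for z in [1.0, 1.5, 2.0, 2.5]:               # simulated observations
    particles, weights = particle_filter_step(particles, weights, z)
estimate = np.sum(weights * particles)       # posterior mean
```

The patent's contribution sits in how the observation fed to the update step is chosen among several candidates using the previous posterior; the cycle around it is the same.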
  • Patent number: 9907147
Abstract: The invention provides a light management information system for an outdoor lighting network system, having a plurality of outdoor light units each including at least one sensor type, where each of the light units communicates with at least one other light unit, at least one user input/output device in communication with one or more of said outdoor light units, a central management system in communication with light units, said central management system sends control commands and/or information to one or more of said outdoor light units, in response to received outdoor light unit status/sensor information from one or more of said outdoor light units or received user information requests from said user input/output device, a resource server in communication with said central management system, wherein the central management system uses the light unit status/sensor information and resources from the resource server to provide information to the user input/output device and/or reconfigure one or more of the light units.
    Type: Grant
    Filed: March 12, 2014
    Date of Patent: February 27, 2018
    Assignee: PHILIPS LIGHTING HOLDING B.V.
    Inventors: Hongxin Chen, Dave Alberto Tavares Cavalcanti, Kiran Srinivas Challapali, Sanae Chraibi, Liang Jia, Andrew Ulrich Rutgers, Yong Yang, Michael Alex Van Hartskamp, Dzmitry Viktorovich Aliakseyeu, Hui Li, Qing Li, Fulong Ma, Jonathan David Mason, Berent Willem Meerbeek, John Brean Mills, Talmai Brandão De Oliveira, Daniel J. Piotrowski, Yuan Shu, Neveen Shlayan, Marcin Krzysztof Szczodrak, Yi Qiang Yu, Zhong Huang, Martin Elixmann, Qin Zhao, Xianneng Peng, Jianfeng Wang, Dan Jiang
  • Patent number: 9904371
    Abstract: A hand region is detected in a captured image, and for each part of the background area, a light source presence degree indicating the probability that a light source is present is determined according to the luminance or color of that part; on the basis of the light source presence degree, a region in which the captured image is affected by a light source is estimated, and if the captured image includes a region estimated to be affected by a light source, whether or not a dropout has occurred in the hand region in the captured image is decided; on the basis of the result of this decision, an action is determined. Gesture determinations can be made correctly even when the hand region in the captured image is affected by a light source at the time of gesture manipulation input.
    Type: Grant
    Filed: May 15, 2014
    Date of Patent: February 27, 2018
    Assignee: Mitsubishi Electric Corporation
    Inventors: Nobuhiko Yamagishi, Yudai Nakamura, Tomonori Fukuta
  • Patent number: 9898677
Abstract: In one embodiment, a method of determining and tracking moving objects in a video, including detecting and extracting regions from accepted frames of a video, matching parts using the extracted parts of a current frame and matching each part from a previous frame to a region in the current frame, tracking the matched parts to form part tracks, and determining a set of path features for each tracked part path. The determined path features are used to classify each path as that of a mover or a static. The method includes clustering the paths of movers, including grouping parts of movers that likely belong to a single object, in order to generate one or more single moving objects and track moving objects. Also disclosed are a system to carry out the method and a non-transitory computer-readable medium with instructions that, when executed in a processing system, cause carrying out the method.
    Type: Grant
    Filed: October 10, 2016
    Date of Patent: February 20, 2018
    Assignee: MotionDSP, Inc.
Inventors: Nebojša Anđelković, Edin Mulalić, Nemanja Grujić, Saša Anđelković, Vuk Ilić, Milan Marković
  • Patent number: 9898650
Abstract: Provided are a system and a method for tracking a position based on multiple sensors. The system according to the present invention includes: a space management server which divides a predetermined space into a predetermined number of spaces to generate a plurality of zones and manages information of each zone; and a zone management server which manages the information of the zones generated by the space management server and provides the space management server with zone session information of an object positioned in each zone and positional information of the corresponding object sensed by a position tracking sensor in every time slot.
    Type: Grant
    Filed: September 15, 2016
    Date of Patent: February 20, 2018
    Assignee: Electronics and Telecommunications Research Institute
    Inventors: Young Ho Suh, Kang Woo Lee, Sang Keun Rhee
  • Patent number: 9898557
    Abstract: A non-transitory computer-readable storage medium is disclosed. In an embodiment, the non-transitory computer-readable storage medium includes instructions that, when executed by a computer, cause the computer to perform steps involving receiving parameters, selecting pre-configured slices from a library of slices that satisfy the parameters, and placing the selected slices to generate a configuration variant in accordance with slice placement logic.
    Type: Grant
    Filed: February 27, 2015
    Date of Patent: February 20, 2018
    Assignee: ADITAZZ, INC.
    Inventors: Richard L. Sarao, Scott Ewart
  • Patent number: 9892520
Abstract: A method and system determine flows by first acquiring a video of the flows with a camera, wherein the flows are pedestrians in a scene, wherein the video includes a set of frames. Motion vectors are extracted from each frame in the set, and a data matrix is constructed from the motion vectors in the set of frames. A low rank Koopman operator is determined from the data matrix and a spectrum of the low rank Koopman operator is analyzed to determine a set of Koopman modes. Then, the frames are segmented into independent flows according to a clustering of the Koopman modes.
    Type: Grant
    Filed: January 31, 2017
    Date of Patent: February 13, 2018
    Assignee: Mitsubishi Electric Research Laboratories, Inc.
    Inventors: Hassan Mansour, Caglayan Dicle, Dong Tian, Mouhacine Benosman, Anthony Vetro
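A low-rank Koopman operator of this kind can be estimated from a data matrix in the style of dynamic mode decomposition (DMD). A minimal numpy sketch under that assumption — the rank, the data layout, and the omission of the final clustering step are all simplifications, not the patent's procedure:

```python
import numpy as np

def low_rank_koopman_modes(D, r):
    """DMD-style estimate of a rank-r Koopman operator and its modes.
    D: (features, frames) data matrix, e.g. stacked per-frame motion vectors."""
    X, Y = D[:, :-1], D[:, 1:]                 # snapshot pairs at t and t+1
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    U, s, Vt = U[:, :r], s[:r], Vt[:r]         # rank-r truncation
    A = U.conj().T @ Y @ Vt.conj().T / s       # low-rank Koopman operator (r x r)
    eigvals, W = np.linalg.eig(A)              # spectrum of the operator
    modes = Y @ Vt.conj().T / s @ W            # lift eigenvectors to full space
    return eigvals, modes

# Toy data: two decaying components with rates 0.9 and 0.5 mixed across 4 features.
t = np.arange(12)
D = np.vstack([0.9**t, 0.5**t, 0.9**t + 0.5**t, 2.0 * 0.5**t])
eigvals, modes = low_rank_koopman_modes(D, 2)  # recovers eigenvalues ~0.9 and ~0.5
```

Segmentation into independent flows would then cluster the per-pixel magnitudes of the columns of `modes`, grouping pixels that respond to the same spectral component.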
  • Patent number: 9891716
    Abstract: A method and system for performing gesture recognition of a vehicle occupant employing a time of flight (TOF) sensor and a computing system in a vehicle. An embodiment of the method of the invention includes the steps of receiving one or more raw frames from the TOF sensor, performing clustering to locate one or more body part clusters of the vehicle occupant, calculating the location of the tip of the hand of the vehicle occupant, determining whether the hand has performed a dynamic or a static gesture, retrieving a command corresponding to one of the determined static or dynamic gestures, and executing the command.
    Type: Grant
    Filed: September 29, 2015
    Date of Patent: February 13, 2018
    Assignee: Microsoft Technology Licensing, LLC
    Inventor: Tarek El Dokor
  • Patent number: 9886775
    Abstract: The disclosure relates to a method for detection of the horizontal and gravity directions of an image, the method comprising: selecting equidistant sampling points in an image at an interval of the radius of the sampling circle of an attention focus detector; placing the center of the sampling circle of the attention focus detector on each of the sampling points, and using the attention focus detector to acquire attention focus coordinates and the corresponding significant orientation angle, and all the attention focus coordinates and the corresponding significant orientation angles constitute a set ?p; using an orientation perceptron to determine a local orientation angle and a weight at the attention focus according to the gray image information, and generating a local orientation function; obtaining a sum of each of the local orientation functions as an image direction function; obtaining a function MCGCS(?), and further obtaining the horizontal and gravity identification angles.
    Type: Grant
    Filed: April 28, 2014
    Date of Patent: February 6, 2018
    Assignee: Institute of Automation Chinese Academy of Sciences
    Inventors: Zhiqiang Cao, Xilong Liu, Chao Zhou, Min Tan, Kun Ai
  • Patent number: 9888174
    Abstract: An omnidirectional camera is presented. The camera comprises: at least one lens coupled to an image sensor, a controller coupled to the image sensor, and a movement detection element coupled to the controller. The movement detection element is configured to detect a speed and direction of movement of the camera, the camera is configured to capture images via the at least one lens coupled to the image sensor, and the controller is configured to select a digital viewpoint in the captured images. A central point of the digital viewpoint is selected on the basis of direction of movement, and a field of view of the digital viewpoint is based on the speed of movement. A system and method are also presented.
    Type: Grant
    Filed: October 15, 2015
    Date of Patent: February 6, 2018
    Assignee: Microsoft Technology Licensing, LLC
    Inventors: Esa Kankaanpää, Shahil Soni
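The viewpoint-selection rule in this abstract (center from direction, field of view from speed) can be sketched directly. The specific mapping below — linear FOV narrowing between assumed bounds — is an illustrative assumption, not the patented mapping:

```python
import math

def digital_viewpoint(vx, vy, max_fov=120.0, min_fov=60.0, full_speed=10.0):
    """Pick a digital viewpoint in a 360-degree capture from a velocity estimate.
    The center follows the heading; the FOV narrows as speed grows (assumed mapping)."""
    heading = math.degrees(math.atan2(vy, vx)) % 360.0   # viewpoint center (yaw, degrees)
    speed = math.hypot(vx, vy)
    k = min(speed / full_speed, 1.0)                     # 0 = stationary, 1 = full speed
    fov = max_fov - k * (max_fov - min_fov)              # wide when still, narrow when fast
    return heading, fov

# Moving fast along +x: look straight ahead with the narrowest FOV.
h, f = digital_viewpoint(10.0, 0.0)   # (0.0, 60.0)
```

When stationary the same rule yields the widest FOV, which matches the intuition that a fast-moving wearer cares most about the direction of travel.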
  • Patent number: 9881380
    Abstract: Techniques and systems are described for performing video segmentation using fully connected object proposals. For example, a number of object proposals for a video sequence are generated. A pruning step can be performed to retain high quality proposals that have sufficient discriminative power. A classifier can be used to provide a rough classification and subsampling of the data to reduce the size of the proposal space, while preserving a large pool of candidate proposals. A final labeling of the candidate proposals can then be determined, such as a foreground or background designation for each object proposal, by solving for a posteriori probability of a fully connected conditional random field, over which an energy function can be defined and minimized.
    Type: Grant
    Filed: February 16, 2016
    Date of Patent: January 30, 2018
Assignees: DISNEY ENTERPRISES, INC., ETH ZÜRICH (EIDGENÖSSISCHE TECHNISCHE HOCHSCHULE ZÜRICH)
    Inventors: Alexander Sorkine Hornung, Federico Perazzi, Oliver Wang
  • Patent number: 9880395
    Abstract: Movement of a display device is detected, and an image is displayed in stereoscopic display or planar display depending on the detected movement.
    Type: Grant
    Filed: August 3, 2015
    Date of Patent: January 30, 2018
    Assignees: NEC CORPORATION, Tianma Japan, Ltd.
    Inventors: Tetsusi Sato, Koji Shigemura, Masahiro Serizawa
  • Patent number: 9875411
    Abstract: The present disclosure relates to a video monitoring method and a video monitoring system based on a depth video. The video monitoring method comprises: obtaining video data collected by a video collecting module; determining an object as a monitored target based on pre-set scene information and the video data; extracting characteristic information of the object; and determining predictive information of the object based on the characteristic information, wherein the video data comprises video data including the depth information.
    Type: Grant
    Filed: December 29, 2015
    Date of Patent: January 23, 2018
    Assignees: BEIJING KUANGSHI TECHNOLOGY CO., LTD., PINHOLE (BEIJING) TECHNOLOGY CO., LTD.
    Inventors: Gang Yu, Chao Li, Qizheng He, Qi Yin
  • Patent number: 9874636
    Abstract: Some demonstrative embodiments include devices, systems and/or methods of orientation estimation of a mobile device. For example, a mobile device may include an orientation estimator to detect a pattern in at least one image captured by the mobile device, and based on one or more geometric elements of the detected pattern, to determine one or more orientation parameters related to an orientation of the mobile device.
    Type: Grant
    Filed: June 8, 2012
    Date of Patent: January 23, 2018
    Assignee: INTEL CORPORATION
    Inventors: Uri Schatzberg, Yuval Amizur, Leor Banin
  • Patent number: 9875431
    Abstract: To provide a training data generating device capable of easily generating a large amount of training data used for machine-learning a dictionary of a discriminator for recognizing a crowd state. A person state determination unit 72 determines a person state of a crowd according to a people state control designation as designation information on a person state of people and an individual person state control designation as designation information on a state of an individual person in the people. A crowd state image synthesis unit 73 generates a crowd state image as an image in which a person image corresponding to the person state determined by the person state determination unit 72 is synthesized with an image at a predetermined size acquired by a background extraction unit 71, and specifies a training label for the crowd state image.
    Type: Grant
    Filed: May 21, 2014
    Date of Patent: January 23, 2018
    Assignee: NEC Corporation
    Inventor: Hiroo Ikeda