Target Tracking Or Detecting Patents (Class 382/103)
  • Patent number: 10255504
    Abstract: Tracking position of at least one object in video frames. The tracking includes processing an initial frame of a set of frames using feature extraction to identify locations of features of the at least one object. The tracking further includes using motion estimation to track locations of the features in subsequent frames of the set of frames, including iteratively performing: obtaining a next frame of the set of frames, and applying a motion estimation algorithm as between the next frame and a prior frame of the set of frames to identify updated locations of the features in the next frame, where locations of the features as identified based on the prior frame are used as input to the motion estimation algorithm to identify the updated locations of the features in the next frame based on searching less than an entirety of the next frame.
    Type: Grant
    Filed: February 14, 2017
    Date of Patent: April 9, 2019
    Assignee: INTERNATIONAL BUSINESS MACHINES CORPORATION
    Inventors: Marzia E. Castellani, Roberto Guarda, Roberto Ragusa, Alessandro Rea
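The abstract's key idea is that a feature's prior location seeds the search in the next frame, so only a small window is examined rather than the whole image. A minimal sketch of that idea, using sum-of-squared-differences patch matching (the abstract does not specify the motion estimation algorithm, so SSD here is an assumption):

```python
def track_feature(prev, curr, loc, patch=1, radius=2):
    """Locate a feature in the next frame by searching only a small
    window around its prior location, rather than the entire frame."""
    pr, pc = loc
    rows, cols = len(prev), len(prev[0])

    def ssd(r, c):
        # Cost of matching the patch around (pr, pc) in `prev`
        # against the patch around (r, c) in `curr`.
        return sum((prev[pr + dr][pc + dc] - curr[r + dr][c + dc]) ** 2
                   for dr in range(-patch, patch + 1)
                   for dc in range(-patch, patch + 1))

    # Candidate locations: a (2*radius+1)^2 window, clamped to the frame.
    candidates = [(r, c)
                  for r in range(max(patch, pr - radius),
                                 min(rows - patch, pr + radius + 1))
                  for c in range(max(patch, pc - radius),
                                 min(cols - patch, pc + radius + 1))]
    return min(candidates, key=lambda rc: ssd(*rc))
```

Iterating this per frame, with each output fed back as the next prior location, reproduces the abstract's loop at toy scale.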
  • Patent number: 10257423
    Abstract: A method and apparatus for determining an indication of interaction in a direction towards a webcam. The method includes the steps of determining an object in a region of interest, determining a first size of the object in the region of interest and tracking the object in the region of interest. A second size of the object in the region of interest is then determined, and a push interaction is confirmed as having taken place if the ratio of the second size to the first size is greater than a predetermined value.
    Type: Grant
    Filed: December 9, 2016
    Date of Patent: April 9, 2019
    Assignee: AIC Innovations Group, Inc.
    Inventors: Adam Hanina, Gordon Kessler, Lei Guan
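The push-detection logic above reduces to a single ratio test: an object moving toward the webcam grows in the image. A sketch, with the threshold value as an assumed figure (the patent only says "a predetermined value"):

```python
def is_push(first_size, second_size, threshold=1.3):
    """Confirm a push interaction toward the camera when the tracked
    object's size has grown by more than the predetermined ratio."""
    return second_size / first_size > threshold
```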
  • Patent number: 10255453
    Abstract: Embodiments of the present invention may involve a method, system, and computer program product for controlling privacy in a face recognition application. A computer may receive an input including a face recognition query and a digital image of a face. The computer may identify a target user associated with a facial signature in a first database based at least in part on a statistical correlation between a detected facial signature and one or more facial signatures in the first database. The computer may extract a profile of the target user from a second database. The profile of the target user may include one or more privacy preferences. The computer may generate a customized profile of the target user. The customized profile may omit one or more elements of the profile of the target user based on the one or more privacy preferences and/or a current context.
    Type: Grant
    Filed: January 22, 2018
    Date of Patent: April 9, 2019
    Assignee: International Business Machines Corporation
    Inventors: Seraphin B. Calo, Bong Jun Ko, Kang-Won Lee, Theodoros Salonidis, Dinesh C. Verma
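The two stages the abstract describes (match a facial signature against a database, then strip profile fields per the matched user's privacy preferences) can be sketched as follows. Cosine similarity stands in for the patent's unspecified "statistical correlation", and the `"hidden"` preference key is a hypothetical schema:

```python
import math

def best_match(query_sig, signatures):
    """Pick the enrolled user whose facial signature is most similar
    to the query signature (cosine similarity as a stand-in)."""
    def cosine(a, b):
        dot = sum(x * y for x, y in zip(a, b))
        return dot / (math.sqrt(sum(x * x for x in a)) *
                      math.sqrt(sum(y * y for y in b)))
    return max(signatures, key=lambda uid: cosine(query_sig, signatures[uid]))

def customized_profile(profile, privacy_prefs):
    """Return a copy of the profile omitting the fields the target
    user chose to hide."""
    hidden = set(privacy_prefs.get("hidden", []))
    return {k: v for k, v in profile.items() if k not in hidden}
```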
  • Patent number: 10255296
    Abstract: A device includes: an image data receiving component operable to receive multiband image data of a geographic region; a surface index generation component operable to generate a surface index based on at least a portion of the received multiband image data; a classification component operable to generate a land cover classification based on the surface index; a segment data receiving component operable to receive segment data relating to at least a portion of the geographic region; a zonal statistics component operable to generate a segment land cover classification based on the land cover classification and the segment data; a feature data receiving component operable to receive feature data; a feature index generation component operable to generate a feature index based on the received feature data; and a catalog component operable to generate a segment feature index based on the feature index and the segment land cover classification.
    Type: Grant
    Filed: February 23, 2017
    Date of Patent: April 9, 2019
    Assignee: OmniEarth, Inc.
    Inventors: Jonathan Fentzke, Shadrian Strong, David Murr, Lars Dyrud
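The surface-index-to-zonal-statistics pipeline can be illustrated with an NDVI-style normalized difference (the patent does not name its index, so NDVI, the 0.3 threshold, and majority voting per segment are all assumptions):

```python
from collections import Counter

def surface_index(nir, red):
    # NDVI-style normalized difference from two multiband values.
    return (nir - red) / (nir + red)

def land_cover(nir, red, threshold=0.3):
    # Threshold the index into a coarse land-cover class.
    return "vegetation" if surface_index(nir, red) > threshold else "other"

def segment_land_cover(pixels, segments):
    """Zonal statistics: label each segment with the majority
    land-cover class of its member pixels."""
    return {seg: Counter(land_cover(*pixels[i])
                         for i in members).most_common(1)[0][0]
            for seg, members in segments.items()}
```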
  • Patent number: 10252417
    Abstract: An information processing apparatus, comprises an obtainment unit configured to obtain a relative position and orientation between a first object and a second object; a specifying unit configured to specify an occlusion region where the second object occludes the first object, from an image that includes the first object and the second object, based on the relative position and orientation, a first shape model that represents a shape of the first object, and a second shape model that represents a shape of the second object; and a generation unit configured to generate information for obtaining a position and orientation of the first object based on the occlusion region.
    Type: Grant
    Filed: February 27, 2017
    Date of Patent: April 9, 2019
    Assignee: Canon Kabushiki Kaisha
    Inventor: Daisuke Watanabe
  • Patent number: 10255595
    Abstract: The present disclosure generally relates to making payments with a mobile device. In one example process, the device receives first authentication data, such as fingerprint authentication information, and second authentication data, such as a bank authorization code. The device then transmits a transaction request for a payment transaction. In another example process, the device detects activation of a physical input mechanism and detects a fingerprint using a biometric sensor. The device is enabled to participate in NFC payment transactions. In another example process, the device displays a live preview of images obtained via a camera sensor while the device detects partial credit card information of a credit card in a field of view of the camera sensor.
    Type: Grant
    Filed: September 30, 2015
    Date of Patent: April 9, 2019
    Assignee: Apple Inc.
    Inventors: Marcel Van Os, George R. Dicker, Glen W. Steele, Pablo F. Caro, Peter Anton, Donald W. Pitschel
  • Patent number: 10255691
    Abstract: The invention discloses a method and a system of detecting and recognizing a vehicle logo based on Selective Search, the method comprising: positioning a vehicle plate on an original image of a vehicle to obtain a vehicle plate position; coarsely positioning a vehicle logo on the original image to obtain a coarse positioning image of the vehicle logo; selecting vehicle logo candidate areas in the coarse positioning image; performing target positioning in the vehicle logo candidate areas with the Selective Search to obtain a set of target regions; training a vehicle logo location classifier with Spatial Pyramid Matching based on Sparse Coding (ScSPM) to determine the vehicle logo from the set of target regions to obtain a vehicle logo position; and training a multi-class vehicle logo recognition classifier with the ScSPM to conduct a specific type-recognition for the vehicle logo to obtain a vehicle logo recognition result.
    Type: Grant
    Filed: March 14, 2017
    Date of Patent: April 9, 2019
    Assignees: SUN YAT-SEN UNIVERSITY, Guangdong Fundway Technology Co., Ltd.
    Inventors: Xiying Li, Shuo Lv, Qianyin Jiang, Donghua Luo, Minxian Yuan, Zhi Yu
  • Patent number: 10247556
    Abstract: A mobile platform having a camera for outputting image data of features whose coordinates are unknown, an inertial measurement unit for outputting inertial measurements, a processor, storage, and an extended Kalman-filter-based estimator executable on the processor for processing the inertial measurements and the features of the image data, where a state vector of the estimator contains a sliding window of states for determining the position and orientation of the mobile platform.
    Type: Grant
    Filed: July 23, 2014
    Date of Patent: April 2, 2019
    Assignee: The Regents of the University of California
    Inventor: Anastasios Mourikis
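The distinguishing element here is the state vector's sliding window of poses. A sketch of just that bookkeeping (the filter mathematics are deliberately omitted; a fixed-size deque that drops the oldest pose as frames arrive is the assumed mechanism):

```python
from collections import deque

class SlidingWindow:
    """Keep only the most recent N camera poses in the estimator's
    state, discarding the oldest as new frames arrive. This is only
    the windowing structure, not the Kalman filter itself."""
    def __init__(self, size):
        self._poses = deque(maxlen=size)

    def add_pose(self, pose):
        # Appending beyond `size` silently evicts the oldest pose.
        self._poses.append(pose)

    def states(self):
        return list(self._poses)
```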
  • Patent number: 10248218
    Abstract: A method of recognizing an aimed point on a plane is provided. Images captured by one or more image sensors are processed to obtain data indicative of the location of at least one pointing element in the viewing space and data indicative of at least one predefined body part of the user in the viewing space; using the obtained data, an aimed point on the plane is identified. If a predefined condition is determined to be met, a predefined command and/or message is executed.
    Type: Grant
    Filed: May 1, 2017
    Date of Patent: April 2, 2019
    Assignee: Eyesight Mobile Technologies, Ltd.
    Inventors: Itay Katz, Amnon Shenfeld
  • Patent number: 10246030
    Abstract: An object of the present invention is to provide an object detection apparatus and a driving assistance apparatus in which movement information of a target object can be obtained with high accuracy. In the present invention, the object detection apparatus detects a target object from a predetermined mounting position (a vehicle or the like); in a case where a target object is detected, it acquires the position of the target object, obtains a feature amount of a fixed object existing around the target object and detects the position of that fixed object, sets the position of the fixed object as a reference point, and calculates movement information of the target object from the position of the target object with the reference point as a reference.
    Type: Grant
    Filed: December 14, 2012
    Date of Patent: April 2, 2019
    Assignee: TOYOTA JIDOSHA KABUSHIKI KAISHA
    Inventor: Masayuki Katoh
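Measuring the target relative to a stationary reference point cancels the sensor's own motion, which is how the abstract's apparatus gains accuracy. A sketch under the assumption of simple 2-D positions per frame:

```python
def target_motion(target_positions, reference_positions):
    """Per-frame target motion expressed relative to a fixed object.
    Because the reference is stationary, its apparent motion is the
    moving sensor's own motion, and the subtraction removes it."""
    # Target position relative to the reference point, per frame.
    rel = [(tx - rx, ty - ry)
           for (tx, ty), (rx, ry) in zip(target_positions,
                                         reference_positions)]
    # Frame-to-frame displacement of that relative position.
    return [(x1 - x0, y1 - y0)
            for (x0, y0), (x1, y1) in zip(rel, rel[1:])]
```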
  • Patent number: 10248881
    Abstract: In order to acquire recognition environment information impacting the recognition accuracy of a recognition engine, an information processing device 100 comprises a detection unit 101 and an environment acquisition unit 102. The detection unit 101 detects a marker, which has been disposed within a recognition target zone for the purpose of acquiring information, from an image captured by means of an imaging device which captures images of objects located within the recognition target zone. The environment acquisition unit 102 acquires the recognition environment information based on image information of the detected marker. The recognition environment information is information representing the way in which a recognition target object is reproduced in an image captured by the imaging device when said imaging device captures an image of the recognition target object located within the recognition target zone.
    Type: Grant
    Filed: August 19, 2015
    Date of Patent: April 2, 2019
    Assignee: NEC Corporation
    Inventor: Hiroo Ikeda
  • Patent number: 10248855
    Abstract: The present disclosure relates to a method and a device for identifying a gesture. The method includes: determining a depth of each pixel in each of a plurality of images to be processed, in which the plurality of images to be processed are separately collected by a plurality of cameras, and the depth is configured to at least partially represent a distance between an actual object point corresponding to each pixel and the mobile apparatus; determining a target region of each of the images to be processed according to the depth; and determining a gesture of a target user according to image information of the target regions.
    Type: Grant
    Filed: October 16, 2017
    Date of Patent: April 2, 2019
    Assignee: Beijing Xiaomi Mobile Software Co., Ltd.
    Inventor: Xingsheng Lin
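The depth-based target region amounts to keeping only pixels within an arm's-reach band in front of the device; the gesture is then recognized inside that region. A sketch with assumed band limits:

```python
def target_region(depth_map, near=0.3, far=0.8):
    """Select the pixels whose depth (in metres, assumed units) falls
    inside a hand-distance band; everything else is background."""
    return [(r, c)
            for r, row in enumerate(depth_map)
            for c, d in enumerate(row)
            if near <= d <= far]
```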
  • Patent number: 10246038
    Abstract: The present invention addresses the problem of attaining an object recognition device that can change control of a vehicle in accordance with the reliability of detection of a target object. The object recognition device according to the present invention recognizes a target object around a vehicle and includes: a distance-information-based target object determination unit 106 that determines whether or not an object 303 is a target object by using distance information from the vehicle 301 to the object 303; an image-information-based target object determination unit 107 that determines whether or not the object 303 is a target object by using image information obtained by capturing an image of the object 303 from the vehicle 301; and a target object detection reliability calculation unit 108 that calculates the reliability of detection of a target object by using the distance information and the image information.
    Type: Grant
    Filed: August 7, 2015
    Date of Patent: April 2, 2019
    Assignee: HITACHI AUTOMOTIVE SYSTEMS, LTD.
    Inventors: Takeshi Shima, Takuma Osato, Masayuki Takemura, Yuuji Otsuka
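The reliability calculation combines the verdicts of the distance-based and image-based determination units. The patent does not give the formula, so equal weighting is an assumption:

```python
def detection_reliability(distance_hit, image_hit, w_dist=0.5, w_img=0.5):
    """Combine the two determination units' verdicts into a single
    reliability score in [0, 1]; equal weights are assumed, not taken
    from the patent."""
    return w_dist * bool(distance_hit) + w_img * bool(image_hit)
```

Vehicle control could then be graded on this score, e.g. full braking only above some reliability floor.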
  • Patent number: 10248852
    Abstract: The present invention discloses an expression recognition apparatus and methods using a head mounted display. A head mounted display apparatus for performing expression recognition according to an embodiment of the present invention comprises a sensing unit including at least one expression detection sensing unit installed inside the apparatus for sensing expression information around the eyes in a contact or non-contact manner; an image acquiring unit installed outside the apparatus for collecting expression information around the mouth; and an expression information acquisition unit for collecting the expression information around the eyes and around the mouth.
    Type: Grant
    Filed: August 25, 2017
    Date of Patent: April 2, 2019
    Assignee: INDUSTRY-ACADEMIC COOPERATION FOUNDATION, YONSEI UNIVERSITY
    Inventors: Shiho Kim, Jaekwang Cha
  • Patent number: 10248842
    Abstract: A head mounted display (HMD) displays content to a user wearing the HMD, where the content may be based on a facial model of the user. The HMD uses an electronic display to illuminate a portion of the face of the user. The electronic display emits a pattern of structured light and/or monochromatic light of a given color. A camera assembly captures images of the illuminated portion of the face. A controller processes the captured images to determine depth information or color information of the face of the user. Further, the processed images may be used to update the facial model, for example, which is represented as a virtual avatar and presented to the user in a virtual reality, augmented reality, or mixed reality environment.
    Type: Grant
    Filed: January 9, 2018
    Date of Patent: April 2, 2019
    Assignee: Facebook Technologies, LLC
    Inventors: Andrew Matthew Bardagjy, Joseph Duggan, Cina Hazegh, Fei Liu, Mark Timothy Sullivan, Simon Morris Shand Weiss
  • Patent number: 10249203
    Abstract: Apparatus and associated methods relate to using an image of a fiducial indicating a parking location to provide docking guidance data to the pilot of an aircraft. The fiducial has vertically-separated indicia and laterally-separated indicia. A camera is configured to mount at a camera location so as to be able to capture two-dimensional images of a scene external to the aircraft. The two-dimensional image includes pixel data generated by the two-dimensional array of light-sensitive pixels. A digital processor identifies first and second sets of pixel coordinates corresponding to the two vertically-separated and the two laterally-separated indicia, respectively. The digital processor then calculates, based at least in part on the identified first pixel coordinates corresponding to the two vertically-separated indicia, a range to the parking location.
    Type: Grant
    Filed: April 17, 2017
    Date of Patent: April 2, 2019
    Assignee: Rosemount Aerospace Inc.
    Inventors: Joseph T. Pesik, Todd Anthony Ell, Robert Rutkiewicz
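The range calculation from two vertically-separated indicia of known physical separation is a standard similar-triangles (pinhole camera) relation: the farther the fiducial, the fewer pixels the indicia subtend. A sketch of that relation, not the patented math:

```python
def range_to_fiducial(pixel_y_top, pixel_y_bottom,
                      real_separation_m, focal_length_px):
    """Pinhole-camera range estimate: range = f * H / h, where H is the
    known physical separation of the indicia and h their separation in
    pixels in the captured image."""
    pixel_sep = abs(pixel_y_top - pixel_y_bottom)
    return focal_length_px * real_separation_m / pixel_sep
```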
  • Patent number: 10248943
    Abstract: A checkout lane management system is described that uses object recognition to order a plurality of checkout lanes according to estimated checkout periods per checkout lane. The checkout lane management system may comprise one or more cameras for collecting a stream of images focused on the plurality of checkout lanes. The checkout lane management system also comprises a plurality of indicator lights for the plurality of checkout lanes that illuminate according to a plurality of light intensity values.
    Type: Grant
    Filed: May 18, 2017
    Date of Patent: April 2, 2019
    Inventor: Jacob D. Richards
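Ordering lanes by estimated checkout period and driving indicator lights from those estimates can be sketched as below; the linear mapping from wait estimate to light intensity is an assumption, since the abstract only says lights illuminate "according to a plurality of light intensity values":

```python
def order_lanes(lane_estimates):
    """Rank checkout lanes by estimated checkout period (shortest
    first) and assign each lane a light intensity proportional to
    its estimated wait."""
    ranked = sorted(lane_estimates, key=lane_estimates.get)
    longest = max(lane_estimates.values())
    lights = {lane: wait / longest for lane, wait in lane_estimates.items()}
    return ranked, lights
```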
  • Patent number: 10248216
    Abstract: A method and a device for operating a terminal device with gestures are provided. The method includes obtaining a gesture video segment comprising a preset number of frames of images, obtaining gesture track information of a user according to the location of a finger of the user in each frame of image of the gesture video segment, searching a preset correspondence between gesture track information and operations according to the gesture track information, obtaining the operation corresponding to the gesture track information of the user, and performing the operation. With the technical scheme of the present disclosure, the gesture track information of the user is obtained by analyzing each frame of image in the obtained gesture video segment. The gesture track information of the user in a two-dimensional plane is considered, and the corresponding operation is obtained via the gesture track information.
    Type: Grant
    Filed: February 6, 2015
    Date of Patent: April 2, 2019
    Assignee: Tencent Technology (Shenzhen) Company Limited
    Inventor: Jian Liu
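The per-frame finger locations reduce to a coarse track string, which is then looked up in a preset track-to-operation table. A sketch; the direction coding and the operation names are assumptions, not the patent's scheme:

```python
def finger_track(frames):
    """Reduce per-frame finger (x, y) locations to a direction string
    (image y grows downward, so positive dy is 'D')."""
    moves = []
    for (x0, y0), (x1, y1) in zip(frames, frames[1:]):
        dx, dy = x1 - x0, y1 - y0
        if abs(dx) >= abs(dy):
            moves.append("R" if dx > 0 else "L")
        else:
            moves.append("D" if dy > 0 else "U")
    return "".join(moves)

# Hypothetical preset correspondence between tracks and operations.
GESTURE_OPS = {"RR": "next_page", "LL": "prev_page"}

def operation_for(frames):
    """Look the track string up in the preset table; None if unknown."""
    return GESTURE_OPS.get(finger_track(frames))
```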
  • Patent number: 10250777
    Abstract: An image processing apparatus includes an image display unit, an area specifying unit, an area display unit, a reference specifying unit, and a reference display unit. The image display unit displays a first image and a second image. The area specifying unit specifies a first area which is at least a portion of the first image. The area display unit displays a second area on the second image. The second area corresponds to the first area. The reference specifying unit specifies a first reference in the first area. The reference display unit displays a second reference in the second area in such a manner that a relative position of the first reference with respect to the first area matches a relative position of the second reference with respect to the second area.
    Type: Grant
    Filed: April 27, 2015
    Date of Patent: April 2, 2019
    Assignee: FUJI XEROX CO., LTD.
    Inventor: Miyuki Iizuka
  • Patent number: 10242441
    Abstract: According to an aspect, there is provided an apparatus for identifying living skin tissue in a video sequence, the apparatus comprising a processing unit configured to receive a video sequence, the video sequence comprising a plurality of image frames; divide each of the image frames into a plurality of frame segments, wherein each frame segment is a group of neighboring pixels in the image frame; form a plurality of video sub-sequences, each video sub-sequence comprising a frame segment from two or more of the plurality of image frames; analyze the plurality of video sub-sequences to determine a pulse signal for each video sub-sequence; determine a similarity matrix based on pairwise similarities for each determined pulse signal with each of the other determined pulse signals; and identify areas of living skin tissue in the video sequence from the similarity matrix.
    Type: Grant
    Filed: May 18, 2016
    Date of Patent: March 26, 2019
    Assignee: KONINKLIJKE PHILIPS N.V.
    Inventors: Wenjin Wang, Gerard De Haan
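The similarity-matrix step rests on the observation that pulse signals from different patches of the same person's skin beat in step. A sketch using Pearson correlation as the pairwise similarity (the abstract does not name the measure, and the 0.8 threshold is assumed):

```python
import math

def pearson(a, b):
    """Pearson correlation of two equal-length pulse signals."""
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    cov = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    sa = math.sqrt(sum((x - ma) ** 2 for x in a))
    sb = math.sqrt(sum((y - mb) ** 2 for y in b))
    return cov / (sa * sb)

def similarity_matrix(signals):
    return [[pearson(a, b) for b in signals] for a in signals]

def skin_segments(signals, threshold=0.8):
    """Segments whose pulse signal correlates strongly with at least
    one other segment's are identified as living skin tissue."""
    m = similarity_matrix(signals)
    return [i for i, row in enumerate(m)
            if any(v > threshold for j, v in enumerate(row) if j != i)]
```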
  • Patent number: 10242502
    Abstract: The present disclosure is directed toward systems and methods for generating and providing an augmented reality overlay for display in connection with an augmented reality display device. For example, systems and methods described herein identify a user being viewed through an augmented reality display device, and builds an augmented reality overlay for the user that is displayed on a view of the user through the augmented reality display device. Systems and methods described herein build the augmented reality overlay based on the location of the augmented reality display device, and on other networking system information including a networking system relationship between the user wearing the augmented reality display device and the user who is being looked at through the augmented reality display device.
    Type: Grant
    Filed: July 27, 2017
    Date of Patent: March 26, 2019
    Assignee: FACEBOOK, INC.
    Inventor: Amod Ashok Dange
  • Patent number: 10244228
    Abstract: Embodiments of the invention describe apparatuses, systems, and methods related to data capture of objects and/or an environment. In one embodiment, a user can capture time-indexed three-dimensional (3D) depth data using one or more portable data capture devices that can capture time-indexed color images of a scene with depth information and location and orientation data. In addition, the data capture devices may be configured to capture a spherical view of the environment around the data capture device.
    Type: Grant
    Filed: October 8, 2015
    Date of Patent: March 26, 2019
    Assignee: Aemass, Inc.
    Inventor: Marshall Reed Millett
  • Patent number: 10243753
    Abstract: In the present disclosure, methods, systems, and non-transitory computer readable medium are described whereby images are obtained at a server or another suitable computing device. The images may contain metadata indicating the time, date, and geographic location of capture. In one aspect of the present disclosure, this metadata may be used to determine that the images were captured at an event that is still occurring (in-process event). Users who have uploaded images that are captured at the event may be invited to join an image sharing pool. In response to accepting the invitation, the images provided by the user may be contributed to the image sharing pool, and other users having also joined the image sharing pool will be able to access the newly contributed images. The event associated with the sharing pool may be detected by the system or proposed by a moderator.
    Type: Grant
    Filed: December 18, 2014
    Date of Patent: March 26, 2019
    Assignee: Ikorongo Technology, LLC
    Inventor: Hugh Blake Svendsen
  • Patent number: 10242291
    Abstract: An image processor device includes a computer processor unit (CPU), at least one memory connected to the CPU, and a device for transferring images to the CPU. The memory contains an image-processing program for processing images showing at least one person. The program performs the following operations: detecting at least a face in each image and extracting therefrom a biometric template of the face; for each image, storing in a database an image reference, the biometric template, and if possible context information for the image; comparing the biometric templates corresponding to different image references with one another and associating together the image references for which the comparison has a similarity score greater than a predetermined threshold; and searching for context information corresponding to at least one of the references of the associated images, and if there is corresponding context information, establishing a link between the associated images.
    Type: Grant
    Filed: February 8, 2017
    Date of Patent: March 26, 2019
    Assignee: IDEMIA IDENTITY & SECURITY
    Inventors: Laurent Lambert, Marie Jarlegan, Laurent Rostaing
  • Patent number: 10238976
    Abstract: A system for providing an interactive experience for multiple game-players. The experience is provided in a light-controlled setting where the game-players may wear or hold one or more of various toys (e.g., gloves). The system detects and recognizes the toy along with gestures and pointing efforts performed by the game-player and the system generates an effect based on the type of toy and the type of gesture. The system also generates one or more visual targets that are visible to the game-player such as projections, holograms, and displays of one or more of various fantasy virtual adversaries. The generated effects may include any combination of sensory effects including visual, audio, tactile, and smell/taste. The generated effect may be directed based on the pointing efforts of the game-player. The system may then register the effect on the target, if the pointing efforts intersect with the virtual location of the target.
    Type: Grant
    Filed: July 7, 2016
    Date of Patent: March 26, 2019
    Assignee: DISNEY ENTERPRISES, INC.
    Inventors: Jonathan Ackley, Christopher J. Purvis, Scott Frazier Watson, Mark A. Reichow, Kyle Prestenback
  • Patent number: 10235565
    Abstract: A system and methodologies for neuromorphic vision simulate conventional analog NM system functionality and generate digital NM image data that facilitate improved object detection, classification, and tracking so as to detect and predict movement of a vehicle occupant.
    Type: Grant
    Filed: October 6, 2017
    Date of Patent: March 19, 2019
    Assignees: Volkswagen AG, Audi AG, Porsche AG
    Inventors: Edmund Dawes Zink, Douglas Allen Hauger, Lutz Junge, Luis Marcial Hernandez Gonzalez, Jerramy L. Gipson, Anh Vu, Martin Hempel, Nikhil J. George
  • Patent number: 10235576
    Abstract: An analysis method of lane stripe images, an image analysis device and a non-transitory computer readable medium thereof are provided to perform steps of: setting a reference point as a center to recognize the lane stripe image in a plurality of default directions; defining a plurality of preset sections onto the lane stripe image and determining a characteristic value of the lane stripe image in each of the preset sections whenever the lane stripe image is recognized in one of the default directions; determining a first feature parameter according to the characteristic values of the lane stripe image in the preset sections when the lane stripe image is recognized in at least one of the default directions; and determining an actual lane parameter of the lane stripe image according to at least the first feature parameter.
    Type: Grant
    Filed: October 21, 2016
    Date of Patent: March 19, 2019
    Assignee: WISTRON CORP.
    Inventors: Shang-Min Yeh, Ming-Che Ho, Yu-Wen Huang, Yueh-Chi Hung, Yi-Sheng Chao
  • Patent number: 10237548
    Abstract: Systems and methods are provided for alleviating bandwidth limitations of video transmission, enhancing the quality of videos at a receiver, and improving the VR/AR experience. In particular, an improved video transmission and rendering system is provided for generating high-resolution videos. The systems have therein a transmitter and a VR/AR receiver; the transmitter includes an outer encoder and a core encoder, while the receiver includes a core decoder and an outer decoder. The outer encoder is adapted to receive the video from a source and separately output a salient video and an encoded three-dimensional background, and the outer decoder is adapted to merge the background with the salient video thereby producing an augmented video. Also provided is a system that simulates pan-tilt-zoom (PTZ) operations without PTZ hardware.
    Type: Grant
    Filed: January 22, 2016
    Date of Patent: March 19, 2019
    Assignee: Huddly AS
    Inventors: Jan Tore Korneliussen, Anders Eikenes, Havard Pedersen Alstad, Stein Ove Eriksen, Eamonn Shaw
  • Patent number: 10237481
    Abstract: An imaging device operates as an event camera. The device includes an event sensor and a controller. The sensor comprises a plurality of photodiodes that asynchronously output data values corresponding to relative intensity changes within a local area. The controller populates an event matrix based in part on data values asynchronously received from the sensor and positions of photodiodes associated with the received data values over a first time period. The controller populates a change matrix based in part on a threshold intensity value and the photodiodes associated with the received data values over the first time period, and generates an image for the first time period using the event matrix and the change matrix.
    Type: Grant
    Filed: April 18, 2017
    Date of Patent: March 19, 2019
    Assignee: Facebook Technologies, LLC
    Inventors: Richard Andrew Newcombe, Michael Hall, Xinqiao Liu, Steven John Lovegrove, Julian Straub
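The event-matrix and change-matrix construction can be sketched as an accumulator over asynchronous (row, col, value) events, with the change matrix flagging pixels whose accumulated intensity change crosses the threshold. How the two matrices combine into an image is not specified in the abstract, so zeroing unflagged pixels is an assumption:

```python
def frame_from_events(events, shape, threshold=2):
    """Populate an event matrix from asynchronously received photodiode
    events, flag threshold-crossing pixels in a change matrix, and
    combine the two into an image for the time period."""
    rows, cols = shape
    event = [[0] * cols for _ in range(rows)]
    change = [[0] * cols for _ in range(rows)]
    for r, c, v in events:                 # v = relative intensity change
        event[r][c] += v
        if abs(event[r][c]) >= threshold:
            change[r][c] = 1
    # Assumed combination rule: keep accumulated values only where the
    # change matrix marks a significant pixel.
    return [[event[r][c] if change[r][c] else 0 for c in range(cols)]
            for r in range(rows)]
```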
  • Patent number: 10229510
    Abstract: The present disclosure provides systems and methods for tracking vehicles or other objects that are perceived by an autonomous vehicle. A vehicle filter can employ a motion model that models the location of the tracked vehicle using a vehicle bounding shape and an observation model that generates an observation bounding shape from sensor observations. A dominant vertex or side from each respective bounding shape can be identified and used to update or otherwise correct one or more predicted shape locations associated with the vehicle bounding shape based on a shape location associated with the observation bounding shape.
    Type: Grant
    Filed: July 19, 2018
    Date of Patent: March 12, 2019
    Assignee: Uber Technologies, Inc.
    Inventors: Brian C. Becker, J. Andrew Bagnell, Arunprasad Venkatraman, Karthik Lakshmanan
  • Patent number: 10227119
    Abstract: Systems and methods are described where odometry information that is obtained from a video camera mounted on an underwater vehicle is used to estimate the velocity of the underwater vehicle. The techniques described herein estimate the velocity of the underwater vehicle passively without emitting sound or other energy from the underwater vehicle.
    Type: Grant
    Filed: April 17, 2017
    Date of Patent: March 12, 2019
    Assignee: Lockheed Martin Corporation
    Inventors: Firooz A. Sadjadi, Sekhar C. Tangirala
  • Patent number: 10230713
    Abstract: A method, system and software for assessing an entity (15) at a first user terminal (13) connected to a data network (10). A control system (11) is used to receive an access request (101) from the entity (15) or an assessing user (16) at a second user terminal (14). The control system (11) invokes or facilitates transmission of a time-delimited sequence of unpredictable prompts (18) to the entity (15) for a performance of visible prompted actions (20). A video recording (21) of the prompted action performance is stored in a data store (61) and the control system performs an automated assessment of the video recording (21) by a gesture recognition system (67d) and generates an assessment signal respectively including a positive or negative indication of whether or not said entity (15) validly performed said prompted actions.
    Type: Grant
    Filed: March 5, 2018
    Date of Patent: March 12, 2019
    Assignee: 3FISH LIMITED
    Inventor: Jeremy Wyn-Harris
  • Patent number: 10223620
    Abstract: A rectangular region group storage unit stores a group of rectangular regions indicating portions to be recognized for a crowd state on an image. A crowd state recognition dictionary storage unit stores a dictionary of a discriminator acquired by machine learning using a plurality of pairs, each consisting of a crowd state image (an image which expresses a crowd state at a predetermined size and includes a person whose reference site appears as large as the reference-site size defined for that predetermined size) and a training label for the crowd state image. A crowd state recognition unit extracts regions indicated in the group of rectangular regions stored in the rectangular region group storage unit from a given image, and recognizes states of the crowds shot in the extracted images based on the dictionary.
    Type: Grant
    Filed: August 3, 2017
    Date of Patent: March 5, 2019
    Assignee: NEC Corporation
    Inventor: Hiroo Ikeda
  • Patent number: 10223797
    Abstract: A launch monitor having a camera can be used to measure a trajectory parameter of a ball. In one example, a method can include changing a mode of the camera from a low-speed mode to a high-speed mode. The camera can include an image sensor array having a plurality of pixels. The camera can generate a video frame using more pixels in the low-speed mode than in the high-speed mode. A first video frame can be received, the video frame comprising values captured during the high-speed mode from a first subset of the plurality of pixels. A second video frame can be received, the video frame comprising values captured during the high-speed mode from a second subset of the plurality of pixels. The trajectory parameter of the ball can be calculated using the first video frame and the second video frame.
    Type: Grant
    Filed: May 26, 2017
    Date of Patent: March 5, 2019
    Assignee: Taylor Made Golf Company, Inc.
    Inventors: Raymond Michael Tofolo, James Edward Michael Cornish, Craig Richard Slyfield
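The two-frame trajectory computation in the abstract above can be sketched in miniature: given ball centres located in two high-speed frames, the inter-frame displacement and the frame interval yield a speed estimate. The function name, the pixel-to-metre scale, and the reduction of "trajectory parameter" to planar speed are illustrative assumptions, not the patented method.

```python
import math

def ball_speed(center1, center2, frame_interval_s, metres_per_pixel):
    """Estimate ball speed (m/s) from centres detected in two
    consecutive high-speed frames. Both centres are (x, y) pixel
    coordinates; the linear scale and interval are assumed known
    from camera calibration."""
    dx = (center2[0] - center1[0]) * metres_per_pixel
    dy = (center2[1] - center1[1]) * metres_per_pixel
    return math.hypot(dx, dy) / frame_interval_s

# A 50-pixel displacement at 1 cm/pixel over a 10 ms interval:
print(ball_speed((0, 0), (30, 40), 0.01, 0.01))  # 50.0 m/s
```

A real launch monitor would also need the ball-detection step in each sparse-pixel frame, which the abstract does not detail.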
  • Patent number: 10223800
    Abstract: Examples disclosed herein relate to determining the presence of a quasi-periodic two-dimensional object. In one implementation, a processor determines peak points of a DFT of an image, where the peak points are points with a value above a threshold relative to surrounding points. The processor may then output information indicating the existence of a quasi-periodic two-dimensional object within the image based on the peak points.
    Type: Grant
    Filed: March 28, 2014
    Date of Patent: March 5, 2019
    Assignee: Hewlett-Packard Development Company, L.P.
    Inventors: Robert Ulichney, Matthew D Gaubatz, Stephen Pollard
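The peak-point rule this abstract describes (frequency samples whose magnitude stands well above their surroundings) can be sketched with a plain 2D FFT. The 3x3 neighbourhood, the 5x ratio, and the global-mean floor are illustrative choices; the patent leaves the exact threshold unspecified.

```python
import numpy as np

def dft_peak_points(image, ratio=5.0):
    """Return DFT peak points: frequency samples whose magnitude
    exceeds `ratio` times the mean of their 8 neighbours and also
    the global mean magnitude (both thresholds are illustrative)."""
    mag = np.abs(np.fft.fftshift(np.fft.fft2(image)))
    floor = mag.mean()
    peaks = []
    h, w = mag.shape
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            neigh = mag[y - 1:y + 2, x - 1:x + 2].copy()
            neigh[1, 1] = 0.0
            if mag[y, x] > floor and mag[y, x] > ratio * neigh.sum() / 8.0:
                peaks.append((y, x))
    return peaks

# A pure sine grating is quasi-periodic: its spectrum collapses to a
# symmetric pair of isolated peaks around the (shifted) DC centre.
ys, xs = np.mgrid[0:64, 0:64]
grating = np.sin(2 * np.pi * 8 * xs / 64)
print(dft_peak_points(grating))  # [(32, 24), (32, 40)]
```

An unstructured (non-periodic) image spreads its energy across the spectrum, so no sample clears both thresholds and the peak list comes back empty.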
  • Patent number: 10223592
    Abstract: A method for performing cooperative counting and an associated apparatus are provided, where the method is applicable to a counter system, and the counter system includes a plurality of cameras. The method includes: setting a plurality of points on an electronic map as a plurality of predetermined points according to user inputs; determining at least one rule related to the predetermined points according to rule information, where the rule information is stored in the counter system; respectively performing video object detection upon a plurality of images captured by the cameras to generate detection results respectively corresponding to the cameras; and merging the detection results respectively corresponding to the cameras, to count events complying with the at least one rule.
    Type: Grant
    Filed: May 15, 2017
    Date of Patent: March 5, 2019
    Assignee: Synology Incorporated
    Inventors: Szu-Lu Hsu, Yu-Hsiang Chiu, Szu-Hsien Lee
  • Patent number: 10223595
    Abstract: The present invention relates to determining a trajectory of a target from two streams of images obtained from two sources of images, a sub-image of each of the images of the two streams of images representing an overlapping area of a real scene. After having obtained a target path for each of a plurality of targets, from images of the sources of images, for each of the two streams of images, each target path being obtained from a target tracker associated with a source of image, each of the obtained target paths is split into a plurality of target path portions as a function of each potential target switch along the obtained target path. Then, the trajectory is generated as a function of a plurality of the target path portions.
    Type: Grant
    Filed: December 16, 2016
    Date of Patent: March 5, 2019
    Assignee: Canon Kabushiki Kaisha
    Inventors: Johann Citerin, Julien Sevin, Gerald Kergourlay
  • Patent number: 10223580
    Abstract: Methods and systems for video action recognition using poselet keyframes are disclosed. An action recognition model may be implemented to spatially and temporally model discriminative action components as a set of discriminative keyframes. One method of action recognition may include the operations of selecting a plurality of poselets that are components of an action, encoding each of a plurality of video frames as a summary of the detection confidence of each of the plurality of poselets for the video frame, and encoding correlations between poselets in the encoded video frames.
    Type: Grant
    Filed: September 12, 2013
    Date of Patent: March 5, 2019
    Assignee: Disney Enterprises, Inc.
    Inventors: Michail Raptis, Leonid Sigal
  • Patent number: 10225523
    Abstract: A device and a method for assisting security in the 3D tracking of objects of interest are provided. A proposed risk propagation module creates kinship links between the analyzed tracks, during interactions or during disappearance/reappearance of tracks, thus making it possible to diffuse the highest risks to each track concerned.
    Type: Grant
    Filed: April 4, 2014
    Date of Patent: March 5, 2019
    Assignee: COMMISSARIAT A L'ENERGIE ATOMIQUE ET AUX ENERGIES ALTERNATIVES
    Inventors: Yoann Dhome, Patrick Sayd
  • Patent number: 10222802
    Abstract: A method for assisting a driver of a two-wheeled vehicle. The method includes sensing and evaluating a driving environment of the motorcycle as a function of a driving state of the motorcycle, especially an inclination of the motorcycle, in order to detect objects in the driving environment; determining a hazard potential as a function of the detected objects and the driving state; and warning the driver and/or triggering a driver assistance system and/or a vehicle safety system of the motorcycle as a function of the determined hazard potential.
    Type: Grant
    Filed: December 20, 2016
    Date of Patent: March 5, 2019
    Assignee: ROBERT BOSCH GMBH
    Inventor: Alfred Kuttenberger
  • Patent number: 10220316
    Abstract: Provided are an information processing device, an information processing method, a program, an information storage medium, an information processing system, and a management device for allowing a user to request capture of a desired play image. A request section transmits capture condition data representing a condition for capturing a play image that indicates details of a game in progress. A confirmation process execution section performs a confirmation process for capture of the play image appropriate to the condition represented by the capture condition data.
    Type: Grant
    Filed: September 9, 2014
    Date of Patent: March 5, 2019
    Assignee: Sony Interactive Entertainment Inc.
    Inventors: Shizuto Fukuda, Shinji Aizawa, Hisao Wada
  • Patent number: 10223611
    Abstract: In one aspect, the present disclosure relates to a method for performing single-pass object detection and image classification. The method comprises receiving image data for an image in a system comprising a convolutional neural network (CNN), the CNN comprising a first convolutional layer, a last convolutional layer, and a fully connected layer; providing the image data to an input of the first convolutional layer; extracting multi-channel data from the output of the last convolutional layer; summing the extracted data to generate a general activation map; and detecting a location of an object within the image by applying the general activation map to the image data.
    Type: Grant
    Filed: March 8, 2018
    Date of Patent: March 5, 2019
    Assignee: Capital One Services, LLC
    Inventors: Micah Price, Jason Hoover, Geoffrey Dagley, Stephen Wylie, Qiaochu Tang
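The general activation map in this abstract is formed by summing the multi-channel output of the last convolutional layer into a single map, whose peak localizes the object. A minimal sketch, assuming a (C, H, W) feature tensor and a nearest-scaling mapping back to image coordinates; the names and the scaling rule are assumptions, not the patented procedure:

```python
import numpy as np

def general_activation_map(conv_features):
    """Sum a (C, H, W) last-convolutional-layer output across its
    channel axis into a single (H, W) general activation map."""
    return conv_features.sum(axis=0)

def locate_object(activation_map, image_shape):
    """Map the activation peak back to image coordinates with a
    simple per-axis scale (illustrative; the abstract does not fix
    how the map is applied to the image)."""
    gy, gx = np.unravel_index(np.argmax(activation_map),
                              activation_map.shape)
    sy = image_shape[0] / activation_map.shape[0]
    sx = image_shape[1] / activation_map.shape[1]
    return int(gy * sy), int(gx * sx)

# Toy feature map: every channel responds most strongly at cell (2, 5).
feats = np.zeros((8, 7, 7))
feats[:, 2, 5] = 1.0
amap = general_activation_map(feats)
print(locate_object(amap, (224, 224)))  # (64, 160)
```

Because the map is a plain channel sum of features already computed for classification, detection adds no second pass over the network, which is the "single-pass" point of the claim.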
  • Patent number: 10223803
    Abstract: A method for characterizing a scene by computing the 3D orientation of observed elements of the scene comprises a step of computing 3D points of the scene and further comprises the steps of: regularly quantifying the 3D points of the scene, along three axes, in a preset 3D grid of voxels; for each non-empty voxel of the 3D grid, computing N predefined statistical characteristics from the coordinates of the points contained in the voxel, where N is an integer higher than 1, and for each of the N characteristics, defining a 3D grid associated with the characteristic; for each 3D grid associated with a statistical characteristic, computing an integral 3D grid associated with the statistical characteristic, each voxel of the integral 3D grid comprising an integral of said statistical characteristic; and for each non-empty voxel of the 3D grid of voxels: defining a 3D rectangular parallelepipedal vicinity centered on the voxel and of predefined 3D dimensions, and eight vertices of the vicinity; for each of the N s
    Type: Grant
    Filed: November 21, 2016
    Date of Patent: March 5, 2019
    Assignee: COMMISSARIAT A L'ENERGIE ATOMIQUE ET AUX ENERGIES ALTERNATIVES
    Inventor: Mohamed Chaouch
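The integral 3D grid this abstract builds, one cumulative sum per axis, lets the sum of any statistic over an axis-aligned box of voxels be read from the eight box vertices by inclusion-exclusion. A sketch under the assumption of a dense NumPy grid (function names are illustrative):

```python
import numpy as np

def integral_grid(grid):
    """Build the integral 3D grid: a cumulative sum along each of
    the three axes, the 3D analogue of a summed-area table."""
    return grid.cumsum(axis=0).cumsum(axis=1).cumsum(axis=2)

def box_sum(ig, lo, hi):
    """Sum of the original grid over the inclusive box [lo, hi],
    read from the eight vertices of the box in the integral grid."""
    x0, y0, z0 = lo
    x1, y1, z1 = hi
    def v(x, y, z):
        return ig[x, y, z] if x >= 0 and y >= 0 and z >= 0 else 0
    return (v(x1, y1, z1)
            - v(x0 - 1, y1, z1) - v(x1, y0 - 1, z1) - v(x1, y1, z0 - 1)
            + v(x0 - 1, y0 - 1, z1) + v(x0 - 1, y1, z0 - 1)
            + v(x1, y0 - 1, z0 - 1)
            - v(x0 - 1, y0 - 1, z0 - 1))

g = np.arange(27).reshape(3, 3, 3)
ig = integral_grid(g)
print(box_sum(ig, (1, 1, 1), (2, 2, 2)) == g[1:3, 1:3, 1:3].sum())  # True
```

This is what makes the vicinity step cheap: once one integral grid exists per statistical characteristic, the statistic over any rectangular parallelepipedal vicinity costs eight lookups regardless of vicinity size.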
  • Patent number: 10223798
    Abstract: The present application relates to systems and methods used to characterize or verify the accuracy of a tracker comprising optically detectable features. The tracker may be used in spatial localization using an optical sensor. Characterization results in the calculation of a Tracker Definition that includes geometrical characteristics of the tracker. Verification results in an assessment of accuracy of a tracker against an existing Tracker Definition.
    Type: Grant
    Filed: May 26, 2017
    Date of Patent: March 5, 2019
    Assignee: INTELLIJOINT SURGICAL INC.
    Inventors: Andre Novomir Hladio, Richard Tyler Fanson, Luke Becker, Arash Abadpour, Joseph Arthur Schipper
  • Patent number: 10217196
    Abstract: An image processing apparatus capable of carrying out a refocusing process for an image. A plurality of pieces of unprocessed data on which a developing process has not been carried out are obtained. A developing process is carried out on each piece of unprocessed data to obtain a piece of processed data, and a subject recognition process is carried out on the processed data to identify a main subject from among a plurality of subjects included in the processed data. Based on a phase difference between the plurality of pieces of unprocessed data, the results of the developing process on the plurality of pieces of unprocessed data are synthesized together so that the main subject can be brought into focus.
    Type: Grant
    Filed: August 19, 2015
    Date of Patent: February 26, 2019
    Assignee: Canon Kabushiki Kaisha
    Inventor: Takahiro Matsushita
  • Patent number: 10216263
    Abstract: A display system includes a display alignment tracker configured to track the position of a first signal in a first waveguide and the position of a second signal in a second waveguide. The display alignment tracker optically multiplexes a portion of the first signal and a portion of the second signal into a combined optical signal and measures a differential between the first signal and the second signal. The differential is used to adjust the position, dimensions, or a color attribute of the first signal relative to the second signal.
    Type: Grant
    Filed: September 12, 2016
    Date of Patent: February 26, 2019
    Assignee: Microsoft Technology Licensing, LLC
    Inventors: Steven John Robbins, Drew Edward Steedly, Michael Edward Samples, Zhiqiang Liu, Andrew K. Juenger
  • Patent number: 10217005
    Abstract: Provided is a method for generating target detection information, including detecting target objects around the vehicle by multiple different types of sensors, and determining the detection targets representing a same target object, detected by the different types of sensors, by spatial position and time tracing. Taking the target object as the detection result, the target detection information generated for the detection result includes a spatial matching confidence of the detection result in the current detection period, a time matching confidence of the detection result, and target detection information on each of the detection targets representing the detection result, collected by the sensor detecting the detection target in a dimension corresponding to the sensor.
    Type: Grant
    Filed: June 21, 2017
    Date of Patent: February 26, 2019
    Assignees: NEUSOFT CORPORATION, NEUSOFT REACH AUTOMOTIVE TECHNOLOGY (SHANGHAI) CO., LTD.
    Inventors: Wei Liu, Xin Yang, Lu Wei
  • Patent number: 10216765
    Abstract: Systems, methods, and apparatuses are described for image based routing and confirmation. A routing request for a point of interest is received. A point of interest for the routing request may be identified from a geographic database. A message is sent to a user device, and the message includes an option to confirm or reject a destination based on the routing request that corresponds to the point of interest. When the destination is rejected, a set of point of interest images from one or more sources is selected. The set of point of interest images from the one or more sources may be sent to the user device.
    Type: Grant
    Filed: October 28, 2014
    Date of Patent: February 26, 2019
    Assignee: HERE Global B.V.
    Inventor: Katherine R. Reynertson
  • Patent number: 10217229
    Abstract: A method for tracking a moving target based on an optical flow method, including: providing video images and implementing pre-processing of the images to generate pre-processed images; implementing edge-detection of the pre-processed images and using an optical flow method to extract target information from the pre-processed images, and on the basis of a combination of the edge-detection information and the extracted target information, generating a complete moving target; using an optical flow method to perform estimation analysis of the moving target and using a forward-backward error algorithm based on feature point trace to eliminate light-generated false matching points; and creating a template image and implementing template image matching to track the moving target. The method and system for tracking a moving target based on an optical flow method have the advantages of accurate and complete extraction and the ability to implement stable tracking over a long period of time.
    Type: Grant
    Filed: November 9, 2015
    Date of Patent: February 26, 2019
    Assignee: CHINA UNIVERSITY OF MINING AND TECHNOLOGY
    Inventors: Deqiang Cheng, Leida Li, Hai Liu, Guopeng Zhang, Wei Chen, Songyong Liu
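The forward-backward error test mentioned in this abstract tracks each feature point forward to the next frame, tracks the result backward again, and rejects points that fail to return near where they started. A library-free sketch with the point tracker left as a parameter (a real system would plug in a pyramidal Lucas-Kanade tracker here); `toy_track` and its simulated false match are illustrative assumptions:

```python
import numpy as np

def forward_backward_filter(points, track, max_fb_error=1.0):
    """Keep only points whose forward-then-backward track returns
    within `max_fb_error` pixels of the start. `track(pts, src, dst)`
    is any point tracker mapping points from frame `src` to `dst`."""
    fwd = track(points, "prev", "next")   # prev frame -> next frame
    back = track(fwd, "next", "prev")     # next frame -> prev frame
    keep = np.linalg.norm(points - back, axis=1) < max_fb_error
    return points[keep], fwd[keep]

def toy_track(pts, src, dst):
    """Stand-in tracker: a uniform 1-pixel shift, except the point
    starting at x == 5 suffers a simulated false match (drift)."""
    shift = (np.array([1.0, 0.0]) if (src, dst) == ("prev", "next")
             else np.array([-1.0, 0.0]))
    out = pts + shift
    out[np.isclose(pts[:, 0], 5.0)] += 3.0
    return out

pts = np.array([[0.0, 0.0], [5.0, 5.0], [10.0, 10.0]])
kept, tracked = forward_backward_filter(pts, toy_track)
print(kept)  # the falsely matched point at (5, 5) is rejected
```

Consistently tracked points round-trip with near-zero error, while a false match (here, a light-induced mismatch in the abstract's terms) accumulates drift on the way back and is filtered out.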
  • Patent number: 10217226
    Abstract: Video analysis methods are described in which abnormalities are detected by comparing features extracted from a video sequence or motion patterns determined from the video sequence with a statistical model. The statistical model may be updated during the video analysis.
    Type: Grant
    Filed: January 19, 2016
    Date of Patent: February 26, 2019
    Assignee: Vi Dimensions Pte Ltd
    Inventors: Yosua Michael Maranatha, Christopher Tay Meng Keat, Raymond Looi, Ashish Sriram