Target Tracking Or Detecting Patents (Class 382/103)
  • Patent number: 10026270
    Abstract: The disclosed embodiments include methods and systems for detecting ATM skimmers and other unauthorized devices, such as hidden video cameras or keypad overlays, and/or possible damage to the ATM, based upon radio frequency (RF) signals emitted from the ATM and/or 3D image analysis.
    Type: Grant
    Filed: November 21, 2017
    Date of Patent: July 17, 2018
    Assignee: Capital One Services, LLC
    Inventors: William A. Hodges, Christopher R. Marshall, Pierrick Burgain
  • Patent number: 10026002
    Abstract: An object detection apparatus and the like, capable of detecting an object area with greater precision, are disclosed. The object detection apparatus is provided with: a part area indication means for indicating a part area, which is an area including a target part among the parts forming an object including a detection-target object, from a plurality of images including the object; an appearance probability distribution generation means for generating an appearance probability distribution and an absence probability distribution of the part area based on the appearance frequency of the part area associated with each position in the images; and an object determination means for determining, in an input image, the area including the object, with reference to the appearance probability distribution and the absence probability distribution of the part area.
    Type: Grant
    Filed: August 21, 2014
    Date of Patent: July 17, 2018
    Assignee: NEC CORPORATION
    Inventor: Tetsuo Inoshita
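    A minimal Python sketch of the general idea in the abstract of patent 10026002 above, under assumed simplifications: part-area detections are accumulated per image position and normalized into an appearance probability map, with the absence map taken as its complement. The function names, normalization, and data layout are illustrative, not the patented means.
      import numpy as np

      def appearance_probability_map(detections, image_shape):
          # detections: list of (row, col) positions where the part area was
          # found across the training images; returns per-position appearance
          # and absence probabilities (a simplified stand-in for the patent's
          # distributions).
          counts = np.zeros(image_shape, dtype=float)
          for r, c in detections:
              counts[r, c] += 1.0
          appearance = counts / max(len(detections), 1)  # frequency -> probability
          absence = 1.0 - appearance
          return appearance, absence

      # toy usage: a part seen mostly in the upper-left of a 4x4 grid
      app, ab = appearance_probability_map([(0, 0), (0, 1), (0, 0)], (4, 4))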
  • Patent number: 10027888
    Abstract: A system and method for identifying an interesting area within video data using gaze tracking is provided. The system determines gaze vectors for individual faces and determines if the gazes are aligned on a single object. For example, the system may map the gaze vectors to a three-dimensional space and determine a point at which the gaze vectors converge. The point may be mapped to the video data to determine the interesting area having a high priority. The system may generate a heat map indicating gaze tracking over time to delineate persistent interesting areas and instantaneous interesting areas.
    Type: Grant
    Filed: September 28, 2015
    Date of Patent: July 17, 2018
    Assignee: Amazon Technologies, Inc.
    Inventor: James Domit Mackraz
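    A minimal Python sketch of the ray-convergence step described in the abstract of patent 10027888 above: the point closest to several 3D gaze rays is found in a least-squares sense. The least-squares formulation and all names are assumptions for illustration, not the patented system.
      import numpy as np

      def gaze_convergence_point(origins, directions):
          # origins: (N, 3) eye positions; directions: (N, 3) gaze vectors.
          # Solves sum_i (I - d_i d_i^T) p = sum_i (I - d_i d_i^T) o_i, the
          # least-squares point nearest all of the rays.
          A = np.zeros((3, 3))
          b = np.zeros(3)
          for o, d in zip(np.asarray(origins, float), np.asarray(directions, float)):
              d = d / np.linalg.norm(d)
              M = np.eye(3) - np.outer(d, d)
              A += M
              b += M @ o
          return np.linalg.solve(A, b)

      # two viewers whose gazes meet near (1, 1, 2)
      p = gaze_convergence_point([[0.0, 0.0, 0.0], [2.0, 0.0, 0.0]],
                                 [[1.0, 1.0, 2.0], [-1.0, 1.0, 2.0]])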
  • Patent number: 10026179
    Abstract: An object may be identified if each measured value of a characteristic of the object is within a corresponding range of a set of characteristics. The object may then be classified as a true alarm or a false alarm by a user. Next, the measured values of the object may be added as a data point to a set of data points. Each of the data points is along a plurality of dimensions, and each of the dimensions corresponds to one of the set of characteristics. Further, each of the data points has been classified as a true alarm or a false alarm. The range of the set of characteristics may be updated to reduce a weighted score based on the number of true alarms that are outside a region along the plurality of dimensions and the number of false alarms inside the region for the set of data points. The region is defined based on numerical analysis of the set of data points. The weighted score may apply separate weights to the true alarms outside the region and the false alarms inside the region.
    Type: Grant
    Filed: February 23, 2016
    Date of Patent: July 17, 2018
    Assignee: ENTIT SOFTWARE LLC
    Inventors: David Bettinson, Tom Rosoman, Chris Smith, Unai Ayo Aresti
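    A minimal Python sketch of the weighted score described in the abstract of patent 10026179 above, assuming an axis-aligned region: true alarms outside the region and false alarms inside it are counted and combined with separate weights. The weights, the region shape, and all names are illustrative assumptions, not the claimed method.
      import numpy as np

      def weighted_alarm_score(points, labels, lower, upper,
                               w_true_outside=1.0, w_false_inside=1.0):
          # points: (N, D) measured characteristic values; labels: True for a
          # true alarm, False for a false alarm; lower/upper: (D,) bounds of
          # the axis-aligned region along the D dimensions.
          points = np.asarray(points, dtype=float)
          labels = np.asarray(labels, dtype=bool)
          inside = np.all((points >= np.asarray(lower)) &
                          (points <= np.asarray(upper)), axis=1)
          true_outside = np.sum(labels & ~inside)   # true alarms the region misses
          false_inside = np.sum(~labels & inside)   # false alarms the region keeps
          return w_true_outside * true_outside + w_false_inside * false_inside

      # updating the characteristic ranges would then search for bounds that
      # reduce this score over the stored set of classified data points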
  • Patent number: 10023311
    Abstract: A system and method for painting a structure. The system comprises: a computer vision processing system configured to obtain images of a target structure and generate first instruction signals for real-time communication to an unmanned aerial vehicle (UAV) having a paint fluid dispensing system provided thereon; a control device at the UAV, responsive to the received first instruction signals, for controlling real-time navigation of the UAV to a location at the target structure; and the computer vision processing system configured to generate second instruction signals for real-time communication to the UAV, wherein the control device at the UAV is configured to automatically actuate the paint fluid dispensing system to apply a paint fluid at the location on the target structure in response to the received second instruction signals, which configure the UAV to render a desired visual image on the target structure.
    Type: Grant
    Filed: March 10, 2016
    Date of Patent: July 17, 2018
    Assignee: International Business Machines Corporation
    Inventors: Jui-Hsin Lai, Yu Ma, Conglei Shi, Yinglong Xia
  • Patent number: 10026212
    Abstract: A system includes a head mounted display (HMD) device comprising at least one display and at least one sensor to provide pose information for the HMD device. The system further includes a sensor integrator module coupled to the at least one sensor, the sensor integrator module to determine a motion vector for the HMD device based on the pose information, and an application processor to render a first texture based on pose of the HMD device determined from the pose information. The system further includes a motion analysis module to determine a first velocity field having a pixel velocity for at least a subset of pixels of the first texture, and a compositor to render a second texture based on the first texture, the first velocity field and the motion vector for the HMD, and to provide the second texture to the display of the HMD device.
    Type: Grant
    Filed: November 20, 2015
    Date of Patent: July 17, 2018
    Assignee: Google LLC
    Inventors: Craig Donner, Paul Albert Lalonde, Evan Hardesty Parker
  • Patent number: 10026019
    Abstract: In a conventional method for detecting a stationary person by tracking persons across images, image feature quantities of objects other than persons are registered in advance, and a stationary region of an image is recognized as a person when its image feature quantity does not resemble the preregistered ones. This approach has the problem that it requires a large volume of image feature quantities to be collected and registered in advance. A person detecting device and a person detecting method according to the present invention determine a stationary region on the basis of the distance between the position of a first moving-object region extracted from an image and the position of a second moving-object region extracted from an image previous to that of the first moving-object region. The presence or absence of a person is then determined by using the image feature variation of the stationary region.
    Type: Grant
    Filed: January 28, 2015
    Date of Patent: July 17, 2018
    Assignee: Mitsubishi Electric Corporation
    Inventors: Seiji Okumura, Shinichiro Otani, Fumio Igarashi
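    A minimal Python sketch of the stationary-region test described in the abstract of patent 10026019 above: the distance between the positions of a moving-object region in the current image and in a previous image decides whether the region is stationary. The centroid representation and the threshold are illustrative assumptions.
      import numpy as np

      def is_stationary(region_now, region_prev, max_shift=3.0):
          # region_now / region_prev: (row, col) positions of the first and
          # second moving-object regions; max_shift is a hypothetical pixel
          # threshold, not a value from the patent.
          shift = np.hypot(region_now[0] - region_prev[0],
                           region_now[1] - region_prev[1])
          return shift <= max_shift

      # a region that barely moved is flagged as stationary; its image feature
      # variation would then be checked to decide whether a person is present
      print(is_stationary((120.0, 200.0), (121.5, 199.0)))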
  • Patent number: 10026438
    Abstract: There is provided an information processing apparatus including a reproduction section which acquires reproduction data corresponding to a reproduction position of content data by reproducing the content data, an output section which causes the reproduction data acquired by the reproduction section to be output from a region of an output device, the region corresponding to the information processing apparatus, a control signal acquisition section which acquires a control signal generated and output by a control device based on a predetermined input recognized by a recognition device, and an output control section which controls output of the reproduction data to the output device performed by the output section in accordance with the control signal acquired by the control signal acquisition section.
    Type: Grant
    Filed: September 13, 2010
    Date of Patent: July 17, 2018
    Assignee: SONY CORPORATION
    Inventors: Michinari Kohno, Fujio Nobori, Satoshi Miyazaki, Yoshiki Tanaka, Mitsuhiro Hosoki, Tomohiko Okazaki
  • Patent number: 10025972
    Abstract: Systems, methods, and non-transitory computer-readable media can acquire real-time image data depicting at least a portion of a face of a user of a computing system (or device). The real-time image data can be analyzed to determine a state associated with at least the portion of the face. An emoji can be provided based on the state associated with at least the portion of the face. The emoji can be inputted in a communication to be made by the user.
    Type: Grant
    Filed: November 16, 2015
    Date of Patent: July 17, 2018
    Assignee: Facebook, Inc.
    Inventors: Michael James Matas, Michael Waldman Reckhow, Yaniv Taigman
  • Patent number: 10026189
    Abstract: Example systems and methods are disclosed for determining the direction of an actor based on image data and sensors in an environment. The method may include receiving point cloud data for an actor at a location within the environment. The method may also include receiving image data of the location. The received image data corresponds to the point cloud data received from the same location. The method may also include identifying a part of the received image data that is representative of the face of the actor. The method may further include determining a direction of the face of the actor based on the identified part of the received image data. The method may further include determining a direction of the actor based on the direction of the face of the actor. The method may also include providing information indicating the determined direction of the actor.
    Type: Grant
    Filed: May 22, 2017
    Date of Patent: July 17, 2018
    Assignee: Google LLC
    Inventors: Paul Vincent Byrne, Daniel Aden
  • Patent number: 10026148
    Abstract: The image recognition method includes setting ROIs in an image of one image plane and searching for a region including a recognition-targeted image. The method uses dictionary data including feature quantities calculated from the blocks forming a recognition-targeted region that includes the recognition-targeted image, with entries for the blocks of that region. Focusing on one block in the image plane, feature quantities are calculated, which serve as block-of-interest feature quantities. For each ROI that includes the block, the inner products of the feature quantities of the corresponding block in the dictionary data and the block-of-interest feature quantities are accumulatively added into evaluation value intermediate data for that ROI. Whether or not each ROI includes the recognition-targeted image is determined from the evaluation value of the ROI once the accumulative addition has been finished for all of the blocks forming the ROI. The blocks of interest are sequentially scanned across the image plane.
    Type: Grant
    Filed: March 8, 2016
    Date of Patent: July 17, 2018
    Assignee: RENESAS ELECTRONICS CORPORATION
    Inventor: Shunsuke Okumura
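    A rough Python sketch of one way to read the accumulation scheme in the abstract of patent 10026148 above: each image block's feature vector is computed once, and its inner product with the corresponding dictionary block is added to the evaluation value of every ROI that contains the block. Treating every HxW window of blocks as an ROI, along with all names and the threshold test, is an assumption for illustration only.
      import numpy as np

      def roi_evaluation(block_features, dict_features, threshold):
          # block_features: (Rows, Cols, F) features, one vector per image block.
          # dict_features: (H, W, F) per-block features of the recognition target.
          rows, cols, _ = block_features.shape
          h, w, _ = dict_features.shape
          scores = np.zeros((rows - h + 1, cols - w + 1))
          for r in range(rows):                    # scan each block of interest once
              for c in range(cols):
                  f = block_features[r, c]
                  for dr in range(h):              # every ROI containing this block
                      for dc in range(w):
                          rr, cc = r - dr, c - dc  # that ROI's top-left block
                          if 0 <= rr < scores.shape[0] and 0 <= cc < scores.shape[1]:
                              scores[rr, cc] += f @ dict_features[dr, dc]
          return scores >= threshold               # True where the ROI is judged a hit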
  • Patent number: 10021371
    Abstract: An information handling system includes an RGB digital camera, a secondary digital camera that can be any type of two-dimensional or three-dimensional digital camera known in the art, and a processor. The processor executes code instructions of a gross-level input detection system to detect objects in images taken contemporaneously by the RGB digital camera and the secondary digital camera using object detection techniques, and to calculate the positions of regions of interest within those objects. Further, the processor executes code instructions to detect the orientation of regions of interest within identified objects, and to associate those orientations, changes in orientations, or movement of regions of interest with user commands.
    Type: Grant
    Filed: November 24, 2015
    Date of Patent: July 10, 2018
    Assignee: Dell Products, LP
    Inventors: Roman Joel Pacheco, Thomas Lanzoni, Deeder M. Aurongzeb
  • Patent number: 10017113
    Abstract: A peripheral image display device for displaying a peripheral image around a construction machine including at least two imaging units capturing the peripheral image around the construction machine; an image combining unit that combines at least two images obtained from the at least two imaging units to form a single projection image, the at least two images including an overlapping area overlapping between the at least two images; a guideline setting unit that sets a guideline, which includes at least two color information pieces, on the projection image formed by the image combining unit at a position a predetermined distance apart from the construction machine; a guideline generating unit that generates the guideline on the projection image based on setup information obtained by the guideline setting unit; and a display unit that displays the projection image having the guideline generated by the guideline generating unit on a screen.
    Type: Grant
    Filed: May 29, 2014
    Date of Patent: July 10, 2018
    Assignee: SUMITOMO(S.H.I.) CONSTRUCTION MACHINERY CO., LTD.
    Inventor: Takeya Izumikawa
  • Patent number: 10016896
    Abstract: Systems and methods for detection of people are disclosed. In some exemplary implementations, a robot can have a plurality of sensor units. Each sensor unit can be configured to generate sensor data indicative of a portion of a moving body at a plurality of times. Based on at least the sensor data, the robot can determine that the moving body is a person by at least detecting the motion of the moving body and determining that the moving body has characteristics of a person. The robot can then perform an action based at least in part on the determination that the moving body is a person.
    Type: Grant
    Filed: June 30, 2016
    Date of Patent: July 10, 2018
    Assignee: BRAIN CORPORATION
    Inventors: Oleg Sinyavskiy, Borja Ibarz Gabardos, Jean-Baptiste Passot
  • Patent number: 10011258
    Abstract: Provided is a monitoring apparatus including an acquiring unit that acquires information relating to an operation status when an operating body is operated, a determination unit that determines to which of plural categories of the operation status, classified based on a degree of occurrence of a malfunction of the operating body or a degree of danger of the operation status, the information relating to the operation status acquired by the acquiring unit belongs, and an attention calling unit that calls for attention to an operation of the operating body in a case where the information relating to the operation status is determined to belong to a specific category.
    Type: Grant
    Filed: July 28, 2016
    Date of Patent: July 3, 2018
    Assignee: FUJI XEROX CO., LTD.
    Inventors: Shoji Yamaguchi, Masayasu Takano, Daigo Kusano
  • Patent number: 10013507
    Abstract: Systems, apparatuses, and methods are provided for three-dimensional modeling of building roofs using three-dimensional point cloud data. Point cloud data of a roof of a building is received, and roof data points are selected or extracted from the point cloud data. Semantic type classifications are calculated for each selected roof data point. Roof styles are determined from the semantic type classifications, and a synthetic model of the roof and building is rendered based on the determined roof style.
    Type: Grant
    Filed: June 30, 2014
    Date of Patent: July 3, 2018
    Assignee: HERE Global B.V.
    Inventors: Xi Zhang, Xin Chen
  • Patent number: 10011228
    Abstract: A trailer backup assist system is provided herein. A plurality of imaging devices is configured to capture rear-vehicle images. A controller is configured to receive output from each imaging device. The controller processes the output from each imaging device to track a number of trailer features. The controller determines a confidence score for the output of each imaging device. The confidence score is determined based on the number of trailer features being tracked and a tracking quality of each trailer feature. The controller also determines a hitch angle between a vehicle and a trailer based on the output associated with whichever imaging device yielded the highest confidence score.
    Type: Grant
    Filed: April 25, 2016
    Date of Patent: July 3, 2018
    Assignee: Ford Global Technologies, LLC
    Inventors: Zheng Hu, Erick Michael Lavoie
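    A minimal Python sketch of the camera-selection idea in the abstract of patent 10011228 above: each imaging device gets a confidence score from the number of tracked trailer features and their tracking quality, and the hitch angle from the highest-scoring device is used. The score formula and data layout are illustrative stand-ins; the abstract does not specify them.
      def best_hitch_angle_estimate(camera_outputs):
          # camera_outputs: list of dicts such as
          # {'hitch_angle': degrees, 'qualities': [per-feature quality in 0..1]}
          def confidence(out):
              q = out['qualities']
              return len(q) * (sum(q) / len(q)) if q else 0.0
          best = max(camera_outputs, key=confidence)
          return best['hitch_angle'], confidence(best)

      angle, conf = best_hitch_angle_estimate([
          {'hitch_angle': 12.5, 'qualities': [0.9, 0.8, 0.85]},
          {'hitch_angle': 11.0, 'qualities': [0.4, 0.5]},
      ])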
  • Patent number: 10012989
    Abstract: The aims are to increase the frequency of executing a loop-closing process, to reduce the accumulated error in the local device position, and the like. A rotational image picker of an autonomous movement device picks up an image while performing a rotational action. An image memory stores information on the picked-up image. A map memory stores a created map. A position estimator estimates the local device position. A similar image searcher searches the image memory for an image that has a similarity level equal to or greater than a predetermined similarity level. A map corrector corrects the map stored in the map memory when the similar image searcher finds an image that has a similarity level equal to or greater than the predetermined similarity level.
    Type: Grant
    Filed: September 28, 2016
    Date of Patent: July 3, 2018
    Assignee: CASIO COMPUTER CO., LTD.
    Inventor: Tetsuro Narikawa
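    A minimal Python sketch of the similar-image search that triggers the map correction in the abstract of patent 10012989 above. Cosine similarity over image descriptors and the threshold are illustrative assumptions; the patent only requires a similarity level at or above a predetermined level.
      import numpy as np

      def find_loop_closure(current_descriptor, stored_descriptors, min_similarity=0.9):
          # Returns the index of the best-matching stored image and its
          # similarity, or (None, best) when no stored image reaches the
          # predetermined similarity level.
          cur = np.asarray(current_descriptor, float)
          cur = cur / np.linalg.norm(cur)
          best_idx, best_sim = None, -1.0
          for i, d in enumerate(stored_descriptors):
              d = np.asarray(d, float)
              sim = float(cur @ (d / np.linalg.norm(d)))
              if sim > best_sim:
                  best_idx, best_sim = i, sim
          if best_sim >= min_similarity:
              return best_idx, best_sim   # the map corrector would be invoked here
          return None, best_sim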
  • Patent number: 10013889
    Abstract: A method, computer program product, and system for enhancing an interaction between a teacher and a student are disclosed. The method includes receiving video images of a region of interest from a plurality of multi-functional devices; comparing the video images of the region of interest received from the plurality of multi-functional devices; detecting differences in the region of interest of at least one multi-functional device in comparison to the region of interest of the plurality of multi-functional devices; and providing a signal to the at least one multi-functional device based on the detected difference in the region of interest.
    Type: Grant
    Filed: March 31, 2014
    Date of Patent: July 3, 2018
    Assignee: KONICA MINOLTA LABORATORY U.S.A., INC.
    Inventors: Yibin Tian, Wei Ming
  • Patent number: 10013765
    Abstract: An image registration method includes determining a first binary descriptor of a first key point in a first image, determining a second binary descriptor of a second key point in a second image, determining a weighted Hamming distance between the first binary descriptor and the second binary descriptor, and registering the first key point with the second key point when the weighted Hamming distance is below a noise threshold. At least one element in the first or the second binary descriptor is a result of a comparison of a difference between intensities of at least two pixels of the first or the second image with a threshold. At least two weights of the weighted Hamming distance for comparing at least two elements of the first or the second binary descriptors are different.
    Type: Grant
    Filed: August 19, 2016
    Date of Patent: July 3, 2018
    Assignee: Mitsubishi Electric Research Laboratories, Inc.
    Inventors: Srikumar Ramalingam, Yuichi Taguchi, Bharath Sankaran
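    A minimal Python sketch of the registration test in the abstract of patent 10013765 above: a per-element weighted Hamming distance between two binary descriptors, with registration only when the distance falls below a noise threshold. The example weights and threshold are illustrative values, not from the patent.
      import numpy as np

      def weighted_hamming(desc_a, desc_b, weights):
          # desc_a, desc_b: 0/1 binary descriptors; weights: per-element weights,
          # at least two of which differ.
          a = np.asarray(desc_a, dtype=bool)
          b = np.asarray(desc_b, dtype=bool)
          return float(np.sum(np.asarray(weights, dtype=float) * (a != b)))

      def register_keypoints(desc_a, desc_b, weights, noise_threshold):
          # Register the two key points only when the weighted distance is
          # below the noise threshold.
          return weighted_hamming(desc_a, desc_b, weights) < noise_threshold

      # mismatches at elements 1 and 3 contribute 0.5 + 1.0 = 1.5, so no match
      print(register_keypoints([1, 0, 1, 1], [1, 1, 1, 0], [0.2, 0.5, 0.2, 1.0], 1.0))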
  • Patent number: 10015400
    Abstract: Disclosed are a mobile terminal, and a method for controlling the same. The mobile terminal includes: a body; a camera provided at the body; a display unit configured to display a preview screen as the camera is driven; a sensing unit configured to sense a moved degree of the body as the camera is driven; and a controller configured to control the camera to execute a timer capturing, if a face region of a subject is detected from the displayed preview screen, and if the sensed moved degree of the body satisfies a preset capturing condition. The controller may execute an operation corresponding to the timer capturing, and may consecutively generate a next capturing command at a reference time interval while the moved degree of the body is within a reference range.
    Type: Grant
    Filed: August 8, 2016
    Date of Patent: July 3, 2018
    Assignee: LG ELECTRONICS INC.
    Inventors: Byoungeul Kim, Seonghyok Kim
  • Patent number: 10013653
    Abstract: A method for extracting hierarchical features from data defined on a geometric domain is provided. The method includes applying at least an intrinsic convolution layer to said data, including the steps of applying a patch operator to extract a local representation of the input data around a point on the geometric domain and outputting the correlation of a patch resulting from the extraction with a plurality of templates. A system to implement the method is also described.
    Type: Grant
    Filed: January 26, 2016
    Date of Patent: July 3, 2018
    Assignee: UNIVERSITÀ DELLA SVIZZERA ITALIANA
    Inventors: Michael Bronstein, Davide Boscaini, Jonatan Masci, Pierre Vandergheynst
  • Patent number: 10013614
    Abstract: A system receives a subject video. The system identifies dynamic segments and semi-static segments within the subject video. The system determines matches between the dynamic segments of the subject video and reference dynamic segments of reference videos. Similarly, the system determines matches between the semi-static segments of the subject video and reference semi-static segments of reference videos. The system generates the match merge list including one or more entries. Each entry of the match merge list includes an indication of a grouped segment of the subject video including sequential occurrences of a dynamic segment and a semi-static segment of the subject video, and an indication of a reference grouped segment of a reference video including sequential occurrences of a reference dynamic segment and a reference semi-static segment of the reference video, where the reference dynamic segment matches the dynamic segment and the reference semi-static segment matches the semi-static segment.
    Type: Grant
    Filed: June 29, 2016
    Date of Patent: July 3, 2018
    Assignee: GOOGLE LLC
    Inventor: Johan Georg Granström
  • Patent number: 10008010
    Abstract: Various embodiments are generally directed to techniques for providing an augmented reality view in which eye movements are employed to identify items of possible interest for which indicators are visually presented in the augmented reality view. An apparatus to present an augmented reality view includes a processor component; a presentation component for execution by the processor component to visually present images captured by a camera on a display, and to visually present an indicator identifying an item of possible interest in the captured images on the display overlying the visual presentation of the captured images; and a correlation component for execution by the processor component to track eye movement to determine a portion of the display gazed at by an eye, and to correlate the portion of the display to the item of possible interest. Other embodiments are described and claimed.
    Type: Grant
    Filed: September 12, 2013
    Date of Patent: June 26, 2018
    Assignee: INTEL CORPORATION
    Inventors: Ron Ferens, Gila Kamhi, Barak Hurwitz, Amit Moran, Dror Reif
  • Patent number: 10009598
    Abstract: A system, method, and computer-readable medium are disclosed for dynamically controlling a multi-modal camera system to take advantage of the benefits of sensing gestures with a 2D camera, while overcoming the challenges associated with 2D cameras for performing gesture detection. In certain embodiments, the multi-modal camera system includes an RGB camera and a depth camera, thus providing both a 2D and a 3D capture mode.
    Type: Grant
    Filed: May 1, 2015
    Date of Patent: June 26, 2018
    Assignee: DELL PRODUCTS L.P.
    Inventors: Roman J. Pacheco, Keith M. Alfano
  • Patent number: 10005639
    Abstract: A method includes generating a depth stream from a scene associated with a conveyance device; processing, by a computing device, the depth stream to obtain depth information; recognizing a gesture based on the depth information; and controlling the conveyance device based on the gesture.
    Type: Grant
    Filed: August 15, 2013
    Date of Patent: June 26, 2018
    Assignee: OTIS ELEVATOR COMPANY
    Inventors: Hongcheng Wang, Arthur Hsu, Alan Matthew Finn, Hui Fang
  • Patent number: 10007846
    Abstract: An image processing method for a picture of a participant, photographed in an event, such as a marathon race, increases the accuracy of recognition of a race bib number by performing image processing on a detected race bib area, and associates the recognized race bib number with a person included in the picture. This image processing method detects a person from an input image, estimates an area in which a race bib exists based on a face position of the detected person, detects an area including a race bib number from the estimated area, performs image processing on the detected area to thereby perform character recognition of the race bib number from an image subjected to image processing, and associates the result of character recognition with the input image.
    Type: Grant
    Filed: October 6, 2016
    Date of Patent: June 26, 2018
    Assignee: CANON IMAGING SYSTEMS INC.
    Inventor: Yasushi Inaba
  • Patent number: 10007989
    Abstract: To acquire information relating to a vessel wall thickness by: acquiring interference signal sets of a plurality of frames including interference signal sets corresponding to a plurality of frames forming an image of the same cross section of a fundus; generating 3-D tomographic image data on the fundus from the interference signal sets of the plurality of frames; generating 3-D motion contrast data in the fundus from the interference signal sets corresponding to the plurality of frames that form the same cross section; extracting a vessel from the fundus based on the 3-D tomographic image data or the 3-D motion contrast data; detecting a coordinate of an outer surface of a vessel wall of the vessel based on the 3-D tomographic image data; and detecting a coordinate of an inner surface of the vessel wall of the vessel based on the 3-D motion contrast data.
    Type: Grant
    Filed: February 24, 2017
    Date of Patent: June 26, 2018
    Assignee: Canon Kabushiki Kaisha
    Inventors: Tomoyuki Ikegami, Tomoyuki Makihira
  • Patent number: 10007841
    Abstract: A facial recognition solution is disclosed that includes adding a specified number of pixels to an edge area of a digital image to acquire an enhanced digital image and then performing a facial recognition process on the enhanced digital image to determine a human face from the digital image.
    Type: Grant
    Filed: June 28, 2016
    Date of Patent: June 26, 2018
    Assignee: Xiaomi Inc.
    Inventors: Zhijun Chen, Pingze Wang, Qiuping Qin
  • Patent number: 10001959
    Abstract: The present disclosure is directed to a method and user interface for redirecting print jobs. The method involves receiving a notification indicating that execution of a print job at a first printing device failed. The method also involves displaying a network printing device map on a display unit in response to receiving the notification. The network printing device map is a graphical representation of a network topology of a plurality of printing devices within a local network. The method further involves receiving an input gesture indicative of a selection of a second printing device of the plurality of printing devices with which to execute the print job. Additionally, the method involves causing the first printing device to transmit the print job to the second printing device upon receiving the input gesture.
    Type: Grant
    Filed: April 11, 2017
    Date of Patent: June 19, 2018
    Assignee: KYOCERA DOCUMENT SOLUTIONS INC.
    Inventor: Keisuke Fukushima
  • Patent number: 10002464
    Abstract: The present invention relates to a light field light source orientation method for augmented reality and virtual reality, and a front-end device. The method comprises: A. identifying a target marker; B. tracking the target marker; C. analyzing the pixel color shape on the marker object; D. analyzing color difference zones on the marker object and analyzing the cast shape to calculate the direction of the environmental light source; E. pushing the light source direction data to an augmented reality object; and F. compensating and adjusting the imaging of the augmented reality object. The present invention can collect surrounding environmental factors, such as the light source direction, so as to project a computer-generated object into a real environment with a shadow consistent with reality, making the augmented reality environment more realistic and giving the object a realistic shadow effect.
    Type: Grant
    Filed: December 16, 2016
    Date of Patent: June 19, 2018
    Assignee: PACIFIC FUTURE LIMITED
    Inventors: Ian Roy Thew, Kien Yi Lee
  • Patent number: 9996743
    Abstract: Methods, systems, and media for detecting gaze locking are provided. In some embodiments, methods for gaze locking are provided, the methods comprising: receiving an input image including a face; locating a pair of eyes in the face of the input image; generating a coordinate frame based on the pair of eyes; identifying an eye region in the coordinate frame; generating, using a hardware processor, a feature vector based on values of pixels in the eye region; and determining whether the face is gaze locking based on the feature vector.
    Type: Grant
    Filed: November 26, 2013
    Date of Patent: June 12, 2018
    Assignee: The Trustees of Columbia University in the City of New York
    Inventors: Brian Anthony Smith, Qi Yin, Shree Kumar Nayar
  • Patent number: 9996947
    Abstract: This disclosure describes an information processing apparatus including a processor configured to acquire first image data, detect reference image data of a particular object from the first image data, store first time information indicating a first time when the reference image data is detected from the first image data or when the first image data is captured, acquire second image data, generate, when another reference image data of another particular object is detected from the second image data, second time information indicating a second time when the another reference image data is detected from the second image data or when the second image data is captured, generate movement information based on a difference between the first time information and the second time information, and determine whether a work is implemented in a place where the work has to be implemented.
    Type: Grant
    Filed: April 10, 2014
    Date of Patent: June 12, 2018
    Assignee: FUJITSU LIMITED
    Inventor: Susumu Koga
  • Patent number: 9994254
    Abstract: A method of assisting a driver of a vehicle in driving on a road includes obtaining a feature of the road using a sensor, selecting, via a controller, a parameter based on the feature of the road, determining, via the controller, whether the vehicle is approaching a boundary of the road, and providing a feedback operation to assist the driver in avoiding the boundary of the road, the feedback operation being based on the selected parameter.
    Type: Grant
    Filed: September 26, 2014
    Date of Patent: June 12, 2018
    Assignees: NISSAN NORTH AMERICA, INC., NISSAN MOTOR CO., LTD.
    Inventors: Yoshiro Takamatsu, Yuji Takada, Yoshinori Kusayanagi, Norimasa Kishi
  • Patent number: 9996752
    Abstract: A method and system associated with a camera view of a moving object in a scene. The method comprises detecting and tracking the moving object over multiple video frames, estimating an orientation of the moving object in each of the video frames, and constructing a cost map from the estimated orientations over the multiple video frames for finding a minimum-cost path over the cost map. The method also comprises determining regularized orientation estimates of the moving object from the minimum-cost path, and locating the vanishing point of the camera view based on an axis of the moving object from the minimum-cost path, the axis being formed by using the regularized orientation estimates.
    Type: Grant
    Filed: August 30, 2016
    Date of Patent: June 12, 2018
    Assignee: Canon Kabushiki Kaisha
    Inventors: Quang Tuan Pham, Geoffrey Richard Taylor
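    A minimal Python sketch of one way to find a minimum-cost path over an orientation cost map, as described in the abstract of patent 9996752 above: dynamic programming over (frame, orientation-bin) cells with a smoothness penalty between consecutive frames. The cost layout and the linear smoothness term are assumptions for illustration, not the claimed construction.
      import numpy as np

      def regularized_orientations(cost_map, smooth_weight=1.0):
          # cost_map: (T, K) cost of assigning each of K orientation bins in
          # each of T frames (e.g. disagreement with the per-frame estimate).
          T, K = cost_map.shape
          bins = np.arange(K)
          acc = cost_map[0].astype(float).copy()
          back = np.zeros((T, K), dtype=int)
          for t in range(1, T):
              trans = smooth_weight * np.abs(bins[:, None] - bins[None, :])  # (prev, cur)
              total = acc[:, None] + trans
              back[t] = np.argmin(total, axis=0)            # best previous bin per bin
              acc = total[back[t], bins] + cost_map[t]
          path = np.zeros(T, dtype=int)
          path[-1] = int(np.argmin(acc))
          for t in range(T - 1, 0, -1):                     # backtrack the path
              path[t - 1] = back[t, path[t]]
          return path    # regularized orientation bin per frame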
  • Patent number: 9996741
    Abstract: In one embodiment, a method includes receiving a digital image captured by a mobile device; and using a processor of the mobile device: generating a first representation of the digital image, the first representation being characterized by a reduced resolution; generating a first feature vector based on the first representation; comparing the first feature vector to a plurality of reference feature matrices; classifying an object depicted in the digital image as a member of a particular object class based at least in part on the comparing; and determining one or more object features of the object based at least in part on the particular object class. Corresponding systems and computer program products are also disclosed.
    Type: Grant
    Filed: May 17, 2016
    Date of Patent: June 12, 2018
    Assignee: KOFAX, INC.
    Inventors: Jan W. Amtrup, Anthony Macciola, Steve Thompson, Jiyong Ma, Alexander Shustorovich, Christopher W. Thrasher
  • Patent number: 9996761
    Abstract: Briefly, embodiments disclosed herein relate to image cropping, for example of digital images.
    Type: Grant
    Filed: December 31, 2014
    Date of Patent: June 12, 2018
    Assignee: Oath Inc.
    Inventors: Daozheng Chen, Mihyoung Sally Lee, Brian Webb, Ralph Rabbat, Ali Khodaei, Paul Krakow, Dave Todd, Samantha Giordano, Max Chern
  • Patent number: 9998636
    Abstract: The present invention is a method of removing the illumination and background spectral components, thus isolating spectra from multi-spectral and hyper-spectral data cubes. The invention accomplishes this by first balancing the reference and sample data cubes for each spectrum associated with each location, or pixel/voxel, in the spatial image. The set of residual spectra produced in the balancing step is used to obtain and correct a new set of reference spectra that is used to remove the illumination and background components in a sample data cube.
    Type: Grant
    Filed: May 6, 2017
    Date of Patent: June 12, 2018
    Assignee: Center for Quantitative Cytometry
    Inventors: Abraham Schwartz, Philip Sherman, Peter Ingram
  • Patent number: 9996753
    Abstract: The present disclosure provides for a method, device, and computer-readable storage medium for performing a method for discerning a vehicle at an access control point. The method including obtaining a video sequence of the access control point; detecting an object of interest from the video sequence; tracking the object from the video sequence to obtain tracked-object data; classifying the object to obtain classified-object data; determining that the object is a vehicle based on the classified-object data; and determining that the vehicle is present in a predetermined detection zone based on the tracked-object data.
    Type: Grant
    Filed: June 16, 2017
    Date of Patent: June 12, 2018
    Assignee: Avigilon Fortress Corporation
    Inventors: Don Madden, Ethan Shayne
  • Patent number: 9998658
    Abstract: Certain aspects pertain to Fourier ptychographic imaging systems, devices, and methods such as, for example, high NA Fourier ptychographic imaging systems and reflective-mode NA Fourier ptychographic imaging systems.
    Type: Grant
    Filed: July 13, 2016
    Date of Patent: June 12, 2018
    Assignee: California Institute of Technology
    Inventors: Xiaoze Ou, Roarke W. Horstmeyer, Guoan Zheng, Changhuei Yang
  • Patent number: 9996744
    Abstract: A gaze tracking method, system, and non-transitory computer readable medium for tracking an eye gaze on a device including a monocular camera and an accelerometer, include calculating a parametric equation of a gaze vector passing through a pupil of a user and an eye ball center of the user, calculating a new offset plane equation of a screen of the device based on yaw, roll, and pitch offset readings from a stationary registered plane equation initially determined from the screen of the device, and calculating an intersection of the eye gaze vector with the new offset plane equation.
    Type: Grant
    Filed: June 29, 2016
    Date of Patent: June 12, 2018
    Assignee: INTERNATIONAL BUSINESS MACHINES CORPORATION
    Inventors: Karan Ahuja, Kuntal Dey, Seema Nagar, Roman Vaculin
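    A minimal Python sketch of the geometry in the abstract of patent 9996744 above: the gaze ray through the eye ball center and the pupil is intersected with the screen plane after the registered plane is re-oriented by the device's yaw, pitch, and roll offsets. The Z-Y-X Euler convention, rotation about the origin, and all names are assumptions for illustration.
      import numpy as np

      def rotation_matrix(yaw, pitch, roll):
          # Z-Y-X Euler rotation, angles in radians (one common convention).
          cy, sy = np.cos(yaw), np.sin(yaw)
          cp, sp = np.cos(pitch), np.sin(pitch)
          cr, sr = np.cos(roll), np.sin(roll)
          Rz = np.array([[cy, -sy, 0], [sy, cy, 0], [0, 0, 1]])
          Ry = np.array([[cp, 0, sp], [0, 1, 0], [-sp, 0, cp]])
          Rx = np.array([[1, 0, 0], [0, cr, -sr], [0, sr, cr]])
          return Rz @ Ry @ Rx

      def gaze_screen_intersection(eye_center, pupil, plane_point, plane_normal,
                                   yaw, pitch, roll):
          # Gaze ray: x(t) = eye_center + t * (pupil - eye_center); the screen
          # plane (point + normal) is rotated by the offsets, then intersected.
          R = rotation_matrix(yaw, pitch, roll)
          n = R @ np.asarray(plane_normal, dtype=float)
          p0 = R @ np.asarray(plane_point, dtype=float)
          e = np.asarray(eye_center, dtype=float)
          d = np.asarray(pupil, dtype=float) - e
          denom = n @ d
          if abs(denom) < 1e-9:
              return None                 # gaze is parallel to the screen
          t = (n @ (p0 - e)) / denom
          return e + t * d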
  • Patent number: 9990029
    Abstract: A method for interfacing with an interactive program is provided. Images of an interface object and a motion controller disposed in a gameplay environment are captured, a tag defined on a surface of the interface object. The captured images are analyzed to identify the tag on the surface of the interface object. Movement of the interface object is tracked by tracking the tag. Augmented images are generated by replacing, in the captured images, the tag with a virtual object, wherein generating the augmented images includes processing the tracked movement of the interface object to define movement of the virtual object. The captured images are analyzed to identify a pointing direction of the motion controller in the gameplay environment. The pointing direction of the motion controller is processed to identify a selection of a portion of the virtual object. The augmented images are presented on a display.
    Type: Grant
    Filed: April 11, 2016
    Date of Patent: June 5, 2018
    Assignee: Sony Interactive Entertainment Europe Limited
    Inventor: Masami Kochi
  • Patent number: 9990031
    Abstract: An indicating method and device for correcting failure of motion-sensing interaction tracking. The method includes: acquiring physical space coordinates of a border of an actual motion-sensing interaction range of a human body and physical space coordinates of one or more key-part points in the human body; if a failure of motion-sensing interaction tracking is determined, obtaining a current position of the human body relative to the border of the actual motion-sensing interaction range based on the physical space coordinates of the border of the actual motion-sensing interaction range and a physical space coordinate of at least one key-part point from the one or more key-part points acquired in a latest time; and scaling down the border of the actual motion-sensing interaction range and a distance of the current position relative to the border of the actual motion-sensing interaction range, and displaying them at a display interface through an auxiliary image.
    Type: Grant
    Filed: October 21, 2016
    Date of Patent: June 5, 2018
    Assignee: BOE Technology Group Co., Ltd.
    Inventor: Yingjie Li
  • Patent number: 9992514
    Abstract: A method for disoccluded region coding in a multiview video data stream by an entropy encoder, the method comprising the steps of: coding a block of a base view; storing the state and estimated probabilities in context models of the entropy encoder in a context storage module with a reference identifying the block of the base view; repeating the aforementioned steps of coding and storing for every block of the base view of the multiview video data stream; starting coding of a disoccluded region and dividing, into blocks, the disoccluded area of a side view associated with the base view; determining, for neighboring blocks of the currently coded block that have not been in the disoccluded area, a corresponding block in the base view, using a block correspondence database; when such a correspondence is determined, reading the previously stored state and estimated probabilities in context models of the entropy encoder for the corresponding block; copying all coding modes from the corresponding block to the neighboring block;
    Type: Grant
    Filed: June 29, 2015
    Date of Patent: June 5, 2018
    Assignee: POLITECHNIKA POZNANSKA
    Inventors: Marek Domanski, Tomasz Grajek, Damian Karwowski, Krzysztof Klimaszewski, Olgierd Stankiewicz, Jakub Stankowski, Krzysztof Wegner
  • Patent number: 9990587
    Abstract: A machine learning heterogeneous edge device, method, and system are disclosed. In an example embodiment, an edge device includes a communication module, a data collection device, a memory, a machine learning module, a group determination module, and a leader election module. The edge device analyzes collected data with a model, outputs a result, and updates the model to create a local model. The edge device communicates with other edge devices in a heterogeneous group. The edge device determines group membership and determines a leader edge device. The edge device receives a request for the local model, transmits the local model to the leader edge device, receives a mixed model created by the leader edge device performing a mix operation of the local model and a different local model, and replaces the local model with the mixed model.
    Type: Grant
    Filed: January 22, 2015
    Date of Patent: June 5, 2018
    Assignee: PREFERRED NETWORKS, INC.
    Inventors: Daisuke Okanohara, Justin B. Clayton, Toru Nishikawa, Shohei Hido, Nobuyuki Kubota, Nobuyuki Ota, Seiya Tokui
  • Patent number: 9990690
    Abstract: In an example, a method for tile-based processing by a display processor may include reading first foreground tile data of a foreground image from a first memory space. The method may include storing the read first foreground tile data into a second memory space. The method may include reading first background tile data of a background image from the first memory space. The method may include storing the read first background tile data into a third memory space. The method may include reading a subset of data of the first foreground tile data from the second memory space. The method may include reading a subset of data of the first background tile data from the third memory space.
    Type: Grant
    Filed: September 21, 2015
    Date of Patent: June 5, 2018
    Assignee: QUALCOMM Incorporated
    Inventors: Kumar Saurabh, Sreekanth Modaikkal, Anitha Madugiri Siddaraju, Naveenchandra Shetty Brahmavar
  • Patent number: 9984297
    Abstract: An object detection device may include: a candidate object selection unit suitable for selecting candidate objects from a stereo image transmitted from moving cameras; a global motion estimation unit suitable for estimating global motions representing movement of the moving cameras from the stereo image; and an object determination unit suitable for detecting a moving object from the stereo image based on the candidate objects and the global motions.
    Type: Grant
    Filed: December 4, 2015
    Date of Patent: May 29, 2018
    Assignees: SK Hynix Inc., KWANGWOON UNIVERSITY INDUSTRY ACADEMIC COLLABORATION FOUNDATION
    Inventors: Ji Sang Yoo, Gyu Cheol Lee
  • Patent number: 9986289
    Abstract: Methods and apparatus to count people are disclosed. An example apparatus includes a populator to populate a list with first characteristic datasets obtained from first image data representative of an environment during a first period of time, respective ones of the first characteristic datasets representative of a face detected in the environment during the first period of time; a comparator to compare the first characteristic datasets to each other to determine a first number of unique faces in the environment during the first period of time; and a discarder to delete the first characteristic datasets from the list when the first period of time has ended, the populator to re-populate the list with second characteristic datasets obtained from second image data representative of the environment during a second period of time subsequent to the first period of time.
    Type: Grant
    Filed: March 2, 2015
    Date of Patent: May 29, 2018
    Assignee: The Nielsen Company (US), LLC
    Inventors: Padmanabhan Soundararajan, Marko Usaj, Venugopal Srinivasan
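    A minimal Python sketch of the per-period unique-face count in the abstract of patent 9986289 above: characteristic datasets collected during one period are compared against each other, near-duplicates are merged, and the list is discarded when the period ends. The Euclidean metric and threshold are illustrative assumptions; the abstract does not specify the comparison.
      import numpy as np

      def count_unique_faces(characteristic_datasets, match_threshold=0.6):
          # characteristic_datasets: list of feature vectors, one per face
          # detection during the period; the list is cleared afterwards, as the
          # discarder deletes the datasets when the period has ended.
          uniques = []
          for d in characteristic_datasets:
              d = np.asarray(d, dtype=float)
              if all(np.linalg.norm(d - u) >= match_threshold for u in uniques):
                  uniques.append(d)
          characteristic_datasets.clear()
          return len(uniques)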
  • Patent number: 9984286
    Abstract: A method and an apparatus for detecting persons are disclosed. The method includes initially detecting the persons in a height-top-view; dividing the height-top-view into one or more regions, and estimating crowd density in each region; determining, based on the crowd density, visible regions of the initially detected persons in each of the regions; for each of the initially detected persons, extracting a first gradient feature and a second gradient feature of the person from the height-top-view, and a grayscale image or a color image corresponding to the height-top-view, respectively; for each of the initially detected persons, determining, based on the extracted first gradient feature and second gradient feature, using a previously constructed classifier corresponding to the determined visible region of the person, a confidence level of the initially detected person; and correcting, based on the confidence level, a detection result of the initially detected persons.
    Type: Grant
    Filed: May 6, 2016
    Date of Patent: May 29, 2018
    Assignee: Ricoh Company, Ltd.
    Inventors: Xin Wang, Shengyin Fan, Qian Wang, Jiadan Zhu
  • Patent number: 9984303
    Abstract: Provided is a technique for accurately separating the background of an image from an object in the background even when the lightness of an image contained in a video changes, thus efficiently collecting images that contain a target object to be detected, while also suppressing the amount of data communicated between a terminal device and a server. In the present invention, a terminal device accurately separates the background of an image and an object (i.e., a target object to be detected) in the background so as to simply detect the object in the background, and transfers to a server only candidate images that contain the detected object. Meanwhile, as such simple target object detection may partially involve erroneous detection, the server closely examines the candidate images to identify the target object to be detected, and thus recognizes the object.
    Type: Grant
    Filed: May 19, 2016
    Date of Patent: May 29, 2018
    Assignee: Hitachi, Ltd.
    Inventors: Hideharu Hattori, Tatsuhiko Kagehiro, Yoshifumi Izumi