Target Tracking Or Detecting Patents (Class 382/103)
  • Patent number: 10692379
    Abstract: An information processing apparatus according to an embodiment of the present technology includes a detection unit, an estimation unit, and a judgment unit. The detection unit detects a target object from an input image. The estimation unit estimates a posture of the detected target object. The judgment unit judges a possibility of the target object slipping on the basis of the estimated posture.
    Type: Grant
    Filed: August 26, 2016
    Date of Patent: June 23, 2020
    Assignee: SONY CORPORATION
    Inventors: Suguru Aoki, Taketo Akama, Hideki Oyaizu, Yasutaka Hirasawa, Yuhi Kondo
  • Patent number: 10692225
    Abstract: A moving object detection method and apparatus are disclosed. The method may include: obtaining a first image and a second image of a scene; determining a difference between the first image and the second image; performing a binarization operation on the difference between the first image and the second image, to generate a binary image; determining the number of pixels whose values are nonzero in each column of the binary image, to generate a column pixel histogram; and determining whether a moving object is present in the scene based on the column pixel histogram.
    Type: Grant
    Filed: March 9, 2018
    Date of Patent: June 23, 2020
    Assignee: SHANGHAI XIAOYI TECHNOLOGY CO., LTD.
    Inventors: Lili Zhao, Zhao Li
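The pipeline in this abstract (frame difference, binarization, column pixel histogram, presence decision) is concrete enough to sketch; the two thresholds below are illustrative assumptions, not values from the patent:

```python
import numpy as np

def detect_motion(frame1, frame2, bin_thresh=25, count_thresh=5):
    """Difference -> binarize -> column pixel histogram -> decision.
    Thresholds are illustrative, not taken from the patent."""
    # Absolute difference of the two grayscale frames
    diff = np.abs(frame1.astype(np.int16) - frame2.astype(np.int16))
    # Binarization: nonzero where the difference exceeds a threshold
    binary = (diff > bin_thresh).astype(np.uint8)
    # Column pixel histogram: count of nonzero pixels in each column
    column_hist = binary.sum(axis=0)
    # Declare a moving object if any column has enough changed pixels
    return bool((column_hist >= count_thresh).any()), column_hist
```

On a static scene the histogram stays flat and no object is reported; a bright moving region raises the counts in the columns it occupies.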
  • Patent number: 10692234
    Abstract: Methods and apparatus for making environmental measurements are described. In some embodiments, different devices are used to capture environmental information at different times, rates and/or resolutions. Environmental information, e.g., depth information, from multiple sources captured using a variety of devices is processed and combined. Some environmental information is captured during an event. Such information is combined, in some embodiments, with environmental information that was captured prior to the event. An environmental depth model is generated in some embodiments by combining, e.g., reconciling, depth information from at least two different sources including: i) depth information obtained from a static map, ii) depth information obtained from images captured by light field cameras, and iii) depth information obtained from images captured by stereoscopic camera pairs.
    Type: Grant
    Filed: February 12, 2016
    Date of Patent: June 23, 2020
    Assignee: NextVR Inc.
    Inventors: David Cole, Alan McKay Moss
  • Patent number: 10694110
    Abstract: An image processing device includes an inward state detection unit that detects, on the basis of an inward image acquired by photographing of a user side, a state of outside light in the inward image as an inward outside light state, and an outward state detection unit that detects, on the basis of an outward image acquired by photographing of an opposite side of the user side, a state of outside light in the outward image as an outward outside light state. Also included is a control unit that performs brightness adjustment processing related to brightness of the inward image according to a detection result of the inward outside light state and a detection result of the outward outside light state, and a recognition processing unit that performs gaze detection of the user on the basis of the inward image acquired by performance of the brightness adjustment processing.
    Type: Grant
    Filed: February 3, 2017
    Date of Patent: June 23, 2020
    Assignee: SONY CORPORATION
    Inventors: Kenya Michishita, Yoshiaki Iwai, Takeo Tsukamoto
  • Patent number: 10692238
    Abstract: Methods for presenting an image indicating a position for a person are disclosed. A method includes: determining, by a computing device, at least one free space in a location using at least one camera; determining, using the computing device, a new position for a first person in the location based upon the determined at least one free space in the location; and presenting an image to indicate the determined new position for the first person in the location.
    Type: Grant
    Filed: December 18, 2017
    Date of Patent: June 23, 2020
    Assignee: INTERNATIONAL BUSINESS MACHINES CORPORATION
    Inventors: Tomoka Mochizuki, Wen Lianzi, Munehiko Sato, Tomonori Sugiura
  • Patent number: 10692172
    Abstract: A method of plane tracking comprising: capturing by a camera a reference frame of a given plane from a first angle; capturing by the camera a destination frame of the given plane from a second angle different from the first angle; defining coordinates of matching points in the reference frame and the destination frame; calculating, using the first and second angles, first and second respective rotation transformations to a simulated plane parallel to the given plane; applying an affine transformation between the reference frame coordinates on the simulated plane and the destination frame coordinates on the simulated plane; and applying a projective transformation to the destination frame coordinates on the simulated plane to calculate the destination frame coordinates.
    Type: Grant
    Filed: December 27, 2016
    Date of Patent: June 23, 2020
    Assignee: Snap Inc.
    Inventors: Ozi Egri, Eyal Zak
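The chain of transformations in this abstract (rotation to a simulated plane, then an affine step, then a projective step) can be illustrated with 3x3 homogeneous matrices. The matrices below are arbitrary stand-ins; the real ones would be derived from the two camera angles and the matched points:

```python
import numpy as np

def apply_h(H, pt):
    """Apply a 3x3 homography to a 2D point (with homogeneous divide)."""
    x, y, w = H @ np.array([pt[0], pt[1], 1.0])
    return np.array([x / w, y / w])

# Arbitrary stand-ins for the transforms named in the abstract
R_ref = np.eye(3)                    # rotation: reference frame -> simulated plane
A = np.array([[1.2, 0.1,  3.0],      # affine step on the simulated plane
              [0.0, 0.9, -2.0],
              [0.0, 0.0,  1.0]])
P = np.array([[1.0,  0.0, 0.0],      # projective step to destination coordinates
              [0.0,  1.0, 0.0],
              [1e-3, 0.0, 1.0]])

# Chaining the three steps is a single matrix product
H = P @ A @ R_ref
pt = np.array([10.0, 20.0])
step_by_step = apply_h(P, apply_h(A, apply_h(R_ref, pt)))
composed = apply_h(H, pt)
```

Both paths give the same destination coordinates, which is why the rotation, affine, and projective stages can be treated as separate, composable steps.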
  • Patent number: 10691971
    Abstract: A method includes actuating a processor to apply an input image to a feature extractor including a plurality of layers, determine a third feature vector based on first feature vectors of the input image output by a first layer included in the feature extractor and second feature vectors of the input image output by a second layer in the feature extractor, and identify an object in the input image based on the third feature vector.
    Type: Grant
    Filed: April 20, 2017
    Date of Patent: June 23, 2020
    Assignee: Samsung Electronics Co., Ltd.
    Inventors: Hyun Sung Chang, Donghoon Sagong, Minjung Son, Kyungboo Jung, Young Hun Sung
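The abstract does not fix how the first and second layers' vectors combine into the third. One plausible reading, sketched here as an assumption, is pooling each layer's vectors and concatenating the results:

```python
import numpy as np

def fuse_features(first_vecs, second_vecs):
    """Pool the per-location feature vectors from each layer, then
    concatenate into a single third vector. Average pooling plus
    concatenation is an assumption, not the patent's stated rule."""
    v1 = np.mean(first_vecs, axis=0)   # pool the first layer's vectors
    v2 = np.mean(second_vecs, axis=0)  # pool the second layer's vectors
    return np.concatenate([v1, v2])    # third feature vector

first = np.array([[1.0, 2.0], [3.0, 4.0]])   # e.g. from an early layer
second = np.array([[10.0, 20.0, 30.0]])      # e.g. from a deeper layer
third = fuse_features(first, second)
```

The third vector keeps both coarse early-layer detail and deeper-layer semantics, and can feed a downstream identifier.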
  • Patent number: 10691969
    Abstract: An image data processing method includes receiving, from an image sensor, frame image data of a plurality of frames; receiving a plurality of control rules comprising a respective control rule for each of the frames, wherein each of the control rules identifies one of a plurality of process regions and one of a plurality of object detectors; identifying a region of interest in each frame by a location and a category, comprising applying the object detector identified by the respective control rule to the respective frame image data in the process region identified by the respective control rule; identifying a final region of interest based on the identified regions of interest; and reporting the final region of interest.
    Type: Grant
    Filed: October 16, 2018
    Date of Patent: June 23, 2020
    Assignee: EAGLESENS SYSTEMS CORPORATION
    Inventors: Weihua Xiong, Guangbin Zhang
  • Patent number: 10692228
    Abstract: A system determines spatial locations of pixels of an image. The system includes a processor configured to: receive location data from devices located within a hotspot; generate a density map for the hotspot including density pixels associated with spatial locations defined by the location data, each density pixel having a value indicating an amount of location data received from an associated spatial location; match the density pixels of the density map to at least a portion of the pixels of the image; and determine spatial locations of the at least a portion of the pixels of the image based on the spatial locations of the matching density pixels of the density map. In some embodiments, the image and density map are converted to edge maps, and a convolution is applied to the edge maps to match the density map to the pixels of the image.
    Type: Grant
    Filed: August 11, 2017
    Date of Patent: June 23, 2020
    Assignee: Mapbox, Inc.
    Inventor: Damon Burgett
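The matching step in the final sentence (convert both rasters to edge maps, then apply a convolution to match them) can be sketched as a brute-force cross-correlation search for the best alignment offset. The edge detector and search radius below are assumptions:

```python
import numpy as np

def edges(img):
    """Crude gradient-magnitude edge map (stand-in for the patent's
    unspecified edge detector)."""
    gy, gx = np.gradient(img.astype(float))
    return np.hypot(gx, gy)

def best_offset(image, density, search=5):
    """Slide the density edge map over the image edge map and return
    the offset with the highest correlation score."""
    e_img, e_den = edges(image), edges(density)
    h, w = e_den.shape
    best, best_dy_dx = -np.inf, (0, 0)
    for dy in range(search):
        for dx in range(search):
            window = e_img[dy:dy + h, dx:dx + w]
            if window.shape != e_den.shape:
                continue
            score = np.sum(window * e_den)   # correlation at this offset
            if score > best:
                best, best_dy_dx = score, (dy, dx)
    return best_dy_dx
```

Once the offset (or a full warp, in the general case) is known, each density pixel inherits the spatial location of the image pixel it lands on.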
  • Patent number: 10691925
    Abstract: Embodiments described herein provide various examples of a real-time face-detection, face-tracking, and face-pose-selection subsystem within an embedded vision system. In one aspect, a process for identifying near-duplicate-face images using this subsystem is disclosed. This process includes the steps of: receiving a determined best-pose-face image associated with a tracked face when the tracked face is determined to be lost; extracting an image feature from the best-pose-face image; computing a set of similarity values between the extracted image feature and each of a set of stored image features in a feature buffer, wherein the set of stored image features are extracted from a set of previously transmitted best-pose-face images; determining if any of the computed similarity values is above a predetermined threshold; and if no computed similarity value is above the predetermined threshold, transmitting the best-pose-face image to a server and storing the extracted image feature into the feature buffer.
    Type: Grant
    Filed: April 3, 2018
    Date of Patent: June 23, 2020
    Assignee: AltumView Systems Inc.
    Inventors: Him Wai Ng, Xing Wang, Yu Gao, Rui Ma, Ye Lu
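The dedup decision in this abstract reduces to a similarity test against a buffer of previously transmitted features. Cosine similarity and the 0.9 threshold below are illustrative assumptions, not the patent's choices:

```python
import numpy as np

def should_transmit(new_feat, feature_buffer, threshold=0.9):
    """Compare the new best-pose feature against buffered features;
    transmit (and buffer) only if no similarity exceeds the threshold."""
    for feat in feature_buffer:
        sim = np.dot(new_feat, feat) / (
            np.linalg.norm(new_feat) * np.linalg.norm(feat))
        if sim > threshold:
            return False             # near-duplicate: skip transmission
    feature_buffer.append(new_feat)  # store for future comparisons
    return True                      # novel face: transmit to the server
```

This keeps the server from receiving many near-identical best-pose images of the same person across tracking sessions.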
  • Patent number: 10691982
    Abstract: A method for vehicle damage identification, includes: obtaining a vehicle damage picture to be identified; inputting the vehicle damage picture into a plurality of pre-trained target detection models respectively, and obtaining corresponding detection results from the plurality of target detection models as a detection result set, wherein the detection result set comprises candidate bounding boxes detected by the plurality of target detection models and category prediction results of the candidate bounding boxes; determining an integrated feature vector of a first candidate bounding box of the candidate bounding boxes; and separately inputting integrated feature vectors corresponding to the candidate bounding boxes into a pre-trained classification model, and optimizing the detection result set according to output results of the classification model.
    Type: Grant
    Filed: January 30, 2020
    Date of Patent: June 23, 2020
    Assignee: Alibaba Group Holding Limited
    Inventor: Juan Xu
  • Patent number: 10692222
    Abstract: The linkage between the work of delimiting motions and the work of setting attribute information for those motions is improved. A work analysis device 10 includes a video image data acquisition unit 110 that acquires a video image obtained by imaging a series of motions performed by a worker, a delimitation operation reception unit 131 that receives a delimitation operation for setting a motion delimitation in the video image, a selection screen display control unit 122 that executes a process for displaying a selection screen for selecting attribute information to be associated with a video image range delimited by the delimitation operation at a timing when the delimitation operation is received by the delimitation operation reception unit 131, and a storage control unit 140 that stores the attribute information selected through the selection screen in association with the video image range.
    Type: Grant
    Filed: December 22, 2016
    Date of Patent: June 23, 2020
    Assignee: Broadleaf Co., Ltd.
    Inventors: Takahide Arao, Akira Ooka, Mitsuru Amami
  • Patent number: 10691940
    Abstract: A method and apparatus for detecting a blink. An embodiment includes: extracting two frames of face images from a video recording a face; extracting a first to-be-processed eye image and a second to-be-processed eye image respectively from the two frames of face images, and aligning the first to-be-processed eye image with the second to-be-processed eye image through a set marking point, the marking point being used to mark a set position of an eye image; acquiring a difference image between the aligned first to-be-processed eye image and the second to-be-processed eye image, the difference image being used to represent a pixel difference between the first to-be-processed eye image and the second to-be-processed eye image; and importing the difference image into a pre-trained blink detection model to obtain a blink detection label, the blink detection model being used to match the blink detection label corresponding to the difference image.
    Type: Grant
    Filed: September 12, 2018
    Date of Patent: June 23, 2020
    Assignee: Baidu Online Network Technology (Beijing) Co., Ltd.
    Inventor: Zhibin Hong
  • Patent number: 10691953
    Abstract: An intelligent door lock system is coupled to a door at a dwelling. A sensor is at the dwelling. The sensor is coupled to a drive shaft of a lock device to assist in locking and unlocking a lock of the lock device at the door. The lock device is coupled to the sensor and includes a bolt. An engine, an energy source and a memory are coupled together. A camera is coupled to or part of the intelligent door lock system. The camera is configured to define a safe zone in the dwelling into which an occupant, and a third person who is not a dwelling occupant, are allowed.
    Type: Grant
    Filed: April 26, 2017
    Date of Patent: June 23, 2020
    Assignee: August Home, Inc.
    Inventors: Jason Johnson, Herve Jacques Clement Letourneur, Christopher Kim, Christopher Dow
  • Patent number: 10685212
    Abstract: A method for identifying regions of interest (ROIs) includes receiving, by a processor from a video camera, a video image and computing, by the processor, an optical flow image, based on the video image. The method also includes computing, by the processor, a magnitude of optical flow image based on the video image and computing a histogram of optical flow magnitudes (HOFM) image for the video image based on the magnitude of optical flow image. Additionally, the method includes generating, by the processor, a mask indicating ROIs of the video image, based on the HOFM.
    Type: Grant
    Filed: June 25, 2018
    Date of Patent: June 16, 2020
    Assignee: TEXAS INSTRUMENTS INCORPORATED
    Inventors: Aishwarya Dubey, Hetul Sanghvi
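The sequence in this abstract (flow magnitude, histogram of optical flow magnitudes, mask) can be sketched as follows. Treating the lowest-magnitude histogram bin as background is an assumption, since the abstract does not specify how the HOFM yields the mask:

```python
import numpy as np

def roi_mask_from_flow(u, v, bins=16):
    """Magnitude of optical flow -> histogram of magnitudes (HOFM)
    -> ROI mask. The bin count and cutoff rule are illustrative."""
    mag = np.hypot(u, v)                      # magnitude of flow image
    hist, bin_edges = np.histogram(mag, bins=bins)
    # Treat the lowest-magnitude bin as background (no motion)
    cutoff = bin_edges[1]
    mask = mag >= cutoff                      # ROI: moving pixels
    return mask, hist
```

Given a dense flow field (u, v) per pixel, the mask marks regions with non-trivial motion as regions of interest.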
  • Patent number: 10685551
    Abstract: Provided is an information processing system, an information processing apparatus, and an information processing method, which are capable of achieving all of crime prevention, disaster prevention, and activation of local community by appropriately conveying an indoor state to the outside while distancing from physical contact. The information processing system includes a detecting unit that detects a feeling of one or more persons located indoors on the basis of sensor data obtained by sensing an indoor state; and a control unit configured to perform control such that a captured image obtained by imaging the indoor state is output to an outdoor display apparatus in a case in which a value of a feeling of at least any one person among the detected feelings satisfies a predetermined condition.
    Type: Grant
    Filed: April 13, 2017
    Date of Patent: June 16, 2020
    Assignee: SONY CORPORATION
    Inventor: Yoshinori Takagi
  • Patent number: 10682075
    Abstract: A dry eye syndrome alert system through posture and work detection, includes: a data collecting unit configured to detect a posture of a user to collect posture data of the user and preprocess the posture data; an eye blink frequency calculating unit configured to identify a posture change of the user on the basis of the posture data, calculate a motion variability on the basis of the posture change, and estimate an eye blink frequency of the user on the basis of the motion variability; and a diagnosis and alert output unit configured to store data regarding the estimated eye blink frequency, compare the estimated eye blink frequency with a preset reference value, and output an alert to the user when the estimated eye blink frequency is less than or equal to the preset reference value.
    Type: Grant
    Filed: October 18, 2017
    Date of Patent: June 16, 2020
    Assignee: SEOUL NATIONAL UNIVERSITY R&DB FOUNDATION
    Inventors: Min Ho Lee, Woo Jin Park, Seung Won Baek, Hae Seok Jeong, Taek Beom Yoo, Yoon Jin Lee, Hae Hyun Lee, Byoung Hyun Choi, Soo Min Hyun
  • Patent number: 10685460
    Abstract: A method of generating a photo-story is provided. The method includes generating tags that indicate properties of a context of photo images; predicting, based on the generated tags, scenes indicated by the photo images; and generating, based on the predicted scenes, a photo-story including a combination of the predicted scenes.
    Type: Grant
    Filed: January 13, 2016
    Date of Patent: June 16, 2020
    Assignee: SAMSUNG ELECTRONICS CO., LTD.
    Inventors: Adam Porecki, Andrzej Ruta, Juliusz Labecki, Michal Kudelski, Tomasz Rudny, Jakub Tkaczuk
  • Patent number: 10687169
    Abstract: Methods, computer program products, and systems are presented. The method, computer program products, and systems can include, for instance: obtaining position data for a plurality of mobile devices, wherein mobile devices of the plurality of mobile devices have associated identifiers, and wherein the obtained position data is data that has been derived using wirelessly emitted signals; examining data of the position data to determine that one or more user is present within a neighboring zone of a venue, the neighboring zone being a zone that neighbors a certain zone of the venue; predicting that at least one user of the one or more user within the neighboring zone intends to be in the certain zone; specifying a zone association of the at least one user as the certain zone of the venue; and providing one or more output based on the specifying.
    Type: Grant
    Filed: September 16, 2019
    Date of Patent: June 16, 2020
    Assignee: International Business Machines Corporation
    Inventors: Ryan M. Graham, Jeremy Greenberger, Ciaran Hannigan, Matthew J. Margolis, Kevin Roisin
  • Patent number: 10685449
    Abstract: The purpose of the present invention is to provide a surrounding environment recognition device that enables early detection of a moving three-dimensional object even in a situation where that object apparently overlaps another three-dimensional object. To this end, a surrounding environment recognition device for a moving body is equipped with: imaging units for photographing multiple images in a time series; a three-dimensional object detection unit for detecting three-dimensional objects on the basis of distances of the objects from the imaging units; a vector detection unit for tracking feature points within predetermined areas of the multiple images containing the three-dimensional objects, thereby detecting motion vectors of the feature points; and a moving three-dimensional object detection unit for detecting three-dimensional objects present in the areas on the basis of detection results of the vector detection unit.
    Type: Grant
    Filed: January 12, 2017
    Date of Patent: June 16, 2020
    Assignee: Hitachi Automotive Systems, Ltd.
    Inventors: Masayuki Takemura, Takeshi Shima, Takeshi Nagasaki
  • Patent number: 10685228
    Abstract: Index-based geospatial analysis may include applying a first and second index-based analysis for a set of imagery. The set of imagery may include location-specific index values used to form a histogram for a single index image (e.g., for a single surveyed field). This first analysis may be referred to as an “acre-to-acre” mapping, which may be useful for identifying differences in indices (e.g., NDVI vegetative health) of different parts of the field from a single day. A second “day-to-day” index-based analysis may be performed by calculating a histogram for each set of imagery from multiple days, combining the histograms, and generating a single equal-area index map. The index map can be applied to redistribute the histogram values within multiple days of data, which may provide a more useful map of variation in each individual image and changes between images.
    Type: Grant
    Filed: January 12, 2018
    Date of Patent: June 16, 2020
    Assignee: Sentera, Inc.
    Inventors: Andrew Muehlfeld, Justin Hui, Reid Plumbo, Joe Tinguely
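The "day-to-day" analysis in this abstract (combine histograms across days, derive one equal-area index map, redistribute each day's values) resembles quantile classification on the pooled index values. A sketch under that assumption; the class count is illustrative:

```python
import numpy as np

def equal_area_classes(day_images, n_classes=5):
    """Pool index values (e.g. NDVI) from every day, derive breakpoints
    that split the pooled histogram into equal-area classes, then apply
    the shared breakpoints to each day's image."""
    pooled = np.concatenate([img.ravel() for img in day_images])
    # Quantile breakpoints give each class the same pooled pixel count
    breaks = np.quantile(pooled, np.linspace(0, 1, n_classes + 1)[1:-1])
    # Classify every day's image against the same breakpoints, so a
    # given class means the same index range on every day
    return [np.digitize(img, breaks) for img in day_images]
```

Because every day shares one set of breakpoints, a pixel that moves to a higher class between days reflects a real index change, not a per-image rescaling.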
  • Patent number: 10685256
    Abstract: Methods and systems, including computer programs encoded on a computer storage medium, for generating and displaying object recognition state indicators during object recognition processing of an image. In one aspect, a method includes performing object recognition on an image displayed in an application environment of an application on a user device using an object recognition model having multiple object recognition states, including an identification state, where a candidate object in the image is positively identified, and one or more precursor states to the identification state. Each of the precursor states has a different respective indicator for display within the image during that precursor state that visually emphasizes the candidate object, and the identification state has a different respective indicator for display within the image during the identification state that visually emphasizes the positively identified object as being positively identified.
    Type: Grant
    Filed: April 17, 2019
    Date of Patent: June 16, 2020
    Assignee: Google LLC
    Inventors: Don Barnett, John DiMartile, Alison Lentz, Rachel Lara Been
  • Patent number: 10685059
    Abstract: A portable electronic device according to the present disclosure may include a memory configured to store video data, a touch screen configured to receive a touch input related to a summary of the video data, and a controller configured to generate the summary of the video data in response to the touch input, wherein the controller extracts objects included in the video data, and detects a section in which at least one of the extracted objects appears and then disappears, and edits the video data based on the detected section to generate a summary of the video data.
    Type: Grant
    Filed: October 6, 2017
    Date of Patent: June 16, 2020
    Assignee: LG Electronics Inc.
    Inventors: Younghan Kim, Jaeho Kwak, Jaehwan Park, Sanghyun Jung
  • Patent number: 10678260
    Abstract: Systems and methods are provided for controlling a vehicle. The vehicle includes a first device onboard the vehicle providing first data, a second device onboard the vehicle providing second data, one or more sensors onboard the vehicle, one or more actuators onboard the vehicle, and a controller. The controller detects a stationary condition based on output of the one or more sensors, obtains a first set of the first data from the first device during the stationary condition, filters horizontal edge regions from the first set resulting in a filtered set of the first data, obtains a second set of the second data during the stationary condition, determines one or more transformation parameter values based on a relationship between the second set and the filtered set, and autonomously operates the one or more actuators onboard the vehicle in a manner that is influenced by the transformation parameter values.
    Type: Grant
    Filed: July 6, 2017
    Date of Patent: June 9, 2020
    Assignee: GM GLOBAL TECHNOLOGY OPERATIONS LLC
    Inventor: Wei Mou
  • Patent number: 10679083
    Abstract: Disclosed is a liveness test method and apparatus. A liveness test apparatus determines a pre-liveness score based on a plurality of sub-images acquired from an input image, determines a post-liveness score based on a recognition model for recognizing an object included in the input image, and determines a liveness of the object based on any one or any combination of the pre-liveness score and the post-liveness score.
    Type: Grant
    Filed: March 15, 2018
    Date of Patent: June 9, 2020
    Assignee: Samsung Electronics Co., Ltd.
    Inventors: Jingtao Xu, ByungIn Yoo, Jae-Joon Han, Chao Zhang, Hao Feng, Yanhu Shan, Yaozu An, Changkyu Choi
  • Patent number: 10678887
    Abstract: A state change detection unit obtains the data occurrence probability based on the values of observation data and the value of a parameter of a prior distribution, obtains, based on the data occurrence probability, a run length probability distribution when the time-series observation data acquired up to the current time point is used as a condition, and detects a change in the state of a facility based on the run length probability distribution. An update unit updates the value of the parameter of the prior distribution using the value of the observation data, to generate the prior distribution for calculating the data occurrence probability at a next time point. When the current time point is determined to be a change point in the state of the facility, the state change detection unit also searches for a change indication point based on the run length probability distribution.
    Type: Grant
    Filed: December 6, 2016
    Date of Patent: June 9, 2020
    Assignee: OMRON Corporation
    Inventor: Hiroshi Tasaki
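The structure described here (a run length probability distribution updated from a data occurrence probability, with a prior-parameter update per run) matches generic Bayesian online change-point detection. The sketch below is that generic scheme with Gaussian data, known variance, and a constant hazard rate; it is not the patented method:

```python
import math

def bocpd_step(run_probs, means, counts, x, hazard=0.01, var=1.0):
    """One update of a minimal run-length filter: each hypothesis
    'the current run is r steps old' carries a running-mean estimate;
    the observation's occurrence probability reweights the hypotheses,
    and the hazard rate routes probability to run length 0 (a change)."""
    def occurrence(mean, n):
        # Predictive density of x under this run's current mean estimate
        s = var * (1.0 + 1.0 / (n + 1))
        return math.exp(-(x - mean) ** 2 / (2 * s)) / math.sqrt(2 * math.pi * s)

    weights = [p * occurrence(m, n)
               for p, m, n in zip(run_probs, means, counts)]
    new_probs = [hazard * sum(weights)] + [w * (1 - hazard) for w in weights]
    z = sum(new_probs)
    new_probs = [p / z for p in new_probs]
    # A change point restarts the estimate at x; surviving runs absorb x
    new_means = [x] + [(m * (n + 1) + x) / (n + 2)
                       for m, n in zip(means, counts)]
    new_counts = [0] + [n + 1 for n in counts]
    return new_probs, new_means, new_counts
```

Feeding in a series with a level shift moves the probability mass from long run lengths to short ones, which is the signal a state-change detector thresholds on.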
  • Patent number: 10678960
    Abstract: Example systems and methods for virtual visualization of a three-dimensional (3D) model of an object in a two-dimensional (2D) environment. The method may include capturing the 2D environment and adding scale and perspective to the 2D environment. Further, a user may select intersection points on a ground plane of the 2D environment to form walls, thereby converting the 2D environment into a 3D space. The user may further add 3D models of objects on the wall plane such that the objects may remain flush with the wall plane.
    Type: Grant
    Filed: April 9, 2018
    Date of Patent: June 9, 2020
    Assignee: Atheer, Inc.
    Inventor: Milos Jovanovic
  • Patent number: 10677448
    Abstract: Provided is a lighting device equipped with a video-image projection function that is more convenient for the user. The lighting device includes a lighting unit (200) for emitting illumination light, and a projection-type video-image display unit (100) for projecting a video image. The projection-type video-image display unit (100) includes a content editing means for editing the content of the video image to be displayed.
    Type: Grant
    Filed: September 28, 2015
    Date of Patent: June 9, 2020
    Assignee: MAXELL, LTD.
    Inventors: Katsuyuki Watanabe, Tatsuya Ishikawa, Maki Hanada, Sosuke Hisamatsu, Hiroyuki Urata, Takuya Shimizu
  • Patent number: 10679369
    Abstract: A method for performing real-time recognition of objects in motion includes receiving an input video stream from a camera, generating one or more depth maps for one or more frames of the input video stream, recognizing one or more objects in a current frame based on corresponding depth map using a machine learning algorithm, and displaying the one or more recognized objects in the current frame in one or more bounding boxes.
    Type: Grant
    Filed: June 12, 2018
    Date of Patent: June 9, 2020
    Assignee: Chiral Software, Inc.
    Inventors: Eric Jonathan Hollander, Michael Travis Remington
  • Patent number: 10679355
    Abstract: Described is a system for detecting moving objects. During operation, the system obtains ego-motion velocity data of a moving platform and generates a predicted image of a scene proximate the moving platform by projecting three-dimensional (3D) data into an image plane based on pixel values of the scene. A contrast image is generated based on a difference between the predicted image and an actual image taken at a next step in time. An actionable prediction map is then generated based on the contrast image. Finally, one or more moving objects may be detected based on the actionable prediction map.
    Type: Grant
    Filed: April 23, 2018
    Date of Patent: June 9, 2020
    Assignee: HRL Laboratories, LLC
    Inventors: Kyungnam Kim, Hyukseong Kwon, Heiko Hoffmann
  • Patent number: 10672113
    Abstract: The current document is directed to digital-image-normalization methods and systems that generate accurate intensity mappings between the intensities in two digital images. The intensity mapping generated from two digital images is used to normalize or adjust the intensities in one image in order to produce a pair of normalized digital images to which various types of change-detection methodologies can be applied in order to extract differential data. In one approach, a mapping model is selected to provide a basis for statistically meaningful intensity normalization. In this implementation, a genetic optimization approach is used to determine and refine model parameters. The implementation produces a hybrid intensity mapping that includes both intensity mappings calculated by application of the mapping model and intensity mappings obtained directly from comparison of the images.
    Type: Grant
    Filed: January 18, 2019
    Date of Patent: June 2, 2020
    Assignee: AI Analysis, Inc.
    Inventor: Julia Patriarche
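As a much simpler stand-in for the model-based, genetically optimized mapping this abstract describes, classic histogram specification also produces a monotone intensity mapping between two images and illustrates the normalization goal:

```python
import numpy as np

def intensity_mapping(src, ref):
    """Histogram specification: build a monotone map from src
    intensities to ref intensities by matching cumulative histograms,
    then apply it to normalize src. A simpler stand-in, not the
    document's genetically optimized model."""
    s_vals, s_counts = np.unique(src.ravel(), return_counts=True)
    r_vals, r_counts = np.unique(ref.ravel(), return_counts=True)
    s_cdf = np.cumsum(s_counts) / src.size
    r_cdf = np.cumsum(r_counts) / ref.size
    # For each source intensity, pick the reference intensity whose
    # CDF value is closest
    mapped = np.interp(s_cdf, r_cdf, r_vals)
    lut = dict(zip(s_vals, mapped))
    return np.vectorize(lut.get)(src)
```

After normalization, differencing the two images isolates genuine change rather than global intensity drift, which is the precondition for the change-detection step.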
  • Patent number: 10672052
    Abstract: The present invention relates to a method, an apparatus, a system, and a computer program in which a server offers product information corresponding to an image displayed on a client and the client displays the product information. A method for offering product information by a server to a client includes: generating a database of a plurality of products and extracting feature information on an image included in the database; receiving an image displayed on the client as a query image from the client; determining a matching product matched to feature information on the query image by retrieving feature information on the image in the database; and offering information on the matching product to the client. According to the present invention, the server may retrieve a database of product information using only an image displayed on the client and may offer product information to the client.
    Type: Grant
    Filed: January 25, 2017
    Date of Patent: June 2, 2020
    Assignee: ODD CONCEPTS INC.
    Inventors: Jung Tae Kim, Jin Myeong Ahn, Kyung Mo Koo
  • Patent number: 10671886
    Abstract: In one aspect, the present disclosure relates to a method for performing single-pass object detection and image classification. The method comprises receiving image data for an image in a system comprising a convolutional neural network (CNN), the CNN comprising a first convolutional layer, a last convolutional layer, and a fully connected layer; providing the image data to an input of the first convolutional layer; extracting multi-channel data from the output of the last convolutional layer; summing the extracted data to generate a general activation map; and detecting a location of an object within the image by applying the general activation map to the image data.
    Type: Grant
    Filed: January 22, 2019
    Date of Patent: June 2, 2020
    Assignee: Capital One Services, LLC
    Inventors: Micah Price, Jason Hoover, Geoffrey Dagley, Stephen Wylie, Qiaochu Tang
  • Patent number: 10671891
    Abstract: A method is provided for reducing the computational cost of deep reinforcement learning using an input image to provide a filtered output image composed of pixels. The method includes generating a moving gate in which the pixels of the filtered output image to be masked are assigned a first gate value and the pixels of the filtered output image to be passed through are assigned a second gate value. The method further includes applying the input image and the moving gate to a GCNN to provide the filtered output image such that only the pixels of the input image used to compute the pixels assigned the second gate value are processed by the GCNN, while the pixels of the input image usable to compute the pixels assigned the first gate value are bypassed, reducing the overall processing time of the input image in order to provide the filtered output image.
    Type: Grant
    Filed: July 19, 2018
    Date of Patent: June 2, 2020
    Assignee: INTERNATIONAL BUSINESS MACHINES CORPORATION
    Inventors: Shohei Ohsawa, Takayuki Osogami
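A minimal sketch of the gating semantics as described (masked pixels bypassed, pass-through pixels computed); the toy per-pixel operation and all names are illustrative, not the patent's GCNN:

```python
def gated_filter(image, gate, op=lambda v: 2 * v):
    """Apply `op` only where gate == 1; gate == 0 pixels are bypassed."""
    out = []
    computed = 0
    for row_img, row_gate in zip(image, gate):
        out_row = []
        for v, g in zip(row_img, row_gate):
            if g:                 # pass-through pixels: actually computed
                out_row.append(op(v))
                computed += 1
            else:                 # masked pixels: skipped entirely
                out_row.append(0)
        out.append(out_row)
    return out, computed

image = [[1, 2], [3, 4]]
gate  = [[1, 0], [0, 1]]
print(gated_filter(image, gate))  # ([[2, 0], [0, 8]], 2) -- only 2 of 4 pixels computed
```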
  • Patent number: 10673537
    Abstract: The invention relates to a sensor and a system for measuring pressure, variation in sound pressure, a magnetic field, acceleration, vibration, or the composition of a gas. The sensor comprises an ultrasound transmitter, a cavity, and a passive sensor element. In accordance with the invention the sensor includes antenna means for receiving radio frequency signals (f1, f2), and connecting means connecting the antenna to the ultrasound transmitter for using the radio frequency signals for providing energy for driving the ultrasound transmitter.
    Type: Grant
    Filed: December 7, 2011
    Date of Patent: June 2, 2020
    Assignee: Teknologian tutkimuskeskus VTT Oy
    Inventors: Heikki Seppa, Teuvo Sillanpaa, Ville Viikari
  • Patent number: 10671162
    Abstract: An eyeball tracking module for video glasses, including: at least two infrared light sources, at least one image sensor assembly and at least one infrared cut-off filtering device. Each image sensor assembly comprises an image sensor body and an infrared filter provided in front of the image sensor body. The at least two infrared light sources are fixedly provided in an area laterally in front of an eyeball and are used for emitting infrared light to the eyeball, so as to form, on the eyeball which reflects the infrared light, a reflection point. The at least one image sensor assembly is fixedly provided at an edge or outside of a visual angle of video glasses. The at least one infrared cut-off filtering device is provided in an overlapping area between a reflection light path of the eyeball and an acquisition area of an image sensor.
    Type: Grant
    Filed: February 5, 2020
    Date of Patent: June 2, 2020
    Assignee: BEIJING 7INVENSUN TECHNOLOGY CO., LTD.
    Inventors: Tongbing Huang, Yunfei Wang
  • Patent number: 10670771
    Abstract: A weather forecasting system may receive satellite image samples and identify an updraft and components of the updraft within a cloud. These satellite image samples are collected over time (e.g., at 30 second to 1 minute time intervals). The system may identify an area of rotation and/or divergence at cloud top in a cumulus cloud or mature convective storm over time by comparing the samples and determine a parameter indicative of the updraft based on the area of rotation and divergence. The system may estimate aspects of the environment related to storm development and predict the occurrence of a weather event in the future based on the parameter and generate an output indicative of the occurrence.
    Type: Grant
    Filed: July 22, 2016
    Date of Patent: June 2, 2020
    Assignee: Board of Trustees of the University of Alabama, for and on behalf of the University of Alabama in Huntsville
    Inventor: John R. Mecikalski
  • Patent number: 10671355
    Abstract: A code completion tool uses machine learning models to more precisely predict the likelihood of a method invocation completing a code fragment that follows one or more method invocations of a same class in a same document during program development. In one aspect, the machine learning model is an n-order Markov chain model that is trained on features that represent characteristics of the context of method invocations of a class in commonly-used programs from a sampled population.
    Type: Grant
    Filed: March 29, 2018
    Date of Patent: June 2, 2020
    Assignee: MICROSOFT TECHNOLOGY LICENSING, LLC.
    Inventors: Jorge Banuelos, Shengyu Fu, Roshanak Zilouchian Moghaddam, Neelakantan Sundaresan, Siyu Yang, Ying Zhao
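The n-order Markov idea can be sketched with a first-order (n = 1) model over method-invocation sequences; the training data and helper names below are invented for illustration and do not reflect the patent's actual features:

```python
from collections import defaultdict, Counter

def train(sequences):
    """Count, for each method, how often each other method follows it."""
    counts = defaultdict(Counter)
    for seq in sequences:
        for prev, nxt in zip(seq, seq[1:]):
            counts[prev][nxt] += 1
    return counts

def suggest(model, last_method):
    """Most frequent follower of `last_method`, or None if unseen."""
    followers = model.get(last_method)
    return followers.most_common(1)[0][0] if followers else None

# Toy corpus of method-invocation sequences on the same class.
sequences = [
    ["open", "read", "close"],
    ["open", "read", "read", "close"],
    ["open", "write", "close"],
]
model = train(sequences)
print(suggest(model, "open"))  # "read" (follows "open" in 2 of 3 sequences)
```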
  • Patent number: 10671860
    Abstract: A system and method of operating a vehicle. The system includes a two-dimensional imager, a three-dimensional imager, and at least one processor. The two-dimensional imager obtains a two-dimensional image of an environment surrounding the vehicle, wherein the environment includes an object. The three-dimensional imager obtains a three-dimensional (3D) point cloud of the environment. The at least one processor identifies the object from the 2D image and assigns the identification of the object to a selected point of the 3D point cloud.
    Type: Grant
    Filed: February 20, 2018
    Date of Patent: June 2, 2020
    Assignee: GM GLOBAL TECHNOLOGY OPERATIONS LLC
    Inventors: Hansi Liu, Fan Bai, Shuqing Zeng
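Assigning a 2D detection's label to 3D points can be sketched with a pinhole projection: project each cloud point into the image and label the points that land inside the detection box. The intrinsics, box format, and helper names are assumptions, not taken from the patent:

```python
def project(point, fx=500.0, fy=500.0, cx=320.0, cy=240.0):
    """Pinhole projection of a camera-frame 3D point to pixel coordinates."""
    x, y, z = point
    return (fx * x / z + cx, fy * y / z + cy)

def label_points(points, box, label):
    """box = (u_min, v_min, u_max, v_max) in pixels; returns index -> label."""
    u0, v0, u1, v1 = box
    labels = {}
    for i, p in enumerate(points):
        u, v = project(p)
        if u0 <= u <= u1 and v0 <= v <= v1:
            labels[i] = label
    return labels

# Two toy points: the first projects to image center, the second far right.
cloud = [(0.0, 0.0, 10.0), (5.0, 0.0, 10.0)]
print(label_points(cloud, (300, 220, 340, 260), "car"))  # {0: 'car'}
```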
  • Patent number: 10671156
    Abstract: An electronic apparatus including an image capturing device, a storage device and a processor and an operation method thereof are provided. The image capturing device captures an image for a user, and the storage device records a plurality of modules. The processor is coupled to the image capturing device and the storage device and is configured to: configure the image capturing device to capture a head image of a user; perform a face recognition operation to obtain a face region; detect a plurality of facial landmarks within the face region; estimate a head posture angle of the user according to the facial landmarks; calculate a gaze position where the user gazes on the screen according to the head posture angle, a plurality of rotation reference angle, and a plurality of predetermined calibration positions; and configure the screen to display a corresponding visual effect according to the gaze position.
    Type: Grant
    Filed: October 3, 2018
    Date of Patent: June 2, 2020
    Assignee: Acer Incorporated
    Inventors: Cheng-Tse Wu, An-Cheng Lee, Sheng-Lin Chiu, Ying-Shih Hung
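A one-dimensional toy version of the calibration-based gaze mapping: given head yaw angles recorded while the user gazed at known calibration positions, a new yaw angle is interpolated to a screen x-coordinate. The linear interpolation and all values here are illustrative; the patent combines a head posture angle, rotation reference angles, and multiple predetermined calibration positions:

```python
def gaze_x(yaw, calib_angles, calib_positions):
    """Piecewise-linear map from head yaw (degrees) to screen x (pixels)."""
    if yaw <= calib_angles[0]:
        return calib_positions[0]
    for (a0, a1), (p0, p1) in zip(
        zip(calib_angles, calib_angles[1:]),
        zip(calib_positions, calib_positions[1:]),
    ):
        if yaw <= a1:
            t = (yaw - a0) / (a1 - a0)
            return p0 + t * (p1 - p0)
    return calib_positions[-1]

angles    = [-30.0, 0.0, 30.0]    # yaw at left, center, right calibration dots
positions = [0.0, 960.0, 1920.0]  # corresponding screen x on a 1920-px display
print(gaze_x(15.0, angles, positions))  # 1440.0 (halfway between center and right)
```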
  • Patent number: 10663593
    Abstract: In an aspect of the invention, a projector apparatus (20) with a distance image acquisition device includes a projection image generation unit (28), a difference value acquisition unit (101) that acquires a difference value between distance information of a first distance image acquired at a first timing and distance information of a second distance image acquired at a second timing, a determination unit (103) that determines whether the body to be projected is at a standstill on the basis of the difference value acquired by the difference value acquisition unit (101), a projection instruction unit (105) that outputs a command to project an image generated by the projection image generation unit (28) to the body to be projected, and a projection control unit (107) that controls a projection operation of the projector apparatus (20) on the basis of the projection command output from the projection instruction unit (105).
    Type: Grant
    Filed: March 19, 2018
    Date of Patent: May 26, 2020
    Assignee: FUJIFILM Corporation
    Inventors: Junya Kitagawa, Tomoyuki Kawai, Yasuhiro Shinkai, Tomonori Masuda, Yoshinori Furuta
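The standstill determination above can be sketched as a threshold on the mean absolute difference between the two distance images; the threshold value, data, and function name are assumptions:

```python
def is_standstill(depth_a, depth_b, threshold=5.0):
    """True if the mean absolute per-pixel depth change is below threshold."""
    diffs = [
        abs(a - b)
        for row_a, row_b in zip(depth_a, depth_b)
        for a, b in zip(row_a, row_b)
    ]
    return sum(diffs) / len(diffs) < threshold

frame_t0 = [[100, 100], [100, 100]]   # distance image at the first timing
frame_t1 = [[101, 99], [100, 102]]    # second timing: small sensor jitter only
print(is_standstill(frame_t0, frame_t1))  # True -> projection may proceed
```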
  • Patent number: 10664689
    Abstract: An information processing method is provided. The information processing method includes acquiring a motion state of eyes of a user to form eye motion data and record a first acquisition time of the eye motion data; extracting position data of the user's eyeballs from the eye motion data; and capturing user behavior activity data to record a second acquisition time of the user behavior activity data. The method also includes, based on the first acquisition time and the second acquisition time, determining a correspondence relationship between the position data of the user's eyeballs and the user behavior activity data; and, based on the correspondence relationship and a current eye motion, determining a current user behavior activity.
    Type: Grant
    Filed: May 5, 2017
    Date of Patent: May 26, 2020
    Assignee: LENOVO (BEIJING) CO., LTD.
    Inventor: Daye Yang
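The time-based correspondence between eye data and behavior data can be sketched as nearest-timestamp pairing; the sample data and helper names are illustrative only:

```python
def align(eye_samples, behavior_samples):
    """Both inputs: list of (time, value). Pair each eye sample with the
    behavior sample whose acquisition time is nearest."""
    pairs = {}
    for t_eye, eye_val in eye_samples:
        _, behavior = min(behavior_samples, key=lambda s: abs(s[0] - t_eye))
        pairs[eye_val] = behavior
    return pairs

eyes = [(1.0, "gaze-left"), (5.2, "gaze-down")]   # first acquisition times
acts = [(0.9, "reading"), (5.0, "typing")]        # second acquisition times
print(align(eyes, acts))  # {'gaze-left': 'reading', 'gaze-down': 'typing'}
```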
  • Patent number: 10664996
    Abstract: In a method for the start-up operation of a multi-axis system, with the multi-axis system including, as components, segments that are connected via respective joints and are movable in one or more axes, and a tool that is connected to one of the segments and is movable to a specified position, optical markers are arranged in the environment. Position coordinates of the optical markers in a first, global coordinate system are ascertained and stored in the controller. The environment is captured as image data by a camera system. The image data are transmitted to an AR system and visualized in an output apparatus. The optical markers and virtual markers are represented during the visualization of the image data, wherein a respective virtual marker is assigned to an optical marker. A check is performed as to whether an optical marker and the virtual marker overlay one another in the visualized image data.
    Type: Grant
    Filed: November 20, 2017
    Date of Patent: May 26, 2020
    Assignee: SIEMENS AKTIENGESELLSCHAFT
    Inventors: Mehdi Hamadou, Jan Richter, Maximilian Walter
  • Patent number: 10666926
    Abstract: Multiview calibration is essential for accurate three-dimensional computation. However, multiview calibration cannot always be accurate enough because of the tolerances required in some of the intrinsic and extrinsic parameters that are associated with the calibration process, along with fundamental imperfections that are associated with the manufacturing and assembly process itself. As a result, residual error in calibration is left over, with no known methods to mitigate such errors. Residual error mitigation is presented in this work to address the shortcomings that are associated with calibration of multiview camera systems. Residual error mitigation may be performed inline with a given calibration approach, or may be presented as a secondary processing step that is more application specific. Residual error mitigation aims at modifying the original parameters that were estimated during an initial calibration process.
    Type: Grant
    Filed: July 18, 2018
    Date of Patent: May 26, 2020
    Assignee: Edge 3 Technologies, Inc.
    Inventors: Tarek El Dokor, Jordan Cluster, Joshua King, James Holmes, Milind Subhash Gide
  • Patent number: 10664957
    Abstract: An image projection system includes an image projecting section configured to project an image onto a projection surface, a control section configured to cause the image projecting section to project a pattern image, an imaging section configured to capture the pattern image projected on the projection surface, a detecting section configured to detect a plurality of reference points on the basis of the pattern image captured by the imaging section, and an image-information correcting section configured to correct, on the basis of positions of the reference points detected by the detecting section, the image projected by the projecting section. The pattern image includes a plurality of unit patterns for specifying the reference points. The plurality of unit patterns include unit patterns of seven colors.
    Type: Grant
    Filed: August 31, 2018
    Date of Patent: May 26, 2020
    Assignee: SEIKO EPSON CORPORATION
    Inventor: Taisuke Yamauchi
  • Patent number: 10666761
    Abstract: The various implementations described herein include methods and systems for collecting media associated with a mobile device. In one aspect, a method is performed at a computing system. The method comprises receiving and storing, without user interaction, video and audio data captured during a predefined time period by a plurality of distributed video devices configured to monitor one or more vicinities, and mobile device presence information from which presence of mobile devices in vicinity of the video devices can be determined throughout the predefined time period. The method further comprises receiving from a requestor a request to identify from the captured video and audio data a first subset associated with a first person. The request includes first information of a mobile device associated with the first person. In response to the request, the first subset based on the mobile device presence information is identified and transmitted to the requestor.
    Type: Grant
    Filed: May 30, 2019
    Date of Patent: May 26, 2020
    Assignee: Google Technology Holdings LLC
    Inventor: William P. Alberth, Jr.
  • Patent number: 10664523
    Abstract: A first retriever performs a first retrieving process of retrieving a target person from stored video images, based on a person feature extracted and stored from the video images before the acceptance time point of a retrieval instruction for the target person and a feature extracted from the target person related to the retrieval instruction. A second retriever performs a second retrieving process of retrieving the target person from video images input after the acceptance time point, based on a feature of a person extracted from those video images and the feature of the target person extracted from a query image of the retrieval instruction. The first retriever also performs the first retrieving process on video images input during the period from the acceptance time point until preparation of the second retrieving process is complete and the second retrieving process starts.
    Type: Grant
    Filed: April 18, 2017
    Date of Patent: May 26, 2020
    Assignee: CANON KABUSHIKI KAISHA
    Inventors: Hirotaka Shiiyama, Masahiro Matsushita
  • Patent number: 10664048
    Abstract: In some embodiments, an electronic device optionally identifies a person's face, and optionally performs an action in accordance with the identification. In some embodiments, an electronic device optionally determines a gaze location in a user interface, and optionally performs an action in accordance with the determination. In some embodiments, an electronic device optionally designates a user as being present at a sound-playback device in accordance with a determination that sound-detection criteria and verification criteria have been satisfied. In some embodiments, an electronic device optionally determines whether a person is further or closer than a threshold distance from a display device, and optionally provides a first or second user interface for display on the display device in accordance with the determination. In some embodiments, an electronic device optionally modifies the playing of media content in accordance with a determination that one or more presence criteria are not satisfied.
    Type: Grant
    Filed: July 7, 2017
    Date of Patent: May 26, 2020
    Assignee: Apple Inc.
    Inventors: Avi E. Cieplinski, Jeffrey Traer Bernstein, Julian Missig, May-Li Khoe, Bianca Cheng Costanzo, Myra Mary Haggerty, Duncan Robert Kerr, Bas Ording, Elbert D. Chen
  • Patent number: 10659768
    Abstract: A system for reconstructing a three-dimensional (3D) model of a scene including a point cloud having points identified by 3D coordinates includes at least one sensor to acquire a set of images of the scene from different poses defining viewpoints of the images and a memory to store the set of images and the 3D model of the scene. The system also includes a processor operatively connected to the memory and coupled with stored instructions to transform the images from the set of images to produce a set of virtual images of the scene viewed from virtual viewpoints; compare at least some features from the images and the virtual images to determine the viewpoint of each image in the set of images; and update 3D coordinates of at least one point in the model of the scene to match coordinates of intersections of ray back-projections from pixels of at least two images corresponding to the point according to the viewpoints of the two images.
    Type: Grant
    Filed: February 28, 2017
    Date of Patent: May 19, 2020
    Assignees: Mitsubishi Electric Research Laboratories, Inc., Mitsubishi Electric Corporation
    Inventors: Chen Feng, Yuichi Taguchi, Esra Cansizoglu, Srikumar Ramalingam, Khalid Yousif, Haruyuki Iwama
  • Patent number: 10657386
    Abstract: [Problem] To provide a motion condition estimation device, a motion condition estimation method and a motion condition estimation program capable of accurately estimating the motion condition of monitored subjects even in a crowded environment. [Solution] A motion condition estimation device according to the present invention is provided with a quantity estimating means 81 and a motion condition estimating means 82. The quantity estimating means 81 uses a plurality of chronologically consecutive images to estimate a quantity of monitored subjects for each local region in each image. The motion condition estimating means 82 estimates the motion condition of the monitored subjects from chronological changes in the quantities estimated in each local region.
    Type: Grant
    Filed: March 8, 2019
    Date of Patent: May 19, 2020
    Assignee: NEC CORPORATION
    Inventor: Hiroyoshi Miyano
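A toy version of the per-local-region quantity estimation and its chronological change: count monitored subjects per grid cell in each frame, then take per-cell differences across consecutive frames as the motion signal. The grid size, point lists, and helper names are all assumptions (the patent estimates quantities from images, not from point lists):

```python
def cell_counts(points, grid=2, size=100):
    """Count points (x, y) per cell of a grid x grid partition of [0, size)^2."""
    counts = [[0] * grid for _ in range(grid)]
    step = size / grid
    for x, y in points:
        counts[int(y // step)][int(x // step)] += 1
    return counts

def motion(prev_counts, curr_counts):
    """Per-cell change in estimated quantity between consecutive frames."""
    return [
        [c - p for p, c in zip(pr, cr)]
        for pr, cr in zip(prev_counts, curr_counts)
    ]

# Two toy frames: two subjects move from the top-left to the top-right cell.
frame1 = cell_counts([(10, 10), (20, 15), (80, 80)])
frame2 = cell_counts([(60, 10), (70, 15), (80, 80)])
print(motion(frame1, frame2))  # [[-2, 2], [0, 0]]
```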