Patents Issued on May 12, 2020
  • Patent number: 10650238
    Abstract: Image processing of an image is used to determine the opportunity to view an object. Rather than relying on simple counts of viewers passing an object, the opportunity to view the object is weighted based on attention, which is derived from other objects competing for attention. For the processor to more accurately determine opportunity to view as compared to using geometric information alone, a machine-learned network is used. To deal with changes in obstructions, another machine-learned network may extract obstructions from camera images. Trace data is used to allow for daily variation in base counts of viewers, allowing greater temporal resolution and determination based on information more recently acquired than counts.
    Type: Grant
    Filed: March 28, 2018
    Date of Patent: May 12, 2020
    Assignee: BOOHMA TECHNOLOGIES LLC
    Inventor: Shawn Spooner
  • Patent number: 10650239
    Abstract: Devices, computer-readable media, and methods for providing an enhanced indication of an object that is located via a visual feed in accordance with a user context are disclosed. For instance, in one example, a processing system including at least one processor may detect a user context from a visual feed, locate an object via the visual feed in accordance with the user context, and provide an enhanced indication of the object via an augmented reality display.
    Type: Grant
    Filed: July 25, 2018
    Date of Patent: May 12, 2020
    Assignee: AT&T Intellectual Property I, L.P.
    Inventors: David Crawford Gibbon, Eric Zavesky, Bernard S. Renger, Behzad Shahraray, Lee Begeja
  • Patent number: 10650240
    Abstract: An approach is provided in which an information handling system trains a classifier using rated content segments, each of which has a first content type rating corresponding to a content type. Then, the information handling system uses the trained classifier to classify unrated content segments corresponding to an unrated content and generates second content type ratings for each of the unrated content segments that identify a corresponding content type. In turn, the information handling system generates an overall content rating of the unrated content based on a combination of the second content type ratings.
    Type: Grant
    Filed: September 19, 2018
    Date of Patent: May 12, 2020
    Assignee: International Business Machines Corporation
    Inventors: Florian Pinel, Russell P. Bobbitt
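The aggregation step described above can be sketched as follows. This is an illustrative toy, not the patented method: the rating scale, the helper names, and the "most restrictive wins" combination rule are all invented here for demonstration.

```python
# Toy sketch (not the patented implementation): each segment gets a
# content-type rating from a classifier, and the overall rating is the most
# restrictive one seen across segments.
RATING_ORDER = ["G", "PG", "PG-13", "R"]   # least to most restrictive

def overall_rating(segment_ratings):
    """Combine per-segment ratings by taking the most restrictive one."""
    return max(segment_ratings, key=RATING_ORDER.index)

rating = overall_rating(["G", "PG-13", "PG", "G"])   # -> "PG-13"
```

Other combination rules (averaging ordinal indices, weighting by segment length) would fit the same interface.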
  • Patent number: 10650241
    Abstract: Systems, methods, and non-transitory computer-readable media can generate at least one fingerprint based on a set of frames corresponding to a test content item, generate a set of distorted fingerprints using at least a portion of the fingerprint, and determine one or more reference content items using the set of distorted fingerprints, wherein the test content item is evaluated against at least one reference content item to identify matching content.
    Type: Grant
    Filed: October 11, 2016
    Date of Patent: May 12, 2020
    Assignee: Facebook, Inc.
    Inventors: Sergiy Bilobrov, Eran Ambar
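The distorted-fingerprint matching idea can be sketched in miniature. This is an assumption-laden toy, not Facebook's implementation: fingerprints are modeled as short bit strings, "distortions" as small bit flips, and matching as a Hamming-distance test.

```python
# Sketch only: a frame fingerprint is modeled as a 16-bit integer; distorted
# variants are generated by flipping up to two bits, and reference items are
# matched if any variant falls within a Hamming-distance threshold.
from itertools import combinations

def hamming(a: int, b: int, nbits: int = 16) -> int:
    return bin((a ^ b) & ((1 << nbits) - 1)).count("1")

def distorted_fingerprints(fp: int, nbits: int = 16, max_flips: int = 2):
    """Yield the fingerprint plus all variants with up to max_flips bits flipped."""
    yield fp
    for k in range(1, max_flips + 1):
        for bits in combinations(range(nbits), k):
            v = fp
            for b in bits:
                v ^= 1 << b
            yield v

def match_references(test_fp: int, references: dict, threshold: int = 1):
    """Return ids of references within `threshold` bits of any distorted variant."""
    hits = set()
    for variant in distorted_fingerprints(test_fp):
        for ref_id, ref_fp in references.items():
            if hamming(variant, ref_fp) <= threshold:
                hits.add(ref_id)
    return hits

refs = {"clipA": 0b1010101010101010, "clipB": 0b0000111100001111}
matches = match_references(0b1010101010101011, refs)   # one bit off clipA
```

Real systems use much longer fingerprints and indexed nearest-neighbor search rather than this exhaustive loop.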
  • Patent number: 10650242
    Abstract: An information processing apparatus includes at least one processor causing the information processing apparatus to act as a first obtainment unit configured to execute processing for obtaining a first feature amount for each of a plurality of frames, a specification unit configured to specify a priority order of frames for obtaining a second feature amount different from the first feature amount based on the first feature amount obtained by the first obtainment unit, a second obtainment unit configured to execute processing for obtaining the second feature amount from a frame in accordance with the priority order, and a selection unit configured to select, based on the second feature amount obtained by the second obtainment unit, an image processing target frame. The number of frames from which the second feature amount is obtained is fewer than the number of the plurality of frames from which the first feature amount is obtained.
    Type: Grant
    Filed: November 22, 2017
    Date of Patent: May 12, 2020
    Assignee: Canon Kabushiki Kaisha
    Inventors: Tatsuya Yamamoto, Sammy Chan
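The two-stage "cheap feature first, expensive feature only for priority frames" flow reads naturally as code. The features below are invented stand-ins (mean intensity and dynamic range), chosen purely to show the control flow, not what the patent computes.

```python
# Illustrative sketch: a cheap first feature ranks all frames, and a costlier
# second feature is computed only for the top-ranked few.
def first_feature(frame):        # cheap proxy, e.g. mean intensity
    return sum(frame) / len(frame)

def second_feature(frame):       # stand-in for an expensive computation
    return max(frame) - min(frame)

def select_target_frame(frames, k=2):
    # Priority order from the cheap feature; the expensive feature runs on
    # only k < len(frames) frames, as the abstract describes.
    order = sorted(range(len(frames)),
                   key=lambda i: first_feature(frames[i]), reverse=True)
    shortlist = order[:k]
    return max(shortlist, key=lambda i: second_feature(frames[i]))

frames = [[10, 10, 10], [50, 90, 10], [60, 60, 60], [40, 45, 50]]
best = select_target_frame(frames, k=2)
```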
  • Patent number: 10650243
    Abstract: In the image processing device, the method and the recording medium according to the present invention, the characteristic information extractor extracts the number of times the same subject appears in the still images. The target object detector detects the target subject which appears a number of times not less than the threshold value. The image analysis condition determiner sets the image analysis condition for the portion of the moving image where the target subject is absent to be rougher than that where the target subject is present. The frame image analyzer extracts frame images from the moving image in accordance with the image analysis condition. The frame image output section calculates the evaluation value of the frame images based on the result of the image analysis and outputs the frame image having an evaluation value not less than the threshold value.
    Type: Grant
    Filed: January 18, 2018
    Date of Patent: May 12, 2020
    Assignee: FUJIFILM Corporation
    Inventor: Kei Yamaji
  • Patent number: 10650244
    Abstract: A computer implemented method includes extracting one or more portions from a first video stream of a first physical environment; transmitting captured video data via a first communication link to one or more electronic display devices disposed within a second physical environment, wherein the captured video data includes the one or more extracted portions and the captured video data includes a preview portion that includes a first portion of a frame of the first video stream; and transmitting a second video stream of a second field of view of the first physical environment to at least one of the one or more electronic display devices disposed within the second physical environment via a second communication link, wherein the second video stream is generated in response to a selection of the preview portion of the captured video data made by a user located in the second physical environment.
    Type: Grant
    Filed: April 26, 2019
    Date of Patent: May 12, 2020
    Assignee: LOGITECH EUROPE S.A.
    Inventors: Mathieu Meisser, Remy Zimmermann, Mario Arturo Gutierrez Alonso, David Guhl, Nicolas Sasselli, Jean-Christophe Hemes, Stephane Delorenzi, Ali Moayer
  • Patent number: 10650245
    Abstract: The present disclosure relates to systems, methods, and non-transitory computer readable media for generating digital video summaries based on analyzing a digital video utilizing a relevancy neural network, an aesthetic neural network, and/or a generative neural network. For example, the disclosed systems can utilize an aesthetics neural network to determine aesthetics scores for frames of a digital video and a relevancy neural network to generate importance scores for frames of the digital video. Utilizing the aesthetic scores and relevancy scores, the disclosed systems can select a subset of frames and apply a generative reconstructor neural network to create a digital video reconstruction. By comparing the digital video reconstruction and the original digital video, the disclosed systems can accurately identify representative frames and flexibly generate a variety of different digital video summaries.
    Type: Grant
    Filed: June 8, 2018
    Date of Patent: May 12, 2020
    Assignee: ADOBE INC.
    Inventors: Viswanathan Swaminathan, Hongxiang Gu
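The score-combination step can be sketched without the neural networks. Everything here is invented for illustration: the weights, the linear combination, and the top-k selection; the patented system additionally uses a generative reconstructor network to validate the selected frames, which this sketch omits.

```python
# Sketch of the scoring combination only: frames are ranked by a weighted sum
# of aesthetics and relevancy scores, and the top-k frames form the summary.
def summarize(aesthetic, relevancy, k=2, w=0.5):
    scores = [w * a + (1 - w) * r for a, r in zip(aesthetic, relevancy)]
    order = sorted(range(len(scores)), key=scores.__getitem__, reverse=True)
    return sorted(order[:k])         # keep summary frames in temporal order

frames = summarize([0.9, 0.2, 0.6, 0.8],   # per-frame aesthetics scores
                   [0.1, 0.9, 0.7, 0.8])   # per-frame relevancy scores
```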
  • Patent number: 10650246
    Abstract: Described is a method for processing image data to determine if a portion of the image data is affected due to sunlight. In some implementations, image data is sent to an image data store and camera parameters are sent to a radiance detection service. The radiance detection service, upon receiving the camera parameters, retrieves the image data, converts the image data to gray-scale and processes the image data based on the camera parameters to determine a radiance value for the camera. The radiance value may be compared to a baseline radiance value to determine if sunlight is represented in the image data. In some implementations, a baseline model may be developed for the camera and used to cancel out any pixels of the image data that are overexposed under normal or baseline conditions. Likewise, a foreground model may be generated to detect any objects in the image data for which corresponding pixel values should not be considered for determining if sunlight is represented in the image data.
    Type: Grant
    Filed: June 27, 2017
    Date of Patent: May 12, 2020
    Assignee: Amazon Technologies, Inc.
    Inventors: Riccardo Gherardi, Saral Jain, Hasan Tuna Icingir, Griffin Alexander Jarmin, Bo Chen
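The radiance-versus-baseline comparison can be sketched simply. The saturation threshold, margin, and function names are assumptions for illustration; the patent's baseline and foreground models are omitted.

```python
# Rough sketch: convert to gray-scale, measure a radiance value as the
# fraction of near-saturated pixels, and compare against a per-camera baseline.
def to_gray(rgb_pixels):
    # ITU-R BT.601 luma weights
    return [0.299 * r + 0.587 * g + 0.114 * b for r, g, b in rgb_pixels]

def radiance_value(gray, saturation=240.0):
    over = sum(1 for p in gray if p >= saturation)
    return over / len(gray)

def sunlight_detected(rgb_pixels, baseline, margin=0.1):
    return radiance_value(to_gray(rgb_pixels)) > baseline + margin

bright = [(255, 255, 255)] * 8 + [(20, 30, 40)] * 2   # mostly saturated pixels
flag = sunlight_detected(bright, baseline=0.05)
flag_dark = sunlight_detected([(20, 30, 40)] * 10, baseline=0.05)
```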
  • Patent number: 10650247
    Abstract: Video footage captured by A/V recording and communication devices may be readily uploaded to the cloud and shared with a requesting party, such as a law enforcement agency. When a request is received from a requesting party for video footage, videos meeting the criteria specified by the requesting party may be determined. Consent requests may then be sent to users associated with each of the A/V recording and communication devices that recorded the videos meeting the criteria specified by the requesting party. When a user consents to share the videos, the videos may be provided to the requesting party.
    Type: Grant
    Filed: December 12, 2017
    Date of Patent: May 12, 2020
    Assignee: A9.Com, Inc.
    Inventors: Elliott Lemberger, Aaron Harpole, Mark Troughton
  • Patent number: 10650248
    Abstract: A system to facilitate management of surveillance devices, distributed over a monitored region, through a geographic information (GI) portal having GI storage to store map data defining a geographic map of the monitored region. A GI manager unit (GIMU) records, in the GI storage, asset position information regarding locations for assets of interest within the monitored region. The GIMU obtains, from a remote surveillance device (SD) database, device-related records and stores them in the GI storage. The device-related records are associated with the surveillance devices installed in the monitored region and include position tags that identify the locations of the surveillance devices in the monitored region. The GIMU also obtains, from a remote network (NW) database, network-related records and stores them in the GI storage. The network-related records are associated with network devices installed over the monitored region.
    Type: Grant
    Filed: May 25, 2018
    Date of Patent: May 12, 2020
    Assignee: Tyco Integrated Security, LLC
    Inventors: Jeffrey Gutierrez, Phillip William Ponce
  • Patent number: 10650249
    Abstract: The present disclosure provides a method and device for counting pedestrians. The method comprises: reading a pedestrian depth image, and comparing the pedestrian depth image and a pre-acquired environmental mean image to acquire a foreground image; dividing the foreground image into a plurality of regions, detecting whether or not there is a step in an edge pixel point of each region, and detecting whether or not the region surface formed in each region coincides with the curved surface of the head top; determining that the currently detected region is the region of the head top when there is a step in the point and when the region surface coincides with the curved surface; and counting and outputting the number of pedestrians according to the region of the head top determined from the pedestrian depth image and the region of the head top determined from an adjacent pedestrian depth image.
    Type: Grant
    Filed: August 25, 2017
    Date of Patent: May 12, 2020
    Assignee: SHENZHEN UNIVERSITY
    Inventors: Yong Zhang, Lei Liu, Zehong Chen, Dongning Zhao, Jianyong Chen, Yanshan Li
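The first step of the pipeline, foreground extraction against the environmental mean image, can be sketched in a few lines. The threshold and array sizes are invented; the step detection and head-top curvature checks that follow in the method are omitted.

```python
# Simplified sketch of the foreground step only: the foreground is the set of
# pixels whose depth differs enough from a pre-acquired environmental mean.
import numpy as np

def foreground_mask(depth_image, env_mean, threshold=200):
    """Pixels closer to the camera than the background by `threshold` mm."""
    return (env_mean - depth_image) > threshold

env_mean = np.full((4, 4), 3000)          # empty-scene depth in mm
depth = env_mean.copy()
depth[1:3, 1:3] = 1300                    # a pedestrian's head region
mask = foreground_mask(depth, env_mean)
count = int(mask.sum())                   # number of foreground pixels
```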
  • Patent number: 10650250
    Abstract: A method for detecting heavy rain in images includes providing a series of images as input information, the images relating to a scene external to a vehicle viewed through a pane of the vehicle. A measure of blur is evaluated in an area of interest in the images. The method also includes analyzing the course of the measured blur values over time in order to detect transitions between images with low blur values and images with high blur values or vice versa. Heavy rain may then be detected from the detected transitions and information output in response to the heavy rain being detected.
    Type: Grant
    Filed: December 7, 2017
    Date of Patent: May 12, 2020
    Assignee: Continental Automotive GmbH
    Inventor: Cezar Regep
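The temporal analysis of blur values can be sketched as transition counting. The thresholds are invented; the intuition (assumed here, not stated in the abstract) is that wiper passes produce alternating sharp and blurred images in heavy rain.

```python
# Toy sketch of the temporal analysis: frequent transitions between low and
# high blur values over the image series suggest heavy rain.
def transitions(blur_values, threshold=0.5):
    states = [v > threshold for v in blur_values]
    return sum(1 for a, b in zip(states, states[1:]) if a != b)

def heavy_rain(blur_values, threshold=0.5, min_transitions=3):
    return transitions(blur_values, threshold) >= min_transitions

series = [0.1, 0.8, 0.2, 0.9, 0.1, 0.7]   # blur rises, drops after each wipe
rain = heavy_rain(series)
dry = heavy_rain([0.1] * 6)               # steady low blur: no rain
```

In practice the per-image blur value would come from a measure such as the variance of a Laplacian-filtered area of interest.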
  • Patent number: 10650251
    Abstract: In an industrial machine, work safety is further improved by eliminating blind spots from an image displayed on a monitor. A hydraulic excavator 1 which is the industrial machine is equipped with monitoring cameras 15F, 15B, 15L, 15R mounted in the respective places of a revolving upperstructure 3 in order to capture images for monitoring. A monitor 20 displays camera images 21F, 21B, 21L, 21R obtained by the cameras as well as an icon image 21C of an image illustration of the hydraulic excavator 1. The cameras 15F, 15L, 15R are mounted at the distal ends of support arms 40F, 40L, 40R to be located in positions jutting from a revolving upperstructure main unit 3a of the revolving upperstructure 3, so that a hidden area under the underside of a catwalk 14 provided on the revolving upperstructure 3 falls within the field of view.
    Type: Grant
    Filed: October 12, 2018
    Date of Patent: May 12, 2020
    Assignee: HITACHI CONSTRUCTION MACHINERY CO., LTD.
    Inventors: Yoichi Kowatari, Hidefumi Ishimoto, Moritaka Oota, Yoshihiro Inanobe, Hiroyoshi Tanaka, Kouji Fujita, Takashi Kusama
  • Patent number: 10650252
    Abstract: The reliability of traveling-lane detection can be improved even when the image capturing environment changes. The lane detection device includes a first image capturing device which is fixed to a moving body and periodically captures a first image under a predetermined first image capturing condition, a second image capturing device which is fixed to the moving body and periodically captures a second image under a second image capturing condition different from the predetermined first image capturing condition, and an operation device, regarding a traveling lane on which the moving body travels, which detects a first boundary line of the traveling lane on the first image, detects a second boundary line of the traveling lane on the second image, and estimates the position of the first boundary line on the first image based on a position of the second boundary line on the second image under a predetermined execution condition.
    Type: Grant
    Filed: March 9, 2016
    Date of Patent: May 12, 2020
    Assignee: Hitachi, Ltd.
    Inventors: Alex Masuo Kaneko, Kenjiro Yamamoto, Taiki Iimura
  • Patent number: 10650253
    Abstract: A method for estimating traffic lanes uses as input data the position and direction of feature vectors, which are measured from multiple different sensors independently of one another. A feature vector is formed by a position in the coordinate system of the ego-vehicle, which position describes a point on the border of a traffic lane, and a direction or an angle which indicates the direction in which the border of the traffic lane runs at this position. Further input data are variables which represent the quality of the measurement of the positional and the directional accuracy and the probability of the existence of a feature vector. The input data are accumulated chronologically together. The geometry of traffic lanes is estimated from the accumulated input data, taking into account the quality of the measurement. The estimated geometry of traffic lanes is output.
    Type: Grant
    Filed: May 13, 2016
    Date of Patent: May 12, 2020
    Assignee: Continental Teves AG & Co. oHG
    Inventors: Christopher Bayer, Claudia Loy
  • Patent number: 10650254
    Abstract: Systems and methods use cameras to provide autonomous navigation features. In one implementation, a driver-assist system is provided for a vehicle. The system may include one or more image capture devices configured to acquire images of an area forward of the vehicle. The system may also include at least one processing device configured to receive, via one or more data interfaces, the images. The at least one processing device may be further configured to analyze the images acquired by the one or more image capture devices and cause at least one navigational response in the vehicle based on monocular and/or stereo image analysis of the images.
    Type: Grant
    Filed: May 17, 2018
    Date of Patent: May 12, 2020
    Assignee: Mobileye Vision Technologies Ltd.
    Inventors: Amnon Shashua, Gaby Hayon, Yossi Hadad, Efim Belman, Eyal Bagon
  • Patent number: 10650255
    Abstract: A method for detecting a vehicle via a vehicular vision system includes equipping a vehicle with a camera and providing a control at the equipped vehicle. Frames of image data captured by the camera are processed via an image processor of the control. Responsive at least in part to (i) vehicle motion information of the equipped vehicle and (ii) processing at the control of frames of image data captured by the camera, an object present in the field of view of the camera is detected and motion of the detected object relative to the moving equipped vehicle is determined. The motion of the detected object relative to the moving equipped vehicle is determined by (i) determining corresponding feature points of the detected object in at least two frames of captured image data and (ii) estimating object motion trajectory of the detected object based on the determined corresponding feature points.
    Type: Grant
    Filed: February 18, 2019
    Date of Patent: May 12, 2020
    Assignee: MAGNA ELECTRONICS INC.
    Inventors: Nikhil Gupta, Liang Zhang
  • Patent number: 10650256
    Abstract: Among other things, one or more travel signals are identified by analyzing one or more images and data from sensors, classifying candidate travel signals into zero, one or more true and relevant travel signals, and estimating a signal state of the classified travel signals.
    Type: Grant
    Filed: April 18, 2017
    Date of Patent: May 12, 2020
    Assignee: nuTonomy Inc.
    Inventors: Baoxing Qin, Aravindkumar Vijayalingam
  • Patent number: 10650257
    Abstract: A method for identifying a signaling state of at least one signaling device including a traffic light includes obtaining at least one image which includes an image of the at least one signaling device, extracting a region of the at least one image which includes the image of the at least one signaling device, detecting the at least one signaling device within the extracted region of the at least one image, and detecting a signaling state of the signaling device after detecting the at least one signaling device within the extracted region.
    Type: Grant
    Filed: June 6, 2018
    Date of Patent: May 12, 2020
    Assignee: SMR Patents S.à.r.l.
    Inventors: Nikhil Satyakumar, Harish S. Bharadwaj, Krishna Reddy Konda, Ganeshan Narayanan, Henrik Schäfer
  • Patent number: 10650258
    Abstract: A system. The system may include a glare shield positioned between an exterior pane of an aircraft window and an interior portion of an aircraft. The glare shield may include at least one anti-reflective mask and at least one viewing aperture. Each of the at least one anti-reflective mask may be positioned adjacent one of the at least one viewing aperture.
    Type: Grant
    Filed: January 7, 2019
    Date of Patent: May 12, 2020
    Assignee: B/E Aerospace, Inc.
    Inventors: R. Klaus Brauer, John Warren, Simon Robert Lee, Ian L. Frost
  • Patent number: 10650259
    Abstract: The embodiment of the present invention provides a human face recognition method and recognition system. The method includes that: a human face recognition request is acquired, and a statement is randomly generated according to the human face recognition request; audio data and video data returned by a user in response to the statement are acquired; corresponding voice information is acquired according to the audio data; corresponding lip movement information is acquired according to the video data; and when the lip movement information and the voice information satisfy a preset rule, the human face recognition request is permitted. By performing goodness-of-fit matching between the lip movement information and voice information in a video for dynamic human face recognition, an attack on human face recognition using a real photo may be effectively avoided, and higher security is achieved.
    Type: Grant
    Filed: July 7, 2017
    Date of Patent: May 12, 2020
    Assignee: TENCENT TECHNOLOGY (SHENZHEN) COMPANY LIMITED
    Inventors: Chengjie Wang, Jilin Li, Hui Ni, Yongjian Wu, Feiyue Huang
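The lip-to-voice matching rule can be sketched with a simple correlation check. Everything concrete here is an assumption for illustration: the Pearson correlation as the goodness-of-fit measure, the 0.6 threshold, and the toy signals; the patented system matches far richer lip and voice features.

```python
# Minimal sketch of the liveness check: correlate a lip-opening signal from
# video with the audio energy envelope, and accept only if they move together.
def pearson(x, y):
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    vx = sum((a - mx) ** 2 for a in x) ** 0.5
    vy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (vx * vy)

def liveness_ok(lip_opening, audio_energy, threshold=0.6):
    return pearson(lip_opening, audio_energy) >= threshold

audio = [0.2, 0.9, 0.3, 1.0, 0.2]                       # speech energy peaks
talking = liveness_ok([0.1, 0.8, 0.2, 0.9, 0.1], audio) # lips track the audio
spoof = liveness_ok([0.9, 0.1, 0.8, 0.0, 0.9], audio)   # mistimed mouth motion
```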
  • Patent number: 10650260
    Abstract: A perspective distortion characteristic based facial image authentication method and storage and processing device thereof are proposed. The method includes: S1: recognizing key points and a contour in a 2D facial image; S2: acquiring key points in a corresponding 3D model; S3: calculating camera parameters based on a correspondence between the key points in the 2D image and the key points in the 3D model; S4: optimizing the camera parameters based on the contour in the 2D image; S5: sampling the key points in the 2D facial image multiple times to obtain a camera intrinsic parameter estimation point cloud; and S6: calculating the inconsistency between the camera intrinsic parameter estimation point cloud and the camera's nominal intrinsic parameters, and determining the authenticity of the facial image. The present disclosure can effectively authenticate the 2D image and has a relatively higher accuracy.
    Type: Grant
    Filed: June 23, 2017
    Date of Patent: May 12, 2020
    Assignee: INSTITUTE OF AUTOMATION, CHINESE ACADEMY OF SCIENCES
    Inventors: Tieniu Tan, Jing Dong, Wei Wang, Bo Peng
  • Patent number: 10650261
    Abstract: One embodiment facilitates identification of re-photographed images. During operation, the system obtains a sequence of video frames of a target object. The system selects a frame with an acceptable level of quality. The system obtains, from the selected frame, a first image and a second image associated with the target object, wherein at least one of a zoom ratio property and a size property is different between the first image and the second image. The system inputs the first image and the second image to at least a first neural network to obtain scores for the first image and the second image, wherein a respective score indicates a probability that the corresponding image is re-photographed, wherein a re-photographed image is obtained by photographing or recording an image of the target object. The system indicates the selected frame as re-photographed based on the obtained probabilities.
    Type: Grant
    Filed: July 25, 2018
    Date of Patent: May 12, 2020
    Assignee: Alibaba Group Holding Limited
    Inventors: Xuetao Feng, Yan Wang
  • Patent number: 10650262
    Abstract: The present invention includes systems, methods, and devices for interactive media sharing including a non-transient computer readable storage medium storing a set of instructions for accessing a media item from a first computing device; identifying a first position-of-interest on the media item from a memory component of the first computing device; displaying the media item on a second computing device; identifying a second position-of-interest on the media item from a memory component of the second computing device; generating an area-of-interest surrounding the first position-of-interest; and comparing the second position-of-interest with the area-of-interest to determine whether the second position-of-interest intersects with the area-of-interest.
    Type: Grant
    Filed: November 9, 2017
    Date of Patent: May 12, 2020
    Assignee: ClicPic, Inc.
    Inventors: Anthony Cipolla, Cameron Cipolla, Brandon Cipolla
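The intersection test at the end of the abstract is geometric and easy to sketch. The circular area-of-interest and the 25-pixel radius are assumptions for illustration; the patent does not specify the shape.

```python
# Sketch of the comparison step: the area-of-interest is modeled as a circle
# around the first position, and the second position "intersects" it if it
# falls within that radius.
def intersects(first_pos, second_pos, radius=25.0):
    dx = first_pos[0] - second_pos[0]
    dy = first_pos[1] - second_pos[1]
    return (dx * dx + dy * dy) ** 0.5 <= radius

hit = intersects((100, 100), (110, 115))    # distance ~18 px: inside
miss = intersects((100, 100), (200, 200))   # distance ~141 px: outside
```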
  • Patent number: 10650263
    Abstract: Provided is a technique for enhancing operability of a mobile apparatus. An information processing apparatus (2000) includes a first processing unit (2020), a second processing unit (2040), and a control unit (2060). The first processing unit (2020) generates information indicating an event detection position in accordance with a position on a surveillance image set in a first operation. The first operation is an operation with respect to the surveillance image displayed on a display screen. The second processing unit (2040) performs a display change process with respect to the surveillance image or a window including the surveillance image. The control unit (2060) causes any one of the first processing unit (2020) and the second processing unit (2040) to process the first operation on the basis of a second operation.
    Type: Grant
    Filed: December 18, 2018
    Date of Patent: May 12, 2020
    Assignee: NEC CORPORATION
    Inventors: Kenichiro Ida, Hiroshi Kitajima, Hiroyoshi Miyano
  • Patent number: 10650264
    Abstract: An image recognition apparatus (100) includes: an object specifying unit (102) that specifies a position, in a captured image, of a detection target object which is set in a predetermined arrangement according to a processing target object in an imaging target and has a feature depending on the processing target object, by image recognition; and a processing unit (104) that specifies, based on object position data indicating a relative position between the detection target object in the imaging target and the processing target object which is set in a predetermined arrangement according to the imaging target and has a feature depending on the imaging target, the processing target object in the captured image which is present at the relative position from the position, in the captured image, of the detection target object specified by the object specifying unit (102), and executes a process allocated to the specified processing target object.
    Type: Grant
    Filed: May 21, 2014
    Date of Patent: May 12, 2020
    Assignee: NEC Corporation
    Inventor: Hiroo Harada
  • Patent number: 10650265
    Abstract: Disclosed embodiments provide systems, methods, and computer-readable storage media for enhancing a vehicle identification with preprocessing. The system may comprise memory and processor devices to execute instructions for receiving an image depicting a vehicle. The image may be analyzed and a first predicted identity and first confidence value may be determined. The first confidence value may be compared to a predetermined threshold. The processors may further select a processing technique for modifying the image, analyze the modified image to determine a second predicted identity of the vehicle, and determine a second confidence value. The system may then compare the second confidence value to the predetermined threshold to select the first or second predicted identity for transmission to a user.
    Type: Grant
    Filed: May 29, 2019
    Date of Patent: May 12, 2020
    Assignee: Capital One Services, LLC
    Inventors: Micah Price, Chi-san Ho, Aamer Charania, Sunil Vasisht
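The threshold-then-retry control flow can be sketched independently of any real model. The classifier and the "brighten" preprocessing below are toy stand-ins invented for this sketch, not the patented models or techniques.

```python
# Sketch of the control flow only: if the first prediction falls below a
# confidence threshold, preprocess the image, re-predict, and keep whichever
# prediction scores higher.
def predict_vehicle(image, classifier, preprocess, threshold=0.8):
    label1, conf1 = classifier(image)
    if conf1 >= threshold:
        return label1, conf1
    label2, conf2 = classifier(preprocess(image))
    return (label2, conf2) if conf2 > conf1 else (label1, conf1)

# Toy stand-ins: "image" is just a brightness value; brightening helps here.
classifier = lambda img: ("sedan", min(img / 100.0, 1.0))
brighten = lambda img: img + 40
label, conf = predict_vehicle(60, classifier, brighten)
```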
  • Patent number: 10650266
    Abstract: Embodiments of the present invention disclose a method, computer program product, and system for cataloging images based on a gridded color histogram analysis. The computer accesses an image gallery specified by a user, wherein the image gallery is at least one of an image gallery stored on a user computing device, an image gallery stored on a user account at a third-party image storage, or an image gallery searched on the web. The computer receives a request to search the image gallery. The computer performs a search of the image gallery, wherein the search is using a color based histogram algorithm based on a user input. The computer transmits a cataloged and sorted image gallery to the user computing device to be displayed.
    Type: Grant
    Filed: August 21, 2019
    Date of Patent: May 12, 2020
    Assignee: International Business Machines Corporation
    Inventors: Jeffrey A. Calcaterra, Wei Ting Dong, Shi Kun Li, Su Liu
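A gridded color histogram can be sketched in a few lines. The 2x2 grid, 4 bins, and histogram-intersection similarity are invented parameters for illustration; the patent does not fix them.

```python
# Small sketch: split the image into grid cells, compute a histogram per cell,
# and compare two images by summed histogram intersection. Gridding preserves
# coarse spatial layout that a single global histogram would lose.
import numpy as np

def gridded_histogram(image, grid=2, bins=4):
    h, w = image.shape
    feats = []
    for gy in range(grid):
        for gx in range(grid):
            cell = image[gy * h // grid:(gy + 1) * h // grid,
                         gx * w // grid:(gx + 1) * w // grid]
            hist, _ = np.histogram(cell, bins=bins, range=(0, 256))
            feats.append(hist / cell.size)        # normalize per cell
    return np.concatenate(feats)

def similarity(img_a, img_b):
    a, b = gridded_histogram(img_a), gridded_histogram(img_b)
    return float(np.minimum(a, b).sum())          # histogram intersection

img = np.zeros((4, 4), dtype=np.uint8)
img[:, 2:] = 200                                   # left half dark, right bright
same = similarity(img, img.copy())                 # identical layout
flipped = similarity(img, img[:, ::-1])            # mirrored layout
```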
  • Patent number: 10650267
    Abstract: According to one embodiment, a medical image processing apparatus includes a memory, an information collector and a determiner. The memory stores DICOM image data including study information. The information collector collects information relating to general-purpose image data incompatible with DICOM. The determiner determines whether designated general-purpose image data is related to a study which generates the DICOM image data, based on information collected by the information collector and the study information.
    Type: Grant
    Filed: January 20, 2016
    Date of Patent: May 12, 2020
    Assignee: Canon Medical Systems Corporation
    Inventors: Masashi Yoshida, Kousuke Sakaue
  • Patent number: 10650268
    Abstract: Provided are an image processing apparatus and an image processing method for improving the accuracy of a recognition result of a current object included in captured images. The image processing apparatus performs recognition processing on the current object on the basis of recognition results of the current object obtained from a plurality of captured images with different output information regarding imaging so that the accuracy of the recognition result of the captured images can be improved. For example, the image processing method can be applied to an electronic device having a function of performing the recognition processing on the current object.
    Type: Grant
    Filed: November 2, 2016
    Date of Patent: May 12, 2020
    Assignee: SONY SEMICONDUCTOR SOLUTIONS CORPORATION
    Inventor: Katsuya Shinozaki
  • Patent number: 10650269
    Abstract: Systems and methods for detecting image anomalies include extracting one or more detected images from a submission file received from at least one computing device and generating an image identification (ID) for each of the one or more images. One or more image quality indices are determined for the submission file based on at least one of predetermined image features, an image type of the one or more images, and submission file attributes, and one or more image anomalies associated with the one or more images of the submission file are detected based on at least one of the image ID and the one or more image quality indices.
    Type: Grant
    Filed: May 13, 2019
    Date of Patent: May 12, 2020
    Assignee: FEDERAL HOME LOAN MORTGAGE CORPORATION
    Inventors: Wenhua Liu, Ming Xiong
  • Patent number: 10650270
    Abstract: Examples relate to simultaneous localization and calibration. An example implementation may involve receiving sensor data indicative of markers detected by a sensor on a vehicle located at vehicle poses within an environment, and determining a pose graph representing the vehicle poses and the markers. For instance, the pose graph may include edges associated with a cost function representing a distance measurement between matching marker detections at different vehicle poses. The distance measurement may incorporate the different vehicle poses and a sensor pose on the vehicle. The implementation may further involve determining a sensor pose transform representing the sensor pose on the vehicle that optimizes the cost function associated with the edges in the pose graph, and providing the sensor pose transform. In further examples, motion model parameters of the vehicle may be optimized as part of a graph-based system as well or instead of sensor calibration.
    Type: Grant
    Filed: October 9, 2017
    Date of Patent: May 12, 2020
    Assignee: X Development LLC
    Inventors: Dirk Holz, Troy Straszheim
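As a minimal sketch of the calibration idea (not the patent's graph optimizer), the code below recovers a 2-D sensor offset on the vehicle by minimizing the distance between the same marker re-projected into the world frame from two different vehicle poses; the marker position, poses, and grid-search ranges are illustrative assumptions.

```python
import math

def world_pos(pose, offset, det):
    """Transform a sensor-frame detection into the world frame:
    world = t + R(theta) @ (offset + det), sensor aligned with vehicle."""
    x, y, th = pose
    c, s = math.cos(th), math.sin(th)
    px, py = offset[0] + det[0], offset[1] + det[1]
    return (x + c * px - s * py, y + s * px + c * py)

def calibrate(poses, dets):
    """Grid-search the sensor offset minimizing the squared distance
    between matched marker detections from two vehicle poses."""
    best, best_cost = None, float('inf')
    for i in range(21):
        for j in range(21):
            off = (i * 0.1, j * 0.1)
            w1 = world_pos(poses[0], off, dets[0])
            w2 = world_pos(poses[1], off, dets[1])
            cost = (w1[0] - w2[0]) ** 2 + (w1[1] - w2[1]) ** 2
            if cost < best_cost:
                best, best_cost = off, cost
    return best
```

With a marker at world (10, 0), a true offset of (1.0, 0.5), and poses at (0, 0, 0) and (0, 0, pi/2), the matching detections are (9, -0.5) and (-1, -10.5), and the grid search recovers the true offset. Note the offset only becomes observable because the two poses differ in rotation.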
  • Patent number: 10650271
    Abstract: An image processing apparatus includes a processor configured to generate vertical-direction distribution data, which is a distribution of distance values along the vertical direction of a distance image, from a distance image whose distance values correspond to the distance of a road surface in captured images. A pixel having the highest frequency value for each distance value is extracted from the pixels in a search area, and the road surface is detected based on the extracted pixels.
    Type: Grant
    Filed: March 8, 2017
    Date of Patent: May 12, 2020
    Assignee: RICOH COMPANY, LTD.
    Inventors: Otoichi Nakata, Naoki Motohashi, Shinichi Sumiyoshi
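The per-row mode extraction the abstract describes can be sketched as follows; this is an illustrative simplification (a v-disparity-style profile), not the patented pipeline, and the input rows stand in for the search area of a real distance image.

```python
from collections import Counter

def road_profile(disparity_rows):
    """For each row of a distance (disparity) image, extract the value
    with the highest frequency in that row's search area -- the
    dominant value assumed to belong to the road surface."""
    return [Counter(row).most_common(1)[0][0] for row in disparity_rows]
```

Rows dominated by the road surface yield its disparity even when obstacle pixels are mixed in.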
  • Patent number: 10650272
    Abstract: Methods and systems for testing base text direction (BTD) include comparing one or more images from an end-user system to a respective reference image associated with a respective text test case. Each of the one or more images includes respective text test case information. It is determined whether the end-user system produces BTD errors based on the comparison in accordance with one or more BTD error rules.
    Type: Grant
    Filed: June 4, 2018
    Date of Patent: May 12, 2020
    Assignee: INTERNATIONAL BUSINESS MACHINES CORPORATION
    Inventors: Aya R. A. Elgebeely, Mohamed M. El-Khouly, Mariam M. R. A. Eltantawi, Hisham E. Elshishiny
  • Patent number: 10650273
    Abstract: A face recognition system of a residential environment identifies an individual present in the residential environment. The residential environment includes a plurality of home devices and is associated with a group of different persons. The face recognition system identifies which person in the group is the individual and generates an operating instruction for a home device based on the identity of the individual. For example, the face recognition system captures an image set of the individual's head and face and applies the image set to a machine learning model that is trained to distinguish between the different persons based on images of their heads and faces. The face recognition system can retrieve a personal profile of the identified individual, which includes settings of the home device for the identified individual. The face recognition system generates the operating instruction based on the personal profile.
    Type: Grant
    Filed: April 12, 2019
    Date of Patent: May 12, 2020
    Assignee: MIDEA GROUP CO. LTD.
    Inventors: Dongyan Wang, Xin Chen, Hua Zhou
  • Patent number: 10650274
    Abstract: A method and a clustering system for image clustering, and a computer-readable storage medium are provided. The method includes: extracting a GIST feature of a first image and a GIST feature of a second image; obtaining an image fingerprint of the first image, based on the GIST feature of the first image and in conjunction with an LSH algorithm and obtaining an image fingerprint of the second image, based on the GIST feature of the second image and in conjunction with the LSH algorithm; calculating a similarity between the first and second images, based on the image fingerprints of the first and second images; and classifying the first image and the second image as a same category of image in a case that the similarity between the first image and the second image is larger than a predetermined similarity threshold.
    Type: Grant
    Filed: April 19, 2018
    Date of Patent: May 12, 2020
    Assignee: Tencent Technology (Shenzhen) Company Limited
    Inventor: Pipei Huang
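The abstract's fingerprint-and-compare step can be illustrated with random-hyperplane LSH over a feature vector; this sketch substitutes a toy vector for a real GIST descriptor, and the bit count and seed are arbitrary assumptions.

```python
import random

def lsh_fingerprint(feature, n_bits=16, seed=42):
    """Random-hyperplane LSH: each bit records which side of a random
    hyperplane the (GIST-like) feature vector falls on. A shared seed
    makes fingerprints of different images comparable."""
    rng = random.Random(seed)
    bits = []
    for _ in range(n_bits):
        plane = [rng.uniform(-1.0, 1.0) for _ in feature]
        dot = sum(f * p for f, p in zip(feature, plane))
        bits.append('1' if dot >= 0 else '0')
    return ''.join(bits)

def similarity(fp_a, fp_b):
    """Fraction of matching bits between two fingerprints; images are
    clustered together when this exceeds a predetermined threshold."""
    return sum(a == b for a, b in zip(fp_a, fp_b)) / len(fp_a)
```

Identical features yield identical fingerprints (similarity 1.0), while a negated feature lands on the opposite side of every hyperplane (similarity 0.0).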
  • Patent number: 10650275
    Abstract: A method for detecting temporal pattern anomalies in a video stream includes detecting an object in a current frame of the video stream, generating a processed current frame that contains the detected object, generating a feature representation of the processed current frame, clustering the feature representation in one or more primary clusters in a clustering space of the primary class, generating an information vector of the feature representation, that includes information regarding the primary class, the sub-class and one or more external factors associated with the feature representation, clustering each information vector into one or more secondary clusters, and reporting a next frame as an anomaly when a corresponding information vector is positioned outside a secondary cluster of a feature representation of a previous frame.
    Type: Grant
    Filed: September 13, 2018
    Date of Patent: May 12, 2020
    Assignee: Chiral Software, Inc.
    Inventors: Eric Jonathan Hollander, Michael Travis Remington
  • Patent number: 10650276
    Abstract: Systems, methods, and articles of manufacture to generate, by a neural network of a variational autoencoder, a latent vector for a first input image, generate, by the neural network of the variational autoencoder, a first reconstructed image by sampling the latent vector for the first input image, determine a reconstruction loss incurred in generating the first reconstructed image based at least in part on: (i) a difference of the first input image and the first reconstructed image, and (ii) a master model trained to detect a sensitive attribute in images, determine a total loss based at least in part on the reconstruction loss and a classification loss, and optimize a plurality of weights of the neural network of the variational autoencoder based on a backpropagation operation and the determined total loss, the optimized neural network trained to not consider the sensitive attribute in images.
    Type: Grant
    Filed: May 16, 2019
    Date of Patent: May 12, 2020
    Assignee: Capital One Services, LLC
    Inventors: Omar Florez Choque, Erik Mueller
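A minimal numeric sketch of the combined objective, assuming a reconstruction MSE term plus a term that is smallest when a hypothetical master model's sensitive-attribute prediction on the reconstruction is uninformative (probability 0.5); the weighting `lam` and the cross-entropy-against-uniform form are illustrative choices, not the patent's exact loss.

```python
import math

def total_loss(x, x_rec, sensitive_logit, lam=1.0):
    """Total loss = reconstruction MSE + classification term.
    sensitive_logit: the (hypothetical) master model's output on x_rec;
    the classification term is minimized when p = sigmoid(logit) = 0.5,
    i.e., the reconstruction carries no sensitive-attribute signal."""
    rec = sum((a - b) ** 2 for a, b in zip(x, x_rec)) / len(x)
    p = 1.0 / (1.0 + math.exp(-sensitive_logit))
    cls = -(0.5 * math.log(p) + 0.5 * math.log(1.0 - p))  # CE vs. uniform
    return rec + lam * cls
```

A perfect reconstruction with an uninformative logit still incurs the floor loss ln 2; a confident sensitive-attribute prediction raises the total, pushing backpropagation to remove that signal.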
  • Patent number: 10650277
    Abstract: The present disclosure relates to an apparatus and method for training a learning system to detect events. The present disclosure provides an apparatus and method for training a learning system that includes an event related keyword collecting unit that collects an event related keyword inputted by a user, a related keyword collecting unit that collects at least one related keyword from a word database by transmitting the event related keyword to the word database, an event related image collecting unit that collects at least one event related image that is related to a search formula from an image database by transmitting the search formula to the image database, and a training unit that trains the learning system by communicating with the learning system using the event related images as training data.
    Type: Grant
    Filed: October 20, 2017
    Date of Patent: May 12, 2020
    Assignees: CENTER FOR INTEGRATED SMART SENSORS FOUNDATION, KOREA ADVANCED INSTITUTE OF SCIENCE AND TECHNOLOGY (KAIST)
    Inventors: Dong Su Han, Jin Woo Shin, Byeok San Lee, Jin Young Yang
  • Patent number: 10650278
    Abstract: Systems and methods for semantic labeling of point clouds using images. Some implementations may include obtaining a point cloud that is based on lidar data reflecting one or more objects in a space; obtaining an image that includes a view of at least one of the one or more objects in the space; determining a projection of points from the point cloud onto the image; generating, using the projection, an augmented image that includes one or more channels of data from the point cloud and one or more channels of data from the image; inputting the augmented image to a two dimensional convolutional neural network to obtain a semantic labeled image wherein elements of the semantic labeled image include respective predictions; and mapping, by reversing the projection, predictions of the semantic labeled image to respective points of the point cloud to obtain a semantic labeled point cloud.
    Type: Grant
    Filed: June 28, 2018
    Date of Patent: May 12, 2020
    Assignee: Apple Inc.
    Inventors: Huy Tho Ho, Jingwei Wang, Kjell Fredrik Larsson
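The projection-and-reverse-mapping idea can be sketched with a pinhole camera model; the focal length, principal point, and label dictionary below are illustrative assumptions, and the real system projects full lidar point clouds onto CNN-labeled images.

```python
def project(points, f=100.0, cx=64.0, cy=64.0):
    """Pinhole projection of camera-frame 3-D points (z forward) to
    integer pixel coordinates; points behind the camera map to None."""
    out = []
    for x, y, z in points:
        out.append((int(f * x / z + cx), int(f * y / z + cy)) if z > 0 else None)
    return out

def label_points(points, labeled_pixels):
    """Reverse the projection: give each lidar point the semantic label
    predicted at the pixel it projects onto (None if unprojectable)."""
    return [labeled_pixels.get(px) if px else None for px in project(points)]
```

Points that project onto labeled pixels inherit those labels, producing a semantic labeled point cloud.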
  • Patent number: 10650279
    Abstract: A learning method for generating integrated object detection information of an integrated image by integrating first object detection information and second object detection information is provided. The method includes steps of: (a) a learning device, if the first object detection information and the second object detection information are acquired, instructing a concatenating network included in a DNN to generate pair feature vectors including information on pairs of first original ROIs and second original ROIs; (b) the learning device instructing a determining network included in the DNN to apply FC operations to the pair feature vectors, to thereby generate (i) determination vectors and (ii) box regression vectors; (c) the learning device instructing a loss unit to generate an integrated loss, and performing backpropagation processes by using the integrated loss, to thereby learn at least part of parameters included in the DNN.
    Type: Grant
    Filed: December 22, 2019
    Date of Patent: May 12, 2020
    Assignee: STRADVISION, INC.
    Inventors: Kye-Hyeon Kim, Yongjoong Kim, Hak-Kyoung Kim, Woonhyun Nam, SukHoon Boo, Myungchul Sung, Dongsoo Shin, Donghun Yeo, Wooju Ryu, Myeong-Chun Lee, Hyungsoo Lee, Taewoong Jang, Kyungjoong Jeong, Hongmo Je, Hojin Cho
  • Patent number: 10650280
    Abstract: In various embodiments, training objects are classified by human annotators, psychometric data characterizing the annotation of the training objects is acquired, a human-weighted loss function based at least in part on the classification data and the psychometric data is computationally derived, and one or more features of a query object are computationally classified based at least in part on the human-weighted loss function.
    Type: Grant
    Filed: April 11, 2019
    Date of Patent: May 12, 2020
    Assignee: PRESIDENT AND FELLOWS OF HARVARD COLLEGE
    Inventors: David Cox, Walter Scheirer, Samuel Anthony, Ken Nakayama
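One simple way to realize a human-weighted loss (an illustrative sketch, not the patent's specific derivation) is to weight each training example's loss by a psychometric signal such as annotator agreement:

```python
def human_weighted_loss(losses, agreements):
    """Weight each example's loss by a psychometric signal (here,
    annotator agreement in [0, 1]): errors on items humans label
    consistently are penalized more than errors on ambiguous items."""
    total = sum(agreements)
    return sum(l * w for l, w in zip(losses, agreements)) / total
```

Shifting weight toward low-loss, high-agreement examples lowers the aggregate, so the classifier is steered by human certainty rather than raw counts.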
  • Patent number: 10650281
    Abstract: An inventory capture system, method, and apparatus (collectively, the inventory capture system) may provide for creating and updating an inventory of clothing for a user. The inventory capture system may use voice and image recognition to capture an inventory of clothing and provide users the ability to enhance the captured details about an inventory of clothing with annotations. Moreover, the inventory capture system may provide a way to facilitate retailers and users to leverage the user's existing inventory of clothing and augment the user's inventory of clothing with shared, purchased and/or rented clothing.
    Type: Grant
    Filed: June 29, 2016
    Date of Patent: May 12, 2020
    Assignee: Intel Corporation
    Inventors: Yuri I. Krimon, David I. Poisner
  • Patent number: 10650282
    Abstract: A first classification unit outputs a plurality of evaluation values indicating the possibility of being each of a plurality of types of case regions for each pixel of a three-dimensional image. Based on selected conversion definition information, a second classification unit converts a first classification result for each pixel of the three-dimensional image based on the plurality of evaluation values, and outputs a second classification result for each pixel of the three-dimensional image.
    Type: Grant
    Filed: March 22, 2018
    Date of Patent: May 12, 2020
    Assignee: FUJIFILM Corporation
    Inventor: Shoji Kanada
  • Patent number: 10650283
    Abstract: An electronic apparatus is provided. The electronic apparatus includes: a storage configured to store a plurality of filters each corresponding to a plurality of image patterns; and a processor configured to classify an image block including a target pixel and a plurality of surrounding pixels into one of the plurality of image patterns based on a relationship between pixels within the image block and to obtain a final image block in which the target pixel is image-processed by applying at least one filter corresponding to the classified image pattern from among the plurality of filters to the image block, wherein the plurality of filters are obtained by learning, through an artificial intelligence algorithm, a relationship between a plurality of first sample image blocks and a plurality of second sample image blocks corresponding to the plurality of first sample image blocks based on each of the plurality of image patterns.
    Type: Grant
    Filed: May 24, 2018
    Date of Patent: May 12, 2020
    Assignee: Samsung Electronics Co., Ltd.
    Inventors: Hyun-seung Lee, Dong-hyun Kim, Young-su Moon, Tae-gyoung Ahn
  • Patent number: 10650284
    Abstract: A system to verify product authenticity includes a transceiver configured to receive information regarding one or more pieces of ferromagnetic material that are in or on a product. The system also includes an electromagnetic radiation source configured to emit radiation to heat the one or more pieces of ferromagnetic material in or on the product. The system also includes a heat sensor configured to detect heat emitted from the one or more pieces of ferromagnetic material that are in or on the product. The system further includes a processor in communication with the transceiver, the electromagnetic radiation source, and the heat sensor. The processor is configured to determine if the product is counterfeit based on the received information and the detected heat.
    Type: Grant
    Filed: July 25, 2019
    Date of Patent: May 12, 2020
    Inventor: Bernard Fryshman
  • Patent number: 10650285
    Abstract: In an illustrative embodiment, methods and systems for automatically categorizing a condition of a property characteristic may include obtaining aerial imagery of a geographic region including the property, identifying features of the aerial imagery corresponding to the property characteristic, analyzing the features to determine a property characteristic classification, and analyzing a region of the aerial imagery including the property characteristic to determine a condition classification.
    Type: Grant
    Filed: January 3, 2020
    Date of Patent: May 12, 2020
    Assignee: Aon Benfield Inc.
    Inventor: Takeshi Okazaki
  • Patent number: 10650286
    Abstract: Embodiments of the present systems and methods may provide the capability to classify medical images, such as mammograms, in an automated manner using existing annotation information. In embodiments, only the global, image-level tag may be needed to classify a mammogram into certain types, without fine annotation of the findings in the image. In an embodiment, a computer-implemented method for classifying medical images may comprise receiving a plurality of image tiles, each including a portion of a whole view; processing each tile with a trained or pre-trained model that outputs a one-dimensional feature vector per tile to generate a three-dimensional feature volume; and classifying the larger image with a trained model, based on the generated three-dimensional feature volume, to form a classification of the image.
    Type: Grant
    Filed: September 7, 2017
    Date of Patent: May 12, 2020
    Assignee: International Business Machines Corporation
    Inventors: Rami Ben-Ari, Pavel Kisilev, Jeremias Sulam
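The tile-features-then-classify flow can be sketched as below; the per-tile `feature` function is a toy stand-in for the pre-trained model, and the max-pooling classifier and threshold are illustrative assumptions rather than the patented architecture.

```python
def tile(image, tsize):
    """Split a 2-D image (list of rows) into tsize x tsize tiles."""
    h, w = len(image), len(image[0])
    tiles = []
    for r in range(0, h, tsize):
        for c in range(0, w, tsize):
            tiles.append([row[c:c + tsize] for row in image[r:r + tsize]])
    return tiles

def feature(tile_):
    """Stand-in for the pre-trained model: a 1-D feature vector per tile."""
    flat = [p for row in tile_ for p in row]
    return [sum(flat) / len(flat), max(flat)]

def classify(image, tsize=2, thresh=0.5):
    """Stack per-tile feature vectors into a feature volume, then
    classify the whole image from a global pool (max over tiles) --
    only the image-level tag is needed, no per-finding annotation."""
    volume = [feature(t) for t in tile(image, tsize)]
    return int(max(v[1] for v in volume) > thresh)
```

A single salient tile is enough to flip the image-level classification, mirroring how a localized finding drives a global mammogram label.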
  • Patent number: 10650287
    Abstract: Methods are provided for determining discriminant functions of minimum risk quadratic classification systems, wherein a discriminant function is represented by a geometric locus of a principal eigenaxis of a quadratic decision boundary. A geometric locus of a principal eigenaxis is determined by solving a system of fundamental locus equations of binary classification, subject to geometric and statistical conditions for a minimum risk quadratic classification system in statistical equilibrium. Feature vectors and machine learning algorithms are used to determine discriminant functions and ensembles of discriminant functions of minimum risk quadratic classification systems, wherein a discriminant function of a minimum risk quadratic classification system exhibits the minimum probability of error for classifying given collections of feature vectors and unknown feature vectors related to the collections.
    Type: Grant
    Filed: July 22, 2019
    Date of Patent: May 12, 2020
    Inventor: Denise Marie Reeves
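For intuition only (not the patent's eigenaxis construction), a quadratic discriminant in one dimension can be written directly from two Gaussian class densities with equal priors:

```python
import math

def quad_discriminant(mu1, var1, mu2, var2):
    """1-D quadratic discriminant for two Gaussian classes with equal
    priors: positive output -> class 1, negative -> class 2. This is
    the log-likelihood ratio, which minimizes the probability of error
    under the assumed densities."""
    def d(x):
        return (-(x - mu1) ** 2 / (2 * var1) - 0.5 * math.log(var1)
                + (x - mu2) ** 2 / (2 * var2) + 0.5 * math.log(var2))
    return d
```

With equal variances the decision boundary reduces to the midpoint between the means; unequal variances bend it into a genuinely quadratic boundary.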