Patents Examined by Gregory M. Desire
  • Patent number: 11107205
    Abstract: A method includes obtaining multiple image frames of a scene using at least one camera of an electronic device. The method also includes using a convolutional neural network to generate blending maps associated with the image frames. The blending maps contain or are based on both a measure of motion in the image frames and a measure of how well exposed different portions of the image frames are. The method further includes generating a final image of the scene using at least some of the image frames and at least some of the blending maps. The final image of the scene may be generated by blending the at least some of the image frames using the at least some of the blending maps, and the final image of the scene may include image details that are lost in at least one of the image frames due to over-exposure or under-exposure.
    Type: Grant
    Filed: February 18, 2019
    Date of Patent: August 31, 2021
    Assignee: Samsung Electronics Co., Ltd.
    Inventors: Yuting Hu, Ruiwen Zhen, John W. Glotzbach, Ibrahim Pekkucuksen, Hamid R. Sheikh
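    Example: A minimal sketch of the blending step described in the abstract above, assuming the per-frame blending maps are already available (the convolutional network that generates them is not shown); each output pixel is a normalized, map-weighted sum of the input frames.
      import numpy as np

      def blend_frames(frames, blend_maps, eps=1e-8):
          """Blend exposure-bracketed frames with per-pixel weight maps.

          frames:     list of HxWx3 float arrays (the captured image frames)
          blend_maps: list of HxW float arrays, one weight map per frame
                      (stand-ins for the CNN-generated blending maps)
          """
          num = np.zeros_like(frames[0], dtype=np.float64)
          den = np.zeros(frames[0].shape[:2], dtype=np.float64)
          for frame, weights in zip(frames, blend_maps):
              num += frame * weights[..., None]   # weight each pixel of the frame
              den += weights
          return num / (den[..., None] + eps)     # normalize so the weights sum to 1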
  • Patent number: 11107143
    Abstract: Many embodiments can include a system. In some embodiments, the system can comprise one or more processors and one or more non-transitory storage devices storing computing instructions.
    Type: Grant
    Filed: April 29, 2019
    Date of Patent: August 31, 2021
    Assignee: WALMART APOLLO LLC
    Inventors: Stephen Dean Guo, Kannan Achan, Venkata Syam Prakash Rapaka
  • Patent number: 11093751
    Abstract: A system and methods are disclosed for using a trained machine learning model to identify constituent images within composite images. A method may include providing pixel data of a first image as input to the trained machine learning model, obtaining one or more outputs from the trained machine learning model, and extracting, from the one or more outputs, a level of confidence that (i) the first image is a composite image that includes a constituent image, and (ii) at least a portion of the constituent image is in a particular spatial area of the first image.
    Type: Grant
    Filed: March 9, 2020
    Date of Patent: August 17, 2021
    Assignee: GOOGLE LLC
    Inventors: Filip Pavetic, King Hong Thomas Leung, Dmitrii Tochilkin
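    Example: A sketch of the extraction step, assuming a hypothetical model whose output is a five-element vector [confidence, x, y, w, h]; the output layout and the threshold are illustrative, not the patented interface.
      from dataclasses import dataclass

      @dataclass
      class CompositeDetection:
          confidence: float        # confidence that the input is a composite image
          region: tuple            # (x, y, w, h) of the suspected constituent image

      def extract_detection(model_outputs, threshold=0.5):
          """Turn raw model outputs into a composite/non-composite decision."""
          confidence, x, y, w, h = model_outputs
          detection = CompositeDetection(float(confidence), (x, y, w, h))
          return detection.confidence >= threshold, detection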
  • Patent number: 11087175
    Abstract: A method for learning a recurrent neural network to check autonomous driving safety, to be used for switching a driving mode of an autonomous vehicle, is provided. The method includes steps of: (a) a learning device, if training images corresponding to front and rear cameras of the autonomous vehicle are acquired, inputting each pair of the training images into corresponding CNNs, to concatenate the training images and generate feature maps for training, (b) inputting the feature maps for training into long short-term memory models corresponding to sequences of a forward RNN, and into those corresponding to the sequences of a backward RNN, to generate updated feature maps for training, and inputting feature vectors for training into an attention layer, to generate an autonomous-driving mode value for training, and (c) allowing a loss layer to calculate losses and to learn the long short-term memory models.
    Type: Grant
    Filed: December 20, 2019
    Date of Patent: August 10, 2021
    Assignee: Stradvision, Inc.
    Inventors: Kye-Hyeon Kim, Yongjoong Kim, Hak-Kyoung Kim, Woonhyun Nam, SukHoon Boo, Myungchul Sung, Dongsoo Shin, Donghun Yeo, Wooju Ryu, Myeong-Chun Lee, Hyungsoo Lee, Taewoong Jang, Kyungjoong Jeong, Hongmo Je, Hojin Cho
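    Example: A minimal PyTorch sketch of step (b), assuming per-frame feature vectors from the camera CNNs are already available; a bidirectional LSTM plays the role of the forward and backward RNNs, and a simple learned attention over time steps pools the sequence into the driving-mode value. Dimensions and the sigmoid head are assumptions.
      import torch
      import torch.nn as nn

      class DrivingModeRNN(nn.Module):
          """Bidirectional LSTM over per-frame features plus an attention layer
          that pools the sequence into a single autonomous-driving-mode value."""

          def __init__(self, feat_dim=256, hidden=128):
              super().__init__()
              self.rnn = nn.LSTM(feat_dim, hidden, batch_first=True, bidirectional=True)
              self.attn = nn.Linear(2 * hidden, 1)       # one score per time step
              self.head = nn.Linear(2 * hidden, 1)       # driving-mode value

          def forward(self, feats):                      # feats: (batch, time, feat_dim)
              seq, _ = self.rnn(feats)                   # (batch, time, 2 * hidden)
              weights = torch.softmax(self.attn(seq), dim=1)
              context = (weights * seq).sum(dim=1)       # attention-weighted summary
              return torch.sigmoid(self.head(context))   # value in [0, 1]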
  • Patent number: 11080554
    Abstract: Embodiments provide techniques, including systems and methods, for processing imaging data to identify an installed component. Embodiments include a component identification system that is configured to receive imaging data including an installed component, extract features of the installed component from the imaging data, and search a data store of components for matching reference components that match those features. A relevance score may be determined for each of the reference components based on a similarity between the image and a plurality of reference images in a component model of each of the plurality of reference components. At least one matching reference component may be identified by comparing each relevance score to a threshold relevance score and matching component information may be provided to an end-user for each matching reference component.
    Type: Grant
    Filed: December 19, 2017
    Date of Patent: August 3, 2021
    Assignee: LOMA LINDA UNIVERSITY
    Inventors: Montry Suprono, Robert Walter
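    Example: A sketch of the relevance-scoring and thresholding step, with cosine similarity standing in for whatever similarity measure the patented system actually uses; the layout of the component data store is assumed.
      import numpy as np

      def match_components(query_features, reference_components, threshold=0.8):
          """Score each reference component against the features extracted from
          the imaging data and keep those above the relevance threshold.

          reference_components: list of {"name": str, "images": [feature vectors]}
          """
          matches = []
          q = query_features / (np.linalg.norm(query_features) + 1e-12)
          for comp in reference_components:
              sims = [float(q @ (r / (np.linalg.norm(r) + 1e-12)))
                      for r in comp["images"]]
              relevance = max(sims)                      # best-matching reference image
              if relevance >= threshold:
                  matches.append((comp["name"], relevance))
          return sorted(matches, key=lambda m: m[1], reverse=True)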
  • Patent number: 11080555
    Abstract: A method of detecting trends is provided. The method comprises receiving, from a number of data sources, data regarding choices of people at a number of specified events and public places, and determining, according to a number of clustering algorithms, trend clusters from the received data cross-referenced to defined event types and place types. Customer profile data and preferences are received from a number of registered customers through user interfaces, and a number of customer clusters are determined from the customer profile data and preferences according to clustering algorithms. Correlation rules are calculated between the trend clusters and the customer clusters. A number of trend predictions and recommendations are then sent to a user regarding a number of specified events or time frames according to the correlation rules.
    Type: Grant
    Filed: September 5, 2019
    Date of Patent: August 3, 2021
    Assignee: International Business Machines Corporation
    Inventors: Shubhadip Ray, John David Costantini, Avik Sanyal, Sarbajit K. Rakshit
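    Example: A rough sketch of the clustering and correlation steps, assuming one trend observation and one customer profile per recorded interaction; the "correlation rules" are reduced here to a conditional co-occurrence table, which is only one of many ways they could be computed.
      import numpy as np
      from sklearn.cluster import KMeans

      def correlate_clusters(trend_data, customer_data, n_trend=5, n_cust=5):
          """Cluster event/place trend data and customer profile data separately,
          then derive simple rules as P(trend cluster | customer cluster)."""
          trend_labels = KMeans(n_clusters=n_trend, n_init=10).fit_predict(trend_data)
          cust_labels = KMeans(n_clusters=n_cust, n_init=10).fit_predict(customer_data)

          co = np.zeros((n_cust, n_trend))
          for c, t in zip(cust_labels, trend_labels):
              co[c, t] += 1                              # co-occurrence counts
          rules = co / np.maximum(co.sum(axis=1, keepdims=True), 1)
          return trend_labels, cust_labels, rules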
  • Patent number: 11080900
    Abstract: Provided is a method and apparatus for metal artifact reduction in industrial three-dimensional (3D) cone beam computed tomography (CBCT) that may align computer-aided design (CAD) data to correspond to CT data, generate registration data from the aligned CAD data, set a sinogram surgery region corresponding to a metal region based on the registration data, perform an average fill-in process on the CT data based on the registration data, update data of the sinogram surgery region based on the averaged filled-in information, and reconstruct a 3D CT image from the updated sinogram data with surgery region.
    Type: Grant
    Filed: December 20, 2019
    Date of Patent: August 3, 2021
    Assignee: KOREA ADVANCED INSTITUTE OF SCIENCE AND TECHNOLOGY
    Inventors: Chang-Ock Lee, Soomin Jeon, Seongeun Kim
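    Example: A simplified sketch of the average fill-in step over the sinogram surgery region, assuming the metal mask has already been obtained from the registered CAD data; the actual update rule in the patent may differ.
      import numpy as np

      def fill_in_metal_region(sinogram, metal_mask):
          """Replace each sinogram bin flagged as metal with the mean of the
          unflagged bins in the same projection row.

          sinogram:   2-D array, projections x detector bins
          metal_mask: boolean array of the same shape (True over the surgery region)
          """
          out = sinogram.copy()
          for i in range(sinogram.shape[0]):
              row = metal_mask[i]
              if row.any() and (~row).any():
                  out[i, row] = sinogram[i, ~row].mean()
          return out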
  • Patent number: 11074592
    Abstract: An economical and accurate method of classifying a consumer good as authentic is provided. The method leverages machine learning and the use of steganographic features on the authentic consumer good.
    Type: Grant
    Filed: March 11, 2019
    Date of Patent: July 27, 2021
    Assignee: The Procter & Gamble Company
    Inventors: Jonathan Richard Stonehouse, Boguslaw Obara
  • Patent number: 11074708
    Abstract: Various embodiments described herein relate to techniques for computing dimensions of an object. In this regard, a dimensioning system converts point cloud data associated with an object into a density image for a scene associated with the object. The dimensioning system also segments the density image to determine a void region in the density image that corresponds to the object. Furthermore, the dimensioning system determines, based on the void region for the density image, dimension data indicative of one or more dimensions of the object.
    Type: Grant
    Filed: January 6, 2020
    Date of Patent: July 27, 2021
    Assignee: Hand Held Products, Inc.
    Inventors: Scott McCloskey, Michael Albright
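    Example: A sketch of the density-image and void-region steps: project the point cloud onto a top-down grid, treat empty cells as the void left by the object, and measure their bounding extent. A real system would segment a single connected void region; taking all empty cells is a simplification.
      import numpy as np

      def density_image(points_xy, bounds, grid=(200, 200)):
          """Histogram the projected point cloud onto a top-down grid."""
          (xmin, xmax), (ymin, ymax) = bounds
          hist, xedges, yedges = np.histogram2d(
              points_xy[:, 0], points_xy[:, 1],
              bins=grid, range=[[xmin, xmax], [ymin, ymax]])
          return hist, xedges, yedges

      def void_region_dimensions(hist, xedges, yedges):
          """Bounding extent of the empty (void) cells, in scene units."""
          void = hist == 0
          if not void.any():
              return 0.0, 0.0
          xs, ys = np.nonzero(void)                  # first axis of hist is x
          cell_x = xedges[1] - xedges[0]
          cell_y = yedges[1] - yedges[0]
          length = (xs.max() - xs.min() + 1) * cell_x
          width = (ys.max() - ys.min() + 1) * cell_y
          return length, width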
  • Patent number: 11074684
    Abstract: A high-frequency component removing part removes high-frequency components from a first object image obtained by picking up an image of an object and a first reference image, to acquire a second object image and a second reference image, respectively. A correction part corrects a value of each pixel of at least one of the first object image and the first reference image on the basis of a discrepancy, which is a ratio or a difference, between a value of the corresponding pixel of the second object image and a value of the corresponding pixel of the second reference image. A comparison part compares the first object image with the first reference image, to thereby detect a defect area in the first object image.
    Type: Grant
    Filed: May 14, 2018
    Date of Patent: July 27, 2021
    Assignee: SCREEN HOLDINGS CO., LTD.
    Inventor: Hiroyuki Onishi
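    Example: A sketch of the correct-then-compare flow, with a Gaussian blur standing in for the high-frequency removing part and the ratio form of the discrepancy; the thresholds are illustrative.
      import numpy as np
      from scipy.ndimage import gaussian_filter

      def detect_defects(obj_img, ref_img, sigma=5.0, defect_thresh=0.1, eps=1e-6):
          """Low-pass both images, correct the reference by the pixel-wise ratio
          of the smoothed images, then flag large residuals as defects."""
          obj_lp = gaussian_filter(obj_img, sigma)   # "second object image"
          ref_lp = gaussian_filter(ref_img, sigma)   # "second reference image"
          ratio = obj_lp / (ref_lp + eps)            # per-pixel discrepancy
          corrected_ref = ref_img * ratio            # corrected reference image
          return np.abs(obj_img - corrected_ref) > defect_thresh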
  • Patent number: 11066088
    Abstract: A detection and positioning method for a train water injection port includes: acquiring train water injection port video images, and performing threshold segmentation on the train water injection port video images to obtain binarized train water injection port video images; processing the binarized train water injection port video images and matching the processed binarized train water injection port video images with a train water injection port template image; detecting a position of the water injection port in the train water injection port video image and comparing the position with a pre-set position range where the water injection port is located; and if the position and the pre-set position range have been matched, then transmitting a matching valid signal to a mechanical device control module to control a mechanical device to move or stop and start or stop water injection.
    Type: Grant
    Filed: December 13, 2018
    Date of Patent: July 20, 2021
    Assignee: XIDIAN UNIVERSITY
    Inventors: Huixin Zhou, Pei Xiang, Baokai Deng, Yue Yu, Lixin Guo, Dong Zhao, Hanlin Qin, Bingjian Wang, Jiangluqi Song, Huan Li, Bo Yao, Rui Lai, Xiuping Jia, Jun Zhou
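    Example: An OpenCV sketch of the threshold-segment-and-match flow; Otsu thresholding, normalized cross-correlation template matching, and the score threshold are stand-ins for processing the abstract leaves unspecified.
      import cv2

      def locate_water_port(frame_gray, template_gray, preset_range, score_thresh=0.7):
          """Binarize the frame and the template, match them, and check whether
          the detected position lies in the pre-set range.

          preset_range: ((x_min, x_max), (y_min, y_max)) where the port is expected.
          Returns (matching_valid, (x, y)) for the mechanical control module.
          """
          _, frame_bin = cv2.threshold(frame_gray, 0, 255,
                                       cv2.THRESH_BINARY + cv2.THRESH_OTSU)
          _, tmpl_bin = cv2.threshold(template_gray, 0, 255,
                                      cv2.THRESH_BINARY + cv2.THRESH_OTSU)
          result = cv2.matchTemplate(frame_bin, tmpl_bin, cv2.TM_CCOEFF_NORMED)
          _, max_val, _, (x, y) = cv2.minMaxLoc(result)

          (x_min, x_max), (y_min, y_max) = preset_range
          in_range = x_min <= x <= x_max and y_min <= y <= y_max
          return bool(max_val >= score_thresh and in_range), (x, y)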
  • Patent number: 11069098
    Abstract: An imaging data set (22) comprising detected counts along lines of response (LORs) is reconstructed (24) to generate a full-volume image at a standard resolution. A region selection graphical user interface (GUI) (26) is provided via which a user-chosen region of interest (ROI) is defined in the full-volume image, and this is automatically adjusted by identifying an anatomical feature corresponding to the user-chosen ROI and adjusting the user-chosen ROI to improve alignment with that feature. A sub-set (32) of the counts of the imaging data set is selected (30) for reconstructing the ROI, and only the selected sub-set is reconstructed (34) to generate a ROI image (36) representing the ROI at a higher resolution than the standard resolution. A fraction of the sub-set of counts may be reconstructed using different reconstruction algorithms (40) to generate corresponding sample ROI images, and a reconstruction algorithm selection graphical user interface (42) employs these sample ROI images.
    Type: Grant
    Filed: November 22, 2017
    Date of Patent: July 20, 2021
    Assignee: KONINKLIJKE PHILIPS N.V.
    Inventors: Shekhar Dwivedi, Andriy Andreyev, Chuanyong Bai, Chi-Hua Tung
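    Example: A geometric sketch of selecting the sub-set of counts used for the ROI reconstruction, approximating each line of response as an infinite line and the ROI as a sphere; the actual selection in the patent may be more involved.
      import numpy as np

      def select_lors_for_roi(lor_p1, lor_p2, roi_center, roi_radius):
          """Boolean mask over counts whose line of response passes through a
          spherical ROI. lor_p1, lor_p2: (N, 3) detector endpoint arrays."""
          d = lor_p2 - lor_p1
          d = d / np.linalg.norm(d, axis=1, keepdims=True)
          v = roi_center - lor_p1                    # endpoint-to-ROI-center vectors
          t = np.sum(v * d, axis=1, keepdims=True)   # projection onto LOR direction
          closest = lor_p1 + t * d                   # closest point on each LOR
          dist = np.linalg.norm(roi_center - closest, axis=1)
          return dist <= roi_radius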
  • Patent number: 11068728
    Abstract: A method or system is capable of detecting operator behavior (“OB”) utilizing a virtuous cycle containing sensors, machine learning center (“MLC”), and cloud based network (“CBN”). In one aspect, the process monitors operator body language captured by interior sensors and captures surrounding information observed by exterior sensors onboard a vehicle as the vehicle is in motion. After selectively recording the captured data in accordance with an OB model generated by MLC, an abnormal OB (“AOB”) is detected in accordance with vehicular status signals received by the OB model. Upon rewinding recorded operator body language and the surrounding information leading up to detection of AOB, labeled data associated with AOB is generated. The labeled data is subsequently uploaded to CBN for facilitating OB model training at MLC via a virtuous cycle.
    Type: Grant
    Filed: September 5, 2019
    Date of Patent: July 20, 2021
    Assignee: XEVO INC.
    Inventors: Robert Victor Welland, Samuel James McKelvie, Richard Chia-Tsing Tong, Noah Harrison Fradin, Vladimir Sadovsky
  • Patent number: 11062466
    Abstract: The present disclosure relates to an apparatus and a method for information processing that enable alignment between imaged images to be performed more accurately. From a detection result of a correspondence point between an imaged image that includes a pattern irradiated for alignment with another imaged image and that other imaged image including the same pattern, an evaluation value is calculated that evaluates the occurrence rate of alignment errors when the imaged image and the other imaged image are composited with each other; the irradiated position of the pattern is then updated on the basis of the calculated evaluation value. The present disclosure, for example, can be applied to an information processing apparatus, an irradiation device, an imaging apparatus, an irradiation imaging apparatus, a controller, an imaging system, or the like.
    Type: Grant
    Filed: February 8, 2018
    Date of Patent: July 13, 2021
    Assignee: SONY CORPORATION
    Inventors: Takuro Kawai, Kenichiro Hosokawa, Koji Nishida, Keisuke Chida
  • Patent number: 11062434
    Abstract: A method of generating an elemental map includes: acquiring a plurality of correction channel images by scanning a surface of a standard specimen having a uniform elemental concentration with a primary beam and generating a correction channel image for each channel; generating correction information for each pixel of each correction channel image among the plurality of correction channel images based on a brightness value of the pixel; acquiring a plurality of analysis channel images by scanning a surface of a specimen to be analyzed with the primary beam and generating an analysis channel image for each channel; correcting brightness values of pixels constituting an analysis channel image among the plurality of analysis channel images based on the correction information; and generating an elemental map of the specimen to be analyzed based on the plurality of analysis channel images having pixels with corrected brightness values.
    Type: Grant
    Filed: October 2, 2019
    Date of Patent: July 13, 2021
    Assignee: JEOL Ltd.
    Inventor: Tatsuya Uchida
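    Example: A sketch of the per-pixel correction, assuming the correction information is a multiplicative gain (ideal response over observed brightness of the uniform standard specimen); the patent may define the correction differently.
      import numpy as np

      def correction_gains(correction_channel_images, eps=1e-12):
          """Per-pixel gain for each channel, taking the channel mean of the
          standard-specimen image as the ideal uniform response."""
          return [img.mean() / (img + eps) for img in correction_channel_images]

      def corrected_elemental_map(analysis_channel_images, gains):
          """Apply each channel's gain to its analysis image and stack the
          corrected planes into an elemental map."""
          return np.stack([img * g for img, g in zip(analysis_channel_images, gains)])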
  • Patent number: 11055829
    Abstract: A type of glasses worn on a human face in a to-be-processed picture is detected, and a lens area of the glasses is determined; and a deformation model of the lens area is determined based on the type of the glasses, where the deformation model is used to indicate deformation caused by the glasses on an image in the lens area, and then the image in the lens area is restored based on the deformation model.
    Type: Grant
    Filed: December 29, 2016
    Date of Patent: July 6, 2021
    Assignee: Huawei Technologies Co., Ltd.
    Inventor: Chao Qin
  • Patent number: 11055818
    Abstract: A panorama image alignment method comprises: obtaining multipath spherical images; calculating rotation Euler angles between each spherical image and a middle portion, a left portion and a right portion of an adjacent spherical image according to a middle portion, a left portion and a right portion of each spherical image, to obtain a first left portion rotation matrix and a first right portion rotation matrix; obtaining a first left panorama image, a first right panorama image, a second left panorama image and a second right panorama image; aligning the second left panorama image to the first left panorama image, obtaining a second left portion rotation matrix by means of calculation, and then obtaining a rotation matrix of a left panorama; aligning the second right panorama image to the first right panorama image, obtaining a second right portion rotation matrix by means of calculation, and obtaining a rotation matrix of a right panorama.
    Type: Grant
    Filed: April 26, 2018
    Date of Patent: July 6, 2021
    Assignee: ARASHI VISION INC.
    Inventors: Chenglong Yin, Wenjie Jiang, Jingkang Liu
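    Example: A small sketch of turning the estimated Euler angles into rotation matrices and composing them; the Euler convention and the order of composition are assumptions, since the abstract does not specify them.
      from scipy.spatial.transform import Rotation

      def rotation_from_euler(yaw, pitch, roll, degrees=True):
          """Rotation matrix from the Euler angles estimated between image portions."""
          return Rotation.from_euler("zyx", [yaw, pitch, roll], degrees=degrees).as_matrix()

      def panorama_rotation(first_portion_R, second_portion_R):
          """Compose the per-portion rotations into the panorama rotation matrix."""
          return second_portion_R @ first_portion_R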
  • Patent number: 11055567
    Abstract: The present disclosure provides an unsupervised exception access detection method and apparatus based on a one-hot encoding mechanism. The method includes: encoding each test URL sample by using the one-hot encoding mechanism, to obtain a high-dimensional vector; inputting the high-dimensional vector into a pre-built deep autoencoder network for compression and dimension reduction processing, to obtain a two-dimensional vector; performing a visualization operation on the two-dimensional vectors by using a two-dimensional coordinate system, to obtain visualized test URL samples; performing a cluster analysis on all visualized test URL samples by using a K-means algorithm, to divide the test URL sample set into a first type and a second type of URL sets; and comparing the sample sizes of the first and second types of URL sets, determining the URL set with the larger sample size as a normal URL set, and determining the URL set with the smaller sample size as an abnormal URL set.
    Type: Grant
    Filed: September 25, 2018
    Date of Patent: July 6, 2021
    Assignee: TSINGHUA UNIVERSITY
    Inventors: Ke Xu, Yi Zhao, Qi Tan
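    Example: A condensed sketch of the pipeline: character-level one-hot encoding of each URL, a deep autoencoder with a 2-D bottleneck (training loop omitted), and K-means with two clusters where the larger cluster is taken as normal. The character set, layer sizes, and URL length are assumptions.
      import numpy as np
      import torch.nn as nn
      from sklearn.cluster import KMeans

      CHARS = "abcdefghijklmnopqrstuvwxyz0123456789:/.?&=_-%"

      def one_hot_url(url, max_len=128):
          """Character-level one-hot encoding of a URL, flattened to one vector."""
          vec = np.zeros((max_len, len(CHARS)), dtype=np.float32)
          for i, ch in enumerate(url.lower()[:max_len]):
              j = CHARS.find(ch)
              if j >= 0:
                  vec[i, j] = 1.0
          return vec.reshape(-1)

      class URLAutoencoder(nn.Module):
          """Deep autoencoder compressing the one-hot vector down to 2-D."""
          def __init__(self, in_dim):
              super().__init__()
              self.encoder = nn.Sequential(nn.Linear(in_dim, 256), nn.ReLU(),
                                           nn.Linear(256, 32), nn.ReLU(),
                                           nn.Linear(32, 2))
              self.decoder = nn.Sequential(nn.Linear(2, 32), nn.ReLU(),
                                           nn.Linear(32, 256), nn.ReLU(),
                                           nn.Linear(256, in_dim))
          def forward(self, x):
              z = self.encoder(x)                # 2-D representation for visualization
              return self.decoder(z), z

      def split_normal_abnormal(two_d_points):
          """K-means with k = 2; the larger cluster is labelled normal."""
          labels = KMeans(n_clusters=2, n_init=10).fit_predict(two_d_points)
          normal_label = np.argmax(np.bincount(labels))
          return labels == normal_label          # True = normal URL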
  • Patent number: 11048951
    Abstract: An occupant state recognition apparatus including an eyelid opening recognition unit configured to recognize an eyelid opening of a driver and maximum and minimum values of the eyelid opening; an eye state determination unit configured to determine that the eye is in an eye open state if the eyelid opening becomes greater than or equal to a preset threshold value, and to determine that the eye is in an eye closed state if the eyelid opening becomes less than the threshold value; and a threshold value resetting unit configured to reset the threshold value to a value between the maximum value and the minimum value of the eyelid opening, if the maximum value (or the minimum value) has not become greater (or less) than or equal to the threshold value for a predetermined period or a period corresponding to a predetermined number of times of eyelid opening and closing.
    Type: Grant
    Filed: June 28, 2019
    Date of Patent: June 29, 2021
    Assignee: TOYOTA JIDOSHA KABUSHIKI KAISHA
    Inventors: Naoki Nishimura, Shunichiroh Sawai, Kenichiroh Hara, Koichiro Yamauchi
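    Example: A sketch of the open/closed decision and the threshold-resetting rule, with a simple update counter standing in for the "predetermined period"; the reset fraction alpha is an assumption.
      def update_eye_state(opening, threshold, max_open, min_open,
                           stuck_counter, stuck_limit=30, alpha=0.5):
          """Return (is_open, new_threshold, new_counter).

          The threshold is reset to a value between the observed maximum and
          minimum eyelid opening if, for `stuck_limit` consecutive updates, the
          maximum never reached the threshold or the minimum never fell below it."""
          is_open = opening >= threshold

          if max_open < threshold or min_open >= threshold:
              stuck_counter += 1                 # neither blink extreme crossed the threshold
          else:
              stuck_counter = 0

          if stuck_counter >= stuck_limit:
              threshold = min_open + alpha * (max_open - min_open)
              stuck_counter = 0
          return is_open, threshold, stuck_counter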
  • Patent number: 11049232
    Abstract: Embodiments of the present application provide an image fusion apparatus and an image fusion method. The image fusion apparatus includes: a light acquisition device, an image processor, and an image sensor having four types of photosensitive channels. The four types of photosensitive channels include red, green and blue RGB channels and an infrared IR channel. The light acquisition device is configured to filter a spectrum component of a first predetermined wavelength range from incident light to obtain target light. The first predetermined wavelength range is a spectrum wavelength range in which a difference between the responsivities of the RGB channels and the responsivity of the IR channel in the image sensor in the infrared band is higher than a first predetermined threshold. The image sensor is configured to convert the target light into an image signal through the RGB channels and the IR channel.
    Type: Grant
    Filed: January 25, 2018
    Date of Patent: June 29, 2021
    Assignee: Hangzhou Hikvision Digital Technology Co., Ltd.
    Inventors: Meng Fan, Hai Yu, Shiliang Pu
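    Example: A tiny worked example of the wavelength-range criterion in the abstract: the range to be filtered out consists of the infrared-band wavelengths where the RGB-versus-IR responsivity difference exceeds the threshold. Averaging the R, G and B responsivities and the 700 nm band edge are assumptions.
      import numpy as np

      def wavelengths_to_filter(wavelengths_nm, rgb_resp, ir_resp,
                                threshold, ir_band_start=700.0):
          """Wavelengths the light acquisition device should filter from the
          incident light: infrared-band samples where the RGB-vs-IR responsivity
          difference exceeds the first predetermined threshold.

          rgb_resp: mean responsivity of the R, G and B channels per wavelength
          ir_resp:  responsivity of the IR channel per wavelength
          """
          in_ir_band = wavelengths_nm >= ir_band_start
          exceeds = np.abs(rgb_resp - ir_resp) > threshold
          return wavelengths_nm[in_ir_band & exceeds]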