Patents Examined by Jayesh Patel
  • Patent number: 11270167
    Abstract: A hyperspectral analysis computer device is provided. The hyperspectral analysis computer device includes at least one processor in communication with at least one memory device. The hyperspectral analysis computer device is configured to store a plurality of spectral analysis data, receive at least one background item and at least one item to be detected from a user, generate one or more spectral bands for analysis based on the at least one background item, the at least one item to be detected, and the stored plurality of spectral analysis data, receive one or more mission parameters from the user, and determine a probability of success based on the one or more mission parameters and the generated one or more spectral bands.
    Type: Grant
    Filed: May 6, 2020
    Date of Patent: March 8, 2022
    Assignee: THE BOEING COMPANY
    Inventors: William Thomas Garrison, Mark A. Rivera, David Roderick Gerwe, Todd Harding Tomkinson
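    A minimal sketch of the band-selection idea in the abstract above, assuming library reflectance spectra are available per band for the background item and the item to be detected; the wavelengths, noise level, and the logistic mapping to a probability of success are illustrative assumptions, not the patented method:
```python
import numpy as np

def select_bands(background_spectrum, target_spectrum, wavelengths, k=3):
    """Rank spectral bands by background/target contrast and return the top-k.

    background_spectrum, target_spectrum: reflectance per band (same length).
    wavelengths: center wavelength of each band, in nm.
    """
    contrast = np.abs(np.asarray(target_spectrum) - np.asarray(background_spectrum))
    order = np.argsort(contrast)[::-1][:k]          # most separable bands first
    return [(float(wavelengths[i]), float(contrast[i])) for i in order]

def probability_of_success(selected_bands, noise_level=0.05):
    """Crude proxy: treat summed band contrast as a signal-to-noise figure
    and squash it into (0, 1) with a logistic function."""
    snr = sum(c for _, c in selected_bands) / noise_level
    return 1.0 / (1.0 + np.exp(-0.5 * (snr - 5.0)))

# Example: 5 bands, a vegetation-like background and a painted-metal-like target.
wl = [450, 550, 650, 850, 1650]
background = [0.05, 0.12, 0.08, 0.45, 0.30]
target     = [0.20, 0.22, 0.25, 0.30, 0.55]
bands = select_bands(background, target, wl, k=2)
print(bands, probability_of_success(bands))
```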
  • Patent number: 11263445
    Abstract: A method, an apparatus and a system for human body tracking processing, where an apparatus for video collection processing in the system has a built-in intelligent chip. Before uploading video data to a cloud server, the intelligent chip pre-processes the video data, retains a key image frame, and performs human body detection and tracking processing on the key image frame by using a human body detection and tracking algorithm to acquire a first human body detection tracking result. Afterwards, the intelligent chip sends the first human body detection tracking result to the cloud server, so that the cloud server performs human body re-identification algorithm processing and/or three-dimensional reconstruction algorithm processing on the first human body detection tracking result to acquire a second human body detection tracking result.
    Type: Grant
    Filed: July 2, 2019
    Date of Patent: March 1, 2022
    Assignee: BAIDU ONLINE NETWORK TECHNOLOGY (BEIJING) CO., LTD.
    Inventors: Zeyu Liu, Le Kang, Chengyue Zhang, Zhizhen Chi, Jian Wang, Xubin Li, Xiao Liu, Hao Sun, Shilei Wen, Errui Ding, Hongwu Zhang, Mingyu Chen, Yingze Bao
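    A rough sketch of the on-chip pre-processing stage described above, assuming key frames are retained by simple frame differencing; the difference threshold and the placeholder detect_and_track function are illustrative assumptions, not the chip's actual algorithm:
```python
import numpy as np

def select_key_frames(frames, diff_threshold=12.0):
    """Retain a frame only when it differs enough from the last retained frame
    (mean absolute pixel difference), a simple stand-in for the on-chip
    pre-processing that keeps key image frames."""
    key_frames, last = [], None
    for idx, frame in enumerate(frames):
        if last is None or np.abs(frame.astype(np.float32) - last).mean() > diff_threshold:
            key_frames.append((idx, frame))
            last = frame.astype(np.float32)
    return key_frames

def detect_and_track(key_frames):
    """Placeholder for the on-chip human body detection/tracking step:
    here it just returns a dummy box per key frame."""
    return [{"frame": idx, "track_id": 0, "box": (0, 0, 10, 10)} for idx, _ in key_frames]

# A mostly static scene with a change halfway through: only two key frames survive,
# and only their detection results would be uploaded to the cloud server.
frames = [np.zeros((64, 64), dtype=np.uint8) for _ in range(10)]
for f in frames[5:]:
    f[20:40, 20:40] = 255          # a person-sized bright region appears
results = detect_and_track(select_key_frames(frames))
print(len(results), "key-frame results to upload")
```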
  • Patent number: 11263447
    Abstract: An information processing method includes: acquiring an image to be processed; determining object content of the image to be processed based on a recognition result of the image to be processed; and processing the image to be processed according to the object content to obtain a processed file, including an editable file such as an Office file. In this way, the user can determine the processing to be performed on the image according to its recognition result and obtain the processed file without specifying the processing in advance, which makes recognizing the image more convenient and improves the user experience.
    Type: Grant
    Filed: June 23, 2020
    Date of Patent: March 1, 2022
    Assignee: BEIJING XIAOMI MOBILE SOFTWARE CO., LTD.
    Inventors: Keyuan Li, Ming Liu
  • Patent number: 11256914
    Abstract: System and method for detecting the authenticity of products by detecting a unique chaotic signature. Photos of the products are taken at the plant and stored in a database/server. The server processes the images to detect, for each authentic product, a unique authentic signature that is the result of a manufacturing process, a process of nature, etc. To determine whether the product is genuine at the store, the user/buyer may take a picture of the product and send it to the server (e.g. using an app installed on a portable device or the like). Upon receipt of the photo, the server may process the received image in search of a pre-detected and/or pre-stored chaotic signature associated with an authentic product. The server may return a response to the user indicating the result of the search. A feedback mechanism may be included to guide the user to take a picture at a specific location of the product where the chaotic signature may exist.
    Type: Grant
    Filed: November 5, 2019
    Date of Patent: February 22, 2022
    Inventor: Guy Le Henaff
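    A minimal sketch of one way such signature matching could work, assuming the stored signature and the buyer's photo are aligned grayscale patches compared by normalized cross-correlation; the threshold and patch handling are illustrative assumptions:
```python
import numpy as np

def ncc(a, b):
    """Normalized cross-correlation between two equally sized patches."""
    a = a.astype(np.float64).ravel(); b = b.astype(np.float64).ravel()
    a -= a.mean(); b -= b.mean()
    denom = np.linalg.norm(a) * np.linalg.norm(b)
    return float(a @ b / denom) if denom > 0 else 0.0

def verify_product(query_patch, stored_signatures, threshold=0.85):
    """Compare the buyer's photo patch against pre-stored signature patches
    and report whether any match closely enough to call the product genuine."""
    scores = [ncc(query_patch, s) for s in stored_signatures]
    best = max(scores) if scores else 0.0
    return best >= threshold, best

rng = np.random.default_rng(1)
signature = rng.random((32, 32))                                 # captured at the plant
noisy_photo = signature + 0.05 * rng.standard_normal((32, 32))   # buyer's photo
print(verify_product(noisy_photo, [signature]))
```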
  • Patent number: 11238270
    Abstract: The present application provides an identity authentication method and an apparatus. The method may include obtaining a sequence of depth images containing a target face and a sequence of original two-dimensional (2D) images containing the target face, and performing identity authentication. The identity authentication may be conducted by: calculating a target face three-dimensional (3D) texture image according to the depth images containing the target face and the original 2D images containing the target face; projecting the target face 3D texture image to a 2D plane to obtain a target face 2D image; extracting feature information from the target face 2D image; comparing the feature information of the target face 2D image with feature information of a reference face 2D image to determine a similarity value; and in response to the similarity value exceeding a first threshold, determining that the identity authentication succeeds.
    Type: Grant
    Filed: April 24, 2020
    Date of Patent: February 1, 2022
    Assignee: Orbbec Inc.
    Inventors: Zhenzhong Xiao, Yuanhao Huang, Xu Chen
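    A small sketch of the final comparison step described above, assuming the extracted features are fixed-length vectors compared by cosine similarity; the feature dimension and the first threshold value are illustrative assumptions:
```python
import numpy as np

def cosine_similarity(a, b):
    a = np.asarray(a, dtype=np.float64); b = np.asarray(b, dtype=np.float64)
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def authenticate(target_features, reference_features, first_threshold=0.8):
    """Compare features extracted from the projected target-face 2D image with
    features of the reference face; succeed when similarity exceeds the threshold."""
    similarity = cosine_similarity(target_features, reference_features)
    return similarity > first_threshold, similarity

# Toy 128-dimensional feature vectors standing in for the extracted descriptors.
rng = np.random.default_rng(2)
reference = rng.standard_normal(128)
same_person = reference + 0.2 * rng.standard_normal(128)
print(authenticate(same_person, reference))
```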
  • Patent number: 11232589
    Abstract: An object recognition device includes: a data holding unit that stores a reference image of an object of a recognition candidate, each feature point in the reference image, and a feature quantity at each feature point; an image acquisition unit that acquires a scene image, which is an image of a recognition processing target; a definition calculation unit that detects definition, indicating the degree of sharpness, in each region of the scene image; and a feature acquisition unit and a matching calculation unit that detect feature points in the scene image and match them against the reference feature points. The matching calculation unit uses different feature point extraction methods in a first region of the scene image, where the definition falls within a first range, and in a second region of the scene image, where the definition falls within a second range lower than the first range.
    Type: Grant
    Filed: December 19, 2019
    Date of Patent: January 25, 2022
    Assignee: HITACHI, LTD.
    Inventors: Taiki Yano, Nobutaka Kimura
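    A toy sketch of the definition (sharpness) map and the per-region choice of extraction method, assuming definition is approximated by per-tile gradient variance; the tile size, threshold, and detector names are illustrative assumptions:
```python
import numpy as np

def definition_map(image, tile=16):
    """Per-tile 'definition' score: variance of image gradients, a common
    stand-in for local sharpness."""
    gy, gx = np.gradient(image.astype(np.float64))
    grad = np.hypot(gx, gy)
    h, w = image.shape
    scores = np.zeros((h // tile, w // tile))
    for i in range(scores.shape[0]):
        for j in range(scores.shape[1]):
            scores[i, j] = grad[i*tile:(i+1)*tile, j*tile:(j+1)*tile].var()
    return scores

def extract_features(image, tile=16, sharp_threshold=50.0):
    """Choose one extraction strategy for sharp tiles and another for blurry ones."""
    scores = definition_map(image, tile)
    return np.where(scores >= sharp_threshold, "fine_corner_detector", "coarse_blob_detector")

rng = np.random.default_rng(3)
img = rng.integers(0, 255, (64, 64)).astype(np.float64)
img[:, 32:] = 128.0          # right half flat (low definition), left half textured
print(extract_features(img))  # which detector would run in each tile
```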
  • Patent number: 11227149
    Abstract: A processor-implemented liveness detection method includes: obtaining an initial image using a dual pixel sensor; obtaining a left image and a right image from the initial image; and detecting liveness of an object included in the initial image using the left image and the right image.
    Type: Grant
    Filed: December 9, 2019
    Date of Patent: January 18, 2022
    Assignee: Samsung Electronics Co., Ltd.
    Inventors: Jaejoon Han, Hana Lee, Jihye Kim, Jingtao Xu, Chang Kyu Choi, Hangkai Tan, Jiaqian Yu
  • Patent number: 11222399
    Abstract: Image cropping suggestion using multiple saliency maps is described. In one or more implementations, component scores, indicative of visual characteristics established for visually-pleasing croppings, are computed for candidate image croppings using multiple different saliency maps. The visual characteristics on which a candidate image cropping is scored may be indicative of its composition quality, an extent to which it preserves content appearing in the scene, and a simplicity of its boundary. Based on the component scores, the croppings may be ranked with regard to each of the visual characteristics. The rankings may be used to cluster the candidate croppings into groups of similar croppings, such that croppings in a group are different by less than a threshold amount and croppings in different groups are different by at least the threshold amount. Based on the clustering, croppings may then be chosen, e.g., to present them to a user for selection.
    Type: Grant
    Filed: April 15, 2019
    Date of Patent: January 11, 2022
    Assignee: Adobe Inc.
    Inventors: Zhe Lin, Radomir Mech, Xiaohui Shen, Brian L. Price, Jianming Zhang, Anant Gilra, Jen-Chan Jeff Chien
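    A minimal sketch of the three component scores described above, assuming a single saliency map and a crop given as (top, left, height, width); the exact score definitions here (saliency preservation ratio, border saliency, rule-of-thirds centroid distance) are illustrative assumptions, not Adobe's formulas:
```python
import numpy as np

def crop_scores(saliency, crop):
    """Score a candidate crop against a saliency map: content preservation,
    boundary simplicity, and a rule-of-thirds composition term."""
    t, l, h, w = crop
    window = saliency[t:t+h, l:l+w]

    preservation = window.sum() / (saliency.sum() + 1e-9)       # keep salient content

    border = np.concatenate([window[0, :], window[-1, :], window[:, 0], window[:, -1]])
    simplicity = 1.0 - border.mean()                            # avoid cutting through objects

    ys, xs = np.mgrid[0:h, 0:w]
    cy = (window * ys).sum() / (window.sum() + 1e-9) / h        # saliency centroid in [0, 1]
    cx = (window * xs).sum() / (window.sum() + 1e-9) / w
    thirds = [1/3, 2/3]
    composition = 1.0 - min(abs(cy - ty) + abs(cx - tx) for ty in thirds for tx in thirds)

    return {"preservation": preservation, "simplicity": simplicity, "composition": composition}

# A saliency map with one bright blob; compare two candidate croppings.
saliency = np.zeros((100, 100)); saliency[30:50, 60:80] = 1.0
print(crop_scores(saliency, (20, 50, 50, 50)))   # keeps the blob
print(crop_scores(saliency, (0, 0, 40, 40)))     # misses it
```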
  • Patent number: 11216705
    Abstract: Presented herein are systems and methods for increasing reliability of object detection, comprising: receiving a plurality of images of one or more objects captured by imaging sensor(s), receiving an object classification coupled with a first probability score from machine learning model(s) trained to detect the object(s) and applied to the image(s), computing a second probability score for classification of the object(s) according to physical attribute(s) of the object(s) estimated by analyzing the image(s), computing a third probability score for classification of the object(s) according to a movement pattern of the object(s) estimated by analyzing at least some consecutive images, computing an aggregated probability score aggregating the first, second and third probability scores, and outputting, in case the aggregated probability score exceeds a certain threshold, the classification of each object coupled with the aggregated probability score for use by object detection based system(s).
    Type: Grant
    Filed: July 20, 2020
    Date of Patent: January 4, 2022
    Assignee: Anyvision Interactive Technologies Ltd.
    Inventors: Ishay Sivan, Ailon Etshtein, Alexander Zilberman, Neil Martin Robertson, Sankha Subhra Mukherjee, Rolf Hugh Baxter
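    A small sketch of the score-aggregation step described above, assuming a fixed weighted average of the three scores; the weights and threshold are illustrative assumptions:
```python
def aggregate_scores(classifier_score, attribute_score, movement_score,
                     weights=(0.5, 0.25, 0.25), threshold=0.6):
    """Weighted aggregation of the three per-object classification scores;
    the classification is emitted only when the aggregate clears the threshold."""
    aggregated = (weights[0] * classifier_score
                  + weights[1] * attribute_score
                  + weights[2] * movement_score)
    return aggregated if aggregated > threshold else None

# Model fairly confident, physical size plausible, motion looks typical.
print(aggregate_scores(0.70, 0.80, 0.65))   # ~0.71, classification emitted
print(aggregate_scores(0.70, 0.20, 0.10))   # suppressed (None)
```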
  • Patent number: 11207161
    Abstract: Disclosed are a method, a user interface, a computer program product and a system for predicting future development of a dental condition of a patient's set of teeth, wherein the method comprises: obtaining two or more digital 3D representations of the teeth recorded at different points in time; deriving, based on the obtained digital 3D representations, a formula expressing the development of the dental condition in terms of at least one parameter as a function of time; and predicting the future development of the condition based on the derived formula.
    Type: Grant
    Filed: May 29, 2017
    Date of Patent: December 28, 2021
    Assignee: 3SHAPE A/S
    Inventor: Henrik John Brandt
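    A minimal sketch of deriving a formula for the condition as a function of time, assuming the parameter extracted from each 3D scan is a single number fitted with a polynomial; the measurements and polynomial degree are illustrative assumptions:
```python
import numpy as np

def fit_condition_trend(times_months, parameter_values, degree=1):
    """Fit a simple polynomial expressing the dental-condition parameter
    (e.g. measured gum recession in mm) as a function of time."""
    return np.poly1d(np.polyfit(times_months, parameter_values, degree))

# Parameter measured from three scans taken 0, 6 and 12 months apart.
trend = fit_condition_trend([0, 6, 12], [0.8, 1.1, 1.5])
print(trend(24))   # predicted value two years after the first scan
```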
  • Patent number: 11195018
    Abstract: Disclosed are augmented reality (AR) personalization systems to enable a user to edit and personalize presentations of real-world typography in real-time. The AR personalization system captures an image depicting a physical location via a camera coupled to a client device. For example, the client device may include a mobile device that includes a camera configured to record and display images (e.g., photos, videos) in real-time. The AR personalization system causes display of the image at the client device, and scans the image to detect occurrences of typography within the image (e.g., signs, billboards, posters, graffiti).
    Type: Grant
    Filed: June 6, 2019
    Date of Patent: December 7, 2021
    Assignee: Snap Inc.
    Inventors: Piers Cowburn, Qi Pan, Eitan Pilipski
  • Patent number: 11197073
    Abstract: The present invention relates to an advertisement detection system based on fingerprints, and provides an advertisement detection system based on fingerprints including a content stream storage unit for storing broadcast content in real time, a section selection unit for selecting a reference section and a test section from broadcast content stored by the content stream storage unit, a fingerprint extraction unit for extracting fingerprints from the reference section and the test section selected by the section selection unit using one or more methods, a fingerprint matching unit for comparing the fingerprints from the test section and the reference section, extracted by the fingerprint extraction unit, with each other and then performing matching between the fingerprints, and an advertisement section determination unit for determining advertisement segments from the test section based on results of the matching performed by the fingerprint matching unit.
    Type: Grant
    Filed: March 9, 2020
    Date of Patent: December 7, 2021
    Assignee: Enswers Co., Ltd.
    Inventors: Hoon-Young Cho, Jaehyung Lee
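    A toy sketch of the fingerprint-matching step described above, assuming binary per-frame fingerprints compared by the fraction of matching bits; the fingerprint length and match threshold are illustrative assumptions:
```python
import numpy as np

def hamming_similarity(fp_a, fp_b):
    """Similarity between two binary fingerprints (fraction of matching bits)."""
    fp_a = np.asarray(fp_a, dtype=np.uint8); fp_b = np.asarray(fp_b, dtype=np.uint8)
    return float((fp_a == fp_b).mean())

def find_ad_segments(reference_fps, test_fps, match_threshold=0.9):
    """Mark test-section frames whose fingerprint matches any reference
    (known advertisement) fingerprint closely enough."""
    segments = []
    for i, test_fp in enumerate(test_fps):
        if any(hamming_similarity(test_fp, ref) >= match_threshold for ref in reference_fps):
            segments.append(i)
    return segments

rng = np.random.default_rng(4)
ad_fp = rng.integers(0, 2, 64)                 # fingerprint of a known ad frame
test = [rng.integers(0, 2, 64) for _ in range(5)] + [ad_fp.copy()]
print(find_ad_segments([ad_fp], test))         # -> [5]
```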
  • Patent number: 11182643
    Abstract: A user's collection of images may be analyzed to identify people's faces within the images, then create clusters of similar faces, where each of the clusters may represent a person. The clusters may be ranked in order of size to determine a relative importance of the associated person to the user. The ranking may be used in many social networking applications to filter and present content that may be of interest to the user. In one use scenario, the clusters may be used to identify images from a second user's image collection, where the identified images may be pertinent or interesting to the first user. The ranking may also be a function of user interactions with the images, as well as other input not related to the images. The ranking may be incrementally updated when new images are added to the user's collection.
    Type: Grant
    Filed: April 1, 2019
    Date of Patent: November 23, 2021
    Assignee: Microsoft Technology Licensing, LLC
    Inventors: Eyal Krupka, Igor Abramovski, Igor Kviatkovsky
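    A small sketch of clustering similar faces and ranking the clusters by size, assuming faces are represented by embedding vectors and clustered greedily by cosine similarity to cluster centroids; the threshold and the greedy scheme are illustrative assumptions:
```python
import numpy as np

def cluster_faces(embeddings, similarity_threshold=0.8):
    """Greedy clustering: each face joins the first cluster whose centroid it is
    similar enough to, otherwise it starts a new cluster."""
    clusters = []                                # each cluster: list of embeddings
    for e in embeddings:
        e = e / np.linalg.norm(e)
        for c in clusters:
            centroid = np.mean(c, axis=0)
            centroid = centroid / np.linalg.norm(centroid)
            if float(e @ centroid) >= similarity_threshold:
                c.append(e)
                break
        else:
            clusters.append([e])
    return clusters

def rank_people(clusters):
    """Rank clusters (people) by size, the abstract's proxy for importance."""
    return sorted(range(len(clusters)), key=lambda i: len(clusters[i]), reverse=True)

rng = np.random.default_rng(5)
person_a, person_b = rng.standard_normal(64), rng.standard_normal(64)
faces = ([person_a + 0.1 * rng.standard_normal(64) for _ in range(5)]
         + [person_b + 0.1 * rng.standard_normal(64) for _ in range(2)])
clusters = cluster_faces(faces)
print([len(c) for c in clusters], rank_people(clusters))
```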
  • Patent number: 11182635
    Abstract: A personal information separation unit separates a document image containing personal information into a personal information image containing the personal information and a general information image that does not contain the personal information on the basis of the document image, and transmits the general information image to a cloud server. A recognition result integration unit receives a general recognition result that is the recognition result of the character recognition processing for the general information image from the cloud server, and acquires a target recognition result that is the recognition result of the character recognition processing for the document image in accordance with the general recognition result and the information based on the personal information image.
    Type: Grant
    Filed: March 9, 2020
    Date of Patent: November 23, 2021
    Assignee: HITACHI, LTD.
    Inventors: Ryosuke Odate, Hiroshi Shinjo, Yuuichi Nonaka
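    A minimal sketch of separating a document image into a general-information image (sent to the cloud) and a personal-information image (kept local), then merging the recognition results, assuming the personal regions are given as bounding boxes and results are keyed dictionaries; both are illustrative assumptions:
```python
import numpy as np

def separate_personal_information(document, personal_boxes):
    """Blank out personal regions in the copy sent to the cloud OCR service,
    and keep only those regions in the locally processed image."""
    general = document.copy()
    personal = np.full_like(document, 255)
    for top, left, height, width in personal_boxes:
        personal[top:top+height, left:left+width] = document[top:top+height, left:left+width]
        general[top:top+height, left:left+width] = 255       # white out personal fields
    return general, personal

def integrate_results(general_recognition, personal_recognition):
    """Merge the cloud result for the general image with the locally obtained
    result for the personal image into the final recognition result."""
    merged = dict(general_recognition)
    merged.update(personal_recognition)
    return merged

doc = np.zeros((100, 200), dtype=np.uint8)
general_img, personal_img = separate_personal_information(doc, [(10, 10, 20, 80)])
print(integrate_results({"total": "42.00"}, {"name": "recognized on-premises"}))
```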
  • Patent number: 11176404
    Abstract: An embodiment of this application provides an image object detection method. The method may include obtaining a detection image, an n-level deep feature map framework, and an m-level non-deep feature map framework. The method may further include extracting a deep feature from an (i−1)-level feature of the detection image using an i-level deep feature map framework, to obtain an i-level feature of the detection image. The method may further include extracting a non-deep feature from a (j−1+n)-level feature of the detection image using a j-level non-deep feature map framework, to obtain a (j+n)-level feature of the detection image. The method may further include performing an information regression operation on an a-level feature to an (m+n)-level feature of the detection image, to obtain object type information and object position information of an object in the detection image. Here, a is an integer less than n and greater than or equal to 2.
    Type: Grant
    Filed: August 31, 2020
    Date of Patent: November 16, 2021
    Assignee: Tencent Technology (Shenzhen) Company Limited
    Inventors: Shijie Zhao, Feng Li, Xiaoxiang Zuo
  • Patent number: 11170525
    Abstract: The present application provides an autonomous vehicle based position detection method and apparatus, a device and a medium, where the method includes: identifying an obtained first visual perception image according to an underlying neural network layer in a slender convolution kernel neural network model to determine feature information of a target linear object image, and identifying the feature information of the target linear object image by using a high-level neural network layer in the slender convolution kernel neural network model to determine size information of the target linear object image; and further, matching the size information of the target linear object image with preset coordinate system map information to determine a position of the autonomous vehicle. Embodiments of the present application can accurately determine the position of the autonomous vehicle.
    Type: Grant
    Filed: July 16, 2019
    Date of Patent: November 9, 2021
    Assignee: Baidu Online Network Technology (Beijing) Co., Ltd.
    Inventors: Jiajia Chen, Ji Wan, Tian Xia
  • Patent number: 11170487
    Abstract: An adhered substance detection apparatus includes a controller configured to function as a determination part, an extractor, and a detector. The determination part determines a representative edge direction using a predetermined angle range as a unit for each pixel area of a plurality of pixel areas of a photographic image photographed by a photographing device, the representative edge direction being determined for each of the pixel areas based on an edge angle of each pixel included in the pixel area. The extractor extracts an array pattern in which a plurality of the pixel areas having a same representative edge direction are continuously arranged along a predetermined scanning direction based on the representative edge directions of the pixel areas determined by the determination part. The detector detects whether an adhered substance area exists on a lens of the photographing device based on the array pattern extracted by the extractor.
    Type: Grant
    Filed: October 31, 2019
    Date of Patent: November 9, 2021
    Assignee: DENSO TEN Limited
    Inventors: Nobuhisa Ikeda, Nobunori Asayama, Takashi Kono, Yasushi Tani, Daisuke Yamamoto, Tomokazu Oki, Teruhiko Kamibayashi
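    A toy sketch of determining a representative edge direction per pixel area and measuring runs of equal directions along a scanning row, assuming directions are quantized gradient angles; the cell size, number of angle bins, and run-length criterion are illustrative assumptions:
```python
import numpy as np

def representative_directions(image, cell=8, bins=8):
    """Quantize the dominant gradient direction of each cell into one of `bins`
    angle ranges, mirroring the per-pixel-area representative edge direction."""
    gy, gx = np.gradient(image.astype(np.float64))
    angle = (np.degrees(np.arctan2(gy, gx)) + 360.0) % 360.0
    h, w = image.shape
    reps = np.zeros((h // cell, w // cell), dtype=int)
    for i in range(reps.shape[0]):
        for j in range(reps.shape[1]):
            block = angle[i*cell:(i+1)*cell, j*cell:(j+1)*cell]
            hist, _ = np.histogram(block, bins=bins, range=(0.0, 360.0))
            reps[i, j] = int(hist.argmax())
    return reps

def longest_same_direction_run(reps, row):
    """Length of the longest run of cells sharing a representative direction along
    one scanning row; long runs hint at an adhered substance (e.g. a smear) on the lens."""
    best = run = 1
    for a, b in zip(reps[row, :-1], reps[row, 1:]):
        run = run + 1 if a == b else 1
        best = max(best, run)
    return best

rng = np.random.default_rng(6)
img = rng.integers(0, 255, (64, 64)).astype(np.float64)
reps = representative_directions(img)
print(longest_same_direction_run(reps, row=0))
```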
  • Patent number: 11163822
    Abstract: Provided are techniques for enhancing images with emotion information, comprising capturing a plurality of images; identifying an individual in the plurality of images; analyzing the plurality of images for emotional content; converting the emotional content into emotion metadata; correlating the emotional content with the individual to produce associated emotion metadata; and storing the associated emotion metadata in conjunction with the captured image in a computer-readable storage medium. The disclosed techniques may also include capturing physiological data corresponding to an individual that captures the image; analyzing the physiological data for a second emotional content; converting the second emotional content into a second emotion metadata; storing the second emotion metadata in conjunction with the captured image in the computer-readable storage medium.
    Type: Grant
    Filed: July 30, 2019
    Date of Patent: November 2, 2021
    Assignee: International Business Machines Corporation
    Inventors: Ilse M. Breedvelt-Schouten, Jana H. Jenkins, Jeffrey A. Kusnitz, John A. Lyons
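    A small sketch of storing emotion metadata alongside a captured image, for both the people in the image and the photographer, using simple data classes; the field names and structure are illustrative assumptions, not the patented schema:
```python
from dataclasses import dataclass, field
from typing import Dict, List

@dataclass
class EmotionMetadata:
    """Emotion metadata associated with one identified individual in an image."""
    individual_id: str
    emotions: Dict[str, float]          # e.g. {"joy": 0.8, "surprise": 0.1}

@dataclass
class AnnotatedImage:
    path: str
    subject_emotions: List[EmotionMetadata] = field(default_factory=list)
    photographer_emotions: Dict[str, float] = field(default_factory=dict)

    def add_subject(self, individual_id: str, emotions: Dict[str, float]) -> None:
        self.subject_emotions.append(EmotionMetadata(individual_id, emotions))

# One captured image: emotion content for the person in it, plus the second
# emotion metadata derived from the photographer's physiological data.
img = AnnotatedImage("birthday_007.jpg")
img.add_subject("person_12", {"joy": 0.82, "surprise": 0.11})
img.photographer_emotions = {"excitement": 0.64}
print(img)
```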
  • Patent number: 11157549
    Abstract: Provided are techniques for enhancing images with emotion information, comprising capturing a plurality of images; identifying an individual in the plurality of images; analyzing the plurality of images for emotional content; converting the emotional content into emotion metadata; correlating the emotional content with the individual to produce associated emotion metadata; and storing the associated emotion metadata in conjunction with the captured image in a computer-readable storage medium. The disclosed techniques may also include capturing physiological data corresponding to an individual that captures the image; analyzing the physiological data for a second emotional content; converting the second emotional content into a second emotion metadata; storing the second emotion metadata in conjunction with the captured image in the computer-readable storage medium.
    Type: Grant
    Filed: March 6, 2019
    Date of Patent: October 26, 2021
    Assignee: International Business Machines Corporation
    Inventors: Ilse M. Breedvelt-Schouten, Jana H. Jenkins, Jeffrey A. Kusnitz, John A. Lyons
  • Patent number: 11158077
    Abstract: A disparity estimation method, an electronic device, and a computer-readable storage medium are provided. The disparity estimation method includes: performing feature extraction on each image in an image pair; and performing cascaded multi-stage disparity processing according to the extracted image features to obtain multiple disparity maps with increasing sizes. The input of a first stage disparity processing in the multi-stage disparity processing includes multiple image features each having a size corresponding to the first stage disparity processing, and the input of disparity processing of each stage other than the first stage disparity processing in the multi-stage disparity processing includes: one or more image features each having a size corresponding to disparity processing of the stage and a disparity map generated by disparity processing of an immediate previous stage.
    Type: Grant
    Filed: December 18, 2020
    Date of Patent: October 26, 2021
    Assignee: Nextvpu (Shanghai) Co., Ltd.
    Inventors: Shu Fang, Ji Zhou, Xinpeng Feng
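    A rough sketch of a cascaded coarse-to-fine disparity pipeline in the spirit of the abstract above, assuming raw image intensities stand in for the extracted features and each stage searches a small residual around the upsampled previous-stage disparity; the pyramid depth, search range, and matching cost are illustrative assumptions:
```python
import numpy as np

def upsample2x(d):
    """Nearest-neighbour 2x upsampling; disparity values double with image width."""
    return np.repeat(np.repeat(d, 2, axis=0), 2, axis=1) * 2.0

def stage_disparity(left, right, init=None, search=2):
    """One cascade stage: refine (or initialise) per-pixel disparity by picking,
    within a small window around the initial value, the horizontal shift with the
    smallest absolute intensity difference."""
    h, w = left.shape
    init = np.zeros((h, w)) if init is None else init
    out = np.zeros((h, w))
    for y in range(h):
        for x in range(w):
            best_cost, best_d = np.inf, init[y, x]
            for d in range(int(init[y, x]) - search, int(init[y, x]) + search + 1):
                xr = x - d
                if 0 <= xr < w:
                    cost = abs(float(left[y, x]) - float(right[y, xr]))
                    if cost < best_cost:
                        best_cost, best_d = cost, d
            out[y, x] = best_d
    return out

def cascaded_disparity(left, right, stages=3):
    """Build image pyramids and run the stages from coarse to fine; each stage's
    input includes the upsampled disparity map from the previous stage."""
    pyr = [(left, right)]
    for _ in range(stages - 1):
        l, r = pyr[-1]
        pyr.append((l[::2, ::2], r[::2, ::2]))
    disparity = None
    for l, r in reversed(pyr):                    # coarsest first, sizes increase
        disparity = stage_disparity(l, r, None if disparity is None else upsample2x(disparity))
    return disparity

rng = np.random.default_rng(7)
left = rng.random((16, 32))
right = np.roll(left, -4, axis=1)                 # true disparity of 4 pixels
print(np.round(cascaded_disparity(left, right)).mean())
```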