Patents Examined by Kevin Ky
  • Patent number: 10832654
    Abstract: Techniques (300, 400, 500) and apparatuses (100, 200, 700) for recognizing accented speech are described. In some embodiments, an accent module recognizes accented speech using an accent library based on device data, uses different speech recognition correction levels based on an application field into which recognized words are set to be provided, or updates an accent library based on corrections made to incorrectly recognized speech.
    Type: Grant
    Filed: May 21, 2019
    Date of Patent: November 10, 2020
    Assignee: Google Technology Holdings LLC
    Inventor: Kristin A. Gray
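    The abstract above describes three behaviors: selecting an accent library from device data, varying the correction level by application field, and updating the library from user corrections. A minimal Python sketch of that control flow follows; the AccentModule class, the locale-keyed library format, and the repeat-count threshold are illustrative assumptions, not the patented implementation.

```python
from collections import defaultdict

class AccentModule:
    """Illustrative accent-library selection and update logic (not the patented method)."""

    def __init__(self, libraries):
        # libraries: {accent/locale name -> {heard form: corrected word}}
        self.libraries = libraries
        self.correction_counts = defaultdict(int)

    def pick_library(self, device_data):
        # Choose an accent library from coarse device signals; locale is an assumed signal.
        return self.libraries.get(device_data.get("locale", "en-US"), {})

    def recognize(self, tokens, device_data, strict_field=False):
        # Apply a lighter correction level for sensitive application fields by
        # skipping substitutions entirely (a stand-in for the patent's levels).
        if strict_field:
            return list(tokens)
        library = self.pick_library(device_data)
        return [library.get(t, t) for t in tokens]

    def learn_correction(self, device_data, heard, corrected, threshold=3):
        # Fold a user correction back into the library once it has repeated enough times.
        key = (device_data.get("locale", "en-US"), heard, corrected)
        self.correction_counts[key] += 1
        if self.correction_counts[key] >= threshold:
            self.libraries.setdefault(key[0], {})[heard] = corrected
```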
  • Patent number: 10824891
    Abstract: A method of recognizing a biological feature is provided. In an example, the method includes: first biological feature data is obtained; a first recognition operation is performed according to the first biological feature data and biological feature template data to obtain a first recognition result; when the first recognition result indicates a match failure, second biological feature data is obtained; and a re-recognition operation is performed according to the second biological feature data and the biological feature template data to obtain a second recognition result. The second biological feature data and the first biological feature data are collected by a same biological feature collector at different moments in a same biological feature recognition process.
    Type: Grant
    Filed: May 25, 2018
    Date of Patent: November 3, 2020
    Assignee: Beijing Xiaomi Mobile Software Co., Ltd.
    Inventors: Xuebin Huang, Chuanshun Ji
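    The retry flow in the abstract above (match against the template, and on failure re-capture from the same sensor within the same recognition session) can be sketched in a few lines of Python; the MatchResult type, the toy similarity score, and the threshold are assumptions for illustration.

```python
from dataclasses import dataclass
from typing import Callable, Sequence

@dataclass
class MatchResult:
    matched: bool
    score: float

def recognize(capture: Callable[[], Sequence[float]],
              template: Sequence[float],
              threshold: float = 0.9,
              max_attempts: int = 2) -> MatchResult:
    """If the first comparison against the stored template fails, capture a
    second sample from the same collector at a later moment in the same
    recognition process and compare again."""
    result = MatchResult(False, 0.0)
    for _ in range(max_attempts):
        sample = capture()
        # Toy similarity: 1 - mean absolute difference (stand-in for a real matcher).
        score = 1.0 - sum(abs(a - b) for a, b in zip(sample, template)) / len(template)
        result = MatchResult(score >= threshold, score)
        if result.matched:
            break
    return result
```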
  • Patent number: 10824909
    Abstract: Systems, methods, and other embodiments described herein relate to conditionally generating custom images by sampling latent space of a generator. In one embodiment, a method includes, in response to receiving a request to generate a custom image, generating a component instruction by translating a description about requested characteristics for the object instance into a vector that identifies a portion of a latent space within a respective generator. The method includes computing the object instance by controlling the respective one of the generators according to the component instruction to produce the object instance. The respective one of the generators is configured to generate objects within a semantic object class. The method includes generating the custom image from at least the object instance to produce the custom image from the description as a photorealistic image approximating a real image corresponding to the description.
    Type: Grant
    Filed: May 15, 2018
    Date of Patent: November 3, 2020
    Assignee: Toyota Research Institute, Inc.
    Inventors: German Ros Sanchez, Adrien D. Gaidon, Kuan-Hui Lee, Jie Li
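    A rough Python sketch of the "component instruction" idea from the abstract above: a textual description is translated into a latent vector that points at a region of a class-specific generator's latent space. The attribute-to-direction table, vector dimension, and the generator callable are all assumptions; in the patented system the mapping is learned rather than hand-built.

```python
import numpy as np

rng = np.random.default_rng(0)

# Assumed attribute-to-latent-direction table; in practice such directions
# would be learned so that latent-space regions correspond to characteristics.
DIRECTIONS = {"red": rng.normal(size=128), "truck": rng.normal(size=128)}

def component_instruction(description_tokens, dim=128):
    """Translate a description into a vector identifying a portion of latent space."""
    z = rng.normal(size=dim)                 # base sample from the generator's prior
    for tok in description_tokens:
        if tok in DIRECTIONS:
            z = z + DIRECTIONS[tok]          # nudge the sample toward the requested trait
    return z

def generate_instance(generator, description_tokens):
    """`generator` is any class-specific model mapping a latent vector to an image
    (e.g. a GAN generator for a 'vehicle' semantic class); the produced object
    instance would then be composited into the final custom image."""
    z = component_instruction(description_tokens)
    return generator(z)
```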
  • Patent number: 10824906
    Abstract: An image processing device includes: a feature detection image generating unit that generates multiple feature detection images corresponding to multiple classes by applying a convolutional neural network having the classes learned previously to an input image; a post-processing unit that generates a measurement result by performing a post-process on at least some feature detection images of the multiple feature detection images on the basis of a setting parameter; and a user interface unit that receives an input of the setting parameter while presenting a user at least one of at least some of the feature detection images which are generated by the feature detection image generating unit and the measurement result which is generated by causing the post-processing unit to perform the post-process using at least some of the feature detection images which are generated by the feature detection image generating unit.
    Type: Grant
    Filed: February 8, 2018
    Date of Patent: November 3, 2020
    Assignee: OMRON Corporation
    Inventor: Yasuyuki Ikeda
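    A small numpy sketch of the post-processing step described above: one per-class feature-detection image is binarized with a user-adjustable setting parameter and turned into a measurement result the user interface can display while the parameter is tuned. The threshold semantics and the pixel-count measurement are assumptions.

```python
import numpy as np

def postprocess(feature_maps: np.ndarray, class_index: int, threshold: float):
    """Binarize one per-class feature-detection image with a setting parameter
    and compute a simple measurement result (illustrative, not the patented one)."""
    fmap = feature_maps[class_index]                  # one H x W map per learned class
    mask = fmap >= threshold                          # the user-adjustable setting parameter
    measurement = {"pixels_detected": int(mask.sum()),
                   "coverage": float(mask.mean())}
    return mask, measurement

# A user interface would redisplay `mask` and `measurement` as the threshold
# slider moves, which is the interactive tuning loop the abstract describes.
maps = np.random.rand(3, 64, 64)                      # stand-in CNN output for 3 classes
mask, result = postprocess(maps, class_index=1, threshold=0.8)
```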
  • Patent number: 10825457
    Abstract: An information processing apparatus that detects a voice command via a microphone in order to activate the device and execute certain applications. The apparatus comprises a digital signal processor (DSP) and a host controller which are responsible for processing the voice commands. The DSP recognizes and processes voice commands intermittently while the host controller is in a sleep state, thereby reducing the overall power consumption of the apparatus. Further, when the DSP is configured to recognize only voice commands intended to activate the device, a memory with a relatively small storage capacity suffices.
    Type: Grant
    Filed: July 9, 2019
    Date of Patent: November 3, 2020
    Assignee: Sony Corporation
    Inventor: Kenji Tokutake
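    The division of labor in the abstract above (a small-vocabulary DSP listens intermittently while the host sleeps, and the host only wakes for full recognition) can be illustrated with a short control-flow sketch; the wake-word set and the phrase-stream interface are assumptions.

```python
WAKE_WORDS = {"hello device"}   # small vocabulary a low-capacity DSP memory could hold

def dsp_hears_wake_word(phrase: str) -> bool:
    """Stand-in for the DSP path: it only decides whether an activation phrase
    was spoken, so its model (and memory footprint) can stay small."""
    return phrase.strip().lower() in WAKE_WORDS

def process_audio(phrases):
    """Illustrative control flow: the host stays 'asleep' until the DSP reports
    a wake word, then handles the next phrase as a full command."""
    host_awake = False
    handled = []
    for phrase in phrases:
        if not host_awake:
            host_awake = dsp_hears_wake_word(phrase)   # host sleeps; only the DSP listens
        else:
            handled.append(phrase)                     # full recognition on the host
            host_awake = False                         # drop back to low-power listening
    return handled

print(process_audio(["music", "hello device", "play jazz"]))  # -> ['play jazz']
```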
  • Patent number: 10817717
    Abstract: The present application relates to a method and a device for parsing a table in a document image. The method comprises the following steps: inputting, into the electronic device, a document image to be parsed which includes one or more table areas; detecting, by the electronic device, a table area in the document image by using a pre-trained table detection model; detecting, by the electronic device, internal text blocks included in the table area by using a pre-trained text detection model; determining, by the electronic device, a space structure of the table; and performing text recognition on a text block in each cell according to the space structure of the table, so as to obtain editable structured data by parsing. The method and the device of the present application can be applied to various kinds of tables, such as tables with ruling lines, tables without ruling lines, and black-and-white tables.
    Type: Grant
    Filed: April 17, 2018
    Date of Patent: October 27, 2020
    Assignee: ABC FINTECH CO., LTD.
    Inventors: Zhou Yu, Yongzhi Yang, Xian Wang
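    A simplified Python sketch of the space-structure and cell-recognition steps described above: text-block boxes already detected inside a table area are clustered into row and column boundaries, and an OCR callable fills each cell. The box format, the clustering tolerance, and the recognize_text parameter are assumptions; the patented method relies on pre-trained detection models for the earlier steps.

```python
def cluster(values, tol=10):
    """Group nearby coordinates so box edges that differ by a few pixels are
    treated as the same row or column boundary."""
    values = sorted(values)
    groups = [[values[0]]]
    for v in values[1:]:
        if v - groups[-1][-1] <= tol:
            groups[-1].append(v)
        else:
            groups.append([v])
    return [sum(g) / len(g) for g in groups]

def parse_table(text_boxes, recognize_text, tol=10):
    """text_boxes are (x, y, w, h, crop) tuples already detected inside a table
    area; recognize_text is any OCR callable for a single crop (both assumed)."""
    rows = cluster([y for _, y, _, _, _ in text_boxes], tol)
    cols = cluster([x for x, _, _, _, _ in text_boxes], tol)

    def nearest(v, centers):
        return min(range(len(centers)), key=lambda i: abs(centers[i] - v))

    # Empty grid of editable, structured cells.
    table = [["" for _ in cols] for _ in rows]
    for x, y, w, h, crop in text_boxes:
        r, c = nearest(y, rows), nearest(x, cols)
        table[r][c] = (table[r][c] + " " + recognize_text(crop)).strip()
    return table
```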
  • Patent number: 10810727
    Abstract: The analysis apparatus (2000) includes a co-appearance event extraction unit (2020) and a frequent event detection unit (2040). The co-appearance event extraction unit (2020) extracts co-appearance events of two or more persons from each of a plurality of sub video frame sequences. The sub video frame sequence is included in a video frame sequence. The analysis apparatus (2000) may obtain the plurality of sub video frame sequences from one or more of the video frame sequences. The one or more of the video frame sequences may be generated by one or more of surveillance cameras. Each of the sub video frame sequences has a predetermined time length. The frequent event detection unit (2040) detects co-appearance events of the same persons occurring at a frequency higher than or equal to a pre-determined frequency threshold.
    Type: Grant
    Filed: April 12, 2019
    Date of Patent: October 20, 2020
    Assignee: NEC CORPORATION
    Inventors: Jianquan Liu, Ka Wai Yung
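    The two units in the abstract above map naturally onto a counting pass: extract person-pair co-appearance events per sub video frame sequence, then keep pairs whose count reaches the frequency threshold. A minimal Python sketch, where each sub-sequence is reduced to the set of person IDs seen in it (an assumption that sidesteps the tracking and re-identification the real system needs):

```python
from collections import Counter
from itertools import combinations

def frequent_co_appearances(sub_sequences, min_count):
    """Count co-appearance events of person pairs across sub video frame
    sequences and keep the pairs meeting the frequency threshold."""
    counts = Counter()
    for persons in sub_sequences:
        for pair in combinations(sorted(persons), 2):   # one event per pair per sub-sequence
            counts[pair] += 1
    return {pair: n for pair, n in counts.items() if n >= min_count}

# Example: persons A and B appear together in 3 of 4 fixed-length windows.
windows = [{"A", "B"}, {"A", "B", "C"}, {"B", "C"}, {"A", "B"}]
print(frequent_co_appearances(windows, min_count=3))    # {('A', 'B'): 3}
```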
  • Patent number: 10796187
    Abstract: The present disclosure relates to detection of texts. A text detecting method includes: acquiring a first image to be detected of a text object to be detected; determining whether the first image to be detected contains a predetermined indicator; determining, if the first image to be detected contains the predetermined indicator, a position of the predetermined indicator, and acquiring a second image to be detected of the text object to be detected; determining whether the second image to be detected contains the predetermined indicator; and determining, if the second image to be detected does not contain the predetermined indicator, a text detecting region based on the position of the predetermined indicator.
    Type: Grant
    Filed: August 28, 2019
    Date of Patent: October 6, 2020
    Assignee: NEXTVPU (SHANGHAI) CO., LTD.
    Inventors: Song Mei, Haijiao Cai, Xinpeng Feng, Ji Zhou
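    A small Python sketch of the two-frame logic above: if a predetermined indicator (for example, a fingertip) is found in the first frame, its position is remembered; if it is absent from the second frame, a text detecting region is placed relative to that position. The detector and frame-grabbing callables and the fixed region size are assumptions.

```python
def choose_text_region(detect_indicator, grab_frame, region_size=(400, 200)):
    """detect_indicator returns the (x, y) of the predetermined indicator or
    None; grab_frame returns the next camera frame (both assumed callables)."""
    first = grab_frame()
    pos = detect_indicator(first)
    if pos is None:
        return None                      # no indicator: nothing to anchor the region to
    second = grab_frame()
    if detect_indicator(second) is not None:
        return None                      # indicator still present: pointing not yet finished
    x, y = pos
    w, h = region_size
    # Text detecting region placed above the remembered indicator position.
    return (x - w // 2, y - h, w, h)
```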
  • Patent number: 10789698
    Abstract: The analysis apparatus (2000) includes a co-appearance event extraction unit (2020) and a frequent event detection unit (2040). The co-appearance event extraction unit (2020) extracts co-appearance events of two or more persons from each of a plurality of sub video frame sequences. The sub video frame sequence is included in a video frame sequence. The analysis apparatus (2000) may obtain the plurality of sub video frame sequences from one or more of the video frame sequences. The one or more of the video frame sequences may be generated by one or more of surveillance cameras. Each of the sub video frame sequences has a predetermined time length. The frequent event detection unit (2040) detects co-appearance events of the same persons occurring at a frequency higher than or equal to a pre-determined frequency threshold.
    Type: Grant
    Filed: April 12, 2019
    Date of Patent: September 29, 2020
    Assignee: NEC CORPORATION
    Inventors: Jianquan Liu, Ka Wai Yung
  • Patent number: 10778857
    Abstract: An image forming apparatus that utilizes a communication device is disclosed. A communication unit of the housing, which corresponds to an antenna portion, is provided at a position at which it can be seen without obstruction from above. A communication device such as a smartphone can therefore be brought close to or into contact with the communication unit readily. Accordingly, wireless communication between the antenna portion and the communication device can be established.
    Type: Grant
    Filed: January 4, 2018
    Date of Patent: September 15, 2020
    Assignee: Brother Kogyo Kabushiki Kaisha
    Inventors: Ryoichi Matsushima, Hirofumi Kondo, Yasuhiro Kato, Masayoshi Hayashi, Masato Sueyasu, Reiko Toyama
  • Patent number: 10776671
    Abstract: Techniques are disclosed for blur classification. The techniques utilize an image content feature map, a blur map, and an attention map, thereby combining low-level blur estimation with a high-level understanding of important image content in order to perform blur classification. The techniques allow for programmatically determining if blur exists in an image, and determining what type of blur it is (e.g., high blur, low blur, middle or neutral blur, or no blur). According to one example embodiment, if blur is detected, an estimate of spatially-varying blur amounts is performed and blur desirability is categorized in terms of image quality.
    Type: Grant
    Filed: May 25, 2018
    Date of Patent: September 15, 2020
    Assignee: Adobe Inc.
    Inventors: Zhe Lin, Xiaohui Shen, Shanghang Zhang, Radomir Mech
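    A toy numpy sketch of the combination described above: a low-level, spatially-varying blur map is weighted by an attention map so that blur on important content dominates, and the result is bucketed into blur categories. The thresholds and bucket boundaries are assumptions, not values from the patent.

```python
import numpy as np

def classify_blur(blur_map: np.ndarray, attention_map: np.ndarray):
    """Weight low-level blur estimates by high-level content importance, then
    bucket the aggregate into a blur class (illustrative thresholds)."""
    weights = attention_map / (attention_map.sum() + 1e-8)
    weighted_blur = float((blur_map * weights).sum())   # blur on content that matters
    if weighted_blur < 0.2:
        return "no blur", weighted_blur
    if weighted_blur < 0.45:
        return "low blur", weighted_blur
    if weighted_blur < 0.7:
        return "middle blur", weighted_blur
    return "high blur", weighted_blur

blur = np.random.rand(64, 64)          # stand-in spatially-varying blur amounts
attention = np.random.rand(64, 64)     # stand-in importance of image content
print(classify_blur(blur, attention))
```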
  • Patent number: 10771697
    Abstract: Techniques are disclosed for managing image capture and processing in a multi-camera imaging system. In such a system, a pair of cameras each may output a sequence of frames representing captured image data. The cameras' output may be synchronized to each other to cause synchronism in the image capture operations of the cameras. The system may assess image quality of frames output from the cameras and, based on the image quality, designate a pair of the frames to serve as a “reference frame pair.” Thus, one frame from the first camera and a paired frame from the second camera will be designated as the reference frame pair. The system may adjust each reference frame in the pair using other frames from their respective cameras. The reference frames also may be processed by other operations within the system, such as image fusion.
    Type: Grant
    Filed: September 6, 2017
    Date of Patent: September 8, 2020
    Assignee: Apple Inc.
    Inventors: Paul M. Hubel, Marius Tico, Ting Chen
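    A short numpy sketch of the reference-frame-pair selection above: synchronized frames from the two cameras are scored with a simple quality metric and the best-scoring pair is designated as the references for later adjustment and fusion. The gradient-variance sharpness score and the min-of-both-frames rule are assumptions.

```python
import numpy as np

def sharpness(frame: np.ndarray) -> float:
    """Toy quality metric: variance of a simple gradient magnitude
    (a stand-in for the system's real image-quality assessment)."""
    gy, gx = np.gradient(frame.astype(float))
    return float((gx ** 2 + gy ** 2).var())

def pick_reference_pair(frames_cam_a, frames_cam_b):
    """Frames at the same index are assumed synchronized; the pair with the
    best combined quality score becomes the reference frame pair."""
    scores = [min(sharpness(a), sharpness(b))            # both frames must be usable
              for a, b in zip(frames_cam_a, frames_cam_b)]
    best = int(np.argmax(scores))
    return best, frames_cam_a[best], frames_cam_b[best]
```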
  • Patent number: 10767990
    Abstract: Highly accurate aerial photogrammetry is performed while avoiding increase in cost. A survey data processing device includes an image data receiving part, a location data receiving part, an identification marker detecting part, an identifying part, and a location identifying part. The image data receiving part receives image data of aerial photographs of vehicles. The vehicles are equipped with GNSS location identifying units and identification markers, respectively. The location data receiving part receives location data of the vehicles that are identified by the respective GNSS location identifying units. The identification marker detecting part detects the identification markers of the vehicles from the image data. The identifying part identifies each of the vehicles in the aerial photographs. The location identifying part identifies locations of ground control points (GCPs) in the aerial photographs by using the location data of the vehicles and by using the identification information.
    Type: Grant
    Filed: March 16, 2018
    Date of Patent: September 8, 2020
    Assignee: TOPCON CORPORATION
    Inventor: You Sasaki
  • Patent number: 10769542
    Abstract: Systems and methods are provided for analyzing messages generated by a plurality of computing devices associated with a plurality of users in a messaging system to generate training data to train a machine learning model to determine a probability that a media content item was generated inside an enclosed location or outside, receiving a media content item from a computing device, analyzing the media content item using the trained machine learning model to determine a probability that the media content item was generated inside an enclosed location or outside, determining, based on the probability generated by the trained machine learning model, that the media content item was generated inside an enclosed location, and determining an inside temperature associated with the venue based on messages generated by a plurality of computing devices in a messaging system comprising media content items and temperature information for the venue or a similar venue type.
    Type: Grant
    Filed: May 25, 2018
    Date of Patent: September 8, 2020
    Assignee: Snap Inc.
    Inventors: Anup Prabhakar Dhalwani, Walton Lin, Andrew Lin, Amer Shahnawaz, Leonid Gorkin, Amber Taylor, Lillian Zheng, Eric Wood
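    A minimal Python sketch of the last steps in the abstract above: once the trained model's probability says the media item was captured inside, an inside temperature is estimated from other messages carrying temperature information for the same venue. The message field names and the simple averaging are assumptions.

```python
def estimate_inside_temperature(prob_inside, venue_id, messages, threshold=0.5):
    """If the media item is judged to have been generated inside an enclosed
    location, average temperature readings from messages for the same venue."""
    if prob_inside < threshold:
        return None                                       # treated as an outdoor capture
    temps = [m["temperature"] for m in messages
             if m.get("venue_id") == venue_id and "temperature" in m]
    return sum(temps) / len(temps) if temps else None
```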
  • Patent number: 10762300
    Abstract: Techniques to predictively respond to user requests using natural language processing are described. In one embodiment, an apparatus may comprise a client communication component operative to receive a user service request from a user client; an interaction processing component operative to submit the user service request to a memory-based natural language processing component; generate a series of user interaction exchanges with the user client based on output from the memory-based natural language processing component, wherein the series of user interaction exchanges are represented in a memory component of the memory-based natural language processing component; and receive one or more operator instructions for the performance of the user service request from the memory-based natural language processing component; and a user interface component operative to display the one or more operator instructions in an operator console. Other embodiments are described and claimed.
    Type: Grant
    Filed: December 20, 2018
    Date of Patent: September 1, 2020
    Assignee: FACEBOOK, INC.
    Inventors: Jason E Weston, Antoine Bordes, Alexandre Lebrun, Martin Jean Raison
  • Patent number: 10755723
    Abstract: Techniques are described for shared audio functionality between multiple computing devices, based on identifying computing devices in a device set. The devices may provide audio output, audio input, or both audio output and input. The devices may be organized into one or more device sets based on location, supported functions, or other criteria. The shared audio functionality may enable a voice command received at one device to be employed for controlling audio output or other operations of other device(s) in the device set. Shared audio functionality between devices may also enable synchronized audio output through using multiple devices.
    Type: Grant
    Filed: March 7, 2018
    Date of Patent: August 25, 2020
    Assignee: AMAZON TECHNOLOGIES, INC.
    Inventors: Albert M Scalise, Tony David
  • Patent number: 10748246
    Abstract: An object of the present invention is to implement a bit pattern capable of specifying a plurality of colors while storing information on a shape. The present invention is an image processing apparatus that converts image data in a bitmap format into data including a bit pattern, the apparatus including: a creation unit configured to create, based on pixel values of pixels within an image area of a predetermined size within the image data, the bit pattern storing shape information on the image area, which specifies to which of a plurality of kinds of pixel each pixel within the image area corresponds, and color information on the image area, which specifies a number of colors in accordance with a kind of pixel specified by the shape information, and the number in accordance with a kind of pixel specified by the shape information is smaller than a total number of pixels within the image area.
    Type: Grant
    Filed: November 27, 2017
    Date of Patent: August 18, 2020
    Assignee: Canon Kabushiki Kaisha
    Inventors: Yusuke Hashii, Yuta Ikeshima, Ryosuke Iguchi, Shinjiro Hori, Manabu Yamazoe, Akitoshi Yamada
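    A compact Python sketch of the bit pattern described above for the two-color case: a per-pixel shape mask records which kind of pixel each position in the image area is, and the color information stores one color per kind, so the number of stored colors is smaller than the total number of pixels. Restricting the sketch to two kinds of pixel is an assumption.

```python
def encode_tile(tile):
    """Encode one image area as shape information (a per-pixel kind mask) plus
    color information (one color per kind); illustrative two-color case."""
    pixels = [p for row in tile for p in row]
    colors = sorted(set(pixels))[:2]                 # at most two kinds of pixel
    shape_bits = 0
    for i, p in enumerate(pixels):
        if p == colors[-1]:                          # bit set -> second kind of pixel
            shape_bits |= 1 << i
    return shape_bits, colors                        # e.g. 16 shape bits + 2 colors for a 4x4 area

def decode_tile(shape_bits, colors, width, height):
    return [[colors[(shape_bits >> (y * width + x)) & 1]
             for x in range(width)] for y in range(height)]

tile = [[0, 0, 255, 255]] * 2 + [[255, 255, 0, 0]] * 2
bits, palette = encode_tile(tile)
assert decode_tile(bits, palette, 4, 4) == tile
```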
  • Patent number: 10726287
    Abstract: Disclosed are a camera and an object processing apparatus using the same. A camera according to an embodiment of the present invention focuses on moving objects by adjusting the ray distance between a lens and a sensor, either by moving a mirror between the lens and the sensor, which are each fixedly installed, or by moving one end of the sensor, without a mirror.
    Type: Grant
    Filed: April 12, 2016
    Date of Patent: July 28, 2020
    Assignee: Gachisoft Inc.
    Inventor: Hoyon Kim
  • Patent number: 10726524
    Abstract: Methods and apparatuses are disclosed for generating real-time bokeh in images. An example method may include receiving an image, partitioning the image into a plurality of tiles, and designating each of the plurality of tiles as one of a fully foreground tile, a fully background tile, and a mixed tile. Either the fully background tiles or the fully foreground tiles may be processed using a tile-based filtering operation to generate a number of replacement tiles, and the mixed tiles may be processed to generate a number of replacement mixed tiles. An output image may be generated based on the replacement tiles, the replacement mixed tiles, and either unaltered fully foreground tiles or unaltered fully background tiles.
    Type: Grant
    Filed: January 11, 2018
    Date of Patent: July 28, 2020
    Assignee: QUALCOMM Incorporated
    Inventors: Ravi Kumar Neti, Babu Chitturi, Sharath Chandra Nadipalli
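    A numpy sketch of the tile pipeline above: the image is partitioned into tiles, each tile is designated fully foreground, fully background, or mixed from a per-pixel foreground mask, background tiles get a tile-based filter, and mixed tiles are blended per pixel. The box-average stand-in for the blur filter and the exact 0/1 mask tests are assumptions.

```python
import numpy as np

def tile_bokeh(image, foreground_mask, tile=16, blur_fn=None):
    """Partition, classify, and filter tiles; foreground_mask holds per-pixel
    values in [0, 1] with 1 meaning foreground (illustrative pipeline)."""
    if blur_fn is None:
        blur_fn = lambda t: np.full_like(t, t.mean())    # crude stand-in for the real filter
    out = image.astype(float)
    h, w = image.shape
    for y in range(0, h, tile):
        for x in range(0, w, tile):
            sl = (slice(y, y + tile), slice(x, x + tile))
            fg = foreground_mask[sl].mean()
            if fg == 1.0:
                continue                                  # fully foreground: left unaltered
            elif fg == 0.0:
                out[sl] = blur_fn(image[sl])              # fully background: tile-based filter
            else:
                # Mixed tile: blend per pixel so the subject edge stays sharp.
                blurred = blur_fn(image[sl])
                out[sl] = foreground_mask[sl] * image[sl] + (1 - foreground_mask[sl]) * blurred
    return out
```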
  • Patent number: 10726508
    Abstract: In general, intelligent fuel dispensers are provided. In at least some implementations, an intelligent fuel dispenser can determine customer identities and/or other characteristics and provide customized fueling sessions based on the determined customer identities and/or other characteristics. In at least some implementations, the fuel dispenser includes a touchless interface allowing customers to complete fueling sessions with minimal physical contact with the fuel dispenser.
    Type: Grant
    Filed: March 31, 2017
    Date of Patent: July 28, 2020
    Assignee: Wayne Fueling Systems LLC
    Inventors: John Joseph Morris, Scott R. Negley, III, Annika Birkler, Richard Carlsson, Patrick Jeitler, Randal S. Kretzler