Patents Examined by Casey L Kretzer
  • Patent number: 11436451
    Abstract: The present disclosure provides a multimodal fine-grained mixing method and system, a device, and a storage medium. The method includes: extracting data features from multimodal graphic and textual data, and obtaining each composition of the data features, the data features including a visual regional feature and a text word feature; performing fine-grained classification on modal information of each composition of the data features, to obtain classification results; and performing inter-modal and intra-modal information fusion on each composition according to the classification results, to obtain a fusion feature. The method enables a multimodal model to exploit the complementary characteristics of the multimodal data without being influenced by irrelevant information.
    Type: Grant
    Filed: January 17, 2022
    Date of Patent: September 6, 2022
    Assignees: Harbin Institute of Technology (Shenzhen) (Shenzhen Institute of Science and Technology Innovation, Harbin Institute of Technology), Dongguan University of Technology
    Inventors: Qing Liao, Ye Ding, Binxing Fang, Xuan Wang
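The fusion pipeline above (component-wise classification followed by intra-modal and inter-modal fusion) can be illustrated with a small sketch. This is a hedged approximation, not the patented method: the relevance classifier, the attention-style inter-modal mixing, and all shapes are assumptions for illustration.

```python
# A hedged sketch (not the patented implementation): score each visual region
# and text word, then fuse intra- and inter-modal information with the scores
# used as soft gates so irrelevant components contribute little.
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def classify_components(features, w):
    """Fine-grained relevance scores per component (region or word)."""
    return softmax(features @ w)                # (n_components, n_classes)

def fuse(visual, text, w_v, w_t):
    # Intra-modal fusion: weight components by their top relevance score.
    v_scores = classify_components(visual, w_v).max(axis=1, keepdims=True)
    t_scores = classify_components(text, w_t).max(axis=1, keepdims=True)
    v_intra = (v_scores * visual).sum(axis=0)   # (d,)
    t_intra = (t_scores * text).sum(axis=0)     # (d,)
    # Inter-modal fusion: cross-attention of regions over words and vice versa.
    attn = softmax(visual @ text.T)             # (n_regions, n_words)
    v_inter = (attn @ text).mean(axis=0)
    t_inter = (attn.T @ visual).mean(axis=0)
    return np.concatenate([v_intra, t_intra, v_inter, t_inter])

rng = np.random.default_rng(0)
visual = rng.normal(size=(36, 64))   # 36 region features
text = rng.normal(size=(12, 64))     # 12 word features
w_v, w_t = rng.normal(size=(64, 4)), rng.normal(size=(64, 4))
print(fuse(visual, text, w_v, w_t).shape)       # (256,)
```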
  • Patent number: 11423644
    Abstract: An apparatus including a memory and a circuit. The memory may comprise three buffers. The circuit may be configured to allocate the three buffers in the memory based on a size of a full resolution feature map, receive a plurality of regions of interest ranked based on a feature map pyramid, generate a plurality of levels of the feature map pyramid starting from the full resolution feature map, and store the levels in the buffers. The circuit may store the levels that are used by at least one of the plurality of regions of interest or that have a dependent level. The generated levels may be stored in the buffers in a pattern that ensures each level is retained until it is no longer needed to create its dependent level and enables the level to be discarded once it is no longer needed.
    Type: Grant
    Filed: September 19, 2019
    Date of Patent: August 23, 2022
    Assignee: Ambarella International LP
    Inventors: Xuejiao Liang, Wei Fang
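A rough illustration of the buffer-reuse idea, under the assumption that each pyramid level is a 2x downscale of its parent and that three buffers are rotated; the actual hardware allocation pattern is not described in enough detail here to reproduce.

```python
# Illustrative sketch (not Ambarella's hardware scheme): rotate three buffers so
# a pyramid level stays resident only until its dependent (next) level has been
# generated and no remaining region of interest still references it.
import numpy as np

def build_pyramid_with_three_buffers(full_res, roi_levels, n_levels):
    buffers = [None, None, None]          # sized from the full-resolution map
    needed = set(roi_levels)              # levels some ROI actually uses
    outputs = {}
    level_map = full_res
    for level in range(n_levels):
        slot = level % 3                  # reuse pattern over three buffers
        buffers[slot] = level_map
        if level in needed:
            outputs[level] = buffers[slot]        # kept for ROI processing
        # Generate the dependent level (2x downscale) before the parent's slot
        # is recycled two iterations later.
        if level + 1 < n_levels:
            level_map = level_map[::2, ::2]
    return outputs

full = np.arange(64 * 64, dtype=np.float32).reshape(64, 64)
pyr = build_pyramid_with_three_buffers(full, roi_levels=[0, 2, 3], n_levels=5)
print({lvl: fm.shape for lvl, fm in pyr.items()})
```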
  • Patent number: 11423655
    Abstract: A computer-implemented method is provided for disentangled data generation. The method includes accessing, by a variational autoencoder, a plurality of supervision signals. The method further includes accessing, by the variational autoencoder, a plurality of auxiliary tasks that utilize the supervision signals as reward signals to learn a disentangled representation. The method also includes training the variational autoencoder to disentangle a sequential data input into a time-invariant factor and a time-varying factor using a self-supervised training approach which is based on outputs of the auxiliary tasks obtained by using the supervision signals to accomplish the plurality of auxiliary tasks.
    Type: Grant
    Filed: November 3, 2020
    Date of Patent: August 23, 2022
    Inventors: Renqiang Min, Yizhe Zhu, Asim Kadav, Hans Peter Graf
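A minimal PyTorch sketch of a sequence VAE whose latent code is split into a time-invariant factor and time-varying factors; the self-supervised auxiliary tasks and reward signals from the abstract are not reproduced, and the architecture and dimensions are assumptions.

```python
# A minimal sketch, assuming a sequence VAE that splits its latent code into a
# time-invariant factor f and per-step time-varying factors z_t; the auxiliary
# self-supervised losses described in the patent are omitted.
import torch
import torch.nn as nn

class DisentangledSeqVAE(nn.Module):
    def __init__(self, x_dim=32, f_dim=16, z_dim=8, h_dim=64):
        super().__init__()
        self.rnn = nn.GRU(x_dim, h_dim, batch_first=True)
        self.to_f = nn.Linear(h_dim, 2 * f_dim)      # static factor (mu, logvar)
        self.to_z = nn.Linear(h_dim, 2 * z_dim)      # dynamic factor per step
        self.decoder = nn.Linear(f_dim + z_dim, x_dim)

    @staticmethod
    def reparam(stats):
        mu, logvar = stats.chunk(2, dim=-1)
        return mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)

    def forward(self, x):                 # x: (batch, time, x_dim)
        h, _ = self.rnn(x)
        f = self.reparam(self.to_f(h[:, -1]))        # time-invariant factor
        z = self.reparam(self.to_z(h))               # time-varying factors
        f_rep = f.unsqueeze(1).expand(-1, x.size(1), -1)
        return self.decoder(torch.cat([f_rep, z], dim=-1))

model = DisentangledSeqVAE()
x = torch.randn(4, 10, 32)
print(model(x).shape)                     # torch.Size([4, 10, 32])
```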
  • Patent number: 11417097
    Abstract: A video annotation system for deep learning based video analytics and corresponding methods of use and operation are described that significantly improve the efficiency of video data frame labeling and the user experience. The video annotation system described herein may be deployed at a network edge and may support various intelligent annotation functionality including annotation tracking, adaptive video segmentation, and execution of predictive annotation algorithms. In addition, the video annotation system described herein supports team collaboration functionality in connection with large-scale labeling tasks.
    Type: Grant
    Filed: September 2, 2020
    Date of Patent: August 16, 2022
    Assignee: Hewlett Packard Enterprise Development LP
    Inventors: Qun Yang Lin, Jun Qing Xie, Shuai Wang, Kyu-Han Kim
  • Patent number: 11416719
    Abstract: The present disclosure provides a localization method as well as a helmet and a computer readable storage medium using the same. The method includes: extracting first feature points from a target image; obtaining inertial information of the carrier, and screening the first feature points based on the inertial information to obtain second feature points; triangulating the second feature points of the target image to generate corresponding initial three-dimensional map points, if the target image is a key frame image; performing a localization error loopback calibration on the initial three-dimensional map points according to at least a predetermined constraint condition to obtain target three-dimensional map points; and determining a positional point of the carrier according to the target three-dimensional map points. In this manner, the accuracy of localizing a dynamic object such as a moving person can be improved.
    Type: Grant
    Filed: September 3, 2020
    Date of Patent: August 16, 2022
    Assignee: UBTECH ROBOTICS CORP LTD
    Inventors: Chenchen Jiang, Zhichao Liu, Yongsheng Zhao, Yu Tang, Jianxin Pang, Youjun Xiong
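The inertial screening step can be sketched as follows: predict where each feature should reappear from the IMU-derived rotation and keep only consistent matches. This is an assumption about how the screening might work; triangulation and the loopback calibration are omitted.

```python
# Hedged sketch of the screening step only: back-project each first-frame
# feature, rotate it by the IMU-derived rotation, re-project, and keep matches
# whose observed position agrees with the prediction.
import numpy as np

def screen_features(pts_prev, pts_curr, K, R_imu, max_px_err=3.0):
    """Keep matches consistent with the inertial rotation R_imu.

    pts_prev, pts_curr: (N, 2) matched pixel coordinates in two frames.
    K: (3, 3) camera intrinsics.  R_imu: (3, 3) rotation between the frames.
    """
    ones = np.ones((pts_prev.shape[0], 1))
    rays = (np.linalg.inv(K) @ np.hstack([pts_prev, ones]).T).T   # back-project
    rays_rot = (R_imu @ rays.T).T                                 # rotate by IMU
    proj = (K @ rays_rot.T).T
    predicted = proj[:, :2] / proj[:, 2:3]                        # re-project
    err = np.linalg.norm(predicted - pts_curr, axis=1)
    return err < max_px_err                                       # boolean mask

K = np.array([[500.0, 0, 320], [0, 500.0, 240], [0, 0, 1]])
theta = np.deg2rad(2.0)
R = np.array([[np.cos(theta), -np.sin(theta), 0],
              [np.sin(theta),  np.cos(theta), 0],
              [0, 0, 1]])
pts_prev = np.array([[300.0, 220.0], [400.0, 260.0]])
pts_curr = K @ (R @ (np.linalg.inv(K) @ np.array([[300, 220, 1.0], [400, 260, 1.0]]).T))
pts_curr = (pts_curr[:2] / pts_curr[2]).T
print(screen_features(pts_prev, pts_curr, K, R))   # [ True  True]
```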
  • Patent number: 11416720
    Abstract: Disclosed are methods and systems for model-based data transformation.
    Type: Grant
    Filed: December 3, 2019
    Date of Patent: August 16, 2022
    Assignee: Teachers Insurance and Annuity Association of America
    Inventors: Edward J. Miller, Jr., Daniel R. Hursh, Alexis S. Pecoraro, Pankaj Agrawal, James G. Rauscher, John V. Hintze
  • Patent number: 11410440
    Abstract: Systems and methods for classifying and/or sorting T cells by activation state are disclosed. The system includes a cell classifying pathway, a single-cell autofluorescence image sensor, a processor, and a non-transitory computer-readable memory. The memory is accessible to the processor and has stored thereon a trained convolutional neural network and instructions. The instructions, when executed by the processor, cause the processor to: a) receive the autofluorescence intensity image; b) optionally pre-process the autofluorescence intensity image to produce an adjusted autofluorescence intensity image; c) input the autofluorescence intensity image or the adjusted autofluorescence intensity image into the trained convolutional neural network to produce an activation prediction for the T cell.
    Type: Grant
    Filed: August 13, 2020
    Date of Patent: August 9, 2022
    Assignee: Wisconsin Alumni Research Foundation
    Inventors: Melissa C. Skala, Anthony Gitter, Zijie Wang, Alexandra J. Walsh
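A toy PyTorch stand-in for steps (a)-(c): optional preprocessing of the autofluorescence intensity image followed by a small CNN producing an activation probability. The network, its size, and the normalization are assumptions, not the trained model from the patent.

```python
# A minimal sketch, assuming a small CNN stand-in for the patent's trained
# model: optionally normalize the autofluorescence intensity image, then
# predict an activation probability for the imaged T cell.
import torch
import torch.nn as nn

class ActivationCNN(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 8, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(8, 16, 3, padding=1), nn.ReLU(), nn.AdaptiveAvgPool2d(1),
        )
        self.head = nn.Linear(16, 1)

    def forward(self, x):
        return torch.sigmoid(self.head(self.features(x).flatten(1)))

def preprocess(image):
    """Optional intensity adjustment: scale to zero mean, unit variance."""
    return (image - image.mean()) / (image.std() + 1e-8)

model = ActivationCNN()
cell_image = torch.rand(1, 1, 64, 64)          # single-cell intensity image
prob_activated = model(preprocess(cell_image))
print(f"activation probability: {prob_activated.item():.3f}")
```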
  • Patent number: 11410001
    Abstract: An image processing method and apparatus, and a storage medium are provided. The method includes: obtaining a first image and a second image of a to-be-authenticated object, where the first image is captured by a first camera module, and the second image is captured by at least one second camera module; comparing the first image with image data in a target library for identity authentication, to obtain a first authentication result; and in response to that the first authentication result is authentication failure, performing joint authentication on the first image and the second image, and determining the identity of the to-be-authenticated object according to a second authentication result of the joint authentication.
    Type: Grant
    Filed: September 9, 2020
    Date of Patent: August 9, 2022
    Assignee: SHANGHAI SENSETIME INTELLIGENT TECHNOLOGY CO., LTD
    Inventors: Yi Lu, Li Cao, Chunlei Hong
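The decision flow (first-image authentication, then joint authentication on failure) might look like the sketch below, with cosine similarity over face embeddings standing in for the actual comparison; the fusion rule and threshold are assumptions.

```python
# Hedged sketch of the decision flow only: authenticate on the first camera's
# embedding, and fall back to joint authentication with the second camera's
# embedding when the first comparison fails.  Embeddings and thresholds are
# stand-ins, not the patented models.
import numpy as np

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def authenticate(emb1, emb2, target_library, threshold=0.8):
    # First authentication: first camera image against the target library.
    best_id, best_sim = max(
        ((pid, cosine(emb1, ref)) for pid, ref in target_library.items()),
        key=lambda kv: kv[1],
    )
    if best_sim >= threshold:
        return best_id, "first-image authentication"
    # Joint authentication: fuse both camera embeddings and compare again.
    joint = (emb1 + emb2) / 2.0
    best_id, best_sim = max(
        ((pid, cosine(joint, ref)) for pid, ref in target_library.items()),
        key=lambda kv: kv[1],
    )
    return (best_id, "joint authentication") if best_sim >= threshold else (None, "rejected")

rng = np.random.default_rng(1)
library = {"alice": rng.normal(size=128), "bob": rng.normal(size=128)}
noisy_first = library["alice"] + rng.normal(scale=1.0, size=128)
clear_second = library["alice"] + rng.normal(scale=0.1, size=128)
print(authenticate(noisy_first, clear_second, library))
```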
  • Patent number: 11410438
    Abstract: Analysis for convolutional processing is performed using logic encoded in a semiconductor processor. The semiconductor chip evaluates pixels within an image of a person in a vehicle, where the analysis identifies a facial portion of the person. The facial portion of the person can include facial landmarks or regions. The semiconductor chip identifies one or more facial expressions based on the facial portion. The facial expressions can include a smile, frown, smirk, or grimace. The semiconductor chip classifies the one or more facial expressions for cognitive response content. The semiconductor chip evaluates the cognitive response content to produce cognitive state information for the person. The semiconductor chip enables manipulation of the vehicle based on communication of the cognitive state information to a component of the vehicle.
    Type: Grant
    Filed: November 8, 2019
    Date of Patent: August 9, 2022
    Assignee: Affectiva, Inc.
    Inventors: Rana el Kaliouby, Abdelrahman N. Mahmoud, Taniya Mishra, Boisy G. Pitre, Panu James Turcot, Andrew Todd Zeilman
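The staged flow (facial expressions, then cognitive response content, then cognitive state, then a vehicle action) can be sketched with toy mappings; the on-chip convolutional analysis and the real expression taxonomy are not reproduced.

```python
# Illustrative sketch of the processing stages only (face region -> expression
# -> cognitive response content -> cognitive state -> vehicle action).
EXPRESSION_TO_RESPONSE = {
    "smile": "positive", "smirk": "positive",
    "frown": "negative", "grimace": "negative",
}

def classify_expressions(expressions):
    return [EXPRESSION_TO_RESPONSE.get(e, "neutral") for e in expressions]

def evaluate_cognitive_state(responses):
    score = sum(+1 if r == "positive" else -1 if r == "negative" else 0
                for r in responses)
    return "calm" if score >= 0 else "distressed"

def vehicle_action(cognitive_state):
    # A component of the vehicle could act on the communicated state.
    return "maintain settings" if cognitive_state == "calm" else "suggest a break"

observed = ["frown", "grimace", "smile"]          # from the facial portion
state = evaluate_cognitive_state(classify_expressions(observed))
print(state, "->", vehicle_action(state))
```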
  • Patent number: 11403496
    Abstract: In an embodiment, an apparatus includes: a sensor to sense real world information; a digitizer coupled to the sensor to digitize the real world information into digitized information; a signal processor coupled to the digitizer to process the digitized information into an image; a discriminator coupled to the signal processor to determine, based at least in part on the image, whether the real world information comprises an anomaly, where the discriminator is trained via a generative adversarial network; and a controller coupled to the discriminator.
    Type: Grant
    Filed: November 4, 2020
    Date of Patent: August 2, 2022
    Assignee: Silicon Laboratories Inc.
    Inventors: Javier Elenes, Antonio Torrini
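The signal chain can be sketched end to end with stand-ins; in particular, the discriminator below is a hand-written heuristic where the patent uses a model trained inside a generative adversarial network.

```python
# Hedged sketch of the signal chain (sensor -> digitizer -> signal processor ->
# discriminator -> controller); every stage here is a toy stand-in.
import numpy as np

def sense():                              # sensor: raw real-world waveform
    t = np.linspace(0, 1, 256)
    return np.sin(2 * np.pi * 5 * t) + 0.05 * np.random.randn(256)

def digitize(signal, bits=8):             # digitizer: quantize to n-bit codes
    lo, hi = signal.min(), signal.max()
    return np.round((signal - lo) / (hi - lo) * (2**bits - 1)).astype(np.uint8)

def to_image(codes):                      # signal processor: 2-D representation
    return codes.reshape(16, 16).astype(np.float32) / 255.0

def discriminator(image, threshold=0.35): # a GAN-trained model would go here
    anomaly_score = float(np.abs(np.diff(image, axis=1)).mean())
    return anomaly_score > threshold

def controller(is_anomaly):
    return "raise alert" if is_anomaly else "normal operation"

print(controller(discriminator(to_image(digitize(sense())))))
```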
  • Patent number: 11404158
    Abstract: An image viewer and method for using the same in a medical image management system are described.
    Type: Grant
    Filed: February 28, 2020
    Date of Patent: August 2, 2022
    Inventors: Brigil Vincent, Tatsuo Kawanaka, Keiichi Morita, Prajeesh Prabhakaran, Maneesh Chirangattu
  • Patent number: 11393255
    Abstract: Disclosed are a liveness determining method and apparatus and a method and apparatus for training the liveness determining apparatus. The liveness determining method includes: extracting, by a processor, a feature from an input fingerprint image; inputting the feature into the current layer classifier; inputting the feature into the subsequent layer classifier, based on a determination that an output of the current layer classifier is live; and determining a liveness of the input fingerprint image to be false, based on a determination that an output of the subsequent layer classifier is fake, wherein the current layer classifier and the subsequent layer classifier are respectively trained based on a plurality of training fake images belonging to different groups.
    Type: Grant
    Filed: September 9, 2020
    Date of Patent: July 19, 2022
    Assignee: SAMSUNG ELECTRONICS CO., LTD.
    Inventors: Joohyeon Kim, Younkyu Lee, Jingu Heo
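The cascaded decision rule reads naturally as the sketch below, with toy classifiers in place of the trained per-group models; treating a "fake" verdict from the current layer as final is an assumption.

```python
# A minimal sketch of the cascaded decision rule described above, with toy
# classifiers standing in for the trained per-group models.
def current_layer_classifier(feature):
    return "live" if feature["sharpness"] > 0.5 else "fake"

def subsequent_layer_classifier(feature):
    return "live" if feature["ridge_continuity"] > 0.5 else "fake"

def determine_liveness(feature):
    # The feature always goes to the current layer classifier first.
    # (Assumption: a "fake" verdict at this layer is treated as final.)
    if current_layer_classifier(feature) == "fake":
        return False
    # Only if the current layer says "live" is the subsequent layer consulted;
    # a "fake" verdict there makes the final liveness false.
    return subsequent_layer_classifier(feature) != "fake"

sample = {"sharpness": 0.8, "ridge_continuity": 0.2}   # toy extracted feature
print(determine_liveness(sample))                      # False: caught by layer 2
```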
  • Patent number: 11393210
    Abstract: The invention relates to a device that receives images from one or more cameras, processes the images, and automatically detects unknown humans in the field of view of the camera, for example to prevent burglary. To do so, the device comprises a processing logic configured to detect faces, recognize faces, and verify whether a face corresponds to a face in a collection of faces of known humans. If a face is detected but does not correspond to a known face, an alarm event is triggered. The processing logic is further configured to classify objects in the image into classes of objects comprising at least a human class. If a human is recognized but no face has been detected for this human, an alarm event is also triggered. Thus, an alarm can be triggered in any case where a detected human is not a known, trusted human.
    Type: Grant
    Filed: June 1, 2018
    Date of Patent: July 19, 2022
    Assignee: NETATMO
    Inventors: Mehdi Felhi, Alice Lebois, Fred Potter, Florian Deleuil
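The alarm logic is essentially two rules, sketched below with a made-up detection format: an unknown face triggers an alarm, and so does a human detection with no accompanying face.

```python
# Hedged sketch of the alarm logic described in the abstract: an unknown face,
# or a human detected with no face at all, both trigger an alarm event.
def should_trigger_alarm(detections, known_faces):
    """detections: list of dicts like {"class": "human", "face_id": "fred" or None}."""
    for det in detections:
        if det["class"] != "human":
            continue
        face = det.get("face_id")
        if face is None:               # human recognized but no face detected
            return True
        if face not in known_faces:    # face detected but not in the collection
            return True
    return False

known = {"fred", "alice"}
print(should_trigger_alarm([{"class": "human", "face_id": "alice"}], known))  # False
print(should_trigger_alarm([{"class": "human", "face_id": None}], known))     # True
print(should_trigger_alarm([{"class": "cat", "face_id": None}], known))       # False
```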
  • Patent number: 11392838
    Abstract: The application discloses a method for knowledge extraction based on TextCNN, comprising: S10, collecting first training data, and constructing a character vector dictionary and a word vector dictionary; S20, constructing a first convolutional neural network, and training the first convolutional neural network based on a first optimization algorithm, the first convolutional neural network comprising a first embedding layer, a first multilayer convolution, and a first softmax function connected in turn; S30, constructing a second convolutional neural network, and training the second convolutional neural network based on a second optimization algorithm, the second convolutional neural network comprising a second embedding layer, a second multilayer convolution, a pooling layer, two fully-connected layers, and a second softmax function connected in turn; S40, extracting a knowledge graph triple of the to-be-predicted data according to an entity tagging prediction output by the first trained convolutional neural network.
    Type: Grant
    Filed: May 31, 2019
    Date of Patent: July 19, 2022
    Assignee: Ping An Technology (Shenzhen) Co., Ltd.
    Inventors: Ge Jin, Liang Xu, Jing Xiao
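A compact PyTorch sketch of the two networks as listed in S20 and S30: a per-character tagging CNN and a sentence-level relation CNN. Dictionary construction, the optimization algorithms, and the triple assembly of S40 are omitted, and all sizes are assumptions.

```python
# A sketch under stated assumptions: a per-character tagging CNN (first network)
# and a sentence-level relation CNN (second network).
import torch
import torch.nn as nn

class EntityTaggingCNN(nn.Module):        # embedding -> multilayer conv -> softmax
    def __init__(self, vocab=1000, emb=32, tags=5):
        super().__init__()
        self.emb = nn.Embedding(vocab, emb)
        self.convs = nn.Sequential(
            nn.Conv1d(emb, 64, 3, padding=1), nn.ReLU(),
            nn.Conv1d(64, tags, 3, padding=1),
        )

    def forward(self, ids):               # ids: (batch, seq_len)
        h = self.convs(self.emb(ids).transpose(1, 2))
        return h.transpose(1, 2).softmax(dim=-1)      # tag distribution per char

class RelationCNN(nn.Module):             # embedding -> conv -> pool -> FC -> softmax
    def __init__(self, vocab=1000, emb=32, relations=10):
        super().__init__()
        self.emb = nn.Embedding(vocab, emb)
        self.conv = nn.Conv1d(emb, 64, 3, padding=1)
        self.fc = nn.Sequential(nn.Linear(64, 32), nn.ReLU(), nn.Linear(32, relations))

    def forward(self, ids):
        h = torch.relu(self.conv(self.emb(ids).transpose(1, 2)))
        pooled = h.max(dim=2).values                  # global max pooling
        return self.fc(pooled).softmax(dim=-1)        # relation distribution

ids = torch.randint(0, 1000, (2, 40))     # two sentences of 40 character ids
print(EntityTaggingCNN()(ids).shape, RelationCNN()(ids).shape)
```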
  • Patent number: 11386674
    Abstract: A class labeling system for autonomous driving includes a detection module, a segmentation module, and a lane road boundary detection module. The detection module is configured to detect objects for autonomous driving from an image captured by a camera to generate a bounding box for each of the objects and detect property information about the object. The segmentation module is configured to determine classes for each pixel of the bounding box detected by the detection module and process at least one of the classes as don't care. The lane road boundary detection module is configured to detect at least one of lane and road boundaries using the bounding box detected by the detection module.
    Type: Grant
    Filed: March 4, 2020
    Date of Patent: July 12, 2022
    Assignee: Hyundai Mobis Co., Ltd.
    Inventors: Dong Yul Lee, Jin Aeon Lee, Ho Pyong Gil, Byung Min Kim, Sang Woo Park
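One way to picture the segmentation module's "don't care" handling: per-pixel labels inside detected boxes, with selected classes overwritten by an ignore label. The box format, class ids, and the ignore value are made up for illustration.

```python
# Illustrative sketch of the labeling flow: per-pixel classes inside detected
# boxes, with selected classes overwritten as a "don't care" label so they are
# ignored during training.
import numpy as np

DONT_CARE = 255

def label_pixels(seg_map, boxes, dont_care_classes):
    """seg_map: (H, W) class ids; boxes: list of (x1, y1, x2, y2, class_id)."""
    labels = seg_map.copy()
    for x1, y1, x2, y2, cls in boxes:
        region = labels[y1:y2, x1:x2]
        if cls in dont_care_classes:
            region[:] = DONT_CARE              # whole box ignored
        else:
            region[region == 0] = cls          # background pixels take box class
    return labels

seg = np.zeros((8, 12), dtype=np.uint8)        # 0 = background
boxes = [(1, 1, 5, 4, 3),                      # class 3: vehicle (kept)
         (7, 2, 11, 6, 9)]                     # class 9: ambiguous (don't care)
labeled = label_pixels(seg, boxes, dont_care_classes={9})
print(np.unique(labeled))                      # [  0   3 255]
```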
  • Patent number: 11373069
    Abstract: An analogy generating system includes one or more image databases that include a first set of images depicting a first symbolic class and a second set of images depicting a second symbolic class, and an autoencoder that receives images from the first set of images and the second set of images; determines a first characteristic shared between the first symbolic class and the second symbolic class using a first node from multiple nodes on a neural network; determines a second characteristic shared between the first symbolic class and the second symbolic class using a second node from multiple nodes on the neural network; and exchanges the first characteristic and the second characteristic between the first node and the second node to establish an analogy between the first symbolic class and the second symbolic class.
    Type: Grant
    Filed: March 20, 2020
    Date of Patent: June 28, 2022
    Assignee: GENERAL ELECTRIC COMPANY
    Inventors: Peter Henry Tu, Tao Gao, Alexander S-Ban Chen, Jilin Tu
  • Patent number: 11373442
    Abstract: According to an embodiment, a collation device includes a hardware processor configured to: generate, based at least in part on input data, an input vector comprising input data features indicating features of the input data, the input data features comprising D number of features, D being an integer equal to or larger than two; and generate first specification information that specifies d selected features among the input data features of the input vector, based at least in part on a plurality of reference vectors and the input vector, the plurality of reference vectors each comprising reference features in the same form as the input vector, the reference features comprising the D number of features, d being an integer equal to or larger than one and smaller than D.
    Type: Grant
    Filed: August 25, 2020
    Date of Patent: June 28, 2022
    Assignee: KABUSHIKI KAISHA TOSHIBA
    Inventors: Naoki Kawamura, Susumu Kubota
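A hedged sketch of one possible selection rule for the d features: keep the dimensions where the reference vectors disagree most, then collate on those dimensions only. The patent does not spell out the rule used here; this is purely illustrative.

```python
# A hedged sketch of one way to pick d of the D input features using the
# reference vectors: keep the dimensions with the largest spread across the
# references, on the assumption that those are the most informative for
# collation.  This selection rule is illustrative only.
import numpy as np

def select_features(input_vec, reference_vecs, d):
    """Return indices of d selected features (the 'specification information')."""
    spread = reference_vecs.std(axis=0)            # per-dimension disagreement
    return np.argsort(spread)[::-1][:d]            # d most discriminative dims

def collate(input_vec, reference_vecs, d):
    idx = select_features(input_vec, reference_vecs, d)
    dists = np.linalg.norm(reference_vecs[:, idx] - input_vec[idx], axis=1)
    return idx, int(np.argmin(dists))              # selected dims, best reference

rng = np.random.default_rng(2)
D, d = 16, 4
refs = rng.normal(size=(5, D))                     # 5 reference vectors
query = refs[3] + 0.1 * rng.normal(size=D)         # noisy copy of reference 3
selected, match = collate(query, refs, d)
print(selected, "-> matched reference", match)     # usually reference 3
```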
  • Patent number: 11374674
    Abstract: The present disclosure relates to passive optical network (PON) systems, an optical line terminal (OLT), and an optical network unit (ONU). One example PON system includes an OLT and at least two ONUs, and the OLT and the ONUs exchange data on one downstream channel and two upstream channels. The OLT sends downstream data to each ONU on the downstream channel, where the downstream data includes an upstream bandwidth grant which is used to control the ONU to send upstream data. Each ONU receives the downstream data on the downstream channel, and sends the upstream data on a first upstream channel or a second upstream channel based on the upstream bandwidth grant included in the downstream data. The OLT receives, on the first upstream channel and the second upstream channel, the upstream data sent by each ONU.
    Type: Grant
    Filed: August 7, 2020
    Date of Patent: June 28, 2022
    Assignee: Huawei Technologies Co., Ltd.
    Inventors: Huafeng Lin, Jinrong Yin, Dianbo Zhao, Xifeng Wan, Shiwei Nie, Gang Zheng, Zhijing Luo, Xiaofei Zeng, Jun Luo
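The grant flow (the OLT schedules grants carried on the downstream channel, and each ONU transmits on its granted upstream channel) can be sketched with made-up field names; framing, timing, and the optical layer are omitted.

```python
# Hedged sketch of the grant flow only (one downstream channel, two upstream
# channels); the BandwidthGrant fields are invented for illustration.
from dataclasses import dataclass

@dataclass
class BandwidthGrant:
    onu_id: int
    upstream_channel: int      # 1 or 2
    start_slot: int
    slot_count: int

def olt_schedule(onu_ids):
    """OLT: build the grants carried in the downstream data."""
    return [BandwidthGrant(onu, upstream_channel=1 + (i % 2),
                           start_slot=i * 8, slot_count=8)
            for i, onu in enumerate(onu_ids)]

def onu_transmit(onu_id, downstream_grants, payload):
    """ONU: find its grant in the downstream data and send on that channel."""
    grant = next(g for g in downstream_grants if g.onu_id == onu_id)
    return {"channel": grant.upstream_channel,
            "slots": range(grant.start_slot, grant.start_slot + grant.slot_count),
            "payload": payload}

grants = olt_schedule([101, 102, 103])             # sent on the downstream channel
for onu in (101, 102, 103):
    burst = onu_transmit(onu, grants, payload=f"data from ONU {onu}")
    print(onu, "-> upstream channel", burst["channel"], list(burst["slots"])[:2], "...")
```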
  • Patent number: 11366984
    Abstract: Embodiments include a method, an electronic device, and a computer program product for information processing. In an example embodiment, a method for information processing includes: acquiring, at a first device, a first feature associated with a target object; applying the first feature to a trained first model deployed at the first device to determine a first confidence coefficient, the first confidence coefficient being associated with probabilities that the first model determines the target object as a real object and as a false object; if the first confidence coefficient is lower than a first threshold confidence coefficient, sending a request for verifying the target object to a second device, the second device being deployed with a trained second model for verifying the target object, and the second model being more complex than the first model; and updating the first model based on a response to the request.
    Type: Grant
    Filed: December 30, 2020
    Date of Patent: June 21, 2022
    Assignee: EMC IP Holding Company LLC
    Inventors: Jiacheng Ni, Jinpeng Liu, Qiang Chen, Zhen Jia
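The cascade can be sketched as below: the first device keeps decisions local when its confidence clears the threshold, otherwise it escalates to the second device and logs the verdict as material for updating the first model. Both models and the confidence measure are toy stand-ins.

```python
# A minimal sketch of the cascade described above; the models, confidence
# measure, and update bookkeeping are stand-ins.
def first_model(feature):
    score = sum(feature) / len(feature)            # toy "real object" score
    confidence = abs(score - 0.5) * 2              # distance from decision boundary
    return score > 0.5, confidence

def second_model(feature):                         # heavier, more reliable model
    return sum(f * f for f in feature) / len(feature) > 0.3

def verify_target(feature, correction_log, first_threshold=0.6):
    is_real, confidence = first_model(feature)
    if confidence >= first_threshold:
        return is_real, "decided at first device"
    verdict = second_model(feature)                # request sent to second device
    correction_log.append((feature, verdict))      # material for updating model 1
    return verdict, "verified at second device"

log = []
print(verify_target([0.9, 0.8, 0.95], log))        # confident, stays local
print(verify_target([0.45, 0.55, 0.5], log))       # low confidence, escalated
print("pending updates:", len(log))
```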
  • Patent number: 11367277
    Abstract: Aspects of the subject disclosure may include, for example, obtaining a first plurality of inputs that identify a plurality of geographical locations and a plurality of infrastructure located at the plurality of geographical locations, classifying each of the plurality of geographical locations in accordance with the first plurality of inputs to obtain a plurality of classes, obtaining a second plurality of inputs that identify costs, revenue, profits, or any combination thereof, associated with the plurality of infrastructure, processing the second plurality of inputs in conjunction with the plurality of classes to identify a first plurality of locations included in the plurality of geographical locations to decommission infrastructure included in the plurality of infrastructure, and presenting the first plurality of locations via a device. Other embodiments are disclosed.
    Type: Grant
    Filed: December 20, 2019
    Date of Patent: June 21, 2022
    Assignee: AT&T Intellectual Property I, L.P.
    Inventors: Carlos Eduardo De Andrade, Will Adams Culpepper, Vijay Gopalakrishnan, Sarat Puthenpura, Weiyi Zhang