Patents Examined by Utpal D Shah
  • Patent number: 11893789
    Abstract: A deep neural network provides real-time pose estimation by combining two custom deep neural networks, a location classifier and an ID classifier, with a pose estimation algorithm to achieve a 6DoF location of a fiducial marker. The locations may be further refined into subpixel coordinates using another deep neural network. The networks may be trained using a combination of auto-labeled videos of the target marker, synthetic subpixel corner data, and/or extreme data augmentation. The deep neural network provides improved pose estimations, particularly in challenging low-light, high-motion, and/or high-blur scenarios.
    Type: Grant
    Filed: November 14, 2019
    Date of Patent: February 6, 2024
    Assignee: Magic Leap, Inc.
    Inventors: Danying Hu, Daniel DeTone, Tomasz Jan Malisiewicz
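The final step of the pipeline described above pairs learned corner detections with a classical pose solve. The sketch below is not Magic Leap's implementation; it assumes a square marker of known size, made-up camera intrinsics, and hard-coded corner coordinates standing in for the networks' subpixel output, and recovers a 6DoF pose with OpenCV's PnP solver.

```python
# A minimal sketch of the pose-recovery step: given the four marker corners
# that the location/ID networks would supply (hard-coded here as an
# assumption), recover a 6DoF pose with a standard PnP solver.
import numpy as np
import cv2

MARKER_SIZE = 0.05  # assumed marker edge length in meters

# 3D corner coordinates of the marker in its own frame (IPPE_SQUARE order).
object_points = np.array([
    [-MARKER_SIZE / 2,  MARKER_SIZE / 2, 0],
    [ MARKER_SIZE / 2,  MARKER_SIZE / 2, 0],
    [ MARKER_SIZE / 2, -MARKER_SIZE / 2, 0],
    [-MARKER_SIZE / 2, -MARKER_SIZE / 2, 0],
], dtype=np.float64)

# Subpixel corner locations in the image (stand-ins for network output).
image_points = np.array([
    [320.4, 240.1], [400.7, 238.9], [402.2, 318.5], [321.9, 320.3]
], dtype=np.float64)

# Assumed pinhole camera intrinsics with no lens distortion.
camera_matrix = np.array([[800.0, 0.0, 320.0],
                          [0.0, 800.0, 240.0],
                          [0.0, 0.0, 1.0]])
dist_coeffs = np.zeros(5)

ok, rvec, tvec = cv2.solvePnP(object_points, image_points,
                              camera_matrix, dist_coeffs,
                              flags=cv2.SOLVEPNP_IPPE_SQUARE)
if ok:
    print("rotation (Rodrigues):", rvec.ravel())
    print("translation (m):", tvec.ravel())
```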
  • Patent number: 11887366
    Abstract: A method comprising performing object detection within a set of representations of a hierarchically-structured signal, the set of representations comprising at least a first representation of the signal at a first level of quality and a second representation of the signal at a second, higher level of quality.
    Type: Grant
    Filed: February 11, 2020
    Date of Patent: January 30, 2024
    Inventors: Guido Meardi, Guendalina Cobianchi, Balázs Keszthelyi, Ivan Makeev, Simone Ferrara, Stergios Poularakis
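The abstract above describes detection across representations of the same signal at different quality levels. A hedged coarse-to-fine sketch is given below; the stub detector and the specific box-mapping scheme are assumptions, not the patented method.

```python
# A hedged sketch of coarse-to-fine detection over two quality levels.
# The detector is a stub; in practice any object detector could be applied
# per representation.
import numpy as np

def detect_stub(image):
    """Placeholder detector: returns boxes as (x0, y0, x1, y1, score)."""
    h, w = image.shape[:2]
    return [(w // 4, h // 4, w // 2, h // 2, 0.9)]

def coarse_to_fine_detect(low_res, high_res):
    scale_y = high_res.shape[0] / low_res.shape[0]
    scale_x = high_res.shape[1] / low_res.shape[1]
    refined = []
    for x0, y0, x1, y1, _ in detect_stub(low_res):
        # Map the coarse box into the higher-quality representation.
        hx0, hy0 = int(x0 * scale_x), int(y0 * scale_y)
        hx1, hy1 = int(x1 * scale_x), int(y1 * scale_y)
        crop = high_res[hy0:hy1, hx0:hx1]
        # Re-detect inside the crop and report boxes in high-res coordinates.
        for cx0, cy0, cx1, cy1, score in detect_stub(crop):
            refined.append((hx0 + cx0, hy0 + cy0, hx0 + cx1, hy0 + cy1, score))
    return refined

low = np.zeros((90, 160, 3), dtype=np.uint8)    # first, lower level of quality
high = np.zeros((360, 640, 3), dtype=np.uint8)  # second, higher level of quality
print(coarse_to_fine_detect(low, high))
```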
  • Patent number: 11886604
    Abstract: The technology described herein obfuscates image content using a local neural network and a remote neural network. The local network runs on a local computer system and a remote classifier runs in a remote computing system. Together, the local network and the remote classifier are able to classify images, while the image never leaves the local computer system. In aspects of the technology, the local network receives a local image and creates a transformed object. The transformed object may be generated by processing the image with a local neural network to generate a multidimensional array and then randomly shuffling data locations within the multidimensional array. The transformed object is communicated to the remote classifier in the remote computing system for classification. The remote classifier may not have the seed used to deterministically scramble the spatial arrangement of data within the multidimensional array.
    Type: Grant
    Filed: March 27, 2023
    Date of Patent: January 30, 2024
    Assignee: Microsoft Technology Licensing, LLC
    Inventors: Kamlesh Dattaram Kshirsagar, Frank T. Seide
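The shuffling step above can be illustrated with a seeded permutation of spatial positions in a feature array. The sketch below is an assumption about shapes and API, not Microsoft's code; it only shows that the arrangement is recoverable solely with the seed (or the permutation derived from it).

```python
# A minimal sketch of the seeded shuffle: the local side permutes spatial
# positions of a feature array with a private seed; the remote side cannot
# restore the arrangement without that seed.
import numpy as np

def shuffle_features(features, seed):
    """Deterministically scramble spatial positions of an (H, W, C) array."""
    h, w, c = features.shape
    flat = features.reshape(h * w, c)
    perm = np.random.default_rng(seed).permutation(h * w)
    return flat[perm].reshape(h, w, c), perm

def unshuffle_features(shuffled, perm):
    """Inverse permutation; only possible with knowledge of the seed/perm."""
    h, w, c = shuffled.shape
    flat = shuffled.reshape(h * w, c)
    inverse = np.argsort(perm)
    return flat[inverse].reshape(h, w, c)

local_features = np.random.rand(14, 14, 256)      # output of the local network
scrambled, perm = shuffle_features(local_features, seed=12345)
# `scrambled` is what would be sent to the remote classifier; the raw image
# and the seed stay on the local computer system.
assert np.allclose(unshuffle_features(scrambled, perm), local_features)
```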
  • Patent number: 11881010
    Abstract: Methods, systems, and apparatus, including computer programs encoded on a computer storage medium, for machine learning for video analysis and feedback. In some implementations, a machine learning model is trained to classify videos into performance level classifications based on characteristics of image data and audio data in the videos. Video data captured by a device of a user, following a prompt that the device provides to the user, is received. A set of feature values that describes audio and video characteristics of the video data is determined. The set of feature values is provided as input to the trained machine learning model to generate output that classifies the video data with respect to the performance level classifications. A user interface of the device is updated based on the performance level classification for the video data.
    Type: Grant
    Filed: September 1, 2022
    Date of Patent: January 23, 2024
    Assignee: Voomer, Inc.
    Inventor: David Wesley Anderton-Yang
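The classification step above takes a feature vector describing audio and video characteristics and maps it to a performance level. The sketch below is a stand-in, not Voomer's model: the feature set suggested in the comments and the logistic-regression classifier are assumptions.

```python
# A hedged sketch of classifying a video's feature vector into a performance
# level. The model is a stand-in trained on synthetic data.
import numpy as np
from sklearn.linear_model import LogisticRegression

# Train a stand-in model on synthetic feature vectors (e.g., mean brightness,
# speech rate, pause ratio, eye-contact ratio) with performance-level labels.
rng = np.random.default_rng(0)
X_train = rng.random((200, 4))
y_train = rng.integers(0, 3, size=200)   # e.g., 0=low, 1=medium, 2=high
model = LogisticRegression(max_iter=1000).fit(X_train, y_train)

# Feature values extracted from a newly captured response video.
features = np.array([[0.62, 0.41, 0.15, 0.78]])
performance_level = model.predict(features)[0]
print("performance level classification:", performance_level)
```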
  • Patent number: 11875597
    Abstract: A method includes obtaining, by a processing device from an optical sensor of a mobile device, an image; processing, by the processing device, the image using a neural network to identify an object in the image and the position of the object, thereby obtaining an identified object; after processing the image, extracting, by the processing device, a biometric characteristic from the identified object; and providing, by the processing device, at least the biometric characteristic as input to determine whether the biometric characteristic identifies a user.
    Type: Grant
    Filed: January 22, 2020
    Date of Patent: January 16, 2024
    Assignee: Identy Inc.
    Inventors: Hardik Gupta, Satheesh Murugan
  • Patent number: 11875592
    Abstract: An elongated biometric device provides a slim solution for capturing biometric data, and may be placed on a portion of an electronic device having limited space, such as a side of the electronic device. The elongated biometric device may include a force sensor, which may be positioned within a housing of the electronic device and actuated through posts extending from the elongated biometric device through the housing to transfer an applied force to the force sensor.
    Type: Grant
    Filed: September 24, 2021
    Date of Patent: January 16, 2024
    Assignee: Apple Inc.
    Inventors: Daniel B. Sargent, Dale Setlak, Giovanni Gozzini, John Raff, Michael B Wittenberg, Richard H. Koch, Ron A. Hopkinson
  • Patent number: 11875268
    Abstract: A client device configured with a neural network includes a processor, a memory, a user interface, a communications interface, a power supply and an input device, wherein the memory includes a trained neural network received from a server system that has trained and configured the neural network for the client device. A server system and a method of training a neural network are disclosed.
    Type: Grant
    Filed: December 29, 2022
    Date of Patent: January 16, 2024
    Inventors: Zhengping Ji, Ilia Ovsiannikov, Yibing Michelle Wang, Lilong Shi
  • Patent number: 11869241
    Abstract: An apparatus including an interface and a processor. The interface may be configured to receive pixel data generated by a capture device. The processor may be configured to generate video frames in response to the pixel data, perform computer vision operations on the video frames to detect objects, perform a classification of the objects detected based on characteristics of the objects, determine whether the classification of the objects corresponds to a user-defined event and a user-defined identity and generate encoded video frames from the video frames. The encoded video frames may comprise a first sample of the video frames selected at a first rate when the user-defined event is not detected and a second sample of the video frames selected at a second rate while the user-defined event is detected. The video frames comprising the user-defined identity without a second person may be excluded from the encoded video frames.
    Type: Grant
    Filed: October 28, 2022
    Date of Patent: January 9, 2024
    Assignee: Ambarella International LP
    Inventor: Jian Tang
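The dual-rate sampling described above can be sketched as a simple frame-selection loop. The rates, and the precomputed event flags standing in for the computer vision classification, are assumptions.

```python
# A minimal sketch of dual-rate sampling: frames are selected sparsely when
# no user-defined event is detected and densely while one is. The event test
# here is a placeholder for the classification step.
def select_frames(frames, event_flags, idle_rate=30, event_rate=2):
    """Keep every `idle_rate`-th frame normally, every `event_rate`-th frame
    while the user-defined event is detected."""
    selected = []
    for i, (frame, event) in enumerate(zip(frames, event_flags)):
        rate = event_rate if event else idle_rate
        if i % rate == 0:
            selected.append(frame)
    return selected

# 300 dummy frames; the event is "detected" for frames 100-199.
frames = list(range(300))
event_flags = [100 <= i < 200 for i in frames]
encoded_input = select_frames(frames, event_flags)
print(len(encoded_input), "of", len(frames), "frames selected for encoding")
```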
  • Patent number: 11868889
    Abstract: In implementations of object detection in images, object detectors are trained using heterogeneous training datasets. A first training dataset is used to train an image tagging network to determine an attention map of an input image for a target concept. A second training dataset is used to train a conditional detection network that accepts as conditional inputs the attention map and a word embedding of the target concept. Despite the conditional detection network being trained with a training dataset having a small number of seen classes (e.g., classes in a training dataset), it generalizes to novel, unseen classes by concept conditioning, since the target concept propagates through the conditional detection network via the conditional inputs, thus influencing classification and region proposal. Hence, classes of objects that can be detected are expanded, without the need to scale training databases to include additional classes.
    Type: Grant
    Filed: January 31, 2022
    Date of Patent: January 9, 2024
    Assignee: Adobe Inc.
    Inventors: Zhe Lin, Xiaohui Shen, Mingyang Ling, Jianming Zhang, Jason Wen Yong Kuen
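The concept conditioning described above can be pictured as fusing the attention map and a broadcast word embedding with the image features before the classification and region-proposal heads. The module below is an illustrative sketch, not Adobe's architecture; the channel sizes, embedding dimension, and anchor count are assumptions.

```python
# A hedged sketch of a detection head conditioned on an attention map and a
# word embedding of the target concept.
import torch
import torch.nn as nn

class ConditionalDetectionHead(nn.Module):
    def __init__(self, feat_channels=256, embed_dim=300, num_anchors=9):
        super().__init__()
        # Fuse image features, the attention map, and the broadcast embedding.
        self.fuse = nn.Conv2d(feat_channels + 1 + embed_dim, feat_channels,
                              kernel_size=3, padding=1)
        self.cls = nn.Conv2d(feat_channels, num_anchors, kernel_size=1)
        self.reg = nn.Conv2d(feat_channels, num_anchors * 4, kernel_size=1)

    def forward(self, feats, attention_map, word_embedding):
        b, _, h, w = feats.shape
        emb = word_embedding.view(b, -1, 1, 1).expand(-1, -1, h, w)
        x = torch.relu(self.fuse(torch.cat([feats, attention_map, emb], dim=1)))
        return self.cls(x), self.reg(x)   # objectness and box offsets

head = ConditionalDetectionHead()
feats = torch.randn(2, 256, 32, 32)          # backbone features
attention = torch.rand(2, 1, 32, 32)         # attention map for the concept
embedding = torch.randn(2, 300)              # word embedding of the concept
scores, boxes = head(feats, attention, embedding)
print(scores.shape, boxes.shape)
```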
  • Patent number: 11869249
    Abstract: The present disclosure provides a system, a method and an apparatus for object identification, addressing the problem in the related art that a system for centralized control and management of unmanned vehicles may not be able to identify an object effectively. The system for object identification includes a sensing device, a control device and one or more unmanned vehicles.
    Type: Grant
    Filed: December 16, 2022
    Date of Patent: January 9, 2024
    Assignee: BEIJING TUSEN ZHITU TECHNOLOGY CO., LTD.
    Inventor: Nan Wu
  • Patent number: 11847811
    Abstract: The present disclosure discloses an image segmentation method that combines superpixel segmentation with multi-scale hierarchical feature recognition. The method is based on a convolutional neural network model that takes multi-scale hierarchical features, extracted from a Gaussian pyramid of an image, as a recognition basis, followed by a multilayer perceptron to recognize each pixel in the image. The method also performs superpixel segmentation on the image, combined with a method for improving superpixels using LBP texture features, so that the obtained superpixel blocks fit the edges of targets more closely; the original image is then merged according to mean color values, and finally each target in the image is recognized.
    Type: Grant
    Filed: January 9, 2023
    Date of Patent: December 19, 2023
    Assignee: NANJING UNIVERSITY OF POSTS AND TELECOMMUNICATIONS
    Inventors: Dengyin Zhang, Wenye Ni, Xiaofei Jin, Qunjian Du
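The multi-scale hierarchical feature step above can be sketched by resizing each Gaussian-pyramid level back to the original resolution and stacking the levels per pixel. The pyramid depth and the use of OpenCV are assumptions; the per-pixel MLP and the LBP-based superpixel refinement are omitted.

```python
# A hedged sketch of multi-scale hierarchical features: each Gaussian-pyramid
# level is resized back to the original resolution and stacked, giving every
# pixel a feature vector that a per-pixel classifier could consume.
import numpy as np
import cv2

def multiscale_features(image, levels=3):
    h, w = image.shape[:2]
    feats = [image.astype(np.float32)]
    current = image
    for _ in range(levels - 1):
        current = cv2.pyrDown(current)                      # next pyramid level
        up = cv2.resize(current, (w, h), interpolation=cv2.INTER_LINEAR)
        feats.append(up.astype(np.float32))
    # Stack along the channel axis: shape (H, W, 3 * levels) for a color image.
    return np.concatenate([f.reshape(h, w, -1) for f in feats], axis=2)

image = np.random.randint(0, 256, (240, 320, 3), dtype=np.uint8)
per_pixel_features = multiscale_features(image)
print(per_pixel_features.shape)   # (240, 320, 9)
```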
  • Patent number: 11842426
    Abstract: The disclosure discloses methods for determining a fitting model for a signal, methods for reconstructing a signal, and devices thereof. The determination method may comprise the following steps: segmenting a sampled signal based on collected characteristic information of the signal to obtain a plurality of signal segments; fitting sampling points in the plurality of signal segments using a plurality of fitting models; acquiring fitted values for a parameter of interest in each signal segment based on the fitting results; comparing each of the fitted values for the parameter of interest under each of the fitting models with an acquired measurement value for the parameter of interest; and determining, based on the comparison results, a final fitting model for reconstructing the signal from among the plurality of fitting models. The technical solution provided in the disclosure may improve the accuracy and precision of signal reconstruction.
    Type: Grant
    Filed: May 9, 2019
    Date of Patent: December 12, 2023
    Assignee: Raycan Technology Co., Ltd. (Suzhou)
    Inventors: Yuming Su, Junhua Mei, Kezhang Zhu, Qingguo Xie, Pingping Dai, Hao Wang
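The model-selection idea above can be sketched with two candidate fitting models and a single parameter of interest. The Gaussian and Lorentzian candidates, and peak amplitude as the parameter of interest, are assumptions chosen for illustration.

```python
# A hedged sketch of model selection: several candidate models are fit to a
# signal segment, the parameter of interest is read from each fit, and the
# model whose fitted value is closest to an independently measured value wins.
import numpy as np
from scipy.optimize import curve_fit

def gaussian(t, a, mu, sigma):
    return a * np.exp(-((t - mu) ** 2) / (2 * sigma ** 2))

def lorentzian(t, a, mu, gamma):
    return a * gamma ** 2 / ((t - mu) ** 2 + gamma ** 2)

t = np.linspace(0, 10, 200)
segment = gaussian(t, 3.0, 5.0, 1.2) + 0.05 * np.random.randn(t.size)
measured_amplitude = 3.0   # independently acquired measurement value

candidates = {"gaussian": gaussian, "lorentzian": lorentzian}
best_name, best_error = None, np.inf
for name, model in candidates.items():
    params, _ = curve_fit(model, t, segment, p0=[1.0, 5.0, 1.0])
    fitted_amplitude = params[0]            # parameter of interest from the fit
    error = abs(fitted_amplitude - measured_amplitude)
    if error < best_error:
        best_name, best_error = name, error

print("final fitting model:", best_name)
```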
  • Patent number: 11842554
    Abstract: A high security key scanning system and method is provided. The scanning system may comprise a sensing device configured to determine information and characteristics of a master high security key, and a digital logic to analyze the information and characteristics of the master key. The sensing device may be configured to capture information about the geometry of features cut into the surface of the master key. The logic may analyze the information related to that geometry and compare it to known characteristics of that style of high security key in order to determine the data needed to replicate the features on a new high security key blank. The system may be configured to capture the surface geometry using a camera or other imaging device. The system may utilize object coating techniques, illumination techniques, filtering techniques, image processing techniques, and feature extraction techniques to capture the desired features.
    Type: Grant
    Filed: March 22, 2021
    Date of Patent: December 12, 2023
    Assignee: HY-KO PRODUCTS COMPANY LLC
    Inventors: William R. Mutch, Thomas F. Fiore, Randall A. Porras, Chester O. D. Thompson
  • Patent number: 11830230
    Abstract: A method, an electronic device and a storage medium for living body detection based on face recognition are disclosed. The method comprises: obtaining a to-be-detected infrared image and a visible light image; performing edge detection and texture feature extraction on the infrared image, and feature extraction on the visible light image through a convolutional neural network; and determining whether the infrared and visible light images pass living body detection based on the results of the edge detection and texture feature extraction on the to-be-detected infrared image and the result of feature extraction on the to-be-detected visible light image through the convolutional neural network. The method, electronic device and storage medium combine the advantages of edge detection, texture feature extraction and convolutional neural networks to perform living body detection effectively and improve determination accuracy.
    Type: Grant
    Filed: January 7, 2020
    Date of Patent: November 28, 2023
    Assignee: HANGZHOU HIKVISION DIGITAL TECHNOLOGY CO., LTD.
    Inventors: Shengguo Wang, Xianlin Zhao, Chuan Shen
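The feature side of the method above can be sketched by combining an edge-density measure and an LBP texture histogram from the infrared image with CNN features from the visible-light image. The stubbed CNN extractor, the LBP settings, and the Canny thresholds are assumptions; the final live/spoof classifier is omitted.

```python
# A hedged sketch of building a combined liveness descriptor from edge,
# texture, and CNN features.
import numpy as np
import cv2
from skimage.feature import local_binary_pattern

def liveness_features(ir_image, visible_image, cnn_extractor):
    edges = cv2.Canny(ir_image, 50, 150)
    edge_density = edges.mean() / 255.0                       # edge detection result
    lbp = local_binary_pattern(ir_image, P=8, R=1, method="uniform")
    texture_hist, _ = np.histogram(lbp, bins=10, range=(0, 10), density=True)
    cnn_feat = cnn_extractor(visible_image)                   # visible-light CNN features
    return np.concatenate([[edge_density], texture_hist, cnn_feat])

cnn_stub = lambda img: np.zeros(128)       # stand-in for the visible-light CNN
ir = np.random.randint(0, 256, (112, 112), dtype=np.uint8)
vis = np.random.randint(0, 256, (112, 112, 3), dtype=np.uint8)
descriptor = liveness_features(ir, vis, cnn_stub)
print(descriptor.shape)   # (139,)
```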
  • Patent number: 11830160
    Abstract: In various examples, a single camera is used to capture two images of a scene from different locations. A trained neural network, taking the two images as inputs, outputs a scene structure map that indicates a ratio of height and depth values for pixel locations associated with the images. This ratio may indicate the presence of an object above a surface (e.g., road surface) within the scene. Object detection then can be performed on non-zero values or regions within the scene structure map.
    Type: Grant
    Filed: August 19, 2020
    Date of Patent: November 28, 2023
    Assignee: NVIDIA Corporation
    Inventors: Le An, Yu-Te Cheng, Oliver Wilhelm Knieps, Su Inn Park
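The last step described above reduces to thresholding the scene structure map and grouping the non-zero region into candidate objects. The synthetic map and the connected-component grouping below are illustrative assumptions, not NVIDIA's detector.

```python
# A minimal sketch: given a scene structure map of height/depth ratios (here
# synthetic), keep the non-zero region and group it into candidate object
# regions with connected-component labeling.
import numpy as np
from scipy import ndimage

# Synthetic scene structure map: zero on the road surface, positive where
# something rises above it.
structure_map = np.zeros((120, 200), dtype=np.float32)
structure_map[40:70, 90:130] = 0.3

above_surface = structure_map > 1e-3                   # non-zero values
labels, num_regions = ndimage.label(above_surface)
boxes = ndimage.find_objects(labels)                   # slices per region

for i, (rows, cols) in enumerate(boxes, start=1):
    print(f"candidate object {i}: rows {rows.start}-{rows.stop}, "
          f"cols {cols.start}-{cols.stop}")
```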
  • Patent number: 11828824
    Abstract: Image reconstruction systems and methods include providing sensitivity maps for coils of a magnetic resonance imaging (MRI) system to a neural network. The systems and methods also include providing interleaved k-space data to the neural network, wherein the interleaved k-space data includes partial k-space data interleaved with zeros, or synthesized k-space data, to provide an extended field of view (FOV) different from a FOV utilized during acquisition of the partial k-space data, wherein the partial k-space data were obtained during a scan of a region of interest with the MRI system. The systems and methods further include outputting, from the neural network, a final reconstructed MR image based at least on the sensitivity maps and the interleaved k-space data, wherein the final reconstructed MR image includes the FOV utilized during the acquisition of the partial k-space data.
    Type: Grant
    Filed: March 17, 2021
    Date of Patent: November 28, 2023
    Assignee: GE Precision Healthcare LLC
    Inventors: Xucheng Zhu, Graeme Colin McKinnon, Andrew James Coristine, Martin Andreas Janich
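The zero-interleaving of partial k-space described above can be sketched with a simple line-insertion routine; finer k-space spacing corresponds to an extended field of view. The array sizes and the interleave factor are assumptions.

```python
# A hedged sketch of interleaving acquired phase-encode lines with zeros so
# the k-space grid corresponds to an extended FOV before being handed to the
# reconstruction network.
import numpy as np

def interleave_kspace(partial_kspace, factor=2):
    """Insert (factor - 1) zero lines between acquired phase-encode lines."""
    n_pe, n_ro = partial_kspace.shape
    interleaved = np.zeros((n_pe * factor, n_ro), dtype=partial_kspace.dtype)
    interleaved[::factor] = partial_kspace
    return interleaved

partial = np.random.randn(128, 256) + 1j * np.random.randn(128, 256)
extended = interleave_kspace(partial, factor=2)
print(partial.shape, "->", extended.shape)   # (128, 256) -> (256, 256)
# `extended`, together with per-coil sensitivity maps, would be the network
# input; the network's output keeps the original acquisition FOV.
```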
  • Patent number: 11823386
    Abstract: A diagnosis is inferred by using at least one of a plurality of inferencers configured to infer a diagnosis from a medical image, with a medical image as the input to the at least one inferencer, and the inferred diagnosis is represented.
    Type: Grant
    Filed: July 12, 2022
    Date of Patent: November 21, 2023
    Assignee: CANON KABUSHIKI KAISHA
    Inventors: Naoki Matsuki, Masami Kawagishi, Kiyohide Satoh
  • Patent number: 11823029
    Abstract: A method of processing a neural network, includes generating an integral map for each channel in a first layer of the neural network based on calculating of area sums of pixel values in first output feature maps of channels in the first layer, generating an accumulated integral map by performing an accumulation operation on the integral maps generated for the respective channels, obtaining pre-output feature maps of a second layer, subsequent to the first layer, by performing a convolution operation between input feature maps of the second layer and weight kernels, and removing offsets in the weight kernels to obtain second output feature maps of the second layer by subtracting accumulated values of the accumulated integral map from pixel values of the pre-output feature maps.
    Type: Grant
    Filed: June 22, 2022
    Date of Patent: November 21, 2023
    Assignee: Samsung Electronics Co., Ltd.
    Inventor: Sangwon Ha
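The integral-map bookkeeping above can be sketched with summed-area tables built per channel and accumulated across channels. The shapes are assumptions, and the second layer's convolution and the offset subtraction are only indicated in comments.

```python
# A minimal sketch of the integral-map step: a summed-area table is built for
# each channel of the first layer's output, and the per-channel tables are
# accumulated into a single map.
import numpy as np

def integral_map(feature_map):
    """Summed-area table: entry (i, j) holds the sum over the region [0..i, 0..j]."""
    return feature_map.cumsum(axis=0).cumsum(axis=1)

# First-layer output feature maps, one per channel: shape (C, H, W).
first_layer_out = np.random.rand(8, 16, 16).astype(np.float32)

per_channel_integrals = np.stack([integral_map(fm) for fm in first_layer_out])
accumulated_integral = per_channel_integrals.sum(axis=0)    # accumulation over channels

print(accumulated_integral.shape)   # (16, 16)
# Values from `accumulated_integral` would later be subtracted from the second
# layer's pre-output feature maps to remove the weight-kernel offsets.
```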
  • Patent number: 11816889
    Abstract: Unsupervised learning for video classification. One or more features from one or more video clips are extracted using a spatial-temporal encoder. The one or more extracted features are processed, using a video instance discrimination task, to generate a classification label, the classification label indicating whether two of the video clips are from a same video. The one or more extracted features are processed, using a pair-wise speed discrimination task, to generate a comparison label, the comparison label indicating a relative playback speed between two given video clips. A search is performed in a video database for a video that is similar to a given video based on the comparison label.
    Type: Grant
    Filed: March 29, 2021
    Date of Patent: November 14, 2023
    Assignee: International Business Machines Corporation
    Inventors: Chuang Gan, Dakuo Wang, Antonio Jose Jimeno Yepes, Bo Wu
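The pair-wise speed discrimination task above needs training pairs of clips sampled at different playback speeds together with a comparison label. The sampling scheme below is an assumption, not IBM's pipeline.

```python
# A hedged sketch of producing a clip pair at different playback speeds and
# the corresponding comparison label.
import numpy as np

def sample_clip(frames, speed, clip_len=16, rng=None):
    """Take `clip_len` frames at stride `speed` from a random start point."""
    rng = rng or np.random.default_rng()
    max_start = len(frames) - clip_len * speed
    start = rng.integers(0, max_start)
    return frames[start:start + clip_len * speed:speed]

rng = np.random.default_rng(0)
video = np.arange(300)                       # stand-in for decoded frame indices
speed_a, speed_b = rng.choice([1, 2, 4, 8], size=2, replace=False)
clip_a = sample_clip(video, speed_a, rng=rng)
clip_b = sample_clip(video, speed_b, rng=rng)
comparison_label = int(speed_a > speed_b)    # 1 if clip A plays back faster
print(speed_a, speed_b, comparison_label)
```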
  • Patent number: 11804034
    Abstract: A computer-implemented method of training a machine learnable function, such as an image classifier or image feature extractor. When applying such machine learnable functions in autonomous driving and similar application areas, generalizability may be important. To improve generalizability, the machine learnable function is rewarded for responding predictably at a layer of the machine learnable function to a set of differences between input observations. This is done by means of a regularization objective included in the objective function used to train the machine learnable function. The regularization objective rewards a mutual statistical dependence between representations of input observations at the given layer, given a difference label indicating a difference between the input observations.
    Type: Grant
    Filed: April 16, 2021
    Date of Patent: October 31, 2023
    Assignee: ROBERT BOSCH GMBH
    Inventors: Thomas Andy Keller, Anna Khoreva, Max Welling
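The regularization objective above rewards statistical dependence between layer representations of paired observations. The sketch below uses a simple batch correlation as a stand-in for the dependence measure and ignores the difference-label conditioning; both simplifications are assumptions, not Bosch's exact objective.

```python
# A hedged sketch of adding a dependence-rewarding regularizer to a training
# objective, with per-dimension batch correlation as the dependence stand-in.
import torch

def dependence_reward(repr_a, repr_b, eps=1e-8):
    """Mean absolute per-dimension correlation between two (B, D) batches."""
    a = (repr_a - repr_a.mean(0)) / (repr_a.std(0) + eps)
    b = (repr_b - repr_b.mean(0)) / (repr_b.std(0) + eps)
    return (a * b).mean(0).abs().mean()

def total_loss(task_loss, repr_a, repr_b, lam=0.1):
    # Reward dependence between representations of paired observations by
    # subtracting the (scaled) dependence term from the task loss.
    return task_loss - lam * dependence_reward(repr_a, repr_b)

repr_a = torch.randn(32, 64, requires_grad=True)   # layer outputs for view A
repr_b = torch.randn(32, 64, requires_grad=True)   # layer outputs for view B
loss = total_loss(torch.tensor(1.0), repr_a, repr_b)
loss.backward()
print(float(loss))
```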