Patents Examined by Charlotte M. Baker
  • Patent number: 11455837
    Abstract: This application relates to an adaptive inference system and an operation method therefor. In one aspect, the system includes a user terminal for collecting multi-modal information including at least visual information, voice information and text information. The system may also include an inference support device for receiving the multi-modal information from the user terminal, and inferring the intention of a user on the basis of pre-stored history information related to the user terminal, individualized information and the multi-modal information.
    Type: Grant
    Filed: April 21, 2022
    Date of Patent: September 27, 2022
    Assignee: Korea Electronics Technology Institute
    Inventors: Min Young Jung, Sa Im Shin, Jin Yea Jang, San Kim
  • Patent number: 11443533
    Abstract: An information processing apparatus is provided, comprising: a determination unit configured to determine whether a vehicle is located inside a lower-accuracy area, in which the estimation accuracy of an emotion estimation process is reduced, the emotion estimation process being a process for estimating an emotion of an occupant based on an image of the occupant captured by an image-capturing unit provided in the vehicle; and an emotion estimating unit configured to estimate the emotion of the occupant by performing different emotion estimation processes depending on whether the vehicle is located inside the lower-accuracy area or not.
    Type: Grant
    Filed: December 22, 2019
    Date of Patent: September 13, 2022
    Assignee: HONDA MOTOR CO., LTD.
    Inventors: Yoshikazu Matsuo, Hisumi Esaki, Yusuke Oi
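The area-dependent switch between estimation processes described above can be sketched as follows. This is a minimal illustration, not Honda's implementation; the score inputs, thresholds, and the audio fallback are hypothetical.

```python
def estimate_emotion(in_low_accuracy_area, image_score, audio_score):
    # Use a different estimation process depending on whether the vehicle
    # is inside the lower-accuracy area: rely on the image-based score
    # outside it, and fall back to a (hypothetical) audio cue inside it,
    # where image-based estimation is known to be unreliable.
    if in_low_accuracy_area:
        return "happy" if audio_score > 0.5 else "neutral"
    return "happy" if image_score > 0.5 else "neutral"

# Same sensor readings, different area -> different process, different result.
outside = estimate_emotion(False, image_score=0.8, audio_score=0.1)
inside = estimate_emotion(True, image_score=0.8, audio_score=0.1)
```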
  • Patent number: 11442804
    Abstract: Systems and methods are disclosed for detecting anomalies in text content of data objects even when a format of the data and/or data object is unknown. These may include receiving a first data object that corresponds to a first application service and that includes first text content. An anomaly classifier may be trained based on an artificial neural network by using a natural language processing algorithm on respective text content of at least a portion of each of a plurality of data objects corresponding to the first application service. Each of the plurality of data objects may be labeled as belonging to a category. The trained anomaly classifier may identify one or more text character sequences in the first text content of the first data object as anomalous and output identifying information indicating the one or more anomalous text character sequences in the first text content of the first data object.
    Type: Grant
    Filed: December 27, 2019
    Date of Patent: September 13, 2022
    Assignee: PAYPAL, INC.
    Inventor: Dmitry Martyanov
  • Patent number: 11436704
    Abstract: In order to more accurately white balance an image, weightings can be determined for pixels of an image when computing an illuminant color value of the image and/or a scene. The weightings can be based at least in part on the Signal-to-Noise Ratio (SNR) of the pixels. The SNR may be actual SNR or SNR estimated from brightness levels of the pixels. SNR weighting (e.g., SNR adjustment) may reduce the effect of pixels with high noise on the computed illuminant color value. For example, one or more channel values of the illuminant color value can be determined based on the weightings and color values of the pixels. One or more color gain values can be determined based on the one or more channel values of the illuminant color value and used to white balance the image.
    Type: Grant
    Filed: January 14, 2020
    Date of Patent: September 6, 2022
    Assignee: NVIDIA Corporation
    Inventors: Hamidreza Mirzaei Domabi, Eric Dujardin, Animesh Khemka, Yining Deng
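The SNR-weighted illuminant estimation described above can be sketched as a weighted gray-world computation. This is a minimal illustration, not NVIDIA's implementation; the pixel values, SNR figures, and function names are hypothetical.

```python
import numpy as np

def snr_weighted_illuminant(pixels, snr):
    # Weight each pixel's contribution by its normalized SNR, so pixels
    # with high noise have less influence on the estimated illuminant.
    w = snr / snr.sum()
    return (pixels * w[:, None]).sum(axis=0)

def white_balance_gains(illuminant):
    # Per-channel gains that map the estimated illuminant to neutral gray.
    return illuminant.mean() / illuminant

pixels = np.array([[0.8, 0.5, 0.4],    # RGB values of sampled pixels
                   [0.7, 0.6, 0.5],
                   [0.2, 0.1, 0.1]])   # dark pixel, assumed noisy
snr = np.array([10.0, 8.0, 1.0])       # the noisy pixel gets a small weight
illuminant = snr_weighted_illuminant(pixels, snr)
gains = white_balance_gains(illuminant)
balanced = pixels * gains              # white-balanced pixel values
```

Applying the gains to the illuminant itself yields equal channel values, i.e. the estimated light source is mapped to neutral.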
  • Patent number: 11436825
    Abstract: Provided are methods and apparatuses for determining a target object in an image based on an interactive input. A target object determining method acquires first feature information corresponding to an image and second feature information corresponding to an interactive input; and determines a target object corresponding to the interactive input from among objects in the image based on the first feature information and the second feature information.
    Type: Grant
    Filed: November 12, 2019
    Date of Patent: September 6, 2022
    Assignee: Samsung Electronics Co., Ltd.
    Inventors: Hyong Euk Lee, Qiang Wang, Chao Zhang
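Matching the two feature sets described above can be sketched as a similarity search: score each object's feature vector against the interactive input's feature vector and keep the best match. The 2-D features and the dot-product scoring are hypothetical stand-ins for the learned features in the patent.

```python
def dot(u, v):
    # Inner product of two feature vectors.
    return sum(a * b for a, b in zip(u, v))

def pick_target(object_features, query_feature):
    # Score every detected object's feature vector (first feature
    # information) against the interactive input's feature vector
    # (second feature information) and return the best-matching index.
    scores = [dot(f, query_feature) for f in object_features]
    return scores.index(max(scores))

# Hypothetical features for two detected objects and one query.
objects = [[0.9, 0.1], [0.2, 0.8]]
query = [1.0, 0.0]
target_index = pick_target(objects, query)
```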
  • Patent number: 11436714
    Abstract: Embodiments of the innovation relate to an emotional quality estimation device comprising a controller having a memory and a processor, the controller configured to execute a training engine with labelled training data to train a neural network and generate a classroom analysis machine, the labelled training data including historical video data and an associated classroom quality score table; receive a classroom observation video from a classroom environment; execute the classroom analysis machine relative to the classroom observation video from the classroom environment to generate an emotional quality score relating to the emotional quality of the classroom environment; and output the emotional quality score for the classroom environment.
    Type: Grant
    Filed: August 21, 2020
    Date of Patent: September 6, 2022
    Assignees: Worcester Polytechnic Institute, University of Virginia Patent Foundation
    Inventors: Jacob Whitehill, Anand Ramakrishnan, Erin Ottmar, Jennifer LoCasale-Crouch
  • Patent number: 11429819
    Abstract: A packer classification apparatus extracts features based on a section that holds packer information from files and classifies packers using a Deep Neural Network (DNN) for detection of new/variant packers. A packer classification apparatus according to an embodiment uses PE section information. The packer classification apparatus includes a collection classification module that collects a data set and classifies the data by packer type to prepare for model learning; a token hash module that tokenizes a character string obtained after extracting labels and section names from each data item, combines the section names, and obtains an output value of a fixed size using feature hashing; and a type classification module that generates a learning model after training on the data set with a Deep Neural Network (DNN) algorithm using the extracted features, and classifies files by packer type using the learning model after extracting features from the files to be classified.
    Type: Grant
    Filed: May 29, 2020
    Date of Patent: August 30, 2022
    Assignee: HOSEO UNIVERSITY ACADEMIC COOPERATION FOUNDATION
    Inventors: Tae Jin Lee, Young Joo Lee
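The token-hash step can be sketched with the standard hashing trick: variable-length lists of PE section-name tokens are mapped onto a fixed-size count vector. The hash function (MD5), dimension (16), and section names are assumptions for illustration, not the patent's choices.

```python
import hashlib

def hash_features(section_names, dim=16):
    # Feature hashing: bucket each section-name token by a hash of its
    # bytes, so files with different numbers of sections still yield
    # feature vectors of the same fixed size for the DNN.
    vec = [0] * dim
    for name in section_names:
        bucket = int(hashlib.md5(name.encode()).hexdigest(), 16) % dim
        vec[bucket] += 1
    return vec

v_packed = hash_features([".text", ".rsrc", "UPX0", "UPX1"])  # UPX-style names
v_plain = hash_features([".text", ".data", ".rsrc"])
```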
  • Patent number: 11427210
    Abstract: Systems and methods for predicting the trajectory of an object are disclosed herein. One embodiment receives sensor data that includes a location of the object in an environment of the object; accesses a location-specific latent map, the location-specific latent map having been learned together with a neural-network-based trajectory predictor during a training phase, wherein the neural-network-based trajectory predictor is deployed in a robot; inputs, to the neural-network-based trajectory predictor, the location of the object and the location-specific latent map, the location-specific latent map providing, to the neural-network-based trajectory predictor, a set of location-specific biases regarding the environment of the object; and outputs, from the neural-network-based trajectory predictor, a predicted trajectory of the object.
    Type: Grant
    Filed: March 31, 2020
    Date of Patent: August 30, 2022
    Assignees: Toyota Research Institute, Inc., Massachusetts Institute of Technology
    Inventors: Guy Rosman, Igor Gilitschenski, Arjun Gupta, Sertac Karaman, Daniela Rus
  • Patent number: 11429807
    Abstract: Methods and systems for automatically generating training data for use in machine learning are disclosed. The methods can involve the use of environmental data derived from first and second environmental sensors for a single event. The environmental data types derived from each environmental sensor are different. The event is detected based on first environmental data derived from the first environmental sensor, and a portion of second environmental data derived from the second environmental sensor is selected to generate training data for the detected event. The resulting training data can be employed to train machine learning models.
    Type: Grant
    Filed: January 12, 2018
    Date of Patent: August 30, 2022
    Assignee: Microsoft Technology Licensing, LLC
    Inventor: Vivek Pradeep
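The selection step described above can be sketched as a time-window slice: an event detected in the first sensor's data picks out the co-occurring portion of the second sensor's data as training material. The sensor modalities, sample rate, and window width are hypothetical.

```python
def select_training_window(event_time, samples, half_width=1.0):
    # Keep the portion of the second sensor's time-stamped samples that
    # falls within +/- half_width seconds of the event detected in the
    # first sensor's stream; this slice becomes labeled training data.
    return [(t, v) for t, v in samples if abs(t - event_time) <= half_width]

event_t = 5.0                                          # detected in sensor A
frames = [(i * 0.5, f"frame{i}") for i in range(20)]   # sensor B at 2 Hz
window = select_training_window(event_t, frames)
```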
  • Patent number: 11429069
    Abstract: A machine learning system that includes three machine learning models implemented in a hardware processor, a first-level feature creation module, and a combination module provides an output based on one or more channel inputs. Each of the three machine learning models receives the channel inputs and additional feature inputs based on the channel inputs to produce the output. The first-level feature creation module is implemented in hardware and receives the channel inputs, performs a feature creation operation, creates the additional feature inputs, and provides the additional feature inputs to at least one of the machine learning models. The first-level feature creation operation performs a calculation on one or more aspects of the channel inputs, and the combination module receives the one or more machine learning model outputs and produce a machine learning channel output.
    Type: Grant
    Filed: December 16, 2019
    Date of Patent: August 30, 2022
    Assignee: Hamilton Sundstrand Corporation
    Inventors: Kirk A. Lillestolen, Kanwalpreet Reen, Richard A. Poisson, Joshua Robert Dunning
  • Patent number: 11423593
    Abstract: Methods and systems for reconstructing an image. For example, a method includes: receiving k-space data; receiving a transform operator corresponding to the k-space data; determining a distribution representing information associated with one or more previous iteration images; generating a next iteration image by an image reconstruction model to reduce an objective function, the objective function corresponding to a data consistency metric and a regularization metric; evaluating whether the next iteration image is satisfactory; and if the next iteration image is satisfactory, outputting the next iteration image as an output image. In certain examples, the data consistency metric corresponds to a first previous iteration image, the k-space data, and the transform operator. In certain examples, the regularization metric corresponds to the distribution. In certain examples, the computer-implemented method is performed by one or more processors.
    Type: Grant
    Filed: December 19, 2019
    Date of Patent: August 23, 2022
    Assignee: Shanghai United Imaging Intelligence Co., Ltd.
    Inventors: Zhang Chen, Shanhui Sun, Terrence Chen
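The iterative objective described above, a data-consistency term plus a regularization term, can be sketched with plain gradient descent. The patent's regularization metric comes from a learned distribution over previous iteration images; a simple Tikhonov penalty is substituted here, and the operator, data, and step-size rule are all illustrative assumptions.

```python
import numpy as np

def reconstruct(y, A, lam=0.01, iters=1000):
    # Gradient descent on the objective
    #   ||A x - y||^2   (data consistency)  +  lam * ||x||^2  (regularization).
    # Step size 1/L, with L the Lipschitz constant of the gradient,
    # guarantees convergence for this quadratic objective.
    L = 2 * np.linalg.eigvalsh(A.T @ A).max() + 2 * lam
    x = np.zeros(A.shape[1])
    for _ in range(iters):
        grad = 2 * A.T @ (A @ x - y) + 2 * lam * x
        x -= grad / L
    return x

rng = np.random.default_rng(0)
A = rng.standard_normal((8, 4))    # stand-in for the k-space transform operator
x_true = np.array([1.0, -2.0, 0.5, 3.0])
y = A @ x_true                     # simulated measured k-space data
x_hat = reconstruct(y, A)          # output image once deemed satisfactory
```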
  • Patent number: 11417117
    Abstract: A method of detecting lanes includes the steps: capturing (S1) a camera image (K) of a vehicle environment by a camera device (2) of a vehicle (5); determining (S2) feature points (P1 to P15) in the camera image (K), which feature points correspond to regions of possible lane boundaries (M1, M2); generating (S3) image portions of the captured camera image (K) respectively around the feature points (P1 to P15); analyzing (S4) the image portions using a neural network to classify the feature points (P1 to P15); and determining (S5) lanes in the vehicle environment taking account of the classified feature points (P1 to P15).
    Type: Grant
    Filed: August 31, 2018
    Date of Patent: August 16, 2022
    Assignee: CONTINENTAL TEVES AG & CO. OHG
    Inventors: Christopher Bayer, Claudio Heller, Claudia Loy, Alexey Abramov
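Step S3 above, generating image portions around the feature points, can be sketched as square crops centered on each point; each crop would then go to the neural network for classification (S4). The image contents and patch size are hypothetical.

```python
def extract_patches(image, points, half=1):
    # Crop a square image portion around each feature point, clipping at
    # the image border; these portions are what the classifier analyzes.
    patches = []
    for r, c in points:
        patch = [row[max(0, c - half):c + half + 1]
                 for row in image[max(0, r - half):r + half + 1]]
        patches.append(patch)
    return patches

# 5x5 synthetic grayscale image with value 10*row + col at each pixel.
image = [[10 * r + c for c in range(5)] for r in range(5)]
patches = extract_patches(image, [(2, 2), (4, 4)])
```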
  • Patent number: 11417085
    Abstract: A method includes accessing a web-based property over a network; storing a plurality of images or videos from the web-based property and associations between the plurality of images or videos and a target audience identifier responsive to the web-based property having a stored association with the target audience identifier; retrieving the plurality of images or videos from the database responsive to each of the plurality of images or videos having stored associations with the target audience identifier; executing a neural network to generate a performance score for each of the plurality of images or videos; calculating a target audience benchmark; executing the neural network to generate a first performance score for a first image or video and a second performance score for a second image or video; comparing the first performance score and the second performance score to the benchmark; and generating a record identifying the first image or video.
    Type: Grant
    Filed: December 10, 2021
    Date of Patent: August 16, 2022
    Assignee: Vizit Labs, Inc.
    Inventors: Elham Saraee, Zachary Halloran, Jehan Hamedi
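The benchmark comparison described above can be sketched as follows: the target-audience benchmark is taken here as the mean of the stored images' performance scores, and candidates beating it are identified. The mean-based benchmark and all scores are illustrative assumptions, not Vizit Labs' method.

```python
def audience_benchmark(history_scores):
    # Target-audience benchmark, assumed here to be the mean performance
    # score of images/videos already associated with the audience identifier.
    return sum(history_scores) / len(history_scores)

def above_benchmark(candidates, benchmark):
    # Keep (id, score) candidates whose neural-network performance score
    # beats the benchmark; these would be identified in the output record.
    return [cid for cid, score in candidates if score > benchmark]

benchmark = audience_benchmark([0.4, 0.6, 0.5])
winners = above_benchmark([("img_a", 0.7), ("img_b", 0.3)], benchmark)
```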
  • Patent number: 11410426
    Abstract: In non-limiting examples of the present disclosure, systems, methods and devices for generating summary content are presented. Voice audio data and video data for an electronic meeting may be received. A language processing model may be applied to a transcript of the audio data and textual importance scores may be calculated. A video/image model may be applied to the video data and visual importance scores may be calculated. A combined importance score may be calculated for sections of the electronic meeting based on the textual importance scores and the visual importance scores. A meeting summary that includes summary content from sections for which combined importance scores exceed a threshold value may be generated.
    Type: Grant
    Filed: June 4, 2020
    Date of Patent: August 9, 2022
    Assignee: MICROSOFT TECHNOLOGY LICENSING, LLC
    Inventors: Erez Kikin-Gil, Daniel Yancy Parish
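The score combination and thresholding described above can be sketched as a weighted sum per meeting section. The weights, threshold, and section scores are hypothetical; the patent does not specify how the two scores are combined.

```python
def summarize(sections, text_scores, visual_scores,
              w_text=0.6, w_vis=0.4, threshold=0.5):
    # Combine per-section textual and visual importance scores into one
    # combined score, and keep the sections that exceed the threshold.
    combined = [w_text * t + w_vis * v
                for t, v in zip(text_scores, visual_scores)]
    return [s for s, c in zip(sections, combined) if c > threshold]

sections = ["intro", "budget review", "chit-chat", "action items"]
summary = summarize(sections,
                    text_scores=[0.2, 0.9, 0.1, 0.8],
                    visual_scores=[0.3, 0.7, 0.2, 0.9])
```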
  • Patent number: 11410320
    Abstract: The present disclosure discloses an image processing method, apparatus, and a non-transitory computer readable medium. The method can include: acquiring a three-dimensional (3D) model and original texture images of an object, wherein the original texture images are acquired by an imaging device; determining a mapping relationship between the 3D model and the original texture images of the object; determining, among the original texture images, a subset of texture images associated with a first perspective of the imaging device; splicing the subset of texture images into a spliced texture image corresponding to the first perspective; and mapping the spliced texture image to the 3D model according to the mapping relationship.
    Type: Grant
    Filed: September 22, 2020
    Date of Patent: August 9, 2022
    Assignee: Alibaba Group Holding Limited
    Inventors: Bin Wang, Jingming Yu, Xiaoduan Feng, Pan Pan, Rong Jin
  • Patent number: 11403499
    Abstract: Systems and methods for generating composite sets of data based on sensor data from different sensors are disclosed. Exemplary implementations may capture a color image including chromatic information; capture a depth image; generate inertial signals conveying values that are used to determine motion parameters; determine the motion parameters based on the inertial signals; generate a re-projected depth image as if the depth image had been captured at the same time as the color image, based on the interpolation of motion parameters; and generate a composite set of data based on different kinds of sensor data by combining information from the color image, the re-projected depth image, and one or more motion parameters.
    Type: Grant
    Filed: November 11, 2020
    Date of Patent: August 2, 2022
    Assignee: FACEBOOK TECHNOLOGIES, LLC
    Inventor: Georgios Evangelidis
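The motion-parameter interpolation underlying the re-projected depth image can be sketched in one dimension: interpolate the inertially derived pose between two samples to the color image's capture time, then render the depth image as if captured from that pose. The timestamps and positions below are hypothetical, and real re-projection would use full 6-DoF poses.

```python
def interpolate_motion(p0, t0, p1, t1, t):
    # Linearly interpolate one motion parameter (e.g. translation along
    # one axis) between inertial samples at t0 and t1, estimating the
    # device pose at the color image's capture time t.
    alpha = (t - t0) / (t1 - t0)
    return p0 + alpha * (p1 - p0)

# Depth image captured at t=0.0 s, color image at t=0.25 s; IMU-derived
# positions 0.0 m and 2.0 m at t=0.0 and t=1.0 (all values hypothetical).
pose_at_color_time = interpolate_motion(0.0, 0.0, 2.0, 1.0, 0.25)
```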
  • Patent number: 11398240
    Abstract: Systems and methods are presented for cross-fading (or other multiple clip processing) of information streams on a user or client device, such as a telephone, tablet, computer or MP3 player, or any consumer device with audio playback. Multiple clip processing can be accomplished at a client end according to directions sent from a service provider that specify a combination of (i) the clips involved; (ii) the device on which the cross-fade or other processing is to occur and its parameters; and (iii) the service provider system. For example, a consumer device with only one decoder can utilize that decoder (typically hardware) to decompress one or more elements that are involved in a cross-fade at faster than real time, thus pre-fetching the next element(s) to be played in the cross-fade at the end of the currently being played element.
    Type: Grant
    Filed: June 9, 2020
    Date of Patent: July 26, 2022
    Assignee: Sirius XM Radio Inc.
    Inventors: Raymond Lowe, Christopher Ward, Charles W. Christine
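Once both elements are decoded (the pre-fetching described above), the cross-fade itself is a sample-wise mix over an overlap region. This sketch uses complementary linear gain ramps on toy constant-amplitude buffers; real clients would operate on decoded PCM frames and may use other fade curves.

```python
def crossfade(a, b, overlap):
    # Mix the tail of clip a into the head of clip b over `overlap`
    # samples using complementary linear gain ramps. Both clips are
    # assumed already decoded (possibly faster than real time).
    out = list(a[:-overlap])
    for i in range(overlap):
        g = i / (overlap - 1) if overlap > 1 else 1.0
        out.append(a[len(a) - overlap + i] * (1.0 - g) + b[i] * g)
    out.extend(b[overlap:])
    return out

clip_a = [1.0] * 6   # constant-amplitude stand-ins for decoded audio
clip_b = [0.0] * 6
mixed = crossfade(clip_a, clip_b, overlap=4)
```

The output is shorter than the two clips back-to-back because the overlap region is played once, shared by both clips.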
  • Patent number: 11392821
    Abstract: An apparatus includes a processing device configured to obtain time series diagnostic data associated with assets in an information technology (IT) infrastructure. The processing device is also configured to generate first modality information comprising behavior labels assigned to each of a plurality of time periods, a given behavior label for a given time period being based at least in part on measured feature values for the features collectively in the given time period. The processing device is further configured to generate second modality information comprising feature deltas characterizing differences between measured feature values for interdependent feature pairs.
    Type: Grant
    Filed: May 19, 2020
    Date of Patent: July 19, 2022
    Assignee: Dell Products L.P.
    Inventor: Mohammad Rafey
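The second modality described above, feature deltas for interdependent feature pairs, can be sketched as a per-period difference. The feature names, values, and the plain subtraction are hypothetical stand-ins for however the patent characterizes the differences.

```python
def feature_deltas(series, pairs):
    # For each interdependent feature pair (f1, f2), compute the
    # difference between their measured values in each time period.
    return {(f1, f2): [a - b for a, b in zip(series[f1], series[f2])]
            for f1, f2 in pairs}

# Hypothetical per-period measurements for two interdependent features.
series = {"cpu": [0.9, 0.7, 0.8], "io_wait": [0.1, 0.4, 0.2]}
deltas = feature_deltas(series, [("cpu", "io_wait")])
```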
  • Patent number: 11386592
    Abstract: Example methods and systems for tomographic data analysis are provided. One example method may comprise: obtaining first three-dimensional (3D) feature volume data and processing the first 3D feature volume data using an AI engine that includes multiple first processing layers, an interposing forward-projection module and multiple second processing layers. Example processing using the AI engine may involve: generating second 3D feature volume data by processing the first 3D feature volume data using the multiple first processing layers, transforming the second 3D feature volume data into 2D feature data using the forward-projection module and generating analysis output data by processing the 2D feature data using the multiple second processing layers.
    Type: Grant
    Filed: December 20, 2019
    Date of Patent: July 12, 2022
    Inventors: Pascal Paysan, Benjamin M Haas, Janne Nord, Sami Petri Perttu, Dieter Seghers, Joakim Pyyry
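The interposed forward-projection step, reducing a 3D feature volume to 2D feature data, can be sketched as a line-integral-style sum along one axis. A real forward projector integrates along ray paths through the volume; axis-aligned summation is a simplifying assumption here.

```python
import numpy as np

def forward_project(volume, axis=0):
    # Collapse a 3D feature volume to 2D feature data by integrating
    # (summing) along one axis, standing in for the forward-projection
    # module interposed between the two stacks of processing layers.
    return volume.sum(axis=axis)

volume = np.arange(24, dtype=float).reshape(2, 3, 4)  # 3D feature volume
feature_2d = forward_project(volume)                  # 2D feature data
```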
  • Patent number: 11386307
    Abstract: A machine vision system comprising receiving means configured to receive image data indicative of an object to be classified, and processing means provided with an initial neural network. The processing means is configured to determine a differential equation describing the initial neural network algorithm based on the neural network parameters; to determine a solution to the differential equation in the form of a series expansion; to convert the series expansion to a finite series expansion by limiting the number of terms in the series expansion to a finite number; and to determine the output classification in dependence on the finite series expansion.
    Type: Grant
    Filed: September 25, 2018
    Date of Patent: July 12, 2022
    Assignee: NISSAN MOTOR CO., LTD.
    Inventors: Andrew Batchelor, Garry Jones, Yoshinori Sato
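The truncation step above, converting a series expansion to a finite one by limiting the number of terms, can be illustrated with a familiar series. This uses the Taylor series of exp(x) purely as an example; the patent's series arises from the differential equation describing the network, not from exp.

```python
import math

def truncated_exp(x, n_terms):
    # Approximate exp(x) by its Taylor series truncated to n_terms,
    # showing how an infinite series solution becomes a finite series
    # expansion by limiting the number of terms.
    return sum(x ** k / math.factorial(k) for k in range(n_terms))

approx = truncated_exp(0.5, 6)
error = abs(approx - math.exp(0.5))   # truncation error from dropped terms
```

The truncation error is bounded by the first dropped term (here 0.5**6/6! ≈ 2.2e-5), which is the trade-off behind limiting the expansion.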