Patents Examined by Miranda M Huang
-
Patent number: 12141513
Abstract: A method for improving performance of a predefined Deep Neural Network (DNN) convolution processing on a computing device includes inputting parameters, as input data into a processor on a computer that formalizes a design space exploration of a convolution mapping, on a predefined computer architecture that will execute the predefined convolution processing. The parameters are predefined as guided by a specification for the predefined convolution processing to be implemented by the convolution mapping and by a microarchitectural specification for the processor that will execute the predefined convolution processing. The processor calculates performance metrics for executing the predefined convolution processing on the computing device, as functions of the predefined parameters, as proxy estimates of performance of different possible design choices to implement the predefined convolution processing.
Type: Grant
Filed: October 31, 2018
Date of Patent: November 12, 2024
Assignee: International Business Machines Corporation
Inventors: Chia-Yu Chen, Jungwook Choi, Kailash Gopalakrishnan, Vijayalakshmi Srinivasan, Swagath Venkataramani, Jintao Zhang
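A minimal sketch of the general idea (not the patented method): treat loop-tiling factors of a convolution as the design parameters, and compute analytic proxy metrics (estimated cycles and on-chip buffer use) from the layer specification and a few microarchitectural numbers to rank candidate mappings. All dictionary keys, tile choices, and formulas here are illustrative assumptions.

```python
# Illustrative design-space sweep: rank candidate tilings of a convolution layer
# by analytic proxy metrics derived from the layer spec and hardware spec.
from itertools import product

def proxy_metrics(tile_oc, tile_ic, layer, hw):
    """Estimate cycle count and buffer need for one tiling choice."""
    oc, ic, k, oh, ow = layer["oc"], layer["ic"], layer["k"], layer["oh"], layer["ow"]
    macs = oc * ic * k * k * oh * ow                      # total multiply-accumulates
    # Weights and partial sums that must stay resident for this tile.
    buffer_bytes = (tile_oc * tile_ic * k * k + tile_oc * oh * ow) * hw["bytes_per_elem"]
    # Crude utilization proxy: how well the tile fills the MAC array.
    util = min(1.0, (tile_oc * tile_ic) / hw["mac_array"])
    cycles = macs / (hw["mac_array"] * util)
    return cycles, buffer_bytes

def explore(layer, hw, tile_choices=(4, 8, 16, 32, 64)):
    best = None
    for t_oc, t_ic in product(tile_choices, repeat=2):
        cycles, buf = proxy_metrics(t_oc, t_ic, layer, hw)
        if buf <= hw["sram_bytes"] and (best is None or cycles < best[0]):
            best = (cycles, buf, {"tile_oc": t_oc, "tile_ic": t_ic})
    return best

layer = {"oc": 128, "ic": 64, "k": 3, "oh": 56, "ow": 56}
hw = {"mac_array": 256, "sram_bytes": 512 * 1024, "bytes_per_elem": 2}
print(explore(layer, hw))
```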
-
Patent number: 12131230
Abstract: A method includes, as part of establishing a feature merging threshold (?) for determining equivalence between two features, selecting a set of candidate ? values, partitioning training data into a plurality of groups, establishing a model W? for each ? value of the set of candidate ? values, iteratively performing: selecting a next group of training data of the plurality of groups of training data; adding the selected next group of training data to a training set; and for each ? value in the set of candidate ? values: training the W? for the ? value using the training set, and evaluating a size of W?, the size comprising a number of features included in the model, and choosing the feature merging threshold ? based on the iteratively performing.
Type: Grant
Filed: August 4, 2020
Date of Patent: October 29, 2024
Assignee: Assured Information Security, Inc.
Inventors: Daniel Scofield, Craig Miles
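A hedged sketch of the selection loop, assuming the threshold is a numeric distance below which two features are merged and the "size" of a candidate model is simply the number of features left after merging; the merge rule, stopping criterion, and helper names are illustrative, not the patent's definitions.

```python
# Sketch: add training data group by group, track model size per candidate threshold,
# and pick the threshold whose model size has stabilized.
import numpy as np

def merged_feature_count(features, threshold):
    """Greedy merge: keep a feature only if it is at least `threshold` from every kept feature."""
    kept = []
    for f in features:
        if all(np.linalg.norm(f - k) >= threshold for k in kept):
            kept.append(f)
    return len(kept)

def choose_threshold(groups, candidates):
    sizes = {t: [] for t in candidates}
    training_set = []
    for group in groups:                          # add one group of training data per iteration
        training_set.extend(group)
        feats = np.asarray(training_set)
        for t in candidates:
            sizes[t].append(merged_feature_count(feats, t))
    # Illustrative choice: the threshold whose feature count grew the least on the last iteration.
    return min(candidates, key=lambda t: sizes[t][-1] - sizes[t][-2])

rng = np.random.default_rng(0)
groups = [rng.normal(size=(50, 8)) for _ in range(4)]
print(choose_threshold(groups, candidates=[0.5, 1.0, 2.0, 4.0]))
```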
-
Patent number: 12130841
Abstract: A single unified machine learning model (e.g., a neural network) is trained to perform both supervised event predictions and unsupervised time-varying clustering for a sequence of events (e.g., a sequence representing a user behavior) using sequences of events for multiple users and a combined loss function. The unified model can then be used to, given a sequence of events as input, predict the next event to occur after the last event in the sequence and generate a clustering result by performing a clustering operation on the sequence of events. As part of predicting the next event, the unified model is trained to predict an event type for the next event and a time of occurrence for the next event. In certain embodiments, the unified model is a neural network comprising a recurrent neural network (RNN) such as a Long Short-Term Memory (LSTM) network.
Type: Grant
Filed: July 20, 2020
Date of Patent: October 29, 2024
Assignee: Adobe Inc.
Inventors: Karan Aggarwal, Georgios Theocharous, Anup Rao
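A hedged PyTorch sketch of a single model trained with a combined loss: an LSTM backbone with an event-type head, a time-of-occurrence head, and a soft clustering assignment over the sequence embedding. The heads, loss terms, and weights are illustrative choices, not the loss defined in the patent.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class UnifiedEventModel(nn.Module):
    def __init__(self, n_event_types, hidden=64, n_clusters=5):
        super().__init__()
        self.embed = nn.Embedding(n_event_types, hidden)
        self.lstm = nn.LSTM(hidden + 1, hidden, batch_first=True)   # +1 for inter-event time
        self.type_head = nn.Linear(hidden, n_event_types)
        self.time_head = nn.Linear(hidden, 1)
        self.cluster_centers = nn.Parameter(torch.randn(n_clusters, hidden))

    def forward(self, event_types, deltas):
        x = torch.cat([self.embed(event_types), deltas.unsqueeze(-1)], dim=-1)
        out, _ = self.lstm(x)
        h = out[:, -1]                                            # summary of the sequence so far
        cluster_logits = -torch.cdist(h, self.cluster_centers)    # closer center -> higher score
        return self.type_head(h), self.time_head(h).squeeze(-1), cluster_logits

def combined_loss(model, event_types, deltas, next_type, next_delta, alpha=1.0, beta=0.1):
    type_logits, pred_delta, cluster_logits = model(event_types, deltas)
    supervised = F.cross_entropy(type_logits, next_type) + alpha * F.mse_loss(pred_delta, next_delta)
    assign = F.softmax(cluster_logits, dim=-1)
    unsupervised = -(assign * F.log_softmax(cluster_logits, dim=-1)).sum(-1).mean()  # confidence of cluster assignment
    return supervised + beta * unsupervised

model = UnifiedEventModel(n_event_types=20)
events, deltas = torch.randint(0, 20, (8, 15)), torch.rand(8, 15)
loss = combined_loss(model, events, deltas, torch.randint(0, 20, (8,)), torch.rand(8))
print(float(loss))
```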
-
Patent number: 12124941
Abstract: Examples to determine a dynamic batch size of a layer are disclosed herein. An example apparatus to determine a dynamic batch size of a layer includes a layer operations controller to determine a layer ratio between a number of operations of a layer and weights of the layer, a comparator to compare the layer ratio to a number of operations per unit of memory size performed by a computation engine, and a batch size determination controller to, when the layer ratio is less than the number of operations per unit of memory size, determine the dynamic batch size of the layer.
Type: Grant
Filed: March 27, 2020
Date of Patent: October 22, 2024
Assignee: Intel Corporation
Inventors: Eric Luk, Mohamed Elmalaki, Sara Almalih, Cormac Brick
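A direct, hedged reading of the batching rule with illustrative numbers: compare a layer's ops-to-weight-bytes ratio against the engine's operations per byte of memory traffic and, when the layer is weight-bound, batch enough inputs to amortize each weight load. The specific batch formula and cap are assumptions.

```python
import math

def dynamic_batch_size(layer_ops, layer_weight_bytes, engine_ops_per_byte, max_batch=64):
    layer_ratio = layer_ops / layer_weight_bytes            # ops performed per byte of weights
    if layer_ratio >= engine_ops_per_byte:
        return 1                                            # compute-bound: no batching needed
    # Weight-bound: batch until the combined ratio reaches the engine's balance point.
    return min(max_batch, math.ceil(engine_ops_per_byte / layer_ratio))

# Fully connected layer example: few ops per weight byte, so it gets a larger batch.
print(dynamic_batch_size(layer_ops=2 * 4096 * 4096,
                         layer_weight_bytes=4096 * 4096,
                         engine_ops_per_byte=16))
```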
-
Patent number: 12117914
Abstract: Static parameters of a software container are identified that relate to metadata of the software container itself. The software container is assigned to a selected runtime environment based on the static parameters using a first machine learning model. Runtime parameters for the software container are identified by analyzing the software container at runtime. The runtime parameters relate to operations that the software container requires during runtime. Using a second machine learning model, it is determined whether the selected runtime environment matches the runtime parameters. Where the runtime environment matches, the software container continues to run in this environment. Where the runtime environment does not match, the software container is run in a different runtime environment that matches both the static and runtime parameters.
Type: Grant
Filed: July 22, 2020
Date of Patent: October 15, 2024
Assignee: International Business Machines Corporation
Inventors: Nadiya Kochura, Tiberiu Suto, Erik Rueger, Nicolò Sgobba
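An illustrative two-stage flow (not the assignee's implementation): a first classifier assigns an environment from static image metadata, and a second classifier judges whether that environment also satisfies the parameters observed at runtime. The features, labels, and synthetic data are placeholders for demonstration only.

```python
from sklearn.tree import DecisionTreeClassifier
import numpy as np

rng = np.random.default_rng(0)

# Stage 1: static metadata (e.g., image size, declared ports, base-image family) -> environment id.
static_X, static_y = rng.normal(size=(200, 3)), rng.integers(0, 3, 200)
assigner = DecisionTreeClassifier().fit(static_X, static_y)

# Stage 2: [environment id + runtime parameters] -> does this environment still fit?
runtime_X = np.hstack([rng.integers(0, 3, (200, 1)), rng.normal(size=(200, 4))])
runtime_y = rng.integers(0, 2, 200)
validator = DecisionTreeClassifier().fit(runtime_X, runtime_y)

def place(static_params, runtime_params, fallback_env):
    env = assigner.predict([static_params])[0]
    fits = validator.predict([np.concatenate([[env], runtime_params])])[0]
    return env if fits else fallback_env            # re-place when runtime needs are not met

print(place(rng.normal(size=3), rng.normal(size=4), fallback_env=99))
```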
-
Patent number: 12118459
Abstract: A self-supervised machine-learning system identifies whether an intermittent signal is present. The system includes a receiver, an encoding neural network, a decoding neural network, and a gating neural network. The receiver detects radiation and, from the detected radiation, generates a sampled sequence including sampled values describing the intermittent signal and noise. The encoding neural network is trained to compress each window over the sampled sequence into a respective context vector having a fixed dimension less than an incoming dimension of the window. The decoding neural network is trained to decompress the respective context vector for each window into an interim sequence describing the intermittent signal while suppressing the noise. The gating neural network is trained to produce a confidence sequence from a sigmoidal output based on the interim sequence.
Type: Grant
Filed: September 8, 2020
Date of Patent: October 15, 2024
Inventors: Diego Marez, John David Reeder
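A sketch of the three-network layout described above: an encoder compresses each window to a fixed-size context vector, a decoder reconstructs a denoised interim sequence, and a gating network emits a sigmoidal confidence sequence. Layer sizes and the use of plain fully connected layers are illustrative assumptions.

```python
import torch
import torch.nn as nn

class IntermittentSignalDetector(nn.Module):
    def __init__(self, window_len=256, context_dim=32):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(window_len, 128), nn.ReLU(),
                                     nn.Linear(128, context_dim))
        self.decoder = nn.Sequential(nn.Linear(context_dim, 128), nn.ReLU(),
                                     nn.Linear(128, window_len))
        self.gate = nn.Sequential(nn.Linear(window_len, 64), nn.ReLU(),
                                  nn.Linear(64, window_len), nn.Sigmoid())

    def forward(self, windows):                  # windows: (batch, window_len) sampled values
        context = self.encoder(windows)          # fixed dimension smaller than the window
        interim = self.decoder(context)          # denoised interim sequence
        confidence = self.gate(interim)          # per-sample confidence that a signal is present
        return interim, confidence

model = IntermittentSignalDetector()
interim, confidence = model(torch.randn(4, 256))
print(interim.shape, confidence.shape)
```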
-
Patent number: 12118449
Abstract: A system includes a housing, sensors, actuators, and a processing device. The housing is formed of material that is soft, flexible, stretchable, deformable, or any combination thereof. Each sensor is disposed on a surface of the housing, disposed at least partially inside the housing, or positioned spaced apart from the housing, and is configured to generate data associated with physical properties of the housing and an environment external to the housing. The processing device is configured to implement a neural network having one or more inputs and one or more outputs. The inputs include the data associated with the physical properties of the housing and the data associated with the environment external to the housing. The outputs are based on the data associated with the physical properties of the housing and the data associated with the environment external to the housing.
Type: Grant
Filed: March 23, 2021
Date of Patent: October 15, 2024
Assignee: University of Southern California
Inventors: Antonio Damasio, Kingson Man
-
Patent number: 12112241
Abstract: The present disclosure generally relates to apparatus, software and methods for detecting anomalous elements in data. For example, the data can be any time series, such as but not limited to radio frequency data, temperature data, stock data, or production data. Each type of data may be susceptible to repeating phenomena that produce recognizable features of anomalous elements. In some embodiments, the features can be characterized as known patterns and used to train a machine learning model via supervised learning to recognize those features in a new data series.
Type: Grant
Filed: September 20, 2019
Date of Patent: October 8, 2024
Assignee: Cable Television Laboratories, Inc.
Inventors: Jingjie Zhu, Karthik Sundaresan
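A hedged sketch of that supervised setup: label sliding windows of a time series using a known anomaly pattern (sparse spikes here), then train a classifier to recognize the corresponding features in new data. The window size, hand-crafted features, classifier choice, and synthetic series are illustrative.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def window_features(series, size=32, step=8):
    wins = [series[i:i + size] for i in range(0, len(series) - size + 1, step)]
    # Simple per-window features; a real system would use richer, domain-specific ones.
    return np.array([[w.mean(), w.std(), w.max() - w.min(), np.abs(np.diff(w)).mean()] for w in wins])

rng = np.random.default_rng(1)
normal = rng.normal(0, 1, 2048)
anomalous = rng.normal(0, 1, 2048) + 4 * (rng.random(2048) < 0.02)   # sparse spikes as the "known pattern"

X = np.vstack([window_features(normal), window_features(anomalous)])
y = np.array([0] * len(window_features(normal)) + [1] * len(window_features(anomalous)))
clf = RandomForestClassifier(n_estimators=50, random_state=0).fit(X, y)

new_series = rng.normal(0, 1, 512) + 4 * (rng.random(512) < 0.05)
print(clf.predict(window_features(new_series)))      # 1 flags windows that look anomalous
```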
-
Patent number: 12099571
Abstract: Heterogeneous monitoring nodes may each generate a series of monitoring node values over time associated with operation of an industrial asset. An offline abnormal state detection model creation computer may receive the series of monitoring node values and perform a feature extraction process using a multi-modal, multi-disciplinary framework to generate an initial set of feature vectors. Then feature dimensionality reduction is performed to generate a selected feature vector subset. The model creation computer may derive digital models through a data-driven machine learning modeling method, based on input/output variables identified by domain experts or by learning from the data. The system may then automatically generate domain level features based on a difference between sensor measurements and digital model output.
Type: Grant
Filed: May 21, 2018
Date of Patent: September 24, 2024
Assignee: GE INFRASTRUCTURE TECHNOLOGY LLC
Inventors: Weizhong Yan, Lalit Keshav Mestha, Daniel Francis Holzhauer
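An illustrative pipeline for the three stages named above: extract features from monitoring-node values, reduce their dimensionality, and form a domain-level feature as the residual between sensor measurements and a data-driven digital model's prediction. PCA, linear regression, and the toy feature set are stand-ins, not the patented framework.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)
node_values = rng.normal(size=(500, 12))              # time series from 12 monitoring nodes

# 1) Feature extraction (toy "multi-modal" feature set), 2) dimensionality reduction.
feats = np.hstack([node_values, node_values ** 2])
reduced = PCA(n_components=5).fit_transform(feats)

# 3) Digital model: predict one sensor from the others; the residual becomes a domain-level feature.
target, inputs = node_values[:, 0], node_values[:, 1:]
digital_model = LinearRegression().fit(inputs, target)
residual_feature = target - digital_model.predict(inputs)

feature_vectors = np.column_stack([reduced, residual_feature])
print(feature_vectors.shape)
```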
-
Patent number: 12099566
Abstract: Techniques for learning and using content type embeddings. The content type embeddings have the useful property that a distance in an embedding space between two content type embeddings corresponds to a semantic similarity between the two content types represented by the two content type embeddings. The closer the distance in the space, the more the two content types are semantically similar. The farther the distance in the space, the less the two content types are semantically similar. The learned content type embeddings can be used in a content suggestion system as machine learning features to improve content suggestions to end-users.
Type: Grant
Filed: November 6, 2019
Date of Patent: September 24, 2024
Assignee: Dropbox, Inc.
Inventors: Jongmin Baek, Jiarui Ding, Neeraj Kumar
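A minimal illustration of the stated property: distance (or cosine similarity) between two learned content type embeddings tracks semantic similarity. The vectors below are made up purely to show the lookup-and-compare pattern; real embeddings would come from training.

```python
import numpy as np

embeddings = {                      # learned vectors in practice, hand-typed here for illustration
    "pdf":  np.array([0.9, 0.1, 0.0]),
    "docx": np.array([0.8, 0.2, 0.1]),
    "mp4":  np.array([0.0, 0.1, 0.9]),
}

def similarity(a, b):
    va, vb = embeddings[a], embeddings[b]
    return float(va @ vb / (np.linalg.norm(va) * np.linalg.norm(vb)))   # cosine similarity

print(similarity("pdf", "docx"))    # close in the space -> semantically similar
print(similarity("pdf", "mp4"))     # far apart -> dissimilar
```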
-
Patent number: 12086697
Abstract: A relationship analysis device includes a parameter sample data calculation unit that calculates sample data for parameters for a simulator that receives inputs of data of a first type and outputs data of a second type; a second type sample data acquisition unit that inputs, to the simulator, observation data and sample data, and obtains sample data of the second type; and a parameter value determination unit that calculates a weight for sample data based on the difference between observation data of the second type and the sample data of the second type, and based on the relationship between a first distribution that the observation data of the first type follows and a second distribution being a distribution of the data of the first type, and calculates a value for the parameters using the calculated weight.
Type: Grant
Filed: June 7, 2019
Date of Patent: September 10, 2024
Assignee: NEC CORPORATION
Inventors: Keiichi Kisamori, Keisuke Yamazaki
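A hedged reading of the weighting step, in the spirit of likelihood-free (simulation-based) inference: draw parameter samples, run them through the simulator, weight each sample by how close its simulated second-type output is to the observed second-type data (with a crude correction for the mismatch between the two first-type distributions), then estimate the parameter as the weighted mean. The toy simulator, kernels, and constants are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

def simulator(theta, x):
    return theta * x + rng.normal(0, 0.1, size=x.shape)     # first-type data in, second-type data out

x_obs = rng.normal(1.0, 0.5, 200)                            # observed first-type data
y_obs = 2.0 * x_obs + rng.normal(0, 0.1, 200)                # observed second-type data

theta_samples = rng.uniform(0.0, 5.0, 500)                   # sample data for the parameter
x_model = rng.normal(1.0, 0.5, 200)                          # first-type distribution assumed by the model

weights = []
for theta in theta_samples:
    y_sim = simulator(theta, x_obs)
    fit = np.exp(-np.mean((y_sim - y_obs) ** 2) / 0.1)       # closeness of second-type samples
    shift = np.exp(-abs(x_obs.mean() - x_model.mean()))      # crude first-type distribution correction
    weights.append(fit * shift)
weights = np.array(weights) / np.sum(weights)

print(float(np.sum(weights * theta_samples)))                # weighted parameter estimate, near 2.0
```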
-
Patent number: 12086705
Abstract: An apparatus to facilitate compute optimization is disclosed. The apparatus includes at least one processor to perform operations to implement a neural network and compute logic to accelerate neural network computations.
Type: Grant
Filed: December 29, 2017
Date of Patent: September 10, 2024
Assignee: Intel Corporation
Inventors: Amit Bleiweiss, Abhishek Venkatesh, Gokce Keskin, John Gierach, Oguz Elibol, Tomer Bar-On, Huma Abidi, Devan Burke, Jaikrishnan Menon, Eriko Nurvitadhi, Pruthvi Gowda Thorehosur Appajigowda, Travis T. Schluessler, Dhawal Srivastava, Nishant Patel, Anil Thomas
-
Patent number: 12076120
Abstract: In accordance with some embodiments, systems, methods, and media for estimating compensatory reserve and predicting hemodynamic decompensation using physiological data are provided. In some embodiments, a system for estimating compensatory reserve is provided, the system comprising: a processor programmed to: receive a blood pressure waveform of a subject; generate a first sample of the blood pressure waveform with a first duration; provide the sample as input to a trained CNN that was trained using samples of the first duration from blood pressure waveforms recorded from subjects while decreasing the subjects' central blood volume, each sample being associated with a compensatory reserve metric; receive, from the trained CNN, a first compensatory reserve metric based on the first sample; and cause information indicative of remaining compensatory reserve to be presented.
Type: Grant
Filed: July 21, 2020
Date of Patent: September 3, 2024
Assignees: Mayo Foundation for Medical Education and Research, The Government of the United States, as Represented by the Secretary of the Army
Inventors: Robert W. Techentin, Timothy B. Curry, Michael J. Joyner, Clifton R. Haider, David R. Holmes, III, Christopher L. Felton, Barry K. Gilbert, Charlotte Sue Van Dorn, William A. Carey, Victor A. Convertino
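A sketch of the model shape implied above: a small 1-D CNN that maps a fixed-duration blood-pressure waveform sample to a single compensatory-reserve value in [0, 1]. The layer sizes, sample length, and implied sampling rate are illustrative assumptions, not the trained CNN described in the patent.

```python
import torch
import torch.nn as nn

class CompensatoryReserveCNN(nn.Module):
    def __init__(self, sample_len=2000):         # e.g., a 20 s window at 100 Hz (assumed)
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv1d(1, 16, kernel_size=7, stride=2), nn.ReLU(),
            nn.Conv1d(16, 32, kernel_size=5, stride=2), nn.ReLU(),
            nn.AdaptiveAvgPool1d(1),
        )
        self.head = nn.Sequential(nn.Flatten(), nn.Linear(32, 1), nn.Sigmoid())

    def forward(self, waveform):                  # waveform: (batch, 1, sample_len)
        return self.head(self.features(waveform)).squeeze(-1)

model = CompensatoryReserveCNN()
print(model(torch.randn(2, 1, 2000)))             # two waveform samples -> two reserve estimates
```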
-
Patent number: 12079725
Abstract: In some embodiments, an application receives a request to execute a convolutional neural network model. The application determines the computational complexity requirement for the neural network based on the computing resources available on the device. The application further determines the architecture of the convolutional neural network model by determining the locations of down-sampling layers within the convolutional neural network model based on the computational complexity requirement. The application reconfigures the architecture of the convolutional neural network model by moving the down-sampling layers to the determined locations and executes the convolutional neural network model to generate output results.
Type: Grant
Filed: January 24, 2020
Date of Patent: September 3, 2024
Assignee: Adobe Inc.
Inventors: Zhe Lin, Yilin Wang, Siyuan Qiao, Jianming Zhang
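An illustrative version of that reconfiguration step: given a chain of convolution blocks and a compute budget derived from the device, choose where to place the stride-2 down-sampling layers, preferring the latest placement that still fits the budget (earlier down-sampling is cheaper, later is typically more accurate). The FLOP model and selection rule are assumptions for the sketch.

```python
from itertools import combinations

def chain_flops(n_blocks, downsample_at, in_res=64, channels=32, k=3):
    flops, res = 0, in_res
    for i in range(n_blocks):
        if i in downsample_at:
            res //= 2                              # down-sample before this block runs
        flops += channels * channels * k * k * res * res
    return flops

def configure(n_blocks, n_downsamples, budget):
    candidates = sorted(combinations(range(n_blocks), n_downsamples),
                        key=lambda c: -chain_flops(n_blocks, c))   # latest (most expensive) placements first
    for placement in candidates:
        if chain_flops(n_blocks, placement) <= budget:
            return placement                       # latest placement that still fits the budget
    return candidates[-1]                          # fall back to the cheapest configuration

print(configure(n_blocks=8, n_downsamples=2, budget=3e8))
```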
-
Patent number: 12073320
Abstract: Disclosed are systems and methods to incrementally train neural networks. Incrementally training the neural networks can include defining a probability distribution of labeled training examples from a training sample pool, generating a first training set based on the probability distribution, training the neural network with the first training set, adding at least one additional training sample to the training sample pool, generating a second training set, and training the neural network with the second training set. The incremental training can be recursive for additional training sets until a decision to end the recursion is made.
Type: Grant
Filed: October 13, 2020
Date of Patent: August 27, 2024
Assignee: Ford Global Technologies, LLC
Inventor: Lucas Ross
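A sketch of that loop with a toy linear model standing in for the neural network: weight the labeled examples in the pool, sample a training set from that distribution, train, grow the pool, and repeat. The class-balancing weighting rule, set size, and model choice are illustrative assumptions.

```python
import numpy as np
from sklearn.linear_model import SGDClassifier

rng = np.random.default_rng(0)

def sample_distribution(pool_y):
    # Illustrative choice: up-weight the rarer class so newly added minority samples matter more.
    counts = np.bincount(pool_y, minlength=2).astype(float)
    w = 1.0 / counts[pool_y]
    return w / w.sum()

def train_incrementally(model, pool_X, pool_y, new_batches, set_size=100):
    for new_X, new_y in new_batches:
        p = sample_distribution(pool_y)
        idx = rng.choice(len(pool_y), size=min(set_size, len(pool_y)), replace=False, p=p)
        model.partial_fit(pool_X[idx], pool_y[idx], classes=[0, 1])
        pool_X = np.vstack([pool_X, new_X])        # add new training samples to the pool
        pool_y = np.concatenate([pool_y, new_y])
    return model

pool_X, pool_y = rng.normal(size=(200, 4)), rng.integers(0, 2, 200)
batches = [(rng.normal(size=(50, 4)), rng.integers(0, 2, 50)) for _ in range(3)]
model = train_incrementally(SGDClassifier(), pool_X, pool_y, batches)
print(model.score(pool_X, pool_y))
```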
-
Patent number: 12073308
Abstract: Embodiments are directed towards a hardware accelerator engine that supports efficient mapping of convolutional stages of deep neural network algorithms. The hardware accelerator engine includes a plurality of convolution accelerators, and each one of the plurality of convolution accelerators includes a kernel buffer, a feature line buffer, and a plurality of multiply-accumulate (MAC) units. The MAC units are arranged to multiply and accumulate data received from both the kernel buffer and the feature line buffer. The hardware accelerator engine also includes at least one input bus coupled to an output bus port of a stream switch, at least one output bus coupled to an input bus port of the stream switch, or at least one input bus and at least one output bus hard wired to respective output bus and input bus ports of the stream switch.
Type: Grant
Filed: February 2, 2017
Date of Patent: August 27, 2024
Assignees: STMICROELECTRONICS INTERNATIONAL N.V., STMICROELECTRONICS S.r.l.
Inventors: Thomas Boesch, Giuseppe Desoli
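A software model, for illustration only, of what the MAC units compute per output pixel: each unit multiplies a kernel-buffer value by the matching feature-line-buffer value and accumulates, which amounts to a dot product over the kernel window. Buffer shapes and the scalar loop are stand-ins for the hardware dataflow.

```python
import numpy as np

def mac_cluster(kernel_buffer, feature_line_buffer):
    """kernel_buffer: (k, k) weights; feature_line_buffer: (k, k) input patch."""
    acc = 0.0
    for kw, fv in zip(kernel_buffer.ravel(), feature_line_buffer.ravel()):
        acc += kw * fv                     # one multiply-accumulate per MAC unit per step
    return acc

def convolve(feature_map, kernel):
    k = kernel.shape[0]
    h, w = feature_map.shape
    out = np.zeros((h - k + 1, w - k + 1))
    for y in range(out.shape[0]):
        for x in range(out.shape[1]):
            out[y, x] = mac_cluster(kernel, feature_map[y:y + k, x:x + k])
    return out

print(convolve(np.arange(25.0).reshape(5, 5), np.ones((3, 3))))
```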
-
Patent number: 12061991
Abstract: Transfer learning in machine learning can include receiving a machine learning model. Target domain training data for reprogramming the machine learning model using transfer learning can be received. The target domain training data can be transformed by performing a transformation function on the target domain training data. Output labels of the machine learning model can be mapped to target labels associated with the target domain training data. The transformation function can be trained by optimizing a parameter of the transformation function. The machine learning model can be reprogrammed based on input data transformed by the transformation function and a mapping of the output labels to target labels.
Type: Grant
Filed: September 23, 2020
Date of Patent: August 13, 2024
Assignees: International Business Machines Corporation, National Tsing Hua University
Inventors: Pin-Yu Chen, Sijia Liu, Chia-Yu Chen, I-Hsin Chung, Tsung-Yi Ho, Yun-Yun Tsai
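A hedged sketch of model reprogramming: keep the pretrained model frozen, learn only an additive input transformation, and map its source-domain output labels onto the target labels. The frozen toy model, the additive form of the transformation, the many-to-one label map, and the synthetic data are all assumptions for illustration.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

pretrained = nn.Sequential(nn.Linear(64, 128), nn.ReLU(), nn.Linear(128, 10))
for p in pretrained.parameters():
    p.requires_grad_(False)                              # the received model is never updated

delta = nn.Parameter(torch.zeros(64))                    # trainable parameter of the transformation function
label_map = torch.tensor([0, 0, 1, 1, 2, 2, 0, 1, 2, 0]) # source label -> target label (3 target classes)
map_matrix = F.one_hot(label_map, num_classes=3).float()

def reprogrammed_log_probs(x):
    source_probs = F.softmax(pretrained(x + delta), dim=-1)   # transform input, reuse frozen model
    target_probs = source_probs @ map_matrix                  # pool source classes mapped to each target class
    return torch.log(target_probs + 1e-9)

opt = torch.optim.Adam([delta], lr=1e-2)                 # optimize only the transformation parameter
x, y = torch.randn(32, 64), torch.randint(0, 3, (32,))
for _ in range(200):
    opt.zero_grad()
    loss = F.nll_loss(reprogrammed_log_probs(x), y)
    loss.backward()
    opt.step()
print(float(loss))
```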
-
Patent number: 12056580
Abstract: A method, system and computer program product, the method comprising: creating a model representing underperforming cases; from a case collection having a total performance, and which comprises for each of a multiplicity of records: a value for each feature from a collection of features, a ground truth label and a prediction of a machine learning (ML) engine, obtaining one or more features; dividing the records into groups, based on values of the features in each record; for one group of the groups, calculating a performance parameter of the ML engine over the portion of the records associated with the group; subject to the performance parameter of the group being below the total performance in at least a predetermined threshold: determining a characteristic for the group; adding the characteristic of the group to the model; and providing the model to a user, thus indicating under-performing parts of the test collection.
Type: Grant
Filed: October 24, 2019
Date of Patent: August 6, 2024
Assignee: International Business Machines Corporation
Inventors: Orna Raz, Marcel Zalmanovici, Aviad Zlotnick
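A sketch of that slicing analysis on a toy table: group the records by a feature's values, compute per-group accuracy, and report any group that falls below the overall accuracy by more than a chosen margin. The column names, margin, and example records are illustrative.

```python
import pandas as pd

records = pd.DataFrame({
    "feature_country": ["US", "US", "DE", "DE", "FR", "FR", "FR", "US"],
    "ground_truth":    [1, 0, 1, 1, 0, 1, 1, 0],
    "prediction":      [1, 0, 0, 0, 0, 0, 1, 0],
})

records["correct"] = records["ground_truth"] == records["prediction"]
total_performance = records["correct"].mean()
threshold = 0.15                                       # how far below total performance a group may fall

model_of_underperformance = []
for value, group in records.groupby("feature_country"):
    perf = group["correct"].mean()
    if perf < total_performance - threshold:
        model_of_underperformance.append({"feature_country": value, "accuracy": perf})

print(total_performance, model_of_underperformance)   # characteristics of under-performing slices
```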
-
Patent number: 12056610
Abstract: A learning mechanism with partially-labeled web images is provided while correcting the noisy labels during learning. Specifically, the mechanism employs a momentum prototype that represents common characteristics of a specific class. One training objective is to minimize the difference between the normalized embedding of a training image sample and the momentum prototype of the corresponding class. Meanwhile, during the training process, the momentum prototype is used to generate a pseudo label for the training image sample, which can then be used to identify and remove out of distribution (OOD) samples to correct the noisy labels from the original partially-labeled training images. The momentum prototype for each class is in turn constantly updated based on the embeddings of new training samples and their pseudo labels.
Type: Grant
Filed: August 28, 2020
Date of Patent: August 6, 2024
Assignee: Salesforce, Inc.
Inventors: Junnan Li, Chu Hong Hoi
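A hedged NumPy sketch of the prototype bookkeeping described above: keep a momentum-updated prototype per class, pseudo-label each sample by its nearest prototype, drop samples far from every prototype as out-of-distribution, and update the matching prototype with each kept embedding. The momentum value, distance threshold, and use of Euclidean distance on normalized vectors are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
n_classes, dim, momentum, ood_threshold = 3, 16, 0.99, 1.5
prototypes = rng.normal(size=(n_classes, dim))
prototypes /= np.linalg.norm(prototypes, axis=1, keepdims=True)

def process_batch(embeddings, noisy_labels):
    embeddings = embeddings / np.linalg.norm(embeddings, axis=1, keepdims=True)
    dists = np.linalg.norm(embeddings[:, None, :] - prototypes[None, :, :], axis=-1)
    pseudo = dists.argmin(axis=1)                          # pseudo label = nearest momentum prototype
    keep = dists.min(axis=1) < ood_threshold               # far from every prototype -> OOD, discard
    corrected = np.where(keep, pseudo, -1)                 # replace noisy labels; -1 marks removed samples
    for emb, label in zip(embeddings[keep], pseudo[keep]): # momentum update of each class prototype
        prototypes[label] = momentum * prototypes[label] + (1 - momentum) * emb
        prototypes[label] /= np.linalg.norm(prototypes[label])
    return noisy_labels, corrected

print(process_batch(rng.normal(size=(8, dim)), rng.integers(0, n_classes, 8)))
```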
-
Patent number: 12056593
Abstract: Methods, systems, and apparatus, including computer programs encoded on a computer storage medium, for selecting an action to be performed by a reinforcement learning agent interacting with an environment. A current observation characterizing a current state of the environment is received. For each action in a set of multiple actions that can be performed by the agent to interact with the environment, a probability distribution is determined over possible Q returns for the action-current observation pair. For each action, a measure of central tendency of the possible Q returns with respect to the probability distributions for the action-current observation pair is determined. An action to be performed by the agent in response to the current observation is selected using the measures of central tendency.
Type: Grant
Filed: November 16, 2020
Date of Patent: August 6, 2024
Assignee: DeepMind Technologies Limited
Inventors: Marc Gendron-Bellemare, William Clinton Dabney
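A minimal NumPy sketch of that selection rule: for each action the agent holds a probability distribution over a fixed set of possible Q returns, reduces it to a measure of central tendency (the mean here), and acts greedily on that measure. The return support, number of actions, and random distributions are illustrative.

```python
import numpy as np

return_atoms = np.linspace(-10.0, 10.0, 51)          # possible Q returns (support of each distribution)

def select_action(return_distributions):
    """return_distributions: (n_actions, n_atoms), each row summing to 1 for the current observation."""
    central_tendency = return_distributions @ return_atoms   # expected return per action
    return int(np.argmax(central_tendency))

rng = np.random.default_rng(0)
probs = rng.random((4, 51))
probs /= probs.sum(axis=1, keepdims=True)            # one probability distribution per action
print(select_action(probs))
```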