Patents by Inventor Akshay Jain

Akshay Jain has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Publication number: 20240119364
    Abstract: The present disclosure relates to systems, non-transitory computer-readable media, and methods for automatically generating and executing machine learning pipelines based on user selections of settings, machine learning structures, and other machine learning pipeline criteria. In particular, in one or more embodiments, the disclosed systems utilize user input selecting various machine learning pipeline settings to generate machine learning model pipeline files. Further, the disclosed systems execute and deploy the machine learning pipelines based on user-selected schedules. In some embodiments, the disclosed systems also register the machine learning pipelines and associated machine learning pipeline data in a machine learning pipeline registry. Further, the disclosed systems can generate and provide a machine learning pipeline graphical user interface for monitoring and managing machine learning pipelines.
    Type: Application
    Filed: September 21, 2023
    Publication date: April 11, 2024
    Inventors: Akshay Jain, Frank Teoh, Peeyush Agarwal, Michael Tompkins, Sashidhar Guntury, Yunfan Zhong, Greg Tobkin
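For readers who want a concrete picture of the pipeline-generation flow described in publication 20240119364 above, here is a minimal, hypothetical sketch (not code from the application): user-selected settings are serialized into a pipeline definition file and recorded in a simple registry. The names (build_pipeline_file, PipelineRegistry, the setting keys, and the cron schedule) are all invented for illustration.
```python
# Hypothetical sketch: user-selected settings -> pipeline file -> registry entry.
import json
from datetime import datetime, timezone
from pathlib import Path


def build_pipeline_file(selections: dict, out_dir: Path) -> Path:
    """Serialize user-selected pipeline settings into a pipeline definition file."""
    pipeline = {
        "name": selections["name"],
        "model_type": selections.get("model_type", "gradient_boosting"),
        "features": selections.get("features", []),
        "schedule": selections.get("schedule", "0 2 * * *"),  # user-selected cron schedule
        "created_at": datetime.now(timezone.utc).isoformat(),
    }
    out_dir.mkdir(parents=True, exist_ok=True)
    path = out_dir / f"{pipeline['name']}.json"
    path.write_text(json.dumps(pipeline, indent=2))
    return path


class PipelineRegistry:
    """Minimal in-memory registry of generated pipelines and their metadata."""

    def __init__(self):
        self._entries = {}

    def register(self, path: Path) -> None:
        pipeline = json.loads(path.read_text())
        self._entries[pipeline["name"]] = {"file": str(path), **pipeline}

    def list_pipelines(self):
        return list(self._entries.values())


if __name__ == "__main__":
    registry = PipelineRegistry()
    file_path = build_pipeline_file(
        {"name": "churn_model", "features": ["txn_count", "tenure_days"]},
        Path("pipelines"),
    )
    registry.register(file_path)
    print(registry.list_pipelines())
```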
  • Publication number: 20240119003
    Abstract: The present disclosure relates to systems, non-transitory computer-readable media, and methods that utilize a low-latency machine learning model prediction cache for improving distribution of current state machine learning predictions across computer networks. In particular, in one or more implementations, the disclosed systems utilize a prediction registration platform for defining prediction datatypes and corresponding machine learning model prediction templates. Moreover, in one or more embodiments, the disclosed systems generate a machine learning data repository that includes predictions generated from input features utilizing machine learning models. From this repository, the disclosed systems also generate a low-latency machine learning prediction cache by extracting current state machine learning model predictions according to the machine learning prediction templates and then utilize the low-latency machine learning prediction cache to respond to queries for machine learning model predictions.
    Type: Application
    Filed: October 5, 2022
    Publication date: April 11, 2024
    Inventors: Greg Tobkin, Akshay Jain, Frank Teoh, Paul Zeng, Peeyush Agarwal, Sashidhar Guntury
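A toy illustration of the current-state prediction cache idea from publication 20240119003 above: the latest prediction per user and registered prediction type is extracted from a repository into a dictionary that can answer lookups with minimal latency. The record layout, template set, and values are assumptions, not details from the application.
```python
# Hypothetical repository rows: (user_id, prediction_type, timestamp, value)
repository = [
    ("u1", "credit_risk", 1, 0.31),
    ("u1", "credit_risk", 5, 0.27),
    ("u2", "credit_risk", 3, 0.55),
]

# Templates define which prediction types the cache should serve.
registered_templates = {"credit_risk"}


def build_prediction_cache(rows, templates):
    """Keep only the latest prediction per (user, type) for registered templates."""
    latest = {}
    for user_id, ptype, ts, value in rows:
        if ptype not in templates:
            continue
        key = (user_id, ptype)
        if key not in latest or ts > latest[key][0]:
            latest[key] = (ts, value)
    return {key: value for key, (ts, value) in latest.items()}


cache = build_prediction_cache(repository, registered_templates)
print(cache.get(("u1", "credit_risk")))  # -> 0.27, the current-state prediction
```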
  • Patent number: 11949414
    Abstract: Methods, apparatus, systems, and articles of manufacture are disclosed to improve in-memory multiply and accumulate operations. An example apparatus includes a first multiplexer in a subarray of memory, the first multiplexer to receive first values representative of a column of a lookup table (LUT) including entries to represent products of four-bit numbers and return second values from an intersection of a row and the column of the LUT based on a first element of a first operand; shift and adder logic in the subarray, the shift and adder logic to shift the second values based on at least one of the first element of the first operand or a first element of a second operand; and accumulation storage in the subarray, the accumulation storage to store at least the shifted second values.
    Type: Grant
    Filed: December 22, 2020
    Date of Patent: April 2, 2024
    Assignee: INTEL CORPORATION
    Inventors: Gurpreet Singh Kalsi, Akshay Krishna Ramanathan, Kamlesh Pillai, Sreenivas Subramoney, Srivatsa Rangachar Srinivasa, Anirud Thyagharajan, Om Ji Omer, Saurabh Jain
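The in-memory multiply-and-accumulate described in patent 11949414 above can be modeled in software to show the arithmetic: a 16x16 lookup table holds every product of two 4-bit values, and wider operands are split into 4-bit digits whose looked-up partial products are shifted and accumulated. The sketch below models only that arithmetic, not the subarray hardware, multiplexers, or accumulation storage.
```python
# Precompute the products-of-4-bit-numbers lookup table (rows x columns).
LUT = [[row * col for col in range(16)] for row in range(16)]


def to_nibbles(value: int) -> list[int]:
    """Split a non-negative integer into 4-bit digits, least significant first."""
    digits = []
    while True:
        digits.append(value & 0xF)
        value >>= 4
        if value == 0:
            return digits


def lut_multiply(a: int, b: int) -> int:
    """Multiply via LUT lookups plus shift-and-add accumulation."""
    acc = 0  # plays the role of the accumulation storage
    for i, da in enumerate(to_nibbles(a)):
        for j, db in enumerate(to_nibbles(b)):
            partial = LUT[da][db]            # row/column selection in the LUT
            acc += partial << (4 * (i + j))  # shift based on digit positions
    return acc


assert lut_multiply(123, 456) == 123 * 456
print(lut_multiply(123, 456))
```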
  • Publication number: 20240037378
    Abstract: Systems, apparatuses and methods may provide for technology that identifies an embedding table associated with a neural network. The neural network is associated with a plurality of compute nodes. The technology further identifies a number of entries of the embedding table, and determines whether to process gradients associated with the embedding table as dense gradients or sparse gradients based on the number of entries.
    Type: Application
    Filed: December 24, 2020
    Publication date: February 1, 2024
    Applicant: Intel Corporation
    Inventors: Guokai Ma, Jiong Gong, Dhiraj Kalamkar, Rachitha Prem Seelin, Hongzhen Liu, Akshay Jain, Liangang Zhang
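Publication 20240037378 above describes choosing between dense and sparse gradient handling based on an embedding table's entry count. A minimal sketch of that decision follows; the threshold value and function name are hypothetical.
```python
# Hypothetical cutoff: small tables sync cheaply as dense tensors across the
# compute nodes, while large tables touch only a few rows per batch, so
# exchanging sparse updates saves bandwidth.
DENSE_ROW_THRESHOLD = 50_000


def gradient_mode(num_entries: int) -> str:
    """Return 'dense' or 'sparse' based on the embedding table size."""
    return "dense" if num_entries <= DENSE_ROW_THRESHOLD else "sparse"


for table_name, rows in [("country_id", 250), ("user_id", 10_000_000)]:
    print(table_name, gradient_mode(rows))
```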
  • Publication number: 20230229735
    Abstract: The present disclosure relates to systems, non-transitory computer-readable media, and methods that implement a pre-defined model container workflow allowing computing devices to flexibly and efficiently define, train, deploy, and maintain machine-learning models. For instance, the disclosed systems can provide scaffolding and boilerplate code for machine-learning models. To illustrate, boilerplate code can include predetermined designs of base classes for common use cases like training, batch inference, etc. In addition, the scaffolding provides an opinionated directory structure for organizing code of a machine-learning model. Further, the disclosed systems can provide containerization and various tooling (e.g., command interface tooling, platform upgrade tooling, and model repository management tooling). Additionally, the disclosed systems can provide out-of-the-box compatibility with one or more different compute instances for increased flexibility and cross-system integration.
    Type: Application
    Filed: January 18, 2022
    Publication date: July 20, 2023
    Inventors: Akshay Jain, Frank Teoh, Greg Tobkin, Michael Tompkins, Peeyush Agarwal, Sashidhar Guntury, Yunfan Zhong
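To make the scaffolding-and-boilerplate idea in publication 20230229735 above concrete, here is a hypothetical sketch of a base class that fixes training and batch-inference entry points for a model author to subclass; the class names, methods, and directory layout shown in comments are assumptions rather than the disclosed system's actual API.
```python
# Hypothetical directory layout the scaffolding might generate:
#   my_model/
#     model.py          (subclass of ModelContainer)
#     requirements.txt
#     Dockerfile
from abc import ABC, abstractmethod


class ModelContainer(ABC):
    """Boilerplate base class for a containerized machine-learning model."""

    @abstractmethod
    def train(self, training_data):
        """Fit the model; the platform calls this inside the training container."""

    @abstractmethod
    def batch_inference(self, rows):
        """Score a batch of rows; called by the scheduled inference job."""


class MeanBaselineModel(ModelContainer):
    """Trivial example model showing the subclassing contract."""

    def train(self, training_data):
        self.mean_ = sum(training_data) / len(training_data)

    def batch_inference(self, rows):
        return [self.mean_ for _ in rows]


model = MeanBaselineModel()
model.train([1.0, 2.0, 3.0])
print(model.batch_inference(["row_a", "row_b"]))  # -> [2.0, 2.0]
```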
  • Publication number: 20230196185
    Abstract: This disclosure describes a feature family system that, as part of an inter-network facilitation system, can intelligently generate and maintain a feature family repository for quickly and efficiently retrieving and providing machine learning features upon request. For example, the disclosed systems can generate a feature family repository as a centralized network location of feature references indicating network locations where different machine learning features are stored. In some cases, the disclosed systems identify a stored feature family that matches the request and retrieve the stored features from their respective network locations. The disclosed systems can generate feature families for online features as well as offline features and can automatically update feature values associated with various machine learning features on a periodic basis or in response to trigger events.
    Type: Application
    Filed: December 21, 2021
    Publication date: June 22, 2023
    Inventors: Akshay Jain, Peeyush Agarwal, Frank Teoh
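A small, hypothetical sketch of the feature family repository described in publication 20230196185 above: a registry maps a family name to feature references (storage locations) and resolves a request by following those references. Storage locations and values are invented for illustration.
```python
class FeatureFamilyRepository:
    def __init__(self, feature_store: dict):
        # feature_store maps a storage location to the feature's current value.
        self._store = feature_store
        self._families: dict[str, list[str]] = {}

    def register_family(self, family: str, feature_locations: list[str]) -> None:
        """Record where each feature in the family is stored."""
        self._families[family] = feature_locations

    def get_features(self, family: str) -> dict[str, float]:
        """Resolve a family to feature values by following its references."""
        return {loc: self._store[loc] for loc in self._families[family]}


store = {
    "s3://features/txn_count_30d": 17.0,
    "online://features/login_gap_hours": 6.5,
}
repo = FeatureFamilyRepository(store)
repo.register_family("fraud_scoring_v1", list(store))
print(repo.get_features("fraud_scoring_v1"))
```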
  • Publication number: 20230097558
    Abstract: Disclosed herein are system, apparatus, article of manufacture, method and/or computer program product embodiments, and/or combinations and sub-combinations thereof, for a multimedia environment that includes a computing device, a media server system, and a third party shopping system. The computing device can display an advertisement including a reference to an advertised subject provided by the third party shopping system. The computing device can further display media content provided by the media server system. In response to receiving an indication to place an order for an item associated with the advertised subject, the computing device can further collect information included in the order. The computing device can further transmit, to the third party shopping system, at least a portion of the order and media account information. The order is recorded in a user shopping account corresponding to the media account associated with the computing device.
    Type: Application
    Filed: September 29, 2021
    Publication date: March 30, 2023
    Applicant: Roku, Inc.
    Inventors: Shravan MAJITHIA, Michael Veach, Derrick Johnson, Akshay Jain, Vinay Muthreja Ashok, Shyam Srinivas
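Publication 20230097558 above describes the computing device transmitting part of the order together with media account information to the third-party shopping system. A hypothetical sketch of assembling such a payload follows; all field names and the JSON transport are assumptions.
```python
import json


def build_order_payload(order: dict, media_account_id: str) -> str:
    """Combine selected order fields with the media account reference."""
    payload = {
        "item_id": order["item_id"],
        "quantity": order.get("quantity", 1),
        "advertised_subject": order["advertised_subject"],
        "media_account": media_account_id,  # lets the shopping account be linked
    }
    return json.dumps(payload)


payload = build_order_payload(
    {"item_id": "sku-123", "advertised_subject": "running shoes"},
    media_account_id="media-acct-789",
)
print(payload)  # would be sent to the third-party shopping system's endpoint
```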
  • Publication number: 20230073893
    Abstract: Methods for accelerating matrix vector multiplication (MVM), long short-term memory (LSTM) systems, and integrated circuits for the same are described herein. In one example, a system for accelerating processing by an LSTM architecture includes a first processing circuitry (FPC) and a second processing circuitry (SPC). The FPC receives weight matrices from a trained neural network, and stores the weight matrices in a first memory circuitry. The SPC stores the weight matrices in a second memory circuitry, and generates an output vector based on the weight matrices and input vectors. The FPC further processes each of the weight matrices for communication to the SPC, and divides each weight matrix into a number of tiles based on available resources in the second memory circuitry and the size of the weight matrix. The SPC further applies each tile of each weight matrix to a corresponding input vector to generate the output vector.
    Type: Application
    Filed: September 7, 2021
    Publication date: March 9, 2023
    Inventors: Karri Manikantta REDDY, Akshay JAIN, Keshava Gopal Goud CHERUKU
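The tiling step in publication 20230073893 above can be illustrated with a software stand-in: a weight matrix is divided into column tiles no larger than a memory budget, and each tile is applied to the matching slice of the input vector, with the partial results accumulated. Tile sizing here is a toy proxy for the resource-based division the abstract describes.
```python
def tile_columns(num_cols: int, max_tile_cols: int):
    """Yield (start, end) column ranges no wider than max_tile_cols."""
    for start in range(0, num_cols, max_tile_cols):
        yield start, min(start + max_tile_cols, num_cols)


def tiled_matvec(weights, x, max_tile_cols=2):
    """Compute weights @ x one column tile at a time, accumulating outputs."""
    rows, cols = len(weights), len(weights[0])
    y = [0.0] * rows
    for start, end in tile_columns(cols, max_tile_cols):
        for r in range(rows):
            y[r] += sum(weights[r][c] * x[c] for c in range(start, end))
    return y


W = [[1.0, 2.0, 3.0, 4.0],
     [0.5, 0.0, 1.0, 2.0]]
x = [1.0, 1.0, 2.0, 0.5]
print(tiled_matvec(W, x))  # same result as an untiled matrix-vector product
```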
  • Publication number: 20220180060
    Abstract: In an approach to content-driven predictive auto-completion of IT queries, an input phrase for an inquiry is received, where the input phrase is a sequence of words. Next words for the input phrase are predicted, where the prediction is based on a deep neural network model that has been trained with a corpus of documents for a specific domain. The next words are appended to the input phrase to create one or more predicted phrases. The predicted phrases are then sorted based on a similarity computation between the predicted phrases and the corpus of documents for the specific domain.
    Type: Application
    Filed: December 9, 2020
    Publication date: June 9, 2022
    Inventors: Akshay Jain, Ruchi Mahindru, Soumitra Sarkar, Shu Tao
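A self-contained sketch of the auto-completion flow in publication 20220180060 above. The deep neural network is replaced here by a toy bigram frequency model so the example runs on its own; what it demonstrates is the predict-next-words, append, and sort-by-corpus-similarity sequence. The corpus, phrases, and scoring details are invented.
```python
from collections import Counter, defaultdict

corpus = [
    "reset my vpn password",
    "reset my email password",
    "vpn connection keeps dropping",
]

# Train a toy next-word model from bigram counts (stand-in for the DNN).
next_words = defaultdict(Counter)
for doc in corpus:
    tokens = doc.split()
    for a, b in zip(tokens, tokens[1:]):
        next_words[a][b] += 1


def cosine_similarity(a: Counter, b: Counter) -> float:
    dot = sum(a[t] * b[t] for t in a)
    norm = (sum(v * v for v in a.values()) ** 0.5) * (sum(v * v for v in b.values()) ** 0.5)
    return dot / norm if norm else 0.0


def complete(phrase: str, top_k: int = 2):
    """Predict next words, append them, and sort candidates by corpus similarity."""
    last = phrase.split()[-1]
    candidates = [f"{phrase} {word}" for word, _ in next_words[last].most_common(top_k)]
    corpus_bow = Counter(" ".join(corpus).split())
    return sorted(candidates,
                  key=lambda c: cosine_similarity(Counter(c.split()), corpus_bow),
                  reverse=True)


print(complete("reset my"))
```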
  • Patent number: 10936961
    Abstract: Methods and apparatuses are described for automated predictive product recommendations using reinforcement learning. A server captures historical activity data associated with a plurality of users. The server generates a context vector for each user, the context vector comprising a multidimensional array corresponding to historical activity data. The server transforms each context vector into a context embedding. The server assigns each context embedding to an embedding cluster. The server determines, for each context embedding, (i) an overall likelihood of successful attempt and (ii) an incremental likelihood of success associated with products available for recommendation. The server calculates, for each context embedding, an incremental income value associated with each of the likelihoods of success. The server aggregates (i) the overall likelihood of successful attempt, (ii) the incremental likelihoods of success, and (iii) the incremental income values into a recommendation matrix.
    Type: Grant
    Filed: August 7, 2020
    Date of Patent: March 2, 2021
    Assignee: FMR LLC
    Inventors: Akshay Jain, Debalina Gupta, Shishir Shekhar, Bernard Kleynhans, Serdar Kadioglu, Alex Arias-Vargas
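Patent 10936961 above aggregates likelihoods and incremental income values into a recommendation matrix. The sketch below shows one hypothetical way such an aggregation could look (one row per context, one column per product); the combination formula, product names, and numbers are assumptions, not the patented method.
```python
products = ["fund_a", "fund_b"]

# Hypothetical model outputs per context embedding.
contexts = {
    "user_1": {"overall": 0.40, "incremental": {"fund_a": 0.10, "fund_b": 0.03}},
    "user_2": {"overall": 0.15, "incremental": {"fund_a": 0.02, "fund_b": 0.08}},
}
income_per_product = {"fund_a": 120.0, "fund_b": 80.0}


def recommendation_matrix(contexts, products, income):
    """One hypothetical combination: overall * incremental likelihood * income."""
    matrix = {}
    for user, scores in contexts.items():
        matrix[user] = {
            p: scores["overall"] * scores["incremental"][p] * income[p]
            for p in products
        }
    return matrix


for user, row in recommendation_matrix(contexts, products, income_per_product).items():
    best = max(row, key=row.get)
    print(user, row, "-> recommend", best)
```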
  • Patent number: 10856205
    Abstract: A method includes identifying multiple communication paths extending from a source device and for each path, determining a data rate, a power level used to communicate over at least a portion of the path and a power per data rate value by dividing the power level by the data rate. One of the multiple communication paths is selected such that the selected communication path has the lowest power per data rate value given quality of service requirements of an application. The power per data rate can be optimized from the device perspective as well as an overall network perspective. A message is then sent over the selected communication path.
    Type: Grant
    Filed: December 18, 2015
    Date of Patent: December 1, 2020
    Assignee: ORANGE
    Inventors: John Benko, Akshay Jain, Dachuan Yu
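Patent 10856205 above (and its related publication 20180070285 below) selects the path with the lowest power per data rate that still satisfies the application's quality-of-service requirements. A minimal sketch, assuming the QoS requirement is a minimum data rate and using made-up path figures:
```python
paths = [
    # (name, data_rate_mbps, power_mw)
    ("wifi_direct", 54.0, 180.0),
    ("cellular", 20.0, 250.0),
    ("mesh_hop", 6.0, 40.0),
]

MIN_RATE_MBPS = 10.0  # hypothetical QoS requirement of the application


def select_path(candidates, min_rate):
    """Return the eligible path with the lowest power-per-data-rate value."""
    eligible = [(name, power / rate) for name, rate, power in candidates if rate >= min_rate]
    if not eligible:
        return None
    return min(eligible, key=lambda item: item[1])


print(select_path(paths, MIN_RATE_MBPS))  # -> ('wifi_direct', 3.33...)
```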
  • Publication number: 20190282701
    Abstract: The present invention relates to a polymer-drug conjugate wherein the polymer is hyaluronic acid and the drug is an anticancer compound. The anticancer compound is covalently linked to the hyaluronic acid by a pH-labile boronic acid-containing linkage. These conjugates can be used for the treatment of cancer.
    Type: Application
    Filed: July 25, 2017
    Publication date: September 19, 2019
    Inventors: Nicola TIRELLI, Vincenzo QUAGLIARIELLO, Alfonso BARBARISI, Francesco ROSSO, Som Akshay JAIN, Manlio BARBARISI, Rosario Vincenzo LAFFAIOLI, Ian James STRATFORD, Muna OQAL, Manal MEHIBEL
  • Patent number: 10404635
    Abstract: Aspects of the disclosure relate to optimizing data replication across multiple data centers. A computing platform may receive, from an authentication hub computing platform, an event message corresponding to an event associated with the authentication hub computing platform. In response to receiving the event message, the computing platform may transform the event message to produce multiple transformed messages. The multiple transformed messages may include a first transformed message associated with a first topic and a second transformed message associated with a second topic different from the first topic. Subsequently, the computing platform may send, to at least one messaging service computing platform associated with at least one other data center different from a data center associated with the computing platform, the multiple transformed messages.
    Type: Grant
    Filed: March 21, 2017
    Date of Patent: September 3, 2019
    Assignee: Bank of America Corporation
    Inventors: Tao Huang, Archie Agrawal, Akshay Jain, Xianhong Zhang
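Patent 10404635 above (and its related publication 20180278610 below) transforms one authentication event message into multiple topic-tagged messages for replication to other data centers. A hypothetical fan-out sketch follows; topic names, message fields, and the print stand-in for the remote messaging service call are assumptions.
```python
import json

TOPICS = ["auth-events-replica", "auth-events-audit"]


def transform(event: dict) -> list[dict]:
    """Produce one topic-specific message per configured topic."""
    return [
        {
            "topic": topic,
            "event_id": event["event_id"],
            "event_type": event["event_type"],
            "payload": event["payload"],
        }
        for topic in TOPICS
    ]


def send_to_remote_data_centers(messages: list[dict]) -> None:
    # Stand-in for the call to each remote messaging service computing platform.
    for message in messages:
        print("sending", json.dumps(message))


event = {"event_id": "e-42", "event_type": "login_success", "payload": {"user": "u1"}}
send_to_remote_data_centers(transform(event))
```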
  • Publication number: 20180318246
    Abstract: The present invention relates to pharmaceutical composition comprising dimethyl fumarate; an enzyme modulator or a permeation enhancer or both; and one or more pharmaceutically acceptable excipients. It further relates to a pulsatile release pharmaceutical composition comprising dimethyl fumarate and one or more pharmaceutically acceptable excipients. The compositions of the present invention are administered at a lower dose as compared to the recommended daily dose of Tecfidera®. Further, the compositions of the present invention are resistant to dose dumping in the presence of alcohol.
    Type: Application
    Filed: October 27, 2016
    Publication date: November 8, 2018
    Inventors: Chandrashekhar GARGOTE, Lalit GARG, Shrikant Vaijanathap HODGE, Subodh DESHMUKH, Romi Barat SINGH, Vikas BATRA, Som Akshay JAIN
  • Publication number: 20180278610
    Abstract: Aspects of the disclosure relate to optimizing data replication across multiple data centers. A computing platform may receive, from an authentication hub computing platform, an event message corresponding to an event associated with the authentication hub computing platform. In response to receiving the event message, the computing platform may transform the event message to produce multiple transformed messages. The multiple transformed messages may include a first transformed message associated with a first topic and a second transformed message associated with a second topic different from the first topic. Subsequently, the computing platform may send, to at least one messaging service computing platform associated with at least one other data center different from a data center associated with the computing platform, the multiple transformed messages.
    Type: Application
    Filed: March 21, 2017
    Publication date: September 27, 2018
    Inventors: Tao Huang, Archie Agrawal, Akshay Jain, Xianhong Zhang
  • Publication number: 20180070285
    Abstract: A method includes identifying multiple communication paths extending from a source device and for each path, determining a data rate, a power level used to communicate over at least a portion of the path and a power per data rate value by dividing the power level by the data rate. One of the multiple communication paths is selected such that the selected communication path has the lowest power per data rate value given quality of service requirements of an application. The power per data rate can be optimized from the device perspective as well as an overall network perspective. A message is then sent over the selected communication path.
    Type: Application
    Filed: December 18, 2015
    Publication date: March 8, 2018
    Inventors: John Benko, Akshay Jain, Dachuan Yu
  • Patent number: 9665465
    Abstract: This disclosure describes, in part, a system and process that allows for the automated review of a submitted application to determine the actually requested permissions and identify differences between the requested permissions and a submitted permissions list associated with the application. For example, when an application is submitted for review or approval it may include a submitted permissions list identifying the permissions that are to be associated with the application. In some instances, the permissions included in the submitted permissions list may not correspond with the permissions actually needed for proper operation of the application—there may be omitted permissions that should be included and/or permissions included in the permissions list that are never requested by the application. This disclosure describes, in part, a system and process for confirming that the appropriate permissions are included in the submitted permissions list.
    Type: Grant
    Filed: November 19, 2012
    Date of Patent: May 30, 2017
    Assignee: Amazon Technologies, Inc.
    Inventors: Akshay Jain, Nitin Kumar Grover
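Patent 9665465 above compares the permissions an application actually requests with the submitted permissions list. A minimal sketch of that comparison, with hypothetical permission names and an assumed upstream scanning step that produced the requested set:
```python
def review_permissions(requested: set[str], submitted: set[str]) -> dict:
    """Report permissions missing from the submission and never-requested entries."""
    return {
        "missing_from_submission": sorted(requested - submitted),
        "never_requested": sorted(submitted - requested),
        "matches": requested == submitted,
    }


requested_by_app = {"CAMERA", "ACCESS_FINE_LOCATION"}  # found by scanning the app
submitted_list = {"CAMERA", "READ_CONTACTS"}           # declared by the developer

print(review_permissions(requested_by_app, submitted_list))
# -> missing_from_submission: ['ACCESS_FINE_LOCATION'], never_requested: ['READ_CONTACTS']
```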
  • Publication number: 20160337833
    Abstract: A system and method for call routing among multiple service providers is disclosed. In one aspect, a call dialed by the user is directed to a call routing broker for determination of a cost-effective or otherwise best call option for the user. If an option to place the call through a service provider that does not incur an extra charge and does not accrue data or minutes against a plan quota is identified, the call will be placed through that service provider. If only an option to place the call through a service provider that does not incur an extra charge but does accrue data and/or minutes is identified, the call will be placed through that service provider. If only options that do incur an extra charge are identified, the call will be placed through a service provider selected based in part on the best rates.
    Type: Application
    Filed: January 16, 2015
    Publication date: November 17, 2016
    Applicant: ORANGE
    Inventors: John Benko, Akshay Jain, Dachuan Yu, Ombeline Choupin
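Publication 20160337833 above describes a broker that prefers a no-extra-charge, no-quota provider, then a no-extra-charge provider that consumes quota, and only then the best-rate paid provider. A sketch of that selection order, with invented provider data:
```python
providers = [
    {"name": "wifi_voip", "extra_charge": 0.00, "uses_quota": False, "available": True},
    {"name": "bundled_minutes", "extra_charge": 0.00, "uses_quota": True, "available": True},
    {"name": "premium_carrier", "extra_charge": 0.05, "uses_quota": False, "available": True},
]


def choose_provider(options):
    """Apply the tiered preference order described in the abstract."""
    available = [p for p in options if p["available"]]
    free_no_quota = [p for p in available if p["extra_charge"] == 0 and not p["uses_quota"]]
    if free_no_quota:
        return free_no_quota[0]
    free_with_quota = [p for p in available if p["extra_charge"] == 0]
    if free_with_quota:
        return free_with_quota[0]
    # Otherwise pick the paid option with the best (lowest) rate.
    return min(available, key=lambda p: p["extra_charge"], default=None)


print(choose_provider(providers)["name"])  # -> wifi_voip
```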
  • Publication number: 20140297288
    Abstract: A system and associated method are provided for using a voice-activated voice personal assistant (VPA) for a first user equipment, comprising: detecting establishment of a voice communication with a second user equipment; monitoring the voice communication using the VPA for commands relevant to the VPA; identifying, by the VPA, the commands within the voice communication; and implementing an action related to the commands during the ongoing voice communication.
    Type: Application
    Filed: March 28, 2014
    Publication date: October 2, 2014
    Inventors: Dachuan Yu, John Benko, Akshay Jain, Georges Nahon
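Publication 20140297288 above monitors an ongoing voice communication for commands the VPA recognizes. A toy sketch of that monitoring step over transcribed text follows; the command phrases and actions are hypothetical stand-ins.
```python
COMMANDS = {
    "send my location": lambda: print("[VPA] sharing location with the other party"),
    "schedule a meeting": lambda: print("[VPA] creating a calendar invite"),
}


def monitor_call(transcript_segments):
    """Check each transcribed segment for known commands and run their actions."""
    for segment in transcript_segments:
        text = segment.lower()
        for phrase, action in COMMANDS.items():
            if phrase in text:
                action()


monitor_call([
    "sure, let me send my location to you",
    "okay talk later",
])
```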