Patents by Inventor Akshay Jain

Akshay Jain has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Patent number: 12265124
    Abstract: According to an embodiment, a digital circuit with N number of redundant flip-flops is provided, each having a data input coupled to a common data signal. The digital circuit operates in a functional mode and a test mode. During test mode, a first flip-flop is arranged as part of a test path and N-1 flip-flops are arranged as shadow logic. A test pattern at the common data signal is provided and a test output signal is observed at an output terminal of the first flip-flop to determine faults within a test path of the first flip-flop. At the same cycle, the test output signals of each of the N-1 number of redundant flip-flops are observed through the functional path to determine faults.
    Type: Grant
    Filed: September 26, 2023
    Date of Patent: April 1, 2025
    Assignee: STMicroelectronics International N.V.
    Inventors: Sandeep Jain, Akshay Kumar Jain, Jeena Mary George
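
Patent 12265124 above observes one flip-flop through the test path and the remaining N-1 flip-flops through the functional path in the same cycle. The Python fragment below is only a behavioral sketch of that comparison (the function and fault model are invented for illustration), not the claimed circuit.

```python
# Behavioral sketch only: flop 0 is observed on the test path, flops 1..N-1 on
# the functional path; any observation that differs from the expected pattern
# value marks a fault in that flop's path.
def check_redundant_flops(observed_outputs, expected_bit):
    """observed_outputs[0] comes from the test path, the rest from functional outputs."""
    return [i for i, q in enumerate(observed_outputs) if q != expected_bit]

# Example cycle: the test pattern drove a 1, but the third flop's path reads 0.
print(check_redundant_flops([1, 1, 0, 1], expected_bit=1))  # -> [2]
```
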
  • Publication number: 20250102574
    Abstract: According to an embodiment, a digital circuit with N number of redundant flip-flops is provided, each having a data input coupled to a common data signal. The digital circuit operates in a functional mode and a test mode. During test mode, a first flip-flop is arranged as part of a test path and N-1 flip-flops are arranged as shadow logic. A test pattern at the common data signal is provided and a test output signal is observed at an output terminal of the first flip-flop to determine faults within a test path of the first flip-flop. At the same cycle, the test output signals of each of the N-1 number of redundant flip-flops are observed through the functional path to determine faults.
    Type: Application
    Filed: September 26, 2023
    Publication date: March 27, 2025
    Inventors: Sandeep Jain, Akshay Kumar Jain, Jeena Mary George
  • Patent number: 12248696
    Abstract: Example compute-in-memory (CIM) or processor-in-memory (PIM) techniques using repurposed or dedicated static random access memory (SRAM) rows of an SRAM sub-array to store look-up-table (LUT) entries for use in a multiply and accumulate (MAC) operation.
    Type: Grant
    Filed: June 7, 2021
    Date of Patent: March 11, 2025
    Assignee: Intel Corporation
    Inventors: Saurabh Jain, Srivatsa Rangachar Srinivasa, Akshay Krishna Ramanathan, Gurpreet Singh Kalsi, Kamlesh R. Pillai, Sreenivas Subramoney
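
Patent 12248696 above stores look-up-table entries in SRAM rows so that a multiply-and-accumulate reduces to table reads. The sketch below emulates that idea in NumPy (the row layout and 4-bit input width are assumptions for illustration); it is not the Intel circuit.

```python
# Illustrative sketch (not the patented circuit): emulating a look-up-table
# based multiply-accumulate, where precomputed partial products stand in for
# entries stored in repurposed SRAM rows.
import numpy as np

def build_lut(weights, input_bits=4):
    """Precompute weight * x for every possible input value (one LUT row per weight)."""
    values = np.arange(2 ** input_bits)            # all possible 4-bit inputs
    return np.outer(weights, values)               # shape: (num_weights, 2**input_bits)

def lut_mac(lut, inputs):
    """Accumulate by indexing the LUT instead of multiplying."""
    return sum(lut[i, x] for i, x in enumerate(inputs))

weights = np.array([3, -1, 2, 5])
inputs = [7, 2, 0, 15]                              # 4-bit activations
assert lut_mac(build_lut(weights), inputs) == int(np.dot(weights, inputs))
```
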
  • Publication number: 20250071526
    Abstract: There are provided measures for data collection optimization. Such measures exemplarily comprise selecting, out of a plurality of radio equipment entities, a set of radio equipment entities, based on correlation information reflecting at least temporal and/or spatial correlation among data from said plurality of radio equipment entities, and determining a movement trajectory for a movable data collection entity configured for reading-out data from said plurality of radio equipment entities, said movement trajectory being a path with a minimum length among paths connecting all data read-out zones corresponding to all radio equipment entities of said set of radio equipment entities.
    Type: Application
    Filed: August 21, 2024
    Publication date: February 27, 2025
    Inventors: Karthik UPADHYA, Akshay JAIN
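
Publication 20250071526 above selects a decorrelated subset of radio equipment entities and then plans a minimum-length trajectory through their read-out zones. The sketch below illustrates both steps with a correlation threshold and a nearest-neighbor tour; both are simplifying assumptions, since the filing claims a minimum-length path rather than this heuristic.

```python
# Illustrative only: drop entities that are highly correlated with ones already
# kept, then order the remaining read-out zones with a greedy tour.
from math import dist

def select_entities(correlation, threshold=0.9):
    """Keep an entity only if it is not highly correlated with one already kept."""
    kept = []
    for i in range(len(correlation)):
        if all(correlation[i][j] < threshold for j in kept):
            kept.append(i)
    return kept

def plan_trajectory(points, start=(0.0, 0.0)):
    """Greedy nearest-neighbor tour over the selected read-out zones."""
    remaining, path, current = list(points), [], start
    while remaining:
        nxt = min(remaining, key=lambda p: dist(current, p))
        remaining.remove(nxt)
        path.append(nxt)
        current = nxt
    return path

corr = [[1.0, 0.95, 0.2], [0.95, 1.0, 0.3], [0.2, 0.3, 1.0]]
zones = [(0.0, 1.0), (2.0, 2.0), (5.0, 0.5)]
kept = select_entities(corr)                       # entity 1 dropped: correlated with 0
print(plan_trajectory([zones[i] for i in kept]))   # [(0.0, 1.0), (5.0, 0.5)]
```
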
  • Patent number: 12182180
    Abstract: The model generation platform enables generation of machine learning models based on modular storage of user-specified training data. The platform can obtain a dataset from a different heterogeneous source (e.g., a structured database, an unstructured database, a semi-structured file system, manual upload of a comma-separated value file, a spreadsheet, and/or big data) through an associated application programming interface and store this data in a first storage medium. The model generation platform can obtain an indication of a portion of the dataset from a user via a user interface and determine a second storage medium for this portion of the dataset based on an associated estimated performance metric. In response to a request for generation of a machine learning model, the model generation platform can generate a machine learning model using training data comprising a subset of the portion of the dataset.
    Type: Grant
    Filed: April 4, 2024
    Date of Patent: December 31, 2024
    Assignee: Citibank, N.A.
    Inventors: Sanakar Narayan Karasudha Patnaik, Akshay Jain
  • Publication number: 20240119003
    Abstract: The present disclosure relates to systems, non-transitory computer-readable media, and methods that utilize a low-latency machine learning model prediction cache for improving distribution of current state machine learning predictions across computer networks. In particular, in one or more implementations, the disclosed systems utilize a prediction registration platform for defining prediction datatypes and corresponding machine learning model prediction templates. Moreover, in one or more embodiments, the disclosed systems generate a machine learning data repository that includes predictions generated from input features utilizing machine learning models. From this repository, the disclosed systems also generate a low-latency machine learning prediction cache by extracting current state machine learning model predictions according to the machine learning prediction templates and then utilize the low-latency machine learning prediction cache to respond to queries for machine learning model predictions.
    Type: Application
    Filed: October 5, 2022
    Publication date: April 11, 2024
    Inventors: Greg Tobkin, Akshay Jain, Frank Teoh, Paul Zeng, Peeyush Agarwal, Sashidhar Guntury
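
Publication 20240119003 above registers prediction templates and keeps only current-state predictions in a low-latency cache. Below is a minimal sketch of that flow; the datatype, field names, and in-memory dict standing in for the cache are all illustrative assumptions, not the disclosed system.

```python
# Minimal sketch: a registry maps a prediction datatype to a template of expected
# fields, and a cache holds only the current-state prediction per entity.
templates = {"credit_risk": {"score": float, "model_version": str}}

class PredictionCache:
    def __init__(self):
        self._cache = {}                       # (datatype, entity_id) -> prediction

    def put(self, datatype, entity_id, prediction):
        template = templates[datatype]
        # Keep only the fields named in the registered template.
        current = {k: prediction[k] for k in template}
        self._cache[(datatype, entity_id)] = current

    def get(self, datatype, entity_id):
        return self._cache.get((datatype, entity_id))

cache = PredictionCache()
cache.put("credit_risk", "user-42", {"score": 0.87, "model_version": "v3", "extra": 1})
print(cache.get("credit_risk", "user-42"))     # {'score': 0.87, 'model_version': 'v3'}
```
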
  • Publication number: 20240119364
    Abstract: The present disclosure relates to systems, non-transitory computer-readable media, and methods for automatically generating and executing machine learning pipelines based on a variety of user selections of various settings, machine learning structures, and other machine learning pipeline criteria. In particular, in one or more embodiments, the disclosed systems utilize user input selecting various machine learning pipeline settings to generate machine learning model pipeline files. Further, the disclosed systems execute and deploy the machine learning pipelines based on user-selected schedules. In some embodiments, the disclosed systems also register the machine learning pipelines and associated machine learning pipeline data in a machine learning pipeline registry. Further, the disclosed systems can generate and provide a machine learning pipeline graphical user interface for monitoring and managing machine learning pipelines.
    Type: Application
    Filed: September 21, 2023
    Publication date: April 11, 2024
    Inventors: Akshay Jain, Frank Teoh, Peeyush Agarwal, Michael Tompkins, Sashidhar Guntury, Yunfan Zhong, Greg Tobkin
  • Publication number: 20240037378
    Abstract: Systems, apparatuses and methods may provide for technology that identifies an embedding table associated with a neural network. The neural network is associated with a plurality of compute nodes. The technology further identifies a number of entries of the embedding table, and determines whether to process gradients associated with the embedding table as dense gradients or sparse gradients based on the number of entries.
    Type: Application
    Filed: December 24, 2020
    Publication date: February 1, 2024
    Applicant: Intel Corporation
    Inventors: Guokai Ma, Jiong Gong, Dhiraj Kalamkar, Rachitha Prem Seelin, Hongzhen Liu, Akshay Jain, Liangang Zhang
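
Publication 20240037378 above decides whether an embedding table's gradients should be communicated as dense or sparse based on the table's number of entries. The rule below sketches that decision; the threshold value is an assumption for illustration, not taken from the filing.

```python
# Illustrative heuristic only: small embedding tables tend to be touched densely
# each step, so treat their gradients as dense; large tables see few row updates
# per step, so keep them sparse.
def gradient_mode(num_entries, dense_threshold=100_000):
    """Return how gradients for an embedding table should be communicated."""
    return "dense" if num_entries <= dense_threshold else "sparse"

print(gradient_mode(5_000))       # dense
print(gradient_mode(10_000_000))  # sparse
```
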
  • Publication number: 20230229735
    Abstract: The present disclosure relates to systems, non-transitory computer-readable media, and methods that implement a pre-defined model container workflow allowing computing devices to flexibly and efficiently define, train, deploy, and maintain machine-learning models. For instance, the disclosed systems can provide scaffolding and boilerplate code for machine-learning models. To illustrate, boilerplate code can include predetermined designs of base classes for common use cases like training, batch inference, etc. In addition, the scaffolding provides an opinionated directory structure for organizing code of a machine-learning model. Further, the disclosed systems can provide containerization and various tooling (e.g., command interface tooling, platform upgrade tooling, and model repository management tooling). Additionally, the disclosed systems can provide out of the box compatibility with one or more different compute instances for increased flexibility and cross-system integration.
    Type: Application
    Filed: January 18, 2022
    Publication date: July 20, 2023
    Inventors: Akshay Jain, Frank Teoh, Greg Tobkin, Michael Tompkins, Peeyush Agarwal, Sashidhar Guntury, Yunfan Zhong
  • Publication number: 20230196185
    Abstract: This disclosure describes a feature family system that, as part of an inter-network facilitation system, can intelligently generate and maintain a feature family repository for quickly and efficiently retrieving and providing machine learning features upon request. For example, the disclosed systems can generate a feature family repository as a centralized network location of feature references indicating network locations where different machine learning features are stored. In some cases, the disclosed systems identify a stored feature family that matches the request and retrieves the stored features from their respective network locations. The disclosed systems can generate feature families for online features as well as offline features and can automatically update feature values associated with various machine learning features on a periodic basis or in response to trigger events.
    Type: Application
    Filed: December 21, 2021
    Publication date: June 22, 2023
    Inventors: Akshay Jain, Peeyush Agarwal, Frank Teoh
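
Publication 20230196185 above centralizes references to where machine learning features are stored and resolves a requested feature family to those locations. The sketch below uses plain dicts and invented reference strings to illustrate the lookup; it is not the disclosed repository.

```python
# Toy sketch: the repository keeps only references to where each feature lives,
# and resolves a family name to its stored values on request. Backends and
# reference strings are invented for illustration.
feature_store = {
    "s3://features/txn_count_30d": 12,
    "redis://features/last_login_days": 3,
}

feature_families = {
    "fraud_v1": ["s3://features/txn_count_30d", "redis://features/last_login_days"],
}

def get_family(name):
    """Resolve every feature reference in the family to its current value."""
    return {ref: feature_store[ref] for ref in feature_families[name]}

print(get_family("fraud_v1"))
```
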
  • Publication number: 20230097558
    Abstract: Disclosed herein are system, apparatus, article of manufacture, method and/or computer program product embodiments, and/or combinations and sub-combinations thereof, for a multimedia environment that includes a computing device, a media server system, and a third party shopping system. The computing device can display an advertisement including a reference to an advertised subject provided by the third party shopping system. The computing device can further display media content provided by the media server system. In response to receiving an indication to place an order for an item associated with the advertised subject, the computing device can further collect information included in the order. The computing device can further transmit, to the third party shopping system, at least a portion of the order and media account information. The order is recorded in a user shopping account corresponding to the media account associated with the computing device.
    Type: Application
    Filed: September 29, 2021
    Publication date: March 30, 2023
    Applicant: Roku, Inc.
    Inventors: Shravan MAJITHIA, Michael Veach, Derrick Johnson, Akshay Jain, Vinay Muthreja Ashok, Shyam Srinivas
  • Publication number: 20230073893
    Abstract: Methods for accelerating matrix vector multiplication (MVM), long short-term memory (LSTM) systems, and integrated circuits for the same are described herein. In one example, a system for accelerating processing by an LSTM architecture includes a first processing circuitry (FPC) and a second processing circuitry (SPC). The FPC receives weight matrices from a trained neural network, and stores the weight matrices in a first memory circuitry. The SPC stores the weight matrices in a second memory circuitry, and generates an output vector based on the weight matrices and input vectors. The FPC further processes each of the weight matrices for communication to the SPC, and divides each weight matrix into a number of tiles based on available resources in the second memory circuitry and a size of the weight matrix. The SPC further applies each tile of each weight matrix to a corresponding input vector to generate the output vector.
    Type: Application
    Filed: September 7, 2021
    Publication date: March 9, 2023
    Inventors: Karri Manikantta REDDY, Akshay JAIN, Keshava Gopal Goud CHERUKU
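
Publication 20230073893 above divides each weight matrix into tiles sized to the second memory and applies each tile to the matching slice of the input vector. The NumPy sketch below shows only that tiling-and-accumulate step with an arbitrary tile size; the accelerator's two processing circuitries are not modeled.

```python
# Sketch of the tiling step only: split a weight matrix into tiles, apply each
# tile to the matching slice of the input vector, and accumulate partial outputs.
import numpy as np

def tile_matrix(w, tile_rows, tile_cols):
    """Yield (row_offset, col_offset, tile) covering the whole matrix."""
    for r in range(0, w.shape[0], tile_rows):
        for c in range(0, w.shape[1], tile_cols):
            yield r, c, w[r:r + tile_rows, c:c + tile_cols]

def tiled_mvm(w, x, tile_rows=2, tile_cols=2):
    y = np.zeros(w.shape[0])
    for r, c, tile in tile_matrix(w, tile_rows, tile_cols):
        y[r:r + tile.shape[0]] += tile @ x[c:c + tile.shape[1]]
    return y

w, x = np.arange(12).reshape(3, 4).astype(float), np.ones(4)
assert np.allclose(tiled_mvm(w, x), w @ x)
```
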
  • Publication number: 20220180060
    Abstract: In an approach to content-driven predictive auto-completion of IT queries, an input phrase for an inquiry is received, where the input phrase is a sequence of words. Next words for the input phrase are predicted, where the prediction is based on a deep neural network model that has been trained with a corpus of documents for a specific domain. The next words are appended to the input phrase to create one or more predicted phrases. The predicted phrases are sorted, where the predicted phrases are sorted based on a similarity computation between the predicted phrases and the corpus of documents for the specific domain.
    Type: Application
    Filed: December 9, 2020
    Publication date: June 9, 2022
    Inventors: Akshay Jain, Ruchi Mahindru, Soumitra Sarkar, Shu Tao
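
Publication 20220180060 above predicts next words for an input phrase, appends them to form candidate phrases, and sorts the candidates by similarity to a domain corpus. The toy sketch below substitutes a bigram count for the deep neural network and word overlap for the similarity computation, so it only mirrors the shape of the approach.

```python
# Toy sketch under heavy assumptions: bigram counts stand in for the trained
# model, and shared-word counts stand in for the similarity computation.
from collections import Counter

corpus = ["reset my vpn password", "reset my email password", "vpn connection error"]
bigrams = Counter()
for doc in corpus:
    words = doc.split()
    bigrams.update(zip(words, words[1:]))

def predict_phrases(prefix, k=2):
    last = prefix.split()[-1]
    candidates = [(w2, n) for (w1, w2), n in bigrams.items() if w1 == last]
    phrases = [f"{prefix} {w}" for w, _ in sorted(candidates, key=lambda t: -t[1])[:k]]
    # Sort the predicted phrases by similarity to the domain corpus.
    corpus_words = set(" ".join(corpus).split())
    return sorted(phrases, key=lambda p: -len(set(p.split()) & corpus_words))

print(predict_phrases("reset my"))   # e.g. ['reset my vpn', 'reset my email']
```
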
  • Patent number: 10936961
    Abstract: Methods and apparatuses are described for automated predictive product recommendations using reinforcement learning. A server captures historical activity data associated with a plurality of users. The server generates a context vector for each user, the context vector comprising a multidimensional array corresponding to historical activity data. The server transforms each context vector into a context embedding. The server assigns each context embedding to an embedding cluster. The server determines, for each context embedding, (i) an overall likelihood of successful attempt and (ii) an incremental likelihood of success associated with products available for recommendation. The server calculates, for each context embedding, an incremental income value associated with each of the likelihoods of success. The server aggregates (i) the overall likelihood of successful attempt, (ii) the likelihoods of success, and (iii) the incremental income values into a recommendation matrix.
    Type: Grant
    Filed: August 7, 2020
    Date of Patent: March 2, 2021
    Assignee: FMR LLC
    Inventors: Akshay Jain, Debalina Gupta, Shishir Shekhar, Bernard Kleynhans, Serdar Kadioglu, Alex Arias-Vargas
  • Patent number: 10856205
    Abstract: A method includes identifying multiple communication paths extending from a source device and for each path, determining a data rate, a power level used to communicate over at least a portion of the path and a power per data rate value by dividing the power level by the data rate. One of the multiple communication paths is selected such that the selected communication path has the lowest power per data rate value given quality of service requirements of an application. The power per data rate can be optimized from the device perspective as well as an overall network perspective. A message is then sent over the selected communication path.
    Type: Grant
    Filed: December 18, 2015
    Date of Patent: December 1, 2020
    Assignee: ORANGE
    Inventors: John Benko, Akshay Jain, Dachuan Yu
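
Patent 10856205 above selects, among the paths that satisfy an application's quality-of-service requirements, the one with the lowest power per data rate. The sketch below reduces that rule to a few lines; the path records and the minimum-rate constraint are illustrative assumptions.

```python
# Minimal sketch of the selection rule: filter paths by the QoS requirement,
# then pick the lowest power-per-data-rate among the feasible ones.
def select_path(paths, min_rate_mbps):
    """paths: list of dicts with 'name', 'rate_mbps', and 'power_mw'."""
    feasible = [p for p in paths if p["rate_mbps"] >= min_rate_mbps]
    return min(feasible, key=lambda p: p["power_mw"] / p["rate_mbps"]) if feasible else None

paths = [
    {"name": "wifi",     "rate_mbps": 50.0, "power_mw": 300.0},
    {"name": "cellular", "rate_mbps": 20.0, "power_mw": 250.0},
    {"name": "ble_mesh", "rate_mbps": 1.0,  "power_mw": 10.0},
]
print(select_path(paths, min_rate_mbps=5.0)["name"])   # 'wifi' (6 mW per Mbps)
```
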
  • Publication number: 20190282701
    Abstract: The present invention relates to a polymer-drug conjugate wherein the polymer is hyaluronic acid and the drug is an anticancer compound. The anticancer compound is covalently linked to the hyaluronic acid by a pH-labile boronic acid-containing linkage. These conjugates can be used for the treatment of cancer.
    Type: Application
    Filed: July 25, 2017
    Publication date: September 19, 2019
    Inventors: Nicola TIRELLI, Vincenzo QUAGLIARIELLO, Alfonso BARBARISI, Francesco ROSSO, Som Akshay JAIN, Manlio BARBARISI, Rosario Vincenzo LAFFAIOLI, Ian James STRATFORD, Muna OQAL, Manal MEHIBEL
  • Patent number: 10404635
    Abstract: Aspects of the disclosure relate to optimizing data replication across multiple data centers. A computing platform may receive, from an authentication hub computing platform, an event message corresponding to an event associated with the authentication hub computing platform. In response to receiving the event message, the computing platform may transform the event message to produce multiple transformed messages. The multiple transformed messages may include a first transformed message associated with a first topic and a second transformed message associated with a second topic different from the first topic. Subsequently, the computing platform may send, to at least one messaging service computing platform associated with at least one other data center different from a data center associated with the computing platform, the multiple transformed messages.
    Type: Grant
    Filed: March 21, 2017
    Date of Patent: September 3, 2019
    Assignee: Bank of America Corporation
    Inventors: Tao Huang, Archie Agrawal, Akshay Jain, Xianhong Zhang
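
Patent 10404635 above transforms one authentication event into multiple topic-tagged messages and forwards them to messaging services in other data centers. The sketch below shows only that fan-out shape; the event fields, topic names, and print-based send are invented for illustration.

```python
# Minimal sketch of the fan-out: one event becomes several transformed messages,
# each tagged with a different topic, sent to every peer data center.
def transform_event(event, topics=("auth-audit", "auth-replication")):
    """Produce one transformed message per topic from a single event."""
    return [{"topic": t, "payload": event} for t in topics]

def send_to_peers(messages, peer_datacenters):
    for dc in peer_datacenters:
        for msg in messages:
            print(f"send {msg['topic']} -> {dc}")   # stand-in for a messaging service call

event = {"user": "u-1001", "action": "login", "result": "success"}
send_to_peers(transform_event(event), ["dc-east", "dc-west"])
```
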
  • Publication number: 20180318246
    Abstract: The present invention relates to pharmaceutical composition comprising dimethyl fumarate; an enzyme modulator or a permeation enhancer or both; and one or more pharmaceutically acceptable excipients. It further relates to a pulsatile release pharmaceutical composition comprising dimethyl fumarate and one or more pharmaceutically acceptable excipients. The compositions of the present invention are administered at a lower dose as compared to the recommended daily dose of Tecfidera®. Further, the compositions of the present invention are resistant to dose dumping in the presence of alcohol.
    Type: Application
    Filed: October 27, 2016
    Publication date: November 8, 2018
    Inventors: Chandrashekhar GARGOTE, Lalit GARG, Shrikant Vaijanathap HODGE, Subodh DESHMUKH, Romi Barat SINGH, Vikas BATRA, Som Akshay JAIN
  • Publication number: 20180278610
    Abstract: Aspects of the disclosure relate to optimizing data replication across multiple data centers. A computing platform may receive, from an authentication hub computing platform, an event message corresponding to an event associated with the authentication hub computing platform. In response to receiving the event message, the computing platform may transform the event message to produce multiple transformed messages. The multiple transformed messages may include a first transformed message associated with a first topic and a second transformed message associated with a second topic different from the first topic. Subsequently, the computing platform may send, to at least one messaging service computing platform associated with at least one other data center different from a data center associated with the computing platform, the multiple transformed messages.
    Type: Application
    Filed: March 21, 2017
    Publication date: September 27, 2018
    Inventors: Tao Huang, Archie Agrawal, Akshay Jain, Xianhong Zhang
  • Publication number: 20180070285
    Abstract: A method includes identifying multiple communication paths extending from a source device and for each path, determining a data rate, a power level used to communicate over at least a portion of the path and a power per data rate value by dividing the power level by the data rate. One of the multiple communication paths is selected such that the selected communication path has the lowest power per data rate value given quality of service requirements of an application. The power per data rate can be optimized from the device perspective as well as an overall network perspective. A message is then sent over the selected communication path.
    Type: Application
    Filed: December 18, 2015
    Publication date: March 8, 2018
    Inventors: John Benko, Akshay Jain, Dachuan Yu