Patents by Inventor Rejith George Joseph

Rejith George Joseph has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Patent number: 11314551
    Abstract: A scheduler of a batch job management service determines that a set of resources of a client is insufficient to execute one or more jobs. The scheduler prepares a multi-dimensional statistical representation of resource requirements of the jobs, and transmits it to a resource controller. The resource controller uses the multi-dimensional representation and resource usage state information to make resource allocation change decisions.
    Type: Grant
    Filed: March 13, 2020
    Date of Patent: April 26, 2022
    Assignee: Amazon Technologies, Inc.
    Inventors: Dougal Stuart Ballantyne, James Edward Kinney, Jr., Aswin Damodar, Chetan Hosmani, Rejith George Joseph, Chris William Ramsey, Kiuk Chung, Jason Roy Rupard
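    The entry above (11314551) summarizes unmet job resource requirements as a multi-dimensional statistical representation for a resource controller. Below is a minimal Python sketch of that idea, assuming the representation is a two-dimensional histogram over vCPU and memory requirements; the bucket boundaries, the job data, and the controller's decision rule are illustrative assumptions, not details taken from the patent.

        from collections import Counter

        # Hypothetical pending jobs the scheduler could not place with the client's
        # current resources: (vCPUs required, memory in GiB required).
        pending_jobs = [(2, 4), (2, 8), (4, 16), (8, 32), (2, 4), (4, 16)]

        def bucket(value, boundaries):
            """Map a raw requirement onto a coarse bucket (illustrative boundaries)."""
            for b in boundaries:
                if value <= b:
                    return b
            return boundaries[-1]

        def build_representation(jobs):
            """Two-dimensional histogram of (vCPU bucket, memory bucket) counts,
            standing in for the multi-dimensional statistical representation the
            scheduler transmits to the resource controller."""
            vcpu_buckets = [2, 4, 8, 16]
            mem_buckets = [4, 8, 16, 32, 64]
            hist = Counter()
            for vcpus, mem in jobs:
                hist[(bucket(vcpus, vcpu_buckets), bucket(mem, mem_buckets))] += 1
            return hist

        def allocation_change(hist, free_vcpus, free_mem_gib):
            """Toy resource-controller decision: request enough extra capacity to
            cover the histogram's aggregate demand beyond current free capacity."""
            need_vcpus = sum(v * n for (v, _), n in hist.items())
            need_mem = sum(m * n for (_, m), n in hist.items())
            return {"add_vcpus": max(0, need_vcpus - free_vcpus),
                    "add_mem_gib": max(0, need_mem - free_mem_gib)}

        representation = build_representation(pending_jobs)
        print(representation)
        print(allocation_change(representation, free_vcpus=8, free_mem_gib=32))
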
  • Patent number: 10936432
    Abstract: Methods, systems, and computer-readable media for implementing a fault-tolerant parallel computation framework are disclosed. Execution of an application comprises execution of a plurality of processes in parallel. Process states for the processes are stored during the execution of the application. The processes use a message passing interface for exchanging messages with one another. The messages are exchanged and the process states are stored at a plurality of checkpoints during execution of the application. A final successful checkpoint is determined after the execution of the application is terminated. The final successful checkpoint represents the most recent checkpoint at which the processes exchanged messages successfully. Execution of the application is resumed from the final successful checkpoint using the process states stored at the final successful checkpoint.
    Type: Grant
    Filed: September 24, 2014
    Date of Patent: March 2, 2021
    Assignee: Amazon Technologies, Inc.
    Inventors: Tin-Yu Lee, Rejith George Joseph, Scott Michael Le Grand, Saurabh Dileep Baji
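    The checkpointing scheme in 10936432 stores per-process states at checkpoints and resumes from the most recent checkpoint at which every process succeeded. A minimal Python sketch, assuming states are written as per-rank JSON files and the "final successful checkpoint" is simply the latest step for which all ranks have a file; the file layout and rank count are hypothetical, and the actual MPI message exchange is omitted.

        import glob
        import json
        import os

        CHECKPOINT_DIR = "checkpoints"   # hypothetical location for saved process states
        NUM_RANKS = 4                    # number of parallel processes in the job

        def save_checkpoint(rank, step, state):
            """Persist one process's state at a checkpoint (a stand-in for the
            framework's durable state store)."""
            os.makedirs(CHECKPOINT_DIR, exist_ok=True)
            path = os.path.join(CHECKPOINT_DIR, f"step{step:06d}_rank{rank}.json")
            with open(path, "w") as f:
                json.dump(state, f)

        def final_successful_checkpoint():
            """Return the most recent step at which every rank stored its state,
            i.e. the step from which execution can safely resume."""
            steps = {}
            for path in glob.glob(os.path.join(CHECKPOINT_DIR, "step*_rank*.json")):
                name = os.path.basename(path)
                steps.setdefault(int(name[4:10]), set()).add(name)
            complete = [s for s, files in steps.items() if len(files) == NUM_RANKS]
            return max(complete) if complete else None

        def load_states(step):
            """Load every rank's state at the chosen checkpoint for resumption."""
            states = {}
            for rank in range(NUM_RANKS):
                path = os.path.join(CHECKPOINT_DIR, f"step{step:06d}_rank{rank}.json")
                with open(path) as f:
                    states[rank] = json.load(f)
            return states

        # Simulate: all ranks checkpoint at step 1, but rank 3 fails before step 2,
        # so the application resumes from step 1.
        for rank in range(NUM_RANKS):
            save_checkpoint(rank, 1, {"counter": rank * 10})
        for rank in range(NUM_RANKS - 1):
            save_checkpoint(rank, 2, {"counter": rank * 10 + 1})
        step = final_successful_checkpoint()
        print("resuming from checkpoint", step, load_states(step))
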
  • Patent number: 10896459
    Abstract: Some aspects of the present disclosure relate to generating and training a neural network by separating historical item interaction data into both inputs and outputs. This may be done, for example, based on date. For example, a neural network machine learning technique may be used to generate a prediction model using a set of inputs that includes both a number of items purchased by a number of users before a certain date as well as some or all attributes of those items, and a set of outputs that includes the items purchased after that date. The items purchased before that date and the associated attributes can be subjected to a time-decay function.
    Type: Grant
    Filed: April 7, 2020
    Date of Patent: January 19, 2021
    Assignee: Amazon Technologies, Inc.
    Inventors: Rejith George Joseph, Oleg Rybakov
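    Patents 10896459 and 10650432 (below) split purchase history by date into network inputs, weighted by a time-decay function, and outputs. A minimal Python sketch of that split, assuming an exponential decay with a hypothetical 90-day half-life; the split date and purchase records are invented for illustration, and the neural network training itself is omitted.

        from datetime import date

        # Hypothetical purchase history: (user, item, purchase date).
        purchases = [
            ("u1", "itemA", date(2016, 1, 10)),
            ("u1", "itemB", date(2016, 5, 2)),
            ("u1", "itemC", date(2016, 9, 20)),
            ("u2", "itemA", date(2016, 3, 3)),
            ("u2", "itemD", date(2016, 8, 14)),
        ]

        SPLIT_DATE = date(2016, 7, 1)   # purchases before this date become inputs,
                                        # purchases on or after it become targets
        HALF_LIFE_DAYS = 90.0           # illustrative half-life for the decay weight

        def decay_weight(purchase_date):
            """Exponential time-decay: older purchases contribute less to the input."""
            age_days = (SPLIT_DATE - purchase_date).days
            return 0.5 ** (age_days / HALF_LIFE_DAYS)

        def build_examples(history):
            """Split each user's history into decayed input weights and output labels."""
            inputs, outputs = {}, {}
            for user, item, purchased_on in history:
                if purchased_on < SPLIT_DATE:
                    user_in = inputs.setdefault(user, {})
                    user_in[item] = user_in.get(item, 0.0) + decay_weight(purchased_on)
                else:
                    outputs.setdefault(user, set()).add(item)
            return inputs, outputs

        inputs, outputs = build_examples(purchases)
        print(inputs)    # per-user, time-decayed item weights used as network inputs
        print(outputs)   # per-user items purchased after the split date (targets)
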
  • Publication number: 20200218569
    Abstract: A scheduler of a batch job management service determines that a set of resources of a client is insufficient to execute one or more jobs. The scheduler prepares a multi-dimensional statistical representation of resource requirements of the jobs, and transmits it to a resource controller. The resource controller uses the multi-dimensional representation and resource usage state information to make resource allocation change decisions.
    Type: Application
    Filed: March 13, 2020
    Publication date: July 9, 2020
    Applicant: Amazon Technologies, Inc.
    Inventors: Dougal Stuart Ballantyne, James Edward Kinney, Jr., Aswin Damodar, Chetan Hosmani, Rejith George Joseph, Chris William Ramsey, Kiuk Chung, Jason Roy Rupard
  • Patent number: 10659523
    Abstract: At the request of a customer, a distributed computing service provider may create multiple clusters under a single customer account, and may isolate them from each other. For example, various isolation mechanisms (or combinations of isolation mechanisms) may be applied when creating the clusters to isolate a given cluster of compute nodes from network traffic from compute nodes of other clusters (e.g., by creating the clusters in different VPCs); to restrict access to data, metadata, or resources that are within the given cluster of compute nodes or that are associated with the given cluster of compute nodes by compute nodes of other clusters in the distributed computing system (e.g., using an instance metadata tag and/or a storage system prefix); and/or to restrict access to application programming interfaces of the distributed computing service by the given cluster of compute nodes (e.g., using an identity and access manager).
    Type: Grant
    Filed: May 23, 2014
    Date of Patent: May 19, 2020
    Assignee: Amazon Technologies, Inc.
    Inventors: Rejith George Joseph, Tin-Yu Lee, Scott Michael Le Grand, Saurabh Dileep Baji
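    The isolation mechanisms listed in 10659523 (separate VPCs, instance metadata tags, storage prefixes, and an identity and access manager) can be pictured as a per-cluster bundle of boundaries that must never be shared across clusters under the same account. A minimal Python sketch under that reading; the field names and identifiers are hypothetical, not the service's actual API.

        from dataclasses import dataclass

        @dataclass(frozen=True)
        class ClusterIsolation:
            """Illustrative bundle of isolation mechanisms applied per cluster."""
            cluster_id: str
            vpc_id: str          # separate VPC: blocks network traffic from other clusters
            metadata_tag: str    # instance metadata tag scoping access to cluster metadata
            storage_prefix: str  # storage-system prefix scoping access to cluster data
            iam_role: str        # role limiting which service APIs the nodes may call

        def verify_isolated(clusters):
            """Check that no two clusters under one account share an isolation boundary."""
            for field in ("vpc_id", "metadata_tag", "storage_prefix", "iam_role"):
                values = [getattr(c, field) for c in clusters]
                if len(values) != len(set(values)):
                    raise ValueError(f"clusters share a {field}; they are not isolated")

        clusters = [
            ClusterIsolation("cluster-1", "vpc-aaa", "tag:cluster-1",
                             "bucket/cluster-1/", "role-cluster-1"),
            ClusterIsolation("cluster-2", "vpc-bbb", "tag:cluster-2",
                             "bucket/cluster-2/", "role-cluster-2"),
        ]
        verify_isolated(clusters)   # passes: every boundary is distinct
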
  • Patent number: 10650432
    Abstract: Some aspects of the present disclosure relate to generating and training a neural network by separating historical item interaction data into both inputs and outputs. This may be done, for example, based on date. For example, a neural network machine learning technique may be used to generate a prediction model using a set of inputs that includes both a number of items purchased by a number of users before a certain date as well as some or all attributes of those items, and a set of outputs that includes the items purchased after that date. The items purchased before that date and the associated attributes can be subjected to a time-decay function.
    Type: Grant
    Filed: November 28, 2016
    Date of Patent: May 12, 2020
    Assignee: Amazon Technologies, Inc.
    Inventors: Rejith George Joseph, Oleg Rybakov
  • Patent number: 10635973
    Abstract: Techniques described herein are directed to improved artificial neural network machine learning techniques that may be employed with a recommendation system to provide predictions with improved accuracy. In some embodiments, item consumption events may be identified for a plurality of users. From these item consumption events, a set of inputs and a set of outputs may be generated according to a data split. In some embodiments, the set of outputs (and potentially the set of inputs) may include item consumption events that are weighted according to a time-decay function. Once a set of inputs and a set of outputs are identified, they may be used to train a prediction model using an artificial neural network. The prediction model may then be used to identify predictions for a specific user based on user-specific item consumption event data.
    Type: Grant
    Filed: June 28, 2016
    Date of Patent: April 28, 2020
    Assignee: Amazon Technologies, Inc.
    Inventors: Leo Parker Dirac, Rejith George Joseph, Vijai Mohan, Oleg Rybakov
  • Patent number: 10592280
    Abstract: A scheduler of a batch job management service determines that a set of resources of a client is insufficient to execute one or more jobs. The scheduler prepares a multi-dimensional statistical representation of resource requirements of the jobs, and transmits it to a resource controller. The resource controller uses the multi-dimensional representation and resource usage state information to make resource allocation change decisions.
    Type: Grant
    Filed: November 23, 2016
    Date of Patent: March 17, 2020
    Assignee: Amazon Technologies, Inc.
    Inventors: Dougal Stuart Ballantyne, James Edward Kinney, Jr., Aswin Damodar, Chetan Hosmani, Rejith George Joseph, Chris William Ramsey, Kiuk Chung, Jason Roy Rupard
  • Patent number: 10482380
    Abstract: The present disclosure is directed to parallelization of artificial neural network processing by conditionally synchronizing, among multiple computer processors, either the input or output of individual operations, and by conditionally using either rows or columns of certain matrices used in the operations. The conditional processing may depend upon the relative sizes of the input and output of the specific operations to be performed. For example, if a current layer matrix of values is larger than a next layer matrix of values to be computed, then rows of a weight matrix may be used by the computer processors to compute the next layer matrix. If the current layer matrix is smaller than the next layer matrix, then columns of the weight matrix may be used by the computer processors to compute the next layer matrix.
    Type: Grant
    Filed: December 30, 2015
    Date of Patent: November 19, 2019
    Assignee: Amazon Technologies, Inc.
    Inventors: Scott Michael Le Grand, Rejith George Joseph
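    The row-versus-column choice in 10482380 depends on whether the current layer or the next layer is larger. A minimal NumPy sketch of that choice, with the multi-processor synchronization replaced by a sequential loop over simulated "workers"; the layer sizes and worker count are illustrative.

        import numpy as np

        def next_layer(current, weights, num_workers=2):
            """Compute next = current @ weights, partitioning the weight matrix by
            rows when the current layer is the larger side and by columns otherwise.
            In the patented scheme the partitions live on different processors and
            only the smaller side is synchronized; here the workers run in sequence
            just to show the partitioning choice."""
            if current.shape[1] >= weights.shape[1]:
                # Current layer larger: split the weight matrix (and the current layer)
                # by rows; each worker produces a partial sum of the full next layer.
                row_chunks = np.array_split(np.arange(weights.shape[0]), num_workers)
                partials = [current[:, rows] @ weights[rows, :] for rows in row_chunks]
                return sum(partials)                    # reduce over the smaller output
            else:
                # Next layer larger: split the weight matrix by columns; each worker
                # produces a distinct slice of the next layer.
                col_chunks = np.array_split(np.arange(weights.shape[1]), num_workers)
                slices = [current @ weights[:, cols] for cols in col_chunks]
                return np.concatenate(slices, axis=1)   # gather the column slices

        current = np.random.rand(1, 8)   # current layer activations (batch of one)
        weights = np.random.rand(8, 3)   # 8 -> 3: current layer is larger, rows are used
        assert np.allclose(next_layer(current, weights), current @ weights)
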
  • Patent number: 10379956
    Abstract: Data files in a distributed system sometimes become unavailable. A method for fault tolerance without data loss in a distributed file system includes allocating data nodes of the distributed file system among a plurality of compute groups, replicating a data file among a subset of the plurality of the compute groups such that the data file is located in at least two compute zones, wherein the first compute zone is isolated from the second compute zone, monitoring the accessibility of the data files, and causing a distributed task requiring data in the data file to be executed by a compute instance in the subset of the plurality of the compute groups. Upon detecting a failure in the accessibility of a data node with the data file, the task management node may redistribute the distributed task among other compute instances with access to any replica of the data file.
    Type: Grant
    Filed: May 15, 2017
    Date of Patent: August 13, 2019
    Assignee: Amazon Technologies, Inc.
    Inventors: Mohana Sudhan Gandhi, Rejith George Joseph, Bandish N. Chheda, Saurabh Dileep Baji
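    Patent 10379956 keeps replicas of a data file in at least two isolated compute zones and redistributes tasks to instances that can still reach a replica. A minimal Python sketch of that redistribution decision, with hypothetical node names and a hard-coded set of healthy nodes standing in for the monitoring component.

        import random

        # Hypothetical placement: which data nodes (grouped into compute zones) hold a
        # replica of each file, and which nodes are currently reachable.
        replicas = {
            "part-0001": ["zoneA/node1", "zoneB/node4"],
            "part-0002": ["zoneA/node2", "zoneB/node5"],
        }
        healthy_nodes = {"zoneA/node1", "zoneA/node2", "zoneB/node5"}

        def assign_task(data_file):
            """Pick a compute instance co-located with a reachable replica of the file,
            standing in for the task management node's redistribution decision."""
            candidates = [n for n in replicas[data_file] if n in healthy_nodes]
            if not candidates:
                raise RuntimeError(f"no reachable replica of {data_file}")
            return random.choice(candidates)

        # zoneB/node4 is down, so a task needing part-0001 lands on zoneA/node1.
        print(assign_task("part-0001"))
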
  • Patent number: 10148736
    Abstract: A client may submit a job to a service provider that processes a large data set and that employs a message passing interface (MPI) to coordinate the collective execution of the job on multiple compute nodes. The framework may create a MapReduce cluster (e.g., within a VPC) and may generate a single key pair for the cluster, which may be downloaded by nodes in the cluster and used to establish secure node-to-node communication channels for MPI messaging. A single node may be assigned as a mapper process and may launch the MPI job, which may fork its commands to other nodes in the cluster (e.g., nodes identified in a hostfile associated with the MPI job), according to the MPI interface. A rankfile may be used to synchronize the MPI job and another MPI process used to download portions of the data set to respective nodes in the cluster.
    Type: Grant
    Filed: May 19, 2014
    Date of Patent: December 4, 2018
    Assignee: Amazon Technologies, Inc.
    Inventors: Tin-Yu Lee, Rejith George Joseph, Scott Michael Le Grand, Saurabh Dileep Baji, Peter Sirota
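    Patent 10148736 launches an MPI job from a single mapper node, which forks the command to the other cluster nodes listed in a hostfile over the cluster's shared key pair. A minimal Python sketch of that launch step, assuming Open MPI's mpirun and a hostfile built from hypothetical node addresses; the key-pair distribution and the rankfile-based download synchronization are omitted.

        import subprocess

        # Hypothetical private addresses of the cluster's compute nodes and the MPI
        # executable to run (placeholders, not values produced by the service).
        node_ips = ["10.0.0.11", "10.0.0.12", "10.0.0.13", "10.0.0.14"]
        slots_per_node = 4
        mpi_program = "./my_mpi_job"

        def write_hostfile(path="hostfile"):
            """Write an Open MPI style hostfile listing every node and its slot count."""
            with open(path, "w") as f:
                for ip in node_ips:
                    f.write(f"{ip} slots={slots_per_node}\n")
            return path

        def launch_job(hostfile):
            """Run mpirun from the mapper node; it starts the program on the other
            nodes in the hostfile over the cluster's shared SSH key pair."""
            cmd = ["mpirun", "--hostfile", hostfile,
                   "-np", str(len(node_ips) * slots_per_node), mpi_program]
            return subprocess.run(cmd, check=True)

        if __name__ == "__main__":
            launch_job(write_hostfile())
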
  • Patent number: 10133646
    Abstract: A method for providing fault tolerance in a distributed file system of a service provider may include launching at least one data storage node on at least a first virtual machine instance (VMI) running on one or more servers of the service provider and storing file data. At least one data management node may be launched on at least a second VMI running on the one or more servers of the service provider. The at least second VMI may be associated with a dedicated IP address and the at least one data management node may store metadata information associated with the file data in a network storage attached to the at least second VMI. Upon detecting a failure of the at least second VMI, the at least one data management node may be re-launched on at least a third VMI running on the one or more servers.
    Type: Grant
    Filed: March 24, 2017
    Date of Patent: November 20, 2018
    Assignee: Amazon Technologies, Inc.
    Inventors: Rejith George Joseph, Tin-Yu Lee, Bandish N. Chheda, Scott Michael Le Grand, Saurabh Dileep Baji
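    Patents 10133646 and 9612924 re-launch the data management node on a fresh virtual machine instance when its VMI fails, keeping the file metadata on attached network storage. A minimal Python sketch of that failover using toy stand-in classes; the class names and the polling health check are assumptions, not the provider's API.

        import time

        class NetworkVolume:
            """Stand-in for the network storage holding the management node's metadata."""
            def __init__(self):
                self.attached_to = None

            def attach(self, vmi_id):
                self.attached_to = vmi_id

        class Vmi:
            """Toy virtual machine instance with a health flag."""
            def __init__(self, vmi_id):
                self.vmi_id = vmi_id
                self.healthy = True

        def relaunch_management_node(failed, metadata_volume):
            """On failure of the VMI hosting the data management node, start a new VMI
            and re-attach the network storage so the node resumes with the same metadata."""
            replacement = Vmi(f"{failed.vmi_id}-replacement")
            metadata_volume.attach(replacement.vmi_id)
            return replacement

        def monitor(vmi, metadata_volume, poll_seconds=0.1):
            """Simple health-check loop; returns the VMI currently hosting the node."""
            while vmi.healthy:
                time.sleep(poll_seconds)
            return relaunch_management_node(vmi, metadata_volume)

        volume = NetworkVolume()
        primary = Vmi("vmi-2")
        volume.attach(primary.vmi_id)
        primary.healthy = False                     # simulate the failure
        current = monitor(primary, volume)
        print(current.vmi_id, volume.attached_to)   # node re-launched on a third VMI
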
  • Publication number: 20180143852
    Abstract: A scheduler of a batch job management service determines that a set of resources of a client is insufficient to execute one or more jobs. The scheduler prepares a multi-dimensional statistical representation of resource requirements of the jobs, and transmits it to a resource controller. The resource controller uses the multi-dimensional representation and resource usage state information to make resource allocation change decisions.
    Type: Application
    Filed: November 23, 2016
    Publication date: May 24, 2018
    Applicant: Amazon Technologies, Inc.
    Inventors: Dougal Stuart Ballantyne, James Edward Kinney, Jr., Aswin Damodar, Chetan Hosmani, Rejith George Joseph, Chris William Ramsey, Kiuk Chung, Jason Roy Rupard
  • Patent number: 9881226
    Abstract: Recommendations can be generated even in situations where sufficient user information is unavailable for providing personalized recommendations. Instead of generating recommendations for an item based on item type or category, a relation graph can be consulted that enables other items to be recommended that are related to the item in some way, which may be independent of the type or category of item. For example, images of models, celebrities, or everyday people wearing items of clothing, jewelry, handbags, shoes, and other such items can be received and analyzed to recognize those items and cause them to be linked in the relation graph. When generating recommendations or selecting advertisements, the relation graph can be consulted to recommend products that other people have obtained with the item from any of a number of sources, such that the recommendations may be more valuable to the user.
    Type: Grant
    Filed: September 24, 2015
    Date of Patent: January 30, 2018
    Assignee: Amazon Technologies, Inc.
    Inventors: Oleg Rybakov, Matias Omar Gregorio Benitez, Leo Parker Dirac, Rejith George Joseph, Vijai Mohan, Srikanth Thirumalai
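    Patent 9881226 links items that appear together (for example, in analyzed images) into a relation graph and recommends neighbors regardless of item category. A minimal Python sketch, assuming co-occurrence counts as edge weights; the detected item sets are invented for illustration.

        from collections import defaultdict

        # Hypothetical co-occurrences extracted from analyzed images: each set lists
        # items recognized together on the same person.
        image_detections = [
            {"denim-jacket", "white-sneakers", "crossbody-bag"},
            {"denim-jacket", "gold-hoop-earrings"},
            {"white-sneakers", "crossbody-bag"},
        ]

        def build_relation_graph(detections):
            """Link items that appear together, counting how often each pair co-occurs."""
            graph = defaultdict(lambda: defaultdict(int))
            for items in detections:
                for a in items:
                    for b in items:
                        if a != b:
                            graph[a][b] += 1
            return graph

        def recommend(graph, item, k=3):
            """Return the items most strongly related to `item`, independent of category."""
            neighbors = graph.get(item, {})
            return sorted(neighbors, key=neighbors.get, reverse=True)[:k]

        graph = build_relation_graph(image_detections)
        print(recommend(graph, "denim-jacket"))   # the items co-occurring with the jacket
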
  • Publication number: 20170249215
    Abstract: Data files in a distributed system sometimes become unavailable. A method for fault tolerance without data loss in a distributed file system includes allocating data nodes of the distributed file system among a plurality of compute groups, replicating a data file among a subset of the plurality of the compute groups such that the data file is located in at least two compute zones, wherein the first compute zone is isolated from the second compute zone, monitoring the accessibility of the data files, and causing a distributed task requiring data in the data file to be executed by a compute instance in the subset of the plurality of the compute groups. Upon detecting a failure in the accessibility of a data node with the data file, the task management node may redistribute the distributed task among other compute instances with access to any replica of the data file.
    Type: Application
    Filed: May 15, 2017
    Publication date: August 31, 2017
    Inventors: Mohana Sudhan Gandhi, Rejith George Joseph, Bandish N. Chheda, Saurabh Dileep Baji
  • Publication number: 20170193368
    Abstract: The present disclosure is directed to parallelization of artificial neural network processing by conditionally synchronizing, among multiple computer processors, either the input or output of individual operations, and by conditionally using either rows or columns of certain matrices used in the operations. The conditional processing may depend upon the relative sizes of the input and output of the specific operations to be performed. For example, if a current layer matrix of values is larger than a next layer matrix of values to be computed, then rows of a weight matrix may be used by the computer processors to compute the next layer matrix. If the current layer matrix is smaller than the next layer matrix, then columns of the weight matrix may be used by the computer processors to compute the next layer matrix.
    Type: Application
    Filed: December 30, 2015
    Publication date: July 6, 2017
    Inventors: Scott Michael Le Grand, Rejith George Joseph
  • Patent number: 9672122
    Abstract: Data files in a distributed system sometimes become unavailable. A method for fault tolerance without data loss in a distributed file system includes allocating data nodes of the distributed file system among a plurality of compute groups, replicating a data file among a subset of the plurality of the compute groups such that the data file is located in at least two compute zones, wherein the first compute zone is isolated from the second compute zone, monitoring the accessibility of the data files, and causing a distributed task requiring data in the data file to be executed by a compute instance in the subset of the plurality of the compute groups. Upon detecting a failure in the accessibility of a data node with the data file, the task management node may redistribute the distributed task among other compute instances with access to any replica of the data file.
    Type: Grant
    Filed: September 29, 2014
    Date of Patent: June 6, 2017
    Assignee: Amazon Technologies, Inc.
    Inventors: Mohana Sudhan Gandhi, Rejith George Joseph, Bandish N. Chheda, Saurabh Dileep Baji
  • Patent number: 9612924
    Abstract: A method for providing fault tolerance in a distributed file system of a service provider may include launching at least one data storage node on at least a first virtual machine instance (VMI) running on one or more servers of the service provider and storing file data. At least one data management node may be launched on at least a second VMI running on the one or more servers of the service provider. The at least second VMI may be associated with a dedicated IP address and the at least one data management node may store metadata information associated with the file data in a network storage attached to the at least second VMI. Upon detecting a failure of the at least second VMI, the at least one data management node may be re-launched on at least a third VMI running on the one or more servers.
    Type: Grant
    Filed: June 25, 2014
    Date of Patent: April 4, 2017
    Assignee: Amazon Technologies, Inc.
    Inventors: Rejith George Joseph, Tin-Yu Lee, Bandish N. Chheda, Scott Michael Le Grand, Saurabh Dileep Baji