Patents by Inventor Venkatesh Babu

Venkatesh Babu has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Patent number: 11954513
    Abstract: The disclosed approach works without the individualized credentials of failed machines when setting up recovery VMs in a cloud computing environment. Each recovery VM is customized to properly correspond to the system state of its failed counterpart. An illustrative data storage management system recovers backup data and system states collected from the counterpart computing devices, custom-configures recovery VMs in the cloud computing environment, and injects the desired drivers into each recovery VM during an enhanced bare-metal restore process. The enhanced bare-metal restore process works without the failed computer's credentials. The system also restores the backed-up data to recovery volumes attached to the recovery VMs. The present approach is both scalable and secure.
    Type: Grant
    Filed: July 29, 2021
    Date of Patent: April 9, 2024
    Assignee: Commvault Systems, Inc.
    Inventors: Amit Mahajan, Ratish Babu Andham Veetil, Venkatesh Maharajan
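For readers who want a concrete picture of the recovery workflow this abstract outlines, here is a minimal Python sketch. The `CloudClient` stub and every name in it are hypothetical stand-ins, not Commvault's actual API; the point is only that the recovery VM is built entirely from the backed-up system state, never from the failed machine's credentials.

```python
# Illustrative only: CloudClient and all names below are invented stand-ins,
# not Commvault's API. The workflow is driven entirely by the backed-up system
# state; the failed machine's credentials are never needed.
from dataclasses import dataclass


@dataclass
class SystemState:
    """System state captured from a failed (source) machine during backup."""
    hostname: str
    os_image: str
    required_drivers: list
    backup_volume_ids: list


class CloudClient:
    """Minimal fake cloud API so the sketch runs end to end."""
    def create_vm(self, name, image):
        return {"name": name, "image": image, "drivers": [], "volumes": []}

    def inject_driver(self, vm, driver):          # enhanced bare-metal restore step
        vm["drivers"].append(driver)

    def restore_volume_from_backup(self, backup_id):
        return f"recovery-vol-{backup_id}"

    def attach_volume(self, vm, volume):
        vm["volumes"].append(volume)


def build_recovery_vm(cloud, state):
    """Custom-configure one recovery VM to mirror its failed counterpart."""
    vm = cloud.create_vm(name=f"recovery-{state.hostname}", image=state.os_image)
    for driver in state.required_drivers:
        cloud.inject_driver(vm, driver)
    for backup_id in state.backup_volume_ids:
        cloud.attach_volume(vm, cloud.restore_volume_from_backup(backup_id))
    return vm


if __name__ == "__main__":
    state = SystemState("db01", "win2019-image", ["nic", "storage"], ["vol-1", "vol-2"])
    print(build_recovery_vm(CloudClient(), state))
```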
  • Patent number: 11897634
    Abstract: A system includes one or more components within an internal cabin of a vehicle. An imaging device is configured to obtain an image of the one or more components. A state determination control unit includes a processor. The state determination control unit is in communication with the imaging device. The state determination control unit receives image data including the image from the imaging device. Further, the state determination control unit determines a state of the one or more components based on the image data.
    Type: Grant
    Filed: June 15, 2021
    Date of Patent: February 13, 2024
    Assignee: THE BOEING COMPANY
    Inventors: Venkatesh Babu Radhakrishnan, Mugaludi Ramesha Rakesh, Madhanmohan Savadamuthu, Shubham Tripathi, Ethan Carl Owyang
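As a rough illustration of the data flow in this abstract (an imaging device produces image data, a state determination control unit maps it to a component state), here is a toy Python sketch; the brightness heuristic and the state names are invented placeholders for whatever model the patent actually uses.

```python
# Hypothetical sketch of the state-determination data flow only. The mean-
# brightness threshold is a toy stand-in for the actual determination logic.
from dataclasses import dataclass
from enum import Enum


class ComponentState(Enum):
    STOWED = "stowed"
    DEPLOYED = "deployed"
    UNKNOWN = "unknown"


@dataclass
class ImageData:
    component_id: str
    pixels: list  # flattened grayscale values, 0-255, from the imaging device


def determine_state(image: ImageData) -> ComponentState:
    """Decide a cabin component's state from its image data (toy heuristic)."""
    if not image.pixels:
        return ComponentState.UNKNOWN
    mean = sum(image.pixels) / len(image.pixels)
    return ComponentState.DEPLOYED if mean > 128 else ComponentState.STOWED


if __name__ == "__main__":
    tray = ImageData("tray-12C", pixels=[200, 190, 210, 180])
    print(tray.component_id, determine_state(tray).value)
```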
  • Publication number: 20230376846
    Abstract: An information processing apparatus includes one or more memories; and one or more processors coupled to the one or more memories, the one or more processors being configured to decide a gain matrix based on an input metric, perform selection of first training data from a plurality of unlabeled training data, to be used for training a machine learning model, based on the gain matrix, and perform training of the machine learning model based on the first training data, a predicted label that is predicted from the first training data, and a loss function including the gain matrix.
    Type: Application
    Filed: May 19, 2023
    Publication date: November 23, 2023
    Applicants: Fujitsu Limited, INDIAN INSTITUTE OF SCIENCE
    Inventors: Sho TAKEMORI, Takashi KATOH, Yuhei UMEDA, Harsh RANGWANI, Shrinivas RAMASUBRAMANIAN, Venkatesh Babu RADHAKRISHNAN
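The abstract describes a loop of deciding a gain matrix from an input metric, selecting unlabeled training data with it, and training against a loss function that includes the gain matrix. The numpy sketch below is one illustrative reading of that loop under invented choices (a diagonal gain from per-class recall, pseudo-labels, a gain-weighted cross-entropy); it is not the actual Fujitsu/IISc method.

```python
# Illustrative reading of the abstract: gain matrix from a metric, gain-driven
# selection of unlabeled samples, and a gain-weighted loss. All design choices
# here are assumptions made for the sketch.
import numpy as np

rng = np.random.default_rng(0)
NUM_CLASSES = 3


def gain_matrix_from_metric(class_recall):
    # One possible "input metric": up-weight classes with poor recall.
    return np.diag(1.0 / np.clip(class_recall, 1e-3, None))


def select_training_data(unlabeled_logits, gain, k=4):
    # Score each unlabeled sample by the gain of its predicted (pseudo) label.
    pseudo = unlabeled_logits.argmax(axis=1)
    scores = gain[pseudo, pseudo]
    picked = np.argsort(-scores)[:k]
    return picked, pseudo[picked]


def gain_weighted_loss(logits, labels, gain):
    # Cross-entropy where each sample is weighted by its class's gain entry.
    probs = np.exp(logits) / np.exp(logits).sum(axis=1, keepdims=True)
    ce = -np.log(probs[np.arange(len(labels)), labels])
    return float((gain[labels, labels] * ce).mean())


class_recall = np.array([0.9, 0.4, 0.7])           # observed per-class recall
G = gain_matrix_from_metric(class_recall)
unlabeled_logits = rng.normal(size=(10, NUM_CLASSES))
idx, pseudo_labels = select_training_data(unlabeled_logits, G)
print("selected:", idx, "loss:", gain_weighted_loss(unlabeled_logits[idx], pseudo_labels, G))
```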
  • Publication number: 20220033108
    Abstract: A system includes one or more components within an internal cabin of a vehicle. An imaging device is configured to obtain an image of the one or more components. A state determination control unit includes a processor. The state determination control unit is in communication with the imaging device. The state determination control unit receives image data including the image from the imaging device. Further, the state determination control unit determines a state of the one or more components based on the image data.
    Type: Application
    Filed: June 15, 2021
    Publication date: February 3, 2022
    Applicant: THE BOEING COMPANY
    Inventors: Venkatesh Babu Radhakrishnan, Mugaludi Ramesha Rakesh, Madhanmohan Savadamuthu, Shubham Tripathi, Ethan Carl Owyang
  • Patent number: 11200657
    Abstract: State-of-the-art image processing techniques such as background subtraction and Convolutional Neural Network based approaches fail to support certain datasets when used for change detection. The disclosure herein generally relates to semantic change detection, and, more particularly, to a method and system for semantic change detection using a deep neural network feature correlation approach. The system uses an adaptive correlation layer, which determines the extent of computation required at the pixel level, based on the amount of information at each pixel, and uses this information in the further computation done for semantic change detection. Information on the determined extent of computation required is then used to extract semantic features, which are then used to compute one or more correlation maps between the at least one feature map of a test image and its corresponding reference image. Further, the semantic changes are determined from the one or more correlation maps.
    Type: Grant
    Filed: August 28, 2020
    Date of Patent: December 14, 2021
    Assignee: Tata Consultancy Services Limited
    Inventors: Jayavardhana Rama Gubbi Lakshminarasimha, Akshaya Ramaswamy, Balamuralidhar Purushothaman, Ram Prabhakar Kathirvel, Venkatesh Babu Radhakrishnan
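To make the correlation-map idea concrete, here is a small numpy sketch that correlates the feature maps of a test image and a reference image pixel by pixel and flags low-correlation pixels as semantic changes. The adaptive behaviour of the patent's correlation layer is deliberately omitted; this is only an illustrative reading of the abstract.

```python
# Simplified illustration: per-pixel correlation between two feature maps, with
# low-correlation pixels flagged as changes. The adaptive per-pixel computation
# described in the patent is not modelled here.
import numpy as np


def correlation_map(feat_test, feat_ref, eps=1e-8):
    """Per-pixel cosine correlation between two (C, H, W) feature maps."""
    num = (feat_test * feat_ref).sum(axis=0)
    den = np.linalg.norm(feat_test, axis=0) * np.linalg.norm(feat_ref, axis=0) + eps
    return num / den                      # shape (H, W), values in [-1, 1]


def change_mask(feat_test, feat_ref, threshold=0.5):
    """Pixels whose features correlate poorly are marked as changed."""
    return correlation_map(feat_test, feat_ref) < threshold


if __name__ == "__main__":
    rng = np.random.default_rng(1)
    ref = rng.normal(size=(8, 32, 32))                        # C=8 feature channels
    test = ref.copy()
    test[:, 10:20, 10:20] = rng.normal(size=(8, 10, 10))      # simulate a change
    print("changed pixels:", int(change_mask(test, ref).sum()))
```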
  • Publication number: 20210065354
    Abstract: State-of-the-art image processing techniques such as background subtraction and Convolutional Neural Network based approaches fail to support certain datasets when used for change detection. The disclosure herein generally relates to semantic change detection, and, more particularly, to a method and system for semantic change detection using a deep neural network feature correlation approach. The system uses an adaptive correlation layer, which determines the extent of computation required at the pixel level, based on the amount of information at each pixel, and uses this information in the further computation done for semantic change detection. Information on the determined extent of computation required is then used to extract semantic features, which are then used to compute one or more correlation maps between the at least one feature map of a test image and its corresponding reference image. Further, the semantic changes are determined from the one or more correlation maps.
    Type: Application
    Filed: August 28, 2020
    Publication date: March 4, 2021
    Applicant: TATA CONSULTANCY SERVICES LIMITED
    Inventors: Jayavardhana Rama GUBBI LAKSHMINARASIMHA, Akshaya RAMASWAMY, Balamuralidhar PURUSHOTHAMAN, Ram Prabhakar KATHIRVEL, Venkatesh Babu RADHAKRISHNAN
  • Patent number: 10650013
    Abstract: Disclosed aspects relate to access operation management to a database management system (DBMS) on a shared pool of configurable computing resources having a set of members. A map of the set of table names to the set of members may be established. A query may be received which indicates the access operation request to the DBMS. The query related to the access operation request may be parsed to identify a mentioned table name. A specific member of the set of members may be selected by comparing the mentioned table name with the map. The specific member of the set of members may be configured to process the access operation request to the DBMS. The routing may be performed in order to process the access operation request to the DBMS.
    Type: Grant
    Filed: December 12, 2016
    Date of Patent: May 12, 2020
    Assignee: International Business Machines Corporation
    Inventors: Venkatesh Babu Ks, Chetan Babu Papaiah
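A minimal sketch of the routing idea in this abstract: keep a map from table names to DBMS members, pull the mentioned table name out of an incoming query, and hand the access operation to the member that owns that table. The naive regex "parser" and the member names below are assumptions for illustration, not the patented implementation.

```python
# Illustrative only: a toy table-name-to-member routing table with a naive
# regex in place of a real SQL parser.
import re

TABLE_TO_MEMBER = {
    "orders":    "member-1",
    "customers": "member-2",
    "inventory": "member-3",
}


def mentioned_table(query: str):
    """Very naive parse: grab the identifier after FROM / INTO / UPDATE."""
    m = re.search(r"\b(?:FROM|INTO|UPDATE)\s+([A-Za-z_][A-Za-z0-9_]*)", query, re.I)
    return m.group(1).lower() if m else None


def route(query: str, default_member: str = "member-1") -> str:
    """Select the member configured to process this access operation."""
    table = mentioned_table(query)
    return TABLE_TO_MEMBER.get(table, default_member)


if __name__ == "__main__":
    print(route("SELECT * FROM orders WHERE id = 7"))      # -> member-1
    print(route("UPDATE customers SET name = 'x'"))        # -> member-2
```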
  • Patent number: 10592273
    Abstract: A method and apparatus are provided in which a source and target perform bidirectional forwarding of traffic while a migration guest is being transferred from the source to the target. In some examples, the migration guest is exposed to the impending migration and takes an action in response. A virtual network programming controller informs other devices in the network of the change, such that those devices may communicate directly with the migration guest on the target host. According to some examples, an “other” virtual network device in communication with the controller and the target host facilitates the seamless migration. In such examples, the forwarding may be performed only until the other virtual machine receives an incoming packet from the target host, and then the other virtual machine resumes communication with the migration guest on the target host.
    Type: Grant
    Filed: March 16, 2018
    Date of Patent: March 17, 2020
    Assignee: Google LLC
    Inventors: Brian Matthew Fahs, Jinnah Dylan Hosein, Venkatesh Babu Chitlur Srinivasa, Guy Shefner, Roy Donald Bryant, Uday Ramakrishna Naik, Francis Edward Swiderski, III, Nan Hua
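The hand-off described here can be modelled in a few lines of Python: while the guest migrates, the source host forwards guest-bound traffic to the target, and a peer VM keeps using its old route until it first sees a packet arrive from the target host. All class and host names in this toy model are invented; it is not Google's implementation.

```python
# Toy model of the migration hand-off: forwarding via the source continues only
# until the peer sees its first packet from the target host.
class PeerVM:
    """The "other" virtual network device that talks to the migrating guest."""
    def __init__(self):
        self.destination = "source-host"      # initially programmed route

    def on_packet_received(self, from_host):
        # First packet seen from the target host ends the forwarding phase.
        if from_host == "target-host":
            self.destination = "target-host"

    def send(self, payload):
        return (self.destination, payload)


class SourceHost:
    """During migration the source forwards guest-bound traffic to the target."""
    def deliver(self, payload):
        return ("target-host", payload)       # one leg of the bidirectional forwarding


if __name__ == "__main__":
    peer, source = PeerVM(), SourceHost()
    hop = peer.send("hello")                  # still routed via the source
    print(hop)                                # ('source-host', 'hello')
    print(source.deliver(hop[1]))             # source forwards to the target
    peer.on_packet_received("target-host")    # reply arrives from the target
    print(peer.send("hello again"))           # now sent directly to the target
```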
  • Patent number: 10162686
    Abstract: A cache affinity and processor utilization technique efficiently load balances work in a storage input/output (I/O) stack among a plurality of processors and associated processor cores of a node. The storage I/O stack employs one or more non-blocking messaging kernel (MK) threads that execute non-blocking message handlers (i.e., non-blocking services). The technique load balances work between the processor cores sharing a last level cache (LLC) (i.e., intra-LLC processor load balancing), and load balances work between the processors having separate LLCs (i.e., inter-LLC processor load balancing). The technique may allocate a predetermined number of logical processors for use by an MK scheduler to schedule the non-blocking services within the storage I/O stack, as well as allocate a remaining number of logical processors for use by blocking services, e.g., scheduled by an operating system kernel scheduler.
    Type: Grant
    Filed: November 8, 2017
    Date of Patent: December 25, 2018
    Assignee: NetApp, Inc.
    Inventors: Jeffrey S. Kimmel, Christopher Joseph Corsi, Venkatesh Babu Chitlur Srinivasa
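As a sketch of the two-level balancing in this abstract, the snippet below prefers the least-loaded core within a chosen last-level cache (intra-LLC) and only spills to another LLC once the preferred one is clearly busier (inter-LLC). The topology, load model, and affinity bias are all invented for illustration, not NetApp's scheduler.

```python
# Illustrative two-level load balancer: intra-LLC first, inter-LLC spill second.
# Topology and the +1 affinity bias are assumptions made for the sketch.
LLC_TOPOLOGY = {
    "llc0": ["cpu0", "cpu1", "cpu2", "cpu3"],
    "llc1": ["cpu4", "cpu5", "cpu6", "cpu7"],
}
load = {cpu: 0 for cpus in LLC_TOPOLOGY.values() for cpu in cpus}


def pick_cpu(preferred_llc: str) -> str:
    """Prefer the least-loaded core under preferred_llc; otherwise spill across LLCs."""
    local = min(LLC_TOPOLOGY[preferred_llc], key=load.__getitem__)
    remote = min((c for llc, cpus in LLC_TOPOLOGY.items() if llc != preferred_llc
                  for c in cpus), key=load.__getitem__)
    chosen = local if load[local] <= load[remote] + 1 else remote   # affinity bias
    load[chosen] += 1
    return chosen


if __name__ == "__main__":
    # The last request spills to llc1 once every llc0 core is clearly busier.
    print(" ".join(pick_cpu("llc0") for _ in range(9)))
```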
  • Publication number: 20180203721
    Abstract: A method and apparatus are provided in which a source and target perform bidirectional forwarding of traffic while a migration guest is being transferred from the source to the target. In some examples, the migration guest is exposed to the impending migration and takes an action in response. A virtual network programming controller informs other devices in the network of the change, such that those devices may communicate directly with the migration guest on the target host. According to some examples, an “other” virtual network device in communication with the controller and the target host facilitates the seamless migration. In such examples, the forwarding may be performed only until the other virtual machine receives an incoming packet from the target host, and then the other virtual machine resumes communication with the migration guest on the target host.
    Type: Application
    Filed: March 16, 2018
    Publication date: July 19, 2018
    Inventors: Brian Matthew Fahs, Jinnah Dylan Hosein, Venkatesh Babu Chitlur Srinivasa, Guy Shefner, Roy Donald Bryant, Uday Ramakrishna Naik, Francis Edward Swiderski, III, Nan Hua
  • Patent number: 10013276
    Abstract: A method and apparatus are provided in which a source and target perform bidirectional forwarding of traffic while a migration guest is being transferred from the source to the target. In some examples, the migration guest is exposed to the impending migration and takes an action in response. A virtual network programming controller informs other devices in the network of the change, such that those devices may communicate directly with the migration guest on the target host. According to some examples, an “other” virtual network device in communication with the controller and the target host facilitates the seamless migration. In such examples, the forwarding may be performed only until the other virtual machine receives an incoming packet from the target host, and then the other virtual machine resumes communication with the migration guest on the target host.
    Type: Grant
    Filed: June 20, 2014
    Date of Patent: July 3, 2018
    Assignee: Google LLC
    Inventors: Brian Matthew Fahs, Jinnah Dylan Hosein, Venkatesh Babu Chitlur Srinivasa, Guy Shefner, Roy Donald Bryant, Uday Ramakrishna Naik, Francis E. Swiderski, Nan Hua
  • Publication number: 20180165469
    Abstract: Disclosed aspects relate to access operation management to a database management system (DBMS) on a shared pool of configurable computing resources having a set of members. A map of the set of table names to the set of members may be established. A query may be received which indicates the access operation request to the DBMS. The query related to the access operation request may be parsed to identify a mentioned table name. A specific member of the set of members may be selected by comparing the mentioned table name with the map. The specific member of the set of members may be configured to process the access operation request to the DBMS. The routing may be performed in order to process the access operation request to the DBMS.
    Type: Application
    Filed: December 12, 2016
    Publication date: June 14, 2018
    Inventors: Venkatesh Babu KS, Chetan Babu Papaiah
  • Publication number: 20180067784
    Abstract: A cache affinity and processor utilization technique efficiently load balances work in a storage input/output (I/O) stack among a plurality of processors and associated processor cores of a node. The storage I/O stack employs one or more non-blocking messaging kernel (MK) threads that execute non-blocking message handlers (i.e., non-blocking services). The technique load balances work between the processor cores sharing a last level cache (LLC) (i.e., intra-LLC processor load balancing), and load balances work between the processors having separate LLCs (i.e., inter-LLC processor load balancing). The technique may allocate a predetermined number of logical processors for use by an MK scheduler to schedule the non-blocking services within the storage I/O stack, as well as allocate a remaining number of logical processors for use by blocking services, e.g., scheduled by an operating system kernel scheduler.
    Type: Application
    Filed: November 8, 2017
    Publication date: March 8, 2018
    Inventors: Jeffrey S. Kimmel, Christopher Joseph Corsi, Venkatesh Babu Chitlur Srinivasa
  • Patent number: 9842008
    Abstract: A cache affinity and processor utilization technique efficiently load balances work in a storage input/output (I/O) stack among a plurality of processors and associated processor cores of a node. The storage I/O stack employs one or more non-blocking messaging kernel (MK) threads that execute non-blocking message handlers (i.e., non-blocking services). The technique load balances work between the processor cores sharing a last level cache (LLC) (i.e., intra-LLC processor load balancing), and load balances work between the processors having separate LLCs (i.e., inter-LLC processor load balancing). The technique may allocate a predetermined number of logical processors for use by an MK scheduler to schedule the non-blocking services within the storage I/O stack, as well as allocate a remaining number of logical processors for use by blocking services, e.g., scheduled by an operating system kernel scheduler.
    Type: Grant
    Filed: February 24, 2016
    Date of Patent: December 12, 2017
    Assignee: NetApp, Inc.
    Inventors: Jeffrey S. Kimmel, Christopher Joseph Corsi, Venkatesh Babu Chitlur Srinivasa
  • Patent number: 9430799
    Abstract: A method of fulfilling a transaction between a bank and a customer of the bank makes available a form used in a banking transaction to a customer digital device. The form has a plurality of fields. The method scans, within a physical bank branch, an encoded visual display on the client digital device. The encoded visual display has an encoded version of the form and some but not all of the plurality of fields completed with client data. The method further decodes the encoded visual display using a decoding algorithm to produce the form and completed and uncompleted fields. At least one datum may be added, within the physical bank branch, to at least one uncompleted field of the form, and the banking transaction may be processed based on the form with the at least one client datum within the at least one field of the form.
    Type: Grant
    Filed: February 10, 2015
    Date of Patent: August 30, 2016
    Assignee: IGATE Global Solutions Ltd.
    Inventors: Sanjeev Sivadasan, Radhika Shankar, Venkatesh Babu
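An illustrative round trip for the workflow in this abstract: the customer's device encodes a partially completed form into a compact payload (base64-encoded JSON standing in for whatever visual encoding the patent uses), the branch decodes it, a teller adds the missing datum, and the transaction can then be processed. Field names are made up for the example.

```python
# Illustrative only: base64 JSON stands in for the patent's encoded visual
# display, and the field names are invented for the example.
import base64
import json

FORM_FIELDS = ["account_no", "amount", "payee", "teller_id"]


def encode_form(partial: dict) -> str:
    """Customer side: encode the form with some, but not all, fields completed."""
    return base64.b64encode(json.dumps(partial).encode()).decode()


def decode_form(payload: str) -> dict:
    """Branch side: recover the form and its completed/uncompleted fields."""
    data = json.loads(base64.b64decode(payload))
    return {field: data.get(field) for field in FORM_FIELDS}


if __name__ == "__main__":
    payload = encode_form({"account_no": "123456", "amount": "250.00"})
    form = decode_form(payload)            # payee and teller_id still empty
    form["teller_id"] = "T-042"            # datum added within the branch
    print(form)
```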
  • Publication number: 20160246655
    Abstract: A cache affinity and processor utilization technique efficiently load balances work in a storage input/output (I/O) stack among a plurality of processors and associated processor cores of a node. The storage I/O stack employs one or more non-blocking messaging kernel (MK) threads that execute non-blocking message handlers (i.e., non-blocking services). The technique load balances work between the processor cores sharing a last level cache (LLC) (i.e., intra-LLC processor load balancing), and load balances work between the processors having separate LLCs (i.e., inter-LLC processor load balancing). The technique may allocate a predetermined number of logical processors for use by an MK scheduler to schedule the non-blocking services within the storage I/O stack, as well as allocate a remaining number of logical processors for use by blocking services, e.g., scheduled by an operating system kernel scheduler.
    Type: Application
    Filed: February 24, 2016
    Publication date: August 25, 2016
    Inventors: Jeffrey S. Kimmel, Christopher Joseph Corsi, Venkatesh Babu Chitlur Srinivasa
  • Publication number: 20160125526
    Abstract: A method of fulfilling a transaction between a bank and a customer of the bank makes available a form used in a banking transaction to a customer digital device. The form has a plurality of fields. The method scans, within a physical bank branch, an encoded visual display on the client digital device. The encoded visual display has an encoded version of the form and some but not all of the plurality of fields completed with client data. The method further decodes the encoded visual display using a decoding algorithm to produce the form and completed and uncompleted fields. At least one datum may be added, within the physical bank branch, to at least one uncompleted field of the form, and the banking transaction may be processed based on the form with the at least one client datum within the at least one field of the form.
    Type: Application
    Filed: February 10, 2015
    Publication date: May 5, 2016
    Inventors: Sanjeev Sivadasan, Radhika Shankar, Venkatesh Babu
  • Publication number: 20150379547
    Abstract: A method and apparatus for incentivizing a person to make a targeted purchase receives online data related to a person's online interaction with a company, and responsively processes the online data to determine behavior data. After determining that the person is within a physical store related to the company, the method and apparatus track the person's movement within the physical store and determine an incentive for making a purchase of a product or service within the store. The incentive is determined as a function of at least the person's position within the physical store and the behavior data. The method and apparatus then forwards an incentive message, having indicia relating to the incentive, to the person when the person is in the store.
    Type: Application
    Filed: November 11, 2014
    Publication date: December 31, 2015
    Inventors: Anupam Raj Gautam, Ravikumar Krishnamoorthy, Venkatesh Babu, Ashok B. Yalamanchili, Suyog Joshi
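A toy scoring function for the idea in this abstract: the incentive offered depends on both the shopper's prior online behavior and their current position in the store. The categories, aisle map, and discount tiers below are invented for the sketch.

```python
# Illustrative only: behavior fields, aisle-to-category map, and discount tiers
# are assumptions made for the example.
from dataclasses import dataclass


@dataclass
class Behavior:
    viewed_category: str      # derived from the person's online interaction data
    abandoned_cart: bool


AISLE_CATEGORY = {"aisle-3": "electronics", "aisle-7": "apparel"}


def incentive_for(behavior: Behavior, aisle: str):
    """Return an incentive message based on position and behavior, or None."""
    category = AISLE_CATEGORY.get(aisle)
    if category is None or category != behavior.viewed_category:
        return None                       # shopper is not near relevant products
    discount = 15 if behavior.abandoned_cart else 5
    return f"{discount}% off {category} if you buy today"


if __name__ == "__main__":
    shopper = Behavior(viewed_category="electronics", abandoned_cart=True)
    print(incentive_for(shopper, "aisle-3"))
```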
  • Publication number: 20150370596
    Abstract: A method and apparatus are provided in which a source and target perform bidirectional forwarding of traffic while a migration guest is being transferred from the source to the target. In some examples, the migration guest is exposed to the impending migration and takes an action in response. A virtual network programming controller informs other devices in the network of the change, such that those devices may communicate directly with the migration guest on the target host. According to some examples, an “other” virtual network device in communication with the controller and the target host facilitates the seamless migration. In such examples, the forwarding may be performed only until the other virtual machine receives an incoming packet from the target host, and then the other virtual machine resumes communication with the migration guest on the target host.
    Type: Application
    Filed: June 20, 2014
    Publication date: December 24, 2015
    Inventors: Brian Matthew Fahs, Jinnah Dylan Hosein, Venkatesh Babu Chitlur Srinivasa, Guy Shefner, Roy Donald Bryant, Uday Ramakrishna Naik, Francis E. Swiderski, III, Nan Hua
  • Patent number: RE44818
    Abstract: Methods and apparatus facilitate the management of input/output (I/O) subsystems in virtual I/O servers to provide appropriate quality of service (QoS). A hierarchical QoS scheme, based on partitioning of network interfaces and I/O subsystem transaction types, is used to classify virtual I/O communications. This multi-tier QoS method allows virtual I/O servers to be scalable and to provide appropriate QoS granularity.
    Type: Grant
    Filed: May 4, 2012
    Date of Patent: March 25, 2014
    Assignee: Intellectual Ventures Holding 80 LLC
    Inventors: Rohit Jnagal, Venkatesh Babu Chitlur Srinivasa
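Finally, a small sketch of the two-tier classification this reissue patent describes: a virtual I/O request is bucketed first by the network-interface partition it arrived on and then by its I/O transaction type, and the resulting pair drives scheduling. The partition names, transaction types, and priorities are assumptions for illustration only.

```python
# Illustrative two-tier QoS classification: interface partition first, then
# transaction type. All names and priorities are invented for the sketch.
INTERFACE_TIER = {"mgmt0": "management", "data0": "tenant", "data1": "tenant"}
TXN_TIER = {"control": 0, "read": 1, "write": 2}     # lower = higher priority


def qos_class(interface: str, txn_type: str) -> tuple:
    """Return a (tier, subpriority) pair used to schedule the virtual I/O."""
    tier = INTERFACE_TIER.get(interface, "best-effort")
    sub = TXN_TIER.get(txn_type, 3)
    return (tier, sub)


def schedule(requests):
    """Serve management traffic first, then tenants, ordered by transaction type."""
    tier_order = {"management": 0, "tenant": 1, "best-effort": 2}

    def sort_key(req):
        tier, sub = qos_class(req[0], req[1])
        return (tier_order[tier], sub)

    return sorted(requests, key=sort_key)


if __name__ == "__main__":
    reqs = [("data0", "write", "w1"), ("mgmt0", "control", "c1"), ("data1", "read", "r1")]
    print(schedule(reqs))
```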