Patents by Inventor Ravi Krishna

Ravi Krishna has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Patent number: 12240654
    Abstract: Food holding system, liner thereof, and associated methods. A food tray receiver liner is configured to be installed in a food tray receiver. The liner is configured to collect food that may fall into the food tray receiver. Desirably, the liner obstructs food from passing between the liner and side walls of the food tray receiver.
    Type: Grant
    Filed: May 26, 2023
    Date of Patent: March 4, 2025
    Assignee: DUKE MANUFACTURING CO.
    Inventor: Sai Ravi Krishna Subramani
  • Publication number: 20250063054
    Abstract: Examples are disclosed for systems and methods for monitoring and filtering data transmitted to a vehicle connected to a wireless network. In one embodiment, a method for an edge node of a wireless network comprises routing traffic of the wireless network to a vehicle connected to the wireless network through the edge node; examining the traffic for potentially malicious content at the edge node; transmitting data packets of the traffic without potentially malicious content to the vehicle; and not transmitting data packets of the traffic with potentially malicious content to the vehicle.
    Type: Application
    Filed: December 13, 2022
    Publication date: February 20, 2025
    Inventors: Harshawardhan Vipat, Ravi Puvvala, Maria Praveen Kumar Yatagiri, Prasanna Krishna Harpanhalli
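As an illustration of the filtering flow described in the abstract above, here is a minimal Python sketch of an edge node that inspects traffic and forwards only clean packets to the vehicle. The `Packet` class and the `looks_malicious` check are hypothetical stand-ins, not the claimed implementation.

```python
from dataclasses import dataclass

@dataclass
class Packet:
    """Hypothetical stand-in for a data packet routed through the edge node."""
    src: str
    payload: bytes

def looks_malicious(pkt: Packet) -> bool:
    # Placeholder inspection rule; a real edge node would apply signature- or
    # anomaly-based checks to the packet contents.
    return b"<script>" in pkt.payload

def route_to_vehicle(traffic: list[Packet], send) -> None:
    """Forward only packets that pass inspection; drop the rest."""
    for pkt in traffic:
        if looks_malicious(pkt):
            continue          # potentially malicious content is not transmitted
        send(pkt)             # clean packets are transmitted to the vehicle

# Example: forward two packets, one of which is dropped.
traffic = [Packet("infotainment", b"map tiles"),
           Packet("unknown", b"<script>steal()</script>")]
route_to_vehicle(traffic, send=lambda p: print("sent:", p.src))
```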
  • Patent number: 12222836
    Abstract: A method and system for rendering a stack trace visualization display has been developed. A first stack trace associated with execution of an application during a time period is received from a central processing unit profiler. A first stack trace visualization display is rendered including a plurality of stack frames stacked in accordance with an order of ancestry based on the first stack trace. Rendering at least one stack frame involves rendering at a first location of the first stack trace visualization display, a stack frame rectangle for the at least one stack frame in accordance with the order of ancestry and rendering at a second location of the first stack trace visualization display, stack frame specific text for the at least one stack frame. The second location overlays the first location. Rendering of the stack frame rectangle is independent of the rendering of the stack frame specific text.
    Type: Grant
    Filed: March 9, 2023
    Date of Patent: February 11, 2025
    Inventors: Ravi Sankar Pulle, Ajay Krishna Borra, Alexander Kouthoofd
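The decoupling of rectangle rendering from text rendering in this entry resembles how flame-graph style stack views are commonly drawn. The sketch below is a hedged illustration with hypothetical `draw_rect` and `draw_text` callbacks; it is not taken from the patent.

```python
from dataclasses import dataclass, field

@dataclass
class Frame:
    """A stack frame with its callees, ordered by ancestry."""
    name: str
    samples: int
    children: list["Frame"] = field(default_factory=list)

def render(frame: Frame, x: float, y: float, width: float,
           draw_rect, draw_text, row_height: float = 18.0) -> None:
    # Rectangle pass: the stack frame rectangle at its ancestry-based location.
    draw_rect(x, y, width, row_height)
    # Independent pass: frame-specific text overlaid at the same location.
    draw_text(x + 2, y + row_height / 2, frame.name)
    # Children are stacked on the next row, widths proportional to sample counts.
    cx = x
    for child in frame.children:
        cw = width * child.samples / max(frame.samples, 1)
        render(child, cx, y + row_height, cw, draw_rect, draw_text)
        cx += cw

# Example with print-based "drawing" callbacks.
root = Frame("main", 100, [Frame("parse", 40), Frame("eval", 60)])
render(root, 0, 0, 800,
       draw_rect=lambda x, y, w, h: print(f"rect  x={x:.0f} w={w:.0f}"),
       draw_text=lambda x, y, s: print(f"text  {s!r} at x={x:.0f}"))
```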
  • Publication number: 20250045605
    Abstract: There is a need for more accurate and more efficient predictive data analysis operations. This need can be addressed, for example, by the techniques described herein. In one example, a method includes mapping a primary event having a primary event code to a related subset of a plurality of candidate secondary events by at least processing one or more lifecycle-related attributes for the primary event code using a lifecycle inference machine learning model to detect an inferred lifecycle for the primary event.
    Type: Application
    Filed: October 23, 2024
    Publication date: February 6, 2025
    Inventors: Rama Krishna Singh, Priyank Jain, Ravi Pande
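The mapping step described in the abstract above can be pictured with a short sketch: infer a lifecycle from the primary event's attributes, then select the candidate secondary events consistent with it. The attribute names and the rule standing in for the lifecycle inference machine learning model are assumptions for illustration only.

```python
def infer_lifecycle(attrs: dict) -> str:
    """Stand-in for the lifecycle inference machine learning model.

    A real system would score lifecycle-related attributes of the primary
    event code; here a single hypothetical attribute decides the label.
    """
    return "chronic" if attrs.get("duration_days", 0) > 90 else "acute"

def map_primary_event(primary_code: str, attrs: dict,
                      candidates: dict[str, list[str]]) -> list[str]:
    """Map a primary event code to the subset of candidate secondary events
    consistent with its inferred lifecycle."""
    lifecycle = infer_lifecycle(attrs)
    return candidates.get(lifecycle, [])

candidates = {"acute": ["ER-visit", "follow-up"],
              "chronic": ["care-plan", "periodic-review"]}
print(map_primary_event("E11", {"duration_days": 200}, candidates))
```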
  • Publication number: 20250037282
    Abstract: Systems and methods are configured for preprocessing images for further content-based analysis. Such images are extracted from a source data file by standardizing individual pages within the source data file as image data files and identifying whether each image satisfies applicable size-based criteria, applicable color-based criteria, and applicable content-based criteria, among others, utilizing one or more machine-learning-based models. Various systems and methods may identify particular features within the extracted images to facilitate further image-based analysis based on the identified features.
    Type: Application
    Filed: September 19, 2024
    Publication date: January 30, 2025
    Inventors: Russell H. Amundson, Saurabh Bhargava, Rama Krishna Singh, Ravi Pande, Vishwakant Gupta, Gaurav Mantri, Abhinav Agrawal, Sapeksh Suman
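A minimal sketch of the criteria checks described above, using Pillow for the image handling. The thresholds and the `content_model_accepts` hook standing in for the machine-learning-based content check are hypothetical.

```python
from PIL import Image  # pip install pillow

def content_model_accepts(img: Image.Image) -> bool:
    return True   # hypothetical ML model hook; always accepts in this sketch

def satisfies_criteria(img: Image.Image,
                       min_size=(300, 300),
                       require_color: bool = False) -> bool:
    """Check illustrative size- and color-based criteria for one page image;
    the content-based criteria are delegated to a placeholder model hook."""
    w, h = img.size
    if w < min_size[0] or h < min_size[1]:
        return False                      # fails size-based criteria
    if require_color and img.mode not in ("RGB", "RGBA"):
        return False                      # fails color-based criteria
    return content_model_accepts(img)     # placeholder for content-based criteria

# Example: a blank standardized page image passes the size check.
print(satisfies_criteria(Image.new("RGB", (600, 800))))
```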
  • Patent number: 12211225
    Abstract: A scene reconstruction system renders images of a scene with high-quality geometry and appearance and supports view synthesis, relighting, and scene editing. Given a set of input images of a scene, the scene reconstruction system trains a network to learn a volume representation of the scene that includes separate geometry and reflectance parameters. Using the volume representation, the scene reconstruction system can render images of the scene under arbitrary viewing (view synthesis) and lighting (relighting) locations. Additionally, the scene reconstruction system can render images that change the reflectance of objects in the scene (scene editing).
    Type: Grant
    Filed: April 15, 2021
    Date of Patent: January 28, 2025
    Assignee: ADOBE INC.
    Inventors: Sai Bi, Zexiang Xu, Kalyan Krishna Sunkavalli, Miloš Hašan, Yannick Hold-Geoffroy, David Jay Kriegman, Ravi Ramamoorthi
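The separation of geometry and reflectance in the learned volume representation can be pictured as querying two heads per 3D point and re-shading under a new light. The NumPy sketch below is schematic, not Adobe's method; the field shapes and the toy shading model are assumptions.

```python
import numpy as np

def query_volume(points: np.ndarray):
    """Hypothetical learned volume: returns per-point density (geometry)
    and albedo/roughness (reflectance) rather than baked-in radiance."""
    density = np.exp(-np.linalg.norm(points, axis=-1))    # geometry head
    albedo = np.full(points.shape, 0.8)                   # reflectance head
    roughness = np.full(points.shape[:-1], 0.3)
    return density, albedo, roughness

def shade(points: np.ndarray, light_dir: np.ndarray) -> np.ndarray:
    """Re-shade the same geometry under a new light direction (relighting)."""
    density, albedo, _ = query_volume(points)
    normals = points / (np.linalg.norm(points, axis=-1, keepdims=True) + 1e-8)
    diffuse = np.clip(normals @ light_dir, 0.0, None)[..., None]
    return density[..., None] * albedo * diffuse          # toy compositing weight

pts = np.random.randn(4, 3)
print(shade(pts, np.array([0.0, 0.0, 1.0])).shape)
```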
  • Publication number: 20240359323
    Abstract: Methods, systems, and apparatus, including computer programs encoded on computer storage media, for planning and executing robotic processes. One of the methods includes receiving a definition of a robotic behavior tree, receiving a definition of a data flow graph, and executing a robotic process using the definition of the robotic behavior tree and the data flow graph.
    Type: Application
    Filed: December 5, 2023
    Publication date: October 31, 2024
    Inventors: Michael Beardsworth, Andreas Heiner Bihlmaier, Bala Venkata Sai Ravi Krishna Kolluri
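A minimal sketch of executing a robotic process from a behavior tree whose leaf actions read their inputs from a data flow graph. The class names, the hard-coded topological order, and the print-based "execution" are illustrative assumptions, not the claimed system.

```python
class Node:
    def tick(self, data: dict) -> bool: ...

class Sequence(Node):
    """Behavior-tree sequence node: succeeds only if every child succeeds."""
    def __init__(self, *children: Node): self.children = children
    def tick(self, data): return all(c.tick(data) for c in self.children)

class Action(Node):
    """Leaf action reading its inputs from the evaluated data flow values."""
    def __init__(self, name, inputs): self.name, self.inputs = name, inputs
    def tick(self, data):
        args = {k: data[k] for k in self.inputs}
        print(f"run {self.name} with {args}")
        return True

# Data flow graph evaluated (in an assumed topological order) into a value table.
dataflow = {"grasp_pose": lambda d: (0.1, 0.2, 0.3),
            "approach": lambda d: tuple(x + 0.05 for x in d["grasp_pose"])}
values = {}
for key in ("grasp_pose", "approach"):
    values[key] = dataflow[key](values)

tree = Sequence(Action("move_to", ["approach"]), Action("grasp", ["grasp_pose"]))
tree.tick(values)
```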
  • Patent number: 12128563
    Abstract: Methods, systems, and apparatus, including computer programs encoded on computer storage media, for using learnable robotic control plans. One of the methods comprises obtaining a learnable robotic control plan comprising data defining a state machine that includes a plurality of states and a plurality of transitions between states, wherein: one or more states are learnable states, and each learnable state comprises data defining (i) one or more learnable parameters of the learnable state and (ii) a machine learning procedure for automatically learning a respective value for each learnable parameter of the learnable state; and processing the learnable robotic control plan to generate a specific robotic control plan, comprising: obtaining data characterizing a robotic execution environment; and for each learnable state, executing, using the obtained data, the respective machine learning procedures defined by the learnable state to generate a respective value for each learnable parameter of the learnable state.
    Type: Grant
    Filed: August 10, 2021
    Date of Patent: October 29, 2024
    Assignee: Intrinsic Innovation LLC
    Inventors: Ning Ye, Maryam Bandari, Klas Jonas Alfred Kronander, Bala Venkata Sai Ravi Krishna Kolluri, Jianlan Luo, Wenzhao Lian, Chang Su
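A toy rendering of the learnable-state idea from the abstract above: each learnable state carries a learning procedure that, given environment data, produces values for its learnable parameters, yielding a specific control plan. All names and the trivial "learning" rule are hypothetical.

```python
from dataclasses import dataclass, field
from typing import Callable

@dataclass
class LearnableState:
    name: str
    params: dict = field(default_factory=dict)        # fixed parameters
    learn: Callable[[dict], dict] = lambda env: {}    # learning procedure

def specialize(plan: list[LearnableState], env_data: dict) -> list[dict]:
    """Turn a learnable control plan into a specific plan by executing each
    state's learning procedure against data from the execution environment."""
    specific = []
    for state in plan:
        learned = state.learn(env_data)               # e.g. fit an insertion force
        specific.append({"state": state.name, **state.params, **learned})
    return specific

plan = [LearnableState("approach", {"speed": 0.2}),
        LearnableState("insert", learn=lambda env: {"force": env["stiffness"] * 0.1})]
print(specialize(plan, {"stiffness": 50.0}))
```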
  • Publication number: 20240277235
    Abstract: Disclosed herein are methods, devices, and media for determining physiological characteristics. In some embodiments, a method involves obtaining an electromyography (EMG) signal representing muscle activity from an EMG electrode disposed in or on a band of a wrist-worn device, and a photoplethysmography (PPG) signal using a PPG sensor of the wrist-worn device. The method may involve generating a modified PPG signal using the EMG signal, wherein the modified PPG signal corrects for motion artifacts in the PPG signal due to motion activity of a wearer of the wrist-worn device. The method may involve determining at least one physiological characteristic based on the modified PPG signal.
    Type: Application
    Filed: January 24, 2024
    Publication date: August 22, 2024
    Inventor: Ravi Krishna SHAGA
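One common way to use a motion reference signal (here, EMG) to remove motion artifacts from PPG is an adaptive filter. The one-tap LMS sketch below is an assumed illustration of that general strategy, not the method claimed in the publication.

```python
import numpy as np

def correct_ppg(ppg: np.ndarray, emg: np.ndarray, mu: float = 0.01) -> np.ndarray:
    """Subtract the EMG-correlated component from the PPG signal using a
    one-tap LMS adaptive filter (illustrative artifact-removal strategy)."""
    w, cleaned = 0.0, np.zeros_like(ppg)
    for i, (p, e) in enumerate(zip(ppg, emg)):
        estimate = w * e              # motion-artifact estimate from EMG
        cleaned[i] = p - estimate     # modified PPG sample
        w += mu * cleaned[i] * e      # LMS weight update
    return cleaned

t = np.linspace(0, 10, 1000)
pulse = np.sin(2 * np.pi * 1.2 * t)            # ~72 bpm heartbeat component
motion = 0.8 * np.sin(2 * np.pi * 0.3 * t)     # slow wrist-motion artifact
print(correct_ppg(pulse + motion, motion).std())
```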
  • Publication number: 20240198516
    Abstract: Methods, systems, and media comprising: a physical robot in a physical workcell; an onsite execution subsystem that is configured to control the physical robot using a real-time control subsystem; and a cloud-based belief world subsystem that is configured to receive and store sensor data captured in the workcell, wherein the onsite execution subsystem is configured to use sensor data stored by the cloud-based belief world subsystem in order to control the robot using the real-time control subsystem.
    Type: Application
    Filed: December 16, 2022
    Publication date: June 20, 2024
    Inventors: Bala Venkata Sai Ravi Krishna Kolluri, Stoyan Gaydarov, David Andrew Schmidt
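A compact sketch of the split described above: a cloud-hosted store of workcell sensor data (the "belief world") consulted by an onsite execution step that drives the real-time controller. The dict-backed store and command format are stand-ins for illustration.

```python
import time

class BeliefWorld:
    """Stand-in for the cloud-based belief world: stores timestamped sensor
    observations captured in the workcell."""
    def __init__(self): self._store = {}
    def record(self, key, value): self._store[key] = (time.time(), value)
    def latest(self, key): return self._store.get(key, (None, None))[1]

def onsite_control_step(belief: BeliefWorld, send_realtime_command) -> None:
    """Onsite execution: read the stored belief and drive the real-time
    control subsystem accordingly."""
    part_pose = belief.latest("part_pose")
    if part_pose is not None:
        send_realtime_command({"move_to": part_pose})

belief = BeliefWorld()
belief.record("part_pose", (0.4, 0.1, 0.02))       # sensor data from the workcell
onsite_control_step(belief, send_realtime_command=print)
```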
  • Patent number: 11986958
    Abstract: Methods, systems, and apparatus, including computer programs encoded on computer storage media, for using skill templates for robotic demonstration learning. One of the methods includes receiving a skill template for a task to be performed by a robot, wherein the skill template defines a state machine having a plurality of subtasks and one or more respective transition conditions between one or more of the subtasks. Local demonstration data for a demonstration subtask of the skill template is received, where the local demonstration data is generated from a user demonstrating how to perform the demonstration subtask with the robot. A machine learning model is refined for the demonstration subtask and the skill template is executed on the robot, causing the robot to transition through the state machine defined by the skill template to perform the task.
    Type: Grant
    Filed: May 21, 2020
    Date of Patent: May 21, 2024
    Assignee: Intrinsic Innovation LLC
    Inventors: Bala Venkata Sai Ravi Krishna Kolluri, Stefan Schaal, Benjamin M. Davis, Ralf Oliver Michael Schönherr, Ning Ye
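A toy version of the skill-template state machine from the abstract above: subtasks with transition conditions, where the demonstration subtask's policy parameters come from local demonstrations. The class, the averaging "refinement", and the insertion example are illustrative assumptions.

```python
class SkillTemplate:
    """Toy state machine: subtasks plus transitions between them."""
    def __init__(self, subtasks, transitions):
        self.subtasks = subtasks            # name -> callable(policy)
        self.transitions = transitions      # name -> next subtask name or None

    def execute(self, start, policies):
        state = start
        while state is not None:
            self.subtasks[state](policies.get(state))
            state = self.transitions[state]

def refine_from_demonstrations(demos):
    """Stand-in for refining the demonstration subtask's ML model: here we
    just average demonstrated insertion depths into a policy parameter."""
    return {"target_depth": sum(demos) / len(demos)}

template = SkillTemplate(
    subtasks={"approach": lambda p: print("approach"),
              "insert": lambda p: print("insert with", p)},
    transitions={"approach": "insert", "insert": None},
)
policies = {"insert": refine_from_demonstrations([0.031, 0.029, 0.030])}
template.execute("approach", policies)
```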
  • Publication number: 20240157554
    Abstract: Methods, systems, and apparatus, including computer programs encoded on computer storage media, for using simulated local demonstration data for robotic demonstration learning. One of the methods includes receiving perceptual data of a workcell of a robot to be configured to execute a task according to a skill template, wherein the skill template specifies one or more subtasks required to perform the skill, wherein at least one of the subtasks is a demonstration subtask that relies on learning visual characteristics of the workcell. A virtual model is generated of a portion of the workcell. A training system generates simulated local demonstration data from the virtual model of the portion of the workcell and tunes a base control policy for the demonstration subtask using the simulated local demonstration data generated from the virtual model of the portion of the workcell.
    Type: Application
    Filed: November 20, 2023
    Publication date: May 16, 2024
    Inventors: Bala Venkata Sai Ravi Krishna Kolluri, Stefan Schaal, Ralf Oliver Michael Schönherr, Benjamin M. Davis, Ning Ye
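A short sketch of the simulation idea above: generate demonstration-like samples from a virtual model of part of the workcell, then nudge a base control policy toward them. The Gaussian sampling and the mean-shift "tuning" are stand-ins for the actual training procedure.

```python
import random

def virtual_workcell_samples(n: int, hole_center=(0.50, 0.20)) -> list[tuple]:
    """Generate simulated local demonstrations from a virtual model of part
    of the workcell (here, noisy insertions around a known hole center)."""
    return [(hole_center[0] + random.gauss(0, 0.002),
             hole_center[1] + random.gauss(0, 0.002)) for _ in range(n)]

def tune_base_policy(base_policy: dict, demos: list[tuple]) -> dict:
    """Shift the base control policy's target toward the mean of the
    simulated demonstrations (a stand-in for real policy tuning)."""
    mx = sum(d[0] for d in demos) / len(demos)
    my = sum(d[1] for d in demos) / len(demos)
    return {**base_policy, "target": (mx, my)}

tuned = tune_base_policy({"target": (0.0, 0.0), "gain": 5.0},
                         virtual_workcell_samples(100))
print(tuned)
```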
  • Patent number: 11912465
    Abstract: Food holding system, liner thereof, and associated methods. A food tray receiver liner is configured to be installed in a food tray receiver. The liner is configured to collect food that may fall into the food tray receiver. Desirably, the liner obstructs food from passing between the liner and side walls of the food tray receiver.
    Type: Grant
    Filed: January 27, 2022
    Date of Patent: February 27, 2024
    Assignee: DUKE MANUFACTURING CO.
    Inventor: Sai Ravi Krishna Subramani
  • Publication number: 20230382595
    Abstract: Food holding system, liner thereof, and associated methods. A food tray receiver liner is configured to be installed in a food tray receiver. The liner is configured to collect food that may fall into the food tray receiver. Desirably, the liner obstructs food from passing between the liner and side walls of the food tray receiver.
    Type: Application
    Filed: May 26, 2023
    Publication date: November 30, 2023
    Inventor: Sai Ravi Krishna Subramani
  • Patent number: 11820014
    Abstract: Methods, systems, and apparatus, including computer programs encoded on computer storage media, for using simulated local demonstration data for robotic demonstration learning. One of the methods includes receiving perceptual data of a workcell of a robot to be configured to execute a task according to a skill template, wherein the skill template specifies one or more subtasks required to perform the skill, wherein at least one of the subtasks is a demonstration subtask that relies on learning visual characteristics of the workcell. A virtual model is generated of a portion of the workcell. A training system generates simulated local demonstration data from the virtual model of the portion of the workcell and tunes a base control policy for the demonstration subtask using the simulated local demonstration data generated from the virtual model of the portion of the workcell.
    Type: Grant
    Filed: May 21, 2020
    Date of Patent: November 21, 2023
    Assignee: Intrinsic Innovation LLC
    Inventors: Bala Venkata Sai Ravi Krishna Kolluri, Stefan Schaal, Ralf Oliver Michael Schönherr, Benjamin M. Davis, Ning Ye
  • Publication number: 20230356393
    Abstract: Methods, systems, and apparatus, including computer programs encoded on computer storage media, for distributing skill templates for robotic demonstration learning. One of the methods includes receiving, from a user device by a skill template distribution system, a selection of an available skill template. The skill template distribution system provides a skill template, wherein the skill template comprises information representing a state machine of one or more tasks, and wherein the skill template specifies which of the one or more tasks are demonstration subtasks requiring local demonstration data. The skill template distribution system trains a machine learning model for the demonstration subtask using local demonstration data to generate learned parameter values.
    Type: Application
    Filed: June 26, 2023
    Publication date: November 9, 2023
    Inventors: Bala Venkata Sai Ravi Krishna Kolluri, Stefan Schaal, Benjamin M. Davis, Ralf Oliver Michael Schönherr, Ning Ye
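The distribution flow above can be pictured as a catalog service that serves a selected template and flags which subtasks need local demonstration data before training. Everything here, including the catalog contents and the averaging stand-in for model training, is hypothetical.

```python
class SkillTemplateDistributionSystem:
    """Toy distribution service: serves skill templates by name and notes
    which subtasks require local demonstration data."""
    def __init__(self, catalog): self.catalog = catalog
    def get(self, name): return self.catalog[name]

catalog = {"connector_insertion": {
    "state_machine": ["approach", "insert", "verify"],
    "demonstration_subtasks": ["insert"],
}}
dist = SkillTemplateDistributionSystem(catalog)

template = dist.get("connector_insertion")            # user selects a template
local_demos = [0.030, 0.031, 0.029]                   # collected on the user's robot
learned = {"insert": {"target_depth": sum(local_demos) / len(local_demos)}}
print(template["state_machine"], learned)             # learned parameter values
```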
  • Publication number: 20230337981
    Abstract: Methods and wearable devices for optimizing power consumption using sensor-based position and use determinations are described here. One example method is performed at a device that includes a first sensor configured to operate with a first power consumption rate and a second sensor configured to operate with a second power consumption rate. The method includes, while a component associated with the second sensor is in an inactive state, receiving first sensor data, and determining whether the first sensor data indicates movement of the device. The method also includes, when movement of the device is indicated, operating the second sensor in an active state. The method further includes, after activating the second sensor, when second sensor data from the second sensor indicates that the device has been placed on a user’s body, continuing to operate the second sensor in the active state.
    Type: Application
    Filed: April 18, 2023
    Publication date: October 26, 2023
    Inventors: Nishant Srinivasan, Nagalakshmi Rajagopal, Derek William Wright, Edwin Corona Aparicio, Szymon Michal Tankiewicz, Ravi Krishna Shaga, Ramiro Calderon, Shan Chu, Priyanka Sharma, Lei Yin, Lidu Huang
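A minimal state-machine view of the two-sensor power policy described above: a low-power motion sensor gates a higher-power sensor, which then stays active only while the device is worn. Thresholds and sensor roles are illustrative assumptions.

```python
class WearablePowerManager:
    """Toy two-sensor power policy: a low-power motion sensor gates a
    higher-power second sensor on and off."""
    def __init__(self):
        self.second_sensor_active = False

    def on_first_sensor(self, accel_magnitude: float, threshold: float = 0.2):
        # While the second sensor's component is inactive, only the cheap
        # first sensor is polled; motion above threshold wakes the second.
        if not self.second_sensor_active and accel_magnitude > threshold:
            self.second_sensor_active = True

    def on_second_sensor(self, on_body: bool):
        # Keep the expensive sensor running only while the device is worn.
        self.second_sensor_active = on_body

pm = WearablePowerManager()
pm.on_first_sensor(0.05)           # no movement -> second sensor stays off
pm.on_first_sensor(0.50)           # movement detected -> activate second sensor
pm.on_second_sensor(on_body=True)  # placed on the body -> keep it active
print(pm.second_sensor_active)
```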
  • Patent number: 11780086
    Abstract: Methods, systems, and apparatus, including computer programs encoded on computer storage media, for using a demonstration device for robotic demonstration learning. One of the methods includes generating, by a demonstration device for a robot, a representation of a sequence of states input by a user of the demonstration device. The representation is provided by the demonstration device to a robot execution system. The representation of the sequence of actions is translated into a plurality of robot commands corresponding to the representation of the sequence of states input by the user on the demonstration device. The plurality of robot commands corresponding to the sequence of actions input by the user on the demonstration device are executed. Demonstration data is generated from one or more sensor streams of the robot while executing the plurality of robot commands corresponding to the sequence of actions input by the user on the demonstration device.
    Type: Grant
    Filed: October 17, 2022
    Date of Patent: October 10, 2023
    Assignee: Intrinsic Innovation LLC
    Inventors: Bala Venkata Sai Ravi Krishna Kolluri, Stefan Schaal, Ralf Oliver Michael Schönherr, Ning Ye
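A short sketch of the translation-and-record loop above: a user-demonstrated sequence of states is translated into robot commands, and the robot's sensor stream is captured as demonstration data while they execute. The command strings and sensor reader are hypothetical.

```python
def translate(states: list[dict]) -> list[str]:
    """Translate a user-demonstrated sequence of states into robot commands
    (a hypothetical, trivially string-based command format)."""
    return [f"MOVE {s['pose']}" if s["kind"] == "move" else f"GRIP {s['width']}"
            for s in states]

def execute_and_record(commands, read_sensors):
    """Execute each command and capture the robot's sensor stream as
    demonstration data."""
    demonstration_data = []
    for cmd in commands:
        print("exec:", cmd)                      # stand-in for real execution
        demonstration_data.append(read_sensors())
    return demonstration_data

states = [{"kind": "move", "pose": (0.4, 0.0, 0.3)}, {"kind": "grip", "width": 0.02}]
data = execute_and_record(translate(states), read_sensors=lambda: {"force_z": 1.2})
print(len(data), "sensor snapshots recorded")
```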
  • Publication number: 20230286148
    Abstract: Methods, systems, and apparatus, including computer programs encoded on computer storage media, for computing interpolated robot control parameters. One of the methods includes receiving, by a real-time bridge from a control agent for a robot, a non-real-time command for the robot, wherein the non-real-time command specifies a trajectory to be attained by a component of the robot and a target value for a control parameter, wherein the control parameter controls how a real-time controller will cause the robot to react to one or more external stimuli encountered during a control cycle of the real-time controller. The real-time bridge provides the one or more real-time commands translated from the non-real-time command and interpolated control parameter information to the real-time controller, thereby causing the robot to effectuate the trajectory of the non-real-time command according to the interpolated control parameter information.
    Type: Application
    Filed: May 17, 2023
    Publication date: September 14, 2023
    Inventors: Michael Beardsworth, Klas Jonas Alfred Kronander, Sean Alexander Cassero, Bala Venkata Sai Ravi Krishna Kolluri
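The interpolation idea above can be shown in a few lines: instead of jumping a control parameter (such as stiffness) to its new target, the bridge spreads the change across the real-time controller's cycles while the trajectory is followed. The command format and linear interpolation are assumptions for illustration.

```python
import numpy as np

def interpolate_parameter(current: float, target: float, steps: int) -> np.ndarray:
    """Spread a non-real-time parameter change (e.g. stiffness) over the
    real-time controller's cycles instead of jumping to the target value."""
    return np.linspace(current, target, steps)

def bridge(non_rt_command: dict, cycles: int, send_rt):
    """Toy real-time bridge: pair each trajectory waypoint with an
    interpolated value of the control parameter."""
    stiffness = interpolate_parameter(non_rt_command["stiffness_from"],
                                      non_rt_command["stiffness_to"], cycles)
    waypoints = np.linspace(non_rt_command["start"], non_rt_command["goal"], cycles)
    for wp, k in zip(waypoints, stiffness):
        send_rt({"waypoint": float(wp), "stiffness": float(k)})

bridge({"start": 0.0, "goal": 0.1, "stiffness_from": 300.0, "stiffness_to": 50.0},
       cycles=5, send_rt=print)
```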
  • Patent number: D1005781
    Type: Grant
    Filed: January 29, 2021
    Date of Patent: November 28, 2023
    Assignee: DUKE MANUFACTURING CO.
    Inventor: Sai Ravi Krishna Subramani