Patents by Inventor Lava Kumar Bokam

Lava Kumar Bokam has filed for patents to protect the following inventions. This listing includes both pending patent applications and patents already granted by the United States Patent and Trademark Office (USPTO).

  • Publication number: 20250123885
    Abstract: A method includes: dequeuing a signal primitive from a signaling command queue in the set of command queues, the signal primitive pointing to a waiting command queue; in response to the signal primitive pointing to the waiting command queue, incrementing a number of pending signal primitives in the signal-wait counter matrix; dequeuing a wait primitive from the waiting command queue, the wait primitive pointing to the signaling command queue; in response to the wait primitive pointing to the signaling command queue, accessing the register to read the number of pending signal primitives; in response to the number of pending signal primitives indicating at least one pending signal primitive: decrementing the number of pending signal primitives; and dequeuing an instruction from the waiting command queue; and dispatching a control signal representing the instruction to a resource.
    Type: Application
    Filed: December 17, 2024
    Publication date: April 17, 2025
    Inventors: Mohamed Shahim, Sreenivas Aerra Reddy, Raju Datla, Lava Kumar Bokam, Suresh Kumar Vennam, Sameek Banerjee
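The signal-wait mechanism in the abstract above is essentially a counter-gated handshake between command queues. The Python sketch below models that bookkeeping only; the `CommandQueueScheduler` class, its field names, and the encoding of primitives as queue indices are assumptions made for illustration, not the patented hardware design.

```python
from collections import deque

class CommandQueueScheduler:
    """Toy model of signal/wait primitives coordinating command queues."""

    def __init__(self, num_queues):
        self.queues = [deque() for _ in range(num_queues)]
        # Signal-wait counter matrix: counts[s][w] = pending signals issued by
        # signaling queue s and addressed to waiting queue w.
        self.counts = [[0] * num_queues for _ in range(num_queues)]

    def process_signal(self, signaling_q):
        """Dequeue a signal primitive and increment the pending-signal count."""
        waiting_q = self.queues[signaling_q].popleft()  # primitive points to the waiting queue
        self.counts[signaling_q][waiting_q] += 1

    def process_wait(self, waiting_q, dispatch):
        """Dequeue a wait primitive; if a matching signal is pending, release
        the next instruction from the waiting queue to a resource."""
        signaling_q = self.queues[waiting_q][0]  # peek: primitive points to the signaling queue
        if self.counts[signaling_q][waiting_q] > 0:
            self.counts[signaling_q][waiting_q] -= 1
            self.queues[waiting_q].popleft()                # retire the wait primitive
            instruction = self.queues[waiting_q].popleft()  # instruction behind the wait
            dispatch(instruction)                           # control signal to a resource
        # Otherwise the wait stalls until a matching signal arrives.

# Queue 0 signals queue 1; queue 1 waits on queue 0 and then releases "op".
sched = CommandQueueScheduler(num_queues=2)
sched.queues[0].append(1)
sched.queues[1].extend([0, "op"])
sched.process_signal(0)
sched.process_wait(1, dispatch=print)  # prints "op"
```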
  • Patent number: 12210902
    Abstract: A method includes: dequeuing a signal primitive from a signaling command queue in the set of command queues, the signal primitive pointing to a waiting command queue; in response to the signal primitive pointing to the waiting command queue, incrementing a number of pending signal primitives in the signal-wait counter matrix; dequeuing a wait primitive from the waiting command queue, the wait primitive pointing to the signaling command queue; in response to the wait primitive pointing to the signaling command queue, accessing the register to read the number of pending signal primitives; in response to the number of pending signal primitives indicating at least one pending signal primitive: decrementing the number of pending signal primitives; and dequeuing an instruction from the waiting command queue; and dispatching a control signal representing the instruction to a resource.
    Type: Grant
    Filed: February 15, 2024
    Date of Patent: January 28, 2025
    Assignee: Deep Vision Inc.
    Inventors: Mohamed Shahim, Sreenivas Aerra Reddy, Raju Datla, Lava Kumar Bokam, Suresh Kumar Vennam, Sameek Banerjee
  • Patent number: 12190113
    Abstract: A tensor traversal engine in a processor system comprising a source memory component and a destination memory component, the tensor traversal engine comprising: a control signal register storing a control signal for a strided data transfer operation from the source memory component to the destination memory component, the control signal comprising an initial source address, an initial destination address, a first source stride length in a first dimension, and a first source stride count in the first dimension; a source address register communicatively coupled to the control signal register; a destination address register communicatively coupled to the control signal register; a first source stride counter communicatively coupled to the control signal register; and control logic communicatively coupled to the control signal register, the source address register, and the first source stride counter.
    Type: Grant
    Filed: June 8, 2023
    Date of Patent: January 7, 2025
    Assignee: Deep Vision Inc.
    Inventors: Mohamed Shahim, Raju Datla, Abhilash Bharath Ghanore, Lava Kumar Bokam, Suresh Kumar Vennam, Rajashekar Reddy Ereddy
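The tensor traversal engine described above can be pictured as a small state machine that steps a source address by a stride until a counter is exhausted. The Python model below is a behavioral sketch of the one-dimensional case only; the dictionary-based control signal and the method names are assumptions, not the register-level design.

```python
class TensorTraversalEngineModel:
    """Behavioral sketch of a one-dimensional strided transfer."""

    def __init__(self, control_signal):
        self.ctrl = control_signal  # control signal register (transfer parameters)
        # Working registers loaded from the control signal.
        self.src_addr = control_signal["initial_source_address"]
        self.dst_addr = control_signal["initial_destination_address"]
        self.stride_counter = control_signal["source_stride_count"]

    def run(self, source_memory, destination_memory):
        """Control logic: step the source address by the stride length until
        the stride counter is exhausted, copying one element per step."""
        stride_len = self.ctrl["source_stride_length"]
        while self.stride_counter > 0:
            destination_memory[self.dst_addr] = source_memory[self.src_addr]
            self.src_addr += stride_len
            self.dst_addr += 1  # destination written contiguously in this sketch
            self.stride_counter -= 1

# Gather every fourth element of a 16-element source into a 4-element destination.
src = list(range(16))
dst = [0] * 4
TensorTraversalEngineModel({
    "initial_source_address": 0,
    "initial_destination_address": 0,
    "source_stride_length": 4,
    "source_stride_count": 4,
}).run(src, dst)
# dst == [0, 4, 8, 12]
```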
  • Publication number: 20240403667
    Abstract: Application prototyping systems and methods are disclosed. One aspect is a processing method for multiple computing devices that includes identifying resource constraints for the multiple computing devices. Using identified resource constraints, a presentation model having a plurality of modifiable parameters based at least in part based on the resource constraints is created. At least one inference engine supporting neural network processing is used to execute a particular neural network model based at least in part on the presentation model.
    Type: Application
    Filed: May 30, 2023
    Publication date: December 5, 2024
    Inventors: Abhilash Bharath Ghanore, Suresh Lakshmi Goduguluru, Rajashekar Reddy Ereddy, Sreenivas Aerra Reddy, Satya Uppalapati, Lava Kumar Bokam, Siva Kumar Vemuri, Arindam Chakraborty, Snigdha Alkanti, Divyansh Agrawal, Amit Pandey
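As a rough illustration of the method in the abstract above, the sketch below derives a presentation model's modifiable parameters from per-device resource constraints. The constraint keys, the parameter set, and the thresholds are assumptions, since the abstract does not prescribe specific heuristics.

```python
from dataclasses import dataclass

@dataclass
class PresentationModel:
    """Illustrative container for the 'modifiable parameters' in the abstract."""
    input_resolution: tuple = (224, 224)
    batch_size: int = 1
    precision: str = "fp32"

def build_presentation_model(device_constraints):
    """Derive parameters from the tightest constraints across the target devices."""
    min_memory_mb = min(c["memory_mb"] for c in device_constraints)
    supports_fp16 = all(c.get("fp16", False) for c in device_constraints)
    return PresentationModel(
        input_resolution=(224, 224) if min_memory_mb >= 512 else (128, 128),
        batch_size=4 if min_memory_mb >= 2048 else 1,
        precision="fp16" if supports_fp16 else "fp32",
    )

constraints = [{"memory_mb": 1024, "fp16": True}, {"memory_mb": 512, "fp16": True}]
pm = build_presentation_model(constraints)
# An inference engine would then execute the neural network model using pm's
# parameters (input size, batch size, precision) on each device.
```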
  • Publication number: 20240403668
    Abstract: Application prototyping systems and methods are disclosed. One aspect is a processing method for multiple computing devices that includes identifying resource constraints for the multiple computing devices. Using identified resource constraints, multiple presentation models at least in part based on identified processing metrics are created. In one aspect, the multiple presentation models include multiple processing pipelines configurable for execution on multiple computing devices. An inference engine can be used to provide an execution model for the multiple processing pipelines based at least in part on the multiple presentation models, with the execution model having improved processing metrics as compared to at least one of the multiple presentation models.
    Type: Application
    Filed: May 30, 2023
    Publication date: December 5, 2024
    Inventors: Abhilash Bharath Ghanore, Suresh Lakshmi Goduguluru, Rajashekar Reddy Ereddy, Sreenivas Aerra Reddy, Satya Uppalapati, Lava Kumar Bokam, Siva Kumar Vemuri, Arindam Chakraborty, Snigdha Alkanti, Divyansh Agrawal, Amit Pandey
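The companion application above turns multiple candidate pipelines into a single execution model with improved processing metrics. The sketch below stands in for that selection step with a latency-only measurement; the pipeline names, the timing method, and the metric are assumptions, not the patented procedure.

```python
import time

def select_execution_model(pipelines, sample_input, metric="latency_s"):
    """Measure each candidate pipeline once and pick the best-performing one."""
    results = []
    for name, pipeline in pipelines.items():
        start = time.perf_counter()
        pipeline(sample_input)  # run the candidate processing pipeline
        results.append({"name": name, "latency_s": time.perf_counter() - start})
    best = min(results, key=lambda r: r[metric])  # "improved processing metrics"
    return best["name"], results

# Two hypothetical pipelines that differ only in how much input they process.
pipelines = {
    "full_res": lambda x: sum(v * v for v in x),
    "downsampled": lambda x: sum(v * v for v in x[::4]),
}
chosen, measurements = select_execution_model(pipelines, list(range(100_000)))
```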
  • Publication number: 20240272939
    Abstract: A method includes: dequeuing a signal primitive from a signaling command queue in the set of command queues, the signal primitive pointing to a waiting command queue; in response to the signal primitive pointing to the waiting command queue, incrementing a number of pending signal primitives in the signal-wait counter matrix; dequeuing a wait primitive from the waiting command queue, the wait primitive pointing to the signaling command queue; in response to the wait primitive pointing to the signaling command queue, accessing the register to read the number of pending signal primitives; in response to the number of pending signal primitives indicating at least one pending signal primitive: decrementing the number of pending signal primitives; and dequeuing an instruction from the waiting command queue; and dispatching a control signal representing the instruction to a resource.
    Type: Application
    Filed: February 15, 2024
    Publication date: August 15, 2024
    Inventors: Mohamed Shahim, Sreenivas Aerra Reddy, Raju Datla, Lava Kumar Bokam, Suresh Kumar Vennam, Sameek Banerjee
  • Patent number: 11941440
    Abstract: A method includes: dequeuing a signal primitive from a signaling command queue in the set of command queues, the signal primitive pointing to a waiting command queue; in response to the signal primitive pointing to the waiting command queue, incrementing a number of pending signal primitives in the signal-wait counter matrix; dequeuing a wait primitive from the waiting command queue, the wait primitive pointing to the signaling command queue; in response to the wait primitive pointing to the signaling command queue, accessing the register to read the number of pending signal primitives; in response to the number of pending signal primitives indicating at least one pending signal primitive: decrementing the number of pending signal primitives; and dequeuing an instruction from the waiting command queue; and dispatching a control signal representing the instruction to a resource.
    Type: Grant
    Filed: October 25, 2022
    Date of Patent: March 26, 2024
    Assignee: Deep Vision Inc.
    Inventors: Mohamed Shahim, Sreenivas Aerra Reddy, Raju Datla, Lava Kumar Bokam, Suresh Kumar Vennam, Sameek Banerjee
  • Publication number: 20230409936
    Abstract: Proxy systems and methods for multiprocessing architectures are described. One method includes receiving an inference request and a statistics request from a client computing system. The method may access a load state of each processing device in a subset of processing devices preloaded with the neural network model, and select a target processing device from the subset based on the load states. One aspect includes transmitting the inference request to the target processing device, and monitoring an execution of the inference request by the target processing device based on the neural network model. The method may receive an inference result generated by the target processing device after executing the inference request, and compute an average inference time for the inference request execution based on the monitoring. The method may transmit the inference result and the average inference time to the client computing system.
    Type: Application
    Filed: May 17, 2023
    Publication date: December 21, 2023
    Inventors: Lava Kumar Bokam, Sriduth Jayhari, Divya Vipin, Rajashekar Reddy Ereddy, Snigdha Alkanti, Venkateswara Rao Andole Mankali, Suresh Kumar Vennam, Mohammed Mujahid, Sreenivas Aerra Reddy
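A compact sketch of the proxy flow described above: pick the least-loaded device among those preloaded with the model, run the inference request, and return the result together with a running average inference time. The device interface (`load()`, `run()`) and the stub device class are hypothetical, added only to make the example executable.

```python
import time

class _StubDevice:
    """Hypothetical processing device, defined only to make the sketch runnable."""
    def __init__(self, busy_fraction):
        self._busy = busy_fraction
    def load(self):
        return self._busy
    def run(self, request):
        return {"ok": True, "request": request}

class InferenceProxy:
    """Route each inference request to the least-loaded preloaded device."""

    def __init__(self, preloaded_devices):
        self.devices = preloaded_devices  # devices already holding the model
        self.inference_times = []         # history used for the average

    def handle_request(self, inference_request):
        target = min(self.devices, key=lambda d: d.load())  # load-state selection
        start = time.perf_counter()
        result = target.run(inference_request)  # transmit and monitor execution
        self.inference_times.append(time.perf_counter() - start)
        average_time = sum(self.inference_times) / len(self.inference_times)
        return result, average_time  # both go back to the client

proxy = InferenceProxy([_StubDevice(0.2), _StubDevice(0.7)])
result, avg_time = proxy.handle_request({"input": [1, 2, 3]})
```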
  • Publication number: 20230376728
    Abstract: Proxy systems and methods for multiprocessing architectures are described. One method includes receiving a neural network model from a client computing system. System resource availability on a plurality of processing devices may be assessed, and a subset of available processing devices may be selected based on the system resource availability. In one aspect, the neural network model is loaded into each processing device in the subset. The method may include receiving an inference request from the client computing system. A load state of each processing device in the subset may be accessed, and a target processing device from the subset may be selected based on the load states. The inference request may be transmitted to the target processing device.
    Type: Application
    Filed: May 17, 2023
    Publication date: November 23, 2023
    Inventors: Lava Kumar Bokam, Sriduth Jayhari, Divya Vipin, Rajashekar Reddy Ereddy, Snigdha Alkanti, Venkateswara Rao Andole Mankali, Suresh Kumar Vennam, Mohammed Mujahid, Sreenivas Aerra Reddy
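The related application above covers the deployment side: assessing resource availability, choosing a device subset, and preloading the model onto it. The sketch below reduces that assessment to a single free-memory check; the device methods, the model name, and the threshold are hypothetical.

```python
class _StubDevice:
    """Hypothetical device exposing only what this sketch needs."""
    def __init__(self, free_mb):
        self._free_mb = free_mb
    def free_memory_mb(self):
        return self._free_mb
    def load_model(self, model):
        self.model = model

def deploy_model(model, devices, required_memory_mb):
    """Select devices with enough free memory and preload the model on each."""
    subset = [d for d in devices if d.free_memory_mb() >= required_memory_mb]
    for device in subset:
        device.load_model(model)
    return subset  # inference requests are later routed within this subset

subset = deploy_model("resnet50.onnx", [_StubDevice(4096), _StubDevice(256)], 1024)
# Only the first device qualifies; requests would then be balanced across the
# subset using per-device load states, as in the preceding entry's sketch.
```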
  • Publication number: 20230315464
    Abstract: A tensor traversal engine in a processor system comprising a source memory component and a destination memory component, the tensor traversal engine comprising: a control signal register storing a control signal for a strided data transfer operation from the source memory component to the destination memory component, the control signal comprising an initial source address, an initial destination address, a first source stride length in a first dimension, and a first source stride count in the first dimension; a source address register communicatively coupled to the control signal register; a destination address register communicatively coupled to the control signal register; a first source stride counter communicatively coupled to the control signal register; and control logic communicatively coupled to the control signal register, the source address register, and the first source stride counter.
    Type: Application
    Filed: June 8, 2023
    Publication date: October 5, 2023
    Inventors: Mohamed Shahim, Raju Datla, Abhilash Bharath Ghanore, Lava Kumar Bokam, Suresh Kumar Vennam, Rajashekar Reddy Ereddy
  • Patent number: 11714651
    Abstract: A tensor traversal engine in a processor system comprising a source memory component and a destination memory component, the tensor traversal engine comprising: a control signal register storing a control signal for a strided data transfer operation from the source memory component to the destination memory component, the control signal comprising an initial source address, an initial destination address, a first source stride length in a first dimension, and a first source stride count in the first dimension; a source address register communicatively coupled to the control signal register; a destination address register communicatively coupled to the control signal register; a first source stride counter communicatively coupled to the control signal register; and control logic communicatively coupled to the control signal register, the source address register, and the first source stride counter.
    Type: Grant
    Filed: May 26, 2021
    Date of Patent: August 1, 2023
    Assignee: Deep Vision Inc.
    Inventors: Mohamed Shahim, Raju Datla, Abhilash Bharath Ghanore, Lava Kumar Bokam, Suresh Kumar Vennam, Rajashekar Reddy Ereddy
  • Publication number: 20230052277
    Abstract: A method includes: dequeuing a signal primitive from a signaling command queue in the set of command queues, the signal primitive pointing to a waiting command queue; in response to the signal primitive pointing to the waiting command queue, incrementing a number of pending signal primitives in the signal-wait counter matrix; dequeuing a wait primitive from the waiting command queue, the wait primitive pointing to the signaling command queue; in response to the wait primitive pointing to the signaling command queue, accessing the register to read the number of pending signal primitives; in response to the number of pending signal primitives indicating at least one pending signal primitive: decrementing the number of pending signal primitives; and dequeuing an instruction from the waiting command queue; and dispatching a control signal representing the instruction to a resource.
    Type: Application
    Filed: October 25, 2022
    Publication date: February 16, 2023
    Inventors: Mohamed Shahim, Sreenivas Aerra Reddy, Raju Datla, Lava Kumar Bokam, Suresh Kumar Vennam, Sameek Banerjee
  • Patent number: 11513847
    Abstract: A method includes: dequeuing a signal primitive from a signaling command queue in the set of command queues, the signal primitive pointing to a waiting command queue; in response to the signal primitive pointing to the waiting command queue, incrementing a number of pending signal primitives in the signal-wait counter matrix; dequeuing a wait primitive from the waiting command queue, the wait primitive pointing to the signaling command queue; in response to the wait primitive pointing to the signaling command queue, accessing the register to read the number of pending signal primitives; in response to the number of pending signal primitives indicating at least one pending signal primitive: decrementing the number of pending signal primitives; and dequeuing an instruction from the waiting command queue; and dispatching a control signal representing the instruction to a resource.
    Type: Grant
    Filed: March 24, 2021
    Date of Patent: November 29, 2022
    Assignee: Deep Vision Inc.
    Inventors: Mohamed Shahim, Sreenivas Aerra Reddy, Raju Datla, Lava Kumar Bokam, Suresh Kumar Vennam, Sameek Banerjee
  • Publication number: 20210373792
    Abstract: A tensor traversal engine in a processor system comprising a source memory component and a destination memory component, the tensor traversal engine comprising: a control signal register storing a control signal for a strided data transfer operation from the source memory component to the destination memory component, the control signal comprising an initial source address, an initial destination address, a first source stride length in a first dimension, and a first source stride count in the first dimension; a source address register communicatively coupled to the control signal register; a destination address register communicatively coupled to the control signal register; a first source stride counter communicatively coupled to the control signal register; and control logic communicatively coupled to the control signal register, the source address register, and the first source stride counter.
    Type: Application
    Filed: May 26, 2021
    Publication date: December 2, 2021
    Inventors: Mohamed Shahim, Raju Datla, Abhilash Bharath Ghanore, Lava Kumar Bokam, Suresh Kumar Vennam, Rajashekar Reddy Ereddy
  • Publication number: 20210303346
    Abstract: A method includes: dequeuing a signal primitive from a signaling command queue in the set of command queues, the signal primitive pointing to a waiting command queue; in response to the signal primitive pointing to the waiting command queue, incrementing a number of pending signal primitives in the signal-wait counter matrix; dequeuing a wait primitive from the waiting command queue, the wait primitive pointing to the signaling command queue; in response to the wait primitive pointing to the signaling command queue, accessing the register to read the number of pending signal primitives; in response to the number of pending signal primitives indicating at least one pending signal primitive: decrementing the number of pending signal primitives; and dequeuing an instruction from the waiting command queue; and dispatching a control signal representing the instruction to a resource.
    Type: Application
    Filed: March 24, 2021
    Publication date: September 30, 2021
    Inventors: Mohamed Shahim, Sreenivas Aerra Reddy, Raju Datla, Lava Kumar Bokam, Suresh Kumar Vennam, Sameek Banerjee
  • Publication number: 20210191765
    Abstract: A method for scheduling an artificial neural network includes: accessing a processor representation of a multicore processor comprising processor cores, direct memory access cores, and a cost model; and accessing a network structure defining a set of layers. The method also includes, for each layer in the set of layers: generating a graph based on the processor representation, the graph defining compute nodes, data transfer nodes, and edges representing dependencies between the compute nodes and the data transfer nodes; and generating a schedule for the layer based on the graph, the schedule assigning the compute nodes to the processor cores and assigning the data transfer nodes to the direct memory access cores. The method further includes aggregating the schedule for each layer in the set of layers to generate a complete schedule for the artificial neural network.
    Type: Application
    Filed: December 18, 2020
    Publication date: June 24, 2021
    Inventors: Lava Kumar Bokam, Sameek Banerjee, Abhilash Bharath Ghanore, Rajashekar Reddy Ereddy, Wajahat Qadeer, Rehan Hameed, Mohamed Shahim, Sreenivas Aerra Reddy
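The scheduling method above assigns compute nodes to processor cores and data-transfer nodes to direct memory access cores, layer by layer, then aggregates the per-layer schedules. The sketch below implements that assignment with a greedy earliest-available-core heuristic; the graph format, cost-model interface, and heuristic are assumptions, since the abstract does not fix a particular scheduling policy.

```python
def schedule_layer(graph, compute_cores, dma_cores, cost_model):
    """Greedily assign each node of a layer graph to the earliest-free core of
    the matching type, honoring dependency edges."""
    free_at = {core: 0.0 for core in compute_cores + dma_cores}
    ready_at = {}  # earliest time each node's dependencies are satisfied
    schedule = []
    for node in graph["topo_order"]:  # nodes visited in dependency order
        pool = compute_cores if graph["kind"][node] == "compute" else dma_cores
        core = min(pool, key=lambda c: free_at[c])  # earliest-available core
        start = max(free_at[core], ready_at.get(node, 0.0))
        finish = start + cost_model(node)
        free_at[core] = finish
        for successor in graph["edges"].get(node, []):  # propagate readiness
            ready_at[successor] = max(ready_at.get(successor, 0.0), finish)
        schedule.append((node, core, start, finish))
    return schedule

def schedule_network(layer_graphs, compute_cores, dma_cores, cost_model):
    """Aggregate the per-layer schedules into a complete schedule."""
    return [schedule_layer(g, compute_cores, dma_cores, cost_model)
            for g in layer_graphs]

# Tiny example: one layer whose weight load (DMA) feeds a convolution (compute).
layer_graph = {
    "topo_order": ["load_weights", "conv"],
    "kind": {"load_weights": "transfer", "conv": "compute"},
    "edges": {"load_weights": ["conv"]},
}
full_schedule = schedule_network([layer_graph], ["core0", "core1"], ["dma0"],
                                 cost_model=lambda node: 1.0)
```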