Patents by Inventor Elisa Rodrigues
Elisa Rodrigues has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).
-
Patent number: 9858241
Abstract: A system and method can support efficient packet processing in a network environment. The system can comprise a direct memory access (DMA) resource pool that comprises one or more DMA resources. Furthermore, the system can use a plurality of packet buffers in a memory, wherein each said DMA resource can point to a chain of packet buffers in the memory. Here, the chain of packet buffers can be implemented based on either a linked list or a linear array data structure. Additionally, each said DMA resource allows a packet processing thread to access the chain of packet buffers using a pre-assigned thread key.
Type: Grant
Filed: November 5, 2013
Date of Patent: January 2, 2018
Assignee: Oracle International Corporation
Inventors: Arvind Srinivasan, Ajoy Siddabathuni, Elisa Rodrigues
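The abstract describes DMA resources that each point at a chain of packet buffers and gate access with a pre-assigned thread key. A minimal Python sketch of that idea, using the linked-list variant; all class and field names here are illustrative, not taken from the patent:

```python
from dataclasses import dataclass

@dataclass
class PacketBuffer:
    data: bytes
    next: "PacketBuffer | None" = None   # link to the next buffer in the chain

class DMAResource:
    """One DMA resource pointing at a chain of packet buffers."""
    def __init__(self, thread_key: int):
        self.thread_key = thread_key     # pre-assigned key gating access
        self.head = None                 # head of the linked-list chain

    def push(self, buf: PacketBuffer):
        # prepend the new buffer to the chain
        buf.next = self.head
        self.head = buf

    def read_chain(self, key: int):
        # only a thread presenting the matching key may walk the chain
        if key != self.thread_key:
            raise PermissionError("thread key mismatch")
        out, node = [], self.head
        while node:
            out.append(node.data)
            node = node.next
        return out
```

A linear-array variant would simply replace the `next` links with a Python list, as the abstract allows either structure.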
-
Patent number: 9489327
Abstract: A system and method can support efficient packet processing in a network environment. The system can comprise a thread scheduling engine that operates to assign a thread key to each software thread in a plurality of software threads. Furthermore, the system can comprise a pool of direct memory access (DMA) resources that can be used to process packets in the network environment. Additionally, each said software thread operates to request access to a DMA resource in the pool of DMA resources by presenting an assigned thread key, and a single software thread is allowed to access multiple DMA resources using the same thread key.
Type: Grant
Filed: November 5, 2013
Date of Patent: November 8, 2016
Assignee: Oracle International Corporation
Inventors: Arvind Srinivasan, Ajoy Siddabathuni, Elisa Rodrigues
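Here the key points are that the scheduling engine hands each software thread one stable key, and that a single key can hold several DMA resources at once. A small Python sketch under those assumptions (the names are hypothetical):

```python
import itertools

class ThreadSchedulingEngine:
    """Assigns a stable thread key to each software thread."""
    def __init__(self):
        self._counter = itertools.count(1)
        self._keys = {}

    def assign_key(self, thread_id):
        # each thread gets one key; repeated calls return the same key
        if thread_id not in self._keys:
            self._keys[thread_id] = next(self._counter)
        return self._keys[thread_id]

class DMAResourcePool:
    """Pool of DMA resources; one thread key may own several resources."""
    def __init__(self, size):
        self._free = list(range(size))
        self.owner = {}                  # resource id -> thread key

    def acquire(self, thread_key):
        rid = self._free.pop()
        self.owner[rid] = thread_key     # same key may acquire repeatedly
        return rid
```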
-
Publication number: 20150127869
Abstract: A system and method can support efficient packet processing in a network environment. The system can comprise a thread scheduling engine that operates to assign a thread key to each software thread in a plurality of software threads. Furthermore, the system can comprise a pool of direct memory access (DMA) resources that can be used to process packets in the network environment. Additionally, each said software thread operates to request access to a DMA resource in the pool of DMA resources by presenting an assigned thread key, and a single software thread is allowed to access multiple DMA resources using the same thread key.
Type: Application
Filed: November 5, 2013
Publication date: May 7, 2015
Applicant: Oracle International Corporation
Inventors: Arvind Srinivasan, Ajoy Siddabathuni, Elisa Rodrigues
-
Publication number: 20150127762
Abstract: A system and method can support efficient packet processing in a network environment. The system can comprise a direct memory access (DMA) resource pool that comprises one or more DMA resources. Furthermore, the system can use a plurality of packet buffers in a memory, wherein each said DMA resource can point to a chain of packet buffers in the memory. Here, the chain of packet buffers can be implemented based on either a linked list or a linear array data structure. Additionally, each said DMA resource allows a packet processing thread to access the chain of packet buffers using a pre-assigned thread key.
Type: Application
Filed: November 5, 2013
Publication date: May 7, 2015
Applicant: Oracle International Corporation
Inventors: Arvind Srinivasan, Ajoy Siddabathuni, Elisa Rodrigues
-
Patent number: 8527745
Abstract: An I/O device includes a host interface configured to process function level reset (FLR) requests in a specified amount of time. The host interface includes a control unit and groups of configuration space registers, each group corresponding to a function. The host interface also includes application availability registers, each associated with a respective function, and which may indicate whether application hardware within the respective function is available for access by a corresponding application device driver. The I/O device also includes application hardware resources associated with a respective function. In response to receiving an FLR request of a particular function, the control unit may cause the associated application availability register to indicate that the application hardware within the particular function is not available to the driver.
Type: Grant
Filed: December 7, 2009
Date of Patent: September 3, 2013
Assignee: Oracle America, Inc.
Inventors: John E. Watkins, Elisa Rodrigues
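The mechanism in this abstract is an availability register that is cleared during a function level reset so the driver knows not to touch that function's application hardware. A behavioral sketch in Python, with invented register names and reset values used purely for illustration:

```python
class PCIFunction:
    """Config-space registers plus an application availability register."""
    def __init__(self):
        self.config = {"command": 0x0006, "status": 0x0010}  # illustrative values
        self.hw_available = True   # availability register polled by the driver

class HostInterface:
    def __init__(self, num_functions):
        self.functions = [PCIFunction() for _ in range(num_functions)]

    def function_level_reset(self, idx):
        fn = self.functions[idx]
        fn.hw_available = False    # tell the driver the hardware is unusable
        for reg in fn.config:
            fn.config[reg] = 0     # reset only that function's config space
        return fn

    def reset_complete(self, idx):
        # invoked once the reset finishes within the specified time budget
        self.functions[idx].hw_available = True
```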
-
Patent number: 8402320
Abstract: An I/O device includes a host interface that may be configured to receive and process a plurality of transaction packets sent by a number of processing units, with each processing unit corresponding to a respective root complex. The host interface includes an error handling unit having error logic implemented in hardware that may be configured to determine whether each transaction packet has an error and to store information corresponding to any detected errors within a storage. More particularly, the error handling unit may perform the error detection and capture of the error information in real time, as the transaction packets are received, while also including firmware that may subsequently process the information corresponding to the detected errors.
Type: Grant
Filed: May 25, 2010
Date of Patent: March 19, 2013
Assignee: Oracle International Corporation
Inventors: John E. Watkins, Elisa Rodrigues
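The split here is between a fast hardware path that captures error records as packets arrive and a slower firmware path that processes the records afterward. A minimal Python model of that two-stage flow (packet fields and error kinds are assumed, not from the patent):

```python
from collections import deque

class ErrorHandlingUnit:
    """Hardware path captures errors in real time; firmware drains them later."""
    def __init__(self):
        self.error_log = deque()   # storage written by the hardware error logic

    def receive(self, packet):
        # hardware: check and capture as the packet arrives
        if packet.get("bad_crc"):
            self.error_log.append({"rc": packet["rc"], "kind": "crc"})
            return False           # packet rejected
        return True                # packet accepted

    def firmware_drain(self):
        # firmware: process the captured error records after the fact
        drained = []
        while self.error_log:
            drained.append(self.error_log.popleft())
        return drained
```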
-
Patent number: 8312187
Abstract: An I/O device includes a host interface coupled to a plurality of hardware resources. The host interface includes a transaction layer packet (TLP) processing unit that may receive and process a plurality of transaction layer packets sent by a plurality of processing units. Each processing unit may correspond to a respective root complex. The TLP processing unit may identify a transaction type and a processing unit corresponding to each transaction layer packet and store each transaction layer packet within a storage according to the transaction type and the processing unit. The TLP processing unit may select one or more transaction layer packets from the storage for process scheduling based upon a set of fairness criteria using an arbitration scheme. The TLP processing unit may further select and dispatch transaction layer packets for processing by downstream application hardware based upon additional criteria.
Type: Grant
Filed: September 18, 2009
Date of Patent: November 13, 2012
Assignee: Oracle America, Inc.
Inventors: Elisa Rodrigues, John E. Watkins
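The core data structure is storage keyed by (transaction type, root complex), with an arbiter picking packets fairly across those queues. A Python sketch using simple round-robin as a stand-in for the patent's unspecified fairness criteria; queue keys and packet fields are assumptions:

```python
from collections import deque

class TLPProcessingUnit:
    """Queues TLPs by (transaction type, root complex) and arbitrates fairly."""
    def __init__(self):
        self.queues = {}           # (type, root complex) -> deque of packets

    def receive(self, tlp):
        key = (tlp["type"], tlp["rc"])
        self.queues.setdefault(key, deque()).append(tlp)

    def schedule_round(self):
        # one round-robin pass: pop at most one TLP per queue, a simple
        # stand-in for the arbitration scheme described in the abstract
        picked = []
        for key in list(self.queues):
            q = self.queues[key]
            if q:
                picked.append(q.popleft())
        return picked
```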
-
Patent number: 8286027
Abstract: An I/O device includes a host interface that may receive and process transaction packets sent by a number of processing units, with each processing unit corresponding to a respective root complex. The host interface includes an error handling unit having error logic implemented in hardware that may determine, as each packet is received, whether each transaction packet has an error, and store information corresponding to any detected errors. The error handling unit may include an error processor that may be configured to execute error processing instructions to determine any error processing operations based upon the information. The error processor may also generate and send one or more instruction operations, each corresponding to a particular error processing operation. The error handling unit may also include an error processing unit that may execute the one or more instruction operations to perform the particular error processing operations.
Type: Grant
Filed: May 25, 2010
Date of Patent: October 9, 2012
Assignee: Oracle International Corporation
Inventors: John E. Watkins, Elisa Rodrigues, Abbas Morshed
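This abstract adds a pipeline on top of the captured error information: an error processor translates each error record into instruction operations, and a separate unit executes them. A toy Python rendering of that translate-then-execute split (error kinds and operation names are invented for illustration):

```python
def error_processor(error_info):
    """Turn captured error information into instruction operations."""
    ops = []
    if error_info["kind"] == "malformed":
        ops.append(("drop_packet", error_info["tag"]))
    ops.append(("log", error_info["tag"]))   # every error gets logged
    return ops

class ErrorProcessingUnit:
    """Executes the instruction operations the error processor emits."""
    def __init__(self):
        self.dropped = []
        self.logged = []

    def execute(self, ops):
        for op, tag in ops:
            if op == "drop_packet":
                self.dropped.append(tag)
            elif op == "log":
                self.logged.append(tag)
```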
-
Patent number: 8117350
Abstract: The described embodiments provide a system for accessing values for configuration space registers (CSRs). This system includes a CSR data storage mechanism with an address input and a CSR data output. The CSR data storage mechanism includes a memory containing a number of memory locations for storing the true or actual values for CSRs for functions for corresponding devices. In these embodiments, the memory locations are divided into at least one shared region and at least one unique region. In these embodiments, in response to receiving an address for a memory location on the address input, the CSR data storage mechanism accesses the value for the CSR in the memory location in a corresponding shared region or unique region.
Type: Grant
Filed: November 3, 2009
Date of Patent: February 14, 2012
Assignee: Oracle America, Inc.
Inventors: John E. Watkins, Elisa Rodrigues
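The abstract's essential idea is one backing memory split into a shared region (one copy of a CSR value seen by every function) and per-function unique regions. A small Python sketch, assuming a flat address space where low addresses map to the shared region; the layout is hypothetical:

```python
class CSRStorage:
    """CSR memory split into a shared region and per-function unique regions."""
    def __init__(self, shared_size, unique_size, num_functions):
        self.shared = [0] * shared_size
        self.unique = [[0] * unique_size for _ in range(num_functions)]

    def _slot(self, function, addr):
        # addresses below the shared boundary resolve to the shared region;
        # the rest index into that function's unique region
        if addr < len(self.shared):
            return self.shared, addr
        return self.unique[function], addr - len(self.shared)

    def read(self, function, addr):
        region, offset = self._slot(function, addr)
        return region[offset]

    def write(self, function, addr, value):
        region, offset = self._slot(function, addr)
        region[offset] = value
```

Keeping common CSR values in one shared copy rather than replicating them per function is the space saving the patent is after.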
-
Publication number: 20110296256
Abstract: An I/O device includes a host interface that may receive and process transaction packets sent by a number of processing units, with each processing unit corresponding to a respective root complex. The host interface includes an error handling unit having error logic implemented in hardware that may determine, as each packet is received, whether each transaction packet has an error, and store information corresponding to any detected errors. The error handling unit may include an error processor that may be configured to execute error processing instructions to determine any error processing operations based upon the information. The error processor may also generate and send one or more instruction operations, each corresponding to a particular error processing operation. The error handling unit may also include an error processing unit that may execute the one or more instruction operations to perform the particular error processing operations.
Type: Application
Filed: May 25, 2010
Publication date: December 1, 2011
Inventors: John E. Watkins, Elisa Rodrigues, Abbas Morshed
-
Publication number: 20110296255
Abstract: An I/O device includes a host interface that may be configured to receive and process a plurality of transaction packets sent by a number of processing units, with each processing unit corresponding to a respective root complex. The host interface includes an error handling unit having error logic implemented in hardware that may be configured to determine whether each transaction packet has an error and to store information corresponding to any detected errors within a storage. More particularly, the error handling unit may perform the error detection and capture of the error information in real time, as the transaction packets are received, while also including firmware that may subsequently process the information corresponding to the detected errors.
Type: Application
Filed: May 25, 2010
Publication date: December 1, 2011
Inventors: John E. Watkins, Elisa Rodrigues
-
Patent number: 8032669
Abstract: A universal DMA (Direct Memory Access) engine can be dynamically configured to function in either a receive or transmit mode. DMAs are logically assembled and bound as needed, without limitation to a fixed, pre-determined number of receive engines and transmit engines. Because a DMA engine may be dynamically assembled to support the flow of data in either direction, varied usage models are enabled, and components used to assemble a receive DMA engine for one application may be subsequently used to assemble a transmit engine for a different application. An application may request a specific number of each type of engine, depending on the nature of its input/output traffic. The number of receive or transmit engines can be dynamically increased or decreased without suspending or rebooting the host. A universal DMA architecture provides a unified software framework, thereby decreasing the complexity of the software and the hardware gate count cost.
Type: Grant
Filed: January 20, 2008
Date of Patent: October 4, 2011
Assignee: Oracle America, Inc.
Inventors: Rahoul Puri, Arvind Srinivasan, Elisa Rodrigues
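The point of the universal engine is that direction is a late binding: the same pool of components can be assembled as receive engines for one application and later as transmit engines for another. A Python sketch of that allocator behavior (class and method names are invented for the example):

```python
class UniversalDMAEngine:
    """A DMA engine bound to receive or transmit mode at assembly time."""
    def __init__(self, engine_id):
        self.engine_id = engine_id
        self.mode = None               # direction is not fixed in hardware

    def bind(self, mode):
        assert mode in ("receive", "transmit")
        self.mode = mode
        return self

class EnginePool:
    """Assembles engines for either direction out of one shared pool."""
    def __init__(self, total):
        self.free = [UniversalDMAEngine(i) for i in range(total)]

    def request(self, mode, count):
        # an application asks for as many engines of one type as it needs
        if count > len(self.free):
            raise RuntimeError("not enough free engine components")
        return [self.free.pop().bind(mode) for _ in range(count)]

    def release(self, engines):
        for engine in engines:
            engine.mode = None         # components return to the shared pool
            self.free.append(engine)
```

Growing or shrinking either direction is then just `request`/`release` at runtime, mirroring the abstract's claim that counts change without suspending the host.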
-
Publication number: 20110138161
Abstract: An I/O device includes a host interface configured to process function level reset (FLR) requests in a specified amount of time. The host interface includes a control unit and groups of configuration space registers, each group corresponding to a function. The host interface also includes application availability registers, each associated with a respective function, and which may indicate whether application hardware within the respective function is available for access by a corresponding application device driver. The I/O device also includes application hardware resources associated with a respective function. In response to receiving an FLR request of a particular function, the control unit may cause the associated application availability register to indicate that the application hardware within the particular function is not available to the driver.
Type: Application
Filed: December 7, 2009
Publication date: June 9, 2011
Inventors: John E. Watkins, Elisa Rodrigues
-
Publication number: 20110106981
Abstract: The described embodiments provide a system for accessing values for configuration space registers (CSRs). This system includes a CSR data storage mechanism with an address input and a CSR data output. The CSR data storage mechanism includes a memory containing a number of memory locations for storing the true or actual values for CSRs for functions for corresponding devices. In these embodiments, the memory locations are divided into at least one shared region and at least one unique region. In these embodiments, in response to receiving an address for a memory location on the address input, the CSR data storage mechanism accesses the value for the CSR in the memory location in a corresponding shared region or unique region.
Type: Application
Filed: November 3, 2009
Publication date: May 5, 2011
Applicant: Sun Microsystems, Inc.
Inventors: John E. Watkins, Elisa Rodrigues
-
Publication number: 20110072172
Abstract: An I/O device includes a host interface coupled to a plurality of hardware resources. The host interface includes a transaction layer packet (TLP) processing unit that may receive and process a plurality of transaction layer packets sent by a plurality of processing units. Each processing unit may correspond to a respective root complex. The TLP processing unit may identify a transaction type and a processing unit corresponding to each transaction layer packet and store each transaction layer packet within a storage according to the transaction type and the processing unit. The TLP processing unit may select one or more transaction layer packets from the storage for process scheduling based upon a set of fairness criteria using an arbitration scheme. The TLP processing unit may further select and dispatch transaction layer packets for processing by downstream application hardware based upon additional criteria.
Type: Application
Filed: September 18, 2009
Publication date: March 24, 2011
Inventors: Elisa Rodrigues, John E. Watkins
-
Patent number: 7620693
Abstract: A system and method for tracking responses to InfiniBand RDMA Reads. When an RDMA Read request is issued by a transmit module, a receive module is informed of the packet sequence numbers (PSNs) associated with the expected RDMA Read responses. The receive module maintains a linked list for each queue pair that issues RDMA Reads. Each entry in the linked list corresponds to one RDMA Read for the associated queue pair, identifies the first and last PSN, and includes a link to the next entry in the linked list. When the final RDMA Read response is received, the receive module notifies the transmit module, which can then retire the RDMA Read from its retry queue.
Type: Grant
Filed: March 29, 2004
Date of Patent: November 17, 2009
Assignee: Sun Microsystems, Inc.
Inventors: James A. Mott, Elisa Rodrigues
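The tracking structure is a per-queue-pair list of outstanding Reads, each entry holding the first and last PSN of the expected responses; seeing the last PSN of the oldest entry retires that Read. A Python sketch of the bookkeeping only, using a deque in place of the hardware linked list and assuming in-order responses:

```python
from collections import deque

class ReadResponseTracker:
    """Per-queue-pair list of outstanding RDMA Reads, keyed by PSN range."""
    def __init__(self):
        self.pending = deque()     # (first_psn, last_psn) in issue order

    def on_read_issued(self, first_psn, last_psn):
        # the transmit module informs us of the expected response PSNs
        self.pending.append((first_psn, last_psn))

    def on_response(self, psn):
        # True when the final response of the oldest Read arrives, so the
        # transmit module can retire that Read from its retry queue
        if self.pending and psn == self.pending[0][1]:
            self.pending.popleft()
            return True
        return False
```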
-
Publication number: 20090187679
Abstract: A universal DMA (Direct Memory Access) engine can be dynamically configured to function in either a receive or transmit mode. DMAs are logically assembled and bound as needed, without limitation to a fixed, pre-determined number of receive engines and transmit engines. Because a DMA engine may be dynamically assembled to support the flow of data in either direction, varied usage models are enabled, and components used to assemble a receive DMA engine for one application may be subsequently used to assemble a transmit engine for a different application. An application may request a specific number of each type of engine, depending on the nature of its input/output traffic. The number of receive or transmit engines can be dynamically increased or decreased without suspending or rebooting the host. A universal DMA architecture provides a unified software framework, thereby decreasing the complexity of the software and the hardware gate count cost.
Type: Application
Filed: January 20, 2008
Publication date: July 23, 2009
Inventors: Rahoul Puri, Arvind Srinivasan, Elisa Rodrigues
-
Patent number: 7342934
Abstract: A system and method for processing interleaved Sends of encapsulated communications and responses to RDMA Reads in a single InfiniBand queue pair receive queue. The queue is implemented as one or more linked lists of memory buckets, and stores Send commands (containing encapsulated communications or RDMA Read descriptors for retrieving a communication) until their associated communications are assembled and forwarded to a transmit module. The queue grows as new InfiniBand packets are received, and shrinks as communications (e.g., Ethernet packets) are forwarded. A next packet pointer identifies the next Send command whose communication should be assembled. If it is an encapsulated communication, the communication is forwarded. Otherwise, RDMA Read requests are issued and the responses bypass the tail of the queue and are assembled in an assembly area at the head of the queue.
Type: Grant
Filed: March 29, 2004
Date of Patent: March 11, 2008
Assignee: Sun Microsystems, Inc.
Inventors: James A. Mott, Elisa Rodrigues
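The queue holds two kinds of Send commands, encapsulated payloads that forward directly and descriptors that require an RDMA Read before forwarding, while preserving arrival order. A heavily simplified Python sketch of that forwarding logic; the RDMA Read is modeled as a callback and the linked-list bucket storage as a deque, both illustrative stand-ins:

```python
from collections import deque

class SendReceiveQueue:
    """Queue of Send commands: encapsulated payloads forward directly,
    RDMA Read descriptors are fetched before forwarding."""
    def __init__(self, rdma_read):
        self.sends = deque()
        self.rdma_read = rdma_read     # callback standing in for RDMA Reads

    def enqueue(self, send):
        self.sends.append(send)        # queue grows as packets arrive

    def forward_next(self):
        # assemble and forward the next communication in arrival order
        send = self.sends.popleft()
        if "payload" in send:
            return send["payload"]     # encapsulated communication
        return self.rdma_read(send["descriptor"])
```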
-
Patent number: 6985153
Abstract: A graphics system comprising a scheduling network, a sample buffer and a plurality of filtering units. The sample buffer is configured to store samples generated by a rendering engine. The plurality of filtering units are coupled in a linear series. Each filtering unit of the linear series is configured to send a request for a scanline of sample bins to a first filtering unit of the linear series. The first filtering unit is configured to service the scanline requests by sending burst requests to a scheduling network and coordinating the flow of samples forming the bursts from the sample buffer to the filtering units.
Type: Grant
Filed: July 15, 2002
Date of Patent: January 10, 2006
Assignee: Sun Microsystems, Inc.
Inventors: Elisa Rodrigues, Lisa C. Grenier, Nimita J. Taneja
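The topology here is a linear series of filtering units in which every unit's scanline request is routed to the first unit, which alone talks to the sample buffer and coordinates the bursts back. A toy Python model of that routing (the sample-buffer layout and names are invented for the example):

```python
class FilteringUnit:
    def __init__(self, uid):
        self.uid = uid
        self.bursts_issued = 0     # only the first unit of the series issues bursts
        self.scanlines = []        # (scanline, sample bins) delivered to this unit

class FilterSeries:
    """Linear series of filtering units; the first unit services every
    scanline request by bursting sample bins out of the sample buffer."""
    def __init__(self, num_units, sample_buffer):
        self.units = [FilteringUnit(i) for i in range(num_units)]
        self.sample_buffer = sample_buffer   # scanline -> list of sample bins

    def request_scanline(self, unit_id, scanline):
        # all requests are routed to the first unit, which issues the burst
        # and coordinates the flow of samples back to the requester
        self.units[0].bursts_issued += 1
        bins = self.sample_buffer[scanline]
        self.units[unit_id].scanlines.append((scanline, bins))
        return bins
```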
-
Publication number: 20040008203
Abstract: A graphics system comprising a scheduling network, a sample buffer and a plurality of filtering units. The sample buffer is configured to store samples generated by a rendering engine. The plurality of filtering units are coupled in a linear series. Each filtering unit of the linear series is configured to send a request for a scanline of sample bins to a first filtering unit of the linear series. The first filtering unit is configured to service the scanline requests by sending burst requests to a scheduling network and coordinating the flow of samples forming the bursts from the sample buffer to the filtering units.
Type: Application
Filed: July 15, 2002
Publication date: January 15, 2004
Inventors: Elisa Rodrigues, Lisa C. Grenier, Nimita J. Taneja