Patents by Inventor Nazar Zaidi

Nazar Zaidi has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Patent number: 7068603
    Abstract: A cross-bar switch includes a set of input ports for receiving data packets and a set of sink ports for transmitting the received packets to identified targets. A set of data rings couples the input ports to the sink ports. Each sink port utilizes the set of data rings to simultaneously accept multiple data packets targeted to the same destination—creating a non-blocking cross-bar switch. Sink ports are also each capable of supporting multiple targets—providing the cross-bar switch with implicit multicast capability.
    Type: Grant
    Filed: July 6, 2001
    Date of Patent: June 27, 2006
    Assignee: Juniper Networks, Inc.
    Inventors: Abbas Rashid, Nazar Zaidi, Mark Bryers, Fred Gruner
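The accept logic this abstract describes can be illustrated with a minimal sketch (all class and field names here are hypothetical, not taken from the patent): a sink port snoops the data rings and may accept several packets bound for its targets in one pass, so no packet blocks another.

```python
class SinkPort:
    """Illustrative sink port that snoops a set of data rings."""

    def __init__(self, targets):
        self.targets = set(targets)   # destinations this sink port serves
        self.accepted = []

    def snoop_rings(self, rings):
        """Accept every packet on any ring whose target this port supports."""
        for packet in rings:
            if packet["target"] in self.targets:
                self.accepted.append(packet)

# Two packets for target "A" arrive on different rings in the same pass.
rings = [{"target": "A", "data": 1},
         {"target": "B", "data": 2},
         {"target": "A", "data": 3}]
port = SinkPort(targets={"A"})
port.snoop_rings(rings)
# port.accepted now holds both "A" packets: neither blocked the other
```

Because each sink port draws from all rings rather than a single shared path, simultaneous deliveries to one destination do not contend, which is the non-blocking property the abstract claims.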
  • Patent number: 7065090
    Abstract: A cross-bar switch includes a set of input ports and a set of sink ports in communication with the input ports. The input ports receive packets, which are snooped by the sink ports. The cross-bar switch also includes a set of port address tables. Each port address table is adapted to store data identifying a plurality of destinations supported by a sink port. For example, a first port address table is adapted to identify a plurality of destinations supported by a first sink port in the set of sink ports. When determining whether to accept a packet, a sink port considers whether the packet's destination is identified in the sink port's port address table. By supporting multiple destinations, a port address table implicitly facilitates a sink port's multicast operation.
    Type: Grant
    Filed: December 21, 2001
    Date of Patent: June 20, 2006
    Assignee: Juniper Networks, Inc.
    Inventors: Abbas Rashid, Nazar Zaidi, Mark Bryers
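The port-address-table mechanism can be sketched as follows (the names are illustrative, not from the patent): each sink port checks a snooped packet's destination against its own table, and because one destination may appear in several tables, a single packet can be accepted by multiple ports, giving the implicit multicast the abstract describes.

```python
class SinkPort:
    """Illustrative sink port with its own port address table."""

    def __init__(self, address_table):
        self.address_table = set(address_table)  # destinations this port supports
        self.accepted = []

    def snoop(self, packet):
        """Accept the packet only if its destination is in our table."""
        if packet["dest"] in self.address_table:
            self.accepted.append(packet)

# Destination "B" appears in two tables, so two ports accept the packet.
ports = [SinkPort({"A", "B"}), SinkPort({"B", "C"}), SinkPort({"D"})]
packet = {"dest": "B", "payload": 42}
for port in ports:
    port.snoop(packet)
```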
  • Patent number: 7062636
    Abstract: Embodiments include various methods, apparatuses, and systems in which a processor includes an out-of-order issue engine and an in-order execution pipeline. For some embodiments, the issue engine may be remote from the execution pipeline, and execution resources may be many clock cycles away from the issue engine. The issue engine categorizes operations as either a speculative operation, which performs computations, or an architectural operation, which has the potential to fault or cause an exception. Potentially excepting operations may be decomposed into two separate micro-operations: a speculative micro-operation, which generates data results speculatively so that operations dependent on those results may be speculatively issued, and an architectural micro-operation, which signals the faulting condition for the excepting operation. A STORE operation becomes an architectural operation, and all previous faulting conditions may be guaranteed to have evaluated before a STORE is issued.
    Type: Grant
    Filed: September 19, 2002
    Date of Patent: June 13, 2006
    Assignee: Intel Corporation
    Inventors: Jeffery J. Baxter, Gary N. Hammond, Nazar A. Zaidi
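The decomposition step the abstract describes can be sketched roughly as below (function and field names are hypothetical): a potentially excepting operation yields a speculative micro-op, which produces data early so dependents can issue, plus an architectural micro-op, which reports any fault in program order.

```python
def decompose(op):
    """Split a potentially excepting operation into two micro-operations.

    Non-faulting operations need only the speculative micro-op.
    Illustrative sketch; not the patented implementation.
    """
    if op["may_fault"]:
        return [{"kind": "speculative", "name": op["name"]},
                {"kind": "architectural", "name": op["name"]}]
    return [{"kind": "speculative", "name": op["name"]}]

load_uops = decompose({"name": "LOAD", "may_fault": True})
add_uops = decompose({"name": "ADD", "may_fault": False})
```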
  • Patent number: 6920542
    Abstract: A compute engine's central processing unit is coupled to a coprocessor that includes application engines. The central processing unit initializes the coprocessor to perform an application, and the coprocessor initializes an application engine to perform the application. The application engine responds by carrying out the application. In performing some applications, the application engine accesses cache memory—obtaining a physical memory address that corresponds to a virtual address and providing the physical address to the cache memory. In some instances, the coprocessor employs multiple application engines to carry out an application. In one implementation, the application engines facilitate different network services, including but not limited to: 1) virtual private networking; 2) secure sockets layer processing; 3) web caching; 4) hypertext mark-up language compression; 5) virus checking; 6) firewall support; and 7) web switching.
    Type: Grant
    Filed: March 25, 2002
    Date of Patent: July 19, 2005
    Assignee: Juniper Networks, Inc.
    Inventors: Frederick Gruner, Robert Hathaway, Ramesh Panwar, Elango Ganesan, Nazar Zaidi
  • Patent number: 6901482
    Abstract: A system includes a plurality of processing clusters and a snoop controller. A first processing cluster in the plurality of processing clusters includes a first tier cache memory coupled to a second tier cache memory. The system employs a store-create operation to obtain sole ownership of a full cache line memory location for the first processing cluster, without retrieving the memory location from other processing clusters. The system issues the store-create operation for the memory location to the first tier cache. The first tier cache forwards a memory request including the store-create operation command to the second tier cache. The second tier cache determines whether the second tier cache has sole ownership of the memory location. If the second tier cache does not have sole ownership of the memory location, ownership of the memory location is relinquished by the other processing clusters with any ownership of the memory location.
    Type: Grant
    Filed: March 25, 2002
    Date of Patent: May 31, 2005
    Assignee: Juniper Networks, Inc.
    Inventors: Fred Gruner, David Hass, Robert Hathaway, Ramesh Panwar, Ricardo Ramirez, Nazar Zaidi
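A minimal sketch of the store-create idea (class and method names are hypothetical): when a cluster will overwrite a full cache line, it can claim sole ownership by having the other clusters relinquish their copies, without ever fetching the line's old contents.

```python
class Cluster:
    """Illustrative processing cluster tracking the lines it owns."""

    def __init__(self, name, lines):
        self.name = name
        self.lines = set(lines)

def store_create(requester, line, all_clusters):
    """Grant `requester` sole ownership of `line` without fetching its data."""
    others = [c for c in all_clusters if c is not requester]
    if line in requester.lines and all(line not in c.lines for c in others):
        return "already-sole-owner"
    for cluster in others:
        cluster.lines.discard(line)   # other clusters relinquish ownership
    requester.lines.add(line)
    return "claimed"

a = Cluster("A", lines={"0x40"})
b = Cluster("B", lines={"0x40", "0x80"})
result = store_create(a, "0x40", [a, b])
```

Note that no data moves between clusters here; only ownership state changes, which is what makes the operation cheap for full-line writes.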
  • Patent number: 6898673
    Abstract: A compute engine includes a central processing unit coupled to a coprocessor. The coprocessor includes a media access controller engine and a data transfer engine. The media access controller engine couples the compute engine to a communications network. The data transfer engine couples the media access controller engine to a set of cache memory. In further embodiments, a compute engine includes two media access controller engines. A reception media access controller engine receives data from the communications network. A transmission media access controller engine transmits data to the communications network. The compute engine also includes two data transfer engines. A streaming output engine stores network data from the reception media access controller engine in cache memory. A streaming input engine transfers data from cache memory to the transmission media access controller engine.
    Type: Grant
    Filed: March 25, 2002
    Date of Patent: May 24, 2005
    Assignee: Juniper Networks, Inc.
    Inventors: Frederick Gruner, Robert Hathaway, Ramesh Panwar, Elango Ganesan, Nazar Zaidi
  • Patent number: 6895477
    Abstract: A system includes a plurality of processing clusters and a snoop controller adapted to service memory requests. The snoop controller and each processing cluster are coupled to a snoop ring. A first processing cluster forwards a memory request to the snoop controller for access to a memory location. In response to the memory request, the snoop controller places a snoop request on the snoop ring—calling for a change in ownership of the requested memory location. A second processing cluster receives the snoop request on the snoop ring. The second processing cluster generates a response to the snoop request. If the second processing cluster owns the requested memory location, the second processing cluster modifies ownership status of the requested memory location.
    Type: Grant
    Filed: March 25, 2002
    Date of Patent: May 17, 2005
    Assignee: Juniper Networks, Inc.
    Inventors: David Hass, Frederick Gruner, Nazar Zaidi, Ramesh Panwar, Mark Vilas
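The snoop-ring exchange can be sketched like this (names are illustrative, not from the patent): the controller forwards a snoop request around the ring, and any cluster that owns the requested line responds by giving up its ownership.

```python
class Cluster:
    """Illustrative cluster that responds to snoop requests on the ring."""

    def __init__(self, name, owned):
        self.name = name
        self.owned = set(owned)

    def handle_snoop(self, line):
        """Give up the line if this cluster owns it."""
        if line in self.owned:
            self.owned.discard(line)
            return "ownership-transferred"
        return "not-owner"

def snoop_controller(line, ring):
    """Place a snoop request on the ring; collect each cluster's response."""
    return [cluster.handle_snoop(line) for cluster in ring]

ring = [Cluster("B", {"0x100"}), Cluster("C", set())]
responses = snoop_controller("0x100", ring)
```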
  • Patent number: 6892282
    Abstract: A multi-processor unit includes a set of processing clusters. Each processing cluster is coupled to a data ring and a snoop ring. The unit also includes a snoop controller adapted to process memory requests from each processing cluster. The data ring enables clusters to exchange requested information. The snoop ring is coupled to the snoop controller—enabling the snoop controller to forward each cluster's memory requests to the other clusters in the form of snoop requests.
    Type: Grant
    Filed: March 25, 2002
    Date of Patent: May 10, 2005
    Assignee: Juniper Networks, Inc.
    Inventors: David Hass, Mark Vilas, Frederick Gruner, Ramesh Panwar, Nazar Zaidi
  • Patent number: 6880049
    Abstract: A set of cache memory includes a set of first tier cache memory and a second tier cache memory. In the set of first tier cache memory each first tier cache memory is coupled to a compute engine in a set of compute engines. The second tier cache memory is coupled to each first tier cache memory in the set of first tier cache memory. The second tier cache memory includes a data ring interface and a snoop ring interface.
    Type: Grant
    Filed: March 25, 2002
    Date of Patent: April 12, 2005
    Assignee: Juniper Networks, Inc.
    Inventors: Fred Gruner, David Hass, Ramesh Panwar, Nazar Zaidi
  • Patent number: 6839808
    Abstract: A multi-processor includes multiple processing clusters for performing assigned applications. Each cluster includes a set of compute engines, with each compute engine coupled to a set of cache memory. A compute engine includes a central processing unit and a coprocessor with a set of application engines. The central processing unit and coprocessor are coupled to the compute engine's associated cache memory. The sets of cache memory within a cluster are also coupled to one another.
    Type: Grant
    Filed: July 6, 2001
    Date of Patent: January 4, 2005
    Assignee: Juniper Networks, Inc.
    Inventors: Fred Gruner, David Hass, Robert Hathaway, Ramesh Panwar, Ricardo Ramirez, Nazar Zaidi
  • Patent number: 6745289
    Abstract: A system for processing data includes a first set of cache memory and a second set of cache memory that are each coupled to a main memory. A compute engine coupled to the first set of cache memory transfers data from a communications medium into the first set of cache memory. The system transfers the data from the first set of cache memory to the second set of cache memory, in response to a request for the data from a compute engine coupled to the second set of cache memory. Data is transferred between the sets of cache memory without accessing main memory, regardless of whether the data has been modified. The data is also transferred directly between sets of cache memory when the data is exclusively owned by a set of cache memory or shared by sets of cache memory.
    Type: Grant
    Filed: March 25, 2002
    Date of Patent: June 1, 2004
    Assignee: Juniper Networks, Inc.
    Inventors: Frederick Gruner, Elango Ganesan, Nazar Zaidi, Ramesh Panwar
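The transfer path the abstract describes can be sketched as a simple read routine (all names are hypothetical): a miss is serviced directly from a peer cache whenever one holds the line, and main memory is consulted only when no cache does.

```python
def read(requesting_cache, address, caches, main_memory):
    """Service a read, preferring a direct cache-to-cache transfer.

    The peer caches are checked first regardless of whether the line
    has been modified; main memory is the last resort. Sketch only.
    """
    for cache in caches:
        if cache is not requesting_cache and address in cache:
            requesting_cache[address] = cache[address]   # direct transfer
            return "cache-to-cache"
    requesting_cache[address] = main_memory[address]
    return "main-memory"

first, second = {"0x10": b"net-data"}, {}
memory = {"0x10": b"stale", "0x20": b"other"}
source = read(second, "0x10", [first, second], memory)
```

Note that the requester receives the peer's copy (`b"net-data"`), not the stale value in main memory, mirroring the abstract's point that the transfer happens without a main-memory access.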
  • Publication number: 20040103248
    Abstract: An advanced telecommunications processor comprises a plurality of multithreaded processor cores each having a data cache and instruction cache. A data switch interconnect is coupled to each of the processor cores and configured to pass information among the processor cores. A messaging network is coupled to each of the processor cores and a plurality of communication ports. In one aspect of the invention, the data switch interconnect is coupled to each of the processor cores by its respective data cache, and the messaging network is coupled to each of the processor cores by its respective instruction cache. Advantages of the invention include the ability to provide high bandwidth communications between computer systems and memory in an efficient and cost-effective manner.
    Type: Application
    Filed: October 8, 2003
    Publication date: May 27, 2004
    Inventors: David T. Hass, Nazar A. Zaidi, Abbas Rashid
  • Publication number: 20040059898
    Abstract: Various methods, apparatuses, and systems in which a processor includes an issue engine and an in-order execution pipeline. The issue engine categorizes operations as either a speculative operation, which performs computations, or an architectural operation, which has the potential to fault or cause an exception. Each architectural operation issues with an associated architectural micro-operation. A first micro-operation checks whether a first speculative operation is dependent upon an intervening first architectural operation. The in-order execution pipeline executes the speculative operation, the architectural operation, and the associated architectural micro-operations.
    Type: Application
    Filed: September 19, 2002
    Publication date: March 25, 2004
    Inventors: Jeffery J. Baxter, Gary N. Hammond, Nazar A. Zaidi
  • Publication number: 20030154346
    Abstract: A system for processing data includes a first set of cache memory and a second set of cache memory that are each coupled to a main memory. A compute engine coupled to the first set of cache memory transfers data from a communications medium into the first set of cache memory. The system transfers the data from the first set of cache memory to the second set of cache memory, in response to a request for the data from a compute engine coupled to the second set of cache memory. Data is transferred between the sets of cache memory without accessing main memory, regardless of whether the data has been modified. The data is also transferred directly between sets of cache memory when the data is exclusively owned by a set of cache memory or shared by sets of cache memory.
    Type: Application
    Filed: March 25, 2002
    Publication date: August 14, 2003
    Inventors: Frederick Gruner, Elango Ganesan, Nazar Zaidi, Ramesh Panwar
  • Publication number: 20030126233
    Abstract: A network content service apparatus includes a set of compute elements adapted to perform a set of network services; and a switching fabric coupling compute elements in said set of compute elements. The set of network services includes firewall protection, Network Address Translation, Internet Protocol forwarding, bandwidth management, Secure Sockets Layer operations, Web caching, Web switching, and virtual private networking. Code operable on the compute elements enables the network services, and the compute elements are provided on blades which further include at least one input/output port.
    Type: Application
    Filed: July 8, 2002
    Publication date: July 3, 2003
    Inventors: Mark Bryers, Elango Ganesan, Frederick Gruner, David Hass, Robert Hathaway, Ramesh Panwar, Ricardo Ramirez, Abbas Rashid, Mark Vilas, Nazar Zaidi, Yen Lee, Chau Anh Ngoc Nguyen, John Phillips, Yuhong Andy Zhou, Gregory G. Spurrier, Sankar Ramanoorthi, Michael Freed
  • Publication number: 20030120876
    Abstract: A multi-processor unit includes a set of processing clusters. Each processing cluster is coupled to a data ring and a snoop ring. The unit also includes a snoop controller adapted to process memory requests from each processing cluster. The data ring enables clusters to exchange requested information. The snoop ring is coupled to the snoop controller—enabling the snoop controller to forward each cluster's memory requests to the other clusters in the form of snoop requests.
    Type: Application
    Filed: March 25, 2002
    Publication date: June 26, 2003
    Inventors: David Hass, Mark Vilas, Fred Gruner, Ramesh Panwar, Nazar Zaidi
  • Patent number: 6581154
    Abstract: A microarchitecture for dynamically expanding and executing microcode routines is provided. According to one aspect of the present invention, a mechanism expands a generic instruction into specific instructions at run-time, which may be employed to execute a computer program. These generic instructions use a special class of micro-ops (uops), called “super-uops” (or “Suops”), which are expanded into a variable number of regular (i.e., simple) uops. In one embodiment, the computer of the present invention utilizes a two-level decode scheme. The first-level decoder converts macro-instructions into either simple uops or one or more Suops, which represent a sequence of one or more simple uops. A second-level decoder is responsible for converting the Suops into the appropriate uop sequence based upon an indicator associated with the macro-instruction. An execution unit within the computer then executes the flow of uops generated by the first and second decoding units.
    Type: Grant
    Filed: December 31, 1999
    Date of Patent: June 17, 2003
    Assignee: Intel Corporation
    Inventor: Nazar Zaidi
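The two-level decode scheme can be sketched with two lookup tables (the table contents and uop names here are invented for illustration): the first level maps a macro-instruction to simple uops or super-uops, and the second level expands each super-uop into its variable-length sequence of simple uops.

```python
# First-level decode: macro-instruction -> simple uops and/or super-uops.
FIRST_LEVEL = {
    "ADD":  ["uop_add"],        # simple case: one regular uop
    "MOVS": ["Suop_string"],    # complex case: emits a super-uop
}

# Second-level decode: super-uop -> sequence of simple uops.
SECOND_LEVEL = {
    "Suop_string": ["uop_load", "uop_store", "uop_inc_ptr"],
}

def decode(macro):
    """Expand a macro-instruction into its final flow of simple uops."""
    flow = []
    for uop in FIRST_LEVEL[macro]:
        flow.extend(SECOND_LEVEL.get(uop, [uop]))  # expand super-uops only
    return flow
```

Keeping the expansion in a second table lets one generic first-level entry cover sequences whose length is only known at run time, which is the flexibility the abstract claims for super-uops.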
  • Patent number: 6574689
    Abstract: A queuing system that avoids live-locking is provided. A representative implementation of this system 1) selects a first queue item pointed to by a rotating pointer if the first queue item is ready to be serviced, 2) selects a second queue item pointed to by a find-first-pointer if the first queue item is not ready to be serviced, and 3) updates the rotating pointer so that the rotating pointer points to a third queue item.
    Type: Grant
    Filed: March 8, 2000
    Date of Patent: June 3, 2003
    Assignee: Intel Corporation
    Inventors: Nazar A. Zaidi, Jeen Miin
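The three-step selection rule in the abstract can be sketched directly (function and variable names are hypothetical): prefer the slot under the rotating pointer, fall back to the first ready slot, and always advance the rotating pointer so no slot is starved.

```python
def select_next(ready, rot):
    """Pick the next queue slot to service.

    ready: list of booleans, one per queue slot.
    rot:   index of the rotating pointer.
    Returns (index serviced or None, updated rotating pointer).
    """
    n = len(ready)
    if ready[rot]:                       # 1) rotating pointer's item is ready
        chosen = rot
    else:                                # 2) fall back to the find-first pointer
        chosen = next((i for i in range(n) if ready[i]), None)
    new_rot = (rot + 1) % n              # 3) advance the rotating pointer
    return chosen, new_rot

# Slot 0 is not ready, so the find-first rule services slot 1,
# while the rotating pointer still moves on to slot 1 for next time.
chosen, rot = select_next([False, True, True], rot=0)
```

Because the rotating pointer advances even when its slot is skipped, every slot is periodically given first priority, which is how the scheme avoids live-lock.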
  • Publication number: 20030069973
    Abstract: An architecture for controlling a multiprocessing system to provide at least one network service to subscriber data packets transmitted in the system using a plurality of compute elements, comprising a management compute element including service set-up information for at least one service and at least one processing compute element applying said at least one network service to said data packets and communicating service set-up information with the management compute element in order to perform service specific operations on data packets. In a further embodiment, a method of controlling a processing system including a plurality of processors is disclosed.
    Type: Application
    Filed: July 8, 2002
    Publication date: April 10, 2003
    Inventors: Elango Ganesan, Ramesh Panwar, Yen Lee, Chau Anh Nguyen, John Phillips, Andy Yuhong Zhou, Greg G. Spurrier, Sankar Ramanoorthi, Michael Freed, Mark Bryers, Nazar Zaidi
  • Publication number: 20030043829
    Abstract: A cross-bar switch includes a set of input ports for receiving data packets and a set of sink ports for transmitting the received packets to identified targets. A set of data rings couples the input ports to the sink ports. Each sink port utilizes the set of data rings to simultaneously accept multiple data packets targeted to the same destination—creating a non-blocking cross-bar switch. Sink ports are also each capable of supporting multiple targets—providing the cross-bar switch with implicit multicast capability.
    Type: Application
    Filed: December 21, 2001
    Publication date: March 6, 2003
    Inventors: Abbas Rashid, Nazar Zaidi