Patents by Inventor David Hass

David Hass has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Publication number: 20050230305
    Abstract: A method of producing mixed matrix composite (MMC) membranes with a good balance of permeability and selectivity. MMC membranes are particularly needed for separating fluids in oxygen/nitrogen separation processes, processes for removing carbon dioxide from hydrocarbons or nitrogen, and the separation of hydrogen from petrochemical and oil refining streams. MMC membranes made using washed sieve material, such as washed SSZ-13 sieve material, provide surprisingly good permeability and selectivity. The method of the current invention produces a fluid separation membrane by providing a polymer and a washed molecular sieve material, then synthesizing a concentrated suspension of a solvent, the polymer, and the washed molecular sieve material. The concentrated suspension is used to form the fluid separation membrane of the desired configuration. Membranes of the current invention can be formed into hollow fiber membranes that are particularly suitable for high trans-membrane pressure applications.
    Type: Application
    Filed: March 28, 2005
    Publication date: October 20, 2005
    Inventors: Sudhir Kulkarni, Okan Ekiner, David Hasse
  • Patent number: 6901482
    Abstract: A system includes a plurality of processing clusters and a snoop controller. A first processing cluster in the plurality of processing clusters includes a first tier cache memory coupled to a second tier cache memory. The system employs a store-create operation to obtain sole ownership of a full cache line memory location for the first processing cluster, without retrieving the memory location from other processing clusters. The system issues the store-create operation for the memory location to the first tier cache. The first tier cache forwards a memory request including the store-create operation command to the second tier cache. The second tier cache determines whether it has sole ownership of the memory location. If the second tier cache does not have sole ownership, any other processing cluster that holds ownership of the memory location relinquishes it.
    Type: Grant
    Filed: March 25, 2002
    Date of Patent: May 31, 2005
    Assignee: Juniper Networks, Inc.
    Inventors: Fred Gruner, David Hass, Robert Hathaway, Ramesh Panwar, Ricardo Ramirez, Nazar Zaidi
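    For readers less familiar with coherence protocols, a minimal Python sketch of the store-create flow this abstract describes follows; the class names (FirstTierCache, SecondTierCache, SnoopController) and all behavior shown are illustrative assumptions, not details taken from the patent.
      from enum import Enum, auto

      class Ownership(Enum):
          NONE = auto()
          SHARED = auto()
          SOLE = auto()

      class SecondTierCache:
          def __init__(self, cluster_id, snoop_controller):
              self.cluster_id = cluster_id
              self.snoop = snoop_controller
              self.lines = {}  # address -> Ownership

          def store_create(self, address):
              # Obtain sole ownership of a full line without fetching its data.
              if self.lines.get(address) != Ownership.SOLE:
                  self.snoop.invalidate_others(address, requester=self.cluster_id)
                  self.lines[address] = Ownership.SOLE
              return self.lines[address]

      class FirstTierCache:
          def __init__(self, second_tier):
              self.second_tier = second_tier

          def store_create(self, address):
              # The first tier forwards the store-create command to the second tier.
              return self.second_tier.store_create(address)

      class SnoopController:
          def __init__(self):
              self.clusters = []  # every SecondTierCache in the system

          def invalidate_others(self, address, requester):
              for cache in self.clusters:
                  if cache.cluster_id != requester:
                      cache.lines.pop(address, None)  # other clusters relinquish the line

      ctrl = SnoopController()
      l2_a, l2_b = SecondTierCache(0, ctrl), SecondTierCache(1, ctrl)
      ctrl.clusters = [l2_a, l2_b]
      l2_b.lines[0x80] = Ownership.SHARED
      print(FirstTierCache(l2_a).store_create(0x80), l2_b.lines)  # Ownership.SOLE {}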
  • Patent number: 6895477
    Abstract: A system includes a plurality of processing clusters and a snoop controller adapted to service memory requests. The snoop controller and each processing cluster are coupled to a snoop ring. A first processing cluster forwards a memory request to the snoop controller for access to a memory location. In response to the memory request, the snoop controller places a snoop request on the snoop ring—calling for a change in ownership of the requested memory location. A second processing cluster receives the snoop request on the snoop ring. The second processing cluster generates a response to the snoop request. If the second processing cluster owns the requested memory location, the second processing cluster modifies ownership status of the requested memory location.
    Type: Grant
    Filed: March 25, 2002
    Date of Patent: May 17, 2005
    Assignee: Juniper Networks, Inc.
    Inventors: David Hass, Frederick Gruner, Nazar Zaidi, Ramesh Panwar, Mark Vilas
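    A hedged Python sketch of the snoop-ring exchange outlined above: the controller forwards one snoop request to every other cluster, each cluster answers, and the owning cluster changes its ownership status. Names and data structures are invented for illustration only.
      class Cluster:
          def __init__(self, name):
              self.name = name
              self.owned = set()  # addresses this cluster currently owns

          def handle_snoop(self, address):
              # Each cluster generates a response; the owner gives the line up.
              if address in self.owned:
                  self.owned.discard(address)
                  return (self.name, "owner: ownership modified")
              return (self.name, "not owner")

      def snoop_ring_request(clusters, requester, address):
          # The snoop controller forwards the request around the ring and
          # collects a response from every cluster except the requester.
          return [c.handle_snoop(address) for c in clusters if c is not requester]

      a, b, c = Cluster("A"), Cluster("B"), Cluster("C")
      b.owned.add(0x40)
      print(snoop_ring_request([a, b, c], requester=a, address=0x40))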
  • Patent number: 6892282
    Abstract: A multi-processor unit includes a set of processing clusters. Each processing cluster is coupled to a data ring and a snoop ring. The unit also includes a snoop controller adapted to process memory requests from each processing cluster. The data ring enables clusters to exchange requested information. The snoop ring is coupled to the snoop controller—enabling the snoop controller to forward each cluster's memory requests to the other clusters in the form of snoop requests.
    Type: Grant
    Filed: March 25, 2002
    Date of Patent: May 10, 2005
    Assignee: Juniper Networks, Inc.
    Inventors: David Hass, Mark Vilas, Frederick Gruner, Ramesh Panwar, Nazar Zaidi
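    The dual-ring arrangement can be pictured with a toy Python model; Ring, Cluster, and the messages are placeholder names, not terms defined by the patent.
      class Ring:
          def __init__(self):
              self.stations = []

          def broadcast(self, message, sender=None):
              # Every station on the ring except the sender sees the message.
              return [s.receive(message) for s in self.stations if s is not sender]

      class Cluster:
          def __init__(self, name, data_ring, snoop_ring):
              self.name = name
              data_ring.stations.append(self)   # exchanges requested data here
              snoop_ring.stations.append(self)  # receives snoop requests here

          def receive(self, message):
              return f"{self.name} saw {message}"

      data_ring, snoop_ring = Ring(), Ring()
      clusters = [Cluster(n, data_ring, snoop_ring) for n in "ABCD"]
      print(snoop_ring.broadcast("snoop request for 0x100"))            # from the snoop controller
      print(data_ring.broadcast("data for 0x100", sender=clusters[0]))  # cluster-to-cluster data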
  • Publication number: 20050086361
    Abstract: An advanced processor comprises a plurality of multithreaded processor cores each having a data cache and instruction cache. A data switch interconnect is coupled to each of the processor cores and configured to pass information among the processor cores. A messaging network is coupled to each of the processor cores and a plurality of communication ports. In one aspect of an embodiment of the invention, the data switch interconnect is coupled to each of the processor cores by its respective data cache, and the messaging network is coupled to each of the processor cores by its respective message station. Advantages of the invention include the ability to provide high bandwidth communications between computer systems and memory in an efficient and cost-effective manner.
    Type: Application
    Filed: August 31, 2004
    Publication date: April 21, 2005
    Inventors: Abbas Rashid, David Hass
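    Several of the applications below share this abstract, so one rough Python sketch is given here: each core carries a data cache, an instruction cache, and a message station; the data switch interconnect attaches to the data caches, and the messaging network attaches to the message stations and the communication ports. All identifiers are assumptions made for illustration.
      class Core:
          def __init__(self, core_id):
              self.data_cache = f"dcache{core_id}"
              self.instruction_cache = f"icache{core_id}"
              self.message_station = f"msgsta{core_id}"

      class DataSwitchInterconnect:
          def __init__(self, cores):
              # Coupled to each core through that core's data cache.
              self.endpoints = [core.data_cache for core in cores]

      class MessagingNetwork:
          def __init__(self, cores, ports):
              # Coupled to each core's message station and to the communication ports.
              self.endpoints = [core.message_station for core in cores] + list(ports)

      cores = [Core(i) for i in range(8)]
      dsi = DataSwitchInterconnect(cores)
      net = MessagingNetwork(cores, ports=["port0", "port1", "port2"])
      print(len(dsi.endpoints), len(net.endpoints))  # 8 cache endpoints, 11 messaging endpoints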
  • Patent number: 6880049
    Abstract: A set of cache memory includes a set of first tier cache memory and a second tier cache memory. In the set of first tier cache memory each first tier cache memory is coupled to a compute engine in a set of compute engines. The second tier cache memory is coupled to each first tier cache memory in the set of first tier cache memory. The second tier cache memory includes a data ring interface and a snoop ring interface.
    Type: Grant
    Filed: March 25, 2002
    Date of Patent: April 12, 2005
    Assignee: Juniper Networks, Inc.
    Inventors: Fred Gruner, David Hass, Ramesh Panwar, Nazar Zaidi
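    A short Python sketch of the cache set this abstract enumerates, with one first-tier cache per compute engine behind a shared second-tier cache that exposes data-ring and snoop-ring interfaces; the structure is a guess at a plausible reading, not the patented design.
      class SecondTierCache:
          def __init__(self):
              self.first_tier = []                 # every first-tier cache behind this one
              self.data_ring_interface = "data-ring port"
              self.snoop_ring_interface = "snoop-ring port"

      class FirstTierCache:
          def __init__(self, compute_engine, second_tier):
              self.compute_engine = compute_engine
              second_tier.first_tier.append(self)  # couple this cache to the shared second tier

      shared_l2 = SecondTierCache()
      l1_caches = [FirstTierCache(engine, shared_l2) for engine in ("ce0", "ce1", "ce2", "ce3")]
      print(len(shared_l2.first_tier))             # 4 first-tier caches share one second tier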
  • Publication number: 20050055503
    Abstract: An advanced processor comprises a plurality of multithreaded processor cores each having a data cache and instruction cache. A data switch interconnect is coupled to each of the processor cores and configured to pass information among the processor cores. A messaging network is coupled to each of the processor cores and a plurality of communication ports. In one aspect of an embodiment of the invention, the data switch interconnect is coupled to each of the processor cores by its respective data cache, and the messaging network is coupled to each of the processor cores by its respective message station. Advantages of the invention include the ability to provide high bandwidth communications between computer systems and memory in an efficient and cost-effective manner.
    Type: Application
    Filed: July 23, 2004
    Publication date: March 10, 2005
    Inventor: David Hass
  • Publication number: 20050055504
    Abstract: An advanced processor comprises a plurality of multithreaded processor cores each having a data cache and instruction cache. A data switch interconnect is coupled to each of the processor cores and configured to pass information among the processor cores. A messaging network is coupled to each of the processor cores and a plurality of communication ports. In one aspect of an embodiment of the invention, the data switch interconnect is coupled to each of the processor cores by its respective data cache, and the messaging network is coupled to each of the processor cores by its respective message station. Advantages of the invention include the ability to provide high bandwidth communications between computer systems and memory in an efficient and cost-effective manner.
    Type: Application
    Filed: July 23, 2004
    Publication date: March 10, 2005
    Inventors: David Hass, Abbas Rashid
  • Publication number: 20050055502
    Abstract: An advanced processor comprises a plurality of multithreaded processor cores each having a data cache and instruction cache. A data switch interconnect is coupled to each of the processor cores and configured to pass information among the processor cores. A messaging network is coupled to each of the processor cores and a plurality of communication ports. In one aspect of an embodiment of the invention, the data switch interconnect is coupled to each of the processor cores by its respective data cache, and the messaging network is coupled to each of the processor cores by its respective message station. Advantages of the invention include the ability to provide high bandwidth communications between computer systems and memory in an efficient and cost-effective manner.
    Type: Application
    Filed: July 23, 2004
    Publication date: March 10, 2005
    Inventors: David Hass, Rohini Kaza
  • Publication number: 20050055510
    Abstract: An advanced processor comprises a plurality of multithreaded processor cores each having a data cache and instruction cache. A data switch interconnect is coupled to each of the processor cores and configured to pass information among the processor cores. A messaging network is coupled to each of the processor cores and a plurality of communication ports. In one aspect of an embodiment of the invention, the data switch interconnect is coupled to each of the processor cores by its respective data cache, and the messaging network is coupled to each of the processor cores by its respective message station. Advantages of the invention include the ability to provide high bandwidth communications between computer systems and memory in an efficient and cost-effective manner.
    Type: Application
    Filed: July 23, 2004
    Publication date: March 10, 2005
    Inventors: David Hass, Basab Mukherjee
  • Publication number: 20050055540
    Abstract: An advanced processor comprises a plurality of multithreaded processor cores each having a data cache and instruction cache. A data switch interconnect is coupled to each of the processor cores and configured to pass information among the processor cores. A messaging network is coupled to each of the processor cores and a plurality of communication ports. In one aspect of an embodiment of the invention, the data switch interconnect is coupled to each of the processor cores by its respective data cache, and the messaging network is coupled to each of the processor cores by its respective message station. Advantages of the invention include the ability to provide high bandwidth communications between computer systems and memory in an efficient and cost-effective manner.
    Type: Application
    Filed: July 23, 2004
    Publication date: March 10, 2005
    Inventors: David Hass, Abbas Rashid
  • Patent number: 6862669
    Abstract: An apparatus includes a compute engine coupled to a first tier cache memory including a data array. The first tier cache receives memory access requests from the compute engine. A second tier cache memory is coupled to the first tier cache to receive memory access requests for memory locations not owned by the first tier cache. To avoid stale data storage, the first tier cache does not load the data array with data returned by the second tier cache under the following condition—the second tier cache returns the data in response to a cacheable load operation from a memory location after the compute engine issues a subsequent store operation to the same memory location.
    Type: Grant
    Filed: March 25, 2002
    Date of Patent: March 1, 2005
    Assignee: Juniper Networks, Inc.
    Inventors: Fred Gruner, David Hass, Robert Hathaway
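    The stale-data rule in this abstract can be illustrated with a small Python model: the first-tier cache tracks stores issued after a load and discards the second-tier fill for any address written in the meantime. The bookkeeping shown is an assumption for clarity, not the mechanism claimed in the patent.
      class FirstTierCache:
          def __init__(self):
              self.data = {}                  # the data array
              self.stored_since_load = set()  # locations stored to after the load was issued

          def issue_load(self, address):
              self.stored_since_load.discard(address)

          def issue_store(self, address, value):
              self.data[address] = value
              self.stored_since_load.add(address)

          def fill_from_second_tier(self, address, value):
              # Skip the fill when a newer store already wrote this location.
              if address not in self.stored_since_load:
                  self.data[address] = value

      l1 = FirstTierCache()
      l1.issue_load(0x10)                    # cacheable load goes out to the second tier
      l1.issue_store(0x10, "new")            # subsequent store to the same location
      l1.fill_from_second_tier(0x10, "old")  # returned data must not overwrite the store
      print(l1.data[0x10])                   # "new"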
  • Publication number: 20050044308
    Abstract: An advanced processor comprises a plurality of multithreaded processor cores each having a data cache and instruction cache. A data switch interconnect is coupled to each of the processor cores and configured to pass information among the processor cores. A messaging network is coupled to each of the processor cores and a plurality of communication ports. In one aspect of an embodiment of the invention, the data switch interconnect is coupled to each of the processor cores by its respective data cache, and the messaging network is coupled to each of the processor cores by its respective message station. Advantages of the invention include the ability to provide high bandwidth communications between computer systems and memory in an efficient and cost-effective manner.
    Type: Application
    Filed: August 31, 2004
    Publication date: February 24, 2005
    Inventors: Abbas Rashid, David Hass
  • Publication number: 20050041666
    Abstract: An advanced processor comprises a plurality of multithreaded processor cores each having a data cache and instruction cache. A data switch interconnect is coupled to each of the processor cores and configured to pass information among the processor cores. A messaging network is coupled to each of the processor cores and a plurality of communication ports. In one aspect of an embodiment of the invention, the data switch interconnect is coupled to each of the processor cores by its respective data cache, and the messaging network is coupled to each of the processor cores by its respective message station. Advantages of the invention include the ability to provide high bandwidth communications between computer systems and memory in an efficient and cost-effective manner.
    Type: Application
    Filed: August 31, 2004
    Publication date: February 24, 2005
    Inventor: David Hass
  • Publication number: 20050044324
    Abstract: An advanced processor comprises a plurality of multithreaded processor cores each having a data cache and instruction cache. A data switch interconnect is coupled to each of the processor cores and configured to pass information among the processor cores. A messaging network is coupled to each of the processor cores and a plurality of communication ports. In one aspect of an embodiment of the invention, the data switch interconnect is coupled to each of the processor cores by its respective data cache, and the messaging network is coupled to each of the processor cores by its respective message station. Advantages of the invention include the ability to provide high bandwidth communications between computer systems and memory in an efficient and cost-effective manner.
    Type: Application
    Filed: August 31, 2004
    Publication date: February 24, 2005
    Inventors: Abbas Rashid, David Hass
  • Publication number: 20050041651
    Abstract: An advanced processor comprises a plurality of multithreaded processor cores each having a data cache and instruction cache. A data switch interconnect is coupled to each of the processor cores and configured to pass information among the processor cores. A messaging network is coupled to each of the processor cores and a plurality of communication ports. In one aspect of an embodiment of the invention, the data switch interconnect is coupled to each of the processor cores by its respective data cache, and the messaging network is coupled to each of the processor cores by its respective message station. Advantages of the invention include the ability to provide high bandwidth communications between computer systems and memory in an efficient and cost-effective manner.
    Type: Application
    Filed: August 31, 2004
    Publication date: February 24, 2005
    Inventors: David Hass, Abbas Rashid
  • Publication number: 20050044323
    Abstract: An advanced processor comprises a plurality of multithreaded processor cores each having a data cache and instruction cache. A data switch interconnect is coupled to each of the processor cores and configured to pass information among the processor cores. A messaging network is coupled to each of the processor cores and a plurality of communication ports. In one aspect of an embodiment of the invention, the data switch interconnect is coupled to each of the processor cores by its respective data cache, and the messaging network is coupled to each of the processor cores by its respective message station. Advantages of the invention include the ability to provide high bandwidth communications between computer systems and memory in an efficient and cost-effective manner.
    Type: Application
    Filed: August 31, 2004
    Publication date: February 24, 2005
    Inventor: David Hass
  • Publication number: 20050033889
    Abstract: An advanced processor comprises a plurality of multithreaded processor cores each having a data cache and instruction cache. A data switch interconnect is coupled to each of the processor cores and configured to pass information among the processor cores. A messaging network is coupled to each of the processor cores and a plurality of communication ports. In one aspect of an embodiment of the invention, the data switch interconnect is coupled to each of the processor cores by its respective data cache, and the messaging network is coupled to each of the processor cores by its respective message station. Advantages of the invention include the ability to provide high bandwidth communications between computer systems and memory in an efficient and cost-effective manner.
    Type: Application
    Filed: August 31, 2004
    Publication date: February 10, 2005
    Inventors: David Hass, Abbas Rashid
  • Publication number: 20050027793
    Abstract: An advanced processor comprises a plurality of multithreaded processor cores each having a data cache and instruction cache. A data switch interconnect is coupled to each of the processor cores and configured to pass information among the processor cores. A messaging network is coupled to each of the processor cores and a plurality of communication ports. In one aspect of an embodiment of the invention, the data switch interconnect is coupled to each of the processor cores by its respective data cache, and the messaging network is coupled to each of the processor cores by its respective message station. Advantages of the invention include the ability to provide high bandwidth communications between computer systems and memory in an efficient and cost-effective manner.
    Type: Application
    Filed: August 31, 2004
    Publication date: February 3, 2005
    Inventor: David Hass
  • Patent number: 6839808
    Abstract: A multi-processor includes multiple processing clusters for performing assigned applications. Each cluster includes a set of compute engines, with each compute engine coupled to a set of cache memory. A compute engine includes a central processing unit and a coprocessor with a set of application engines. The central processing unit and coprocessor are coupled to the compute engine's associated cache memory. The sets of cache memory within a cluster are also coupled to one another.
    Type: Grant
    Filed: July 6, 2001
    Date of Patent: January 4, 2005
    Assignee: Juniper Networks, Inc.
    Inventors: Fred Gruner, David Hass, Robert Hathaway, Ramesh Panwar, Ricardo Ramirez, Nazar Zaidi
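    Finally, a structural Python sketch of the cluster hierarchy in this last abstract: compute engines that pair a CPU with a coprocessor carrying application engines, each engine coupled to its cache, and the caches within a cluster coupled to one another. All names are illustrative assumptions.
      class ComputeEngine:
          def __init__(self, name, application_engines):
              self.cpu = f"{name}-cpu"
              self.coprocessor = {"application_engines": application_engines}
              self.cache = f"{name}-cache"     # both CPU and coprocessor are coupled to this

      class Cluster:
          def __init__(self, engines):
              self.engines = engines
              # the caches within a cluster are also coupled to one another
              self.cache_links = [(a.cache, b.cache)
                                  for i, a in enumerate(engines)
                                  for b in engines[i + 1:]]

      cluster = Cluster([ComputeEngine(f"ce{i}", ["crypto", "dma"]) for i in range(4)])
      print(len(cluster.cache_links))          # 6 pairwise couplings among 4 caches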