Patents by Inventor Joseph Chamdani

Joseph Chamdani has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Publication number: 20130268489
    Abstract: Embodiments of the present invention provide fine-grain concurrency control for transactions in the presence of database updates. During operation, each transaction is assigned a snapshot version number, or SVN. An SVN refers to a historical snapshot of the database that can be created periodically or on demand. Transactions are thus tied to a particular SVN, such as the one that was current when the transaction was created. Queries belonging to the transactions can access data that is consistent as of a point in time, for example corresponding to the latest SVN when the transaction was created. At various times, data from the database stored in a memory can be updated using the snapshot data corresponding to an SVN. When a transaction is committed, a snapshot of the database with a new SVN is created based on the data modified by the transaction, and the snapshot is synchronized to the memory.
    Type: Application
    Filed: June 3, 2013
    Publication date: October 10, 2013
    Inventors: Kapil Surlaker, Ravindran Krishnamurthy, Krishnan Meiyyappan, Alan Lee Beck, Hung Tran, Jeremy Branscome, Joseph Chamdani
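
The snapshot-versioning scheme in the abstract above lends itself to a small illustration. The Python sketch below is a toy model, not the patented implementation: the SnapshotStore and Transaction classes and their interfaces are invented, and the commit policy is deliberately simplistic. It only shows the core idea that each transaction is pinned to the SVN current at its creation and that each commit publishes a new snapshot under the next SVN.

```python
# Minimal sketch of SVN-pinned reads: illustrative only, not the patented design.

class SnapshotStore:
    def __init__(self):
        self.svn = 0                      # latest snapshot version number
        self.snapshots = {0: {}}          # SVN -> view of the data at that SVN

    def begin(self):
        """Start a transaction pinned to the latest SVN."""
        return Transaction(self, self.svn)

    def commit(self, writes):
        """Publish a new snapshot containing the transaction's modifications."""
        new_view = dict(self.snapshots[self.svn])
        new_view.update(writes)
        self.svn += 1
        self.snapshots[self.svn] = new_view
        return self.svn


class Transaction:
    def __init__(self, store, svn):
        self.store, self.svn = store, svn
        self.writes = {}

    def read(self, key):
        # Reads see data consistent as of this transaction's SVN,
        # regardless of commits made later by other transactions.
        return self.writes.get(key, self.store.snapshots[self.svn].get(key))

    def write(self, key, value):
        self.writes[key] = value

    def commit(self):
        return self.store.commit(self.writes)


store = SnapshotStore()
t1 = store.begin()
t1.write("x", 1)
t1.commit()                               # creates SVN 1
t2 = store.begin()                        # pinned to SVN 1
t3 = store.begin()
t3.write("x", 2)
t3.commit()                               # creates SVN 2
print(t2.read("x"))                       # 1: t2 still sees its SVN-1 snapshot
```
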
  • Patent number: 7952997
    Abstract: A scalable solution to managing congestion in a network is disclosed. In one implementation, such a solution comprises a means for managing traffic including at least one flow monitor and a plurality of flow control regulators that together manage congestion within a network. Each of the flow control regulators monitors traffic at a corresponding ingress point and determines a state of the ingress point corresponding to the traffic monitored at that ingress point. Each flow control regulator forwards the state (or information representative of the state) to the flow monitor. The flow monitor detects congestion based upon the states of the flow control regulators and, in the event of congestion, determines a target bandwidth for the ingress points. The flow monitor provides a control signal to at least one of the flow control regulators, and at least one of the flow control regulators controls flows at its corresponding ingress point based upon the control signal received from the flow monitor.
    Type: Grant
    Filed: May 18, 2006
    Date of Patent: May 31, 2011
    Assignee: MCDATA Corporation
    Inventors: Michael Corwin, Joseph Chamdani, Stephen Trevitt
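
As a rough illustration of the flow-monitor/flow-control-regulator arrangement described above, the following toy model polls per-ingress traffic state, detects congestion against a fixed egress capacity, and pushes an equal-share target bandwidth back to the regulators. The class names, the polling interface, and the equal-share policy are assumptions made for the example; the patent leaves the detection and allocation policies open.

```python
# Toy congestion-management model; names and policy are illustrative only.

class FlowControlRegulator:
    def __init__(self, ingress_id):
        self.ingress_id = ingress_id
        self.observed_rate = 0.0          # traffic measured at this ingress point
        self.target = float("inf")        # cap imposed by the flow monitor

    def report_state(self):
        return self.observed_rate

    def apply_control(self, target):
        self.target = target

    def admit(self, offered_rate):
        # The regulator limits the flow at its ingress point to the target.
        return min(offered_rate, self.target)


class FlowMonitor:
    def __init__(self, regulators, egress_capacity):
        self.regulators = regulators
        self.capacity = egress_capacity

    def poll_and_control(self):
        states = [r.report_state() for r in self.regulators]
        if sum(states) > self.capacity:   # congestion detected
            # Simple equal-share policy, chosen only for the example.
            target = self.capacity / len(self.regulators)
            for r in self.regulators:
                r.apply_control(target)


regs = [FlowControlRegulator(i) for i in range(4)]
monitor = FlowMonitor(regs, egress_capacity=10.0)
for r in regs:
    r.observed_rate = 4.0                 # 16 units offered, 10 available
monitor.poll_and_control()
print([r.admit(4.0) for r in regs])       # each ingress capped at 2.5
```
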
  • Publication number: 20070268825
    Abstract: A scalable solution to managing fairness in a congested hierarchical switched system is disclosed. The solution comprises a means for managing fairness during congestion in a hierarchical switched system comprising a first level arbitration system and a second level arbitration system of a stage. The first level arbitration system comprises a plurality of arbitration segments that arbitrate between information flows received from at least one ingress point based upon weights associated with those information flows (or the ingress points). Each arbitration segment determines an aggregate weight from each active ingress point providing the information flows to the segment and forwards a selected information flow along with the aggregate weight (in-band or out-of-band) to the second level arbitration system.
    Type: Application
    Filed: May 19, 2006
    Publication date: November 22, 2007
    Inventors: Michael Corwin, Joseph Chamdani, Stephen Trevitt
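
The two-level arbitration idea above can be sketched as follows. In this illustrative model (the data structures and the use of randomized weighted selection are assumptions, not taken from the patent), each first-level segment picks one of its active flows and forwards it together with the aggregate weight of its active ingress points, so the second-level arbiter can stay fair to segments fronting many or heavily weighted ingress points.

```python
# Illustrative two-level weighted arbitration; selection policy is invented.
import random

class ArbitrationSegment:
    def __init__(self, flows):
        # flows: {ingress_id: (weight, queue_of_frames)}
        self.flows = flows

    def arbitrate(self):
        active = {i: (w, q) for i, (w, q) in self.flows.items() if q}
        if not active:
            return None, 0
        aggregate = sum(w for w, _ in active.values())
        # Weighted choice among the active ingress points.
        ingress = random.choices(list(active),
                                 weights=[w for w, _ in active.values()])[0]
        frame = active[ingress][1].pop(0)
        return frame, aggregate            # the aggregate weight travels with it


class SecondLevelArbiter:
    def arbitrate(self, candidates):
        # candidates: list of (frame, aggregate_weight) from the segments
        live = [(f, w) for f, w in candidates if f is not None]
        if not live:
            return None
        return random.choices([f for f, _ in live],
                              weights=[w for _, w in live])[0]


seg_a = ArbitrationSegment({0: (1, ["a0"]), 1: (1, ["a1"])})   # aggregate weight 2
seg_b = ArbitrationSegment({2: (1, ["b0"])})                   # aggregate weight 1
winner = SecondLevelArbiter().arbitrate([seg_a.arbitrate(), seg_b.arbitrate()])
print(winner)
```
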
  • Publication number: 20070268829
    Abstract: A scalable solution to managing congestion in a network is disclosed. In one implementation, such a solution comprises a means for managing traffic including at least one flow monitor and a plurality of flow control regulators that together manage congestion within a network. Each of the flow control regulators monitors traffic at a corresponding ingress point and determines a state of the ingress point corresponding to the traffic monitored at that ingress point. Each flow control regulator forwards the state (or information representative of the state) to the flow monitor. The flow monitor detects congestion based upon the states of the flow control regulators and, in the event of congestion, determines a target bandwidth for the ingress points. The flow monitor provides a control signal to at least one of the flow control regulators, and at least one of the flow control regulators controls flows at its corresponding ingress point based upon the control signal received from the flow monitor.
    Type: Application
    Filed: May 18, 2006
    Publication date: November 22, 2007
    Inventors: Michael Corwin, Joseph Chamdani, Stephen Trevitt
  • Publication number: 20070258443
    Abstract: A method, system or switch device, the switch device being one of a ported and a non-ported switch device, both including a housing containing an ASIC providing a switching system within the switch device; the housing further including a plurality of extender ports communicating with the ASIC and being connectable to themselves either in loopback fashion or to one or more ported or non-ported switch devices, whereby the extender ports operate on a discrete protocol from standard switch ports. The ported switch device further includes a plurality of standard ports connectable to one or more external computer network devices and is adapted to be operable as a switch system in an independent standalone mode as well as being adapted to be operable in conjunction with a discrete non-ported switch device.
    Type: Application
    Filed: May 2, 2006
    Publication date: November 8, 2007
    Inventors: Joseph Chamdani, Raj Cherabuddi, Michael Corwin, Yu Fang, Joseph Pelissier
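
A speculative data-model sketch of the ported and non-ported switch devices described above follows. The Port and SwitchDevice classes and their methods are invented for illustration; the sketch only captures that extender ports speak a protocol discrete from standard ports, can be cabled to another device's extender ports or looped back, and that non-ported devices carry no standard ports of their own.

```python
# Hypothetical data model of ported / non-ported switch devices.

class Port:
    def __init__(self, name, protocol):
        self.name, self.protocol, self.peer = name, protocol, None

    def connect(self, other):
        # Extender ports and standard ports use discrete protocols,
        # so only like ports can be cabled together.
        if self.protocol != other.protocol:
            raise ValueError("cannot connect ports using discrete protocols")
        self.peer, other.peer = other, self


class SwitchDevice:
    def __init__(self, name, standard_ports=0, extender_ports=2):
        self.name = name
        self.standard = [Port(f"{name}.std{i}", "standard")
                         for i in range(standard_ports)]
        self.extender = [Port(f"{name}.ext{i}", "extender")
                         for i in range(extender_ports)]

    @property
    def ported(self):
        # A non-ported device carries only extender ports.
        return bool(self.standard)


edge = SwitchDevice("edge", standard_ports=8)   # ported: usable standalone
core = SwitchDevice("core")                     # non-ported expansion device
edge.extender[0].connect(core.extender[0])      # cabled extender-to-extender link
edge.extender[1].connect(edge.extender[1])      # loopback: port cabled back to itself
print(edge.ported, core.ported)                 # True False
```
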
  • Publication number: 20070258380
    Abstract: A method, system or switch device, the switch device being one of a ported and a non-ported switch device, either of which including a housing containing an ASIC providing a switching system within the switch device, the housing further including a plurality of extender ports communicating with the ASIC and being connectable to themselves either in loopback fashion or to one or more ported or non-ported switch devices, whereby the extender ports operate on a discrete protocol from standard switch ports. The ported switch device further includes a plurality of standard ports connectable to one or more external computer network devices. A switch device hereof is adapted to send and/or receive an identification communication, the identification communication adapted to be indicative of the health of a switch device or a connecting link in a switch system.
    Type: Application
    Filed: May 2, 2006
    Publication date: November 8, 2007
    Inventors: Joseph Chamdani, Michael Corwin, Joseph Pelissier, Michael Crater
  • Publication number: 20070211640
    Abstract: A method, system or switch device, the switch device having both switch and test capabilities. A method includes running in a test mode or a switch mode, or both, and performing the testing operations or the switching operations, or both. Another method includes setting up the test functionality in the switch device, the test functionality including one or both of transmitting test data and receiving test data. Other steps may include initiating the transmission of test data and checking the test data. A switch device may include an ASIC disposed within the switch device, the ASIC including one or both of an egress test block and an ingress test block; whereby the egress test block and the ingress test block are respectively adapted to transmit and receive a test packet; whereby the ASIC and one or both of the egress and ingress test blocks provide for alternatively operating in the conventional switch mode and in test mode.
    Type: Application
    Filed: March 10, 2006
    Publication date: September 13, 2007
    Inventors: Subbarao Palacharla, Robert Matesevac, Litko Chan, Joseph Chamdani
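
The built-in test capability described above can be pictured with the short sketch below. The packet format, the CRC check, and the SwitchPortModel class are all invented for illustration; the sketch merely shows an egress-side generator injecting known test packets and an ingress-side checker counting errors while the device is temporarily in test mode.

```python
# Illustrative egress/ingress test blocks; packet format is an assumption.
import zlib

def make_test_packet(seq, payload=b"\xAA" * 64):
    # Egress test block: known pattern plus a CRC trailer.
    body = seq.to_bytes(4, "big") + payload
    return body + zlib.crc32(body).to_bytes(4, "big")

def check_test_packet(packet):
    # Ingress test block: recompute the CRC and compare.
    body, crc = packet[:-4], int.from_bytes(packet[-4:], "big")
    return zlib.crc32(body) == crc

class SwitchPortModel:
    def __init__(self):
        self.mode = "switch"                       # conventional switching by default

    def run_test(self, link, count=100):
        """Enter test mode, push `count` packets through the link, tally errors."""
        self.mode = "test"
        errors = sum(0 if check_test_packet(link(make_test_packet(seq))) else 1
                     for seq in range(count))
        self.mode = "switch"                       # back to normal operation
        return errors

# Loopback link that corrupts the third packet, to show error detection.
sent = []
def lossy_loopback(pkt):
    sent.append(pkt)
    return pkt if len(sent) != 3 else pkt[:-1] + bytes([pkt[-1] ^ 0xFF])

print(SwitchPortModel().run_test(lossy_loopback))  # 1 errored packet detected
```
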
  • Publication number: 20070174597
    Abstract: A processor reduces wasted cycle time resulting from stalling and idling, and increases the proportion of execution time, by supporting and implementing both vertical multithreading and horizontal multithreading. Vertical multithreading permits overlapping or “hiding” of cache miss wait times. In vertical multithreading, multiple hardware threads share the same processor pipeline. A hardware thread is typically a process, a lightweight process, a native thread, or the like in an operating system that supports multithreading. Horizontal multithreading increases parallelism within the processor circuit structure, for example within a single integrated circuit die that makes up a single-chip processor. To further increase system parallelism in some processor embodiments, multiple processor cores are formed in a single die. Advances in on-chip multiprocessor horizontal threading are gained as processor core sizes are reduced through technological advancements.
    Type: Application
    Filed: February 23, 2007
    Publication date: July 26, 2007
    Inventors: William Joy, Marc Tremblay, Gary Lauterbach, Joseph Chamdani
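
The benefit of vertical multithreading described above is easy to demonstrate with a toy cycle-count simulation. The latency figures and the simple thread-selection policy below are assumptions, not numbers from the patent; the point is only that when one hardware thread stalls on a cache miss, a shared pipeline with other ready threads keeps issuing instructions and utilization rises.

```python
# Toy simulation of vertical multithreading hiding cache-miss latency.

MISS_LATENCY = 10       # cycles a cache miss keeps a thread stalled (assumed)

def simulate(num_threads, instructions_per_thread, miss_every=5):
    stalled_until = [0] * num_threads        # cycle at which each thread is ready again
    remaining = [instructions_per_thread] * num_threads
    cycle = busy_cycles = 0
    while any(remaining):
        ready = [t for t in range(num_threads)
                 if remaining[t] and stalled_until[t] <= cycle]
        if ready:
            t = ready[0]                     # issue one instruction from a ready thread
            remaining[t] -= 1
            busy_cycles += 1
            issued = instructions_per_thread - remaining[t]
            if issued % miss_every == 0:     # this instruction misses the cache
                stalled_until[t] = cycle + MISS_LATENCY
        cycle += 1                           # with no ready thread, the pipeline idles
    return busy_cycles / cycle               # fraction of cycles doing useful work

print(f"1 thread : {simulate(1, 100):.2f} pipeline utilization")
print(f"4 threads: {simulate(4, 100):.2f} pipeline utilization")
```
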
  • Publication number: 20070147364
    Abstract: A method, system or switch device, the switch device including an ASIC creating a switching system within the switch device, the ASIC including an ingress packet processor, an egress packet assembly device, a transmit control device and a routing device; whereby the ingress packet processor is disposed to receive a data packet, the routing device is adapted to route the data packet from the ingress packet processor to the egress packet assembly device and the transmit control device is disposed to control the routing of the routing device; the switch device further including an ingress port communicating with the ASIC and being connectable to one or more external computer network devices, the ingress port being a substantially standard switch port; an egress port communicating with the ASIC and being connectable to one or more external computer network devices, the egress port being a substantially standard switch port; and, an extender port, the extender port being connectable to another extender port in loopback fashion.
    Type: Application
    Filed: December 22, 2005
    Publication date: June 28, 2007
    Inventors: Subbarao Palacharla, Yu Fang, Joseph Chamdani
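
The ASIC dataflow named in the abstract above (ingress packet processor, routing device, transmit control device, egress packet assembly device) is sketched below. The class names mirror the abstract, but the credit-based behavior of the transmit control device is an assumption made purely for illustration.

```python
# Illustrative ASIC dataflow; the credit mechanism is invented for the example.
from collections import deque

class IngressPacketProcessor:
    def receive(self, raw):
        return {"dest": raw[0], "payload": raw[1:]}     # parse a trivial header

class TransmitControl:
    def __init__(self, credits):
        self.credits = credits                          # per-egress-port credits
    def may_send(self, port):
        return self.credits.get(port, 0) > 0
    def consume(self, port):
        self.credits[port] -= 1

class EgressPacketAssembly:
    def __init__(self):
        self.out = deque()
    def assemble(self, packet):
        self.out.append(packet["dest"] + packet["payload"])

class RoutingDevice:
    def __init__(self, control, egress_by_port):
        self.control, self.egress = control, egress_by_port
    def route(self, packet):
        port = packet["dest"]
        if self.control.may_send(port):                 # gated by transmit control
            self.control.consume(port)
            self.egress[port].assemble(packet)
            return True
        return False                                    # held back: no credit

egress = {"A": EgressPacketAssembly()}
router = RoutingDevice(TransmitControl({"A": 1}), egress)
ingress = IngressPacketProcessor()
print(router.route(ingress.receive("Ahello")))          # True: credit available
print(router.route(ingress.receive("Aworld")))          # False: credit exhausted
```
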
  • Publication number: 20070083625
    Abstract: Intelligent services are provided in a storage network using intelligent service modules that can be cabled to a switch external to the switch chassis and yet be managed as part of the switch's logical domain. Data and management communications between the intelligent service module and the core switch are provided through a “soft-backplane” implemented using in-band communications through cabling attached between the switch and the intelligent service module rather than through a hardwired backplane within the chassis. Management communications from management software are directed to the switch, which handles the management functions relating to the intelligent service module or forwards the management requests to the intelligent service module for processing.
    Type: Application
    Filed: September 29, 2005
    Publication date: April 12, 2007
    Inventors: Joseph Chamdani, Gurumurthy Ramkumar, Bruce Younglove, Corey Hill
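
A conceptual sketch of the “soft-backplane” management path described above follows. The module and method names are hypothetical; the sketch shows only that management requests land at the core switch, which either handles them itself or forwards them in-band over the cable to the externally attached intelligent service module within the same logical domain.

```python
# Conceptual sketch of in-band management forwarding; names are hypothetical.

class IntelligentServiceModule:
    def __init__(self, name):
        self.name = name
    def handle(self, request):
        return f"{self.name} handled {request['op']}"

class CoreSwitch:
    def __init__(self):
        self.modules = {}                 # slot id -> externally cabled module
    def attach(self, slot, module):
        # Cabled attachment; the module joins the switch's logical domain.
        self.modules[slot] = module
    def manage(self, request):
        target = request.get("slot")
        if target is None:
            return f"switch handled {request['op']}"
        # In-band forwarding over the cable takes the place of a hard backplane.
        return self.modules[target].handle(request)

switch = CoreSwitch()
switch.attach(1, IntelligentServiceModule("ism-1"))
print(switch.manage({"op": "set-zoning"}))                  # handled by the switch
print(switch.manage({"op": "update-firmware", "slot": 1}))  # forwarded in-band
```
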
  • Patent number: 7185338
    Abstract: A computer system includes a processor capable of executing a plurality of N threads of instructions, N being an integer greater than one, with a set of global registers visible to each of the plurality of threads and a plurality of busy bit memory elements used to signal whether or not a register is in use by a thread. The processor includes logic to stall a read from a global register if the thread reading the global register is a speculative thread and the busy bits for prior threads are set. The processor might also include a speculative load address memory, into which speculative loads from speculative threads are entered, and logic to compare addresses for stores from nonspeculative threads with addresses in the speculative load address memory and invalidate speculative threads corresponding to the speculative load addresses stored in the speculative load address memory.
    Type: Grant
    Filed: October 15, 2002
    Date of Patent: February 27, 2007
    Assignee: Sun Microsystems, Inc.
    Inventors: Joseph Chamdani, Yuan Chou
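
The two hazard checks in the abstract above (stalling a speculative read while prior threads' busy bits are set, and squashing speculative threads whose loads conflict with a later non-speculative store) can be modeled in a few lines. The SpeculativeCore class below is a highly simplified software model with invented interfaces, not a description of the patented hardware.

```python
# Simplified software model of busy-bit stalls and speculative-load invalidation.

class SpeculativeCore:
    def __init__(self, num_threads, num_regs):
        # busy[reg][thread] is True while `thread` has a pending write to `reg`.
        self.busy = [[False] * num_threads for _ in range(num_regs)]
        self.spec_loads = {}              # address -> set of speculative thread ids
        self.invalidated = set()

    def must_stall_read(self, reg, thread, speculative):
        # A speculative thread may not read past an earlier thread's pending write.
        return speculative and any(self.busy[reg][t] for t in range(thread))

    def speculative_load(self, thread, address):
        self.spec_loads.setdefault(address, set()).add(thread)

    def nonspeculative_store(self, address):
        # Any speculative thread that already loaded this address read stale data.
        self.invalidated |= self.spec_loads.pop(address, set())


core = SpeculativeCore(num_threads=4, num_regs=8)
core.busy[3][0] = True                            # thread 0 has a pending write to r3
print(core.must_stall_read(reg=3, thread=2, speculative=True))   # True: stall
core.speculative_load(thread=2, address=0x100)
core.nonspeculative_store(address=0x100)
print(core.invalidated)                           # {2}: thread 2 must be squashed
```
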
  • Publication number: 20070038679
    Abstract: Differential configuration update commands can be communicated and applied quickly and efficiently to active zone sets and zone set libraries, without requiring propagation of entire zone sets through a fabric of a SAN. Furthermore, the commands can be applied quickly to support dynamic configuration updates. Ordered differential configuration update commands can be applied to ordered zone set data structures to minimize update instruction communication requirements and optimize configuration update operations. In addition, differential configuration update commands can be applied to active zone set data structures (e.g., in an active zone set or a zone set library) to optimize configuration update operations.
    Type: Application
    Filed: August 15, 2005
    Publication date: February 15, 2007
    Inventors: Gurumurthy Ramkumar, Larry Hofer, Sunil Ramesh, Joseph Chamdani, Raj Cherabuddi, Greg Majszak
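
An illustration of applying ordered differential update commands to a zone set follows. The command tuples and the dictionary-of-sorted-lists representation are invented for the example; the point is that a small ordered diff can update the active zone set without redistributing the entire zone set through the fabric.

```python
# Schematic differential zone-set update; the command format is an assumption.

def apply_diff(zone_set, commands):
    """zone_set: {zone_name: sorted list of members}; commands applied in order."""
    for op, zone, member in commands:
        if op == "add-zone":
            zone_set.setdefault(zone, [])
        elif op == "remove-zone":
            zone_set.pop(zone, None)
        elif op == "add-member":
            members = zone_set[zone]
            if member not in members:
                members.append(member)
                members.sort()            # keep the ordered structure ordered
        elif op == "remove-member":
            zone_set[zone].remove(member)
    return zone_set

active = {"zone_db": ["host1", "storage1"]}
diff = [
    ("add-zone", "zone_backup", None),
    ("add-member", "zone_backup", "tape1"),
    ("remove-member", "zone_db", "host1"),
]
print(apply_diff(active, diff))
# {'zone_db': ['storage1'], 'zone_backup': ['tape1']}
```
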
  • Publication number: 20040073906
    Abstract: A computer system includes a processor capable of executing a plurality of N threads of instructions, N being an integer greater than one, with a set of global registers visible to each of the plurality of threads and a plurality of busy bit memory elements used to signal whether or not a register is in use by a thread. The processor includes logic to stall a read from a global register if the thread reading the global register is a speculative thread and the busy bits for prior threads are set. The processor might also include a speculative load address memory, into which speculative loads from speculative threads are entered, and logic to compare addresses for stores from nonspeculative threads with addresses in the speculative load address memory and invalidate speculative threads corresponding to the speculative load addresses stored in the speculative load address memory.
    Type: Application
    Filed: October 15, 2002
    Publication date: April 15, 2004
    Applicant: Sun Microsystems, Inc.
    Inventors: Joseph Chamdani, Yuan Chou
  • Patent number: 6678796
    Abstract: A method and apparatus are disclosed for scheduling instructions, during compilation of program code into a program, to provide adequate prefetch latency. The prefetch scheduler component of the present invention selects a memory operation within the program code as a “martyr load” and removes the prefetch associated with the martyr load, if any. The prefetch scheduler takes advantage of the latency associated with the martyr load to schedule prefetches for memory operations which follow the martyr load. The prefetches are scheduled “behind” (i.e., prior to) the martyr load to allow the prefetches to complete before the associated memory operations are carried out. The prefetch scheduler component continues this process throughout the program code to optimize prefetch scheduling and overall program operation.
    Type: Grant
    Filed: October 3, 2000
    Date of Patent: January 13, 2004
    Assignee: Sun Microsystems, Inc.
    Inventors: Nicolai Kosche, Peter C. Damron, Joseph Chamdani, Partha Pal Tirumalai
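
The martyr-load idea above can be sketched as a tiny compiler pass. The list-of-tuples IR, the policy of picking the first load as the martyr, and the fixed prefetch window are all assumptions made for the example; they are not the heuristics of the patented scheduler.

```python
# Toy martyr-load pass: drop the martyr's prefetch, hoist the following
# loads' prefetches ahead of it so they complete under its miss latency.

def schedule_prefetches(ops, window=3):
    loads = [i for i, (op, _) in enumerate(ops) if op == "load"]
    if not loads:
        return ops
    martyr = loads[0]                     # simplest policy: first load is the martyr
    covered = loads[1:1 + window]         # the loads whose latency it will hide
    scheduled = []
    for i, (op, arg) in enumerate(ops):
        if i == martyr:
            # Prefetches for later loads go "behind" (before) the martyr load.
            scheduled += [("prefetch", ops[j][1]) for j in covered]
        if op == "prefetch" and any(ops[j][1] == arg for j in [martyr] + covered):
            continue                      # drop prefetches now made redundant
        scheduled.append((op, arg))
    return scheduled

before = [("prefetch", "a"), ("load", "a"),
          ("prefetch", "b"), ("load", "b"),
          ("prefetch", "c"), ("load", "c")]
for ins in schedule_prefetches(before):
    print(ins)
# prefetch b and prefetch c are hoisted ahead of the martyr load a;
# the martyr's own prefetch of a is dropped.
```
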
  • Patent number: 6490658
    Abstract: A memory cache method and apparatus with two memory execution pipelines, each having a translation lookaside buffer (TLB). Memory instructions are executed in the first pipeline (324) by searching a data cache (310) and a prefetch cache (320). A large data TLB (330) provides memory for storing address translations for the first pipeline (324). A second pipeline (328) executes memory instructions by accessing the prefetch cache (320). A second micro-TLB (340) is associated with the second pipeline (328). It is loaded in anticipation of data that will be referenced by the second pipeline (328). A history file (360) is also provided to retain information on previous instructions to aid in deciding when to prefetch data. Prefetch logic (370) determines when to prefetch data, and steering logic (380) routes certain instructions to the second pipeline (328) to increase system performance.
    Type: Grant
    Filed: June 23, 1997
    Date of Patent: December 3, 2002
    Assignee: Sun Microsystems, Inc.
    Inventors: Sultan Ahmed, Joseph Chamdani
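
A behavioral sketch of the dual-pipeline arrangement above follows. The fields of the DualPipelineCache class below map loosely onto the elements named in the abstract, but the steering and prefetch heuristics are invented for illustration and greatly simplified relative to the patented design.

```python
# Simplified steering/prefetch model for the two-pipeline cache arrangement.

class DualPipelineCache:
    def __init__(self):
        self.data_cache = set()
        self.prefetch_cache = set()
        self.micro_tlb = {}               # loaded ahead of the second pipeline's use
        self.history = []                 # recent (pc, address) pairs (history file)

    def steer(self, pc, address):
        """Return which pipeline executes this memory instruction."""
        self.history.append((pc, address))
        if address in self.prefetch_cache:
            self.micro_tlb.setdefault(address >> 12, "translation")
            return "pipeline2"            # served from prefetch cache + micro-TLB
        self.data_cache.add(address)      # first pipeline fills the data cache
        self.maybe_prefetch(pc, address)
        return "pipeline1"

    def maybe_prefetch(self, pc, address, stride=64):
        # Prefetch logic: if this pc is striding through memory, pull the
        # next line into the prefetch cache before the load that needs it.
        prior = [a for p, a in self.history[:-1] if p == pc]
        if prior and address - prior[-1] == stride:
            self.prefetch_cache.add(address + stride)

caches = DualPipelineCache()
print(caches.steer(pc=0x40, address=0x1000))   # pipeline1 (cold)
print(caches.steer(pc=0x40, address=0x1040))   # pipeline1, stride detected
print(caches.steer(pc=0x40, address=0x1080))   # pipeline2 (prefetched)
```
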
  • Patent number: 6175898
    Abstract: A memory cache method and apparatus with two memory execution pipelines, each having a translation lookaside buffer (TLB). Memory instructions are executed in the first pipeline (324) by searching a data cache (310) and a prefetch cache (320). A large data TLB (330) provides memory for storing address translations for the first pipeline (324). A second pipeline (328) executes memory instructions by accessing the prefetch cache (320). A second micro-TLB (340) is associated with the second pipeline (328). It is loaded in anticipation of data that will be referenced by the second pipeline (328). A history file (360) is also provided to retain information on previous instructions to aid in deciding when to prefetch data. Prefetch logic (370) determines when to prefetch data, and steering logic (380) routes certain instructions to the second pipeline (328) to increase system performance.
    Type: Grant
    Filed: June 23, 1997
    Date of Patent: January 16, 2001
    Assignee: Sun Microsystems, Inc.
    Inventors: Sultan Ahmed, Joseph Chamdani