Patents by Inventor James M. Guyer

James M. Guyer has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Patent number: 11972112
    Abstract: A host IO device directly implements host read operations on both local memory and on peer memory via a PCIe non-transparent bridge (NTB). When a host read operation is received by a host IO device from a host, the host IO device uses an API to obtain the physical address of the requested data on the peer memory, and generates a PCIe Transaction Layer Packet (TLP) addressed to the address in the peer memory. The TLP addressed to an address in the peer memory is passed over the NTB to the peer compute node to retrieve the data stored in the addressed slot of peer memory. The requested data is returned to the host IO device over the NTB, stored in a buffer, and read out to the host to directly respond to the host read operation.
    Type: Grant
    Filed: January 27, 2023
    Date of Patent: April 30, 2024
    Assignee: Dell Products, L.P.
    Inventors: Jonathan Krasner, Ro Monserrat, Michael Scharland, Jerome Cartmell, James M Guyer, Scott Rowlands, Julie Zhivich, Thomas Mackintosh
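    A minimal Python sketch of the read path described in the abstract above; the names PeerAddressApi, NonTransparentBridge, and handle_host_read are hypothetical stand-ins used only to model the sequence of steps, not any actual product API.
        class PeerAddressApi:
            """Hypothetical API that maps a host logical address to a physical
            slot address in the peer compute node's memory."""
            def __init__(self, address_map):
                self.address_map = address_map

            def lookup(self, logical_address):
                return self.address_map[logical_address]

        class NonTransparentBridge:
            """Stand-in for the PCIe NTB: forwards a read TLP addressed to peer
            memory and returns the data stored in that slot."""
            def __init__(self, peer_memory):
                self.peer_memory = peer_memory

            def read_tlp(self, peer_physical_address):
                return self.peer_memory[peer_physical_address]

        def handle_host_read(logical_address, local_memory, api, ntb):
            """Serve a host read directly: use local memory when the data is
            resident locally, otherwise issue a TLP-style read across the NTB,
            buffer the returned data, and hand it back to the host."""
            if logical_address in local_memory:
                return local_memory[logical_address]
            peer_physical_address = api.lookup(logical_address)
            buffer = ntb.read_tlp(peer_physical_address)
            return buffer

        # Example: the requested data lives only on the peer node.
        local = {}
        peer = {0x1000: 0x2A}
        api = PeerAddressApi({"lba7": 0x1000})
        ntb = NonTransparentBridge(peer)
        assert handle_host_read("lba7", local, api, ntb) == 0x2A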
  • Patent number: 11561695
    Abstract: In a storage system such as a SAN, NAS, or storage array that implements hierarchical performance tiers based on rated drive access latency, on-drive compression is used on data stored on a first tier and off-drive compression is used on data stored on a second tier. Off-drive compression is more processor intensive and may introduce some data access latency but reduces storage requirements. On-drive compression is performed at or near line speed but generally yields lower size reduction ratios than off-drive compression. On-drive compression may be implemented at a higher performance tier whereas off-drive compression may be implemented at a lower performance tier. Further, space savings realized from on-drive compression may be applied to over-provisioning.
    Type: Grant
    Filed: July 6, 2021
    Date of Patent: January 24, 2023
    Assignee: EMC IP HOLDING COMPANY LLC
    Inventor: James M Guyer
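    A minimal sketch of the tier-based compression choice the abstract describes; the function name and tier numbering are hypothetical, and zlib level 1 merely stands in for fast on-drive compression while level 9 stands in for slower, denser off-drive compression.
        import zlib

        def compress_for_tier(data: bytes, tier: int) -> bytes:
            """Hypothetical policy: the highest performance tier (0) relies on
            fast on-drive compression; lower tiers accept extra latency for the
            better reduction ratio of off-drive compression."""
            if tier == 0:
                return zlib.compress(data, 1)   # stand-in for line-speed on-drive compression
            return zlib.compress(data, 9)       # stand-in for processor-intensive off-drive compression

        payload = b"storage " * 1024
        fast = compress_for_tier(payload, tier=0)
        dense = compress_for_tier(payload, tier=1)
        # Off-drive compression generally yields the smaller result.
        print(len(payload), len(fast), len(dense))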
  • Publication number: 20230009942
    Abstract: In a storage system such as a SAN, NAS, or storage array that implements hierarchical performance tiers based on rated drive access latency, on-drive compression is used on data stored on a first tier and off-drive compression is used on data stored on a second tier. Off-drive compression is more processor intensive and may introduce some data access latency but reduces storage requirements. On-drive compression is performed at or near line speed but generally yields lower size reduction ratios than off-drive compression. On-drive compression may be implemented at a higher performance tier whereas off-drive compression may be implemented at a lower performance tier. Further, space savings realized from on-drive compression may be applied to over-provisioning.
    Type: Application
    Filed: July 6, 2021
    Publication date: January 12, 2023
    Applicant: EMC IP HOLDING COMPANY LLC
    Inventor: James M Guyer
  • Patent number: 11537313
    Abstract: Mirrored volatile memory in a storage system is configured with a dual cast region of addresses. Buffers in the dual cast region are allocated for data associated with a received Write IO. A host IO device associates the dual cast addresses with the data. A switch or CPU complex recognizes the dual cast addresses associated with the data and, in response, creates and sends a first copy of the data to a first volatile memory mirror and creates and sends a second copy of the data to a second volatile memory mirror. The second copy may be sent via PCIe NTB between switches or CPU complexes.
    Type: Grant
    Filed: August 11, 2021
    Date of Patent: December 27, 2022
    Assignee: EMC IP HOLDING COMPANY LLC
    Inventors: Jason J Duquette, James M Guyer, Thomas Mackintosh, Earl Medeiros
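    A minimal model of the dual cast behavior described in the abstract above, with hypothetical names and addresses; the switch_write function stands in for the switch or CPU complex that recognizes dual cast addresses and fans the write out to both volatile memory mirrors.
        DUAL_CAST_BASE = 0x8000          # hypothetical start of the dual cast region
        DUAL_CAST_SIZE = 0x1000          # hypothetical size of the dual cast region

        def in_dual_cast_region(address: int) -> bool:
            return DUAL_CAST_BASE <= address < DUAL_CAST_BASE + DUAL_CAST_SIZE

        def switch_write(address: int, data: bytes, mirror_a: dict, mirror_b: dict) -> None:
            """Model of the switch/CPU complex: a write to a dual cast address is
            copied to both mirrors (the second copy travelling over the PCIe NTB
            in the real system); other writes land only in the first mirror."""
            mirror_a[address] = data
            if in_dual_cast_region(address):
                mirror_b[address] = data    # second copy to the peer volatile memory mirror

        mirror_a, mirror_b = {}, {}
        switch_write(0x8010, b"write-io-payload", mirror_a, mirror_b)
        assert mirror_a[0x8010] == mirror_b[0x8010]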
  • Patent number: 11461023
    Abstract: Flexibly expanding the storage capacity of a data storage system by adding a single physical storage device or any number of disk drives to an existing storage system without the need to reconfigure existing erasure encoding groups of the system. The physical storage devices of a data storage system may be divided into a plurality of slices, and each slice may be a member of an erasure encoding group. Physical storage devices that are added to the data storage system may be divided into the same number of slices and/or slices of the same size, which may then be added to existing erasure encoding groups, utilized as spare slices, or left idle until all of the slices are integrated into the data storage system.
    Type: Grant
    Filed: January 31, 2018
    Date of Patent: October 4, 2022
    Assignee: EMC IP Holding Company LLC
    Inventors: Jun Li, James M. Guyer, Stephen R. Ives
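    A small sketch of the slice bookkeeping the abstract describes, under assumed sizes and names (SLICES_PER_DRIVE, add_drive are hypothetical): a newly added drive is cut into fixed-size slices that can join existing erasure encoding groups or be kept as spares.
        SLICES_PER_DRIVE = 8   # hypothetical: every drive is divided into 8 equal slices

        def add_drive(drive_id: str, groups: list, spares: list) -> None:
            """Divide a newly added drive into slices and distribute them: any
            erasure encoding group that is short a member gets one slice, and
            the remaining slices are kept as spares (or left idle)."""
            slices = [f"{drive_id}:slice{i}" for i in range(SLICES_PER_DRIVE)]
            for group in groups:
                if len(group["members"]) < group["width"] and slices:
                    group["members"].append(slices.pop())
            spares.extend(slices)

        groups = [{"width": 5, "members": ["d0:slice0", "d1:slice0", "d2:slice0", "d3:slice0"]}]
        spares = []
        add_drive("d4", groups, spares)
        print(groups[0]["members"], spares)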
  • Publication number: 20190317682
    Abstract: The system, devices, and methods disclosed herein relate to using historic data storage system utilization metrics to automate data storage capacity expansion. In some embodiments, the data storage system is a RAID cloud having a plurality of storage devices divided into logical slices, also called splits. Methods disclosed herein allow user-defined thresholds to be set, wherein the thresholds control when data storage will be expanded. Data storage expansion capacity can be based upon historic usage criteria or customer-based criteria. Additional system customizations include user control over rebalancing data distribution as well as determining when system performance should be routinely evaluated offline.
    Type: Application
    Filed: April 11, 2018
    Publication date: October 17, 2019
    Inventors: Jun Li, Adnan Sahin, James M. Guyer, Stephen R. Ives
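    A minimal sketch of the threshold check described above, with hypothetical names and values: historic utilization samples are compared against a user-defined threshold to decide whether capacity should be expanded.
        def should_expand(utilization_history: list, threshold: float, window: int = 7) -> bool:
            """Return True when the average utilization over the most recent
            samples crosses the user-defined threshold, signalling that storage
            capacity should be expanded."""
            recent = utilization_history[-window:]
            return sum(recent) / len(recent) >= threshold

        # Example: expand once the last week of samples averages 80% or more.
        history = [0.62, 0.70, 0.74, 0.79, 0.81, 0.84, 0.88, 0.90]
        print(should_expand(history, threshold=0.80))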
  • Patent number: 7979572
    Abstract: A data storage system having protocol controller for converting packets between PCIE format used by a storage processor and Rapid IO format used by a packet switching network. The controller includes a PCIE end point for transferring atomic operation (DSA) requests, a data pipe section having a plurality of data pipes for passing user data; and a message engine section for passing messages among the plurality of storage processors. An acceleration path controller bypasses a DSA buffer in the absence of congestion on the network. Packets fed to the PCIE end point include an address portion having code indicating an atomic operation. An encoder converts the code from a PCIE format into the same atomic operation in SRIO format. Each one of a plurality of CPUs is adapted to perform a second DSA request during execution of a first DSA request.
    Type: Grant
    Filed: June 28, 2007
    Date of Patent: July 12, 2011
    Assignee: EMC Corporation
    Inventors: Nhut Tran, Michael Sgrosso, William F. Baxter, III, James M. Guyer
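    A minimal sketch of the address-encoded atomic operation translation described in the abstract above (and in the two related patents that follow); the operation codes and bit layout here are hypothetical illustrations, not the actual PCIE or Serial RapidIO encodings.
        # Hypothetical mapping from the atomic-operation code carried in the
        # PCIE address portion to the equivalent SRIO atomic operation.
        PCIE_TO_SRIO_ATOMIC = {
            0x1: "SRIO_ATOMIC_INCREMENT",
            0x2: "SRIO_ATOMIC_DECREMENT",
            0x3: "SRIO_ATOMIC_TEST_AND_SWAP",
        }

        def encode_srio_request(pcie_address: int):
            """Extract the (hypothetical) atomic op code from the upper bits of
            the PCIE address and translate it to the same operation in SRIO."""
            op_code = (pcie_address >> 28) & 0xF
            target = pcie_address & 0x0FFFFFFF
            return PCIE_TO_SRIO_ATOMIC[op_code], target

        op, target = encode_srio_request(0x30000040)
        print(op, hex(target))   # SRIO_ATOMIC_TEST_AND_SWAP 0x40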
  • Patent number: 7979588
    Abstract: A data storage system having protocol controller for converting packets between PCIE format used by a storage processor and Rapid IO format used by a packet switching network. The controller includes a PCIE end point for transferring atomic operation (DSA) requests, a data pipe section having a plurality of data pipes for passing user data; and a message engine section for passing messages among the plurality of storage processors. An acceleration path controller bypasses a DSA buffer in the absence of congestion on the network. Packets fed to the PCIE end point include an address portion having code indicating an atomic operation. An encoder converts the code from a PCIE format into the same atomic operation in SRIO format. Each one of a plurality of CPUs is adapted to perform a second DSA request during execution of a first DSA request.
    Type: Grant
    Filed: June 28, 2007
    Date of Patent: July 12, 2011
    Assignee: EMC Corporation
    Inventors: Nhut Tran, Michael Sgrosso, James M. Guyer, William F. Baxter, III
  • Patent number: 7769928
    Abstract: A data storage system having protocol controller for converting packets between PCIE format used by a storage processor and Rapid IO format used by a packet switching network. The controller includes a PCIE end point for transferring atomic operation (DSA) requests, a data pipe section having a plurality of data pipes for passing user data; and a message engine section for passing messages among the plurality of storage processors. An acceleration path controller bypasses a DSA buffer in the absence of congestion on the network. Packets fed to the PCIE end point include an address portion having code indicating an atomic operation. An encoder converts the code from a PCIE format into the same atomic operation in SRIO format. Each one of a plurality of CPUs is adapted to perform a second DSA request during execution of a first DSA request.
    Type: Grant
    Filed: June 28, 2007
    Date of Patent: August 3, 2010
    Assignee: EMC Corporation
    Inventors: Nhut Tran, Michael Sgrosso, William F. Baxter, III, James M. Guyer
  • Patent number: 7454536
    Abstract: A queuing system wherein at least one input/output (I/O) interface has an outbound queue. A plurality of processing units is coupled to the at least one I/O interface. Each one of the processing units is coupled to a corresponding processing unit memory. Each one of the processing unit memories has an inbound queue for such coupled processing unit. The at least one I/O interface outbound queue stores outbound information being returned to the I/O interface after being processed by one of the processing units. The I/O interface creates queue indices for storage in the inbound queues of the processor unit memories. The I/O interface includes a translation table, such table storing, at a location, a producer index for the plurality of processing units and a consumer index for such plurality of processing units.
    Type: Grant
    Filed: September 30, 2003
    Date of Patent: November 18, 2008
    Assignee: EMC Corporation
    Inventors: John K. Walton, William F. Baxter, III, Kendell A. Chilton, Daniel Castel, Michael Bermingham, James M. Guyer
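    A small sketch of producer/consumer index bookkeeping in the spirit of the abstract above, with hypothetical names; the patent describes hardware queues and a translation table, modelled here as a simple ring buffer kept per processing unit.
        class InboundQueue:
            """Ring buffer indexed by a producer index (advanced by the I/O
            interface) and a consumer index (advanced by the processing unit)."""
            def __init__(self, depth: int = 8):
                self.slots = [None] * depth
                self.producer = 0   # next slot the I/O interface writes
                self.consumer = 0   # next slot the processing unit reads

            def produce(self, entry) -> None:
                if (self.producer + 1) % len(self.slots) == self.consumer:
                    raise RuntimeError("queue full")
                self.slots[self.producer] = entry
                self.producer = (self.producer + 1) % len(self.slots)

            def consume(self):
                if self.consumer == self.producer:
                    return None          # queue empty
                entry = self.slots[self.consumer]
                self.consumer = (self.consumer + 1) % len(self.slots)
                return entry

        # Translation-table style bookkeeping: one inbound queue per processing unit.
        queues = {cpu: InboundQueue() for cpu in ("cpu0", "cpu1")}
        queues["cpu0"].produce("io-completion-42")
        print(queues["cpu0"].consume())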
  • Patent number: 7437425
    Abstract: A system interface having a plurality of directors, one portion of such directors being adapted for coupling to a host computer/server and another portion of the directors being adapted for coupling to a bank of disk drives. The plurality of directors are interconnected through a network. A common resource section is provided having a resource shared among the plurality of directors. The common shared resource section includes a shared computer code used by the plurality of directors. The code includes computer code for booting up each one of the plurality of directors. The common shared code storage section is interconnected to the directors through the network. A second, redundant common shared resource section is provided. The network is a packet switching network.
    Type: Grant
    Filed: September 30, 2003
    Date of Patent: October 14, 2008
    Assignee: EMC Corporation
    Inventors: John K. Walton, William F. Baxter, III, Kendell A. Chilton, Daniel Castel, Michael Bermingham, James M. Guyer
  • Patent number: 7124245
    Abstract: A system interface having: a plurality of front end directors adapted for coupling to a host computer/server; a plurality of back end directors adapted for coupling to a bank of disk drives; a data transfer section having cache memory; a cache memory manager; and a message network. The cache memory is coupled to the plurality of front end and back end directors. The message network operates independently of the data transfer section and is coupled to the plurality of front end and back end directors. The front end and back end directors control data transfer between the host computer/server and the bank of disk drives in response to messages passing between the front end directors and the back end directors through the message network to facilitate data transfer between the host computer/server and the bank of disk drives. The data passes through the cache memory in the data transfer section as such data passes between the host computer and the bank of disk drives.
    Type: Grant
    Filed: September 30, 2003
    Date of Patent: October 17, 2006
    Assignee: EMC Corporation
    Inventors: John K. Walton, William F. Baxter, III, Kendell A. Chilton, Daniel Castel, Michael Bermingham, James M. Guyer
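    A minimal model of the flow the abstract describes, with hypothetical names: control messages between front end and back end directors travel over a separate message network, while user data moves through the cache memory on its way between the host and the disk bank.
        from collections import deque

        message_network = deque()    # carries messages, independent of the data path
        cache_memory = {}            # data passes through cache between host and disks
        disk_bank = {"lba9": b"block-from-disk"}

        def front_end_read(lba: str) -> bytes:
            """Front end director: ask a back end director (via a message) to
            stage the block into cache, then return it to the host from cache."""
            message_network.append(("read", lba))
            back_end_service()                       # back end handles the message
            return cache_memory[lba]

        def back_end_service() -> None:
            """Back end director: pull the message and move data from disk to cache."""
            op, lba = message_network.popleft()
            if op == "read":
                cache_memory[lba] = disk_bank[lba]

        print(front_end_read("lba9"))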
  • Patent number: 6122756
    Abstract: A high availability computer system and methodology including a backplane having at least one backplane communication bus and a diagnostic bus, and a plurality of motherboards, each interfacing to the diagnostic bus. Each motherboard also includes a memory system, including main memory distributed among the plurality of motherboards, and a memory controller module for accessing said main memory, interfacing to said motherboard communication bus. Each motherboard also includes at least one daughterboard detachably connected thereto. The motherboard further includes a backplane diagnostic bus interface mechanism interfacing each of the motherboards to the backplane diagnostic bus; a microcontroller for processing information and providing outputs; and a test bus controller mechanism including registers therein.
    Type: Grant
    Filed: February 10, 1998
    Date of Patent: September 19, 2000
    Assignee: Data General Corporation
    Inventors: William F. Baxter, Robert G. Gelinas, James M. Guyer, Dan R. Huck, Michael F. Hunt, David L. Keating, Jeff S. Kimmell, Phil J. Roux, Liz M. Truebenbach, Rob P. Valentine, Pat J. Weiler, Joseph Cox, Barry E. Gillott, Andrea Heyda, Rob J. Pike, Tom V. Radogna, Art A. Sherman, Michael Sporer, Doug J. Tucker, Simon N. Yeung
  • Patent number: 6026461
    Abstract: A very fast, memory efficient, highly expandable, highly efficient CCNUMA processing system based on a hardware architecture that minimizes system bus contention, maximizes processing forward progress by maintaining strong ordering and avoiding retries, and implements a full-map directory structure cache coherency protocol. A Cache Coherent Non-Uniform Memory Access (CCNUMA) architecture is implemented in a system comprising a plurality of integrated modules each consisting of a motherboard and two daughterboards. The daughterboards, which plug into the motherboard, each contain two Job Processors (JPs), cache memory, and input/output (I/O) capabilities. Located directly on the motherboard are additional integrated I/O capabilities in the form of two Small Computer System Interfaces (SCSI) and one Local Area Network (LAN) interface. The motherboard includes main memory, a memory controller (MC) and directory DRAMs for cache coherency.
    Type: Grant
    Filed: December 9, 1998
    Date of Patent: February 15, 2000
    Assignee: Data General Corporation
    Inventors: William F. Baxter, Robert G. Gelinas, James M. Guyer, Dan R. Huck, Michael F. Hunt, David L. Keating, Jeff S. Kimmell, Phil J. Roux, Liz M. Truebenbach, Rob P. Valentine, Pat J. Weiler, Joseph Cox, Barry E. Gillott, Andrea Heyda, Rob J. Pike, Tom V. Radogna, Art A. Sherman, Michael Sporer, Doug J. Tucker, Simon N. Yeung
  • Patent number: 5887146
    Abstract: A very fast, memory efficient, highly expandable, highly efficient CCNUMA processing system based on a hardware architecture that minimizes system bus contention, maximizes processing forward progress by maintaining strong ordering and avoiding retries, and implements a full-map directory structure cache coherency protocol. A Cache Coherent Non-Uniform Memory Access (CCNUMA) architecture is implemented in a system comprising a plurality of integrated modules each consisting of a motherboard and two daughterboards. The daughterboards, which plug into the motherboard, each contain two Job Processors (JPs), cache memory, and input/output (I/O) capabilities. Located directly on the motherboard are additional integrated I/O capabilities in the form of two Small Computer System Interfaces (SCSI) and one Local Area Network (LAN) interface. The motherboard includes main memory, a memory controller (MC) and directory DRAMs for cache coherency.
    Type: Grant
    Filed: August 12, 1996
    Date of Patent: March 23, 1999
    Assignee: Data General Corporation
    Inventors: William F. Baxter, Robert G. Gelinas, James M. Guyer, Dan R. Huck, Michael F. Hunt, David L. Keating, Jeff S. Kimmell, Phil J. Roux, Liz M. Truebenbach, Rob P. Valentine, Pat J. Weiler, Joseph Cox, Barry E. Gillott, Andrea Heyda, Rob J. Pike, Tom V. Radogna, Art A. Sherman, Michael Sporer, Doug J. Tucker, Simon N. Yeung
  • Patent number: 5070475
    Abstract: A data processing system which includes a floating point computation unit (FPU) which interfaces with a central processing unit (CPU), in which the CPU supplies a dispatch control signal to inform the FPU that it is about to execute a floating point macroinstruction and supplies a dispatch address which includes the starting address of the floating point microinstructions therefor during the same operating cycle that the dispatch control signal is supplied. A buffer memory is provided in the FPU to store the starting address of one decoded macroinstruction while a sequence of microinstructions for a previously decoded macroinstruction is being executed by the FPU. When a starting address is already resident in the buffer, the FPU supplies a control signal to prevent the CPU from supplying a further dispatch address until the buffer is empty. Other control signals for synchronizing the CPU and FPU operations and data transfers are also provided.
    Type: Grant
    Filed: November 14, 1985
    Date of Patent: December 3, 1991
    Assignee: Data General Corporation
    Inventors: Kevin B. Normoyle, James M. Guyer, Rainer Vogt, Anthony S. Fong
  • Patent number: 4796176
    Abstract: A multiprocessor computing system is disclosed which includes a system bus, a plurality of processing units and a plurality of synchronous input/output channel controllers. A plurality of priority lines each corresponding to a processing unit are provided through each input/output channel controller in order of priority. A synchronizing signal is generated at the same time in each input/output channel controller in response to the end of an address phase on the system bus. A latch is provided in the input/output controllers which responds to the synchronizing signal by storing the condition of the priority lines and whether an interrupt is pending. In response to a broadcast interrupt origin request instruction from a processing unit, all input/output channel controllers will respond at the same time but only the one with the priority interrupt for the requesting processing unit gives a non-zero response.
    Type: Grant
    Filed: November 15, 1985
    Date of Patent: January 3, 1989
    Assignee: Data General Corporation
    Inventors: Lynn W. D'Amico, James M. Guyer
  • Patent number: 4597041
    Abstract: A data processing system having separate kernel, vertical and horizontal microcode, separate loading of vertical microcode and a permanently resident kernel microcode, and a soft console with dual levels of capability. The system includes a processor having dual ALC and microcode processors, and an instruction processor. Also included are a processor incorporating a multifunction processor memory, a multifunction nibble shifter, and a high speed look-aside memory control.
    Type: Grant
    Filed: November 15, 1982
    Date of Patent: June 24, 1986
    Assignee: Data General Corp.
    Inventors: James M. Guyer, David I. Epstein, David L. Keating, Walker Anderson, James E. Veres, Harold R. Kimmens
  • Patent number: 4591972
    Abstract: A data processing system having separate kernel, vertical and horizontal microcode, separate loading of vertical microcode and a permanently resident kernel microcode, and a soft console with dual levels of capability. The system includes a processor having dual ALC and microcode processors, and an instruction processor. Also included are a processor incorporating a multifunction processor memory, a multifunction nibble shifter, and a high speed look-aside memory control. Adaptive microcode control means are disclosed in which microinstruction sequencing is a function of the current microinstruction and current machine state.
    Type: Grant
    Filed: November 15, 1982
    Date of Patent: May 27, 1986
    Assignee: Data General Corp.
    Inventors: James M. Guyer, David I. Epstein, David L. Keating
  • Patent number: 4569018
    Abstract: A data processing system uses instructions which may refer to operands in main memory by either physical or logical addresses. The central processor has an internal memory organized as two portions. The first portion provides a scratchpad memory function for the central processor and the second portion is responsive to logical addresses to provide corresponding physical addresses.
    Type: Grant
    Filed: November 15, 1982
    Date of Patent: February 4, 1986
    Assignee: Data General Corp.
    Inventors: Mark D. Hummel, James M. Guyer, David I. Epstein, David L. Keating, Steven J. Wallach
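    A minimal sketch of the split internal memory described in the last abstract, with hypothetical contents and names: one portion of the internal memory acts as a scratchpad, while the other answers logical addresses with the corresponding physical addresses.
        # Hypothetical split of the processor's internal memory: a scratchpad
        # portion and a translation portion keyed by logical address.
        internal_memory = {
            "scratchpad": [0] * 16,
            "translation": {0x10: 0x8010, 0x11: 0x8011},
        }

        def resolve_operand(address: int, physical: bool) -> int:
            """Instructions may name operands by physical or logical address;
            logical addresses are translated through the second portion."""
            if physical:
                return address
            return internal_memory["translation"][address]

        print(hex(resolve_operand(0x11, physical=False)))   # 0x8011
        print(hex(resolve_operand(0x9000, physical=True)))  # 0x9000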