Patents by Inventor Keith Lowery

Keith Lowery has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Patent number: 11860813
    Abstract: A method of processing memory instructions including receiving a memory related command from a client system in communication with a memory appliance via a communication protocol, wherein the memory appliance comprises a processor, a memory unit controller and a plurality of memory devices coupled to said memory unit controller. The memory related command is translated by the processor into a plurality of primitive commands that are formatted to perform prescribed data manipulation operations on data of the plurality of memory devices stored in data structures. The plurality of primitive commands is executed on data stored in the memory devices to produce a result, wherein the executing is performed by the memory unit controller. A direct memory transfer of the result is established over the communication protocol to a network.
    Type: Grant
    Filed: September 23, 2021
    Date of Patent: January 2, 2024
    Assignee: Rambus Inc.
    Inventors: Keith Lowery, Vlad Fruchter
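    The abstract above describes a translate-then-execute pipeline. Below is a minimal, illustrative C++ sketch of that flow; the names (ClientCommand, Primitive, translate, execute) are assumptions for illustration only, not taken from the patent, and the memory devices are modeled as a simple key/value map.

      #include <string>
      #include <unordered_map>
      #include <vector>

      // Hypothetical primitive commands understood by the memory unit controller.
      enum class Primitive { HashKey, ReadBucket, ExtractValue };

      struct ClientCommand { std::string op; std::string key; };   // e.g. {"GET", "user:42"}

      // Appliance processor: translate one memory related command into primitives.
      std::vector<Primitive> translate(const ClientCommand& cmd) {
          if (cmd.op == "GET")
              return { Primitive::HashKey, Primitive::ReadBucket, Primitive::ExtractValue };
          return {};
      }

      // Memory unit controller: execute the primitives against the memory devices
      // (modeled as a key/value map) to produce the result that would then be
      // transferred directly over the communication protocol to the network.
      std::string execute(const std::vector<Primitive>& prims, const ClientCommand& cmd,
                          const std::unordered_map<std::string, std::string>& devices) {
          std::string result;
          for (Primitive p : prims) {
              if (p == Primitive::ExtractValue) {
                  auto it = devices.find(cmd.key);
                  result = (it != devices.end()) ? it->second : "";
              }
              // HashKey / ReadBucket would address the physical devices; elided here.
          }
          return result;
      }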
  • Patent number: 11520633
    Abstract: A method and system for thread aware, class aware, and topology aware memory allocations. Embodiments include a compiler configured to generate compiled code (e.g., for a runtime) that when executed allocates memory on a per class per thread basis that is system topology (e.g., for non-uniform memory architecture (NUMA)) aware. Embodiments can further include an executable configured to allocate a respective memory pool during runtime for each instance of a class for each thread. The memory pools are local to a respective processor, core, etc., where each thread executes.
    Type: Grant
    Filed: July 22, 2020
    Date of Patent: December 6, 2022
    Assignee: Rambus Inc.
    Inventor: Keith Lowery
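    The per-class, per-thread allocation scheme above can be pictured with thread-local pools. The sketch below is an illustration under assumed names, not the patented implementation: each (class, thread) pair gets its own thread_local free list, and a fully topology-aware version would additionally bind each pool's backing pages to the NUMA node where the thread runs (for example with libnuma's numa_alloc_onnode), which is only noted in a comment here.

      #include <cstddef>
      #include <new>
      #include <vector>

      // One pool per (class, thread): thread_local gives every thread its own
      // instance, so allocation stays lock-free and local to the running core.
      // A topology-aware version would bind the pool's backing pages to the
      // thread's NUMA node (e.g. numa_alloc_onnode from libnuma); omitted here.
      template <typename T>
      class PerThreadPool {
      public:
          void* allocate() {
              if (free_list_.empty()) {
                  blocks_.push_back(::operator new(sizeof(T)));
                  return blocks_.back();
              }
              void* p = free_list_.back();
              free_list_.pop_back();
              return p;
          }
          void deallocate(void* p) { free_list_.push_back(p); }
      private:
          std::vector<void*> free_list_;
          std::vector<void*> blocks_;   // not released in this sketch
      };

      template <typename T>
      PerThreadPool<T>& pool_for() {
          thread_local PerThreadPool<T> pool;   // one pool per class per thread
          return pool;
      }

      // Example class whose allocations route through its per-thread pool.
      struct Node {
          double payload[8];
          static void* operator new(std::size_t)  { return pool_for<Node>().allocate(); }
          static void  operator delete(void* p)   { pool_for<Node>().deallocate(p); }
      };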
  • Publication number: 20220188249
    Abstract: System and method for improved transferring of data involving memory device systems. A memory appliance (MA) comprising a plurality of memory modules is configured to store data within the plurality of memory modules and further configured to receive data commands from a first server and a second server coupled to the MA. The data commands may include direct memory access commands such that the MA can service the data commands while bypassing a host controller of the MA.
    Type: Application
    Filed: December 20, 2021
    Publication date: June 16, 2022
    Inventors: Vlad Fruchter, Keith Lowery, George Michael Uhler, Steven Woo, Chi-Ming (Philip) Yeung, Ronald Lee
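    An illustrative sketch of the dispatch decision described above; the types and the direct_access flag are invented for illustration. Ordinary commands take the appliance's host-controller path, while direct memory access commands are routed straight to the addressed memory module, bypassing the host controller.

      #include <cstdint>
      #include <vector>

      struct DataCommand {
          bool          direct_access;   // true: bypass the appliance's host controller
          unsigned      module;          // target memory module
          std::uint64_t offset;
          std::uint64_t value;
      };

      struct MemoryModule { std::vector<std::uint64_t> words = std::vector<std::uint64_t>(1024); };

      class MemoryAppliance {
      public:
          explicit MemoryAppliance(unsigned module_count) : modules_(module_count) {}

          // Data commands arrive from either server attached to the appliance.
          void service(const DataCommand& cmd) {
              if (cmd.direct_access)
                  write_module(cmd);           // straight to the module; host controller bypassed
              else
                  host_controller_path(cmd);   // normal path through the host controller
          }
      private:
          void write_module(const DataCommand& cmd) {
              modules_[cmd.module].words[cmd.offset] = cmd.value;
          }
          void host_controller_path(const DataCommand& cmd) {
              write_module(cmd);               // plus whatever bookkeeping the host does
          }
          std::vector<MemoryModule> modules_;
      };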
  • Publication number: 20220100697
    Abstract: A method of processing memory instructions including receiving a memory related command from a client system in communication with a memory appliance via a communication protocol, wherein the memory appliance comprises a processor, a memory unit controller and a plurality of memory devices coupled to said memory unit controller. The memory related command is translated by the processor into a plurality of primitive commands that are formatted to perform prescribed data manipulation operations on data of the plurality of memory devices stored in data structures. The plurality of primitive commands is executed on data stored in the memory devices to produce a result, wherein the executing is performed by the memory unit controller. A direct memory transfer of the result is established over the communication protocol to a network.
    Type: Application
    Filed: September 23, 2021
    Publication date: March 31, 2022
    Inventors: Keith Lowery, Vlad Fruchter
  • Patent number: 11210240
    Abstract: System and method for improved transferring of data involving memory device systems. A memory appliance (MA) comprising a plurality of memory modules is configured to store data within the plurality of memory modules and further configured to receive data commands from a first server and a second server coupled to the MA. The data commands may include direct memory access commands such that the MA can service the data commands while bypassing a host controller of the MA.
    Type: Grant
    Filed: October 7, 2019
    Date of Patent: December 28, 2021
    Assignee: Rambus Inc.
    Inventors: Vlad Fruchter, Keith Lowery, George Michael Uhler, Steven Woo, Chi-Ming (Philip) Yeung, Ronald Lee
  • Patent number: 11132328
    Abstract: A method of processing memory instructions including receiving a memory related command from a client system in communication with a memory appliance via a communication protocol, wherein the memory appliance comprises a processor, a memory unit controller and a plurality of memory devices coupled to said memory unit controller. The memory related command is translated by the processor into a plurality of primitive commands that are formatted to perform prescribed data manipulation operations on data of the plurality of memory devices stored in data structures. The plurality of primitive commands is executed on data stored in the memory devices to produce a result, wherein the executing is performed by the memory unit controller. A direct memory transfer of the result is established over the communication protocol to a network.
    Type: Grant
    Filed: November 12, 2014
    Date of Patent: September 28, 2021
    Assignee: Rambus Inc.
    Inventors: Keith Lowery, Vlad Fruchter
  • Publication number: 20210011768
    Abstract: A method and system for thread aware, class aware, and topology aware memory allocations. Embodiments include a compiler configured to generate compiled code (e.g., for a runtime) that when executed allocates memory on a per class per thread basis that is system topology (e.g., for non-uniform memory architecture (NUMA)) aware. Embodiments can further include an executable configured to allocate a respective memory pool during runtime for each instance of a class for each thread. The memory pools are local to a respective processor, core, etc., where each thread executes.
    Type: Application
    Filed: July 22, 2020
    Publication date: January 14, 2021
    Inventor: Keith Lowery
  • Patent number: 10725824
    Abstract: A method and system for thread aware, class aware, and topology aware memory allocations. Embodiments include a compiler configured to generate compiled code (e.g., for a runtime) that when executed allocates memory on a per class per thread basis that is system topology (e.g., for non-uniform memory architecture (NUMA)) aware. Embodiments can further include an executable configured to allocate a respective memory pool during runtime for each instance of a class for each thread. The memory pools are local to a respective processor, core, etc., where each thread executes.
    Type: Grant
    Filed: July 5, 2016
    Date of Patent: July 28, 2020
    Assignee: Rambus Inc.
    Inventor: Keith Lowery
  • Publication number: 20200110715
    Abstract: System and method for improved transferring of data involving memory device systems. A memory appliance (MA) comprising a plurality of memory modules is configured to store data within the plurality of memory modules and further configured to receive data commands from a first server and a second server coupled to the MA. The data commands may include direct memory access commands such that the MA can service the data commands while bypassing a host controller of the MA.
    Type: Application
    Filed: October 7, 2019
    Publication date: April 9, 2020
    Inventors: Vlad Fruchter, Keith Lowery, George Michael Uhler, Steven Woo, Chi-Ming (Philip) Yeung, Ronald Lee
  • Patent number: 10574734
    Abstract: Methods and systems for managing data storage and compute resources. The data can be stored at multiple locations, allowing compute operations to be performed in a distributed manner in one or more locations. The cloud storage and cloud compute resources can be dynamically scaled based on the locations of the data and based on the cloud storage and/or cloud computing budgets. Dynamic reconfiguration of reconfigurable processors (e.g., FPGA) can further be used to accelerate compute operations.
    Type: Grant
    Filed: March 24, 2016
    Date of Patent: February 25, 2020
    Assignee: Rambus Inc.
    Inventor: Keith Lowery
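    A small sketch of the placement decision described above, with invented scoring weights: candidate locations are ranked by how much of the needed data they already hold, by remaining budget, and by whether a reconfigurable accelerator (FPGA) is available. Dynamic FPGA reconfiguration itself is outside this sketch, and every name and weight here is an assumption.

      #include <algorithm>
      #include <string>
      #include <vector>

      struct Location {
          std::string name;
          double data_fraction;    // share of the needed data already stored here (0..1)
          double budget_left;      // remaining cloud storage/compute budget, normalized
          bool   has_fpga;         // reconfigurable accelerator available at this site
      };

      // Pick where to run the computation: prefer data locality, respect the
      // remaining budget, and favor sites with an FPGA. Weights are illustrative.
      const Location& place_compute(const std::vector<Location>& sites) {
          auto score = [](const Location& l) {
              return 0.6 * l.data_fraction + 0.3 * l.budget_left + (l.has_fpga ? 0.1 : 0.0);
          };
          return *std::max_element(sites.begin(), sites.end(),
              [&](const Location& a, const Location& b) { return score(a) < score(b); });
      }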
  • Patent number: 10437747
    Abstract: System and method for improved transferring of data involving memory device systems. A memory appliance (MA) comprising a plurality of memory modules is configured to store data within the plurality of memory modules and further configured to receive data commands from a first server and a second server coupled to the MA. The data commands may include direct memory access commands such that the MA can service the data commands while bypassing a host controller of the MA.
    Type: Grant
    Filed: April 11, 2016
    Date of Patent: October 8, 2019
    Assignee: Rambus Inc.
    Inventors: Vlad Fruchter, Keith Lowery, George Michael Uhler, Steven Woo, Chi-Ming (Philip) Yeung, Ronald Lee
  • Patent number: 10255104
    Abstract: Embodiments described herein include a system, a computer-readable medium and a computer-implemented method for processing a system call (SYSCALL) request. The SYSCALL request from an invisible processing device is stored in a queueing mechanism that is accessible to a visible processing device, where the visible processing device is visible to an operating system and the invisible processing device is invisible to the operating system. The SYSCALL request is processed using the visible processing device, and the invisible processing device is notified using a notification mechanism that the SYSCALL request was processed.
    Type: Grant
    Filed: March 29, 2013
    Date of Patent: April 9, 2019
    Assignee: Advanced Micro Devices, Inc.
    Inventors: Benjamin Thomas Sander, Michael Clair Houston, Keith Lowery, Newton Cheung
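    A host-side sketch of the queueing and notification mechanism above, using two ordinary threads to stand in for the visible and invisible processing devices (the patent targets heterogeneous hardware, such as an accelerator the operating system cannot see); all names are illustrative assumptions.

      #include <condition_variable>
      #include <cstdio>
      #include <mutex>
      #include <queue>
      #include <thread>

      struct SyscallRequest {
          int  number = 0;          // which system call is being requested
          bool done   = false;      // notification flag set by the visible device
          std::mutex              m;
          std::condition_variable cv;
      };

      std::queue<SyscallRequest*> pending;      // queueing mechanism accessible to the visible device
      std::mutex                  pending_m;
      std::condition_variable     pending_cv;

      // "Invisible" device: cannot call the OS itself, so it enqueues the request and waits.
      void invisible_device(SyscallRequest& req) {
          { std::lock_guard<std::mutex> lg(pending_m); pending.push(&req); }
          pending_cv.notify_one();
          std::unique_lock<std::mutex> lk(req.m);
          req.cv.wait(lk, [&] { return req.done; });           // wait for the notification
      }

      // "Visible" device: drains the queue, performs the real system call, then notifies.
      void visible_device() {
          std::unique_lock<std::mutex> lk(pending_m);
          pending_cv.wait(lk, [] { return !pending.empty(); });
          SyscallRequest* req = pending.front();
          pending.pop();
          lk.unlock();
          std::printf("servicing syscall %d for the invisible device\n", req->number);
          { std::lock_guard<std::mutex> lg(req->m); req->done = true; }
          req->cv.notify_one();
      }

      int main() {
          SyscallRequest req;
          req.number = 42;
          std::thread cpu(visible_device);      // plays the OS-visible processing device
          invisible_device(req);                // caller plays the invisible processing device
          cpu.join();
      }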
  • Publication number: 20190034442
    Abstract: When requested content is available at a data center, the data center returns the requested content to the browser. When the requested content is locally unavailable at the data center, the requested content is retrieved from an origin server. The retrieval of the content from the origin server may be delayed based on the processing load at the origin server. When retrieval of the content is delayed, the request is prioritized and placed in a queue for handling by the origin server based on the priority of the request. Also, when retrieval of the content is delayed, a status page may be communicated to the browser to inform a user of the delay and provide alternate content and status information related to the request determined as a function of the request or the current state of the origin server.
    Type: Application
    Filed: September 24, 2018
    Publication date: January 31, 2019
    Applicant: Parallel Networks LLC
    Inventors: Keith A. Lowery, David K. Davidson, Avinash C. Saxena
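    An illustrative sketch of the decision path in the abstract above; the cache, the load threshold, and the status-page text are assumptions. Content present at the data center is returned immediately; otherwise the request either goes to the origin server right away or, when the origin is loaded, is queued by priority and a status page is returned to the browser.

      #include <queue>
      #include <string>
      #include <unordered_map>

      struct Request { std::string url; int priority = 0; };      // higher value = more urgent

      struct QueuedRequest {
          Request req;
          bool operator<(const QueuedRequest& o) const { return req.priority < o.req.priority; }
      };

      class DataCenter {
      public:
          // Returns the body sent back to the browser: cached content, freshly
          // fetched content, or a status page when the origin fetch is deferred.
          std::string handle(const Request& req) {
              auto it = cache_.find(req.url);
              if (it != cache_.end()) return it->second;                  // available locally
              if (origin_load_ < 0.8) return fetch_from_origin(req.url);  // origin can take it now
              deferred_.push({req});                                      // delay, ordered by priority
              return "status: request queued at the origin; alternate content follows";
          }
      private:
          std::string fetch_from_origin(const std::string& url) {
              std::string body = "origin copy of " + url;                 // placeholder fetch
              cache_[url] = body;
              return body;
          }
          std::unordered_map<std::string, std::string> cache_;
          std::priority_queue<QueuedRequest>           deferred_;
          double origin_load_ = 0.9;                                      // pretend the origin is busy
      };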
  • Patent number: 10146575
    Abstract: Methods, systems and computer-readable mediums for task scheduling on an accelerated processing device (APD) are provided. In an embodiment, a method comprises enqueuing one or more tasks in a memory storage module based on the APD using a software-based enqueuing module, and dequeuing the one or more tasks from the memory storage module using a hardware-based command processor, wherein the command processor forwards the one or more tasks to the shader core.
    Type: Grant
    Filed: August 29, 2016
    Date of Patent: December 4, 2018
    Assignee: Advanced Micro Devices, Inc.
    Inventors: Benjamin Thomas Sander, Michael Houston, Newton Cheung, Keith Lowery
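    A minimal single-producer/single-consumer sketch of the split described above: software enqueues tasks into a memory-resident ring buffer and a simulated hardware command processor dequeues them and forwards them to the shader core. The queue layout and names are assumptions for illustration, not the patented design.

      #include <array>
      #include <atomic>
      #include <cstddef>
      #include <cstdio>
      #include <optional>

      struct Task { int kernel_id; };

      // Memory-resident ring buffer shared by the software enqueuing module and
      // the (simulated) hardware command processor.
      class TaskQueue {
      public:
          bool enqueue(Task t) {                            // software-based enqueuing module
              std::size_t h = head_.load(std::memory_order_relaxed);
              std::size_t n = (h + 1) % slots_.size();
              if (n == tail_.load(std::memory_order_acquire)) return false;   // queue full
              slots_[h] = t;
              head_.store(n, std::memory_order_release);
              return true;
          }
          std::optional<Task> dequeue() {                   // hardware-based command processor
              std::size_t t = tail_.load(std::memory_order_relaxed);
              if (t == head_.load(std::memory_order_acquire)) return std::nullopt;  // empty
              Task task = slots_[t];
              tail_.store((t + 1) % slots_.size(), std::memory_order_release);
              return task;
          }
      private:
          std::array<Task, 64> slots_{};
          std::atomic<std::size_t> head_{0}, tail_{0};
      };

      int main() {
          TaskQueue q;
          q.enqueue({7});                                   // software side enqueues a task
          if (auto t = q.dequeue())                         // command processor side dequeues it
              std::printf("forwarding kernel %d to the shader core\n", t->kernel_id);
      }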
  • Patent number: 10114903
    Abstract: When requested content is available at a data center, the data center returns the requested content to the browser. When the requested content is locally unavailable at the data center, the requested content is retrieved from an origin server. The retrieval of the content from the origin server may be delayed based on the processing load at the origin server. When retrieval of the content is delayed, the request is prioritized and placed in a queue for handling by the origin server based on the priority of the request. Also, when retrieval of the content is delayed, a status page may be communicated to the browser to inform a user of the delay and provide alternate content and status information related to the request determined as a function of the request or the current state of the origin server.
    Type: Grant
    Filed: May 13, 2014
    Date of Patent: October 30, 2018
    Assignee: Parallel Networks LLC
    Inventors: Keith A. Lowery, David K. Davidson, Avinash C. Saxena
  • Publication number: 20180203734
    Abstract: A method and system for thread aware, class aware, and topology aware memory allocations. Embodiments include a compiler configured to generate compiled code (e.g., for a runtime) that when executed allocates memory on a per class per thread basis that is system topology (e.g., for non-uniform memory architecture (NUMA)) aware. Embodiments can further include an executable configured to allocate a respective memory pool during runtime for each instance of a class for each thread. The memory pools are local to a respective processor, core, etc., where each thread executes.
    Type: Application
    Filed: July 5, 2016
    Publication date: July 19, 2018
    Inventor: Keith Lowery
  • Publication number: 20180097814
    Abstract: A data center determines whether requested content is available at the data center. The content is available when the content is both present at the data center and current. When the requested content is available at the data center, the data center returns the requested content to the browser. When the requested content is locally unavailable at the data center, the requested content is retrieved from an origin server. When retrieval of the content is delayed, the request is prioritized and placed in a queue for handling by the origin server based on the priority of the request. A status page may be communicated to the browser to inform a user of the delay and provide alternate content and status information related to the request determined as a function of the request or the current state of the origin server.
    Type: Application
    Filed: December 3, 2017
    Publication date: April 5, 2018
    Applicant: Parallel Networks LLC
    Inventors: Keith Lowery, David K. Davidson, Avinash C. Saxena
  • Patent number: 9934194
    Abstract: A memory appliance system is described that includes a memory unit comprising a memory unit controller and a plurality of memory devices. A reconfigurable memory structure is stored in the plurality of memory devices, wherein the memory structure comprises a plurality of variably sized containers. Each container of data includes metadata, payload, and relationship information that associates a corresponding container with one or more other containers stored in the memory structure. The controller is data structure aware such that the controller is configured to traverse the memory structure and perform operations on the memory structure based on the metadata and relationship information.
    Type: Grant
    Filed: November 12, 2014
    Date of Patent: April 3, 2018
    Assignee: Rambus Inc.
    Inventors: Keith Lowery, Vlad Fruchter
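    A data-layout sketch of the container structure described above, with assumed field names: each container carries metadata, a payload, and the indices of related containers, and a data-structure-aware controller can traverse those links on its own, as the traversal helper below illustrates.

      #include <cstddef>
      #include <cstdint>
      #include <string>
      #include <vector>

      // Variably sized container: metadata + payload + relationship information.
      struct Container {
          std::string               type;      // metadata describing the payload
          std::vector<std::uint8_t> payload;   // variably sized data
          std::vector<std::size_t>  related;   // indices of associated containers
      };

      // The controller is "data structure aware": it can traverse the memory
      // structure by following relationship information without client help.
      std::size_t total_payload_bytes(const std::vector<Container>& structure, std::size_t root) {
          std::size_t total = 0;
          std::vector<std::size_t> stack{root};
          std::vector<bool> seen(structure.size(), false);
          while (!stack.empty()) {
              std::size_t i = stack.back();
              stack.pop_back();
              if (seen[i]) continue;
              seen[i] = true;
              total += structure[i].payload.size();
              for (std::size_t r : structure[i].related) stack.push_back(r);
          }
          return total;
      }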
  • Patent number: 9880971
    Abstract: A memory appliance system is described and includes a processor coupled to one or more communication channels with a command interface, wherein the processor is configured for communicating commands over the communication channels. A plurality of Smart Memory Cubes (SMCs) is coupled to the processor through the communication channels. Each of the SMCs includes a controller that is programmable, and a plurality of memory devices. The controller is configured to respond to commands from the command interface to access content stored in one or more of the plurality of memory devices and to perform data operations on content accessed from the plurality of memory devices.
    Type: Grant
    Filed: November 12, 2014
    Date of Patent: January 30, 2018
    Assignee: Rambus Inc.
    Inventors: Keith Lowery, Vlad Fruchter, Chi-Ming Yeung
  • Patent number: 9665533
    Abstract: A memory appliance system is described and includes a plurality of memory devices storing data in a plurality of containers and a controller. The containers include metadata, relationship information associating a respective container with related containers, and a payload. The controller is configured to perform data operations on the payload of one of the containers, and based on the relationship information associating the respective container with related containers and the payload of related containers.
    Type: Grant
    Filed: November 12, 2014
    Date of Patent: May 30, 2017
    Assignee: Rambus Inc.
    Inventors: Keith Lowery, Vlad Fruchter