Patents by Inventor Keith Lowery
Keith Lowery has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).
-
Publication number: 20250061066
Abstract: System and method for improved transferring of data involving memory device systems. A memory appliance (MA) comprising a plurality of memory modules is configured to store data within the plurality of memory modules and further configured to receive data commands from a first server and a second server coupled to the MA. The data commands may include direct memory access commands such that the MA can service the data commands while bypassing a host controller of the MA.
Type: Application
Filed: August 27, 2024
Publication date: February 20, 2025
Inventors: Vlad Fruchter, Keith Lowery, George Michael Uhler, Steven Woo, Chi-Ming (Philip) Yeung, Ronald Lee
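To make the described data path concrete, here is a minimal C++ sketch, under broad assumptions, of memory modules servicing commands from two servers without routing each command through the appliance's host controller. Every type and name below (MemoryAppliance, DataCommand) is hypothetical and not taken from the patent.

```cpp
// Illustrative sketch only (not the patented implementation): a memory appliance
// whose memory modules service read/write commands from two servers directly,
// bypassing any host-controller mediation on the command path.
#include <cstdint>
#include <cstring>
#include <iostream>
#include <vector>

struct DataCommand {
    enum class Op { Read, Write } op;
    std::size_t module;                 // which memory module to target
    std::size_t offset;                 // byte offset within the module
    std::vector<std::uint8_t> payload;  // data for writes, result for reads
};

class MemoryAppliance {
public:
    MemoryAppliance(std::size_t modules, std::size_t bytes_per_module)
        : modules_(modules, std::vector<std::uint8_t>(bytes_per_module)) {}

    // Direct path: the command is applied to the module itself; the bypass of the
    // host controller is the point being illustrated.
    void service(DataCommand& cmd) {
        auto& mod = modules_.at(cmd.module);
        if (cmd.op == DataCommand::Op::Write) {
            std::memcpy(mod.data() + cmd.offset, cmd.payload.data(), cmd.payload.size());
        } else {
            cmd.payload.assign(mod.begin() + cmd.offset,
                               mod.begin() + cmd.offset + cmd.payload.size());
        }
    }

private:
    std::vector<std::vector<std::uint8_t>> modules_;
};

int main() {
    MemoryAppliance ma(/*modules=*/4, /*bytes_per_module=*/1024);

    // "First server" writes, "second server" reads the same region directly.
    DataCommand write{DataCommand::Op::Write, 0, 64, {'h', 'i'}};
    ma.service(write);

    DataCommand read{DataCommand::Op::Read, 0, 64, std::vector<std::uint8_t>(2)};
    ma.service(read);
    std::cout << read.payload[0] << read.payload[1] << "\n";  // prints "hi"
}
```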
-
Publication number: 20250004906
Abstract: Techniques are described for adaptive application profiling, such as for adaptively collecting profiling runtime data for an application running on heterogeneous processing architectures. A first set of profiling data is collected from a first set of tracking circuitry during execution of an application by one or more processors. During the execution of the application and based on the first set of profiling data, a second set of tracking circuitry is determined for use in collecting additional profiling data for the application, the second set of tracking circuitry being distinct from the first set of tracking circuitry. A second set of runtime profiling data is collected from the second set of tracking circuitry.
Type: Application
Filed: June 29, 2023
Publication date: January 2, 2025
Inventor: Keith Lowery
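A rough, assumption-laden sketch of the adaptive loop the abstract describes: a first pass of counters is collected, and its results select a distinct second set of "tracking circuitry" for the next pass. The counter names and the selection heuristic are invented for illustration.

```cpp
// Hypothetical sketch of adaptive profiling: collect an initial counter set, then
// use those readings to choose a different counter set for the next pass.
#include <iostream>
#include <map>
#include <set>
#include <string>

using CounterSet = std::set<std::string>;
using Samples = std::map<std::string, long>;

// Stand-in for reading a set of hardware counters over one profiling interval.
Samples collect(const CounterSet& counters) {
    Samples s;
    long fake = 1000;
    for (const auto& c : counters) s[c] = (fake *= 3);  // fabricated readings for the demo
    return s;
}

// Pick a distinct second set of counters to track, based on the first pass.
CounterSet choose_next(const Samples& first_pass) {
    if (first_pass.at("cache_misses") * 10 > first_pass.at("instructions"))
        return {"l2_miss_latency", "dram_bandwidth"};  // drill into memory behavior
    return {"branch_mispredicts", "fp_ops"};           // otherwise look at compute
}

int main() {
    CounterSet first = {"instructions", "cache_misses"};
    Samples pass1 = collect(first);
    for (const auto& [name, value] : pass1)
        std::cout << "pass 1: " << name << " = " << value << "\n";

    CounterSet second = choose_next(pass1);  // adapt: different tracking circuitry
    Samples pass2 = collect(second);
    for (const auto& [name, value] : pass2)
        std::cout << "pass 2: " << name << " = " << value << "\n";
}
```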
-
Patent number: 12099454
Abstract: System and method for improved transferring of data involving memory device systems. A memory appliance (MA) comprising a plurality of memory modules is configured to store data within the plurality of memory modules and further configured to receive data commands from a first server and a second server coupled to the MA. The data commands may include direct memory access commands such that the MA can service the data commands while bypassing a host controller of the MA.
Type: Grant
Filed: December 20, 2021
Date of Patent: September 24, 2024
Assignee: Rambus Inc.
Inventors: Vlad Fruchter, Keith Lowery, George Michael Uhler, Steven Woo, Chi-Ming (Philip) Yeung, Ronald Lee
-
Patent number: 11860813
Abstract: A method of processing memory instructions including receiving a memory related command from a client system in communication with a memory appliance via a communication protocol, wherein the memory appliance comprises a processor, a memory unit controller and a plurality of memory devices coupled to said memory unit controller. The memory related command is translated by the processor into a plurality of primitive commands that are formatted to perform prescribed data manipulation operations on data of the plurality of memory devices stored in data structures. The plurality of primitive commands is executed on data stored in the memory devices to produce a result, wherein the executing is performed by the memory unit controller. A direct memory transfer of the result is established over the communication protocol to a network.
Type: Grant
Filed: September 23, 2021
Date of Patent: January 2, 2024
Assignee: Rambus Inc.
Inventors: Keith Lowery, Vlad Fruchter
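The translate-then-execute flow can be pictured roughly as below. This is an illustrative sketch only, with an invented primitive set rather than the patented command format: a high-level client command is expanded into primitives that a memory-unit controller could run against data already resident in the memory devices, yielding a single result to transfer back.

```cpp
// Sketch: translate one client command ("sum this linked list") into primitives,
// then have a controller-style loop execute them against in-memory data.
#include <cstdint>
#include <iostream>
#include <vector>

struct Node { std::int64_t value; int next; };  // next == -1 terminates the list

enum class Primitive { LoadValue, FollowNext, Accumulate };

// "Translation" step performed by the appliance processor.
std::vector<Primitive> translate_sum_command() {
    return {Primitive::LoadValue, Primitive::Accumulate, Primitive::FollowNext};
}

// "Execution" step performed by the memory unit controller: walk the structure,
// applying the primitive program until traversal terminates.
std::int64_t execute(const std::vector<Node>& memory, int head,
                     const std::vector<Primitive>& program) {
    std::int64_t acc = 0, loaded = 0;
    int cursor = head;
    while (cursor != -1) {
        for (Primitive p : program) {
            switch (p) {
                case Primitive::LoadValue:  loaded = memory[cursor].value; break;
                case Primitive::Accumulate: acc += loaded; break;
                case Primitive::FollowNext: cursor = memory[cursor].next; break;
            }
        }
    }
    return acc;  // in the patent's terms, this result would be DMA-transferred out
}

int main() {
    std::vector<Node> memory = {{5, 1}, {7, 2}, {30, -1}};
    std::cout << execute(memory, 0, translate_sum_command()) << "\n";  // 42
}
```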
-
Patent number: 11520633
Abstract: A method and system for thread aware, class aware, and topology aware memory allocations. Embodiments include a compiler configured to generate compiled code (e.g., for a runtime) that when executed allocates memory on a per class per thread basis that is system topology (e.g., for non-uniform memory architecture (NUMA)) aware. Embodiments can further include an executable configured to allocate a respective memory pool during runtime for each instance of a class for each thread. The memory pools are local to a respective processor, core, etc., where each thread executes.
Type: Grant
Filed: July 22, 2020
Date of Patent: December 6, 2022
Assignee: Rambus Inc.
Inventor: Keith Lowery
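A minimal sketch of per-class, per-thread pooling, assuming a faked node lookup in place of a real NUMA query (which would go through the OS or libnuma). It only illustrates the shape of the idea, not the patented compiler or allocator.

```cpp
// Sketch: one memory pool per (class, thread) pair, tagged with the node it is
// local to. Names and the node lookup are illustrative assumptions.
#include <cstddef>
#include <deque>
#include <functional>
#include <iostream>
#include <thread>

// Stand-in for querying which NUMA node the calling thread currently runs on.
int current_node() {
    return static_cast<int>(std::hash<std::thread::id>{}(std::this_thread::get_id()) % 2);
}

template <typename T>
class PerThreadClassPool {
public:
    // Returns the pool that belongs to this (class T, calling thread) pair.
    static PerThreadClassPool& instance() {
        thread_local PerThreadClassPool pool;  // one pool per thread and per T
        return pool;
    }

    // Bump-style allocation from the pool; std::deque keeps references stable.
    T* allocate() {
        storage_.emplace_back();
        return &storage_.back();
    }

    int node() const { return node_; }

private:
    PerThreadClassPool() : node_(current_node()) {}
    int node_;
    std::deque<T> storage_;
};

struct Particle { double x, y, z; };

int main() {
    auto worker = [] {
        auto& pool = PerThreadClassPool<Particle>::instance();
        Particle* p = pool.allocate();
        p->x = 1.0;
        std::cout << "thread-local Particle pool on node " << pool.node() << "\n";
    };
    std::thread a(worker), b(worker);
    a.join();
    b.join();
}
```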
-
Publication number: 20220188249
Abstract: System and method for improved transferring of data involving memory device systems. A memory appliance (MA) comprising a plurality of memory modules is configured to store data within the plurality of memory modules and further configured to receive data commands from a first server and a second server coupled to the MA. The data commands may include direct memory access commands such that the MA can service the data commands while bypassing a host controller of the MA.
Type: Application
Filed: December 20, 2021
Publication date: June 16, 2022
Inventors: Vlad Fruchter, Keith Lowery, George Michael Uhler, Steven Woo, Chi-Ming (Philip) Yeung, Ronald Lee
-
Patent number: 11210240
Abstract: System and method for improved transferring of data involving memory device systems. A memory appliance (MA) comprising a plurality of memory modules is configured to store data within the plurality of memory modules and further configured to receive data commands from a first server and a second server coupled to the MA. The data commands may include direct memory access commands such that the MA can service the data commands while bypassing a host controller of the MA.
Type: Grant
Filed: October 7, 2019
Date of Patent: December 28, 2021
Assignee: Rambus Inc.
Inventors: Vlad Fruchter, Keith Lowery, George Michael Uhler, Steven Woo, Chi-Ming (Philip) Yeung, Ronald Lee
-
Patent number: 11132328
Abstract: A method of processing memory instructions including receiving a memory related command from a client system in communication with a memory appliance via a communication protocol, wherein the memory appliance comprises a processor, a memory unit controller and a plurality of memory devices coupled to said memory unit controller. The memory related command is translated by the processor into a plurality of primitive commands that are formatted to perform prescribed data manipulation operations on data of the plurality of memory devices stored in data structures. The plurality of primitive commands is executed on data stored in the memory devices to produce a result, wherein the executing is performed by the memory unit controller. A direct memory transfer of the result is established over the communication protocol to a network.
Type: Grant
Filed: November 12, 2014
Date of Patent: September 28, 2021
Assignee: RAMBUS, INC.
Inventors: Keith Lowery, Vlad Fruchter
-
Patent number: 10725824
Abstract: A method and system for thread aware, class aware, and topology aware memory allocations. Embodiments include a compiler configured to generate compiled code (e.g., for a runtime) that when executed allocates memory on a per class per thread basis that is system topology (e.g., for non-uniform memory architecture (NUMA)) aware. Embodiments can further include an executable configured to allocate a respective memory pool during runtime for each instance of a class for each thread. The memory pools are local to a respective processor, core, etc., where each thread executes.
Type: Grant
Filed: July 5, 2016
Date of Patent: July 28, 2020
Assignee: Rambus Inc.
Inventor: Keith Lowery
-
Patent number: 10574734
Abstract: Methods and systems for managing data storage and compute resources. The data can be stored at multiple locations, allowing compute operations to be performed in a distributed manner in one or more locations. The cloud storage and cloud compute resources can be dynamically scaled based on the locations of the data and based on the cloud storage and/or cloud computing budgets. Dynamic reconfiguration of reconfigurable processors (e.g., FPGA) can further be used to accelerate compute operations.
Type: Grant
Filed: March 24, 2016
Date of Patent: February 25, 2020
Assignee: Rambus Inc.
Inventor: Keith Lowery
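One way to picture the placement decision, as a hedged sketch with invented sites, costs, and budget rules: run the computation where the data lives if the budget allows, otherwise fall back to the cheapest affordable site.

```cpp
// Sketch of a data-location- and budget-aware placement policy. The sites, cost
// model, and fallback rule are assumptions for illustration, not the patented method.
#include <iostream>
#include <map>
#include <string>

struct Site { double compute_cost_per_job; double remaining_budget; };

std::string place_job(const std::map<std::string, Site>& sites,
                      const std::string& data_location) {
    // Prefer running where the data already is, if the budget allows.
    auto local = sites.find(data_location);
    if (local != sites.end() &&
        local->second.remaining_budget >= local->second.compute_cost_per_job)
        return data_location;

    // Otherwise pick the cheapest site that can still afford the job.
    std::string best;
    double best_cost = 1e300;
    for (const auto& [name, s] : sites) {
        if (s.remaining_budget >= s.compute_cost_per_job && s.compute_cost_per_job < best_cost) {
            best = name;
            best_cost = s.compute_cost_per_job;
        }
    }
    return best.empty() ? "defer" : best;
}

int main() {
    std::map<std::string, Site> sites = {
        {"us-east", {2.0, 1.0}},        // data lives here, but budget is exhausted
        {"eu-west", {3.0, 50.0}},
        {"on-prem-fpga", {1.0, 10.0}},  // reconfigurable accelerator site
    };
    std::cout << place_job(sites, "us-east") << "\n";  // falls back to on-prem-fpga
}
```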
-
Patent number: 10437747
Abstract: System and method for improved transferring of data involving memory device systems. A memory appliance (MA) comprising a plurality of memory modules is configured to store data within the plurality of memory modules and further configured to receive data commands from a first server and a second server coupled to the MA. The data commands may include direct memory access commands such that the MA can service the data commands while bypassing a host controller of the MA.
Type: Grant
Filed: April 11, 2016
Date of Patent: October 8, 2019
Assignee: Rambus Inc.
Inventors: Vlad Fruchter, Keith Lowery, George Michael Uhler, Steven Woo, Chi-Ming (Philip) Yeung, Ronald Lee
-
Patent number: 10255104
Abstract: Embodiments described herein include a system, a computer-readable medium and a computer-implemented method for processing a system call (SYSCALL) request. The SYSCALL request from an invisible processing device is stored in a queueing mechanism that is accessible to a visible processing device, where the visible processing device is visible to an operating system and the invisible processing device is invisible to the operating system. The SYSCALL request is processed using the visible processing device, and the invisible processing device is notified using a notification mechanism that the SYSCALL request was processed.
Type: Grant
Filed: March 29, 2013
Date of Patent: April 9, 2019
Assignee: Advanced Micro Devices, Inc.
Inventors: Benjamin Thomas Sander, Michael Clair Houston, Keith Lowery, Newton Cheung
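A sketch of the proxying pattern under stated assumptions: the "invisible" device is modeled as a thread that can only enqueue SYSCALL requests, and the "visible" device drains the queue, performs the call, and notifies the requester. The queueing and notification details are illustrative, not the patent's.

```cpp
// Sketch: an OS-invisible device posts a SYSCALL request into a shared queue;
// an OS-visible device services it and signals completion back.
#include <condition_variable>
#include <iostream>
#include <mutex>
#include <queue>
#include <string>
#include <thread>

struct SyscallRequest {
    std::string name;
    bool done = false;
    std::mutex m;
    std::condition_variable cv;  // notification mechanism back to the requester
};

std::queue<SyscallRequest*> pending;
std::mutex queue_m;
std::condition_variable queue_cv;

void invisible_device() {  // cannot call the OS itself; it can only enqueue
    SyscallRequest req{"write"};
    { std::lock_guard<std::mutex> lk(queue_m); pending.push(&req); }
    queue_cv.notify_one();
    std::unique_lock<std::mutex> lk(req.m);  // wait for the visible device to finish
    req.cv.wait(lk, [&] { return req.done; });
    std::cout << "invisible device: syscall completed\n";
}

void visible_device() {  // visible to the OS, so it performs the call on behalf
    std::unique_lock<std::mutex> lk(queue_m);
    queue_cv.wait(lk, [] { return !pending.empty(); });
    SyscallRequest* req = pending.front();
    pending.pop();
    lk.unlock();
    std::cout << "visible device: performing " << req->name << " for the requester\n";
    { std::lock_guard<std::mutex> g(req->m); req->done = true; }
    req->cv.notify_one();
}

int main() {
    std::thread v(visible_device), i(invisible_device);
    i.join();
    v.join();
}
```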
-
Patent number: 10146575
Abstract: Methods, systems and computer-readable mediums for task scheduling on an accelerated processing device (APD) are provided. In an embodiment, a method comprises: enqueuing one or more tasks in a memory storage module based on the APD using a software-based enqueuing module; and dequeuing the one or more tasks from the memory storage module using a hardware-based command processor, wherein the command processor forwards the one or more tasks to the shader core.
Type: Grant
Filed: August 29, 2016
Date of Patent: December 4, 2018
Assignee: ADVANCED MICRO DEVICES, INC.
Inventors: Benjamin Thomas Sander, Michael Houston, Newton Cheung, Keith Lowery
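The software-enqueue / hardware-dequeue split can be modeled roughly like this; the packet format, ring-buffer layout, and names are assumptions, and a thread stands in for the APD's command processor.

```cpp
// Sketch: user code writes task packets into a ring buffer in memory, and a
// separate agent (standing in for the hardware command processor) pops them
// and hands them to a "shader core".
#include <array>
#include <atomic>
#include <cstddef>
#include <iostream>
#include <thread>

struct TaskPacket { int kernel_id; int grid_size; };

constexpr std::size_t kQueueSlots = 8;
std::array<TaskPacket, kQueueSlots> ring;
std::atomic<std::size_t> write_index{0}, read_index{0};

// Software-based enqueuing module: host code appends packets to the ring.
bool enqueue(const TaskPacket& p) {
    std::size_t w = write_index.load();
    if (w - read_index.load() == kQueueSlots) return false;  // queue full
    ring[w % kQueueSlots] = p;
    write_index.store(w + 1);
    return true;
}

// Stand-in for the hardware command processor: drains packets and dispatches them.
void command_processor(int packets_expected) {
    int seen = 0;
    while (seen < packets_expected) {
        std::size_t r = read_index.load();
        if (r == write_index.load()) { std::this_thread::yield(); continue; }
        TaskPacket p = ring[r % kQueueSlots];
        read_index.store(r + 1);
        std::cout << "dispatch kernel " << p.kernel_id
                  << " (grid " << p.grid_size << ") to shader core\n";
        ++seen;
    }
}

int main() {
    std::thread cp(command_processor, 3);
    for (int i = 0; i < 3; ++i) enqueue({i, 256 * (i + 1)});
    cp.join();
}
```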
-
Publication number: 20180097814
Abstract: A data center determines whether requested content is available at the data center. The content is available when the content is both present at the data center and current. When the requested content is available at the data center, the data center returns the requested content to the browser. When the requested content is locally unavailable at the data center, the requested content is retrieved from an origin server. When retrieval of the content is delayed, the request is prioritized and placed in a queue for handling by the origin server based on the priority of the request. A status page may be communicated to the browser to inform a user of the delay and provide alternate content and status information related to the request determined as a function of the request or the current state of the origin server.
Type: Application
Filed: December 3, 2017
Publication date: April 5, 2018
Applicant: Parallel Networks LLC
Inventors: Keith Lowery, David K. Davidson, Avinash C. Saxena
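A compact sketch of the request flow in the abstract, with invented names and a simple priority rule: serve from the data center when the cached copy is present and current, otherwise queue a prioritized fetch to the origin and return a status page.

```cpp
// Sketch: cache lookup with a freshness check, falling back to a priority queue
// of origin-server fetches when the content is locally unavailable.
#include <chrono>
#include <iostream>
#include <map>
#include <queue>
#include <string>

using Clock = std::chrono::steady_clock;

struct CachedEntry { std::string body; Clock::time_point expires; };

struct OriginFetch {
    int priority;  // higher value handled first
    std::string url;
    bool operator<(const OriginFetch& o) const { return priority < o.priority; }
};

std::map<std::string, CachedEntry> cache;
std::priority_queue<OriginFetch> origin_queue;

std::string handle_request(const std::string& url, int priority) {
    auto it = cache.find(url);
    if (it != cache.end() && it->second.expires > Clock::now())
        return it->second.body;           // present AND current: serve locally
    origin_queue.push({priority, url});   // locally unavailable: defer to origin
    return "<status page: your request has been queued>";
}

int main() {
    cache["/home"] = {"<html>home</html>", Clock::now() + std::chrono::minutes(5)};
    std::cout << handle_request("/home", 1) << "\n";    // cache hit
    std::cout << handle_request("/report", 7) << "\n";  // queued for the origin
    std::cout << "next origin fetch: " << origin_queue.top().url << "\n";
}
```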
-
Patent number: 9934194
Abstract: A memory appliance system is described that includes a memory unit comprising a memory unit controller and a plurality of memory devices. A reconfigurable memory structure is stored in the plurality of memory devices, wherein the memory structure comprises a plurality of variably sized containers. Each container of data includes metadata, payload, and relationship information that associates a corresponding container with one or more other containers stored in the memory structure. The controller is data structure aware such that the controller is configured to traverse the memory structure and perform operations on the memory structure based on the metadata and relationship information.
Type: Grant
Filed: November 12, 2014
Date of Patent: April 3, 2018
Assignee: Rambus Inc.
Inventors: Keith Lowery, Vlad Fruchter
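A hypothetical rendering of the container layout: each container carries metadata, a variably sized payload, and the ids of related containers, so a data-structure-aware controller can traverse relationships without host involvement. Field names and the traversal are illustrative assumptions.

```cpp
// Sketch: containers with metadata, payload, and relationship links, plus a
// controller-side traversal that filters on metadata while following the links.
#include <cstdint>
#include <iostream>
#include <map>
#include <set>
#include <string>
#include <vector>

struct Container {
    std::map<std::string, std::string> metadata;  // e.g. {"type", "row"}
    std::vector<std::uint8_t> payload;            // variably sized
    std::vector<int> related;                     // ids of associated containers
};

// Walk relationship links from a starting container and sum the payload bytes of
// every reachable container whose metadata matches the requested type.
std::size_t total_payload_of_type(const std::map<int, Container>& store,
                                  int start, const std::string& type) {
    std::size_t total = 0;
    std::set<int> visited;
    std::vector<int> stack{start};
    while (!stack.empty()) {
        int id = stack.back();
        stack.pop_back();
        if (!visited.insert(id).second) continue;
        const Container& c = store.at(id);
        auto it = c.metadata.find("type");
        if (it != c.metadata.end() && it->second == type) total += c.payload.size();
        for (int r : c.related) stack.push_back(r);
    }
    return total;
}

int main() {
    std::map<int, Container> store;
    store[0] = {{{"type", "index"}}, std::vector<std::uint8_t>(16), {1, 2}};
    store[1] = {{{"type", "row"}},   std::vector<std::uint8_t>(100), {}};
    store[2] = {{{"type", "row"}},   std::vector<std::uint8_t>(250), {}};
    std::cout << total_payload_of_type(store, 0, "row") << " payload bytes\n";  // 350
}
```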
-
Patent number: 9880971
Abstract: A memory appliance system is described and includes a processor coupled to one or more communication channels with a command interface, wherein the processor is configured for communicating commands over the communication channels. A plurality of Smart Memory Cubes (SMCs) is coupled to the processor through the communication channels. Each of the SMCs includes a controller that is programmable, and a plurality of memory devices. The controller is configured to respond to commands from the command interface to access content stored in one or more of the plurality of memory devices and to perform data operations on content accessed from the plurality of memory devices.
Type: Grant
Filed: November 12, 2014
Date of Patent: January 30, 2018
Assignee: Rambus Inc.
Inventors: Keith Lowery, Vlad Fruchter, Chi-Ming Yeung
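A loose sketch, not the patented design: each Smart Memory Cube pairs memory devices with a programmable controller, the host sends only a named command over the channel, and the registered operation runs next to the data so only the small result crosses back.

```cpp
// Sketch: a programmable controller inside each cube exposes named data
// operations; the host's command interface invokes them by name.
#include <cstdint>
#include <functional>
#include <iostream>
#include <map>
#include <string>
#include <vector>

class SmartMemoryCube {
public:
    explicit SmartMemoryCube(std::size_t bytes) : devices_(bytes, 0) {}

    // "Programming" the controller: install a named data operation.
    void program(const std::string& op,
                 std::function<std::int64_t(std::vector<std::uint8_t>&)> fn) {
        ops_[op] = std::move(fn);
    }

    // Command interface: the host asks for an operation by name; the bulk data
    // never leaves the cube, only the small result does.
    std::int64_t command(const std::string& op) { return ops_.at(op)(devices_); }

    std::vector<std::uint8_t>& devices() { return devices_; }

private:
    std::vector<std::uint8_t> devices_;
    std::map<std::string, std::function<std::int64_t(std::vector<std::uint8_t>&)>> ops_;
};

int main() {
    SmartMemoryCube smc(1024);
    for (std::size_t i = 0; i < 10; ++i) smc.devices()[i] = static_cast<std::uint8_t>(i);

    smc.program("sum", [](std::vector<std::uint8_t>& mem) {
        std::int64_t s = 0;
        for (auto b : mem) s += b;
        return s;
    });
    std::cout << "SMC-side sum = " << smc.command("sum") << "\n";  // 45
}
```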
-
Patent number: 9665533
Abstract: A memory appliance system is described and includes a plurality of memory devices storing data in a plurality of containers and a controller. The containers include metadata, relationship information associating a respective container with related containers, and a payload. The controller is configured to perform data operations on the payload of one of the containers based on the relationship information associating the respective container with related containers and on the payload of the related containers.
Type: Grant
Filed: November 12, 2014
Date of Patent: May 30, 2017
Assignee: Rambus Inc.
Inventors: Keith Lowery, Vlad Fruchter
-
Patent number: 9645854
Abstract: A method, system and article of manufacture for balancing a workload on heterogeneous processing devices. The method comprises accessing a memory storage of a processor of one type by a dequeuing entity associated with a processor of a different type, identifying a task from a plurality of tasks within the memory that can be processed by the processor of the different type, synchronizing a plurality of dequeuing entities capable of accessing the memory storage, and dequeuing the task from the memory storage.
Type: Grant
Filed: November 2, 2011
Date of Patent: May 9, 2017
Assignee: Advanced Micro Devices, Inc.
Inventors: Benjamin Thomas Sander, Michael Houston, Newton Cheung, Keith Lowery
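A hedged model of the balancing step: tasks in a shared queue carry tags saying which device kinds can run them, and a dequeuing entity for another device type scans the queue under a lock and takes only the tasks it can process. Tags, locking, and task names are invented for the sketch.

```cpp
// Sketch: CPU and GPU dequeuing entities share one task queue and synchronize
// with a mutex; each steals only the tasks its device type can run.
#include <deque>
#include <iostream>
#include <mutex>
#include <string>
#include <thread>

struct Task { std::string name; bool gpu_capable; bool cpu_capable; };

std::deque<Task> shared_queue;  // memory written by the producing device
std::mutex queue_mutex;         // synchronizes all dequeuing entities

// Dequeuing entity for a given device type: takes the first runnable task, if any.
bool try_dequeue(bool is_gpu, Task& out) {
    std::lock_guard<std::mutex> lock(queue_mutex);
    for (auto it = shared_queue.begin(); it != shared_queue.end(); ++it) {
        if ((is_gpu && it->gpu_capable) || (!is_gpu && it->cpu_capable)) {
            out = *it;
            shared_queue.erase(it);
            return true;
        }
    }
    return false;
}

int main() {
    shared_queue = {{"matrix-multiply", true, true},
                    {"file-io", false, true},
                    {"image-filter", true, false}};

    auto worker = [](bool is_gpu, const char* label) {
        Task t;
        while (try_dequeue(is_gpu, t))
            std::cout << label << " runs " << t.name << "\n";
    };
    std::thread gpu(worker, true, "GPU"), cpu(worker, false, "CPU");
    gpu.join();
    cpu.join();
}
```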
-
Publication number: 20170034171
Abstract: A data center determines whether requested content is available at the data center. The content is available when the content is both present at the data center and current. When the requested content is available at the data center, the data center returns the requested content to the browser. When the requested content is locally unavailable at the data center, the requested content is retrieved from an origin server. When retrieval of the content is delayed, the request is prioritized and placed in a queue for handling by the origin server based on the priority of the request. A status page may be communicated to the browser to inform a user of the delay and provide alternate content and status information related to the request determined as a function of the request or the current state of the origin server.
Type: Application
Filed: August 30, 2015
Publication date: February 2, 2017
Applicant: Parallel Networks LLC
Inventors: Keith Lowery, David K. Davidson, Avinash C. Saxena
-
Publication number: 20160371116
Abstract: Methods, systems and computer-readable mediums for task scheduling on an accelerated processing device (APD) are provided. In an embodiment, a method comprises: enqueuing one or more tasks in a memory storage module based on the APD using a software-based enqueuing module; and dequeuing the one or more tasks from the memory storage module using a hardware-based command processor, wherein the command processor forwards the one or more tasks to the shader core.
Type: Application
Filed: August 29, 2016
Publication date: December 22, 2016
Applicant: Advanced Micro Devices, Inc.
Inventors: Benjamin Thomas Sander, Michael Houston, Newton Cheung, Keith Lowery