Patents by Inventor Erik Schlanger

Erik Schlanger has filed for patents to protect the following inventions. This listing includes pending patent applications as well as patents already granted by the United States Patent and Trademark Office (USPTO).

  • Patent number: 10055358
    Abstract: Techniques are described herein for efficient movement of data from a source memory to a destination memory. In an embodiment, in response to a particular memory location being pushed into a first register within a first register space, a first set of electronic circuits accesses a descriptor stored at the particular memory location. The descriptor indicates a width of a column of tabular data, a number of rows of tabular data, and one or more tabular data manipulation operations to perform on the column of tabular data. The descriptor also indicates a source memory location for accessing the tabular data and a destination memory location for storing the data manipulation result of performing the one or more data manipulation operations on the tabular data.
    Type: Grant
    Filed: March 18, 2016
    Date of Patent: August 21, 2018
    Assignee: Oracle International Corporation
    Inventors: David A. Brown, Rishabh Jain, Sam Idicula, Erik Schlanger, David Joseph Hawkins
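    Illustrative sketch: a minimal C picture of how such a descriptor could be laid out in software; every field name, width, and operation code below is an illustrative assumption, not the format claimed in the patent.

      /* Hypothetical layout of a data-movement descriptor. */
      #include <stdint.h>

      typedef enum {
          DM_OP_NONE   = 0,
          DM_OP_GATHER = 1 << 0,   /* keep only rows selected by a bit vector */
          DM_OP_RLE    = 1 << 1,   /* run-length decode the column in flight  */
      } dm_ops_t;

      typedef struct {
          uint64_t src_addr;       /* source memory location of the column        */
          uint64_t dst_addr;       /* destination for the manipulated result      */
          uint32_t column_width;   /* width of one column element, in bytes       */
          uint32_t row_count;      /* number of rows of tabular data to move      */
          uint32_t ops;            /* bitmask of dm_ops_t manipulation operations */
      } dm_descriptor_t;

      /* Software builds a descriptor in memory and pushes its address into a
       * control register; the electronic circuits then walk these fields. */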
  • Publication number: 20180150407
    Abstract: Techniques provide for hardware accelerated data movement between main memory and an on-chip data movement system that comprises multiple core processors that operate on the tabular data. The tabular data is moved to or from the scratch pad memories of the core processors. While the data is in flight, it may be manipulated by data manipulation operations. The data movement system includes multiple data movement engines, each dedicated to moving and transforming tabular data from main memory to a subset of the core processors. Each data movement engine is coupled to an internal memory that stores data (e.g., a bit vector) that dictates how data manipulation operations are performed on tabular data moved from main memory to the memories of a core processor, or to and from other memories. The internal memory of each data movement engine is private to that engine.
    Type: Application
    Filed: November 28, 2016
    Publication date: May 31, 2018
    Inventors: David A. Brown, Sam Idicula, Erik Schlanger, Rishabh Jain, Michael Duller
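    Illustrative sketch: a rough software model of the gather behavior that a per-engine bit vector could dictate while a column is in flight; the function name, element size, and bit layout are assumptions for illustration only.

      /* Copy only the rows whose bit is set in the bit vector. */
      #include <stddef.h>
      #include <stdint.h>

      static size_t gather_rows(const uint32_t *src, uint32_t *dst,
                                const uint8_t *bit_vector, size_t row_count)
      {
          size_t kept = 0;
          for (size_t row = 0; row < row_count; row++) {
              /* Bit r of the vector decides whether row r survives the move. */
              if (bit_vector[row / 8] & (1u << (row % 8)))
                  dst[kept++] = src[row];
          }
          return kept;   /* rows delivered to the core's scratch pad memory */
      }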
  • Publication number: 20180150542
    Abstract: Techniques provide for hardware accelerated data movement between main memory and an on-chip data movement system that comprises multiple core processors that operate on the tabular data. The tabular data is moved to or from the scratch pad memories of the core processors. While the data is in flight, it may be manipulated by data manipulation operations. The data movement system includes multiple data movement engines, each dedicated to moving and transforming tabular data from main memory to a subset of the core processors. Each data movement engine is coupled to an internal memory that stores data (e.g., a bit vector) that dictates how data manipulation operations are performed on tabular data moved from main memory to the memories of a core processor, or to and from other memories. The internal memory of each data movement engine is private to that engine.
    Type: Application
    Filed: November 28, 2016
    Publication date: May 31, 2018
    Inventors: David A. Brown, Sam Idicula, Erik Schlanger, Rishabh Jain, Michael Duller, Christopher Joseph Daniels, David Joseph Hawkins
  • Publication number: 20180107482
    Abstract: Circuitry may be configured to identify a particular element position of a bit vector stored in a register, where a value of the element occupying the particular element position matches a first predetermined value, and determine an address value dependent upon the particular element position of the bit vector and a base address. The circuitry may be further configured to load data from a memory dependent upon the address value.
    Type: Application
    Filed: October 18, 2016
    Publication date: April 19, 2018
    Inventors: Erik Schlanger, Charles Roth, Daniel Fowler
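    Illustrative sketch: a software analogue of the described circuitry, assuming the predetermined value is a set bit, the bit vector fits in 64 bits, and the address is the base plus the element position times a stride; all of these are assumptions.

      #include <stddef.h>
      #include <stdint.h>

      static uint32_t load_via_bit_vector(uint64_t bit_vector,
                                          const uint32_t *base, size_t stride)
      {
          if (bit_vector == 0)
              return 0;                                  /* no matching element */
          /* Position of the first element whose value matches (first set bit);
           * __builtin_ctzll is a GCC/Clang builtin that counts trailing zeros. */
          unsigned pos = (unsigned)__builtin_ctzll(bit_vector);
          /* Address value derived from the element position and a base address. */
          return base[pos * stride];
      }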
  • Publication number: 20180067889
    Abstract: Techniques are provided for exchanging dedicated hardware signals to manage a first-in first-out (FIFO). In an embodiment, a first processor initiates content transfer into the FIFO. The first processor activates a first hardware signal that is reserved for indicating that content resides within the FIFO. A second processor activates a second hardware signal that is reserved for indicating that content is accepted. The second hardware signal causes the first hardware signal to be deactivated. This exchange of hardware signals demarcates a FIFO transaction, which is mediated by interface circuitry of the FIFO.
    Type: Application
    Filed: September 6, 2016
    Publication date: March 8, 2018
    Inventors: David A. Brown, Daniel Fowler, Rishabh Jain, Erik Schlanger, Michael Duller
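    Illustrative sketch: a toy software model of the two-signal handshake, with the dedicated hardware signals represented as volatile flags; the struct and function names are assumptions.

      #include <stdint.h>

      typedef struct {
          volatile int content_present;   /* signal reserved for "content resides in the FIFO" */
          volatile int content_accepted;  /* signal reserved for "content is accepted"         */
          uint32_t data;                  /* stand-in for the FIFO payload                     */
      } fifo_link_t;

      static void producer_push(fifo_link_t *f, uint32_t value)
      {
          f->data = value;            /* first processor initiates content transfer */
          f->content_present = 1;     /* ...and activates the first hardware signal */
      }

      static void consumer_pop(fifo_link_t *f, uint32_t *out)
      {
          while (!f->content_present)
              ;                       /* wait for the producer's signal             */
          *out = f->data;
          f->content_accepted = 1;    /* second signal: content accepted; in        */
          f->content_present  = 0;    /* hardware this deactivates the first signal */
      }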
  • Publication number: 20180004581
    Abstract: Techniques are provided for improving the performance of a constellation of coprocessors by hardware support for asynchronous events. In an embodiment, a coprocessor receives an event descriptor that identifies an event and a logic. The coprocessor processes the event descriptor to configure the coprocessor to detect whether the event has been received. Eventually a device, such as a CPU or another coprocessor, sends the event. The coprocessor detects that it has received the event. In response to detecting the event, the coprocessor performs the logic.
    Type: Application
    Filed: June 29, 2016
    Publication date: January 4, 2018
    Inventors: David A. Brown, Rishabh Jain, Michael Duller, Erik Schlanger
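    Illustrative sketch: a minimal model of an event descriptor that pairs an event identifier with the logic to perform once that event is detected; all names are assumptions.

      #include <stdint.h>

      typedef struct {
          uint32_t event_id;          /* which event the coprocessor should detect */
          void (*logic)(uint32_t);    /* the logic to perform when it arrives      */
      } event_descriptor_t;

      /* The coprocessor processes the descriptor up front, then performs the
       * associated logic whenever a matching event is received. */
      static void on_event_received(const event_descriptor_t *d, uint32_t event)
      {
          if (event == d->event_id)
              d->logic(event);
      }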
  • Publication number: 20170270052
    Abstract: Techniques are described herein for efficient movement of data from a source memory to a destination memory. In an embodiment, in response to a particular memory location being pushed into a first register within a first register space, a first set of electronic circuits accesses a descriptor stored at the particular memory location. The descriptor indicates a width of a column of tabular data, a number of rows of tabular data, and one or more tabular data manipulation operations to perform on the column of tabular data. The descriptor also indicates a source memory location for accessing the tabular data and a destination memory location for storing the data manipulation result of performing the one or more data manipulation operations on the tabular data.
    Type: Application
    Filed: March 18, 2016
    Publication date: September 21, 2017
    Inventors: David A. Brown, Rishabh Jain, Michael Duller, Sam Idicula, Erik Schlanger, David Joseph Hawkins
  • Publication number: 20170270053
    Abstract: Techniques are described herein for efficient movement of data from a source memory to a destination memory. In an embodiment, in response to a particular memory location being pushed into a first register within a first register space, a first set of electronic circuits accesses a descriptor stored at the particular memory location. The descriptor indicates a width of a column of tabular data, a number of rows of tabular data, and one or more tabular data manipulation operations to perform on the column of tabular data. The descriptor also indicates a source memory location for accessing the tabular data and a destination memory location for storing the data manipulation result of performing the one or more data manipulation operations on the tabular data.
    Type: Application
    Filed: March 18, 2016
    Publication date: September 21, 2017
    Inventors: David A. Brown, Rishabh Jain, Sam Idicula, Erik Schlanger, David Joseph Hawkins
  • Patent number: 9557997
    Abstract: Techniques are described herein for using configurable logic constructs in a loop buffer. In an embodiment, a configurable hardware block is programmed based on one or more target functions within a loop. The configurable hardware block is associated with a plurality of registers, including a loopcount register and a first output register. For each iteration of the loop, a counter value in the loopcount register is updated and a target value in the first output register is updated using the programmed configurable hardware block. For each iteration of the loop, a set of one or more instructions may be fetched from the instruction buffer and executed based on the updated target value in the first output register.
    Type: Grant
    Filed: July 22, 2013
    Date of Patent: January 31, 2017
    Assignee: Oracle International Corporation
    Inventors: Aarti Basant, Brian Gold, Erik Schlanger
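    Illustrative sketch: a software model in which the programmed configurable block is represented as a function that recomputes the target value from the loop counter on every iteration; the strided-address example and all names are assumptions.

      #include <stdint.h>

      typedef uint64_t (*configured_block_fn)(uint64_t loopcount);

      /* Example target function the block might be programmed with. */
      static uint64_t strided_address(uint64_t loopcount)
      {
          const uint64_t base = 0x1000, stride = 64;
          return base + loopcount * stride;
      }

      static void run_loop(configured_block_fn block, uint64_t iterations,
                           void (*execute_body)(uint64_t target))
      {
          /* e.g., run_loop(strided_address, n, body); */
          for (uint64_t loopcount = 0; loopcount < iterations; loopcount++) {
              uint64_t target = block(loopcount);   /* "first output register"    */
              execute_body(target);                 /* fetched instructions use it */
          }
      }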
  • Publication number: 20150039874
    Abstract: Boot code read request commands issued by an on-board processor of a system on a chip (SoC) are translated from a bus protocol (e.g., the advanced high-performance bus (AHB) protocol) into a sequence of commands understandable by a serial interface of the SoC, in order to read boot code from an off-board (e.g., flash or other non-volatile) memory device. The serial interface of the memory device may have a relatively low pin count (e.g., 5 pins), and the boot code on the memory device may be modified after tape-out of the SoC without necessitating a subsequent tape-out of the SoC.
    Type: Application
    Filed: July 31, 2013
    Publication date: February 5, 2015
    Applicant: Oracle International Corporation
    Inventors: Erik Schlanger, Eric Devolder, Ashraf Ahmed
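    Illustrative sketch: one way the translation step could look in software, turning an AHB boot read address into a byte sequence for a serial flash; the 0x03 read opcode and 24-bit addressing are common SPI NOR conventions but are assumptions here, not taken from the publication.

      #include <stddef.h>
      #include <stdint.h>

      static size_t ahb_read_to_serial_cmd(uint32_t ahb_addr, uint8_t cmd_out[4])
      {
          cmd_out[0] = 0x03;                       /* serial "read data" opcode      */
          cmd_out[1] = (uint8_t)(ahb_addr >> 16);  /* 24-bit address, MSB first      */
          cmd_out[2] = (uint8_t)(ahb_addr >> 8);
          cmd_out[3] = (uint8_t)(ahb_addr);
          return 4;                                /* bytes shifted out over the     */
                                                   /* low-pin-count serial interface */
      }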
  • Publication number: 20150026434
    Abstract: Techniques are described herein for using configurable logic constructs in a loop buffer. In an embodiment, a configurable hardware block is programmed based on one or more target functions within a loop. The configurable hardware block is associated with a plurality of registers, including a loopcount register and a first output register. For each iteration of the loop, a counter value in the loopcount register is updated and a target value in the first output register is updated using the programmed configurable hardware block. For each iteration of the loop, a set of one or more instructions may be fetched from the instruction buffer and executed based on the updated target value in the first output register.
    Type: Application
    Filed: July 22, 2013
    Publication date: January 22, 2015
    Applicant: Oracle International Corporation
    Inventors: Aarti Basant, Brian Gold, Erik Schlanger
  • Patent number: 8923384
    Abstract: In one form, a video processing device (150) includes a memory (110, 130) and a plurality of staged macroblock processing engines (112, 114, 116). The memory (110, 130) is operable to store partially decoded video data decoded from a stream of encoded video data. The plurality of staged macroblock processing engines (112, 114, 116) is coupled to the memory (110, 130) and is responsive to a request to process the partially decoded video data to generate a plurality of macroblocks of decoded video data. In another form, a first macroblock of decoded video data having a first location (426) within a first row (408) of a video frame (400) is generated, and a second macroblock of decoded video data having a second location (424) within a second row (410) of the video frame (400) is generated during the generating of the first macroblock.
    Type: Grant
    Filed: December 31, 2007
    Date of Patent: December 30, 2014
    Assignee: NetLogic Microsystems, Inc.
    Inventors: Erik Schlanger, Rens Ross
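    Illustrative sketch: a wavefront-style schedule in which a macroblock in the second row is generated while a macroblock in the first row is still being generated; the two-column stagger and all names are assumptions.

      #include <stdio.h>

      #define MB_COLS 8
      #define MB_ROWS 4
      #define STAGGER 2   /* columns a row trails the row above it */

      static void decode_macroblock(int row, int col)
      {
          printf("decode MB(%d,%d)\n", row, col);
      }

      int main(void)
      {
          /* At step t, engine r works on column t - r*STAGGER of row r,
           * so lower rows proceed concurrently with the rows above them. */
          for (int t = 0; t < MB_COLS + (MB_ROWS - 1) * STAGGER; t++)
              for (int r = 0; r < MB_ROWS; r++) {
                  int c = t - r * STAGGER;
                  if (c >= 0 && c < MB_COLS)
                      decode_macroblock(r, c);
              }
          return 0;
      }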
  • Patent number: 8576924
    Abstract: A video processing apparatus and methodology are implemented as a combination of a processor and a video decoding hardware block to decode video data by performing piecewise processing of overlap smoothing and in-loop deblocking in a macroblock-based fashion. With this approach, a smaller on-board memory may be used for the in-loop filtering operations of the video decoding hardware block. By pipelining the piecewise processing operations, latency in the filtering operations is hidden and the filtering output is smoothed, thereby avoiding the need for bursts of fetching and storing of blocks.
    Type: Grant
    Filed: January 25, 2005
    Date of Patent: November 5, 2013
    Assignee: Advanced Micro Devices, Inc.
    Inventors: Bill Kwan, Erik Schlanger, Casey King, Raquel Rozas
  • Patent number: 8462841
    Abstract: A video processing device (150) includes a bitstream accelerator module (106) and a video processing engine (108). The bitstream accelerator module (106) has an input for receiving a stream of encoded video data, and an output adapted to be coupled to a memory (112) for storing partially decoded video data. The bitstream accelerator module (106) partially decodes the stream of encoded video data according to a selected one of a plurality of video formats to provide the partially decoded video data. The video processing engine (108) has an input adapted to be coupled to the memory (112) for reading the partially decoded video data, and an output for providing decoded video data.
    Type: Grant
    Filed: December 31, 2007
    Date of Patent: June 11, 2013
    Assignee: NetLogic Microsystems, Inc.
    Inventors: Erik Schlanger, Brendan Donahe, Eric Devolder, Rens Ross, Sandip Ladhani, Eric Swartzendruber
  • Patent number: 7965773
    Abstract: A video processing apparatus and methodology use a combination of a processor and a video decoding hardware block to decode video data by using a reference block cache memory to perform motion compensation decode operations in the video decoding hardware block. To improve the cache hit rate, each memory access for the required reference block(s) also fetches one or more additional reference blocks that can raise the cache hit rate for future motion compensation operations. Speculative fetch control logic selects the additional reference blocks by using a frequency history table that accumulates comparisons of motion vector information for a current motion compensation block against motion vector information from previously processed motion compensation blocks.
    Type: Grant
    Filed: June 30, 2005
    Date of Patent: June 21, 2011
    Assignee: Advanced Micro Devices, Inc.
    Inventors: Erik Schlanger, Nathan Sheeley, Eric Swartzendruber, Bill Kwan, Chin-Chia Kuo
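    Illustrative sketch: a toy model of the frequency history idea, counting how often each quantized motion vector delta is seen and prefetching the reference block predicted by the most frequent delta; the table size, quantization, and names are assumptions.

      #include <stdlib.h>

      #define HIST_BINS 16

      static unsigned freq_history[HIST_BINS];     /* per-delta hit counts */

      static int quantize_delta(int dx, int dy)    /* map a motion vector delta to a bin */
      {
          return (abs(dx) + abs(dy)) % HIST_BINS;
      }

      static int most_frequent_bin(void)           /* prediction used for speculation */
      {
          int best = 0;
          for (int b = 1; b < HIST_BINS; b++)
              if (freq_history[b] > freq_history[best])
                  best = b;
          return best;
      }

      static void record_motion_vector(int dx, int dy)
      {
          freq_history[quantize_delta(dx, dy)]++;
          /* A fetch for the required reference block would also fetch the block
           * implied by most_frequent_bin(), hoping to raise the cache hit rate. */
      }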
  • Patent number: 7792385
    Abstract: A video processing apparatus and methodology are implemented as a combination of a processor and a video decoding hardware block to decode video data by providing the video decoding block with an in-loop filter and a scratch pad memory, so that the in-loop filter may efficiently perform piecewise processing of overlap smoothing and in-loop deblocking in a macroblock-based fashion, which is much more efficient than a frame-based approach.
    Type: Grant
    Filed: January 25, 2005
    Date of Patent: September 7, 2010
    Assignee: GlobalFoundries Inc.
    Inventors: Bill Kwan, Erik Schlanger, Raquel Rozas, Casey King
  • Publication number: 20090168899
    Abstract: A video processing device (150) includes a bitstream accelerator module (106) and a video processing engine (108). The bitstream accelerator module (106) has an input for receiving a stream of encoded video data, and an output adapted to be coupled to a memory (112) for storing partially decoded video data. The bitstream accelerator module (106) partially decodes the stream of encoded video data according to a selected one of a plurality of video formats to provide the partially decoded video data. The video processing engine (108) has an input adapted to be coupled to the memory (112) for reading the partially decoded video data, and an output for providing decoded video data.
    Type: Application
    Filed: December 31, 2007
    Publication date: July 2, 2009
    Applicant: Raza Microelectronics, Inc.
    Inventors: Erik Schlanger, Brendan Donahe, Eric DeVolder, Rens Ross, Sandip Ladhani, Eric Swartzendruber
  • Publication number: 20090168893
    Abstract: In one form, a video processing device (150) includes a memory (110, 130) and a plurality of staged macroblock processing engines (112, 114, 116). The memory (110, 130) is operable to store partially decoded video data decoded from a stream of encoded video data. The plurality of staged macroblock processing engines (112, 114, 116) is coupled to the memory (110, 130) and is responsive to a request to process the partially decoded video data to generate a plurality of macroblocks of decoded video data. In another form, a first macroblock of decoded video data having a first location (426) within a first row (408) of a video frame (400) is generated, and a second macroblock of decoded video data having a second location (424) within a second row (410) of the video frame (400) is generated during the generating of the first macroblock.
    Type: Application
    Filed: December 31, 2007
    Publication date: July 2, 2009
    Applicant: Raza Microelectronics, Inc.
    Inventors: Erik Schlanger, Rens Ross
  • Publication number: 20060165164
    Abstract: A video processing apparatus and methodology are implemented as a combination of a processor and a video decoding hardware block to decode video data by providing the video decoding block with an in-loop filter and a scratch pad memory, so that the in-loop filter may efficiently perform piecewise processing of overlap smoothing and in-loop deblocking in a macroblock-based fashion, which is much more efficient than a frame-based approach.
    Type: Application
    Filed: January 25, 2005
    Publication date: July 27, 2006
    Applicant: Advanced Micro Devices, Inc.
    Inventors: Bill Kwan, Erik Schlanger, Raquel Rozas, Casey King
  • Publication number: 20060165181
    Abstract: A video processing apparatus and methodology are implemented as a combination of a processor and a video decoding hardware block to decode video data by performing piecewise processing of overlap smoothing and in-loop deblocking in a macroblock-based fashion. With this approach, a smaller on-board memory may be used for the in-loop filtering operations of the video decoding hardware block. By pipelining the piecewise processing operations, latency in the filtering operations is hidden and the filtering output is smoothed, thereby avoiding the need for bursts of fetching and storing of blocks.
    Type: Application
    Filed: January 25, 2005
    Publication date: July 27, 2006
    Applicant: Advanced Micro Devices, Inc.
    Inventors: Bill Kwan, Erik Schlanger, Casey King, Raquel Rozas