Patents by Inventor Peter J Wilson

Peter J Wilson has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Patent number: 10324723
    Abstract: Disclosed is a digital processor comprising an instruction memory having a first input, a second input, a first output, and a second output. A program counter register is in communication with the first input of the instruction memory. The program counter register is configured to store an address of an instruction to be fetched. A data pointer register is in communication with the second input of the instruction memory. The data pointer register is configured to store an address of a data value in the instruction memory. An instruction buffer is in communication with the first output of the instruction memory. The instruction buffer is arranged to receive an instruction according to a value at the program counter register. A data buffer is in communication with the second output of the instruction memory. The data buffer is arranged to receive a data value according to a value at the data pointer register.
    Type: Grant
    Filed: July 2, 2014
    Date of Patent: June 18, 2019
    Assignee: NXP USA, Inc.
    Inventors: Peter J. Wilson, Brian C. Kahne, Jeffrey W. Scott
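    As a rough illustration of the dual-output instruction memory described in the abstract above, the C sketch below models a single memory array read through two independent addresses in the same cycle: the program counter fills the instruction buffer while the data pointer fills the data buffer. The struct fields and the fetch_cycle helper are hypothetical names chosen only for this sketch, not taken from the patent.
    ```c
    #include <stdint.h>

    #define IMEM_WORDS 1024

    /* Hypothetical cycle-level model of the dual-output instruction memory:
     * one array, two read ports (program counter and data pointer). */
    typedef struct {
        uint32_t imem[IMEM_WORDS]; /* instruction memory holding code and embedded data */
        uint32_t pc;               /* program counter register (first input)   */
        uint32_t dptr;             /* data pointer register    (second input)  */
        uint32_t ibuf;             /* instruction buffer       (first output)  */
        uint32_t dbuf;             /* data buffer              (second output) */
    } dual_output_core;

    /* One fetch cycle: both outputs are filled independently from the same memory. */
    void fetch_cycle(dual_output_core *c)
    {
        c->ibuf = c->imem[c->pc   % IMEM_WORDS]; /* instruction selected by the PC          */
        c->dbuf = c->imem[c->dptr % IMEM_WORDS]; /* data value selected by the data pointer */
        c->pc++;                                 /* sequential fetch; branches not modeled  */
    }
    ```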
  • Patent number: 10235225
    Abstract: A method of handling requests between contexts in a processing system includes, in a current context of a source processing system element (PSE): executing a send-and-rendezvous instruction that specifies a destination PSE, a queue address in the destination PSE, a set of source registers, and a set of receive registers; and sending a send-and-rendezvous message (SRM) to the destination PSE, wherein the SRM includes an address of the destination PSE, a destination queue address, a source PSE address, and an identifier of the current context in the source PSE.
    Type: Grant
    Filed: August 1, 2017
    Date of Patent: March 19, 2019
    Assignee: NXP USA, Inc.
    Inventors: Peter J. Wilson, Brian C. Kahne
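    A rough C sketch of the send-and-rendezvous message (SRM) fields named in the abstract above. The struct layout, field widths, and the send_and_rendezvous and post_message helpers are assumptions made for illustration, not the patented encoding.
    ```c
    #include <stdint.h>
    #include <stdio.h>

    /* Hypothetical encoding of a send-and-rendezvous message (SRM). */
    typedef struct {
        uint16_t dest_pse;    /* address of the destination PSE                      */
        uint16_t dest_queue;  /* queue address within the destination PSE            */
        uint16_t src_pse;     /* address of the source PSE                           */
        uint16_t src_context; /* identifier of the current context in the source PSE */
        uint32_t payload[4];  /* values taken from the set of source registers       */
    } srm_t;

    /* Stub transport: a real system would post the SRM on the on-chip interconnect. */
    static void post_message(uint16_t dest_pse, const srm_t *msg)
    {
        (void)msg;
        printf("SRM posted to PSE %u\n", dest_pse);
    }

    /* Build an SRM from the operands of the send-and-rendezvous instruction and
     * forward it to the destination PSE; the sending context then waits
     * (rendezvous) until a reply fills its receive registers. */
    void send_and_rendezvous(uint16_t dest_pse, uint16_t dest_queue,
                             uint16_t src_pse, uint16_t src_context,
                             const uint32_t src_regs[4])
    {
        srm_t msg = { dest_pse, dest_queue, src_pse, src_context,
                      { src_regs[0], src_regs[1], src_regs[2], src_regs[3] } };
        post_message(dest_pse, &msg);
        /* Blocking until the receive registers are written is omitted here. */
    }
    ```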
  • Patent number: 10031753
    Abstract: In a pipelined element configured to execute multiple contexts and including an instruction pipeline and a plurality of context modules each having a register file and a functional unit, a method includes scheduling a first context for execution in the instruction pipeline. The instruction pipeline includes an execution unit having a plurality of functional units. Each functional unit of the plurality of functional units is configured to execute instructions of a scheduled context of the plurality of contexts. A first instruction of the first context which precedes an instruction loop of the first context is executed. In response to executing the first instruction, the first context is released from being scheduled for execution in the instruction pipeline and execution of the first context is continued using a first context module. The first context module includes a context-specific functional unit configured to execute the instruction loop.
    Type: Grant
    Filed: May 22, 2015
    Date of Patent: July 24, 2018
    Assignee: NXP USA, Inc.
    Inventors: Peter J. Wilson, Brian C. Kahne
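    The hand-off described above, where a context is released from the shared pipeline and its loop continues on a per-context module, can be pictured with the minimal C model below. All structure and function names are invented for this sketch.
    ```c
    #include <stdbool.h>
    #include <stdint.h>

    #define NUM_CONTEXTS 4

    /* Per-context module: its own register file and a functional unit able to
     * run a small instruction loop without occupying the shared pipeline. */
    typedef struct {
        uint32_t regs[32];     /* context-private register file            */
        uint32_t loop_pc;      /* start of the instruction loop to execute */
        bool     running_loop; /* loop currently running on this module    */
    } context_module;

    typedef struct {
        bool scheduled[NUM_CONTEXTS];         /* contexts eligible for the shared pipeline */
        context_module modules[NUM_CONTEXTS]; /* one module per context                    */
    } pipelined_element;

    /* Effect of the instruction that precedes the loop: release the context from
     * being scheduled in the instruction pipeline and continue it on its module. */
    void offload_loop(pipelined_element *pe, int ctx, uint32_t loop_start)
    {
        pe->scheduled[ctx] = false;            /* no longer scheduled in the pipeline */
        pe->modules[ctx].loop_pc = loop_start; /* loop runs on the context module     */
        pe->modules[ctx].running_loop = true;
    }
    ```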
  • Patent number: 10025720
    Abstract: A method and information processing system with improved cache organization are provided. Each register capable of accessing memory has associated metadata containing the tag, way, and line of a corresponding cache entry, along with a valid bit. This allows a memory access that hits a location in the cache to go directly to the cache's data array, avoiding a lookup in the cache's tag array. When a cache line is evicted, any metadata referring to that line is marked invalid. By reducing the number of tag lookups needed to access data in the cache's data array, the power those lookups would otherwise consume is saved, reducing the power consumption of the information processing system, and the cache area needed to achieve a desired level of performance may be reduced.
    Type: Grant
    Filed: July 11, 2017
    Date of Patent: July 17, 2018
    Assignee: NXP USA, Inc.
    Inventor: Peter J. Wilson
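    A simplified C model of the per-register metadata described above: each address register remembers the tag, way, and line of the cache entry it last touched plus a valid bit, so a matching access can index the data array directly and skip the tag lookup. The 2-way geometry, field names, and the load_byte helper are assumptions for this sketch only.
    ```c
    #include <stdbool.h>
    #include <stdint.h>

    #define NUM_LINES  64
    #define NUM_WAYS    2
    #define LINE_BYTES 32

    typedef struct {
        uint32_t tag;   /* tag of the cached line                      */
        uint32_t way;   /* way within the set                          */
        uint32_t line;  /* set/line index in the data array            */
        bool     valid; /* cleared when the referenced line is evicted */
    } reg_metadata;

    typedef struct {
        uint32_t tags[NUM_LINES][NUM_WAYS];
        uint8_t  data[NUM_LINES][NUM_WAYS][LINE_BYTES];
    } cache_t;

    /* Load through a register whose metadata may still describe the target line.
     * If the metadata matches, the tag array is not consulted at all. */
    uint8_t load_byte(cache_t *c, reg_metadata *md, uint32_t addr)
    {
        uint32_t line = (addr / LINE_BYTES) % NUM_LINES;
        uint32_t tag  =  addr / (LINE_BYTES * NUM_LINES);

        if (md->valid && md->tag == tag && md->line == line)
            return c->data[line][md->way][addr % LINE_BYTES]; /* direct data-array access */

        /* Slow path: ordinary tag lookup, then refresh the register's metadata. */
        for (uint32_t way = 0; way < NUM_WAYS; way++) {
            if (c->tags[line][way] == tag) {
                *md = (reg_metadata){ tag, way, line, true };
                return c->data[line][way][addr % LINE_BYTES];
            }
        }
        return 0; /* miss handling (refill) omitted from this sketch */
    }
    ```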
  • Publication number: 20170364398
    Abstract: A method of handling requests between contexts in a processing system includes, in a current context of a source processing system element (PSE): executing a send-and-rendezvous instruction that specifies a destination PSE, a queue address in the destination PSE, a set of source registers, and a set of receive registers; and sending a send-and-rendezvous message (SRM) to the destination PSE, wherein the SRM includes an address of the destination PSE, a destination queue address, a source PSE address, and an identifier of the current context in the source PSE.
    Type: Application
    Filed: August 1, 2017
    Publication date: December 21, 2017
    Inventors: Peter J. Wilson, Brian C. Kahne
  • Patent number: 9824242
    Abstract: A storage location of a device that can be configured to act as a master in a particular security mode, such as a Direct Memory Access (DMA) controller having one or more channels, can be programmed to indicate the security indicator to be provided when the device is configured to operate as a master.
    Type: Grant
    Filed: July 27, 2015
    Date of Patent: November 21, 2017
    Assignee: NXP USA, Inc.
    Inventors: Joseph C. Circello, Daniel M. McCarthy, John D. Mitchell, Peter J. Wilson, John J. Vaglica
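    A small C sketch of the idea in the abstract above: a programmable storage location, modeled here as one bit of a channel configuration register, selects the security indicator the device drives when it acts as a bus master. The register layout and names are hypothetical.
    ```c
    #include <stdbool.h>
    #include <stdint.h>

    /* Hypothetical per-channel configuration register of a DMA-style bus master. */
    #define CHAN_CFG_SECURE_MASTER (1u << 0) /* security indicator to drive as master */

    typedef struct {
        volatile uint32_t chan_cfg; /* programmed by trusted software */
    } dma_channel_regs;

    /* Program the storage location that selects the security indicator. */
    static inline void set_master_security(dma_channel_regs *ch, bool secure)
    {
        if (secure)
            ch->chan_cfg |= CHAN_CFG_SECURE_MASTER;
        else
            ch->chan_cfg &= ~CHAN_CFG_SECURE_MASTER;
    }

    /* Value the channel would present on the bus when operating as a master. */
    static inline bool master_security_indicator(const dma_channel_regs *ch)
    {
        return (ch->chan_cfg & CHAN_CFG_SECURE_MASTER) != 0;
    }
    ```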
  • Publication number: 20170308480
    Abstract: A method and information processing system with improved cache organization are provided. Each register capable of accessing memory has associated metadata containing the tag, way, and line of a corresponding cache entry, along with a valid bit. This allows a memory access that hits a location in the cache to go directly to the cache's data array, avoiding a lookup in the cache's tag array. When a cache line is evicted, any metadata referring to that line is marked invalid. By reducing the number of tag lookups needed to access data in the cache's data array, the power those lookups would otherwise consume is saved, reducing the power consumption of the information processing system, and the cache area needed to achieve a desired level of performance may be reduced.
    Type: Application
    Filed: July 11, 2017
    Publication date: October 26, 2017
    Inventor: Peter J. Wilson
  • Patent number: 9753790
    Abstract: A method of handling requests between contexts in a processing system includes, in a current context of a source processing system element (PSE): executing a send-and-rendezvous instruction that specifies a destination PSE, a queue address in the destination PSE, a set of source registers, and a set of receive registers; and sending a send-and-rendezvous message (SRM) to the destination PSE, wherein the SRM includes an address of the destination PSE, a destination queue address, a source PSE address, and an identifier of the current context in the source PSE.
    Type: Grant
    Filed: August 18, 2015
    Date of Patent: September 5, 2017
    Assignee: NXP USA, Inc.
    Inventors: Peter J. Wilson, Brian C. Kahne
  • Patent number: 9734080
    Abstract: A method and information processing system with improved cache organization are provided. Each register capable of accessing memory has associated metadata containing the tag, way, and line of a corresponding cache entry, along with a valid bit. This allows a memory access that hits a location in the cache to go directly to the cache's data array, avoiding a lookup in the cache's tag array. When a cache line is evicted, any metadata referring to that line is marked invalid. By reducing the number of tag lookups needed to access data in the cache's data array, the power those lookups would otherwise consume is saved, reducing the power consumption of the information processing system, and the cache area needed to achieve a desired level of performance may be reduced.
    Type: Grant
    Filed: August 8, 2013
    Date of Patent: August 15, 2017
    Assignee: NXP USA, Inc.
    Inventor: Peter J. Wilson
  • Publication number: 20170052834
    Abstract: A method of handling requests between contexts in a processing system includes, in a current context of a source processing system element (PSE): executing a send-and-rendezvous instruction that specifies a destination PSE, a queue address in the destination PSE, a set of source registers, and a set of receive registers; and sending a send-and-rendezvous message (SRM) to the destination PSE, wherein the SRM includes an address of the destination PSE, a destination queue address, a source PSE address, and an identifier of the current context in the source PSE.
    Type: Application
    Filed: August 18, 2015
    Publication date: February 23, 2017
    Inventors: Peter J. Wilson, Brian C. Kahne
  • Patent number: 9507654
    Abstract: A processing system includes a first processing system element and a second processing system element configured to communicate with the first processing system element. The second processing system element includes a set of messaging queues, each having one or more entries for storing data; a set of delegate queue addresses, each associated with one of the set of messaging queues; and a delegate queue associated with the set of messaging queues. The delegate queue includes a set of entries corresponding to the delegate queue addresses, and each entry of the delegate queue indicates whether the corresponding messaging queue is storing data.
    Type: Grant
    Filed: April 23, 2015
    Date of Patent: November 29, 2016
    Assignee: Freescale Semiconductor, Inc.
    Inventors: Peter J. Wilson, Brian C. Kahne
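    The delegate-queue arrangement above can be pictured with the C sketch below: each entry of the delegate queue mirrors one messaging queue and is set whenever that queue holds data, so a receiver polls a single structure instead of every queue. The sizes, names, and helper functions are invented for this sketch.
    ```c
    #include <stdbool.h>
    #include <stdint.h>

    #define NUM_QUEUES  8
    #define QUEUE_DEPTH 16

    typedef struct {
        uint32_t entries[QUEUE_DEPTH];
        uint32_t count;                 /* number of stored data items */
    } msg_queue;

    typedef struct {
        msg_queue queues[NUM_QUEUES];   /* set of messaging queues                       */
        bool      delegate[NUM_QUEUES]; /* delegate queue: one entry per messaging queue */
    } pse_queues;

    /* Enqueue data and update the corresponding delegate entry. */
    bool enqueue(pse_queues *q, uint32_t queue_addr, uint32_t data)
    {
        msg_queue *mq = &q->queues[queue_addr];
        if (mq->count == QUEUE_DEPTH)
            return false;               /* destination messaging queue is full        */
        mq->entries[mq->count++] = data;
        q->delegate[queue_addr] = true; /* mark: this messaging queue is storing data */
        return true;
    }

    /* Receiver side: scan only the delegate queue to find a non-empty messaging queue. */
    int next_ready_queue(const pse_queues *q)
    {
        for (int i = 0; i < NUM_QUEUES; i++)
            if (q->delegate[i])
                return i;
        return -1;                      /* nothing pending */
    }
    ```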
  • Publication number: 20160342421
    Abstract: In a pipelined element configured to execute multiple contexts and including an instruction pipeline and a plurality of context modules each having a register file and a functional unit, a method includes scheduling a first context for execution in the instruction pipeline. The instruction pipeline includes an execution unit having a plurality of functional units. Each functional unit of the plurality of functional units is configured to execute instructions of a scheduled context of the plurality of contexts. A first instruction of the first context which precedes an instruction loop of the first context is executed. In response to executing the first instruction, the first context is released from being scheduled for execution in the instruction pipeline and execution of the first context is continued using a first context module. The first context module includes a context-specific functional unit configured to execute the instruction loop.
    Type: Application
    Filed: May 22, 2015
    Publication date: November 24, 2016
    Inventors: Peter J. Wilson, Brian C. Kahne
  • Publication number: 20160314030
    Abstract: A processing system includes a first processing system element and a second processing system element configured to communicate with the first processing system element. The second processing system element includes a set of messaging queues, each having one or more entries for storing data; a set of delegate queue addresses, each associated with one of the set of messaging queues; and a delegate queue associated with the set of messaging queues. The delegate queue includes a set of entries corresponding to the delegate queue addresses, and each entry of the delegate queue indicates whether the corresponding messaging queue is storing data.
    Type: Application
    Filed: April 23, 2015
    Publication date: October 27, 2016
    Inventors: Peter J. Wilson, Brian C. Kahne
  • Publication number: 20160283233
    Abstract: A data processing system includes a plurality of contexts, a current context indicator configured to indicate a context of the plurality of contexts as the current context, an instruction queue configured to store fetched instructions for execution in the current context, and a scheduler coupled to the context selector. The scheduler is configured to, in response to a context switch event, save a current context instruction state from the instruction queue to the corresponding instruction buffer of the current context, select a next context of the plurality of contexts, restore a context instruction state from the corresponding instruction buffer of the next context to the instruction queue, and set the current context indicator to indicate the selected next context as the current context.
    Type: Application
    Filed: March 24, 2015
    Publication date: September 29, 2016
    Inventors: Peter J. Wilson, Brian C. Kahne
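    A minimal C model of the context-switch sequence in the abstract above: pending fetched instructions are saved to the outgoing context's instruction buffer, the incoming context's buffer is restored into the shared instruction queue, and the current context indicator is updated. Structure names and sizes are assumptions.
    ```c
    #include <stdint.h>

    #define NUM_CONTEXTS 4
    #define IQ_DEPTH     8

    typedef struct {
        uint32_t slots[IQ_DEPTH]; /* fetched instructions awaiting execution */
        uint32_t count;
    } instr_queue;

    typedef struct {
        instr_queue instr_buffers[NUM_CONTEXTS]; /* per-context instruction buffers */
        instr_queue iq;                          /* shared instruction queue        */
        uint32_t    current_context;             /* current context indicator       */
    } scheduler_t;

    /* Handle a context switch event. */
    void context_switch(scheduler_t *s, uint32_t next_context)
    {
        /* Save the current context's instruction state from the queue. */
        s->instr_buffers[s->current_context] = s->iq;

        /* Restore the next context's instruction state into the queue. */
        s->iq = s->instr_buffers[next_context];

        /* Indicate the selected next context as the current context. */
        s->current_context = next_context;
    }
    ```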
  • Publication number: 20160004536
    Abstract: Disclosed is a digital processor comprising an instruction memory having a first input, a second input, a first output, and a second output. A program counter register is in communication with the first input of the instruction memory. The program counter register is configured to store an address of an instruction to be fetched. A data pointer register is in communication with the second input of the instruction memory. The data pointer register is configured to store an address of a data value in the instruction memory. An instruction buffer is in communication with the first output of the instruction memory. The instruction buffer is arranged to receive an instruction according to a value at the program counter register. A data buffer is in communication with the second output of the instruction memory. The data buffer is arranged to receive a data value according to a value at the data pointer register.
    Type: Application
    Filed: July 2, 2014
    Publication date: January 7, 2016
    Applicant: Freescale Semiconductor, Inc.
    Inventors: Peter J. Wilson, Brian C. Kahne, Jeffrey W. Scott
  • Publication number: 20150332069
    Abstract: A storage location of a device that can be configured to act as a master in a particular security mode, such as a Direct Memory Access (DMA) controller having one or more channels, can be programmed to indicate the security indicator to be provided when the device is configured to operate as a master.
    Type: Application
    Filed: July 27, 2015
    Publication date: November 19, 2015
    Inventors: Joseph C. Circello, Daniel M. McCarthy, John D. Mitchell, Peter J. Wilson, John J. Vaglica
  • Patent number: 9092647
    Abstract: A storage location of a device that can be configured to act as a master in a particular security mode, such as a Direct Memory Access (DMA) controller having one or more channels, can be programmed to indicate the security indicator to be provided when the device is configured to operate as a master.
    Type: Grant
    Filed: March 7, 2013
    Date of Patent: July 28, 2015
    Assignee: Freescale Semiconductor, Inc.
    Inventors: Joseph C. Circello, Daniel M. McCarthy, John D. Mitchell, Peter J. Wilson, John J. Vaglica
  • Patent number: 8990546
    Abstract: Embodiments of a system and method are disclosed that can include a memory unit, and a memory management unit coupled to the memory unit. The memory management unit can include address mapping circuitry and access control circuitry operable to: provide address mappings for at least a frame stack and a link stack in the memory unit for programs being executed by the processing unit, and provide an access permission indicator applicable to any segment of the memory unit. A processing unit can save context information for a program to the frame stack, and execute a savelink instruction subsequent to the execution of a branch and link instruction. If the access permission indicator is set, the savelink instruction saves to the link stack a return address provided by the branch and link instruction.
    Type: Grant
    Filed: October 31, 2011
    Date of Patent: March 24, 2015
    Assignee: Freescale Semiconductor, Inc.
    Inventor: Peter J. Wilson
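    A C sketch of the savelink behavior described above: after a branch and link instruction writes the return address, a subsequent savelink pushes that address onto a separately mapped link stack only when the segment's access permission indicator allows it. The data structures and names are illustrative assumptions.
    ```c
    #include <stdbool.h>
    #include <stdint.h>

    #define LINK_STACK_DEPTH 64

    typedef struct {
        uint32_t entries[LINK_STACK_DEPTH]; /* return addresses saved by savelink */
        uint32_t top;
    } link_stack;

    typedef struct {
        link_stack lstack;        /* link stack mapped by the memory management unit */
        bool       access_perm;   /* access permission indicator for the segment     */
        uint32_t   link_register; /* return address written by branch and link       */
    } cpu_state;

    /* savelink, executed after a branch and link instruction: save the return
     * address to the link stack if the access permission indicator is set. */
    bool savelink(cpu_state *cpu)
    {
        if (!cpu->access_perm || cpu->lstack.top == LINK_STACK_DEPTH)
            return false; /* not permitted, or link stack is full */
        cpu->lstack.entries[cpu->lstack.top++] = cpu->link_register;
        return true;
    }
    ```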
  • Publication number: 20150046658
    Abstract: A method and information processing system with improved cache organization are provided. Each register capable of accessing memory has associated metadata containing the tag, way, and line of a corresponding cache entry, along with a valid bit. This allows a memory access that hits a location in the cache to go directly to the cache's data array, avoiding a lookup in the cache's tag array. When a cache line is evicted, any metadata referring to that line is marked invalid. By reducing the number of tag lookups needed to access data in the cache's data array, the power those lookups would otherwise consume is saved, reducing the power consumption of the information processing system, and the cache area needed to achieve a desired level of performance may be reduced.
    Type: Application
    Filed: August 8, 2013
    Publication date: February 12, 2015
    Applicant: Freescale Semiconductor, Inc.
    Inventor: Peter J. Wilson
  • Publication number: 20140321185
    Abstract: A memory cluster includes a first block, a second block, a third block, and a fourth block arranged to have a center hole, wherein the first, second, third, and fourth blocks each have a first port, a second port, a third port, and a fourth port. A first core is in the center hole coupled to the first port of each of the first, second, third, and fourth blocks. A second core is in the center hole coupled to the second port of each of the first, second, third, and fourth blocks. A third core is in the center hole coupled to the third port of each of the first, second, third, and fourth blocks. A fourth core is in the center hole coupled to the fourth port of each of the first, second, third, and fourth blocks.
    Type: Application
    Filed: April 30, 2013
    Publication date: October 30, 2014
    Inventors: Perry H. Pelley, Peter J. Wilson
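    The four-block, four-core arrangement above amounts to a full port-to-core crossing: core N is wired to port N of every block, so each core in the center hole reaches all four blocks. The short C sketch below, a loose assumption rather than the patented layout, simply enumerates that wiring as a table.
    ```c
    #include <stdio.h>

    #define NUM_BLOCKS 4 /* memory blocks arranged around a center hole */
    #define NUM_PORTS  4 /* ports per block                             */

    /* connectivity[block][port] = core attached to that port; with core n wired
     * to port n of every block, the port index doubles as the core index. */
    static int connectivity[NUM_BLOCKS][NUM_PORTS];

    int main(void)
    {
        for (int block = 0; block < NUM_BLOCKS; block++)
            for (int port = 0; port < NUM_PORTS; port++)
                connectivity[block][port] = port;

        for (int block = 0; block < NUM_BLOCKS; block++)
            for (int port = 0; port < NUM_PORTS; port++)
                printf("block %d, port %d -> core %d\n",
                       block, port, connectivity[block][port]);
        return 0;
    }
    ```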