Patents by Inventor Paul Kimelman
Paul Kimelman has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).
-
Patent number: 12265444
Abstract: Systems and methods for detection of persistent faults in processing units and memory have been described. In an illustrative, non-limiting embodiment, a Machine Learning (ML) processor includes one or more registers, and a data moving circuit coupled to the one or more registers. The data moving circuit can be configured to select, based upon a first value stored in the one or more registers, an original one of a plurality of parallel handling circuits within the ML processor to obtain an original data processing result. The data moving circuit can also be configured to select, based upon a second value stored in the one or more registers, an alternative one of the plurality of parallel handling circuits to obtain an alternative data processing result that, upon comparison with the original data processing result, provides an indication of a persistent fault in the ML processor.
Type: Grant
Filed: March 9, 2023
Date of Patent: April 1, 2025
Assignee: NXP B.V.
Inventors: Paul Kimelman, Adam Fuks
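As a rough illustration of the redundancy scheme this abstract describes, the C sketch below selects an "original" and an "alternative" handling unit via register values, runs the same computation on both, and flags a persistent fault when the results disagree. The register layout, unit count, and function names are hypothetical, not taken from the patent.

```c
#include <stdint.h>
#include <stdio.h>

#define NUM_UNITS 4

/* Hypothetical model of one parallel handling circuit: every unit should
 * compute the same function; a faulty unit is emulated by a wrong result. */
static int32_t run_unit(unsigned unit, int32_t input)
{
    int32_t result = input * input + 1;   /* the shared computation */
    if (unit == 2)                        /* emulate a persistent fault */
        result ^= 0x10;
    return result;
}

/* Registers selecting which units produce the original and alternative results. */
struct select_regs {
    unsigned original_unit;
    unsigned alternative_unit;
};

/* Returns nonzero if the two selected units disagree, i.e. a fault indication. */
static int check_persistent_fault(const struct select_regs *regs, int32_t input)
{
    int32_t original    = run_unit(regs->original_unit, input);
    int32_t alternative = run_unit(regs->alternative_unit, input);
    return original != alternative;
}

int main(void)
{
    struct select_regs regs = { .original_unit = 0, .alternative_unit = 2 };
    printf("fault detected: %d\n", check_persistent_fault(&regs, 7));
    return 0;
}
```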
-
Publication number: 20240303143
Abstract: Systems and methods for detection of persistent faults in processing units and memory have been described. In an illustrative, non-limiting embodiment, a Machine Learning (ML) processor includes one or more registers, and a data moving circuit coupled to the one or more registers. The data moving circuit can be configured to select, based upon a first value stored in the one or more registers, an original one of a plurality of parallel handling circuits within the ML processor to obtain an original data processing result. The data moving circuit can also be configured to select, based upon a second value stored in the one or more registers, an alternative one of the plurality of parallel handling circuits to obtain an alternative data processing result that, upon comparison with the original data processing result, provides an indication of a persistent fault in the ML processor.
Type: Application
Filed: March 9, 2023
Publication date: September 12, 2024
Inventors: Paul Kimelman, Adam Fuks
-
Patent number: 12079710
Abstract: A scalable neural network accelerator may include a first circuit for selecting a sub array of an array of registers, wherein the sub array comprises LH rows of registers and LW columns of registers, and wherein LH and LW are integers. The accelerator may also include a register for storing a value that determines LH. In addition, the accelerator may include a first load circuit for loading data received from the memory bus into registers of the sub array.
Type: Grant
Filed: December 31, 2020
Date of Patent: September 3, 2024
Assignee: NXP USA, Inc.
Inventors: Adam Fuks, Paul Kimelman, Franciscus Petrus Widdershoven, Brian Christopher Kahne, Xiaomin Lu
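To make the register sub-array idea concrete, here is a small C sketch that selects an LH-by-LW sub-array out of a larger register array and loads bus data into it. The array dimensions, the flat layout of the bus words, and the function name are assumptions for illustration only.

```c
#include <stdint.h>
#include <stdio.h>

#define ARRAY_ROWS 8
#define ARRAY_COLS 8

static uint32_t regs[ARRAY_ROWS][ARRAY_COLS];

/* Load an lh x lw block of words from 'bus_data' into the sub-array whose
 * top-left corner is (row0, col0). 'lh' would come from a configuration register. */
static void load_sub_array(unsigned row0, unsigned col0,
                           unsigned lh, unsigned lw,
                           const uint32_t *bus_data)
{
    for (unsigned r = 0; r < lh; r++)
        for (unsigned c = 0; c < lw; c++)
            regs[row0 + r][col0 + c] = bus_data[r * lw + c];
}

int main(void)
{
    uint32_t bus[6] = { 1, 2, 3, 4, 5, 6 };
    load_sub_array(2, 2, 2, 3, bus);          /* LH = 2 rows, LW = 3 columns */
    printf("regs[3][4] = %u\n", regs[3][4]);  /* prints 6 */
    return 0;
}
```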
-
Publication number: 20240281496
Abstract: A convolution layer processor for a neural network accelerator includes a memory access module to access elements of an input feature map having a first array of pixels, and a plurality of convolution modules. Each convolution module receives an element of the input feature map and performs a convolution operation on the received element of the input feature map with a convolution kernel having a second array of pixels to provide a corresponding element of an output feature map. The memory access module includes a DMA requester to request elements of the input feature map, a data buffer to provide the requested elements to each of the plurality of convolution modules, and a pad supervisor module to provide to the data buffer, for each element requested by the DMA requester, padding pixels of the input feature map when the requested element extends beyond a boundary of the input feature map.
Type: Application
Filed: February 13, 2024
Publication date: August 22, 2024
Inventors: Adam Fuks, Paul Kimelman, Iancu Ciprian Mindru, Fred William Peterson, Andrei-Alexandru Avram, Mihai Despotovici
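The padding behaviour described for the pad supervisor can be pictured with a plain C convolution that substitutes a pad value whenever the kernel window extends beyond the input boundary. The feature-map and kernel sizes, the zero pad value, and the function names are illustrative assumptions, not the accelerator's actual datapath.

```c
#include <stdio.h>

#define IN_H 4
#define IN_W 4
#define K    3          /* kernel is K x K */
#define PAD  (K / 2)    /* "same" padding  */

/* Return the input pixel, or a pad pixel when (r, c) falls outside the map. */
static int input_or_pad(int in[IN_H][IN_W], int r, int c)
{
    if (r < 0 || r >= IN_H || c < 0 || c >= IN_W)
        return 0;       /* pad pixel supplied instead of a memory fetch */
    return in[r][c];
}

static void conv2d_same(int in[IN_H][IN_W], int k[K][K], int out[IN_H][IN_W])
{
    for (int r = 0; r < IN_H; r++)
        for (int c = 0; c < IN_W; c++) {
            int acc = 0;
            for (int kr = 0; kr < K; kr++)
                for (int kc = 0; kc < K; kc++)
                    acc += k[kr][kc] *
                           input_or_pad(in, r + kr - PAD, c + kc - PAD);
            out[r][c] = acc;
        }
}

int main(void)
{
    int in[IN_H][IN_W] = { {1,1,1,1}, {1,1,1,1}, {1,1,1,1}, {1,1,1,1} };
    int k[K][K] = { {0,0,0}, {0,1,0}, {0,0,0} };   /* identity kernel */
    int out[IN_H][IN_W];
    conv2d_same(in, k, out);
    printf("out[0][0] = %d\n", out[0][0]);         /* prints 1 */
    return 0;
}
```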
-
Publication number: 20240272959
Abstract: Systems and methods for dynamically creating multiple isolated partitions in a multi-core processing system have been described. For example, in an illustrative, non-limiting embodiment, an integrated circuit may include: a plurality of routers configured to provide a mesh network among a plurality of multi-cluster tiles (MCTs), where each MCT comprises a plurality of processing cores, and a control circuit coupled to the plurality of routers, where the control circuit is configured to control at least one of the plurality of routers to enable or disable at least a portion of the mesh network to create, among the plurality of processing cores, isolated partitions of processing cores.
Type: Application
Filed: February 13, 2023
Publication date: August 15, 2024
Inventors: Gary L. Miller, Paul Kimelman
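A toy way to picture the router-controlled partitioning is a link-enable matrix over the mesh: disabling every link that crosses a partition boundary leaves groups of tiles that cannot reach each other. The C sketch below does this with a simple reachability check; the tile count and the link representation are assumptions, not the patent's design.

```c
#include <stdio.h>

#define TILES 4

/* link_enabled[a][b] nonzero means routers still forward traffic between tiles a and b. */
static int link_enabled[TILES][TILES];

static void enable_all_links(void)
{
    for (int a = 0; a < TILES; a++)
        for (int b = 0; b < TILES; b++)
            link_enabled[a][b] = (a != b);
}

/* Control circuit action: cut both directions of a link to isolate partitions. */
static void disable_link(int a, int b)
{
    link_enabled[a][b] = link_enabled[b][a] = 0;
}

/* Depth-first reachability: can traffic from tile 'from' reach tile 'to'? */
static int reachable(int from, int to, int visited[TILES])
{
    if (from == to)
        return 1;
    visited[from] = 1;
    for (int next = 0; next < TILES; next++)
        if (link_enabled[from][next] && !visited[next] &&
            reachable(next, to, visited))
            return 1;
    return 0;
}

int main(void)
{
    enable_all_links();
    /* Split tiles {0,1} from {2,3} by disabling every boundary-crossing link. */
    disable_link(0, 2); disable_link(0, 3);
    disable_link(1, 2); disable_link(1, 3);

    int visited[TILES] = { 0 };
    printf("tile 0 can reach tile 3: %d\n", reachable(0, 3, visited)); /* prints 0 */
    return 0;
}
```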
-
Publication number: 20240272978
Abstract: Systems and methods for debugging multi-core processors with configurable isolated partitions have been described. In an illustrative, non-limiting embodiment, an integrated circuit may include: a plurality of Cross-Trigger Matrices (CTMs) configured to establish a debug network among a plurality of multi-cluster tiles (MCTs), where each MCT includes a plurality of processor cores, and where each processor core is assigned to a respective isolated partition of processor cores; and a System Interface (SI) coupled to the plurality of CTMs, where the SI is configured to control the plurality of CTMs to enable or disable at least a portion of the debug network to allow an isolated partition to be debugged independently of another isolated partition. A method may include enabling or disabling, by the SI, buses between the MCTs to create isolated debug networks, each isolated debug network corresponding to a distinct isolated partition of processor cores.
Type: Application
Filed: August 25, 2023
Publication date: August 15, 2024
Inventors: Gary L. Miller, Devendra Bahadur Singh, Jonathan Gamoneda, Paul Kimelman, Oded Yishay
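One way to picture the effect of gating the cross-trigger buses is that a debug trigger raised in one partition only fans out to cores of that same partition. The structures and names below are illustrative assumptions, not the CTM or SI interface itself.

```c
#include <stdio.h>

#define CORES 6

struct core {
    int partition;   /* isolated partition this core belongs to       */
    int halted;      /* set when a cross-trigger halt reaches the core */
};

static struct core cores[CORES] = {
    { 0, 0 }, { 0, 0 }, { 0, 0 },   /* partition 0 */
    { 1, 0 }, { 1, 0 }, { 1, 0 },   /* partition 1 */
};

/* Cross-trigger fan-out with the inter-partition buses disabled: a halt
 * trigger only reaches cores whose partition matches the source core's. */
static void broadcast_halt(int source_core)
{
    int p = cores[source_core].partition;
    for (int i = 0; i < CORES; i++)
        if (cores[i].partition == p)
            cores[i].halted = 1;
}

int main(void)
{
    broadcast_halt(1);  /* breakpoint hit on a core in partition 0 */
    printf("core 2 halted: %d, core 4 halted: %d\n",
           cores[2].halted, cores[4].halted);   /* prints 1, 0 */
    return 0;
}
```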
-
Patent number: 12000876
Abstract: A capacitive sensing system includes a controller, a node connected to one side of a capacitance, the controller configured to measure the capacitance by measuring a time for a voltage across the capacitance to reach a predetermined reference voltage, a noise measurement circuit configured to measure electrical noise on the node, and the controller receiving the measurement of noise from the noise measurement circuit.
Type: Grant
Filed: August 14, 2019
Date of Patent: June 4, 2024
Assignee: TEXAS INSTRUMENTS INCORPORATED
Inventors: Krishnasawamy Nagaraj, Paul Kimelman, Abhijit Kumar Das
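For the charge-time measurement itself, an RC charge toward a supply Vs reaches the reference at t = R*C*ln(Vs/(Vs - Vref)), so the capacitance follows from the measured time. The sketch below inverts that relation and discards the sample when a (here simulated) noise reading is too high; the component values and noise threshold are purely illustrative assumptions.

```c
#include <math.h>
#include <stdio.h>

/* Illustrative circuit constants (assumed, not from the patent). */
#define R_OHMS         100000.0   /* charge resistor        */
#define VSUPPLY        3.3        /* charging voltage       */
#define VREF           1.65       /* comparator reference   */
#define NOISE_LIMIT_MV 5.0        /* reject noisier samples */

/* For an RC charge, t = R*C*ln(Vs / (Vs - Vref)); solve for C. */
static double capacitance_from_time(double t_seconds)
{
    return t_seconds / (R_OHMS * log(VSUPPLY / (VSUPPLY - VREF)));
}

int main(void)
{
    double measured_time_s = 6.93e-6;  /* about 100 pF with the constants above */
    double noise_mv        = 2.0;      /* value the noise circuit would report  */

    if (noise_mv > NOISE_LIMIT_MV) {
        puts("sample rejected: too much noise on the node");
        return 0;
    }
    printf("estimated capacitance: %.1f pF\n",
           capacitance_from_time(measured_time_s) * 1e12);
    return 0;
}
```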
-
Publication number: 20230418478
Abstract: Tweakable block cipher encryption is described using a buffer identifier and a memory address.
Type: Application
Filed: June 23, 2022
Publication date: December 28, 2023
Inventors: Wilhelmus Petrus Adrianus Johannus Michiels, Jan Hoogerbrugge, Paul Kimelman
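The abstract is terse, but the general pattern of tweakable encryption is to derive a tweak from metadata, here a buffer identifier and a memory address, and mix it into each block operation. The sketch below shows that shape in an XEX-like arrangement with a toy, insecure stand-in permutation in place of a real block cipher; the tweak derivation and every name here are assumptions for illustration, not the patented construction.

```c
#include <stdint.h>
#include <stdio.h>

/* Toy 64-bit permutation standing in for a real block cipher. NOT secure;
 * it exists only so the tweak plumbing below is runnable. */
static uint64_t toy_block_encrypt(uint64_t block, uint64_t key)
{
    for (int round = 0; round < 8; round++) {
        block ^= key + (uint64_t)round * 0x9E3779B97F4A7C15ULL;
        block  = (block << 13) | (block >> 51);   /* rotate left by 13 */
        block *= 0xFF51AFD7ED558CCDULL;
    }
    return block;
}

/* Derive a per-block tweak from the buffer identifier and the memory address. */
static uint64_t derive_tweak(uint32_t buffer_id, uint64_t address)
{
    return ((uint64_t)buffer_id << 48) ^ address;
}

/* XEX-style use of the tweak: whiten the block with an encrypted tweak
 * before and after the core cipher call. */
static uint64_t tweaked_encrypt(uint64_t plaintext, uint64_t key,
                                uint32_t buffer_id, uint64_t address)
{
    uint64_t mask = toy_block_encrypt(derive_tweak(buffer_id, address), key);
    return toy_block_encrypt(plaintext ^ mask, key) ^ mask;
}

int main(void)
{
    uint64_t c1 = tweaked_encrypt(0x1122334455667788ULL, 42, 7, 0x80001000ULL);
    uint64_t c2 = tweaked_encrypt(0x1122334455667788ULL, 42, 8, 0x80001000ULL);
    /* Same plaintext, same address, different buffer id -> different ciphertext. */
    printf("%016llx\n%016llx\n",
           (unsigned long long)c1, (unsigned long long)c2);
    return 0;
}
```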
-
Publication number: 20220207332
Abstract: A scalable neural network accelerator may include a first circuit for selecting a sub array of an array of registers, wherein the sub array comprises LH rows of registers and LW columns of registers, and wherein LH and LW are integers. The accelerator may also include a register for storing a value that determines LH. In addition, the accelerator may include a first load circuit for loading data received from the memory bus into registers of the sub array.
Type: Application
Filed: December 31, 2020
Publication date: June 30, 2022
Inventors: Adam Fuks, Paul Kimelman, Franciscus Petrus Widdershoven, Brian Christopher Kahne, Xiaomin Lu
-
Patent number: 11200170
Abstract: A data processing system includes a processor, a memory, and a cache. The cache includes a cache array, cache control circuitry coupled to receive an access address corresponding to a read access request from the processor and configured to determine whether the received access address hits or misses in the cache array, pre-load control storage circuitry outside the cache array and configured to store a pre-load cache line address and a corresponding stride value, and pre-load control circuitry coupled to the cache control circuitry and the pre-load control storage circuitry. The pre-load control circuitry is configured to receive the access address corresponding to the read access request from the processor and selectively initiate a pre-load from the memory to the cache based on whether a cache line address portion of the access address matches the stored pre-load cache line address.
Type: Grant
Filed: December 4, 2019
Date of Patent: December 14, 2021
Assignee: NXP USA, Inc.
Inventors: Paul Kimelman, Brian Christopher Kahne
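The pre-load mechanism reads like a stride prefetcher keyed on a stored cache-line address: when a demand access matches the stored line, the line at address plus stride is fetched and the stored address is advanced. The C sketch below models only that control decision; the 64-byte line size and the structure names are assumptions.

```c
#include <stdint.h>
#include <stdio.h>

#define LINE_SHIFT 6   /* 64-byte cache lines (assumed) */

struct preload_ctrl {
    uint64_t line_addr;   /* pre-load cache line address register */
    int64_t  stride;      /* stride register, in cache lines      */
};

/* Called on every read access. Returns the address of the line to pre-load
 * from memory, or 0 when the access does not match the stored line address. */
static uint64_t maybe_preload(struct preload_ctrl *ctrl, uint64_t access_addr)
{
    uint64_t line = access_addr >> LINE_SHIFT;
    if (line != ctrl->line_addr)
        return 0;                              /* no match: no pre-load    */
    ctrl->line_addr = line + ctrl->stride;     /* advance to the next line */
    return ctrl->line_addr << LINE_SHIFT;      /* line to fetch into cache */
}

int main(void)
{
    struct preload_ctrl ctrl = { .line_addr = 0x1000 >> LINE_SHIFT, .stride = 1 };
    printf("preload 0x%llx\n",
           (unsigned long long)maybe_preload(&ctrl, 0x1008)); /* 0x1040     */
    printf("preload 0x%llx\n",
           (unsigned long long)maybe_preload(&ctrl, 0x2000)); /* 0: no match */
    return 0;
}
```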
-
Patent number: 11119922
Abstract: In a data processing system having a local first level cache which covers an address range of a backing store, a distributed second level cache has a plurality of distributed cache portions, each assigned as a home cache portion for a corresponding non-overlapping address sub-range of the address range of the backing store. Upon receipt of a read access request to a read-only address location of the backing store, the local first level cache is configured to, when the read-only address location misses in the local first level cache, send the read access request to a most local distributed cache portion of the plurality of distributed cache portions for the local first level cache to determine whether the read-only address location hits or misses in the most local distributed cache portion, in which the most local distributed cache portion is not the home cache portion for the read-only address location.
Type: Grant
Filed: February 21, 2020
Date of Patent: September 14, 2021
Assignee: NXP USA, Inc.
Inventors: Paul Kimelman, Brian Christopher Kahne
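The home-portion assignment can be pictured as a fixed mapping from address sub-range to cache slice; the twist in the abstract is that a read-only miss is first tried in the slice closest to the requester even when that slice is not the home. The C fragment below sketches just that lookup order; the slice count, interleave granularity, and stubbed hit check are assumptions.

```c
#include <stdint.h>
#include <stdio.h>

#define SLICES 4

/* Fixed mapping: each non-overlapping address sub-range has one home slice. */
static unsigned home_slice(uint64_t addr)
{
    return (unsigned)((addr >> 12) % SLICES);   /* assumed 4 KiB interleave */
}

/* Stub for "does this slice currently hold the line?" */
static int slice_holds(unsigned slice, uint64_t addr)
{
    (void)slice; (void)addr;
    return 0;   /* pretend everything misses so both lookup steps are shown */
}

/* Lookup order for a read-only address after a first level cache miss. */
static unsigned lookup_read_only(uint64_t addr, unsigned most_local_slice)
{
    if (slice_holds(most_local_slice, addr))
        return most_local_slice;        /* hit in the closest slice       */
    return home_slice(addr);            /* otherwise go to the home slice */
}

int main(void)
{
    uint64_t addr = 0x0003F000;
    printf("serviced by slice %u (home is %u)\n",
           lookup_read_only(addr, 2), home_slice(addr));
    return 0;
}
```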
-
Publication number: 20210263851
Abstract: In a data processing system having a local first level cache which covers an address range of a backing store, a distributed second level cache has a plurality of distributed cache portions, each assigned as a home cache portion for a corresponding non-overlapping address sub-range of the address range of the backing store. Upon receipt of a read access request to a read-only address location of the backing store, the local first level cache is configured to, when the read-only address location misses in the local first level cache, send the read access request to a most local distributed cache portion of the plurality of distributed cache portions for the local first level cache to determine whether the read-only address location hits or misses in the most local distributed cache portion, in which the most local distributed cache portion is not the home cache portion for the read-only address location.
Type: Application
Filed: February 21, 2020
Publication date: August 26, 2021
Inventors: Paul Kimelman, Brian Christopher Kahne
-
Publication number: 20210173781
Abstract: A data processing system includes a processor, a memory, and a cache. The cache includes a cache array, cache control circuitry coupled to receive an access address corresponding to a read access request from the processor and configured to determine whether the received access address hits or misses in the cache array, pre-load control storage circuitry outside the cache array and configured to store a pre-load cache line address and a corresponding stride value, and pre-load control circuitry coupled to the cache control circuitry and the pre-load control storage circuitry. The pre-load control circuitry is configured to receive the access address corresponding to the read access request from the processor and selectively initiate a pre-load from the memory to the cache based on whether a cache line address portion of the access address matches the stored pre-load cache line address.
Type: Application
Filed: December 4, 2019
Publication date: June 10, 2021
Inventors: Paul Kimelman, Brian Christopher Kahne
-
Patent number: 10949352
Abstract: A cache is shared by a first and second processor, and is divided into a first cache portion corresponding to a first requestor identifier (ID) and a second cache portion corresponding to a second requestor ID. The first cache portion is accessed in response to memory access requests associated with the first requestor ID, and the second cache portion is accessed in response to memory access requests associated with the second requestor ID. A memory controller communicates with a shared memory, which is a backing store for the cache. A corresponding requestor ID is received with each memory access request. Each memory access request includes a corresponding access address identifying a memory location in the shared memory and a corresponding index portion, wherein each corresponding index portion selects a set in a selected cache portion of the first and second cache portions selected based on the received corresponding requestor ID.
Type: Grant
Filed: March 5, 2020
Date of Patent: March 16, 2021
Assignee: NXP USA, Inc.
Inventor: Paul Kimelman
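Indexing into a requestor-partitioned cache can be sketched as: the requestor ID picks the partition, and the index bits of the access address pick the set inside that partition. The numbers below (two portions, 64 sets each, 64-byte lines) are assumptions chosen to keep the arithmetic visible.

```c
#include <stdint.h>
#include <stdio.h>

#define LINE_SHIFT    6    /* 64-byte lines        */
#define SETS_PER_PART 64   /* sets in each portion */

/* Map a (requestor ID, access address) pair to a global set number:
 * the requestor ID selects the cache portion, and the index bits of the
 * address select the set within that portion. */
static unsigned select_set(unsigned requestor_id, uint64_t addr)
{
    unsigned portion = requestor_id & 1;                      /* two portions  */
    unsigned index   = (addr >> LINE_SHIFT) % SETS_PER_PART;  /* index portion */
    return portion * SETS_PER_PART + index;
}

int main(void)
{
    uint64_t addr = 0x12345040;
    printf("requestor 0 -> set %u\n", select_set(0, addr));
    printf("requestor 1 -> set %u\n", select_set(1, addr));
    return 0;
}
```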
-
Patent number: 10628312
Abstract: A data processing system including a cache operably coupled to an interconnect and a cache controller. The cache is accessible by each bus initiator of a plurality of bus initiators. The cache includes a plurality of entries. Each entry includes a status field having coherency bits. When an entry of the plurality of entries is in a first protocol mode, the cache controller uses the coherency bits of the entry in implementing a first cache coherency protocol for data of the entry. When the entry is in a second protocol mode, the cache controller uses the coherency bits of the entry in implementing a second cache coherency protocol. The second cache coherency protocol is utilized in implementing a paced data transfer operation between a first bus initiator of the plurality of bus initiators and a second bus initiator of the plurality of bus initiators using the cache entry.
Type: Grant
Filed: September 26, 2018
Date of Patent: April 21, 2020
Assignee: NXP USA, Inc.
Inventors: Paul Kimelman, Brian Christopher Kahne, Ehud Kalekin
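A compact way to picture the dual-mode status field is a small struct whose coherency bits are decoded one way in the normal coherency mode and another way in the paced-transfer mode. The bit meanings below are invented for illustration and are not the patent's encoding.

```c
#include <stdio.h>

enum protocol_mode { MODE_COHERENCY = 0, MODE_PACED = 1 };

struct cache_entry {
    enum protocol_mode mode;  /* which protocol governs this entry        */
    unsigned coherency_bits;  /* reused field, decoded per the mode below */
};

/* Decode and print the meaning of the coherency bits for one entry. */
static void describe_entry(const struct cache_entry *e)
{
    if (e->mode == MODE_COHERENCY) {
        /* Assumed MESI-style decode: 0=Invalid, 1=Shared, 2=Exclusive, 3=Modified. */
        static const char *state[] = { "Invalid", "Shared", "Exclusive", "Modified" };
        printf("coherency mode: line is %s\n", state[e->coherency_bits & 3]);
    } else {
        /* Assumed paced-transfer decode: bit0 = data ready from producer,
         * bit1 = consumer has drained the data. */
        printf("paced mode: ready=%u drained=%u\n",
               e->coherency_bits & 1, (e->coherency_bits >> 1) & 1);
    }
}

int main(void)
{
    struct cache_entry a = { MODE_COHERENCY, 3 };  /* Modified line              */
    struct cache_entry b = { MODE_PACED,     1 };  /* producer wrote, not drained */
    describe_entry(&a);
    describe_entry(&b);
    return 0;
}
```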
-
Publication number: 20200097407
Abstract: A data processing system including a cache operably coupled to an interconnect and a cache controller. The cache is accessible by each bus initiator of a plurality of bus initiators. The cache includes a plurality of entries. Each entry includes a status field having coherency bits. When an entry of the plurality of entries is in a first protocol mode, the cache controller uses the coherency bits of the entry in implementing a first cache coherency protocol for data of the entry. When the entry is in a second protocol mode, the cache controller uses the coherency bits of the entry in implementing a second cache coherency protocol. The second cache coherency protocol is utilized in implementing a paced data transfer operation between a first bus initiator of the plurality of bus initiators and a second bus initiator of the plurality of bus initiators using the cache entry.
Type: Application
Filed: September 26, 2018
Publication date: March 26, 2020
Inventors: Paul Kimelman, Brian Christopher Kahne, Ehud Kalekin
-
Publication number: 20190369148
Abstract: A capacitive sensing system includes a controller, a node connected to one side of a capacitance, the controller configured to measure the capacitance by measuring a time for a voltage across the capacitance to reach a predetermined reference voltage, a noise measurement circuit configured to measure electrical noise on the node, and the controller receiving the measurement of noise from the noise measurement circuit.
Type: Application
Filed: August 14, 2019
Publication date: December 5, 2019
Inventors: Krishnasawamy Nagaraj, Paul Kimelman, Abhijit Kumar Das
-
Patent number: 10466286
Abstract: A system includes a controller, a node connected to one side of a capacitance, the controller configured to measure the capacitance by measuring a time for a voltage across the capacitance to reach a predetermined reference voltage, and the controller causing the time period for capacitance measurements to vary even when the capacitance is constant.
Type: Grant
Filed: December 18, 2017
Date of Patent: November 5, 2019
Assignee: TEXAS INSTRUMENTS INCORPORATED
Inventors: Krishnasawamy Nagaraj, Paul Kimelman, Abhijit Kumar Das
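The varying measurement period can be pictured as pseudo-random dither added to the interval between charge-time measurements, so a periodic noise source cannot stay synchronized with the sampling. The dither range and the tiny PRNG below are assumptions for illustration only.

```c
#include <stdint.h>
#include <stdio.h>

#define BASE_PERIOD_US 1000u   /* nominal spacing between measurements */
#define DITHER_MAX_US   100u   /* assumed dither range                 */

/* Small xorshift PRNG; any pseudo-random source would do here. */
static uint32_t prng_state = 0x12345678u;
static uint32_t prng_next(void)
{
    prng_state ^= prng_state << 13;
    prng_state ^= prng_state >> 17;
    prng_state ^= prng_state << 5;
    return prng_state;
}

/* Interval to wait before the next capacitance measurement: the period
 * changes from sample to sample even when the capacitance does not. */
static uint32_t next_measurement_period_us(void)
{
    return BASE_PERIOD_US + (prng_next() % DITHER_MAX_US);
}

int main(void)
{
    for (int i = 0; i < 4; i++)
        printf("measurement %d after %u us\n", i, next_measurement_period_us());
    return 0;
}
```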
-
Patent number: 10422822
Abstract: A capacitive sensing system includes a controller, a node connected to one side of a capacitance, the controller configured to measure the capacitance by measuring a time for a voltage across the capacitance to reach a predetermined reference voltage, a noise measurement circuit configured to measure electrical noise on the node, and the controller receiving the measurement of noise from the noise measurement circuit.
Type: Grant
Filed: July 30, 2018
Date of Patent: September 24, 2019
Assignee: TEXAS INSTRUMENTS INCORPORATED
Inventors: Krishnasawamy Nagaraj, Paul Kimelman, Abhijit Kumar Das
-
Publication number: 20190131960
Abstract: A delay circuit, including a connector pad to receive a data input, a pad pin to receive a clock input having a clock edge, a first data line to receive the data input, a second data line to receive the data input, the second data line including a delay circuit that outputs a delayed data output, and at least one logic gate to accept the data input and delayed data output and output a logic state, wherein the logic state determines whether there is a glitch in the delayed data output, and wherein the delay circuit includes at least one delay element to register an output of the at least one logic gate at the clock edge to recognize the glitch.
Type: Application
Filed: October 26, 2017
Publication date: May 2, 2019
Inventor: Paul Kimelman
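The glitch check can be modeled in discrete time: compare the raw data line with its delayed copy (an XOR here) and register that comparison at the clock edge, where a registered 1 indicates the line changed within the delay window. The delay depth and the sample sequence below are invented for the simulation; real timing behaviour depends on when the data is allowed to change relative to the clock.

```c
#include <stdio.h>

#define DELAY_TICKS 2   /* assumed depth of the delay element chain */

int main(void)
{
    /* Data line sampled once per clock tick; the brief drop at tick 4 is the glitch. */
    int data[] = { 1, 1, 1, 1, 0, 1, 1, 1, 1, 1 };
    int ticks  = (int)(sizeof data / sizeof data[0]);

    for (int t = 0; t < ticks; t++) {
        int delayed  = (t >= DELAY_TICKS) ? data[t - DELAY_TICKS] : data[0];
        int mismatch = data[t] ^ delayed;   /* logic gate comparing raw vs delayed */
        /* Register the gate output at the clock edge: a 1 means the raw line and
         * its delayed copy disagree, which happens around the short pulse (and its
         * delayed echo) but not while the line is steady. */
        if (mismatch)
            printf("tick %d: glitch indication registered\n", t);
    }
    return 0;
}
```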