Patents by Inventor Hanno Lieske

Hanno Lieske has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Patent number: 11176689
    Abstract: A first comparator compares first input data with second input data and provides, as a first comparison result, one when the first input data is larger than the second input data and zero when it is equal to or smaller. A data generator generates data based on the second input data. A second comparator compares the first input data with the generated data and provides, as a second comparison result, one when the first input data is larger than the generated data and zero when it is equal to or smaller. A data initializer initializes third input data. An adder adds the first and second comparison results to the third input data initialized in advance and provides the sum as the current third input data. A minimal code sketch of this datapath follows this entry.
    Type: Grant
    Filed: December 7, 2017
    Date of Patent: November 16, 2021
    Assignee: RENESAS ELECTRONICS CORPORATION
    Inventor: Hanno Lieske
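    The abstract above describes a small compare-and-accumulate datapath. The C sketch below is a rough software model of it, under assumptions of my own: the values are unsigned integers, and the data generator is a placeholder that simply adds one to the second input, since the abstract only says data is generated based on the second input.

    ```c
    #include <stdio.h>
    #include <stdint.h>

    /* First and second comparators: output 1 if a > b, else 0. */
    static uint32_t compare(uint32_t a, uint32_t b) { return a > b ? 1u : 0u; }

    /* Hypothetical data generator derived from the second input data. */
    static uint32_t generate(uint32_t second) { return second + 1u; }

    /* One update: add both comparison results to the running third value. */
    static uint32_t step(uint32_t first, uint32_t second, uint32_t third)
    {
        uint32_t c1 = compare(first, second);            /* first comparator  */
        uint32_t c2 = compare(first, generate(second));  /* second comparator */
        return third + c1 + c2;                          /* adder             */
    }

    int main(void)
    {
        uint32_t third = 0;  /* third input data, initialized in advance */
        uint32_t second_inputs[] = {3, 7, 7, 12};
        uint32_t first = 9;

        for (size_t i = 0; i < sizeof second_inputs / sizeof second_inputs[0]; i++)
            third = step(first, second_inputs[i], third);

        printf("accumulated comparison count: %u\n", (unsigned)third);
        return 0;
    }
    ```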
  • Patent number: 10996877
    Abstract: Limitations on memory access decrease the computing capability of related-art semiconductor devices during convolution processing in a convolutional neural network. A semiconductor device according to an aspect of the present invention includes an accelerator section that performs computation on a plurality of intermediate layers of a convolutional neural network by using a memory whose banks can change their read/write status on an individual bank basis. The accelerator section includes a network layer control section that controls a memory control section so as to change the read/write status assigned to the banks storing the input or output data of the intermediate layers in accordance with the transfer amounts and transfer rates of that data. A simplified sketch of such bank switching follows this entry.
    Type: Grant
    Filed: May 7, 2019
    Date of Patent: May 4, 2021
    Assignee: RENESAS ELECTRONICS CORPORATION
    Inventors: Manabu Sasamoto, Atsushi Nakamura, Hanno Lieske, Shigeru Matsuo
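    As a rough illustration only (not the patented controller), the C sketch below models per-bank read/write role assignment between CNN layers; the bank count, the layer list, and the way the number of input banks is chosen are assumptions made for the example.

    ```c
    #include <stdio.h>

    #define NUM_BANKS 4

    typedef enum { BANK_READ, BANK_WRITE } bank_mode_t;

    /* Assign the first n_in banks as read (layer input) and the rest as
     * write (layer output); a real controller would derive this from the
     * layer's transfer amounts and transfer rates. */
    static void assign_banks(bank_mode_t mode[], int n_in)
    {
        for (int b = 0; b < NUM_BANKS; b++)
            mode[b] = (b < n_in) ? BANK_READ : BANK_WRITE;
    }

    int main(void)
    {
        bank_mode_t mode[NUM_BANKS];
        /* Hypothetical input-bank counts for three intermediate layers,
         * standing in for per-layer transfer amounts. */
        int in_banks_per_layer[] = {1, 2, 3};

        for (int layer = 0; layer < 3; layer++) {
            assign_banks(mode, in_banks_per_layer[layer]);
            printf("layer %d:", layer);
            for (int b = 0; b < NUM_BANKS; b++)
                printf(" bank%d=%s", b, mode[b] == BANK_READ ? "R" : "W");
            printf("\n");
        }
        return 0;
    }
    ```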
  • Publication number: 20200311952
    Abstract: A first comparator compares first input data with second input data and provides, as a first comparison result, one when the first input data is larger than the second input data and zero when it is equal to or smaller. A data generator generates data based on the second input data. A second comparator compares the first input data with the generated data and provides, as a second comparison result, one when the first input data is larger than the generated data and zero when it is equal to or smaller. A data initializer initializes third input data. An adder adds the first and second comparison results to the third input data initialized in advance and provides the sum as the current third input data.
    Type: Application
    Filed: December 7, 2017
    Publication date: October 1, 2020
    Inventor: Hanno Lieske
  • Publication number: 20190361620
    Abstract: Limitations on memory access decrease the computing capability of related-art semiconductor devices during convolution processing in a convolutional neural network. A semiconductor device according to an aspect of the present invention includes an accelerator section that performs computation on a plurality of intermediate layers of a convolutional neural network by using a memory whose banks can change their read/write status on an individual bank basis. The accelerator section includes a network layer control section that controls a memory control section so as to change the read/write status assigned to the banks storing the input or output data of the intermediate layers in accordance with the transfer amounts and transfer rates of that data.
    Type: Application
    Filed: May 7, 2019
    Publication date: November 28, 2019
    Inventors: Manabu Sasamoto, Atsushi Nakamura, Hanno Lieske, Shigeru Matsuo
  • Patent number: 9996500
    Abstract: The present invention provides a fast, concurrent transfer of multiple ROI areas between an internal memory array and a single memory, where each PE can specify the parameter set for its area independently of the other PEs. For a read transfer, for example, the requests are generated so that the first element of each ROI area is requested from the single memory for each PE before any following elements are requested. After the first element of each ROI area has been received from the single memory in a control processor and transferred from the control processor over a bus system to the internal memory array, all of these elements are stored to the internal memory array in parallel. Then the second element of each ROI area is requested from the single memory for each PE, and so on. The transfer finishes after all elements of each ROI area have been transferred to their assigned PEs. The request ordering is sketched after this entry.
    Type: Grant
    Filed: September 27, 2011
    Date of Patent: June 12, 2018
    Assignee: RENESAS ELECTRONICS CORPORATION
    Inventor: Hanno Lieske
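    The element-interleaved request order described above can be modelled in a few lines. The following C sketch is an illustration under assumed ROI parameters (base address, stride, and length per PE); it only prints the request sequence rather than moving real data.

    ```c
    #include <stdio.h>

    #define NUM_PE 4

    typedef struct {
        int base;    /* start address of the ROI in the single memory */
        int stride;  /* distance between consecutive ROI elements     */
        int length;  /* number of elements in the ROI                 */
    } roi_t;

    int main(void)
    {
        /* Each PE specifies its own ROI parameter set independently
         * (values are made up for the example). */
        roi_t roi[NUM_PE] = {
            {  0, 1, 3}, {256, 2, 3}, {512, 4, 2}, {768, 8, 3},
        };

        int max_len = 0;
        for (int pe = 0; pe < NUM_PE; pe++)
            if (roi[pe].length > max_len) max_len = roi[pe].length;

        /* Element-major request order: element e of every ROI is requested
         * before element e+1 of any ROI; after each round the received
         * elements are stored to the internal memory array in parallel. */
        for (int e = 0; e < max_len; e++) {
            for (int pe = 0; pe < NUM_PE; pe++) {
                if (e >= roi[pe].length) continue;
                int addr = roi[pe].base + e * roi[pe].stride;
                printf("request element %d for PE %d at address %d\n", e, pe, addr);
            }
            printf("-- store element %d to all PEs in parallel --\n", e);
        }
        return 0;
    }
    ```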
  • Patent number: 9143804
    Abstract: Compared to the related art, where the horizontal time distance between two Processing Units processing neighboring MB rows is 2 Synchronization Intervals (SI), the present invention makes it possible to reduce that distance from 2 SIs to 1 SI, which improves start-up phase performance. This is achieved by dividing the filter task into sub tasks and reordering their execution order. The sub tasks include a vertical edge filter task and a horizontal edge filter task, and the synchronization is scheduled between the vertical edge filter task and the horizontal edge filter task. The reordered schedule is sketched after this entry.
    Type: Grant
    Filed: October 29, 2009
    Date of Patent: September 22, 2015
    Assignee: NEC CORPORATION
    Inventor: Hanno Lieske
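    The sketch below illustrates the reordered schedule for one processing unit: the vertical-edge sub task runs first, the synchronization point sits between the sub tasks, and the horizontal-edge sub task follows. Function names and the single-threaded driver are illustrative assumptions, not taken from the patent.

    ```c
    #include <stdio.h>

    /* Vertical edge filter sub task for macroblock (row, mb). */
    static void filter_vertical_edges(int row, int mb)
    {
        printf("PU%d: MB(%d,%d) vertical edge filter\n", row, row, mb);
    }

    /* Horizontal edge filter sub task for macroblock (row, mb). */
    static void filter_horizontal_edges(int row, int mb)
    {
        printf("PU%d: MB(%d,%d) horizontal edge filter\n", row, row, mb);
    }

    /* Synchronization with the unit handling the neighboring MB row; in the
     * reordered schedule this happens between the two sub tasks. */
    static void synchronize(int row, int mb)
    {
        printf("PU%d: MB(%d,%d) sync with row %d\n", row, row, mb, row + 1);
    }

    int main(void)
    {
        const int mbs_per_row = 3;

        /* Single-threaded walk through one processing unit's schedule. */
        for (int mb = 0; mb < mbs_per_row; mb++) {
            filter_vertical_edges(0, mb);
            synchronize(0, mb);
            filter_horizontal_edges(0, mb);
        }
        return 0;
    }
    ```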
  • Publication number: 20140237214
    Abstract: The present invention provides a fast, concurrent transfer of multiple ROI areas between an internal memory array and a single memory, where each PE can specify the parameter set for its area independently of the other PEs. For a read transfer, for example, the requests are generated so that the first element of each ROI area is requested from the single memory for each PE before any following elements are requested. After the first element of each ROI area has been received from the single memory in a control processor and transferred from the control processor over a bus system to the internal memory array, all of these elements are stored to the internal memory array in parallel. Then the second element of each ROI area is requested from the single memory for each PE, and so on. The transfer finishes after all elements of each ROI area have been transferred to their assigned PEs.
    Type: Application
    Filed: September 27, 2011
    Publication date: August 21, 2014
    Applicant: RENESAS ELECTRONICS CORPORATION
    Inventor: Hanno Lieske
  • Patent number: 8683106
    Abstract: Many architectures today have processing units with different bandwidth requirements that are connected over a pipelined ring bus. The proposed invention optimizes data transfer by allowing processing units with lower bandwidth requirements to be grouped and controlled together for a transfer, so that the available bus bandwidth can be used optimally. A grouping sketch follows this entry.
    Type: Grant
    Filed: March 3, 2008
    Date of Patent: March 25, 2014
    Assignee: NEC Corporation
    Inventors: Hanno Lieske, Shorin Kyo
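    As a loose illustration of the grouping idea (not the actual bus protocol), the C sketch below packs low-bandwidth units into shared transfers while high-bandwidth units keep dedicated transfers; the bus width, the per-unit requirements, and the first-fit packing policy are assumptions.

    ```c
    #include <stdio.h>

    #define BUS_WIDTH 32   /* bits moved per ring-bus transfer (assumed) */
    #define NUM_UNITS 5

    int main(void)
    {
        /* Per-unit bandwidth requirement in bits per transfer (assumed). */
        int need[NUM_UNITS] = {32, 8, 8, 16, 32};

        int group_fill = 0;   /* bits already packed into the open group */
        int group_id = 0;

        for (int u = 0; u < NUM_UNITS; u++) {
            if (need[u] >= BUS_WIDTH) {
                /* High-bandwidth unit: dedicated transfer. */
                printf("unit %d -> dedicated transfer\n", u);
                continue;
            }
            if (group_fill + need[u] > BUS_WIDTH) {
                /* Current group is full; start a new shared transfer. */
                group_id++;
                group_fill = 0;
            }
            group_fill += need[u];
            printf("unit %d -> shared transfer group %d (%d/%d bits used)\n",
                   u, group_id, group_fill, BUS_WIDTH);
        }
        return 0;
    }
    ```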
  • Publication number: 20130159625
    Abstract: An information processing device includes an internal memory that operates faster than an external memory, and a memory controller that controls data transfer between the internal memory and the external memory. The memory controller controls a first data transfer from the external memory to the internal memory and a second data transfer from the internal memory to the external memory. The second data transfer transfers part of the data area brought into the internal memory by the first data transfer, and the data that is read out non-contiguously from the internal memory is written back in place to the external memory. The two transfers are sketched after this entry.
    Type: Application
    Filed: September 6, 2010
    Publication date: June 20, 2013
    Inventor: Hanno Lieske
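    A rough software model of the two transfers, under assumptions of my own (an 8-element block and a stride-2 selection standing in for the non-contiguous read), is shown below.

    ```c
    #include <stdio.h>
    #include <string.h>

    #define BLOCK 8

    int main(void)
    {
        int external[BLOCK] = {10, 11, 12, 13, 14, 15, 16, 17};
        int internal[BLOCK];

        /* First data transfer: external memory -> internal memory. */
        memcpy(internal, external, sizeof internal);

        /* Processing step (placeholder): modify every element internally. */
        for (int i = 0; i < BLOCK; i++)
            internal[i] += 100;

        /* Second data transfer: only a non-contiguous part (every second
         * element here) is read from internal memory and written back to
         * the same positions ("in place") in external memory. */
        for (int i = 0; i < BLOCK; i += 2)
            external[i] = internal[i];

        for (int i = 0; i < BLOCK; i++)
            printf("external[%d] = %d\n", i, external[i]);
        return 0;
    }
    ```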
  • Publication number: 20120213297
    Abstract: Compared to the related art, where the horizontal time distance between two Processing Units processing neighboring MB rows is 2 Synchronization Intervals (SI), the present invention makes it possible to reduce that distance from 2 SIs to 1 SI, which improves start-up phase performance. This is achieved by dividing the filter task into sub tasks and reordering their execution order. The sub tasks include a vertical edge filter task and a horizontal edge filter task, and the synchronization is scheduled between the vertical edge filter task and the horizontal edge filter task.
    Type: Application
    Filed: October 29, 2009
    Publication date: August 23, 2012
    Inventor: Hanno Lieske
  • Patent number: 8190856
    Abstract: A processor with a SIMD/MIMD dual-mode architecture comprises common-controlled first processing elements, self-controlled second processing elements, and a pipelined (ring) network connecting the first and second PEs sequentially. An access controller has access control lines, each connected to one of the first and second PEs to control the data access timing between that PE and the network. Each PE can be either self-controlled or common-controlled, as in dual-mode SIMD/MIMD architectures, which reduces the wiring area requirement. A toy model of the two control modes follows this entry.
    Type: Grant
    Filed: March 6, 2007
    Date of Patent: May 29, 2012
    Assignee: NEC Corporation
    Inventors: Hanno Lieske, Shorin Kyo
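    The toy model below illustrates only the control-mode distinction: common-controlled PEs execute the instruction broadcast by a shared controller (SIMD), while self-controlled PEs fetch from their own short programs (MIMD). The PE count, the encoding of an "instruction" as a plain operand, and the omission of the ring network and access control lines are simplifying assumptions.

    ```c
    #include <stdio.h>

    #define NUM_PE 4

    typedef enum { COMMON_CONTROLLED, SELF_CONTROLLED } pe_mode_t;

    typedef struct {
        pe_mode_t mode;
        int       pc;    /* program counter, used only by self-controlled PEs */
        int       acc;   /* a single accumulator register                     */
    } pe_t;

    /* One cycle: each PE executes either the broadcast operand (SIMD) or the
     * next operand of its own two-entry program (MIMD). */
    static void cycle(pe_t pe[], int broadcast_op, int own_program[][2])
    {
        for (int i = 0; i < NUM_PE; i++) {
            int op = (pe[i].mode == COMMON_CONTROLLED)
                         ? broadcast_op
                         : own_program[i][pe[i].pc++ % 2];
            pe[i].acc += op;   /* "execute": just accumulate the operand */
        }
    }

    int main(void)
    {
        pe_t pe[NUM_PE] = {
            {COMMON_CONTROLLED, 0, 0}, {COMMON_CONTROLLED, 0, 0},
            {SELF_CONTROLLED,   0, 0}, {SELF_CONTROLLED,   0, 0},
        };
        int own_program[NUM_PE][2] = {{0, 0}, {0, 0}, {5, 7}, {2, 3}};

        for (int t = 0; t < 2; t++)
            cycle(pe, 1 /* broadcast operand */, own_program);

        for (int i = 0; i < NUM_PE; i++)
            printf("PE%d (%s) acc=%d\n", i,
                   pe[i].mode == COMMON_CONTROLLED ? "SIMD" : "MIMD", pe[i].acc);
        return 0;
    }
    ```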
  • Publication number: 20120030448
    Abstract: A single instruction multiple data (SIMD) processor has a plurality of processor elements and includes a splitting unit and a comparing unit. The splitting unit splits an address of read-only parameter data in the data memory into a first part and a second part at a bit position corresponding to the number of processor elements. The comparing unit compares the number of shifts, on a ring bus, of the read-only parameter data, which is taken from the internal memory at the address given by the first part, with the difference between a processor element's own position and the portion of the global address corresponding to the second part, which designates the ring position of the processor element storing the data, so as to cause the other processor elements to take the read-only parameter data. The address split is sketched after this entry.
    Type: Application
    Filed: September 25, 2009
    Publication date: February 2, 2012
    Applicant: NEC CORPORATION
    Inventor: Hanno Lieske
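    A simplified sketch of the address split for a ring of eight PEs is shown below; the choice to place the PE-selecting bits in the low part of the address, the power-of-two PE count, and the modular ring distance are all assumptions made for illustration.

    ```c
    #include <stdio.h>

    #define NUM_PE   8             /* PEs on the ring (power of two assumed) */
    #define PE_BITS  3             /* log2(NUM_PE)                           */
    #define PE_MASK  (NUM_PE - 1)

    int main(void)
    {
        unsigned global_addr = 0x2B;                    /* example global address */
        unsigned owner_pe    = global_addr & PE_MASK;   /* part selecting the ring position */
        unsigned local_addr  = global_addr >> PE_BITS;  /* part giving the local address    */

        printf("global 0x%02X -> owner PE %u, local address 0x%02X\n",
               global_addr, owner_pe, local_addr);

        /* Each PE takes the parameter from the ring bus when the number of
         * shifts equals its distance from the owning PE. */
        for (unsigned pe = 0; pe < NUM_PE; pe++) {
            unsigned shifts_needed = (pe - owner_pe) & PE_MASK;
            printf("PE %u takes the data after %u ring shift(s)\n",
                   pe, shifts_needed);
        }
        return 0;
    }
    ```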
  • Publication number: 20110010526
    Abstract: Many architectures today have processing units with different bandwidth requirements that are connected over a pipelined ring bus. The proposed invention optimizes data transfer by allowing processing units with lower bandwidth requirements to be grouped and controlled together for a transfer, so that the available bus bandwidth can be used optimally.
    Type: Application
    Filed: March 3, 2008
    Publication date: January 13, 2011
    Inventors: Hanno Lieske, Shorin Kyo
  • Publication number: 20100088489
    Abstract: A processor with a SIMD/MIMD dual-mode architecture comprises common-controlled first processing elements, self-controlled second processing elements, and a pipelined (ring) network connecting the first and second PEs sequentially. An access controller has access control lines, each connected to one of the first and second PEs to control the data access timing between that PE and the network. Each PE can be either self-controlled or common-controlled, as in dual-mode SIMD/MIMD architectures, which reduces the wiring area requirement.
    Type: Application
    Filed: March 6, 2007
    Publication date: April 8, 2010
    Inventors: Hanno Lieske, Shorin Kyo