Patents by Inventor Erik Norden

Erik Norden has filed for patents to protect the following inventions. This listing includes both pending patent applications and patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Patent number: 12098688
    Abstract: A method for operating an injection valve by determining an opening or closing time of the injection valve based on a sensor signal. The method includes: providing an evaluation point time series by sampling a sensor signal of a sensor of the injection valve; using a non-linear data-based first sub-model to obtain a first output vector based on the evaluation point time series, wherein each element of the first output vector is associated with a specific time; using a linear, data-based second sub-model to obtain a second output vector based on the evaluation point time series, wherein each element of the second output vector is associated with a specific time; limiting the time determined by the first output vector depending on the second output vector in order to obtain the opening or closing time.
    Type: Grant
    Filed: September 10, 2021
    Date of Patent: September 24, 2024
    Assignee: ROBERT BOSCH GMBH
    Inventors: Andreas Hopf, Erik Tonner, Frank Kowol, Jens-Holger Barth, Konrad Groh, Matthias Woehrle, Mona Meister, Roland Norden
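The two-sub-model scheme in the abstract above can be pictured with a short sketch. This is an illustration only, not the patented method: the model callables, the 0.5 admissibility threshold, and the argmax-plus-clipping logic are assumptions introduced here.

```python
import numpy as np

def estimate_switching_time(evaluation_points, nonlinear_model, linear_model):
    """Combine a non-linear and a linear data-based sub-model to estimate an
    opening/closing time from a sampled sensor signal (illustrative only)."""
    t_grid = evaluation_points["times"]    # times associated with the vector elements
    x = evaluation_points["values"]        # sampled sensor values

    first_out = nonlinear_model(x)         # non-linear, data-based first sub-model
    second_out = linear_model(x)           # linear, data-based second sub-model

    # Candidate time from the non-linear model: time of its strongest response.
    t_candidate = t_grid[np.argmax(first_out)]

    # One way to "limit" the candidate time depending on the second output
    # vector: only times whose linear score exceeds a threshold are admissible.
    admissible = t_grid[second_out > 0.5]
    return float(np.clip(t_candidate, admissible.min(), admissible.max()))

# Hypothetical usage with stand-in models on a synthetic signal.
pts = {"times": np.linspace(0.0, 4.0e-3, 50), "values": np.random.rand(50)}
t = estimate_switching_time(
    pts,
    nonlinear_model=lambda x: np.convolve(x, np.ones(3) / 3.0, mode="same"),
    linear_model=lambda x: np.ones_like(x))
print(t)
```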
  • Patent number: 12092050
    Abstract: A method for training a data-based evaluation model to determine an opening or closing time of an injection valve based on a sensor signal. The method includes: measuring an operation of the injection valve in order to determine at least one sensor signal and an associated opening or closing time; sampling the sensor signal at a sampling rate in order to obtain a sensor signal time series with sensor signal values; determining a plurality of training data sets by assigning a plurality of evaluation point time series generated from a sensor signal time series to the opening or closing time associated with the sensor signal, wherein the evaluation point time series has a lower temporal resolution than the sensor signal time series; training the data-based evaluation model depending on the determined training data sets.
    Type: Grant
    Filed: September 10, 2021
    Date of Patent: September 17, 2024
    Assignee: ROBERT BOSCH GMBH
    Inventors: Andreas Hopf, Erik Tonner, Frank Kowol, Jens-Holger Barth, Konrad Groh, Matthias Woehrle, Mona Meister, Roland Norden
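A rough sketch of the training-data step described above, assuming (this is not stated in the abstract) that the lower-resolution evaluation-point series are obtained by subsampling the sensor series at different phase offsets:

```python
import numpy as np

def build_training_sets(sensor_series, switching_time, stride=8):
    """Derive several low-resolution evaluation-point series from one
    high-rate sensor time series and label each with the measured
    opening/closing time. The phase-offset subsampling is an assumption."""
    training_sets = []
    for offset in range(stride):
        evaluation_series = sensor_series[offset::stride]  # lower temporal resolution
        training_sets.append((evaluation_series, switching_time))
    return training_sets

# Hypothetical usage with a synthetic signal sampled at a high rate.
signal = np.sin(np.linspace(0.0, 10.0, 4000))
datasets = build_training_sets(signal, switching_time=3.2e-3)
print(len(datasets), datasets[0][0].shape)
```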
  • Publication number: 20240271964
    Abstract: A linear position transducer (10) comprises a sensor rod (30), a plurality of Hall effect sensor elements (34), an axial ring magnet (40) and an embedded microcontroller system (24). The axial ring magnet is arranged around the sensor rod. The Hall effect sensor elements are arranged within an interior (31) of the sensor rod. The Hall effect sensor elements are arranged with an off-axis displacement with respect to an axis of the sensor rod and are configured to provide signals representing at least two components, transverse to each other, of a magnetic field at the respective position. The embedded microcontroller system is communicatively connected to the Hall effect sensor elements and is configured for determining a relative axial position between the axial ring magnet and the sensor rod based on received signals representing the two components of the magnetic field from each of at least two Hall effect sensor elements.
    Type: Application
    Filed: April 28, 2022
    Publication date: August 15, 2024
    Inventors: Linus FALK, Henrik NORDÉN, Erik LEJMAN, Sofia LÖFSTRAND
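For illustration only, a toy position estimate from two-component Hall readings. The atan2-based interpolation below is a common technique for off-axis Hall transducers and is assumed here; it is not necessarily the algorithm of the application.

```python
import math

def estimate_position(sensor_readings, sensor_pitch_mm):
    """Estimate the axial position of a ring magnet along a sensor rod from
    Hall elements that each report two transverse field components (Bz, Br)."""
    # Coarse position: index of the element seeing the strongest field.
    magnitudes = [math.hypot(bz, br) for bz, br in sensor_readings]
    i = max(range(len(magnitudes)), key=magnitudes.__getitem__)

    # Fine position: the field angle rotates as the magnet passes an element,
    # so it can be mapped onto a fraction of the sensor pitch (assumed linear).
    bz, br = sensor_readings[i]
    fraction = math.atan2(br, bz) / (2.0 * math.pi)
    return (i + fraction) * sensor_pitch_mm

# Hypothetical (Bz, Br) readings in arbitrary units for four elements, 5 mm pitch.
print(estimate_position([(0.1, 0.0), (0.9, 0.4), (0.3, -0.2), (0.0, 0.0)], 5.0))
```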
  • Publication number: 20240265233
    Abstract: Embodiments relate to a neural processor circuit with scalable architecture for instantiating one or more neural networks. The neural processor circuit includes a data buffer coupled to a memory external to the neural processor circuit, and a plurality of neural engine circuits. To execute tasks that instantiate the neural networks, each neural engine circuit generates output data using input data and kernel coefficients. A neural processor circuit may include multiple neural engine circuits that are selectively activated or deactivated according to configuration data of the tasks. Furthermore, an electronic device may include multiple neural processor circuits that are selectively activated or deactivated to execute the tasks.
    Type: Application
    Filed: March 22, 2024
    Publication date: August 8, 2024
    Applicant: Apple Inc.
    Inventors: Erik Norden, Liran Fishel, Sung Hee Park, Jaewon Shin, Christopher L. Mills, Seungjin Lee, Fernando A. Mujica
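A toy model of the selective activation described above; the TaskConfig fields, the engine count, and the round-robin work distribution are invented for illustration and do not reflect the actual configuration data of the application.

```python
from dataclasses import dataclass

@dataclass
class TaskConfig:
    """Illustrative per-task configuration; the real format is not disclosed here."""
    name: str
    engines_needed: int

class NeuralEngine:
    def __init__(self, idx):
        self.idx = idx
        self.active = False

    def run(self, work_item):
        # Placeholder for a convolution over input data and kernel coefficients.
        return f"engine {self.idx}: {work_item}"

class NeuralProcessor:
    """Toy model of selectively activating neural engine circuits per task."""
    def __init__(self, num_engines=8):
        self.engines = [NeuralEngine(i) for i in range(num_engines)]

    def execute(self, task, work_items):
        # Activate only as many engines as the task's configuration asks for;
        # the rest stay deactivated (e.g. power-gated in hardware).
        for i, engine in enumerate(self.engines):
            engine.active = i < task.engines_needed
        active = [e for e in self.engines if e.active]
        return [active[i % len(active)].run(w) for i, w in enumerate(work_items)]

print(NeuralProcessor().execute(TaskConfig("conv1", engines_needed=2),
                                ["tile0", "tile1", "tile2"]))
```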
  • Patent number: 11989640
    Abstract: Embodiments relate to a neural processor circuit with scalable architecture for instantiating one or more neural networks. The neural processor circuit includes a data buffer coupled to a memory external to the neural processor circuit, and a plurality of neural engine circuits. To execute tasks that instantiate the neural networks, each neural engine circuit generates output data using input data and kernel coefficients. A neural processor circuit may include multiple neural engine circuits that are selectively activated or deactivated according to configuration data of the tasks. Furthermore, an electronic device may include multiple neural processor circuits that are selectively activated or deactivated to execute the tasks.
    Type: Grant
    Filed: November 21, 2022
    Date of Patent: May 21, 2024
    Assignee: Apple Inc.
    Inventors: Erik Norden, Liran Fishel, Sung Hee Park, Jaewon Shin, Christopher L. Mills, Seungjin Lee, Fernando A. Mujica
  • Publication number: 20230099652
    Abstract: Embodiments relate to a neural processor circuit with scalable architecture for instantiating one or more neural networks. The neural processor circuit includes a data buffer coupled to a memory external to the neural processor circuit, and a plurality of neural engine circuits. To execute tasks that instantiate the neural networks, each neural engine circuit generates output data using input data and kernel coefficients. A neural processor circuit may include multiple neural engine circuits that are selectively activated or deactivated according to configuration data of the tasks. Furthermore, an electronic device may include multiple neural processor circuits that are selectively activated or deactivated to execute the tasks.
    Type: Application
    Filed: November 21, 2022
    Publication date: March 30, 2023
    Inventors: Erik Norden, Liran Fishel, Sung Hee Park, Jaewon Shin, Christopher L. Mills, Seungjin Lee, Fernando A. Mujica
  • Publication number: 20220101096
    Abstract: Methods and apparatus for a knowledge-based deep learning refactoring model with tightly integrated functional nonparametric memory are disclosed. An example non-transitory computer readable medium comprises instructions that, when executed, cause a machine to at least estimate a first information extraction cost corresponding to retrieval of information from a local knowledge base, estimate a second information extraction cost corresponding to retrieval of information from a remote knowledge base, select an information source based on the first and second estimated information extraction costs, query the selected information source, in response to determining that the selected information source was an external information source, store the queried information in the local knowledge base, organize the stored information in the local knowledge base, and return the queried information.
    Type: Application
    Filed: December 13, 2021
    Publication date: March 31, 2022
    Inventors: Gadi Singer, Nagib Hakim, Phillip Howard, Daniel Korat, Vasudev Lal, Arden Ma, Erik Norden, Ze'ev Rivlin, Ana Paula Quirino Simoes, Oren Pereg, Moshe Wasserblat
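A minimal sketch of the cost-based source selection and local caching flow in the abstract above; the dict-based knowledge bases and the cost functions are stand-ins, not the disclosed model.

```python
def retrieve(query, local_kb, remote_kb, local_cost_fn, remote_cost_fn):
    """Pick the cheaper information source, query it, and cache remote results
    locally, following the flow in the abstract (illustrative only)."""
    local_cost = local_cost_fn(query, local_kb)
    remote_cost = remote_cost_fn(query, remote_kb)

    if local_cost <= remote_cost:
        return local_kb.get(query)

    result = remote_kb.get(query)   # external source selected
    local_kb[query] = result        # store (and trivially organize) locally
    return result

local = {"paris": "capital of France"}
remote = {"oslo": "capital of Norway", "paris": "capital of France"}
# Hypothetical costs: local lookups are cheap only when the key is present.
answer = retrieve("oslo", local, remote,
                  lambda q, kb: 1 if q in kb else 100,
                  lambda q, kb: 10)
print(answer, local)
```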
  • Publication number: 20210064958
    Abstract: Embodiments of the present disclosure are directed toward techniques and configurations for an optical accelerator including a photonics integrated circuit (PIC) for an optical neural network (ONN). In embodiments, an optical accelerator package includes the PIC and an electronics integrated circuit (EIC) that is heterogeneously integrated into the optical accelerator package to proximally provide pre- and post-processing of optical signal inputs and optical signal outputs provided to and received from an optical matrix multiplier of the PIC. In some embodiments, the EIC is a single EIC or discrete EICs to provide pre- and post-processing of the optical signal inputs and optical signal outputs including optical to electrical and electrical to optical transduction. Other embodiments may be described and/or claimed.
    Type: Application
    Filed: November 17, 2020
    Publication date: March 4, 2021
    Inventors: Wenhua Lin, Erik Norden, Bharadwaj Parthasarathy, Jin Hong, Minnie Ho
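An illustrative, purely numerical model of the signal chain sketched above (electrical pre-processing, E/O transduction, optical matrix multiply, O/E transduction, electrical post-processing); the scaling, clipping, and activation are assumptions for illustration.

```python
import numpy as np

def optical_accelerator_step(x, weights, scale=0.5):
    """Toy signal chain for an optical matrix multiplier with electrical
    pre- and post-processing (not the disclosed package architecture)."""
    # EIC pre-processing: condition the electrical input (e.g. clip/normalize).
    x_e = np.clip(x, -1.0, 1.0)

    # E/O transduction: map electrical values onto optical amplitudes.
    x_o = scale * x_e

    # Optical matrix multiply in the photonic integrated circuit.
    y_o = weights @ x_o

    # O/E transduction: photodetection back to the electrical domain.
    y_e = y_o / scale

    # EIC post-processing: e.g. an activation applied electronically.
    return np.maximum(y_e, 0.0)

print(optical_accelerator_step(np.array([0.2, 0.7]),
                               np.array([[1.0, 0.5], [0.0, 2.0]])))
```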
  • Patent number: 10877754
    Abstract: In an embodiment, a matrix computation engine is configured to perform matrix computations (e.g. matrix multiplications). The matrix computation engine may perform numerous matrix computations in parallel, in an embodiment. More particularly, the matrix computation engine may be configured to perform numerous multiplication operations in parallel on input matrix elements, generating resulting matrix elements. In an embodiment, the matrix computation engine may be configured to accumulate results in a result memory, performing multiply-accumulate operations for each matrix element of each matrix.
    Type: Grant
    Filed: March 13, 2020
    Date of Patent: December 29, 2020
    Assignee: Apple Inc.
    Inventors: Eric Bainville, Tal Uliel, Erik Norden, Jeffry E. Gonion, Ali Sazegari
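The multiply-accumulate behavior described above can be illustrated with plain Python loops. In the engine these multiplications run in parallel in hardware; the point of the sketch is the accumulation into a persistent result memory across calls.

```python
import numpy as np

def matmul_accumulate(result_memory, a, b):
    """Accumulate the product of a and b into result_memory instead of
    overwriting it (illustrative model of multiply-accumulate behavior)."""
    m, k = a.shape
    k2, n = b.shape
    assert k == k2 and result_memory.shape == (m, n)
    for i in range(m):
        for j in range(n):
            for p in range(k):          # in hardware these MACs run in parallel
                result_memory[i, j] += a[i, p] * b[p, j]
    return result_memory

acc = np.zeros((2, 2))
matmul_accumulate(acc, np.ones((2, 3)), np.ones((3, 2)))
matmul_accumulate(acc, np.ones((2, 3)), np.ones((3, 2)))  # results accumulate
print(acc)  # [[6. 6.] [6. 6.]]
```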
  • Publication number: 20200272464
    Abstract: In an embodiment, a matrix computation engine is configured to perform matrix computations (e.g. matrix multiplications). The matrix computation engine may perform numerous matrix computations in parallel, in an embodiment. More particularly, the matrix computation engine may be configured to perform numerous multiplication operations in parallel on input matrix elements, generating resulting matrix elements. In an embodiment, the matrix computation engine may be configured to accumulate results in a result memory, performing multiply-accumulate operations for each matrix element of each matrix.
    Type: Application
    Filed: March 13, 2020
    Publication date: August 27, 2020
    Inventors: Eric Bainville, Tal Uliel, Erik Norden, Jeffry E. Gonion, Ali Sazegari
  • Patent number: 10592239
    Abstract: In an embodiment, a matrix computation engine is configured to perform matrix computations (e.g. matrix multiplications). The matrix computation engine may perform numerous matrix computations in parallel, in an embodiment. More particularly, the matrix computation engine may be configured to perform numerous multiplication operations in parallel on input matrix elements, generating resulting matrix elements. In an embodiment, the matrix computation engine may be configured to accumulate results in a result memory, performing multiply-accumulate operations for each matrix element of each matrix.
    Type: Grant
    Filed: May 28, 2019
    Date of Patent: March 17, 2020
    Assignee: Apple Inc.
    Inventors: Eric Bainville, Tal Uliel, Erik Norden, Jeffry E. Gonion, Ali Sazegari
  • Publication number: 20190294441
    Abstract: In an embodiment, a matrix computation engine is configured to perform matrix computations (e.g. matrix multiplications). The matrix computation engine may perform numerous matrix computations in parallel, in an embodiment. More particularly, the matrix computation engine may be configured to perform numerous multiplication operations in parallel on input matrix elements, generating resulting matrix elements. In an embodiment, the matrix computation engine may be configured to accumulate results in a result memory, performing multiply-accumulate operations for each matrix element of each matrix.
    Type: Application
    Filed: May 28, 2019
    Publication date: September 26, 2019
    Inventors: Eric Bainville, Tal Uliel, Erik Norden, Jeffry E. Gonion, Ali Sazegari
  • Patent number: 10346163
    Abstract: In an embodiment, a matrix computation engine is configured to perform matrix computations (e.g. matrix multiplications). The matrix computation engine may perform numerous matrix computations in parallel, in an embodiment. More particularly, the matrix computation engine may be configured to perform numerous multiplication operations in parallel on input matrix elements, generating resulting matrix elements. In an embodiment, the matrix computation engine may be configured to accumulate results in a result memory, performing multiply-accumulate operations for each matrix element of each matrix.
    Type: Grant
    Filed: November 1, 2017
    Date of Patent: July 9, 2019
    Assignee: Apple Inc.
    Inventors: Eric Bainville, Tal Uliel, Erik Norden, Jeffry E. Gonion, Ali Sazegari
  • Publication number: 20190129719
    Abstract: In an embodiment, a matrix computation engine is configured to perform matrix computations (e.g. matrix multiplications). The matrix computation engine may perform numerous matrix computations in parallel, in an embodiment. More particularly, the matrix computation engine may be configured to perform numerous multiplication operations in parallel on input matrix elements, generating resulting matrix elements. In an embodiment, the matrix computation engine may be configured to accumulate results in a result memory, performing multiply-accumulate operations for each matrix element of each matrix.
    Type: Application
    Filed: November 1, 2017
    Publication date: May 2, 2019
    Inventors: Eric Bainville, Tal Uliel, Erik Norden, Jeffry E. Gonion, Ali Sazegari
  • Publication number: 20130058214
    Abstract: A communication system has special Agents in the subscriber terminals which detect the need of applications for data paths with QoS over the access network. The Agents have packet-based control channels to a Remote Resource Manager installed outside the network, typically as a web server, and use the control channels for sending bandwidth allocation requests to the Remote Resource Manager, which stores all bandwidth-relevant information for a subscriber and delivers bandwidth and QoS class back to the Agents, which adjust packet rate and packet QoS class marking accordingly. A Self-Sustaining Scheduler placed in the bottlenecks of the data path guarantees given delay times per QoS class and keeps packet drop rate below given limits if the Remote Resource Manager assigns bandwidth appropriately.
    Type: Application
    Filed: February 21, 2012
    Publication date: March 7, 2013
    Inventors: Andreas Foglar, Klaus Starnberger, Erik Norden
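A toy version of the Agent / Remote Resource Manager exchange described above; the per-subscriber cap and the granting rule are assumptions for illustration, not the disclosed protocol.

```python
class RemoteResourceManager:
    """Toy bandwidth arbiter: holds per-subscriber allocation state and
    answers requests with a granted rate and a QoS class."""
    def __init__(self, per_subscriber_cap_kbps):
        self.cap = per_subscriber_cap_kbps
        self.allocated = {}  # subscriber -> kbps already granted

    def request(self, subscriber, wanted_kbps, qos_class):
        used = self.allocated.get(subscriber, 0)
        granted = min(wanted_kbps, max(self.cap - used, 0))
        self.allocated[subscriber] = used + granted
        return granted, qos_class

class Agent:
    """Subscriber-side agent that requests bandwidth and adjusts its packet
    rate and QoS marking according to the grant."""
    def __init__(self, subscriber, rrm):
        self.subscriber, self.rrm = subscriber, rrm
        self.rate_kbps, self.qos_class = 0, None

    def open_data_path(self, wanted_kbps, qos_class):
        self.rate_kbps, self.qos_class = self.rrm.request(
            self.subscriber, wanted_kbps, qos_class)
        return self.rate_kbps, self.qos_class

rrm = RemoteResourceManager(per_subscriber_cap_kbps=10000)
print(Agent("sub-1", rrm).open_data_path(6000, "video"))
print(Agent("sub-1", rrm).open_data_path(6000, "video"))  # second request is capped
```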
  • Patent number: 7222251
    Abstract: An idle mode system has a clock gating circuit, a bus interface unit, memory interfaces and an interrupt and idle control unit. The clock gating circuit receives a first clock and designated idle-acknowledge signals. The clock gating circuit produces a second clock signal based on the first clock signal when fewer than all designated idle-acknowledge signals are received. The clock gating circuit produces no second clock signal when all designated idle-acknowledge signals are received. The bus interface unit receives bus access requests and receives the first and second clock signals. When a bus access request is made, the bus interface unit de-asserts its idle-acknowledge signal and passes the bus access request. The memory interfaces operate on the second clock. One interface receives the bus access request from the bus interface unit, withdraws its idle-acknowledge signal, processes the bus access request, and re-asserts its idle-acknowledge signal upon completion.
    Type: Grant
    Filed: February 5, 2003
    Date of Patent: May 22, 2007
    Assignee: Infineon Technologies AG
    Inventors: Sagheer Ahmad, Erik Norden, Rob Ober
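The gating rule in the abstract reduces to a small predicate. A minimal model, assuming the designated idle-acknowledge signals are simply combined:

```python
def gated_clock(first_clock, idle_acks):
    """The second clock follows the first clock only while at least one
    designated idle-acknowledge signal is de-asserted; once every unit
    acknowledges idle, the second clock is suppressed."""
    all_idle = all(idle_acks.values())
    return first_clock and not all_idle

# Hypothetical idle-acknowledge signals from the bus interface and memory interfaces.
acks = {"bus_interface": True, "code_memory": True, "data_memory": True}
print(gated_clock(True, acks))   # False: everything idle, second clock gated off

acks["bus_interface"] = False    # a bus access request de-asserts its idle-ack
print(gated_clock(True, acks))   # True: second clock runs again
```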
  • Publication number: 20060259742
    Abstract: A method and system of controlling out of order execution pipelines using pipeline skew parameters is disclosed. The pipeline skew parameters track the relative position of a load/store instruction in a load/store pipeline and a simultaneously issued integer instruction in a variable length integer pipeline. The pipeline skew parameters are used to improve data hazard detection, pipeline stalling, and instruction cancellation.
    Type: Application
    Filed: May 16, 2005
    Publication date: November 16, 2006
    Applicants: Infineon Technologies North America Corp., Infineon Technologies AG
    Inventors: Erik Norden, Roger Arnold, Robert Ober, Neil Hastie
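For illustration, a toy hazard check driven by a pipeline skew parameter; the stage numbering and the result-availability threshold are assumptions, not details of the disclosure.

```python
def hazard_between(ls_stage, int_stage, ls_dest_reg, int_src_regs):
    """The skew is the relative position of a load/store instruction and the
    integer instruction issued with it; a data hazard is flagged only while
    the load's result is not yet available (illustrative thresholds)."""
    skew = ls_stage - int_stage      # relative position of the two instructions
    result_ready = ls_stage >= 3     # assume the load delivers data after stage 3
    return (ls_dest_reg in int_src_regs) and not result_ready and skew >= 0

# A load into r4 in stage 2 while a dependent add reads r4 in stage 1: stall.
print(hazard_between(ls_stage=2, int_stage=1, ls_dest_reg="r4", int_src_regs={"r4", "r5"}))
# Once the load has reached stage 3, the hazard clears.
print(hazard_between(ls_stage=3, int_stage=2, ls_dest_reg="r4", int_src_regs={"r4", "r5"}))
```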
  • Publication number: 20050198475
    Abstract: A thread selection unit for a block multi-threaded processor includes a priority thread selector and an execution thread selector. The priority thread selector uses a maxtime register for each active thread to limit the time an active thread can be the priority thread. The execution thread selector is configured to select the priority thread as the execution thread when the priority thread is unblocked. However, if the priority thread is blocked, the execution thread selector selects a non-priority thread as the execution thread.
    Type: Application
    Filed: February 6, 2004
    Publication date: September 8, 2005
    Applicant: Infineon Technologies, Inc.
    Inventors: Roger Arnold, Daniel Martin, Robert Ober, Erik Norden
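A minimal sketch of the two selectors described above; the round-robin rotation of the priority thread after its maxtime budget expires is an assumption made here for illustration.

```python
def pick_priority_thread(current_priority, time_as_priority, maxtime, num_threads):
    """Priority thread selector: rotate the priority to the next thread once
    the current one has been priority for its maxtime budget."""
    if time_as_priority[current_priority] >= maxtime[current_priority]:
        return (current_priority + 1) % num_threads
    return current_priority

def pick_execution_thread(priority, blocked):
    """Execution thread selector: run the priority thread if it is unblocked,
    otherwise fall back to an unblocked non-priority thread."""
    if not blocked[priority]:
        return priority
    for idx, is_blocked in enumerate(blocked):
        if idx != priority and not is_blocked:
            return idx
    return None  # nothing runnable this cycle

priority = pick_priority_thread(0, time_as_priority=[120, 0],
                                maxtime=[100, 100], num_threads=2)
print(priority, pick_execution_thread(priority, blocked=[False, True]))
```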
  • Publication number: 20050177703
    Abstract: A multithreaded processor includes a thread ID for each set of fetched bits in an instruction fetch and issue unit. The thread ID attaches to the instructions and operands of the set of fetched bits. Pipeline stages in the multithreaded processor store the thread ID associated with each operand or instruction in the pipeline stage. The thread IDs are used to maintain data coherency and to generate program traces that include thread information for the instructions executed by the multithreaded processor.
    Type: Application
    Filed: February 6, 2004
    Publication date: August 11, 2005
    Applicant: Infineon Technologies, Inc.
    Inventors: Erik Norden, Robert Ober, Roger Arnold, Daniel Martin
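A toy illustration of carrying a thread ID with each fetched instruction through named pipeline stages and emitting a trace; the stage names and trace format are invented for the example.

```python
from collections import namedtuple

# Each fetched instruction carries the thread ID it was fetched for, so
# downstream stages can keep results from different threads apart and a
# trace can record which thread executed which instruction.
Tagged = namedtuple("Tagged", ["thread_id", "payload"])

def run_pipeline(fetch_groups, stages=("decode", "execute", "writeback")):
    """Pass tagged instructions through named pipeline stages and emit one
    trace line per stage (illustrative only)."""
    trace = []
    for tid, instrs in fetch_groups:
        for instr in instrs:
            item = Tagged(tid, instr)
            for stage in stages:
                trace.append(f"{stage}: thread {item.thread_id} {item.payload}")
    return trace

for line in run_pipeline([(0, ["add r1,r2"]), (1, ["ld r3,[r4]"])]):
    print(line)
```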
  • Publication number: 20050177699
    Abstract: A microprocessor system includes an address generator, an address selector, and a memory system having multiple memory towers, which can be independently addressed. The address generator simultaneously generates a first memory address and a second memory address that is 1 row greater than the first memory address. The address selector determines whether the row portion of the first memory address or the second memory address is used for each memory tower. Because each tower can be addressed independently, a single memory access can be used to access data spanning multiple rows of the memory system.
    Type: Application
    Filed: February 11, 2004
    Publication date: August 11, 2005
    Applicant: Infineon Technologies, Inc.
    Inventors: Klaus Oberlaender, Erik Norden
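A toy address selector for the row-spanning access described above; the geometry (four towers of four bytes each, 16-byte rows) is an assumption chosen only to make the example concrete.

```python
def tower_addresses(byte_addr, access_bytes, towers=4, tower_width=4, row_bytes=16):
    """Generate row and row+1 simultaneously and pick, per tower, which of the
    two row addresses it needs, so one access can span a row boundary."""
    first_row = byte_addr // row_bytes
    second_row = first_row + 1                        # generated at the same time
    start_tower = (byte_addr % row_bytes) // tower_width
    touched = [(start_tower + i) % towers
               for i in range(-(-access_bytes // tower_width))]
    # A tower whose index wrapped around belongs to the next row.
    return {t: (second_row if (start_tower + i) >= towers else first_row)
            for i, t in enumerate(touched)}

# An 8-byte access starting at byte 12 crosses the row boundary:
print(tower_addresses(byte_addr=12, access_bytes=8))  # {3: 0, 0: 1} -> tower 3 reads row 0, tower 0 reads row 1
```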