Patents by Inventor Alexandros IOSIFIDIS

Alexandros IOSIFIDIS has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Publication number: 20240104358
    Abstract: A method and a system for using implicit neural representations for generation of interpretable time series are provided. The method includes: receiving time series information, such as pairings of time coordinate values with time series signal values, that relates to an event sequence; generating, based on the time series information, an implicit neural representation of the event sequence that includes a plurality of embedded values and a corresponding plurality of weights; and using the implicit neural representation to predict at least one item of information that relates to the event sequence and is not included in the received time series information, such as an interpolation or an extrapolation of the time series. (An illustrative code sketch of an implicit neural representation follows this listing.)
    Type: Application
    Filed: June 28, 2023
    Publication date: March 28, 2024
    Applicant: JPMorgan Chase Bank, N.A.
    Inventors: Elizabeth FONS, Svitlana VYETRENKO, Yousef EL-LAHAM, Alejandro SZTRAJMAN, Alexandros IOSIFIDIS
  • Publication number: 20220207330
    Abstract: Systems, methods, apparatuses, and computer program products for neural networks. In accordance with some example embodiments, an operational neuron model may comprise an artificial neuron comprising a composite nodal operator, a pool operator, and an activation function operator. The nodal operator may comprise a linear or a nonlinear function. In accordance with certain example embodiments, a generative neuron model may include a composite nodal operator generated during training using Taylor polynomial approximation without restrictions. In accordance with various example embodiments, a self-organized operational neural network (Self-ONN) may include one or more layers of generative neurons. (An illustrative sketch of a generative neuron layer follows this listing.)
    Type: Application
    Filed: December 30, 2021
    Publication date: June 30, 2022
    Inventors: Serkan KIRANYAZ, Junaid MALIK, Turker INCE, Alexandros IOSIFIDIS, Moncef GABBOUJ
  • Publication number: 20210097389
    Abstract: Certain embodiments may generally relate to various techniques for machine learning. Feed-forward, fully-connected Artificial Neural Networks (ANNs), or the so-called Multi-Layer Perceptrons (MLPs), are well-known universal approximators. However, their learning performance may vary significantly depending on the function or the solution space that they attempt to approximate. This is because they are based on a loose and crude model of biological neurons, performing only a linear transformation followed by a nonlinear activation function. Therefore, while they learn very well problems with a monotonic, relatively simple, and linearly separable solution space, they may entirely fail to do so when the solution space is highly nonlinear and complex. To address this drawback and to achieve a more generalized model of biological neurons and learning systems, Generalized Operational Perceptrons (GOPs) may be formed, which may encapsulate many linear and nonlinear operators. (An illustrative GOP sketch follows this listing.)
    Type: Application
    Filed: February 7, 2017
    Publication date: April 1, 2021
    Inventors: Serkan KIRANYAZ, Turker INCE, Moncef GABBOUJ, Alexandros IOSIFIDIS
  • Publication number: 20190244093
    Abstract: Certain embodiments may generally relate to various techniques for machine learning. Feed-forward, fully-connected Artificial Neural Networks (ANNs), or the so-called Multi-Layer Perceptrons (MLPs), are well-known universal approximators. However, their learning performance may vary significantly depending on the function or the solution space that they attempt to approximate. This is because they are based on a loose and crude model of biological neurons, performing only a linear transformation followed by a nonlinear activation function. Therefore, while they learn very well problems with a monotonic, relatively simple, and linearly separable solution space, they may entirely fail to do so when the solution space is highly nonlinear and complex. To address this drawback and to achieve a more generalized model of biological neurons and learning systems, Generalized Operational Perceptrons (GOPs) may be formed, which may encapsulate many linear and nonlinear operators. (The GOP sketch following this listing applies to this entry as well.)
    Type: Application
    Filed: February 6, 2018
    Publication date: August 8, 2019
    Inventors: Serkan KIRANYAZ, Turker INCE, Moncef GABBOUJ, Alexandros IOSIFIDIS
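
For publication 20240104358, a minimal sketch of the underlying idea: fit a small neural network that maps a time coordinate directly to a signal value, so that the fitted weights themselves become the representation of the event sequence. The SIREN-style sine activation, layer sizes, optimizer settings, and toy data below are illustrative assumptions, not the patented method.

```python
# Hedged sketch: an implicit neural representation (INR) of a 1-D time series.
# All architecture and training choices here are assumptions for illustration.
import torch
import torch.nn as nn

class SineLayer(nn.Module):
    """Fully connected layer with a sinusoidal activation (SIREN-style)."""
    def __init__(self, in_dim, out_dim, omega=30.0):
        super().__init__()
        self.linear = nn.Linear(in_dim, out_dim)
        self.omega = omega

    def forward(self, x):
        return torch.sin(self.omega * self.linear(x))

# The INR maps a time coordinate t to a predicted signal value y(t).
inr = nn.Sequential(SineLayer(1, 64), SineLayer(64, 64), nn.Linear(64, 1))

# Toy event sequence: pairings of time coordinate values with signal values.
t_obs = torch.linspace(0.0, 1.0, 100).unsqueeze(-1)
y_obs = torch.sin(2 * torch.pi * 3 * t_obs) + 0.1 * torch.randn_like(t_obs)

opt = torch.optim.Adam(inr.parameters(), lr=1e-4)
for _ in range(2000):
    opt.zero_grad()
    loss = nn.functional.mse_loss(inr(t_obs), y_obs)
    loss.backward()
    opt.step()

# The fitted weights are the representation; querying coordinates absent from
# the training data gives interpolation (inside [0, 1]) or extrapolation.
t_query = torch.tensor([[0.505], [1.2]])
with torch.no_grad():
    print(inr(t_query))
```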
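
For publication 20220207330, a minimal sketch of a generative neuron layer in the Self-ONN sense: the nodal operator is not a fixed multiplication but a learnable Q-th order Taylor-style polynomial of the input (a Maclaurin-style expansion around zero is used here for simplicity). The layer sizes, polynomial order, weight initialization, and tanh activation are assumptions for illustration.

```python
# Hedged sketch: a dense layer of "generative neurons" whose nodal operator
# is a learnable polynomial, as opposed to the fixed w*x of a classical MLP.
import torch
import torch.nn as nn

class GenerativeNeuronLayer(nn.Module):
    def __init__(self, in_dim, out_dim, q_order=3):
        super().__init__()
        # One weight matrix per polynomial power: weights[q-1] applies to x**q.
        self.weights = nn.Parameter(0.1 * torch.randn(q_order, in_dim, out_dim))
        self.bias = nn.Parameter(torch.zeros(out_dim))
        self.q_order = q_order

    def forward(self, x):
        # Composite nodal operator: sum_q (x**q) @ W_q, a Taylor-like
        # expansion, followed by a summation pool (the matmul) and tanh.
        out = self.bias
        for q in range(1, self.q_order + 1):
            out = out + (x ** q) @ self.weights[q - 1]
        return torch.tanh(out)

layer = GenerativeNeuronLayer(in_dim=8, out_dim=4, q_order=3)
y = layer(torch.randn(32, 8))
print(y.shape)  # torch.Size([32, 4])
```

With q_order=1 this reduces to an ordinary fully connected layer, which is one way to see how the generative neuron generalizes the classical perceptron.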
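
For publications 20210097389 and 20190244093 (which share an abstract), a minimal sketch of a Generalized Operational Perceptron neuron: a per-connection nodal operator, a pool operator that aggregates the results, and an activation. The specific operator library below is an illustrative assumption; the point is that the nodal and pool operators are chosen from a set rather than fixed to multiplication and summation.

```python
# Hedged sketch: a single GOP neuron with selectable nodal and pool operators.
# The two-entry operator sets here are assumptions chosen for illustration.
import torch
import torch.nn as nn

NODAL = {
    "mult": lambda w, x: w * x,             # classical MLP nodal operator
    "sine": lambda w, x: torch.sin(w * x),  # a nonlinear alternative
}
POOL = {
    "sum": lambda z: z.sum(dim=-1),         # classical MLP pool operator
    "max": lambda z: z.max(dim=-1).values,  # a nonlinear alternative
}

class GOPNeuron(nn.Module):
    def __init__(self, in_dim, nodal="sine", pool="max", act=torch.tanh):
        super().__init__()
        self.w = nn.Parameter(0.1 * torch.randn(in_dim))
        self.b = nn.Parameter(torch.zeros(1))
        self.nodal, self.pool, self.act = NODAL[nodal], POOL[pool], act

    def forward(self, x):                        # x: (batch, in_dim)
        z = self.nodal(self.w, x)                # per-connection nodal op
        return self.act(self.pool(z) + self.b)   # pool, bias, activation

neuron = GOPNeuron(in_dim=8)
print(neuron(torch.randn(5, 8)).shape)  # torch.Size([5])
```

Choosing nodal="mult" and pool="sum" recovers the classical perceptron, which is why GOPs are described as a generalization of MLPs.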