Patents by Inventor Nese Alyuz Civitci

Nese Alyuz Civitci has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Patent number: 11908345
    Abstract: Various systems and methods for engagement dissemination. A face detector detects a face in video data. A context filterer determines that a student is on-platform and determines a section type. An appearance monitor selects an emotional and a behavioral classifier. Emotional and behavioral components are classified based on the detected face. A context-performance monitor selects an emotional and a behavioral classifier specific to the section type. Emotional and behavioral components are classified based on the log data. A fuser combines the emotional components into an emotional state of the student based on confidence values of the emotional components. The fuser combines the behavioral components into a behavioral state of the student based on confidence values of the behavioral components. The fuser determines an engagement level of the student based on the emotional state and the behavioral state of the student.
    Type: Grant
    Filed: December 27, 2018
    Date of Patent: February 20, 2024
    Assignee: Intel Corporation
    Inventors: Nese Alyuz Civitci, Cagri Cagatay Tanriover, Sinem Aslan, Eda Okur Kavil, Asli Arslan Esme, Ergin Genc
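The fusion step described in the abstract above — combining per-classifier emotional (or behavioral) components into a single state using their confidence values — can be sketched as a confidence-weighted vote. This is an illustrative reading only; the labels, component sources, and additive weighting are assumptions, not details taken from the patent:

```python
from collections import defaultdict

def fuse_components(components):
    """Fuse (label, confidence) pairs from several classifiers into one state.

    Each classifier contributes a predicted label and a confidence in [0, 1];
    the fused state is the label with the highest accumulated confidence.
    """
    scores = defaultdict(float)
    for label, confidence in components:
        scores[label] += confidence
    return max(scores, key=scores.get)

# Components from, e.g., an appearance monitor and a context-performance monitor.
emotional_components = [("bored", 0.4), ("engaged", 0.7), ("engaged", 0.5)]
print(fuse_components(emotional_components))  # prints "engaged" (1.2 vs 0.4)
```

The same routine could fuse the behavioral components; the claimed engagement level would then be derived from the two fused states.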
  • Publication number: 20210402898
    Abstract: Devices and methods for a vehicle are provided in this disclosure. A device for controlling an active seat of a vehicle may include a processor and a memory. The memory may be configured to store a transfer function. The processor may be configured to predict an acceleration of the active seat of the vehicle based on a first sensor data and the transfer function. The first sensor data may include information indicating an acceleration of a vibration source for the vehicle. The processor may be further configured to generate a control signal to control a movement of the active seat at a first instance of time based on the predicted acceleration.
    Type: Application
    Filed: September 9, 2021
    Publication date: December 30, 2021
    Inventors: Ignacio J. ALVAREZ, Nese ALYUZ CIVITCI, Maria Soledad ELLI, Javier FELIP LEON, Florian GEISSLER, David Israel GONZALEZ AGUIRRE, Neslihan KOSE CIHANGIR, Michael PAULITSCH, Rafael ROSALES, Javier S. TUREK
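The seat-control idea above — predicting seat acceleration from vibration-source acceleration through a stored transfer function, then generating a control signal — can be illustrated by modeling the transfer function as a finite impulse response and convolving it with recent source samples. The FIR form, the sample-based interface, and the simple negating control law are assumptions for the sketch, not the patent's actual method:

```python
def predict_seat_acceleration(source_accel, impulse_response):
    """Predict active-seat acceleration from vibration-source acceleration.

    The stored transfer function is modeled as a finite impulse response
    h[0..n-1], so each prediction is a discrete convolution over the most
    recent source samples (zero-padded at the start).
    """
    n = len(impulse_response)
    padded = [0.0] * (n - 1) + list(source_accel)
    return [
        sum(impulse_response[k] * padded[i + n - 1 - k] for k in range(n))
        for i in range(len(source_accel))
    ]

def control_signal(predicted_accel, gain=1.0):
    """Drive the seat opposite to its predicted motion to cancel vibration."""
    return [-gain * a for a in predicted_accel]

road_accel = [0.0, 1.0, 2.0, 1.0, 0.0]   # acceleration sensed at the vibration source
seat_accel = predict_seat_acceleration(road_accel, [0.5, 0.5])
commands = control_signal(seat_accel)
```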
  • Publication number: 20210397863
    Abstract: Devices and methods for an occupant of a vehicle are provided in this disclosure. A device for identifying an occupant of a vehicle may include a processor. The processor may be configured to determine an identity for the occupant based on a first sensor data including information indicating a detection of a facial attribute of the occupant. The processor may further be configured to estimate a behavior for the occupant based on a second sensor data including information indicating a detection of a body attribute of the occupant. The processor may further be configured to determine a spoofing attempt result indicating whether the occupant attempts a spoofing based on the determined identity and the estimated behavior.
    Type: Application
    Filed: September 3, 2021
    Publication date: December 23, 2021
    Inventors: Neslihan KOSE CIHANGIR, Nese ALYUZ CIVITCI, Rafael ROSALES, Ignacio J. ALVAREZ
  • Patent number: 11013430
    Abstract: Methods and apparatuses for identifying food chewed or beverages drunk are disclosed herein. In embodiments, a device may comprise one or more sensors to provide vibration signal data representative of vibrations sensed from the nasal bridge of a user wearing the device, a chewing analyzer to extract from the vibration signal data a first plurality of features associated with chewing activities, and/or a drinking analyzer to extract from the vibration signal data a second plurality of features associated with drinking activities. The extracted first and/or second plurality of features, in turn, may be used to determine a category of food the user was chewing, or a category of beverage the user was drinking. In embodiments, the methods and apparatuses may use personalized models.
    Type: Grant
    Filed: June 26, 2018
    Date of Patent: May 25, 2021
    Assignee: Intel Corporation
    Inventors: Cagri Tanriover, Nese Alyuz Civitci, Asli Arslan Esme, Hector Alfonso Cordourier Maruri, Paulo Lopez Meyer
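A minimal sketch of the feature-extraction-plus-personalized-model pipeline described above, using RMS energy and zero-crossing rate as stand-in features and a nearest-template match as the personalized model (both are assumptions; the patent does not specify these particular features or this classifier):

```python
import math

def extract_features(vibration):
    """Two simple features from a window of nasal-bridge vibration samples:
    RMS energy and zero-crossing rate (stand-ins for the patent's chewing
    and drinking feature sets)."""
    n = len(vibration)
    rms = math.sqrt(sum(x * x for x in vibration) / n)
    zcr = sum(1 for a, b in zip(vibration, vibration[1:])
              if (a < 0) != (b < 0)) / (n - 1)
    return (rms, zcr)

def classify(features, personalized_models):
    """Nearest-template match against per-user category templates."""
    def dist(category):
        template = personalized_models[category]
        return sum((f - t) ** 2 for f, t in zip(features, template))
    return min(personalized_models, key=dist)

# Hypothetical per-user templates learned during a calibration session.
models = {"crunchy food": (0.8, 0.5), "liquid": (0.1, 0.05)}
print(classify(extract_features([0.9, -0.7, 0.8, -0.6]), models))
```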
  • Patent number: 11006875
    Abstract: Technologies for emotion prediction based on breathing patterns include a wearable device. The wearable device includes a breathing sensor to generate breathing data, one or more processors, and one or more memory devices having stored therein a plurality of instructions that, when executed, cause the wearable device to calibrate a personalized emotion predictive model associated with the user, collect breathing data of the user of the wearable device, analyze the breathing data to determine a breathing pattern, predict, in response to an analysis of the breathing data, the emotional state of the user using the personalized emotion predictive model, and output the emotional state of the user.
    Type: Grant
    Filed: March 30, 2018
    Date of Patent: May 18, 2021
    Assignee: Intel Corporation
    Inventors: Cagri Tanriover, Nese Alyuz Civitci, Asli Arslan Esme, Hector Cordourier, Paulo Lopez Meyer
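The calibrate-then-predict flow in the abstract above can be sketched with a deliberately simple personalized model: the user's resting breathing rate, with emotional state inferred from relative deviation. The rate-deviation rule, the threshold, and the state labels are illustrative assumptions only:

```python
def calibrate(resting_rates):
    """Personalized model: the user's mean resting breathing rate (breaths/min)."""
    return sum(resting_rates) / len(resting_rates)

def breathing_rate(breath_intervals):
    """Breaths per minute from measured inter-breath intervals in seconds."""
    return 60.0 / (sum(breath_intervals) / len(breath_intervals))

def predict_emotion(rate, baseline, threshold=0.2):
    """Label the state by relative deviation from the personal baseline."""
    deviation = (rate - baseline) / baseline
    if deviation > threshold:
        return "stressed"
    if deviation < -threshold:
        return "relaxed"
    return "neutral"

baseline = calibrate([14.0, 16.0, 15.0])                           # calibration phase
state = predict_emotion(breathing_rate([2.9, 3.1, 3.0]), baseline)  # usage phase
```

A real breathing pattern would of course carry more than rate (depth, regularity, inhale/exhale ratio), which is why the patent trains a personalized predictive model rather than a fixed rule.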
  • Publication number: 20200135045
    Abstract: Various systems and methods for engagement dissemination. A face detector detects a face in video data. A context filterer determines that a student is on-platform and determines a section type. An appearance monitor selects an emotional and a behavioral classifier. Emotional and behavioral components are classified based on the detected face. A context-performance monitor selects an emotional and a behavioral classifier specific to the section type. Emotional and behavioral components are classified based on the log data. A fuser combines the emotional components into an emotional state of the student based on confidence values of the emotional components. The fuser combines the behavioral components into a behavioral state of the student based on confidence values of the behavioral components. The fuser determines an engagement level of the student based on the emotional state and the behavioral state of the student.
    Type: Application
    Filed: December 27, 2018
    Publication date: April 30, 2020
    Inventors: Nese Alyuz Civitci, Cagri Cagatay Tanriover, Sinem Aslan, Eda Okur Kavil, Asli Arslan Esme, Ergin Genc
  • Patent number: 10299716
    Abstract: Apparatus, systems, and/or methods may provide a mental state determination. For example, a data collector may collect image data for a side of a face of a user from an image capture device on the user (e.g., a wearable device). The image data may include two or more perspectives of a feature on the side of the face of the user. In addition, a state determiner may determine a mental state of the user based on the image data. In one example, fields of view may be combined to determine a total region and/or a total overlap region. Changing the position that one or more image capture devices point may modulate the total region and/or the total overlap region. In addition, one or more sensors may be utilized to further improve mental data determinations.
    Type: Grant
    Filed: December 22, 2016
    Date of Patent: May 28, 2019
    Assignee: Intel Corporation
    Inventors: Cagri Tanriover, Sinem Aslan, Nese Alyuz Civitci, Ece Oktay, Eda Okur, Asli Arslan Esme
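The combination of fields of view into a "total region" and a "total overlap region" can be illustrated in one dimension, treating each camera's field of view as an angular interval. The interval representation is an assumption made for the sketch:

```python
def total_region(fields_of_view):
    """Overall span covered by a set of 1-D fields of view (angle intervals)."""
    return (min(lo for lo, _ in fields_of_view),
            max(hi for _, hi in fields_of_view))

def total_overlap_region(fields_of_view):
    """Region seen by every field of view, or None if there is no common overlap."""
    lo = max(l for l, _ in fields_of_view)
    hi = min(h for _, h in fields_of_view)
    return (lo, hi) if lo < hi else None

# Two cameras on one side of the face; repointing a camera shifts its interval,
# modulating both the total region and the total overlap region.
cams = [(0.0, 60.0), (40.0, 100.0)]
```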
  • Publication number: 20190038201
    Abstract: Technologies for emotion prediction based on breathing patterns include a wearable device. The wearable device includes a breathing sensor to generate breathing data, one or more processors, and one or more memory devices having stored therein a plurality of instructions that, when executed, cause the wearable device to calibrate a personalized emotion predictive model associated with the user, collect breathing data of the user of the wearable device, analyze the breathing data to determine a breathing pattern, predict, in response to an analysis of the breathing data, the emotional state of the user using the personalized emotion predictive model, and output the emotional state of the user.
    Type: Application
    Filed: March 30, 2018
    Publication date: February 7, 2019
    Inventors: Cagri Tanriover, Nese Alyuz Civitci, Asli Arslan Esme, Hector Cordourier, Paulo Lopez Meyer
  • Publication number: 20190038186
    Abstract: Methods and apparatuses for identifying food chewed or beverages drunk are disclosed herein. In embodiments, a device may comprise one or more sensors to provide vibration signal data representative of vibrations sensed from the nasal bridge of a user wearing the device, a chewing analyzer to extract from the vibration signal data a first plurality of features associated with chewing activities, and/or a drinking analyzer to extract from the vibration signal data a second plurality of features associated with drinking activities. The extracted first and/or second plurality of features, in turn, may be used to determine a category of food the user was chewing, or a category of beverage the user was drinking. In embodiments, the methods and apparatuses may use personalized models.
    Type: Application
    Filed: June 26, 2018
    Publication date: February 7, 2019
    Inventors: Cagri Tanriover, Nese Alyuz Civitci, Asli Arslan Esme, Hector Alfonso Cordourier Maruri, Paulo Lopez Meyer
  • Publication number: 20170188928
    Abstract: Apparatus, systems, and/or methods may provide a mental state determination. For example, a data collector may collect image data for a side of a face of a user from an image capture device on the user (e.g., a wearable device). The image data may include two or more perspectives of a feature on the side of the face of the user. In addition, a state determiner may determine a mental state of the user based on the image data. In one example, fields of view may be combined to determine a total region and/or a total overlap region. Changing the position that one or more image capture devices point may modulate the total region and/or the total overlap region. In addition, one or more sensors may be utilized to further improve mental data determinations.
    Type: Application
    Filed: December 22, 2016
    Publication date: July 6, 2017
    Inventors: Cagri Tanriover, Sinem Aslan, Nese Alyuz Civitci, Ece Oktay, Eda Okur, Asli Arslan Esme
  • Publication number: 20170169715
    Abstract: Embodiments herein relate to generating a personalized model using a machine learning process, identifying a learning engagement state of a learner based at least in part on the personalized model, and tailoring computerized provision of an educational program to the learner based on the learning engagement state. An apparatus to provide a computer-aided educational program may include one or more processors operating modules that may receive indications of interactions of a learner and indications of physical responses of the learner, generate a personalized model using a machine learning process based at least in part on the interactions of the learner and the indications of physical responses of the learner during a calibration time period, and identify a current learning state of the learner based at least in part on the personalized model during a usage time period. Other embodiments may be described and/or claimed.
    Type: Application
    Filed: December 9, 2015
    Publication date: June 15, 2017
    Inventors: Nese Alyuz Civitci, Eda Okur, Asli Arslan Esme, Sinem Aslan, Ece Oktay, Sinem E. Mete, Hasan Unlu, David Stanhill, Vladimir Shlain, Pini Abramovitch, Eyal Rond
  • Publication number: 20170039876
    Abstract: Embodiments herein relate to identifying a learning engagement state of a learner. A computing platform with one or more processors running modules may receive indications of interactions of a learner with an educational program as well as indications of physical responses of the learner collected substantially simultaneously as the learner interacts with the educational program. A current learning engagement state of the learner may be identified based at least in part on the received indications by using an artificial neural network that is calibrated to the learner. The artificial neural network may be trained and updated in part by human observation and learner self-reporting of the learner's current learning engagement state.
    Type: Application
    Filed: August 6, 2015
    Publication date: February 9, 2017
    Inventors: Nese Alyuz Civitci, Eda Okur, Asli Arslan Esme, Sinem Aslan, Ece Oktay, Sinem E. Mete, David Stanhill, Vladimir Shlain, Pini Abramovitch, Eyal Rond, Alex Kunin, Ilan Papini