Patents by Inventor Asli Arslan Esme

Asli Arslan Esme has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Patent number: 12183218
    Abstract: Computer-readable storage media, computing devices, and methods associated with an adaptive learning environment are disclosed. In embodiments, a computing device may include an instruction module and an adaptation module operatively coupled with the instruction module. The instruction module may selectively provide instructional content of one of a plurality of instructional content types to a user of the computing device via one or more output devices coupled with the computing device. The adaptation module may determine, in real-time, an engagement level associated with the user of the computing device and may cooperate with the instruction module to dynamically adapt the instructional content provided to the user based at least in part on the engagement level determined. Other embodiments may be described and/or claimed.
    Type: Grant
    Filed: March 14, 2023
    Date of Patent: December 31, 2024
    Assignee: Tahoe Research, Ltd.
    Inventors: Sinem Aslan, Asli Arslan Esme, Gila Kamhi, Ron Ferens, Itai Diner
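The adaptation loop this abstract describes (measure engagement, then switch content type) can be pictured with a minimal sketch. This is an illustrative reading, not the patented implementation: the signal averaging, the 0.5 threshold, and the content-type names are assumptions.

```python
# Illustrative sketch of the instruction/adaptation module pairing: the
# adaptation module estimates an engagement level, and the instruction module
# switches the instructional content type accordingly.

def estimate_engagement(signals):
    """Fuse normalized engagement signals (each in [0, 1]) into one level."""
    return sum(signals) / len(signals)

def select_content_type(engagement, threshold=0.5):
    """Switch to a more stimulating content type when engagement drops."""
    return "video" if engagement < threshold else "text"

def adapt(signals):
    """One pass of the loop: measure engagement, then pick content."""
    return select_content_type(estimate_engagement(signals))
```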
  • Patent number: 11908345
    Abstract: Various systems and methods for engagement dissemination. A face detector detects a face in video data. A context filterer determines that a student is on-platform and determines a section type. An appearance monitor selects an emotional classifier and a behavioral classifier. Emotional and behavioral components are classified based on the detected face. A context-performance monitor selects an emotional classifier and a behavioral classifier specific to the section type. Emotional and behavioral components are classified based on the log data. A fuser combines the emotional components into an emotional state of the student based on confidence values of the emotional components. The fuser combines the behavioral components into a behavioral state of the student based on confidence values of the behavioral components. The fuser determines an engagement level of the student based on the emotional state and the behavioral state of the student.
    Type: Grant
    Filed: December 27, 2018
    Date of Patent: February 20, 2024
    Assignee: Intel Corporation
    Inventors: Nese Alyuz Civitci, Cagri Cagatay Tanriover, Sinem Aslan, Eda Okur Kavil, Asli Arslan Esme, Ergin Genc
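One plausible reading of the fuser described in this abstract is confidence-weighted voting: each classifier emits a (label, confidence) component, and components are combined by summing confidence per label. The label names and the engagement mapping below are hypothetical, not taken from the patent claims.

```python
# Sketch of combining classifier components 'based on confidence values'.

def fuse(components):
    """Combine (label, confidence) components into a single fused state."""
    totals = {}
    for label, confidence in components:
        totals[label] = totals.get(label, 0.0) + confidence
    return max(totals, key=totals.get)

def engagement_level(emotional_state, behavioral_state):
    """Map the fused emotional and behavioral states to an engagement level."""
    if emotional_state == "satisfied" and behavioral_state == "on-task":
        return "engaged"
    if emotional_state == "bored" and behavioral_state == "off-task":
        return "disengaged"
    return "partially engaged"
```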
  • Publication number: 20230360551
    Abstract: Computer-readable storage media, computing devices, and methods associated with an adaptive learning environment are disclosed. In embodiments, a computing device may include an instruction module and an adaptation module operatively coupled with the instruction module. The instruction module may selectively provide instructional content of one of a plurality of instructional content types to a user of the computing device via one or more output devices coupled with the computing device. The adaptation module may determine, in real-time, an engagement level associated with the user of the computing device and may cooperate with the instruction module to dynamically adapt the instructional content provided to the user based at least in part on the engagement level determined. Other embodiments may be described and/or claimed.
    Type: Application
    Filed: March 14, 2023
    Publication date: November 9, 2023
    Applicant: Tahoe Research, Ltd.
    Inventors: Sinem ASLAN, Asli Arslan ESME, Gila KAMHI, Ron FERENS, Itai DINER
  • Patent number: 11610500
    Abstract: Computer-readable storage media, computing devices, and methods associated with an adaptive learning environment are disclosed. In embodiments, a computing device may include an instruction module and an adaptation module operatively coupled with the instruction module. The instruction module may selectively provide instructional content of one of a plurality of instructional content types to a user of the computing device via one or more output devices coupled with the computing device. The adaptation module may determine, in real-time, an engagement level associated with the user of the computing device and may cooperate with the instruction module to dynamically adapt the instructional content provided to the user based at least in part on the engagement level determined. Other embodiments may be described and/or claimed.
    Type: Grant
    Filed: June 26, 2018
    Date of Patent: March 21, 2023
    Assignee: Tahoe Research, Ltd.
    Inventors: Sinem Aslan, Asli Arslan Esme, Gila Kamhi, Ron Ferens, Itai Diner
  • Patent number: 11256104
    Abstract: Herein is disclosed a virtual embodiment display system comprising one or more image sensors, configured to receive one or more images of a vehicle occupant; one or more processors, configured to determine a gaze direction of the vehicle occupant from the one or more images; select a display location corresponding to the determined gaze direction; and control an image display device to display a virtual embodiment of an intelligent agent at the display location; the image display device, configured to display the virtual embodiment of the intelligent agent at the selected display location according to the one or more processors.
    Type: Grant
    Filed: June 26, 2020
    Date of Patent: February 22, 2022
    Assignee: Intel Corporation
    Inventors: Cagri Tanriover, Richard Beckwith, Asli Arslan Esme, John Sherry
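The step "select a display location corresponding to the determined gaze direction" can be sketched as scoring each candidate display surface by how well its direction aligns with the occupant's gaze vector. The cosine-similarity scoring and the location names and vectors below are invented for illustration.

```python
import math

# Pick the in-cabin display surface whose direction best matches the gaze.

def cosine(a, b):
    """Cosine similarity of two direction vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(x * x for x in b)))

def select_display_location(gaze, candidates):
    """candidates: mapping of location name -> direction vector."""
    return max(candidates, key=lambda name: cosine(gaze, candidates[name]))
```

For example, an occupant glancing toward the windshield would select a HUD surface rather than a center-console screen.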
  • Patent number: 11013430
    Abstract: Methods and apparatuses for identifying food chewed or beverages drunk are disclosed herein. In embodiments, a device may comprise one or more sensors to provide vibration signal data representative of vibrations sensed from a nasal bridge of a user wearing the device, a chewing analyzer to extract from the vibration signal data a first plurality of features associated with chewing activities, and/or a drinking analyzer to extract from the vibration signal data a second plurality of features associated with drinking activities. The extracted first and/or second plurality of features, in turn, may be used to determine a category of food the user was chewing, or a category of beverage the user was drinking. In embodiments, the methods and apparatuses may use personalized models.
    Type: Grant
    Filed: June 26, 2018
    Date of Patent: May 25, 2021
    Assignee: Intel Corporation
    Inventors: Cagri Tanriover, Nese Alyuz Civitci, Asli Arslan Esme, Hector Alfonso Cordourier Maruri, Paulo Lopez Meyer
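A toy illustration of the chewing/drinking analyzers: extract simple features (signal energy, zero-crossing rate) from a nasal-bridge vibration window and apply a crude rule. The feature choices and the 0.5 energy threshold are assumptions; the patent describes richer, personalized models.

```python
# Feature extraction and a toy rule over a nasal-bridge vibration window.

def extract_features(vibration):
    """Return (mean energy, zero-crossing rate) for a window of samples."""
    energy = sum(v * v for v in vibration) / len(vibration)
    zero_crossings = sum(1 for a, b in zip(vibration, vibration[1:]) if a * b < 0)
    return energy, zero_crossings / (len(vibration) - 1)

def classify_activity(vibration, energy_threshold=0.5):
    """Crude rule: chewing tends to produce higher-energy bursts than sipping."""
    energy, _ = extract_features(vibration)
    return "chewing" if energy > energy_threshold else "drinking"
```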
  • Patent number: 11006875
    Abstract: Technologies for emotion prediction based on breathing patterns include a wearable device. The wearable device includes a breathing sensor to generate breathing data, one or more processors, and one or more memory devices having stored therein a plurality of instructions that, when executed, cause the wearable device to calibrate a personalized emotion predictive model associated with a user of the wearable device, collect breathing data of the user, analyze the breathing data to determine a breathing pattern, predict, in response to an analysis of the breathing data, the emotional state of the user using the personalized emotion predictive model, and output the emotional state of the user.
    Type: Grant
    Filed: March 30, 2018
    Date of Patent: May 18, 2021
    Assignee: Intel Corporation
    Inventors: Cagri Tanriover, Nese Alyuz Civitci, Asli Arslan Esme, Hector Cordourier, Paulo Lopez Meyer
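A minimal sketch of the breathing-based predictor: summarize breath-to-breath intervals into a rate, then compare against a per-user calibrated baseline. The threshold model and the "calm"/"stressed" labels are illustrative only; the patent covers a personalized predictive model generally.

```python
# Breathing-rate summary plus a personalized threshold rule.

def breathing_rate(intervals_s):
    """Breaths per minute from breath-to-breath intervals in seconds."""
    return 60.0 / (sum(intervals_s) / len(intervals_s))

def predict_emotion(intervals_s, baseline_rate=14.0):
    """Breathing faster than the user's calibrated baseline suggests stress."""
    return "stressed" if breathing_rate(intervals_s) > baseline_rate else "calm"
```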
  • Publication number: 20200379266
    Abstract: Herein is disclosed a virtual embodiment display system comprising one or more image sensors, configured to receive one or more images of a vehicle occupant; one or more processors, configured to determine a gaze direction of the vehicle occupant from the one or more images; select a display location corresponding to the determined gaze direction; and control an image display device to display a virtual embodiment of an intelligent agent at the display location; the image display device, configured to display the virtual embodiment of the intelligent agent at the selected display location according to the one or more processors.
    Type: Application
    Filed: June 26, 2020
    Publication date: December 3, 2020
    Inventors: Cagri TANRIOVER, Richard BECKWITH, Asli ARSLAN ESME, John SHERRY
  • Patent number: 10747007
    Abstract: Herein is disclosed a virtual embodiment display system comprising one or more image sensors, configured to receive one or more images of a vehicle occupant; one or more processors, configured to determine a gaze direction of the vehicle occupant from the one or more images; select a display location corresponding to the determined gaze direction; and control an image display device to display a virtual embodiment of an intelligent agent at the display location; the image display device, configured to display the virtual embodiment of the intelligent agent at the selected display location according to the one or more processors.
    Type: Grant
    Filed: September 28, 2018
    Date of Patent: August 18, 2020
    Assignee: Intel Corporation
    Inventors: Cagri Tanriover, Richard Beckwith, Asli Arslan Esme, John Sherry
  • Publication number: 20200135045
    Abstract: Various systems and methods for engagement dissemination. A face detector detects a face in video data. A context filterer determines that a student is on-platform and determines a section type. An appearance monitor selects an emotional classifier and a behavioral classifier. Emotional and behavioral components are classified based on the detected face. A context-performance monitor selects an emotional classifier and a behavioral classifier specific to the section type. Emotional and behavioral components are classified based on the log data. A fuser combines the emotional components into an emotional state of the student based on confidence values of the emotional components. The fuser combines the behavioral components into a behavioral state of the student based on confidence values of the behavioral components. The fuser determines an engagement level of the student based on the emotional state and the behavioral state of the student.
    Type: Application
    Filed: December 27, 2018
    Publication date: April 30, 2020
    Inventors: Nese Alyuz Civitci, Cagri Cagatay Tanriover, Sinem Aslan, Eda Okur Kavil, Asli Arslan Esme, Ergin Genc
  • Patent number: 10299716
    Abstract: Apparatus, systems, and/or methods may provide a mental state determination. For example, a data collector may collect image data for a side of a face of a user from an image capture device on the user (e.g., a wearable device). The image data may include two or more perspectives of a feature on the side of the face of the user. In addition, a state determiner may determine a mental state of the user based on the image data. In one example, fields of view may be combined to determine a total region and/or a total overlap region. Changing the direction in which one or more image capture devices point may modulate the total region and/or the total overlap region. In addition, one or more sensors may be utilized to further improve mental state determinations.
    Type: Grant
    Filed: December 22, 2016
    Date of Patent: May 28, 2019
    Assignee: Intel Corporation
    Inventors: Cagri Tanriover, Sinem Aslan, Nese Alyuz Civitci, Ece Oktay, Eda Okur, Asli Arslan Esme
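The "total region" and "total overlap region" in this abstract can be pictured with angular intervals: each camera covers a (start, end) field of view in degrees, the total region is their union, and the overlap is their intersection. Representing fields of view as 1-D angular intervals is an assumption made for illustration.

```python
# Union and intersection of camera fields of view as angular intervals.

def total_region(fovs):
    """Union length of (start, end) angular intervals in degrees."""
    fovs = sorted(fovs)
    total, (cur_start, cur_end) = 0.0, fovs[0]
    for start, end in fovs[1:]:
        if start <= cur_end:                # intervals touch or overlap: merge
            cur_end = max(cur_end, end)
        else:                               # gap: close out the current run
            total += cur_end - cur_start
            cur_start, cur_end = start, end
    return total + (cur_end - cur_start)

def overlap_region(fov_a, fov_b):
    """Intersection length of two angular intervals (0 if disjoint)."""
    return max(0.0, min(fov_a[1], fov_b[1]) - max(fov_a[0], fov_b[0]))
```

Repointing a camera shifts its interval, which changes both quantities, matching the abstract's note that changing where a device points modulates the regions.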
  • Publication number: 20190049736
    Abstract: Herein is disclosed a virtual embodiment display system comprising one or more image sensors, configured to receive one or more images of a vehicle occupant; one or more processors, configured to determine a gaze direction of the vehicle occupant from the one or more images; select a display location corresponding to the determined gaze direction; and control an image display device to display a virtual embodiment of an intelligent agent at the display location; the image display device, configured to display the virtual embodiment of the intelligent agent at the selected display location according to the one or more processors.
    Type: Application
    Filed: September 28, 2018
    Publication date: February 14, 2019
    Inventors: Cagri TANRIOVER, Richard BECKWITH, Asli ARSLAN ESME, John SHERRY
  • Publication number: 20190038186
    Abstract: Methods and apparatuses for identifying food chewed or beverages drunk are disclosed herein. In embodiments, a device may comprise one or more sensors to provide vibration signal data representative of vibrations sensed from a nasal bridge of a user wearing the device, a chewing analyzer to extract from the vibration signal data a first plurality of features associated with chewing activities, and/or a drinking analyzer to extract from the vibration signal data a second plurality of features associated with drinking activities. The extracted first and/or second plurality of features, in turn, may be used to determine a category of food the user was chewing, or a category of beverage the user was drinking. In embodiments, the methods and apparatuses may use personalized models.
    Type: Application
    Filed: June 26, 2018
    Publication date: February 7, 2019
    Inventors: CAGRI TANRIOVER, NESE ALYUZ CIVITCI, ASLI ARSLAN ESME, HECTOR ALFONSO CORDOURIER MARURI, PAULO LOPEZ MEYER
  • Publication number: 20190038201
    Abstract: Technologies for emotion prediction based on breathing patterns include a wearable device. The wearable device includes a breathing sensor to generate breathing data, one or more processors, and one or more memory devices having stored therein a plurality of instructions that, when executed, cause the wearable device to calibrate a personalized emotion predictive model associated with a user of the wearable device, collect breathing data of the user, analyze the breathing data to determine a breathing pattern, predict, in response to an analysis of the breathing data, the emotional state of the user using the personalized emotion predictive model, and output the emotional state of the user.
    Type: Application
    Filed: March 30, 2018
    Publication date: February 7, 2019
    Inventors: Cagri Tanriover, Nese Alyuz Civitci, Asli Arslan Esme, Hector Cordourier, Paulo Lopez Meyer
  • Publication number: 20190038179
    Abstract: Methods and apparatus for identifying breathing patterns are disclosed herein. An example wearable device includes a sensor positioned to generate vibration signal data from a nasal bridge of a user, a breathing phase detector to identify first and second breathing phases based on the vibration signal data, a phase timing calculator to calculate a first time period for the first breathing phase and a second time period for the second breathing phase, a breathing pattern detector to generate a breathing pattern metric based on the first and second time periods, a breathing activity detector to identify a breathing activity associated with the vibration signal data based on the breathing pattern metric, and an alert generator to activate an output device to generate at least one of an audible, tactile, or visual alert based on at least one of the breathing activity and a change associated with the breathing activity.
    Type: Application
    Filed: August 4, 2017
    Publication date: February 7, 2019
    Inventors: Cagri Tanriover, Hector Cordourier Maruri, Paulo Lopez Meyer, Asli Arslan Esme
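The phase detector and pattern metric in this abstract can be sketched as splitting a signal into inhale/exhale runs, timing each run, and forming a ratio. Using the sample sign as the phase indicator is an assumption made for illustration.

```python
# Split a breathing signal into phase runs and derive a pattern metric.

def detect_phases(samples, threshold=0.0):
    """Return (phase, duration_in_samples) runs, e.g. [('inhale', 2), ...]."""
    runs, current, length = [], None, 0
    for s in samples:
        phase = "inhale" if s > threshold else "exhale"
        if phase == current:
            length += 1
        else:
            if current is not None:
                runs.append((current, length))
            current, length = phase, 1
    runs.append((current, length))
    return runs

def pattern_metric(runs):
    """Ratio of total inhale time to total exhale time."""
    inhale = sum(d for phase, d in runs if phase == "inhale")
    exhale = sum(d for phase, d in runs if phase == "exhale")
    return inhale / exhale
```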
  • Publication number: 20180308376
    Abstract: Computer-readable storage media, computing devices, and methods associated with an adaptive learning environment are disclosed. In embodiments, a computing device may include an instruction module and an adaptation module operatively coupled with the instruction module. The instruction module may selectively provide instructional content of one of a plurality of instructional content types to a user of the computing device via one or more output devices coupled with the computing device. The adaptation module may determine, in real-time, an engagement level associated with the user of the computing device and may cooperate with the instruction module to dynamically adapt the instructional content provided to the user based at least in part on the engagement level determined. Other embodiments may be described and/or claimed.
    Type: Application
    Filed: June 26, 2018
    Publication date: October 25, 2018
    Inventors: Sinem Aslan, Asli Arslan Esme, Gila Kamhi, Ron Ferens, Itai Diner
  • Patent number: 10013892
    Abstract: Computer-readable storage media, computing devices, and methods associated with an adaptive learning environment are disclosed. In embodiments, a computing device may include an instruction module and an adaptation module operatively coupled with the instruction module. The instruction module may selectively provide instructional content of one of a plurality of instructional content types to a user of the computing device via one or more output devices coupled with the computing device. The adaptation module may determine, in real-time, an engagement level associated with the user of the computing device and may cooperate with the instruction module to dynamically adapt the instructional content provided to the user based at least in part on the engagement level determined. Other embodiments may be described and/or claimed.
    Type: Grant
    Filed: July 8, 2014
    Date of Patent: July 3, 2018
    Assignee: Intel Corporation
    Inventors: Sinem Aslan, Asli Arslan Esme, Gila Kamhi, Ron Ferens, Itai Diner
  • Publication number: 20170188928
    Abstract: Apparatus, systems, and/or methods may provide a mental state determination. For example, a data collector may collect image data for a side of a face of a user from an image capture device on the user (e.g., a wearable device). The image data may include two or more perspectives of a feature on the side of the face of the user. In addition, a state determiner may determine a mental state of the user based on the image data. In one example, fields of view may be combined to determine a total region and/or a total overlap region. Changing the direction in which one or more image capture devices point may modulate the total region and/or the total overlap region. In addition, one or more sensors may be utilized to further improve mental state determinations.
    Type: Application
    Filed: December 22, 2016
    Publication date: July 6, 2017
    Inventors: Cagri Tanriover, Sinem Aslan, Nese Alyuz Civitci, Ece Oktay, Eda Okur, Asli Arslan Esme
  • Publication number: 20170169715
    Abstract: Embodiments herein relate to generating a personalized model using a machine learning process, identifying a learning engagement state of a learner based at least in part on the personalized model, and tailoring computerized provision of an educational program to the learner based on the learning engagement state. An apparatus to provide a computer-aided educational program may include one or more processors operating modules that may receive indications of interactions of a learner and indications of physical responses of the learner, generate a personalized model using a machine learning process based at least in part on the interactions of the learner and the indications of physical responses of the learner during a calibration time period, and identify a current learning state of the learner based at least in part on the personalized model during a usage time period. Other embodiments may be described and/or claimed.
    Type: Application
    Filed: December 9, 2015
    Publication date: June 15, 2017
    Inventors: NESE ALYUZ CIVITCI, EDA OKUR, ASLI ARSLAN ESME, SINEM ASLAN, ECE OKTAY, SINEM E. METE, HASAN UNLU, DAVID STANHILL, VLADIMIR SHLAIN, PINI ABRAMOVITCH, EYAL ROND
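One simple instantiation of "generate a personalized model during a calibration period, then identify the state during a usage period" is a nearest-centroid model built from labeled feature vectors. The feature vectors and state labels below are hypothetical; the patent describes a machine learning process in general terms.

```python
# Nearest-centroid sketch: calibrate on labeled samples, then look up states.

def calibrate(labeled_samples):
    """labeled_samples: list of (feature_vector, state_label) pairs gathered
    during the calibration period. Returns a per-label centroid model."""
    sums, counts = {}, {}
    for vec, label in labeled_samples:
        acc = sums.setdefault(label, [0.0] * len(vec))
        for i, v in enumerate(vec):
            acc[i] += v
        counts[label] = counts.get(label, 0) + 1
    return {lbl: [v / counts[lbl] for v in acc] for lbl, acc in sums.items()}

def identify_state(model, vec):
    """Nearest-centroid lookup used during the usage period."""
    def sq_dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(model, key=lambda lbl: sq_dist(model[lbl], vec))
```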
  • Publication number: 20170039876
    Abstract: Embodiments herein relate to identifying a learning engagement state of a learner. A computing platform with one or more processors running modules may receive indications of interactions of a learner with an educational program as well as indications of physical responses of the learner collected substantially simultaneously as the learner interacts with the educational program. A current learning engagement state of the learner may be identified based at least in part on the received indications by using an artificial neural network that is calibrated to the learner. The artificial neural network may be trained and updated in part by human observation and learner self-reporting of the learner's current learning engagement state.
    Type: Application
    Filed: August 6, 2015
    Publication date: February 9, 2017
    Inventors: NESE ALYUZ CIVITCI, EDA OKUR, ASLI ARSLAN ESME, SINEM ASLAN, ECE OKTAY, SINEM E. METE, DAVID STANHILL, VLADIMIR SHLAIN, PINI ABRAMOVITCH, EYAL ROND, ALEX KUNIN, ILAN PAPINI