Patents by Inventor Ozlem Kalinli

Ozlem Kalinli has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Patent number: 8493390
    Abstract: Methods and systems for adapting a display screen output based on a display user's attention. Gaze direction tracking is employed to determine a sub-region of a display screen area to which a user is attending. Display of the attended sub-region is modified relative to the remainder of the display screen, for example, by changing the quantity of data representing an object displayed within the attended sub-region relative to an object displayed in an unattended sub-region of the display screen.
    Type: Grant
    Filed: December 8, 2010
    Date of Patent: July 23, 2013
    Assignee: Sony Computer Entertainment America, Inc.
    Inventor: Ozlem Kalinli
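The gaze-contingent display idea above lends itself to a short illustration. The Python sketch below picks a higher level of detail for the screen sub-region containing the tracked gaze point and a lower one for the rest of the screen; the Region grid, the LOD values, and the detail_levels helper are illustrative assumptions, not the patented implementation.

```python
# A minimal sketch of gaze-contingent level-of-detail selection, assuming a
# renderer that accepts one detail value per screen sub-region. Region sizes,
# the 2x2 grid, and the LOD values are illustrative, not from the patent.
from dataclasses import dataclass

@dataclass
class Region:
    x: int   # left edge in pixels
    y: int   # top edge in pixels
    w: int   # width in pixels
    h: int   # height in pixels

def contains(region: Region, gx: int, gy: int) -> bool:
    """True if the gaze point (gx, gy) falls inside the region."""
    return (region.x <= gx < region.x + region.w
            and region.y <= gy < region.y + region.h)

def detail_levels(regions, gaze_xy, attended_lod=1.0, unattended_lod=0.25):
    """Keep full detail in the attended sub-region, reduce it elsewhere."""
    gx, gy = gaze_xy
    return [attended_lod if contains(r, gx, gy) else unattended_lod
            for r in regions]

if __name__ == "__main__":
    # A 1920x1080 screen split into a 2x2 grid of sub-regions.
    grid = [Region(x, y, 960, 540) for y in (0, 540) for x in (0, 960)]
    # Gaze direction tracking (external) puts the gaze in the top-right quadrant.
    print(detail_levels(grid, gaze_xy=(1400, 200)))   # [0.25, 1.0, 0.25, 0.25]
```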
  • Publication number: 20120281181
    Abstract: Methods of eye gaze tracking are provided using magnetized contact lenses tracked by magnetic sensors and/or reflecting contact lenses tracked by video-based sensors. Tracking information of contact lenses from magnetic sensors and video-based sensors may be used to improve eye tracking and/or combined with other sensor data to improve accuracy. Furthermore, reflective contact lenses improve blink detection while eye gaze tracking is otherwise unimpeded by magnetized contact lenses. Additionally, contact lenses may be adapted for viewing 3D information.
    Type: Application
    Filed: May 5, 2011
    Publication date: November 8, 2012
    Applicant: Sony Computer Entertainment Inc.
    Inventors: Ruxin Chen, Ozlem Kalinli
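The abstract above mentions combining tracking information from magnetic and video-based sensors to improve accuracy. Below is a minimal sketch of one such fusion, assuming inverse-variance weighting of the two 2D gaze estimates; the weighting rule and noise figures are chosen for illustration, not taken from the patent.

```python
# A minimal sketch of fusing a magnetic-sensor gaze estimate with a
# video-based one via inverse-variance weighting. The weighting rule and the
# noise figures are assumptions made for illustration.
import numpy as np

def fuse_gaze(magnetic_xy, magnetic_var, video_xy, video_var):
    """Combine two 2D gaze estimates, weighting each by its inverse variance."""
    w_mag = 1.0 / magnetic_var
    w_vid = 1.0 / video_var
    fused = (w_mag * np.asarray(magnetic_xy, dtype=float)
             + w_vid * np.asarray(video_xy, dtype=float)) / (w_mag + w_vid)
    return fused   # fused gaze point in screen coordinates

if __name__ == "__main__":
    # The magnetic estimate is noisier here, so the fused point lands closer
    # to the video-based estimate.
    print(fuse_gaze(magnetic_xy=(640, 360), magnetic_var=25.0,
                    video_xy=(700, 380), video_var=5.0))   # ~[690.0, 376.7]
```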
  • Publication number: 20120268359
    Abstract: An electronic device may be controlled using nerve analysis by measuring a nerve activity level for one or more body parts of a user of the device using one or more nerve sensors associated with the electronic device. A relationship can be determined between the user's one or more body parts and an intended interaction by the user with one or more components of the electronic device using each nerve activity level determined. A control input or reduced set of likely actions can be established for the electronic device based on the relationship determined.
    Type: Application
    Filed: April 19, 2011
    Publication date: October 25, 2012
    Applicant: Sony Computer Entertainment Inc.
    Inventors: Ruxin Chen, Ozlem Kalinli, Richard L. Marks, Jeffrey R. Stafford
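As an illustration of the "reduced set of likely actions" idea in the abstract above, the sketch below thresholds per-body-part nerve activity levels and keeps only the actions mapped to sufficiently active body parts. The body-part-to-action table, the threshold, and the normalized readings are hypothetical.

```python
# A minimal sketch of establishing a "reduced set of likely actions" from
# nerve activity levels. The body-part-to-action table, the 0..1 readings,
# and the threshold are hypothetical.
ACTIONS_BY_BODY_PART = {
    "right_thumb": ["press_button_a", "press_button_b"],
    "right_index": ["pull_trigger"],
    "left_thumb":  ["move_stick_left", "move_stick_right"],
}

def likely_actions(nerve_levels, threshold=0.6):
    """Keep only the actions mapped to body parts whose measured nerve
    activity level meets the threshold."""
    likely = []
    for part, level in nerve_levels.items():
        if level >= threshold:
            likely.extend(ACTIONS_BY_BODY_PART.get(part, []))
    return likely

if __name__ == "__main__":
    readings = {"right_thumb": 0.82, "right_index": 0.15, "left_thumb": 0.40}
    print(likely_actions(readings))   # ['press_button_a', 'press_button_b']
```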
  • Publication number: 20120259638
    Abstract: Audio or visual orientation cues can be used to determine the relevance of input speech. The presence of a user's face may be identified during speech over an interval of time. One or more facial orientation characteristics associated with the user's face during the interval of time may be determined. In some cases, orientation characteristics for input sound can be determined. A relevance of the user's speech during the interval of time may be characterized based on the one or more orientation characteristics.
    Type: Application
    Filed: April 8, 2011
    Publication date: October 11, 2012
    Inventor: Ozlem Kalinli
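A rough sketch of characterizing speech relevance from facial orientation over an interval, as the abstract above describes: here relevance is the fraction of frames whose head yaw and pitch stay within assumed limits, which is only one plausible reading of "orientation characteristics".

```python
# A minimal sketch of scoring speech relevance from per-frame head pose over
# an interval. The yaw/pitch limits and the frame-fraction rule are assumed
# for illustration; the patent does not specify this particular rule.
def speech_relevance(yaw_deg, pitch_deg, yaw_limit=20.0, pitch_limit=15.0):
    """Fraction of frames in the interval where the face is oriented toward
    the device, used here as a crude relevance score for the user's speech."""
    facing = [abs(y) <= yaw_limit and abs(p) <= pitch_limit
              for y, p in zip(yaw_deg, pitch_deg)]
    return sum(facing) / len(facing) if facing else 0.0

if __name__ == "__main__":
    # Per-frame head pose estimates (degrees) over a short utterance.
    yaw = [3.0, 5.0, 8.0, 35.0, 40.0, 6.0]
    pitch = [1.0, 2.0, 4.0, 10.0, 12.0, 3.0]
    score = speech_relevance(yaw, pitch)
    print(f"relevance = {score:.2f}")                      # 0.67
    print("accept speech" if score > 0.5 else "ignore speech")
```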
  • Publication number: 20120259554
    Abstract: A tongue tracking interface apparatus for control of a computer program may include a mouthpiece configured to be worn over one or more teeth of a user of the computer program. The mouthpiece can include one or more sensors configured to determine one or more tongue orientation characteristics of the user. Other sensors such as microphones, pressure sensors, etc. located around the head, face, and neck, can also be used for determining tongue orientation characteristics.
    Type: Application
    Filed: April 8, 2011
    Publication date: October 11, 2012
    Applicant: Sony Computer Entertainment Inc.
    Inventors: Ruxin Chen, Ozlem Kalinli
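To make the tongue-orientation idea concrete, the sketch below derives a single orientation characteristic (a pressure-weighted contact centroid across a hypothetical row of mouthpiece pressure sensors) and maps it to a coarse control input. The sensor layout, the centroid rule, and the left/center/right mapping are assumptions for illustration.

```python
# A minimal sketch of turning a row of mouthpiece pressure readings into a
# tongue orientation characteristic (a pressure-weighted contact centroid)
# and then into a coarse control input. Sensor count, layout, and the
# left/center/right mapping are hypothetical.
def tongue_contact_centroid(pressures):
    """Pressure-weighted centroid in [0, 1]: 0 = leftmost sensor, 1 = rightmost."""
    total = sum(pressures)
    if total == 0:
        return None   # tongue not touching the mouthpiece
    last = len(pressures) - 1
    return sum(i * p for i, p in enumerate(pressures)) / (total * last)

def control_input(centroid, dead_zone=0.15):
    """Map the centroid to a simple directional command for the program."""
    if centroid is None:
        return "none"
    if centroid < 0.5 - dead_zone:
        return "left"
    if centroid > 0.5 + dead_zone:
        return "right"
    return "center"

if __name__ == "__main__":
    # Eight pressure sensors across the mouthpiece; pressure peaks on the right.
    readings = [0.0, 0.0, 0.1, 0.2, 0.4, 0.8, 0.9, 0.6]
    c = tongue_contact_centroid(readings)
    print(round(c, 2), control_input(c))   # 0.76 right
```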
  • Publication number: 20120253812
    Abstract: In syllable or vowel or phone boundary detection during speech, an auditory spectrum may be determined for an input window of sound and one or more multi-scale features may be extracted from the auditory spectrum. Each multi-scale feature can be extracted using a separate two-dimensional spectro-temporal receptive filter. One or more feature maps corresponding to the one or more multi-scale features can be generated and an auditory gist vector can be extracted from each of the one or more feature maps. A cumulative gist vector may be obtained through augmentation of each auditory gist vector extracted from the one or more feature maps. One or more syllable or vowel or phone boundaries in the input window of sound can be detected by mapping the cumulative gist vector to one or more syllable or vowel or phone boundary characteristics using a machine learning algorithm.
    Type: Application
    Filed: April 1, 2011
    Publication date: October 4, 2012
    Applicant: Sony Computer Entertainment Inc.
    Inventors: Ozlem Kalinli, Ruxin Chen
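The abstract above (publication 20120253812) walks through a concrete feature pipeline: auditory spectrum, multi-scale features from 2D spectro-temporal receptive filters, feature maps, per-map auditory gist vectors, and a cumulative gist vector fed to a classifier. The Python sketch below follows that sequence under simplifying assumptions: a log-magnitude spectrogram stands in for the auditory spectrum, separable smoothing filters stand in for the receptive filters, and the final boundary classifier is omitted.

```python
# A minimal sketch of the gist-feature pipeline, assuming a log-magnitude
# spectrogram in place of the auditory spectrum and simple separable
# smoothing filters in place of the 2D spectro-temporal receptive filters.
# Filter sizes, the 4x5 averaging grid, and all framing parameters are
# illustrative; the boundary classifier at the end is omitted.
import numpy as np
from scipy.signal import convolve2d

def auditory_spectrum(window, n_fft=512, hop=160, n_bands=64):
    """Log-magnitude spectrogram of shape (bands, frames) as a stand-in."""
    frames = []
    for start in range(0, len(window) - n_fft + 1, hop):
        seg = window[start:start + n_fft] * np.hanning(n_fft)
        mag = np.abs(np.fft.rfft(seg))[:n_bands]
        frames.append(np.log(mag + 1e-8))
    return np.array(frames).T

def receptive_filters(sizes=(5, 9, 15)):
    """One 2D smoothing filter per scale (a stand-in for the receptive filters)."""
    filters = []
    for size in sizes:
        g = np.outer(np.hanning(size), np.hanning(size))
        filters.append(g / g.sum())
    return filters

def gist_vector(feature_map, grid=(4, 5)):
    """Average a feature map over a coarse grid to get a low-dimensional gist."""
    rows = np.array_split(feature_map, grid[0], axis=0)
    cells = [np.array_split(r, grid[1], axis=1) for r in rows]
    return np.array([cell.mean() for row in cells for cell in row])

def cumulative_gist(window):
    """Augment (concatenate) the gist vectors extracted from each feature map."""
    spec = auditory_spectrum(window)
    maps = [convolve2d(spec, f, mode="same") for f in receptive_filters()]
    return np.concatenate([gist_vector(m) for m in maps])

if __name__ == "__main__":
    audio = np.random.randn(16000)        # 1 s input window at 16 kHz
    g = cumulative_gist(audio)
    print(g.shape)                        # (60,) = 3 filters x 4x5 grid
    # g would then be mapped to boundary / no-boundary by a trained classifier.
```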
  • Publication number: 20120146891
    Abstract: Methods and systems for adapting a display screen output based on a display user's attention. Gaze direction tracking is employed to determine a sub-region of a display screen area to which a user is attending. Display of the attended sub-region is modified relative to the remainder of the display screen, for example, by changing the quantity of data representing an object displayed within the attended sub-region relative to an object displayed in an unattended sub-region of the display screen.
    Type: Application
    Filed: December 8, 2010
    Publication date: June 14, 2012
    Applicant: Sony Computer Entertainment Inc.
    Inventor: Ozlem Kalinli
  • Publication number: 20120116756
    Abstract: In a spoken language processing method for tone/intonation recognition, an auditory spectrum may be determined for an input window of sound and one or more multi-scale features may be extracted from the auditory spectrum. Each multi-scale feature can be extracted using a separate two-dimensional spectro-temporal receptive filter. One or more feature maps corresponding to the one or more multi-scale features can be generated and an auditory gist vector can be extracted from each of the one or more feature maps. A cumulative gist vector may be obtained through augmentation of each auditory gist vector extracted from the one or more feature maps. One or more tonal characteristics corresponding to the input window of sound can be determined by mapping the cumulative gist vector to one or more tonal characteristics using a machine learning algorithm.
    Type: Application
    Filed: November 10, 2010
    Publication date: May 10, 2012
    Applicant: Sony Computer Entertainment Inc.
    Inventor: Ozlem Kalinli
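Publication 20120116756 shares the gist-vector front end sketched under 20120253812; what differs is the final step of mapping the cumulative gist vector to tonal characteristics with a machine learning algorithm. The sketch below uses a nearest-centroid classifier over assumed tone labels purely as a stand-in for that unspecified learner.

```python
# A minimal stand-in for the final mapping step: a nearest-centroid
# classifier from cumulative gist vectors to tone labels. The four tone
# labels, the 60-dimensional features, and the toy data are assumptions.
import numpy as np

class NearestCentroidToneClassifier:
    """Assign a tone label by distance to the per-class mean gist vector."""

    def fit(self, gist_vectors, tone_labels):
        self.labels_ = sorted(set(tone_labels))
        self.centroids_ = np.array(
            [np.mean([g for g, t in zip(gist_vectors, tone_labels) if t == lab], axis=0)
             for lab in self.labels_])
        return self

    def predict(self, gist_vector):
        dists = np.linalg.norm(self.centroids_ - gist_vector, axis=1)
        return self.labels_[int(np.argmin(dists))]

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    tones = ["tone1", "tone2", "tone3", "tone4"]
    means = {t: rng.normal(size=60) for t in tones}          # one cluster per tone
    X = [means[t] + 0.1 * rng.normal(size=60) for t in tones for _ in range(20)]
    y = [t for t in tones for _ in range(20)]
    clf = NearestCentroidToneClassifier().fit(X, y)
    print(clf.predict(means["tone3"] + 0.1 * rng.normal(size=60)))   # tone3
```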
  • Publication number: 20100318354
    Abstract: Technologies are described herein for noise adaptive training to achieve robust automatic speech recognition. Through the use of these technologies, a noise adaptive training (NAT) approach may use both clean and corrupted speech for training. The NAT approach may normalize the environmental distortion as part of the model training. A set of underlying “pseudo-clean” model parameters may be estimated directly. This may be done without point estimation of clean speech features as an intermediate step. The pseudo-clean model parameters learned from the NAT technique may be used with a Vector Taylor Series (VTS) adaptation. Such adaptation may support decoding noisy utterances during the operating phase of an automatic speech recognition system.
    Type: Application
    Filed: June 12, 2009
    Publication date: December 16, 2010
    Applicant: Microsoft Corporation
    Inventors: Michael Lewis Seltzer, James Garnet Droppo, Ozlem Kalinli, Alejandro Acero
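The NAT abstract above mentions using pseudo-clean model parameters with Vector Taylor Series (VTS) adaptation to decode noisy utterances. The sketch below shows only the zeroth-order VTS mean shift in the log-mel domain, y ~ x + log(1 + exp(n - x)); channel terms, variance adaptation, and the Jacobian-weighted terms of the full derivation are omitted, and all shapes and values are illustrative.

```python
# A minimal sketch of zeroth-order VTS mean adaptation in the log-mel
# domain: mu_noisy ~= mu_clean + log(1 + exp(mu_noise - mu_clean)).
# Channel terms, variance adaptation, and the Jacobian-weighted terms of
# the full VTS derivation are omitted; shapes and values are illustrative.
import numpy as np

def vts_adapt_means(pseudo_clean_means, noise_mean):
    """Shift pseudo-clean Gaussian means toward the noisy-speech domain.

    pseudo_clean_means : (num_gaussians, num_log_mel_bins)
    noise_mean         : (num_log_mel_bins,) noise estimate in the same domain
    """
    # log(1 + exp(n - x)) computed stably via logaddexp(0, n - x)
    shift = np.logaddexp(0.0, noise_mean - pseudo_clean_means)
    return pseudo_clean_means + shift

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    pseudo_clean = rng.normal(loc=5.0, scale=1.0, size=(8, 24))   # 8 Gaussians, 24 bins
    noise = rng.normal(loc=3.0, scale=0.5, size=24)               # noise estimate
    adapted = vts_adapt_means(pseudo_clean, noise)
    # Additive noise can only raise energy, so adapted means never decrease.
    print(adapted.shape, bool(np.all(adapted >= pseudo_clean)))   # (8, 24) True
```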