Patents by Inventor Isaac Levin

Isaac Levin has filed for patents to protect the following inventions. This listing includes pending patent applications as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Publication number: 20240202598
    Abstract: A method for semi-automated labeling of data for machine learning training. Data is collected via time-series sensors to form an unlabeled dataset. After one or more event type labels are received for a subset of the dataset, thereby forming a labeled dataset, the remainder of the unlabeled dataset is automatically labeled. Potential new labels for the remaining unlabeled data are determined via cross-correlation between the labeled dataset and the unlabeled dataset. The potential new labels are then presented as training data for a machine learning algorithm.
    Type: Application
    Filed: December 15, 2023
    Publication date: June 20, 2024
    Applicant: QEEXO, CO.
    Inventors: Stephanie Pavlick, Elias Lee Fallon, William Isaac Levine, Hasan Smeir
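The filing above (publication 20240202598) proposes cross-correlating a small human-labeled subset of time-series data against the remaining unlabeled data in order to propose labels automatically. The sketch below is a minimal, hypothetical illustration of that general idea in Python/NumPy; the windowing, the similarity threshold, and the label names are assumptions made for illustration, not details taken from the filing.

```python
import numpy as np

def propose_labels(labeled_windows, labels, unlabeled_windows, threshold=0.8):
    """Propose labels for unlabeled time-series windows by normalized
    cross-correlation against a small labeled set (illustrative only)."""
    proposals = []
    for window in unlabeled_windows:
        w = (window - window.mean()) / (window.std() + 1e-9)
        best_label, best_score = None, -1.0
        for ref, label in zip(labeled_windows, labels):
            r = (ref - ref.mean()) / (ref.std() + 1e-9)
            # The peak of the normalized cross-correlation measures similarity
            # even when the two windows are offset by a small time shift.
            score = np.max(np.correlate(w, r, mode="full")) / len(r)
            if score > best_score:
                best_label, best_score = label, score
        # Only confident matches become training-data candidates; the rest
        # are left unlabeled for a human reviewer.
        proposals.append(best_label if best_score >= threshold else None)
    return proposals

# Toy usage: two labeled reference windows, three unlabeled windows.
t = np.linspace(0, 2 * np.pi, 128)
refs = [np.sin(t), np.sign(np.sin(3 * t))]
labs = ["smooth_motion", "impact"]          # hypothetical event type labels
unlabeled = [np.sin(t + 0.3), np.sign(np.sin(3 * t + 0.1)), np.random.randn(128)]
print(propose_labels(refs, labs, unlabeled))
```

Windows whose best match falls below the threshold stay unlabeled, which mirrors the "semi-automated" framing: confident matches feed the training set, the remainder goes back to a human.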
  • Patent number: 11543922
    Abstract: Techniques enabling improved classification of touch or hover interactions of objects with a touch sensitive surface of a device are presented. A speaker of the device can emit an ultrasonic audio signal comprising a first frequency distribution. A microphone of the device can detect a reflected audio signal comprising a second frequency distribution. The audio signal can be reflected off of an object in proximity to the surface to produce the reflected audio signal. A classification component can determine movement status of the object, or classify the touch or hover interaction, in relation to the surface, based on analysis of the signals. The classification component also can classify the touch or hover interaction based on such ultrasound data and/or touch surface or other sensor data. The classification component can be trained, using machine learning, to perform classifications of touch or hover interactions of objects with the surface.
    Type: Grant
    Filed: December 9, 2021
    Date of Patent: January 3, 2023
    Assignee: QEEXO, CO.
    Inventors: Taihei Munemoto, William Isaac Levine
  • Publication number: 20220100298
    Abstract: Techniques enabling improved classification of touch or hover interactions of objects with a touch sensitive surface of a device are presented. A speaker of the device can emit an ultrasonic audio signal comprising a first frequency distribution. A microphone of the device can detect a reflected audio signal comprising a second frequency distribution. The audio signal can be reflected off of an object in proximity to the surface to produce the reflected audio signal. A classification component can determine movement status of the object, or classify the touch or hover interaction, in relation to the surface, based on analysis of the signals. The classification component also can classify the touch or hover interaction based on such ultrasound data and/or touch surface or other sensor data. The classification component can be trained, using machine learning, to perform classifications of touch or hover interactions of objects with the surface.
    Type: Application
    Filed: December 9, 2021
    Publication date: March 31, 2022
    Applicant: QEEXO, CO.
    Inventors: Taihei Munemoto, William Isaac Levine
  • Patent number: 11231815
    Abstract: Techniques enabling improved classification of touch or hover interactions of objects with a touch sensitive surface of a device are presented. A speaker of the device can emit an ultrasonic audio signal comprising a first frequency distribution. A microphone of the device can detect a reflected audio signal comprising a second frequency distribution. The audio signal can be reflected off of an object in proximity to the surface to produce the reflected audio signal. A classification component can determine movement status of the object, or classify the touch or hover interaction, in relation to the surface, based on analysis of the signals. The classification component also can classify the touch or hover interaction based on such ultrasound data and/or touch surface or other sensor data. The classification component can be trained, using machine learning, to perform classifications of touch or hover interactions of objects with the surface.
    Type: Grant
    Filed: June 28, 2019
    Date of Patent: January 25, 2022
    Assignee: QEEXO, CO.
    Inventors: Taihei Munemoto, William Isaac Levine
  • Publication number: 20200409489
    Abstract: Techniques enabling improved classification of touch or hover interactions of objects with a touch sensitive surface of a device are presented. A speaker of the device can emit an ultrasonic audio signal comprising a first frequency distribution. A microphone of the device can detect a reflected audio signal comprising a second frequency distribution. The audio signal can be reflected off of an object in proximity to the surface to produce the reflected audio signal. A classification component can determine movement status of the object, or classify the touch or hover interaction, in relation to the surface, based on analysis of the signals. The classification component also can classify the touch or hover interaction based on such ultrasound data and/or touch surface or other sensor data. The classification component can be trained, using machine learning, to perform classifications of touch or hover interactions of objects with the surface.
    Type: Application
    Filed: June 28, 2019
    Publication date: December 31, 2020
    Inventors: Taihei Munemoto, William Isaac Levine
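The four QEEXO filings above share one abstract: emit an ultrasonic tone from the device speaker, capture its reflection with the microphone, and let a trained classifier decide, from the frequency content of the reflection (optionally combined with touch-surface or other sensor data), whether an object is touching or hovering over the surface and how it is moving. The following is a hedged sketch of such a pipeline using NumPy and scikit-learn; the 20 kHz tone, the FFT-band features, and the RandomForestClassifier are illustrative assumptions, not the patented implementation.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

SAMPLE_RATE = 48_000      # Hz, typical phone audio hardware (assumption)
TONE_FREQ = 20_000        # emitted ultrasonic tone in Hz (assumption)

def reflection_features(mic_frame: np.ndarray) -> np.ndarray:
    """Magnitude spectrum in a narrow band around the emitted tone.
    Doppler shifts and amplitude changes in this band carry information
    about an object moving toward or away from the surface."""
    spectrum = np.abs(np.fft.rfft(mic_frame * np.hanning(len(mic_frame))))
    freqs = np.fft.rfftfreq(len(mic_frame), d=1.0 / SAMPLE_RATE)
    band = (freqs > TONE_FREQ - 500) & (freqs < TONE_FREQ + 500)
    return spectrum[band]

def train_classifier(frames: list[np.ndarray], labels: list[str]):
    """Fit a classifier on recorded microphone frames with labels such as
    "hover", "touch", or "no_object" (hypothetical label set)."""
    X = np.stack([reflection_features(f) for f in frames])
    clf = RandomForestClassifier(n_estimators=100, random_state=0)
    clf.fit(X, labels)
    return clf

def classify_interaction(clf, mic_frame: np.ndarray) -> str:
    return clf.predict(reflection_features(mic_frame)[None, :])[0]

# Toy demonstration with synthetic frames (noise plus a possibly shifted tone).
rng = np.random.default_rng(0)
def synth(shift_hz):
    t = np.arange(2048) / SAMPLE_RATE
    return np.sin(2 * np.pi * (TONE_FREQ + shift_hz) * t) + 0.1 * rng.standard_normal(2048)

frames = [synth(0) for _ in range(10)] + [synth(120) for _ in range(10)]
labels = ["no_object"] * 10 + ["hover"] * 10
clf = train_classifier(frames, labels)
print(classify_interaction(clf, synth(120)))   # expected: "hover"
```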
  • Patent number: 7237227
    Abstract: Embodiments of the present invention provide methods and apparatuses for quickly and easily configuring an application user interface using a flexible generic layout file. For one embodiment, a free-form grid layout is provided that allows an application provider to create a desired number of placeholders, each of a desired size, by positioning objects at desired locations on the free-form grid. In this way the application provider configures the application user interface. For one embodiment, the placeholders are created by dragging selected objects, from a provided set of objects, onto the grid layout. For such an embodiment, a set of parameters that describe the objects on the grid layout (e.g., indicating number, size, and location) is stored to a database. At run-time, the parameters are used to dynamically generate HTML code, which when executed presents the application user interface.
    Type: Grant
    Filed: August 4, 2003
    Date of Patent: June 26, 2007
    Assignee: Siebel Systems, Inc.
    Inventors: Shu Lei, Yuhong Wang, Russell Richardson, Anil Mukundan, Vipul Shroff, Isaac Levin, Ravikumar Gampala
  • Publication number: 20040268299
    Abstract: Embodiments of the present invention provide methods and apparatuses for quickly and easily configuring an application user interface using a flexible generic layout file. For one embodiment, a free-form grid layout is provided that allows an application provider to create a desired number of placeholders, each of a desired size, by positioning objects at desired locations on the free-form grid. In this way the application provider configures the application user interface. For one embodiment, the placeholders are created by dragging selected objects, from a provided set of objects, onto the grid layout. For such an embodiment, a set of parameters that describe the objects on the grid layout (e.g., indicating number, size, and location) is stored to a database. At run-time, the parameters are used to dynamically generate HTML code, which when executed presents the application user interface.
    Type: Application
    Filed: August 4, 2003
    Publication date: December 30, 2004
    Inventors: Shu Lei, Yuhong Wang, Russell Richardson, Anil Mukundan, Vipul Shroff, Isaac Levin, Ravikumar Gampala
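Both Siebel Systems filings above describe storing the number, size, and location of placeholders positioned on a free-form grid, then dynamically generating the user-interface HTML from those stored parameters at run time. Below is a minimal, hypothetical Python sketch of the parameter-to-HTML step; the Placeholder fields and the CSS grid output are assumptions made for illustration, and a real system would load the records from a database rather than a literal list.

```python
from dataclasses import dataclass

@dataclass
class Placeholder:
    """One object dropped onto the free-form grid: where it sits and how big it is."""
    name: str
    row: int
    col: int
    row_span: int = 1
    col_span: int = 1

def render_layout(placeholders: list[Placeholder]) -> str:
    """Generate HTML for the stored layout parameters at run time."""
    cells = []
    for p in placeholders:
        style = (f"grid-row: {p.row} / span {p.row_span}; "
                 f"grid-column: {p.col} / span {p.col_span};")
        cells.append(f'  <div class="placeholder" style="{style}">{p.name}</div>')
    return '<div style="display: grid;">\n' + "\n".join(cells) + "\n</div>"

# Example: three placeholders configured by an application provider.
layout = [
    Placeholder("header", row=1, col=1, col_span=2),
    Placeholder("nav", row=2, col=1),
    Placeholder("content", row=2, col=2),
]
print(render_layout(layout))
```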
  • Patent number: 5771306
    Abstract: The apparatus for the recognition of speech comprises an acoustic preprocessor, a visual preprocessor, and a speech classifier that operates on the preprocessed acoustic and visual data. The acoustic preprocessor comprises a log mel spectrum analyzer that produces an equal mel bandwidth log power spectrum. The visual preprocessor detects the motion of a set of fiducial markers on the speaker's face and extracts a set of normalized distance vectors describing lip and mouth movement. The speech classifier uses a multilevel time-delay neural network operating on the preprocessed acoustic and visual data to form an output probability distribution indicating, for each candidate utterance, the probability that it was spoken.
    Type: Grant
    Filed: October 22, 1993
    Date of Patent: June 23, 1998
    Assignees: Ricoh Corporation; Ricoh Company, Ltd.
    Inventors: David G. Stork, Gregory Joseph Wolff, Earl Isaac Levine
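Patent 5771306 fuses two feature streams: an equal-mel-bandwidth log power spectrum from the audio and normalized distance vectors derived from fiducial markers around the lips, classified by a multilevel time-delay neural network that outputs a probability for each candidate utterance. The sketch below is a rough, assumed illustration of that fusion step in Python with PyTorch, using 1-D convolutions over time in place of explicit time delays; the feature dimensions, layer sizes, and number of utterance classes are invented for the example.

```python
import torch
import torch.nn as nn

class AudioVisualTDNN(nn.Module):
    """Toy time-delay-style network fusing acoustic and visual streams.
    1-D convolutions over the time axis stand in for the time delays."""
    def __init__(self, n_mel=20, n_visual=5, n_classes=10):
        super().__init__()
        self.audio_net = nn.Sequential(
            nn.Conv1d(n_mel, 32, kernel_size=5), nn.ReLU(),
            nn.Conv1d(32, 32, kernel_size=3), nn.ReLU(),
        )
        self.visual_net = nn.Sequential(
            nn.Conv1d(n_visual, 16, kernel_size=5), nn.ReLU(),
            nn.Conv1d(16, 16, kernel_size=3), nn.ReLU(),
        )
        self.classifier = nn.Linear(32 + 16, n_classes)

    def forward(self, mel_frames, lip_distances):
        # mel_frames: (batch, n_mel, time); lip_distances: (batch, n_visual, time)
        a = self.audio_net(mel_frames).mean(dim=-1)      # pool over time
        v = self.visual_net(lip_distances).mean(dim=-1)
        logits = self.classifier(torch.cat([a, v], dim=-1))
        # Probability of each candidate utterance given both modalities.
        return logits.softmax(dim=-1)

# Toy usage with random features: one clip, 50 time steps per stream.
model = AudioVisualTDNN()
probs = model(torch.randn(1, 20, 50), torch.randn(1, 5, 50))
print(probs.shape)   # torch.Size([1, 10])
```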