Patents by Inventor Allen Yang Yang

Allen Yang Yang has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Publication number: 20150093022
    Abstract: First and second images are captured at first and second focal lengths, the second focal length being longer than the first. Element sets are defined, each pairing a first element of the first image with a corresponding second element of the second image. An element set is identified as background if its second element is at least as in-focus as its first element. Background elements are excluded from further analysis. Comparisons are based on relative focus, e.g. whether image elements are more or less in-focus; measurement of absolute focus is not necessary, nor is measurement of absolute focus change, and the images need not be in-focus. More than two images, multiple element sets, and/or multiple categories and relative-focus relationships may also be used.
    Type: Application
    Filed: December 8, 2014
    Publication date: April 2, 2015
    Inventors: Mohamed Nabil Hajj Chehade, Sina Fateh, Sleiman Itani, Allen Yang Yang
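As an editorial illustration (not part of the filing), the relative-focus comparison this abstract describes can be sketched as follows. The Laplacian-variance sharpness measure and the 8-pixel patch size are assumptions for the sketch; the abstract only requires that focus be compared relatively, not how.

```python
import numpy as np

def sharpness(patch):
    """Variance of a discrete Laplacian as a relative-focus score.

    One common proxy for local focus; an assumption here, since the
    abstract does not name a measure.
    """
    p = patch.astype(float)
    lap = (-4 * p[1:-1, 1:-1] + p[:-2, 1:-1] + p[2:, 1:-1]
           + p[1:-1, :-2] + p[1:-1, 2:])
    return lap.var()

def background_mask(short_focal, long_focal, patch=8):
    """Mark an element set as background when the long-focal-length
    element is at least as in-focus as the short-focal-length element."""
    h, w = short_focal.shape
    mask = np.zeros((h // patch, w // patch), dtype=bool)
    for i in range(mask.shape[0]):
        for j in range(mask.shape[1]):
            a = short_focal[i*patch:(i+1)*patch, j*patch:(j+1)*patch]
            b = long_focal[i*patch:(i+1)*patch, j*patch:(j+1)*patch]
            mask[i, j] = sharpness(b) >= sharpness(a)
    return mask
```

Note that only the ordering of the two sharpness scores matters, matching the abstract's point that absolute focus never has to be measured.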
  • Publication number: 20140159862
    Abstract: A wearable sensor vehicle with a bio-input sensor and a processor. When the vehicle is worn, the sensor is arranged so as to sense bio-input from the user. The sensor senses bio-input, the processor compares the bio-input to a standard, and if the standard is met the processor indicates a response. The user may be uniquely identified from the bio-input. One or more systems on or communicating with the vehicle may be controlled transparently, without requiring direct action by the user. Control actions may include security identification of the user, logging in to accounts or programs, setting preferences, etc. The sensor collects bio-input substantially without instruction or dedicated action from the user; the processor compares bio-input against the standard substantially without instruction or dedicated action from the user; and the processor generates and/or implements a response based on the bio-input substantially without instruction or dedicated action from the user.
    Type: Application
    Filed: November 22, 2013
    Publication date: June 12, 2014
    Applicant: Atheer, Inc.
    Inventors: Allen Yang Yang, Mohamed Nabil Hajj Chehade, Sina Fateh, Sleiman Itani
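A minimal sketch of the compare-against-a-standard step in this abstract, offered as an editorial illustration only: normalized correlation as the similarity measure and a 0.9 threshold are assumptions, since the abstract does not specify how a bio-input is matched to its standard.

```python
import numpy as np

def matches_standard(bio_input, template, threshold=0.9):
    """Compare a sensed bio-input vector to a stored standard via
    normalized correlation (an illustrative choice of measure)."""
    a = (bio_input - bio_input.mean()) / (bio_input.std() + 1e-12)
    b = (template - template.mean()) / (template.std() + 1e-12)
    score = float(a @ b) / len(a)
    return score >= threshold

def identify(bio_input, standards):
    """Return the user whose standard the bio-input meets, else None.

    Runs on passively sensed data, i.e. without any dedicated action
    by the user, as the abstract emphasizes.
    """
    for user, template in standards.items():
        if matches_standard(bio_input, template):
            return user
    return None
```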
  • Publication number: 20140139340
    Abstract: World data is established, including real-world position and/or real-world motion of an entity. Target data is established, including planned or ideal position and/or motion for the entity. Guide data is established, including information for guiding a person or other subject in bringing world data into match with target data. The guide data is outputted to the subject as virtual and/or augmented reality data. Evaluation data may be established, including a comparison of world data with target data. World data, target data, guide data, and/or evaluation data may be dynamically updated. Subjects may be instructed in positions and motions by using guide data to bring world data into match with target data, and by receiving evaluation data. Instruction includes physical therapy, sports, recreation, medical treatment, fabrication, diagnostics, repair of mechanical systems, etc.
    Type: Application
    Filed: November 22, 2013
    Publication date: May 22, 2014
    Applicant: Atheer, Inc.
    Inventors: Allen Yang Yang, Mohamed Nabil Hajj Chehade, Sina Fateh, Sleiman Itani
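The world-data/target-data comparison above reduces to measuring how far an entity's real position is from its planned position and emitting both guide data (the correction to render) and evaluation data (whether the match is close enough). A sketch, with Euclidean position error and a tolerance parameter as illustrative assumptions:

```python
import numpy as np

def evaluate(world_pos, target_pos, tolerance=0.05):
    """Compare world data against target data for one entity.

    Returns a guide vector (the correction that could be overlaid as
    virtual/augmented reality guidance) and an evaluation flag telling
    whether the world position already matches the target within
    tolerance. Euclidean distance is an illustrative choice.
    """
    guide = np.asarray(target_pos, float) - np.asarray(world_pos, float)
    error = float(np.linalg.norm(guide))
    return guide, error <= tolerance
```

Dynamically re-running this as the subject moves gives the updating guide and evaluation data the abstract describes.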
  • Publication number: 20140125557
    Abstract: Method and apparatus for interacting with a three dimensional interface. In the method, a three dimensional interface with at least one virtual object is generated. An interaction zone is defined and generated, enclosing some or all of the object. A stimulus of the interaction zone, e.g. approach/contact with a finger/stylus is defined, and a response to the stimulus is defined, e.g. changes to the object, system actions, feedback, etc. When the stimulus is sensed the response is executed. The apparatus includes a processor that generates a three dimensional interface with at least one virtual object, defines an interaction zone for the object, and defines a stimulus and a response. A display outputs the interface and object. A camera or other sensor detects stimulus of the interaction zone, whereupon the processor generates a response signal. The apparatus may be part of a head mounted display.
    Type: Application
    Filed: March 12, 2013
    Publication date: May 8, 2014
    Inventors: Iryna Issayeva, Sleiman Itani, Allen Yang Yang, Mohamed Nabil Hajj Chehade
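The interaction-zone pattern in this abstract can be sketched very compactly; the axis-aligned box geometry and the string response below are assumptions for illustration, since the abstract leaves both the zone's shape and the response open.

```python
from dataclasses import dataclass

@dataclass
class InteractionZone:
    """A zone enclosing a virtual object, here an axis-aligned cube
    (an illustrative geometry, not fixed by the abstract)."""
    center: tuple
    half_extent: float

    def stimulated(self, point):
        """True when an end-effector point (e.g. a tracked fingertip
        or stylus tip) is inside the zone."""
        return all(abs(p - c) <= self.half_extent
                   for p, c in zip(point, self.center))

def handle(zone, point, response):
    """Execute the defined response when the stimulus is sensed."""
    if zone.stimulated(point):
        return response()
    return None
```

In an actual head-mounted display the `point` would come from a camera or other sensor each frame, and `response` would change the object, trigger a system action, or give feedback.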
  • Publication number: 20140118570
    Abstract: First and second images are captured at first and second focal lengths, the second focal length being longer than the first. Element sets are defined, each pairing a first element of the first image with a corresponding second element of the second image. An element set is identified as background if its second element is at least as in-focus as its first element. Background elements are excluded from further analysis. Comparisons are based on relative focus, e.g. whether image elements are more or less in-focus; measurement of absolute focus is not necessary, nor is measurement of absolute focus change, and the images need not be in-focus. More than two images, multiple element sets, and/or multiple categories and relative-focus relationships may also be used.
    Type: Application
    Filed: March 5, 2013
    Publication date: May 1, 2014
    Applicant: Atheer, Inc.
    Inventors: Mohamed Nabil Hajj Chehade, Sina Fateh, Sleiman Itani, Allen Yang Yang
  • Publication number: 20140067869
    Abstract: A machine-implemented method includes establishing a virtual or augmented reality data entity, and establishing a state for the entity having a state time and state properties, including a state spatial arrangement. The data entity and state are stored, then received and outputted at a time other than the state time so as to exhibit a “virtual time machine” functionality. An apparatus includes a processor, a data store, and an output. A data entity establisher, a state establisher, a storer, a data entity receiver, a state receiver, and an outputter are instantiated on the processor.
    Type: Application
    Filed: August 29, 2013
    Publication date: March 6, 2014
    Applicant: Atheer, Inc.
    Inventors: Sina Fateh, Ron Butterworth, Mohamed Nabil Hajj Chehade, Allen Yang Yang, Sleiman Itani
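The store-then-replay behavior in this abstract amounts to keeping timestamped states and answering queries for times other than any state time. A sketch, where the nearest-earlier-state lookup rule is an assumption made for illustration:

```python
from bisect import bisect_right

class StateStore:
    """Store timestamped states for a data entity and output the state
    that was current at any requested time, giving the "virtual time
    machine" behavior described in the abstract."""

    def __init__(self):
        self._times = []
        self._states = []

    def store(self, state_time, properties):
        # Keep states sorted by state time for fast lookup.
        i = bisect_right(self._times, state_time)
        self._times.insert(i, state_time)
        self._states.insert(i, properties)

    def output_at(self, query_time):
        """Return the state current at query_time, which may differ
        from every stored state time; None before the first state."""
        i = bisect_right(self._times, query_time)
        return self._states[i - 1] if i else None
```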
  • Publication number: 20140067768
    Abstract: A machine-implemented method includes establishing a virtual or augmented reality data entity, and establishing a state for the entity having a state time and state properties, including a state spatial arrangement. The data entity and state are stored, then received and outputted at a time other than the state time so as to exhibit a “virtual time machine” functionality. An apparatus includes a processor, a data store, and an output. A data entity establisher, a state establisher, a storer, a data entity receiver, a state receiver, and an outputter are instantiated on the processor.
    Type: Application
    Filed: August 29, 2013
    Publication date: March 6, 2014
    Applicant: Atheer, Inc.
    Inventors: Sina Fateh, Ron Butterworth, Mohamed Nabil Hajj Chehade, Allen Yang Yang, Sleiman Itani
  • Publication number: 20130336528
    Abstract: Disclosed are methods and apparatuses to recognize actors during normal system operation. The method includes defining actor input such as hand gestures, detecting execution of that input, and identifying salient features of the actor therein. A model is defined from the salient features, and the data set of salient features and/or the model is retained and may be used to identify actors for other inputs. A command such as “unlock” may be executed in response to actor input. Parameters may be applied to further define where, when, how, etc. actor input is executed, such as defining a region for a gesture. The apparatus includes a processor and a sensor: the processor defines actor input, identifies salient features, defines a model therefrom, and retains a data set. A display may also be used to show actor input, a defined region, relevant information, and/or an environment. A stylus or other non-human actor may be used.
    Type: Application
    Filed: May 23, 2013
    Publication date: December 19, 2013
    Applicant: Atheer, Inc.
    Inventors: Sleiman Itani, Allen Yang Yang
  • Publication number: 20130336529
    Abstract: Disclosed are methods and apparatuses for searching images. An image is received and a first search path is defined for the image. The first search path may be a straight line, horizontal, and/or near the bottom of the image, and/or may begin at one edge and move toward the other. A transition is defined for the image, distinguishing a feature to be found. The image is searched for the transition along the first search path. When the transition is detected, the image is searched along a second search path that follows the transition. The apparatus includes an image sensor and a processor. The sensor is adapted to obtain images. The processor is adapted to define a first search path and a transition for the image, to search for the transition along the first search path, and to search along a second search path upon detecting the transition, following the transition.
    Type: Application
    Filed: May 24, 2013
    Publication date: December 19, 2013
    Applicant: Atheer, Inc.
    Inventors: Sleiman Itani, Allen Yang Yang
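The two-path search this abstract describes, scanning a first path until a transition is found and then following the transition, can be sketched as follows. The intensity-difference threshold and the one-pixel window used when following the edge upward are simplifying assumptions, not details from the filing.

```python
import numpy as np

def find_transition(image, row, threshold=50):
    """First search path: scan left to right along one row for an
    intensity jump exceeding the threshold; return its column."""
    line = image[row].astype(int)
    for x in range(1, len(line)):
        if abs(line[x] - line[x - 1]) > threshold:
            return x
    return None

def follow_transition(image, row, threshold=50):
    """Second search path: after detecting the transition, follow it
    upward row by row, re-detecting the edge near the previous column
    (a one-pixel search window, an illustrative simplification)."""
    x = find_transition(image, row, threshold)
    if x is None:
        return []
    path = [(row, x)]
    for y in range(row - 1, -1, -1):
        line = image[y].astype(int)
        for cand in (x, x - 1, x + 1):
            if 0 < cand < len(line) and abs(line[cand] - line[cand - 1]) > threshold:
                x = cand
                break
        else:
            break  # transition lost; stop following
        path.append((y, x))
    return path
```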
  • Publication number: 20130257692
    Abstract: In the method, a processor generates a three dimensional interface with at least one virtual object, defines a stimulus of the interface, and defines a response to the stimulus. The stimulus is an approach to the virtual object with a finger or other end-effector to within a threshold of the virtual object. When the stimulus is sensed, the response is executed. Stimuli may include touch, click, double click, peg, scale, and swipe gestures. The apparatus includes a processor that generates a three dimensional interface with at least one virtual object, and defines a stimulus for the virtual object and a response to the stimulus. A display outputs the interface and object. A camera or other sensor detects the stimulus, e.g. a gesture with a finger or other end-effector, whereupon the processor executes the response. The apparatus may be part of a head mounted display.
    Type: Application
    Filed: April 1, 2013
    Publication date: October 3, 2013
    Applicant: Atheer, Inc.
    Inventors: Allen Yang Yang, Sleiman Itani
  • Patent number: 8406525
    Abstract: A method is disclosed for recognition of high-dimensional data in the presence of occlusion, including: receiving target data that includes an occlusion and is of an unknown class, wherein the target data includes a known object; sampling a plurality of training data files comprising a plurality of distinct classes of the same object as that of the target data; and identifying the class of the target data through linear superposition of the sampled training data files using l1 minimization, wherein the linear superposition with the sparsest set of coefficients is used to identify the class of the target data.
    Type: Grant
    Filed: January 29, 2009
    Date of Patent: March 26, 2013
    Assignees: The Regents of the University of California, The Board of Trustees of the University of Illinois
    Inventors: Yi Ma, Allen Yang Yang, John Norbert Wright, Andrew William Wagner
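The sparse-superposition classification in this abstract can be sketched end to end: stack training samples as columns of a dictionary, solve an l1-regularized problem, then pick the class whose coefficients best reconstruct the target. The ISTA solver and the regularization weight below are implementation assumptions; the patent claims l1 minimization generally, not a particular solver.

```python
import numpy as np

def ista(A, y, lam=0.01, iters=500):
    """Minimal ISTA solver for min_x 0.5*||Ax - y||^2 + lam*||x||_1,
    standing in for the abstract's l1 minimization."""
    L = np.linalg.norm(A, 2) ** 2  # Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    for _ in range(iters):
        g = x - A.T @ (A @ x - y) / L          # gradient step
        x = np.sign(g) * np.maximum(np.abs(g) - lam / L, 0.0)  # shrink
    return x

def classify(A, labels, y, lam=0.01):
    """Sparse-representation classification: express the (possibly
    occluded) target y as a sparse superposition of training columns,
    then return the class whose coefficients reconstruct y best."""
    x = ista(A, y, lam)
    residuals = {}
    for c in set(labels):
        mask = np.array([l == c for l in labels])
        xc = np.where(mask, x, 0.0)            # keep only class c's part
        residuals[c] = np.linalg.norm(y - A @ xc)
    return min(residuals, key=residuals.get)
```

Because occlusion corrupts only part of the target, the sparsest superposition tends to concentrate its nonzero coefficients on the correct class, which is why the minimal-residual class is taken as the identification.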
  • Publication number: 20110064302
    Abstract: A method is disclosed for recognition of high-dimensional data in the presence of occlusion, including: receiving target data that includes an occlusion and is of an unknown class, wherein the target data includes a known object; sampling a plurality of training data files comprising a plurality of distinct classes of the same object as that of the target data; and identifying the class of the target data through linear superposition of the sampled training data files using l1 minimization, wherein the linear superposition with the sparsest set of coefficients is used to identify the class of the target data.
    Type: Application
    Filed: January 29, 2009
    Publication date: March 17, 2011
    Inventors: Yi Ma, Allen Yang Yang, John Norbert Wright, Andrew William Wagner