Patents by Inventor Allen Yang Yang

Allen Yang Yang has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Patent number: 11908211
    Abstract: Comprehensive 2D learning images are collected for learning subjects. Standardized 2D gallery images of many gallery subjects are collected, one per gallery subject. A 2D query image of a query subject is collected, of arbitrary viewing aspect, illumination, etc. 3D learning models, 3D gallery models, and a 3D query model are determined from the learning, gallery, and query images. A transform is determined for the selected learning model and each gallery model that yields or approximates the query image. The transform is at least partly 3D, such as 3D illumination transfer or 3D orientation alignment. The transform is applied to each gallery model so that the transformed gallery models more closely resemble the query model. 2D transformed gallery images are produced from the transformed gallery models, and are compared against the 2D query image to identify whether the query subject is also any of the gallery subjects.
    Type: Grant
    Filed: June 13, 2022
    Date of Patent: February 20, 2024
    Assignee: West Texas Technology Partners, LLC
    Inventor: Allen Yang Yang
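The matching pipeline described in the abstract above can be sketched as follows. This is a minimal illustration, not the claimed implementation: `align_to_query`, `render_2d`, and the vector representation of models are hypothetical stand-ins for the patent's 3D transforms (orientation alignment, illumination transfer) and 2D rendering.

```python
import numpy as np

def align_to_query(gallery_model, query_pose):
    # Hypothetical stand-in for the partly-3D transform: re-pose the
    # gallery model so its viewing aspect matches the query's.
    model = dict(gallery_model)
    model["pose"] = query_pose
    return model

def render_2d(model):
    # Stand-in renderer: project the model's features under its pose.
    return model["features"] * np.cos(model["pose"])

def identify(query_image, query_pose, gallery):
    # Transform every gallery model toward the query, render it to 2D,
    # and return the gallery subject whose render best matches the query.
    scores = {}
    for name, model in gallery.items():
        rendered = render_2d(align_to_query(model, query_pose))
        scores[name] = -np.linalg.norm(rendered - query_image)
    return max(scores, key=scores.get)
```

The key design point from the abstract is that comparison happens in 2D, but only after each gallery model has been transformed in 3D to resemble the query's conditions.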
  • Patent number: 11828939
    Abstract: A data space such as a virtual/augmented reality environment is generated, through which a viewer/point of view may move. The physical world motion of a display outputting the data space is sensed, received, or computed. The motion of the physical world environment in which the display is located is also sensed, received, or computed. An output adjustment is determined from the display and environment motions, typically being equal to the environment motion(s). Motion of a point of view within the data space to be outputted by the display is determined. The viewpoint motion corresponds with the display motion within physical space adjusted by the output adjustment. At least part of the data space is outputted to the display from the point of view. The point of view is navigated through the data space according to the viewpoint motion.
    Type: Grant
    Filed: April 2, 2021
    Date of Patent: November 28, 2023
    Inventors: Allen Yang Yang, Sleiman Itani
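The adjustment described above can be sketched in a few lines. This is an illustrative assumption of how the arithmetic might look, with the abstract's typical case (output adjustment equal to the environment motion), not the claimed method:

```python
import numpy as np

def viewpoint_motion(display_motion, environment_motion):
    # The output adjustment is typically equal to the environment motion,
    # so the in-data-space viewpoint motion is the display's physical
    # motion net of its moving surroundings (e.g. a head-mounted display
    # worn by a passenger in a moving vehicle).
    adjustment = np.asarray(environment_motion, dtype=float)
    return np.asarray(display_motion, dtype=float) - adjustment
```

Subtracting the environment's motion keeps the virtual viewpoint still when the display moves only because its surroundings (a car, a train) are moving.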
  • Patent number: 11789583
    Abstract: A method, system, apparatus, and/or device for interacting with a three dimensional interface. The method, system, apparatus, and/or device may include: generating a zone associated with a virtual object, wherein the zone includes a first space approximate to at least a portion of the object that is distinct from a second space occupied by the object; determining an uncertainty level of a sensor to identify an input in the zone; in response to the uncertainty level exceeding a first level, increasing a size of the zone, wherein the increased size of the zone increases a precision level of the sensor; and in response to the uncertainty level being below a second level, decreasing the size of the zone, wherein the decreased size of the zone decreases the precision level of the sensor.
    Type: Grant
    Filed: August 24, 2020
    Date of Patent: October 17, 2023
    Inventors: Iryna Issayeva, Sleiman Itani, Allen Yang Yang, Mohamed Nabil Hajj Chehade
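The zone-resizing logic above amounts to a two-threshold controller. A minimal sketch, assuming illustrative threshold and step values that the abstract does not specify:

```python
def adjust_zone(zone_size, uncertainty, high=0.7, low=0.3, step=0.1):
    # Grow the interaction zone when sensor uncertainty exceeds the upper
    # threshold (a larger zone makes the input easier to resolve); shrink
    # it when uncertainty falls below the lower threshold.
    if uncertainty > high:
        return zone_size * (1 + step)
    if uncertainty < low:
        return zone_size * (1 - step)
    return zone_size
```

Using two separate thresholds (rather than one) gives the behavior hysteresis, so the zone does not oscillate when uncertainty hovers near a single cutoff.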
  • Patent number: 11768543
    Abstract: Embodiments include a method. The method includes maintaining, by a processing device, a real-time context describing a circumstance affecting an object. The method also includes determining a first saturation level of the object within a first portion of the field of view (FOV) of a sensor at a first point in time. The method also includes determining a second saturation level of the object within a second portion of the FOV of the sensor at a second point in time. The method also includes executing, by the processing device, an action to address a positive saturation determination between the first saturation level and the second saturation level.
    Type: Grant
    Filed: December 21, 2020
    Date of Patent: September 26, 2023
    Inventor: Allen Yang Yang
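The saturation comparison described above reduces to sampling a level at two points in time and acting when the change is positive. A hedged sketch; the threshold and the callback interface are illustrative assumptions, not the claimed design:

```python
def saturation_action(level_t1, level_t2, threshold=0.0, action=None):
    # Execute the action only on a "positive saturation determination":
    # the saturation level of the object rose between the two samples.
    delta = level_t2 - level_t1
    if delta > threshold and action is not None:
        return action(delta)
    return None
```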
  • Patent number: 11763530
    Abstract: A system, apparatus, device, or method to output different iterations of data entities. The method may include establishing a first data entity and establishing a first state for the first data entity. The method may include establishing a second state for the first data entity. The method may include storing the first data entity, the first state, and the second state at a storage device. The method may include retrieving a first iteration of the first data entity exhibiting at least a portion of the first state. The method may include retrieving a second iteration of the first data entity exhibiting at least a portion of the second state. The method may include outputting the first iteration and the second iteration at an output time.
    Type: Grant
    Filed: August 31, 2021
    Date of Patent: September 19, 2023
    Inventors: Sina Fateh, Ron Butterworth, Mohamed Nabil Hajj Chehade, Allen Yang Yang, Sleiman Itani
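The store-and-retrieve flow in the abstract above can be sketched as a small class. The class and method names are hypothetical; this shows only the shape of the steps (establish, store, retrieve two iterations, output both):

```python
class EntityStore:
    # Minimal sketch: one data entity, multiple stored states, and
    # iterations of the entity retrieved under each state.
    def __init__(self):
        self._states = {}

    def store(self, entity, state_name, state):
        self._states.setdefault(entity, {})[state_name] = dict(state)

    def iteration(self, entity, state_name):
        # An "iteration" is the entity exhibiting a stored state.
        return {"entity": entity, **self._states[entity][state_name]}

    def output(self, entity, first, second):
        # Output both iterations together at the same output time.
        return (self.iteration(entity, first), self.iteration(entity, second))
```

A usage example: store a "draft" and a "final" state for the same document entity, then output both iterations side by side.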
  • Publication number: 20230244353
    Abstract: In the method, a processor generates a three dimensional interface with at least one virtual object, defines a stimulus of the interface, and defines a response to the stimulus. The stimulus is an approach to the virtual object with a finger or other end-effector to within a threshold of the virtual object. When the stimulus is sensed, the response is executed. Stimuli may include touch, click, double click, peg, scale, and swipe gestures. The apparatus includes a processor that generates a three dimensional interface with at least one virtual object, and defines a stimulus for the virtual object and a response to the stimulus. A display outputs the interface and object. A camera or other sensor detects the stimulus, e.g. a gesture with a finger or other end-effector, whereupon the processor executes the response. The apparatus may be part of a head mounted display.
    Type: Application
    Filed: April 4, 2023
    Publication date: August 3, 2023
    Inventors: Allen Yang Yang, Sleiman Itani
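The core stimulus described above (an end-effector approaching a virtual object to within a threshold) can be sketched as a distance check. The function name and tuple-based geometry are illustrative assumptions:

```python
import math

def approach_stimulus(fingertip, obj_center, threshold, response):
    # Fire the defined response when the end-effector (e.g. a fingertip)
    # comes within the threshold distance of the virtual object.
    distance = math.dist(fingertip, obj_center)
    if distance <= threshold:
        return response()
    return None
```

In the apparatus described, a camera or other sensor would supply `fingertip` positions each frame, and the processor would run this check against every virtual object with a defined stimulus.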
  • Publication number: 20230173026
    Abstract: To prompt input and provide feedback on input to a user with an interface, inputs and graphical cursors associated with those inputs are defined. Each input may have several forms such as base, hover, engaged, completed, and error. User input is anticipated. The base form of the anticipated input cursor is displayed to prompt the user for the anticipated input. If user hover is detected that matches anticipated input, the hover form is displayed to confirm the match to the user. If user input is detected that matches anticipated input, the engaged form is displayed as confirmation. If user input is completed that matches anticipated input, the completed form is displayed as confirmation. If user hover or input does not match anticipated input, the error form is displayed to indicate mismatch. Not all cursors must have all forms, and some cursors may have multiples of some forms.
    Type: Application
    Filed: October 8, 2022
    Publication date: June 8, 2023
    Inventors: Sleiman Itani, Yu-Hsiang Chen, Mohamed Nabil Hajj Chehade, Allen Yang Yang
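The cursor forms in the abstract above behave like a small state machine keyed on the user's event and whether it matches the anticipated input. A minimal sketch with hypothetical event names:

```python
def cursor_form(event, matches):
    # Select which form of the anticipated input's cursor to display.
    # event is one of "none", "hover", "input", "complete".
    if event == "none":
        return "base"      # prompt the user for the anticipated input
    if not matches:
        return "error"     # hover or input does not match anticipation
    return {"hover": "hover", "input": "engaged", "complete": "completed"}[event]
```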
  • Patent number: 11620032
    Abstract: In the method, a processor generates a three dimensional interface with at least one virtual object, defines a stimulus of the interface, and defines a response to the stimulus. The stimulus is an approach to the virtual object with a finger or other end-effector to within a threshold of the virtual object. When the stimulus is sensed, the response is executed. Stimuli may include touch, click, double click, peg, scale, and swipe gestures. The apparatus includes a processor that generates a three dimensional interface with at least one virtual object, and defines a stimulus for the virtual object and a response to the stimulus. A display outputs the interface and object. A camera or other sensor detects the stimulus, e.g. a gesture with a finger or other end-effector, whereupon the processor executes the response. The apparatus may be part of a head mounted display.
    Type: Grant
    Filed: April 27, 2021
    Date of Patent: April 4, 2023
    Assignee: WEST TEXAS TECHNOLOGY PARTNERS, LLC
    Inventors: Allen Yang Yang, Sleiman Itani
  • Publication number: 20220309830
    Abstract: Comprehensive 2D learning images are collected for learning subjects. Standardized 2D gallery images of many gallery subjects are collected, one per gallery subject. A 2D query image of a query subject is collected, of arbitrary viewing aspect, illumination, etc. 3D learning models, 3D gallery models, and a 3D query model are determined from the learning, gallery, and query images. A transform is determined for the selected learning model and each gallery model that yields or approximates the query image. The transform is at least partly 3D, such as 3D illumination transfer or 3D orientation alignment. The transform is applied to each gallery model so that the transformed gallery models more closely resemble the query model. 2D transformed gallery images are produced from the transformed gallery models, and are compared against the 2D query image to identify whether the query subject is also any of the gallery subjects.
    Type: Application
    Filed: June 13, 2022
    Publication date: September 29, 2022
    Inventor: Allen Yang Yang
  • Patent number: 11361185
    Abstract: Comprehensive 2D learning images are collected for learning subjects. Standardized 2D gallery images of many gallery subjects are collected, one per gallery subject. A 2D query image of a query subject is collected, of arbitrary viewing aspect, illumination, etc. 3D learning models, 3D gallery models, and a 3D query model are determined from the learning, gallery, and query images. A transform is determined for the selected learning model and each gallery model that yields or approximates the query image. The transform is at least partly 3D, such as 3D illumination transfer or 3D orientation alignment. The transform is applied to each gallery model so that the transformed gallery models more closely resemble the query model. 2D transformed gallery images are produced from the transformed gallery models, and are compared against the 2D query image to identify whether the query subject is also any of the gallery subjects.
    Type: Grant
    Filed: November 7, 2019
    Date of Patent: June 14, 2022
    Assignee: West Texas Technology Partners, LLC
    Inventor: Allen Yang Yang
  • Publication number: 20220058881
    Abstract: A system, apparatus, device, or method to output different iterations of data entities. The method may include establishing a first data entity and establishing a first state for the first data entity. The method may include establishing a second state for the first data entity. The method may include storing the first data entity, the first state, and the second state at a storage device. The method may include retrieving a first iteration of the first data entity exhibiting at least a portion of the first state. The method may include retrieving a second iteration of the first data entity exhibiting at least a portion of the second state. The method may include outputting the first iteration and the second iteration at an output time.
    Type: Application
    Filed: August 31, 2021
    Publication date: February 24, 2022
    Inventors: Sina Fateh, Ron Butterworth, Mohamed Nabil Hajj Chehade, Allen Yang Yang, Sleiman Itani
  • Publication number: 20210365492
    Abstract: Disclosed are methods and apparatuses to recognize actors during normal system operation. The method includes defining actor input such as hand gestures, executing and detecting input, and identifying salient features of the actor therein. A model is defined from salient features, and a data set of salient features and/or model are retained, and may be used to identify actors for other inputs. A command such as “unlock” may be executed in response to actor input. Parameters may be applied to further define where, when, how, etc. actor input is executed, such as defining a region for a gesture. The apparatus includes a processor and sensor, the processor defining actor input, identifying salient features, defining a model therefrom, and retaining a data set. A display may also be used to show actor input, a defined region, relevant information, and/or an environment. A stylus or other non-human actor may be used.
    Type: Application
    Filed: May 5, 2021
    Publication date: November 25, 2021
    Inventors: Sleiman Itani, Allen Yang Yang
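The enroll-then-identify flow above can be sketched with feature vectors. Representing salient features as numeric vectors and modeling an actor as their mean are illustrative assumptions, not the patent's feature extraction:

```python
import numpy as np

def build_model(samples):
    # Model an actor as the mean of salient-feature vectors extracted
    # from their enrollment gestures.
    return np.mean(np.asarray(samples, dtype=float), axis=0)

def identify_actor(features, models, max_distance=1.0):
    # Match new input features against retained actor models; return the
    # nearest actor within tolerance, else None (unrecognized actor).
    best, best_d = None, float("inf")
    for name, model in models.items():
        d = np.linalg.norm(np.asarray(features, dtype=float) - model)
        if d < best_d:
            best, best_d = name, d
    return best if best_d <= max_distance else None
```

A recognized actor could then be allowed to trigger commands such as "unlock", while unrecognized input is ignored.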
  • Patent number: 11120627
    Abstract: A system, apparatus, device, or method to output different iterations of data entities. The method may include establishing a first data entity and establishing a first state for the first data entity. The method may include establishing a second state for the first data entity. The method may include storing the first data entity, the first state, and the second state at a storage device. The method may include retrieving a first iteration of the first data entity exhibiting at least a portion of the first state. The method may include retrieving a second iteration of the first data entity exhibiting at least a portion of the second state. The method may include outputting the first iteration and the second iteration at an output time.
    Type: Grant
    Filed: May 16, 2018
    Date of Patent: September 14, 2021
    Assignee: Atheer, Inc.
    Inventors: Sina Fateh, Ron Butterworth, Mohamed Nabil Hajj Chehade, Allen Yang Yang, Sleiman Itani
  • Publication number: 20210247890
    Abstract: In the method, a processor generates a three dimensional interface with at least one virtual object, defines a stimulus of the interface, and defines a response to the stimulus. The stimulus is an approach to the virtual object with a finger or other end-effector to within a threshold of the virtual object. When the stimulus is sensed, the response is executed. Stimuli may include touch, click, double click, peg, scale, and swipe gestures. The apparatus includes a processor that generates a three dimensional interface with at least one virtual object, and defines a stimulus for the virtual object and a response to the stimulus. A display outputs the interface and object. A camera or other sensor detects the stimulus, e.g. a gesture with a finger or other end-effector, whereupon the processor executes the response. The apparatus may be part of a head mounted display.
    Type: Application
    Filed: April 27, 2021
    Publication date: August 12, 2021
    Inventors: Allen Yang Yang, Sleiman Itani
  • Publication number: 20210223547
    Abstract: A data space such as a virtual/augmented reality environment is generated, through which a viewer/point of view may move. The physical world motion of a display outputting the data space is sensed, received, or computed. The motion of the physical world environment in which the display is located is also sensed, received, or computed. An output adjustment is determined from the display and environment motions, typically being equal to the environment motion(s). Motion of a point of view within the data space to be outputted by the display is determined. The viewpoint motion corresponds with the display motion within physical space adjusted by the output adjustment. At least part of the data space is outputted to the display from the point of view. The point of view is navigated through the data space according to the viewpoint motion.
    Type: Application
    Filed: April 2, 2021
    Publication date: July 22, 2021
    Inventors: Allen Yang Yang, Sleiman Itani
  • Patent number: 11030237
    Abstract: Disclosed are methods and apparatuses to recognize actors during normal system operation. The method includes defining actor input such as hand gestures, executing and detecting input, and identifying salient features of the actor therein. A model is defined from salient features, and a data set of salient features and/or model are retained, and may be used to identify actors for other inputs. A command such as “unlock” may be executed in response to actor input. Parameters may be applied to further define where, when, how, etc. actor input is executed, such as defining a region for a gesture. The apparatus includes a processor and sensor, the processor defining actor input, identifying salient features, defining a model therefrom, and retaining a data set. A display may also be used to show actor input, a defined region, relevant information, and/or an environment. A stylus or other non-human actor may be used.
    Type: Grant
    Filed: May 10, 2019
    Date of Patent: June 8, 2021
    Assignee: Atheer, Inc.
    Inventors: Sleiman Itani, Allen Yang Yang
  • Patent number: 11016631
    Abstract: In the method, a processor generates a three dimensional interface with at least one virtual object, defines a stimulus of the interface, and defines a response to the stimulus. The stimulus is an approach to the virtual object with a finger or other end-effector to within a threshold of the virtual object. When the stimulus is sensed, the response is executed. Stimuli may include touch, click, double click, peg, scale, and swipe gestures. The apparatus includes a processor that generates a three dimensional interface with at least one virtual object, and defines a stimulus for the virtual object and a response to the stimulus. A display outputs the interface and object. A camera or other sensor detects the stimulus, e.g. a gesture with a finger or other end-effector, whereupon the processor executes the response. The apparatus may be part of a head mounted display.
    Type: Grant
    Filed: August 14, 2019
    Date of Patent: May 25, 2021
    Assignee: Atheer, Inc.
    Inventors: Allen Yang Yang, Sleiman Itani
  • Patent number: 10996473
    Abstract: A data space such as a virtual/augmented reality environment is generated, through which a viewer/point of view may move. The physical world motion of a display outputting the data space is sensed, received, or computed. The motion of the physical world environment in which the display is located is also sensed, received, or computed. An output adjustment is determined from the display and environment motions, typically being equal to the environment motion(s). Motion of a point of view within the data space to be outputted by the display is determined. The viewpoint motion corresponds with the display motion within physical space adjusted by the output adjustment. At least part of the data space is outputted to the display from the point of view. The point of view is navigated through the data space according to the viewpoint motion.
    Type: Grant
    Filed: March 24, 2015
    Date of Patent: May 4, 2021
    Assignee: Atheer, Inc.
    Inventors: Allen Yang Yang, Sleiman Itani
  • Publication number: 20210109600
    Abstract: A method, system, apparatus, and/or device for sensing and determining saturation levels of an object to execute a command. The method, system, apparatus, and/or device may include: sensing an object that occupies a first portion of a field of view (FOV) of the sensor at a first point in time; determining a first saturation level of the object at the first portion of the FOV; sensing the object that occupies a second portion of the FOV of the sensor at a second point in time; determining a second saturation level of the object at the second portion of the FOV; determining that the first saturation level is different than the second saturation level; and in response to the first saturation level being different than the second saturation level, executing an executable command associated with the first saturation level and the second saturation level.
    Type: Application
    Filed: December 21, 2020
    Publication date: April 15, 2021
    Inventor: Allen Yang Yang
  • Publication number: 20210077578
    Abstract: To prompt input and provide feedback on input to a user with an interface, inputs and graphical cursors associated with those inputs are defined. Each input may have several forms such as base, hover, engaged, completed, and error. User input is anticipated. The base form of the anticipated input cursor is displayed to prompt the user for the anticipated input. If user hover is detected that matches anticipated input, the hover form is displayed to confirm the match to the user. If user input is detected that matches anticipated input, the engaged form is displayed as confirmation. If user input is completed that matches anticipated input, the completed form is displayed as confirmation. If user hover or input does not match anticipated input, the error form is displayed to indicate mismatch. Not all cursors must have all forms, and some cursors may have multiples of some forms.
    Type: Application
    Filed: November 27, 2020
    Publication date: March 18, 2021
    Inventors: Sleiman Itani, Yu-Hsiang Chen, Mohamed Nabil Hajj Chehade, Allen Yang Yang