Patents Assigned to Atheer, Inc.
  • Patent number: 9924091
    Abstract: First and second images are captured at first and second focal lengths, the second focal length being longer than the first focal length. Element sets are defined with a first element of the first image and a corresponding second element of the second image. Element sets are identified as background if the second element thereof is at least as in-focus as the first element. Background elements are subtracted from further analysis. Comparisons are based on relative focus, e.g. whether image elements are more or less in-focus. Measurement of absolute focus is not necessary, nor is measurement of absolute focus change; images need not be in-focus. More than two images, multiple element sets, and/or multiple categories and relative focus relationships also may be used.
    Type: Grant
    Filed: December 8, 2014
    Date of Patent: March 20, 2018
    Assignee: Atheer, Inc.
    Inventors: Mohamed Nabil Hajj Chehade, Sina Fateh, Sleiman Itani, Allen Yang Yang
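The relative-focus test described in patent 9924091 above can be illustrated with a short sketch. This is a minimal illustration, not the claimed method: it assumes per-block variance as a stand-in sharpness measure and NumPy arrays as the two captured images, since the abstract leaves the focus metric unspecified; only the relative comparison between the two focal lengths matters.

```python
import numpy as np

def local_sharpness(image: np.ndarray, block: int) -> np.ndarray:
    """Per-block variance as a crude relative-focus (sharpness) proxy."""
    h, w = image.shape
    h_b, w_b = h // block, w // block
    cropped = image[:h_b * block, :w_b * block].astype(float)
    blocks = cropped.reshape(h_b, block, w_b, block)
    return blocks.var(axis=(1, 3))

def background_mask(short_focus: np.ndarray, long_focus: np.ndarray, block: int = 16) -> np.ndarray:
    """Mark element sets as background where the longer-focal-length image
    is at least as in-focus as the shorter-focal-length image."""
    s1 = local_sharpness(short_focus, block)
    s2 = local_sharpness(long_focus, block)
    return s2 >= s1  # True -> background, subtracted from further analysis

# Hypothetical usage with random data standing in for two captured frames.
if __name__ == "__main__":
    rng = np.random.default_rng(0)
    img_short = rng.random((256, 256))
    img_long = rng.random((256, 256))
    mask = background_mask(img_short, img_long)
    print("background blocks:", int(mask.sum()), "of", mask.size)
```

Note that no absolute focus value is ever computed; each block is only compared against its counterpart in the other image, matching the abstract's emphasis on relative focus.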
  • Patent number: 9916681
    Abstract: To integrate a sensory property such as occlusion, shadowing, reflection, etc. among physical and notional (e.g. virtual/augmented) visual or other sensory content, providing an appearance of similar occlusion, shadowing, etc. in both models. A reference position, a physical data model representing physical entities, and a notional data model are created or accessed. A first sensory property from either data model is selected. A second sensory property is determined corresponding with the first sensory property, and notional sensory content is generated from the notional data model with the second sensory property applied thereto. The notional sensory content is outputted to the reference position with a see-through display. Consequently, notional entities may appear occluded by physical entities, physical entities may appear to cast shadows from notional light sources, etc.
    Type: Grant
    Filed: October 31, 2015
    Date of Patent: March 13, 2018
    Assignee: Atheer, Inc.
    Inventors: Greg James, Allen Yang Yang, Sleiman Itani
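A minimal sketch of the occlusion case from patent 9916681 above, assuming both the physical and notional models are reduced to per-pixel depth maps from the reference position; the depth-map representation and all names here are illustrative, not the patent's data model.

```python
import numpy as np

def occluded_notional_layer(physical_depth: np.ndarray,
                            notional_depth: np.ndarray,
                            notional_rgba: np.ndarray) -> np.ndarray:
    """Apply the 'occlusion' sensory property of the physical model to
    notional content: hide notional pixels that lie behind physical geometry."""
    visible = notional_depth < physical_depth        # nearer notional content stays
    out = notional_rgba.copy()
    out[~visible, 3] = 0.0                           # fully transparent where occluded
    return out                                       # composited on a see-through display

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    phys = rng.uniform(1.0, 5.0, (120, 160))          # metres from the reference position
    noti = rng.uniform(1.0, 5.0, (120, 160))
    rgba = np.ones((120, 160, 4))
    layer = occluded_notional_layer(phys, noti, rgba)
    print("visible notional pixels:", int((layer[..., 3] > 0).sum()))
```

The shadowing and reflection cases described in the abstract would follow the same pattern, with a different second sensory property applied to the notional content before output.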
  • Patent number: 9894269
    Abstract: First and second images are captured at first and second focal lengths, the second focal length being longer than the first focal length. Element sets are defined with a first element of the first image and a corresponding second element of the second image. Element sets are identified as background if the second element thereof is at least as in-focus as the first element. Background elements are subtracted from further analysis. Comparisons are based on relative focus, e.g. whether image elements are more or less in-focus. Measurement of absolute focus is not necessary, nor is measurement of absolute focus change; images need not be in-focus. More than two images, multiple element sets, and/or multiple categories and relative focus relationships also may be used.
    Type: Grant
    Filed: March 5, 2013
    Date of Patent: February 13, 2018
    Assignee: Atheer, Inc.
    Inventors: Mohamed Nabil Hajj Chehade, Sina Fateh, Sleiman Itani, Allen Yang Yang
  • Patent number: 9881026
    Abstract: Disclosed are a method and apparatus to recognize actors during normal system operation. The method includes defining actor input such as hand gestures, executing and detecting input, and identifying salient features of the actor therein. A model is defined from the salient features, and a data set of salient features and/or the model is retained and may be used to identify actors for other inputs. A command such as “unlock” may be executed in response to actor input. Parameters may be applied to further define where, when, how, etc. actor input is executed, such as defining a region for a gesture. The apparatus includes a processor and sensor, the processor defining actor input, identifying salient features, defining a model therefrom, and retaining a data set. A display may also be used to show actor input, a defined region, relevant information, and/or an environment. A stylus or other non-human actor may be used.
    Type: Grant
    Filed: June 6, 2017
    Date of Patent: January 30, 2018
    Assignee: Atheer, Inc.
    Inventors: Sleiman Itani, Allen Yang Yang
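The enrol-and-match flow of patent 9881026 above can be sketched as follows; the hand-geometry feature vectors and nearest-model matching are illustrative assumptions, since the abstract does not fix a particular salient feature or comparison.

```python
import numpy as np

class ActorRecognizer:
    """Retain a per-actor model of salient features and match later inputs to it."""

    def __init__(self, threshold: float = 0.5):
        self.models: dict[str, np.ndarray] = {}
        self.threshold = threshold

    def enroll(self, actor_id: str, feature_sets: list[np.ndarray]) -> None:
        """Define a model from salient features observed during actor input
        (e.g. relative finger lengths measured while an 'unlock' gesture runs)."""
        self.models[actor_id] = np.mean(feature_sets, axis=0)

    def identify(self, features: np.ndarray) -> str | None:
        """Return the best-matching enrolled actor, or None if nothing is close."""
        best_id, best_dist = None, float("inf")
        for actor_id, model in self.models.items():
            dist = float(np.linalg.norm(features - model))
            if dist < best_dist:
                best_id, best_dist = actor_id, dist
        return best_id if best_dist < self.threshold else None

if __name__ == "__main__":
    rec = ActorRecognizer()
    rec.enroll("alice", [np.array([1.0, 0.9, 0.75]), np.array([1.02, 0.88, 0.77])])
    print(rec.identify(np.array([1.01, 0.9, 0.76])))   # -> alice
    print(rec.identify(np.array([2.0, 2.0, 2.0])))     # -> None
```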
  • Patent number: 9852652
    Abstract: World data is established, including real-world position and/or real-world motion of an entity. Target data is established, including planned or ideal position and/or motion for the entity. Guide data is established, including information for guiding a person or other subject in bringing world data into match with target data. The guide data is outputted to the subject as virtual and/or augmented reality data. Evaluation data may be established, including a comparison of world data with target data. World data, target data, guide data, and/or evaluation data may be dynamically updated. Subjects may be instructed in positions and motions by using guide data to bring world data into match with target data, and by receiving evaluation data. Instruction includes physical therapy, sports, recreation, medical treatment, fabrication, diagnostics, repair of mechanical systems, etc.
    Type: Grant
    Filed: November 22, 2013
    Date of Patent: December 26, 2017
    Assignee: Atheer, Inc.
    Inventors: Allen Yang Yang, Mohamed Nabil Hajj Chehade, Sina Fateh, Sleiman Itani
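A minimal sketch of the world/target/guide/evaluation data flow in patent 9852652 above, assuming world and target data are reduced to named 3D points (e.g. tracked joints); the tolerance value and offset-vector guidance are illustrative.

```python
import numpy as np

def guide_and_evaluate(world: dict[str, np.ndarray],
                       target: dict[str, np.ndarray],
                       tolerance: float = 0.05) -> tuple[dict, dict]:
    """For each tracked point, produce guide data (offset toward the target pose)
    and evaluation data (whether the point is within tolerance of the target)."""
    guide, evaluation = {}, {}
    for name, target_pos in target.items():
        offset = target_pos - world[name]
        guide[name] = offset                       # rendered as an AR arrow or overlay
        evaluation[name] = float(np.linalg.norm(offset)) <= tolerance
    return guide, evaluation

if __name__ == "__main__":
    world = {"wrist": np.array([0.10, 1.20, 0.30])}   # sensed real-world position
    target = {"wrist": np.array([0.12, 1.25, 0.30])}  # planned/ideal position
    guide, ok = guide_and_evaluate(world, target)
    print(guide["wrist"], ok["wrist"])
```

Dynamically re-running this comparison as the subject moves yields the updated guide and evaluation data the abstract describes.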
  • Patent number: 9842122
    Abstract: Disclosed are methods and apparatuses for searching images. An image is received and a first search path is defined for the image. The first search path may be a straight line, horizontal, and/or near the bottom of the image, and/or may begin at one edge and move toward the other. A transition is defined for the image, distinguishing a feature to be found. The image is searched for the transition along the first search path. When the transition is detected, the image is searched along a second search path that follows the transition. The apparatus includes an image sensor and a processor. The sensor is adapted to obtain images. The processor is adapted to define a first search path and a transition for the image, to search for the transition along the first search path, and to search along a second search path upon detecting the transition, following the transition.
    Type: Grant
    Filed: March 15, 2016
    Date of Patent: December 12, 2017
    Assignee: Atheer, Inc.
    Inventors: Sleiman Itani, Allen Yang Yang
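The two-path search in patent 9842122 above can be sketched as follows, assuming a simple intensity-threshold crossing as the defined transition and row-by-row following as the second search path; both choices are illustrative.

```python
import numpy as np

def find_transition(image: np.ndarray, row: int, threshold: float) -> int | None:
    """First search path: scan one row from the left edge toward the right,
    returning the column where the defined transition (threshold crossing) occurs."""
    for col in range(1, image.shape[1]):
        if image[row, col - 1] < threshold <= image[row, col]:
            return col
    return None

def follow_transition(image: np.ndarray, row: int, col: int,
                      threshold: float, max_steps: int = 1000) -> list[tuple[int, int]]:
    """Second search path: walk upward, re-locating the transition in each row."""
    path = [(row, col)]
    for r in range(row - 1, -1, -1):
        # look in a small window around the previous column for the crossing
        lo, hi = max(col - 2, 1), min(col + 3, image.shape[1])
        cols = [c for c in range(lo, hi)
                if image[r, c - 1] < threshold <= image[r, c]]
        if not cols or len(path) >= max_steps:
            break
        col = cols[0]
        path.append((r, col))
    return path

if __name__ == "__main__":
    img = np.zeros((100, 100))
    img[:, 60:] = 1.0                      # a vertical edge as the feature to be found
    start_row = 95                         # first search path near the bottom of the image
    c = find_transition(img, start_row, 0.5)
    print("transition at column", c,
          "traced", len(follow_transition(img, start_row, c, 0.5)), "rows")
```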
  • Patent number: 9747306
    Abstract: Disclosed are methods and apparatuses to recognize actors during normal system operation. The method includes defining actor input such as hand gestures, executing and detecting input, and identifying salient features of the actor therein. A model is defined from the salient features, and a data set of salient features and/or the model is retained and may be used to identify actors for other inputs. A command such as “unlock” may be executed in response to actor input. Parameters may be applied to further define where, when, how, etc. actor input is executed, such as defining a region for a gesture. The apparatus includes a processor and sensor, the processor defining actor input, identifying salient features, defining a model therefrom, and retaining a data set. A display may also be used to show actor input, a defined region, relevant information, and/or an environment. A stylus or other non-human actor may be used.
    Type: Grant
    Filed: May 23, 2013
    Date of Patent: August 29, 2017
    Assignee: ATHEER, INC.
    Inventors: Sleiman Itani, Allen Yang Yang
  • Patent number: 9710110
    Abstract: A free space input standard is instantiated on a processor. Free space input is sensed and communicated to the processor. If the free space input satisfies the free space input standard, a touch screen input response is invoked in an operating system. The free space input may be sensed using continuous implicit, discrete implicit, active explicit, or passive explicit approaches. The touch screen input response may be invoked through communicating virtual touch screen input, a virtual input event, or a virtual command to or within the operating system. In this manner free space gestures may control existing touch screen interfaces and devices, without modifying those interfaces and devices directly to accept free space gestures.
    Type: Grant
    Filed: June 16, 2016
    Date of Patent: July 18, 2017
    Assignee: ATHEER, INC.
    Inventors: Shashwat Kandadai, Nathan Abercrombie, Yu-Hsiang Chen, Sleiman Itani
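A minimal sketch of the check-then-invoke flow in patent 9710110 above; the "pinch" standard and the dictionary event standing in for an operating-system touch injection are illustrative assumptions, not tied to any real OS API.

```python
from dataclasses import dataclass

@dataclass
class FreeSpaceInput:
    """A sensed free space gesture, reduced to a name and a normalized screen position."""
    gesture: str
    x: float
    y: float

def satisfies_standard(inp: FreeSpaceInput) -> bool:
    """Free space input standard: a 'pinch' gesture within the display bounds."""
    return inp.gesture == "pinch" and 0.0 <= inp.x <= 1.0 and 0.0 <= inp.y <= 1.0

def invoke_touch_response(inp: FreeSpaceInput) -> dict:
    """Translate the free space input into a virtual touch screen event that an
    unmodified touch interface could consume (stand-in for an OS injection call)."""
    return {"type": "touch_down", "x": inp.x, "y": inp.y}

if __name__ == "__main__":
    sensed = FreeSpaceInput("pinch", 0.4, 0.7)
    if satisfies_standard(sensed):
        print(invoke_touch_response(sensed))
```

The point of the design is that the existing touch interface never needs to know the input originated as a free space gesture.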
  • Patent number: 9710067
    Abstract: A machine implemented method includes sensing entities in first and second domains. If a first stimulus is present and an entity is in the first domain, the entity is transferred from the first to the second domain via a bridge. If a second stimulus is present and an entity is in the second domain, the entity is transferred from the second to the first domain via the bridge. At least some of the first domain is outputted. An apparatus includes a processor that defines first and second domains and a bridge that enables transfer of entities between domains, an entity identifier that identifies entities in the domains, a stimulus identifier that identifies stimuli, and a display that outputs at least some of the first domain. The processor transfers entities from first to second domain responsive to a first stimulus, and transfers entities from second to first domain responsive to a second stimulus.
    Type: Grant
    Filed: September 5, 2013
    Date of Patent: July 18, 2017
    Assignee: ATHEER, INC.
    Inventor: Michael Lamberty
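A minimal sketch of the bridge in patent 9710067 above, representing each domain as a set of entity identifiers and the stimuli as boolean flags; both representations are illustrative.

```python
class DomainBridge:
    """Two domains of entities and a bridge that moves entities between them
    in response to first/second stimuli."""

    def __init__(self, first: set[str], second: set[str]):
        self.first = set(first)
        self.second = set(second)

    def apply_stimulus(self, entity: str, first_stimulus: bool, second_stimulus: bool) -> None:
        if first_stimulus and entity in self.first:
            self.first.discard(entity)
            self.second.add(entity)          # transfer first -> second via the bridge
        elif second_stimulus and entity in self.second:
            self.second.discard(entity)
            self.first.add(entity)           # transfer second -> first via the bridge

    def output_first(self) -> set[str]:
        """At least some of the first domain is outputted (e.g. to a display)."""
        return set(self.first)

if __name__ == "__main__":
    bridge = DomainBridge(first={"note"}, second={"tool"})
    bridge.apply_stimulus("note", first_stimulus=True, second_stimulus=False)
    print(bridge.output_first(), bridge.second)
```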
  • Patent number: 9700202
    Abstract: Systems and methods for improving the peripheral vision of a subject are disclosed. In one aspect, embodiments of the present disclosure include a method, which may be embodied on a system, for improving the peripheral vision of a subject using a visual marker on a display screen. The method includes displaying a peripheral target on the display screen, the peripheral target having a visually discernible characteristic, and determining whether the subject is able to correctly identify the peripheral target displayed on the display screen using peripheral vision. The visual marker is intended for viewing using the central vision of the subject, and the peripheral target is intended for identification using the peripheral vision of the subject.
    Type: Grant
    Filed: November 26, 2014
    Date of Patent: July 11, 2017
    Assignee: ATHEER, INC.
    Inventor: Sina Fateh
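One trial of the test in patent 9700202 above can be sketched as follows, with a console prompt standing in for the display screen and fixation marker; the target characteristics are illustrative.

```python
import random

def run_trial(characteristics: list[str]) -> bool:
    """Show a central fixation marker, present a peripheral target with a
    visually discernible characteristic, and score the subject's response."""
    target = random.choice(characteristics)
    print("Keep your eyes on the central marker:  +")
    print(f"(peripheral target shown far from the marker: '{target}')")
    answer = input("Which target did you see? ").strip().lower()
    return answer == target

if __name__ == "__main__":
    correct = run_trial(["circle", "square", "triangle"])
    print("correct identification" if correct else "not identified")
```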
  • Patent number: 9703383
    Abstract: A machine implemented method includes sensing entities in first and second domains. If a first stimulus is present and an entity is in the first domain, the entity is transferred from first to second domain via a bridge. If a second stimulus is present and an entity is in the second domain, the entity is transferred from second first domain via the bridge. At least some of the first domain is outputted. An apparatus includes a processor that defines first and second domains and a bridge that enables transfer of entities between domains, an entity identifier that identifies entities in the domains, a stimulus identifier that identifies stimuli, and a display that outputs at least some of the first domain. The processor transfers entities from first to second domain responsive to a first stimulus, and transfers entities from second to first domain responsive to a second stimulus.
    Type: Grant
    Filed: September 5, 2013
    Date of Patent: July 11, 2017
    Assignee: ATHEER, INC.
    Inventor: Michael Lamberty
  • Patent number: 9684820
    Abstract: Comprehensive 2D learning images are collected for learning subjects. Standardized 2D gallery images of many gallery subjects are collected, one per gallery subject. A 2D query image of a query subject is collected, of arbitrary viewing aspect, illumination, etc. 3D learning models, 3D gallery models, and a 3D query model are determined from the learning, gallery, and query images. A transform is determined for the selected learning model and each gallery model that yields or approximates the query image. The transform is at least partly 3D, such as 3D illumination transfer or 3D orientation alignment. The transform is applied to each gallery model so that the transformed gallery models more closely resemble the query model. 2D transformed gallery images are produced from the transformed gallery models, and are compared against the 2D query image to identify whether the query subject is also any of the gallery subjects.
    Type: Grant
    Filed: January 9, 2017
    Date of Patent: June 20, 2017
    Assignee: ATHEER, INC.
    Inventor: Allen Yang Yang
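A minimal sketch of the final comparison stage in patent 9684820 above, assuming the gallery models have already been transformed (re-lit, re-oriented) to approximate the query's conditions and rendered back to 2D; the pixel-distance comparison and threshold are illustrative, and the 3D modelling and transform steps are out of scope here.

```python
import numpy as np

def identify_query(query_image: np.ndarray,
                   transformed_gallery_images: dict[str, np.ndarray],
                   threshold: float = 10.0) -> str | None:
    """Compare the 2D query image against 2D renderings of the transformed
    gallery models and return the best match, if any is close enough."""
    best_id, best_dist = None, float("inf")
    q = query_image.astype(float).ravel()
    for subject_id, rendered in transformed_gallery_images.items():
        dist = float(np.linalg.norm(q - rendered.astype(float).ravel()))
        if dist < best_dist:
            best_id, best_dist = subject_id, dist
    return best_id if best_dist <= threshold else None

if __name__ == "__main__":
    rng = np.random.default_rng(2)
    query = rng.random((32, 32))
    gallery = {"subject_a": query + rng.normal(0, 0.01, (32, 32)),
               "subject_b": rng.random((32, 32))}
    print(identify_query(query, gallery))   # -> subject_a
```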
  • Patent number: 9665987
    Abstract: A machine-implemented method includes obtaining input data and generating output data. The status of at least one contextual factor is determined and compared with a standard. If the status meets the standard, a transformation is applied to the output data. The output data is then outputted to the viewer. Through design and/or selection of contextual factors, standards, and transformations, output data may be selectively outputted to viewers in a context-suitable fashion, e.g. on a head mounted display the viewer's central vision may be left unobstructed while the viewer walks, drives, etc. An apparatus includes at least one sensor that senses a contextual factor. A processor determines the status of the contextual factor, determines if the status meets a standard, generates output data, and applies a transformation to the output data if the status meets the standard. A display outputs the output data to the viewer.
    Type: Grant
    Filed: September 15, 2015
    Date of Patent: May 30, 2017
    Assignee: ATHEER, INC.
    Inventor: Sina Fateh
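A minimal sketch of the contextual check in patent 9665987 above, assuming walking speed as the contextual factor and a transformation that moves output toward the periphery of a head mounted display; the names and threshold are illustrative.

```python
from dataclasses import dataclass

@dataclass
class OutputData:
    text: str
    x: float   # normalized horizontal position on a head mounted display
    y: float   # normalized vertical position

def transform_if_context_met(output: OutputData, walking_speed_mps: float,
                             standard_mps: float = 0.5) -> OutputData:
    """If the contextual factor (walking speed) meets the standard, apply a
    transformation that moves output out of the viewer's central vision."""
    if walking_speed_mps >= standard_mps:
        return OutputData(output.text, x=0.85, y=0.15)   # push toward the periphery
    return output

if __name__ == "__main__":
    notification = OutputData("New message", x=0.5, y=0.5)
    print(transform_if_context_met(notification, walking_speed_mps=1.2))
    print(transform_if_context_met(notification, walking_speed_mps=0.0))
```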
  • Patent number: 9606359
    Abstract: A first optic receives optical environment content for delivery to the see-through display. The see-through display delivers optical output content to the second optic and delivers the optical environment content to the second optic. The second optic delivers the optical output content and the optical environment content to a viewing position. The first optic alters the focal vergence of the optical environment content; the second optic alters the focal vergence of the optical environment content and the focal vergence of the optical output content. The focal vergences of the optical output content and the optical environment content thus are independently controllable. The first and second optics may render the focal vergence of the optical environment content, after passing through both optics, substantially equal to that of optical environment content unmodified by either optic. The focal vergences of the optical environment content and the optical output content may be equal after alteration.
    Type: Grant
    Filed: May 15, 2014
    Date of Patent: March 28, 2017
    Assignee: ATHEER, INC.
    Inventor: Sleiman Itani
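The independent control described in patent 9606359 above can be illustrated with thin-lens diopter arithmetic: environment light passes through both optics, while the display's output content passes through only the second, so powers that cancel on the environment path leave that content effectively unmodified. The numbers below are illustrative.

```python
def vergence_after(initial_diopters: float, *optic_powers: float) -> float:
    """Thin-lens approximation: each optic adds its power (in diopters)
    to the focal vergence of light passing through it."""
    return initial_diopters + sum(optic_powers)

if __name__ == "__main__":
    p_first, p_second = +1.5, -1.5          # chosen so the two powers cancel
    env_in = -0.5                           # environment light arriving at the first optic

    # Environment path: first optic then second optic -> unchanged overall.
    env_out = vergence_after(env_in, p_first, p_second)

    # Display output path: injected between the optics, altered only by the second.
    display_in = 0.0
    display_out = vergence_after(display_in, p_second)

    print(f"environment: {env_in:+.2f} D -> {env_out:+.2f} D (unmodified)")
    print(f"display output: {display_in:+.2f} D -> {display_out:+.2f} D")
```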
  • Patent number: 9589000
    Abstract: A machine-implemented method includes establishing a virtual or augmented reality entity, and establishing a state for the entity having a state time and state properties including a state spatial arrangement. The data entity and state are stored, and are subsequently received and outputted at a time other than the state time so as to exhibit a “virtual history machine” functionality. An apparatus includes a processor, a data store, and an output. A data entity establisher, a state establisher, a storer, a data entity receiver, a state receiver, and an outputter are instantiated on the processor.
    Type: Grant
    Filed: August 29, 2013
    Date of Patent: March 7, 2017
    Assignee: ATHEER, INC.
    Inventors: Sina Fateh, Ron Butterworth, Mohamed Nabil Hajj Chehade, Allen Yang Yang, Sleiman Itani
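A minimal sketch of the store-and-replay behaviour in patent 9589000 above, with an in-memory store standing in for the data store and illustrative field names.

```python
from dataclasses import dataclass, field

@dataclass
class EntityState:
    state_time: float                        # the state time
    position: tuple[float, float, float]     # the state spatial arrangement
    properties: dict = field(default_factory=dict)

class VirtualHistoryStore:
    """Store time-stamped states for AR/VR entities and replay them later."""

    def __init__(self):
        self.history: dict[str, list[EntityState]] = {}

    def store(self, entity_id: str, state: EntityState) -> None:
        self.history.setdefault(entity_id, []).append(state)

    def output_at(self, entity_id: str, query_time: float) -> EntityState | None:
        """Retrieve the most recent stored state at or before the query time,
        which may differ from the state time ('virtual history machine')."""
        states = [s for s in self.history.get(entity_id, []) if s.state_time <= query_time]
        return max(states, key=lambda s: s.state_time) if states else None

if __name__ == "__main__":
    store = VirtualHistoryStore()
    store.store("marker", EntityState(10.0, (0.0, 1.0, 2.0)))
    store.store("marker", EntityState(20.0, (0.5, 1.0, 2.0)))
    print(store.output_at("marker", query_time=15.0))
```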
  • Patent number: 9576188
    Abstract: Comprehensive 2D learning images are collected for learning subjects. Standardized 2D gallery images of many gallery subjects are collected, one per gallery subject. A 2D query image of a query subject is collected, of arbitrary viewing aspect, illumination, etc. 3D learning models, 3D gallery models, and a 3D query model are determined from the learning, gallery, and query images. A transform is determined for the selected learning model and each gallery model that yields or approximates the query image. The transform is at least partly 3D, such as 3D illumination transfer or 3D orientation alignment. The transform is applied to each gallery model so that the transformed gallery models more closely resemble the query model. 2D transformed gallery images are produced from the transformed gallery models, and are compared against the 2D query image to identify whether the query subject is also any of the gallery subjects.
    Type: Grant
    Filed: December 23, 2014
    Date of Patent: February 21, 2017
    Assignee: Atheer, Inc.
    Inventor: Allen Yang Yang
  • Patent number: 9557822
    Abstract: To distinguish a region (e.g. hand) within a data set (e.g. digital image), data elements (e.g. pixels) representing a transition (e.g. hand outline) are identified. The direction toward the region is determined, for example using weighted direction matrices yielding a numerical maximum when aligned inward. A test element displaced one or more steps inward from the boundary element is tested against a standard for identifying the region. If the tested element meets the standard, that element is identified as part of the region. By examining data elements away from the transition, noise in the transition itself is avoided without altering the transition (e.g. by smoothing) while still only examining a linear data set (i.e. a contour or trace of the feature rather than a flooded interior thereof). The direction to the exterior of the region, an exterior contour, other features, and/or the transition also may be identified/followed.
    Type: Grant
    Filed: September 1, 2015
    Date of Patent: January 31, 2017
    Assignee: ATHEER, INC.
    Inventors: Mohamed Nabil Hajj Chehade, Allen Yang Yang
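A minimal sketch of the inward-stepping test in patent 9557822 above; the intensity-gradient estimate of the inward direction stands in for the weighted direction matrices, and the brightness threshold stands in for the region standard, both as illustrative assumptions.

```python
import numpy as np

def inward_direction(image: np.ndarray, r: int, c: int) -> tuple[int, int]:
    """Estimate which neighbouring step points toward the region, assuming the
    region is brighter than the background (stand-in for direction matrices)."""
    best, best_val = (0, 0), -np.inf
    for dr in (-1, 0, 1):
        for dc in (-1, 0, 1):
            rr, cc = r + dr, c + dc
            if (dr or dc) and 0 <= rr < image.shape[0] and 0 <= cc < image.shape[1]:
                if image[rr, cc] > best_val:
                    best, best_val = (dr, dc), image[rr, cc]
    return best

def is_region(image: np.ndarray, boundary: tuple[int, int],
              steps: int = 3, standard: float = 0.8) -> bool:
    """Displace a test element a few steps inward from the boundary element and
    test it against the region standard, avoiding noise on the transition itself."""
    r, c = boundary
    dr, dc = inward_direction(image, r, c)
    rr = min(max(r + steps * dr, 0), image.shape[0] - 1)
    cc = min(max(c + steps * dc, 0), image.shape[1] - 1)
    return image[rr, cc] >= standard

if __name__ == "__main__":
    img = np.zeros((50, 50))
    img[10:40, 10:40] = 1.0                   # bright square standing in for a hand region
    print(is_region(img, boundary=(10, 25)))  # boundary pixel on the region outline -> True
```

Only the boundary pixels and one displaced test pixel per step are examined, so the region can be identified from a linear contour rather than a flood fill, as the abstract describes.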
  • Patent number: 9442575
    Abstract: A free space input standard is instantiated on a processor. Free space input is sensed and communicated to the processor. If the free space input satisfies the free space input standard, a touch screen input response is invoked in an operating system. The free space input may be sensed using continuous implicit, discrete implicit, active explicit, or passive explicit approaches. The touch screen input response may be invoked through communicating virtual touch screen input, a virtual input event, or a virtual command to or within the operating system. In this manner free space gestures may control existing touch screen interfaces and devices, without modifying those interfaces and devices directly to accept free space gestures.
    Type: Grant
    Filed: May 15, 2015
    Date of Patent: September 13, 2016
    Assignee: ATHEER, INC.
    Inventors: Shashwat Kandadai, Nathan Abercrombie, Yu-Hsiang Chen, Sleiman Itani
  • Patent number: D771040
    Type: Grant
    Filed: November 16, 2015
    Date of Patent: November 8, 2016
    Assignee: ATHEER, INC.
    Inventor: Frank Nuovo
  • Patent number: D771736
    Type: Grant
    Filed: November 16, 2015
    Date of Patent: November 15, 2016
    Assignee: ATHEER, INC.
    Inventor: Frank Nuovo