Patents by Inventor Adam G. Poulos

Adam G. Poulos has filed for patents to protect the following inventions. This listing includes both pending patent applications and patents already granted by the United States Patent and Trademark Office (USPTO).

  • Patent number: 11966510
    Abstract: A method includes displaying a plurality of computer-generated objects, and obtaining finger manipulation data from a finger-wearable device via a communication interface. In some implementations, the method includes receiving an untethered input vector that includes a plurality of untethered input indicator values. Each of the plurality of untethered input indicator values is associated with one of a plurality of untethered input modalities. In some implementations, the method includes obtaining proxy object manipulation data from a physical proxy object via the communication interface. The proxy object manipulation data corresponds to sensor data associated with one or more sensors integrated in the physical proxy object. The method includes registering an engagement event with respect to a first one of the plurality of computer-generated objects based on a combination of the finger manipulation data, the untethered input vector, and the proxy object manipulation data.
    Type: Grant
    Filed: February 27, 2023
    Date of Patent: April 23, 2024
    Assignee: Apple Inc.
    Inventors: Adam G. Poulos, Aaron M. Burns, Arun Rakesh Yoganandan, Benjamin R. Blachnitzky, Nicolai Georg
  • Patent number: 11960657
    Abstract: A method includes, while displaying a computer-generated object at a first position within an environment, obtaining extremity tracking data from an extremity tracker. The first position is outside of a drop region that is viewable using the display. The method includes moving the computer-generated object from the first position to a second position within the environment based on the extremity tracking data. The method includes, in response to determining that the second position satisfies a proximity threshold with respect to the drop region, detecting an input that is associated with a spatial region of the environment. The method includes moving the computer-generated object from the second position to a third position that is within the drop region, based on determining that the spatial region satisfies a focus criterion associated with the drop region.
    Type: Grant
    Filed: March 21, 2023
    Date of Patent: April 16, 2024
    Inventors: Aaron M. Burns, Adam G. Poulos, Arun Rakesh Yoganandan, Benjamin Hylak, Benjamin R. Blachnitzky, Jordan A. Cazamias, Nicolai Georg
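The mechanism this abstract describes, moving an object into a drop region only when it is close enough and the user's gazed-at spatial region satisfies a focus criterion, can be sketched roughly as follows. This is an illustrative approximation, not the claimed implementation; all names, the use of a gaze point, and the radius-based focus test are invented for the sketch:

```python
from dataclasses import dataclass

@dataclass
class Vec3:
    x: float
    y: float
    z: float

    def dist(self, other: "Vec3") -> float:
        # Euclidean distance between two points in the environment.
        return ((self.x - other.x) ** 2
                + (self.y - other.y) ** 2
                + (self.z - other.z) ** 2) ** 0.5

def resolve_drop(obj_pos: Vec3, drop_center: Vec3, gaze_point: Vec3,
                 proximity_threshold: float, focus_radius: float) -> Vec3:
    """Move the object into the drop region only when the object is near
    enough AND the gazed-at spatial region satisfies the focus criterion."""
    near_enough = obj_pos.dist(drop_center) <= proximity_threshold
    focused = gaze_point.dist(drop_center) <= focus_radius
    # Third position (inside the drop region) on success; otherwise unchanged.
    return drop_center if (near_enough and focused) else obj_pos
```

With both tests satisfied the object snaps to the drop region's center; if the gaze wanders outside the focus radius, the object stays put even when it is within the proximity threshold.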
  • Publication number: 20240054736
    Abstract: An electronic device such as a head-mounted device may present extended reality content such as a representation of a three-dimensional environment. The representation of the three-dimensional environment may be changed between different viewing modes having different immersion levels in response to user input. The three-dimensional environment may represent a multiuser communication session. A multiuser communication session may be saved and subsequently viewed as a replay. There may be an interactive virtual object within the replay of the multiuser communication session. The pose of the interactive virtual object may be manipulated by a user while the replay is paused. Some multiuser communication sessions may be hierarchical multiuser communication sessions with a presenter and audience members. The presenter and audience members may receive generalized feedback based on the audience members during the presentation.
    Type: Application
    Filed: June 30, 2023
    Publication date: February 15, 2024
    Inventors: Aaron M. Burns, Adam G. Poulos, Alexis H. Palangie, Benjamin R. Blachnitzky, Charilaos Papadopoulos, David M. Schattel, Ezgi Demirayak, Jia Wang, Reza Abbasian, Ryan S. Carlin
  • Publication number: 20240054746
    Abstract: An electronic device such as a head-mounted device may present extended reality content such as a representation of a three-dimensional environment. The representation of the three-dimensional environment may be changed between different viewing modes having different immersion levels in response to user input. The three-dimensional environment may represent a multiuser communication session. A multiuser communication session may be saved and subsequently viewed as a replay. There may be an interactive virtual object within the replay of the multiuser communication session. The pose of the interactive virtual object may be manipulated by a user while the replay is paused. Some multiuser communication sessions may be hierarchical multiuser communication sessions with a presenter and audience members. The presenter and audience members may receive generalized feedback based on the audience members during the presentation.
    Type: Application
    Filed: June 30, 2023
    Publication date: February 15, 2024
    Inventors: Aaron M. Burns, Adam G. Poulos, Alexis H. Palangie, Benjamin R. Blachnitzky, Charilaos Papadopoulos, David M. Schattel, Ezgi Demirayak, Jia Wang, Reza Abbasian, Ryan S. Carlin
  • Publication number: 20240056492
    Abstract: An electronic device such as a head-mounted device may present extended reality content such as a representation of a three-dimensional environment. The representation of the three-dimensional environment may be changed between different viewing modes having different immersion levels in response to user input. The three-dimensional environment may represent a multiuser communication session. A multiuser communication session may be saved and subsequently viewed as a replay. There may be an interactive virtual object within the replay of the multiuser communication session. The pose of the interactive virtual object may be manipulated by a user while the replay is paused. Some multiuser communication sessions may be hierarchical multiuser communication sessions with a presenter and audience members. The presenter and audience members may receive generalized feedback based on the audience members during the presentation.
    Type: Application
    Filed: June 30, 2023
    Publication date: February 15, 2024
    Inventors: Aaron M. Burns, Adam G. Poulos, Alexis H. Palangie, Benjamin R. Blachnitzky, Charilaos Papadopoulos, David M. Schattel, Ezgi Demirayak, Jia Wang, Reza Abbasian, Ryan S. Carlin
  • Publication number: 20230376110
    Abstract: A method is performed at an electronic device with one or more processors, a non-transitory memory, a display, and an extremity tracker. The method includes obtaining extremity tracking data via the extremity tracker. The method includes displaying a computer-generated representation of a trackpad that is spatially associated with a physical surface. The physical surface is viewable within the display along with a content manipulation region that is separate from the computer-generated representation of the trackpad. The method includes identifying a first location within the computer-generated representation of the trackpad based on the extremity tracking data. The method includes mapping the first location to a corresponding location within the content manipulation region. The method includes displaying an indicator indicative of the mapping. The indicator may overlap the corresponding location within the content manipulation region.
    Type: Application
    Filed: February 27, 2023
    Publication date: November 23, 2023
    Inventors: Adam G. Poulos, Aaron M. Burns, Arun Rakesh Yoganandan, Benjamin R. Blachnitzky, Nicolai Georg
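The core step in this abstract, mapping a location on the virtual trackpad to a corresponding location in a separate content-manipulation region, amounts to a normalized-coordinate transform. A minimal sketch, with invented names and a simple rectangle-to-rectangle mapping assumed for illustration:

```python
def map_trackpad_to_region(finger_uv, trackpad_size, region_origin, region_size):
    """Normalize the fingertip location on the virtual trackpad, then map it
    into the separate content-manipulation region for the indicator."""
    u = finger_uv[0] / trackpad_size[0]   # fraction across the trackpad width
    v = finger_uv[1] / trackpad_size[1]   # fraction across the trackpad height
    return (region_origin[0] + u * region_size[0],
            region_origin[1] + v * region_size[1])
```

The returned point is where the indicator described in the abstract would be drawn, overlapping the corresponding location in the content region.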
  • Publication number: 20230333651
    Abstract: A method is performed at an electronic device with one or more processors, a non-transitory memory, a display, an extremity tracking system, and a communication interface provided to communicate with a finger-wearable device. The method includes displaying a computer-generated object on the display. The method includes obtaining finger manipulation data from the finger-wearable device via the communication interface. The method includes determining a multi-finger gesture based on extremity tracking data from the extremity tracking system and the finger manipulation data. The method includes registering an engagement event with respect to the computer-generated object according to the multi-finger gesture.
    Type: Application
    Filed: March 20, 2023
    Publication date: October 19, 2023
    Inventors: Aaron M. Burns, Adam G. Poulos, Arun Rakesh Yoganandan, Benjamin Hylak, Benjamin R. Blachnitzky, Nicolai Georg
  • Publication number: 20230333650
    Abstract: A method is performed at an electronic device with one or more processors, a non-transitory memory, a display, and a communication interface provided to communicate with a finger-wearable device. The method includes displaying first instructional content that is associated with a first gesture. The first instructional content includes a first object. The method includes determining an engagement score that characterizes a level of user engagement with respect to the first object. The method includes obtaining finger manipulation data from the finger-wearable device via the communication interface. The method includes determining that the finger-wearable device performs the first gesture based on a function of the finger manipulation data.
    Type: Application
    Filed: February 27, 2023
    Publication date: October 19, 2023
    Inventors: Benjamin Hylak, Aaron M. Burns, Adam G. Poulos, Arun Rakesh Yoganandan, Benjamin R. Blachnitzky, Nicolai Georg
  • Publication number: 20230325047
    Abstract: A method includes displaying a plurality of computer-generated objects, including a first computer-generated object at a first position within an environment and a second computer-generated object at a second position within the environment. The first computer-generated object corresponds to a first user interface element that includes a first set of controls for modifying a content item. The method includes, while displaying the plurality of computer-generated objects, obtaining extremity tracking data. The method includes moving the first computer-generated object from the first position to a third position within the environment based on the extremity tracking data. The method includes, in accordance with a determination that the third position satisfies a proximity threshold with respect to the second position, merging the first computer-generated object with the second computer-generated object in order to generate a third computer-generated object for modifying the content item.
    Type: Application
    Filed: March 15, 2023
    Publication date: October 12, 2023
    Inventors: Nicolai Georg, Aaron M. Burns, Adam G. Poulos, Arun Rakesh Yoganandan, Benjamin Hylak, Benjamin R. Blachnitzky
  • Publication number: 20230325004
    Abstract: Methods for interacting with objects and user interface elements in a computer-generated environment provide for an efficient and intuitive user experience. In some embodiments, a user can directly or indirectly interact with objects. In some embodiments, while performing an indirect manipulation, manipulations of virtual objects are scaled. In some embodiments, while performing a direct manipulation, manipulations of virtual objects are not scaled. In some embodiments, an object can be reconfigured from an indirect manipulation mode into a direct manipulation mode by moving the object to a respective position in the three-dimensional environment in response to a respective gesture.
    Type: Application
    Filed: March 10, 2023
    Publication date: October 12, 2023
    Inventors: Aaron M. Burns, Alexis H. Palangie, Nathan Gitter, Nicolai Georg, Benjamin R. Blachnitzky, Arun Rakesh Yoganandan, Benjamin Hylak, Adam G. Poulos
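The scaling distinction in this abstract, indirect manipulations are scaled while direct manipulations are not, can be sketched in a few lines. The function name, mode strings, and scale factor are invented for illustration; the patent does not specify them:

```python
def apply_manipulation(delta, mode, indirect_scale=2.5):
    """Scale translation deltas for indirect manipulation; pass direct
    manipulation through 1:1, as the abstract describes."""
    factor = indirect_scale if mode == "indirect" else 1.0
    return tuple(d * factor for d in delta)
```

A small hand motion in indirect mode thus moves the virtual object farther than the same motion applied directly.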
  • Publication number: 20230315202
    Abstract: A method includes displaying a plurality of computer-generated objects, and obtaining finger manipulation data from a finger-wearable device via a communication interface. In some implementations, the method includes receiving an untethered input vector that includes a plurality of untethered input indicator values. Each of the plurality of untethered input indicator values is associated with one of a plurality of untethered input modalities. In some implementations, the method includes obtaining proxy object manipulation data from a physical proxy object via the communication interface. The proxy object manipulation data corresponds to sensor data associated with one or more sensors integrated in the physical proxy object. The method includes registering an engagement event with respect to a first one of the plurality of computer-generated objects based on a combination of the finger manipulation data, the untethered input vector, and the proxy object manipulation data.
    Type: Application
    Filed: February 27, 2023
    Publication date: October 5, 2023
    Inventors: Adam G. Poulos, Aaron M. Burns, Arun Rakesh Yoganandan, Benjamin R. Blachnitzky, Nicolai Georg
  • Publication number: 20230297168
    Abstract: A method is performed at an electronic device with one or more processors, a non-transitory memory, a display, and a communication interface provided to communicate with a finger-wearable device. The method includes, while displaying a plurality of content items on the display, obtaining finger manipulation data from the finger-wearable device via the communication interface. The method includes selecting a first one of the plurality of content items based on a first portion of the finger manipulation data in combination with satisfaction of a proximity threshold. The first one of the plurality of content items is associated with a first dimensional representation. The method includes changing the first one of the plurality of content items from the first dimensional representation to a second dimensional representation based on a second portion of the finger manipulation data.
    Type: Application
    Filed: March 15, 2023
    Publication date: September 21, 2023
    Inventors: Nicolai Georg, Aaron M. Burns, Adam G. Poulos, Arun Rakesh Yoganandan, Benjamin Hylak, Benjamin R. Blachnitzky
  • Publication number: 20230297172
    Abstract: A method includes, while displaying a computer-generated object at a first position within an environment, obtaining extremity tracking data from an extremity tracker. The first position is outside of a drop region that is viewable using the display. The method includes moving the computer-generated object from the first position to a second position within the environment based on the extremity tracking data. The method includes, in response to determining that the second position satisfies a proximity threshold with respect to the drop region, detecting an input that is associated with a spatial region of the environment. The method includes moving the computer-generated object from the second position to a third position that is within the drop region, based on determining that the spatial region satisfies a focus criterion associated with the drop region.
    Type: Application
    Filed: March 21, 2023
    Publication date: September 21, 2023
    Inventors: Aaron M. Burns, Adam G. Poulos, Arun Rakesh Yoganandan, Benjamin Hylak, Benjamin R. Blachnitzky, Jordan A. Cazamias, Nicolai Georg
  • Patent number: 10802278
    Abstract: Technology is described for (3D) space carving of a user environment based on movement through the user environment of one or more users wearing a near-eye display (NED) system. One or more sensors of the NED system provide sensor data from which a distance and direction of movement can be determined. Spatial dimensions for a navigable path can be represented based on user height data and user width data of the one or more users who have traversed the path. Space carving data identifying carved out space can be stored in a 3D space carving model of the user environment. The navigable paths can also be related to position data in another kind of 3D mapping like a 3D surface reconstruction mesh model of the user environment generated from depth images.
    Type: Grant
    Filed: January 9, 2019
    Date of Patent: October 13, 2020
    Assignee: Microsoft Technology Licensing, LLC
    Inventors: Anthony J. Ambrus, Jea Gon Park, Adam G. Poulos, Justin Avram Clark, Michael Jason Gourlay, Brian J. Mount, Daniel J. McCulloch, Arthur C. Tomlin
  • Patent number: 10620717
    Abstract: In embodiments of a camera-based input device, the input device includes an inertial measurement unit that collects motion data associated with velocity and acceleration of the input device in an environment, such as in three-dimensional (3D) space. The input device also includes at least two visual light cameras that capture images of the environment. A positioning application is implemented to receive the motion data from the inertial measurement unit, and receive the images of the environment from the at least two visual light cameras. The positioning application can then determine positions of the input device based on the motion data and the images correlated with a map of the environment, and track a motion of the input device in the environment based on the determined positions of the input device.
    Type: Grant
    Filed: June 30, 2016
    Date of Patent: April 14, 2020
    Assignee: Microsoft Technology Licensing, LLC
    Inventors: Daniel Joseph McCulloch, Nicholas Gervase Fajt, Adam G. Poulos, Christopher Douglas Edmonds, Lev Cherkashin, Brent Charles Allen, Constantin Dulu, Muhammad Jabir Kapasi, Michael Grabner, Michael Edward Samples, Cecilia Bong, Miguel Angel Susffalich, Varun Ramesh Mani, Anthony James Ambrus, Arthur C. Tomlin, James Gerard Dack, Jeffrey Alan Kohler, Eric S. Rehmeyer, Edward D. Parker
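Determining device position from both IMU motion data and camera images correlated with a map, as this abstract describes, is a sensor-fusion problem. A deliberately simplified sketch using a fixed-weight blend (real systems would use a Kalman or similar filter; the names and weight are invented):

```python
def fuse_position(imu_position, camera_position, camera_weight=0.8):
    """Blend the IMU dead-reckoned position with the camera/map-derived fix;
    a high camera_weight leans on the drift-free visual estimate."""
    return tuple(camera_weight * c + (1.0 - camera_weight) * i
                 for i, c in zip(imu_position, camera_position))
```

Repeating this blend each frame tracks the input device's motion while the camera fixes keep IMU drift bounded.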
  • Patent number: 10613642
    Abstract: Embodiments are disclosed herein that relate to tuning gesture recognition characteristics for a device configured to receive gesture-based user inputs. For example, one disclosed embodiment provides a head-mounted display device including a plurality of sensors, a display configured to present a user interface, a logic machine, and a storage machine that holds instructions executable by the logic machine to detect a gesture based upon information received from a first sensor of the plurality of sensors, perform an action in response to detecting the gesture, and determine whether the gesture matches an intended gesture input. The instructions are further executable to update a gesture parameter that defines the intended gesture input if it is determined that the gesture detected does not match the intended gesture input.
    Type: Grant
    Filed: March 12, 2014
    Date of Patent: April 7, 2020
    Assignee: Microsoft Technology Licensing, LLC
    Inventors: Michael Scavezze, Adam G. Poulos, John Bevis, Jeremy Lee, Daniel Joseph McCulloch, Nicholas Gervase Fajt
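The update loop in this abstract, adjusting a gesture parameter when a detected gesture does not match the intended gesture input, can be sketched as a simple corrective step. The learning-rate form is an invented illustration, not the patented tuning method:

```python
def update_gesture_parameter(current, observed, matched_intent, learning_rate=0.2):
    """Nudge the gesture parameter toward the observed value only when the
    detected gesture did not match the intended gesture input."""
    if matched_intent:
        return current          # detection was correct; leave the parameter alone
    return current + learning_rate * (observed - current)
```

Over repeated mismatches the parameter converges toward the values the user actually produces, personalizing recognition.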
  • Patent number: 10330931
    Abstract: Technology is described for (3D) space carving of a user environment based on movement through the user environment of one or more users wearing a near-eye display (NED) system. One or more sensors on the near-eye display (NED) system provide sensor data from which a distance and direction of movement can be determined. Spatial dimensions for a navigable path can be represented based on user height data and user width data of the one or more users who have traversed the path. Space carving data identifying carved out space can be stored in a 3D space carving model of the user environment. The navigable paths can also be related to position data in another kind of 3D mapping like a 3D surface reconstruction mesh model of the user environment generated from depth images.
    Type: Grant
    Filed: June 28, 2013
    Date of Patent: June 25, 2019
    Assignee: Microsoft Technology Licensing, LLC
    Inventors: Anthony J. Ambrus, Jea Gon Park, Adam G. Poulos, Justin Avram Clark, Michael Jason Gourlay, Brian J. Mount, Daniel J. McCulloch, Arthur C. Tomlin
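The space-carving idea above, marking space as navigable along paths users have walked, sized by the user's dimensions, can be sketched on a 2D grid. Cell size, the round-to-cells step, and the square footprint are invented simplifications (the patent works in 3D with height data as well):

```python
def carve_navigable_cells(path, user_width, cell_size=0.25):
    """Mark 2D grid cells within half the user's width of each traversed
    waypoint as carved-out (navigable) free space."""
    carved = set()
    half_w_cells = round(user_width / 2.0 / cell_size)
    for x, y in path:
        cx, cy = int(x // cell_size), int(y // cell_size)
        for dx in range(-half_w_cells, half_w_cells + 1):
            for dy in range(-half_w_cells, half_w_cells + 1):
                carved.add((cx + dx, cy + dy))
    return carved
```

The resulting set plays the role of the abstract's "space carving data": cells known to be traversable because a user of that width passed through them.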
  • Publication number: 20190162964
    Abstract: Technology is described for (3D) space carving of a user environment based on movement through the user environment of one or more users wearing a near-eye display (NED) system. One or more sensors of the NED system provide sensor data from which a distance and direction of movement can be determined. Spatial dimensions for a navigable path can be represented based on user height data and user width data of the one or more users who have traversed the path. Space carving data identifying carved out space can be stored in a 3D space carving model of the user environment. The navigable paths can also be related to position data in another kind of 3D mapping like a 3D surface reconstruction mesh model of the user environment generated from depth images.
    Type: Application
    Filed: January 9, 2019
    Publication date: May 30, 2019
    Inventors: Anthony J. Ambrus, Jea Gon Park, Adam G. Poulos, Justin Avram Clark, Michael Jason Gourlay, Brian J. Mount, Daniel J. McCulloch, Arthur C. Tomlin
  • Patent number: 10254546
    Abstract: A mixed reality system may comprise a head-mounted display (HMD) device with a location sensor and a base station, mounted at a predetermined offset from the location sensor, that emits an electromagnetic field (EMF). An EMF sensor affixed to an object may sense the EMF, forming a magnetic tracking system. The HMD device may determine a relative location of the EMF sensor therefrom and determine a location of the EMF sensor in space based on the relative location, the predetermined offset, and the location of the location sensor. An optical tracking system comprising a marker and an optical sensor configured to capture optical data may be included to augment the magnetic tracking system based on the optical data and a location of the optical sensor or marker. The HMD device may display augmented reality images and overlay a hologram corresponding to the location of the EMF sensor over time.
    Type: Grant
    Filed: June 6, 2016
    Date of Patent: April 9, 2019
    Assignee: Microsoft Technology Licensing, LLC
    Inventors: Adam G. Poulos, Arthur Tomlin, Alexandru Octavian Balan, Constantin Dulu, Christopher Douglas Edmonds
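The position arithmetic in this abstract, combining the location sensor's position, the base station's predetermined offset, and the EMF-relative reading, reduces to composing three translations. A sketch that deliberately ignores orientation (a real system would compose full poses, not just 3-vectors; all names here are invented):

```python
def emf_sensor_world_position(location_sensor_pos, base_station_offset, emf_relative):
    """World position of the tracked EMF sensor: the HMD location sensor's
    position, plus the fixed base-station offset, plus the EMF-relative
    reading, all assumed expressed in one shared frame."""
    return tuple(p + o + r for p, o, r in
                 zip(location_sensor_pos, base_station_offset, emf_relative))
```

The result is where the hologram described in the abstract would be overlaid, updated as new EMF readings arrive.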
  • Patent number: 10235807
    Abstract: A system and method are disclosed for building virtual content from within a virtual environment using virtual tools to build and modify the virtual content.
    Type: Grant
    Filed: January 20, 2015
    Date of Patent: March 19, 2019
    Assignee: Microsoft Technology Licensing, LLC
    Inventors: Michael Thomas, Jonathan Paulovich, Adam G. Poulos, Omer Bilal Orhan, Marcus Ghaly, Cameron G. Brown, Nicholas Gervase Fajt, Matthew Kaplan