Patents by Inventor Nicolai Georg
Nicolai Georg has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).
-
Patent number: 11966510

Abstract: A method includes displaying a plurality of computer-generated objects, and obtaining finger manipulation data from a finger-wearable device via a communication interface. In some implementations, the method includes receiving an untethered input vector that includes a plurality of untethered input indicator values. Each of the plurality of untethered input indicator values is associated with one of a plurality of untethered input modalities. In some implementations, the method includes obtaining proxy object manipulation data from a physical proxy object via the communication interface. The proxy object manipulation data corresponds to sensor data associated with one or more sensors integrated in the physical proxy object. The method includes registering an engagement event with respect to a first one of the plurality of computer-generated objects based on a combination of the finger manipulation data, the untethered input vector, and the proxy object manipulation data.

Type: Grant
Filed: February 27, 2023
Date of Patent: April 23, 2024
Assignee: Apple Inc.
Inventors: Adam G. Poulos, Aaron M. Burns, Arun Rakesh Yoganandan, Benjamin R. Blachnitzky, Nicolai Georg
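The abstract above registers an engagement event from three input sources. As a purely hypothetical sketch of such a combination (the averaging scheme, the threshold, and all names are illustrative assumptions, not taken from the patent):

```python
# Hypothetical fusion of the three input sources named in the abstract:
# finger manipulation data, an untethered input vector, and proxy object
# manipulation data. The averaging and threshold are assumptions.
def engagement_score(finger_conf, untethered_vector, proxy_conf):
    """Combine per-source confidence values into one engagement score."""
    # Treat the strongest untethered input modality as that source's signal.
    untethered_conf = max(untethered_vector) if untethered_vector else 0.0
    return (finger_conf + untethered_conf + proxy_conf) / 3.0

def registers_engagement(finger_conf, untethered_vector, proxy_conf,
                         threshold=0.5):
    """Register an engagement event when the combined score clears a threshold."""
    return engagement_score(finger_conf, untethered_vector, proxy_conf) >= threshold
```

A real system would weight and gate these signals far more carefully; the point is only that the event depends on all three sources jointly.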
-
Patent number: 11960657

Abstract: A method includes, while displaying a computer-generated object at a first position within an environment, obtaining extremity tracking data from an extremity tracker. The first position is outside of a drop region that is viewable using the display. The method includes moving the computer-generated object from the first position to a second position within the environment based on the extremity tracking data. The method includes, in response to determining that the second position satisfies a proximity threshold with respect to the drop region, detecting an input that is associated with a spatial region of the environment. The method includes moving the computer-generated object from the second position to a third position that is within the drop region, based on determining that the spatial region satisfies a focus criterion associated with the drop region.

Type: Grant
Filed: March 21, 2023
Date of Patent: April 16, 2024
Inventors: Aaron M. Burns, Adam G. Poulos, Arun Rakesh Yoganandan, Benjamin Hylak, Benjamin R. Blachnitzky, Jordan A. Cazamias, Nicolai Georg
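The drop-region behavior above reduces to a small decision rule: an object near enough to the drop region snaps into it once the user's focus (for example, a gaze target) satisfies the region's focus criterion. A minimal sketch, where all names and the threshold value are illustrative assumptions rather than details from the patent:

```python
# Illustrative sketch of the drop-region logic: the object stays at its
# current (second) position unless both the proximity threshold and the
# focus criterion are satisfied, in which case it moves into the drop
# region (the third position). Values are 2-D for simplicity.
import math

def distance(a, b):
    return math.hypot(a[0] - b[0], a[1] - b[1])

def next_position(obj_pos, drop_region_center, gaze_target,
                  proximity_threshold=0.25):
    """Return the object's next position under the snap rule."""
    near = distance(obj_pos, drop_region_center) <= proximity_threshold
    focused = distance(gaze_target, drop_region_center) <= proximity_threshold
    if near and focused:
        return drop_region_center  # third position, inside the drop region
    return obj_pos                 # remain at the second position
```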
-
Publication number: 20230376110

Abstract: A method is performed at an electronic device with one or more processors, a non-transitory memory, a display, and an extremity tracker. The method includes obtaining extremity tracking data via the extremity tracker. The method includes displaying a computer-generated representation of a trackpad that is spatially associated with a physical surface. The physical surface is viewable within the display along with a content manipulation region that is separate from the computer-generated representation of the trackpad. The method includes identifying a first location within the computer-generated representation of the trackpad based on the extremity tracking data. The method includes mapping the first location to a corresponding location within the content manipulation region. The method includes displaying an indicator indicative of the mapping. The indicator may overlap the corresponding location within the content manipulation region.

Type: Application
Filed: February 27, 2023
Publication date: November 23, 2023
Inventors: Adam G. Poulos, Aaron M. Burns, Arun Rakesh Yoganandan, Benjamin R. Blachnitzky, Nicolai Georg
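The trackpad-to-content-region mapping described above amounts to normalizing a point within one rectangle and rescaling it into another. A minimal sketch, with hypothetical rectangle types and names (the publication does not specify this representation):

```python
# Assumed axis-aligned rectangles; the mapping normalizes the touched
# point within the virtual trackpad, then scales it into the separate
# content manipulation region.
from dataclasses import dataclass

@dataclass
class Rect:
    x: float       # origin
    y: float
    width: float
    height: float

def map_trackpad_to_region(point, trackpad: Rect, region: Rect):
    """Map a point inside the trackpad to the corresponding point
    inside the content manipulation region."""
    u = (point[0] - trackpad.x) / trackpad.width   # normalized [0, 1]
    v = (point[1] - trackpad.y) / trackpad.height
    return (region.x + u * region.width,
            region.y + v * region.height)
```

The indicator mentioned in the abstract would then simply be drawn at the returned coordinates.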
-
Publication number: 20230333651

Abstract: A method is performed at an electronic device with one or more processors, a non-transitory memory, a display, an extremity tracking system, and a communication interface provided to communicate with a finger-wearable device. The method includes displaying a computer-generated object on the display. The method includes obtaining finger manipulation data from the finger-wearable device via the communication interface. The method includes determining a multi-finger gesture based on extremity tracking data from the extremity tracking system and the finger manipulation data. The method includes registering an engagement event with respect to the computer-generated object according to the multi-finger gesture.

Type: Application
Filed: March 20, 2023
Publication date: October 19, 2023
Inventors: Aaron M. Burns, Adam G. Poulos, Arun Rakesh Yoganandan, Benjamin Hylak, Benjamin R. Blachnitzky, Nicolai Georg
-
Publication number: 20230333650

Abstract: A method is performed at an electronic device with one or more processors, a non-transitory memory, a display, and a communication interface provided to communicate with a finger-wearable device. The method includes displaying first instructional content that is associated with a first gesture. The first instructional content includes a first object. The method includes determining an engagement score that characterizes a level of user engagement with respect to the first object. The method includes obtaining finger manipulation data from the finger-wearable device via the communication interface. The method includes determining that the finger-wearable device performs the first gesture based on a function of the finger manipulation data.

Type: Application
Filed: February 27, 2023
Publication date: October 19, 2023
Inventors: Benjamin Hylak, Aaron M. Burns, Adam G. Poulos, Arun Rakesh Yoganandan, Benjamin R. Blachnitzky, Nicolai Georg
-
Publication number: 20230325047

Abstract: A method includes displaying a plurality of computer-generated objects, including a first computer-generated object at a first position within an environment and a second computer-generated object at a second position within the environment. The first computer-generated object corresponds to a first user interface element that includes a first set of controls for modifying a content item. The method includes, while displaying the plurality of computer-generated objects, obtaining extremity tracking data. The method includes moving the first computer-generated object from the first position to a third position within the environment based on the extremity tracking data. The method includes, in accordance with a determination that the third position satisfies a proximity threshold with respect to the second position, merging the first computer-generated object with the second computer-generated object in order to generate a third computer-generated object for modifying the content item.

Type: Application
Filed: March 15, 2023
Publication date: October 12, 2023
Inventors: Nicolai Georg, Aaron M. Burns, Adam G. Poulos, Arun Rakesh Yoganandan, Benjamin Hylak, Benjamin R. Blachnitzky
-
Publication number: 20230325004

Abstract: Methods for interacting with objects and user interface elements in a computer-generated environment provide for an efficient and intuitive user experience. In some embodiments, a user can directly or indirectly interact with objects. In some embodiments, while performing an indirect manipulation, manipulations of virtual objects are scaled. In some embodiments, while performing a direct manipulation, manipulations of virtual objects are not scaled. In some embodiments, an object can be reconfigured from an indirect manipulation mode into a direct manipulation mode by moving the object to a respective position in the three-dimensional environment in response to a respective gesture.

Type: Application
Filed: March 10, 2023
Publication date: October 12, 2023
Inventors: Aaron M. Burns, Alexis H. Palangie, Nathan Gitter, Nicolai Georg, Benjamin R. Blachnitzky, Arun Rakesh Yoganandan, Benjamin Hylak, Adam G. Poulos
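The scaled-versus-unscaled distinction above can be shown with a toy translation function: indirect (at-a-distance) manipulation amplifies hand motion before applying it to the object, while direct manipulation applies it 1:1. The function signature and the scale factor are assumptions for illustration only:

```python
# Sketch of direct vs. indirect manipulation per the abstract: hand
# motion is scaled only in the indirect case. The 3.0 factor is an
# arbitrary illustrative choice.
def apply_manipulation(object_pos, hand_delta, direct: bool,
                       indirect_scale: float = 3.0):
    """Translate an object by the hand's motion, scaled when indirect."""
    scale = 1.0 if direct else indirect_scale
    return tuple(p + d * scale for p, d in zip(object_pos, hand_delta))
```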
-
Publication number: 20230324985

Abstract: In one implementation, a non-transitory computer-readable storage medium stores program instructions computer-executable on one or more processors of an electronic device to perform operations. The operations include presenting, on a display of the electronic device, a simulated reality (SR) environment at a first immersion level, where the first immersion level is associated with a first location of a reality boundary. Using a rotatable input device of the electronic device, input is received representing a request to change the first immersion level to a second immersion level. In accordance with receiving the input, the SR environment is presented at the second immersion level, where the second immersion level is associated with a second location of the reality boundary.

Type: Application
Filed: June 8, 2023
Publication date: October 12, 2023
Inventors: Earl M. Olson, Nicolai Georg, Omar R. Khan, James M. A. Begole
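Mapping a rotatable input to an immersion level, as described above, is essentially a clamped accumulator: each rotation step moves the level (and with it the reality boundary) within a fixed range. A minimal sketch, where the step size and the [0, 1] range are assumptions, not details from the publication:

```python
# Hypothetical mapping from rotation steps on a dial-style input device
# to an immersion level clamped to [0, 1]; step size is illustrative.
def update_immersion(level: float, rotation_steps: int,
                     step: float = 0.1) -> float:
    """Apply rotation input to the current immersion level, clamped."""
    return min(1.0, max(0.0, level + rotation_steps * step))
```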
-
Patent number: 11782548

Abstract: Detecting a touch includes receiving image data of a touching object of a user selecting selectable objects of a target surface, determining a rate of movement of the touching object, in response to determining that the rate of movement satisfies a predetermined threshold, modifying a touch detection parameter for detecting a touch event between the touching object and the target surface, and detecting one or more additional touch events using the modified touch detection parameter.

Type: Grant
Filed: March 24, 2021
Date of Patent: October 10, 2023
Assignee: Apple Inc.
Inventors: Lejing Wang, Benjamin R. Blachnitzky, Lilli I. Jonsson, Nicolai Georg
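The rate-adaptive touch detection above can be sketched as choosing the detection parameter from the movement rate: fast motion relaxes the threshold so quick taps are not missed. The specific distances and rate threshold below are illustrative assumptions, not values from the patent:

```python
# Hedged sketch of rate-adaptive touch detection. A touch registers when
# the fingertip-to-surface distance falls under a threshold; that
# threshold is relaxed while the touching object moves quickly.
DEFAULT_TOUCH_DISTANCE = 5.0   # mm, normal threshold (assumed)
RELAXED_TOUCH_DISTANCE = 8.0   # mm, threshold during fast movement (assumed)
RATE_THRESHOLD = 200.0         # mm/s, "fast movement" cutoff (assumed)

def touch_threshold(rate_mm_per_s: float) -> float:
    """Select the touch detection parameter from the movement rate."""
    if rate_mm_per_s >= RATE_THRESHOLD:
        return RELAXED_TOUCH_DISTANCE
    return DEFAULT_TOUCH_DISTANCE

def is_touch(fingertip_distance_mm: float, rate_mm_per_s: float) -> bool:
    """Detect a touch event using the rate-modified parameter."""
    return fingertip_distance_mm <= touch_threshold(rate_mm_per_s)
```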
-
Publication number: 20230315202

Abstract: A method includes displaying a plurality of computer-generated objects, and obtaining finger manipulation data from a finger-wearable device via a communication interface. In some implementations, the method includes receiving an untethered input vector that includes a plurality of untethered input indicator values. Each of the plurality of untethered input indicator values is associated with one of a plurality of untethered input modalities. In some implementations, the method includes obtaining proxy object manipulation data from a physical proxy object via the communication interface. The proxy object manipulation data corresponds to sensor data associated with one or more sensors integrated in the physical proxy object. The method includes registering an engagement event with respect to a first one of the plurality of computer-generated objects based on a combination of the finger manipulation data, the untethered input vector, and the proxy object manipulation data.

Type: Application
Filed: February 27, 2023
Publication date: October 5, 2023
Inventors: Adam G. Poulos, Aaron M. Burns, Arun Rakesh Yoganandan, Benjamin R. Blachnitzky, Nicolai Georg
-
Publication number: 20230297172

Abstract: A method includes, while displaying a computer-generated object at a first position within an environment, obtaining extremity tracking data from an extremity tracker. The first position is outside of a drop region that is viewable using the display. The method includes moving the computer-generated object from the first position to a second position within the environment based on the extremity tracking data. The method includes, in response to determining that the second position satisfies a proximity threshold with respect to the drop region, detecting an input that is associated with a spatial region of the environment. The method includes moving the computer-generated object from the second position to a third position that is within the drop region, based on determining that the spatial region satisfies a focus criterion associated with the drop region.

Type: Application
Filed: March 21, 2023
Publication date: September 21, 2023
Inventors: Aaron M. Burns, Adam G. Poulos, Arun Rakesh Yoganandan, Benjamin Hylak, Benjamin R. Blachnitzky, Jordan A. Cazamias, Nicolai Georg
-
Publication number: 20230297168

Abstract: A method is performed at an electronic device with one or more processors, a non-transitory memory, a display, and a communication interface provided to communicate with a finger-wearable device. The method includes, while displaying a plurality of content items on the display, obtaining finger manipulation data from the finger-wearable device via the communication interface. The method includes selecting a first one of the plurality of content items based on a first portion of the finger manipulation data in combination with satisfaction of a proximity threshold. The first one of the plurality of content items is associated with a first dimensional representation. The method includes changing the first one of the plurality of content items from the first dimensional representation to a second dimensional representation based on a second portion of the finger manipulation data.

Type: Application
Filed: March 15, 2023
Publication date: September 21, 2023
Inventors: Nicolai Georg, Aaron M. Burns, Adam G. Poulos, Arun Rakesh Yoganandan, Benjamin Hylak, Benjamin R. Blachnitzky
-
Patent number: 11709370

Abstract: In one implementation, a non-transitory computer-readable storage medium stores program instructions computer-executable on a computer to perform operations. The operations include presenting, on a display of an electronic device, first content representing a standard view of a physical setting depicted in image data generated by an image sensor of the electronic device. While presenting the first content, an interaction with an input device of the electronic device is detected that is indicative of a request to present an enriched view of the physical setting. In accordance with detecting the interaction, second content is formed representing the enriched view of the physical setting by applying an enrichment effect that alters or supplements the image data generated by the image sensor. The second content representing the enriched view of the physical setting is presented on the display.

Type: Grant
Filed: May 2, 2019
Date of Patent: July 25, 2023
Assignee: Apple Inc.
Inventors: Earl M. Olson, Nicolai Georg, Omar R. Khan, James M. A. Begole
-
Patent number: 11709541

Abstract: In one implementation, a non-transitory computer-readable storage medium stores program instructions computer-executable on a computer to perform operations. The operations include presenting first content representing a virtual reality setting on a display of an electronic device. Using an input device of the electronic device, input is received representing a request to present a view corresponding to a physical setting in which the electronic device is located. In accordance with receiving the input, the first content is simultaneously presented on the display with second content representing the view corresponding to the physical setting obtained using an image sensor of the electronic device.

Type: Grant
Filed: May 1, 2019
Date of Patent: July 25, 2023
Assignee: Apple Inc.
Inventors: Earl M. Olson, Nicolai Georg, Omar R. Khan, James M. A. Begole
-
Patent number: 11360558

Abstract: A system may include finger devices. A touch sensor may be mounted in a finger device housing to gather input from an external object as the object moves along an exterior surface of the housing. The touch sensor may include capacitive sensor electrodes. Sensors such as force sensors, ultrasonic sensors, inertial measurement units, optical sensors, and other components may be used in gathering finger input from a user. Finger input from a user may be used to manipulate virtual objects in a mixed reality or virtual reality environment while a haptic output device in a finger device provides associated haptic output. A user may interact with real-world objects while computer-generated content is overlaid over some or all of the objects. Object rotations and other movements may be converted into input for a mixed reality or virtual reality system using force measurements or other sensor measurements made with the finger devices.

Type: Grant
Filed: April 26, 2019
Date of Patent: June 14, 2022
Assignee: Apple Inc.
Inventors: Paul X. Wang, Nicolai Georg, Benjamin R. Blachnitzky, Alhad A. Palkar, Minhazul Islam, Alex J. Lehmann, Madeleine S. Cordier, Joon-Sup Han, Hongcheng Sun, Sang E. Lee, Kevin Z. Lo, Lilli Ing-Marie Jonsson, Luis Deliz Centeno, Yuhao Pan, Stephen E. Dey, Paul N. DuMontelle, Jonathan C. Atler, Tianjia Sun, Jian Li, Chang Zhang
-
Patent number: 11348305

Abstract: In one implementation, a non-transitory computer-readable storage medium stores program instructions computer-executable on a computer to perform operations. The operations include obtaining first content representing a physical environment in which an electronic device is located using an image sensor of the electronic device. A physical feature corresponding to a physical object in the physical environment is detected using the first content. A feature descriptor corresponding to a physical parameter of the physical feature is determined using the first content. Second content representing a computer generated reality (CGR) environment is generated based on the feature descriptor and presented on a display of the electronic device.

Type: Grant
Filed: January 5, 2021
Date of Patent: May 31, 2022
Assignee: Apple Inc.
Inventors: Earl M. Olson, Nicolai Georg, Omar R. Khan, James M. A. Begole
-
Publication number: 20210364809

Abstract: In one implementation, a non-transitory computer-readable storage medium stores program instructions computer-executable on a computer to perform operations. The operations include presenting, on a display of an electronic device, first content representing a standard view of a physical setting depicted in image data generated by an image sensor of the electronic device. While presenting the first content, an interaction with an input device of the electronic device is detected that is indicative of a request to present an enriched view of the physical setting. In accordance with detecting the interaction, second content is formed representing the enriched view of the physical setting by applying an enrichment effect that alters or supplements the image data generated by the image sensor. The second content representing the enriched view of the physical setting is presented on the display.

Type: Application
Filed: May 2, 2019
Publication date: November 25, 2021
Inventors: Earl M. Olson, Nicolai Georg, Omar R. Khan, James M. A. Begole
-
Publication number: 20210150801

Abstract: In one implementation, a non-transitory computer-readable storage medium stores program instructions computer-executable on a computer to perform operations. The operations include obtaining first content representing a physical environment in which an electronic device is located using an image sensor of the electronic device. A physical feature corresponding to a physical object in the physical environment is detected using the first content. A feature descriptor corresponding to a physical parameter of the physical feature is determined using the first content. Second content representing a computer generated reality (CGR) environment is generated based on the feature descriptor and presented on a display of the electronic device.

Type: Application
Filed: January 5, 2021
Publication date: May 20, 2021
Inventors: Earl M. Olson, Nicolai Georg, Omar R. Khan, James M. A. Begole
-
Publication number: 20210081034

Abstract: In one implementation, a non-transitory computer-readable storage medium stores program instructions computer-executable on a computer to perform operations. The operations include presenting first content representing a virtual reality setting on a display of an electronic device. Using an input device of the electronic device, input is received representing a request to present a view corresponding to a physical setting in which the electronic device is located. In accordance with receiving the input, the first content is simultaneously presented on the display with second content representing the view corresponding to the physical setting obtained using an image sensor of the electronic device.

Type: Application
Filed: May 1, 2019
Publication date: March 18, 2021
Inventors: Earl M. Olson, Nicolai Georg, Omar R. Khan, James M. A. Begole
-
Patent number: 10950031

Abstract: In one implementation, a non-transitory computer-readable storage medium stores program instructions computer-executable on a computer to perform operations. The operations include obtaining first content representing a physical environment in which an electronic device is located using an image sensor of the electronic device. A physical feature corresponding to a physical object in the physical environment is detected using the first content. A feature descriptor corresponding to a physical parameter of the physical feature is determined using the first content. Second content representing a computer generated reality (CGR) environment is generated based on the feature descriptor and presented on a display of the electronic device.

Type: Grant
Filed: May 8, 2019
Date of Patent: March 16, 2021
Assignee: Apple Inc.
Inventors: Earl M. Olson, Nicolai Georg, Omar R. Khan, James M. A. Begole