Patents by Inventor Benjamin R. Blachnitzky
Benjamin R. Blachnitzky has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).
-
Publication number: 20250093990
Abstract: Detecting a touch includes receiving image data of a touching object of a user selecting selectable objects of a target surface, determining a rate of movement of the touching object, in response to determining that the rate of movement satisfies a predetermined threshold, modifying a touch detection parameter for detecting a touch event between the touching object and the target surface, and detecting one or more additional touch events using the modified touch detection parameter.
Type: Application
Filed: December 4, 2024
Publication date: March 20, 2025
Inventors: Lejing Wang, Benjamin R. Blachnitzky, Lilli I. Jonsson, Nicolai Georg
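As a rough illustration of the rate-adaptive behavior this abstract describes, here is a minimal Swift sketch. The `TouchDetector` type, the hover thresholds, and the 0.5 m/s speed cutoff are invented for illustration; the filing does not publish an implementation.

```swift
import Foundation

struct FingertipSample {
    let position: SIMD3<Double>   // fingertip position estimated from image data, in meters
    let timestamp: TimeInterval
}

struct TouchDetector {
    // Touch detection parameter: fingertip-to-surface distance below which
    // a touch event is registered. Both values are assumptions.
    var hoverThreshold = 0.005            // 5 mm when moving slowly
    let fastMovementSpeed = 0.5           // m/s; the "predetermined threshold"
    private var previous: FingertipSample?

    // Returns true when a touch event is detected against the target surface.
    mutating func process(_ sample: FingertipSample, distanceToSurface: Double) -> Bool {
        defer { previous = sample }
        if let prev = previous {
            let dt = max(sample.timestamp - prev.timestamp, 1e-6)
            let d = sample.position - prev.position
            let speed = (d.x * d.x + d.y * d.y + d.z * d.z).squareRoot() / dt
            // Fast movement loosens the parameter so quick taps still register;
            // slow, deliberate movement tightens it again for precision.
            hoverThreshold = speed >= fastMovementSpeed ? 0.010 : 0.005
        }
        return distanceToSurface <= hoverThreshold
    }
}
```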
-
Patent number: 12242668
Abstract: A method is performed at an electronic device with one or more processors, a non-transitory memory, a display, an extremity tracking system, and a communication interface provided to communicate with a finger-wearable device. The method includes displaying a computer-generated object on the display. The method includes obtaining finger manipulation data from the finger-wearable device via the communication interface. The method includes determining a multi-finger gesture based on extremity tracking data from the extremity tracking system and the finger manipulation data. The method includes registering an engagement event with respect to the computer-generated object according to the multi-finger gesture.
Type: Grant
Filed: March 20, 2023
Date of Patent: March 4, 2025
Assignee: APPLE INC.
Inventors: Aaron M. Burns, Adam G. Poulos, Arun Rakesh Yoganandan, Benjamin Hylak, Benjamin R. Blachnitzky, Nicolai Georg
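A minimal Swift sketch of one way such sensor fusion could look, with invented types and thresholds: extremity tracking supplies fingertip positions, the finger-wearable device supplies a contact-force reading, and a two-finger pinch is recognized only when the two sources agree.

```swift
import Foundation

enum MultiFingerGesture { case twoFingerPinch, none }

struct ExtremityTrackingData {
    let thumbTip, indexTip, middleTip: SIMD3<Double>   // joint positions in meters
}

struct FingerManipulationData {
    let contactForce: Double   // hypothetical sensor reading from the wearable
}

func determineGesture(tracking: ExtremityTrackingData,
                      wearable: FingerManipulationData) -> MultiFingerGesture {
    func distance(_ a: SIMD3<Double>, _ b: SIMD3<Double>) -> Double {
        let d = a - b
        return (d.x * d.x + d.y * d.y + d.z * d.z).squareRoot()
    }
    // Camera-based extremity tracking: both fingertips near the thumb (2 cm, assumed).
    let indexPinched = distance(tracking.indexTip, tracking.thumbTip) < 0.02
    let middlePinched = distance(tracking.middleTip, tracking.thumbTip) < 0.02
    // The wearable confirms physical contact through measured force (assumed scale).
    let contact = wearable.contactForce > 0.1
    return (indexPinched && middlePinched && contact) ? .twoFingerPinch : .none
}
```

An engagement event with the displayed object would then be registered whenever this returns something other than `.none`.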
-
Patent number: 12189888
Abstract: Detecting a touch includes receiving image data of a touching object of a user selecting selectable objects of a target surface, determining a rate of movement of the touching object, in response to determining that the rate of movement satisfies a predetermined threshold, modifying a touch detection parameter for detecting a touch event between the touching object and the target surface, and detecting one or more additional touch events using the modified touch detection parameter.
Type: Grant
Filed: October 9, 2023
Date of Patent: January 7, 2025
Assignee: Apple Inc.
Inventors: Lejing Wang, Benjamin R. Blachnitzky, Lilli I. Jonsson, Nicolai Georg
-
Patent number: 12189853
Abstract: A method includes displaying a plurality of computer-generated objects, and obtaining finger manipulation data from a finger-wearable device via a communication interface. In some implementations, the method includes receiving an untethered input vector that includes a plurality of untethered input indicator values. Each of the plurality of untethered input indicator values is associated with one of a plurality of untethered input modalities. In some implementations, the method includes obtaining proxy object manipulation data from a physical proxy object via the communication interface. The proxy object manipulation data corresponds to sensor data associated with one or more sensors integrated in the physical proxy object. The method includes registering an engagement event with respect to a first one of the plurality of computer-generated objects based on a combination of the finger manipulation data, the untethered input vector, and the proxy object manipulation data.
Type: Grant
Filed: March 11, 2024
Date of Patent: January 7, 2025
Assignee: APPLE INC.
Inventors: Adam G. Poulos, Aaron M. Burns, Arun Rakesh Yoganandan, Benjamin R. Blachnitzky, Nicolai Georg
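Sketched below in Swift, one hedged reading of how the three data sources might be combined: an engagement event is registered only when at least two of them cross an activation level. Every name, threshold, and the two-of-three rule are hypothetical.

```swift
import Foundation

struct UntetheredInputVector {
    // One indicator value per untethered input modality (e.g. gaze, voice).
    var indicatorValues: [String: Double]
}

struct CombinedInput {
    var fingerPinchStrength: Double        // finger manipulation data
    var untethered: UntetheredInputVector  // untethered input vector
    var proxyGripForce: Double             // sensor data from the physical proxy object
}

// Returns the index of the engaged object, or nil when no engagement event
// should be registered.
func engagedObjectIndex(objectCount: Int, gazeTarget: Int, input: CombinedInput) -> Int? {
    let gaze = input.untethered.indicatorValues["gaze", default: 0]
    let active = [input.fingerPinchStrength > 0.5,
                  gaze > 0.5,
                  input.proxyGripForce > 0.5].filter { $0 }.count
    guard active >= 2, (0..<objectCount).contains(gazeTarget) else { return nil }
    return gazeTarget
}
```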
-
Publication number: 20250004545
Abstract: A head-mounted device may have a head-mounted support structure, a gaze tracker in the head-mounted support structure, and one or more displays in the head-mounted support structure. For example, two displays may display images to two eye boxes. The display may display a virtual keyboard and may display a text input in response to a gaze location that is determined by the gaze tracker. The gaze tracker may additionally determine a gaze swipe input, or a camera in the support structure may determine a hand swipe input, and the swipe input may be used with the gaze location to determine the text input. In particular, the swipe input may create a swipe input curve that is fit to the text input to determine the text input. A user's hand may be used as a secondary input to indicate the start or end of a text input.
Type: Application
Filed: March 22, 2024
Publication date: January 2, 2025
Inventors: Paul X. Wang, Ashwin Kumar Asoka Kumar Shenoi, Shuxin Yu, Benjamin R. Blachnitzky, Jie Gu, Fletcher R. Rothkopf
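The curve-fitting step lends itself to a short sketch: resample the swipe curve, measure its distance to the polyline through each candidate word's key centers, and keep the best fit. The Swift below assumes an invented key layout and distance metric; it is not the filing's algorithm.

```swift
import Foundation

typealias Point = SIMD2<Double>

// Hypothetical key centers on a normalized virtual keyboard.
let keyCenters: [Character: Point] = [
    "c": Point(0.30, 0.80), "a": Point(0.05, 0.50),
    "r": Point(0.35, 0.20), "t": Point(0.45, 0.20),
]

// Mean distance between the sampled swipe curve and a word's key path,
// compared at proportionally matching points along each.
func swipeCost(curve: [Point], word: String) -> Double {
    let ideal = word.compactMap { keyCenters[$0] }
    guard !curve.isEmpty, ideal.count == word.count else { return .infinity }
    let total = curve.enumerated().reduce(0.0) { sum, sample in
        let t = Double(sample.offset) / Double(max(curve.count - 1, 1))
        let target = ideal[Int(t * Double(ideal.count - 1))]
        let d = sample.element - target
        return sum + (d.x * d.x + d.y * d.y).squareRoot()
    }
    return total / Double(curve.count)
}

// The candidate whose key path best fits the gaze- or hand-swipe curve wins.
func bestWord(curve: [Point], candidates: [String]) -> String? {
    candidates.min { swipeCost(curve: curve, word: $0) < swipeCost(curve: curve, word: $1) }
}
```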
-
Publication number: 20250004581
Abstract: In one implementation, a method is provided for dynamically selecting an operation modality for a physical object. The method includes: obtaining a user input vector that includes at least one user input indicator value associated with one of a plurality of different input modalities; obtaining tracking data associated with a physical object; generating a first characterization vector for the physical object, including a pose value and a user grip value, based on the user input vector and the tracking data, wherein the pose value characterizes a spatial relationship between the physical object and a user of the computing system and the user grip value characterizes a manner in which the physical object is being held by the user; and selecting, based on the first characterization vector, a first operation modality as a current operation modality for the physical object.
Type: Application
Filed: July 1, 2022
Publication date: January 2, 2025
Inventors: Aaron M. Burns, Anette L. Freiin von Kapri, Arun Rakesh Yoganandan, Benjamin R. Blachnitzky, Christopher L. Nolet, David M. Schattel, Samantha Koire
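Read literally, the selection step is a mapping from the characterization vector's pose and grip values to a modality. A Swift sketch with invented enum cases and decision rules:

```swift
enum OperationModality { case stylus, pointer, wand }
enum PoseValue { case tipOnSurface, pointedAtContent, heldUpright }  // object-to-user spatial relationship
enum GripValue { case writingGrip, remoteControlGrip }               // how the object is held

struct CharacterizationVector {
    let pose: PoseValue
    let grip: GripValue
}

// Hypothetical rules; the filing claims the selection, not these particular mappings.
func selectModality(_ v: CharacterizationVector) -> OperationModality {
    switch (v.grip, v.pose) {
    case (.writingGrip, .tipOnSurface):           return .stylus   // write on a surface
    case (.remoteControlGrip, .pointedAtContent): return .pointer  // point at content
    default:                                      return .wand     // free-space input
    }
}
```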
-
Patent number: 12158988
Abstract: A method is performed at an electronic device with one or more processors, a non-transitory memory, a display, and an extremity tracker. The method includes obtaining extremity tracking data via the extremity tracker. The method includes displaying a computer-generated representation of a trackpad that is spatially associated with a physical surface. The physical surface is viewable within the display along with a content manipulation region that is separate from the computer-generated representation of the trackpad. The method includes identifying a first location within the computer-generated representation of the trackpad based on the extremity tracking data. The method includes mapping the first location to a corresponding location within the content manipulation region. The method includes displaying an indicator indicative of the mapping. The indicator may overlap the corresponding location within the content manipulation region.
Type: Grant
Filed: February 27, 2023
Date of Patent: December 3, 2024
Assignee: APPLE INC.
Inventors: Adam G. Poulos, Aaron M. Burns, Arun Rakesh Yoganandan, Benjamin R. Blachnitzky, Nicolai Georg
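The mapping step reduces to normalizing the fingertip location within the trackpad's bounds and rescaling it into the content manipulation region; a small Swift sketch under that assumption, with plain rectangles standing in for the real geometry:

```swift
struct Rect2D { var x, y, width, height: Double }

// Normalize the tracked location within the computer-generated trackpad,
// then scale it into the content manipulation region so an indicator can be
// displayed at the corresponding location.
func mapTrackpadLocation(x: Double, y: Double,
                         trackpad: Rect2D, contentRegion: Rect2D) -> (x: Double, y: Double) {
    let u = (x - trackpad.x) / trackpad.width
    let v = (y - trackpad.y) / trackpad.height
    return (contentRegion.x + u * contentRegion.width,
            contentRegion.y + v * contentRegion.height)
}
```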
-
Publication number: 20240221301
Abstract: Various implementations disclosed herein provide augmentations in extended reality (XR) using sensor data from a user-worn device. The sensor data may be used to understand that a user's state is associated with providing user assistance; e.g., a user's appearance or behavior, or an understanding of the environment, may be used to recognize a need or desire for user assistance. The augmentations may assist the user by enhancing or supplementing the user's abilities, e.g., providing guidance or other information about an environment to a disabled or impaired person.
Type: Application
Filed: December 28, 2023
Publication date: July 4, 2024
Inventors: Aaron M. Burns, Benjamin R. Blachnitzky, Laura Sugden, Charilaos Papadopoulos, James T. Turner
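At its most schematic, the described flow maps a sensor-derived user state to an assistive augmentation. The states and augmentations in this Swift sketch are invented placeholders, not categories from the filing:

```swift
enum UserState { case navigatingUnfamiliarSpace, readingSmallText, noAssistanceNeeded }
enum Augmentation { case directionalGuidance, magnifiedText }

// Choose an augmentation that enhances or supplements the user's abilities
// for the recognized state; nil when no assistance is called for.
func augmentation(for state: UserState) -> Augmentation? {
    switch state {
    case .navigatingUnfamiliarSpace: return .directionalGuidance
    case .readingSmallText:          return .magnifiedText
    case .noAssistanceNeeded:        return nil
    }
}
```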
-
Publication number: 20240211044
Abstract: A method includes displaying a plurality of computer-generated objects, and obtaining finger manipulation data from a finger-wearable device via a communication interface. In some implementations, the method includes receiving an untethered input vector that includes a plurality of untethered input indicator values. Each of the plurality of untethered input indicator values is associated with one of a plurality of untethered input modalities. In some implementations, the method includes obtaining proxy object manipulation data from a physical proxy object via the communication interface. The proxy object manipulation data corresponds to sensor data associated with one or more sensors integrated in the physical proxy object. The method includes registering an engagement event with respect to a first one of the plurality of computer-generated objects based on a combination of the finger manipulation data, the untethered input vector, and the proxy object manipulation data.
Type: Application
Filed: March 11, 2024
Publication date: June 27, 2024
Inventors: Adam G. Poulos, Aaron M. Burns, Arun Rakesh Yoganandan, Benjamin R. Blachnitzky, Nicolai Georg
-
Patent number: 12008208
Abstract: A method includes displaying a plurality of computer-generated objects, including a first computer-generated object at a first position within an environment and a second computer-generated object at a second position within the environment. The first computer-generated object corresponds to a first user interface element that includes a first set of controls for modifying a content item. The method includes, while displaying the plurality of computer-generated objects, obtaining extremity tracking data. The method includes moving the first computer-generated object from the first position to a third position within the environment based on the extremity tracking data. The method includes, in accordance with a determination that the third position satisfies a proximity threshold with respect to the second position, merging the first computer-generated object with the second computer-generated object in order to generate a third computer-generated object for modifying the content item.
Type: Grant
Filed: March 15, 2023
Date of Patent: June 11, 2024
Inventors: Nicolai Georg, Aaron M. Burns, Adam G. Poulos, Arun Rakesh Yoganandan, Benjamin Hylak, Benjamin R. Blachnitzky
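The merge rule can be sketched directly: when the dragged object's new position satisfies the proximity threshold with respect to the other object, combine their controls into a third object. The types and the 5 cm threshold below are assumptions.

```swift
import Foundation

struct ControlObject {
    var position: SIMD3<Double>
    var controls: [String]      // e.g. a set of controls for modifying a content item
}

let proximityThreshold = 0.05   // meters; assumed value

// Returns the merged third object, or nil when the moved object is still too
// far away and the two remain separate.
func mergeIfClose(_ moved: ControlObject, into other: ControlObject) -> ControlObject? {
    let d = moved.position - other.position
    guard (d.x * d.x + d.y * d.y + d.z * d.z).squareRoot() <= proximityThreshold else {
        return nil
    }
    return ControlObject(position: other.position,
                         controls: other.controls + moved.controls)
}
```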
-
Patent number: 11966510
Abstract: A method includes displaying a plurality of computer-generated objects, and obtaining finger manipulation data from a finger-wearable device via a communication interface. In some implementations, the method includes receiving an untethered input vector that includes a plurality of untethered input indicator values. Each of the plurality of untethered input indicator values is associated with one of a plurality of untethered input modalities. In some implementations, the method includes obtaining proxy object manipulation data from a physical proxy object via the communication interface. The proxy object manipulation data corresponds to sensor data associated with one or more sensors integrated in the physical proxy object. The method includes registering an engagement event with respect to a first one of the plurality of computer-generated objects based on a combination of the finger manipulation data, the untethered input vector, and the proxy object manipulation data.
Type: Grant
Filed: February 27, 2023
Date of Patent: April 23, 2024
Assignee: APPLE INC.
Inventors: Adam G. Poulos, Aaron M. Burns, Arun Rakesh Yoganandan, Benjamin R. Blachnitzky, Nicolai Georg
-
Patent number: 11960657
Abstract: A method includes, while displaying a computer-generated object at a first position within an environment, obtaining extremity tracking data from an extremity tracker. The first position is outside of a drop region that is viewable using the display. The method includes moving the computer-generated object from the first position to a second position within the environment based on the extremity tracking data. The method includes, in response to determining that the second position satisfies a proximity threshold with respect to the drop region, detecting an input that is associated with a spatial region of the environment. The method includes moving the computer-generated object from the second position to a third position that is within the drop region, based on determining that the spatial region satisfies a focus criterion associated with the drop region.
Type: Grant
Filed: March 21, 2023
Date of Patent: April 16, 2024
Inventors: Aaron M. Burns, Adam G. Poulos, Arun Rakesh Yoganandan, Benjamin Hylak, Benjamin R. Blachnitzky, Jordan A. Cazamias, Nicolai Georg
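A compact Swift reading of the final step, with invented geometry: the object snaps into the drop region only when the gaze point (standing in for the "spatial region" input) satisfies the region's focus criterion.

```swift
import Foundation

struct DropRegion {
    var center: SIMD3<Double>
    var radius: Double
}

func dist(_ a: SIMD3<Double>, _ b: SIMD3<Double>) -> Double {
    let d = a - b
    return (d.x * d.x + d.y * d.y + d.z * d.z).squareRoot()
}

// Moves the object into the drop region only when it is near the region and
// the gaze satisfies the (assumed) focus criterion; otherwise it stays put.
func finalPosition(objectAt current: SIMD3<Double>,
                   gazePoint: SIMD3<Double>,
                   region: DropRegion) -> SIMD3<Double> {
    let nearRegion = dist(current, region.center) <= region.radius * 2  // proximity threshold
    let gazeFocused = dist(gazePoint, region.center) <= region.radius   // focus criterion
    return (nearRegion && gazeFocused) ? region.center : current
}
```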
-
Publication number: 20240056492
Abstract: An electronic device such as a head-mounted device may present extended reality content such as a representation of a three-dimensional environment. The representation of the three-dimensional environment may be changed between different viewing modes having different immersion levels in response to user input. The three-dimensional environment may represent a multiuser communication session. A multiuser communication session may be saved and subsequently viewed as a replay. There may be an interactive virtual object within the replay of the multiuser communication session. The pose of the interactive virtual object may be manipulated by a user while the replay is paused. Some multiuser communication sessions may be hierarchical multiuser communication sessions with a presenter and audience members. The presenter and audience members may receive generalized feedback based on the audience members during the presentation.
Type: Application
Filed: June 30, 2023
Publication date: February 15, 2024
Inventors: Aaron M. Burns, Adam G. Poulos, Alexis H. Palangie, Benjamin R. Blachnitzky, Charilaos Papadopoulos, David M. Schattel, Ezgi Demirayak, Jia Wang, Reza Abbasian, Ryan S. Carlin
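The viewing-mode switching is easy to picture as a small state machine; the mode names and step-wise transition in this Swift sketch are assumptions, not the filing's design.

```swift
enum ViewingMode: Int {
    case windowed = 0, partialImmersion, fullImmersion   // increasing immersion levels
}

struct SessionView {
    private(set) var mode: ViewingMode = .windowed

    // Step one immersion level up or down in response to user input,
    // clamping at the ends of the range.
    mutating func changeImmersion(increase: Bool) {
        if let next = ViewingMode(rawValue: mode.rawValue + (increase ? 1 : -1)) {
            mode = next
        }
    }
}
```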
-
Publication number: 20240054736
Abstract: An electronic device such as a head-mounted device may present extended reality content such as a representation of a three-dimensional environment. The representation of the three-dimensional environment may be changed between different viewing modes having different immersion levels in response to user input. The three-dimensional environment may represent a multiuser communication session. A multiuser communication session may be saved and subsequently viewed as a replay. There may be an interactive virtual object within the replay of the multiuser communication session. The pose of the interactive virtual object may be manipulated by a user while the replay is paused. Some multiuser communication sessions may be hierarchical multiuser communication sessions with a presenter and audience members. The presenter and audience members may receive generalized feedback based on the audience members during the presentation.
Type: Application
Filed: June 30, 2023
Publication date: February 15, 2024
Inventors: Aaron M. Burns, Adam G. Poulos, Alexis H. Palangie, Benjamin R. Blachnitzky, Charilaos Papadopoulos, David M. Schattel, Ezgi Demirayak, Jia Wang, Reza Abbasian, Ryan S. Carlin
-
Publication number: 20240054746
Abstract: An electronic device such as a head-mounted device may present extended reality content such as a representation of a three-dimensional environment. The representation of the three-dimensional environment may be changed between different viewing modes having different immersion levels in response to user input. The three-dimensional environment may represent a multiuser communication session. A multiuser communication session may be saved and subsequently viewed as a replay. There may be an interactive virtual object within the replay of the multiuser communication session. The pose of the interactive virtual object may be manipulated by a user while the replay is paused. Some multiuser communication sessions may be hierarchical multiuser communication sessions with a presenter and audience members. The presenter and audience members may receive generalized feedback based on the audience members during the presentation.
Type: Application
Filed: June 30, 2023
Publication date: February 15, 2024
Inventors: Aaron M. Burns, Adam G. Poulos, Alexis H. Palangie, Benjamin R. Blachnitzky, Charilaos Papadopoulos, David M. Schattel, Ezgi Demirayak, Jia Wang, Reza Abbasian, Ryan S. Carlin
-
Publication number: 20230376110
Abstract: A method is performed at an electronic device with one or more processors, a non-transitory memory, a display, and an extremity tracker. The method includes obtaining extremity tracking data via the extremity tracker. The method includes displaying a computer-generated representation of a trackpad that is spatially associated with a physical surface. The physical surface is viewable within the display along with a content manipulation region that is separate from the computer-generated representation of the trackpad. The method includes identifying a first location within the computer-generated representation of the trackpad based on the extremity tracking data. The method includes mapping the first location to a corresponding location within the content manipulation region. The method includes displaying an indicator indicative of the mapping. The indicator may overlap the corresponding location within the content manipulation region.
Type: Application
Filed: February 27, 2023
Publication date: November 23, 2023
Inventors: Adam G. Poulos, Aaron M. Burns, Arun Rakesh Yoganandan, Benjamin R. Blachnitzky, Nicolai Georg
-
Publication number: 20230333651
Abstract: A method is performed at an electronic device with one or more processors, a non-transitory memory, a display, an extremity tracking system, and a communication interface provided to communicate with a finger-wearable device. The method includes displaying a computer-generated object on the display. The method includes obtaining finger manipulation data from the finger-wearable device via the communication interface. The method includes determining a multi-finger gesture based on extremity tracking data from the extremity tracking system and the finger manipulation data. The method includes registering an engagement event with respect to the computer-generated object according to the multi-finger gesture.
Type: Application
Filed: March 20, 2023
Publication date: October 19, 2023
Inventors: Aaron M. Burns, Adam G. Poulos, Arun Rakesh Yoganandan, Benjamin Hylak, Benjamin R. Blachnitzky, Nicolai Georg
-
Publication number: 20230333645
Abstract: In one implementation, a method of processing input for multiple devices is performed by a first electronic device with one or more processors and non-transitory memory. The method includes determining a gaze direction. The method includes selecting a target electronic device based on determining that the gaze direction is directed to the target electronic device. The method includes receiving, via an input device, one or more inputs. The method includes processing the one or more inputs based on the target electronic device.
Type: Application
Filed: May 12, 2023
Publication date: October 19, 2023
Inventors: Alexis H. Palangie, Aaron M. Burns, Arun Rakesh Yoganandan, Benjamin R. Blachnitzky
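One plausible shape for the gaze-based selection, sketched in Swift: pick the device whose known direction best aligns with the gaze ray, then route subsequent inputs to it. The 15-degree acceptance cone and the device model are assumptions.

```swift
import Foundation

struct NearbyDevice {
    let name: String
    let direction: SIMD3<Double>   // unit vector from the user toward the device
}

// Select the target electronic device whose direction is most aligned with
// the gaze direction, within an assumed 15-degree acceptance cone.
func targetDevice(gazeDirection: SIMD3<Double>, devices: [NearbyDevice]) -> NearbyDevice? {
    func dot(_ a: SIMD3<Double>, _ b: SIMD3<Double>) -> Double {
        a.x * b.x + a.y * b.y + a.z * b.z
    }
    let minAlignment = cos(15.0 * .pi / 180)
    return devices
        .filter { dot($0.direction, gazeDirection) >= minAlignment }
        .max { dot($0.direction, gazeDirection) < dot($1.direction, gazeDirection) }
}
```

Inputs subsequently received from the shared input device would then be processed based on whichever device this returns.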
-
Publication number: 20230333650
Abstract: A method is performed at an electronic device with one or more processors, a non-transitory memory, a display, and a communication interface provided to communicate with a finger-wearable device. The method includes displaying first instructional content that is associated with a first gesture. The first instructional content includes a first object. The method includes determining an engagement score that characterizes a level of user engagement with respect to the first object. The method includes obtaining finger manipulation data from the finger-wearable device via the communication interface. The method includes determining that the finger-wearable device performs the first gesture based on a function of the finger manipulation data.
Type: Application
Filed: February 27, 2023
Publication date: October 19, 2023
Inventors: Benjamin Hylak, Aaron M. Burns, Adam G. Poulos, Arun Rakesh Yoganandan, Benjamin R. Blachnitzky, Nicolai Georg
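The engagement score invites a simple sketch: weight gaze proximity against dwell time on the instructional object, and only evaluate the wearable's data for the gesture once the score clears a bar. All weights and thresholds in this Swift fragment are invented.

```swift
import Foundation

struct EngagementSample {
    let gazeDistanceToObject: Double   // meters from the gaze hit point to the first object
    let dwellTime: TimeInterval        // seconds the gaze has stayed on it
}

// Higher when the user looks at the object closely and steadily (assumed weighting).
func engagementScore(_ s: EngagementSample) -> Double {
    let proximity = max(0, 1 - s.gazeDistanceToObject / 0.5)
    let dwell = min(s.dwellTime / 2.0, 1)
    return 0.6 * proximity + 0.4 * dwell
}

// Gate the gesture check on engagement, then test the finger manipulation
// data against the demonstrated gesture (a bare acceleration test here).
func didPerformFirstGesture(score: Double, fingerAcceleration: Double) -> Bool {
    score >= 0.7 && fingerAcceleration > 1.5
}
```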
-
Publication number: 20230325004
Abstract: Methods for interacting with objects and user interface elements in a computer-generated environment provide for an efficient and intuitive user experience. In some embodiments, a user can directly or indirectly interact with objects. In some embodiments, while performing an indirect manipulation, manipulations of virtual objects are scaled. In some embodiments, while performing a direct manipulation, manipulations of virtual objects are not scaled. In some embodiments, an object can be reconfigured from an indirect manipulation mode into a direct manipulation mode by moving the object to a respective position in the three-dimensional environment in response to a respective gesture.
Type: Application
Filed: March 10, 2023
Publication date: October 12, 2023
Inventors: Aaron M. Burns, Alexis H. Palangie, Nathan Gitter, Nicolai Georg, Benjamin R. Blachnitzky, Arun Rakesh Yoganandan, Benjamin Hylak, Adam G. Poulos
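The scaling distinction the abstract draws reduces to a small rule; the distance-proportional gain in this Swift sketch is an invented example of such scaling, not the filing's formula.

```swift
import Foundation

enum ManipulationMode { case direct, indirect }

// Translate a virtual object from a hand-motion delta. Direct manipulation
// applies motion one-to-one; indirect manipulation scales it, here with an
// assumed distance-proportional gain so small motions can move far objects.
func objectTranslation(handDelta: SIMD3<Double>,
                       mode: ManipulationMode,
                       objectDistance: Double) -> SIMD3<Double> {
    switch mode {
    case .direct:   return handDelta
    case .indirect: return handDelta * max(objectDistance, 1)
    }
}
```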