Patents by Inventor Ioana Negoita
Ioana Negoita has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).
-
Publication number: 20240302898
Abstract: A method is performed at an electronic device with one or more processors, a non-transitory memory, and a display. The method includes displaying, on the display, a first user interface (UI) element that is associated with a first selection region and a second selection region. The method includes, while displaying the first UI element, determining, based on eye tracking data, that a targeting criterion is satisfied with respect to the first selection region or the second selection region. The eye tracking data is associated with one or more eyes of a user of the electronic device. The method includes, while displaying the first UI element, selecting the first UI element based at least in part on determining that the targeting criterion is satisfied.
Type: Application
Filed: April 30, 2024
Publication date: September 12, 2024
Inventors: Bryce L. Schmidtchen, Ioana Negoita, Anshu K. Chimalamarri, Gregory Lutter, Thomas J. Moore, Trevor J. McIntyre
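The gaze-based selection described in this abstract can be illustrated with a minimal sketch. The dwell-sample count, the rectangular `Region` shape, and all function names below are illustrative assumptions, not the patented implementation:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Region:
    """Axis-aligned selection region in display coordinates."""
    x: float
    y: float
    width: float
    height: float

    def contains(self, px: float, py: float) -> bool:
        return (self.x <= px <= self.x + self.width
                and self.y <= py <= self.y + self.height)

def targeting_satisfied(gaze_samples, region, dwell_samples=3):
    """Treat the targeting criterion as satisfied when the most recent
    `dwell_samples` gaze points all fall inside the region."""
    if len(gaze_samples) < dwell_samples:
        return False
    return all(region.contains(x, y) for x, y in gaze_samples[-dwell_samples:])

def select_ui_element(gaze_samples, first_region, second_region):
    """Select the UI element when either selection region is targeted."""
    return (targeting_satisfied(gaze_samples, first_region)
            or targeting_satisfied(gaze_samples, second_region))
```

Associating one UI element with two selection regions, as the claim does, lets either region trigger the same selection.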
-
Patent number: 12086945
Abstract: In one implementation, a method of performing late-stage shift is performed at a device including a display, one or more processors, and non-transitory memory. The method includes generating, based on a first predicted pose of the device for a display time period, a first image. The method includes generating a mask indicating a first region of the first image and a second region of the first image. The method includes generating a second image by shifting, based on a second predicted pose of the device for the display time period, the first region of the first image without shifting the second region of the first image. The method includes displaying, on the display at the display time period, the second image.
Type: Grant
Filed: October 30, 2023
Date of Patent: September 10, 2024
Assignee: APPLE INC.
Inventors: Thomas G. Salter, Ganghun Kim, Ioana Negoita, Devin William Chalmers, Anshu Kameswar Chimalamarri, Thomas Justin Moore
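The masked late-stage shift can be sketched as a per-pixel operation: pixels in the masked (first) region are translated by the pose-derived offset while the unmasked (second) region stays put. The nested-list image, single-axis integer shift, and zero fill below are simplifying assumptions:

```python
def late_stage_shift(image, mask, dx):
    """Shift pixels where mask is True by `dx` columns (the offset derived
    from the updated pose prediction); leave masked-out pixels in place.
    `image` is a list of rows; pixels shifted in from outside the masked
    region are filled with 0."""
    h, w = len(image), len(image[0])
    out = [row[:] for row in image]
    for r in range(h):
        for c in range(w):
            if mask[r][c]:
                src = c - dx
                out[r][c] = image[r][src] if 0 <= src < w and mask[r][src] else 0
    return out
```

In a real pipeline the shift would typically be applied late in the frame, just before scan-out, so only the pose-dependent region is corrected.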
-
Publication number: 20240257430
Abstract: A method includes: presenting a posture summary interface including: a representation of the user, a visualization of a current accumulated strain value for the user, and a first affordance for initiating an animated posture summary associated with the accumulated strain value for the user over a respective time window; and in response to detecting a user input directed to the first affordance within the posture summary interface, presenting an animation of the representation of the user over the respective time window that corresponds to one or more instances in which head pose information changes associated with the user caused an increase or a decrease to the accumulated strain value greater than a significance threshold, wherein an appearance of the visualization of the current accumulated strain value for the user changes to represent the accumulated strain value for the user over the respective time window.
Type: Application
Filed: April 10, 2024
Publication date: August 1, 2024
Inventors: Matthew S. DeMers, Adeeti V. Ullal, Alexander G. Bruno, Daniel M. Trietsch, Ioana Negoita, James J. Dunne, Thomas G. Salter, Thomas J. Moore
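The "instances in which head pose information changes caused an increase or a decrease to the accumulated strain value greater than a significance threshold" amount to a change-detection pass over a strain time series. A minimal sketch, assuming the strain history is a list of sampled values:

```python
def significant_strain_events(strain_values, significance_threshold):
    """Return (index, delta) pairs for sample instants where the
    accumulated strain value rose or fell by more than the threshold.
    These are the instants an animated posture summary would highlight."""
    events = []
    for i in range(1, len(strain_values)):
        delta = strain_values[i] - strain_values[i - 1]
        if abs(delta) > significance_threshold:
            events.append((i, delta))
    return events
```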
-
Patent number: 12008160
Abstract: A method is performed at an electronic device with one or more processors, a non-transitory memory, an eye tracker, and a display. The eye tracker receives eye tracking data associated with one or more eyes of a user of the electronic device. The method includes displaying, on the display, a first user interface (UI) element that is associated with a first selection region and a second selection region. While displaying the first UI element, the method includes determining, based on the eye tracking data, that a first targeting criterion is satisfied with respect to the first selection region, and determining, based on the eye tracking data, that a second targeting criterion is satisfied with respect to the second selection region. The method includes selecting the first UI element based at least in part on determining that the first targeting criterion is satisfied and the second targeting criterion is satisfied.
Type: Grant
Filed: March 6, 2023
Date of Patent: June 11, 2024
Assignee: APPLE INC.
Inventors: Bryce L. Schmidtchen, Ioana Negoita, Anshu K. Chimalamarri, Gregory Lutter, Thomas J. Moore, Trevor J. McIntyre
-
Patent number: 12002140
Abstract: A method includes: presenting a posture summary interface including: a representation of the user, a visualization of a current accumulated strain value for the user, and a first affordance for initiating an animated posture summary associated with the accumulated strain value for the user over a respective time window; and in response to detecting a user input directed to the first affordance within the posture summary interface, presenting an animation of the representation of the user over the respective time window that corresponds to one or more instances in which head pose information changes associated with the user caused an increase or a decrease to the accumulated strain value greater than a significance threshold, wherein an appearance of the visualization of the current accumulated strain value for the user changes to represent the accumulated strain value for the user over the respective time window.
Type: Grant
Filed: May 22, 2023
Date of Patent: June 4, 2024
Assignee: APPLE INC.
Inventors: Matthew S. DeMers, Adeeti V. Ullal, Alexander G. Bruno, Daniel M. Trietsch, Ioana Negoita, James J. Dunne, Thomas G. Salter, Thomas J. Moore
-
Publication number: 20240089695
Abstract: A method includes obtaining an image of a machine-readable data representation that is located on a physical object using a camera of an electronic device. The machine-readable data representation includes an encoded form of a data value. The method further includes decoding the machine-readable data representation to determine the data value, whereby the data value includes a content identifier and a content source identifier. The method also includes selecting a content source based on the content source identifier, obtaining a content item and content location information based on the content identifier from the content source, determining a content position and a content orientation for the content item relative to the physical object based on the content location information, and displaying a representation of the content item using the electronic device according to the content position and the content orientation.
Type: Application
Filed: November 16, 2023
Publication date: March 14, 2024
Inventors: David W. Padgett, Christopher D. Fu, Scott G. Wade, Paul Ewers, Ioana Negoita, Thomas G. Salter, Dhruv Aditya Govil, Dimitris Ladopoulos
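The decoded data value carries both a content identifier and a content source identifier, which then drive source selection. A minimal sketch of that split-and-lookup step, assuming a `<source-id>:<content-id>` string encoding (the format and registry shape are illustrative, not from the patent):

```python
def parse_data_value(data_value):
    """Split a decoded data value of the assumed form
    '<source-id>:<content-id>' into its two identifiers."""
    source_id, _, content_id = data_value.partition(":")
    if not source_id or not content_id:
        raise ValueError("malformed data value")
    return source_id, content_id

def select_content_source(source_id, sources):
    """Pick the content source registered under the source identifier."""
    try:
        return sources[source_id]
    except KeyError:
        raise LookupError(f"unknown content source: {source_id}")
```

The selected source would then be queried with the content identifier for the content item and its location information.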
-
Publication number: 20240062485
Abstract: In one implementation, a method of performing late-stage shift is performed at a device including a display, one or more processors, and non-transitory memory. The method includes generating, based on a first predicted pose of the device for a display time period, a first image. The method includes generating a mask indicating a first region of the first image and a second region of the first image. The method includes generating a second image by shifting, based on a second predicted pose of the device for the display time period, the first region of the first image without shifting the second region of the first image. The method includes displaying, on the display at the display time period, the second image.
Type: Application
Filed: October 30, 2023
Publication date: February 22, 2024
Inventors: Thomas G. Salter, Ganghun Kim, Ioana Negoita, Devin William Chalmers, Anshu Kameswar Chimalamarri, Thomas Justin Moore
-
Publication number: 20240023830
Abstract: In one implementation, a method is performed for tiered posture awareness. The method includes: while presenting a three-dimensional (3D) environment, via the display device, obtaining head pose information for a user associated with the computing system; determining an accumulated strain value for the user based on the head pose information; and in accordance with a determination that the accumulated strain value for the user exceeds a first posture awareness threshold: determining a location for virtual content based on a height value associated with the user and a depth value associated with the 3D environment; and presenting, via the display device, the virtual content at the determined location while continuing to present the 3D environment via the display device.
Type: Application
Filed: May 22, 2023
Publication date: January 25, 2024
Inventors: Thomas G. Salter, Adeeti V. Ullal, Alexander G. Bruno, Daniel M. Trietsch, Edith M. Arnold, Edwin Iskandar, Ioana Negoita, James J. Dunne, Johahn Y. Leung, Karthik Jayaraman Raghuram, Matthew S. DeMers, Thomas J. Moore
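"Tiered" awareness implies a sequence of posture thresholds, with the response escalating as the accumulated strain crosses each one. A minimal sketch of the threshold-crossing check, assuming thresholds sorted in ascending order (the tier model is an illustrative assumption):

```python
def posture_tier(accumulated_strain, thresholds):
    """Return the index of the highest posture awareness tier whose
    threshold the accumulated strain value exceeds, or -1 if the strain
    is below every threshold. `thresholds` is assumed sorted ascending."""
    tier = -1
    for i, threshold in enumerate(thresholds):
        if accumulated_strain > threshold:
            tier = i
    return tier
```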
-
Publication number: 20230394755
Abstract: A method includes presenting a representation of a three-dimensional (3D) environment from a current point-of-view. The method includes identifying a region of interest within the 3D environment. The region of interest is located at a first distance from the current point-of-view. The method includes receiving, via the audio sensor, an audible signal and converting the audible signal to audible signal data. The method includes displaying, on the display, a visual representation of the audible signal data at a second distance from the current point-of-view that is a function of the first distance between the region of interest and the current point-of-view.
Type: Application
Filed: May 31, 2023
Publication date: December 7, 2023
Inventors: Ioana Negoita, Alesha Unpingco, Bryce L. Schmidtchen, Devin W. Chalmers, Lee Sparks, Thomas J. Moore
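The key claim is that the visualization's distance is a function of the region-of-interest distance. One plausible such function, purely as a sketch (the clamping band is an assumption, not the patent's function), keeps the visualization tracking the region of interest within a comfortable depth range:

```python
def visualization_distance(roi_distance, near=0.5, far=3.0):
    """Place the audio visualization at a distance derived from the
    region-of-interest distance, clamped to an assumed comfort band
    of [near, far] meters."""
    return max(near, min(far, roi_distance))
```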
-
Patent number: 11836872
Abstract: In one implementation, a method of performing late-stage shift is performed at a device including a display, one or more processors, and non-transitory memory. The method includes generating, based on a first predicted pose of the device for a display time period, a first image. The method includes generating a mask indicating a first region of the first image and a second region of the first image. The method includes generating a second image by shifting, based on a second predicted pose of the device for the display time period, the first region of the first image without shifting the second region of the first image. The method includes displaying, on the display at the display time period, the second image.
Type: Grant
Filed: February 1, 2022
Date of Patent: December 5, 2023
Assignee: APPLE INC.
Inventors: Thomas G. Salter, Ganghun Kim, Ioana Negoita, Devin William Chalmers, Anshu Kameswar Chimalamarri, Thomas Justin Moore
-
Publication number: 20230377480
Abstract: In some implementations, a method includes: while presenting a 3D environment, obtaining user profile and head pose information for a user; determining locations for visual cues within the 3D environment for a first portion of a guided stretching session based on the user profile and head pose information; presenting the visual cues at the determined locations within the 3D environment and a directional indicator; and in response to detecting a change to the head pose information: updating a location for the directional indicator based on the change to the head pose information; and in accordance with a determination that the change to the head pose information satisfies a criterion associated with a first visual cue among the visual cues, providing at least one of audio, haptic, or visual feedback indicating that the first visual cue has been completed for the first portion of the guided stretching session.
Type: Application
Filed: May 22, 2023
Publication date: November 23, 2023
Inventors: James J. Dunne, Adeeti V. Ullal, Alexander G. Bruno, Daniel M. Trietsch, Ioana Negoita, Irida Mance, Matthew S. DeMers, Thomas G. Salter
-
Patent number: 11825375
Abstract: A method includes determining a device location of an electronic device, transmitting a request to a content source, the request including the device location of the electronic device, and receiving, from the content source in response to the request, a content item that is associated with display location information that describes a content position for the content item relative to a physical environment. The content item is selected by the content source based on the content position for the content item being within an area that is defined based on the device location. The method also includes displaying a representation of the content item as part of a computer-generated reality scene in which the representation of the content item is positioned relative to the physical environment according to the content position for the content item from the display location information for the content item.
Type: Grant
Filed: November 17, 2022
Date of Patent: November 21, 2023
Assignee: APPLE INC.
Inventors: David W. Padgett, Christopher D. Fu, Scott G. Wade, Paul Ewers, Ioana Negoita, Thomas G. Salter, Dhruv Aditya Govil, Dimitris Ladopoulos
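The source-side selection step, picking content items whose position falls within an area defined by the device location, can be sketched as a radius filter. The 2D coordinates and Euclidean radius are simplifying assumptions (a real service would likely use geodetic coordinates and spatial indexing):

```python
import math

def items_near(device_location, items, radius):
    """Return content items whose content position lies within `radius`
    of the reported device location. `items` is a list of
    ((x, y) position, item) pairs; distance is 2D Euclidean."""
    dx, dy = device_location
    return [item for (pos, item) in items
            if math.hypot(pos[0] - dx, pos[1] - dy) <= radius]
```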
-
Patent number: 11800059
Abstract: An electronic device is described. In some embodiments, the electronic device includes instructions for: while presenting an extended reality environment, receiving, by the first electronic device, a request to present a virtual representation of a remote participant of a communication session, where the first electronic device is connected to the communication session; obtaining a capability of the remote participant of the communication session; and in response to receiving the request to present the virtual representation of the remote participant of the communication session, presenting the virtual representation of the remote participant of the communication session based on the obtained capability of the remote participant of the communication session.
Type: Grant
Filed: August 9, 2021
Date of Patent: October 24, 2023
Assignee: Apple Inc.
Inventors: Devin W. Chalmers, Jae Hwang Lee, Rahul Nair, Ioana Negoita
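Presenting the virtual representation "based on the obtained capability" suggests a capability-to-representation mapping. A minimal sketch, in which the capability names and representation styles are invented for illustration (the patent does not enumerate them):

```python
def representation_for(capability):
    """Map a remote participant's reported capability to a presentation
    style. All tier names here are assumptions for illustration."""
    styles = {
        "spatial": "3d_avatar",      # participant supports full spatial rendering
        "video": "video_tile",       # participant supplies 2D video only
        "audio": "audio_indicator",  # audio-only participant
    }
    return styles.get(capability, "placeholder")
```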
-
Publication number: 20230333643
Abstract: A method is performed at an electronic device with one or more processors, a non-transitory memory, an eye tracker, and a display. The eye tracker receives eye tracking data associated with one or more eyes of a user of the electronic device. The method includes displaying, on the display, a first user interface (UI) element that is associated with a first selection region and a second selection region. While displaying the first UI element, the method includes determining, based on the eye tracking data, that a first targeting criterion is satisfied with respect to the first selection region, and determining, based on the eye tracking data, that a second targeting criterion is satisfied with respect to the second selection region. The method includes selecting the first UI element based at least in part on determining that the first targeting criterion is satisfied and the second targeting criterion is satisfied.
Type: Application
Filed: March 6, 2023
Publication date: October 19, 2023
Inventors: Bryce L. Schmidtchen, Ioana Negoita, Anshu K. Chimalamarri, Gregory Lutter, Thomas J. Moore, Trevor J. McIntyre
-
Patent number: 11733956
Abstract: In one implementation, a method of providing display device sharing and interactivity in simulated reality is performed at a first electronic device including one or more processors and a non-transitory memory. The method includes obtaining a gesture input to a first display device in communication with the first electronic device from a first user, where the first display device includes a first display. The method further includes transmitting a representation of the first display to a second electronic device in response to obtaining the gesture input. The method additionally includes receiving an input message directed to the first display device from the second electronic device, where the input message includes an input directive obtained by the second electronic device from a second user. The method also includes transmitting the input message to the first display device for execution by the first display device.
Type: Grant
Filed: September 3, 2019
Date of Patent: August 22, 2023
Assignee: APPLE INC.
Inventors: Bruno M. Sommer, Alexandre Da Veiga, Ioana Negoita
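The final step, forwarding the remote user's input message to the shared display device for execution, is essentially message routing. A minimal sketch, where the message shape, `DisplayDevice` stand-in, and method names are all assumptions for illustration:

```python
from dataclasses import dataclass, field

@dataclass
class InputMessage:
    """An input directive obtained by the second electronic device from
    the second user, addressed to the first user's display device."""
    target_display_id: str
    directive: str

@dataclass
class DisplayDevice:
    """Minimal stand-in for a shared display device that executes
    received input directives and logs them."""
    display_id: str
    executed: list = field(default_factory=list)

    def execute(self, directive):
        self.executed.append(directive)
        return f"{self.display_id} executed {directive}"

def route_input_message(message, displays):
    """Forward the input message to the target display for execution."""
    return displays[message.target_display_id].execute(message.directive)
```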
-
Publication number: 20230252736
Abstract: In one implementation, a method for surfacing an XR object corresponding to an electronic message. The method includes: obtaining an electronic message from a sender; in response to determining that the electronic message is associated with a real-world object, determining whether a current field-of-view (FOV) of a physical environment includes the real-world object; and in accordance with a determination that the current FOV of the physical environment includes the real-world object, presenting, via the display device, an extended reality (XR) object that corresponds to the electronic message in association with the real-world object.
Type: Application
Filed: January 25, 2023
Publication date: August 10, 2023
Inventors: Elizabeth V. Petrov, Ioana Negoita
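The gating logic, surface the XR object only when the message is tied to a real-world object and that object is currently in view, reduces to a two-part check. A sketch, assuming the message is a dict with an optional `linked_object` key (a representation invented here for illustration):

```python
def should_surface_xr_object(message, fov_object_ids):
    """Present the XR object for a message only when the message is
    associated with a real-world object and that object's identifier is
    among those detected in the current field of view."""
    linked = message.get("linked_object")
    return linked is not None and linked in fov_object_ids
```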
-
Publication number: 20230074109
Abstract: A method includes determining a device location of an electronic device, transmitting a request to a content source, the request including the device location of the electronic device, and receiving, from the content source in response to the request, a content item that is associated with display location information that describes a content position for the content item relative to a physical environment. The content item is selected by the content source based on the content position for the content item being within an area that is defined based on the device location. The method also includes displaying a representation of the content item as part of a computer-generated reality scene in which the representation of the content item is positioned relative to the physical environment according to the content position for the content item from the display location information for the content item.
Type: Application
Filed: November 17, 2022
Publication date: March 9, 2023
Inventors: David W. Padgett, Christopher D. Fu, Scott G. Wade, Paul Ewers, Ioana Negoita, Thomas G. Salter, Dhruv Aditya Govil, Dimitris Ladopoulos
-
Patent number: 11533580
Abstract: A method includes determining a device location of an electronic device, and obtaining a content item to be output for display by the electronic device based on the device location, wherein the content item comprises coarse content location information and fine content location information. The method also includes determining an anchor in a physical environment based on the content item, determining a content position and a content orientation for the content item relative to the anchor based on the fine content location information, and displaying a representation of the content item using the electronic device using the content position and the content orientation.
Type: Grant
Filed: April 29, 2020
Date of Patent: December 20, 2022
Assignee: APPLE INC.
Inventors: David W. Padgett, Christopher D. Fu, Scott G. Wade, Paul Ewers, Ioana Negoita, Thomas G. Salter, Dhruv Aditya Govil, Dimitris Ladopoulos
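The coarse/fine split means the device location only narrows down which content to fetch, while the fine location information positions the item relative to a physical anchor. Composing the anchor pose with the fine offset can be sketched as follows (translation-plus-yaw is a simplification; a real system would compose full 3D transforms):

```python
def world_pose(anchor_position, fine_offset, fine_yaw_deg):
    """Compose an anchor's world position (x, y, z) with a content item's
    fine location information: a positional offset relative to the anchor
    and a yaw orientation in degrees, normalized to [0, 360)."""
    x = anchor_position[0] + fine_offset[0]
    y = anchor_position[1] + fine_offset[1]
    z = anchor_position[2] + fine_offset[2]
    return (x, y, z), fine_yaw_deg % 360.0
```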
-
Publication number: 20210368136
Abstract: An electronic device is described. In some embodiments, the electronic device includes instructions for: while presenting an extended reality environment, receiving, by the first electronic device, a request to present a virtual representation of a remote participant of a communication session, where the first electronic device is connected to the communication session; obtaining a capability of the remote participant of the communication session; and in response to receiving the request to present the virtual representation of the remote participant of the communication session, presenting the virtual representation of the remote participant of the communication session based on the obtained capability of the remote participant of the communication session.
Type: Application
Filed: August 9, 2021
Publication date: November 25, 2021
Inventors: Devin William Chalmers, Jae Hwang Lee, Rahul Nair, Ioana Negoita
-
Publication number: 20210349676
Abstract: In one implementation, a method of providing display device sharing and interactivity in simulated reality is performed at a first electronic device including one or more processors and a non-transitory memory. The method includes obtaining a gesture input to a first display device in communication with the first electronic device from a first user, where the first display device includes a first display. The method further includes transmitting a representation of the first display to a second electronic device in response to obtaining the gesture input. The method additionally includes receiving an input message directed to the first display device from the second electronic device, where the input message includes an input directive obtained by the second electronic device from a second user. The method also includes transmitting the input message to the first display device for execution by the first display device.
Type: Application
Filed: September 3, 2019
Publication date: November 11, 2021
Inventors: Bruno M. Sommer, Alexandre Da Veiga, Ioana Negoita