Patents by Inventor Ioana Negoita

Ioana Negoita has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Publication number: 20240302898
    Abstract: A method is performed at an electronic device with one or more processors, a non-transitory memory, and a display. The method includes displaying, on the display, a first user interface (UI) element that is associated with a first selection region and a second selection region. The method includes, while displaying the first UI element, determining, based on eye tracking data, that a targeting criterion is satisfied with respect to the first selection region or the second selection region. The eye tracking data is associated with one or more eyes of a user of the electronic device. The method includes, while displaying the first UI element, selecting the first UI element based at least in part on determining that the targeting criterion is satisfied.
    Type: Application
    Filed: April 30, 2024
    Publication date: September 12, 2024
    Inventors: Bryce L. Schmidtchen, Ioana Negoita, Anshu K. Chimalamarri, Gregory Lutter, Thomas J. Moore, Trevor J. McIntyre
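    A minimal sketch of this two-selection-region gaze targeting, assuming a simple dwell-based criterion; the Rect regions, dwell length, and function names below are illustrative assumptions, not taken from the patent:

```python
# Illustrative sketch: select a UI element when gaze dwells in either of its two
# selection regions. Names and thresholds are assumptions, not from the patent.
from dataclasses import dataclass

@dataclass
class Rect:
    x: float
    y: float
    w: float
    h: float

    def contains(self, px: float, py: float) -> bool:
        return self.x <= px <= self.x + self.w and self.y <= py <= self.y + self.h

def targeting_satisfied(gaze_samples, region: Rect, dwell_samples: int = 10) -> bool:
    """True if the most recent `dwell_samples` gaze points all fall inside `region`."""
    if len(gaze_samples) < dwell_samples:
        return False
    return all(region.contains(x, y) for x, y in gaze_samples[-dwell_samples:])

def select_ui_element(gaze_samples, first_region: Rect, second_region: Rect) -> bool:
    """Select the UI element when the targeting criterion holds for either region."""
    return (targeting_satisfied(gaze_samples, first_region)
            or targeting_satisfied(gaze_samples, second_region))

if __name__ == "__main__":
    element_regions = (Rect(0, 0, 40, 40), Rect(40, 0, 40, 40))
    gaze = [(10.0 + i * 0.1, 12.0) for i in range(15)]  # dwelling in the first region
    print(select_ui_element(gaze, *element_regions))  # True
```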
  • Patent number: 12086945
    Abstract: In one implementation, a method of performing late-stage shift is performed at a device including a display, one or more processors, and non-transitory memory. The method includes generating, based on a first predicted pose of the device for a display time period, a first image. The method includes generating a mask indicating a first region of the first image and a second region of the first image. The method includes generating a second image by shifting, based on a second predicted pose of the device for the display time period, the first region of the first image without shifting the second region of the first image. The method includes displaying, on the display at the display time period, the second image.
    Type: Grant
    Filed: October 30, 2023
    Date of Patent: September 10, 2024
    Assignee: APPLE INC.
    Inventors: Thomas G. Salter, Ganghun Kim, Ioana Negoita, Devin William Chalmers, Anshu Kameswar Chimalamarri, Thomas Justin Moore
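    A minimal sketch of the masked late-stage shift, assuming the difference between the two predicted poses has already been reduced to a pixel offset; the NumPy representation, zero fill for uncovered pixels, and function name are illustrative assumptions, not taken from the patent:

```python
# Illustrative sketch: shift only the masked region of the first image for the newer
# pose prediction; the unmasked region stays in place. Assumed, not from the patent.
import numpy as np

def late_stage_shift(first_image: np.ndarray, mask: np.ndarray, shift_px: tuple) -> np.ndarray:
    """Shift pixels where mask is True by (dy, dx); leave the rest of the image untouched."""
    dy, dx = shift_px
    second_image = first_image.copy()
    rolled = np.roll(np.where(mask, first_image, 0.0), (dy, dx), (0, 1))
    rolled_mask = np.roll(mask, (dy, dx), (0, 1))
    second_image[mask & ~rolled_mask] = 0.0          # holes left behind by the shifted region
    second_image[rolled_mask] = rolled[rolled_mask]  # first region at its shifted location
    return second_image

if __name__ == "__main__":
    img = np.arange(25, dtype=float).reshape(5, 5)
    m = np.zeros((5, 5), dtype=bool)
    m[1:3, 1:3] = True                               # first region: re-projected for the newer pose
    print(late_stage_shift(img, m, shift_px=(0, 1))) # block moves one column right; rest unchanged
```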
  • Publication number: 20240257430
    Abstract: A method includes: presenting a posture summary interface including: a representation of the user, a visualization of a current accumulated strain value for the user, and a first affordance for initiating an animated posture summary associated with the accumulated strain value for the user over a respective time window; and in response to detecting a user input directed to the first affordance within the posture summary interface, presenting an animation of the representation of the user over the respective time window that corresponds to one or more instances in which head pose information changes associated with the user caused an increase or a decrease to the accumulated strain value greater than a significance threshold, wherein an appearance of the visualization of the current accumulated strain value for the user changes to represent the accumulated strain value for the user over the respective time window.
    Type: Application
    Filed: April 10, 2024
    Publication date: August 1, 2024
    Inventors: Matthew S. DeMers, Adeeti V. Ullal, Alexander G. Bruno, Daniel M. Trietsch, Ioana Negoita, James J. Dunne, Thomas G. Salter, Thomas J. Moore
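    A minimal sketch of accumulating a strain value from head-pose samples and collecting the significant instances a posture summary animation would replay; the pitch-based strain model and significance threshold are illustrative assumptions, not taken from the patent:

```python
# Illustrative sketch: accumulate strain from head-pose changes and record instances
# whose contribution exceeds a significance threshold. Assumed, not from the patent.
from dataclasses import dataclass

@dataclass
class HeadPoseSample:
    timestamp: float
    pitch_deg: float        # forward head tilt

def strain_delta(sample: HeadPoseSample, neutral_pitch_deg: float = 10.0) -> float:
    """Positive when the head tilts past neutral (strain builds), negative otherwise."""
    return 0.05 * (abs(sample.pitch_deg) - neutral_pitch_deg)

def summarize_posture(samples, significance_threshold: float = 0.5):
    accumulated = 0.0
    significant_instances = []
    for sample in samples:
        delta = strain_delta(sample)
        accumulated = max(0.0, accumulated + delta)
        if abs(delta) > significance_threshold:
            significant_instances.append((sample.timestamp, delta, accumulated))
    return accumulated, significant_instances

if __name__ == "__main__":
    samples = [HeadPoseSample(t, pitch) for t, pitch in [(0, 5), (1, 45), (2, 50), (3, 5)]]
    total, instances = summarize_posture(samples)
    print(round(total, 2), instances)   # 3.5 plus the two significant instances
```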
  • Patent number: 12008160
    Abstract: A method is performed at an electronic device with one or more processors, a non-transitory memory, an eye tracker, and a display. The eye tracker receives eye tracking data associated with one or more eyes of a user of the electronic device. The method includes displaying, on the display, a first user interface (UI) element that is associated with a first selection region and a second selection region. While displaying the first UI element, the method includes determining, based on the eye tracking data, that a first targeting criterion is satisfied with respect to the first selection region, and determining, based on the eye tracking data, that a second targeting criterion is satisfied with respect to the second selection region. The method includes selecting the first UI element based at least in part on determining that the first targeting criterion is satisfied and the second targeting criterion is satisfied.
    Type: Grant
    Filed: March 6, 2023
    Date of Patent: June 11, 2024
    Assignee: APPLE INC.
    Inventors: Bryce L. Schmidtchen, Ioana Negoita, Anshu K. Chimalamarri, Gregory Lutter, Thomas J. Moore, Trevor J. McIntyre
  • Patent number: 12002140
    Abstract: A method includes: presenting a posture summary interface including: a representation of the user, a visualization of a current accumulated strain value for the user, and a first affordance for initiating an animated posture summary associated with the accumulated strain value for the user over a respective time window; and in response to detecting a user input directed to the first affordance within the posture summary interface, presenting an animation of the representation of the user over the respective time window that corresponds to one or more instances in which head pose information changes associated with the user caused an increase or a decrease to the accumulated strain value greater than a significance threshold, wherein an appearance of the visualization of the current accumulated strain value for the user changes to represent the accumulated strain value for the user over the respective time window.
    Type: Grant
    Filed: May 22, 2023
    Date of Patent: June 4, 2024
    Assignee: APPLE INC.
    Inventors: Matthew S. DeMers, Adeeti V. Ullal, Alexander G. Bruno, Daniel M. Trietsch, Ioana Negoita, James J. Dunne, Thomas G. Salter, Thomas J. Moore
  • Publication number: 20240089695
    Abstract: A method includes obtaining an image of a machine-readable data representation that is located on a physical object using a camera of an electronic device. The machine-readable data representation includes an encoded form of a data value. The method further includes decoding the machine-readable data representation to determine the data value, whereby the data value includes a content identifier and a content source identifier. The method also includes selecting a content source based on the content source identifier, obtaining a content item and content location information based on the content identifier from the content source, determining a content position and a content orientation for the content item relative to the physical object based on the content location information, and displaying a representation of the content item using the electronic device according to the content position and the content orientation.
    Type: Application
    Filed: November 16, 2023
    Publication date: March 14, 2024
    Inventors: David W. Padgett, Christopher D. Fu, Scott G. Wade, Paul Ewers, Ioana Negoita, Thomas G. Salter, Dhruv Aditya Govil, Dimitris Ladopoulos
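    A minimal sketch of splitting a decoded data value into a content source identifier and a content identifier, then resolving the content item and its placement relative to the physical object; the "source:content" payload format, registry, and offset math are illustrative assumptions, not taken from the patent:

```python
# Illustrative sketch: decode value -> (source id, content id) -> content item and its
# position/orientation relative to the scanned object. Assumed, not from the patent.

CONTENT_SOURCES = {
    "museum-cms": {
        "exhibit-42": {"asset": "dinosaur.usdz", "offset_m": (0.0, 0.3, 0.0), "yaw_deg": 90.0},
    },
}

def parse_data_value(data_value: str) -> tuple:
    source_id, content_id = data_value.split(":", 1)
    return source_id, content_id

def resolve_content(data_value: str, object_position_m: tuple):
    source_id, content_id = parse_data_value(data_value)
    record = CONTENT_SOURCES[source_id][content_id]     # content source selected by source id
    ox, oy, oz = object_position_m
    dx, dy, dz = record["offset_m"]
    content_position = (ox + dx, oy + dy, oz + dz)       # position relative to the physical object
    return record["asset"], content_position, record["yaw_deg"]

if __name__ == "__main__":
    print(resolve_content("museum-cms:exhibit-42", object_position_m=(1.0, 0.0, -2.0)))
```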
  • Publication number: 20240062485
    Abstract: In one implementation, a method of performing late-stage shift is performed at a device including a display, one or more processors, and non-transitory memory. The method includes generating, based on a first predicted pose of the device for a display time period, a first image. The method includes generating a mask indicating a first region of the first image and a second region of the first image. The method includes generating a second image by shifting, based on a second predicted pose of the device for the display time period, the first region of the first image without shifting the second region of the first image. The method includes displaying, on the display at the display time period, the second image.
    Type: Application
    Filed: October 30, 2023
    Publication date: February 22, 2024
    Inventors: Thomas G. Salter, Ganghun Kim, Ioana Negoita, Devin William Chalmers, Anshu Kameswar Chimalamarri, Thomas Justin Moore
  • Publication number: 20240023830
    Abstract: In one implementation, a method is performed for tiered posture awareness. The method includes: while presenting a three-dimensional (3D) environment, via the display device, obtaining head pose information for a user associated with the computing system; determining an accumulated strain value for the user based on the head pose information; and in accordance with a determination that the accumulated strain value for the user exceeds a first posture awareness threshold: determining a location for virtual content based on a height value associated with the user and a depth value associated with the 3D environment; and presenting, via the display device, the virtual content at the determined location while continuing to present the 3D environment via the display device.
    Type: Application
    Filed: May 22, 2023
    Publication date: January 25, 2024
    Inventors: Thomas G. Salter, Adeeti V. Ullal, Alexander G. Bruno, Daniel M. Trietsch, Edith M. Arnold, Edwin Iskandar, Ioana Negoita, James J. Dunne, Johahn Y. Leung, Karthik Jayaraman Raghuram, Matthew S. DeMers, Thomas J. Moore
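    A minimal sketch of the tiered check: once accumulated strain crosses the first threshold, the awareness content is placed at a height tied to the user and a depth tied to the environment; the threshold value and placement rule are illustrative assumptions, not taken from the patent:

```python
# Illustrative sketch: place posture-awareness content only above a strain threshold,
# using a user height value and a scene depth value. Assumed, not from the patent.

def posture_content_location(accumulated_strain: float,
                             user_eye_height_m: float,
                             scene_depth_m: float,
                             first_threshold: float = 1.0):
    """Return an (x, y, z) placement for the awareness content, or None below the threshold."""
    if accumulated_strain <= first_threshold:
        return None
    depth = min(scene_depth_m, 2.0)              # keep the content within the usable depth
    return (0.0, user_eye_height_m, -depth)      # centered, at eye height, `depth` meters ahead

if __name__ == "__main__":
    print(posture_content_location(1.4, user_eye_height_m=1.6, scene_depth_m=3.5))  # (0.0, 1.6, -2.0)
    print(posture_content_location(0.4, user_eye_height_m=1.6, scene_depth_m=3.5))  # None
```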
  • Publication number: 20230394755
    Abstract: A method includes presenting a representation of a three-dimensional (3D) environment from a current point-of-view. The method includes identifying a region of interest within the 3D environment. The region of interest is located at a first distance from the current point-of-view. The method includes receiving, via the audio sensor, an audible signal and converting the audible signal to audible signal data. The method includes displaying, on the display, a visual representation of the audible signal data at a second distance from the current point-of-view that is a function of the first distance between the region of interest and the current point-of-view.
    Type: Application
    Filed: May 31, 2023
    Publication date: December 7, 2023
    Inventors: Ioana Negoita, Alesha Unpingco, Bryce L. Schmidtchen, Devin W. Chalmers, Lee Sparks, Thomas J. Moore
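    A minimal sketch of placing the audio visualization at a second distance derived from the distance to the region of interest; the specific mapping (just in front of the region, clamped to a comfortable range) is an illustrative assumption, not taken from the patent:

```python
# Illustrative sketch: the visualization's distance is a function of the distance to
# the region of interest. Mapping and constants are assumed, not from the patent.

def visualization_distance(roi_distance_m: float,
                           min_m: float = 0.5,
                           max_m: float = 5.0,
                           margin_m: float = 0.2) -> float:
    """Second distance as a function of the first: just in front of the region of interest."""
    return max(min_m, min(max_m, roi_distance_m - margin_m))

def place_visualization(view_dir, roi_distance_m: float):
    """Position the visualization along the view direction from the current point-of-view."""
    d = visualization_distance(roi_distance_m)
    return tuple(c * d for c in view_dir)

if __name__ == "__main__":
    print(place_visualization(view_dir=(0.0, 0.0, -1.0), roi_distance_m=2.0))  # (0.0, 0.0, -1.8)
```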
  • Patent number: 11836872
    Abstract: In one implementation, a method of performing late-stage shift is performed at a device including a display, one or more processors, and non-transitory memory. The method includes generating, based on a first predicted pose of the device for a display time period, a first image. The method includes generating a mask indicating a first region of the first image and a second region of the first image. The method includes generating a second image by shifting, based on a second predicted pose of the device for the display time period, the first region of the first image without shifting the second region of the first image. The method includes displaying, on the display at the display time period, the second image.
    Type: Grant
    Filed: February 1, 2022
    Date of Patent: December 5, 2023
    Assignee: APPLE INC.
    Inventors: Thomas G. Salter, Ganghun Kim, Ioana Negoita, Devin William Chalmers, Anshu Kameswar Chimalamarri, Thomas Justin Moore
  • Publication number: 20230377480
    Abstract: In some implementations, a method includes: while presenting a 3D environment, obtaining user profile and head pose information for a user; determining locations for visual cues within the 3D environment for a first portion of a guided stretching session based on the user profile and head pose information; presenting the visual cues at the determined locations within the 3D environment and a directional indicator; and in response to detecting a change to the head pose information: updating a location for the directional indicator based on the change to the head pose information; and in accordance with a determination that the change to the head pose information satisfies a criterion associated with a first visual cue among the visual cues, providing at least one of audio, haptic, or visual feedback indicating that the first visual cue has been completed for the first portion of the guided stretching session.
    Type: Application
    Filed: May 22, 2023
    Publication date: November 23, 2023
    Inventors: James J. Dunne, Adeeti V. Ullal, Alexander G. Bruno, Daniel M. Trietsch, Ioana Negoita, Irida Mance, Matthew S. DeMers, Thomas G. Salter
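    A minimal sketch of checking whether an updated head pose satisfies the criterion associated with a visual cue; the angular-tolerance test and the feedback stand-in are illustrative assumptions, not taken from the patent:

```python
# Illustrative sketch: a stretching cue is completed when the head pose comes within a
# tolerance of the cue's target orientation. Assumed, not from the patent.
from dataclasses import dataclass

@dataclass
class VisualCue:
    target_yaw_deg: float
    target_pitch_deg: float
    tolerance_deg: float = 5.0

def cue_completed(cue: VisualCue, head_yaw_deg: float, head_pitch_deg: float) -> bool:
    return (abs(head_yaw_deg - cue.target_yaw_deg) <= cue.tolerance_deg
            and abs(head_pitch_deg - cue.target_pitch_deg) <= cue.tolerance_deg)

def on_head_pose_change(cue: VisualCue, head_yaw_deg: float, head_pitch_deg: float) -> None:
    if cue_completed(cue, head_yaw_deg, head_pitch_deg):
        print("cue completed: play audio/haptic/visual feedback")  # stand-in for device feedback

if __name__ == "__main__":
    on_head_pose_change(VisualCue(target_yaw_deg=45.0, target_pitch_deg=0.0), 43.0, 1.5)
```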
  • Patent number: 11825375
    Abstract: A method includes determining a device location of an electronic device, transmitting a request to a content source, the request including the device location of the electronic device, and receiving, from the content source in response to the request, a content item that is associated with display location information that describes a content position for the content item relative to a physical environment. The content item is selected by the content source based on the content position for the content item being within an area that is defined based on the device location. The method also includes displaying a representation of the content item as part of a computer-generated reality scene in which the representation of the content item is positioned relative to the physical environment according to the content position for the content item from the display location information for the content item.
    Type: Grant
    Filed: November 17, 2022
    Date of Patent: November 21, 2023
    Assignee: APPLE INC.
    Inventors: David W. Padgett, Christopher D. Fu, Scott G. Wade, Paul Ewers, Ioana Negoita, Thomas G. Salter, Dhruv Aditya Govil, Dimitris Ladopoulos
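    A minimal sketch of the content-source side of this flow: items are selected when their content position falls within an area defined around the reported device location; the radius, item schema, and planar distance model are illustrative assumptions, not taken from the patent:

```python
# Illustrative sketch: the content source filters items by distance from the device
# location sent in the request. Schema and radius are assumed, not from the patent.
import math

CONTENT_ITEMS = [
    {"id": "mural", "position_m": (12.0, 0.0, -3.0)},
    {"id": "fountain-info", "position_m": (250.0, 0.0, 40.0)},
]

def items_near(device_position_m, radius_m: float = 50.0):
    dx0, _, dz0 = device_position_m
    selected = []
    for item in CONTENT_ITEMS:
        x, _, z = item["position_m"]
        if math.hypot(x - dx0, z - dz0) <= radius_m:
            selected.append(item)
    return selected

if __name__ == "__main__":
    print(items_near((10.0, 0.0, 0.0)))  # only the nearby "mural" item is returned
```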
  • Patent number: 11800059
    Abstract: An electronic device is described. In some embodiments, the electronic device includes instructions for: while presenting an extended reality environment, receiving, by the first electronic device, a request to present a virtual representation of a remote participant of a communication session, where the first electronic device is connected to the communication session; obtaining a capability of the remote participant of the communication session; and in response to receiving the request to present the virtual representation of the remote participant of the communication session, presenting the virtual representation of the remote participant of the communication session based on the obtained capability of the remote participant of the communication session.
    Type: Grant
    Filed: August 9, 2021
    Date of Patent: October 24, 2023
    Assignee: Apple Inc.
    Inventors: Devin W. Chalmers, Jae Hwang Lee, Rahul Nair, Ioana Negoita
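    A minimal sketch of choosing how to present the remote participant from an obtained capability; the capability names and representation tiers are illustrative assumptions, not taken from the patent:

```python
# Illustrative sketch: map a remote participant's capability to the virtual
# representation to present. Capability names are assumed, not from the patent.

def representation_for(capability: str) -> str:
    tiers = {
        "spatial-avatar": "animated 3D avatar placed in the extended reality environment",
        "video": "video tile anchored in the environment",
        "audio-only": "static persona with an audio activity indicator",
    }
    return tiers.get(capability, "static persona with an audio activity indicator")

if __name__ == "__main__":
    for cap in ("spatial-avatar", "audio-only", "unknown"):
        print(cap, "->", representation_for(cap))
```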
  • Publication number: 20230333643
    Abstract: A method is performed at an electronic device with one or more processors, a non-transitory memory, an eye tracker, and a display. The eye tracker receives eye tracking data associated with one or more eyes of a user of the electronic device. The method includes displaying, on the display, a first user interface (UI) element that is associated with a first selection region and a second selection region. While displaying the first UI element, the method includes determining, based on the eye tracking data, that a first targeting criterion is satisfied with respect to the first selection region, and determining, based on the eye tracking data, that a second targeting criterion is satisfied with respect to the second selection region. The method includes selecting the first UI element based at least in part on determining that the first targeting criterion is satisfied and the second targeting criterion is satisfied.
    Type: Application
    Filed: March 6, 2023
    Publication date: October 19, 2023
    Inventors: Bryce L. Schmidtchen, Ioana Negoita, Anshu K. Chimalamarri, Gregory Lutter, Thomas J. Moore, Trevor J. McIntyre
  • Patent number: 11733956
    Abstract: In one implementation, a method of providing display device sharing and interactivity in simulated reality is performed at a first electronic device including one or more processors and a non-transitory memory. The method includes obtaining a gesture input to a first display device in communication with the first electronic device from a first user, where the first display device includes a first display. The method further includes transmitting a representation of the first display to a second electronic device in response to obtaining the gesture input. The method additionally includes receiving an input message directed to the first display device from the second electronic device, where the input message includes an input directive obtained by the second electronic device from a second user. The method also includes transmitting the input message to the first display device for execution by the first display device.
    Type: Grant
    Filed: September 3, 2019
    Date of Patent: August 22, 2023
    Assignee: APPLE INC.
    Inventors: Bruno M. Sommer, Alexandre Da Veiga, Ioana Negoita
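    A minimal sketch of the sharing flow: a gesture triggers sending the first display's representation to the second device, and input directives received back from the second device are forwarded to the first display device for execution; the class, method names, and stubs are illustrative assumptions, not taken from the patent:

```python
# Illustrative sketch: relay a shared display's representation out and relay remote
# input directives back for execution. Names and stubs are assumed, not from the patent.

class SharedDisplaySession:
    def __init__(self, display_device, remote_device):
        self.display_device = display_device    # the shared first display device
        self.remote_device = remote_device      # the second electronic device

    def on_gesture_input(self, frame_bytes: bytes) -> None:
        """A share gesture was detected: send the first display's representation out."""
        self.remote_device.receive_frame(frame_bytes)

    def on_input_message(self, input_directive: dict) -> None:
        """An input directive arrived from the second user: forward it for execution."""
        self.display_device.execute(input_directive)

class _Stub:
    def receive_frame(self, frame): print("remote received", len(frame), "bytes")
    def execute(self, directive): print("executing", directive)

if __name__ == "__main__":
    session = SharedDisplaySession(_Stub(), _Stub())
    session.on_gesture_input(b"\x89PNG...")
    session.on_input_message({"type": "tap", "x": 120, "y": 48})
```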
  • Publication number: 20230252736
    Abstract: In one implementation, a method is provided for surfacing an XR object corresponding to an electronic message. The method includes: obtaining an electronic message from a sender; in response to determining that the electronic message is associated with a real-world object, determining whether a current field-of-view (FOV) of a physical environment includes the real-world object; and in accordance with a determination that the current FOV of the physical environment includes the real-world object, presenting, via the display device, an extended reality (XR) object that corresponds to the electronic message in association with the real-world object.
    Type: Application
    Filed: January 25, 2023
    Publication date: August 10, 2023
    Inventors: Elizabeth V. Petrov, Ioana Negoita
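    A minimal sketch of gating the XR object on whether the associated real-world object is in the current field of view; the message schema and FOV test are illustrative assumptions, not taken from the patent:

```python
# Illustrative sketch: present an XR object for a message only when the associated
# real-world object is in the current FOV. Schema is assumed, not from the patent.

def maybe_present_xr_object(message: dict, objects_in_fov: set):
    """Return the XR object to present, or None if the associated object is not visible."""
    associated = message.get("associated_object")
    if associated is None or associated not in objects_in_fov:
        return None
    return {"anchor": associated, "content": message["body"], "sender": message["sender"]}

if __name__ == "__main__":
    msg = {"sender": "Alice", "body": "Water me!", "associated_object": "houseplant"}
    print(maybe_present_xr_object(msg, {"houseplant", "sofa"}))   # presented near the plant
    print(maybe_present_xr_object(msg, {"sofa"}))                 # None: plant not in FOV
```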
  • Publication number: 20230074109
    Abstract: A method includes determining a device location of an electronic device, transmitting a request to a content source, the request including the device location of the electronic device, and receiving, from the content source in response to the request, a content item that is associated with display location information that describes a content position for the content item relative to a physical environment. The content item is selected by the content source based on the content position for the content item being within an area that is defined based on the device location. The method also includes displaying a representation of the content item as part of a computer-generated reality scene in which the representation of the content item is positioned relative to the physical environment according to the content position for the content item from the display location information for the content item.
    Type: Application
    Filed: November 17, 2022
    Publication date: March 9, 2023
    Inventors: David W. Padgett, Christopher D. Fu, Scott G. Wade, Paul Ewers, Ioana Negoita, Thomas G. Salter, Dhruv Aditya Govil, Dimitris Ladopoulos
  • Patent number: 11533580
    Abstract: A method includes determining a device location of an electronic device, and obtaining a content item to be output for display by the electronic device based on the device location, wherein the content item comprises coarse content location information and fine content location information. The method also includes determining an anchor in a physical environment based on the content item, determining a content position and a content orientation for the content item relative to the anchor based on the fine content location information, and displaying a representation of the content item using the electronic device using the content position and the content orientation.
    Type: Grant
    Filed: April 29, 2020
    Date of Patent: December 20, 2022
    Assignee: APPLE INC.
    Inventors: David W. Padgett, Christopher D. Fu, Scott G. Wade, Paul Ewers, Ioana Negoita, Thomas G. Salter, Dhruv Aditya Govil, Dimitris Ladopoulos
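    A minimal sketch of the two-stage placement: coarse location information selects the anchor in the physical environment, and fine location information positions the content relative to that anchor; the item schema and anchor registry are illustrative assumptions, not taken from the patent:

```python
# Illustrative sketch: coarse info -> anchor, fine info -> offset and orientation
# relative to that anchor. Schema and registry are assumed, not from the patent.

ANCHORS_M = {"store-entrance": (4.0, 0.0, -1.5)}   # anchors detected in the physical environment

def place_content(item: dict):
    anchor_name = item["coarse"]["anchor"]
    ax, ay, az = ANCHORS_M[anchor_name]
    dx, dy, dz = item["fine"]["offset_m"]
    position = (ax + dx, ay + dy, az + dz)
    return position, item["fine"]["yaw_deg"]

if __name__ == "__main__":
    item = {"coarse": {"anchor": "store-entrance"},
            "fine": {"offset_m": (0.0, 1.2, 0.3), "yaw_deg": 180.0}}
    print(place_content(item))  # ((4.0, 1.2, -1.2), 180.0)
```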
  • Publication number: 20210368136
    Abstract: An electronic device is described. In some embodiments, the electronic device includes instructions for: while presenting an extended reality environment, receiving, by the first electronic device, a request to present a virtual representation of a remote participant of a communication session, where the first electronic device is connected to the communication session; obtaining a capability of the remote participant of the communication session; and in response to receiving the request to present the virtual representation of the remote participant of the communication session, presenting the virtual representation of the remote participant of the communication session based on the obtained capability of the remote participant of the communication session.
    Type: Application
    Filed: August 9, 2021
    Publication date: November 25, 2021
    Inventors: Devin William Chalmers, Jae Hwang Lee, Rahul Nair, Ioana Negoita
  • Publication number: 20210349676
    Abstract: In one implementation, a method of providing display device sharing and interactivity in simulated reality is performed at a first electronic device including one or more processors and a non-transitory memory. The method includes obtaining a gesture input to a first display device in communication with the first electronic device from a first user, where the first display device includes a first display. The method further includes transmitting a representation of the first display to a second electronic device in response to obtaining the gesture input. The method additionally includes receiving an input message directed to the first display device from the second electronic device, where the input message includes an input directive obtained by the second electronic device from a second user. The method also includes transmitting the input message to the first display device for execution by the first display device.
    Type: Application
    Filed: September 3, 2019
    Publication date: November 11, 2021
    Inventors: Bruno M. Sommer, Alexandre Da Veiga, Ioana Negoita