Patents by Inventor Ioana Negoita

Ioana Negoita has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Publication number: 20250111862
    Abstract: In some examples, an electronic device presents, via a display, a representation of a prediction of a food being consumed by a user of the electronic device in a computer-generated environment. In some examples, the electronic device presents, via the display, an indication of possible medication non-compliance in the computer-generated environment. In some examples, the electronic device initiates a smoking detection mode in response to acquiring and processing data from the user of the electronic device or from the user's physical environment.
    Type: Application
    Filed: August 13, 2024
    Publication date: April 3, 2025
    Inventors: Ioana NEGOITA, Brian W. TEMPLE, Ian PERRY, David LOEWENTHAL, Trent A. GREENE
  • Publication number: 20250111472
    Abstract: Some examples of the disclosure are directed to systems and methods for changing a level of zoom of displayed content. In some examples, an electronic device displays visual content. In some examples, in response to detecting a change in position and/or orientation of the user of the electronic device, in accordance with a determination that the position and/or orientation of the user satisfies one or more criteria, the electronic device increases the size of the content and/or zooms in on the content.
    Type: Application
    Filed: September 27, 2024
    Publication date: April 3, 2025
    Inventors: Gregory LUTTER, Ioana NEGOITA, Thomas G. SALTER
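The lean-to-zoom behavior in publication 20250111472 might look something like the sketch below. This is a hypothetical illustration, not the claimed method: the distance threshold, linear ramp, and zoom cap are all invented values.

```python
def zoom_for_lean(baseline_distance_m, current_distance_m,
                  threshold_m=0.05, max_zoom=3.0):
    """Return a zoom factor >= 1.0 once the user leans closer than threshold.

    baseline_distance_m: user-to-content distance when content was displayed.
    current_distance_m: user-to-content distance after the detected movement.
    """
    lean = baseline_distance_m - current_distance_m
    if lean < threshold_m:
        return 1.0  # criteria not satisfied: leave content size unchanged
    # Linear ramp: each 10 cm of lean adds 1x zoom, capped at max_zoom.
    return min(max_zoom, 1.0 + lean / 0.10)
```

A real implementation would presumably derive the distance change from head pose rather than a scalar, and smooth the zoom over time.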
  • Publication number: 20250103695
    Abstract: This relates generally to systems and methods for tracking and recording pupil dilation and/or signs of unconsciousness in response to detecting specific movements of the electronic device. In some examples, the electronic device captures and tracks first biometric data including pupil dilation using one or more input devices. In some examples, in response to detecting a movement of the electronic device, such as a rapid acceleration or deceleration of the electronic device, the electronic device captures second biometric data. In some examples, the electronic device displays a virtual object while presenting an extended reality environment, such as a visual indication in response to detecting that the second biometric data meets one or more criteria based on a comparison of the second biometric data with the first biometric data. In some examples, the electronic device initiates an emergency response based on the second biometric data.
    Type: Application
    Filed: September 19, 2024
    Publication date: March 27, 2025
    Inventors: Ioana NEGOITA, Ian PERRY, Timothy PSIAKI, David LOEWENTHAL, Trent A. GREENE, Brian W. TEMPLE
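The baseline-versus-follow-up comparison in publication 20250103695 could be sketched as below. This is purely illustrative: the summary statistic, units, and deviation threshold are invented, not taken from the filing.

```python
from statistics import mean

def baseline_stats(samples_mm):
    """Summarize first biometric data (baseline pupil diameters, in mm)."""
    return mean(samples_mm)

def exceeds_criteria(baseline_mm, second_samples_mm, max_deviation_mm=1.5):
    """True when second biometric data deviates too far from the baseline."""
    return abs(mean(second_samples_mm) - baseline_mm) > max_deviation_mm

# Baseline captured during normal use; second capture after a detected
# rapid acceleration/deceleration of the device.
baseline = baseline_stats([3.0, 3.2, 3.1])
alert = exceeds_criteria(baseline, [5.2, 5.4, 5.1])
```

When `alert` is set, the abstract's visual indication (and, per its last sentence, possibly an emergency response) would be triggered.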
  • Publication number: 20250104368
    Abstract: A method is performed at an electronic device with one or more processors, a non-transitory memory, and a display. The method includes obtaining a first semantic label value associated with a first object. The method includes determining, based on the first semantic label value, a first display priority value associated with the first object. The method includes prioritizing the first object over a second object based on the first display priority value. The method includes, in response to determining that the first object satisfies an offscreen criterion, displaying, on the display, an offscreen indicator that is associated with the first object according to the prioritization.
    Type: Application
    Filed: December 6, 2024
    Publication date: March 27, 2025
    Inventors: Thomas G. Salter, Gregory Patrick Lane Lutter, Rahul Nair, Devin William Chalmers, Ioana Negoita
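The semantic-label-to-priority scheme in publication 20250104368 (and the related offscreen-indicator method in patent 12198277 below) might be sketched like this. The labels, priority values, and selection rule are invented for illustration; the filing does not specify them.

```python
# Map a semantic label to a display priority, then decide which offscreen
# object should get an offscreen indicator.
PRIORITY_BY_LABEL = {"exit-sign": 1.0, "person": 0.8, "mug": 0.2}

def display_priority(semantic_label):
    """Determine a display priority value from a semantic label value."""
    return PRIORITY_BY_LABEL.get(semantic_label, 0.0)

def pick_offscreen_indicator(objects):
    """objects: list of (label, onscreen). Return the label to indicate, or None."""
    offscreen = [(display_priority(lbl), lbl)
                 for lbl, onscreen in objects if not onscreen]
    return max(offscreen)[1] if offscreen else None

choice = pick_offscreen_indicator([("mug", False), ("person", False),
                                   ("exit-sign", True)])
```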
  • Publication number: 20250099814
    Abstract: Some examples of the disclosure are directed to systems and methods for presenting extended reality environments and, more particularly, to displaying one or more images relating to exercises in a physical environment while presenting an extended reality environment. In some situations, the electronic device detects an initiation of an exercise activity of a user of the electronic device using at least an optical sensor. In some examples, the electronic device presents a user interface including a representation of the identified exercise activity in the extended reality environment. In some examples, in response to detecting progression of the identified exercise activity, the user interface is updated with the updated representation of the exercise activity. In some examples, the electronic device presents a rest user interface during rest periods and/or after detecting rest.
    Type: Application
    Filed: September 23, 2024
    Publication date: March 27, 2025
    Inventors: Thomas G. SALTER, Christopher I. WORD, Jeffrey S. NORRIS, Ioana NEGOITA, Trent A. GREENE, Finnegan N. SINCLAIR, Brian W. TEMPLE, Ian PERRY, Michael J. ROCKWELL
  • Publication number: 20250090934
    Abstract: Some examples of the disclosure are directed to systems and methods for displaying one or more user interfaces based on a context of an electronic device within a physical environment. In some examples, the electronic device detects initiation of an exercise activity associated with a user of the electronic device, optionally while a computer-generated environment is presented at the electronic device. In some examples, in response to detecting the initiation of the exercise activity, the electronic device activates an exercise tracking mode of operation. In some examples, while the exercise tracking mode of operation is active, the electronic device captures one or more images of a physical environment. In some examples, in accordance with detecting, in the one or more images, a feature of the physical environment, the electronic device performs a first operation associated with the exercise tracking mode of operation.
    Type: Application
    Filed: September 3, 2024
    Publication date: March 20, 2025
    Inventors: Ioana NEGOITA, Ian PERRY, Trent A. GREENE, Brian W. TEMPLE, David LOEWENTHAL, Thomas J. MOORE, Thomas G. SALTER
  • Publication number: 20250094016
    Abstract: Some examples of the disclosure are directed to systems and methods for moving virtual objects in three-dimensional environments in accordance with detected movement of the electronic device. In some examples, the electronic device detects movement according to a first or second movement pattern described in more detail herein. In some examples, in response to detecting the first movement pattern, the electronic device applies a first correction factor to movement of a virtual object in the environment. In some examples, in response to detecting the second movement pattern, the electronic device applies a second correction factor to movement of the virtual object in the environment.
    Type: Application
    Filed: September 4, 2024
    Publication date: March 20, 2025
    Inventors: Ioana NEGOITA, Ian PERRY, Trent A. GREENE, Thomas J. MOORE, David LOEWENTHAL, Brian W. TEMPLE, Gregory LUTTER, Allison W. DRYER, Thomas G. SALTER
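The pattern-dependent correction in publication 20250094016 could look roughly like this sketch. The classifier threshold, pattern names, and correction factors are invented; the filing only says two patterns map to two factors.

```python
def classify_pattern(peak_accel_ms2):
    """Crude classifier: gentle device motion vs. abrupt device motion."""
    return "first" if peak_accel_ms2 < 2.0 else "second"

# Dampen object motion more strongly for abrupt device movement.
CORRECTION = {"first": 0.9, "second": 0.5}

def corrected_object_delta(device_delta_m, peak_accel_ms2):
    """Apply the correction factor for the detected pattern to object motion."""
    pattern = classify_pattern(peak_accel_ms2)
    return device_delta_m * CORRECTION[pattern]
```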
  • Patent number: 12213019
    Abstract: A method includes obtaining an image of a machine-readable data representation that is located on a physical object using a camera of an electronic device. The machine-readable data representation includes an encoded form of a data value. The method further includes decoding the machine-readable data representation to determine the data value, whereby the data value includes a content identifier and a content source identifier. The method also includes selecting a content source based on the content source identifier, obtaining a content item and content location information based on the content identifier from the content source, determining a content position and a content orientation for the content item relative to the physical object based on the content location information, and displaying a representation of the content item using the electronic device according to the content position and the content orientation.
    Type: Grant
    Filed: November 16, 2023
    Date of Patent: January 28, 2025
    Assignee: APPLE INC.
    Inventors: David W. Padgett, Christopher D. Fu, Scott G. Wade, Paul Ewers, Ioana Negoita, Thomas G. Salter, Dhruv Aditya Govil, Dimitris Ladopoulos
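The decode-and-resolve flow in patent 12213019 might be sketched as below. The `source:content` encoding and the source table are invented for illustration; the patent does not disclose a concrete wire format.

```python
# Hypothetical table mapping content source identifiers to content sources.
CONTENT_SOURCES = {
    "s1": "https://example.com/catalog",
    "s2": "https://example.org/media",
}

def parse_data_value(data_value):
    """Split a decoded data value into (content source id, content id)."""
    source_id, content_id = data_value.split(":", 1)
    return source_id, content_id

def select_content_source(source_id):
    """Select a content source based on the content source identifier."""
    return CONTENT_SOURCES[source_id]
```

After resolving the source, the device would fetch the content item plus its location information, then position and orient the item relative to the physical object bearing the code.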
  • Patent number: 12198277
    Abstract: A method is performed at an electronic device with one or more processors, a non-transitory memory, and a display. The method includes presenting, on the display, a plurality of objects including a first object and a second object. The method includes obtaining a first display priority value that is associated with the first object. The method includes prioritizing the first object over the second object based on a function of the first display priority value. The method includes, in response to determining that each of the first object and the second object satisfies an offscreen criterion, displaying, on the display, a first offscreen indicator that is associated with the first object according to the prioritization.
    Type: Grant
    Filed: December 20, 2021
    Date of Patent: January 14, 2025
    Assignee: APPLE INC.
    Inventors: Thomas G. Salter, Gregory Patrick Lane Lutter, Rahul Nair, Devin William Chalmers, Ioana Negoita
  • Publication number: 20240412516
    Abstract: In one implementation, a method of tracking contexts is performed at a device including an image sensor, one or more processors, and non-transitory memory. The method includes capturing, using the image sensor, an image of an environment at a particular time. The method includes detecting a context based at least in part on the image of the environment. The method includes, in accordance with a determination that the context is included within a predefined set of contexts, storing, in a database, an entry including data indicating detection of the context in association with data indicating the particular time. The method includes receiving a query regarding the context. The method includes providing a response to the query based on the data indicating the particular time.
    Type: Application
    Filed: September 16, 2022
    Publication date: December 12, 2024
    Inventors: Elizabeth V. Petrov, Devin W. Chalmers, Ioana Negoita
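The detect-store-query loop in publication 20240412516 could be sketched with a small database, as below. The context names, schema, and query shape are invented for illustration.

```python
import sqlite3

# Hypothetical predefined set of contexts worth remembering.
TRACKED_CONTEXTS = {"took_medication", "locked_front_door"}

db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE detections (context TEXT, ts REAL)")

def record_detection(context, ts):
    """Store an entry only when the context is in the predefined set."""
    if context in TRACKED_CONTEXTS:
        db.execute("INSERT INTO detections VALUES (?, ?)", (context, ts))

def last_seen(context):
    """Answer a query such as 'when did I last lock the front door?'."""
    row = db.execute("SELECT MAX(ts) FROM detections WHERE context = ?",
                     (context,)).fetchone()
    return row[0]

record_detection("locked_front_door", 100.0)
record_detection("locked_front_door", 250.0)
record_detection("waved_hello", 300.0)   # not in the predefined set: ignored
```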
  • Publication number: 20240302898
    Abstract: A method is performed at an electronic device with one or more processors, a non-transitory memory, and a display. The method includes displaying, on the display, a first user interface (UI) element that is associated with a first selection region and a second selection region. The method includes, while displaying the first UI element, determining, based on eye tracking data, that a targeting criterion is satisfied with respect to the first selection region or the second selection region. The eye tracking data is associated with one or more eyes of a user of the electronic device. The method includes, while displaying the first UI element, selecting the first UI element based at least in part on determining that the targeting criterion is satisfied.
    Type: Application
    Filed: April 30, 2024
    Publication date: September 12, 2024
    Inventors: Bryce L. Schmidtchen, Ioana Negoita, Anshu K. Chimalamarri, Gregory Lutter, Thomas J. Moore, Trevor J. McIntyre
  • Patent number: 12086945
    Abstract: In one implementation, a method of performing late-stage shift is performed at a device including a display, one or more processors, and non-transitory memory. The method includes generating, based on a first predicted pose of the device for a display time period, a first image. The method includes generating a mask indicating a first region of the first image and a second region of the first image. The method includes generating a second image by shifting, based on a second predicted pose of the device for the display time period, the first region of the first image without shifting the second region of the first image. The method includes displaying, on the display at the display time period, the second image.
    Type: Grant
    Filed: October 30, 2023
    Date of Patent: September 10, 2024
    Assignee: APPLE INC.
    Inventors: Thomas G. Salter, Ganghun Kim, Ioana Negoita, Devin William Chalmers, Anshu Kameswar Chimalamarri, Thomas Justin Moore
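The masked late-stage shift in patent 12086945 can be sketched in one dimension: sample the masked region of the first image with a shift derived from the updated pose prediction, and copy the unmasked region through unchanged. The 1-D "image" and wrap-around sampling are simplifications for illustration.

```python
def late_stage_shift(image, mask, shift):
    """Build the second image from the first.

    image: 1-D list of pixel values (the first image).
    mask:  True where the pixel belongs to the region to be shifted.
    shift: displacement, in pixels, from the second pose prediction.
    """
    n = len(image)
    return [
        image[(i - shift) % n] if mask[i] else image[i]  # shift only masked region
        for i in range(n)
    ]
```

A production implementation would operate on 2-D framebuffers and derive the shift from the delta between the two predicted poses for the display time period.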
  • Publication number: 20240257430
    Abstract: A method includes: presenting a posture summary interface including: a representation of the user, a visualization of a current accumulated strain value for the user, and a first affordance for initiating an animated posture summary associated with the accumulated strain value for the user over a respective time window; and in response to detecting a user input directed to the first affordance within the posture summary interface, presenting an animation of the representation of the user over the respective time window that corresponds to one or more instances in which head pose information changes associated with the user caused an increase or a decrease to the accumulated strain value greater than a significance threshold, wherein an appearance of the visualization of the current accumulated strain value for the user changes to represent the accumulated strain value for the user over the respective time window.
    Type: Application
    Filed: April 10, 2024
    Publication date: August 1, 2024
    Inventors: Matthew S. DeMers, Adeeti V. Ullal, Alexander G. Bruno, Daniel M. Trietsch, Ioana Negoita, James J. Dunne, Thomas G. Salter, Thomas J. Moore
  • Patent number: 12008160
    Abstract: A method is performed at an electronic device with one or more processors, a non-transitory memory, an eye tracker, and a display. The eye tracker receives eye tracking data associated with one or more eyes of a user of the electronic device. The method includes displaying, on the display, a first user interface (UI) element that is associated with a first selection region and a second selection region. While displaying the first UI element, the method includes determining, based on the eye tracking data, that a first targeting criterion is satisfied with respect to the first selection region, and determining, based on the eye tracking data, that a second targeting criterion is satisfied with respect to the second selection region. The method includes selecting the first UI element based at least in part on determining that the first targeting criterion is satisfied and the second targeting criterion is satisfied.
    Type: Grant
    Filed: March 6, 2023
    Date of Patent: June 11, 2024
    Assignee: APPLE INC.
    Inventors: Bryce L. Schmidtchen, Ioana Negoita, Anshu K. Chimalamarri, Gregory Lutter, Thomas J. Moore, Trevor J. McIntyre
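The two-region targeting in patent 12008160 might be sketched as a dwell check over gaze samples, as below. The dwell threshold, sample rate, and sample format are invented; the patent only requires that a targeting criterion be satisfied for each selection region.

```python
DWELL_S = 0.3   # hypothetical dwell required per selection region

def dwell_per_region(gaze_samples, dt_s=0.1):
    """gaze_samples: region hit per sample ('first', 'second', or None)."""
    dwell = {"first": 0.0, "second": 0.0}
    for region in gaze_samples:
        if region in dwell:
            dwell[region] += dt_s
    return dwell

def element_selected(gaze_samples):
    """Select the UI element once both regions' targeting criteria are met."""
    dwell = dwell_per_region(gaze_samples)
    return dwell["first"] >= DWELL_S and dwell["second"] >= DWELL_S
```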
  • Patent number: 12002140
    Abstract: A method includes: presenting a posture summary interface including: a representation of the user, a visualization of a current accumulated strain value for the user, and a first affordance for initiating an animated posture summary associated with the accumulated strain value for the user over a respective time window; and in response to detecting a user input directed to the first affordance within the posture summary interface, presenting an animation of the representation of the user over the respective time window that corresponds to one or more instances in which head pose information changes associated with the user caused an increase or a decrease to the accumulated strain value greater than a significance threshold, wherein an appearance of the visualization of the current accumulated strain value for the user changes to represent the accumulated strain value for the user over the respective time window.
    Type: Grant
    Filed: May 22, 2023
    Date of Patent: June 4, 2024
    Assignee: APPLE INC.
    Inventors: Matthew S. DeMers, Adeeti V. Ullal, Alexander G. Bruno, Daniel M. Trietsch, Ioana Negoita, James J. Dunne, Thomas G. Salter, Thomas J. Moore
  • Publication number: 20240089695
    Abstract: A method includes obtaining an image of a machine-readable data representation that is located on a physical object using a camera of an electronic device. The machine-readable data representation includes an encoded form of a data value. The method further includes decoding the machine-readable data representation to determine the data value, whereby the data value includes a content identifier and a content source identifier. The method also includes selecting a content source based on the content source identifier, obtaining a content item and content location information based on the content identifier from the content source, determining a content position and a content orientation for the content item relative to the physical object based on the content location information, and displaying a representation of the content item using the electronic device according to the content position and the content orientation.
    Type: Application
    Filed: November 16, 2023
    Publication date: March 14, 2024
    Inventors: David W. Padgett, Christopher D. Fu, Scott G. Wade, Paul Ewers, Ioana Negoita, Thomas G. Salter, Dhruv Aditya Govil, Dimitris Ladopoulos
  • Publication number: 20240062485
    Abstract: In one implementation, a method of performing late-stage shift is performed at a device including a display, one or more processors, and non-transitory memory. The method includes generating, based on a first predicted pose of the device for a display time period, a first image. The method includes generating a mask indicating a first region of the first image and a second region of the first image. The method includes generating a second image by shifting, based on a second predicted pose of the device for the display time period, the first region of the first image without shifting the second region of the first image. The method includes displaying, on the display at the display time period, the second image.
    Type: Application
    Filed: October 30, 2023
    Publication date: February 22, 2024
    Inventors: Thomas G. Salter, Ganghun Kim, Ioana Negoita, Devin William Chalmers, Anshu Kameswar Chimalamarri, Thomas Justin Moore
  • Publication number: 20240023830
    Abstract: In one implementation, a method is performed for tiered posture awareness. The method includes: while presenting a three-dimensional (3D) environment, via the display device, obtaining head pose information for a user associated with the computing system; determining an accumulated strain value for the user based on the head pose information; and in accordance with a determination that the accumulated strain value for the user exceeds a first posture awareness threshold: determining a location for virtual content based on a height value associated with the user and a depth value associated with the 3D environment; and presenting, via the display device, the virtual content at the determined location while continuing to present the 3D environment via the display device.
    Type: Application
    Filed: May 22, 2023
    Publication date: January 25, 2024
    Inventors: Thomas G. Salter, Adeeti V. Ullal, Alexander G. Bruno, Daniel M. Trietsch, Edith M. Arnold, Edwin Iskandar, Ioana Negoita, James J. Dunne, Johahn Y. Leung, Karthik Jayaraman Raghuram, Matthew S. DeMers, Thomas J. Moore
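The strain accumulation and content placement in publication 20240023830 could be sketched as below. The strain model, neutral zone, threshold, and placement rule are all invented; the filing only states that strain accumulates from head pose and that content is placed using a height value and a depth value.

```python
def accumulate_strain(strain, pitch_deg, dt_s, rate_per_deg_s=0.01):
    """Add strain for forward head pitch beyond a 15-degree neutral zone."""
    excess = max(0.0, pitch_deg - 15.0)
    return strain + excess * rate_per_deg_s * dt_s

def cue_location(user_height_m, scene_depth_m):
    """Place the posture cue near eye height, clamped into the scene depth."""
    return (user_height_m * 0.93, min(1.5, scene_depth_m))

strain = 0.0
for _ in range(60):                  # one minute at 45 degrees of forward pitch
    strain = accumulate_strain(strain, 45.0, 1.0)
show_cue = strain > 10.0             # first posture awareness threshold
```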
  • Publication number: 20230394755
    Abstract: A method includes presenting a representation of a three-dimensional (3D) environment from a current point-of-view. The method includes identifying a region of interest within the 3D environment. The region of interest is located at a first distance from the current point-of-view. The method includes receiving, via the audio sensor, an audible signal and converting the audible signal to audible signal data. The method includes displaying, on the display, a visual representation of the audible signal data at a second distance from the current point-of-view that is a function of the first distance between the region of interest and the current point-of-view.
    Type: Application
    Filed: May 31, 2023
    Publication date: December 7, 2023
    Inventors: Ioana Negoita, Alesha Unpingco, Bryce L. Schmidtchen, Devin W. Chalmers, Lee Sparks, Thomas J. Moore
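The "second distance as a function of the first" placement in publication 20230394755 might reduce to something like the sketch below: put the visualization of the audio slightly in front of the region of interest, clamped to a comfortable depth range. The offset and clamp values are invented.

```python
def caption_depth_m(roi_distance_m, offset_m=0.25, near_m=0.5, far_m=5.0):
    """Depth for the audio visualization, derived from the ROI distance.

    roi_distance_m: first distance (point-of-view to region of interest).
    Returns the second distance (point-of-view to visualization), clamped.
    """
    return min(far_m, max(near_m, roi_distance_m - offset_m))
```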
  • Patent number: 11836872
    Abstract: In one implementation, a method of performing late-stage shift is performed at a device including a display, one or more processors, and non-transitory memory. The method includes generating, based on a first predicted pose of the device for a display time period, a first image. The method includes generating a mask indicating a first region of the first image and a second region of the first image. The method includes generating a second image by shifting, based on a second predicted pose of the device for the display time period, the first region of the first image without shifting the second region of the first image. The method includes displaying, on the display at the display time period, the second image.
    Type: Grant
    Filed: February 1, 2022
    Date of Patent: December 5, 2023
    Assignee: APPLE INC.
    Inventors: Thomas G. Salter, Ganghun Kim, Ioana Negoita, Devin William Chalmers, Anshu Kameswar Chimalamarri, Thomas Justin Moore