Patents by Inventor David H. Y. Huang

David H. Y. Huang has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Patent number: 11798242
Abstract: In one implementation, a method of providing a contextual computer-generated reality (CGR) digital assistant is performed at a device provided to deliver a CGR scene, the device including one or more processors, non-transitory memory, and one or more displays. The method includes obtaining image data characterizing a field of view captured by an image sensor. The method further includes identifying in the image data a contextual trigger for one of a plurality of contextual CGR digital assistants. The method additionally includes selecting a visual representation of the one of the plurality of contextual CGR digital assistants, where the visual representation is selected based on context and in response to identifying the contextual trigger. The method also includes presenting the CGR scene by displaying the visual representation of the one of the plurality of contextual CGR digital assistants, where the visual representation provides information associated with the contextual trigger.
    Type: Grant
    Filed: July 15, 2022
    Date of Patent: October 24, 2023
    Assignee: APPLE INC.
    Inventors: Avi Bar-Zeev, Golnaz Abdollahian, Devin William Chalmers, David H. Y. Huang, Banafsheh Jalali
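The abstract above describes a pipeline: scan image-derived data for a contextual trigger, then select and display a context-dependent visual representation of the matching digital assistant. A minimal sketch of that flow, assuming a label-based trigger table and a coarse indoor/outdoor context flag (all names here are illustrative, not from the patent):

```python
from dataclasses import dataclass

# Map contextual triggers (things recognized in the field of view) to a
# digital assistant and its context-dependent visual representations.
TRIGGERS = {
    "restaurant_sign": ("dining_assistant", {"indoor": "menu_card", "outdoor": "hovering_sign"}),
    "bus_stop": ("transit_assistant", {"indoor": "timetable_panel", "outdoor": "route_arrow"}),
}

@dataclass
class AssistantPresentation:
    assistant: str
    visual_representation: str
    trigger: str

def present_assistant(detected_labels, context):
    """Return the representation to overlay on the CGR scene, or None."""
    for label in detected_labels:
        if label in TRIGGERS:
            assistant, representations = TRIGGERS[label]
            # The visual representation is selected based on context,
            # in response to identifying the trigger (per the abstract).
            visual = representations.get(context, next(iter(representations.values())))
            return AssistantPresentation(assistant, visual, label)
    return None  # no contextual trigger identified in the image data
```

For example, `present_assistant(["tree", "bus_stop"], "outdoor")` selects the transit assistant's `route_arrow` representation, while a frame with no known trigger yields `None`.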
  • Publication number: 20230298281
    Abstract: In an exemplary process, a set of parameters corresponding to characteristics of a physical setting of a user is obtained. Based on the parameters, at least one display placement value and a fixed boundary location corresponding to the physical setting are obtained. In accordance with a determination that the at least one display placement value satisfies a display placement criterion, a virtual display is displayed at the fixed boundary location corresponding to the physical setting.
    Type: Application
    Filed: December 20, 2022
    Publication date: September 21, 2023
Inventors: Timothy R. Pease, Alexandre da Veiga, David H. Y. Huang, Peng Liu, Robert K. Molholm
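The placement logic in the abstract above can be sketched as: derive a display placement value from physical-setting parameters, and show the virtual display at the fixed boundary location only when that value satisfies the placement criterion. The threshold, field names, and the particular placement metric below are assumptions for illustration:

```python
MIN_FLAT_AREA_M2 = 0.5  # assumed placement criterion: enough usable flat area

def place_virtual_display(setting_params):
    """Return the boundary location for the virtual display, or None."""
    # Assumed "display placement value": flat area on the candidate
    # boundary (e.g. a wall), discounted by how occluded it is.
    placement_value = setting_params["flat_area_m2"] * (1.0 - setting_params["occlusion"])
    boundary_location = setting_params["boundary_location"]  # fixed (x, y, z)
    if placement_value >= MIN_FLAT_AREA_M2:
        return boundary_location
    return None  # criterion not satisfied; do not place the display here

room = {"flat_area_m2": 1.2, "occlusion": 0.25, "boundary_location": (0.0, 1.5, -2.0)}
```

With these numbers the placement value is 1.2 × 0.75 = 0.9, which satisfies the criterion, so the display is anchored at the wall location.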
  • Patent number: 11698677
Abstract: In accordance with various implementations, a method is performed at an electronic device with one or more processors, a non-transitory memory, and a display. The method includes determining an engagement score that characterizes a level of engagement between a user and a first object. The first object is located at a first location on the display. The method includes determining an interruption priority value that characterizes a relative importance of signaling a presence of a second object to the user. The second object is detectable by the electronic device. In some implementations, the method includes presenting the second object according to one or more output modalities of the electronic device. The method includes, in response to determining that the engagement score and the interruption priority value collectively satisfy an interruption condition, presenting a notification.
    Type: Grant
    Filed: May 18, 2021
    Date of Patent: July 11, 2023
    Assignee: APPLE INC.
    Inventors: Shih Sang Chiu, David H. Y. Huang, Benjamin Hunter Boesel
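The engagement/interruption mechanism above can be illustrated with a small sketch: an engagement score for the user's current object and an interruption priority for the second object collectively decide whether to present a notification. The scoring inputs and the combining rule below are assumptions, not the patented formulation:

```python
def engagement_score(gaze_dwell_s, interaction_rate):
    """Assumed score: higher when the user is focused on the first object."""
    return min(1.0, 0.1 * gaze_dwell_s + 0.5 * interaction_rate)

def satisfies_interruption_condition(engagement, priority, margin=0.2):
    """Collective condition (assumed form): the second object's priority
    must exceed the current engagement by a margin before notifying."""
    return priority - engagement >= margin

def maybe_notify(gaze_dwell_s, interaction_rate, priority):
    engagement = engagement_score(gaze_dwell_s, interaction_rate)
    if satisfies_interruption_condition(engagement, priority):
        return f"notification (priority {priority:.1f})"
    return None  # defer: the user is too engaged for this interruption
```

A high-priority second object interrupts a lightly engaged user, while the same object is deferred when the user is deeply engaged with the first object.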
  • Publication number: 20230045634
Abstract: In one implementation, a method of providing a contextual computer-generated reality (CGR) digital assistant is performed at a device provided to deliver a CGR scene, the device including one or more processors, non-transitory memory, and one or more displays. The method includes obtaining image data characterizing a field of view captured by an image sensor. The method further includes identifying in the image data a contextual trigger for one of a plurality of contextual CGR digital assistants. The method additionally includes selecting a visual representation of the one of the plurality of contextual CGR digital assistants, where the visual representation is selected based on context and in response to identifying the contextual trigger. The method also includes presenting the CGR scene by displaying the visual representation of the one of the plurality of contextual CGR digital assistants, where the visual representation provides information associated with the contextual trigger.
    Type: Application
    Filed: July 15, 2022
    Publication date: February 9, 2023
    Inventors: Avi Bar-Zeev, Golnaz Abdollahian, Devin William Chalmers, David H. Y. Huang, Banafsheh Jalali
  • Patent number: 11403821
Abstract: In one implementation, a method of providing a contextual computer-generated reality (CGR) digital assistant is performed at a device provided to deliver a CGR scene, the device including one or more processors, non-transitory memory, and one or more displays. The method includes obtaining image data characterizing a field of view captured by an image sensor. The method further includes identifying in the image data a contextual trigger for one of a plurality of contextual CGR digital assistants. The method additionally includes selecting a visual representation of the one of the plurality of contextual CGR digital assistants, where the visual representation is selected based on context and in response to identifying the contextual trigger. The method also includes presenting the CGR scene by displaying the visual representation of the one of the plurality of contextual CGR digital assistants, where the visual representation provides information associated with the contextual trigger.
    Type: Grant
    Filed: September 20, 2019
    Date of Patent: August 2, 2022
    Assignee: APPLE INC.
    Inventors: Avi Bar-Zeev, Golnaz Abdollahian, Devin William Chalmers, David H. Y. Huang, Banafsheh Jalali
  • Publication number: 20210405743
Abstract: In one implementation, a method for dynamic media item delivery is performed. The method includes: presenting, via the display device, a first set of media items associated with first metadata; obtaining user reaction information gathered by one or more input devices while presenting the first set of media items; obtaining, via a qualitative feedback classifier, an estimated user reaction state to the first set of media items based on the user reaction information; obtaining one or more target metadata characteristics based on the estimated user reaction state and the first metadata; obtaining a second set of media items associated with second metadata that corresponds to the one or more target metadata characteristics; and presenting, via the display device, the second set of media items associated with the second metadata.
    Type: Application
    Filed: May 18, 2021
    Publication date: December 30, 2021
    Inventors: Benjamin Hunter Boesel, Shih Sang Chiu, Jonathan Perron, David H. Y. Huang
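The closed loop described in the abstract above can be sketched in a few lines: classify the user's reaction to the first set of media items, derive target metadata characteristics from that state and the first set's metadata, and select a second set that matches. The toy classifier, the mood-pivot policy, and the library contents are stand-ins for illustration:

```python
def classify_reaction(reaction_signal):
    """Toy qualitative feedback classifier over a scalar reaction signal."""
    return "positive" if reaction_signal >= 0.5 else "negative"

def target_characteristics(state, first_metadata):
    """Assumed policy: keep the current characteristics on a positive
    reaction; pivot to the complementary mood on a negative one."""
    if state == "positive":
        return first_metadata
    return {"mood": "calm" if first_metadata["mood"] == "energetic" else "energetic"}

def next_media_set(library, state, first_metadata):
    """Second set: items whose metadata matches the target characteristics."""
    targets = target_characteristics(state, first_metadata)
    return [item for item in library if item["metadata"] == targets]

library = [
    {"title": "a", "metadata": {"mood": "energetic"}},
    {"title": "b", "metadata": {"mood": "calm"}},
]
```

For example, a negative reaction to an energetic first set steers the second set toward calm items, while a positive reaction keeps serving the same characteristics.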
  • Publication number: 20200098188
Abstract: In one implementation, a method of providing a contextual computer-generated reality (CGR) digital assistant is performed at a device provided to deliver a CGR scene, the device including one or more processors, non-transitory memory, and one or more displays. The method includes obtaining image data characterizing a field of view captured by an image sensor. The method further includes identifying in the image data a contextual trigger for one of a plurality of contextual CGR digital assistants. The method additionally includes selecting a visual representation of the one of the plurality of contextual CGR digital assistants, where the visual representation is selected based on context and in response to identifying the contextual trigger. The method also includes presenting the CGR scene by displaying the visual representation of the one of the plurality of contextual CGR digital assistants, where the visual representation provides information associated with the contextual trigger.
    Type: Application
    Filed: September 20, 2019
    Publication date: March 26, 2020
    Inventors: Avi Bar-Zeev, Golnaz Abdollahian, Devin William Chalmers, David H. Y. Huang