Patents by Inventor David H. Huang
David H. Huang has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).
-
Patent number: 12198280
Abstract: In an exemplary process, a set of parameters corresponding to characteristics of a physical setting of a user is obtained. Based on the parameters, at least one display placement value and a fixed boundary location corresponding to the physical setting are obtained. In accordance with a determination that the at least one display placement value satisfies a display placement criterion, a virtual display is displayed at the fixed boundary location corresponding to the physical setting.
Type: Grant
Filed: December 20, 2022
Date of Patent: January 14, 2025
Assignee: Apple Inc.
Inventors: Timothy R. Pease, Alexandre Da Veiga, David H. Huang, Peng Liu, Robert K. Molholm
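A minimal sketch of the kind of placement check this abstract describes, assuming a wall-like fixed boundary and an invented scoring heuristic; the names (`Wall`, `placement_score`, `MIN_PLACEMENT_SCORE`) are illustrative, not the patent's terms or any Apple API.

```python
# Hypothetical sketch of a display placement check; all names and numbers
# are assumptions, not the patented method.
from dataclasses import dataclass

@dataclass
class Wall:
    width_m: float
    height_m: float
    distance_m: float  # distance from the user to this fixed boundary

MIN_PLACEMENT_SCORE = 0.5  # assumed display placement criterion

def placement_score(wall: Wall) -> float:
    """Score a candidate fixed boundary (e.g., a wall) for hosting a virtual display."""
    area = wall.width_m * wall.height_m
    # Favor large surfaces at a comfortable viewing distance (assumed heuristic).
    return min(area / 4.0, 1.0) * (1.0 if 0.5 <= wall.distance_m <= 3.0 else 0.25)

def place_virtual_display(walls: list[Wall]) -> Wall | None:
    """Return the boundary to anchor the display to, or None if no wall qualifies."""
    best = max(walls, key=placement_score, default=None)
    if best is not None and placement_score(best) >= MIN_PLACEMENT_SCORE:
        return best
    return None

print(place_virtual_display([Wall(3.0, 2.5, 2.0), Wall(1.0, 1.0, 4.0)]))
```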
-
Patent number: 12198261
Abstract: Various implementations disclosed herein include devices, systems, and methods that detect user interactions with a content object in a set of views of a three-dimensional (3D) environment and provide a different set of views with a different positional constraint. For example, an example process may include associating a content object with a region of a physical environment; providing a first set of views of the physical environment, wherein the content object is displayed using a first positional constraint when included in the first set of views; detecting an interaction associated with the region of the physical environment; and, in accordance with detecting the interaction, providing a second set of views of the physical environment, wherein the content object is displayed using a second positional constraint, different from the first, when included in the second set of views.
Type: Grant
Filed: July 10, 2023
Date of Patent: January 14, 2025
Assignee: Apple Inc.
Inventors: David H. Huang, Bart Trzynadlowski
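The constraint-switching behavior above can be pictured as a toy state machine; everything here (the two constraints, the region string) is an assumed simplification of the disclosure.

```python
# Illustrative sketch only: a content object switches between two positional
# constraints (e.g., world-locked vs. head-locked) when an interaction with
# its associated region is detected. Names are hypothetical.
from enum import Enum

class Constraint(Enum):
    WORLD_LOCKED = "anchored to a region of the physical environment"
    HEAD_LOCKED = "follows the viewer's pose"

class ContentObject:
    def __init__(self, region: str):
        self.region = region
        self.constraint = Constraint.WORLD_LOCKED  # first set of views

    def on_interaction(self, region: str) -> None:
        # In accordance with detecting an interaction associated with the
        # object's region, provide views with a different constraint.
        if region == self.region:
            self.constraint = Constraint.HEAD_LOCKED  # second set of views

note = ContentObject(region="kitchen-counter")
note.on_interaction("kitchen-counter")
print(note.constraint)  # Constraint.HEAD_LOCKED
```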
-
Publication number: 20240310907
Abstract: In one implementation, a method of activating a user interface element is performed at a device including an input device, an eye tracker, a display, one or more processors, and non-transitory memory. The method includes displaying, on the display, a plurality of user interface elements and receiving, via the input device, a user input corresponding to an input location. The method includes determining, using the eye tracker, a gaze location. The method includes, in response to determining that the input location is at least a threshold distance from the gaze location, activating a first user interface element at the gaze location and, in response to determining that the input location is not at least the threshold distance from the gaze location, activating a second user interface element at the input location.
Type: Application
Filed: June 14, 2022
Publication date: September 19, 2024
Inventors: Shih-Sang Chiu, Benjamin H. Boesel, David H. Huang, Jonathan Perron, Jonathan Ravasz, Jordan A. Cazamias, Tyson Erze
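The gaze/input disambiguation rule lends itself to a compact sketch. The threshold value and helper names below are assumptions; only the branching logic mirrors the abstract.

```python
# If the input location is at least a threshold distance from the gaze
# location, activate at the gaze; otherwise activate at the input.
import math

THRESHOLD = 0.05  # assumed threshold distance, in normalized screen units

def distance(a: tuple[float, float], b: tuple[float, float]) -> float:
    return math.hypot(a[0] - b[0], a[1] - b[1])

def location_to_activate(input_loc, gaze_loc):
    """Return the point at which to activate a user interface element."""
    if distance(input_loc, gaze_loc) >= THRESHOLD:
        return gaze_loc   # input is far from gaze: trust the eye tracker
    return input_loc      # input is near gaze: trust the precise input

print(location_to_activate((0.10, 0.10), (0.80, 0.50)))  # -> gaze location
```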
-
Publication number: 20240310971
Abstract: In some embodiments, an electronic device emphasizes and/or deemphasizes user interfaces based on the gaze of a user. In some embodiments, an electronic device defines levels of immersion for different user interfaces independently of one another. In some embodiments, an electronic device resumes display of a user interface at a previously-displayed level of immersion after (e.g., temporarily) reducing the level of immersion associated with the user interface. In some embodiments, an electronic device allows objects, people, and/or portions of an environment to be visible through a user interface displayed by the electronic device. In some embodiments, an electronic device reduces the level of immersion associated with a user interface based on characteristics of the electronic device and/or physical environment of the electronic device.
Type: Application
Filed: May 22, 2024
Publication date: September 19, 2024
Inventors: Ieyuki KAWASHIMA, Stephen O. LEMAY, William A. SORRENTINO, III, Jeffrey M. FAULKNER, Israel PASTRANA VICENTE, Gary Ian BUTCHER, Kristi E. BAUERLY, Shih-Sang CHIU, Benjamin Hunter BOESEL, David H. HUANG, Dorian D. DARGAN
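Per-interface immersion levels, and resuming a previously-displayed level after a temporary reduction, can be modeled as simple bookkeeping. This is a hedged sketch; the 0-100 scale and class names are invented.

```python
# Toy per-interface immersion bookkeeping; scale and names are assumptions.
class ImmersiveUI:
    def __init__(self, name: str, level: int):
        self.name = name
        self.level = level    # this UI's own level of immersion (0-100)
        self._saved = None    # previously-displayed level, if reduced

    def reduce_temporarily(self, level: int = 0) -> None:
        self._saved = self.level
        self.level = level

    def resume(self) -> None:
        if self._saved is not None:
            self.level = self._saved  # resume at previously-displayed level
            self._saved = None

meditation = ImmersiveUI("meditation", level=90)
meditation.reduce_temporarily()  # e.g., someone walks into the room
meditation.resume()
print(meditation.level)  # 90
```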
-
Publication number: 20240241616
Abstract: In one implementation, a method for navigating windows in 3D is performed. The method includes: displaying a first content pane with a first appearance at a first z-depth within an extended reality (XR) environment, wherein the first content pane includes first content and an input field; detecting a user input directed to the input field; and, in response to detecting the user input directed to the input field: moving the first content pane to a second z-depth within the XR environment, wherein the second z-depth is different from the first z-depth; modifying the first content pane by changing it from the first appearance to a second appearance; and displaying a second content pane with the first appearance at the first z-depth within the XR environment.
Type: Application
Filed: May 11, 2022
Publication date: July 18, 2024
Inventors: Shih-Sang Chiu, Benjamin H. Boesel, David H. Huang, Jonathan Perron, Jonathan Ravasz, Jordan A. Cazamias, Tyson Erze
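A rough sketch of the pane choreography: selecting an input field pushes the front pane deeper and dims it, and a new pane appears at the original depth. Depth units and field names are assumptions.

```python
# Hypothetical pane stack; depth convention and appearance labels invented.
from dataclasses import dataclass

@dataclass
class Pane:
    content: str
    z_depth: float      # meters from the viewer (assumed convention)
    appearance: str     # "primary" (full) or "secondary" (dimmed)

def on_input_field_selected(panes: list[Pane], new_content: str) -> None:
    front = panes[-1]
    front.z_depth += 0.3               # move to a second, deeper z-depth
    front.appearance = "secondary"     # change from first to second appearance
    panes.append(Pane(new_content, z_depth=front.z_depth - 0.3, appearance="primary"))

stack = [Pane("search page", z_depth=1.0, appearance="primary")]
on_input_field_selected(stack, "search results")
print([(p.content, p.z_depth, p.appearance) for p in stack])
```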
-
Publication number: 20240231569
Abstract: In one implementation, a method of displaying content is performed at a device including a display, one or more processors, and non-transitory memory. The method includes displaying, in a first area, a first content pane whose first content includes a link to second content. The method includes, while displaying the first content pane in the first area, receiving a user input selecting the link to the second content and indicating a second area that is separate from the first area and is not displaying a content pane. The method includes, in response to receiving that user input, displaying, in the second area, a second content pane including the second content.
Type: Application
Filed: May 31, 2022
Publication date: July 11, 2024
Inventors: Shih-Sang Chiu, Benjamin H. Boesel, David H. Huang, Jonathan Perron, Jonathan Ravasz, Jordan A. Cazamias, Tyson Erze
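The link-to-empty-area behavior reduces to a small guard, sketched below; the two-area workspace model is an assumed simplification.

```python
# Toy workspace: a link opens in an indicated area only if that area is
# currently not displaying a content pane. Names are illustrative.
class Workspace:
    def __init__(self):
        self.areas: dict[str, str | None] = {"left": "article with link", "right": None}

    def open_link_in_area(self, link_target: str, area: str) -> None:
        # The indicated area must be empty (not displaying a content pane).
        if self.areas.get(area) is None:
            self.areas[area] = link_target  # display the second content pane there

ws = Workspace()
ws.open_link_in_area("linked page", "right")
print(ws.areas)  # {'left': 'article with link', 'right': 'linked page'}
```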
-
Publication number: 20240112419
Abstract: In one implementation, a method dynamically determines presentation and transitional regions for content delivery. The method includes obtaining a first set of characteristics associated with a physical environment and detecting a request to cause presentation of virtual content. In response to detecting the request, the method also includes obtaining a second set of characteristics associated with the virtual content, generating a presentation region for the virtual content based at least in part on the first and second sets of characteristics, and generating a transitional region provided to at least partially surround the presentation region based at least in part on the first and second sets of characteristics. The method further includes concurrently presenting the virtual content within the presentation region and the transitional region at least partially surrounding the presentation region.
Type: Application
Filed: March 20, 2023
Publication date: April 4, 2024
Inventors: Benjamin H. Boesel, David H. Huang, Jonathan Perron, Shih-Sang Chiu
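One plausible reading of the region generation, sketched with flat rectangles: derive a presentation region from environment and content characteristics, then pad it to get a transitional region that at least partially surrounds it. The padding factor and geometry are invented, not taken from the application.

```python
# Assumed rectangular model of presentation/transitional regions.
from dataclasses import dataclass

@dataclass
class Region:
    width: float
    height: float

def generate_regions(room: Region, content: Region, pad: float = 0.2):
    """Return (presentation, transitional) regions for the virtual content."""
    presentation = Region(min(content.width, room.width),
                          min(content.height, room.height))
    transitional = Region(min(presentation.width * (1 + pad), room.width),
                          min(presentation.height * (1 + pad), room.height))
    return presentation, transitional

print(generate_regions(room=Region(4.0, 2.5), content=Region(1.6, 0.9)))
```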
-
Publication number: 20240037886
Abstract: Various implementations disclosed herein include devices, systems, and methods that generate and share/transmit a 3D representation of a physical environment during a communication session. Some of the elements (e.g., points) of the 3D representation may be replaced to improve the quality and/or efficiency of the modeling and transmitting processes. A user's device may provide a view and/or feedback during a scan of the physical environment during the communication session to facilitate accurate understanding of what is being transmitted. Additional information, e.g., a second representation of a portion of the physical environment, may also be transmitted during a communication session. The second representation may capture an aspect (e.g., more detail, photo-quality images, live imagery, etc.) of a portion not represented by the 3D representation.
Type: Application
Filed: October 16, 2023
Publication date: February 1, 2024
Inventors: Shih-Sang Chiu, Alexandre Da Veiga, David H. Huang, Jonathan Perron, Jordan A. Cazamias
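One idea in the abstract, replacing low-quality elements before transmission, can be sketched as a filter plus a proxy record. The threshold and data shapes are assumptions.

```python
# Toy pre-transmission pass over a scanned point set; everything here is an
# assumed simplification of the disclosure.
def prepare_for_transmission(points, quality, threshold=0.6):
    """Split scanned points into keepers and a proxy for low-quality ones."""
    keep = [p for p, q in zip(points, quality) if q >= threshold]
    dropped = [p for p, q in zip(points, quality) if q < threshold]
    # A real system might fit a plane or coarse mesh here; we just count them.
    proxy = {"kind": "coarse-proxy", "replaced_points": len(dropped)}
    return keep, proxy

pts = [(0, 0, 1), (1, 0, 1), (0, 1, 2)]
print(prepare_for_transmission(pts, quality=[0.9, 0.4, 0.8]))
```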
-
Publication number: 20240004536
Abstract: Methods are disclosed for preventing three-dimensional content from obscuring portions of a web browser or other user interface in a three-dimensional environment. In some embodiments, the methods include applying one or more visual treatments to the three-dimensional content. In some embodiments, the methods further include applying one or more visual treatments to portions of the web browser or portions of the other user interface. In some embodiments, the one or more visual treatments are applied at least from a viewpoint of a user. In some embodiments, applying the one or more visual treatments is based on a three-dimensional visual effect of the three-dimensional content.
Type: Application
Filed: June 15, 2023
Publication date: January 4, 2024
Inventors: Samuel M. WEINIG, Lucie BELANGER, Angel Suet Yan CHEUNG, David H. HUANG, Dean JACKSON
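A sketch of one possible "visual treatment": fading 3D content by how far it protrudes past the page plane toward the viewer. The falloff rule and numbers are invented for illustration.

```python
# Assumed opacity falloff for protruding 3D content; not the claimed method.
def content_opacity(protrusion_m: float, max_protrusion_m: float = 0.25) -> float:
    """Opacity for 3D content protruding in front of the page plane."""
    if protrusion_m <= 0:
        return 1.0                       # behind or at the page: untreated
    overshoot = min(protrusion_m / max_protrusion_m, 1.0)
    return 1.0 - 0.7 * overshoot         # never fully invisible (assumed floor)

for p in (0.0, 0.1, 0.3):
    print(p, round(content_opacity(p), 2))
```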
-
Publication number: 20230419625
Abstract: Various implementations provide a representation of at least a portion of a user within a three-dimensional (3D) environment other than the user's physical environment. Based on detecting a condition, a representation of another object of the user's physical environment is shown to provide context. As examples, a representation of a sitting surface may be shown based on detecting that the user is sitting down; representations of a table and coffee cup may be shown based on detecting that the user is reaching out to pick up a coffee cup; a representation of a second user may be shown based on detecting a voice or the user turning his attention towards a moving object or sound; and a depiction of a puppy may be shown when the puppy's bark is detected.
Type: Application
Filed: September 13, 2023
Publication date: December 28, 2023
Inventors: Shih-Sang Chiu, Alexandre Da Veiga, David H. Huang, Jonathan Perron, Jordan A. Cazamias
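The condition-to-representation examples in the abstract suggest a simple rule table, sketched below with illustrative condition strings.

```python
# Map detected conditions in the physical environment to contextual
# representations shown in the virtual one; keys are invented labels.
CONTEXT_RULES = {
    "user_sitting_down": ["sitting surface"],
    "user_reaching_out": ["table", "coffee cup"],
    "voice_detected": ["second user"],
    "dog_bark_detected": ["puppy"],
}

def representations_to_show(detected_conditions: set[str]) -> list[str]:
    shown = []
    for condition, objects in CONTEXT_RULES.items():
        if condition in detected_conditions:
            shown.extend(objects)  # surface these pass-through representations
    return shown

print(representations_to_show({"user_sitting_down", "dog_bark_detected"}))
```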
-
Publication number: 20230384907
Abstract: In some embodiments, a computer system facilitates manipulation of a three-dimensional environment relative to a viewpoint of a user of the computer system. In some embodiments, a computer system facilitates manipulation of virtual objects in a virtual environment. In some embodiments, a computer system facilitates manipulation of a three-dimensional environment relative to a reference point determined based on attention of a user of the computer system.
Type: Application
Filed: April 11, 2023
Publication date: November 30, 2023
Inventors: Benjamin H. BOESEL, Jonathan RAVASZ, Shih-Sang CHIU, Jordan A. CAZAMIAS, Stephen O. LEMAY, Christopher D. MCKENZIE, Dorian D. DARGAN, David H. HUANG
-
Publication number: 20230351676
Abstract: Various implementations disclosed herein include devices, systems, and methods that detect user interactions with a content object in a set of views of a three-dimensional (3D) environment and provide a different set of views with a different positional constraint. For example, an example process may include associating a content object with a region of a physical environment; providing a first set of views of the physical environment, wherein the content object is displayed using a first positional constraint when included in the first set of views; detecting an interaction associated with the region of the physical environment; and, in accordance with detecting the interaction, providing a second set of views of the physical environment, wherein the content object is displayed using a second positional constraint, different from the first, when included in the second set of views.
Type: Application
Filed: July 10, 2023
Publication date: November 2, 2023
Inventors: David H. HUANG, Bart TRZYNADLOWSKI
-
Publication number: 20230333641
Abstract: In accordance with various implementations, a method is performed at an electronic device with one or more processors, a non-transitory memory, and a display. The method includes determining an engagement score associated with an object that is visible at the display. The engagement score characterizes a level of user engagement with respect to the object. The method includes, in response to determining that the engagement score satisfies an engagement criterion, determining an ambience vector associated with the object and presenting content based on the ambience vector. The ambience vector represents a target ambient environment.
Type: Application
Filed: December 23, 2022
Publication date: October 19, 2023
Inventors: Benjamin H. Boesel, David H. Huang, Jonathan Perron, Shih-Sang Chiu
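The engagement-score flow can be sketched end to end; the scoring formula, threshold, and the fireplace ambience entry are all invented for illustration.

```python
# Toy engagement score and ambience lookup; values are assumptions.
ENGAGEMENT_THRESHOLD = 0.7  # assumed engagement criterion

def engagement_score(gaze_seconds: float, distance_m: float) -> float:
    """Toy score: longer gaze and closer proximity mean more engagement."""
    return min(gaze_seconds / 5.0, 1.0) * (1.0 / (1.0 + distance_m))

def maybe_present_ambience(obj: str, gaze_seconds: float, distance_m: float):
    if engagement_score(gaze_seconds, distance_m) >= ENGAGEMENT_THRESHOLD:
        ambience = {"fireplace": ("warm light", "crackling audio")}.get(obj)
        return ambience  # target ambient environment for this object
    return None

print(maybe_present_ambience("fireplace", gaze_seconds=6.0, distance_m=0.2))
```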
-
Publication number: 20230290047
Abstract: Various implementations disclosed herein include devices, systems, and methods that adjust content during an immersive experience. For example, an example process may include presenting a representation of a physical environment using content from a sensor located in the physical environment; detecting an object in the physical environment using the sensor; presenting a video, wherein the presented video occludes a portion of the presented representation of the physical environment; presenting a representation of the detected object; and, in accordance with determining that the detected object meets a set of criteria, adjusting a level of occlusion of the presented representation of the detected object by the presented video, where the representation of the detected object indicates at least an estimate of a position between the sensor and the detected object, and is at least partially occluded by the presented video.
Type: Application
Filed: February 17, 2023
Publication date: September 14, 2023
Inventors: Benjamin H. BOESEL, Emilie KIM, David H. HUANG, Shih-Sang (Carnaven) CHIU
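A toy version of the occlusion adjustment: when a detected object meets an assumed set of criteria (close and approaching), its representation breaks through the presented video. The rule and cutoff are invented.

```python
# Assumed breakthrough rule for a detected physical object during playback.
def video_occlusion_level(object_distance_m: float, approaching: bool) -> float:
    """Fraction (0..1) of the object's representation hidden by the video."""
    meets_criteria = approaching and object_distance_m < 1.5  # assumed criteria
    if meets_criteria:
        return 0.2   # mostly break through the video so the user notices it
    return 1.0       # fully occluded by the presented video

print(video_occlusion_level(1.0, approaching=True))   # 0.2
print(video_occlusion_level(3.0, approaching=False))  # 1.0
```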
-
Publication number: 20230281933
Abstract: Various implementations disclosed herein include devices, systems, and methods that create a 3D video by determining first adjustments (e.g., first transforms) to video frames (e.g., one or more RGB images and depth images per frame) that align content in a coordinate system to remove the effects of capturing-camera motion. Various implementations disclosed herein include devices, systems, and methods that play back a 3D video, including determining second adjustments (e.g., second transforms) that remove the effects of movement of a viewing electronic device relative to a viewing environment during playback of the 3D video. Some implementations distinguish static content and moving content of the video frames to play back only moving objects or to facilitate concurrent playback of multiple spatially related 3D videos. The 3D video may include images, audio, or 3D video of a video-capture-device user.
Type: Application
Filed: November 10, 2022
Publication date: September 7, 2023
Inventors: Timothy R. PEASE, Alexandre DA VEIGA, Benjamin H. BOESEL, David H. HUANG, Jonathan PERRON, Shih-Sang CHIU, Spencer H. RAY
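The two-stage adjustment can be sketched with poses reduced to plain translations; real transforms would be full rigid-body matrices, so treat this as a schematic only.

```python
# First adjustment at capture, second at playback; poses reduced to 3D
# translations for brevity (an assumption, not the disclosed transforms).
def sub(a, b):  # component-wise a - b
    return tuple(x - y for x, y in zip(a, b))

def align_frame(point, capture_cam_pos):
    # First adjustment: express content in a coordinate system that does not
    # move with the capturing camera.
    return sub(point, capture_cam_pos)

def present_point(aligned_point, viewer_device_pos):
    # Second adjustment: keep the played-back video stable as the viewing
    # device moves through the viewing environment.
    return sub(aligned_point, viewer_device_pos)

p = align_frame((2.0, 1.0, 5.0), capture_cam_pos=(0.5, 0.0, 1.0))
print(present_point(p, viewer_device_pos=(0.1, 0.0, 0.0)))
```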
-
SYSTEM AND METHOD OF THREE-DIMENSIONAL PLACEMENT AND REFINEMENT IN MULTI-USER COMMUNICATION SESSIONS
Publication number: 20230273706
Abstract: Some examples of the disclosure are directed to methods for spatial placement of avatars in a communication session. In some examples, while a first electronic device is presenting a three-dimensional environment, the first electronic device may receive an input corresponding to a request to enter a communication session with a second electronic device. In some examples, in response to receiving the input, the first electronic device may scan an environment surrounding the first electronic device. In some examples, the first electronic device may identify a placement location in the three-dimensional environment at which to display a virtual object representing a user of the second electronic device. In some examples, the first electronic device displays the virtual object representing the user of the second electronic device at the placement location in the three-dimensional environment. Some examples of the disclosure are directed to methods for spatial refinement in the communication session.
Type: Application
Filed: February 24, 2023
Publication date: August 31, 2023
Inventors: Connor A. SMITH, Benjamin H. BOESEL, David H. HUANG, Jeffrey S. NORRIS, Jonathan PERRON, Jordan A. CAZAMIAS, Miao REN, Shih-Sang CHIU
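Avatar placement can be sketched as scoring candidate open spots against a comfortable conversational distance; the heuristic and data shapes are assumptions, not the disclosure.

```python
# Toy placement picker over scanned open floor spots (2D positions).
import math

def pick_placement(open_spots, user_pos, preferred_m=1.2):
    """Choose where to display the remote participant's avatar."""
    def score(spot):
        d = math.dist(spot, user_pos)
        return abs(d - preferred_m)   # closest to a comfortable distance wins
    return min(open_spots, key=score, default=None)

spots = [(0.0, 2.5), (1.0, 1.0), (3.0, 0.0)]
print(pick_placement(spots, user_pos=(0.0, 0.0)))  # (1.0, 1.0)
```
-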
Patent number: 11520456
Abstract: In some embodiments, an electronic device emphasizes and/or deemphasizes user interfaces based on the gaze of a user. In some embodiments, an electronic device defines levels of immersion for different user interfaces independently of one another. In some embodiments, an electronic device resumes display of a user interface at a previously-displayed level of immersion after (e.g., temporarily) reducing the level of immersion associated with the user interface. In some embodiments, an electronic device allows objects, people, and/or portions of an environment to be visible through a user interface displayed by the electronic device. In some embodiments, an electronic device reduces the level of immersion associated with a user interface based on characteristics of the electronic device and/or physical environment of the electronic device.
Type: Grant
Filed: September 25, 2021
Date of Patent: December 6, 2022
Assignee: Apple Inc.
Inventors: Ieyuki Kawashima, Stephen O. Lemay, William A. Sorrentino, III, Alan C. Dye, M. Evans Hankey, Julian Jaede, Jonathan P. Ive, Kristi E. Bauerly, Benjamin Hunter Boesel, Shih-Sang Chiu, David H. Huang
-
Publication number: 20220155909
Abstract: In some embodiments, an electronic device emphasizes and/or deemphasizes user interfaces based on the gaze of a user. In some embodiments, an electronic device defines levels of immersion for different user interfaces independently of one another. In some embodiments, an electronic device resumes display of a user interface at a previously-displayed level of immersion after (e.g., temporarily) reducing the level of immersion associated with the user interface. In some embodiments, an electronic device allows objects, people, and/or portions of an environment to be visible through a user interface displayed by the electronic device. In some embodiments, an electronic device reduces the level of immersion associated with a user interface based on characteristics of the electronic device and/or physical environment of the electronic device.
Type: Application
Filed: September 25, 2021
Publication date: May 19, 2022
Inventors: Ieyuki KAWASHIMA, Stephen O. LEMAY, William A. SORRENTINO, III, Alan C. DYE, M. Evans HANKEY, Julian JAEDE, Jonathan P. IVE, Kristi E. Bauerly, Benjamin Hunter Boesel, Shih-Sang Chiu, David H. Huang
-
Patent number: 8539200
Abstract: A system, method, and computer readable medium for an operating system (OS) mediated launch of an OS dependent application is disclosed. An application running within an OS may operate outside the OS environment by, for example, constructing a capsule file, passing the capsule file to a firmware interface, and restarting the system. The firmware interface may load various drivers and applications contained within the capsule file and execute them to perform a task. Upon completion of the task, the OS is booted again and the original application may resume control, making use of any information stored by the firmware interface in a dedicated status table or file. Other embodiments may be employed and are described and claimed.
Type: Grant
Filed: April 23, 2008
Date of Patent: September 17, 2013
Assignee: Intel Corporation
Inventors: David H. Huang, Xin Li, Ruth Li, Vincent J. Zimmer
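The capsule round trip can be sketched with a toy length-prefixed container: pack a task and payloads, run them pre-OS, and report results through a status file. This is explicitly not the UEFI capsule layout, just an illustration of the flow.

```python
# Toy capsule container and status handoff; format is an assumption.
import json, struct

def build_capsule(task: dict, payloads: list) -> bytes:
    """Pack a task description and driver/application images into one blob."""
    header = json.dumps(task).encode()
    blob = struct.pack("<I", len(header)) + header
    for p in payloads:                      # drivers/apps to run pre-OS
        blob += struct.pack("<I", len(p)) + p
    return blob

def write_status(path: str, result: dict) -> None:
    # Firmware-side stand-in: persist results for the resumed OS application.
    with open(path, "w") as f:
        json.dump(result, f)

capsule = build_capsule({"task": "flash-update"}, [b"driver-image"])
write_status("status.json", {"task": "flash-update", "ok": True})
print(len(capsule), "bytes packed")
```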
-
Publication number: 20090327679
Abstract: A system, method, and computer readable medium for an operating system (OS) mediated launch of an OS dependent application is disclosed. An application running within an OS may operate outside the OS environment by, for example, constructing a capsule file, passing the capsule file to a firmware interface, and restarting the system. The firmware interface may load various drivers and applications contained within the capsule file and execute them to perform a task. Upon completion of the task, the OS is booted again and the original application may resume control, making use of any information stored by the firmware interface in a dedicated status table or file. Other embodiments may be employed and are described and claimed.
Type: Application
Filed: April 23, 2008
Publication date: December 31, 2009
Inventors: David H. Huang, Xin Li, Ruth Li, Vincent J. Zimmer