Patents by Inventor Hugo D. Verweij
Hugo D. Verweij has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).
-
Patent number: 12079542
Abstract: A headset can include left and right ear-worn speakers and a control. In response to a control input of the control, the ear-worn speakers can be driven with driver signals that include a control sound having a virtual location determined by spatial auditory cues. The control sound can indicate a behavior of the control as a result of the control input. Other aspects are also described and claimed.
Type: Grant
Filed: February 13, 2023
Date of Patent: September 3, 2024
Assignee: Apple Inc.
Inventors: Darius A. Satongar, Per Haakan Linus Persson, Hugo D. Verweij, Stuart J. Wood, Andrew E. Greenwood
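To make the abstract concrete, here is a minimal, self-contained Swift sketch of the general idea: a control input picks a confirmation sound and a virtual position for it, so the spatial cue itself communicates what the control did. Every type name, sample name, and position below is invented for illustration and is not taken from the patent.

```swift
import Foundation

// Illustrative model only: a control input maps to a control sound plus a
// virtual location (spatial auditory cue) that hints at the control's behavior.

struct SpatialPosition {
    var azimuthDegrees: Double   // left/right around the listener
    var elevationDegrees: Double // below/above the listener
}

enum ControlInput {
    case volumeUp, volumeDown, noiseControlToggle
}

struct ControlSound {
    let sampleName: String
    let position: SpatialPosition
}

func controlSound(for input: ControlInput) -> ControlSound {
    switch input {
    case .volumeUp:
        // Rendered above the listener to suggest "more".
        return ControlSound(sampleName: "tick",
                            position: SpatialPosition(azimuthDegrees: 0, elevationDegrees: 30))
    case .volumeDown:
        // Rendered below the listener to suggest "less".
        return ControlSound(sampleName: "tick",
                            position: SpatialPosition(azimuthDegrees: 0, elevationDegrees: -30))
    case .noiseControlToggle:
        // Rendered behind the listener to evoke the surrounding environment.
        return ControlSound(sampleName: "whoosh",
                            position: SpatialPosition(azimuthDegrees: 180, elevationDegrees: 0))
    }
}

print(controlSound(for: .volumeUp))
```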
-
Publication number: 20240272782Abstract: A gaze virtual object is displayed that is selectable based on attention directed to the gaze virtual object to perform an operation associated with a selectable virtual object. An indication of attention of a user is displayed. An enlarged view of a region of a user interface is displayed. A value of a slider element is adjusted based on attention of a user. A user interface element is moved at a respective rate based on attention of a user. Text is entered into a text entry field in response to speech inputs. A value for a value selection user interface object is updated based on attention of a user. Movement of a virtual object is facilitated based on direct touch interactions. A user input is facilitated for displaying a selection refinement user interface object. A visual indicator is displayed indicating progress toward selecting a virtual object when criteria are met.Type: ApplicationFiled: September 22, 2023Publication date: August 15, 2024Inventors: Israel PASTRANA VICENTE, Evgenii KRIVORUCHKO, Danielle M. PRICE, Jonathan R. DASCOLA, Kristi E. BAUERLY, Marcos ALONSO, Hugo D. VERWEIJ, Lorena S. PAZMINO, Jonathan RAVASZ, Zoey C. Taylor, Miquel ESTANY RODRIGUEZ, James J. Owen
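Attention-based selection of this kind is often implemented as gaze dwell: attention accumulates on a target until a threshold is crossed, with a progress value driving the visual indicator the abstract mentions. The Swift sketch below is a generic dwell selector under that assumption; the class name, threshold, and API are inventions for illustration, not the claimed implementation.

```swift
import Foundation

// Generic gaze-dwell selector: feed it the ID of the object currently under
// the user's gaze; it reports when a selection completes and exposes a
// 0...1 progress value suitable for a visual indicator.

final class GazeDwellSelector {
    private let dwellThreshold: TimeInterval
    private var focusedObjectID: String?
    private var focusStart: Date?

    init(dwellThreshold: TimeInterval = 1.0) {
        self.dwellThreshold = dwellThreshold
    }

    /// Returns the ID of a newly selected object, if selection just completed.
    func update(gazedObjectID: String?, now: Date = Date()) -> String? {
        guard let id = gazedObjectID else {
            focusedObjectID = nil
            focusStart = nil
            return nil
        }
        if id != focusedObjectID {
            focusedObjectID = id
            focusStart = now
            return nil
        }
        guard let start = focusStart else { return nil }
        if now.timeIntervalSince(start) >= dwellThreshold {
            focusStart = nil   // no repeat selection until gaze leaves and returns
            return id
        }
        return nil
    }

    /// Progress toward selection, 0...1, for driving a visual indicator.
    func progress(now: Date = Date()) -> Double {
        guard let start = focusStart else { return 0 }
        return min(now.timeIntervalSince(start) / dwellThreshold, 1)
    }
}
```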
-
Publication number: 20240233288
Abstract: A computer system displays an immersion control, a volume control, an element configured to permit or restrict breakthrough in different modes of operation of the computer system, and/or an option selectable to cause initiation of display of a representation of content from a second computer system via a display generation component of the computer system. While a second computer system is displaying content, a first computer system detects an input corresponding to a request to display a representation of the content from the second computer system via a display generation component of the first computer system, and in response, initiates a process to display the representation of content and to de-emphasize the content displayed by the second computer system. A first computer system facilitates disambiguation of a second computer system from a plurality of computer systems for display of a representation of content from the second computer system.
Type: Application
Filed: September 22, 2023
Publication date: July 11, 2024
Inventors: Matan S. STAUBER, Lorena S. PAZMINO, Karen EL ASMAR, Danielle M. PRICE, Hugo D. VERWEIJ, Zoey C. TAYLOR
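The core coordination in the abstract is simple to model: once the first system begins showing a representation of the second system's content, the second system de-emphasizes its own presentation. The Swift sketch below is a rough, assumption-laden illustration of that handshake; the type and property names are made up.

```swift
import Foundation

// Rough illustration of the coordination described above: when system A
// starts showing a representation of system B's content, B de-emphasizes
// (for example, dims) its own presentation of that content.

struct ContentSession {
    var contentID: String
    var mirroredToFirstSystem = false
    var secondSystemDimmed = false

    mutating func beginRepresentationOnFirstSystem() {
        mirroredToFirstSystem = true
        secondSystemDimmed = true   // de-emphasize the originating display
    }

    mutating func endRepresentationOnFirstSystem() {
        mirroredToFirstSystem = false
        secondSystemDimmed = false
    }
}

var session = ContentSession(contentID: "movie-42")
session.beginRepresentationOnFirstSystem()
print(session.secondSystemDimmed)   // true
```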
-
Publication number: 20240152256
Abstract: A computer system concurrently displays, via a display generation component, a browser toolbar, for a browser that includes a plurality of tabs and a window including first content associated with a first tab of the plurality of tabs. The browser toolbar and the window are overlaying a view of a three-dimensional environment. While displaying the browser toolbar and the window that includes the first content overlaying the view of the three-dimensional environment, the computer system detects a first air gesture that meets first gesture criteria, the air gesture comprising a gaze input directed at a location in the view of the three-dimensional environment that is occupied by the browser toolbar and a hand movement. In response to detecting the first air gesture that meets the first gesture criteria, the computer system displays second content in the window, the second content associated with a second tab of the plurality of tabs.
Type: Application
Filed: September 18, 2023
Publication date: May 9, 2024
Inventors: Jonathan R. Dascola, Nathan Gitter, Jay Moon, Stephen O. Lemay, Joseph M.W. Luxton, Angel Suet Y. Cheung, Danielle M. Price, Hugo D. Verweij, Kristi E.S. Bauerly, Katherine W. Kolombatovich, Jordan A. Cazamias
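The gesture criteria here combine two signals: the gaze must land on the toolbar and a hand movement must occur. A minimal Swift sketch of one way such criteria could be checked is below; the threshold, the direction-to-tab mapping, and all names are assumptions for illustration, not the claimed criteria.

```swift
import Foundation

// Illustrative check of an "air gesture": a gaze sample on the toolbar plus a
// sufficiently large horizontal hand movement switches to the next or
// previous tab.

struct GazeSample { var isOnToolbar: Bool }
struct HandMovement { var horizontalDelta: Double } // metres; positive = rightward

enum TabSwitch { case next, previous }

func tabSwitch(for gaze: GazeSample,
               hand: HandMovement,
               minimumDelta: Double = 0.05) -> TabSwitch? {
    guard gaze.isOnToolbar, abs(hand.horizontalDelta) >= minimumDelta else {
        return nil   // gesture criteria not met
    }
    return hand.horizontalDelta > 0 ? .next : .previous
}

if let result = tabSwitch(for: GazeSample(isOnToolbar: true),
                          hand: HandMovement(horizontalDelta: 0.08)) {
    print(result)        // next
} else {
    print("no switch")
}
```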
-
Publication number: 20240103617
Abstract: Gaze enrollment, including displaying an enrollment progress user indicator, animating movement of user interface elements, changing the appearances of user interface elements, and/or moving a user interface element over time, enable a computer system to more accurately track the gaze of a user of the computer system.
Type: Application
Filed: September 21, 2023
Publication date: March 28, 2024
Inventors: Giancarlo YERKES, Adam L. AMADIO, Amy E. DEDONATO, Kirsty KEATCH, Stephen O. LEMAY, Israel PASTRANA VICENTE, Danielle M. PRICE, William A. SORRENTINO, III, Lynn I. STREJA, Hugo D. VERWEIJ, Hana Z. WANG
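A simple way to drive the enrollment progress indicator the abstract mentions is to track which calibration targets have been captured and report a completion fraction. The Swift sketch below assumes exactly that bookkeeping; the target names and structure are invented and not from the application.

```swift
import Foundation

// Minimal enrollment bookkeeping: targets the user looks at are marked
// captured, and the overall fraction drives a progress indicator.

struct GazeEnrollment {
    private(set) var targets: [String: Bool]

    init(targetIDs: [String]) {
        targets = Dictionary(uniqueKeysWithValues: targetIDs.map { ($0, false) })
    }

    mutating func markCaptured(_ targetID: String) {
        targets[targetID] = true
    }

    var progress: Double {
        guard !targets.isEmpty else { return 1 }
        let done = targets.values.filter { $0 }.count
        return Double(done) / Double(targets.count)
    }

    var isComplete: Bool { progress == 1 }
}

var enrollment = GazeEnrollment(targetIDs: ["topLeft", "topRight", "center",
                                            "bottomLeft", "bottomRight"])
enrollment.markCaptured("center")
print(enrollment.progress)   // 0.2
```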
-
Publication number: 20240103685
Abstract: A computer system displays an immersion control, a volume control, an element configured to permit or restrict breakthrough in different modes of operation of the computer system, and/or an option selectable to cause initiation of display of a representation of content from a second computer system via a display generation component of the computer system. While a second computer system is displaying content, a first computer system detects an input corresponding to a request to display a representation of the content from the second computer system via a display generation component of the first computer system, and in response, initiates a process to display the representation of content and to de-emphasize the content displayed by the second computer system. A first computer system facilitates disambiguation of a second computer system from a plurality of computer systems for display of a representation of content from the second computer system.
Type: Application
Filed: September 22, 2023
Publication date: March 28, 2024
Inventors: Lorena S. PAZMINO, Karen EL ASMAR, Matan STAUBER, Danielle M. PRICE, Hugo D. VERWEIJ, Zoey C. TAYLOR
-
Publication number: 20240103676
Abstract: A gaze virtual object is displayed that is selectable based on attention directed to the gaze virtual object to perform an operation associated with a selectable virtual object. An indication of attention of a user is displayed. An enlarged view of a region of a user interface is displayed. A value of a slider element is adjusted based on attention of a user. A user interface element is moved at a respective rate based on attention of a user. Text is entered into a text entry field in response to speech inputs. A value for a value selection user interface object is updated based on attention of a user. Movement of a virtual object is facilitated based on direct touch interactions. A user input is facilitated for displaying a selection refinement user interface object. A visual indicator is displayed indicating progress toward selecting a virtual object when criteria are met.
Type: Application
Filed: September 22, 2023
Publication date: March 28, 2024
Inventors: Israel PASTRANA VICENTE, Danielle M. PRICE, Evgenii KRIVORUCHKO, Jonathan R. DASCOLA, Jonathan RAVASZ, Marcos ALONSO, Hugo D. VERWEIJ, Zoey C. TAYLOR, Lorena S. PAZMINO
-
Publication number: 20240078079
Abstract: A wearable audio output device includes a first portion configured to be inserted in a user's ear and a second portion that extends from the first portion and includes one or more input devices. The wearable audio output device detects an input via the one or more input devices. In response to detecting the input, and in accordance with a determination that the input is a swipe gesture along the second portion of the wearable audio output device, the wearable audio output device adjusts an output volume for the wearable audio output device based on movement of the swipe gesture along the second portion of the wearable audio output device.
Type: Application
Filed: September 1, 2023
Publication date: March 7, 2024
Inventors: Taylor G. Carrigan, Hugo D. Verweij, Mitchell R. Lerner, Charles C. Hoyt, Pavel Pivonka, Vincenzo O. Guiliani, Patrick L. Coffman
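The behavior reduces to mapping swipe travel along the stem to a volume delta and clamping the result. The Swift sketch below shows one such mapping; the scale factor (millimetres of travel per full volume range) and all names are invented values for illustration only.

```swift
import Foundation

// Map a swipe along the device's stem to a clamped output-volume change.

struct StemSwipe {
    var travel: Double   // signed distance along the stem, in millimetres
}

func adjustedVolume(current: Double,
                    swipe: StemSwipe,
                    millimetresPerFullScale: Double = 40) -> Double {
    let delta = swipe.travel / millimetresPerFullScale
    return min(max(current + delta, 0), 1)   // keep volume in 0...1
}

print(adjustedVolume(current: 0.5, swipe: StemSwipe(travel: 10)))   // 0.75
```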
-
Publication number: 20230394886
Abstract: The present disclosure generally relates to personalized spatial audio profiles and providing personalized spatial audio.
Type: Application
Filed: June 2, 2023
Publication date: December 7, 2023
Inventors: Taylor G. CARRIGAN, Patrick L. COFFMAN, Amy E. DEDONATO, Charles C. HOYT, Christine A. FRANCO, Mitchell L. LERNER, Camille MOUSSETTE, Pavel PIVONKA, Christopher J. SANDERS, Hugo D. VERWEIJ
-
Publication number: 20230352007
Abstract: An example process includes: receiving a first natural language input; initiating, by a digital assistant operating on the electronic device, a first task based on the first natural language input; determining whether the first task is of a predetermined type; and in accordance with a determination that the first task is of a predetermined type: determining whether one or more criteria are satisfied; and providing a response to the first natural language input, where providing the response includes: in accordance with a determination that the one or more criteria are not satisfied, outputting a first sound indicative of the initiated first task and a first verbal response indicative of the initiated first task; and in accordance with a determination that the one or more criteria are satisfied, outputting the first sound without outputting the first verbal response.
Type: Application
Filed: September 21, 2022
Publication date: November 2, 2023
Inventors: Daniel A. CASTELLANI, James N. JONES, Pedro MARI, Jessica J. PECK, Hugo D. VERWEIJ, Garrett L. WEINBERG
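The decision logic in the abstract boils down to: for tasks of the predetermined type, satisfied criteria suppress the verbal confirmation and only the confirmation sound plays. A compact Swift sketch of that branch is below; the names and the fallback behavior for other task types are assumptions, not from the publication.

```swift
import Foundation

// Decision table for the assistant's response, as described in the abstract.

struct AssistantResponse {
    var playConfirmationSound: Bool
    var speakVerbalConfirmation: Bool
}

func response(taskIsPredeterminedType: Bool,
              criteriaSatisfied: Bool) -> AssistantResponse {
    guard taskIsPredeterminedType else {
        // The abstract leaves this branch open; confirm verbally as a placeholder.
        return AssistantResponse(playConfirmationSound: true, speakVerbalConfirmation: true)
    }
    // Satisfied criteria => sound only; unsatisfied => sound plus verbal response.
    return AssistantResponse(playConfirmationSound: true,
                             speakVerbalConfirmation: !criteriaSatisfied)
}

print(response(taskIsPredeterminedType: true, criteriaSatisfied: true))
// playConfirmationSound: true, speakVerbalConfirmation: false
```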
-
Publication number: 20230351869
Abstract: A computing device receives an input that corresponds to a first part of a multi-part operation performed by an application executing on the computing device. In response to receiving the input corresponding to the first part of the multi-part operation, the computing device initiates an ongoing haptic output sequence. After initiating the ongoing haptic output sequence, the computing device receives an input that corresponds to a second part of the multi-part operation. In response to receiving the input corresponding to the second part of the multi-part operation, the computing device terminates the ongoing haptic output sequence.
Type: Application
Filed: July 7, 2023
Publication date: November 2, 2023
Inventors: Camille Moussette, Gary I. Butcher, Hugo D. Verweij, Jonathan Ive
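This is naturally expressed as a small state machine: the first part of the operation starts an ongoing haptic sequence, and the second part stops it. The Swift sketch below uses a stand-in HapticEngine protocol rather than any real haptics API; all names are illustrative assumptions.

```swift
import Foundation

// Stand-in for whatever actually drives the haptic hardware.
protocol HapticEngine {
    func startOngoingSequence()
    func stopOngoingSequence()
}

// Starts the ongoing haptic sequence on the first part of a multi-part
// operation and terminates it on the second part.
final class MultiPartOperationHaptics {
    private let engine: HapticEngine
    private var sequenceRunning = false

    init(engine: HapticEngine) { self.engine = engine }

    func receivedFirstPart() {
        guard !sequenceRunning else { return }
        engine.startOngoingSequence()
        sequenceRunning = true
    }

    func receivedSecondPart() {
        guard sequenceRunning else { return }
        engine.stopOngoingSequence()
        sequenceRunning = false
    }
}

struct LoggingHapticEngine: HapticEngine {
    func startOngoingSequence() { print("haptics: start") }
    func stopOngoingSequence()  { print("haptics: stop") }
}

let haptics = MultiPartOperationHaptics(engine: LoggingHapticEngine())
haptics.receivedFirstPart()    // haptics: start
haptics.receivedSecondPart()   // haptics: stop
```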
-
Patent number: 11804215
Abstract: An example process includes: receiving a first natural language input; initiating, by a digital assistant operating on the electronic device, a first task based on the first natural language input; determining whether the first task is of a predetermined type; and in accordance with a determination that the first task is of a predetermined type: determining whether one or more criteria are satisfied; and providing a response to the first natural language input, where providing the response includes: in accordance with a determination that the one or more criteria are not satisfied, outputting a first sound indicative of the initiated first task and a first verbal response indicative of the initiated first task; and in accordance with a determination that the one or more criteria are satisfied, outputting the first sound without outputting the first verbal response.
Type: Grant
Filed: September 21, 2022
Date of Patent: October 31, 2023
Assignee: Apple Inc.
Inventors: Daniel A. Castellani, James N. Jones, Pedro Mari, Jessica J. Peck, Hugo D. Verweij, Garrett L. Weinberg, Mitchell R. Lerner
-
Patent number: 11790739
Abstract: An electronic device, in response to detecting occurrence of a first condition at the device, generates a first alert that corresponds to a respective application in a first class of applications, the first alert including: a first haptic component and a first audio component composed from an audio waveform that is designated for use by the respective application in the first class of applications. In response to detecting occurrence of a second condition at the device, the device generates a second alert that corresponds to a respective application in a second class of applications different from the first class of applications, the second alert including: a second haptic component and a second audio component composed from an audio waveform that is designated for use by applications in the second class of applications.
Type: Grant
Filed: March 4, 2021
Date of Patent: October 17, 2023
Assignee: APPLE INC.
Inventors: Camille Moussette, Gary I. Butcher, Hugo D. Verweij, Jonathan Ive
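The key structure is a per-class mapping: each application class has a designated audio waveform, and an alert pairs a haptic component with an audio component built from that waveform. The Swift sketch below shows that mapping with invented class names, pattern names, and waveform names; none of them come from the patent.

```swift
import Foundation

// Compose an alert from a haptic component and a class-designated waveform.

enum ApplicationClass {
    case classOne, classTwo
}

struct AppAlert {
    var hapticPattern: String
    var audioWaveform: String
}

func alert(for appClass: ApplicationClass) -> AppAlert {
    switch appClass {
    case .classOne:
        return AppAlert(hapticPattern: "double-tap", audioWaveform: "waveform-class-1")
    case .classTwo:
        return AppAlert(hapticPattern: "double-tap", audioWaveform: "waveform-class-2")
    }
}

print(alert(for: .classTwo).audioWaveform)   // waveform-class-2
```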
-
Publication number: 20230306695
Abstract: The present disclosure relates to techniques for providing computer-generated user experience sessions in an extended reality environment. In some embodiments, a computer system provides a computer-generated user experience session with particles that move based on breathing characteristics of a user. In some embodiments, a computer system provides a computer-generated user experience session with options selected based on characteristics of an XR environment. In some embodiments, a computer system provides a computer-generated user experience session with a soundscape having randomly selected curated sound components.
Type: Application
Filed: February 13, 2023
Publication date: September 28, 2023
Inventors: Philipp ROCKEL, Gary I. BUTCHER, Dorian D. DARGAN, Amy E. DEDONATO, James M. DESSERO, Charles C. HOYT, Matan STAUBER, Hugo D. VERWEIJ
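Two of the described behaviors are easy to sketch: assembling a soundscape from randomly chosen curated components, and scaling a particle field with a breathing signal. The Swift snippet below does both with entirely made-up component names and scaling constants; it is an illustration, not the claimed technique.

```swift
import Foundation

// Randomly pick curated sound components for a soundscape, and map a
// normalized breathing phase to a particle scale.

let curatedComponents = ["rain", "low drone", "wind chimes", "distant birds", "soft pad"]

func soundscape(componentCount: Int) -> [String] {
    Array(curatedComponents.shuffled().prefix(componentCount))
}

/// breathingPhase: 0 = fully exhaled, 1 = fully inhaled.
func particleScale(breathingPhase: Double) -> Double {
    0.8 + 0.4 * min(max(breathingPhase, 0), 1)
}

print(soundscape(componentCount: 3))
print(particleScale(breathingPhase: 0.5))   // 1.0
```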
-
Patent number: 11726634
Abstract: An electronic device with one or more processors and memory, and in communication with a display and an audio system presents, under control of the electronic device, via the audio system, a first audio output, the first audio output having a volume and an audio property other than volume (e.g., a reverberation time, a low-pass filter cutoff, or a stereo balance). While the audio system presents a first audio output, the device receives an input that corresponds to a request to present a second audio output, and in response, the device concurrently presents, via the audio system, an adjusted version of the first audio output in which the audio property other than volume of the first audio output has been adjusted and the second audio output.
Type: Grant
Filed: August 4, 2020
Date of Patent: August 15, 2023
Assignee: APPLE INC.
Inventors: Marcos Alonso Ruiz, Nathan de Vries, David C. Graham, Freddy A. Anzures, Hugo D. Verweij, Afrooz Family, Matthew I. Brown
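In other words, instead of simply lowering the first output's volume when a second output starts, a non-volume property (such as a low-pass filter cutoff) is adjusted. The Swift sketch below illustrates that idea with an invented cutoff value and type names; it is not the patented mixing behavior itself.

```swift
import Foundation

// Adjust a non-volume property of the first output when a second output starts.

struct AudioOutputState {
    var volume: Double            // 0...1, left untouched here
    var lowPassCutoffHz: Double   // the "property other than volume"
}

func adjustedFirstOutput(_ state: AudioOutputState,
                         secondOutputActive: Bool) -> AudioOutputState {
    var adjusted = state
    if secondOutputActive {
        // Keep the volume, but darken the first output so the second reads clearly.
        adjusted.lowPassCutoffHz = min(state.lowPassCutoffHz, 2_000)
    }
    return adjusted
}

let music = AudioOutputState(volume: 0.8, lowPassCutoffHz: 20_000)
print(adjustedFirstOutput(music, secondOutputActive: true).lowPassCutoffHz)   // 2000.0
```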
-
Publication number: 20230195412
Abstract: A headset can include left and right ear-worn speakers and a control. In response to a control input of the control, the ear-worn speakers can be driven with driver signals that include a control sound having a virtual location determined by spatial auditory cues. The control sound can indicate a behavior of the control as a result of the control input. Other aspects are also described and claimed.
Type: Application
Filed: February 13, 2023
Publication date: June 22, 2023
Inventors: Darius A. Satongar, Per Haakan Linus Persson, Hugo D. Verweij, Stuart J. Wood, Andrew E. Greenwood
-
Patent number: 11625222
Abstract: A headset can include left and right ear-worn speakers and a control. In response to a control input of the control, the ear-worn speakers can be driven with driver signals that include a control sound having a virtual location determined by spatial auditory cues. The control sound can indicate a behavior of the control as a result of the control input. Other aspects are also described and claimed.
Type: Grant
Filed: April 22, 2020
Date of Patent: April 11, 2023
Assignee: APPLE INC.
Inventors: Darius A. Satongar, Per Haakan Linus Persson, Hugo D. Verweij, Stuart J. Wood, Andrew E. Greenwood
-
Publication number: 20230080470
Abstract: An electronic device detects a first input directed to a first affordance in a set of one or more affordances displayed on a display. In response to detecting the first input, the electronic device initiates presentation of a first audio output having a first audio profile. The electronic device later detects a second input directed to a second affordance in the set. In response to detecting the second input and if audio alteration criteria are satisfied, the electronic device causes: (i) presentation of altered first audio output having an altered audio profile and (ii) presentation of a second audio output having a second audio profile. In response to detecting the second input and if the audio alteration criteria are not satisfied, the electronic device causes: (i) continued presentation of the first audio output and (ii) presentation of a third audio output having a third audio profile.
Type: Application
Filed: September 16, 2022
Publication date: March 16, 2023
Inventors: Marcos Alonso Ruiz, David C. Graham, Freddy A. Anzures, Hugo D. Verweij, Afrooz Family, Matthew I. Brown
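The branch on the second input reduces to a two-way decision driven by the audio alteration criteria. The short Swift sketch below captures just that decision; the case names are invented shorthand for the two outcomes in the abstract.

```swift
import Foundation

// Which outputs play after the second affordance input, per the abstract.

enum MixDecision {
    case alteredFirstPlusSecond     // criteria satisfied
    case continuedFirstPlusThird    // criteria not satisfied
}

func mixDecision(audioAlterationCriteriaSatisfied: Bool) -> MixDecision {
    audioAlterationCriteriaSatisfied ? .alteredFirstPlusSecond : .continuedFirstPlusThird
}

print(mixDecision(audioAlterationCriteriaSatisfied: false))   // continuedFirstPlusThird
```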
-
Patent number: 11521477
Abstract: An electronic device can output a priming cue prior to outputting an alert on the electronic device. A priming cue can be a haptic priming cue, a visual priming cue, an audio priming cue, or various combinations of these priming cues. The priming cue can be perceived by a user either consciously or subconsciously and can increase a user's perceptual state for the alert.
Type: Grant
Filed: April 2, 2021
Date of Patent: December 6, 2022
Assignee: Apple Inc.
Inventors: Camille Moussette, Hugo D. Verweij
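The timeline is the essential part: a subtle priming cue fires a short interval before the alert itself. The Swift sketch below schedules such a timeline with an invented lead time and invented names; it is an illustration of the idea, not the claimed method.

```swift
import Foundation

// Build the (time, description) events for a priming cue followed by its alert.

enum PrimingCue { case haptic, visual, audio }

struct PrimedAlert {
    var primingCues: [PrimingCue]
    var primingLeadTime: TimeInterval   // seconds between priming cue and alert
    var alertName: String
}

func schedule(_ primed: PrimedAlert, from now: Date = Date()) -> [(Date, String)] {
    var events = primed.primingCues.map { (now, "priming cue: \($0)") }
    events.append((now.addingTimeInterval(primed.primingLeadTime),
                   "alert: \(primed.alertName)"))
    return events
}

let messageAlert = PrimedAlert(primingCues: [.haptic], primingLeadTime: 0.5, alertName: "message")
for (time, label) in schedule(messageAlert) {
    print(time, label)
}
```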
-
Publication number: 20220283643
Abstract: An electronic device with a touch-sensitive surface, a display, tactile output generator(s) for generating tactile outputs, and orientation sensor(s) for determining a current orientation of the electronic device displays a user interface on the display, where the user interface includes an indicator of current device orientation. The device detects movement of the device. In response to detecting the movement: in accordance with a determination that the current orientation of the device meets criteria: the device changes the user interface to indicate that the criteria are met by the current orientation of the device and generates a tactile output upon changing the user interface to indicate that the first criteria are met by the current orientation of the device; and in accordance with a determination that the current orientation of the device does not meet the criteria, the device changes the user interface without generating the tactile output.
Type: Application
Filed: May 23, 2022
Publication date: September 8, 2022
Inventors: Imran A. Chaudhri, Sebastian J. Bauer, Gary I. Butcher, Camille Moussette, Jean-Pierre M. Mouilleseaux, Madeleine S. Cordier, Joshua B. Kopin, Hugo D. Verweij
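A familiar example of this pattern is a level indicator: the UI tracks tilt continuously, and a tactile output fires only at the moment the orientation first satisfies the criteria. The Swift sketch below assumes a tolerance of one degree and fires only on the transition into the "met" state; the threshold and names are illustrative, not the claimed criteria.

```swift
import Foundation

// Fire a tactile output exactly once, when the orientation first meets the criteria.

final class LevelIndicator {
    private(set) var criteriaMet = false
    private let toleranceDegrees: Double

    init(toleranceDegrees: Double = 1.0) {
        self.toleranceDegrees = toleranceDegrees
    }

    /// Returns true exactly when a tactile output should be generated.
    func update(tiltDegrees: Double) -> Bool {
        let nowMet = abs(tiltDegrees) <= toleranceDegrees
        defer { criteriaMet = nowMet }
        return nowMet && !criteriaMet   // only on the transition into "met"
    }
}

let indicator = LevelIndicator()
print(indicator.update(tiltDegrees: 5))     // false
print(indicator.update(tiltDegrees: 0.5))   // true  (generate tactile output)
print(indicator.update(tiltDegrees: 0.4))   // false (already met; no new output)
```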