Patents by Inventor Stephen O'Connor

Stephen O'Connor has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Patent number: 12265690
    Abstract: A computer system displays a first view of a user interface of a first application with a first size at a first position corresponding to a location of at least a portion of a palm that is currently facing a viewpoint corresponding to a view of a three-dimensional environment provided via a display generation component. While displaying the first view, the computer system detects a first input that corresponds to a request to transfer display of the first application from the palm to a first surface that is within a first proximity of the viewpoint. In response to detecting the first input, the computer system displays a second view of the user interface of the first application with a second size and an orientation that corresponds to the first surface at a second position defined by the first surface.
    Type: Grant
    Filed: November 29, 2023
    Date of Patent: April 1, 2025
    Assignee: Apple Inc.
    Inventors: Stephen O. Lemay, Gregg S. Suzuki, Matthew J. Sundstrom, Jonathan Ive, Jeffrey M. Faulkner, Jonathan R. Dascola, William A. Sorrentino, III, Kristi E. S. Bauerly, Giancarlo Yerkes, Peter D. Anton
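A minimal sketch of the interaction described in the abstract above (US 12265690): an application view anchored to the palm is re-anchored, resized, and reoriented onto a nearby surface when a transfer input arrives. All type names, sizes, and the proximity limit are assumptions for illustration, not the patented implementation.

```python
from dataclasses import dataclass

@dataclass
class Pose:
    position: tuple      # (x, y, z) in the three-dimensional environment
    orientation: tuple   # simplified here as a surface normal (nx, ny, nz)

@dataclass
class AppView:
    anchor: str          # "palm" or "surface"
    size: float          # rough diagonal size in metres
    pose: Pose

def transfer_to_surface(view: AppView, surface_pose: Pose, surface_distance: float,
                        proximity_limit: float = 2.0) -> AppView:
    """Re-anchor a palm-attached first view onto a surface within the proximity limit.

    The resulting second view gets a different size and an orientation that matches
    the surface, mirroring the first-input -> second-view behaviour in the abstract.
    """
    if view.anchor != "palm" or surface_distance > proximity_limit:
        return view  # ignore the request; keep the palm-anchored first view
    return AppView(anchor="surface", size=view.size * 4.0, pose=surface_pose)

# Example: a small palm-anchored view moved onto a wall 1.5 m from the viewpoint.
palm_view = AppView("palm", 0.15, Pose((0.0, 1.2, 0.4), (0.0, 0.0, 1.0)))
wall_pose = Pose((0.0, 1.5, 1.5), (0.0, 0.0, -1.0))
print(transfer_to_surface(palm_view, wall_pose, surface_distance=1.5))
```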
  • Patent number: 12265657
    Abstract: In some embodiments, an electronic device navigates between user interfaces based at least on detecting a gaze of the user. In some embodiments, an electronic device enhances interactions with control elements of user interfaces. In some embodiments, an electronic device scrolls representations of categories and subcategories in a coordinated manner. In some embodiments, an electronic device navigates back from user interfaces having different levels of immersion in different ways.
    Type: Grant
    Filed: June 16, 2023
    Date of Patent: April 1, 2025
    Assignee: Apple Inc.
    Inventors: Israel Pastrana Vicente, Jay Moon, Jesse Chand, Jonathan R. Dascola, William A. Sorrentino, III, Stephen O. Lemay, Dorian D. Dargan
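A hedged sketch of one interaction from the abstract above (US 12265657): navigating to a user-interface element once the user's gaze has rested on it. The dwell threshold and element names are invented for illustration only.

```python
import time

DWELL_SECONDS = 0.8  # assumed dwell time before gaze triggers navigation

class GazeNavigator:
    """Tracks which element the user's gaze rests on and navigates to it
    once the gaze has dwelt long enough (illustrative only)."""

    def __init__(self):
        self._target = None
        self._since = None

    def update(self, gazed_element, now=None):
        now = time.monotonic() if now is None else now
        if gazed_element != self._target:
            # Gaze moved to a new element; restart the dwell timer.
            self._target, self._since = gazed_element, now
            return None
        if self._target is not None and now - self._since >= DWELL_SECONDS:
            self._since = now  # reset so the same dwell does not re-trigger
            return f"navigate:{self._target}"
        return None

nav = GazeNavigator()
nav.update("photos", now=0.0)
print(nav.update("photos", now=1.0))  # -> navigate:photos
```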
  • Patent number: 12265364
    Abstract: An electronic device, with a display, a touch-sensitive surface, one or more processors and memory, displays a first representation of a first controllable external device, where the first controllable external device is situated at a location. The device detects a first user input corresponding to a selection of the first representation of the first controllable external device. The device, after detecting the first user input, adds data identifying the first controllable external device and a first state of the first controllable external device in a scene profile.
    Type: Grant
    Filed: March 26, 2020
    Date of Patent: April 1, 2025
    Assignee: Apple Inc.
    Inventors: Patrick L. Coffman, Arian Behzadi, Christopher Patrick Foss, Cyrus Daniel Irani, Ieyuki Kawashima, Stephen O. Lemay, Christopher D. Soli, Christopher Wilson
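A minimal sketch of the "scene profile" idea in the abstract above (US 12265364): after the user selects a representation of a controllable external device, the device identifier and a desired state are added to a named scene. Field names and state formats are assumptions.

```python
from dataclasses import dataclass, field

@dataclass
class SceneProfile:
    """A named collection of (device, state) pairs, as described in the abstract."""
    name: str
    entries: dict = field(default_factory=dict)  # device_id -> desired state

    def add_device(self, device_id: str, state: dict) -> None:
        """Record the selected device and its first state in the scene profile."""
        self.entries[device_id] = state

# Example: tapping a representation of a living-room lamp adds it to a scene.
good_night = SceneProfile("Good Night")
good_night.add_device("lamp-living-room", {"power": "off"})
good_night.add_device("thermostat-hall", {"target_c": 18})
print(good_night)
```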
  • Patent number: 12265703
    Abstract: Features are described for controlling the functionality of an electronic device, where the device operates according to a restricted mode of operation in which functions that the electronic device is otherwise capable of performing are not immediately available.
    Type: Grant
    Filed: May 17, 2022
    Date of Patent: April 1, 2025
    Assignee: Apple Inc.
    Inventors: Heena Ko, Catherine Lee, Reed E. Olsen, Paul W. Salzman, Matthew J. Sundstrom, Kevin Lynch, Stephen O. Lemay, David S. Clark
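A simple sketch, under assumed names, of the restricted-mode gating described above (US 12265703): functions the device can otherwise perform are deferred while restricted, except for an allow-list.

```python
ALWAYS_AVAILABLE = {"clock", "emergency_call"}  # assumed allow-list for illustration

class Device:
    def __init__(self):
        self.restricted = False

    def invoke(self, function_name: str) -> str:
        """Run a function only if the device is unrestricted or the function
        is on the allow-list; otherwise report that it is not immediately available."""
        if self.restricted and function_name not in ALWAYS_AVAILABLE:
            return f"{function_name}: not immediately available (restricted mode)"
        return f"{function_name}: running"

d = Device()
d.restricted = True
print(d.invoke("messages"))  # deferred while restricted
print(d.invoke("clock"))     # still available
```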
  • Publication number: 20250103132
    Abstract: The present disclosure generally relates to techniques and user interfaces for controlling and displaying representations of user in environments, such as during a live communication session and/or a live collaboration session.
    Type: Application
    Filed: December 6, 2024
    Publication date: March 27, 2025
    Inventors: Jason D. RICKWALD, Andrew R. BACON, Kristi E. BAUERLY, Rupert BURTON, Jordan A. CAZAMIAS, Tong CHEN, Shih-Sang CHIU, Stephen O. LEMAY, Jonathan PERRON, William A. SORRENTINO, III, Giancarlo YERKES, Alan C. DYE
  • Publication number: 20250102291
    Abstract: A method includes displaying, on the touch-sensitive display of an electronic device with one or more cameras, a first user interface of an application. The first user interface includes a representation of a field of view of at least one of the one or more cameras, which is updated over time based on changes to current visual data detected by at least one of the one or more cameras. The field of view includes a physical object in a three-dimensional space. A representation of a measurement of the physical object is superimposed on an image of the physical object in the representation of the field of view. While displaying the first user interface, a first touch input in the first user interface displayed on the touch-sensitive display is detected. In response to detecting the first touch input, a process for sharing information about the measurement is initiated.
    Type: Application
    Filed: December 10, 2024
    Publication date: March 27, 2025
    Inventors: Allison W. Dryer, Grant R. Paul, Giancarlo Yerkes, Stephen O. Lemay, Jonathan R. Dascola
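A hedged sketch of the measure-then-share flow in the abstract above (publication 20250102291): a touch on a measurement superimposed over the camera view initiates sharing of information about that measurement. The payload format and type names are assumptions.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Measurement:
    label: str
    value_m: float  # length in metres derived from the camera's field of view

def share_payload(measurement: Measurement) -> dict:
    """Build the information shared when the user taps the measurement
    in the first user interface (format assumed for illustration)."""
    return {
        "type": "measurement",
        "label": measurement.label,
        "value": f"{measurement.value_m:.2f} m",
    }

def on_touch(touched_measurement: Optional[Measurement]):
    """Initiate the sharing process only when the touch lands on a measurement."""
    if touched_measurement is None:
        return None
    return share_payload(touched_measurement)

print(on_touch(Measurement("table width", 1.235)))
```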
  • Publication number: 20250094021
    Abstract: An electronic device invokes a pairing mode for pairing the electronic device with an external device, displays an indication of a movement of the external device that meets respective criteria, detects that the external device has moved so that it meets the respective criteria, and initiates a process for registering the external device as a paired device in response to detecting that the external device has moved so that it meets the respective criteria.
    Type: Application
    Filed: December 6, 2024
    Publication date: March 20, 2025
    Inventors: Lawrence Y. YANG, Christopher WILSON, Wan Si WAN, Gary Ian BUTCHER, Imran CHAUDHRI, Alan C. DYE, Jonathan P. IVE, Stephen O. LEMAY, Lee S. BROUGHTON
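A minimal sketch of the movement-based pairing flow above (publication 20250094021), with invented thresholds and identifiers: the device enters a pairing mode, shows the movement to perform, and registers the external device once its motion meets the criteria.

```python
def meets_pairing_motion(samples, min_peak=2.5):
    """True when the external device's motion samples (accelerometer magnitudes,
    units and threshold assumed) contain a peak satisfying the criteria."""
    return any(abs(s) >= min_peak for s in samples)

class Pairing:
    def __init__(self):
        self.mode = "idle"
        self.paired = []

    def invoke_pairing_mode(self):
        self.mode = "pairing"
        return "Move the other device to pair"  # the displayed movement indication

    def on_external_motion(self, device_id, samples):
        if self.mode == "pairing" and meets_pairing_motion(samples):
            self.paired.append(device_id)   # register as a paired device
            self.mode = "idle"
            return f"registered {device_id}"
        return None

p = Pairing()
print(p.invoke_pairing_mode())
print(p.on_external_motion("watch-123", [0.1, 3.2, 0.4]))
```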
  • Patent number: 12246453
    Abstract: A method of operating an autonomous cleaning robot includes presenting, on a display of a handheld computing device, a graphical representation of a map including a plurality of selectable rooms, presenting, on the display, at least one selectable graphical divider representing boundaries of at least one of the plurality of selectable rooms, the at least one selectable graphical divider being adjustable to change at least one of the boundaries of the plurality of selectable rooms, receiving input, at the handheld computing device, representing a selection of an individual selectable graphical divider, receiving input, at the handheld computing device, representing at least one adjustment to the individual selectable graphical divider, the at least one adjustment including at least one of moving, rotating, or deleting the individual selectable graphical divider, and presenting, on the display, a graphical representation of a map wherein the individual selectable graphical divider is adjusted.
    Type: Grant
    Filed: May 19, 2021
    Date of Patent: March 11, 2025
    Assignee: iRobot Corporation
    Inventors: Vanessa Wiegel, Stephen O'Dea, Kathleen Ann Mahoney, Qunxi Huang, Michael Foster, Brian Ratta, Garrett Strobel, Scott Marchant
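A small sketch of the map-divider adjustment described above (US 12246453): a selectable divider is a boundary segment on the robot's map that can be moved or deleted; the map is then re-presented with the adjusted boundaries. The geometry and data layout are assumptions.

```python
from dataclasses import dataclass

@dataclass
class Divider:
    """A room-boundary segment shown on the map: two endpoints in map coordinates (metres)."""
    a: tuple
    b: tuple

def move(d: Divider, dx: float, dy: float) -> Divider:
    """Translate a selected divider, changing the room boundaries it defines."""
    return Divider((d.a[0] + dx, d.a[1] + dy), (d.b[0] + dx, d.b[1] + dy))

def delete(dividers: list, index: int) -> list:
    """Remove a selected divider from the map entirely."""
    return dividers[:index] + dividers[index + 1:]

# Example: the user selects divider 0 and drags it 0.5 m along x, then deletes divider 1.
dividers = [Divider((0.0, 0.0), (0.0, 3.0)), Divider((2.0, 0.0), (2.0, 3.0))]
dividers[0] = move(dividers[0], 0.5, 0.0)
dividers = delete(dividers, 1)
print(dividers)
```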
  • Publication number: 20250077942
    Abstract: A unified boundary machine learning model is capable of processing perception data received from various types of perception sensors on an autonomous vehicle to generate perceived boundaries of various semantic boundary types. Such perceived boundaries may then be used, for example, to control the autonomous vehicle, e.g., by generating a trajectory therefor. In some instances, the various semantic boundary types detectable by a unified boundary machine learning model may include at least a virtual construction semantic boundary type associated with a virtual boundary formed by multiple spaced apart construction elements, as well as an additional semantic boundary type associated with one or more other types of boundaries such as boundaries defined by physical barriers, painted or taped lines, road edges, etc.
    Type: Application
    Filed: September 3, 2023
    Publication date: March 6, 2025
    Inventors: Mohamed Chaabane, Benjamin Kaplan, Yevgeni Litvin, Stephen O'Hara, Sean Vig
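A hedged sketch of the output side of a unified boundary model as described above (publication 20250077942): perceived boundaries carry a semantic type (including a virtual construction type formed by spaced-apart elements such as cones) and are filtered before being used for trajectory generation. Types, fields, and thresholds are assumptions.

```python
from dataclasses import dataclass
from typing import List, Tuple

BOUNDARY_TYPES = {"virtual_construction", "physical_barrier", "painted_line", "road_edge"}

@dataclass
class PerceivedBoundary:
    semantic_type: str                   # one of BOUNDARY_TYPES
    polyline: List[Tuple[float, float]]  # boundary geometry in the vehicle frame (m)
    confidence: float

def keep_for_planning(boundaries: List[PerceivedBoundary], min_conf: float = 0.5):
    """Filter model outputs before they are used to generate a trajectory."""
    return [b for b in boundaries
            if b.semantic_type in BOUNDARY_TYPES and b.confidence >= min_conf]

# Example: a row of spaced-apart cones forming a virtual construction boundary.
cones = PerceivedBoundary("virtual_construction",
                          [(2.0, 1.0), (6.0, 1.2), (10.0, 1.5)], 0.9)
faint_line = PerceivedBoundary("painted_line", [(0.0, -1.5), (20.0, -1.5)], 0.3)
print(keep_for_planning([cones, faint_line]))
```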
  • Publication number: 20250074451
    Abstract: A unified boundary machine learning model is capable of processing perception data received from various types of perception sensors on an autonomous vehicle to generate perceived boundaries of various semantic boundary types. Such perceived boundaries may then be used, for example, to control the autonomous vehicle, e.g., by generating a trajectory therefor. In some instances, the various semantic boundary types detectable by a unified boundary machine learning model may include at least a virtual construction semantic boundary type associated with a virtual boundary formed by multiple spaced apart construction elements, as well as an additional semantic boundary type associated with one or more other types of boundaries such as boundaries defined by physical barriers, painted or taped lines, road edges, etc.
    Type: Application
    Filed: September 5, 2023
    Publication date: March 6, 2025
    Inventors: Mohamed Chaabane, Benjamin Kaplan, Yevgeni Litvin, Stephen O'Hara, Sean Vig
  • Publication number: 20250078429
    Abstract: In some embodiments, an electronic device automatically updates the orientation of a virtual object in a three-dimensional environment based on a viewpoint of a user in the three-dimensional environment. In some embodiments, an electronic device automatically updates the orientation of a virtual object in a three-dimensional environment based on viewpoints of a plurality of users in the three-dimensional environment. In some embodiments, the electronic device modifies an appearance of a real object that is between a virtual object and the viewpoint of a user in a three-dimensional environment. In some embodiments, the electronic device automatically selects a location for a user in a three-dimensional environment that includes one or more virtual objects and/or other users.
    Type: Application
    Filed: November 19, 2024
    Publication date: March 6, 2025
    Inventors: Jonathan R. DASCOLA, Alexis Henri PALANGIE, Peter D. ANTON, Stephen O. LEMAY, Jonathan RAVASZ, Shih-Sang CHIU, Christopher D. MCKENZIE, Dorian D. DARGAN
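A minimal sketch of the orientation update described above (publication 20250078429): a virtual object's yaw is recomputed so it faces a single user's viewpoint, or the centroid of several viewpoints when a plurality of users share the environment. The 2-D yaw-only treatment is a simplification for illustration.

```python
import math

def facing_yaw(object_xy, viewpoint_xy):
    """Yaw (radians) that turns the object toward a single viewpoint."""
    dx = viewpoint_xy[0] - object_xy[0]
    dy = viewpoint_xy[1] - object_xy[1]
    return math.atan2(dy, dx)

def shared_facing_yaw(object_xy, viewpoints):
    """Yaw toward the centroid of several users' viewpoints, one simple way to
    orient a shared virtual object for a plurality of users."""
    cx = sum(v[0] for v in viewpoints) / len(viewpoints)
    cy = sum(v[1] for v in viewpoints) / len(viewpoints)
    return facing_yaw(object_xy, (cx, cy))

print(round(math.degrees(facing_yaw((0, 0), (1, 1))), 1))                   # 45.0
print(round(math.degrees(shared_facing_yaw((0, 0), [(1, 0), (0, 1)])), 1))  # 45.0
```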
  • Patent number: 12242707
    Abstract: The present disclosure generally relates to selecting and opening applications. An electronic device includes a display and a rotatable input mechanism rotatable around a rotation axis substantially perpendicular to a normal axis that is normal to a face of the display. The device detects a user input, and in response to detecting the user input, displays a first subset of application views of a set of application views. The first subset of application views is displayed along a first dimension of the display substantially perpendicular to both the rotation axis and the normal axis. The device detects a rotation of the rotatable input mechanism, and in response to detecting the rotation, displays a second subset of application views of the set of application views. Displaying the second subset of application views includes moving the set of application views on the display along the first dimension of the display.
    Type: Grant
    Filed: October 16, 2017
    Date of Patent: March 4, 2025
    Assignee: Apple Inc.
    Inventors: Matthew J. Sundstrom, Taylor G. Carrigan, Christopher Patrick Foss, Ieyuki Kawashima, Stephen O. Lemay, Marco Triverio
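A hedged sketch of the behaviour in the abstract above (US 12242707): a subset of application views is shown along one dimension of the display, and rotating the input mechanism moves the set of views to reveal a different subset. View names and the visible count are invented for illustration.

```python
APP_VIEWS = ["Workout", "Music", "Messages", "Weather", "Timer", "Maps"]
VISIBLE = 3  # how many application views fit along the display's first dimension

def visible_subset(first_index: int):
    """The subset of application views currently shown along the display."""
    return APP_VIEWS[first_index:first_index + VISIBLE]

def on_rotation(first_index: int, steps: int) -> int:
    """Rotating the input mechanism moves the set of views along that dimension."""
    return max(0, min(len(APP_VIEWS) - VISIBLE, first_index + steps))

index = 0
print(visible_subset(index))      # first subset of application views
index = on_rotation(index, steps=2)
print(visible_subset(index))      # second subset after the rotation
```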
  • Publication number: 20250065912
    Abstract: A live map system may be used to propagate observations collected by autonomous vehicles operating in an environment to other autonomous vehicles and thereby supplement a digital map used in the control of the autonomous vehicles. In addition, a live map system in some instances may be used to propagate location-based teleassist triggers to autonomous vehicles operating within an environment. A location-based teleassist trigger may be generated, for example, in association with a teleassist session conducted between an autonomous vehicle and a remote teleassist system proximate a particular location, and may be used to automatically trigger a teleassist session for another autonomous vehicle proximate that location and/or to propagate a suggested action to that other autonomous vehicle.
    Type: Application
    Filed: November 8, 2024
    Publication date: February 27, 2025
    Inventors: Niels Joubert, Benjamin Kaplan, Stephen O'Hara
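A minimal sketch of a location-based teleassist trigger as described above (publication 20250065912): a trigger recorded near a location where a teleassist session occurred is matched against another vehicle's position so a session or suggested action can be propagated to it. The distance approximation, radii, and field names are assumptions.

```python
from dataclasses import dataclass
from typing import List
import math

@dataclass
class TeleassistTrigger:
    lat: float
    lon: float
    radius_m: float
    suggested_action: str  # e.g. "start teleassist session" or a suggested maneuver

def distance_m(lat1, lon1, lat2, lon2):
    """Rough planar distance in metres, adequate for the small radii used here."""
    k = 111_000.0  # metres per degree of latitude (approximation)
    return math.hypot((lat1 - lat2) * k,
                      (lon1 - lon2) * k * math.cos(math.radians(lat1)))

def triggers_for(vehicle_lat, vehicle_lon, triggers: List[TeleassistTrigger]):
    """Triggers from earlier teleassist sessions that now apply to this vehicle."""
    return [t for t in triggers
            if distance_m(vehicle_lat, vehicle_lon, t.lat, t.lon) <= t.radius_m]

live_map = [TeleassistTrigger(37.7749, -122.4194, 50.0, "start teleassist session")]
print(triggers_for(37.77495, -122.41945, live_map))
```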
  • Patent number: 12236080
    Abstract: A computer-implemented method for use in conjunction with a computing device with a touch screen display comprises: detecting one or more finger contacts with the touch screen display, applying one or more heuristics to the one or more finger contacts to determine a command for the device, and processing the command. The one or more heuristics comprise: a heuristic for determining that the one or more finger contacts correspond to a one-dimensional vertical screen scrolling command, a heuristic for determining that the one or more finger contacts correspond to a two-dimensional screen translation command, and a heuristic for determining that the one or more finger contacts correspond to a command to transition from displaying a respective item in a set of items to displaying a next item in the set of items.
    Type: Grant
    Filed: January 31, 2022
    Date of Patent: February 25, 2025
    Assignee: Apple Inc.
    Inventors: Stephen O. Lemay, Richard Williamson
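A very rough stand-in for the heuristics named in the abstract above (US 12236080): a finger contact's movement is classified as a one-dimensional vertical scroll, a two-dimensional translation, or a transition to the next item in a set. The angle thresholds are assumptions, not values from the patent.

```python
import math

def classify_gesture(dx: float, dy: float, angle_tolerance_deg: float = 27.0) -> str:
    """Movement close to vertical -> 1-D vertical scroll; a mostly horizontal
    swipe -> transition to the next item; anything else -> 2-D translation."""
    if dx == 0 and dy == 0:
        return "tap"
    # 0 degrees = purely vertical movement, 90 degrees = purely horizontal.
    angle = math.degrees(math.atan2(abs(dx), abs(dy)))
    if angle <= angle_tolerance_deg:
        return "vertical_scroll"
    if angle >= 90 - angle_tolerance_deg:
        return "next_item"
    return "two_dimensional_translation"

print(classify_gesture(2.0, 40.0))   # vertical_scroll
print(classify_gesture(35.0, 30.0))  # two_dimensional_translation
print(classify_gesture(50.0, 3.0))   # next_item
```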
  • Patent number: D1066407
    Type: Grant
    Filed: December 14, 2023
    Date of Patent: March 11, 2025
    Assignee: Apple Inc.
    Inventors: Allison W. Dryer, Alan C. Dye, Stephen O. Lemay, Richard D. Lyons, Grant R. Paul, Giancarlo Yerkes
  • Patent number: D1068807
    Type: Grant
    Filed: June 4, 2023
    Date of Patent: April 1, 2025
    Assignee: Apple Inc.
    Inventors: Jesse Chand, Jonathan R. Dascola, Nathan Gitter, Stephen O. Lemay, Richard D. Lyons, Israel Pastrana Vicente, Lorena S. Pazmino, William A. Sorrentino, Matan Stauber
  • Patent number: D1068810
    Type: Grant
    Filed: June 4, 2023
    Date of Patent: April 1, 2025
    Assignee: Apple Inc.
    Inventors: Gregory M. Apodaca, Jesse Chand, Jonathan R. Dascola, Thalia Jimena Echevarria Fiol, Miquel Estany Rodriguez, Stephen O. Lemay, Richard D. Lyons, James J. Owen, Israel Pastrana Vicente, Lorena S. Pazmino, William A. Sorrentino, III, Matan Stauber, Wan Si Wan
  • Patent number: D1068822
    Type: Grant
    Filed: August 14, 2017
    Date of Patent: April 1, 2025
    Assignee: Apple Inc.
    Inventors: Freddy Anzures, Imran Chaudhri, Greg Christie, Stephen O. Lemay, Mike Matas, Bas Ording, Marcel van Os
  • Patent number: D1068825
    Type: Grant
    Filed: September 19, 2022
    Date of Patent: April 1, 2025
    Assignee: Apple Inc.
    Inventors: Marcos Alonso Ruiz, Patrick Lee Coffman, Richard Dellinger, Stephen O. Lemay, Brandon Walkin
  • Patent number: D1068837
    Type: Grant
    Filed: November 24, 2021
    Date of Patent: April 1, 2025
    Assignee: Apple Inc.
    Inventor: Stephen O. Lemay