Patents by Inventor Stephen O'Connor

Stephen O'Connor has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Patent number: 12265703
    Abstract: Features are described for controlling the functionality of an electronic device, where the device operates according to a restricted mode of operation in which functions that the electronic device is otherwise capable of performing are not immediately available.
    Type: Grant
    Filed: May 17, 2022
    Date of Patent: April 1, 2025
    Assignee: Apple Inc.
    Inventors: Heena Ko, Catherine Lee, Reed E. Olsen, Paul W. Salzman, Matthew J. Sundstrom, Kevin Lynch, Stephen O. Lemay, David S. Clark
  • Patent number: 12265364
    Abstract: An electronic device, with a display, a touch-sensitive surface, one or more processors and memory, displays a first representation of a first controllable external device, where the first controllable external device is situated at a location. The device detects a first user input corresponding to a selection of the first representation of the first controllable external device. The device, after detecting the first user input, adds data identifying the first controllable external device and a first state of the first controllable external device to a scene profile.
    Type: Grant
    Filed: March 26, 2020
    Date of Patent: April 1, 2025
    Assignee: Apple Inc.
    Inventors: Patrick L. Coffman, Arian Behzadi, Christopher Patrick Foss, Cyrus Daniel Irani, Ieyuki Kawashima, Stephen O. Lemay, Christopher D. Soli, Christopher Wilson
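    The flow in this abstract can be pictured with a small sketch. This is a minimal illustration only, assuming hypothetical type and method names; it is not Apple's actual HomeKit API.

    ```swift
    // Hypothetical sketch of the scene-profile flow described above.
    struct AccessoryState {
        let deviceID: String
        let state: String              // e.g. "on", "off", "locked"
    }

    struct SceneProfile {
        var name: String
        var entries: [AccessoryState] = []

        // After the user selects the representation of a controllable device,
        // record the device identifier and its desired state in the scene.
        mutating func add(deviceID: String, state: String) {
            entries.append(AccessoryState(deviceID: deviceID, state: state))
        }
    }

    var goodNight = SceneProfile(name: "Good Night")
    goodNight.add(deviceID: "living-room-lamp", state: "off")
    ```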
  • Publication number: 20250103132
    Abstract: The present disclosure generally relates to techniques and user interfaces for controlling and displaying representations of a user in environments, such as during a live communication session and/or a live collaboration session.
    Type: Application
    Filed: December 6, 2024
    Publication date: March 27, 2025
    Inventors: Jason D. RICKWALD, Andrew R. BACON, Kristi E. BAUERLY, Rupert BURTON, Jordan A. CAZAMIAS, Tong CHEN, Shih-Sang CHIU, Stephen O. LEMAY, Jonathan PERRON, William A. SORRENTINO, III, Giancarlo YERKES, Alan C. DYE
  • Publication number: 20250102291
    Abstract: A method includes displaying, on the touch-sensitive display of an electronic device with one or more cameras, a first user interface of an application. The first user interface includes a representation of a field of view of at least one of the one or more cameras, which is updated over time based on changes to current visual data detected by at least one of the one or more cameras. The field of view includes a physical object in a three-dimensional space. A representation of a measurement of the physical object is superimposed on an image of the physical object in the representation of the field of view. While displaying the first user interface, a first touch input in the first user interface displayed on the touch-sensitive display is detected. In response to detecting the first touch input, a process for sharing information about the measurement is initiated.
    Type: Application
    Filed: December 10, 2024
    Publication date: March 27, 2025
    Inventors: Allison W. Dryer, Grant R. Paul, Giancarlo Yerkes, Stephen O. Lemay, Jonathan R. Dascola
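    As a rough illustration of the sharing step, the sketch below turns a superimposed measurement into shareable text; the measurement type, units, and sharing mechanism are assumptions, not the actual app's implementation.

    ```swift
    import Foundation

    // Hypothetical measurement captured from the camera's field of view.
    struct PhysicalMeasurement {
        let label: String
        let meters: Double
    }

    // In response to the first touch input, produce the information to share.
    func shareText(for measurement: PhysicalMeasurement) -> String {
        String(format: "%@: %.2f m", measurement.label, measurement.meters)
    }

    let tableWidth = PhysicalMeasurement(label: "Table width", meters: 1.42)
    print(shareText(for: tableWidth))   // "Table width: 1.42 m"
    ```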
  • Publication number: 20250094021
    Abstract: An electronic device invokes a pairing mode for pairing the electronic device with an external device, displays an indication of a movement of the external device that meets respective criteria, detects that the external device has moved so that it meets the respective criteria, and initiates a process for registering the external device as a paired device in response to detecting that the external device has moved so that it meets the respective criteria.
    Type: Application
    Filed: December 6, 2024
    Publication date: March 20, 2025
    Inventors: Lawrence Y. YANG, Christopher WILSON, Wan Si WAN, Gary Ian BUTCHER, Imran CHAUDHRI, Alan C. DYE, Jonathan P. IVE, Stephen O. LEMAY, Lee S. BROUGHTON
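    A minimal sketch of the pairing check described in this abstract, assuming a hypothetical motion-sample type and an illustrative rotation threshold; the actual criteria and APIs are not specified here.

    ```swift
    // One sample of the external device's reported movement.
    struct MotionSample {
        let rotationDegrees: Double
    }

    // The "respective criteria": e.g. the external device was rotated far enough in total.
    func meetsPairingCriteria(_ samples: [MotionSample], threshold: Double = 90) -> Bool {
        samples.reduce(0) { $0 + $1.rotationDegrees } >= threshold
    }

    func handlePairingMode(samples: [MotionSample]) {
        if meetsPairingCriteria(samples) {
            print("Initiate process to register the external device as paired")
        } else {
            print("Keep displaying the indication of the required movement")
        }
    }
    ```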
  • Patent number: 12246453
    Abstract: A method of operating an autonomous cleaning robot includes presenting, on a display of a handheld computing device, a graphical representation of a map including a plurality of selectable rooms, presenting, on the display, at least one selectable graphical divider representing boundaries of at least one of the plurality of selectable rooms, the at least one selectable graphical divider being adjustable to change at least one of the boundaries of the plurality of selectable rooms, receiving input, at the handheld computing device, representing a selection of an individual selectable graphical divider, receiving input, at the handheld computing device, representing at least one adjustment to the individual selectable graphical divider, the at least one adjustment including at least one of moving, rotating, or deleting the individual selectable graphical divider, and presenting, on the display, a graphical representation of a map wherein the individual selectable graphical divider is adjusted.
    Type: Grant
    Filed: May 19, 2021
    Date of Patent: March 11, 2025
    Assignee: iRobot Corporation
    Inventors: Vanessa Wiegel, Stephen O'Dea, Kathleen Ann Mahoney, Qunxi Huang, Michael Foster, Brian Ratta, Garrett Strobel, Scott Marchant
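    The divider adjustments named in this abstract (moving, rotating, or deleting a selectable graphical divider) can be sketched as below; the geometry types are simplified assumptions, not iRobot's implementation.

    ```swift
    // A divider drawn on the map, defined by a position and an angle.
    struct Divider {
        var x: Double
        var y: Double
        var angleDegrees: Double
    }

    enum DividerAdjustment {
        case move(dx: Double, dy: Double)
        case rotate(byDegrees: Double)
        case delete
    }

    // Apply one adjustment; deleting returns nil so the divider disappears from the map.
    func apply(_ adjustment: DividerAdjustment, to divider: Divider?) -> Divider? {
        guard var d = divider else { return nil }
        switch adjustment {
        case .move(let dx, let dy):
            d.x += dx
            d.y += dy
            return d
        case .rotate(let degrees):
            d.angleDegrees += degrees
            return d
        case .delete:
            return nil
        }
    }
    ```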
  • Publication number: 20250077942
    Abstract: A unified boundary machine learning model is capable of processing perception data received from various types of perception sensors on an autonomous vehicle to generate perceived boundaries of various semantic boundary types. Such perceived boundaries may then be used, for example, to control the autonomous vehicle, e.g., by generating a trajectory therefor. In some instances, the various semantic boundary types detectable by a unified boundary machine learning model may include at least a virtual construction semantic boundary type associated with a virtual boundary formed by multiple spaced apart construction elements, as well as an additional semantic boundary type associated with one or more other types of boundaries such as boundaries defined by physical barriers, painted or taped lines, road edges, etc.
    Type: Application
    Filed: September 3, 2023
    Publication date: March 6, 2025
    Inventors: Mohamed Chaabane, Benjamin Kaplan, Yevgeni Litvin, Stephen O'Hara, Sean Vig
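    Conceptually, the unified model's output can be represented as boundaries tagged with a semantic type, which downstream planning code treats uniformly. The sketch below is illustrative only; the types and fields are assumptions, not the model's actual interface.

    ```swift
    // Semantic boundary types a unified model might emit.
    enum SemanticBoundaryType {
        case virtualConstruction    // a virtual line formed by spaced-apart cones or barrels
        case physicalBarrier
        case paintedLine
        case roadEdge
    }

    // One perceived boundary: a typed polyline in the vehicle's frame.
    struct PerceivedBoundary {
        let type: SemanticBoundaryType
        let polyline: [(x: Double, y: Double)]
    }

    // A trajectory generator can query boundaries of any semantic type the same way.
    func boundaries(ofType type: SemanticBoundaryType,
                    in perceived: [PerceivedBoundary]) -> [PerceivedBoundary] {
        perceived.filter { $0.type == type }
    }
    ```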
  • Publication number: 20250074451
    Abstract: A unified boundary machine learning model is capable of processing perception data received from various types of perception sensors on an autonomous vehicle to generate perceived boundaries of various semantic boundary types. Such perceived boundaries may then be used, for example, to control the autonomous vehicle, e.g., by generating a trajectory therefor. In some instances, the various semantic boundary types detectable by a unified boundary machine learning model may include at least a virtual construction semantic boundary type associated with a virtual boundary formed by multiple spaced apart construction elements, as well as an additional semantic boundary type associated with one or more other types of boundaries such as boundaries defined by physical barriers, painted or taped lines, road edges, etc.
    Type: Application
    Filed: September 5, 2023
    Publication date: March 6, 2025
    Inventors: Mohamed Chaabane, Benjamin Kaplan, Yevgeni Litvin, Stephen O'Hara, Sean Vig
  • Publication number: 20250078429
    Abstract: In some embodiments, an electronic device automatically updates the orientation of a virtual object in a three-dimensional environment based on a viewpoint of a user in the three-dimensional environment. In some embodiments, an electronic device automatically updates the orientation of a virtual object in a three-dimensional environment based on viewpoints of a plurality of users in the three-dimensional environment. In some embodiments, the electronic device modifies an appearance of a real object that is between a virtual object and the viewpoint of a user in a three-dimensional environment. In some embodiments, the electronic device automatically selects a location for a user in a three-dimensional environment that includes one or more virtual objects and/or other users.
    Type: Application
    Filed: November 19, 2024
    Publication date: March 6, 2025
    Inventors: Jonathan R. DASCOLA, Alexis Henri PALANGIE, Peter D. ANTON, Stephen O. LEMAY, Jonathan RAVASZ, Shih-Sang CHIU, Christopher D. MCKENZIE, Dorian D. DARGAN
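    The automatic-orientation behavior in this abstract amounts to turning a virtual object so it faces the user's viewpoint. The sketch below shows one way such a yaw angle could be computed; the math and types are simplified assumptions.

    ```swift
    import Foundation

    struct Point3D {
        var x, y, z: Double
    }

    // Yaw (in radians, about the vertical axis) that turns an object at
    // `objectPosition` so its front faces the user's viewpoint.
    func facingYaw(objectPosition: Point3D, viewpoint: Point3D) -> Double {
        atan2(viewpoint.x - objectPosition.x, viewpoint.z - objectPosition.z)
    }
    ```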
  • Patent number: 12242707
    Abstract: The present disclosure generally relates to selecting and opening applications. An electronic device includes a display and a rotatable input mechanism rotatable around a rotation axis substantially perpendicular to a normal axis that is normal to a face of the display. The device detects a user input, and in response to detecting the user input, displays a first subset of application views of a set of application views. The first subset of application views is displayed along a first dimension of the display substantially perpendicular to both the rotation axis and the normal axis. The device detects a rotation of the rotatable input mechanism, and in response to detecting the rotation, displays a second subset of application views of the set of application views. Displaying the second subset of application views includes moving the set of application views on the display along the first dimension of the display.
    Type: Grant
    Filed: October 16, 2017
    Date of Patent: March 4, 2025
    Assignee: Apple Inc.
    Inventors: Matthew J. Sundstrom, Taylor G. Carrigan, Christopher Patrick Foss, Ieyuki Kawashima, Stephen O. Lemay, Marco Triverio
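    The paging behavior in this abstract (rotation of the input mechanism shifts which subset of application views is shown) can be sketched as below; the names and the detent-based model are hypothetical.

    ```swift
    struct AppViewPager {
        let applicationViews: [String]
        let visibleCount: Int
        var firstVisibleIndex = 0

        // The subset of application views currently displayed along the first dimension.
        var visibleSubset: ArraySlice<String> {
            let end = min(firstVisibleIndex + visibleCount, applicationViews.count)
            return applicationViews[firstVisibleIndex..<end]
        }

        // Each detent of rotation moves the set of application views along that dimension.
        mutating func rotate(byDetents detents: Int) {
            let maxStart = max(applicationViews.count - visibleCount, 0)
            firstVisibleIndex = min(max(firstVisibleIndex + detents, 0), maxStart)
        }
    }

    var pager = AppViewPager(applicationViews: ["Mail", "Maps", "Music", "News"], visibleCount: 2)
    pager.rotate(byDetents: 1)   // visibleSubset is now ["Maps", "Music"]
    ```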
  • Publication number: 20250065912
    Abstract: A live map system may be used to propagate observations collected by autonomous vehicles operating in an environment to other autonomous vehicles and thereby supplement a digital map used in the control of the autonomous vehicles. In addition, a live map system in some instances may be used to propagate location-based teleassist triggers to autonomous vehicles operating within an environment. A location-based teleassist trigger may be generated, for example, in association with a teleassist session conducted between an autonomous vehicle and a remote teleassist system proximate a particular location, and may be used to automatically trigger a teleassist session for another autonomous vehicle proximate that location and/or to propagate a suggested action to that other autonomous vehicle.
    Type: Application
    Filed: November 8, 2024
    Publication date: February 27, 2025
    Inventors: Niels Joubert, Benjamin Kaplan, Stephen O'Hara
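    As a rough sketch of the location-based trigger idea: a trigger recorded near a location fires for any vehicle that later comes within range of it. The structures and the radius check below are assumptions for illustration only.

    ```swift
    struct Location {
        var x: Double
        var y: Double
    }

    struct TeleassistTrigger {
        let location: Location
        let suggestedAction: String
        let radiusMeters: Double
    }

    // Triggers whose radius covers the vehicle's current position.
    func activeTriggers(near vehicle: Location,
                        in triggers: [TeleassistTrigger]) -> [TeleassistTrigger] {
        triggers.filter { trigger in
            let dx = trigger.location.x - vehicle.x
            let dy = trigger.location.y - vehicle.y
            return (dx * dx + dy * dy).squareRoot() <= trigger.radiusMeters
        }
    }
    ```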
  • Patent number: 12236080
    Abstract: A computer-implemented method for use in conjunction with a computing device with a touch screen display comprises: detecting one or more finger contacts with the touch screen display, applying one or more heuristics to the one or more finger contacts to determine a command for the device, and processing the command. The one or more heuristics comprise: a heuristic for determining that the one or more finger contacts correspond to a one-dimensional vertical screen scrolling command, a heuristic for determining that the one or more finger contacts correspond to a two-dimensional screen translation command, and a heuristic for determining that the one or more finger contacts correspond to a command to transition from displaying a respective item in a set of items to displaying a next item in the set of items.
    Type: Grant
    Filed: January 31, 2022
    Date of Patent: February 25, 2025
    Assignee: Apple Inc.
    Inventors: Stephen O. Lemay, Richard Williamson
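    The heuristics in this abstract classify an initial finger movement into one of a few commands. The sketch below is a simplified illustration; the thresholds and classification rules are assumptions, not the patented heuristics themselves.

    ```swift
    import Foundation

    enum TouchCommand {
        case verticalScroll
        case twoDimensionalTranslation
        case nextItem
    }

    // Classify an initial finger movement (dx, dy in points) into a command.
    func classify(dx: Double, dy: Double) -> TouchCommand {
        let angleFromVertical = atan2(abs(dx), abs(dy)) * 180 / .pi
        if angleFromVertical <= 27 {
            // Nearly vertical: lock into one-dimensional vertical scrolling.
            return .verticalScroll
        } else if abs(dx) > abs(dy) * 2 {
            // Predominantly horizontal swipe: move to the next item in the set.
            return .nextItem
        } else {
            // Otherwise treat it as a free two-dimensional translation.
            return .twoDimensionalTranslation
        }
    }
    ```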
  • Patent number: 12236036
    Abstract: Systems and methods for arranging applications on an electronic device with a touch-sensitive display. An example method includes, at an electronic device with a touch-sensitive display, concurrently displaying a first application window and a second application window. The method includes receiving an input directed to the first application window followed by a drag input. The method also includes, in response to detecting the drag input, moving the first application window in accordance with the drag input and enlarging the second application window to an enlarged size that is larger than a size at which the second application window was displayed prior to detecting the first input.
    Type: Grant
    Filed: June 16, 2020
    Date of Patent: February 25, 2025
    Assignee: Apple Inc.
    Inventors: Stephen O. Lemay, Christopher P. Foss, Woo-Ram Lee, Lawrence Y. Yang, Caelan G. Stack
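    A minimal sketch of the window behavior in this abstract: dragging the first window also enlarges the second one beyond its previous size. The frame type and the enlargement rule are assumptions for illustration.

    ```swift
    struct WindowFrame {
        var x, y, width, height: Double
    }

    // Move the first window with the drag and enlarge the second window.
    func handleDrag(first: inout WindowFrame,
                    second: inout WindowFrame,
                    dx: Double, dy: Double,
                    screenWidth: Double) {
        first.x += dx
        first.y += dy
        // The enlarged size is larger than the size displayed before the input,
        // here simply grown toward the full screen width.
        second.width = max(second.width, screenWidth)
    }
    ```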
  • Patent number: 12236952
    Abstract: At an electronic device with a display, a microphone, and an input device: while the display is on, receiving user input via the input device, the user input meeting a predetermined condition; in accordance with receiving the user input meeting the predetermined condition, sampling audio input received via the microphone; determining whether the audio input comprises a spoken trigger; and in accordance with a determination that audio input comprises the spoken trigger, triggering a virtual assistant session.
    Type: Grant
    Filed: September 26, 2023
    Date of Patent: February 25, 2025
    Assignee: Apple Inc.
    Inventors: Stephen O. Lemay, Brandon J. Newendorp, Jonathan R. Dascola
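    The flow in this abstract (a qualifying input starts audio sampling, and a detected spoken trigger starts an assistant session) can be sketched as follows; the function names, the press-and-hold condition, and the trigger phrase are placeholders, not Apple's implementation.

    ```swift
    import Foundation

    struct ButtonEvent {
        let durationSeconds: Double
    }

    // The predetermined condition: e.g. a press-and-hold on the input device.
    func inputMeetsCondition(_ event: ButtonEvent) -> Bool {
        event.durationSeconds >= 0.5
    }

    // Placeholder check for a spoken trigger in the sampled audio's transcript.
    func containsSpokenTrigger(_ transcript: String) -> Bool {
        transcript.lowercased().contains("hey assistant")
    }

    func handle(event: ButtonEvent, sampleAudioTranscript: () -> String) {
        guard inputMeetsCondition(event) else { return }
        let transcript = sampleAudioTranscript()     // sample audio via the microphone
        if containsSpokenTrigger(transcript) {
            print("Triggering virtual assistant session")
        }
    }
    ```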
  • Publication number: 20250060871
    Abstract: The present disclosure generally relates to user interfaces for managing input mechanisms. In some examples, the electronic device transitions from a first mode into a second mode in accordance with a determination that the one or more characteristics of a user input detected via a second input mechanism of the electronic device meet a set of predefined criteria. In the first mode, a first input mechanism of the electronic device is restricted for user input. In the second mode, the first input mechanism of the electronic device is unrestricted for user input.
    Type: Application
    Filed: November 5, 2024
    Publication date: February 20, 2025
    Inventors: Bronwyn JONES, Gary Ian BUTCHER, Stephen O. LEMAY, Nathan DE VRIES, Molly Pray WIEBE, Aled Hywel WILLIAMS
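    A minimal sketch of the mode transition in this abstract: while the first input mechanism is restricted, characteristics of input on a second mechanism decide whether to lift the restriction. The criteria below (a full rotation within a few seconds) are illustrative assumptions.

    ```swift
    enum DeviceMode {
        case firstInputRestricted
        case firstInputUnrestricted
    }

    struct SecondMechanismInput {
        let rotationDegrees: Double
        let durationSeconds: Double
    }

    // Transition to the second mode only if the input meets the predefined criteria.
    func nextMode(current: DeviceMode, input: SecondMechanismInput) -> DeviceMode {
        guard current == .firstInputRestricted else { return current }
        let meetsCriteria = input.rotationDegrees >= 360 && input.durationSeconds <= 3
        return meetsCriteria ? .firstInputUnrestricted : current
    }
    ```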
  • Patent number: 12229475
    Abstract: Systems and methods for a media content system. A media content provider includes storage for storing and serving video content to subscribers. The media content provider records and/or otherwise stores video content from around the world. The system includes display devices configured to identify and tailor content to multiple individual users. Each user may have individual settings which provide for a customized viewing environment and experience. The system is configured to identify users of the system in order to tailor the content as appropriate. In addition, identification of users allows for the identification of the subscription content that corresponds to the user. Based upon identification of a user and corresponding subscription, the user's subscription content may be streamed to any location. In this manner, the user's subscribed content may follow the user from home to a friend's house, or elsewhere.
    Type: Grant
    Filed: November 7, 2023
    Date of Patent: February 18, 2025
    Assignee: Apple Inc.
    Inventors: Gregory N. Christie, Alessandro Sabatelli, William M. Bachman, Imran Chaudhri, Jeffrey Robbin, Jim Young, Joe Howard, Marcel Van Os, Patrick L. Coffman, Stephen O. Lemay, Jeffrey Ma, Lynne Kress
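    The subscription-following idea in this abstract reduces to: identify the user at whatever display device they are in front of, then resolve that user's own subscription and settings. The sketch below uses hypothetical names and is not an actual service API.

    ```swift
    struct Subscription {
        let channels: [String]
    }

    struct UserProfile {
        let name: String
        let subscription: Subscription
        let preferredVolume: Int
    }

    // Profiles keyed by an identity token produced by whatever identification the device uses.
    let profiles = [
        "alice-identity-token": UserProfile(name: "Alice",
                                            subscription: Subscription(channels: ["News", "Sports"]),
                                            preferredVolume: 40)
    ]

    // Identification at any device (home, a friend's house, elsewhere) selects that user's content.
    func contentForIdentifiedUser(identityToken: String) -> [String] {
        profiles[identityToken]?.subscription.channels ?? []
    }
    ```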
  • Publication number: 20250053225
    Abstract: A computer system, while displaying a first view of a three-dimensional environment that corresponds to a first viewpoint of a user, displays a first user interface object at a first position in the three-dimensional environment that has a first spatial relationship with the first viewpoint of the user. While displaying the first view of the three-dimensional environment including the first user interface object at the first position in the three-dimensional environment, the computer system, in response to detecting a first input that is directed to at least a first portion of the first user interface object, displays a second user interface object at a second position in the three-dimensional environment and moves the first user interface object from the first position to a third position in the three-dimensional environment that has a greater distance from the first viewpoint of the user than the first position in the three-dimensional environment.
    Type: Application
    Filed: October 24, 2024
    Publication date: February 13, 2025
    Inventors: Lorena S. Pazmino, Jonathan R. Dascola, Israel Pastrana Vicente, Matan Stauber, Jesse Chand, William A. Sorrentino, III, Richard D. Lyons, Stephen O. Lemay
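    The repositioning described in this abstract pushes the first object farther from the viewpoint along the line between them. The sketch below shows one way to compute that; the math and types are simplified assumptions.

    ```swift
    struct Position {
        var x, y, z: Double
    }

    // Return the object's position moved `extraDistance` farther from the viewpoint,
    // along the existing viewpoint-to-object direction.
    func pushedBack(_ object: Position, viewpoint: Position, by extraDistance: Double) -> Position {
        let dx = object.x - viewpoint.x
        let dy = object.y - viewpoint.y
        let dz = object.z - viewpoint.z
        let distance = (dx * dx + dy * dy + dz * dz).squareRoot()
        guard distance > 0 else { return object }
        let scale = (distance + extraDistance) / distance
        return Position(x: viewpoint.x + dx * scale,
                        y: viewpoint.y + dy * scale,
                        z: viewpoint.z + dz * scale)
    }
    ```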
  • Patent number: D1066407
    Type: Grant
    Filed: December 14, 2023
    Date of Patent: March 11, 2025
    Assignee: Apple Inc.
    Inventors: Allison W. Dryer, Alan C. Dye, Stephen O. Lemay, Richard D. Lyons, Grant R. Paul, Giancarlo Yerkes
  • Patent number: D1068807
    Type: Grant
    Filed: June 4, 2023
    Date of Patent: April 1, 2025
    Assignee: Apple Inc.
    Inventors: Jesse Chand, Jonathan R. Dascola, Nathan Gitter, Stephen O. Lemay, Richard D. Lyons, Israel Pastrana Vicente, Lorena S. Pazmino, William A. Sorrentino, Matan Stauber
  • Patent number: D1068825
    Type: Grant
    Filed: September 19, 2022
    Date of Patent: April 1, 2025
    Assignee: Apple Inc.
    Inventors: Marcos Alonso Ruiz, Patrick Lee Coffman, Richard Dellinger, Stephen O. Lemay, Brandon Walkin