Patents by Inventor Connor Smith

Connor Smith has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Patent number: 11968056
    Abstract: Avatars may be displayed in a multiuser communication session using various spatial modes. One technique for presenting avatars includes presenting avatars such that an attention direction of the avatar is retargeted to match the intent of the remote user corresponding to the avatar. Another technique for presenting avatars includes a pinned mode in which one or more avatars remain displayed in a consistent spatial relationship to a local user regardless of movements of the local user. Another technique for presenting avatars includes providing user-selectable presentation modes between a room scale mode and a stationary mode for presenting a representation of a multiuser communication session.
    Type: Grant
    Filed: March 23, 2023
    Date of Patent: April 23, 2024
    Assignee: Apple Inc.
    Inventors: Connor A. Smith, Bruno M. Sommer, Jonathan R. Dascola, Nicholas W. Henderson, Timofey Grechkin
  • Patent number: 11947733
    Abstract: Techniques for displaying a virtual object in an enhanced reality setting in accordance with a physical muting mode being active are described. In some examples, a system obtains context data for one or more physical elements in a physical setting, wherein the context data includes first context data and second context data that is different from the first context data. In some examples, in response to obtaining the context data for the one or more physical elements in the physical setting, a system causes display of a virtual object that represents the one or more physical elements using the first context data without using the second context data, in accordance with a determination that a physical muting mode is active.
    Type: Grant
    Filed: June 29, 2021
    Date of Patent: April 2, 2024
    Assignee: Apple Inc.
    Inventors: Clément Pierre Nicolas Boissière, Shaun Budhram, Tucker Bull Morgan, Bruno M. Sommer, Connor A. Smith
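    The conditional rendering described in this entry (use the first context data but not the second when a physical muting mode is active) can be pictured with the short Python sketch below. It is purely illustrative, not code from the patent, and the particular context fields are invented for the example.

      def render_physical_element(context: dict, muting_mode_active: bool) -> dict:
          """Build a virtual stand-in for a physical element, omitting muted context."""
          first_context = {"position": context["position"], "size": context["size"]}
          second_context = {"appearance": context.get("appearance")}
          if muting_mode_active:
              # Physical muting mode: represent the element using first context data only.
              return {"virtual_object": first_context}
          # Muting mode inactive: both kinds of context data may be used.
          return {"virtual_object": {**first_context, **second_context}}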
  • Publication number: 20240103677
    Abstract: A computer system optionally displays a user interface object that reveals content based on whether the content is private or shared. A computer system optionally displays a user interface object that includes shared content based on whether participants have entitlement to the content. A computer system optionally displays a sharing indicator that indicates that the respective content is shared with one or more other participants.
    Type: Application
    Filed: September 13, 2023
    Publication date: March 28, 2024
    Inventors: Christopher D. MCKENZIE, Jay MOON, Steve O. LEMAY, Rajat BHARDWAJ, Shih-Sang CHIU, Connor A. SMITH, Joseph P. CERRA, Willem MATTELAER
  • Publication number: 20240095984
    Abstract: Some examples of the disclosure are directed to systems and methods for presenting content in a three-dimensional environment by one or more electronic devices in a multi-user communication session. In some examples, a first electronic device and a second electronic device are communicatively linked in a multi-user communication session, wherein the first electronic device and the second electronic device are configured to display a three-dimensional environment, respectively. In some examples, the first electronic device and the second electronic device are grouped in a first spatial group within the multi-user communication session. In some examples, if the second electronic device determines that the first electronic device changes states (and/or vice versa), the user of the first electronic device and the user of the second electronic device are no longer grouped into the same spatial group within the multi-user communication session.
    Type: Application
    Filed: September 8, 2023
    Publication date: March 21, 2024
    Inventors: Miao REN, Shih-Sang CHIU, Connor A. SMITH, Joseph P. CERRA, Willem MATTELAER
  • Publication number: 20240094863
    Abstract: Some examples of the disclosure are directed to methods for application-based spatial refinement in a multi-user communication session including a first electronic device and a second electronic device. While the first electronic device is presenting a three-dimensional environment, the first electronic device receives an input corresponding to a request to move a shared object in the three-dimensional environment. In accordance with a determination that the shared object is an object of a first type, the first electronic device moves the shared object and an avatar of a user in the three-dimensional environment in accordance with the input. In accordance with a determination that the shared object is an object of a second type, different from the first type, and the input is a first type of input, the first electronic device moves the shared object in the three-dimensional environment in accordance with the input, without moving the avatar.
    Type: Application
    Filed: September 11, 2023
    Publication date: March 21, 2024
    Inventors: Connor A. SMITH, Christopher D. MCKENZIE, Nathan GITTER
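    The entry above branches on object type and input type when a shared object is moved. A minimal, hypothetical Python sketch of that decision follows; the class and function names are assumptions, not taken from the application.

      from dataclasses import dataclass
      from enum import Enum, auto

      class ObjectType(Enum):
          FIRST = auto()   # assumed: content that carries the avatar with it
          SECOND = auto()  # assumed: content that moves independently of the avatar

      class InputType(Enum):
          FIRST = auto()
          OTHER = auto()

      @dataclass
      class SceneState:
          object_position: tuple
          avatar_position: tuple

      def apply_move(state: SceneState, obj_type: ObjectType,
                     input_type: InputType, delta: tuple) -> SceneState:
          """Move a shared object, moving the avatar with it only for the first type."""
          moved_object = tuple(p + d for p, d in zip(state.object_position, delta))
          if obj_type is ObjectType.FIRST:
              # First object type: the object and the user's avatar move together.
              moved_avatar = tuple(p + d for p, d in zip(state.avatar_position, delta))
              return SceneState(moved_object, moved_avatar)
          if input_type is InputType.FIRST:
              # Second object type with the first input type: move the object only.
              return SceneState(moved_object, state.avatar_position)
          # Other combinations are left unchanged in this simplified sketch.
          return state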
  • Publication number: 20240084268
    Abstract: Provided herein are purification, production and manufacturing methods for recombinant viral vector particles such as recombinant adeno-associated viral (rAAV) vector particles substantially free of empty viral particles; a population of recombinant adeno-associated virus (rAAV) particles purified using the method described herein; and a pharmaceutical composition comprising the purified rAAV.
    Type: Application
    Filed: January 21, 2022
    Publication date: March 14, 2024
    Applicant: ASKLEPIOS BIOPHARMACEUTICAL, INC.
    Inventors: Tamara Zekovic, Connor Smith, Paul Greback-Clarke, Eric Vorst, Eva Graham, Jacob Smith, Irnela Bajrovic, Jordan Hobbs, Robert Tikkanen, Josh Grieger
  • Patent number: 11912545
    Abstract: A wireless hoist system including a first hoist device having a first motor and a first wireless transceiver and a second hoist device having a second motor and a second wireless transceiver. The wireless hoist system includes a controller in wireless communication with the first wireless transceiver and the second wireless transceiver. The controller is configured to receive a user input and determine a first operation parameter and a second operation parameter based on the user input. The controller is also configured to provide, wirelessly, a first control signal indicative of the first operation parameter to the first hoist device and provide, wirelessly, a second control signal indicative of the second operation parameter to the second hoist device. The first hoist device operates based on the first control signal and the second hoist device operates based on the second control signal.
    Type: Grant
    Filed: June 26, 2020
    Date of Patent: February 27, 2024
    Assignee: Milwaukee Electric Tool Corporation
    Inventors: Matthew Post, Gareth Mueckl, Matthew N. Thurin, Joshua D. Widder, Timothy J. Bartlett, Patrick D. Gallagher, Jarrod P. Kotes, Karly M. Schober, Kenneth W. Wolf, Terry L. Timmons, Mallory L. Marksteiner, Jonathan L. Lambert, Ryan A. Spiering, Jeremy R. Ebner, Benjamin A. Smith, James Wekwert, Brandon L. Yahr, Troy C. Thorson, Connor P. Sprague, John E. Koller, Evan M. Glanzer, John S. Scott, William F. Chapman, III, Timothy R. Obermann
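    As a rough illustration of the control flow this abstract describes (one user input, two per-hoist operation parameters, two wireless control signals), here is a hypothetical Python sketch. The transport callback and the way the parameters are derived are assumptions for the example.

      from dataclasses import dataclass

      @dataclass
      class OperationParameter:
          speed: float    # assumed parameter: hoist speed
          direction: int  # assumed parameter: +1 raise, -1 lower

      class HoistController:
          def __init__(self, send_fn):
              # send_fn(device_id, parameter) stands in for the wireless transceiver link.
              self._send = send_fn

          def handle_user_input(self, user_input: dict) -> None:
              # Derive a distinct operation parameter for each hoist from one user input.
              first = OperationParameter(user_input["speed"], user_input["direction"])
              second = OperationParameter(user_input["speed"] * user_input.get("trim", 1.0),
                                          user_input["direction"])
              # Provide each control signal wirelessly to its hoist device.
              self._send("hoist_1", first)
              self._send("hoist_2", second)

      # Usage with a stub transport:
      controller = HoistController(lambda dev, p: print(dev, p))
      controller.handle_user_input({"speed": 0.5, "direction": 1, "trim": 0.9})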
  • Patent number: 11869503
    Abstract: Example techniques relate to offline voice control. A local voice input engine may process voice inputs locally when processing voice inputs via a cloud-based voice assistant service is not possible. Some techniques involve local (on-device) voice-assisted set-up of a cloud-based voice assistant service. Further example techniques involve local voice-assisted troubleshooting of the cloud-based voice assistant service. Other techniques relate to interactions between local and cloud-based processing of voice inputs on a device that supports both local and cloud-based processing.
    Type: Grant
    Filed: December 13, 2021
    Date of Patent: January 9, 2024
    Assignee: Sonos, Inc.
    Inventor: Connor Smith
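    To illustrate the local-versus-cloud arrangement summarized above, the hypothetical Python sketch below routes a voice input to a local engine when cloud processing is unavailable or fails. The engine callables and the availability flag are assumptions, not the patented implementation.

      def process_voice_input(audio: bytes, cloud_available: bool,
                              cloud_engine, local_engine) -> str:
          """Prefer the cloud-based voice assistant service; fall back to local processing."""
          if cloud_available:
              try:
                  return cloud_engine(audio)
              except ConnectionError:
                  pass  # Cloud processing failed mid-request; fall through to local.
          # The local (on-device) voice input engine handles the request when
          # processing via the cloud-based service is not possible.
          return local_engine(audio)

      # Usage with stub engines:
      result = process_voice_input(b"...", cloud_available=False,
                                   cloud_engine=lambda a: "cloud result",
                                   local_engine=lambda a: "local result")
      print(result)  # -> "local result"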
  • Publication number: 20240005622
    Abstract: Various implementations disclosed herein include devices, systems, and methods that provide a communication session in which a first device receives and uses streamed avatar data to render views that include a time-varying avatar, e.g., video content of some or all of another user sent from the other user's device during the communication session. In order to efficiently use resources (e.g., power, bandwidth, etc.), some implementations adapt the avatar provision process (e.g., video framerate, image resolution, etc.) based on user context, e.g., whether the viewer is looking at the avatar, whether the avatar is within the viewer's foveal region, or whether the avatar is within the viewer's field of view.
    Type: Application
    Filed: June 21, 2023
    Publication date: January 4, 2024
    Inventors: Hayden J. Lee, Connor A. Smith, Alexandre Da Veiga, Leanid Vouk, Sebastian P. Herscher
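    The entry above adapts avatar streaming (for example frame rate and resolution) to viewer context. A minimal, hypothetical Python sketch of such a policy follows; the thresholds and field names are invented for illustration.

      from dataclasses import dataclass

      @dataclass
      class ViewerContext:
          avatar_in_field_of_view: bool
          avatar_in_foveal_region: bool

      def choose_avatar_stream_settings(ctx: ViewerContext) -> dict:
          """Pick framerate/resolution for streamed avatar data based on viewer attention."""
          if not ctx.avatar_in_field_of_view:
              # The viewer cannot see the avatar: spend minimal power and bandwidth.
              return {"framerate": 5, "resolution": (256, 256)}
          if ctx.avatar_in_foveal_region:
              # The viewer is looking directly at the avatar: full fidelity.
              return {"framerate": 60, "resolution": (1024, 1024)}
          # The avatar is visible but peripheral: intermediate quality.
          return {"framerate": 30, "resolution": (512, 512)}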
  • Patent number: 11862161
    Abstract: Example techniques relate to toggling a cloud-based VAS between enabled and disabled modes. An example implementation involves an NMD detecting that the housing is in a first orientation and enabling a first mode. Enabling the first mode includes disabling voice input processing via a cloud-based VAS and enabling local voice input processing. In the first mode, the NMD captures sound data associated with a first voice input and detects, via a local natural language unit, that the first voice input comprises sound data matching one or more keywords. The NMD determines an intent of the first voice input and performs a first command according to the determined intent. The NMD may detect that the housing is in a second orientation and enable the second mode. Enabling the second mode includes enabling voice input processing via the cloud-based VAS.
    Type: Grant
    Filed: November 29, 2021
    Date of Patent: January 2, 2024
    Assignee: Sonos, Inc.
    Inventors: Fiede Schillmoeller, Connor Smith
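    As an illustrative, entirely hypothetical rendering of the orientation-based mode switch described above, the Python sketch below enables local-only processing in one orientation and re-enables cloud processing in the other. The orientation labels and method names are assumptions.

      class NetworkMicrophoneDevice:
          def __init__(self):
              self.cloud_vas_enabled = True
              self.local_processing_enabled = False

          def on_orientation_changed(self, orientation: str) -> None:
              if orientation == "first":
                  # First mode: disable the cloud-based VAS, process voice input locally.
                  self.cloud_vas_enabled = False
                  self.local_processing_enabled = True
              elif orientation == "second":
                  # Second mode: re-enable voice input processing via the cloud-based VAS.
                  self.cloud_vas_enabled = True

          def handle_voice_input(self, keywords_matched: bool, run_local, run_cloud):
              if self.local_processing_enabled and keywords_matched:
                  # The local natural language unit found matching keywords; act on-device.
                  return run_local()
              if self.cloud_vas_enabled:
                  return run_cloud()
              return None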
  • Patent number: 11854547
    Abstract: In one aspect, a playback device includes a voice assistant service (VAS) wake-word engine and a command keyword engine. The playback device detects, via the command keyword engine, a first command keyword in a voice input of sound detected by one or more microphones of the playback device. The playback device determines an intent based on at least one keyword in the voice input via a local natural language unit (NLU). After detecting the first command keyword and determining the intent, the playback device performs a first playback command corresponding to the first command keyword and according to the determined intent. When the playback device detects, via the wake-word engine, a wake-word in voice input, the playback device streams sound data corresponding to at least a portion of the voice input to one or more remote servers associated with the VAS.
    Type: Grant
    Filed: December 13, 2021
    Date of Patent: December 26, 2023
    Assignee: Sonos, Inc.
    Inventors: Connor Smith, John Tolomei, Kurt Soto
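    To make the two-engine arrangement above concrete, here is a hypothetical Python sketch in which command keywords are handled locally and a wake word causes audio to be streamed to the VAS. Every name and signature here is illustrative rather than taken from the patent.

      def route_voice_input(voice_input: str, audio: bytes,
                            command_keywords: set, wake_words: set,
                            local_nlu, perform_command, stream_to_vas) -> None:
          words = voice_input.lower().split()
          command_hits = [w for w in words if w in command_keywords]
          if command_hits:
              # Command keyword engine: determine the intent locally and act on it.
              intent = local_nlu(voice_input)
              perform_command(command_hits[0], intent)
              return
          if any(w in wake_words for w in words):
              # Wake-word engine: stream (a portion of) the captured sound to the cloud VAS.
              stream_to_vas(audio)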
  • Publication number: 20230401805
    Abstract: Various implementations disclosed herein include devices, systems, and methods that provide a communication session in which the participants view an extended reality (XR) environment that represents a portion of a first user's physical space merged with a portion of a second user's physical space. The respective spaces are aligned based on selected vertical surfaces (e.g., walls) within each physical environment. For example, each user may manually select a respective wall of their own physical room and each may be presented with a view in which the two rooms appear to be stitched together along the selected walls. In some implementations, the rooms are aligned and merged to give the appearance that the walls were knocked down/erased and turned into portals into the other user's room.
    Type: Application
    Filed: June 5, 2023
    Publication date: December 14, 2023
    Inventors: Hayden J. Lee, Connor A. Smith
  • Patent number: 11816759
    Abstract: Split applications and virtual objects in a multi-user communication session may include presenting, for a first device, a first environmental representation of the multi-user communication session, wherein the first device and the second device are active in the multi-user communication session, and wherein the first environmental representation and a second environmental representation of the multi-user communication session for the second device comprise one or more shared virtual objects presented in a common spatial configuration; duplicating a particular shared virtual object that is located at an initial location in the common spatial configuration; determining a modified location for the duplicated virtual object in the first environmental representation; and presenting a modified first environmental representation comprising the duplicated virtual object at the modified location.
    Type: Grant
    Filed: September 22, 2021
    Date of Patent: November 14, 2023
    Assignee: Apple Inc.
    Inventors: Connor A. Smith, Nicholas W. Henderson
  • Patent number: 11805176
    Abstract: Facilitating collaboration in a multi-user communication session may include, at a first device, providing a toolkit of system-level tools for interacting within the session with a second device. A first user interaction with a first tool is detected which results in a virtual modification to an environment of the first device. In response to the first user interaction, the virtual modification is provided for presentation in a second environment representation of the session corresponding to the second device.
    Type: Grant
    Filed: May 11, 2021
    Date of Patent: October 31, 2023
    Assignee: Apple Inc.
    Inventors: Miao Ren, Connor A. Smith, Bruno M. Sommer, Tucker B. Morgan
  • Publication number: 20230316658
    Abstract: Presentation of objects within a multiuser communication session is changed based on their shared or unshared status, or on whether a user is interacting with the object. The movement of a user of a first device is monitored within a first physical environment comprising shared objects and unshared objects, where the shared objects are visible to the user and an additional user of a second device in a second physical environment, the first device and the second device are active in the multiuser communication session, and the unshared objects are visible to the user and are not visible to the additional user. An interaction of the user with an unshared object is detected. In accordance with the detection of the interaction of the user with the unshared object, a representation of at least a portion of the unshared object is provided for presentation by the second device.
    Type: Application
    Filed: March 10, 2023
    Publication date: October 5, 2023
    Inventors: Connor A. Smith, Nicholas W. Henderson, Luis R. Deliz Centeno, Bruno M. Sommer, Timofey Grechkin
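    A compact, hypothetical Python sketch of the presentation rule above (a representation of an unshared object becomes visible to the remote participant once the local user interacts with it) follows; the data model is an assumption made for the example.

      from dataclasses import dataclass

      @dataclass
      class SessionObject:
          name: str
          shared: bool
          interacted_with: bool = False

      def objects_visible_to_remote_user(objects: list) -> list:
          visible = []
          for obj in objects:
              if obj.shared:
                  visible.append(obj.name)  # shared objects are always visible
              elif obj.interacted_with:
                  # Unshared object in use: present a representation of it instead.
                  visible.append(f"representation of {obj.name}")
          return visible

      print(objects_visible_to_remote_user([
          SessionObject("whiteboard", shared=True),
          SessionObject("notes", shared=False, interacted_with=True),
          SessionObject("browser", shared=False),
      ]))
      # -> ['whiteboard', 'representation of notes']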
  • Publication number: 20230318980
    Abstract: A server transmits an encoded game frame over a network to a respective client system as a set of packets. In response to transmitting the set of packets, the server determines a bandwidth estimate based on the size of the encoded game frame and the timing data associated with the transmitted set of packets. The server then compares the bandwidth estimate to a current video bitrate of the game stream being transmitted from the server to the respective client device. In response to the comparison indicating an underutilization of the network, the server increases the encoding bitrate. Further, in response to the comparison indicating an overutilization of the network, the server decreases the encoding bitrate.
    Type: Application
    Filed: April 4, 2022
    Publication date: October 5, 2023
    Inventors: Teng Wei, Connor Smith, David Chu, Devdeep Ray, Bernhard Reinert, Zengbin Zhang
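    The abstract above outlines a simple control loop: estimate bandwidth from the encoded frame size and packet timing, then nudge the encoding bitrate up or down. The Python sketch below is a loose, hypothetical rendering of that loop; the step factors, the headroom margin, and the timing fields are assumptions.

      def estimate_bandwidth(frame_size_bits: int, first_packet_sent: float,
                             last_packet_acked: float) -> float:
          """Bits per second inferred from one frame's size and its packets' timing."""
          elapsed = max(last_packet_acked - first_packet_sent, 1e-6)
          return frame_size_bits / elapsed

      def adjust_bitrate(current_bitrate: float, bandwidth_estimate: float,
                         headroom: float = 1.25) -> float:
          if bandwidth_estimate > current_bitrate * headroom:
              # The comparison indicates underutilization of the network: raise the bitrate.
              return current_bitrate * 1.1
          if bandwidth_estimate < current_bitrate:
              # The comparison indicates overutilization: lower the bitrate.
              return current_bitrate * 0.9
          return current_bitrate

      # Example: a 100 kb frame delivered in 10 ms suggests roughly 10 Mbps of capacity.
      print(adjust_bitrate(current_bitrate=6e6,
                           bandwidth_estimate=estimate_bandwidth(100_000, 0.000, 0.010)))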
  • Publication number: 20230308610
    Abstract: Recommended avatar placement in a multi-user communication session may include obtaining geometric information associated with a physical environment of a user of a communication device participating in a multi-user communication session; determining an activity type for the multi-user communication session; determining a recommended avatar placement for the user based on the geometric information and the activity type; and displaying an indication of the recommended avatar placement in an environmental representation of the multi-user communication session.
    Type: Application
    Filed: March 24, 2023
    Publication date: September 28, 2023
    Inventors: Nicholas W. Henderson, Connor A. Smith
  • Publication number: 20230299989
    Abstract: Avatars may be displayed in a multiuser communication session using various spatial modes. One technique for presenting avatars includes presenting avatars such that an attention direction of the avatar is retargeted to match the intent of the remote user corresponding to the avatar. Another technique for presenting avatars includes a pinned mode in which one or more avatars remain displayed in a consistent spatial relationship to a local user regardless of movements of the local user. Another technique for presenting avatars includes providing user-selectable presentation modes between a room scale mode and a stationary mode for presenting a representation of a multiuser communication session.
    Type: Application
    Filed: March 23, 2023
    Publication date: September 21, 2023
    Inventors: Connor A. Smith, Bruno M. Sommer, Jonathan R. Dascola, Nicholas W. Henderson, Timofey Grechkin
  • Publication number: 20230274504
    Abstract: Some examples of the disclosure are directed to selective display of avatars corresponding to users of electronic devices in a multi-user communication session. In some examples, when immersive content is shared in the communication session, the avatars remain displayed when presenting the content in the three-dimensional environment. In some examples, when perspective-limited immersive content is shared in the communication session, the avatars cease being displayed when presenting the content in the three-dimensional environment. In some examples, when content presented in a full-screen mode is shared in the communication session, the avatars remain displayed when presenting the content in the full-screen mode in the three-dimensional environment. In some examples, when object-bounded content is shared in the communication session, the avatars remain displayed when presenting the object-bounded content in the three-dimensional environment.
    Type: Application
    Filed: February 24, 2023
    Publication date: August 31, 2023
    Inventors: Miao REN, Connor A. SMITH, Hayden J. LEE, Bruno M. SOMMER
  • Publication number: 20230274738
    Abstract: In one aspect, a playback device includes a voice assistant service (VAS) wake-word engine and a command keyword engine. The playback device detects, via the command keyword engine, a first command keyword, and determines whether one or more playback conditions corresponding to the first command keyword are satisfied. Based on (a) detecting the first command keyword and (b) determining that the one or more playback conditions corresponding to the first command keyword are satisfied, the playback device performs a first playback command corresponding to the first command keyword. When the playback device detects, via the wake-word engine, a wake-word in voice input, the playback device streams sound data corresponding to at least a portion of the voice input to one or more remote servers associated with the VAS.
    Type: Application
    Filed: November 4, 2022
    Publication date: August 31, 2023
    Inventors: Connor Smith, John Tolomei, Kurt Soto
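    As with the related grant listed above, the logic described here can be pictured as a small gate: act on a command keyword only if its playback conditions hold, and otherwise let the wake-word path stream audio to the VAS. The Python sketch below is a hypothetical illustration; the condition names are invented.

      def handle_detected_keyword(keyword: str, state: dict, conditions: dict,
                                  perform_command, stream_to_vas,
                                  is_wake_word: bool, audio: bytes) -> None:
          if is_wake_word:
              # Wake-word engine path: stream the voice input to the remote VAS servers.
              stream_to_vas(audio)
              return
          # Command keyword engine path: check the playback conditions for this keyword.
          required = conditions.get(keyword, [])
          if all(state.get(cond, False) for cond in required):
              perform_command(keyword)

      # Example: "skip" only makes sense while something is playing and a queue exists.
      handle_detected_keyword(
          "skip",
          state={"is_playing": True, "has_queue": True},
          conditions={"skip": ["is_playing", "has_queue"]},
          perform_command=lambda kw: print("performing", kw),
          stream_to_vas=lambda a: None,
          is_wake_word=False,
          audio=b"")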