Patents by Inventor Connor Smith
Connor Smith has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).
-
Patent number: 12290184
Abstract: An adjustable back restaurant highchair is a restaurant-style highchair with an adjustable back system. The back section of the highchair moves in its entirety, forward and backward along the seat section. Each corner of the seat back is provided with a pin that rides in one of four corresponding tracks. Movement is enabled when a parent or care provider lifts the seat back up slightly, slides it forward or backward, and lowers it back into the tracks. Such adjustability accommodates children of all sizes, while ensuring that the smallest of them remain upright and cannot slide down. This eliminates the current practice of bracing the child up with towels, blankets, or coats jammed in between their back and the seat support. The device remains stackable and keeps the simple design of its conventional counterpart.
Type: Grant
Filed: January 15, 2023
Date of Patent: May 6, 2025
Inventors: Connor Smith, Samantha Smith
-
Publication number: 20250138697
Abstract: A computer system optionally displays a user interface object that reveals content based on whether the content is private or shared. A computer system optionally displays a user interface object that includes shared content based on whether participants have entitlement to the content. A computer system optionally displays a sharing indicator that indicates that the respective content is shared with one or more other participants.
Type: Application
Filed: October 9, 2024
Publication date: May 1, 2025
Inventors: Connor A. SMITH, Joseph P. CERRA, Willem MATTELAER
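The abstract describes a visibility policy: whether a user interface object reveals content depends on whether the content is private or shared and on participant entitlements. The sketch below is one plausible reading of that policy in Swift; the types, the entitlement model, and the exact reveal rule are assumptions, not taken from the filing.

```swift
// Hypothetical types; the filing does not define a concrete API or data model.
struct Content { let id: String; let isShared: Bool }
struct Participant { let name: String; let entitlements: Set<String> }

// One possible reveal rule: shared content is revealed only to entitled participants,
// and private content is not revealed through this object at all.
func shouldReveal(_ content: Content, to participant: Participant) -> Bool {
    guard content.isShared else { return false }
    return participant.entitlements.contains(content.id)
}

// Show a sharing indicator when at least one other participant can also see the content.
func showsSharingIndicator(for content: Content, others: [Participant]) -> Bool {
    content.isShared && others.contains(where: { $0.entitlements.contains(content.id) })
}

let doc = Content(id: "doc-1", isShared: true)
let alex = Participant(name: "Alex", entitlements: ["doc-1"])
print(shouldReveal(doc, to: alex), showsSharingIndicator(for: doc, others: [alex]))
```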
-
Patent number: 12272005
Abstract: Some examples of the disclosure are directed to selective display of avatars corresponding to users of electronic devices in a multi-user communication session. In some examples, when immersive content is shared in the communication session, the avatars remain displayed when presenting the content in the three-dimensional environment. In some examples, when perspective-limited immersive content is shared in the communication session, the avatars cease being displayed when presenting the content in the three-dimensional environment. In some examples, when content presented in a full-screen mode is shared in the communication session, the avatars remain displayed when presenting the content in the full-screen mode in the three-dimensional environment. In some examples, when object-bounded content is shared in the communication session, the avatars remain displayed when presenting the object-bounded content in the three-dimensional environment.
Type: Grant
Filed: February 24, 2023
Date of Patent: April 8, 2025
Assignee: Apple Inc.
Inventors: Miao Ren, Connor A. Smith, Hayden J. Lee, Bruno M. Sommer
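The abstract enumerates which content categories keep avatars visible. A minimal Swift sketch of that rule follows; the enum and function names are invented for illustration and are not an Apple API.

```swift
// Hypothetical content categories mirroring the abstract.
enum SharedContentKind {
    case immersive, perspectiveLimitedImmersive, fullScreen, objectBounded
}

// Per the abstract, avatars stay visible for every category except
// perspective-limited immersive content.
func avatarsRemainVisible(whilePresenting kind: SharedContentKind) -> Bool {
    switch kind {
    case .perspectiveLimitedImmersive:
        return false
    case .immersive, .fullScreen, .objectBounded:
        return true
    }
}

print(avatarsRemainVisible(whilePresenting: .perspectiveLimitedImmersive)) // false
print(avatarsRemainVisible(whilePresenting: .fullScreen))                  // true
```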
-
Publication number: 20250039099
Abstract: A server transmits an encoded game frame over a network to a respective client system as a set of packets. In response to transmitting the set of packets, the server determines a bandwidth estimate based on the size of the encoded game frame and the timing data associated with the transmitted set of packets. The server then compares the bandwidth estimate to a current video bitrate of the game stream being transmitted from the server to the respective client device. In response to the comparison indicating an underutilization of the network, the server increases the encoding bitrate. Further, in response to the comparison indicating an overutilization of the network, the server decreases the encoding bitrate.
Type: Application
Filed: July 31, 2024
Publication date: January 30, 2025
Inventors: Teng Wei, Connor Smith, David Chu, Devdeep Ray, Bernhard Reinert, Zengbin Zhang
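The control loop in the abstract reduces to: estimate bandwidth from the frame size and packet timing, compare it against the current bitrate, and raise or lower the encoding bitrate accordingly. Below is a rough Swift sketch of that loop under assumed constants; the 1.25 headroom factor and 10% step are invented, and the publication discloses no specific values or APIs.

```swift
import Foundation

// Invented model of one transmitted frame and its packet timing data.
struct FrameTransmission {
    let encodedBytes: Int
    let sendDuration: TimeInterval  // time span covered by the packet timing data
}

// Bandwidth estimate in bits per second, derived from frame size and timing.
func bandwidthEstimate(for tx: FrameTransmission) -> Double {
    Double(tx.encodedBytes * 8) / tx.sendDuration
}

// Compare the estimate to the current video bitrate and nudge the encoder:
// raise it when the network looks underutilized, lower it when overutilized.
func adjustedBitrate(current: Double, estimate: Double,
                     headroom: Double = 1.25, step: Double = 0.1) -> Double {
    if estimate > current * headroom { return current * (1 + step) }  // underutilization
    if estimate < current            { return current * (1 - step) }  // overutilization
    return current
}

let tx = FrameTransmission(encodedBytes: 120_000, sendDuration: 0.016)
print(adjustedBitrate(current: 40_000_000, estimate: bandwidthEstimate(for: tx)))
```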
-
Publication number: 20250031002
Abstract: Systems, devices, and methods for presenting audio associated with events associated with spatialized audio effects or non-spatialized audio effects in three-dimensional environments are disclosed. The spatialized and/or non-spatialized audio effects correspond with displaying, via a display generation component, a three-dimensional environment from a viewpoint of a user. While displaying the three-dimensional environment from the viewpoint of the user, the computer system detects an event. When the computer system detects an event which corresponds to a spatialized sound effect, the computer system presents the sound effect as emanating from a location in the three-dimensional environment associated with the event.
Type: Application
Filed: July 23, 2024
Publication date: January 23, 2025
Inventors: Matthew B. HAWKINS, Danielle M. PRICE, Connor A. SMITH, Anish KANNAN
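The key distinction in the abstract is between sound effects anchored to a 3D location associated with an event and effects played without one. A small Swift sketch of that routing decision follows; the event and audio types are invented and no audio framework is used.

```swift
// Invented event/audio model; the filing names no concrete framework calls.
struct Point3D { var x: Double, y: Double, z: Double }

enum AudioEffect {
    case spatialized(source: Point3D)  // presented as emanating from a location tied to the event
    case nonSpatialized                // presented without a spatial source
}

struct SessionEvent { let name: String; let effect: AudioEffect }

// Describe how an event's sound would be presented.
func describePlayback(of event: SessionEvent) -> String {
    switch event.effect {
    case .spatialized(let source):
        return "Play '\(event.name)' as emanating from (\(source.x), \(source.y), \(source.z))"
    case .nonSpatialized:
        return "Play '\(event.name)' with no spatial source"
    }
}

print(describePlayback(of: SessionEvent(name: "message chime",
                                        effect: .spatialized(source: Point3D(x: 1, y: 0, z: -2)))))
```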
-
Publication number: 20250029328
Abstract: Some examples of the disclosure are directed to systems and methods for presenting content in a shared computer generated environment of a multi-user communication session. In some examples, an electronic device displays an indication that a location in the shared computer generated environment corresponding to the electronic device is further than a threshold distance from a respective location associated with the multi-user communication session. In some examples, an electronic device presents a representation of a user of a multi-user communication session moving away from a point of reference of the multi-user communication session. In some examples, the electronic device presents content associated with a user of a multi-user communication session moving away from a point of reference of the multi-user communication session.
Type: Application
Filed: July 18, 2024
Publication date: January 23, 2025
Inventors: Connor A. SMITH, Ronak J. SHAH, Joseph P. CERRA, Shih-Sang CHIU, Kevin LEE
-
Publication number: 20250013344
Abstract: Some examples of the disclosure are directed to methods for application-based spatial refinement in a multi-user communication session including a first electronic device and a second electronic device. While the first electronic device is presenting a three-dimensional environment, the first electronic device receives an input corresponding to a request to move a shared object in the three-dimensional environment. In accordance with a determination that the shared object is an object of a first type, the first electronic device moves the shared object and an avatar of a user in the three-dimensional environment in accordance with the input. In accordance with a determination that the shared object is an object of a second type, different from the first type, and the input is a first type of input, the first electronic device moves the shared object in the three-dimensional environment in accordance with the input, without moving the avatar.
Type: Application
Filed: September 25, 2024
Publication date: January 9, 2025
Inventors: Connor A. SMITH, Christopher D. MCKENZIE, Nathan GITTER
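The behavior described here is a branching rule on object type and input type: type-one objects drag the avatar along with them, while type-two objects moved with a type-one input leave the avatar in place. The Swift sketch below illustrates that rule; the scene model, type names, and vector math are assumptions, not taken from the filing.

```swift
// Invented scene model; "first"/"second" mirror the abstract's unnamed object and input types.
struct Vec3 { var x: Double, y: Double, z: Double }
enum SharedObjectType { case first, second }
enum InputType { case first, second }

struct SharedScene {
    var sharedObjectPosition: Vec3
    var avatarPosition: Vec3
}

func applyMove(_ delta: Vec3, objectType: SharedObjectType, input: InputType,
               to scene: inout SharedScene) {
    func offset(_ p: inout Vec3) { p.x += delta.x; p.y += delta.y; p.z += delta.z }
    switch (objectType, input) {
    case (.first, _):
        // Type-one object: the object and the avatar move together.
        offset(&scene.sharedObjectPosition)
        offset(&scene.avatarPosition)
    case (.second, .first):
        // Type-two object with a type-one input: only the object moves.
        offset(&scene.sharedObjectPosition)
    default:
        break // other combinations are not described in the abstract
    }
}

var scene = SharedScene(sharedObjectPosition: Vec3(x: 0, y: 0, z: -1),
                        avatarPosition: Vec3(x: 1, y: 0, z: -1))
applyMove(Vec3(x: 0.5, y: 0, z: 0), objectType: .second, input: .first, to: &scene)
print(scene.sharedObjectPosition, scene.avatarPosition)
```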
-
Publication number: 20250013343
Abstract: Some examples of the disclosure are directed to systems and methods for managing locations of users in a spatial group within a communication session based on the display of shared content in a three-dimensional environment. In some examples, a first electronic device and a second electronic device are in communication within a communication session. In some examples, the first electronic device displays a three-dimensional environment including an avatar corresponding to a user of the second electronic device. In some examples, in response to detecting an input corresponding to a request to display shared content in the three-dimensional environment, if the shared content is a first type of content, the first electronic device positions the avatar a first distance away from the viewpoint, and if the shared content is a second type of content, the first electronic device positions the avatar a second distance away from the viewpoint.
Type: Application
Filed: September 23, 2024
Publication date: January 9, 2025
Inventors: Connor A. SMITH, Willem MATTELAER, Joseph P. CERRA, Kevin LEE
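The placement rule in the abstract maps the type of shared content to a distance between the avatar and the viewer's viewpoint. A minimal Swift sketch follows; the two distances are placeholder values, not figures from the filing.

```swift
// "first"/"second" mirror the abstract's unnamed content types; distances are invented.
enum SharedContentType { case first, second }

func avatarDistance(for content: SharedContentType,
                    firstDistance: Double = 1.0,
                    secondDistance: Double = 2.5) -> Double {
    switch content {
    case .first:  return firstDistance
    case .second: return secondDistance
    }
}

print(avatarDistance(for: .first))   // 1.0
print(avatarDistance(for: .second))  // 2.5
```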
-
Publication number: 20250004622
Abstract: Various implementations disclosed herein include devices, systems, and methods for manipulating and/or annotating objects in a graphical environment. In some implementations, a device includes a display, one or more processors, and a memory. In some implementations, a method includes detecting a gesture being performed using a first object in association with a second object in a graphical environment. A distance is determined, via the one or more sensors, between a representation of the first object and the second object. If the distance is greater than a threshold, a change in the graphical environment is displayed according to the gesture and a gaze. If the distance is not greater than the threshold, the change in the graphical environment is displayed according to the gesture and a projection of the representation of the first object on the second object.
Type: Application
Filed: September 2, 2022
Publication date: January 2, 2025
Inventors: Connor A. Smith, Fatima Broom, Luis R. Deliz Centeno, Miao Ren
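The core decision here is a distance test: beyond a threshold the gesture is interpreted together with gaze, otherwise together with a projection of the first object onto the target. The Swift sketch below captures only that branch; the sensor input, threshold value, and types are assumptions.

```swift
// Invented types; the implementation details and threshold value are not disclosed.
struct Vec3 { var x: Double, y: Double, z: Double }

enum ManipulationTarget {
    case gazeDirected              // far interaction: combine the gesture with gaze
    case projectionDirected(Vec3)  // near interaction: use the projection onto the object
}

func resolveTarget(handToObjectDistance: Double, threshold: Double,
                   projectedPoint: Vec3) -> ManipulationTarget {
    handToObjectDistance > threshold
        ? .gazeDirected
        : .projectionDirected(projectedPoint)
}

print(resolveTarget(handToObjectDistance: 0.8, threshold: 0.3,
                    projectedPoint: Vec3(x: 0, y: 0, z: 0)))
```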
-
Publication number: 20240428488
Abstract: Some examples of the disclosure are directed to systems and methods for presenting content in a three-dimensional environment by one or more electronic devices in a multi-user communication session. In some examples, a first electronic device and a second electronic device are communicatively linked in a multi-user communication session, wherein the first electronic device and the second electronic device are configured to display a three-dimensional environment, respectively. In some examples, the first electronic device and the second electronic device are grouped in a first spatial group within the multi-user communication session. In some examples, if the second electronic device determines that the first electronic device changes states (and/or vice versa), the user of the first electronic device and the user of the second electronic device are no longer grouped into the same spatial group within the multi-user communication session.
Type: Application
Filed: August 28, 2024
Publication date: December 26, 2024
Inventors: Miao REN, Shih-Sang CHIU, Connor A. SMITH, Joseph P. CERRA, Willem MATTELAER
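The grouping rule in the abstract is that two devices share a spatial group only while their states match; once one device changes state, the users are split into different groups. The sketch below models that with an invented DeviceState; the abstract does not enumerate which state fields actually matter.

```swift
// Invented notion of "state"; the relevant fields are assumptions.
struct DeviceState: Equatable {
    var isPresentingImmersiveContent: Bool
    var isFullScreen: Bool
}

// Devices remain in the same spatial group only while their states match.
func sameSpatialGroup(_ a: DeviceState, _ b: DeviceState) -> Bool { a == b }

let deviceA = DeviceState(isPresentingImmersiveContent: false, isFullScreen: false)
var deviceB = deviceA
print(sameSpatialGroup(deviceA, deviceB)) // true: both users share a spatial group
deviceB.isFullScreen = true               // device B changes state
print(sameSpatialGroup(deviceA, deviceB)) // false: users are no longer grouped together
```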
-
Publication number: 20240404206
Abstract: In some embodiments, a computer system changes visual appearance of visual representations of participants moving within a simulated threshold distance of a user of the computer system. In some embodiments, a computer system arranges representations of users according to templates. In some embodiments, a computer system arranges representations of users based on shared content. In some embodiments, a computer system changes a spatial arrangement of participants in accordance with a quantity of participants that are a first type of participant. In some embodiments, a computer system changes a spatial arrangement of elements of a real-time communication session to join a group of participants. In some embodiments, a computer system facilitates interaction with groups of spatial representations of participants of a communication session. In some embodiments, a computer system facilitates updates of a spatial arrangement of participants based on a spatial distribution of the participants.
Type: Application
Filed: June 4, 2024
Publication date: December 5, 2024
Inventors: Shih-Sang CHIU, Rajat BHARDWAJ, Stephen O. LEMAY, Connor A. SMITH, Joseph P. CERRA, Kevin LEE
-
Publication number: 20240402868
Abstract: The present disclosure generally relates to user interfaces for electronic devices, including user interfaces for real-time communications.
Type: Application
Filed: March 19, 2024
Publication date: December 5, 2024
Inventors: Jesse CHAND, Shih-Sang CHIU, Wesley M. HOLDER, Stephen O. LEMAY, William A. SORRENTINO, III, Rajat BHARDWAJ, Giancarlo YERKES, Jason D. RICKWALD, Rupert BURTON, Kaely COON, Connor A. SMITH, Joseph P. CERRA, Tommy ROCHETTE
-
Patent number: 12148078
Abstract: Some examples of the disclosure are directed to systems and methods for presenting content in a three-dimensional environment by one or more electronic devices in a multi-user communication session. In some examples, a first electronic device and a second electronic device are communicatively linked in a multi-user communication session, wherein the first electronic device and the second electronic device are configured to display a three-dimensional environment, respectively. In some examples, the first electronic device and the second electronic device are grouped in a first spatial group within the multi-user communication session. In some examples, if the second electronic device determines that the first electronic device changes states (and/or vice versa), the user of the first electronic device and the user of the second electronic device are no longer grouped into the same spatial group within the multi-user communication session.
Type: Grant
Filed: September 8, 2023
Date of Patent: November 19, 2024
Assignee: Apple Inc.
Inventors: Miao Ren, Shih-Sang Chiu, Connor A. Smith, Joseph P. Cerra, Willem Mattelaer
-
Publication number: 20240347057
Abstract: Example techniques relate to offline voice control. A local voice input engine may process voice inputs locally when processing voice inputs via a cloud-based voice assistant service is not possible. Some techniques involve local (on-device) voice-assisted set-up of a cloud-based voice assistant service. Further example techniques involve local voice-assisted troubleshooting of the cloud-based voice assistant service. Other techniques relate to interactions between local and cloud-based processing of voice inputs on a device that supports both local and cloud-based processing.
Type: Application
Filed: January 4, 2024
Publication date: October 17, 2024
Inventor: Connor Smith
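The central idea in the abstract is a fallback between cloud-based and on-device voice processing. A minimal Swift sketch of that routing follows; the engine protocol and reachability flag are assumptions, not Sonos APIs.

```swift
// Hypothetical engines; real voice pipelines are far more involved.
protocol VoiceEngine { func process(_ utterance: String) -> String }

struct CloudAssistant: VoiceEngine {
    var reachable: Bool
    func process(_ utterance: String) -> String { "cloud result for: \(utterance)" }
}

struct LocalEngine: VoiceEngine {
    func process(_ utterance: String) -> String { "local result for: \(utterance)" }
}

// Prefer the cloud-based voice assistant service; fall back to local processing
// when the cloud service cannot be used.
func handle(_ utterance: String, cloud: CloudAssistant, local: LocalEngine) -> String {
    cloud.reachable ? cloud.process(utterance) : local.process(utterance)
}

print(handle("turn on the kitchen lights",
             cloud: CloudAssistant(reachable: false),
             local: LocalEngine()))
```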
-
Patent number: 12113948
Abstract: Some examples of the disclosure are directed to systems and methods for managing locations of users in a spatial group within a communication session based on the display of shared content in a three-dimensional environment. In some examples, a first electronic device and a second electronic device are in communication within a communication session. In some examples, the first electronic device displays a three-dimensional environment including an avatar corresponding to a user of the second electronic device. In some examples, in response to detecting an input corresponding to a request to display shared content in the three-dimensional environment, if the shared content is a first type of content, the first electronic device positions the avatar a first distance away from the viewpoint, and if the shared content is a second type of content, the first electronic device positions the avatar a second distance away from the viewpoint.
Type: Grant
Filed: January 24, 2024
Date of Patent: October 8, 2024
Assignee: Apple Inc.
Inventors: Connor A. Smith, Willem Mattelaer, Joseph P. Cerra, Kevin Lee
-
Patent number: 12112011
Abstract: Some examples of the disclosure are directed to methods for application-based spatial refinement in a multi-user communication session including a first electronic device and a second electronic device. While the first electronic device is presenting a three-dimensional environment, the first electronic device receives an input corresponding to a request to move a shared object in the three-dimensional environment. In accordance with a determination that the shared object is an object of a first type, the first electronic device moves the shared object and an avatar of a user in the three-dimensional environment in accordance with the input. In accordance with a determination that the shared object is an object of a second type, different from the first type, and the input is a first type of input, the first electronic device moves the shared object in the three-dimensional environment in accordance with the input, without moving the avatar.
Type: Grant
Filed: September 11, 2023
Date of Patent: October 8, 2024
Assignee: Apple Inc.
Inventors: Connor A. Smith, Christopher D. McKenzie, Nathan Gitter
-
Publication number: 20240321272
Abstract: Example techniques relate to toggling a cloud-based VAS between enabled and disabled modes. An example implementation involves an NMD detecting that the housing is in a first orientation and enabling a first mode. Enabling the first mode includes disabling voice input processing via a cloud-based VAS and enabling local voice input processing. In the first mode, the NMD captures sound data associated with a first voice input and detects, via a local natural language unit, that the first voice input comprises sound data matching one or more keywords. The NMD determines an intent of the first voice input and performs a first command according to the determined intent. The NMD may detect that the housing is in a second orientation and enable the second mode. Enabling the second mode includes enabling voice input processing via the cloud-based VAS.
Type: Application
Filed: December 28, 2023
Publication date: September 26, 2024
Inventors: Fiede Schillmoeller, Connor Smith
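The abstract describes a two-mode toggle driven by the housing's orientation. A small Swift sketch of that state mapping follows; the configuration struct is invented, and whether local processing stays enabled in the second mode is an assumption (the abstract does not say).

```swift
// Invented configuration model for the orientation-driven mode toggle.
enum HousingOrientation { case first, second }

struct VoiceProcessingConfig {
    var cloudVASEnabled: Bool
    var localProcessingEnabled: Bool
}

func configuration(for orientation: HousingOrientation) -> VoiceProcessingConfig {
    switch orientation {
    case .first:
        // First mode: cloud VAS disabled, local keyword processing enabled.
        return VoiceProcessingConfig(cloudVASEnabled: false, localProcessingEnabled: true)
    case .second:
        // Second mode: cloud VAS enabled (local behavior assumed unchanged here).
        return VoiceProcessingConfig(cloudVASEnabled: true, localProcessingEnabled: true)
    }
}

print(configuration(for: .first))
print(configuration(for: .second))
```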
-
Patent number: 12101197
Abstract: Various implementations present a representation of a communication session involving multiple devices in different presentation modes based on spatial transforms between a physical environment and the representation of the communication session. For example, a representation of a communication session is presented based on the position of a first device within a first physical environment and a second spatial transform between the first physical environment and the representation of the communication session, in accordance with a determination to switch from the first presentation mode to a second presentation mode. Then the representation of the communication session is presented based on the position of the first device within the first physical environment and the first spatial transform in accordance with a determination to switch the second presentation mode back to the first presentation mode.
Type: Grant
Filed: June 30, 2022
Date of Patent: September 24, 2024
Assignee: Apple Inc.
Inventors: Kevin Lee, Connor A. Smith, Luis R. Deliz Centeno
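At its core, the abstract describes applying a per-mode spatial transform to the device's position in its physical environment to place the session representation, and swapping transforms when the presentation mode changes. The 2D Swift sketch below only shows that shape of the computation; the transforms, mode names, and values are invented.

```swift
// Toy 2D stand-in for the spatial transforms; a real implementation would use 3D poses.
struct Transform2D {
    var scale: Double
    var dx: Double
    var dy: Double
    func apply(to p: (x: Double, y: Double)) -> (x: Double, y: Double) {
        (x: p.x * scale + dx, y: p.y * scale + dy)
    }
}

enum PresentationMode { case first, second }

// Each presentation mode is associated with its own transform.
func transform(for mode: PresentationMode) -> Transform2D {
    switch mode {
    case .first:  return Transform2D(scale: 1.0, dx: 0, dy: 0)
    case .second: return Transform2D(scale: 0.5, dx: 1.0, dy: 0)
    }
}

let devicePosition = (x: 2.0, y: 1.0)
print(transform(for: .first).apply(to: devicePosition))
print(transform(for: .second).apply(to: devicePosition))
```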
-
Patent number: 12099695
Abstract: Some examples of the disclosure are directed to systems and methods for managing locations of users in a spatial group within a communication session based on the display of shared content in a three-dimensional environment. In some examples, a first electronic device and a second electronic device are in communication within a communication session. In some examples, the first electronic device displays a three-dimensional environment including an avatar corresponding to a user of the second electronic device. In some examples, in response to detecting an input corresponding to a request to display shared content in the three-dimensional environment, if the shared content is a first type of content, the first electronic device positions the avatar a first distance away from the viewpoint, and if the shared content is a second type of content, the first electronic device positions the avatar a second distance away from the viewpoint.
Type: Grant
Filed: January 24, 2024
Date of Patent: September 24, 2024
Assignee: Apple Inc.
Inventors: Connor A. Smith, Willem Mattelaer, Joseph P. Cerra, Kevin Lee
-
Publication number: 20240283669
Abstract: Avatars may be displayed in a multiuser communication session using various spatial modes. One technique for presenting avatars includes presenting avatars such that an attention direction of the avatar is retargeted to match the intent of the remote user corresponding to the avatar. Another technique for presenting avatars includes a pinned mode in which one or more avatars remain displayed in a consistent spatial relationship to a local user regardless of movements of the local user. Another technique for presenting avatars includes providing user-selectable presentation modes between a room scale mode and a stationary mode for presenting a representation of a multiuser communication session.
Type: Application
Filed: March 18, 2024
Publication date: August 22, 2024
Inventors: Connor A. Smith, Bruno M. Sommer, Jonathan R. Dascola, Nicholas W. Henderson, Timofey Grechkin
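Of the modes listed in the abstract, the pinned mode is the most mechanical: one or more avatars keep a fixed spatial relationship to the local user no matter how the user moves. The Swift sketch below illustrates just that mode in 2D; the types and offsets are invented.

```swift
// Invented 2D stand-in for the pinned presentation mode.
struct Vec2 { var x: Double, y: Double }

struct PinnedAvatar {
    let offsetFromUser: Vec2  // the consistent spatial relationship to the local user

    // The avatar's world position follows the local user, preserving the offset.
    func worldPosition(givenUserAt user: Vec2) -> Vec2 {
        Vec2(x: user.x + offsetFromUser.x, y: user.y + offsetFromUser.y)
    }
}

let avatar = PinnedAvatar(offsetFromUser: Vec2(x: 0, y: 1.5))
print(avatar.worldPosition(givenUserAt: Vec2(x: 2, y: 0)))  // moves with the user
print(avatar.worldPosition(givenUserAt: Vec2(x: 5, y: 3)))  // same relative placement
```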