Patents by Inventor Avi Bar-Zeev
Avi Bar-Zeev has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).
-
Patent number: 12361646
Abstract: Various implementations provide a method for determining how a second user prefers to be depicted/augmented in a first user's view of a multi-user environment in a privacy-preserving way. For example, a method may include determining that a physical environment includes a second device, where a second user associated with the second device is to be depicted in a view of a three-dimensional (3D) environment. The method may further include determining position data indicative of a location of the second device relative to the first device. The method may further include sending the position data indicative of the location of the second device relative to the first device to an information system (e.g., a user preference system). The method may further include receiving, from the information system, a user preference setting associated with the second user for depicting or augmenting the second user in the 3D environment.
Type: Grant
Filed: May 9, 2023
Date of Patent: July 15, 2025
Assignee: Apple Inc.
Inventors: Avi Bar-Zeev, Ranjit Desai, Rahul Nair
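The exchange described in the abstract can be pictured as a small preference-lookup service. The sketch below is illustrative only; the class and field names (PreferenceService, PositionData, DepictionPreference) are assumptions, not details from the patent.

```python
# Illustrative only: the service, class, and field names are assumptions,
# not details from the patent.
from dataclasses import dataclass

@dataclass
class PositionData:
    distance_m: float   # location of the second device relative to the first
    bearing_deg: float

@dataclass
class DepictionPreference:
    show_avatar: bool         # depict the second user as an avatar
    allow_augmentation: bool  # allow overlays on the second user

class PreferenceService:
    """Stand-in for the 'information system' (user preference system)."""
    def __init__(self):
        self._prefs = {}  # device id -> that user's DepictionPreference

    def register(self, device_id, pref):
        self._prefs[device_id] = pref

    def lookup(self, device_id, position: PositionData) -> DepictionPreference:
        # The first device submits position data for the detected second device
        # and gets back only the preference setting to apply in its 3D view.
        return self._prefs.get(device_id, DepictionPreference(False, False))

# Usage sketch: device A detects device B nearby and asks how B's user
# prefers to be depicted before rendering them.
service = PreferenceService()
service.register("device-b", DepictionPreference(show_avatar=True, allow_augmentation=False))
print(service.lookup("device-b", PositionData(distance_m=2.5, bearing_deg=40.0)))
```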
-
Patent number: 12320977
Abstract: A display system includes a head-mounted display unit and a wake control system. The head-mounted display unit provides content to a user and is operable in a low-power state and a high-power state that consumes more power to provide the content to the user than the low-power state. The wake control system determines when to operate in the high-power state. The wake control system may assess a first wake criterion with low power, assess a second wake criterion with higher power than the first wake criterion upon satisfaction of the first wake criterion, and cause the head-mounted display unit to operate in the high-power state upon satisfaction of the second wake criterion.
Type: Grant
Filed: June 22, 2023
Date of Patent: June 3, 2025
Assignee: Apple Inc.
Inventors: Fletcher R. Rothkopf, David A. Kalinowski, Jae Hwang Lee, Avi Bar-Zeev, Grant H. Mulliken, Paul Meade, Nathanael D. Parkhill, Ray L. Chang, Arthur Y. Zhang
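As a rough illustration of the staged wake logic, the sketch below gates a costlier second check behind a cheap first criterion; the threshold value and the callable are placeholders, not values from the patent.

```python
# Minimal sketch of the staged wake logic; sensor values and the threshold
# are hypothetical placeholders.
def should_wake(low_power_signal: float, high_power_check) -> bool:
    """Return True if the HMD should enter the high-power state.

    low_power_signal: cheap-to-measure value (e.g., from a motion sensor).
    high_power_check: callable run only after the first criterion passes,
                      standing in for a costlier sensor/analysis step.
    """
    FIRST_CRITERION_THRESHOLD = 0.5  # assumed value for illustration
    if low_power_signal < FIRST_CRITERION_THRESHOLD:
        return False                 # first (low-power) wake criterion not satisfied
    return bool(high_power_check())  # second, higher-power criterion decides

# Usage: the expensive check only runs when the cheap one has already passed.
print(should_wake(0.8, high_power_check=lambda: True))   # True: wake
print(should_wake(0.2, high_power_check=lambda: True))   # False: stay in low power
```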
-
Publication number: 20250085132
Abstract: Methods and apparatus for spatial audio navigation that may, for example, be implemented by mobile multipurpose devices. A spatial audio navigation system provides navigational information in audio form to direct users to target locations. The system uses directionality of audio played through a binaural audio device to provide navigational cues to the user. A current location, target location, and map information may be input to pathfinding algorithms to determine a real world path between the user's current location and the target location. The system may then use directional audio played through a headset to guide the user on the path from the current location to the target location. The system may implement one or more of several different spatial audio navigation methods to direct a user when following a path using spatial audio-based cues.
Type: Application
Filed: November 22, 2024
Publication date: March 13, 2025
Applicant: Apple Inc.
Inventors: Bruno M. Sommer, Avi Bar-Zeev, Frank Angermann, Stephen E. Pinto, Lilli Ing-Marie Jonsson, Rahul Nair
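One way to picture the directional-cue step is a small function that converts the bearing to the next waypoint on the computed path into a stereo pan; the pan convention and function names below are assumptions, not details from the application.

```python
# Illustrative only: the pan convention and names are assumptions.
import math

def bearing_deg(cur, target):
    """Planar bearing from the current (x, y) position to the target, in degrees."""
    return math.degrees(math.atan2(target[1] - cur[1], target[0] - cur[0]))

def stereo_pan(user_heading_deg, cur, next_waypoint):
    """Map the relative bearing to the next waypoint into a pan in [-1, 1]."""
    relative = bearing_deg(cur, next_waypoint) - user_heading_deg
    relative = (relative + 180.0) % 360.0 - 180.0   # wrap to [-180, 180)
    return max(-1.0, min(1.0, relative / 90.0))     # +/-90 degrees maps to full pan

# A waypoint at a +45 degree relative bearing yields a pan of 0.5.
print(stereo_pan(user_heading_deg=0.0, cur=(0.0, 0.0), next_waypoint=(1.0, 1.0)))
```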
-
Publication number: 20250060821
Abstract: One exemplary implementation provides an improved user experience on a device by using physiological data to initiate a user interaction for the user experience based on an identified interest or intention of a user. For example, a sensor may obtain physiological data (e.g., pupil diameter) of a user during a user experience in which content is displayed on a display. The physiological data varies over time during the user experience and a pattern is detected. The detected pattern is used to identify an interest of the user in the content or an intention of the user regarding the content. The user interaction is then initiated based on the identified interest or the identified intention.
Type: Application
Filed: November 4, 2024
Publication date: February 20, 2025
Inventors: Grant H. Mulliken, Avi Bar-Zeev, Devin W. Chalmers, Fletcher R. Rothkopf, Holly Gerhard, Lilli I. Jonsson
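A toy version of the pattern-detection step might look like the following, where a sustained rise in pupil diameter above a baseline triggers the interaction; the window size, baseline, and threshold are invented for illustration.

```python
# Toy illustration only: the window, baseline, and threshold are invented.
def detect_interest(pupil_mm, baseline_mm=3.0, threshold_mm=0.5, window=5):
    """Return the index at which the last `window` samples all exceed the
    baseline by `threshold_mm`, or None if no such pattern occurs."""
    for i in range(window, len(pupil_mm) + 1):
        recent = pupil_mm[i - window:i]
        if all(sample - baseline_mm > threshold_mm for sample in recent):
            return i - 1
    return None

samples = [3.0, 3.1, 3.2, 3.6, 3.7, 3.8, 3.9, 3.8]
hit = detect_interest(samples)
if hit is not None:
    print(f"interest pattern detected at sample {hit}; initiate the user interaction")
```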
-
Patent number: 12188777
Abstract: Methods and apparatus for spatial audio navigation that may, for example, be implemented by mobile multipurpose devices. A spatial audio navigation system provides navigational information in audio form to direct users to target locations. The system uses directionality of audio played through a binaural audio device to provide navigational cues to the user. A current location, target location, and map information may be input to pathfinding algorithms to determine a real world path between the user's current location and the target location. The system may then use directional audio played through a headset to guide the user on the path from the current location to the target location. The system may implement one or more of several different spatial audio navigation methods to direct a user when following a path using spatial audio-based cues.
Type: Grant
Filed: June 20, 2023
Date of Patent: January 7, 2025
Assignee: Apple Inc.
Inventors: Bruno M. Sommer, Avi Bar-Zeev, Frank Angermann, Stephen E. Pinto, Lilli Ing-Marie Jonsson, Rahul Nair
-
Patent number: 12164687
Abstract: One exemplary implementation provides an improved user experience on a device by using physiological data to initiate a user interaction for the user experience based on an identified interest or intention of a user. For example, a sensor may obtain physiological data (e.g., pupil diameter) of a user during a user experience in which content is displayed on a display. The physiological data varies over time during the user experience and a pattern is detected. The detected pattern is used to identify an interest of the user in the content or an intention of the user regarding the content. The user interaction is then initiated based on the identified interest or the identified intention.
Type: Grant
Filed: August 6, 2021
Date of Patent: December 10, 2024
Assignee: Apple Inc.
Inventors: Avi Bar-Zeev, Devin W. Chalmers, Fletcher R. Rothkopf, Grant H. Mulliken, Holly E. Gerhard, Lilli I. Jonsson
-
Publication number: 20240394952
Abstract: A mixed reality system that includes a device and a base station that communicate via a wireless connection. The device may include sensors that collect information about the user's environment and about the user. The information collected by the sensors may be transmitted to the base station via the wireless connection. The base station renders frames or slices based at least in part on the sensor information received from the device, encodes the frames or slices, and transmits the compressed frames or slices to the device for decoding and display. The base station may provide more computing power than conventional stand-alone systems, and the wireless connection does not tether the device to the base station as in conventional tethered systems. The system may implement methods and apparatus to maintain a target frame rate through the wireless link and to minimize latency in frame rendering, transmittal, and display.
Type: Application
Filed: August 7, 2024
Publication date: November 28, 2024
Applicant: Apple Inc.
Inventors: Arthur Y. Zhang, Ray L. Chang, Timothy R. Oriol, Ling Su, Gurjeet S. Saund, Guy Cote, Jim C. Chou, Hao Pan, Tobias Eble, Avi Bar-Zeev, Sheng Zhang, Justin A. Hensley, Geoffrey Stahl
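The target-frame-rate idea can be sketched as a simple governor that trades render resolution for time when a frame overruns its budget; the policy and constants below are assumptions for illustration, not the system described in the publication.

```python
# Simplified sketch: scale render resolution to hold a target frame rate over
# the wireless link. The policy, constants, and names are assumptions.
TARGET_FPS = 90
FRAME_BUDGET_S = 1.0 / TARGET_FPS

def adjust_render_scale(scale, frame_time_s, min_scale=0.5, max_scale=1.0, step=0.05):
    """Lower the render scale after a slow frame, creep back up after fast ones."""
    if frame_time_s > FRAME_BUDGET_S:
        return max(min_scale, scale - step)
    if frame_time_s < 0.8 * FRAME_BUDGET_S:
        return min(max_scale, scale + step)
    return scale

# Usage: feed measured render + encode + transmit times per frame.
scale = 1.0
for frame_time in [0.010, 0.013, 0.012, 0.009, 0.008]:
    scale = adjust_render_scale(scale, frame_time)
    print(f"frame_time={frame_time:.3f}s -> render scale {scale:.2f}")
```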
-
Patent number: 12147733
Abstract: In an exemplary technique, audio information responsive to received input is provided. While providing the audio information, one or more conditions for stopping the provision of audio information are detected, and in response, the provision of the audio information is stopped. After stopping the provision of the audio information, if the one or more conditions for stopping the provision of audio information have ceased, then resumed audio information is provided, where the resumed audio information includes a rephrased version of a previously provided segment of the audio information.
Type: Grant
Filed: November 14, 2023
Date of Patent: November 19, 2024
Assignee: Apple Inc.
Inventors: Rahul Nair, Golnaz Abdollahian, Avi Bar-Zeev, Niranjan Manjunath
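A minimal sketch of the stop/resume behavior, assuming invented names and a stubbed rephrasing step, might track the last delivered segment and replace it with a rephrased version when playback resumes:

```python
# Illustrative only: names and the rephrase stub are assumptions, not the
# patent's implementation.
class AudioInfoSession:
    def __init__(self, segments):
        self.segments = list(segments)  # pieces of the audio information, in order
        self.index = 0
        self.interrupted = False

    def play_next(self):
        if self.interrupted or self.index >= len(self.segments):
            return None
        segment = self.segments[self.index]
        self.index += 1
        return segment

    def interrupt(self):
        # A stop condition was detected (e.g., the user turns to talk to someone).
        self.interrupted = True

    def resume(self):
        # Re-deliver the interrupted segment, rephrased, then continue.
        self.interrupted = False
        if self.index > 0:
            self.index -= 1
            self.segments[self.index] = rephrase(self.segments[self.index])

def rephrase(text):
    # Placeholder for a template- or model-based rephrasing step.
    return "To repeat: " + text

session = AudioInfoSession(["Turn left in 100 meters.", "Your destination is on the right."])
print(session.play_next())  # first segment plays
session.interrupt()
session.resume()
print(session.play_next())  # rephrased version of the interrupted segment
print(session.play_next())  # remaining audio information
```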
-
Patent number: 12148111
Abstract: The present disclosure relates to techniques for providing tangibility visualization of virtual objects within a computer-generated reality (CGR) environment, such as a CGR environment based on virtual reality and/or a CGR environment based on mixed reality. A visual feedback indicating tangibility is provided for a virtual object within a CGR environment that does not correspond to a real, tangible object in the real environment. A visual feedback indicating tangibility is not provided for a virtual representation of a real object within a CGR environment that corresponds to a real, tangible object in the real environment.
Type: Grant
Filed: September 11, 2023
Date of Patent: November 19, 2024
Assignee: Apple Inc.
Inventors: Alexis Palangie, Avi Bar-Zeev
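The core decision reduces to a small predicate over an object's provenance: show the tangibility cue only for purely virtual objects, not for representations of real objects. The data model in the sketch below is invented for illustration.

```python
# Illustrative only: the scene-object data model is invented.
from dataclasses import dataclass

@dataclass
class SceneObject:
    name: str
    corresponds_to_real_object: bool  # True if it represents a real, tangible thing

def needs_tangibility_cue(obj: SceneObject) -> bool:
    return not obj.corresponds_to_real_object

scene = [
    SceneObject("virtual chess board", corresponds_to_real_object=False),
    SceneObject("representation of the real desk", corresponds_to_real_object=True),
]
for obj in scene:
    print(obj.name, "-> show cue" if needs_tangibility_cue(obj) else "-> no cue")
```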
-
Patent number: 12120493
Abstract: In one implementation, a method of transforming a sound into a virtual sound for a synthesized reality (SR) setting is performed by a head-mounted device (HMD) including one or more processors, non-transitory memory, a microphone, a speaker, and a display. The method includes displaying, on the display, an image representation of a synthesized reality (SR) setting including a plurality of surfaces associated with an acoustic reverberation property of the SR setting. The method includes recording, via the microphone, a real sound produced in a physical setting. The method further includes generating, using the one or more processors, a virtual sound by transforming the real sound based on the acoustic reverberation property of the SR setting. The method further includes playing, via the speaker, the virtual sound.
Type: Grant
Filed: July 12, 2023
Date of Patent: October 15, 2024
Assignee: Apple Inc.
Inventor: Avi Bar-Zeev
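One common way to apply a room's reverberation to a recorded (dry) sound is convolution with an impulse response; the sketch below is a minimal illustration with made-up sample values, and a real system would derive the response from the SR setting's surfaces rather than hard-code it.

```python
# Illustrative only: the impulse response and signal values are invented.
def convolve(signal, impulse_response):
    out = [0.0] * (len(signal) + len(impulse_response) - 1)
    for i, s in enumerate(signal):
        for j, h in enumerate(impulse_response):
            out[i + j] += s * h
    return out

real_sound = [0.0, 1.0, 0.5, 0.0, -0.5]        # recorded samples (illustrative)
room_ir = [1.0, 0.0, 0.0, 0.3, 0.0, 0.0, 0.1]  # direct path plus two reflections
virtual_sound = convolve(real_sound, room_ir)
print([round(v, 2) for v in virtual_sound])
```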
-
Patent number: 12086919
Abstract: A mixed reality system that includes a device and a base station that communicate via a wireless connection. The device may include sensors that collect information about the user's environment and about the user. The information collected by the sensors may be transmitted to the base station via the wireless connection. The base station renders frames or slices based at least in part on the sensor information received from the device, encodes the frames or slices, and transmits the compressed frames or slices to the device for decoding and display. The base station may provide more computing power than conventional stand-alone systems, and the wireless connection does not tether the device to the base station as in conventional tethered systems. The system may implement methods and apparatus to maintain a target frame rate through the wireless link and to minimize latency in frame rendering, transmittal, and display.
Type: Grant
Filed: June 29, 2023
Date of Patent: September 10, 2024
Assignee: Apple Inc.
Inventors: Arthur Y. Zhang, Ray L. Chang, Timothy R. Oriol, Ling Su, Gurjeet S. Saund, Guy Cote, Jim C. Chou, Hao Pan, Tobias Eble, Avi Bar-Zeev, Sheng Zhang, Justin A. Hensley, Geoffrey Stahl
-
Publication number: 20240202959
Abstract: An augmented reality collaboration system comprises a first system configured to display virtual content, comprising: a structure comprising a plurality of radiation emitters arranged in a predetermined pattern, and a user device comprising: one or more sensors configured to sense outputs of the plurality of radiation emitters, and one or more displays; one or more hardware processors; and a non-transitory machine-readable storage medium encoded with instructions executable by the one or more hardware processors to, for the user device: determine a pose of the user device with respect to the structure based on the sensed outputs of the plurality of radiation emitters, and generate an image of virtual content based on the pose of the user device with respect to the structure, wherein the image of the virtual content is projected by the one or more displays of the user device in a predetermined location relative to the structure.
Type: Application
Filed: January 30, 2024
Publication date: June 20, 2024
Applicant: Campfire 3D, Inc.
Inventors: Avi Bar-Zeev, Alexander Tyurin, Gerald V. Wright, Jr.
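Pose estimation against a known emitter pattern is, at heart, a rigid-alignment problem. The sketch below solves a heavily simplified 2D version (rotation plus translation from matched point pairs); a real system would work in 3D from image observations, so treat this only as an illustration.

```python
# Heavily simplified 2D stand-in: align sensed emitter positions to the known
# pattern with a rigid rotation + translation. Everything here is illustrative.
import math

def estimate_pose_2d(known_pts, sensed_pts):
    """Return (theta_radians, tx, ty) mapping known pattern points onto sensed points."""
    n = len(known_pts)
    kcx = sum(p[0] for p in known_pts) / n
    kcy = sum(p[1] for p in known_pts) / n
    scx = sum(p[0] for p in sensed_pts) / n
    scy = sum(p[1] for p in sensed_pts) / n
    # Least-squares rotation from the centered point sets (2D dot/cross terms).
    sxx = sum((k[0] - kcx) * (s[0] - scx) + (k[1] - kcy) * (s[1] - scy)
              for k, s in zip(known_pts, sensed_pts))
    sxy = sum((k[0] - kcx) * (s[1] - scy) - (k[1] - kcy) * (s[0] - scx)
              for k, s in zip(known_pts, sensed_pts))
    theta = math.atan2(sxy, sxx)
    tx = scx - (kcx * math.cos(theta) - kcy * math.sin(theta))
    ty = scy - (kcx * math.sin(theta) + kcy * math.cos(theta))
    return theta, tx, ty

known = [(0, 0), (1, 0), (0, 1)]    # emitter pattern on the structure
sensed = [(2, 3), (2, 4), (1, 3)]   # same points as seen by the device (rotated 90 deg, shifted)
theta, tx, ty = estimate_pose_2d(known, sensed)
print(round(math.degrees(theta)), (round(tx, 2), round(ty, 2)))  # 90 (2.0, 3.0)
```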
-
Publication number: 20240160022
Abstract: A mixed reality system including a head-mounted display (HMD) and a base station. Information collected by HMD sensors may be transmitted to the base via a wired or wireless connection. On the base, a rendering engine renders frames including virtual content based in part on the sensor information, and an encoder compresses the frames according to an encoding protocol before sending the frames to the HMD over the connection. Instead of using a previous frame to estimate motion vectors in the encoder, motion vectors from the HMD and the rendering engine are input to the encoder and used in compressing the frame. The motion vectors may be embedded in the data stream along with the encoded frame data and transmitted to the HMD over the connection. If a frame is not received at the HMD, the HMD may synthesize a frame from a previous frame using the motion vectors.
Type: Application
Filed: January 24, 2024
Publication date: May 16, 2024
Applicant: Apple Inc.
Inventors: Geoffrey Stahl, Avi Bar-Zeev
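Frame synthesis from motion vectors can be pictured as block-wise motion compensation over the previous frame; the block size, vector format, and tiny test frame below are invented for illustration.

```python
# Toy illustration only: block size, vector format, and the frame are invented.
def synthesize_frame(prev, motion_vectors, block=2):
    """prev: 2D list of pixels; motion_vectors: {(block_row, block_col): (dy, dx)}."""
    h, w = len(prev), len(prev[0])
    out = [[0] * w for _ in range(h)]
    for by in range(0, h, block):
        for bx in range(0, w, block):
            dy, dx = motion_vectors.get((by // block, bx // block), (0, 0))
            for y in range(by, min(by + block, h)):
                for x in range(bx, min(bx + block, w)):
                    sy = min(max(y - dy, 0), h - 1)  # clamp source sample to the frame
                    sx = min(max(x - dx, 0), w - 1)
                    out[y][x] = prev[sy][sx]
    return out

prev_frame = [[0, 0, 0, 0],
              [0, 9, 0, 0],
              [0, 0, 0, 0],
              [0, 0, 0, 0]]
# Shift the top-left block one pixel up and one pixel left.
for row in synthesize_frame(prev_frame, {(0, 0): (-1, -1)}):
    print(row)
```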
-
Publication number: 20240161636
Abstract: This disclosure describes an unmanned aerial vehicle (“UAV”) configured to autonomously deliver items of inventory to various destinations. The UAV may receive inventory information and a destination location and autonomously retrieve the inventory from a location within a materials handling facility, compute a route from the materials handling facility to a destination and travel to the destination to deliver the inventory.
Type: Application
Filed: July 19, 2023
Publication date: May 16, 2024
Inventors: Gur Kimchi, Daniel Buchmueller, Scott A. Green, Brian C. Bechman, Scott Isaacs, Amir Navot, Fabian Hensel, Avi Bar-Zeev, Severan Sylvain Jean-Michel Rault
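The route-computation step can be illustrated with a small grid search that avoids no-fly cells; the publication does not specify a routing algorithm at this level of detail, so the breadth-first search below is only a stand-in.

```python
# Stand-in only: a breadth-first search over a tiny grid where 1 marks a no-fly cell.
from collections import deque

def plan_route(grid, start, goal):
    """Return a list of (row, col) cells from start to goal, or None if unreachable."""
    rows, cols = len(grid), len(grid[0])
    queue = deque([start])
    came_from = {start: None}
    while queue:
        cell = queue.popleft()
        if cell == goal:
            path = []
            while cell is not None:
                path.append(cell)
                cell = came_from[cell]
            return path[::-1]
        r, c = cell
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if (0 <= nr < rows and 0 <= nc < cols
                    and grid[nr][nc] == 0 and (nr, nc) not in came_from):
                came_from[(nr, nc)] = cell
                queue.append((nr, nc))
    return None

airspace = [[0, 0, 0],
            [1, 1, 0],   # a no-fly strip the route must go around
            [0, 0, 0]]
print(plan_route(airspace, start=(0, 0), goal=(2, 0)))
```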
-
Publication number: 20240086147
Abstract: In an exemplary technique, audio information responsive to received input is provided. While providing the audio information, one or more conditions for stopping the provision of audio information are detected, and in response, the provision of the audio information is stopped. After stopping the provision of the audio information, if the one or more conditions for stopping the provision of audio information have ceased, then resumed audio information is provided, where the resumed audio information includes a rephrased version of a previously provided segment of the audio information.
Type: Application
Filed: November 14, 2023
Publication date: March 14, 2024
Inventors: Rahul Nair, Golnaz Abdollahian, Avi Bar-Zeev, Niranjan Manjunath
-
Patent number: 11922652
Abstract: An augmented reality collaboration system comprises a first system configured to display virtual content, comprising: a structure comprising a plurality of radiation emitters arranged in a predetermined pattern, and a user device comprising: one or more sensors configured to sense outputs of the plurality of radiation emitters, and one or more displays; one or more hardware processors; and a non-transitory machine-readable storage medium encoded with instructions executable by the one or more hardware processors to, for the user device: determine a pose of the user device with respect to the structure based on the sensed outputs of the plurality of radiation emitters, and generate an image of virtual content based on the pose of the user device with respect to the structure, wherein the image of the virtual content is projected by the one or more displays of the user device in a predetermined location relative to the structure.
Type: Grant
Filed: January 13, 2023
Date of Patent: March 5, 2024
Assignee: Campfire 3D, Inc.
Inventors: Avi Bar-Zeev, Alexander Tyurin, Gerald V. Wright, Jr.
-
Patent number: 11914152
Abstract: A mixed reality system including a head-mounted display (HMD) and a base station. Information collected by HMD sensors may be transmitted to the base via a wired or wireless connection. On the base, a rendering engine renders frames including virtual content based in part on the sensor information, and an encoder compresses the frames according to an encoding protocol before sending the frames to the HMD over the connection. Instead of using a previous frame to estimate motion vectors in the encoder, motion vectors from the HMD and the rendering engine are input to the encoder and used in compressing the frame. The motion vectors may be embedded in the data stream along with the encoded frame data and transmitted to the HMD over the connection. If a frame is not received at the HMD, the HMD may synthesize a frame from a previous frame using the motion vectors.
Type: Grant
Filed: February 4, 2022
Date of Patent: February 27, 2024
Assignee: Apple Inc.
Inventors: Geoffrey Stahl, Avi Bar-Zeev
-
Publication number: 20240007790
Abstract: In one implementation, a method of transforming a sound into a virtual sound for a synthesized reality (SR) setting is performed by a head-mounted device (HMD) including one or more processors, non-transitory memory, a microphone, a speaker, and a display. The method includes displaying, on the display, an image representation of a synthesized reality (SR) setting including a plurality of surfaces associated with an acoustic reverberation property of the SR setting. The method includes recording, via the microphone, a real sound produced in a physical setting. The method further includes generating, using the one or more processors, a virtual sound by transforming the real sound based on the acoustic reverberation property of the SR setting. The method further includes playing, via the speaker, the virtual sound.
Type: Application
Filed: July 12, 2023
Publication date: January 4, 2024
Inventor: Avi Bar-Zeev
-
Patent number: 11861265
Abstract: In an exemplary technique, speech input including one or more instructions is received. After the speech input has stopped, if it is determined that one or more visual characteristics indicate that further speech input is not expected, a response to the one or more instructions is provided. If it is determined that one or more visual characteristics indicate that further speech input is expected, a response to the one or more instructions is not provided.
Type: Grant
Filed: March 20, 2023
Date of Patent: January 2, 2024
Assignee: Apple Inc.
Inventors: Rahul Nair, Golnaz Abdollahian, Avi Bar-Zeev, Niranjan Manjunath
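The gating logic can be reduced to a small predicate: respond only once speech has stopped and no visual cue suggests more speech is coming. The cue representation and the rule below are assumptions for illustration, not the patent's criteria.

```python
# Illustrative only: how the visual characteristics are derived is not shown here.
def should_respond(speech_stopped, visual_cues_expect_more_speech):
    """visual_cues_expect_more_speech: booleans derived from visual characteristics
    (e.g., the user appears to be mid-sentence); any True cue defers the response."""
    return speech_stopped and not any(visual_cues_expect_more_speech)

print(should_respond(True, [False, False]))  # True: provide the response
print(should_respond(True, [True, False]))   # False: further speech expected, keep listening
```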
-
Publication number: 20230419622
Abstract: The present disclosure relates to techniques for providing tangibility visualization of virtual objects within a computer-generated reality (CGR) environment, such as a CGR environment based on virtual reality and/or a CGR environment based on mixed reality. A visual feedback indicating tangibility is provided for a virtual object within a CGR environment that does not correspond to a real, tangible object in the real environment. A visual feedback indicating tangibility is not provided for a virtual representation of a real object within a CGR environment that corresponds to a real, tangible object in the real environment.
Type: Application
Filed: September 11, 2023
Publication date: December 28, 2023
Inventors: Alexis Palangie, Avi Bar-Zeev