Patents by Inventor Avi Bar-Zeev

Avi Bar-Zeev has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Patent number: 11726324
    Abstract: A display system includes a head-mounted display unit and a wake control system. The head-mounted display unit provides content to a user and is operable in a low-power state and a high-power state that consumes more power to provide the content to the user than the low-power state. The wake control system determines when to operate in the high-power state. The wake control system may assess a first wake criterion with low power, assess a second wake criterion with higher power than the first wake criterion upon satisfaction of the first wake criterion, and cause the head-mounted display unit to operate in the high-power state upon satisfaction of the second wake criterion.
    Type: Grant
    Filed: June 21, 2019
    Date of Patent: August 15, 2023
    Assignee: Apple Inc.
    Inventors: Fletcher R. Rothkopf, David A. Kalinowski, Jae Hwang Lee, Avi Bar-Zeev, Grant H. Mulliken, Paul Meade, Nathanael D. Parkhill, Ray L. Chang, Arthur Y. Zhang
  • Patent number: 11714592
    Abstract: In an exemplary process for interacting with user interface objects using an eye gaze, an affordance associated with a first object is displayed. A gaze direction or a gaze depth is determined. While the gaze direction or the gaze depth is determined to correspond to a gaze at the affordance, a first input representing user instruction to take action on the affordance is received, and the affordance is selected responsive to receiving the first input.
    Type: Grant
    Filed: September 27, 2021
    Date of Patent: August 1, 2023
    Assignee: Apple Inc.
    Inventors: Avi Bar-Zeev, Ryan S. Burgoyne, Devin W. Chalmers, Luis R. Deliz Centeno, Rahul Nair, Timothy R. Oriol, Alexis H. Palangie
  • Patent number: 11712628
    Abstract: In various implementations, methods and devices for attenuation of co-user interactions in SR space are described. In one implementation, a method of attenuating avatars based on a breach of avatar social interaction criteria is performed at a device provided to deliver simulated reality (SR) content. In one implementation, a method of close collaboration in SR setting is performed at a device provided to deliver SR content.
    Type: Grant
    Filed: September 17, 2019
    Date of Patent: August 1, 2023
    Assignee: Apple Inc.
    Inventors: Avi Bar-Zeev, Alexis Henri Palangie, Luis Rafael Deliz Centeno, Rahul Nair
  • Patent number: 11714519
    Abstract: Techniques for moving about a computer simulated reality (CSR) setting are disclosed. An example technique includes displaying a current view of the CSR setting, the current view depicting a current location of the CSR setting from a first perspective corresponding to a first determined direction. The technique further includes displaying a user interface element, the user interface element depicting a destination location not visible from the current location, and, in response to receiving input representing selection of the user interface element, modifying the display of the current view to display a destination view depicting the destination location, wherein modifying the display of the current view to display the destination view includes enlarging the user interface element.
    Type: Grant
    Filed: February 24, 2022
    Date of Patent: August 1, 2023
    Assignee: Apple Inc.
    Inventors: Luis R. Deliz Centeno, Avi Bar-Zeev
  • Patent number: 11709068
    Abstract: Methods and apparatus for spatial audio navigation that may, for example, be implemented by mobile multipurpose devices. A spatial audio navigation system provides navigational information in audio form to direct users to target locations. The system uses directionality of audio played through a binaural audio device to provide navigational cues to the user. A current location, target location, and map information may be input to pathfinding algorithms to determine a real world path between the user's current location and the target location. The system may then use directional audio played through a headset to guide the user on the path from the current location to the target location. The system may implement one or more of several different spatial audio navigation methods to direct a user when following a path using spatial audio-based cues.
    Type: Grant
    Filed: September 25, 2018
    Date of Patent: July 25, 2023
    Assignee: Apple Inc.
    Inventors: Bruno M. Sommer, Avi Bar-Zeev, Frank Angermann, Stephen E. Pinto, Lilli Ing-Marie Jonsson, Rahul Nair
  • Patent number: 11710284
    Abstract: A system comprising: a user device, comprising: sensors configured to sense data related to a physical environment of the user device, displays; hardware processors; and a non-transitory machine-readable storage medium encoded with instructions executable by the hardware processors to: place a virtual object in a 3D scene displayed by the second user device, determine a pose of the user device with respect to the physical location in the physical environment of the user device, and generate an image of virtual content based on the pose of the user device with respect to the placed virtual object, wherein the image of the virtual content is projected by the one or more displays of the user device in a predetermined location relative to the physical location in the physical environment of the user device.
    Type: Grant
    Filed: December 14, 2021
    Date of Patent: July 25, 2023
    Assignee: Campfire 3D, Inc.
    Inventors: Avi Bar-Zeev, Alexander Tyurin, Gerald V. Wright, Jr.
  • Publication number: 20230229387
    Abstract: In an exemplary technique, speech input including one or more instructions is received. After the speech input has stopped, if it is determined that one or more visual characteristics indicate that further speech input is not expected, a response to the one or more instructions is provided. If it is determined that one or more visual characteristics indicate that further speech input is expected, a response to the one or more instructions is not provided.
    Type: Application
    Filed: March 20, 2023
    Publication date: July 20, 2023
    Inventors: Rahul Nair, Golnaz Abdollahian, Avi Bar-Zeev, Niranjan Manjunath
  • Patent number: 11697068
    Abstract: Systems and methods to provide a mobile computing platform as a physical interface for an interactive space are presented herein. The interactive space may be experienced by a user of a host device (e.g., headset). The interactive space may include views of virtual content. A position and/or heading of the mobile computing platform relative to a perceived position and/or heading of the virtual content of the interactive space may be determined. Remote command information may be determined based on the relative position information and/or user input information conveying user entry and/or selection of one or more input elements of the mobile computing platform. The remote command information may be configured to effectuate user interactions with the virtual content in the interactive space based on user interactions with the mobile computing platform.
    Type: Grant
    Filed: October 16, 2019
    Date of Patent: July 11, 2023
    Assignee: Campfire 3D, Inc.
    Inventors: Avi Bar-Zeev, Gerald Wright, Jr., Alexander Tyurin, Diego Leyton
  • Patent number: 11688147
    Abstract: A system comprising: a user device, comprising: sensors configured to sense data related to a physical environment of the user device, displays; hardware processors; and a non-transitory machine-readable storage medium encoded with instructions executable by the hardware processors to: place a virtual object in a 3D scene displayed by the second user device, determine a pose of the user device with respect to the physical location in the physical environment of the user device, and generate an image of virtual content based on the pose of the user device with respect to the placed virtual object, wherein the image of the virtual content is projected by the one or more displays of the user device in a predetermined location relative to the physical location in the physical environment of the user device.
    Type: Grant
    Filed: December 14, 2021
    Date of Patent: June 27, 2023
    Assignee: Campfire 3D, Inc.
    Inventors: Avi Bar-Zeev, Alexander Tyurin, Gerald V. Wright, Jr.
  • Publication number: 20230154036
    Abstract: An augmented reality collaboration system comprises a first system configured to display virtual content, comprising: a structure comprising a plurality of radiation emitters arranged in a predetermined pattern, and a user device comprising: one or more sensors configured to sense outputs of the plurality of radiation emitters, and one or more displays; one or more hardware processors; and a non-transitory machine-readable storage medium encoded with instructions executable by the one or more hardware processors to, for the user device: determine a pose of the user device with respect to the structure based on the sensed outputs of the plurality of radiation emitters, and generate an image of virtual content based on the pose of the user device with respect to the structure, wherein the image of the virtual content is projected by the one or more displays of the user device in a predetermined location relative to the structure.
    Type: Application
    Filed: January 13, 2023
    Publication date: May 18, 2023
    Applicant: Campfire 3D, Inc.
    Inventors: Avi Bar-Zeev, Alexander Tyurin, Gerald V. Wright, Jr.
  • Publication number: 20230143213
    Abstract: A system comprising: a user device, comprising: sensors configured to sense data related to a physical environment of the user device, displays; hardware processors; and a non-transitory machine-readable storage medium encoded with instructions executable by the hardware processors to: place a virtual object in a 3D scene displayed by the second user device, determine a pose of the user device with respect to the physical location in the physical environment of the user device, and generate an image of virtual content based on the pose of the user device with respect to the placed virtual object, wherein the image of the virtual content is projected by the one or more displays of the user device in a predetermined location relative to the physical location in the physical environment of the user device.
    Type: Application
    Filed: January 4, 2023
    Publication date: May 11, 2023
    Inventors: Avi Bar-Zeev, Alexander Tyurin, Gerald V. Wright, Jr.
  • Patent number: 11609739
    Abstract: In an exemplary technique for providing audio information, an input is received, and audio information responsive to the received input is provided using a speaker. While providing the audio information, an external sound is detected. If it is determined that the external sound is a communication of a first type, then the provision of the audio information is stopped. If it is determined that the external sound is a communication of a second type, then the provision of the audio information continues.
    Type: Grant
    Filed: April 24, 2019
    Date of Patent: March 21, 2023
    Assignee: Apple Inc.
    Inventors: Rahul Nair, Golnaz Abdollahian, Avi Bar-Zeev, Niranjan Manjunath
  • Patent number: 11587295
    Abstract: A system comprising: a user device, comprising: sensors configured to sense data related to a physical environment of the user device, displays; hardware processors; and a non-transitory machine-readable storage medium encoded with instructions executable by the hardware processors to: place a virtual object in a 3D scene displayed by the second user device, determine a pose of the user device with respect to the physical location in the physical environment of the user device, and generate an image of virtual content based on the pose of the user device with respect to the placed virtual object, wherein the image of the virtual content is projected by the one or more displays of the user device in a predetermined location relative to the physical location in the physical environment of the user device.
    Type: Grant
    Filed: October 5, 2021
    Date of Patent: February 21, 2023
    Assignee: Meta View, Inc.
    Inventors: Avi Bar-Zeev, Alexander Tyurin, Gerald V. Wright, Jr.
  • Publication number: 20230045634
    Abstract: In one implementation, a method of providing contextual computer-generated reality (CGR) digital assistant is performed at a device provided to deliver a CGR scene, the device including one or more processors, non-transitory memory, and one or more displays. The method includes obtaining image data characterizing a field of view captured by an image sensor. The method further includes identifying in the image data a contextual trigger for one of a plurality of contextual CGR digital assistants. The method additionally includes selecting a visual representation of the one of the plurality of contextual CGR digital assistants, where the visual representation is selected based on context and in response to identifying the contextual trigger. The method also includes presenting the CGR scene by displaying the visual representation of the one of the plurality of contextual CGR digital assistants, where the visual representation provides information associated with the contextual trigger.
    Type: Application
    Filed: July 15, 2022
    Publication date: February 9, 2023
    Inventors: Avi Bar-Zeev, Golnaz Abdollahian, Devin William Chalmers, David H. Y. Huang, Banafsheh Jalali
  • Publication number: 20220322006
    Abstract: In one implementation, a method of transforming a sound into a virtual sound for a synthesized reality (SR) setting is performed by a head-mounted device (HMD) including one or more processors, non-transitory memory, a microphone, a speaker, and a display. The method includes displaying, on the display, an image representation of a synthesized reality (SR) setting including a plurality of surfaces associated with an acoustic reverberation property of the SR setting. The method includes recording, via the microphone, a real sound produced in a physical setting. The method further includes generating, using the one or more processors, a virtual sound by transforming the real sound based on the acoustic reverberation property of the SR setting. The method further includes playing, via the speaker, the virtual sound.
    Type: Application
    Filed: June 15, 2022
    Publication date: October 6, 2022
    Inventor: Avi Bar-Zeev
  • Patent number: 11403821
    Abstract: In one implementation, a method of providing contextual computer-generated reality (CGR) digital assistant is performed at a device provided to deliver a CGR scene, the device including one or more processors, non-transitory memory, and one or more displays. The method includes obtaining image data characterizing a field of view captured by an image sensor. The method further includes identifying in the image data a contextual trigger for one of a plurality of contextual CGR digital assistants. The method additionally includes selecting a visual representation of the one of the plurality of contextual CGR digital assistants, where the visual representation is selected based on context and in response to identifying the contextual trigger. The method also includes presenting the CGR scene by displaying the visual representation of the one of the plurality of contextual CGR digital assistants, where the visual representation provides information associated with the contextual trigger.
    Type: Grant
    Filed: September 20, 2019
    Date of Patent: August 2, 2022
    Assignee: Apple Inc.
    Inventors: Avi Bar-Zeev, Golnaz Abdollahian, Devin William Chalmers, David H. Y. Huang, Banafsheh Jalali
  • Patent number: 11363378
    Abstract: In one implementation, a method of transforming a sound into a virtual sound for a synthesized reality (SR) setting is performed by a head-mounted device (HMD) including one or more processors, non-transitory memory, a microphone, a speaker, and a display. The method includes displaying, on the display, an image representation of a synthesized reality (SR) setting including a plurality of surfaces associated with an acoustic reverberation property of the SR setting. The method includes recording, via the microphone, a real sound produced in a physical setting. The method further includes generating, using the one or more processors, a virtual sound by transforming the real sound based on the acoustic reverberation property of the SR setting. The method further includes playing, via the speaker, the virtual sound.
    Type: Grant
    Filed: November 3, 2020
    Date of Patent: June 14, 2022
    Assignee: Apple Inc.
    Inventor: Avi Bar-Zeev
  • Publication number: 20220179542
    Abstract: Techniques for moving about a computer simulated reality (CSR) setting are disclosed. An example technique includes displaying a current view of the CSR setting, the current view depicting a current location of the CSR setting from a first perspective corresponding to a first determined direction. The technique further includes displaying a user interface element, the user interface element depicting a destination location not visible from the current location, and, in response to receiving input representing selection of the user interface element, modifying the display of the current view to display a destination view depicting the destination location, wherein modifying the display of the current view to display the destination view includes enlarging the user interface element.
    Type: Application
    Filed: February 24, 2022
    Publication date: June 9, 2022
    Inventors: Luis R. Deliz Centeno, Avi Bar-Zeev
  • Publication number: 20220155603
    Abstract: A mixed reality system including a head-mounted display (HMD) and a base station. Information collected by HMD sensors may be transmitted to the base via a wired or wireless connection. On the base, a rendering engine renders frames including virtual content based in part on the sensor information, and an encoder compresses the frames according to an encoding protocol before sending the frames to the HMD over the connection. Instead of using a previous frame to estimate motion vectors in the encoder, motion vectors from the HMD and the rendering engine are input to the encoder and used in compressing the frame. The motion vectors may be embedded in the data stream along with the encoded frame data and transmitted to the HMD over the connection. If a frame is not received at the HMD, the HMD may synthesize a frame from a previous frame using the motion vectors.
    Type: Application
    Filed: February 4, 2022
    Publication date: May 19, 2022
    Applicant: Apple Inc.
    Inventors: Geoffrey Stahl, Avi Bar-Zeev
  • Patent number: 11320958
    Abstract: Techniques for moving about a computer simulated reality (CSR) setting are disclosed. An example technique includes displaying a current view of the CSR setting, the current view depicting a current location of the CSR setting from a first perspective corresponding to a first determined direction. The technique further includes displaying a user interface element, the user interface element depicting a destination location not visible from the current location, and, in response to receiving input representing selection of the user interface element, modifying the display of the current view to display a destination view depicting the destination location, wherein modifying the display of the current view to display the destination view includes enlarging the user interface element.
    Type: Grant
    Filed: May 1, 2019
    Date of Patent: May 3, 2022
    Assignee: Apple Inc.
    Inventors: Luis R. Deliz Centeno, Avi Bar-Zeev
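
The staged wake logic described in patent 11726324 above can be sketched as follows. This is an illustration only, not the patented implementation; the criterion functions and their meanings are hypothetical.

```python
# Hypothetical sketch of the staged wake logic in patent 11726324:
# a cheap, low-power first check gates a costlier second check, which
# in turn gates the transition to the high-power display state.

def should_wake(first_wake_criterion, second_wake_criterion):
    """Return True when the HMD should enter its high-power state.

    `first_wake_criterion` is assumed to be a cheap check (e.g. a motion
    threshold); `second_wake_criterion` a more power-hungry one (e.g.
    running a gaze detector). Both are assumed to be callables -> bool.
    """
    if not first_wake_criterion():       # low-power check runs first
        return False
    return second_wake_criterion()       # costlier check only if needed

# Usage: the expensive check never runs when the cheap one fails.
calls = []
should_wake(lambda: calls.append("cheap") or False,
            lambda: calls.append("costly") or True)
```

The point of the staging is power, not correctness: the expensive criterion is evaluated only on the (presumably rare) frames where the cheap one already fired.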
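
The gaze-gated selection flow of patent 11714592 can likewise be sketched. All names below are illustrative; the patent's "affordance" and input handling are far richer than this toy per-frame model.

```python
# Hypothetical sketch of gaze-gated selection (patent 11714592): an
# affordance is selected only when a confirming input arrives *while*
# the gaze direction/depth is determined to rest on that affordance.

def select_on_input(gaze_on_affordance, inputs_received):
    """Return True if a confirming input coincides with gaze dwell.

    The two arguments are assumed to be parallel per-frame samples:
    a bool "gaze corresponds to the affordance" and a bool "first input
    representing a user instruction was received this frame".
    """
    for on_affordance, got_input in zip(gaze_on_affordance, inputs_received):
        if on_affordance and got_input:
            return True   # affordance selected
    return False
```

Note that an input arriving while the gaze is elsewhere does nothing, which is the core of the claim: gaze supplies the target, the input supplies the intent.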
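
One directional cue of the kind patent 11709068 describes for spatial audio navigation can be sketched as a bearing computation plus a constant-power stereo pan. Everything here is an assumption for illustration; the patent covers binaural rendering and several navigation methods, not this specific panning law.

```python
# Hypothetical sketch of one spatial-audio navigation cue (patent
# 11709068): compute the bearing from the user's heading to the next
# waypoint and derive left/right gains so the cue seems to come from
# the direction the user should walk.
import math

def bearing_to(user_xy, target_xy):
    """Absolute bearing in radians (0 = +x axis) from user to target."""
    return math.atan2(target_xy[1] - user_xy[1], target_xy[0] - user_xy[0])

def stereo_gains(user_xy, user_heading, target_xy):
    """Constant-power left/right gains steering a cue toward the target.

    `user_heading` is the facing direction in radians; a target dead
    ahead yields equal gains, a target to the left pans fully left.
    """
    rel = bearing_to(user_xy, target_xy) - user_heading
    rel = math.atan2(math.sin(rel), math.cos(rel))   # wrap to [-pi, pi]
    rel = max(-math.pi / 2, min(math.pi / 2, rel))   # clamp targets behind
    left = math.cos(math.pi / 4 - rel / 2)
    right = math.cos(math.pi / 4 + rel / 2)
    return left, right
```

The constant-power pan (gains that are cosines of complementary angles) keeps perceived loudness roughly steady as the cue sweeps across the stereo field.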
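
The frame-recovery idea in publication 20220155603 — warping the previous frame by transmitted motion vectors when an encoded frame never arrives — can be sketched with a toy block-based warp. The block size, vector convention, and clamping below are all assumptions made for illustration.

```python
# Hypothetical sketch of motion-vector frame synthesis (publication
# 20220155603): if a frame is lost in transit, the HMD synthesizes a
# stand-in by shifting blocks of the previous frame along the motion
# vectors that were embedded in the data stream.

def synthesize_frame(prev_frame, motion_vectors, block=4):
    """Warp prev_frame (a list of pixel rows) by per-block (dy, dx) vectors.

    motion_vectors[i][j] gives the displacement of the block whose top-left
    corner is (i * block, j * block); the source block is fetched from
    (current - displacement) and clamped to stay inside the frame.
    """
    h, w = len(prev_frame), len(prev_frame[0])
    out = [[0] * w for _ in range(h)]
    for by in range(0, h, block):
        for bx in range(0, w, block):
            dy, dx = motion_vectors[by // block][bx // block]
            sy = min(max(by - dy, 0), h - block)   # clamp source block
            sx = min(max(bx - dx, 0), w - block)   # inside the frame
            for r in range(block):
                for c in range(block):
                    out[by + r][bx + c] = prev_frame[sy + r][sx + c]
    return out
```

A synthesized frame of this kind is only a stopgap until the next real frame decodes, but it avoids presenting a stale, unwarped image to a moving head.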