Patents by Inventor Avi Bar-Zeev
Avi Bar-Zeev has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).
-
Patent number: 11726324
Abstract: A display system includes a head-mounted display unit and a wake control system. The head-mounted display unit provides content to a user and is operable in a low-power state and a high-power state that consumes more power to provide the content to the user than the low-power state. The wake control system determines when to operate in the high-power state. The wake control system may assess a first wake criterion with low power, assess a second wake criterion with higher power than the first wake criterion upon satisfaction of the first wake criterion, and cause the head-mounted display unit to operate in the high-power state upon satisfaction of the second wake criterion.
Type: Grant
Filed: June 21, 2019
Date of Patent: August 15, 2023
Assignee: Apple Inc.
Inventors: Fletcher R. Rothkopf, David A. Kalinowski, Jae Hwang Lee, Avi Bar-Zeev, Grant H. Mulliken, Paul Meade, Nathanael D. Parkhill, Ray L. Chang, Arthur Y. Zhang
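The tiered wake logic above can be sketched as a short gating function. This is a minimal illustration, not the patented implementation: the signal sources, thresholds, and the idea of modeling the expensive check as a callable are all hypothetical.

```python
def should_wake(cheap_signal, expensive_check, cheap_threshold=0.5,
                expensive_threshold=0.8):
    """Two-stage wake decision: evaluate the cheap, low-power criterion
    first, and run the costlier second assessment only if it passes."""
    if cheap_signal < cheap_threshold:   # first wake criterion (low power)
        return False
    # Second wake criterion (higher power) runs only on demand,
    # e.g. a camera-based check instead of a motion-sensor reading.
    return expensive_check() >= expensive_threshold
```

The point of the structure is that the high-power sensor path is never exercised unless the low-power criterion has already been satisfied.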
-
Patent number: 11714592Abstract: In an exemplary process for interacting with user interface objects using an eye gaze, an affordance associated with a first object is displayed. A gaze direction or a gaze depth is determined. While the gaze direction or the gaze depth is determined to correspond to a gaze at the affordance, a first input representing user instruction to take action on the affordance is received, and the affordance is selected responsive to receiving the first input.Type: GrantFiled: September 27, 2021Date of Patent: August 1, 2023Assignee: Apple Inc.Inventors: Avi Bar-Zeev, Ryan S. Burgoyne, Devin W. Chalmers, Luis R. Deliz Centeno, Rahul Nair, Timothy R. Oriol, Alexis H. Palangie
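A rough sketch of the gaze-plus-confirmation pattern described in this abstract: the affordance is selected only while the gaze corresponds to it and a separate confirming input arrives. The angular and depth tolerances, and the comparison method, are assumptions for illustration.

```python
import math

def gaze_on_affordance(gaze_dir, affordance_dir, gaze_depth, affordance_depth,
                       max_angle_rad=0.12, max_depth_err=0.25):
    """True when the gaze ray points close enough to the affordance and the
    focus depth roughly matches its distance (both tolerances hypothetical)."""
    dot = sum(g * a for g, a in zip(gaze_dir, affordance_dir))
    norm = (math.sqrt(sum(g * g for g in gaze_dir)) *
            math.sqrt(sum(a * a for a in affordance_dir)))
    angle = math.acos(max(-1.0, min(1.0, dot / norm)))
    return angle <= max_angle_rad and abs(gaze_depth - affordance_depth) <= max_depth_err

def select_affordance(gaze_ok, confirm_input):
    """Select only while the gaze rests on the affordance AND a first
    input (e.g. a button press) confirms the user's intent."""
    return gaze_ok and confirm_input
```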
-
Patent number: 11712628
Abstract: In various implementations, methods and devices for attenuation of co-user interactions in SR space are described. In one implementation, a method of attenuating avatars based on a breach of avatar social interaction criteria is performed at a device provided to deliver simulated reality (SR) content. In one implementation, a method of close collaboration in an SR setting is performed at a device provided to deliver SR content.
Type: Grant
Filed: September 17, 2019
Date of Patent: August 1, 2023
Assignee: Apple Inc.
Inventors: Avi Bar-Zeev, Alexis Henri Palangie, Luis Rafael Deliz Centeno, Rahul Nair
-
Patent number: 11714519
Abstract: Techniques for moving about a computer simulated reality (CSR) setting are disclosed. An example technique includes displaying a current view of the CSR setting, the current view depicting a current location of the CSR setting from a first perspective corresponding to a first determined direction. The technique further includes displaying a user interface element, the user interface element depicting a destination location not visible from the current location, and, in response to receiving input representing selection of the user interface element, modifying the display of the current view to display a destination view depicting the destination location, wherein modifying the display of the current view to display the destination view includes enlarging the user interface element.
Type: Grant
Filed: February 24, 2022
Date of Patent: August 1, 2023
Assignee: Apple Inc.
Inventors: Luis R. Deliz Centeno, Avi Bar-Zeev
-
Patent number: 11709068
Abstract: Methods and apparatus for spatial audio navigation that may, for example, be implemented by mobile multipurpose devices. A spatial audio navigation system provides navigational information in audio form to direct users to target locations. The system uses directionality of audio played through a binaural audio device to provide navigational cues to the user. A current location, target location, and map information may be input to pathfinding algorithms to determine a real world path between the user's current location and the target location. The system may then use directional audio played through a headset to guide the user on the path from the current location to the target location. The system may implement one or more of several different spatial audio navigation methods to direct a user when following a path using spatial audio-based cues.
Type: Grant
Filed: September 25, 2018
Date of Patent: July 25, 2023
Assignee: Apple Inc.
Inventors: Bruno M. Sommer, Avi Bar-Zeev, Frank Angermann, Stephen E. Pinto, Lilli Ing-Marie Jonsson, Rahul Nair
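The pipeline the abstract describes (pathfinding over map data, then a directional audio cue toward the next waypoint) can be sketched in miniature. The patent does not specify the pathfinding algorithm or the panning law; breadth-first search over a grid and a sine-based stereo pan are stand-ins.

```python
import math
from collections import deque

def find_path(grid, start, goal):
    """Breadth-first search over a walkable grid (0 = free, 1 = blocked);
    a stand-in for the unspecified pathfinding algorithm."""
    rows, cols = len(grid), len(grid[0])
    prev = {start: None}
    queue = deque([start])
    while queue:
        r, c = queue.popleft()
        if (r, c) == goal:
            path, node = [], goal
            while node is not None:
                path.append(node)
                node = prev[node]
            return path[::-1]
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if 0 <= nr < rows and 0 <= nc < cols and grid[nr][nc] == 0 \
                    and (nr, nc) not in prev:
                prev[(nr, nc)] = (r, c)
                queue.append((nr, nc))
    return None  # no route between start and goal

def stereo_pan(user_pos, user_heading_rad, waypoint):
    """Map the bearing to the next waypoint into a pan value in [-1, 1],
    the kind of left/right binaural cue played through the headset."""
    dy, dx = waypoint[0] - user_pos[0], waypoint[1] - user_pos[1]
    bearing = math.atan2(dy, dx) - user_heading_rad
    return math.sin(bearing)  # 0 = dead ahead; sign convention assumed
```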
-
Patent number: 11710284
Abstract: A system comprising: a user device, comprising: sensors configured to sense data related to a physical environment of the user device, displays; hardware processors; and a non-transitory machine-readable storage medium encoded with instructions executable by the hardware processors to: place a virtual object in a 3D scene displayed by the second user device, determine a pose of the user device with respect to the physical location in the physical environment of the user device, and generate an image of virtual content based on the pose of the user device with respect to the placed virtual object, wherein the image of the virtual content is projected by the one or more displays of the user device in a predetermined location relative to the physical location in the physical environment of the user device.
Type: Grant
Filed: December 14, 2021
Date of Patent: July 25, 2023
Assignee: Campfire 3D, Inc.
Inventors: Avi Bar-Zeev, Alexander Tyurin, Gerald V. Wright, Jr.
-
Publication number: 20230229387
Abstract: In an exemplary technique, speech input including one or more instructions is received. After the speech input has stopped, if it is determined that one or more visual characteristics indicate that further speech input is not expected, a response to the one or more instructions is provided. If it is determined that one or more visual characteristics indicate that further speech input is expected, a response to the one or more instructions is not provided.
Type: Application
Filed: March 20, 2023
Publication date: July 20, 2023
Inventors: Rahul Nair, Golnaz Abdollahian, Avi Bar-Zeev, Niranjan Manjunath
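The gating behavior in this abstract reduces to a simple predicate: respond only when speech has stopped and the visual characteristics do not suggest more is coming. The specific cues used here (an open mouth, continued gaze at the listener) are hypothetical examples; the publication does not enumerate them.

```python
def should_respond(speech_stopped, visual_cues):
    """Hold the response while visual characteristics indicate further
    speech is expected; the cue names are illustrative assumptions."""
    more_expected = visual_cues.get("mouth_open", False) or \
                    visual_cues.get("gazing_at_listener", False)
    return speech_stopped and not more_expected
```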
-
Patent number: 11697068
Abstract: Systems and methods to provide a mobile computing platform as a physical interface for an interactive space are presented herein. The interactive space may be experienced by a user of a host device (e.g., headset). The interactive space may include views of virtual content. A position and/or heading of the mobile computing platform relative to a perceived position and/or heading of the virtual content of the interactive space may be determined. Remote command information may be determined based on the relative position information and/or user input information conveying user entry and/or selection of one or more input elements of the mobile computing platform. The remote command information may be configured to effectuate user interactions with the virtual content in the interactive space based on user interactions with the mobile computing platform.
Type: Grant
Filed: October 16, 2019
Date of Patent: July 11, 2023
Assignee: Campfire 3D, Inc.
Inventors: Avi Bar-Zeev, Gerald Wright, Jr., Alexander Tyurin, Diego Leyton
-
Patent number: 11688147
Abstract: A system comprising: a user device, comprising: sensors configured to sense data related to a physical environment of the user device, displays; hardware processors; and a non-transitory machine-readable storage medium encoded with instructions executable by the hardware processors to: place a virtual object in a 3D scene displayed by the second user device, determine a pose of the user device with respect to the physical location in the physical environment of the user device, and generate an image of virtual content based on the pose of the user device with respect to the placed virtual object, wherein the image of the virtual content is projected by the one or more displays of the user device in a predetermined location relative to the physical location in the physical environment of the user device.
Type: Grant
Filed: December 14, 2021
Date of Patent: June 27, 2023
Assignee: Campfire 3D, Inc.
Inventors: Avi Bar-Zeev, Alexander Tyurin, Gerald V. Wright, Jr.
-
Publication number: 20230154036
Abstract: An augmented reality collaboration system comprises a first system configured to display virtual content, comprising: a structure comprising a plurality of radiation emitters arranged in a predetermined pattern, and a user device comprising: one or more sensors configured to sense outputs of the plurality of radiation emitters, and one or more displays; one or more hardware processors; and a non-transitory machine-readable storage medium encoded with instructions executable by the one or more hardware processors to, for the user device: determine a pose of the user device with respect to the structure based on the sensed outputs of the plurality of radiation emitters, and generate an image of virtual content based on the pose of the user device with respect to the structure, wherein the image of the virtual content is projected by the one or more displays of the user device in a predetermined location relative to the structure.
Type: Application
Filed: January 13, 2023
Publication date: May 18, 2023
Applicant: Campfire 3D, Inc.
Inventors: Avi Bar-Zeev, Alexander Tyurin, Gerald V. Wright, Jr.
-
Publication number: 20230143213
Abstract: A system comprising: a user device, comprising: sensors configured to sense data related to a physical environment of the user device, displays; hardware processors; and a non-transitory machine-readable storage medium encoded with instructions executable by the hardware processors to: place a virtual object in a 3D scene displayed by the second user device, determine a pose of the user device with respect to the physical location in the physical environment of the user device, and generate an image of virtual content based on the pose of the user device with respect to the placed virtual object, wherein the image of the virtual content is projected by the one or more displays of the user device in a predetermined location relative to the physical location in the physical environment of the user device.
Type: Application
Filed: January 4, 2023
Publication date: May 11, 2023
Inventors: Avi Bar-Zeev, Alexander Tyurin, Gerald V. Wright, Jr.
-
Patent number: 11609739
Abstract: In an exemplary technique for providing audio information, an input is received, and audio information responsive to the received input is provided using a speaker. While providing the audio information, an external sound is detected. If it is determined that the external sound is a communication of a first type, then the provision of the audio information is stopped. If it is determined that the external sound is a communication of a second type, then the provision of the audio information continues.
Type: Grant
Filed: April 24, 2019
Date of Patent: March 21, 2023
Assignee: Apple Inc.
Inventors: Rahul Nair, Golnaz Abdollahian, Avi Bar-Zeev, Niranjan Manjunath
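The first-type/second-type branching above amounts to a classification-driven playback decision. The concrete type names below are hypothetical; the patent only distinguishes a first type that interrupts playback from a second type that does not.

```python
# Hypothetical communication types; the patent does not enumerate them.
STOP_TYPES = {"speech_directed_at_user"}       # first type: interrupt playback
CONTINUE_TYPES = {"background_conversation"}   # second type: keep playing

def playback_continues(sound_type):
    """Stop providing audio for a first-type communication; continue
    for a second-type (or unclassified) external sound."""
    return sound_type not in STOP_TYPES
```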
-
Patent number: 11587295
Abstract: A system comprising: a user device, comprising: sensors configured to sense data related to a physical environment of the user device, displays; hardware processors; and a non-transitory machine-readable storage medium encoded with instructions executable by the hardware processors to: place a virtual object in a 3D scene displayed by the second user device, determine a pose of the user device with respect to the physical location in the physical environment of the user device, and generate an image of virtual content based on the pose of the user device with respect to the placed virtual object, wherein the image of the virtual content is projected by the one or more displays of the user device in a predetermined location relative to the physical location in the physical environment of the user device.
Type: Grant
Filed: October 5, 2021
Date of Patent: February 21, 2023
Assignee: Meta View, Inc.
Inventors: Avi Bar-Zeev, Alexander Tyurin, Gerald V. Wright, Jr.
-
Publication number: 20230045634
Abstract: In one implementation, a method of providing a contextual computer-generated reality (CGR) digital assistant is performed at a device provided to deliver a CGR scene, the device including one or more processors, non-transitory memory, and one or more displays. The method includes obtaining image data characterizing a field of view captured by an image sensor. The method further includes identifying in the image data a contextual trigger for one of a plurality of contextual CGR digital assistants. The method additionally includes selecting a visual representation of the one of the plurality of contextual CGR digital assistants, where the visual representation is selected based on context and in response to identifying the contextual trigger. The method also includes presenting the CGR scene by displaying the visual representation of the one of the plurality of contextual CGR digital assistants, where the visual representation provides information associated with the contextual trigger.
Type: Application
Filed: July 15, 2022
Publication date: February 9, 2023
Inventors: Avi Bar-Zeev, Golnaz Abdollahian, Devin William Chalmers, David H. Y. Huang, Banafsheh Jalali
-
Publication number: 20220322006
Abstract: In one implementation, a method of transforming a sound into a virtual sound for a synthesized reality (SR) setting is performed by a head-mounted device (HMD) including one or more processors, non-transitory memory, a microphone, a speaker, and a display. The method includes displaying, on the display, an image representation of a synthesized reality (SR) setting including a plurality of surfaces associated with an acoustic reverberation property of the SR setting. The method includes recording, via the microphone, a real sound produced in a physical setting. The method further includes generating, using the one or more processors, a virtual sound by transforming the real sound based on the acoustic reverberation property of the SR setting. The method further includes playing, via the speaker, the virtual sound.
Type: Application
Filed: June 15, 2022
Publication date: October 6, 2022
Inventor: Avi Bar-Zeev
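A common way to apply a room's reverberation to a dry recording, and a plausible reading of the transform step above, is convolution with a room impulse response. The publication does not say convolution is the mechanism; this sketch just shows the standard technique, with a made-up three-sample impulse response.

```python
def convolve(signal, impulse_response):
    """Direct convolution: applies a (hypothetical) room impulse response
    of the SR setting to the recorded dry sound, yielding a 'wet' virtual
    sound that carries the setting's reverberation."""
    out = [0.0] * (len(signal) + len(impulse_response) - 1)
    for i, s in enumerate(signal):
        for j, h in enumerate(impulse_response):
            out[i + j] += s * h
    return out
```

For example, an impulse response of `[1.0, 0.0, 0.5]` passes the dry sound through unchanged and adds a half-amplitude echo delayed by two samples.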
-
Patent number: 11403821
Abstract: In one implementation, a method of providing a contextual computer-generated reality (CGR) digital assistant is performed at a device provided to deliver a CGR scene, the device including one or more processors, non-transitory memory, and one or more displays. The method includes obtaining image data characterizing a field of view captured by an image sensor. The method further includes identifying in the image data a contextual trigger for one of a plurality of contextual CGR digital assistants. The method additionally includes selecting a visual representation of the one of the plurality of contextual CGR digital assistants, where the visual representation is selected based on context and in response to identifying the contextual trigger. The method also includes presenting the CGR scene by displaying the visual representation of the one of the plurality of contextual CGR digital assistants, where the visual representation provides information associated with the contextual trigger.
Type: Grant
Filed: September 20, 2019
Date of Patent: August 2, 2022
Assignee: Apple Inc.
Inventors: Avi Bar-Zeev, Golnaz Abdollahian, Devin William Chalmers, David H. Y. Huang, Banafsheh Jalali
-
Patent number: 11363378
Abstract: In one implementation, a method of transforming a sound into a virtual sound for a synthesized reality (SR) setting is performed by a head-mounted device (HMD) including one or more processors, non-transitory memory, a microphone, a speaker, and a display. The method includes displaying, on the display, an image representation of a synthesized reality (SR) setting including a plurality of surfaces associated with an acoustic reverberation property of the SR setting. The method includes recording, via the microphone, a real sound produced in a physical setting. The method further includes generating, using the one or more processors, a virtual sound by transforming the real sound based on the acoustic reverberation property of the SR setting. The method further includes playing, via the speaker, the virtual sound.
Type: Grant
Filed: November 3, 2020
Date of Patent: June 14, 2022
Assignee: Apple Inc.
Inventor: Avi Bar-Zeev
-
Publication number: 20220179542
Abstract: Techniques for moving about a computer simulated reality (CSR) setting are disclosed. An example technique includes displaying a current view of the CSR setting, the current view depicting a current location of the CSR setting from a first perspective corresponding to a first determined direction. The technique further includes displaying a user interface element, the user interface element depicting a destination location not visible from the current location, and, in response to receiving input representing selection of the user interface element, modifying the display of the current view to display a destination view depicting the destination location, wherein modifying the display of the current view to display the destination view includes enlarging the user interface element.
Type: Application
Filed: February 24, 2022
Publication date: June 9, 2022
Inventors: Luis R. Deliz Centeno, Avi Bar-Zeev
-
Publication number: 20220155603
Abstract: A mixed reality system including a head-mounted display (HMD) and a base station. Information collected by HMD sensors may be transmitted to the base via a wired or wireless connection. On the base, a rendering engine renders frames including virtual content based in part on the sensor information, and an encoder compresses the frames according to an encoding protocol before sending the frames to the HMD over the connection. Instead of using a previous frame to estimate motion vectors in the encoder, motion vectors from the HMD and the rendering engine are input to the encoder and used in compressing the frame. The motion vectors may be embedded in the data stream along with the encoded frame data and transmitted to the HMD over the connection. If a frame is not received at the HMD, the HMD may synthesize a frame from a previous frame using the motion vectors.
Type: Application
Filed: February 4, 2022
Publication date: May 19, 2022
Applicant: Apple Inc.
Inventors: Geoffrey Stahl, Avi Bar-Zeev
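The frame-synthesis fallback in the last sentence can be illustrated with a toy motion-compensation routine: when a frame is lost, shift each block of the previous frame by its motion vector. The block size, integer vectors, edge clamping, and absence of residual correction are all simplifications not taken from the publication.

```python
def synthesize_frame(prev_frame, motion_vectors, block=2):
    """Rebuild a missing frame by displacing each block of the previous
    frame by its per-block motion vector (dx, dy). Source coordinates are
    clamped at the frame edges; no residuals are applied (toy model)."""
    h, w = len(prev_frame), len(prev_frame[0])
    out = [[0] * w for _ in range(h)]
    for by in range(0, h, block):
        for bx in range(0, w, block):
            dx, dy = motion_vectors[by // block][bx // block]
            for y in range(by, min(by + block, h)):
                for x in range(bx, min(bx + block, w)):
                    sy = min(max(y - dy, 0), h - 1)  # clamp to frame bounds
                    sx = min(max(x - dx, 0), w - 1)
                    out[y][x] = prev_frame[sy][sx]
    return out
```

With a zero vector the previous frame is repeated unchanged; a nonzero vector slides that block's content, which is the cheap approximation the HMD can fall back on when the encoded frame never arrives.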
-
Patent number: 11320958
Abstract: Techniques for moving about a computer simulated reality (CSR) setting are disclosed. An example technique includes displaying a current view of the CSR setting, the current view depicting a current location of the CSR setting from a first perspective corresponding to a first determined direction. The technique further includes displaying a user interface element, the user interface element depicting a destination location not visible from the current location, and, in response to receiving input representing selection of the user interface element, modifying the display of the current view to display a destination view depicting the destination location, wherein modifying the display of the current view to display the destination view includes enlarging the user interface element.
Type: Grant
Filed: May 1, 2019
Date of Patent: May 3, 2022
Assignee: Apple Inc.
Inventors: Luis R. Deliz Centeno, Avi Bar-Zeev