Patents by Inventor Avi Bar-Zeev

Avi Bar-Zeev has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Publication number: 20210232288
    Abstract: Techniques for moving about a computer simulated reality (CSR) setting are disclosed. An example technique includes displaying a current view of the CSR setting, the current view depicting a current location of the CSR setting from a first perspective corresponding to a first determined direction. The technique further includes displaying a user interface element, the user interface element depicting a destination location not visible from the current location, and, in response to receiving input representing selection of the user interface element, modifying the display of the current view to display a destination view depicting the destination location, wherein modifying the display of the current view to display the destination view includes enlarging the user interface element.
    Type: Application
    Filed: May 1, 2019
    Publication date: July 29, 2021
    Inventors: Luis R. Deliz Centeno, Avi Bar-Zeev
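
A minimal sketch of the enlarge-to-teleport flow this abstract describes, in Python; the class and function names are hypothetical illustrations, not taken from the filing:

```python
from dataclasses import dataclass

@dataclass
class View:
    location: str        # the CSR location this view depicts
    perspective: float   # viewing direction, in degrees

@dataclass
class UIElement:
    destination: View    # destination view previewed by the element
    scale: float = 0.2   # fraction of the display the element covers

def select_element(element: UIElement, steps: int = 5) -> View:
    """Enlarge the selected element until its destination view fills the
    display, then return that destination as the new current view."""
    start = element.scale
    for step in range(1, steps + 1):
        element.scale = start + (1.0 - start) * step / steps
        print(f"element covers {element.scale:.2f} of the display")
    return element.destination

current = View(location="meadow", perspective=90.0)
portal = UIElement(destination=View(location="cabin", perspective=0.0))
current = select_element(portal)
print(f"now viewing {current.location}")
```
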
  • Publication number: 20210224031
    Abstract: In an exemplary technique for providing audio information, an input is received, and audio information responsive to the received input is provided using a speaker. While providing the audio information, an external sound is detected. If it is determined that the external sound is a communication of a first type, then the provision of the audio information is stopped. If it is determined that the external sound is a communication of a second type, then the provision of the audio information continues.
    Type: Application
    Filed: April 24, 2019
    Publication date: July 22, 2021
    Inventors: Rahul Nair, Golnaz Abdollahian, Avi Bar-Zeev, Niranjan Manjunath
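
A sketch of the two-way decision in this abstract; the keyword-based classifier is a stand-in of my own, since the filing does not say how communication types are distinguished:

```python
import enum

class SoundType(enum.Enum):
    FIRST = 1    # e.g. speech directed at the user: stop the audio
    SECOND = 2   # e.g. background sound: keep providing the audio

def classify(external_sound: str) -> SoundType:
    # Placeholder heuristic; a real system might combine speech
    # detection, speaker direction, and content analysis.
    return SoundType.FIRST if "hey" in external_sound.lower() else SoundType.SECOND

def continue_audio(external_sound: str) -> bool:
    """Return whether to continue providing the audio information."""
    return classify(external_sound) is not SoundType.FIRST

print(continue_audio("hey, got a minute?"))  # False: stop playback
print(continue_audio("distant traffic"))     # True: continue
```
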
  • Patent number: 11043018
    Abstract: A mixed reality system that includes a device and a base station that communicate via a wireless connection. The device may include sensors that collect information about the user's environment and about the user. The information collected by the sensors may be transmitted to the base station via the wireless connection. The base station renders frames or slices based at least in part on the sensor information received from the device, encodes the frames or slices, and transmits the compressed frames or slices to the device for decoding and display. The base station may provide more computing power than conventional stand-alone systems, and the wireless connection does not tether the device to the base station as in conventional tethered systems. The system may implement methods and apparatus to maintain a target frame rate through the wireless link and to minimize latency in frame rendering, transmittal, and display.
    Type: Grant
    Filed: October 24, 2019
    Date of Patent: June 22, 2021
    Assignee: Apple Inc.
    Inventors: Arthur Y Zhang, Ray L. Chang, Timothy R. Oriol, Ling Su, Gurjeet S. Saund, Guy Cote, Jim C. Chou, Hao Pan, Tobias Eble, Avi Bar-Zeev, Sheng Zhang, Justin A. Hensley, Geoffrey Stahl
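
A toy sketch of maintaining a target frame rate over the wireless link, as the abstract describes; the 90 Hz target and the simple proportional quality control are assumptions for illustration:

```python
import time

TARGET_FPS = 90                   # assumed target, not stated in the patent
FRAME_BUDGET = 1.0 / TARGET_FPS   # seconds available per frame

def render_encode_send(quality: float) -> None:
    time.sleep(0.004)  # stand-in for render + encode + wireless transmit

quality = 1.0
for frame in range(5):
    start = time.monotonic()
    render_encode_send(quality)
    elapsed = time.monotonic() - start
    # If a frame ran over budget, encode the next one at lower quality
    # (smaller payload, faster transmit); if under budget, recover quality.
    quality = max(0.1, quality * 0.9) if elapsed > FRAME_BUDGET else min(1.0, quality * 1.05)
    print(f"frame {frame}: {elapsed * 1000:.1f} ms, next quality {quality:.2f}")
```
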
  • Publication number: 20210165229
    Abstract: A mixed reality system including a head-mounted display (HMD) and a base station. Information collected by HMD sensors may be transmitted to the base via a wired or wireless connection. On the base, a rendering engine renders frames including virtual content based in part on the sensor information, and an encoder compresses the frames according to an encoding protocol before sending the frames to the HMD over the connection. Instead of using a previous frame to estimate motion vectors in the encoder, motion vectors from the HMD and the rendering engine are input to the encoder and used in compressing the frame. The motion vectors may be embedded in the data stream along with the encoded frame data and transmitted to the HMD over the connection. If a frame is not received at the HMD, the HMD may synthesize a frame from a previous frame using the motion vectors.
    Type: Application
    Filed: February 5, 2021
    Publication date: June 3, 2021
    Applicant: Apple Inc.
    Inventors: Geoffrey Stahl, Avi Bar-Zeev
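
A minimal sketch of the frame-synthesis fallback in this abstract (the same abstract appears under granted patent 10914957 below): if a frame never arrives, shift pixels of the previous frame by the transmitted motion vectors. Real encoders work on macroblocks; per-pixel vectors are used here for brevity.

```python
def synthesize(prev_frame, motion_vectors):
    """Predict a missing frame by moving each pixel of the previous
    frame along its motion vector (dx, dy); out-of-range samples stay 0."""
    h, w = len(prev_frame), len(prev_frame[0])
    out = [[0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            dx, dy = motion_vectors[y][x]
            sx, sy = x - dx, y - dy  # where this pixel came from
            if 0 <= sx < w and 0 <= sy < h:
                out[y][x] = prev_frame[sy][sx]
    return out

prev = [[1, 2],
        [3, 4]]
vectors = [[(1, 0), (1, 0)],
           [(1, 0), (1, 0)]]          # the whole frame moved right by one pixel
print(synthesize(prev, vectors))      # [[0, 1], [0, 3]]
```
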
  • Patent number: 10997544
    Abstract: Disclosed are methods and systems for delivery of items using an unmanned aerial vehicle ("UAV"). A user may be provided with a delivery location identifier ("DLI") that is to be placed at a delivery location within a delivery destination to identify where a UAV is to position an item as part of a delivery to the delivery destination. For example, the delivery destination may be a user's home. Within the delivery destination of the user's home, the user may select a delivery location, such as a spot in the back yard where the UAV is to position the ordered item as part of the delivery. To aid the UAV in navigating to the delivery location, the user places the DLI at the delivery location. The UAV detects the DLI and positions the item at or near the DLI as part of the item delivery.
    Type: Grant
    Filed: May 4, 2017
    Date of Patent: May 4, 2021
    Assignee: Amazon Technologies, Inc.
    Inventors: Avi Bar-Zeev, Brian C. Beckman, Steven Gregory Dunn, Atishkumar Kalyan, Amir Navot, Frederik Schaffalitzky
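
A sketch of the DLI-guided final approach; the detector and the centering thresholds are hypothetical stand-ins for whatever vision system the UAV actually uses:

```python
from typing import Optional, Tuple

def detect_dli(frame: dict) -> Optional[Tuple[float, float]]:
    """Stand-in detector: the DLI's offset from image center, or None."""
    return frame.get("dli_offset")

def next_action(frame: dict) -> str:
    offset = detect_dli(frame)
    if offset is None:
        return "keep searching the delivery destination"
    x, y = offset
    if abs(x) < 0.05 and abs(y) < 0.05:       # centered over the DLI
        return "descend and position the item at the DLI"
    return f"translate by ({-x:.2f}, {-y:.2f}) to center on the DLI"

print(next_action({"dli_offset": (0.30, -0.10)}))
print(next_action({"dli_offset": (0.01, 0.02)}))
```
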
  • Patent number: 10976817
    Abstract: In one implementation, a method includes: synthesizing an AR/VR content stream by embedding a plurality of glints provided for eye tracking into one or more content frames of the AR/VR content stream; displaying, via the one or more AR/VR displays, the AR/VR content stream to a user of the HMD; obtaining, via the image sensor, light intensity data corresponding to the one or more content frames of the AR/VR content stream that include the plurality of glints, wherein the light intensity data includes a projection of an eye of the user of the HMD having projected thereon the plurality of glints; and determining an orientation of the eye of the user of the HMD based on the light intensity data.
    Type: Grant
    Filed: July 29, 2020
    Date of Patent: April 13, 2021
    Assignee: Apple Inc.
    Inventors: Jae Hwang Lee, Avi Bar-Zeev, Fletcher R. Rothkopf
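
A toy sketch of estimating eye orientation from glints embedded in the content frames, per this abstract (the same abstract also appears in two related entries below); the four-glint layout and the intensity-weighted centroid are simplifications of my own:

```python
GLINTS = [(-1, 0), (1, 0), (0, -1), (0, 1)]   # illustrative glint positions

def embed_glints(frame: dict) -> dict:
    frame["glints"] = GLINTS   # glints ride along inside the content frame
    return frame

def estimate_orientation(intensities):
    """Weight each glint position by its reflected intensity off the
    cornea; the weighted centroid is a crude proxy for eye orientation."""
    total = sum(intensities)
    gx = sum(x * i for (x, _), i in zip(GLINTS, intensities)) / total
    gy = sum(y * i for (_, y), i in zip(GLINTS, intensities)) / total
    return gx, gy

frame = embed_glints({"pixels": "..."})
print(estimate_orientation([0.9, 0.1, 0.5, 0.5]))   # (-0.4, 0.0): eye turned left
```
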
  • Publication number: 20210051406
    Abstract: In one implementation, a method of transforming a sound into a virtual sound for a synthesized reality (SR) setting is performed by a head-mounted device (HMD) including one or more processors, non-transitory memory, a microphone, a speaker, and a display. The method includes displaying, on the display, an image representation of the SR setting including a plurality of surfaces associated with an acoustic reverberation property of the SR setting. The method includes recording, via the microphone, a real sound produced in a physical setting. The method further includes generating, using the one or more processors, a virtual sound by transforming the real sound based on the acoustic reverberation property of the SR setting. The method further includes playing, via the speaker, the virtual sound.
    Type: Application
    Filed: November 3, 2020
    Publication date: February 18, 2021
    Inventor: Avi Bar-Zeev
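
A minimal sketch of the transformation step: convolve the recorded real sound with an impulse response standing in for the SR setting's acoustic reverberation property. How that response is derived from the virtual surfaces is not specified in the abstract, so the values below are invented:

```python
def convolve(signal, impulse):
    """Direct-form convolution: apply a room impulse response to a dry signal."""
    out = [0.0] * (len(signal) + len(impulse) - 1)
    for i, s in enumerate(signal):
        for j, h in enumerate(impulse):
            out[i + j] += s * h
    return out

# Toy impulse response: a direct path plus two decaying reflections,
# standing in for reverberation off the SR setting's virtual surfaces.
impulse_response = [1.0, 0.0, 0.4, 0.0, 0.15]

recorded = [0.0, 1.0, 0.0, 0.0]              # real sound from the microphone
print(convolve(recorded, impulse_response))  # virtual sound for the speaker
```
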
  • Patent number: 10914957
    Abstract: A mixed reality system including a head-mounted display (HMD) and a base station. Information collected by HMD sensors may be transmitted to the base via a wired or wireless connection. On the base, a rendering engine renders frames including virtual content based in part on the sensor information, and an encoder compresses the frames according to an encoding protocol before sending the frames to the HMD over the connection. Instead of using a previous frame to estimate motion vectors in the encoder, motion vectors from the HMD and the rendering engine are input to the encoder and used in compressing the frame. The motion vectors may be embedded in the data stream along with the encoded frame data and transmitted to the HMD over the connection. If a frame is not received at the HMD, the HMD may synthesize a frame from a previous frame using the motion vectors.
    Type: Grant
    Filed: April 9, 2020
    Date of Patent: February 9, 2021
    Assignee: Apple Inc.
    Inventors: Geoffrey Stahl, Avi Bar-Zeev
  • Patent number: 10847041
    Abstract: Described is an airborne monitoring station ("AMS") for use in monitoring a coverage area and/or unmanned aerial vehicles ("UAVs") positioned within a coverage area of the AMS. For example, the AMS may be an airship that remains at a high altitude (e.g., 45,000 feet) and monitors a coverage area within its line-of-sight. As UAVs enter, navigate within, and exit the coverage area, the AMS may wirelessly communicate with the UAVs, facilitate communication between the UAVs and one or more remote computing resources, and/or monitor a position of the UAVs.
    Type: Grant
    Filed: July 20, 2017
    Date of Patent: November 24, 2020
    Assignee: Amazon Technologies, Inc.
    Inventors: Amir Navot, Gur Kimchi, Brandon William Porter, Avi Bar-Zeev, Daniel Buchmueller
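
A back-of-the-envelope check of the line-of-sight coverage such an AMS gets at the abstract's example altitude, using the standard horizon-distance approximation:

```python
import math

EARTH_RADIUS_M = 6_371_000
ALTITUDE_M = 45_000 * 0.3048    # 45,000 feet, the altitude cited in the abstract

def horizon_range_m(altitude_m: float) -> float:
    """Approximate distance to the horizon: sqrt(2 * R * h)."""
    return math.sqrt(2 * EARTH_RADIUS_M * altitude_m)

def in_coverage(uav_distance_m: float) -> bool:
    return uav_distance_m <= horizon_range_m(ALTITUDE_M)

print(f"coverage radius ~= {horizon_range_m(ALTITUDE_M) / 1000:.0f} km")  # ~418 km
print(in_coverage(200_000))  # True: a UAV 200 km out is within line of sight
```
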
  • Publication number: 20200356167
    Abstract: In one implementation, a method includes: synthesizing an AR/VR content stream by embedding a plurality of glints provided for eye tracking into one or more content frames of the AR/VR content stream; displaying, via the one or more AR/VR displays, the AR/VR content stream to a user of the HMD; obtaining, via the image sensor, light intensity data corresponding to the one or more content frames of the AR/VR content stream that include the plurality of glints, wherein the light intensity data includes a projection of an eye of the user of the HMD having projected thereon the plurality of glints; and determining an orientation of the eye of the user of the HMD based on the light intensity data.
    Type: Application
    Filed: July 29, 2020
    Publication date: November 12, 2020
    Inventors: Jae Hwang Lee, Avi Bar-Zeev, Fletcher R. Rothkopf
  • Patent number: 10778826
    Abstract: Described are systems and methods for facilitating communication between a user and other users, services, and so forth. A wearable device, such as a pair of glasses, may be worn and used in conjunction with another user device, such as a smartphone, to support communications between the user and others. Inputs such as motion of the head, orientation of the head, verbal input, and so forth may be used to initiate particular functions on the wearable device, the user device, or with a service. For example, a user may turn their head to the left and speak to send a message to a particular person. A display light in the field of view of the user may illuminate to a particular color that has been previously associated with the particular person. This provides visual feedback to the user about the recipient of the message.
    Type: Grant
    Filed: May 17, 2016
    Date of Patent: September 15, 2020
    Assignee: Amazon Technologies, Inc.
    Inventors: Chia-Jean Wang, Babak Amir Parviz, Avi Bar-Zeev
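
A sketch of the head-direction-to-recipient mapping with color feedback that the abstract's example describes; the contact table and colors are made up for illustration:

```python
# Hypothetical mapping from head direction to (recipient, feedback color).
CONTACTS = {"left": ("Alice", "blue"), "right": ("Bob", "green")}

def handle_gesture(head_turn: str, utterance: str) -> str:
    if head_turn not in CONTACTS:
        return "no recipient associated with this direction"
    name, color = CONTACTS[head_turn]
    # Illuminate the display light in the recipient's color so the user
    # gets visual feedback about who will receive the message.
    return f"LED -> {color}; send to {name}: {utterance!r}"

print(handle_gesture("left", "lunch at noon?"))
```
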
  • Patent number: 10768698
    Abstract: In one implementation, a method includes: synthesizing an AR/VR content stream by embedding a plurality of glints provided for eye tracking into one or more content frames of the AR/VR content stream; displaying, via the one or more AR/VR displays, the AR/VR content stream to a user of the HMD; obtaining, via the image sensor, light intensity data corresponding to the one or more content frames of the AR/VR content stream that include the plurality of glints, wherein the light intensity data includes a projection of an eye of the user of the HMD having projected thereon the plurality of glints; and determining an orientation of the eye of the user of the HMD based on the light intensity data.
    Type: Grant
    Filed: June 22, 2018
    Date of Patent: September 8, 2020
    Assignee: Apple Inc.
    Inventors: Jae Hwang Lee, Avi Bar-Zeev, Fletcher R. Rothkopf
  • Publication number: 20200264006
    Abstract: Methods and apparatus for spatial audio navigation that may, for example, be implemented by mobile multipurpose devices. A spatial audio navigation system provides navigational information in audio form to direct users to target locations. The system uses directionality of audio played through a binaural audio device to provide navigational cues to the user. A current location, target location, and map information may be input to pathfinding algorithms to determine a real world path between the user's current location and the target location. The system may then use directional audio played through a headset to guide the user on the path from the current location to the target location. The system may implement one or more of several different spatial audio navigation methods to direct a user when following a path using spatial audio-based cues.
    Type: Application
    Filed: September 25, 2018
    Publication date: August 20, 2020
    Applicant: Apple Inc.
    Inventors: Bruno M. Sommer, Avi Bar-Zeev, Frank Angermann, Stephen E. Pinto, Lilli Ing-Marie Jonsson, Rahul Nair
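
A sketch of one spatial audio navigation method the abstract alludes to: compute the bearing from the user's current location to the next waypoint on the path and pan the cue between the headset's channels. The sine panning law here is my own simplification:

```python
import math

def bearing_deg(cur, target):
    """Angle from the user's position to the target, in degrees."""
    return math.degrees(math.atan2(target[1] - cur[1], target[0] - cur[0]))

def stereo_gains(user_heading_deg, cue_bearing_deg):
    """Pan the audio cue so it appears to come from the path direction;
    a positive relative angle is panned right by this sketch's convention."""
    rel = math.radians(cue_bearing_deg - user_heading_deg)
    right = (1 + math.sin(rel)) / 2
    return 1 - right, right   # (left_gain, right_gain)

waypoint = (10.0, 5.0)
user_pos, user_heading = (0.0, 0.0), 0.0
print(stereo_gains(user_heading, bearing_deg(user_pos, waypoint)))  # cue leans right
```
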
  • Publication number: 20200258278
    Abstract: Techniques for alerting a user, who is immersed in a virtual reality environment, to physical obstacles in their physical environment are disclosed.
    Type: Application
    Filed: March 27, 2020
    Publication date: August 13, 2020
    Inventors: Seyedkoosha Mirhosseini, Avi Bar-Zeev, Duncan A. K. McRoberts
  • Patent number: 10732074
    Abstract: Systems and methods for providing a multi-direction wind tunnel, or "windball," are disclosed. The system can have a series of fans configured to provide air flow in a plurality of directions to enable accurate testing of aircraft, unmanned aerial vehicles (UAVs), and other vehicles capable of multi-dimensional flight. The system can comprise a spherical or polyhedral test chamber with a plurality of fans. The fans can be arranged in pairs, such that a first fan comprises an intake fan and a second fan comprises an exhaust fan. The direction of the air flow can be controlled by activating one or more pairs of fans, each pair of fans creating a portion of the air flow in a particular direction. The direction of the air flow can also be controlled by rotating one or more pairs of fans with respect to the test chamber on a gimbal device or similar mechanism.
    Type: Grant
    Filed: December 1, 2017
    Date of Patent: August 4, 2020
    Assignee: Amazon Technologies, Inc.
    Inventors: Brian C. Beckman, Avi Bar-Zeev, Steven Gregory Dunn, Amir Navot
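
A sketch of the direction-control idea: decompose a requested airflow direction into signed drive levels for three orthogonal fan pairs. The axis layout is an assumption; the patent also allows rotating pairs on gimbals:

```python
import math

FAN_PAIRS = {"x": (1, 0, 0), "y": (0, 1, 0), "z": (0, 0, 1)}  # illustrative axes

def activations(direction):
    """Signed drive level per fan pair (+ = intake->exhaust, - = reversed)
    whose vector sum reproduces the requested unit flow direction."""
    mag = math.sqrt(sum(c * c for c in direction)) or 1.0
    unit = [c / mag for c in direction]
    return {name: sum(a * u for a, u in zip(axis, unit))
            for name, axis in FAN_PAIRS.items()}

print(activations((1.0, 1.0, 0.0)))  # x and y pairs at ~0.707, z pair off
```
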
  • Patent number: 10726861
    Abstract: A system and method for providing semi-private conversation, using an area microphone, between one local user in a group of local users and a remote user. The local and remote users may be in different physical environments, using devices coupled by a network. A conversational relationship is defined between a local user and a remote user. The local user's voice is isolated from other voices in the environment and transmitted to the remote user. Directional output technology may be used to direct the local user's utterances to the remote user in the remote environment.
    Type: Grant
    Filed: November 15, 2010
    Date of Patent: July 28, 2020
    Assignee: Microsoft Technology Licensing, LLC
    Inventors: Jason S. Flaks, Avi Bar-Zeev
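
A sketch of the routing step once a conversational relationship is defined; isolating one speaker's voice from the area microphone's mix is a signal-processing problem elided here, and the identifiers are hypothetical:

```python
# Hypothetical table of conversational relationships: local -> remote user.
RELATIONSHIPS = {"local_user_1": "remote_user_7"}

def route_utterance(speaker_id: str, isolated_audio: bytes):
    """Transmit the isolated voice only to the paired remote user, where
    directional output then steers it to that user alone."""
    remote = RELATIONSHIPS.get(speaker_id)
    if remote is None:
        return None   # no relationship defined; nothing is transmitted
    return (remote, isolated_audio)

print(route_utterance("local_user_1", b"\x00\x01"))  # ('remote_user_7', ...)
print(route_utterance("local_user_2", b"\x00\x01"))  # None
```
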
  • Publication number: 20200225747
    Abstract: In an exemplary process for interacting with user interface objects using an eye gaze, an affordance associated with a first object is displayed. A gaze direction or a gaze depth is determined. While the gaze direction or the gaze depth is determined to correspond to a gaze at the affordance, a first input representing user instruction to take action on the affordance is received, and the affordance is selected responsive to receiving the first input.
    Type: Application
    Filed: March 24, 2020
    Publication date: July 16, 2020
    Inventors: Avi Bar-Zeev, Ryan S. Burgoyne, Devin W. Chalmers, Luis R. Deliz Centeno, Rahul Nair
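
A sketch of the gaze-plus-input selection logic described here and in the sibling publication below; the direction and depth tolerances are invented for illustration:

```python
from dataclasses import dataclass

@dataclass
class Affordance:
    direction: float    # gaze direction that targets it, degrees
    depth: float        # gaze depth that targets it, meters
    selected: bool = False

def gazing_at(gaze_dir, gaze_depth, a, dir_tol=5.0, depth_tol=0.25):
    return (abs(gaze_dir - a.direction) <= dir_tol
            and abs(gaze_depth - a.depth) <= depth_tol)

def on_input(gaze_dir, gaze_depth, a):
    """Select the affordance only if the gaze corresponds to it at the
    moment the confirming input (e.g. a button press) is received."""
    if gazing_at(gaze_dir, gaze_depth, a):
        a.selected = True

button = Affordance(direction=12.0, depth=1.5)
on_input(gaze_dir=11.0, gaze_depth=1.4, a=button)
print(button.selected)  # True
```
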
  • Publication number: 20200225746
    Abstract: In an exemplary process for interacting with user interface objects using an eye gaze, an affordance associated with a first object is displayed. A gaze direction or a gaze depth is determined. While the gaze direction or the gaze depth is determined to correspond to a gaze at the affordance, a first input representing user instruction to take action on the affordance is received, and the affordance is selected responsive to receiving the first input.
    Type: Application
    Filed: March 24, 2020
    Publication date: July 16, 2020
    Inventors: Avi Bar-Zeev, Ryan S. Burgoyne, Devin W. Chalmers, Luis R. Deliz Centeno, Rahul Nair, Timothy R. Oriol, Alexis H. Palangie
  • Patent number: 10676192
    Abstract: An airbag container may be inflated and used to protect an item placed within the airbag container. The airbag container may include an inflatable portion that includes sidewalls that extend between a cover and a base. The airbag container may include an orifice to receive gas to inflate the sidewalls, the cover, and the base to an inflation pressure. The airbag container may include an inner cavity defined within the sidewalls, the cover, and the base. The inner cavity may be unpressurized when the inflatable body is at the inflation pressure. The cover may be at least partially separable from the sidewalls to enable insertion of an item in the inner cavity. The cover may then be securable to the sidewalls to securely contain the item in the inner cavity.
    Type: Grant
    Filed: November 26, 2018
    Date of Patent: June 9, 2020
    Assignee: Amazon Technologies, Inc.
    Inventors: Avi Bar-Zeev, Gur Kimchi
  • Patent number: 10665033
    Abstract: An optical see-through head-mounted display device includes a see-through lens which combines an augmented reality image with light from a real-world scene, while an opacity filter is used to selectively block portions of the real-world scene so that the augmented reality image appears more distinctly. The opacity filter can be a see-through LCD panel, for instance, where each pixel of the LCD panel can be selectively controlled to be transmissive or opaque, based on a size, shape and position of the augmented reality image. Eye tracking can be used to adjust the position of the augmented reality image and the opaque pixels. Peripheral regions of the opacity filter, which are not behind the augmented reality image, can be activated to provide a peripheral cue or a representation of the augmented reality image. In another aspect, opaque pixels are provided at a time when an augmented reality image is not present.
    Type: Grant
    Filed: April 4, 2019
    Date of Patent: May 26, 2020
    Assignee: Telefonaktiebolaget LM Ericsson (publ)
    Inventors: Avi Bar-Zeev, Bob Crocco, Alex Aben-Athar Kipman, John Lewis
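
A minimal sketch of computing the opacity filter's per-pixel mask from the augmented reality image, with an eye-tracking offset applied as the abstract suggests; the alpha threshold and grid representation are assumptions:

```python
def opacity_mask(ar_alpha, threshold=0.1, shift=(0, 0)):
    """Mark an LCD-filter pixel opaque wherever the augmented reality
    image is drawn, shifted by an eye-tracking offset (dx, dy), so the
    real-world scene is blocked behind the rendered image."""
    dx, dy = shift
    h, w = len(ar_alpha), len(ar_alpha[0])
    mask = [[False] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            sx, sy = x - dx, y - dy
            if 0 <= sx < w and 0 <= sy < h and ar_alpha[sy][sx] > threshold:
                mask[y][x] = True
    return mask

alpha = [[0.0, 0.9],
         [0.0, 0.9]]               # AR image occupies the right column
print(opacity_mask(alpha))         # [[False, True], [False, True]]
```
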