Patents by Inventor Amnon Shashua

Amnon Shashua has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Patent number: 10843690
    Abstract: A system for navigating a host vehicle may: receive, from an image capture device, an image representative of an environment of the host vehicle; determine a navigational action for accomplishing a navigational goal of the host vehicle; analyze the image to identify a target vehicle in the environment of the host vehicle; determine a next-state distance between the host vehicle and the target vehicle that would result if the navigational action was taken; determine a maximum braking capability of the host vehicle, a maximum acceleration capability of the host vehicle, and a speed of the host vehicle; determine a stopping distance for the host vehicle; determine a speed of the target vehicle and assume a maximum braking capability of the target vehicle; and implement the navigational action if the stopping distance for the host vehicle is less than the next-state distance summed together with a target vehicle travel distance.
    Type: Grant
    Filed: June 4, 2019
    Date of Patent: November 24, 2020
    Assignee: Mobileye Vision Technologies Ltd.
    Inventors: Shai Shalev-Shwartz, Shaked Shammah, Amnon Shashua
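    Illustrative sketch: the safety condition in this abstract reduces to comparing the host's worst-case stopping distance against the next-state gap plus an assumed worst-case travel distance for the target vehicle. The Python below is a minimal sketch under constant-deceleration kinematics; every function and variable name is an illustrative assumption, and the maximum-acceleration/response-time term mentioned in the abstract is omitted for brevity.

```python
# Minimal sketch of the stopping-distance comparison described in the abstract.
# Assumes straight-line kinematics with constant (maximum) braking; names are illustrative.

def stopping_distance(speed_mps: float, max_brake_mps2: float) -> float:
    """Distance travelled while braking from `speed_mps` to a stop at `max_brake_mps2`."""
    return speed_mps ** 2 / (2.0 * max_brake_mps2)

def action_is_safe(host_speed: float,
                   host_max_brake: float,
                   target_speed: float,
                   target_max_brake_assumed: float,
                   next_state_distance: float) -> bool:
    """Return True if the host can stop before closing the gap to the target,
    given the next-state distance the candidate navigational action would produce
    and a worst-case (maximum-braking) travel distance assumed for the target."""
    host_stop = stopping_distance(host_speed, host_max_brake)
    target_travel = stopping_distance(target_speed, target_max_brake_assumed)
    return host_stop < next_state_distance + target_travel

# Example: host at 20 m/s, target at 15 m/s, 30 m next-state gap, 6 m/s^2 braking.
print(action_is_safe(20.0, 6.0, 15.0, 6.0, 30.0))
```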
  • Patent number: 10845816
    Abstract: Systems and methods are provided for navigating an autonomous vehicle using reinforcement learning techniques.
    Type: Grant
    Filed: June 27, 2018
    Date of Patent: November 24, 2020
    Assignee: Mobileye Vision Technologies Ltd.
    Inventors: Shai Shalev-Shwartz, Amnon Shashua, Shaked Shammah
  • Patent number: 10841476
    Abstract: A wearable apparatus and method are provided for selectively disregarding triggers originating from persons other than a user of the wearable apparatus. The wearable apparatus comprises a wearable image sensor configured to capture image data from an environment of the user of the wearable apparatus. The wearable apparatus also includes at least one processing device programmed to receive the captured image data and identify in the image data a trigger. The trigger is associated with at least one action to be performed by the wearable apparatus. The processing device is also programmed to determine, based on the image data, whether the trigger identified in the image data is associated with a person other than the user of the wearable apparatus, and forgo performance of the at least one action if the trigger identified in the image data is determined to be associated with a person other than the user.
    Type: Grant
    Filed: July 23, 2015
    Date of Patent: November 17, 2020
    Assignee: OrCam Technologies Ltd.
    Inventors: Yonatan Wexler, Amnon Shashua
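    Illustrative sketch: the core behavior in this abstract is a gating rule, forgo the associated action whenever the detected trigger belongs to someone other than the wearer. The placeholder detectors below are assumptions, not OrCam APIs.

```python
# Minimal sketch of the decision flow in the abstract: identify a trigger in captured
# image data, and forgo the associated action if the trigger comes from someone other
# than the wearer. The detector functions are placeholders (assumptions), not OrCam APIs.

def detect_trigger(image_data):
    """Placeholder: return a trigger descriptor found in the image data, or None."""
    return image_data.get("trigger")

def trigger_from_other_person(image_data, trigger) -> bool:
    """Placeholder: decide whether the trigger is associated with a non-user person."""
    return image_data.get("trigger_owner") != "user"

def handle_frame(image_data, perform_action):
    trigger = detect_trigger(image_data)
    if trigger is None:
        return                      # nothing to act on
    if trigger_from_other_person(image_data, trigger):
        return                      # forgo the action: trigger came from someone else
    perform_action(trigger)         # trigger came from the user: perform the action

# Example: a pointing gesture made by a bystander is ignored.
handle_frame({"trigger": "pointing", "trigger_owner": "bystander"},
             perform_action=lambda t: print("performing action for", t))
```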
  • Patent number: 10832063
    Abstract: Systems and methods are provided for detecting an object in front of a vehicle. In one implementation, an object detecting system includes an image capture device configured to acquire a plurality of images of an area, a data interface, and a processing device programmed to compare a first image to a second image to determine displacement vectors between pixels, to search for a region of coherent expansion that is a set of pixels in at least one of the first image and the second image, for which there exists a common focus of expansion and a common scale magnitude such that the set of pixels satisfy a relationship between pixel positions, displacement vectors, the common focus of expansion, and the common scale magnitude, and to identify presence of a substantially upright object based on the set of pixels.
    Type: Grant
    Filed: February 18, 2020
    Date of Patent: November 10, 2020
    Assignee: Mobileye Vision Technologies Ltd.
    Inventors: Amnon Shashua, Erez Dagan, Tomer Baba, Yoni Myers, Yossi Hollander
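    Illustrative sketch: the abstract's "region of coherent expansion" ties pixel positions and displacement vectors to a common focus of expansion and a common scale. A minimal consistency check under that relationship, with an assumed least-squares fit and synthetic data, is sketched below; it is not the patented detection method.

```python
# Minimal numpy sketch of the coherent-expansion consistency test suggested by the
# abstract: a set of pixels is coherent if a single focus of expansion (FOE) and a
# single scale magnitude s relate positions to displacements, d_i ~= s * (p_i - FOE).
# The fitting approach and synthetic data are assumptions, not the patented method.
import numpy as np

def coherent_expansion_residual(positions, displacements, foe):
    """Fit a single scale s for the candidate FOE and return (s, mean residual)."""
    radial = positions - foe                       # vectors from the FOE to each pixel
    # Least-squares scale: s = sum(d . r) / sum(r . r)
    s = float(np.sum(displacements * radial) / np.sum(radial * radial))
    residual = np.linalg.norm(displacements - s * radial, axis=1).mean()
    return s, residual

# Synthetic example: pixels expanding away from a FOE at (320, 240) with scale 0.05.
rng = np.random.default_rng(0)
positions = rng.uniform([0, 0], [640, 480], size=(50, 2))
foe = np.array([320.0, 240.0])
displacements = 0.05 * (positions - foe) + rng.normal(0, 0.1, size=(50, 2))

s, res = coherent_expansion_residual(positions, displacements, foe)
print(f"fitted scale ~ {s:.3f}, mean residual ~ {res:.3f} px")  # small residual => coherent set
```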
  • Publication number: 20200348672
    Abstract: A navigational system for a host vehicle may comprise at least one processing device. The processing device may be programmed to receive a first output and a second output associated with a host vehicle, wherein at least one of the outputs is received from a sensor onboard the host vehicle. The processing device may identify a target object in the first output and determine whether a characteristic of the target object triggers a navigational constraint by verifying the identification of the target object based on the first output; and, if the navigational constraint is not verified based on the first output, then verifying the identification of the target object based on a combination of the first output and the second output. In response to the verification, the processing device may cause at least one navigational change to the host vehicle.
    Type: Application
    Filed: July 22, 2020
    Publication date: November 5, 2020
    Inventors: Amnon Shashua, Shai Shalev-Shwartz, Shaked Shammah
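    Illustrative sketch: the abstract describes a two-stage verification, first against one output alone and, failing that, against the combination of both outputs, before any navigational change is made. The stand-in functions below are assumptions used only to show that control flow.

```python
# Minimal sketch of the verification fallback described in the abstract: try to verify a
# detected target object using the first output alone; if that fails, verify using the
# combination of first and second outputs; only then trigger a navigational change.
# All functions here are hypothetical stand-ins for the sensor/analysis stages.

def verify_with(output, target) -> bool:
    """Placeholder verification: does this output support the target identification?"""
    return target in output

def navigate(first_output, second_output, target, apply_navigational_change):
    if verify_with(first_output, target):
        verified = True                              # verified from the first output alone
    else:
        combined = first_output | second_output      # fall back to the combined outputs
        verified = verify_with(combined, target)
    if verified:
        apply_navigational_change(target)            # e.g. brake or steer around the target

# Example: the first output alone misses the object; the combined outputs confirm it.
navigate(first_output={"lane_marking"},
         second_output={"pedestrian"},
         target="pedestrian",
         apply_navigational_change=lambda t: print("navigational change for", t))
```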
  • Patent number: 10824865
    Abstract: A wearable apparatus is provided for capturing and processing images from an environment of a user. In one implementation, a system for controlling one or more controllable devices includes a transceiver and at least one processing device. The processing device is programmed to obtain one or more images captured by an image sensor included in a wearable apparatus, analyze the one or more images to identify a controllable device in an environment of a user of the wearable apparatus, analyze the one or more images to detect a visual trigger associated with the controllable device and, based on the detection of the visual trigger, transmit, via the transceiver, a command. The command may be configured to change at least one aspect of the controllable device.
    Type: Grant
    Filed: April 21, 2017
    Date of Patent: November 3, 2020
    Assignee: OrCam Technologies Ltd.
    Inventors: Yonatan Wexler, Amnon Shashua
  • Publication number: 20200333784
    Abstract: A navigational system for a host vehicle may comprise at least one processing device. The processing device may be programmed to receive a first output and a second output associated with the host vehicle and identify a representation of a target object in the first output. The processing device may determine whether a characteristic of the target object triggers a navigational constraint by verifying the identification of the target object based on the first output and, if the at least one navigational constraint is not verified based on the first output, then verifying the identification of the target object based on a combination of the first output and the second output. In response to the verification, the processing device may cause at least one navigational change to the host vehicle.
    Type: Application
    Filed: June 26, 2020
    Publication date: October 22, 2020
    Inventors: Amnon Shashua, Shai Shalev-Shwartz, Shaked Shammah
  • Publication number: 20200326707
    Abstract: Systems and methods are provided for constructing, using, and updating a sparse map for autonomous vehicle navigation. In one implementation, a non-transitory computer-readable medium includes a sparse map for autonomous vehicle navigation along a road segment. The sparse map includes a polynomial representation of a target trajectory for the autonomous vehicle along the road segment and a plurality of predetermined landmarks associated with the road segment, wherein the plurality of predetermined landmarks are spaced apart by at least 50 meters. The sparse map has a data density of no more than 1 megabyte per kilometer.
    Type: Application
    Filed: June 26, 2020
    Publication date: October 15, 2020
    Inventors: Amnon Shashua, Yoram Gdalyahu, Ofer Springer, Aran Reisman, Daniel Braunstein
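    Illustrative sketch: the abstract describes a data layout, a polynomial target trajectory plus landmarks spaced at least 50 meters apart, within roughly 1 megabyte per kilometer. The dataclasses and size accounting below are illustrative assumptions, not the actual sparse-map format.

```python
# Minimal sketch of the sparse-map layout described in the abstract: a polynomial target
# trajectory per road segment plus sparsely spaced landmarks, under a budget of roughly
# 1 MB of data per kilometer. Field names and the size accounting are assumptions for
# illustration only.
from dataclasses import dataclass, field
from typing import List, Tuple

@dataclass
class Landmark:
    position_m: float          # longitudinal position along the road segment, in meters
    kind: str                  # e.g. "sign", "pole"

@dataclass
class SparseRoadSegment:
    length_km: float
    trajectory_coeffs: Tuple[float, ...]      # polynomial representation of the target trajectory
    landmarks: List[Landmark] = field(default_factory=list)

    def landmarks_spaced_at_least(self, min_spacing_m: float = 50.0) -> bool:
        pos = sorted(l.position_m for l in self.landmarks)
        return all(b - a >= min_spacing_m for a, b in zip(pos, pos[1:]))

    def within_density_budget(self, bytes_per_record: int = 32,
                              budget_bytes_per_km: int = 1_000_000) -> bool:
        approx_bytes = bytes_per_record * (len(self.trajectory_coeffs) + len(self.landmarks))
        return approx_bytes <= budget_bytes_per_km * self.length_km

segment = SparseRoadSegment(length_km=1.0,
                            trajectory_coeffs=(0.0, 1.2, -0.003),
                            landmarks=[Landmark(0.0, "sign"), Landmark(75.0, "pole"),
                                       Landmark(140.0, "sign")])
print(segment.landmarks_spaced_at_least(), segment.within_density_budget())
```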
  • Patent number: 10795375
    Abstract: Systems and methods are provided for navigating an autonomous vehicle using reinforcement learning techniques.
    Type: Grant
    Filed: June 28, 2018
    Date of Patent: October 6, 2020
    Assignee: Mobileye Vision Technologies Ltd.
    Inventors: Shai Shalev-Shwartz, Shaked Shammah, Amnon Shashua, Gideon Stein, Ori Buberman
  • Patent number: 10782703
    Abstract: Systems and methods are provided for navigating an autonomous vehicle using reinforcement learning techniques.
    Type: Grant
    Filed: June 6, 2019
    Date of Patent: September 22, 2020
    Assignee: Mobileye Vision Technologies Ltd.
    Inventors: Shai Shalev-Shwartz, Shaked Shammah, Amnon Shashua, Yoav Taieb, Gideon Stein
  • Publication number: 20200296521
    Abstract: A system may include a wearable camera configured to capture images and a microphone configured to capture sounds, and a processor programmed to receive the images captured by the camera and audio signals representative of sounds received by the microphone. The processor may also be programmed to determine a look direction for a user based upon detection of a representation of a body part of the user in at least one of the captured images and a pointing direction of the body part relative to an optical axis associated with the wearable camera. The processor may further be programmed to cause selective conditioning of an audio signal received by the microphone from a region associated with the look direction of the user and cause transmission of the conditioned audio signal to an interface device.
    Type: Application
    Filed: May 29, 2020
    Publication date: September 17, 2020
    Applicant: OrCam Technologies Ltd.
    Inventors: Yonatan Wexler, Amnon Shashua
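    Illustrative sketch: the abstract pairs a look-direction estimate (the angle between a detected body part's pointing direction and the camera's optical axis) with selective conditioning of audio from that direction. The angles, gains, and channel layout below are illustrative assumptions, not the patented conditioning method.

```python
# Minimal sketch of the selective-conditioning idea in the abstract: estimate the user's
# look direction as an angle relative to the camera's optical axis, then amplify audio
# associated with that direction and attenuate the rest. Angles, gains and the channel
# layout are illustrative assumptions.
import math

def pointing_angle_deg(body_part_vector, optical_axis=(0.0, 1.0)):
    """Angle (degrees) between a detected pointing direction and the camera optical axis."""
    bx, by = body_part_vector
    ax, ay = optical_axis
    dot = bx * ax + by * ay
    norm = math.hypot(bx, by) * math.hypot(ax, ay)
    return math.degrees(math.acos(dot / norm))

def condition_audio(channels, look_angle_deg, beam_width_deg=30.0, boost=2.0, cut=0.2):
    """Scale each (angle, samples) channel up if it lies within the look-direction beam."""
    conditioned = {}
    for angle, samples in channels.items():
        gain = boost if abs(angle - look_angle_deg) <= beam_width_deg / 2 else cut
        conditioned[angle] = [gain * s for s in samples]
    return conditioned

# Example: the detected body part points ~20 degrees off the optical axis, so the
# 15-degree channel is boosted while the others are attenuated.
look = pointing_angle_deg((0.35, 1.0))
print(round(look, 1), condition_audio({-45: [0.1, 0.2], 15: [0.1, 0.2], 60: [0.1, 0.2]}, look))
```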
  • Publication number: 20200250289
    Abstract: A wearable device may include a housing, a sensor in the housing configured to generate an output, and a transmitter in the housing. The wearable device may also include a processor programmed to alternatively operate in a normal radiation mode and a low radiation mode. The transmitter may be permitted to function at a normal capacity when operating in the normal radiation mode and may be caused to function at a reduced capacity when operating in the low radiation mode. During operation at the normal capacity, the transmitter may transmit at a higher radiation intensity than during operation at the reduced capacity. The processor may also be programmed to detect, based on the output generated by the sensor, whether the housing is currently worn by the user, and cause the transmitter to operate in the low radiation mode after detecting that the housing is being worn by the user.
    Type: Application
    Filed: April 23, 2020
    Publication date: August 6, 2020
    Applicant: OrCam Technologies Ltd.
    Inventors: Yonatan Wexler, Amnon Shashua
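    Illustrative sketch: the described behavior is a mode switch, reduced transmit capacity while the housing is detected as worn, normal capacity otherwise. The controller class and wear-detection threshold below are assumptions for illustration only.

```python
# Minimal sketch of the mode switching described in the abstract: when the sensor output
# indicates the housing is being worn, the transmitter runs in a low-radiation mode
# (reduced capacity); otherwise it runs in the normal mode. The class and the proximity
# threshold are illustrative assumptions, not the OrCam implementation.

NORMAL, LOW = "normal_radiation", "low_radiation"

class WearableTransmitterController:
    def __init__(self, worn_threshold: float = 0.5):
        self.worn_threshold = worn_threshold
        self.mode = NORMAL

    def is_worn(self, proximity_sensor_output: float) -> bool:
        """Placeholder wear detection from a single proximity-style sensor reading."""
        return proximity_sensor_output >= self.worn_threshold

    def update(self, proximity_sensor_output: float) -> str:
        self.mode = LOW if self.is_worn(proximity_sensor_output) else NORMAL
        return self.mode

    def transmit_power(self) -> float:
        """Reduced capacity in low-radiation mode, full capacity otherwise (arbitrary units)."""
        return 0.25 if self.mode == LOW else 1.0

controller = WearableTransmitterController()
print(controller.update(0.9), controller.transmit_power())   # worn -> low mode, reduced power
print(controller.update(0.1), controller.transmit_power())   # not worn -> normal mode
```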
  • Publication number: 20200252218
    Abstract: A wearable device for authenticating an identity of a wearer may include a wearable housing configured to be worn by the wearer and at least one sensor in the housing, the at least one sensor being configured to generate an output indicative of at least one aspect of an environment of the wearer. The wearable device may also include at least one processor programmed to alternatively operate in an unrestricted operation mode and a restricted operation mode, and detect, based on the output generated by the at least one sensor, whether the wearer of the housing is authenticated with the wearable device. The at least one processor may also be programmed to operate in the unrestricted operation mode after the at least one processor detects that the wearer of the housing is authenticated with the wearable device.
    Type: Application
    Filed: April 23, 2020
    Publication date: August 6, 2020
    Applicant: OrCam Technologies Ltd.
    Inventors: Yonatan Wexler, Amnon Shashua
  • Patent number: 10733446
    Abstract: A wearable apparatus is provided for capturing and processing images from an environment of a user. In one implementation, the wearable apparatus is used for causing a device paired to the wearable apparatus to execute a selected function. The wearable apparatus includes an image capture device, a transmitter and at least one processing device. The at least one processing device is programmed to obtain images captured by the image capture device; analyze the images to detect a contextual situation associated with images; based on the detected contextual situation, associate with the at least one image a category tag, wherein the category tag is associated with a selected function; determine image-related information associated with the detected contextual situation; and cause the transmitter to transmit the determined image-related information to the paired device to cause the paired device to execute the selected function based on the determined image-related information.
    Type: Grant
    Filed: January 5, 2017
    Date of Patent: August 4, 2020
    Assignee: OrCam Technologies Ltd.
    Inventors: Yonatan Wexler, Amnon Shashua
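    Illustrative sketch: the abstract outlines a pipeline from a detected contextual situation to a category tag to a function executed on the paired device. The situation detector, tag-to-function table, and payload format below are illustrative assumptions.

```python
# Minimal sketch of the pipeline in the abstract: detect a contextual situation in the
# captured images, map it to a category tag associated with a function on the paired
# device, and transmit the image-related information so the paired device can execute it.
# The situation detector, tag table and payload format are illustrative assumptions.

CATEGORY_TO_FUNCTION = {
    "meeting": "mute_phone",          # hypothetical mapping: category tag -> device function
    "driving": "open_navigation",
}

def detect_contextual_situation(images):
    """Placeholder for the image-analysis step; returns a situation label."""
    return images.get("situation")

def process_and_transmit(images, transmit):
    situation = detect_contextual_situation(images)
    category_tag = situation                              # tag associated with the situation
    function = CATEGORY_TO_FUNCTION.get(category_tag)
    if function is None:
        return
    payload = {"function": function, "info": {"situation": situation}}
    transmit(payload)                                      # paired device executes the function

process_and_transmit({"situation": "meeting"}, transmit=lambda p: print("sending", p))
```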
  • Publication number: 20200234065
    Abstract: Systems and methods are provided for detecting an object in front of a vehicle. In one implementation, an object detecting system includes an image capture device configured to acquire a plurality of images of an area, a data interface, and a processing device programmed to compare a first image to a second image to determine displacement vectors between pixels, to search for a region of coherent expansion that is a set of pixels in at least one of the first image and the second image, for which there exists a common focus of expansion and a common scale magnitude such that the set of pixels satisfy a relationship between pixel positions, displacement vectors, the common focus of expansion, and the common scale magnitude, and to identify presence of a substantially upright object based on the set of pixels.
    Type: Application
    Filed: February 18, 2020
    Publication date: July 23, 2020
    Inventors: Amnon Shashua, Erez Dagan, Tomer Baba, Yoni Myers, Yossi Hollander
  • Patent number: 10719711
    Abstract: A wearable apparatus is provided for capturing and processing images from an environment of a user. In one implementation, a system for controlling one or more controllable devices includes a transceiver and at least one processing device. The processing device is programmed to obtain one or more images captured by an image sensor included in a wearable apparatus, analyze the one or more images to identify a controllable device in an environment of a user of the wearable apparatus, analyze the one or more images to detect a visual trigger associated with the controllable device and, based on the detection of the visual trigger, transmit, via the transceiver, a command. The command may be configured to change at least one aspect of the controllable device.
    Type: Grant
    Filed: January 5, 2017
    Date of Patent: July 21, 2020
    Assignee: OrCam Technologies Ltd.
    Inventors: Yonatan Wexler, Amnon Shashua
  • Publication number: 20200225681
    Abstract: A control system for a vehicle may include a forward-facing camera to capture a plurality of images of a road ahead of the vehicle and a processing device. The processing device may be configured to: provide feedback to a vehicle operator of the vehicle to change lanes to a new lane, in which the vehicle is not already traveling, based on the ending of a current lane, the ending of the current lane indicated by a first traffic cone identified in the plurality of images; and update a distance from the vehicle to a second traffic cone, based on a position of the vehicle, the second traffic cone used to constrain vehicle operation to the new lane.
    Type: Application
    Filed: January 29, 2020
    Publication date: July 16, 2020
    Applicant: Mobileye Vision Technologies Ltd.
    Inventors: Gideon Stein, Amnon Shashua
  • Publication number: 20200223451
    Abstract: The present disclosure relates to systems and methods for host vehicle navigation. Disclosed systems and methods may receive, from a camera, a plurality of images representative of an environment of the host vehicle; analyze the plurality of images to identify at least one pedestrian in the environment of the host vehicle; identify eyes of the at least one pedestrian represented in at least one of the plurality of images; and determine, based on analysis of the at least one of the plurality of images and based on the identification of the eyes of the at least one pedestrian in the at least one of the plurality of images, a looking direction of the at least one pedestrian.
    Type: Application
    Filed: March 17, 2020
    Publication date: July 16, 2020
    Inventors: Amnon Shashua, Shai Shalev-Shwartz, Shaked Shammah
  • Publication number: 20200219414
    Abstract: Devices and a method are provided for providing feedback to a user. In one implementation, the method comprises obtaining a plurality of images from an image sensor. The image sensor is configured to be positioned for movement with the user's head. The method further comprises monitoring the images, and determining whether relative motion occurs between a first portion of a scene captured in the plurality of images and other portions of the scene captured in the plurality of images. If the first portion of the scene moves less than at least one other portion of the scene, the method comprises obtaining contextual information from the first portion of the scene. The method further comprises providing the feedback to the user based on at least part of the contextual information.
    Type: Application
    Filed: March 16, 2020
    Publication date: July 9, 2020
    Inventors: Yonatan Wexler, Amnon Shashua
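    Illustrative sketch: the method compares motion across portions of consecutive head-mounted frames and reads contextual information from the portion that moves least. The 2x2 grid and frame-difference measure below are assumptions used only to illustrate that comparison.

```python
# Minimal numpy sketch of the idea in the abstract: split consecutive head-mounted frames
# into portions, measure how much each portion changes, and read contextual information
# from the portion that moves least (e.g. a page or sign the user keeps steady in view).
# The 2x2 grid and mean-absolute-difference measure are illustrative assumptions.
import numpy as np

def least_moving_portion(prev_frame, curr_frame, grid=(2, 2)):
    """Return the (row, col) of the grid cell with the smallest mean absolute change."""
    h, w = prev_frame.shape
    gh, gw = h // grid[0], w // grid[1]
    scores = {}
    for r in range(grid[0]):
        for c in range(grid[1]):
            a = prev_frame[r * gh:(r + 1) * gh, c * gw:(c + 1) * gw]
            b = curr_frame[r * gh:(r + 1) * gh, c * gw:(c + 1) * gw]
            scores[(r, c)] = float(np.abs(b.astype(float) - a.astype(float)).mean())
    return min(scores, key=scores.get), scores

# Synthetic example: everything shifts except the top-left quadrant, which stays put.
rng = np.random.default_rng(1)
prev = rng.integers(0, 255, size=(240, 320), dtype=np.uint8)
curr = np.roll(prev, 5, axis=1)              # simulate head motion
curr[:120, :160] = prev[:120, :160]          # the "steady" portion the user is reading
portion, _ = least_moving_portion(prev, curr)
print("extract contextual information from portion", portion)
```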
  • Publication number: 20200221218
    Abstract: The present disclosure relates to systems and methods for directing the audio output of a wearable device having a plurality of speakers. In one implementation, the system may include an image sensor configured to capture one or more images from an environment of the user of the wearable apparatus, a plurality of speakers, and at least one processing device. The at least one processing device may be configured to analyze the one or more images to determine at least one indicator of head orientation of the user of the wearable apparatus, select at least one of the plurality of speakers based on the at least one indicator of head orientation, and output the audio to the user of the wearable apparatus via the selected at least one of the plurality of speakers.
    Type: Application
    Filed: February 11, 2020
    Publication date: July 9, 2020
    Inventors: Yonatan Wexler, Amnon Shashua