Patents by Inventor Nadav Israel

Nadav Israel has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Patent number: 11714877
    Abstract: A machine learning system to determine an identity of a user is trained using triplets of ad hoc synthetic data and actual data. The data may comprise multimodal images of a hand. Each triplet comprises an anchor, a positive, and a negative image. Synthetic triplets for different synthesized identities are generated on an ad hoc basis and provided as input during training of the machine learning system. The machine learning system uses a pairwise label-based loss function, such as a triplet loss function, during training. Synthetic triplets may be generated to provide more challenging training data, to provide training data for categories that are underrepresented in the actual data, and so forth. The system uses substantially less memory during training, and the synthetic triplets need not be retained, further reducing memory use. Ongoing training is supported as new actual triplets become available, and may be supplemented by additional synthetic triplets.
    Type: Grant
    Filed: September 30, 2020
    Date of Patent: August 1, 2023
    Assignee: AMAZON TECHNOLOGIES, INC.
    Inventors: Alon Shoshan, Miriam Farber, Nadav Israel Bhonker, Igor Kviatkovsky, Manoj Aggarwal, Gerard Guy Medioni
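The abstract above describes training an identity-embedding model with a pairwise label-based (triplet) loss, where synthetic triplets are generated ad hoc and discarded after use. The following is a minimal sketch of that idea in PyTorch; the embedding network, margin, image shapes, and batch sizes are illustrative assumptions, not details from the patent.

```python
# Sketch: triplet-loss training with ad hoc synthetic triplets (illustrative only;
# `embedder`, the margin, and the image shapes are assumptions, not from the patent).
import torch
import torch.nn as nn

embedder = nn.Sequential(            # stand-in for the identity-embedding network
    nn.Flatten(),
    nn.Linear(64 * 64, 128),
)
triplet_loss = nn.TripletMarginLoss(margin=1.0)   # pairwise label-based loss
optimizer = torch.optim.Adam(embedder.parameters(), lr=1e-4)

def training_step(anchor, positive, negative):
    """One triplet-loss update; inputs are batches of images."""
    a, p, n = embedder(anchor), embedder(positive), embedder(negative)
    loss = triplet_loss(a, p, n)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()

# Synthetic triplets can be generated on the fly, consumed by a single update,
# and then dropped, so they never need to be retained alongside actual data.
for _ in range(10):
    synthetic = tuple(torch.rand(8, 1, 64, 64) for _ in range(3))  # anchor, positive, negative
    training_step(*synthetic)
```

Because each synthetic triplet is used once and then discarded, the pool of synthetic identities never has to be stored, which is consistent with the reduced-memory claim in the abstract.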
  • Patent number: 11670104
    Abstract: A scanner acquires a set of images of a hand of a user to facilitate identification. These images may vary, due to changes in relative position, pose, lighting, obscuring objects such as a sleeve, and so forth. A first neural network determines output data comprising a spatial mask and a feature map for individual images in the set. The output data for two or more images is combined to provide aggregate data that is representative of the two or more images. The aggregate data may then be processed using a second neural network, such as a convolutional neural network, to determine an embedding vector. The embedding vector may be stored and associated with a user account. At a later time, images acquired from the scanner may be processed to produce an embedding vector that is compared to the stored embedding vector to identify a user at the scanner.
    Type: Grant
    Filed: November 13, 2020
    Date of Patent: June 6, 2023
    Assignee: AMAZON TECHNOLOGIES, INC.
    Inventors: Lior Zamir, Miriam Farber, Igor Kviatkovsky, Nadav Israel Bhonker, Manoj Aggarwal, Gerard Guy Medioni
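The entry above describes a first network that outputs a spatial mask and feature map per image, aggregation of that output across the image set, and a second network that turns the aggregate into an embedding compared against an enrolled one. A minimal sketch follows, assuming mask-weighted averaging as the aggregation step and cosine similarity with an arbitrary threshold for the comparison; layer sizes and shapes are placeholders.

```python
# Sketch: per-image mask + feature map, mask-weighted aggregation, then embedding
# and comparison against an enrolled vector. All sizes and the threshold are assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F

class MaskAndFeatures(nn.Module):
    """First network: spatial mask and feature map for each image."""
    def __init__(self):
        super().__init__()
        self.features = nn.Conv2d(3, 16, 3, padding=1)
        self.mask = nn.Conv2d(3, 1, 3, padding=1)

    def forward(self, x):
        return torch.sigmoid(self.mask(x)), self.features(x)

class Embedder(nn.Module):
    """Second network: aggregate feature map -> embedding vector."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(16, 32, 3), nn.AdaptiveAvgPool2d(1),
            nn.Flatten(), nn.Linear(32, 128),
        )

    def forward(self, agg):
        return F.normalize(self.net(agg), dim=-1)

def embed_set(images, first, second):
    masks, feats = first(images)                      # per-image masks and feature maps
    agg = (masks * feats).sum(0, keepdim=True) / masks.sum(0, keepdim=True).clamp(min=1e-6)
    return second(agg)                                # one embedding for the whole set

first, second = MaskAndFeatures(), Embedder()
enrolled = embed_set(torch.rand(4, 3, 64, 64), first, second)   # stored at enrollment
probe = embed_set(torch.rand(3, 3, 64, 64), first, second)      # acquired later at the scanner
is_match = F.cosine_similarity(enrolled, probe).item() > 0.9    # assumed threshold
```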
  • Patent number: 11537813
    Abstract: During a training phase, a first machine learning system is trained using actual data, such as multimodal images of a hand, to generate synthetic image data. During training, the first system determines latent vector spaces associated with identity, appearance, and so forth. During a generation phase, latent vectors from the latent vector spaces are generated and used as input to the first machine learning system to generate candidate synthetic image data. The candidate image data is assessed to determine suitability for inclusion into a set of synthetic image data that may subsequently be used to train a second machine learning system to recognize an identity of a hand presented by a user. For example, the candidate synthetic image data is compared to previously generated synthetic image data to avoid duplicative synthetic identities. The second machine learning system is then trained using the approved candidate synthetic image data.
    Type: Grant
    Filed: September 30, 2020
    Date of Patent: December 27, 2022
    Assignee: AMAZON TECHNOLOGIES, INC.
    Inventors: Igor Kviatkovsky, Nadav Israel Bhonker, Alon Shoshan, Manoj Aggarwal, Gerard Guy Medioni
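Patent 11537813 describes sampling identity and appearance latent vectors, generating candidate synthetic images, and rejecting candidates that duplicate an existing synthetic identity. The sketch below shows that generation-and-filtering loop with stand-in linear modules; the generator, the identity embedder, the similarity threshold, and the latent dimensions are illustrative assumptions only.

```python
# Sketch: generation phase with duplicate-identity filtering. The generator,
# embedder, latent sizes, and similarity threshold are placeholders, not the
# patent's actual models or values.
import torch
import torch.nn as nn
import torch.nn.functional as F

identity_dim, appearance_dim = 64, 32
generator = nn.Linear(identity_dim + appearance_dim, 3 * 64 * 64)  # stand-in generator
identity_embedder = nn.Linear(3 * 64 * 64, 128)                    # stand-in identity embedder

accepted_images, accepted_ids = [], []

def candidate():
    z_id = torch.randn(1, identity_dim)        # latent vector for a synthesized identity
    z_app = torch.randn(1, appearance_dim)     # latent vector for appearance (pose, lighting, ...)
    return generator(torch.cat([z_id, z_app], dim=1))

with torch.no_grad():
    for _ in range(500):                       # fixed budget of candidate draws
        if len(accepted_images) >= 100:
            break
        image = candidate()
        emb = F.normalize(identity_embedder(image), dim=-1)
        # Reject a candidate whose identity embedding is too similar to an
        # already-accepted synthetic identity (a duplicative identity).
        if all(F.cosine_similarity(emb, prev).item() < 0.9 for prev in accepted_ids):
            accepted_images.append(image)
            accepted_ids.append(emb)

print(f"accepted {len(accepted_images)} synthetic identities")
```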
  • Patent number: 10126826
    Abstract: A user interface apparatus for controlling any kind of a device. Images obtained by an image sensor in a region adjacent to the device are input to a gesture recognition system which analyzes images obtained by the image sensor to identify one or more gestures. A message decision maker generates a message based upon an identified gesture and a recognition mode of the gesture recognition system. The recognition mode is changed under one or more various conditions.
    Type: Grant
    Filed: June 27, 2016
    Date of Patent: November 13, 2018
    Assignee: Eyesight Mobile Technologies Ltd.
    Inventors: Itay Katz, Nadav Israel, Tamir Anavi, Shahaf Grofit, Itay Bar-Yosef
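Patent 10126826 (and the related publications in this listing) describes a message decision maker that combines an identified gesture with the current recognition mode, where the mode itself changes under certain conditions. A minimal sketch follows, assuming two modes and a handful of hypothetical gesture names and messages; none of these identifiers come from the patent.

```python
# Sketch: a message decision maker keyed on gesture + recognition mode, with
# mode changes triggered by specific gestures. Modes, gestures, and messages
# are illustrative assumptions.
from enum import Enum, auto
from typing import Optional

class Mode(Enum):
    ACTIVE = auto()     # full gesture set is acted on
    STANDBY = auto()    # only a wake gesture is acted on

class MessageDecisionMaker:
    def __init__(self):
        self.mode = Mode.STANDBY

    def decide(self, gesture: str) -> Optional[str]:
        """Return a control message for the device, or None if the gesture is ignored."""
        if self.mode is Mode.STANDBY:
            if gesture == "wave":               # condition that changes the recognition mode
                self.mode = Mode.ACTIVE
                return "wake"
            return None
        if gesture == "swipe_left":
            return "previous_item"
        if gesture == "swipe_right":
            return "next_item"
        if gesture == "closed_hand":            # another mode-changing condition
            self.mode = Mode.STANDBY
            return "sleep"
        return None

dm = MessageDecisionMaker()
for g in ["swipe_left", "wave", "swipe_left", "closed_hand"]:
    print(g, "->", dm.decide(g))    # same gesture maps to different messages per mode
```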
  • Publication number: 20180024643
    Abstract: A user interface apparatus for controlling any kind of a device. Images obtained by an image sensor in a region adjacent to the device are input to a gesture recognition system which analyzes images obtained by the image sensor to identify one or more gestures. A message decision maker generates a message based upon an identified gesture and a recognition mode of the gesture recognition system. The recognition mode is changed under one or more various conditions.
    Type: Application
    Filed: September 29, 2017
    Publication date: January 25, 2018
    Inventors: Itay Katz, Nadav Israel, Tamir Anavi, Shahaf Grofit, Itay Bar-Yosef
  • Publication number: 20160343145
    Abstract: The invention provides a system and method for object detection and tracking in a video stream. Frames of the video stream are divided into regions of interest and a probability that the region contains at least a portion of an object to be tracked is calculated for each region of interest. The regions of interest in each frame are then classified based on the calculated probabilities. A region of interest (RI) frame is then constructed for each video frame that reports the classification of regions of interest in the video frame. Two or more RI frames are then compared in order to determine a motion of the object. The invention also provides a system executing the method of the invention, as well as a device comprising the system. The device may be for example, a portable computer, a mobile telephone, or an entertainment device.
    Type: Application
    Filed: August 1, 2016
    Publication date: November 24, 2016
    Inventors: Nadav Israel, Itay Katz, Dudi Cohen, Amnon Shenfeld
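Publication 20160343145, like granted patent 9405970 further down this listing, describes dividing each frame into regions of interest, classifying each region by the probability that it contains part of the tracked object, building a region-of-interest (RI) frame from those classifications, and comparing RI frames to determine motion. The sketch below illustrates that pipeline; the grid size, the toy probability function, and the centroid-shift motion estimate are assumed choices for illustration, not the patent's specific algorithms.

```python
# Sketch: region-of-interest (RI) frames and motion estimation by comparing them.
# Grid size, probability function, and threshold are illustrative assumptions.
import numpy as np

GRID = 8  # each frame is divided into GRID x GRID regions of interest

def ri_frame(frame: np.ndarray, prob_fn, threshold: float = 0.5) -> np.ndarray:
    """Classify each region of interest; 1 = likely contains the tracked object."""
    h, w = frame.shape[:2]
    ri = np.zeros((GRID, GRID), dtype=np.uint8)
    for i in range(GRID):
        for j in range(GRID):
            region = frame[i * h // GRID:(i + 1) * h // GRID,
                           j * w // GRID:(j + 1) * w // GRID]
            ri[i, j] = prob_fn(region) > threshold
    return ri

def estimate_motion(ri_prev: np.ndarray, ri_curr: np.ndarray):
    """Compare two RI frames: shift of the object's region centroid, in grid cells."""
    def centroid(ri):
        ys, xs = np.nonzero(ri)
        return (ys.mean(), xs.mean()) if len(ys) else (float("nan"), float("nan"))
    (y0, x0), (y1, x1) = centroid(ri_prev), centroid(ri_curr)
    return (y1 - y0, x1 - x0)

# Toy probability function: mean brightness of the region (an assumption for illustration).
brightness = lambda region: region.mean() / 255.0
prev = ri_frame(np.random.randint(0, 256, (240, 320), dtype=np.uint8), brightness)
curr = ri_frame(np.random.randint(0, 256, (240, 320), dtype=np.uint8), brightness)
print("motion (rows, cols):", estimate_motion(prev, curr))
```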
  • Publication number: 20160306435
    Abstract: A user interface apparatus for controlling any kind of a device. Images obtained by an image sensor in a region adjacent to the device are input to a gesture recognition system which analyzes images obtained by the image sensor to identify one or more gestures. A message decision maker generates a message based upon an identified gesture and a recognition mode of the gesture recognition system. The recognition mode is changed under one or more various conditions.
    Type: Application
    Filed: June 27, 2016
    Publication date: October 20, 2016
    Inventors: Itay Katz, Nadav Israel, Tamir Anavi, Shahaf Grofit, Itay Bar-Yosef
  • Patent number: 9405970
    Abstract: Provided is a system and method for object detection and tracking in a video stream. Frames of the video stream are divided into regions of interest and a probability that the region contains at least a portion of an object to be tracked is calculated for each region of interest. The regions of interest in each frame are then classified based on the calculated probabilities. A region of interest (RI) frame is then constructed for each video frame that reports the classification of regions of interest in the video frame. Two or more RI frames are then compared in order to determine a motion of the object. Also provided is a system executing the presently described method, as well as a device including the system. The device may be for example, a portable computer, a mobile telephone, or an entertainment device.
    Type: Grant
    Filed: February 2, 2010
    Date of Patent: August 2, 2016
    Assignee: eyeSight Mobile Technologies Ltd.
    Inventors: Nadav Israel, Itay Katz, Dudi Cohen, Amnon Shenfeld
  • Patent number: 9377867
    Abstract: A user interface apparatus for controlling any kind of a device. Images obtained by an image sensor in a region adjacent to the device are input to a gesture recognition system which analyzes images obtained by the image sensor to identify one or more gestures. A message decision maker generates a message based upon an identified gesture and a recognition mode of the gesture recognition system. The recognition mode is changed under one or more various conditions.
    Type: Grant
    Filed: August 8, 2012
    Date of Patent: June 28, 2016
    Assignee: EYESIGHT MOBILE TECHNOLOGIES LTD.
    Inventors: Itay Katz, Nadav Israel, Tamir Anavi, Shahaf Grofit, Itay Bar-Yosef
  • Publication number: 20140306877
    Abstract: A user interface apparatus for controlling any kind of a device. Images obtained by an image sensor in a region adjacent to the device are input to a gesture recognition system which analyzes images obtained by the image sensor to identify one or more gestures. A message decision maker generates a message based upon an identified gesture and a recognition mode of the gesture recognition system. The recognition mode is changed under one or more various conditions.
    Type: Application
    Filed: August 8, 2012
    Publication date: October 16, 2014
    Inventors: Itay Katz, Nadav Israel, Tamir Anavi, Shahaf Grofit, Itay Bar-Yosef
  • Patent number: 8842919
    Abstract: Systems, methods, and computer-readable media for gesture recognition are disclosed. The systems include, for example, at least one processor that is configured to receive at least one image from at least one image sensor. The processor may also be configured to detect, in the image, data corresponding to an anatomical structure of a user. The processor may also be configured to identify, in the image, information corresponding to a suspected hand gesture by the user. In addition, the processor may also be configured to discount the information corresponding to the suspected hand gesture if the data corresponding to the anatomical structure of the user is not identified in the image.
    Type: Grant
    Filed: February 8, 2014
    Date of Patent: September 23, 2014
    Assignee: Eyesight Mobile Technologies Ltd.
    Inventors: Itay Katz, Nadav Israel, Tamir Anavi, Shahaf Grofit, Itay Bar-Yosef
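Patent 8842919 describes discounting the information corresponding to a suspected hand gesture when data corresponding to an anatomical structure of the user is not identified in the same image. A minimal sketch, assuming a scalar confidence score, a fixed discount factor, and an acceptance threshold (all illustrative values, not from the patent):

```python
# Sketch: discount a suspected hand gesture when the expected anatomical
# structure is not detected in the image. Confidence scale, discount factor,
# and acceptance threshold are assumptions.
from dataclasses import dataclass

@dataclass
class GestureCandidate:
    name: str
    confidence: float   # 0.0 - 1.0

def evaluate_gesture(candidate: GestureCandidate,
                     anatomical_structure_found: bool,
                     discount: float = 0.3,
                     accept_threshold: float = 0.6) -> bool:
    """Discount the gesture's confidence if the anatomical structure is absent."""
    confidence = candidate.confidence
    if not anatomical_structure_found:
        confidence *= discount        # the suspected gesture is discounted, not outright rejected
    return confidence >= accept_threshold

# With a supporting anatomical detection the gesture is accepted; without it,
# the same detection falls below the acceptance threshold.
swipe = GestureCandidate("swipe_right", confidence=0.8)
print(evaluate_gesture(swipe, anatomical_structure_found=True))   # True
print(evaluate_gesture(swipe, anatomical_structure_found=False))  # False
```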
  • Publication number: 20140157210
    Abstract: A user interface apparatus for controlling any kind of a device. Images obtained by an image sensor in a region adjacent to the device are input to a gesture recognition system which analyzes images obtained by the image sensor to identify one or more gestures. A message decision maker generates a message based upon an identified gesture and a recognition mode of the gesture recognition system. The recognition mode is changed under one or more various conditions.
    Type: Application
    Filed: February 8, 2014
    Publication date: June 5, 2014
    Inventors: Itay Katz, Nadav Israel, Tamir Anavi, Shahaf Grofit, Itay Bar-Yosef
  • Publication number: 20110291925
    Abstract: Provided is a system and method for object detection and tracking in a video stream. Frames of the video stream are divided into regions of interest and a probability that the region contains at least a portion of an object to be tracked is calculated for each region of interest. The regions of interest in each frame are then classified based on the calculated probabilities. A region of interest (RI) frame is then constructed for each video frame that reports the classification of regions of interest in the video frame. Two or more RI frames are then compared in order to determine a motion of the object. Also provided is a system executing the presently described method, as well as a device including the system. The device may be for example, a portable computer, a mobile telephone, or an entertainment device.
    Type: Application
    Filed: February 2, 2010
    Publication date: December 1, 2011
    Applicant: EYESIGHT MOBILE TECHNOLOGIES LTD.
    Inventors: Nadav Israel, Itay Katz, Dudi Cohen, Amnon Shenfeld