Patents by Inventor OFIR LEVY

Ofir Levy has filed patents to protect the following inventions. This listing includes both pending patent applications and patents already granted by the United States Patent and Trademark Office (USPTO). Short illustrative code sketches of the main techniques described in the abstracts appear after the listing.

  • Patent number: 11727725
    Abstract: A technique and system for counting the number of repetitions of approximately the same action in an input video sequence using 3D convolutional neural networks is disclosed. The proposed system runs online rather than on the complete video. It sequentially analyzes blocks of 20 non-consecutive frames. The cycle length within each block is evaluated using a deep network architecture and the information is then integrated over time. A unique property of the disclosed method is that it is shown to train successfully on entirely synthetic data, created by synthesizing moving random patches. It therefore effectively exploits the high generalization capability of deep neural networks. Coupled with a region-of-interest detection mechanism and a suitable mechanism to identify the time scale of the video, the system is robust enough to handle real-world videos collected from YouTube and elsewhere, as well as non-video signals such as sensor data revealing repetitious physical movement.
    Type: Grant
    Filed: February 11, 2021
    Date of Patent: August 15, 2023
    Inventors: Lior Wolf, Ofir Levy
  • Publication number: 20230186584
    Abstract: Techniques are provided for generation of synthetic 3-dimensional object image variations for training of recognition systems. An example system may include an image synthesizing circuit configured to synthesize a 3D image of the object (including color and depth image pairs) based on a 3D model. The system may also include a background scene generator circuit configured to generate a background for each of the rendered image variations. The system may further include an image pose adjustment circuit configured to adjust the orientation and translation of the object for each of the variations. The system may further include an illumination and visual effect adjustment circuit configured to adjust illumination of the object and the background for each of the variations, and to further adjust visual effects of the object and the background for each of the variations based on application of simulated camera parameters.
    Type: Application
    Filed: February 6, 2023
    Publication date: June 15, 2023
    Applicant: Tahoe Research, Ltd.
    Inventors: Amit Bleiweiss, Chen Paz, Ofir Levy, Itamar Ben-Ari, Yaron Yanai
  • Patent number: 11574453
    Abstract: Techniques are provided for generation of synthetic 3-dimensional object image variations for training of recognition systems. An example system may include an image synthesizing circuit configured to synthesize a 3D image of the object (including color and depth image pairs) based on a 3D model. The system may also include a background scene generator circuit configured to generate a background for each of the rendered image variations. The system may further include an image pose adjustment circuit configured to adjust the orientation and translation of the object for each of the variations. The system may further include an illumination and visual effect adjustment circuit configured to adjust illumination of the object and the background for each of the variations, and to further adjust visual effects of the object and the background for each of the variations based on application of simulated camera parameters.
    Type: Grant
    Filed: September 4, 2020
    Date of Patent: February 7, 2023
    Assignee: Tahoe Research, Ltd.
    Inventors: Amit Bleiweiss, Chen Paz, Ofir Levy, Itamar Ben-Ari, Yaron Yanai
  • Publication number: 20210166055
    Abstract: A technique and system for counting the number of repetitions of approximately the same action in an input video sequence using 3D convolutional neural networks is disclosed. The proposed system runs online rather than on the complete video. It sequentially analyzes blocks of 20 non-consecutive frames. The cycle length within each block is evaluated using a deep network architecture and the information is then integrated over time. A unique property of the disclosed method is that it is shown to train successfully on entirely synthetic data, created by synthesizing moving random patches. It therefore effectively exploits the high generalization capability of deep neural networks. Coupled with a region-of-interest detection mechanism and a suitable mechanism to identify the time scale of the video, the system is robust enough to handle real-world videos collected from YouTube and elsewhere, as well as non-video signals such as sensor data revealing repetitious physical movement.
    Type: Application
    Filed: February 11, 2021
    Publication date: June 3, 2021
    Inventors: Lior Wolf, Ofir Levy
  • Publication number: 20210056768
    Abstract: Techniques are provided for generation of synthetic 3-dimensional object image variations for training of recognition systems. An example system may include an image synthesizing circuit configured to synthesize a 3D image of the object (including color and depth image pairs) based on a 3D model. The system may also include a background scene generator circuit configured to generate a background for each of the rendered image variations. The system may further include an image pose adjustment circuit configured to adjust the orientation and translation of the object for each of the variations. The system may further include an illumination and visual effect adjustment circuit configured to adjust illumination of the object and the background for each of the variations, and to further adjust visual effects of the object and the background for each of the variations based on application of simulated camera parameters.
    Type: Application
    Filed: September 4, 2020
    Publication date: February 25, 2021
    Applicant: Intel Corporation
    Inventors: Amit Bleiweiss, Chen Paz, Ofir Levy, Itamar Ben-Ari, Yaron Yanai
  • Patent number: 10922577
    Abstract: A technique and system for counting the number of repetitions of approximately the same action in an input video sequence using 3D convolutional neural networks is disclosed. The proposed system runs online rather than on the complete video. It sequentially analyzes blocks of 20 non-consecutive frames. The cycle length within each block is evaluated using a deep network architecture and the information is then integrated over time. A unique property of the disclosed method is that it is shown to train successfully on entirely synthetic data, created by synthesizing moving random patches. It therefore effectively exploits the high generalization capability of deep neural networks. Coupled with a region-of-interest detection mechanism and a suitable mechanism to identify the time scale of the video, the system is robust enough to handle real-world videos collected from YouTube and elsewhere, as well as non-video signals such as sensor data revealing repetitious physical movement.
    Type: Grant
    Filed: October 28, 2019
    Date of Patent: February 16, 2021
    Inventors: Lior Wolf, Ofir Levy
  • Patent number: 10769862
    Abstract: Techniques are provided for generation of synthetic 3-dimensional object image variations for training of recognition systems. An example system may include an image synthesizing circuit configured to synthesize a 3D image of the object (including color and depth image pairs) based on a 3D model. The system may also include a background scene generator circuit configured to generate a background for each of the rendered image variations. The system may further include an image pose adjustment circuit configured to adjust the orientation and translation of the object for each of the variations. The system may further include an illumination and visual effect adjustment circuit configured to adjust illumination of the object and the background for each of the variations, and to further adjust visual effects of the object and the background for each of the variations based on application of simulated camera parameters.
    Type: Grant
    Filed: August 2, 2018
    Date of Patent: September 8, 2020
    Assignee: Intel Corporation
    Inventors: Amit Bleiweiss, Chen Paz, Ofir Levy, Itamar Ben-Ari, Yaron Yanai
  • Publication number: 20200065608
    Abstract: A technique and system for counting the number of repetitions of approximately the same action in an input video sequence using 3D convolutional neural networks is disclosed. The proposed system runs online rather than on the complete video. It sequentially analyzes blocks of 20 non-consecutive frames. The cycle length within each block is evaluated using a deep network architecture and the information is then integrated over time. A unique property of the disclosed method is that it is shown to train successfully on entirely synthetic data, created by synthesizing moving random patches. It therefore effectively exploits the high generalization capability of deep neural networks. Coupled with a region-of-interest detection mechanism and a suitable mechanism to identify the time scale of the video, the system is robust enough to handle real-world videos collected from YouTube and elsewhere, as well as non-video signals such as sensor data revealing repetitious physical movement.
    Type: Application
    Filed: October 28, 2019
    Publication date: February 27, 2020
    Inventors: Lior Wolf, Ofir Levy
  • Patent number: 10460194
    Abstract: A technique and system for counting the number of repetitions of approximately the same action in an input video sequence using 3D convolutional neural networks is disclosed. The proposed system runs online rather than on the complete video. It sequentially analyzes blocks of 20 non-consecutive frames. The cycle length within each block is evaluated using a deep network architecture and the information is then integrated over time. A unique property of the disclosed method is that it is shown to train successfully on entirely synthetic data, created by synthesizing moving random patches. It therefore effectively exploits the high generalization capability of deep neural networks. Coupled with a region-of-interest detection mechanism and a suitable mechanism to identify the time scale of the video, the system is robust enough to handle real-world videos collected from YouTube and elsewhere, as well as non-video signals such as sensor data revealing repetitious physical movement.
    Type: Grant
    Filed: March 6, 2015
    Date of Patent: October 29, 2019
    Inventors: Lior Wolf, Ofir Levy
  • Patent number: 10248839
    Abstract: In accordance with some embodiments, connected-component labeling is performed in both the screen dimensions (which may be referred to as the x and y dimensions) and a depth dimension to label objects in a depth image. The contours of the labeled blobs may then be used to identify an object in the depth image. Using contours may be advantageous in some embodiments because it reduces the amount of data that must be handled and the extent of computations, compared to conventional techniques that use bitmap-based operations.
    Type: Grant
    Filed: November 30, 2015
    Date of Patent: April 2, 2019
    Assignee: Intel Corporation
    Inventors: Ofir Levy, Maoz Madmony, Orly Weisel
  • Publication number: 20180357834
    Abstract: Techniques are provided for generation of synthetic 3-dimensional object image variations for training of recognition systems. An example system may include an image synthesizing circuit configured to synthesize a 3D image of the object (including color and depth image pairs) based on a 3D model. The system may also include a background scene generator circuit configured to generate a background for each of the rendered image variations. The system may further include an image pose adjustment circuit configured to adjust the orientation and translation of the object for each of the variations. The system may further include an illumination and visual effect adjustment circuit configured to adjust illumination of the object and the background for each of the variations, and to further adjust visual effects of the object and the background for each of the variations based on application of simulated camera parameters.
    Type: Application
    Filed: August 2, 2018
    Publication date: December 13, 2018
    Applicant: Intel Corporation
    Inventors: Amit Bleiweiss, Chen Paz, Ofir Levy, Itamar Ben-Ari, Yaron Yanai
  • Patent number: 10068385
    Abstract: Techniques are provided for generation of synthetic 3-dimensional object image variations for training of recognition systems. An example system may include an image synthesizing circuit configured to synthesize a 3D image of the object (including color and depth image pairs) based on a 3D model. The system may also include a background scene generator circuit configured to generate a background for each of the rendered image variations. The system may further include an image pose adjustment circuit configured to adjust the orientation and translation of the object for each of the variations. The system may further include an illumination and visual effect adjustment circuit configured to adjust illumination of the object and the background for each of the variations, and to further adjust visual effects of the object and the background for each of the variations based on application of simulated camera parameters.
    Type: Grant
    Filed: December 15, 2015
    Date of Patent: September 4, 2018
    Assignee: Intel Corporation
    Inventors: Amit Bleiweiss, Chen Paz, Ofir Levy, Itamar Ben-Ari, Yaron Yanai
  • Publication number: 20170169620
    Abstract: Techniques are provided for generation of synthetic 3-dimensional object image variations for training of recognition systems. An example system may include an image synthesizing circuit configured to synthesize a 3D image of the object (including color and depth image pairs) based on a 3D model. The system may also include a background scene generator circuit configured to generate a background for each of the rendered image variations. The system may further include an image pose adjustment circuit configured to adjust the orientation and translation of the object for each of the variations. The system may further include an illumination and visual effect adjustment circuit configured to adjust illumination of the object and the background for each of the variations, and to further adjust visual effects of the object and the background for each of the variations based on application of simulated camera parameters.
    Type: Application
    Filed: December 15, 2015
    Publication date: June 15, 2017
    Applicant: Intel Corporation
    Inventors: Amit Bleiweiss, Chen Paz, Ofir Levy, Itamar Ben-Ari, Yaron Yanai
  • Publication number: 20170154432
    Abstract: In accordance with some embodiments, connected-component labeling is performed in both the screen dimensions (which may be referred to as the x and y dimensions) and a depth dimension to label objects in a depth image. The contours of the labeled blobs may then be used to identify an object in the depth image. Using contours may be advantageous in some embodiments because it reduces the amount of data that must be handled and the extent of computations, compared to conventional techniques that use bitmap-based operations.
    Type: Application
    Filed: November 30, 2015
    Publication date: June 1, 2017
    Inventors: Ofir Levy, Maoz Madmony, Orly Weisel
  • Publication number: 20170017857
    Abstract: A technique and system for counting the number of repetitions of approximately the same action in an input video sequence using 3D convolutional neural networks is disclosed. The proposed system runs online rather than on the complete video. It sequentially analyzes blocks of 20 non-consecutive frames. The cycle length within each block is evaluated using a deep network architecture and the information is then integrated over time. A unique property of the disclosed method is that it is shown to train successfully on entirely synthetic data, created by synthesizing moving random patches. It therefore effectively exploits the high generalization capability of deep neural networks. Coupled with a region-of-interest detection mechanism and a suitable mechanism to identify the time scale of the video, the system is robust enough to handle real-world videos collected from YouTube and elsewhere, as well as non-video signals such as sensor data revealing repetitious physical movement.
    Type: Application
    Filed: March 6, 2015
    Publication date: January 19, 2017
    Inventors: Lior Wolf, Ofir Levy
  • Publication number: 20150095776
    Abstract: A device may comprise a display interface and a processor coupled to the display interface. The processor may be configured to couple to a remote network-connected device over a computer network, generate a graphic representation of the network-connected device, and send the generated graphic representation to the display interface. A status of the network-connected device may then be received over the computer network and, responsive to receiving that status, cause the graphic representation of the network-connected device to change appearance depending upon the received state of the network-connected device.
    Type: Application
    Filed: December 6, 2013
    Publication date: April 2, 2015
    Applicant: Western Digital Technologies, Inc.
    Inventors: Michael F. Egan, Ofir Levy
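
Illustrative code sketches

The repetition-counting filings above (patent numbers 11727725, 10922577, and 10460194, and their related publications) describe sub-sampling a video into blocks of 20 non-consecutive frames, estimating the cycle length inside each block with a 3D convolutional network, and integrating the per-block estimates over time. The sketch below is a minimal, hypothetical illustration of that idea in PyTorch: the layer sizes, the discrete set of cycle-length classes, and the simple summation used for temporal integration are assumptions made for illustration, not the patented architecture.

```python
import torch
import torch.nn as nn

class CycleLengthNet(nn.Module):
    """Toy 3D CNN that classifies the cycle length within a block of frames.

    Assumptions: 20 grayscale frames of 50x50 pixels per block, and cycle
    lengths treated as a small set of discrete classes (2..10 frames).
    """

    def __init__(self, num_classes: int = 9):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv3d(1, 16, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.MaxPool3d(2),
            nn.Conv3d(16, 32, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.MaxPool3d(2),
        )
        # After two 2x pooling steps: 32 channels x 5 frames x 12 x 12 pixels.
        self.classifier = nn.Linear(32 * 5 * 12 * 12, num_classes)

    def forward(self, block: torch.Tensor) -> torch.Tensor:
        # block shape: (batch, 1, 20, 50, 50)
        x = self.features(block)
        return self.classifier(x.flatten(1))

def count_repetitions(frames: torch.Tensor, net: CycleLengthNet,
                      block_len: int = 20, stride: int = 2) -> float:
    """Sub-sample the video into non-consecutive frames, estimate a cycle
    length per block, and integrate repetition counts over time.

    frames: (T, 50, 50) float tensor of grayscale frames.
    """
    sampled = frames[::stride]                       # non-consecutive frames
    total = 0.0
    for start in range(0, sampled.shape[0] - block_len + 1, block_len):
        block = sampled[start:start + block_len]
        block = block.unsqueeze(0).unsqueeze(0)      # (1, 1, 20, 50, 50)
        with torch.no_grad():
            cycle_class = net(block).argmax(dim=1).item()
        cycle_len = cycle_class + 2                  # classes map to lengths 2..10
        total += block_len / cycle_len               # repetitions in this block
    return total
```

The abstracts also note training on entirely synthetic data made of moving random patches; in a sketch like this, that would amount to generating random-patch sequences with known cycle lengths and fitting CycleLengthNet to them with an ordinary cross-entropy loss.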
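The synthetic-training-data filings (patent numbers 11574453, 10769862, and 10068385, and their related publications) describe a pipeline in which a 3D model is rendered into color and depth image pairs, composited over a generated background, and varied in pose, illumination, and simulated camera effects. The sketch below is a schematic, hypothetical version of such a pipeline; render_model is a deliberately trivial stand-in for a real renderer, and the parameter ranges, compositing mask, and noise model are illustrative assumptions rather than the claimed implementation.

```python
import numpy as np
from dataclasses import dataclass

rng = np.random.default_rng(0)

@dataclass
class Pose:
    yaw: float
    pitch: float
    roll: float
    translation: tuple  # (x, y, z) offset of the object

def render_model(model_id: str, pose: Pose, size=(128, 128)):
    """Stand-in for a real renderer: returns a (color, depth) image pair.

    A production pipeline would rasterize the 3D model here; this placeholder
    just produces arrays of the right shape so the pipeline is runnable.
    """
    color = rng.random((*size, 3))
    depth = rng.random(size)
    return color, depth

def random_background(size=(128, 128)):
    # Background scene generation step: simple noise here; a real system
    # could composite photographs or rendered scenes instead.
    return rng.random((*size, 3))

def adjust_illumination(color, gain, bias):
    # Illumination adjustment modeled as a simple gain/bias (an assumption).
    return np.clip(color * gain + bias, 0.0, 1.0)

def apply_camera_effects(color, noise_sigma):
    # Simulated camera parameters modeled as additive Gaussian sensor noise.
    return np.clip(color + rng.normal(0.0, noise_sigma, color.shape), 0.0, 1.0)

def generate_variations(model_id: str, n: int):
    """Produce n (color, depth) training variations of one 3D model."""
    samples = []
    for _ in range(n):
        pose = Pose(yaw=rng.uniform(-180, 180),
                    pitch=rng.uniform(-30, 30),
                    roll=rng.uniform(-15, 15),
                    translation=tuple(rng.uniform(-0.1, 0.1, 3)))
        color, depth = render_model(model_id, pose)
        background = random_background()
        # Composite the rendered object over the background; pixels with
        # near-zero depth are treated as empty in this toy version.
        mask = (depth > 0.05)[..., None]
        color = np.where(mask, color, background)
        color = adjust_illumination(color, gain=rng.uniform(0.7, 1.3),
                                    bias=rng.uniform(-0.1, 0.1))
        color = apply_camera_effects(color, noise_sigma=0.02)
        samples.append((color, depth))
    return samples
```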
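Patent number 10248839 and publication 20170154432 describe connected-component labeling that treats depth as an additional connectivity criterion and then works with the contours of the labeled blobs rather than full bitmaps. The sketch below is a minimal interpretation under stated assumptions: two pixels are merged only when they are 4-connected in the image plane and their depth values differ by less than a threshold; the threshold value and the breadth-first flood-fill formulation are illustrative choices, not the claimed method.

```python
import numpy as np
from collections import deque

def label_depth_components(depth: np.ndarray, depth_tol: float = 0.05) -> np.ndarray:
    """Label connected components in a depth image.

    Pixels are joined when they are 4-connected in (x, y) AND their depth
    values differ by less than depth_tol, so objects at different depths
    receive different labels even if they overlap on screen.
    Returns an int32 label image with labels starting at 1.
    """
    h, w = depth.shape
    labels = np.zeros((h, w), dtype=np.int32)
    next_label = 0
    for sy in range(h):
        for sx in range(w):
            if labels[sy, sx] != 0:
                continue
            next_label += 1
            labels[sy, sx] = next_label
            queue = deque([(sy, sx)])
            while queue:                              # breadth-first flood fill
                y, x = queue.popleft()
                for ny, nx in ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1)):
                    if 0 <= ny < h and 0 <= nx < w and labels[ny, nx] == 0 \
                            and abs(depth[ny, nx] - depth[y, x]) < depth_tol:
                        labels[ny, nx] = next_label
                        queue.append((ny, nx))
    return labels

def blob_contour(labels: np.ndarray, label: int):
    """Return the contour of one labeled blob as a list of (y, x) coordinates:
    the blob pixels that touch a pixel outside the blob (or the image border)."""
    mask = labels == label
    h, w = labels.shape
    contour = []
    for y, x in zip(*np.nonzero(mask)):
        for ny, nx in ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1)):
            if ny < 0 or ny >= h or nx < 0 or nx >= w or not mask[ny, nx]:
                contour.append((y, x))
                break
    return contour
```

Working with the contour lists rather than full label bitmaps keeps the downstream object-identification step proportional to blob perimeter rather than blob area, which is the data-reduction benefit the abstract points to.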
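Publication 20150095776 describes a processor that couples to a remote network-connected device, sends a graphic representation of it to a display interface, and changes that representation's appearance whenever a new status arrives over the network. The sketch below is a hypothetical, simplified rendition of that flow; the DisplayInterface and NetworkDeviceMonitor classes, the status values, and the state-to-appearance mapping are all assumptions made for illustration.

```python
from dataclasses import dataclass

@dataclass
class GraphicRepresentation:
    device_name: str
    appearance: str = "neutral"   # changes with the reported device state

class DisplayInterface:
    """Minimal stand-in for the display interface the processor drives."""
    def show(self, graphic: GraphicRepresentation) -> None:
        print(f"[display] {graphic.device_name}: {graphic.appearance}")

class NetworkDeviceMonitor:
    """Couples to a remote device and mirrors its status on the display."""

    # Assumed mapping from received device states to visual appearances.
    APPEARANCE_BY_STATE = {"online": "green", "busy": "amber", "offline": "grey"}

    def __init__(self, device_name: str, display: DisplayInterface):
        self.display = display
        self.graphic = GraphicRepresentation(device_name)
        self.display.show(self.graphic)            # initial representation

    def on_status_received(self, state: str) -> None:
        # Responsive update: change the graphic's appearance per the state.
        self.graphic.appearance = self.APPEARANCE_BY_STATE.get(state, "unknown")
        self.display.show(self.graphic)

# Example usage with simulated status messages arriving over the network.
monitor = NetworkDeviceMonitor("nas-01", DisplayInterface())
for status in ("online", "busy", "offline"):
    monitor.on_status_received(status)
```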