Patents by Inventor Elad Arbel

Elad Arbel has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Publication number: 20240029409
    Abstract: There is provided a method for training a ground truth generator machine learning model, comprising: creating a ground truth multi-record training dataset wherein a record comprises: a first image of a sample of tissue of a subject depicting a first group of biological objects, a second image of the sample depicting a second group of biological objects presenting at least one biomarker, and ground truth labels indicating a respective biological object category of a plurality of biological object categories for biological object members of the first group and the second group; and training the ground truth generator machine learning model on the ground truth multi-record training dataset for automatically generating ground truth labels selected from the plurality of biological object categories for biological objects depicted in an input set of images of a first type corresponding to the first image and a second type corresponding to the second image.
    Type: Application
    Filed: August 18, 2021
    Publication date: January 25, 2024
    Applicant: NEWTRON GROUP SA
    Inventors: Frederik AIDT, Jesper LOHSE, Elad ARBEL, Itay REMER, Amir BEN-DOR, Oded BEN-DAVID
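    The record structure this abstract describes (paired image types plus per-object ground truth labels) can be sketched as follows. This is a hypothetical toy: the field names, the frequency-counting "model," and all values are illustrative assumptions, not the patented method.

```python
# Hypothetical sketch of the multi-record training setup in the abstract.
from dataclasses import dataclass
from typing import List

@dataclass
class GroundTruthRecord:
    first_image: list    # image depicting the first group of biological objects
    second_image: list   # image of the same sample showing biomarker expression
    labels: List[int]    # per-object category index (ground truth)

def train_ground_truth_generator(records: List[GroundTruthRecord]):
    """Toy 'training': count label frequencies so the generator proposes
    the most common category per object. Stands in for a real ML model
    fit on the paired image types."""
    counts = {}
    for rec in records:
        for lab in rec.labels:
            counts[lab] = counts.get(lab, 0) + 1
    most_common = max(counts, key=counts.get)
    return lambda n_objects: [most_common] * n_objects

records = [
    GroundTruthRecord([[0]], [[1]], [0, 1, 0]),
    GroundTruthRecord([[0]], [[1]], [0, 0, 2]),
]
generator = train_ground_truth_generator(records)
print(generator(4))  # [0, 0, 0, 0]
```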
  • Patent number: 11847751
    Abstract: Novel tools and techniques are provided for implementing augmented reality (AR)-based assistance within a work environment. In various embodiments, a computing system might receive, from a camera having a field of view of a work environment, first images of at least part of the work environment, the first images overlapping with a field of view of a user wearing an AR headset; might analyze the received first images to identify objects; might query a database(s) to determine a task associated with a first object(s) among the identified objects; might generate an image overlay providing at least one of graphical icon-based, text-based, image-based, and/or highlighting-based instruction(s) each indicative of instructions presented to the user to implement the task associated with the first object(s); and might display, to the user's eyes through the AR headset, the generated first image overlay that overlaps with the field of view of the user's eyes.
    Type: Grant
    Filed: July 18, 2022
    Date of Patent: December 19, 2023
    Assignee: AGILENT TECHNOLOGIES, INC.
    Inventors: Amir Ben-Dor, Elad Arbel, Richard Workman, Victor Lim
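    The AR-assistance flow in this abstract (identify objects, query a database for an associated task, build an instruction overlay) can be sketched as below. The object labels, task table, and function names are all hypothetical placeholders, not the patented system.

```python
# Illustrative pipeline for the AR-assistance flow described in the abstract.
TASK_DB = {"pipette": "Calibrate before use", "centrifuge": "Balance tubes"}

def identify_objects(frame):
    # Placeholder for an image-analysis detector; here the 'frame'
    # is already a list of recognized object labels.
    return frame

def build_overlay(objects):
    """Return text-based instructions for each recognized object
    that has an associated task in the database."""
    return [{"object": obj, "instruction": TASK_DB[obj]}
            for obj in objects if obj in TASK_DB]

frame = ["pipette", "beaker", "centrifuge"]
overlay = build_overlay(identify_objects(frame))
print(overlay)
```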
  • Patent number: 11748881
    Abstract: Novel tools and techniques are provided for implementing digital microscopy imaging using deep learning-based segmentation and/or implementing instance segmentation based on partial annotations. In various embodiments, a computing system might receive first and second images, the first image comprising a field of view of a biological sample, while the second image comprises labeling of objects of interest in the biological sample. The computing system might encode, using an encoder, the second image to generate third and fourth encoded images (different from each other) that comprise proximity scores or maps. The computing system might train an AI system to predict objects of interest based at least in part on the third and fourth encoded images. The computing system might generate (using regression) and decode (using a decoder) two or more images based on a new image of a biological sample to predict labeling of objects in the new image.
    Type: Grant
    Filed: June 23, 2022
    Date of Patent: September 5, 2023
    Assignee: Agilent Technologies, Inc.
    Inventors: Elad Arbel, Itay Remer, Amir Ben-Dor
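    One way to picture the "two different encoded images that comprise proximity scores or maps" is to encode the same annotations at two different scales. The Gaussian-of-distance scheme below is an assumption chosen for illustration; it is not the patented encoder.

```python
# Hedged sketch: encode annotated centroids into two proximity maps.
import math

def proximity_map(shape, centroids, sigma):
    """Score each pixel by its distance to the nearest annotated centroid."""
    h, w = shape
    out = [[0.0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            d2 = min((y - cy) ** 2 + (x - cx) ** 2 for cy, cx in centroids)
            out[y][x] = math.exp(-d2 / (2 * sigma ** 2))
    return out

centroids = [(1, 1)]
near = proximity_map((3, 3), centroids, sigma=1.0)  # sharp map
far = proximity_map((3, 3), centroids, sigma=3.0)   # smoother map
print(near[1][1], round(near[0][0], 3))  # 1.0 at the centroid, lower elsewhere
```

Training an AI system on both maps, as the abstract describes, would give it complementary sharp and smooth views of where objects lie.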
  • Publication number: 20230266819
    Abstract: There is provided a computer implemented method of automatically creating a training dataset comprising a plurality of records, wherein a record includes: an image of a sample of an object, an indication of monitored manipulations by a user of a presentation of the sample, and a ground truth indication of a monitored gaze of the user viewing the sample on a display or via an optical device mapped to pixels of the image of the sample, wherein the monitored gaze comprises at least one location of the sample the user is viewing and an amount of time spent viewing the at least one location.
    Type: Application
    Filed: July 20, 2021
    Publication date: August 24, 2023
    Applicant: Agilent Technologies, Inc.
    Inventors: Elad ARBEL, Itay REMER, Amir BEN-DOR
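    The training-dataset record this abstract defines (sample image, monitored manipulations, and gaze locations with dwell times) can be sketched as a simple data structure. Field names and values are hypothetical.

```python
# Minimal sketch of the gaze-training record described in the abstract.
from dataclasses import dataclass
from typing import List, Tuple

@dataclass
class GazeRecord:
    sample_image: str                   # path/id of the sample image
    manipulations: List[str]            # monitored user actions (pan, zoom, ...)
    gaze: List[Tuple[int, int, float]]  # (pixel_x, pixel_y, seconds viewed)

record = GazeRecord(
    sample_image="slide_001",
    manipulations=["zoom_in", "pan_left"],
    gaze=[(120, 80, 2.5), (300, 410, 0.7)],
)
total_dwell = sum(t for _, _, t in record.gaze)
print(total_dwell)  # 3.2
```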
  • Publication number: 20220366564
    Abstract: Novel tools and techniques are provided for implementing digital microscopy imaging using deep learning-based segmentation and/or implementing instance segmentation based on partial annotations. In various embodiments, a computing system might receive first and second images, the first image comprising a field of view of a biological sample, while the second image comprises labeling of objects of interest in the biological sample. The computing system might encode, using an encoder, the second image to generate third and fourth encoded images (different from each other) that comprise proximity scores or maps. The computing system might train an AI system to predict objects of interest based at least in part on the third and fourth encoded images. The computing system might generate (using regression) and decode (using a decoder) two or more images based on a new image of a biological sample to predict labeling of objects in the new image.
    Type: Application
    Filed: June 23, 2022
    Publication date: November 17, 2022
    Applicant: Agilent Technologies, Inc.
    Inventors: Elad ARBEL, Itay REMER, Amir BEN-DOR
  • Patent number: 11494988
    Abstract: Novel tools and techniques are provided for implementing augmented reality (AR)-based assistance within a work environment. In various embodiments, a computing system might receive, from a camera having a field of view of a work environment, first images of at least part of the work environment, the first images overlapping with a field of view of a user wearing an AR headset; might analyze the received first images to identify objects; might query a database(s) to determine a task associated with a first object(s) among the identified objects; might generate an image overlay providing at least one of graphical icon-based, text-based, image-based, and/or highlighting-based instruction(s) each indicative of instructions presented to the user to implement the task associated with the first object(s); and might display, to the user's eyes through the AR headset, the generated first image overlay that overlaps with the field of view of the user's eyes.
    Type: Grant
    Filed: May 21, 2019
    Date of Patent: November 8, 2022
    Assignee: AGILENT TECHNOLOGIES, INC.
    Inventors: Amir Ben-Dor, Elad Arbel, Richard Workman, Victor Lim
  • Publication number: 20220351475
    Abstract: Novel tools and techniques are provided for implementing augmented reality (AR)-based assistance within a work environment. In various embodiments, a computing system might receive, from a camera having a field of view of a work environment, first images of at least part of the work environment, the first images overlapping with a field of view of a user wearing an AR headset; might analyze the received first images to identify objects; might query a database(s) to determine a task associated with a first object(s) among the identified objects; might generate an image overlay providing at least one of graphical icon-based, text-based, image-based, and/or highlighting-based instruction(s) each indicative of instructions presented to the user to implement the task associated with the first object(s); and might display, to the user's eyes through the AR headset, the generated first image overlay that overlaps with the field of view of the user's eyes.
    Type: Application
    Filed: July 18, 2022
    Publication date: November 3, 2022
    Applicant: AGILENT TECHNOLOGIES, INC.
    Inventors: Amir Ben-Dor, Elad Arbel, Richard Workman, Victor Lim
  • Patent number: 11410303
    Abstract: Novel tools and techniques are provided for implementing digital microscopy imaging using deep learning-based segmentation and/or implementing instance segmentation based on partial annotations. In various embodiments, a computing system might receive first and second images, the first image comprising a field of view of a biological sample, while the second image comprises labeling of objects of interest in the biological sample. The computing system might encode, using an encoder, the second image to generate third and fourth encoded images (different from each other) that comprise proximity scores or maps. The computing system might train an AI system to predict objects of interest based at least in part on the third and fourth encoded images. The computing system might generate (using regression) and decode (using a decoder) two or more images based on a new image of a biological sample to predict labeling of objects in the new image.
    Type: Grant
    Filed: April 10, 2020
    Date of Patent: August 9, 2022
    Assignee: Agilent Technologies, Inc.
    Inventors: Elad Arbel, Itay Remer, Amir Ben-Dor
  • Patent number: 11145058
    Abstract: Novel tools and techniques are provided for implementing digital microscopy imaging using deep learning-based segmentation via multiple regression layers, implementing instance segmentation based on partial annotations, and/or implementing a user interface configured to facilitate user annotation for instance segmentation. In various embodiments, a computing system might generate a user interface configured to collect training data for predicting instance segmentation within biological samples, and might display, within a display portion of the user interface, a first image comprising a field of view of a biological sample. The computing system might receive, from a user via the user interface, first user input indicating a centroid for each of a first plurality of objects of interest and second user input indicating a border around each of the first plurality of objects of interest.
    Type: Grant
    Filed: April 10, 2020
    Date of Patent: October 12, 2021
    Assignee: Agilent Technologies, Inc.
    Inventors: Elad Arbel, Itay Remer, Amir Ben-Dor
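    The two user inputs this abstract describes, a centroid plus a border per object of interest, suggest a simple annotation shape. The dictionary layout and the bounding-box sanity check below are illustrative assumptions, not the patented interface.

```python
# Hypothetical shape of the annotation data collected by the UI.
annotations = [
    {"centroid": (40, 55), "border": [(30, 45), (50, 45), (50, 65), (30, 65)]},
    {"centroid": (90, 20), "border": [(85, 15), (95, 15), (95, 25), (85, 25)]},
]

def centroid_inside_bbox(ann):
    """Sanity-check that each centroid falls inside its border's bounding box."""
    xs = [p[0] for p in ann["border"]]
    ys = [p[1] for p in ann["border"]]
    cx, cy = ann["centroid"]
    return min(xs) <= cx <= max(xs) and min(ys) <= cy <= max(ys)

print(all(centroid_inside_bbox(a) for a in annotations))  # True
```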
  • Publication number: 20200327671
    Abstract: Novel tools and techniques are provided for implementing digital microscopy imaging using deep learning-based segmentation via multiple regression layers, implementing instance segmentation based on partial annotations, and/or implementing a user interface configured to facilitate user annotation for instance segmentation. In various embodiments, a computing system might generate a user interface configured to collect training data for predicting instance segmentation within biological samples, and might display, within a display portion of the user interface, a first image comprising a field of view of a biological sample. The computing system might receive, from a user via the user interface, first user input indicating a centroid for each of a first plurality of objects of interest and second user input indicating a border around each of the first plurality of objects of interest.
    Type: Application
    Filed: April 10, 2020
    Publication date: October 15, 2020
    Inventors: Elad Arbel, Itay Remer, Amir Ben-Dor
  • Publication number: 20200327667
    Abstract: Novel tools and techniques are provided for implementing digital microscopy imaging using deep learning-based segmentation and/or implementing instance segmentation based on partial annotations. In various embodiments, a computing system might receive first and second images, the first image comprising a field of view of a biological sample, while the second image comprises labeling of objects of interest in the biological sample. The computing system might encode, using an encoder, the second image to generate third and fourth encoded images (different from each other) that comprise proximity scores or maps. The computing system might train an AI system to predict objects of interest based at least in part on the third and fourth encoded images. The computing system might generate (using regression) and decode (using a decoder) two or more images based on a new image of a biological sample to predict labeling of objects in the new image.
    Type: Application
    Filed: April 10, 2020
    Publication date: October 15, 2020
    Inventors: Elad Arbel, Itay Remer, Amir Ben-Dor
  • Publication number: 20190362556
    Abstract: Novel tools and techniques are provided for implementing augmented reality (AR)-based assistance within a work environment. In various embodiments, a computing system might receive, from a camera having a field of view of a work environment, first images of at least part of the work environment, the first images overlapping with a field of view of a user wearing an AR headset; might analyze the received first images to identify objects; might query a database(s) to determine a task associated with a first object(s) among the identified objects; might generate an image overlay providing at least one of graphical icon-based, text-based, image-based, and/or highlighting-based instruction(s) each indicative of instructions presented to the user to implement the task associated with the first object(s); and might display, to the user's eyes through the AR headset, the generated first image overlay that overlaps with the field of view of the user's eyes.
    Type: Application
    Filed: May 21, 2019
    Publication date: November 28, 2019
    Inventors: Amir Ben-Dor, Elad Arbel, Richard Workman, Victor Lim
  • Patent number: 10229495
    Abstract: A method of operating a data processing system to automatically process a color digital image of a specimen that has been stained with a first dye and a second different dye is disclosed. The method includes receiving a color image that includes a plurality of pixels, each pixel being characterized by a pixel vector having components determined by the intensity of light in each of a corresponding number of wavelength bands. A plurality of pixel vectors of the image are transformed to a hue space divided into a plurality of bins, each bin being characterized by a number of pixels that have been transformed into that bin. The data processing system automatically finds first and second color vectors characterizing the first and second dyes, respectively based on the number of pixels that were transformed into each of the bins.
    Type: Grant
    Filed: November 22, 2016
    Date of Patent: March 12, 2019
    Assignee: Agilent Technologies, Inc.
    Inventors: Amir Ben-Dor, Elad Arbel
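    The hue-space binning this abstract describes can be illustrated with a simplified pure-Python version: convert each pixel to hue, histogram the hues into bins, and take the two most populated bins as the two dye color directions. This sketch omits the patent's vector-finding details and is not the patented method.

```python
# Illustrative hue-binning for finding two dominant dye hues.
import colorsys

def find_dye_hues(pixels, n_bins=36):
    """Histogram pixel hues and return the centre hues (in [0, 1))
    of the two most populated bins."""
    bins = [0] * n_bins
    for r, g, b in pixels:
        h, _, _ = colorsys.rgb_to_hsv(r / 255, g / 255, b / 255)
        bins[min(int(h * n_bins), n_bins - 1)] += 1
    ranked = sorted(range(n_bins), key=lambda i: bins[i], reverse=True)
    return [(i + 0.5) / n_bins for i in ranked[:2]]

# Mostly blue (hematoxylin-like) and brown/red (DAB-like) pixels.
pixels = [(40, 40, 200)] * 60 + [(150, 75, 30)] * 40
hue1, hue2 = find_dye_hues(pixels)
print(round(hue1, 3), round(hue2, 3))
```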
  • Publication number: 20180144464
    Abstract: A method of operating a data processing system to automatically process a color digital image of a specimen that has been stained with a first dye and a second different dye is disclosed. The method includes receiving a color image that includes a plurality of pixels, each pixel being characterized by a pixel vector having components determined by the intensity of light in each of a corresponding number of wavelength bands. A plurality of pixel vectors of the image are transformed to a hue space divided into a plurality of bins, each bin being characterized by a number of pixels that have been transformed into that bin. The data processing system automatically finds first and second color vectors characterizing the first and second dyes, respectively based on the number of pixels that were transformed into each of the bins.
    Type: Application
    Filed: November 22, 2016
    Publication date: May 24, 2018
    Applicant: Agilent Technologies, Inc.
    Inventors: Amir Ben-Dor, Elad Arbel