Patents by Inventor Elad Arbel
Elad Arbel has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).
-
Publication number: 20240029409

Abstract: There is provided a method for training a ground truth generator machine learning model, comprising: creating a ground truth multi-record training dataset wherein a record comprises: a first image of a sample of tissue of a subject depicting a first group of biological objects, a second image of the sample depicting a second group of biological objects presenting at least one biomarker, and ground truth labels indicating a respective biological object category of a plurality of biological object categories for biological object members of the first group and the second group; and training the ground truth generator machine learning model on the ground truth multi-record training dataset for automatically generating ground truth labels selected from the plurality of biological object categories for biological objects depicted in an input set of images of a first type corresponding to the first image and a second type corresponding to the second image.

Type: Application
Filed: August 18, 2021
Publication date: January 25, 2024
Applicant: NEWTRON GROUP SA
Inventors: Frederik AIDT, Jesper LOHSE, Elad ARBEL, Itay REMER, Amir BEN-DOR, Oded BEN-DAVID
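The multi-record training dataset described above pairs two registered images of the same tissue sample with per-object category labels. A minimal sketch of that record structure is below; the class name, field names, and the label-filtering helper are illustrative assumptions, not taken from the patent.

```python
from dataclasses import dataclass
from typing import List

@dataclass
class GroundTruthRecord:
    """One training record: two images of the same tissue sample
    (first image type and biomarker image type) plus one category
    label per biological object."""
    first_image: list             # pixel data for the first image type
    second_image: list            # pixel data for the biomarker image type
    object_categories: List[int]  # category index per biological object

def make_training_dataset(records):
    """Assemble the multi-record dataset, keeping only records that
    actually carry object labels."""
    return [r for r in records if r.object_categories]
```

A record with no labels contributes nothing to training, so the helper simply drops it; the real system presumably applies far richer validation.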
-
Patent number: 11847751

Abstract: Novel tools and techniques are provided for implementing augmented reality (AR)-based assistance within a work environment. In various embodiments, a computing system might receive, from a camera having a field of view of a work environment, first images of at least part of the work environment, the first images overlapping with a field of view of a user wearing an AR headset; might analyze the received first images to identify objects; might query a database(s) to determine a task associated with a first object(s) among the identified objects; might generate an image overlay providing at least one of graphical icon-based, text-based, image-based, and/or highlighting-based instruction(s) each indicative of instructions presented to the user to implement the task associated with the first object(s); and might display, to the user's eyes through the AR headset, the generated first image overlay that overlaps with the field of view of the user's eyes.

Type: Grant
Filed: July 18, 2022
Date of Patent: December 19, 2023
Assignee: AGILENT TECHNOLOGIES, INC.
Inventors: Amir Ben-Dor, Elad Arbel, Richard Workman, Victor Lim
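The query-and-overlay step in this abstract (identify objects, look up an associated task, emit instructions for the headset) might look roughly like the sketch below; the dictionary-based task database and the instruction-entry format are assumptions for illustration, not the patented implementation.

```python
def build_overlay(identified_objects, task_db):
    """For each object recognized in the camera images, query the task
    database and emit one text-based overlay instruction entry.

    `task_db` maps an object name to its task instruction; objects with
    no associated task produce no overlay entry.
    """
    overlay = []
    for obj in identified_objects:
        instruction = task_db.get(obj)
        if instruction is not None:
            overlay.append({"object": obj, "instruction": instruction})
    return overlay
```

In the patented system the overlay may also carry icon, image, or highlighting instructions; the sketch keeps only the text-based case for brevity.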
-
Patent number: 11748881

Abstract: Novel tools and techniques are provided for implementing digital microscopy imaging using deep learning-based segmentation and/or implementing instance segmentation based on partial annotations. In various embodiments, a computing system might receive first and second images, the first image comprising a field of view of a biological sample, while the second image comprises labeling of objects of interest in the biological sample. The computing system might encode, using an encoder, the second image to generate third and fourth encoded images (different from each other) that comprise proximity scores or maps. The computing system might train an AI system to predict objects of interest based at least in part on the third and fourth encoded images. The computing system might generate (using regression) and decode (using a decoder) two or more images based on a new image of a biological sample to predict labeling of objects in the new image.

Type: Grant
Filed: June 23, 2022
Date of Patent: September 5, 2023
Assignee: Agilent Technologies, Inc.
Inventors: Elad Arbel, Itay Remer, Amir Ben-Dor
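The "proximity scores or maps" the encoder produces from the label image might be approximated by a distance-based encoding like the sketch below: each annotated pixel spreads a score that decays with distance. The linear decay and the `radius` parameter are illustrative assumptions, not the patented encoding.

```python
import numpy as np

def encode_proximity_map(label_mask, radius=3.0):
    """Encode a sparse annotation mask into a dense proximity-score map.

    Each annotated pixel (value > 0) spreads a score that decays
    linearly with Euclidean distance, reaching zero at `radius`;
    overlapping spreads keep the maximum score.
    """
    h, w = label_mask.shape
    prox = np.zeros((h, w), dtype=float)
    yy, xx = np.mgrid[0:h, 0:w]
    for y, x in zip(*np.nonzero(label_mask)):
        d = np.sqrt((yy - y) ** 2 + (xx - x) ** 2)
        prox = np.maximum(prox, np.clip(1.0 - d / radius, 0.0, 1.0))
    return prox
```

Such a map gives the downstream regression network a smooth target around each annotation instead of a single hot pixel, which is one common way partial annotations are made learnable.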
-
Publication number: 20230266819

Abstract: There is provided a computer implemented method of automatically creating a training dataset comprising a plurality of records, wherein a record includes: an image of a sample of an object, an indication of monitored manipulations by a user of a presentation of the sample, and a ground truth indication of a monitored gaze of the user viewing the sample on a display or via an optical device mapped to pixels of the image of the sample, wherein the monitored gaze comprises at least one location of the sample the user is viewing and an amount of time spent viewing the at least one location.

Type: Application
Filed: July 20, 2021
Publication date: August 24, 2023
Applicant: 5301 Stevens Creek Blvd.
Inventors: Elad ARBEL, Itay REMER, Amir BEN-DOR
-
Publication number: 20220366564

Abstract: Novel tools and techniques are provided for implementing digital microscopy imaging using deep learning-based segmentation and/or implementing instance segmentation based on partial annotations. In various embodiments, a computing system might receive first and second images, the first image comprising a field of view of a biological sample, while the second image comprises labeling of objects of interest in the biological sample. The computing system might encode, using an encoder, the second image to generate third and fourth encoded images (different from each other) that comprise proximity scores or maps. The computing system might train an AI system to predict objects of interest based at least in part on the third and fourth encoded images. The computing system might generate (using regression) and decode (using a decoder) two or more images based on a new image of a biological sample to predict labeling of objects in the new image.

Type: Application
Filed: June 23, 2022
Publication date: November 17, 2022
Applicant: Agilent Technologies, Inc.
Inventors: Elad ARBEL, Itay REMER, Amir BEN-DOR
-
Patent number: 11494988

Abstract: Novel tools and techniques are provided for implementing augmented reality (AR)-based assistance within a work environment. In various embodiments, a computing system might receive, from a camera having a field of view of a work environment, first images of at least part of the work environment, the first images overlapping with a field of view of a user wearing an AR headset; might analyze the received first images to identify objects; might query a database(s) to determine a task associated with a first object(s) among the identified objects; might generate an image overlay providing at least one of graphical icon-based, text-based, image-based, and/or highlighting-based instruction(s) each indicative of instructions presented to the user to implement the task associated with the first object(s); and might display, to the user's eyes through the AR headset, the generated first image overlay that overlaps with the field of view of the user's eyes.

Type: Grant
Filed: May 21, 2019
Date of Patent: November 8, 2022
Assignee: AGILENT TECHNOLOGIES, INC.
Inventors: Amir Ben-Dor, Elad Arbel, Richard Workman, Victor Lim
-
Publication number: 20220351475

Abstract: Novel tools and techniques are provided for implementing augmented reality (AR)-based assistance within a work environment. In various embodiments, a computing system might receive, from a camera having a field of view of a work environment, first images of at least part of the work environment, the first images overlapping with a field of view of a user wearing an AR headset; might analyze the received first images to identify objects; might query a database(s) to determine a task associated with a first object(s) among the identified objects; might generate an image overlay providing at least one of graphical icon-based, text-based, image-based, and/or highlighting-based instruction(s) each indicative of instructions presented to the user to implement the task associated with the first object(s); and might display, to the user's eyes through the AR headset, the generated first image overlay that overlaps with the field of view of the user's eyes.

Type: Application
Filed: July 18, 2022
Publication date: November 3, 2022
Applicant: AGILENT TECHNOLOGIES, INC.
Inventors: Amir Ben-Dor, Elad Arbel, Richard Workman, Victor Lim
-
Patent number: 11410303

Abstract: Novel tools and techniques are provided for implementing digital microscopy imaging using deep learning-based segmentation and/or implementing instance segmentation based on partial annotations. In various embodiments, a computing system might receive first and second images, the first image comprising a field of view of a biological sample, while the second image comprises labeling of objects of interest in the biological sample. The computing system might encode, using an encoder, the second image to generate third and fourth encoded images (different from each other) that comprise proximity scores or maps. The computing system might train an AI system to predict objects of interest based at least in part on the third and fourth encoded images. The computing system might generate (using regression) and decode (using a decoder) two or more images based on a new image of a biological sample to predict labeling of objects in the new image.

Type: Grant
Filed: April 10, 2020
Date of Patent: August 9, 2022
Assignee: Agilent Technologies Inc.
Inventors: Elad Arbel, Itay Remer, Amir Ben-Dor
-
Patent number: 11145058

Abstract: Novel tools and techniques are provided for implementing digital microscopy imaging using deep learning-based segmentation via multiple regression layers, implementing instance segmentation based on partial annotations, and/or implementing user interface configured to facilitate user annotation for instance segmentation. In various embodiments, a computing system might generate a user interface configured to collect training data for predicting instance segmentation within biological samples, and might display, within a display portion of the user interface, the first image comprising a field of view of a biological sample. The computing system might receive, from a user via the user interface, first user input indicating a centroid for each of a first plurality of objects of interest and second user input indicating a border around each of the first plurality of objects of interest.

Type: Grant
Filed: April 10, 2020
Date of Patent: October 12, 2021
Assignee: Agilent Technologies, Inc.
Inventors: Elad Arbel, Itay Remer, Amir Ben-Dor
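The two kinds of user input this abstract describes (a centroid per object plus a border around it) might be collected in a structure like the following sketch; the class and field names are hypothetical, chosen only to mirror the abstract's vocabulary.

```python
from dataclasses import dataclass, field
from typing import List, Tuple

@dataclass
class ObjectAnnotation:
    """One annotated object of interest: a centroid click and a border."""
    centroid: Tuple[int, int]      # (row, col) of the user's centroid click
    border: List[Tuple[int, int]]  # polygon vertices traced around the object

@dataclass
class AnnotationSession:
    """Annotations collected for one displayed field-of-view image."""
    image_id: str
    objects: List[ObjectAnnotation] = field(default_factory=list)

    def add_object(self, centroid, border):
        """Record one object from the two user inputs."""
        self.objects.append(
            ObjectAnnotation(tuple(centroid), [tuple(p) for p in border]))
```

Collecting both signals per object is what makes the annotations usable as instance-level (not merely semantic) training data.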
-
Publication number: 20200327671

Abstract: Novel tools and techniques are provided for implementing digital microscopy imaging using deep learning-based segmentation via multiple regression layers, implementing instance segmentation based on partial annotations, and/or implementing user interface configured to facilitate user annotation for instance segmentation. In various embodiments, a computing system might generate a user interface configured to collect training data for predicting instance segmentation within biological samples, and might display, within a display portion of the user interface, the first image comprising a field of view of a biological sample. The computing system might receive, from a user via the user interface, first user input indicating a centroid for each of a first plurality of objects of interest and second user input indicating a border around each of the first plurality of objects of interest.

Type: Application
Filed: April 10, 2020
Publication date: October 15, 2020
Inventors: Elad Arbel, Itay Remer, Amir Ben-Dor
-
Publication number: 20200327667

Abstract: Novel tools and techniques are provided for implementing digital microscopy imaging using deep learning-based segmentation and/or implementing instance segmentation based on partial annotations. In various embodiments, a computing system might receive first and second images, the first image comprising a field of view of a biological sample, while the second image comprises labeling of objects of interest in the biological sample. The computing system might encode, using an encoder, the second image to generate third and fourth encoded images (different from each other) that comprise proximity scores or maps. The computing system might train an AI system to predict objects of interest based at least in part on the third and fourth encoded images. The computing system might generate (using regression) and decode (using a decoder) two or more images based on a new image of a biological sample to predict labeling of objects in the new image.

Type: Application
Filed: April 10, 2020
Publication date: October 15, 2020
Inventors: Elad Arbel, Itay Remer, Amir Ben-Dor
-
Publication number: 20190362556

Abstract: Novel tools and techniques are provided for implementing augmented reality (AR)-based assistance within a work environment. In various embodiments, a computing system might receive, from a camera having a field of view of a work environment, first images of at least part of the work environment, the first images overlapping with a field of view of a user wearing an AR headset; might analyze the received first images to identify objects; might query a database(s) to determine a task associated with a first object(s) among the identified objects; might generate an image overlay providing at least one of graphical icon-based, text-based, image-based, and/or highlighting-based instruction(s) each indicative of instructions presented to the user to implement the task associated with the first object(s); and might display, to the user's eyes through the AR headset, the generated first image overlay that overlaps with the field of view of the user's eyes.

Type: Application
Filed: May 21, 2019
Publication date: November 28, 2019
Inventors: Amir Ben-Dor, Elad Arbel, Richard Workman, Victor Lim
-
Patent number: 10229495

Abstract: A method of operating a data processing system to automatically process a color digital image of a specimen that has been stained with a first dye and a second different dye is disclosed. The method includes receiving a color image that includes a plurality of pixels, each pixel being characterized by a pixel vector having components determined by the intensity of light in each of a corresponding number of wavelength bands. A plurality of pixel vectors of the image are transformed to a hue space divided into a plurality of bins, each bin being characterized by a number of pixels that have been transformed into that bin. The data processing system automatically finds first and second color vectors characterizing the first and second dyes, respectively based on the number of pixels that were transformed into each of the bins.

Type: Grant
Filed: November 22, 2016
Date of Patent: March 12, 2019
Assignee: Agilent Technologies, Inc.
Inventors: Amir Ben-Dor, Elad Arbel
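The hue-space transform and bin-counting step described above might be sketched as follows. The HSV-style hue formula and the 36-bin resolution are assumptions for illustration, and the sketch returns only the two dominant hue angles rather than the full color vectors the patented method recovers.

```python
import numpy as np

def dominant_dye_hues(rgb_pixels, n_bins=36):
    """Transform RGB pixel vectors into hue angles, count them into bins,
    and return the centers of the two most populated bins as candidate
    dye hues in degrees. Gray pixels (no hue) are excluded."""
    rgb = np.asarray(rgb_pixels, dtype=float) / 255.0
    r, g, b = rgb[:, 0], rgb[:, 1], rgb[:, 2]
    mx = rgb.max(axis=1)
    delta = mx - rgb.min(axis=1)
    hue = np.zeros(len(rgb))
    colored = delta > 0
    # Standard HSV hue, branching on which channel is the maximum.
    idx = colored & (mx == r)
    hue[idx] = (60.0 * (g[idx] - b[idx]) / delta[idx]) % 360.0
    idx = colored & (mx == g) & (mx != r)
    hue[idx] = 60.0 * (b[idx] - r[idx]) / delta[idx] + 120.0
    idx = colored & (mx == b) & (mx != r) & (mx != g)
    hue[idx] = 60.0 * (r[idx] - g[idx]) / delta[idx] + 240.0
    counts, edges = np.histogram(hue[colored], bins=n_bins, range=(0.0, 360.0))
    top2 = np.argsort(counts)[-2:]            # indices of the two fullest bins
    centers = (edges[top2] + edges[top2 + 1]) / 2.0
    return sorted(float(c) for c in centers)
```

Because the two dyes dominate a stained specimen, their hues should own the two fullest bins, which is the intuition the abstract's bin-counting step relies on.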
-
Publication number: 20180144464

Abstract: A method of operating a data processing system to automatically process a color digital image of a specimen that has been stained with a first dye and a second different dye is disclosed. The method includes receiving a color image that includes a plurality of pixels, each pixel being characterized by a pixel vector having components determined by the intensity of light in each of a corresponding number of wavelength bands. A plurality of pixel vectors of the image are transformed to a hue space divided into a plurality of bins, each bin being characterized by a number of pixels that have been transformed into that bin. The data processing system automatically finds first and second color vectors characterizing the first and second dyes, respectively based on the number of pixels that were transformed into each of the bins.

Type: Application
Filed: November 22, 2016
Publication date: May 24, 2018
Applicant: Agilent Technologies, Inc.
Inventors: Amir Ben-Dor, Elad Arbel