Patents by Inventor Oytun Akman

Oytun Akman has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Publication number: 20230386012
    Abstract: A vision analytics and validation (VAV) system for providing improved inspection of robotic assembly, the VAV system comprising a trained neural network three-way classifier to classify each component as good, bad, or do not know, and an operator station configured to enable an operator to review an output of the trained neural network and to determine whether a board including one or more “bad” or “do not know” classified components passes review and is classified as good, or fails review and is classified as bad. In one embodiment, a retraining trigger utilizes the output of the operator station to retrain the trained neural network, based on the determination received from the operator station.
    Type: Application
    Filed: August 14, 2023
    Publication date: November 30, 2023
    Applicant: Bright Machines, Inc.
    Inventors: Melinda Varga, Konstantinos Boulis, Oytun Akman, Julio Soldevilla Estrada, Brian Philip Mathews
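The three-way classification and operator-review flow described in the abstract above can be sketched in Python. This is a minimal illustration, assuming simple confidence thresholds; the labels, thresholds, and function names are illustrative, not taken from the patent:

```python
# Three-way classification with operator review and a retraining hook.
GOOD, BAD, DONT_KNOW = "good", "bad", "do_not_know"

def classify_component(score_good, score_bad, threshold=0.8):
    """Map classifier scores to a three-way label: confident 'good',
    confident 'bad', or 'do_not_know' when neither score is decisive."""
    if score_good >= threshold:
        return GOOD
    if score_bad >= threshold:
        return BAD
    return DONT_KNOW

def board_disposition(component_labels, operator_review=None):
    """A board passes automatically only if every component is 'good'.
    Otherwise the operator's decision (True = pass) determines the result,
    and that decision is paired with the labels as a retraining example."""
    if all(label == GOOD for label in component_labels):
        return "pass", None
    decision = operator_review(component_labels) if operator_review else False
    retraining_example = (component_labels, decision)  # fed back to training
    return ("pass" if decision else "fail"), retraining_example
```

A board with any non-good component is routed to review, and the returned pair of labels and operator decision is the kind of signal a retraining trigger could consume.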
  • Patent number: 11727553
    Abstract: A vision analytics and validation (VAV) system for providing improved inspection of robotic assembly, the VAV system comprising a trained neural network three-way classifier to classify each component as good, bad, or do not know, and an operator station configured to enable an operator to review an output of the trained neural network and to determine whether a board including one or more “bad” or “do not know” classified components passes review and is classified as good, or fails review and is classified as bad. In one embodiment, a retraining trigger utilizes the output of the operator station to retrain the trained neural network, based on the determination received from the operator station.
    Type: Grant
    Filed: November 12, 2020
    Date of Patent: August 15, 2023
    Assignee: Bright Machines, Inc.
    Inventors: Melinda Varga, Konstantinos Boulis, Oytun Akman, Julio Soldevilla Estrada, Brian Philip Mathews
  • Publication number: 20230015238
    Abstract: A method for vision-based tool localization (VTL) in a robotic assembly system including one or more calibrated cameras, the method comprising capturing a plurality of images of the tool contact area from a plurality of different vantage points, determining an estimated position of the tool contact area based on an image, and refining the estimated position based on another image from another vantage point. The method further comprises providing the refined position to the robotic assembly system to enable accurate control of the tool by the robotic assembly system.
    Type: Application
    Filed: July 12, 2022
    Publication date: January 19, 2023
    Applicant: Bright Machines, Inc.
    Inventors: Barrett Clark, Oytun Akman, Matthew Brown, Ronald Poelman, Brian Philip Mathews, Emmanuel Gallo, Ali Shafiekhani
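The estimate-then-refine loop over multiple vantage points described in the abstract above can be sketched as an incremental update. A real system would triangulate rays through calibrated camera models; here each observation is assumed to already be a noisy 3D position estimate, and refinement is a running mean:

```python
# Refine a 3D position estimate by folding in observations one at a time.
def refine_position(observations):
    """Start from the first observation and nudge the estimate toward
    each subsequent one; after n observations this equals their mean."""
    estimate = None
    for i, obs in enumerate(observations, start=1):
        if estimate is None:
            estimate = list(obs)                       # initial estimate
        else:
            estimate = [e + (o - e) / i for e, o in zip(estimate, obs)]
    return tuple(estimate)
```

Each additional vantage point pulls the estimate toward consensus, mirroring the "determine, then refine from another vantage point" structure of the claimed method.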
  • Publication number: 20220147026
    Abstract: A robotic cell calibration method for a robotic cell system having elements comprising one or more cameras, one or more sensors, components, and a robotic arm. The method comprises localizing positions of the one or more cameras and components relative to a position of the robotic arm using a common coordinate frame, moving the robotic arm in a movement pattern, and using the cameras and sensors to determine the robotic arm position at multiple times during the movement. The method includes identifying, in real time, a discrepancy between the predicted robotic arm position and the determined position, and computing, by an auto-calibrator, a compensation for the identified discrepancy, the auto-calibrator solving for the elements of the robotic cell system as a system. The method includes modifying actions of the robotic arm in real time during the movement based on the compensation.
    Type: Application
    Filed: November 9, 2021
    Publication date: May 12, 2022
    Applicant: Bright Machines, Inc.
    Inventors: Ronald Poelman, Barrett Clark, Oytun Akman, Matthew Brown
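The discrepancy-and-compensation step from the abstract above can be sketched under a deliberately simple assumption: the error is a constant positional bias. A real auto-calibrator solves for all cell elements jointly; this sketch only averages predicted-versus-measured offsets and applies the result to later commands:

```python
# Estimate a constant position offset from a calibration movement, then
# compensate subsequent commands with it. All names are illustrative.
def compute_compensation(predicted, measured):
    """Average the per-sample discrepancy between predicted and measured
    3D positions to estimate a constant bias (x, y, z)."""
    n = len(predicted)
    return tuple(
        sum(m[k] - p[k] for p, m in zip(predicted, measured)) / n
        for k in range(3)
    )

def compensate(command, offset):
    """Apply the computed compensation to a commanded position."""
    return tuple(c + o for c, o in zip(command, offset))
```

In the claimed method this happens in real time during the movement pattern, with the cameras and sensors supplying the measured positions.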
  • Patent number: 11295522
    Abstract: A method, system, and apparatus create a 3D CAD model. Scan data from two or more structured scans of a real-world scene are acquired and each scan processed independently by segmenting the scan data into multiple segments, filtering the scan data, and fitting an initial model that is used as a model candidate. Model candidates are clustered into groups and a refined model is fit onto the model candidates in the same group. A grid of cells representing points is mapped over the refined model. Each of the grid cells is labeled by processing each scan independently, labeling each cell located within the refined model as occupied, utilizing back projection to label remaining cells as occluded or empty. The labels from multiple scans are then combined. Based on the labeling, model details are extracted to further define and complete the refined model.
    Type: Grant
    Filed: November 3, 2020
    Date of Patent: April 5, 2022
    Assignee: AUTODESK, INC.
    Inventors: Oytun Akman, Ronald Poelman, Yan Fu
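The per-scan cell labeling with back projection described in the abstract above can be illustrated in one dimension: cells covered by the fitted model are "occupied", cells the scanner's rays reach before hitting the model are "empty", and cells behind the model are "occluded"; a second scan from the other side can then resolve the occlusions. The geometry and names here are simplified assumptions:

```python
# 1D sketch of occupancy labeling with back projection from a scanner
# assumed to sit at either end of a line of cells.
def label_cells(n_cells, model_cells, scanner_index=0):
    """Label each cell 'occupied', 'empty', or 'occluded' from one scan."""
    labels = []
    blocked = False
    order = range(n_cells) if scanner_index == 0 else range(n_cells - 1, -1, -1)
    for i in order:
        if i in model_cells:
            labels.append((i, "occupied"))
            blocked = True
        elif blocked:
            labels.append((i, "occluded"))   # hidden behind the model
        else:
            labels.append((i, "empty"))      # ray reaches it unobstructed
    return dict(labels)

def combine(labels_a, labels_b):
    """Merge two scans' labels: any confident label beats 'occluded'."""
    return {
        i: labels_a[i] if labels_a[i] != "occluded" else labels_b[i]
        for i in labels_a
    }
```

Combining the labels from multiple scans, as in the abstract, lets a cell occluded in one scan be confidently labeled from another.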
  • Patent number: 11204639
    Abstract: In general, this disclosure describes an artificial reality system that provides asymmetric user experiences to users associated with user devices that operate according to different modes of engagement with the artificial reality system. Different user devices may have different capabilities, be used by users having different roles for an artificial reality application, or otherwise be configured to interact in a variety of ways with an artificial reality system.
    Type: Grant
    Filed: October 12, 2020
    Date of Patent: December 21, 2021
    Assignee: Facebook Technologies, LLC
    Inventors: Oytun Akman, Ioannis Pavlidis, Ananth Ranganathan, Meghana Reddy Guduru, Jeffrey Witthuhn
  • Patent number: 11080286
    Abstract: A method, system, apparatus, article of manufacture, and computer-readable storage medium provide the ability to merge multiple point cloud scans. A first raw scan file and a second raw scan file (each including multiple points) are imported. The scan files are segmented by extracting segments based on geometry in the scene. The segments are filtered. A set of candidate matching feature pairs is acquired by registering features from one scan to features from another scan. The two raw scan files are merged based on the candidate matching feature pairs.
    Type: Grant
    Filed: June 28, 2018
    Date of Patent: August 3, 2021
    Assignee: AUTODESK, INC.
    Inventors: Oytun Akman, Ronald Poelman, Seth Koterba
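The segment-extract-match portion of the merge pipeline described in the abstract above can be sketched with toy features. Here a "feature" is just a segment centroid and matching is nearest-centroid; a real implementation registers rich geometric features and then scores and fuses the candidate pairs:

```python
# Toy feature matching between two segmented scans. A segment is a list
# of (x, y, z) points; its centroid stands in for a geometric feature.
def centroid(points):
    n = len(points)
    return tuple(sum(p[k] for p in points) / n for k in range(3))

def match_features(features_a, features_b):
    """Pair each feature in scan A with its nearest feature in scan B."""
    def dist2(p, q):
        return sum((a - b) ** 2 for a, b in zip(p, q))
    return [
        (fa, min(features_b, key=lambda fb: dist2(fa, fb)))
        for fa in features_a
    ]

def merge_scans(segments_a, segments_b):
    """Produce candidate matching feature pairs between two scans."""
    return match_features(
        [centroid(s) for s in segments_a],
        [centroid(s) for s in segments_b],
    )  # a full pipeline would refine, score, and merge on these pairs
```

The returned candidate pairs correspond to the "candidate matching feature pairs" of the abstract, before the refinement and scoring steps.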
  • Publication number: 20210142456
    Abstract: A vision analytics and validation (VAV) system for providing improved inspection of robotic assembly, the VAV system comprising a trained neural network three-way classifier to classify each component as good, bad, or do not know, and an operator station configured to enable an operator to review an output of the trained neural network and to determine whether a board including one or more “bad” or “do not know” classified components passes review and is classified as good, or fails review and is classified as bad. In one embodiment, a retraining trigger utilizes the output of the operator station to retrain the trained neural network, based on the determination received from the operator station.
    Type: Application
    Filed: November 12, 2020
    Publication date: May 13, 2021
    Applicant: Bright Machines, Inc.
    Inventors: Melinda Varga, Konstantinos Boulis, Oytun Akman, Julio Soldevilla Estrada, Brian Philip Mathews
  • Publication number: 20210134057
    Abstract: A method, system, and apparatus create a 3D CAD model. Scan data from two or more structured scans of a real-world scene are acquired and each scan processed independently by segmenting the scan data into multiple segments, filtering the scan data, and fitting an initial model that is used as a model candidate. Model candidates are clustered into groups and a refined model is fit onto the model candidates in the same group. A grid of cells representing points is mapped over the refined model. Each of the grid cells is labeled by processing each scan independently, labeling each cell located within the refined model as occupied, utilizing back projection to label remaining cells as occluded or empty. The labels from multiple scans are then combined. Based on the labeling, model details are extracted to further define and complete the refined model.
    Type: Application
    Filed: November 3, 2020
    Publication date: May 6, 2021
    Applicant: Autodesk, Inc.
    Inventors: Oytun Akman, Ronald Poelman, Yan Fu
  • Publication number: 20210026443
    Abstract: In general, this disclosure describes an artificial reality system that provides asymmetric user experiences to users associated with user devices that operate according to different modes of engagement with the artificial reality system. Different user devices may have different capabilities, be used by users having different roles for an artificial reality application, or otherwise be configured to interact in a variety of ways with an artificial reality system.
    Type: Application
    Filed: October 12, 2020
    Publication date: January 28, 2021
    Inventors: Oytun Akman, Ioannis Pavlidis, Ananth Ranganathan, Meghana Reddy Guduru, Jeffrey Witthuhn
  • Patent number: 10825243
    Abstract: A method, system, and apparatus create a 3D CAD model. Scan data from two or more structured scans of a real-world scene are acquired and each scan processed independently by segmenting the scan data into multiple segments, filtering the scan data, and fitting an initial model that is used as a model candidate. Model candidates are clustered into groups and a refined model is fit onto the model candidates in the same group. A grid of cells representing points is mapped over the refined model. Each of the grid cells is labeled by processing each scan independently, labeling each cell located within the refined model as occupied, utilizing back projection to label remaining cells as occluded or empty. The labels from multiple scans are then combined. Based on the labeling, model details are extracted to further define and complete the refined model.
    Type: Grant
    Filed: August 15, 2019
    Date of Patent: November 3, 2020
    Assignee: AUTODESK, INC.
    Inventors: Oytun Akman, Ronald Poelman, Yan Fu
  • Patent number: 10802579
    Abstract: In general, this disclosure describes an artificial reality system that provides asymmetric user experiences to users associated with user devices that operate according to different modes of engagement with the artificial reality system. Different user devices may have different capabilities, be used by users having different roles for an artificial reality application, or otherwise be configured to interact in a variety of ways with an artificial reality system.
    Type: Grant
    Filed: February 1, 2019
    Date of Patent: October 13, 2020
    Assignee: Facebook Technologies, LLC
    Inventors: Oytun Akman, Ioannis Pavlidis, Ananth Ranganathan, Meghana Reddy Guduru, Jeffrey Witthuhn
  • Publication number: 20200249749
    Abstract: In general, this disclosure describes an artificial reality system that provides asymmetric user experiences to users associated with user devices that operate according to different modes of engagement with the artificial reality system. Different user devices may have different capabilities, be used by users having different roles for an artificial reality application, or otherwise be configured to interact in a variety of ways with an artificial reality system.
    Type: Application
    Filed: February 1, 2019
    Publication date: August 6, 2020
    Inventors: Oytun Akman, Ioannis Pavlidis, Ananth Ranganathan, Meghana Reddy Guduru, Jeffrey Witthuhn
  • Patent number: 10347034
    Abstract: A method, apparatus, and system provide the ability to process and render a point cloud. The points in the point cloud are grouped into three-dimensional (3D) voxels. A position of each of the points is stored in a point data file; the position is relative to the location of the point's corresponding 3D voxel. Surface normal data for a surface normal associated with each of the points is also stored in the point data file. The points are organized into levels of detail (LODs). The point data file is provided to a graphics processing unit (GPU) that processes the point data file to render the point cloud. During rendering, an LOD is selected to determine the points in the point cloud to render.
    Type: Grant
    Filed: November 11, 2016
    Date of Patent: July 9, 2019
    Assignee: AUTODESK, INC.
    Inventors: David Timothy Rudolf, Ronald Poelman, Oytun Akman
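The storage scheme in the abstract above, voxel-relative positions plus level-of-detail buckets, can be sketched as follows; the voxel size and the every-2^lod-th-point LOD rule are illustrative assumptions, not the patent's encoding:

```python
# Group points into voxels, storing each position relative to its voxel,
# and select points to draw according to a level of detail.
def voxelize(points, voxel_size=1.0):
    """Return {voxel_index: [voxel-relative position, ...]}."""
    voxels = {}
    for p in points:
        idx = tuple(int(c // voxel_size) for c in p)
        rel = tuple(c - i * voxel_size for c, i in zip(p, idx))
        voxels.setdefault(idx, []).append(rel)
    return voxels

def select_lod(points, lod):
    """Keep every 2**lod-th point: LOD 0 is full detail."""
    return points[:: 2 ** lod]
```

Storing positions relative to small voxels keeps per-point coordinates compact, and the LOD selection lets a renderer trade point count for speed, as the abstract describes.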
  • Patent number: 10268917
    Abstract: A method, apparatus, system, and computer readable storage medium provide the ability to pre-segment point cloud data. Point cloud data is obtained and segmented. Based on the segment information, a determination is made regarding points needed for shape extraction. Needed points are fetched and used to extract shapes. The extracted shapes are used to cull points from the point cloud data.
    Type: Grant
    Filed: October 27, 2016
    Date of Patent: April 23, 2019
    Assignee: AUTODESK, INC.
    Inventors: Ronald Poelman, Oytun Akman
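The pre-segment, extract, cull sequence described in the abstract above can be sketched with a toy shape model (a horizontal plane z = const); the tolerance value and the plane fit are illustrative assumptions:

```python
# Fit a simple shape to candidate points, then cull the points that the
# extracted shape explains, leaving only the unexplained remainder.
def extract_plane_z(points):
    """Fit z = const by averaging z over the candidate points."""
    return sum(p[2] for p in points) / len(points)

def cull_points(points, plane_z, tolerance=0.05):
    """Drop points lying on the extracted plane; keep the rest."""
    return [p for p in points if abs(p[2] - plane_z) > tolerance]
```

Culling points already explained by extracted shapes reduces the data that later processing stages must touch, which is the payoff the abstract points to.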
  • Publication number: 20180322124
    Abstract: A method, system, apparatus, article of manufacture, and computer-readable storage medium provide the ability to merge multiple point cloud scans. A first raw scan file and a second raw scan file (each including multiple points) are imported. The scan files are segmented by extracting segments based on geometry in the scene. The segments are filtered. A set of candidate matching feature pairs is acquired by registering features from one scan to features from another scan.
    Type: Application
    Filed: June 28, 2018
    Publication date: November 8, 2018
    Applicant: Autodesk, Inc.
    Inventors: Oytun Akman, Ronald Poelman, Seth Koterba
  • Patent number: 10042899
    Abstract: A method, system, apparatus, article of manufacture, and computer-readable storage medium provide the ability to merge multiple point cloud scans. A first raw scan file and a second raw scan file (each including multiple points) are imported. The scan files are segmented by extracting segments based on geometry in the scene. The segments are filtered to reduce the number of segments and identify features. A set of candidate matching feature pairs is acquired by coarsely registering features from one scan to features from another scan. The candidate pairs are refined by improving alignment based on corresponding points in the features. The candidate pairs are scored and then merged based on the scores.
    Type: Grant
    Filed: June 16, 2017
    Date of Patent: August 7, 2018
    Assignee: Autodesk, Inc.
    Inventors: Oytun Akman, Ronald Poelman, Seth Koterba
  • Publication number: 20180137671
    Abstract: A method, apparatus, and system provide the ability to process and render a point cloud. The points in the point cloud are grouped into three-dimensional (3D) voxels. A position of each of the points is stored in a point data file; the position is relative to the location of the point's corresponding 3D voxel. Surface normal data for a surface normal associated with each of the points is also stored in the point data file. The points are organized into levels of detail (LODs). The point data file is provided to a graphics processing unit (GPU) that processes the point data file to render the point cloud. During rendering, an LOD is selected to determine the points in the point cloud to render.
    Type: Application
    Filed: November 11, 2016
    Publication date: May 17, 2018
    Applicant: Autodesk, Inc.
    Inventors: David Timothy Rudolf, Ronald Poelman, Oytun Akman
  • Publication number: 20170286430
    Abstract: A method, system, apparatus, article of manufacture, and computer-readable storage medium provide the ability to merge multiple point cloud scans. A first raw scan file and a second raw scan file (each including multiple points) are imported. The scan files are segmented by extracting segments based on geometry in the scene. The segments are filtered to reduce the number of segments and identify features. A set of candidate matching feature pairs is acquired by coarsely registering features from one scan to features from another scan. The candidate pairs are refined by improving alignment based on corresponding points in the features. The candidate pairs are scored and then merged based on the scores.
    Type: Application
    Filed: June 16, 2017
    Publication date: October 5, 2017
    Applicant: Autodesk, Inc.
    Inventors: Oytun Akman, Ronald Poelman, Seth Koterba
  • Patent number: 9740711
    Abstract: A method, system, apparatus, article of manufacture, and computer-readable storage medium provide the ability to merge multiple point cloud scans. A first raw scan file and a second raw scan file (each including multiple points) are imported. The scan files are segmented by extracting segments. Features are extracted from the segments. A set of candidate matching feature pairs is acquired by registering (matching) features from one scan to features from another scan. The candidate pairs are refined based on an evaluation of all of the matching pairs. The candidate pairs are further refined by extracting sample points from the segments within the matched pairs and refining the pairs based on those points. The feature pairs are scored and then merged based on the scores.
    Type: Grant
    Filed: December 2, 2014
    Date of Patent: August 22, 2017
    Assignee: Autodesk, Inc.
    Inventors: Oytun Akman, Ronald Poelman, Seth Koterba