Patents by Inventor Barry Saylor

Barry Saylor has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Patent number: 11860602
    Abstract: A system is provided for programming workpiece feature inspection operations for a coordinate measuring machine. The system includes a user interface with a simulation portion (e.g., including a 3D view of a workpiece) and an editing user interface portion (e.g., including an editable plan representation of the current workpiece feature inspection plan). Transparency operations are performed, including automatically identifying as a target feature the workpiece feature in the 3D view that corresponds to the workpiece feature or inspection operation representation in the editable plan representation indicated by a current feature-directed operation (e.g., a selection operation for selecting a workpiece feature or inspection operation in the editable plan representation). An occluding workpiece feature that would otherwise occlude at least a portion of the target feature in the 3D view is then automatically rendered as at least partially transparent in the 3D view.
    Type: Grant
    Filed: December 10, 2018
    Date of Patent: January 2, 2024
    Assignee: Mitutoyo Corporation
    Inventors: Dahai Yu, Barry Saylor, Kai Kircher
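    The occlusion-transparency behavior described in the abstract above can be illustrated with a minimal sketch. The data model here (a `Feature` with a camera-axis depth and an alpha value) is hypothetical; the patent does not specify an implementation, and a real system would use a proper 3D occlusion test rather than a depth comparison.

    ```python
    from dataclasses import dataclass

    @dataclass
    class Feature:
        name: str
        depth: float        # distance from the camera along the view axis
        alpha: float = 1.0  # 1.0 = opaque; lower values = more transparent

    def apply_transparency(features, target, occlusion_alpha=0.3):
        """Render any feature in front of the target as partially transparent.

        Simplified stand-in for the patent's occlusion test: a feature
        "occludes" the target here if it lies closer to the camera.
        """
        for f in features:
            if f is not target and f.depth < target.depth:
                f.alpha = occlusion_alpha
        return features

    features = [Feature("housing", 1.0), Feature("bore", 5.0), Feature("flange", 3.0)]
    target = features[1]               # user selects "bore" in the plan editor
    apply_transparency(features, target)
    print([(f.name, f.alpha) for f in features])
    ```

    Selecting the interior "bore" feature leaves it opaque while the features in front of it are faded, matching the behavior the abstract describes.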
  • Publication number: 20210325844
    Abstract: A system is provided for programming workpiece feature inspection operations for a coordinate measuring machine. The system includes a user interface with a simulation portion (e.g., including a 3D view of a workpiece) and an editing user interface portion (e.g., including an editable plan representation of the current workpiece feature inspection plan). Transparency operations are performed, including automatically identifying as a target feature the workpiece feature in the 3D view that corresponds to the workpiece feature or inspection operation representation in the editable plan representation indicated by a current feature-directed operation (e.g., a selection operation for selecting a workpiece feature or inspection operation in the editable plan representation). An occluding workpiece feature that would otherwise occlude at least a portion of the target feature in the 3D view is then automatically rendered as at least partially transparent in the 3D view.
    Type: Application
    Filed: December 10, 2018
    Publication date: October 21, 2021
    Inventors: Dahai Yu, Barry Saylor, Kai Kircher
  • Publication number: 20190220185
    Abstract: The present invention relates to an image measurement apparatus that images an object to be measured and measures the dimensions and shape of the object based on an image of the object displayed on a touch-sensitive panel display. The apparatus comprises a controller that identifies a command corresponding to a gesture input by contact with the touch-sensitive panel display, based on a signal output from the display in response to the gesture, and executes the command with respect to the part of the image measurement apparatus that is the target of the command. The gesture is performed in a state in which two or more points are touched simultaneously.
    Type: Application
    Filed: January 9, 2019
    Publication date: July 18, 2019
    Inventors: Gyokubu Cho, Koichi Komatsu, Yasuhiro Takahama, Hiraku Ishiyama, Barry Saylor, Dahai Yu, Ryan Northrup
  • Patent number: 9235337
    Abstract: A system, a GUI and a method are provided that allow intuitive, flexible and robust editing of individually editable instructions generated by conversion operations applied to a defined step-and-repeat programming element in a machine vision inspection system. The method includes: a) converting a defined step-and-repeat programming element to generate N independently-editable code blocks, each including an independently-editable substantial replication of the repeated operation block of the step-and-repeat element and each executed at a respective location corresponding to a step-and-repeat location; b) representing the code blocks in an editing GUI; and c) eliminating the defined step-and-repeat programming element as an editable element. The defined step-and-repeat programming element may be represented in the editing GUI. The conversion operations may be initiated by a user activating a step-and-repeat conversion command directed to the defined step-and-repeat programming element.
    Type: Grant
    Filed: May 9, 2013
    Date of Patent: January 12, 2016
    Assignee: Mitutoyo Corporation
    Inventors: Ryan Northrup, Barry Saylor
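    The conversion described above is essentially an unrolling: the repeated operation block is replicated once per step-and-repeat location, and each replica becomes independently editable. A minimal sketch (hypothetical representation in which a "block" is a list of operation dicts with x/y coordinates):

    ```python
    import copy

    def convert_step_and_repeat(repeated_block, locations):
        """Unroll a step-and-repeat element into N independently editable
        code blocks, one per step-and-repeat location."""
        blocks = []
        for (dx, dy) in locations:
            block = copy.deepcopy(repeated_block)   # independent replica
            for op in block:
                op["x"] += dx                       # place the replica at its
                op["y"] += dy                       # step-and-repeat location
            blocks.append(block)
        # The original step-and-repeat element would then be eliminated
        # as an editable element; only the N blocks remain.
        return blocks

    template = [{"op": "measure_circle", "x": 0.0, "y": 0.0}]
    grid = [(0, 0), (10, 0), (0, 10)]               # 3 step-and-repeat locations
    blocks = convert_step_and_repeat(template, grid)
    print(len(blocks), blocks[1][0]["x"])
    ```

    `deepcopy` matters here: each block must be editable without affecting its siblings, which is the point of the conversion.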
  • Patent number: 9013574
    Abstract: A machine vision system program editing environment including synchronized selection and/or identification of related features in a plurality of different user interface windows is provided. In particular, one of the windows is an editing window where a part program representation is displayed for editing by a user. In one embodiment, a user may select data or another feature of interest in a window that is not the editing window (e.g., a results window, or graphical workpiece inspection feature display window) and the associated part program instruction representation is automatically highlighted and/or selected in the editing window. Conversely, a part program instruction representation may be selected by a user in the editing window and the associated results or feature in another window is automatically highlighted and/or selected. User interface navigation, rapid program quality assessment, and overall part program creation and editing efficiency are significantly enhanced in such an editing environment.
    Type: Grant
    Filed: November 13, 2012
    Date of Patent: April 21, 2015
    Assignee: Mitutoyo Corporation
    Inventors: Barry Saylor, Ryan Northrup, Akira Takada, Kozo Ariga
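    The synchronized-selection idea above amounts to keying every window's items to a shared instruction identity, so that selecting in any window updates all of them. A minimal sketch (the class and window names are hypothetical; the patent describes the behavior, not an API):

    ```python
    class SyncedSelection:
        """Bidirectional selection sync between an editing window and other
        windows (results, feature display), keyed by a shared instruction id."""

        def __init__(self):
            self.windows = {}          # window name -> currently selected id

        def register(self, name):
            self.windows[name] = None

        def select(self, window, instruction_id):
            # Selecting in any one window highlights the same instruction
            # in every registered window, in either direction.
            for name in self.windows:
                self.windows[name] = instruction_id

    ui = SyncedSelection()
    ui.register("editor")
    ui.register("results")
    ui.select("results", "circle_1")   # user clicks a row in the results window
    print(ui.windows["editor"])        # the editor selection follows
    ```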
  • Patent number: 8957960
    Abstract: A machine vision system program editing environment includes near real time context generation. Rather than requiring execution of all preceding instructions of a part program in order to generate a realistic context for subsequent edits, surrogate data operations using previously saved data replace execution of certain sets of instructions. The surrogate data may be saved during the actual execution of operations that are recorded in a part program. An edit mode of execution substitutes that data as a surrogate for executing the operations that would otherwise generate that data. Significant time savings may be achieved for context generation, such that editing occurs within an operating context which may be repeatedly refreshed for accuracy in near real time. This supports convenient program modification by relatively unskilled users, using the native user interface of the machine vision system, rather than difficult-to-use text-based or graphical object-based editing environments.
    Type: Grant
    Filed: November 15, 2011
    Date of Patent: February 17, 2015
    Assignee: Mitutoyo Corporation
    Inventors: Barry Saylor, Dahai Yu, Ryan Northrup, Gyokubu Cho, Akira Takada
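    The surrogate-data mechanism above is, at its core, result caching keyed by program step: results recorded during a real run are replayed in edit mode instead of re-executing slow hardware operations. A minimal sketch, with a hypothetical API illustrating only the caching idea:

    ```python
    class PartProgramRunner:
        """Edit-mode execution with surrogate data (sketch)."""

        def __init__(self):
            self.surrogate = {}        # step id -> result saved during a real run
            self.executions = 0        # counts "real" (slow) executions

        def run_step(self, step_id, op, edit_mode=False):
            if edit_mode and step_id in self.surrogate:
                return self.surrogate[step_id]   # near-instant context generation
            self.executions += 1
            result = op()                        # real execution (slow on hardware)
            self.surrogate[step_id] = result     # save as surrogate for later edits
            return result

    runner = PartProgramRunner()
    runner.run_step("autofocus_1", lambda: 42)                   # recorded run
    ctx = runner.run_step("autofocus_1", lambda: 42, edit_mode=True)
    print(ctx, runner.executions)
    ```

    The second call returns the saved result without re-running the operation, which is where the "significant time savings" in context generation come from.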
  • Patent number: 8902307
    Abstract: A machine vision system editing environment is provided for a part program in which a continuous stream of image acquisition operations is performed during a run mode. In one embodiment, a new common syntax and representations are utilized wherein continuous image acquisition operations are recorded in the same way as regular operations, with the part program being run in two stages. In the first stage, the portion of the part program that is to produce the continuous stream of image acquisitions is scanned for image acquisition operations, and the most efficient order for acquiring the images is determined, after which the image acquisition process begins. In the second stage, while the image acquisition process is being performed, the same portion of the part program is scanned again and the image analysis operations are performed.
    Type: Grant
    Filed: November 15, 2011
    Date of Patent: December 2, 2014
    Assignee: Mitutoyo Corporation
    Inventors: Mark Delaney, Barry Saylor
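    The two-stage scheme above can be sketched briefly. The program representation is hypothetical, and sorting by stage position stands in for the patent's "most efficient order"; in this sketch acquisition completes before analysis, whereas the patent runs the two stages concurrently.

    ```python
    def run_two_stage(program):
        """Stage 1: scan for acquisition ops and order them efficiently.
        Stage 2: re-scan the program and run the analysis operations."""
        # Stage 1: collect acquisition operations, order by stage position
        # (a stand-in for minimizing stage travel), then acquire.
        acquisitions = sorted(
            (op for op in program if op["type"] == "acquire"),
            key=lambda op: op["position"],
        )
        images = {op["id"]: f"image@{op['position']}" for op in acquisitions}
        # Stage 2: scan the same program again, running analysis ops
        # against the acquired images.
        results = [f"analyzed {images[op['image']]}"
                   for op in program if op["type"] == "analyze"]
        return [op["id"] for op in acquisitions], results

    program = [
        {"type": "acquire", "id": "a1", "position": 30},
        {"type": "acquire", "id": "a2", "position": 10},
        {"type": "analyze", "image": "a1"},
    ]
    order, results = run_two_stage(program)
    print(order, results)
    ```

    Note that the acquisition order (`a2` before `a1`) differs from program order: that decoupling of acquisition sequencing from program sequencing is what the first-stage scan buys.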
  • Publication number: 20140337780
    Abstract: A system, a GUI and a method are provided that allow intuitive, flexible and robust editing of individually editable instructions generated by conversion operations applied to a defined step-and-repeat programming element in a machine vision inspection system. The method includes: a) converting a defined step-and-repeat programming element to generate N independently-editable code blocks, each including an independently-editable substantial replication of the repeated operation block of the step-and-repeat element and each executed at a respective location corresponding to a step-and-repeat location; b) representing the code blocks in an editing GUI; and c) eliminating the defined step-and-repeat programming element as an editable element. The defined step-and-repeat programming element may be represented in the editing GUI. The conversion operations may be initiated by a user activating a step-and-repeat conversion command directed to the defined step-and-repeat programming element.
    Type: Application
    Filed: May 9, 2013
    Publication date: November 13, 2014
    Inventors: Ryan Northrup, Barry Saylor
  • Publication number: 20130120553
    Abstract: A machine vision system editing environment is provided for a part program in which a continuous stream of image acquisition operations is performed during a run mode. In one embodiment, a new common syntax and representations are utilized wherein continuous image acquisition operations are recorded in the same way as regular operations, with the part program being run in two stages. In the first stage, the portion of the part program that is to produce the continuous stream of image acquisitions is scanned for image acquisition operations, and the most efficient order for acquiring the images is determined, after which the image acquisition process begins. In the second stage, while the image acquisition process is being performed, the same portion of the part program is scanned again and the image analysis operations are performed.
    Type: Application
    Filed: November 15, 2011
    Publication date: May 16, 2013
    Applicant: MITUTOYO CORPORATION
    Inventors: Mark Delaney, Barry Saylor
  • Publication number: 20130123945
    Abstract: A machine vision system program editing environment includes near real time context generation. Rather than requiring execution of all preceding instructions of a part program in order to generate a realistic context for subsequent edits, surrogate data operations using previously saved data replace execution of certain sets of instructions. The surrogate data may be saved during the actual execution of operations that are recorded in a part program. An edit mode of execution substitutes that data as a surrogate for executing the operations that would otherwise generate that data. Significant time savings may be achieved for context generation, such that editing occurs within an operating context which may be repeatedly refreshed for accuracy in near real time. This supports convenient program modification by relatively unskilled users, using the native user interface of the machine vision system, rather than difficult-to-use text-based or graphical object-based editing environments.
    Type: Application
    Filed: November 15, 2011
    Publication date: May 16, 2013
    Applicant: MITUTOYO CORPORATION
    Inventors: Barry Saylor, Dahai Yu, Ryan Northrup, Gyokubu Cho, Akira Takada
  • Publication number: 20060171580
    Abstract: A system and method for tool enhancements are provided which allow users to utilize video tools in a controlled manner. The video tools balance a minimal amount of cursor positioning and “mouse clicks” against a level of video tool “customization” control desired by a user when applying the video tools. Tool construction methods using multiple mouse clicks are provided as an alternative to using drag-and-draw and one-click tools. Multi-click-plus tools give more specific information and provide a precise way to rapidly create customized tools.
    Type: Application
    Filed: July 20, 2005
    Publication date: August 3, 2006
    Inventors: Charles Blanford, Dahai Yu, Barry Saylor
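    The multi-click tool construction described above can be sketched as a small state machine: each click supplies one defining point, and the tool is created once enough points have been collected. The three-points-for-a-circle rule and the class shape are assumptions for illustration; the patent does not prescribe them.

    ```python
    class MultiClickTool:
        """Multi-click tool construction (sketch): each mouse click adds one
        defining point; the tool is created when enough points are collected
        (here, hypothetically, three clicks define a circle tool)."""

        def __init__(self, points_needed=3):
            self.points_needed = points_needed
            self.points = []

        def click(self, x, y):
            self.points.append((x, y))
            if len(self.points) == self.points_needed:
                return {"tool": "circle", "points": list(self.points)}
            return None     # still collecting clicks

    tool = MultiClickTool()
    tool.click(0, 0)                    # first two clicks: no tool yet
    tool.click(4, 0)
    result = tool.click(2, 2)           # third click completes the tool
    print(result["tool"], len(result["points"]))
    ```

    Compared with drag-and-draw or one-click tools, each click here carries specific positional information, which is what lets the tool be customized precisely as it is created.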