Patents by Inventor Heather Kerrick

Heather Kerrick has filed for patents to protect the following inventions. This listing includes pending patent applications as well as patents already granted by the United States Patent and Trademark Office (USPTO).

  • Patent number: 10751879
    Abstract: One embodiment of the present invention sets forth a technique for controlling the execution of a physical process. The technique includes receiving, as input to a machine learning model that is configured to adapt a simulation of the physical process executing in a virtual environment to a physical world, simulated output for controlling how the physical process performs a task in the virtual environment and real-world data collected from the physical process performing the task in the physical world. The technique also includes performing, by the machine learning model, one or more operations on the simulated output and the real-world data to generate augmented output. The technique further includes transmitting the augmented output to the physical process to control how the physical process performs the task in the physical world.
    Type: Grant
    Filed: May 31, 2018
    Date of Patent: August 25, 2020
    Assignee: Autodesk, Inc.
    Inventors: Hui Li, Evan Patrick Atherton, Erin Bradner, Nicholas Cote, Heather Kerrick
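
The sim-to-real adaptation described above can be pictured as a learned correction applied to simulated control output before it reaches the physical process. The sketch below is a minimal, hypothetical illustration; the linear residual model `Sim2RealAdapter` and every parameter name are assumptions, not the patented method.

```python
import numpy as np

class Sim2RealAdapter:
    """Hypothetical residual model: blends simulated control output with
    real-world feedback to produce the augmented output sent to the
    physical process."""

    def __init__(self, dim, lr=1e-3):
        self.W = np.zeros((dim, 2 * dim))  # linear correction weights
        self.lr = lr

    def augment(self, sim_output, real_data):
        x = np.concatenate([sim_output, real_data])
        return sim_output + self.W @ x     # simulated command + learned residual

    def update(self, sim_output, real_data, target_output):
        # one gradient step on the squared error of the augmented output
        x = np.concatenate([sim_output, real_data])
        err = self.augment(sim_output, real_data) - target_output
        self.W -= self.lr * np.outer(err, x)

adapter = Sim2RealAdapter(dim=3)
command = adapter.augment(np.array([0.1, 0.0, 0.2]), np.array([0.09, 0.01, 0.18]))
```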
  • Publication number: 20200264583
    Abstract: A robot is configured to assist an end-user with creative tasks. While the end-user modifies a work piece, the robot observes the modifications made by the end-user and determines one or more objectives that the end-user may endeavor to accomplish. The robot then determines a set of actions to perform that assist the end-user with accomplishing those objectives.
    Type: Application
    Filed: May 4, 2020
    Publication date: August 20, 2020
    Inventors: Evan Patrick Atherton, David Thomasson, Maurice Ugo Conti, Heather Kerrick
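
Inferring what the end-user is trying to accomplish from observed edits is, at its simplest, belief updating. The toy Bayesian sketch below is only illustrative; the objective names, edit types, and likelihood numbers are all invented.

```python
import numpy as np

OBJECTIVES = ["smooth_surface", "carve_pattern", "rough_shaping"]

# Assumed P(observed edit type | objective); purely illustrative numbers.
LIKELIHOOD = {
    "sanding":   np.array([0.7, 0.1, 0.2]),
    "chiseling": np.array([0.1, 0.6, 0.3]),
}

def update_belief(belief, edit):
    posterior = belief * LIKELIHOOD[edit]   # Bayes rule
    return posterior / posterior.sum()

belief = np.ones(len(OBJECTIVES)) / len(OBJECTIVES)   # uniform prior
for edit in ["sanding", "sanding", "chiseling"]:
    belief = update_belief(belief, edit)

# The robot would assist with the most probable objective.
print(OBJECTIVES[int(np.argmax(belief))], belief.round(3))
```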
  • Patent number: 10708479
    Abstract: One embodiment of the present invention sets forth a technique for determining a location of an object that is being manipulated or processed by a robot. The technique includes capturing a digital image of the object while the object is disposed by the robot within an imaging space, wherein the digital image includes a direct view of the object and a reflected view of the object, detecting a visible feature of the object in the direct view and the visible feature of the object in the reflected view, and computing a first location of the visible feature in a first direction based on a position of the visible feature in the direct view. The technique further includes computing a second location of the visible feature in a second direction based on a position of the visible feature in the reflected view and causing the robot to move the object to a processing station based at least in part on the first location and the second location.
    Type: Grant
    Filed: July 16, 2019
    Date of Patent: July 7, 2020
    Assignee: Autodesk, Inc.
    Inventors: Evan Atherton, David Thomasson, Heather Kerrick, Maurice Conti
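
The mirror in this technique effectively gives a single camera two vantage points in one image: the direct view constrains the feature along a first direction, the reflected view along a second. Below is a minimal back-projection sketch under an assumed pinhole camera and 45-degree mirror geometry; all function names and numbers are hypothetical.

```python
def pixel_to_metric(px, focal_px, distance_m):
    """Back-project a pixel offset to meters at a known working distance."""
    return px * distance_m / focal_px

def locate_feature(direct_px, reflected_px, focal_px,
                   direct_dist_m, mirror_dist_m):
    x = pixel_to_metric(direct_px, focal_px, direct_dist_m)      # first direction
    z = pixel_to_metric(reflected_px, focal_px, mirror_dist_m)   # second direction
    return x, z

x, z = locate_feature(direct_px=42, reflected_px=-15, focal_px=1200.0,
                      direct_dist_m=0.50, mirror_dist_m=0.65)
print(f"feature at x={x:.4f} m, z={z:.4f} m")  # basis for the robot's move
```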
  • Publication number: 20200147794
    Abstract: An assembly engine is configured to generate, based on a computer-aided design (CAD) assembly, a set of motion commands that causes a robot to manufacture a physical assembly corresponding to the CAD assembly. The assembly engine analyzes the CAD assembly to determine an assembly sequence for various physical components to be included in the physical assembly. The assembly sequence indicates the order in which each physical component should be incorporated into the physical assembly and how those physical components should be physically coupled together. The assembly engine further analyzes the CAD assembly to determine different component paths that each physical component should follow when being incorporated into the physical assembly. Based on the assembly sequence and the component paths, the assembly engine generates a set of motion commands that the robot executes to assemble the physical components into the physical assembly.
    Type: Application
    Filed: October 29, 2019
    Publication date: May 14, 2020
    Inventors: Heather Kerrick, Erin Bradner, Hui Li, Evan Patrick Atherton, Nicholas Cote
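
Determining an assembly sequence is, at heart, an ordering problem over component dependencies. The sketch below uses Python's standard-library topological sort; the component graph is invented, whereas the real engine would derive it from CAD mating constraints.

```python
from graphlib import TopologicalSorter

# Hypothetical "must be placed before" graph: component -> prerequisites.
depends_on = {
    "base_plate": set(),
    "bracket": {"base_plate"},
    "motor": {"bracket"},
    "cover": {"motor", "bracket"},
}

sequence = list(TopologicalSorter(depends_on).static_order())
print(sequence)  # e.g. ['base_plate', 'bracket', 'motor', 'cover']
```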
  • Patent number: 10642244
    Abstract: A robot is configured to assist an end-user with creative tasks. While the end-user modifies a work piece, the robot observes the modifications made by the end-user and determines one or more objectives that the end-user may endeavor to accomplish. The robot then determines a set of actions to perform that assist the end-user with accomplishing those objectives.
    Type: Grant
    Filed: December 19, 2016
    Date of Patent: May 5, 2020
    Assignee: Autodesk, Inc.
    Inventors: Evan Patrick Atherton, David Thomasson, Maurice Ugo Conti, Heather Kerrick
  • Patent number: 10579046
    Abstract: A robot system is configured to fabricate three-dimensional (3D) objects using closed-loop, computer vision-based control. The robot system initiates fabrication based on a set of fabrication paths along which material is to be deposited. During deposition of material, the robot system captures video data and processes that data to determine the specific locations where the material is deposited. Based on these locations, the robot system adjusts future deposition locations to compensate for deviations from the fabrication paths. Additionally, because the robot system includes a 6-axis robotic arm, the robot system can deposit material at any location, along any pathway, or across any surface. Accordingly, the robot system is capable of fabricating a 3D object with multiple non-parallel, non-horizontal, and/or non-planar layers.
    Type: Grant
    Filed: April 24, 2017
    Date of Patent: March 3, 2020
    Assignee: Autodesk, Inc.
    Inventors: Evan Atherton, David Thomasson, Maurice Ugo Conti, Heather Kerrick, Nicholas Cote
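
The closed-loop control described above can be caricatured as feedback on deposition error: measure where material actually landed, then bias the next commanded target against the deviation. In this hedged sketch the `observe_fn` vision callback, the gain, and the drift model are all assumptions.

```python
import numpy as np

def corrected_targets(planned_path, observe_fn, gain=0.8):
    """Yield commanded targets, shifting each one against measured error."""
    offset = np.zeros(3)
    for target in planned_path:
        command = target + offset
        actual = observe_fn(command)        # vision-measured deposit location
        offset -= gain * (actual - target)  # push back against the deviation
        yield command

# Toy usage: a nozzle with a constant drift; commands converge to cancel it.
drift = np.array([0.002, -0.001, 0.0])
path = [np.array([0.01 * i, 0.0, 0.0]) for i in range(5)]
commands = list(corrected_targets(path, observe_fn=lambda c: c + drift))
```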
  • Publication number: 20200030986
    Abstract: A motion capture setup records the movements of an operator, and a control engine then translates those movements into control signals for controlling a robot. The control engine may directly translate the operator movements into analogous movements to be performed by the robot, or the control engine may compute robot dynamics that cause a portion of the robot to mimic a corresponding portion of the operator.
    Type: Application
    Filed: September 30, 2019
    Publication date: January 30, 2020
    Inventors: Evan Patrick Atherton, David Thomasson, Maurice Ugo Conti, Heather Kerrick
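
The direct-translation mode amounts to retargeting tracked operator motion into end-effector setpoints. The sketch below, with invented workspace limits and coordinate frames, scales the operator's arm vector into the robot's reach and clamps it so the controller never receives an unreachable target.

```python
import numpy as np

HUMAN_REACH_M = 0.8                       # assumed operator arm length
ROBOT_REACH_M = 1.2                       # assumed robot workspace radius
ROBOT_ORIGIN = np.array([0.0, 0.0, 0.4])  # assumed robot base frame offset

def retarget(wrist_pos, shoulder_pos):
    rel = wrist_pos - shoulder_pos                      # operator arm vector
    target = ROBOT_ORIGIN + rel * (ROBOT_REACH_M / HUMAN_REACH_M)
    r = np.linalg.norm(target - ROBOT_ORIGIN)
    if r > ROBOT_REACH_M:                               # clamp to workspace
        target = ROBOT_ORIGIN + (target - ROBOT_ORIGIN) * (ROBOT_REACH_M / r)
    return target

setpoint = retarget(np.array([0.6, 0.2, 1.3]), np.array([0.0, 0.0, 1.4]))
```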
  • Publication number: 20190337161
    Abstract: One embodiment of the present invention sets forth a technique for determining a location of an object that is being manipulated or processed by a robot. The technique includes capturing a digital image of the object while the object is disposed by the robot within an imaging space, wherein the digital image includes a direct view of the object and a reflected view of the object, detecting a visible feature of the object in the direct view and the visible feature of the object in the reflected view, and computing a first location of the visible feature in a first direction based on a position of the visible feature in the direct view. The technique further includes computing a second location of the visible feature in a second direction based on a position of the visible feature in the reflected view and causing the robot to move the object to a processing station based at least in part on the first location and the second location.
    Type: Application
    Filed: July 16, 2019
    Publication date: November 7, 2019
    Inventors: Evan Atherton, David Thomasson, Heather Kerrick, Maurice Conti
  • Patent number: 10444716
    Abstract: Methods, systems, and apparatus, including medium-encoded computer program products, for passing actionable information between different buildings to facilitate building management without human intervention include, in one aspect, a method including: determining, in a building information modelling (BIM) system of a first building, a set of rules defining actions to be taken by a building automation system of the first building in response to a defined set of remote information received from a BIM system of a second building, the set of remote information corresponding to one or more sensors in or associated with the second building; receiving data from the BIM system of the second building in accordance with the set of remote information; and using the building automation system of the first building to automatically change configuration, use, or operation of the first building in response to the received data in accordance with the set of rules.
    Type: Grant
    Filed: April 1, 2016
    Date of Patent: October 15, 2019
    Assignee: Autodesk, Inc.
    Inventors: Florencio Mazzoldi, Olivier Dionne, Thomas White, Heather Kerrick, Christopher C. Romes
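
The rule mechanism here pairs conditions on remote sensor data with local automation actions. The toy rule table below conveys the shape of it; the sensor keys, thresholds, and action names are all invented.

```python
# Building A reacts to readings published by building B's BIM system.
RULES = [
    # (remote sensor key, predicate, local automation action)
    ("b.occupancy",    lambda v: v > 500, "a.hvac.precool_lobby"),
    ("b.parking_free", lambda v: v == 0,  "a.signage.redirect_parking"),
]

def apply_rules(remote_data, dispatch):
    for key, predicate, action in RULES:
        if key in remote_data and predicate(remote_data[key]):
            dispatch(action)  # hand off to the building automation system

apply_rules({"b.occupancy": 750}, dispatch=print)  # -> a.hvac.precool_lobby
```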
  • Patent number: 10427305
    Abstract: A motion capture setup records the movements of an operator, and a control engine then translates those movements into control signals for controlling a robot. The control engine may directly translate the operator movements into analogous movements to be performed by the robot, or the control engine may compute robot dynamics that cause a portion of the robot to mimic a corresponding portion of the operator.
    Type: Grant
    Filed: July 21, 2016
    Date of Patent: October 1, 2019
    Assignee: Autodesk, Inc.
    Inventors: Evan Patrick Atherton, David Thomasson, Maurice Ugo Conti, Heather Kerrick
  • Patent number: 10363667
    Abstract: One embodiment of the present invention sets forth a technique for determining a location of an object that is being manipulated or processed by a robot. The technique includes capturing a digital image of the object while the object is disposed by the robot within an imaging space, wherein the digital image includes a direct view of the object and a reflected view of the object, detecting a visible feature of the object in the direct view and the visible feature of the object in the reflected view, and computing a first location of the visible feature in a first direction based on a position of the visible feature in the direct view. The technique further includes computing a second location of the visible feature in a second direction based on a position of the visible feature in the reflected view and causing the robot to move the object to a processing station based at least in part on the first location and the second location.
    Type: Grant
    Filed: November 29, 2016
    Date of Patent: July 30, 2019
    Assignee: Autodesk, Inc.
    Inventors: Evan Atherton, David Thomasson, Heather Kerrick, Maurice Conti
  • Publication number: 20190084158
    Abstract: A robot system models the behavior of a user when the user occupies an operating zone associated with a robot. The robot system predicts future behaviors of the user, and then determines whether those predicted behaviors interfere with anticipated behaviors of the robot. When such interference may occur, the robot system generates dynamics adjustments that can be implemented by the robot to avoid such interference. The robot system may also generate dynamics adjustments that can be implemented by the user to avoid such interference.
    Type: Application
    Filed: September 19, 2017
    Publication date: March 21, 2019
    Inventors: Evan Atherton, David Thomasson, Heather Kerrick, Hui Li
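
One plausible reading of this abstract, sketched below: extrapolate the user's motion over a short horizon, compare it against the robot's planned waypoints, and slow the robot when the predicted separation shrinks. The safety margin, horizon, and speed scaling are invented.

```python
import numpy as np

SAFETY_MARGIN_M = 0.5  # assumed minimum person-robot separation

def speed_adjustment(user_pos, user_vel, robot_waypoints, dt=0.1):
    """Return a speed scale for the robot's next motion segment."""
    for i, waypoint in enumerate(robot_waypoints):
        predicted_user = user_pos + user_vel * dt * (i + 1)  # linear prediction
        if np.linalg.norm(predicted_user - waypoint) < SAFETY_MARGIN_M:
            return 0.2   # slow to 20% near predicted interference
    return 1.0           # full speed otherwise
```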
  • Publication number: 20190076949
    Abstract: A control application implements computer vision techniques to cause a positioning robot and a welding robot to perform fabrication operations. The control application causes the positioning robot to place elements of a structure at certain positions based on real-time visual feedback captured by the positioning robot. The control application also causes the welding robot to weld those elements into place based on real-time visual feedback captured by the welding robot. By analyzing the real-time visual feedback captured by both robots, the control application adjusts the positioning and welding operations in real time.
    Type: Application
    Filed: September 12, 2017
    Publication date: March 14, 2019
    Inventors: Evan Atherton, David Thomasson, Heather Kerrick, Hui Li
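
The two-robot coordination described above reduces to a measure-correct-weld loop per element. In this self-contained toy version the `move`, `measure`, and `weld` callables stand in for real robot and vision interfaces.

```python
import numpy as np

PLACE_TOL_M = 0.002  # assumed placement tolerance before welding

def place_and_weld(goal_pose, move, measure, weld):
    move(goal_pose)
    while True:
        error = goal_pose - measure()     # real-time visual feedback
        if np.abs(error).max() <= PLACE_TOL_M:
            break
        move(goal_pose + 0.5 * error)     # damped correction toward the goal
    weld()

# Toy usage: a "positioner" that lands slightly off target on every move.
state = {"pose": np.zeros(3)}
place_and_weld(
    goal_pose=np.array([0.30, 0.10, 0.05]),
    move=lambda p: state.update(pose=p + np.array([0.004, 0.0, 0.0])),
    measure=lambda: state["pose"],
    weld=lambda: print("weld at", state["pose"].round(4)),
)
```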
  • Publication number: 20180345496
    Abstract: One embodiment of the present invention sets forth a technique for controlling the execution of a physical process. The technique includes receiving, as input to a machine learning model that is configured to adapt a simulation of the physical process executing in a virtual environment to a physical world, simulated output for controlling how the physical process performs a task in the virtual environment and real-world data collected from the physical process performing the task in the physical world. The technique also includes performing, by the machine learning model, one or more operations on the simulated output and the real-world data to generate augmented output. The technique further includes transmitting the augmented output to the physical process to control how the physical process performs the task in the physical world.
    Type: Application
    Filed: May 31, 2018
    Publication date: December 6, 2018
    Inventors: Hui Li, Evan Patrick Atherton, Erin Bradner, Nicholas Cote, Heather Kerrick
  • Publication number: 20180348735
    Abstract: An agent engine allocates a collection of agents to scan the surface of an object model. Each agent operates autonomously and implements particular behaviors based on the actions of nearby agents. Accordingly, the collection of agents exhibits swarm-like behavior. Over a sequence of time steps, the agents traverse the surface of the object model. Each agent acts to avoid other agents, thereby maintaining a relatively consistent distribution of agents across the surface of the object model over all time steps. At a given time step, the agent engine generates a slice through the object model that intersects each agent in a group of agents. The slice associated with a given time step represents a set of locations where material should be deposited to fabricate a 3D object. Based on a set of such slices, a robot engine causes a robot to fabricate the 3D object.
    Type: Application
    Filed: June 2, 2017
    Publication date: December 6, 2018
    Inventors: Evan Patrick Atherton, David Thomasson, Maurice Ugo Conti, Heather Kerrick, Nicholas Cote, Hui Li
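
A drastically simplified version of the swarm idea: agents on a circular cross-section repel their ring neighbors to stay evenly spread, and each time step their positions form one slice of deposition targets. Every parameter below is invented.

```python
import numpy as np

N_AGENTS, STEPS, REPEL = 12, 50, 0.01
rng = np.random.default_rng(0)
theta = np.sort(rng.uniform(0.0, 2 * np.pi, N_AGENTS))  # agents on a ring

slices = []
for step in range(STEPS):
    gap_left = (theta - np.roll(theta, 1)) % (2 * np.pi)
    gap_right = (np.roll(theta, -1) - theta) % (2 * np.pi)
    theta = (theta + REPEL * (gap_right - gap_left)) % (2 * np.pi)  # avoidance
    z = 0.002 * step  # climb the object as fabrication proceeds
    ring = np.stack([np.cos(theta), np.sin(theta), np.full(N_AGENTS, z)], axis=1)
    slices.append(ring)  # one slice: where material should be deposited
```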
  • Publication number: 20180349527
    Abstract: One embodiment of the present invention sets forth a technique for generating simulated training data for a physical process. The technique includes receiving, as input to at least one machine learning model, a first simulated image of a first object, wherein the at least one machine learning model includes mappings between simulated images generated from models of physical objects and real-world images of the physical objects. The technique also includes performing, by the at least one machine learning model, one or more operations on the first simulated image to generate a first augmented image of the first object. The technique further includes transmitting the first augmented image to a training pipeline for an additional machine learning model that controls a behavior of the physical process.
    Type: Application
    Filed: May 31, 2018
    Publication date: December 6, 2018
    Inventors: Hui Li, Evan Patrick Atherton, Erin Bradner, Nicholas Cote, Heather Kerrick
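
The data path here feeds simulated renders through a learned sim-to-real image translator before they reach the training pipeline. In the hedged sketch below the learned network is replaced by a trivial noise-and-exposure stand-in; `sim_to_real` and `render_fn` are illustrative names, not the patented model.

```python
import numpy as np

def sim_to_real(image, rng=np.random.default_rng()):
    """Stand-in for a trained image-translation network: only adds
    exposure variation and sensor-like noise, purely for illustration."""
    gain = rng.uniform(0.8, 1.2)
    noise = rng.normal(0.0, 0.02, image.shape)
    return np.clip(image * gain + noise, 0.0, 1.0)

def training_batches(render_fn, batch_size=16):
    """Yield augmented batches for the downstream model's training loop."""
    while True:
        yield np.stack([sim_to_real(render_fn()) for _ in range(batch_size)])

batches = training_batches(lambda: np.zeros((64, 64, 3)))  # dummy renderer
first = next(batches)
```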
  • Publication number: 20180341730
    Abstract: A robotic assembly cell is configured to generate a physical mesh of physical polygons based on a simulated mesh of simulated triangles. A control application configured to operate the assembly cell selects a simulated polygon in the simulated mesh and then causes a positioning robot in the cell to obtain a physical polygon that is similar to the simulated polygon. The positioning robot positions the polygon on the physical mesh, and a welding robot in the cell then welds the polygon to the mesh. The control application captures data that reflects how the physical polygon is actually positioned on the physical mesh, and then updates the simulated mesh to be geometrically consistent with the physical mesh. In doing so, the control application may execute a multi-objective solver to generate an updated simulated mesh that meets specific design criteria.
    Type: Application
    Filed: May 26, 2017
    Publication date: November 29, 2018
    Inventors: Evan Patrick Atherton, David Thomasson, Maurice Ugo Conti, Heather Kerrick, Nicholas Cote
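
Keeping the simulated mesh consistent with the as-built structure might reduce to: pin measured vertices to reality and relax their unplaced neighbors toward the change. This is a minimal sketch with an invented data layout and blend factor, not the multi-objective solver the abstract mentions.

```python
import numpy as np

def reconcile(sim_vertices, placed_idx, measured, neighbor_idx, blend=0.5):
    """Update the simulated mesh after one polygon is welded in place."""
    delta = measured - sim_vertices[placed_idx]
    sim_vertices[placed_idx] = measured          # pin to as-built geometry
    for j in neighbor_idx:                       # soften the discontinuity
        sim_vertices[j] += blend * delta
    return sim_vertices

mesh = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0], [0.5, 1.0, 0.0]])
mesh = reconcile(mesh, placed_idx=1, measured=np.array([1.01, 0.02, 0.0]),
                 neighbor_idx=[2])
```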
  • Publication number: 20180307206
    Abstract: A robot system is configured to fabricate three-dimensional (3D) objects using closed-loop, computer vision-based control. The robot system initiates fabrication based on a set of fabrication paths along which material is to be deposited. During deposition of material, the robot system captures video data and processes that data to determine the specific locations where the material is deposited. Based on these locations, the robot system adjusts future deposition locations to compensate for deviations from the fabrication paths. Additionally, because the robot system includes a 6-axis robotic arm, the robot system can deposit material at any location, along any pathway, or across any surface. Accordingly, the robot system is capable of fabricating a 3D object with multiple non-parallel, non-horizontal, and/or non-planar layers.
    Type: Application
    Filed: April 24, 2017
    Publication date: October 25, 2018
    Inventors: Evan Atherton, David Thomasson, Maurice Ugo Conti, Heather Kerrick, Nicholas Cote
  • Publication number: 20180304550
    Abstract: A robot system is configured to fabricate three-dimensional (3D) objects using closed-loop, computer vision-based control. The robot system initiates fabrication based on a set of fabrication paths along which material is to be deposited. During deposition of material, the robot system captures video data and processes that data to determine the specific locations where the material is deposited. Based on these locations, the robot system adjusts future deposition locations to compensate for deviations from the fabrication paths. Additionally, because the robot system includes a 6-axis robotic arm, the robot system can deposit material at any location, along any pathway, or across any surface. Accordingly, the robot system is capable of fabricating a 3D object with multiple non-parallel, non-horizontal, and/or non-planar layers.
    Type: Application
    Filed: April 24, 2017
    Publication date: October 25, 2018
    Inventors: Evan Atherton, David Thomasson, Maurice Ugo Conti, Heather Kerrick, Nicholas Cote
  • Publication number: 20180307207
    Abstract: A robot system is configured to fabricate three-dimensional (3D) objects using closed-loop, computer vision-based control. The robot system initiates fabrication based on a set of fabrication paths along which material is to be deposited. During deposition of material, the robot system captures video data and processes that data to determine the specific locations where the material is deposited. Based on these locations, the robot system adjusts future deposition locations to compensate for deviations from the fabrication paths. Additionally, because the robot system includes a 6-axis robotic arm, the robot system can deposit material at any location, along any pathway, or across any surface. Accordingly, the robot system is capable of fabricating a 3D object with multiple non-parallel, non-horizontal, and/or non-planar layers.
    Type: Application
    Filed: April 24, 2017
    Publication date: October 25, 2018
    Inventors: Evan Atherton, David Thomasson, Maurice Ugo Conti, Heather Kerrick, Nicholas Cote