Patents by Inventor Patrick S. Piemonte

Patrick S. Piemonte has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Publication number: 20210247203
    Abstract: Implementations described and claimed herein provide systems and methods for interaction between a user and a machine. In one implementation, machine status information for the machine is received at a dedicated machine component. The machine status information is published onto a distributed node system network of the machine. The machine status information is ingested at a primary interface controller, and an interactive user interface is generated using the primary interface controller. The interactive user interface is generated based on the machine status information. In some implementations, input is received from the user at the primary interface controller through the interactive user interface, and a corresponding action is delegated to one or more subsystems of the machine using the distributed node system network.
    Type: Application
    Filed: February 23, 2021
    Publication date: August 12, 2021
    Inventors: Patrick S. Piemonte, Jason D. Gosnell, Kjell F. Bronder, Daniel De Rocha Rosario, Shaun D. Budhram, Scott Herz
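The publish/ingest flow this abstract describes can be illustrated with a toy sketch. All names here (`NodeNetwork`, `PrimaryInterfaceController`, the `"machine/status"` topic) are hypothetical stand-ins, not taken from the patent; the distributed node system network is modeled as a simple in-process publish/subscribe bus.

```python
# Minimal sketch (hypothetical names) of the pattern in the abstract above:
# a dedicated machine component publishes status onto a node network, and a
# primary interface controller ingests it to drive an interactive UI.
from collections import defaultdict

class NodeNetwork:
    """Toy in-process stand-in for the distributed node system network."""
    def __init__(self):
        self._subscribers = defaultdict(list)

    def subscribe(self, topic, callback):
        self._subscribers[topic].append(callback)

    def publish(self, topic, message):
        for callback in self._subscribers[topic]:
            callback(message)

class PrimaryInterfaceController:
    """Ingests machine status and derives a user-interface model from it."""
    def __init__(self, network):
        self.ui_model = {}
        network.subscribe("machine/status", self._ingest)

    def _ingest(self, status):
        self.ui_model.update(status)

network = NodeNetwork()
controller = PrimaryInterfaceController(network)
# A dedicated machine component publishes its status onto the network;
# the controller's UI model updates from the published status.
network.publish("machine/status", {"battery": 0.82, "door": "closed"})
```

In the same pattern, user input handled by the controller could be published back onto the bus under another topic to delegate actions to machine subsystems.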
  • Patent number: 11069091
    Abstract: Communications devices and methods perform spatial, visual content and a separate preview of other content apart from the performed content. Content may include 3-D performances or AR content. Immersive visual content may be received by the communications device and simplified into transcript cells and/or performed render nodes based on metadata, visual attributes, and/or capabilities of the communications device for performance. Render nodes may preview other content, which is performable and selectable with ease from the communications device. Devices may perform both a piece of content and display, in context, render nodes for other visual content, as well as buffer and prepare unseen other content such that a user may seamlessly preview, select, and perform other visual content. Example GUIs may arrange nodes at a distance or arrayed along a selection line in the same coordinates as performed visual content. Users may input commands to move between or modify the nodes.
    Type: Grant
    Filed: August 19, 2019
    Date of Patent: July 20, 2021
    Inventor: Patrick S. Piemonte
  • Publication number: 20210166490
    Abstract: An AR system that leverages a pre-generated 3D model of the world to improve rendering of 3D graphics content for AR views of a scene, for example an AR view of the world in front of a moving vehicle. By leveraging the pre-generated 3D model, the AR system may use a variety of techniques to enhance the rendering capabilities of the system. The AR system may obtain pre-generated 3D data (e.g., 3D tiles) from a remote source (e.g., cloud-based storage), and may use this pre-generated 3D data (e.g., a combination of 3D mesh, textures, and other geometry information) to augment local data (e.g., a point cloud of data collected by vehicle sensors) to determine much more information about a scene, including information about occluded or distant regions of the scene, than is available from the local data.
    Type: Application
    Filed: February 12, 2021
    Publication date: June 3, 2021
    Applicant: Apple Inc.
    Inventors: Patrick S. Piemonte, Daniel De Rocha Rosario, Jason D. Gosnell, Peter Meier
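The data flow outlined in this abstract can be sketched in miniature. The tile grid, the 2D region, and the `tile_store` mapping below are all hypothetical simplifications: the point is only that pre-generated tile geometry fetched for the visible region is merged with locally sensed points, so the combined scene covers areas the local sensors cannot see.

```python
# Hedged sketch (hypothetical tiling scheme) of augmenting a local point
# cloud with pre-generated 3D tile data fetched for the viewed region.
def tiles_for_region(region, tile_size=1.0):
    """Tile ids (grid coordinates) covering an axis-aligned 2D region."""
    (x0, y0), (x1, y1) = region
    ids = []
    for ix in range(int(x0 // tile_size), int(x1 // tile_size) + 1):
        for iy in range(int(y0 // tile_size), int(y1 // tile_size) + 1):
            ids.append((ix, iy))
    return ids

def augment_scene(local_points, region, tile_store):
    """Combine locally sensed points with pre-generated tile geometry."""
    remote = []
    for tile_id in tiles_for_region(region):
        remote.extend(tile_store.get(tile_id, []))  # e.g. cloud-hosted tiles
    return local_points + remote

# Stand-in for remote (e.g. cloud-based) storage of pre-generated 3D data.
tile_store = {(0, 0): [(0.2, 0.3)], (1, 0): [(1.5, 0.1)]}
scene = augment_scene([(0.9, 0.9)], ((0.0, 0.0), (1.9, 0.9)), tile_store)
```

A real system would fetch mesh, texture, and geometry data per tile and handle occlusion during rendering; the sketch only shows the fetch-and-merge step.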
  • Publication number: 20210141525
    Abstract: A multitouch device can interpret and disambiguate different gestures related to manipulating a displayed image of a 3D object, scene, or region. Examples of manipulations include pan, zoom, rotation, and tilt. The device can define a number of manipulation modes, including one or more single-control modes such as a pan mode, a zoom mode, a rotate mode, and/or a tilt mode. The manipulation modes can also include one or more multi-control modes, such as a pan/zoom/rotate mode that allows multiple parameters to be modified simultaneously.
    Type: Application
    Filed: September 17, 2020
    Publication date: May 13, 2021
    Inventors: Patrick S. Piemonte, Bradford A. Moore, Billy P. Chen
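The mode disambiguation this abstract describes can be sketched as a classifier over how much each gesture parameter changed. The thresholds and function names below are illustrative assumptions, not the patented method: a gesture that changes one parameter selects a single-control mode, while one that changes several selects the multi-control mode.

```python
# Illustrative sketch (hypothetical thresholds) of disambiguating a
# multitouch gesture into a single-control mode (pan, zoom, or rotate)
# or the combined pan/zoom/rotate multi-control mode.
import math

def classify_gesture(d_translation, d_scale, d_angle,
                     pan_min=5.0, scale_min=0.05, angle_min=math.radians(3)):
    changed = []
    if abs(d_translation) >= pan_min:       # fingers moved together
        changed.append("pan")
    if abs(d_scale - 1.0) >= scale_min:     # pinch distance changed
        changed.append("zoom")
    if abs(d_angle) >= angle_min:           # pinch axis rotated
        changed.append("rotate")
    if len(changed) == 1:
        return changed[0]               # single-control mode
    if len(changed) > 1:
        return "pan/zoom/rotate"        # multi-control mode
    return "none"

print(classify_gesture(12.0, 1.0, 0.0))   # translation only -> pan
print(classify_gesture(8.0, 1.2, 0.1))    # several parameters -> combined
```

Tilt would be handled the same way, typically keyed to a distinct touch pattern such as a two-finger vertical drag.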
  • Patent number: 10976178
    Abstract: Implementations described and claimed herein provide systems and methods for interaction between a user and a machine. In one implementation, machine status information for the machine is received at a dedicated machine component. The machine status information is published onto a distributed node system network of the machine. The machine status information is ingested at a primary interface controller, and an interactive user interface is generated using the primary interface controller. The interactive user interface is generated based on the machine status information. In some implementations, input is received from the user at the primary interface controller through the interactive user interface, and a corresponding action is delegated to one or more subsystems of the machine using the distributed node system network.
    Type: Grant
    Filed: September 21, 2016
    Date of Patent: April 13, 2021
    Inventors: Patrick S. Piemonte, Jason D. Gosnell, Kjell F. Bronder, Daniel De Rocha Rosario, Shaun D. Budhram, Scott Herz
  • Publication number: 20210056733
    Abstract: Communications devices and methods perform spatial, visual content and a separate preview of other content apart from the performed content. Content may include 3-D performances or AR content. Immersive visual content may be received by the communications device and simplified into transcript cells and/or performed render nodes based on metadata, visual attributes, and/or capabilities of the communications device for performance. Render nodes may preview other content, which is performable and selectable with ease from the communications device. Devices may perform both a piece of content and display, in context, render nodes for other visual content, as well as buffer and prepare unseen other content such that a user may seamlessly preview, select, and perform other visual content. Example GUIs may arrange nodes at a distance or arrayed along a selection line in the same coordinates as performed visual content. Users may input commands to move between or modify the nodes.
    Type: Application
    Filed: August 19, 2019
    Publication date: February 25, 2021
    Inventor: Patrick S. Piemonte
  • Patent number: 10922886
    Abstract: An AR system that leverages a pre-generated 3D model of the world to improve rendering of 3D graphics content for AR views of a scene, for example an AR view of the world in front of a moving vehicle. By leveraging the pre-generated 3D model, the AR system may use a variety of techniques to enhance the rendering capabilities of the system. The AR system may obtain pre-generated 3D data (e.g., 3D tiles) from a remote source (e.g., cloud-based storage), and may use this pre-generated 3D data (e.g., a combination of 3D mesh, textures, and other geometry information) to augment local data (e.g., a point cloud of data collected by vehicle sensors) to determine much more information about a scene, including information about occluded or distant regions of the scene, than is available from the local data.
    Type: Grant
    Filed: September 22, 2017
    Date of Patent: February 16, 2021
    Assignee: Apple Inc.
    Inventors: Patrick S. Piemonte, Daniel De Rocha Rosario, Jason D. Gosnell, Peter Meier
  • Patent number: 10782873
    Abstract: A multitouch device can interpret and disambiguate different gestures related to manipulating a displayed image of a 3D object, scene, or region. Examples of manipulations include pan, zoom, rotation, and tilt. The device can define a number of manipulation modes, including one or more single-control modes such as a pan mode, a zoom mode, a rotate mode, and/or a tilt mode. The manipulation modes can also include one or more multi-control modes, such as a pan/zoom/rotate mode that allows multiple parameters to be modified simultaneously.
    Type: Grant
    Filed: December 20, 2016
    Date of Patent: September 22, 2020
    Assignee: Apple Inc.
    Inventors: Patrick S. Piemonte, Bradford A. Moore, Billy P. Chen
  • Publication number: 20200233212
    Abstract: Implementations described and claimed herein provide systems and methods for interaction between a user and a machine. In one implementation, a system is provided that receives an input from a user of a mobile machine which indicates or describes an object in the world. In one example, the user may gesture to the object which is detected by a visual sensor. In another example, the user may verbally describe the object which is detected by an audio sensor. The system receiving the input may then determine which object near the location of the user that the user is indicating. Such a determination may include utilizing known objects near the geographic location of the user or the autonomous or mobile machine.
    Type: Application
    Filed: February 10, 2020
    Publication date: July 23, 2020
    Inventors: Patrick S. Piemonte, Wolf Kienzle, Douglas Bowman, Shaun D. Budhram, Madhurani R. Sapre, Vyacheslav Leizerovich, Daniel De Rocha Rosario
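The object-resolution step in this abstract can be sketched as a bearing match: given the user's location and a gesture direction from a visual sensor, score each known nearby object by how closely it lies along that direction. The geometry and names below are hypothetical simplifications in 2D.

```python
# Hedged sketch (hypothetical 2D geometry) of resolving a pointing gesture
# to a known object near the user's location.
import math

def indicated_object(user_pos, gesture_dir, known_objects):
    """Return the object whose bearing best matches the gesture direction."""
    gx, gy = gesture_dir
    norm = math.hypot(gx, gy)
    gx, gy = gx / norm, gy / norm
    best, best_score = None, -2.0
    for name, (ox, oy) in known_objects.items():
        dx, dy = ox - user_pos[0], oy - user_pos[1]
        dist = math.hypot(dx, dy)
        if dist == 0:
            continue
        score = (dx * gx + dy * gy) / dist  # cosine of angular offset
        if score > best_score:
            best, best_score = name, score
    return best

# Known objects near the user's geographic location (stand-in data).
landmarks = {"cafe": (10.0, 1.0), "park": (-3.0, 8.0)}
print(indicated_object((0.0, 0.0), (1.0, 0.0), landmarks))  # -> cafe
```

A verbal description detected by an audio sensor would feed the same resolution step, matching spoken attributes against the known objects instead of a bearing.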
  • Patent number: 10621945
    Abstract: Methods, systems and apparatus are described to dynamically generate map textures. A client device may obtain map data, which may include one or more shapes described by vector graphics data. Along with the one or more shapes, embodiments may include texture indicators linked to the one or more shapes. Embodiments may render the map data. For one or more shapes, a texture definition may be obtained. Based on the texture definition, a client device may dynamically generate a texture for the shape. The texture may then be applied to the shape to render a current fill portion of the shape. In some embodiments, the rendered map view is displayed.
    Type: Grant
    Filed: October 22, 2018
    Date of Patent: April 14, 2020
    Assignee: Apple Inc.
    Inventors: Marcel Van Os, Patrick S. Piemonte, Billy P. Chen, Christopher Blumenberg
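The pipeline in this abstract (texture indicator, to texture definition, to dynamically generated fill) can be sketched as follows. The definition format and helper names are invented for illustration; real map textures would be generated on the GPU rather than as nested lists.

```python
# Minimal sketch (hypothetical definition format) of dynamically generating
# a texture from a shape's texture indicator and applying it as the fill.
def generate_texture(definition, size=4):
    """Procedurally build a tiny texture as a 2D grid of color values."""
    if definition["pattern"] == "stripes":
        a, b = definition["colors"]
        return [[a if x % 2 == 0 else b for x in range(size)]
                for _ in range(size)]
    a = definition["colors"][0]
    return [[a] * size for _ in range(size)]  # solid-fill fallback

def fill_shape(shape, texture_definitions):
    """Resolve the shape's texture indicator and render its fill."""
    definition = texture_definitions[shape["texture_indicator"]]
    shape["fill"] = generate_texture(definition)
    return shape

definitions = {"water": {"pattern": "stripes", "colors": ("blue", "white")}}
shape = fill_shape({"id": 1, "texture_indicator": "water"}, definitions)
```

Because the texture is generated from a definition at render time rather than shipped as image data, the client can restyle shapes without re-downloading map data.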
  • Patent number: 10558037
    Abstract: Implementations described and claimed herein provide systems and methods for interaction between a user and a machine. In one implementation, a system is provided that receives an input from a user of a mobile machine which indicates or describes an object in the world. In one example, the user may gesture to the object which is detected by a visual sensor. In another example, the user may verbally describe the object which is detected by an audio sensor. The system receiving the input may then determine which object near the location of the user that the user is indicating. Such a determination may include utilizing known objects near the geographic location of the user or the autonomous or mobile machine.
    Type: Grant
    Filed: September 20, 2017
    Date of Patent: February 11, 2020
    Inventors: Patrick S. Piemonte, Wolf Kienzle, Douglas Bowman, Shaun D. Budhram, Madhurani R. Sapre, Vyacheslav Leizerovich, Daniel De Rocha Rosario
  • Patent number: 10504288
    Abstract: Methods, hardware, and software create augmented reality through several distinct users. Different users may select locations for the augmented reality creation and add augmented objects, elements, and other perceivables to underlying reality through separate communications devices. The users may interface with a GUI for augmenting underlying media, including use of several tools to add particular and separate augmented features. Users may take turns separately editing and adding augmented elements, and once the users are finished collaborating, the resulting augmented reality can be shared with others for re-creation and performance at the selected locations. Users may invite each other to collaborate on augmented reality through the GUI as well, potentially in a contact-based invitation method or any other known communication or chat configuration.
    Type: Grant
    Filed: April 17, 2018
    Date of Patent: December 10, 2019
    Assignee: Patrick Piemonte & Ryan Staake
    Inventors: Patrick S. Piemonte, Ryan P. Staake
  • Publication number: 20190318540
    Abstract: Methods, hardware, and software create augmented reality through several distinct users. Different users may select locations for the augmented reality creation and add augmented objects, elements, and other perceivables to underlying reality through separate communications devices. The users may interface with a GUI for augmenting underlying media, including use of several tools to add particular and separate augmented features. Users may take turns separately editing and adding augmented elements, and once the users are finished collaborating, the resulting augmented reality can be shared with others for re-creation and performance at the selected locations. Users may invite each other to collaborate on augmented reality through the GUI as well, potentially in a contact-based invitation method or any other known communication or chat configuration.
    Type: Application
    Filed: April 17, 2018
    Publication date: October 17, 2019
    Inventors: Patrick S. Piemonte, Ryan P. Staake
  • Patent number: 10437460
    Abstract: Methods and apparatus for a map tool on a mobile device for implementing cartographically aware gestures directed to a map view of a map region. The map tool may base a cartographically aware gesture on an actual gesture input directed to a map view and based on map data for the map region that may include metadata corresponding to elements within the map region. The map tool may then determine, based on one or more elements of the map data, a modification to be applied to an implementation to the gesture. Given the modification to the gesture implementation, the map tool may then render, based on performing the modification to the gesture, an updated map view instead of an updated map view based solely on the user gesture.
    Type: Grant
    Filed: September 11, 2012
    Date of Patent: October 8, 2019
    Assignee: Apple Inc.
    Inventors: Bradford A. Moore, Billy P. Chen, Christopher Blumenberg, Patrick S. Piemonte
  • Patent number: 10366523
    Abstract: Methods, systems and apparatus are described to provide visual feedback of a change in map view. Various embodiments may display a map view of a map in a two-dimensional map view mode. Embodiments may obtain input indicating a change to a three-dimensional map view mode. Input may be obtained through touch, auditory, or other well-known input technologies. Some embodiments may allow the input to request a specific display position to display. In response to the input indicating a change to a three-dimensional map view mode, embodiments may then display an animation that moves a virtual camera for the map display to different virtual camera positions to illustrate that the map view mode is changed to a three-dimensional map view mode.

    Type: Grant
    Filed: October 9, 2015
    Date of Patent: July 30, 2019
    Assignee: Apple Inc.
    Inventors: Billy P. Chen, Patrick S. Piemonte, Christopher Blumenberg
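The 2D-to-3D transition animation this abstract describes can be sketched by interpolating the virtual camera's tilt across intermediate positions. The parameterization (tilt in degrees, a fixed frame count) is a hypothetical simplification; a real implementation would also ease the interpolation and animate altitude and heading.

```python
# Hedged sketch (hypothetical parameterization) of animating a virtual map
# camera from a top-down 2D view (tilt 0) to a tilted 3D view.
def camera_tilt_keyframes(start_tilt=0.0, end_tilt=45.0, steps=5):
    """Tilt angles (degrees) for each frame of the 2D-to-3D transition."""
    return [start_tilt + (end_tilt - start_tilt) * i / (steps - 1)
            for i in range(steps)]

frames = camera_tilt_keyframes()
print(frames)  # evenly spaced tilts from 0.0 up to 45.0
```

Rendering the map once per keyframe produces the visual feedback that the view mode is changing, rather than snapping directly to the 3D view.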
  • Publication number: 20190057670
    Abstract: Methods, systems and apparatus are described to dynamically generate map textures. A client device may obtain map data, which may include one or more shapes described by vector graphics data. Along with the one or more shapes, embodiments may include texture indicators linked to the one or more shapes. Embodiments may render the map data. For one or more shapes, a texture definition may be obtained. Based on the texture definition, a client device may dynamically generate a texture for the shape. The texture may then be applied to the shape to render a current fill portion of the shape. In some embodiments, the rendered map view is displayed.
    Type: Application
    Filed: October 22, 2018
    Publication date: February 21, 2019
    Applicant: Apple Inc.
    Inventors: Marcel Van Os, Patrick S. Piemonte, Billy P. Chen, Christopher Blumenberg
  • Patent number: 10109255
    Abstract: Methods, systems and apparatus are described to dynamically generate map textures. A client device may obtain map data, which may include one or more shapes described by vector graphics data. Along with the one or more shapes, embodiments may include texture indicators linked to the one or more shapes. Embodiments may render the map data. For one or more shapes, a texture definition may be obtained. Based on the texture definition, a client device may dynamically generate a texture for the shape. The texture may then be applied to the shape to render a current fill portion of the shape. In some embodiments, the rendered map view is displayed.
    Type: Grant
    Filed: February 28, 2013
    Date of Patent: October 23, 2018
    Assignee: Apple Inc.
    Inventors: Marcel Van Os, Patrick S. Piemonte, Billy P. Chen, Christopher Blumenberg
  • Publication number: 20180293771
    Abstract: Methods, hardware, and software create and transmit augmented reality in context with captured real world media, so as to replicate a similar augmented reality at a different instance. A computer processor in a communications device handles a combination of augmented reality information, anchor information that provides the context-matching, and captured real world media information. The computer processor determines if the real world subject matter has suitable anchor information to control how augmented reality elements should appear contextually with such media. A graphical user interface on the communications device may provide a user with several options for creation of augmented reality. The augmented, anchor, and any additional information is transmitted to a different device to identify triggering or context-matching media. The augmented reality is performed based on the triggering media as perceived on the different device.
    Type: Application
    Filed: September 5, 2017
    Publication date: October 11, 2018
    Inventors: Patrick S. Piemonte, Ryan P. Staake
  • Publication number: 20180283896
    Abstract: Implementations described and claimed herein provide systems and methods for interaction between a user and a machine. In one implementation, machine status information for the machine is received at a dedicated machine component. The machine status information is published onto a distributed node system network of the machine. The machine status information is ingested at a primary interface controller, and an interactive user interface is generated using the primary interface controller. The interactive user interface is generated based on the machine status information. In some implementations, input is received from the user at the primary interface controller through the interactive user interface, and a corresponding action is delegated to one or more subsystems of the machine using the distributed node system network.
    Type: Application
    Filed: September 21, 2016
    Publication date: October 4, 2018
    Inventors: Patrick S. Piemonte, Jason D. Gosnell, Kjell F. Bronder, Daniel De Rocha Rosario, Shaun D. Budhram, Scott Herz
  • Publication number: 20180089899
    Abstract: An AR system that leverages a pre-generated 3D model of the world to improve rendering of 3D graphics content for AR views of a scene, for example an AR view of the world in front of a moving vehicle. By leveraging the pre-generated 3D model, the AR system may use a variety of techniques to enhance the rendering capabilities of the system. The AR system may obtain pre-generated 3D data (e.g., 3D tiles) from a remote source (e.g., cloud-based storage), and may use this pre-generated 3D data (e.g., a combination of 3D mesh, textures, and other geometry information) to augment local data (e.g., a point cloud of data collected by vehicle sensors) to determine much more information about a scene, including information about occluded or distant regions of the scene, than is available from the local data.
    Type: Application
    Filed: September 22, 2017
    Publication date: March 29, 2018
    Applicant: Apple Inc.
    Inventors: Patrick S. Piemonte, Daniel De Rocha Rosario, Jason D. Gosnell, Peter Meier