Patents by Inventor Patrick S. Piemonte
Patrick S. Piemonte has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).
-
Patent number: 12147034
Abstract: Implementations described and claimed herein provide systems and methods for interaction between a user and a machine. In one implementation, a system is provided that receives an input from a user of a mobile machine which indicates or describes an object in the world. In one example, the user may gesture to the object, which is detected by a visual sensor. In another example, the user may verbally describe the object, which is detected by an audio sensor. The system receiving the input may then determine which object near the user's location the user is indicating. Such a determination may include utilizing known objects near the geographic location of the user or of the autonomous or mobile machine.
Type: Grant
Filed: February 10, 2020
Date of Patent: November 19, 2024
Inventors: Patrick S. Piemonte, Wolf Kienzle, Douglas Bowman, Shaun D. Budhram, Madhurani R. Sapre, Vyacheslav Leizerovich, Daniel De Rocha Rosario
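As a rough illustration of the kind of determination this abstract describes, here is a minimal Swift sketch that resolves a pointing gesture to the nearby known object it most directly indicates. Everything here (the KnownObject type, the cosine-alignment scoring, the sample data) is a hypothetical stand-in, not the patented implementation.

```swift
import Foundation

// Hypothetical model of an object known to be near the user's location.
struct KnownObject {
    let name: String
    let position: SIMD3<Double>  // world coordinates, in meters
}

// Cosine of the angle between the gesture ray and the direction to the
// object; larger means the gesture points more directly at the object.
// `direction` is assumed to be a unit vector.
func alignment(origin: SIMD3<Double>, direction: SIMD3<Double>, object: KnownObject) -> Double {
    let toObject = object.position - origin
    let distance = (toObject * toObject).sum().squareRoot()
    guard distance > 0 else { return -1 }
    return (toObject * direction).sum() / distance
}

// Resolve a pointing gesture to the candidate the ray points at most directly.
func resolveGesture(origin: SIMD3<Double>, direction: SIMD3<Double>,
                    candidates: [KnownObject]) -> KnownObject? {
    candidates.max {
        alignment(origin: origin, direction: direction, object: $0) <
        alignment(origin: origin, direction: direction, object: $1)
    }
}

// Example: the user points roughly east, so the café wins over the fountain.
let nearby = [
    KnownObject(name: "café", position: [40, 2, 1]),
    KnownObject(name: "fountain", position: [-5, 0, 30]),
]
print(resolveGesture(origin: [0, 0, 0], direction: [1, 0, 0], candidates: nearby)?.name ?? "none")
```

A verbal description ("the red building on the corner") would presumably feed a different matcher, but the final step is the same: score the known objects near the user and pick the best candidate.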
-
Publication number: 20240185539
Abstract: An AR system that leverages a pre-generated 3D model of the world to improve rendering of 3D graphics content for AR views of a scene, for example an AR view of the world in front of a moving vehicle. By leveraging the pre-generated 3D model, the AR system may use a variety of techniques to enhance the rendering capabilities of the system. The AR system may obtain pre-generated 3D data (e.g., 3D tiles) from a remote source (e.g., cloud-based storage), and may use this pre-generated 3D data (e.g., a combination of 3D mesh, textures, and other geometry information) to augment local data (e.g., a point cloud of data collected by vehicle sensors) to determine much more information about a scene, including information about occluded or distant regions of the scene, than is available from the local data.
Type: Application
Filed: February 9, 2024
Publication date: June 6, 2024
Applicant: Apple Inc.
Inventors: Patrick S. Piemonte, Daniel De Rocha Rosario, Jason D. Gosnell, Peter Meier
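The augmentation idea in this abstract can be sketched compactly: keep local sensor data where it is reliable, and fall back to pre-generated tile geometry for distant or occluded regions. The Swift below is a toy sketch under that assumption; TileKey, TileMesh, and the range-based merge rule are invented for illustration.

```swift
import Foundation

// Hypothetical quadtree-style key into the pre-generated 3D model.
struct TileKey: Hashable {
    let x: Int, y: Int, zoom: Int
}

// Minimal stand-ins for fetched tile geometry and locally sensed points.
struct TileMesh { let vertices: [SIMD3<Float>] }
struct LocalPointCloud { let points: [SIMD3<Float>]; let reliableRange: Float }

// Sketch of the augmentation step: local sensor data describes the nearby
// scene, and pre-generated tiles fill in occluded or distant regions.
func sceneGeometry(local: LocalPointCloud,
                   tileCache: [TileKey: TileMesh],
                   visible: [TileKey]) -> [SIMD3<Float>] {
    var geometry = local.points
    for key in visible {
        guard let mesh = tileCache[key] else { continue }  // not yet fetched
        // Prefer local data where the sensors can see; keep tile geometry
        // only beyond their reliable range (distance from the vehicle).
        geometry += mesh.vertices.filter { v in
            (v * v).sum().squareRoot() > local.reliableRange
        }
    }
    return geometry
}

// Example: one fetched tile; only its far vertex survives the merge rule,
// augmenting the single locally sensed point (12 m sensor range assumed).
let key = TileKey(x: 5, y: 9, zoom: 14)
let cache = [key: TileMesh(vertices: [[3, 0, 1], [80, 2, -40]])]
let cloud = LocalPointCloud(points: [[1, 0, 2]], reliableRange: 12)
print(sceneGeometry(local: cloud, tileCache: cache, visible: [key]).count)  // 2
```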
-
Patent number: 11953339
Abstract: Implementations described and claimed herein provide systems and methods for interaction between a user and a machine. In one implementation, machine status information for the machine is received at a dedicated machine component. The machine status information is published onto a distributed node system network of the machine. The machine status information is ingested at a primary interface controller, and an interactive user interface is generated using the primary interface controller. The interactive user interface is generated based on the machine status information. In some implementations, input is received from the user at the primary interface controller through the interactive user interface, and a corresponding action is delegated to one or more subsystems of the machine using the distributed node system network.
Type: Grant
Filed: February 23, 2021
Date of Patent: April 9, 2024
Inventors: Patrick S. Piemonte, Jason D. Gosnell, Kjell F. Bronder, Daniel De Rocha Rosario, Shaun D. Budhram, Scott Herz
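A minimal sketch of the publish/ingest flow the abstract outlines, using an in-process bus as a stand-in for the distributed node system network. NodeBus, MachineStatus, and PrimaryInterfaceController are hypothetical names; real delegation would of course cross process and node boundaries.

```swift
import Foundation

// Hypothetical status message published by a dedicated machine component.
struct MachineStatus {
    let subsystem: String
    let payload: [String: Double]
}

// A minimal publish/subscribe bus standing in for the distributed node network.
final class NodeBus {
    private var subscribers: [(MachineStatus) -> Void] = []
    func subscribe(_ handler: @escaping (MachineStatus) -> Void) {
        subscribers.append(handler)
    }
    func publish(_ status: MachineStatus) {
        subscribers.forEach { $0(status) }
    }
}

// The primary interface controller ingests published status and rebuilds
// its UI state from it.
final class PrimaryInterfaceController {
    private(set) var uiState: [String: [String: Double]] = [:]

    init(bus: NodeBus) {
        bus.subscribe { [weak self] status in
            self?.uiState[status.subsystem] = status.payload
        }
    }

    func handleUserInput(subsystem: String, command: String) {
        // Delegation sketch: a real system would route the command to the
        // subsystem over the distributed node network rather than print it.
        print("delegating '\(command)' to \(subsystem)")
    }
}

// Example: a climate node publishes, and the controller's UI state updates.
let bus = NodeBus()
let controller = PrimaryInterfaceController(bus: bus)
bus.publish(MachineStatus(subsystem: "climate", payload: ["cabinTempC": 21.5]))
print(controller.uiState["climate"] ?? [:])  // ["cabinTempC": 21.5]
```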
-
Patent number: 11935197
Abstract: An AR system that leverages a pre-generated 3D model of the world to improve rendering of 3D graphics content for AR views of a scene, for example an AR view of the world in front of a moving vehicle. By leveraging the pre-generated 3D model, the AR system may use a variety of techniques to enhance the rendering capabilities of the system. The AR system may obtain pre-generated 3D data (e.g., 3D tiles) from a remote source (e.g., cloud-based storage), and may use this pre-generated 3D data (e.g., a combination of 3D mesh, textures, and other geometry information) to augment local data (e.g., a point cloud of data collected by vehicle sensors) to determine much more information about a scene, including information about occluded or distant regions of the scene, than is available from the local data.
Type: Grant
Filed: February 12, 2021
Date of Patent: March 19, 2024
Assignee: Apple Inc.
Inventors: Patrick S. Piemonte, Daniel De Rocha Rosario, Jason D. Gosnell, Peter Meier
-
Publication number: 20240068835
Abstract: Implementations described and claimed herein provide systems and methods for interaction between a user and a machine. In one implementation, machine status information for the machine is received at a dedicated machine component. The machine status information is published onto a distributed node system network of the machine. The machine status information is ingested at a primary interface controller, and an interactive user interface is generated using the primary interface controller. The interactive user interface is generated based on the machine status information. In some implementations, input is received from the user at the primary interface controller through the interactive user interface, and a corresponding action is delegated to one or more subsystems of the machine using the distributed node system network.
Type: Application
Filed: November 7, 2023
Publication date: February 29, 2024
Inventors: Patrick S. Piemonte, Jason D. Gosnell, Kjell F. Bronder, Daniel De Rocha Rosario, Shaun D. Budhram, Scott Herz
-
Patent number: 11422694
Abstract: A multitouch device can interpret and disambiguate different gestures related to manipulating a displayed image of a 3D object, scene, or region. Examples of manipulations include pan, zoom, rotation, and tilt. The device can define a number of manipulation modes, including one or more single-control modes such as a pan mode, a zoom mode, a rotate mode, and/or a tilt mode. The manipulation modes can also include one or more multi-control modes, such as a pan/zoom/rotate mode that allows multiple parameters to be modified simultaneously.
Type: Grant
Filed: September 17, 2020
Date of Patent: August 23, 2022
Assignee: Apple Inc.
Inventors: Patrick S. Piemonte, Bradford A. Moore, Billy P. Chen
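One plausible way to read the mode disambiguation is threshold-based: a gesture component that exceeds its threshold alone selects a single-control mode, while several at once select the multi-control mode. The Swift sketch below assumes that reading; the thresholds and the GestureDelta type are invented, and tilt (e.g., a two-finger vertical swipe) is omitted for brevity.

```swift
import Foundation

// Manipulation modes from the abstract: single-control modes lock one
// parameter; the multi-control mode lets several change at once.
enum ManipulationMode {
    case pan, zoom, rotate
    case panZoomRotate
}

// Hypothetical per-frame deltas derived from a two-finger touch pair.
struct GestureDelta {
    let translation: SIMD2<Double>  // centroid movement, in points
    let scale: Double               // pinch ratio, 1.0 = unchanged
    let rotation: Double            // radians swept by the touch vector
}

// Disambiguation sketch: one component over threshold selects a
// single-control mode, several select the multi-control mode, and none
// leaves the gesture undecided.
func classify(_ d: GestureDelta) -> ManipulationMode? {
    let panning = (d.translation * d.translation).sum().squareRoot() > 10
    let zooming = abs(d.scale - 1.0) > 0.05
    let rotating = abs(d.rotation) > 0.1
    switch (panning, zooming, rotating) {
    case (false, false, false): return nil
    case (true, false, false): return .pan
    case (false, true, false): return .zoom
    case (false, false, true): return .rotate
    default: return .panZoomRotate
    }
}

// Example: a pinch with negligible movement and twist reads as a zoom.
if let mode = classify(GestureDelta(translation: [2, 1], scale: 1.3, rotation: 0.02)) {
    print(mode)  // zoom
}
```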
-
Publication number: 20210247203
Abstract: Implementations described and claimed herein provide systems and methods for interaction between a user and a machine. In one implementation, machine status information for the machine is received at a dedicated machine component. The machine status information is published onto a distributed node system network of the machine. The machine status information is ingested at a primary interface controller, and an interactive user interface is generated using the primary interface controller. The interactive user interface is generated based on the machine status information. In some implementations, input is received from the user at the primary interface controller through the interactive user interface, and a corresponding action is delegated to one or more subsystems of the machine using the distributed node system network.
Type: Application
Filed: February 23, 2021
Publication date: August 12, 2021
Inventors: Patrick S. Piemonte, Jason D. Gosnell, Kjell F. Bronder, Daniel De Rocha Rosario, Shaun D. Budhram, Scott Herz
-
Patent number: 11069091
Abstract: Communications devices and methods perform spatial, visual content and a separate preview of other content apart from the performed content. Content may include 3-D performances or AR content. Immersive visual content may be received by the communications device and simplified into transcript cells and/or performed render nodes based on metadata, visual attributes, and/or capabilities of the communications device for performance. Render nodes may preview other content, which is performable and selectable with ease from the communications device. Devices may perform both a piece of content and display, in context, render nodes for other visual content, as well as buffer and prepare unseen other content such that a user may seamlessly preview, select, and perform other visual content. Example GUIs may arrange nodes at a distance or arrayed along a selection line in the same coordinates as performed visual content. Users may input commands to move between or modify the nodes.
Type: Grant
Filed: August 19, 2019
Date of Patent: July 20, 2021
Inventor: Patrick S. Piemonte
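The selection-line arrangement mentioned at the end of the abstract lends itself to a small sketch: preview nodes spaced evenly along a line in the same coordinate space as the performed content. RenderNode and layoutNodes are hypothetical names, and this covers only the layout step, not buffering or performance.

```swift
import Foundation

// Hypothetical preview node for a piece of visual content.
struct RenderNode {
    let contentID: String
    var position: SIMD3<Double>
}

// Arrange preview nodes along a selection line in the same coordinate
// space as the performed content, as the abstract's example GUI describes.
func layoutNodes(ids: [String],
                 lineStart: SIMD3<Double>,
                 lineEnd: SIMD3<Double>) -> [RenderNode] {
    guard ids.count > 1 else {
        return ids.map { RenderNode(contentID: $0, position: lineStart) }
    }
    let step = (lineEnd - lineStart) / Double(ids.count - 1)
    return ids.enumerated().map { i, id in
        RenderNode(contentID: id, position: lineStart + Double(i) * step)
    }
}

// Example: three previews spaced evenly two meters in front of the viewer.
for node in layoutNodes(ids: ["clipA", "clipB", "clipC"],
                        lineStart: [-1, 0, -2], lineEnd: [1, 0, -2]) {
    print(node.contentID, node.position)
}
```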
-
Publication number: 20210166490
Abstract: An AR system that leverages a pre-generated 3D model of the world to improve rendering of 3D graphics content for AR views of a scene, for example an AR view of the world in front of a moving vehicle. By leveraging the pre-generated 3D model, the AR system may use a variety of techniques to enhance the rendering capabilities of the system. The AR system may obtain pre-generated 3D data (e.g., 3D tiles) from a remote source (e.g., cloud-based storage), and may use this pre-generated 3D data (e.g., a combination of 3D mesh, textures, and other geometry information) to augment local data (e.g., a point cloud of data collected by vehicle sensors) to determine much more information about a scene, including information about occluded or distant regions of the scene, than is available from the local data.
Type: Application
Filed: February 12, 2021
Publication date: June 3, 2021
Applicant: Apple Inc.
Inventors: Patrick S. Piemonte, Daniel De Rocha Rosario, Jason D. Gosnell, Peter Meier
-
Publication number: 20210141525
Abstract: A multitouch device can interpret and disambiguate different gestures related to manipulating a displayed image of a 3D object, scene, or region. Examples of manipulations include pan, zoom, rotation, and tilt. The device can define a number of manipulation modes, including one or more single-control modes such as a pan mode, a zoom mode, a rotate mode, and/or a tilt mode. The manipulation modes can also include one or more multi-control modes, such as a pan/zoom/rotate mode that allows multiple parameters to be modified simultaneously.
Type: Application
Filed: September 17, 2020
Publication date: May 13, 2021
Inventors: Patrick S. Piemonte, Bradford A. Moore, Billy P. Chen
-
Patent number: 10976178
Abstract: Implementations described and claimed herein provide systems and methods for interaction between a user and a machine. In one implementation, machine status information for the machine is received at a dedicated machine component. The machine status information is published onto a distributed node system network of the machine. The machine status information is ingested at a primary interface controller, and an interactive user interface is generated using the primary interface controller. The interactive user interface is generated based on the machine status information. In some implementations, input is received from the user at the primary interface controller through the interactive user interface, and a corresponding action is delegated to one or more subsystems of the machine using the distributed node system network.
Type: Grant
Filed: September 21, 2016
Date of Patent: April 13, 2021
Inventors: Patrick S. Piemonte, Jason D. Gosnell, Kjell F. Bronder, Daniel De Rocha Rosario, Shaun D. Budhram, Scott Herz
-
Publication number: 20210056733
Abstract: Communications devices and methods perform spatial, visual content and a separate preview of other content apart from the performed content. Content may include 3-D performances or AR content. Immersive visual content may be received by the communications device and simplified into transcript cells and/or performed render nodes based on metadata, visual attributes, and/or capabilities of the communications device for performance. Render nodes may preview other content, which is performable and selectable with ease from the communications device. Devices may perform both a piece of content and display, in context, render nodes for other visual content, as well as buffer and prepare unseen other content such that a user may seamlessly preview, select, and perform other visual content. Example GUIs may arrange nodes at a distance or arrayed along a selection line in the same coordinates as performed visual content. Users may input commands to move between or modify the nodes.
Type: Application
Filed: August 19, 2019
Publication date: February 25, 2021
Inventor: Patrick S. Piemonte
-
Patent number: 10922886
Abstract: An AR system that leverages a pre-generated 3D model of the world to improve rendering of 3D graphics content for AR views of a scene, for example an AR view of the world in front of a moving vehicle. By leveraging the pre-generated 3D model, the AR system may use a variety of techniques to enhance the rendering capabilities of the system. The AR system may obtain pre-generated 3D data (e.g., 3D tiles) from a remote source (e.g., cloud-based storage), and may use this pre-generated 3D data (e.g., a combination of 3D mesh, textures, and other geometry information) to augment local data (e.g., a point cloud of data collected by vehicle sensors) to determine much more information about a scene, including information about occluded or distant regions of the scene, than is available from the local data.
Type: Grant
Filed: September 22, 2017
Date of Patent: February 16, 2021
Assignee: Apple Inc.
Inventors: Patrick S. Piemonte, Daniel De Rocha Rosario, Jason D. Gosnell, Peter Meier
-
Patent number: 10782873
Abstract: A multitouch device can interpret and disambiguate different gestures related to manipulating a displayed image of a 3D object, scene, or region. Examples of manipulations include pan, zoom, rotation, and tilt. The device can define a number of manipulation modes, including one or more single-control modes such as a pan mode, a zoom mode, a rotate mode, and/or a tilt mode. The manipulation modes can also include one or more multi-control modes, such as a pan/zoom/rotate mode that allows multiple parameters to be modified simultaneously.
Type: Grant
Filed: December 20, 2016
Date of Patent: September 22, 2020
Assignee: Apple Inc.
Inventors: Patrick S. Piemonte, Bradford A. Moore, Billy P. Chen
-
Publication number: 20200233212
Abstract: Implementations described and claimed herein provide systems and methods for interaction between a user and a machine. In one implementation, a system is provided that receives an input from a user of a mobile machine which indicates or describes an object in the world. In one example, the user may gesture to the object, which is detected by a visual sensor. In another example, the user may verbally describe the object, which is detected by an audio sensor. The system receiving the input may then determine which object near the user's location the user is indicating. Such a determination may include utilizing known objects near the geographic location of the user or of the autonomous or mobile machine.
Type: Application
Filed: February 10, 2020
Publication date: July 23, 2020
Inventors: Patrick S. Piemonte, Wolf Kienzle, Douglas Bowman, Shaun D. Budhram, Madhurani R. Sapre, Vyacheslav Leizerovich, Daniel De Rocha Rosario
-
Patent number: 10621945
Abstract: Methods, systems and apparatus are described to dynamically generate map textures. A client device may obtain map data, which may include one or more shapes described by vector graphics data. Along with the one or more shapes, embodiments may include texture indicators linked to the one or more shapes. Embodiments may render the map data. For one or more shapes, a texture definition may be obtained. Based on the texture definition, a client device may dynamically generate a texture for the shape. The texture may then be applied to the shape to render a current fill portion of the shape. In some embodiments, the rendered map view is displayed.
Type: Grant
Filed: October 22, 2018
Date of Patent: April 14, 2020
Assignee: Apple Inc.
Inventors: Marcel Van Os, Patrick S. Piemonte, Billy P. Chen, Christopher Blumenberg
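A toy version of the dynamic generation step this abstract describes: resolve a shape's texture indicator to a texture definition, then procedurally fill a pixel buffer from it. The stripe pattern, the TextureDefinition fields, and the packed-RGBA choice are all assumptions made for illustration.

```swift
import Foundation

// Hypothetical texture definition resolved from a shape's texture indicator.
struct TextureDefinition {
    let stripeWidth: Int    // must be > 0
    let foreground: UInt32  // packed RGBA
    let background: UInt32
}

// Dynamically generate a small tiling texture (a stripe pattern here) from
// the definition; a real client would upload this buffer to the GPU and
// use it to render the shape's fill.
func generateTexture(_ def: TextureDefinition, size: Int = 64) -> [UInt32] {
    var pixels = [UInt32](repeating: def.background, count: size * size)
    for y in 0..<size {
        for x in 0..<size where ((x + y) / def.stripeWidth) % 2 == 0 {
            pixels[y * size + x] = def.foreground
        }
    }
    return pixels
}

// Example: a shape tagged "park" might resolve to a green stripe texture.
let park = TextureDefinition(stripeWidth: 4, foreground: 0x2E8B_57FF, background: 0xD0F0_C0FF)
let texture = generateTexture(park)
print(texture.count)  // 4096 pixels, ready to apply as the shape's fill
```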
-
Patent number: 10558037
Abstract: Implementations described and claimed herein provide systems and methods for interaction between a user and a machine. In one implementation, a system is provided that receives an input from a user of a mobile machine which indicates or describes an object in the world. In one example, the user may gesture to the object, which is detected by a visual sensor. In another example, the user may verbally describe the object, which is detected by an audio sensor. The system receiving the input may then determine which object near the user's location the user is indicating. Such a determination may include utilizing known objects near the geographic location of the user or of the autonomous or mobile machine.
Type: Grant
Filed: September 20, 2017
Date of Patent: February 11, 2020
Inventors: Patrick S. Piemonte, Wolf Kienzle, Douglas Bowman, Shaun D. Budhram, Madhurani R. Sapre, Vyacheslav Leizerovich, Daniel De Rocha Rosario
-
Patent number: 10504288
Abstract: Methods, hardware, and software create augmented reality through several distinct users. Different users may select locations for the augmented reality creation and add augmented objects, elements, and other perceivables to underlying reality through separate communications devices. The users may interface with a GUI for augmenting underlying media, including use of several tools to add particular and separate augmented features. Users may take turns separately editing and adding augmented elements, and once the users are finished collaborating, the resulting augmented reality can be shared with others for re-creation and performance at the selected locations. Users may invite each other to collaborate on augmented reality through the GUI as well, potentially in a contact-based invitation method or any other known communication or chat configuration.
Type: Grant
Filed: April 17, 2018
Date of Patent: December 10, 2019
Assignee: Patrick Piemonte & Ryan Staake
Inventors: Patrick S. Piemonte, Ryan P. Staake
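The turn-taking collaboration can be sketched as a small session object that only accepts an element from the collaborator whose turn it is. SharedARCreation, AugmentedElement, and the strict alternation rule are hypothetical simplifications, not the policy the patent actually claims.

```swift
import Foundation

// Hypothetical augmented element anchored at a real-world location.
struct AugmentedElement {
    let author: String
    let kind: String             // e.g. "sticker", "3dModel", "caption"
    let location: SIMD2<Double>  // latitude, longitude of the anchor
}

// Turn-based collaboration sketch: invited users alternate edits, and the
// finished composition could then be shared for re-creation at its locations.
final class SharedARCreation {
    private(set) var elements: [AugmentedElement] = []
    private(set) var collaborators: [String]
    private var turn = 0

    init(creator: String) { collaborators = [creator] }

    func invite(_ user: String) { collaborators.append(user) }

    // Only the user whose turn it is may add an element.
    func add(_ element: AugmentedElement) -> Bool {
        guard element.author == collaborators[turn % collaborators.count] else {
            return false
        }
        elements.append(element)
        turn += 1
        return true
    }
}

// Example: two collaborators alternate; an out-of-turn edit is rejected.
let session = SharedARCreation(creator: "patrick")
session.invite("ryan")
print(session.add(AugmentedElement(author: "patrick", kind: "sticker",
                                   location: [37.33, -122.01])))  // true
print(session.add(AugmentedElement(author: "patrick", kind: "caption",
                                   location: [37.33, -122.01])))  // false, ryan's turn
```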
-
Publication number: 20190318540
Abstract: Methods, hardware, and software create augmented reality through several distinct users. Different users may select locations for the augmented reality creation and add augmented objects, elements, and other perceivables to underlying reality through separate communications devices. The users may interface with a GUI for augmenting underlying media, including use of several tools to add particular and separate augmented features. Users may take turns separately editing and adding augmented elements, and once the users are finished collaborating, the resulting augmented reality can be shared with others for re-creation and performance at the selected locations. Users may invite each other to collaborate on augmented reality through the GUI as well, potentially in a contact-based invitation method or any other known communication or chat configuration.
Type: Application
Filed: April 17, 2018
Publication date: October 17, 2019
Inventors: Patrick S. Piemonte, Ryan P. Staake
-
Patent number: 10437460
Abstract: Methods and apparatus for a map tool on a mobile device for implementing cartographically aware gestures directed to a map view of a map region. The map tool may base a cartographically aware gesture on an actual gesture input directed to the map view and on map data for the map region, which may include metadata corresponding to elements within the map region. The map tool may then determine, based on one or more elements of the map data, a modification to be applied to the implementation of the gesture. Given the modification to the gesture implementation, the map tool may then render an updated map view based on the modified gesture, instead of an updated map view based solely on the user gesture.
Type: Grant
Filed: September 11, 2012
Date of Patent: October 8, 2019
Assignee: Apple Inc.
Inventors: Bradford A. Moore, Billy P. Chen, Christopher Blumenberg, Patrick S. Piemonte
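As one concrete guess at what a cartographically aware modification might look like: when map metadata says the view is following a route, project the raw pan onto the route direction so the map tracks the road instead of drifting off it. The Swift below assumes exactly that scenario; MapMetadata, PanGesture, and the projection rule are invented for illustration.

```swift
import Foundation

// Hypothetical map-element metadata relevant to the current map view.
struct MapMetadata {
    let routeDirection: SIMD2<Double>?  // unit vector of a followed route, if any
}

// A raw pan gesture in screen points.
struct PanGesture { let delta: SIMD2<Double> }

// Cartographically aware modification: with a route under the view,
// project the user's pan onto the route direction; otherwise apply the
// gesture unchanged.
func modifiedPan(_ gesture: PanGesture, metadata: MapMetadata) -> SIMD2<Double> {
    guard let dir = metadata.routeDirection else { return gesture.delta }
    let along = (gesture.delta * dir).sum()  // scalar projection onto the route
    return along * dir
}

// Example: a diagonal pan is constrained to the east-west route under it.
let constrained = modifiedPan(PanGesture(delta: [12, 7]),
                              metadata: MapMetadata(routeDirection: [1, 0]))
print(constrained)  // SIMD2<Double>(12.0, 0.0)
```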