Patents by Inventor Semih Energin
Semih Energin has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).
-
Patent number: 12205228
Abstract: An improved human-computer interface (“HCI”) is disclosed herein for viewing a three-dimensional (“3D”) representation of a real-world environment from different, changing, and/or multiple perspectives. An AR device may capture, in real-time, a 3D representation of a scene using a surface reconstruction (“SR”) camera and a traditional Red Green & Blue (“RGB”) camera. The 3D representation may be transmitted to and viewed on a user's computing device, enabling the user to navigate the 3D representation. The user may view the 3D representation in a free-third-person mode, enabling the user to virtually walk or fly through the representation captured by the AR device. The user may also select a floor plan mode for a top-down or isomorphic perspective. Enabling a user to view a scene from different perspectives enhances understanding, speeds trouble-shooting, and fundamentally improves the capability of the computing device, the AR device, and the combination thereof.
Type: Grant
Filed: April 10, 2023
Date of Patent: January 21, 2025
Assignee: MICROSOFT TECHNOLOGY LICENSING, LLC
Inventors: Semih Energin, Jeffrey Jesus Evertt
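The free-third-person and floor plan modes described in this abstract amount to two camera configurations over the same captured scene. As a rough illustrative sketch only (not the patented implementation; the class and field names here are hypothetical), the mode switch might look like:

```python
from dataclasses import dataclass

@dataclass
class Camera:
    position: tuple  # (x, y, z) in scene coordinates
    pitch: float     # degrees; -90 looks straight down

class SceneViewer:
    """Toggles between a free third-person view and a top-down floor-plan view."""

    def __init__(self):
        # Start in third-person mode at roughly eye height.
        self.camera = Camera(position=(0.0, 1.7, -3.0), pitch=-10.0)
        self.mode = "third_person"

    def set_floor_plan_mode(self, scene_height=10.0):
        # Place the camera above the scene, looking straight down.
        x, _, z = self.camera.position
        self.camera = Camera(position=(x, scene_height, z), pitch=-90.0)
        self.mode = "floor_plan"

    def set_third_person_mode(self):
        # Return to a walk/fly-through perspective near the ground plane.
        x, _, z = self.camera.position
        self.camera = Camera(position=(x, 1.7, z), pitch=-10.0)
        self.mode = "third_person"
```

The key point is that both modes view the same 3D representation; only the virtual camera's height and pitch change.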
-
Publication number: 20230245395
Abstract: An improved human-computer interface (“HCI”) is disclosed herein for viewing a three-dimensional (“3D”) representation of a real-world environment from different, changing, and/or multiple perspectives. An AR device may capture, in real-time, a 3D representation of a scene using a surface reconstruction (“SR”) camera and a traditional Red Green & Blue (“RGB”) camera. The 3D representation may be transmitted to and viewed on a user's computing device, enabling the user to navigate the 3D representation. The user may view the 3D representation in a free-third-person mode, enabling the user to virtually walk or fly through the representation captured by the AR device. The user may also select a floor plan mode for a top-down or isomorphic perspective. Enabling a user to view a scene from different perspectives enhances understanding, speeds trouble-shooting, and fundamentally improves the capability of the computing device, the AR device, and the combination thereof.
Type: Application
Filed: April 10, 2023
Publication date: August 3, 2023
Inventors: Semih Energin, Jeffrey Jesus Evertt
-
Patent number: 11651555
Abstract: An improved human-computer interface (“HCI”) is disclosed herein for viewing a three-dimensional (“3D”) representation of a real-world environment from different, changing, and/or multiple perspectives. An AR device may capture, in real-time, a 3D representation of a scene using a surface reconstruction (“SR”) camera and a traditional Red Green & Blue (“RGB”) camera. The 3D representation may be transmitted to and viewed on a user's computing device, enabling the user to navigate the 3D representation. The user may view the 3D representation in a free-third-person mode, enabling the user to virtually walk or fly through the representation captured by the AR device. The user may also select a floor plan mode for a top-down or isomorphic perspective. Enabling a user to view a scene from different perspectives enhances understanding, speeds trouble-shooting, and fundamentally improves the capability of the computing device, the AR device, and the combination thereof.
Type: Grant
Filed: May 31, 2018
Date of Patent: May 16, 2023
Assignee: MICROSOFT TECHNOLOGY LICENSING, LLC
Inventors: Semih Energin, Jeffrey Jesus Evertt
-
Patent number: 10792564
Abstract: Techniques for coordination of content presentation operations are described herein. In some cases, a client may generate client metadata associated with client event data. The client metadata may include, for example, an indication of any one or more of a time, a frame, a location, an angle, a direction, a speed, a force, or other information associated with the client event data. Also, in some cases, the content provider may generate content provider metadata associated with image data. For example, the content provider metadata may indicate a location of a virtual camera associated with the respective image data and/or a location of one or more objects represented within the respective image data.
Type: Grant
Filed: November 3, 2017
Date of Patent: October 6, 2020
Assignee: Amazon Technologies, Inc.
Inventors: John Russell Seghers, Semih Energin, Forrest Power Trepte, James Jefferson Gault, Quais Taraki, Robin Dale Reigstad, Jr., Noah Lake Callaway
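The coordination described here pairs metadata generated on each side of a client/content-provider connection so that an input event can be matched to the image frame it relates to. As an illustrative sketch only (the field names and the frame-based join are assumptions, not taken from the patent claims):

```python
def make_client_metadata(event_id, time_s, frame, location, angle, speed):
    # Client-side metadata tying an input event to render state.
    # Fields mirror the examples in the abstract (time, frame, location, angle, speed).
    return {"event_id": event_id, "time_s": time_s, "frame": frame,
            "location": location, "angle": angle, "speed": speed}

def make_provider_metadata(frame, camera_location, object_locations):
    # Provider-side metadata describing the rendered image for the same frame:
    # the virtual camera's location and locations of objects in the image.
    return {"frame": frame, "camera_location": camera_location,
            "objects": object_locations}

def correlate(client_md, provider_md_by_frame):
    # Coordinate presentation by joining both sides on the frame identifier.
    # Returns None when no provider metadata exists for that frame.
    return provider_md_by_frame.get(client_md["frame"])
```

Joining on a shared frame identifier is one plausible way to coordinate the two metadata streams; the patent family covers the broader idea, not this specific mechanism.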
-
Patent number: 10729976
Abstract: Techniques for coordination of content presentation operations are described herein. In some cases, a client may generate client metadata associated with client event data. The client metadata may include, for example, an indication of any one or more of a time, a frame, a location, an angle, a direction, a speed, a force, or other information associated with the client event data. Also, in some cases, the content provider may generate content provider metadata associated with image data. For example, the content provider metadata may indicate a location of a virtual camera associated with the respective image data and/or a location of one or more objects represented within the respective image data.
Type: Grant
Filed: October 20, 2017
Date of Patent: August 4, 2020
Assignee: Amazon Technologies, Inc.
Inventors: Semih Energin, John Russell Seghers
-
Patent number: 10672103
Abstract: A method for moving a virtual object includes displaying a virtual object and moving the virtual object based on a user input. Based on the user input attempting to move the virtual object in violation of an obstacle, displaying a collision indicator and an input indicator. The collision indicator is moved based on user input and movement constraints imposed by the obstacle. The input indicator is moved based on user input without movement constraints imposed by the obstacle.
Type: Grant
Filed: May 30, 2019
Date of Patent: June 2, 2020
Assignee: MICROSOFT TECHNOLOGY LICENSING, LLC
Inventors: Semih Energin, Sergio Paolantonio, David Evans, Eric Scott Rehmeyer, Robert Thomas Held, Maxime Ouellet, Anatolie Gavriliuc, Riccardo Giraldi, Andrew Frederick Muehlhausen
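The core of this abstract is that two indicators diverge when the user pushes an object into an obstacle: the input indicator follows the raw input freely, while the collision indicator stops at the obstacle. A minimal sketch of that split, assuming an axis-aligned allowed region as the obstacle constraint (a simplification; the patent is not limited to this geometry):

```python
def clamp(value, lo, hi):
    # Constrain a coordinate to the closed interval [lo, hi].
    return max(lo, min(value, hi))

def move_indicators(input_pos, delta, region_min, region_max):
    """Advance both indicators for one input step.

    input_pos / delta: current input-indicator position and requested movement.
    region_min / region_max: bounds of the unobstructed region (hypothetical
    stand-in for the obstacle's movement constraints).
    """
    # Input indicator: tracks the raw user input with no constraints.
    new_input = tuple(p + d for p, d in zip(input_pos, delta))
    # Collision indicator: same target, but clamped by the obstacle.
    new_collision = tuple(clamp(p, lo, hi)
                          for p, lo, hi in zip(new_input, region_min, region_max))
    return new_input, new_collision
```

While the object stays clear of the obstacle the two indicators coincide; they separate exactly when the input would violate the constraint.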
-
Patent number: 10606609
Abstract: A technique is described herein for presenting notifications associated with applications in a context-based manner. In one implementation, the technique maintains a data store that provides application annotation information that describes a plurality of anchors. For instance, the application annotation information for an illustrative anchor identifies: a location at which the anchor is virtually placed in an interactive world; an application associated with the anchor; and triggering information that describes a set of one or more triggering conditions to be satisfied to enable presentation of a notification pertaining to the application. In use, the technique presents the notification pertaining to the application in prescribed proximity to the anchor when it is determined that the user's engagement with the interactive world satisfies the anchor's set of triggering conditions. The triggering conditions can specify any combination of spatial factors, temporal factors, user co-presence factors, etc.
Type: Grant
Filed: February 11, 2019
Date of Patent: March 31, 2020
Assignee: Microsoft Technology Licensing, LLC
Inventors: Semih Energin, Anatolie Gavriliuc, Robert Thomas Held, Maxime Ouellet, Riccardo Giraldi, Andrew Frederick Muehlhausen, Sergio Paolantonio
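The anchor record described here carries a placement location, an owning application, and triggering conditions; a notification fires only when the user's engagement satisfies all of them. A simplified sketch combining one spatial and one temporal condition (the field names and the specific conditions are illustrative assumptions, not the patented data model):

```python
import math

class Anchor:
    def __init__(self, app, location, radius, active_hours):
        self.app = app                    # application associated with the anchor
        self.location = location          # (x, y, z) where the anchor is placed
        self.radius = radius              # spatial trigger: user must be this close
        self.active_hours = active_hours  # temporal trigger: (start_hour, end_hour)

    def should_notify(self, user_location, hour):
        # Present the notification only when every triggering condition holds.
        dist = math.dist(user_location, self.location)
        start, end = self.active_hours
        return dist <= self.radius and start <= hour < end
```

A real data store would hold many such anchors and evaluate them as the user moves through the interactive world; co-presence and other factors mentioned in the abstract would add further conditions to the conjunction.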
-
Publication number: 20190371060
Abstract: An improved human-computer interface (“HCI”) is disclosed herein for viewing a three-dimensional (“3D”) representation of a real-world environment from different, changing, and/or multiple perspectives. An AR device may capture, in real-time, a 3D representation of a scene using a surface reconstruction (“SR”) camera and a traditional Red Green & Blue (“RGB”) camera. The 3D representation may be transmitted to and viewed on a user's computing device, enabling the user to navigate the 3D representation. The user may view the 3D representation in a free-third-person mode, enabling the user to virtually walk or fly through the representation captured by the AR device. The user may also select a floor plan mode for a top-down or isomorphic perspective. Enabling a user to view a scene from different perspectives enhances understanding, speeds trouble-shooting, and fundamentally improves the capability of the computing device, the AR device, and the combination thereof.
Type: Application
Filed: May 31, 2018
Publication date: December 5, 2019
Inventors: Semih Energin, Jeffrey Jesus Evertt
-
Publication number: 20190279335
Abstract: A method for moving a virtual object includes displaying a virtual object and moving the virtual object based on a user input. Based on the user input attempting to move the virtual object in violation of an obstacle, displaying a collision indicator and an input indicator. The collision indicator is moved based on user input and movement constraints imposed by the obstacle. The input indicator is moved based on user input without movement constraints imposed by the obstacle.
Type: Application
Filed: May 30, 2019
Publication date: September 12, 2019
Applicant: Microsoft Technology Licensing, LLC
Inventors: Semih Energin, Sergio Paolantonio, David Evans, Eric Scott Rehmeyer, Robert Thomas Held, Maxime Ouellet, Anatolie Gavriliuc, Riccardo Giraldi, Andrew Frederick Muehlhausen
-
Publication number: 20190171463
Abstract: A technique is described herein for presenting notifications associated with applications in a context-based manner. In one implementation, the technique maintains a data store that provides application annotation information that describes a plurality of anchors. For instance, the application annotation information for an illustrative anchor identifies: a location at which the anchor is virtually placed in an interactive world; an application associated with the anchor; and triggering information that describes a set of one or more triggering conditions to be satisfied to enable presentation of a notification pertaining to the application. In use, the technique presents the notification pertaining to the application in prescribed proximity to the anchor when it is determined that the user's engagement with the interactive world satisfies the anchor's set of triggering conditions. The triggering conditions can specify any combination of spatial factors, temporal factors, user co-presence factors, etc.
Type: Application
Filed: February 11, 2019
Publication date: June 6, 2019
Inventors: Semih Energin, Anatolie Gavriliuc, Robert Thomas Held, Maxime Ouellet, Riccardo Giraldi, Andrew Frederick Muehlhausen, Sergio Paolantonio
-
Patent number: 10311543
Abstract: A method for moving a virtual object includes displaying a virtual object and moving the virtual object based on a user input. Based on the user input attempting to move the virtual object in violation of an obstacle, displaying a collision indicator and an input indicator. The collision indicator is moved based on user input and movement constraints imposed by the obstacle. The input indicator is moved based on user input without movement constraints imposed by the obstacle.
Type: Grant
Filed: October 27, 2016
Date of Patent: June 4, 2019
Assignee: MICROSOFT TECHNOLOGY LICENSING, LLC
Inventors: Semih Energin, Sergio Paolantonio, David Evans, Eric Scott Rehmeyer, Robert Thomas Held, Maxime Ouellet, Anatolie Gavriliuc, Riccardo Giraldi, Andrew Frederick Muehlhausen
-
Patent number: 10249095
Abstract: A technique is described herein for presenting notifications associated with applications in a context-based manner. In one implementation, the technique maintains a data store that provides application annotation information that describes a plurality of anchors. For instance, the application annotation information for an illustrative anchor identifies: a location at which the anchor is virtually placed in an interactive world; an application associated with the anchor; and triggering information that describes a set of one or more triggering conditions to be satisfied to enable presentation of a notification pertaining to the application. In use, the technique presents the notification pertaining to the application in prescribed proximity to the anchor when it is determined that the user's engagement with the interactive world satisfies the anchor's set of triggering conditions. The triggering conditions can specify any combination of spatial factors, temporal factors, user co-presence factors, etc.
Type: Grant
Filed: April 7, 2017
Date of Patent: April 2, 2019
Assignee: Microsoft Technology Licensing, LLC
Inventors: Semih Energin, Anatolie Gavriliuc, Robert Thomas Held, Maxime Ouellet, Riccardo Giraldi, Andrew Frederick Muehlhausen, Sergio Paolantonio
-
Patent number: 10158700
Abstract: Techniques for coordination of content presentation operations are described herein. In some cases, a client may generate client metadata associated with client event data. The client metadata may include, for example, an indication of any one or more of a time, a frame, a location, an angle, a direction, a speed, a force, or other information associated with the client event data. Also, in some cases, the content provider may generate content provider metadata associated with image data. For example, the content provider metadata may indicate a location of a virtual camera associated with the respective image data and/or a location of one or more objects represented within the respective image data.
Type: Grant
Filed: November 14, 2014
Date of Patent: December 18, 2018
Assignee: Amazon Technologies, Inc.
Inventors: John Russell Seghers, Semih Energin, Forrest Power Trepte, James Jefferson Gault, Quais Taraki, Robin Dale Reigstad, Jr., Noah Lake Callaway
-
Patent number: 10130885
Abstract: Techniques for enabling selection of one or more viewports from a scene representation are disclosed herein. In some aspects, scene configuration information including a position of at least one viewport relative to the scene may be received. Each of the at least one viewport may be associated with a streaming camera view. A scene representation may then be defined based, at least in part, on the scene configuration information. One or more viewport representations corresponding to each of the at least one viewport may be positioned within the scene representation, based, at least in part, on the scene configuration information. The scene representation, including the at least one viewport representation, may be displayed, for example, to a user. Each viewport representation may allow the respective streaming camera view associated with the corresponding viewport to be displayed, such as by selection of each viewport representation.
Type: Grant
Filed: November 21, 2017
Date of Patent: November 20, 2018
Assignee: Amazon Technologies, Inc.
Inventor: Semih Energin
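The mechanism in this abstract is a scene representation containing positioned viewport markers, where selecting a marker resolves to the streaming camera view it stands for. A loose illustrative sketch under assumed names (nearest-marker hit-testing is one possible selection scheme, not a claim from the patent):

```python
import math

class ViewportPicker:
    """Positions viewport markers in a scene representation and resolves a
    selection point to the corresponding streaming camera view."""

    def __init__(self):
        self.viewports = []  # list of (position, stream_id) pairs

    def add_viewport(self, position, stream_id):
        # Place a viewport representation at a position taken from the
        # scene configuration information.
        self.viewports.append((position, stream_id))

    def select(self, point, tolerance=1.0):
        # Return the stream for the marker nearest the selection point,
        # or None if no marker lies within the tolerance.
        best, best_dist = None, tolerance
        for position, stream_id in self.viewports:
            dist = math.dist(point, position)
            if dist <= best_dist:
                best, best_dist = stream_id, dist
        return best
```

Displaying the returned stream (for instance, swapping the main view to that camera feed) is the step the abstract describes as the result of selecting a viewport representation.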
-
Publication number: 20180293798
Abstract: A technique is described herein for presenting notifications associated with applications in a context-based manner. In one implementation, the technique maintains a data store that provides application annotation information that describes a plurality of anchors. For instance, the application annotation information for an illustrative anchor identifies: a location at which the anchor is virtually placed in an interactive world; an application associated with the anchor; and triggering information that describes a set of one or more triggering conditions to be satisfied to enable presentation of a notification pertaining to the application. In use, the technique presents the notification pertaining to the application in prescribed proximity to the anchor when it is determined that the user's engagement with the interactive world satisfies the anchor's set of triggering conditions. The triggering conditions can specify any combination of spatial factors, temporal factors, user co-presence factors, etc.
Type: Application
Filed: April 7, 2017
Publication date: October 11, 2018
Inventors: Semih Energin, Anatolie Gavriliuc, Robert Thomas Held, Maxime Ouellet, Riccardo Giraldi, Andrew Frederick Muehlhausen, Sergio Paolantonio
-
Publication number: 20180122043
Abstract: A method for moving a virtual object includes displaying a virtual object and moving the virtual object based on a user input. Based on the user input attempting to move the virtual object in violation of an obstacle, displaying a collision indicator and an input indicator. The collision indicator is moved based on user input and movement constraints imposed by the obstacle. The input indicator is moved based on user input without movement constraints imposed by the obstacle.
Type: Application
Filed: October 27, 2016
Publication date: May 3, 2018
Inventors: Semih Energin, Sergio Paolantonio, David Evans, Eric Scott Rehmeyer, Robert Thomas Held, Maxime Ouellet, Anatolie Gavriliuc, Riccardo Giraldi, Andrew Frederick Muehlhausen
-
Patent number: 9849384
Abstract: Techniques for enabling selection of one or more viewports from a scene representation are disclosed herein. In some aspects, scene configuration information including a position of at least one viewport relative to the scene may be received. Each of the at least one viewport may be associated with a streaming camera view. A scene representation may then be defined based, at least in part, on the scene configuration information. One or more viewport representations corresponding to each of the at least one viewport may be positioned within the scene representation, based, at least in part, on the scene configuration information. The scene representation, including the at least one viewport representation, may be displayed, for example, to a user. Each viewport representation may allow the respective streaming camera view associated with the corresponding viewport to be displayed, such as by selection of each viewport representation.
Type: Grant
Filed: December 16, 2014
Date of Patent: December 26, 2017
Assignee: Amazon Technologies, Inc.
Inventor: Semih Energin
-
Patent number: 9839843
Abstract: Techniques for coordination of content presentation operations are described herein. In some cases, a client may generate client metadata associated with client event data. The client metadata may include, for example, an indication of any one or more of a time, a frame, a location, an angle, a direction, a speed, a force, or other information associated with the client event data. Also, in some cases, the content provider may generate content provider metadata associated with image data. For example, the content provider metadata may indicate a location of a virtual camera associated with the respective image data and/or a location of one or more objects represented within the respective image data.
Type: Grant
Filed: November 14, 2014
Date of Patent: December 12, 2017
Assignee: Amazon Technologies, Inc.
Inventors: John Russell Seghers, Semih Energin, Forrest Power Trepte, James Jefferson Gault, Quais Taraki, Robin Dale Reigstad, Jr., Noah Lake Callaway
-
Patent number: 9821222
Abstract: Techniques for coordination of content presentation operations are described herein. In some cases, a client may generate client metadata associated with client event data. The client metadata may include, for example, an indication of any one or more of a time, a frame, a location, an angle, a direction, a speed, a force, or other information associated with the client event data. Also, in some cases, the content provider may generate content provider metadata associated with image data. For example, the content provider metadata may indicate a location of a virtual camera associated with the respective image data and/or a location of one or more objects represented within the respective image data.
Type: Grant
Filed: November 14, 2014
Date of Patent: November 21, 2017
Assignee: Amazon Technologies, Inc.
Inventors: Semih Energin, John Russell Seghers