Patents by Inventor Semih Energin

Semih Energin has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Publication number: 20230245395
    Abstract: An improved human-computer interface (“HCI”) is disclosed herein for viewing a three-dimensional (“3D”) representation of a real-world environment from different, changing, and/or multiple perspectives. An AR device may capture, in real-time, a 3D representation of a scene using a surface reconstruction (“SR”) camera and a traditional Red Green & Blue (“RGB”) camera. The 3D representation may be transmitted to and viewed on a user's computing device, enabling the user to navigate the 3D representation. The user may view the 3D representation in a free-third-person mode, enabling the user to virtually walk or fly through the representation captured by the AR device. The user may also select a floor plan mode for a top-down or isomorphic perspective. Enabling a user to view a scene from different perspectives enhances understanding, speeds trouble-shooting, and fundamentally improves the capability of the computing device, the AR device, and the combination thereof.
    Type: Application
    Filed: April 10, 2023
    Publication date: August 3, 2023
    Inventors: Semih Energin, Jeffrey Jesus Evertt
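    Editor's note: the abstract in this entry describes switching between a free third-person view and a top-down floor-plan view of a captured 3D scene. The following is a minimal, hypothetical sketch of that mode-switching idea; the class names, fields, and the camera-placement rule are illustrative assumptions, not the patented method.
    ```python
    # Hypothetical sketch: a remote viewer chooses between a free third-person
    # camera and a top-down "floor plan" camera over a captured 3D scene.
    from dataclasses import dataclass
    from enum import Enum, auto


    class ViewMode(Enum):
        FREE_THIRD_PERSON = auto()   # walk/fly freely through the reconstruction
        FLOOR_PLAN = auto()          # fixed top-down perspective


    @dataclass
    class CameraPose:
        position: tuple[float, float, float]
        look_at: tuple[float, float, float]


    def camera_for_mode(mode: ViewMode,
                        user_position: tuple[float, float, float],
                        scene_center: tuple[float, float, float],
                        ceiling_height: float = 10.0) -> CameraPose:
        """Return a camera pose for the requested viewing mode."""
        if mode is ViewMode.FREE_THIRD_PERSON:
            # The viewer controls the position directly; look toward the scene.
            return CameraPose(position=user_position, look_at=scene_center)
        # Floor-plan mode: hover above the scene center and look straight down.
        x, _, z = scene_center
        return CameraPose(position=(x, ceiling_height, z), look_at=scene_center)


    if __name__ == "__main__":
        center = (2.0, 0.0, 3.0)
        print(camera_for_mode(ViewMode.FLOOR_PLAN, (0.0, 1.6, 0.0), center))
    ```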
  • Patent number: 11651555
    Abstract: An improved human-computer interface (“HCI”) is disclosed herein for viewing a three-dimensional (“3D”) representation of a real-world environment from different, changing, and/or multiple perspectives. An AR device may capture, in real-time, a 3D representation of a scene using a surface reconstruction (“SR”) camera and a traditional Red Green & Blue (“RGB”) camera. The 3D representation may be transmitted to and viewed on a user's computing device, enabling the user to navigate the 3D representation. The user may view the 3D representation in a free-third-person mode, enabling the user to virtually walk or fly through the representation captured by the AR device. The user may also select a floor plan mode for a top-down or isomorphic perspective. Enabling a user to view a scene from different perspectives enhances understanding, speeds trouble-shooting, and fundamentally improves the capability of the computing device, the AR device, and the combination thereof.
    Type: Grant
    Filed: May 31, 2018
    Date of Patent: May 16, 2023
    Assignee: Microsoft Technology Licensing, LLC
    Inventors: Semih Energin, Jeffrey Jesus Evertt
  • Patent number: 10792564
    Abstract: Techniques for coordination of content presentation operations are described herein. In some cases, a client may generate client metadata associated with client event data. The client metadata may include, for example, an indication of any one or more of a time, a frame, a location, an angle, a direction, a speed, a force, or other information associated with the client event data. Also, in some cases, the content provider may generate content provider metadata associated with image data. For example, the content provider metadata may indicate a location of a virtual camera associated with the respective image data and/or a location of one or more objects represented within the respective image data.
    Type: Grant
    Filed: November 3, 2017
    Date of Patent: October 6, 2020
    Assignee: Amazon Technologies, Inc.
    Inventors: John Russell Seghers, Semih Energin, Forrest Power Trepte, James Jefferson Gault, Quais Taraki, Robin Dale Reigstad, Jr., Noah Lake Callaway
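    Editor's note: the abstract in this entry describes pairing client-side event metadata (time, frame, input location, etc.) with provider-side frame metadata (virtual camera and object positions). Below is a minimal sketch of such a pairing, under the assumption that client and provider share a frame counter; all field names are illustrative, not drawn from the patent claims.
    ```python
    # Hypothetical sketch: a client tags input events with frame/time metadata,
    # the content provider tags rendered frames with virtual-camera metadata,
    # and the two are matched so presentation can be coordinated.
    from dataclasses import dataclass
    from typing import Optional


    @dataclass
    class ClientEventMetadata:
        frame: int                      # frame the event refers to
        timestamp_ms: float             # client-side time of the event
        location: tuple[float, float]   # e.g. screen coordinates of the input


    @dataclass
    class ProviderFrameMetadata:
        frame: int
        camera_position: tuple[float, float, float]  # virtual camera for this frame
        object_positions: dict[str, tuple[float, float, float]]


    def match_event_to_frame(event: ClientEventMetadata,
                             frames: list[ProviderFrameMetadata]
                             ) -> Optional[ProviderFrameMetadata]:
        """Find the provider metadata for the frame a client event refers to."""
        return next((f for f in frames if f.frame == event.frame), None)
    ```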
  • Patent number: 10729976
    Abstract: Techniques for coordination of content presentation operations are described herein. In some cases, a client may generate client metadata associated with client event data. The client metadata may include, for example, an indication of any one or more of a time, a frame, a location, an angle, a direction, a speed, a force, or other information associated with the client event data. Also, in some cases, the content provider may generate content provider metadata associated with image data. For example, the content provider metadata may indicate a location of a virtual camera associated with the respective image data and/or a location of one or more objects represented within the respective image data.
    Type: Grant
    Filed: October 20, 2017
    Date of Patent: August 4, 2020
    Assignee: Amazon Technologies, Inc.
    Inventors: Semih Energin, John Russell Seghers
  • Patent number: 10672103
    Abstract: A method for moving a virtual object includes displaying a virtual object and moving the virtual object based on a user input. Based on the user input attempting to move the virtual object in violation of an obstacle, displaying a collision indicator and an input indicator. The collision indicator is moved based on user input and movement constraints imposed by the obstacle. The input indicator is moved based on user input without movement constraints imposed by the obstacle.
    Type: Grant
    Filed: May 30, 2019
    Date of Patent: June 2, 2020
    Assignee: Microsoft Technology Licensing, LLC
    Inventors: Semih Energin, Sergio Paolantonio, David Evans, Eric Scott Rehmeyer, Robert Thomas Held, Maxime Ouellet, Anatolie Gavriliuc, Riccardo Giraldi, Andrew Frederick Muehlhausen
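    Editor's note: the abstract in this entry describes showing two indicators when user input would push a virtual object into an obstacle: an input indicator that follows the raw input and a collision indicator constrained by the obstacle. The 2D sketch below illustrates one way such a split could work; the clamping rule and names are assumptions, not the patented method.
    ```python
    # Hypothetical 2D sketch: the input indicator tracks the raw user input,
    # while the collision indicator never enters an axis-aligned obstacle.
    from dataclasses import dataclass


    @dataclass
    class Box:
        min_x: float
        min_y: float
        max_x: float
        max_y: float

        def contains(self, x: float, y: float) -> bool:
            return self.min_x < x < self.max_x and self.min_y < y < self.max_y


    def place_indicators(requested: tuple[float, float], obstacle: Box):
        """Return (input_indicator, collision_indicator) positions."""
        x, y = requested
        input_indicator = (x, y)            # always tracks the raw input
        if not obstacle.contains(x, y):
            return input_indicator, (x, y)  # no violation: indicators coincide
        # Push the collision indicator to the nearest obstacle edge.
        candidates = [
            (obstacle.min_x, y), (obstacle.max_x, y),
            (x, obstacle.min_y), (x, obstacle.max_y),
        ]
        collision_indicator = min(
            candidates, key=lambda p: (p[0] - x) ** 2 + (p[1] - y) ** 2)
        return input_indicator, collision_indicator
    ```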
  • Patent number: 10606609
    Abstract: A technique is described herein for presenting notifications associated with applications in a context-based manner. In one implementation, the technique maintains a data store that provides application annotation information that describes a plurality of anchors. For instance, the application annotation information for an illustrative anchor identifies: a location at which the anchor is virtually placed in an interactive world; an application associated with the anchor; and triggering information that describes a set of one or more triggering conditions to be satisfied to enable presentation of a notification pertaining to the application. In use, the technique presents the notification pertaining to the application in prescribed proximity to the anchor when it is determined that the user's engagement with the interactive world satisfies the anchor's set of triggering conditions. The triggering conditions can specify any combination of spatial factors, temporal factors, user co-presence factors, etc.
    Type: Grant
    Filed: February 11, 2019
    Date of Patent: March 31, 2020
    Assignee: Microsoft Technology Licensing, LLC
    Inventors: Semih Energin, Anatolie Gavriliuc, Robert Thomas Held, Maxime Ouellet, Riccardo Giraldi, Andrew Frederick Muehlhausen, Sergio Paolantonio
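    Editor's note: the abstract in this entry describes a data store of anchors, each recording a world location, an associated application, and triggering conditions (spatial, temporal, co-presence) that gate a notification. The sketch below models one possible shape of that data store; the trigger predicates (proximity, time window) and all names are assumed examples, not the patented design.
    ```python
    # Hypothetical sketch: anchors carry a location, an app, and triggering
    # conditions; a notification fires near the anchor once all conditions hold.
    from dataclasses import dataclass, field
    from typing import Callable
    import math


    @dataclass
    class UserContext:
        position: tuple[float, float, float]
        hour_of_day: int


    @dataclass
    class Anchor:
        app_id: str
        location: tuple[float, float, float]
        triggers: list[Callable[[UserContext, "Anchor"], bool]] = field(default_factory=list)


    def within_distance(max_m: float):
        def check(ctx: UserContext, anchor: Anchor) -> bool:
            return math.dist(ctx.position, anchor.location) <= max_m
        return check


    def within_hours(start: int, end: int):
        def check(ctx: UserContext, anchor: Anchor) -> bool:
            return start <= ctx.hour_of_day < end
        return check


    def pending_notifications(ctx: UserContext, anchors: list[Anchor]) -> list[str]:
        """Return app ids whose anchors have all triggering conditions satisfied."""
        return [a.app_id for a in anchors if all(t(ctx, a) for t in a.triggers)]


    if __name__ == "__main__":
        kitchen = Anchor("recipes", (1.0, 0.0, 2.0),
                         [within_distance(2.0), within_hours(17, 21)])
        ctx = UserContext(position=(1.5, 0.0, 2.0), hour_of_day=18)
        print(pending_notifications(ctx, [kitchen]))  # ['recipes']
    ```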
  • Publication number: 20190371060
    Abstract: An improved human-computer interface (“HCI”) is disclosed herein for viewing a three-dimensional (“3D”) representation of a real-world environment from different, changing, and/or multiple perspectives. An AR device may capture, in real-time, a 3D representation of a scene using a surface reconstruction (“SR”) camera and a traditional Red Green & Blue (“RGB”) camera. The 3D representation may be transmitted to and viewed on a user's computing device, enabling the user to navigate the 3D representation. The user may view the 3D representation in a free-third-person mode, enabling the user to virtually walk or fly through the representation captured by the AR device. The user may also select a floor plan mode for a top-down or isomorphic perspective. Enabling a user to view a scene from different perspectives enhances understanding, speeds trouble-shooting, and fundamentally improves the capability of the computing device, the AR device, and the combination thereof.
    Type: Application
    Filed: May 31, 2018
    Publication date: December 5, 2019
    Inventors: Semih Energin, Jeffrey Jesus Evertt
  • Publication number: 20190279335
    Abstract: A method for moving a virtual object includes displaying a virtual object and moving the virtual object based on a user input. Based on the user input attempting to move the virtual object in violation of an obstacle, displaying a collision indicator and an input indicator. The collision indicator is moved based on user input and movement constraints imposed by the obstacle. The input indicator is moved based on user input without movement constraints imposed by the obstacle.
    Type: Application
    Filed: May 30, 2019
    Publication date: September 12, 2019
    Applicant: Microsoft Technology Licensing, LLC
    Inventors: Semih Energin, Sergio Paolantonio, David Evans, Eric Scott Rehmeyer, Robert Thomas Held, Maxime Ouellet, Anatolie Gavriliuc, Riccardo Giraldi, Andrew Frederick Muehlhausen
  • Publication number: 20190171463
    Abstract: A technique is described herein for presenting notifications associated with applications in a context-based manner. In one implementation, the technique maintains a data store that provides application annotation information that describes a plurality of anchors. For instance, the application annotation information for an illustrative anchor identifies: a location at which the anchor is virtually placed in an interactive world; an application associated with the anchor; and triggering information that describes a set of one or more triggering conditions to be satisfied to enable presentation of a notification pertaining to the application. In use, the technique presents the notification pertaining to the application in prescribed proximity to the anchor when it is determined that the user's engagement with the interactive world satisfies the anchor's set of triggering conditions. The triggering conditions can specify any combination of spatial factors, temporal factors, user co-presence factors, etc.
    Type: Application
    Filed: February 11, 2019
    Publication date: June 6, 2019
    Inventors: Semih Energin, Anatolie Gavriliuc, Robert Thomas Held, Maxime Ouellet, Riccardo Giraldi, Andrew Frederick Muehlhausen, Sergio Paolantonio
  • Patent number: 10311543
    Abstract: A method for moving a virtual object includes displaying a virtual object and moving the virtual object based on a user input. Based on the user input attempting to move the virtual object in violation of an obstacle, displaying a collision indicator and an input indicator. The collision indicator is moved based on user input and movement constraints imposed by the obstacle. The input indicator is moved based on user input without movement constraints imposed by the obstacle.
    Type: Grant
    Filed: October 27, 2016
    Date of Patent: June 4, 2019
    Assignee: Microsoft Technology Licensing, LLC
    Inventors: Semih Energin, Sergio Paolantonio, David Evans, Eric Scott Rehmeyer, Robert Thomas Held, Maxime Ouellet, Anatolie Gavriliuc, Riccardo Giraldi, Andrew Frederick Muehlhausen
  • Patent number: 10249095
    Abstract: A technique is described herein for presenting notifications associated with applications in a context-based manner. In one implementation, the technique maintains a data store that provides application annotation information that describes a plurality of anchors. For instance, the application annotation information for an illustrative anchor identifies: a location at which the anchor is virtually placed in an interactive world; an application associated with the anchor; and triggering information that describes a set of one or more triggering conditions to be satisfied to enable presentation of a notification pertaining to the application. In use, the technique presents the notification pertaining to the application in prescribed proximity to the anchor when it is determined that the user's engagement with the interactive world satisfies the anchor's set of triggering conditions. The triggering conditions can specify any combination of spatial factors, temporal factors, user co-presence factors, etc.
    Type: Grant
    Filed: April 7, 2017
    Date of Patent: April 2, 2019
    Assignee: Microsoft Technology Licensing, LLC
    Inventors: Semih Energin, Anatolie Gavriliuc, Robert Thomas Held, Maxime Ouellet, Riccardo Giraldi, Andrew Frederick Muehlhausen, Sergio Paolantonio
  • Patent number: 10158700
    Abstract: Techniques for coordination of content presentation operations are described herein. In some cases, a client may generate client metadata associated with client event data. The client metadata may include, for example, an indication of any one or more of a time, a frame, a location, an angle, a direction, a speed, a force, or other information associated with the client event data. Also, in some cases, the content provider may generate content provider metadata associated with image data. For example, the content provider metadata may indicate a location of a virtual camera associated with the respective image data and/or a location of one or more objects represented within the respective image data.
    Type: Grant
    Filed: November 14, 2014
    Date of Patent: December 18, 2018
    Assignee: Amazon Technologies, Inc.
    Inventors: John Russell Seghers, Semih Energin, Forrest Power Trepte, James Jefferson Gault, Quais Taraki, Robin Dale Reigstad, Jr., Noah Lake Callaway
  • Patent number: 10130885
    Abstract: Techniques for enabling selection of one or more viewports from a scene representation are disclosed herein. In some aspects, scene configuration information including a position of at least one viewport relative to the scene may be received. Each of the at least one viewport may be associated with a streaming camera view. A scene representation may then be defined based, at least in part, on the scene configuration information. One or more viewport representations corresponding to each of the at least one viewport may be positioned within the scene representation, based, at least in part, on the scene configuration information. The scene representation, including the at least one viewport representation, may be displayed, for example, to a user. Each viewport representation may allow the respective streaming camera view associated with the corresponding viewport to be displayed, such as by selection of each viewport representation.
    Type: Grant
    Filed: November 21, 2017
    Date of Patent: November 20, 2018
    Assignee: Amazon Technologies, Inc.
    Inventor: Semih Energin
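    Editor's note: the abstract in this entry describes positioning viewport representations within a scene representation and displaying the streaming camera view tied to whichever viewport the user selects. The sketch below shows one possible shape of that selection flow; the class names and the stream_url field are illustrative assumptions, not taken from the patent.
    ```python
    # Hypothetical sketch: scene configuration lists viewport positions, each
    # tied to a streaming camera view; selecting a viewport returns its stream.
    from dataclasses import dataclass
    from typing import Optional


    @dataclass
    class ViewportConfig:
        viewport_id: str
        position: tuple[float, float]   # placement within the scene representation
        stream_url: str                 # streaming camera view for this viewport


    class SceneRepresentation:
        def __init__(self, viewports: list[ViewportConfig]):
            self._viewports = {v.viewport_id: v for v in viewports}

        def select(self, viewport_id: str) -> Optional[str]:
            """Return the camera stream for the selected viewport, if any."""
            chosen = self._viewports.get(viewport_id)
            return chosen.stream_url if chosen else None


    scene = SceneRepresentation([
        ViewportConfig("north", (0.2, 0.8), "rtsp://example.invalid/north"),
        ViewportConfig("south", (0.2, 0.1), "rtsp://example.invalid/south"),
    ])
    print(scene.select("north"))
    ```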
  • Publication number: 20180293798
    Abstract: A technique is described herein for presenting notifications associated with applications in a context-based manner. In one implementation, the technique maintains a data store that provides application annotation information that describes a plurality of anchors. For instance, the application annotation information for an illustrative anchor identifies: a location at which the anchor is virtually placed in an interactive world; an application associated with the anchor; and triggering information that describes a set of one or more triggering conditions to be satisfied to enable presentation of a notification pertaining to the application. In use, the technique presents the notification pertaining to the application in prescribed proximity to the anchor when it is determined that the user's engagement with the interactive world satisfies the anchor's set of triggering conditions. The triggering conditions can specify any combination of spatial factors, temporal factors, user co-presence factors, etc.
    Type: Application
    Filed: April 7, 2017
    Publication date: October 11, 2018
    Inventors: Semih Energin, Anatolie Gavriliuc, Robert Thomas Held, Maxime Ouellet, Riccardo Giraldi, Andrew Frederick Muehlhausen, Sergio Paolantonio
  • Publication number: 20180122043
    Abstract: A method for moving a virtual object includes displaying a virtual object and moving the virtual object based on a user input. Based on the user input attempting to move the virtual object in violation of an obstacle, displaying a collision indicator and an input indicator. The collision indicator is moved based on user input and movement constraints imposed by the obstacle. The input indicator is moved based on user input without movement constraints imposed by the obstacle.
    Type: Application
    Filed: October 27, 2016
    Publication date: May 3, 2018
    Inventors: Semih Energin, Sergio Paolantonio, David Evans, Eric Scott Rehmeyer, Robert Thomas Held, Maxime Ouellet, Anatolie Gavriliuc, Riccardo Giraldi, Andrew Frederick Muehlhausen
  • Patent number: 9849384
    Abstract: Techniques for enabling selection of one or more viewports from a scene representation are disclosed herein. In some aspects, scene configuration information including a position of at least one viewport relative to the scene may be received. Each of the at least one viewport may be associated with a streaming camera view. A scene representation may then be defined based, at least in part, on the scene configuration information. One or more viewport representations corresponding to each of the at least one viewport may be positioned within the scene representation, based, at least in part, on the scene configuration information. The scene representation, including the at least one viewport representation, may be displayed, for example, to a user. Each viewport representation may allow the respective streaming camera view associated with the corresponding viewport to be displayed, such as by selection of each viewport representation.
    Type: Grant
    Filed: December 16, 2014
    Date of Patent: December 26, 2017
    Assignee: Amazon Technologies, Inc.
    Inventor: Semih Energin
  • Patent number: 9839843
    Abstract: Techniques for coordination of content presentation operations are described herein. In some cases, a client may generate client metadata associated with client event data. The client metadata may include, for example, an indication of any one or more of a time, a frame, a location, an angle, a direction, a speed, a force, or other information associated with the client event data. Also, in some cases, the content provider may generate content provider metadata associated with image data. For example, the content provider metadata may indicate a location of a virtual camera associated with the respective image data and/or a location of one or more objects represented within the respective image data.
    Type: Grant
    Filed: November 14, 2014
    Date of Patent: December 12, 2017
    Assignee: Amazon Technologies, Inc.
    Inventors: John Russell Seghers, Semih Energin, Forrest Power Trepte, James Jefferson Gault, Quais Taraki, Robin Dale Reigstad, Jr., Noah Lake Callaway
  • Patent number: 9821222
    Abstract: Techniques for coordination of content presentation operations are described herein. In some cases, a client may generate client metadata associated with client event data. The client metadata may include, for example, an indication of any one or more of a time, a frame, a location, an angle, a direction, a speed, a force, or other information associated with the client event data. Also, in some cases, the content provider may generate content provider metadata associated with image data. For example, the content provider metadata may indicate a location of a virtual camera associated with the respective image data and/or a location of one or more objects represented within the respective image data.
    Type: Grant
    Filed: November 14, 2014
    Date of Patent: November 21, 2017
    Assignee: Amazon Technologies, Inc.
    Inventors: Semih Energin, John Russell Seghers