Patents by Inventor Grigor Shirakyan

Grigor Shirakyan has filed for patents to protect the following inventions. This listing includes pending patent applications as well as patents already granted by the United States Patent and Trademark Office (USPTO).

  • Patent number: 11146744
    Abstract: A system and method for dynamically modifying a live image of a subject using an automated interactive system is provided. The system includes a motorized platform including at least one actuator, a control unit including a processor and a storage device, and a payload including one or more sensors and a camera. The method includes (i) collecting sensor data about at least one of the subject and an environment, (ii) moving the camera along or around at least one degree of freedom, (iii) capturing the live image of the subject in at least one position with the camera, (iv) storing the live image of the subject in the storage device, (v) sending instructions to physically move the payload, (vi) applying at least one environment modification rule to modify the live image of the subject, and (vii) displaying a modified live image of the subject on a display unit.
    Type: Grant
    Filed: February 24, 2020
    Date of Patent: October 12, 2021
    Assignee: Emergent Machines, Inc.
    Inventor: Grigor Shirakyan
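
The seven numbered steps in this abstract amount to a sense-move-capture-modify-display loop. A minimal sketch of that loop follows; every function name is hypothetical (the patent publishes no code), and steps (ii) and (v) are condensed into one payload-motion call:

```python
def run_interaction_cycle(read_sensors, move_payload, capture, store,
                          modification_rules, display):
    """One cycle of the automated interactive imaging system, condensing
    steps (i)-(vii) of the abstract. Every callable here is a stand-in."""
    sensor_data = read_sensors()          # (i) collect data on subject/environment
    move_payload(sensor_data)             # (ii)/(v) physically move the camera payload
    image = capture()                     # (iii) capture the live image
    store(image)                          # (iv) persist it to the storage device
    modified = image
    for rule in modification_rules:       # (vi) apply environment modification rules
        modified = rule(modified, sensor_data)
    display(modified)                     # (vii) show the modified live image
    return modified
```

In practice each rule would be an image transform (background replacement, relighting, and so on); here a rule is just any callable taking the image and the sensor data.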
  • Publication number: 20200195860
    Abstract: A system and method for dynamically modifying a live image of a subject using an automated interactive system is provided. The system includes a motorized platform including at least one actuator, a control unit including a processor and a storage device, and a payload including one or more sensors and a camera. The method includes (i) collecting sensor data about at least one of the subject and an environment, (ii) moving the camera along or around at least one degree of freedom, (iii) capturing the live image of the subject in at least one position with the camera, (iv) storing the live image of the subject in the storage device, (v) sending instructions to physically move the payload, (vi) applying at least one environment modification rule to modify the live image of the subject, and (vii) displaying a modified live image of the subject on a display unit.
    Type: Application
    Filed: February 24, 2020
    Publication date: June 18, 2020
    Applicant: Emergent Machines, Inc.
    Inventor: Grigor Shirakyan
  • Patent number: 10062180
    Abstract: Various technologies described herein pertain to correction of an input depth image captured by a depth sensor. The input depth image can include pixels, and the pixels can have respective depth values in the input depth image. Moreover, per-pixel correction values for the pixels can be determined utilizing depth calibration data for a non-linear error model calibrated for the depth sensor. The per-pixel correction values can be determined based on portions of the depth calibration data respectively corresponding to the pixels and the depth values. The per-pixel correction values can be applied to the depth values to generate a corrected depth image. Further, the corrected depth image can be output.
    Type: Grant
    Filed: April 22, 2014
    Date of Patent: August 28, 2018
    Assignee: Microsoft Technology Licensing, LLC
    Inventors: Grigor Shirakyan, Michael Revow, Mihai Jalobeanu
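
The correction described here is per pixel: each pixel's calibration data yields an error estimate as a function of the measured depth, which is then removed. A toy sketch of that idea, using a quadratic error model as a stand-in (the patent specifies only that the model is non-linear, not its form):

```python
def correct_depth_image(depth, calib):
    """Apply per-pixel correction values to an input depth image.

    depth: 2-D list of measured depth values (e.g. millimetres).
    calib: 2-D list, same shape, of per-pixel coefficients (a, b, c) for a
           hypothetical non-linear error model err(z) = a*z^2 + b*z + c.
    Returns the corrected depth image.
    """
    corrected = []
    for depth_row, calib_row in zip(depth, calib):
        out_row = []
        for z, (a, b, c) in zip(depth_row, calib_row):
            err = a * z * z + b * z + c   # estimated sensor error at depth z
            out_row.append(z - err)       # correction value applied to the pixel
        corrected.append(out_row)
    return corrected
```

The essential point the abstract makes is that the correction depends on both the pixel's position (via its own calibration entry) and its measured depth value.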
  • Patent number: 10052766
    Abstract: Various technologies described herein pertain to automatic in-situ calibration and registration of a depth sensor and a robotic arm, where the depth sensor and the robotic arm operate in a workspace. The robotic arm can include an end effector. A non-parametric technique for registration between the depth sensor and the robotic arm can be implemented. The registration technique can utilize a sparse sampling of the workspace (e.g., collected during calibration or recalibration). A point cloud can be formed over calibration points and interpolation can be performed within the point cloud to map coordinates in a sensor coordinate frame to coordinates in an arm coordinate frame. Such technique can automatically incorporate intrinsic sensor parameters into transformations between the depth sensor and the robotic arm. Accordingly, an explicit model of intrinsics or biases of the depth sensor need not be utilized.
    Type: Grant
    Filed: November 10, 2015
    Date of Patent: August 21, 2018
    Assignee: Microsoft Technology Licensing, LLC
    Inventors: Grigor Shirakyan, Michael Revow, Mihai Jalobeanu, Bryan Joseph Thibodeau
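
The non-parametric registration amounts to scattered-data interpolation over the calibration point cloud: a sensor-frame query point is mapped to the arm frame by blending the arm-frame coordinates of nearby calibration pairs, with no explicit intrinsics model. A sketch using inverse-distance weighting (the specific interpolation scheme is an assumption here; the patent only says interpolation within the point cloud):

```python
import math

def register_point(sensor_pt, calibration_pairs, k=4):
    """Map a 3-D sensor-frame point to the arm frame by interpolating over
    the k nearest calibration pairs (sensor_xyz, arm_xyz), gathered during
    a sparse sampling of the workspace."""
    def dist(p, q):
        return math.sqrt(sum((a - b) ** 2 for a, b in zip(p, q)))

    nearest = sorted(calibration_pairs,
                     key=lambda pair: dist(sensor_pt, pair[0]))[:k]
    # Exact hit on a calibration point: return its arm coordinate directly.
    if dist(sensor_pt, nearest[0][0]) < 1e-12:
        return nearest[0][1]
    weights = [1.0 / dist(sensor_pt, s) for s, _ in nearest]
    total = sum(weights)
    return tuple(sum(w * arm[i] for w, (_, arm) in zip(weights, nearest)) / total
                 for i in range(3))
```

Because the mapping is learned pointwise from observed pairs, sensor intrinsics and biases are absorbed into the interpolation rather than modeled explicitly, which is the property the abstract emphasises.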
  • Patent number: 9878447
    Abstract: Data about a physical object in a real-world environment is automatically collected and labeled. A mechanical device is used to maneuver the object into different poses within a three-dimensional workspace in the real-world environment. While the object is in each different pose an image of the object is input from one or more sensors and data specifying the pose is input from the mechanical device. The image of the object input from each of the sensors for each different pose is labeled with the data specifying the pose and with information identifying the object. A database for the object that includes these labeled images can be generated. The labeled images can also be used to train a detector and classifier to detect and recognize the object when it is in an environment that is similar to the real-world environment.
    Type: Grant
    Filed: April 10, 2015
    Date of Patent: January 30, 2018
    Assignee: Microsoft Technology Licensing, LLC
    Inventors: Bryan J. Thibodeau, Michael Revow, Mihai Jalobeanu, Grigor Shirakyan
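
The collection-and-labeling loop described above pairs each captured image with the pose reported by the mechanical device and the object's identity. A hypothetical sketch (all names illustrative):

```python
def collect_labeled_dataset(object_id, poses, move_to_pose, sensors):
    """For each commanded pose, move the object, capture an image from every
    sensor, and label it with the reported pose and the object identity.

    move_to_pose: callable that maneuvers the object and returns the pose
                  the device actually reports achieving.
    sensors: dict mapping sensor name -> zero-argument capture callable.
    """
    dataset = []
    for pose in poses:
        reported_pose = move_to_pose(pose)        # pose data from the device
        for sensor_name, capture in sensors.items():
            dataset.append({
                "object": object_id,              # identity label
                "sensor": sensor_name,
                "pose": reported_pose,            # pose label
                "image": capture(),
            })
    return dataset
```

The resulting records are exactly what the abstract's database would hold, and could feed a detector/classifier training pipeline downstream.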
  • Publication number: 20160297068
    Abstract: Data about a physical object in a real-world environment is automatically collected and labeled. A mechanical device is used to maneuver the object into different poses within a three-dimensional workspace in the real-world environment. While the object is in each different pose an image of the object is input from one or more sensors and data specifying the pose is input from the mechanical device. The image of the object input from each of the sensors for each different pose is labeled with the data specifying the pose and with information identifying the object. A database for the object that includes these labeled images can be generated. The labeled images can also be used to train a detector and classifier to detect and recognize the object when it is in an environment that is similar to the real-world environment.
    Type: Application
    Filed: April 10, 2015
    Publication date: October 13, 2016
    Inventors: Bryan J. Thibodeau, Michael Revow, Mihai Jalobeanu, Grigor Shirakyan
  • Publication number: 20160059417
    Abstract: Various technologies described herein pertain to automatic in-situ calibration and registration of a depth sensor and a robotic arm, where the depth sensor and the robotic arm operate in a workspace. The robotic arm can include an end effector. A non-parametric technique for registration between the depth sensor and the robotic arm can be implemented. The registration technique can utilize a sparse sampling of the workspace (e.g., collected during calibration or recalibration). A point cloud can be formed over calibration points and interpolation can be performed within the point cloud to map coordinates in a sensor coordinate frame to coordinates in an arm coordinate frame. Such technique can automatically incorporate intrinsic sensor parameters into transformations between the depth sensor and the robotic arm. Accordingly, an explicit model of intrinsics or biases of the depth sensor need not be utilized.
    Type: Application
    Filed: November 10, 2015
    Publication date: March 3, 2016
    Inventors: Grigor Shirakyan, Michael Revow, Mihai Jalobeanu, Bryan Joseph Thibodeau
  • Publication number: 20150375396
    Abstract: Various technologies described herein pertain to automatic in-situ calibration and registration of a depth sensor and a robotic arm, where the depth sensor and the robotic arm operate in a workspace. The robotic arm can include an end effector. A non-parametric technique for registration between the depth sensor and the robotic arm can be implemented. The registration technique can utilize a sparse sampling of the workspace (e.g., collected during calibration or recalibration). A point cloud can be formed over calibration points and interpolation can be performed within the point cloud to map coordinates in a sensor coordinate frame to coordinates in an arm coordinate frame. Such technique can automatically incorporate intrinsic sensor parameters into transformations between the depth sensor and the robotic arm. Accordingly, an explicit model of intrinsics or biases of the depth sensor need not be utilized.
    Type: Application
    Filed: June 25, 2014
    Publication date: December 31, 2015
    Inventors: Grigor Shirakyan, Michael Revow, Mihai Jalobeanu, Bryan Joseph Thibodeau
  • Patent number: 9211643
    Abstract: Various technologies described herein pertain to automatic in-situ calibration and registration of a depth sensor and a robotic arm, where the depth sensor and the robotic arm operate in a workspace. The robotic arm can include an end effector. A non-parametric technique for registration between the depth sensor and the robotic arm can be implemented. The registration technique can utilize a sparse sampling of the workspace (e.g., collected during calibration or recalibration). A point cloud can be formed over calibration points and interpolation can be performed within the point cloud to map coordinates in a sensor coordinate frame to coordinates in an arm coordinate frame. Such technique can automatically incorporate intrinsic sensor parameters into transformations between the depth sensor and the robotic arm. Accordingly, an explicit model of intrinsics or biases of the depth sensor need not be utilized.
    Type: Grant
    Filed: June 25, 2014
    Date of Patent: December 15, 2015
    Assignee: Microsoft Technology Licensing, LLC
    Inventors: Grigor Shirakyan, Michael Revow, Mihai Jalobeanu, Bryan Joseph Thibodeau
  • Publication number: 20150302570
    Abstract: Various technologies described herein pertain to correction of an input depth image captured by a depth sensor. The input depth image can include pixels, and the pixels can have respective depth values in the input depth image. Moreover, per-pixel correction values for the pixels can be determined utilizing depth calibration data for a non-linear error model calibrated for the depth sensor. The per-pixel correction values can be determined based on portions of the depth calibration data respectively corresponding to the pixels and the depth values. The per-pixel correction values can be applied to the depth values to generate a corrected depth image. Further, the corrected depth image can be output.
    Type: Application
    Filed: April 22, 2014
    Publication date: October 22, 2015
    Applicant: Microsoft Corporation
    Inventors: Grigor Shirakyan, Michael Revow, Mihai Jalobeanu
  • Publication number: 20140363073
    Abstract: The subject disclosure is directed towards detecting planes in a scene using depth data of a scene image, based upon a relationship between pixel depths, row height and two constants. Samples of a depth image are processed to fit values for the constants to a plane formulation to determine which samples indicate a plane. A reference plane may be determined from those samples that indicate a plane, with pixels in the depth image processed to determine each pixel's relationship to the plane based on the pixel's depth, location and associated fitted values, e.g., below the plane, on the plane or above the plane.
    Type: Application
    Filed: June 11, 2013
    Publication date: December 11, 2014
    Inventors: Grigor Shirakyan, Mihai R. Jalobeanu
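
The abstract ties pixel depth to row height through two fitted constants. One common formulation of that relationship, assumed here purely for illustration (the patent does not spell out its exact form), is that inverse depth is linear in image row for a ground-like plane: 1/z = a*row + b. The two constants can then be fit by least squares and each pixel classified by its residual:

```python
def fit_row_depth_plane(samples):
    """Fit the two constants (a, b) of the hypothesised relation
    1/z = a*row + b by least squares over (row, depth) samples."""
    n = len(samples)
    sx = sum(r for r, _ in samples)
    sy = sum(1.0 / z for _, z in samples)
    sxx = sum(r * r for r, _ in samples)
    sxy = sum(r / z for r, z in samples)
    a = (n * sxy - sx * sy) / (n * sxx - sx * sx)
    b = (sy - a * sx) / n
    return a, b

def classify_pixel(row, depth, a, b, tol=1e-3):
    """Classify a pixel as on, above, or below the fitted plane.
    A pixel nearer than the plane predicts (residual > 0) is taken as
    'above' it -- the sign convention is an assumption for this sketch."""
    residual = 1.0 / depth - (a * row + b)
    if abs(residual) <= tol:
        return "on"
    return "above" if residual > 0 else "below"
```

Fitting over sampled pixels and classifying the rest mirrors the abstract's two phases: determine which samples indicate a plane, then relate every pixel to the reference plane.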
  • Publication number: 20140128994
    Abstract: A “Logical Sensor Server” or “LSS” acts as a smart hub between related or unrelated sensors, devices, or other systems by translating, morphing, or forwarding signals or events published by various input sources into signals or higher-order events that can be consumed or used by other subscribing sensors, devices, or systems. More specifically, the LSS acts alone or in combination with a Logical Sensor Platform (LSP) to enable various techniques that allow messages received from different input sources to be authored, transformed and made available to one or more subscribers in a manner that allows intelligent event-driven behavior to emerge from a collection of relatively simple input sources. Any combination of automatic configuration or user input is used to define the format of transformed inputs to be received by particular subscribers relative to one or more publications. Subscribers receiving transformed events control their own actions based on those events.
    Type: Application
    Filed: November 7, 2012
    Publication date: May 8, 2014
    Applicant: Microsoft Corporation
    Inventors: Kimberly Denise Auyang Hallman, Desney Tan, Ira Snyder, Mats Myrberg, Michael Hall, Michael Koenig, Andrew Wilson, Grigor Shirakyan, Matthew Dyor
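
At its core the Logical Sensor Server is a publish-subscribe hub that transforms events per subscription before delivery, so simple inputs can be morphed into higher-order events. A minimal sketch (class and method names are illustrative, not from the patent):

```python
class LogicalSensorServer:
    """Smart hub: publishers emit raw events on a topic; each subscriber
    receives them through its own transform, which may reshape the event
    into a higher-order one or filter it out entirely."""

    def __init__(self):
        self._subs = {}   # topic -> list of (transform, callback)

    def subscribe(self, topic, callback, transform=lambda event: event):
        self._subs.setdefault(topic, []).append((transform, callback))

    def publish(self, topic, event):
        for transform, callback in self._subs.get(topic, []):
            out = transform(event)
            if out is not None:          # None means the transform filtered it
                callback(out)
```

For example, a raw Celsius reading can be published once and delivered to one subscriber as Fahrenheit and to another only when it crosses a threshold, which is the "intelligent event-driven behavior from simple input sources" the abstract describes.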
  • Patent number: 8260775
    Abstract: Computer-readable media and a computing device are described for providing geotemporal search and a search interface therefor. A search interface having a location portion and a timeline portion is provided. A geographic area is selected in the location portion by adjusting the visible area of a map. A temporal window is selected in the timeline portion by adjusting sliders along a timeline to a desired start and end time. The start and end times can be in the past, present, or future. A geotemporal search is executed based on the selected geographic area and temporal window to identify search results having associated metadata indicating a relationship to the selected geographic area and temporal window. One or more search terms are optionally provided to further refine the geotemporal search.
    Type: Grant
    Filed: January 12, 2010
    Date of Patent: September 4, 2012
    Assignee: Microsoft Corporation
    Inventors: David Dongjah Ahn, Michael Paul Bieniosek, Ian Robert Collins, Franco Salvetti, Toby Takeo Sterrett, Giovanni Lorenzo Thione, Grigor Shirakyan, Hamed Esfahani
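
The query side of a geotemporal search reduces to filtering results whose metadata falls inside the map's visible bounding box and the timeline window, with optional term matching. A hypothetical sketch of that filter (field names and the flat-result representation are assumptions for illustration):

```python
def geotemporal_filter(results, bbox, start, end, terms=()):
    """Keep results whose metadata relates them to the selected geographic
    area and temporal window.

    results: iterable of dicts with "lat", "lon", "time", "text" keys.
    bbox: (min_lat, min_lon, max_lat, max_lon) from the map's visible area.
    start, end: timeline slider positions (any comparable time values).
    terms: optional search terms that must all appear in the result text.
    """
    min_lat, min_lon, max_lat, max_lon = bbox
    kept = []
    for r in results:
        if not (min_lat <= r["lat"] <= max_lat and
                min_lon <= r["lon"] <= max_lon):
            continue                                  # outside the map area
        if not (start <= r["time"] <= end):
            continue                                  # outside the time window
        if terms and not all(t.lower() in r["text"].lower() for t in terms):
            continue                                  # fails term refinement
        kept.append(r)
    return kept
```

Note the window is symmetric about time: the abstract allows start and end to lie in the past, present, or future, so the same comparison serves retrospective and prospective queries.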
  • Patent number: 8082218
    Abstract: Conflicts among programs are detected, and advice is given based on the detected conflicts. A set of conflict rules defines what constitutes a conflict, and a set of advice rules defines what advice is to be given in response to a conflict that has been detected. The conflict rules may be provided by a different party from the action rules, so the decision as to what constitutes a conflict can be made separately from the decision as to what advice should be given when a conflict is detected.
    Type: Grant
    Filed: August 21, 2007
    Date of Patent: December 20, 2011
    Assignee: Microsoft Corporation
    Inventors: Karthik Lakshminarayanan, Grigor Shirakyan, R. C. Vikram Kakumani, Terrence Lui
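
The key design point in this abstract is the separation of concerns: conflict rules (possibly authored by one party) decide whether two programs conflict, while advice rules (possibly authored by another) decide what to tell the user. A minimal sketch of that two-layer evaluation, with all names hypothetical:

```python
def detect_and_advise(programs, conflict_rules, advice_rules):
    """For every pair of programs, run the conflict rules; for each detected
    conflict, run the advice rules to produce user-facing advice.
    A conflict rule returns a conflict tag (or None); an advice rule maps a
    (tag, program, program) triple to an advice string (or None)."""
    advice = []
    for i, p in enumerate(programs):
        for q in programs[i + 1:]:
            for conflict_rule in conflict_rules:
                tag = conflict_rule(p, q)
                if tag is None:
                    continue              # these two programs do not conflict
                for advice_rule in advice_rules:
                    msg = advice_rule(tag, p, q)
                    if msg:
                        advice.append(msg)
    return advice
```

Because detection and advice are separate rule sets, either can be replaced independently, which is exactly the flexibility the abstract claims.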
  • Publication number: 20110173193
    Abstract: Computer-readable media and a computing device are described for providing geotemporal search and a search interface therefor. A search interface having a location portion and a timeline portion is provided. A geographic area is selected in the location portion by adjusting the visible area of a map. A temporal window is selected in the timeline portion by adjusting sliders along a timeline to a desired start and end time. The start and end times can be in the past, present, or future. A geotemporal search is executed based on the selected geographic area and temporal window to identify search results having associated metadata indicating a relationship to the selected geographic area and temporal window. One or more search terms are optionally provided to further refine the geotemporal search.
    Type: Application
    Filed: January 12, 2010
    Publication date: July 14, 2011
    Applicant: Microsoft Corporation
    Inventors: David Dongjah Ahn, Michael Paul Bieniosek, Ian Robert Collins, Franco Salvetti, Toby Takeo Sterrett, Giovanni Lorenzo Thione, Grigor Shirakyan, Hamed Esfahani
  • Publication number: 20090055340
    Abstract: Conflicts among programs are detected, and advice is given based on the detected conflicts. A set of conflict rules defines what constitutes a conflict, and a set of advice rules defines what advice is to be given in response to a conflict that has been detected. The conflict rules may be provided by a different party from the action rules, so the decision as to what constitutes a conflict can be made separately from the decision as to what advice should be given when a conflict is detected.
    Type: Application
    Filed: August 21, 2007
    Publication date: February 26, 2009
    Applicant: Microsoft Corporation
    Inventors: Karthik Lakshminarayanan, Grigor Shirakyan, R.C. Vikram Kakumani, Terrence Lui