Patents by Inventor Grigor Shirakyan
Grigor Shirakyan has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).
- Patent number: 11146744
  Abstract: A system and method for dynamically modifying a live image of a subject using an automated interactive system is provided. The system includes a motorized platform including at least one actuator, a control unit including a processor and a storage device, and a payload including one or more sensors and a camera. The method includes (i) collecting sensor data about at least one of the subject and an environment, (ii) moving the camera along or around at least one degree of freedom, (iii) capturing the live image of the subject in at least one position with the camera, (iv) storing the live image of the subject in the storage device, (v) sending instructions to physically move the payload, (vi) applying at least one environment modification rule to modify the live image of the subject, and (vii) displaying a modified live image of the subject on a display unit.
  Type: Grant
  Filed: February 24, 2020
  Date of Patent: October 12, 2021
  Assignee: Emergent Machines, Inc.
  Inventor: Grigor Shirakyan
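Step (vi) of the method above, applying an environment modification rule to a captured frame, can be sketched as follows. This is purely illustrative: the rule shown (brightening the frame under low ambient light) and all names (`brightness_rule`, `apply_rule`) are invented, not taken from the patent.

```python
# Hypothetical sketch of step (vi): applying an environment modification
# rule to a captured frame based on collected sensor data.

def brightness_rule(pixel, ambient_light):
    """Example rule: brighten pixels when ambient light is low."""
    if ambient_light < 0.3:
        return tuple(min(255, int(c * 1.5)) for c in pixel)
    return pixel

def apply_rule(frame, rule, sensor_data):
    """Apply one modification rule to every pixel of a frame."""
    return [[rule(px, sensor_data["ambient_light"]) for px in row] for row in frame]

frame = [[(100, 100, 100), (200, 200, 200)]]
modified = apply_rule(frame, brightness_rule, {"ambient_light": 0.1})
print(modified)  # [[(150, 150, 150), (255, 255, 255)]]
```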
- Publication number: 20200195860
  Abstract: A system and method for dynamically modifying a live image of a subject using an automated interactive system is provided. The system includes a motorized platform including at least one actuator, a control unit including a processor and a storage device, and a payload including one or more sensors and a camera. The method includes (i) collecting sensor data about at least one of the subject and an environment, (ii) moving the camera along or around at least one degree of freedom, (iii) capturing the live image of the subject in at least one position with the camera, (iv) storing the live image of the subject in the storage device, (v) sending instructions to physically move the payload, (vi) applying at least one environment modification rule to modify the live image of the subject, and (vii) displaying a modified live image of the subject on a display unit.
  Type: Application
  Filed: February 24, 2020
  Publication date: June 18, 2020
  Applicant: Emergent Machines, Inc.
  Inventor: Grigor Shirakyan
- Patent number: 10062180
  Abstract: Various technologies described herein pertain to correction of an input depth image captured by a depth sensor. The input depth image can include pixels, and the pixels can have respective depth values in the input depth image. Moreover, per-pixel correction values for the pixels can be determined utilizing depth calibration data for a non-linear error model calibrated for the depth sensor. The per-pixel correction values can be determined based on portions of the depth calibration data respectively corresponding to the pixels and the depth values. The per-pixel correction values can be applied to the depth values to generate a corrected depth image. Further, the corrected depth image can be output.
  Type: Grant
  Filed: April 22, 2014
  Date of Patent: August 28, 2018
  Assignee: Microsoft Technology Licensing, LLC
  Inventors: Grigor Shirakyan, Michael Revow, Mihai Jalobeanu
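The per-pixel correction idea can be sketched in a few lines. The abstract only specifies a non-linear error model calibrated per sensor; the particular model below (a per-pixel quadratic in the measured depth) is an assumption for illustration.

```python
# Illustrative sketch of per-pixel depth correction. The quadratic error
# model and the calibration-data layout are assumptions, not the patent's.

def correct_depth(depth_image, calibration):
    """Subtract a per-pixel correction value derived from calibration data."""
    corrected = []
    for y, row in enumerate(depth_image):
        out_row = []
        for x, d in enumerate(row):
            a, b, c = calibration[y][x]          # per-pixel model coefficients
            correction = a * d * d + b * d + c   # non-linear error estimate
            out_row.append(d - correction)
        corrected.append(out_row)
    return corrected

depth = [[1000.0, 2000.0]]
calib = [[(0.0, 0.01, -5.0), (0.0, 0.02, 0.0)]]
print(correct_depth(depth, calib))  # [[995.0, 1960.0]]
```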
- Patent number: 10052766
  Abstract: Various technologies described herein pertain to automatic in-situ calibration and registration of a depth sensor and a robotic arm, where the depth sensor and the robotic arm operate in a workspace. The robotic arm can include an end effector. A non-parametric technique for registration between the depth sensor and the robotic arm can be implemented. The registration technique can utilize a sparse sampling of the workspace (e.g., collected during calibration or recalibration). A point cloud can be formed over calibration points and interpolation can be performed within the point cloud to map coordinates in a sensor coordinate frame to coordinates in an arm coordinate frame. Such technique can automatically incorporate intrinsic sensor parameters into transformations between the depth sensor and the robotic arm. Accordingly, an explicit model of intrinsics or biases of the depth sensor need not be utilized.
  Type: Grant
  Filed: November 10, 2015
  Date of Patent: August 21, 2018
  Assignee: Microsoft Technology Licensing, LLC
  Inventors: Grigor Shirakyan, Michael Revow, Mihai Jalobeanu, Bryan Joseph Thibodeau
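The non-parametric mapping described above can be sketched as interpolation within sparse calibration pairs. Inverse-distance weighting is used here only as a stand-in; the patent does not specify this particular interpolation scheme, and the function and variable names are invented.

```python
import math

# Minimal sketch of the non-parametric idea: interpolate within sparse
# calibration points to map a sensor-frame coordinate to an arm-frame
# coordinate, with no explicit model of sensor intrinsics.

def sensor_to_arm(point, calibration_pairs, power=2):
    """calibration_pairs: list of (sensor_xyz, arm_xyz) tuples."""
    num = [0.0, 0.0, 0.0]
    den = 0.0
    for sensor_pt, arm_pt in calibration_pairs:
        dist = math.dist(point, sensor_pt)
        if dist == 0:
            return arm_pt  # query hits an exact calibration point
        w = 1.0 / dist ** power
        den += w
        for i in range(3):
            num[i] += w * arm_pt[i]
    return tuple(n / den for n in num)

pairs = [((0, 0, 0), (10, 0, 0)), ((1, 0, 0), (11, 0, 0))]
print(sensor_to_arm((0.5, 0, 0), pairs))  # midpoint maps near (10.5, 0, 0)
```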
- Patent number: 9878447
  Abstract: Data about a physical object in a real-world environment is automatically collected and labeled. A mechanical device is used to maneuver the object into different poses within a three-dimensional workspace in the real-world environment. While the object is in each different pose an image of the object is input from one or more sensors and data specifying the pose is input from the mechanical device. The image of the object input from each of the sensors for each different pose is labeled with the data specifying the pose and with information identifying the object. A database for the object that includes these labeled images can be generated. The labeled images can also be used to train a detector and classifier to detect and recognize the object when it is in an environment that is similar to the real-world environment.
  Type: Grant
  Filed: April 10, 2015
  Date of Patent: January 30, 2018
  Assignee: Microsoft Technology Licensing, LLC
  Inventors: Bryan J. Thibodeau, Michael Revow, Mihai Jalobeanu, Grigor Shirakyan
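The collect-and-label loop described above can be sketched as follows. All names (`collect_labeled_data`, `move_to_pose`, `capture_image`) are hypothetical stand-ins for the mechanical device and sensor interfaces the patent describes.

```python
# Sketch of the automatic collection loop: the mechanical device maneuvers
# the object through poses, and each captured image is stored labeled with
# the commanded pose and the object's identity.

def collect_labeled_data(object_id, poses, move_to_pose, capture_image):
    """move_to_pose(pose) positions the object; capture_image() returns a frame."""
    dataset = []
    for pose in poses:
        move_to_pose(pose)        # mechanical device maneuvers the object
        image = capture_image()   # sensor captures the object in that pose
        dataset.append({"object": object_id, "pose": pose, "image": image})
    return dataset

data = collect_labeled_data(
    "mug-01",
    poses=[(0, 0, 0), (0, 0, 90)],
    move_to_pose=lambda p: None,   # placeholder for real arm control
    capture_image=lambda: "frame", # placeholder for real sensor input
)
print(len(data), data[0]["pose"])  # 2 (0, 0, 0)
```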
- Publication number: 20160297068
  Abstract: Data about a physical object in a real-world environment is automatically collected and labeled. A mechanical device is used to maneuver the object into different poses within a three-dimensional workspace in the real-world environment. While the object is in each different pose an image of the object is input from one or more sensors and data specifying the pose is input from the mechanical device. The image of the object input from each of the sensors for each different pose is labeled with the data specifying the pose and with information identifying the object. A database for the object that includes these labeled images can be generated. The labeled images can also be used to train a detector and classifier to detect and recognize the object when it is in an environment that is similar to the real-world environment.
  Type: Application
  Filed: April 10, 2015
  Publication date: October 13, 2016
  Inventors: Bryan J. Thibodeau, Michael Revow, Mihai Jalobeanu, Grigor Shirakyan
- Publication number: 20160059417
  Abstract: Various technologies described herein pertain to automatic in-situ calibration and registration of a depth sensor and a robotic arm, where the depth sensor and the robotic arm operate in a workspace. The robotic arm can include an end effector. A non-parametric technique for registration between the depth sensor and the robotic arm can be implemented. The registration technique can utilize a sparse sampling of the workspace (e.g., collected during calibration or recalibration). A point cloud can be formed over calibration points and interpolation can be performed within the point cloud to map coordinates in a sensor coordinate frame to coordinates in an arm coordinate frame. Such technique can automatically incorporate intrinsic sensor parameters into transformations between the depth sensor and the robotic arm. Accordingly, an explicit model of intrinsics or biases of the depth sensor need not be utilized.
  Type: Application
  Filed: November 10, 2015
  Publication date: March 3, 2016
  Inventors: Grigor Shirakyan, Michael Revow, Mihai Jalobeanu, Bryan Joseph Thibodeau
- Publication number: 20150375396
  Abstract: Various technologies described herein pertain to automatic in-situ calibration and registration of a depth sensor and a robotic arm, where the depth sensor and the robotic arm operate in a workspace. The robotic arm can include an end effector. A non-parametric technique for registration between the depth sensor and the robotic arm can be implemented. The registration technique can utilize a sparse sampling of the workspace (e.g., collected during calibration or recalibration). A point cloud can be formed over calibration points and interpolation can be performed within the point cloud to map coordinates in a sensor coordinate frame to coordinates in an arm coordinate frame. Such technique can automatically incorporate intrinsic sensor parameters into transformations between the depth sensor and the robotic arm. Accordingly, an explicit model of intrinsics or biases of the depth sensor need not be utilized.
  Type: Application
  Filed: June 25, 2014
  Publication date: December 31, 2015
  Inventors: Grigor Shirakyan, Michael Revow, Mihai Jalobeanu, Bryan Joseph Thibodeau
- Patent number: 9211643
  Abstract: Various technologies described herein pertain to automatic in-situ calibration and registration of a depth sensor and a robotic arm, where the depth sensor and the robotic arm operate in a workspace. The robotic arm can include an end effector. A non-parametric technique for registration between the depth sensor and the robotic arm can be implemented. The registration technique can utilize a sparse sampling of the workspace (e.g., collected during calibration or recalibration). A point cloud can be formed over calibration points and interpolation can be performed within the point cloud to map coordinates in a sensor coordinate frame to coordinates in an arm coordinate frame. Such technique can automatically incorporate intrinsic sensor parameters into transformations between the depth sensor and the robotic arm. Accordingly, an explicit model of intrinsics or biases of the depth sensor need not be utilized.
  Type: Grant
  Filed: June 25, 2014
  Date of Patent: December 15, 2015
  Assignee: Microsoft Technology Licensing, LLC
  Inventors: Grigor Shirakyan, Michael Revow, Mihai Jalobeanu, Bryan Joseph Thibodeau
- Publication number: 20150302570
  Abstract: Various technologies described herein pertain to correction of an input depth image captured by a depth sensor. The input depth image can include pixels, and the pixels can have respective depth values in the input depth image. Moreover, per-pixel correction values for the pixels can be determined utilizing depth calibration data for a non-linear error model calibrated for the depth sensor. The per-pixel correction values can be determined based on portions of the depth calibration data respectively corresponding to the pixels and the depth values. The per-pixel correction values can be applied to the depth values to generate a corrected depth image. Further, the corrected depth image can be output.
  Type: Application
  Filed: April 22, 2014
  Publication date: October 22, 2015
  Applicant: Microsoft Corporation
  Inventors: Grigor Shirakyan, Michael Revow, Mihai Jalobeanu
- Publication number: 20140363073
  Abstract: The subject disclosure is directed towards detecting planes in a scene using depth data of a scene image, based upon a relationship between pixel depths, row height and two constants. Samples of a depth image are processed to fit values for the constants to a plane formulation to determine which samples indicate a plane. A reference plane may be determined from those samples that indicate a plane, with pixels in the depth image processed to determine each pixel's relationship to the plane based on the pixel's depth, location and associated fitted values, e.g., below the plane, on the plane or above the plane.
  Type: Application
  Filed: June 11, 2013
  Publication date: December 11, 2014
  Inventors: Grigor Shirakyan, Mihai R. Jalobeanu
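The two-constant plane test can be sketched as follows. The specific model used here, inverse depth varying linearly with image row (1/d = a·r + b, a common relation for a horizontal plane seen by a depth camera), and the tolerance are assumptions chosen for illustration; the publication only says the formulation has two fitted constants.

```python
# Sketch of the plane test: fit two constants (a, b) relating inverse depth
# to image row on sample pixels, then classify every pixel by its residual
# against the fitted line. The linear model is an assumed example.

def fit_plane(samples):
    """samples: list of (row, depth). Least-squares fit of 1/depth = a*row + b."""
    n = len(samples)
    xs = [r for r, _ in samples]
    ys = [1.0 / d for _, d in samples]
    mx, my = sum(xs) / n, sum(ys) / n
    a = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / sum((x - mx) ** 2 for x in xs)
    return a, my - a * mx

def classify(row, depth, a, b, tol=1e-4):
    """Label a pixel as on, above, or below the fitted reference plane."""
    residual = 1.0 / depth - (a * row + b)
    if abs(residual) <= tol:
        return "on"
    # Closer than the plane (larger inverse depth) means above it.
    return "above" if residual > 0 else "below"

a, b = fit_plane([(100, 4.0), (200, 2.0)])
print(classify(150, 4.0 / 1.5, a, b))  # a point obeying the fit -> "on"
```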
- Publication number: 20140128994
  Abstract: A "Logical Sensor Server" or "LSS" acts as a smart hub between related or unrelated sensors, devices, or other systems by translating, morphing, or forwarding signals or events published by various input sources into signals or higher-order events that can be consumed or used by other subscribing sensors, devices, or systems. More specifically, the LSS acts alone or in combination with a Logical Sensor Platform (LSP) to enable various techniques that allow messages received from different input sources to be authored, transformed and made available to one or more subscribers in a manner that allows intelligent event-driven behavior to emerge from a collection of relatively simple input sources. Any combination of automatic configuration or user input is used to define the format of transformed inputs to be received by particular subscribers relative to one or more publications. Subscribers receiving transformed events control their own actions based on those events.
  Type: Application
  Filed: November 7, 2012
  Publication date: May 8, 2014
  Applicant: Microsoft Corporation
  Inventors: Kimberly Denise Auyang Hallman, Desney Tan, Ira Snyder, Mats Myrberg, Michael Hall, Michael Koenig, Andrew Wilson, Grigor Shirakyan, Matthew Dyor
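The hub idea above, a broker that transforms published signals into the form each subscriber consumes, can be sketched as a small publish/subscribe class. The class and method names are invented for illustration and are not from the publication.

```python
# Rough sketch of a smart hub: subscribers register with a transform that
# reshapes published events into whatever higher-order form they consume.

class LogicalSensorServer:
    def __init__(self):
        self.subscribers = {}  # topic -> list of (transform, callback)

    def subscribe(self, topic, callback, transform=lambda event: event):
        """Register a callback, optionally with a per-subscriber transform."""
        self.subscribers.setdefault(topic, []).append((transform, callback))

    def publish(self, topic, event):
        """Deliver the event to each subscriber in its own transformed form."""
        for transform, callback in self.subscribers.get(topic, []):
            callback(transform(event))

hub = LogicalSensorServer()
received = []
# This subscriber wants a higher-order "too hot" event, not raw Celsius.
hub.subscribe("temp", received.append, transform=lambda e: e["celsius"] > 30)
hub.publish("temp", {"celsius": 35})
print(received)  # [True]
```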
- Patent number: 8260775
  Abstract: Computer-readable media and a computing device are described for providing geotemporal search and a search interface therefor. A search interface having a location portion and a timeline portion is provided. A geographic area is selected in the location portion by adjusting the visible area of a map. A temporal window is selected in the timeline portion by adjusting sliders along a timeline to a desired start and end time. The start and end times can be in the past, present, or future. A geotemporal search is executed based on the selected geographic area and temporal window to identify search results having associated metadata indicating a relationship to the selected geographic area and temporal window. One or more search terms are optionally provided to further refine the geotemporal search.
  Type: Grant
  Filed: January 12, 2010
  Date of Patent: September 4, 2012
  Assignee: Microsoft Corporation
  Inventors: David Dongjah Ahn, Michael Paul Bieniosek, Ian Robert Collins, Franco Salvetti, Toby Takeo Sterrett, Giovanni Lorenzo Thione, Grigor Shirakyan, Hamed Esfahani
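The filtering step behind a geotemporal query, matching result metadata against the selected map area and time window (with an optional search term), can be sketched as below. The record layout and function name are invented for illustration.

```python
from datetime import datetime

# Minimal sketch of geotemporal filtering: keep only results whose metadata
# falls inside the selected geographic bounding box and temporal window.

def geotemporal_search(records, bbox, start, end, term=None):
    """bbox: (min_lat, min_lon, max_lat, max_lon); start/end: datetimes."""
    hits = []
    for rec in records:
        in_area = (bbox[0] <= rec["lat"] <= bbox[2]
                   and bbox[1] <= rec["lon"] <= bbox[3])
        in_window = start <= rec["time"] <= end
        matches = term is None or term in rec["text"]
        if in_area and in_window and matches:
            hits.append(rec)
    return hits

records = [
    {"lat": 47.6, "lon": -122.3, "time": datetime(2010, 1, 12), "text": "parade"},
    {"lat": 40.7, "lon": -74.0, "time": datetime(2010, 1, 12), "text": "parade"},
]
found = geotemporal_search(records, (47.0, -123.0, 48.0, -122.0),
                           datetime(2010, 1, 1), datetime(2010, 2, 1))
print(len(found))  # 1
```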
- Patent number: 8082218
  Abstract: Conflicts among programs are detected, and advice is given based on the detected conflicts. A set of conflict rules defines what constitutes a conflict, and a set of advice rules defines what advice is to be given in response to a conflict that has been detected. The conflict rules may be provided by a different party from the advice rules, so the decision as to what constitutes a conflict can be made separately from the decision as to what advice should be given when a conflict is detected.
  Type: Grant
  Filed: August 21, 2007
  Date of Patent: December 20, 2011
  Assignee: Microsoft Corporation
  Inventors: Karthik Lakshminarayanan, Grigor Shirakyan, R. C. Vikram Kakumani, Terrence Lui
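The separation described above, one rule set deciding whether two programs conflict and a separately supplied rule set deciding what advice to give, can be sketched as a toy rule engine. The rule shapes and the "exclusive resource" conflict example are invented for illustration.

```python
# Toy sketch of the two-rule-set split: conflict rules detect conflicts;
# advice rules, which could come from a different party, pick the message.

conflict_rules = [
    # Example conflict rule: two programs claiming the same exclusive resource.
    lambda a, b: set(a["exclusive"]) & set(b["exclusive"]),
]
advice_rules = {
    "port": "Stop one program or reconfigure it to use a different port.",
}

def check(prog_a, prog_b):
    """Return advice if any conflict rule fires, else None."""
    for rule in conflict_rules:
        shared = rule(prog_a, prog_b)
        if shared:
            kind = next(iter(shared)).split(":")[0]  # e.g. "port:8080" -> "port"
            return advice_rules.get(kind, "A conflict was detected.")
    return None

a = {"exclusive": ["port:8080"]}
b = {"exclusive": ["port:8080"]}
print(check(a, b))  # port advice message
```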
- Publication number: 20110173193
  Abstract: Computer-readable media and a computing device are described for providing geotemporal search and a search interface therefor. A search interface having a location portion and a timeline portion is provided. A geographic area is selected in the location portion by adjusting the visible area of a map. A temporal window is selected in the timeline portion by adjusting sliders along a timeline to a desired start and end time. The start and end times can be in the past, present, or future. A geotemporal search is executed based on the selected geographic area and temporal window to identify search results having associated metadata indicating a relationship to the selected geographic area and temporal window. One or more search terms are optionally provided to further refine the geotemporal search.
  Type: Application
  Filed: January 12, 2010
  Publication date: July 14, 2011
  Applicant: Microsoft Corporation
  Inventors: David Dongjah Ahn, Michael Paul Bieniosek, Ian Robert Collins, Franco Salvetti, Toby Takeo Sterrett, Giovanni Lorenzo Thione, Grigor Shirakyan, Hamed Esfahani
- Publication number: 20090055340
  Abstract: Conflicts among programs are detected, and advice is given based on the detected conflicts. A set of conflict rules defines what constitutes a conflict, and a set of advice rules defines what advice is to be given in response to a conflict that has been detected. The conflict rules may be provided by a different party from the advice rules, so the decision as to what constitutes a conflict can be made separately from the decision as to what advice should be given when a conflict is detected.
  Type: Application
  Filed: August 21, 2007
  Publication date: February 26, 2009
  Applicant: Microsoft Corporation
  Inventors: Karthik Lakshminarayanan, Grigor Shirakyan, R. C. Vikram Kakumani, Terrence Lui