Patents by Inventor Kyungsuk David Lee
Kyungsuk David Lee has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).
-
Patent number: 9529513
Abstract: Two-handed interactions with a natural user interface are disclosed. For example, one embodiment provides a method comprising detecting, via image data received by the computing device, a context-setting input performed by a first hand of a user, and sending to a display a user interface positioned based on a virtual interaction coordinate system, the virtual interaction coordinate system being positioned based upon a position of the first hand of the user. The method further includes detecting, via image data received by the computing device, an action input performed by a second hand of the user, the action input performed while the first hand of the user is performing the context-setting input, and sending to the display a response based on the context-setting input and an interaction between the action input and the virtual interaction coordinate system.
Type: Grant
Filed: August 5, 2013
Date of Patent: December 27, 2016
Assignee: Microsoft Technology Licensing, LLC
Inventors: Alexandru Balan, Mark J. Finocchio, Kyungsuk David Lee
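The core geometric idea of the abstract above — interpreting the second hand's action input in a coordinate frame anchored to the first, context-setting hand — can be illustrated with a minimal sketch (the positions, the world-aligned basis, and the function name are hypothetical choices for illustration, not taken from the patent):

```python
import numpy as np

def to_interaction_coords(anchor_pos, anchor_basis, action_pos):
    """Express an action-input position in the virtual interaction
    coordinate system anchored at the context-setting hand."""
    # Columns of anchor_basis are the frame's axes in world coordinates,
    # so the transpose maps a world-space offset into the local frame.
    return anchor_basis.T @ (action_pos - anchor_pos)

# Context-setting hand fixes the frame's origin; axes assumed world-aligned
first_hand = np.array([0.2, 1.1, 0.6])
basis = np.eye(3)
# Action input by the second hand, 30 cm along the frame's first axis
second_hand = np.array([0.5, 1.1, 0.6])

local = to_interaction_coords(first_hand, basis, second_hand)
print(local)  # offset of the action input within the interaction frame
```

Because the frame moves with the first hand, the same relative motion of the second hand produces the same `local` coordinates wherever the pair of hands is held.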
-
Patent number: 9424490
Abstract: Embodiments are disclosed that relate to processing image pixels. For example, one disclosed embodiment provides a system for classifying pixels comprising retrieval logic; a pixel storage allocation including a plurality of pixel slots, each pixel slot being associated individually with a pixel, where the retrieval logic is configured to cause the pixels to be allocated into the pixel slots in an input sequence; pipelined processing logic configured to output, for each of the pixels, classification information associated with the pixel; and scheduling logic configured to control dispatches from the pixel slots to the pipelined processing logic, where the scheduling logic and pipelined processing logic are configured to act in concert to generate the classification information for the pixels in an output sequence that differs from and is independent of the input sequence.
Type: Grant
Filed: June 27, 2014
Date of Patent: August 23, 2016
Assignee: Microsoft Technology Licensing, LLC
Inventors: Adam James Muff, John Allen Tardif, Susan Carrie, Mark J. Finocchio, Kyungsuk David Lee, Christopher Douglas Edmonds, Randy Crane
-
Patent number: 9378590
Abstract: An augmented reality submission includes a hologram to virtually augment a world space object and a compensation offer for presenting the hologram to a viewer of the world space object. The augmented reality submission is selected as a winning submission if the submission satisfies a selection criterion.
Type: Grant
Filed: April 23, 2013
Date of Patent: June 28, 2016
Assignee: Microsoft Technology Licensing, LLC
Inventors: Kyungsuk David Lee, Alexandru Balan, Mark J. Finocchio
-
Patent number: 9344707
Abstract: A depth sensor obtains images of articulated portions of a user's body such as the hand. A predefined model of the articulated body portions is provided. Representative attract points of the model are matched to centroids of the depth sensor data, and a rigid transform of the model is performed, in an initial, relatively coarse matching process. This matching process is then refined in a non-rigid transform of the model, using attract point-to-centroid matching. In a further refinement, an iterative process rasterizes the model to provide depth pixels of the model, and compares the depth pixels of the model to the depth pixels of the depth sensor. The refinement is guided by whether the depth pixels of the model are overlapping or non-overlapping with the depth pixels of the depth sensor. Collision, distance and angle constraints are also imposed on the model.
Type: Grant
Filed: November 28, 2012
Date of Patent: May 17, 2016
Assignee: Microsoft Technology Licensing, LLC
Inventors: Kyungsuk David Lee, Alexandru Balan
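The initial coarse step described above — aligning model attract points to depth-data centroids with a rigid transform — is, in its simplest form, a best-fit rotation and translation between matched point sets. A minimal sketch using the standard Kabsch (SVD) solution, under the assumption that the attract-point-to-centroid correspondences are already known (this illustrates only the rigid alignment, not the patented non-rigid refinement):

```python
import numpy as np

def rigid_align(model_pts, target_pts):
    """Find rotation R and translation t minimizing ||R @ m + t - c||
    over matched pairs (Kabsch algorithm). Inputs are N x 3 arrays."""
    mu_m = model_pts.mean(axis=0)
    mu_t = target_pts.mean(axis=0)
    # Cross-covariance of the centered point sets
    H = (model_pts - mu_m).T @ (target_pts - mu_t)
    U, _, Vt = np.linalg.svd(H)
    # Correct a possible reflection so R is a proper rotation
    d = np.sign(np.linalg.det(Vt.T @ U.T))
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = mu_t - R @ mu_m
    return R, t

# Toy data: model attract points vs. rotated and translated centroids
model = np.array([[0., 0., 0.], [1., 0., 0.], [0., 1., 0.], [0., 0., 1.]])
theta = np.pi / 6
Rz = np.array([[np.cos(theta), -np.sin(theta), 0.],
               [np.sin(theta),  np.cos(theta), 0.],
               [0., 0., 1.]])
centroids = model @ Rz.T + np.array([0.5, -0.2, 1.0])

R, t = rigid_align(model, centroids)
aligned = model @ R.T + t
print(np.allclose(aligned, centroids))  # True: the fit recovers the pose
```

In the patent's pipeline this rigid fit is only the starting point; the model is then deformed non-rigidly and scored pixel-by-pixel against the rasterized depth image.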
-
Publication number: 20160132786
Abstract: Various embodiments relating to partitioning a data set for training machine-learning classifiers based on an output of a globally trained machine-learning classifier are disclosed. In one embodiment, a first machine-learning classifier may be trained on a set of training data to produce a corresponding set of output data. The set of training data may be partitioned into a plurality of subsets based on the set of output data. Each subset may correspond to a different class. A second machine-learning classifier may be trained on the set of training data using a plurality of classes corresponding to the plurality of subsets to produce, for each data object of the set of training data, a probability distribution having for each class a probability that the data object is a member of the class.
Type: Application
Filed: November 12, 2014
Publication date: May 12, 2016
Inventors: Alexandru Balan, Bradford Jason Snow, Christopher Douglas Edmonds, Henry Nelson Jerez, Kyungsuk David Lee, Mark J. Finocchio, Miguel Susffalich, Cem Keskin
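A toy version of this two-stage scheme might look like the following, with a nearest-centroid model standing in for the actual machine-learning classifiers (the synthetic data, the softmax probability choice, and all names are assumptions made purely for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)

def fit_centroids(X, y, n_classes):
    """Stand-in 'classifier': per-class centroids; predict = nearest one."""
    return np.stack([X[y == c].mean(axis=0) for c in range(n_classes)])

def predict(centroids, X):
    d = np.linalg.norm(X[:, None, :] - centroids[None, :, :], axis=2)
    return d.argmin(axis=1)

# Stage 1: train a first (global) classifier on the full training set
X = rng.normal(size=(300, 2)) + np.repeat([[0, 0], [4, 0], [0, 4]], 100, axis=0)
y = np.repeat([0, 1, 2], 100)
global_centroids = fit_centroids(X, y, 3)
out = predict(global_centroids, X)            # the "set of output data"

# Partition the training set by the first classifier's output;
# each subset becomes a class of its own
subsets = {c: X[out == c] for c in range(3)}

# Stage 2: train a second classifier on the subset-derived labels and emit,
# per data object, a probability distribution over the subset classes
# (softmax over negative distances -- an assumed choice, not the patent's)
sub_centroids = fit_centroids(X, out, 3)
d = np.linalg.norm(X[:, None, :] - sub_centroids[None, :, :], axis=2)
probs = np.exp(-d) / np.exp(-d).sum(axis=1, keepdims=True)
print(probs.shape)  # one distribution per training example
```

The essential move is that the second classifier's label space is defined by the first classifier's behavior rather than by the original ground-truth labels.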
-
Publication number: 20150379376
Abstract: Embodiments are disclosed that relate to processing image pixels. For example, one disclosed embodiment provides a system for classifying pixels comprising retrieval logic; a pixel storage allocation including a plurality of pixel slots, each pixel slot being associated individually with a pixel, where the retrieval logic is configured to cause the pixels to be allocated into the pixel slots in an input sequence; pipelined processing logic configured to output, for each of the pixels, classification information associated with the pixel; and scheduling logic configured to control dispatches from the pixel slots to the pipelined processing logic, where the scheduling logic and pipelined processing logic are configured to act in concert to generate the classification information for the pixels in an output sequence that differs from and is independent of the input sequence.
Type: Application
Filed: June 27, 2014
Publication date: December 31, 2015
Inventors: Adam James Muff, John Allen Tardif, Susan Carrie, Mark J. Finocchio, Kyungsuk David Lee, Christopher Douglas Edmonds, Randy Crane
-
Patent number: 9183676
Abstract: Technology is described for displaying a collision between objects by an augmented reality display device system. A collision between a real object and a virtual object is identified based on three dimensional space position data of the objects. At least one effect on at least one physical property of the real object is determined based on physical properties of the real object, like a change in surface shape, and physical interaction characteristics of the collision. Simulation image data is generated and displayed simulating the effect on the real object by the augmented reality display. Virtual objects under control of different executing applications can also interact with one another in collisions.
Type: Grant
Filed: April 27, 2012
Date of Patent: November 10, 2015
Assignee: Microsoft Technology Licensing, LLC
Inventors: Daniel J. McCulloch, Stephen G. Latta, Brian J. Mount, Kevin A. Geisner, Roger Sebastian Kevin Sylvan, Arnulfo Zepeda Navratil, Jason Scott, Jonathan T. Steed, Ben J. Sugden, Britta Silke Hummel, Kyungsuk David Lee, Mark J. Finocchio, Alex Aben-Athar Kipman, Jeffrey N. Margolis
-
Patent number: 9171380
Abstract: Embodiments related to detecting object information from image data collected by an image sensor are disclosed. In one example embodiment, the object information is detected by receiving a frame of image data from the image sensor and detecting a change in a threshold condition related to an object within the frame. The embodiment further comprises adjusting a setting that changes a power consumption of the image sensor in response to detecting the threshold condition.
Type: Grant
Filed: December 6, 2011
Date of Patent: October 27, 2015
Assignee: Microsoft Technology Licensing, LLC
Inventors: Kyungsuk David Lee, Mark Finocchio, Richard Moore, Alexandru Balan, Rod G. Fleck
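A minimal sketch of this idea — lowering a sensor's power setting when frames stop changing, and restoring it when change crosses a threshold — assuming a simple frame-difference criterion (the class, thresholds, and mode names are all hypothetical):

```python
import numpy as np

class AdaptiveSensor:
    """Sketch: switch an image sensor to a low-power setting when
    successive frames stop changing, and back when change resumes."""

    def __init__(self, change_threshold=0.05):
        self.change_threshold = change_threshold
        self.prev = None
        self.mode = "full"          # "full" or "low" power setting

    def submit_frame(self, frame):
        if self.prev is not None:
            # Fraction of pixels whose value changed noticeably
            changed = np.mean(np.abs(frame - self.prev) > 0.1)
            # Adjust a setting that changes the sensor's power consumption
            self.mode = "full" if changed > self.change_threshold else "low"
        self.prev = frame
        return self.mode

rng = np.random.default_rng(2)
sensor = AdaptiveSensor()
static = np.zeros((8, 8))
moving = rng.uniform(0, 1, (8, 8))

sensor.submit_frame(static)
mode_idle = sensor.submit_frame(static)    # scene unchanged -> low power
mode_active = sensor.submit_frame(moving)  # object motion -> full power
```

In a real device the "setting" might be frame rate, resolution, or illuminator duty cycle; the sketch only shows the control decision.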
-
Patent number: 9122053
Abstract: Technology is described for providing realistic occlusion between a virtual object displayed by a head mounted, augmented reality display system and a real object visible to the user's eyes through the display. A spatial occlusion in a user field of view of the display is typically a three dimensional occlusion determined based on a three dimensional space mapping of real and virtual objects. An occlusion interface between a real object and a virtual object can be modeled at a level of detail determined based on criteria such as distance within the field of view, display size or position with respect to a point of gaze. Technology is also described for providing three dimensional audio occlusion based on an occlusion between a real object and a virtual object in the user environment.
Type: Grant
Filed: April 10, 2012
Date of Patent: September 1, 2015
Assignee: Microsoft Technology Licensing, LLC
Inventors: Kevin A. Geisner, Brian J. Mount, Stephen G. Latta, Daniel J. McCulloch, Kyungsuk David Lee, Ben J. Sugden, Jeffrey N. Margolis, Kathryn Stone Perez, Sheridan Martin Small, Mark J. Finocchio, Robert L. Crocco, Jr.
-
Patent number: 9070194
Abstract: A planar surface within a physical environment is detected enabling presentation of a graphical user interface overlaying the planar surface. Detection of planar surfaces may be performed, in one example, by obtaining a collection of three-dimensional surface points of a physical environment imaged via an optical sensor subsystem. A plurality of polygon sets of points are sampled within the collection. Each polygon set of points includes three or more localized points of the collection that define a polygon. Each polygon is classified into one or more groups of polygons having a shared planar characteristic with each other polygon of that group. One or more planar surfaces within the collection are identified such that each planar surface is at least partially defined by a group of polygons containing at least a threshold number of polygons.
Type: Grant
Filed: October 25, 2012
Date of Patent: June 30, 2015
Assignee: Microsoft Technology Licensing, LLC
Inventors: Kyungsuk David Lee, Alexandru Balan, Jeffrey N. Margolis
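The sampling-and-grouping procedure can be sketched as follows, using triangles as the polygon sets and a quantized (normal, offset) key as the shared planar characteristic. The quantization step, sample count, and threshold are illustrative assumptions, not values from the patent:

```python
import numpy as np

rng = np.random.default_rng(1)

# Point cloud: a noisy floor plane (z ~ 0) plus scattered outliers
floor = np.c_[rng.uniform(-1, 1, (200, 2)), rng.normal(0, 0.005, 200)]
cloud = np.vstack([floor, rng.uniform(-1, 1, (40, 3))])

# Sample polygon sets (here triangles) and bin each polygon by a quantized
# (unit normal, plane offset) key -- the "shared planar characteristic"
groups = {}
for _ in range(500):
    i, j, k = rng.choice(len(cloud), 3, replace=False)
    n = np.cross(cloud[j] - cloud[i], cloud[k] - cloud[i])
    length = np.linalg.norm(n)
    if length < 1e-9:
        continue                      # degenerate (near-collinear) sample
    n = n / length
    if n[2] < 0:
        n = -n                        # canonical sign so bins are stable
    key = (tuple(np.round(n, 1)), round(float(n @ cloud[i]), 1))
    groups.setdefault(key, []).append((i, j, k))

# Identify planar surfaces: groups holding at least a threshold polygon count
THRESHOLD = 50
planes = [key for key, polys in groups.items() if len(polys) >= THRESHOLD]
print(planes)  # the noisy floor, normal near (0, 0, 1), should dominate
```

Grouping by a quantized plane makes the method robust to outliers: random triangles touching non-floor points scatter across many bins and never reach the threshold.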
-
Patent number: 9008355
Abstract: Automatic depth camera aiming is provided by a method which includes receiving from the depth camera one or more observed depth images of a scene. The method further includes, if a point of interest of a target is found within the scene, determining if the point of interest is within a far range relative to the depth camera. The method further includes, if the point of interest of the target is within the far range, operating the depth camera with a far logic, or if the point of interest of the target is not within the far range, operating the depth camera with a near logic.
Type: Grant
Filed: June 4, 2010
Date of Patent: April 14, 2015
Assignee: Microsoft Technology Licensing, LLC
Inventors: Relja Markovic, Stephen Latta, Kyungsuk David Lee, Oscar Omar Garza Santos, Kevin Geisner
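The range test that selects between the two operating modes reduces to a simple dispatch, sketched here with made-up range bounds (the function name and the 2–8 m interval are hypothetical):

```python
def select_camera_logic(poi_depth_m, far_range=(2.0, 8.0)):
    """Choose the depth camera's operating logic from the depth of the
    target's point of interest (range bounds are example values)."""
    lo, hi = far_range
    if lo <= poi_depth_m <= hi:
        return "far"   # point of interest within the far range
    return "near"      # otherwise fall back to the near logic

mode_close = select_camera_logic(1.2)  # target close to the camera
mode_far = select_camera_logic(4.5)    # target within the far range
```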
-
Publication number: 20150040040
Abstract: Two-handed interactions with a natural user interface are disclosed. For example, one embodiment provides a method comprising detecting, via image data received by the computing device, a context-setting input performed by a first hand of a user, and sending to a display a user interface positioned based on a virtual interaction coordinate system, the virtual interaction coordinate system being positioned based upon a position of the first hand of the user. The method further includes detecting, via image data received by the computing device, an action input performed by a second hand of the user, the action input performed while the first hand of the user is performing the context-setting input, and sending to the display a response based on the context-setting input and an interaction between the action input and the virtual interaction coordinate system.
Type: Application
Filed: August 5, 2013
Publication date: February 5, 2015
Inventors: Alexandru Balan, Mark J. Finocchio, Kyungsuk David Lee
-
Patent number: 8929612
Abstract: A system and method are disclosed relating to a pipeline for generating a computer model of a target user, including a hand model of the user's hands, captured by an image sensor in a NUI system. The computer model represents a best estimate of the position of a user's hand or hands and whether the hand or hands are in an open or closed state. The generated hand model may be used by a gaming or other application to determine such things as user gestures and control actions.
Type: Grant
Filed: November 18, 2011
Date of Patent: January 6, 2015
Assignee: Microsoft Corporation
Inventors: Anthony Ambrus, Kyungsuk David Lee, Andrew Campbell, David Haley, Brian Mount, Albert Robles, Daniel Osborn, Shawn Wright, Nahil Sharkasi, Dave Hill, Daniel McCulloch, Alexandru Balan
-
Publication number: 20140380254
Abstract: Systems, methods and computer readable media are disclosed for a gesture tool. A capture device captures user movement and provides corresponding data to a gesture recognizer engine and an application. From that, the data is parsed to determine whether it satisfies one or more gesture filters, each filter corresponding to a user-performed gesture. The data and the information about the filters are also sent to a gesture tool, which displays aspects of the data and filters. In response to user input corresponding to a change in a filter, the gesture tool sends an indication of such to the gesture recognizer engine and application, where that change occurs.
Type: Application
Filed: September 4, 2014
Publication date: December 25, 2014
Inventors: Kevin Geisner, Stephen Latta, Gregory N. Snook, Relja Markovic, Arthur Charles Tomlin, Mark Mihelich, Kyungsuk David Lee, David Jason Christopher Horbach, Matthew Jon Puls
-
Patent number: 8897491
Abstract: A system and method are disclosed relating to a pipeline for generating a computer model of a target user, including a hand model of the user's hands and fingers, captured by an image sensor in a NUI system. The computer model represents a best estimate of the position and orientation of a user's hand or hands. The generated hand model may be used by a gaming or other application to determine such things as user gestures and control actions.
Type: Grant
Filed: October 19, 2011
Date of Patent: November 25, 2014
Assignee: Microsoft Corporation
Inventors: Anthony Ambrus, Kyungsuk David Lee, Andrew Campbell, David Haley, Brian Mount, Albert Robles, Daniel Osborn, Shawn Wright, Nahil Sharkasi, Dave Hill, Daniel McCulloch
-
Patent number: 8878906
Abstract: Technology is described for determining and using invariant features for computer vision. A local orientation may be determined for each depth pixel in a subset of the depth pixels in a depth map. The local orientation may be an in-plane orientation, an out-of-plane orientation, or both. A local coordinate system is determined for each of the depth pixels in the subset based on the local orientation of the corresponding depth pixel. A feature region is defined relative to the local coordinate system for each of the depth pixels in the subset. The feature region for each of the depth pixels in the subset is transformed from the local coordinate system to an image coordinate system of the depth map. The transformed feature regions are used to process the depth map.
Type: Grant
Filed: November 28, 2012
Date of Patent: November 4, 2014
Assignee: Microsoft Corporation
Inventors: Jamie D. J. Shotton, Mark J. Finocchio, Richard E. Moore, Alexandru O. Balan, Kyungsuk David Lee
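One simple realization of an in-plane local orientation is the image-space depth gradient; rotating the feature region by that angle before sampling makes the sampled region follow the local orientation, which is the source of the invariance. A sketch (the gradient-based orientation, the offset pattern, and the function names are assumptions for illustration, not the patented method):

```python
import numpy as np

def local_inplane_orientation(depth, y, x):
    """In-plane orientation at a depth pixel, taken from the central
    difference of the depth map (one simple choice of local orientation)."""
    gy = depth[y + 1, x] - depth[y - 1, x]
    gx = depth[y, x + 1] - depth[y, x - 1]
    return np.arctan2(gy, gx)

def feature_region_to_image(offsets, theta, y, x):
    """Transform feature-region offsets from the pixel's local coordinate
    system (rotated by theta) into image coordinates."""
    c, s = np.cos(theta), np.sin(theta)
    R = np.array([[c, -s], [s, c]])
    return offsets @ R.T + np.array([x, y])

# Toy depth map: a tilted plane, so every pixel shares the same gradient
yy, xx = np.mgrid[0:16, 0:16]
depth = 0.5 * xx + 0.25 * yy

theta = local_inplane_orientation(depth, 8, 8)
offsets = np.array([[2.0, 0.0], [0.0, 2.0], [-2.0, 0.0], [0.0, -2.0]])
region = feature_region_to_image(offsets, theta, 8, 8)
```

If the image rotates, the gradient (and so `theta`) rotates with it, and the transformed region samples the same physical neighborhood as before.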
-
Publication number: 20140313225
Abstract: An augmented reality submission includes a hologram to virtually augment a world space object and a compensation offer for presenting the hologram to a viewer of the world space object. The augmented reality submission is selected as a winning submission if the submission satisfies a selection criterion.
Type: Application
Filed: April 23, 2013
Publication date: October 23, 2014
Inventors: Kyungsuk David Lee, Alexandru Balan, Mark J. Finocchio
-
Patent number: 8856691
Abstract: Systems, methods and computer readable media are disclosed for a gesture tool. A capture device captures user movement and provides corresponding data to a gesture recognizer engine and an application. From that, the data is parsed to determine whether it satisfies one or more gesture filters, each filter corresponding to a user-performed gesture. The data and the information about the filters are also sent to a gesture tool, which displays aspects of the data and filters. In response to user input corresponding to a change in a filter, the gesture tool sends an indication of such to the gesture recognizer engine and application, where that change occurs.
Type: Grant
Filed: May 29, 2009
Date of Patent: October 7, 2014
Assignee: Microsoft Corporation
Inventors: Kevin Geisner, Stephen Latta, Gregory N. Snook, Relja Markovic, Arthur Charles Tomlin, Mark Mihelich, Kyungsuk David Lee, David Jason Christopher Horbach, Matthew Jon Puls
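A gesture filter in this sense is a parameterized predicate over captured movement data, and the gesture tool's job is to retune a parameter and re-test the same capture. A minimal sketch (the class, the speed-based criterion, and the numbers are hypothetical):

```python
class GestureFilter:
    """A filter corresponding to one user-performed gesture."""

    def __init__(self, name, min_peak_speed):
        self.name = name
        self.min_peak_speed = min_peak_speed  # tunable from the gesture tool

    def matches(self, hand_speeds):
        # The capture data satisfies the filter if the hand ever moved
        # faster than the filter's threshold
        return max(hand_speeds) >= self.min_peak_speed

swipe = GestureFilter("swipe", min_peak_speed=2.0)
capture = [0.1, 0.8, 2.4, 1.0]      # speeds parsed from capture-device data

before = swipe.matches(capture)     # peak of 2.4 clears the threshold
swipe.min_peak_speed = 3.0          # the gesture tool tightens the filter
after = swipe.matches(capture)      # the same data no longer matches
```

Displaying `capture` alongside the threshold is what lets a designer see why a gesture did or did not fire, which is the point of the tool.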
-
Publication number: 20140118397
Abstract: A planar surface within a physical environment is detected enabling presentation of a graphical user interface overlaying the planar surface. Detection of planar surfaces may be performed, in one example, by obtaining a collection of three-dimensional surface points of a physical environment imaged via an optical sensor subsystem. A plurality of polygon sets of points are sampled within the collection. Each polygon set of points includes three or more localized points of the collection that define a polygon. Each polygon is classified into one or more groups of polygons having a shared planar characteristic with each other polygon of that group. One or more planar surfaces within the collection are identified such that each planar surface is at least partially defined by a group of polygons containing at least a threshold number of polygons.
Type: Application
Filed: October 25, 2012
Publication date: May 1, 2014
Inventors: Kyungsuk David Lee, Alexandru Balan, Jeffrey N. Margolis
-
Publication number: 20140002607
Abstract: Technology is described for determining and using invariant features for computer vision. A local orientation may be determined for each depth pixel in a subset of the depth pixels in a depth map. The local orientation may be an in-plane orientation, an out-of-plane orientation, or both. A local coordinate system is determined for each of the depth pixels in the subset based on the local orientation of the corresponding depth pixel. A feature region is defined relative to the local coordinate system for each of the depth pixels in the subset. The feature region for each of the depth pixels in the subset is transformed from the local coordinate system to an image coordinate system of the depth map. The transformed feature regions are used to process the depth map.
Type: Application
Filed: November 28, 2012
Publication date: January 2, 2014
Inventors: Jamie D. J. Shotton, Mark J. Finocchio, Richard E. Moore, Alexandru O. Balan, Kyungsuk David Lee