Patents by Inventor Vivek Pradeep
Vivek Pradeep has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).
-
Publication number: 20190294871
Abstract: Methods, apparatuses, and computer-readable mediums for generating human action data sets are disclosed. In an aspect, an apparatus may receive a background image and a set of reference images, each of which includes a person. The apparatus may identify body parts of the person from the set of reference images and generate a transformed skeleton image by mapping each of the body parts of the person to corresponding skeleton parts of a target skeleton. The apparatus may generate a mask of the transformed skeleton image. The apparatus may generate, using machine learning, a frame of the person formed according to the target skeleton within the background image.
Type: Application
Filed: March 23, 2018
Publication date: September 26, 2019
Inventors: Hamidreza VAEZI JOZE, Ilya ZHARKOV, Vivek PRADEEP, Mehran KHODABANDEH
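The mapping of body parts onto a target skeleton can be pictured as bone-length retargeting. The sketch below is a toy 2-D stand-in, not the patented method: `retarget_limb` and its parameters are illustrative names, and the real system works on full skeletons and feeds the result to a learned generator.

```python
import math

def retarget_limb(parent, child, target_length):
    """Move the `child` joint along the parent-to-child direction so the
    limb matches the target skeleton's bone length (2-D keypoints).
    A toy stand-in for the abstract's body-part-to-skeleton mapping."""
    dx, dy = child[0] - parent[0], child[1] - parent[1]
    length = math.hypot(dx, dy) or 1.0
    scale = target_length / length
    return (parent[0] + dx * scale, parent[1] + dy * scale)

# Shoulder at (0, 0), elbow at (3, 4) (bone length 5); if the target
# skeleton's upper arm is 10 units, the elbow moves to (6, 8).
print(retarget_limb((0.0, 0.0), (3.0, 4.0), 10.0))  # (6.0, 8.0)
```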
-
Publication number: 20190220698
Abstract: Methods and systems for automatically generating training data for use in machine learning are disclosed. The methods can involve the use of environmental data derived from first and second environmental sensors for a single event. The environmental data types derived from each environmental sensor are different. The event is detected based on first environmental data derived from the first environmental sensor, and a portion of second environmental data derived from the second environmental sensor is selected to generate training data for the detected event. The resulting training data can be employed to train machine learning models.
Type: Application
Filed: January 12, 2018
Publication date: July 18, 2019
Applicant: MICROSOFT TECHNOLOGY LICENSING, LLC
Inventor: Vivek PRADEEP
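The selection step the abstract describes — use an event detected on one sensor to cut a time-aligned slice from a second sensor's stream — can be sketched as follows. All names (`make_training_pairs`, the window size, the tuple layout) are illustrative assumptions, not the patent's API.

```python
def make_training_pairs(event_times, frames, window_s=1.0):
    """For each event detected via the first sensor, select the
    time-aligned portion of the second sensor's data as a training
    example. `frames` is a list of (timestamp, data) tuples."""
    pairs = []
    for t_event in event_times:
        clip = [d for (t, d) in frames if abs(t - t_event) <= window_s / 2]
        if clip:
            pairs.append((t_event, clip))
    return pairs

# Events at t=2.0s and t=5.0s (e.g. detected from audio) automatically
# select and label the video frames recorded around those moments.
frames = [(i * 0.5, f"frame{i}") for i in range(12)]
print(make_training_pairs([2.0, 5.0], frames))
```

The point of the technique is that no human labels the second stream: the first sensor's detector does the labeling for free.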
-
Publication number: 20180342045
Abstract: Resolution enhancement techniques are described. An apparatus may receive first image data at a first resolution, and second image data at a resolution less than the first resolution. The second image data may be scaled to the first resolution and compared to the first image data. Application of a neural network may scale the first image data to a resolution higher than the first resolution. The application of the neural network may incorporate signals based on the scaled second image data. The signals may include information obtained by comparing the scaled second image data to the first image data.
Type: Application
Filed: May 26, 2017
Publication date: November 29, 2018
Inventors: Moshe R. Lutz, Vivek Pradeep
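The comparison signal the abstract feeds to the neural network can be sketched as an upscale-and-subtract step. This is a minimal illustration only: nearest-neighbour scaling and a plain difference are assumptions (the patent does not fix the scaler or the comparison), and the network that consumes the signal is omitted.

```python
def upscale_nearest(img, factor):
    """Nearest-neighbour upscale of a 2-D list-of-lists image."""
    return [[row[c // factor] for c in range(len(row) * factor)]
            for row in img for _ in range(factor)]

def guidance_residual(first, second_scaled):
    """Per-pixel difference between the first image and the scaled
    second image -- one possible 'signal' in the abstract's sense."""
    return [[a - b for a, b in zip(ra, rb)]
            for ra, rb in zip(first, second_scaled)]

low = [[1, 2], [3, 4]]                       # second (lower-res) image
high = [[1, 1, 2, 3], [1, 2, 2, 2],
        [3, 3, 4, 4], [3, 4, 4, 5]]          # first image
print(guidance_residual(high, upscale_nearest(low, 2)))
```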
-
Publication number: 20180342044
Abstract: Resolution enhancement techniques are described. An apparatus may receive first image data at a first resolution, and second image data at a resolution less than the first resolution. The second image data may be scaled to the first resolution and compared to the first image data. Application of a neural network may scale the first image data to a resolution higher than the first resolution. The application of the neural network may incorporate signals based on the scaled second image data. The signals may include information obtained by comparing the scaled second image data to the first image data.
Type: Application
Filed: May 26, 2017
Publication date: November 29, 2018
Inventors: Moshe R. Lutz, Vivek Pradeep
-
Publication number: 20180293221
Abstract: A method to execute computer-actionable directives conveyed in human speech comprises: receiving audio data recording speech from one or more speakers; converting the audio data into a linguistic representation of the recorded speech; detecting a target corresponding to the linguistic representation; committing, to a data structure, language data associated with the detected target and based on the linguistic representation; parsing the data structure to identify one or more of the computer-actionable directives; and submitting the one or more computer-actionable directives to the computer for processing.
Type: Application
Filed: June 11, 2018
Publication date: October 11, 2018
Applicant: Microsoft Technology Licensing, LLC
Inventors: Erich-Soren FINKELSTEIN, Han Yee Mimi FUNG, Aleksandar UZELAC, Oz SOLOMON, Keith Coleman HEROLD, Vivek PRADEEP, Zongyi LIU, Kazuhito KOISHIDA, Haithem ALBADAWI, Steven Nabil BATHICHE, Christopher Lance NUESMEYER, Michelle Lynn HOLTMANN, Christopher Brian QUIRK, Pablo Luis SALA
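The final parsing step — walking the committed language data to pull out actionable directives — can be sketched with a deliberately naive keyword matcher. Everything here is an illustrative assumption: the patent's actual parser, data structure, and directive format are not specified by the abstract.

```python
def extract_directives(conversation, known_commands):
    """Scan committed (speaker, utterance) records for computer-actionable
    directives. Substring matching is a toy stand-in for real parsing."""
    directives = []
    for speaker, utterance in conversation:
        for command in known_commands:
            if command in utterance.lower():
                directives.append((speaker, command))
    return directives

log = [("alice", "Could you turn on the lights?"),
       ("bob", "I think it is bright enough already.")]
print(extract_directives(log, ["turn on the lights", "play music"]))
# [('alice', 'turn on the lights')]
```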
-
Publication number: 20180232608
Abstract: Computing devices and methods for associating a semantic identifier with an object are disclosed. In one example, a three-dimensional model of an environment comprising the object is generated. Image data of the environment is sent to a user computing device for display by the user computing device. User input comprising position data of the object and the semantic identifier is received. The position data is mapped to a three-dimensional location in the three-dimensional model at which the object is located. Based at least on mapping the position data to the three-dimensional location of the object, the semantic identifier is associated with the object.
Type: Application
Filed: December 5, 2017
Publication date: August 16, 2018
Applicant: Microsoft Technology Licensing, LLC
Inventors: Vivek Pradeep, Michelle Lynn Holtmann, Steven Nabil Bathiche
-
Publication number: 20180232571
Abstract: An intelligent assistant device is configured to communicate non-verbal cues. Image data indicating presence of a human is received from one or more cameras of the device. In response, one or more components of the device are actuated to non-verbally communicate the presence of the human. Data indicating context information of the human is received from one or more of the sensors. Using at least this data, one or more contexts of the human are determined, and one or more components of the device are actuated to non-verbally communicate the one or more contexts of the human.
Type: Application
Filed: March 26, 2018
Publication date: August 16, 2018
Applicant: Microsoft Technology Licensing, LLC
Inventors: Steven Nabil BATHICHE, Vivek PRADEEP, Alexander Norman BENNETT, Daniel Gordon O'NEIL, Anthony Christian REED, Krzysztof Jan LUCHOWIEC, Tsitsi Isabel KOLAWOLE
-
Publication number: 20180231653
Abstract: An entity-tracking computing system receives sensor information from a plurality of different sensors. The positions of entities detected by the various sensors are resolved to an environment-relative coordinate system so that entities identified by one sensor can be tracked across the fields of detection of other sensors.
Type: Application
Filed: August 21, 2017
Publication date: August 16, 2018
Applicant: Microsoft Technology Licensing, LLC
Inventors: Vivek PRADEEP, Pablo Luis SALA, John Guido Atkins WEISS, Moshe Randall LUTZ
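Resolving a sensor-relative position into the shared environment frame is, at its core, a rigid-body transform. The 2-D sketch below assumes each sensor's pose (position plus heading) in the room is known; the function name and pose encoding are illustrative, not the patent's.

```python
import math

def to_environment_frame(p_local, sensor_pose):
    """Resolve a sensor-relative 2-D position into the environment
    coordinate system. `sensor_pose` = (x, y, heading_rad) of the
    sensor within the environment frame."""
    sx, sy, theta = sensor_pose
    x, y = p_local
    return (sx + x * math.cos(theta) - y * math.sin(theta),
            sy + x * math.sin(theta) + y * math.cos(theta))

# A person 2 m in front of a camera mounted at (5, 0) and facing +y
# (heading 90 degrees) resolves to about (5, 2) in room coordinates,
# so a second sensor covering that area can pick up the same track.
print(to_environment_frame((2.0, 0.0), (5.0, 0.0, math.pi / 2)))
```

Once every detection lives in the same frame, cross-sensor tracking reduces to associating nearby environment-frame positions over time.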
-
Publication number: 20180233145
Abstract: A first intelligent assistant computing device configured to receive and respond to natural language inputs provided by human users syncs to a reference clock of a wireless computer network. The first intelligent assistant computing device receives a communication sent by a second intelligent assistant computing device indicating a signal emission time at which the second intelligent assistant computing device emitted a position calibration signal. The first intelligent assistant computing device records a signal detection time at which the position calibration signal was detected. Based on 1) the difference between the signal emission time and the signal detection time, and 2) a known propagation speed of the position calibration signal, a distance between the first and second intelligent assistant computing devices is calculated.
Type: Application
Filed: December 5, 2017
Publication date: August 16, 2018
Applicant: Microsoft Technology Licensing, LLC
Inventors: Steven Nabil BATHICHE, Flavio Protasio RIBEIRO, Vivek PRADEEP
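The distance calculation the abstract describes is a time-of-flight product: distance = (detection time − emission time) × propagation speed, which only works because both clocks are synced to the same network reference. A minimal sketch, assuming an acoustic calibration signal (the 343 m/s default is an assumption; the abstract does not name the signal type):

```python
def distance_between_devices(emission_time_s, detection_time_s,
                             propagation_speed_mps=343.0):
    """Estimate device separation from a position-calibration signal.
    Both timestamps must come from clocks synced to the same network
    reference clock, as the abstract requires."""
    time_of_flight = detection_time_s - emission_time_s
    if time_of_flight < 0:
        raise ValueError("detection cannot precede emission on synced clocks")
    return time_of_flight * propagation_speed_mps

# A signal detected 10 ms after emission travelled about 3.43 m.
print(distance_between_devices(0.000, 0.010))
```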
-
Patent number: 10001845
Abstract: A 3D silhouette sensing system is described which comprises a stereo camera and a light source. In an embodiment, a 3D sensing module triggers the capture of pairs of images by the stereo camera at the same time that the light source illuminates the scene. A series of pairs of images may be captured at a predefined frame rate. Each pair of images is then analyzed to track both a retroreflector in the scene, which can be moved relative to the stereo camera, and an object which is between the retroreflector and the stereo camera and therefore partially occludes the retroreflector. In processing the image pairs, silhouettes are extracted for each of the retroreflector and the object and these are used to generate a 3D contour for each of the retroreflector and object.
Type: Grant
Filed: June 14, 2017
Date of Patent: June 19, 2018
Assignee: Microsoft Technology Licensing, LLC
Inventors: David Kim, Shahram Izadi, Vivek Pradeep, Steven Bathiche, Timothy Andrew Large, Karlton David Powell
-
Patent number: 9958585
Abstract: Example embodiments simultaneously acquire multiple different focus state images for a scene at a video rate. The focus state images are acquired from a static arrangement of static optical elements. The focus state images are suitable for and sufficient for determining the depth of an object in the scene using depth from defocus (DFD) processing. The depth of the object in the scene is determined from the focus state images using DFD processing. The static optical elements may be off-the-shelf components that are used without modification. The static elements may include image sensors aligned to a common optical path, a beam splitter in the common optical path, and telecentric lenses that correct light in multiple optical paths produced by the beam splitter. The multiple optical paths may differ by a defocus delta. Simultaneous acquisition of the multiple focus state images facilitates mitigating motion blur associated with conventional DFD processing.
Type: Grant
Filed: August 17, 2015
Date of Patent: May 1, 2018
Assignee: Microsoft Technology Licensing, LLC
Inventors: Karlton Powell, Vivek Pradeep
-
Publication number: 20170285763
Abstract: A 3D silhouette sensing system is described which comprises a stereo camera and a light source. In an embodiment, a 3D sensing module triggers the capture of pairs of images by the stereo camera at the same time that the light source illuminates the scene. A series of pairs of images may be captured at a predefined frame rate. Each pair of images is then analyzed to track both a retroreflector in the scene, which can be moved relative to the stereo camera, and an object which is between the retroreflector and the stereo camera and therefore partially occludes the retroreflector. In processing the image pairs, silhouettes are extracted for each of the retroreflector and the object and these are used to generate a 3D contour for each of the retroreflector and object.
Type: Application
Filed: June 14, 2017
Publication date: October 5, 2017
Inventors: David KIM, Shahram IZADI, Vivek PRADEEP, Steven BATHICHE, Timothy Andrew LARGE, Karlton David POWELL
-
Patent number: 9779508
Abstract: A combination of three computational components may provide memory and computational efficiency while producing results with little latency, e.g., output can begin with the second frame of video being processed. Memory usage may be reduced by maintaining key frames of video and pose information for each frame of video. Additionally, only one global volumetric structure may be maintained for the frames of video being processed. To be computationally efficient, only depth information may be computed from each frame. Through fusion of multiple depth maps from different frames into a single volumetric structure, errors may average out over several frames, leading to a final output with high quality.
Type: Grant
Filed: March 26, 2014
Date of Patent: October 3, 2017
Assignee: Microsoft Technology Licensing, LLC
Inventors: Vivek Pradeep, Christoph Rhemann, Shahram Izadi, Christopher Zach, Michael Bleyer, Steven Bathiche
-
Patent number: 9720506
Abstract: A 3D silhouette sensing system is described which comprises a stereo camera and a light source. In an embodiment, a 3D sensing module triggers the capture of pairs of images by the stereo camera at the same time that the light source illuminates the scene. A series of pairs of images may be captured at a predefined frame rate. Each pair of images is then analyzed to track both a retroreflector in the scene, which can be moved relative to the stereo camera, and an object which is between the retroreflector and the stereo camera and therefore partially occludes the retroreflector. In processing the image pairs, silhouettes are extracted for each of the retroreflector and the object and these are used to generate a 3D contour for each of the retroreflector and object.
Type: Grant
Filed: January 14, 2014
Date of Patent: August 1, 2017
Assignee: Microsoft Technology Licensing, LLC
Inventors: David Kim, Shahram Izadi, Vivek Pradeep, Steven Bathiche, Timothy Andrew Large, Karlton David Powell
-
Publication number: 20170193666
Abstract: Methods and apparatus for capturing motion from a self-tracking device are disclosed. In embodiments, a device self-tracks motion of the device relative to a first reference frame while recording motion of a subject relative to a second reference frame, the second reference frame being a reference frame relative to the device. In these embodiments, the subject may be a real object or, alternately, a virtual object, whose motion may be recorded relative to the second reference frame by associating a position offset relative to the device with the position of the virtual object in the recorded motion. The motion of the subject relative to the first reference frame may be determined from the tracked motion of the device relative to the first reference frame and the recorded motion of the subject relative to the second reference frame.
Type: Application
Filed: January 5, 2016
Publication date: July 6, 2017
Applicant: Microsoft Technology Licensing, LLC
Inventors: John Weiss, Vivek Pradeep, Xiaoyan Hu
-
Patent number: 9606506
Abstract: A holographic interaction device is described. In one or more implementations, an input device includes an input portion comprising a plurality of controls that are configured to generate signals to be processed as inputs by a computing device that is communicatively coupled to the controls. The input device also includes a holographic recording mechanism disposed over a surface of the input portion; the holographic recording mechanism is configured to output a hologram, in response to receipt of light from a light source, that is viewable by a user over the input portion.
Type: Grant
Filed: October 15, 2013
Date of Patent: March 28, 2017
Assignee: Microsoft Technology Licensing, LLC
Inventors: Timothy Andrew Large, Neil Emerton, Moshe R. Lutz, Vivek Pradeep, John G. A. Weiss, Quintus Travis
-
Patent number: 9584766
Abstract: Techniques for implementing an integrative interactive space are described. In implementations, video cameras that are positioned to capture video at different locations are synchronized such that aspects of the different locations can be used to generate an integrated interactive space. The integrated interactive space can enable users at the different locations to interact, such as via video interaction, audio interaction, and so on. In at least some embodiments, techniques can be implemented to adjust an image of a participant during a video session such that the participant appears to maintain eye contact with other video session participants at other locations. Techniques can also be implemented to provide a virtual shared space that can enable users to interact with the space, and can also enable users to interact with one another and/or objects that are displayed in the virtual shared space.
Type: Grant
Filed: June 3, 2015
Date of Patent: February 28, 2017
Assignee: Microsoft Technology Licensing, LLC
Inventors: Vivek Pradeep, Stephen G. Latta, Steven Nabil Bathiche, Kevin Geisner, Alice Jane Bernheim Brush
-
Publication number: 20170053411
Abstract: Example embodiments simultaneously acquire multiple different focus state images for a scene at a video rate. The focus state images are acquired from a static arrangement of static optical elements. The focus state images are suitable for and sufficient for determining the depth of an object in the scene using depth from defocus (DFD) processing. The depth of the object in the scene is determined from the focus state images using DFD processing. The static optical elements may be off-the-shelf components that are used without modification. The static elements may include image sensors aligned to a common optical path, a beam splitter in the common optical path, and telecentric lenses that correct light in multiple optical paths produced by the beam splitter. The multiple optical paths may differ by a defocus delta. Simultaneous acquisition of the multiple focus state images facilitates mitigating motion blur associated with conventional DFD processing.
Type: Application
Filed: August 17, 2015
Publication date: February 23, 2017
Inventors: Karlton Powell, Vivek Pradeep
-
Publication number: 20160330360
Abstract: Remote depth sensing techniques are described via relayed depth from diffusion. In one or more implementations, a remote depth sensing system is configured to sense depth as relayed from diffusion. The system includes an image capture system including an image sensor and an imaging lens configured to transmit light to the image sensor through an intermediate image plane that is disposed between the imaging lens and the image sensor, the intermediate image plane having an optical diffuser disposed proximal thereto that is configured to diffuse the transmitted light. The system also includes a depth sensing module configured to receive one or more images from the image sensor and determine a distance to one or more objects in an object scene captured by the one or more images using a depth-from-diffusion technique that is based at least in part on an amount of blurring exhibited by the respective objects in the one or more images.
Type: Application
Filed: May 5, 2015
Publication date: November 10, 2016
Inventors: Karlton D. Powell, Vivek Pradeep
-
Patent number: 9430095
Abstract: Global and local light detection techniques in optical sensor systems are described. In one or more implementations, a global lighting value is generated that describes a global lighting level for a plurality of optical sensors based on a plurality of inputs received from the plurality of optical sensors. An illumination map is generated that describes local lighting conditions of respective ones of the plurality of optical sensors based on the plurality of inputs received from the plurality of optical sensors. Object detection is performed using an image captured by the plurality of optical sensors, along with the global lighting value and the illumination map.
Type: Grant
Filed: January 23, 2014
Date of Patent: August 30, 2016
Assignee: Microsoft Technology Licensing, LLC
Inventors: Vivek Pradeep, Liang Wang, Pablo Sala, Luis Eduardo Cabrera-Cordon, Steven Nabil Bathiche
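The split between one global lighting value and a per-sensor illumination map can be sketched as a mean-plus-deviation decomposition. This is a toy assumption: the abstract does not say the global value is a mean, only that one scene-wide level and one local map are derived from the same sensor inputs.

```python
def lighting_model(sensor_readings):
    """Split optical-sensor inputs into a single global lighting value
    (here, the mean level) and an illumination map of per-sensor local
    deviations from that global level."""
    global_value = sum(sensor_readings) / len(sensor_readings)
    illumination_map = [r - global_value for r in sensor_readings]
    return global_value, illumination_map

# A shadow over the last two sensors shows up as negative local terms
# in the map, while the global value tracks the overall scene level.
g, m = lighting_model([100, 100, 100, 40, 40])
print(g, m)  # 76.0 [24.0, 24.0, 24.0, -36.0, -36.0]
```

A detector can then normalize each pixel against its sensor's local term instead of one scene-wide threshold, which is what makes detection robust to uneven lighting.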