Patents by Inventor Vivek Pradeep

Vivek Pradeep has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Publication number: 20240104103
    Abstract: Methods and systems for generating and using a semantic index are provided. In some examples, content data is received. The content data includes a plurality of subsets of content data. Each of the plurality of subsets of content data is labelled based on a semantic context corresponding to the content data. The plurality of subsets of content data and their corresponding labels are stored. The plurality of subsets of content data are grouped based on their labels, thereby generating one or more groups of subsets of content data. Further, a computing device is adapted to perform an action based on the one or more groups of subsets of content data.
    Type: Application
    Filed: September 26, 2022
    Publication date: March 28, 2024
    Applicant: Microsoft Technology Licensing, LLC
    Inventors: Eric Chris Wolfgang SOMMERLADE, Vivek PRADEEP, Steven N. BATHICHE, Nathan LUQUETTA-FISH
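The labelling-and-grouping flow this abstract describes can be sketched as follows; the keyword-based `label_fn` heuristic and the snippet data are illustrative assumptions, not part of the patent:

```python
from collections import defaultdict

def build_semantic_index(subsets, label_fn):
    """Label each subset of content data, store the (subset, label) pairs,
    and group subsets that share a label."""
    labelled = [(subset, label_fn(subset)) for subset in subsets]
    groups = defaultdict(list)
    for subset, label in labelled:
        groups[label].append(subset)
    return dict(groups)

# Illustrative content: text snippets labelled by a crude keyword heuristic
# standing in for the semantic-context labelling the abstract describes.
snippets = ["meeting at 3pm", "lunch order", "meeting notes", "dinner recipe"]
index = build_semantic_index(
    snippets,
    lambda s: "scheduling" if "meeting" in s else "food",
)
# index == {"scheduling": ["meeting at 3pm", "meeting notes"],
#           "food": ["lunch order", "dinner recipe"]}
```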
  • Patent number: 11727819
    Abstract: The present disclosure describes an interactive system for teaching sequencing and programming to children. In some embodiments, the interactive system comprises a plurality of tiles organisable to form a structural pattern, wherein each tile comprises an RFID tag storing at least a pre-defined command corresponding to a first action and an identifier associated with a second set of actions, and an interactive robot. In some embodiments, the interactive robot, when placed on a tile, is configured for receiving a voice command from a user, reading at least the pre-defined command and the identifier from the RFID tag associated with the tile, comparing the command received from the user with the pre-defined command, and performing one or more actions from among a third set of actions based on a result of the comparison.
    Type: Grant
    Filed: June 13, 2018
    Date of Patent: August 15, 2023
    Assignee: GRASP IO INNOVATIONS PVT LTD.
    Inventors: Shanmugha Tumkur Srinivas, Vivek Pradeep Kumar, Jayakrishnan Kundully, Rahul Kothari, Rudresh Jayaram, Vineetha Menon, Nischitha Thulasiraju, John Solomon Johnson
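The compare-and-act step in this abstract can be sketched as below; the tag layout and the action names are hypothetical stand-ins for the tile's RFID contents:

```python
def robot_act(voice_command, rfid_tag):
    """Compare the user's voice command with the tile's pre-defined command
    and pick an action: run the tile's action on a match, otherwise fall
    back to the action set named by the tag's identifier."""
    if voice_command == rfid_tag["command"]:
        return "perform:" + rfid_tag["command"]
    return "lookup:" + rfid_tag["identifier"]

# A tile whose RFID tag stores a command and an action-set identifier.
tag = {"command": "move forward", "identifier": "set-B"}
print(robot_act("move forward", tag))  # -> perform:move forward
print(robot_act("turn left", tag))     # -> lookup:set-B
```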
  • Patent number: 11669943
    Abstract: A computational photography system is described herein including a guidance system and a detail enhancement system. The guidance system uses a first neural network that maps an original image provided by an image sensor to a guidance image, which represents a color-corrected and lighting-corrected version of the original image. A combination unit combines the original image and the guidance image to produce a combined image. A detail-enhancement system then uses a second neural network to map the combined image to a predicted image. The predicted image supplements the guidance provided by the first neural network by sharpening details in the original image. A training system is also described herein for training the first and second neural networks. The training system alternates in the data it feeds the second neural network, first using a guidance image as input to the second neural network, and then using a corresponding ground-truth image.
    Type: Grant
    Filed: October 16, 2020
    Date of Patent: June 6, 2023
    Assignee: Microsoft Technology Licensing, LLC
    Inventors: Luming Liang, Ilya Dmitriyevich Zharkov, Vivek Pradeep, Faezeh Amjadi
  • Publication number: 20220358332
    Abstract: A method of training a neural network for detecting target features in images is described. The neural network is trained using a first data set that includes labeled images, where at least some of the labeled images having subjects with labeled features, including: dividing each of the labeled images of the first data set into a respective plurality of tiles, and generating, for each of the plurality of tiles, a plurality of feature anchors that indicate target features within the corresponding tile. Target features that correspond to the plurality of feature anchors are detected in a second data set of unlabeled images. Images of the second data set having target features that were not detected are labeled. A third data set that includes the first data set and the labeled images of the second data set is generated. The neural network is trained using the third data set.
    Type: Application
    Filed: May 7, 2021
    Publication date: November 10, 2022
    Applicant: Microsoft Technology Licensing, LLC
    Inventors: Hamidreza Vaezi JOZE, Vivek PRADEEP, Karthik VIJAYAN
  • Patent number: 11429807
    Abstract: Methods and systems for automatically generating training data for use in machine learning are disclosed. The methods can involve the use of environmental data derived from first and second environmental sensors for a single event. The environmental data types derived from each environmental sensor are different. The event is detected based on first environmental data derived from the first environmental sensor, and a portion of second environmental data derived from the second environmental sensor is selected to generate training data for the detected event. The resulting training data can be employed to train machine learning models.
    Type: Grant
    Filed: January 12, 2018
    Date of Patent: August 30, 2022
    Assignee: Microsoft Technology Licensing, LLC
    Inventor: Vivek Pradeep
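The selection step this abstract describes, detecting an event in the first sensor's stream and cutting the matching window from the second sensor's stream, can be sketched with hypothetical timestamped data:

```python
def make_training_samples(events, sensor2_stream, window=2):
    """Pair each event detected from sensor 1 with the portion of
    sensor 2's time-indexed data surrounding the event time."""
    samples = []
    for event_time, label in events:
        segment = [value for timestamp, value in sensor2_stream
                   if event_time - window <= timestamp <= event_time + window]
        samples.append((segment, label))
    return samples

# Sensor 1 (say, a microphone) detects a "door_open" event at t=10;
# sensor 2 (say, a camera) yields frames indexed by timestamp.
frames = [(8, "f8"), (9, "f9"), (10, "f10"), (11, "f11"), (14, "f14")]
samples = make_training_samples([(10, "door_open")], frames)
# samples == [(["f8", "f9", "f10", "f11"], "door_open")]
```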
  • Publication number: 20220221932
    Abstract: Aspects of the present disclosure relate to systems and methods for controlling a function of a computing system using gaze detection. In examples, one or more images of a user are received and gaze information may be determined from the received one or more images. Non-gaze information may be received when the gaze information is determined to satisfy a condition. Accordingly, a function may be enabled based on the received non-gaze information. In examples, the gaze information may be determined by extracting a plurality of features from the received one or more images, providing the plurality of features to a neural network, and determining, utilizing the neural network, a location at a display device at which a gaze of the user is directed.
    Type: Application
    Filed: January 12, 2021
    Publication date: July 14, 2022
    Applicant: Microsoft Technology Licensing, LLC
    Inventors: Steven N. BATHICHE, Eric Chris Wolfgang Sommerlade, Vivek PRADEEP, Alexandros NEOFYTOU
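The gaze-gated enablement above can be sketched as a simple region test; the rectangle representation and the return strings are assumptions for illustration (in the patent, the gaze location itself is predicted by a neural network):

```python
def maybe_enable(gaze_point, control_region, non_gaze_input):
    """Enable a function based on non-gaze input only when the user's
    gaze location satisfies the condition of falling inside a control's
    on-screen region (x0, y0, x1, y1)."""
    x0, y0, x1, y1 = control_region
    gx, gy = gaze_point
    if x0 <= gx <= x1 and y0 <= gy <= y1:
        return "enabled:" + non_gaze_input
    return "ignored"

button = (100, 60, 200, 120)                         # on-screen control region
print(maybe_enable((120, 80), button, "volume_up"))  # -> enabled:volume_up
print(maybe_enable((10, 10), button, "volume_up"))   # -> ignored
```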
  • Publication number: 20220122235
    Abstract: A computational photography system is described herein including a guidance system and a detail enhancement system. The guidance system uses a first neural network that maps an original image provided by an image sensor to a guidance image, which represents a color-corrected and lighting-corrected version of the original image. A combination unit combines the original image and the guidance image to produce a combined image. A detail-enhancement system then uses a second neural network to map the combined image to a predicted image. The predicted image supplements the guidance provided by the first neural network by sharpening details in the original image. A training system is also described herein for training the first and second neural networks. The training system alternates in the data it feeds the second neural network, first using a guidance image as input to the second neural network, and then using a corresponding ground-truth image.
    Type: Application
    Filed: October 16, 2020
    Publication date: April 21, 2022
    Inventors: Luming LIANG, Ilya Dmitriyevich ZHARKOV, Vivek PRADEEP, Faezeh AMJADI
  • Patent number: 11092491
    Abstract: An optical system, comprising a multi-spectral optical element, a switchable filter, a dual bandpass filter, and a sensor. The multi-spectral optical element receives light in at least a first spectral band and a second spectral band. The dual bandpass filter filters out wavelengths of light in a transition region of the switchable filter between the first spectral band and the second spectral band. The switchable filter filters light received from the dual bandpass filter in the first spectral band in a first mode where the switchable filter transmits light in the first spectral band and in a second mode where the switchable filter does not transmit light in the first spectral band. The sensor is disposed at an image plane, and the multi-spectral optical element is configured to produce a modulation transfer function value that is above a predetermined threshold for each of the spectral bands.
    Type: Grant
    Filed: June 22, 2020
    Date of Patent: August 17, 2021
    Assignee: Microsoft Technology Licensing, LLC
    Inventors: Karlton David Powell, Vivek Pradeep
  • Patent number: 11010601
    Abstract: An intelligent assistant device is configured to communicate non-verbal cues. Image data indicating presence of a human is received from one or more cameras of the device. In response, one or more components of the device are actuated to non-verbally communicate the presence of the human. Data indicating context information of the human is received from one or more of the sensors. Using at least this data, one or more contexts of the human are determined, and one or more components of the device are actuated to non-verbally communicate the one or more contexts of the human.
    Type: Grant
    Filed: March 26, 2018
    Date of Patent: May 18, 2021
    Assignee: Microsoft Technology Licensing, LLC
    Inventors: Steven Nabil Bathiche, Vivek Pradeep, Alexander Norman Bennett, Daniel Gordon O'Neil, Anthony Christian Reed, Krzysztof Jan Luchowiec, Tsitsi Isabel Kolawole
  • Patent number: 10845188
    Abstract: Methods and apparatus for capturing motion from a self-tracking device are disclosed. In embodiments, a device self-tracks motion of the device relative to a first reference frame while recording motion of a subject relative to a second reference frame, the second reference frame being a reference frame relative to the device. In these embodiments, the subject may be a real object or, alternately, a virtual object, and a motion of the virtual object may be recorded relative to the second reference frame by associating a position offset relative to the device with the position of the virtual object in the recorded motion. The motion of the subject relative to the first reference frame may be determined from the tracked motion of the device relative to the first frame and the recorded motion of the subject relative to the second reference frame.
    Type: Grant
    Filed: January 5, 2016
    Date of Patent: November 24, 2020
    Assignee: Microsoft Technology Licensing, LLC
    Inventors: John Weiss, Vivek Pradeep, Xiaoyan Hu
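The frame composition at the heart of this abstract, recovering the subject's motion in the first (world) frame from the device's self-tracked motion plus the device-relative subject motion, can be sketched in 2D; the (x, y, heading) pose representation is an assumption:

```python
import math

def compose(device_pose, subject_in_device):
    """Compose the device's self-tracked pose in the world frame with the
    subject's recorded pose in the device frame, yielding the subject's
    pose in the world frame. Poses are (x, y, heading) tuples."""
    dx, dy, dth = device_pose
    sx, sy, sth = subject_in_device
    world_x = dx + sx * math.cos(dth) - sy * math.sin(dth)
    world_y = dy + sx * math.sin(dth) + sy * math.cos(dth)
    return (world_x, world_y, dth + sth)

# Device self-tracked at (5, 0) facing 90 degrees; subject recorded 2 m
# straight ahead of the device.
world = compose((5.0, 0.0, math.pi / 2), (2.0, 0.0, 0.0))
# world is approximately (5.0, 2.0, pi / 2)
```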
  • Patent number: 10824921
    Abstract: A first intelligent assistant computing device configured to receive and respond to natural language inputs provided by human users syncs to a reference clock of a wireless computer network. The first intelligent assistant computing device receives a communication sent by a second intelligent assistant computing device indicating a signal emission time at which the second intelligent assistant computing device emitted a position calibration signal. The first intelligent assistant computing device records a signal detection time at which the position calibration signal was detected. Based on 1) the difference between the signal emission time and the signal detection time, and 2) a known propagation speed of the position calibration signal, a distance between the first and second intelligent assistant computing devices is calculated.
    Type: Grant
    Filed: December 5, 2017
    Date of Patent: November 3, 2020
    Assignee: Microsoft Technology Licensing, LLC
    Inventors: Steven Nabil Bathiche, Flavio Protasio Ribeiro, Vivek Pradeep
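The distance calculation in this abstract reduces to time-of-flight arithmetic; the acoustic propagation speed below is an assumption (the patent only requires a known speed):

```python
SPEED_OF_SOUND = 343.0  # m/s, assuming an acoustic calibration signal

def estimate_distance(emission_time, detection_time, speed=SPEED_OF_SOUND):
    """Distance between the two devices from the difference between the
    signal emission and detection times on the synced reference clock."""
    return (detection_time - emission_time) * speed

# Emitted at t=1.000 s, detected at t=1.010 s on the shared clock.
d = estimate_distance(1.000, 1.010)
# d is approximately 3.43 metres
```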
  • Patent number: 10817760
    Abstract: Computing devices and methods for associating a semantic identifier with an object are disclosed. In one example, a three-dimensional model of an environment comprising the object is generated. Image data of the environment is sent to a user computing device for display by the user computing device. User input comprising position data of the object and the semantic identifier is received. The position data is mapped to a three-dimensional location in the three-dimensional model at which the object is located. Based at least on mapping the position data to the three-dimensional location of the object, the semantic identifier is associated with the object.
    Type: Grant
    Filed: December 5, 2017
    Date of Patent: October 27, 2020
    Assignee: Microsoft Technology Licensing, LLC
    Inventors: Vivek Pradeep, Michelle Lynn Holtmann, Steven Nabil Bathiche
  • Patent number: 10789514
    Abstract: A first intelligent assistant computing device configured to receive and respond to natural language inputs provided by human users syncs to a reference clock of a wireless computer network. The first intelligent assistant computing device receives a communication sent by a second intelligent assistant computing device indicating a signal emission time at which the second intelligent assistant computing device emitted a position calibration signal. The first intelligent assistant computing device records a signal detection time at which the position calibration signal was detected. Based on 1) the difference between the signal emission time and the signal detection time, and 2) a known propagation speed of the position calibration signal, a distance between the first and second intelligent assistant computing devices is calculated.
    Type: Grant
    Filed: December 5, 2017
    Date of Patent: September 29, 2020
    Assignee: Microsoft Technology Licensing, LLC
    Inventors: Steven Nabil Bathiche, Flavio Protasio Ribeiro, Vivek Pradeep
  • Patent number: 10783411
    Abstract: Computing devices and methods for associating a semantic identifier with an object are disclosed. In one example, a three-dimensional model of an environment comprising the object is generated. Image data of the environment is sent to a user computing device for display by the user computing device. User input comprising position data of the object and the semantic identifier is received. The position data is mapped to a three-dimensional location in the three-dimensional model at which the object is located. Based at least on mapping the position data to the three-dimensional location of the object, the semantic identifier is associated with the object.
    Type: Grant
    Filed: December 5, 2017
    Date of Patent: September 22, 2020
    Assignee: Microsoft Technology Licensing, LLC
    Inventors: Vivek Pradeep, Michelle Lynn Holtmann, Steven Nabil Bathiche
  • Patent number: 10748043
    Abstract: Computing devices and methods for associating a semantic identifier with an object are disclosed. In one example, a three-dimensional model of an environment comprising the object is generated. Image data of the environment is sent to a user computing device for display by the user computing device. User input comprising position data of the object and the semantic identifier is received. The position data is mapped to a three-dimensional location in the three-dimensional model at which the object is located. Based at least on mapping the position data to the three-dimensional location of the object, the semantic identifier is associated with the object.
    Type: Grant
    Filed: December 5, 2017
    Date of Patent: August 18, 2020
    Assignee: Microsoft Technology Licensing, LLC
    Inventors: Vivek Pradeep, Michelle Lynn Holtmann, Steven Nabil Bathiche
  • Patent number: 10679044
    Abstract: Methods, apparatuses, and computer-readable media for generating human action data sets are disclosed. In an aspect, an apparatus may receive a set of reference images, where each of the images within the set of reference images includes a person, and a background image. The apparatus may identify body parts of the person from the set of reference images and generate a transformed skeleton image by mapping each of the body parts of the person to corresponding skeleton parts of a target skeleton. The apparatus may generate a mask of the transformed skeleton image. The apparatus may generate, using machine learning, a frame of the person formed according to the target skeleton within the background image.
    Type: Grant
    Filed: March 23, 2018
    Date of Patent: June 9, 2020
    Assignee: MICROSOFT TECHNOLOGY LICENSING, LLC
    Inventors: Hamidreza Vaezi Joze, Ilya Zharkov, Vivek Pradeep, Mehran Khodabandeh
  • Patent number: 10666848
    Abstract: Remote depth sensing techniques are described via relayed depth from diffusion. In one or more implementations, a remote depth sensing system is configured to sense depth as relayed from diffusion. The system includes an image capture system including an image sensor and an imaging lens configured to transmit light to the image sensor through an intermediate image plane that is disposed between the imaging lens and the image sensor, the intermediate image plane having an optical diffuser disposed proximal thereto that is configured to diffuse the transmitted light. The system also includes a depth sensing module configured to receive one or more images from the image sensor and determine a distance to one or more objects in an object scene captured by the one or more images using a depth by diffusion technique that is based at least in part on an amount of blurring exhibited by respective said objects in the one or more images.
    Type: Grant
    Filed: May 5, 2015
    Date of Patent: May 26, 2020
    Assignee: MICROSOFT TECHNOLOGY LICENSING, LLC
    Inventors: Karlton D. Powell, Vivek Pradeep
  • Patent number: 10628714
    Abstract: An entity-tracking computing system receives sensor information from a plurality of different sensors. The positions of entities detected by the various sensors are resolved to an environment-relative coordinate system so that entities identified by one sensor can be tracked across the fields of detection of other sensors.
    Type: Grant
    Filed: August 21, 2017
    Date of Patent: April 21, 2020
    Assignee: MICROSOFT TECHNOLOGY LICENSING, LLC
    Inventors: Vivek Pradeep, Pablo Luis Sala, John Guido Atkins Weiss, Moshe Randall Lutz
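Resolving sensor-relative detections into one environment-relative frame, as this abstract describes, can be sketched in 2D; the (x, y, heading) sensor poses and the proximity-matching threshold are illustrative assumptions:

```python
import math

def to_environment(sensor_pose, detection):
    """Map a detection in a sensor's own frame into the shared
    environment-relative coordinate system."""
    sx, sy, heading = sensor_pose
    dx, dy = detection
    return (sx + dx * math.cos(heading) - dy * math.sin(heading),
            sy + dx * math.sin(heading) + dy * math.cos(heading))

def same_entity(p, q, tol=0.5):
    """Treat two resolved positions within `tol` metres as one entity
    crossing between the sensors' fields of detection."""
    return math.dist(p, q) < tol

# Sensor A at the origin sees an entity 3 m ahead; sensor B at (6, 0),
# facing the opposite direction, sees an entity 3 m ahead of itself.
a = to_environment((0.0, 0.0, 0.0), (3.0, 0.0))
b = to_environment((6.0, 0.0, math.pi), (3.0, 0.0))
# Both resolve to approximately (3, 0): the same entity, seen by both sensors.
```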
  • Publication number: 20200105153
    Abstract: The present disclosure describes an interactive system for teaching sequencing and programming to children. In some embodiments, the interactive system comprises a plurality of tiles organisable to form a structural pattern, wherein each tile comprises an RFID tag storing at least a pre-defined command corresponding to a first action and an identifier associated with a second set of actions, and an interactive robot. In some embodiments, the interactive robot, when placed on a tile, is configured for receiving a voice command from a user, reading at least the pre-defined command and the identifier from the RFID tag associated with the tile, comparing the command received from the user with the pre-defined command, and performing one or more actions from among a third set of actions based on a result of the comparison.
    Type: Application
    Filed: June 13, 2018
    Publication date: April 2, 2020
    Applicant: GRASP IO INNOVATIONS PVT LTD.
    Inventors: Shanmugha TUMKUR SRINIVAS, Vivek Pradeep KUMAR, Jayakrishnan KUNDULLY, Rahul KOTHARI, Rudresh JAYARAM, Vineetha MENON, Nischitha THULASIRAJU, John Solomon JOHNSON
  • Patent number: 10438322
    Abstract: Resolution enhancement techniques are described. An apparatus may receive first image data at a first resolution, and second image data at a resolution less than the first resolution. The second image data may be scaled to the first resolution and compared to the first image data. Application of a neural network may scale the first image data to a resolution higher than the first resolution. The application of the neural network may incorporate signals based on the scaled second image data. The signals may include information obtained by comparing the scaled second image data to the first image data.
    Type: Grant
    Filed: May 26, 2017
    Date of Patent: October 8, 2019
    Assignee: MICROSOFT TECHNOLOGY LICENSING, LLC
    Inventors: Moshe R. Lutz, Vivek Pradeep