Patents by Inventor Vivek Pradeep
Vivek Pradeep has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).
-
Publication number: 20250053748
Abstract: A technique uses a machine-trained model to generate a response based on a prompt which expresses current input information and abstract token information. The abstract token information summarizes a full dialogue history of a dialogue, and is generated by the model itself. The technique reduces the size of the prompt by incorporating the abstract summary information in lieu of the full dialogue history. A training system trains the machine-trained model by successively improving the predictive accuracy of the machine-trained model, while rewarding the machine-trained model based on an extent to which the machine-trained model compresses instances of abstract token information.
Type: Application
Filed: August 10, 2023
Publication date: February 13, 2025
Applicant: Microsoft Technology Licensing, LLC
Inventors: Mohsen FAYYAZ, Eric Chris Wolfgang SOMMERLADE, Justin James WAGLE, Vivek PRADEEP
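As a rough illustration of the idea in the entry above (replacing the full dialogue history with model-generated summary tokens in the prompt), the minimal Python sketch below shows one way such a loop could look. The helper names and prompt layout are hypothetical and are not taken from the patent.

    # Illustrative sketch only; the model is assumed to return both a response and
    # refreshed abstract tokens that summarize the dialogue so far.
    def build_prompt(abstract_tokens: str, current_input: str) -> str:
        """Compose a prompt from a compact dialogue summary plus the new user turn,
        rather than concatenating the full dialogue history."""
        return f"[SUMMARY] {abstract_tokens}\n[USER] {current_input}\n[ASSISTANT]"

    def dialogue_turn(model, abstract_tokens: str, user_input: str):
        """One turn: the model answers and also emits new abstract tokens."""
        prompt = build_prompt(abstract_tokens, user_input)
        response, new_abstract_tokens = model(prompt)
        return response, new_abstract_tokens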
-
Publication number: 20250005072
Abstract: Machine learning techniques are leveraged to provide personalized assistance on a computing device. In some configurations, a timeline of a user's interactions with the computing device is generated. For example, screenshots and audio streams may be saved as entries in the timeline. Context (the state of the computing device when the entry is created, such as which documents and websites are open) is also stored. Entries in the timeline are processed by a model to generate embedding vectors. The timeline may be searched by finding the embedding vector that is closest to an embedding vector derived from a search query. The user may select a query result, causing the associated context to be restored. For example, if the query is “show me all documents related to my upcoming trip to Japan”, the query result may open documents and websites that were open when booking a flight to Japan.
Type: Application
Filed: June 29, 2023
Publication date: January 2, 2025
Inventors: Elizabeth Picchietti SALOWITZ, David Ben PERRY, Carlos A.C. PESSOA, Vivek PRADEEP, Sharath VISWANATHAN, Nathan James LUQUETTA-FISH, Steven BATHICHE, Eric Chris Wolfgang SOMMERLADE, Jose Antonio LARA SILVA
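The embedding-based timeline search described above can be pictured with the short sketch below. It is not the patented system: embed_text stands in for whatever embedding model is used, and the timeline is kept as a simple in-memory list.

    # Sketch only: embed_text is a placeholder for an embedding model.
    import numpy as np

    timeline = []   # each entry: {"embedding": np.ndarray, "context": dict}

    def add_entry(embed_text, captured_text, context):
        """Save a timeline entry (e.g., text recognized in a screenshot) together with
        the device context at capture time, such as open documents and websites."""
        timeline.append({"embedding": embed_text(captured_text), "context": context})

    def search(embed_text, query, top_k=5):
        """Return the entries whose embeddings are closest (by cosine similarity)
        to the embedding of the search query."""
        q = embed_text(query)
        scores = [float(np.dot(e["embedding"], q)
                        / (np.linalg.norm(e["embedding"]) * np.linalg.norm(q)))
                  for e in timeline]
        order = np.argsort(scores)[::-1][:top_k]
        return [timeline[i] for i in order]

Selecting one of the returned entries would then restore its stored context, as in the "trip to Japan" example in the abstract.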
-
Publication number: 20240273104
Abstract: Methods and systems for generating and using a semantic index are provided. In some examples, content data is received. The content data includes a plurality of subsets of content data. Each of the plurality of subsets of content data is labelled based on a semantic context corresponding to the content data. The plurality of subsets of content data and their corresponding labels are stored. The plurality of subsets of content data are grouped based on their labels, thereby generating one or more groups of subsets of content data. Further, a computing device is adapted to perform an action based on the one or more groups of subsets of content data.
Type: Application
Filed: April 29, 2024
Publication date: August 15, 2024
Applicant: Microsoft Technology Licensing, LLC
Inventors: Eric Chris Wolfgang SOMMERLADE, Vivek PRADEEP, Steven N. BATHICHE, Nathan LUQUETTA-FISH
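The label-then-group flow in this abstract (and in the related entries 12001437 and 20240104103 below) can be sketched in a few lines. This is a minimal illustration under assumed names; classify_semantic_context is a hypothetical labeling model, not something named in the patent text.

    # Minimal sketch of the label-then-group idea.
    from collections import defaultdict

    def build_semantic_index(subsets, classify_semantic_context):
        """Label each subset of content data by its semantic context, store the
        labels alongside the subsets, and group the subsets by label."""
        labeled = [(subset, classify_semantic_context(subset)) for subset in subsets]
        groups = defaultdict(list)
        for subset, label in labeled:
            groups[label].append(subset)
        return dict(groups)

    # A device could then perform an action on a whole group, e.g. act on every
    # item that received the same semantic label.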
-
Publication number: 20240256035
Abstract: Aspects of the present disclosure relate to systems and methods for controlling a function of a computing system using gaze detection. In examples, one or more images of a user are received, and gaze information may be determined from the received one or more images. Non-gaze information may be received when the gaze information is determined to satisfy a condition. Accordingly, a function may be enabled based on the received non-gaze information. In examples, the gaze information may be determined by extracting a plurality of features from the received one or more images, providing the plurality of features to a neural network, and determining, utilizing the neural network, a location at a display device at which a gaze of the user is directed.
Type: Application
Filed: February 27, 2024
Publication date: August 1, 2024
Applicant: Microsoft Technology Licensing, LLC
Inventors: Steven N. BATHICHE, Eric Chris Wolfgang Sommerlade, Vivek PRADEEP, Alexandros NEOFYTOU
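A hedged sketch of the gaze-gated input described above (and in the earlier publication 20220221932 below): extract_features and estimate_gaze stand in for the neural-network pipeline, and region is simply an on-screen rectangle used as the gaze condition.

    # Sketch only; the feature extractor and gaze estimator are assumed callables.
    def gaze_in_region(frame, extract_features, estimate_gaze, region):
        """True when the predicted gaze point on the display lies inside region."""
        x, y = estimate_gaze(extract_features(frame))   # predicted display coordinates
        x0, y0, x1, y1 = region
        return x0 <= x <= x1 and y0 <= y <= y1

    def handle_input(frame, non_gaze_input, handler,
                     extract_features, estimate_gaze, region):
        """Forward non-gaze input (e.g., a voice command) to the handler only while
        the gaze condition is satisfied; otherwise the input is ignored."""
        if gaze_in_region(frame, extract_features, estimate_gaze, region):
            handler(non_gaze_input)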
-
Publication number: 20240184852
Abstract: A method of training a neural network for detecting target features in images is described. The neural network is trained using a first data set that includes labeled images, at least some of which have subjects with labeled features. Training includes dividing each of the labeled images of the first data set into a respective plurality of tiles and generating, for each of the plurality of tiles, a plurality of feature anchors that indicate target features within the corresponding tile. Target features that correspond to the plurality of feature anchors are detected in a second data set of unlabeled images. Images of the second data set having target features that were not detected are labeled. A third data set that includes the first data set and the labeled images of the second data set is generated. The neural network is trained using the third data set.
Type: Application
Filed: February 7, 2024
Publication date: June 6, 2024
Applicant: Microsoft Technology Licensing, LLC
Inventors: Hamidreza Vaezi JOZE, Vivek PRADEEP, Karthik VIJAYAN
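The data-set bootstrapping loop described in this abstract (and in the earlier publication 20220358332 below) can be outlined roughly as follows. The helpers divide_into_tiles, anchors_for_tile, detect, and label_images are assumed placeholders, not the patented implementation.

    # Rough sketch of the tile/anchor bootstrapping loop under assumed helpers.
    def tile_anchors(labeled_images, divide_into_tiles, anchors_for_tile):
        """Generate per-tile feature anchors for every labeled image."""
        anchors = []
        for image, labels in labeled_images:
            for tile in divide_into_tiles(image):
                anchors.extend(anchors_for_tile(tile, labels))
        return anchors

    def expand_training_set(labeled_images, unlabeled_images,
                            divide_into_tiles, anchors_for_tile, detect, label_images):
        """Detect anchored features in the unlabeled set; images whose target features
        were missed get labeled and merged into a third, larger training set."""
        anchors = tile_anchors(labeled_images, divide_into_tiles, anchors_for_tile)
        missed = [img for img in unlabeled_images if not detect(img, anchors)]
        return labeled_images + label_images(missed)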
-
Patent number: 12001437
Abstract: Methods and systems for generating and using a semantic index are provided. In some examples, content data is received. The content data includes a plurality of subsets of content data. Each of the plurality of subsets of content data is labelled based on a semantic context corresponding to the content data. The plurality of subsets of content data and their corresponding labels are stored. The plurality of subsets of content data are grouped based on their labels, thereby generating one or more groups of subsets of content data. Further, a computing device is adapted to perform an action based on the one or more groups of subsets of content data.
Type: Grant
Filed: September 26, 2022
Date of Patent: June 4, 2024
Assignee: Microsoft Technology Licensing, LLC
Inventors: Eric Chris Wolfgang Sommerlade, Vivek Pradeep, Steven N. Bathiche, Nathan Luquetta-Fish
-
Publication number: 20240104103
Abstract: Methods and systems for generating and using a semantic index are provided. In some examples, content data is received. The content data includes a plurality of subsets of content data. Each of the plurality of subsets of content data is labelled based on a semantic context corresponding to the content data. The plurality of subsets of content data and their corresponding labels are stored. The plurality of subsets of content data are grouped based on their labels, thereby generating one or more groups of subsets of content data. Further, a computing device is adapted to perform an action based on the one or more groups of subsets of content data.
Type: Application
Filed: September 26, 2022
Publication date: March 28, 2024
Applicant: Microsoft Technology Licensing, LLC
Inventors: Eric Chris Wolfgang SOMMERLADE, Vivek PRADEEP, Steven N. BATHICHE, Nathan LUQUETTA-FISH
-
Patent number: 11727819
Abstract: The present disclosure describes an interactive system for teaching sequencing and programming to children. In some embodiments, the interactive system comprises a plurality of tiles organisable to form a structural pattern, wherein each tile comprises an RFID tag storing at least a pre-defined command corresponding to a first action and an identifier associated with a second set of actions, and an interactive robot. In some embodiments, the interactive robot, when placed on a tile, is configured for receiving a voice command from a user, reading at least the pre-defined command and the identifier from the RFID tag associated with the tile, comparing the command received from the user with the pre-defined command, and performing one or more actions from among a third set of actions based on a result of the comparison.
Type: Grant
Filed: June 13, 2018
Date of Patent: August 15, 2023
Assignee: GRASP IO INNOVATIONS PVT LTD.
Inventors: Shanmugha Tumkur Srinivas, Vivek Pradeep Kumar, Jayakrishnan Kundully, Rahul Kothari, Rudresh Jayaram, Vineetha Menon, Nischitha Thulasiraju, John Solomon Johnson
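One possible reading of the compare-and-act behavior above is sketched below. The tag layout, helper names, and action table are illustrative only; the patent leaves the mapping from comparison result to the third set of actions open.

    # Toy sketch of the tile/robot interaction under assumed helpers.
    ACTIONS = {"forward": lambda: print("robot moves forward")}   # hypothetical table

    def on_tile(read_rfid_tag, listen_for_voice_command, lookup_actions):
        """Read the tile's stored command and identifier, compare the spoken command
        with the stored one, and choose which actions to perform accordingly."""
        predefined_command, identifier = read_rfid_tag()
        spoken = listen_for_voice_command()
        if spoken == predefined_command:
            ACTIONS.get(predefined_command, lambda: None)()   # matched: run the tile's action
        else:
            for action in lookup_actions(identifier):         # otherwise fall back to actions
                action()                                      # tied to the tile's identifier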
-
Patent number: 11669943
Abstract: A computational photography system is described herein, including a guidance system and a detail-enhancement system. The guidance system uses a first neural network that maps an original image provided by an image sensor to a guidance image, which represents a color-corrected and lighting-corrected version of the original image. A combination unit combines the original image and the guidance image to produce a combined image. A detail-enhancement system then uses a second neural network to map the combined image to a predicted image. The predicted image supplements the guidance provided by the first neural network by sharpening details in the original image. A training system is also described herein for training the first and second neural networks. The training system alternates in the data it feeds the second neural network, first using a guidance image as input to the second neural network, and then using a corresponding ground-truth image.
Type: Grant
Filed: October 16, 2020
Date of Patent: June 6, 2023
Assignee: Microsoft Technology Licensing, LLC
Inventors: Luming Liang, Ilya Dmitriyevich Zharkov, Vivek Pradeep, Faezeh Amjadi
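The two-stage inference path described above (and in the corresponding publication 20220122235 below) can be pictured with a short conceptual sketch. guidance_net and detail_net stand in for the two trained networks, and channel stacking is only one assumed way to realize the combination unit.

    # Conceptual sketch; not the actual models or combination operator.
    import numpy as np

    def enhance(original: np.ndarray, guidance_net, detail_net) -> np.ndarray:
        """Stage 1 predicts a color- and lighting-corrected guidance image; the original
        and guidance images are combined, and stage 2 sharpens the remaining detail."""
        guidance = guidance_net(original)                          # H x W x C guidance image
        combined = np.concatenate([original, guidance], axis=-1)   # combination unit (assumed)
        return detail_net(combined)                                # detail-sharpened prediction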
-
Publication number: 20220358332
Abstract: A method of training a neural network for detecting target features in images is described. The neural network is trained using a first data set that includes labeled images, at least some of which have subjects with labeled features. Training includes dividing each of the labeled images of the first data set into a respective plurality of tiles and generating, for each of the plurality of tiles, a plurality of feature anchors that indicate target features within the corresponding tile. Target features that correspond to the plurality of feature anchors are detected in a second data set of unlabeled images. Images of the second data set having target features that were not detected are labeled. A third data set that includes the first data set and the labeled images of the second data set is generated. The neural network is trained using the third data set.
Type: Application
Filed: May 7, 2021
Publication date: November 10, 2022
Applicant: Microsoft Technology Licensing, LLC
Inventors: Hamidreza Vaezi JOZE, Vivek PRADEEP, Karthik VIJAYAN
-
Patent number: 11429807
Abstract: Methods and systems for automatically generating training data for use in machine learning are disclosed. The methods can involve the use of environmental data derived from first and second environmental sensors for a single event. The environmental data types derived from each environmental sensor are different. The event is detected based on first environmental data derived from the first environmental sensor, and a portion of second environmental data derived from the second environmental sensor is selected to generate training data for the detected event. The resulting training data can be employed to train machine learning models.
Type: Grant
Filed: January 12, 2018
Date of Patent: August 30, 2022
Assignee: Microsoft Technology Licensing, LLC
Inventor: Vivek Pradeep
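A hedged sketch of the cross-sensor pairing described above: detect_event and the two sensor streams are placeholders, and the alignment window is an arbitrary illustrative value.

    # Sketch only; sensor_a_stream yields (time, reading) pairs, and sensor_b_buffer
    # is a list of (time, reading) pairs from the second sensor.
    def make_training_examples(sensor_a_stream, sensor_b_buffer, detect_event, window=2.0):
        """When an event is detected in sensor A's data, select the time-aligned slice
        of sensor B's data and emit it as a labeled training example."""
        examples = []
        for timestamp, sample in sensor_a_stream:
            event = detect_event(sample)
            if event is not None:
                clip = [s for t, s in sensor_b_buffer if abs(t - timestamp) <= window]
                examples.append({"label": event, "data": clip})
        return examples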
-
Publication number: 20220221932
Abstract: Aspects of the present disclosure relate to systems and methods for controlling a function of a computing system using gaze detection. In examples, one or more images of a user are received, and gaze information may be determined from the received one or more images. Non-gaze information may be received when the gaze information is determined to satisfy a condition. Accordingly, a function may be enabled based on the received non-gaze information. In examples, the gaze information may be determined by extracting a plurality of features from the received one or more images, providing the plurality of features to a neural network, and determining, utilizing the neural network, a location at a display device at which a gaze of the user is directed.
Type: Application
Filed: January 12, 2021
Publication date: July 14, 2022
Applicant: Microsoft Technology Licensing, LLC
Inventors: Steven N. BATHICHE, Eric Chris Wolfgang Sommerlade, Vivek PRADEEP, Alexandros NEOFYTOU
-
Publication number: 20220122235
Abstract: A computational photography system is described herein, including a guidance system and a detail-enhancement system. The guidance system uses a first neural network that maps an original image provided by an image sensor to a guidance image, which represents a color-corrected and lighting-corrected version of the original image. A combination unit combines the original image and the guidance image to produce a combined image. A detail-enhancement system then uses a second neural network to map the combined image to a predicted image. The predicted image supplements the guidance provided by the first neural network by sharpening details in the original image. A training system is also described herein for training the first and second neural networks. The training system alternates in the data it feeds the second neural network, first using a guidance image as input to the second neural network, and then using a corresponding ground-truth image.
Type: Application
Filed: October 16, 2020
Publication date: April 21, 2022
Inventors: Luming LIANG, Ilya Dmitriyevich ZHARKOV, Vivek PRADEEP, Faezeh AMJADI
-
Patent number: 11092491
Abstract: An optical system comprising a multi-spectral optical element, a switchable filter, a dual bandpass filter, and a sensor. The multi-spectral optical element receives light in at least a first spectral band and a second spectral band. The dual bandpass filter filters out wavelengths of light in a transition region of the switchable filter between the first spectral band and the second spectral band. The switchable filter filters light received from the dual bandpass filter in the first spectral band in a first mode, where the switchable filter transmits light in the first spectral band, and in a second mode, where the switchable filter does not transmit light in the first spectral band. The sensor is disposed at an image plane, and the multi-spectral optical element is configured to produce a modulation transfer function value that is above a predetermined threshold for each of the spectral bands.
Type: Grant
Filed: June 22, 2020
Date of Patent: August 17, 2021
Assignee: Microsoft Technology Licensing, LLC
Inventors: Karlton David Powell, Vivek Pradeep
-
Patent number: 11010601
Abstract: An intelligent assistant device is configured to communicate non-verbal cues. Image data indicating presence of a human is received from one or more cameras of the device. In response, one or more components of the device are actuated to non-verbally communicate the presence of the human. Data indicating context information of the human is received from one or more of the sensors. Using at least this data, one or more contexts of the human are determined, and one or more components of the device are actuated to non-verbally communicate the one or more contexts of the human.
Type: Grant
Filed: March 26, 2018
Date of Patent: May 18, 2021
Assignee: Microsoft Technology Licensing, LLC
Inventors: Steven Nabil Bathiche, Vivek Pradeep, Alexander Norman Bennett, Daniel Gordon O'Neil, Anthony Christian Reed, Krzysztof Jan Luchowiec, Tsitsi Isabel Kolawole
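A simplified sketch of the sense-then-actuate behavior above; detect_person, infer_contexts, and the actuator methods are hypothetical stand-ins for the device's cameras, sensors, and movable or light-emitting components.

    # Sketch only; the detection and context models are assumed callables.
    def react(frame, sensor_data, detect_person, infer_contexts, actuators):
        """Actuate components to non-verbally acknowledge a detected person, then
        signal each inferred context (e.g., who is present, where they are)."""
        person = detect_person(frame)
        if person is None:
            return
        actuators.turn_toward(person["bearing"])    # acknowledge presence without speech
        for context in infer_contexts(person, sensor_data):
            actuators.show_cue(context)             # e.g., a light pattern per context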
-
Patent number: 10845188
Abstract: Methods and apparatus for capturing motion from a self-tracking device are disclosed. In embodiments, a device self-tracks motion of the device relative to a first reference frame while recording motion of a subject relative to a second reference frame, the second reference frame being a reference frame relative to the device. In the embodiments, the subject may be a real object or, alternatively, a virtual subject, in which case the motion of the virtual subject may be recorded relative to the second reference frame by associating a position offset relative to the device with the position of the virtual subject in the recorded motion. The motion of the subject relative to the first reference frame may be determined from the tracked motion of the device relative to the first frame and the recorded motion of the subject relative to the second reference frame.
Type: Grant
Filed: January 5, 2016
Date of Patent: November 24, 2020
Assignee: Microsoft Technology Licensing, LLC
Inventors: John Weiss, Vivek Pradeep, Xiaoyan Hu
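The frame composition implied by the last sentence of the abstract can be written out directly with 4x4 homogeneous transforms. The pose sources (the device's self-tracker and the subject tracker) are assumed; only the composition itself comes from the abstract.

    # Sketch of composing the device's self-tracked pose with the subject's pose
    # recorded relative to the device.
    import numpy as np

    def subject_in_world(T_world_device: np.ndarray, T_device_subject: np.ndarray) -> np.ndarray:
        """Motion of the subject in the first (world) reference frame, composed from the
        device's self-tracked pose and the subject's pose recorded relative to the device."""
        return T_world_device @ T_device_subject

    # For a virtual subject, the second transform can simply be a fixed position offset:
    T_offset = np.eye(4)
    T_offset[:3, 3] = [0.0, 0.0, 1.5]   # hypothetical: virtual subject 1.5 m in front of the device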
-
Patent number: 10824921
Abstract: A first intelligent assistant computing device, configured to receive and respond to natural language inputs provided by human users, syncs to a reference clock of a wireless computer network. The first intelligent assistant computing device receives a communication sent by a second intelligent assistant computing device indicating a signal emission time at which the second intelligent assistant computing device emitted a position calibration signal. The first intelligent assistant computing device records a signal detection time at which the position calibration signal was detected. Based on 1) the difference between the signal emission time and the signal detection time and 2) a known propagation speed of the position calibration signal, a distance between the first and second intelligent assistant computing devices is calculated.
Type: Grant
Filed: December 5, 2017
Date of Patent: November 3, 2020
Assignee: Microsoft Technology Licensing, LLC
Inventors: Steven Nabil Bathiche, Flavio Protasio Ribeiro, Vivek Pradeep
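The time-of-flight calculation described above (and in the related patent 10789514 below) reduces to a single line. The acoustic propagation speed is an assumption for illustration; the patent only requires a known propagation speed and a shared reference clock.

    # Direct translation of the calculation in the abstract; example values are made up.
    SPEED_OF_SOUND_M_PER_S = 343.0   # assuming an acoustic position calibration signal

    def device_distance(signal_emission_time: float, signal_detection_time: float,
                        propagation_speed: float = SPEED_OF_SOUND_M_PER_S) -> float:
        """Distance between two devices sharing a reference clock, from the position
        calibration signal's time of flight and its known propagation speed."""
        return (signal_detection_time - signal_emission_time) * propagation_speed

    # e.g. a 10 ms flight time at the speed of sound corresponds to about 3.4 m:
    # device_distance(0.000, 0.010) -> 3.43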
-
Patent number: 10817760
Abstract: Computing devices and methods for associating a semantic identifier with an object are disclosed. In one example, a three-dimensional model of an environment comprising the object is generated. Image data of the environment is sent to a user computing device for display by the user computing device. User input comprising position data of the object and the semantic identifier is received. The position data is mapped to a three-dimensional location in the three-dimensional model at which the object is located. Based at least on mapping the position data to the three-dimensional location of the object, the semantic identifier is associated with the object.
Type: Grant
Filed: December 5, 2017
Date of Patent: October 27, 2020
Assignee: Microsoft Technology Licensing, LLC
Inventors: Vivek Pradeep, Michelle Lynn Holtmann, Steven Nabil Bathiche
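A hedged sketch of the mapping step above (shared with the related patent 10783411 below): unproject_to_model is a placeholder for whatever maps a 2D position on the displayed image into a 3D location in the environment model.

    # Sketch only; the unprojection into the 3D model is an assumed helper.
    semantic_labels = {}   # rounded 3D location -> semantic identifier

    def assign_identifier(position_2d, identifier, unproject_to_model):
        """Map the user's on-screen position data to a 3D location in the model and
        associate the semantic identifier (e.g., a name for the object) with it."""
        location_3d = unproject_to_model(position_2d)
        key = tuple(round(float(c), 2) for c in location_3d)
        semantic_labels[key] = identifier
        return key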
-
Patent number: 10789514
Abstract: A first intelligent assistant computing device, configured to receive and respond to natural language inputs provided by human users, syncs to a reference clock of a wireless computer network. The first intelligent assistant computing device receives a communication sent by a second intelligent assistant computing device indicating a signal emission time at which the second intelligent assistant computing device emitted a position calibration signal. The first intelligent assistant computing device records a signal detection time at which the position calibration signal was detected. Based on 1) the difference between the signal emission time and the signal detection time and 2) a known propagation speed of the position calibration signal, a distance between the first and second intelligent assistant computing devices is calculated.
Type: Grant
Filed: December 5, 2017
Date of Patent: September 29, 2020
Assignee: Microsoft Technology Licensing, LLC
Inventors: Steven Nabil Bathiche, Flavio Protasio Ribeiro, Vivek Pradeep
-
Patent number: 10783411
Abstract: Computing devices and methods for associating a semantic identifier with an object are disclosed. In one example, a three-dimensional model of an environment comprising the object is generated. Image data of the environment is sent to a user computing device for display by the user computing device. User input comprising position data of the object and the semantic identifier is received. The position data is mapped to a three-dimensional location in the three-dimensional model at which the object is located. Based at least on mapping the position data to the three-dimensional location of the object, the semantic identifier is associated with the object.
Type: Grant
Filed: December 5, 2017
Date of Patent: September 22, 2020
Assignee: Microsoft Technology Licensing, LLC
Inventors: Vivek Pradeep, Michelle Lynn Holtmann, Steven Nabil Bathiche