Patents by Inventor Deepak S. Vembar
Deepak S. Vembar has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).
-
Publication number: 20180299952
Abstract: Systems, apparatuses and methods may provide a way to enhance an augmented reality (AR) and/or virtual reality (VR) user experience with environmental information captured from sensors located in one or more physical environments. More particularly, systems, apparatuses and methods may provide a way to track, by an eye tracker sensor, a gaze of a user, and capture, by the sensors, environmental information. The systems, apparatuses and methods may render feedback, by one or more feedback devices or display devices, for a portion of the environmental information based on the gaze of the user.
Type: Application
Filed: April 17, 2017
Publication date: October 18, 2018
Inventors: Altug Koker, Michael Apodaca, Kai Xiao, Chandrasekaran Sakthivel, Jeffery S. Boles, Adam T. Lake, James M. Holland, Pattabhiraman K, Sayan Lahiri, Radhakrishnan Venkataraman, Kamal Sinha, Ankur N. Shah, Deepak S. Vembar, Abhishek R. Appu, Joydeep Ray, Elmoustapha Ould-Ahmed-Vall
-
Publication number: 20180300932
Abstract: An embodiment of an electronic processing system may include an application processor, persistent storage media communicatively coupled to the application processor, and a graphics subsystem communicatively coupled to the application processor. The graphics subsystem may include a first graphics engine to process a graphics workload, and a second graphics engine to offload at least a portion of the graphics workload from the first graphics engine. The second graphics engine may include a low precision compute engine. The system may further include a wearable display housing the second graphics engine. Other embodiments are disclosed and claimed.
Type: Application
Filed: April 17, 2017
Publication date: October 18, 2018
Inventors: Atsuo Kuwahara, Deepak S. Vembar, Chandrasekaran Sakthivel, Radhakrishnan Venkataraman, Brent E. Insko, Anupreet S. Kalra, Hugues Labbe, Abhishek R. Appu, Ankur N. Shah, Joydeep Ray, Elmoustapha Ould-Ahmed-Vall, Prasoonkumar Surti, Murali Ramadoss
-
Publication number: 20180293697
Abstract: An embodiment of a graphics apparatus may include a context engine to determine contextual information, a recommendation engine communicatively coupled to the context engine to determine a recommendation based on the contextual information, and a configuration engine communicatively coupled to the recommendation engine to adjust a configuration of a graphics operation based on the recommendation. Other embodiments are disclosed and claimed.
Type: Application
Filed: April 10, 2017
Publication date: October 11, 2018
Inventors: Joydeep Ray, Ankur N. Shah, Abhishek R. Appu, Deepak S. Vembar, ElMoustapha Ould-Ahmed-Vall, Atsuo Kuwahara, Travis T. Schluessler, Linda L. Hurd, Josh B. Mastronarde, Vasanth Ranganathan
-
Publication number: 20180292895
Abstract: An embodiment of a graphics apparatus may include a facial expression detector to detect a facial expression of a user, and a parameter adjuster communicatively coupled to the facial expression detector to adjust a graphics parameter based on the detected facial expression of the user. The detected facial expression may include one or more of squinting, blinking, winking, and facial muscle tension of the user. The graphics parameter may include one or more of a frame resolution, a screen contrast, a screen brightness, and a shading rate. Other embodiments are disclosed and claimed.
Type: Application
Filed: April 10, 2017
Publication date: October 11, 2018
Inventors: Travis T. Schluessler, Joydeep Ray, John H. Feit, Nikos Kaburlasos, Jacek Kwiatkowski, Jefferson Amstutz, Carson Brownlee, Vivek Tiwari, Sayan Lahiri, Kai Xiao, Abhishek R. Appu, ElMoustapha Ould-Ahmed-Vall, Deepak S. Vembar, Ankur N. Shah, Balaji Vembu, Josh B. Mastronarde
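The abstract above maps a detected facial expression to a graphics-parameter adjustment. A minimal illustrative sketch of that idea follows; the rule table, deltas, and function names are hypothetical, not the patented method:

```python
# Hypothetical mapping from a detected expression to a graphics-parameter
# adjustment, in the spirit of the abstract's examples (squinting, blinking,
# muscle tension -> brightness, contrast, shading rate).
ADJUSTMENTS = {
    "squinting": ("screen_brightness", +0.1),   # user may be straining to see
    "blinking": ("screen_contrast", +0.05),
    "muscle_tension": ("shading_rate", -0.25),  # reduce visual load
}

def adjust_parameters(expression: str, params: dict) -> dict:
    """Return a copy of params with the adjustment for the expression applied."""
    updated = dict(params)
    if expression in ADJUSTMENTS:
        key, delta = ADJUSTMENTS[expression]
        updated[key] = updated.get(key, 0.0) + delta
    return updated
```

A real system would clamp each parameter to its valid range and rate-limit adjustments so the display does not oscillate.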
-
Patent number: 10097912
Abstract: Systems and methods may provide for determining a usage configuration of a wearable device and setting an activation state of an air conduction speaker of the wearable device based at least in part on the usage configuration. Additionally, an activation state of a tissue conduction speaker of the wearable device may be set based at least in part on the usage configuration. In one example, the usage configuration is determined based on a set of status signals that indicate one or more of a physical position, a physical activity, a current activation state, an interpersonal proximity state or a manual user request associated with one or more of the air conduction speaker or the tissue conduction speaker.
Type: Grant
Filed: March 27, 2015
Date of Patent: October 9, 2018
Assignee: Intel Corporation
Inventors: Glen J. Anderson, Ryan S. Brotman, Giuseppe Raffa, John C. Weast, Daniel S. Lake, Deepak S. Vembar, Lenitra M. Durham, Brad Jackson
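The abstract above derives speaker activation states from status signals. A minimal sketch of one such policy follows; the signal names, precedence order, and rules are hypothetical assumptions, not the claimed implementation:

```python
# Hypothetical policy: choose activation states for an air-conduction and a
# tissue-conduction speaker from status signals (position, proximity, manual
# request), as the abstract describes at a high level.
from dataclasses import dataclass
from typing import Optional

@dataclass
class StatusSignals:
    worn_on_head: bool                  # physical position
    others_nearby: bool                 # interpersonal proximity state
    manual_request: Optional[str] = None  # "air", "tissue", or None

def select_speakers(s: StatusSignals) -> dict:
    """Return an activation state for each speaker; manual requests win."""
    if s.manual_request == "air":
        return {"air": True, "tissue": False}
    if s.manual_request == "tissue":
        return {"air": False, "tissue": True}
    # Prefer private tissue conduction when worn and others are nearby.
    if s.worn_on_head and s.others_nearby:
        return {"air": False, "tissue": True}
    if s.worn_on_head:
        return {"air": True, "tissue": True}
    return {"air": True, "tissue": False}
```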
-
Publication number: 20180288423
Abstract: An embodiment of a graphics apparatus may include a focus identifier to identify a focus area, and a color compressor to selectively compress color data based on the identified focus area. Another embodiment of a graphics apparatus may include a motion detector to detect motion of a real object, a motion predictor to predict a motion of the real object, and an object placer to place a virtual object relative to the real object based on the predicted motion of the real object. Another embodiment of a graphics apparatus may include a frame divider to divide a frame into viewports, a viewport prioritizer to prioritize the viewports, a renderer to render a viewport of the frame in order in accordance with the viewport priorities, and a viewport transmitter to transmit a completed rendered viewport. Other embodiments are disclosed and claimed.
Type: Application
Filed: April 1, 2017
Publication date: October 4, 2018
Applicant: Intel Corporation
Inventors: Deepak S. Vembar, Atsuo Kuwahara, Chandrasekaran Sakthivel, Radhakrishnan Venkataraman, Brent E. Insko, Anupreet S. Kalra, Hugues Labbe, Abhishek R. Appu, Ankur N. Shah, Joydeep Ray, ElMoustapha Ould-Ahmed-Vall, James M. Holland
-
Publication number: 20180284871
Abstract: Methods and apparatus relating to techniques for shutting down one or more GPU (Graphics Processing Unit) components in response to unchanged scene detection are described. In one embodiment, one or more components of a processor enter a low power consumption state in response to a determination that a scene to be displayed is static. The static scene is displayed on a display device (e.g., based on information to be retrieved from memory) for as long as no change to the static scene is detected. Other embodiments are also disclosed and claimed.
Type: Application
Filed: April 1, 2017
Publication date: October 4, 2018
Applicant: Intel Corporation
Inventors: Prasoonkumar Surti, Wenyin Fu, Nikos Kaburlasos, Bhushan M. Borole, Joydeep Ray, Elmoustapha Ould-Ahmed-Vall, Deepak S. Vembar, Abhishek R. Appu, Ankur N. Shah
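The abstract above hinges on detecting that a scene is unchanged. One simple way to sketch that detection is to hash successive frames and declare the scene static after several identical frames; the class, threshold, and hashing choice below are hypothetical, not the patented technique:

```python
# Hypothetical unchanged-scene detector: hash each frame and count how many
# consecutive frames are identical; report "static" (power-down eligible)
# once the count reaches a threshold.
import hashlib

class StaticSceneDetector:
    def __init__(self, threshold: int = 3):
        self.threshold = threshold
        self._last_hash = None
        self._unchanged = 0

    def observe(self, frame: bytes) -> bool:
        """Feed one frame; return True while the scene is considered static."""
        h = hashlib.sha256(frame).digest()
        if h == self._last_hash:
            self._unchanged += 1
        else:
            self._unchanged = 0
            self._last_hash = h
        return self._unchanged >= self.threshold
```

In hardware, the same idea is typically done with a cheap frame CRC rather than a cryptographic hash.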
-
Publication number: 20180284872
Abstract: An embodiment may include an application processor, persistent storage media coupled to the application processor, and a graphics subsystem coupled to the application processor. The system may further include any of a performance analyzer to analyze a performance of the graphics subsystem to provide performance analysis information, a content-based depth analyzer to analyze content to provide content-based depth analysis information, a focus analyzer to analyze a focus area to provide focus analysis information, an edge analyzer to provide edge analysis information, a frame analyzer to provide frame analysis information, and/or a variance analyzer to analyze respective amounts of variance for the frame. The system may further include a workload adjuster to adjust a workload of the graphics subsystem based on the analysis information. Other embodiments are disclosed and claimed.
Type: Application
Filed: April 1, 2017
Publication date: October 4, 2018
Inventors: Travis T. Schluessler, Joydeep Ray, John H. Feit, Nikos Kaburlasos, Jacek Kwiatkowski, Abhishek R. Appu, Kamal Sinha, James M. Holland, Pattabhiraman K., Sayan Lahiri, Radhakrishnan Venkataraman, Carson Brownlee, Vivek Tiwari, Kai Xiao, Jefferson Amstutz, Deepak S. Vembar, Ankur N. Shah, ElMoustapha Ould-Ahmed-Vall
-
Publication number: 20180286016
Abstract: Systems, apparatuses and methods may provide for technology that identifies, at an image post-processor, unresolved surface data and identifies, at the image post-processor, control data associated with the unresolved surface data. Additionally, the technology may resolve, at the image post-processor, the unresolved surface data and the control data into a final image.
Type: Application
Filed: April 1, 2017
Publication date: October 4, 2018
Applicant: Intel Corporation
Inventors: Tomer Bar-On, Hugues Labbe, Adam T. Lake, Kai Xiao, Ankur N. Shah, Johannes Guenther, Abhishek R. Appu, Joydeep Ray, Deepak S. Vembar, ElMoustapha Ould-Ahmed-Vall
-
Patent number: 10075835
Abstract: One or more sensors gather data, one or more processors analyze the data, and one or more indicators notify a user if the data represent an event that requires a response. One or more of the sensors and/or the indicators is a wearable device for wireless communication. Optionally, other components may be vehicle-mounted or deployed on-site. The components form an ad-hoc network enabling users to keep track of each other in challenging environments where traditional communication may be impossible, unreliable, or inadvisable. The sensors, processors, and indicators may be linked and activated manually, or they may be linked and activated automatically when they come within a threshold proximity or when a user does a triggering action, such as exiting a vehicle. The processors distinguish extremely urgent events requiring an immediate response from less-urgent events that can wait longer for response, routing and timing the responses accordingly.
Type: Grant
Filed: May 26, 2017
Date of Patent: September 11, 2018
Assignee: Intel Corporation
Inventors: Deepak S. Vembar, Lenitra M. Durham, Glen J. Anderson, Cory J. Booth, Joshua Ekandem, Kathy Yuen, Giuseppe Raffa, John C. Weast
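The last sentence of the abstract describes triaging events by urgency. A minimal sketch of such routing follows; the event names, priority scheme, and function are hypothetical illustrations, not the patented design:

```python
# Hypothetical triage: events tagged "extremely urgent" are routed for
# immediate response; everything else is deferred in priority order,
# mirroring the abstract's urgent-vs-less-urgent distinction.
import heapq

IMMEDIATE = {"fall_detected", "vehicle_exit_alarm", "no_motion"}

def route_events(events):
    """Split (priority, name) events into immediate and deferred lists.

    Lower priority numbers are more urgent; priority 0 or a name in
    IMMEDIATE forces an immediate response.
    """
    immediate, queue = [], []
    for priority, name in events:
        if name in IMMEDIATE or priority == 0:
            immediate.append(name)
        else:
            heapq.heappush(queue, (priority, name))
    deferred = [heapq.heappop(queue)[1] for _ in range(len(queue))]
    return immediate, deferred
```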
-
Publication number: 20180253894
Abstract: Techniques are provided for 3D model reconstruction of dynamic scenes using hybrid foreground-background processing. A methodology implementing the techniques according to an embodiment includes receiving multiple static images of a scene. Each static image is generated by a static camera, positioned at a fixed location and oriented at a fixed viewing angle. The method also includes receiving multiple dynamic images of the scene, each dynamic image generated by a movable camera. The method further includes performing 3D reconstruction of the scene foreground, based on the static images, and performing 3D reconstruction of the scene background, based on the static images and the dynamic images. The method further includes superimposing the reconstructed 3D foreground and 3D background, with alignment based on calibration parameters (e.g., focal length, principal point, rotation, or translation) of the static and movable cameras, to provide a hybrid 3D reconstruction of the scene for 3D rendering.
Type: Application
Filed: November 2, 2016
Publication date: September 6, 2018
Applicant: Intel Corporation
Inventors: Ranganath Krishnan, Deepak S. Vembar, Robert Adams, Bradley A. Jackson
-
Publication number: 20180240244
Abstract: Techniques for high-fidelity three-dimensional (3D) reconstruction of a dynamic scene as a set of voxels are provided. One technique includes: receiving, by a processor, image data from each of two or more spatially-separated sensors observing the scene from a corresponding two or more vantage points; generating, by the processor, the set of voxels from the image data on a frame-by-frame basis; reconstructing, by the processor, surfaces from the set of voxels to generate low-fidelity mesh data; identifying, by the processor, performers in the scene from the image data; obtaining, by the processor, high-fidelity mesh data corresponding to the identified performers; and merging, by the processor, the low-fidelity mesh data with the high-fidelity mesh data to generate high-fidelity 3D output. The identifying of the performers includes: segmenting, by the processor, the image data into objects; and classifying, by the processor, those of the objects representing the performers.
Type: Application
Filed: November 4, 2016
Publication date: August 23, 2018
Applicant: Intel Corporation
Inventors: Sridhar Uyyala, Ignacio J. Alvarez, Bradley A. Jackson, Deepak S. Vembar
-
Patent number: 9961026
Abstract: Technologies for generating a text message from user-selectable icons include a wearable computing device that determines a context associated with the wearable computing device. The wearable computing device determines user-selectable icons from predetermined user-selectable icons based on the context associated with the wearable computing device. Each of the user-selectable icons may have one or more textual meanings associated therewith for text message generation. The determined user-selectable icons may be displayed on a display of the wearable computing device.
Type: Grant
Filed: October 31, 2013
Date of Patent: May 1, 2018
Assignee: Intel Corporation
Inventors: Glen J. Anderson, Ryan S. Brotman, Wen-Ling M. Huang, Francisco Javier Fernandez, Jamie Sherman, Deepak S. Vembar, Philip Muse, Lenitra M. Durham, Pete A. Denman, Giuseppe Beppe Raffa, Ramune Nagisetty
-
Publication number: 20180047332
Abstract: In one example, a head mounted display system detects a position of the head of a user of the head mounted display, predicts a position of the head of the user at a time after the time at which the position was detected, and renders image data based on the predicted head position.
Type: Application
Filed: August 11, 2017
Publication date: February 15, 2018
Applicant: Intel Corporation
Inventors: Atsuo Kuwahara, Deepak S. Vembar, Paul S. Diefenbaugh, Vallabhajosyula S. Somayazulu, Kofi C. Whitney
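The abstract above predicts a future head position from a detected one. The simplest common approach is constant-velocity extrapolation from two samples; the sketch below is an illustrative assumption, not the claimed prediction method:

```python
# Hypothetical constant-velocity predictor: extrapolate each head-pose
# coordinate (e.g., x/y/z or yaw/pitch/roll) linearly from the last two
# samples to a future render time.
def predict_head_position(p_prev, p_curr, t_prev, t_curr, t_future):
    """Linearly extrapolate the pose observed at t_prev and t_curr to t_future."""
    dt = t_curr - t_prev
    if dt <= 0:
        return tuple(p_curr)  # degenerate timing; fall back to latest sample
    scale = (t_future - t_curr) / dt
    return tuple(c + (c - p) * scale for p, c in zip(p_prev, p_curr))
```

Real HMD pipelines refine this with IMU fusion and filtering, since raw linear extrapolation overshoots on sharp head turns.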
-
Patent number: 9864844
Abstract: Systems and methods may provide for obtaining first wearable sensor data associated with a first fitness session and first fitness equipment and obtaining second wearable sensor data associated with a second fitness session and second fitness equipment. Additionally, an effort normalization may be conducted between one or more settings of the second fitness equipment and one or more settings of the first fitness equipment based on the first wearable sensor data and the second wearable sensor data. In one example, a user prompt is generated during the second fitness session via a user interface of one or more of the second fitness equipment or a wearable device based on the normalization.
Type: Grant
Filed: June 26, 2015
Date of Patent: January 9, 2018
Assignee: Intel Corporation
Inventors: Lenitra M. Durham, Giuseppe Raffa, Glen J. Anderson, Deepak S. Vembar, Jamie Sherman
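The "effort normalization" in the abstract above can be illustrated with a simple ratio-based scaling of the second machine's setting so the measured effort matches the first session; the function and calibration model are hypothetical assumptions, not the patented computation:

```python
# Hypothetical effort normalization: if the wearable measured a lower effort
# (e.g., mean heart rate) on the second machine than on the first, scale the
# second machine's setting up proportionally, and vice versa.
def normalize_setting(first_effort: float, second_effort: float,
                      second_setting: float) -> float:
    """Return an adjusted setting for the second fitness equipment."""
    if second_effort <= 0:
        return second_setting  # no usable measurement; leave setting alone
    return second_setting * (first_effort / second_effort)
```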
-
Publication number: 20170262972
Abstract: A mechanism is described for facilitating generation of voxel representations and assignment of trust metrics according to one embodiment. A method of embodiments, as described herein, includes detecting, by a computing device, a plurality of data sources hosting image capturing devices capable of capturing images of objects in a space, where the images are capable of being received by the computing device. The method may further include tagging trust metrics to the images based on the credibility of the plurality of data sources, where a trust metric indicates the veracity of a corresponding image. The method may further include aggregating the images into an aggregated image representation, and generating a voxel representation of the aggregated image representation such that the images are presented as voxel images, where the veracities of the voxel images are secured based on the tagging of the trust metrics to their corresponding images.
Type: Application
Filed: March 25, 2016
Publication date: September 14, 2017
Inventors: Robert Adams, Bradley Jackson, Ignacio J. Alvarez, Sridhar Uyyala, Deepak S. Vembar
-
Publication number: 20170265051
Abstract: One or more sensors gather data, one or more processors analyze the data, and one or more indicators notify a user if the data represent an event that requires a response. One or more of the sensors and/or the indicators is a wearable device for wireless communication. Optionally, other components may be vehicle-mounted or deployed on-site. The components form an ad-hoc network enabling users to keep track of each other in challenging environments where traditional communication may be impossible, unreliable, or inadvisable. The sensors, processors, and indicators may be linked and activated manually, or they may be linked and activated automatically when they come within a threshold proximity or when a user does a triggering action, such as exiting a vehicle. The processors distinguish extremely urgent events requiring an immediate response from less-urgent events that can wait longer for response, routing and timing the responses accordingly.
Type: Application
Filed: May 26, 2017
Publication date: September 14, 2017
Inventors: Deepak S. Vembar, Lenitra M. Durham, Glen J. Anderson, Cory J. Booth, Joshua Ekandem, Kathy Yuen, Giuseppe Raffa, John C. Weast
-
Patent number: 9705547
Abstract: One or more sensors gather data, one or more processors analyze the data, and one or more indicators notify a user if the data represent an event that requires a response. One or more of the sensors and/or the indicators is a wearable device for wireless communication. Optionally, other components may be vehicle-mounted or deployed on-site. The components form an ad-hoc network enabling users to keep track of each other in challenging environments where traditional communication may be impossible, unreliable, or inadvisable. The sensors, processors, and indicators may be linked and activated manually, or they may be linked and activated automatically when they come within a threshold proximity or when a user does a triggering action, such as exiting a vehicle. The processors distinguish extremely urgent events requiring an immediate response from less-urgent events that can wait longer for response, routing and timing the responses accordingly.
Type: Grant
Filed: March 26, 2015
Date of Patent: July 11, 2017
Assignee: Intel Corporation
Inventors: Deepak S. Vembar, Lenitra M. Durham, Glen J. Anderson, Cory J. Booth, Joshua Ekandem, Kathy Yuen, Giuseppe Raffa, John C. Weast
-
Patent number: 9602490
Abstract: The present application is directed to user authentication confidence based on multiple devices. A user may possess at least one device. The device may determine a device confidence level that the identity of the user is authentic based on at least data collected by a data collection module in the device. For example, a confidence module in the device may receive the data from the data collection module, determine a quality corresponding to the data and determine the device confidence level based on the quality. If the user possesses two or more devices, at least one of the devices may collect device confidence levels from other devices to determine a total confidence level. For example, a device may authenticate the other devices and then receive device confidence levels for use in determining the total confidence level, which may be used to set an operational mode in a device or system.
Type: Grant
Filed: November 10, 2014
Date of Patent: March 21, 2017
Assignee: Intel Corporation
Inventors: Lenitra M. Durham, Deepak S. Vembar, John C. Weast, Cory J. Booth
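The abstract above combines per-device confidence levels into a total confidence that sets an operational mode. A minimal sketch of that aggregation follows; the averaging rule, thresholds, and mode names are hypothetical, not the patented scheme:

```python
# Hypothetical aggregation: average confidence levels (each in [0, 1])
# collected from the user's devices, then map the total to an operational
# mode, as the abstract describes at a high level.
def total_confidence(device_levels) -> float:
    """Combine per-device confidence levels into one total level."""
    levels = list(device_levels)
    return sum(levels) / len(levels) if levels else 0.0

def operational_mode(level: float, unlock: float = 0.8,
                     limited: float = 0.5) -> str:
    """Map a total confidence level to a device operational mode."""
    if level >= unlock:
        return "unlocked"
    if level >= limited:
        return "limited"
    return "locked"
```

A production scheme would more likely weight each device's level by the quality of its underlying sensor data, which the abstract also mentions.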
-
Publication number: 20160375307
Abstract: Systems and methods may provide for obtaining first wearable sensor data associated with a first fitness session and first fitness equipment and obtaining second wearable sensor data associated with a second fitness session and second fitness equipment. Additionally, an effort normalization may be conducted between one or more settings of the second fitness equipment and one or more settings of the first fitness equipment based on the first wearable sensor data and the second wearable sensor data. In one example, a user prompt is generated during the second fitness session via a user interface of one or more of the second fitness equipment or a wearable device based on the normalization.
Type: Application
Filed: June 26, 2015
Publication date: December 29, 2016
Inventors: Lenitra M. Durham, Giuseppe Raffa, Glen J. Anderson, Deepak S. Vembar, Jamie Sherman