Patents by Inventor Lior Shapira
Lior Shapira has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).
-
Patent number: 10547613
Abstract: A device provisioning service (DPS) fields requests from unprovisioned devices so that those unprovisioned devices can obtain network credentials or other data used in provisioning the unprovisioned device. The DPS can identify the device securely and associate it with a known user account, or the user provisioning the device can supply network credentials over a side channel after supplying a provision code indicative of possession of the unprovisioned device. The provision code can be unique to the unprovisioned device or a short-sequence code that is not necessarily unique, but that is sufficiently uncommon that a specific short-sequence code would not likely be used more than once at a time. In order to communicate with the DPS, a provisioning device might connect the unprovisioned device to the DPS. If the provisioning device is a trusted device, it can perform some of the steps otherwise required of the DPS.
Type: Grant
Filed: May 17, 2017
Date of Patent: January 28, 2020
Assignee: Amazon Technologies, Inc.
Inventors: Andrew Jay Roths, Omar Abdul Baki, Lior Shapira, Sudharsan Sampath, Kadirvel Chockalingam Vanniarajan
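The provision-code flow described in this abstract can be sketched roughly as below. This is an illustrative reconstruction, not the patented implementation; the class and method names (`DeviceProvisioningService`, `register_device`, `provision`) are hypothetical.

```python
import secrets

class DeviceProvisioningService:
    """Minimal sketch of a DPS that releases network credentials only
    after a provision code demonstrates possession of the device."""

    def __init__(self):
        self._pending = {}  # provision_code -> device_id

    def register_device(self, device_id):
        # Issue a short-sequence provision code. It need not be globally
        # unique, only unlikely to be in use twice at the same time.
        code = secrets.token_hex(3)  # 6 hex characters
        self._pending[code] = device_id
        return code

    def provision(self, code, network_credentials):
        # The user supplies the code over a side channel (e.g. a companion
        # app), proving possession of the unprovisioned device.
        device_id = self._pending.pop(code, None)
        if device_id is None:
            raise KeyError("unknown or expired provision code")
        return {"device_id": device_id, "credentials": network_credentials}
```

A trusted provisioning device bridging the unprovisioned device and the DPS could perform the lookup step locally instead of forwarding it.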
-
Patent number: 10489524
Abstract: A method for generating synthetic data records that include datasets capturing state-based transitions. A state transition family is randomly selected according to the distribution of samples among the different clusters of users, and the context variables are randomly sampled according to their distribution within the chosen cluster. The relevant Markov chain models are selected according to the sampled context, and the initial state of the sequence is randomly selected according to the distribution of states. A random walk process is initialized on the graph models, and the random walk is performed on each context separately, assuming context independence. The cause condition of the current transition is sampled for each state transition, based on the distributions on the selected edge.
Type: Grant
Filed: December 28, 2015
Date of Patent: November 26, 2019
Assignee: Deutsche Telekom AG
Inventors: Ariel Bar, Barak Chizi, Dudu Mimran, Lior Rokach, Bracha Shapira, Andreas Grothe, Rahul Swaminathan
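The core of the method, a random walk over a Markov chain to emit a synthetic state sequence, can be sketched as follows. This is a simplified illustration under assumed inputs (transition probabilities as nested dicts); the cluster, context, and cause-condition sampling layers of the patent are omitted.

```python
import random

def generate_sequence(transitions, initial_dist, length, seed=0):
    """Random walk over a Markov chain.

    transitions: dict mapping each state to a dict of next-state
    probabilities; initial_dist: dict of starting-state probabilities.
    Returns a synthetic sequence of `length` states."""
    rng = random.Random(seed)

    def sample(dist):
        # Inverse-CDF sampling from a discrete distribution.
        r, acc = rng.random(), 0.0
        for state, p in dist.items():
            acc += p
            if r < acc:
                return state
        return state  # guard against floating-point rounding

    seq = [sample(initial_dist)]
    for _ in range(length - 1):
        seq.append(sample(transitions[seq[-1]]))
    return seq
```

Per the abstract, one such walk would be run for each sampled context independently, with an edge-conditioned cause sampled at every transition.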
-
Patent number: 10445939
Abstract: Tactile virtual reality (VR) and/or mixed reality (MR) experiences are described. Techniques described herein include receiving data from a sensor and accessing a position and an orientation of a real object that is physically present in a real scene. Furthermore, techniques described herein include identifying the real object based at least in part on the position and the orientation of the real object and causing a graphical element corresponding to the real object to be rendered on a display of a VR and/or MR display device. The graphical element can be determined based at least in part on a VR and/or MR application. The techniques described herein include determining an interaction with the real object and causing a functionality associated with the graphical element to be performed in the VR or MR environment rendered via the VR and/or MR display device, respectively.
Type: Grant
Filed: January 10, 2018
Date of Patent: October 15, 2019
Assignee: Microsoft Technology Licensing, LLC
Inventors: Lior Shapira, Judith Amores Fernandez, Xavier Benavides Palos
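The identify-render-interact loop this abstract describes can be sketched as a small registry that matches sensed positions to known real objects and dispatches interactions to their virtual functionality. All names here (`MixedRealityScene`, `register`, `identify`, `interact`) are illustrative assumptions, not from the patent.

```python
import math

class MixedRealityScene:
    """Sketch: map tracked real objects to graphical elements and
    dispatch interactions to the functionality bound to each element."""

    def __init__(self, tolerance=0.1):
        # object_id -> (position, graphical element, interaction handler)
        self._objects = {}
        self._tolerance = tolerance

    def register(self, object_id, position, element, on_interact):
        self._objects[object_id] = (position, element, on_interact)

    def identify(self, sensed_position):
        # Identify the registered object nearest the sensed position,
        # within tolerance; a fuller version would also match orientation.
        best, best_d = None, self._tolerance
        for oid, (pos, _, _) in self._objects.items():
            d = math.dist(sensed_position, pos)
            if d <= best_d:
                best, best_d = oid, d
        return best

    def render_element(self, object_id):
        # The graphical element shown in place of the real object.
        return self._objects[object_id][1]

    def interact(self, object_id):
        # Perform the functionality associated with the element.
        return self._objects[object_id][2]()
```

For example, a real mug might be registered to render as a potion bottle whose "drink" functionality fires when the user grasps the mug.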
-
Publication number: 20180190034
Abstract: Tactile virtual reality (VR) and/or mixed reality (MR) experiences are described. Techniques described herein include receiving data from a sensor and accessing a position and an orientation of a real object that is physically present in a real scene. Furthermore, techniques described herein include identifying the real object based at least in part on the position and the orientation of the real object and causing a graphical element corresponding to the real object to be rendered on a display of a VR and/or MR display device. The graphical element can be determined based at least in part on a VR and/or MR application. The techniques described herein include determining an interaction with the real object and causing a functionality associated with the graphical element to be performed in the VR or MR environment rendered via the VR and/or MR display device, respectively.
Type: Application
Filed: January 10, 2018
Publication date: July 5, 2018
Inventors: Lior Shapira, Judith Amores Fernandez, Xavier Benavides Palos
-
Patent number: 9959675
Abstract: A “Layout Optimizer” provides various real-time iterative constraint-satisfaction methodologies that use constraint-based frameworks to generate optimized layouts that map or embed virtual objects into environments. The term environment refers to combinations of environmental characteristics, including, but not limited to, 2D or 3D scene geometry or layout, scene colors, patterns, and/or textures, scene illumination, scene heat sources, fixed or moving people, objects or fluids, etc., any of which may evolve or change over time. A set of parameters are specified or selected for each object. Further, the environmental characteristics are determined automatically or specified by users. Relationships between objects and/or the environment derived from constraints associated with objects and the environment are then used to iteratively determine optimized self-consistent and scene-consistent object layouts.
Type: Grant
Filed: June 9, 2014
Date of Patent: May 1, 2018
Assignee: Microsoft Technology Licensing, LLC
Inventors: Ran Gal, Pushmeet Kohli, Eyal Ofek, Lior Shapira
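Iterative constraint satisfaction of the kind this abstract describes can be illustrated with a toy greedy optimizer: perturb one object's position at random and keep the move whenever the total cost of all constraints drops. This is a minimal sketch under assumed 2D positions and distance constraints, not the patented methodology.

```python
import random

def near(a, b, target):
    """Example constraint: cost grows as the distance between objects
    a and b deviates from the target distance."""
    def cost(layout):
        (ax, ay), (bx, by) = layout[a], layout[b]
        return abs(((ax - bx) ** 2 + (ay - by) ** 2) ** 0.5 - target)
    return cost

def optimize_layout(positions, constraints, iterations=2000, step=0.5, seed=0):
    """Greedy iterative constraint satisfaction over a 2D layout."""
    rng = random.Random(seed)
    layout = dict(positions)
    best = sum(c(layout) for c in constraints)
    for _ in range(iterations):
        # Perturb one object and accept the move only if total cost drops.
        name = rng.choice(sorted(layout))
        x, y = layout[name]
        candidate = dict(layout)
        candidate[name] = (x + rng.uniform(-step, step),
                           y + rng.uniform(-step, step))
        cost = sum(c(candidate) for c in constraints)
        if cost < best:
            layout, best = candidate, cost
    return layout, best
```

Real environmental characteristics (geometry, illumination, moving people) would enter as additional cost terms over the same layout variables.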
-
Patent number: 9911232
Abstract: An “Anchored Environment Generator” generates a physically constrained virtual environment that is molded and anchored to a real-world environment around a user (or multiple users). This molding and anchoring of the physically constrained virtual environment ensures that at least a portion of the physically constrained virtual environment matches tactile truth for one or more surfaces and objects within the real-world environment. Real objects and surfaces in the real-world environment may appear as different virtual objects, and may have different functionality, in the physically constrained virtual environment. Consequently, users may move around within the physically constrained virtual environment while touching and interacting with virtual objects in the physically constrained virtual environment. In some implementations, the physically constrained virtual environment is constructed from virtual building blocks that are consistent with a theme-based specification (e.g.
Type: Grant
Filed: February 27, 2015
Date of Patent: March 6, 2018
Assignee: Microsoft Technology Licensing, LLC
Inventors: Lior Shapira, Daniel Freedman
-
Patent number: 9898864
Abstract: A “Shared Tactile Immersive Virtual Environment Generator” (STIVE Generator) constructs fully immersive shared virtual reality (VR) environments wherein multiple users share tactile interactions via virtual elements that are mapped and rendered to real objects that can be touched and manipulated by multiple users. Generation of real-time environmental models of shared real-world spaces enables mapping of virtual interactive elements to real objects combined with multi-viewpoint presentation of the immersive VR environment to multiple users. Real-time environmental models classify geometry, positions, and motions of real-world surfaces and objects. Further, a unified real-time tracking model comprising position, orientation, skeleton models and hand models is generated for each user. The STIVE Generator then renders frames of the shared immersive virtual reality corresponding to a real-time field of view of each particular user.
Type: Grant
Filed: May 28, 2015
Date of Patent: February 20, 2018
Assignee: Microsoft Technology Licensing, LLC
Inventors: Lior Shapira, Ran Gal, Douglas Christopher Burger
-
Patent number: 9898869
Abstract: Tactile virtual reality (VR) and/or mixed reality (MR) experiences are described. Techniques described herein include receiving data from a sensor and accessing a position and an orientation of a real object that is physically present in a real scene. Furthermore, techniques described herein include identifying the real object based at least in part on the position and the orientation of the real object and causing a graphical element corresponding to the real object to be rendered on a display of a VR and/or MR display device. The graphical element can be determined based at least in part on a VR and/or MR application. The techniques described herein include determining an interaction with the real object and causing a functionality associated with the graphical element to be performed in the VR or MR environment rendered via the VR and/or MR display device, respectively.
Type: Grant
Filed: November 4, 2015
Date of Patent: February 20, 2018
Assignee: Microsoft Technology Licensing, LLC
Inventors: Lior Shapira, Xavier Benavides Palos, Judith Amores Fernandez
-
Patent number: 9836117
Abstract: A “Tactile Autonomous Drone” (TAD) (e.g., flying drones, mobile robots, etc.) supplies real-time tactile feedback to users immersed in virtual reality (VR) environments. TADs are not rendered into the VR environment, and are therefore not visible to users immersed in the VR environment. In various implementations, one or more TADs track users as they move through a real-world space while immersed in the VR environment. One or more TADs apply tracking information to autonomously position themselves, or one or more physical surfaces or objects carried by the TADs, in a way that enables physical contact between those surfaces or objects and one or more portions of the user's body. Further, this positioning of surfaces or objects corresponds to some real-time virtual event, virtual object, virtual character, virtual avatar of another user, etc., in the VR environment to provide real-time tactile feedback to users immersed in the VR environment.
Type: Grant
Filed: May 28, 2015
Date of Patent: December 5, 2017
Assignee: Microsoft Technology Licensing, LLC
Inventor: Lior Shapira
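The autonomous-positioning behavior in this abstract can be sketched as a simple motion step: the TAD moves its carried surface toward the real-world point corresponding to a virtual contact, clamped to a maximum step per update. A hypothetical helper under assumed 3D coordinates, not the patented control scheme.

```python
def plan_tad_move(drone_pos, target, max_step=0.2):
    """Step the TAD's carried surface toward the real-world anchor of a
    virtual object, so the user's hand meets a physical surface where
    the VR environment shows something touchable."""
    delta = [t - d for t, d in zip(target, drone_pos)]
    dist = sum(c * c for c in delta) ** 0.5
    if dist <= max_step:
        return target  # close enough: snap to the contact point
    scale = max_step / dist
    return tuple(d + c * scale for d, c in zip(drone_pos, delta))
```

Calling this once per tracking update moves the surface into place ahead of the user's reach while the drone itself stays unrendered in VR.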
-
Patent number: 9679144
Abstract: An “AR Privacy API” provides an API that allows applications and web browsers to use various content rendering abstractions to protect user privacy in a wide range of web-based immersive augmented reality (AR) scenarios. The AR Privacy API extends the traditional concept of “web pages” to immersive “web rooms” wherein any desired combination of existing or new 2D and 3D content is rendered within a user's room or other space. Advantageously, the AR Privacy API and associated rendering abstractions are useable by a wide variety of applications and web content for enhancing the user's room or other space with web-based immersive AR content. Further, the AR Privacy API is implemented using any existing or new web page coding platform, including, but not limited to HTML, XML, CSS, JavaScript, etc., thereby enabling existing web content and coding techniques to be smoothly integrated into a wide range of web room AR scenarios.
Type: Grant
Filed: November 15, 2013
Date of Patent: June 13, 2017
Assignee: Microsoft Technology Licensing, LLC
Inventors: David Molnar, John Vilk, Eyal Ofek, Alexander Moshchuk, Jiahe Wang, Ran Gal, Lior Shapira, Douglas Christopher Burger, Blair MacIntyre, Benjamin Livshits
-
Publication number: 20170069134
Abstract: Tactile virtual reality (VR) and/or mixed reality (MR) experiences are described. Techniques described herein include receiving data from a sensor and accessing a position and an orientation of a real object that is physically present in a real scene. Furthermore, techniques described herein include identifying the real object based at least in part on the position and the orientation of the real object and causing a graphical element corresponding to the real object to be rendered on a display of a VR and/or MR display device. The graphical element can be determined based at least in part on a VR and/or MR application. The techniques described herein include determining an interaction with the real object and causing a functionality associated with the graphical element to be performed in the VR or MR environment rendered via the VR and/or MR display device, respectively.
Type: Application
Filed: November 4, 2015
Publication date: March 9, 2017
Inventors: Lior Shapira, Xavier Benavides Palos, Judith Amores Fernandez
-
Publication number: 20160349835
Abstract: A “Tactile Autonomous Drone” (TAD) (e.g., flying drones, mobile robots, etc.) supplies real-time tactile feedback to users immersed in virtual reality (VR) environments. TADs are not rendered into the VR environment, and are therefore not visible to users immersed in the VR environment. In various implementations, one or more TADs track users as they move through a real-world space while immersed in the VR environment. One or more TADs apply tracking information to autonomously position themselves, or one or more physical surfaces or objects carried by the TADs, in a way that enables physical contact between those surfaces or objects and one or more portions of the user's body. Further, this positioning of surfaces or objects corresponds to some real-time virtual event, virtual object, virtual character, virtual avatar of another user, etc., in the VR environment to provide real-time tactile feedback to users immersed in the VR environment.
Type: Application
Filed: May 28, 2015
Publication date: December 1, 2016
Inventor: Lior Shapira
-
Publication number: 20160350973
Abstract: A “Shared Tactile Immersive Virtual Environment Generator” (STIVE Generator) constructs fully immersive shared virtual reality (VR) environments wherein multiple users share tactile interactions via virtual elements that are mapped and rendered to real objects that can be touched and manipulated by multiple users. Generation of real-time environmental models of shared real-world spaces enables mapping of virtual interactive elements to real objects combined with multi-viewpoint presentation of the immersive VR environment to multiple users. Real-time environmental models classify geometry, positions, and motions of real-world surfaces and objects. Further, a unified real-time tracking model comprising position, orientation, skeleton models and hand models is generated for each user. The STIVE Generator then renders frames of the shared immersive virtual reality corresponding to a real-time field of view of each particular user.
Type: Application
Filed: May 28, 2015
Publication date: December 1, 2016
Inventors: Lior Shapira, Ran Gal, Douglas Christopher Burger
-
Publication number: 20160253842
Abstract: An “Anchored Environment Generator” generates a physically constrained virtual environment that is molded and anchored to a real-world environment around a user (or multiple users). This molding and anchoring of the physically constrained virtual environment ensures that at least a portion of the physically constrained virtual environment matches tactile truth for one or more surfaces and objects within the real-world environment. Real objects and surfaces in the real-world environment may appear as different virtual objects, and may have different functionality, in the physically constrained virtual environment. Consequently, users may move around within the physically constrained virtual environment while touching and interacting with virtual objects in the physically constrained virtual environment. In some implementations, the physically constrained virtual environment is constructed from virtual building blocks that are consistent with a theme-based specification (e.g.
Type: Application
Filed: February 27, 2015
Publication date: September 1, 2016
Inventors: Lior Shapira, Daniel Freedman
-
Publication number: 20160019711
Abstract: Surface reconstruction contour completion embodiments are described which provide dense reconstruction of a scene from images captured from one or more viewpoints. Both a room layout and the full extent of partially occluded objects in a room can be inferred using a Contour Completion Random Field model to augment a reconstruction volume. The augmented reconstruction volume can then be used by any surface reconstruction pipeline to show previously occluded objects and surfaces.
Type: Application
Filed: September 28, 2015
Publication date: January 21, 2016
Inventors: Lior Shapira, Ran Gal, Eyal Ofek, Pushmeet Kohli, Nathan Silberman
-
Publication number: 20150356774
Abstract: A “Layout Optimizer” provides various real-time iterative constraint-satisfaction methodologies that use constraint-based frameworks to generate optimized layouts that map or embed virtual objects into environments. The term environment refers to combinations of environmental characteristics, including, but not limited to, 2D or 3D scene geometry or layout, scene colors, patterns, and/or textures, scene illumination, scene heat sources, fixed or moving people, objects or fluids, etc., any of which may evolve or change over time. A set of parameters are specified or selected for each object. Further, the environmental characteristics are determined automatically or specified by users. Relationships between objects and/or the environment derived from constraints associated with objects and the environment are then used to iteratively determine optimized self-consistent and scene-consistent object layouts.
Type: Application
Filed: June 9, 2014
Publication date: December 10, 2015
Inventors: Ran Gal, Pushmeet Kohli, Eyal Ofek, Lior Shapira
-
Patent number: 9171403
Abstract: Surface reconstruction contour completion embodiments are described which provide dense reconstruction of a scene from images captured from one or more viewpoints. Both a room layout and the full extent of partially occluded objects in a room can be inferred using a Contour Completion Random Field model to augment a reconstruction volume. The augmented reconstruction volume can then be used by any surface reconstruction pipeline to show previously occluded objects and surfaces.
Type: Grant
Filed: February 13, 2014
Date of Patent: October 27, 2015
Assignee: Microsoft Technology Licensing, LLC
Inventors: Lior Shapira, Ran Gal, Eyal Ofek, Pushmeet Kohli, Nathan Silberman
-
Patent number: 9152679
Abstract: According to an example, data pertaining to a plurality of entities recommended for a user may be accessed, in which the data identifies relationships between the plurality of entities with respect to each other. In addition, a relevance map for the user that displays graphical representations of the plurality of entities over a substantially optimized use of space available for display of the graphical representations in the relevance map may be generated, in which the graphical representations of the plurality of entities are arranged in the relevance map according to a predetermined arrangement scheme.
Type: Grant
Filed: January 31, 2013
Date of Patent: October 6, 2015
Assignee: Hewlett-Packard Development Company, L.P.
Inventors: Zachi Karni, Lior Shapira
-
Publication number: 20150228114
Abstract: Surface reconstruction contour completion embodiments are described which provide dense reconstruction of a scene from images captured from one or more viewpoints. Both a room layout and the full extent of partially occluded objects in a room can be inferred using a Contour Completion Random Field model to augment a reconstruction volume. The augmented reconstruction volume can then be used by any surface reconstruction pipeline to show previously occluded objects and surfaces.
Type: Application
Filed: February 13, 2014
Publication date: August 13, 2015
Applicant: Microsoft Corporation
Inventors: Lior Shapira, Ran Gal, Eyal Ofek, Pushmeet Kohli, Nathan Silberman
-
Publication number: 20150143459
Abstract: An “AR Privacy API” provides an API that allows applications and web browsers to use various content rendering abstractions to protect user privacy in a wide range of web-based immersive augmented reality (AR) scenarios. The AR Privacy API extends the traditional concept of “web pages” to immersive “web rooms” wherein any desired combination of existing or new 2D and 3D content is rendered within a user's room or other space. Advantageously, the AR Privacy API and associated rendering abstractions are useable by a wide variety of applications and web content for enhancing the user's room or other space with web-based immersive AR content. Further, the AR Privacy API is implemented using any existing or new web page coding platform, including, but not limited to HTML, XML, CSS, JavaScript, etc., thereby enabling existing web content and coding techniques to be smoothly integrated into a wide range of web room AR scenarios.
Type: Application
Filed: November 15, 2013
Publication date: May 21, 2015
Applicant: Microsoft Corporation
Inventors: David Molnar, John Vilk, Eyal Ofek, Alexander Moshchuk, Jiahe Wang, Ran Gal, Lior Shapira, Douglas Christopher Burger, Blair MacIntyre, Benjamin Livshits