Virtual 3D Environment Patents (Class 715/757)
  • Patent number: 11115468
    Abstract: A method enabling live management of events in the real world through a persistent virtual world system comprises providing in the memory of a server a database with structured data storing a persistent virtual world system comprising virtual replicas of real-world elements; synchronizing the virtual replicas with the respective real-world elements by using a plurality of connected elements connected to the server via a network, the connected elements comprising sensing mechanisms configured to capture data from the real-world elements; and managing events in the real world through the synchronized persistent virtual world system. Management of events comprises monitoring activity in the real world through a persistent virtual world system; detecting the events; checking whether events fall within one or more predetermined analysis requirement parameters; defining timeframe of events; storing the events in the memory; recreating the events; and analyzing the events.
    Type: Grant
    Filed: May 22, 2020
    Date of Patent: September 7, 2021
    Assignee: THE CALANY Holding S. À R.L.
    Inventor: Cevat Yerli
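    A minimal sketch of the kind of event-management loop the abstract of patent 11115468 outlines (synchronize replicas from sensor data, keep only events matching predetermined analysis parameters, record their timeframe, and store them for later recreation and analysis). All class and field names here are illustrative assumptions, not taken from the patent.
```python
# Illustrative sketch only; names (Event, PersistentWorld, analysis_params) are hypothetical.
from dataclasses import dataclass, field
import time

@dataclass
class Event:
    kind: str
    start: float
    end: float
    data: dict = field(default_factory=dict)

class PersistentWorld:
    def __init__(self, analysis_params):
        self.replicas = {}                       # virtual replicas keyed by real-world element id
        self.analysis_params = analysis_params   # predetermined analysis requirement parameters
        self.stored_events = []

    def synchronize(self, element_id, sensor_reading):
        """Update the virtual replica with data captured from the real-world element."""
        self.replicas[element_id] = sensor_reading

    def manage(self, detected_events):
        """Keep only qualifying events, fix their timeframe, and store them for analysis."""
        for ev in detected_events:
            if ev.kind in self.analysis_params:      # check against predetermined parameters
                ev.end = ev.end or time.time()       # define the event timeframe
                self.stored_events.append(ev)        # store for later recreation / analysis
        return self.stored_events

world = PersistentWorld(analysis_params={"collision", "intrusion"})
world.synchronize("door-7", {"open": True})
world.manage([Event(kind="intrusion", start=time.time(), end=0.0)])
```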
  • Patent number: 11103791
    Abstract: A game includes player characters in a virtual space. The game is progressed in accordance with game progress information, which is useable for generating a game screen. The game screen is configured to be displayed on a display device. An instruction for registering a player character, which is manipulated in the game by a first player, is received, and the player character of the first player is registered as a copy non-player character. The copy non-player character is configured to be manipulated in the game by a second player. The game progress information is updated, and data of the player character of the first player is updated in accordance with a manipulation of the copy non-player character by the second player during a period in which the first player is in an off-line state.
    Type: Grant
    Filed: January 17, 2020
    Date of Patent: August 31, 2021
    Assignee: KABUSHIKI KAISHA SQUARE ENIX
    Inventors: Jin Fujisawa, Takashi Anzai, Yoichi Kuroda, Yoshitaka Katsume
  • Patent number: 11108991
    Abstract: Aspects of the subject disclosure may include, for example, an assessment of a context associated with a conference, an identification of an object associated with the conference in accordance with the context, and a presentation of the object as part of the conference. The conference may include a videoconference and the object may include a physical object, a virtual object, or a combination thereof. Other embodiments are disclosed.
    Type: Grant
    Filed: January 2, 2020
    Date of Patent: August 31, 2021
    Assignee: AT&T Intellectual Property I, L.P.
    Inventors: Eric Zavesky, Zhu Liu, Tan Xu, Bernard S. Renger
  • Patent number: 11093023
    Abstract: A method of simulating physics in a virtual worlds system includes selecting at least one of the client devices participating in an instance of a scene as a physics host, the physics host determining subsequent states of objects and sending the subsequent states to one or more processors of a server, the subsequent states of objects comprising one or more of: subsequent locations, orientations, velocities and accelerations determined based on characteristics of the objects and constraints for simulating physics consistent with the new instance of the scene of the virtual worlds system.
    Type: Grant
    Filed: May 17, 2019
    Date of Patent: August 17, 2021
    Assignee: PFAQUTRUMA RESEARCH LLC
    Inventors: Brian Shuster, Aaron Burch
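    A hypothetical sketch of the physics-host arrangement described in patent 11093023's abstract: one participating client is chosen to simulate object states for a scene instance and report the subsequent states to the server. The election rule, the simple Euler integration step, and the data shapes are assumptions for illustration.
```python
import random

def elect_physics_host(client_ids):
    """Pick one participating client to simulate physics for the scene instance."""
    return random.choice(client_ids)

def step_objects(objects, dt):
    """Advance object states (location/velocity) one time step on the physics host."""
    for obj in objects:
        obj["velocity"] = [v + a * dt for v, a in zip(obj["velocity"], obj["acceleration"])]
        obj["location"] = [x + v * dt for x, v in zip(obj["location"], obj["velocity"])]
    return objects

def send_to_server(server_state, scene_id, objects):
    """Server stores the subsequent states received from the physics host."""
    server_state[scene_id] = objects

host = elect_physics_host(["client-a", "client-b", "client-c"])
objects = [{"location": [0.0, 1.0, 0.0], "velocity": [0.0, 0.0, 0.0], "acceleration": [0.0, -9.8, 0.0]}]
server = {}
send_to_server(server, "scene-42", step_objects(objects, dt=0.016))
```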
  • Patent number: 11087387
    Abstract: In various example embodiments, a system and method for dynamically generating virtual marketplace platforms are presented. The system receives a set of facility data, a set of selections for a set of user interaction objects, and a set of object placement selections. The set of user interaction objects includes a first subset and a second subset. The system generates a virtual facility comprising the facility data and the set of user interaction objects distributed based on the object placement selections. The system receives user data for a user interacting with the virtual facility and one or more objects of the first and second subset of user interaction objects, dynamically arranges the second subset of user interaction objects into a second arrangement, and causes presentation of the virtual facility and the set of user interaction objects to the user.
    Type: Grant
    Filed: May 13, 2019
    Date of Patent: August 10, 2021
    Assignee: eBay Inc.
    Inventors: David Beach, John Tapley
  • Patent number: 11080290
    Abstract: A document is received, the document including metadata for a data visualization of a data set. The data set includes a plurality of data columns, each of the plurality of data columns having a column name and a plurality of data values. A first set of columns of the plurality of columns is present in the data visualization. The first set of columns is determined based on the metadata. A second set of columns of the plurality of columns is determined, where the second set of columns includes the remaining columns of the plurality of columns excluding the first set of columns. The data set is ordered by having the first set of columns prior to the second set of columns. A composite index is generated on the ordered data set.
    Type: Grant
    Filed: December 9, 2016
    Date of Patent: August 3, 2021
    Assignee: SAP SE
    Inventors: Dharmesh Rana, Swati Krishna Setty, Tejram Jagannath Sonwane
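    A small illustration of the ordering rule in patent 11080290's abstract: columns found in the visualization metadata are placed first, the remaining columns follow, and a composite index is declared over the resulting order. The function name, table, and emitted SQL are assumptions, not the implementation.
```python
# Sketch under assumed names: order visualization columns first, then the rest,
# and declare a composite index over that ordering.
def composite_index_sql(table, all_columns, visualization_columns):
    first = [c for c in all_columns if c in visualization_columns]       # columns in the visualization
    second = [c for c in all_columns if c not in visualization_columns]  # remaining columns
    ordered = first + second
    return f"CREATE INDEX idx_{table}_composite ON {table} ({', '.join(ordered)});"

print(composite_index_sql("sales", ["region", "product", "revenue", "year"], {"year", "region"}))
# CREATE INDEX idx_sales_composite ON sales (region, year, product, revenue);
```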
  • Patent number: 11074928
    Abstract: A computer-implemented method includes determining a meeting has initialized between a first user and a second user, wherein vocal and video recordings are produced for at least the first user. The method receives the vocal and video recordings for the first user. The method analyzes the vocal and video recordings for the first user according to one or more parameters for speech and one or more parameters for gestures. The method determines one or more emotions and a role in the meeting for the first user based at least on the analyzed vocal and video recordings. The method sends an output of analysis to at least one of the first user and the second user, wherein the output of analysis includes at least the determined one or more emotions and the role in the meeting for the first user.
    Type: Grant
    Filed: January 26, 2018
    Date of Patent: July 27, 2021
    Assignee: International Business Machines Corporation
    Inventors: Eli M. Dow, Thomas D. Fitzsimmons, Tynan J. Garrett, Emily M. Metruck
  • Patent number: 11065551
    Abstract: Methods and systems are provided for delivering a virtual reality (VR) experience of a real world space to a remote user via a head mounted display (HMD). A method provides for sending a request for the VR experience of the real world space and identifying a viewing location made by the user. The method includes operations for mapping the viewing location to a real world capture system for capturing video and audio at a location that corresponds to the viewing location and receiving real world coordinates for the real world capture system. Further, the method accesses a user profile of the user and receives a video stream of the real world space captured by the real world capture system. The method is able to identify and reskin a real world object with a graphical content element by overlaying the graphical content element in place of the image data associated with the real world object.
    Type: Grant
    Filed: February 11, 2020
    Date of Patent: July 20, 2021
    Assignee: Sony Interactive Entertainment LLC
    Inventors: Mohammed Khan, Miao Li, Ken Miyaki
  • Patent number: 11061531
    Abstract: Disclosed is a system and method for an interactive communication experience on mobile devices. In general, the present disclosure discusses dynamically manipulating or modifying graphic user representations during an electronic communication. The modification or manipulation of these graphic user representations enables users to convey nuances of mood and feelings rather than being confined to conveying them through conventional communications, including text, images, video, or selecting an appropriate emoticon or avatar from a palette of predetermined emoticons or avatars.
    Type: Grant
    Filed: September 4, 2019
    Date of Patent: July 13, 2021
    Assignee: VERIZON MEDIA INC.
    Inventor: Aaron Druck
  • Patent number: 11043031
    Abstract: Systems and methods for inserting and transforming content are provided. For example, the inserted content may include augmented reality content that is inserted into a physical space or a representation of the physical space such as an image. An example system and method may include receiving an image and identifying a physical location associated with a display management entity within the image. The example system and method may also include retrieving content display parameters associated with the display management entity. Additionally, the example system and method may also include identifying content to display and displaying the content using the display parameters associated with the display management entity.
    Type: Grant
    Filed: October 22, 2018
    Date of Patent: June 22, 2021
    Assignee: GOOGLE LLC
    Inventors: Brett Barros, Xavier Benavides Palos
  • Patent number: 11035948
    Abstract: The disclosure describes a virtual reality feedback device, and a positioning method, a feedback method, and a positioning system thereof. The method for positioning a virtual reality feedback device includes: obtaining first time point information of a first microwave signal, wherein the first time point information includes a reception time point and a transmission time point of the first microwave signal; obtaining second time point information of a second microwave signal, wherein the second time point information includes a reception time point and a transmission time point of the second microwave signal; and determining a position of the virtual reality feedback device according to a transmission speed of the first microwave signal, a transmission speed of the second microwave signal, the first time point information, and the second time point information.
    Type: Grant
    Filed: October 12, 2018
    Date of Patent: June 15, 2021
    Assignees: BOE Technology Group Co., Ltd., Hefei BOE Optoelectronics Technology Co., Ltd.
    Inventors: Hui Luo, Hui Wang, Xin Yi, Yanni Liu
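    A minimal sketch of the time-of-flight idea in patent 11035948's abstract: each range is the signal's transmission speed multiplied by (reception time minus transmission time), and two such ranges from known transmitter positions constrain the device position. The 2D circle-intersection step and the anchor coordinates are illustrative assumptions.
```python
import math

def range_from_times(speed, t_sent, t_received):
    """Range = transmission speed * (reception time - transmission time)."""
    return speed * (t_received - t_sent)

def intersect_circles(p1, r1, p2, r2):
    """Return the (up to two) 2D positions at distance r1 from p1 and r2 from p2."""
    dx, dy = p2[0] - p1[0], p2[1] - p1[1]
    d = math.hypot(dx, dy)
    a = (r1**2 - r2**2 + d**2) / (2 * d)
    h = math.sqrt(max(r1**2 - a**2, 0.0))
    mx, my = p1[0] + a * dx / d, p1[1] + a * dy / d
    return [(mx + h * dy / d, my - h * dx / d),
            (mx - h * dy / d, my + h * dx / d)]

c = 3.0e8  # assumed microwave propagation speed (m/s)
r1 = range_from_times(c, t_sent=0.0, t_received=20e-9)      # 6 m
r2 = range_from_times(c, t_sent=0.0, t_received=26.67e-9)   # ~8 m
print(intersect_circles((0.0, 0.0), r1, (10.0, 0.0), r2))   # ~ (3.6, 4.8) and (3.6, -4.8)
```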
  • Patent number: 11030781
    Abstract: Systems and methods for aggregating and storing different types of data, and generating interactive user interfaces for analyzing the stored data. In some embodiments, entity data is received for a plurality of entities from one or more data sources, and used to determine attribute values for the entities for one or more given time periods. The plurality of entities may be categorized into one or more entity groups, and aggregate attribute values may be generated based upon the entity groups. A first interactive user interface is generated displaying the one or more entity groups in association with the aggregated attribute values associated with the entity group. In response to a received indication of a user selection of an entity group, a second interactive user interface is generated displaying the one or more entities associated with the selected entity group, each entity displayed in association with the attribute values associated with the entity.
    Type: Grant
    Filed: September 18, 2019
    Date of Patent: June 8, 2021
    Assignee: Palantir Technologies Inc.
    Inventors: Sean Kelley, Dylan Scott, Ayush Sood, Kevin Verdieck, Izaak Baker, Eliot Ball, Zachary Bush, Allen Cai, Jerry Chen, Aditya Dahiya, Daniel Deutsch, Calvin Fernandez, Jonathan Hong, Jiaji Hu, Audrey Kuan, Lucas Lemanowicz, Clark Minor, Nicholas Miyake, Michael Nazario, Brian Ngo, Mikhail Proniushkin, Siddharth Rajgarhia, Christopher Rogers, Kayo Teramoto, David Tobin, Grace Wang, Wilson Wong, Holly Xu, Xiaohan Zhang
  • Patent number: 11020665
    Abstract: An information processing method and apparatus, a storage medium, and an electronic device are provided. The method includes: a custom model editing control in a graphical user interface is provided; in response to a first touch operation acting on the custom model editing control, a first virtual character is controlled to build a custom model building in at least one first building area in a first game scene; and in response to a trigger event indicating the end of custom model editing, the custom model building is saved as a custom model.
    Type: Grant
    Filed: September 29, 2018
    Date of Patent: June 1, 2021
    Assignee: NETEASE (HANGZHOU) NETWORK CO., LTD.
    Inventor: Changkun Wan
  • Patent number: 11023107
    Abstract: A virtual assistant ecosystem is presented. One can instantiate or construct a customized virtual assistant when needed by capturing a digital representation of one or more objects. A virtual assistant engine analyzes the digital representation to determine the nature or type of the objects present. The engine further obtains attributes for a desirable assistant based on the type of objects. Once the attributes are compiled the engine can then create the specific type of assistant required by the circumstances.
    Type: Grant
    Filed: February 14, 2020
    Date of Patent: June 1, 2021
    Assignee: Nant Holdings IP, LLC
    Inventor: Patrick Soon-Shiong
  • Patent number: 11012482
    Abstract: A spatially-aware multimedia router system includes at least one media server computer configured to receive and analyze incoming data comprising incoming multimedia streams from client devices, and adapt outbound multimedia streams for individual client devices based on the incoming data received from the client devices. The incoming multimedia streams include elements from within a virtual environment. The outbound multimedia streams are adapted for the individual client devices based on user priority data and spatial orientation data that describes spatial relationships between corresponding user graphical representations and sources of the incoming multimedia streams within the virtual environment.
    Type: Grant
    Filed: August 28, 2020
    Date of Patent: May 18, 2021
    Assignee: TMRW Foundation IP S. À R.L.
    Inventor: Cevat Yerli
  • Patent number: 11010826
    Abstract: A system and method implemented in a computer infrastructure having computer executable code, includes receiving one or more bids for at least one of an enhanced rendering quality and an enhanced rendering order of an object in a virtual universe (VU) and performing a bid resolution for the received one or more bids. Additionally, the method includes rendering one or more objects in the VU with the at least one of the enhanced rendering quality and the enhanced rendering order based on the bid resolution.
    Type: Grant
    Filed: May 25, 2018
    Date of Patent: May 18, 2021
    Assignee: INTERNATIONAL BUSINESS MACHINES CORPORATION
    Inventors: Kulvir S. Bhogal, Rick A. Hamilton, II, Brian M. O'Connell, Clifford A. Pickover
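    A toy bid-resolution sketch for the idea in patent 11010826's abstract: objects whose bids win receive enhanced rendering quality or an earlier place in the rendering order. The highest-bid-first rule and the data shapes are assumptions.
```python
def resolve_bids(bids):
    """bids: list of (object_id, bid_amount). Returns the render order, highest bid first."""
    ranked = sorted(bids, key=lambda b: b[1], reverse=True)
    return [obj for obj, _ in ranked]

order = resolve_bids([("billboard", 5.0), ("avatar_hat", 12.5), ("storefront", 9.0)])
print(order)  # ['avatar_hat', 'storefront', 'billboard'] -- rendered first / at higher quality
```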
  • Patent number: 11005846
    Abstract: Provided are a method and an apparatus for providing a trust-based media service. First user related data and second user related data are collected from a media service and other services, the trust is analyzed based on the collected data, trust information including the trust index of the first user or the second user is obtained, and the trust information is provided. The trust index is calculated based on a value of trustworthiness for a user obtained based on a first individual measurement index calculated based on the collected data and a value of relationship between the first user and the second user obtained based on a second individual measurement index calculated based on the collected data.
    Type: Grant
    Filed: December 7, 2018
    Date of Patent: May 11, 2021
    Assignee: ELECTRONICS AND TELECOMMUNICATIONS RESEARCH INSTITUTE
    Inventor: Young Seog Yoon
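    A sketch of one way the combined trust index described in patent 11005846's abstract could be shaped: a per-user trustworthiness value (from a first individual measurement index) and a relationship value between the two users (from a second index) feed a single score. The linear weighting here is purely an assumption.
```python
def trust_index(trustworthiness, relationship, w_trust=0.6, w_rel=0.4):
    """Both inputs expected in [0, 1]; returns a weighted trust index in [0, 1]."""
    return w_trust * trustworthiness + w_rel * relationship

print(trust_index(trustworthiness=0.8, relationship=0.5))  # 0.68
```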
  • Patent number: 11003256
    Abstract: An apparatus for generating a graphical representation of a content display in a virtual reality environment, comprising: controller circuitry configured to determine a position of a device relative to a virtual reality headset in the real world, and when the device is at a predetermined position in front of the virtual reality headset, the controller circuitry is further configured to: generate a graphical representation of a content display in the virtual reality environment, the position of the graphical representation in the virtual reality environment being determined by the real world position of the device in front of the virtual reality headset, wherein the size of the graphical representation of the content display is changed in dependence on the distance between the virtual reality headset and the device, wherein, in response to a user input, the controller circuitry is configured to: lock the position of the graphical representation of the content display in the virtual reality environment.
    Type: Grant
    Filed: September 12, 2017
    Date of Patent: May 11, 2021
    Assignee: Sony Corporation
    Inventors: Michael John Williams, Paul Edward Prayle, Michael Goldman, William Jack Leathers-Smith
  • Patent number: 10997697
    Abstract: Apparatus and methods for applying motion blur to overcapture content. In one embodiment, the motion blur is applied by selecting a number of frames of the captured image content for application of motion blur; selecting a plurality of pixel locations within the number of frames of the captured image content for the application of motion blur; applying motion blur to the captured image content in accordance with the selected number of frames and the selected plurality of pixel locations; and outputting the captured image content with the applied motion blur. In some implementations, motion blur is applied via implementation of a virtualized neutral density filter. Computerized devices and computer-readable apparatus for the application of motion blur are also disclosed.
    Type: Grant
    Filed: December 28, 2018
    Date of Patent: May 4, 2021
    Assignee: GoPro, Inc.
    Inventor: David A. Newman
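    An illustrative sketch of the frame-averaging reading of patent 10997697's abstract (the "virtualized neutral density filter" effect): a selected window of frames is averaged, and the average replaces the content only at the selected pixel locations. Array shapes and the boolean mask are assumptions.
```python
import numpy as np

def apply_motion_blur(frames, frame_indices, pixel_mask):
    """frames: (N, H, W) array; frame_indices: frames to blend;
    pixel_mask: (H, W) boolean mask of pixels that receive blur."""
    blurred = frames[frame_indices].mean(axis=0)   # temporal average of the selected frames
    out = frames[frame_indices[-1]].copy()         # start from the latest selected frame
    out[pixel_mask] = blurred[pixel_mask]          # apply blur only at selected pixel locations
    return out

frames = np.random.rand(8, 4, 4)
mask = np.zeros((4, 4), dtype=bool)
mask[1:3, 1:3] = True
result = apply_motion_blur(frames, frame_indices=[4, 5, 6, 7], pixel_mask=mask)
```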
  • Patent number: 10985938
    Abstract: A project team identification (PTI) tool utilizes media components and sensors installed throughout a smart building, to detect individual persons and groups of people gathered together within the smart building. After detecting the people that are present within the smart building, the PTI tool references employee profile information to identify the detected people. The PTI tool is further configured to predict a project team the identified people belong to, as well as one or more projects associated with the predicted project teams. The PTI tool utilizes the advanced technology offered by the smart building to provide a unique solution for seamlessly identifying a project team of people meeting within the smart building.
    Type: Grant
    Filed: January 9, 2018
    Date of Patent: April 20, 2021
    Assignee: ACCENTURE GLOBAL SOLUTIONS LIMITED
    Inventor: Georgios Krasadakis
  • Patent number: 10983594
    Abstract: Systems, apparatuses and methods may provide a way to enhance an augmented reality (AR) and/or virtual reality (VR) user experience with environmental information captured from sensors located in one or more physical environments. More particularly, systems, apparatuses and methods may provide a way to track, by an eye tracker sensor, a gaze of a user, and capture, by the sensors, environmental information. The systems, apparatuses and methods may render feedback, by one or more feedback devices or display device, for a portion of the environmental information based on the gaze of the user.
    Type: Grant
    Filed: July 18, 2019
    Date of Patent: April 20, 2021
    Assignee: Intel Corporation
    Inventors: Altug Koker, Michael Apodaca, Kai Xiao, Chandrasekaran Sakthivel, Jeffery S. Boles, Adam T. Lake, James M. Holland, Pattabhiraman K, Sayan Lahiri, Radhakrishnan Venkataraman, Kamal Sinha, Ankur N. Shah, Deepak S. Vembar, Abhishek R. Appu, Joydeep Ray, Elmoustapha Ould-Ahmed-Vall
  • Patent number: 10977359
    Abstract: A system includes a processor and machine readable instructions stored on a tangible machine readable medium and executable by the processor, for a computer program, configured to allow one or more accounts of an enterprise to access the computer program before the enterprise purchases and manages the computer program and to allow the computer program to implement, after the enterprise purchases and manages the computer program, one or more policies of the enterprise regarding use of the computer program without modifying the computer program.
    Type: Grant
    Filed: June 30, 2017
    Date of Patent: April 13, 2021
    Assignee: MICROSOFT TECHNOLOGY LICENSING, LLC
    Inventors: Saeed Javed Akhter, Krassimir Emilov Karamfilov, Yavor Vesselinov Angelov
  • Patent number: 10969924
    Abstract: Provided is an information processing apparatus including: a first information acquisition unit configured to acquire first information indicating behavior of at least one user; a second information acquisition unit configured to acquire second information on the at least one user, the second information being different from the first information; and a display control unit configured to display, in a display unit, a user object which is configured based on the first information and represents the corresponding at least one user and a virtual space which is configured based on the second information and in which the user object is arranged.
    Type: Grant
    Filed: January 8, 2014
    Date of Patent: April 6, 2021
    Assignees: SONY CORPORATION, SO-NET CORPORATION
    Inventors: Masatomo Kurata, Hideyuki Ono, Sota Matsuzawa, Akikazu Takeuchi, Takayoshi Muramatsu
  • Patent number: 10949671
    Abstract: An augmented reality system according to the present invention comprises a mobile terminal which, in displaying a 3D virtual image on a display, displays a dotted guide along the boundary of characters displayed on the display and when handwriting is detected along the dotted guide, recognizes the characters and displays a virtual object corresponding to the content of the characters, wherein, if the virtual object is touched, a pre-configured motion of the virtual object corresponding to the touched area is reproduced.
    Type: Grant
    Filed: August 3, 2019
    Date of Patent: March 16, 2021
    Assignee: VIRNECT INC.
    Inventor: Tae Jin Ha
  • Patent number: 10928991
    Abstract: A system and method for facilitating user interactions with a virtual space through a graphical chat interface is disclosed. One or more potential inputs to the virtual space and/or virtual space status information may be determined dynamically for a user participating in a chat session through a graphical chat interface. An activity notification may be generated for the user based on the determined potential inputs and/or the virtual space status information. The generated activity notification may comprise a graphical representation for the notification as well as representation information for one or more controls facilitating the user to provide inputs requested by the activity notification through the graphical chat interface. User acceptance to the activity notification via the graphical chat interface may be received. One or more activity commands may be generated based on the received user acceptance and executed in the virtual space.
    Type: Grant
    Filed: July 31, 2017
    Date of Patent: February 23, 2021
    Assignee: Kabam, Inc.
    Inventors: Michael C. Caldarone, Kellen Christopher Smalley, Matthew Curtis, James Koh
  • Patent number: 10929980
    Abstract: Fiducial markers are printed patterns detected by algorithms in imagery from image sensors for applications such as automated processes and augmented reality graphics. The present invention sets forth extensions and improvements to detection technology to achieve improved performance, and discloses applications of fiducial markers including multi-camera systems, remote control devices, augmented reality applications for mobile devices, helmet tracking, and weather stations.
    Type: Grant
    Filed: November 25, 2019
    Date of Patent: February 23, 2021
    Assignee: Millennium Three Technologies, Inc.
    Inventor: Mark Fiala
  • Patent number: 10931728
    Abstract: A system and method provides a video chat capability where the video portion of the chat is initially impaired, but gets progressively clearer, either as time elapses, or as the users speak or participate with relevant information.
    Type: Grant
    Filed: June 25, 2018
    Date of Patent: February 23, 2021
    Assignee: Zoosk, Inc.
    Inventors: Eric R. Barnett, Behzad Behrouzi, Charles E. Gotlieb
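    A toy sketch of the progressive-reveal behaviour described in patent 10931728's abstract: the video impairment (modelled here as a blur level) decays both as time elapses and as the users contribute relevant speech. The starting level and decay rates are made-up parameters.
```python
def blur_level(seconds_elapsed, relevant_utterances,
               start=10.0, time_rate=0.05, speech_rate=0.5):
    """Return the current impairment level; 0.0 means the video is fully clear."""
    level = start - time_rate * seconds_elapsed - speech_rate * relevant_utterances
    return max(level, 0.0)

print(blur_level(seconds_elapsed=60, relevant_utterances=4))  # 10 - 3 - 2 = 5.0
```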
  • Patent number: 10917613
    Abstract: Embodiments of the present invention describe virtual object placement in augmented reality environments. Embodiments describe determining a physical meeting room structure based on the meeting data and user data, and identifying, by an augmented reality device, a physical room layout in which a first user is located. Embodiments describe determining an augmented room layout for the first user based on the identified physical room layout, in which determining the augmented room layout comprises: executing an optimization algorithm, and computing an optimization score for each iteration of potential room layouts produced by the optimization algorithm. Additionally, embodiments describe generating an augmented reality representation of a meeting environment tailored to the physical room layout based on the physical meeting room structure and the augmented room layout, and displaying to the first user the augmented reality representation of the meeting environment.
    Type: Grant
    Filed: January 3, 2020
    Date of Patent: February 9, 2021
    Assignee: International Business Machines Corporation
    Inventors: Giacomo Giuseppe Chiarella, Daniel Thomas Cunnington, John Jesse Wood, Eunjin Lee
  • Patent number: 10904482
    Abstract: A method and an apparatus for generating a video file, and a storage medium are disclosed in embodiments of this disclosure. The method includes: starting an image acquisition apparatus to acquire user image frames in real time, and starting a video decoding component to decode a predetermined source video, when a simulated video call request is received; synchronously obtaining a user image frame currently acquired by the image acquisition apparatus and a source video image frame from the source video currently decoded by the video decoding component; synthesizing the synchronously obtained user image frame with the source video image frame to obtain a simulated video call image frame; and displaying the simulated video call image frame in a simulated video call window, and generating a video file associated with the simulated video call according to the obtained simulated video call image frame.
    Type: Grant
    Filed: August 31, 2020
    Date of Patent: January 26, 2021
    Assignee: Tencent Technology (Shenzhen) Company Limited
    Inventor: Zi Wang
  • Patent number: 10902158
    Abstract: An image of a building may be received from a client device, and the building in the image may be geo-located and identified. Based on the geo-location, a set of attributes for the building may be determined. A user query may be received from the client device, and based on a property of the user query, a unit in the building may be identified. A daylight livability index (DLLI) for the unit may be determined, based on the identified unit and the set of attributes for the building, and the user may be notified of the DLLI.
    Type: Grant
    Filed: November 16, 2017
    Date of Patent: January 26, 2021
    Assignee: International Business Machines Corporation
    Inventors: Su Liu, Howard N. Anglin, Shi Kun Li, Cheng Xu
  • Patent number: 10863899
    Abstract: Systems and methods for locating the center of a lens in the eye are provided. These systems and methods can be used to improve the effectiveness of a wide variety of different ophthalmic procedures. In one embodiment, a system and method is provided for determining the center of the eye lens by illuminating the eye with a set of light sources, and measuring the resulting first image of the light sources reflected from an anterior surface of the lens and the resulting second image of the light sources reflected from a posterior surface of the lens. The location of the center of the lens of the eye is then determined using the measurements. In one embodiment, the center of the lens is determined by interpolating between the measurements of the images. Such a determination provides an accurate location of the geometric center of the lens.
    Type: Grant
    Filed: July 23, 2018
    Date of Patent: December 15, 2020
    Assignee: AMO Development, LLC
    Inventors: Zsolt Bor, Anthony Dennison, Michael Campos, Peter Patrick De Guzman
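    A minimal sketch of the interpolation step described in patent 10863899's abstract: given the measured centers of the light-source reflections from the anterior and posterior lens surfaces, the lens center is estimated by interpolating between the two. The equal weighting is an assumption, not the patent's calibration.
```python
def interpolate_lens_center(anterior_xy, posterior_xy, weight=0.5):
    """Interpolate between the anterior- and posterior-surface reflection centers."""
    return tuple((1 - weight) * a + weight * p
                 for a, p in zip(anterior_xy, posterior_xy))

print(interpolate_lens_center((1.20, 0.40), (1.60, 0.80)))  # approximately (1.4, 0.6)
```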
  • Patent number: 10866929
    Abstract: Provided is a group-based communication interface configured to efficiently share files among a plurality of group-based communication feeds. Each file share may initiate a subsidiary group-based communication feed to organize and manage discussions regarding shared files. The subsidiary group-based communication feed is unique to the particular file share. Subsequent file shares of the file initiate additional subsidiary group-based communication feeds, such that each discussion stemming from a file share does not overlap with another discussion regarding a different file share of the same file.
    Type: Grant
    Filed: October 31, 2018
    Date of Patent: December 15, 2020
    Assignee: Slack Technologies, Inc.
    Inventors: Milo Watanabe, Ayesha Bose, Bernadette Le, Faisal Yaqub, Fayaz Ashraf, Marcel Weekes, Wayne Fan, Adam Cole, Jordan Williams, Patrick Kane, Oluwatosin Afolabi
  • Patent number: 10860705
    Abstract: A human challenge can be presented in an augmented reality user interface. A user can use a camera of a smart device to capture a video stream of the user's surroundings, and the smart device can superimpose a representation of an object on the image or video stream being captured by the smart device. The smart device can display in the user interface the image or video stream and the object superimposed thereon. The user will be prompted to perform a task with respect to one or more of these augmented reality objects displayed in the user interface. If the user properly performs the task, e.g., selects the correct augmented reality objects, the application will validate the user as a person.
    Type: Grant
    Filed: May 16, 2019
    Date of Patent: December 8, 2020
    Assignee: CAPITAL ONE SERVICES, LLC
    Inventor: Jayaraman Ganeshmani
  • Patent number: 10854007
    Abstract: Embodiments relate to supplementing a mixed reality system with information from a space model. The space model is a hierarchical or tree model of a physical space, where nodes represent physical places in the physical space and a parent-child relationship between nodes in the tree indicates a physical containment relationship for physical places represented by the nodes. The space model models containment relationships (e.g., building-floor-room) and does not necessarily include a two or three dimensional map of the physical place. Some of the nodes of the space model include representations of sensors and store measures therefrom. The mixed reality system includes a three-dimensional model possibly modeling part of the physical space. The mixed reality system renders views of the three-dimensional model according to the sensor measures stored in the representations.
    Type: Grant
    Filed: December 3, 2018
    Date of Patent: December 1, 2020
    Assignee: MICROSOFT TECHNOLOGY LICENSING, LLC
    Inventors: Stefan Krzysztof Gawrys, Patrick James Gorman
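    A sketch of the hierarchical space model described in patent 10854007's abstract: nodes stand for physical places, parent-child links mean physical containment (building, floor, room), and some nodes carry sensor measures that a mixed-reality view can look up. Class and field names are assumptions.
```python
class PlaceNode:
    def __init__(self, name):
        self.name = name
        self.children = []   # contained places (parent-child = physical containment)
        self.sensors = {}    # sensor id -> latest stored measure

    def add_child(self, child):
        self.children.append(child)
        return child

    def find(self, name):
        """Depth-first search for a place by name."""
        if self.name == name:
            return self
        for child in self.children:
            hit = child.find(name)
            if hit:
                return hit
        return None

building = PlaceNode("building-1")
floor2 = building.add_child(PlaceNode("floor-2"))
room = floor2.add_child(PlaceNode("room-204"))
room.sensors["temperature"] = 21.5            # stored sensor measure
print(building.find("room-204").sensors)      # {'temperature': 21.5}
```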
  • Patent number: 10845960
    Abstract: A method and system for dynamically displaying icons on a mobile terminal. The method comprises: A: acquiring a spatial position status of a current mobile terminal, and obtaining, according to the spatial position status, the included angle between the mobile terminal and the horizontal plane; B: acquiring a spatial position status of a display interface of the current mobile terminal, and obtaining, according to the spatial position status of the display interface, the included angle between the display interface and the horizontal plane; and C: controlling, according to the included angle between the mobile terminal and the horizontal plane and the included angle between the display interface and the horizontal plane, the angle of inclination of icons in the display interface with respect to the display interface to be the same as the included angle between the mobile terminal and the horizontal plane.
    Type: Grant
    Filed: September 5, 2017
    Date of Patent: November 24, 2020
    Assignee: JRD COMMUNICATION (SHENZHEN) LTD.
    Inventor: Shuwei Huang
  • Patent number: 10846937
    Abstract: Systems and methods of rendering a three-dimensional (3D) virtual environment are disclosed. The system comprises a central processing device, a plurality of user devices in data communication with the central processing device, a plurality of application servers in data communication with the central processing device, and software executing on the central processing device. The software creates and renders a 3D virtual environment, receives user data from each of the plurality of user devices, renders the user data received from each of the user devices in the 3D virtual environment, receives application data from each of the application servers, renders the application data received from each of the application servers in the 3D virtual environment, and outputs the rendered 3D virtual environment to each of the user devices. The 3D virtual environment serves as a direct user interface with the Internet by allowing users to visually navigate the world wide web.
    Type: Grant
    Filed: September 26, 2019
    Date of Patent: November 24, 2020
    Assignee: Roam Holdings, LLC
    Inventors: Joseph D. Rogers, Marc E. Rogers
  • Patent number: 10825223
    Abstract: A mixed reality system including a display and camera is configured to receive video of a physical scene from the camera and construct a 3D model of the physical scene based on the video. Spatial sensing provides pose (position and orientation) updates corresponding to a physical pose of the display. First user inputs allow a user to define an input path. The input path may be displayed as a graphic path or line. The input path is mapped to a 3D path in the 3D model. Second user inputs define animation features in association with the 3D path. Animation features include an object (e.g., a character), animation commands, etc. The animation commands may be manually mapped to points on the 3D path and executed during an animation of the object guided by the 3D path.
    Type: Grant
    Filed: May 31, 2018
    Date of Patent: November 3, 2020
    Assignee: MICROSOFT TECHNOLOGY LICENSING, LLC
    Inventors: Richard Carl Roesler, Chuan Xia, John Alexander McElmurray
  • Patent number: 10812780
    Abstract: An image processing method includes presetting an image processing model and performing the following processing based on the model when a first three-dimensional effect plane image is displayed in response to an operation of a user. The method further includes mapping the first three-dimensional effect plane image to the projection plane, determining, according to the three-dimensional position relationship among the viewpoint, the projection plane, and the view window and the size of the view window, a first visual area obtained by projection onto the projection plane through the viewpoint and the view window, and clipping a first image in the first visual area, and displaying the first image.
    Type: Grant
    Filed: April 24, 2018
    Date of Patent: October 20, 2020
    Assignee: TENCENT TECHNOLOGY (SHENZHEN) COMPANY LIMITED
    Inventors: Javin Zhong, Fay Cheng, Jun Da Bei
  • Patent number: 10802287
    Abstract: A head-mounted display (HMD) with a rolling illumination display panel can dynamically target a render time for a given frame based on eye tracking. Using this approach, re-projection adjustments are minimized at the location of the display(s) where the user is looking, which mitigates unwanted, re-projection-based visual artifacts in that “region of interest.” For example, logic of the HMD may predict a location on the display panel where a user will be looking during an illumination time period for a given frame, determine a time, within that illumination time period, at which an individual subset of the pixels that corresponds to the predicted location will be illuminated, predict a pose that the HMD will be in at the determined time, and send pose data indicative of this predicted pose to an application for rendering the frame.
    Type: Grant
    Filed: March 27, 2019
    Date of Patent: October 13, 2020
    Assignee: Valve Corporation
    Inventor: Jeremy Adam Selan
  • Patent number: 10762219
    Abstract: Optimizations are provided to control access to virtual content included within a three-dimensional (3D) mesh. Specifically, after the 3D mesh is accessed, objects represented by the 3D mesh are segmented so that they are distinguishable from one another. Once segmented, a permission is assigned to each object or even to groups of objects. For instance, all of the objects that are associated with a particular sub-space (e.g., a bedroom or a living room) may be assigned the same permissions. By assigning permissions to objects, it is possible to control which requesting entities will have access to the objects as well as how much access each of those requesting entities is afforded.
    Type: Grant
    Filed: May 18, 2018
    Date of Patent: September 1, 2020
    Assignee: MICROSOFT TECHNOLOGY LICENSING, LLC
    Inventors: Yuri Pekelny, Michael Bleyer, Raymond Kirk Price
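    A hypothetical sketch of per-object access control over a segmented 3D mesh, following patent 10762219's abstract: each segmented object (or every object in a sub-space such as a bedroom) gets a permission entry, and access checks consult it before content is returned. The permission levels and grouping helper are assumptions.
```python
from enum import Enum

class Access(Enum):
    NONE = 0
    VIEW = 1
    EDIT = 2

permissions = {}   # object id -> {requesting entity -> Access}

def assign_group(object_ids, entity, level):
    """Give one entity the same access to every object in a sub-space (e.g. 'bedroom')."""
    for oid in object_ids:
        permissions.setdefault(oid, {})[entity] = level

def can_view(entity, object_id):
    """Access check consulted before returning mesh content for an object."""
    return permissions.get(object_id, {}).get(entity, Access.NONE).value >= Access.VIEW.value

assign_group(["bed", "lamp", "dresser"], entity="guest_app", level=Access.VIEW)
print(can_view("guest_app", "lamp"), can_view("guest_app", "tv"))  # True False
```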
  • Patent number: 10747389
    Abstract: For display of a 3D model (20) in virtual reality (VR), merely converting the CAD display from the 2D screen to a 3D image may not sufficiently reduce the information clutter. To provide metadata for the 3D CAD model (20) with less occlusion or clutter, a separate space (32) is generated in the virtual reality environment. Metadata and information about the 3D CAD model (20) and/or selected part (36) of the 3D CAD model (20) is displayed (58) in the separate space (32). The user may view the 3D CAD model (20) in one space (30), the metadata with or without a representation of a part in another space (32), or combinations thereof.
    Type: Grant
    Filed: June 9, 2017
    Date of Patent: August 18, 2020
    Assignee: Siemens Aktiengesellschaft
    Inventors: Mareike Kritzler, Matthias Mayr
  • Patent number: 10739937
    Abstract: Methods, including computer programs encoded on a computer storage medium, for controlling a 3D modeling application based on natural user input received at middleware. In one aspect, a method includes: receiving data indicating that an application operating at the application layer is interpreted as spatial data about one or more entities at one or more corresponding locations within an environment context from one or more participating systems; receiving, through an interface in communication with the one or more systems that provide spatial data, multiple sets of spatial data provided by or derived from the one or more participating systems that are generated while the application manages one or more interactions in the environment context; determining an adjustment to apply to the environment context.
    Type: Grant
    Filed: June 15, 2018
    Date of Patent: August 11, 2020
    Assignee: Abantech LLC
    Inventor: Gregory Emmanuel Melencio
  • Patent number: 10742519
    Abstract: This disclosure relates generally to performing user segmentation, and more particularly to predicting attribute values for user segmentation. In one embodiment, the method includes segregating a user with an incomplete attribute value and a user with complete attribute values for an attribute into a first group and a second group, respectively, and computing a prior probability for each suggestive attribute value, identified for the incomplete attribute value, based on the number of users in the second group having the suggestive attribute value as the attribute value for the attribute. The method further includes computing a likelihood for each suggestive attribute value based on similarity of the attribute values of the user of the first group with users of the second group, computing a posterior probability for each suggestive attribute value based on the prior probability and the likelihood, and selecting the suggestive attribute value with the highest posterior probability as the attribute value for the incomplete attribute value of the user.
    Type: Grant
    Filed: March 7, 2016
    Date of Patent: August 11, 2020
    Assignee: Tata Consultancy Services Limited
    Inventors: Akshay Kumar Singhal, Mohan Raj Velayudhan Kumar, Sandip Jadhav, Rahul Ramesh Kelkar, Harrick Mayank Vin
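    A sketch of the posterior computation that patent 10742519's abstract walks through, using made-up data structures: the prior for each candidate ("suggestive") value is its frequency among users with complete attributes, the likelihood is the average similarity of the incomplete user to the complete users holding that value, and the value with the highest prior times likelihood is selected. The similarity measure is an assumption.
```python
from collections import Counter

def predict_missing_value(incomplete_user, complete_users, attribute, similarity):
    """Pick the suggestive value with the highest prior * likelihood (posterior)."""
    values = [u[attribute] for u in complete_users]
    counts = Counter(values)
    total = len(values)
    best_value, best_posterior = None, -1.0
    for value, count in counts.items():
        prior = count / total                                    # frequency in the complete group
        holders = [u for u in complete_users if u[attribute] == value]
        likelihood = sum(similarity(incomplete_user, u) for u in holders) / len(holders)
        posterior = prior * likelihood
        if posterior > best_posterior:
            best_value, best_posterior = value, posterior
    return best_value

def similarity(a, b):
    """Fraction of known shared attributes with matching values (assumed measure)."""
    shared = [k for k in a if k in b and a[k] is not None]
    if not shared:
        return 0.0
    return sum(a[k] == b[k] for k in shared) / len(shared)

complete = [{"city": "Pune", "plan": "gold"}, {"city": "Pune", "plan": "gold"},
            {"city": "Delhi", "plan": "silver"}]
print(predict_missing_value({"city": "Pune", "plan": None}, complete, "plan", similarity))  # gold
```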
  • Patent number: 10732797
    Abstract: Views of a virtual environment can be displayed on mobile devices in a real-world environment simultaneously for multiple users. The users can operate selection devices in the real-world environment that interact with objects in the virtual environment. Virtual characters and objects can be moved and manipulated using selection shapes. A graphical interface can be instantiated and rendered as part of the virtual environment. Virtual cameras and screens can also be instantiated to create storyboards, backdrops, and animated sequences of the virtual environment. These immersive experiences with the virtual environment can be used to generate content for users and for feature films.
    Type: Grant
    Filed: October 11, 2017
    Date of Patent: August 4, 2020
    Assignee: LUCASFILM ENTERTAINMENT COMPANY LTD.
    Inventors: Jose Perez, III, Peter Dollar, Barak Moshe
  • Patent number: 10712814
    Abstract: Methods, systems, and apparatus for performing virtual reality simulations using virtual reality systems. In some aspects a method includes the actions of logging user actions in a virtual reality system, wherein the user actions include one or more of (i) a path traveled by a user in the virtual reality system, or (ii) user interactions with objects in the virtual reality system; aggregating logged actions over a first user and a second user; and deriving modifications to the virtual reality system based at least in part on the aggregated logged actions. The modifications to the VR system can include modifying at least one of (i) appearance of objects shown in the VR system, (ii) floor plan of the VR system, and (iii) location of objects shown in the VR system.
    Type: Grant
    Filed: April 19, 2018
    Date of Patent: July 14, 2020
    Assignee: Accenture Global Solutions Limited
    Inventors: Sunny Webb, Matthew Thomas Short, Manish Mehta, Robert Dooley, Grace T. Cheng, Alpana Dubey
  • Patent number: 10692327
    Abstract: Disclosed is a gaming system including a display and a game controller, the game controller being arranged to identify a player from received player identification data, receive associated player game data of the identified player, and control the display of one or more characteristics of, or associated with, an avatar such that the player is provided with a graphical representation of the player game data. A method of gaming is also disclosed.
    Type: Grant
    Filed: October 29, 2018
    Date of Patent: June 23, 2020
    Assignee: Aristocrat Technologies Australia Pty Limited
    Inventors: Thomas Samuel Barbalet, Peter Jude Mastera, Lattamore Osburn, Mark Hripko, Steven Rood
  • Patent number: 10691303
    Abstract: A suite of tools for creating an Immersive Virtual Environment (IVE) comprises multiple software applications having multiple respective user interfaces that can each interface with a centralized database to design and develop immersive content for a game-based learning product in an IVE. According to some embodiments, this data-driven IVE production process may utilize tools having a “picker”-type selection in which selection menus are populated directly from database data, helping reduce data duplication, simplify tool design, and streamline the IVE production process.
    Type: Grant
    Filed: September 11, 2018
    Date of Patent: June 23, 2020
    Assignee: Cubic Corporation
    Inventors: Kathleen Kershaw, Brian Hinken, Nicholas Kemner, Katelyn Procci, Andre Balta, Etienne Magnin, Shawn Fiore
  • Patent number: 10679421
    Abstract: An interactive spa that may allow users to experience what it would be like to use a fully operational spa. The spa may contain a cutaway, or a void, on one of the sides of the spa to allow users to enter and exit the spa without needing to climb over the sides. The spa may also have windows that allow users to view certain internal aspects of the spa for display purposes. The spa may also utilize one or more virtual reality sensors to allow users to experience, to an extent, what it would be like to use an operational spa. The spa may include a monitor stand that may have one or more touch-screen monitors, which may allow users to browse through information about the spa or view the virtual reality content that users are seeing while within the spa.
    Type: Grant
    Filed: November 13, 2018
    Date of Patent: June 9, 2020
    Assignee: Bullfrog International, LC
    Inventors: Eric Hales, Todd Anderson, Samson Madsen
  • Patent number: 10646285
    Abstract: Surgical navigation system: 3D display system with see-through visor; a tracking system for real-time tracking of: surgeon's head, see-through visor, patient anatomy and surgical instrument to provide current position and orientation data; a source of an operative plan, a patient anatomy data and a virtual surgical instrument model; a surgical navigation image generator to generate a surgical navigation image with a three-dimensional image representing simultaneously a virtual image of the surgical instrument corresponding to the current position and orientation of the surgical instrument and a virtual image of the surgical instrument, the see-through visor, the patient anatomy and the surgical instrument; the 3D display system configured to show the surgical navigation image at the see-through visor, such that an augmented reality image collocated with the patient anatomy in the surgical field underneath the see-through visor is visible to a viewer looking from above the see-through visor towards the surgical field.
    Type: Grant
    Filed: August 9, 2018
    Date of Patent: May 12, 2020
    Assignee: HOLO SURGICAL INC.
    Inventors: Kris B. Siemionow, Cristian J. Luciano
  • Patent number: 10649724
    Abstract: Examples of interface systems and methods for voice-based interaction in one or more virtual areas that define respective persistent virtual communication contexts are described. These examples enable communicants to use voice commands to, for example, search for communication opportunities in the different virtual communication contexts, enter specific ones of the virtual communication contexts, and bring other communicants into specific ones of the virtual communication contexts. In this way, these examples allow communicants to exploit the communication opportunities that are available in virtual areas, even when hands-based or visual methods of interfacing with the virtual areas are not available.
    Type: Grant
    Filed: January 15, 2016
    Date of Patent: May 12, 2020
    Assignee: Sococo, Inc.
    Inventor: David Van Wie