Virtual 3D Environment Patents (Class 715/757)
-
Patent number: 10983594
Abstract: Systems, apparatuses and methods may provide a way to enhance an augmented reality (AR) and/or virtual reality (VR) user experience with environmental information captured from sensors located in one or more physical environments. More particularly, systems, apparatuses and methods may provide a way to track, by an eye tracker sensor, a gaze of a user, and capture, by the sensors, environmental information. The systems, apparatuses and methods may render feedback, by one or more feedback devices or a display device, for a portion of the environmental information based on the gaze of the user.
Type: Grant
Filed: July 18, 2019
Date of Patent: April 20, 2021
Assignee: Intel Corporation
Inventors: Altug Koker, Michael Apodaca, Kai Xiao, Chandrasekaran Sakthivel, Jeffery S. Boles, Adam T. Lake, James M. Holland, Pattabhiraman K, Sayan Lahiri, Radhakrishnan Venkataraman, Kamal Sinha, Ankur N. Shah, Deepak S. Vembar, Abhishek R. Appu, Joydeep Ray, Elmoustapha Ould-Ahmed-Vall
-
Patent number: 10985938
Abstract: A project team identification (PTI) tool utilizes media components and sensors installed throughout a smart building to detect individual persons and groups of people gathered together within the smart building. After detecting the people that are present within the smart building, the PTI tool references employee profile information to identify the detected people. The PTI tool is further configured to predict a project team the identified people belong to, as well as one or more projects associated with the predicted project teams. The PTI tool utilizes the advanced technology offered by the smart building to provide a unique solution for seamlessly identifying a project team of people meeting within the smart building.
Type: Grant
Filed: January 9, 2018
Date of Patent: April 20, 2021
Assignee: ACCENTURE GLOBAL SOLUTIONS LIMITED
Inventor: Georgios Krasadakis
-
Patent number: 10977359
Abstract: A system includes a processor and machine-readable instructions stored on a tangible machine-readable medium and executable by the processor, for a computer program, configured to allow one or more accounts of an enterprise to access the computer program before the enterprise purchases and manages the computer program, and to allow the computer program to implement, after the enterprise purchases and manages the computer program, one or more policies of the enterprise regarding use of the computer program without modifying the computer program.
Type: Grant
Filed: June 30, 2017
Date of Patent: April 13, 2021
Assignee: MICROSOFT TECHNOLOGY LICENSING, LLC
Inventors: Saeed Javed Akhter, Krassimir Emilov Karamfilov, Yavor Vesselinov Angelov
-
Patent number: 10969924
Abstract: Provided is an information processing apparatus including: a first information acquisition unit configured to acquire first information indicating behavior of at least one user; a second information acquisition unit configured to acquire second information on the at least one user, the second information being different from the first information; and a display control unit configured to display, in a display unit, a user object which is configured based on the first information and represents the corresponding at least one user, and a virtual space which is configured based on the second information and in which the user object is arranged.
Type: Grant
Filed: January 8, 2014
Date of Patent: April 6, 2021
Assignees: SONY CORPORATION, SO-NET CORPORATION
Inventors: Masatomo Kurata, Hideyuki Ono, Sota Matsuzawa, Akikazu Takeuchi, Takayoshi Muramatsu
-
Patent number: 10949671
Abstract: An augmented reality system according to the present invention comprises a mobile terminal which, in displaying a 3D virtual image on a display, displays a dotted guide along the boundary of characters displayed on the display and, when handwriting is detected along the dotted guide, recognizes the characters and displays a virtual object corresponding to the content of the characters, wherein, if the virtual object is touched, a pre-configured motion of the virtual object corresponding to the touched area is reproduced.
Type: Grant
Filed: August 3, 2019
Date of Patent: March 16, 2021
Assignee: VIRNECT INC.
Inventor: Tae Jin Ha
-
Patent number: 10928991
Abstract: A system and method for facilitating user interactions with a virtual space through a graphical chat interface is disclosed. One or more potential inputs to the virtual space and/or virtual space status information may be determined dynamically for a user participating in a chat session through a graphical chat interface. An activity notification may be generated for the user based on the determined potential inputs and/or the virtual space status information. The generated activity notification may comprise a graphical representation of the notification, as well as representation information for one or more controls facilitating the user to provide inputs requested by the activity notification through the graphical chat interface. User acceptance of the activity notification via the graphical chat interface may be received. One or more activity commands may be generated based on the received user acceptance and executed in the virtual space.
Type: Grant
Filed: July 31, 2017
Date of Patent: February 23, 2021
Assignee: Kabam, Inc.
Inventors: Michael C. Caldarone, Kellen Christopher Smalley, Matthew Curtis, James Koh
-
Patent number: 10931728
Abstract: A system and method provide a video chat capability where the video portion of the chat is initially impaired, but gets progressively clearer, either as time elapses or as the users speak or participate with relevant information.
Type: Grant
Filed: June 25, 2018
Date of Patent: February 23, 2021
Assignee: Zoosk, Inc.
Inventors: Eric R. Barnett, Behzad Behrouzi, Charles E. Gotlieb
-
Patent number: 10929980
Abstract: Fiducial markers are printed patterns detected by algorithms in imagery from image sensors for applications such as automated processes and augmented reality graphics. The present invention sets forth extensions and improvements to detection technology to achieve improved performance, and discloses applications of fiducial markers including multi-camera systems, remote control devices, augmented reality applications for mobile devices, helmet tracking, and weather stations.
Type: Grant
Filed: November 25, 2019
Date of Patent: February 23, 2021
Assignee: Millennium Three Technologies, Inc.
Inventor: Mark Fiala
-
Patent number: 10917613
Abstract: Embodiments of the present invention describe virtual object placement in augmented reality environments. Embodiments describe determining a physical meeting room structure based on the meeting data and user data, and identifying, by an augmented reality device, a physical room layout in which a first user is located. Embodiments describe determining an augmented room layout for the first user based on the identified physical room layout, in which determining the augmented room layout comprises: executing an optimization algorithm, and computing an optimization score for each iteration of potential room layouts produced by the optimization algorithm. Additionally, embodiments describe generating an augmented reality representation of a meeting environment tailored to the physical room layout based on the physical meeting room structure and the augmented room layout, and displaying to the first user the augmented reality representation of the meeting environment.
Type: Grant
Filed: January 3, 2020
Date of Patent: February 9, 2021
Assignee: International Business Machines Corporation
Inventors: Giacomo Giuseppe Chiarella, Daniel Thomas Cunnington, John Jesse Wood, Eunjin Lee
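The optimization step in this abstract (generate candidate room layouts, compute a score for each iteration, keep the best) can be sketched roughly as follows. The scoring criteria, data shapes, and names here are hypothetical; the abstract does not specify them:

```python
def layout_score(layout, physical_room, meeting):
    """Toy score: reward layouts whose seats fit inside the physical room
    and that seat every attendee. (Hypothetical criteria.)"""
    fits = all(x < physical_room["width"] and y < physical_room["depth"]
               for x, y in layout["seats"])
    seats_ok = len(layout["seats"]) >= meeting["attendees"]
    return (2 if fits else 0) + (1 if seats_ok else 0)

def best_layout(candidate_layouts, physical_room, meeting):
    """Score each candidate layout produced by the optimizer; keep the best."""
    return max(candidate_layouts,
               key=lambda l: layout_score(l, physical_room, meeting))

room = {"width": 5.0, "depth": 4.0}
meeting = {"attendees": 3}
candidates = [
    {"name": "A", "seats": [(1, 1), (2, 1)]},          # too few seats
    {"name": "B", "seats": [(1, 1), (2, 1), (3, 1)]},  # fits and seats all
    {"name": "C", "seats": [(1, 1), (2, 1), (6, 1)]},  # one seat outside room
]
print(best_layout(candidates, room, meeting)["name"])  # B
```

A real implementation would use a proper optimizer (e.g. simulated annealing over seat positions) rather than exhaustive scoring of a fixed candidate list, but the score-and-select loop is the same.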
-
Patent number: 10902158
Abstract: An image of a building may be received from a client device, and the building in the image may be geo-located and identified. Based on the geo-location, a set of attributes for the building may be determined. A user query may be received from the client device, and based on a property of the user query, a unit in the building may be identified. A daylight livability index (DLLI) for the unit may be determined based on the identified unit and the set of attributes for the building, and the user may be notified of the DLLI.
Type: Grant
Filed: November 16, 2017
Date of Patent: January 26, 2021
Assignee: International Business Machines Corporation
Inventors: Su Liu, Howard N. Anglin, Shi Kun Li, Cheng Xu
-
Patent number: 10904482
Abstract: A method and an apparatus for generating a video file, and a storage medium, are disclosed in embodiments of this disclosure. The method includes: starting an image acquisition apparatus to acquire user image frames in real time, and starting a video decoding component to decode a predetermined source video, when a simulated video call request is received; synchronously obtaining a user image frame currently acquired by the image acquisition apparatus and a source video image frame from the source video currently decoded by the video decoding component; synthesizing the synchronously obtained user image frame with the source video image frame to obtain a simulated video call image frame; and displaying the simulated video call image frame in a simulated video call window, and generating a video file associated with the simulated video call according to the obtained simulated video call image frame.
Type: Grant
Filed: August 31, 2020
Date of Patent: January 26, 2021
Assignee: Tencent Technology (Shenzhen) Company Limited
Inventor: Zi Wang
-
Patent number: 10866929
Abstract: Provided is a group-based communication interface configured to efficiently share files among a plurality of group-based communication feeds. Each file share may initiate a subsidiary group-based communication feed to organize and manage discussions regarding shared files. The subsidiary group-based communication feed is unique to the particular file share. Subsequent file shares of the file initiate additional subsidiary group-based communication feeds, such that each discussion stemming from a file share does not overlap with another discussion regarding a different file share of the same file.
Type: Grant
Filed: October 31, 2018
Date of Patent: December 15, 2020
Assignee: Slack Technologies, Inc.
Inventors: Milo Watanabe, Ayesha Bose, Bernadette Le, Faisal Yaqub, Fayaz Ashraf, Marcel Weekes, Wayne Fan, Adam Cole, Jordan Williams, Patrick Kane, Oluwatosin Afolabi
-
Patent number: 10863899
Abstract: Systems and methods for locating the center of a lens in the eye are provided. These systems and methods can be used to improve the effectiveness of a wide variety of different ophthalmic procedures. In one embodiment, a system and method is provided for determining the center of the eye lens by illuminating the eye with a set of light sources, and measuring the resulting first image of the light sources reflected from an anterior surface of the lens and the resulting second image of the light sources reflected from a posterior surface of the lens. The location of the center of the lens of the eye is then determined using the measurements. In one embodiment, the center of the lens is determined by interpolating between the measurements of the images. Such a determination provides an accurate location of the geometric center of the lens.
Type: Grant
Filed: July 23, 2018
Date of Patent: December 15, 2020
Assignee: AMO Development, LLC
Inventors: Zsolt Bor, Anthony Dennison, Michael Campos, Peter Patrick De Guzman
-
Patent number: 10860705
Abstract: A human challenge can be presented in an augmented reality user interface. A user can use a camera of a smart device to capture a video stream of the user's surroundings, and the smart device can superimpose a representation of an object on the image or video stream being captured by the smart device. The smart device can display in the user interface the image or video stream and the object superimposed thereon. The user will be prompted to perform a task with respect to one or more of these augmented reality objects displayed in the user interface. If the user properly performs the task, e.g., selects the correct augmented reality objects, the application will validate the user as a person.
Type: Grant
Filed: May 16, 2019
Date of Patent: December 8, 2020
Assignee: CAPITAL ONE SERVICES, LLC
Inventor: Jayaraman Ganeshmani
-
Patent number: 10854007
Abstract: Embodiments relate to supplementing a mixed reality system with information from a space model. The space model is a hierarchical or tree model of a physical space, where nodes represent physical places in the physical space and a parent-child relationship between nodes in the tree indicates a physical containment relationship for the physical places represented by the nodes. The space model models containment relationships (e.g., building-floor-room) and does not necessarily include a two- or three-dimensional map of the physical place. Some of the nodes of the space model include representations of sensors and store measures therefrom. The mixed reality system includes a three-dimensional model possibly modeling part of the physical space. The mixed reality system renders views of the three-dimensional model according to the sensor measures stored in the representations.
Type: Grant
Filed: December 3, 2018
Date of Patent: December 1, 2020
Assignee: MICROSOFT TECHNOLOGY LICENSING, LLC
Inventors: Stefan Krzysztof Gawrys, Patrick James Gorman
-
Patent number: 10846937
Abstract: Systems and methods of rendering a three-dimensional (3D) virtual environment are disclosed. The system comprises a central processing device, a plurality of user devices in data communication with the central processing device, a plurality of application servers in data communication with the central processing device, and software executing on the central processing device. The software creates and renders a 3D virtual environment, receives user data from each of the plurality of user devices, renders the user data received from each of the user devices in the 3D virtual environment, receives application data from each of the application servers, renders the application data received from each of the application servers in the 3D virtual environment, and outputs the rendered 3D virtual environment to each of the user devices. The 3D virtual environment serves as a direct user interface with the Internet by allowing users to visually navigate the world wide web.
Type: Grant
Filed: September 26, 2019
Date of Patent: November 24, 2020
Assignee: Roam Holdings, LLC
Inventors: Joseph D. Rogers, Marc E. Rogers
-
Patent number: 10845960
Abstract: A method and system for dynamically displaying icons on a mobile terminal. The method comprises: A: acquiring a spatial position status of a current mobile terminal, and obtaining, according to the spatial position status, the included angle between the mobile terminal and the horizontal plane; B: acquiring a spatial position status of a display interface of the current mobile terminal, and obtaining, according to the spatial position status of the display interface, the included angle between the display interface and the horizontal plane; and C: controlling, according to the included angle between the mobile terminal and the horizontal plane and the included angle between the display interface and the horizontal plane, the angle of inclination of icons in the display interface with respect to the display interface to be the same as the included angle between the mobile terminal and the horizontal plane.
Type: Grant
Filed: September 5, 2017
Date of Patent: November 24, 2020
Assignee: JRD COMMUNICATION (SHENZHEN) LTD.
Inventor: Shuwei Huang
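Steps A and B above each reduce to deriving an angle with the horizontal plane from the device's spatial position status, and step C reuses the terminal's angle as the icons' inclination. A minimal sketch, assuming the angles come from an accelerometer gravity vector (the abstract does not specify the sensor math):

```python
import math

def angle_with_horizontal(gravity_vec):
    """Angle (degrees) between a surface and the horizontal plane, derived
    from the gravity vector expressed in the surface's own frame.
    The surface is horizontal when gravity lies entirely along its normal (z)."""
    gx, gy, gz = gravity_vec
    return math.degrees(math.atan2(math.hypot(gx, gy), abs(gz)))

def icon_tilt(terminal_gravity, display_gravity):
    """Steps A-C: icons are inclined with respect to the display interface
    by the same angle the terminal makes with the horizontal plane."""
    terminal_angle = angle_with_horizontal(terminal_gravity)   # step A
    display_angle = angle_with_horizontal(display_gravity)     # step B (orients rendering)
    return terminal_angle                                      # step C

print(round(angle_with_horizontal((0.0, 0.0, 9.8)), 1))  # 0.0 (lying flat)
print(round(angle_with_horizontal((9.8, 0.0, 0.0)), 1))  # 90.0 (upright)
```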
-
Patent number: 10825223
Abstract: A mixed reality system including a display and camera is configured to receive video of a physical scene from the camera and construct a 3D model of the physical scene based on the video. Spatial sensing provides pose (position and orientation) updates corresponding to a physical pose of the display. First user inputs allow a user to define an input path. The input path may be displayed as a graphic path or line. The input path is mapped to a 3D path in the 3D model. Second user inputs define animation features in association with the 3D path. Animation features include an object (e.g., a character), animation commands, etc. The animation commands may be manually mapped to points on the 3D path and executed during an animation of the object guided by the 3D path.
Type: Grant
Filed: May 31, 2018
Date of Patent: November 3, 2020
Assignee: MICROSOFT TECHNOLOGY LICENSING, LLC
Inventors: Richard Carl Roesler, Chuan Xia, John Alexander McElmurray
-
Patent number: 10812780
Abstract: An image processing method includes presetting an image processing model and performing the following processing based on the model when a first three-dimensional effect plane image is displayed in response to an operation of a user. The method further includes mapping the first three-dimensional effect plane image to the projection plane; determining, according to the three-dimensional position relationship among the viewpoint, the projection plane, and the view window, and the size of the view window, a first visual area obtained by projection onto the projection plane through the viewpoint and the view window; clipping a first image in the first visual area; and displaying the first image.
Type: Grant
Filed: April 24, 2018
Date of Patent: October 20, 2020
Assignee: TENCENT TECHNOLOGY (SHENZHEN) COMPANY LIMITED
Inventors: Javin Zhong, Fay Cheng, Jun Da Bei
-
Patent number: 10802287
Abstract: A head-mounted display (HMD) with a rolling illumination display panel can dynamically target a render time for a given frame based on eye tracking. Using this approach, re-projection adjustments are minimized at the location of the display(s) where the user is looking, which mitigates unwanted, re-projection-based visual artifacts in that “region of interest.” For example, logic of the HMD may predict a location on the display panel where a user will be looking during an illumination time period for a given frame, determine a time, within that illumination time period, at which an individual subset of the pixels that corresponds to the predicted location will be illuminated, predict a pose that the HMD will be in at the determined time, and send pose data indicative of this predicted pose to an application for rendering the frame.
Type: Grant
Filed: March 27, 2019
Date of Patent: October 13, 2020
Assignee: Valve Corporation
Inventor: Jeremy Adam Selan
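The timing step this abstract describes, finding when the pixel rows under the predicted gaze point will be lit during a rolling-illumination sweep, can be sketched as below. The linear top-to-bottom sweep model and all parameter names are simplifying assumptions, not details from the patent:

```python
def target_render_time(frame_start, illumination_period, panel_rows, gaze_row):
    """On a rolling-illumination panel, rows light up sequentially across the
    illumination period. Target the instant at which the row under the user's
    predicted gaze is lit, so re-projection error is minimal exactly there.
    Assumes a linear top-to-bottom sweep starting at frame_start (seconds)."""
    row_offset = (gaze_row / panel_rows) * illumination_period
    return frame_start + row_offset

# 1000-row panel, 11.1 ms illumination sweep, gaze predicted at row 500:
t = target_render_time(frame_start=0.0, illumination_period=0.0111,
                       panel_rows=1000, gaze_row=500)
print(round(t * 1000, 2))  # 5.55 (ms into the frame)
```

The HMD would then predict its pose at time `t` and hand that pose to the application before it renders the frame.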
-
Patent number: 10762219
Abstract: Optimizations are provided to control access to virtual content included within a three-dimensional (3D) mesh. Specifically, after the 3D mesh is accessed, objects represented by the 3D mesh are segmented so that they are distinguishable from one another. Once segmented, a permission is assigned to each object or even to groups of objects. For instance, all of the objects that are associated with a particular sub-space (e.g., a bedroom or a living room) may be assigned the same permissions. By assigning permissions to objects, it is possible to control which requesting entities will have access to the objects as well as how much access each of those requesting entities is afforded.
Type: Grant
Filed: May 18, 2018
Date of Patent: September 1, 2020
Assignee: MICROSOFT TECHNOLOGY LICENSING, LLC
Inventors: Yuri Pekelny, Michael Bleyer, Raymond Kirk Price
-
Patent number: 10747389
Abstract: For display of a 3D model (20) in virtual reality (VR), merely converting the CAD display from the 2D screen to a 3D image may not sufficiently reduce the information clutter. To provide metadata for the 3D CAD model (20) with less occlusion or clutter, a separate space (32) is generated in the virtual reality environment. Metadata and information about the 3D CAD model (20) and/or selected part (36) of the 3D CAD model (20) is displayed (58) in the separate space (32). The user may view the 3D CAD model (20) in one space (30), the metadata with or without a representation of a part in another space (32), or combinations thereof.
Type: Grant
Filed: June 9, 2017
Date of Patent: August 18, 2020
Assignee: Siemens Aktiengesellschaft
Inventors: Mareike Kritzler, Matthias Mayr
-
Patent number: 10739937
Abstract: Methods, including computer programs encoded on a computer storage medium, for controlling a 3D modeling application based on natural user input received at middleware. In one aspect, a method includes: receiving data, from one or more participating systems, indicating that input to an application operating at the application layer is interpreted as spatial data about one or more entities at one or more corresponding locations within an environment context; receiving, through an interface in communication with the one or more systems that provide spatial data, multiple sets of spatial data provided by or derived from the one or more participating systems that are generated while the application manages one or more interactions in the environment context; and determining an adjustment to apply to the environment context.
Type: Grant
Filed: June 15, 2018
Date of Patent: August 11, 2020
Assignee: Abantech LLC
Inventor: Gregory Emmanuel Melencio
-
Patent number: 10742519
Abstract: This disclosure relates generally to performing user segmentation, and more particularly to predicting attribute values for user segmentation. In one embodiment, the method includes segregating a user with an incomplete attribute value and users with complete attribute values for an attribute into a first group and a second group, respectively, and computing a prior probability for each suggestive attribute value, identified for the incomplete attribute value, based on the number of users in the second group having the suggestive attribute value as the attribute value for the attribute. The method further includes computing a likelihood for each suggestive attribute value based on similarity of the attribute values of the user of the first group with users of the second group, computing a posterior probability for each suggestive attribute value based on the prior probability and the likelihood, and selecting the suggestive attribute value with the highest posterior probability as the attribute value for the incomplete attribute value of the user.
Type: Grant
Filed: March 7, 2016
Date of Patent: August 11, 2020
Assignee: Tata Consultancy Services Limited
Inventors: Akshay Kumar Singhal, Mohan Raj Velayudhan Kumar, Sandip Jadhav, Rahul Ramesh Kelkar, Harrick Mayank Vin
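The prior/likelihood/posterior steps in this abstract follow a naive-Bayes-style pattern: prior(v) from the frequency of value v among complete users, likelihood(v) from similarity to the users holding v, and posterior(v) proportional to their product. A minimal sketch, with a toy similarity measure standing in for whatever the patent actually uses:

```python
from collections import Counter

def predict_attribute(incomplete_user, complete_users, attribute, similarity):
    """Return the candidate value with the highest posterior, where
    posterior(v) ~ prior(v) * likelihood(v):
      prior(v)      = fraction of complete users whose `attribute` equals v
      likelihood(v) = mean similarity of the incomplete user to the
                      complete users holding value v."""
    counts = Counter(u[attribute] for u in complete_users)
    n = len(complete_users)
    best_value, best_posterior = None, -1.0
    for value, count in counts.items():
        prior = count / n
        holders = [u for u in complete_users if u[attribute] == value]
        likelihood = sum(similarity(incomplete_user, u) for u in holders) / len(holders)
        posterior = prior * likelihood
        if posterior > best_posterior:
            best_value, best_posterior = value, posterior
    return best_value

def similarity(a, b):
    """Toy similarity: fraction of matching known attribute values."""
    keys = [k for k in b if k in a and a[k] is not None]
    return sum(a[k] == b[k] for k in keys) / max(len(keys), 1)

complete = [
    {"city": "NY", "plan": "pro"},
    {"city": "NY", "plan": "pro"},
    {"city": "SF", "plan": "basic"},
]
user = {"city": "NY", "plan": None}
print(predict_attribute(user, complete, "plan", similarity))  # pro
```

Here "pro" wins because its prior (2/3) and its mean similarity to the user (matching city) both dominate.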
-
Patent number: 10732797
Abstract: Views of a virtual environment can be displayed on mobile devices in a real-world environment simultaneously for multiple users. The users can operate selection devices in the real-world environment that interact with objects in the virtual environment. Virtual characters and objects can be moved and manipulated using selection shapes. A graphical interface can be instantiated and rendered as part of the virtual environment. Virtual cameras and screens can also be instantiated to create storyboards, backdrops, and animated sequences of the virtual environment. These immersive experiences with the virtual environment can be used to generate content for users and for feature films.
Type: Grant
Filed: October 11, 2017
Date of Patent: August 4, 2020
Assignee: LUCASFILM ENTERTAINMENT COMPANY LTD.
Inventors: Jose Perez, III, Peter Dollar, Barak Moshe
-
Patent number: 10712814
Abstract: Methods, systems, and apparatus for performing virtual reality simulations using virtual reality (VR) systems. In some aspects, a method includes the actions of logging user actions in a virtual reality system, wherein the user actions include one or more of (i) a path traveled by a user in the virtual reality system, or (ii) user interactions with objects in the virtual reality system; aggregating the logged actions over a first user and a second user; and deriving modifications to the virtual reality system based at least in part on the aggregated logged actions. The modifications to the VR system can include modifying at least one of (i) the appearance of objects shown in the VR system, (ii) the floor plan of the VR system, and (iii) the location of objects shown in the VR system.
Type: Grant
Filed: April 19, 2018
Date of Patent: July 14, 2020
Assignee: Accenture Global Solutions Limited
Inventors: Sunny Webb, Matthew Thomas Short, Manish Mehta, Robert Dooley, Grace T. Cheng, Alpana Dubey
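The log-aggregation step above (merge the traveled paths of multiple users, then derive layout modifications from the aggregate) can be sketched as a simple visit-count grid. The grid representation and the "flag unvisited cells" heuristic are illustrative assumptions, not the patent's method:

```python
from collections import Counter

def aggregate_paths(user_paths, cell=1.0):
    """Merge logged travel paths from multiple users into a visit count per
    floor-plan grid cell of side `cell` (a minimal form of aggregation)."""
    heat = Counter()
    for path in user_paths:
        for x, y in path:
            heat[(int(x // cell), int(y // cell))] += 1
    return heat

def underused_cells(heat, floor_cells, min_visits=1):
    """Hypothetical derivation step: cells of the floor plan that fall below
    the visit threshold are candidates for layout or object-placement changes."""
    return sorted(c for c in floor_cells if heat[c] < min_visits)

user1 = [(0.2, 0.3), (1.4, 0.2)]
user2 = [(0.5, 0.1), (1.6, 0.4)]
heat = aggregate_paths([user1, user2])
floor = [(0, 0), (1, 0), (2, 0)]
print(heat[(0, 0)], heat[(1, 0)], underused_cells(heat, floor))  # 2 2 [(2, 0)]
```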
-
Patent number: 10692327
Abstract: Disclosed is a gaming system including a display and a game controller, the game controller being arranged to identify a player from received player identification data, receive associated player game data of the identified player, and control the display of one or more characteristics of, or associated with, an avatar such that the player is provided with a graphical representation of the player game data. A method of gaming is also disclosed.
Type: Grant
Filed: October 29, 2018
Date of Patent: June 23, 2020
Assignee: Aristocrat Technologies Australia Pty Limited
Inventors: Thomas Samuel Barbalet, Peter Jude Mastera, Lattamore Osburn, Mark Hripko, Steven Rood
-
Patent number: 10691303
Abstract: A suite of tools for creating an Immersive Virtual Environment (IVE) comprises multiple software applications having multiple respective user interfaces that can each interface with a centralized database to design and develop immersive content for a game-based learning product in an IVE. According to some embodiments, this data-driven IVE production process may utilize tools having a “picker”-type selection in which selection menus are populated directly from database data, helping reduce data duplication, simplify tool design, and streamline the IVE production process.
Type: Grant
Filed: September 11, 2018
Date of Patent: June 23, 2020
Assignee: Cubic Corporation
Inventors: Kathleen Kershaw, Brian Hinken, Nicholas Kemner, Katelyn Procci, Andre Balta, Etienne Magnin, Shawn Fiore
-
Patent number: 10679421
Abstract: An interactive spa that may allow users to experience what it would be like to use a fully operational spa. The spa may contain a cutaway, or a void, on one of the sides of the spa to allow users to enter and exit the spa without needing to climb over the sides. The spa may also have windows that allow users to view certain internal aspects of the spa for display purposes. The spa may also utilize one or more virtual reality sensors to allow users to experience, to an extent, what it would be like to use an operational spa. The spa may include a monitor stand that may have one or more touch-screen monitors, which may allow users to browse through information about the spa or view the virtual reality content that users are seeing while within the spa.
Type: Grant
Filed: November 13, 2018
Date of Patent: June 9, 2020
Assignee: Bullfrog International, LC
Inventors: Eric Hales, Todd Anderson, Samson Madsen
-
Patent number: 10649724
Abstract: Examples of interface systems and methods for voice-based interaction in one or more virtual areas that define respective persistent virtual communication contexts are described. These examples enable communicants to use voice commands to, for example, search for communication opportunities in the different virtual communication contexts, enter specific ones of the virtual communication contexts, and bring other communicants into specific ones of the virtual communication contexts. In this way, these examples allow communicants to exploit the communication opportunities that are available in virtual areas, even when hands-based or visual methods of interfacing with the virtual areas are not available.
Type: Grant
Filed: January 15, 2016
Date of Patent: May 12, 2020
Assignee: Sococo, Inc.
Inventor: David Van Wie
-
Patent number: 10646285
Abstract: Surgical navigation system: 3D display system with see-through visor; a tracking system for real-time tracking of: surgeon's head, see-through visor, patient anatomy and surgical instrument to provide current position and orientation data; a source of an operative plan, a patient anatomy data and a virtual surgical instrument model; a surgical navigation image generator to generate a surgical navigation image with a three-dimensional image representing simultaneously a virtual image of the surgical instrument corresponding to the current position and orientation of the surgical instrument and a virtual image of the surgical instrument, the see-through visor, the patient anatomy and the surgical instrument; the 3D display system configured to show the surgical navigation image at the see-through visor, such that an augmented reality image collocated with the patient anatomy in the surgical field underneath the see-through visor is visible to a viewer looking from above the see-through visor towards the surgica
Type: Grant
Filed: August 9, 2018
Date of Patent: May 12, 2020
Assignee: HOLO SURGICAL INC.
Inventors: Kris B. Siemionow, Cristian J. Luciano
-
Patent number: 10632385
Abstract: Systems and methods for capturing participant likeness for a video game character are disclosed. In some embodiments, a method comprises receiving, at a pose generation system, multiple videos of one or more live events, the multiple videos recorded from a plurality of camera angles. A target participant may be identified, at the pose generation system, in the multiple videos. A set of poses may be generated, at the pose generation system, of the target participant from the multiple videos, the set of poses associated with a movement type or game stimulus. The set of poses may be received, at a model processing system, from the pose generation system. The method may further comprise generating, at the model processing system, a graphic dataset based on the set of poses, and storing, at the model processing system, the graphic dataset to assist in rendering gameplay of a video game.
Type: Grant
Filed: August 30, 2018
Date of Patent: April 28, 2020
Assignee: Electronic Arts Inc.
Inventors: Roy Harvey, Tom Waterson
-
Patent number: 10579234
Abstract: Systems and techniques are disclosed for opportunistic presentation of functionality for increasing efficiencies of medical image review, such as based on deep learning algorithms applied to medical data. One of the methods includes a user interface that displays, in a first portion of the user interface, one or more medical images associated with a patient. Widgets selected from a multitude of widgets are displayed in a second portion of the user interface, with the selection being based on a context associated with the user interface. User input associated with each interaction with the displayed widgets is responded to, and the first portion or second portion is updated in response to the received user input.
Type: Grant
Filed: September 9, 2016
Date of Patent: March 3, 2020
Assignee: MERGE HEALTHCARE SOLUTIONS INC.
Inventor: Murray A. Reicher
-
Patent number: 10572005
Abstract: Content from a user computing device may be transmitted to at least one recipient computing device. A plurality of avatars is displayed that each represent different recipients associated with recipient computing devices. A group communication session is established among the user computing device and the recipient computing devices. During the group communication session: initial content is transmitted from the user computing device to each recipient computing device; based on determining that the user is gazing at a selected avatar, a private communication session is established between the user computing device and the recipient computing device associated with the selected avatar. During the private communication session, subsequent content is transmitted from the user computing device to such recipient computing device, and is not transmitted to the other recipient computing devices.
Type: Grant
Filed: July 29, 2016
Date of Patent: February 25, 2020
Assignee: MICROSOFT TECHNOLOGY LICENSING, LLC
Inventors: Jessica Ellen Zahn, Peter William Carlson, Shawn Crispin Wright, Eric Scott Rehmeyer, John Copic
-
Patent number: 10565770
Abstract: The invention relates to a communication system and a method for providing a virtual meeting of a first user (U1, U2, U3, U4) and a second user (U1, U2, U3, U4), comprising a first communication device (12, 14, 16, 18, 24, 26, 28, 32, 34) with a first display device (12a, 14a, 16a, 18a, 24a, 26a, 28a, 32a) associated with the first user (U1, U2, U3, U4), and a second communication device (12, 14, 16, 18, 24, 26, 28, 32, 34) with a second display device (12a, 14a, 16a, 18a, 24a, 26a, 28a, 32a) associated with the second user (U1, U2, U3, U4).
Type: Grant
Filed: August 5, 2016
Date of Patent: February 18, 2020
Assignee: Apple Inc.
Inventor: Eberhard Schmidt
-
Patent number: 10556185
Abstract: Methods and systems are provided for delivering a virtual reality (VR) experience of a real world space to a remote user via a head mounted display (HMD). A method provides for sending a request for the VR experience of the real world space and identifying a seat selection made by the user. The method includes operations for mapping the seat selection to a real world capture system for capturing video and audio at a location that corresponds to the seat selection and receiving real world coordinates for the real world capture system. Further, the method accesses a user profile of the user and receives a video stream of the real world space captured by the real world capture system. The method is able to identify and reskin a real world object with a graphical content element by overlaying the graphical content element in place of the image data associated with the real world object.
Type: Grant
Filed: February 21, 2018
Date of Patent: February 11, 2020
Assignee: Sony Interactive Entertainment America LLC
Inventors: Mohammed Khan, Miao Li, Ken Miyaki
-
Patent number: 10540797
Abstract: An image management system includes a computing platform including a hardware processor and a system memory storing an image customization software code, and a database of personas assumable by a user, the database communicatively coupled to the image customization software code. The hardware processor executes the image customization software code to receive a wireless signal associating a persona stored in the database with the user, receive a digital image including an image of the user, and detect the image of the user in the digital image. The hardware processor further executes the image customization software code to obtain the persona from the database, and output a customized image to be rendered on a display, where the persona modifies the image of the user in the customized image.
Type: Grant
Filed: August 2, 2018
Date of Patent: January 21, 2020
Assignee: Disney Enterprises, Inc.
Inventors: Michael P. Goslin, Mark Arana, Leon Silverman
-
Patent number: 10534801
Abstract: A map screen determining method and apparatus, which includes obtaining profile data in a track dimension of a target digital person, where the target digital person is generated by a digital person generation system and consists of multiple dimensions of target user profiles corresponding to a target user, and the target user profiles are generated by processing multiple dimensions of data from multiple data sources, determining map data of a target area, where the target area is a specific map area that needs to be presented to a user, and determining a tracing map screen according to the profile data in the track dimension of the target digital person and the map data, where the tracing map screen represents a track feature of the target digital person in the target area.
Type: Grant
Filed: March 9, 2017
Date of Patent: January 14, 2020
Assignee: HUAWEI TECHNOLOGIES CO., LTD.
Inventors: Xueyan Huang, Hao Wu
-
Patent number: 10534787
Abstract: A method and apparatus for delivering engineering data to a portable device for use in performing an operation. Engineering data for each part in a set of parts is stored in a file system on the portable device. A set of entries, each entry including a part identifier and a target locator for a part, is created for the set of parts to form a table. An initial locator constructed by a visualization tool is matched to a target locator in the table for a selected part. The target locator identifies a physical location in the file system of requested engineering data for the selected part. A local server on the portable device retrieves the requested engineering data based on the target locator. The local server serves the requested engineering data to a browser, which displays the requested engineering data for use in performing the operation on the selected part.
Type: Grant
Filed: February 25, 2014
Date of Patent: January 14, 2020
Assignee: The Boeing Company
Inventors: David Joseph Kasik, David D. Briggs
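The part-identifier table this abstract describes is essentially a lookup from part id to a file-system locator. A minimal sketch, with the part-id format, field names, and paths all hypothetical:

```python
def build_table(parts):
    """Build the locator table: part_id -> target locator (a path in
    the portable device's file system).

    parts: iterable of (part_id, target_locator) pairs.
    """
    return {part_id: locator for part_id, locator in parts}

def resolve(table, initial_locator):
    """Match an initial locator from the visualization tool to a target
    locator. Here the initial locator is assumed to carry the part id
    in a query-string-like form such as 'part=WING-042'.
    """
    part_id = initial_locator.split("part=", 1)[-1]
    return table.get(part_id)
```

The local server would then read the file at the returned path and serve it to the browser.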
-
Patent number: 10516870
Abstract: There is provided an information processing device including a change unit configured to change arrangement of a first object corresponding to a first user and a second object corresponding to a second user from first arrangement to second arrangement in accordance with reproduction of a video including depth information, in a virtual space corresponding to the video.
Type: Grant
Filed: June 18, 2019
Date of Patent: December 24, 2019
Assignee: SONY CORPORATION
Inventor: Tsuyoshi Ishikawa
-
Patent number: 10511706
Abstract: Disclosed are a persuasive interaction restriction method and system for the intervention of smart device overuse based on context-awareness. A method of restricting an interaction of a smart device based on context awareness is executed by a computer, and may include training a user's interactivity with a smart device and an app installed on the smart device, setting an intervention method related to restriction on the use of the app based on the trained interactivity, recognizing whether context requires intervention based on predetermined interactivity restraint setting information, and restricting the use of the app based on the intervention method set if the context is recognized to require intervention.
Type: Grant
Filed: September 18, 2018
Date of Patent: December 17, 2019
Assignee: KOREA ADVANCED INSTITUTE OF SCIENCE AND TECHNOLOGY
Inventors: Uichin Lee, Joonyoung Park, Jaejeung Kim, Inyeop Kim
-
Patent number: 10489653
Abstract: Systems and methods for presenting an augmented reality view are disclosed. Embodiments include a system with a database for personalizing an augmented reality view of a physical environment using at least one of a location of a physical environment or a location of a user. The system may further include a hardware device in communication with the database, the hardware device including a renderer configured to render the augmented reality view for display and a controller configured to determine a scope of the augmented reality view and authenticate the augmented reality view. The hardware device may include a processor configured to receive the augmented reality view of the physical environment, and present, via a display, augmented reality content to the user while the user is present in the physical environment, based on the determined scope of the augmented reality view.
Type: Grant
Filed: September 26, 2018
Date of Patent: November 26, 2019
Assignee: CAPITAL ONE SERVICES, LLC
Inventors: Jason Richard Hoover, Micah Price, Sunil Subrahmanyam Vasisht, Qiaochu Tang, Geoffrey Dagley, Stephen Michael Wylie
-
Patent number: 10482660
Abstract: A system for integrating content in real time into a dynamic 3D scene includes an external server including a CMS, a device including a content integrating engine to process 3D scenes in real time, and a display device to display the combined 3D scene output. The CMS searches for social media posts on social media servers. Each social media post includes a message and a URL to media content. The content integrating engine includes a content retriever, a content queue, 3D scene component processors to process each 3D scene's visual components, a scene manager, and a combiner. The content retriever establishes a direct connection to the external server, retrieves URLs from server storage, and stores the URLs in the content queue. The scene manager, at a time of low intensity during the 3D scene, signals the content retriever to retrieve media content corresponding to the URLs in the content queue, one scene component processor to process a display setting change, or another scene component processor to process the media content. The combiner generates the combined 3D scene output.
Type: Grant
Filed: February 12, 2016
Date of Patent: November 19, 2019
Assignee: Vixi, Inc.
Inventor: Gregory Lawrence Harvey
-
Patent number: 10482570
Abstract: A system for performing memory allocation for seamless media content presentation includes a computing platform having a CPU, a GPU having a GPU memory, and a main memory storing a memory allocation software code. The CPU executes the memory allocation software code to transfer a first dataset of media content to the GPU memory, seamlessly present the media content to a system user, register a location of the system user during the seamless presentation of the media content, and register a timecode status of the media content at the location. The CPU further executes the memory allocation software code to identify a second dataset of the media content based on the location and the timecode status, transfer a first differential dataset to the GPU memory, continue to seamlessly present the media content to the system user, and transfer a second differential dataset out of the GPU memory.
Type: Grant
Filed: September 26, 2017
Date of Patent: November 19, 2019
Assignee: Disney Enterprises, Inc.
Inventors: Kenneth J. Mitchell, Charalampos Koniaris, Floyd M. Chitalu
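The two differential transfers in this abstract amount to a set difference between what is already resident in GPU memory and what the next segment needs. A simplified sketch, treating datasets as sets of asset ids (the representation is an assumption, not the patent's):

```python
def plan_transfers(resident, needed):
    """Plan the differential transfers between datasets.

    resident: set of asset ids currently in GPU memory (first dataset).
    needed: set of asset ids for the segment implied by the user's
    location and timecode (second dataset).

    Returns (transfer_in, transfer_out): the first differential dataset
    to copy into GPU memory and the second to copy out, so presentation
    can continue without reloading assets common to both datasets.
    """
    transfer_in = sorted(needed - resident)   # missing from GPU memory
    transfer_out = sorted(resident - needed)  # no longer required
    return transfer_in, transfer_out
```

Transferring only the differences, rather than the whole second dataset, is what keeps the presentation seamless.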
-
Patent number: 10467814
Abstract: A computer system for managing multiple distinct perspectives within a mixed-reality design environment loads a three-dimensional architectural model into memory. The three-dimensional architectural model is associated with a virtual coordinate system. The three-dimensional architectural model comprises at least one virtual object that is associated with an independently executable software object that comprises independent variables and functions that are specific to a particular architectural element that is represented by the at least one virtual object. The computer system associates the virtual coordinate system with a physical coordinate system within a real-world environment. The computer system transmits to each device of multiple different devices rendering information. The rendering information comprises three-dimensional image data for rendering the three-dimensional architectural model and coordinate information that maps the virtual coordinate system to the physical coordinate system.
Type: Grant
Filed: June 9, 2017
Date of Patent: November 5, 2019
Assignee: DIRTT Environmental Solutions, Ltd.
Inventors: Barrie A. Loberg, Joseph Howell, Robert Blodgett, Simon Francis Stannus, Matthew Hibberd, Tyler West
-
Patent number: 10464570
Abstract: An electronic device uses intelligent analysis to identify characteristics of users located within a specific space. The electronic device includes a controller configured to identify each of a plurality of users located within the specific space, and generate a control command for controlling operation of at least one device associated with the specific space based on characteristic information related to each of the identified plurality of users.
Type: Grant
Filed: September 12, 2017
Date of Patent: November 5, 2019
Assignee: LG Electronics Inc.
Inventors: Sungil Cho, Youngjun Kim, Yujune Jang
-
Patent number: 10460486
Abstract: Systems and methods for aggregating and storing different types of data, and generating interactive user interfaces for analyzing the stored data. In some embodiments, entity data is received for a plurality of entities from one or more data sources, and used to determine attribute values for the entities for one or more given time periods. The plurality of entities may be categorized into one or more entity groups, and aggregate attribute values may be generated based upon the entity groups. A first interactive user interface is generated displaying the one or more entity groups in association with the aggregated attribute values associated with the entity group. In response to a received indication of a user selection of an entity group, a second interactive user interface is generated displaying the one or more entities associated with the selected entity group, each entity displayed in association with the attribute values associated with the entity.
Type: Grant
Filed: September 8, 2017
Date of Patent: October 29, 2019
Assignee: PALANTIR TECHNOLOGIES INC.
Inventors: Sean Kelley, Dylan Scott, Ayush Sood, Kevin Verdieck, Izaak Baker, Eliot Ball, Zachary Bush, Allen Cai, Jerry Chen, Aditya Dahiya, Daniel Deutsch, Calvin Fernandez, Jonathan Hong, Jiaji Hu, Audrey Kuan, Lucas Lemanowicz, Clark Minor, Nicholas Miyake, Michael Nazario, Brian Ngo, Mikhail Proniushkin, Siddharth Rajgarhia, Christopher Rogers, Kayo Teramoto, David Tobin, Grace Wang, Wilson Wong, Holly Xu, Xiaohan Zhang
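The group-level aggregation step this abstract describes can be sketched as a simple group-by-and-sum. The record shape (dicts with `group` and `value` keys) and the sum aggregate are illustrative choices; a deployed system would aggregate per time period and support drill-down to the underlying entities:

```python
from collections import defaultdict

def aggregate_by_group(entities, value_key="value"):
    """Aggregate an attribute across entities by entity group.

    entities: list of dicts, each with a 'group' key and a numeric
    attribute under `value_key`.
    Returns {group: sum of attribute values for that group}, the kind
    of figure the first interactive user interface would display.
    """
    totals = defaultdict(float)
    for entity in entities:
        totals[entity["group"]] += entity[value_key]
    return dict(totals)
```

Selecting a group in the first interface would then filter `entities` down to that group's members for the second interface.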
-
Patent number: 10448012
Abstract: A production system determines which areas or portions of a video file (e.g., for a scene) are static and which areas contain motion. Instead of streaming redundant image data for the static areas of the video, the production system only sends image data for, or updates, the areas of each frame that contain motion, to minimize the amount of data being streamed to a head mounted display (HMD) without compromising image quality.
Type: Grant
Filed: November 22, 2017
Date of Patent: October 15, 2019
Assignee: Pixvana, Inc.
Inventors: William Hensler, Forest Key, Sean Safreed, Scott Squires
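One common way to separate static areas from areas with motion is tile-based frame differencing; the patent does not specify its method, so the tile size, threshold, and frame representation below are all illustrative assumptions:

```python
def motion_tiles(prev_frame, curr_frame, tile=2, threshold=8):
    """Compare two grayscale frames (lists of rows of ints) tile by
    tile, returning the (row, col) origins of tiles whose summed
    absolute pixel difference exceeds `threshold`. Only those tiles
    would be re-sent to the HMD; static tiles are skipped.
    """
    h, w = len(curr_frame), len(curr_frame[0])
    changed = []
    for ty in range(0, h, tile):
        for tx in range(0, w, tile):
            diff = sum(
                abs(curr_frame[y][x] - prev_frame[y][x])
                for y in range(ty, min(ty + tile, h))
                for x in range(tx, min(tx + tile, w))
            )
            if diff > threshold:
                changed.append((ty, tx))
    return changed
```

The threshold keeps sensor noise in otherwise-static areas from triggering redundant updates.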
-
Patent number: 10440361
Abstract: Variable image data reduction is applied to at least a subset of each frame of a video file. The variable image data reduction includes reducing data by one or more techniques, such as compression, decimation, distortion, and so forth, across the frame by a different degree or amount based on a viewing location of the user. Thus, the amount of data sent to an encoder for delivery to a client device (e.g., head mounted display (HMD) or other computing device) is lowered by prioritizing image quality for the viewing location of the user while one or more data reducing techniques are applied to the remainder of the frame.
Type: Grant
Filed: November 22, 2017
Date of Patent: October 8, 2019
Assignee: Pixvana, Inc.
Inventors: William Hensler, Forest Key, Sean Safreed, Scott Squires
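The varying degree of reduction across the frame can be pictured as a per-tile quality map that falls off with distance from the viewing location. The falloff curve, tile size, and (0, 1] quality scale below are illustrative choices, not the patent's specification:

```python
import math

def quality_map(width, height, gaze_x, gaze_y, tile=2):
    """Assign each tile a quality factor in (0, 1]: 1.0 at the tile
    containing the viewing location, decreasing with distance. An
    encoder could map each factor to a compression or decimation level
    applied to that tile before delivery to the client device.
    """
    qualities = {}
    for ty in range(0, height, tile):
        for tx in range(0, width, tile):
            cx, cy = tx + tile / 2, ty + tile / 2
            dist = math.hypot(cx - gaze_x, cy - gaze_y)
            qualities[(ty, tx)] = 1.0 / (1.0 + dist / tile)
    return qualities
```

Tiles far from the viewing location get low factors and therefore aggressive reduction, which is what lowers the total data sent to the encoder.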
-
Patent number: 10431003
Abstract: Systems and methods of rendering a three-dimensional (3D) virtual environment are disclosed. The system comprises a central processing device, a plurality of user devices in data communication with the central processing device, a plurality of application servers in data communication with the central processing device, and software executing on the central processing device. The software creates and renders a 3D virtual environment, receives user data from each of the plurality of user devices, renders the user data received from each of the user devices in the 3D virtual environment, receives application data from each of the application servers, renders the application data received from each of the application servers in the 3D virtual environment, and outputs the rendered 3D virtual environment to each of the user devices. The 3D virtual environment serves as a direct user interface with the Internet by allowing users to visually navigate the world wide web.
Type: Grant
Filed: February 29, 2016
Date of Patent: October 1, 2019
Assignee: Roam Holdings, LLC
Inventors: Joseph D. Rogers, Marc E. Rogers