Patents Examined by Zhengxi Liu
-
Patent number: 10650092
Abstract: Approaches presented herein enable optimizing a display of tabular data from a 2-D table as a folding 3-D table having a plurality of vectors in a GUI. More specifically, a scaling ratio is calculated to fit the plurality of vectors within a display area of the GUI based on a cumulative width of the plurality of vectors and a width of the display area of the GUI. A maximum angle of rotation for at least one vector is calculated based on a legibility of the vector. The scaling ratio can be applied to a width of at least one vector to yield a modified width of the vector. The 2-D table is then rendered as a 3-D table in which the at least one vector is depicted as a modified vector angled within a maximum angle of rotation between a horizontal and a vertical axis.
Type: Grant
Filed: November 19, 2019
Date of Patent: May 12, 2020
Assignee: International Business Machines Corporation
Inventors: Tian Qi Han, Dong Ni, Hua Hong Wang, Hao Zhang
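The scaling and fold-angle relationship described in this abstract can be sketched in a few lines. This is an illustrative reading, not the claimed implementation: the function name `fold_table` and the fixed legibility cap are assumptions; the abstract itself derives the cap from a per-vector legibility measure.

```python
import math

def fold_table(column_widths, display_width, max_angle_deg=60.0):
    """Sketch: scale column vectors to fit the display width, then derive
    the fold angle each column needs so its projected (modified) width
    matches, clamped to a legibility-derived maximum rotation."""
    cumulative = sum(column_widths)
    # Scaling ratio fits the cumulative vector width into the display area.
    ratio = min(1.0, display_width / cumulative)
    folded = []
    for w in column_widths:
        projected = w * ratio  # modified on-screen width
        # The diagonal length stays the original width w, so the fold
        # angle satisfies cos(angle) = projected / w = ratio.
        angle = math.degrees(math.acos(projected / w))
        angle = min(angle, max_angle_deg)  # clamp for legibility
        folded.append((projected, angle))
    return ratio, folded
```

For two 100-unit columns in a 100-unit display, the ratio is 0.5 and each column folds to about 60 degrees, the angle whose cosine is 0.5.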
-
Patent number: 10643391
Abstract: A VR system for vehicles that may implement methods that address problems with vehicles in motion that may result in motion sickness for passengers. The VR system may provide virtual views that match visual cues with the physical motions that a passenger experiences. The VR system may provide immersive VR experiences by replacing the view of the real world with virtual environments. Active vehicle systems and/or vehicle control systems may be integrated with the VR system to provide physical effects with the virtual experiences. The virtual environments may be altered to accommodate a passenger upon determining that the passenger is prone to or is exhibiting signs of motion sickness.
Type: Grant
Filed: September 22, 2017
Date of Patent: May 5, 2020
Assignee: Apple Inc.
Inventors: Mark B. Rober, Sawyer I. Cohen, Daniel Kurz, Tobias Holl, Benjamin B. Lyon, Peter Georg Meier, Jeffrey M. Riepling, Holly Gerhard
-
Patent number: 10628993
Abstract: A system comprises an obtainment unit that obtains virtual viewpoint information relating to a position and direction of a virtual viewpoint; a designation unit that designates a focus object from a plurality of objects detected based on at least one of the plurality of images captured by the plurality of cameras; a decision unit that decides an object to make transparent from among the plurality of objects based on a position and direction of a virtual viewpoint that the virtual viewpoint information obtained by the obtainment unit indicates, and a position of the focus object designated by the designation unit; and a generation unit that generates, based on the plurality of captured images obtained by the plurality of cameras, a virtual viewpoint image in which the object decided by the decision unit is made to be transparent.
Type: Grant
Filed: September 22, 2017
Date of Patent: April 21, 2020
Assignee: CANON KABUSHIKI KAISHA
Inventors: Kazuhiro Yoshimura, Kaori Taya, Shugo Higuchi, Tatsuro Koizumi
-
Patent number: 10621774
Abstract: Embodiments of the disclosure provide systems and methods for rendering reflections. To add reflections to a pixel in an image, ray marching is used to attempt to find a ray intersection for primary reflections. When using rasterization to render a scene, objects outside the viewport are culled. As such, ray marching may fail in various situations, such as when a ray marched ray exits the viewport without intersecting any other object of the scene. In such a situation where ray marching fails, the ray can be re-cast as a ray traced ray. The ray traced ray is cast into the full 3D (three-dimensional) scene with all objects present (i.e., objects are not culled). Ray tracing is then used to attempt to find a ray intersection, i.e., for a primary reflection. The disclosed embodiments can be used in real-time or near-real time applications, such as video games.
Type: Grant
Filed: August 10, 2018
Date of Patent: April 14, 2020
Assignee: Electronic Arts Inc.
Inventor: Yasin Uludag
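The hybrid fallback this abstract describes is simple to express as control flow. The sketch below is a schematic reading only; the callables `screen_march` and `full_trace` are hypothetical stand-ins for a screen-space ray marcher and a full-scene ray tracer.

```python
def reflect_pixel(origin, direction, screen_march, full_trace):
    """Hybrid reflection lookup: try screen-space ray marching first;
    if the marched ray misses (e.g. exits the viewport, where culled
    objects cannot be hit), re-cast it as a ray-traced ray against the
    full, un-culled 3D scene."""
    hit = screen_march(origin, direction)   # may return None on a miss
    if hit is None:
        hit = full_trace(origin, direction)  # all objects present
    return hit
```

The point of the design is that the cheap screen-space pass handles most pixels, and the expensive full-scene trace runs only for the rays rasterization-time culling made unresolvable.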
-
Patent number: 10607409
Abstract: Systems and methods for constructing and saving files containing computer-generated image data with associated virtual camera location data during 3-D visualization of an object (e.g., an aircraft). The process tags computer-generated images with virtual camera location and settings information selected by the user while navigating a 3-D visualization of an object. The virtual camera location data in the saved image file can be used later as a way to return the viewpoint to the virtual camera location in the 3-D environment from where the image was taken. For example, these tagged images can later be drag-and-dropped onto the display screen while the 3-D visualization application is running to activate the process of retrieving and displaying a previously selected image. Multiple images can be loaded and then used to determine the relative viewpoint offset between images.
Type: Grant
Filed: July 19, 2016
Date of Patent: March 31, 2020
Assignee: The Boeing Company
Inventors: James J. Troy, Christopher D. Esposito, Vladimir Karakusevic
-
Patent number: 10593096
Abstract: Provided are a method and an apparatus for transforming coordinates of pixels representing boundary points of an object on a cube map, when the pixels respectively correspond to different faces of the cube map. Distances between the boundary pixels may be calculated by using the transformed coordinates. Based on the calculated distances, a level of detail (LOD) for texturing the cube map of the pixels may be determined.
Type: Grant
Filed: October 26, 2017
Date of Patent: March 17, 2020
Assignee: SAMSUNG ELECTRONICS CO., LTD.
Inventor: Seok Kang
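The LOD step this abstract describes follows the standard mipmap rule: once neighbouring texel coordinates are re-expressed on a common cube face, the LOD is the log2 of the larger inter-pixel distance. A minimal sketch, in which `transform` is a hypothetical stand-in for the patent's cross-face coordinate transformation:

```python
import math

def lod_for_quad(p00, p10, p01, transform):
    """Sketch: re-express the horizontal (p10) and vertical (p01)
    neighbours of texel p00 on p00's cube face, then take the LOD as
    log2 of the larger footprint, clamped at level 0."""
    q10 = transform(p10)  # neighbour coordinates on p00's face
    q01 = transform(p01)
    dx = math.dist(p00, q10)
    dy = math.dist(p00, q01)
    return max(0.0, math.log2(max(dx, dy)))
```

Without the transformation, distances measured across a face seam would be wrong, which is exactly the problem the patent's coordinate transformation addresses.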
-
Patent number: 10593240
Abstract: User interface systems for sterile fields and other working environments are disclosed herein. In some embodiments, a user interface system can include a projector that projects a graphical user interface onto a data board or other substrate disposed within a working environment. The system can also include a camera or other sensor that detects user interaction with the data board or substrate. Detected user interactions can be processed or interpreted by a controller that interfaces with equipment disposed outside of the working environment, thereby allowing user interaction with such equipment from within the working environment. The data board can be an inexpensive, disposable, single-use component of the system that can be easily sterilized or another component suitably prepared for use in a sterile field.
Type: Grant
Filed: June 6, 2018
Date of Patent: March 17, 2020
Assignee: Medos International Sàrl
Inventors: Mark Hall, Roman Lomeli, J. Riley Hawkins, Joern Richter
-
Patent number: 10558743
Abstract: Approaches presented herein enable optimizing a display of tabular data from a 2-D table as a folding 3-D table having a plurality of vectors in a GUI. More specifically, a scaling ratio is calculated to fit the plurality of vectors within a display area of the GUI based on a cumulative width of the plurality of vectors and a width of the display area of the GUI. This scaling ratio is applied to a width of at least one vector to yield a modified width of the vector. The 2-D table is then rendered as a 3-D table in which the at least one vector is depicted as a modified vector angled between a horizontal and a vertical axis. This modified vector has an actual width equal to the modified width and a diagonal length equal to the width of the at least one vector.
Type: Grant
Filed: June 25, 2019
Date of Patent: February 11, 2020
Assignee: International Business Machines Corporation
Inventors: Tian Qi Han, Dong Ni, Hua Hong Wang, Hao Zhang
-
Patent number: 10529117
Abstract: In one embodiment, a computing system may receive a focal surface map, which may be specified by an application. The system may determine an orientation in a 3D space based on sensor data generated by a virtual reality device. The system may generate first coordinates in the 3D space based on the determined orientation and generate second coordinates using the first coordinates and the focal surface map. Each of the first coordinates is associated with one of the second coordinates. For each of the first coordinates, the system may determine visibility of one or more objects defined within the 3D space by projecting a ray from the first coordinate through the associated second coordinate to test for intersection with the one or more objects. The system may generate an image of the one or more objects based on the determined visibility of the one or more objects.
Type: Grant
Filed: April 16, 2018
Date of Patent: January 7, 2020
Assignee: Facebook Technologies, LLC
Inventors: Warren Andrew Hunt, Anton S. Kaplanyan, Michael Mara, Alexander Nankervis
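The per-coordinate ray test in this abstract reduces to a loop that pairs each first coordinate with its focal-surface counterpart. A minimal sketch under assumed interfaces: `focal_surface_map` and `intersect` are hypothetical callables standing in for the application-specified map and the scene intersection test.

```python
def cast_rays(first_coords, focal_surface_map, intersect):
    """Sketch: for each first coordinate, look up the associated second
    coordinate via the focal surface map and cast a ray from the first
    through the second to test visibility of scene objects."""
    visibility = []
    for p in first_coords:
        q = focal_surface_map(p)  # associated second coordinate
        visibility.append(intersect(p, q))  # ray from p through q
    return visibility
```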
-
Patent number: 10502554
Abstract: In a process for determining the 3D coordinates of an object (1), a partial surface of the object (1) is recorded by a 3D measuring device (2), and the 3D coordinates of this partial surface of the object (1) are determined. Additional partial surfaces of the object (1) are recorded by the 3D measuring device (2), and the 3D coordinates of these partial surfaces are determined. The 3D coordinates of the partial surfaces of the object (1) are assembled by a processing device (3). In order to improve this process, the exposures and/or the 3D coordinates of one or more partial surfaces of the object (1) are represented on a head-mounted display (4) (FIG. 1).
Type: Grant
Filed: August 27, 2015
Date of Patent: December 10, 2019
Assignee: CARL ZEISS OPTOTECHNIK GMBH
Inventor: Alexander Frey
-
Patent number: 10504297
Abstract: In the disclosed systems and methods for competitive scene completion, a user selects from among one or more affordances, each representing a different challenge and comprising a scene image and a plurality of markers. Each marker has independent scene coordinates and a type. The scene and markers of the selected challenge are displayed. For each user marker selection, corresponding furnishing units are displayed. The units match the marker type and include units retained and not retained by the user. User unit selection results in display of a graphic of the selected unit at the corresponding coordinates within the scene. The scene is thereby populated with graphics. The unit selections are submitted upon satisfaction of a challenge completion criterion specifying which scene coordinates must be populated. Responsive to submission, the user unit selections are subjected to community vote on a remote server and the results are provided to the user.
Type: Grant
Filed: July 13, 2018
Date of Patent: December 10, 2019
Inventors: Scott Cuthbertson, Barlow Gilmore, Martin Robaszewski, Brandon Jones, Jakub Fiedorowicz, Christianne Amodio, Ngan Vu, Chris McGill, Chris Hosking, Jeff Tseng, Jose Estuardo Avila
-
Patent number: 10497138
Abstract: Architecture that enables the drawing of markup in a scene that neither obscures the scene nor is undesirably obscured by the scene. When drawing markup such as text, lines, and other graphics into the scene, a determination is made as to the utility to the viewer of drawing the markup with greater prominence than an occluding scene object. The utility of the markup is based on the distance of the scene object and markup from the camera. Thus, if an object appears small in the scene and is in front of the markup, the markup will be drawn more clearly, whereas if the same object appears large in the scene and is in front of the markup, the markup is rendered faint, if drawn at all.
Type: Grant
Filed: September 10, 2018
Date of Patent: December 3, 2019
Assignee: Microsoft Technology Licensing, LLC
Inventors: Nikolai Faaland, David Buerer
-
Patent number: 10475245
Abstract: In some embodiments, folding patterns are provided that can be folded to create three-dimensional target objects for use in augmented reality environments. In some embodiments, the folding patterns may include joining means to secure the folded shape and thereby enhance usability and durability. In some embodiments, the folding patterns may also include patterns that can be used to identify a particular virtual object to be rendered in association with the three-dimensional target object.
Type: Grant
Filed: February 27, 2018
Date of Patent: November 12, 2019
Assignee: L'Oreal
Inventor: Seyed Morteza Haerihosseini
-
Patent number: 10467818
Abstract: A system, a method and a computer program are provided to assist a user in virtually trying on and selecting a wardrobe article that may belong to the user, or that may be available from another source. The system, method and computer program display a real-world image of the wardrobe article on the user, so that the user may determine the real-world look and fit of the article on the user, including how the article would look and fit with respect to other items worn by the user, thereby minimizing any need for the wearer to actually try on an article to determine how the article will actually fit and look on the wearer, or how the article will look with respect to other articles worn by the wearer.
Type: Grant
Filed: May 31, 2018
Date of Patent: November 5, 2019
Inventor: Marie Manvel
-
Patent number: 10453273
Abstract: A method, system, and computer program for providing a virtual object in a virtual or semi-virtual environment, based on a characteristic associated with the user. In one example embodiment, the system comprises at least one computer processor, and a memory storing instructions that, when executed by the at least one computer processor, perform a set of operations comprising determining the characteristic associated with the user in the virtual or semi-virtual environment with respect to a predetermined reference location in the environment, and providing a virtual object based on the characteristic.
Type: Grant
Filed: June 28, 2017
Date of Patent: October 22, 2019
Assignee: Microsoft Technology Licensing, LLC
Inventors: Carlos G. Perez, Vidya Srinivasan, Colton B. Marshall, Aniket Handa, Harold Anthony Martinez Molina
-
Patent number: 10438064
Abstract: A picture of a person's face is present in a mixed reality system. The mixed reality system has a monitoring or updating process that attempts to detect the presence of faces in the mixed reality. When a picture of a face is detected, the system detects edges of a picture frame or physical borders of the picture. If the picture is bounded by edges or is otherwise in a suitable physical form, then a canvas object is generated. The canvas object is arranged in the mixed reality to coincide with the picture of the face. A face recognition algorithm is used to find an identity of the face. Information updates specific to the identity of the face are obtained, applied to the canvas, and the canvas is rendered in the mixed reality. A viewer of the mixed reality will observe the information updates replacing or overlaying the picture.
Type: Grant
Filed: January 2, 2018
Date of Patent: October 8, 2019
Assignee: Microsoft Technology Licensing, LLC
Inventor: Fahrim Ur Rahman
-
Patent number: 10431010
Abstract: A makeup application device utilized by a makeup professional obtains a makeup consultation request from a user of a client device and obtains at least one digital image of a facial region of the user from the client device. A three-dimensional (3D) facial model is generated based on the at least one digital image, and user input is obtained from the makeup professional for applying virtual cosmetic effects to the 3D facial model. The makeup application device generates a command based on the user input from the makeup professional for applying a virtual cosmetic effect and transmits the command to the client device. The command causes a virtual cosmetic effect to be applied to the at least one digital image of the facial region of the user and the at least one digital image to be displayed.
Type: Grant
Filed: May 18, 2018
Date of Patent: October 1, 2019
Assignee: PERFECT CORP.
Inventors: Chia-Che Yang, Po-Ho Wu
-
Patent number: 10409359
Abstract: Generally, the described techniques provide for dividing a frame into bins and grouping the bins according to load information associated with the bins. For example, a device may divide a frame into a plurality of bins. The device may determine load information for each bin of the plurality of bins and order the plurality of bins, based on the load information for each bin, into a plurality of bin groups each associated with a power mode of the device. The device may then execute one or more rendering commands for each bin group of the plurality of bin groups at the power mode associated with that bin group. By providing for bin-level granularity in power-mode allocation, the described techniques may improve rendering performance.
Type: Grant
Filed: January 17, 2018
Date of Patent: September 10, 2019
Assignee: QUALCOMM Incorporated
Inventor: Aditya Nellutla
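The grouping step this abstract describes amounts to bucketing bins by load and executing each bucket under one power mode. A minimal sketch under assumed data shapes; the threshold scheme and mode names here are illustrative, not from the patent:

```python
def group_bins_by_load(bin_loads, thresholds):
    """Sketch: assign each frame bin to the power-mode group whose load
    threshold it meets. `thresholds` is a list of (mode, minimum_load)
    pairs ordered from highest threshold to lowest, so each bin lands
    in the most demanding mode it qualifies for."""
    groups = {mode: [] for mode, _ in thresholds}
    for bin_id, load in enumerate(bin_loads):
        for mode, minimum in thresholds:
            if load >= minimum:
                groups[mode].append(bin_id)
                break
    return groups
```

Rendering commands for each group would then run at that group's power mode, giving the bin-level power granularity the abstract claims improves performance.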
-
Patent number: 10409905
Abstract: Approaches presented herein enable optimizing a display of tabular data from a 2-D table as a folding 3-D table having a plurality of vectors in a GUI. More specifically, a scaling ratio is calculated to fit the plurality of vectors within a display area of the GUI based on a cumulative width of the plurality of vectors and a width of the display area of the GUI. This scaling ratio is applied to a width of at least one vector to yield a modified width of the vector. The 2-D table is then rendered as a 3-D table in which the at least one vector is depicted as a modified vector angled between a horizontal and a vertical axis. This modified vector has an actual width equal to the modified width and a diagonal length equal to the width of the at least one vector.
Type: Grant
Filed: December 21, 2017
Date of Patent: September 10, 2019
Assignee: International Business Machines Corporation
Inventors: Tian Qi Han, Dong Ni, Hua Hong Wang, Hao Zhang
-
Patent number: 10395409
Abstract: A method, system and computer-program product for real-time virtual 3D reconstruction of a live scene in an animation system. The method comprises receiving 3D positional tracking data for a detected live scene by the processor; determining an event by analyzing the 3D positional tracking data by the processor, comprising steps of determining event characteristics from the 3D positional tracking data, receiving pre-defined event characteristics, determining an event probability by comparing the event characteristics to the pre-defined event characteristics, and selecting an event assigned to the event probability; determining a 3D animation data set from a plurality of 3D animation data sets assigned to the selected event and stored in the database by the processor; and providing the 3D animation data set to the output device.
Type: Grant
Filed: October 15, 2018
Date of Patent: August 27, 2019
Assignee: Virtually Live (Switzerland) GmbH
Inventors: Karl-Heinz Hugel, Florian Struck