Space Transformation Patents (Class 345/427)
  • Patent number: 11030821
    Abstract: A target object detecting unit 13 detects a target object existing within a predetermined distance from an HMD 200, using a moving image of the real world captured by a camera 202 installed in the HMD 200 while the HMD 200 displays a virtual space image. An image superimposition unit 15 causes an image of a predetermined range including the target object to be displayed superimposed on the virtual space image. When a user performs a predetermined work with a hand within the range photographed by the camera 202, the captured image of the predetermined range including the hand is displayed superimposed on the virtual space image, so the user can appropriately perform the work while looking at the superimposed captured image, even while wearing the HMD 200.
    Type: Grant
    Filed: September 11, 2019
    Date of Patent: June 8, 2021
    Assignee: Alpha Code Inc.
    Inventor: Takuhiro Mizuno
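The superimposition step above can be sketched as pasting a cropped camera region over the virtual frame. A minimal sketch, assuming plain 2D pixel grids; the function name and the box/offset convention are illustrative, not Alpha Code's implementation:

```python
def superimpose(virtual, captured, box, offset):
    """Overlay the region `box` (x, y, w, h) of a captured frame onto a
    virtual-space image at `offset` (x, y). Images are 2D lists of pixels."""
    x, y, w, h = box
    ox, oy = offset
    out = [row[:] for row in virtual]          # copy the virtual frame
    for r in range(h):
        for c in range(w):
            out[oy + r][ox + c] = captured[y + r][x + c]
    return out

virtual = [[0] * 4 for _ in range(4)]          # blank virtual-space image
camera = [[9] * 4 for _ in range(4)]           # frame containing the user's hand
result = superimpose(virtual, camera, (1, 1, 2, 2), (0, 0))
```

The copy keeps the original virtual frame intact, so the overlay can be re-rendered each frame as the hand moves.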
  • Patent number: 11016644
    Abstract: A method and a terminal device for providing a suspend-button display are disclosed, in order to improve the display flexibility of a suspend button. In these solutions, when a terminal device detects that a target object performs a sliding operation on a suspend button, the terminal device may control the suspend button to present a dynamically changing effect. In this way, the suspend button can present a plurality of display forms, which improves display flexibility and the user's visual experience.
    Type: Grant
    Filed: April 13, 2020
    Date of Patent: May 25, 2021
    Assignee: HUAWEI TECHNOLOGIES CO., LTD.
    Inventors: Jun Liang, Xue Yang, Kang Li, Lina Tao, Haigen Lu, Guangfeng Gao, Yu Song, Xueyan Huang
  • Patent number: 11017607
    Abstract: Methods, systems, and media for enhancing one or more publications by receiving live video captured by a user, the live video comprising video of a publication, the publication comprising copyrighted content; identifying at least one first trigger in the live video, identifying one or more first three-dimensional, interactive media associated with the at least one first trigger and pertaining to the copyrighted content, and presenting to the user the first three-dimensional, interactive media; and identifying at least one second trigger in the first three-dimensional, interactive media, identifying one or more second three-dimensional, interactive media associated with the at least one second trigger and pertaining to the copyrighted content, and presenting to the user the second three-dimensional, interactive media to progressively deepen and enrich the engagement with the copyrighted content of the publication.
    Type: Grant
    Filed: July 2, 2020
    Date of Patent: May 25, 2021
    Assignee: A BIG CHUNK OF MUD LLC
    Inventor: J. Michelle Haines
  • Patent number: 11017747
    Abstract: Aspects of the present disclosure include a computing device for adaptive calibration for dynamic rotation. In an example, a computing device may include an orientation sensor to generate orientation information corresponding to an orientation of the computing device. The computing device may monitor a rotation of the computing device based on the orientation information and determine that a resting rotational angle of the computing device does not match a desired endpoint orientation angle. The computing device may set the endpoint orientation angle equal to the resting rotational angle and map a set of image orientation angles of an image according to the endpoint orientation angle and a second endpoint orientation angle. The computing device may determine the computing device is rotating based on the orientation information and cause dynamic display of the image based on the set of image orientation angles in response to a rotation of the computing device.
    Type: Grant
    Filed: August 16, 2019
    Date of Patent: May 25, 2021
    Assignee: Microsoft Technology Licensing, LLC
    Inventor: Jonathan M. Cain
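The calibration step above can be sketched as adopting the resting angle as the new endpoint and remapping the image orientation angles between the two endpoints. The linear remapping and all names are assumptions, not Microsoft's implementation:

```python
def recalibrate(resting_angle, endpoint_a, endpoint_b, steps=4):
    """If the device came to rest away from the expected endpoint, adopt the
    resting angle as the new endpoint and remap the image orientation angles
    evenly between the two endpoints (hypothetical linear mapping)."""
    if resting_angle != endpoint_a:
        endpoint_a = resting_angle             # adaptive calibration step
    span = endpoint_b - endpoint_a
    angles = [endpoint_a + span * i / (steps - 1) for i in range(steps)]
    return endpoint_a, angles

# Device expected to rest at 0 degrees but settled at 5 degrees.
endpoint, angles = recalibrate(resting_angle=5.0, endpoint_a=0.0, endpoint_b=95.0)
```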
  • Patent number: 11017431
    Abstract: Provided is an information processing apparatus that includes: a search unit that searches, on the basis of position information of a user, the surroundings of the user for a predetermined point specified in accordance with a user status and scenario progress; and an output control unit that performs control such that a voice of a character corresponding to the scenario progress provides guidance to the predetermined point that has been found.
    Type: Grant
    Filed: September 2, 2016
    Date of Patent: May 25, 2021
    Assignee: SONY CORPORATION
    Inventor: Tomohiko Gotoh
  • Patent number: 11010965
    Abstract: An augmented reality device includes a logic machine and a storage machine holding instructions executable by the logic machine to, for one or more real-world surfaces represented in a three-dimensional representation of a real-world environment of the augmented reality device, fit a virtual two-dimensional plane to the real-world surface. A request to place a virtual three-dimensional object on the real-world surface is received. For each of a plurality of candidate placement locations on the virtual two-dimensional plane, the candidate placement location is evaluated as a valid placement location or an invalid placement location for the virtual three-dimensional object. An invalidation mask is generated that defines the valid and invalid placement locations on the virtual two-dimensional plane.
    Type: Grant
    Filed: July 10, 2020
    Date of Patent: May 18, 2021
    Assignee: Microsoft Technology Licensing, LLC
    Inventors: Szymon Piotr Stachniak, Hendrik Mark Langerak, Michelle Brook
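The per-candidate placement test above can be sketched on a grid: a candidate location is valid when the object's footprint stays on the plane and overlaps no blocked cell. The grid model and names are assumptions, not Microsoft's implementation:

```python
def invalidation_mask(plane_w, plane_h, obj_w, obj_h, blocked):
    """Mark each candidate cell on a fitted 2D plane as valid (True) when an
    obj_w x obj_h footprint anchored there fits on the plane and overlaps no
    blocked cell (hypothetical grid model of the placement test)."""
    mask = [[False] * plane_w for _ in range(plane_h)]
    for y in range(plane_h):
        for x in range(plane_w):
            if x + obj_w > plane_w or y + obj_h > plane_h:
                continue                       # footprint falls off the plane
            cells = {(x + dx, y + dy) for dx in range(obj_w) for dy in range(obj_h)}
            mask[y][x] = not (cells & blocked)
    return mask

# 4x4 plane, 2x2 object, one blocked cell in the far corner.
mask = invalidation_mask(4, 4, 2, 2, blocked={(3, 3)})
```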
  • Patent number: 11004224
    Abstract: Vehicle sensors and camera arrays attached to a vehicle collect motion, sensor, and video data along a path driven by the vehicle. A system processes such data to produce high-accuracy structured map data, as might be used to precisely locate a moving vehicle in its environment. Positions are calculated from the sensor data. The positions are updated based on the video data. When loops in the vehicle path are detected, a loop closure error is calculated and used to update the positions as well as to reduce bias in the sensors when calculating future positions. Positions of features in the video are used to create or update structured map data.
    Type: Grant
    Filed: January 22, 2019
    Date of Patent: May 11, 2021
    Assignee: VELODYNE LIDAR USA, INC.
    Inventors: Nikhil Naikal, Alonso Patron-Perez, Alexander Marques, John Kua, Aaron Matthew Bestick, Christopher D. Thompson, Andrei Claudiu Cosma
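Loop-closure correction, as described above, is often illustrated by distributing the closure residual linearly along the trajectory. This 1D sketch is a common textbook simplification, not Velodyne's exact method:

```python
def close_loop(poses):
    """Distribute the loop-closure error linearly along the trajectory: the
    last pose should coincide with the first, so each pose i is pulled back
    by i/(n-1) of the residual (a standard simplification)."""
    n = len(poses)
    err = poses[-1] - poses[0]                 # residual at loop closure
    return [p - err * i / (n - 1) for i, p in enumerate(poses)]

# A drive that should return to the start (0.0) but ends at 0.4 due to drift.
corrected = close_loop([0.0, 1.0, 2.0, 3.0, 0.4])
```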
  • Patent number: 11000252
    Abstract: A device for visualizing a 3D object includes a processor configured to provide an image in a 2D projection plane, and to project an initial 3D object from an initial plane with an inverse projection transformation in the 2D projection plane of the image to achieve an inverse 2D object. The inverse projection transformation is a projection transformation, where a vanishing point is at the other side of the 2D projection plane than the initial object. The processor is further configured to point-mirror the inverse 2D object to achieve a mirrored non-inverse 2D object, and to project the mirrored non-inverse 2D object back to the initial plane to provide a corrected 3D object. Further, the processor is configured to project the corrected 3D object again to the 2D projection plane of the image to provide a final 3D object appearing to be non-inversely projected.
    Type: Grant
    Filed: September 1, 2015
    Date of Patent: May 11, 2021
    Assignee: KONINKLIJKE PHILIPS N.V.
    Inventor: Martijn Van Geloof
  • Patent number: 10997438
    Abstract: An obstacle detection method and apparatus are provided. The obstacle detection method includes: obtaining a to-be-detected image; determining a road surface area and a non-road surface area in the to-be-detected image according to pixel information contained in the to-be-detected image; respectively determining an outermost layer contour line of the road surface area and a contour line of the non-road surface area; and when the contour line of at least one non-road surface area is located in the area contained in the outermost layer contour line of the road surface area, determining a physical object contained in the at least one non-road surface area as an obstacle. The present application applies to the process of detecting obstacles.
    Type: Grant
    Filed: June 17, 2019
    Date of Patent: May 4, 2021
    Assignee: CLOUDMINDS (SHANGHAI) ROBOTICS CO., LTD.
    Inventors: Yibing Nan, Shiguo Lian
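The containment check above can be sketched with a ray-casting point-in-polygon test, flagging a non-road contour that lies inside the outermost road contour. Checking only the contour vertices is a simplification for illustration:

```python
def point_in_polygon(pt, poly):
    """Ray-casting test: does pt lie inside the polygon (list of vertices)?"""
    x, y = pt
    inside = False
    for i in range(len(poly)):
        (x1, y1), (x2, y2) = poly[i], poly[(i + 1) % len(poly)]
        if (y1 > y) != (y2 > y) and x < (x2 - x1) * (y - y1) / (y2 - y1) + x1:
            inside = not inside
    return inside

def is_obstacle(non_road_contour, road_contour):
    """Flag a non-road region as an obstacle when its contour lies within the
    outermost road contour (vertex containment, for brevity)."""
    return all(point_in_polygon(p, road_contour) for p in non_road_contour)

road = [(0, 0), (10, 0), (10, 10), (0, 10)]    # outermost road contour
rock = [(4, 4), (6, 4), (6, 6), (4, 6)]        # non-road region inside the road
```

In practice the contours would come from a segmentation step (e.g. an OpenCV-style contour extraction), not hand-written vertex lists.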
  • Patent number: 10997752
    Abstract: Methods, systems, and non-transitory computer readable storage media are disclosed for utilizing an edge prediction neural network and edge-guided colorization neural network to transform grayscale digital images into colorized digital images. In one or more embodiments, the disclosed systems apply a color edge prediction neural network to a grayscale image to generate a color edge map indicating predicted chrominance edges. The disclosed systems can present the color edge map to a user via a colorization graphical user interface and receive user color points and color edge modifications. The disclosed systems can apply a second neural network, an edge-guided colorization neural network, to the color edge map or a modified edge map, user color points, and the grayscale image to generate an edge-constrained colorized digital image.
    Type: Grant
    Filed: March 9, 2020
    Date of Patent: May 4, 2021
    Assignee: ADOBE INC.
    Inventors: Seungjoo Yoo, Richard Zhang, Matthew Fisher, Jingwan Lu
  • Patent number: 10997745
    Abstract: Embodiments of the present disclosure relate to the field of spatial positioning, and disclose a method and apparatus for spatial positioning based on augmented reality. In the embodiments of the present disclosure, the augmented reality based method for spatial positioning includes: acquiring an offset angle of a dual-camera coordinate system relative to a world coordinate system, wherein the world coordinate system is a coordinate system that is preset by using a known target position of a positioning photosphere as a reference coordinate; acquiring an actual position of the positioning photosphere in the world coordinate system; calculating a coordinate value of the dual camera in the world coordinate system in accordance with the offset angle, the actual position of the positioning photosphere, and the target position; and determining a position of a virtual object in a virtual space in the world coordinate system in accordance with the coordinate value of the dual camera in the world coordinate system.
    Type: Grant
    Filed: December 13, 2019
    Date of Patent: May 4, 2021
    Assignee: HUAQIN TECHNOLOGY CO., LTD.
    Inventors: Pingping Liu, Yuantong Zhang, Zhenghua Wei
  • Patent number: 10984599
    Abstract: In an augmented reality (AR) system, a controller receives an image from a camera, obtains a camera distance index, and receives an instruction to display an object onto the image on a display. In response, the controller retrieves real world dimensions of the object, obtains an AR position of the object in an AR coordinate system, and calculates a distance scaling factor based on the distance index and a depth between a viewpoint and the object. The controller transforms the AR position and the real world dimensions of the object into a display position in a display coordinate system and calculates display dimensions for the object based on the distance scaling factor and the real world dimensions of the object. The controller generates a display object image by scaling the object image to the display dimensions and displays the display object image onto the display at the display position.
    Type: Grant
    Filed: December 17, 2018
    Date of Patent: April 20, 2021
    Inventor: Chi Fai Ho
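One plausible reading of the distance scaling factor described above is a pinhole-style ratio of the distance index to the depth between viewpoint and object; the formula and names below are my assumptions, not necessarily the patent's:

```python
def display_dimensions(real_w, real_h, distance_index, depth):
    """Scale an object's real-world dimensions for display: the farther the
    object sits from the viewpoint, the smaller it appears (pinhole-style
    factor distance_index / depth; hypothetical reading of the abstract)."""
    s = distance_index / depth                 # distance scaling factor
    return real_w * s, real_h * s

# A 2m x 1m object placed 4 units deep, with a camera distance index of 100.
w, h = display_dimensions(real_w=2.0, real_h=1.0, distance_index=100.0, depth=4.0)
far_w, far_h = display_dimensions(2.0, 1.0, 100.0, 8.0)   # same object, deeper
```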
  • Patent number: 10977816
    Abstract: An image processing apparatus includes a calculation circuit configured to calculate a disparity between images, a determination circuit configured to determine a pixel position at which intensive disparity retrieval is to be performed based on distribution statistics of the disparity, an interpolation circuit configured to generate an interpolated image by performing interpolation at pixel positions at which intensive disparity retrieval is to be performed, and an output circuit configured to output a corrected disparity with which the distance to the object is determined. The calculation circuit calculates a first disparity based on the first and second images and generates distribution statistics from the first disparity in the first image, and calculates a second disparity based on the second image and the interpolated image generated from the first image by the interpolation circuit. The output circuit generates the corrected disparity based on differences between the first and second disparities.
    Type: Grant
    Filed: March 4, 2019
    Date of Patent: April 13, 2021
    Assignees: KABUSHIKI KAISHA TOSHIBA, TOSHIBA ELECTRONIC DEVICES & STORAGE CORPORATION
    Inventor: Toru Sano
  • Patent number: 10970915
    Abstract: A setting apparatus sets a virtual viewpoint corresponding to a virtual viewpoint image that is generated based on images obtained by image capturing from a plurality of directions. The setting apparatus includes one or more hardware processors, and one or more memories that store instructions executable by the one or more hardware processors to cause the setting apparatus to determine a common image capturing area that is included within each of a plurality of fields of view, of a plurality of image capturing apparatuses used for obtaining at least a part of the plurality of captured images, and to cause a graphical user interface (GUI), used for setting the virtual viewpoint, to identifiably display the determined common image capturing area. In addition, the setting apparatus sets the virtual viewpoint according to a user input based on the GUI identifiably displaying the determined common image capturing area.
    Type: Grant
    Filed: January 3, 2018
    Date of Patent: April 6, 2021
    Assignee: Canon Kabushiki Kaisha
    Inventor: Takashi Hanamoto
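A simplified stand-in for the common image capturing area described above, assuming each camera's footprint is an axis-aligned rectangle (real fields of view are frusta; all names are illustrative):

```python
def common_area(rects):
    """Intersect axis-aligned capture areas (x0, y0, x1, y1), one per camera;
    returns the common rectangle, or None when no area is seen by all
    cameras (a toy model of the common image capturing area)."""
    x0 = max(r[0] for r in rects)
    y0 = max(r[1] for r in rects)
    x1 = min(r[2] for r in rects)
    y1 = min(r[3] for r in rects)
    return (x0, y0, x1, y1) if x0 < x1 and y0 < y1 else None

# Three cameras with partially overlapping footprints on the ground plane.
area = common_area([(0, 0, 10, 10), (4, 2, 14, 12), (2, 1, 9, 9)])
```

The GUI in the abstract would then highlight this region so the user places the virtual viewpoint somewhere every camera can actually see.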
  • Patent number: 10964095
    Abstract: This patent provides methods for advanced volume rendering. In tandem volume rendering, the volume is divided and a first portion of the volume undergoes a first volume rendering strategy while a second portion undergoes a second volume rendering strategy. Additional volume rendering strategies disclosed herein include preemptive volume rendering.
    Type: Grant
    Filed: September 27, 2020
    Date of Patent: March 30, 2021
    Inventor: Robert Edwin Douglas
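Tandem rendering as described above can be illustrated by splitting a voxel volume along the ray axis and rendering each portion with a different strategy. The two strategies below (maximum-intensity projection and averaging) and the final blend are illustrative choices, not the patent's:

```python
def tandem_render(volume, split):
    """Tandem volume rendering sketch: rays traverse depth (axis 0); slices
    in front of `split` use maximum-intensity projection, the remaining
    slices are averaged, and the two partial renderings are blended."""
    front, back = volume[:split], volume[split:]
    h, w = len(volume[0]), len(volume[0][0])
    mip = [[max(sl[y][x] for sl in front) for x in range(w)] for y in range(h)]
    avg = [[sum(sl[y][x] for sl in back) / len(back) for x in range(w)]
           for y in range(h)]
    # combine the two partial renderings (here: simple per-pixel average)
    return [[(mip[y][x] + avg[y][x]) / 2 for x in range(w)] for y in range(h)]

# Three 2x2 slices along the viewing direction.
volume = [[[1, 2], [3, 4]], [[5, 6], [7, 8]], [[0, 2], [4, 6]]]
image = tandem_render(volume, split=2)
```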
  • Patent number: 10965861
    Abstract: The present technology relates to an image processing device, an image processing method, and a program that enable refocusing accompanied by desired blurring. A light collection processing unit performs a light collection process to generate a processing result image focused at a predetermined distance, by shifting the pixels of images of a plurality of viewpoints in accordance with a shift amount and integrating the pixel values. In the light collection process, the pixels are shifted in accordance with a corrected shift amount obtained by correcting the shift amount with correction coefficients for the respective viewpoints. The present technology can be applied in a case where a refocused image is obtained from images of a plurality of viewpoints, for example.
    Type: Grant
    Filed: October 25, 2017
    Date of Patent: March 30, 2021
    Assignee: SONY CORPORATION
    Inventors: Kengo Hayasaka, Katsuhisa Ito
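A 1D toy of the light collection process described above: each viewpoint's image is shifted by a corrected shift (the base shift times a per-viewpoint correction coefficient, with values assumed here) and the pixel values are integrated:

```python
def refocus(views, base_shift, corrections):
    """Shift each viewpoint's 1D image by its corrected shift
    (base_shift * correction coefficient) and integrate the pixel values,
    in the spirit of the light collection process."""
    width = len(views[0])
    out = [0.0] * width
    for view, corr in zip(views, corrections):
        shift = round(base_shift * corr)
        for x in range(width):
            out[x] += view[(x + shift) % width]   # wrap-around for simplicity
    return [v / len(views) for v in out]

# An object that drifts one pixel per viewpoint; corrected shifts realign it.
views = [[0, 0, 9, 0], [0, 9, 0, 0], [9, 0, 0, 0]]
image = refocus(views, base_shift=1.0, corrections=[0.0, -1.0, -2.0])
```

With the right corrections the object's energy integrates to a sharp peak; wrong corrections would smear it, which is the blurring the patent controls.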
  • Patent number: 10955256
    Abstract: A mapping system, method and computer program product are provided to apply texture, such as windows and doors, to the visual representation of a plurality of buildings. In the context of a method, images of each of a plurality of buildings are analyzed to detect building texture for one or more faces of the buildings. A representation of the building texture is spread to visual representations of one or more faces of a respective building that do not have a detected texture. The building texture may be spread from another face of the same building that has detected texture and is of a corresponding size to the face of the building without detected texture. Or, the building texture may be spread from a face of a neighboring building has a detected texture and is of a corresponding size to the face of the building without detected texture.
    Type: Grant
    Filed: October 26, 2018
    Date of Patent: March 23, 2021
    Assignee: HERE GLOBAL B.V.
    Inventors: Kenneth M. Smith, Jeffrey R. Bach, Mark Thompson, Mitchel Deason, Marc Fredrickson
  • Patent number: 10957027
    Abstract: Techniques are discussed for generating, for presentation to a viewer, a virtual view of a scene from a position between the positions of known input images. Such techniques build a virtual intermediate gradient image as a combination of at least an interpolated virtual image at the position, an interpolated horizontal gradient map at the position, and an interpolated vertical gradient map at the position. A convolution mask that approximates an inverse of a linear combination of at least the horizontal and vertical convolution matrices representative of gradient detection in the input images is then applied to the intermediate gradient image to generate a final virtual image for the position.
    Type: Grant
    Filed: March 26, 2019
    Date of Patent: March 23, 2021
    Assignee: Intel Corporation
    Inventors: Vladan Popovic, Fan Zhang, Oscar Nestares, Kalpana Seshadrinathan
  • Patent number: 10944961
    Abstract: Systems and methods for dynamically calibrating an array camera to accommodate variations in geometry that can occur throughout its operational life are disclosed. The dynamic calibration processes can include acquiring a set of images of a scene and identifying corresponding features within the images. Geometric calibration data can be used to rectify the images and determine residual vectors for the geometric calibration data at locations where corresponding features are observed. The residual vectors can then be used to determine updated geometric calibration data for the camera array. In several embodiments, the residual vectors are used to generate a residual vector calibration data field that updates the geometric calibration data. In many embodiments, the residual vectors are used to select a set of geometric calibration data from amongst a number of different sets of geometric calibration data that is the best fit for the current geometry of the camera array.
    Type: Grant
    Filed: April 1, 2019
    Date of Patent: March 9, 2021
    Assignee: FotoNation Limited
    Inventors: Florian Ciurea, Dan Lelescu, Priyam Chatterjee
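Selecting the best-fit calibration set, as described above, can be sketched as minimizing the total residual magnitude left after rectification; the residual table and all names below are made up for illustration:

```python
def best_calibration(candidate_sets, residuals_for):
    """Pick the geometric calibration set whose rectification leaves the
    smallest total residual across observed feature correspondences
    (residuals_for is a caller-supplied function)."""
    def total(cal):
        return sum(abs(r) for r in residuals_for(cal))
    return min(candidate_sets, key=total)

# Toy residuals: each calibration set maps to leftover feature misalignments,
# e.g. calibrations captured at different operating temperatures.
residual_table = {"factory": [0.9, 1.1], "warm": [0.2, 0.1], "cold": [0.5, 0.4]}
best = best_calibration(residual_table, lambda cal: residual_table[cal])
```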
  • Patent number: 10942977
    Abstract: Techniques are provided for enhancing the targeting, selection, review, and presentation of online social network (OSN) data pertinent to an evidence context. In one formulation, a concept taxonomy is determined from a request frame relating to targeted OSN data. The concept taxonomy is transformed into query terms for searching targeted OSN data stored in an OSN data collection; a review set of OSN data entities aligned with the request frame is formulated. In another formulation, a presentation model is generated corresponding to an evidence context which is populated with OSN data entities in a review set; the presentation model is displayed on a user interface. Indications of relevance by a user on the user interface change content or properties of the OSN data entities in the review set.
    Type: Grant
    Filed: August 16, 2018
    Date of Patent: March 9, 2021
    Inventors: Kevin L Miller, Jon Mills, William F Hamilton, Andrew Z Adkins, III, Glenn Sturm
  • Patent number: 10944952
    Abstract: An apparatus comprises a receiver (101) that receives a light intensity image, a confidence map, and an image property map. A filter unit (103) is arranged to filter the image property map in response to the light intensity image and the confidence map. Specifically, for a first position, the filter unit (103) determines a combined neighborhood image property value in response to a weighted combination of neighborhood image property values in a neighborhood around the first position, the weight for a first neighborhood image property value at a second position being dependent on a confidence value for the first neighborhood image property value and a difference between light intensity values for the first position and for the second position; and determines a first filtered image property value for the first position as a combination of a first image property value at the first position in the image property map and the combined neighborhood image property value.
    Type: Grant
    Filed: January 31, 2018
    Date of Patent: March 9, 2021
    Assignee: Koninklijke Philips N.V.
    Inventors: Christiaan Varekamp, Patrick Luc Els Vandewalle
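A 1D sketch of the weighted combination above: each neighbor's weight depends on its confidence and on the light-intensity difference from the first position. The Gaussian falloff and the 50/50 final blend are my assumptions; the abstract does not fix these forms:

```python
import math

def filtered_value(pos, intensity, prop, confidence, radius=1, sigma=10.0):
    """Combined neighborhood value for one position in a 1D image-property
    map: each neighbor is weighted by its confidence and by how close its
    light intensity is to the intensity at `pos` (Gaussian-style falloff)."""
    num = den = 0.0
    for i in range(max(0, pos - radius), min(len(prop), pos + radius + 1)):
        w = confidence[i] * math.exp(-((intensity[i] - intensity[pos]) / sigma) ** 2)
        num += w * prop[i]
        den += w
    combined = num / den
    # final value: blend the map's own value with the combined neighborhood value
    return 0.5 * prop[pos] + 0.5 * combined

intensity = [100, 100, 30]     # light-intensity edge between positions 1 and 2
depth_map = [2.0, 2.0, 9.0]    # image property map (e.g. depth)
confidence = [1.0, 1.0, 1.0]
```

Because the neighbor at the intensity edge gets a near-zero weight, the depth value does not bleed across the edge, which is the point of cross-filtering with the light intensity image.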
  • Patent number: 10937232
    Abstract: In some examples, a system includes a range sensor configured to receive signals reflected from objects in an environment and generate two or more successive scans of the environment at different times. The system also includes a camera configured to capture two or more successive camera images of the environment, wherein each of the two or more successive camera images of the environment is captured by the camera at a different location within the environment. The system further includes processing circuitry configured to generate a three-dimensional map of the environment based on the two or more successive scans and the two or more successive camera images.
    Type: Grant
    Filed: June 26, 2019
    Date of Patent: March 2, 2021
    Assignee: Honeywell International Inc.
    Inventor: Jinming Huang
  • Patent number: 10939001
    Abstract: A Multifunctional Peripheral (MFP) includes a display unit that displays information and a printer that prints an image on a recording medium, and when data printable by the printer exists in a specific location, a preview combination home screen including a preview of an image based on the data is displayed on the display unit.
    Type: Grant
    Filed: December 9, 2019
    Date of Patent: March 2, 2021
    Assignee: KYOCERA DOCUMENT SOLUTIONS INC.
    Inventor: Hiroshi Yoshimoto
  • Patent number: 10938548
    Abstract: An event interface system facilitates the creation and deployment of a first blockchain object and a second blockchain object on a first blockchain and a second blockchain respectively. The system also provides an interface between the first blockchain object and the second blockchain object via the event hub. Additionally, the system can allow interaction between blockchain objects on a private blockchain and a participant on the system.
    Type: Grant
    Filed: May 23, 2018
    Date of Patent: March 2, 2021
    Assignee: MICROSOFT TECHNOLOGY LICENSING, LLC
    Inventors: Marc E. Mercuri, Zeyad Rajabi, Eric I. Maino
  • Patent number: 10932743
    Abstract: A method for determining image values in marked pixels of at least one projection image is provided. The at least one projection image is part of a projection image set provided for reconstruction of a three-dimensional image dataset and acquired in each case using a projection geometry in an acquisition procedure. The image values are determined through evaluation of at least one epipolar consistency condition that is to be at least approximately fulfilled, that results from the projection geometries of the different projection images of the projection image set, and that requires the agreement of two transformation values in transformation images determined from different projection images by Radon transform and subsequent derivation as a condition transformation.
    Type: Grant
    Filed: June 7, 2019
    Date of Patent: March 2, 2021
    Assignee: Siemens Healthcare GmbH
    Inventor: Michael Manhart
  • Patent number: 10922051
    Abstract: A computing device may include a processor, a plurality of input devices communicatively coupled to the processor of the computing device, a voice recognition device to detect audible input from a user, and a profile manager to manage application specific profiles for the plurality of input devices and, when executed by the processor, establish an application specific profile based on the audible input received by the voice recognition device.
    Type: Grant
    Filed: July 5, 2017
    Date of Patent: February 16, 2021
    Assignee: Hewlett-Packard Development Company, L.P.
    Inventor: David H Hanes
  • Patent number: 10922832
    Abstract: Embodiments described herein provide an apparatus comprising a processor to divide a first image projection into a plurality of regions, the plurality of regions comprising a plurality of points, determine an accuracy rating for the plurality of regions, and apply one of a first rendering technique to a first region in the plurality of regions when the accuracy rating for the first region in the plurality of regions fails to meet an accuracy threshold or a second rendering technique to the first region in the plurality of regions when the accuracy rating for the first region in the plurality of regions meets an accuracy threshold, and a memory communicatively coupled to the processor. Other embodiments may be described and claimed.
    Type: Grant
    Filed: July 31, 2018
    Date of Patent: February 16, 2021
    Assignee: INTEL CORPORATION
    Inventors: Jason Tanner, Kai Xiao, Jill Boyce, Narayan Biswal, Jeffrey Tripp
  • Patent number: 10924721
    Abstract: Apparatus, systems, methods, and articles of manufacture are disclosed for assigning a color to a point in three-dimensional (3D) video. An example system includes an aggregator to access data from real cameras, the data including spatial coordinates and colors for a plurality of two-dimensional (2D) points in video data captured by the real cameras. The aggregator is to create a point cloud correlating the 2D points to the 3D points. The example system also includes a selector to select a subset of the real cameras based on a position of a virtual camera. In addition, the example system includes an analyzer to: select a point from the point cloud in a field of view of the virtual camera; select one of the subset of real cameras having a non-occluded view of the point and a perspective closest to that of the virtual camera; and assign a color to the point based on color data associated with the selected one of the real cameras.
    Type: Grant
    Filed: December 20, 2018
    Date of Patent: February 16, 2021
    Assignee: Intel Corporation
    Inventors: Daniel Rivas Perpen, Diego Prilusky
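Camera selection for point coloring, as described above, can be sketched as filtering out occluded cameras and maximizing alignment with the virtual camera's viewing direction (dot product); the camera records are hypothetical:

```python
def pick_camera(cameras, virtual_dir):
    """Choose the real camera that sees the point un-occluded and whose
    viewing direction best matches the virtual camera's (largest dot
    product); the point is then colored from that camera's pixel data."""
    def dot(a, b):
        return sum(x * y for x, y in zip(a, b))
    visible = [c for c in cameras if not c["occluded"]]
    return max(visible, key=lambda c: dot(c["dir"], virtual_dir))

cameras = [
    {"name": "cam_left",  "dir": (1.0, 0.0, 0.0), "occluded": False, "color": (200, 0, 0)},
    {"name": "cam_front", "dir": (0.0, 0.0, 1.0), "occluded": False, "color": (0, 200, 0)},
    {"name": "cam_up",    "dir": (0.0, 0.7, 0.7), "occluded": True,  "color": (0, 0, 200)},
]
chosen = pick_camera(cameras, virtual_dir=(0.0, 0.6, 0.8))
```

Note that `cam_up` aligns best with the virtual camera but is occluded, so the next-best visible camera supplies the color.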
  • Patent number: 10921878
    Abstract: In one embodiment, a method includes receiving, from the first user, a request to create a joint virtual space to use with one or more second users, determining a first area in a first room associated with the first user based at least in part on space limitations associated with the first room and locations of one or more items in the first room, retrieving information associated with one or more second rooms for each of the second users, creating, based on the first area of the first room and the information associated with each of the second rooms, the joint virtual space, and providing access to the joint virtual space to the first user and each of the one or more second users.
    Type: Grant
    Filed: December 27, 2018
    Date of Patent: February 16, 2021
    Assignee: Facebook, Inc.
    Inventors: Gioacchino Noris, Panya Inversin, James Allan Booth, Sarthak Ray, Alessia Marra
  • Patent number: 10909748
    Abstract: A projection device viewpoint image of a three-dimensional projection target is acquired, a three-dimensional model corresponding to the projection target is prepared as projection contents, the three-dimensional model is converted into a two-dimensional image that coincides with the projection device viewpoint image, and the two-dimensional image that coincides with the projection device viewpoint image is projected onto the projection target.
    Type: Grant
    Filed: March 30, 2017
    Date of Patent: February 2, 2021
    Assignee: PANASONIC INTELLECTUAL PROPERTY MANAGEMENT CO., LTD.
    Inventors: Hideyuki Nakamura, Ryuji Fuchikami, Ikuo Fuchigami
  • Patent number: 10891787
    Abstract: A biological model creation apparatus sets a plurality of control points respectively corresponding to a plurality of target points set on a plurality of valve annuli of a specified heart, on a plurality of valve annuli in a mesh model of a heart. Then, the biological model creation apparatus determines the positions of the control points in the mesh model on the basis of first and second evaluation values regarding the positions of the plurality of control points. The first evaluation value indicates a degree of matching to relative positions among target points belonging to the same valve annulus. The second evaluation value indicates a degree of matching to relative positions among target points belonging to different valve annuli. Then, the biological model creation apparatus deforms the mesh model such that the positions of the plurality of control points coincide with the positions of their corresponding target points.
    Type: Grant
    Filed: May 20, 2019
    Date of Patent: January 12, 2021
    Assignees: FUJITSU LIMITED, THE UNIVERSITY OF TOKYO
    Inventors: Kohei Hatanaka, Toshiaki Hisada, Seiryo Sugiura, Takumi Washio, Jun-ichi Okada
  • Patent number: 10881964
    Abstract: Various aspects of the subject technology relate to systems, methods, and machine-readable media for automated detection of emergent behaviors in interactive agents of an interactive environment. The disclosed system represents an artificial intelligence based entity that utilizes a trained machine-learning-based clustering algorithm to group users together based on similarities in behavior. The clusters are processed based on a determination of the type of activity of the clustered users. In order to identify new categories of behavior and to label those new categories, a separate cluster analysis is performed using interaction data obtained at a subsequent time. The additional cluster analysis determines any change in behavior between the clustered sets and/or change in the number of users in a cluster. The identification of emergent user behavior enables the subject system to treat users differently based on their in-game behavior and to adapt in near real-time to changes in their behavior.
    Type: Grant
    Filed: September 13, 2018
    Date of Patent: January 5, 2021
    Assignee: Electronic Arts Inc.
    Inventor: Thomas Bradley Dills
  • Patent number: 10878638
    Abstract: In one embodiment, a computing system accesses a tracking record of a real-world object during a first movement session. The tracking record comprises a plurality of locations of the real-world object relative to a first user. The system determines a display position of a virtual object representing the real-world object on a display screen of a second user, based on the tracking record of the real-world object and the current location of the second user. Based on the determined position of the virtual object on the display screen, the system displays the virtual object on the display screen.
    Type: Grant
    Filed: January 15, 2020
    Date of Patent: December 29, 2020
    Assignee: Facebook, Inc.
    Inventor: David Michael Viner
  • Patent number: 10872374
    Abstract: Intuitively understandable visual representations of personal budgeting information are provided by creating proportionate bubble graphics for portions of a budget, each graphic having a visual size depiction proportionate to its percentage of the total budget being considered.
    Type: Grant
    Filed: July 12, 2012
    Date of Patent: December 22, 2020
    Assignee: MX TECHNOLOGIES, INC.
    Inventor: John Ryan Caldwell
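A sketch of the proportionate-bubble idea: one plausible reading of "visual size depiction proportionate to its percentage of the total budget" is to make each bubble's *area* proportional to its share, which means the radius grows with the square root of the share. That area-based mapping is an assumption of this example, not a detail stated in the abstract.

```python
import math

def bubble_radii(budget, max_radius=100.0):
    """Scale each category's bubble so its area is proportional to its
    share of the total budget (radius ~ sqrt(share))."""
    total = sum(budget.values())
    return {name: max_radius * math.sqrt(amount / total)
            for name, amount in budget.items()}

radii = bubble_radii({"rent": 1200, "food": 300, "savings": 500})
```

With this mapping, a category with 4x the spending of another gets a bubble with 4x the area (2x the radius), which matches how viewers tend to judge circle sizes.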
  • Patent number: 10867429
    Abstract: Methods and systems are described in some examples for changing the traversal of an acceleration data structure in a highly dynamic query-specific manner, with each query specifying test parameters, a test opcode and a mapping of test results to actions. In an example ray tracing implementation, traversal of a bounding volume hierarchy by a ray is performed with the default behavior of the traversal being changed in accordance with results of a test performed using the test opcode and test parameters specified in the ray data structure and another test parameter specified in a node of the bounding volume hierarchy. In an example implementation a traversal coprocessor is configured to perform the traversal of the bounding volume hierarchy.
    Type: Grant
    Filed: August 10, 2018
    Date of Patent: December 15, 2020
    Assignee: NVIDIA Corporation
    Inventors: Samuli Laine, Timo Aila, Tero Karras, Gregory Muthler, William Parsons Newhall, Jr., Ronald Charles Babich, Jr., Craig Kolb, Ignacio Llamas, John Burgess
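The query-programmable traversal described above can be illustrated with a toy 1-D "hierarchy": each ray carries a test opcode, a test parameter, and a result-to-action map, while each node stores its own test parameter. The node class, opcode table, and interval scene here are all invented for illustration; the patent describes a hardware traversal coprocessor operating on real bounding volume hierarchies.

```python
class Node:
    def __init__(self, lo, hi, param, children=(), prim=None):
        self.lo, self.hi = lo, hi   # 1-D bounding interval
        self.param = param          # per-node test parameter
        self.children = children
        self.prim = prim            # leaf payload, if any

OPCODES = {
    # node passes the test when its parameter is at or below the ray's threshold
    "leq": lambda ray_param, node_param: node_param <= ray_param,
}

def traverse(node, t, ray):
    """Collect primitives containing point t, letting the ray's opcode and
    result->action map override the default descend-on-overlap behavior."""
    if not (node.lo <= t <= node.hi):
        return []
    result = OPCODES[ray["opcode"]](ray["param"], node.param)
    action = ray["actions"][result]
    if action == "cull":
        return []
    hits = [node.prim] if node.prim is not None else []
    for child in node.children:
        hits += traverse(child, t, ray)
    return hits

tree = Node(0, 10, 0.0, children=(
    Node(0, 6, 0.2, prim="A"),
    Node(4, 10, 0.9, prim="B"),
))
ray = {"opcode": "leq", "param": 0.5,
       "actions": {True: "descend", False: "cull"}}
hits = traverse(tree, 5, ray)  # node B (param 0.9 > 0.5) is culled
```

The point is that the same tree yields different traversals for different rays: raising the ray's threshold parameter to 1.0 lets the same query reach both leaves.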
  • Patent number: 10860748
    Abstract: A system includes a display, a processor, and memory storing instructions that cause the processor to receive a request to scale one or more objects of a model depicted on the display, identify the one or more objects of the model, determine one or more model parameters of the model, such that the model parameters include information indicative of the one or more objects, the model, the display, or any combination thereof, calculate a size for the one or more objects of the model based on the one or more model parameters, and scale the one or more objects based on the size for the objects.
    Type: Grant
    Filed: March 8, 2017
    Date of Patent: December 8, 2020
    Assignee: General Electric Company
    Inventors: Brian Christopher Wheeler, Jason Anton Byers, Prashant Madhukar Kulkarni
  • Patent number: 10852816
    Abstract: A method for improving user interaction with a virtual environment includes presenting a virtual environment to a user, measuring a first position of a user's gaze relative to a virtual environment, receiving a magnification input, and changing a magnification of the virtual environment centered on the first position and based on the magnification input.
    Type: Grant
    Filed: April 20, 2018
    Date of Patent: December 1, 2020
    Assignee: Microsoft Technology Licensing, LLC
    Inventor: Sophie Stellmach
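The gaze-centered magnification above reduces to a standard scale-about-a-point transform: the point under the gaze stays fixed while everything else spreads away from it. A minimal 2-D sketch (the function name and coordinate convention are mine, not from the patent):

```python
def zoom_about(point, center, magnification):
    """Scale a 2-D point about a fixed center (e.g. the measured gaze
    position), so content at the center stays put under magnification."""
    px, py = point
    cx, cy = center
    return (cx + magnification * (px - cx),
            cy + magnification * (py - cy))

# A point 2 right and 4 up from the gaze center doubles its offset at 2x zoom.
moved = zoom_about((4, 6), center=(2, 2), magnification=2.0)
# The gaze point itself is invariant at any magnification.
fixed = zoom_about((2, 2), center=(2, 2), magnification=5.0)
```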
  • Patent number: 10853960
    Abstract: Provided is a stereo matching method and apparatus. A stereo matching apparatus may generate a disparity map by transforming a disparity map of a previous frame based on determined motion information of a camera between the previous frame and the current frame, calculate a confidence for the generated disparity map, and adjust a disparity map corresponding to the current frame based on the confidence and the generated disparity map.
    Type: Grant
    Filed: January 12, 2018
    Date of Patent: December 1, 2020
    Assignees: Samsung Electronics Co., Ltd., Gwangju Institute of Science and Technology
    Inventors: Kee Chang Lee, Wonhee Lee, Kuk-Jin Yoon, Yongho Shin, Yeong Won Kim, Jin Ho Song
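The adjustment step in the abstract above, blending a motion-warped previous disparity map with the current one according to a confidence map, can be sketched as a per-pixel convex combination. The specific blend rule is an assumption for illustration; the patent only says the current map is "adjusted" based on the confidence and the generated (warped) map.

```python
def refine_disparity(current, warped_prev, confidence):
    """Blend the current frame's disparity with the motion-warped previous
    frame's disparity, weighting the warped estimate by its per-pixel
    confidence in [0, 1]."""
    return [
        [c * w + (1.0 - c) * d for d, w, c in zip(row_d, row_w, row_c)]
        for row_d, row_w, row_c in zip(current, warped_prev, confidence)
    ]

# Full confidence keeps the warped value; zero confidence keeps the current one.
out = refine_disparity([[10.0, 20.0]], [[12.0, 18.0]], [[1.0, 0.0]])
```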
  • Patent number: 10848747
    Abstract: An exemplary depth capture system emits a first structured light pattern onto a surface of an object included in a real-world scene using a first structured light emitter included within the depth capture system. The depth capture system also emits, concurrently with the emitting of the first structured light pattern, a second structured light pattern onto the surface of the object using a second structured light emitter included within the depth capture system. The emitting of the second structured light pattern is different than the emitting of the first structured light pattern. The depth capture system detects the first and second structured light patterns using one or more optical sensors included within the depth capture system. Based on the detection of the structured light patterns, the depth capture system generates depth data representative of the surface of the object included in the real-world scene.
    Type: Grant
    Filed: June 27, 2018
    Date of Patent: November 24, 2020
    Assignee: Verizon Patent and Licensing Inc.
    Inventors: Steven L. Smith, Syed Meeran Kamal, Yongsheng Pan, Lama Hewage Ravi Prathapa Chandrasiri, Sergey Virodov, Mohammad Raheel Khalid
  • Patent number: 10839064
    Abstract: Methods and systems for securely entering credentials via a head-mounted display device are described herein. A display of a head-mounted device may display, in a first arrangement, a plurality of graphical user interface (GUI) elements. Each of the plurality of GUI elements may indicate a different character of a plurality of characters. The head-mounted device may receive a first user selection of a GUI element from the plurality of GUI elements displayed in the first arrangement. The method may comprise storing the first user selection of the GUI element. After receiving the first user selection of the GUI element, the plurality of GUI elements may be displayed on the display of the head-mounted device and in a second arrangement different from the first arrangement. The head-mounted device may receive a second user selection of a GUI element from the plurality of GUI elements displayed in the second arrangement.
    Type: Grant
    Filed: June 20, 2017
    Date of Patent: November 17, 2020
    Assignee: Citrix Systems, Inc.
    Inventor: Shaunak Mistry
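The selection flow in the abstract, pick a character from one arrangement, then reshuffle before the next pick, can be simulated in a few lines. The flat position indexing and the use of `random.shuffle` are simplifications of whatever GUI layout and randomization the actual device uses.

```python
import random

def credential_entry(characters, selections, rng):
    """Simulate shuffled-keypad entry: each selection is a position in the
    current on-screen arrangement, and the arrangement is reshuffled after
    every selection so positions never map to the same character twice."""
    typed = []
    arrangement = list(characters)
    rng.shuffle(arrangement)            # first arrangement
    for pos in selections:
        typed.append(arrangement[pos])  # user selects the GUI element at pos
        rng.shuffle(arrangement)        # next, different arrangement
    return "".join(typed)

secret = credential_entry("0123456789", [0, 1, 2, 3], random.Random(0))
```

Because an observer only sees positions, not characters, shoulder-surfing the selection sequence reveals nothing without also seeing each arrangement.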
  • Patent number: 10824320
    Abstract: Systems, methods, and non-transitory computer-readable media can determine at least one request to access a content item, wherein the content item was composed using a set of camera feeds that capture at least one scene from a set of different positions. A viewport interface can be provided on a display screen of the computing device through which playback of the content item is presented, the viewport interface being configured to allow a user operating the computing device to virtually navigate the at least one scene by changing i) a direction of the viewport interface relative to the scene or ii) a zoom level of the viewport interface. A navigation indicator can be provided in the viewport interface, the navigation indicator being configured to visually indicate any changes to a respective direction and zoom level of the viewport interface during playback of the content item.
    Type: Grant
    Filed: March 7, 2016
    Date of Patent: November 3, 2020
    Assignee: Facebook, Inc.
    Inventors: Joyce Hsu, Charles Matthew Sutton, Jaime Leonardo Rovira, Anning Hu, Chetan Parag Gupta, Cliff Warren
  • Patent number: 10818071
    Abstract: Techniques for rendering images include generating signed distance values (SDVs) along a ray from a specified viewpoint in terms of projected distances along that ray from given depth images. For each pixel in an image from the perspective of the specified viewpoint, a ray is traced into the three-dimensional scene represented by the image. An iterative step is performed along the ray, obtaining in each iteration a three-dimensional world-space point p. The result is the signed distance sj as measured from depth view Dj. If the absolute value of the signed distance sj is greater than some truncation threshold parameter, the signed distance sj is replaced by a special undefined value. The defined signed-distance values are aggregated to obtain an overall signed distance s. Finally, the roots or zero set (isosurface) of the signed distance field is determined.
    Type: Grant
    Filed: July 26, 2019
    Date of Patent: October 27, 2020
    Assignee: Google LLC
    Inventors: Hugues Hoppe, Ricardo Martin Brualla, Harris Nover
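The per-ray aggregation described above can be sketched in 1-D: at each step along the ray, each depth view contributes a projected signed distance, values beyond the truncation threshold are dropped, the survivors are averaged, and a sign change between steps locates the isosurface. Reducing each depth view to a single "depth along this ray" scalar, and averaging as the aggregation, are simplifications for illustration.

```python
def signed_distance(t, depth_views, trunc):
    """Aggregate per-view signed distances (view depth minus ray distance),
    skipping views whose |distance| exceeds the truncation threshold.
    Returns None when no view contributes (the 'undefined' value)."""
    vals = [d - t for d in depth_views if abs(d - t) <= trunc]
    return sum(vals) / len(vals) if vals else None

def find_surface(depth_views, trunc=1.0, step=0.05, t_max=10.0):
    """March along the ray and return the first zero crossing of the
    aggregated signed distance field, refined by linear interpolation."""
    prev_t, prev_s = None, None
    t = 0.0
    while t <= t_max:
        s = signed_distance(t, depth_views, trunc)
        if s is not None and prev_s is not None and prev_s > 0 >= s:
            # interpolate between the bracketing samples
            return prev_t + step * prev_s / (prev_s - s)
        if s is not None:
            prev_t, prev_s = t, s
        t += step
    return None

hit = find_surface([4.0, 4.2, 3.8])  # three views agree the surface is near t=4
```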
  • Patent number: 10809071
    Abstract: Provided is a process executed by a robot, including: traversing, to a first position, a first distance in a backward direction; after traversing the first distance, rotating 180 degrees in a first rotation; after the first rotation, traversing, to a second position, a second distance in a second direction; and after traversing the second distance, rotating 180 degrees in a second rotation such that the field of view of the sensor points in the first direction.
    Type: Grant
    Filed: October 17, 2018
    Date of Patent: October 20, 2020
    Assignee: AI Incorporated
    Inventors: Ali Ebrahimi Afrouzi, Sebastian Schweigert, Lukas Fath, Chen Zhang
  • Patent number: 10805549
    Abstract: A method for performing auto-exposure (AE) control in a depth sensing system includes: performing a rough pattern detection on a first reference frame to obtain a rough pattern detection result regarding a known pattern that is projected by the depth sensing system and calculating a pattern brightness mean; performing a depth decoding process with respect to a second reference frame that is derived based on the pattern brightness mean, thereby to obtain a depth decoding result; performing a mapping process according to the depth decoding result to obtain a mapping result; performing a fine pattern detection on the second reference frame according to the mapping result and the rough pattern detection result to obtain a fine pattern detection result; calculating a frame brightness mean according to the fine pattern detection result; and performing an AE control process over the depth sensing system according to the frame brightness mean.
    Type: Grant
    Filed: August 20, 2019
    Date of Patent: October 13, 2020
    Assignee: HIMAX TECHNOLOGIES LIMITED
    Inventor: Chin-Jung Tsai
  • Patent number: 10802600
    Abstract: The present technology relates to artificial reality systems. Such systems let a user create projections that specify object interactions. For example, when a user wishes to interact with an object outside her immediate reach, she can use a projection to select, move, or otherwise interact with the distant object. The present technology also includes object selection techniques for identifying and disambiguating between objects, allowing a user to select objects both near and distant from the user. Yet further aspects of the present technology include techniques for interpreting various bimanual (two-handed) gestures for interacting with objects. The present technology further includes a model for differentiating between global and local modes for, e.g., providing different input modalities or interpretations of user gestures.
    Type: Grant
    Filed: September 20, 2019
    Date of Patent: October 13, 2020
    Assignee: Facebook Technologies, LLC
    Inventors: Jonathan Ravasz, Etienne Pinchon, Adam Tibor Varga, Jasper Stevens, Robert Ellis, Jonah Jones, Evgenii Krivoruchko
  • Patent number: 10783710
    Abstract: According to embodiments of the invention, methods and a computer system for configuring navigational controls in a geometric environment are disclosed. The method may include obtaining a data set for geometric representation on a display, forming one or more reference surfaces, calculating a fit score and a confidence score using one or more of the reference surfaces, and configuring the navigational system to a control scheme when a computational operation on the fit score and the confidence score is outside of a threshold value. The control scheme may be a geometric control scheme, a planar control scheme, or a roaming control scheme.
    Type: Grant
    Filed: April 8, 2019
    Date of Patent: September 22, 2020
    Assignee: International Business Machines Corporation
    Inventor: Raymond S. Glover
  • Patent number: 10778758
    Abstract: A method for scalable and secure vehicle to everything communications may include receiving, at a communications management device, telematics data from a plurality of vehicles. The method may further include segregating the plurality of vehicles into initial clusters based on the telematics data, and dividing the initial clusters into binary space partitions having various sizes, where the size of each binary space partition is based on a maximum number of vehicles in the binary space partition. The method may include determining, for a selected vehicle within an associated binary space partition, a metric representing a suitability of communications between the selected vehicle and other vehicles in the associated binary space partition, receiving updated telematics data from the plurality of vehicles, shifting the initial clusters based on the updated telematics data, and updating the binary space partition based on the shifted clusters.
    Type: Grant
    Filed: June 30, 2017
    Date of Patent: September 15, 2020
    Assignee: Verizon Patent and Licensing, Inc.
    Inventors: Ming Chen, Pramod Kalyanasundaram, Jianxiu Hao, Dahai Ren
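The partitioning step in the abstract above, dividing clusters into binary space partitions whose size is capped by a maximum vehicle count, can be sketched as a recursive median split on alternating axes. The median-split policy and 2-D positions are assumptions for this example; the patent only specifies that each partition's size is bounded by a maximum number of vehicles.

```python
def bsp_partition(vehicles, max_per_cell, axis=0):
    """Recursively split (x, y) vehicle positions along alternating axes
    until no partition holds more than max_per_cell vehicles."""
    if len(vehicles) <= max_per_cell:
        return [vehicles]
    ordered = sorted(vehicles, key=lambda v: v[axis])
    mid = len(ordered) // 2
    next_axis = 1 - axis
    return (bsp_partition(ordered[:mid], max_per_cell, next_axis) +
            bsp_partition(ordered[mid:], max_per_cell, next_axis))

cells = bsp_partition([(0, 0), (1, 5), (9, 2), (8, 7), (4, 4)], max_per_cell=2)
```

Capping partition size keeps the per-partition suitability metric cheap to compute as vehicles stream updated telematics data, since each vehicle is only compared against its own partition.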
  • Patent number: 10769667
    Abstract: Aspects of the subject disclosure may include, for example, a processing system including a processor and a memory that stores executable instructions that, when executed by the processing system, facilitate performance of operations, including detecting a vehicle, obtaining a demographic profile for an occupant of the vehicle, obtaining a directed advertisement for the vehicle based on the demographic profile of the occupant, generating a message for the vehicle based on the directed advertisement, and broadcasting the message to the vehicle, wherein an on-board device of the vehicle receives the message. Other embodiments are disclosed.
    Type: Grant
    Filed: October 25, 2016
    Date of Patent: September 8, 2020
    Assignee: AT&T Intellectual Property I, L.P.
    Inventors: Eric Zavesky, Simon D. Byers, Brian Amento, Kermit Hal Purdy
  • Patent number: 10771760
    Abstract: An information processing device decides a viewpoint position and generates a virtual viewpoint image based on the decided viewpoint position by using a plurality of images shot by a plurality of imaging apparatuses. The information processing device includes a determining unit configured to determine a scene related to the virtual viewpoint image to be generated, and a deciding unit configured to decide the viewpoint position related to the virtual viewpoint image in the scene determined by the determining unit, based on the scene determined by the determining unit.
    Type: Grant
    Filed: August 27, 2018
    Date of Patent: September 8, 2020
    Assignee: Canon Kabushiki Kaisha
    Inventor: Kazuna Maruyama
  • Patent number: 10764565
    Abstract: An image taken of a three-dimensional virtual space including a virtual designating object and an object arranged therein is displayed. When the position of the virtual designating object does not coincide with the object, the virtual designating object is arranged in a predetermined orientation. The position of the virtual designating object is determined based on an acquired designated position on a two-dimensional image, which is generated from the three-dimensional virtual space taken by a virtual camera. When the position of the virtual designating object corresponds to the object, the displayed orientation of the virtual designating object is based on the position.
    Type: Grant
    Filed: October 16, 2019
    Date of Patent: September 1, 2020
    Assignee: NINTENDO CO., LTD.
    Inventors: Yasuyuki Oyagi, Junji Morii, Taku Matoba, Katsuhisa Sato