Abstract: A system and method of operation of an augmented reality system includes: a position sensor (140) for calculating a current location (144); an orientation sensor (142), coupled to the position sensor (140), for calculating a current orientation (146); and a control mechanism (118), coupled to the position sensor (140), for presenting a system object (126) based on the current location (144), the current orientation (146), an object location (128), an object orientation (130), an access right (120), a visibility (134), and a persistence (136).
Abstract: A vehicle head-up display that automatically repositions an eyebox by measuring a distance to a driver's face and a method of controlling the same are disclosed. A method of controlling a vehicle head-up display according to some embodiments includes performing a distance measurement by using a displacement sensor for measuring a distance to a face of a driver and performing a display position adjustment including adjusting a display position and a display angle of head-up display information based on a measured distance to the face of the driver. Locating the driver's face and then automatically adapting the display position and the display angle of the head-up display information to individual drivers can reduce the inconvenience of manually setting up the head-up display in a vehicle used by many different drivers, as in car sharing.
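The adjustment step described above can be sketched as a simple proportional rule. Everything here (the function name, the reference constants, and the linear scaling) is a hypothetical illustration, not the patented control method:

```python
def adjust_hud(face_distance_mm: float,
               ref_distance_mm: float = 750.0,
               ref_height_mm: float = 100.0,
               ref_angle_deg: float = 5.0) -> tuple:
    """Derive a display position (eyebox height) and display angle from
    the measured driver-to-display distance.

    The constants and the linear relationship are assumptions for
    illustration only.
    """
    ratio = face_distance_mm / ref_distance_mm
    display_height_mm = ref_height_mm * ratio   # raise eyebox for a farther face
    display_angle_deg = ref_angle_deg / ratio   # flatten angle for a farther face
    return display_height_mm, display_angle_deg
```

At the reference distance the defaults are returned unchanged; a driver seated twice as far would get a doubled eyebox height and a halved tilt angle under this toy model.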
Abstract: A client device receives a first image frame from a server, stores the first image frame, and generates a first modified image that corresponds to the first image frame. The client transmits the generated first modified image to a remote device. The remote device uses the first modified image to determine an instruction for displaying a second image frame. The client receives the instruction from the remote device and, in response, displays the second image frame on a display communicatively coupled to the client device.
Abstract: A control line determinator determines whether or not a work state of a work implement is a predetermined work state. A display controller generates a display signal including a target surface of a construction object or a control line indicating a surface which is different from the target surface and which a bucket is to be prevented from entering. The display controller makes a display form of the control line or the target surface in the display signal different according to whether or not the work state is the predetermined work state.
Abstract: An internet or cloud-based system, method, or platform (“platform”) used to facilitate the conversion of electronic two-dimensional drawings to three-dimensional models. A group of people (“crowd”) that has been found qualified to make such conversions is selected for the conversion. The two-dimensional drawings are transmitted to the crowd for conversion to three-dimensional models. In some embodiments, multiple instances of the same two-dimensional drawings (or image data) are sent to multiple, independent crowd members so that multiple versions of the same three-dimensional model can be created. Once the models are complete and returned, they are compared to each other on multiple features or characteristics. If two or more three-dimensional models are found to match within the prescribed tolerances, they are determined to be an accurate representation of the product or device shown in the two-dimensional drawings.
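The final comparison step can be sketched as a per-feature tolerance check. The feature names and the dictionary representation of a model are hypothetical, not the platform's actual data model:

```python
def models_match(model_a: dict, model_b: dict, tolerances: dict) -> bool:
    """Return True when two independently created 3D models agree on
    every compared feature to within its prescribed tolerance.

    `tolerances` maps a feature name (e.g. a measured volume or face
    count, both hypothetical) to its allowed absolute difference.
    """
    return all(abs(model_a[feature] - model_b[feature]) <= tol
               for feature, tol in tolerances.items())
```

Two models that agree on every checked feature within tolerance would be accepted as an accurate representation; a single out-of-tolerance feature rejects the pair.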
Abstract: When a 3D scene is projected into a 2D image, the viewpoint of objects in the image, relative to the camera, must be determined. Since the image itself will not have sufficient information to determine the viewpoint of the various objects in the image, techniques to estimate the viewpoint must be employed. To date, neural networks have been used to infer such viewpoint estimates on an object category basis, but they must first be trained with numerous examples that have been manually created. The present disclosure provides a neural network that is trained to learn, from just a few example images, a unique viewpoint estimation network capable of inferring viewpoint estimations for a new object category.
Type:
Grant
Filed:
February 3, 2020
Date of Patent:
June 28, 2022
Assignee:
NVIDIA CORPORATION
Inventors:
Hung-Yu Tseng, Shalini De Mello, Jonathan Tremblay, Sifei Liu, Jan Kautz, Stanley Thomas Birchfield
Abstract: Some embodiments provide a non-transitory machine-readable medium that stores a program executable by at least one processing unit of a device. The program provides a display area for viewing a three-dimensional (3D) model that includes a plurality of 3D objects. The program further provides a tool for viewing 3D objects in the 3D model. The program also determines a 3D object in the plurality of 3D objects in the 3D model to hide. The program further hides the determined 3D object in the 3D model.
Type:
Grant
Filed:
October 9, 2019
Date of Patent:
June 21, 2022
Assignee:
SAP SE
Inventors:
Jitesh Nayak, Patrick Ashby, Suvodeep Das, David John Valentine, Alexandr Gavrilov
Abstract: A landmark detection system can more accurately detect landmarks in images using a detection scheme that penalizes for dispersion parameters, such as variance or scale. The landmark detection system can be trained using both labeled and unlabeled training data in a semi-supervised approach. The landmark detection system can further implement tracking of an object across multiple images using landmark data.
Type:
Grant
Filed:
December 30, 2020
Date of Patent:
June 7, 2022
Assignee:
Snap Inc.
Inventors:
Sergey Tulyakov, Roman Furko, Aleksei Stoliar
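A dispersion-penalizing detection loss of the kind described above can be sketched as a Gaussian negative log-likelihood, in which a large predicted variance is penalized directly. This formulation is a standard stand-in, not necessarily the patent's exact scheme:

```python
import math

def landmark_loss(pred_mean, pred_var, target):
    """Loss for one set of landmark coordinates: squared error scaled
    by the predicted variance, plus a log-variance term that penalizes
    dispersed (uncertain) predictions."""
    return sum((m - t) ** 2 / v + math.log(v)
               for m, v, t in zip(pred_mean, pred_var, target))
```

A prediction that is accurate but claims high variance still pays the `log(v)` penalty, which is what pushes the detector toward confident, tightly localized landmarks.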
Abstract: A synchronous display method, a storage medium and an electronic device are provided. The method includes: screenshot information of a first terminal is acquired and converted into a corresponding bitmap; a rotation state of a screen of the first terminal is acquired; and when it is determined, according to the rotation state, that the screen has rotated, a transposition operation corresponding to the rotation state is performed on the bitmap, the bitmap after the transposition operation is converted into a picture byte stream, and the picture byte stream is transmitted to a second terminal to synchronously display the screenshot information corresponding to the bitmap.
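The rotation-dependent transposition step can be sketched as follows, with the bitmap modeled as a row-major list of pixel rows (a simplification of a real bitmap object):

```python
def prepare_for_sync(bitmap, screen_rotated: bool):
    """Transpose the bitmap when the first terminal's screen has
    rotated, so the second terminal displays it correctly; otherwise
    pass it through unchanged."""
    if not screen_rotated:
        return bitmap
    # zip(*rows) turns rows into columns, i.e. a matrix transpose
    return [list(column) for column in zip(*bitmap)]
```

A real implementation would also serialize the result to a picture byte stream before transmission, which the sketch omits.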
Abstract: A method of tracking motion of a body part, the method comprising: (a) gathering motion data from a body part repositioned within a range of motion, the body part having mounted thereto a motion sensor; (b) gathering a plurality of radiographic images taken of the body part while the body part is in different positions within the range of motion, the plurality of radiographic images having the body part and the motion sensor within a field of view; and, (c) constructing a virtual three dimensional model of the body part from the plurality of radiographic images using a structure of the motion sensor identifiable within at least two of the plurality of radiographic images to calibrate the radiographic images.
Abstract: Examples of the disclosure describe systems and methods for generating and displaying a virtual companion. In an example method, a first input from an environment of a user is received at a first time via a first sensor on a head-wearable device. An occurrence of an event in the environment is determined based on the first input. A second input from the user is received via a second sensor on the head-wearable device, and an emotional reaction of the user is identified based on the second input. An association is determined between the emotional reaction and the event. A view of the environment is presented at a second time later than the first time via a see-through display of the head-wearable device. A stimulus is presented at the second time via a virtual companion displayed via the see-through display, wherein the stimulus is determined based on the determined association between the emotional reaction and the event.
Abstract: An image processing apparatus includes an acquisition unit configured to acquire three-dimensional shape data of an object based on images captured by a plurality of cameras, a generation unit configured to generate information based on a relationship between the three-dimensional shape data acquired by the acquisition unit and positions of the plurality of cameras, and a correction unit configured to correct the three-dimensional shape data based on the information generated by the generation unit.
Abstract: The present invention teaches a real-time hybrid ray tracing system for non-planar specular reflections. The high complexity of a non-planar surface is reduced to low complexity of multiple small planar surfaces. Advantage is taken of the planar nature of triangles that comprise building blocks of a non-planar surface. All secondary rays bouncing from a given surface triangle toward object triangles keep a close direction to each other. A collective control of secondary rays is enabled by this closeness and by decoupling secondary rays from primary rays. The result is a high coherence of secondary rays.
Abstract: An electronic chip and a chip assembly are described. The electronic chip comprises one or more processing cores and at least one hardware interface coupled to at least one of the one or more processing cores. At least one of the one or more processing cores implements a game engine in hardware.
Abstract: A system and method for generating a multi-user presentation including receiving a graphic, anchoring a first instance of the graphic in a collaboration environment with a first position, first scale, and first orientation relative to an initial position of a presenting user in the collaboration environment; applying a first anchor transform to a second instance of the graphic, the first anchor transform anchoring the second instance of the graphic in the collaboration environment with the first position, the first scale, and the first orientation relative to an initial position of a first attendee in the collaboration environment; sending the first instance of the graphic for presentation to the presenter, and the second instance of the graphic for presentation to the first attendee; receiving a first presenter interaction modifying the first instance, where the modified first instance includes a first modification to one or more of the first position, the first scale, and the first orientation relative to t
Abstract: Global positioning system (GPS) tagging of the movement of an object, together with extrapolation of its direction and speed, can be used for various services, including emergency-based services. Location data can be computed using edge computing nodes. The extrapolation system can account for feedback from responding user devices and utilize a user device's location at the time of reporting to facilitate determining the direction, location, and/or speed of a moving object. This data can then be utilized to generate augmented reality displays for mobile devices and/or vehicles that utilize the system. Calculating directional information with edge computing nodes can also enrich the data by predicting an object's whereabouts, route, and/or final destination.
Type:
Grant
Filed:
March 1, 2021
Date of Patent:
March 22, 2022
Assignees:
AT&T Mobility II LLC, AT&T Intellectual Property I, L.P.
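The extrapolation of a moving object's next position from its last GPS fix can be sketched with a flat-earth dead-reckoning step. The function name and the metres-per-degree constant are illustrative assumptions, not the patented edge-computing pipeline:

```python
import math

METERS_PER_DEG_LAT = 111_320.0  # approximate metres per degree of latitude

def extrapolate_position(lat, lon, speed_mps, bearing_deg, dt_s):
    """Predict where the object will be after dt_s seconds, assuming
    constant speed and bearing (flat-earth approximation)."""
    distance_m = speed_mps * dt_s
    bearing = math.radians(bearing_deg)
    dlat = distance_m * math.cos(bearing) / METERS_PER_DEG_LAT
    dlon = (distance_m * math.sin(bearing)
            / (METERS_PER_DEG_LAT * math.cos(math.radians(lat))))
    return lat + dlat, lon + dlon
```

The flat-earth step is adequate over the short prediction horizons an AR overlay needs; longer routes would call for a proper geodesic formula.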
Abstract: Disclosed are video playback and data providing methods in a virtual scene, and a client and server implementing the same. The video playback method comprises: receiving current video segment data sent from a server, wherein the current video segment data represents a video segment, and the current video segment data comprises at least one specified viewing angle and a data identifier representing video segment data to which the specified viewing angle is directed; playing the video segment represented by the current video segment data, and acquiring a current viewing angle of a user during playback; and determining a target specified viewing angle matching the current viewing angle from the at least one specified viewing angle, wherein a video segment represented by video segment data to which the target specified viewing angle is directed is played at the end of the playback of the video segment represented by the current video segment data.
Type:
Grant
Filed:
January 5, 2018
Date of Patent:
March 8, 2022
Assignee:
YOUKU INFORMATION TECHNOLOGY (BEIJING) CO., LTD.
Inventors:
Yuxing Wu, Xiaojie Sheng, Wuping Du, Wei Li, Ji Wang
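Matching the user's current viewing angle to a target specified viewing angle can be sketched as a nearest-angle lookup. The mapping of angle to next-segment identifier is a hypothetical stand-in for the video segment data described above:

```python
def pick_next_segment(current_angle_deg: float, specified: dict):
    """From the specified viewing angles of the current video segment,
    pick the one nearest the user's current viewing angle and return it
    with the data identifier of the segment it points to."""
    target = min(specified, key=lambda angle: abs(angle - current_angle_deg))
    return target, specified[target]
```

A real player would also handle 360-degree wrap-around (359° is close to 1°), which this sketch omits for brevity.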
Abstract: This application discloses an object loading method performed at an electronic device. The electronic device determines a visible space located within an acquisition range of an image acquisition device located at a first position in a virtual scene and determines a target subspace located within a visible distance threshold indicated by a target type of a plurality of types in the visible space based on the first position, each type of the plurality of types having a visible distance threshold of an object in a subspace of the virtual scene. The electronic device then acquires an object whose visible distance is not greater than the visible distance threshold indicated by the target type in the target subspace as a to-be-rendered object and loads the to-be-rendered object in a storage resource of the user terminal to render an image of the virtual scene.
Type:
Grant
Filed:
October 22, 2020
Date of Patent:
March 8, 2022
Assignee:
TENCENT TECHNOLOGY (SHENZHEN) COMPANY LIMITED
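The selection of to-be-rendered objects can be sketched as a distance-threshold filter over the target subspace. The dict-based object representation is an assumption for illustration:

```python
def objects_to_load(subspace_objects, visible_distance_threshold):
    """Keep only objects in the target subspace whose visible distance
    does not exceed the threshold indicated by the subspace's type."""
    return [obj for obj in subspace_objects
            if obj["visible_distance"] <= visible_distance_threshold]
```

Only the surviving objects would then be loaded into the terminal's storage resource for rendering, which is the memory-saving point of the method.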
Abstract: The disclosed embodiments relate to image processing methods and apparatuses. In one embodiment, a method includes: mapping an inputted three-dimensional (3D) model map into an asymmetric cubemap, a center of the asymmetric cubemap being located at a different place than the mapping center of the inputted 3D model map; and stretching the asymmetric cubemap mapped for the inputted 3D model map into a two-dimensional (2D) stretched plane map.
Abstract: An image signal processing device of the present disclosure includes a luminance correction section that performs, on the basis of information on a maximum output luminance value in a display section, luminance correction on an image signal to be supplied to the display section, the maximum output luminance value being variable.
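Such luminance correction can be sketched as a gain applied to the image signal so that it fits the display's current maximum output luminance. The mastering level and the simple linear gain are illustrative assumptions, not the disclosed circuit:

```python
def correct_luminance(signal_nits, max_output_nits, mastering_nits=1000.0):
    """Scale each luminance sample so content mastered for
    `mastering_nits` never exceeds the display's variable maximum
    output luminance; a final clamp guards rounding overshoot."""
    gain = min(1.0, max_output_nits / mastering_nits)
    return [min(v * gain, max_output_nits) for v in signal_nits]
```

Because the maximum output luminance is variable, the gain would be recomputed whenever the display reports a new value.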