Abstract: A method for supporting a user in an agricultural activity with a control arrangement that has a mobile device and a server application which communicates with the mobile device. The control arrangement executes an augmented reality routine in which a real-world image generated by a camera and at least one item of added information are displayed in a visually superimposed manner on the mobile device. The server application has a plurality of application modules, each associated with a predefined agricultural scenario, and a set of added information is stored in a database for each agricultural scenario. The control arrangement automatically determines in which of the predefined agricultural scenarios the mobile device is situated, and a subset of the added information stored in the database for the determined scenario is shown on the display depending on at least one object depicted in the real-world image.
Abstract: Methods, apparatus and computer program products implement embodiments of the present invention that include receiving 3D image data with respect to a 3D region in a living body, the 3D region including a first body cavity having an opening to a second body cavity, and receiving coordinates of a seed location within the first cavity in proximity to the opening. The 3D image data is processed so as to identify, for each ray among a plurality of rays emanating from the seed location at different, respective angles, a respective distance from the seed location to a respective intersection point at which the ray intersects surface tissue in the second cavity. Among the rays, the ray for which the distance from the seed location to the respective intersection point is greatest is selected, and an image is rendered that includes the opening as seen from a location on the selected ray.
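The ray-selection step of this abstract can be sketched in a few lines. This is a minimal illustration, not the patented method: it assumes the surrounding surface tissue is approximated by a sphere, rays are given as unit direction vectors, and the function names (`sphere_hit_distance`, `select_longest_ray`) are hypothetical.

```python
import math

def sphere_hit_distance(seed, direction, center, radius):
    # Distance t >= 0 along unit `direction` from `seed` to a spherical
    # surface of `radius` about `center` (smaller-magnitude root discarded,
    # since the seed lies inside the cavity).
    ox = [s - c for s, c in zip(seed, center)]
    b = 2 * sum(d * o for d, o in zip(direction, ox))
    c = sum(o * o for o in ox) - radius ** 2
    disc = b * b - 4 * c
    if disc < 0:
        return None  # ray misses the surface
    return (-b + math.sqrt(disc)) / 2

def select_longest_ray(seed, directions, center, radius):
    # Among rays emanating from the seed location, pick the one whose
    # seed-to-intersection distance is greatest (the selection criterion
    # described in the abstract).
    return max(directions,
               key=lambda d: sphere_hit_distance(seed, d, center, radius))
```

For a seed at (0.5, 0, 0) inside a unit sphere, the ray pointing away from the nearest wall, (-1, 0, 0), travels farthest (1.5 units) and is selected; the rendering viewpoint would then be placed along that ray.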
Abstract: Discussed herein are devices, systems, and methods for image processing. A method can include generating a synthetic image based on a two-dimensional (2D) image of a geographical region, performing a coarse registration to grossly register the synthetic image to the 2D image, and performing a fine registration following the coarse registration to improve the registration between the synthetic image and the 2D image.
Abstract: The present disclosure proposes a dual-screen display method for eliminating AR/VR picture tearing, and an AR/VR display device, in which each single screen of the dual screen is divided into a first half-screen and a second half-screen. The method includes performing rendering and scanning in parallel for the first half-screen and the second half-screen in units of half a frame. A start time of scanning the second half-screen is adjusted to eliminate the picture-tearing phenomenon.
Abstract: A method for three-dimensional (3D)-based shopping. The method may include receiving or generating a 3D representation of at least a part of a body of a certain customer; receiving a query to find a first wearable item that fits the part of the body; searching for the first wearable item; displaying, on a display accessible to the certain customer, a 3D model of the first wearable item as worn over the part of the body; and interacting with the certain customer until completion of the 3D-based shopping.
Abstract: An electronic apparatus includes a communicator, a camera, and a processor configured to obtain an image including a display apparatus through the camera, identify the display apparatus from the image, analyze at least one pattern of the remaining areas excluding the area corresponding to the display apparatus in the image, obtain a pattern corresponding to the image based on patterns that appear in the image at ratios equal to or greater than a predetermined ratio among the at least one pattern, cluster the colors of the remaining areas into at least one color, and control the communicator such that an image generated by applying the at least one clustered color to the obtained pattern is output at the display apparatus.
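The color-clustering step in this abstract can be illustrated with a simple quantize-and-count approach. This is a minimal sketch under an assumed scheme, not the patented algorithm: `cluster_colors` is a hypothetical helper that bins RGB values coarsely and ranks the bins by frequency, standing in for clustering the colors of the areas around the display apparatus.

```python
from collections import Counter

def cluster_colors(pixels, step=64):
    # Quantize each RGB triple into coarse bins of width `step`, then
    # return the bin colors ordered from most to least frequent.
    def quantize(color):
        return tuple((channel // step) * step for channel in color)
    counts = Counter(quantize(p) for p in pixels)
    return [color for color, _ in counts.most_common()]
```

The most frequent bin could then be used as the fill color applied to the extracted pattern before sending the result to the display apparatus.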
Abstract: The present invention provides a virtual trading card system, and a corresponding method, for capturing and storing at least one virtual trading card on a mobile device. The present invention also provides an augmented reality (AR) movie system for displaying an AR image on a mobile device as a visual overlay atop a video signal.
Abstract: Disclosed are an extended reality (XR) device and a control method thereof, which are applicable to the fields of 5G communication, robotics, autonomous driving, and AI.
Abstract: The present disclosure relates to a visual display system for manipulating images of a real scene using augmented reality. In one implementation, the system may include at least one processor in communication with a first mobile device; and a storage medium storing instructions that, when executed, configure the at least one processor to perform operations. The operations may include receiving a request from the first mobile device to access an account of a user, receiving a first image depicting a real scene from an image sensor of the first mobile device, receiving a selection of a virtual object, receiving an augmented reality image comprising the virtual object overlaid on the first image, comparing the augmented reality image to one or more stored augmented reality images, authenticating the user based on the comparison, and authorizing access to the user account based on the authentication.
Abstract: Discussed herein are devices, systems, and methods for synthetic image generation. A method can include projecting a three-dimensional (3D) point set of a first geographical region to an image space of an image of a second geographical region to generate synthetic image data, identifying control points (CPs) between the image and the synthetic image data, adjusting a geometry of the synthetic image data based on the identified CPs, determining metadata for the synthetic image based on metadata of the image, and associating the determined metadata with the synthetic image data to generate the synthetic image.
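The geometry-adjustment step in this abstract can be illustrated with the simplest possible model: estimating a translation from control-point (CP) pairs. This is a sketch only; the patent does not specify the adjustment model, and `estimate_offset` is a hypothetical helper assuming each CP is an (image point, synthetic point) pair of 2D coordinates.

```python
def estimate_offset(cps):
    # cps: list of ((x_img, y_img), (x_syn, y_syn)) control-point pairs.
    # Estimate the mean translation that moves the synthetic image data
    # onto the image; a real system would fit a richer transform.
    n = len(cps)
    dx = sum(img[0] - syn[0] for img, syn in cps) / n
    dy = sum(img[1] - syn[1] for img, syn in cps) / n
    return dx, dy
```

Applying the returned offset to every synthetic pixel coordinate would align the synthetic data to the image before the metadata is attached.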
Abstract: A method includes sequentially outputting from an imaging sensor each pixel row of a set of pixel rows of an image captured by the imaging sensor. The method further includes displaying, at a display device, a pixel row representative of a first pixel row of the captured image before a second pixel row of the captured image is output by the imaging sensor. An apparatus includes an imaging sensor having a first lens that imparts a first type of spatial distortion, a display device coupled to the imaging sensor and configured to display imagery captured by the imaging sensor with the first type of spatial distortion, and an eyepiece lens aligned with the display device, the eyepiece lens imparting a second type of spatial distortion that compensates for the first type.
Abstract: An image processing apparatus generates a plurality of virtual viewpoint images being temporally consecutive, and includes a data acquisition unit, a parameter acquisition unit, a viewpoint acquisition unit, and a generation unit. The data acquisition unit is configured to acquire image data that is obtained by capturing images in a plurality of directions by a plurality of image capturing devices. The parameter acquisition unit is configured to acquire a parameter related to the acquired image data and related to quality of the plurality of virtual viewpoint images. The viewpoint acquisition unit is configured to acquire viewpoint information representing a moving path of a virtual viewpoint. The generation unit is configured to generate the plurality of virtual viewpoint images based on the acquired image data, according to a virtual viewpoint having a moving speed that is determined based on the acquired parameter and the acquired viewpoint information.
Abstract: A method includes: detecting, at a first device, an object in a first image; receiving a selection of the object depicted in the first image; associating the object with a second device based on the selection; and, in response to the selection: recording a series of odometry data; estimating a location of the first device based on the odometry data; recording a series of images; estimating a location of the second device based on the images; calculating a first reference vector, in the reference frame of the first device, defining the location of the second device relative to the location of the first device; receiving, from the second device, a second reference vector; calculating a rotation and an offset between the reference vectors; and transforming the reference frame of the first device to a common reference frame based on the rotation and the offset.
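The rotation-and-offset transform in this abstract can be sketched for the planar (2D) case. This is an illustration under assumed conventions, not the patented method: `rotation_between` and `to_common_frame` are hypothetical names, and the reference vectors are taken as 2D tuples.

```python
import math

def rotation_between(v1, v2):
    # 2D rotation angle (radians) that maps the direction of reference
    # vector v1 onto the direction of reference vector v2.
    return math.atan2(v2[1], v2[0]) - math.atan2(v1[1], v1[0])

def to_common_frame(point, theta, offset):
    # Transform a point into the common reference frame:
    # p' = R(theta) * p + offset
    c, s = math.cos(theta), math.sin(theta)
    x, y = point
    return (c * x - s * y + offset[0], s * x + c * y + offset[1])
```

Once `theta` and `offset` are computed from the two reference vectors, every point expressed in the first device's frame can be mapped into the shared frame with `to_common_frame`.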
Type: Grant
Filed: May 30, 2019
Date of Patent: May 18, 2021
Assignee: Jido, Inc.
Inventors: Mark Stauber, Jaeyong Sung, Amichai Levy
Abstract: A medical image processing apparatus comprises processing circuitry configured to: obtain volumetric medical imaging data comprising a voxel value; obtain an opacity value corresponding to the voxel value; obtain an extinction color and/or transmission color corresponding to the voxel value; modify the extinction color and/or transmission color using the opacity value, wherein the modifying of the extinction color and/or transmission color is performed using a combined opacity model that combines a first opacity model and a second, different opacity model, such that the first opacity model makes a higher contribution to the modifying than the second opacity model at lower values of opacity, and the second opacity model makes a higher contribution to the modifying than the first opacity model at higher values of opacity; and render the volumetric medical imaging data using the modified extinction color and/or transmission color.
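The combined opacity model in this abstract can be illustrated with a linear blend. This is a minimal sketch, not the patent's actual weighting: `combined_modify` is a hypothetical helper that weights the two models' outputs by opacity, so the first model dominates at low opacity and the second at high opacity, as the abstract describes.

```python
def combined_modify(color, alpha, model_a, model_b):
    # Blend two per-voxel color modifications. `model_a` dominates at low
    # opacity (weight 1 - alpha), `model_b` at high opacity (weight alpha).
    a_out = model_a(color, alpha)
    b_out = model_b(color, alpha)
    return tuple((1 - alpha) * a + alpha * b
                 for a, b in zip(a_out, b_out))
```

With `alpha = 0` the output is purely the first model's modified color; with `alpha = 1` it is purely the second's, matching the crossover behavior stated in the abstract.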
Abstract: A method for creating a complete three-dimensional model of a vehicle, along with a corresponding virtual tour and means for navigating said model is disclosed.
Abstract: An electronic device is provided. The electronic device may include a display, a processor operatively connected with the display and configured to generate external reference time information, and a display driver integrated circuit configured to periodically or randomly receive the external reference time information from the processor. The display driver integrated circuit is configured to generate internal time information based on an internal clock, to output a clock image corresponding to the internal time information on the display, and, if a time error between the external reference time information and the internal time information occurs while the clock image is being output, to output, on the display, the internal time information with the time error corrected.
Abstract: The disclosure describes techniques for apparel simulation. For example, processing circuitry may determine a body construct used for generating a shape of a virtual representation of a user, determine that one or more points on a virtual apparel are within the body construct, and determine, for each of the one or more points, a respective normal vector. Each respective normal vector intersects each respective point and is oriented towards the body construct. The processing circuitry may also extend each of the one or more points to corresponding points on the body construct based on each respective normal vector and generate graphical information of the virtual apparel based on the extension of each of the one or more points to the corresponding points on the body construct.
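The point-extension step in this abstract can be sketched for the simplest body construct, a sphere, where the outward normal at any interior point is the radial direction. This is an illustration under that assumption, not the patented simulation; `resolve_penetration` is a hypothetical name.

```python
import math

def resolve_penetration(point, center, radius):
    # If an apparel point lies inside a spherical body construct, push it
    # outward along the surface normal (here the radial direction) until
    # it sits on the construct's surface; otherwise leave it unchanged.
    v = [p - c for p, c in zip(point, center)]
    dist = math.sqrt(sum(x * x for x in v))
    if dist >= radius or dist == 0:
        return tuple(point)  # outside already, or at center (ambiguous)
    scale = radius / dist
    return tuple(c + x * scale for c, x in zip(center, v))
```

A real body construct would be a closed mesh rather than a sphere, with per-point normals intersecting the construct surface, but the per-point push-to-surface logic is the same idea.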
Abstract: An augmented reality display system is configured to use fiducial markers to align 3D content with real objects. The augmented reality display system can optionally include a depth sensor configured to detect a location of a real object. The augmented reality display system can also include a light source configured to illuminate at least a portion of the object with invisible light, and a light sensor configured to form an image using a reflected portion of the invisible light. Processing circuitry of the display system can identify a location marker based on the difference between the emitted light and the reflected light, and determine an orientation of the real object based on the location of the real object and a location of the location marker.
Type: Grant
Filed: December 12, 2017
Date of Patent: February 16, 2021
Assignee: Magic Leap, Inc.
Inventors: Nastasja U. Robaina, Nicole Elizabeth Samec, Gregory Michael Link, Mark Baerenrodt
Abstract: A method of allocating virtual objects based on augmented reality (AR) includes displaying, by an AR client terminal of a receiving user, a live map associated with a location of the receiving user. The live map includes a target location to which a virtual object was bound by a distributing user. The AR client terminal of the receiving user scans an image of an environment of the target location. In response to determining that the scanned image includes a preset bound target, the AR client terminal of the receiving user receives information of the virtual object from a server terminal. The preset bound target is associated with the virtual object and the target location.
Type: Grant
Filed: June 17, 2020
Date of Patent: January 19, 2021
Assignee: Advanced New Technologies Co., Ltd.
Inventors: Qinglong Duan, Guanhua Chen, Jing Ji, Jiahui Cheng, Lu Yuan
Abstract: A method, computer readable medium, and system are disclosed for implementing automatic level-of-detail for physically-based materials. The method includes the steps of identifying a declarative representation of a material to be rendered, creating a reduced complexity declarative representation of the material by applying one or more term rewriting rules to the declarative representation of the material, and returning the reduced complexity declarative representation of the material.
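The term-rewriting step in this abstract can be sketched on a toy declarative material representation. This is a minimal illustration only: the expression encoding (nested tuples), the `RULES` table, and the sample rule replacing a procedural noise term with a constant are all assumptions, not the patent's actual representation or rules.

```python
# A material expression is a nested tuple: ("op", arg, ...) or ("const", value).
RULES = {
    # Hypothetical level-of-detail rule: replace an expensive procedural
    # noise term with its mean value, reducing shading cost.
    "noise": lambda args: ("const", 0.5),
}

def rewrite(expr):
    # Recursively apply rewriting rules to produce a reduced-complexity
    # copy of the declarative material representation.
    op, *args = expr
    if op in RULES:
        return RULES[op](args)
    if op == "const":
        return expr
    return (op,) + tuple(rewrite(a) for a in args)
```

Rewriting `("mix", ("noise", 3.0), ("const", 1.0))` yields `("mix", ("const", 0.5), ("const", 1.0))`: the structure of the material is preserved while the costly term is simplified, which is the essence of automatic level-of-detail by term rewriting.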