Three-dimension Patents (Class 345/419)
-
Patent number: 12166951
Abstract: A method is described comprising: applying a random pattern to specified regions of an object; tracking the movement of the random pattern during a motion capture session; and generating motion data representing the movement of the object using the tracked movement of the random pattern.
Type: Grant
Filed: June 5, 2023
Date of Patent: December 10, 2024
Assignee: REARDEN MOVA, LLC
Inventors: Timothy Cotter, Stephen G. Perlman, John Speck, Roger van der Laan, Kenneth A. Pearce, Greg LaSalle
-
Patent number: 12165405
Abstract: Ray-tracing for terrain mapping is provided. A system of an aerial vehicle can identify points generated from data captured by a sensor of the aerial vehicle. The points can each indicate a respective altitude value of a portion of terrain. The system can determine, based on the altitude values of the points, a threshold altitude of the terrain, and can identify a boundary defined in part based on the threshold altitude of the terrain. The system can generate a terrain map for the terrain based on applying a ray-tracing process to the points. The ray-tracing process can be performed within the boundary, using the points as respective sources and the aerial vehicle as a destination. The system can present a graphical representation of the terrain map in a graphical user interface of the aerial vehicle.
Type: Grant
Filed: June 23, 2022
Date of Patent: December 10, 2024
Assignee: LOCKHEED MARTIN CORPORATION
Inventor: Jonathan Andrew Drosdeck
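A rough illustration of the idea only, not the patented implementation: treat each terrain point as a ray source aimed at the aircraft, restrict the test to points inside a boundary derived from a threshold altitude, and flag points whose rays reach the vehicle unobstructed. The percentile-based threshold, the coarse occluder grid, and all names are assumptions made for the sketch.

```python
# Illustrative line-of-sight sketch: terrain points as sources, aircraft as
# destination, restricted to points above an assumed threshold altitude.
import numpy as np

def threshold_altitude(points, percentile=50.0):
    """Pick a threshold altitude from the distribution of point altitudes."""
    return np.percentile(points[:, 2], percentile)

def visible_points(points, vehicle_pos, cell=1.0, steps=64):
    """Boolean mask of points whose ray to the vehicle clears the terrain."""
    thresh = threshold_altitude(points)
    inside = points[:, 2] >= thresh            # boundary: points above threshold
    # Rasterize a coarse max-altitude grid used as the occluder (assumption).
    ij = np.floor(points[:, :2] / cell).astype(int)
    grid = {}
    for (i, j), z in zip(map(tuple, ij), points[:, 2]):
        grid[(i, j)] = max(grid.get((i, j), -np.inf), z)

    mask = np.zeros(len(points), dtype=bool)
    for k in np.flatnonzero(inside):
        src = points[k]
        clear = True
        for t in np.linspace(0.0, 1.0, steps)[1:-1]:   # sample along the ray
            p = src + t * (vehicle_pos - src)
            key = (int(np.floor(p[0] / cell)), int(np.floor(p[1] / cell)))
            if grid.get(key, -np.inf) > p[2]:          # terrain blocks the ray
                clear = False
                break
        mask[k] = clear
    return mask

pts = np.random.rand(500, 3) * [100, 100, 30]
print(visible_points(pts, np.array([50.0, 50.0, 120.0])).sum(), "points see the vehicle")
```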
-
Patent number: 12166812
Abstract: Systems and methods of delegating media capturing functionality from one device to another are presented. A first device configured with an object recognition engine captures a media representation of an environment and identifies an object within that environment. Then based on matched object traits from a database, the engine selects a delegation rules set, and delegates certain media capturing functionality to a second device according to the selected delegation rules set.
Type: Grant
Filed: April 7, 2023
Date of Patent: December 10, 2024
Assignee: Nant Holdings IP, LLC
Inventor: Patrick Soon-Shiong
-
Patent number: 12165273
Abstract: An information processing device includes an acquisition unit, a setting unit, a determination unit, and a control unit. The acquisition unit acquires an operation angle that is an angle formed by a first direction in a predetermined space pointed by a user and a second direction in the predetermined space pointed by the user. The setting unit sets, as a reference angle, the operation angle acquired at a time point when an instruction to start moving a virtual object on a line extending in the first direction is detected. The determination unit determines whether or not the operation angle acquired in response to a change in the second direction is equal to or more than the reference angle.
Type: Grant
Filed: March 8, 2021
Date of Patent: December 10, 2024
Assignee: SONY GROUP CORPORATION
Inventors: Miwa Ichikawa, Kunihito Sawai
-
Patent number: 12165317
Abstract: A composite medical imaging system may direct a display device to display an image captured by an imaging device and showing a view of a surgical area and display an augmentation region within the image and that shows supplemental content. The view of the surgical area shows surface anatomy located at the surgical area and an object located at the surgical area. The augmentation region creates an occlusion over at least a portion of the view of the surgical area. The system may detect an overlap in the image between at least a portion of the object and at least a portion of the augmentation region. In response to the detection of the overlap, the system may adjust the image to decrease an extent of the occlusion within the overlap by the augmentation region.
Type: Grant
Filed: May 29, 2020
Date of Patent: December 10, 2024
Assignee: Intuitive Surgical Operations, Inc.
Inventors: Daniel Proksch, Mahdi Azizian, A. Jonathan McLeod, Pourya Shirazian
-
Patent number: 12161423
Abstract: A non-transitory machine-readable medium stores instructions that, when run by one or more processors, cause the one or more processors to store a deformable model of a patient anatomy and deform the deformable model based on a measured deformation of a branched anatomical structure of the patient anatomy. The deformable model includes a skeleton tree of nodes and linkages representing the branched anatomical structure of the patient anatomy. Each of the nodes is located at a respective bifurcation of the branched anatomical structure, and at each respective bifurcation the corresponding linkages include an orientation. The deformable model is deformed by modifying the orientations of the linkages of the branched anatomical structure.
Type: Grant
Filed: September 12, 2023
Date of Patent: December 10, 2024
Assignee: INTUITIVE SURGICAL OPERATIONS, INC.
Inventors: Prashant Chopra, Vincent Duindam, Lei Xu, Tao Zhao
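A minimal sketch of the kind of data structure the abstract describes, not the patent's implementation: nodes sit at bifurcations, each linkage to a child branch carries a length and an orientation, and deformation is applied by overwriting linkage orientations with measured ones. All names, the recursion, and the measurement format are assumptions.

```python
# Sketch (assumed structure): a skeleton tree for a branched anatomy with
# nodes at bifurcations and oriented linkages; deformation = new orientations.
from dataclasses import dataclass, field
import numpy as np

@dataclass
class Linkage:
    child: "Node"
    length: float
    orientation: np.ndarray          # unit direction of the branch segment

@dataclass
class Node:
    name: str
    linkages: list = field(default_factory=list)   # one linkage per child branch

def positions(node, origin=np.zeros(3), out=None):
    """Walk the tree and accumulate 3D positions of every bifurcation."""
    out = {} if out is None else out
    out[node.name] = origin
    for link in node.linkages:
        positions(link.child, origin + link.length * link.orientation, out)
    return out

def deform(node, measured):
    """Deform the model by replacing linkage orientations with measured ones."""
    for link in node.linkages:
        key = (node.name, link.child.name)
        if key in measured:
            link.orientation = measured[key] / np.linalg.norm(measured[key])
        deform(link.child, measured)

# Tiny airway-like example: one bifurcation with two branches.
left, right = Node("left"), Node("right")
root = Node("carina", [Linkage(left, 2.0, np.array([-0.5, 0.0, -0.87])),
                       Linkage(right, 2.0, np.array([0.5, 0.0, -0.87]))])
deform(root, {("carina", "left"): np.array([-0.7, 0.1, -0.7])})
print(positions(root))
```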
-
Patent number: 12161512
Abstract: An exemplary system accesses imagery of a surgical space captured by different imaging modalities and, based on the accessed imagery, generates composite imagery that includes integrated representations of the surgical space as captured by the different imaging modalities. An exemplary composite image includes a representation of the surgical space as captured by a first imaging modality augmented with an integrated representation of the surgical space as captured by a second imaging modality. The integrated representation of the surgical space as captured by the second imaging modality may be selectively movable and may be generated based on first imagery of the surgical space captured by the first imaging modality and second imagery of the surgical space captured by the second imaging modality in a manner that provides a visually realistic appearance of depth.
Type: Grant
Filed: May 29, 2020
Date of Patent: December 10, 2024
Assignee: Intuitive Surgical Operations, Inc.
Inventors: Pourya Shirazian, Mahdi Azizian, A. Jonathan McLeod, Daniel Proksch
-
Patent number: 12165356
Abstract: Systems, devices, media, and methods are presented for object modeling using augmented reality. An object modeling mode for generating three-dimensional models of objects is initiated by one or more processors of a device. The processors of the device detect an object within a field of view. Based on a position of the object, the processors select a set of movements forming a path for the device relative to the object and cause presentation of at least one of the movements. The processors detect a set of object surfaces as portions of the object are positioned in the field of view. In response to detecting at least a portion of the object surface, the processors modify a graphical depiction of a portion of the object. The processors then construct a three-dimensional model of the object from the set of images, depth measurements, and IMU readings collected during the reconstruction process.
Type: Grant
Filed: August 14, 2023
Date of Patent: December 10, 2024
Assignee: Eclo, Inc.
Inventors: Ivan Kolesov, Alex Villanueva, Liangjia Zhu
-
Patent number: 12156704
Abstract: A robot-assisted endoscope system and control methods thereof allow a user to perform intraluminal interventional procedures using a steerable sheath. A processor generates a ghost image based on a non-real-time insertion trajectory of the sheath, and a real-time image based on a real-time insertion trajectory for inserting an interventional tool through the sheath towards the target site. A display screen outputs navigation guidance data for informing a user how to manipulate the distal section of the sheath towards the target site such that the real-time image overlaps or coincides with at least part of the ghost image and the real-time insertion trajectory becomes aligned with the non-real-time insertion trajectory.
Type: Grant
Filed: December 16, 2021
Date of Patent: December 3, 2024
Assignee: Canon U.S.A., Inc.
Inventors: Brian Ninni, HuaLei Shelley Zhang
-
Patent number: 12160663
Abstract: An image processing device acquires a plurality of images of an object to be measured by an imaging device, the plurality of images being taken while varying a focal position, and corrects blurring in the acquired plurality of images based on a point spread function for each focal position of an image forming optical system in the imaging device. The image processing device also calculates, for the plurality of images after correction of the blurring, an evaluation value for a focusing degree for each pixel, and generates three-dimensional shape data of the object to be measured based on the calculated evaluation value for the focusing degree for each pixel.
Type: Grant
Filed: February 17, 2023
Date of Patent: December 3, 2024
Assignee: TOKYO SEIMITSU CO., LTD.
Inventor: Kyohei Hayashi
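An illustrative shape-from-focus sketch, not the patented pipeline: the per-focal-position PSF correction is stood in for by a simple unsharp mask, the focusing-degree evaluation is a Laplacian-energy measure, and the depth per pixel is the focal position that maximizes that score. All function names and parameters are assumptions.

```python
# Shape-from-focus sketch: placeholder deblur, Laplacian focus measure,
# per-pixel argmax over the focal stack as a depth estimate.
import numpy as np

def laplacian_energy(img):
    """Per-pixel focus measure: squared response of a discrete Laplacian."""
    lap = (-4 * img
           + np.roll(img, 1, 0) + np.roll(img, -1, 0)
           + np.roll(img, 1, 1) + np.roll(img, -1, 1))
    return lap ** 2

def sharpen(img, amount=1.0):
    """Placeholder for per-focal-position PSF correction (assumption)."""
    blur = (img + np.roll(img, 1, 0) + np.roll(img, -1, 0)
            + np.roll(img, 1, 1) + np.roll(img, -1, 1)) / 5.0
    return img + amount * (img - blur)

def depth_from_focus(stack, focal_positions):
    """stack: (N, H, W) images taken at N focal positions."""
    scores = np.stack([laplacian_energy(sharpen(img)) for img in stack])
    best = np.argmax(scores, axis=0)              # sharpest slice per pixel
    return np.asarray(focal_positions)[best]      # depth in focal-position units

stack = np.random.rand(5, 64, 64)
print(depth_from_focus(stack, [0.0, 0.5, 1.0, 1.5, 2.0]).shape)
```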
-
Patent number: 12158986
Abstract: A method for determining a focal point depth of a user of a three-dimensional ("3D") display device includes tracking a first gaze path of the user. The method also includes analyzing 3D data to identify one or more virtual objects along the first gaze path of the user. The method further includes, when only one virtual object intersects the first gaze path of the user, identifying a depth of the only one virtual object as the focal point depth of the user.
Type: Grant
Filed: April 20, 2023
Date of Patent: December 3, 2024
Assignee: Magic Leap, Inc.
Inventor: Robert Blake Taylor
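A minimal sketch of the unambiguous case described in the abstract, under assumptions of my own: virtual objects are approximated by bounding spheres, the gaze path is a ray, and the focal point depth is the hit distance when exactly one object intersects the ray.

```python
# Sketch: intersect the tracked gaze ray with bounding spheres of virtual
# objects; if exactly one object is hit, its depth is the focal point depth.
import numpy as np

def ray_sphere_hit(origin, direction, center, radius):
    """Distance along the ray to the sphere, or None if the ray misses."""
    oc = origin - center
    b = np.dot(oc, direction)
    c = np.dot(oc, oc) - radius ** 2
    disc = b * b - c
    if disc < 0:
        return None
    t = -b - np.sqrt(disc)
    return t if t > 0 else None

def focal_point_depth(gaze_origin, gaze_dir, objects):
    """objects: list of (center, radius). Returns depth if exactly one is hit."""
    gaze_dir = gaze_dir / np.linalg.norm(gaze_dir)
    hits = [t for c, r in objects
            if (t := ray_sphere_hit(gaze_origin, gaze_dir, c, r)) is not None]
    if len(hits) == 1:          # the unambiguous case from the abstract
        return hits[0]
    return None                 # zero or multiple hits need further handling

objs = [(np.array([0.0, 0.0, 3.0]), 0.5), (np.array([2.0, 0.0, 6.0]), 0.5)]
print(focal_point_depth(np.zeros(3), np.array([0.0, 0.0, 1.0]), objs))
```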
-
Patent number: 12159472
Abstract: A method, apparatus, system, and computer program product provide the ability to extract level information and reference grid information from point cloud data. Point cloud data is obtained and organized into a three-dimensional structure of voxels. Potential boundary points are filtered from the boundary cells. Level information is extracted from a Z-axis histogram of the voxels positioned along the Z-axis of the three-dimensional voxel structure and further refined. Reference grid information is extracted from an X-axis histogram of the voxels positioned along the X-axis of the three-dimensional voxel structure and a Y-axis histogram of the voxels positioned along the Y-axis of the three-dimensional voxel structure and further refined.
Type: Grant
Filed: October 19, 2017
Date of Patent: December 3, 2024
Assignee: AUTODESK, INC.
Inventor: Yan Fu
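A rough sketch of the histogram idea only; the voxel size, peak-picking rule, and refinement steps are assumptions, not the patented method. Occupied voxels are histogrammed along one axis, and strong peaks are read as candidate floor levels (Z) or reference grid lines (X, Y).

```python
# Histogram sketch: voxelize the cloud, count occupied voxels per slab along
# an axis, and return coordinates of the dominant peaks.
import numpy as np

def axis_peaks(points, axis, voxel=0.1, min_ratio=0.5):
    """Histogram occupied voxels along one axis and return peak coordinates."""
    vox = np.unique(np.floor(points / voxel).astype(int), axis=0)  # occupied voxels
    idx = vox[:, axis]
    counts = np.bincount(idx - idx.min())
    peaks = np.flatnonzero(counts >= min_ratio * counts.max())
    return (peaks + idx.min()) * voxel                             # back to units

# Synthetic cloud: two horizontal slabs (floors) plus measurement noise.
rng = np.random.default_rng(0)
floor0 = np.c_[rng.uniform(0, 10, (2000, 2)), rng.normal(0.0, 0.02, 2000)]
floor1 = np.c_[rng.uniform(0, 10, (2000, 2)), rng.normal(3.0, 0.02, 2000)]
cloud = np.vstack([floor0, floor1])

print("candidate levels (Z):", axis_peaks(cloud, axis=2))
```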
-
Patent number: 12159348
Abstract: First, an image processing apparatus obtains data of a captured image obtained by image capturing with an image capturing apparatus that captures an image of a surrounding of a reference point, and obtains distance information indicating a distance from the reference point to an object present in a vicinity of the reference point. Next, the image processing apparatus obtains first three-dimensional shape data corresponding to a shape of the object, based on the distance information. Then, the image processing apparatus obtains second three-dimensional shape data that corresponds to the surrounding of the reference point other than the object and that is formed of one or more flat planes or curved planes. Then, the image processing apparatus obtains third three-dimensional shape data in which the first three-dimensional shape data and the second three-dimensional shape data are integrated, and maps the captured image to the third three-dimensional shape data.
Type: Grant
Filed: November 10, 2022
Date of Patent: December 3, 2024
Assignee: Canon Kabushiki Kaisha
Inventor: Kina Itakura
-
Patent number: 12160555
Abstract: Methods, devices, and a data stream are provided for signaling and decoding information representative of restrictions of navigation in a volumetric video. The data stream comprises metadata associated with video data representative of the volumetric video. The metadata comprise data representative of a viewing bounding box, data representative of a curvilinear path in the 3D space of said volumetric video, and data representative of at least one viewing direction range associated with a point on the curvilinear path.
Type: Grant
Filed: July 14, 2020
Date of Patent: December 3, 2024
Assignee: INTERDIGITAL VC HOLDINGS, INC.
Inventors: Bertrand Chupeau, Gérard Briand, Renaud Dore
-
Patent number: 12154209
Abstract: A method of improving texture fetching by a texturing/shading unit in a GPU pipeline by performing efficient convolution operations includes receiving a shader and determining whether the shader is a kernel shader. In response to determining that the shader is a kernel shader, the shader is modified to perform a collective fetch of all texels used in convolution operations for a group of output pixels instead of performing independent fetches of texels for each output pixel in the group of output pixels.
Type: Grant
Filed: June 24, 2022
Date of Patent: November 26, 2024
Assignee: Imagination Technologies Limited
Inventors: Rostam King, William Thomas
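A CPU-side illustration of the collective-fetch idea, not the GPU pipeline change itself: for a 2x2 group of output pixels, the union of texels their 3x3 convolutions need is fetched once as a single block and reused, instead of fetching per output pixel. Group size, kernel size, and names are assumptions.

```python
# Collective fetch illustration: one (2+2k) x (2+2k) texel block serves all
# four convolutions of a 2x2 output pixel group.
import numpy as np

def convolve_group(texture, x, y, kernel):
    """Convolve the 2x2 output group whose top-left pixel is (x, y)."""
    k = kernel.shape[0] // 2
    # Collective fetch: a single block covers every texel the group needs.
    block = texture[y - k : y + 2 + k, x - k : x + 2 + k]
    out = np.empty((2, 2))
    for dy in range(2):
        for dx in range(2):
            window = block[dy : dy + kernel.shape[0], dx : dx + kernel.shape[1]]
            out[dy, dx] = np.sum(window * kernel)
    return out

tex = np.random.rand(16, 16)
box = np.full((3, 3), 1.0 / 9.0)        # simple box-blur kernel
print(convolve_group(tex, x=5, y=5, kernel=box))
```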
-
Patent number: 12155421
Abstract: Disclosed is a portable radio-frequency, RF, spectrum measurement system. The system comprises a first communication interface, a second communication interface, a user interface, and a local processor. The first communication interface comprises an antenna being configured to receive an RF signal to be measured, and is configured to derive waveform data from the received RF signal. The second communication interface is configured to send the waveform data to a remote processor being connectable to the system, and to receive processed spectrum measurement data from the remote processor in turn. The local processor is supplied with the processed spectrum measurement data and configured to visually or acoustically indicate the processed spectrum measurement data on the user interface.
Type: Grant
Filed: May 18, 2022
Date of Patent: November 26, 2024
Assignee: Rohde & Schwarz GmbH & Co. KG
Inventors: Martin Bloss, Anugeetha Vishwanatha
-
Patent number: 12153733
Abstract: An electronic device according to the present invention includes at least one memory and at least one processor which function as: a gaze acquisition unit configured to acquire right gaze information related to a gaze of a right eye of a user that wears a display apparatus of an optical see-through type on a head and left gaze information related to a gaze of a left eye of the user; and a correlation acquisition unit configured to acquire, as information on personal differences, correlation information related to correlation between the gaze of the right eye and the gaze of the left eye on a basis of the right gaze information and the left gaze information.
Type: Grant
Filed: February 16, 2023
Date of Patent: November 26, 2024
Assignee: CANON KABUSHIKI KAISHA
Inventor: Takeshi Uchida
-
Patent number: 12154294
Abstract: A model creation apparatus is configured to: hold at least one image of the registration target object in one or more postures and a reference model indicating a shape of a reference object; acquire information indicating a feature of the registration target object in a first posture; and correct, when a shape in the first posture that is indicated by the reference model is determined to be dissimilar based on a predetermined first condition, the reference model based on the information indicating the feature to thereby create the model indicating the shape of the registration target object.
Type: Grant
Filed: November 17, 2020
Date of Patent: November 26, 2024
Assignee: Hitachi, Ltd.
Inventors: Taiki Yano, Nobutaka Kimura, Ryo Sakai
-
Patent number: 12154235
Abstract: Disclosed is a mobile terminal that provides an augmented reality navigation screen in a state of being held in a vehicle, the mobile terminal including: at least one camera configured to obtain a front image; a display; and at least one processor configured to calibrate the front image, and to drive an augmented reality navigation application so that the augmented reality navigation screen including at least one augmented reality (AR) graphic object and the calibrated front image is displayed on the display.
Type: Grant
Filed: February 21, 2023
Date of Patent: November 26, 2024
Assignee: LG Electronics Inc.
Inventors: Dukyung Jung, Kihyung Lee, Jaeho Lee
-
Patent number: 12154226
Abstract: A method for generating a three-dimensional (3D) model of an object includes receiving a two-dimensional (2D) view of at least one object as an input, measuring geometrical shape coordinates of the at least one object from the input, identifying texture parameters of the at least one object from the input, predicting geometrical shape coordinates and texture parameters of occluded portions of the at least one object in the 2D view by processing the measured geometrical shape coordinates of the at least one object, the identified texture parameters of the at least one object, and the occluded portions of the at least one object, and generating a 3D model of the at least one object by mapping the measured geometrical shape coordinates and the identified texture parameters to the predicted geometrical shape coordinates and the predicted texture parameters of the occluded portions of the at least one object.
Type: Grant
Filed: September 20, 2022
Date of Patent: November 26, 2024
Assignee: SAMSUNG ELECTRONICS CO., LTD.
Inventors: Sujoy Saha, Mahantesh Mallappa Ambi, Rajat Kumar Jain, Amita Badhwar, Aditi Singhal, Rakesh Abothula, Lokesh Rayasandra Boregowda
-
Patent number: 12154460
Abstract: An object forming structure includes: at least two first original images, respectively provided with different vectors, and extended for forming at least two first original image three-dimensional shapes; at least one intersect fixed point, extended with at least one intersect direction through the intersect fixed point, wherein at least two second image three-dimensional shapes are formed through the first original image three-dimensional shapes being extended; and at least two third image three-dimensional shapes, stacked with the at least two second image three-dimensional shapes, wherein a Boolean function is utilized for confirming a selected desired zone for forming a new object formation; wherein the new object formation formed via the third image three-dimensional shapes is obtained through a combination of intersection, union, or equalization of the different vectors, and the first original image three-dimensional shapes are presented via the corresponding vectors for presenting different shapes at other angles.
Type: Grant
Filed: April 1, 2021
Date of Patent: November 26, 2024
Inventor: Chih-Chieh Lin
-
Patent number: 12154245
Abstract: Apparatus for visualization within a three-dimensional (3D) model and methods used therein are described, wherein the apparatus includes a processor and a memory communicatively connected to the processor, wherein the memory includes instructions configuring the processor to receive a query image, extract neural network encodings from the received query image, query a synthetic image repository for at least a matching synthetic image, and display an estimated region of interest within the 3D model, wherein the synthetic image repository includes a plurality of synthetic images and their extracted neural network encodings, each synthetic image therein corresponds to a region of interest in the 3D model, and querying the synthetic image repository includes comparing the extracted neural network encodings between the query image and synthetic images.
Type: Grant
Filed: April 26, 2024
Date of Patent: November 26, 2024
Assignee: Anumana, Inc.
Inventors: Rakesh Barve, Uddeshya Upadhyay, Abhijith Chunduru, Suthirth Vaidya, Arjun Puranik, Sai Saketh Chennamsetty
-
Patent number: 12154218
Abstract: The present disclosure generally relates to user interfaces for adjusting simulated image effects. In some embodiments, user interfaces for adjusting a simulated depth effect are described. In some embodiments, user interfaces for displaying adjustments to a simulated depth effect are described. In some embodiments, user interfaces for indicating an interference to adjusting simulated image effects are described.
Type: Grant
Filed: June 22, 2022
Date of Patent: November 26, 2024
Assignee: Apple Inc.
Inventors: Johnnie B. Manzari, Alan C. Dye, Richard David Seely, Andre Souza Dos Santos
-
Patent number: 12148090
Abstract: In some implementations, a method of generating a third person view of a computer-generated reality (CGR) environment is performed at a device including non-transitory memory and one or more processors coupled with the non-transitory memory. The method includes: obtaining a first viewing vector associated with a first user within a CGR environment; determining a first viewing frustum for the first user within the CGR environment based on the first viewing vector associated with the first user and one or more depth attributes; generating a representation of the first viewing frustum; and displaying, via the display device, a third person view of the CGR environment including an avatar of the first user and the representation of the first viewing frustum adjacent to the avatar of the first user.
Type: Grant
Filed: August 14, 2023
Date of Patent: November 19, 2024
Inventors: Ian M. Richter, John Joon Park, David Michael Hobbins
-
Patent number: 12147651
Abstract: An information processing apparatus according to the present disclosure includes: an acquisition unit that acquires a character string whose part of speech is to be estimated; and a generation unit that generates part-of-speech estimation information for estimating a part of speech of the character string based on a byte sequence obtained by converting the character string.
Type: Grant
Filed: January 6, 2022
Date of Patent: November 19, 2024
Assignee: SONY GROUP CORPORATION
Inventors: Ryouhei Yasuda, Yuhei Taki, Hiro Iwase, Kunihito Sawai
-
Method for continued bounding volume hierarchy traversal on intersection without shader intervention
Patent number: 12148088
Abstract: A hardware-based traversal coprocessor provides acceleration of tree traversal operations searching for intersections between primitives represented in a tree data structure and a ray. The primitives may include opaque and alpha triangles used in generating a virtual scene. The hardware-based traversal coprocessor is configured to determine primitives intersected by the ray, and return intersection information to a streaming multiprocessor for further processing. The hardware-based traversal coprocessor is configured to omit reporting of one or more primitives the ray is determined to intersect. The omitted primitives include primitives which are provably capable of being omitted without a functional impact on visualizing the virtual scene.
Type: Grant
Filed: March 31, 2023
Date of Patent: November 19, 2024
Assignee: NVIDIA Corporation
Inventors: Greg Muthler, Tero Karras, Samuli Laine, William Parsons Newhall, Jr., Ronald Charles Babich, Jr., John Burgess, Ignacio Llamas
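A generic software sketch of ray/BVH traversal, only to make the idea concrete; the patent describes a hardware coprocessor, and the omission rule used here (skipping hits farther than the closest opaque hit found so far) is just one simple example of a primitive that can be dropped without changing the visible result. All names and the 1D interval stand-in for bounding boxes are assumptions.

```python
# Generic BVH traversal sketch: inner nodes are culled against the current ray
# interval; leaf hits beyond the closest opaque hit are omitted as irrelevant.
from dataclasses import dataclass

@dataclass
class Node:
    tmin: float                 # 1D stand-in for an AABB along the ray
    tmax: float
    children: tuple = ()        # inner node: child nodes
    prims: tuple = ()           # leaf: (hit_t, opaque) pairs

def traverse(root, ray_tmax=float("inf")):
    closest, reported = ray_tmax, []
    stack = [root]
    while stack:
        node = stack.pop()
        if node.tmin > closest or node.tmax < 0:     # box behind ray or beyond hit
            continue
        if node.children:
            stack.extend(node.children)
            continue
        for hit_t, opaque in node.prims:             # leaf: test primitives
            if hit_t > closest:
                continue                             # omitted: provably irrelevant
            reported.append((hit_t, opaque))
            if opaque:
                closest = min(closest, hit_t)        # shrink the ray interval
    return closest, reported

leaf1 = Node(1.0, 4.0, prims=((2.0, True), (3.5, True)))
leaf2 = Node(5.0, 9.0, prims=((6.0, False),))
print(traverse(Node(0.0, 10.0, children=(leaf2, leaf1))))
```
-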
Patent number: 12144553
Abstract: Disclosed are various embodiments for a dynamic flow apparatus for cardiovascular diagnosis and pre-procedure analysis of individual patients. The dynamic flow apparatus includes a three-dimensional (3D) cardiac model in an enclosed container. The 3D cardiac model can mimic an operation of an actual heart by pumping fluid through the 3D cardiac model and causing the model to expand and contract. Data obtained from the operation of the 3D model can be used during a surgical procedure of an actual heart of an individual.
Type: Grant
Filed: March 5, 2019
Date of Patent: November 19, 2024
Assignee: RUTGERS, THE STATE UNIVERSITY OF NEW JERSEY
Inventor: Partho Sengupta
-
Patent number: 12148099
Abstract: A method, computer readable medium, and system are disclosed for overlaying a cell onto a polygon meshlet. The polygon meshlet may include a grouping of multiple geometric shapes such as triangles, and the cell may include a square-shaped boundary. Additionally, every polygon (e.g., a triangle or other geometric shape) within the polygon meshlet that has at least one edge fully inside the cell is removed to create an intermediate meshlet. A selected vertex is determined from all vertices (e.g., line intersections) of the intermediate meshlet that are located within the cell, based on one or more criteria, and all the vertices of the intermediate meshlet that are located within the cell are replaced with the selected vertex to create a modified meshlet. The modified meshlet is then rendered (e.g., as part of a process to generate a scene to be viewed).
Type: Grant
Filed: September 13, 2023
Date of Patent: November 19, 2024
Assignee: NVIDIA CORPORATION
Inventor: Holger Heinrich Gruen
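A sketch of the two steps the abstract names, with one assumption of mine: the vertex-selection criterion used here (closest to the cell centre) stands in for the unspecified "one or more criteria". Triangles with at least one edge fully inside the cell are removed, then remaining in-cell vertices collapse to the selected vertex.

```python
# Cell-overlay sketch: step 1 removes triangles with an edge inside the cell,
# step 2 collapses in-cell vertices to a single chosen vertex.
import numpy as np

def inside(p, cell):
    (x0, y0), (x1, y1) = cell
    return x0 <= p[0] <= x1 and y0 <= p[1] <= y1

def overlay_cell(vertices, triangles, cell):
    verts = np.asarray(vertices, dtype=float)
    # Step 1: remove triangles that have at least one edge fully inside the cell.
    def keep(tri):
        edges = [(tri[0], tri[1]), (tri[1], tri[2]), (tri[2], tri[0])]
        return not any(inside(verts[a], cell) and inside(verts[b], cell)
                       for a, b in edges)
    kept = [tri for tri in triangles if keep(tri)]
    # Step 2: collapse every in-cell vertex to a single selected vertex.
    centre = np.mean(cell, axis=0)
    in_cell = [i for i in range(len(verts)) if inside(verts[i], cell)]
    if in_cell:
        chosen = min(in_cell, key=lambda i: np.linalg.norm(verts[i] - centre))
        remap = {i: chosen for i in in_cell}
        kept = [tuple(remap.get(i, i) for i in tri) for tri in kept]
        kept = [t for t in kept if len(set(t)) == 3]   # drop degenerate triangles
    return kept

v = [(0, 0), (2, 0), (1, 1), (1, 3), (3, 3)]
t = [(0, 1, 2), (2, 3, 4), (1, 4, 2)]
print(overlay_cell(v, t, cell=((0.5, 0.5), (2.5, 3.5))))
```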
-
Patent number: 12148095
Abstract: Systems and methods for rendering a translucent object are provided. In one aspect, the system includes a processor coupled to a storage medium that stores instructions, which, upon execution by the processor, cause the processor to receive at least one mesh representing at least one translucent object. For each pixel to be rendered, the processor performs a rasterization-based differentiable rendering of the pixel to be rendered using the at least one mesh and determines a plurality of values for the pixel to be rendered based on the rasterization-based differentiable rendering. The rasterization-based differentiable rendering can include performing a probabilistic rasterization process along with aggregation techniques to compute the plurality of values for the pixel to be rendered. The plurality of values includes a set of color channel values and an opacity channel value. Once values are determined for all pixels, an image can be rendered.
Type: Grant
Filed: September 15, 2022
Date of Patent: November 19, 2024
Assignee: LEMON INC.
Inventors: Tiancheng Zhi, Shen Sang, Guoxian Song, Chunpong Lai, Jing Liu, Linjie Luo
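One common soft-rasterization-style aggregation, shown only to illustrate what "probabilistic rasterization plus aggregation" can look like; it is not necessarily the patented formulation. Per-triangle coverage probabilities come from a sigmoid of signed distance, colour is a depth-weighted softmax blend, and the opacity channel is the complement product. All parameters are assumptions.

```python
# Per-pixel aggregation sketch in the spirit of differentiable rasterizers:
# coverage probability -> softmax colour blend -> opacity channel.
import numpy as np

def aggregate_pixel(signed_dists, depths, colors, sigma=1e-2, gamma=1e-2):
    """signed_dists: >0 inside the triangle; depths in (0, 1]; colors (N, 3)."""
    d = np.asarray(signed_dists)
    prob = 1.0 / (1.0 + np.exp(-d / sigma))           # per-triangle coverage
    w = prob * np.exp((1.0 - np.asarray(depths)) / gamma)
    weights = np.append(w, np.exp(0.0 / gamma))       # last entry: background
    weights = weights / weights.sum()                 # softmax-style normalisation
    bg_color = np.zeros(3)
    rgb = weights[:-1] @ np.asarray(colors) + weights[-1] * bg_color
    alpha = 1.0 - np.prod(1.0 - prob)                 # opacity channel value
    return rgb, alpha

rgb, a = aggregate_pixel(signed_dists=[0.003, -0.001],
                         depths=[0.4, 0.6],
                         colors=[[1.0, 0.0, 0.0], [0.0, 0.0, 1.0]])
print(rgb, a)
```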
-
Patent number: 12148116
Abstract: In one embodiment, a method of intermingling stereoscopic and conforming virtual content to a bounded surface is performed at a device that includes one or more processors, non-transitory memory, and one or more displays. The method includes displaying a bounded surface within a native user computer-generated reality (CGR) environment, wherein the bounded surface is displayed based on a first set of world coordinates characterizing the native user CGR environment. The method further includes displaying a first stereoscopic virtual object within a perimeter of a first side of the bounded surface, wherein the first stereoscopic virtual object is displayed in accordance with a second set of world coordinates that is different from the first set of world coordinates characterizing the native user CGR environment.
Type: Grant
Filed: July 11, 2023
Date of Patent: November 19, 2024
Assignee: APPLE INC.
Inventors: Clement P. Boissiere, Samuel L Iglesias, Timothy Robert Oriol, Adam Michael O'Hern
-
Patent number: 12148089
Abstract: Embodiments are disclosed for performing 3-D vectorization. The method includes obtaining a three-dimensional rendered image and a camera position. The method further includes obtaining a triangle mesh representing the three-dimensional rendered image. The method further involves creating a reduced triangle mesh by removing one or more triangles from the triangle mesh. The method further involves subdividing each triangle of the reduced triangle mesh into one or more subdivided triangles. The method further involves performing a mapping of each pixel of the three-dimensional rendered image to the reduced triangle mesh. The method further involves assigning a color value to each vertex of the reduced triangle mesh. The method further involves sorting each triangle of the reduced triangle mesh using a depth value of each triangle. The method further involves generating a two-dimensional triangle mesh using the sorted triangles of the reduced triangle mesh.
Type: Grant
Filed: August 16, 2022
Date of Patent: November 19, 2024
Assignee: Adobe Inc.
Inventors: Ankit Phogat, Xin Sun, Vineet Batra, Sumit Dhingra, Nathan A. Carr, Milos Hasan
-
Patent number: 12146838
Abstract: In an embodiment, a method for detecting cracks in road segments is provided. The method includes: receiving raw range data for a first image by a computing device from an imaging system, wherein the first image comprises a plurality of pixels; receiving raw intensity data for the first image by the computing device from an imaging system; fusing the raw range data and raw intensity data to generate fused data for the first image by the computing device; extracting a set of features from the fused data for the first image by the computing device; providing the set of features to a trained neural network by the computing device; and generating a label for each pixel of the plurality of pixels by the trained neural network, wherein a received label for a pixel indicates whether or not the pixel is associated with a crack.
Type: Grant
Filed: May 28, 2021
Date of Patent: November 19, 2024
Assignee: THE BOARD OF TRUSTEES OF THE UNIVERSITY OF ALABAMA
Inventors: Wei Song, Shanglian Zhou
-
Patent number: 12148077
Abstract: The present disclosure relates to techniques for providing an interactive computer-generated reality environment for creating a virtual drawing using one or more electronic devices. Specifically, the described techniques provide a user with a computer-generated reality environment, which can be based on different types of realities including virtual reality and mixed reality, for creating a virtual drawing on a drawing surface within the computer-generated reality environment. The computer-generated reality environment provides the user with a realistic and immersive experience while creating the virtual drawing.
Type: Grant
Filed: October 2, 2023
Date of Patent: November 19, 2024
Assignee: Apple Inc.
Inventor: Edwin Iskandar
-
Patent number: 12147766
Abstract: Implementations are described herein for learning mappings between a domain specific language (DSL) and images, and leveraging those mappings for various purposes. In various implementations, a method for using a DSL to generate training data may include processing data indicative of ground truth image(s) depicting a real plant using a trained image-to-DSL machine learning (ML) model to generate a first expression in the DSL that describes structure of the real plant. The first expression may include a plurality of parameters, and may be processed to programmatically generate a plurality of synthetic DSL expressions. Each respective synthetic DSL expression may describe structure of a respective synthetic plant for which parameter(s) have been altered from the first expression. The synthetic DSL expressions may be processed using a renderer to create three-dimensional (3D) synthetic plant models. Two-dimensional (2D) synthetic images may be generated that depict the 3D synthetic plant models in an area.
Type: Grant
Filed: October 19, 2022
Date of Patent: November 19, 2024
Assignee: Deere & Company
Inventors: Shuhao Fu, Alexander Ngai, Yueqi Li
-
Patent number: 12148071
Abstract: Methods and apparatus for MRI reconstruction and data acquisition are provided. The method for MRI reconstruction includes: obtaining MRI images and k-space training data and dividing them into anatomical sections; training reconstruction models to predict MRI images from k-space data for individual anatomical sections; while scanning an object, identifying the anatomical sections by scout scans or navigator signals; selecting suitable reconstruction models; reconstructing anatomical sections using the selected models; and merging the images from anatomical sections. Reconstructed images obtained by the above methods and apparatus have better image quality, such as reduced noise and artifacts, and less MRI data is needed for the same image quality.
Type: Grant
Filed: October 29, 2021
Date of Patent: November 19, 2024
Assignee: HANGZHOU WEIYING MEDICAL TECHNOLOGY CO., LTD.
Inventors: Ruixing Zhu, Zhizun Zhang, Hangxuan Li
-
Patent number: 12149815
Abstract: A method of image processing includes: capturing a first video streaming from a physical environment using an image capturing device; conducting image processing on the first video streaming to provide a second video streaming to a virtual camera module; capturing a frame of a display area of a display device when the virtual camera is opened by an application; acquiring a position of the second video streaming in a frame of the display area, and generating a user interface according to the position of the second video streaming if the frame of the display area comprises the second video streaming; and receiving an operation instruction through the user interface, and operating the image capturing device according to the operation instruction.
Type: Grant
Filed: March 22, 2022
Date of Patent: November 19, 2024
Assignee: AVer Information Inc.
Inventors: Po-Hsun Wu, Yun-Long Sie
-
Patent number: 12148109
Abstract: A system and method for displaying a virtual three-dimensional environment, including: displaying at least a portion of the virtual three-dimensional environment in a head-mounted display, where the head-mounted display includes a retinal tracking device; enabling a user to interact with the virtual three-dimensional environment; displaying a virtual representation of a written work within the virtual three-dimensional environment in the head-mounted display; enabling the user to interact with the virtual representation of the written work; tracking the user's reading position in the representation of the written work via the retinal tracking device; determining the content of the representation of the written work at the user's reading position; and modifying the virtual three-dimensional environment based at least in part on the content at the user's reading position.
Type: Grant
Filed: April 5, 2023
Date of Patent: November 19, 2024
Inventor: Stephen Constantinides
-
Patent number: 12148114
Abstract: A collaborative session (e.g., a virtual time capsule) in which access to a collaborative object with an associated material and added virtual content is provided to users. In one example of the collaborative session, a user selects the associated material of the collaborative object. Physical characteristics are assigned to the collaborative object as a function of the associated material to be perceived by the participants when the collaborative object is manipulated. In one example, the material associated with the collaborative object is metal, wherein the interaction between the users and the collaborative object generates a response of the collaborative object that is indicative of the physical properties of metal, such as its inertial, acoustic, and malleability properties.
Type: Grant
Filed: August 31, 2022
Date of Patent: November 19, 2024
Assignee: Snap Inc.
Inventors: Youjean Cho, Chen Ji, Fannie Liu, Andrés Monroy-Hernández, Tsung-Yu Tsai, Rajan Vaish
-
Patent number: 12148153
Abstract: A system and method to detect abnormality of subjects directly from MRI k-space data are provided. The system includes: at least one computer hardware processor, at least one non-transitory computer-readable storage medium, and at least one computer program stored in the at least one non-transitory computer-readable storage medium and executable on the at least one computer hardware processor, wherein the at least one computer program includes: an acquisition module, configured to obtain target MRI k-space data by scanning a subject, wherein the target MRI k-space data are fully-sampled or undersampled or sparse MRI k-space data; a detection module, configured to obtain and output detection outcomes from the target MRI k-space data using detection models; and a model training module, configured to train the detection models based on training data. Hence, the MRI scan time and related cost are reduced, and the accuracy of the detection results is increased.
Type: Grant
Filed: October 29, 2021
Date of Patent: November 19, 2024
Assignee: HANGZHOU WEIYING MEDICAL TECHNOLOGY CO., LTD.
Inventors: Ruixing Zhu, Zhizun Zhang, Hangxuan Li
-
Patent number: 12141926
Abstract: A human-machine interaction (HMI) user interface (1) connected to at least one controller or actuator of a complex system (SYS) having a plurality of system components, C, represented by associated blocks, B, of a hierarchical system model (SYS-MOD) stored in a database, DB (5), said user interface (1) comprising: an input unit (2) adapted to receive user input commands; and a display unit (3) having a screen adapted to display a scene within a three-dimensional workspace, WSB1, associated with a selectable block, B1, representing a corresponding system component, C, of said complex system (SYS) by means of a virtual camera, VCB1, associated to the respective block, B1, and positioned in a three-dimensional coordinate system within a loaded three-dimensional workspace, WSB1, of said block, B1, wherein the virtual camera, VCB1, is moveable automatically in the three-dimensional workspace, WSB1, of the associated block, B1, in response to a user input command input to the input unit (2) of said user interface (1).
Type: Grant
Filed: July 22, 2021
Date of Patent: November 12, 2024
Assignee: GALACTIFY GMBH
Inventors: Maximilian Rieger, Gregor Hohmann
-
Patent number: 12140767
Abstract: To accommodate variations in the interpupillary distances associated with different users, a head-mounted device may have left-eye and right-eye optical modules that move with respect to each other. Each optical module may have a display that creates an image and a corresponding lens that provides the image to an associated eye box for viewing by a user. The optical modules each include a lens barrel to which the display and lens of that optical module are mounted and a head-mounted optical module illumination system. The illumination system may have light-emitting devices such as light-emitting diodes that extend along some or all of a peripheral edge of the display. The light-emitting diodes may be mounted on a flexible printed circuit with a tail that extends through a lens barrel opening. A stiffener for the flexible printed circuit may have openings that receive the light-emitting diodes.
Type: Grant
Filed: October 17, 2023
Date of Patent: November 12, 2024
Assignee: Apple Inc.
Inventors: Marinus Meursing, Keenan Molner, Chengyi Yang, Florian R. Fournier, Ivan S. Maric, Jason C. Sauers
-
Patent number: 12142013
Abstract: Methods and devices for encoding and decoding a data stream representative of a 3D volumetric scene comprising haptic features associated with objects of the 3D scene are disclosed. At the encoding, haptic features are associated with objects of the scene, for instance as haptic maps. Haptic components are stored in points of the 3D scene as color may be. These components are projected onto patch pictures which are packed in atlas images. At the decoding, haptic components are un-projected onto reconstructed points as color may be according to the depth component of pixels of the decoded atlases.
Type: Grant
Filed: September 28, 2020
Date of Patent: November 12, 2024
Assignee: INTERDIGITAL CE PATENT HOLDINGS
Inventors: Fabien Danieau, Julien Fleureau, Gaetan Moisson-Franckhauser, Philippe Guillotel
-
Patent number: 12142075
Abstract: This facial authentication device is provided with: a detecting means for detecting a plurality of facial feature point candidates, using a plurality of different techniques, for at least one facial feature point of a target face, from a plurality of facial images containing the target face; a reliability calculating means for calculating a reliability of each facial image, from statistical information obtained on the basis of the plurality of detected facial feature point candidates; and a selecting means for selecting a facial image to be used for authentication of the target face, from among the plurality of facial images, on the basis of the calculated reliabilities.
Type: Grant
Filed: July 17, 2023
Date of Patent: November 12, 2024
Assignee: NEC CORPORATION
Inventor: Koichi Takahashi
-
Patent number: 12142077
Abstract: In a computer-implemented method of augmenting a dataset used in facial expression analysis, a first facial image and a second facial image are added to a training/testing dataset and mapped to two respective points in a continuous dimensional emotion space. The position of a third point in the continuous dimensional emotion space between the first two points is determined. Augmentation is achieved when a labelled facial image is derived from the third point based on its position relative to the first and second facial images.
Type: Grant
Filed: March 4, 2022
Date of Patent: November 12, 2024
Assignee: Opsis Pte., Ltd.
Inventor: Stefan Winkler
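A minimal sketch of the interpolation idea under assumptions of my own: the emotion space is taken to be valence/arousal, and the "derived" third image is produced by simple pixel blending, which stands in for whatever image-synthesis step the patent uses. The third point's label is the same interpolation applied to the two source labels.

```python
# Augmentation sketch: place a third point between two labelled points in a
# valence/arousal space and label a blended image with the interpolated label.
import numpy as np

def augment_pair(img_a, label_a, img_b, label_b, t=0.5):
    """Interpolate both the (valence, arousal) label and the image with weight t."""
    new_label = (1 - t) * np.asarray(label_a) + t * np.asarray(label_b)
    new_image = (1 - t) * img_a + t * img_b       # stand-in for image synthesis
    return new_image, tuple(new_label)

face_a = np.random.rand(64, 64, 3)                # e.g. mildly happy
face_b = np.random.rand(64, 64, 3)                # e.g. surprised
img, label = augment_pair(face_a, (0.4, 0.2), face_b, (0.3, 0.8), t=0.25)
print(img.shape, label)
```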
-
Patent number: 12138553
Abstract: Systems and methods for a computer-based process that detects improper behavior of avatars in a computer-generated environment, and marks these avatars accordingly, so that other users may perceive marked avatars as bad actors. Systems of embodiments of the disclosure may monitor avatar speech, text, and actions. If the system detects behavior it deems undesirable, such as behavior against any laws or rules of the environment, abusive or obscene language, and the like, avatars committing or associated with this behavior are marked in some manner that is visually apparent to other users. In this manner, improperly-behaving avatars may be more easily recognized and avoided, thus improving the experience of other users.
Type: Grant
Filed: July 24, 2023
Date of Patent: November 12, 2024
Assignee: Adeia Guides Inc.
Inventors: Govind Raveendranathan Nair, Sangeeta Parida
-
Patent number: 12142171
Abstract: Provided are a display device and a driving method therefor. Each pixel island in a display panel is divided into a plurality of sub-pixel subdivision units, different monocular viewpoint images are formed by rendering different grayscales for different sub-pixel subdivision units, and a main lobe angle of each lens is adjusted to satisfy that the monocular viewpoint images displayed by the sub-pixel subdivision units in a pixel island are projected to a corresponding independent visible region respectively through different lenses to form a viewpoint, so as to satisfy conditions for achieving super-multi-viewpoint 3D display.
Type: Grant
Filed: December 21, 2020
Date of Patent: November 12, 2024
Assignee: BOE Technology Group Co., Ltd.
Inventors: Chunmiao Zhou, Tao Hong, Kuanjun Peng
-
Patent number: 12140791
Abstract: A multi-zone backlight and multi-zone multiview display with multiple zones selectively provide broad-angle emitted light corresponding to a two-dimensional (2D) image and directional emitted light corresponding to a multiview image to each zone of the multiple zones. The multi-zone backlight includes a broad-angle backlight to provide the broad-angle emitted light and a multiview backlight to provide the directional emitted light. Each of the broad-angle backlight and the multiview backlight is divided into a first zone and a second zone that may be independently activated to provide the broad-angle emitted light and multiview emitted light, respectively. The multi-zone multiview display includes the broad-angle backlight and the multiview backlight and further includes an array of light valves configured to modulate the broad-angle emitted light as a two-dimensional image and the directional emitted light as a multiview image on a zone-by-zone basis.
Type: Grant
Filed: October 19, 2021
Date of Patent: November 12, 2024
Assignee: LEIA INC.
Inventors: David A. Fattal, Thomas Hoekman
-
Patent number: 12143728
Abstract: The technical problem of enhancing the quality of an image captured by a front facing camera in low light conditions is addressed by displaying the viewfinder of a front facing camera with an illuminating border, termed a viewfinder ring flash. A viewfinder ring flash acts as a ring flash in low light conditions. A viewfinder ring flash may be automatically generated and presented in the camera view user interface (UI) when the digital sensor of a front facing camera detects a low light indication based on intensity of incident light detected by the digital image sensor of the camera.
Type: Grant
Filed: March 20, 2023
Date of Patent: November 12, 2024
Assignee: Snap Inc.
Inventors: Newar Husam Al Majid, Christine Barron, Ryan Chan, Bertrand Saint-Preux, Shoshana Sternstein
-
Patent number: 12136242
Abstract: An exemplary system receives first data representing one or more buildings, and generates second data representing the one or more buildings. Generating the second data includes, for each of the one or more buildings: (i) determining, based on the first data, a plurality of first edges defining an exterior surface of at least a portion of the building, where the first edges interconnect at a plurality of first points, (ii) encoding, in the second data, information corresponding to the quantity of the first points, (iii) encoding, in the second data, an absolute position of one of the first points, and (iv) for each of the remaining first points, encoding, in the second data, a position of that first point relative to a position of at least another one of the first points. The system outputs the second data.
Type: Grant
Filed: June 6, 2022
Date of Patent: November 5, 2024
Assignee: Apple Inc.
Inventors: David Flynn, Khaled Mammou
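A compact sketch of the encoding the abstract describes, with assumptions of mine for everything the abstract leaves open: the byte layout, the float precision, and the choice of "at least another one of the first points" as the immediately preceding point. It writes the point count, one absolute position, then each remaining point as an offset from its predecessor.

```python
# Encoding sketch: point count, one absolute position, then per-point deltas
# relative to the previously written point; decode reverses the process.
import struct

def encode_building(points):
    """points: list of (x, y, z) vertices where the building's edges meet."""
    buf = struct.pack("<I", len(points))                    # quantity of points
    buf += struct.pack("<3f", *points[0])                   # one absolute position
    for prev, cur in zip(points, points[1:]):
        delta = tuple(c - p for c, p in zip(cur, prev))     # relative position
        buf += struct.pack("<3f", *delta)
    return buf

def decode_building(buf):
    (n,) = struct.unpack_from("<I", buf, 0)
    pts = [struct.unpack_from("<3f", buf, 4)]
    for i in range(1, n):
        d = struct.unpack_from("<3f", buf, 4 + 12 * i)
        pts.append(tuple(a + b for a, b in zip(pts[-1], d)))
    return pts

corners = [(0.0, 0.0, 0.0), (10.0, 0.0, 0.0), (10.0, 8.0, 0.0), (0.0, 8.0, 0.0)]
print(decode_building(encode_building(corners)))
```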
-
Patent number: 12136235
Abstract: Human model recovery may be realized utilizing pre-trained artificial neural networks. A first neural network may be trained to determine body keypoints of a person based on image(s) of a person. A second neural network may be trained to predict pose parameters associated with the person based on the body keypoints. A third neural network may be trained to predict shape parameters associated with the person based on depth image(s) of the person. A 3D human model may then be generated based on the pose and shape parameters respectively predicted by the second and third neural networks. The training of the second neural network may be conducted using synthetically generated body keypoints and the training of the third neural network may be conducted using normal maps. The pose and shape parameters predicted by the second and third neural networks may be further optimized through an iterative optimization process.
Type: Grant
Filed: December 22, 2021
Date of Patent: November 5, 2024
Assignee: Shanghai United Imaging Intelligence Co., Ltd.
Inventors: Meng Zheng, Srikrishna Karanam, Ziyan Wu
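A pipeline skeleton only: the three stages mirror the abstract, but every network here is a stub, and the joint count, SMPL-style parameter sizes (72 pose / 10 shape), and the final mesh step are assumptions rather than anything stated in the patent.

```python
# Skeleton of the three-stage recovery pipeline with placeholder networks:
# RGB -> keypoints -> pose parameters; depth -> shape parameters; combine.
import numpy as np

def keypoint_net(rgb_image):
    """Stage 1 stand-in: predict 2D body keypoints from an RGB image."""
    return np.random.rand(17, 2)                 # e.g. 17 joints (assumption)

def pose_net(keypoints):
    """Stage 2 stand-in: predict pose parameters from keypoints."""
    return np.random.rand(72)                    # axis-angle per joint (assumption)

def shape_net(depth_image):
    """Stage 3 stand-in: predict shape parameters from a depth image."""
    return np.random.rand(10)

def recover_human_model(rgb_image, depth_image):
    pose = pose_net(keypoint_net(rgb_image))
    shape = shape_net(depth_image)
    # A parametric body model would turn (pose, shape) into a 3D mesh here.
    return {"pose": pose, "shape": shape}

model = recover_human_model(np.zeros((256, 256, 3)), np.zeros((256, 256)))
print(model["pose"].shape, model["shape"].shape)
```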