Three-dimension Patents (Class 345/419)
-
Patent number: 12143728
Abstract: The technical problem of enhancing the quality of an image captured by a front facing camera in low light conditions is addressed by displaying the viewfinder of a front facing camera with an illuminating border, termed a viewfinder ring flash. A viewfinder ring flash acts as a ring flash in low light conditions. A viewfinder ring flash may be automatically generated and presented in the camera view user interface (UI) when the digital sensor of a front facing camera detects a low light indication based on intensity of incident light detected by the digital image sensor of the camera.
Type: Grant
Filed: March 20, 2023
Date of Patent: November 12, 2024
Assignee: Snap Inc.
Inventors: Newar Husam Al Majid, Christine Barron, Ryan Chan, Bertrand Saint-Preux, Shoshana Sternstein
-
Patent number: 12142077
Abstract: In a computer-implemented method of augmenting a dataset used in facial expression analysis, a first facial image and a second facial image are added to a training/testing dataset and mapped to two respective points in a continuous dimensional emotion space. The position of a third point in the continuous dimensional emotion space between the first two points is determined. Augmentation is achieved when a labelled facial image is derived from the third point based on its position relative to the first and second facial expressions.
Type: Grant
Filed: March 4, 2022
Date of Patent: November 12, 2024
Assignee: Opsis Pte., Ltd.
Inventor: Stefan Winkler
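The interpolation step this abstract describes can be sketched as a linear blend between two labelled points. The 2D valence-arousal coordinates and the blending scheme below are illustrative assumptions, not details taken from the patent:

```python
def interpolate_emotion_point(p1, p2, alpha):
    """Linearly interpolate between two points in a continuous
    dimensional emotion space (e.g. valence-arousal).
    alpha = 0.0 yields p1, alpha = 1.0 yields p2."""
    return tuple((1 - alpha) * a + alpha * b for a, b in zip(p1, p2))

# Two labelled facial images mapped to emotion-space points (illustrative):
neutral = (0.0, 0.0)   # (valence, arousal)
happy = (0.8, 0.6)

# A synthetic third point halfway between them, from which an
# augmented, labelled facial image would then be derived:
midpoint = interpolate_emotion_point(neutral, happy, 0.5)
print(midpoint)  # (0.4, 0.3)
```

Varying `alpha` across (0, 1) yields a family of intermediate labels, which is what makes this a dataset-augmentation technique rather than a single synthetic sample.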
-
Patent number: 12138553
Abstract: Systems and methods for a computer-based process that detects improper behavior of avatars in a computer-generated environment, and marks these avatars accordingly, so that other users may perceive marked avatars as bad actors. Systems of embodiments of the disclosure may monitor avatar speech, text, and actions. If the system detects behavior it deems undesirable, such as behavior against any laws or rules of the environment, abusive or obscene language, and the like, avatars committing or associated with this behavior are marked in some manner that is visually apparent to other users. In this manner, improperly-behaving avatars may be more easily recognized and avoided, thus improving the experience of other users.
Type: Grant
Filed: July 24, 2023
Date of Patent: November 12, 2024
Assignee: Adeia Guides Inc.
Inventors: Govind Raveendranathan Nair, Sangeeta Parida
-
Patent number: 12142075
Abstract: This facial authentication device is provided with: a detecting means for detecting a plurality of facial feature point candidates, using a plurality of different techniques, for at least one facial feature point of a target face, from a plurality of facial images containing the target face; a reliability calculating means for calculating a reliability of each facial image, from statistical information obtained on the basis of the plurality of detected facial feature point candidates; and a selecting means for selecting a facial image to be used for authentication of the target face, from among the plurality of facial images, on the basis of the calculated reliabilities.
Type: Grant
Filed: July 17, 2023
Date of Patent: November 12, 2024
Assignee: NEC CORPORATION
Inventor: Koichi Takahashi
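One plausible way to realize the reliability score this abstract describes is to treat the spread of the candidate positions returned by the different detection techniques as an inverse measure of confidence. This is an illustrative sketch under that assumption, not NEC's actual formulation:

```python
import statistics

def reliability(candidates):
    """Score a facial image by how tightly the feature-point
    candidates from different detectors agree. candidates is a
    list of (x, y) positions for the same facial feature point;
    a smaller spread yields a higher reliability."""
    xs = [c[0] for c in candidates]
    ys = [c[1] for c in candidates]
    spread = statistics.pstdev(xs) + statistics.pstdev(ys)
    return 1.0 / (1.0 + spread)

def select_image(images):
    """images maps an image id to its candidate list; pick the id
    with the highest reliability for authentication."""
    return max(images, key=lambda k: reliability(images[k]))

images = {
    "frame_a": [(100, 50), (101, 51), (99, 50)],   # detectors agree
    "frame_b": [(100, 50), (130, 80), (70, 20)],   # detectors disagree
}
print(select_image(images))  # frame_a
```

The design choice here is that agreement among independent detectors is evidence the image is clean enough for authentication; a blurred or occluded frame tends to scatter the candidates.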
-
Patent number: 12142171
Abstract: Provided are a display device and a driving method therefor. Each pixel island in a display panel is divided into a plurality of sub-pixel subdivision units, different monocular viewpoint images are formed by rendering different grayscales for different sub-pixel subdivision units, and a main lobe angle of each lens is adjusted to satisfy that the monocular viewpoint images displayed by the sub-pixel subdivision units in a pixel island are projected to a corresponding independent visible region respectively through different lenses to form a viewpoint, so as to satisfy conditions for achieving super-multi-viewpoint 3D display.
Type: Grant
Filed: December 21, 2020
Date of Patent: November 12, 2024
Assignee: BOE Technology Group Co., Ltd.
Inventors: Chunmiao Zhou, Tao Hong, Kuanjun Peng
-
Patent number: 12142013
Abstract: Methods and devices for encoding and decoding a data stream representative of a 3D volumetric scene comprising haptic features associated with objects of the 3D scene are disclosed. At the encoding, haptic features are associated with objects of the scene, for instance as haptic maps. Haptic components are stored in points of the 3D scene, as color components may be. These components are projected onto patch pictures which are packed in atlas images. At the decoding, haptic components are un-projected onto reconstructed points, as color components may be, according to the depth component of pixels of the decoded atlases.
Type: Grant
Filed: September 28, 2020
Date of Patent: November 12, 2024
Assignee: INTERDIGITAL CE PATENT HOLDINGS
Inventors: Fabien Danieau, Julien Fleureau, Gaetan Moisson-Franckhauser, Philippe Guillotel
-
Patent number: 12140791
Abstract: A multi-zone backlight and multi-zone multiview display with multiple zones selectively provide broad-angle emitted light corresponding to a two-dimensional (2D) image and directional emitted light corresponding to a multiview image to each zone of the multiple zones. The multi-zone backlight includes a broad-angle backlight to provide the broad-angle emitted light and a multiview backlight to provide the directional emitted light. Each of the broad-angle backlight and the multiview backlight is divided into a first zone and a second zone that may be independently activated to provide the broad-angle emitted light and multiview emitted light, respectively. The multi-zone multiview display includes the broad-angle backlight and the multiview backlight and further includes an array of light valves configured to modulate the broad-angle emitted light as a two-dimensional image and the directional emitted light as a multiview image on a zone-by-zone basis.
Type: Grant
Filed: October 19, 2021
Date of Patent: November 12, 2024
Assignee: LEIA INC.
Inventors: David A. Fattal, Thomas Hoekman
-
Patent number: 12141926
Abstract: A human-machine interaction, HMI, user interface (1) connected to at least one controller or actuator of a complex system (SYS) having a plurality of system components, C, represented by associated blocks, B, of a hierarchical system model (SYS-MOD) stored in a database, DB, (5) said user interface (1) comprising: an input unit (2) adapted to receive user input commands and a display unit (3) having a screen adapted to display a scene within a three-dimensional workspace, WSB1, associated with a selectable block, B1, representing a corresponding system component, C, of said complex system (SYS) by means of a virtual camera, VCB1, associated to the respective block, B1, and positioned in a three-dimensional coordinate system within a loaded three-dimensional workspace, WSB1, of said block, B1, wherein the virtual camera, VCB1, is moveable automatically in the three-dimensional workspace, WSB1, of the associated block, B1, in response to a user input command input to the input unit (2) of said user interface (1).
Type: Grant
Filed: July 22, 2021
Date of Patent: November 12, 2024
Assignee: GALACTIFY GMBH
Inventors: Maximilian Rieger, Gregor Hohmann
-
Patent number: 12140767
Abstract: To accommodate variations in the interpupillary distances associated with different users, a head-mounted device may have left-eye and right-eye optical modules that move with respect to each other. Each optical module may have a display that creates an image and a corresponding lens that provides the image to an associated eye box for viewing by a user. The optical modules each include a lens barrel to which the display and lens of that optical module are mounted and a head-mounted optical module illumination system. The illumination system may have light-emitting devices such as light-emitting diodes that extend along some or all of a peripheral edge of the display. The light-emitting diodes may be mounted on a flexible printed circuit with a tail that extends through a lens barrel opening. A stiffener for the flexible printed circuit may have openings that receive the light-emitting diodes.
Type: Grant
Filed: October 17, 2023
Date of Patent: November 12, 2024
Assignee: Apple Inc.
Inventors: Marinus Meursing, Keenan Molner, Chengyi Yang, Florian R. Fournier, Ivan S. Maric, Jason C. Sauers
-
Patent number: 12135441
Abstract: A lenticular lens having an array of elongate lenticular elements extending parallel to one another, the array having a first edge and a second edge extending parallel to the elongate lenticular elements; and a central line that is centered between the first edge and the second edge; characterized in that the focal length of the lenticular elements gradually decreases from the first edge of the array towards the central line as well as from the second edge of the array towards the central line. This improves the angular performance of the lenticular lens, so that at short viewing distances the image quality near the edges of the display is not impaired.
Type: Grant
Filed: December 30, 2019
Date of Patent: November 5, 2024
Assignee: ZHANGJIAGANG KANGDE XIN OPTRONICS MATERIAL CO., LTD.
Inventors: Jan Van Der Horst, Silvino Jose Antuna Presa, Bas Koen Böggemann
-
Patent number: 12136235
Abstract: Human model recovery may be realized utilizing pre-trained artificial neural networks. A first neural network may be trained to determine body keypoints of a person based on image(s) of the person. A second neural network may be trained to predict pose parameters associated with the person based on the body keypoints. A third neural network may be trained to predict shape parameters associated with the person based on depth image(s) of the person. A 3D human model may then be generated based on the pose and shape parameters respectively predicted by the second and third neural networks. The training of the second neural network may be conducted using synthetically generated body keypoints and the training of the third neural network may be conducted using normal maps. The pose and shape parameters predicted by the second and third neural networks may be further optimized through an iterative optimization process.
Type: Grant
Filed: December 22, 2021
Date of Patent: November 5, 2024
Assignee: Shanghai United Imaging Intelligence Co., Ltd.
Inventors: Meng Zheng, Srikrishna Karanam, Ziyan Wu
-
Patent number: 12136242
Abstract: An exemplary system receives first data representing one or more buildings, and generates second data representing the one or more buildings. Generating the second data includes, for each of the one or more buildings: (i) determining, based on the first data, a plurality of first edges defining an exterior surface of at least a portion of the building, where the first edges interconnect at a plurality of first points, (ii) encoding, in the second data, information corresponding to the quantity of the first points, (iii) encoding, in the second data, an absolute position of one of the first points, and (iv) for each of the remaining first points, encoding, in the second data, a position of that first point relative to a position of at least another one of the first points. The system outputs the second data.
Type: Grant
Filed: June 6, 2022
Date of Patent: November 5, 2024
Assignee: Apple Inc.
Inventors: David Flynn, Khaled Mammou
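The relative-position encoding in steps (ii)–(iv) can be sketched as a simple delta scheme: record the point count, one absolute vertex, and each remaining vertex as an offset from its predecessor. The layout and the predecessor-based prediction below are illustrative assumptions, not Apple's actual bitstream format:

```python
def encode_edges(points):
    """Encode polygon vertices: the count, the first point stored
    absolutely, and each remaining point as a delta from the
    previous point (deltas are small, so they compress well)."""
    deltas = [(x2 - x1, y2 - y1)
              for (x1, y1), (x2, y2) in zip(points, points[1:])]
    return {"count": len(points), "first": points[0], "deltas": deltas}

def decode_edges(encoded):
    """Invert encode_edges by accumulating the deltas."""
    pts = [encoded["first"]]
    for dx, dy in encoded["deltas"]:
        x, y = pts[-1]
        pts.append((x + dx, y + dy))
    return pts

# A small building footprint; the round trip is lossless:
footprint = [(1000, 2000), (1010, 2000), (1010, 2008), (1000, 2008)]
assert decode_edges(encode_edges(footprint)) == footprint
```

The win is that neighboring building vertices are close together, so the deltas need far fewer bits than repeated absolute coordinates.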
-
Patent number: 12136475
Abstract: A medical information management apparatus includes a hardware processor that: generates first medical information that does not include subject identification information that uniquely identifies a subject on a basis of medical image information regarding the subject; generates second medical information that includes the subject identification information on a basis of the medical image information; and causes the first medical information that has been generated to be stored in a first medical information storage part.
Type: Grant
Filed: February 25, 2022
Date of Patent: November 5, 2024
Assignee: Konica Minolta, Inc.
Inventors: Akihiro Kawabata, Jo Shikama
-
Patent number: 12135919
Abstract: A centralized design engine receives a problem specification from an end-user and classifies that problem specification in a large database of previously received problem specifications. Upon identifying similar problem specifications in the large database, the design engine selects design strategies associated with those similar problem specifications. A given design strategy includes one or more optimization algorithms, one or more geometry kernels, and one or more analysis tools. The design engine executes an optimization algorithm to generate a set of parameters that reflect geometry. The design engine then executes a geometry kernel to generate geometry that reflects those parameters, and generates analysis results for each geometry. The optimization algorithms may then improve the generated geometries based on the analysis results in an iterative fashion. When suitable geometries are discovered, the design engine displays the geometries to the end-user, along with the analysis results.
Type: Grant
Filed: January 23, 2023
Date of Patent: November 5, 2024
Assignee: AUTODESK, INC.
Inventor: Francesco Iorio
-
Patent number: 12131582
Abstract: This facial authentication device is provided with: a detecting means for detecting a plurality of facial feature point candidates, using a plurality of different techniques, for at least one facial feature point of a target face, from a plurality of facial images containing the target face; a reliability calculating means for calculating a reliability of each facial image, from statistical information obtained on the basis of the plurality of detected facial feature point candidates; and a selecting means for selecting a facial image to be used for authentication of the target face, from among the plurality of facial images, on the basis of the calculated reliabilities.
Type: Grant
Filed: July 18, 2023
Date of Patent: October 29, 2024
Assignee: NEC CORPORATION
Inventor: Koichi Takahashi
-
Patent number: 12131202
Abstract: Disclosed herein are system, apparatus, article of manufacture, method and/or computer program product embodiments, and/or combinations and sub-combinations thereof, for operating a user application including a user rendered context operating on a user device and maintaining a current render tree, and a user programmer context operating on a cloud computing system. The user rendered context of the user application can receive an input; and send, responsive to receiving the input, an event notification to the user programmer context of the user application. The user rendered context can further receive, from the user programmer context, a difference tree to update the current render tree, and update the current render tree based on the difference tree.
Type: Grant
Filed: May 26, 2022
Date of Patent: October 29, 2024
Assignee: ROKU, INC.
Inventors: Mark Young, John Roberts, Chakri Kodali, Cameron Esfahani, David Lee Stern, Anthony John Wood, Benjamin Combee, Ilya Asnis
-
Patent number: 12131243
Abstract: Methods, systems, and apparatus, including computer programs encoded on computer storage media, for generating data specifying a three-dimensional mesh of an object using an auto-regressive neural network.
Type: Grant
Filed: February 8, 2021
Date of Patent: October 29, 2024
Assignee: DeepMind Technologies Limited
Inventors: Charlie Thomas Curtis Nash, Iaroslav Ganin, Seyed Mohammadali Eslami, Peter William Battaglia
-
Patent number: 12129646
Abstract: One variation of the tile display includes a set of tile assemblies, each tile assembly includes: a base plate; a tile panel; a tile interface; and a set of linear actuator assemblies arranged in a radial pattern about the base plate and cooperating to constrain the tile panel in angular roll, linear heave, and linear sway motion relative to the base plate. Each linear actuator assembly includes: a bearing housing defining a linear bearing, a floating bearing, and a through-hole; an actuator mounted to the bearing housing; a distal link coupled to the tile interface; a first support boom running through the linear bearing; a second support boom running through the floating bearing; and a driven boom running through the through-hole of the bearing housing. Each tile assembly also includes a primary controller configured to maneuver tile panels over ranges of angular pitch, angular yaw, and linear surge positions.
Type: Grant
Filed: July 20, 2022
Date of Patent: October 29, 2024
Assignee: BREAKFAST, LLC
Inventors: Andrew Laska, Andrew Zolty, Mattias Gunneras, Andrew McIntyre, Brandon Orr, Will Rigby, Michael Fazio, Mohammad Hosein Asgari, Lee Marom, Sebastian Schloesser
-
Patent number: 12130959
Abstract: Described herein are embodiments of methods and apparatuses for an augmented reality system wherein a wearable augmented reality apparatus may efficiently manage and transfer data and approximate the position of its wearer and the perceived position of a virtual avatar in space. The embodiments may include methods of using data from various external or embedded sensors to estimate and/or determine fields related to the user, apparatus, and/or the avatar. The embodiments may further include methods by which the apparatus can approximate the perceived position of the apparatus and the avatar relative to the user when no predefined path is specified. The embodiments may further include methods by which information about a user's path is compressed and transferred.
Type: Grant
Filed: March 24, 2022
Date of Patent: October 29, 2024
Assignee: Peloton Interactive, Inc.
Inventors: AbdurRahman Bin Shahzad Bhatti, Jensen Rarig Steven Turner
-
Patent number: 12132775
Abstract: The operating method of a server for providing a metaverse in which a call between user terminals may be performed may include: generating a first avatar based on profile information of a first user terminal and a second avatar based on profile information of a second user terminal; providing a map space in which the first avatar and the second avatar are able to travel; mediating a call between the first user terminal and the second user terminal based on whether a plurality of avatars including the first avatar and the second avatar have entered a first audio space included in the map space; and completing matching between the first user terminal and the second user terminal when a first sign of attraction to the second avatar is transmitted from the first user terminal and a sign of attraction to the first avatar is transmitted from the second user terminal.
Type: Grant
Filed: October 4, 2022
Date of Patent: October 29, 2024
Assignee: Hyperconnect Inc.
Inventors: Sang Il Ahn, Yoon Woo Jang, Dong Woo Kang, Ye Li Kim, Seo Hee Choi, Yae Jin Han, Sun Yeop Lee, Han Ryeol Seong, Eun Hee Choi, Dan Ha Kim
-
Patent number: 12124775
Abstract: A system and method for generating computerized floor plans is provided. The system comprises a mobile computing device, such as a smart cellular telephone, a tablet computer, etc., having an internal digital gyroscope and camera, and an interior modeling software engine that interacts with the gyroscope and camera to allow a user to quickly and conveniently take measurements of interior building features, and to create computerized floor plans of such features from any location within a space, without requiring the user to stay in a single location while taking the measurements. The system presents the user with a graphical user interface that allows a user to quickly and conveniently delineate wall corner features using a reticle displayed within the user interface. As corners are identified, the system processes the corner information and information from the gyroscope to calculate wall features and creates a floor plan of the space with high accuracy.
Type: Grant
Filed: April 26, 2022
Date of Patent: October 22, 2024
Assignee: Xactware Solutions, Inc.
Inventors: Bradley McKay Childs, Jeffrey C. Taylor, Jeffery D. Lewis, Corey D. Reed
-
Patent number: 12125115
Abstract: Augmented reality social media platforms and methods of managing the same in which a real-world defined area is virtually mapped to include a plurality of leasable virtual subdivisions each corresponding to a real-world subdivision of the real-world defined area. A platform may assign a virtual leasehold corresponding to a virtual subdivision of a plurality of leasable virtual subdivisions to a user of the platform. In one aspect the ability to assign a virtual leasehold to a user is based on the user's affiliation or non-affiliation with a real-world organization controlling the corresponding real-world defined area. In another aspect a virtual leasehold may be reassigned to a different virtual subdivision and/or a real-world offering associated with a virtual leasehold may be reallocated to a different virtual leasehold.
Type: Grant
Filed: January 12, 2024
Date of Patent: October 22, 2024
Assignee: Flying Eye Reality, Inc.
Inventor: Raymond Charles Shingler
-
Patent number: 12125208
Abstract: The invention describes a method for automatically localizing organ segments in a three-dimensional image comprising the following steps: providing a three-dimensional image showing at least one organ and at least one tubular network comprising a plurality of tubular structures, the organ comprising organ segments; performing automatic separation of the organ from other parts of the image; performing automatic tracing of the tubular network to obtain a branch map; performing automatic analysis of the branch map to identify specific tubular structures; performing automatically assigning regions of the organ to the specific tubular structures to segment the organ into localized organ segments; and outputting the localized organ segments and the traced and analyzed tubular network as image data. The invention further describes a localization arrangement and a medical imaging system.
Type: Grant
Filed: September 2, 2021
Date of Patent: October 22, 2024
Assignee: Siemens Healthineers AG
Inventors: Zhoubing Xu, Sasa Grbic, Dominik Neumann, Guillaume Chabin, Bruce Spottiswoode, Fei Gao, Günther Platsch
-
Patent number: 12118743
Abstract: The present disclosure provides an electronic apparatus and an object detection method. The electronic apparatus includes a storage device and a processor. The storage device stores an estimation module. The processor is coupled to the storage device and configured to execute the estimation module. The processor acquires a sensed image provided by an image sensor, and inputs the sensed image to the estimation module so that the estimation module outputs a plurality of estimated parameters. The processor calculates two-dimensional image center coordinates of an object image in the sensed image based on the plurality of estimated parameters, and calculates three-dimensional center coordinates corresponding to the object image based on the two-dimensional image center coordinates and an offset parameter in the plurality of estimated parameters. Thus, the location of the object image in the sensed image can be determined accurately.
Type: Grant
Filed: March 4, 2022
Date of Patent: October 15, 2024
Assignee: VIA Technologies, Inc.
Inventors: Winner Roedily, Hsueh-hsin Han
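A hedged sketch of the lifting step: once a 2D image center (adjusted by an estimated offset) and a predicted depth are available, a pinhole camera model recovers 3D camera-space coordinates. The pinhole model, parameter names, and numbers below are assumptions for illustration; the patent does not specify this formulation:

```python
def center_2d_to_3d(u, v, depth, fx, fy, cx, cy):
    """Back-project a 2D image point (u, v) with a predicted depth
    into camera-space 3D coordinates using a pinhole model with
    focal lengths (fx, fy) and principal point (cx, cy)."""
    x = (u - cx) * depth / fx
    y = (v - cy) * depth / fy
    return (x, y, depth)

# Estimated 2D center plus the estimated offset parameter (illustrative values),
# lifted to 3D with an assumed depth of 5 m:
u2d, v2d = 320.0 + 4.0, 240.0 - 2.0
center3d = center_2d_to_3d(u2d, v2d, depth=5.0,
                           fx=500.0, fy=500.0, cx=320.0, cy=240.0)
print(center3d)  # (0.04, -0.02, 5.0)
```

The offset term matters because the projected 3D box center of an object generally does not coincide with the center of its 2D bounding box, so regressing the correction improves localization.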
-
Patent number: 12118757
Abstract: A video coding mechanism is disclosed. The mechanism includes receiving a bitstream comprising a plurality of two dimensional (2D) patches in an atlas frame and a camera offset for a camera. The patches are decoded and converted to a three dimensional (3D) patch coordinate system to obtain a point cloud frame. An offset matrix is determined based on the camera offset. The offset matrix is then applied to the point cloud frame.
Type: Grant
Filed: July 5, 2022
Date of Patent: October 15, 2024
Assignee: Huawei Technologies Co., Ltd.
Inventors: Jeffrey Moguillansky, Vladyslav Zakharchenko, Jianle Chen
-
Patent number: 12118817
Abstract: Provided is a system that estimates pose data of a person with high accuracy at low cost. At a time step at which image data is to be captured and obtained, a pose data generation system obtains pose data based on the image data. At a time step at which image data is not to be obtained, the pose data generation system predicts pose data at a current time step from a previous time step using IMU data and performs interpolation processing to obtain pose data. Thus, even when a rate of obtaining the image data is low, the pose data generation system performs the above interpolation processing using IMU data to obtain pose data with a high frame rate.
Type: Grant
Filed: March 9, 2022
Date of Patent: October 15, 2024
Assignee: MEGACHIPS CORPORATION
Inventor: Mahito Matsumoto
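The prediction step this abstract describes can be sketched as dead reckoning: between camera frames, the pose is advanced using the IMU's angular rate. This is a one-dimensional, orientation-only sketch under illustrative rates (10 Hz camera, 100 Hz IMU), not MegaChips' actual interpolation:

```python
def predict_pose(prev_angle, gyro_rate, dt):
    """Advance an orientation angle (radians) from the previous
    time step using the IMU's angular rate, for time steps where
    no image-based pose estimate is available."""
    return prev_angle + gyro_rate * dt

# Image-based pose at t = 0; IMU-only predictions at 100 Hz fill in
# the gap until the next image arrives at t = 0.1 s:
angle = 0.0
for _ in range(10):
    angle = predict_pose(angle, gyro_rate=0.5, dt=0.01)  # 0.5 rad/s
print(round(angle, 6))  # 0.05
```

When the next image arrives, its pose estimate would replace the accumulated prediction, which keeps IMU drift from growing beyond one camera interval.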
-
Patent number: 12118807
Abstract: The present application relates to a method for recognising at least one object in a three-dimensional scene, the method including, in an electronic processing device: determining a plurality of two-dimensional images of the scene, the images at least partially including the at least one object; determining a plurality of two-dimensional segmentations of the at least one object, the two-dimensional segmentations corresponding to the two dimensional images; generating a three-dimensional representation of the scene using the images; generating a mapping indicative of a correspondence between the images and the representation; and using the mapping to map the plurality of segmentations to the three dimensional representation, to thereby recognise the at least one object in the scene.
Type: Grant
Filed: September 28, 2019
Date of Patent: October 15, 2024
Assignee: SITESEE PTY LTD
Inventors: Lucio Piccoli, Laurie Opperman, Patrick Mahoney
-
Patent number: 12118673
Abstract: A method for positioning of cameras on an object that enables accurate rendering of the scene around the object on a dome in real time. The method involves providing a 3D model of the object having a surface, and selecting locations on the surface where the cameras are to be placed to provide a camera rig. The choice of locations is such that every camera has a field of view that overlaps with at least one other camera. The cameras are designated as Direct View Camera (DVC) or Secondary View Camera (SVC). The method to render a virtual scene includes providing a projection rig that includes a plurality of projectors within a hollow half-sphere, wherein the hollow half-sphere includes an inner surface and an outer surface. Each of the plurality of projectors is designated as a Direct View Projector (DVP) or a Secondary View Projector (SVP).
Type: Grant
Filed: July 13, 2022
Date of Patent: October 15, 2024
Assignee: Orqa Holding LTD
Inventors: Srdjan Kovacevic, Ana Petrinec
-
Patent number: 12118656
Abstract: Techniques for performing shader operations are provided. The techniques include performing pixel shading at a shading rate defined by pixel shader variable rate shading ("VRS") data, and updating the pixel VRS data that indicates one or more shading rates for one or more tiles based on whether the tiles of the one or more tiles include triangle edges or do not include triangle edges, to generate updated VRS data.
Type: Grant
Filed: April 20, 2023
Date of Patent: October 15, 2024
Assignee: Advanced Micro Devices, Inc.
Inventors: Skyler Jonathon Saleh, Vineet Goel, Pazhani Pillai, Ruijin Wu, Christopher J. Brennan, Andrew S. Pomianowski
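The update rule this abstract describes can be sketched as: tiles that contain triangle edges keep a fine shading rate, while interior tiles get a coarse one (edges are where undersampling artifacts show). The rate encoding below is an illustrative assumption, not AMD's actual VRS format:

```python
FINE = (1, 1)    # 1x1: one shader invocation per pixel (tiles with triangle edges)
COARSE = (2, 2)  # 2x2: one shader invocation per 4 pixels (interior tiles)

def update_vrs(tile_has_edge):
    """Map per-tile edge flags to per-tile shading rates: edge
    tiles are shaded finely, interior tiles coarsely."""
    return [FINE if has_edge else COARSE for has_edge in tile_has_edge]

# Four tiles; the two containing triangle edges get the fine rate:
print(update_vrs([True, False, False, True]))
```

The payoff is that shading work scales with visual detail rather than raw pixel count, since large triangle interiors tolerate coarse shading.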
-
Patent number: 12117307
Abstract: Systems and methods are disclosed related to generating an interactive user interface that enables a user to move, rotate or otherwise edit 3D point cloud data in virtual 3D space to align or match point clouds captured from LiDAR scans prior to generation of a high definition map. A system may obtain point cloud data for two or more point clouds, render the point clouds for display in a user interface, then receive a user selection of one of the point clouds and commands from the user to move and/or rotate the selected point cloud. The system may adjust the displayed position of the selected point cloud relative to the other simultaneously displayed point cloud(s) in real time in response to the user commands, and store the adjusted point cloud position data for use in generating a new high definition map.
Type: Grant
Filed: March 22, 2021
Date of Patent: October 15, 2024
Assignee: Beijing Didi Infinity Technology and Development Co., Ltd.
Inventors: Yan Zhang, Tingbo Hou
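Applying the user's move and rotate commands to the selected point cloud amounts to a rigid transform of every point. A minimal sketch, assuming a rotation about the z axis followed by a translation (real LiDAR alignment tools support full 3D rotations):

```python
import math

def transform_cloud(points, yaw_rad, tx, ty, tz):
    """Rotate a point cloud about the z axis by yaw_rad radians,
    then translate it by (tx, ty, tz), as a user dragging one
    LiDAR scan into alignment with another might."""
    c, s = math.cos(yaw_rad), math.sin(yaw_rad)
    return [(c * x - s * y + tx, s * x + c * y + ty, z + tz)
            for x, y, z in points]

cloud = [(1.0, 0.0, 0.0), (0.0, 1.0, 0.0)]
# Rotate 90 degrees, then shift 10 m along x:
moved = transform_cloud(cloud, math.pi / 2, tx=10.0, ty=0.0, tz=0.0)
# (1, 0, 0) rotates to roughly (0, 1, 0), then shifts to roughly (10, 1, 0)
```

Redrawing the transformed points on each command is what gives the real-time feedback the abstract describes; only the final transform needs to be stored with the map data.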
-
Patent number: 12118674
Abstract: A configuration tool adapted to configure a quality control system to monitor and/or guide an operator in a working environment through recognition of objects, events or an operational process, comprises: a volumetric sensor adapted to capture volumetric image frames of the working environment while an object, event or operational process is demonstrated; a display, coupled to the volumetric sensor and configured to live display the volumetric image frames; and a processor configured to: generate a user interface in overlay of the volumetric image frames to enable a user to define a layout zone; and automatically generate a virtual box in the layout zone when an object, event or operational process is detected during demonstration of the object, event or operational process.
Type: Grant
Filed: May 16, 2023
Date of Patent: October 15, 2024
Assignee: ARKITE NV
Inventor: Ives De Saeger
-
Patent number: 12118677
Abstract: In some examples, an apparatus includes a memory storing computer executable instructions for implementing a spatially aware computing scheme; and a processor coupled to the memory and configured to execute the executable instructions. Executing the executable instructions causes the processor to access a spatial augmentation layer (SAL) that includes augmented reality (AR) elements placed into an AR environment for a physical environment proximate to the apparatus, and display the AR elements on a display screen communicatively coupled to the apparatus, wherein at least some of the AR elements are elements placed into the AR environment by a user other than a user of the apparatus.
Type: Grant
Filed: December 22, 2021
Date of Patent: October 15, 2024
Inventors: Landon Nickerson, Sean Ong, Preston McCauley
-
Patent number: 12118145
Abstract: An electronic apparatus according to the present invention includes at least one memory and at least one processor which function as: a first acquisition unit configured to acquire right line-of-sight information on a line-of-sight of a right eye of a user; a second acquisition unit configured to acquire left line-of-sight information on a line-of-sight of a left eye of the user; and a control unit configured to control such that right eye calibration is performed on a basis of right line-of-sight information which is acquired by the first acquisition unit at a first timing, and left eye calibration is performed on a basis of left line-of-sight information which is acquired by the second acquisition unit at a second timing which is different from the first timing.
Type: Grant
Filed: May 23, 2023
Date of Patent: October 15, 2024
Assignee: CANON KABUSHIKI KAISHA
Inventor: Yoshihiro Mizuo
-
Patent number: 12112433
Abstract: Methods, systems, and apparatuses are provided to automatically reconstruct an image, such as a 3D image. For example, a computing device may obtain an image, and may apply a first trained machine learning process to the image to generate coefficient values characterizing the image in a plurality of dimensions. Further, the computing device may generate a mesh based on the coefficient values. The computing device may apply a second trained machine learning process to the coefficient values and the image to generate a displacement map. Based on the mesh and the displacement map, the computing device may generate output data characterizing an aligned mesh. The computing device may store the output data within a data repository. In some examples, the computing device provides the output data for display.
Type: Grant
Filed: April 6, 2022
Date of Patent: October 8, 2024
Assignee: QUALCOMM Incorporated
Inventors: Wei-Lun Chang, Michel Adib Sarkis, Chieh-Ming Kuo, Kuang-Man Huang
-
Patent number: 12113947
Abstract: A display apparatus (10) that is an example of an image processing apparatus includes: an output unit that outputs part of an image including recommended viewpoint information as a display image to a display unit; and a transition control unit that causes the display range of the image to shift, on the basis of the positional relationship between the viewpoint position corresponding to the display image output to the display unit and the viewpoint position corresponding to the recommended viewpoint information. The output unit outputs part of the image to the display unit, on the basis of the display range that has been made to shift.
Type: Grant
Filed: January 26, 2021
Date of Patent: October 8, 2024
Assignee: SONY GROUP CORPORATION
Inventors: Sho Ogura, Yuya Yamashita
-
Patent number: 12112440
Abstract: A Mixed-Reality visor (MR-visor) system and method utilizing environmental sensor feedback for replicating restricted external visibility during operation of manned vehicles, such as marine vessels or aircraft. Adaptive hardware and software enable the user to reliably limit, modify and/or block views outside the window areas of the vehicle while maintaining visibility of the cabin interior and instrument control panel(s), without need for complex mechanical hardware alignment and setup. In the case of aircraft pilot training, the MR-visor can be worn by a pilot to replicate Instrument Meteorological Conditions (IMC) and other challenging scenarios.
Type: Grant
Filed: December 17, 2021
Date of Patent: October 8, 2024
Inventor: Wael Zohni
-
Patent number: 12111381
Abstract: One or more embodiments of the present disclosure may relate to communicating RADAR (RAdio Detection And Ranging) data to a distributed map system that is configured to generate map data based on the RADAR data. In these or other embodiments, certain compression operations may be performed on the RADAR data to reduce the amount of data that is communicated from the ego-machines (the vehicles collecting the data) to the map system.
Type: Grant
Filed: March 21, 2022
Date of Patent: October 8, 2024
Assignee: NVIDIA CORPORATION
Inventor: Niharika Arora
-
Patent number: 12112457
Abstract: A system including one or more servers and a data repository, wherein the server(s) are configured to: receive images of a real-world environment captured using camera(s), corresponding depth maps, and at least one of pose information or relative pose information; generate a three-dimensional (3D) model of the real-world environment; store the 3D model; utilise the 3D model to generate an output image from the perspective of a new pose; determine whether extended depth-of-field (EDOF) correction is required to be applied to any one of: at least one of the images captured by the camera(s) representing a given object, the 3D model, or the output image, based on whether the optical focus of the camera(s) was adjusted according to the optical depth of the given object from the given pose of the camera; and, when it is determined that EDOF correction is required, apply the EDOF correction to at least a portion of any one of: at least one of the images captured by the camera(s), the 3D model, or the output image.
Type: Grant
Filed: November 21, 2022
Date of Patent: October 8, 2024
Assignee: Varjo Technologies Oy
Inventors: Mikko Ollila, Mikko Strandborg
-
Patent number: 12112422
Abstract: A differentiable ray casting technique may be applied to a model of a three-dimensional (3D) scene (including its lighting configuration) or object to optimize one or more parameters of the model. The one or more parameters define the geometry (topology and shape), materials, and lighting configuration (e.g., an environment map: a high-resolution texture that represents the light coming from all directions in a sphere) for the model. Visibility is computed in 3D space by casting at least two rays from each ray origin (where the two rays define a ray cone). The model is rendered to produce a model image that may be compared with a reference image (or photograph) of a reference 3D scene to compute image-space differences. Visibility gradients in 3D space are computed and backpropagated through the computations to reduce differences between the model image and the reference image.
Type: Grant
Filed: June 15, 2022
Date of Patent: October 8, 2024
Assignee: NVIDIA Corporation
Inventors: Jon Niklas Theodor Hasselgren, Carl Jacob Munkberg
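The analysis-by-synthesis loop this abstract describes (render the model, compare against a reference image, backpropagate to the parameters) can be illustrated with a one-parameter toy. The renderer, the single albedo parameter, and the analytic gradient are invented for illustration; the patent's actual contribution concerns differentiable visibility via ray cones, which this sketch does not implement.

```python
import numpy as np

def render(albedo, light):
    # Toy "renderer": one pixel per light sample, clamped to [0, 1].
    return np.clip(albedo * light, 0.0, 1.0)

light = np.array([0.2, 0.5, 0.9])
reference = render(0.7, light)   # reference image made with albedo = 0.7
albedo = 0.1                     # initial guess to be optimized

for _ in range(200):
    image = render(albedo, light)
    # Gradient of the mean squared image-space difference w.r.t. albedo
    # (the clip is inactive over this range, so the gradient is exact).
    grad = np.mean(2.0 * (image - reference) * light)
    albedo -= 0.5 * grad         # gradient-descent step

print(round(albedo, 3))  # 0.7
```

The recovered albedo converges to the value used to produce the reference, which is the same fitting principle the patent applies to full geometry, material, and lighting parameters.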
-
Patent number: 12112439
Abstract: Systems and methods for immersive and collaborative video surveillance in the commercial security industry are provided. Some methods can include receiving a video data stream from a surveillance camera in a monitored region via a cloud network, a user interface device of or coupled to a virtual reality headset displaying the video data stream, and the user interface device receiving user input corresponding to a movement of a user's body to navigate the video data stream and simulate the user navigating the monitored region from within the monitored region.
Type: Grant
Filed: June 3, 2022
Date of Patent: October 8, 2024
Assignee: HONEYWELL INTERNATIONAL INC.
Inventors: MuthuRamji Vadamalayan, Deepakumar Subbian, Kathiresan Periyasamy
-
Patent number: 12113950
Abstract: A generation apparatus according to the present invention is a generation apparatus for generating a media file storing virtual viewpoint image data generated based on pieces of image data of an object captured from a plurality of directions with a plurality of cameras, and obtains a virtual viewpoint parameter to be used to generate virtual viewpoint image data. Further, the generation apparatus generates a media file storing the obtained virtual viewpoint parameter and virtual viewpoint image data generated based on the virtual viewpoint parameter. In this way, the generation apparatus can improve usability related to a virtual viewpoint image.
Type: Grant
Filed: August 17, 2022
Date of Patent: October 8, 2024
Assignee: CANON KABUSHIKI KAISHA
Inventor: Hiroyasu Ito
-
Patent number: 12106529
Abstract: An imaging apparatus includes: an image sensor that captures a subject image by receiving incident light, to generate image data; a controller that controls an image shooting operation using the image sensor; a recorder that records the image data as a result of the image shooting operation; and an adjuster that adjusts a light reception rate in each position on an incident surface, the light reception rate allowing the image sensor to receive the light, the incident surface being entered by the light corresponding to an image represented by the image data. The controller controls the adjuster to render the light reception rate in a position corresponding to part of the image on the incident surface different from the light reception rate in another position thereon, and causes the image sensor to capture the image with the light reception rate rendered different by the adjuster in the image shooting operation.
Type: Grant
Filed: February 15, 2023
Date of Patent: October 1, 2024
Assignee: Panasonic Intellectual Property Management Co., Ltd.
Inventors: Takaaki Yamasaki, Shinichi Yamamoto, Wataru Okamoto
-
Patent number: 12102386
Abstract: A method for acquiring data with the aid of a surgical microscopy system comprises recording a data set having a multiplicity of images and deciding whether or not the new data set ought to be stored in a data memory of a database. The decision is taken by an existing classifier. A new classifier is determined on the basis of training data which comprise the data sets stored in the data memory of the database and/or comprise data obtained from the data sets stored in the data memory of the database. The new classifier is then used instead of the existing classifier when deciding whether a subsequently recorded new data set ought to be stored.
Type: Grant
Filed: April 27, 2021
Date of Patent: October 1, 2024
Assignee: Carl Zeiss Meditec AG
Inventors: Stefan Saur, Alexander Urich
-
Patent number: 12106422
Abstract: An instruction (or set of instructions) that can be included in a program to perform a ray tracing acceleration data structure traversal, with individual execution threads in a group of execution threads executing the program performing a traversal operation for a respective ray in a corresponding group of rays, such that the group of rays performs the traversal operation together. The instruction(s), when executed by the execution threads in respect of a node of the ray tracing acceleration data structure, cause one or more rays from the group of plural rays that are performing the traversal operation together to be tested for intersection with the one or more volumes associated with the node being tested. A result of the ray-volume intersection testing can then be returned for the traversal operation.
Type: Grant
Filed: June 4, 2022
Date of Patent: October 1, 2024
Assignee: Arm Limited
Inventors: Richard Bruce, William Robert Stoye, Mathieu Jean Joseph Robart
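A software analogue of the grouped ray-volume test can be sketched as follows. The slab-based axis-aligned bounding box (AABB) test and the Python representation are assumptions for illustration; in the patent this step is a hardware instruction executed by a thread group, not a Python loop.

```python
def ray_hits_aabb(origin, direction, box_min, box_max):
    # Standard slab test; direction components assumed non-zero.
    t_near, t_far = -float("inf"), float("inf")
    for o, d, lo, hi in zip(origin, direction, box_min, box_max):
        t0, t1 = (lo - o) / d, (hi - o) / d
        if t0 > t1:
            t0, t1 = t1, t0
        t_near, t_far = max(t_near, t0), min(t_far, t1)
    return t_near <= t_far and t_far >= 0.0

def group_test_node(origins, directions, box_min, box_max):
    # All rays in the group are tested against the same node's volume,
    # mirroring the one-instruction-per-node traversal step.
    return [ray_hits_aabb(o, d, box_min, box_max)
            for o, d in zip(origins, directions)]

origins = [(-5.0, -5.0, -5.0), (5.0, 5.0, 5.0)]
directions = [(1.0, 1.0, 1.0), (1.0, 1.0, 1.0)]
print(group_test_node(origins, directions, (-1, -1, -1), (1, 1, 1)))  # [True, False]
```

The first ray points into the box and hits; the second points away and misses, so its thread could be masked out of the subtree descent.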
-
Patent number: 12099644
Abstract: An information processing apparatus according to an embodiment includes a setting unit that, based on basic trigger region information defining a basic trigger region which is to be a trigger of an occurrence of an event in an application that presents predetermined content to a user based on position information regarding the user within a real space, determines an extended trigger region having a predetermined positional relationship with the basic trigger region, and sets information defining the determined extended trigger region in a storage unit.
Type: Grant
Filed: August 27, 2020
Date of Patent: September 24, 2024
Assignee: SONY GROUP CORPORATION
Inventors: Akane Kondo, Tomoya Narita
-
Patent number: 12100105
Abstract: The present disclosure relates to a method and capturing arrangement for creating a three-dimensional model of a scene. The model comprises a three-dimensional space comprising a plurality of discrete three-dimensional volume elements (V1,1, V1,2) associated with three initial direction-independent color values and an initial opacity value. The method comprises obtaining a plurality of images of the scene and defining a minimization problem. The minimization problem comprises three residuals, one for each color value, wherein each residual is based on the difference between (a) the color value of each image element of each image and (b) the accumulated direction-independent color value of the volume along the ray path of each image element. The method further comprises creating the three-dimensional model of the scene by solving the minimization problem.
Type: Grant
Filed: November 22, 2022
Date of Patent: September 24, 2024
Assignee: DIMENSION STREAM LABS AB
Inventors: Ulf Assarsson, Erik Sintorn, Sverker Rasmuson
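The accumulated color in term (b) of the residual is, in volume-rendering terms, front-to-back alpha compositing of the voxels along the pixel's ray. A minimal sketch for one color channel, assuming discrete per-voxel opacities rather than the patent's full formulation:

```python
def accumulate(colors, opacities):
    # Front-to-back compositing: each voxel contributes its color weighted
    # by its opacity and the transmittance left over from earlier voxels.
    out, transmittance = 0.0, 1.0
    for color, alpha in zip(colors, opacities):
        out += transmittance * alpha * color
        transmittance *= 1.0 - alpha
    return out

# One residual for one color channel of one image element:
observed = 0.8                                    # pixel value from the image
residual = observed - accumulate([1.0, 0.5], [0.5, 1.0])
print(residual)  # ≈ 0.05
```

The minimization problem then adjusts every voxel's three color values and opacity so that such residuals, summed over all pixels of all images, become small.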
-
Patent number: 12100109
Abstract: In some implementations, a method includes: identifying a plurality of subsets associated with a physical environment; determining a set of spatial characteristics for each of the plurality of subsets, wherein a first set of spatial characteristics characterizes dimensions of a first subset and a second set of spatial characteristics characterizes dimensions of a second subset; generating an adapted first extended reality (XR) content portion for the first subset based at least in part on the first set of spatial characteristics; generating an adapted second XR content portion for the second subset based at least in part on the second set of spatial characteristics; and generating one or more navigation options that allow a user to traverse between the first and second subsets based on the first and second sets of spatial characteristics.
Type: Grant
Filed: September 24, 2021
Date of Patent: September 24, 2024
Assignee: APPLE INC.
Inventors: Gutemberg B. Guerra Filho, Raffi A. Bedikian, Ian M. Richter
-
Patent number: 12100067
Abstract: User persona management systems and techniques are described. A system identifies a profile associated with a first user. The profile includes data defining avatars that each represent the first user and conditions for displaying respective avatars. The system determines, based on characteristics associated with the first user, that at least a first condition is met. The system selects, based on determining that at least the first condition is met, a display avatar of the avatars. The system outputs the display avatar for presentation to a second user, for instance by displaying the display avatar on a display and/or by transmitting the display avatar to a user device associated with the second user. The display avatar can be presented in accordance with the characteristics associated with the first user.
Type: Grant
Filed: June 20, 2022
Date of Patent: September 24, 2024
Assignee: QUALCOMM Incorporated
Inventor: Gad Karmi
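The profile-driven selection this abstract describes reduces to: evaluate each avatar's display condition against the user's current characteristics and pick the first match. A minimal sketch; the profile schema, condition predicates, and field names are invented for illustration.

```python
profile = {
    "avatars": [
        # Each avatar pairs a representation with a display condition.
        {"name": "work",    "condition": lambda c: 9 <= c.get("hour", 0) < 17},
        {"name": "gamer",   "condition": lambda c: c.get("in_game", False)},
        {"name": "default", "condition": lambda c: True},  # always-on fallback
    ]
}

def select_display_avatar(profile, characteristics):
    # Return the first avatar whose condition the characteristics satisfy.
    for avatar in profile["avatars"]:
        if avatar["condition"](characteristics):
            return avatar["name"]

print(select_display_avatar(profile, {"hour": 10}))                   # work
print(select_display_avatar(profile, {"hour": 20, "in_game": True}))  # gamer
```

The selected name stands in for the avatar asset the system would output for presentation to the second user.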
-
Patent number: 12100229
Abstract: Various implementations disclosed herein include devices, systems, and methods that facilitate the creation of a 3D model for object detection based on a scan of the object. Some implementations provide a user interface that a user interacts with to facilitate a scan of an object to create a 3D model of the object for later object detection. The user interface may include an indicator that provides visual or audible feedback to the user indicating the direction that the capturing device is facing relative to the object being scanned. The direction of the capture device may be identified using sensors on the device (e.g., inertial measurement unit (IMU), gyroscope, etc.) or other techniques (e.g., visual inertial odometry (VIO)) and based on the user positioning the device so that the object is in view.
Type: Grant
Filed: August 19, 2021
Date of Patent: September 24, 2024
Assignee: Apple Inc.
Inventors: Etienne Guerard, Omar Shaik, Michelle Chua, Zachary Z. Becker
-
Patent number: 12097438
Abstract: Example monitored online experience systems and methods are described. In one implementation, techniques initiate an online gaming experience with a first user and associate a second user with the online gaming experience. The techniques receive audio data from a game server and receive a voice overlay from the second user. During the online gaming experience, the techniques play the audio data in a speaker proximate the first user and play the voice overlay in the speaker proximate the first user.
Type: Grant
Filed: December 10, 2021
Date of Patent: September 24, 2024
Assignee: GUARDIANGAMER, INC.
Inventor: Henderika Vogel