Three-dimension Patents (Class 345/419)
-
Patent number: 12266217
Abstract: Generally discussed herein are examples of gesture-based extended reality (XR) with object recognition and tracking. A method, implemented by an extended reality (XR) device, can include recognizing and tracking one or more objects in the image data, recognizing a gesture in the image data, analyzing the image data to determine whether a condition is satisfied, the condition indicating a recognized and tracked object of the one or more objects proximate which the recognized gesture is to be made, and in response to determining that the condition is satisfied, performing an augmentation of the image data based on satisfaction of the condition.
Type: Grant
Filed: May 11, 2022
Date of Patent: April 1, 2025
Assignee: Microsoft Technology Licensing, LLC
Inventor: Erik A. Hill
-
Patent number: 12266069
Abstract: Various implementations disclosed herein include devices, systems, and methods that provide extended reality (XR) environments that include virtual content anchored to objects within physical environments. Such objects may be movable objects. In some implementations, the virtual content is adaptive in that the virtual content is presented based on a characteristic of the movable object. For example, a virtual game piece may be scaled, shaped, etc. to match a physical game piece to which it is anchored. As another example, a virtual gameboard may be scaled and positioned on a real table such that the edge of the gameboard aligns with the edge of the table and such that a virtual waterfall appears to flow over the edge of the real table.
Type: Grant
Filed: August 15, 2022
Date of Patent: April 1, 2025
Assignee: Apple Inc.
Inventors: Andrew S. Robertson, David Hobbins, James R. Cooper, Joon Hyung Ahn
-
Patent number: 12266057
Abstract: Systems, methods, and computer readable media for input modalities for an augmented reality (AR) wearable device are disclosed. The AR wearable device captures images using an image capturing device and processes the images to identify objects. The objects may be people, places, things, and so forth. The AR wearable device associates the objects with tags such as the name of the object or a function that can be provided by the selection of the object. The AR wearable device then matches the tags of the objects with tags associated with AR applications. The AR wearable device presents on a display of the AR wearable device indications of the AR applications with matching tags, which provides a user with the opportunity to invoke one of the AR applications. The AR wearable device recognizes a selection of an AR application in a number of different ways including gesture recognition and voice commands.
Type: Grant
Filed: June 2, 2022
Date of Patent: April 1, 2025
Assignee: Snap Inc.
Inventors: Piotr Gurgul, Sharon Moll
-
Patent number: 12265705
Abstract: A control apparatus controls a display device equipped with a display unit, and controls the display device that allows a user to view back of the display unit through the display unit in a case where the user views the display unit, and the control apparatus includes a processor configured to: change display of some of images displayed on the display unit, in response to a change in a position of a terminal device with respect to the display device, the terminal device being located behind the display unit and viewed by the user through the display unit, and maintain some other images displayed on the display unit in a state of being displayed at predetermined places, regardless of the change in the position of the terminal device.
Type: Grant
Filed: July 20, 2021
Date of Patent: April 1, 2025
Assignee: FUJIFILM Business Innovation Corp.
Inventor: Momoko Fujiwara
-
Patent number: 12266053
Abstract: Embodiments of the invention provide a computer system that includes a processor electronically coupled to a memory. The processor is operable to perform processor operations that include determining that a user is in a virtual reality (VR) environment and accessing a multisensory state associated with the VR environment and the user. A multisensory declutter analysis is applied to the multisensory state to generate decluttered multisensory streams. The decluttered multisensory streams are used to generate a decluttered multisensory view associated with the user.
Type: Grant
Filed: February 14, 2023
Date of Patent: April 1, 2025
Assignee: International Business Machines Corporation
Inventors: Paul R. Bastide, Matthew E. Broomhall, Robert E. Loredo
-
Patent number: 12266042
Abstract: Provided are a device and method that enable highly accurate and efficient three-dimensional model generation processing. The device includes: a facial feature information detection unit that analyzes a facial image of a subject shot by an image capturing unit and detects facial feature information; an input data selection unit that selects, from a plurality of facial images shot by the image capturing unit and a plurality of pieces of facial feature information corresponding to the plurality of facial images, a set of a facial image and feature information optimal for generating a 3D model; and a facial expression 3D model generation unit that generates a 3D model using the facial image and the feature information selected by the input data selection unit.
Type: Grant
Filed: July 20, 2020
Date of Patent: April 1, 2025
Assignee: SONY GROUP CORPORATION
Inventor: Seiji Kimura
-
Patent number: 12263407
Abstract: Methods and systems for a virtual experience where a user holds or wears a computer device that displays a virtual world, and the user's physical movement is used to control a virtual avatar through that virtual world in a heightened way. For example, by simply walking around, the user may control a fast moving virtual airplane. A simulation system reads user location information from a sensor in the computer device, and feeds that into a matrix of movement rules. That changes the location data of the user avatar and viewpoint in the virtual world, as shown on the computer device.
Type: Grant
Filed: September 17, 2024
Date of Patent: April 1, 2025
Assignee: Monsarrat, Inc.
Inventor: Jonathan Monsarrat
-
Patent number: 12264934
Abstract: The invention relates to a method for depicting a virtual element in a display area of at least one display apparatus of a vehicle. Virtual elements that lie outside of the display area of a display apparatus are made perceptible to the driver, with an estimate of their distance or direction, in that the driver of the vehicle is signaled when the determined three-dimensional coordinates of the at least one virtual element lie outside of the display area of the display apparatus.
Type: Grant
Filed: September 8, 2021
Date of Patent: April 1, 2025
Assignee: VOLKSWAGEN AKTIENGESELLSCHAFT
Inventors: Alexander Kunze, Yannis Tebaibi, Johanna Sandbrink, Vitalij Sadovitch
-
Patent number: 12265750
Abstract: A platform for synchronizing augmented reality (AR) views and information between two or more network connected devices is disclosed. A first device captures a video stream and associated essential meta-data, embeds the essential meta-data into the video stream and transmits it to a second device. The second device receives the video stream, extracts the essential meta-data, inserts one or more AR objects into the video stream with reference to the enhanced meta-data, and transmits to the first device the AR objects and reference to the essential meta-data. The first device renders the one or more AR objects into the video stream, using the essential meta-data references to locate the AR objects in each video stream frame. The second device may also determine and transmit a modified video stream to the first device.
Type: Grant
Filed: April 20, 2021
Date of Patent: April 1, 2025
Assignee: STREEM, LLC
Inventor: Pavan K. Kamaraju
-
Patent number: 12266044
Abstract: Data structures, methods and tiling engines for storing tiling data in memory wherein the tiles are grouped into tile groups and the primitives are grouped into primitive blocks. The methods include, for each tile group: determining, for each tile in the tile group, which primitives of each primitive block intersect that tile; storing in memory a variable length control data block for each primitive block that comprises at least one primitive that intersects at least one tile of the tile group; and storing in memory a control stream comprising a fixed sized primitive block entry for each primitive block that comprises at least one primitive that intersects at least one tile of the tile group, each primitive block entry identifying a location in memory of the control data block for the corresponding primitive block. Each primitive block entry may comprise valid tile information identifying which tiles of the tile group are valid for the corresponding primitive block.
Type: Grant
Filed: December 31, 2023
Date of Patent: April 1, 2025
Assignee: Imagination Technologies Limited
Inventors: Xile Yang, Robert Brigg, Michael John Livesley
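The layout this abstract describes — a fixed-size control-stream entry per primitive block, pointing at a variable-length control data block and carrying a valid-tile mask — can be sketched in Python. This is a toy illustration under assumed names (`ControlDataBlock`, `tile_group_streams`, and an integer-primitive `intersects` rule), not Imagination's actual in-memory format.

```python
from dataclasses import dataclass
from typing import Callable, Dict, List, Tuple

@dataclass
class ControlDataBlock:
    """Variable-length: lists, per valid tile, the primitives intersecting it."""
    per_tile_primitives: Dict[int, List[int]]

@dataclass
class PrimitiveBlockEntry:
    """Fixed-size control-stream entry for one primitive block."""
    control_data_location: int  # index of the control data block in "memory"
    valid_tile_mask: int        # bit i set => tile i of the group is valid

def tile_group_streams(
    tile_group: List[int],
    primitive_blocks: List[List[int]],
    intersects: Callable[[int, int], bool],
) -> Tuple[List[ControlDataBlock], List[PrimitiveBlockEntry]]:
    memory: List[ControlDataBlock] = []
    control_stream: List[PrimitiveBlockEntry] = []
    for prims in primitive_blocks:
        per_tile = {t: [p for p in prims if intersects(p, t)] for t in tile_group}
        per_tile = {t: ps for t, ps in per_tile.items() if ps}
        if not per_tile:
            continue  # block touches no tile of this group: no entry at all
        mask = sum(1 << i for i, t in enumerate(tile_group) if t in per_tile)
        control_stream.append(PrimitiveBlockEntry(len(memory), mask))
        memory.append(ControlDataBlock(per_tile))
    return memory, control_stream

# Toy example: primitives are integers; primitive p "intersects" tile p % 2.
mem, stream = tile_group_streams(
    tile_group=[0, 1],
    primitive_blocks=[[0, 1, 2], [4], [7]],
    intersects=lambda p, t: p % 2 == t,
)
```

Because entries are fixed-size, a consumer can index the control stream directly, then follow `control_data_location` to the variable-length per-tile lists.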
-
Patent number: 12260498
Abstract: Provided is a method of creating digital twin content for a space based on object recognition in the space. A method of creating digital twin content includes identifying, by a computer system, an object in a space using a camera, determining a value of a parameter associated with the camera to define (or display) the identified object with a predetermined first size at a predetermined location on an image captured by the camera or a screen on which the image is displayed, and using a control result value based on the determined value of the parameter and the location information of the identified object to create the digital twin content for the space.
Type: Grant
Filed: January 30, 2024
Date of Patent: March 25, 2025
Assignee: CORNERS CO., LTD.
Inventors: Jang Won Choi, Min Woo Park, Dae Gyun Lee, Dong Oh Kim
-
Patent number: 12251884
Abstract: Systems and methods of support structures in powder-bed fusion (PBF) are provided. Support structures can be formed of bound powder, which can be, for example, compacted powder, compacted and sintered powder, powder with a binding agent applied, etc. Support structures can be formed of non-powder support material, such as a foam. Support structures can be formed to include resonant structures that can be removed by applying a resonance frequency. Support structures can be formed to include structures configured to melt when electrical current is applied for easy removal.
Type: Grant
Filed: April 28, 2017
Date of Patent: March 18, 2025
Assignee: DIVERGENT TECHNOLOGIES, INC.
Inventors: Eahab Nagi El Naga, John Russell Bucknell, Chor Yen Yap, Broc William TenHouten, Antonio Bernerd Martinez
-
Patent number: 12254429
Abstract: Systems and methods for evaluating construction of structures are disclosed. Building information modeling (BIM) data is received in a non-standardized format for a set of structures undergoing construction. Scheduling data is received associated with construction of each structure in the set of structures. A database is accessed that stores construction data that associates multiple elements of construction projects in a hierarchical configuration. Using the BIM data and the scheduling data, a model is generated that standardizes how particular elements of a particular structure relate to the multiple elements in the hierarchical configuration. Using the generated model, a status of construction of the particular structure is generated. In some implementations, the model is generated and/or trained using machine learning.
Type: Grant
Filed: May 31, 2022
Date of Patent: March 18, 2025
Assignee: Doxel, Inc.
Inventors: Arunava Saha, Dobromir Voyager Montauk, Richard William Turner, Matthew Paul Orban
-
Patent number: 12254570
Abstract: The present disclosure relates to systems, methods, and non-transitory computer readable media that generate three-dimensional hybrid mesh-volumetric representations for digital objects. For instance, in one or more embodiments, the disclosed systems generate a mesh for a digital object from a plurality of digital images that portray the digital object using a multi-view stereo model. Additionally, the disclosed systems determine a set of sample points for a thin volume around the mesh. Using a neural network, the disclosed systems further generate a three-dimensional hybrid mesh-volumetric representation for the digital object utilizing the set of sample points for the thin volume and the mesh.
Type: Grant
Filed: May 3, 2022
Date of Patent: March 18, 2025
Assignee: Adobe Inc.
Inventors: Sai Bi, Yang Liu, Zexiang Xu, Fujun Luan, Kalyan Sunkavalli
-
Patent number: 12251635
Abstract: This application relates to a method for controlling a virtual character performed by a computer device, the method including: displaying at least a portion of a target virtual character in a virtual scene, the target virtual character being bound with basic bones and deformed bones; triggering the character action of the target virtual character in the virtual scene; when the character action comprises a character movement, controlling the target virtual character to implement the character movement in the virtual scene through a movement of a basic bone associated with the character movement; and when the character action comprises a local character deformation, controlling the target virtual character to implement the local character deformation in the virtual scene through a deformation of a deformed bone associated with the local character deformation.
Type: Grant
Filed: August 8, 2022
Date of Patent: March 18, 2025
Assignee: TENCENT TECHNOLOGY (SHENZHEN) COMPANY LIMITED
Inventor: Chang Guo
-
Patent number: 12254786
Abstract: A method is described. The method comprises obtaining a recording of one or more physical interactions performed in relation to one or more physical components, in a simulation environment, playing back the recording such that the one or more physical interactions are represented in the simulation environment, and in the simulation environment, detecting a user interaction with one or more physical components of the simulation environment. The method further comprises determining whether the user interaction is congruent with at least one corresponding physical interaction of the one or more physical interactions in the recording, and in response to such congruence, continuing playback of the recording in the simulation environment.
Type: Grant
Filed: February 23, 2022
Date of Patent: March 18, 2025
Assignee: The Boeing Company
Inventors: Matthew James Gilchrist, Jesse Lee Kresevic, Nathan Cameron Perry, Christopher P. Galea
-
Patent number: 12254593
Abstract: In a method for generating combined image data based on first magnetic resonance (MR) data and second MR data, the first MR data and the second MR data are provided, the first MR data having been generated by a first actuation of a magnetic resonance device from an examination area of an examination object using a first sequence module, and the second MR data having been generated by a second actuation of the magnetic resonance device from the examination area of the examination object using the first sequence module; the first MR data and the second MR data are registered to one another to generate first registered MR data and second registered MR data; the first registered MR data and the second registered MR data are statistically combined to generate combined image data; and the combined image data is provided as an output in electronic form as a data file.
Type: Grant
Filed: September 2, 2021
Date of Patent: March 18, 2025
Assignee: Siemens Healthineers AG
Inventors: Thomas Benkert, Marcel Dominik Nickel
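The register-then-combine pipeline in this abstract can be illustrated with a toy one-dimensional example: a known one-voxel shift between the two acquisitions is undone, and the registered datasets are then combined voxel-wise by averaging. The function names, the circular-shift "registration", and the choice of the mean as the statistical combination are illustrative assumptions, not the Siemens method.

```python
import statistics

def register(moving, shift):
    """Toy rigid registration: undo a known circular shift of `shift` voxels."""
    return moving[shift:] + moving[:shift]

def combine(*registered):
    """Statistical combination of registered MR data: voxel-wise mean."""
    return [statistics.mean(voxels) for voxels in zip(*registered)]

first_mr  = [10.0, 20.0, 30.0, 40.0]
second_mr = [42.0, 12.0, 18.0, 32.0]        # same anatomy, shifted by one voxel
second_registered = register(second_mr, 1)   # [12.0, 18.0, 32.0, 42.0]
combined = combine(first_mr, second_registered)
```

Averaging registered repetitions is a common way to raise signal-to-noise ratio; real MR registration estimates the transform rather than being told it.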
-
Patent number: 12254575
Abstract: An information processing apparatus includes an acquisition unit configured to acquire a position and orientation of a virtual viewpoint and a position and orientation of each of one or more virtual objects, a movement unit configured to move the virtual viewpoint, a control unit configured to control each of the one or more virtual objects to be interlocked or not to be interlocked with movement of the virtual viewpoint based on whether the virtual object satisfies a predetermined condition, and a generation unit configured to generate a virtual image including the controlled one or more virtual objects at a viewpoint as a result of moving the virtual viewpoint.
Type: Grant
Filed: February 1, 2023
Date of Patent: March 18, 2025
Assignee: Canon Kabushiki Kaisha
Inventors: Naohito Nakamura, Takabumi Watanabe
-
Patent number: 12250491
Abstract: A computer-implemented method for generating a bird's eye view image of a scene includes: (a) acquiring at least one lidar frame comprising points with inherent distance information and at least one camera image of the scene; (b) generating a mesh representation of the scene by using the at least one lidar frame, the mesh representation representing surfaces shown in the scene with inherent distance information; (c) generating a mask image by classifying pixels of the at least one camera image as representing ground pixels or non-ground pixels of the at least one camera image; and (d) generating the bird's eye view image by enhanced inverse perspective mapping exploiting distance information inherent to the surfaces of the mesh representation, pixels of the mask image classified as ground pixels, and the at least one camera image.
Type: Grant
Filed: December 9, 2022
Date of Patent: March 11, 2025
Assignee: DSPACE GMBH
Inventors: Tobias Biester, Boris Neubert, Dennis Keck
-
Patent number: 12245912
Abstract: A device including a camera, a processing unit and a screen. The camera configured to, at an updated instant, acquire, by a cellphone or a tablet, an updated image representing a real dental scene. The processing unit configured to determine a virtual dental scene as a representation of the real scene on the updated image and identify an element of the virtual scene which does not comply with an assessment criterion, and highlighting and/or adding a message regarding this element in the virtual scene. The screen configured to display the updated image and present the virtual scene overlaid with the representation of the real scene on the updated image displayed on the screen, in transparent mode or not on the representation. The processing unit, the camera and the screen are configured to communicate with one another. The patient acquires the updated image and observes the screen.
Type: Grant
Filed: April 27, 2023
Date of Patent: March 11, 2025
Assignee: DENTAL MONITORING
Inventors: Philippe Salah, Thomas Pellissard, Guillaume Ghyselinck, Laurent Debraux
-
Patent number: 12243174
Abstract: Methods for rendering augmented reality (AR) content are presented. An a priori defined 3D albedo model of an object is leveraged to adjust AR content so that it appears as a natural part of a scene. Disclosed devices recognize a known object having a corresponding albedo model. The devices compare the observed object to the known albedo model to determine a content transformation referred to as an estimated shading (environmental shading) model. The transformation is then applied to the AR content to generate adjusted content, which is then rendered and presented for consumption by a user.
Type: Grant
Filed: May 9, 2023
Date of Patent: March 4, 2025
Assignee: Nant Holdings IP, LLC
Inventors: Matheen M. Siddiqui, Kamil A. Wnuk
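In the simplest grayscale case, the estimated-shading idea this abstract describes — compare the observed object against its known albedo, then apply the resulting transformation to the AR content — reduces to a per-pixel ratio. A minimal sketch with hypothetical names; a real implementation would work per color channel and handle noise, occlusion, and unobserved pixels.

```python
def estimate_shading(observed, albedo):
    """Environmental shading per pixel: observed = albedo * shading,
    so shading = observed / albedo (grayscale toy model)."""
    return [o / a if a else 0.0 for o, a in zip(observed, albedo)]

def shade_ar_content(content, shading):
    """Apply the estimated shading so AR content matches the scene lighting."""
    return [c * s for c, s in zip(content, shading)]

observed = [0.2, 0.4, 0.6]   # the known object's pixels as seen in the scene
albedo   = [0.4, 0.8, 0.6]   # the object's a priori intrinsic reflectance
shading  = estimate_shading(observed, albedo)        # [0.5, 0.5, 1.0]
ar_adjusted = shade_ar_content([1.0, 1.0, 1.0], shading)
```

A pure-white AR overlay thus picks up the same darkening the real object shows, which is what makes the inserted content look lit by the scene.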
-
Patent number: 12243101
Abstract: Disclosed herein are systems and methods for item acquisition by selection of a virtual object placed in digital environment. According to an aspect, a system may include a display, a user interface, an image capture device, and at least one processor and memory. The processor(s) and memory may be configured to receive a coordinate for placement of a virtual object in a digital environment. The processor(s) and memory may also control the display to display the virtual object when a position corresponding to the received coordinate is within a field of view of the image capture device. Further, the processor(s) and memory may receive an input via the user interface for selecting the virtual object.
Type: Grant
Filed: March 26, 2024
Date of Patent: March 4, 2025
Inventor: Robert A. Rice
-
Patent number: 12242861
Abstract: Methods, apparatus, systems, and articles of manufacture to load data into an accelerator are disclosed. An example apparatus includes data provider circuitry to load a first section and an additional amount of compressed machine learning parameter data into a processor engine. Processor engine circuitry executes a machine learning operation using the first section of compressed machine learning parameter data. Compressed local data re-user circuitry determines if a second section is present in the additional amount of compressed machine learning parameter data. The processor engine circuitry executes a machine learning operation using the second section when the second section is present in the additional amount of compressed machine learning parameter data.
Type: Grant
Filed: January 18, 2024
Date of Patent: March 4, 2025
Assignee: Intel Corporation
Inventors: Arnab Raha, Deepak Mathaikutty, Debabrata Mohapatra, Sang Kyun Kim, Gautham Chinya, Cormac Brick
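The re-use logic described here — load a section plus an additional amount, and skip the next load when that extra data already holds a full second section — can be sketched as follows. This is a toy model with hypothetical names operating on Python lists, not the patented circuitry, which works on compressed machine-learning parameters in hardware.

```python
def run_accelerator(params, section_size, extra):
    """Load one section plus `extra` elements per load; when the extra data
    contains a complete second section, execute it without another load."""
    loads = 0
    executed = []
    pos = 0
    while pos < len(params):
        window = params[pos:pos + section_size + extra]  # one load operation
        loads += 1
        executed.append(window[:section_size])           # "execute" first section
        second = window[section_size:2 * section_size]
        if len(second) == section_size:                  # second section present
            executed.append(second)                      # reuse it: no new load
            pos += 2 * section_size
        else:
            pos += section_size
    return loads, executed

params = list(range(8))
loads_with_reuse, _ = run_accelerator(params, section_size=2, extra=2)
loads_without, _ = run_accelerator(params, section_size=2, extra=0)
```

With `extra` sized to one full section, the sketch halves the number of load operations (2 instead of 4 for this input), which is the point of re-using locally available data.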
-
Patent number: 12243403
Abstract: A safety monitoring device (10) includes the area setting unit (12) capable of setting a plurality of alarm activation target areas with respect to a sensing target area of a safety sensor, an alarm activation type setting unit (13) capable of setting the type of an arbitrary alarm activation function with respect to each of the alarm activation target areas, and a moving body recognition analysis unit (16) that recognizes the motion of a moving body on the basis of sensor data measured by the safety sensor. The safety monitoring device includes a sensor data acquisition unit (lidar data acquisition unit (15)) that acquires three-dimensional data as sensor data from the safety sensor. The moving body recognition analysis unit extracts a moving body from the sensing target area on the basis of the three-dimensional data and senses traveling of the moving body to each of the alarm activation target areas.
Type: Grant
Filed: May 24, 2021
Date of Patent: March 4, 2025
Assignee: Konica Minolta, Inc.
Inventors: Masanori Yoshizawa, Hideki Morita
-
Patent number: 12245097
Abstract: Examples of the disclosure describe systems and methods relating to mobile computing. According to an example method, a first user location of a user of a mobile computing system is determined. A first communication device in proximity to the first user location is identified based on the first user location. A first signal is communicated to the first communication device. A first information payload based on the first user location is received from the first communication device, in response to the first communication device receiving the first signal. Video or audio data based on the first information payload is presented to the user at a first time during which the user is at the first user location.
Type: Grant
Filed: March 25, 2020
Date of Patent: March 4, 2025
Assignee: Magic Leap, Inc.
Inventor: Randall E. Hand
-
Patent number: 12243273
Abstract: In one embodiment, a method includes initializing latent codes respectively associated with times associated with frames in a training video of a scene captured by a camera. For each of the frames, a system (1) generates rendered pixel values for a set of pixels in the frame by querying NeRF using the latent code associated with the frame, a camera viewpoint associated with the frame, and ray directions associated with the set of pixels, and (2) updates the latent code associated with the frame and the NeRF based on comparisons between the rendered pixel values and original pixel values for the set of pixels. Once trained, the system renders output frames for an output video of the scene, wherein each output frame is rendered by querying the updated NeRF using one of the updated latent codes corresponding to a desired time associated with the output frame.
Type: Grant
Filed: January 7, 2022
Date of Patent: March 4, 2025
Assignee: META PLATFORMS TECHNOLOGIES, LLC
Inventors: Zhaoyang Lv, Miroslava Slavcheva, Tianye Li, Michael Zollhoefer, Simon Gareth Green, Tanner Schmidt, Michael Goesele, Steven John Lovegrove, Christoph Lassner, Changil Kim
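The joint optimization this abstract describes — per-frame latent codes trained together with shared NeRF weights — can be miniaturized to a one-parameter "NeRF" to show the training pattern: each step renders with a frame's latent code, compares against the original pixel, and updates both the code and the shared model. Purely illustrative; the function names, learning rate, and linear render function are assumptions standing in for a full neural radiance field.

```python
def render(shared_weight, latent):
    """Toy stand-in for querying a NeRF: pixel value = weight * latent code."""
    return shared_weight * latent

def train(frame_pixels, steps=2000, lr=0.05):
    shared_weight = 1.0                    # plays the role of the NeRF weights
    latents = [0.5] * len(frame_pixels)    # one latent code per frame time
    for _ in range(steps):
        for i, target in enumerate(frame_pixels):
            err = render(shared_weight, latents[i]) - target
            # gradient step on squared error w.r.t. BOTH the shared model
            # and this frame's latent code
            shared_weight -= lr * err * latents[i]
            latents[i]    -= lr * err * shared_weight
    return shared_weight, latents

# Two frames of a video whose (single) pixel brightness changes over time.
w, codes = train([0.2, 0.8])
# Render an output frame for a desired time by querying with its latent code.
frame1_pixel = render(w, codes[1])
```

The per-frame latent codes are what let one shared model reproduce a scene that changes over time: at render time, picking a code picks a moment.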
-
Patent number: 12243123
Abstract: A method for changing the linear arrangement of a plurality of information boards in an extended reality environment to appear to surround or partially surround a user to make viewing and interaction with the boards easier.
Type: Grant
Filed: September 1, 2023
Date of Patent: March 4, 2025
Assignee: VR-EDU, Inc.
Inventor: Ethan Fieldman
-
Patent number: 12235623
Abstract: A 3D printing method and a device for performing same are disclosed. In some examples, a 3D printing output method may include generating raw material data for at least one raw material to be used for printing a 3D object on the basis of a raw material characteristic requirement, and designing the 3D object. The 3D printing method and device may include performing a simulation on the 3D object designed on the basis of the raw material data and generating the 3D printing data for performing 3D printing on the 3D object on the basis of an evaluation criterion and the result of the simulation.
Type: Grant
Filed: November 22, 2019
Date of Patent: February 25, 2025
Assignee: WHOBORN INC.
Inventor: Young Sik Bae
-
Patent number: 12236689
Abstract: A system is comprised of one or more processors coupled to memory. The one or more processors are configured to receive image data based on an image captured using a camera of a vehicle and to utilize the image data as a basis of an input to a trained machine learning model to at least in part identify a distance of an object from the vehicle. The trained machine learning model has been trained using a training image and a correlated output of an emitting distance sensor.
Type: Grant
Filed: September 22, 2023
Date of Patent: February 25, 2025
Assignee: TESLA, INC.
Inventors: James Anthony Musk, Swupnil Kumar Sahai, Ashok Kumar Elluswamy
-
Patent number: 12236466
Abstract: A size comparison system may generate a size comparison by determining a size of an item based on extracted size data corresponding to the item. A comparison item is selected and the size comparison is generated between the item and the comparison item based on the size of the item. A visual rendering of the item and the comparison item is generated based on the size comparison and is displayed to a user.
Type: Grant
Filed: February 16, 2021
Date of Patent: February 25, 2025
Assignee: Micron Technology, Inc.
Inventors: Carla L. Christensen, Bethany M. Grentz, Xiao Li, Sumana Adusumilli, Libo Wang
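The core of such a size comparison — select a familiar reference object of similar known size, then compute the scale ratio used for the visual rendering — might look like the sketch below. The catalog values, units, and function names are hypothetical, not taken from the patent.

```python
def pick_comparison(item_size, catalog):
    """Select the reference object whose known size is closest to the item's."""
    return min(catalog, key=lambda name: abs(catalog[name] - item_size))

def size_comparison(item_size, catalog):
    """Return the chosen comparison item and the item-to-reference scale ratio
    that a renderer would use to draw the two side by side."""
    name = pick_comparison(item_size, catalog)
    return name, round(item_size / catalog[name], 2)

# Hypothetical catalog of everyday objects with known longest dimensions (cm).
catalog = {"credit card": 8.6, "sheet of paper": 27.9, "door": 203.0}
name, ratio = size_comparison(15.0, catalog)   # e.g. a smartphone-sized item
```

Rendering the item at `ratio` times the reference's on-screen size keeps the comparison visually honest regardless of the display's absolute scale.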
-
Patent number: 12233341
Abstract: A game scene editing method, a storage medium, and an electronic device are disclosed. The editing method includes: scene editing information sent by a first client is acquired; first scene data of a first game scene is modified based on the scene editing information to obtain second scene data for updating the first game scene into a second game scene; and the second scene data is sent to the first client and a second client, to enable the first client and the second client to display the second game scene.
Type: Grant
Filed: April 15, 2021
Date of Patent: February 25, 2025
Assignee: NETEASE (HANGZHOU) NETWORK CO., LTD.
Inventor: Jiahao Chen
-
Patent number: 12236543
Abstract: Examples of the disclosure describe systems and methods for generating and displaying a virtual companion. In an example method, a first input from an environment of a user is received at a first time via a first sensor. An occurrence of an event in the environment is determined based on the first input. A second input from the user is received via a second sensor, and an emotional reaction of the user is identified based on the second input. An association is determined between the emotional reaction and the event. A view of the environment is presented at a second time later than the first time via a display. A stimulus is presented at the second time via a virtual companion displayed via the display, wherein the stimulus is determined based on the determined association between the emotional reaction and the event.
Type: Grant
Filed: November 17, 2023
Date of Patent: February 25, 2025
Assignee: Magic Leap, Inc.
Inventors: Andrew Rabinovich, John Monos
-
Patent number: 12236537
Abstract: In some examples, a method includes determining, via a user device, three-dimensional spatial data for a physical environment, determining a geographic location of the physical environment, assigning a Spatial Anchor in the physical environment, and creating a digital element in an augmented reality environment created based on the spatial data of the physical environment, the digital element located in the augmented reality environment relative to a position of the Spatial Anchor.
Type: Grant
Filed: December 17, 2021
Date of Patent: February 25, 2025
Inventors: Landon Nickerson, Sean Ong, Preston McCauley
-
Patent number: 12229899
Abstract: A tracking device, communicatively connected to a HMD device (head-mounted display device), is disclosed. The tracking device includes a client processor and a client memory. The client processor is configured to obtain an initial map from the HMD device. The client memory is configured to store the initial map. The client processor is further configured to update the initial map stored in the client memory to generate a client map according to host bundle adjustment data sent by the HMD device.
Type: Grant
Filed: August 4, 2022
Date of Patent: February 18, 2025
Assignee: HTC Corporation
Inventor: Sheng-Hui Tao
-
Patent number: 12229910
Abstract: A system of properly displaying chroma key content is presented. The system obtains a digital representation of a 3D environment, for example a digital photo, and gathers data from that digital representation. The system renders the digital representation in an environmental model and displays that digital representation upon an output device. Depending upon the context, content anchors of the environmental model are selected which will be altered by suitable chroma key content. The chroma key content takes into consideration the position and orientation of the chroma key content relative to the content anchor and relative to the point of view that the environmental model is displayed from in order to accurately display chroma key content in a realistic manner.
Type: Grant
Filed: October 19, 2022
Date of Patent: February 18, 2025
Assignee: NANTMOBILE, LLC
Inventors: Evgeny Dzhurinskiy, Ludmila Bezyaeva
-
Patent number: 12231822
Abstract: With respect to an electronic device and an operating method for the electronic device, according to various embodiments, the electronic device comprises: a rotatable vision sensor configured to detect an external object in a space in which the electronic device is arranged; a rotatable projector configured to output a picture in the space in which the electronic device is arranged; a memory storing spatial information about the space in which the electronic device is arranged; and a processor, wherein the processor can be configured to: control the vision sensor so that the vision sensor tracks the external object while rotating, determine the position of the picture to be output by the projector based on the spatial information and external object information generated based on the tracking of the external object, and control the projector to output the picture at the determined position.
Type: Grant
Filed: October 29, 2019
Date of Patent: February 18, 2025
Assignee: SAMSUNG ELECTRONICS CO., LTD.
Inventors: Yunson Yoo, Kihwan Kim, Younjung Kim, Junyoung Kim, Sanghee Park, Minyoung Lee, Jinhak Lee, Ilkwang Choi
-
Patent number: 12229888
Abstract: A display method is a display method performed by a display device that operates in conjunction with a mobile object, and includes: determining which one of first surrounding information, which is video showing a surrounding condition of the mobile object and is generated using two-dimensional information, and second surrounding information, which is video showing the surrounding condition of the mobile object and is generated using three-dimensional data, is to be displayed, based on a driving condition of the mobile object; and displaying the one of the first surrounding information and the second surrounding information that is determined to be displayed.
Type: Grant
Filed: July 11, 2023
Date of Patent: February 18, 2025
Assignee: PANASONIC INTELLECTUAL PROPERTY CORPORATION OF AMERICA
Inventors: Tatsuya Koyama, Takahiro Nishi, Toshiyasu Sugio, Tadamasa Toma, Satoshi Yoshikawa, Toru Matsunobu
-
Patent number: 12229892
Abstract: In implementations of systems for visualizing vector graphics in three-dimensional scenes, a computing device implements a projection system to receive input data describing a digital image depicting a three-dimensional scene and a vector graphic to be projected into the three-dimensional scene. The projection system generates a depth image by estimating disparity values for pixels of the digital image. A three-dimensional mesh is computed that approximates the three-dimensional scene based on the depth image. The projection system projects the vector graphic onto the digital image by transforming the vector graphic based on the three-dimensional mesh.
Type: Grant
Filed: January 23, 2023
Date of Patent: February 18, 2025
Assignee: Adobe Inc.
Inventors: Ashish Jindal, Vineet Batra, Sumit Dhingra, Siddhartha Chaudhuri, Nathan Aaron Carr, Ankit Phogat
-
Patent number: 12229894
Abstract: Methods, systems, and apparatus, including computer programs encoded on computer storage media, for monitoring extended reality spaces. One of the methods includes selecting, from a plurality of available portions of a physical space at a property, a first available portion of the physical space for representing an extended reality environment; causing presentation of a first portion of the extended reality environment at the first available portion of the physical space; predicting, using sensor data generated from one or more sensors at the property, that the first available portion of the physical space will likely be interfered with; in response to predicting that the first available portion of the physical space will likely be interfered with, selecting, from the plurality of available portions of the physical space, a second available portion for representing the environment; and causing presentation of a second portion of the extended reality environment at the second available portion of the physical space.
Type: Grant
Filed: December 15, 2022
Date of Patent: February 18, 2025
Assignee: ObjectVideo Labs, LLC
Inventors: Donald Gerard Madden, Ethan Shayne
-
Patent number: 12231611
Abstract: Methods are described herein for signaling information regarding different viewpoints in a multi-viewpoint omnidirectional media presentation. Techniques disclosed include receiving, from a server, information identifying groups of viewpoints, including information identifying a first group of viewpoints and a second group of viewpoints, and further receiving, from the server, information identifying one or more omnidirectional videos captured from respective viewpoints belonging to the first group and information identifying one or more omnidirectional videos captured from respective viewpoints belonging to the second group.
Type: Grant
Filed: June 6, 2023
Date of Patent: February 18, 2025
Assignee: InterDigital Madison Patent Holdings, SAS
Inventors: Yong He, Yan Ye, Ahmed Hamza
-
Patent number: 12223725
Abstract: An object identification method is applied to a surveillance system. The surveillance system includes at least one surveillance apparatus. The object identification method includes acquiring a plurality of first feature vectors of a first moving object and a plurality of second feature vectors of at least one second moving object within a series of surveillance images from the surveillance apparatus, transforming the first feature vectors and the second feature vectors respectively into a first cluster distribution set and at least one second cluster distribution set, comparing similarity of the first cluster distribution set and the at least one second cluster distribution set, and setting a similarity ranking of the first cluster distribution set and the at least one second cluster distribution set according to a comparison result so as to determine whether the first moving object and the at least one second moving object are the same.
Type: Grant
Filed: November 29, 2021
Date of Patent: February 11, 2025
Assignee: VIVOTEK INC.
Inventor: Cheng-Chieh Liu
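The pipeline above (feature vectors → cluster distributions → similarity ranking) can be sketched as follows. This is an illustrative stand-in, not VIVOTEK's implementation: it summarizes each object's feature vectors as a diagonal Gaussian and ranks candidates by Bhattacharyya distance, one common choice of distribution similarity.

```python
# Hedged sketch: summarize each object's appearance feature vectors as a
# per-dimension Gaussian "cluster distribution" and rank candidate objects
# by distribution similarity (most similar first).
import math
from statistics import mean, pvariance

def to_distribution(vectors):
    """Collapse a list of feature vectors into per-dimension (mean, variance)."""
    dims = list(zip(*vectors))
    return [(mean(d), pvariance(d) + 1e-6) for d in dims]

def bhattacharyya(p, q):
    """Bhattacharyya distance between two diagonal-Gaussian distributions."""
    dist = 0.0
    for (m1, v1), (m2, v2) in zip(p, q):
        v = (v1 + v2) / 2
        dist += (m1 - m2) ** 2 / (8 * v) + 0.5 * math.log(v / math.sqrt(v1 * v2))
    return dist

def rank_candidates(first_vectors, candidates):
    """Return candidate ids sorted from most to least similar to the first object."""
    p = to_distribution(first_vectors)
    scored = [(bhattacharyya(p, to_distribution(vecs)), cid)
              for cid, vecs in candidates.items()]
    return [cid for _, cid in sorted(scored)]
```

The top-ranked candidate would then be compared against a threshold to decide whether the two moving objects are the same.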
-
Patent number: 12223700
Abstract: Disclosed is an automatic image analysis method that can be used to automatically recognise at least one rare characteristic in an image to be analysed. The method comprises a learning phase during which at least one convolutional neural network is trained to recognise characteristics, a parameter space of dimension n, in which n ≥ 2, is constructed from at least one intermediate layer of the network, and a presence probability function is determined for each characteristic in the parameter space from a projection of reference images in the parameter space. During a phase of analysing the image to be analysed, the method comprises a step of recognising the at least one rare characteristic in the image to be analysed on the basis of the presence probability function determined for the at least one rare characteristic.
Type: Grant
Filed: May 7, 2020
Date of Patent: February 11, 2025
Assignees: Université Brest Bretagne Occidentale, Institut National de la Santé et de la Recherche Médicale (INSERM)
Inventor: Gwenolé Quellec
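One simple way to realize a "presence probability function from projected reference images" is kernel density estimation over the reference embeddings. The sketch below is an assumption-laden illustration, not the patented method: the 2-D points stand in for projections from an intermediate CNN layer, and the characteristic names and threshold are hypothetical.

```python
# Hedged sketch of a presence-probability function: Gaussian-kernel density
# of a projected image point relative to reference-image projections,
# computed per characteristic.
import math

def presence_probability(point, references, bandwidth=1.0):
    """Mean Gaussian-kernel density of `point` given reference embeddings."""
    if not references:
        return 0.0
    total = 0.0
    for ref in references:
        sq = sum((a - b) ** 2 for a, b in zip(point, ref))
        total += math.exp(-sq / (2 * bandwidth ** 2))
    return total / len(references)

def recognise_rare(point, per_characteristic_refs, threshold=0.1):
    """Report characteristics whose presence probability exceeds the threshold."""
    return [name for name, refs in per_characteristic_refs.items()
            if presence_probability(point, refs) >= threshold]
```

A rare characteristic is then "recognised" when the analysed image's projection falls in a high-density region of that characteristic's references.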
-
Patent number: 12223595
Abstract: A system is provided which mixes static scene and live annotations for labeled dataset collection. A first recording device obtains a 3D mesh of a scene with physical objects. The first recording device marks, while in a first mode, first annotations for a physical object displayed in the 3D mesh. The system switches to a second mode. The system displays, on the first recording device while in the second mode, the 3D mesh including a first projection indicating a 2D bounding area corresponding to the marked first annotations. The first recording device marks, while in the second mode, second annotations for the physical object or another physical object displayed in the 3D mesh. The system switches to the first mode. The first recording device displays, while in the first mode, the 3D mesh including a second projection indicating a 2D bounding area corresponding to the marked second annotations.
Type: Grant
Filed: August 2, 2022
Date of Patent: February 11, 2025
Assignee: Xerox Corporation
Inventors: Matthew A. Shreve, Jeyasri Subramanian
-
Patent number: 12223921
Abstract: A drive method for a display panel, and a display apparatus. The drive method includes: when current original gray-scale values corresponding to sub-pixels in the same region are the same in a plurality of continuous display frames, converting the current original gray-scale values into a first target gray-scale value and a second target gray-scale value; and, in a current display frame of the plurality of continuous display frames, controlling a data voltage corresponding to the first target gray-scale value to be input to a first sub-pixel unit in the region, and controlling a data voltage corresponding to the second target gray-scale value to be input to a second sub-pixel unit in the region, where each of the first sub-pixel unit and the second sub-pixel unit includes at least one sub-pixel.
Type: Grant
Filed: January 10, 2022
Date of Patent: February 11, 2025
Assignees: Wuhan BOE Optoelectronics Technology Co., Ltd., BOE Technology Group Co., Ltd.
Inventors: Rongcheng Liu, Hui Wang, Wei Yuan, Yanwei Lv, Shaohui Li, Jiantao Liu
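The conversion step can be sketched as splitting a static gray level G into a symmetric pair around G. This is a minimal illustration under assumed parameters (the offset, the clamping policy, and the goal of preserving the spatial average are this example's assumptions, not the patent's claims):

```python
# Illustrative sketch: when a region shows the same gray level g across
# consecutive frames, drive one sub-pixel unit at g + delta and the other
# at g - delta so the spatial average stays at g while no sub-pixel holds
# a static value. delta and the 8-bit range are assumptions.
def split_gray(g, delta=8, lo=0, hi=255):
    """Return (g1, g2) averaging to g, clamped to the valid gray-scale range."""
    g1 = min(hi, g + delta)
    g2 = max(lo, g - delta)
    # If clamping at either end broke the symmetry, shrink both offsets
    # so the pair still averages to g.
    if g1 - g != g - g2:
        shift = min(g1 - g, g - g2)
        g1, g2 = g + shift, g - shift
    return g1, g2
```

For instance, a mid-gray region splits to a brighter and a darker target value, while values near the range limits get a reduced, still symmetric, offset.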
-
Patent number: 12223585
Abstract: Apparatus and method for grouping rays based on quantized ray directions. For example, one embodiment of an apparatus comprises: a ray generator to generate a plurality of rays; ray direction evaluation circuitry/logic to generate approximate ray direction data for each of the plurality of rays; and ray sorting circuitry/logic to sort the rays into a plurality of ray queues based, at least in part, on the approximate ray direction data.
Type: Grant
Filed: October 3, 2023
Date of Patent: February 11, 2025
Assignee: Intel Corporation
Inventors: Karol Szerszen, Prasoonkumar Surti, Gabor Liktor, Karthik Vaidyanathan, Sven Woop
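The sorting idea can be sketched in software: quantize each ray's direction to a coarse key and bin rays with matching keys into the same queue so that similarly directed rays are traced together. The key layout below (sign bits plus dominant axis) is one common illustrative scheme, not Intel's hardware encoding.

```python
# Hedged sketch of grouping rays by quantized direction. The key packs the
# sign bit of each direction component with the index of the dominant axis;
# this layout is an assumption for illustration.
from collections import defaultdict

def quantize_direction(d):
    """Map a direction (x, y, z) to a small integer key (approximate direction)."""
    sx, sy, sz = (int(c < 0.0) for c in d)
    major = max(range(3), key=lambda i: abs(d[i]))  # dominant axis: 0, 1, or 2
    return (major << 3) | (sx << 2) | (sy << 1) | sz

def sort_rays(rays):
    """Bin rays into queues keyed by their approximate direction."""
    queues = defaultdict(list)
    for ray in rays:
        queues[quantize_direction(ray["dir"])].append(ray)
    return queues
```

Rays in one queue then share coarse direction coherence, which is what makes batched traversal cheaper.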
-
Patent number: 12225175
Abstract: An apparatus for creating a virtual world includes a processor and a memory connected to the processor, in which the memory stores program instructions executed by the processor so as to receive an image of a real space through a device having a stereo camera, collect mesh data for the real space and an object existing in the real space through the image, determine coordinates for first edges of the real space from the mesh data for the real space, select one of a plurality of second edges based on an area of a virtual space defined by each of the plurality of second edges facing a predetermined direction, when there are more first edges of the real space than edges of a polygon preset for the real space, and output a virtual space defined by some of the first edges and the selected one second edge and a virtual object corresponding to the real object recognized in the real space.
Type: Grant
Filed: October 27, 2022
Date of Patent: February 11, 2025
Assignee: INDUSTRY ACADEMY COOPERATION FOUNDATION OF SEJONG UNIVERSITY
Inventors: Soo Mi Choi, Jong Won Lee, Ho San Kang
-
Patent number: 12223594
Abstract: A method is provided for generating a three-dimensional (3D) visually representative model of a vehicle. The method includes receiving images acquired from a number of viewpoints of different sections of the vehicle, and performing photogrammetry on the images to extract a profile of the vehicle. The method includes creating a wireframe mesh or point cloud from the profile, and generating the 3D model of the vehicle. The images are processed to determine areas on a surface of the vehicle in which a defect is detected, and markers are appended onto respective areas of the 3D model that correspond to the areas on the surface of the vehicle such that the defect is appended onto the 3D model. The method further includes generating a display of the 3D model of the vehicle including the markers that indicate the areas on the surface of the vehicle in which the defect is detected.
Type: Grant
Filed: December 10, 2021
Date of Patent: February 11, 2025
Assignee: The Boeing Company
Inventor: Mathew A. Coffman
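The marker-appending step can be sketched independently of the photogrammetry itself. In the sketch below, the mapping from image-space defect detections to 3D surface areas is assumed to have already happened upstream; the model structure and field names are hypothetical.

```python
# Hedged sketch of appending defect markers to a 3-D model. The dict-based
# model and the area ids are illustrative assumptions, not Boeing's format.
def append_defect_markers(model, defect_areas):
    """Attach one marker per defective surface area; return the marker count."""
    markers = model.setdefault("markers", [])
    for area_id, defect in defect_areas.items():
        markers.append({"area": area_id, "defect": defect})
    return len(markers)
```

A renderer would then draw each marker at its corresponding surface area when displaying the model.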
-
Patent number: 12217505
Abstract: Disclosed are an image-based indoor positioning service system and method. A service server includes a communication unit configured to receive a captured image of a node set in an indoor map, and a location estimation model generation unit configured to learn the captured image of the node received through the communication unit, segment the learned captured image to obtain objects, and selectively activate the objects in the learned image to generate a location estimation model.
Type: Grant
Filed: June 24, 2019
Date of Patent: February 4, 2025
Assignee: DABEEO, INC.
Inventor: Ju Hum Park
-
Patent number: 12217341
Abstract: Communications pertaining to a digital human can include communicating via a lighting system based on determining an attribute of a user from one or more sensor-generated signals. A communicative lighting sequence can be determined based on the user attribute. The lighting sequence can correspond to a condition of a digital human and can be configured to communicate to the user the condition of the digital human. The lighting sequence can be generated with an LED array mounted on a digital human display assembly.
Type: Grant
Filed: January 7, 2022
Date of Patent: February 4, 2025
Assignee: SAMSUNG ELECTRONICS CO., LTD.
Inventors: Pranav K. Mistry, Jiawei Zhang, Seungsu Hong, Abhijit Z. Bendale
-
Patent number: RE50324
Abstract: A machine implemented method includes sensing entities in first and second domains. If a first stimulus is present and an entity is in the first domain, the entity is transferred from first to second domain via a bridge. If a second stimulus is present and an entity is in the second domain, the entity is transferred from second to first domain via the bridge. At least some of the first domain is outputted. An apparatus includes a processor that defines first and second domains and a bridge that enables transfer of entities between domains, an entity identifier that identifies entities in the domains, a stimulus identifier that identifies stimuli, and a display that outputs at least some of the first domain. The processor transfers entities from first to second domain responsive to a first stimulus, and transfers entities from second to first domain responsive to a second stimulus.
Type: Grant
Filed: February 26, 2021
Date of Patent: March 4, 2025
Assignee: West Texas Technology Partners, LLC
Inventor: Michael Lamberty
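The two-domain bridge behaves like a small state machine: entities move one way on the first stimulus, the other way on the second, and only the first domain is output. A minimal sketch, with hypothetical class and stimulus names:

```python
# Illustrative sketch of the two-domain bridge. Class, method, and stimulus
# names are assumptions for this example, not taken from the patent.
class Bridge:
    def __init__(self, first, second):
        self.first = set(first)    # entities currently in the first domain
        self.second = set(second)  # entities currently in the second domain

    def apply(self, stimulus, entity):
        """Transfer the entity across the bridge if the stimulus applies."""
        if stimulus == "first" and entity in self.first:
            self.first.discard(entity)
            self.second.add(entity)
        elif stimulus == "second" and entity in self.second:
            self.second.discard(entity)
            self.first.add(entity)

    def output(self):
        """Only (at least some of) the first domain is displayed."""
        return set(self.first)
```

Note that each stimulus only moves entities already in the matching domain, mirroring the conditional transfers in the abstract.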