Patents Issued on March 12, 2024
-
Patent number: 11928752
Abstract: A processor device has a CPU cooperating with an input device and an output device, under control of stored instructions, and is arranged to receive service requests at the input device, assign service requests received in successive time periods to respective batches of requests; access stored service provider data to identify available service providers from among a pool of service providers; after completing the assignment of service requests to a batch, perform a matching process to endeavour to match each service request of the batch of requests to a service provider; and for each service provider to whom a match is made, output a notification of the respective potential match from the output device.
Type: Grant
Filed: September 21, 2022
Date of Patent: March 12, 2024
Assignee: GRABTAXI HOLDINGS PTE. LTD.
Inventors: Kong-Wei Lye, Yang Cao, Swara Desai, Chen Liang, Xiaojia Mu, Yuliang Shen, Sien Y. Tan, Muchen Tang, Renrong Weng, Chang Zhao
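A minimal sketch of the batch-then-match flow the abstract describes, assuming a simple greedy nearest-provider policy; the policy, distance metric, and all names are illustrative, not taken from the patent.

```python
# Illustrative batch-and-match sketch (greedy nearest-provider policy is an assumption).
from math import hypot

def assign_to_batches(requests, period_s=10):
    """Group service requests into batches by the time window they arrived in."""
    batches = {}
    for req in requests:
        batches.setdefault(int(req["t"] // period_s), []).append(req)
    return [batches[k] for k in sorted(batches)]

def match_batch(batch, providers):
    """Greedily match each request in a batch to the closest still-available provider."""
    available = dict(providers)          # provider_id -> (x, y)
    notifications = []
    for req in batch:
        if not available:
            break
        pid = min(available, key=lambda p: hypot(available[p][0] - req["x"],
                                                 available[p][1] - req["y"]))
        notifications.append({"provider": pid, "request": req["id"]})
        del available[pid]               # one request per provider per batch
    return notifications                 # one notification per matched provider

requests = [{"id": 1, "t": 2.0, "x": 0.0, "y": 0.0},
            {"id": 2, "t": 7.5, "x": 5.0, "y": 1.0}]
providers = {"p1": (1.0, 0.0), "p2": (4.0, 2.0)}
for batch in assign_to_batches(requests):
    print(match_batch(batch, providers))
```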
-
Patent number: 11928753
Abstract: Techniques related to automatically segmenting video frames into per pixel fidelity object of interest and background regions are discussed. Such techniques include applying tessellation to a video frame to generate feature frames corresponding to the video frame and applying a segmentation network implementing context aware skip connections to an input volume including the feature frames and a context feature volume corresponding to the video frame to generate a segmentation for the video frame.
Type: Grant
Filed: January 27, 2020
Date of Patent: March 12, 2024
Assignee: Intel Corporation
Inventors: Anthony Rhodes, Manan Goel
-
Patent number: 11928754
Abstract: This disclosure provides systems, devices, apparatus, and methods, including computer programs encoded on storage media, for GPU wave-to-wave optimization. A graphics processor may execute a shader program for a first wave associated with a draw call or a compute kernel. The graphics processor may identify at least one first indication for the first wave associated with the draw call or the compute kernel. The graphics processor may store the at least one first indication for the first wave to a memory location. The graphics processor may execute the shader program for at least one second wave associated with the draw call or the compute kernel. The execution of the shader program for the at least one second wave may be based on the shader program for the at least one second wave reading the memory location to retrieve the at least one first indication.
Type: Grant
Filed: April 7, 2022
Date of Patent: March 12, 2024
Assignee: QUALCOMM Incorporated
Inventor: Andrew Evan Gruber
-
Patent number: 11928755
Abstract: Integrating virtual tours on digital resources is provided. A system receives a call generated by a client application executed on a client device responsive to a refresh of a digital resource. The system identifies a request for content for display in a content slot on the digital resource having a content slot size. The system transmits, to the client device, a viewer application configured to execute a priority caching function in the content slot. The viewer application downloads, based on the priority caching function and a computing characteristic of the client device, a first portion of a virtual tour. The viewer application renders the first portion of the virtual tour via the content slot. The viewer application establishes a controller that controls rendering of the virtual tour in response to a detection of an interaction on the digital resource outside the content slot.
Type: Grant
Filed: February 18, 2022
Date of Patent: March 12, 2024
Assignee: Threshold 360, Inc.
Inventors: Daniel Kraus, Sean Kovacs
-
Patent number: 11928756
Abstract: To present augmented reality features without localizing a user, a client device receives a request for presenting augmented reality features in a camera view of a computing device of the user. Prior to localizing the user, the client device obtains sensor data indicative of a pose of the user, and determines the pose of the user based on the sensor data with a confidence level that exceeds a confidence threshold which indicates a low accuracy state. Then the client device presents one or more augmented reality features in the camera view in accordance with the determined pose of the user while in the low accuracy state.
Type: Grant
Filed: September 22, 2021
Date of Patent: March 12, 2024
Assignee: GOOGLE LLC
Inventors: Mohamed Suhail Mohamed Yousuf Sait, Andre Le, Juan David Hincapie, Mirko Ranieri, Marek Gorecki, Wenli Zhao, Tony Shih, Bo Zhang, Alan Sheridan, Matt Seegmiller
-
Patent number: 11928757
Abstract: Methods, systems, and non-transitory computer readable media are disclosed for intelligently generating partially textured accessible images. In one or more embodiments, the disclosed systems generate or access a color texture map specific to a given color vision deficiency. For example, the disclosed systems generate a color texture map that divides a color space into one or more textured segments of colors and one or more complementary untextured segments of colors. The disclosed systems can utilize the color texture map to intelligently apply different textures to subsets of pixels that contribute to color vision deficiency confusion. For example, the disclosed system maps pixels from a color image to the color texture map to identify a first subset of pixels corresponding to the textured segment of colors. The disclosed system generates a partially textured accessible image by applying a texture to the first subset of pixels.
Type: Grant
Filed: February 16, 2022
Date of Patent: March 12, 2024
Assignee: Adobe Inc.
Inventors: Jose Ignacio Echevarria Vallespi, Rachel Franz, Paul Asente
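A small sketch of the idea of a color texture map: colors are bucketed (here by hue), a subset of buckets is marked as textured, and a texture is applied only to pixels falling in those buckets. The hue bucketing, the stripe texture, and the bin choices are assumptions for illustration, not the patented mapping.

```python
# Illustrative partial-texturing sketch; bucketing rule and stripe texture are assumptions.
import colorsys
import numpy as np

def build_color_texture_map(n_bins=12, textured_bins=(0, 1, 2, 3)):
    """Map each hue bin to True (textured) or False (untextured)."""
    return {b: (b in textured_bins) for b in range(n_bins)}

def apply_partial_texture(image_rgb, texture_map, n_bins=12, stripe_period=4):
    """Darken every other stripe, but only for pixels whose hue bin is textured."""
    out = image_rgb.astype(np.float32).copy()
    h, w, _ = image_rgb.shape
    for y in range(h):
        for x in range(w):
            r, g, b = image_rgb[y, x] / 255.0
            hue = colorsys.rgb_to_hsv(r, g, b)[0]
            if texture_map[int(hue * n_bins) % n_bins] and (y // stripe_period) % 2 == 0:
                out[y, x] *= 0.6          # stripe pixels belonging to "confusable" colors
    return out.astype(np.uint8)

img = np.full((8, 8, 3), (200, 60, 60), dtype=np.uint8)   # reddish test image
print(apply_partial_texture(img, build_color_texture_map()).shape)
```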
-
Patent number: 11928758
Abstract: This disclosure enables various technologies for augmented-reality, which allow a person to obtain (e.g., electronically reserve, electronically purchase, physically receive) a memorabilia item during an event at or remote from a venue in which a performer is performing or within a preset time period after the event (e.g., within a few hours after the event) at or remote from the venue. These technologies may reduce a rise in counterfeits of the memorabilia item, while minimizing a price increase of the memorabilia item.
Type: Grant
Filed: March 5, 2021
Date of Patent: March 12, 2024
Inventor: Christopher Renwick Alston
-
Patent number: 11928759
Abstract: The present disclosure describes methods and devices for generating a vector line drawing. A vector line drawing network may include a machine learning-based model that is trained to convert a raster image to a vector line drawing directly. The vector line drawing network may be trained end-to-end, using supervised learning, where only raster images are used as training data. A vector line drawing is generated stroke by stroke, over a series of time steps. In each time step, a dynamic drawing window is moved and scaled across the input raster image to sample a patch of the raster image, and a drawing stroke is predicted to draw a stroke in a corresponding patch in the canvas for the vector line drawing. The image patches are pasted in the canvas to assemble a final vector line drawing that corresponds to the input raster image.
Type: Grant
Filed: April 19, 2022
Date of Patent: March 12, 2024
Assignee: HUAWEI TECHNOLOGIES CO., LTD.
Inventors: Changqing Zou, Mingxue Wang, Himanshu Arora
-
Patent number: 11928760
Abstract: Techniques are described for automatically detecting and accommodating state changes in a computer-generated forecast. In one or more embodiments, a representation of a time-series signal is generated within volatile and/or non-volatile storage of a computing device. The representation may be generated in such a way as to approximate the behavior of the time-series signal across one or more seasonal periods. Once generated, a set of one or more state changes within the representation of the time-series signal is identified. Based at least in part on at least one state change in the set of one or more state changes, a subset of values from the sequence of values is selected to train a model. An analytical output is then generated, within volatile and/or non-volatile storage of the computing device, using the trained model.
Type: Grant
Filed: February 26, 2021
Date of Patent: March 12, 2024
Assignee: Oracle International Corporation
Inventors: Dustin Garvey, Uri Shaft, Sampanna Shahaji Salunke, Lik Wong
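A minimal sketch of selecting a training subset after the most recent detected state change, which is the core idea the abstract outlines. The change detector here (a simple shift-in-mean test) and its window and threshold are assumptions, not the claimed detection method.

```python
# Illustrative state-change-aware training subset selection.
import numpy as np

def detect_state_changes(values, window=24, z_thresh=3.0):
    """Flag indices where the windowed mean shifts strongly relative to the prior window."""
    changes = []
    for i in range(window, len(values) - window):
        before, after = values[i - window:i], values[i:i + window]
        sigma = np.std(before) + 1e-9
        if abs(np.mean(after) - np.mean(before)) / sigma > z_thresh:
            changes.append(i)
    return changes

def training_subset(values, changes):
    """Keep only the values after the last state change for model training."""
    return values[changes[-1]:] if changes else values

series = np.concatenate([np.random.normal(10, 1, 200), np.random.normal(50, 1, 200)])
subset = training_subset(series, detect_state_changes(series))
print(len(subset))  # roughly the post-change portion of the series
```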
-
Patent number: 11928761
Abstract: The present disclosure relates to computer technology, and provides a method for identifying a K-line form and an electronic device. The method includes: obtaining, by a terminal device, data of N1 K-lines of a first stock in a first time window, where N1 is an integer greater than 1; obtaining, by the terminal device, a first target form corresponding to the data of the N1 K-lines and x key K-lines in the data of the N1 K-lines, the first target form indicating a K-line form of the data of the N1 K-lines, where x ≤ N1; displaying, by the terminal device, the N1 K-lines corresponding to the data of the N1 K-lines and drawing, by the terminal device, a first target form line on the N1 K-lines based on the first target form and the x key K-lines.
Type: Grant
Filed: September 1, 2022
Date of Patent: March 12, 2024
Assignee: FUTU NETWORK TECHNOLOGY (SHENZHEN) CO., LTD.
Inventors: Xin Xie, Zheng Pei, Jinhui Hu
-
Patent number: 11928762
Abstract: Embodiments of the present invention provide systems, methods, and computer storage media for editing images using a web-based intermediary between a user interface on a client device and an image editing neural network(s) (e.g., a generative adversarial network) on a server(s). The present image editing system supports multiple users in the same software container, advanced concurrency of projection and transformation of the same image, clubbing transformation requests from several users hosted in the same software container, and smooth display updates during a progressive projection.
Type: Grant
Filed: September 3, 2021
Date of Patent: March 12, 2024
Assignee: Adobe Inc.
Inventors: Akhilesh Kumar, Ratheesh Kalarot, Baldo Faieta, Shabnam Ghadar
-
Patent number: 11928763
Abstract: Automating conversion of drawings to indoor maps and plans. One example is a computer-implemented method of determining a geo-location, the method comprising: determining a floor-level outline of a floor depicted in a CAD drawing; receiving an approximate geo-location of a building to which the CAD drawing applies; obtaining an overhead image of a target area encompassing the approximate geo-location, the overhead image comprising a plurality of buildings within the target area; identifying a plurality of building footprints within the target area; calculating, by a device, a plurality of distance functions that relate the floor-level outline to each of the plurality of building footprints, the calculating creating a plurality of similarity scores; selecting a building footprint from the plurality of building footprints, the selecting based on the plurality of similarity scores; and calculating a final geo-location of the building corresponding to the building footprint.
Type: Grant
Filed: July 13, 2023
Date of Patent: March 12, 2024
Assignee: Pointr Limited
Inventors: Ege Çetintaş, Melih Peker, Can Tunca
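A sketch of scoring a floor outline against candidate building footprints and selecting the best match. A symmetric Hausdorff-style distance on outline points stands in for the "distance functions"; the patent's actual scoring may differ, and the coordinates below are made up.

```python
# Illustrative outline-to-footprint matching; the distance function is an assumption.
import numpy as np

def hausdorff(a, b):
    """Symmetric Hausdorff distance between two (N, 2) point sets."""
    d = np.linalg.norm(a[:, None, :] - b[None, :, :], axis=2)
    return max(d.min(axis=1).max(), d.min(axis=0).max())

def select_footprint(floor_outline, footprints):
    """Return the index of the footprint most similar to the floor outline."""
    scores = [1.0 / (1.0 + hausdorff(floor_outline, fp)) for fp in footprints]
    return int(np.argmax(scores)), scores

outline = np.array([[0, 0], [10, 0], [10, 5], [0, 5]], dtype=float)
candidates = [np.array([[0, 0], [9.8, 0], [9.8, 5.1], [0, 5.1]]),   # close match
              np.array([[0, 0], [30, 0], [30, 30], [0, 30]])]       # wrong building
best, scores = select_footprint(outline, candidates)
print(best, [round(s, 3) for s in scores])   # expect index 0 to win
```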
-
Patent number: 11928764
Abstract: Apparatuses, systems, and techniques to animate objects in computer-generated graphics. In at least one embodiment, one or more neural networks are trained to identify one or more forces to be applied to one or more objects based, at least in part, on training data corresponding to two or more aspects of motion of the one or more objects.
Type: Grant
Filed: September 15, 2020
Date of Patent: March 12, 2024
Assignee: NVIDIA Corporation
Inventors: Tingwu Wang, Yun Rong Guo, Maria Shugrina, Sanja Fidler
-
Patent number: 11928765
Abstract: Embodiments of this disclosure include an information processing method, an information processing apparatus, and a non-transitory computer-readable storage medium. A first key frame that includes initial posture data corresponding to an initial posture of a target virtual character is obtained. Target posture data is determined by inputting the initial posture data and a target task to a policy network trained by reinforcement learning. An output of the policy network indicates the target posture data corresponding to a target posture. At least one force to be acted on at least one first joint of the target virtual character is determined according to the initial and the target posture data. A posture of the target virtual character is adjusted from the initial posture to the target posture by applying the at least one force on the at least one first joint to obtain a second key frame.
Type: Grant
Filed: March 4, 2022
Date of Patent: March 12, 2024
Assignee: TENCENT TECHNOLOGY (SHENZHEN) COMPANY LIMITED
Inventor: Qunfen Lin
-
Patent number: 11928766
Abstract: The present disclosure is related to a method to generate user representative avatars that fit within a design paradigm. The method includes receiving depth information corresponding to multiple user features of the user, determining one or more feature landmarks for the user based on the depth information, utilizing the one or more feature landmarks to classify a first user feature relative to an avatar feature category, selecting a first avatar feature from the avatar feature category based on the classification of the first user feature, combining the first avatar feature within an avatar representation to generate a user avatar, and outputting the user avatar for display.
Type: Grant
Filed: March 24, 2022
Date of Patent: March 12, 2024
Assignee: Disney Enterprises, Inc.
Inventors: Dumene Comploi, Francisco E. Gonzalez
-
Patent number: 11928767
Abstract: Embodiments of the present disclosure provide a method for audio-driven character lip sync, a model for audio-driven character lip sync, and a training method therefor. A target dynamic image is obtained by acquiring a character image of a target character and speech for generating a target dynamic image, processing the character image and the speech as image-audio data that may be trained, respectively, and mixing the image-audio data with auxiliary data for training. When a large amount of sample data needs to be obtained for training in different scenarios, a video when another character speaks is used as an auxiliary video for processing, so as to obtain the auxiliary data. The auxiliary data, which replaces non-general sample data, and other data are input into a model in a preset ratio for training. The auxiliary data may improve a process of training a synthetic lip sync action of the model, so that there are no parts unrelated to the synthetic lip sync action during the training process.
Type: Grant
Filed: June 21, 2023
Date of Patent: March 12, 2024
Assignee: NANJING SILICON INTELLIGENCE TECHNOLOGY CO., LTD.
Inventors: Huapeng Sima, Zheng Liao
-
Patent number: 11928768
Abstract: A method of controlling the order in which primitives, generated during tessellation, are output by the tessellation unit involves sub-dividing a patch, selecting one of the two sub-patches which are formed by the sub-division and tessellating that sub-patch until no further sub-division is possible before tessellating the other (non-selected) sub-patch. The method is recursively applied at each level of sub-division. Patches are output as primitives at the point in the method where they do not require any further sub-division. The selection of a sub-patch is made based on the values of one or more flags and any suitable tessellation method may be used to determine whether to sub-divide a patch. Methods of controlling the order in which vertices are output by the tessellation unit are also described and these may be used in combination with, or independently of, the method of controlling the primitive order.
Type: Grant
Filed: August 5, 2022
Date of Patent: March 12, 2024
Assignee: Imagination Technologies Limited
Inventor: Peter Malcolm Lacey
-
Employing controlled illumination for hyperspectral or multispectral imaging of food in an appliance
Patent number: 11928769
Abstract: In one embodiment, a method includes, by an electromagnetic device, emitting optical radiation on one or more objects disposed inside an interior of the electronic device, where the optical radiation is emitted by one or more radiation sources, capturing a set of 2D images of the one or more objects illuminated by the optical radiation, where variation of illumination or of an imaging process permits the set of 2D images to be combined into a representation of the one or more objects, determining whether the set of images comprises a representation of the one or more objects as imaged, in response to determining that the set of images comprises a representation of the one or more objects as imaged, generating a three-dimensional (3D) spectral data cube of the one or more first objects based on spectral information of the first set of images, and storing the 3D spectral data cube for processing by the electronic device.
Type: Grant
Filed: July 20, 2022
Date of Patent: March 12, 2024
Assignee: Samsung Electronics Co., Ltd.
Inventors: Ian David Parker, Brian R. Patton, Pedro Martinez Lopez, Sergio Perdices-Gonzalez, Sajid Hassan Sadi
-
Patent number: 11928770
Abstract: Methods and systems are disclosed for traversing nodes in a BVH tree by an intersection engine. Techniques disclosed comprise receiving, by the intersection engine, a traversal instruction, including a tracing-mode, ray data, and an identifier of a node to be traversed, where the tracing-mode includes a closest hit mode and a first hit mode. If the node to be traversed is an internal node, the intersection engine determines, based on the tracing-mode, an order in which children nodes of the node are to be next traversed and outputs identifiers of the children nodes in the determined order.
Type: Grant
Filed: December 27, 2021
Date of Patent: March 12, 2024
Assignee: Advanced Micro Devices, Inc.
Inventors: John Alexandre Tsakok, Skyler Jonathon Saleh
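A sketch of choosing a child-visit order for an internal BVH node based on the tracing mode: closest-hit traversal benefits from visiting nearer children first, while first-hit (any-hit) traversal can keep the stored order. The node layout, AABB entry test, and mode names are assumptions for illustration.

```python
# Illustrative tracing-mode-dependent child ordering for BVH traversal.
CLOSEST_HIT, FIRST_HIT = "closest_hit", "first_hit"

def slab_entry_t(ray_origin, ray_dir, box_min, box_max):
    """Return the parametric entry distance of a ray into an AABB (inf if missed)."""
    t_near, t_far = -float("inf"), float("inf")
    for o, d, lo, hi in zip(ray_origin, ray_dir, box_min, box_max):
        if abs(d) < 1e-12:
            if o < lo or o > hi:
                return float("inf")
            continue
        t0, t1 = sorted(((lo - o) / d, (hi - o) / d))
        t_near, t_far = max(t_near, t0), min(t_far, t1)
    return t_near if t_near <= t_far and t_far >= 0 else float("inf")

def child_order(node, ray_origin, ray_dir, mode):
    """For closest-hit tracing, visit nearer children first; otherwise keep node order."""
    children = node["children"]
    if mode == CLOSEST_HIT:
        return sorted(children,
                      key=lambda c: slab_entry_t(ray_origin, ray_dir, c["min"], c["max"]))
    return children   # first-hit tracing: order does not matter for correctness

node = {"children": [{"id": "far",  "min": (5, -1, -1), "max": (6, 1, 1)},
                     {"id": "near", "min": (1, -1, -1), "max": (2, 1, 1)}]}
print([c["id"] for c in child_order(node, (0, 0, 0), (1, 0, 0), CLOSEST_HIT)])
```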
-
Patent number: 11928771
Abstract: An exemplary method of detecting a light source using an electronic device having a camera and a sensor may include: scanning a real environment using the camera to establish an environment map of the real environment; capturing, using the camera, a first image of a real light source from a first location in the real environment and a second image of the real light source from a second location in the real environment; tracking, using the sensor, a first position and a first orientation of the camera in the environment map while the first image is captured, and a second position and a second orientation of the camera in the environment map while the second image is captured; and computing a position of the real light source in the environment map based on the first position, the first orientation, the second position, and the second orientation.
Type: Grant
Filed: June 6, 2022
Date of Patent: March 12, 2024
Assignee: GUANGDONG OPPO MOBILE TELECOMMUNICATIONS CORP., LTD.
Inventors: Yi Xu, Shuxue Quan
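A sketch of the geometric core of this idea: each tracked camera pose plus the light's position in the image defines a ray toward the light, and the light position can be estimated as the closest point between the two rays. The hard-coded ray directions below stand in for directions that would be derived from the images; this is a triangulation sketch, not the patented pipeline.

```python
# Illustrative two-ray triangulation of a light source position.
import numpy as np

def closest_point_between_rays(p1, d1, p2, d2):
    """Midpoint of the shortest segment between two rays (origins p, directions d)."""
    d1, d2 = d1 / np.linalg.norm(d1), d2 / np.linalg.norm(d2)
    b = d1 @ d2
    w = p1 - p2
    denom = 1.0 - b * b
    if abs(denom) < 1e-9:                       # nearly parallel rays: no stable solution
        return None
    t1 = (b * (d2 @ w) - (d1 @ w)) / denom
    t2 = ((d2 @ w) - b * (d1 @ w)) / denom
    return 0.5 * ((p1 + t1 * d1) + (p2 + t2 * d2))

# Camera at two tracked positions, each looking toward a light at roughly (0, 0, 5).
p1, d1 = np.array([-2.0, 0.0, 0.0]), np.array([2.0, 0.0, 5.0])
p2, d2 = np.array([2.0, 0.0, 0.0]), np.array([-2.0, 0.0, 5.0])
print(closest_point_between_rays(p1, d1, p2, d2))   # ~[0, 0, 5]
```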
-
Patent number: 11928772
Abstract: In a ray tracer, to prevent any long-running query from hanging the graphics processing unit, a traversal coprocessor provides a preemption mechanism that will allow rays to stop processing or time out early. The example non-limiting implementations described herein provide such a preemption mechanism, including a forward progress guarantee, and additional programmable timeout options that can be time or cycle based. Those programmable options provide a means for quality of service timing guarantees for applications such as virtual reality (VR) that have strict timing requirements.
Type: Grant
Filed: August 17, 2022
Date of Patent: March 12, 2024
Assignee: NVIDIA Corporation
Inventors: Greg Muthler, Ronald Charles Babich, Jr., William Parsons Newhall, Jr., Peter Nelson, James Robertson, John Burgess
-
Patent number: 11928773
Abstract: In various embodiments, a training application generates a trained encoder that automatically generates shape embeddings having a first size and representing three-dimensional (3D) geometry shapes. First, the training application generates a different view activation for each of multiple views associated with a first 3D geometry based on a first convolutional neural network (CNN) block. The training application then aggregates the view activations to generate a tiled activation. Subsequently, the training application generates a first shape embedding having the first size based on the tiled activation and a second CNN block. The training application then generates multiple re-constructed views based on the first shape embedding. The training application performs training operation(s) on at least one of the first CNN block and the second CNN block based on the views and the re-constructed views to generate the trained encoder.
Type: Grant
Filed: February 23, 2022
Date of Patent: March 12, 2024
Assignee: AUTODESK, INC.
Inventors: Thomas Ryan Davies, Michael Haley, Ara Danielyan, Morgan Fabian
-
Patent number: 11928774
Abstract: Disclosed herein is a web-based videoconference system that allows for multi-screen sharing. In some embodiments, data specifying a three-dimensional virtual space is received. The three-dimensional virtual space comprises a plurality of participants and an avatar representing each of the plurality of participants and three-dimensional models of a plurality of presentation screens. Multiple presentation streams may be shared on different presentation screens simultaneously.
Type: Grant
Filed: July 20, 2022
Date of Patent: March 12, 2024
Assignee: KATMAI TECH INC.
Inventors: Gerard Cornelis Krol, Erik Stuart Braund
-
Patent number: 11928775
Abstract: An apparatus includes circuitry configured to: select, from at least two images captured in different image-capturing directions and with image-capturing ranges overlapping with each other, an image to be at foreground as viewed from a virtual camera based on an orientation or an angle of view of the virtual camera and the image-capturing directions of the at least two images; map the at least two images onto a three-dimensional object to generate a virtual image, in which the at least two images overlap with each other, having a wider angle of view than the at least two images; and perform perspective projection on the virtual image using the virtual camera, to generate a plane image, based on the selected image to be at the foreground, as a display image.
Type: Grant
Filed: November 24, 2021
Date of Patent: March 12, 2024
Assignee: RICOH COMPANY, LTD.
Inventor: Keiichi Kawaguchi
-
Patent number: 11928776
Abstract: A graphics processing system performs hidden surface removal and texturing/shading on fragments of primitives. The system includes a primary depth buffer (PDB) for storing depth values of resolved fragments, and a secondary depth buffer (SDB) for storing depth values of unresolved fragments. Incoming fragments are depth tested against depth values from either the PDB or the SDB. When a fragment passes a depth test, its depth value is stored in the PDB if it is a resolved fragment (e.g. if it is opaque or translucent), and its depth value is stored in the SDB if it is an unresolved fragment (e.g. if it is a punch through fragment). This provides more opportunities for subsequent opaque objects to overwrite punch through fragments which passed a depth test, thereby reducing unnecessary processing and time which may be spent on fragments which ultimately will not contribute to the final rendered image.
Type: Grant
Filed: October 17, 2021
Date of Patent: March 12, 2024
Assignee: Imagination Technologies Limited
Inventor: John Howson
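A toy sketch of routing depth values to a primary buffer (resolved fragments) or a secondary buffer (unresolved, e.g. punch-through, fragments). Which buffer an incoming fragment is tested against, the fragment fields, and the "smaller depth is closer" convention are all illustrative assumptions rather than the claimed pipeline.

```python
# Illustrative dual depth-buffer bookkeeping for resolved vs. unresolved fragments.
import numpy as np

W, H = 4, 4
primary_depth = np.full((H, W), np.inf)     # resolved (opaque/translucent) fragments
secondary_depth = np.full((H, W), np.inf)   # unresolved (punch-through) fragments

def depth_test(fragment):
    """Depth-test an incoming fragment and record it in the appropriate buffer."""
    x, y, z, resolved = fragment["x"], fragment["y"], fragment["z"], fragment["resolved"]
    reference = primary_depth if resolved else secondary_depth
    if z >= reference[y, x]:                # fails against the buffer it is tested with
        return False
    if resolved:
        primary_depth[y, x] = z
        # a closer resolved fragment also invalidates any unresolved fragment behind it
        if secondary_depth[y, x] > z:
            secondary_depth[y, x] = np.inf
    else:
        secondary_depth[y, x] = z
    return True

print(depth_test({"x": 1, "y": 1, "z": 0.8, "resolved": False}))  # punch-through stored
print(depth_test({"x": 1, "y": 1, "z": 0.5, "resolved": True}))   # opaque overwrites it
```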
-
Patent number: 11928777
Abstract: Disclosed herein is a system that allows for the use of optical labels in a virtual environment. The system and methods generate a virtual optical label with embedded data. A 3D model of an object is rendered for display in a virtual environment, where the 3D model incorporates the virtual optical label. The virtual optical label is displayed on the 3D model in the virtual environment. A scanned image of the virtual optical label is received via a camera application of a mobile computing device. Based on receiving the scanned image, an action is performed based on the embedded data.
Type: Grant
Filed: May 3, 2023
Date of Patent: March 12, 2024
Assignee: Katmai Tech Inc.
Inventors: Petr Polyakov, Erik Stuart Braund
-
Patent number: 11928778
Abstract: A method for human body model reconstruction and a reconstruction system are disclosed. The method includes acquiring a target image, and acquiring a segmented image by segmenting the target image based on an object to be reconstructed in the target image, the target image being one front image of the object; acquiring an initial estimate shape and a part of texture information of the object respectively, according to the segmented image; determining an initial 3D model of the object through the initial estimate shape, the initial 3D model being a 3D model without texture; acquiring complete texture information of the object according to the part of texture information and a texture generation model; and generating a 3D reconstruction model of the object based on the initial 3D model and the complete texture information, the 3D reconstruction model being a 3D model with texture.
Type: Grant
Filed: March 29, 2022
Date of Patent: March 12, 2024
Assignee: GUANGDONG OPPO MOBILE TELECOMMUNICATIONS CORP., LTD.
Inventors: Zhong Li, Yi Xu, Shuxue Quan
-
Patent number: 11928779
Abstract: Various implementations disclosed herein generate a mesh representing the surfaces in a physical environment. The mesh is generated using multi-resolution voxels based on detected depth information, e.g., from a depth camera. The techniques may use multiple hash tables to store the multi-resolution voxel data. For example, the hash tables may store each voxel's 3D position and a truncated signed distance field (TSDF) value corresponding to each voxel's distance to a nearest surface. Each of the multiple hash tables may include data corresponding to a different level of resolution and those resolutions may depend upon distance/noise or other factors. For example, voxels close to a depth camera may have a finer resolution and smaller size compared to voxels that are further from the depth camera. Techniques disclosed herein may involve using a meshing algorithm that combines multi-resolution voxel information stored in multiple hash tables to generate a single mesh.
Type: Grant
Filed: April 11, 2022
Date of Patent: March 12, 2024
Assignee: Apple Inc.
Inventors: Maxime Meilland, Andrew Predoehl, Kyle L. Simek, Ming Chuang, Pedro A. Pinies Rodriguez
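A compact sketch of multi-resolution TSDF storage: one hash table (here a Python dict) per resolution level, with the level picked from a point's distance to the depth camera, so nearby geometry gets finer voxels. The level thresholds, voxel sizes, and truncation distance are illustrative assumptions.

```python
# Illustrative multi-resolution voxel hash tables storing TSDF values.
import numpy as np

LEVELS = [(0.0, 0.01), (1.0, 0.04), (3.0, 0.16)]   # (min camera distance m, voxel size m)
hash_tables = [dict() for _ in LEVELS]             # voxel (ix, iy, iz) -> TSDF value

def level_for_distance(dist):
    """Finer voxels near the camera, coarser voxels (and a separate table) farther away."""
    return max(i for i, (min_dist, _) in enumerate(LEVELS) if dist >= min_dist)

def integrate_point(point, camera_pos, surface_dist, trunc=0.1):
    """Store a truncated signed distance for the voxel containing `point`."""
    level = level_for_distance(np.linalg.norm(point - camera_pos))
    voxel_size = LEVELS[level][1]
    key = tuple(np.floor(point / voxel_size).astype(int))
    hash_tables[level][key] = float(np.clip(surface_dist, -trunc, trunc))

camera = np.zeros(3)
integrate_point(np.array([0.2, 0.1, 0.5]), camera, surface_dist=0.004)   # near: level 0
integrate_point(np.array([1.0, 0.5, 4.0]), camera, surface_dist=-0.3)    # far: level 2
print([len(t) for t in hash_tables])   # one voxel stored at level 0 and one at level 2
```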
-
Patent number: 11928780
Abstract: In one implementation, a method of enriching a three-dimensional scene model with a three-dimensional object model based on a semantic label is performed at a device including one or more processors and non-transitory memory. The method includes obtaining a three-dimensional scene model of a physical environment including a plurality of points, wherein each of the plurality of points is associated with a set of coordinates in a three-dimensional space, wherein a subset of the plurality of points is associated with a particular cluster identifier and a particular semantic label. The method includes retrieving a three-dimensional object model based on the particular semantic label, the three-dimensional object model including at least a plurality of points. The method includes updating the three-dimensional scene model by replacing the subset of the plurality of points with the three-dimensional object model.
Type: Grant
Filed: July 20, 2022
Date of Patent: March 12, 2024
Assignee: APPLE INC.
Inventor: Payal Jotwani
-
Patent number: 11928781
Abstract: An initial mesh is received comprising a hand of a subject. The initial mesh includes a plurality of vertices. A smoothed mesh is generated, and a discrete curvature of the smoothed mesh is determined for each vertex. One or more candidate finger vertices are identified based upon a determination that the discrete curvature for each of the one or more candidate vertices is greater than or equal to a threshold curvature. One or more seed vertices are identified from among the one or more candidate finger vertices based upon a determination that the discrete curvature for one or more other vertices within a neighborhood of each seed vertex is greater than or equal to the threshold curvature. Dilation is performed on the one or more seed vertices to grow one or more patches from the one or more seed vertices. The one or more patches are deprioritized for mesh simplification.
Type: Grant
Filed: April 1, 2022
Date of Patent: March 12, 2024
Assignee: Microsoft Technology Licensing, LLC
Inventor: Deboshmita Ghosh
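A small sketch of the seed-and-dilate step: vertices whose (already computed) curvature exceeds a threshold are candidates, candidates whose neighbors are also high-curvature become seeds, and a breadth-first dilation grows patches from the seeds. The tiny adjacency graph and the "all neighbors" seed rule are illustrative assumptions.

```python
# Illustrative curvature-seeded patch growing on mesh vertices.
from collections import deque

def find_seeds(curvature, adjacency, threshold):
    candidates = {v for v, c in curvature.items() if c >= threshold}
    return {v for v in candidates
            if all(curvature[n] >= threshold for n in adjacency[v])}

def dilate(seeds, adjacency, rings=1):
    """Grow patches `rings` neighborhoods outward from the seed vertices (BFS)."""
    patch, frontier = set(seeds), deque((v, 0) for v in seeds)
    while frontier:
        v, depth = frontier.popleft()
        if depth == rings:
            continue
        for n in adjacency[v]:
            if n not in patch:
                patch.add(n)
                frontier.append((n, depth + 1))
    return patch   # vertices to deprioritize during mesh simplification

curvature = {0: 0.9, 1: 0.8, 2: 0.85, 3: 0.1, 4: 0.05}
adjacency = {0: [1, 2], 1: [0, 2], 2: [0, 1, 3], 3: [2, 4], 4: [3]}
seeds = find_seeds(curvature, adjacency, threshold=0.5)
print(seeds, dilate(seeds, adjacency))
```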
-
Method for positioning vertebra in CT image, apparatus, device, and computer readable storage medium
Patent number: 11928782
Abstract: The present disclosure provides a method of positioning vertebra in a CT image, an apparatus, a computer device, and a computer readable storage medium. The method includes: pre-processing vertebra CT image data; inputting the pre-processed vertebra CT image data into a pre-trained neural network to obtain regression results of heat maps of key points corresponding to the pre-processed vertebra CT image data; regressing 3D heat maps corresponding to the positions of the key points of the vertebra mass center based on the regression results of the heat maps of the key points and the pre-processed vertebra CT image data; serving 3D heat maps corresponding to the positions of the key points of the vertebra mass center as labels, and networked regressing 3D heat map information to position the vertebra. Effects caused by scanning machine difference and scanning noise are avoided, and the vertebra with complex forms is accurately positioned.
Type: Grant
Filed: October 30, 2020
Date of Patent: March 12, 2024
Assignee: PING AN TECHNOLOGY (SHENZHEN) CO., LTD.
Inventors: Chan Zeng, Ge Li, Guanju Cheng, Peng Gao, Guotong Xie
-
Patent number: 11928783
Abstract: Aspects of the present disclosure involve a system for presenting AR items. The system performs operations including receiving a video that includes a depiction of one or more real-world objects in a real-world environment and obtaining depth data related to the real-world environment. The operations include generating a three-dimensional (3D) model of the real-world environment based on the video and the depth data and adding an augmented reality (AR) item to the video based on the 3D model of the real-world environment. The operations include determining that the AR item has been placed on a vertical plane of the real-world environment and modifying an orientation of the AR item to correspond to an orientation of the vertical plane.
Type: Grant
Filed: December 30, 2021
Date of Patent: March 12, 2024
Assignee: Snap Inc.
Inventors: Avihay Assouline, Itamar Berger, Gal Dudovitch, Peleg Harel, Gal Sasson
-
Patent number: 11928784
Abstract: Examples of the disclosure describe systems and methods for sharing perspective views of virtual content. In an example method, a virtual object is presented, via a display, to a first user. A first perspective view of the virtual object is determined, wherein the first perspective view is based on a position of the virtual object and a position of the first user. The virtual object is presented, via a display, to a second user, wherein the virtual object is presented to the second user according to the first perspective view. A second perspective view of the virtual object is determined, wherein the second perspective view is based on an input from the first user. The virtual object is presented, via a display, to the second user, wherein presenting the virtual object to the second user comprises presenting a transition from the first perspective view to the second perspective view.
Type: Grant
Filed: January 9, 2023
Date of Patent: March 12, 2024
Assignee: Magic Leap, Inc.
Inventor: Marc Alan McCall
-
Patent number: 11928785
Abstract: Techniques (e.g., systems, apparatus, methods) for context-based management of tokens are described. In an example, a geographical location of a device is used as one possible context. This location can correspond to a physical location associated with an AR virtual object container. This container can be associated with a set of virtual object container information applicable to the context, such as to the device's location. Based on separately maintained virtual object information, virtual objects to be shown as being available from the container are determined. Each of such virtual objects can be associated with a set of tokens. In an AR session, the container and the virtual objects are presented. An interaction with the container or a virtual object can result in associating a relevant set of tokens with a user account by recording information about the container, the virtual object, the user account, and/or the context(s).
Type: Grant
Filed: September 13, 2023
Date of Patent: March 12, 2024
Assignee: Nant Holdings IP, LLC
Inventors: Nicholas J. Witchey, John Wiacek, Jake Fyfe, Patrick Soon-Shiong
-
Patent number: 11928786
Abstract: A three-dimensional shape data editing apparatus includes a processor configured to set, based on three-dimensional shape data of a surface of a three-dimensional shape of an object configured by using a formation surface of at least one of plural flat surfaces or curved surfaces, for each of plural divided three-dimensional regions, a distance from a predetermined location of the region to the formation surface of the three-dimensional shape of the object configured by the formation surface.
Type: Grant
Filed: January 21, 2020
Date of Patent: March 12, 2024
Assignee: FUJIFILM Business Innovation Corp.
Inventor: Tomonari Takahashi
-
Patent number: 11928787
Abstract: Systems, apparatuses and methods may provide for technology that estimates poses of a plurality of input images, reconstructs a proxy three-dimensional (3D) geometry based on the estimated poses and the plurality of input images, detects a user selection of a virtual viewpoint, encodes, via a first neural network, the plurality of input images with feature maps, warps the feature maps of the encoded plurality of input images based on the virtual viewpoint and the proxy 3D geometry, and blends, via a second neural network, the warped feature maps into a single image, wherein the first neural network is a deep convolutional network and the second neural network is a recurrent convolutional network.
Type: Grant
Filed: September 22, 2020
Date of Patent: March 12, 2024
Assignee: Intel Corporation
Inventors: Gernot Riegler, Vladlen Koltun
-
Patent number: 11928788
Abstract: Technologies are provided for automated configuration of hanger placement within a model layout in a computer-aided design (CAD) application. The technologies rely, in some embodiments, on an initial selection of pipe elements and a traversal direction. Candidate positions for placement of hangers are then generated. At each candidate position, a set of pipes essentially parallel to one another can be identified. When the set of pipes includes two or more pipes, the candidate position is deemed to be satisfactory and placement of a hanger at the candidate position can be configured. The configuration includes configuration of a size of a hanger bearer relative to the pipe elements present in such a set. The configuration also includes configuration of an elevation of the hanger. A termination rule can be utilized to terminate the generation of candidate positions and associated configuration of hanger placement.
Type: Grant
Filed: June 4, 2021
Date of Patent: March 12, 2024
Assignee: EVOLVE MEP, LLC
Inventor: Xiao Chun Yao
-
Patent number: 11928789
Abstract: A method of processing an image within a vehicle interior with a camera includes acquiring a live image of the vehicle interior. The live image is compared to an ideally aligned image of the vehicle interior having an associated region of interest to generate a homography matrix. The homography matrix is applied to the region of interest to generate a calibrated region of interest projected onto the live image for detecting an object therein.
Type: Grant
Filed: December 19, 2019
Date of Patent: March 12, 2024
Assignee: ZF FRIEDRICHSHAFEN AG
Inventors: Venkateswara Adusumalli, Robert Berg, Santanu Panja, James Oldiges, Sriharsha Yeluri, Radha Sivaraman
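A short sketch of the final step the abstract describes: projecting the region-of-interest corners from the ideally aligned image into the live image through a 3x3 homography. The matrix and ROI coordinates below are made up for illustration; in practice the homography would come from comparing the live and reference interior images.

```python
# Illustrative projection of an ROI through a homography (made-up matrix and ROI).
import numpy as np

def warp_points(H, points):
    """Apply a 3x3 homography to (N, 2) points using homogeneous coordinates."""
    pts = np.hstack([points, np.ones((len(points), 1))])   # -> (N, 3)
    mapped = (H @ pts.T).T
    return mapped[:, :2] / mapped[:, 2:3]                  # divide by w

# ROI around, say, a seat area in the ideally aligned image (pixel coordinates).
roi = np.array([[100, 80], [300, 80], [300, 240], [100, 240]], dtype=float)

H = np.array([[1.02, 0.01, -6.0],    # small rotation/scale plus translation
              [-0.01, 1.03, 4.0],
              [1e-5, 0.0, 1.0]])

calibrated_roi = warp_points(H, roi)
print(np.round(calibrated_roi, 1))   # ROI corners projected onto the live image
```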
-
Patent number: 11928790
Abstract: An object included in a low-resolution image can be recognized with a high degree of precision. An acquisition unit acquires, from a query image, an increased-resolution image, which is acquired by increasing the resolution of the query image, by performing pre-learned acquisition processing for increasing the resolution of an image. A feature extraction unit, using the increased-resolution image as input, extracts a feature vector of the increased-resolution image by performing pre-learned extraction processing for extracting a feature vector of an image. A recognition unit recognizes an object captured on the increased-resolution image on the basis of the feature vector of the increased-resolution image and outputs the recognized object as the object captured on the query image.
Type: Grant
Filed: August 8, 2019
Date of Patent: March 12, 2024
Assignee: NIPPON TELEGRAPH AND TELEPHONE CORPORATION
Inventors: Yukito Watanabe, Jun Shimamura, Atsushi Sagata
-
Patent number: 11928792
Abstract: Disclosed is a fusion network-based method for image super-resolution and non-uniform motion deblurring. The method achieves, for the first time, restoration of a low-resolution non-uniform motion-blurred image based on a deep neural network. The network uses two branch modules to respectively extract features for image super-resolution and non-uniform motion deblurring, and achieves, by means of a feature fusion module that is trainable, adaptive fusion of outputs of the two branch modules for extracting features. Finally, an upsampling reconstruction module achieves a non-uniform motion deblurring and super-resolution task. According to the method, a self-generated set of training data is configured to perform offline training on a network, thereby achieving restoration of the low-resolution non-uniform motion-blurred image.
Type: Grant
Filed: January 15, 2021
Date of Patent: March 12, 2024
Assignee: XI'AN JIAOTONG UNIVERSITY
Inventors: Fei Wang, Xinyi Zhang, Hang Dong, Kanglong Zhang, Zhao Wei
-
Patent number: 11928793
Abstract: A video quality assessment apparatus and method are provided. The video quality assessment apparatus includes a memory storing one or more instructions; and a processor configured to execute the one or more instructions stored in the memory to: identify whether a frame included in a video is a fully-blurred frame or a partially-blurred frame based on a blur level of the frame, obtain, in response to the frame being the fully-blurred frame, an analysis-based quality score with respect to the fully-blurred frame; obtain, in response to the frame being the partially-blurred frame, a model-based quality score with respect to the partially-blurred frame; and process the video based on at least one of the analysis-based quality score or the model-based quality score to obtain a processed video.
Type: Grant
Filed: June 22, 2021
Date of Patent: March 12, 2024
Assignee: SAMSUNG ELECTRONICS CO., LTD.
Inventors: Anant Baijal, Hoshin Son, Eunae Cho, Sangshin Park, Seungwon Cha
-
Patent number: 11928794
Abstract: An image processing device includes a first processing unit which executes blurring processing in a common focus position on each of plural images in mutually different focus positions, an integration unit which generates an integrated image resulting from integration of the plural images on which the blurring processing is executed, and a second processing unit which generates a composite image having a predetermined blur degree by executing sharpening processing on the integrated image generated by the integration unit based on information in which optical information indicating optical characteristics of an optical system in which the plural images are acquired and a blur degree of the blurring processing in the common focus position are composited together along a depth direction of the plural images.
Type: Grant
Filed: June 21, 2021
Date of Patent: March 12, 2024
Assignee: NIKON CORPORATION
Inventor: Tsuneyuki Hagiwara
-
Patent number: 11928795
Abstract: This disclosure describes methods, apparatuses, and techniques for capturing a fingerprint image using an electronic device with an under-display fingerprint sensor (UDFPS) embedded under a display screen of a display system. The display system utilizes a pulse-width modulation circuit to generate a pulse-width modulated (PWM) signal to control light emitted by the display screen. As the display screen illuminates a user's touch, the UDFPS captures light reflected off the user's touch, thereby capturing the fingerprint image. The captured fingerprint image, however, includes PWM noise. The electronic device uses a noise-filtering algorithm to filter out and/or reduce the PWM noise in the captured fingerprint image. In one aspect, the noise-filtering algorithm estimates and/or determines the PWM noise in the captured fingerprint image. The noise-filtering algorithm then reduces, extracts, and/or filters out the PWM noise from the captured fingerprint image.
Type: Grant
Filed: October 29, 2021
Date of Patent: March 12, 2024
Assignee: Google LLC
Inventors: Firas Sammoura, Omar Sze Leung
-
Patent number: 11928796
Abstract: Encoding can involve correcting chroma components representing the chroma of an input image according to a first component representing a mapping of the luminance component of said input image used for reducing or increasing the dynamic range of said luminance component, and a reconstructed component representing an inverse mapping of said first component. At least one correction factor according to said at least one scaled chroma components can also be obtained and transmitted. Decoding can involve scaled chroma components being obtained by multiplying chroma components of an image by at least one corrected chroma correction function depending on said at least one correction factor. Components of a reconstructed image can then be derived as a function of said scaled chroma components and a corrected matrix that depends on an inverse of a theoretical color space matrix conversion and said at least one correction factor.
Type: Grant
Filed: November 30, 2018
Date of Patent: March 12, 2024
Assignee: InterDigital Patent Holdings, Inc.
Inventors: Marie-Jean Colaitis, David Touze, Nicolas Caramelli
-
Patent number: 11928797
Abstract: Disclosed are an electronic device, and a method for controlling same. Particularly, the present disclosure relates to an electronic device, and a method for controlling same, which can secure high visibility of a subject by acquiring, from an original image, a plurality of sub images having a smaller bit number than the original image, and acquiring a synthesized image on the basis of information about the shapes of objects included in the plurality of sub images.
Type: Grant
Filed: October 11, 2019
Date of Patent: March 12, 2024
Assignee: Samsung Electronics Co., Ltd.
Inventors: Taehee Lee, Jihye Kim, Sahng-Gyu Park, Seunghoon Han
-
Patent number: 11928798
Abstract: An image processing apparatus includes a correction unit, a first detection unit, a second detection unit, and a merging unit. The correction unit is configured to perform correction of a misalignment between a plurality of images captured at exposures different from each other. The first detection unit is configured to detect an inappropriate area from the plurality of images. The second detection unit is configured to detect a feature point for the correction by preferentially using an area other than the inappropriate area of the plurality of images. The merging unit is configured to merge the plurality of images on which the correction has been performed.
Type: Grant
Filed: August 31, 2020
Date of Patent: March 12, 2024
Assignee: CANON KABUSHIKI KAISHA
Inventor: Keisuke Yanagisawa
-
Patent number: 11928799
Abstract: An electronic device includes a plurality of cameras, and at least one processor connected to the plurality of cameras. The at least one processor is configured to, based on a first user command to obtain a live view image, segment an image frame obtained via a camera among the plurality of cameras into a plurality of regions based on a brightness of pixels and an object included in the image frame; obtain a plurality of camera parameter setting value sets, each including a plurality of parameter values with respect to the plurality of regions; based on a second user command to capture the live view image, obtain a plurality of image frames using the plurality of camera parameter setting value sets and at least one camera among the plurality of cameras; and obtain an image frame by merging the plurality of obtained image frames.
Type: Grant
Filed: June 4, 2021
Date of Patent: March 12, 2024
Assignee: SAMSUNG ELECTRONICS CO., LTD.
Inventors: Ashish Chopra, Bapi Reddy Karri
-
Patent number: 11928800
Abstract: An image coordinate system transformation method includes obtaining video images acquired by adjacent cameras, the adjacent cameras including a first camera and a second camera that have an overlapping photography region on a ground plane, recognizing N groups of key points of a target object on the ground plane from the video images acquired by the adjacent cameras, each group of key points including a first key point extracted from a video image of the first camera and a second key point extracted from a video image of the second camera, the first key point and the second key point being the same feature point of the same target object appearing in the adjacent cameras at the same moment, and N being an integer greater than or equal to 3, and calculating a transformation relationship between image coordinate systems of the adjacent cameras according to the N groups of key points.
Type: Grant
Filed: July 12, 2021
Date of Patent: March 12, 2024
Assignee: TENCENT TECHNOLOGY (SHENZHEN) COMPANY LIMITED
Inventor: Xiangqi Huang
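A sketch of the fitting step implied by "N groups of key points" with N ≥ 3: a least-squares affine model maps the first camera's image coordinates to the second camera's. The patented method may use a different transformation model (for instance a ground-plane homography); the point correspondences below are made up.

```python
# Illustrative least-squares affine fit between two cameras' image coordinate systems.
import numpy as np

def fit_affine(src_pts, dst_pts):
    """Solve dst ~= A @ [x, y, 1] for a 2x3 affine matrix A by least squares (N >= 3)."""
    src = np.hstack([src_pts, np.ones((len(src_pts), 1))])   # (N, 3)
    A, *_ = np.linalg.lstsq(src, dst_pts, rcond=None)        # (3, 2)
    return A.T                                               # (2, 3)

# Same feature point of the same person seen by camera 1 and camera 2 at the same moments.
cam1 = np.array([[100.0, 200.0], [150.0, 260.0], [210.0, 310.0], [260.0, 380.0]])
cam2 = np.array([[400.0, 180.0], [455.0, 235.0], [520.0, 282.0], [575.0, 348.0]])

A = fit_affine(cam1, cam2)
test = np.append(cam1[0], 1.0)
print(A @ test)   # should land near cam2[0]
```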
-
Patent number: 11928801
Abstract: With respect to a charged particle beam apparatus, provided is a technology capable of preventing a deterioration in image quality of a captured image. The charged particle beam apparatus includes an imaging device, which irradiates a sample with a charged particle beam and forms an image from information of the sample, and a computer. The computer stores each of the images (scanned images) obtained by scanning the same area multiple times, classifies each image into an image including a deteriorated image and an image not including the deteriorated image, and stores a target image obtained by performing image integration from the image not including the deteriorated image. The charged particle beam apparatus includes a database that stores data such as information obtained from an imaging device including the scanned image, classification results, and the target image.
Type: Grant
Filed: April 18, 2019
Date of Patent: March 12, 2024
Assignee: Hitachi High-Tech Corporation
Inventor: Ryo Komatsuzaki
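A toy sketch of the classify-then-integrate step: repeated scans of the same area are compared against their median frame, scans that deviate strongly (standing in for "deteriorated" images, e.g. drift or charging artifacts) are excluded, and the remainder is averaged into the target image. The deviation test and threshold are assumptions, not the patented classifier.

```python
# Illustrative classification of repeated scans and integration of the clean ones.
import numpy as np

def integrate_scans(scans, max_mad=10.0):
    """Average the scans whose mean absolute difference from the median frame is small."""
    stack = np.stack(scans).astype(np.float32)
    median_frame = np.median(stack, axis=0)
    mad = np.abs(stack - median_frame).mean(axis=(1, 2))
    keep = mad <= max_mad
    return stack[keep].mean(axis=0), keep     # target image + per-scan classification

rng = np.random.default_rng(0)
clean = [rng.normal(100, 2, (8, 8)) for _ in range(4)]
deteriorated = [rng.normal(160, 2, (8, 8))]               # simulated drift/charging artifact
target, keep = integrate_scans(clean + deteriorated)
print(keep)                                  # last scan classified as deteriorated
print(round(float(target.mean()), 1))
```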
-
Patent number: 11928802
Abstract: Provided are an apparatus for acquiring a depth image, a method for fusing depth images, and a terminal device. The apparatus for acquiring a depth image includes an emitting module, a receiving module, and a processing unit. The emitting module is configured to emit a speckle array to an object, where the speckle array includes p mutually spaced apart speckles. The receiving module includes an image sensor. The processing unit is configured to receive the pixel signal and generate a sparse depth image based on the pixel signal, align an RGB image at a resolution of a*b with the sparse depth image, and fuse the aligned sparse depth image with the RGB image using a pre-trained image fusion model to obtain a dense depth image at a resolution of a*b.
Type: Grant
Filed: July 8, 2022
Date of Patent: March 12, 2024
Assignee: SHENZHEN GOODIX TECHNOLOGY CO., LTD.
Inventor: Xiage Qin