Patents Examined by Abderrahim Merouan
-
Patent number: 12154231
Abstract: Certain aspects of the present disclosure present a method of graphics processing at a wearable display device. The method generally includes rendering a first image based on a position of the wearable display device and contours and depth information for one or more real-world objects associated with the position, the rendered first image comprising, at least, one or more virtual objects; re-projecting the first image based on an updated position of the wearable display device that is different than the position; rendering a second image using re-projected contours of the one or more real-world objects in the first image, updated depth information for the one or more real-world objects in the first image, updated depth information for the one or more virtual objects in the first image, and warped one or more virtual objects; and displaying the second image on a display of the wearable display device.
Type: Grant
Filed: July 20, 2022
Date of Patent: November 26, 2024
Assignee: QUALCOMM Incorporated
Inventors: Sudipto Banerjee, Vinay Melkote Krishnaprasad
-
Patent number: 12124318
Abstract: A multiple graphics processing unit (GPU) based parallel graphics system comprising multiple graphics processing pipelines with multiple GPUs supporting a parallel graphics rendering process having an object division mode of operation. Each GPU comprises video memory, a geometry processing subsystem and a pixel processing subsystem. According to the principles of the present invention, pixel (color and z depth) data buffered in the video memory of each GPU is communicated to the video memory of a primary GPU, and the video memory and the pixel processing subsystem in the primary GPU are used to carry out the image recomposition process, without the need for dedicated or specialized apparatus.
Type: Grant
Filed: June 9, 2023
Date of Patent: October 22, 2024
Assignee: Google LLC
Inventor: Reuven Bakalash
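As a toy illustration of the depth-based recomposition this abstract describes, the sketch below merges per-GPU (color, depth) buffers by keeping the nearest fragment for each pixel. The function name and flat buffer layout are illustrative assumptions, not taken from the patent.

```python
def composite(buffers):
    """Merge per-GPU (color, z-depth) buffers into one image.

    buffers: list of same-length lists of (color, z) pairs, one list per GPU.
    For each pixel, the fragment with the smallest z (closest surface) wins,
    as in z-buffer based image recomposition.
    """
    width = len(buffers[0])
    out = []
    for px in range(width):
        # Gather this pixel's candidates from every GPU, keep the nearest one.
        color, _z = min((buf[px] for buf in buffers), key=lambda cz: cz[1])
        out.append(color)
    return out
```

For example, compositing two one-row buffers picks the closer fragment per pixel regardless of which GPU rendered it.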
-
Patent number: 12106420
Abstract: An operating method of a graphics processing unit includes: receiving a first read request for texels; detecting whether decompression data associated with each of the texels is present in a first cache; decompressing part of a first texture compression block associated with a first texel among the texels, when a result of the detecting indicates that decompression data for the first texel is not present in the first cache, to generate first decompression data; and generating first texture data corresponding to the first read request based on the first decompression data and second decompression data present in the first cache.
Type: Grant
Filed: March 10, 2022
Date of Patent: October 1, 2024
Assignee: SAMSUNG ELECTRONICS CO., LTD.
Inventors: Sangoak Woo, Jeongae Park
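The flow described here, which serves a texel read from a decompression cache and decompresses only the blocks whose texels miss, can be sketched as below. All names (`decompress_block`, `TEXELS_PER_BLOCK`) and the dict-based cache are illustrative assumptions, not the patented design.

```python
TEXELS_PER_BLOCK = 4  # assumed block granularity for the sketch

def decompress_block(block_id, compressed):
    # Stand-in for real texture-block decompression (e.g. an ASTC/ETC block):
    # derive each texel's value from the block's compressed base value.
    base = compressed[block_id]
    return [base + i for i in range(TEXELS_PER_BLOCK)]

def read_texels(texel_ids, compressed, cache):
    """Serve a read request, decompressing only blocks whose texels miss the cache."""
    out = []
    for t in texel_ids:
        if t not in cache:  # miss: decompress the owning block into the cache
            block_id = t // TEXELS_PER_BLOCK
            for i, v in enumerate(decompress_block(block_id, compressed)):
                cache[block_id * TEXELS_PER_BLOCK + i] = v
        out.append(cache[t])  # hit (possibly just filled)
    return out
```

A second request for a neighboring texel of an already-decompressed block is then served from the cache without touching the compressed data again.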
-
Patent number: 12056788
Abstract: An apparatus to facilitate compute optimization is disclosed. The apparatus includes a mixed precision core including mixed-precision execution circuitry to execute one or more mixed-precision instructions to perform a mixed-precision dot-product operation comprising a set of multiply and accumulate operations.
Type: Grant
Filed: March 1, 2022
Date of Patent: August 6, 2024
Assignee: Intel Corporation
Inventors: Abhishek R. Appu, Altug Koker, Linda L. Hurd, Dukhwan Kim, Mike B. Macpherson, John C. Weast, Feng Chen, Farshad Akhbari, Narayan Srinivasa, Nadathur Rajagopalan Satish, Joydeep Ray, Ping T. Tang, Michael S. Strickland, Xiaoming Chen, Anbang Yao, Tatiana Shpeisman
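The essence of a mixed-precision dot product is multiplying operands in a narrow format while accumulating the products in a wider one. A minimal software sketch follows; the choice of half precision for the operands and a Python float (double) for the accumulator is an assumption for illustration, since the patent concerns dedicated execution circuitry.

```python
import struct

def to_fp16(x):
    """Round a float to half precision, mimicking the narrow operand format."""
    return struct.unpack("<e", struct.pack("<e", x))[0]

def mixed_precision_dot(a, b):
    """One multiply-accumulate per element pair: multiply low-precision
    operands, accumulate in a wide-precision register."""
    acc = 0.0  # wide accumulator
    for x, y in zip(a, b):
        acc += to_fp16(x) * to_fp16(y)  # widen after low-precision rounding
    return acc
```

The wide accumulator is the point of the technique: it avoids the rounding error that would build up if the running sum were also kept in half precision.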
-
Patent number: 12056824
Abstract: A method for simulating a solid body animation of a subject includes retrieving a first frame that includes a body image of a subject. The method also includes selecting, from the first frame, multiple key points within the body image of the subject that define a hull of a body part and multiple joint points that define a joint between two body parts; identifying a geometry, a speed, and a mass of the body part to include in a dynamic model of the subject, based on the key points and the joint points; determining, based on the dynamic model of the subject, a pose of the subject in a second frame after the first frame in a video stream; and providing the video stream to an immersive reality application running on a client device.
Type: Grant
Filed: December 20, 2021
Date of Patent: August 6, 2024
Assignee: Meta Platforms Technologies, LLC
Inventors: Jason Saragih, Shih-En Wei, Tomas Simon Kreuz, Kris Makoto Kitani, Ye Yuan
-
Patent number: 12039435
Abstract: An apparatus to facilitate acceleration of machine learning operations is disclosed. The apparatus comprises at least one processor to perform operations to implement a neural network, and accelerator logic communicatively coupled to the processor to perform compute operations for the neural network.
Type: Grant
Filed: June 21, 2022
Date of Patent: July 16, 2024
Assignee: INTEL CORPORATION
Inventors: Amit Bleiweiss, Anavai Ramesh, Asit Mishra, Deborah Marr, Jeffrey Cook, Srinivas Sridharan, Eriko Nurvitadhi, Elmoustapha Ould-Ahmed-Vall, Dheevatsa Mudigere, Mohammad Ashraf Bhuiyan, Md Faijul Amin, Wei Wang, Dhawal Srivastava, Niharika Maheshwari
-
Patent number: 12041389
Abstract: A method for generating a texture map used during a video conference. The method may include obtaining multiple texture maps of multiple areas of at least a portion of a three-dimensional (3D) object, in which the multiple texture maps comprise a first texture map of a first area and of a first resolution, and a second texture map of a second area and of a second resolution, where the first area differs from the second area and the first resolution differs from the second resolution; generating, based on the multiple texture maps, a texture map of the at least portion of the 3D object; and utilizing a visual representation of the at least portion of the 3D object, based on the texture map, during the video conference.
Type: Grant
Filed: July 4, 2022
Date of Patent: July 16, 2024
Assignee: TRUE MEETING INC.
Inventors: Ran Oz, Yuval Gronau, Michael Rabinovich, Osnat Goren-Peyser, Tal Perl
-
Patent number: 12033251
Abstract: Embodiments are disclosed for creating and managing semantic layers in a graphic design system. A method of creating and managing semantic layers includes receiving a selection of a content type to be generated; receiving a selection of a location in a digital canvas to place content of the content type; generating, using one or more machine learning models, content of the selected content type at the location in the digital canvas; and automatically adding the content to a layer associated with the digital canvas based on a semantic label associated with the content.
Type: Grant
Filed: January 27, 2022
Date of Patent: July 9, 2024
Assignee: Adobe Inc.
Inventors: Gregory Cy Muscolino, Christian Cantrell, Archie Samuel Bagnall, Christopher James Gammon, Patrick James Hebron
-
Patent number: 12033279
Abstract: A map having surfaces that are depicted at different levels that are not related to topography, with boundaries between the surfaces, where the boundaries are disposed at travel ways. The travel ways form cliff faces in the map between the surfaces, with information items disposed on the cliff faces at positions corresponding to items of interest at locations along the travel ways.
Type: Grant
Filed: December 20, 2022
Date of Patent: July 9, 2024
Assignee: Knowroads, LLC
Inventor: Felix Ross Gaiter
-
Patent number: 12008702
Abstract: A configuration is provided that causes an agent, such as a character in a virtual world or a robot in the real world, to perform actions by imitating the actions of a human. An environment map including type and layout information about objects in the real world is generated; actions of a person acting in the real world are analyzed; time/action/environment map correspondence data including the environment map and time-series data of action analysis data is generated; a learning process using the time/action/environment map correspondence data is performed; an action model having the environment map as an input value and a result of action estimation as an output value is generated; and action control data for a character in a virtual world or a robot is generated with the use of the action model. For example, an agent is made to perform an action by imitating an action of a human.
Type: Grant
Filed: March 4, 2020
Date of Patent: June 11, 2024
Assignee: SONY GROUP CORPORATION
Inventors: Takashi Seno, Yohsuke Kaji, Tomoya Ishikawa, Gaku Narita
-
Patent number: 11995477
Abstract: A method for evaluating an updated analytical procedure in a monitoring system comprising a plurality of monitoring devices arranged to monitor similar environments is provided. The method comprises identifying available processing resources; selecting a first monitoring device for which available processing resources have been identified; selecting a second monitoring device; acquiring monitoring data by the second monitoring device; and performing a current analytical procedure on the monitoring data. The method further comprises sending the monitoring data to the first monitoring device; performing, in the first monitoring device, an updated analytical procedure on the monitoring data; and evaluating the updated analytical procedure based on the outcomes of the current analytical procedure and the updated analytical procedure.
Type: Grant
Filed: July 29, 2022
Date of Patent: May 28, 2024
Assignee: Axis AB
Inventors: Axel Keskikangas, Georgios Efstathiou
-
Patent number: 11989843
Abstract: A method for programming a robotic system by demonstration is described. In one aspect, the method includes displaying a first virtual object in a display of an augmented reality (AR) device, the first virtual object corresponding to a first physical object in a physical environment of the AR device; tracking, using the AR device, a manipulation of the first virtual object by a user of the AR device; identifying an initial state and a final state of the first virtual object based on the tracking, the initial state corresponding to an initial pose of the first virtual object, the final state corresponding to a final pose of the first virtual object; and programming by demonstration a robotic system using the tracking of the manipulation of the first virtual object, the initial state of the first virtual object, and the final state of the first virtual object.
Type: Grant
Filed: June 22, 2022
Date of Patent: May 21, 2024
Assignee: Snap Inc.
Inventors: Kai Zhou, Adrian Schoisengeier
-
Patent number: 11978143
Abstract: The present disclosure describes techniques for creating videos using virtual characters. Creation of a video may be initiated by a user. Camera input comprising a human body of the user may be received. The camera input may be split into a first stream for removing the human body and a second stream for animating a virtual character in the video. An inpainting filter may be applied to remove the human body in real time from the camera input. The inpainting filter may be configured to accelerate texture sampling. Output of the inpainting filter may be blended with images comprised in the camera input to generate camera input backgrounds.
Type: Grant
Filed: May 23, 2022
Date of Patent: May 7, 2024
Assignee: LEMON INC.
Inventors: Zeng Dai, Yunzhu Li, Nite Luo
-
Patent number: 11978159
Abstract: A cross reality system that provides an immersive user experience by storing persistent spatial information about the physical world that one or multiple user devices can access to determine position within the physical world and that applications can access to specify the position of virtual objects within the physical world. Persistent spatial information enables users to have a shared virtual, as well as physical, experience when interacting with the cross reality system. Further, persistent spatial information may be used in maps of the physical world, enabling one or multiple devices to access and localize into previously stored maps, reducing the need to map a physical space before using the cross reality system in it. Persistent spatial information may be stored as persistent coordinate frames, which may include a transformation relative to a reference orientation and information derived from images in a location corresponding to the persistent coordinate frame.
Type: Grant
Filed: December 3, 2021
Date of Patent: May 7, 2024
Assignee: Magic Leap, Inc.
Inventors: Anush Mohan, Rafael Domingos Torres, Daniel Olshansky, Samuel A. Miller, Jehangir Tajik, Joel David Holder, Jeremy Dwayne Miranda, Robert Blake Taylor, Ashwin Swaminathan, Lomesh Agarwal, Hiral Honar Barot, Helder Toshiro Suzuki, Ali Shahrokni, Eran Guendelman, Prateek Singhal, Xuan Zhao, Siddharth Choudhary, Nicholas Atkinson Kramer, Kenneth William Tossell, Christian Ivan Robert Moore
-
Patent number: 11954063
Abstract: Described herein is a graphics processing unit (GPU) configured to receive an instruction having multiple operands, where the instruction is a single instruction multiple data (SIMD) instruction configured to use a bfloat16 (BF16) number format and the BF16 number format is a sixteen-bit floating point format having an eight-bit exponent. The GPU can process the instruction using the multiple operands, where to process the instruction includes to perform a multiply operation, perform an addition to a result of the multiply operation, and apply a rectified linear unit function to a result of the addition.
Type: Grant
Filed: February 17, 2023
Date of Patent: April 9, 2024
Assignee: Intel Corporation
Inventors: Subramaniam Maiyuran, Shubra Marwaha, Ashutosh Garg, Supratim Pal, Jorge Parra, Chandra Gurram, Varghese George, Darin Starkey, Guei-Yuan Lueh
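BF16 keeps float32's eight-bit exponent and truncates the mantissa to seven bits, so a float32 can be reduced to BF16 by simply dropping its low 16 bits. A software sketch of that conversion, plus the multiply-add-ReLU sequence the abstract describes, is below; the function names and round-by-truncation choice are illustrative assumptions, not the GPU's actual behavior.

```python
import struct

def to_bf16_bits(x):
    """Truncate a float32 to bfloat16: keep the sign bit, the 8-bit exponent,
    and the top 7 mantissa bits (the high 16 bits of the float32 encoding)."""
    (bits,) = struct.unpack("<I", struct.pack("<f", x))
    return bits >> 16

def bf16_to_float(bits16):
    """Expand bfloat16 bits back to float32 by zero-padding the low 16 bits."""
    (x,) = struct.unpack("<f", struct.pack("<I", bits16 << 16))
    return x

def fma_relu(a, b, c):
    """The instruction pattern described: multiply, add, then ReLU,
    with operands first rounded to BF16."""
    bf = lambda v: bf16_to_float(to_bf16_bits(v))
    result = bf(a) * bf(b) + bf(c)
    return max(result, 0.0)  # rectified linear unit
```

Because the exponent width matches float32, BF16 trades precision for range, which is why it suits neural-network workloads where dynamic range matters more than mantissa bits.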
-
Patent number: 11941846
Abstract: A method for ascertaining the pose of an object. The method includes recording a first and a second camera image of the object, ascertaining a correspondence between camera pixels of the camera images and vertices of a 3D model of the object, and ascertaining the pose of the object from a set of poses by minimizing, across the set of poses, a loss function, the loss function for a pose being provided by accumulation of distance measures between projections of the object in the pose onto the respective camera image plane and the corresponding pixels of the respective camera image.
Type: Grant
Filed: February 22, 2022
Date of Patent: March 26, 2024
Assignee: ROBERT BOSCH GMBH
Inventor: Markus Spies
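The loss structure described, accumulated distances between projected model vertices and their corresponding camera pixels, minimized over a candidate pose set, can be sketched as follows. The toy 2D rotation-only "pose", the orthographic projection, and all names are illustrative assumptions, not the patented method.

```python
import math

def project(point3d, pose_angle):
    """Toy projection: rotate about the z axis by pose_angle, drop z."""
    x, y, _ = point3d
    c, s = math.cos(pose_angle), math.sin(pose_angle)
    return (c * x - s * y, s * x + c * y)

def pose_loss(pose_angle, vertices, observed_px):
    """Accumulate squared distances between projected model vertices
    and the corresponding observed camera pixels."""
    total = 0.0
    for v, (qx, qy) in zip(vertices, observed_px):
        px, py = project(v, pose_angle)
        total += (px - qx) ** 2 + (py - qy) ** 2
    return total

def best_pose(pose_set, vertices, observed_px):
    """Pick the candidate pose minimizing the loss across the set."""
    return min(pose_set, key=lambda p: pose_loss(p, vertices, observed_px))
```

In the patented setting there are two camera images, so the accumulated loss would sum over both image planes; the sketch keeps a single view for brevity.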
-
Patent number: 11935293
Abstract: A method for providing device support to a user using an augmented reality (AR) interface includes establishing, by a computer system, a real-time communication session streaming audio and video between (i) a local application executing on a local user device and (ii) a remote application executing on a remote user device. The method further includes transmitting, via the real-time communication session, images depicting equipment captured by the local application. Annotations to the images are received from the remote application. Each annotation is anchored to user-selected positions on the equipment. The annotations are presented on the remote user device overlaid on the equipment at the user-selected positions. The real-time communication session is used to transmit audio or video instructional information related to the equipment and referencing the annotations. The computing system stores a new session record that includes the images, the annotations, and the audio or video instructional information.
Type: Grant
Filed: June 9, 2022
Date of Patent: March 19, 2024
Assignee: CareAR Holdings LLC
Inventors: Samuel Waicberg, Chetan Gandhi
-
Patent number: 11922315
Abstract: Solutions for adapting machine learning (ML) models to neural networks (NNs) include receiving an ML pipeline comprising a plurality of operators; determining operator dependencies within the ML pipeline; determining recognized operators; for each of at least two recognized operators, selecting a corresponding NN module from a translation dictionary; and wiring the selected NN modules in accordance with the operator dependencies to generate a translated NN. Some examples determine a starting operator for translation, which is the earliest recognized operator having parameters. Some examples connect inputs of the translated NN to upstream operators of the ML pipeline that had not been translated. Some examples further tune the translated NN using backpropagation. Some examples determine whether an operator is trainable or non-trainable and flag related parameters accordingly for later training.
Type: Grant
Filed: August 26, 2019
Date of Patent: March 5, 2024
Assignee: Microsoft Technology Licensing, LLC
Inventors: Matteo Interlandi, Byung-Gon Chun, Markus Weimer, Gyeongin Yu, Saeed Amizadeh
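The translate-and-wire steps above can be sketched with plain dictionaries: look up each recognized operator in a translation dictionary, wire the resulting modules using the pipeline's dependency graph, and leave unrecognized operators as untranslated inputs. The data shapes and operator/module names here are illustrative assumptions, not the patent's representation.

```python
def translate_pipeline(operators, deps, dictionary):
    """Map recognized ML operators to NN modules and wire them per dependencies.

    operators:  {operator_name: operator_type}
    deps:       {operator_name: [upstream operator names]}
    dictionary: {operator_type: nn_module_name}  (the translation dictionary)
    Returns (modules, wiring, untranslated).
    """
    modules, wiring, untranslated = {}, {}, []
    for name, op_type in operators.items():
        if op_type in dictionary:          # recognized: swap in the NN module
            modules[name] = dictionary[op_type]
        else:                              # unrecognized: stays an upstream input
            untranslated.append(name)
    for name in modules:
        # Preserve the original operator dependencies as module wiring.
        wiring[name] = list(deps.get(name, []))
    return modules, wiring, untranslated
```

Untranslated upstream operators would then feed the translated NN's inputs, matching the variant the abstract mentions.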
-
Systems and methods for modeling structures using point clouds derived from stereoscopic image pairs
Patent number: 11915368
Abstract: A system for modeling a roof structure comprising an aerial imagery database and a processor in communication with the aerial imagery database. The aerial imagery database stores a plurality of stereoscopic image pairs and the processor selects at least one stereoscopic image pair among the plurality of stereoscopic image pairs and related metadata from the aerial imagery database based on a geospatial region of interest. The processor identifies a target image and a reference image from the at least one stereoscopic pair and calculates a disparity value for each pixel of the identified target image to generate a disparity map. The processor generates a three-dimensional point cloud based on the disparity map, the identified target image and the identified reference image. The processor optionally generates a texture map indicative of a three-dimensional representation of the roof structure based on the generated three-dimensional point cloud.
Type: Grant
Filed: August 16, 2021
Date of Patent: February 27, 2024
Assignee: Insurance Services Office, Inc.
Inventors: Joseph L. Mundy, Bryce Zachary Porter, Ryan Mark Justus, Francisco Rivas
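The disparity-map-to-point-cloud step rests on the standard stereo relation depth = focal_length × baseline / disparity, after which each pixel is back-projected into 3D. A minimal sketch under that standard model follows; the parameter names, the pinhole geometry with the principal point at the image origin, and the "disparity ≤ 0 means invalid" convention are illustrative assumptions, not the patent's algorithm.

```python
def disparity_to_points(disparity, focal_px, baseline_m):
    """Back-project each valid pixel of a disparity map to a 3D point.

    disparity:  2D list of per-pixel disparity values (in pixels)
    focal_px:   focal length in pixels; baseline_m: camera baseline in meters
    Uses Z = f * B / d, then X = u * Z / f and Y = v * Z / f.
    """
    points = []
    for row, line in enumerate(disparity):
        for col, d in enumerate(line):
            if d <= 0:  # no valid stereo match for this pixel
                continue
            z = focal_px * baseline_m / d
            points.append((col * z / focal_px, row * z / focal_px, z))
    return points
```

Larger disparities thus map to closer points, which is why nearby roof edges produce the strongest depth signal in such a pipeline.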
-
Patent number: 11900549
Abstract: One variation of a method for automatically capturing data from non-networked production equipment includes: detecting a location of a mobile device within a facility, the mobile device manipulated by an operator while performing a step of an augmented digital procedure at a machine in the facility; estimating a position of a display on the machine relative to a field of view of an optical sensor in the mobile device based on the location of the mobile device and a stored location of the machine within the facility; in response to the position of the display falling within the field of view of the optical sensor, selecting an image captured by the optical sensor; extracting a value, presented on the display, from a region of the image depicting the display; and storing the value in a procedure file for the augmented digital procedure completed at the machine.
Type: Grant
Filed: September 13, 2021
Date of Patent: February 13, 2024
Assignee: Apprentice FS, Inc.
Inventors: Frank Maggiore, Angelo Stracquatanio, Milan Bradonjic, Nabil Chehade