Patents Examined by Xilin Guo
-
Patent number: 11972855
Abstract: A method includes receiving: (i) a selected three-dimensional (3D) section that has been ablated in a patient organ in accordance with a specified contour, and (ii) a dataset indicative of a set of lesions formed during ablation of the selected 3D section. The selected 3D section is transformed into a two-dimensional (2D) map, and a check is performed on the 2D map as to whether the set of lesions covers the specified contour.
Type: Grant
Filed: August 12, 2021
Date of Patent: April 30, 2024
Assignee: Biosense Webster (Israel) Ltd.
Inventor: Assaf Govari
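The coverage check this abstract describes can be sketched as follows. This is a minimal illustration under stated assumptions, not the patented implementation: `contour_points`, `lesion_centers`, and `lesion_radius` are hypothetical names, and lesions are modeled as simple disks on the 2D map.

```python
import math

def contour_covered(contour_points, lesion_centers, lesion_radius):
    """Check, on a 2D map, whether every sampled contour point lies
    within the effective radius of at least one ablation lesion."""
    for cx, cy in contour_points:
        if not any(math.hypot(cx - lx, cy - ly) <= lesion_radius
                   for lx, ly in lesion_centers):
            return False  # found a contour point no lesion covers
    return True

# Two lesions that together cover a three-point contour segment
print(contour_covered([(0, 0), (1, 0), (2, 0)], [(0.5, 0), (1.8, 0)], 1.0))
# A single lesion leaves the far contour point uncovered
print(contour_covered([(0, 0), (5, 0)], [(0, 0)], 1.0))
```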
-
Patent number: 11972712
Abstract: Methods, systems, and devices that support a dynamic screen refresh rate are described. An electronic device may dynamically (e.g., autonomously, while operating) adjust the rate at which a screen is refreshed, for example to balance user experience against power consumption by the electronic device. For example, the electronic device may use an increased refresh rate when executing applications for which user experience is enhanced by a higher refresh rate and may use a decreased refresh rate when executing other applications. As another example, the electronic device may use different refresh rates while executing different portions of the same application, as some aspects of an application (e.g., more intense portions of a video game) may benefit more than others from a higher refresh rate. The electronic device may also account for other factors, such as battery level, when setting or adjusting the refresh rate of the screen.
Type: Grant
Filed: March 10, 2022
Date of Patent: April 30, 2024
Inventors: Ashish Ranjan, Carly M. Wantulok, Prateek Trivedi, Carla L. Christensen, Jun Huang, Avani F. Trivedi
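A refresh-rate policy of the kind this abstract describes could be sketched as a simple selection function. All names and thresholds here are assumptions for illustration, not anything claimed by the patent:

```python
def choose_refresh_rate(app_profile, battery_level, rates=(30, 60, 120)):
    """Pick a screen refresh rate balancing user experience and power.

    app_profile: estimated benefit of a high refresh rate, 0.0..1.0
                 (e.g. ~1.0 for an intense game scene, ~0.1 for e-mail).
    battery_level: remaining charge, 0.0..1.0.
    """
    if battery_level < 0.15:      # low battery: always conserve power
        return min(rates)
    if app_profile > 0.7:         # refresh-rate-sensitive content
        return max(rates)
    return sorted(rates)[len(rates) // 2]  # middle rate otherwise

print(choose_refresh_rate(0.9, 0.8))   # intense game, healthy battery -> 120
print(choose_refresh_rate(0.9, 0.1))   # same game, low battery -> 30
print(choose_refresh_rate(0.3, 0.8))   # ordinary app -> 60
```

Calling the function per application portion (rather than per application) mirrors the abstract's example of varying the rate within a single video game.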
-
Patent number: 11967010
Abstract: The present disclosure relates to systems, non-transitory computer-readable media, and methods for providing angular snapping guides to efficiently, accurately, and flexibly align user interactions and editing operations to existing angular linear segments of digital design objects in a digital design document. In particular, in one or more embodiments, the disclosed systems determine target angular linear segments for presentation of angular snapping guides by generating angular bin data structures based on orientation and signed distances of angular linear segments within the digital design document. Accordingly, in one or more embodiments, the disclosed systems can efficiently search these angular bin data structures based on angles and signed distances corresponding to user interactions.
Type: Grant
Filed: January 3, 2022
Date of Patent: April 23, 2024
Assignee: Adobe Inc.
Inventors: Arushi Jain, Praveen Kumar Dhanuka
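The angular-bin idea can be sketched generically: quantize each segment's orientation and signed distance into a coarse key, so a user interaction only has to search one small bucket instead of every segment. The bin sizes and names below are assumptions for illustration, not the patent's actual data structure:

```python
from collections import defaultdict

def bin_key(angle_deg, signed_dist, angle_step=15, dist_step=50):
    """Quantize a segment's orientation (mod 180 degrees) and signed
    distance from the origin into a coarse bin key."""
    return (int((angle_deg % 180) // angle_step),
            int(signed_dist // dist_step))

def build_bins(segments):
    """segments: list of (angle_deg, signed_dist, segment_id) tuples."""
    bins = defaultdict(list)
    for angle, dist, seg_id in segments:
        bins[bin_key(angle, dist)].append(seg_id)
    return bins

bins = build_bins([(45.0, 10.0, "a"), (46.0, 12.0, "b"), (90.0, 300.0, "c")])
# An interaction at ~47 degrees and distance ~8 inspects one small bin,
# not the whole document
print(bins[bin_key(47.0, 8.0)])  # ['a', 'b']
```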
-
Patent number: 11967019
Abstract: A method including: receiving visible-light images captured using camera(s) and depth data corresponding to said images; identifying image segments of a visible-light image that represent objects or their parts belonging to different material categories; detecting whether at least two adjacent image segments in the visible-light image pertain to at least two different material categories related to a same object category; and, when it is detected that the at least two adjacent image segments pertain to at least two different material categories related to the same object category, identifying at least two adjacent depth segments of the depth data corresponding to the at least two adjacent image segments, and correcting errors in optical depths represented in at least one of the at least two adjacent depth segments, based on optical depths represented in the remaining of the at least two adjacent depth segments.
Type: Grant
Filed: October 24, 2022
Date of Patent: April 23, 2024
Assignee: Varjo Technologies Oy
Inventors: Roman Golovanov, Tarek Mohsen, Petteri Timonen, Oleksandr Dovzhenko, Ville Timonen, Tuomas Tölli, Joni-Matti Määttä
-
Patent number: 11967015
Abstract: The subject technology provides a framework for learning neural scene representations directly from images, without three-dimensional (3D) supervision, by a machine-learning model. In the disclosed systems and methods, 3D structure can be imposed by ensuring that the learned representation transforms like a real 3D scene. For example, a loss function can be provided which enforces equivariance of the scene representation with respect to 3D rotations. Because naive tensor rotations may not be used to define models that are equivariant with respect to 3D rotations, a new operation called an invertible shear rotation is disclosed, which has the desired equivariance property. In some implementations, the model can be used to generate a 3D representation, such as a mesh, of an object from an image of the object.
Type: Grant
Filed: January 8, 2021
Date of Patent: April 23, 2024
Assignee: Apple Inc.
Inventors: Qi Shan, Joshua Susskind, Aditya Sankar, Robert Alex Colburn, Emilien Dupont, Miguel Angel Bautista Martin
-
Patent number: 11948236
Abstract: The present disclosure describes a method and apparatus for generating animation. An implementation of the method may include: processing a to-be-processed material to generate a normalized text; analyzing the normalized text to generate a Chinese pinyin sequence of the normalized text; generating a reference audio based on the to-be-processed material; and obtaining an animation of facial expressions corresponding to the timing sequence of the reference audio based on the Chinese pinyin sequence and the reference audio.
Type: Grant
Filed: November 16, 2021
Date of Patent: April 2, 2024
Assignee: BEIJING BAIDU NETCOM SCIENCE TECHNOLOGY CO., LTD.
Inventors: Shaoxiong Yang, Yang Zhao, Chen Zhao
-
Patent number: 11941739
Abstract: Systems and methods generate a modified three-dimensional mesh representation of an object using a trained neural network. A computer system receives a set of input values for posing an initial mesh defining a surface of a three-dimensional object. The computer system provides the input values to a neural network trained on posed meshes generated using a rigging model to generate mesh offset values based upon the set of input values and the initial mesh. The neural network includes an input layer, an output layer, and a plurality of intermediate layers. The computer system generates, by the output layer of the neural network, a set of offset values corresponding to a set of three-dimensional target points based on the set of input values. The offset values are applied to the initial mesh to generate a posed mesh. The computer system outputs the posed mesh for generating an animation frame.
Type: Grant
Filed: January 5, 2022
Date of Patent: March 26, 2024
Assignee: PIXAR
Inventors: Sarah Radzihovsky, Fernando Ferrari de Goes, Mark Meyer
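The core data flow, pose inputs producing per-vertex offsets that are added to a rest-pose mesh, can be sketched as below. The random linear layer stands in for the trained network and every name here is an assumption; the real method uses a multi-layer network trained against a rigging model:

```python
import numpy as np

rng = np.random.default_rng(0)

def mlp_offsets(pose_params, initial_mesh, weights):
    """Stand-in for the trained network: map pose inputs to per-vertex
    3D offsets (a single random linear layer, purely illustrative)."""
    n_vertices = initial_mesh.shape[0]
    out = np.tanh(pose_params @ weights)     # flat vector of offsets
    return out.reshape(n_vertices, 3)

n_vertices, n_pose = 4, 6
initial_mesh = rng.standard_normal((n_vertices, 3))  # rest-pose vertices
weights = rng.standard_normal((n_pose, n_vertices * 3)) * 0.1
pose_params = rng.standard_normal(n_pose)            # e.g. joint angles

offsets = mlp_offsets(pose_params, initial_mesh, weights)
posed_mesh = initial_mesh + offsets                  # deform the rest pose
print(posed_mesh.shape)  # (4, 3)
```

Predicting offsets rather than absolute vertex positions keeps the rest-pose geometry as a stable baseline, which is why the abstract applies the network output additively.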
-
Patent number: 11941747
Abstract: A method includes accessing a first object in a virtual environment, the first object representing a first asset at a first level of detail (LoD). A second object is generated to represent the first asset at a second LoD having decreased complexity. The method further includes determining a first importance value for the first asset and, based on the first importance value, selecting the first object to represent the first asset. Additionally, the method includes accessing a third object representing the second asset at the first LoD and generating a fourth object representing the second asset at the second LoD. The method further includes determining a second importance value, lower than the first importance value, for the second asset and selecting the fourth object to represent the second asset. The method further includes causing a client device to update a display of the virtual environment by transmitting the selected objects.
Type: Grant
Filed: October 29, 2021
Date of Patent: March 26, 2024
Assignee: Adobe Inc.
Inventors: Qi Sun, Xin Sun, Stefano Petrangeli, Shaoyu Chen, Li-Yi Wei, Jose Ignacio Echevarria Vallespi
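The importance-driven selection described above reduces to picking, per asset, between a full-detail and a simplified object. A minimal sketch, with all names and the threshold chosen for illustration rather than taken from the patent:

```python
def select_lod_objects(assets, importance_threshold=0.5):
    """For each asset, pick the full-detail object when its importance
    is high and the reduced-complexity object otherwise.

    assets: list of (name, importance, high_lod_obj, low_lod_obj).
    Returns the objects to transmit to the client for display.
    """
    selected = []
    for name, importance, high_lod, low_lod in assets:
        selected.append(high_lod if importance >= importance_threshold
                        else low_lod)
    return selected

scene = [
    ("statue", 0.9, "statue_50k_tris", "statue_2k_tris"),  # focal asset
    ("crate",  0.2, "crate_5k_tris",  "crate_200_tris"),   # background asset
]
print(select_lod_objects(scene))  # ['statue_50k_tris', 'crate_200_tris']
```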
-
Patent number: 11935170
Abstract: Systems, methods, and computer-readable media are disclosed for automated generation and presentation of sign language avatars for video content. Example methods may include determining, by one or more computer processors coupled to memory, a first segment of video content, the first segment including a first set of frames, first audio content, and first subtitle data, where the first subtitle data comprises a first word and a second word. Methods may include determining, using a first machine learning model, a first sign gesture associated with the first word, determining first motion data associated with the first sign gesture, and determining first facial expression data. Methods may include generating an avatar configured to perform the first sign gesture using the first motion data, where a facial expression of the avatar while performing the first sign gesture is based on the first facial expression data.
Type: Grant
Filed: November 18, 2021
Date of Patent: March 19, 2024
Assignee: Amazon Technologies, Inc.
Inventors: Abhinav Jain, Avijit Vajpayee, Vimal Bhat, Arjun Cholkar, Louis Kirk Barker
-
Patent number: 11922594
Abstract: Techniques and systems are provided for dynamically adjusting virtual content provided by an extended reality system. In some examples, a system determines a level of distraction of a user of the extended reality system due to virtual content provided by the extended reality system. The system determines whether the level of distraction of the user due to the virtual content exceeds or is less than a threshold level of distraction, where the threshold level of distraction is determined based at least in part on one or more environmental factors associated with a real-world environment in which the user is located. The system also adjusts one or more characteristics of the virtual content based on the determination of whether the level of distraction of the user due to the virtual content exceeds or is less than the threshold level of distraction.
Type: Grant
Filed: August 11, 2022
Date of Patent: March 5, 2024
Assignee: QUALCOMM Incorporated
Inventors: Robert Tartz, Jonathan Kies, Daniel James Guest
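The threshold comparison above can be sketched as a small control loop over one content characteristic (here, opacity). The environment-derived threshold, the step sizes, and all names are assumptions for illustration only:

```python
def adjust_virtual_content(distraction, environment_risk, opacity):
    """Dim virtual content when measured distraction exceeds a threshold
    derived from the real-world environment, and restore it otherwise."""
    threshold = 1.0 - environment_risk  # riskier surroundings -> lower tolerance
    if distraction > threshold:
        return max(0.1, opacity - 0.3)  # reduce the content's prominence
    return min(1.0, opacity + 0.1)      # safe to make content more visible

# Busy street (high environmental risk): moderate distraction dims the overlay
print(round(adjust_virtual_content(distraction=0.6,
                                   environment_risk=0.7,
                                   opacity=0.8), 2))
# Quiet room (low risk): the same opacity is allowed to recover
print(round(adjust_virtual_content(distraction=0.2,
                                   environment_risk=0.1,
                                   opacity=0.8), 2))
```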
-
Patent number: 11922589
Abstract: The present disclosure describes systems and methods for generating digital twin augmented reality replications of non-homogenous elements in integrated environments. One method includes storing a first data structure for a first element in a digital twin augmented reality environment, the first data structure including respective fields for a first function, a first set of relationships, a first physical location, and a first time period of operation. The method also includes storing a second data structure for a second element in the digital twin augmented reality environment, the second data structure including respective fields for a second function, a second set of relationships, a second physical location, and a second time period of operation. The method can generate a visual representation of the first element and the second element in the digital twin augmented reality environment.
Type: Grant
Filed: July 17, 2022
Date of Patent: March 5, 2024
Assignee: NETWORK DOCUMENTATION & IMPLEMENTATION INC.
Inventor: Jorge McBain
-
Patent number: 11915360
Abstract: A deep learning-based volumetric image inference system and method are disclosed that use 2D images sparsely captured by a standard wide-field fluorescence microscope at arbitrary axial positions within the sample volume. Through a recurrent convolutional neural network (RNN) (referred to herein as Recurrent-MZ), 2D fluorescence information from a few axial planes within the sample is explicitly incorporated to digitally reconstruct the sample volume over an extended depth-of-field. Using experiments on C. elegans and nanobead samples, Recurrent-MZ is demonstrated to increase the depth-of-field of a 63×/1.4 NA objective lens by approximately 50-fold, also providing a 30-fold reduction in the number of axial scans required to image the same sample volume. The generalization of this recurrent network for 3D imaging is further demonstrated by showing its resilience to varying imaging conditions, including e.g.
Type: Grant
Filed: October 19, 2021
Date of Patent: February 27, 2024
Assignee: THE REGENTS OF THE UNIVERSITY OF CALIFORNIA
Inventors: Aydogan Ozcan, Yair Rivenson, Luzhe Huang
-
Patent number: 11915451
Abstract: A method and a system for object detection and pose estimation within an input image. A 6-degree-of-freedom object detection and pose estimation is performed using a trained encoder-decoder convolutional artificial neural network including an encoder head, an ID mask decoder head, a first correspondence color channel decoder head, and a second correspondence color channel decoder head. The ID mask decoder head creates an ID mask for identifying objects, and the color channel decoder heads are used to create a 2D-to-3D-correspondence map. For at least one object identified by the ID mask, a pose estimation is generated based on the 2D-to-3D-correspondence map and on a pre-generated bijective association of points of the object with unique value combinations in the first and the second correspondence color channels.
Type: Grant
Filed: January 17, 2020
Date of Patent: February 27, 2024
Assignee: Siemens Aktiengesellschaft
Inventors: Ivan Shugurov, Andreas Hutter, Sergey Zakharov, Slobodan Ilic
-
Patent number: 11907414
Abstract: An animation system includes an animated figure, multiple sensors, and an animation controller that includes a processor and a memory. The memory stores instructions executable by the processor. The instructions cause the animation controller to receive guest detection data from the multiple sensors, receive shiny object detection data from the multiple sensors, determine an animation sequence of the animated figure based on the guest detection data and shiny object detection data, and transmit a control signal indicative of the animation sequence to cause the animated figure to execute the animation sequence. The guest detection data is indicative of a presence of a guest near the animated figure. The animation sequence is responsive to a shiny object detected on or near the guest based on the guest detection data and the shiny object detection data.
Type: Grant
Filed: May 31, 2022
Date of Patent: February 20, 2024
Assignee: Universal City Studios LLC
Inventors: David Michael Churchill, Clarisse Vamos, Jeffrey A. Bardt
-
Patent number: 11900674
Abstract: Systems, computer program products, and methods are described herein for real-time identification of unauthorized access. The present invention is configured to: receive, from a first computing device, an indication of a first triggering activity; extract one or more first assessment vectors associated with the first triggering activity; receive, from a second computing device, an indication of a second triggering activity; extract one or more second assessment vectors associated with the second triggering activity; determine, in real-time, that the one or more first assessment vectors and the one or more second assessment vectors indicate an incidence of a misappropriate activity; and dynamically generate, using an augmented reality application, an augmented reality overlay comprising at least the one or more first assessment vectors, the one or more second assessment vectors, and an indication of the incidence of the misappropriate activity.
Type: Grant
Filed: July 8, 2021
Date of Patent: February 13, 2024
Assignee: BANK OF AMERICA CORPORATION
Inventor: Matthew K. Bryant
-
Patent number: 11886245
Abstract: A wearable computing device, including a device body configured to be affixed to a body of a user. The wearable computing device may further include an inertial measurement unit (IMU) and a processor. The processor may receive kinematic data from the IMU while the device body is affixed to the body of the user. The processor may perform a first coordinate transformation on the kinematic data into a training coordinate frame of a training wearable computing device. At a first machine learning model trained using training data including training kinematic data collected at the training wearable computing device, the processor may compute a training-frame velocity estimate for the wearable computing device based on the transformed kinematic data. The processor may perform a second coordinate transformation on the training-frame velocity estimate to obtain a runtime-frame velocity estimate and may output the runtime-frame velocity estimate to a target program.
Type: Grant
Filed: May 6, 2021
Date of Patent: January 30, 2024
Assignee: Microsoft Technology Licensing, LLC
Inventors: Evan Gregory Levine, Salim Sirtkaya
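The two coordinate transformations, into the training frame before the model and back out afterward, can be sketched with a rotation matrix. The identity "model" and all names below are assumptions for illustration; the real system uses a trained network:

```python
import numpy as np

def runtime_velocity(kinematic_sample, r_runtime_to_training, model):
    """Rotate IMU kinematics into the training frame, run the velocity
    model there, then rotate the estimate back to the runtime frame."""
    transformed = r_runtime_to_training @ kinematic_sample  # first transformation
    v_training = model(transformed)                         # training-frame estimate
    return r_runtime_to_training.T @ v_training             # second transformation

# 90-degree rotation about z as the frame alignment; identity stand-in model
theta = np.pi / 2
rot = np.array([[np.cos(theta), -np.sin(theta), 0.0],
                [np.sin(theta),  np.cos(theta), 0.0],
                [0.0,            0.0,           1.0]])
v = runtime_velocity(np.array([1.0, 0.0, 0.0]), rot, lambda x: x)
print(np.round(v, 6))  # an identity model round-trips the input vector
```

For a rotation matrix the inverse is the transpose, which is why the second transformation uses `r_runtime_to_training.T`.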
-
Patent number: 11880950
Abstract: Implementations selectively offload visual frames from a client device to an edge system for processing. The client device can receive streaming visual frames and a request to process the visual frames using a data service. The client device can offload visual frames to an edge system preloaded with a workload resource that corresponds to the requested data service. After the edge system processes the offloaded visual frames using the workload resource, the edge system can return the processed visual frames to the client device. In some implementations, the edge system and client device are situated in a network such that the latency of the offload communications supports real-time video display. A cloud system can maintain a registry of edge systems and provide client devices with information about nearby edge systems. The cloud system can also preload the edge systems with workload resources that correspond to data services.
Type: Grant
Filed: March 14, 2022
Date of Patent: January 23, 2024
Assignee: Meta Platforms Technologies, LLC
Inventors: Pranav Saxena, Brian Johnston, Jun Woo Shin, Tao Tao, Alaukik Aggarwal
-
Patent number: 11874302
Abstract: A digital oscilloscope includes a video input interface, a data processing system, a video output interface, and a clock system. The video input interface is configured to receive a digital video signal; the data processing system receives the digital video signal and processes it to generate an oscillogram signal, which includes an oscillogram image and further includes one of a menu image and a frame image of the digital video signal; and the video output interface is connected to the data processing system, receives the oscillogram signal, and outputs it to external terminals. The oscilloscope can display a variety of image information, offers high intuitiveness, a simplified structure, and improved portability, and is convenient to use outdoors.
Type: Grant
Filed: December 29, 2020
Date of Patent: January 16, 2024
Assignee: BOE TECHNOLOGY GROUP CO., LTD.
Inventors: Congrui Wu, Lihua Geng, Xitong Ma
-
Patent number: 11875470
Abstract: A virtual environment may be generated, the environment including a procedurally generated virtual property. The virtual property may include various features automatically generated based upon a procedural rule set, so as to provide variability between two or more virtual properties. One or more procedurally generated virtual properties may be provided to a user, and various virtual tools may be provided via which the user may identify relevant aspects corresponding to risk associated with the virtual property.
Type: Grant
Filed: December 7, 2022
Date of Patent: January 16, 2024
Assignee: State Farm Mutual Automobile Insurance Company
Inventors: Rebecca A. Little, Deanna L. Stockweather, Nathan C. Summers, Bryan R. Nussbaum, Karen Wylie
-
Patent number: 11861887
Abstract: Aspects of the technology described herein relate to techniques for guiding an operator to use an ultrasound device. Thereby, operators with little or no experience operating ultrasound devices may capture medically relevant ultrasound images and/or interpret the contents of the obtained ultrasound images. For example, some of the techniques disclosed herein may be used to identify a particular anatomical view of a subject to image with an ultrasound device, guide an operator of the ultrasound device to capture an ultrasound image of the subject that contains the particular anatomical view, and/or analyze the captured ultrasound image to identify medical information about the subject.
Type: Grant
Filed: September 7, 2021
Date of Patent: January 2, 2024
Assignee: BFLY OPERATIONS, INC.
Inventors: Matthew de Jonge, Robert Schneider, David Elgena, Alex Rothberg, Jonathan M. Rothberg, Michal Sofka, Tomer Gafner, Karl Thiele, Abraham Neben