Patents Issued on February 6, 2024
  • Patent number: 11893651
    Abstract: Systems, methods, and devices disclosed herein can collect digital witness statements (e.g., on a witness's own mobile device), detect when witnesses are accessing electronic resources during preparation of those digital witness statements, elicit input from witnesses to identify the electronic resources accessed, and detect portions of the digital witness statements that may have been influenced by data procured from those electronic resources. Furthermore, the systems, methods, and devices disclosed herein can generate an indication of consistency between content found in the electronic resources and content found in the digital witness statements so that a degree to which the electronic resources influenced the witness statements can be inferred.
    Type: Grant
    Filed: April 4, 2022
    Date of Patent: February 6, 2024
    Assignee: MOTOROLA SOLUTIONS, INC.
    Inventors: Anoop Sehgal Paras Ram, Jin Hoe Phua, Woei Chyuan Tan, Jonathan Chan
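    A minimal Python sketch of the consistency indication described in this entry (patent 11893651), assuming plain-text inputs and a generic sequence-similarity ratio; the patent does not specify a particular similarity measure, and the sentence splitting and threshold here are illustrative assumptions.

```python
import difflib

def consistency_indication(statement: str, resource_text: str, threshold: float = 0.5):
    """Flag statement sentences that closely resemble sentences from an accessed
    electronic resource, as a rough indication of possible influence."""
    statement_sentences = [s.strip() for s in statement.split(".") if s.strip()]
    resource_sentences = [s.strip() for s in resource_text.split(".") if s.strip()]
    flagged = []
    for sentence in statement_sentences:
        # Similarity ratio in [0, 1]; higher means closer resemblance to resource content.
        best = max((difflib.SequenceMatcher(None, sentence.lower(), r.lower()).ratio()
                    for r in resource_sentences), default=0.0)
        if best >= threshold:
            flagged.append((sentence, round(best, 2)))
    return flagged

print(consistency_indication(
    "The car was a red sedan travelling north. I was standing at the corner cafe.",
    "A witness appeal describes a red sedan travelling north on Main Street.",
))
```
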
  • Patent number: 11893652
    Abstract: A control device that is communicably connected to a vehicle includes a control unit. The control unit executes acquisition of sensor information from a sensor provided in the vehicle, determination of whether the vehicle is parked based on the sensor information, determination of whether a user is staying in the vehicle based on the sensor information, and estimation that the user is staying in the vehicle when a parking time of the vehicle exceeds a first reference value and a staying time of the user exceeds a second reference value.
    Type: Grant
    Filed: July 28, 2021
    Date of Patent: February 6, 2024
    Assignee: TOYOTA JIDOSHA KABUSHIKI KAISHA
    Inventor: Hirohiko Morikawa
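    A minimal sketch of the threshold logic in this entry (patent 11893652), assuming the parking time and staying time have already been derived from the vehicle's sensor information; the reference values below are arbitrary placeholders.

```python
def estimate_user_staying(parking_time_s: float, staying_time_s: float,
                          first_reference_s: float = 300.0,
                          second_reference_s: float = 300.0) -> bool:
    """Estimate that the user is staying in the parked vehicle when both the parking
    time and the user's staying time exceed their respective reference values."""
    return parking_time_s > first_reference_s and staying_time_s > second_reference_s

print(estimate_user_staying(parking_time_s=900.0, staying_time_s=600.0))  # True
```
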
  • Patent number: 11893653
    Abstract: The present invention facilitates efficient and effective utilization of unified virtual addresses across multiple components. In one embodiment, the presented new approach or solution uses Operating System (OS) allocation on the central processing unit (CPU) combined with graphics processing unit (GPU) driver mappings to provide a unified virtual address (VA) across both GPU and CPU. The new approach helps ensure that a GPU VA pointer does not collide with a CPU pointer provided by OS CPU allocation (e.g., like one returned by “malloc” C runtime API, etc.).
    Type: Grant
    Filed: May 9, 2019
    Date of Patent: February 6, 2024
    Assignee: NVIDIA Corporation
    Inventors: Amit Rao, Ashish Srivastava, Yogesh Kini
  • Patent number: 11893654
    Abstract: The present disclosure relates to methods and devices for graphics processing including an apparatus, e.g., a GPU. The apparatus may configure a portion of a GPU to include at least one depth processing block, the at least one depth processing block being associated with at least one depth buffer. The apparatus may also identify one or more depth passes of each of a plurality of graphics workloads, the plurality of graphics workloads being associated with a plurality of frames. Further, the apparatus may process each of the one or more depth passes in the portion of the GPU including the at least one depth processing block, each of the one or more depth passes being processed by the at least one depth processing block, the one or more depth passes being associated with the at least one depth buffer.
    Type: Grant
    Filed: July 12, 2021
    Date of Patent: February 6, 2024
    Assignee: QUALCOMM Incorporated
    Inventors: Sreyas Kurumanghat, Kalyan Kumar Bhiravabhatla, Andrew Evan Gruber, Tao Wang, Baoguang Yang, Pavan Kumar Akkaraju
  • Patent number: 11893655
    Abstract: A method is provided that includes receiving a video timing pulse, determining, in response to receiving the video timing pulse, a video engine is busy processing a previous frame, and storing settings for a current frame in a pending queue in response to determining the video engine is busy processing the previous frame. The method further includes configuring the video engine with the settings for the current frame from the pending queue after the video engine has completed processing of the previous frame, and processing, using the configured video engine, the current frame.
    Type: Grant
    Filed: March 31, 2022
    Date of Patent: February 6, 2024
    Assignee: AVAGO TECHNOLOGIES INTERNATIONAL SALES PTE. LIMITED
    Inventors: Jason W. Herrick, Hongtao Zhu
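    The pending-queue flow described in this entry (patent 11893655) can be sketched as follows; the VideoEngine class and the settings dictionaries are illustrative stand-ins, not the patented hardware interface.

```python
from collections import deque

class VideoEngine:
    """Toy stand-in for a hardware video engine."""
    def __init__(self):
        self.busy = False
        self.settings = None

    def configure_and_start(self, settings):
        self.settings = settings
        self.busy = True

    def finish_frame(self):
        self.busy = False

engine = VideoEngine()
pending = deque()

def on_video_timing_pulse(current_frame_settings):
    # If the engine is still processing the previous frame, store the settings.
    if engine.busy:
        pending.append(current_frame_settings)
    else:
        engine.configure_and_start(current_frame_settings)

def on_previous_frame_done():
    # Previous frame complete: configure the engine with the oldest pending settings.
    engine.finish_frame()
    if pending:
        engine.configure_and_start(pending.popleft())

on_video_timing_pulse({"frame": 1, "scaler": "bilinear"})
on_video_timing_pulse({"frame": 2, "scaler": "bicubic"})  # engine busy -> queued
on_previous_frame_done()                                  # frame 2 settings applied
print(engine.settings)
```
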
  • Patent number: 11893656
    Abstract: The present invention relates to a prediction system for determining a set of subregions to be used for rendering a virtual world of a computer graphics application, said subregions belonging to streamable objects to be used for rendering said virtual world, said streamable objects each comprising a plurality of subregions. The prediction system comprises a plurality of predictor units arranged for receiving from a computer graphics application information on the virtual world and each arranged for obtaining a predicted set of subregions for rendering a virtual world using streamable objects, each predicted set being obtained by applying a different prediction scheme, a streaming manager arranged for receiving the predicted sets of subregions, for deriving from the predicted sets a working set of subregions to be used for rendering and for outputting, based on the working set of subregions, steering instructions concerning the set of subregions to be actually used.
    Type: Grant
    Filed: November 16, 2021
    Date of Patent: February 6, 2024
    Assignee: Unity Technologies ApS
    Inventors: Bart Pieters, Charles-Frederik Hollemeersch, Aljosha Demeulemeester
  • Patent number: 11893657
    Abstract: A system, a method, and a computer program for transferring a style for a recognition area are provided. The range of an application object including a specific style is expanded from an image to a style of a real object or a style of a specific area included in a photo. In addition, the recognition area, previously limited to a confined photo space, is expanded to a real object and a background by using a projector beam. In addition, a wider variety of styles can be mixed and applied to the output painting-style image or to an original image.
    Type: Grant
    Filed: November 17, 2021
    Date of Patent: February 6, 2024
    Assignee: CoreDotToday Inc.
    Inventors: Kyung Hoon Kim, Bongsoo Jang
  • Patent number: 11893658
    Abstract: An augmented virtual vehicle testing system and method for presenting graphics to a vehicle operator during operation of a vehicle. The method includes: determining a position of a vehicle operator within a vehicle testing environment; executing an augmentative simulation of the vehicle testing environment, wherein the augmentative simulation is used to provide a position of one or more virtual objects within the vehicle testing environment; generating graphics representing the one or more virtual objects based on the position of the vehicle operator and the position of the one or more virtual objects within the vehicle testing environment; and presenting the graphics on an electronic display and to the vehicle operator during operation of the vehicle.
    Type: Grant
    Filed: August 17, 2022
    Date of Patent: February 6, 2024
    Assignee: The Regents of the University of Michigan
    Inventors: Tyler S. Worman, Huei Peng, Gregory J. McGuire
  • Patent number: 11893659
    Abstract: The present invention relates to a method and system that allows input mammography images to be converted between domains. More particularly, the present invention relates to converting mammography images from the image style common to one manufacturer of imaging equipment to the image style common to another manufacturer of imaging equipment. Aspects and/or embodiments seek to provide a method of converting input images from the format output by one imaging device into the format normally output by another imaging device. The imaging devices may differ in their manufacturer, model or configuration such that they produce different styles of image, even if presented with the same raw input data, due to the image processing used in the imaging device(s).
    Type: Grant
    Filed: November 27, 2019
    Date of Patent: February 6, 2024
    Assignee: Kheiron Medical Technologies Ltd.
    Inventors: Tobias Rijken, Michael O'Neill, Andreas Heindl, Joseph Yearsley, Dimitrios Korkinof, Galvin Khara
  • Patent number: 11893660
    Abstract: An image processing apparatus includes a first image creation unit, a second image creation unit, and a CNN processing unit. The first image creation unit creates a first tomographic image of an m-th frame using a data group in list data included in the m-th frame. The second image creation unit creates a second tomographic image using a data group in the list data having a data amount larger than that of the data group used in creating the first tomographic image. The CNN processing unit inputs the second tomographic image to a CNN, outputs an output tomographic image from the CNN, trains the CNN based on a comparison between the output tomographic image and the first tomographic image, and repeats the training operation to generate the output tomographic image in each training.
    Type: Grant
    Filed: January 29, 2020
    Date of Patent: February 6, 2024
    Assignee: HAMAMATSU PHOTONICS K.K.
    Inventors: Fumio Hashimoto, Kibo Ote
  • Patent number: 11893661
    Abstract: The disclosed apparatus, systems and methods relate to a framelet-based iterative algorithm for polychromatic CT which can reconstruct two components using a single scan. The algorithm can have various steps including a scaled-gradient descent step of constant or variant step sizes; a non-negativity step; a soft thresholding step; and a color reconstruction step.
    Type: Grant
    Filed: March 8, 2021
    Date of Patent: February 6, 2024
    Assignees: University of Iowa Research Foundation
    Inventors: Wenxiang Cong, Ye Yangbo, Ge Wang, Shuwei Mao, Yingmei Wang
  • Patent number: 11893662
    Abstract: An imaging system includes a sensor configured to receive imaging data, where the imaging data comprises k-space data from a magnetic resonance imaging (MRI) scan of a patient. The imaging system also includes a processor operatively coupled to the sensor and configured to identify a degree of interaction between measured points of the k-space data located at a radius from a center of k-space. The processor is also configured to determine, based at least in part on the degree of interaction between the measured points, density weights for a density compensation filter. The processor is also configured to apply the density compensation filter to the k-space data to generate filtered k-space data. The processor is further configured to generate an MRI image of the patient based at least in part on the filtered k-space data.
    Type: Grant
    Filed: August 3, 2021
    Date of Patent: February 6, 2024
    Assignee: Northwestern University
    Inventors: Kyungpyo Hong, Daniel Kim
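    A rough numpy sketch in the spirit of this entry (patent 11893662): estimate the local sample density per annulus of radius around the center of k-space and weight each sample by its inverse. The annulus-binning estimate is a simple stand-in; the patented method derives the weights from a specific degree of interaction between measured points at a radius.

```python
import numpy as np

def density_weights(kx: np.ndarray, ky: np.ndarray, n_bins: int = 64) -> np.ndarray:
    """Density-compensation weights for 2D k-space samples: estimate the local sample
    density in radius annuli around the center of k-space and weight by its inverse."""
    radius = np.sqrt(kx ** 2 + ky ** 2)
    edges = np.linspace(0.0, radius.max() + 1e-9, n_bins + 1)
    bin_idx = np.clip(np.digitize(radius, edges) - 1, 0, n_bins - 1)
    counts = np.bincount(bin_idx, minlength=n_bins).astype(float)
    annulus_area = 2.0 * np.pi * 0.5 * (edges[:-1] + edges[1:]) * np.diff(edges)
    density = counts / annulus_area        # samples per unit k-space area at each radius
    weights = 1.0 / density[bin_idx]       # densely sampled (central) radii -> small weights
    return weights / weights.sum()         # normalized density-compensation filter

# Example: radial spokes heavily oversample the center of k-space.
angles = np.repeat(np.linspace(0.0, np.pi, 32, endpoint=False), 128)
radii = np.tile(np.linspace(-1.0, 1.0, 128), 32)
kx, ky = radii * np.cos(angles), radii * np.sin(angles)
w = density_weights(kx, ky)
near_center = np.abs(radii) < 0.1
print(w[near_center].mean() < w[~near_center].mean())  # True: central samples are down-weighted
```
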
  • Patent number: 11893663
    Abstract: A method of generating a segmentation confidence map by processing classification values each indicating a respective classification of a respective voxel of a retinal C-scan into a respective retinal layer class of a predefined set of retinal layer classes, the method comprising: generating, for each voxel, a respective confidence value which indicates a level of confidence in the classification of the voxel; for a retinal layer class of the predefined set, identifying a subset of the voxels such that the classification value generated for each voxel indicates a classification of the voxel into the retinal layer class; calculating, for each A-scan having voxels in the identified subset, a respective average of the confidence indicator values generated for the voxels; and using the calculated averages to generate the map, which indicates a spatial distribution of a level of confidence in the classification of the voxels.
    Type: Grant
    Filed: March 31, 2021
    Date of Patent: February 6, 2024
    Assignee: OPTOS PLC
    Inventor: Enrico Pellegrini
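    A minimal numpy sketch of the per-A-scan averaging described in this entry (patent 11893663), assuming the C-scan is laid out as (A-scans, depth) arrays of class labels and confidence values; the integer label for the retinal layer class of interest is an assumption of the sketch.

```python
import numpy as np

def segmentation_confidence_map(labels: np.ndarray,
                                confidence: np.ndarray,
                                layer_class: int) -> np.ndarray:
    """For one retinal layer class, average the per-voxel confidence values of the
    voxels classified into that class, separately for each A-scan (row)."""
    assert labels.shape == confidence.shape          # shape: (n_ascans, n_depth)
    in_class = labels == layer_class
    sums = (confidence * in_class).sum(axis=1)
    counts = in_class.sum(axis=1)
    conf_map = np.full(labels.shape[0], np.nan)       # NaN where the class is absent
    np.divide(sums, counts, out=conf_map, where=counts > 0)
    return conf_map

labels = np.array([[0, 1, 1, 2], [1, 1, 0, 2]])
confidence = np.array([[0.9, 0.8, 0.6, 0.7], [0.5, 0.9, 0.4, 0.95]])
print(segmentation_confidence_map(labels, confidence, layer_class=1))  # [0.7 0.7]
```
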
  • Patent number: 11893664
    Abstract: An example computing device is configured to (i) generate a cross-sectional view of a three-dimensional drawing file, the cross-sectional view including an object corresponding to a given mesh of the three-dimensional drawing file, the object including a void contained within the object, (ii) determine a plurality of two-dimensional line segments that collectively define a boundary of the void, (iii) for each line segment, determine one or more nearby line segments based on a distance between an end point of the line segment and an end point of the one or more nearby line segments being within a threshold distance, (iv) determine one or more fully-connected sub-objects by connecting respective sets of nearby line segments in series, (v) determine, from the fully-connected sub-objects, a final sub-object to be used as a new boundary of the void, and (vi) add the final sub-object to the cross-sectional view as the new boundary of the void.
    Type: Grant
    Filed: February 3, 2022
    Date of Patent: February 6, 2024
    Assignee: Procore Technologies, Inc.
    Inventor: Christopher Myers
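    The endpoint-proximity grouping in this entry (patent 11893664) can be sketched with a small union-find pass over 2D segments; the segment representation and threshold are assumptions, and the later steps (connecting nearby segments in series and choosing a final sub-object as the new void boundary) are omitted.

```python
import math

def group_nearby_segments(segments, threshold=0.5):
    """Group 2D line segments whose endpoints lie within `threshold` of each other.
    Each segment is ((x1, y1), (x2, y2)); returns lists of segment indices, each list
    being a candidate fully-connected sub-object."""
    parent = list(range(len(segments)))

    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]   # path halving
            i = parent[i]
        return i

    def union(i, j):
        parent[find(i)] = find(j)

    for i, seg_i in enumerate(segments):
        for j in range(i + 1, len(segments)):
            # Nearby if any endpoint of one segment is within threshold of the other's.
            if any(math.dist(p, q) <= threshold for p in seg_i for q in segments[j]):
                union(i, j)

    groups = {}
    for i in range(len(segments)):
        groups.setdefault(find(i), []).append(i)
    return list(groups.values())

segs = [((0, 0), (1, 0)), ((1.1, 0), (1.1, 1)), ((5, 5), (6, 5))]
print(group_nearby_segments(segs))  # [[0, 1], [2]]
```
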
  • Patent number: 11893665
    Abstract: As a container (3) is filled with a material to a very small tolerance, a method displays an evolution of a measured current filling quantity (Q) of the material, from a starting filling quantity Q0 to a target filling quantity QT. A measuring means (5) measures the measured current filling quantity as filling proceeds. A display means (10) shows a first pointer (11), a position X of which is indicative of the measured current filling quantity. The position X on the display means is a monotonic function of the measured current filling quantity. As the measured current filling quantity enters a tolerance subrange on either side of the target filling quantity, a function that defines the position of the first pointer is changed to permit accurate filling, but in a manner so that a user of the measuring means does not perceive a discontinuity in the display.
    Type: Grant
    Filed: September 13, 2021
    Date of Patent: February 6, 2024
    Assignee: Mettler-Toledo (Albstadt) GmbH
    Inventor: Uwe Renz
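    A sketch of the pointer mapping in this entry (patent 11893665): outside the tolerance subrange the position is one linear, monotonic function of the measured quantity; inside the subrange a steeper function takes over, and the two pieces meet at the subrange boundary so the user perceives no discontinuity. The scale length, tolerance, and the fraction of the scale devoted to the subrange are illustrative assumptions.

```python
def pointer_position(q, q0=0.0, qt=100.0, tol=1.0, scale_len=1.0, band_fraction=0.5):
    """Map the measured filling quantity q to a pointer position in [0, scale_len].
    The last `band_fraction` of the scale is devoted to the tolerance subrange
    [qt - tol, qt + tol]; the two linear pieces meet at the band edge, so the
    pointer never jumps when the mapping changes."""
    band_start = qt - tol
    coarse_len = scale_len * (1.0 - band_fraction)
    if q <= band_start:
        return coarse_len * (q - q0) / (band_start - q0)          # coarse mapping
    fine = scale_len * band_fraction * (q - band_start) / (2 * tol)  # expanded inside band
    return min(coarse_len + fine, scale_len)

for q in (0, 50, 99, 99.5, 100, 101):
    print(q, round(pointer_position(q), 3))
```
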
  • Patent number: 11893666
    Abstract: An approach is provided in which the approach generates a parallel chart based on multiple records that includes a set of variable values corresponding to a set of variables. To generate the parallel chart, the approach arranges the set of variables on the parallel chart in a variable order based on at least one variable arrangement rule. The approach arranges an initial variable value order for each one of the set of variables, and computes a lucidity score based on the variable order and the initial variable value order of each of the set of variables. The lucidity score is a measurement of the clarity of the parallel chart. The approach adjusts the variable value order of at least one of the set of variables to increase the lucidity score and optimizes the clarity of the parallel chart based on the adjusted variable value order.
    Type: Grant
    Filed: January 19, 2022
    Date of Patent: February 6, 2024
    Assignee: International Business Machines Corporation
    Inventors: Xiao Ming Ma, Si Er Han, Xue Ying Zhang, Jing Xu, Wen Pei Yu, Ji Hui Yang, Jing Jia
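    The abstract of this entry (patent 11893666) does not define the lucidity score; the sketch below uses a common clarity proxy for parallel charts, the number of polyline crossings between adjacent axes, and scores a layout as 1/(1 + crossings). The record format and the crossing-count proxy are assumptions, not the patented metric.

```python
from itertools import combinations

def lucidity_score(records, variable_order, value_orders):
    """Score a parallel-chart layout: fewer polyline crossings -> higher score.
    `records` is a list of dicts, `variable_order` the axis order, and
    `value_orders` maps each variable to the ordered list of its values."""
    # Vertical position of each record on each axis = rank of its value on that axis.
    pos = [[value_orders[v].index(rec[v]) for v in variable_order] for rec in records]
    crossings = 0
    for a in range(len(variable_order) - 1):             # adjacent axis pairs
        for i, j in combinations(range(len(records)), 2):
            d_left = pos[i][a] - pos[j][a]
            d_right = pos[i][a + 1] - pos[j][a + 1]
            if d_left * d_right < 0:                      # the two polylines cross
                crossings += 1
    return 1.0 / (1.0 + crossings)

records = [
    {"size": "S", "color": "red"},
    {"size": "M", "color": "green"},
    {"size": "L", "color": "blue"},
]
order = ["size", "color"]
print(lucidity_score(records, order, {"size": ["S", "M", "L"],
                                      "color": ["red", "green", "blue"]}))  # 1.0
print(lucidity_score(records, order, {"size": ["S", "M", "L"],
                                      "color": ["blue", "green", "red"]}))  # 0.25
```
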
  • Patent number: 11893667
    Abstract: A system and method for rendering vector graphics using precomputed textures, includes receiving a vector image, the vector image including a plurality of instructions, each instruction for rendering at least a geometric primitive; detecting in the plurality of instructions an instruction for generating a first Bezier curve; selecting a first precomputed curve in a texture map to match the first Bezier curve; and generating a raster image based at least on the first precomputed curve. In an embodiment selecting the first precomputed curve includes computing a transformation matrix between the first precomputed curve and target coordinates, wherein the target coordinates are coordinates of a display; computing texture coordinates based on the computed transformation matrix and the texture map; and rendering an adapted precomputed curve, based on the texture map and the computed texture coordinates.
    Type: Grant
    Filed: April 6, 2022
    Date of Patent: February 6, 2024
    Assignee: THINK SILICON RESEARCH AND TECHNOLOGY SINGLE MEMBER S.A.
    Inventors: Ioannis Oikonomou, Georgios Keramidas
  • Patent number: 11893668
    Abstract: A method of generating an image may include obtaining a profile of a combination of a full-frame camera and a lens; obtaining image information from an electronic sensor of a mobile electronic device; and/or generating, via an electronic control unit of the mobile electronic device, a final digital image via applying a profile of the one or more profiles to the image information.
    Type: Grant
    Filed: March 28, 2022
    Date of Patent: February 6, 2024
    Inventor: Thomas Edward Bishop
  • Patent number: 11893669
    Abstract: A digital human development platform can enable a user to generate a digital human. The digital human development platform can receive user input specifying a dialogue for the digital human and one or more behaviors for the digital human, the one or more specified behaviors corresponding with one or more portions of the dialogue on a common timeline. Scene data can be generated with the digital human development platform by merging the one or more behaviors with one or more portions of the dialogue based on times of the one or more behaviors and the one or more portions of the dialogue on the common timeline.
    Type: Grant
    Filed: January 7, 2022
    Date of Patent: February 6, 2024
    Assignee: SAMSUNG ELECTRONICS CO., LTD.
    Inventors: Abhijit Z. Bendale, Pranav K. Mistry, Bola Yoo, Kijeong Kwon, Simon Gibbs, Anil Unnikrishnan, Link Huang
  • Patent number: 11893670
    Abstract: Provided are an animation generation method, apparatus, and system and a storage medium, relating to the field of animation technology. The method includes acquiring the real feature data of a real object, where the real feature data includes the action data and the face data of the real object during a performance process; determining the target feature data of a virtual character according to the real feature data, where the virtual character is a preset animation model, and the target feature data includes the action data and the face data of the virtual character; and generating the animation of the virtual character according to the target feature data. The performance of the real object is used for generating the animation of the virtual character.
    Type: Grant
    Filed: August 3, 2021
    Date of Patent: February 6, 2024
    Assignees: Mofa (Shanghai) Information Technology Co., Ltd., Shanghai Movu Technology Co., Ltd.
    Inventors: Jinxiang Chai, Wenping Zhao, Shihao Jin, Bo Liu, Tonghui Zhu, Hongbing Tan, Xingtang Xiong, Congyi Wang, Zhiyong Wang
  • Patent number: 11893671
    Abstract: Systems and methods for image retargeting are provided. Image data may be acquired that includes motion capture data indicative of motion of a plurality of markers disposed on a surface of a first subject. Each of the markers may be associated with a respective location on the first subject. A plurality of blendshapes may be calculated for the motion capture data based on a configuration of the markers. An error function may be identified for the plurality of blendshapes, and it may be determined that the plurality of blendshapes can be used to retarget a second subject based on the error function. The plurality of blendshapes may then be applied to a second subject to generate a new animation.
    Type: Grant
    Filed: September 22, 2020
    Date of Patent: February 6, 2024
    Assignee: SONY INTERACTIVE ENTERTAINMENT LLC
    Inventors: Mark Andrew Sagar, Tim Szu-Hsien Wu, Frank Filipe Bonniwell, Homoud B. Alkouh, Colin Joseph Hodges
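    A compact numpy sketch of the blendshape step in this entry (patent 11893671): fit per-frame blendshape weights to captured marker positions by least squares, treat the residual as the error function, and, if it is small enough, apply the same weights to a second subject's blendshape basis. The array shapes and the error threshold are assumptions.

```python
import numpy as np

def fit_blendshape_weights(markers, neutral, blendshapes):
    """markers, neutral: (n_markers*3,) vectors; blendshapes: (n_shapes, n_markers*3)
    deltas from the neutral. Returns (weights, residual_error)."""
    delta = markers - neutral
    weights, *_ = np.linalg.lstsq(blendshapes.T, delta, rcond=None)
    residual = np.linalg.norm(blendshapes.T @ weights - delta)
    return weights, residual

def retarget(weights, target_neutral, target_blendshapes):
    """Apply the fitted weights to a second subject's blendshape basis."""
    return target_neutral + target_blendshapes.T @ weights

rng = np.random.default_rng(0)
neutral = rng.normal(size=12)                 # 4 markers * 3 coordinates
basis = rng.normal(size=(3, 12))              # 3 blendshape deltas
true_w = np.array([0.4, 0.1, 0.7])
frame = neutral + basis.T @ true_w            # a synthetic captured frame

w, err = fit_blendshape_weights(frame, neutral, basis)
if err < 1e-6:                                 # error function says retargeting is acceptable
    target_neutral = rng.normal(size=12)
    target_basis = rng.normal(size=(3, 12))
    print(np.round(w, 3), retarget(w, target_neutral, target_basis).shape)
```
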
  • Patent number: 11893672
    Abstract: An embodiment for creating an avatar of an online viewer during a live video broadcast is provided. The embodiment may include receiving a number of in-person viewers at an event. The embodiment may also include identifying a number of online viewers watching the event remotely. The embodiment may further include, in response to determining that the number of in-person viewers at the event is below a threshold, identifying a location of the in-person viewers at the event. The embodiment may also include analyzing reactions of the online viewers to scenarios of the event. The embodiment may further include creating an avatar for each of the online viewers. The embodiment may also include populating each empty seat at the event with the created avatar for each online viewer. The embodiment may further include displaying the created avatars occupying the empty seats to the online viewers.
    Type: Grant
    Filed: March 2, 2021
    Date of Patent: February 6, 2024
    Assignee: INTERNATIONAL BUSINESS MACHINES CORPORATION
    Inventors: Sanjay Guha Thakurta, Sarbajit K. Rakshit, Karthik Krishnan, Venkataramana Logasundaram Jaganathan
  • Patent number: 11893673
    Abstract: A computer graphics animation system is provided to help prevent the generation of undesirable shapes by providing realistic examples of a subject, which are incorporated into an interpolation function that can be used to animate a new shape deformation of the subject.
    Type: Grant
    Filed: October 31, 2019
    Date of Patent: February 6, 2024
    Inventors: Tim Wu, Pavel Sumetc, David Bullivant
  • Patent number: 11893674
    Abstract: Aspects of the present disclosure are directed to creating interactive avatars that can be pinned as world-locked artificial reality content. Once pinned, an avatar can interact with the environment according to contextual cues and rules, without active control by the avatar owner. An interactive avatar system can configure the avatar with action rules, visual elements, and settings based on user selections. Once an avatar is configured and pinned to a location by an avatar owner, when other XR devices are at that location, a central system can provide the avatar (with its configurations) to the other XR device. This allows a user of that other XR device to discover and interact with the avatar according to the configurations established by the avatar owner.
    Type: Grant
    Filed: February 22, 2022
    Date of Patent: February 6, 2024
    Assignee: Meta Platforms Technologies, LLC
    Inventors: Campbell Orme, Matthew Adam Hanson, Daphne Liang, Matthew Roberts, Fangwei Lee, Bryant Jun-Yao Tang
  • Patent number: 11893675
    Abstract: Various implementations set forth a computer-implemented method for scanning a three-dimensional (3D) environment. The method includes generating, in a first time interval, a first extended reality (XR) stream based on a first set of meshes representing a 3D environment, transmitting, to a remote device, the first XR stream for rendering a 3D representation of a first portion of the 3D environment in a remote XR environment, determining that the 3D environment has changed based on a second set of meshes representing the 3D environment and generated subsequent to the first time interval, generating a second XR stream based on the second set of meshes, and transmitting, to the remote device, the second XR stream for rendering a 3D representation of at least a portion of the changed 3D environment in the remote XR environment.
    Type: Grant
    Filed: October 29, 2021
    Date of Patent: February 6, 2024
    Assignee: SPLUNK INC.
    Inventors: Devin Bhushan, Caelin Thomas Jackson-King, Stanislav Yazhenskikh, Jim Jiaming Zhu
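    One simple way to realize the "has the environment changed" check in this entry (patent 11893675) is to fingerprint each mesh and compare the fingerprint sets of the two time intervals; the Mesh tuple and the hashing choice are assumptions, not the product's internal representation.

```python
import hashlib
from typing import NamedTuple, Tuple

class Mesh(NamedTuple):
    vertices: Tuple[Tuple[float, float, float], ...]
    faces: Tuple[Tuple[int, int, int], ...]

def mesh_fingerprint(mesh: Mesh) -> str:
    """Stable digest of a mesh's geometry."""
    return hashlib.sha256(repr((mesh.vertices, mesh.faces)).encode()).hexdigest()

def environment_changed(first_set, second_set) -> bool:
    """The 3D environment changed if the set of mesh fingerprints differs
    between the first and second time intervals."""
    return {mesh_fingerprint(m) for m in first_set} != {mesh_fingerprint(m) for m in second_set}

quad = Mesh(vertices=((0, 0, 0), (1, 0, 0), (1, 1, 0), (0, 1, 0)),
            faces=((0, 1, 2), (0, 2, 3)))
moved = Mesh(vertices=((0, 0, 1), (1, 0, 1), (1, 1, 1), (0, 1, 1)),
             faces=((0, 1, 2), (0, 2, 3)))
print(environment_changed([quad], [quad]))   # False -> keep the first XR stream
print(environment_changed([quad], [moved]))  # True  -> generate a second XR stream
```
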
  • Patent number: 11893676
    Abstract: In one embodiment, a computing system may store, by first buffer blocks, texels organized into a texel array including a number of N×N texel sub-arrays. Each texel may fall within a corresponding N×N texel sub-array and may be associated with a two-dimensional sub-array coordinate indicating a position of that texel within the corresponding N×N texel sub-array. Each first buffer block may be assigned a particular two-dimensional sub-array coordinate and may store a texel subset having the particular two-dimensional sub-array coordinate. The system may receive, by filter blocks, texels from the first buffer blocks. Each filter block may receive a texel from each first buffer block to form a corresponding N×N texel sub-array. The system may perform, by the filter blocks, sampling operations in parallel on their respective N×N texel sub-arrays.
    Type: Grant
    Filed: December 27, 2021
    Date of Patent: February 6, 2024
    Assignee: Meta Platforms Technologies, LLC
    Inventor: Larry Seiler
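    The buffer-block assignment in this entry (patent 11893676) can be illustrated in a few lines: a texel at (x, y) gets the sub-array coordinate (x mod N, y mod N), the buffer block with that coordinate stores it, and any grid-aligned N×N window then touches each buffer block exactly once, which is what lets a filter block read a full sub-array in parallel. The dictionary-of-lists layout only illustrates the addressing, not the hardware.

```python
N = 4

def buffer_block_of(x: int, y: int):
    """Two-dimensional sub-array coordinate = which buffer block stores texel (x, y)."""
    return (x % N, y % N)

def distribute(width: int, height: int):
    """Assign every texel of a width x height texel array to its buffer block."""
    blocks = {(i, j): [] for i in range(N) for j in range(N)}
    for y in range(height):
        for x in range(width):
            blocks[buffer_block_of(x, y)].append((x, y))
    return blocks

blocks = distribute(16, 16)
print(all(len(texels) == 16 for texels in blocks.values()))  # each block holds an equal share

# Any N x N sub-array aligned to the grid draws exactly one texel from each buffer block,
# so all N*N texels of the sub-array can be fetched in parallel by a filter block.
window = [(x, y) for y in range(4, 8) for x in range(8, 12)]
touched = [buffer_block_of(x, y) for (x, y) in window]
print(len(set(touched)) == N * N)  # True
```
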
  • Patent number: 11893677
    Abstract: Systems and techniques are provided for widening a hierarchical structure for ray tracing. For instance, a process can include obtaining a plurality of primitives of a scene object included in a first hierarchical acceleration data structure and determining one or more candidate hierarchical acceleration data structures each including the plurality of primitives. A cost metric can be determined for the one or more candidate hierarchical acceleration data structures and, based on the cost metric, a compressibility prediction associated with a candidate hierarchical acceleration data structure of the one or more candidate hierarchical acceleration data structures can be determined. An output hierarchical acceleration data structure can be generated based on the compressibility prediction.
    Type: Grant
    Filed: July 29, 2022
    Date of Patent: February 6, 2024
    Assignee: QUALCOMM Incorporated
    Inventors: Adimulam Ramesh Babu, Srihari Babu Alla, Avinash Seetharamaiah, Jonnala Gadda Nagendra Kumar, David Kirk McAllister
  • Patent number: 11893678
    Abstract: Disclosed herein are an apparatus and method for searching for a global minimum of a point cloud registration error. The apparatus includes memory in which at least one program is recorded and a processor for executing the program. The program performs collecting multiple registration results in which the registration error between a source point cloud and a target point cloud is a local minimum as candidates and selecting the registration result in which the registration error between the source point cloud and the target point cloud is a global minimum, among the candidates. Collecting the multiple registration results may comprise repeatedly randomly initializing the source point cloud and the target point cloud and registering the initialized source point cloud to the initialized target point cloud to thereby search for a registration result in which the registration error therebetween is a local minimum.
    Type: Grant
    Filed: January 25, 2022
    Date of Patent: February 6, 2024
    Assignee: ELECTRONICS AND TELECOMMUNICATIONS RESEARCH INSTITUTE
    Inventors: Hyuk-Min Kwon, Hyun-Cheol Kim, Jeong-Il Seo, Sang-Woo Ahn, Seung-Jun Yang
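    The candidate-collection loop of this entry (patent 11893678) can be sketched as a random-restart wrapper around any local registration routine. The `local_register` argument is a hypothetical placeholder standing in for an ICP-style solver; here a closed-form Kabsch alignment with known correspondences is plugged in just so the sketch runs end to end.

```python
import numpy as np

def random_rigid_transform(rng):
    """Random rotation (via QR) and translation used to re-initialize the source cloud."""
    q, _ = np.linalg.qr(rng.normal(size=(3, 3)))
    if np.linalg.det(q) < 0:
        q[:, 0] *= -1.0
    return q, rng.uniform(-1.0, 1.0, size=3)

def search_global_minimum(source, target, local_register, n_restarts=20, seed=0):
    """Collect local-minimum registrations from random initializations and
    return the candidate with the smallest registration error."""
    rng = np.random.default_rng(seed)
    candidates = []
    for _ in range(n_restarts):
        r, t = random_rigid_transform(rng)
        init = source @ r.T + t                           # randomly initialized source
        error, registered = local_register(init, target)  # local minimum for this start
        candidates.append((error, registered))
    return min(candidates, key=lambda c: c[0])            # global minimum among candidates

def kabsch_align(source, target):
    """Toy local registration with known correspondences; returns (RMS error, registered)."""
    sc, tc = source.mean(axis=0), target.mean(axis=0)
    u, _, vt = np.linalg.svd((source - sc).T @ (target - tc))
    r = (u @ vt).T
    if np.linalg.det(r) < 0:
        vt[-1] *= -1.0
        r = (u @ vt).T
    registered = (source - sc) @ r.T + tc
    return float(np.sqrt(((registered - target) ** 2).sum(axis=1).mean())), registered

rng = np.random.default_rng(1)
target = rng.normal(size=(100, 3))
source = target + np.array([0.5, -0.2, 0.1])
best_error, _ = search_global_minimum(source, target, kabsch_align)
print(round(best_error, 6))  # ~0 for this toy case
```
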
  • Patent number: 11893679
    Abstract: The disclosure describes methods and devices for transmitting and rendering a 3D scene.
    Type: Grant
    Filed: July 15, 2020
    Date of Patent: February 6, 2024
    Assignee: INTERDIGITAL VC HOLDINGS, INC.
    Inventors: Yvon Legallais, Charline Taibi, Serge Travert, Charles Salmon-Legagneur
  • Patent number: 11893680
    Abstract: A computing device for video object detection is provided. Images from a camera are transferred in parallel to a first processor running object detection and a second processor running a 3D reconstruction. The object detection identifies a semantic object of interest and assigns a label to it and outputs the label information to an object mapper. The object mapper assigns the label to a component in the 3D model representing the object. The computing device can form part of a subsea or other harsh environment imaging system.
    Type: Grant
    Filed: October 31, 2019
    Date of Patent: February 6, 2024
    Assignee: Rovco Limited
    Inventors: Iain Wallace, Pep Lluis Negre Carrasco, Lyndon Hill
  • Patent number: 11893681
    Abstract: Provided is a two-dimensional (2D) image processing method including obtaining a 2D image, processing the obtained 2D image by using a trained convolutional neural network (CNN) to obtain at least one camera parameter and at least one face model parameter from the 2D image, and generating a three-dimensional (3D) face model, based on the obtained at least one camera parameter and at least one face model parameter.
    Type: Grant
    Filed: December 6, 2019
    Date of Patent: February 6, 2024
    Assignee: Samsung Electronics Co., Ltd.
    Inventors: Ivan Viktorovich Glazistov, Ivan Olegovich Karacharov, Andrey Yurievich Shcherbinin, Ilya Vasilievich Kurilin
  • Patent number: 11893682
    Abstract: One variation of a method includes: accessing a 2D color image recorded by a 2D color camera and a 3D point cloud recorded by a 3D depth sensor at approximately a first time, the 2D color camera and the 3D depth sensor defining intersecting fields of view and facing outwardly from an autonomous vehicle; detecting a cluster of points in the 3D point cloud representing a continuous surface approximating a plane; isolating a cluster of color pixels in the 2D color image depicting the continuous surface; projecting the cluster of color pixels onto the plane to define a set of synthetic 3D color points in the 3D point cloud, the cluster of points and the set of synthetic 3D color points representing the continuous surface; and rendering points in the 3D point cloud and the set of synthetic 3D color points on a display.
    Type: Grant
    Filed: October 10, 2022
    Date of Patent: February 6, 2024
    Inventors: Kah Seng Tay, Qing Sun, James Patrick Marion
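    A numpy sketch of the projection step in this entry (patent 11893682): fit a plane to the detected cluster of 3D points, then cast each color pixel's camera ray onto that plane to obtain a synthetic 3D color point. The pinhole intrinsics, the camera-at-origin convention, and the SVD plane fit are assumptions made for the sketch.

```python
import numpy as np

def fit_plane(points):
    """Least-squares plane through a 3D point cluster: returns (unit normal n, offset d)
    such that n . x = d for points x on the plane."""
    centroid = points.mean(axis=0)
    _, _, vt = np.linalg.svd(points - centroid)
    normal = vt[-1]                                  # direction of smallest variance
    return normal, float(normal @ centroid)

def synthetic_3d_color_points(pixels, colors, K, normal, d):
    """Project 2D color pixels onto the plane n . x = d, with the camera at the origin."""
    rays = np.column_stack([pixels, np.ones(len(pixels))]) @ np.linalg.inv(K).T
    t = d / (rays @ normal)                          # ray-plane intersection depth
    points_3d = rays * t[:, None]
    return np.column_stack([points_3d, colors])      # x, y, z, r, g, b per synthetic point

# A cluster of cloud points approximating the plane z = 2 (the continuous surface).
rng = np.random.default_rng(0)
cluster = np.column_stack([rng.random(200), rng.random(200), np.full(200, 2.0)])
normal, d = fit_plane(cluster)

K = np.array([[500.0, 0.0, 320.0], [0.0, 500.0, 240.0], [0.0, 0.0, 1.0]])  # pinhole intrinsics
pixels = np.array([[320.0, 240.0], [400.0, 240.0]])                        # color pixels on the surface
colors = np.array([[255, 0, 0], [0, 255, 0]])
print(synthetic_3d_color_points(pixels, colors, K, normal, d).round(2))
```
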
  • Patent number: 11893683
    Abstract: An Artificial Intelligence (AI) three-dimensional modeling system is provided that analyzes and segments imagery of a room, generates a three-dimensional model of the room from the segmented imagery, identifies objects within the room, and conducts an assessment of the room based on the identified objects.
    Type: Grant
    Filed: February 23, 2023
    Date of Patent: February 6, 2024
    Assignee: The Travelers Indemnity Company
    Inventors: Hoa Ton-That, Ryan M. Scanlon, Douglas L. Roy
  • Patent number: 11893684
    Abstract: Disclosed is an encoding and decoding system and associated methods for producing a compressed waveform that encodes data points of a point cloud in a format and size that may be transmitted over a data network, decompressed, decoded, and rendered on a remote device without the buffering or lag associated with transmitting and rendering an uncompressed point cloud. The encoder receives a request from a remote device to access the point cloud, encodes a set of data points from the point cloud as one or more signals derived from values defined for the positional and non-positional elements of each data point from the set of data points, generates one or more compressed waveforms from compressing the one or more signals and transmits the one or more compressed waveforms to the remote device in response to the request for decompression, decoding, and image rendering.
    Type: Grant
    Filed: August 3, 2023
    Date of Patent: February 6, 2024
    Assignee: Illuscio, Inc.
    Inventors: Robert Monaghan, Fatemeh Jamalidinan, Mark Weingartner, Dwayne Elahie, Kevin Edward Dean
  • Patent number: 11893685
    Abstract: The present disclosure provides a landform map building method and apparatus, an electronic device and a readable storage medium, and relates to the field of image processing technologies.
    Type: Grant
    Filed: November 17, 2021
    Date of Patent: February 6, 2024
    Assignee: BEIJING BAIDU NETCOM SCIENCE TECHNOLOGY CO., LTD.
    Inventors: Junjun Zhang, Yi Zeng, Xiaopei Hou
  • Patent number: 11893687
    Abstract: The disclosure relates to a computer-implemented method comprising inputting a representation of a 3D modeled object to an abstraction neural network which outputs a first set of a first number of first primitives fitting the 3D modeled object; and determining, from the first set, one or more second sets each of a respective second number of respective second primitives. The second number is lower than the first number. The determining includes initializing a third set of third primitives as the first set and performing one or more iterations, each comprising merging one or more subsets of third primitives together, each into one respective single fourth primitive, to thereby obtain a fourth set of fourth primitives. Each iteration further comprises setting the third set of a next iteration as the fourth set of a current iteration and setting the one or more second sets as one or more obtained fourth sets.
    Type: Grant
    Filed: July 15, 2022
    Date of Patent: February 6, 2024
    Assignee: DASSAULT SYSTEMES
    Inventors: Mariem Mezghanni, Julien Boucher, Paul Villedieu
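    A toy version of the merge-and-iterate idea in this entry (patent 11893687), using spheres as the primitives and repeatedly merging the closest pair into a single bounding sphere until a lower target count is reached. The sphere representation, the pair-selection rule, and the bounding-sphere merge are all illustrative assumptions; the patent itself operates on primitives produced by an abstraction neural network.

```python
import math
from itertools import combinations

def merge_spheres(a, b):
    """Smallest sphere containing two spheres, each given as (cx, cy, cz, r)."""
    (ax, ay, az, ar), (bx, by, bz, br) = a, b
    d = math.dist((ax, ay, az), (bx, by, bz))
    if d + br <= ar:       # b is inside a
        return a
    if d + ar <= br:       # a is inside b
        return b
    r = (d + ar + br) / 2.0
    t = (r - ar) / d       # move from a's center toward b's center
    return (ax + t * (bx - ax), ay + t * (by - ay), az + t * (bz - az), r)

def reduce_primitives(primitives, target_count):
    """Iteratively merge the closest pair of primitives into one single primitive
    until the requested (lower) count is reached."""
    prims = list(primitives)
    while len(prims) > target_count:
        i, j = min(combinations(range(len(prims)), 2),
                   key=lambda ij: math.dist(prims[ij[0]][:3], prims[ij[1]][:3]))
        merged = merge_spheres(prims[i], prims[j])
        prims = [p for k, p in enumerate(prims) if k not in (i, j)] + [merged]
    return prims

first_set = [(0, 0, 0, 1.0), (0.5, 0, 0, 1.0), (10, 0, 0, 1.0), (10.4, 0, 0, 1.0)]
print(reduce_primitives(first_set, target_count=2))
```
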
  • Patent number: 11893688
    Abstract: A computer-implemented method of smoothing a transition between two mesh sequences to be rendered successively comprises steps of: (a) providing first and second mesh sequences {1} and {2}, each formed of mesh frames, to be fused into a fused sequence; (b) selecting mesh frames gn and gm being candidates for fusing therebetween; calculating geometric rigid and/or non-rigid transformations of candidate frames gn and gm belonging to said first and second mesh sequences {1} and {2}; applying calculated geometric rigid and/or non-rigid transformations to candidate frames gn and gm belonging to said first and second mesh sequences {1} and {2}; calculating textural transformations of said candidate frames gn and gm belonging to said first and second mesh sequences {1} and {2}; and applying calculated textural transformation to said candidate frames gn and gm belonging to said first and second mesh sequences {1} and {2}.
    Type: Grant
    Filed: December 15, 2021
    Date of Patent: February 6, 2024
    Assignee: TETAVI LTD.
    Inventors: Sefy Kagarlitsky, Shirley Keinan, Amir Green, Yair Baruch, Roi Lev, Michael Birnboim, Miky Tamir
  • Patent number: 11893689
    Abstract: In various example embodiments, a system and methods are presented for generation and manipulation of three dimensional (3D) models. The system and methods cause presentation of an interface frame encompassing a field of view of an image capture device. The systems and methods detect an object of interest within the interface frame, generate a movement instruction with respect to the object of interest, and detect a first change in position and a second change in position of the object of interest. The systems and methods generate a 3D model of the object of interest based on the first change in position and the second change in position.
    Type: Grant
    Filed: August 11, 2022
    Date of Patent: February 6, 2024
    Assignee: Snap Inc.
    Inventors: Samuel Edward Hare, Ebony James Charlton, Andrew James McPhee, Michael John Evans
  • Patent number: 11893690
    Abstract: A computer-implemented method for 3D reconstruction including obtaining 2D images and, for each 2D image, camera parameters which define a perspective projection. The 2D images all represent a same real object. The real object is fixed. The method also includes obtaining, for each 2D image, a smooth map. The smooth map has pixel values, and each pixel value represents a measurement of contour presence. The method also includes determining a 3D modeled object that represents the real object. The determining iteratively optimizes energy. The energy rewards, for each smooth map, projections of silhouette vertices of the 3D modeled object having pixel values representing a high measurement of contour presence. This forms an improved solution for 3D reconstruction.
    Type: Grant
    Filed: September 21, 2022
    Date of Patent: February 6, 2024
    Assignee: DASSAULT SYSTEMES
    Inventors: Serban Alexandru State, Eloi Mehr, Yoan Souty
  • Patent number: 11893691
    Abstract: A method, computer program, and computer system is provided for processing point cloud data. Quantized point cloud data including a plurality of voxels is received. An occupancy map is generated for the quantized point cloud corresponding to lost voxels during quantization from among the plurality of voxels. A point cloud is reconstructed from the quantized point cloud data based on populating the lost voxels.
    Type: Grant
    Filed: June 11, 2021
    Date of Patent: February 6, 2024
    Assignee: TENCENT AMERICA LLC
    Inventors: Anique Akhtar, Wen Gao, Xiang Zhang, Shan Liu
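    A minimal sketch of the lost-voxel bookkeeping in this entry (patent 11893691): quantizing point positions collapses some occupied voxels, an occupancy map records the voxels that disappear, and reconstruction re-populates them. The integer grid and the flooring quantizer are assumptions of the sketch.

```python
import numpy as np

def quantize(points, step):
    """Quantize point coordinates to a coarser grid; collapsed duplicates are dropped."""
    return np.unique((np.floor(points / step) * step).astype(int), axis=0)

def lost_voxel_occupancy_map(points, quantized):
    """Unit-grid voxels occupied by the original cloud but absent after quantization."""
    original = {tuple(v) for v in points.astype(int)}
    kept = {tuple(v) for v in quantized}
    return original - kept

def reconstruct(quantized, occupancy_map):
    """Populate the lost voxels recorded in the occupancy map."""
    restored = np.array(sorted(occupancy_map)) if occupancy_map else np.empty((0, 3), int)
    return np.vstack([quantized, restored])

points = np.array([[0, 0, 0], [1, 0, 0], [2, 0, 0], [5, 5, 5]])
q = quantize(points, step=2)                    # [[0,0,0],[2,0,0],[4,4,4]]
occ = lost_voxel_occupancy_map(points, q)       # {(1,0,0), (5,5,5)}
print(q.tolist(), sorted(occ), reconstruct(q, occ).shape)
```
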
  • Patent number: 11893692
    Abstract: Disclosed is a graphics system and associated methodologies for selectively increasing the level-of-detail at specific parts of a mesh model based on a point cloud that provides a higher detailed representation of the same or similar three-dimensional (“3D”) object. The graphics system receives the mesh model and the point cloud of the 3D object. The graphics system determines a region-of-interest of the 3D object based in part on differences amongst points that represent part or all of the region-of-interest. The graphics system reconstructs the region-of-interest in the mesh model and generates a modified mesh model by modifying a first set of meshes representing the region-of-interest in the mesh model to a second set of meshes based on the positional elements of the point cloud points. The second set of meshes has more meshes and represents the region-of-interest at a higher level-of-detail than the first set of meshes.
    Type: Grant
    Filed: May 31, 2023
    Date of Patent: February 6, 2024
    Assignee: Illuscio, Inc.
    Inventor: DonEliezer Baize
  • Patent number: 11893693
    Abstract: Generating and storing digital media can be resource intensive processes. Some systems and methods disclosed herein relate to generating digital media using a pre-existing three-dimensional (3D) model of an object and feature points of the object. According to an embodiment, a method includes an e-commerce platform receiving a request for digital media depicting an object. In response to the request, the e-commerce platform may obtain a 3D model corresponding to the object and data pertaining to one or more feature points of the object. The one or more feature points may correspond to respective views of the 3D model. The e-commerce platform may then generate the digital media based on the 3D model and the one or more feature points, where the digital media could include renders of the 3D model depicting the one or more feature points.
    Type: Grant
    Filed: June 14, 2021
    Date of Patent: February 6, 2024
    Inventors: Jonathan Wade, Juho Mikko Haapoja, Stephan Leroux, Daniel Beauchamp
  • Patent number: 11893694
    Abstract: A control apparatus for work machines includes a position designation reception unit configured to identify designation of a position with respect to a state image displayed on a display panel and a screen control unit configured to perform screen control according to an image displayed at the position that has been identified, among partial images constituting parts of the state image.
    Type: Grant
    Filed: April 14, 2020
    Date of Patent: February 6, 2024
    Assignee: Komatsu Ltd.
    Inventors: Shintaro Hamada, Yoshiyuki Onishi, Mitsuhiro Aoki
  • Patent number: 11893695
    Abstract: A VR (Virtual Reality) simulator projects or displays a virtual space image on a screen installed at a position distant from a user in a real space and not integrally moving with the user. More specifically, the VR simulator acquires a real user position being a position of the user's head in the real space. The VR simulator acquires a virtual user position being a position in a virtual space corresponding to the real user position. Then, the VR simulator acquires the virtual space image by imaging the virtual space by using a camera placed at the virtual user position in the virtual space, based on virtual space configuration information indicating a configuration of the virtual space. Here, the VR simulator adjusts a focal length of the camera such that perspective corresponding to a distance between the real user position and the screen is cancelled.
    Type: Grant
    Filed: May 26, 2022
    Date of Patent: February 6, 2024
    Assignee: TOYOTA JIDOSHA KABUSHIKI KAISHA
    Inventor: Wataru Kaku
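    The focal-length adjustment in this entry (patent 11893695) can be sketched with the standard relation between a physical screen viewed from a given distance and a virtual camera's field of view; the screen size, distances, and pixel resolution below are arbitrary example values.

```python
import math

def camera_settings_for_screen(user_to_screen_m: float,
                               screen_height_m: float,
                               image_height_px: int):
    """Choose the virtual camera's field of view / focal length so that the perspective
    corresponding to the user's distance from the screen is cancelled: the camera sees
    exactly what the user would see through the screen treated as a window."""
    vertical_fov = 2.0 * math.atan((screen_height_m / 2.0) / user_to_screen_m)
    focal_length_px = (image_height_px / 2.0) / math.tan(vertical_fov / 2.0)
    return vertical_fov, focal_length_px

# As the tracked head position moves closer to the screen, the field of view widens.
for distance in (3.0, 2.0, 1.0):
    fov, f = camera_settings_for_screen(distance, screen_height_m=2.0, image_height_px=1080)
    print(f"{distance} m -> FOV {math.degrees(fov):.1f} deg, focal length {f:.0f} px")
```
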
  • Patent number: 11893696
    Abstract: Methods, systems, and computer readable media for providing an extended reality (XR) user interface are disclosed. A method for providing an XR user interface occurs at a user device executing an XR application.
    Type: Grant
    Filed: August 25, 2021
    Date of Patent: February 6, 2024
    Assignee: THE TRUSTEES OF THE UNIVERSITY OF PENNSYLVANIA
    Inventors: Stephen H. Lane, Matthew Anthony Boyd-Surka, Yaoyi Bai, Aline Sarah Normoyle
  • Patent number: 11893697
    Abstract: A controller outputs an image of the virtual space in correspondence with a posture of a first user wearing the mounted display, outputs an image of the virtual space to a touch panel display used by a second user, performs a first action in the virtual space in correspondence with a touch operation performed by the second user on the touch panel display, outputs an image of the virtual space reflecting the first action to the mounted display, and performs a second action in the virtual space reflecting the first action in correspondence with an operation performed by the first user on an operation unit.
    Type: Grant
    Filed: August 27, 2021
    Date of Patent: February 6, 2024
    Assignee: GREE, INC.
    Inventor: Masashi Watanabe
  • Patent number: 11893698
    Abstract: An electronic device according to various embodiments of the disclosure includes: a communication module comprising communication circuitry and a processor operatively connected to the communication module. The processor may be communicatively connected to an augmented reality (AR) device through the communication module, and be configured to receive image information obtained by a camera of the AR device from the AR device, to detect an object based on the received image information, to acquire virtual information corresponding to the object, to control the communication module to transmit the virtual information to the AR device, to determine, based on the received image information, whether the object is out of a viewing range of the AR device, and to change a transfer interval of the virtual information for the AR device based on the determination.
    Type: Grant
    Filed: December 23, 2021
    Date of Patent: February 6, 2024
    Assignee: Samsung Electronics Co., Ltd.
    Inventors: Seungbum Lee, Seungseok Hong, Donghyun Yeom
  • Patent number: 11893699
    Abstract: A method and processing unit for providing content in a bandwidth constrained environment is disclosed. Initially, a content along with audio inputs, which is received during rendering of the content and provided to one or more users in a bandwidth constrained environment is received. Further, at least one object of interest within the content and associated with the audio inputs is identified. One or more regions of interest, including the at least one object of interest, is determined in the bandwidth constrained environment. Upon determining the one or more regions of interest, bitrate for rendering the content is modified based on the determined one or more regions of interest, to obtain a modified content for the bandwidth constrained environment. The modified content is provided to be rendered in the bandwidth constrained environment.
    Type: Grant
    Filed: March 15, 2022
    Date of Patent: February 6, 2024
    Assignee: Zeality Inc
    Inventors: Dipak Mahendra Patel, Avram Maxwell Horowitz, Karla Celina Varela-Huezo
  • Patent number: 11893700
    Abstract: Spatial information that describes spatial locations of visual objects as in a three-dimensional (3D) image space as represented in one or more multi-view unlayered images is accessed. Based on the spatial information, a cinema image layer and one or more device image layers are generated from the one or more multi-view unlayered images. A multi-layer multi-view video signal comprising the cinema image layer and the device image layers is sent to downstream devices for rendering.
    Type: Grant
    Filed: April 28, 2022
    Date of Patent: February 6, 2024
    Assignee: Dolby Laboratories Licensing Corporation
    Inventors: Ajit Ninan, Neil Mammen, Tyrome Y. Brown
  • Patent number: 11893701
    Abstract: A preferred method for dynamically displaying virtual and augmented reality scenes can include determining input parameters, calculating virtual photometric parameters, and rendering a VAR scene with a set of simulated photometric parameters.
    Type: Grant
    Filed: June 15, 2022
    Date of Patent: February 6, 2024
    Assignee: Dropbox, Inc.
    Inventors: Terrence Edward McArdle, Benjamin Zeis Newhouse