Patents Examined by Ming Wu
-
Patent number: 11949869
Abstract: A three-dimensional data encoding method includes: (i) in a first case where a layered structure is generated by classifying three-dimensional points into layers: encoding attribute information for the three-dimensional points based on the layered structure; and generating a bitstream including layer information utilized for the generation of the layered structure; and (ii) in a second case where the three-dimensional points are not classified: encoding attribute information for the three-dimensional points; and generating a bitstream not including the layer information.
Type: Grant
Filed: September 14, 2021
Date of Patent: April 2, 2024
Assignee: PANASONIC INTELLECTUAL PROPERTY CORPORATION OF AMERICA
Inventors: Toshiyasu Sugio, Noritaka Iguchi
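The branch described in the abstract amounts to a single signaled flag: layer information is written only when the layered structure was actually generated. The sketch below is a minimal illustration of that idea, assuming a byte-oriented writer; the flag layout, field widths, and per-point attribute bytes are illustrative and not the codec's actual bitstream syntax.

```python
import struct

def write_attribute_bitstream(points, use_layering, num_layers=0):
    """Encode per-point attributes; layer information is signaled only when layering is used."""
    out = bytearray()
    out += struct.pack("B", 1 if use_layering else 0)        # layering flag
    if use_layering:
        out += struct.pack("B", num_layers)                  # layer information in the bitstream
        for layer in range(num_layers):                      # attributes grouped layer by layer
            for p in points:
                if p["layer"] == layer:
                    out += struct.pack("B", p["attr"] & 0xFF)
    else:
        for p in points:                                      # flat encoding, no layer information
            out += struct.pack("B", p["attr"] & 0xFF)
    return bytes(out)
```

A decoder reading such a stream would check the flag first and skip the layer-info field entirely in the second case.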
-
Patent number: 11941770
Abstract: A method and a system for garment try-on. The method includes: capturing a three-dimensional (3D) image of a customer; obtaining a first 3D pose of the customer in the 3D image; applying a machine learning model to the 3D pose to generate a first skinned multi-person linear model (SMPL) pose; calculating an angle of whole-body rotation of the customer based on the first SMPL pose; when the angle is within a predefined range relative to a front direction of the customer, constructing an SMPL model using the first SMPL pose; and when the angle is outside the predefined range, generating a second SMPL pose using a two-dimensional (2D) component of the 3D image and constructing the SMPL model using the second SMPL pose.
Type: Grant
Filed: December 30, 2020
Date of Patent: March 26, 2024
Assignees: BEIJING WODONG TIANJUN INFORMATION TECHNOLOGY CO., LTD., JD.COM AMERICAN TECHNOLOGIES CORPORATION
Inventors: Xiaochuan Fan, Dan Miao, Chumeng Lyu
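The switch between the 3D-derived and the 2D-derived pose reduces to a threshold test on the estimated whole-body rotation. A rough sketch of that selection follows; `pose_from_3d` and `pose_from_2d` are hypothetical stand-ins for the trained models, and both the angle computation and the 45° range are assumptions rather than values from the patent.

```python
import numpy as np

FRONT_RANGE_DEG = 45.0   # assumed width of the "predefined range" around the front direction

def choose_smpl_pose(rgbd_frame):
    pose_3d = pose_from_3d(rgbd_frame)                    # hypothetical: first SMPL pose from the 3D image
    angle_deg = np.degrees(np.linalg.norm(pose_3d[:3]))   # crude whole-body rotation from the root joint
    if angle_deg <= FRONT_RANGE_DEG:
        return pose_3d                                    # near-frontal: keep the 3D-derived pose
    return pose_from_2d(rgbd_frame)                       # hypothetical: second SMPL pose from the 2D component
```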
-
Patent number: 11943271
Abstract: A method, computer program, and computer system are provided for streaming immersive media. Content is ingested in a first two-dimensional format or a first three-dimensional format, whereby the format references a neural network. The ingested content is converted to a second two-dimensional or a second three-dimensional format based on the referenced neural network. The converted content is streamed to a client end-point, such as a television, a computer, a head-mounted display, a lenticular light field display, a holographic display, an augmented reality display, or a dense light field display.
Type: Grant
Filed: August 20, 2021
Date of Patent: March 26, 2024
Assignee: TENCENT AMERICA LLC
Inventors: Arianne Hinds, Stephan Wenger
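One way to read the conversion step is as a dispatch from client endpoint type to a target format, with the actual transformation delegated to the neural network referenced by the ingested content. The sketch below illustrates that routing only; `load_network`, `network.convert`, and the format names are placeholders, not an actual API.

```python
TARGET_FORMAT = {                     # illustrative mapping from endpoint type to output format
    "television": "2d_video",
    "computer": "2d_video",
    "head_mounted_display": "stereo_3d",
    "lenticular_light_field_display": "light_field",
    "holographic_display": "hologram",
    "augmented_reality_display": "stereo_3d",
    "dense_light_field_display": "light_field",
}

def adapt_and_stream(ingested, client_type, connection):
    network = load_network(ingested.network_reference)    # hypothetical: the neural network referenced by the format
    converted = network.convert(ingested.media, TARGET_FORMAT[client_type])
    for chunk in converted:                               # stream the converted representation to the endpoint
        connection.send(chunk)
```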
-
Patent number: 11941751
Abstract: Techniques for aligning images generated by two cameras are disclosed. This alignment is performed by computing a relative 3D orientation between the two cameras. A first gravity vector for a first camera and a second gravity vector for a second camera are determined. A first camera image is obtained from the first camera, and a second camera image is obtained from the second camera. A first alignment process is performed to partially align the first camera's orientation with the second camera's orientation. This process is performed by aligning the gravity vectors, thereby resulting in two degrees of freedom of the relative 3D orientation being eliminated. Visual correspondences between the two images are identified. A second alignment process is performed to fully align the orientations. This process is performed by using the identified visual correspondences to identify and eliminate a third degree of freedom of the relative 3D orientation.
Type: Grant
Filed: March 30, 2023
Date of Patent: March 26, 2024
Assignee: Microsoft Technology Licensing, LLC
Inventors: Raymond Kirk Price, Michael Bleyer, Christopher Douglas Edmonds, Sudipta Narayan Sinha
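The first alignment step can be written down directly: find the rotation that maps one camera's gravity vector onto the other's, which pins down two of the three rotational degrees of freedom. A minimal NumPy sketch using Rodrigues' formula:

```python
import numpy as np

def rotation_aligning_gravity(g1, g2):
    """Return R such that R @ g1 is parallel to g2 (Rodrigues' rotation formula)."""
    a = g1 / np.linalg.norm(g1)
    b = g2 / np.linalg.norm(g2)
    v = np.cross(a, b)
    c = float(np.dot(a, b))
    if np.isclose(c, -1.0):                       # antiparallel: rotate 180 deg about any orthogonal axis
        axis = np.cross(a, [1.0, 0.0, 0.0])
        if np.linalg.norm(axis) < 1e-8:           # a was parallel to x; pick another reference axis
            axis = np.cross(a, [0.0, 1.0, 0.0])
        axis /= np.linalg.norm(axis)
        return 2.0 * np.outer(axis, axis) - np.eye(3)
    K = np.array([[0.0, -v[2], v[1]],
                  [v[2], 0.0, -v[0]],
                  [-v[1], v[0], 0.0]])
    return np.eye(3) + K + (K @ K) / (1.0 + c)
```

The remaining degree of freedom, a rotation about the shared gravity direction, is the one the second alignment step recovers from the visual correspondences.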
-
Patent number: 11925519
Abstract: A method for evaluating a dental situation of a patient. The method has the following successive steps: 1) generating an initial model of at least one dental arch of the patient, preferably by means of a scanner; 2) splitting the initial model in order to define a tooth model for at least some of the teeth represented on the initial model and thereby obtain a split model; 3) determining an initial support curve of the tooth models in the split model; 4) fixing each tooth model virtually on the initial support curve, preferably by computer; 5) modifying the split model by deforming the initial support curve into a deformed support curve, so as to obtain a first deformed model in which the tooth models are aligned along the deformed support curve; 6) presenting the first deformed model.
Type: Grant
Filed: July 6, 2020
Date of Patent: March 12, 2024
Assignee: DENTAL MONITORING
Inventors: Philippe Salah, Thomas Pellissard, Laurent Debraux, Louis-Charles Roisin
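The geometric core of step 5 is that each tooth model keeps its parameter along the support curve and simply follows the curve when it is deformed. The sketch below illustrates only that re-attachment, with curves treated as 3D polylines and teeth anchored by arc-length parameters; tooth orientation and collision handling, which the full method would need, are ignored.

```python
import numpy as np

def place_on_curve(curve_pts, t):
    """Interpolate a 3D position at normalized arc length t in [0, 1] along a polyline."""
    seg = np.linalg.norm(np.diff(curve_pts, axis=0), axis=1)
    s = np.concatenate([[0.0], np.cumsum(seg)]) / seg.sum()
    return np.array([np.interp(t, s, curve_pts[:, k]) for k in range(3)])

def realign_teeth(tooth_anchors, deformed_curve):
    """tooth_anchors: {tooth_id: arc-length parameter fixed on the initial support curve}."""
    return {tooth: place_on_curve(deformed_curve, t) for tooth, t in tooth_anchors.items()}
```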
-
Patent number: 11928830
Abstract: Disclosed are methods and systems for generating three-dimensional reconstructions of environments. A system, for example, may include a housing having an image sensor directed in a first direction and a distance sensor directed in a second direction and a control unit including a processor and a memory storing instructions. The processor may be configured to execute the instructions to: generate a first 3D model of an environment; generate a plurality of revolved 3D models by revolving the first 3D model relative to the image sensor to a plurality of positions within a predetermined angular range; match a set of distance values to one of the revolved 3D models; determine an angular position of the second direction relative to the first direction; and generate a 3D reconstruction of the environment.
Type: Grant
Filed: December 22, 2021
Date of Patent: March 12, 2024
Assignee: Honeywell International Inc.
Inventors: Zhiguo Ren, Alberto Speranzon, Carl Dins, Juan Hu, Zhiyong Dai, Vijay Venkataraman
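The matching step can be pictured as a small search: revolve the camera-derived model through candidate angles inside the predetermined range and keep the angle whose predicted ranges best fit the distance-sensor readings. The sketch below assumes a revolution about a single vertical axis and uses a hypothetical `ray_cast_distances` helper in place of actual range prediction.

```python
import numpy as np

def estimate_sensor_angle(model_pts, measured_dists, angular_range_deg=30.0, step_deg=1.0):
    """Brute-force search for the relative angle within a predetermined angular range."""
    best_angle, best_err = None, np.inf
    for angle in np.arange(-angular_range_deg, angular_range_deg + step_deg, step_deg):
        theta = np.radians(angle)
        R = np.array([[np.cos(theta), -np.sin(theta), 0.0],      # revolve about the vertical axis
                      [np.sin(theta),  np.cos(theta), 0.0],
                      [0.0, 0.0, 1.0]])
        revolved = model_pts @ R.T
        predicted = ray_cast_distances(revolved)                 # hypothetical range prediction
        err = float(np.mean((predicted - measured_dists) ** 2))
        if err < best_err:
            best_angle, best_err = angle, err
    return best_angle
```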
-
Patent number: 11910995
Abstract: The application relates to the problem of navigating a surgical instrument (at 301, 311) towards a region-of-interest (at 312) in endoscopic surgery when an image (300) provided by the endoscope is obscured at least partly by obscuring matter (at 303), wherein the obscuring matter is a leaking body fluid, debris or smoke caused by ablation. To address this problem, a computer-implemented method is proposed, wherein, upon detecting that the image from the endoscope is at least partly obscured, a second image is determined based on a sequence of historic images and based on the current position and orientation of the endoscope. Furthermore, a virtual image (310) is generated based on the determined second image.
Type: Grant
Filed: July 10, 2020
Date of Patent: February 27, 2024
Assignee: KONINKLIJKE PHILIPS N.V.
Inventors: Bernardus Hendrikus Wilhelmus Hendriks, Caifeng Shan, Marco Lai, Robert Johannes Frederik Homan, Drazenko Babic
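A much-simplified version of the fallback looks like this: when the live frame is detected as obscured, pick the historic frame whose recorded endoscope pose is closest to the current pose and build the virtual view from it. `is_obscured` and the 6-DoF pose distance below are placeholders for the patent's detection and rendering steps.

```python
import numpy as np

def virtual_view(current_frame, current_pose, history):
    """history: list of (frame, pose) pairs recorded while the view was clear; pose is a 6-vector."""
    if not is_obscured(current_frame):                  # hypothetical smoke/fluid/debris detector
        return current_frame
    def pose_distance(pose):
        return (np.linalg.norm(pose[:3] - current_pose[:3])           # translation difference
                + 0.1 * np.linalg.norm(pose[3:] - current_pose[3:]))  # weighted orientation difference
    frame, _ = min(history, key=lambda fp: pose_distance(fp[1]))
    return frame                                        # stand-in for rendering the virtual image
```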
-
Patent number: 11900521
Abstract: An apparatus includes an electronic display configured to be positioned in a first location and one or more processors electronically coupled to the electronic display. The processors receive a video from a server. The video depicts a view of a second location and includes an image of a rectangular casing, a frame, and one or more muntins. The image is composited with the video by the server to provide an illusion of a window in the second location to a user viewing the video. The rectangular casing surrounds the window. The processors synchronize a time-of-view at the second location in the video with a time-of-day at the first location and synchronize a second length-of-day at the second location in the video with a first length-of-day at the first location. The processors transmit the video to the electronic display for viewing by the user.
Type: Grant
Filed: August 17, 2021
Date of Patent: February 13, 2024
Assignee: LiquidView Corp
Inventors: Mitchell Braff, Jan C. Hobbel, Paulina A. Perrault, Adam Sah, Kangil Cheon, Yeongkeun Jeong, Grishma Rao, Noah Michael Shibley, Hyerim Shin, Marcelle van Beusekom
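One plausible reading of the synchronization step is a proportional mapping: the viewer's progress through the local day (sunrise to sunset) selects the matching fraction of the remote day in the footage. The sketch below is only that interpretation, and it assumes sunrise and sunset times are supplied as inputs rather than computed.

```python
from datetime import timedelta

def select_video_time(now, local_sunrise, local_sunset, remote_sunrise, remote_sunset):
    """Map the local time-of-day onto a time-of-view within the remote day in the footage."""
    local_day = (local_sunset - local_sunrise).total_seconds()
    remote_day = (remote_sunset - remote_sunrise).total_seconds()
    frac = (now - local_sunrise).total_seconds() / local_day
    frac = min(max(frac, 0.0), 1.0)                    # clamp to daylight hours
    return remote_sunrise + timedelta(seconds=frac * remote_day)
```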
-
Patent number: 11902369
Abstract: An autoencoder includes memory configured to store data including an encode network and a decode network, and processing circuitry coupled to the memory. The processing circuitry is configured to cause the encode network to convert inputted data to a plurality of values and output the plurality of values, batch-normalize values indicated by at least two or more layers of the encode network, out of the output plurality of values, the batch-normalized values having a predetermined average value and a predetermined variance value, quantize each of the batch-normalized values, and cause the decode network to decode each of the quantized values.
Type: Grant
Filed: February 8, 2019
Date of Patent: February 13, 2024
Assignee: Preferred Networks, Inc.
Inventors: Ken Nakanishi, Shinichi Maeda
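A compact PyTorch-style rendering of the pipeline is shown below: encode, batch-normalize the code to a fixed mean and variance, quantize it uniformly, then decode. It is simplified to a single normalized code layer with illustrative sizes, and it uses a straight-through estimator so the quantizer does not block gradients; none of these specifics are taken from the patent.

```python
import torch
import torch.nn as nn

class QuantizedAutoencoder(nn.Module):
    def __init__(self, dim_in=784, dim_code=32, step=0.1):
        super().__init__()
        self.encode = nn.Sequential(nn.Linear(dim_in, 128), nn.ReLU(), nn.Linear(128, dim_code))
        self.bn = nn.BatchNorm1d(dim_code)        # fixes the average and variance of the code values
        self.decode = nn.Sequential(nn.Linear(dim_code, 128), nn.ReLU(), nn.Linear(128, dim_in))
        self.step = step

    def forward(self, x):
        z = self.bn(self.encode(x))
        z_q = torch.round(z / self.step) * self.step      # uniform quantization of the normalized code
        z_q = z + (z_q - z).detach()                      # straight-through gradient estimate
        return self.decode(z_q)
```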
-
Patent number: 11900587
Abstract: A system for creating a wiring guide graphic comprising a database configured to receive inputs of a connector body identity, an engineering specification and a wiring table; and a processor coupled to the sensor, the processor configured to create a wiring guide graphic.
Type: Grant
Filed: May 17, 2021
Date of Patent: February 13, 2024
Assignee: Raytheon Company
Inventors: Michael P. Wilkinson, Steven A. Iskra
-
Patent number: 11887196
Abstract: This application discloses methods, systems, and computer-implemented virtualization software applications and graphical user interface tools for remote virtual visualization of structures. Images of a structure are captured by an imaging vehicle and transmitted to a remote server via a communication network. Using virtual 3D digital modeling software and the images received from the imaging vehicle, the server generates a virtual 3D digital model of the structure and stores it in a database. This virtual 3D digital model can be accessed by remote users, using virtualization software applications, to view images of the structure. The user is able to manipulate the images, view them from various perspectives, and compare before-the-damage images with images taken after damage has occurred. Based on all of this, the user can remotely communicate with an insurance agent and/or file an insurance claim.
Type: Grant
Filed: November 7, 2022
Date of Patent: January 30, 2024
Assignee: State Farm Mutual Automobile Insurance Company
Inventors: Bryan R. Nussbaum, An Ho, Nathan C. Summers, Kevin L. Mitchell, Rebecca A. Little
-
Patent number: 11880541
Abstract: Methods, systems, computer-readable media, and apparatuses for generating an Augmented Reality (AR) object are presented. The apparatus can include memory and one or more processors coupled to the memory. The one or more processors can be configured to receive an image of at least a portion of a real-world scene including a target object. The one or more processors can also be configured to generate an AR object corresponding to the target object and including a plurality of parts. The one or more processors can further be configured to receive a user input associated with a designated part of the plurality of parts and manipulate the designated part based on the received user input.
Type: Grant
Filed: August 13, 2021
Date of Patent: January 23, 2024
Assignee: QUALCOMM Incorporated
Inventors: Raphael Grasset, Hartmut Seichter
-
Patent number: 11872764
Abstract: Methods, systems, and computer readable media for 3D printing from images, e.g., medical images or images obtained using any appropriate volumetric imaging technology. In some examples, a method includes receiving, from a medical imaging device, a multi-dimensional image of a structure. The method includes, for each two-dimensional (2D) slice of the multi-dimensional image, converting, row-by-row for each row of the 2D slice, voxels of the 2D slice into 3D printing instructions for the 2D slice. The method includes 3D printing, by controlling a 3D printing extruder, a physical model based on the structure by 3D printing, slice by slice, each 2D slice using the 3D printing instructions.
Type: Grant
Filed: December 5, 2018
Date of Patent: January 16, 2024
Assignee: THE TRUSTEES OF THE UNIVERSITY OF PENNSYLVANIA
Inventor: Chamith Sudesh Rajapakse
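The row-by-row conversion can be sketched as turning each contiguous run of filled voxels in a row into one extrusion move. The G-code-like strings below are illustrative rather than any specific printer dialect, and the voxel pitch is an assumed parameter.

```python
import numpy as np

def volume_to_instructions(volume, voxel_mm=0.2):
    """volume: 3D boolean array indexed as [slice, row, col]; returns printer-style move commands."""
    lines = []
    for z, sl in enumerate(volume):
        lines.append(f"; slice {z}  Z={z * voxel_mm:.2f}")
        for y, row in enumerate(sl):
            x = 0
            while x < len(row):
                if row[x]:
                    start = x
                    while x < len(row) and row[x]:        # find the end of this filled run
                        x += 1
                    lines.append(f"G1 X{start * voxel_mm:.2f} Y{y * voxel_mm:.2f} Z{z * voxel_mm:.2f}")
                    lines.append(f"G1 X{x * voxel_mm:.2f} Y{y * voxel_mm:.2f} E{(x - start) * voxel_mm:.2f}")
                else:
                    x += 1
    return "\n".join(lines)
```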
-
Patent number: 11875441
Abstract: A modeling engine generates a prediction model that quantifies and predicts secondary dynamics associated with the face of a performer enacting a performance. The modeling engine generates a set of geometric representations that represents the face of the performer enacting different facial expressions under a range of loading conditions. For a given facial expression and specific loading condition, the modeling engine trains a machine learning model to predict how soft tissue regions of the face of the performer change in response to external forces applied to the performer during the performance. The modeling engine combines different expression models associated with different facial expressions to generate a prediction model. The prediction model can be used to predict and remove secondary dynamics from a given geometric representation of a performance or to generate and add secondary dynamics to a given geometric representation of a performance.
Type: Grant
Filed: October 11, 2022
Date of Patent: January 16, 2024
Assignees: DISNEY ENTERPRISES, INC., ETH ZÜRICH (EIDGENÖSSISCHE TECHNISCHE HOCHSCHULE ZÜRICH)
Inventors: Dominik Thabo Beeler, Derek Edward Bradley, Eftychios Dimitrios Sifakis, Gaspard Zoss
-
Patent number: 11869223
Abstract: An example method includes receiving (502) a plurality of points that represent a point cloud; representing a position of each point in each dimension of a three-dimensional space as a sequence of bits (504), where the position of the point is encoded according to a tree data structure; partitioning (506) at least one of the sequences of bits into a first portion of bits and a second portion of bits; quantizing (508) each of the second portions of bits according to a quantization step size, where the quantization step size is determined according to an exponential function having a quantization parameter value as an input and the quantization step size as an output; and generating (510) a data structure representing the point cloud and including the quantized second portions of bits.
Type: Grant
Filed: January 8, 2021
Date of Patent: January 9, 2024
Assignee: Apple Inc.
Inventors: David Flynn, Khaled Mammou, Fabrice A. Robinet
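Two pieces of the claim are easy to show concretely: an exponential map from the quantization parameter to the step size, and quantization applied only to the low-order portion of each coordinate's bit sequence. The base, scale, and bit widths in the sketch are assumptions, not the codec's actual constants.

```python
def quantization_step(qp, base=2.0, qp_per_doubling=6):
    """Exponential mapping from quantization parameter to step size (constants are illustrative)."""
    return base ** (qp / qp_per_doubling)

def quantize_position(position, high_bits, qp, total_bits=21):
    """Keep the top `high_bits` losslessly; quantize the remaining low-order bits."""
    low_bits = total_bits - high_bits
    high = position >> low_bits                        # first portion of bits, kept as-is
    low = position & ((1 << low_bits) - 1)             # second portion of bits, quantized
    return high, int(round(low / quantization_step(qp)))
```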
-
Patent number: 11861868
Abstract: A three-dimensional data encoding method includes: determining whether a first valid node count is greater than or equal to a first threshold value predetermined, the first valid node count being a total number of valid nodes that are nodes each including a three-dimensional point, the valid nodes being included in first nodes belonging to a layer higher than a layer of a current node in an N-ary tree structure of three-dimensional points included in point cloud data, N being an integer greater than or equal to 2; and, when the first valid node count is greater than or equal to the first threshold value, performing first encoding on attribute information of the current node, the first encoding including a prediction process in which second nodes are used, the second nodes including a parent node of the current node and belonging to a same layer as the parent node.
Type: Grant
Filed: March 28, 2022
Date of Patent: January 2, 2024
Assignee: PANASONIC INTELLECTUAL PROPERTY CORPORATION OF AMERICA
Inventors: Toshiyasu Sugio, Noritaka Iguchi
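The gist of the threshold test is: count the occupied ("valid") nodes in the layer above the current node, and only when that count reaches the predetermined threshold is prediction from parent-layer nodes used; otherwise the attribute is coded without prediction. The node interface and the threshold value in the sketch below are simplified stand-ins, not the patent's definitions.

```python
FIRST_THRESHOLD = 4   # assumed value for the predetermined first threshold

def encode_attribute(node, upper_layer_nodes):
    """upper_layer_nodes: nodes in the layer above `node`; interfaces here are hypothetical."""
    valid = [n for n in upper_layer_nodes if n.has_points]          # first valid node count
    if len(valid) >= FIRST_THRESHOLD:
        neighbors = [n for n in valid if n.is_neighbor_of(node)]    # parent node and its same-layer neighbors
        prediction = sum(n.attr for n in neighbors) / len(neighbors) if neighbors else 0.0
        return ("predicted", node.attr - prediction)                # first encoding: code the residual
    return ("direct", node.attr)                                    # no prediction process
```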
-
Patent number: 11863729
Abstract: Systems and techniques are described for processing image data to generate an image with a synthetic depth of field (DoF). An imaging system receives first image data of a scene captured by a first image sensor. The imaging system receives second image data of the scene captured by a second image sensor. The first image sensor is offset from the second image sensor by an offset distance. The imaging system generates, using at least the first image data and the second image data as inputs to one or more trained machine learning systems, an image having a synthetic depth of field corresponding to a simulated aperture size. The simulated aperture size is associated with the offset distance. The imaging system outputs the image.
Type: Grant
Filed: September 21, 2021
Date of Patent: January 2, 2024
Assignee: QUALCOMM Incorporated
Inventors: Meng-Lin Wu, Venkata Ravi Kiran Dayana
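The patent performs this step with trained machine learning systems; as a purely illustrative, non-learned stand-in, the sketch below shows the underlying geometry: disparity between the two offset sensors acts as depth, and pixels farther from the focus disparity receive progressively stronger blur, mimicking the shallower depth of field of a larger simulated aperture. It assumes single-channel inputs and SciPy.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def synthetic_dof(image, disparity, focus_disparity, aperture_scale=3.0, levels=4):
    """Blend progressively blurred copies of a single-channel image by defocus amount."""
    defocus = np.abs(disparity - focus_disparity)
    defocus = np.clip(defocus / max(float(defocus.max()), 1e-6), 0.0, 1.0)
    out = image.astype(float).copy()
    for k in range(1, levels + 1):
        size = int(2 * k * aperture_scale + 1)            # larger simulated aperture -> stronger blur
        blurred = uniform_filter(image.astype(float), size=size)
        mask = defocus >= k / levels                      # pixels at least this far out of focus
        out = np.where(mask, blurred, out)
    return out
```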
-
Patent number: 11854261
Abstract: In one embodiment, a computing system may receive, from a first electronic device associated with a first user, a first request to generate a link associated with an artificial reality application and an action to be performed by the artificial reality application. The computing system may then generate a link to instructions that are executable on an artificial reality device to cause the artificial reality device to launch the artificial reality application and perform the action. The computing system may then receive, from a second electronic device associated with a second user, an indication that the second user activated the link on the second electronic device, and send the instructions associated with the link to an artificial reality device associated with the second user to cause the artificial reality device associated with the second user to launch the artificial reality application and perform the action.
Type: Grant
Filed: December 19, 2022
Date of Patent: December 26, 2023
Assignee: Meta Platforms Technologies, LLC
Inventors: Tian Lan, Mamta Jain
-
Patent number: 11830136
Abstract: A method includes creating a point cloud model of an environment, applying at least one filter to the point cloud model to produce a filtered model of the environment and defining a plane in the filtered model corresponding to a horizontal expanse associated with a floor of the environment.
Type: Grant
Filed: October 14, 2020
Date of Patent: November 28, 2023
Assignee: CARNEGIE MELLON UNIVERSITY
Inventor: Steven Huber
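A minimal version of such a pipeline, assuming the Open3D library rather than the patent's own implementation: filter the cloud, fit a plane with RANSAC, and accept it as the floor only if its normal is close to vertical, i.e. the plane is a horizontal expanse.

```python
import numpy as np
import open3d as o3d

def find_floor_plane(points, voxel=0.05, dist_thresh=0.02):
    """points: (N, 3) array. Returns (plane coefficients, inlier indices) or None."""
    pcd = o3d.geometry.PointCloud(o3d.utility.Vector3dVector(points))
    pcd = pcd.voxel_down_sample(voxel)                                # filtering step
    pcd, _ = pcd.remove_statistical_outlier(nb_neighbors=20, std_ratio=2.0)
    plane, inliers = pcd.segment_plane(distance_threshold=dist_thresh,
                                       ransac_n=3, num_iterations=1000)
    a, b, c, d = plane                                                # ax + by + cz + d = 0
    if abs(c) / np.linalg.norm([a, b, c]) < 0.9:                      # normal not near vertical: not a floor
        return None
    return plane, inliers
```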
-
Patent number: 11823330
Abstract: An object of the present disclosure is to provide a technique for creating a three-dimensional model of a line-like structure from a point cloud obtained using three-dimensional laser measuring equipment and detecting a three-dimensional model of a cable.
Type: Grant
Filed: August 19, 2019
Date of Patent: November 21, 2023
Assignee: NIPPON TELEGRAPH AND TELEPHONE CORPORATION
Inventors: Masaaki Inoue, Hitoshi Niigaki, Yukihiro Goto, Shigehiro Matsuda, Toshiya Ohira, Ryuji Honda, Tomoya Shimizu, Hiroyuki Oshida
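As an illustration only, not the disclosed algorithm: one simple way to pull line-like structures out of a laser point cloud is to cluster the points, keep clusters that are long along one principal direction and thin in the others, and order each candidate's points along that direction. The sketch assumes scikit-learn for clustering, and the eps and elongation values are arbitrary.

```python
import numpy as np
from sklearn.cluster import DBSCAN

def detect_cable_candidates(points, eps=0.3, min_samples=10, elongation=10.0):
    """points: (N, 3) laser points. Returns point sets that look like line-like structures."""
    labels = DBSCAN(eps=eps, min_samples=min_samples).fit_predict(points)
    cables = []
    for label in set(labels) - {-1}:                       # -1 marks DBSCAN noise points
        cluster = points[labels == label]
        cov = np.cov((cluster - cluster.mean(axis=0)).T)   # principal-axis analysis of the cluster
        eigvals, eigvecs = np.linalg.eigh(cov)             # eigenvalues in ascending order
        if eigvals[2] > elongation * eigvals[1]:           # long in one direction, thin in the others
            order = np.argsort(cluster @ eigvecs[:, 2])    # order points along the principal axis
            cables.append(cluster[order])
    return cables
```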