Patent Applications Published on September 14, 2023
  • Publication number: 20230290062
    Abstract: Artificial neural networks (ANNs) may be trained to output estimated floor plans from 3D spaces that would be challenging or impossible for existing techniques to estimate. In embodiments, an ANN may be trained using a supervised approach in which top-down views of 3D meshes or point clouds are provided to the ANN as input, with ground truth floor plans provided as output for comparison. A suitably large training set may be used to fully train the ANN on challenging scenarios such as open-loop scans and/or unusual geometries. The trained ANN may then be used to accurately estimate floor plans for such 3D spaces. Other embodiments are described.
    Type: Application
    Filed: March 10, 2023
    Publication date: September 14, 2023
    Applicant: STREEM, LLC
    Inventor: Huapeng Su
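As a rough illustration of the input side of this approach, the sketch below rasterizes a 3D point cloud into the kind of top-down occupancy grid that could be fed to such an ANN. The cell size, grid resolution, and the choice of y as the up axis are assumptions made for the example, not details from the filing.

```python
import numpy as np

def top_down_view(points, cell=0.05, size=64):
    """Rasterize an (N, 3) point cloud into a size x size top-down
    occupancy grid of `cell`-metre cells, centred on the cloud's
    footprint (y is assumed to be the 'up' axis)."""
    xz = points[:, [0, 2]]
    center = (xz.min(axis=0) + xz.max(axis=0)) / 2.0
    idx = np.floor((xz - center) / cell).astype(int) + size // 2
    grid = np.zeros((size, size), dtype=np.float32)
    valid = ((idx >= 0) & (idx < size)).all(axis=1)
    grid[idx[valid, 1], idx[valid, 0]] = 1.0
    return grid

# A toy square room: points sampled along four walls.
t = np.linspace(-1.0, 1.0, 200)
walls = np.concatenate([
    np.stack([t, np.zeros_like(t), np.full_like(t, -1.0)], axis=1),
    np.stack([t, np.zeros_like(t), np.full_like(t, 1.0)], axis=1),
    np.stack([np.full_like(t, -1.0), np.zeros_like(t), t], axis=1),
    np.stack([np.full_like(t, 1.0), np.zeros_like(t), t], axis=1),
])
grid = top_down_view(walls)
```

In a supervised setup like the one described, grids of this kind would be paired with ground-truth floor plans as training targets.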
  • Publication number: 20230290063
    Abstract: A method of post-processing a decoded 3D textured mesh to adaptively tessellate the mesh can include: receiving from a mesh decoder one or more of metadata(i) describing various information about the mesh structure, a decoded base mesh m′(i), displacements d′(i) associated with the vertices of the decoded base mesh, and optionally one or more attribute maps A′(i) describing information associated with the mesh surface; receiving control parameters from an application consuming the decoded 3D textured mesh; and performing one or more subdivisions of the decoded base mesh m′(i) based on the received control information.
    Type: Application
    Filed: February 8, 2023
    Publication date: September 14, 2023
    Inventors: Khaled Mammou, Alexandros Tourapis, Jungsun Kim
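The subdivision step at the heart of adaptive tessellation can be illustrated with a minimal uniform scheme: each triangle is split into four by inserting shared edge midpoints. This is a generic sketch, not the codec's actual subdivision rule, and the `subdivide` helper name is invented for the example.

```python
def subdivide(vertices, faces):
    """One uniform subdivision step: split every triangle into four by
    inserting edge midpoints (shared edges get a single shared midpoint)."""
    verts = list(vertices)
    midpoint = {}

    def mid(a, b):
        key = (min(a, b), max(a, b))
        if key not in midpoint:
            va, vb = verts[a], verts[b]
            verts.append(tuple((x + y) / 2.0 for x, y in zip(va, vb)))
            midpoint[key] = len(verts) - 1
        return midpoint[key]

    new_faces = []
    for a, b, c in faces:
        ab, bc, ca = mid(a, b), mid(b, c), mid(c, a)
        new_faces += [(a, ab, ca), (ab, b, bc), (ca, bc, c), (ab, bc, ca)]
    return verts, new_faces

# Two triangles sharing an edge: 4 vertices / 2 faces -> 9 vertices / 8 faces.
v = [(0, 0, 0), (1, 0, 0), (0, 1, 0), (1, 1, 0)]
f = [(0, 1, 2), (1, 3, 2)]
v2, f2 = subdivide(v, f)
```

In the patented scheme the number of subdivision levels would be driven by the application's control parameters, and decoded displacements d′(i) would then be applied to the refined vertices.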
  • Publication number: 20230290064
    Abstract: The disclosure relates to a computer-implemented method for tessellation processing. The tessellation forms a surface representation of a real-world 3D object. The method comprises providing a constrained tetrahedral Delaunay mesh comprising the tessellation. The mesh is conformal and comprises one or more sets of tetrahedra, each set representing a gap between portions of the tessellation, each gap having a size lower than a predefined threshold. The method further comprises determining a set of one or more tetrahedra faces of the mesh to be added to the tessellation, which includes minimizing an objective function that includes a term penalizing surface creation by face addition to the tessellation. The minimization is under the constraint that a given set of tetrahedra faces of the Delaunay mesh is to be added to the tessellation. The given set of tetrahedra faces includes, for each gap of one or more gaps, the tetrahedra faces meshing the gap.
    Type: Application
    Filed: February 15, 2023
    Publication date: September 14, 2023
    Applicant: DASSAULT SYSTEMES
    Inventors: Aurelien Jean Marie ALLEAUME, Nicolas DUNY, Mark LORIOT
  • Publication number: 20230290065
    Abstract: In one embodiment, a method includes generating a front panel of a garment based on one or more images including the garment, generating a back panel of the garment, aligning the front panel and the back panel in a three-dimensional space so that the front panel is in front of a three-dimensional body and the back panel is behind the three-dimensional body, identifying one or more pairs of boundary segments of the front panel and the back panel, wherein each pair of boundary segments of the front panel and the back panel are to be attached together, and generating a digital garment by attaching each of the identified one or more pairs of boundary segments of the front panel and the back panel through a plurality of iterative simulations using a physics simulation model.
    Type: Application
    Filed: March 21, 2023
    Publication date: September 14, 2023
    Inventors: Tuur Jan M Stuyck, Tony Tung
  • Publication number: 20230290066
    Abstract: In a computer-implemented method and system for capturing the condition of a structure, the structure is scanned with an unmanned aerial vehicle (UAV). Data collected by the UAV corresponding to points on a surface of a structure is received and a 3D point cloud is generated for the structure, where the 3D point cloud is generated based at least in part on the received UAV data. A 3D model of the surface of the structure is reconstructed using the 3D point cloud.
    Type: Application
    Filed: May 17, 2023
    Publication date: September 14, 2023
    Inventors: James M. Freeman, Roger D. Schmidgall, Patrick H. Boyer, Nicholas U. Christopulos, Jonathan D. Maurer, Nathan L. Tofte, Jackie O. Jordan, II
  • Publication number: 20230290067
    Abstract: A system for computational localization of fibrillation sources is provided. In some implementations, the system performs operations comprising generating a representation of electrical activation of a patient's heart and comparing, based on correlation, the generated representation against one or more stored representations of hearts to identify at least one matched representation of a heart. The operations can further comprise generating, based on the at least one matched representation, a computational model for the patient's heart, wherein the computational model includes an illustration of one or more fibrillation sources in the patient's heart. Additionally, the operations can comprise displaying, via a user interface, at least a portion of the computational model. Related systems, methods, and articles of manufacture are also described.
    Type: Application
    Filed: May 16, 2023
    Publication date: September 14, 2023
    Inventors: David E. Krummen, Andrew D. McCulloch, Christopher T. Villongco, Gordon Ho
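The correlation-based matching step can be sketched as follows, assuming each "representation of electrical activation" is reduced to a numeric map. Pearson correlation over the flattened maps is one plausible reading; the abstract does not specify the measure, and the `best_match` helper and library keys are invented for the example.

```python
import numpy as np

def best_match(patient_map, library):
    """Return the key of the stored activation map most correlated with
    the patient's map (Pearson correlation of the flattened maps),
    together with all per-model scores."""
    p = patient_map.ravel()
    scores = {name: np.corrcoef(p, m.ravel())[0, 1]
              for name, m in library.items()}
    return max(scores, key=scores.get), scores

rng = np.random.default_rng(0)
patient = rng.normal(size=(8, 8))
library = {
    "model_a": patient + 0.1 * rng.normal(size=(8, 8)),  # near-copy of the patient
    "model_b": rng.normal(size=(8, 8)),                  # unrelated heart
}
name, scores = best_match(patient, library)
```

The matched representation would then seed the patient-specific computational model described in the abstract.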
  • Publication number: 20230290068
    Abstract: A mesh model of a 3D space is provided with improved accuracy based on user inputs. In one aspect, a triangle face of the mesh is divided into three smaller triangle faces based on a user-selected point in a 3D space. A user can select the point on a display screen, for example, where a corresponding vertex in the mesh is a point in the mesh which is intersected by a ray cast from the selected point. This process can be repeated to provide new vertices in the mesh model which more accurately represent an object in the 3D space and therefore allow a more accurate measurement of the size or area of the object. For example, the user might select four points to identify a rectangular object.
    Type: Application
    Filed: August 1, 2022
    Publication date: September 14, 2023
    Applicant: STREEM, LLC
    Inventor: Huapeng SU
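The described face split (one triangle replaced by three smaller ones meeting at the user-selected point) is straightforward to sketch. The `split_face` helper and the flat face list are illustrative choices, not the filing's data structures.

```python
def split_face(vertices, faces, face_idx, point):
    """Replace triangle `face_idx` with three triangles fanning out from
    `point`, which becomes a new vertex (e.g. the vertex hit by a ray
    cast from a user-selected screen point)."""
    vertices = list(vertices) + [point]
    p = len(vertices) - 1
    a, b, c = faces[face_idx]
    new = [(a, b, p), (b, c, p), (c, a, p)]
    return vertices, faces[:face_idx] + new + faces[face_idx + 1:]

v = [(0.0, 0.0, 0.0), (1.0, 0.0, 0.0), (0.0, 1.0, 0.0)]
f = [(0, 1, 2)]
v2, f2 = split_face(v, f, 0, (0.25, 0.25, 0.0))
```

Repeating the split for each user-selected point yields the denser, object-aligned vertices the abstract uses for measurement.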
  • Publication number: 20230290069
    Abstract: A mesh model of a 3D space is modified based on semantic segmentation data to more accurately represent boundaries of an object in the 3D space. In one aspect, semantic segmentation images define one or more boundaries of the object. The semantic segmentation images are projected to a 3D mesh representation of the 3D space, and the 3D mesh representation is updated based on the one or more boundaries in the projected semantic segmentation image. In another aspect, the 3D mesh representation is updated based on one or more boundaries defined by the semantic segmentation images as applied to a point cloud of the 3D space.
    Type: Application
    Filed: August 1, 2022
    Publication date: September 14, 2023
    Applicant: STREEM, LLC
    Inventor: Huapeng Su
  • Publication number: 20230290070
    Abstract: Embodiments of devices and techniques of obtaining a three dimensional (3D) representation of an area are disclosed. In one embodiment, a two dimensional (2D) frame is obtained of an array of pixels of the area. Also, a depth frame of the area is obtained. The depth frame includes an array of depth estimation values. Each of the depth estimation values in the array of depth estimation values corresponds to one or more corresponding pixels in the array of pixels. Furthermore, an array of confidence scores is generated. Each confidence score in the array of confidence scores corresponds to one or more corresponding depth estimation values in the array of depth estimation values. Each of the confidence scores in the array of confidence scores indicates a confidence level that the one or more corresponding depth estimation values in the array of depth estimation values is accurate.
    Type: Application
    Filed: October 5, 2022
    Publication date: September 14, 2023
    Applicant: STREEM, LLC
    Inventor: Nikilesh URELLA
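One plausible way to derive such per-pixel confidence scores is from local depth consistency: flat, agreeing neighbourhoods score high and noisy discontinuities score low. The 3x3-variance heuristic below is an assumption made for illustration, not the disclosed method.

```python
import numpy as np

def confidence_scores(depth, k=1.0):
    """Assign each depth estimate a confidence in (0, 1] that decays with
    the depth variance of its 3x3 neighbourhood (edge-padded)."""
    pad = np.pad(depth, 1, mode="edge")
    h, w = depth.shape
    # Nine shifted views of the padded map give each pixel's neighbourhood.
    patches = np.stack([pad[i:i + h, j:j + w]
                        for i in range(3) for j in range(3)])
    var = patches.var(axis=0)
    return 1.0 / (1.0 + k * var)

depth = np.ones((6, 6))
depth[:, 3:] = 5.0           # a depth discontinuity down the middle
conf = confidence_scores(depth)
```

Flat regions on either side of the discontinuity receive confidence near 1, while pixels straddling the jump are penalized.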
  • Publication number: 20230290071
    Abstract: The present disclosure relates to a model display method, apparatus, and system. The method includes: acquiring a hand location of a hand model in the virtual reality scene in real time, receiving an input signal from an input device, determining a key position corresponding to the input signal in the 3D model, determining a model location of the 3D model in the virtual reality scene according to the key position in the 3D model and the hand location, and displaying the 3D model at the model location in the virtual reality scene. By this method, the 3D model of the input device in real space can be accurately displayed in the virtual reality scene.
    Type: Application
    Filed: May 17, 2023
    Publication date: September 14, 2023
    Applicant: BEIJING SOURCE TECHNOLOGY CO., LTD.
    Inventor: Zixiong LUO
  • Publication number: 20230290072
    Abstract: A system comprising: processors and memory containing instructions to control the processors to: receive images representing an interior of a physical environment; identify, using a neural network for object recognition, an object in an image, the object being associated with a location relative to the physical environment; identify, using the neural network, another object in another image; determine whether the objects in the images are located near or at a similar location based on location information associated with the objects; if the objects are located near or at a similar location, treat the objects as an instance of a single object; store the similar location associated with the single object; display an interactive walkthrough visualization of a 3D model of the physical environment including the single object; receive a request regarding object location through the interactive walkthrough visualization; and provide the similar location of the single object for display in the interactive walkthrough visualization.
    Type: Application
    Filed: March 10, 2023
    Publication date: September 14, 2023
    Applicant: Matterport, Inc.
    Inventors: Gunnar Hovden, Azwad Sabik
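The core deduplication idea (two detections of the same class at nearly the same location are one object) can be sketched with a simple greedy distance check. The label/position tuples and the 0.5 m merge radius are assumptions for the example, not Matterport's actual representation.

```python
def merge_detections(detections, radius=0.5):
    """Greedily cluster (label, (x, y, z)) detections: a detection of the
    same label within `radius` of a stored object is treated as another
    view of that single object."""
    objects = []  # list of (label, position)
    for label, pos in detections:
        for olabel, opos in objects:
            if olabel == label and sum((a - b) ** 2
                                       for a, b in zip(pos, opos)) <= radius ** 2:
                break  # same instance seen in another image; keep stored location
        else:
            objects.append((label, pos))
    return objects

dets = [
    ("chair", (1.00, 0.0, 2.00)),
    ("chair", (1.10, 0.0, 2.05)),  # same chair, seen in another image
    ("chair", (4.00, 0.0, 1.00)),  # a different chair
]
objs = merge_detections(dets)
```

The stored per-object location is what the walkthrough visualization would return when the user asks where an object is.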
  • Publication number: 20230290073
    Abstract: In one embodiment, a user-navigable 3-D virtual room-based user interface for a home automation system is provided. A control app renders and displays the user-navigable 3-D virtual room from a perspective defined by a virtual camera, wherein the control app renders the user-navigable 3-D virtual room based on data from at least one of a plurality of 2-D images of the physical room captured from different respective positions in the physical room. In response to an explicit navigation command or implicit action, the control app alters a position or orientation of the virtual camera. The control app re-renders and displays the user-navigable 3-D virtual room from a new perspective defined by the altered position or orientation, wherein the new perspective does not coincide with the position in the physical room from which any of the 2-D images were captured, by blending data from multiple 2-D images captured from different positions.
    Type: Application
    Filed: May 23, 2023
    Publication date: September 14, 2023
    Inventors: Robert P. Madonna, Maxwell R. Madonna, David W. Tatzel, Michael A. Molta, Timothy Kallman
  • Publication number: 20230290074
    Abstract: A configuration tool adapted to configure a quality control system to monitor and/or guide an operator in a working environment through recognition of objects, events or an operational process, comprises: a volumetric sensor adapted to capture volumetric image frames of the working environment while an object, event or operational process is demonstrated; a display, coupled to the volumetric sensor and configured to live display the volumetric image frames; and a processor configured to: generate a user interface in overlay of the volumetric image frames to enable a user to define a layout zone; and automatically generate a virtual box in the layout zone when an object, event or operational process is detected during demonstration of the object, event or operational process.
    Type: Application
    Filed: May 16, 2023
    Publication date: September 14, 2023
    Inventor: Ives DE SAEGER
  • Publication number: 20230290075
    Abstract: A computer server system comprises a communications module; a processor coupled with the communications module; and a memory coupled to the processor and storing processor-executable instructions which, when executed by the processor, configure the processor to receive, via the communications module and from a requesting device, a signal that includes a request to send a stored-value card to a recipient, the request identifying one or more parameters of the stored-value card; generate the stored-value card and a three-dimensional object representing the stored-value card according to the one or more parameters; and send, via the communications module and to a mobile device of the recipient, a signal that includes the three-dimensional object representing the stored-value card for display in augmented reality.
    Type: Application
    Filed: March 9, 2022
    Publication date: September 14, 2023
    Applicant: The Toronto-Dominion Bank
    Inventors: Adrian Chung-Hey MA, Michael PRONSKI, Darius BRAZIUNAS, Imran Ahmed KHAN
  • Publication number: 20230290076
    Abstract: Systems, methods, and apparatus are provided for intelligent dynamic rendering of an augmented reality (AR) display. An AR device may capture an image of a physical environment. Feature extraction combined with deep learning may be used for object recognition and detection of changes in the environment. Contextual analysis of the environment based on the deep learning outputs may enable improved integration of AR rendering with the physical environment in real time. A BLE beacon feed may provide supplemental information regarding the physical environment. The beacon feed may be extracted, classified, and labeled based on user interests using machine learning algorithms. The beacon feed may be paired with the AR device to incorporate customized location-based graphics and text into the AR display.
    Type: Application
    Filed: March 10, 2022
    Publication date: September 14, 2023
    Inventor: Shailendra Singh
  • Publication number: 20230290077
    Abstract: A virtual window configuration method includes the following steps. A processor generates a virtual window. A depth detection sensor generates depth information based on an image. The processor analyzes the depth information to generate a depth matrix. The processor finds a depth configuration block in the image using the depth matrix. A feature point detection sensor generates feature point information for the image. The processor analyzes the feature point information to generate a feature point matrix. The processor finds a feature point configuration block in the image using the feature point matrix. The processor moves the virtual window to the depth configuration block or the feature point configuration block.
    Type: Application
    Filed: September 13, 2022
    Publication date: September 14, 2023
    Inventors: Wei-Chou CHEN, Ming-Fong YEH, Yu-Chi CHANG, Lee-Chun KO
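Finding a "depth configuration block" might, for instance, mean locating the flattest region of the depth matrix. The sliding-window minimum-variance search below is one such reading, not the patented procedure; the window size and variance criterion are assumptions.

```python
import numpy as np

def flattest_block(depth, win=3):
    """Return the top-left (row, col) of the win x win block of the depth
    matrix with the lowest variance -- a candidate spot for a virtual
    window, assuming flat depth indicates an unobstructed surface."""
    h, w = depth.shape
    best, best_rc = None, (0, 0)
    for r in range(h - win + 1):
        for c in range(w - win + 1):
            v = depth[r:r + win, c:c + win].var()
            if best is None or v < best:
                best, best_rc = v, (r, c)
    return best_rc

depth = np.arange(36, dtype=float).reshape(6, 6)   # a sloped scene
depth[1:4, 1:4] = 2.0                              # one flat 3x3 patch
rc = flattest_block(depth)
```

A feature-point configuration block could be found the same way, e.g. by minimizing feature-point density instead of depth variance.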
  • Publication number: 20230290078
    Abstract: Various implementations use object information to facilitate a communication session. Some implementations create a dense reconstruction (e.g., a point cloud or triangular mesh) of a physical environment, for example, using light intensity images and depth sensor data. A less data-intensive object information is also created to represent the physical environment for more efficient storage, editing, sharing, and use. In some implementations, the object information includes object attribute and location information. In some implementations, a 2D floorplan or other 2D representation provides object locations and metadata (e.g., object type, texture, heights, dimensions, etc.) provide object attributes. The object location and attribute information may be used, during a communication session, to generate a 3D graphical environment that is representative of the physical environment.
    Type: Application
    Filed: November 22, 2022
    Publication date: September 14, 2023
    Inventors: Bradley W. PEEBLER, Alexandre DA VEIGA
  • Publication number: 20230290079
    Abstract: An augmented reality system is disclosed comprising at least one projector, a detection surface, and at least one marker. The projector is configured to project a digital image onto a physical object inside a projection volume. The at least one marker is couplable to the physical object and is adapted to engage the detection surface in at least one point of contact, thus generating a detection signal representative of one or more properties of the point of contact. The detection surface is configured to identify, in use, an absolute position and an orientation of the physical object coupled to the marker inside the projection volume as a function of the detection signal.
    Type: Application
    Filed: July 20, 2021
    Publication date: September 14, 2023
    Inventors: Gaetano CASCINI, Giandomenico CARUSO, Niccolò BECATTINI, Federico MOROSI
  • Publication number: 20230290080
    Abstract: Methods and systems for generating a model of a real-world 3D space including a storage unit with a plurality of sub-units for storing a plurality of objects, the method comprising: generating a first component comprising a model of at least a structural surface of the real-world 3D space; generating a second component comprising a model of the storage unit including the sub-units; combining the first and second components which include a position and a dimension of the storage unit by identifying landmark features; and storing, in a memory, the generated model. Methods and systems for locating objects in the real-world 3D space using the model.
    Type: Application
    Filed: July 21, 2021
    Publication date: September 14, 2023
    Inventors: Joshua DAITER, Jeffrey DAITER, Victor TRUONG
  • Publication number: 20230290081
    Abstract: A virtual reality sharing system is configured such that a VR-server that composites a plurality of camera-captured videos obtained by capturing videos of a same real space from different viewpoints to create a VR video, an AR-server that draws and creates an AR object (ARO), and a viewer virtual reality sharing terminal (terminal) that receives the VR video and the ARO, superimposes the ARO on the VR video, and then displays the VR video are connected so as to be able to communicate with each other via a network, and when an input interface of the terminal accepts a setting input operation about a viewer’s viewpoint for taking in the VR video, the terminal transmits data indicating a position and direction of the viewer’s viewpoint to the VR-server and the AR-server, receives the VR video viewed from the viewer’s viewpoint from the VR-server, receives drawing data of the ARO viewed from the viewer’s viewpoint from the AR-server, and superimposes the ARO based on the drawing data on the VR video viewed from the viewer’s viewpoint.
    Type: Application
    Filed: August 6, 2020
    Publication date: September 14, 2023
    Inventors: Osamu KAWAMAE, Masuo OKU
  • Publication number: 20230290082
    Abstract: Various implementations disclosed herein include devices, systems, and methods that generate and display a portion of a representation of a face of a user. For example, an example process may include obtaining a first set of data corresponding to features of a face of a user in a plurality of configurations while the user is using an electronic device, obtaining a second set of data corresponding to one or more partial views of the face from one or more image sensors, generating a representation of the face of the user based on the first set of data and the second set of data, wherein portions of the representation correspond to different confidence values, and displaying the portions of the representation based on the corresponding confidence values.
    Type: Application
    Filed: March 23, 2023
    Publication date: September 14, 2023
    Inventors: Brian Amberg, Nicolas V. Scapel, Jason D. Rickwald, Dorian D. Dargan, Gary I. Butcher, Giancarlo Yerkes, William D. Lindmeier, John S. McCarten
  • Publication number: 20230290083
    Abstract: The invention relates to a method for ergonomically representing virtual information in a real environment, including the following steps: providing at least one view of a real environment and of a system setup for blending in virtual information for superimposing with the real environment in at least part of the view, the system setup comprising at least one display device, ascertaining a position and orientation of at least one part of the system setup relative to at least one component of the real environment, subdividing at least part of the view of the real environment into a plurality of regions comprising a first region and a second region, with objects of the real environment within the first region being placed closer to the system setup than objects of the real environment within the second region, and blending in at least one item of virtual information on the display device in at least part of the view of the real environment, considering the position and orientation of said at least one part of the system setup.
    Type: Application
    Filed: January 12, 2023
    Publication date: September 14, 2023
    Inventors: Peter Meier, Frank A. Angermann
  • Publication number: 20230290084
    Abstract: A computer-generated environment may include computer characters. Characteristics of the real-world environment are estimated and the interactions between computer characters and the real-world environment can be based on the estimated characteristics.
    Type: Application
    Filed: February 24, 2023
    Publication date: September 14, 2023
    Inventors: Daniel KURZ, Novaira MASOOD, Shem NGUYEN, Jeremy R. BERNSTEIN
  • Publication number: 20230290085
    Abstract: A method for displaying a target individual includes receiving a plurality of reference markers that characterize a target individual and selecting a first reference image file and a second reference image file from a database. The method further includes displaying a first graphical representation and a second graphical representation over a visual representation of the target individual. The first reference image file is associated with a first anatomical layer and the second reference image file is associated with a different anatomical layer. The method additionally includes modifying at least one of (i) the first graphical representation or (ii) at least one of the plurality of reference markers. The method also includes displaying, on a display, a modified visual representation of the target individual based on the modified at least one of (i) the first graphical representation or (ii) the at least one of the plurality of reference markers.
    Type: Application
    Filed: March 15, 2023
    Publication date: September 14, 2023
    Inventor: Gustav Lo
  • Publication number: 20230290086
    Abstract: Systems and methods for conveying virtual content in an augmented reality environment comprising images of virtual content superimposed over physical objects and/or physical surroundings visible within a field of view of a user as if the images of the virtual content were present in the real world. Exemplary implementations may: obtain user information for a user associated with a presentation device physically present at a location of the system; compare the user information with the accessibility criteria for the virtual content to determine whether any portions of the virtual content are to be presented to the user based on the accessibility criteria and the user information for the user; and facilitate presentation of the virtual content to the user via the presentation device of the user based on the virtual content information, the field of view, and the correlations between the multiple linkage points and the reference frame of the virtual content.
    Type: Application
    Filed: March 30, 2023
    Publication date: September 14, 2023
    Inventor: Nicholas T. Hariton
  • Publication number: 20230290087
    Abstract: An electronic apparatus including a user interface, a camera, a memory, and a processor. The electronic apparatus receives an input of a first user command for generating a map showing arrangement state of at least one device existing in a specific region, based on the first user command being input, obtains an image by capturing the specific region and the at least one device through a camera, obtains information associated with a size and a shape of the specific region based on the image, obtains identification information and arrangement information for the at least one device by recognizing the at least one device included in the image, and generates a map for the specific region based on the information on the size and the shape of the specific region and the identification information and the arrangement information for the at least one device.
    Type: Application
    Filed: May 16, 2023
    Publication date: September 14, 2023
    Applicant: SAMSUNG ELECTRONICS CO., LTD.
    Inventors: Sukun YOON, Jinho KIM, Hyungsuk LEE
  • Publication number: 20230290088
    Abstract: In one implementation, a method of displaying overlapping objects is performed at a device including an optical see-through display, one or more processors, and a non-transitory memory. The method includes obtaining a request to display a first object in front of a second object. The method includes modifying a transparency of the first object in a region corresponding to an overlap between the first object and the second object. The method includes displaying the first object in front of the second object while maintaining visibility of the second object through the overlap between the first object and the second object based on the modified transparency.
    Type: Application
    Filed: May 23, 2023
    Publication date: September 14, 2023
    Inventor: Tobias Eble
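The transparency modification over the overlap region behaves like per-pixel alpha blending restricted to where both objects are present. The sketch below shows that behaviour on tiny "images" of RGB tuples; this representation is invented for the example and is not how an optical see-through display would implement it.

```python
def composite(front, back, alpha=0.5):
    """Blend two same-sized 'objects' (nested lists of RGB tuples, or None
    where the object is absent). Where both are present, the front pixel
    is made semi-transparent so the back object stays visible."""
    h, w = len(front), len(front[0])
    out = [[None] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            f, b = front[y][x], back[y][x]
            if f is not None and b is not None:       # overlap region
                out[y][x] = tuple(int(alpha * fc + (1 - alpha) * bc)
                                  for fc, bc in zip(f, b))
            else:
                out[y][x] = f if f is not None else b
    return out

RED, BLUE = (255, 0, 0), (0, 0, 255)
front = [[RED, RED, None]]
back = [[None, BLUE, BLUE]]
img = composite(front, back)
```

Only the middle pixel, where the objects overlap, is blended; elsewhere each object is shown unchanged.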
  • Publication number: 20230290089
    Abstract: In one embodiment, a method includes capturing images of a first user wearing a VR display device in a real-world environment. The method includes receiving a VR rendering of a VR environment. The VR rendering is from the perspective of the mobile computing device with respect to the VR display device. The method includes generating a first MR rendering of the first user in the VR environment. The first MR rendering of the first user is based on a compositing of the images of the first user and the VR rendering. The method includes receiving an indication of a user interaction with one or more elements of the VR environment in the first MR rendering. The method includes generating, in real-time responsive to the indication of the user interaction with the one or more elements, a second MR rendering of the first user in the VR environment. The one or more elements are modified according to the interaction.
    Type: Application
    Filed: May 22, 2023
    Publication date: September 14, 2023
    Inventors: Sarah Tanner Simpson, Gregory Smith, Jeffrey Witthuhn, Ying-Chieh Huang, Shuang Li, Wenliang Zhao, Peter Koch, Meghana Reddy Guduru, Ioannis Pavlidis, Xiang Wei, Kevin Xiao, Kevin Joseph Sheridan, Bodhi Keanu Donselaar, Federico Adrian Camposeco Paulsen
  • Publication number: 20230290090
    Abstract: Embodiments herein may relate to generating, based on a three-dimensional (3D) graphical representation of a 3D space, a two-dimensional (2D) image that includes respective indications of respective locations of one or more objects in the 3D space. The 2D image may then be displayed to a user that provides user input related to selection of an object of the one or more objects. The graphical representation of the object in the 2D image may then be altered based on the user input. Other embodiments may be described and/or claimed.
    Type: Application
    Filed: June 27, 2022
    Publication date: September 14, 2023
    Applicant: STREEM, LLC
    Inventor: Huapeng SU
  • Publication number: 20230290091
    Abstract: A method for determining personal protective equipment (PPE) comfort for an individual wearer includes defining a first anatomical shape data representative of an anatomical area of the individual wearer prior to donning a PPE, a second anatomical shape data representative of the anatomical area of the individual wearer after donning the PPE, and comparing the first anatomical shape data with the second anatomical shape data. The method further includes determining a soft skin tissue deformation at a plurality of predetermined anatomical positions based on the comparison between the first anatomical shape data and the second anatomical shape data, and determining a displacement comfort threshold (CTd) value based on the soft skin tissue deformation. The method also includes determining a pressure pain threshold (PPT) value, and determining a comfort metric based on the PPT values and the CTd values. The method also includes generating a notification corresponding to the comfort metric.
    Type: Application
    Filed: July 28, 2021
    Publication date: September 14, 2023
    Inventors: Ambuj Sharma, Claire R. Donoghue, Stephen R. Gamble, Andrew W. Long, Christine L. McCool, Caitlin E. Meree, Henning T. Urban, Andrew S. Viner, Richard C. Webb, Caroline M. Ylitalo
  • Publication number: 20230290092
    Abstract: There is provided an information processing device capable of suppressing a decrease in recognition accuracy of a recognition target. The information processing device includes a presentation control unit configured to control a presentation unit to present, to a user, notification information prompting to change an orientation of at least any one of a first part or a second part, on the basis of determination that the first part shields the second part recognized on the basis of a captured image including the first part of a body of the user in an imaging range.
    Type: Application
    Filed: August 6, 2021
    Publication date: September 14, 2023
    Applicant: Sony Group Corporation
    Inventor: Tomohisa TANAKA
  • Publication number: 20230290093
    Abstract: A data processing method according to the present disclosure includes: distinguishing, in a 3D model, an analysis region including at least one tooth region; and determining a degree of completeness of the 3D model based on the analysis region.
    Type: Application
    Filed: November 2, 2021
    Publication date: September 14, 2023
    Applicant: MEDIT CORP.
    Inventor: Dong Hoon LEE
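A degree-of-completeness measure over an analysis region can be sketched as simple coverage: the fraction of tooth-region cells the scan has captured. The mask representation and coverage ratio are assumptions for illustration; the filing leaves the metric abstract.

```python
def completeness(analysis_mask, scanned_mask):
    """Degree of completeness of a scan: the fraction of cells in the
    analysis region (e.g. tooth regions) that have been captured."""
    region = [(r, c) for r, row in enumerate(analysis_mask)
              for c, v in enumerate(row) if v]
    if not region:
        return 1.0
    covered = sum(1 for r, c in region if scanned_mask[r][c])
    return covered / len(region)

analysis = [[0, 1, 1],
            [0, 1, 1]]          # 4 tooth-region cells
scanned  = [[1, 1, 0],
            [0, 1, 1]]          # 3 of those cells captured
score = completeness(analysis, scanned)
```

Restricting the measure to the analysis region means gaps outside the teeth (e.g. gums or background) do not drag the score down.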
  • Publication number: 20230290094
    Abstract: A positioning model optimization method, a positioning method, and a positioning device are provided. The positioning model optimization method includes: inputting a positioning model for a scene, the positioning model including a three-dimensional (3D) point cloud and a plurality of descriptors corresponding to each 3D point in the 3D point cloud; calculating a significance of each 3D point in the 3D point cloud, and if the significance is greater than a predetermined threshold, outputting the 3D point and the plurality of descriptors corresponding to the 3D point to an optimized positioning model for the scene; and outputting the optimized positioning model for the scene.
    Type: Application
    Filed: July 22, 2021
    Publication date: September 14, 2023
    Inventors: Linjie LUO, Jing LIU, Zhili CHEN, Guohui WANG, Xiao YANG, Jianchao YANG, Xiaochen LIAN
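The optimization loop in this abstract reduces to a significance filter over the point cloud: each 3D point whose significance exceeds the threshold passes, along with its descriptors, into the optimized model. A minimal sketch in Python, where the point representation, the precomputed significance values, and the threshold are all illustrative assumptions rather than the patent's actual data model:

```python
def prune_positioning_model(points, threshold):
    """Keep only 3D points whose significance exceeds the threshold,
    carrying their descriptors into the optimized model."""
    return [p for p in points if p["significance"] > threshold]

# Toy model: two points with descriptors and precomputed significance.
model = [
    {"xyz": (0.0, 1.0, 2.0), "descriptors": ["d1", "d2"], "significance": 0.9},
    {"xyz": (3.0, 4.0, 5.0), "descriptors": ["d3"], "significance": 0.2},
]
optimized = prune_positioning_model(model, threshold=0.5)
```

How the significance of each point is calculated is the substance of the claim and is not reproduced here; the sketch only shows the thresholding and output step.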
  • Publication number: 20230290095
    Abstract: The present disclosure relates to a user-interface-framework based processing method, apparatus, device, and medium. The method includes: acquiring a data set of a target component based on a user-interface-framework; performing a layout process on the data set to obtain display style information of the target component; converting the display style information into a drawing vector based on a preset vector drawing library; and generating and displaying a rendering result of the target component based on the drawing vector.
    Type: Application
    Filed: November 17, 2022
    Publication date: September 14, 2023
    Inventor: Tianzhu ZHOU
  • Publication number: 20230290096
    Abstract: Various implementations disclosed herein include devices, systems, and methods that progressively capture data representing an actual appearance of a user for creating a 3D avatar of the user. Image sensors at a user's electronic device may capture images (or other sensor data) of different portions of a user's body over time. Images and other sensor data captured initially or at a given time may not represent all of the user's body. Thus, in some implementations, the progressively captured data provides representations of additional portions of the user's 3D avatar over time.
    Type: Application
    Filed: March 22, 2023
    Publication date: September 14, 2023
    Inventors: Victor Ng-Thow Hing, Timothy B. Henning
  • Publication number: 20230290097
    Abstract: A controller of an information processing apparatus is configured to adjust a virtual distance from a predetermined reference position in virtual space to an object in the virtual space, the object corresponding to an interlocutor of a user, so that the sum of a real distance from a display disposed in real space to the user in the real space and the virtual distance corresponds to a predetermined value.
    Type: Application
    Filed: March 10, 2023
    Publication date: September 14, 2023
    Applicant: TOYOTA JIDOSHA KABUSHIKI KAISHA
    Inventor: Wataru KAKU
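The distance adjustment described in this abstract is simple arithmetic: the virtual distance is whatever remains after subtracting the measured real distance from the predetermined value. A hedged sketch, where clamping at zero is an added assumption not stated in the abstract:

```python
def adjust_virtual_distance(real_distance, target_sum):
    """Choose the virtual distance so that real + virtual equals the
    predetermined value; clamped at zero (an added assumption) so the
    object is never placed behind the reference position."""
    return max(0.0, target_sum - real_distance)

# User is 1.5 m from the display; predetermined sum is 4.0 m,
# so the interlocutor object is placed 2.5 m into the virtual space.
d = adjust_virtual_distance(1.5, 4.0)
```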
  • Publication number: 20230290098
    Abstract: Disclosed are systems and methods for template-based generation of personalized videos. An example method includes receiving a sequence of frame images, face area parameters corresponding to positions of a face area in a frame image of the sequence of frame images, and facial landmark parameters corresponding to the frame image of the sequence of frame images, receiving an image of a source face, modifying, based on the facial landmark parameters corresponding to the frame image, the image of the source face to obtain a further face image featuring the source face adopting a facial expression corresponding to the facial landmark parameters, and inserting the further face image into the frame image at a position determined by the face area parameters corresponding to the frame image, thereby generating an output frame of an output video.
    Type: Application
    Filed: May 22, 2023
    Publication date: September 14, 2023
    Inventors: Victor Shaburov, Alexander Mashrabov, Dmitriy Matov, Sofia Savinova, Alexey Pchelnikov, Roman Golobokov
  • Publication number: 20230290099
    Abstract: A method for three-dimensional reconstruction includes the following operations. At least two frames of first key images for current reconstruction are acquired. A first space surrounding visual cones of the at least two frames of the first key images is determined. The first key images are obtained by photographing a to-be-reconstructed target. A first feature map of the first space is determined based on image information in the at least two frames of the first key images. The first feature map includes first feature information of voxels in the first space. A first reconstruction result of the current reconstruction is obtained based on the first feature map. A second reconstruction result obtained by previous reconstruction is updated based on the first reconstruction result of the current reconstruction.
    Type: Application
    Filed: May 17, 2023
    Publication date: September 14, 2023
    Applicant: Zhejiang SenseTime Technology Development Co., Ltd.
    Inventors: Hujun BAO, Xiaowei ZHOU, Jiaming SUN, Yiming XIE
  • Publication number: 20230290100
    Abstract: A method and apparatus for providing a guide for combining pattern pieces receives a selection of a first point in a first pattern piece and a selection of a second point in a second pattern piece to be combined with the first pattern piece, generates a virtual pattern piece in response to the selection of the second point being received, arranges the virtual pattern piece such that a third point in the virtual pattern piece having a position corresponding to the first point in the first pattern piece is matched to the second point in the second pattern piece, and provides a guide for combining the first pattern piece and the second pattern piece by moving the virtual pattern piece such that an outer line of the second pattern piece and an outer line of the virtual pattern piece correspond to each other.
    Type: Application
    Filed: May 19, 2023
    Publication date: September 14, 2023
    Inventors: Hohyun LEE, Yeji KIM
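Matching the virtual pattern piece's third point to the second point on the second pattern piece amounts to a rigid translation of the virtual piece's outline. A minimal sketch under the assumption that points and outline vertices are 2D coordinates (the abstract does not specify the representation):

```python
def arrange_virtual_piece(outline, third_point, second_point):
    """Translate the virtual piece's outline so that its third point
    coincides with the second point on the second pattern piece."""
    dx = second_point[0] - third_point[0]
    dy = second_point[1] - third_point[1]
    return [(x + dx, y + dy) for (x, y) in outline]

# Move a two-vertex outline so (0, 0) lands on the second point (2, 3).
moved = arrange_virtual_piece([(0.0, 0.0), (1.0, 0.0)], (0.0, 0.0), (2.0, 3.0))
```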
  • Publication number: 20230290101
    Abstract: Embodiments of this application provide a data processing method and apparatus, an electronic device, and a computer-readable storage medium. The method includes obtaining a target video of a target object; determining three-dimensional attitude angles of target joint points of the target object in each frame of image of the target video, and first three-dimensional coordinates of a first joint point and a second joint point corresponding to each frame of image in a first coordinate system corresponding to a virtual object; determining a displacement deviation of the second joint point; correcting the first three-dimensional coordinates of the first joint point according to the first three-dimensional coordinates and the historical three-dimensional coordinates of the second joint point to obtain target three-dimensional coordinates of the first joint point; and determining a three-dimensional attitude of the virtual object.
    Type: Application
    Filed: May 23, 2023
    Publication date: September 14, 2023
    Inventor: Zejun YANG
  • Publication number: 20230290102
    Abstract: A device for detecting a coating applied to a surface includes a portable housing, a light source, a light detector, and a processing unit. The light source emits a first light having a first wavelength. The coating includes a fluorophore that re-emits a second light having a second wavelength, which is different than the first wavelength, in response to excitation by the first light. The light detector receives the second light re-emitted from the coating. The processing unit is adapted to determine a re-emission intensity of the second light and to determine a coverage metric of the coating based on the re-emission intensity of the second light. The coverage metric is then used to infer the efficacy of the coating.
    Type: Application
    Filed: March 9, 2022
    Publication date: September 14, 2023
    Applicant: The Boeing Company
    Inventors: Reuben Strydom, Jason Armstrong, David Corporal, Nicola Vaisey, Celeste De Mezieres, Michael Monteiro
  • Publication number: 20230290103
    Abstract: Apparatus, systems and methods for measuring changes in the volume of a tissue of a person are described herein. The methods include applying a first marker to a portion of a tissue, the first marker having a pattern of markings thereon, each marking of the pattern of markings having an initial position; capturing an image of the pattern of markings at a first time after applying the first marker to the tissue, at least one marking of the pattern of markings having a subsequent position at the first time when the image is captured; determining a change of position of the at least one marking by comparing the subsequent position of the at least one marking in the captured image to the initial position of the at least one marking; and based on the change of position of the at least one marking, determining a physiological measurement of the tissue.
    Type: Application
    Filed: March 9, 2023
    Publication date: September 14, 2023
    Inventors: Michael Gross, John Matthews, Jimmy Chow
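The core measurement step, comparing each marking's subsequent position in the captured image to its initial position, can be sketched as a per-marking Euclidean displacement. Representing positions as (x, y) tuples is an assumption; the patent does not fix a coordinate representation:

```python
import math

def marking_displacements(initial_positions, subsequent_positions):
    """Euclidean change of position for each marking, pairing the
    initial and subsequent (x, y) positions in order."""
    return [math.dist(p0, p1)
            for p0, p1 in zip(initial_positions, subsequent_positions)]

# First marking moved from (0, 0) to (3, 4); second did not move.
changes = marking_displacements([(0.0, 0.0), (1.0, 1.0)],
                                [(3.0, 4.0), (1.0, 1.0)])
```

The mapping from these displacements to a physiological measurement (e.g. a volume change) is the patent's contribution and is not modeled here.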
  • Publication number: 20230290104
    Abstract: An object detection device includes a processor that executes a procedure. The procedure includes: converting an input image into a first vector such that information related to an area of an object in the image is contained in the first vector; converting input text into a second vector such that information related to an order of appearance in the text of one or more word strings each indicating a detection target object included in the text is contained in the second vector; generating a third vector in which the first vector and the second vector have been reflected in a vector of initial values corresponding to detection target objects; and estimating whether or not a feature indicated by the third vector corresponds to a detection target object that appears at which number place in the text, and estimating a position of the detection target object in the image.
    Type: Application
    Filed: February 22, 2023
    Publication date: September 14, 2023
    Applicant: Fujitsu Limited
    Inventor: Moyuru YAMADA
  • Publication number: 20230290105
    Abstract: A product detection device is provided with: an image acquisition unit; a binarization unit; and a detection unit. The image acquisition unit acquires an image of shelves for displaying products. The binarization unit binarizes a region in the image into a product region where products are imaged and a non-product region where things other than the products are imaged. The detection unit detects the display state of products displayed on the shelves in accordance with the width of the binarized product region and the width of a gap region adjacent to the products.
    Type: Application
    Filed: July 31, 2020
    Publication date: September 14, 2023
    Applicant: NEC Corporation
    Inventors: Rina TOMITA, Yuji Tahara
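Measuring the widths of binarized product regions and adjacent gap regions along a shelf is essentially run-length encoding. A sketch, assuming the binarized input is reduced to a 1D row of 0/1 values (the patent's actual binarization and 2D handling are not specified):

```python
def run_widths(binarized_row):
    """Run-length encode a binarized shelf row: 1 marks a product
    region, 0 a gap region; returns (value, width) pairs in order."""
    runs = []
    for v in binarized_row:
        if runs and runs[-1][0] == v:
            runs[-1][1] += 1      # extend the current run
        else:
            runs.append([v, 1])   # start a new run
    return [(v, w) for v, w in runs]

# Three product columns, a two-column gap, then two product columns.
widths = run_widths([1, 1, 1, 0, 0, 1, 1])
```

The display-state decision itself (e.g. flagging a gap wider than a product as a possible out-of-stock slot) would then compare these widths, per the abstract.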
  • Publication number: 20230290106
    Abstract: A captured image is acquired, an instruction with respect to the captured image acquired is received, a likelihood map indicating likelihood of presence of an object in a predetermined region of regions into which the captured image is divided is acquired, a region indicating a position and size of the object in the captured image is estimated, and an object region corresponding to the instruction is determined using the likelihood map and one or more object region candidates selected from the estimated region based on the position indicated by the received instruction.
    Type: Application
    Filed: March 6, 2023
    Publication date: September 14, 2023
    Inventors: YASUYUKI YAMAZAKI, KENSHI SAITO
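The final determination step, selecting the object region candidate that agrees with both the instruction position and the likelihood map, can be sketched as follows. The box format, the click-containment test, and the scoring function are all illustrative assumptions:

```python
def select_object_region(candidates, likelihood, click):
    """Among candidate boxes (x0, y0, x1, y1) that contain the clicked
    point, return the one with the highest likelihood score, or None."""
    containing = [c for c in candidates
                  if c[0] <= click[0] <= c[2] and c[1] <= click[1] <= c[3]]
    if not containing:
        return None
    return max(containing, key=likelihood)

# Two candidate boxes with hypothetical likelihood-map scores.
scores = {(0, 0, 10, 10): 0.3, (5, 5, 20, 20): 0.8}
best = select_object_region(list(scores), scores.__getitem__, click=(7, 7))
```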
  • Publication number: 20230290107
    Abstract: A method for light field rendering includes: obtaining an n-dimensional mixture model, with n a natural number equal to or larger than 4, of a light field. The model is made of kernels wherein each kernel represents light information and is expressed by parameter values; mathematically reducing the n-dimensional mixture model into a 2-dimensional mixture model of an image given a certain point of view, wherein the 2-dimensional model is also made of kernels; rendering a view in a pixel domain from the 2-dimensional model made of kernels.
    Type: Application
    Filed: May 5, 2021
    Publication date: September 14, 2023
    Inventors: Martijn COURTEAUX, Glenn VAN WALLENDAEL, Peter LAMBERT
  • Publication number: 20230290108
    Abstract: In one embodiment, a method of training a machine-learning model for modifying a facial illumination in an image includes accessing an initial image including a human face having an initial illumination and determining one or more illumination priors for the initial image. The method includes providing the initial image and the one or more illumination priors to the machine-learning model; receiving, from the machine-learning model, a set of correction operators identifying a modified illumination for the human face; creating, based at least on the set of correction operators and the initial image, a modified image having the modified illumination; creating, based on the modified image, a reconstructed initial image including the human face having a reconstructed illumination; and adjusting one or more parameters of the machine-learning model by minimizing a loss function based on a difference between the initial and the reconstructed initial images in their respective illumination.
    Type: Application
    Filed: March 2, 2023
    Publication date: September 14, 2023
    Inventors: Kushal Kardam Vyas, Kathleen Sofia Hajash, Sajid Sadi, Sergio Perdices-Gonzalez
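The training objective described at the end of this abstract is a cycle-consistency loss between the initial image and its reconstruction. A sketch of one plausible form; the mean absolute pixel difference is an assumption, since the patent only says the loss is based on the difference between the two images in their respective illumination:

```python
def reconstruction_loss(initial, reconstructed):
    """Mean absolute pixel difference between the initial image and its
    cycle reconstruction (both flattened to lists of intensities)."""
    assert len(initial) == len(reconstructed)
    n = len(initial)
    return sum(abs(a - b) for a, b in zip(initial, reconstructed)) / n

# Only the last "pixel" differs, by 2, over three pixels.
loss = reconstruction_loss([1.0, 2.0, 3.0], [1.0, 2.0, 5.0])
```

In training, minimizing this loss would drive the parameter adjustment step the abstract describes.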
  • Publication number: 20230290109
    Abstract: Various embodiments set forth systems and techniques for evaluating media content items. The techniques include receiving visual feedback associated with one or more audience members viewing a first media content item; analyzing the visual feedback to generate one or more emotion signals based on the visual feedback; and generating a set of features associated with the one or more audience members viewing the first media content item based on the one or more emotion signals.
    Type: Application
    Filed: March 14, 2022
    Publication date: September 14, 2023
    Inventors: Aaron Michael BAKER, Mary Nell BORST, Dennis LI, Jacek Krzysztof NARUNIEC, Dustin TUCKER, Romann Matthew WEBER
  • Publication number: 20230290110
    Abstract: Presented herein are systems and methods for generating synthetic training data for machine learning models. Images of a particular object (such as an aircraft) can be received and processed to cut out the object (i.e., separate the object from the background) from the received image. The systems and methods described herein can detect areas in the background images to place an object. Once a suitable area has been detected, the cutout object image can be superimposed on the background image at the location determined to be suitable for placing the object. Superimposing the object onto the background image can include blending the two images using a plurality of blending techniques to reduce artifacts that may bias a supervised training process.
    Type: Application
    Filed: March 8, 2023
    Publication date: September 14, 2023
    Applicant: The MITRE Corporation
    Inventors: Robert A. CASE, Joseph JUBINSKI, Dasith A. GUNAWARDHANA, Melvin H. DEDICATORIA, Richard W. HUZIL, Ransom WINDER
  • Publication number: 20230290111
    Abstract: A computer-implemented method for processing electronic medical images, the method including receiving one or more digital medical images of at least one pathology specimen, the pathology specimen being associated with a patient, and receiving one or more search criteria. One or more machine learning systems may be determined based on the one or more search criteria. The one or more machine learning systems may be output to a user, wherein outputting the one or more machine learning systems includes applying the one or more machine learning systems to the one or more received medical images, and displaying the one or more digital medical images after the machine learning systems have performed analysis on the digital medical images. A selection from a user may be received, the selection corresponding to a first machine learning system from the one or more machine learning systems. The first machine learning system may be output.
    Type: Application
    Filed: March 7, 2023
    Publication date: September 14, 2023
    Inventors: Jeremy Daniel KUNZ, Danielle GORTON, Adam CASSON