Patent Applications Published on October 31, 2024
  • Publication number: 20240362849
    Abstract: A non-transitory computer-readable recording medium stores a training program for causing a computer to perform a process, the process including: obtaining a model of an object that includes a three-dimensional surface; generating image data in which the model of the object is rendered; specifying three-dimensional skeleton data of the rendered image data by inputting the rendered image data to a first learner trained with image data of an object included in training data as an explanatory variable and three-dimensional skeleton data of the training data as an objective variable; and executing training of a second learner with the specified three-dimensional skeleton data as an objective variable and the model of the object as an explanatory variable.
    Type: Application
    Filed: July 5, 2024
    Publication date: October 31, 2024
    Applicant: Fujitsu Limited
    Inventor: Sosuke YAMAO
  • Publication number: 20240362850
    Abstract: A cloud management platform is configured to: receive a graphic asset uploaded by at least one of a plurality of asset providers; receive payment success information of at least one of a plurality of asset users for the graphic asset, and record that the at least one asset user has permission to use the graphic asset; receive a rendering request that is for the graphic asset and that is sent by the at least one asset user, and request the graphic asset from the cloud management platform in response to the rendering request; when determining that the at least one asset user has the permission to use the graphic asset, send the graphic asset to a rendering unit; render the graphic asset to generate a rendering result, and send the rendering result to the at least one asset user.
    Type: Application
    Filed: July 12, 2024
    Publication date: October 31, 2024
    Inventor: Pu Chen
  • Publication number: 20240362851
    Abstract: A bounding volume is used to approximate the space an object occupies. If a more precise understanding beyond an approximation is required, the object itself is then inspected to determine what space it occupies. Often, a simple volume (such as an axis-aligned box) is used as bounding volume to approximate the space occupied by an object. But objects can be arbitrary, complicated shapes. So a simple volume often does not fit the object very well. That causes a lot of space that is not occupied by the object to be included in the approximation of the space being occupied by the object. Hardware-based techniques are disclosed herein, for example, for efficiently using multiple bounding volumes (such as axis-aligned bounding boxes) to represent, in effect, an arbitrarily shaped bounding volume to better fit the object, and for using such arbitrary bounding volumes to improve performance in applications such as ray tracing.
    Type: Application
    Filed: July 10, 2024
    Publication date: October 31, 2024
    Inventors: Gregory MUTHLER, John BURGESS
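The multi-box idea in this abstract can be illustrated with the classic slab method: test a ray against each axis-aligned box of a compound bounding volume, and report a hit if any member box is hit. The Python below is a software sketch of the geometry only (the patent describes a hardware implementation); the L-shaped object and box layout are invented for the example.

```python
def ray_hits_aabb(origin, inv_dir, lo, hi):
    """Slab-method ray/AABB intersection test (forward rays only)."""
    tmin, tmax = 0.0, float("inf")
    for o, d, l, h in zip(origin, inv_dir, lo, hi):
        t1, t2 = (l - o) * d, (h - o) * d
        tmin = max(tmin, min(t1, t2))
        tmax = min(tmax, max(t1, t2))
    return tmin <= tmax

def ray_hits_multi_box_volume(origin, direction, boxes):
    """A ray 'hits' the compound bounding volume if it hits any member AABB."""
    inv_dir = tuple(1.0 / d if d != 0.0 else float("inf") for d in direction)
    return any(ray_hits_aabb(origin, inv_dir, lo, hi) for lo, hi in boxes)

# An L-shaped object fits two AABBs far better than one big box would.
l_shape = [((0, 0, 0), (2, 1, 1)), ((0, 0, 0), (1, 3, 1))]
print(ray_hits_multi_box_volume((1.5, 2.0, 0.5), (0, 0, 1), l_shape))  # → False (a single enclosing AABB would report a hit here)
print(ray_hits_multi_box_volume((0.5, 2.0, 0.5), (0, 0, 1), l_shape))  # → True (inside the tall box)
```

The first ray passes through the empty corner of the L; a single box spanning both members would have caused a false positive there, which is exactly the wasted space the abstract describes.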
  • Publication number: 20240362852
    Abstract: An instruction (or set of instructions) that can be included in a program to perform a ray tracing acceleration data structure traversal, with individual execution threads in a group of execution threads executing the program performing a traversal operation for a respective ray in a corresponding group of rays, such that the group of rays performs the traversal operation together. The instruction(s), when executed by the execution threads in respect of a node of the ray tracing acceleration data structure, cause one or more rays from the group of plural rays that are performing the traversal operation together to be tested for intersection with the one or more volumes associated with the node being tested. A result of the ray-volume intersection testing can then be returned for the traversal operation.
    Type: Application
    Filed: July 12, 2024
    Publication date: October 31, 2024
    Applicant: Arm Limited
    Inventors: Richard BRUCE, William Robert STOYE, Mathieu Jean Joseph ROBART
  • Publication number: 20240362853
    Abstract: Disclosed is a method including: obtaining neural network(s) trained for rendering images, wherein input of neural network(s) has 3D position of point in real-world environment and output of neural network(s) includes colour and opacity of point; obtaining 3D model(s) of real-world environment; receiving viewpoint from perspective of which image is to be generated; receiving gaze direction; determining region of real-world environment that is to be represented in image, based on viewpoint and field of view of image; determining gaze portion and peripheral portion of region of real-world environment, based on gaze direction, wherein gaze portion corresponds to gaze direction, while peripheral portion surrounds gaze portion; utilising neural network(s) to ray march for gaze portion, to generate gaze segment of image; and utilising 3D model(s) to generate peripheral segment of image.
    Type: Application
    Filed: April 25, 2023
    Publication date: October 31, 2024
    Applicant: Varjo Technologies Oy
    Inventors: Mikko Strandborg, Kimmo Roimela
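The ray-marching step in this abstract can be sketched in a few lines: a function mapping a 3D point to colour and opacity is sampled along each ray and alpha-composited front to back. The hard-coded fog slab below stands in for the trained neural network, and all names and constants are invented for the sketch; this is not Varjo's pipeline.

```python
import math

def toy_network(point):
    """Stand-in for the trained network: colour and opacity of a 3D point."""
    x, y, z = point
    inside = 0.0 <= z <= 1.0          # a slab of fog between z = 0 and z = 1
    return (1.0, 0.5, 0.2), (0.3 if inside else 0.0)

def ray_march(origin, direction, network, num_samples=64, t_far=2.0):
    """Front-to-back alpha compositing of samples along one ray."""
    dt = t_far / num_samples
    color = [0.0, 0.0, 0.0]
    transmittance = 1.0
    for i in range(num_samples):
        t = (i + 0.5) * dt
        point = tuple(o + t * d for o, d in zip(origin, direction))
        rgb, sigma = network(point)
        alpha = 1.0 - math.exp(-sigma * dt)   # opacity of this ray segment
        weight = transmittance * alpha
        for c in range(3):
            color[c] += weight * rgb[c]
        transmittance *= 1.0 - alpha
    return color, transmittance

color, transmittance = ray_march((0.0, 0.0, -0.5), (0.0, 0.0, 1.0), toy_network)
```

The ray crosses one unit of density 0.3, so the returned transmittance comes out at exp(-0.3); in the patented method this marching is done only for the gaze portion, with the cheaper 3D model covering the periphery.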
  • Publication number: 20240362854
    Abstract: New contents data is generated by using a plurality of pieces of contents data owned by a user as raw material data. The information processing apparatus according to the present disclosure obtains user information indicating a user, obtains a plurality of pieces of contents data whose current owner is the user based on the user information, and generates output control data indicating contents in which rendering results of at least two pieces of contents data among the plurality of pieces of contents data are output and controlled as a time series signal.
    Type: Application
    Filed: April 1, 2024
    Publication date: October 31, 2024
    Inventor: Daisuke KATSUMI
  • Publication number: 20240362855
    Abstract: System and method are disclosed for training a generative adversarial network pipeline that can produce realistic artificial depth images useful as training data for deep learning networks used for robotic tasks. A generator network receives a random noise vector and a computer aided design (CAD) generated depth image and generates an artificial depth image. A discriminator network receives either the artificial depth image or a real depth image in alternation, and outputs a predicted label indicating a discriminator decision as to whether the input is the real depth image or the artificial depth image. Training of the generator network is performed in tandem with the discriminator network as a generative adversarial network. A generator network cost function minimizes correctly predicted labels, and a discriminator cost function maximizes correctly predicted labels.
    Type: Application
    Filed: August 10, 2022
    Publication date: October 31, 2024
    Applicant: Siemens Aktiengesellschaft
    Inventors: Wei Xi Xia, Eugen Solowjow, Shashank Tamaskar, Juan L. Aparicio Ojea, Heiko Claussen, Ines Ugalde Diaz, Gokul Narayanan Sathya Narayanan, Yash Shahapurkar, Chengtao Wen
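The two cost functions named in this abstract are the standard adversarial pair, written below as plain binary cross-entropy terms. The networks themselves are omitted; `d_real` and `d_fake` are illustrative names for the discriminator's predicted probability that its input is a real depth image, not identifiers from the patent.

```python
import math

def discriminator_loss(d_real, d_fake):
    """Discriminator wants d_real -> 1 and d_fake -> 0 (maximise correct labels)."""
    return -(math.log(d_real) + math.log(1.0 - d_fake))

def generator_loss(d_fake):
    """Generator wants its fakes labelled real, i.e. d_fake -> 1 (minimise correct labels)."""
    return -math.log(d_fake)

# A discriminator that labels confidently and correctly is cheap for itself
# and expensive for the generator, and vice versa.
print(round(discriminator_loss(0.9, 0.1), 3), round(generator_loss(0.1), 3))  # → 0.211 2.303
```

Training alternates gradient steps on these two losses, with the generator fed a random noise vector plus a CAD-rendered depth image and the discriminator fed real and artificial depth images in alternation, as the abstract describes.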
  • Publication number: 20240362856
    Abstract: Techniques are described for operating an optical system. In some embodiments, light associated with a world object is received at the optical system. Virtual image light is projected onto an eyepiece of the optical system. A portion of a system field of view of the optical system to be at least partially dimmed is determined based on information detected by the optical system. A plurality of spatially-resolved dimming values for the portion of the system field of view may be determined based on the detected information. The detected information may include light information, gaze information, and/or image information. A dimmer of the optical system may be adjusted to reduce an intensity of light associated with the world object in the portion of the system field of view according to the plurality of dimming values.
    Type: Application
    Filed: July 9, 2024
    Publication date: October 31, 2024
    Applicant: Magic Leap, Inc.
    Inventors: Vaibhav Mathur, David Manly, Jahja I. Trisnadi, Clinton Carlisle, Lionel Ernest Edwin, Michael Anthony Klug
  • Publication number: 20240362857
    Abstract: An AR device displays virtual objects to users as part of an AR experience by generating a depth image by rendering the depth image from a three-dimensional (3D) world model. The AR device receives an image from a camera and estimates its physical pose in the real world when the image was captured. The AR device accesses the 3D world model and estimates a virtual pose within a 3D world model that corresponds to the estimated physical pose in the real world. The AR device uses the virtual pose to render the depth image using the 3D world model. The AR device may use a graphics processor to render the depth image from a camera view corresponding to the virtual pose. The AR device uses the depth image to present content to the user over the image captured by the camera.
    Type: Application
    Filed: April 28, 2023
    Publication date: October 31, 2024
    Inventors: Erik Marshall Murphy-Chutorian, Nicholas John Butko
  • Publication number: 20240362858
    Abstract: A parallel-reality application enables users to tag virtual elements in their proximity for later interaction. A geographic location of a user is received and used to identify a virtual location, in a virtual world, that maps to the geographic location of the user. A region of the virtual world is identified based on the identified virtual location and one or more virtual elements within the region of the virtual world are selected. An identifier of the one or more virtual elements is stored in conjunction with an identifier of the user and a list of tagged virtual elements is provided for display to the user. At a later time, the user interacts with a selected one of the tagged virtual elements.
    Type: Application
    Filed: April 24, 2024
    Publication date: October 31, 2024
    Inventors: Tatsuo Nomura, Chihiro Kanno, Hiroki Asakawa
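The tagging flow in this abstract can be reduced to a toy: map a geographic location to a virtual location, select virtual elements within a region around it, and store their identifiers against the user's identifier for later interaction. The linear mapping, element table, and radius below are all invented for the sketch.

```python
import math

VIRTUAL_ELEMENTS = {
    "elem-1": (10.0, 20.0),
    "elem-2": (11.0, 21.0),
    "elem-3": (90.0, 90.0),
}

def to_virtual(geo):
    """Stand-in for the real-world -> virtual-world location mapping."""
    lat, lon = geo
    return lat * 10.0, lon * 10.0

def tag_nearby_elements(user_id, geo, tags, radius=5.0):
    """Select elements in the region around the user's virtual location and store them."""
    vx, vy = to_virtual(geo)
    tagged = [eid for eid, (x, y) in VIRTUAL_ELEMENTS.items()
              if math.hypot(x - vx, y - vy) <= radius]
    tags.setdefault(user_id, []).extend(tagged)   # stored for later interaction
    return tagged

tags = {}
print(tag_nearby_elements("user-42", (1.0, 2.0), tags))  # → ['elem-1', 'elem-2']
```

The stored `tags` mapping is what the abstract's "list of tagged virtual elements" would be read back from when the user interacts later.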
  • Publication number: 20240362859
    Abstract: Systems, methods, and devices provide virtual, three-dimensional, interactive environments with multiple customization layers. A virtual environment platform provides an environment generator engine for generating a computer-generated three-dimensional (3D) space by rendering a 3D model and applying one or more surface customizations to the 3D model. Additionally, product models are mapped to one or more virtual surfaces of the 3D model. An avatar customization engine generates a virtual avatar navigable in the computer-generated 3D space. Moreover, the avatar customization engine defines a first set of customization parameters as mutable and a second set of customization parameters as immutable. The system also causes the virtual interactive environment to be presented, at a display of a user device, and receives user input(s) controlling the virtual avatar and selecting at least one product model of the one or more product models. In response, the system presents data associated with the product model.
    Type: Application
    Filed: April 25, 2024
    Publication date: October 31, 2024
    Applicant: OBSESS, INC.
    Inventors: Neha Singh, John Mann, Hiran Gnanapragasam
  • Publication number: 20240362860
    Abstract: A calculation method according to one aspect of the present disclosure obtains three-dimensional points that represent an object in a space on a computer, each indicating a position on the object, classifies the three-dimensional points into groups based on the respective normal directions of the three-dimensional points, and calculates a first accuracy of each of the groups, the first accuracy increasing with an increase of a second accuracy of at least one three-dimensional point belonging to the group. The three-dimensional points are generated by a sensor detecting light from the object from different positions and in different directions. The normal direction of each three-dimensional point is determined based on the different directions used to generate the three-dimensional point.
    Type: Application
    Filed: July 5, 2024
    Publication date: October 31, 2024
    Inventors: Kensho TERANISHI, Toru MATSUNOBU, Satoshi YOSHIKAWA
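The classification step in this abstract can be sketched by bucketing 3D points on their normal directions and giving each group an accuracy that grows with the best per-point accuracy inside it. Quantising each normal to its dominant signed axis is a simplification invented for the illustration, as are all names below.

```python
def dominant_axis(normal):
    """Quantise a normal to its dominant signed axis, e.g. (0.1, 0.9, 0.2) -> '+y'."""
    axis = max(range(3), key=lambda i: abs(normal[i]))
    return ("-" if normal[axis] < 0 else "+") + "xyz"[axis]

def group_by_normal(points):
    """points: list of (position, normal, per_point_accuracy) tuples."""
    groups = {}
    for pos, normal, acc in points:
        groups.setdefault(dominant_axis(normal), []).append((pos, acc))
    return groups

def group_accuracy(group):
    """First (group) accuracy: increases with a member point's second accuracy."""
    return max(acc for _, acc in group)

points = [
    ((0, 0, 0), (0.0, 0.0, 1.0), 0.9),
    ((1, 0, 0), (0.1, 0.1, 0.95), 0.6),
    ((0, 1, 0), (1.0, 0.0, 0.0), 0.4),
]
groups = group_by_normal(points)
print({k: group_accuracy(g) for k, g in groups.items()})  # → {'+z': 0.9, '+x': 0.4}
```

Taking the maximum is one simple function satisfying the abstract's monotonicity requirement (group accuracy increases with a member's accuracy); the patent does not commit to this particular choice.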
  • Publication number: 20240362861
    Abstract: Systems and methods for aligning and recognizing text in extended reality environments include determining a plane and aligning text on that plane in the extended reality environment to provide for text recognition and interactivity of the recognized text.
    Type: Application
    Filed: July 8, 2024
    Publication date: October 31, 2024
    Applicant: VR-EDU, Inc.
    Inventor: Ethan Fieldman
  • Publication number: 20240362862
    Abstract: A hierarchical data structure has sets of nodes representing a 3D space of an environment at different granularity levels. Sets of neural networks at different granularity levels are trained. For a portion of an output image, a granularity level at which the portion is to be reconstructed is determined. A corresponding node is identified, the node having sets of child nodes. A set of child nodes is selected at the granularity level at which the portion is to be reconstructed. For a child node, a cascade of neural networks is utilised to reconstruct the portion. The granularity level of the (N+1)th neural network is higher than that of the Nth neural network. The input of a neural network includes the outputs of at least a predefined number of previous neural networks.
    Type: Application
    Filed: April 25, 2023
    Publication date: October 31, 2024
    Applicant: Varjo Technologies Oy
    Inventors: Mikko Strandborg, Kimmo Roimela
  • Publication number: 20240362863
    Abstract: A location-based augmented-reality system generates and causes display of augmented-reality content that includes three-dimensional typography, based on a perspective and location of a client device.
    Type: Application
    Filed: July 5, 2024
    Publication date: October 31, 2024
    Inventors: Piers George Cowburn, David Li, Isac Andreas Muller Sandvik, Qi Pan
  • Publication number: 20240362864
    Abstract: A method in a computing device includes: capturing, via a depth sensor, a first point cloud depicting an object; determining, from the first point cloud, a first attribute of a plane corresponding to a surface of the object; monitoring, via a motion sensor, an orientation of the depth sensor; in response to detecting a change in the orientation that meets a threshold, capturing a second point cloud depicting the object; determining, from the second point cloud, a second attribute of the plane corresponding to the surface of the object; determining whether the first attribute and the second attribute match; and when the first attribute and the second attribute match, dimensioning the object based on at least one of the first point cloud and the second point cloud.
    Type: Application
    Filed: April 28, 2023
    Publication date: October 31, 2024
    Inventor: Raghavendra Tenkasi Shankar
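The attribute check in this abstract can be reduced to its core: estimate a plane attribute (here, the unit normal) from each point cloud and proceed to dimensioning only when the two attributes match. Real clouds would need a robust fit (e.g. RANSAC); three points per "cloud" and the 5° tolerance are simplifications invented for the sketch.

```python
import math

def plane_normal(p0, p1, p2):
    """Unit normal of the plane through three points (cross product of two edges)."""
    u = [b - a for a, b in zip(p0, p1)]
    v = [b - a for a, b in zip(p0, p2)]
    n = [u[1]*v[2] - u[2]*v[1], u[2]*v[0] - u[0]*v[2], u[0]*v[1] - u[1]*v[0]]
    length = math.sqrt(sum(c * c for c in n))
    return [c / length for c in n]

def attributes_match(n1, n2, tol_deg=5.0):
    """Match if the planes' normals differ by less than tol_deg (sign-insensitive)."""
    cos = abs(sum(a * b for a, b in zip(n1, n2)))
    return cos >= math.cos(math.radians(tol_deg))

# The same top surface of an object, captured before and after the device
# orientation changed past the threshold.
n_first = plane_normal((0, 0, 1), (1, 0, 1), (0, 1, 1))           # flat top, z = 1
n_second = plane_normal((0, 0, 1.0), (1, 0, 1.01), (0, 1, 1.0))   # near-identical capture
print(attributes_match(n_first, n_second))  # → True
```

When the two captures disagree (say, a moving object or a bad scan), the match fails and dimensioning is withheld, which is the safeguard the claimed method provides.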
  • Publication number: 20240362865
    Abstract: According to systems and techniques disclosed herein, a method for generating a navigable three-dimensional image of a tissue sample may include receiving a plurality of whole slide images (WSI) associated with the tissue sample. The method may further include providing the plurality of whole slide images to a machine-learning model. The machine-learning model may have been trained to identify one or more positional features within the plurality of whole slide images and output a plurality of relative positional relationships corresponding to each of the plurality of whole slide images. The method may further include generating the navigable three-dimensional image of the tissue sample based on the plurality of relative positional relationships. The method may further include generating an interactive display incorporating the navigable three-dimensional image. The method may further include providing, to a user interface, the interactive display.
    Type: Application
    Filed: April 23, 2024
    Publication date: October 31, 2024
    Inventor: Samuel SEYMOUR
  • Publication number: 20240362866
    Abstract: A health management system, including: a health assessment module configured to obtain human health-related health parameter information of a user, and generate a health condition assessment result on the basis of the health parameter information; a health intervention module configured to generate a health management plan on the basis of the health condition assessment result; a human body model generation module configured to generate a human body model that can be displayed on a display interface; and a human body information display module configured to: obtain at least one first display instruction each corresponding to one human body system model of a plurality of human body system models medically classified according to human body systems, and display, on the display interface, the three-dimensional human body model in the form of layers based on the human body system model corresponding to the at least one first display instruction.
    Type: Application
    Filed: July 2, 2024
    Publication date: October 31, 2024
    Inventors: Bingdong WANG, Hua BAI, Yunan WANG, Limin YANG
  • Publication number: 20240362867
    Abstract: A computer-implemented method, according to one embodiment, includes outputting, from a first user device associated with a first user that owns a first portion of land in a metaverse to a second user device associated with a second user that owns a second portion of land in the metaverse, a first request for being defined as a first neighbor of the first user. In response to a determination that an acceptance has been received from the second user device to be defined as the first neighbor of the first user, an adapter is caused to be added to a sub-portion of the first portion of land and a sub-portion of the second portion of land. The method further includes generating a definition of neighbors of the first portion of land, the definition including the first neighbor. The definition is caused to be recorded in a predetermined database.
    Type: Application
    Filed: April 25, 2023
    Publication date: October 31, 2024
    Inventors: Guang Han Sui, Peng Hui Jiang, Jun Su, Su Liu, Yu Zhu
  • Publication number: 20240362868
    Abstract: A user creates and changes the appearance of a digital object in extended reality environments by voice commands in conjunction with pointing input, retrieving a representation of the digital object for display at a location pointed to by the user with an artificial intelligence process communicating with a database of images or with image-generating software.
    Type: Application
    Filed: January 8, 2024
    Publication date: October 31, 2024
    Applicant: VR-EDU, Inc.
    Inventor: Ethan Fieldman
  • Publication number: 20240362869
    Abstract: Provided is an optical device for augmented reality capable of providing high luminous uniformity. According to an aspect of the present invention, there is provided an optical device for augmented reality, the optical device including: an optical means configured to allow virtual image light, output from an image output unit, to propagate through the interior thereof and transmit real object image light therethrough toward a pupil of a user; and a plurality of reflective units disposed in the optical means to transfer the virtual image light toward the pupil of the user; wherein the plurality of reflective units are each configured such that a dielectric coating layer coated with a dielectric material is formed on the reflective surface thereof that reflects incident virtual image light and transfers it to the pupil.
    Type: Application
    Filed: April 25, 2024
    Publication date: October 31, 2024
    Applicant: LETINAR CO., LTD
    Inventors: Jeong Hun HA, Sung A KIM
  • Publication number: 20240362870
    Abstract: Systems and methods are provided that allow developers to quickly and easily develop augmented reality (AR) applications that enrich the real-world with data from the cloud. Given that the development of an AR application is a complex and time-consuming process, the systems and methods described herein allow software developers to concisely describe their needs in a succinct program, written in the QWL domain-specific language. The systems and methods take this program and automatically generate an AR application.
    Type: Application
    Filed: May 13, 2024
    Publication date: October 31, 2024
    Inventors: Arnab Nandi, Codi Burley, Ritesh Sarkhel
  • Publication number: 20240362871
    Abstract: One aim of the present invention is to provide an information processing device whereby a user can be made aware of the audio transmission range in a situation where a virtual space is used and users communicate with each other. This information processing device includes: a detection unit that detects audio generated by a user operating an avatar inside a virtual space; an audio control unit that outputs the audio to a user of an avatar that fulfills prescribed conditions in a relationship with a speaking avatar, being an avatar operated by the user that provided the audio; and a display control unit that changes the display mode for a listener avatar, being an avatar that fulfills the prescribed conditions.
    Type: Application
    Filed: September 3, 2021
    Publication date: October 31, 2024
    Applicant: NEC Corporation
    Inventors: Shin NORIEDA, Kenta Fukuoka, Yoshiyuki Tanaka
  • Publication number: 20240362872
    Abstract: A cross reality system that renders virtual content generated by executing native mode applications may be configured to render web-based content using components that render content from native applications. The system may include a Prism manager that provides Prisms in which content from executing native applications is rendered. For rendering web-based content, a browser, accessing the web-based content, may be associated with a Prism and may render content into its associated Prism, creating the same immersive experience for the user as when content is generated by a native application. The user may access the web application from the same program launcher menu as native applications. The system may have tools that enable a user to access these capabilities, including by creating for a web location an installable entity that, when processed by the system, results in an icon for the web content in a program launcher menu.
    Type: Application
    Filed: July 3, 2024
    Publication date: October 31, 2024
    Applicant: Magic Leap, Inc.
    Inventors: Haiyan Zhang, Robert John Cummings MacDonald
  • Publication number: 20240362873
    Abstract: Aspects of the present disclosure involve a system comprising a computer-readable storage medium storing at least one program and method for rendering three-dimensional captions (3D) in real-world environments depicted in image content. An editing interface is displayed on a client device. The editing interface includes an input component displayed with a view of a camera feed. A first input comprising one or more text characters is received. In response to receiving the first input, a two-dimensional (2D) representation of the one or more text characters is displayed. In response to detecting a second input, a preview interface is displayed. Within the preview interface, a 3D caption based on the one or more text characters is rendered at a position in a 3D space captured within the camera feed. A message is generated that includes the 3D caption rendered at the position in the 3D space captured within the camera feed.
    Type: Application
    Filed: July 3, 2024
    Publication date: October 31, 2024
    Inventors: Kyle Goodrich, Samuel Edward Hare, Maxim Maximov Lazarov, Tony Mathew, Andrew James McPhee, Daniel Moreno, Wentao Shang
  • Publication number: 20240362874
    Abstract: Aspects of the present disclosure involve a system for providing AR experiences. The system accesses, by a messaging application, an image depicting a real-world fashion item of a user and generates a three-dimensional (3D) virtual fashion item based on the real-world fashion item depicted in the image. The system stores the 3D virtual fashion item in a database that includes a virtual wardrobe comprising a plurality of 3D virtual fashion items associated with the user. The system generates, by the messaging application, an augmented reality (AR) experience that allows the user to interact with the virtual wardrobe.
    Type: Application
    Filed: July 9, 2024
    Publication date: October 31, 2024
    Inventors: Avihay Assouline, Itamar Berger, Gal Dudovitch, Peleg Harel, Ma'ayan Mishin Shuvi
  • Publication number: 20240362875
    Abstract: A computer-implemented method of aligning a source model with a target model includes receiving the source model and the target model, identifying geometric features in each of the source model and target model, assigning a feature vector to each feature, defining an associated geometry type, position, direction and magnitude for the feature, pairing each feature vector in the source model with each other feature vector in the source model and pairing each feature vector in the target model with each other feature vector in the target model, calculating a pair vector for each pairing, defining the geometry type of each feature vector in the pairing, the dimension of each feature vector in the pairing, a relative orientation and separation distance, identifying matching pair vectors between the source model and target model; and calculating a transformation matrix between the source model and target model based on the matching pair vectors.
    Type: Application
    Filed: June 15, 2023
    Publication date: October 31, 2024
    Applicant: Hong Kong Centre for Logistics Robotics Limited
    Inventors: Yunhui Liu, Xueyan Tang
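The pairing step in this abstract lends itself to a compact sketch: pair up geometric features within each model, describe each pair by quantities that survive a rigid transform (geometry types, separation distance, relative orientation), and match pair vectors across the two models. Computing the final transformation matrix from the matches is left out; the feature dictionaries and rounding tolerance are invented for the example.

```python
import math
from itertools import combinations

def pair_vector(f1, f2):
    """Transform-invariant description of a feature pair: types, distance, angle."""
    dist = math.dist(f1["pos"], f2["pos"])
    cos = sum(a * b for a, b in zip(f1["dir"], f2["dir"]))
    return (frozenset([f1["type"], f2["type"]]), round(dist, 3), round(cos, 3))

def matching_pairs(source, target):
    """Pair vectors present in both models are alignment candidates."""
    src = {pair_vector(a, b) for a, b in combinations(source, 2)}
    tgt = {pair_vector(a, b) for a, b in combinations(target, 2)}
    return src & tgt

source = [
    {"type": "hole", "pos": (0, 0, 0), "dir": (0, 0, 1)},
    {"type": "edge", "pos": (3, 0, 0), "dir": (1, 0, 0)},
]
# The same part translated by (10, 0, 0): the pair invariants are unchanged.
target = [{**f, "pos": (f["pos"][0] + 10,) + f["pos"][1:]} for f in source]
print(len(matching_pairs(source, target)))  # → 1
```

Because distance and relative orientation are unchanged by translation and rotation, matched pair vectors pin down corresponding feature pairs from which a transformation matrix can then be solved.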
  • Publication number: 20240362876
    Abstract: An electronic device includes memory storing instructions, a display and at least one processor comprising processing circuitry. The instructions, when executed by the at least one processor individually and/or collectively, cause the electronic device to: obtain, from an external electronic device providing a virtual environment, information on a space of the virtual environment in which an avatar is to be located, information on a situation related to the space, and information on body proportion of avatars located in the space; identify first proportion information of an avatar defined in the space, second proportion information of an avatar defined in the situation, and third proportion information used by the avatars; identify a priority with respect to proportions for the avatars, and display the avatar having a second proportion changed from a first proportion in the space, based on the priority.
    Type: Application
    Filed: February 16, 2024
    Publication date: October 31, 2024
    Inventors: Jihyeok YOON, Sungoh KIM, Jihyun KIM, Jongpil PARK, Donghyun YEOM, Donghoon HAN
  • Publication number: 20240362877
    Abstract: A computer-implemented method and apparatus for determining morphology of a human breast, the method comprising: obtaining at least one image of a subject; extracting features of at least a portion of the subject's body from the at least one image, wherein the features correspond to a model of standard human anatomy; generating a three-dimensional model of the subject's body based on the extracted features and the model of standard human anatomy; and determining a morphological parameter of the subject's breast from the three-dimensional model of the subject's body. In another aspect, a computer implemented method is presented for adjusting a virtual garment to fit the three-dimensional model and determining a parameter based on the adjusted virtual garment. In a further aspect, there is provided a method for providing a manufactured wearable garment.
    Type: Application
    Filed: July 7, 2022
    Publication date: October 31, 2024
    Inventors: Prashant Aparajeya, Vandita Shukla, Salman Khan, Frederic Leymarie, Tigran Hakobyan, Tang Thuy Trang Ngo
  • Publication number: 20240362878
    Abstract: An information processing apparatus receives, while a first virtual viewpoint image generated based on first three-dimensional shape data corresponding to a structure is displayed on a user display unit, a user operation on a first virtual camera corresponding to the first virtual viewpoint image, and generates, based on the user operation, camera parameters indicating a position and an orientation of a second virtual camera corresponding to a second virtual viewpoint image generated based on second three-dimensional shape data indicating a shape of the structure different from a shape indicated by the first three-dimensional shape data.
    Type: Application
    Filed: April 22, 2024
    Publication date: October 31, 2024
    Inventor: WATARU SUZAKI
  • Publication number: 20240362879
    Abstract: Aspects of the present disclosure relate to an anchor object to which virtual objects can be consistently mapped in an artificial reality (XR) environment. In some implementations, the virtual objects can include avatars of users accessing the XR environment on respective XR systems. The anchor object can be a virtual object, such as a menu or shape, or a physical object, such as a stage computing device positioned in the users' surrounding real-world environments. The users can move the anchor object as rendered on their respective XR systems, which causes reciprocal movement of their corresponding avatars on other users' XR systems. Thus, virtual objects can be consistently referenced across all of the XR systems accessing the XR environment.
    Type: Application
    Filed: April 25, 2024
    Publication date: October 31, 2024
    Inventors: David Frederick GEISERT, Lauren JAVOR, Paul Armistead HOOVER
  • Publication number: 20240362880
    Abstract: Systems and methods used to perform touchless registration of images for surgical navigation are disclosed. In some embodiments, the systems include a 3-D scanning device to capture spatial data of a region of interest of a patient and a reference frame. A digital mesh model is generated from the spatial data. A reference frame model is registered with the digital mesh model. Anatomical features of the digital mesh model and a patient registration model are utilized to register the digital mesh model with the patient registration model. A position of a surgical instrument is tracked relative to the reference frame and the patient registration model.
    Type: Application
    Filed: July 3, 2024
    Publication date: October 31, 2024
    Inventors: Ryan D. DATTERI, Yvan PAITEL, Kevin E. MARK, Samantha Joanne PRESTON, Andrew SUMMERS, Ganesh SAIPRASAD
  • Publication number: 20240362881
    Abstract: Systems and methods are disclosed for generating a scaled reconstruction for a consumer product. One method includes receiving digital input comprising a calibration target and an object; defining a three-dimensional coordinate system; positioning the calibration target in the three-dimensional coordinate system; based on the digital input, aligning the object to the calibration target in the three-dimensional coordinate system; and generating a scaled reconstruction of the object based on the alignment of the object to the calibration target in the three-dimensional coordinate system.
    Type: Application
    Filed: July 5, 2024
    Publication date: October 31, 2024
    Inventors: Eric J. VARADY, Atul KANAUJIA
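The scaling step in this abstract boils down to a single ratio: a calibration target of known physical size anchors the unit of an otherwise unit-free reconstruction, and the resulting factor is applied to the object's points. The target size, measured extent, and point list below are made up for the sketch.

```python
def scale_factor(target_size_mm, target_size_model_units):
    """How many millimetres one reconstruction unit represents."""
    return target_size_mm / target_size_model_units

def scale_points(points, factor):
    """Apply the calibration-derived scale to reconstructed object points."""
    return [tuple(c * factor for c in p) for p in points]

# A 100 mm checkerboard edge measures 0.5 units in the raw reconstruction,
# so every model unit represents 200 mm.
factor = scale_factor(100.0, 0.5)
object_points = [(0.0, 0.0, 0.0), (0.25, 0.0, 0.0)]   # raw reconstruction
print(factor, scale_points(object_points, factor))  # → 200.0 [(0.0, 0.0, 0.0), (50.0, 0.0, 0.0)]
```

The harder parts of the claim (detecting the target in the digital input and aligning object and target in one coordinate system) happen before this step and are not shown.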
  • Publication number: 20240362882
    Abstract: This application provides a suit processing method and apparatus for a virtual object, an electronic device, and a storage medium. The method includes displaying a virtual scene, the virtual scene including a first virtual object wearing a first suit, the first suit including a plurality of components distributed at different positions on the first virtual object; determining that a color of a first region does not match a color of a first component; and in response to the determining, replacing the first component in the first suit with a second component, a color of the second component matching the color of the first region, and a wearing position for the second component being the same as that for the first component.
    Type: Application
    Filed: July 11, 2024
    Publication date: October 31, 2024
    Inventors: Cong Tian, Jieqi XIE, Boyi LIU, Weijian CUI, Yu DENG, Zhi LI, Jingjing HE
  • Publication number: 20240362883
    Abstract: Modulation-encoded light, using different spectral bin coded light components, can illuminate a stationary or moving (relative) target object or scene. Response signal processing can use information about the respective different time-varying modulation functions, to decode to recover information about a respective response parameter affected by the target object or scene. Electrical or optical modulation encoding can be used. LED-based spectroscopic analysis of a composition of a target (e.g., SpO2, glucose, etc.) can be performed; such can optionally include decoding of encoded optical modulation functions. Baffles or apertures or optics can be used, such as to constrain light provided by particular LEDs. Coded light illumination can be used with a focal plane array light imager receiving response light for inspecting a moving semiconductor or other target.
    Type: Application
    Filed: June 21, 2024
    Publication date: October 31, 2024
    Inventor: Shrenik Deliwala
  • Publication number: 20240362884
    Abstract: Provided is an electronic device configured to receive an analog signal for an original image captured using a target camera mounted on a vehicle, convert the analog signal for the original image into a digital signal based on a resolution of the original image, generate a digital image based on the digital signal, determine a target object in the digital image using one or more image recognition models, generate a synthesized image by synthesizing a computer graphic for the target object with the digital image, convert the synthesized image into an analog signal, and transmit the analog signal for the synthesized image to an analog signal-receiving device installed in the vehicle.
    Type: Application
    Filed: March 27, 2024
    Publication date: October 31, 2024
    Inventors: Young Chan LA, Seok Hoon KANG
  • Publication number: 20240362885
    Abstract: Provided is an information-processing device including a CPU and a memory storing instructions that, when executed by the CPU, cause the information-processing device to include at least a machine learning model configured to receive an input image and an attribute as input, and to output at least one region in the input image and an evaluation value for each of the at least one region, wherein, for a common input image, the region and the evaluation value output when one attribute is given are different from the region and the evaluation value output when another attribute different from the one attribute is given.
    Type: Application
    Filed: September 30, 2021
    Publication date: October 31, 2024
    Inventors: Hiya ROY, Mitsuru NAKAZAWA, Bjorn STENGER
  • Publication number: 20240362886
    Abstract: An electronic device may include a camera and a processor. The processor may be configured to: identify a width of a window to be used to segment the image, based on a field-of-view (FoV) of an image obtained through the camera, identify a height of the window based on a first area including a visual object corresponding to a reference surface, segment the first area into a plurality of partial areas, using the window, based on the width and the height, identify whether an external object is included in a first partial area among the plurality of partial areas, from a neural network to which the first partial area is inputted, and based on a gap identified based on whether the external object is included in the first partial area, obtain a second partial area separated from the first partial area within the image.
    Type: Application
    Filed: April 25, 2024
    Publication date: October 31, 2024
    Inventors: Haejun JUNG, Yosep PARK
  • Publication number: 20240362887
    Abstract: Methods for processing a semiconductor wafer are provided. A plurality of patches are extracted from a query image related to the semiconductor wafer. The patches are encoded with a set of weightings to obtain an encoding matrix. A database is searched based on the encoding matrix to retrieve images corresponding to the query image. The retrieved images are used to inspect the semiconductor wafer for defects, so as to generate an inspection result. A semiconductor process is performed on the semiconductor wafer when the inspection result is normal.
    Type: Application
    Filed: July 10, 2024
    Publication date: October 31, 2024
    Inventor: Katherine CHIANG
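The retrieval pipeline in the abstract above (encode patches with a set of weightings, then search a database with the resulting encoding matrix) can be sketched as below. The weighting matrix here is a random projection standing in for whatever learned weightings the patent uses, and the database search is a plain nearest-neighbor lookup over mean patch encodings; all names and shapes are assumptions for illustration.

```python
import numpy as np

def encode_patches(patches, weights):
    """Encode flattened patches with a weighting matrix to obtain an
    encoding matrix, one row per patch. (The weights here are a random
    projection, a stand-in for the learned weightings in the abstract.)"""
    return patches @ weights

def search_database(query_enc, db_encs, top_k=1):
    """Return indices of database encodings closest to the query's mean
    patch encoding, by Euclidean distance (an assumed search criterion)."""
    q = query_enc.mean(axis=0)
    dists = np.linalg.norm(db_encs - q, axis=1)
    return np.argsort(dists)[:top_k]

# Five database images, each summarized by the mean encoding of 6 patches
# of 16 pixels; querying with image 2's own patches should retrieve it.
rng = np.random.default_rng(2)
weights = rng.random((16, 4))
db_patches = [rng.random((6, 16)) for _ in range(5)]
db_encs = np.stack([encode_patches(p, weights).mean(axis=0)
                    for p in db_patches])
idx = search_database(encode_patches(db_patches[2], weights), db_encs)
```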
  • Publication number: 20240362888
    Abstract: One embodiment of the present invention provides an information processing apparatus, an information processing method, and an information processing program capable of recording useful discontinuity information of a subject. According to one aspect of the present invention, there is provided an information processing apparatus including: a processor, in which the processor is configured to: acquire first discontinuity information obtained by analyzing an image of a subject with a first criterion, the first discontinuity information including information indicating a feature of a discontinuity; acquire second discontinuity information obtained by analyzing the image of the subject with a second criterion that is stricter than the first criterion, the second discontinuity information including information indicating the feature of the discontinuity; and record the second discontinuity information in association with the first discontinuity information.
    Type: Application
    Filed: July 11, 2024
    Publication date: October 31, 2024
    Applicant: FUJIFILM Corporation
    Inventors: Mitsutoshi TAIRA, Makoto YONAHA, Eiichi TANAKA
  • Publication number: 20240362889
    Abstract: An example device for processing image data includes a memory configured to store image data; and one or more processors implemented in circuitry and configured to: determine a set of keypoints representing objects in an image of the image data captured by a camera of a vehicle; determine depth values for the objects in the image; determine positions of the objects relative to the vehicle using the set of keypoints and the depth values; and at least partially control operation of the vehicle according to the positions of the objects. For example, the depth values may represent descriptors for the keypoints or be used to determine the descriptors for the keypoints.
    Type: Application
    Filed: April 28, 2023
    Publication date: October 31, 2024
    Inventors: Amin Ansari, Mandar Narsinh Kulkarni, Ahmed Kamel Sadek
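The core geometric step described above, combining 2-D keypoints with depth values to obtain object positions relative to the vehicle, can be sketched with a standard pinhole back-projection. The intrinsics (fx, fy, cx, cy) and all values below are hypothetical, and the sketch ignores the camera-to-vehicle extrinsic transform the real system would also apply.

```python
import numpy as np

def backproject(keypoints_px, depths_m, fx, fy, cx, cy):
    """Back-project 2-D keypoints with per-keypoint depth into 3-D camera
    coordinates using a pinhole model (illustrative intrinsics, not from
    the patent)."""
    pts = []
    for (u, v), z in zip(keypoints_px, depths_m):
        x = (u - cx) * z / fx
        y = (v - cy) * z / fy
        pts.append((x, y, z))
    return np.array(pts)

# A keypoint at the principal point lies on the optical axis; one 100 px
# to the right at 10 m depth sits 2 m to the side (100 * 10 / 500).
pts = backproject([(320.0, 240.0), (420.0, 240.0)], [10.0, 10.0],
                  fx=500.0, fy=500.0, cx=320.0, cy=240.0)
```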
  • Publication number: 20240362890
    Abstract: A data processing method includes: obtaining at least one frame of an original image; performing visual saliency detection on each frame of the original image in the at least one frame of the original image to obtain visual saliency data of each frame of the original image; performing differential processing on different positions of each frame of the original image to obtain at least one frame of a processed target image based on the visual saliency data; and encoding the processed target image.
    Type: Application
    Filed: April 9, 2024
    Publication date: October 31, 2024
    Inventors: Linlin CHEN, Liang HE
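The "differential processing on different positions" step above can be illustrated by one common realization: keep salient pixels untouched and blur the rest, so a downstream encoder spends fewer bits on low-saliency areas. This is a minimal sketch of the idea, not the patented method; the box blur and the 0.5 saliency threshold are arbitrary choices.

```python
import numpy as np

def box_blur(img, k):
    """Naive box blur with an odd kernel size k (edges clamped)."""
    pad = k // 2
    padded = np.pad(img, pad, mode="edge")
    out = np.zeros_like(img, dtype=float)
    for i in range(img.shape[0]):
        for j in range(img.shape[1]):
            out[i, j] = padded[i:i + k, j:j + k].mean()
    return out

def saliency_preprocess(img, saliency, k=5):
    """Differential processing: salient pixels stay sharp, the rest are
    blurred before encoding (illustrative sketch, assumed threshold 0.5)."""
    blurred = box_blur(img, k)
    return np.where(saliency > 0.5, img, blurred)

# Left half of the frame is marked salient; only the right half is blurred.
rng = np.random.default_rng(0)
img = rng.random((8, 8))
saliency = np.zeros((8, 8))
saliency[:, :4] = 1.0
out = saliency_preprocess(img, saliency)
```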
  • Publication number: 20240362891
    Abstract: A system for selecting motion models for aligning scene content captured by different image sensors is configurable to (i) access a first image captured by a first image sensor and a second image captured by a second image sensor; (ii) access a set of motion models; (iii) define a reference patch within the second image; (iv) generate a respective match patch for each motion model of the set of motion models; (v) determine a similarity between each respective match patch and the reference patch within the second image; (vi) select a final motion model from the set of motion models based upon the similarity between each respective match patch and the reference patch within the second image; and (vii) utilize the final motion model to generate an output image for display to a user.
    Type: Application
    Filed: April 25, 2023
    Publication date: October 31, 2024
    Inventors: Michael BLEYER, Pascal PARÉ, Paul LEE, Aleksander Bogdan BAPST
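The selection loop in steps (iv)-(vi) above can be sketched as follows, with the set of motion models simplified to integer translations and the similarity measure chosen as sum of squared differences; both are assumptions for illustration, since the abstract does not name the model family or the metric.

```python
import numpy as np

def extract_patch(img, y, x, size):
    return img[y:y + size, x:x + size]

def select_motion_model(img1, img2, ref_yx, size, candidate_shifts):
    """For each candidate motion model (here, integer translations, a
    simplifying assumption), generate the match patch it predicts in img1
    and score it against the reference patch in img2 by sum of squared
    differences; return the best-scoring model."""
    ref = extract_patch(img2, *ref_yx, size)
    best, best_err = None, np.inf
    for dy, dx in candidate_shifts:
        y, x = ref_yx[0] + dy, ref_yx[1] + dx
        if y < 0 or x < 0 or y + size > img1.shape[0] or x + size > img1.shape[1]:
            continue  # match patch would fall outside img1
        match = extract_patch(img1, y, x, size)
        err = float(np.sum((match - ref) ** 2))
        if err < best_err:
            best, best_err = (dy, dx), err
    return best

# img1 is img2 shifted by (2, 3); the selector should recover that model.
rng = np.random.default_rng(1)
img2 = rng.random((32, 32))
img1 = np.roll(np.roll(img2, 2, axis=0), 3, axis=1)
shift = select_motion_model(img1, img2, (8, 8), 8,
                            [(0, 0), (2, 3), (1, 1), (-2, -3)])
```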
  • Publication number: 20240362892
    Abstract: Provided are an image classification device and method that are capable of extracting and mapping an important feature in an image. The image classification device includes: a feature extraction unit 101 that generates a first image group by applying different noises to the same image among images included in an image group and a second image group including different images, is trained such that features obtained from the first image group are close to one another, is trained such that features obtained from the second image group are more distinct, and extracts features; a feature mapping unit 102 that maps the extracted plurality of features two-dimensionally or three-dimensionally using manifold learning; and a display unit 103 that displays a mapping result and constructs a training information application task screen.
    Type: Application
    Filed: July 30, 2021
    Publication date: October 31, 2024
    Inventors: Sota KOMATSU, Masayoshi ISHIKAWA, Fumihiro BEKKU
  • Publication number: 20240362893
    Abstract: The present disclosure relates to methods, systems and non-transitory computer-readable storage mediums for detecting an object of a first object type in a video sequence. A first algorithm is used to detect areas or objects in the scene as captured in the video stream that have an uncertain object type status. A second algorithm is used to provide a background model of the video sequence. For areas or objects having the uncertain object type status, the background model is used to check if the area or object is considered to be part of the background or the foreground in the video sequence. If the area or object is determined to belong to the foreground, the area or object is classified as the first object type. If the area or object is determined to not belong to the foreground, the area or object is not classified as the first object type.
    Type: Application
    Filed: March 26, 2024
    Publication date: October 31, 2024
    Applicant: Axis AB
    Inventors: Ludvig HASSBRING, Song YUAN
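The decision rule in the abstract above, resolving uncertain detections by consulting the background model's foreground/background segmentation, can be sketched as below. Checking only each detection's center pixel is a simplification (a real system would likely test the whole region), and all names are illustrative.

```python
def classify_uncertain(detections, foreground_mask):
    """For each detection with uncertain object-type status, check its
    center pixel against the background model's foreground mask:
    foreground hits are classified as the first object type, background
    hits are not. (Simplified sketch; names are illustrative.)"""
    results = {}
    for det_id, (y, x) in detections.items():
        results[det_id] = "first_type" if foreground_mask[y][x] else "not_first_type"
    return results

# A 4x4 foreground mask with a 2x2 foreground blob in the middle.
foreground_mask = [
    [0, 0, 0, 0],
    [0, 1, 1, 0],
    [0, 1, 1, 0],
    [0, 0, 0, 0],
]
detections = {"det_a": (1, 2), "det_b": (0, 3)}
result = classify_uncertain(detections, foreground_mask)
```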
  • Publication number: 20240362894
    Abstract: A processor-implemented method with image processing includes obtaining point cloud data of a target scene, generating a feature map of the point cloud data by extracting a feature from the point cloud data, for each of a plurality of objects included in the target scene, generating a feature vector indicating the object in the target scene based on the feature map, and reconstructing a panorama of the target scene based on the feature vectors of the objects.
    Type: Application
    Filed: April 26, 2024
    Publication date: October 31, 2024
    Applicant: SAMSUNG ELECTRONICS CO., LTD.
    Inventors: Xiaoxuan YU, Weiming LI, Hao WANG, Jingrui SONG, Qiang WANG, SoonYong CHO, Young Hun SUNG
  • Publication number: 20240362895
    Abstract: A system filters near-duplicate images to generate data for training or validation of a machine learning model. The system receives a set of images and generates feature vectors from the images. The system clusters the feature vectors. For each cluster of feature vectors, the system determines near-duplicate pairs of images. The system may generate a cost matrix representing a linear assignment problem and find near-duplicate pairs of images by solving the linear assignment problem. The system filters images from the set of images based on the near-duplicate pairs of images. The system uses the filtered set of images for training or validation of the machine learning model.
    Type: Application
    Filed: April 26, 2023
    Publication date: October 31, 2024
    Inventors: Abdelhamid Bouzid, Mark William Sabini
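The filtering pipeline above (cluster feature vectors, find near-duplicate pairs within each cluster, then drop duplicates) can be sketched as follows. The abstract's cost-matrix/linear-assignment step is replaced here by a simple within-cluster distance threshold, and the clustering is a greedy leader scheme; both are stand-ins chosen for brevity, and the thresholds are arbitrary.

```python
import numpy as np

def cluster_features(feats, radius):
    """Greedy leader clustering: each vector joins the first cluster whose
    leader lies within `radius`, otherwise it starts a new cluster (a
    deterministic stand-in for the clustering step in the abstract)."""
    leaders, labels = [], []
    for f in feats:
        for k, lead in enumerate(leaders):
            if np.linalg.norm(f - lead) <= radius:
                labels.append(k)
                break
        else:
            leaders.append(f)
            labels.append(len(leaders) - 1)
    return np.array(labels)

def near_duplicate_pairs(feats, labels, threshold):
    """Within each cluster, flag pairs whose feature distance falls below
    `threshold` (simpler than the linear-assignment matching in the
    abstract)."""
    pairs = []
    for k in sorted(set(labels.tolist())):
        idx = np.flatnonzero(labels == k)
        for i in range(len(idx)):
            for j in range(i + 1, len(idx)):
                a, b = idx[i], idx[j]
                if np.linalg.norm(feats[a] - feats[b]) < threshold:
                    pairs.append((int(a), int(b)))
    return pairs

def filter_images(n_images, pairs):
    """Drop one image of every near-duplicate pair, keeping the rest."""
    dropped = {b for _, b in pairs}
    return [i for i in range(n_images) if i not in dropped]

# Two tight groups of feature vectors plus one outlier: one image of each
# tight pair is dropped, the outlier survives.
feats = np.array([[0.0, 0.0], [0.01, 0.0], [5.0, 5.0], [5.0, 5.01], [9.0, 0.0]])
labels = cluster_features(feats, radius=1.0)
pairs = near_duplicate_pairs(feats, labels, threshold=0.1)
kept = filter_images(len(feats), pairs)
```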
  • Publication number: 20240362896
    Abstract: In some embodiments, a method sends information for a sample of content, a first question, and a second question for output on an interface. The first question receives, from a subject, a first response for a sample level rating for an artifact that is perceived to be visible in the sample and the second question receives, from the subject, a second response for regions in the sample that are perceived to contain the artifact. The method receives the first response for the sample level rating and the second response for regions that are perceived to contain the artifact. First responses are combined from multiple subjects to generate an opinion score for the sample and second responses are combined to generate region scores for regions. The method generates training data from the opinion score and the region scores to train a process to perform an action based on the artifacts.
    Type: Application
    Filed: April 11, 2024
    Publication date: October 31, 2024
    Applicants: Disney Enterprises, Inc., Beijing Hulu Software Technology Development Co., Ltd.
    Inventors: Yuanyi XUE, Scott LABROZZI, Wenhao ZHANG, Christopher Richard SCHROERS, Roberto Gerson DE ALBUQUERQUE AZEVEDO, Xuchang HUANGFU, Lemei HUANG, Yang ZHANG
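The aggregation described above, combining first responses into a per-sample opinion score and second responses into per-region scores, might look like the sketch below. The abstract does not specify the aggregation functions, so the arithmetic mean and the fraction-of-subjects rule are assumptions for illustration.

```python
import statistics

def mean_opinion_score(sample_ratings):
    """Combine per-subject sample-level ratings into a mean opinion score
    (an assumed aggregation; the abstract leaves the function unspecified)."""
    return statistics.mean(sample_ratings)

def region_scores(region_marks, n_regions):
    """Combine per-subject region selections into per-region scores: the
    fraction of subjects who flagged each region as containing the
    artifact (also an assumed aggregation)."""
    return [sum(marks[r] for marks in region_marks) / len(region_marks)
            for r in range(n_regions)]

# Three subjects rate the sample; two subjects mark which of two regions
# they perceive the artifact in (1 = flagged).
mos = mean_opinion_score([3, 4, 5])
regions = region_scores([[1, 0], [1, 1]], 2)
```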
  • Publication number: 20240362897
    Abstract: In various examples, systems and methods are disclosed relating to synthetic data generation using viewpoint augmentation for autonomous and semi-autonomous systems and applications. One or more circuits can identify a set of sequential images corresponding to a first viewpoint and generate a first transformed image corresponding to a second viewpoint using a first image of the set of sequential images as input to a machine-learning model. The one or more circuits can update the machine-learning model based at least on a loss determined according to the first transformed image and a second image of the set of sequential images.
    Type: Application
    Filed: April 12, 2024
    Publication date: October 31, 2024
    Applicant: NVIDIA Corporation
    Inventors: Tzofi Klinghoffer, Jonah Philion, Zan Gojcic, Sanja Fidler, Or Litany, Wenzheng Chen, Jose Manuel Alvarez Lopez
  • Publication number: 20240362898
    Abstract: Systems and methods are described for receiving, by processing circuitry, a plurality of maps of the geographic area, wherein each map of the plurality of maps is generated based on a respective plurality of overhead images captured during a respective portion of a time period, and each overhead image of the respective pluralities of overhead images comprises a respective plurality of pixels, and each pixel is designated as being of a particular mapping category of a plurality of mapping categories. The systems and methods may be configured to train, using the processing circuitry and the plurality of maps, the machine learning model to identify the expected distribution for the mapping categories of the geographic area at the given time.
    Type: Application
    Filed: April 12, 2024
    Publication date: October 31, 2024
    Inventors: Steven P. Brumby, Amy E. Larson, Melanie Corcoran, Peter Kerins, Mark Mathis