Patents Issued on May 16, 2024
-
Publication number: 20240161410
Abstract: Various implementations disclosed herein include devices, systems, and methods that generate floorplans and measurements using a three-dimensional (3D) representation of a physical environment generated based on sensor data.
Type: Application
Filed: January 25, 2024
Publication date: May 16, 2024
Inventors: Feng Tang, Afshin Dehghan, Kai Kang, Yang Yang, Yikang Liao, Guangyu Zhao
-
Publication number: 20240161411
Abstract: Disclosed are systems and methods that provide a novel framework for an industrial metaverse that provides an immersive, computerized experience for users to, but not limited to, collaborate on the design of a space, control of machinery, management of employees, and the like. The disclosed framework can execute known or to-be-known augmented reality (AR), virtual reality (VR), mixed reality (MR), and extended reality (XR) modules to analyze and render displayable, interactive interface objects within a display of a device. Effectively, the immersive environment created via the disclosed framework creates and enables a spatial, virtual workspace, whereby real-world activities can be virtualized so as to enable seamless, efficient, and secure working environments for users. The framework provides a 3D virtual space that provides an interactive platform for users to connect, engage with each other, engage with products/assets, and/or digitize physical tasks.
Type: Application
Filed: November 16, 2023
Publication date: May 16, 2024
Inventors: Maurizio Galardo, Simon Bennett, Alessandro Giusti
-
Publication number: 20240161412
Abstract: Aspects of the subject disclosure may include, for example, receiving a first request for a first communication service from a first end user device operated by a first user, where the first request provides or can be utilized to obtain first service information associated with first service parameters corresponding to the first user according to a first root ID provided to the first end user device by the first user; providing, over a network, first immersive media for presentation by the first end user device in response to the first request, where the first immersive media is provided utilizing the first network slice, the first spectrum resource allocation, and the first RAT that are selected according to the first service parameters. Other embodiments are disclosed.
Type: Application
Filed: November 11, 2022
Publication date: May 16, 2024
Applicant: AT&T Intellectual Property I, L.P.
Inventors: Yupeng Jia, Hongyan Lei, Venson Shaw
-
Publication number: 20240161413
Abstract: In some examples, temporal impact analysis of cascading events on metaverse-based organization avatar entities may include determining a temporal impact of a metaverse event on a specified organization avatar entity. With respect to the specified organization avatar entity, a similarity of the metaverse event may be determined in a current temporal context to past events. A reaction plan of a plurality of reaction plans may be selected from an event database and based on the determined similarity. Based on an analysis of the temporal impact with respect to the selected reaction plan, instructions may be generated to execute the selected reaction plan by a metaverse operating environment.
Type: Application
Filed: November 15, 2022
Publication date: May 16, 2024
Applicant: ACCENTURE GLOBAL SOLUTIONS LIMITED
Inventors: Janardan MISRA, Sanjay PODDER
-
Publication number: 20240161414
Abstract: Aspects of the subject disclosure may include, for example, providing virtual links and/or virtual content to metaverse users based at least in part on the metaverse user's physical location and/or viewing direction. Low-resolution images of physical objects expected to be in a field of view of a metaverse user may be matched to actual images of physical objects in a metaverse user's field of view prior to providing virtual links and/or virtual content. Other embodiments are disclosed.
Type: Application
Filed: November 15, 2022
Publication date: May 16, 2024
Applicant: AT&T Intellectual Property I, L.P.
Inventors: Wei Wang, Lars Benjamin Johnson, Mikhail Istomin, Rachel Rosencrantz
-
Publication number: 20240161415
Abstract: A method and system for performing dynamic provisioning of augmented reality information is provided. The method includes acquiring, from a user device of a user, user device-based information, retrieving rebate information corresponding to the user device-based information, and retrieving user information of the user of the user device. Based on the acquired and retrieved information, the method further tracks movement or activity of the user device, and determines whether a predetermined condition is met. When the predetermined condition is determined to have been met, the method displays one or more rebates in augmented reality over image information displayed on a display of the user device. However, when the predetermined condition is determined not to have been met, the method continues the tracking of movement or activity of the user device until the predetermined condition is determined to be met.
Type: Application
Filed: November 16, 2022
Publication date: May 16, 2024
Applicant: JPMorgan Chase Bank, N.A.
Inventors: Alexander BUTS, Anitha SRINIVASAN, Steven WEINER, Tyrone SAUNDERS, Andrew Tayag RODRIGUEZ, Jonathan LEACH, Eric BIKORIMANA, Aniella ARANTES, Rocky MAUFORT
-
Publication number: 20240161416
Abstract: An augmented reality interaction system applied to a physical scene and comprising a server and a plurality of mobile devices is provided. The server stores a point cloud map corresponding to the physical scene, and one of the mobile devices uploads a physical image, role state variation data, and local variation data to the server. The server compares the physical image with the point cloud map to generate orientation data of the mobile device in real time, and adjusts role data corresponding to a user according to the role state variation data and the local variation data. The server pushes the orientation data of the mobile device and the role data to the other mobile devices such that augmented reality images displayed by the other mobile devices are adjusted in real time according to the orientation data of the mobile device and the role data.
Type: Application
Filed: November 21, 2022
Publication date: May 16, 2024
Inventors: Hsien Cheng Liao, Jia Wei Hong
-
Publication number: 20240161417
Abstract: Aspects of the present disclosure relate to augmented reality (AR) measurement display. A measurement of a quantity in an environment of a user currently using an AR device can be received. A preferred unit of measurement (UoM) for the measurement of the quantity can be selected based on an analysis of historical data of the user. The measurement of the quantity can be conceptualized by obtaining a virtual object associated with the measurement of the quantity. A command to display, on the AR device, the measurement of the quantity within the preferred UoM and the virtual object associated with the measurement of the quantity can be issued.
Type: Application
Filed: November 14, 2022
Publication date: May 16, 2024
Inventors: Tushar Agrawal, Martin G. Keen, Jeremy R. Fox, Sarbajit K. Rakshit
-
Publication number: 20240161418
Abstract: Systems and techniques for displaying augmented reality enhanced media content are disclosed. For instance, a process can include displaying media content on one or more displays. The process can include: generating a first media content element for an application engine associated with an application state of the application engine; generating a second media content element for the application engine associated with the application state of the application engine; displaying, on a first display of a first device, the first media content element; and outputting the second media content element for display on a second display of a second device relative to a pose of the first display of the first device.
Type: Application
Filed: December 2, 2022
Publication date: May 16, 2024
Inventors: David DURNIL, Todd LEMOINE, Murali MANTRAVADI
-
Publication number: 20240161419
Abstract: A method includes using at least one processor to receive image data including one or more virtual objects and determining a distance between a user extremity and the one or more virtual objects based on the image data. Further, the method includes detecting a first gesture of the user extremity based on the image data and generating a virtual sphere within a virtual object of the one or more virtual objects. The method also includes determining that the distance is less than a threshold, detecting movement of the user extremity based on the image data, and adjusting a position of the virtual object based on the movement of the user extremity.
Type: Application
Filed: March 29, 2023
Publication date: May 16, 2024
Inventors: Rafi Izhar, Simon Blackwell, Pascal Gilbraith, John Howard Pritchard
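The distance-gated interaction this abstract describes can be sketched as follows. This is a minimal illustration, not code from the filing; the function names and the 0.15 m grab threshold are assumptions chosen for the example:

```python
import math

def distance(extremity, obj_center):
    """Euclidean distance between a tracked extremity point and a virtual object center."""
    return math.dist(extremity, obj_center)

def should_grab(extremity, obj_center, threshold=0.15):
    """True when the extremity is within the (illustrative) interaction threshold."""
    return distance(extremity, obj_center) < threshold

def move_object(obj_center, hand_start, hand_end):
    """Translate the virtual object by the extremity's movement delta between frames."""
    delta = [e - s for s, e in zip(hand_start, hand_end)]
    return tuple(c + d for c, d in zip(obj_center, delta))
```

The threshold check gates the manipulation: only once the extremity is close enough does the tracked movement get applied to the object's position.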
-
Publication number: 20240161420
Abstract: An augmented reality display device, used for wearing in front of an eye of a user, which includes an image source, a polarizer, a first wave plate, a transflective element, a first lens, a second wave plate, a reflective polarizer, a second lens, a light guiding unit, and a curved transflective film. The image source is used for emitting an image light beam. The polarizer, the first wave plate, the transflective element, the first lens, the second wave plate, the reflective polarizer, and the second lens are sequentially disposed on a path of the image light beam. The light guiding unit has a reflective surface. The curved transflective film is disposed on the light guiding unit. After being incident on the light guiding unit, the image light beam is sequentially reflected by the reflective surface and the curved transflective film to the eye of the user.
Type: Application
Filed: March 24, 2023
Publication date: May 16, 2024
Applicant: Acer Incorporated
Inventors: Yi-Jung Chiu, Tsung-Wei Tu, Wei-Kuo Shih
-
Publication number: 20240161421
Abstract: Methods and devices are disclosed for intra-operative viewing of pre- and intra-operative 3D patient images.
Type: Application
Filed: June 15, 2023
Publication date: May 16, 2024
Applicants: MONTEFIORE MEDICAL CENTER, ALBERT EINSTEIN COLLEGE OF MEDICINE, The New York City Health and Hospitals Corporation
Inventors: Oren Mordechai TEPPER, Jillian SCHREIBER, Cesar COLASANTE
-
Publication number: 20240161422
Abstract: According to an embodiment, a processor of a wearable device displays a first visual object associated with a position viewed through the Field of View (FoV) of a first user. The processor obtains, based on identifying an external electronic device connected to the wearable device, a state of the external electronic device. The processor identifies, based on the state of the external electronic device corresponding to a preset state for displaying a visual object associated with the position, a cluster common to the first user and a second user of the external electronic device. The processor adjusts, based on the identified cluster, the first visual object viewed through the FoV to a second visual object indicated by the cluster.
Type: Application
Filed: August 15, 2023
Publication date: May 16, 2024
Applicant: SAMSUNG ELECTRONICS CO., LTD.
Inventors: Sanghun LEE, Jaewon BANG, Donghyun YEOM, Moonsoo CHANG
-
Publication number: 20240161423
Abstract: Disclosed are example embodiments of systems and methods for virtual try-on of articles of clothing. An example method of virtual try-on of articles of clothing includes selecting a garment from a pre-existing database. The method also includes loading a photo of a source model wearing the selected garment. Additionally, the method includes generating a semantic segmentation of the model image. The method also includes extracting the selected garment from the photo of the model. Additionally, the method includes determining a correspondence between a target model and the source model by performing a feature point detection and description of the target model and the source model, and performing feature matching and correspondence validation. The method also includes performing garment warping and alignment of the extracted garment. Additionally, the method includes overlaying and rendering the garment.
Type: Application
Filed: October 10, 2023
Publication date: May 16, 2024
Inventors: Sandra Sholl, Adam Freede, Kimberly Byers, Samuel Aronoff
-
Publication number: 20240161424
Abstract: Introduced here are computer programs and associated computer-implemented techniques for establishing the dimensions of interior spaces. These computer programs are able to accomplish this by combining knowledge of these interior spaces with spatial information that is output by an augmented reality (AR) framework. Such an approach allows two-dimensional (2D) layouts to be seamlessly created through guided corner-to-corner measurement of interior spaces.
Type: Application
Filed: January 22, 2024
Publication date: May 16, 2024
Inventors: Victor Palmer, Cole Winans
-
Publication number: 20240161425
Abstract: Aspects of the present disclosure involve a system comprising a computer-readable storage medium storing at least one program and method for performing operations comprising: receiving, by a messaging application, a video feed from a camera of a user device that depicts a face; receiving a request to add a 3D caption to the video feed; identifying a graphical element that is associated with context of the 3D caption; and displaying the 3D caption and the identified graphical element in the video feed at a position in 3D space of the video feed proximate to the face depicted in the video feed.
Type: Application
Filed: January 22, 2024
Publication date: May 16, 2024
Inventors: Kyle Goodrich, Samuel Edward Hare, Maximov Lazarov, Tony Mathew, Andrew McPhee, Daniel Moreno, Wentao Shang
-
Publication number: 20240161426
Abstract: Augmented reality presentations are provided at respective electronic devices. A first electronic device receives information relating to a modification made to an augmented reality presentation at a second electronic device, and the first electronic device modifies the first augmented reality presentation in response to the information.
Type: Application
Filed: January 22, 2024
Publication date: May 16, 2024
Inventors: Sean Blanchflower, Timothy Halbert
-
Publication number: 20240161427
Abstract: Systems and methods enable providing various virtual activities. One of the methods comprises obtaining images of a physical area acquired by an imaging device; generating a virtual activity world representative of the physical area, the generating comprising using at least part of the images to map physical elements of a plurality of physical elements to the virtual activity world, wherein the physical elements include a given physical element movable by the user, and adding one or more virtual objects to the virtual activity world; using the virtual activity world to detect an interaction between the given physical element mapped to the virtual activity world and a given virtual object; responsive to a detected interaction, determining an outcome of the interaction; applying a change in the virtual activity world corresponding to the outcome; and displaying a representation of the virtual activity world on a display device.
Type: Application
Filed: March 7, 2022
Publication date: May 16, 2024
Inventors: Robert BIEHL, Arno MITTELBACH, Markus SCHLATTMANN, Thomas BADER
-
Publication number: 20240161428
Abstract: The disclosure relates to a method for presenting a shared virtual environment to users residing in different vehicles while driving in a real-world environment. In at least one of the vehicles, an interactive user device of that vehicle receives definition data from the user residing in that vehicle, wherein the definition data describe a new virtual event for the virtual environment, and a position sensor of the vehicle generates position data of the vehicle while the user generates the definition data, wherein the position data describe a current position of the vehicle in the real-world environment. A server device receives the definition data and the corresponding position data and generates corresponding event data of a new virtual event, wherein based on the position data a trigger region is defined that defines where in the real-world environment the virtual event shall be triggered for the users in their respective vehicle.
Type: Application
Filed: March 11, 2021
Publication date: May 16, 2024
Inventors: Daniel PFALLER, Christoph WEIGAND
-
Publication number: 20240161429
Abstract: A good display image is obtained by superimposing, on a real space image, a virtual space image suitable for the real space image. An image generating section generates, in reference to CG (Computer Graphics) data, a virtual space image corresponding to an imaging range of an imaging section on a real space. For example, a virtual imaging section is installed on a virtual space in a manner corresponding to the imaging section on the real space, and a CG object on the virtual space is imaged to generate a virtual space image. An image superimposing section superimposes the virtual space image on a real space image obtained by the imaging section imaging an object on the real space, to obtain a display image.
Type: Application
Filed: January 18, 2023
Publication date: May 16, 2024
Inventors: MASAO ERIGUCHI, RYO OGUCHI, SHIN TAKANASHI
-
Publication number: 20240161430
Abstract: This disclosure describes one or more implementations of systems, non-transitory computer-readable media, and methods that apply a resolution-independent, vector-based decal on a 3D object. In one or more implementations, the disclosed systems apply a piecewise non-linear transformation on an input decal vector geometry to align the decal with a surface of an underlying 3D object. To apply a vector-based decal on a 3D object, in certain embodiments, the disclosed systems parameterize a 3D mesh of the 3D object to create a mesh map. Moreover, in some instances, the disclosed systems determine intersections between edges of a decal geometry and edges of the mesh map to add vertices to the decal geometry at the intersections. Additionally, in some implementations, the disclosed systems lift and project vertices of the decal geometry into three dimensions to align the vertices with faces of the 3D mesh of the 3D object.
Type: Application
Filed: November 10, 2022
Publication date: May 16, 2024
Inventors: Sumit Dhingra, Siddhartha Chaudhuri, Vineet Batra
-
Publication number: 20240161431
Abstract: Systems and methods for identifying tooth correspondences are disclosed. A method includes receiving a digital representation including patient teeth, and identifying a tooth outline from the digital representation. The method includes retrieving a 3D mesh including model teeth. The method includes projecting a first mesh boundary onto a patient tooth and modifying the first projected mesh boundary to match the first tooth outline. The method includes identifying a first tooth point on the first tooth outline that corresponds with a first mesh point on the first projected mesh boundary. The method includes mapping the first tooth point to the 3D mesh. The method includes determining that the first and second tooth points correspond to a common 3D mesh point. The method includes modifying at least one of the digital representation or the 3D mesh based on determining the tooth points correspond to the common 3D mesh point.
Type: Application
Filed: August 18, 2023
Publication date: May 16, 2024
Applicant: SDC U.S. SmilePay SPV
Inventors: Jared Lafer, Ramsey Jones, Ryan Amelon
-
Publication number: 20240161432
Abstract: Disclosed herein is a method for generating a virtual concert environment in a metaverse. The method may include collecting data related to a virtual concert, generating a multi-feature layer map for reflecting features of respective elements constituting a virtual concert environment based on the data, generating a virtual concert environment based on the multi-feature layer map, and aligning and matching an object in the virtual concert environment based on virtual space coordinates of a metaverse space.
Type: Application
Filed: June 1, 2023
Publication date: May 16, 2024
Applicant: ELECTRONICS AND TELECOMMUNICATIONS RESEARCH INSTITUTE
Inventors: Jin-Sung CHOI, Ki-Hong KIM, Yong-Wan KIM
-
Publication number: 20240161433
Abstract: A method for manufacturing a plate-shaped three-dimensional object includes: preparing color image data including a colored three-dimensional pattern; and gray-scaling the color image data to obtain grayscale image data. The method further includes: creating three-dimensional data based on a gradation of the grayscale image data; creating color three-dimensional data by performing texture mapping on the three-dimensional data using the color image data as a texture; and shaping the plate-shaped three-dimensional object by a full-color 3D printer based on the color three-dimensional data.
Type: Application
Filed: September 14, 2023
Publication date: May 16, 2024
Applicant: EPOCH COMPANY, LTD.
Inventor: Koichi NISHINO
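The core step of deriving 3D data from grayscale gradation can be illustrated as a simple height map: darker or lighter pixels become extrusion heights on a regular grid. This sketch is not from the filing; the linear 0-255 mapping, the 5 mm maximum height, and the function names are assumptions:

```python
def grayscale_to_heightmap(gray, max_height=5.0):
    """Map 0-255 grayscale values to extrusion heights (here in mm, linearly)."""
    return [[(v / 255.0) * max_height for v in row] for row in gray]

def heightmap_to_vertices(heights, pitch=1.0):
    """Generate (x, y, z) grid vertices, one per pixel, for downstream meshing."""
    return [(x * pitch, y * pitch, z)
            for y, row in enumerate(heights)
            for x, z in enumerate(row)]
```

A full pipeline would then triangulate the grid and apply the original color image as a texture, as the abstract describes.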
-
Publication number: 20240161434
Abstract: Methods and systems are disclosed for displaying an augmented reality virtual object on a multimedia device. One method comprises detecting, in an augmented reality environment displayed using a first device, a virtual object; detecting, within the augmented reality environment, a second device, the second device comprising a physical multimedia device; and generating, at the second device, a display comprising a representation of the virtual object.
Type: Application
Filed: January 23, 2024
Publication date: May 16, 2024
Inventors: Kevin GORDON, Charlotte SPENDER
-
Publication number: 20240161435
Abstract: Examples described herein provide a method for point cloud alignment. The method includes receiving a first set of three-dimensional (3D) points of an environment. The method further includes capturing a second set of 3D points of the environment using a sensor of a processing system. The method further includes aligning the first set of 3D points of the environment with the second set of 3D points of the environment to create a point cloud of the environment. The method further includes generating, on a display of the processing system, a graphical representation of the point cloud of the environment. The graphical representation displays at least a portion of the first set of 3D points of the environment as an augmented reality element.
Type: Application
Filed: August 8, 2023
Publication date: May 16, 2024
Inventors: Angelo Wostal, John Chan, Michael MÜLLER, Daniel Korgel, Daniel Pompe
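The alignment step above can be sketched in its simplest form: translating one point set so its centroid coincides with the other's. This is an illustrative assumption, not the patented method; a practical system would also estimate rotation (e.g., with ICP), and the function names are invented for the example:

```python
def centroid(points):
    """Mean position of a list of (x, y, z) points."""
    n = len(points)
    return tuple(sum(p[i] for p in points) / n for i in range(3))

def align_by_centroid(source, target):
    """Translate `source` so its centroid matches `target`'s.

    Translation-only alignment; real pipelines also solve for rotation.
    """
    cs, ct = centroid(source), centroid(target)
    shift = tuple(ct[i] - cs[i] for i in range(3))
    return [tuple(p[i] + shift[i] for i in range(3)) for p in source]
```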
-
Publication number: 20240161436
Abstract: Compact LiDAR representation includes performing operations that include generating a three-dimensional (3D) LiDAR image from LiDAR input data; encoding, by an encoder model, the 3D LiDAR image to a continuous embedding in continuous space; and performing, using a code map, a vector quantization of the continuous embedding to generate a discrete embedding. The operations further include decoding, by the decoder model, the discrete embedding to generate modified LiDAR data, and outputting the modified LiDAR data.
Type: Application
Filed: November 10, 2023
Publication date: May 16, 2024
Inventors: Yuwen XIONG, Wei-Chiu MA, Jingkang WANG, Raquel URTASUN
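The vector-quantization step described above replaces each continuous embedding vector with its nearest entry in a learned code map. A minimal nearest-neighbor sketch follows; the function names and the toy codebook are assumptions for illustration, not the filing's implementation:

```python
def quantize(embedding, code_map):
    """Map each continuous vector to the index of its nearest codebook entry."""
    def sq_dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return [min(range(len(code_map)), key=lambda i: sq_dist(v, code_map[i]))
            for v in embedding]

def dequantize(indices, code_map):
    """Recover the discrete embedding as codebook vectors for the decoder."""
    return [code_map[i] for i in indices]
```

In VQ-style models the codebook itself is learned jointly with the encoder and decoder; here it is fixed only to keep the sketch self-contained.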
-
Publication number: 20240161437
Abstract: A system for digital twinning a lift device includes a lift device and processing circuitry. The lift device includes at least one of a sensor, a device, or an electronic control system. The processing circuitry is configured to obtain machine-specific data from the at least one of the sensor, the device, or the electronic control system of the lift device. The processing circuitry is also configured to obtain a user input regarding a requested configuration of the lift device. The processing circuitry is further configured to generate a virtual lift device using the requested configuration of the lift device and the machine-specific data obtained from the lift device. The processing circuitry is further configured to operate a user device to present the virtual lift device according to the requested configuration in a virtual environment of a webpage or in an augmented reality environment.
Type: Application
Filed: November 15, 2023
Publication date: May 16, 2024
Applicant: Oshkosh Corporation
Inventor: Korry D. Kobel
-
Publication number: 20240161438
Abstract: There is provided an information processing apparatus (40) including an information acquisition unit (402) that acquires operation information concerning user operation performed on a virtual object superimposed and displayed on a real space or a virtual space, and a sense of force control unit (408) that outputs, based on the operation information, via a sense-of-force device attached to a part of a body of a user, a sense of force for causing the user to recognize weight.
Type: Application
Filed: February 8, 2022
Publication date: May 16, 2024
Inventors: YUSUKE NAKAGAWA, TAHA MORIYAMA, MASANORI OKAZAKI, OSAMU ITO, YOHEI FUKUMA, AYUMI NAKAGAWA
-
Publication number: 20240161439
Abstract: A computing device includes digital imaging functionality that captures images digitally using any of a variety of different technologies. The computing device receives an indication of, or determines, a distance between a physical location of a tag and an image capture module. The tag is a device that transmits and receives signals allowing one or more other devices to determine the physical location of the tag, such as an ultra-wideband tag. The computing device also includes a flash intensity determination system that automatically generates, based at least in part on the distance between the tag and the image capture module, a flash intensity for capturing a digital image.
Type: Application
Filed: November 16, 2022
Publication date: May 16, 2024
Applicant: Motorola Mobility LLC
Inventors: Ashok Oliver Prabhu, Jeevitha Jayanth, Sindhu Chamathakundil
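A distance-to-intensity mapping like the one described can be sketched with an inverse-square assumption: light falling on the subject drops with the square of distance, so flash output scales up accordingly, clamped to the hardware range. The formula, constants, and function name are illustrative assumptions, not taken from the filing:

```python
def flash_intensity(distance_m, reference_m=1.0, base=0.25, max_intensity=1.0):
    """Scale flash output with the square of tag distance (inverse-square falloff),
    clamped to the flash's normalized [0, 1] range."""
    intensity = base * (distance_m / reference_m) ** 2
    return min(max_intensity, max(0.0, intensity))
```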
-
Publication number: 20240161440
Abstract: Images captured by different image capturing devices may have different fields of view and/or resolutions. One or more of these images may be aligned based on an image template, and additional details for the adapted images may be predicted using a machine-learned data recovery model and added to the adapted images such that the images may have the same field of view or the same resolution.
Type: Application
Filed: November 16, 2022
Publication date: May 16, 2024
Applicant: Shanghai United Imaging Intelligence Co., Ltd.
Inventors: Meng Zheng, Yuchun Liu, Fan Yang, Srikrishna Karanam, Ziyan Wu, Terrence Chen
-
Publication number: 20240161441
Abstract: There is provided a deep learning-based overlay key centering system and a method thereof that may precisely measure and examine an alignment state of fine patterns of a semiconductor substrate. The method includes collecting an input data set from at least one device, the input data set comprising measurement image data of an overlay key and label data including information on a position and bounding box size of the overlay key; and training the model by inputting the input data set to a model for deep learning. The step of training the model may include a step of calculating a loss function by comparing result data predicted by the model with the label data, and a step of optimizing an algorithm of the model by modifying a weight of the model so that a loss value calculated with the loss function may become smaller than a reference value.
Type: Application
Filed: May 3, 2023
Publication date: May 16, 2024
Applicant: AUROS TECHNOLOGY, INC.
Inventors: Soo-Yeon MO, Ga-Min KIM, Hyo-Sik HAM
-
Publication number: 20240161442
Abstract: A method and apparatus with object detector training is provided. The method includes obtaining first input data and second input data from a target object; obtaining second additional input data by performing data augmentation on the second input data; extracting a first feature to a shared embedding space by inputting the first input data to a first encoder; extracting a second feature to the shared embedding space by inputting the second input data to a second encoder; extracting a second additional feature to the shared embedding space by inputting the second additional input data to the second encoder; identifying a first loss function based on the first feature, the second feature, and the second additional feature; identifying a second loss function based on the second feature and the second additional feature; and updating a weight of the second encoder based on the first loss function and the second loss function.
Type: Application
Filed: August 17, 2023
Publication date: May 16, 2024
Applicants: SAMSUNG ELECTRONICS CO., LTD., Korea University Research and Business Foundation
Inventors: Sujin JANG, Sangpil KIM, Jinkyu KIM, Wonseok ROH, Gyusam CHANG, Dongwook LEE, Dae Hyun JI
-
Publication number: 20240161443
Abstract: Before recognition processing is performed, preprocessing is performed on image data acquired by a sensor or image data obtained by converting the image data. An information processing system according to an embodiment includes a specifying unit (201) that specifies a correction target pixel in a depth map using a first learning model and a correction unit (202) that corrects the correction target pixel specified by the specifying unit.
Type: Application
Filed: February 15, 2022
Publication date: May 16, 2024
Inventor: KAZUYUKI OKUIKE
-
Publication number: 20240161444
Abstract: A method for generating training sample images for an image binarization model, and reconstruction of broken characters up to a significant extent, is disclosed. In some embodiments, the method includes receiving a source image and a corresponding target image from an image dataset via at least one encoder model; generating a source image feature map corresponding to the source image and a target image feature map corresponding to the target image; generating a rough stylized image feature map through the AdaIN module based on each of the source image feature map and the target image feature map; transforming the rough stylized image feature map into an image form to obtain a rough stylized image; and generating a residual details image to obtain a final stylized image based on a combination of the residual details image and the rough stylized image.
Type: Application
Filed: March 22, 2023
Publication date: May 16, 2024
Inventors: SUDIP DAS, KINJAL DASGUPTA, DHEVENDRA ALAGAN PALANIVEL, SAINARAYANAN GOPALAKRISHNAN
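The AdaIN (adaptive instance normalization) step the abstract relies on re-scales a source feature channel so its mean and standard deviation match the target's. A minimal one-channel sketch, with invented function names and without the surrounding encoder/decoder:

```python
def mean_std(xs):
    """Mean and (population) standard deviation of one feature channel."""
    m = sum(xs) / len(xs)
    var = sum((x - m) ** 2 for x in xs) / len(xs)
    return m, var ** 0.5

def adain(source_feat, target_feat, eps=1e-5):
    """Normalize the source channel, then re-scale to the target's statistics."""
    ms, ss = mean_std(source_feat)
    mt, st = mean_std(target_feat)
    return [(x - ms) / (ss + eps) * st + mt for x in source_feat]
```

In the full method this runs per channel of the encoder feature maps; the stylized feature map is then decoded back into image form.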
-
Publication number: 20240161445
Abstract: An object detection apparatus includes: a generation unit that performs compression encoding on each of a first image obtained from an image generation apparatus and a second image indicating a detection target object, so as to extract a feature quantity that allows object detection and so as to be decoded later, thereby generating a respective one of first encoding information, which is the compressed, encoded first image and is usable as a first feature quantity (the feature quantity of the first image), and second encoding information, which is the compressed, encoded second image and is usable as a second feature quantity (the feature quantity of the second image); and a detection unit that detects the detection target object in the first image by using the first and second feature quantities.
Type: Application
Filed: April 7, 2021
Publication date: May 16, 2024
Applicant: NEC Corporation
Inventor: Masaya FUJIWAKA
-
Publication number: 20240161446
Abstract: Systems and methods for managing a data corpus are provided. For example, a method comprises: receiving an unlabeled image; identifying one or more geohashes associated with the unlabeled image; determining whether each geohash of the one or more geohashes is labeled; generating a coverage score for the unlabeled image based on the determination; evaluating whether the coverage score is below a predetermined threshold; and, in response to the coverage score being below the predetermined threshold, transmitting the unlabeled image to an image labeling system.
Type: Application
Filed: November 8, 2023
Publication date: May 16, 2024
Inventor: Christian Sidak
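The coverage-score routing described above can be sketched directly: score an image by the fraction of its geohashes already in the labeled set, and route it to labeling when the score falls below a threshold. The scoring formula, the 0.5 threshold, and the function names are illustrative assumptions, not the filing's definition:

```python
def coverage_score(image_geohashes, labeled_geohashes):
    """Fraction of an image's geohashes that already appear in the labeled set."""
    if not image_geohashes:
        return 0.0
    hits = sum(1 for g in image_geohashes if g in labeled_geohashes)
    return hits / len(image_geohashes)

def needs_labeling(image_geohashes, labeled_geohashes, threshold=0.5):
    """Route the image to the labeling system when coverage is below the threshold."""
    return coverage_score(image_geohashes, labeled_geohashes) < threshold
```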
-
Publication number: 20240161447Abstract: According to one aspect, spatial action localization in the future (SALF) may include feeding a frame from a time step of a video clip through an encoder to generate a latent feature, feeding the latent feature and one or more latent features from one or more previous time steps of the video clip through a future feature predictor to generate a cumulative information for the time step, feeding the cumulative information through a decoder to generate a predicted action area and a predicted action classification associated with the predicted action area, and implementing an action based on the predicted action area and the predicted action classification. The encoder may include a 2D convolutional neural network (CNN) and/or a 3D-CNN. The future feature predictor may be based on an ordinary differential equation (ODE) function.Type: ApplicationFiled: April 14, 2023Publication date: May 16, 2024Inventors: Hyung-gun CHI, Kwonjoon LEE, Nakul AGARWAL, Yi XU, Chiho CHOI
-
Publication number: 20240161448Abstract: A system and method for performing object recognition using chords is disclosed. A chord is a feature representing the distance between edge pixels and/or the angle of a line segment connecting edge pixels. The object recognition system preferably includes: an edge detector, a plurality of transforms, a chord generator, and a summing circuit. The edge detector is configured to detect a plurality of edge pixels in the acquired image and reference image. Each transform is associated with a scale, which is the ratio of an input chord length to an output chord length associated with the transform. The chord generator is configured to generate a first plurality of chords representing a point in the acquired image, and a second plurality of chords representing a point in the reference image.Type: ApplicationFiled: November 9, 2023Publication date: May 16, 2024Inventor: Andrew Steven Naglestad
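The chord feature as defined in the abstract (distance and angle of a segment between edge pixels) is simple to sketch. The helpers below are illustrative stand-ins for the chord generator; the names and the pairing strategy are assumptions.

```python
import math

def chord(p, q):
    """A 'chord' between two edge pixels p and q: the segment length and
    its angle, per the abstract's definition of the feature."""
    dx, dy = q[0] - p[0], q[1] - p[1]
    return math.hypot(dx, dy), math.atan2(dy, dx)

def chords_from_point(anchor, edge_pixels):
    """All chords connecting one anchor edge pixel to the other edge
    pixels -- one plausible form of the 'plurality of chords' that
    represents a point in an image."""
    return [chord(anchor, q) for q in edge_pixels if q != anchor]
```

Under the abstract's scheme, a transform with scale *s* would map an input chord length *L* to an output length *L/s* before the acquired-image and reference-image chords are compared.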
-
Publication number: 20240161449Abstract: A method for converting a lineless table into a lined table includes associating a first set of tables with a second set of tables to form a set of multiple table pairs that includes tables with lines and tables without lines. A conditional generative adversarial network (cGAN) is trained, using the table pairs, to produce a trained cGAN. Using the trained cGAN, lines are identified for overlaying onto a lineless table. The lines are overlaid onto the lineless table to produce a lined table.Type: ApplicationFiled: January 23, 2024Publication date: May 16, 2024Inventors: Mehrdad Jabbarzadeh GANGEH, Hamid Reza MOTAHARI NEZAD
-
Publication number: 20240161450Abstract: A feature extraction apparatus, a method, and a program capable of efficiently using processing resources are provided. In a feature extraction apparatus, a feature extraction unit receives an object image and extracts features of an object included in the received object image. A determination unit determines a value of M (an integer equal to or larger than one and equal to or smaller than N), which is the number of object images, out of the N object images associated with a first tracking ID, whose features will be extracted, in accordance with a usage status of processing resources of the feature extraction apparatus.Type: ApplicationFiled: October 30, 2023Publication date: May 16, 2024Applicant: NEC CorporationInventor: Satoshi Yamazaki
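The determination unit's job reduces to picking M in [1, N] from the current resource usage. A minimal sketch, assuming a linear policy (extract fewer images as CPU load rises); the policy and names are illustrative, not the patented rule.

```python
def choose_m(n_images, cpu_utilization):
    """Pick M (1 <= M <= N), the number of object images per tracking ID
    whose features will be extracted, scaled down as CPU load rises.
    cpu_utilization is in [0, 1]; the linear scaling is an assumption."""
    free = max(0.0, 1.0 - cpu_utilization)
    m = round(n_images * free)
    return min(n_images, max(1, m))  # clamp to the abstract's 1..N range
```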
-
Publication number: 20240161451Abstract: An information processing system communicably connectable to one or more apparatuses and configured to execute a plurality of processes includes: a data storing unit configured to store, in response to a user operation on a first apparatus, setting information associating identification information identifying content of a particular process among the plurality of processes with the particular process; an area identifying unit configured to identify an area including the identification information from image data received from a second apparatus; and a processing unit configured to execute the particular process, based on the identified area and the setting information.Type: ApplicationFiled: March 2, 2022Publication date: May 16, 2024Inventors: Kohji KAMBARA, Shuhei AKIYAMA
-
Publication number: 20240161452Abstract: Disclosed is an image enhancement method, including: obtaining an image adjustment parameter including a luminance adjustment parameter and a chroma adjustment parameter; converting first image data to be processed into second image data comprising luminance data and chroma data; adjusting the luminance data to obtain adjusted luminance data, updating the luminance adjustment parameter when the adjusted luminance data does not meet a preset luminance enhancement condition, and, based on an updated luminance adjustment parameter, continuing to adjust the adjusted luminance data until final adjusted luminance data meets the luminance enhancement condition; adjusting the chroma data to obtain adjusted chroma data, updating the chroma adjustment parameter when the adjusted chroma data does not meet a preset chroma enhancement condition, and, based on an updated chroma adjustment parameter, continuing to adjust the adjusted chroma data until final adjusted chroma data meets the chroma enhancement condition; and performing image color dimension conversion on the luminance data and the chroma data to obtain third image data.Type: ApplicationFiled: January 12, 2024Publication date: May 16, 2024Applicant: UNILUMIN GROUP CO., LTDInventors: Bin HUANG, Yongjie LI, Lingxiang SHEN
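The adjust/check/update loop for the luminance channel can be sketched as below; the multiplicative gain, the step size, and the "mean reaches a target" enhancement condition are all illustrative assumptions (the abstract does not specify them), and the chroma loop would follow the same shape.

```python
def enhance_luminance(luma, gain, target_mean, step=0.1, max_iters=50):
    """Iteratively adjust luminance values until they meet the enhancement
    condition (here assumed to be: mean >= target_mean). The gain, its
    update rule, and the condition are sketch assumptions."""
    adjusted = [v * gain for v in luma]
    for _ in range(max_iters):
        if sum(adjusted) / len(adjusted) >= target_mean:
            break  # enhancement condition met
        gain += step  # update the adjustment parameter
        adjusted = [v * gain for v in adjusted]  # re-adjust the adjusted data
    return adjusted
```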
-
Publication number: 20240161453Abstract: A detection method includes: preparing sample image data of a sample in which a target substance is labeled with a labeling substance; acquiring first background image data in which an image feature amount of a single bright spot in the sample image data is reduced by a first filter; acquiring second background image data in which an image feature amount of a dense bright spot in the sample image data is reduced by a second filter; synthesizing the first background image data and the second background image data on the basis of brightness information of the sample image data and acquiring synthesized background image data; and obtaining difference image data that is a difference between the sample image data and the synthesized background image data.Type: ApplicationFiled: October 10, 2023Publication date: May 16, 2024Applicant: KONICA MINOLTA, INC.Inventors: Tsukasa MATSUO, Hiroshi NEGISHI, Keisuke YAMAGUCHI, Yuki MIYAKE, Toshihiko IWASAKI
-
Publication number: 20240161454Abstract: The application relates to a portrait image skin retouching method and apparatus, an electronic device, and a storage medium. The method comprises the following steps: identifying a skin area and a portrait structure area contained in the skin area from a portrait image to be processed; determining a skin analysis area, which is the remaining area of the skin area excluding the portrait structure area; fitting a pixel value distribution in the skin analysis area by utilizing a Gaussian mixture model, to determine one or more model parameters of the fitted Gaussian mixture model; determining a retouching intensity parameter based on variances of the model parameters; and performing skin retouching on the portrait image to be processed based on the retouching intensity parameter. With the method, an appropriate skin retouching effect can be achieved for portrait images with different image qualities.Type: ApplicationFiled: November 9, 2023Publication date: May 16, 2024Inventor: Yuanyuan Nan
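The fit-then-map pipeline in this abstract can be sketched with a tiny EM fit of a 1-D Gaussian mixture over skin-pixel values, followed by a variance-to-intensity mapping. The EM details (deterministic initialization, iteration count) and the mapping are illustrative assumptions, not the patented method.

```python
import numpy as np

def fit_gmm_1d(x, k=2, iters=50):
    """Minimal EM fit of a k-component 1-D Gaussian mixture over pixel
    values; returns component means, variances, and weights."""
    mu = np.linspace(x.min(), x.max(), k)  # deterministic, spread-out init
    var = np.full(k, x.var() + 1e-6)
    w = np.full(k, 1.0 / k)
    for _ in range(iters):
        # E-step: responsibility of each component for each pixel value
        p = w * np.exp(-0.5 * (x[:, None] - mu) ** 2 / var) / np.sqrt(2 * np.pi * var)
        r = p / p.sum(axis=1, keepdims=True)
        # M-step: re-estimate weights, means, and variances
        nk = r.sum(axis=0)
        w = nk / len(x)
        mu = (r * x[:, None]).sum(axis=0) / nk
        var = (r * (x[:, None] - mu) ** 2).sum(axis=0) / nk + 1e-6
    return mu, var, w

def retouch_intensity(var, w, scale=4.0):
    """Map the weight-averaged component variance to a retouching strength
    in [0, 1]: noisier skin gets stronger smoothing. The mapping is an
    illustrative assumption."""
    return min(1.0, scale * float((w * var).sum()))
```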
-
Publication number: 20240161455Abstract: A processor-implemented method includes: extracting feature maps respectively corresponding to a plurality of channels based on a convolutional network with respect to two images; generating a matching point map from the feature maps; refining the matching point map by using attention between matching points included in the matching point map; and extracting a matching point between the two images from the refined matching point map.Type: ApplicationFiled: May 12, 2023Publication date: May 16, 2024Applicants: SAMSUNG ELECTRONICS CO., LTD., POSTECH Research and Business Development FoundationInventors: Seung Wook KIM, Minsu CHO, Juhong MIN
-
Publication number: 20240161456Abstract: An image comparison apparatus includes an image comparison portion. The image comparison portion generates a shadow target image, which is an image of a portion of a shadow in a target image. The image comparison portion generates a shadow reference image corresponding to the shadow target image in a reference image. The image comparison portion matches the tones of the two shadow images. The image comparison portion generates a non-shadow target image in the target image and a non-shadow reference image in the reference image. The image comparison portion matches the tones of the two non-shadow images. The image comparison portion generates the target image from which the shadow has been removed by synthesizing the shadow target image and the non-shadow target image. The image comparison portion compares the reference image and the target image from which the shadow has been removed.Type: ApplicationFiled: November 6, 2023Publication date: May 16, 2024Inventors: Naoki Takeuchi, Yasuhide Sato, Rommel Custodio, Akira Yuki, Daijiro Kitamoto
-
Publication number: 20240161457Abstract: A computer-implemented method can include: visually presenting visual stimuli to participants in multiple runs, wherein the visual stimuli include pairs of images, each pair including a reference product image and a competitor product image; performing multiple scans on the participants during passive viewing of the visual presentation, the scans including functional magnetic resonance imaging (fMRI); and determining a similarity index based on results of the scans, the similarity index indicating a level of perceived similarity between the reference product image and the competitor product image.Type: ApplicationFiled: January 31, 2022Publication date: May 16, 2024Inventors: Ming Hsu, Andrew Stewart Kayser, Zhihao Zhang
-
Publication number: 20240161458Abstract: Disclosed is a method that includes generating a prediction consistency value that indicates a consistency of prediction of an object in an input image with respect to class prediction values for the object from classification models to which the input image is input, and identifying a class of the object. Identifying the class of the object includes: in response to a class type of the object, determined based on the prediction consistency value, corresponding to a majority class, identifying the class of the object based on a corresponding class prediction value output for the object from a majority class prediction model; and in response to the class type of the object corresponding to a minority class, identifying the class of the object based on another corresponding class prediction value output for the object from a minority class prediction model.Type: ApplicationFiled: March 31, 2023Publication date: May 16, 2024Applicant: SAMSUNG ELECTRONICS CO., LTD.Inventors: Kikyung KIM, Jiwon BAEK, Chanho AHN, Seungju HAN
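The routing logic reduces to: measure how much the classification models agree, then pick the majority-class or minority-class specialist model accordingly. A minimal sketch, assuming agreement is the fraction of models voting for the most common label and a hypothetical threshold tau:

```python
from collections import Counter

def route_classifier(class_preds, tau=0.7):
    """Decide which specialist model should identify the object's class.
    class_preds: per-model predicted labels for the same object.
    Returns ('majority'|'minority', consistency); the consistency measure
    and tau are sketch assumptions, not from the filing."""
    top, count = Counter(class_preds).most_common(1)[0]
    consistency = count / len(class_preds)
    return ("majority" if consistency >= tau else "minority"), consistency
```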
-
Publication number: 20240161459Abstract: Methods, systems, and apparatus, including computer programs encoded on a computer storage medium, for object detection. In one aspect, a method comprises: obtaining: (i) an image, and (ii) a set of one or more query embeddings, wherein each query embedding represents a respective category of object; processing the image and the set of query embeddings using an object detection neural network to generate object detection data for the image, comprising: processing the image using an image encoding subnetwork of the object detection neural network to generate a set of object embeddings; processing each object embedding using a localization subnetwork to generate localization data defining a corresponding region of the image; and processing: (i) the set of object embeddings, and (ii) the set of query embeddings, using a classification subnetwork to generate, for each object embedding, a respective classification score distribution over the set of query embeddings.Type: ApplicationFiled: January 25, 2024Publication date: May 16, 2024Inventors: Matthias Johannes Lorenz Minderer, Alexey Alexeevich Gritsenko, Austin Charles Stone, Dirk Weissenborn, Alexey Dosovitskiy, Neil Matthew Tinmouth Houlsby
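The final classification step in this abstract scores every object embedding against every query embedding. One common way to realize it, sketched below, is dot-product logits followed by a softmax over the queries; the similarity choice and names are illustrative assumptions about the classification subnetwork, not details from the patent.

```python
import numpy as np

def classification_scores(object_embeddings, query_embeddings):
    """For each object embedding, produce a classification score
    distribution over the query embeddings (rows sum to 1).
    object_embeddings: (num_objects, dim); query_embeddings: (num_queries, dim)."""
    logits = object_embeddings @ query_embeddings.T   # (objects, queries)
    logits = logits - logits.max(axis=1, keepdims=True)  # numerical stability
    e = np.exp(logits)
    return e / e.sum(axis=1, keepdims=True)
```

Each row can then be paired with the localization subnetwork's region for that object embedding to yield per-region, per-category detections.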