Patent Applications Published on January 9, 2020
  • Publication number: 20200013187
    Abstract: A space coordinate converting server and method thereof are provided. The space coordinate converting server receives a field video recorded with a 3D object from an image capturing device, and generates a point cloud model accordingly. The space coordinate converting server determines key frames of the field video, and maps the point cloud model to key images of the key frames based on rotation and translation information of the image capturing device for generating a characterized 3D coordinate set. The space coordinate converting server determines 2D coordinates of the 3D object in key images, and selects 3D coordinates from the characterized 3D coordinate set according to the 2D coordinates. The space coordinate converting server determines a space coordinate converting relation according to marked points of the 3D object and the 3D coordinates.
    Type: Application
    Filed: July 30, 2018
    Publication date: January 9, 2020
    Inventors: Jia-Wei HONG, Shih-Kai HUANG, Ming-Fang WENG, Ching-Wen LIN
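The coordinate-mapping step described above can be illustrated with a toy pinhole model: project candidate 3D points into a key image using the device's rotation and translation, then select the 3D coordinate whose projection lands nearest a detected 2D coordinate. The function names, the pinhole model, and the nearest-projection matching are illustrative assumptions, not the patent's exact method.

```python
import math

def project(point, R, t, f=1.0):
    # rigid transform into the camera frame, then pinhole projection
    cam = [sum(R[row][i] * point[i] for i in range(3)) + t[row]
           for row in range(3)]
    return (f * cam[0] / cam[2], f * cam[1] / cam[2])

def select_3d(coord_2d, cloud, R, t):
    # nearest-projection match between a 2D detection and the point cloud
    return min(cloud, key=lambda p: math.dist(project(p, R, t), coord_2d))
```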
  • Publication number: 20200013188
    Abstract: A location estimating apparatus according to an embodiment includes a degree-of-similarity calculator, a correspondence calculator, and a location calculator. The degree-of-similarity calculator calculates a degree of similarity between the input image and the reference image by an arithmetic operation that varies, in a non-discrete manner, the degree of similarity in accordance with a non-discrete variation of a first parameter. The correspondence calculator calculates correspondence between a pixel of the input image and a pixel of the reference image by an arithmetic operation that varies, in a non-discrete manner, the correspondence in accordance with a non-discrete variation of a second parameter. The location calculator calculates a first location indicating a position and/or an orientation of the first imaging device when the input image is captured by an arithmetic operation that varies, in a non-discrete manner, the first location in accordance with a non-discrete variation of the correspondence.
    Type: Application
    Filed: March 4, 2019
    Publication date: January 9, 2020
    Inventors: Ryo NAKASHIMA, Tomoki WATANABE, Akihito SEKI, Yusuke TAZOE
  • Publication number: 20200013189
    Abstract: The present embodiments relate to automatically estimating a three-dimensional pose of an object from an image captured using a camera with a structured light sensor. By way of introduction, the present embodiments described below include apparatuses and methods for training a system for, and estimating, a pose of an object from a test image. Training and test images are sampled to generate local image patches. Features are extracted from the local image patches to generate feature databases used to estimate nearest neighbor poses for each local image patch. The closest nearest neighbor pose to the test image is selected as the estimated three-dimensional pose.
    Type: Application
    Filed: February 23, 2017
    Publication date: January 9, 2020
    Inventors: Srikrishna Karanam, Ziyan Wu, Shanhui Sun, Oliver Lehmann, Stefan Kluckner, Terrence Chen, Jan Ernst
  • Publication number: 20200013190
    Abstract: An apparatus and a method for recognizing an object in an image are disclosed. The method may include: executing a deep neural network algorithm, trained in advance to recognize an object in an image, on a first image inputted from a camera module; finding an amount of change between the first image and a second image inputted from the camera module after the first image according to a predetermined cycle; and, in response to an object having been detected in the first image as a result of executing the deep neural network algorithm, tracking the position of the detected object in the second image based on the found amount of change.
    Type: Application
    Filed: September 20, 2019
    Publication date: January 9, 2020
    Applicant: LG ELECTRONICS INC.
    Inventors: Ruei Hung LI, Sang Hoon KIM, Jin Gyeong KIM, Jin Seok IM
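The change-then-track step can be sketched in one dimension: estimate the shift that best aligns a scanline from the first image with the same scanline from the second, then move the detected bounding box by that shift. The brute-force SSD search below is a hypothetical stand-in for the patent's unspecified change computation.

```python
def estimate_shift(prev_row, curr_row, max_shift=3):
    # brute-force the translation (in pixels) that best aligns two scanlines;
    # error is normalized by overlap length, and smaller shifts win ties
    n = len(prev_row)
    best_s, best_err = 0, float("inf")
    for s in sorted(range(-max_shift, max_shift + 1), key=abs):
        lo, hi = max(0, -s), min(n, n - s)
        if hi - lo <= 0:
            continue
        err = sum((prev_row[i] - curr_row[i + s]) ** 2
                  for i in range(lo, hi)) / (hi - lo)
        if err < best_err:
            best_s, best_err = s, err
    return best_s

def track_box(box, shift):
    # move a detected bounding box (x0, x1) by the estimated change
    return (box[0] + shift, box[1] + shift)
```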
  • Publication number: 20200013191
    Abstract: The disclosure includes a surgical needle counting system comprising a stand, a camera, a touch screen device, a processor system, and a memory. The memory may include executable instructions that, when executed by the processor system, cause the processor system to effectuate operations. The operations may include receiving a first input comprising at least one of a style, dimension, and an initial quantity of at least one surgical needle prior to the at least one surgical needle entering a surgical field. The operations may comprise capturing an image of the at least one surgical needle after at least one of the at least one surgical needle has exited the surgical field. The operations may include analyzing the image to determine a final quantity of the at least one surgical needle after exiting the surgical field. The operations may incorporate determining whether the initial quantity equals the final quantity.
    Type: Application
    Filed: July 9, 2018
    Publication date: January 9, 2020
    Inventor: Christoper M. Berning
  • Publication number: 20200013192
    Abstract: Instead of using a single LUT to model a processing function, it is proposed to use a smaller, iterative LUT such that, when applied at least two times in succession to the data to process, the same processing function is modeled with at least the same accuracy. A specific way to compute this iterative LUT is given. Specific applications are given in the field of color processing. Modeling is more accurate and/or fewer bins are needed to model complex functions.
    Type: Application
    Filed: July 4, 2019
    Publication date: January 9, 2020
    Inventors: Erik REINHARD, Elena GARCES, Jurgen STAUDER
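The core idea, a small LUT g that models a function f when applied twice (g(g(x)) approximates f(x)), can be sketched with plain piecewise-linear lookup. The bin count and the example pairing g(x) = x**2 with f(x) = x**4 are illustrative assumptions, not the patent's computation.

```python
def lut_lookup(lut, x):
    # piecewise-linear interpolation over uniformly spaced bins on [0, 1]
    n = len(lut) - 1
    pos = min(max(x, 0.0), 1.0) * n
    i = min(int(pos), n - 1)
    frac = pos - i
    return lut[i] * (1 - frac) + lut[i + 1] * frac

def apply_iterative_lut(lut, x, times=2):
    # applying the small LUT `times` times models the full function
    for _ in range(times):
        x = lut_lookup(lut, x)
    return x
```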
  • Publication number: 20200013193
    Abstract: An image evaluation apparatus capable of evaluating an image that can be given positive evaluations on a networking service is provided. Images included in post data posted on a networking service, and evaluation values for the post data, are obtained at predetermined time intervals. When the post data has been obtained, first parameters are generated by applying an image analysis process to the images. The first parameters and the evaluation values are stored in association with each other. Upon input of an image to be evaluated, a second parameter is generated by applying the image analysis process to the image to be evaluated. A first parameter corresponding to the second parameter is extracted from the plurality of stored first parameters. Parameter evaluation values representing variations, in order of notification, in the evaluation values associated with the extracted first parameter are calculated. Notification of the calculated parameter evaluation values is provided.
    Type: Application
    Filed: July 3, 2019
    Publication date: January 9, 2020
    Inventor: Yurie Uno
  • Publication number: 20200013194
    Abstract: An apparatus to facilitate compute compression is disclosed. The apparatus includes a graphics processing unit including mapping logic to map a first block of integer pixel data to a compression block and compression logic to compress the compression block.
    Type: Application
    Filed: July 15, 2019
    Publication date: January 9, 2020
    Applicant: Intel Corporation
    Inventors: Abhishek R. Appu, Altug Koker, Joydeep Ray, Balaji Vembu, Prasoonkumar Surti, Kamal Sinha, Nadathur Rajagoplan Satish, Narayan Srinivasa, Feng Chen, Dukhwan Kim, Farshad Akhbari
  • Publication number: 20200013195
    Abstract: A dynamic content providing method performed by a computer-implemented dynamic content providing system is provided. The method includes recognizing a facial region in an input image, extracting feature information of the recognized facial region, and, based on the feature information, dynamically synthesizing an image object of content that is synthesizable with the input image.
    Type: Application
    Filed: September 20, 2019
    Publication date: January 9, 2020
    Applicant: Snow Corporation
    Inventors: Jimin KIM, Sangho Choi, Byung-Sun Park, Junghwan Jin, Wonhyo Yi, Hyeongbae Shin, Seongyeop Jeong, Sungwook Kim, Noah Hahm
  • Publication number: 20200013196
    Abstract: A discrete wavelet transform (DWT) based generative system for generating images of fashion products is provided. The system includes a memory having computer-readable instructions stored therein. The system includes a processor configured to access a plurality of fashion images of a plurality of fashion products. Each fashion image is generated at a first resolution. The processor is configured to train one or more DWT based generative models using the plurality of fashion images of the fashion products. Each of the generative models is selectively trained using a directional fashion image. The directional fashion image includes details of the fashion products corresponding to a pre-determined orientation and scale. The processor is further configured to generate an upsampled fashion image corresponding to each of the fashion images. Each upsampled fashion image is generated at a second resolution.
    Type: Application
    Filed: February 4, 2019
    Publication date: January 9, 2020
    Inventor: Makkapati Vishnu Vardhan
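A one-level 2-D Haar transform is the simplest member of the DWT family the abstract refers to: it splits an image into a low-pass (LL) quarter plus three detail quarters that capture horizontal, vertical, and diagonal structure, which corresponds loosely to the "directional" images mentioned. This sketch is generic DWT background, not the patent's generative model.

```python
def haar2d(img):
    # one-level 2-D Haar transform: rows first, then columns.
    # top-left quadrant holds the low-pass (LL) band; the other three
    # quadrants hold directional detail coefficients.
    h, w = len(img), len(img[0])
    rows = [[(r[2*j] + r[2*j+1]) / 2 for j in range(w // 2)] +
            [(r[2*j] - r[2*j+1]) / 2 for j in range(w // 2)] for r in img]
    out = [[0.0] * w for _ in range(h)]
    for j in range(w):
        for i in range(h // 2):
            out[i][j] = (rows[2*i][j] + rows[2*i+1][j]) / 2
            out[i + h // 2][j] = (rows[2*i][j] - rows[2*i+1][j]) / 2
    return out
```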
  • Publication number: 20200013197
    Abstract: A method and system for: acquiring drawing technique prompt information associated with an original graphic; initiating presentation of a stroke represented by touch input on a display screen in response to detecting the touch input on a touch pad; initiating presentation of a first prompt, of a plurality of prompts in the prompt information, in association with the stroke on the display screen; and initiating recording of graphic data corresponding to the stroke as an imitation of a first portion of the original graphic in response to completion of imitating the first portion.
    Type: Application
    Filed: June 21, 2019
    Publication date: January 9, 2020
    Inventors: Xiangxiang ZOU, Hongtao GUAN, Lu TONG
  • Publication number: 20200013198
    Abstract: To enable proper adjustment of a monitor using a color bar regardless of a difference in transfer functions. Provided is an image processing device including a determination unit configured to determine a transfer function related to conversion between light and an image signal and to be used in a display device among a plurality of transfer functions, and a generation unit configured to generate a color bar signal corresponding to the transfer function determined by the determination unit and output the generated color bar signal to the display device.
    Type: Application
    Filed: January 17, 2018
    Publication date: January 9, 2020
    Applicant: SONY CORPORATION
    Inventors: Tomoyuki ENDO, Koji KAMIYA
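Generating a color bar that matches a determined transfer function can be sketched as: pick an OETF, then compute the signal level for each patch of a ramp. The HLG constants below follow the commonly published BT.2100 OETF and should be treated as an assumption rather than the patent's formula; the function names are illustrative.

```python
import math

def oetf_gamma(e, gamma=2.2):
    # simple power-law transfer function for an SDR monitor
    return e ** (1 / gamma)

def oetf_hlg(e, a=0.17883277, b=0.28466892, c=0.55991073):
    # Hybrid Log-Gamma OETF (constants as commonly published in BT.2100)
    return math.sqrt(3 * e) if e <= 1 / 12 else a * math.log(12 * e - b) + c

def color_bar(oetf, steps=5):
    # signal levels for a color-bar ramp with `steps` patches
    return [oetf(i / (steps - 1)) for i in range(steps)]
```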
  • Publication number: 20200013199
    Abstract: A system and method of image reconstruction is disclosed. First image scan data corresponding to a spiral CT modality is received during a first time period. The first image scan data includes at least partially overlapping axial positions. A change in position over time is determined for each of the at least partially overlapping axial positions, and a first respiratory waveform is estimated from the change in position over time for each of the at least partially overlapping axial positions.
    Type: Application
    Filed: July 3, 2019
    Publication date: January 9, 2020
    Inventor: James J. Hamill
  • Publication number: 20200013200
    Abstract: Projection images of reduced resolution are generated by reducing the resolution of filtered projection images and/or reducing the number of filtered projection images. Volume data of reduced resolution is generated by performing CT reconstruction using the projection images of reduced resolution. Each voxel of the volume data of reduced resolution is provisionally divided. The provisionally divided voxels are compared in voxel value before and after provisional division. If a difference in voxel value before and after the provisional division is greater than a threshold, the provisional division is determined to be valid, and division is further continued. If the difference in voxel value before and after the provisional division is less than or equal to the threshold, the provisional division is determined to be invalid and division of the voxel ends.
    Type: Application
    Filed: June 28, 2019
    Publication date: January 9, 2020
    Applicants: THE UNIVERSITY OF TOKYO, MITUTOYO CORPORATION
    Inventors: Yutaka OHTAKE, Tomonori GOTO, Masato KON
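The divide-and-test loop can be illustrated in one dimension: provisionally split a cell, compare values before and after the split, and keep splitting only while the change exceeds the threshold. This is a simplified analogue of the voxel division described, with a hypothetical value_fn standing in for reconstructed voxel values.

```python
def subdivide(value_fn, x0, x1, threshold, depth=0, max_depth=8):
    # provisional split: compare the cell's value against its two halves;
    # keep the split only if the change exceeds the threshold
    # (a 1-D analogue of the voxel-division test)
    mid = (x0 + x1) / 2
    parent = value_fn(x0, x1)
    halves = (value_fn(x0, mid), value_fn(mid, x1))
    if depth >= max_depth or all(abs(h - parent) <= threshold for h in halves):
        return [(x0, x1)]
    return (subdivide(value_fn, x0, mid, threshold, depth + 1, max_depth) +
            subdivide(value_fn, mid, x1, threshold, depth + 1, max_depth))
```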
  • Publication number: 20200013201
    Abstract: This disclosure relates generally to image processing, and more particularly to a method and system for image reconstruction using deep dictionary learning (DDL). The system collects the degraded image as a test image and processes the test image to extract sparse features from it, at different levels, using dictionaries. The extracted sparse features and data from the dictionaries are used by the system to reconstruct the high-resolution (HR) image corresponding to the test image.
    Type: Application
    Filed: July 5, 2019
    Publication date: January 9, 2020
    Applicant: Tata Consultancy Services Limited
    Inventors: Jayavardhana Rama GUBBI LAKSHMINARASIMHA, Karthik SEEMAKURTHY, Sandeep NK, Ashley VARGHESE, Shailesh Shankar DESHPANDE, Mariaswamy Girish CHANDRA, Balamuralidhar PURUSHOTHAMAN, Angshul MAJUMDAR
  • Publication number: 20200013202
    Abstract: In a gas chromatograph, a display unit is provided in a main body. In addition, a touch panel-type display screen is included in the display unit. Further, a setting processing unit performs setting related to a graph displayed in a graph area based on a touch operation on the graph area of the display screen. For this reason, an operator may perform setting related to the graph displayed in the graph area of the display screen only by performing a touch operation on the touch panel-type display screen provided in the main body.
    Type: Application
    Filed: July 5, 2018
    Publication date: January 9, 2020
    Applicant: Shimadzu Corporation
    Inventor: Shingo MASUDA
  • Publication number: 20200013203
    Abstract: Systems and methods for aggregating and storing different types of data, and generating interactive user interfaces for analyzing the stored data. In some embodiments, entity data is received for a plurality of entities from one or more data sources, and used to determine attribute values for the entities for one or more given time periods. The plurality of entities may be categorized into one or more entity groups, and aggregate attribute values may be generated based upon the entity groups. A first interactive user interface is generated displaying the one or more entity groups in association with the aggregated attribute values associated with the entity group. In response to a received indication of a user selection of an entity group, a second interactive user interface is generated displaying the one or more entities associated with the selected entity group, each entity displayed in association with the attribute values associated with the entity.
    Type: Application
    Filed: September 18, 2019
    Publication date: January 9, 2020
    Inventors: Sean Kelley, Dylan Scott, Ayush Sood, Kevin Verdieck, Izaak Baker, Eliot Ball, Zachary Bush, Allen Cai, Jerry Chen, Aditya Dahiya, Daniel Deutsch, Calvin Fernandez, Jonathan Hong, Jiaji Hu, Audrey Kuan, Lucas Lemanowicz, Clark Minor, Nicholas Miyake, Michael Nazario, Brian Ngo, Mikhail Proniushkin, Siddharth Rajgarhia, Christopher Rogers, Kayo Teramoto, David Tobin, Grace Wang, Wilson Wong, Holly Xu, Xiaohan Zhang
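The grouping-and-aggregation step reads as a standard group-by over entity attributes; a minimal sketch follows, with hypothetical attribute names.

```python
def aggregate_by_group(entities, group_key, value_keys):
    # sum each attribute value over all entities sharing a group
    groups = {}
    for entity in entities:
        bucket = groups.setdefault(entity[group_key],
                                   dict.fromkeys(value_keys, 0))
        for key in value_keys:
            bucket[key] += entity[key]
    return groups
```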
  • Publication number: 20200013204
    Abstract: Systems and methods are provided for presenting multiple dimensions of an entity for visual comparison. Multiple sets of data representing multiple dimensions of a first entity may be accessed. The multiple sets of data may be converted for plotting within a first multidimensional arc chart. The first multidimensional arc chart may be defined by a rounded outer shape. The rounded outer shape may be characterized by an arc length. The first multidimensional arc chart may be divided into multiple sections. Individual sections may include a plot of a dimension of the first entity. Values of the plot may be determined based on corresponding positions along the arc length. An interface that includes the first multidimensional arc chart may be provided.
    Type: Application
    Filed: September 19, 2019
    Publication date: January 9, 2020
    Inventors: Alexandru Antihi, Ari Gesher
  • Publication number: 20200013205
    Abstract: There is disclosed a system and method for colorizing vector graphic objects in a digital medium environment. The system comprises a processing unit and a deep neural network of the processing unit, in which the deep neural network includes a generator. The processing unit receives a non-colorized vector image and converts the non-colorized vector image to a non-colorized raster image. The deep neural network generates a colorized raster image from the non-colorized raster image. The generator processes the non-colorized raster image using an extended number of convolutional layers and residual blocks to add skip connections between at least two of the convolutional layers. The processing unit converts the colorized raster image to a colorized vector image.
    Type: Application
    Filed: July 5, 2018
    Publication date: January 9, 2020
    Applicant: Adobe Inc.
    Inventors: Mridul Kavidayal, Vineet Batra, Jingwan Lu, Ankit Phogat
  • Publication number: 20200013206
    Abstract: The present disclosure generally relates to a system that includes a processor configured to execute an augmented reality (AR) translator and visualizer system. The AR translator and visualizer system is configured to receive a language file that includes content, determine a background in the language file, remove the background, and retrieve the content from the language file. Moreover, the AR translator and visualizer system is configured to overlay the content onto a real world view via a display to form AR content that includes the content merged with the real world view. Furthermore, the AR translator and visualizer system is configured to cause the system to display the real world view overlaid with the content via the display.
    Type: Application
    Filed: July 6, 2018
    Publication date: January 9, 2020
    Inventors: William Forrester Seely, Glen William Brooksby, Sandra Beverly Kolvick
  • Publication number: 20200013207
    Abstract: A method and apparatus for editing an uploaded image are provided. A controller of the apparatus receives the uploaded image that contains an article area showing an image of a purchased article and a personal information area showing personal information. The controller identifies the article area and the personal information area in the received image, edits the received image to protect the personal information without damaging the article area, and posts the edited image to a webpage.
    Type: Application
    Filed: May 13, 2019
    Publication date: January 9, 2020
    Inventors: KiHyun KIM, HaYoon KIM
  • Publication number: 20200013208
    Abstract: A system and method of creating customized characters and selectively displaying them in an electronic display, such as an augmented reality or virtual reality display is provided. A digital character may be provided by a character provider for customization by others using the system. Such customizations may be instantiated in user devices that provide electronic displays. Instantiation of the custom digital character may be conditioned on one or more trigger conditions, which may be specified by the character customizer. For example, a digital character customized using the system may be conditioned on triggering events in the real-world or in a virtual world. When a relevant triggering condition is satisfied at a user device, the custom character (i.e., information for instantiating the custom character) may be transmitted to that user device. In this manner, the system may push custom characters to user devices that satisfy the triggering condition.
    Type: Application
    Filed: September 16, 2019
    Publication date: January 9, 2020
    Applicant: Binary Bubbles, Inc.
    Inventors: Lisa Gai-Tzen WONG, Amit TISHLER, Richard Paul WEEKS
  • Publication number: 20200013209
    Abstract: Provided is a method of controlling an image and sound pickup device, which includes obtaining a plurality of audio signals and a participant image showing a plurality of participants; generating location information about a sound source location by using comparison information about a comparison among the plurality of audio signals and face recognition performed on the participant image; and generating an estimated utterer image, which displays an estimated utterer, by using the location information.
    Type: Application
    Filed: September 17, 2019
    Publication date: January 9, 2020
    Inventors: Daisuke MITSUI, Takayuki INOUE
  • Publication number: 20200013210
    Abstract: Techniques are described for efficient label insertion and collision handling. A bounding geometry for a label to be graphically displayed on a display screen as part of an electronic map is determined, wherein the bounding geometry comprises a circle. The bounding geometry is inserted into a grid index, wherein the grid index represents a viewport of the electronic map. Disjoint regions of the grid index intersected by the bounding geometry are identified, wherein each disjoint region represents a different portion of the viewport. For each intersected disjoint region, it is identified whether there is at least one collision between the bounding geometry and one or more existing bounding geometries in the disjoint region; and responsive to identifying whether there is at least one collision in the intersected disjoint region, a target opacity of the label is set.
    Type: Application
    Filed: September 20, 2019
    Publication date: January 9, 2020
    Inventors: Christopher Jacob Loer, Ansis Brammanis, Nicki Zippora Dlugash, Molly Lloyd
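The grid-index collision test can be sketched as: compute the grid regions a label's bounding circle overlaps, check circle-circle intersection only against candidates stored in those regions, and let the boolean outcome drive the label's target opacity. The cell size and class shape are illustrative assumptions.

```python
class LabelGrid:
    def __init__(self, cell=64):
        self.cell = cell
        self.cells = {}  # (gx, gy) -> list of placed circles (cx, cy, r)

    def _regions(self, cx, cy, r):
        # grid cells intersected by the circle's bounding box
        x0, x1 = int((cx - r) // self.cell), int((cx + r) // self.cell)
        y0, y1 = int((cy - r) // self.cell), int((cy + r) // self.cell)
        return [(gx, gy) for gx in range(x0, x1 + 1)
                         for gy in range(y0, y1 + 1)]

    def try_insert(self, cx, cy, r):
        # check candidates only in the regions this circle overlaps
        seen = set()
        for key in self._regions(cx, cy, r):
            for other in self.cells.get(key, []):
                if other in seen:
                    continue
                seen.add(other)
                ox, oy, orr = other
                if (cx - ox) ** 2 + (cy - oy) ** 2 < (r + orr) ** 2:
                    return False  # collision: target opacity stays 0
        for key in self._regions(cx, cy, r):
            self.cells.setdefault(key, []).append((cx, cy, r))
        return True  # inserted: label can fade to full opacity
```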
  • Publication number: 20200013211
    Abstract: Embodiments of the present invention provide a method, system and computer program product for automated virtual artifact generation through natural language processing. In an embodiment of the invention, a method for automated virtual artifact generation includes loading electronic documentation for a real world object into memory of a computer, parsing by a processor of the computer the electronic documentation into different words and storing the different words. The method further includes natural language processing the different words to determine different physical and functional attributes of the real world object, generating a virtual artifact in the memory of the computer based upon a mapping of the physical attributes of the real world object to structural attributes of the virtual artifact and a mapping of the functional attributes of the real world object to functional attributes of the virtual artifact, and rendering the virtual artifact in the virtual reality environment.
    Type: Application
    Filed: July 4, 2018
    Publication date: January 9, 2020
    Inventors: Paul Bergen, Robert Huntington Grant, Zachary Silverstein, Trudy L. Hewitt
  • Publication number: 20200013212
    Abstract: Techniques are provided for facial image replacement between a reference facial image and a target facial image, of varying pose and illumination, using 3-dimensional morphable face models (3DMMs). A methodology implementing the techniques according to an embodiment includes fitting the reference face and the target face to a first and second 3DMM, respectively. The method further includes generating a texture map based on the fitted 3D reference face and rendering the fitted 3D reference face to a pose of the fitted 3D target face. The rendering is based on parameters of the first 3DMM, parameters of the second 3DMM, and the generated texture map associated with the fitted 3D reference face. The method further includes, determining a region of interest of the target facial image; and blending the rendered 3D reference face onto the region of interest of the target facial image to generate a replaced facial image.
    Type: Application
    Filed: April 4, 2017
    Publication date: January 9, 2020
    Applicant: INTEL CORPORATION
    Inventors: SHANDONG WANG, MING LU, ANBANG YAO, YURONG CHEN
  • Publication number: 20200013213
    Abstract: During tracing of a primary ray in a 3-D space (e.g., a 3-D scene in graphics rendering), a ray is found to intersect a primitive (e.g., a triangle) located in the 3-D space. Secondary ray(s) may be generated for a variety of purposes. For example, occlusion rays may be generated to test whether a point of intersection between the primary ray and the primitive is illuminated by any of the light(s). An origin for each secondary ray can be modified from the intersection point based on characteristics of the primitive intersected. For example, an offset from the intersection point can be calculated using barycentric coordinates of the intersection point and interpolation of one or more parameters associated with vertices defining the primitive. These parameters may include a size of the primitive and differences between a geometric normal for the primitive and a respective additional vector supplied with each vertex.
    Type: Application
    Filed: September 17, 2019
    Publication date: January 9, 2020
    Inventor: Aaron Dwyer
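The origin offset can be illustrated with barycentric interpolation: blend the per-vertex data at the intersection point, then push the secondary-ray origin off the surface along the interpolated normal. The patent's formula also involves primitive size; this sketch uses a single scale parameter as a simplifying assumption.

```python
import math

def offset_origin(bary, verts, normals, scale):
    # interpolate position and per-vertex normals at the hit point from
    # barycentric coordinates, then nudge the secondary-ray origin off
    # the surface along the normalized interpolated normal
    hit = [sum(bary[k] * verts[k][i] for k in range(3)) for i in range(3)]
    n = [sum(bary[k] * normals[k][i] for k in range(3)) for i in range(3)]
    length = math.sqrt(sum(c * c for c in n)) or 1.0
    return [hit[i] + scale * n[i] / length for i in range(3)]
```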
  • Publication number: 20200013214
    Abstract: Instructions indicative of changing a view of a virtual object may be received by a device. At least a portion of the virtual object may be viewable from a viewpoint that is at a given distance from a surface of the virtual object. The device may cause a change of the view along a rotational path around the virtual object in response to the receipt of the instructions based on the given distance being greater than a threshold distance. The device may cause a change of the view along a translational path indicative of a shape of the surface of the virtual object in response to the receipt of the instructions based on the given distance being less than the threshold distance.
    Type: Application
    Filed: September 16, 2019
    Publication date: January 9, 2020
    Inventors: James Joseph Kuffner, James Robert Bruce, Thor Lewis, Sumit Jain
  • Publication number: 20200013215
    Abstract: An electronic apparatus and method for adaptive sub-band based coding of hierarchical transform coefficients of a 3D point cloud, is provided. The electronic apparatus stores the 3D point cloud and generates a plurality of voxels from the 3D point cloud. The electronic apparatus generates a plurality of hierarchical transform coefficients by application of a hierarchical transform scheme on the generated plurality of voxels and classifies the plurality of hierarchical transform coefficients into a plurality of sub-bands of hierarchical transform coefficients. The plurality of hierarchical transform coefficients are classified based on a weight of each of the plurality of hierarchical transform coefficients.
    Type: Application
    Filed: May 28, 2019
    Publication date: January 9, 2020
    Inventors: ARASH VOSOUGHI, DANILLO GRAZIOSI
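Classifying coefficients into sub-bands by weight can be sketched as simple threshold bucketing; the ascending boundary list is an illustrative stand-in for whatever band boundaries the encoder actually uses.

```python
def classify_subbands(coeffs, weights, boundaries):
    # bucket transform coefficients into sub-bands by their weights;
    # `boundaries` is an ascending list of weight thresholds between bands
    bands = [[] for _ in range(len(boundaries) + 1)]
    for coeff, weight in zip(coeffs, weights):
        band = sum(1 for b in boundaries if weight >= b)
        bands[band].append(coeff)
    return bands
```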
  • Publication number: 20200013216
    Abstract: Methods for product design and corresponding systems and computer-readable mediums. A method includes receiving a modeled object having a surface and a non-homogeneous density distribution. The method includes tessellating the surface of the object into a set of triangles defined by triangle vertices. The method includes selecting a reference point for the object. The method includes, for each triangle in the tessellation, constructing a tetrahedron, the tetrahedron defined by tetrahedron vertices that include the vertices of the corresponding triangle and the reference point, determining a material density at each of the tetrahedron vertices, and computing mass properties for the tetrahedron using the material density at each of the tetrahedron vertices. The method includes aggregating the mass properties of the tetrahedrons. The method includes storing the aggregated mass properties of the tetrahedrons as the mass properties of the object.
    Type: Application
    Filed: March 14, 2017
    Publication date: January 9, 2020
    Inventors: Suraj Ravi Musuvathy, George Allen
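The per-triangle tetrahedron construction and mass aggregation can be sketched directly: signed tetrahedron volumes (apexed at the reference point) times a density averaged over the four tetrahedron vertices, summed over the tessellation. Signed volumes keep the result correct even when the reference point lies outside the object; the averaging rule for density is an illustrative assumption.

```python
def tet_volume(a, b, c, d):
    # signed volume of tetrahedron (a, b, c, d) via a scalar triple product
    ab = [b[i] - a[i] for i in range(3)]
    ac = [c[i] - a[i] for i in range(3)]
    ad = [d[i] - a[i] for i in range(3)]
    cross = [ac[1] * ad[2] - ac[2] * ad[1],
             ac[2] * ad[0] - ac[0] * ad[2],
             ac[0] * ad[1] - ac[1] * ad[0]]
    return sum(ab[i] * cross[i] for i in range(3)) / 6.0

def object_mass(triangles, vertex_densities, ref, ref_density):
    # one tetrahedron per surface triangle, apexed at the reference point;
    # density taken as the mean over the four tetrahedron vertices
    total = 0.0
    for tri, dens in zip(triangles, vertex_densities):
        rho = (sum(dens) + ref_density) / 4.0
        total += rho * tet_volume(ref, *tri)
    return total
```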
  • Publication number: 20200013217
    Abstract: Systems and methods relate to encoded video streams including geometric-data streams transmitted to a receiver for rendering of a viewpoint-adaptive 3D persona. A method includes obtaining a three-dimensional (3D) mesh of a subject generated from depth-camera-captured information about the subject, obtaining a facial-mesh model, locating a facial portion of the obtained 3D mesh of the subject, computing a geometric transform based on the facial portion and the facial-mesh model, the geometric transform determined in response to one or more aggregated error differences between a plurality of feature points on the facial-mesh model and a plurality of corresponding feature points on the facial portion of the obtained 3D mesh, generating a transformed facial-mesh model using the geometric transform and generating a hybrid mesh of the subject at least in part by combining the transformed facial-mesh model and at least a portion of the obtained 3D mesh.
    Type: Application
    Filed: September 16, 2019
    Publication date: January 9, 2020
    Inventors: Simion Venshtain, Po-Han Huang
  • Publication number: 20200013218
    Abstract: A biological model generating apparatus establishes, for each of a plurality of nodes on a centerline of a blood vessel, based on blood vessel information, a first circle and a second circle on a plane cutting through the node and intersecting the centerline at a right angle. The first circle centers at the node and has a first radius obtained by adding a margin of error to the radius of the blood vessel. The second circle centers at the node and has a second radius obtained by subtracting the margin of error from the radius. Subsequently, the generating apparatus generates an implicit function defining a curved surface that intersects the plane cutting through each of the plurality of nodes, at the inner side of the first circle but the outer side of the second circle. Then, the generating apparatus generates a mesh model based on the defined curved surface.
    Type: Application
    Filed: September 17, 2019
    Publication date: January 9, 2020
    Inventors: KOHEI HATANAKA, Machiko Nakagawa
  • Publication number: 20200013219
    Abstract: Objects can be rendered in three-dimensions and viewed and manipulated in an augmented reality environment. Background images are subtracted from object images from multiple viewpoints to provide baseline representations of the object. Morphological operations can be used to remove errors caused by misalignment of an object image and background image. Using two different contrast thresholds, pixels can be identified that can be said at two different confidence levels to be object pixels. An edge detection algorithm can be used to determine object contours. Low confidence pixels can be associated with the object if they can be connected to high confidence pixels without crossing an object contour. Segmentation masks can be created from high confidence pixels and properly associated low confidence pixels. Segmentation masks can be used to create a three-dimensional representation of the object.
    Type: Application
    Filed: September 17, 2019
    Publication date: January 9, 2020
    Inventors: Arnab Sanat Kumar Dhua, Himanshu Arora, Radek Grzeszczuk
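    The dual-threshold pixel classification in the abstract above is closely related to hysteresis thresholding: high-confidence pixels seed the mask, and low-confidence pixels join only if connected to a seed. The sketch below is a generic illustration on a toy single-channel image using plain 4-connectivity; it substitutes simple connectivity for the patent's contour-crossing check and is not the patent's implementation.

```python
from collections import deque

def hysteresis_mask(img, low, high):
    """Seed the mask with pixels >= high, then grow it through
    4-connected neighbors whose value is >= low."""
    rows, cols = len(img), len(img[0])
    mask = [[False] * cols for _ in range(rows)]
    q = deque((r, c) for r in range(rows) for c in range(cols)
              if img[r][c] >= high)
    for r, c in q:                       # mark high-confidence seeds
        mask[r][c] = True
    while q:                             # flood-fill into low-confidence pixels
        r, c = q.popleft()
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < rows and 0 <= nc < cols \
                    and not mask[nr][nc] and img[nr][nc] >= low:
                mask[nr][nc] = True
                q.append((nr, nc))
    return mask

img = [
    [0, 5, 0, 0],
    [0, 9, 5, 0],
    [0, 0, 0, 5],   # isolated low-confidence pixel: stays excluded
]
print(hysteresis_mask(img, low=4, high=8))
```

In the toy image, the two 5-valued pixels adjacent to the 9-valued seed join the mask, while the isolated 5 in the bottom-right corner does not.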
  • Publication number: 20200013220
    Abstract: An information processing apparatus enables a user viewing a displayed virtual viewpoint image to easily understand the state of the scene targeted for generation of the virtual viewpoint image. The information processing apparatus generates a layout, which is a figure representing the positions of objects included in an imaging target area captured by a plurality of imaging units from different directions, and controls a display unit to display a virtual viewpoint image and the generated layout. The virtual viewpoint image is generated based on images acquired by the plurality of imaging units and viewpoint information indicating a virtual viewpoint.
    Type: Application
    Filed: July 1, 2019
    Publication date: January 9, 2020
    Inventors: Kazuhiro Yoshimura, Yosuke Okubo
  • Publication number: 20200013221
    Abstract: An XR device and a method for controlling the same are disclosed. The XR device is applicable to 5G communication technology, robot technology, autonomous driving technology, and Artificial Intelligence (AI) technology. A method for controlling the XR device includes executing, by a user of the XR device, an electronic appliance arrangement application, displaying an image of an indoor space on a display screen of the XR device, overlapping an electronic appliance selected by the user with the image of the indoor space and displaying the resulting overlap image, and, if the electronic appliance is arranged at a specific position of the indoor space that is improper for its arrangement, providing the user with a notification message including information about why the arrangement position is improper.
    Type: Application
    Filed: August 28, 2019
    Publication date: January 9, 2020
    Applicant: LG ELECTRONICS INC.
    Inventor: Haena WOO
  • Publication number: 20200013222
    Abstract: A method of automatically generating an augmented reality view of an interior space using a mobile device that comprises a display screen and a video camera and is located within the interior space. The method comprises: obtaining data identifying an alignment point of a virtual model of the interior space; displaying a video image from the camera on the display screen; outputting a request to the user to point the camera at an alignment feature of the interior space corresponding to the alignment point; in response to receiving a user input indicating that the camera is pointing at the alignment feature, capturing an image from the camera; analyzing the captured image to identify the alignment feature corresponding to the alignment point; and analyzing the identified alignment feature to compute the transformation required to align the alignment point of the virtual model of the interior space with the alignment feature in the captured image.
    Type: Application
    Filed: July 9, 2019
    Publication date: January 9, 2020
    Inventors: Martin Paul Fergie, James Lewis
  • Publication number: 20200013223
    Abstract: The invention relates to a method of representing a virtual object in a view of a real environment which comprises the steps of providing image information of a first image of at least part of a human face captured by a first camera, providing at least one human face specific characteristic, determining at least part of an image area of the face in the first image as a face region of the first image, determining at least one first light falling on the face according to the face region of the first image and the at least one human face specific characteristic, and blending in the virtual object on a display device in the view of the real environment according to the at least one first light. The invention also relates to a system for representing a virtual object in a view of a real environment.
    Type: Application
    Filed: September 16, 2019
    Publication date: January 9, 2020
    Inventors: Sebastian Knorr, Peter Meier
  • Publication number: 20200013224
    Abstract: Augmenting real-time views of a patient with three-dimensional (3D) data. In one embodiment, a method may include identifying 3D data for a patient with the 3D data including an outer layer and multiple inner layers, determining virtual morphometric measurements of the outer layer from the 3D data, registering a real-time position of the outer layer of the patient in a 3D space, determining real-time morphometric measurements of the outer layer of the patient, automatically registering the position of the outer layer from the 3D data to align with the registered real-time position of the outer layer of the patient in the 3D space using the virtual morphometric measurements and using the real-time morphometric measurements, and displaying, in an augmented reality (AR) headset, one of the inner layers from the 3D data projected onto real-time views of the outer layer of the patient.
    Type: Application
    Filed: September 18, 2019
    Publication date: January 9, 2020
    Inventors: Steven Cvetko, Ph.D., Wendell Arlen Gibby
  • Publication number: 20200013225
    Abstract: A vehicle external information output method and an apparatus therefor are disclosed. In the vehicle external information output method according to an embodiment of the present invention, a DSM camera acquires external information relating to the zone to which a user's gaze is directed, and the external information is output to provide the user with a visual field having no blind spot. An autonomous vehicle according to the present invention can be associated with an artificial intelligence module, a drone (unmanned aerial vehicle, UAV), a robot, an augmented reality (AR) device, a virtual reality (VR) device, and a 5G service.
    Type: Application
    Filed: September 18, 2019
    Publication date: January 9, 2020
    Applicant: LG ELECTRONICS INC.
    Inventors: Jongjin PARK, Jichan MAENG
  • Publication number: 20200013226
    Abstract: To alleviate the load on an eyeglasses-type electronic device, the present disclosure provides an electronic device comprising an optical driving assembly configured to emit image light corresponding to augmented reality information, an optical element on which the emitted image light is incident to form an output area, a front frame coupled to the optical element, a side frame coupled to the front frame to form an eyeglasses-shaped body together with the front frame, and a pad-type support member coupled to the front frame to support at least one region of a wearer's nose. The support member includes fixing areas fixed to two left points and two right points of the front frame, and a flexible area deformable to closely fit the wearer's nose between the two left fixing areas and the two right fixing areas.
    Type: Application
    Filed: September 18, 2019
    Publication date: January 9, 2020
    Applicant: LG ELECTRONICS INC.
    Inventor: Haklim LEE
  • Publication number: 20200013227
    Abstract: A wearable electronic device displays a virtual object at a first location in the field of view of a user wearing the wearable electronic device. The wearable electronic device moves the virtual object from the first location to a second location in the field of view in response to determining that movement of the wearable electronic device will cause the virtual object to fall outside the field of view.
    Type: Application
    Filed: September 19, 2019
    Publication date: January 9, 2020
    Inventor: Philip Scott Lyren
  • Publication number: 20200013228
    Abstract: An electronic device is disclosed. The electronic device of the present disclosure includes a main body wearable on the head of a user, a display detachably coupled to the main body, and a controller configured to generate images to be implemented on the display while the display is mounted on the main body. An electronic device according to the present invention may be associated with an artificial intelligence module, a robot, an augmented reality (AR) device, a virtual reality (VR) device, and devices related to 5G services.
    Type: Application
    Filed: September 20, 2019
    Publication date: January 9, 2020
    Applicant: LG ELECTRONICS INC.
    Inventors: Hak Lim LEE, Sam Youp KIM, Ji Yong SHIN, Jong Beom HAN
  • Publication number: 20200013229
    Abstract: One or more of an autonomous vehicle, a user terminal, and a server of the present disclosure may be connected to, for example, an artificial intelligence module, an unmanned aerial vehicle (UAV), a robot, an augmented reality (AR) device, a virtual reality (VR) device, or a 5G service device. An information processing method in an electronic device according to one embodiment of the present disclosure includes identifying a container that is logically docked on an operating system (OS), identifying an application corresponding to the container, identifying an event related to running of the application, and transmitting, to another node, information on a first block on difference including first identification information for the first block on difference generated based on first data associated with the event and second identification information for the container.
    Type: Application
    Filed: September 20, 2019
    Publication date: January 9, 2020
    Inventors: Chulhee LEE, Namyong PARK, Taesuk YOON, Dongkyu LEE, Eunkoo LEE
  • Publication number: 20200013230
    Abstract: A method for controlling a plurality of mobile agricultural devices that includes establishing electronic communication with a plurality of transceivers mounted to the mobile agricultural devices. The method also includes building a three-dimensional model including a virtual representation of each of the mobile agricultural devices and displaying the three-dimensional model at a user interface having a display. The method further includes receiving location data regarding the mobile agricultural devices via the transceivers and adjusting at least one of the virtual representations of the mobile agricultural devices within the model to reflect the location data.
    Type: Application
    Filed: July 6, 2018
    Publication date: January 9, 2020
    Applicant: LINDSAY CORPORATION
    Inventor: Mark William Miller
  • Publication number: 20200013231
    Abstract: Techniques for managing transitions in a three-dimensional environment include rendering, on the displays, a first three-dimensional scene. An indication is received that the first three-dimensional scene is to be replaced with a second three-dimensional scene. Graphics data is received that is representative of a transition to the second three-dimensional scene. The first three-dimensional scene is transitioned to the second three-dimensional scene using the graphics data. Control of rendering the second three-dimensional scene is transitioned to a process configured to render the second three-dimensional scene.
    Type: Application
    Filed: July 9, 2018
    Publication date: January 9, 2020
    Inventors: Craig R. MAITLEN, John Edward CHURCHILL, Joseph WHEELER, Tyler P. ESSELSTROM
  • Publication number: 20200013232
    Abstract: A method of converting a three-dimensional (3D) scanned object to an avatar. The method contains the steps of conducting a 3D segmentation of the 3D scanned object to obtain segmented results; and adapting a first template to the segmented results to create an avatar. The first template includes a topology, and the adapting step contains the step of mapping the topology of the first template to the segmented results to create the avatar. The invention provides an automated process which requires virtually no human intervention to convert the 3D scanned object to the avatar.
    Type: Application
    Filed: July 18, 2018
    Publication date: January 9, 2020
    Inventor: Bun KWAI
  • Publication number: 20200013233
    Abstract: A method of automatically fitting an accessory object to an avatar. The method contains the steps of providing an avatar; providing an accessory object; providing a template which the accessory object does not penetrate; and fitting the accessory object to the avatar as a result of the template being fitted to the avatar. The invention provides an automated process which requires virtually no human intervention to fit an accessory object (e.g. a garment) to the avatar.
    Type: Application
    Filed: July 18, 2018
    Publication date: January 9, 2020
    Inventor: Bun KWAI
  • Publication number: 20200013234
    Abstract: [Subject] To provide a size measuring device that can be readily handled and easily used for a size-taking process even by a user who has no specialized size-taking technique, as well as a managing server, a user terminal, and a size measuring system. [Means to Solve Problems] The size measuring system is provided with a size measuring device 10 that is attached to a user's body to measure the size of the body and outputs sensor measurement information indicating the measured size, a user terminal 20 operated by the user whose body is measured, and a managing server 30 that manages information on the size and shape of apparel commodities. Based upon the sensor measurement information, the managing server supplies the user terminal 20 with user size information corresponding to the user's body size and with commodity retrieval results information about commodities that fit that size.
    Type: Application
    Filed: February 9, 2018
    Publication date: January 9, 2020
    Applicant: ZOZO, INC.
    Inventor: Yusaku MAEZAWA
  • Publication number: 20200013235
    Abstract: A method and an apparatus for processing patches of a point cloud are provided. The apparatus includes an input/output (I/O) device, a storage device, and a processor. The I/O device is used to receive a bit stream of the point cloud. The storage device is configured to store an index table recording indexes corresponding to a plurality of orientations. The processor is coupled to the I/O device and the storage device and is configured to execute a program to demultiplex the bit stream of the point cloud into a patch image and indexes corresponding to a plurality of patches in the patch image, look up the index table to obtain an orientation of each patch, transform the patch image according to the orientation to recover the plurality of patches of the point cloud, and reconstruct the point cloud by using the recovered patches.
    Type: Application
    Filed: July 3, 2019
    Publication date: January 9, 2020
    Applicant: Industrial Technology Research Institute
    Inventors: Yi-Ting Tsai, Chun-Lung Lin, Ching-Chieh Lin
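    The index-table lookup in the abstract above can be sketched as follows: each patch carries an index, the table maps indices to orientations, and the decoder applies the inverse transform to recover the patch's original orientation. The table entries and the rotation encoding below are assumptions for illustration only; they are not taken from the patent or from any point-cloud coding standard.

```python
def rot90(patch):
    """Rotate a 2D patch (list of rows) 90 degrees counter-clockwise."""
    return [list(row) for row in zip(*patch)][::-1]

# Assumed index table: index -> number of 90-degree rotations applied at encode time.
INDEX_TABLE = {0: 0, 1: 1, 2: 2, 3: 3}

def recover_patch(stored_patch, index):
    """Undo the encode-time rotation recorded in the index table."""
    k = INDEX_TABLE[index]
    for _ in range((4 - k) % 4):   # apply the inverse rotation
        stored_patch = rot90(stored_patch)
    return stored_patch

original = [[1, 2], [3, 4]]
stored = rot90(original)           # encoder rotated the patch once (index 1)
print(recover_patch(stored, 1))    # -> [[1, 2], [3, 4]]
```

Storing only a small index per patch, rather than full orientation parameters, is what makes the table lookup cheap on the decoding side.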
  • Publication number: 20200013236
    Abstract: A method, system, and computer program for providing a virtual object in a virtual or semi-virtual environment based on a characteristic associated with a user. In one example embodiment, the system comprises at least one computer processor and a memory storing instructions that, when executed by the at least one computer processor, perform a set of operations comprising determining the characteristic associated with the user in the virtual or semi-virtual environment with respect to a predetermined reference location in the environment, and providing a virtual object based on the characteristic.
    Type: Application
    Filed: September 16, 2019
    Publication date: January 9, 2020
    Applicant: Microsoft Technology Licensing, LLC
    Inventors: Carlos G. PEREZ, Vidya SRINIVASAN, Colton B. MARSHALL, Aniket HANDA, Harold Anthony MARTINEZ MOLINA