Patent Applications Published on January 21, 2021
  • Publication number: 20210019919
    Abstract: A head-mounted device can be operated with another device and/or object for which information is gathered to facilitate visual display of a representation thereof. An object can be provided with indicators that allow a head-mounted device to determine both an identity and a characteristic (e.g., position, orientation, distance, etc.) of the object. Additionally or alternatively, the head-mounted device can determine both an identity and a characteristic (e.g., position, orientation, distance, etc.) of an electronic device attached to an object for producing a virtual representation of the object. Additionally or alternatively, the head-mounted device can receive data from an electronic device attached to an object for producing a virtual representation of the object. The virtual representation of the object can resemble the physical object, even where the object itself is not independently analyzed.
    Type: Application
    Filed: July 2, 2020
    Publication date: January 21, 2021
    Inventors: David A. SCHMUCK, Marinus MEURSING, Brian S. LAU, Jeremy C. FRANKLIN
  • Publication number: 20210019920
    Abstract: An image processing apparatus includes a graphical image generator that generates a first graphical image and a second graphical image, and a mode switching unit that switches between a first display mode and a second display mode so that a display displays the first graphical image or the second graphical image. Each graphical image includes a color wheel indicating a two-dimensional color space of hue and saturation and a lightness bar indicating a one-dimensional color space of lightness. In the first graphical image, each position in the color wheel and the lightness bar is displayed in the color indicated by that position. In the second graphical image, a distribution of colors included in the color image in the color space of hue and saturation or the color space of lightness is superposed on the color wheel and the lightness bar.
    Type: Application
    Filed: May 28, 2020
    Publication date: January 21, 2021
    Applicant: FANUC CORPORATION
    Inventors: Fumikazu WARASHINA, Yuutarou TAKAHASHI, Wanfeng FU
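    The abstract above maps angle to hue and radius to saturation on a color wheel. The following is a minimal sketch of how such a wheel could be rasterized; the image size and the fixed HSV value (standing in for the separate lightness bar) are illustrative assumptions, not details from the publication.
```python
import colorsys
import numpy as np

def color_wheel(size=128, value=1.0):
    """Rasterize a hue/saturation wheel: angle -> hue, radius -> saturation."""
    img = np.zeros((size, size, 3))
    c = (size - 1) / 2.0
    for y in range(size):
        for x in range(size):
            dx, dy = x - c, y - c
            r = np.hypot(dx, dy) / c
            if r <= 1.0:  # inside the wheel
                hue = (np.degrees(np.arctan2(dy, dx)) % 360.0) / 360.0
                img[y, x] = colorsys.hsv_to_rgb(hue, r, value)  # saturation = radius
    return img

wheel = color_wheel()
print(wheel.shape, wheel.max())  # (128, 128, 3) 1.0
```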
  • Publication number: 20210019921
    Abstract: A display processing unit 15 detects an edge component by using an image signal generated by an imaging unit 12 and processed by a camera processing unit 13, and performs highlighting processing on a pixel whose edge component has a value greater than or equal to a predetermined value, or pixels in a predetermined range including the pixel. A control unit 20 changeably sets the size and the position of a processing target area on which the highlighting processing is performed, on the basis of an operation signal supplied from an operation unit 18, and the like. For example, the control unit 20 may set the size and the position of the processing target area on the basis of a size setting operation and a position setting operation using an operation key of the operation unit 18, or may set the size and the position of the processing target area depending on a touch panel operation on the operation unit 18.
    Type: Application
    Filed: January 15, 2019
    Publication date: January 21, 2021
    Inventor: TAKUJI YOSHIDA
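    A rough sketch of the highlighting ("peaking") idea in 20210019921: edge components are estimated inside a user-settable processing target area, and pixels whose edge value meets a threshold are highlighted. The gradient-based edge estimate, the threshold, and the highlight color are assumptions for illustration.
```python
import numpy as np

def highlight_edges(image, region, threshold=0.2, color=1.0):
    """Return a copy of `image` where pixels inside `region` whose edge
    magnitude is >= threshold are replaced by `color`.
    region = (row0, col0, height, width), i.e. the processing target area."""
    gy, gx = np.gradient(image.astype(float))
    edge = np.hypot(gx, gy)                       # simple edge-component estimate
    out = image.astype(float).copy()
    r0, c0, h, w = region
    sub = (slice(r0, r0 + h), slice(c0, c0 + w))  # settable size and position
    mask = edge[sub] >= threshold
    out[sub][mask] = color                        # highlight qualifying pixels
    return out

if __name__ == "__main__":
    img = np.zeros((120, 160))
    img[:, 80:] = 1.0                             # vertical step edge
    peaked = highlight_edges(img, region=(20, 60, 80, 40), threshold=0.3)
    print("highlighted pixels:", int((peaked != img).sum()))  # pixels along the edge inside the region
```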
  • Publication number: 20210019922
    Abstract: An image processing system includes: an acquisition unit that acquires a plurality of projection images obtained by tomosynthesis imaging in which radiation is emitted from a radiation source to a breast at different irradiation angles and a projection image is captured at each irradiation angle by a radiation detector; a tomographic image generation unit that generates a plurality of tomographic images in each of a plurality of tomographic planes of the breast, from the plurality of projection images; a composite two-dimensional image generation unit that generates a composite two-dimensional image from a plurality of images selected from among the plurality of projection images and the plurality of tomographic images; an information generation unit that generates correspondence relationship information representing a correspondence relationship between a position in the composite two-dimensional image and a depth of a tomographic plane corresponding to the position; a display controller that perf
    Type: Application
    Filed: June 11, 2020
    Publication date: January 21, 2021
    Applicant: FUJIFILM Corporation
    Inventor: Wataru FUKUDA
  • Publication number: 20210019923
    Abstract: An imaging apparatus according to an aspect of the present disclosure includes: a filter that includes a region through which a two-dimensional signal indicating an image passes, and includes a portion which blocks at least a part of the two-dimensional signal in the region; a detector that detects a power of the two-dimensional signal passing through the filter; at least one memory storing a set of instructions; and at least one processor configured to execute the set of instructions to reconstruct the image indicated by the two-dimensional signal based on the power detected in a plurality of conditions that differ in positional relationship between the two-dimensional signal imaged on the filter and a distribution of the portion.
    Type: Application
    Filed: March 19, 2019
    Publication date: January 21, 2021
    Applicant: NEC Corporation
    Inventor: Chenhui HUANG
  • Publication number: 20210019924
    Abstract: A medical image processing apparatus according to an embodiment includes processing circuitry. The processing circuitry is configured to obtain Time-of-Flight (TOF) depiction image data generated on the basis of an annihilation point of a gamma ray. The processing circuitry is configured to output reconstructed Positron Emission Computed Tomography (PET) image data on the basis of the TOF depiction image data and a trained model that outputs the reconstructed PET image data on the basis of an input of the TOF depiction image data.
    Type: Application
    Filed: July 17, 2020
    Publication date: January 21, 2021
    Applicant: CANON MEDICAL SYSTEMS CORPORATION
    Inventor: Kenta MORIYASU
  • Publication number: 20210019925
    Abstract: A method and system for providing optimized digital drawings are provided. The method and system include receiving a request to perform a transformation operation on a digital drawing to optimize the digital drawing, providing a tool for selecting an area of the digital drawing to perform the transformation operation on, receiving a selection of the selected area of the digital drawing, transforming the selected area of the digital drawing to optimize the digital drawing, and displaying the transformed area of the digital drawing. The method may also include receiving an input for a digital drawing, identifying the input as related to coloring a portion of the digital drawing, identifying one or more boundary lines for the portion of the digital drawing, determining if the input includes a stroke that extends outside the one or more boundary lines, and removing a section of the stroke that extends outside the boundary lines.
    Type: Application
    Filed: September 19, 2019
    Publication date: January 21, 2021
    Applicant: MICROSOFT TECHNOLOGY LICENSING, LLC
    Inventor: Arcadio GARCIA SALVADORES
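    An illustrative sketch of the "keep color inside the lines" behavior described in 20210019925: stroke sample points that fall outside a filled boundary mask are dropped, splitting the stroke into the sections that remain inside. The mask and stroke data are toy examples, not the publication's representation of strokes or boundaries.
```python
import numpy as np

def clip_stroke_to_boundary(stroke_points, inside_mask):
    """stroke_points: list of (row, col); inside_mask: 2D bool array, True = inside
    the boundary lines. Returns the list of contiguous in-bounds sections."""
    sections, current = [], []
    for r, c in stroke_points:
        in_bounds = 0 <= r < inside_mask.shape[0] and 0 <= c < inside_mask.shape[1]
        if in_bounds and inside_mask[r, c]:
            current.append((r, c))
        elif current:                 # stroke left the region: close this section
            sections.append(current)
            current = []
    if current:
        sections.append(current)
    return sections

mask = np.zeros((10, 10), dtype=bool)
mask[2:8, 2:8] = True                        # region enclosed by the boundary lines
stroke = [(5, c) for c in range(10)]         # horizontal stroke crossing the region
print(clip_stroke_to_boundary(stroke, mask)) # one section: columns 2..7 on row 5
```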
  • Publication number: 20210019926
    Abstract: The present invention relates to a mobile terminal and an automatic cosmetics recognition system capable of recognizing cosmetics located nearby and recommending a make-up method using the recognized cosmetics. The present invention may comprise a mobile terminal including: an RFID reader unit configured to recognize cosmetics through RFID tags attached to the cosmetics, each tag storing cosmetic information, and to receive the cosmetic information from the recognized RFID tags; a control unit configured to generate a cosmetics list consisting of at least one cosmetic recognized by the RFID reader unit; and a display unit configured to display the cosmetics list.
    Type: Application
    Filed: March 14, 2019
    Publication date: January 21, 2021
    Applicant: LG HOUSEHOLD & HEALTH CARE LTD.
    Inventors: Hae Young HWANG, Bok Hyun PACK
  • Publication number: 20210019927
    Abstract: Systems and methods for utilizing living entities as markers for virtual content in an augmented reality environment are discussed herein. The virtual content may comprise objects, surfaces, textures, effects, and/or other content visibly manifested in views of the augmented reality environment. In some implementations, the virtual content may comprise an avatar and/or other full- or partial-body virtual content object depicted based on the living entity. A living entity and multiple linkage points for the living entity may be detected within the field of view of a user. Based on the arrangement of the linkage points, virtual content may be rendered and appear superimposed over or in conjunction with a view of the living entity in the augmented reality environment. In some implementations, the rendering of virtual content in the augmented reality environment may be triggered by the arrangement of the multiple linkage points for a given living entity.
    Type: Application
    Filed: October 2, 2020
    Publication date: January 21, 2021
    Inventor: Nicholas T. Hariton
  • Publication number: 20210019928
    Abstract: Techniques are disclosed for learning a machine learning model that maps control data, such as renderings of skeletons, and associated three-dimensional (3D) information to two-dimensional (2D) renderings of a character. The machine learning model may be an adaptation of the U-Net architecture that accounts for 3D information and is trained using a perceptual loss between images generated by the machine learning model and ground truth images. Once trained, the machine learning model may be used to animate a character, such as in the context of previsualization or a video game, based on control of associated control points.
    Type: Application
    Filed: July 15, 2019
    Publication date: January 21, 2021
    Inventors: Dominik Tobias BORER, Martin GUAY, Jakob Joachim BUHMANN, Robert Walker SUMNER
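    A rough sketch of the training setup described in 20210019928 (not the authors' architecture): a small encoder-decoder maps a rendered-skeleton control image plus a 3D/depth channel to a character image, and training minimizes a feature-space ("perceptual") loss. The tiny frozen conv net stands in for a pretrained feature extractor such as VGG, and all layer sizes are illustrative assumptions.
```python
import torch
import torch.nn as nn

class TinyUNetLike(nn.Module):
    def __init__(self):
        super().__init__()
        self.down = nn.Sequential(nn.Conv2d(4, 16, 3, stride=2, padding=1), nn.ReLU())
        self.up = nn.Sequential(nn.ConvTranspose2d(16, 3, 4, stride=2, padding=1), nn.Sigmoid())

    def forward(self, x):               # x: skeleton rendering + 3D/depth channel
        return self.up(self.down(x))

feature_net = nn.Sequential(nn.Conv2d(3, 8, 3, padding=1), nn.ReLU()).eval()
for p in feature_net.parameters():      # frozen stand-in for a pretrained extractor
    p.requires_grad_(False)

def perceptual_loss(pred, target):
    """Mean squared error between feature maps rather than raw pixels."""
    return torch.mean((feature_net(pred) - feature_net(target)) ** 2)

model = TinyUNetLike()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
control = torch.rand(2, 4, 64, 64)      # control renderings (toy data)
ground_truth = torch.rand(2, 3, 64, 64) # 2D character renderings (toy data)
for _ in range(3):                      # a few illustrative training steps
    opt.zero_grad()
    loss = perceptual_loss(model(control), ground_truth)
    loss.backward()
    opt.step()
print("loss:", float(loss))
```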
  • Publication number: 20210019929
    Abstract: Provided are systems and methods for single image-based body animation. An example method includes receiving an input image that includes a body of a person and segmenting the input image into a body portion and a background portion. The method further includes fitting a model to the body portion. The model is configured to receive a set of pose parameters representing a pose of the body and generate an output image including an image of the body adopting the pose. The method further includes receiving a series of further sets of pose parameters, each representing at least one of further poses of the body. The further sets of pose parameters are generated using a generic model. The method also includes generating a series of output images of the body adopting the further poses and generating an output video based on the series of output images.
    Type: Application
    Filed: October 2, 2020
    Publication date: January 21, 2021
    Inventors: Egor Nemchinov, Sergei Gorbatyuk, Aleksandr Mashrabov, Egor Spirin, Iaroslav Sokolov, Andrei Smirdin, Igor Tukh
  • Publication number: 20210019930
    Abstract: Disclosed is a method of localizing a user operating a plurality of sensing components, preferably in an augmented or mixed reality environment, the method comprising transmitting pose data from a fixed control and processing module and receiving the pose data at a first sensing component; the pose data is then transformed into a first component-relative pose in a coordinate frame based on the control and processing module. A display unit in communication with the first sensing component is updated with the transformed first component-relative pose to render virtual content with improved environmental awareness.
    Type: Application
    Filed: August 5, 2020
    Publication date: January 21, 2021
    Applicant: Magic Leap, Inc.
    Inventor: Paul M. Greco
  • Publication number: 20210019931
    Abstract: Embodiments provide for cut-aware UV transfer. Embodiments include receiving a surface correspondence map that maps points of a source mesh to points of a target mesh. Embodiments include generating a set of functions encoding locations of seam curves and wrap curves from a source UV map of the source mesh. Embodiments include using the set of functions and the surface correspondence map to determine a target UV map that maps a plurality of target seam curves and a plurality of target wrap curves to the target mesh. Embodiments include transferring a two-dimensional parametrization of the source UV map to the target UV map.
    Type: Application
    Filed: July 17, 2019
    Publication date: January 21, 2021
    Inventor: Fernando Ferrari DE GOES
  • Publication number: 20210019932
    Abstract: Various methods and systems are provided for medical imaging. In one embodiment, a method comprises displaying a volume-rendered image from a 3D medical imaging dataset; positioning a first virtual marker within a rendered volume of the volume-rendered image, the rendered volume defined by the 3D medical imaging dataset; and illuminating the rendered volume by projecting simulated light from the first virtual marker. In this way, the illumination of the rendered volume by the first virtual marker visually indicates the position and depth of the first virtual marker within the volume-rendered image.
    Type: Application
    Filed: July 18, 2019
    Publication date: January 21, 2021
    Inventor: Lars Hofsoy Breivik
  • Publication number: 20210019933
    Abstract: The invention provides, in some aspects, a system for implementing a rule-derived basis to display image sets. In various embodiments of the invention, the selection of the images to be displayed, the layout of the images, as well as the rendering parameters and styles can be determined using a rule-derived basis. The rules are based on metadata of the examination as well as image content that is analyzed by neural networks. In an embodiment of the present invention, the user is presented with images displayed based on their preferences without having to first manually adjust parameters.
    Type: Application
    Filed: October 1, 2020
    Publication date: January 21, 2021
    Applicant: PME IP PTY LTD
    Inventors: MALTE WESTERHOFF, DETLEV STALLING
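    A hedged sketch of a rule-derived display selection in the spirit of the abstract above: each rule matches examination metadata plus content tags (which, per the abstract, could come from a network-based analysis), and the first matching rule supplies layout and rendering parameters. All rule fields, tags, and parameter values are invented examples.
```python
RULES = [
    {"match": {"modality": "CT", "body_part": "chest", "tags": {"contrast"}},
     "layout": "2x2", "rendering": {"preset": "lung", "style": "MIP"}},
    {"match": {"modality": "CT", "body_part": "chest", "tags": set()},
     "layout": "1x2", "rendering": {"preset": "soft_tissue", "style": "MPR"}},
]

def select_display(exam_metadata, content_tags):
    """Return layout and rendering parameters from the first matching rule."""
    for rule in RULES:
        m = rule["match"]
        if (m["modality"] == exam_metadata["modality"]
                and m["body_part"] == exam_metadata["body_part"]
                and m["tags"] <= set(content_tags)):   # required tags all present
            return {"layout": rule["layout"], "rendering": rule["rendering"]}
    return {"layout": "1x1", "rendering": {"preset": "default", "style": "2D"}}

print(select_display({"modality": "CT", "body_part": "chest"}, ["contrast", "nodule"]))
```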
  • Publication number: 20210019934
    Abstract: Techniques for generating an ensemble image are disclosed. In some embodiments, images associated with a plurality of independent scenes are translated to a prescribed origin; translated images associated with each independent scene are transformed into a prescribed perspective; and pixels of an image array having the prescribed perspective that is associated with each independent scene are populated with corresponding pixels from the transformed translated images associated with that independent scene. An ensemble image comprising the prescribed perspective is at least in part generated by combining at least some pixels of image arrays associated with the plurality of independent scenes.
    Type: Application
    Filed: September 30, 2020
    Publication date: January 21, 2021
    Inventors: Clarence Chui, Manu Parmar
  • Publication number: 20210019935
    Abstract: Systems and methods for generating illumination effects for inserted luminous content, which may include augmented reality content that appears to emit light and is inserted into an image of a physical space. The content may include a polygonal mesh, which may be defined in part by a skeleton that has multiple joints. Examples may include generating a bounding box on a surface plane for the inserted content, determining an illumination center point location on the surface plane based on the content, generating an illumination entity based on the bounding box and the illumination center point location, and rendering the illumination entity using illumination values determined based on the illumination center point location. Examples may also include determining illumination contributions values for some of the joints, combining the illumination contribution values to generate illumination values for pixels, and rendering another illumination entity using the illumination values.
    Type: Application
    Filed: July 17, 2019
    Publication date: January 21, 2021
    Inventors: Ivan Neulander, Mark Dochtermann
  • Publication number: 20210019936
    Abstract: Embodiments herein provide techniques for signaling of priority information (e.g., priority ranking) and/or quality information in a timed metadata track associated with point cloud content. For example, embodiments include procedures for signaling of priority information and/or quality information in a timed metadata track to support viewport-dependent distribution of point cloud content, e.g., based on MPEG's International Organization for Standardization (ISO) Base Media File Format (ISOBMFF). In some embodiments, metadata samples of the timed metadata track may include priority information and/or quality information for a point cloud bounding box of a point cloud media presentation (e.g., for one or more point cloud objects in the point cloud bounding box). Other embodiments may be described and claimed.
    Type: Application
    Filed: September 25, 2020
    Publication date: January 21, 2021
    Inventor: Ozgur Oyman
  • Publication number: 20210019937
    Abstract: A computer vision method, executed by one or more processors, for generating a single 3D model view of a geographic scene includes: receiving image data for the scene from a plurality of sensors located at different angles with respect to the geographic scene; dividing the image data into a plurality of image spatial regions; correlating the image data in each image spatial region to obtain a score for each image data in each image spatial region; grouping the image data in each image spatial region into two or more image clusters, based on the scores for each image; performing a multi-ray intersection within each image cluster to obtain a 3D reference point for each region; for each region, combining the one or more clusters, based on the 3D reference point for the region; and registering the combined clusters for each region to obtain a single 3D model view of the scene.
    Type: Application
    Filed: July 18, 2019
    Publication date: January 21, 2021
    Inventors: Jacob Wesely Gallaway, Jeremy Jens Gerhart, Stephen J. Raif, Jody Dale Verret
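    The abstract above relies on a multi-ray intersection to obtain a 3D reference point per cluster. Below is one common least-squares formulation of that step (find the point closest to all rays); it is a generic illustration, not necessarily the formulation used in the publication, and the sensor poses are made-up values.
```python
import numpy as np

def intersect_rays(origins, directions):
    """Least-squares intersection of rays p = o_i + t * d_i (d_i need not be unit length)."""
    A = np.zeros((3, 3))
    b = np.zeros(3)
    for o, d in zip(origins, directions):
        d = d / np.linalg.norm(d)
        P = np.eye(3) - np.outer(d, d)   # projector onto the plane normal to d
        A += P
        b += P @ o
    return np.linalg.solve(A, b)         # 3D point minimizing distance to all rays

origins = np.array([[0.0, 0.0, 0.0], [10.0, 0.0, 0.0], [0.0, 10.0, 0.0]])
target = np.array([3.0, 4.0, 5.0])
directions = target - origins            # rays that all point at `target`
print(intersect_rays(origins, directions))   # ~ [3. 4. 5.]
```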
  • Publication number: 20210019938
    Abstract: Free space machine interface and control can be facilitated by predictive entities useful in interpreting a control object's position and/or motion (including objects having one or more articulating members, i.e., humans and/or animals and/or machines). Predictive entities can be driven using motion information captured using image information or the equivalents. Predictive information can be improved by applying techniques for correlating with information from observations.
    Type: Application
    Filed: October 1, 2020
    Publication date: January 21, 2021
    Applicant: Ultrahaptics IP Two Limited
    Inventors: Kevin A. HOROWITZ, David S. HOLZ
  • Publication number: 20210019939
    Abstract: An electronic apparatus and method is provided for shape-refinement of a triangular 3D mesh using a modified Shape from Shading (SFS) scheme. The electronic apparatus generates a flat two-dimensional (2D) mesh based on an orthographic projection of an initial three-dimensional (3D) triangular mesh on an image plane that includes a plurality of square grid vertices. The electronic apparatus estimates a final grid depth value for each square grid vertex of the flat 2D mesh based on a modified SFS scheme. The modified SFS scheme corresponds to an objective relationship among a reference grid image intensity value, an initial grid depth value, and a grid albedo value for each square grid vertex of the plurality of square grid vertices. The electronic apparatus estimates a final 3D triangular mesh based on the initial 3D triangular mesh and the estimated final grid depth value.
    Type: Application
    Filed: July 18, 2019
    Publication date: January 21, 2021
    Inventors: JIE HU, MOHAMMAD GHARAVI-ALKHANSARI
  • Publication number: 20210019940
    Abstract: A tessellation method uses both vertex tessellation factors and displacement factors defined for each vertex of a patch, which may be a quad, a triangle or an isoline. The method is implemented in a computer graphics system and involves calculating a vertex tessellation factor for each corner vertex in one or more input patches. Tessellation is then performed on the plurality of input patches using the vertex tessellation factors. The tessellation operation involves adding one or more new vertices and calculating a displacement factor for each newly added vertex. A world space parameter for each vertex is subsequently determined by calculating a target world space parameter for each vertex and then modifying the target world space parameter for a vertex using the displacement factor for that vertex.
    Type: Application
    Filed: October 2, 2020
    Publication date: January 21, 2021
    Inventors: Peter Malcolm Lacey, Simon Fenney
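    A minimal illustration of the final step described in 20210019940: a newly added vertex's target world-space position is modified by its displacement factor, so vertices introduced at low tessellation levels displace gradually. The linear blend rule and the sample values are assumptions for illustration, not the patented formula.
```python
import numpy as np

def resolve_world_position(base_position, target_position, displacement_factor):
    """displacement_factor in [0, 1]: 0 keeps the un-displaced base surface,
    1 applies the full target (e.g. displacement-mapped) world-space position."""
    base = np.asarray(base_position, dtype=float)
    target = np.asarray(target_position, dtype=float)
    return base + displacement_factor * (target - base)

# A newly added mid-edge vertex, half-way through its displacement ramp:
print(resolve_world_position([0.0, 0.0, 0.0], [0.0, 1.0, 0.0], 0.5))  # [0. 0.5 0.]
```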
  • Publication number: 20210019941
    Abstract: A user selects a set of photographs from a trip through an environment that he or she desires to present to other people. A collection of photographs, including the set of photographs captured during the trip optionally augmented with additional photographs obtained from another collection, are combined with a terrain model (e.g., a digital elevation model) to extract information regarding the geographic location of each of the photographs within the environment. The collection of photographs are analyzed, considering their geolocation information as well as the photograph content to register the photographs relative to one another. This information for the photographs is compared to the terrain model in order to accurately position the viewpoint for each photograph within the environment. A presentation of the selected photographs within the environment is generated that displays both the selected photographs and synthetic data filled in beyond the edges of the selected photographs.
    Type: Application
    Filed: October 2, 2020
    Publication date: January 21, 2021
    Applicant: Adobe Inc.
    Inventors: Michal Lukác, Zhili Chen, Jan Brejcha, Martin Cadik
  • Publication number: 20210019942
    Abstract: System and method for enhancing situational awareness. A moveable see-through display viewable by a user displays an augmented reality 2D image of an external scene based on received 2D image data, in accordance with updated position and orientation of display. The see-through display further displays an augmented reality 3D image of the external scene based on received 3D image data, the 3D image overlaid conformally onto view of external scene, in accordance with updated position and orientation of display. The see-through display further selectively displays: a gradual transition of the 2D image into the 3D image, or a gradual transition of the 3D image into the 2D image. At least one image feature may gradually appear or gradually disappear during the gradual transition. The 2D or 3D image may include a region of interest based on updated position and orientation of display or selected by user.
    Type: Application
    Filed: March 6, 2018
    Publication date: January 21, 2021
    Inventors: Yoav Ophir, Itamar Nocham
  • Publication number: 20210019943
    Abstract: Methods, systems, and computer program products are described for obtaining, from a first tracking system, an initial three-dimensional (3D) position of an electronic device in relation to image features captured by a camera of the electronic device and obtaining, from a second tracking system, an orientation associated with the electronic device. Responsive to detecting a movement of the electronic device, obtaining, from the second tracking system, an updated orientation associated with the detected movement of the electronic device, generating and providing a query to the first tracking system, the query corresponding to at least a portion of the image features and including the updated orientation and the initial 3D position of the electronic device, generating, for a sampled number of received position changes, an updated 3D position for the electronic device and generating a 6-DoF pose using the updated 3D positions and the updated orientation for the electronic device.
    Type: Application
    Filed: July 15, 2019
    Publication date: January 21, 2021
    Inventors: Aveek Purohit, Kan Huang
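    The abstract above ends by fusing an updated 3D position with an updated orientation into a 6-DoF pose. A small sketch of that fusion is shown here, representing the orientation as a unit quaternion and the pose as a 4x4 matrix; the (w, x, y, z) convention and the matrix representation are assumptions, since the publication does not specify them.
```python
import numpy as np

def pose_from_quaternion_and_position(q, position):
    """Build a 4x4 rigid-transform (6-DoF pose) from a unit quaternion and a translation."""
    w, x, y, z = q / np.linalg.norm(q)
    R = np.array([
        [1 - 2 * (y * y + z * z), 2 * (x * y - w * z),     2 * (x * z + w * y)],
        [2 * (x * y + w * z),     1 - 2 * (x * x + z * z), 2 * (y * z - w * x)],
        [2 * (x * z - w * y),     2 * (y * z + w * x),     1 - 2 * (x * x + y * y)],
    ])
    T = np.eye(4)
    T[:3, :3] = R
    T[:3, 3] = position
    return T

q = np.array([np.cos(np.pi / 8), 0.0, 0.0, np.sin(np.pi / 8)])   # 45 deg about z
print(pose_from_quaternion_and_position(q, np.array([0.1, 0.2, 0.3])).round(3))
```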
  • Publication number: 20210019944
    Abstract: Provided are various systems and methods that establish a self-service AR generation and publication platform. The platform is configured to allow novice users to build AR experiences for rendering to other users. The platform encodes the AR experiences under a universal data format that decouples generation functions from later visualization responsive to a recognizable visual trigger. Access to the file triggers request, retrieval, rendering and display of the user-defined AR Media and associated options set for that specific AR Experience. Also provided is an AR universal browser that public and private users can execute to view, share, load, and experience user-generated AR Media (e.g., generated on the platform). The browser can provide the connections, logic and security needed to initialize and load the specific aggregation of data for the AR experience.
    Type: Application
    Filed: July 16, 2019
    Publication date: January 21, 2021
    Inventor: Robert E. McKeever
  • Publication number: 20210019945
    Abstract: A device is configured to determine a movement of a vehicle based on vehicle sensor data, and a movement of an augmented reality (AR) headset based on headset sensor data. The device is configured to estimate a user portion of the movement of the AR headset caused by a movement of a user and not caused by the movement of the vehicle. The device is configured to determine a gaze target of the user based on the user portion and to generate visualization data based on the gaze target. Responsive to determining that the gaze target is inside the vehicle, the visualization data includes a first visual depiction of a first point of interest that is outside the vehicle. Responsive to determining that the gaze target is outside the vehicle, the visualization data includes a second visual depiction of a second point of interest that is inside the vehicle.
    Type: Application
    Filed: July 19, 2019
    Publication date: January 21, 2021
    Inventors: James Joseph Bonang, William J. Wood
  • Publication number: 20210019946
    Abstract: An electronic device includes an image sensor, a touchscreen display, a transceiver, a processor, and a memory coupled to the processor. The memory stores instructions executable by the processor to receive images of a real world environment via the image sensor; receive, via the touchscreen display, an input for a location of at least one augmented reality anchor point on the received images of the real world environment; receive a selection of at least one virtual object associated with the at least one augmented reality anchor point; output to the touchscreen display the at least one virtual object in proximity to the at least one augmented reality anchor point; generate an augmented reality scene based in part on the at least one augmented reality anchor point and the at least one virtual object; and transmit the augmented reality scene to a receiving device via the transceiver.
    Type: Application
    Filed: November 11, 2019
    Publication date: January 21, 2021
    Inventors: Moiz Kaizar Sonasath, Andrew Lawrence Deng
  • Publication number: 20210019947
    Abstract: The invention relates to creating actual object data for mixed reality applications. In some embodiments, the invention includes using a mixed reality controller to (1) define a coordinate system frame of reference for a target object, the coordinate system frame of reference including an initial point of the target object and at least one directional axis that are specified by a user of the mixed reality controller, (2) define additional points of the target object, and (3) define interface elements of the target object. A 3D model of the target object is generated based on the coordinate system frame of reference, the additional points, and the interface elements. After receiving input metadata for defining interface characteristics for the interface elements displayed on the 3D model, the input metadata is used to generate a workflow for operating the target object in a mixed reality environment.
    Type: Application
    Filed: July 15, 2020
    Publication date: January 21, 2021
    Inventors: Larry Clay Greunke, Mark Bilinski, Christopher James Angelopoulos, Michael Joseph Guerrero
  • Publication number: 20210019948
    Abstract: Systems, methods, devices, and other techniques for placing and rendering virtual objects in three-dimensional environments. The techniques include providing, by a device, a view of an environment of a first user. A first computing system associated with the first user receives an instruction to display, within the view of the environment of the first user, a virtual marker at a specified position of the environment of the first user, the specified position derived from a second user's interaction with a three-dimensional (3D) model of at least a portion of the environment of the first user. The device displays, within the view of the environment of the first user, the virtual marker at the specified position of the environment of the first user.
    Type: Application
    Filed: August 5, 2020
    Publication date: January 21, 2021
    Inventors: Matthew Thomas Short, Sunny Webb, Joshua Opel, Theo E. Christensen
  • Publication number: 20210019949
    Abstract: In one implementation, a method of generating a depth map is performed by a device including one or more processors, non-transitory memory, and a scene camera. The method includes generating, based on a first image and a second image, a first depth map of the second image. The method includes generating, based on the first depth map of the second image and pixel values of the second image, a second depth map of the second image.
    Type: Application
    Filed: September 24, 2020
    Publication date: January 21, 2021
    Inventors: Daniel Ulbricht, Amit Kumar K C, Angela Blechschmidt, Chen-Yu Lee, Eshan Verma, Mohammad Haris Baig, Tanmay Batra
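    A simplified sketch of the two-stage idea in 20210019949: a coarse first depth map (given here as toy data rather than computed from the first and second images) is refined using the second image's pixel values, here via a cross-bilateral-style filter. This is one plausible reading of the abstract, not the publication's actual method, and all parameters are illustrative.
```python
import numpy as np

def refine_depth(coarse_depth, guide_image, radius=2, sigma_i=0.1, sigma_s=2.0):
    """Cross-bilateral refinement: depth values are averaged over a window,
    weighted by spatial distance and by similarity of the guide image pixels."""
    H, W = coarse_depth.shape
    out = np.zeros_like(coarse_depth, dtype=float)
    ys, xs = np.mgrid[-radius:radius + 1, -radius:radius + 1]
    spatial = np.exp(-(ys ** 2 + xs ** 2) / (2 * sigma_s ** 2))
    d = np.pad(coarse_depth.astype(float), radius, mode="edge")
    g = np.pad(guide_image.astype(float), radius, mode="edge")
    for y in range(H):
        for x in range(W):
            dwin = d[y:y + 2 * radius + 1, x:x + 2 * radius + 1]
            gwin = g[y:y + 2 * radius + 1, x:x + 2 * radius + 1]
            wi = np.exp(-((gwin - guide_image[y, x]) ** 2) / (2 * sigma_i ** 2))
            w = spatial * wi
            out[y, x] = np.sum(w * dwin) / np.sum(w)   # smooth within similar pixels only
    return out

image = np.zeros((32, 32)); image[:, 16:] = 1.0          # two flat regions
noisy_depth = image * 2.0 + 0.1 * np.random.randn(32, 32)
refined = refine_depth(noisy_depth, image)
print("noise std before/after:", noisy_depth[:, :16].std().round(3), refined[:, :16].std().round(3))
```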
  • Publication number: 20210019950
    Abstract: Systems and methods for conveying virtual content in an augmented reality environment comprising images of virtual content superimposed over physical objects and/or physical surroundings visible within a field of view of a user as if the images of the virtual content were present in the real world. Exemplary implementations may: obtain user information for a user associated with a presentation device physically present at a location of the system; compare the user information with the accessibility criteria for the virtual content to determine whether any portions of the virtual content are to be presented to the user based on the accessibility criteria and the user information for the user; and facilitate presentation of the virtual content to the user via the presentation device of the user based on the virtual content information, the field of view, and the correlations between the multiple linkage points and the reference frame of the virtual content.
    Type: Application
    Filed: September 25, 2020
    Publication date: January 21, 2021
    Inventor: Nicholas T. Hariton
  • Publication number: 20210019951
    Abstract: A method for providing a configurable virtual reality environment comprises receiving virtual reality world implementation data that describes locations of structures within a virtual reality world from a customer at an input interface. The virtual reality world implementation data is mapped to a physical plan describing a physical implementation of the structures within the virtual reality world using a mapping controller. The physical plan provides indexing and reference points with respect to the physical implementation of the structures in the virtual reality world, enabling the physical plan to define the structures in the real world using the mapping controller. Configurable hardware components necessary to build the physical implementation of the structures in the virtual reality world are determined responsive to the physical plan and a listing of available configurable hardware using a parts list controller.
    Type: Application
    Filed: October 5, 2020
    Publication date: January 21, 2021
    Inventors: Jovan Hutton Pulitzer, David Walens, Matthew Kelly, Geoffrey Wright
  • Publication number: 20210019952
    Abstract: Methods, computer program products, and systems are presented. The methods, computer program products, and systems can include, for instance: obtaining virtual image data representing a virtual object; and encoding the virtual image data with physical image data to provide a formatted image file, wherein the encoding includes, for a plurality of spatial image elements, providing one or more data fields that specify physical image information and one or more data fields that specify virtual image information based on the virtual image data so that the formatted image file for each of the plurality of spatial image elements provides physical image information and virtual image information, and wherein the encoding includes providing indexing data that associates an identifier for the virtual object to spatial image elements for the virtual object.
    Type: Application
    Filed: October 5, 2020
    Publication date: January 21, 2021
    Applicant: Wayfair LLC
    Inventors: David C. Bastian, Aaron K. Baughman, Nicholas A. McCrory, Todd R. Whitman
  • Publication number: 20210019953
    Abstract: Techniques for improving how surface reconstruction data is prepared and passed between multiple devices are disclosed. For example, an environment is scanned to generate 3D scanning data. This 3D scanning data is then transmitted to a central processing service. The 3D scanning data is structured or otherwise configured to enable the central processing service to generate a digital 3D representation of the environment using the 3D scanning data. Reduced resolution representation data is received from the central processing service. This reduced resolution representation data was generated based on 3D scanning data generated by one or more other computer systems that were also scanning the same environment. A first visualization corresponding to the original 3D scanning data is then displayed simultaneously with one or more secondary visualization(s) corresponding to the reduced resolution representation data.
    Type: Application
    Filed: July 16, 2019
    Publication date: January 21, 2021
    Inventors: Yuri Pekelny, Michael Bleyer, Raymond Kirk Price
  • Publication number: 20210019954
    Abstract: A computer implemented method or system including a map conversion toolkit and a map population toolkit. The map conversion toolkit allows one to quickly trace the layout of a floor plan, generating a file (e.g., GeoJSON file) that can be rendered in two dimensions (2D) or three dimensions (3D) using web tools such as Mapbox. The map population toolkit takes the scan (e.g., 3D scan) of a room in the building (taken from an RGB-D camera), and, through a semi-automatic process, generates individual objects, which are correctly dimensioned and positioned in the (e.g., GeoJSON) representation of the building. In another example, a computer implemented method for diagramming a space comprises obtaining a layout of the space; and annotating or decorating the layout with meaningful labels that are translatable to glanceable visual signals or audio signals.
    Type: Application
    Filed: July 17, 2020
    Publication date: January 21, 2021
    Applicant: The Regents of the University of California
    Inventors: Viet Trinh, Roberto Manduchi
  • Publication number: 20210019955
    Abstract: [Problem] To enable an AR image to be displayed more suitably. [Solution] Provided is an information processing apparatus, including: a control unit that controls a display device to display a first image in a first display region and a second display region, respectively, which are adjacent to each other and have mutually different display timing, so that the first image is superimposed over a real space as seen by a user of the display device, and that controls the display device to transform the first image in the first display region and the first image in the second display region on the basis of changes in position and posture information relating to the display device.
    Type: Application
    Filed: February 7, 2019
    Publication date: January 21, 2021
    Inventors: HIROYUKI AGA, ATSUSHI ISHIHARA, KOICHI KAWASAKI, MITSURU NISHIBE
  • Publication number: 20210019956
    Abstract: A method for automatically generating hierarchical exploded views based on assembly constraints and collision detection, in which parts to be exploded are layered in explosion sequence according to a design result of the 3D assembly process planning, and the parts to be exploded in each layer are grouped based on the type and the disassembly direction; a feasible explosion direction of the parts in each layer is determined according to assembly constraints and collision detection; the explosion sequence and explosion direction of the parts in each layer are determined; and then the layered explosion is performed at a certain distance. Ball markers and a part-list are generated after all the parts are exploded.
    Type: Application
    Filed: September 30, 2020
    Publication date: January 21, 2021
    Inventors: Fujun TIAN, Xingyu CHEN, Hongqi ZHANG, Hongqiao ZHOU, Yixiong WEI, Lei GUO, Liangxi CHEN, Jinwen ZHOU, Yanlong ZHANG, Jianjun SU
  • Publication number: 20210019957
    Abstract: An information processing apparatus includes an entry and leaving management unit configured to analyze a captured image and detect vehicle information included in the captured image, and in a case where there is reservation information of parking corresponding to the detected vehicle information, open a gate of a parking lot corresponding to the reservation information.
    Type: Application
    Filed: May 21, 2020
    Publication date: January 21, 2021
    Applicant: SONY CORPORATION
    Inventors: Hiroaki NISHIMURA, Nikolaos GEORGIS
  • Publication number: 20210019958
    Abstract: A relearning necessity determination method is provided for determining a necessity of relearning of a learned diagnostic model in a machine tool including a machining abnormality diagnosing unit. The machining abnormality diagnosing unit determines whether machining is normal or abnormal using the diagnostic model generated through machine learning. The method includes storing a cumulative cutting time or a cumulative cutting distance of a tool mounted to the machine tool as a tool usage, storing the tool usage when the machining abnormality diagnosing unit diagnoses the machining as machining abnormality, and determining the necessity of the relearning of the diagnostic model based on a frequency distribution of the tool usage stored in the storing of the tool usage.
    Type: Application
    Filed: July 13, 2020
    Publication date: January 21, 2021
    Applicant: OKUMA CORPORATION
    Inventor: Hiroshi UENO
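    A hedged sketch of the decision logic described above: the cumulative cutting time at each "abnormal" diagnosis is stored, and the frequency distribution of those usages is examined. The specific rule used here (flag relearning when abnormalities concentrate well below the tool's wear limit, where wear cannot explain them) and all thresholds are assumptions, not the publication's criterion.
```python
import numpy as np

def needs_relearning(abnormal_usages, wear_limit, low_usage_fraction=0.5, min_events=10):
    """abnormal_usages: cumulative cutting times logged when the diagnostic model
    reported machining abnormality. If many abnormalities occur well below the
    tool's wear limit, the model likely mismatches the process -> relearn."""
    usages = np.asarray(abnormal_usages, dtype=float)
    if usages.size < min_events:
        return False                               # too few events to judge
    hist, _ = np.histogram(usages, bins=10, range=(0.0, wear_limit))
    low_usage_events = hist[: len(hist) // 2].sum()  # events in the lower-usage half
    return low_usage_events / usages.size >= low_usage_fraction

print(needs_relearning([100, 200, 250, 300, 400, 500, 900, 950, 980, 990],
                       wear_limit=1000.0))          # True: many abnormalities at low usage
```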
  • Publication number: 20210019959
    Abstract: A system includes a vehicle event recorder and a modem. The vehicle event recorder is configured to receive a sensor data signal from a sensor that is mounted on a vehicle trailer. The modem is configured to demodulate the sensor data signal to create a demodulated sensor data signal. The demodulated sensor data signal is received using a first line. The first line is coupled via a harness to the sensor on the vehicle trailer.
    Type: Application
    Filed: October 6, 2020
    Publication date: January 21, 2021
    Inventors: Lanh Trinh, Gregory Dean Sutton
  • Publication number: 20210019960
    Abstract: This state monitoring device, configured to monitor a current state of a vehicle, includes: a storage configured to have stored therein a load index which is an index indicating an accumulation degree of a load having occurred in the vehicle; and a calculation unit configured to receive detection values of a plurality of sensors mounted on the vehicle, and configured to perform a predetermined calculation. The calculation unit includes: a status specification section configured to, by using the detection values of the plurality of sensors, specify, out of a plurality of status categories determined in advance and each indicating a status of use regarding running of the vehicle, one or more of the status categories; and an index update section configured to, on the basis of each specified status category, update the load index stored in the storage.
    Type: Application
    Filed: July 15, 2020
    Publication date: January 21, 2021
    Inventor: Yoshimoto MATSUDA
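    A minimal sketch of the accumulation idea above: sensor readings are mapped to one of several predefined "status of use" categories, and the stored load index is incremented by a per-category weight. The category names, thresholds, and weights are invented for illustration; the publication does not specify them.
```python
LOAD_WEIGHTS = {"idle": 0.0, "normal_running": 1.0, "high_rpm": 3.0, "rough_road": 2.0}

def classify_status(rpm, vertical_accel_g):
    """Map detection values to one of the predetermined status categories."""
    if rpm < 1000:
        return "idle"
    if vertical_accel_g > 1.5:
        return "rough_road"
    if rpm > 7000:
        return "high_rpm"
    return "normal_running"

def update_load_index(load_index, samples):
    """samples: iterable of (rpm, vertical_accel_g) detection values."""
    for rpm, acc in samples:
        load_index += LOAD_WEIGHTS[classify_status(rpm, acc)]
    return load_index

stored_index = 0.0
stored_index = update_load_index(stored_index, [(800, 1.0), (3000, 1.0), (7500, 1.1), (3000, 1.8)])
print(stored_index)   # 0 + 1 + 3 + 2 = 6.0
```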
  • Publication number: 20210019961
    Abstract: The present invention provides a method and an apparatus for configuring an automobile diagnostic function, and an automobile diagnostic device. The method for configuring an automobile diagnostic function includes: obtaining first function configuration information from a server, the first function configuration information including an identifier of at least one automobile diagnostic function; determining an automobile diagnostic function group based on the first function configuration information; and granting use permission of the automobile diagnostic function group in an automobile diagnostic application program. Function configuration information is obtained from the server and then an automobile diagnostic function supported by a product is configured, thereby improving flexibility of automobile diagnostic function configuration and reducing product development and maintenance costs.
    Type: Application
    Filed: January 14, 2019
    Publication date: January 21, 2021
    Inventors: Weilin WANG, Jiasheng ZHONG, Guilin DING, Longhui ZHONG
  • Publication number: 20210019962
    Abstract: A system according to the present disclosure includes an identification module, a data recording module, and a data upload module. The identification module is configured to identify at least one of a vehicle and a user of the vehicle. The data recording module is configured to record a location of the vehicle, an acceleration of the vehicle, and data received from a controller area network (CAN) bus of the vehicle during a driving session. The data upload module is configured to upload the vehicle location, the vehicle acceleration, the CAN bus data, and at least one of the vehicle identification and the user identification to a remote server.
    Type: Application
    Filed: October 5, 2020
    Publication date: January 21, 2021
    Inventors: Talus Park, Paul Bromnick, Martin Steinbacher, Xinwei Hu
  • Publication number: 20210019963
    Abstract: This disclosure relates to a method and system for an exterior display of a vehicle, such as an autonomous vehicle, configured to issue feedback prompts. An example method includes issuing a prompt via a human-machine interface on an exterior of a vehicle permitting an input indicative of an undesired condition of the vehicle.
    Type: Application
    Filed: July 17, 2019
    Publication date: January 21, 2021
    Inventors: Stuart C. Salter, Kristopher Karl Brown, Steven Wayne Friedlander, James J. Surman, Cornel Lewis Gardner
  • Publication number: 20210019964
    Abstract: A movable barrier operator system is provided that includes a moving-barrier imminent motion notification apparatus, a motor, a controller, and communication circuitry configured to communicate with a remote control. In response to the communication circuitry receiving a communication from the remote control, the controller causes the motor to change the state of the movable barrier and additionally operate the moving-barrier imminent motion notification apparatus upon a determination of the remote control being beyond a physical proximity of a location associated with the movable barrier operator system. Upon a determination that the remote control is within the physical proximity, the controller refrains from operation of the moving-barrier imminent motion notification apparatus.
    Type: Application
    Filed: October 6, 2020
    Publication date: January 21, 2021
    Inventors: Casparus Cate, Jordan Ari Farber, James J. Fitzgibbon, Nathan J. Kopp
  • Publication number: 20210019965
    Abstract: A control method of controlling, using a distributed ledger, locking or unlocking of one or more storage units, in each of which an item is storable, includes: operating a first smart contract and one or more second smart contracts by a code, stored in the distributed ledger, being executed by a computer, the first smart contract managing the one or more second smart contracts that are in one-to-one correspondence with the one or more storage units; controlling, by each of the one or more second smart contracts, locking or unlocking of a corresponding one of the one or more storage units, the controlling being performed under the management by the first smart contract; and controlling, by the first smart contract, whether to place each of the one or more second smart contracts under the management.
    Type: Application
    Filed: October 7, 2020
    Publication date: January 21, 2021
    Inventors: Fumiaki KAGAYA, Eiichi ABE, Junichiro SOEDA
  • Publication number: 20210019966
    Abstract: Aspects of the invention are directed towards methods, systems, and devices for providing seamless access to a user through an access point of an access system. The invention describes estimating various factors such as time, distance, and signal strength enabling frictionless access to the user. The invention describes adjusting the time interval and the signal strength according to the needs of the user. Various other embodiments of the invention describe estimating a threshold signal strength and time interval based on a distance of the user from the access point.
    Type: Application
    Filed: July 10, 2020
    Publication date: January 21, 2021
    Inventors: Krishnakanth Jarugumilli, Mahesh Dhumpeti, Gokul Ellanki, Anil Kumar Varturi, Rajesh Krishna Etikela, Sriram Narasimha Murthy Dulam, Michael R. Green, Kishore Maroju, Ashley Kennedy-Foster
  • Publication number: 20210019967
    Abstract: A method is provided for providing access to a physical space for provision of a service. The method is performed in an access coordinator and comprises the steps of: receiving an approval signal indicating that the service consumer allows a service provider agent of a service provider to open the lock; deriving service provider access data being necessary for the service provider agent to open the lock; transmitting the service provider access data to a service provider server, for storage by the service provider server; deleting the service provider access data from the access coordinator; receiving the service provider access data and a request to assign a service provider agent to open the lock; generating service agent access data; and transmitting the service agent access data to a service provider agent device associated with the service provider agent.
    Type: Application
    Filed: April 10, 2019
    Publication date: January 21, 2021
    Inventors: Stefan STRÖMBERG, Sona SINGH
  • Publication number: 20210019968
    Abstract: A method is provided for providing access to a physical space secured by a lock for provision of a service. The method comprises the steps of: receiving an approval signal from a service consumer device of the service consumer, the approval signal indicating that the service consumer allows a service provider agent of a service provider to open the lock; receiving, from the service provider device, a request to assign a service provider agent to open the lock; communicating with the service provider device to use a private key of a cryptographic key pair accessible to the service provider device, the private key being used to generate service agent access data that is specific for the service provider agent, to allow the service provider agent to open the lock; and transmitting the service agent access data to a service provider agent device associated with the service provider agent.
    Type: Application
    Filed: April 11, 2019
    Publication date: January 21, 2021
    Inventors: Stefan STRÖMBERG, Sona SINGH