Patent Applications Published on January 25, 2018
  • Publication number: 20180025505
    Abstract: An image processing device and a related depth estimation system are provided. The image processing device includes a receiving unit and a processing unit. The receiving unit is adapted to receive a captured image. The processing unit is electrically connected with the receiving unit to determine a first sub-image and a second sub-image on the captured image, to compute a relationship between a feature of the first sub-image and a corresponding feature of the second sub-image, and to compute a depth map for the captured image from the disparity indicated by the foresaid relationship. The feature of the first sub-image is correlated with the corresponding feature of the second sub-image, and a scene of the first sub-image at least partly overlaps a scene of the second sub-image.
    Type: Application
    Filed: January 17, 2017
    Publication date: January 25, 2018
    Inventors: Yu-Hao Huang, Tsu-Ming Liu
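    A minimal Python sketch of the disparity-to-depth step described in this abstract; the pinhole model and the focal-length and baseline values are illustrative assumptions, not details taken from the publication:
      import numpy as np

      def disparity_to_depth(disparity, focal_px=700.0, baseline_m=0.1):
          """Convert a disparity map (pixels) to a depth map (metres).

          Assumes rectified sub-images; depth = f * B / d for each pixel.
          Pixels with zero disparity are marked as invalid (depth = inf).
          """
          disparity = np.asarray(disparity, dtype=float)
          depth = np.full(disparity.shape, np.inf)
          valid = disparity > 0
          depth[valid] = focal_px * baseline_m / disparity[valid]
          return depth

      # Example: a 2x3 disparity map computed from matched features.
      print(disparity_to_depth([[10.0, 20.0, 0.0], [5.0, 8.0, 40.0]]))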
  • Publication number: 20180025506
    Abstract: Techniques are disclosed for performing avatar-based video encoding. In some embodiments, a video recording of an individual may be encoded utilizing an avatar that is driven by the facial expression(s) of the individual. In some such cases, the resultant avatar animation may accurately mimic facial expression(s) of the recorded individual. Some embodiments can be used, for example, in video sharing via social media and networking websites. Some embodiments can be used, for example, in video-based communications (e.g., peer-to-peer video calls; videoconferencing). In some instances, use of the disclosed techniques may help to reduce communications bandwidth use, preserve the individual's anonymity, and/or provide enhanced entertainment value (e.g., amusement) for the individual.
    Type: Application
    Filed: March 6, 2017
    Publication date: January 25, 2018
    Applicant: INTEL CORPORATION
    Inventors: Wenlong Li, Yangzhou Du, Xiaofeng Tong
  • Publication number: 20180025507
    Abstract: Methods and systems for developing image processing in a vehicle are described. In an example, a system, a tool, or a method may be used to determine the effect of changing parameters for processing the image data from a vehicle camera without actually processing the image. The image may be processed after the parameters reach a threshold of minimum requirements. After the image is approved, the parameters may be stored and transmitted to a separate system to be integrated into head unit instructions of a vehicle or loaded into head unit memory in a vehicle. The vehicle may display a processed image on a vehicle display. Vehicle processing circuitry may perform the image processing for the vehicle. In an example, the image processing that prepares an image for display occurs in the head unit of the vehicle, which may be positioned away from the camera itself.
    Type: Application
    Filed: July 20, 2017
    Publication date: January 25, 2018
    Inventors: Brian HUFNAGEL, Damian EPPEL, Przemyslaw SZEWCZYK
  • Publication number: 20180025508
    Abstract: Provided is an apparatus for generating an around view. The apparatus includes a capture unit configured to capture images in front of, behind, to the left of, and to the right of a vehicle using cameras; a mask generation unit configured to set, in each captured image, a region within a predetermined distance from the vehicle as a mask region; a feature point extraction unit configured to extract ground feature points from the mask region of each of the captured images; a camera attitude angle estimation unit configured to generate a rotation matrix including a rotation angle of the camera using the extracted feature points; and an around view generation unit configured to rotationally convert the captured images to a top-view image using the rotation matrix.
    Type: Application
    Filed: June 29, 2017
    Publication date: January 25, 2018
    Applicant: HYUNDAI MOBIS CO., LTD.
    Inventor: Seong Soo LEE
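    A minimal Python sketch of the final, rotational conversion step: a pure-rotation homography H = K R K^-1 built from an estimated camera attitude angle maps source pixels toward a top view. The intrinsics and angles are illustrative assumptions; a full image warp could then be done with a routine such as OpenCV's warpPerspective:
      import numpy as np

      def attitude_homography(K, pitch, yaw=0.0, roll=0.0):
          """Pure-rotation homography H = K R K^-1 from attitude angles (radians)."""
          cx, sx = np.cos(pitch), np.sin(pitch)
          cy, sy = np.cos(yaw), np.sin(yaw)
          cz, sz = np.cos(roll), np.sin(roll)
          rx = np.array([[1, 0, 0], [0, cx, -sx], [0, sx, cx]])
          ry = np.array([[cy, 0, sy], [0, 1, 0], [-sy, 0, cy]])
          rz = np.array([[cz, -sz, 0], [sz, cz, 0], [0, 0, 1]])
          return K @ (rz @ ry @ rx) @ np.linalg.inv(K)

      def warp_pixel(H, x, y):
          """Map one source pixel into the top-view image via the homography."""
          p = H @ np.array([x, y, 1.0])
          return p[0] / p[2], p[1] / p[2]

      # Example with hypothetical intrinsics and a 30-degree downward pitch.
      K = np.array([[500.0, 0.0, 320.0], [0.0, 500.0, 240.0], [0.0, 0.0, 1.0]])
      H = attitude_homography(K, pitch=np.deg2rad(-30.0))
      print(warp_pixel(H, 320.0, 400.0))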
  • Publication number: 20180025509
    Abstract: An image processing method for processing an image includes determining, selecting, and replacing. It is determined whether a portion, including a plurality of pixels and darker than a threshold in the image, is subject to a color replacement. One predetermined color from among a plurality of predetermined colors is selected, based on a color of at least a part of the plurality of pixels. Based on the determination, a color of the portion is replaced with the selected one predetermined color.
    Type: Application
    Filed: July 12, 2017
    Publication date: January 25, 2018
    Inventor: Koya Shimamura
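    A minimal Python sketch of the replacement logic described in this abstract; the luminance threshold, the small palette of predetermined colors, and the use of the dark portion's mean color are illustrative assumptions:
      import numpy as np

      PALETTE = np.array([[0, 0, 0], [0, 0, 255], [255, 0, 0]], dtype=float)  # black, blue, red

      def replace_dark_portion(image_rgb, threshold=60):
          """Replace pixels darker than `threshold` with the closest palette color.

          The replacement color is selected from the mean color of the dark portion.
          """
          img = np.asarray(image_rgb, dtype=float)
          luminance = img.mean(axis=2)
          mask = luminance < threshold                     # the "portion" to replace
          if not mask.any():
              return img.astype(np.uint8)                  # nothing to replace
          mean_color = img[mask].mean(axis=0)              # color of the dark pixels
          distances = np.linalg.norm(PALETTE - mean_color, axis=1)
          chosen = PALETTE[distances.argmin()]             # one predetermined color
          out = img.copy()
          out[mask] = chosen
          return out.astype(np.uint8)

      # Example on a tiny synthetic image with a dark, bluish patch.
      demo = np.full((4, 4, 3), 200, dtype=np.uint8)
      demo[:2, :2] = (10, 10, 200)
      print(replace_dark_portion(demo, threshold=100)[0, 0])   # -> [  0   0 255]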
  • Publication number: 20180025510
    Abstract: A system is provided for constructing images representing a 4DCT sequence of an object from a plurality of projections taken from a plurality of angles with respect to the object and at a plurality of times, first portions of the object being less static than second portions of the object.
    Type: Application
    Filed: February 18, 2016
    Publication date: January 25, 2018
    Applicant: University of Florida Research Foundation, Incorporated
    Inventors: Yunmei Chen, Hao Zhang, Chunjoo Park, Bo Lu
  • Publication number: 20180025511
    Abstract: Disclosed is an apparatus for computed tomography (CT) image data reconstruction based on motion compensation. The apparatus includes a storage configured to store projection radiographs and an image processor configured to obtain pairs of opposite projection radiographs from the projection radiographs and to compensate for movement of a radiography subject based on a correlation between the projection radiographs of each pair of opposite projection radiographs.
    Type: Application
    Filed: July 20, 2017
    Publication date: January 25, 2018
    Applicants: VATECH Co., Ltd., VATECH EWOO Holdings Co., Ltd., Korea Advanced Institute of Science and Technology
    Inventors: Jong chul YE, Ja wook GU, Tae Woo KIM, Sung Il CHOI, Woong BAE
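    A minimal Python sketch of the underlying idea: for a static subject under parallel-beam geometry a projection and its opposite view mirror each other, so any residual shift found by cross-correlation can be attributed to subject motion and compensated. The 1-D integer-shift model is an illustrative simplification:
      import numpy as np

      def estimate_shift(proj, opposite):
          """Estimate how far `proj` is displaced relative to the mirror of `opposite`.

          For an ideal static subject, `proj` equals the mirror image of `opposite`;
          the extra shift found at the cross-correlation peak is attributed to motion.
          """
          a = proj - proj.mean()
          b = opposite[::-1] - opposite.mean()
          corr = np.correlate(a, b, mode="full")
          return corr.argmax() - (len(a) - 1)      # offset of the correlation peak

      def compensate(proj, shift):
          """Shift a projection back by the estimated motion (simple circular roll)."""
          return np.roll(proj, -shift)

      # Example: the opposite view is mirrored and shifted by 3 detector bins.
      p0 = np.exp(-0.5 * ((np.arange(64) - 30) / 4.0) ** 2)   # static profile
      p180 = np.roll(p0[::-1], 3)                              # mirrored + motion
      s = estimate_shift(p180, p0)
      print(s, np.allclose(compensate(p180, s), p0[::-1]))     # -> 3 True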
  • Publication number: 20180025512
    Abstract: A method for segmenting a medical image is disclosed. The method includes acquiring an MR image and PET data during a scan of an object, and identifying an air/bone ambiguous region in the MR image, the air/bone ambiguous region including air voxels and bone voxels undistinguished from each other. The method also includes assigning attenuation coefficients to the voxels of a plurality of regions of the MR image and generating an attenuation map. The method further includes iteratively reconstructing the PET data and the attenuation map to generate a PET image and an estimated attenuation map. The method further includes reassigning attenuation coefficients to the voxels of the air/bone ambiguous region based on the estimated attenuation map, and distinguishing the bone voxels and air voxels in the air/bone ambiguous region.
    Type: Application
    Filed: July 20, 2016
    Publication date: January 25, 2018
    Applicant: SHANGHAI UNITED IMAGING HEALTHCARE CO., LTD.
    Inventors: Wentao ZHU, Tao FENG, Hongdi LI
  • Publication number: 20180025513
    Abstract: An image processing element 9 of the present invention includes a weighting factor setting element 9a that sets weighting factors wnn and wbi for a weighted addition combining a nearest neighbor interpolation and a bilinear interpolation, based on the absolute value |Ibi - Inn| of the difference between the pixel value Inn acquired by the nearest neighbor interpolation and the pixel value Ibi acquired by the bilinear interpolation; and a weight addition element 9b that implements the weighted addition based on the set weighting factors wnn and wbi. A reconstructed image can be acquired by arranging the backprojection pixel value Inew at every pixel following the weighting.
    Type: Application
    Filed: January 9, 2015
    Publication date: January 25, 2018
    Inventors: Tommonori SAKIMOTO, Kazuyoshi NISHINO
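    A minimal per-pixel Python sketch of the weighting described in this abstract: the nearest-neighbor value Inn and the bilinear value Ibi are blended with weights driven by |Ibi - Inn|. The particular weighting curve below is an illustrative choice, not the one defined in the publication:
      import numpy as np

      def sample_nn(img, y, x):
          """Nearest-neighbor sample at a fractional coordinate (clamped)."""
          iy = min(int(round(y)), img.shape[0] - 1)
          ix = min(int(round(x)), img.shape[1] - 1)
          return img[iy, ix]

      def sample_bilinear(img, y, x):
          """Bilinear sample at a fractional coordinate."""
          y0, x0 = int(np.floor(y)), int(np.floor(x))
          y1, x1 = min(y0 + 1, img.shape[0] - 1), min(x0 + 1, img.shape[1] - 1)
          dy, dx = y - y0, x - x0
          top = img[y0, x0] * (1 - dx) + img[y0, x1] * dx
          bot = img[y1, x0] * (1 - dx) + img[y1, x1] * dx
          return top * (1 - dy) + bot * dy

      def weighted_backprojection_value(img, y, x, scale=50.0):
          """Blend Inn and Ibi; a large |Ibi - Inn| favors nearest neighbor."""
          i_nn = float(sample_nn(img, y, x))
          i_bi = float(sample_bilinear(img, y, x))
          w_nn = min(abs(i_bi - i_nn) / scale, 1.0)   # illustrative weighting curve
          w_bi = 1.0 - w_nn
          return w_nn * i_nn + w_bi * i_bi            # Inew arranged at each pixel

      img = np.array([[0.0, 100.0], [0.0, 100.0]])
      print(weighted_backprojection_value(img, 0.5, 0.5))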
  • Publication number: 20180025514
    Abstract: Systems and methods for MBIR reconstruction utilizing a super-voxel approach are provided. A super-voxel algorithm is an optimization algorithm that, as with ICD, produces rapid and geometrically agnostic convergence to the MBIR reconstruction by processing super-voxels which comprise a plurality of voxels whose corresponding memory entries substantially overlap. The voxels in the super-voxel may also be localized or adjacent to one another in the image. In addition, the super-voxel algorithm straightens the memory in the “sinogram” that contains the measured CT data so that both data and intermediate results of the computation can be efficiently accessed from high-speed memory and cache on a computer, GPU, or other high-performance computing hardware. Therefore, each iteration of the super-voxel algorithm runs much faster by more efficiently using the computing hardware.
    Type: Application
    Filed: February 16, 2016
    Publication date: January 25, 2018
    Applicants: Purdue Research Foundation, HIGH PERFORMANCE IMAGING, INC.
    Inventors: Charles Addison Bouman, Samuel Pratt Midkiff, Sherman Jordan Kisner, Xiao Wang
  • Publication number: 20180025515
    Abstract: A method includes generating material landmark images in a low and high energy image domain. A material landmark image estimates the change of the value of an image pixel caused by adding a small amount of a known material to the pixel. The method further includes generating an air values image in the low and high energy image domain. The air values image estimates a value for each image pixel where the value of a pixel is replaced by a value representing air. The method further includes extracting, from de-noised low and high images generated from the low and high line integrals, a material composition of each image pixel based on the material landmark images and the air values image. The method further includes generating a signal indicative of the extracted material composition.
    Type: Application
    Filed: February 24, 2016
    Publication date: January 25, 2018
    Inventor: Gilad SHECHTER
  • Publication number: 20180025516
    Abstract: The present disclosure discloses an intelligent interactive interface, comprising: an interface underlayer drawn from a trajectory formed by measurement; a plurality of identifications disposed on the interface underlayer, each of which corresponds to an external device, the information of the external device being uploaded in real time, displayed on the interface underlayer, and storable on a server, with a mapping relationship established between the information of the external device and the corresponding identification of the respective external device; and a terminal apparatus that is connected to the external devices, displays the interface underlayer and the identifications, and controls and/or exchanges information with the external devices; wherein the information of the external devices is displayed in real time on the terminal apparatus through the identifications, and the identifications can be added or deleted in real time.
    Type: Application
    Filed: January 8, 2016
    Publication date: January 25, 2018
    Inventors: Yufei Wei, Xin Shi, David Xing
  • Publication number: 20180025517
    Abstract: Adding new nodes to a graph diagram. A set of one or more new nodes is identified from a graph to be added to an existing graph diagram. A set of one or more anchor candidate nodes are identified in the graph that are coupled to the nodes in the set of one or more new nodes. One of the nodes in the set of one or more anchor candidate nodes is selected as an anchor node. An automatic graph diagram layout of the anchor node and new nodes that are to be coupled to the anchor node is performed to create a disjoint graph diagram. A spatial offset from the anchor node to each of the new nodes coupled to the anchor node in the disjoint graph diagram is identified. Each of the new nodes is added to the existing graph diagram while maintaining the identified spatial offsets.
    Type: Application
    Filed: October 2, 2017
    Publication date: January 25, 2018
    Inventors: Kevin David James GREALISH, Frederick Edward WEBER, III, Yin KEI
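    A minimal Python sketch of the placement steps described in this abstract: lay out the anchor and the new nodes as a disjoint diagram, record each new node's spatial offset from the anchor, and re-apply those offsets at the anchor's position in the existing diagram. The circular disjoint layout is an illustrative assumption:
      import math

      def add_nodes_via_anchor(existing_positions, anchor, new_nodes, radius=1.0):
          """Place `new_nodes` around `anchor` in the existing diagram.

          1. Lay out anchor + new nodes as a disjoint diagram (anchor at origin,
             new nodes on a circle around it).
          2. Record each new node's spatial offset from the anchor.
          3. Add each new node at anchor_position + offset in the existing diagram.
          """
          disjoint = {anchor: (0.0, 0.0)}
          for i, node in enumerate(new_nodes):
              angle = 2 * math.pi * i / max(len(new_nodes), 1)
              disjoint[node] = (radius * math.cos(angle), radius * math.sin(angle))

          ax, ay = existing_positions[anchor]
          for node in new_nodes:
              ox, oy = disjoint[node]          # offset from anchor in disjoint layout
              existing_positions[node] = (ax + ox, ay + oy)
          return existing_positions

      # Example: 'B' is the anchor node already in the diagram.
      diagram = {"A": (0.0, 0.0), "B": (4.0, 2.0)}
      print(add_nodes_via_anchor(diagram, "B", ["C", "D"]))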
  • Publication number: 20180025518
    Abstract: An unmanned air vehicle system includes an unmanned air vehicle including a camera unit, a pilot terminal capable of piloting the unmanned air vehicle, and a display that displays images captured by the camera unit. The camera unit includes a first camera that captures a first image and a second camera that captures a second image. An angle of view of the first camera is narrower than an angle of view of the second camera.
    Type: Application
    Filed: February 6, 2017
    Publication date: January 25, 2018
    Inventor: Satoshi HORIE
  • Publication number: 20180025519
    Abstract: [Problem] If an HMD is attached in a different direction than a predetermined direction, usage of the HMD may be difficult in some cases. The problem to be addressed is to provide an information processing device which is mounted on the head of the human body and which is capable of maintaining usability regardless of the attachment direction. [Solution] An information processing device mounted on a head of a human body, including: an operation detection unit that detects an operation performed with respect to the information processing device; and a control unit that, on a basis of an attitude of the information processing device, decides an action to be conducted in correspondence with an operation detected by the operation detection unit.
    Type: Application
    Filed: October 21, 2015
    Publication date: January 25, 2018
    Inventors: TETSUYA ASAYAMA, YOSHINORI OOTA
  • Publication number: 20180025520
    Abstract: Disclosed are a binocular see-through AR head-mounted device and an information displaying method thereof. A sight mapping relationship is preset in the head-mounted device, human eye spatial sight information data of a user is tracked and calculated by a sight tracking system, and virtual information that needs to be displayed is displayed on the left and right lines of sight of the human eyes on the basis of a binocular see-through AR head-mounted device virtual image imaging principle and a human eye binocular vision principle, thus implementing accurate overlap of the virtual information onto the proximity of the position of the fixation point of the human eyes, allowing a high degree of integration of the virtual information with the environment, and implementing augmented reality in the true sense. The present invention provides a simple solution, requires only the sight tracking system to complete the process, obviates the need for excessive hardware facilities, and is inexpensive.
    Type: Application
    Filed: August 7, 2015
    Publication date: January 25, 2018
    Applicant: CHENGDU IDEALSEE TECHNOLOGY CO., LTD.
    Inventors: Qinhua Huang, Haitao Song, Xinyu Li
  • Publication number: 20180025521
    Abstract: In example implementations, an amount of ambient light in a real world image is measured. An analysis of the virtual image is performed and a user preference for the virtual image is determined. A brightness level of the real world image is modulated based upon the amount of ambient light in the real world image that is measured. The virtual image is adjusted to optimize an appearance of the virtual image on a near eye display that is overlaid on the real world image based upon the analysis of the virtual image, the real world image that is modulated and the user preference for the virtual image.
    Type: Application
    Filed: January 30, 2015
    Publication date: January 25, 2018
    Applicant: ENT. SERVICES DEVELOPMENT CORPORATION LP
    Inventors: William J. ALLEN, Kas KASRAVI
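    A minimal Python sketch of the blending described in this abstract: the real-world image is dimmed as measured ambient light increases, and the virtual image gain follows a user preference. The linear gain rules and the lux scale are illustrative assumptions:
      import numpy as np

      def compose_near_eye_frame(real_img, virtual_img, ambient_lux, preference=0.5):
          """Blend a virtual image over a real-world image on a near-eye display.

          - The real-world image is dimmed as ambient light increases.
          - The virtual image gain follows a user preference in [0, 1].
          """
          real = np.asarray(real_img, dtype=float)
          virt = np.asarray(virtual_img, dtype=float)

          real_gain = np.clip(1.0 - ambient_lux / 10000.0, 0.3, 1.0)  # modulate brightness
          virt_gain = 0.5 + preference                                 # user preference
          out = real * real_gain + virt * virt_gain
          return np.clip(out, 0, 255).astype(np.uint8)

      # Example with small synthetic frames and a bright (5000 lux) scene.
      real = np.full((2, 2, 3), 120, dtype=np.uint8)
      virt = np.zeros((2, 2, 3), dtype=np.uint8)
      virt[0, 0] = (0, 80, 0)
      print(compose_near_eye_frame(real, virt, ambient_lux=5000)[0, 0])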
  • Publication number: 20180025522
    Abstract: A method for displaying location-specific content via a head-mounted display device includes: receiving, by the head-mounted display device, the location-specific content, wherein the location-specific content is related to a predetermined spatial position; determining, by the head-mounted display device, a distance between the head-mounted display device and the predetermined spatial position; and displaying, by the head-mounted display device, the location-specific content based on the determined distance between the head-mounted display device and the predetermined spatial position.
    Type: Application
    Filed: July 18, 2017
    Publication date: January 25, 2018
    Inventor: Wolfgang Wirths
  • Publication number: 20180025523
    Abstract: A picture synthesis method and apparatus, an instant messaging method and a picture synthesis server/device are disclosed. The method comprises: after at least two pictures to be synthesized are acquired, determining a visual center of each of the pictures to be synthesized; cutting the corresponding pictures to be synthesized in accordance with the visual center of each of the pictures to be synthesized and a first set specification; and synthesizing all of the cut pictures to be synthesized to obtain a synthesized picture. Due to full consideration of the visual center of each of the pictures to be synthesized, a visual effect of the finally obtained synthesized picture is ensured, thereby improving the user experience.
    Type: Application
    Filed: September 29, 2017
    Publication date: January 25, 2018
    Inventor: Puchao FENG
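    A minimal Python sketch of the cutting and synthesis steps described in this abstract: each picture is cropped to a common specification with its visual center kept as close to the middle of the crop as the picture allows, and the crops are stitched together. The given centers and the horizontal stitching are illustrative assumptions:
      import numpy as np

      def crop_around_center(img, center_yx, out_h, out_w):
          """Crop `img` to (out_h, out_w), centred on `center_yx` where possible."""
          h, w = img.shape[:2]
          cy, cx = center_yx
          top = int(np.clip(cy - out_h // 2, 0, h - out_h))   # keep the crop inside the image
          left = int(np.clip(cx - out_w // 2, 0, w - out_w))
          return img[top:top + out_h, left:left + out_w]

      def synthesize(pictures, centers, out_h=100, out_w=100):
          """Cut every picture around its visual center and stitch the crops."""
          crops = [crop_around_center(p, c, out_h, out_w) for p, c in zip(pictures, centers)]
          return np.hstack(crops)

      # Example: two synthetic pictures with hypothetical visual centers.
      a = np.zeros((200, 300, 3), dtype=np.uint8)
      b = np.zeros((150, 400, 3), dtype=np.uint8)
      print(synthesize([a, b], [(60, 250), (75, 40)]).shape)   # -> (100, 200, 3)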
  • Publication number: 20180025524
    Abstract: A unified image processing algorithm results in better post-processing quality for combined images that are made up of multiple single-capture images. To ensure that each single-capture image is processed in the context of the entire combined image, the combined image is analyzed to determine portions of the image (referred to as "zones") that should be processed with the same parameters for various image processing algorithms. These zones may be determined based on the content of the combined image. Alternatively, these zones may be determined based on the position of each single-capture image with respect to the entire combined image or the other single-capture images. Once zones and their corresponding image processing parameters are determined for the combined image, they are translated to corresponding zones in each of the single-capture images. Finally, the image processing algorithms are applied to each of the single-capture images using the zone-specified parameters.
    Type: Application
    Filed: October 4, 2017
    Publication date: January 25, 2018
    Inventors: Balineedu Chowdary Adsumilli, Timothy MacMillan
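    A minimal Python sketch of the zone idea described in this abstract: parameters are chosen once per zone of the combined image and then applied to the single-capture image that falls in each zone. The position-based vertical zones and the gamma adjustment are illustrative assumptions:
      import numpy as np

      def zone_parameters(combined, n_zones=2):
          """Split the combined image into vertical zones and pick a gamma per zone."""
          h, w = combined.shape[:2]
          params = []
          for z in range(n_zones):
              zone = combined[:, z * w // n_zones:(z + 1) * w // n_zones]
              mean = zone.mean() / 255.0
              gamma = 0.7 if mean < 0.4 else 1.0        # brighten dark zones
              params.append(gamma)
          return params

      def apply_to_single_captures(captures, params):
          """Apply each zone's gamma to the single capture that falls in that zone."""
          out = []
          for img, gamma in zip(captures, params):
              norm = np.asarray(img, dtype=float) / 255.0
              out.append(np.clip(255.0 * norm ** gamma, 0, 255).astype(np.uint8))
          return out

      # Example: a combined image built from two single captures.
      left = np.full((4, 4, 3), 40, dtype=np.uint8)     # dark capture
      right = np.full((4, 4, 3), 200, dtype=np.uint8)   # bright capture
      combined = np.hstack([left, right])
      processed = apply_to_single_captures([left, right], zone_parameters(combined))
      print(processed[0][0, 0], processed[1][0, 0])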
  • Publication number: 20180025525
    Abstract: Techniques for animating a non-rigid object in a computer graphics environment. A three-dimensional (3D) curve rigging element representing the non-rigid object is defined, the 3D curve rigging element comprising a plurality of knot primitives. One or more defined values are received for an animation control attribute of a first knot primitive. One or more values are generated for a second animation control attribute of a second knot primitive, based on a plurality of animation control attributes of a neighboring knot primitive. An animation is then rendered using the 3D curve rigging element. More specifically, the one or more defined values for the first attribute of the first knot primitive and the generated values for the second attribute of the second knot primitive are used to generate the animation. The rendered animation is output for display.
    Type: Application
    Filed: September 30, 2016
    Publication date: January 25, 2018
    Inventors: Mark C. HESSLER, Jeremie TALBOT, Mark PIRETTI, Kevin A. SINGLETON
  • Publication number: 20180025526
    Abstract: A method is described comprising: applying a random pattern to specified regions of an object; tracking the movement of the random pattern during a motion capture session; and generating motion data representing the movement of the object using the tracked movement of the random pattern.
    Type: Application
    Filed: September 22, 2017
    Publication date: January 25, 2018
    Inventors: Timothy Cotter, Stephen G. Perlman, John Speck, Roger van der Laan, Kenneth A. Pearce, Greg LaSalle
  • Publication number: 20180025527
    Abstract: A skin deformation system for use in computer animation is disclosed. The skin deformation system accesses the skeleton structure of a computer generated character, and accesses a user's identification of features of the skeleton structure that may affect a skin deformation. The system also accesses the user's identification of a weighting strategy. Using the identified weighting strategy and identified features of the skeleton structure, the skin deformation system determines the degree to which each feature identified by the user may influence the deformation of a skin of the computer generated character. The skin deformation system may incorporate secondary operations including bulge, slide, scale and twist into the deformation of a skin. Information relating to a deformed skin may be stored by the skin deformation system so that the information may be used to produce a visual image for a viewer.
    Type: Application
    Filed: October 2, 2017
    Publication date: January 25, 2018
    Applicant: DreamWorks Animation L.L.C.
    Inventors: Paul Carmen DILORENZO, Matthew Christopher GONG, Arthur D. GREGORY
  • Publication number: 20180025528
    Abstract: A media playing device detects, before an event occurs, an event for an object included in a web document, groups objects by event as if the event were actually to occur, and applies the event to other objects that temporarily replace the grouped objects. Thus, the amount of resources required for image rendering can be significantly reduced.
    Type: Application
    Filed: July 25, 2017
    Publication date: January 25, 2018
    Inventors: Sung Jae HAN, Seong In YUNE, Kang Tae LEE
  • Publication number: 20180025529
    Abstract: A method for color texture imaging of teeth with a monochrome sensor array obtains a 3-D mesh representing a surface contour image according to image data from views of the teeth. For each view, recorded image data generates sets of at least three monochromatic shading images. Each set of the monochromatic shading images is combined to generate a 2-D color shading image corresponding to one of the views. Each polygonal surface in the mesh is assigned to one of a subset of the views. Polygonal surfaces assigned to the same view are grouped into a texture fragment. Image coordinates for the 3-D mesh surfaces in each texture fragment are determined from projection of vertices onto the view associated with the texture fragment. The 3-D mesh is rendered with texture values in the 2-D color shading images corresponding to each texture fragment to generate a color texture surface contour image.
    Type: Application
    Filed: September 11, 2015
    Publication date: January 25, 2018
    Inventors: Yingqian Wu, Victor C. Wong, Qinran Chen, Zhaohua Liu
  • Publication number: 20180025530
    Abstract: A system and method for geometric warping correction in projection mapping is provided. A lower resolution mesh is applied to a mesh model, at least in a region of the mesh model misaligned with a corresponding region of a real-world object. One or more points of the lower resolution mesh are moved. In response, one or more corresponding points of the mesh model are moved to increase alignment between the region of the mesh model and the corresponding region of the real-world object. An updated mesh model is stored in a memory, and one or more projectors are controlled to projection map images corresponding to the updated mesh model onto the real-world object.
    Type: Application
    Filed: July 21, 2016
    Publication date: January 25, 2018
    Inventors: Roy ANTHONY, Kevin MOULE, Derek SCOTT, Nick WASILKA, Maxwell ELENDT
  • Publication number: 20180025531
    Abstract: A method of providing a virtual experience to a user includes identifying a plurality of virtual objects. The method further includes detecting a position of a part of the user's body other than the user's head. The method further includes detecting a reference line of sight of the user. The method further includes setting an extension direction for a first virtual object of the plurality of virtual objects based on a direction of the reference line of sight. The method further includes setting a region for a first virtual object of the plurality of virtual objects, wherein the region comprises a part extending in the extension direction. The method further includes determining whether the first virtual object and a virtual representation of the part of the body have touched based on a positional relationship between the region and a position of the virtual representation of the part of the body.
    Type: Application
    Filed: July 19, 2017
    Publication date: January 25, 2018
    Inventor: Shuhei TERAHATA
  • Publication number: 20180025532
    Abstract: An electronic system and a method for creating an image include a display arranged to display a plurality of two-dimensional representations within a three-dimensional space, wherein the plurality of two-dimensional representations are arranged to individually represent a portion of a three-dimensional object within the three-dimensional space; and an imager arranged to capture the plurality of two-dimensional representations being displayed within the three-dimensional space; wherein the plurality of two-dimensional representations in a plurality of predefined positions are combined to form an image representative of the three-dimensional object within the three-dimensional space.
    Type: Application
    Filed: July 22, 2016
    Publication date: January 25, 2018
    Inventors: Miu Ling Lam, Yaozhun Huang, Sze Chun Tsang, Bin Chen
  • Publication number: 20180025533
    Abstract: An image processing apparatus includes an image acquisition unit acquiring first and second captured images from first and second points of view, respectively, an initial value acquisition unit acquiring initial values of respective clip positions to clip display images from the first and second captured images, a derivation unit deriving an amount of a first exterior region of a first display image outside a first region of the first captured image when the first display image is clipped based on the initial values, and deriving an amount of a second exterior region of a second display image outside a second region of the second captured image when the second display image is clipped based on the initial values, and a determination unit determining the respective clip positions to clip the display images from the first and second captured images based on the first and second amounts.
    Type: Application
    Filed: July 12, 2017
    Publication date: January 25, 2018
    Inventors: Masanori Sato, Masashi Nakagawa
  • Publication number: 20180025534
    Abstract: Techniques of displaying a virtual environment in an HMD involve generating a lighting scheme within a virtual environment configured to reveal a real object in a room in the virtual environment in response to a distance between a user in the room and the real object decreasing while the user is immersed in the virtual environment. Such a lighting scheme protects a user from injury resulting from collision with real objects in a room while immersed in a virtual environment.
    Type: Application
    Filed: July 20, 2017
    Publication date: January 25, 2018
    Inventors: Manuel Christian Clement, Thor Lewis, Stefan Welker
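    A minimal Python sketch of the proximity-driven reveal described in this abstract: lighting used to reveal a real object's proxy ramps up as the user-to-object distance falls below a threshold. The threshold and the linear ramp are illustrative assumptions:
      def reveal_intensity(distance_m, reveal_radius_m=1.5):
          """Lighting intensity in [0, 1] used to reveal a real object in the VR scene.

          0.0 when the user is farther than `reveal_radius_m` from the object,
          ramping linearly to 1.0 as the distance approaches zero.
          """
          if distance_m >= reveal_radius_m:
              return 0.0
          return 1.0 - distance_m / reveal_radius_m

      # Example: the user walks toward a real chair while wearing the HMD.
      for d in (2.0, 1.5, 1.0, 0.3):
          print(d, round(reveal_intensity(d), 2))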
  • Publication number: 20180025535
    Abstract: A computer-implemented method for rendering an image of a three-dimensional scene on an image plane by encoding at least a luminosity in the image plane by a luminosity function. The value of the luminosity can be computed at substantially each point of the image plane by using a set of stored input data describing the scene. The method includes constructing the luminosity function as equivalent to a first linear combination involving the functions of a first set of functions, and computing at least the values of the coefficients of the first linear combination by solving a first linear system obtained by using at least the functions of the first linear combination, at least a subset of a first subset of the image plane, and the luminosity at the points of said subset. The method further includes storing the values of the coefficients of the first linear combination and at least the information needed to associate each coefficient with the function multiplying said coefficient in the first linear combination.
    Type: Application
    Filed: July 24, 2017
    Publication date: January 25, 2018
    Applicants: Technische Universitat Berlin, University of Toronto
    Inventors: Christian Lessig, Eugene Fiume
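    A minimal Python sketch of the reconstruction idea described in this abstract: the luminosity over the image plane is written as a linear combination of basis functions and the coefficients are found by solving a linear system built from sample points and their luminosities. The Gaussian basis and the least-squares solve are illustrative assumptions:
      import numpy as np

      def fit_luminosity(points, luminosities, centers, sigma=0.2):
          """Solve for coefficients c so that L(x) is approximately sum_k c_k * phi_k(x).

          phi_k are Gaussian bumps placed at `centers`; the linear system
          A c = L is solved in the least-squares sense.
          """
          points = np.asarray(points, dtype=float)
          centers = np.asarray(centers, dtype=float)
          d2 = ((points[:, None, :] - centers[None, :, :]) ** 2).sum(axis=2)
          A = np.exp(-d2 / (2 * sigma ** 2))               # A[i, k] = phi_k(point_i)
          coeffs, *_ = np.linalg.lstsq(A, np.asarray(luminosities, float), rcond=None)
          return coeffs

      def evaluate(x, coeffs, centers, sigma=0.2):
          """Evaluate the fitted luminosity function at a point of the image plane."""
          x = np.asarray(x, dtype=float)
          centers = np.asarray(centers, dtype=float)
          phi = np.exp(-((x - centers) ** 2).sum(axis=1) / (2 * sigma ** 2))
          return float(phi @ coeffs)

      # Example: fit 3 basis functions to 5 sampled points on the image plane.
      pts = [(0.1, 0.1), (0.5, 0.5), (0.9, 0.9), (0.2, 0.8), (0.8, 0.2)]
      lum = [0.2, 1.0, 0.3, 0.5, 0.6]
      ctr = [(0.2, 0.2), (0.5, 0.5), (0.8, 0.8)]
      c = fit_luminosity(pts, lum, ctr)
      print(evaluate((0.5, 0.5), c, ctr))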
  • Publication number: 20180025536
    Abstract: The present disclosure concerns a methodology that allows a user to “orbit” around a model on a specific axis of rotation and view an orthographic floor plan of the model. A user may view and “walk through” the model while staying at a specific height above the ground with smooth transitions between orbiting, floor plan, and walking modes.
    Type: Application
    Filed: October 2, 2017
    Publication date: January 25, 2018
    Inventors: Matthew Bell, Michael Beebe
  • Publication number: 20180025537
    Abstract: Portable globes may be provided for viewing regions of interest in a Geographical Information System (GIS). A method for providing a portable globe for a GIS may include determining one or more selected regions corresponding to a geographical region of a master globe. The method may further include organizing geospatial data from the master globe based on the selected region and creating the portable globe based on the geospatial data. The portable globe may be smaller in data size than the master globe. The method may include transmitting the portable globe to a local device that may render the selected region at a higher resolution than the remainder of the portable globe in the GIS. A system for providing a portable globe may include a selection module, a fusion module and a transmitter. A system for updating a portable globe may include a packet bundler and a globe cutter.
    Type: Application
    Filed: July 31, 2017
    Publication date: January 25, 2018
    Inventors: Manas Ranjan Jagadev, Eli Dylan Lorimer, Bret Peterson, Vijay Raman, Mark Wheeler
  • Publication number: 20180025538
    Abstract: Systems and methods for displaying labels in conjunction with geographic imagery provided, for instance, by a geographic information system, such as a mapping service or a virtual globe application, are provided. Candidate positions for displaying labels in conjunction with geographic imagery can be determined based at least in part on a virtual camera viewpoint. The candidate positions can be associated with non-occluded points on three-dimensional models corresponding to the labels. Adjusted positions for labels can be determined from the plurality of candidate positions. The labels can be provided for display in conjunction with the geographic imagery at the adjusted positions.
    Type: Application
    Filed: October 2, 2017
    Publication date: January 25, 2018
    Inventor: Jonah Jones
  • Publication number: 20180025539
    Abstract: The present disclosure relates to a method, and an apparatus therefor, of applying a sublayer in which a layer is applied on the basis of a sewing line as a unit in making 3D clothing by computer simulation. A partial region within a pattern is designated by selecting a sewing line through a user interface, and a sublayer in which a layer is set on the basis of the sewing line is set.
    Type: Application
    Filed: April 26, 2017
    Publication date: January 25, 2018
    Applicant: CLO virtual fashion
    Inventor: Seung Woo OH
  • Publication number: 20180025540
    Abstract: A system for computer vision is disclosed. The system may comprise a processor and a non-transitory computer-readable storage medium coupled to the processor. The non-transitory computer-readable storage medium may store instructions that, when executed by the processor, cause the system to perform a method. The method may comprise obtaining first and second images of at least a portion of an object, extracting first and second 2D contours of the portion of the object respectively from the first and second images, matching one or more first points on the first 2D contour with one or more second points on the second 2D contour to obtain a plurality of matched contour points and a plurality of mismatched contour points, and reconstructing a shape of the portion of the object based at least in part on at least a portion of the matched contour points and at least a portion of the mismatched contour points.
    Type: Application
    Filed: July 14, 2017
    Publication date: January 25, 2018
    Inventors: Gengyu MA, Yuan WANG, Yue FEI
  • Publication number: 20180025541
    Abstract: The present invention relates to a high-accuracy automatic 3D modeling method for complex buildings, comprising the steps of: first transforming the complex building into a complex polygon by using the topological structure of polygons; then transforming the complex polygon into a set of seamlessly spliced triangles by means of a programmed algorithm; and thereby accomplishing high-accuracy automatic 3D modeling of buildings.
    Type: Application
    Filed: July 19, 2017
    Publication date: January 25, 2018
    Inventor: Hongyu Xie
  • Publication number: 20180025542
    Abstract: A method and system are provided for automatic generation and navigation of optimal views of facades of multi-dimensional building models based on where and how the original images were captured. The system and method allow for navigation and visualization of facades of individual or multiple building models in a multi-dimensional building model visualization system.
    Type: Application
    Filed: September 29, 2017
    Publication date: January 25, 2018
    Applicant: HOVER INC.
    Inventors: Manish Upendran, Adam J. Altman
  • Publication number: 20180025543
    Abstract: Systems and methods for constructing and saving files containing computer-generated image data with associated virtual camera location data during 3-D visualization of an object (e.g., an aircraft). The process tags computer-generated images with virtual camera location and settings information selected by the user while navigating a 3-D visualization of an object. The virtual camera location data in the saved image file can be used later as a way to return the viewpoint to the virtual camera location in the 3-D environment from where the image was taken. For example, these tagged images can later be drag-and-dropped onto the display screen while the 3-D visualization application is running to activate the process of retrieving and displaying a previously selected image. Multiple images can be loaded and then used to determine the relative viewpoint offset between images.
    Type: Application
    Filed: July 19, 2016
    Publication date: January 25, 2018
    Applicant: The Boeing Company
    Inventors: James J. Troy, Christopher D. Esposito, Vladimir Karakusevic
  • Publication number: 20180025544
    Abstract: The present invention relates to techniques for determining rendering information for virtual content within an augmented reality. The technique may comprise capturing an image comprising a graphical tag by an image capturing unit, wherein the graphical tag comprises one or more geometric objects, and the graphical tag representing coded information. Size reference information may then be obtained from the captured image and a distortion of a captured view of one of the geometric objects may then be determined. Thereafter, based on the size reference information and the distortion of the captured view, a relative position of the graphical tag to the image capturing unit may be determined. Based on the determined relative position, positioning information and scaling information for rendering the virtual content within an augmented reality relative to the graphical tag may then be determined.
    Type: Application
    Filed: July 22, 2016
    Publication date: January 25, 2018
    Inventor: Philipp A. SCHOELLER
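    A minimal Python sketch of the positioning step described in this abstract: with a size reference (the tag's known physical edge length), a focal length, and the tag's apparent size and location in the captured image, a rough relative position and rendering scale follow from a pinhole model. All numeric values are illustrative assumptions:
      def tag_relative_position(tag_px, tag_center_px, image_center_px,
                                tag_size_m=0.10, focal_px=800.0):
          """Estimate the tag's position relative to the camera (pinhole model).

          tag_px          apparent edge length of the tag in pixels
          tag_center_px   (x, y) of the tag center in the image
          image_center_px (x, y) of the principal point
          Returns (x, y, z) in metres and a render scale in pixels per metre.
          """
          z = focal_px * tag_size_m / tag_px                  # distance from camera
          x = (tag_center_px[0] - image_center_px[0]) * z / focal_px
          y = (tag_center_px[1] - image_center_px[1]) * z / focal_px
          scale_px_per_m = tag_px / tag_size_m                # for scaling virtual content
          return (x, y, z), scale_px_per_m

      # Example: a 10 cm tag that appears 100 px wide, 50 px right of center.
      pos, scale = tag_relative_position(100.0, (370, 240), (320, 240))
      print(pos, scale)   # -> (0.05, 0.0, 0.8) metres, 1000 px per metre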
  • Publication number: 20180025545
    Abstract: A method for creating visualized effect for data within a three-dimensional space is implemented by a processor executing instructions stored in a non-transitory computer-readable medium. The method includes processing data contained in a data space to retrieve at least one item contained therein; determining a data value of an information attribute of the at least one item; creating a virtual element according to the data value of the information attribute; and controlling a display device to project the virtual element onto a specific location in the three-dimensional space.
    Type: Application
    Filed: July 17, 2017
    Publication date: January 25, 2018
    Inventor: Pol-Lin Tai
  • Publication number: 20180025546
    Abstract: A system and method is provided for visualizing a volumetric image of an anatomical structure. Using a first view of the volumetric image showing a non-orthogonal cross-section of a surface of the anatomical structure, a local orientation of the surface within the volumetric image is determined, namely by analyzing the image data of the volumetric image. Having determined the local orientation of the surface, a second view is generated of the volumetric image, the second view being geometrically defined by a viewing plane intersecting the surface of the anatomical structure in the volumetric image orthogonally. Accordingly, the surface is shown in a sharper manner in the second view than would typically be the case in the first view. Advantageously, the user can manually define or correct a delineation of the outline of the anatomical structure in a more precise manner.
    Type: Application
    Filed: September 18, 2015
    Publication date: January 25, 2018
    Inventors: THOMAS BUELOW, DANIEL BYSTROV, RAFAEL WIEMKER, DOMINIK BENJAMIN KUTRA
  • Publication number: 20180025547
    Abstract: There is disclosed a method of creating a three-dimensional image comprising: establishing a mapping between a two-dimensional template and the three-dimensional image; applying a graphic to the two-dimensional template; receiving the two-dimensional template with the graphic applied; and creating the three-dimensional image based on the mapping and the applied graphic.
    Type: Application
    Filed: February 5, 2016
    Publication date: January 25, 2018
    Inventors: Benjamin ALUN-JONES, Hal WATTS, Kirsty EMERY, Greg BAKKER
  • Publication number: 20180025548
    Abstract: An image processing apparatus of the present invention includes an image obtaining unit obtaining first and second three-dimensional images, a deformation information obtaining unit obtaining deformation information between the two images, a cross-sectional image generating unit generating first and second cross-sectional images, a target position obtaining unit obtaining a target position in the first cross-sectional image, and a corresponding position obtaining unit obtaining a corresponding position in the second three-dimensional image which corresponds to the target position on the basis of the deformation information.
    Type: Application
    Filed: July 13, 2017
    Publication date: January 25, 2018
    Inventors: Takaaki Endo, Kiyohide Satoh
  • Publication number: 20180025549
    Abstract: Meters and meter covers comprising: a removable cover housing configured to accommodate the upper portion of the internal components of an existing meter, the cover housing engageable with the housing base of the existing meter to cover and enclose the internal components of the existing meter; a sensor affixed to the cover housing, the sensor configured to collect environmental information pertaining to the local external environment of the existing meter; a wireless radio affixed to the cover housing, the wireless radio configured to transmit the environmental information to the existing meter or to a remote server in communication with the existing meter; and a power unit affixed to the cover housing, the power unit supplying power to the sensor and the wireless radio.
    Type: Application
    Filed: September 14, 2017
    Publication date: January 25, 2018
    Inventors: David William KING, Alexander SCHWARZ, Stephen John HUNTER, Chad P. RANDALL
  • Publication number: 20180025550
    Abstract: A parking meter includes a housing, processor, memory, network interface, display screen, first camera facing outward from the first side of the housing, second camera facing outward from the housing towards a parking space, and a payment acceptor. The meter is configured to sense a vehicle's presence in the parking space, capture an identification of the vehicle, transmit the identification to a remote networked computer system, determine that a parking violation has occurred, transmit the notice to the remote computer system, accept payment of fines, transmit notice of fine payment to the remote computer system, reset the parking time period to zero upon the vehicle's exit from the parking space, and receive updated parking rate parameters by the processor from the remote computer system via the network interface.
    Type: Application
    Filed: September 25, 2017
    Publication date: January 25, 2018
    Inventors: Thomas G. Hudson, Joseph M. Caldwell, II, Richard C. Gage
  • Publication number: 20180025551
    Abstract: A system and method for acquiring and communicating toll data including a server having a processing device and a memory device operably connected to a toll database including toll related data related to toll usage. The server retrieves toll related data from the toll database at predetermined time intervals. The toll related data is acquired by a toll interrogation device from a toll identification device disposed on, and unique to, a particular vehicle passing through a tolling station. The processing device matches the retrieved toll data to a toll user responsible for the toll usage. The server electronically transmits the matched toll information to the toll user.
    Type: Application
    Filed: July 21, 2017
    Publication date: January 25, 2018
    Applicant: Highway Toll Administration, LLC
    Inventor: David Centner
  • Publication number: 20180025552
    Abstract: The physical dimensions of a parcel are determined by taking a photograph of the parcel with a smart phone camera. An object of known dimensions is included in the photographic image. An App installed on the smart phone calculates the parcel dimensions by reference to the object of known dimensions and displays the result.
    Type: Application
    Filed: July 20, 2017
    Publication date: January 25, 2018
    Inventor: Carlos E Cano
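    A minimal Python sketch of the measurement described in this abstract: the known width of the reference object gives a pixels-per-centimetre scale that converts the parcel's pixel extents into physical dimensions. The single flat-on view and the specific numbers are illustrative assumptions:
      def parcel_dimensions(ref_width_px, ref_width_cm, parcel_px):
          """Convert a parcel's pixel extents to centimetres using a reference object.

          ref_width_px  measured width of the reference object in the photo (pixels)
          ref_width_cm  known physical width of the reference object (cm)
          parcel_px     (length_px, width_px, height_px) of the parcel in the photo
          """
          px_per_cm = ref_width_px / ref_width_cm
          return tuple(round(p / px_per_cm, 1) for p in parcel_px)

      # Example: a credit card (8.56 cm wide) spans 214 px; the parcel spans
      # 900 x 600 x 450 px in the same image plane.
      print(parcel_dimensions(214.0, 8.56, (900, 600, 450)))   # -> (36.0, 24.0, 18.0)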
  • Publication number: 20180025553
    Abstract: A vehicle stores privacy settings that specify a plurality of data privacy protections to apply to vehicle communications over a network when the vehicle is in a stealth mode but not when the vehicle is in a normal mode. A telematics control unit of the vehicle indicates transition from the normal mode to the stealth mode responsive to connection of the vehicle to the network via an unknown service provider or responsive to vehicle entry to a predefined geofence area. The telematics control unit of the vehicle indicates transition from the stealth mode to the normal mode responsive to connection of the vehicle to the carrier to which the vehicle is subscribed for network service or vehicle exit from the geofence area.
    Type: Application
    Filed: July 22, 2016
    Publication date: January 25, 2018
    Inventors: Manpreet Singh BAJWA, Omar MAKKE, Perry Robinson MacNEILLE, Oleg Yurievitch GUSIKHIN
  • Publication number: 20180025554
    Abstract: In examples provided herein, a system in a vehicle comprises a processor and a memory including instructions executable by the processor to aggregate and transmit to a context-aware platform (CAP) diagnostic data for a vehicle; receive from the CAP responsive information based on analysis of the diagnostic data; and cause the responsive information to be audibly provided to a driver of the vehicle.
    Type: Application
    Filed: January 30, 2015
    Publication date: January 25, 2018
    Applicant: ENT. SERVICES DEVELOPMENT CORPORATION LP
    Inventors: Jonathan GIBSON, Shivaprasad VENKATRAMAN, Joseph MILLER, Clifford A. WILKE