Patents Examined by Frank S Chen
-
Patent number: 11399930
Abstract: Systems and methods for psycho-signal processing. According to an aspect, a method includes receiving a visual representation of a subject. The method also includes performing a structured motion operation on the received visual representation to generate a modified visual representation of the subject. The method further includes presenting, via a user interface, the modified visual representation.
Type: Grant
Filed: January 23, 2018
Date of Patent: August 2, 2022
Assignee: Duke University
Inventor: Sina Farsiu
-
Patent number: 11397320
Abstract: An information processing apparatus includes a detector and a command unit. The detector detects movement of a user using a display device based on an image photographed by the display device, which displays a virtual-space image in such a manner as to be overlapped with a real space and which has a photographing function. The command unit commands the display device to display, as the virtual-space image, relevant information related to input information input to an input target at a position near the input target, based on the detected movement of the user.
Type: Grant
Filed: September 13, 2018
Date of Patent: July 26, 2022
Assignee: FUJIFILM Business Innovation Corp.
Inventors: Yusuke Yamaura, Seiya Inagi, Kazunari Hashimoto, Hidetaka Izumo, Tadaaki Sato, Teppei Aoki, Daisuke Yasuoka, Hiroshi Umemoto
-
Patent number: 11398075
Abstract: A method includes determining a mesh representing an environment based, at least in part, on a point cloud, the mesh comprising a plurality of triangles each having a normal vector; acquiring one or more images of the environment, wherein each of the one or more images is attributed with a view position and a view angle; and applying the one or more images to the mesh.
Type: Grant
Filed: July 15, 2020
Date of Patent: July 26, 2022
Assignee: Kaarta, Inc.
Inventors: Ji Zhang, Ethan Abramson, Brian Boyle, Steven Huber
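The per-triangle normal vectors in an abstract like this are typically what let each captured image be applied only to the faces it actually saw. Below is a minimal sketch of that idea, not the patented method; the helper name and the unit-vector conventions are assumptions:

```python
def best_image_for_triangle(normal, views):
    """Pick the view whose direction is most opposed to the triangle
    normal, i.e. the camera that saw the face most head-on.
    `normal` is a unit vector; each view is (image_id, view_dir),
    with view_dir a unit vector pointing from camera into the scene."""
    def facing(view_dir):
        # dot(normal, view_dir) is -1 when the camera looks straight
        # at the front of the triangle, +1 when it sees the back.
        return sum(n * d for n, d in zip(normal, view_dir))

    # Keep only views that see the front of the face.
    visible = [(vid, d) for vid, d in views if facing(d) < 0.0]
    if not visible:
        return None  # triangle faces away from every camera
    # Most negative dot product = most head-on view.
    return min(visible, key=lambda v: facing(v[1]))[0]
```

Faces with no front-facing view are left untextured (`None`); a fuller version would also test occlusion against the view position.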
-
Patent number: 11393200
Abstract: A camera captures video imagery depicting a digitally-watermarked object. A reference signal in the watermark is used to discern the pose of the object relative to the camera, and this pose is used in affine-transforming and positioning a graphic on the imagery as an augmented reality overlay. Feature points are also discerned from the captured imagery, or recalled from a database indexed by the watermark. As the camera moves relative to the object, the augmented reality overlay tracks the changing object depiction, using these feature points. When feature point-based tracking fails, the watermark is again processed to determine pose, and the overlay presentation is updated accordingly. In another arrangement, feature points are extracted from images of supermarket objects captured by multiple users, and are compiled in a database in association with watermark data identifying the objects, serving as a crowd-sourced repository of feature point data.
Type: Grant
Filed: August 10, 2020
Date of Patent: July 19, 2022
Assignee: Digimarc Corporation
Inventor: Emma C. Sinclair
-
Patent number: 11393155
Abstract: In an image processing system, an image insertion is to be included onto, or relative to, first and second frames, each depicting images of a set of objects of a geometric model. A point association is determined for a depicted object that appears in both the first frame and the second frame, representing reference coordinates in a virtual scene space of a first location on the depicted object, independent of at least one position change, and a mapping of a first image location in the first image to where the first location appears in the first image. A corresponding location in the second image is determined based on where the first location on the depicted object appears according to the reference coordinates in the virtual scene space and a second image location on the second image where the first location appears in the second image.
Type: Grant
Filed: October 11, 2021
Date of Patent: July 19, 2022
Assignee: Unity Technologies SF
Inventor: Peter M. Hillman
-
Patent number: 11386610
Abstract: Disclosed is a system and method for rendering point clouds via a hybrid data point and construct visualization. The system receives a point cloud of a three-dimensional ("3D") environment, and differentiates a first set of the point cloud data points from a second set of the data points based on a position of each data point relative to a specified render position. The system generates a first visualization from values of each of the first set of data points, and a second visualization from values of a set of constructs that replace the second set of data points. Each construct has a polygonal shape and a singular set of values defined from the values of two or more of the second set of data points. The system presents a final render of the 3D environment from the render position by combining the first visualization with the second visualization.
Type: Grant
Filed: February 28, 2022
Date of Patent: July 12, 2022
Assignee: Illuscio, Inc.
Inventors: Joseph Bogacz, Robert Monaghan
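The near/far differentiation that drives this hybrid visualization can be sketched as a distance test plus a value-averaging construct builder. This is an illustrative reading of the abstract, not the patented implementation; the dictionary layout and the `"quad"` shape are assumptions:

```python
def split_by_distance(points, render_pos, threshold):
    """Partition point-cloud data points into a near set (rendered
    individually) and a far set (candidates for replacement by
    polygonal constructs), based on distance to the render position."""
    near, far = [], []
    for p in points:
        d2 = sum((a - b) ** 2 for a, b in zip(p["pos"], render_pos))
        (near if d2 <= threshold ** 2 else far).append(p)
    return near, far

def make_construct(cluster):
    """Collapse a cluster of far points into one construct carrying a
    singular set of values averaged from the member points."""
    n = len(cluster)
    pos = tuple(sum(p["pos"][i] for p in cluster) / n for i in range(3))
    color = tuple(sum(p["color"][i] for p in cluster) / n for i in range(3))
    return {"pos": pos, "color": color, "shape": "quad"}
```

The final render would then rasterize `near` as individual points and each construct as a single polygon.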
-
Patent number: 11380053
Abstract: One embodiment is directed to a system for presenting views of a very large point data set, comprising: a storage system comprising data representing a point cloud comprising a very large number of associated points; a controller operatively coupled to the storage cluster and configured to automatically and deterministically organize the point data into an octree hierarchy of data sectors, each of which is representative of one or more of the points at a given octree mesh resolution; and a user interface through which a user may select a viewing perspective origin and vector, which may be utilized to command the controller to assemble an image based at least in part upon the selected origin and vector, the image comprising a plurality of data sectors pulled from the octree hierarchy.
Type: Grant
Filed: September 11, 2019
Date of Patent: July 5, 2022
Assignee: Willow Garage, LLC
Inventors: Stuart Glaser, Wim Meeussen, Eitan Marder-Eppstein
-
Patent number: 11380052
Abstract: A system comprises a storage system comprising data representing a point cloud comprising a very large number of associated points; a controller operatively coupled to the storage cluster and configured to organize the data into an octree hierarchy of data sectors, each of which is representative of one or more of the points at a given octree mesh resolution; and a user interface through which a user may select a viewing perspective origin and vector, which may be utilized to command the controller to assemble an image based at least in part upon the selected origin and vector, the image comprising a plurality of data sectors pulled from the octree hierarchy, the plurality of data sectors being assembled such that sectors representative of points closer to the selected viewing origin have a higher octree mesh resolution than that of sectors representative of points farther away from the selected viewing origin.
Type: Grant
Filed: September 11, 2019
Date of Patent: July 5, 2022
Assignee: Willow Garage, LLC
Inventors: Eitan Marder-Eppstein, Stuart Glaser, Wim Meeussen
-
Patent number: 11380078
Abstract: System and method are provided for scaling a 3-D representation of a building structure. The method includes obtaining images of the building structure, including non-camera anchors. The method also includes identifying reference poses for images based on the non-camera anchors. The method also includes obtaining world map data including real-world poses for the images. The method also includes selecting candidate poses from the real-world poses based on corresponding reference poses. The method also includes calculating a scaling factor for a 3-D representation of the building structure based on correlating the reference poses with the selected candidate poses. Some implementations use structure from motion techniques or LiDAR, in addition to augmented reality frameworks, for scaling the 3-D representations of the building structure. In some implementations, the world map data includes environmental data, such as illumination data, and the method includes generating or displaying the 3-D representation.
Type: Grant
Filed: December 10, 2020
Date of Patent: July 5, 2022
Assignee: HOVER, INC.
Inventors: Manish Upendran, William Castillo, Jena Dzitsiuk, Yunwen Zhou, Matthew Thomas
-
Patent number: 11377945
Abstract: A computer-based method and system for predicting the propagation of cracks along a pipe is provided, wherein successive time-indexed ultrasound images of a pipe surface are captured and digitized. A computer vision algorithm processes the images to identify defects in the pipe, including cracks. At least one blob detection module is used to identify groups of cracks on the pipe surface that have created detectable areas of stress concentration or a prescribed likelihood of crack coalescence or crack cross-influence. The center locations and radial extents of respective blobs are each parametrized as a function of time and pipe surface location by determining parity relationships between successive digital data sets from successive captured images. The determined parity relationships are then used as training data for a machine learning process to train a system implementing the method to predict the propagation of cracks along the pipe.
Type: Grant
Filed: April 29, 2020
Date of Patent: July 5, 2022
Assignee: Saudi Arabian Oil Company
Inventors: Kaamil Ur Rahman Mohamed Shibly, Ahmad Aldabbagh
-
Patent number: 11373365
Abstract: One embodiment is directed to a method for presenting views of a very large point data set, comprising: storing data on a storage system that is representative of a point cloud comprising a very large number of associated points; automatically and deterministically organizing the data into an octree hierarchy of data sectors, each of which is representative of one or more of the points at a given octree mesh resolution; receiving a command from a user of a user interface to present an image based at least in part upon a selected viewing perspective origin and vector; and assembling the image based at least in part upon the selected origin and vector, the image comprising a plurality of data sectors pulled from the octree hierarchy.
Type: Grant
Filed: September 11, 2019
Date of Patent: June 28, 2022
Assignee: Willow Garage, LLC
Inventors: Wim Meeussen, Eitan Marder-Eppstein, Stuart Glaser
-
Patent number: 11373364
Abstract: One method embodiment comprises storing data on a storage system that is representative of a point cloud comprising a very large number of associated points; organizing the data into an octree hierarchy of data sectors, each of which is representative of one or more of the points at a given octree mesh resolution; receiving a command from a user of a user interface to present an image based at least in part upon a selected viewing perspective origin and vector; and assembling the image based at least in part upon the selected origin and vector, the image comprising a plurality of data sectors pulled from the octree hierarchy, the plurality of data sectors being assembled such that sectors representative of points closer to the selected viewing origin have a higher octree mesh resolution than that of sectors representative of points farther away from the selected viewing origin.
Type: Grant
Filed: September 11, 2019
Date of Patent: June 28, 2022
Assignee: Willow Garage, LLC
Inventors: Eitan Marder-Eppstein, Stuart Glaser, Wim Meeussen
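The distance-dependent octree mesh resolution described in this family of abstracts amounts to a level-of-detail policy: sectors near the viewing origin are assembled at a deeper (finer) octree level than distant ones. A minimal sketch follows, assuming one level of detail is dropped per doubling of distance; that falloff rate is an assumption, not taken from the patents:

```python
import math

def lod_for_sector(sector_center, view_origin, max_depth, base_distance=1.0):
    """Choose an octree depth (higher = finer mesh resolution) for a
    data sector: sectors close to the viewing origin are pulled from
    deeper levels of the hierarchy, distant sectors from coarser ones."""
    dist = math.dist(sector_center, view_origin)
    if dist <= base_distance:
        return max_depth          # closest sectors: full resolution
    # Drop one level each time the distance doubles past base_distance.
    drop = int(math.log2(dist / base_distance)) + 1
    return max(0, max_depth - drop)
```

An image is then assembled by pulling, for each visible sector, the pre-organized data at the depth this function returns.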
-
Patent number: 11373366
Abstract: The invention discloses a method for improving the modeling speed of a digital slide scanner, in the technical field of microscopes. In conventional scanners, modeling is slow for a given image quality, and the focusing-plane positions of the modeling points differ between modeling and scanning imaging, which degrades image quality and places higher demands on the motion-repetition precision of the stage. The invention performs modeling and scanning imaging in units of scan lines, modeling and scanning the current line and then the next line: the next line is modeled during the interval from the end of the current line through the line feed back to the beginning of the next line. Modeling thus proceeds in parallel with the scanning and line-feed process, effectively reducing the equivalent modeling time.
Type: Grant
Filed: April 16, 2019
Date of Patent: June 28, 2022
Assignee: MOTIC CHINA GROUP CO., LTD.
Inventors: Jun Kang, Shouli Jia, Muwang Chen
-
Patent number: 11367222
Abstract: A deep learning method employs a neural network having three sub-nets to classify and retrieve the most similar 3D model of an object, given a rough 3D model or scanned images. The most similar 3D model is present in a database and can be retrieved to use directly or as a reference to redesign the 3D model. The three sub-nets of the neural network include one dealing with object images and the other two dealing with voxel representations. Majority vote is used instead of view pooling to classify the object. A feature map and a list of the top N most similar well-designed 3D models are also provided.
Type: Grant
Filed: April 20, 2018
Date of Patent: June 21, 2022
Assignee: Hewlett-Packard Development Company, L.P.
Inventors: Ruiting Shao, Yang Lei, Jian Fan, Jerry Liu
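Majority vote over per-view (or per-sub-net) predictions, as opposed to view pooling of features, is simple to sketch. The tie-breaking rule below (first-seen label wins) is an assumption, not stated in the abstract:

```python
from collections import Counter

def majority_vote(view_predictions):
    """Combine per-view class predictions into one object label by
    majority vote; ties are broken in favor of the label that
    appeared first in the input order."""
    counts = Counter(view_predictions)
    top = max(counts.values())
    for label in view_predictions:  # scan preserves input order on ties
        if counts[label] == top:
            return label
```

View pooling would instead merge feature maps before a single classification; voting keeps each view's decision independent.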
-
Patent number: 11361542
Abstract: An augmented reality experience is provided to a user of a hand held device, such as a mobile phone, which incorporates an electronic processor, a camera and a display. Images taken from video footage are displayed in a display of a hand held device together with a live camera view, to create the illusion that the subject of the video (the virtual moving image) is present in the field of view of the camera in real time. In this context the term "real world" image means an image taken from reality, such as a physical, real-world scenario using an electronic photo-capture technique, e.g. video recording. A camera of a hand held device is aimed at a well-known object, which is recognisable to the device. A moving virtual image of an actor playing the part of an historical figure, chosen because of its relevance to the object, is displayed.
Type: Grant
Filed: December 10, 2020
Date of Patent: June 14, 2022
Assignee: 2MEE LTD
Inventors: Christopher George Knight, James Patrick Riley
-
Patent number: 11348320
Abstract: A method implemented by an extended reality (XR) display device includes capturing, by an optical sensor, an image portraying a number of objects within an environment, and analyzing the image to identify a tracking pattern corresponding to a first object. The first object is an external electronic device associated with the XR display device. The method further includes generating a first map of the environment based on the image, in which a relative location of the external electronic device within the environment with respect to the XR display device is determined for the first map based on the tracking pattern. The method further includes accessing a final map of the environment based on the first map of the environment and a second map of the environment generated with respect to the relative location.
Type: Grant
Filed: September 8, 2020
Date of Patent: May 31, 2022
Assignee: SAMSUNG ELECTRONICS COMPANY, LTD.
Inventors: Christopher A. Peri, Dotan Knaan
-
Patent number: 11348201
Abstract: An electronic device and a method of controlling the electronic device are provided. The electronic device includes a housing, a roll at least partially contained in the housing, a display configured to be rolled on the roll, the display including a display area having a size that changes according to a rotation of the roll, and the display being configured to display a screen including at least one element in the display area, a sensor configured to sense the rotation of the roll, and a processor electrically connected to the display and the sensor. In response to the size of the display area being changed according to the rotation of the roll, the processor is configured to change at least one of a size and a layout of an element included in the screen according to the size of the display area.
Type: Grant
Filed: March 29, 2021
Date of Patent: May 31, 2022
Assignee: SAMSUNG ELECTRONICS CO., LTD.
Inventors: Hee-seok Jeong, Sang-young Lee
-
Patent number: 11341723
Abstract: Methods, systems, and computer program products for improving the generation of a 3D representation of an object may include adding a distribution of additional points to a plurality of points representative of captured portions of the object, based on a computational analysis of the plurality of points, and performing a surface reconstruction on the plurality of points and the distribution of additional points to generate a three-dimensional mesh representation of the object that is watertight.
Type: Grant
Filed: February 23, 2018
Date of Patent: May 24, 2022
Assignee: SONY GROUP CORPORATION
Inventors: Francesco Michielin, Pal Szasz, Fredrik Mattisson
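One simple reading of "adding a distribution of additional points based on a computational analysis of the plurality of points" is to densify sparse regions before reconstruction. The nearest-neighbour midpoint insertion below is an illustrative stand-in for that analysis, not the patented method:

```python
def densify(points):
    """Insert the midpoint between each point and its nearest
    neighbour, thickening the cloud so a subsequent surface
    reconstruction is more likely to close into a watertight mesh.
    Assumes at least two distinct points, each a 3-tuple."""
    out = list(points)
    for p in points:
        nn = min((q for q in points if q != p),
                 key=lambda q: sum((a - b) ** 2 for a, b in zip(p, q)))
        out.append(tuple((a + b) / 2 for a, b in zip(p, nn)))
    return out
```

A real pipeline would target only under-sampled regions (holes) rather than densifying uniformly, then hand the result to a reconstruction step such as Poisson surface reconstruction.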
-
Patent number: 11341712
Abstract: A virtual reality (VR) video processing apparatus and a VR method divide a received video image into a plurality of regions, establish a region popularity table for the video image and update it by tracking an angle of view of a user while a video is playing, to collect information about a hotspot region in a panoramic video, and send a hotspot region prompt to the user.
Type: Grant
Filed: April 30, 2021
Date of Patent: May 24, 2022
Assignee: HUAWEI TECHNOLOGIES CO., LTD.
Inventors: Hongbo Liu, Liang Wu, Wei He, Li Jin
-
Patent number: 11335063
Abstract: Described herein are methods and systems for generating multiple maps during object scanning for 3D object reconstruction. A sensor device captures RGB images and depth maps of objects in a scene. A computing device receives the RGB images and the depth maps from the sensor device. The computing device creates a first map using at least a portion of the depth maps, a second map using at least a portion of the depth maps, and a third map using at least a portion of the depth maps. The computing device finds key point matches among the first map, the second map, and the third map. The computing device performs bundle adjustment on the first map, the second map, and the third map using the matched key points to generate a final map. The computing device generates a 3D mesh of the object using the final map.
Type: Grant
Filed: December 30, 2020
Date of Patent: May 17, 2022
Assignee: VanGogh Imaging, Inc.
Inventors: Ken Lee, Jun Yin, Craig Cambias