Patents Examined by Phu K. Nguyen
  • Patent number: 10769845
    Abstract: A technique for generating virtual models of plants in a field is described. Generally, this includes recording images of plants in situ; generating point clouds from the images; generating skeleton segments from the point clouds; classifying a subset of skeleton segments as unique plant features using the images; and growing plant skeletons from the skeleton segments classified as unique plant features. The technique may be used to generate a virtual model of a single real plant, a portion of a real plant field, and/or the entirety of the real plant field. The virtual model can be analyzed to determine or estimate a variety of individual plant or plant population parameters, which in turn can be used to identify potential treatments or thinning practices, or to predict future values for yield, plant uniformity, or any other parameter that can be determined from the projected results based on the virtual model.
    Type: Grant
    Filed: June 4, 2018
    Date of Patent: September 8, 2020
    Assignee: BLUE RIVER TECHNOLOGY INC.
    Inventors: Lee Kamp Redden, Nicholas Apostoloff
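The final "growing" step in the abstract above can be pictured as grouping skeleton segments that share endpoints into connected plant skeletons. The sketch below is illustrative only; the function name `grow_skeletons`, the endpoint-matching tolerance, and the union-find grouping are assumptions, not the patented method:

```python
# Hypothetical sketch: merge 3D skeleton segments that share an endpoint
# into connected skeletons (one possible reading of "growing plant
# skeletons from skeleton segments").
from collections import defaultdict

def grow_skeletons(segments, tol=1e-6):
    """segments: list of (p, q) pairs, each point a 3-tuple of floats.
    Returns a list of sets of segment indices, one set per skeleton."""
    def key(p):
        # Quantize coordinates so nearly identical endpoints compare equal.
        return tuple(round(c / tol) for c in p)

    parent = list(range(len(segments)))
    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]
            i = parent[i]
        return i
    def union(i, j):
        parent[find(i)] = find(j)

    by_endpoint = defaultdict(list)
    for idx, (p, q) in enumerate(segments):
        by_endpoint[key(p)].append(idx)
        by_endpoint[key(q)].append(idx)
    for idxs in by_endpoint.values():
        for other in idxs[1:]:
            union(idxs[0], other)

    groups = defaultdict(set)
    for idx in range(len(segments)):
        groups[find(idx)].add(idx)
    return list(groups.values())
```

Two chained segments plus one isolated segment yield two skeletons under this rule.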
  • Patent number: 10768769
    Abstract: A system and method for video surveillance and searching are disclosed. Video is analyzed and events are automatically detected. Based on the automatically detected events, textual descriptions are generated. The textual descriptions may be used to supplement video viewing and event viewing, and to provide for textual searching for events.
    Type: Grant
    Filed: February 23, 2017
    Date of Patent: September 8, 2020
    Assignee: AVIGILON FORTRESS CORPORATION
    Inventors: Tae Eun Choe, Mun Wai Lee, Kiran Gunda, Niels Haering
  • Patent number: 10755470
    Abstract: Techniques are provided to estimate the location or position of objects depicted in an image of a scene. Some implementations include obtaining an image of a scene; identifying an object within the image of the scene; obtaining a three-dimensional model that corresponds to the object that was identified within the image of the scene, the three-dimensional model being obtained from a database of three-dimensional models; determining, based on data from the three-dimensional model, an estimated depth of the object within the scene; generating or updating a three-dimensional representation of the scene based at least on the estimated depth of the object within the scene; and providing the three-dimensional representation of the scene, including at least a portion of the three-dimensional representation of the scene that was generated or updated based on the three-dimensional model of the object, to the scene analyzer.
    Type: Grant
    Filed: February 5, 2019
    Date of Patent: August 25, 2020
    Assignee: X Development LLC
    Inventors: Nicholas John Foster, Matthew Sibigtroth
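The depth-estimation step above can exploit the fact that, under a pinhole camera model, an object of known real-world size (taken from its matched 3D model) projects to an image size inversely proportional to its distance. A minimal sketch with invented names, not X Development's actual implementation:

```python
def estimate_depth(focal_length_px, model_height_m, image_height_px):
    """Depth-from-model-size sketch: the matched 3D model supplies the
    object's real height; the pinhole relation
        image_height = focal_length * real_height / depth
    is then solved for depth."""
    return focal_length_px * model_height_m / image_height_px
```

For example, a 1.8 m object spanning 180 px under a 1000 px focal length is about 10 m away.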
  • Patent number: 10755379
    Abstract: During an analysis technique, a three-dimensional (3D) image of a portion of an individual is iteratively transformed to facilitate accurate determination of detailed multi-point annotation of an anatomical structure. In particular, for a given marker point, in response to receiving information specifying a two-dimensional (2D) plane having an angular position in the 3D image, the 3D image is translated and rotated from an initial position and orientation so that the 2D plane is presented in an orientation parallel to a reference 2D plane of a display. Then, after annotation information specifying the detailed annotation in the 2D plane of the given marker point is received, the 3D image is translated and rotated back to the initial position and orientation. These operations may be repeated for one or more other marker points.
    Type: Grant
    Filed: August 11, 2018
    Date of Patent: August 25, 2020
    Assignee: EchoPixel, Inc.
    Inventors: Sergio Aguirre-Valencia, William Johnsen, Anthony Chen
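The reorientation step above (rotating the 3D image so the chosen 2D plane lies parallel to the display) amounts to building a rotation that sends the plane's normal onto the display's view axis. A sketch using Rodrigues' rotation formula; the function name and calling convention are assumptions, not EchoPixel's API:

```python
def rotation_aligning(a, b):
    """Return the 3x3 rotation matrix sending unit vector a onto unit
    vector b (Rodrigues' formula): R = I + [v]x + [v]x^2 / (1 + c),
    where v = a x b and c = a . b."""
    ax, ay, az = a
    bx, by, bz = b
    vx, vy, vz = (ay*bz - az*by, az*bx - ax*bz, ax*by - ay*bx)  # v = a x b
    c = ax*bx + ay*by + az*bz                                    # c = a . b
    if c < -0.999999:
        raise ValueError("vectors are opposite; pick any perpendicular axis")
    k = 1.0 / (1.0 + c)
    return [
        [1 - (vy*vy + vz*vz)*k, -vz + vx*vy*k,          vy + vx*vz*k],
        [vz + vx*vy*k,          1 - (vx*vx + vz*vz)*k, -vx + vy*vz*k],
        [-vy + vx*vz*k,         vx + vy*vz*k,           1 - (vx*vx + vy*vy)*k],
    ]
```

Applying the inverse rotation afterwards restores the initial orientation, matching the "translated and rotated back" step.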
  • Patent number: 10755378
    Abstract: During an analysis technique, a three-dimensional (3D) image of a portion of an individual is iteratively transformed to facilitate accurate determination of detailed multi-point annotation of an anatomical structure. In particular, for a given marker point, in response to receiving information specifying a two-dimensional (2D) plane having an angular position in the 3D image, the 3D image is translated and rotated from an initial position and orientation so that the 2D plane is presented in an orientation parallel to a reference 2D plane of a display. Then, after annotation information specifying the detailed annotation in the 2D plane of the given marker point is received, the 3D image is translated and rotated back to the initial position and orientation. These operations may be repeated for one or more other marker points.
    Type: Grant
    Filed: August 11, 2018
    Date of Patent: August 25, 2020
    Assignee: EchoPixel, Inc.
    Inventors: Sergio Aguirre-Valencia, William Johnsen, Anthony Chen
  • Patent number: 10748329
    Abstract: An image processing method and apparatus belong to the technical field of image processing. The method is applied to a mobile terminal, and includes: acquiring a target object on a two-dimensional (2D) image (S11); creating a three-dimensional (3D) image layer, taking the 2D image as a background of the 3D image layer, and creating a background frame on the 3D image layer (S12); detecting an orientation of the mobile terminal, and adjusting a shape of the background frame according to the orientation of the mobile terminal, so that different areas of the 2D image are moved into or out of the background frame (S13); and when the target object on the 2D image is not completely accommodated within the background frame, drawing the target object at its position on the 3D image layer (S14), where the 2D image, the background frame, and the drawn target object on the 3D image layer are sequentially stacked from bottom to top.
    Type: Grant
    Filed: June 20, 2017
    Date of Patent: August 18, 2020
    Assignee: XI'AN ZHONGXING NEW SOFTWARE CO., LTD.
    Inventor: Rong Li
  • Patent number: 10748333
    Abstract: In various embodiments, a finite aperture omni-directional camera is modeled by aligning a finite aperture lens and focal point with the omni-directional part of the projection. For example, each point on an image plane maps to a direction in camera space. For a spherical projection, the lens can be oriented along this direction and the focal point is picked along this direction at focal distance from the lens. For a cylindrical projection, the lens can be oriented along the projected direction on the two-dimensional (2D) xz-plane, as the projection is not omni-directional in the y direction. The focal point is picked along the (unprojected) direction so that its projection on the xz-plane is at focal distance from the lens. The final outgoing ray can be constructed by sampling a point on this oriented lens and shooting a ray from there through the focal point.
    Type: Grant
    Filed: January 26, 2018
    Date of Patent: August 18, 2020
    Assignee: NVIDIA Corporation
    Inventor: Dietger van Antwerpen
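The spherical-projection case above can be sketched directly: orient a thin-lens disc along the per-pixel direction, place the focal point at focal distance along that direction, sample the disc, and shoot a ray through the focal point. All names are illustrative; this is a generic thin-lens construction, not NVIDIA's implementation:

```python
import math, random

def finite_aperture_ray(direction, focal_distance, lens_radius, rng=random.random):
    """Construct a depth-of-field ray for a unit per-pixel `direction`:
    returns (ray_origin_on_lens, unit_ray_direction)."""
    dx, dy, dz = direction
    # Orthonormal basis (u, v) spanning the lens disc perpendicular to direction.
    up = (1.0, 0.0, 0.0) if abs(dx) < 0.9 else (0.0, 1.0, 0.0)
    ux, uy, uz = (dy*up[2]-dz*up[1], dz*up[0]-dx*up[2], dx*up[1]-dy*up[0])
    n = math.sqrt(ux*ux + uy*uy + uz*uz)
    ux, uy, uz = ux/n, uy/n, uz/n
    vx, vy, vz = (dy*uz - dz*uy, dz*ux - dx*uz, dx*uy - dy*ux)
    # Uniform sample on the lens disc.
    r = lens_radius * math.sqrt(rng())
    phi = 2.0 * math.pi * rng()
    a, b = r * math.cos(phi), r * math.sin(phi)
    origin = (a*ux + b*vx, a*uy + b*vy, a*uz + b*vz)
    # Focal point lies at focal_distance along the pixel direction.
    focal = (focal_distance*dx, focal_distance*dy, focal_distance*dz)
    d = (focal[0]-origin[0], focal[1]-origin[1], focal[2]-origin[2])
    ln = math.sqrt(d[0]*d[0] + d[1]*d[1] + d[2]*d[2])
    return origin, (d[0]/ln, d[1]/ln, d[2]/ln)
```

With a zero-radius lens the construction degenerates to the ordinary pinhole ray along the pixel direction.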
  • Patent number: 10740943
    Abstract: One embodiment of the present disclosure presents a technique for enabling user modification of an animation effect while the animation effect is being rendered. The technique includes generating an editor user interface, wherein the editor user interface includes an editor module that includes an editor model and an engine model. In the technique, the editor model includes an editor value corresponding to a bindable property. In addition, the engine model comprises an engine value corresponding to the bindable property. The technique also includes receiving user input corresponding to a modification of the editor value. The technique further includes modifying the editor value based on the user input. In addition, the technique includes synchronizing the modified editor value with the engine value and modifying an animation effect based on the synchronized engine value.
    Type: Grant
    Filed: November 27, 2018
    Date of Patent: August 11, 2020
    Assignee: Facebook, Inc.
    Inventors: Pedro Veras Bezerra de Silva, Andrey Staroseltsev, Tingyong Liu, Shady Hassan Sayed Hassen Aly, Alexander Nicholas Rozanski
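The editor-model/engine-model split above can be illustrated with a tiny two-copy value holder: edits touch only the editor side, and an explicit synchronization pushes them to the engine side driving the running effect. Class and method names are invented for illustration, not Facebook's API:

```python
class BindableProperty:
    """Sketch of the editor/engine split: `editor_value` is what the user
    edits; `engine_value` is what the live animation reads; synchronize()
    carries edits across the boundary."""
    def __init__(self, value):
        self.editor_value = value
        self.engine_value = value

    def edit(self, value):
        # User input modifies only the editor-side value.
        self.editor_value = value

    def synchronize(self):
        # The engine picks up the edited value while rendering continues.
        self.engine_value = self.editor_value
        return self.engine_value
```

Until synchronize() runs, the rendering engine keeps animating with the old value, which is what lets editing happen mid-render.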
  • Patent number: 10729522
    Abstract: The invention relates to a method for the virtual secondary machining of a virtual three-dimensional gingiva model, said model having been created during the planning of an artificial gingiva. Here, the virtual gingiva model is virtually machined by at least one defined three-dimensional surface structure of the gingiva model being modified by means of a virtual tool using a computer.
    Type: Grant
    Filed: July 31, 2015
    Date of Patent: August 4, 2020
    Assignee: DENTSPLY SIRONA Inc.
    Inventor: Thorsten Jordan
  • Patent number: 10733796
    Abstract: A method of processing primitives within a tiling unit of a graphics processing system is described. The method comprises determining whether a primitive falls within a tile based on positions of samples within each pixel. If it is determined that the primitive does fall within a tile based on the positions of samples within pixels in a tile, an association between the tile and the primitive is stored to indicate that the primitive is present in the tile. For example, an identifier for the primitive may be added to a control stream for the tile to indicate that the primitive is present in the tile. Various different methods are described to make the determination and these may be used separately or in any combination.
    Type: Grant
    Filed: October 16, 2018
    Date of Patent: August 4, 2020
    Assignee: Imagination Technologies Limited
    Inventors: Xile Yang, Lorenzo Belli, Richard Broadhurst
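The sample-precise binning test above can be sketched with edge functions: a triangle is associated with a tile only if at least one sample position inside the tile lies within the triangle. Function names and the brute-force loop are illustrative, not Imagination's hardware method:

```python
def primitive_in_tile(tri, tile_x0, tile_y0, tile_w, tile_h, samples):
    """tri: three 2D vertices; samples: per-pixel sample offsets in [0,1)^2.
    Returns True iff any sample position in the tile is covered."""
    (ax, ay), (bx, by), (cx, cy) = tri

    def edge(px, py, qx, qy, x, y):
        # Signed area of the edge (p, q) with point (x, y).
        return (qx - px) * (y - py) - (qy - py) * (x - px)

    area = edge(ax, ay, bx, by, cx, cy)
    if area == 0:
        return False          # degenerate triangle covers nothing
    for py in range(tile_h):
        for px in range(tile_w):
            for sx, sy in samples:
                x, y = tile_x0 + px + sx, tile_y0 + py + sy
                w0 = edge(ax, ay, bx, by, x, y)
                w1 = edge(bx, by, cx, cy, x, y)
                w2 = edge(cx, cy, ax, ay, x, y)
                # Covered when all edge functions agree with the winding.
                if (area > 0 and w0 >= 0 and w1 >= 0 and w2 >= 0) or \
                   (area < 0 and w0 <= 0 and w1 <= 0 and w2 <= 0):
                    return True
    return False
```

When this returns True, the primitive's identifier would be appended to that tile's control stream.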
  • Patent number: 10722328
    Abstract: A processing device determines two or more adjacent teeth in a virtual model, the two or more adjacent teeth including a first tooth adjacent to a second tooth in the virtual model. The processing device inserts a first virtual filler in the virtual model between the first tooth and the second tooth. The processing device selects points in the virtual model and transforms the plurality of points into a voxel volume comprising a plurality of voxels. The processing device performs a smoothing operation on the voxel volume and, subsequent to performing the smoothing operation, determines a geometry of an updated virtual filler by transforming a surface of the smoothed voxel volume into a polygonal mesh.
    Type: Grant
    Filed: October 3, 2018
    Date of Patent: July 28, 2020
    Assignee: Align Technology, Inc.
    Inventors: Israel Velazquez, Andrey Cherkas, Stephan Albert Alexandre Dumothier, Anatoliy Parpara, Alexey Geraskin, Yury Slynko, Danila Chesnokov
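The abstract does not specify the smoothing operation; one simple stand-in is a single 6-neighbour box-averaging pass over the dense voxel volume. The sketch below is that stand-in, not Align Technology's actual filter:

```python
def smooth_voxels(volume):
    """One pass of 6-neighbour averaging over a dense voxel volume
    (nested lists of floats). Boundary voxels average only their
    in-bounds neighbours."""
    nx, ny, nz = len(volume), len(volume[0]), len(volume[0][0])
    out = [[[0.0] * nz for _ in range(ny)] for _ in range(nx)]
    for i in range(nx):
        for j in range(ny):
            for k in range(nz):
                total, count = volume[i][j][k], 1
                for di, dj, dk in ((1, 0, 0), (-1, 0, 0), (0, 1, 0),
                                   (0, -1, 0), (0, 0, 1), (0, 0, -1)):
                    a, b, c = i + di, j + dj, k + dk
                    if 0 <= a < nx and 0 <= b < ny and 0 <= c < nz:
                        total += volume[a][b][c]
                        count += 1
                out[i][j][k] = total / count
    return out
```

After smoothing, a surface extraction step (e.g. marching cubes) would turn the volume back into the polygonal mesh mentioned in the abstract.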
  • Patent number: 10719973
    Abstract: Rendering systems that can use combinations of rasterization rendering processes and ray tracing rendering processes are disclosed. In some implementations, these systems perform a rasterization pass to identify visible surfaces of pixels in an image. Some implementations may begin shading processes for visible surfaces, before the geometry is entirely processed, in which rays are emitted. Rays can be culled at various points during processing, based on determining whether the surface from which the ray was emitted is still visible. Rendering systems may implement rendering effects as disclosed.
    Type: Grant
    Filed: August 1, 2018
    Date of Patent: July 21, 2020
    Assignee: Imagination Technologies Limited
    Inventor: Luke T. Peterson
  • Patent number: 10692266
    Abstract: A non-transitory computer readable storage medium storing computer program code that, when executed by a processing device, causes the processing device to perform operations comprising: determining a first representative point, wherein the first representative point represents a first geometric primitive; determining a second representative point, wherein the second representative point represents a second geometric primitive; determining an initial distance between the first representative point and the second representative point; calculating a first displacement based on a velocity of the first representative point; calculating a second displacement based on a velocity of the second representative point; determining a separating direction between the first representative point and the second representative point; projecting the first displacement along the separating direction; projecting the second displacement along the separating direction; calculating a predicted minimum distance between the first representative point and the second representative point.
    Type: Grant
    Filed: December 14, 2018
    Date of Patent: June 23, 2020
    Assignee: ELECTRONIC ARTS INC.
    Inventor: Christopher Charles Lewin
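The sequence of steps above (initial distance, projected displacements along the separating direction, predicted distance) can be sketched directly; the exact combination rule in the claim is not given in full, so the final line below is one plausible reading, with invented names:

```python
import math

def predicted_min_distance(p1, v1, p2, v2, dt):
    """Project each representative point's displacement (velocity * dt)
    onto the separating direction and update the initial distance.
    Positive result: the points remain separated over the step."""
    sep = [b - a for a, b in zip(p1, p2)]
    dist = math.sqrt(sum(c * c for c in sep))       # initial distance
    u = [c / dist for c in sep]                     # separating direction
    proj1 = sum(ui * vi * dt for ui, vi in zip(u, v1))  # projected displacement 1
    proj2 = sum(ui * vi * dt for ui, vi in zip(u, v2))  # projected displacement 2
    return dist + proj2 - proj1
```

For example, two points 10 units apart closing at a combined 3 units per step have a predicted distance of 7 after one step.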
  • Patent number: 10685477
    Abstract: Generation/rendering of a 2D perspective/view of a geographic region, e.g. a map of the world or a portion thereof, superimposed over which are indicators of related data items and graphical representations of the relationships therebetween, is disclosed. Based on location data associated with each data item, a relative geographic presentation within, or otherwise superimposed over, the 2D presentation is generated relative to a 3D representation of the geographic region. Graphical interconnections are then derived based on the positions of the data items relative to each other and depicted in a manner which shows both the relationship between the data items and the geographic relationship with respect to the geographic region. The graphical interconnections may further be derived in a manner so as to depict, or otherwise follow, the perspective depicted by the 2D view, e.g. as arcs between related data items conforming to the depicted spherical contour of a globe.
    Type: Grant
    Filed: November 8, 2018
    Date of Patent: June 16, 2020
    Assignee: Tic Talking Holdings Inc.
    Inventor: Alexander M. Giunio-Zorkin
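An arc that "conforms to the depicted spherical contour of a globe" can be sampled by spherical linear interpolation (slerp) between the two data items' unit position vectors. A generic sketch, not the patent's drawing routine:

```python
import math

def great_circle_arc(a, b, steps=8):
    """Sample steps+1 points along the great-circle arc between unit
    vectors a and b via spherical linear interpolation."""
    dot = max(-1.0, min(1.0, sum(x * y for x, y in zip(a, b))))
    omega = math.acos(dot)          # angle between the endpoints
    if omega < 1e-9:
        return [a] * (steps + 1)    # coincident endpoints: degenerate arc
    s = math.sin(omega)
    points = []
    for i in range(steps + 1):
        t = i / steps
        w0, w1 = math.sin((1 - t) * omega) / s, math.sin(t * omega) / s
        points.append(tuple(w0 * x + w1 * y for x, y in zip(a, b)))
    return points
```

Projecting the sampled 3D points with the same camera used for the 2D view yields arcs that visually hug the globe.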
  • Patent number: 10685489
    Abstract: A device has an optical sensor, an inertial sensor, and a hardware processor. The optical sensor generates image data. The inertial sensor generates inertia data. The hardware processor receives an augmented reality (AR) authoring template authored at a client device, generates media content using the image data, and receives a selection of spatial coordinates within a three-dimensional region using the inertia data and the image data. The three-dimensional region is identified in the AR authoring template. The hardware processor further identifies an entry in the AR authoring template corresponding to the media content, places the media content at the selected spatial coordinates in the entry in the AR authoring template, and forms AR content using the media content at the selected spatial coordinates placed in the entry in the AR authoring template.
    Type: Grant
    Filed: March 2, 2017
    Date of Patent: June 16, 2020
    Assignee: DAQRI, LLC
    Inventors: Samuel Finding, Brian Mullins, Noopur Gupta, Anthony L. Reyes, Neil Aalto
  • Patent number: 10678838
    Abstract: Disclosed are examples of methods, apparatus, systems, and computer program products for providing an augmented reality display of an image with record data. In one example, image data is received at one or more processors. A request message is sent requesting record data associated with the image data from one or more of a plurality of records stored in a database system. In some implementations, when the requested record data is received, a graphical display of the record data in combination with the image can be provided on a display device.
    Type: Grant
    Filed: September 5, 2017
    Date of Patent: June 9, 2020
    Assignee: salesforce.com, inc.
    Inventor: Samuel W. Bailey
  • Patent number: 10667867
    Abstract: A computer process including receiving first patient bone data of a patient leg and foot in a first pose, the first pose comprising a position and orientation of the patient leg relative to the patient foot as defined in the first patient bone data. The computer process may further include receiving second patient bone data of the patient leg and foot in a second pose, the second pose comprising a position and orientation of the patient leg relative to the patient foot as defined in the second patient bone data. The computer process may further include generating a 3D bone model of the patient leg and foot. Finally, the computer process may include modifying the 3D bone model of the patient leg and foot such that the plurality of 3D bone models are reoriented into a third pose that matches the second pose.
    Type: Grant
    Filed: May 3, 2018
    Date of Patent: June 2, 2020
    Assignee: Stryker European Holdings I, LLC
    Inventors: Ashish Gangwar, Kanishk Sethi, Anup Kumar, Ryan Sellman, Peter Sterrantino, Manoj Kumar Singh
  • Patent number: 10659769
    Abstract: An image processing apparatus includes an obtaining unit configured to obtain virtual viewpoint information indicating a position and/or a direction of a virtual viewpoint corresponding to a virtual viewpoint image generated based on images captured by a plurality of cameras which capture images of a subject from different directions, and a display controller configured to display, while a first virtual viewpoint image corresponding to the virtual viewpoint information obtained by the obtaining unit is displayed on a display unit, a second virtual viewpoint image having an angle of view larger than an angle of view of the first virtual viewpoint image being displayed on the display unit, when an instruction for switching to a virtual viewpoint having a position and/or a direction which are/is not set by a user is received.
    Type: Grant
    Filed: August 24, 2017
    Date of Patent: May 19, 2020
    Assignee: Canon Kabushiki Kaisha
    Inventor: Shogo Mizuno
  • Patent number: 10657703
    Abstract: An image processing apparatus generating geometric data of an object includes an obtaining unit configured to obtain a plurality of images of the object, each captured from a different viewpoint, a generating unit configured to generate geometric data of the object from the images obtained by the obtaining unit, and a correcting unit configured to correct the geometric data based on a reliability of at least a part of the geometric data generated by the generating unit.
    Type: Grant
    Filed: October 10, 2017
    Date of Patent: May 19, 2020
    Assignee: CANON KABUSHIKI KAISHA
    Inventors: Yoshinari Higaki, Tomohiro Nishiyama
  • Patent number: 10650585
    Abstract: A method of generating a 3D x-ray image cube movie from 2D x-ray images of a patient is provided. The method includes converting first multiple sets of x-ray images of a lung captured at different projection angles into second multiple sets of x-ray images of the lung corresponding to different breathing phases. The method further includes generating a static image cube from each set of the second multiple sets of x-ray images at a respective breathing phase using back projection. The method includes combining the static image cubes corresponding to the different breathing phases of the lung into a 3D x-ray image cube movie through temporal interpolation.
    Type: Grant
    Filed: May 22, 2019
    Date of Patent: May 12, 2020
    Assignee: DATA INTEGRITY ADVISORS, LLC
    Inventor: Janid Blanco Kiely
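The final temporal-interpolation step above can be illustrated as a linear blend between the static image cubes reconstructed at adjacent breathing phases. A minimal sketch with invented names; the patent does not specify the interpolation scheme:

```python
def interpolate_cubes(cube_a, cube_b, t):
    """Linearly blend two static image cubes (flat lists of voxel
    intensities) at blend factor t in [0, 1] to produce an intermediate
    frame of the 3D x-ray image cube movie."""
    return [(1.0 - t) * a + t * b for a, b in zip(cube_a, cube_b)]
```

Sweeping t from 0 to 1 between each pair of phase cubes turns the discrete breathing phases into a continuous movie.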