Three-dimension Patents (Class 345/419)
  • Patent number: 11430192
    Abstract: In a general aspect, a method can include: receiving, by an electronic device, data defining an augmented reality (AR) environment; receiving an indication to place an AR object on a reference surface in the AR environment; in response to the indication, displaying the AR object on the reference surface in the AR environment; performing a first gesture on an input device of the electronic device; in response to the first gesture, elevating the AR object a distance above the reference surface in the AR environment; performing a second gesture on the input device of the electronic device; and in response to the second gesture, moving the AR object in the AR environment.
    Type: Grant
    Filed: October 2, 2019
    Date of Patent: August 30, 2022
    Assignee: Google LLC
    Inventors: Joost Korngold, Keith Schaefer, Bryan Woods, Jay Steele
  • Patent number: 11430182
    Abstract: A computing system includes one or more processors and a memory storing instructions that, when executed by the one or more processors, cause the system to perform operations. The operations include determining that a portion of an existing map is to be updated; obtaining a point cloud acquired by one or more Lidar sensors corresponding to a location of the portion; converting the portion into an equivalent point cloud; performing a point cloud registration based on the equivalent point cloud and the point cloud; and updating the existing map based on the point cloud registration.
    Type: Grant
    Filed: March 9, 2021
    Date of Patent: August 30, 2022
    Assignee: Pony AI Inc.
    Inventors: Mengda Yang, Yuyang Ding, Ruimeng Shi
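    The "point cloud registration" step named in the abstract above is, generically, a rigid-alignment problem. The sketch below is a minimal ICP-style alignment using only numpy/scipy; it is an illustrative assumption of how such a step can look, not Pony AI's patented method, and every function name and parameter is hypothetical.

```python
# Illustrative ICP-style rigid registration (not the patented method).
import numpy as np
from scipy.spatial import cKDTree

def kabsch(src, dst):
    """Best-fit rotation R and translation t mapping src onto dst (both N x 3)."""
    src_c, dst_c = src.mean(0), dst.mean(0)
    H = (src - src_c).T @ (dst - dst_c)
    U, _, Vt = np.linalg.svd(H)
    if np.linalg.det(Vt.T @ U.T) < 0:       # avoid a reflection
        Vt[-1] *= -1
    R = Vt.T @ U.T
    return R, dst_c - R @ src_c

def register(scan, map_cloud, iters=30):
    """Iteratively align a lidar scan to the map portion's equivalent point cloud."""
    tree = cKDTree(map_cloud)
    aligned = scan.copy()
    R_total, t_total = np.eye(3), np.zeros(3)
    for _ in range(iters):
        _, idx = tree.query(aligned)        # nearest map point for every scan point
        R, t = kabsch(aligned, map_cloud[idx])
        aligned = aligned @ R.T + t
        R_total, t_total = R @ R_total, R @ t_total + t
    return R_total, t_total, aligned        # transform usable to update the map portion
```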
  • Patent number: 11430197
    Abstract: Systems and methods are described to enable the creation of user interfaces that may adapt to different environments and may be automatically created. User interfaces may be two-dimensional or three-dimensional and may be used in virtual reality or augmented reality applications. An interface creator may create or receive digital assets associated with a content item, define virtual planes and associated digital asset templates, associate the digital assets with the virtual planes, and enable display of the virtual planes with associated digital assets to a user for user interaction. Digital assets may be automatically edited to meet the specifications of the templates associated with the virtual planes. Virtual planes and templates may also be standardized and aggregated so that a completed user interface may be easily delivered and presented with other content items in a uniform manner.
    Type: Grant
    Filed: December 9, 2020
    Date of Patent: August 30, 2022
    Assignee: Comcast Cable Communications, LLC
    Inventors: John Zankowski, Jesse Mullen, Matthew Luther, Michael Garzarelli, Thomas Loretan
  • Patent number: 11429250
    Abstract: Systems, methods, and devices are provided for altering an appearance of acquaintances when viewed through smart glasses, which may be altered using augmented reality technology. In particular, an embodiment of the invention is directed to allowing users to specify their appearances to others when viewed by others wearing smart glasses. The others viewing the user through smart glasses include friends, family, contacts, or other acquaintances, which may be specified in one or more social networks or contacts databases. The altered appearance that is displayed may be based on the particular relationship between the user and the viewer. For example, a user may appear as a particular superhero to his friends on a social network website, as having a cartoon feature (such as an enormous head) to his children, as normal to his mother and business contacts, and as wearing a Hawaiian shirt to his closest buddies.
    Type: Grant
    Filed: September 3, 2020
    Date of Patent: August 30, 2022
    Assignee: Hallmark Cards, Incorporated
    Inventor: Scott A. Schimke
  • Patent number: 11430222
    Abstract: An object tracking system includes a sensor and a tracking system. The sensor is configured to capture a first frame of a global plane for at least a portion of a marker grid in a space. The tracking system is configured to receive a first coordinate in the global plane for a first corner of the marker grid, to determine a second coordinate in the global plane for a first marker on the marker grid, and to determine a third coordinate in the global plane for a second marker on the marker grid. The tracking system is further configured to determine a first pixel location for the first marker, to determine a second pixel location for the second marker, and to generate a homography based on the second coordinate for the first marker, the third coordinate for the second marker, the first pixel location, and the second pixel location.
    Type: Grant
    Filed: July 28, 2020
    Date of Patent: August 30, 2022
    Assignee: 7-ELEVEN, INC.
    Inventors: Shahmeer Ali Mirza, Sailesh Bharathwaaj Krishnamurthy, Crystal Maung
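    A planar homography like the one described above is commonly estimated with a direct linear transform (DLT) from correspondences between pixel locations and plane coordinates. The sketch below is a generic DLT in numpy, not 7-Eleven's tracking system; note that a full homography needs at least four correspondences, so the two markers named in the abstract would in practice be part of a larger correspondence set.

```python
# Generic DLT homography estimation from pixel/plane correspondences (illustrative only).
import numpy as np

def fit_homography(pixels, plane):
    """pixels, plane: (N, 2) corresponding points with N >= 4; returns 3x3 H."""
    A = []
    for (u, v), (x, y) in zip(pixels, plane):
        A.append([u, v, 1, 0, 0, 0, -x * u, -x * v, -x])
        A.append([0, 0, 0, u, v, 1, -y * u, -y * v, -y])
    _, _, Vt = np.linalg.svd(np.asarray(A, dtype=float))
    H = Vt[-1].reshape(3, 3)                 # null-space vector holds the homography entries
    return H / H[2, 2]

# Hypothetical example: four marker pixel locations -> known global-plane coordinates.
px = np.array([[100, 120], [400, 118], [405, 390], [98, 385]], dtype=float)
gp = np.array([[0.0, 0.0], [2.0, 0.0], [2.0, 2.0], [0.0, 2.0]])
H = fit_homography(px, gp)
mapped = H @ np.array([100.0, 120.0, 1.0])
print(mapped[:2] / mapped[2])                # ~ (0, 0): the first marker's plane position
```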
  • Patent number: 11425463
    Abstract: Disclosed herein is a content supplying apparatus for supplying a video content to a content reproduction apparatus, including: a production section adapted to produce a video switch command for causing the content reproduction apparatus to execute a process regarding changeover of a video content to be reproduced; a broadcasting section adapted to broadcast a video content, in which the produced video switch command is embedded, through a broadcasting network; and a delivery section adapted to deliver the video content through the Internet.
    Type: Grant
    Filed: October 10, 2016
    Date of Patent: August 23, 2022
    Assignee: SATURN LICENSING LLC
    Inventor: Naohisa Kitazato
  • Patent number: 11423799
    Abstract: A method for generating assembly instructions may include receiving inputs from a user, such that the inputs may include an indication of components of a product package. The method may also include retrieving assembly attribute data for each component from a first storage component, such that the assembly attribute data includes information regarding compatibility properties between two or more of the plurality of components. The method may then involve retrieving model data for each component from a second storage component, such that the model data is representative of physical properties of a respective component. The method may also include generating the assembly instructions for assembling the components together based on the attribute data and the model data, such that the assembly instructions include visualizations representative of at least a portion of a process for assembling the plurality of components together. The assembly instructions are then presented via an electronic display.
    Type: Grant
    Filed: July 3, 2019
    Date of Patent: August 23, 2022
    Assignee: Rockwell Automation Technologies, Inc.
    Inventors: Kyle Kocher, Kelly M. Kunowski, Mason Khan, James Furukawa, Troy M. Bellows, Scott B. Lasko, Mayo D. Hemmingson, Charles L. Quentin, Anthony M. Avila, Austen K. Scudder, Karen R. Hecht, Juerg Merki, Guillermo Garcia, Maciej Branicki
  • Patent number: 11422725
    Abstract: A method of storing a set of data representing a point cloud, comprising: creating an array in a digital memory having cells addressable by reference to at least one index, wherein each of the at least one indices has a predetermined correspondence to a geometric location within the point cloud; and storing a value of the data set in each of the cells.
    Type: Grant
    Filed: July 25, 2017
    Date of Patent: August 23, 2022
    Assignee: GENERAL ELECTRIC COMPANY
    Inventor: Justin Mamrak
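    The claim describes addressing point-cloud values through array indices that have a predetermined correspondence to geometric locations. A minimal way to picture that is a dense voxel grid keyed by quantized coordinates, as sketched below; this is an illustrative assumption, not GE's claimed data structure, and all names are hypothetical.

```python
# Store point-cloud values in a dense array whose indices map to geometric cells.
import numpy as np

class GridCloud:
    def __init__(self, origin, cell_size, shape):
        self.origin = np.asarray(origin, dtype=float)
        self.cell = float(cell_size)
        self.values = np.full(shape, np.nan)          # one stored value per cell

    def index(self, point):
        """Map a 3D coordinate to its (i, j, k) cell index."""
        return tuple(((np.asarray(point) - self.origin) // self.cell).astype(int))

    def store(self, point, value):
        self.values[self.index(point)] = value

    def lookup(self, point):
        return self.values[self.index(point)]

grid = GridCloud(origin=(0, 0, 0), cell_size=0.5, shape=(20, 20, 20))
grid.store((1.2, 3.4, 0.7), value=42.0)
print(grid.lookup((1.2, 3.4, 0.7)))                   # -> 42.0
```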
  • Patent number: 11425364
    Abstract: A head-up display system for a vehicle includes a head-up display apparatus and an occupant monitoring apparatus. The head-up display apparatus includes a projection device configured to project a three-dimensional image in front of an occupant of the vehicle. The head-up display apparatus is configured to project a projection image with respect to an object present outside of the vehicle. The occupant monitoring apparatus includes an imaging device configured to capture an image of the occupant. The occupant monitoring apparatus is configured to generate personal data related to a sense of sight of the occupant on the basis of the captured image of the occupant. The head-up display apparatus is configured to adjust the projection image in the three-dimensional image on the basis of the personal data generated by the occupant monitoring apparatus, and project, from the projection device, the three-dimensional image in which the projection image is adjusted.
    Type: Grant
    Filed: October 1, 2020
    Date of Patent: August 23, 2022
    Assignee: SUBARU CORPORATION
    Inventor: Ryota Nakamura
  • Patent number: 11423519
    Abstract: Systems and methods may provide a plurality of distortion meshes that compensate for radial and chromatic aberrations created by optical lenses. The plurality of distortion meshes may include different lens specific parameters that allow the distortion meshes to compensate for chromatic aberrations created within received images. The plurality of distortion meshes may correspond to a red color channel, green color channel, or blue color channel to compensate for the chromatic aberrations. The distortion meshes may also include shaped distortions and grids to compensate for radial distortions, such as pin cushion distortions. In one example, the system uses a barrel-shaped distortion and a triangulation grid to compensate for the distortions created when the received image is displayed on a lens.
    Type: Grant
    Filed: August 27, 2019
    Date of Patent: August 23, 2022
    Assignee: Intel Corporation
    Inventor: Daniel Pohl
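    As a rough illustration of per-channel distortion meshes (not Intel's implementation), the sketch below warps a vertex grid with a one-term radial model, using a slightly different coefficient for each color channel; a negative coefficient gives the barrel-shaped pre-distortion that counteracts a lens's pincushion distortion. All coefficient values are made-up placeholders.

```python
# Illustrative per-channel distortion meshes; coefficients are arbitrary.
import numpy as np

def distortion_mesh(n=16, k1=-0.22):
    """(n*n, 2) vertex grid in [-1, 1]^2 after a one-term radial distortion."""
    u, v = np.meshgrid(np.linspace(-1, 1, n), np.linspace(-1, 1, n))
    xy = np.stack([u.ravel(), v.ravel()], axis=1)
    r2 = np.sum(xy ** 2, axis=1, keepdims=True)
    return xy * (1.0 + k1 * r2)              # negative k1 -> barrel-shaped warp

# One mesh per color channel; the coefficients differ slightly to cancel
# the wavelength-dependent (chromatic) part of the aberration.
meshes = {
    "red":   distortion_mesh(k1=-0.225),
    "green": distortion_mesh(k1=-0.220),
    "blue":  distortion_mesh(k1=-0.215),
}
print(meshes["red"].shape)                   # (256, 2)
```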
  • Patent number: 11423600
    Abstract: The present disclosure relates to methods and apparatus for configuring a texture filtering logic unit for deep learning operation. The apparatus can map one or more inputs of a deep learning operation to a respective input of a texture filtering logic unit in a graphics pipeline. Moreover, the apparatus can generate, by the texture filtering logic unit, at least one output for the deep learning operation based on the one or more inputs mapped to the texture filtering logic unit. Furthermore, the apparatus can communicate the at least one output to a programmable shader, which can analyze the output result to determine information relating to an input image based on the deep learning operation.
    Type: Grant
    Filed: July 24, 2020
    Date of Patent: August 23, 2022
    Assignee: QUALCOMM Incorporated
    Inventors: Liang Li, Elina Kamenetskaya, Andrew Evan Gruber
  • Patent number: 11425354
    Abstract: Described are methods for identifying the in-field positions of plant features on a plant by plant basis. These positions are determined based on images captured as a vehicle (e.g., tractor, sprayer, etc.) including one or more cameras travels through the field along a row of crops. The in-field positions of the plant features are useful for a variety of purposes including, for example, generating three-dimensional data models of plants growing in the field, assessing plant growth and phenotypic features, determining what kinds of treatments to apply including both where to apply the treatments and how much, determining whether to remove weeds or other undesirable plants, and so on.
    Type: Grant
    Filed: September 4, 2020
    Date of Patent: August 23, 2022
    Assignee: Blue River Technology Inc.
    Inventor: Lee Kamp Redden
  • Patent number: 11423562
    Abstract: A device and method for obtaining distance information from views is provided. The method includes: generating epipolar images from a light field captured by a light field acquisition device; an edge detection step for detecting, in the epipolar images, edges of objects in the scene captured by the light field acquisition device; detecting, in each epipolar image, valid epipolar lines formed by a set of edges; and determining the slopes of the valid epipolar lines. The edge detection step may calculate a second spatial derivative for each pixel of the epipolar images and detect the zero-crossings of the second spatial derivatives, to detect object edges with subpixel precision. The method may be performed by low-cost mobile devices to calculate real-time depth maps from depth-camera recordings.
    Type: Grant
    Filed: October 18, 2016
    Date of Patent: August 23, 2022
    Assignee: PHOTONIC SENSORS & ALGORITHMS, S.L.
    Inventors: Jorge Vicente Blasco Claret, Carles Montoliu Alvaro, Arnau Calatayud Calatayud
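    The core numerical step above (locating edges at zero-crossings of a second spatial derivative and reading depth from epipolar-line slopes) can be sketched in a few lines of numpy. The code below is a simplified, hypothetical illustration of that idea, not the patented pipeline, and the synthetic epipolar image is made up.

```python
# Subpixel edge detection on an epipolar image via 2nd-derivative zero-crossings.
import numpy as np

def subpixel_zero_crossings(row):
    """Subpixel column positions where the 2nd spatial derivative changes sign."""
    d2 = np.gradient(np.gradient(row))
    s = np.sign(d2)
    idx = np.where(s[:-1] * s[1:] < 0)[0]
    return idx + d2[idx] / (d2[idx] - d2[idx + 1])   # linear interpolation to subpixel

def epipolar_slope(epi):
    """Fit one line through the edge positions of each row (a valid epipolar line)."""
    pts = [(r, c) for r in range(epi.shape[0]) for c in subpixel_zero_crossings(epi[r])]
    rows, cols = zip(*pts)
    return np.polyfit(rows, cols, 1)[0]              # slope relates to object depth

# Synthetic epipolar image whose edge drifts ~0.5 px per row.
epi = np.array([[1.0 * (c > 5 + 0.5 * r) for c in range(16)] for r in range(8)])
print(epipolar_slope(epi))
```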
  • Patent number: 11422259
    Abstract: Techniques are discussed for using multi-resolution maps, for example, for localizing a vehicle. Map data of an environment can be represented by discrete map tiles. In some cases, a set of map tiles can be precomputed as contributing to localizing the vehicle in the environment, and accordingly, the set of map tiles can be loaded into memory when the vehicle is at a particular location in the environment. Further, a level of detail represented by the map tiles can be based at least in part on a distance between a location associated with the vehicle and a location associated with a respective region in the environment. The level of detail can also be based on a speed of the vehicle in the environment. The vehicle can determine its location in the environment based on the map tiles and/or the vehicle can generate a trajectory based on the map tiles.
    Type: Grant
    Filed: June 28, 2018
    Date of Patent: August 23, 2022
    Assignee: Zoox, Inc.
    Inventors: Nitesh Shroff, Brice Rebsamen, Elena Stephanie Stumm
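    A toy version of distance- and speed-dependent level of detail for map tiles is sketched below. It is only an illustration of the idea with arbitrary thresholds, not Zoox's precomputation or loading logic.

```python
# Toy LOD selection for map tiles from distance and speed (thresholds arbitrary).
import math

def tile_lod(distance_m, speed_mps):
    """Return 0 (finest) .. 2 (coarsest). Faster travel extends the fine-detail radius."""
    scale = 1.0 + speed_mps / 15.0
    if distance_m < 50.0 * scale:
        return 0
    if distance_m < 150.0 * scale:
        return 1
    return 2

def tiles_to_load(vehicle_xy, tile_centers, speed_mps):
    """Map each tile index to the level of detail it should be loaded at."""
    return {
        i: tile_lod(math.hypot(cx - vehicle_xy[0], cy - vehicle_xy[1]), speed_mps)
        for i, (cx, cy) in enumerate(tile_centers)
    }

print(tiles_to_load((0.0, 0.0), [(10.0, 0.0), (120.0, 40.0), (900.0, 0.0)], speed_mps=20.0))
```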
  • Patent number: 11423610
    Abstract: Embodiments of the invention provide systems and methods of generating a complete and accurate geometrically optimized environment. Stereo pair images depicting an environment are selected from a plurality of images to generate a Digital Surface Model (DSM). Characteristics of objects in the environment are determined and identified. The geometry of the objects may be determined and fit with polygons and textured facades. By determining the objects, the geometry, and the material from original satellite imagery and from a DSM created from the matching stereo pair point clouds, a complete and accurate geometrically optimized environment is created.
    Type: Grant
    Filed: November 26, 2019
    Date of Patent: August 23, 2022
    Assignee: Applied Research Associates, Inc.
    Inventors: Benjamin L. Raskob, Nicholas A. Maxwell, Steven Craig Stutts, John-Richard Papadakis, Graham Rhodes, Chris W. Driscoll, Alberico Menozzi, Michael C. Tarnowski
  • Patent number: 11423626
    Abstract: A computer implemented method for warping virtual content from two sources includes a first source generating first virtual content based on a first pose. The method also includes a second source generating second virtual content based on a second pose. The method further includes a compositor processing the first and second virtual content in a single pass. Processing the first and second virtual content includes generating warped first virtual content by warping the first virtual content based on a third pose, generating warped second virtual content by warping the second virtual content based on the third pose, and generating output content by compositing the warped first and second virtual content.
    Type: Grant
    Filed: February 18, 2021
    Date of Patent: August 23, 2022
    Assignee: Magic Leap, Inc.
    Inventors: Reza Nourai, Robert Blake Taylor
  • Patent number: 11422627
    Abstract: A method for controlling an electronic device and an electronic device are provided. The method includes displaying at least one object on a touch screen of the electronic device; identifying a first input at a position corresponding to the at least one object displayed on the touch screen; identifying the at least one object based on the first input; identifying a second input on the touch screen; displaying the identified at least one object at a location on the touch screen of the electronic device based on the identified second input; and providing feedback based on displaying the identified at least one object.
    Type: Grant
    Filed: July 2, 2019
    Date of Patent: August 23, 2022
    Inventors: Ju-Youn Lee, Jin-Hyoung Park, Sang-Hyup Lee
  • Patent number: 11423616
    Abstract: In one embodiment, a system may access an input image of an object captured by cameras, where the input image depicts appearance information associated with the object. The system may generate a first mesh of the object based on features identified from the input image of the object. The system may generate, by processing the first mesh using a machine-learning model, a position map that defines a contour of the object. Each pixel in the position map corresponds to a three-dimensional coordinate. The system may further generate a second mesh based on the position map, wherein the second mesh has a higher resolution than the first mesh. The system may render an output image of the object based on the second mesh. The disclosed system can thus render a dense mesh at a higher resolution, providing details that cannot be compensated for by texture information.
    Type: Grant
    Filed: March 27, 2020
    Date of Patent: August 23, 2022
    Assignee: Facebook Technologies, LLC.
    Inventors: Tomas Simon Kreuz, Jason Saragih, Stephen Anthony Lombardi, Shugao Ma, Gabriel Bailowitz Schwartz
  • Patent number: 11425350
    Abstract: A storage part retains: screen positioning data indicating a position, orientation, and shape of the screen in a reference coordinate space; image capture part positioning data indicating the position and orientation of the user space image capture parts in the reference coordinate space; and three-dimensional data representing a three-dimensional object in the reference coordinate space. A processing part: identifies a user viewpoint position on the basis of the image capture part positioning data and the user space images; generates, on the basis of the user viewpoint position, the screen positioning data, and the three-dimensional data, a display image in which the three-dimensional object is viewable as though it had been seen in a virtual space from the user viewpoint position via the screen; and causes said display image to be displayed on the screen of a display.
    Type: Grant
    Filed: March 8, 2018
    Date of Patent: August 23, 2022
    Assignee: VIRTUALWINDOW CO., LTD.
    Inventor: Rui Sato
  • Patent number: 11423615
    Abstract: Described are techniques for producing a three-dimensional model of a scene from one or more two-dimensional images. The techniques include: receiving, by a computing device, one or more two-dimensional digital images of a scene, each image including plural pixels; applying the received image data to a scene generator/scene understanding engine that produces, from the one or more digital images, a metadata output that includes depth prediction data for at least some of the plural pixels in the two-dimensional image and that produces metadata for controlling a three-dimensional computer model engine; and outputting the metadata to a three-dimensional computer model engine to produce a three-dimensional digital computer model of the scene depicted in the two-dimensional image.
    Type: Grant
    Filed: May 28, 2019
    Date of Patent: August 23, 2022
    Assignee: HL Acquisition, Inc.
    Inventor: Rachelle Villalon
  • Patent number: 11422681
    Abstract: Non-limiting examples of the present disclosure describe an application command control user interface menu to facilitate user interaction between a user and a mobile application. An application command control menu is displayed on a display screen of a processing device. An input may be received in an application canvas of a launched application. The application canvas may be positioned above the application command control menu on the display screen. In response to a received input into the application canvas, a soft input keyboard application may be displayed. The soft input keyboard application may be displayed below the application command control menu on the display screen. A selection may be received in the application command control menu. In response to the received selection, display of the application command control menu may be expanded to replace display of the soft input keyboard application on the display screen. Other examples are also described.
    Type: Grant
    Filed: October 12, 2015
    Date of Patent: August 23, 2022
    Assignee: Microsoft Technology Licensing, LLC
    Inventors: Vlad Riscutia, Julie Seto, Maya Rodrig, Matthew Vogel, Ramy Bebawy, Sunder Raman, Edward Augustus Layne, Jr., Jon Bell, Choon-Mun Hooi, Kimberly Koenig
  • Patent number: 11417072
    Abstract: A system and method of selecting personal protection equipment (PPE) for a worker. The system identifies sets of PPEs and simulates a fitting of the identified PPE sets to the worker. Fitting includes, for each of the identified PPE sets, selecting a three-dimensional computer model for each article of the PPE set, placing the computer model of each of the PPE articles of the PPE set being fitted on a computer model representing the worker, identifying collisions between the computer models as placed, and taking an action based on the identified collisions.
    Type: Grant
    Filed: May 15, 2019
    Date of Patent: August 16, 2022
    Assignee: 3M Innovative Properties Company
    Inventors: Alexandra R. Cunliffe, Caroline M. Ylitalo, Guruprasad Somasundaram, Benjamin D. Zimmer, Jonathan D. Gandrud, Claire R. Donoghue
  • Patent number: 11417062
    Abstract: In an image having a continuous 360-degree visual field, such as a celestial sphere image, in a case where a partial region is displayed as an output target region, transition from one output target region to another is realized smoothly and naturally, without visual discomfort. To this end, a transition source output target region and a transition destination output target region are specified, each being an output target region, that is, a partial region of the entire image, which has a continuous 360-degree visual field in at least one direction. A visual field transition path from the specified transition source output target region to the specified transition destination output target region is then automatically determined.
    Type: Grant
    Filed: October 30, 2020
    Date of Patent: August 16, 2022
    Assignee: Sony Group Corporation
    Inventors: Yuta Nakao, Yuki Ono, Daisuke Tahara
  • Patent number: 11417004
    Abstract: Disclosed are various embodiments of variable transform systems for three-dimensional engines. In some aspects, transform data is identified for an object. The object is associated with a base transform class of a three-dimensional engine. A variable transform class generates global transform data using the transform data. The global transform data is expressed according to a Cartesian coordinate system used by the three-dimensional engine. The variable transform class provides the global transform data to the base transform class of the three-dimensional engine to position the object in world space.
    Type: Grant
    Filed: August 24, 2020
    Date of Patent: August 16, 2022
    Assignee: VMWARE, INC.
    Inventors: Arjun Dube, Andrew Buccellato
  • Patent number: 11417049
    Abstract: A system for real-time updates to a display based upon the location of a camera or a detected location of a human viewing the display or both is disclosed. The system enables real-time filming of an augmented reality display that reflects realistic perspective shifts. The display may be used for filming, or may be used as a “game” or informational screen in a physical location, or other applications. The system also enables the use of real-time special effects that are centered upon an actor or other human to be visualized on a display, with appropriate perspective shift for the location of the human relative to the display and the location of the camera relative to the display.
    Type: Grant
    Filed: June 9, 2020
    Date of Patent: August 16, 2022
    Assignee: ARWall, Inc.
    Inventors: Leon Hui, Rene Amador, William Hellwarth, Michael Plescia
  • Patent number: 11417052
    Abstract: Systems and methods of generating ground truth datasets for producing virtual reality (VR) experiences, for testing simulated sensor configurations, and for training machine-learning algorithms. In one example, a recording device with one or more cameras and one or more inertial measurement units captures images and motion data along a real path through a physical environment. A SLAM application uses the captured data to calculate the trajectory of the recording device. A polynomial interpolation module uses Chebyshev polynomials to generate a continuous time trajectory (CTT) function. The method includes identifying a virtual environment and assembling a simulated sensor configuration, such as a VR headset. Using the CTT function, the method includes generating a ground truth output dataset that represents the simulated sensor configuration in motion along a virtual path through the virtual environment.
    Type: Grant
    Filed: June 9, 2021
    Date of Patent: August 16, 2022
    Assignee: Snap Inc.
    Inventors: Kai Zhou, Qi Qi, Jeroen Hol
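    The continuous-time trajectory idea above (fitting Chebyshev polynomials to discrete SLAM poses so the trajectory can be sampled at arbitrary timestamps) can be illustrated with numpy's Chebyshev class. The sketch below uses a toy trajectory and is not Snap's implementation; the degree, names, and data are assumptions.

```python
# Chebyshev fit of a continuous-time trajectory (toy data, illustrative only).
import numpy as np
from numpy.polynomial.chebyshev import Chebyshev

t = np.linspace(0.0, 10.0, 200)                       # discrete pose timestamps (s)
xyz = np.stack([np.sin(t), np.cos(t), 0.1 * t], 1)    # toy positions from "SLAM"

# One Chebyshev series per coordinate; the degree is a tuning choice.
ctt = [Chebyshev.fit(t, xyz[:, k], deg=12) for k in range(3)]

def sample(ts):
    """Evaluate the continuous-time trajectory at arbitrary timestamps."""
    return np.stack([c(ts) for c in ctt], axis=-1)

print(sample(np.array([0.05, 3.33, 9.999])))          # positions between keyframes
```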
  • Patent number: 11416000
    Abstract: A method of navigational ray casting in a computing device includes: obtaining a distance map having a plurality of cells representing respective sub-regions of an environment containing obstacles; wherein each cell defines a minimum obstacle distance indicating a distance from the corresponding sub-region to a nearest one of the obstacles; selecting an origin cell from the plurality of cells, and setting the origin cell as a current cell; selecting a ray cast direction for a ray originating from the origin cell; retrieving the minimum obstacle distance defined by the current cell; selecting a test cell at the minimum obstacle distance from the current cell in the ray cast direction; determining whether the test cell indicates the presence of one of the obstacles; and when the determination is affirmative, determining a total distance between the origin cell and the test cell.
    Type: Grant
    Filed: December 7, 2018
    Date of Patent: August 16, 2022
    Assignee: Zebra Technologies Corporation
    Inventors: Sadegh Tajeddin, Harsoveet Singh, Feng Cao
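    The distance-map ray cast described above amounts to "sphere tracing" on a grid: each cell stores the distance to the nearest obstacle, so a ray can safely advance by that amount instead of stepping cell by cell. The 2D sketch below illustrates that idea with scipy's Euclidean distance transform; it is not Zebra's implementation, and all parameters are assumptions.

```python
# 2D ray marching over a minimum-obstacle-distance map (illustrative sketch).
import math
import numpy as np
from scipy.ndimage import distance_transform_edt

def cast_ray(dist_map, origin, angle, eps=0.5, max_steps=100):
    """Distance travelled until an obstacle cell is reached, or None if the ray exits."""
    dx, dy = math.cos(angle), math.sin(angle)
    x, y = origin
    travelled = 0.0
    for _ in range(max_steps):
        i, j = int(y), int(x)                      # current cell (unit-sized cells)
        if not (0 <= i < dist_map.shape[0] and 0 <= j < dist_map.shape[1]):
            return None                            # left the map without hitting anything
        d = dist_map[i, j]
        if d <= eps:                               # test cell indicates an obstacle
            return travelled
        x, y = x + dx * d, y + dy * d              # jump by the minimum obstacle distance
        travelled += d
    return None

occupancy = np.zeros((20, 20)); occupancy[10, 15] = 1        # a single obstacle
dmap = distance_transform_edt(occupancy == 0)                # distance to nearest obstacle
print(cast_ray(dmap, origin=(0.5, 10.5), angle=0.0))         # ray along +x stops near column 15
```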
  • Patent number: 11410395
    Abstract: A cross reality system enables any of multiple devices to efficiently and accurately access previously persisted maps of very large scale environments and render virtual content specified in relation to those maps. The cross reality system may build a persisted map, which may be in canonical form, by merging tracking maps from the multiple devices. A map merge process determines the mergibility of a tracking map with a canonical map and merges the tracking map with the canonical map in accordance with mergibility criteria, such as when a gravity direction of the tracking map aligns with a gravity direction of the canonical map. Refraining from merging maps when the orientation of the tracking map with respect to gravity is not preserved avoids distortions in persisted maps and enables the multiple devices, which may use the maps to determine their locations, to present more realistic and immersive experiences for their users.
    Type: Grant
    Filed: February 11, 2021
    Date of Patent: August 9, 2022
    Assignee: Magic Leap, Inc.
    Inventors: Miguel Andres Granados Velasquez, Javier Victorio Gomez Gonzalez, Mukta Prasad, Eran Guendelman, Ali Shahrokni, Ashwin Swaminathan
  • Patent number: 11407621
    Abstract: A crane creates a 3D map on the basis of three-dimensional information acquired by a three-dimensional information obtaining section that is provided on a boom. The three-dimensional information obtaining section: is configured to be capable of accumulating acquired three-dimensional information and to be capable of changing a measurement direction, a measurement range, and a measurement density; and creates the 3D map by superimposing accumulated three-dimensional information. When an operation signal for a swivel operation for a swivel base, a hoisting operation for the boom, or an extension/retraction operation for the boom has been detected, on the basis of the movement direction and movement speed of the boom as calculated from a detected value for the operation signal, the three-dimensional information obtaining section: changes the measurement direction; and narrows the measurement range and increases the measurement density as compared to when the operation signal has not been detected.
    Type: Grant
    Filed: December 8, 2017
    Date of Patent: August 9, 2022
    Assignee: TADANO LTD.
    Inventors: Iwao Ishikawa, Keisuke Tamaki
  • Patent number: 11410387
    Abstract: In one embodiment for generating passthrough, a computing system may access images of an environment captured by cameras of a device worn by a user. The system may generate, based on the images, depth measurements of objects in the environment. The system may generate a mesh covering a field of view of the user and then update the mesh based on the depth measurements to represent a contour of the objects in the environment. The system may determine a first viewpoint of a first eye of the user and render a first output image based on the first viewpoint and the updated mesh. The system may then display the first output image on a first display of the device, the first display being configured to be viewed by the first eye of the user.
    Type: Grant
    Filed: January 17, 2020
    Date of Patent: August 9, 2022
    Assignee: Facebook Technologies, LLC.
    Inventors: Matthew James Alderman, Gaurav Chaurasia, Paul Timothy Furgale, Lingwen Gan, Alexander Sorkine Hornung, Alexandru-Eugen Ichim, Arthur Nieuwoudt, Jan Oberländer, Gian Diego Tipaldi
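    The mesh-based passthrough idea above (a grid covering the field of view whose vertices are pushed out to the measured depths) can be pictured in a few lines of numpy. This is a simplified sketch under my own assumptions, not the patented pipeline; the per-eye rendering step is omitted.

```python
# Build a view-covering grid mesh and displace its vertices by measured depth.
import numpy as np

def passthrough_mesh(depth, fov_deg=90.0):
    """depth: (n, n) metres per vertex -> (n*n, 3) vertex positions in eye space."""
    n = depth.shape[0]
    half = np.tan(np.radians(fov_deg) / 2.0)
    u, v = np.meshgrid(np.linspace(-half, half, n), np.linspace(-half, half, n))
    rays = np.stack([u, v, np.ones_like(u)], axis=-1)
    rays /= np.linalg.norm(rays, axis=-1, keepdims=True)     # unit view rays
    return (rays * depth[..., None]).reshape(-1, 3)          # coarse contour of the scene

verts = passthrough_mesh(np.full((32, 32), 2.0))             # everything measured 2 m away
print(verts.shape)                                           # (1024, 3)
```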
  • Patent number: 11410375
    Abstract: A system and method for content creation via interactive layers is provided. Parameters for an artifact are received. A mutable general object on which to build the artifact is maintained and includes a plurality of n-dimensional data units. Layers of data for the artifact are generated via different generators. Each layer of the artifact represents a set of characteristics based on arrangements of the data units. Each layer is generated by obtaining data about an arrangement of the data units for that layer, from one or more layers of the artifact prior to that data layer, and creating the layer to mutate the data units based on the data from one or more prior data layers and the received parameters. The artifact is formed by stacking the layers via the mutable general object. Each data layer is stored with the generator for that layer as a string of characters.
    Type: Grant
    Filed: February 9, 2021
    Date of Patent: August 9, 2022
    Assignee: Palo Alto Research Center Incorporated
    Inventors: Jacob Le, Gregory Michael Youngblood, Robert Thomas Krivacic, Jichen Zhu
  • Patent number: 11410363
    Abstract: A modeling method searches for a sequence matched to a user input using a fluid animation graph generated based on similarities among frames included in sequences of the fluid animation graph, and models a movement corresponding to the user input based on a result of the searching. A corresponding apparatus and a method of preprocessing for such modeling are also provided.
    Type: Grant
    Filed: August 20, 2020
    Date of Patent: August 9, 2022
    Assignee: Samsung Electronics Co., Ltd.
    Inventors: Nahyup Kang, Donghoon Sagong, Hyong Euk Lee, Hwiryong Jung
  • Patent number: 11410439
    Abstract: Systems and methods are disclosed for capturing multiple sequences of views of a three-dimensional object using a plurality of virtual cameras. The systems and methods generate aligned sequences from the multiple sequences based on an arrangement of the plurality of virtual cameras in relation to the three-dimensional object. Using a convolutional network, the systems and methods classify the three-dimensional object based on the aligned sequences and identify the three-dimensional object using the classification.
    Type: Grant
    Filed: May 8, 2020
    Date of Patent: August 9, 2022
    Assignee: Snap Inc.
    Inventors: Yuncheng Li, Zhou Ren, Ning Xu, Enxu Yan, Tan Yu
  • Patent number: 11410367
    Abstract: A computer system is used to host a virtual reality universe process in which multiple avatars are independently controlled in response to client input. The host provides coordinated motion information for defining coordinated movement between designated portions of multiple avatars, and an application responsive to detect conditions triggering a coordinated movement sequence between two or more avatars. During coordinated movement, user commands for controlling avatar movement may be in part used normally and in part ignored or otherwise processed to cause the involved avatars to respond in part to respective client input and in part to predefined coordinated movement information. Thus, users may be assisted with executing coordinated movement between multiple avatars.
    Type: Grant
    Filed: March 15, 2021
    Date of Patent: August 9, 2022
    Assignee: PFAQUTRUMA RESEARCH LLC
    Inventor: Brian Mark Shuster
  • Patent number: 11410438
    Abstract: Analysis for convolutional processing is performed using logic encoded in a semiconductor processor. The semiconductor chip evaluates pixels within an image of a person in a vehicle, where the analysis identifies a facial portion of the person. The facial portion of the person can include facial landmarks or regions. The semiconductor chip identifies one or more facial expressions based on the facial portion. The facial expressions can include a smile, frown, smirk, or grimace. The semiconductor chip classifies the one or more facial expressions for cognitive response content. The semiconductor chip evaluates the cognitive response content to produce cognitive state information for the person. The semiconductor chip enables manipulation of the vehicle based on communication of the cognitive state information to a component of the vehicle.
    Type: Grant
    Filed: November 8, 2019
    Date of Patent: August 9, 2022
    Assignee: Affectiva, Inc.
    Inventors: Rana el Kaliouby, Abdelrahman N. Mahmoud, Taniya Mishra, Boisy G. Pitre, Panu James Turcot, Andrew Todd Zeilman
  • Patent number: 11410402
    Abstract: A computer-implemented method for making a skeleton of a modeled human or animal body take a posture, including obtaining a first and a second skeleton each comprising rotational joints connected by bones, each rotational joint of the second skeleton being associated to a respective joint of the first skeleton, determining a relative configuration of the second skeleton, mapping each joint of the first skeleton to a joint of the second skeleton, making the first skeleton take a posture defined by a rotational state for each joint of the first skeleton, and computing transformation matrices for the joints of the second skeleton such that a change is minimized, said second skeleton further including a prismatic joint on at least one of its bones, and determining rotations of the rotational joints and translation of the prismatic joint or joints of the second skeleton such that change is minimized.
    Type: Grant
    Filed: October 14, 2020
    Date of Patent: August 9, 2022
    Assignee: DASSAULT SYSTEMES
    Inventors: Sarath Reddi, Pinghan Chen
  • Patent number: 11409105
    Abstract: Aspects of the present invention relate to methods and systems for see-through computer display systems with integrated IR eye imaging technologies.
    Type: Grant
    Filed: July 24, 2017
    Date of Patent: August 9, 2022
    Assignee: Mentor Acquisition One, LLC
    Inventor: John D. Haddick
  • Patent number: 11410328
    Abstract: This disclosure relates to maintaining a feature point map. The maintaining can include selectively updating feature points in the feature point map based on an assigned classification of the feature points. For example, when a feature point is assigned a first classification, the feature point is updated whenever information indicates that the feature point should be updated. In such an example, when the feature point is assigned a second classification different from the first classification, the feature point forgoes being updated even when information indicates that the feature point should be updated. A classification can be assigned to a feature point using a classification system on one or more pixels of an image corresponding to the feature point.
    Type: Grant
    Filed: April 29, 2020
    Date of Patent: August 9, 2022
    Assignee: Apple Inc.
    Inventors: Bruno M. Sommer, Alexandre da Veiga
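    A toy version of the classification-gated update policy is sketched below; the "dynamic"/"static" labels and all names are my assumptions for illustration, not Apple's implementation.

```python
# Feature points are updated or left alone depending on their assigned classification.
DYNAMIC, STATIC = "dynamic", "static"        # hypothetical first/second classifications

class FeaturePointMap:
    def __init__(self):
        self.points = {}                     # id -> (position, classification)

    def add(self, pid, position, classification):
        self.points[pid] = (position, classification)

    def maybe_update(self, pid, new_position):
        position, classification = self.points[pid]
        if classification == DYNAMIC:        # first classification: accept the update
            self.points[pid] = (new_position, classification)
        # second classification: forgo the update even though new data arrived

fmap = FeaturePointMap()
fmap.add(1, (0.0, 0.0, 1.0), DYNAMIC)
fmap.add(2, (2.0, 0.0, 1.0), STATIC)
fmap.maybe_update(1, (0.1, 0.0, 1.0))        # applied
fmap.maybe_update(2, (2.5, 0.0, 1.0))        # ignored
print(fmap.points)
```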
  • Patent number: 11410570
    Abstract: A method for operating a comprehensive three-dimensional teaching field, including: collecting, by a sensor, depth data of a real teaching space, point cloud data of a teacher, and voice data of the teacher; performing calculation and caching of an architecture for data storage, transmission, and rendering of a virtual teaching space based on edge cloud; building a database model of the virtual teaching space by using an R-tree spatial index structure to realize distributed data storage; generating a virtual avatar model that updates in real time by positioning and tracking an action of a user; and displaying an image of the virtual teaching space on terminals of the teacher and a student through encoding, uploading, 5G rendering, and decoding by using a 5G link.
    Type: Grant
    Filed: January 13, 2022
    Date of Patent: August 9, 2022
    Assignee: Central China Normal University
    Inventors: Zongkai Yang, Zheng Zhong, Di Wu, Xu Chen
  • Patent number: 11410323
    Abstract: A method for training a convolutional neural network to reconstruct an image. The method includes forming a common loss function based on the left and right images (IL, IR), reconstructed left and right images (I?L, I?R), disparity maps (dL, dR), reconstructed disparity maps (d?L, d?R) for the left and right images (IL, IR), and the auxiliary images (I?L, I?R), and training the neural network based on the formed loss function.
    Type: Grant
    Filed: October 30, 2020
    Date of Patent: August 9, 2022
    Inventors: Valery Valerievich Anisimovskiy, Andrey Yurievich Shcherbinin, Sergey Alexandrovich Turko
  • Patent number: 11403827
    Abstract: Embodiments resolve hemisphere ambiguity at a system comprising sensors. A hand-held controller of the system emits magnetic fields. Sensors positioned within a headset of the system detect the magnetic fields. A first position and orientation of the hand-held controller is determined within a first hemisphere with respect to the headset based on the magnetic fields. A second position and orientation of the hand-held controller is determined within a second hemisphere, diametrically opposite the first hemisphere, with respect to the headset based on the magnetic fields. A normal vector is determined with respect to the headset, and a position vector identifying a position of the hand-held controller with respect to the headset in the first hemisphere. A dot-product of the normal vector and the position vector is calculated, and the first position and orientation of the hand-held controller is determined to be accurate when a result of the dot-product is positive.
    Type: Grant
    Filed: December 28, 2020
    Date of Patent: August 2, 2022
    Assignee: Magic Leap, Inc.
    Inventor: Sheng Wan
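    The hemisphere test described above reduces to the sign of a dot product, as in the toy numpy sketch below; the vectors and the flip-to-mirror fallback are illustrative assumptions, not Magic Leap's code.

```python
# Resolve hemisphere ambiguity by the sign of normal . position (illustrative).
import numpy as np

headset_normal = np.array([0.0, 0.0, 1.0])       # forward direction of the headset

def resolve_hemisphere(candidate_position):
    """Return the candidate or its mirror, whichever lies in front of the headset."""
    if np.dot(headset_normal, candidate_position) > 0:
        return candidate_position                # positive dot product: first solution is accurate
    return -candidate_position                   # otherwise take the diametrically opposite one

print(resolve_hemisphere(np.array([0.2, -0.1, 0.6])))    # kept as-is
print(resolve_hemisphere(np.array([0.2, -0.1, -0.6])))   # flipped into the forward hemisphere
```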
  • Patent number: 11405558
    Abstract: Techniques are described for using computing devices to perform automated operations to control acquisition of images in a defined area, including obtaining and using data from one or more hardware sensors on a mobile device that is acquiring the images, analyzing the sensor data (e.g., in a real-time manner) to determine the geometric orientation of the mobile device in three-dimensional (3D) space, and using that determined orientation to control the acquisition of further images by the mobile device. In some situations, the determined orientation information may be used in part to automatically generate and display a corresponding GUI (graphical user interface) that is overlaid on and augments displayed images of the environment surrounding the mobile device during the image acquisition process, so as to control the mobile device's geometric orientation in 3D space.
    Type: Grant
    Filed: October 28, 2020
    Date of Patent: August 2, 2022
    Assignee: Zillow, Inc.
    Inventors: Mitchell David Dawson, Li Guan, Andrew H. Otwell, Dun-Yu Hsiao
  • Patent number: 11399803
    Abstract: A method and ultrasound imaging system include displaying a plurality of steps of a workflow on a touchscreen, graphically indicating one of the plurality of steps in the workflow, displaying a text instruction describing the one of the steps in the workflow on the touchscreen at the same time as displaying the plurality of steps, and displaying an ultrasound image on a main display at the same time as displaying the plurality of steps and the text instruction on the touchscreen. The method and system include implementing the step described in the text instruction on the ultrasound image displayed on the main display.
    Type: Grant
    Filed: August 8, 2018
    Date of Patent: August 2, 2022
    Assignee: General Electric Company
    Inventors: Walter Duda, Klaus Pintoffl, Simon Scharinger
  • Patent number: 11403606
    Abstract: Implementations of the present specification disclose mobile payment methods, apparatuses, and devices. In one aspect, the method includes: monitoring, by a terminal device that is in lock screen mode, outputs of one or more sensors of the terminal device; determining that the outputs of the one or more sensors satisfy first specified criteria indicative of one or more particular body movements; in response to determining that the outputs of the one or more sensors satisfy the first specified criteria, displaying, on the terminal device, a payment processing interface; receiving an input through the payment processing interface; and executing a payment service based on the input.
    Type: Grant
    Filed: May 29, 2020
    Date of Patent: August 2, 2022
    Assignee: Advanced New Technologies Co., Ltd.
    Inventors: Jing Li, Chunpei Feng, Wenbo Yang, Mian Huang
  • Patent number: 11403531
    Abstract: The disclosure provides an approach for learning latent representations of data using factorized variational autoencoders (FVAEs). The FVAE framework builds a hierarchical Bayesian matrix factorization model on top of a variational autoencoder (VAE) by learning a VAE that has a factorized representation so as to compress the embedding space and enhance generalization and interpretability. In one embodiment, an FVAE application takes as input training data comprising observations of objects, and the FVAE application learns a latent representation of such data. In order to learn the latent representation, the FVAE application is configured to use a probabilistic VAE to jointly learn a latent representation of each of the objects and a corresponding factorization across time and identity.
    Type: Grant
    Filed: July 19, 2017
    Date of Patent: August 2, 2022
    Assignee: Disney Enterprises, Inc.
    Inventors: G. Peter K. Carr, Zhiwei Deng, Rajitha D. B Navarathna, Yisong Yue, Stephan Marcel Mandt
  • Patent number: 11403825
    Abstract: A method is disclosed, the method comprising the steps of identifying a first real object in a mixed reality environment, the mixed reality environment having a user; identifying a second real object in the mixed reality environment; generating, in the mixed reality environment, a first virtual object corresponding to the second real object; identifying, in the mixed reality environment, a collision between the first real object and the first virtual object; determining a first attribute associated with the collision; determining, based on the first attribute, a first audio signal corresponding to the collision; and presenting to the user, via one or more speakers, the first audio signal.
    Type: Grant
    Filed: February 15, 2019
    Date of Patent: August 2, 2022
    Assignee: Magic Leap, Inc.
    Inventor: Anastasia Andreyevna Tajik
  • Patent number: 11403434
    Abstract: A method and system for computer aided design (CAD) is disclosed for designing geometric objects, wherein interpolation and/or blending between such objects is performed while deformation data is being input. Thus, a designer obtains immediate feedback to input modifications without separately entering a command(s) for performing such deformations. A novel N-sided surface generation technique is also disclosed herein to efficiently and accurately convert surfaces of high polynomial degree into a collection of lower degree surfaces. E.g., the N-sided surface generation technique disclosed herein subdivides parameter space objects (e.g., polygons) of seven or more sides into a collection of subpolygons, wherein each subpolygon has a reduced number of sides. More particularly, each subpolygon has 3 or 4 sides. The present disclosure is particularly useful for designing the shape of surfaces. Thus, the present disclosure is applicable to various design domains such as the design of, e.g.
    Type: Grant
    Filed: September 25, 2019
    Date of Patent: August 2, 2022
    Assignee: CAD-Sense LLC
    Inventor: Alyn P. Rockwood
  • Patent number: 11399036
    Abstract: Disclosed herein are systems and methods for correlating events to detect an information security incident. A correlation module may receive a plurality of network events indicating potential security violations, wherein each network event of the plurality of network events has a respective timestamp. The correlation module may identify, from the plurality of network events, a subset of network events that have occurred within a period of time, based on each respective timestamp. The correlation module may determine a plurality of potential orders of occurrence for the subset of network events. The correlation module may apply at least one correlation rule to each respective potential order of the plurality of potential orders. In response to determining that the at least one correlation rule is fulfilled, the correlation module may detect the information security incident.
    Type: Grant
    Filed: March 27, 2020
    Date of Patent: July 26, 2022
    Assignee: AO Kaspersky Lab
    Inventors: Ivan S. Lyukshin, Andrey A. Kiryukhin, Dmitry S. Lukiyan, Pavel V. Filonov
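    The described flow (window the events by timestamp, enumerate their possible orders, and test a correlation rule against each order) can be sketched with itertools. The event shapes and the rule below are hypothetical, not AO Kaspersky's engine.

```python
# Correlate time-windowed events over all potential orders of occurrence (toy sketch).
from itertools import permutations

events = [
    {"type": "login_failed", "ts": 100.0},
    {"type": "login_ok", "ts": 100.0},          # equal timestamps -> order is ambiguous
    {"type": "priv_escalation", "ts": 104.0},
]

def rule_bruteforce_then_escalation(order):
    """Hypothetical rule: a failed login occurs before a privilege escalation."""
    types = [e["type"] for e in order]
    return ("login_failed" in types
            and types.index("login_failed") < types.index("priv_escalation"))

def detect(events, rule, window=10.0):
    start = min(e["ts"] for e in events)
    window_events = [e for e in events if e["ts"] - start <= window]
    return any(rule(order) for order in permutations(window_events))

print(detect(events, rule_bruteforce_then_escalation))   # True -> incident detected
```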
  • Patent number: 11395971
    Abstract: Methods, systems, and computer programs are provided for identifying an abusive player in a game. One method includes receiving gameplay data for a player during gameplay of the game. The method includes processing a plurality of game mechanics of the game as the player provides input for one or more of said plurality of game mechanics. The method includes processing abusive action scores for each of the plurality of game mechanics. The abusive action scores have a number of tagged actions that are inconsistent with predefined use of said plurality of game mechanics based on a game context of the gameplay of the game. The method includes qualifying the player as abusive in regard to one or more of the plurality of game mechanics based on one or more of the abusive action scores exceeding a threshold during a session of the gameplay of the game.
    Type: Grant
    Filed: July 8, 2020
    Date of Patent: July 26, 2022
    Assignee: Sony Interactive Entertainment LLC
    Inventor: Vanessa Valencia
  • Patent number: RE49149
    Abstract: A method for detecting two-dimensional sketch data from source model data for three-dimensional reverse modeling. The method includes the steps of detecting optional model data, establishing the X-axis, Y-axis, and Z-axis of the model data depending upon reference coordinate system information input by a user, and setting a work plane for detecting two-dimensional section data of the model data; projecting, on the work plane, two-dimensional section data to be detected from the model data or polylines detected by designating a detection position; detecting two-dimensional projected section data of the model data projected on the work plane, and dividing the two-dimensional projected section data into feature segments depending upon a curvature distribution; and establishing a constraint and numerical information in accordance with connection of the divided feature segments of the two-dimensional projected section data, and creating two-dimensional sketch data.
    Type: Grant
    Filed: June 7, 2019
    Date of Patent: July 26, 2022
    Assignee: 3D Systems, Inc.
    Inventors: Seock Hoon Bae, Dong Hoon Lee, Kang Hoon Chung