Three-dimensional Patents (Class 345/419)
  • Patent number: 10681310
    Abstract: The subject technology provides a video conferencing application in which a live incoming or outgoing video stream can be supplemented with supplemental content, such as stickers, animations, etc., from within the video conferencing application. In this manner, a user participating in a video conferencing session with a remote user can add stickers, animations, and/or adaptive content to an outgoing video stream being captured by the device of the user, or to an incoming video stream from the device of the remote user, without having to locally cache/store a video clip before editing, and without having to leave the video conferencing session (or the video conferencing application) to access a video editing application.
    Type: Grant
    Filed: October 3, 2018
    Date of Patent: June 9, 2020
    Assignee: Apple Inc.
    Inventors: Christopher M. Garrido, Eric L. Chien, Austin W. Shyu, Ming Jin, Yan Yang, Ian J. Baird, Joe S. Abuan
  • Patent number: 10681183
    Abstract: A computer-implemented method of providing a server-based feature cloud model of a realm includes receiving by a server a series of digital contributions that collectively originate from a plurality of remote computing devices, characterizing portions of the realm. The method also includes processing by the server the received digital contributions to associate them with a global coordinate system and storing the processed contributions in a realm model database as components of the feature cloud model of the realm. Finally, the method includes, in response to a query message over the Internet from a computing device of an end-user, serving, over the Internet by the server to the computing device, digital data defining a selected portion of the feature cloud model for integration and display by the computing device.
    Type: Grant
    Filed: July 27, 2017
    Date of Patent: June 9, 2020
    Inventors: Alexander Hertel, Philipp Hertel
  • Patent number: 10679412
    Abstract: A comprehensive solution is provided for transforming locations and retail spaces into high-traffic VR attractions that provide a VR experience blended with a real-world tactile experience. A modular stage and kit of stage accessories suitable for a wide variety of commercial venues contains all of the necessary equipment, infrastructure, technology and content to assemble and operate a tactile, onsite VR attraction. Utilizing a modular set of set-design elements and physical props, the physical structure and layout of the installations are designed to be easily rearranged and adapted to new VR content, without requiring extensive construction or specialized expertise.
    Type: Grant
    Filed: December 21, 2018
    Date of Patent: June 9, 2020
    Assignee: Unchartedvr Inc.
    Inventors: Douglas Griffin, Kalon Ross Gutierrez, Jennifer Millard, Tiburcio De La Carcova
  • Patent number: 10678410
    Abstract: Performing image processing is disclosed, including: causing an original image to be displayed in a first editable object of a page, wherein the first editable object is generated based at least in part on web page data from a server; receiving an input track corresponding to the original image of the first editable object of the page; selecting at least a portion of the original image relative to the input track to use as a result image; and causing the result image to be displayed in a second editable object, wherein the second editable object is generated based at least in part on the web page data.
    Type: Grant
    Filed: October 27, 2014
    Date of Patent: June 9, 2020
    Assignee: Alibaba Group Holding Limited
    Inventors: Hao Ye, Keliang Ye
  • Patent number: 10678238
    Abstract: According to various aspects, a modified-reality device may be described, the modified-reality device including: a head-mounted device including one or more displays, wherein the one or more displays are configured to receive image data representing at least an image element and to display a modified-reality image including at least the image element; one or more sensors configured to provide head tracking data associated with a location and an orientation of the head-mounted device; and a processing arrangement configured to receive flight data associated with a flight of an unmanned aerial vehicle, generate the image data representing at least the image element based on the head tracking data and the flight data, and provide the image data to the one or more displays.
    Type: Grant
    Filed: December 20, 2017
    Date of Patent: June 9, 2020
    Assignee: Intel IP Corporation
    Inventors: Marco Moeller, Cornelius Claussen, Daniel Pohl
  • Patent number: 10681337
    Abstract: A method for viewpoint selection assistance in free viewpoint video generation includes: executing acquisition processing that includes acquiring three-dimensional information with respect to a subject on a field by using a plurality of cameras placed around the field; executing first identification processing that includes identifying a path of a ball for a predetermined period based on the three-dimensional information; executing second identification processing that includes identifying at least one player located within a predetermined distance from a position of the ball for a predetermined duration of time or longer in the predetermined period; executing setting processing that includes setting, as a video output range, a range containing both of the path of the ball and a path of the at least one player; and executing generation processing that includes generating video for the range set by the setting processing.
    Type: Grant
    Filed: April 10, 2018
    Date of Patent: June 9, 2020
    Assignee: FUJITSU LIMITED
    Inventors: Takuya Masuda, Kazumi Kubota
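The range-setting step described in the abstract above — a video output range containing both the ball's path and the tracked player's path — can be sketched as an axis-aligned bounding region over the two point paths. This is a minimal illustration; the 2-D simplification and the `margin` parameter are my own assumptions, not taken from the patent:

```python
def video_output_range(ball_path, player_path, margin=1.0):
    """Axis-aligned 2-D range containing both the ball path and the
    identified player's path, padded by an (assumed) margin."""
    points = ball_path + player_path
    xs = [p[0] for p in points]
    ys = [p[1] for p in points]
    return (min(xs) - margin, min(ys) - margin,
            max(xs) + margin, max(ys) + margin)

# Ball travels from (0, 0) to (4, 2); the player stands at (1, 5):
print(video_output_range([(0, 0), (4, 2)], [(1, 5)]))
# → (-1.0, -1.0, 5.0, 6.0)
```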
  • Patent number: 10678414
    Abstract: A method, system, and/or computer program product adjust values of a plurality of conditions. A processor receives a user input, which is a movement across a user interface. A tendency of the movement, which describes a direction and velocity of the movement, is determined. According to the tendency of the movement, a processor adjusts a value of at least one of the plurality of conditions by using a plurality of graphs representing the plurality of conditions, where the plurality of conditions describe search criteria, and where the user input describes the search criteria.
    Type: Grant
    Filed: March 8, 2017
    Date of Patent: June 9, 2020
    Assignee: International Business Machines Corporation
    Inventors: Jian Wen Chi, Fang Liang Dong, Rong Rong Gong, Lin Ying Ying
  • Patent number: 10679275
    Abstract: A mobile application (“app”) enabling a shopper to identify an item or items that the user wishes to locate or purchase. The device displays a list of stores which stock the item(s), and a user is then provided with in-store guidance enabling the shopper to find the item or items that the user wishes to locate or purchase. A user can identify an item or items by scanning a product bar code, scanning a product label, capturing an image of a product, or typing a product name in a search field. The method further includes identifying stores that stock the item. Items can be added to an accumulative shopping list, and items may be ordered online if not available nearby. The system provides basic navigation to store(s) carrying the item(s). Once at a store, an interior route to a product is portrayed through an overhead view map of a store layout. Once near a product, a user may be shown a virtual display of a shelf with the vertical or height placement of the product on the nearby shelf.
    Type: Grant
    Filed: August 18, 2017
    Date of Patent: June 9, 2020
    Inventor: Jacob Kaufman
  • Patent number: 10679368
    Abstract: Methods and apparatus to reduce a depth map size for use in a collision avoidance system are described herein. Examples described herein may be implemented in an unmanned aerial vehicle. An example unmanned aerial vehicle includes a depth sensor to generate a first depth map. The first depth map includes a plurality of pixels having respective distance values. The unmanned aerial vehicle also includes a depth map modifier to divide the plurality of pixels into blocks of pixels and generate a second depth map having fewer pixels than the first depth map based on distance values of the pixels in the blocks of pixels. The unmanned aerial vehicle further includes a collision avoidance system to analyze the second depth map.
    Type: Grant
    Filed: December 21, 2017
    Date of Patent: June 9, 2020
    Assignee: Intel IP Corporation
    Inventors: Daniel Pohl, Markus Achtelik
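The block-based reduction described in the abstract above — dividing a depth map's pixels into blocks and emitting a smaller map based on the distance values in each block — can be sketched as follows. The patent does not fix the per-block reduction rule; taking the block minimum is one plausible, conservative choice for collision avoidance, since the reduced map then never reports an obstacle as farther away than it is:

```python
def reduce_depth_map(depth, block=2):
    """Downsample a depth map (list of rows of distance values) by
    grouping pixels into block x block tiles and keeping the minimum
    distance per tile (assumed reduction rule; conservative for
    collision avoidance)."""
    rows, cols = len(depth), len(depth[0])
    out = []
    for r in range(0, rows, block):
        out_row = []
        for c in range(0, cols, block):
            tile = [depth[rr][cc]
                    for rr in range(r, min(r + block, rows))
                    for cc in range(c, min(c + block, cols))]
            out_row.append(min(tile))
        out.append(out_row)
    return out

# A 4x4 depth map reduced to 2x2 (one quarter the pixel count):
dm = [[9.0, 8.0, 5.0, 5.0],
      [7.0, 6.0, 5.0, 4.0],
      [3.0, 3.0, 9.0, 9.0],
      [2.0, 3.0, 9.0, 8.0]]
print(reduce_depth_map(dm))  # → [[6.0, 4.0], [2.0, 8.0]]
```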
  • Patent number: 10681032
    Abstract: A system and a method for single sign-on voice authentication that provides access to multiple voice recognition and artificial intelligence platforms, to multiple devices and to multiple third party web service systems.
    Type: Grant
    Filed: May 31, 2017
    Date of Patent: June 9, 2020
    Assignee: OV LOOP, INC.
    Inventors: John Maddox, Richard A. Smith, William W. Graylin
  • Patent number: 10679409
    Abstract: A three-dimensional model creating device creates an individual model indicating an individual shape of an object from an integrated model created based on data obtained by capturing images of or measuring at least two or more objects together. The three-dimensional model creating device includes: a three-dimensional model division processing unit configured to create a plurality of divided models obtained by dividing the integrated model by an extension plane extended from each plane configured to form the integrated model; a user interface unit configured to receive tagging of each of the divided models; and an individual model creating unit configured to create the individual model of the object based on the tagging of each of the divided models.
    Type: Grant
    Filed: September 13, 2018
    Date of Patent: June 9, 2020
    Assignee: FANUC CORPORATION
    Inventors: Rie Oota, Mamoru Kubo
  • Patent number: 10675538
    Abstract: The present invention provides a program that causes an electronic device including a display unit and a processing unit to execute selecting, as candidate objects, objects related to game events from among objects to be rendered on the display unit; determining attention scores representing the degrees of attention that will be paid by a player to the individual selected candidate objects by determining weights relating to the occurrence of individual candidate events, which are events related to the candidate objects, on the basis of the candidate objects, an event or a sequence of events that occurred immediately before, and event history information; and determining resource allocation in the processing unit for rendering the individual candidate objects on the basis of the attention scores and depth distances of the candidate objects as viewed from the player.
    Type: Grant
    Filed: March 19, 2018
    Date of Patent: June 9, 2020
    Assignee: CYGAMES, INC.
    Inventor: Shuichi Kurabayashi
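The final step in the abstract above — allocating rendering resources based on attention scores and depth distances — can be sketched as a proportional split of a fixed budget. The abstract only says allocation uses both quantities; the specific combination rule below (score divided by depth, so nearer, higher-attention objects get more resources) is an illustrative guess:

```python
def allocate_render_budget(scores, depths, budget=100.0):
    """Split a fixed rendering budget across candidate objects in
    proportion to attention_score / depth (assumed combination rule).
    Returns one budget share per candidate object."""
    weights = [s / max(d, 1e-6) for s, d in zip(scores, depths)]
    total = sum(weights)
    return [budget * w / total for w in weights]

# Two candidates at equal depth; the first draws 3x the attention:
print(allocate_render_budget([3.0, 1.0], [1.0, 1.0]))  # → [75.0, 25.0]
```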
  • Patent number: 10675479
    Abstract: An operation teaching device used in operation teaching during movement and/or rotation operation of an object (2h) to adjust the object to a predetermined position and direction, including: a TOF type depth image camera (40) for obtaining three-dimensional shape information about the object; extraction means for extracting a feature region from the three-dimensional shape information acquired by the TOF type depth image camera (40) using a luminance image of the object obtained from information about light receiving intensity of the projection light, the projection light being reflected from the object and received with light receiving means; and generation means for generating information for the operation teaching by calculating a deviation between the three-dimensional shape information including the feature region of the object in the predetermined position and direction and the three-dimensional shape information including the feature region of the object in a current position and direction.
    Type: Grant
    Filed: June 30, 2014
    Date of Patent: June 9, 2020
    Assignees: OSAKA UNIVERSITY, A SCHOOL CORPORATION KANSAI UNIVERSITY, Teijin Pharma Limited
    Inventors: Yoshihiro Yasumuro, Ryo Ebisuwaki, Youichi Saitoh, Taiga Matsuzaki
  • Patent number: 10672512
    Abstract: Systems and methods are disclosed for automatically managing how and when computerized advanced processing techniques (for example, CAD and/or other image processing) are used. In some embodiments, the systems and methods discussed herein allow users, such as radiologists, to efficiently interact with a wide variety of computerized advanced processing (“CAP”) techniques using computing devices ranging from picture archiving and communication system (“PACS”) workstations to handheld devices such as smartphone and tablets. Furthermore, the systems and methods may, in various embodiments, automatically manage how data associated with these CAP techniques (for example, results of application of one or more computerized advanced processing techniques) are used, such as how data associated with the computerized analyses is reported, whether comparisons to prior abnormalities should be automatically initiated, whether the radiologist should be alerted of important findings, and the like.
    Type: Grant
    Filed: June 21, 2016
    Date of Patent: June 2, 2020
    Assignee: MERGE HEALTHCARE SOLUTIONS INC.
    Inventor: Evan K. Fram
  • Patent number: 10672191
    Abstract: Devices, systems, and methods may implement one or more techniques for anchoring computer generated objects within an augmented reality scene. One or more techniques may include capturing an image frame via a camera sensor, for example. One or more techniques may include determining a ground plane of the image frame and/or identifying a wall plane of the image frame. One or more techniques may include generating a model of a guide reticle and/or linking a virtual tether connecting the guide reticle to a digital object to be rendered relative to the guide reticle. One or more techniques may include rendering the guide reticle and the virtual tether to a display of, for example, a mobile image capture computing device relative to the wall plane and the ground plane. One or more techniques may include rendering the digital object to the display relative to the guide reticle and the virtual tether.
    Type: Grant
    Filed: July 16, 2018
    Date of Patent: June 2, 2020
    Assignee: Marxent Labs LLC
    Inventors: Bret Besecker, Barry Besecker, Ken Moser
  • Patent number: 10674075
    Abstract: [Object] To generate, for a panoramic image, a thumbnail image showing satisfactory visibility when displayed. [Solution] Both first thumbnail image data generated by applying resolution conversion to panoramic image data generated by combining a plurality of captured images and second thumbnail image data generated by performing aspect ratio adjustment processing on the panoramic image data are generated as thumbnail images for the panoramic image data.
    Type: Grant
    Filed: March 13, 2014
    Date of Patent: June 2, 2020
    Assignee: Sony Corporation
    Inventor: Atsushi Kimura
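The two thumbnail variants in the abstract above — one produced by plain resolution conversion and one by aspect-ratio adjustment of the panorama — can be sketched by computing the two target sizes. The concrete numbers (`thumb_w`, the 4:3 `target_ratio`, and center-cropping as the adjustment method) are illustrative assumptions, not from the patent:

```python
def thumbnail_sizes(pano_w, pano_h, thumb_w=320, target_ratio=4 / 3):
    """Return (scaled, adjusted) thumbnail sizes for a panorama:
    (1) plain resolution conversion, preserving the panoramic aspect
        ratio; (2) an assumed center-crop toward a display-friendly
        aspect ratio, then scaled to the same thumbnail width."""
    # (1) scale down, aspect ratio preserved
    scaled = (thumb_w, round(thumb_w * pano_h / pano_w))
    # (2) crop the width so the crop has the target ratio, then scale
    crop_w = min(pano_w, round(pano_h * target_ratio))
    adjusted = (thumb_w, round(thumb_w * pano_h / crop_w))
    return scaled, adjusted

# A 4000x1000 panorama: the plain thumbnail is a thin 320x80 strip,
# while the aspect-adjusted one is a far more visible 320x240:
print(thumbnail_sizes(4000, 1000))
```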
  • Patent number: 10672195
    Abstract: An information processing method and an information processing device are disclosed. The information processing method comprises: calculating at least one of a shape parameter and an expression parameter based on a correspondence relationship between a first set of landmarks in a two-dimensional image containing a face of a person and a second set of landmarks in an average three-dimensional face model; and configuring a face deformable model using the at least one of the shape parameter and the expression parameter, to obtain a specific three-dimensional model corresponding to the face contained in the two-dimensional image.
    Type: Grant
    Filed: May 25, 2018
    Date of Patent: June 2, 2020
    Assignee: FUJITSU LIMITED
    Inventor: Qianwen Miao
  • Patent number: 10671344
    Abstract: Approaches provide for controlling, managing, and/or otherwise interacting with mixed (e.g., virtual and/or augmented) reality content in response to input from a user, including voice input, device input, among other such inputs, in a mixed reality environment. For example, a mixed reality device, such as a headset or other such device can perform various operations in response to a voice command or other such input. In one such example, the device can receive a voice command and an application executing on the device or otherwise in communication with the device can analyze audio input data of the voice command to control the view of content in the environment, as may include controlling a user's “position” in the environment. The position can include, for example, a specific location in time, space, etc., as well as directionality and field of view of the user in the environment.
    Type: Grant
    Filed: June 19, 2018
    Date of Patent: June 2, 2020
    Assignee: METRIK LLC
    Inventor: Keara Elizabeth Fallon
  • Patent number: 10671841
    Abstract: Attribute state classification techniques are described. In one or more implementations, one or more pixels of an image are classified by a computing device as having one or several states for one or more attributes that do not identify corresponding body parts of a user. A gesture is recognized by the computing device that is operable to initiate one or more operations of the computing device based at least in part on the state classifications of the one or more pixels of one or more attributes.
    Type: Grant
    Filed: May 2, 2011
    Date of Patent: June 2, 2020
    Assignee: MICROSOFT TECHNOLOGY LICENSING, LLC
    Inventors: Alexandru O. Balan, Richard E. Moore, Mark J. Finocchio
  • Patent number: 10671238
    Abstract: Techniques are described for modifying a virtual reality environment to include or remove contextual information describing a virtual object within the virtual reality environment. The virtual object includes a user interface object associated with a development user interface of the virtual reality environment. In some cases, the contextual information includes information describing functions of controls included on the user interface object. In some cases, the virtual reality environment is modified based on a distance between the location of the user interface object and a location of a viewpoint within the virtual reality environment. Additionally or alternatively, the virtual reality environment is modified based on an elapsed time of the location of the user interface object remaining in a location.
    Type: Grant
    Filed: November 17, 2017
    Date of Patent: June 2, 2020
    Assignee: Adobe Inc.
    Inventors: Stephen DiVerdi, Seth Walker, Brian Williams
  • Patent number: 10668376
    Abstract: An operation detector has a common operation portion in which the operation content is shared for a certain length of time after the start of the operation, and detects the start of either a slide operation associated with a command to move a three-dimensional character operated by a player in a virtual space, or a flick operation associated with another command. After the start of the operation is detected, in the common operation period, a model processor moves the three-dimensional character in the virtual space. After this, if the operation detector has detected a slide operation, the model processor moves the three-dimensional character without interruption in the movement processing in the common operation period.
    Type: Grant
    Filed: June 7, 2018
    Date of Patent: June 2, 2020
    Assignee: DeNA Co., Ltd.
    Inventor: Yuichi Kanemori
  • Patent number: 10672170
    Abstract: Systems and methods for utilizing a device as a marker for virtual content viewed in an augmented reality environment are discussed herein. The device (or sign post) may comprise a wirelessly connectable device linked to a power source and associated with multiple linkage points. The device may provide information to a user (or a device of a user) defining virtual content and a correlation between the linkage points and a reference frame of the virtual content. When rendered by a display device, the virtual content may be presented based on the reference frame of the virtual content correlated to the real world by virtue of the position of the linkage points in the real world.
    Type: Grant
    Filed: January 13, 2020
    Date of Patent: June 2, 2020
    Inventor: Nicholas T. Hariton
  • Patent number: 10672189
    Abstract: A cloud network server system, a method, and a software program product for compiling and presenting a three-dimensional (3D) model are provided. An end 3D model is composed from at least two pre-existing 3D models stored in the cloud network server system by combining the pre-existing 3D models. The end 3D model is partitioned into smaller cells. The system and method allow a drawing user to view and draw the end 3D model, for example of a computer game, via a drawing user terminal computer. Based on a virtual location of the drawing user in the end 3D model, parts of at least one version of the end 3D model are rendered to the drawing user. The system and method render a more lifelike virtual reality gaming experience with substantially less time lag, a smaller memory footprint requirement, and less production effort.
    Type: Grant
    Filed: January 25, 2017
    Date of Patent: June 2, 2020
    Assignee: Umbra Software Oy
    Inventors: Otso Makinen, Antti Hatala, Hannu Saransaari, Jarno Muurimaki, Jasin Bushnaief, Johann Muszynski, Mikko Pulkki, Niilo Jaba, Otto Laulajainen, Turkka Aijala, Vinh Truong
  • Patent number: 10672103
    Abstract: A method for moving a virtual object includes displaying a virtual object and moving the virtual object based on a user input. Based on the user input attempting to move the virtual object in violation of an obstacle, displaying a collision indicator and an input indicator. The collision indicator is moved based on user input and movement constraints imposed by the obstacle. The input indicator is moved based on user input without movement constraints imposed by the obstacle.
    Type: Grant
    Filed: May 30, 2019
    Date of Patent: June 2, 2020
    Assignee: MICROSOFT TECHNOLOGY LICENSING, LLC
    Inventors: Semih Energin, Sergio Paolantonio, David Evans, Eric Scott Rehmeyer, Robert Thomas Held, Maxime Ouellet, Anatolie Gavriliuc, Riccardo Giraldi, Andrew Frederick Muehlhausen
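The dual-indicator behavior in the abstract above — an input indicator that follows the raw user input and a collision indicator constrained by the obstacle — can be sketched in one dimension. Modeling the obstacle as an interval and snapping to its nearest face are my own simplifications for illustration:

```python
def update_indicators(target, obstacle_min, obstacle_max):
    """1-D sketch: the input indicator tracks the raw user input
    position, while the collision indicator is constrained so it
    never enters the obstacle's extent (assumed to be an interval).
    Returns (collision_pos, input_pos)."""
    input_pos = target  # unconstrained by the obstacle
    if obstacle_min <= target <= obstacle_max:
        # constrained: snap to the nearest obstacle face
        if target - obstacle_min <= obstacle_max - target:
            collision_pos = obstacle_min
        else:
            collision_pos = obstacle_max
    else:
        collision_pos = target  # no violation; indicators coincide
    return collision_pos, input_pos

# Input tries to drag the object into an obstacle spanning [3, 9]:
print(update_indicators(4.0, 3.0, 9.0))   # → (3.0, 4.0)
print(update_indicators(10.0, 3.0, 9.0))  # → (10.0, 10.0)
```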
  • Patent number: 10671843
    Abstract: Technologies for detecting interactions with surfaces from a spherical view of a room include a compute device. The compute device includes an image capture manager to obtain one or more images that depict a spherical view of a room that includes multiple surfaces. Additionally, the compute device includes a surface interaction detection manager to detect, from the one or more images, a person in the room, generate a bounding box around the person, preprocess the bounding box to represent the person in an upright orientation, determine a pose of the person from the preprocessed bounding box, detect an outstretched hand from the determined pose, and determine, from the detected outstretched hand, a surface of interaction in the room.
    Type: Grant
    Filed: January 30, 2018
    Date of Patent: June 2, 2020
    Assignee: Intel Corporation
    Inventors: Srenivas Varadarajan, Selvakumar Panneer, Omesh Tickoo, Giuseppe Raffa, Carl Marshall
  • Patent number: 10663728
    Abstract: A binocular display device comprising two ocular assemblies (1A, 1B) to be worn by a user concurrently with one respective ocular assembly at each eye. Each ocular assembly comprises an outer optical part having a positive optical strength (2A, 2B), an inner optical part (4A, 4B) having a negative optical strength and a transparent slab waveguide display part (3A, 3B) in between them. Substantially collimated display light is output from the waveguide for display, and external light of an external scene is transmitted through the waveguide from the outer optical part for viewing concurrently with the display light. The inner optical part imposes a divergence on the received display light to generate a virtual focal point (f) substantially common to each ocular assembly. In use, an image conveyed by the display light is superimposed on the external scene as a three-dimensional (3D) image when viewed through the binocular display device.
    Type: Grant
    Filed: May 4, 2016
    Date of Patent: May 26, 2020
    Assignee: BAE SYSTEMS plc
    Inventor: Michael David Simmonds
  • Patent number: 10661177
    Abstract: Transmission data including at least data regarding luminances of a plurality of small areas obtained by dividing the entirety of an at least partial area of a captured image captured by an image capturing unit is acquired. Then, using the acquired transmission data, and based on a luminance of the entirety of the at least partial area of the captured image and the luminance of any of the plurality of small areas, a shape of an image capturing target and/or a position of the image capturing target relative to a data transmission device are determined, and based on the result of the determination, predetermined information processing is performed.
    Type: Grant
    Filed: July 27, 2017
    Date of Patent: May 26, 2020
    Assignee: NINTENDO CO., LTD.
    Inventors: Ryo Kataoka, Hiromasa Shikata
  • Patent number: 10665014
    Abstract: A system and method for tap event location includes a device using a selection apparatus that provides accurate point locations. The device determines a 3-dimensional map of a scene in the view frustum of the device relative to a coordinate frame. The device receives an indication of the occurrence of a tap event comprising a contact of the selection apparatus with a subject, and determines the location of the tap event relative to the coordinate frame from the location of the selection apparatus. The location of the tap event may be used to determine a subject. Data associated with the subject may then be processed to provide effects in, or data about, the scene in the view frustum of the device. Embodiments include a selection apparatus that communicates occurrences of tap events to the device and includes features that allow the device to determine the location of the selection apparatus.
    Type: Grant
    Filed: April 1, 2019
    Date of Patent: May 26, 2020
    Assignee: Microsoft Technology Licensing, LLC
    Inventors: John Weiss, Xiaoyan Hu
  • Patent number: 10664136
    Abstract: A method of selecting items in a graphical user interface displayed on a screen. A selection of at least one item in the graphical user interface is detected, the at least one item being located in a bounded region of the graphical user interface, the bounded region having an abutting region. In the abutting region, a selection deadband representing a distance from a boundary between the bounded region and the abutting region is set, the selection deadband having a thickness determined according to proximity of another boundary of the abutting region. Selection of an item located in the abutting region is disabled until a selector is traversed past the selection deadband and into the abutting region.
    Type: Grant
    Filed: December 5, 2018
    Date of Patent: May 26, 2020
    Assignee: Canon Kabushiki Kaisha
    Inventors: Julie Rae Kowald, Evgeny Vostrikov
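The deadband logic in the abstract above — a selection deadband at the boundary whose thickness depends on how close the abutting region's far boundary is — can be sketched in one dimension. The specific scaling rule (capping the thickness at a fraction of the region width) and the numeric defaults are illustrative assumptions; the abstract only says the thickness is determined by proximity of the other boundary:

```python
def deadband_thickness(region_width, base=12.0, max_fraction=0.25):
    """Deadband thickness shrinks when the abutting region is narrow,
    so a thin region is not swallowed entirely by the deadband
    (assumed scaling rule; base/max_fraction are illustrative)."""
    return min(base, region_width * max_fraction)

def can_select_in_abutting(x, boundary, far_boundary):
    """True once the selector at position x has traversed past the
    deadband, measured from the shared boundary into the abutting
    region (1-D sketch)."""
    width = far_boundary - boundary
    return boundary + deadband_thickness(width) < x <= far_boundary

# Wide abutting region [100, 200]: full 12-unit deadband applies.
print(can_select_in_abutting(105.0, 100.0, 200.0))  # → False
print(can_select_in_abutting(120.0, 100.0, 200.0))  # → True
# Narrow region [100, 120]: deadband shrinks to 5 units.
print(can_select_in_abutting(108.0, 100.0, 120.0))  # → True
```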
  • Patent number: 10665036
    Abstract: An augmented reality system with a dynamic representation technique of augmented images according to the present invention comprises augmented reality glasses, which are equipped with a video camera that captures a scene in the surroundings of a user and a transparent display that displays the 3D virtual image corresponding to current position information and the actual image information within the field of view of the user; and a server that provides, to the augmented reality glasses, the 3D virtual image corresponding to the current position information and actual image information transmitted from the augmented reality terminal in real time, wherein an amount of information of a virtual object of the 3D virtual image, which is assigned to each physical object of the actual image information, is dynamically adjusted according to a distance between the physical object and the user and displayed on the augmented reality glasses.
    Type: Grant
    Filed: August 3, 2019
    Date of Patent: May 26, 2020
    Assignee: VIRNECT INC.
    Inventor: Tae Jin Ha
  • Patent number: 10664947
    Abstract: An image processing apparatus includes an acquisition unit, a determination unit, and a conversion unit. The acquisition unit is configured to acquire at least one or more pieces of input image data used to represent an image. The determination unit is configured to determine a region of interest in the input image data. The conversion unit is configured to convert, based on the region of interest, the input image data into output image data representing at least a part of the image in equidistant cylindrical projection.
    Type: Grant
    Filed: June 13, 2018
    Date of Patent: May 26, 2020
    Assignee: CANON KABUSHIKI KAISHA
    Inventor: Yuji Kato
  • Patent number: 10665029
    Abstract: Embodiments provide for reconciling a first map of an environment created by a first device with a second map of the environment created by a second device by determining a first confidence score for a first location of a given object in the first map; determining a second confidence score for a second location of the given object in the second map; inserting the given object into a reconciled map at a position based on the first confidence score and the second confidence score; anchoring an Augmented Reality object in the reconciled map at coordinates based on the position of the given object; and outputting the reconciled map to an Augmented Reality device.
    Type: Grant
    Filed: October 10, 2018
    Date of Patent: May 26, 2020
    Assignee: Disney Enterprises, Inc.
    Inventors: Jason A. Yeung, Michael P. Goslin, Corey D. Drake, Elliott H. Baumbach, Nicholas F. Barone
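The insertion step in the abstract above — placing an object in the reconciled map "at a position based on" the two confidence scores — can be sketched as a confidence-weighted blend of the two mapped locations. This is only one plausible reading of "based on"; the weighting scheme below is an assumption, not the patent's stated method:

```python
def reconcile(pos_a, conf_a, pos_b, conf_b):
    """Place an object at a confidence-weighted average of its two
    mapped positions (assumed interpretation): the device that is
    more confident pulls the reconciled position toward its map."""
    total = conf_a + conf_b
    return tuple((a * conf_a + b * conf_b) / total
                 for a, b in zip(pos_a, pos_b))

# Device B is 3x as confident, so the result sits 3/4 of the way
# from A's position toward B's:
print(reconcile((0.0, 0.0), 1.0, (2.0, 2.0), 3.0))  # → (1.5, 1.5)
```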
  • Patent number: 10664218
    Abstract: Holographic augmented authoring provides an extension to personal computing experiences of a universal or conventional productivity application. A user interface of a productivity application executing on a personal computing device can be switched from a touch or conventional mode to a holographic mode, which opens communication between the personal computing device and a holographic enabled device providing a mixed reality system. A semantic representation of a command in a productivity application is generated as a hologram in a mixed reality system and the change to a content file from performing the command in the mixed reality system does not require a holographic enabled device to view or even further edit.
    Type: Grant
    Filed: March 29, 2019
    Date of Patent: May 26, 2020
    Assignee: Microsoft Technology Licensing, LLC
    Inventor: Jonathan Kaufthal
  • Patent number: 10664543
    Abstract: Example embodiments of the present disclosure include a system comprising a computer-readable storage medium storing at least one program and a computer-implemented method for providing a customized fitting room environment. Consistent with some embodiments, the method may include identifying a garment being brought into a fitting room, and determining a garment type of the garment. The method may further include adjusting one or more environmental settings of the fitting room based on the garment type. Additional aspects of the present disclosure include further adjusting environmental settings based on various combinations of user data and a desired use case of the garment specified by the individual.
    Type: Grant
    Filed: April 27, 2018
    Date of Patent: May 26, 2020
    Assignee: eBay Inc.
    Inventors: Healey Cypher, Tracy Ogishi, Darren Endo, Michael Franklin, Lars Wensel
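The adjustment described above amounts to mapping a detected garment type to environmental presets, with user data layered on top. The sketch below is an illustrative assumption: the preset values, type names, and override mechanism are invented for the example and are not from the patent.

```python
# Garment-type presets for a fitting room; values are illustrative only.
ENVIRONMENT_PRESETS = {
    "swimwear": {"lighting": "bright-warm", "temperature_c": 25},
    "evening_gown": {"lighting": "dim-accent", "temperature_c": 21},
    "outerwear": {"lighting": "neutral", "temperature_c": 17},
}

def adjust_fitting_room(garment_type, user_preferences=None):
    """Return environment settings for a garment type, with user overrides.

    Unknown garment types fall back to a neutral default, and any
    user-supplied preferences take precedence over the preset.
    """
    settings = dict(ENVIRONMENT_PRESETS.get(
        garment_type, {"lighting": "neutral", "temperature_c": 21}))
    settings.update(user_preferences or {})
    return settings
```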
  • Patent number: 10664628
    Abstract: A computer-implemented method and system modifies a pre-existing surface. The method generates an outline of the shape of an object as a curve. A reference surface is then created by extruding the curve. Selected entities of the pre-existing surface are projected to locations on the reference surface, after which the pre-existing surface is regenerated using the location of each entity to calculate a modified pre-existing surface.
    Type: Grant
    Filed: December 8, 2014
    Date of Patent: May 26, 2020
    Assignee: DASSAULT SYSTEMES SOLIDWORKS CORPORATION
    Inventors: Benjamin H. Schriesheim, Salvatore F. Lama, Xavier Benveniste
  • Patent number: 10665342
    Abstract: Systems and methods are disclosed for automatically managing how and when computerized advanced processing techniques (for example, CAD and/or other image processing) are used. In some embodiments, the systems and methods discussed herein allow users, such as radiologists, to efficiently interact with a wide variety of computerized advanced processing (“CAP”) techniques using computing devices ranging from picture archiving and communication system (“PACS”) workstations to handheld devices such as smartphones and tablets. Furthermore, the systems and methods may, in various embodiments, automatically manage how data associated with these CAP techniques (for example, results of application of one or more computerized advanced processing techniques) are used, such as how data associated with the computerized analyses is reported, whether comparisons to prior abnormalities should be automatically initiated, whether the radiologist should be alerted of important findings, and the like.
    Type: Grant
    Filed: June 21, 2016
    Date of Patent: May 26, 2020
    Assignee: MERGE HEALTHCARE SOLUTIONS INC.
    Inventor: Evan K. Fram
  • Patent number: 10665008
    Abstract: Object sets are often organized and traversed in a hierarchical manner according to ownership, wherein a subset of contained objects are processed before or after a containing object that contains the contained objects. Such object sets may also be presented as a scene, which may involve traversing the object set in a drawing order, such as a descending distance order that renders objects in a back-to-front manner. It may be difficult to reconcile these distinct traversal techniques, particularly if different portions of the object set utilize a different traversal order. Presented herein are hybrid traversal techniques in which a selected subset of related objects is identified and traversed in a drawing order, and the remainder of the object set is traversed in an ownership order, in furtherance of various tasks that involve hybrid traversal orders and/or to facilitate the traversal of different types of object subsets within the object set.
    Type: Grant
    Filed: May 31, 2018
    Date of Patent: May 26, 2020
    Assignee: MICROSOFT TECHNOLOGY LICENSING, LLC
    Inventors: Geoffrey Tyler Trousdale, Anthony Tunjen Hsieh, Danielle Renee Neuberger, Christopher Nathaniel Raubacher, Harneet Singh Sidhana, Jeffrey Evan Stall
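The hybrid traversal described above can be sketched as a tree walk in which a flagged subtree is visited back-to-front by distance while everything else follows ownership (parent-before-children) order. The class, field names, and flag mechanism below are illustrative assumptions, not the patented design.

```python
class Node:
    """One object in the object set, owning zero or more child objects."""
    def __init__(self, name, distance=0.0, draw_ordered=False, children=None):
        self.name = name
        self.distance = distance          # distance from the viewer
        self.draw_ordered = draw_ordered  # True: children drawn back-to-front
        self.children = children or []

def traverse(node, visit):
    """Hybrid traversal: ownership order by default, drawing order where flagged."""
    visit(node)
    children = node.children
    if node.draw_ordered:
        # Drawing order: render farthest children first (back-to-front).
        children = sorted(children, key=lambda c: c.distance, reverse=True)
    for child in children:
        traverse(child, visit)
```

With this structure, only the selected subset pays the cost of distance sorting; the remainder of the object set is visited in plain ownership order.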
  • Patent number: 10664988
    Abstract: Methods, apparatus, systems, and articles of manufacture are disclosed. An example system for avoiding collisions in a virtual environment within a shared physical space includes a first mobile device, associated with a first user, that generates a first virtual environment; a second mobile device, associated with a second user, that generates a second virtual environment; and a server. The server includes an index map generator to generate a first index map and a second index map from the first virtual environment and the second virtual environment, respectively; a collision detector to determine a collision likelihood based on a comparison of the first index map and the second index map; and an object placer to, in response to the collision likelihood satisfying a threshold, modify at least one of the first virtual environment or the second virtual environment.
    Type: Grant
    Filed: June 28, 2018
    Date of Patent: May 26, 2020
    Assignee: Intel Corporation
    Inventor: Derek Chilcote-Bacco
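One plausible reading of the index-map comparison above is an occupancy-grid overlap test: each device rasterizes its virtual environment into a coarse grid, and the server scores collision likelihood as the fraction of cells both users intend to occupy. The grid representation, likelihood formula, and threshold are assumptions for illustration.

```python
def index_map(occupied_cells, size=8):
    """Build a size x size boolean occupancy grid from (row, col) cells."""
    grid = [[False] * size for _ in range(size)]
    for r, c in occupied_cells:
        grid[r][c] = True
    return grid

def collision_likelihood(map_a, map_b):
    """Fraction of occupied cells in map_a that are also occupied in map_b."""
    overlap = sum(a and b for row_a, row_b in zip(map_a, map_b)
                  for a, b in zip(row_a, row_b))
    occupied = sum(cell for row in map_a for cell in row)
    return overlap / occupied if occupied else 0.0

def needs_modification(map_a, map_b, threshold=0.25):
    """True when one virtual environment should be modified to avoid collision."""
    return collision_likelihood(map_a, map_b) >= threshold
```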
  • Patent number: 10665037
    Abstract: A system for providing augmented reality (AR) content can include a user interface, a generator for creating AR models or scene files from a 3D image file that are each formatted for a different, particular AR rendering platform, and a computer system operably connected to the user interface and to the generator. The computer can include computer-executable instructions that cause the computer system to receive an image file, render a 3D image from the image file, create a plurality of AR models or scene files from the 3D image with the generator, provide a universal link associated with the plurality of AR models or scene files at the user interface, receive a request for an AR model or scene file, as a result of the universal link being selected, and identify a particular AR model or scene file based on the AR rendering platform associated with the request.
    Type: Grant
    Filed: October 29, 2019
    Date of Patent: May 26, 2020
    Assignee: SEEK LLC
    Inventors: Thane Brimhall, Thomas Griffiths, Trey Nelson, Christopher Tart
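The universal-link dispatch described above reduces to inspecting the requesting platform and returning the matching pre-generated model. The sketch below is a rough illustration under assumed file formats (USDZ for iOS, GLB for Android, glTF for web) and a naive user-agent check; none of these specifics come from the patent.

```python
# Hypothetical pre-generated AR models, one per rendering platform.
GENERATED_MODELS = {
    "ios": "model.usdz",       # e.g. consumed by Apple's AR Quick Look
    "android": "model.glb",    # e.g. consumed by Google's Scene Viewer
    "web": "model.gltf",
}

def resolve_universal_link(user_agent):
    """Pick the AR model format appropriate to the requesting platform."""
    ua = user_agent.lower()
    if "iphone" in ua or "ipad" in ua:
        platform = "ios"
    elif "android" in ua:
        platform = "android"
    else:
        platform = "web"
    return GENERATED_MODELS[platform]
```

The point of the single link is that the client never chooses a format; the server identifies the AR rendering platform from the request itself.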
  • Patent number: 10665619
    Abstract: A display panel including a display region and a non-display region, a plurality of gate lines, a plurality of data lines, a pixel array and a gate on array circuit. The non-display region is located at one side of the display region. The plurality of gate lines and the plurality of data lines are disposed in the display region. The pixel array is located in the display region, wherein the pixel array is composed of a plurality of pixel units that are repeatedly arranged. Each pixel unit consists of three gate lines, two data lines and six sub-pixels, and each sub-pixel is electrically connected to one of the gate lines and one of the data lines located in the pixel array. The gate on array circuit is disposed in the pixel array.
    Type: Grant
    Filed: July 30, 2019
    Date of Patent: May 26, 2020
    Assignee: Au Optronics Corporation
    Inventors: Cheng-Kuang Wang, Chun-Da Tu
  • Patent number: 10665026
    Abstract: In respect of first virtual reality content of a scene captured by a first virtual reality content capture device and second virtual reality content of the same scene captured by a second virtual reality content capture device, the first and second virtual reality content capture devices physically arranged within a predetermined distance of one another in the same scene when the first virtual reality content and the second virtual reality content was captured; provide for display, in virtual reality, of an amalgamated virtual reality space representative of the scene, the amalgamated virtual reality space comprising an amalgamation of both the images or parts thereof of the first virtual reality content and the images or parts thereof of the second virtual reality content.
    Type: Grant
    Filed: January 18, 2017
    Date of Patent: May 26, 2020
    Assignee: Nokia Technologies Oy
    Inventors: Arto Lehtiniemi, Troels Rønnow, Hongwei Li, David Bitauld
  • Patent number: 10659789
    Abstract: An encoder includes a processor and a memory. The encoder may perform a method of progressive compression. In one example implementation, the method may include determining a priority value for each edge of a plurality of edges, the priority value of an edge of the plurality of edges determined based on an error metric value and an estimated encoding cost associated with the edge. The method may further include determining a set of edges for collapse, the set of edges determined from the plurality of edges based on the priority values and collapsing the set of edges and generating vertex split information. In some implementations, the method may include entropy encoding the vertex split information.
    Type: Grant
    Filed: April 12, 2018
    Date of Patent: May 19, 2020
    Assignee: GOOGLE LLC
    Inventors: Michael Hemmer, Pierre Alliez, Cedric Portaneri
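The selection step described above, choosing edges by a priority that combines an error metric with an estimated encoding cost, can be sketched with a heap. The weighting, data layout, and split-record shape below are illustrative assumptions, not Google's patented scheme.

```python
import heapq

def edge_priority(error, encoding_cost, alpha=1.0, beta=0.5):
    """Lower values collapse first: cheap-to-encode, low-error edges."""
    return alpha * error + beta * encoding_cost

def select_edges_for_collapse(edges, budget):
    """Pick up to `budget` edges with the smallest priority values.

    `edges` is a list of (edge_id, error, encoding_cost) tuples.
    Returns edge ids in collapse order plus vertex-split records that a
    decoder could replay in reverse to progressively refine the mesh.
    """
    heap = [(edge_priority(err, cost), eid) for eid, err, cost in edges]
    heapq.heapify(heap)
    collapsed, split_records = [], []
    while heap and len(collapsed) < budget:
        priority, eid = heapq.heappop(heap)
        collapsed.append(eid)
        split_records.append({"edge": eid, "priority": priority})
    return collapsed, split_records
```

In a full codec the split records would then be entropy encoded, as the abstract notes.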
  • Patent number: 10659755
    Abstract: There is provided an information processing device, an information processing method, and a program that can facilitate a user's perception of a stereoscopic vision object. The information processing device includes a display control unit configured to perform movement control of a stereoscopic vision object perceived by a user, from a start depth that is different from a target depth to the target depth, on the basis of mode specifying information that specifies a mode of the movement control that supports stereoscopic vision by the user; the information processing method performs the same movement control.
    Type: Grant
    Filed: May 24, 2016
    Date of Patent: May 19, 2020
    Assignee: SONY CORPORATION
    Inventors: Akane Yano, Tsubasa Tsukahara
  • Patent number: 10657712
    Abstract: Described herein are a system and techniques for performing partially or fully automatic retopology of an object model. In some embodiments, the techniques may involve categorizing and/or segmenting an object model into a number of regions. 3D data in each region may then be compared to 3D data in corresponding regions for a number of similar object models in order to identify a closest matching corresponding region. The techniques may also involve identifying a set of edges stored in relation to each closest matching corresponding region for each region of an object model. Each set of edges may be conformed to the 3D data of its corresponding region. Once conformed, the sets of edges may be compiled into a cage for the object model, from which a mesh may be generated.
    Type: Grant
    Filed: May 25, 2018
    Date of Patent: May 19, 2020
    Assignee: LOWE'S COMPANIES, INC.
    Inventors: Mason E. Sheffield, Oleg Alexander, Jonothon Frederick Douglas
  • Patent number: 10657675
    Abstract: An encoder includes a processor and a memory. The encoder generates a first plurality of levels of detail (LODs) and associated first type of vertex split records; each of the first type of vertex split records associated with an LOD of the first plurality of LODs is generated using a first type of collapse operator. The encoder initiates a switch from using the first type of collapse operator to a second type of collapse operator in response to a switching condition being satisfied. The encoder further generates a second plurality of LODs and associated second type of vertex split records; each of the second type of vertex split records associated with an LOD of the second plurality of LODs is generated using the second type of collapse operator.
    Type: Grant
    Filed: March 6, 2018
    Date of Patent: May 19, 2020
    Assignee: GOOGLE LLC
    Inventors: Michael Hemmer, Pierre Alliez, Cedric Portaneri
  • Patent number: 10656424
    Abstract: In a wearable information display terminal, information related to an object is displayed at the timing required by a user and in an easily recognized form. Photography is performed in the field-of-vision direction, a first object is detected from a first image obtained by the photography, relevant information related to the first object is acquired from a network, and a second image related to the relevant information is generated. A second object different from the first object is detected from the first image and used as a display trigger: the second image is displayed when the second object is close to the first object.
    Type: Grant
    Filed: July 11, 2019
    Date of Patent: May 19, 2020
    Assignee: MAXELL, LTD.
    Inventors: Susumu Yoshida, Satoshi Ouchi, Yukinobu Tada, Tadashi Kuwabara, Yoshiho Seo
  • Patent number: 10656422
    Abstract: An optical attenuator, when damaged, loses not only its light attenuation function but also part of an optical path shift function that utilizes a refractive effect, and thus the optical path shift of the normal state is lost. It is thus possible to divert the optical path of the modulated light, which is a laser beam, from the direction toward the mirror surface, that is, the direction toward the eye of an observer.
    Type: Grant
    Filed: September 20, 2018
    Date of Patent: May 19, 2020
    Assignee: SEIKO EPSON CORPORATION
    Inventors: Takeshi Shimizu, Shuichi Wakabayashi
  • Patent number: 10659717
    Abstract: An airborne optronic equipment item comprises: at least one image sensor suitable for acquiring a plurality of images of a region flown over by a carrier of the equipment item; and a data processor configured or programmed to receive at least one acquired image and transmit it to a display device; wherein the data processor is also configured or programmed to: access a database of images of the region flown over; extract from the database information to synthesize a virtual image of the region which would be seen by an observer situated at a predefined observation point and looking, with a predefined field of view, along a predefined line of sight; synthesize the virtual image; and transmit it to a display device. A method for using such an equipment item is provided.
    Type: Grant
    Filed: July 9, 2015
    Date of Patent: May 19, 2020
    Assignee: THALES
    Inventors: Ludovic Perruchot, Arnaud Beche, Fabien Deprugney, Denis Rabault, Bruno Depardon
  • Patent number: 10657696
    Abstract: An interactive avatar display system provides a computer-generated view of a virtual space including an avatar that moves in the virtual space in response to movements of the user that are sensed by sensors. The number of sensed movements is less than the number of degrees of freedom of avatar movement. The interactive avatar display system computes an array of accelerations to apply to movable body parts of the avatar and computes the array of accelerations by solving equations of motions from masses of the movable body parts and an array of forces computed from an array of inverse dynamics force values for the movable body parts and one or both of an array of balance control force values or an array of locomotion control force values, taking into account a set of constraints for the avatar, and possible also environmental objects in the virtual space.
    Type: Grant
    Filed: December 13, 2017
    Date of Patent: May 19, 2020
    Assignee: DeepMotion, Inc.
    Inventors: Weihua Jin, Kaichuan He
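The core computation described above, summing per-part force arrays and solving equations of motion for an array of accelerations, can be illustrated in simplified one-dimensional form. Treating each body part's equation as a diagonal a = F / m with a symmetric clamp standing in for the constraint set is an assumption for the sketch; the patented system solves the full articulated dynamics.

```python
def compute_accelerations(masses, inverse_dyn, balance, locomotion,
                          max_accel=50.0):
    """Return one acceleration per movable body part.

    All arguments are parallel lists, one entry per body part: a mass,
    an inverse-dynamics force, a balance-control force, and a
    locomotion-control force. The constraint here is just a clamp on
    acceleration magnitude.
    """
    accels = []
    for m, f_id, f_bal, f_loc in zip(masses, inverse_dyn, balance, locomotion):
        total_force = f_id + f_bal + f_loc
        a = total_force / m  # equation of motion per body part
        accels.append(max(-max_accel, min(max_accel, a)))
    return accels
```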
  • Patent number: 10656722
    Abstract: This document describes a system for animating a two-dimensional object in real-time with gestural data collected by a gestural sensor of the system. A gestural sensor collects gestural data. An image extraction engine extracts image data that renders a two-dimensional object on a display device of the system. An overlay engine generates a mesh overlay for the two-dimensional object. Based on a detected gesture represented in the gestural data collected, an animation engine modifies portions of the two-dimensional object from a first image frame to a second image frame based on the collected gestural data. An image frame generator generates the first and second image frames that animate the two-dimensional object in accordance with the gestural data collected.
    Type: Grant
    Filed: November 9, 2016
    Date of Patent: May 19, 2020
    Assignee: Carnegie Mellon University
    Inventor: Ali Momeni
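The pipeline above, generating a mesh overlay for a 2D object and modifying it frame-to-frame from gestural data, can be sketched as follows. The regular-grid mesh and the uniform linear displacement from the sensed gesture are illustrative assumptions; a real system would deform vertices non-uniformly.

```python
def make_mesh(width, height, step):
    """Regular grid of (x, y) vertices covering a width x height image."""
    return [(x, y) for y in range(0, height + 1, step)
                   for x in range(0, width + 1, step)]

def animate_frame(mesh, gesture_dx, gesture_dy, strength=0.5):
    """Produce the next frame's mesh by displacing every vertex toward
    the sensed gesture motion, scaled by an animation strength."""
    return [(x + strength * gesture_dx, y + strength * gesture_dy)
            for x, y in mesh]
```

Rendering each frame's image through the displaced mesh is what animates the underlying two-dimensional object in real time.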