Patents Examined by Jin Ge
  • Patent number: 10728534
    Abstract: A volumetric display system for displaying a three-dimensional image. The volumetric display system includes a multi-plane volumetric display with display elements; a graphics processing unit to process the aforesaid image to generate image planes corresponding thereto; and a projector communicably coupled to the aforesaid elements. The projector includes a light source for emitting a light beam; a spatial light modulator to modulate the emitted light beam; a telecentric projection arrangement to direct the modulated light beam towards the display elements, whilst providing a substantially-constant magnification of the modulated light beam; and a driver module coupled to the light source and the spatial light modulator. The driver module receives the image planes from the graphics processing unit, and projects the image planes upon the display elements, by way of the modulated light beam.
    Type: Grant
    Filed: July 31, 2018
    Date of Patent: July 28, 2020
    Assignee: LightSpace Technologies, SIA
    Inventors: Ilmārs Osmanis, Krišs Osmanis, Mārtiņš Narels, Uģis Gertners, Roberts Zabels, Armands Šmaukstelis
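    Illustrative sketch (not from the patent): a minimal Python example of one way a graphics processing step might slice a 3D scene into discrete image planes for a multi-plane display; the plane count, resolution, and the slice_into_planes helper are assumptions for illustration only.
    ```python
    import numpy as np

    def slice_into_planes(points, colors, num_planes=6, resolution=(64, 64)):
        """Assign each 3D point to the nearest of `num_planes` depth planes and
        rasterize it into that plane's image (illustrative values only)."""
        h, w = resolution
        planes = np.zeros((num_planes, h, w, 3))
        zmin, zmax = points[:, 2].min(), points[:, 2].max()
        # Index of the depth plane closest to each point.
        idx = np.clip(((points[:, 2] - zmin) / (zmax - zmin + 1e-9) * num_planes).astype(int),
                      0, num_planes - 1)
        # Map x/y into pixel coordinates of the plane.
        xs = np.clip(((points[:, 0] - points[:, 0].min()) /
                      (np.ptp(points[:, 0]) + 1e-9) * (w - 1)).astype(int), 0, w - 1)
        ys = np.clip(((points[:, 1] - points[:, 1].min()) /
                      (np.ptp(points[:, 1]) + 1e-9) * (h - 1)).astype(int), 0, h - 1)
        planes[idx, ys, xs] = colors
        return planes  # one RGB image per display element

    # Example: 1000 random colored points sliced into 6 planes.
    pts = np.random.rand(1000, 3)
    cols = np.random.rand(1000, 3)
    image_planes = slice_into_planes(pts, cols)
    print(image_planes.shape)  # (6, 64, 64, 3)
    ```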
  • Patent number: 10720122
    Abstract: There is provided an image processing apparatus including an image processing unit which combines a virtual object with a captured image. The image processing unit determines the virtual object based on a state or a type of an object shown in the captured image.
    Type: Grant
    Filed: November 26, 2018
    Date of Patent: July 21, 2020
    Assignee: SONY CORPORATION
    Inventor: Reiko Miyazaki
  • Patent number: 10699474
    Abstract: Described herein are apparatuses, systems and methods for generating an interactive three-dimensional (“3D”) environment using virtual depth. A method comprises receiving a pre-rendered media file comprising a plurality of frames, receiving depth data related to the media file, wherein the depth data corresponds to each of the plurality of frames, creating an invisible three-dimensional (“3D”) framework of a first frame of the media file based on the corresponding depth data, and rendering a new first frame in real time to include the pre-rendered first frame, one or more virtual visible 3D objects and the invisible 3D framework.
    Type: Grant
    Filed: June 20, 2018
    Date of Patent: June 30, 2020
    Assignee: VIACOM INTERNATIONAL INC.
    Inventors: Tamer William Eskander, Isaac Steele
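    Illustrative sketch (not from the patent): a hedged Python example of how per-frame depth data could act as an invisible 3D framework that occludes virtual objects composited over a pre-rendered frame; the composite_with_depth helper and all array shapes are assumptions.
    ```python
    import numpy as np

    def composite_with_depth(frame_rgb, frame_depth, obj_rgb, obj_depth):
        """Composite a virtual object over a pre-rendered frame, letting the
        frame's depth data (the 'invisible framework') occlude the object where
        the scene is closer to the camera. All arrays are H x W (x3 for RGB)."""
        visible = obj_depth < frame_depth          # object is in front of the scene
        out = frame_rgb.copy()
        out[visible] = obj_rgb[visible]
        return out

    # Toy example: a flat scene at depth 5 with an object patch at depth 3.
    h, w = 4, 4
    scene = np.zeros((h, w, 3)); scene_d = np.full((h, w), 5.0)
    obj = np.ones((h, w, 3));    obj_d = np.full((h, w), np.inf)
    obj_d[1:3, 1:3] = 3.0                          # object occupies a 2x2 patch
    print(composite_with_depth(scene, scene_d, obj, obj_d)[:, :, 0])
    ```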
  • Patent number: 10682767
    Abstract: Methods for operating medical imaging devices and medical imaging devices are disclosed herein. In one example, the medical imaging device includes a user interface device for displaying information relevant to an imaging process to a user and/or receiving user input relevant to an imaging process and at least one component controllable according to a user command entered using the user interface device, wherein the user interface device includes at least one pair of mixed reality smart glasses, whereby a virtual assistance line indicating the direction of view of a wearer of the smart glasses is projected into the field of view of the smart glasses, wherein upon reception of at least one user command at least one component is controlled based on the user command and the direction defined by the assistance line.
    Type: Grant
    Filed: August 13, 2018
    Date of Patent: June 16, 2020
    Assignee: Siemens Healthcare GmbH
    Inventors: Alexander Grafenberg, Hans Schweizer
  • Patent number: 10679428
    Abstract: Systems, devices, media, and methods are presented for object detection and inserting graphical elements into an image stream in response to detecting the object. The systems and methods detect an object of interest in received frames of a video stream. The systems and methods identify a bounding box for the object of interest and estimate a three-dimensional position of the object of interest based on a scale of the object of interest. The systems and methods generate one or more graphical elements having a size based on the scale of the object of interest and a position based on the three-dimensional position estimated for the object of interest. The one or more graphical elements are generated within the video stream to form a modified video stream. The systems and methods cause presentation of the modified video stream including the object of interest and the one or more graphical elements.
    Type: Grant
    Filed: May 25, 2018
    Date of Patent: June 9, 2020
    Assignee: Snap Inc.
    Inventors: Travis Chen, Samuel Edward Hare, Yuncheng Li, Tony Mathew, Jonathan Solichin, Jianchao Yang, Ning Zhang
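    Illustrative sketch (not from the patent): one plausible way to estimate a rough 3D position from a bounding box's scale using a pinhole camera model, and to size a graphical element accordingly; the helper names, assumed physical width, and focal length are illustrative only.
    ```python
    import numpy as np

    def estimate_position_from_scale(bbox, known_width_m, focal_px, image_size):
        """Estimate a rough 3D position of a detected object from its bounding box
        using a pinhole model: distance = focal * real_width / pixel_width.
        `known_width_m` is an assumed physical width for the object class."""
        x0, y0, x1, y1 = bbox
        w_px = max(x1 - x0, 1)
        z = focal_px * known_width_m / w_px                  # depth along the optical axis
        cx, cy = image_size[0] / 2.0, image_size[1] / 2.0
        u, v = (x0 + x1) / 2.0, (y0 + y1) / 2.0              # bbox center in pixels
        x = (u - cx) * z / focal_px
        y = (v - cy) * z / focal_px
        return np.array([x, y, z])

    def graphic_size_for(bbox, base_px=32):
        """Scale an overlay element proportionally to the detected object's size."""
        x0, y0, x1, y1 = bbox
        return int(base_px * (x1 - x0) / 100.0)

    pos = estimate_position_from_scale((300, 200, 420, 360), known_width_m=0.5,
                                       focal_px=800, image_size=(1280, 720))
    print(pos, graphic_size_for((300, 200, 420, 360)))
    ```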
  • Patent number: 10672195
    Abstract: An information processing method and an information processing device are disclosed. The information processing method comprises: calculating at least one of a shape parameter and an expression parameter based on a correspondence relationship between a first set of landmarks in a two-dimensional image containing a face of a person and a second set of landmarks in an average three-dimensional face model; and configuring a face deformable model using the at least one of the shape parameter and the expression parameter, to obtain a specific three-dimensional model corresponding to the face contained in the two-dimensional image.
    Type: Grant
    Filed: May 25, 2018
    Date of Patent: June 2, 2020
    Assignee: FUJITSU LIMITED
    Inventor: Qianwen Miao
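    Illustrative sketch (not from the patent): a toy linear deformable model fit by least squares, showing how shape and expression coefficients could be recovered from 2D-landmark/3D-model correspondences under an orthographic projection; the basis sizes and random model data are placeholders, not the patent's method.
    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    L, S, E = 68, 10, 5          # landmarks, shape basis size, expression basis size

    # A toy linear deformable model: mean landmarks plus shape/expression bases.
    mean3d = rng.normal(size=(L, 3))
    shape_basis = rng.normal(size=(S, L, 3))
    expr_basis = rng.normal(size=(E, L, 3))

    def fit_parameters(landmarks2d):
        """Solve for shape and expression coefficients that best reproduce the
        observed 2D landmarks under an orthographic projection (drop z)."""
        # Each basis vector's 2D footprint becomes a column of the design matrix.
        A = np.concatenate([shape_basis[:, :, :2].reshape(S, -1),
                            expr_basis[:, :, :2].reshape(E, -1)]).T   # (2L, S+E)
        b = (landmarks2d - mean3d[:, :2]).reshape(-1)
        coeffs, *_ = np.linalg.lstsq(A, b, rcond=None)
        return coeffs[:S], coeffs[S:]

    # Synthesize landmarks from known parameters, then recover them.
    true_s, true_e = rng.normal(size=S), rng.normal(size=E)
    obs = mean3d + np.tensordot(true_s, shape_basis, 1) + np.tensordot(true_e, expr_basis, 1)
    s_hat, e_hat = fit_parameters(obs[:, :2])
    print(np.allclose(s_hat, true_s), np.allclose(e_hat, true_e))
    ```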
  • Patent number: 10664949
    Abstract: Techniques related to eye contact correction to provide a virtual user gaze aligned with a camera while the user views a display are discussed. Such techniques may include determining and reducing histogram of oriented gradient features for an eye region of a source image to provide a feature set, applying a pretrained classifier to the feature set to determine a motion vector field for the eye region, and warping and inserting the eye region into the source image to generate an eye contact corrected image.
    Type: Grant
    Filed: April 22, 2016
    Date of Patent: May 26, 2020
    Assignee: Intel Corporation
    Inventors: Edmond Chalom, Or Shimshi
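    Illustrative sketch (not from the patent): a hedged Python outline of the described pipeline's shape, with a tiny orientation-histogram feature standing in for HOG, a fixed rule standing in for the pretrained classifier, and a nearest-neighbor warp applying the motion vector field; every helper here is a placeholder.
    ```python
    import numpy as np

    def orientation_histogram(patch, bins=8):
        """Very small stand-in for HOG: a histogram of gradient orientations
        weighted by gradient magnitude over the whole patch."""
        gy, gx = np.gradient(patch.astype(float))
        mag = np.hypot(gx, gy)
        ang = np.mod(np.arctan2(gy, gx), np.pi)
        hist, _ = np.histogram(ang, bins=bins, range=(0, np.pi), weights=mag)
        return hist / (hist.sum() + 1e-9)

    def warp_with_flow(patch, flow):
        """Warp an eye patch with a per-pixel motion vector field (nearest neighbor)."""
        h, w = patch.shape
        ys, xs = np.mgrid[0:h, 0:w]
        src_y = np.clip((ys + flow[..., 1]).round().astype(int), 0, h - 1)
        src_x = np.clip((xs + flow[..., 0]).round().astype(int), 0, w - 1)
        return patch[src_y, src_x]

    def predict_flow(features, shape):
        """Placeholder for the pretrained classifier: maps features to a constant
        upward shift of the gaze (illustrative only, not a learned model)."""
        flow = np.zeros(shape + (2,))
        flow[..., 1] = 2.0 * features.max()
        return flow

    eye = np.random.rand(24, 32)
    feats = orientation_histogram(eye)
    corrected = warp_with_flow(eye, predict_flow(feats, eye.shape))
    print(corrected.shape)
    ```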
  • Patent number: 10665155
    Abstract: Windows, including sunroofs, of a car may be enabled to display cinematic video, still pictures, opaque content, or picturesque content such as the countryside to simulate various environments while driving. On the outside of the autonomous vehicle, advertising content may be shown on digital displays to pedestrians, bystanders, and passengers of other vehicles, autonomous vehicles, or semi-autonomous vehicles. Advertising revenue may offset vehicle leasing, maintenance, fuel, and ridesharing costs. Ad networks may send local offer content for presentation on the windows of the vehicle, autonomous vehicle, or semi-autonomous vehicle. Passengers may order food and interact with ads on the car window. The car windows may be multi-touch enabled to allow for various gestures and interactivity with the vehicle, autonomous vehicle, or semi-autonomous vehicle. Chairs in the car may be rotated to allow for better positioning of the passenger relative to the window screens.
    Type: Grant
    Filed: March 21, 2018
    Date of Patent: May 26, 2020
    Assignee: Accelerate Labs, LLC
    Inventor: Sanjay K. Rao
  • Patent number: 10656956
    Abstract: Disclosed herein are a virtual desktop server for supporting high-quality graphics processing and a method for processing high-quality graphics using the virtual desktop server. The virtual desktop server includes one or more virtual desktops for creating instructions for accelerated graphics processing by running a high-quality graphics application, one or more hardware-based graphics accelerators for creating screen data by executing the instructions for accelerated graphics processing and for storing the created screen data in a frame buffer, and a hypervisor for transmitting the screen data received from the virtual desktop to a client over a network, and the virtual desktop captures the screen data stored in the frame buffer, converts the captured screen data, and delivers the converted screen data to the hypervisor.
    Type: Grant
    Filed: March 16, 2018
    Date of Patent: May 19, 2020
    Assignee: ELECTRONICS AND TELECOMMUNICATIONS RESEARCH INSTITUTE
    Inventors: Soo-Cheol Oh, Dae-Won Kim, Sun-Wook Kim, Jae-Geun Cha, Ji-Hyeok Choi, Seong-Woon Kim
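    Illustrative sketch (not from the patent): a minimal capture-convert-deliver loop on the virtual-desktop side, with stand-in functions for reading the accelerator's frame buffer and handing frames toward the hypervisor; the function names and the queue are assumptions.
    ```python
    import numpy as np
    import queue

    def capture_frame_buffer(width=640, height=360):
        """Stand-in for reading screen data out of an accelerator's frame buffer."""
        return (np.random.rand(height, width, 3) * 255).astype(np.uint8)

    def convert(frame):
        """Stand-in for the conversion step (here: downscale 2x before transport)."""
        return frame[::2, ::2]

    hypervisor_queue = queue.Queue()     # stands in for the path to the hypervisor

    def desktop_capture_loop(num_frames=3):
        """Virtual-desktop side: capture, convert, and hand frames to the hypervisor."""
        for _ in range(num_frames):
            hypervisor_queue.put(convert(capture_frame_buffer()))

    desktop_capture_loop()
    print(hypervisor_queue.qsize(), hypervisor_queue.get().shape)
    ```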
  • Patent number: 10621779
    Abstract: Artificial intelligence based techniques are used for analysis of 3D objects in conjunction with each other. A 3D model of two or more 3D objects is generated. Features of the 3D objects are matched to develop a correspondence between them. Two 3D objects are geometrically mapped, and one object is overlaid on another 3D object to obtain a superimposed object. Match analysis of the 3D objects is performed based on machine-learning-based models to determine how well the objects are spatially matched. The analysis of the objects is used in augmented reality applications.
    Type: Grant
    Filed: May 25, 2018
    Date of Patent: April 14, 2020
    Assignee: FastVDO LLC
    Inventors: Pankaj N. Topiwala, Madhu Peringassery Krishnan, Wei Dai
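    Illustrative sketch (not from the patent): a classical geometric-mapping step, Kabsch alignment of two already-matched 3D point sets followed by a simple residual-based match score, standing in for the patent's machine-learning-based match analysis.
    ```python
    import numpy as np

    def kabsch_align(src, dst):
        """Find the rotation/translation that best overlays `src` onto `dst`
        (corresponding points assumed already matched), via the Kabsch algorithm."""
        src_c, dst_c = src - src.mean(0), dst - dst.mean(0)
        U, _, Vt = np.linalg.svd(src_c.T @ dst_c)
        d = np.sign(np.linalg.det(Vt.T @ U.T))
        R = Vt.T @ np.diag([1, 1, d]) @ U.T
        t = dst.mean(0) - R @ src.mean(0)
        return R, t

    def match_score(src, dst, R, t):
        """A simple spatial-match score: mean residual after superimposition."""
        return float(np.linalg.norm((src @ R.T + t) - dst, axis=1).mean())

    rng = np.random.default_rng(1)
    a = rng.normal(size=(100, 3))
    Q, _ = np.linalg.qr(rng.normal(size=(3, 3)))
    true_R = Q if np.linalg.det(Q) > 0 else -Q              # a proper random rotation
    b = a @ true_R.T + np.array([1.0, 2.0, 3.0])
    R, t = kabsch_align(a, b)
    print(round(match_score(a, b, R, t), 6))                # ~0 for a perfect overlay
    ```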
  • Patent number: 10607397
    Abstract: Examples relate to capturing and processing three dimensional (3D) scan data. In some examples, 3D scan data of a real-world object is obtained while the real-world object is repositioned in a number of orientations, where the 3D scan data includes 3D scan passes that are each associated with one of the orientations. A projector is used to project a visual cue related to a position of the real-world object as the real-world object is repositioned at each of the orientations. The 3D scan passes are stitched to generate a 3D model of the real-world object, where a real-time representation of the 3D model is shown on a display as each of the 3D scan passes is incorporated into the 3D model.
    Type: Grant
    Filed: June 4, 2015
    Date of Patent: March 31, 2020
    Assignee: Hewlett-Packard Development Company, L.P.
    Inventors: David Bradley Short, Stephen George Miller, Jordi Morillo Peres, Jinman Kang, Patricia Panqueva, Matthew Leck, Daniel Jordan Kayser, Eddie Licitra
  • Patent number: 10593126
    Abstract: A virtual space display system for a self-driving moving body displays a virtual space different from the real environment such that a passenger on the moving body can enjoy the virtual space without the vehicle's motion feeling unnatural. The system includes a display device, a surrounding situation detector, and a virtual space display unit. The surrounding situation detector obtains information specifying features that influence a path along which the moving body is to move, and identifies, among features located in the surroundings of the moving body, an important feature whose presence or absence has an influence on the path of the moving body. The virtual space display unit converts the important feature into an object that is fit for the influence on the path of the moving body based on a predetermined rule, and causes the display device to display the virtual space including the converted object.
    Type: Grant
    Filed: August 20, 2018
    Date of Patent: March 17, 2020
    Assignee: National University Corporation Nagoya University
    Inventors: Yoshio Ishiguro, Shimpei Kato, Kenjirou Yamada
  • Patent number: 10579466
    Abstract: Systems and methods are provided for agentless error management by an agentless system. The agentless system can include a management processor and a memory that stores agentless management firmware. Execution of the firmware causes the management processor to obtain first graphic data corresponding to actual output graphics that are displayed via a display device. An error is detected in the actual output graphics. The error can indicate one or more differences between the actual output graphics and intended output graphics. The detected error can then be addressed, i.e., remedied or attempted to be remedied, by eliminating the differences and/or extraneous graphical content from the displayed data or actual output graphics.
    Type: Grant
    Filed: July 31, 2018
    Date of Patent: March 3, 2020
    Assignee: HEWLETT PACKARD ENTERPRISE DEVELOPMENT LP
    Inventor: Andrew Brown
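    Illustrative sketch (not from the patent): a hedged example of detecting an error as a pixel-level difference between actual and intended output graphics and reporting the offending region; the tolerance and helper name are illustrative.
    ```python
    import numpy as np

    def detect_graphic_errors(actual, intended, tolerance=8):
        """Flag pixels where the actual output graphics differ from the intended
        output graphics by more than `tolerance` in any channel, and report the
        bounding box of the differing region (None if the frames match)."""
        diff = np.abs(actual.astype(int) - intended.astype(int)).max(axis=-1) > tolerance
        if not diff.any():
            return None
        ys, xs = np.nonzero(diff)
        return (int(xs.min()), int(ys.min()), int(xs.max()), int(ys.max()))

    intended = np.zeros((120, 160, 3), dtype=np.uint8)
    actual = intended.copy()
    actual[40:60, 50:90] = 255                       # extraneous graphical content
    print(detect_graphic_errors(actual, intended))   # (50, 40, 89, 59)
    ```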
  • Patent number: 10580040
    Abstract: Disclosed herein are methods and systems for real-time image and signal processing in an augmented reality environment; for example, for video conferencing in a virtual environment of the participants' choice. In particular, image information of a real life object is extracted by separating the object from its actual environment in one or more images that are captured in real-time using a comprehensive characteristic-based mechanism. The extracted real life object is then integrated with a virtual environment based on image relations between each pixel of the image information of the real life object and a corresponding pixel of each image of the plurality of images of the virtual environment through a pixel-by-pixel integration approach. The image relations comprise at least a depth relation or a transparency relation.
    Type: Grant
    Filed: April 4, 2017
    Date of Patent: March 3, 2020
    Inventor: Eliza Y. Du
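    Illustrative sketch (not from the patent): a pixel-by-pixel integration of an extracted foreground object into a virtual environment that honors a depth relation and a transparency relation; the array layout and helper name are assumptions.
    ```python
    import numpy as np

    def integrate_pixelwise(fg_rgb, fg_alpha, fg_depth, bg_rgb, bg_depth):
        """Blend an extracted real-life object (foreground) into a virtual
        environment per pixel, honoring a depth relation (virtual pixels that are
        closer occlude the object) and a transparency relation (alpha blending)."""
        in_front = (fg_depth <= bg_depth)[..., None]          # depth relation
        alpha = fg_alpha[..., None] * in_front                # transparency relation
        return alpha * fg_rgb + (1 - alpha) * bg_rgb

    h, w = 3, 3
    fg = np.ones((h, w, 3)); fg_a = np.full((h, w), 0.8); fg_d = np.full((h, w), 2.0)
    bg = np.zeros((h, w, 3)); bg_d = np.full((h, w), 5.0)
    bg_d[0, 0] = 1.0                                          # virtual surface in front
    print(integrate_pixelwise(fg, fg_a, fg_d, bg, bg_d)[:, :, 0])
    ```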
  • Patent number: 10579211
    Abstract: A display apparatus and method are provided. The display apparatus includes a display control unit configured to display a moving image on a first region of a display screen, and a thumbnail generation unit configured to generate a plurality of thumbnail images based on a plurality of still images related to the moving image, wherein the display control unit is further configured to display the plurality of thumbnail images on a second region of the display screen.
    Type: Grant
    Filed: April 13, 2017
    Date of Patent: March 3, 2020
    Assignee: Samsung Electronics Co., Ltd.
    Inventors: Jun-gyun Sang, Hye-jin Kim, Wan-je Park
  • Patent number: 10572133
    Abstract: Technologies described herein provide a mixed environment display of attached control elements. The techniques disclosed herein enable users of a first computing device to interact with a remote computing device configured to control an object, such as a light, appliance, or any other suitable object. Configurations disclosed herein enable the first computing device to cause one or more actions, such as a selection of the object or the display of a user interface, by capturing and analyzing input data defining the performance of one or more gestures, such as a user looking at the object controlled by the second computing device. Rendered graphical elements configured to enable the control of the object can be displayed with a real-world view of the object.
    Type: Grant
    Filed: June 1, 2018
    Date of Patent: February 25, 2020
    Assignee: Microsoft Technology Licensing, LLC
    Inventors: David M. Hill, Andrew William Jean, Jeffrey J. Evertt, Alan M. Jones, Richard C. Roesler, Charles W. Carlson, Emiko V. Charbonneau, James Dack
  • Patent number: 10565796
    Abstract: Disclosed are systems and methods for compositing an augmented reality (AR) scene, the methods including the steps of: extracting, by an extraction component into a memory of a data-processing machine, at least one object from a real-world image detected by a sensing device; geometrically reconstructing at least one virtual model from the at least one object; and compositing AR content from the at least one virtual model in order to augment the AR content on the real-world image, thereby creating the AR scene. Preferably, the method further includes: extracting at least one annotation from the real-world image into the memory of the data-processing machine for modifying the at least one virtual model according to the at least one annotation. Preferably, the method further includes: interacting with the AR scene by modifying the AR content based on modification of the at least one object and/or the at least one annotation in the real-world image.
    Type: Grant
    Filed: November 20, 2017
    Date of Patent: February 18, 2020
    Assignee: Apple Inc.
    Inventors: Netanel Hagbi, Oriel Y. Bergig, Jihad A. Elsana
  • Patent number: 10553030
    Abstract: Decision making speed of a human is improved by presenting data to the human in ways that enable the human to receive the data at higher rates, and spend more time analyzing received data. The data may represent information, such as infrared imagery or map data, that is not directly perceivable by the human. An individual datum may be presented to the human via multiple senses, to make receiving the datum easier, and to make it more likely that the datum is received, especially in contexts where the human may be busy, stressed, cognitively loaded or distracted by other demands on the human's attention. Some embodiments automatically select which sense or combination of senses to use for presenting each datum, based on various factors, such as how the human's current environment may interfere with the human's ability to receive or process the datum or how busy a given sense is.
    Type: Grant
    Filed: September 30, 2017
    Date of Patent: February 4, 2020
    Assignee: The Charles Stark Draper Laboratory, Inc.
    Inventors: Jana Lyn Schwartz, Emily Catherine Vincent, Meredith Gerber Cunha
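    Illustrative sketch (not from the patent): one simple way the sense-selection step could be expressed, picking the least busy senses and adding a second channel for high-priority data; the load scores, threshold, and helper name are invented for illustration.
    ```python
    def choose_senses(datum_priority, sense_load, threshold=0.6, max_senses=2):
        """Pick which senses to use for presenting a datum: prefer the least busy
        senses, and use more than one channel when the datum is high priority."""
        ranked = sorted(sense_load, key=sense_load.get)            # least busy first
        usable = [s for s in ranked if sense_load[s] < threshold]
        count = max_senses if datum_priority > 0.7 else 1
        return usable[:count] or [ranked[0]]                       # always deliver somehow

    # Example: the operator's vision is saturated, so haptics/audio are preferred.
    load = {"visual": 0.9, "auditory": 0.3, "haptic": 0.2}
    print(choose_senses(datum_priority=0.9, sense_load=load))      # ['haptic', 'auditory']
    ```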
  • Patent number: 10538200
    Abstract: A work vehicle includes a vehicle body, cameras, circuitry, and a display. The vehicle body has a peripheral area surrounding the vehicle body and work equipment attachable to the vehicle body. The peripheral area is divided into allocated areas. The cameras are provided on the vehicle body to capture images of the allocated areas, respectively. The circuitry is configured to convert the images captured by the cameras to partial overhead images via view-point transformation, respectively. The circuitry is configured to composite the partial overhead images based on an image composition pattern that is changeable, to produce a peripheral overhead image of the peripheral area. The display displays the peripheral overhead image.
    Type: Grant
    Filed: June 19, 2017
    Date of Patent: January 21, 2020
    Assignee: KUBOTA CORPORATION
    Inventor: Yushi Matsuzaki
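    Illustrative sketch (not from the patent): compositing partial overhead images according to a changeable image composition pattern, assuming the view-point transformation has already produced the partial images; the pattern encoding and helper name are assumptions.
    ```python
    import numpy as np

    def composite_overhead(partials, pattern):
        """Composite partial overhead images into one peripheral overhead image.
        `pattern[y, x]` names which camera's partial image supplies that pixel, so
        changing the pattern (e.g., when work equipment swings out) re-allocates
        the overlap regions."""
        h, w = pattern.shape
        out = np.zeros((h, w, 3))
        for cam_id, img in partials.items():
            out[pattern == cam_id] = img[pattern == cam_id]
        return out

    h, w = 4, 6
    partials = {0: np.full((h, w, 3), 0.2), 1: np.full((h, w, 3), 0.8)}  # front/rear cams
    pattern = np.zeros((h, w), dtype=int); pattern[:, 3:] = 1            # split allocation
    print(composite_overhead(partials, pattern)[:, :, 0])
    ```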
  • Patent number: 10504270
    Abstract: Techniques are disclosed relating to synchronizing access to pixel resources. Examples of pixel resources include color attachments, a stencil buffer, and a depth buffer. In some embodiments, hardware registers are used to track the status of assigned pixel resources, and pixel wait and pixel release instructions are used to synchronize access to the pixel resources. In some embodiments, other accesses to the pixel resources may occur out of program order. Relative to tracking and ordering pass groups, this weak ordering and explicit synchronization may improve performance and reduce power consumption. Disclosed techniques may also facilitate coordination between fragment rendering threads and auxiliary mid-render compute tasks.
    Type: Grant
    Filed: December 22, 2016
    Date of Patent: December 10, 2019
    Assignee: Apple Inc.
    Inventors: Terence M. Potter, Richard W. Schreyer, James J. Ding, Alexander K. Kan, Michael Imbrogno
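    Illustrative sketch (not from the patent, and not GPU code): a toy model of explicit pixel_wait/pixel_release synchronization using a condition variable, showing how per-resource waits can replace global pass-group ordering; the class and method names are illustrative.
    ```python
    import threading

    class PixelResource:
        """Toy model of explicit pixel synchronization: a fragment thread issues
        `pixel_wait` before touching a resource (e.g., a color attachment) and
        `pixel_release` when done, instead of relying on global pass ordering."""
        def __init__(self):
            self._cond = threading.Condition()
            self._busy = False

        def pixel_wait(self):
            with self._cond:
                while self._busy:
                    self._cond.wait()
                self._busy = True

        def pixel_release(self):
            with self._cond:
                self._busy = False
                self._cond.notify_all()

    color_attachment = PixelResource()

    def fragment_thread(tid, log):
        # Other work in the thread may run out of order; only this access is ordered.
        color_attachment.pixel_wait()
        log.append(tid)                      # the synchronized resource access
        color_attachment.pixel_release()

    log = []
    threads = [threading.Thread(target=fragment_thread, args=(i, log)) for i in range(4)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    print(sorted(log))                       # all four accesses completed safely
    ```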