Patents Examined by Kee M. Tung
  • Patent number: 10290139
    Abstract: A method of producing a realistic virtual avatar by using the change in pupil size that accompanies heartbeats. The method tracks and records, in real time, the change in the actual user's pupil size according to the user's heartbeats and applies that change to the eye model of a virtual avatar, so as to synchronize the pupil size of the eye model with the pupil size of the actual user.
    Type: Grant
    Filed: February 1, 2016
    Date of Patent: May 14, 2019
    Inventors: Myoung Ju Won, Min Cheol Whang, Sang In Park
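The synchronization loop described in this abstract can be sketched in a few lines. A minimal sketch, assuming the measurement stream delivers pupil diameters in millimeters and clamping to a plausible 2–8 mm human range; the class and function names are illustrative, not from the patent.

```python
class AvatarEye:
    """Minimal stand-in for the virtual avatar's eye model."""

    def __init__(self):
        self.pupil_mm = 4.0  # resting pupil diameter (assumed units)

    def apply(self, measured_mm):
        # Clamp to a plausible human pupil range before driving the model.
        self.pupil_mm = min(8.0, max(2.0, measured_mm))


def sync(eye, pupil_stream):
    """Feed real-time pupil-diameter measurements (which vary with the
    user's heartbeat) into the avatar's eye model, frame by frame."""
    for measured_mm in pupil_stream:
        eye.apply(measured_mm)
        yield eye.pupil_mm
```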
  • Patent number: 10290138
    Abstract: A device comprising a memory (2) storing sound data, three-dimensional surface data, and a plurality of control data sets that represent control points defined by coordinate data associated with sound data; and a processor (4) which, on the basis of first and second successive sound data and of first three-dimensional surface data, selects the control data sets associated with the first and second sound data and defines second three-dimensional surface data by applying a displacement to each point. The displacement of a given point is calculated as the sum of displacement vectors computed for each control point from the sum of first and second vectors, each weighted by the ratio between the result of a two-variable function with a zero limit at infinity, applied to the given point and that control point, and the sum of the results of this function applied to the given point and each of the control points.
    Type: Grant
    Filed: March 10, 2016
    Date of Patent: May 14, 2019
    Inventors: Slim Ouni, Guillaume Gris
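The weighting rule in this abstract amounts to a normalized (Shepard-style) blend of control-point displacements. A minimal sketch, assuming the two-variable function is an inverse-quadratic falloff of the point-to-control distance (one concrete choice with a zero limit at infinity); all names are illustrative.

```python
import math


def displace(points, controls, displacements):
    """Move each surface point by the control-point displacement vectors,
    each weighted by f(point, control) normalized over all controls."""

    def f(p, c):
        # Inverse-quadratic falloff of the point-to-control distance:
        # tends to zero as the distance grows without bound.
        return 1.0 / (1.0 + math.dist(p, c) ** 2)

    out = []
    for p in points:
        weights = [f(p, c) for c in controls]
        total = sum(weights)  # normalizing denominator from the abstract
        moved = list(p)
        for w, d in zip(weights, displacements):
            for k in range(len(moved)):
                moved[k] += (w / total) * d[k]
        out.append(tuple(moved))
    return out
```

Because the weights sum to one, a point coincident with a single control point follows that control's displacement exactly, and influence fades smoothly with distance.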
  • Patent number: 10282812
    Abstract: One embodiment provides for a parallel processor comprising a processing array within the parallel processor, the processing array including multiple compute blocks, each compute block including multiple processing clusters configured for parallel operation, wherein each of the multiple compute blocks is independently preemptable. In one embodiment a preemption hint can be generated for source code during compilation to enable a compute unit to determine an efficient point for preemption.
    Type: Grant
    Filed: April 9, 2017
    Date of Patent: May 7, 2019
    Assignee: Intel Corporation
    Inventors: Altug Koker, Ingo Wald, David Puffer, Subramaniam M. Maiyuran, Prasoonkumar Surti, Balaji Vembu, Guei-Yuan Lueh, Murali Ramadoss, Abhishek R. Appu, Joydeep Ray
  • Patent number: 10283032
    Abstract: A method of image processing, the method including performing linear processing of an input data signal encoded with a nonlinear function to generate a linear representation of the input data signal including linearized image data, and using an integrated circuit to generate a processed linear image by nonlinearly quantizing the linearized image data to generate nonlinear quantized data, generating a memory address based on the nonlinear quantized data, and accessing a lookup table based on the generated memory address.
    Type: Grant
    Filed: May 5, 2016
    Date of Patent: May 7, 2019
    Assignee: Samsung Display Co., Ltd.
    Inventor: David M. Hoffman
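The quantize-then-address pipeline in this abstract can be sketched compactly. A sketch under stated assumptions: square-root spacing for the nonlinear quantizer, a 2.2 display gamma in the table, and a 64-entry table size are all illustrative choices, not details from the patent.

```python
N = 64  # number of LUT entries (illustrative)


def quantize(x):
    # Nonlinear quantization: square-root spacing assigns more LUT
    # addresses to dark linear values, where a gamma curve bends most.
    return round((N - 1) * x ** 0.5)


def dequantize(q):
    # Inverse of the quantizer: linear value represented by address q.
    return (q / (N - 1)) ** 2


# The LUT holds the display transfer function (a 2.2 gamma here),
# precomputed at the linear value each address represents.
lut = [round(255 * dequantize(q) ** (1 / 2.2)) for q in range(N)]


def process(linear_value):
    """Generate a memory address from the nonlinearly quantized sample,
    then access the lookup table at that address."""
    return lut[quantize(linear_value)]
```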
  • Patent number: 10275689
    Abstract: Techniques for identifying and labeling distinct objects within 3-D images of environments in which vehicles operate, to thereby generate training data used to train models that autonomously control and/or operate vehicles, are disclosed. A 3-D image may be presented from various perspective views (in some cases, dynamically), and/or may be presented with a corresponding 2-D environment image in a side-by-side and/or a layered manner, thereby allowing a user to more accurately identify groups/clusters of data points within the 3-D image that represent distinct objects. Automatic identification/delineation of various types of objects depicted within 3-D images, automatic labeling of identified/delineated objects, and automatic tracking of objects across various frames of a 3-D video are disclosed. A user may modify and/or refine any automatically generated information. Further, at least some of the techniques described herein are equally applicable to 2-D images.
    Type: Grant
    Filed: February 27, 2018
    Date of Patent: April 30, 2019
    Inventors: Prateek Sachdeva, Dmytro Trofymov
  • Patent number: 10275650
    Abstract: An apparatus, method and computer program wherein the apparatus comprises: processing circuitry; and memory circuitry including computer program code; the memory circuitry and the computer program code configured to, with the processing circuitry, cause the apparatus at least to perform: detecting user selection of a part of an image, wherein the image is displayed on a display; obtaining context information; and determining information to be provided to the user based on the user selection, the displayed image and the obtained context information.
    Type: Grant
    Filed: May 1, 2015
    Date of Patent: April 30, 2019
    Assignee: Nokia Technologies Oy
    Inventor: Lasse Laaksonen
  • Patent number: 10271040
    Abstract: A system that generates a 3D environment from data collected by depth sensors (such as LIDAR) and color sensors (such as color video camera data) observing an area or activity, transmits versions of the 3D environment to various devices for display, and enables device users to dynamically alter a viewing angle of the 3D environment. The version of the 3D environment sent to each device may be optimized for the device's resolution and for the bandwidth of the connection to the device. Embodiments may enrich the 3D environment by detecting and tagging objects and their locations in the environment, and by calculating metrics related to motion or actions of these objects. Object tags and metrics may be transmitted to devices and displayed for example as overlays of images rendered from user-selected viewing angles. Embodiments of the system also enable 3D printing of an object as a memento for example.
    Type: Grant
    Filed: August 9, 2018
    Date of Patent: April 23, 2019
    Assignee: ALIVE 3D
    Inventors: Raymond Paul Marchak, Jr., Russell Neil Harlan, Jr., Hunter Thomas Laux
  • Patent number: 10271054
    Abstract: Adaptive video processing for a target display panel may be implemented in or by a decoding/display pipeline associated with the target display panel. The adaptive video processing methods may take into account video content, display characteristics, and environmental conditions including but not limited to ambient lighting and viewer location when processing and rendering video content for a target display panel in an ambient setting or environment. The display-side adaptive video processing methods may use this information to adjust one or more video processing functions as applied to the video data to render video for the target display panel that is adapted to the display panel according to the ambient viewing conditions.
    Type: Grant
    Filed: February 25, 2015
    Date of Patent: April 23, 2019
    Assignee: Apple Inc.
    Inventors: Kenneth I. Greenebaum, Haitao Guo, Hao Pan, Guy Cote, Andrew Bai
  • Patent number: 10269142
    Abstract: The present disclosure is directed towards methods and systems for providing a digital mixed output color of two reference colors defined in an RGB model where the digital mixed output color at least generally reflects a color produced by mixing physical pigments of the two reference colors or a custom user-defined color. The systems and methods receive a selection of two reference colors to mix. Additionally, the systems and methods can determine a mixing ratio of the two reference colors. Moreover, the systems and methods query at least one predefined mixing table and identify from the at least one predefined mixing table a mixed output color correlating to a mixture of the two reference colors.
    Type: Grant
    Filed: October 27, 2016
    Date of Patent: April 23, 2019
    Assignee: ADOBE INC.
    Inventors: Zhili Chen, Daichi Ito, Byungmoon Kim, Gahye Park
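The table lookup plus ratio step described in this abstract can be sketched as follows. The table values and base colors are invented for illustration (a blue/yellow pigment pair whose physical mix is green, not the gray an RGB average would give), and the linear blend toward the dominant pigment for off-center ratios is an assumption, not the patent's method.

```python
# Hypothetical predefined mixing table: for a pair of reference pigments,
# the RGB result of a 50/50 physical mix. Note it is NOT the RGB average.
MIX_TABLE = {
    frozenset({"blue", "yellow"}): (40, 160, 60),  # pigment mix -> green
}
BASE = {"blue": (20, 40, 200), "yellow": (240, 220, 30)}


def mix(a, b, ratio=0.5):
    """Return the mixed output color of two reference colors.

    `ratio` is the fraction of `b` in the mix; the 50/50 table entry is
    blended linearly toward the dominant pigment for other ratios."""
    mid = MIX_TABLE[frozenset({a, b})]
    end, t = (BASE[a], 1 - 2 * ratio) if ratio < 0.5 else (BASE[b], 2 * ratio - 1)
    return tuple(round(m + t * (e - m)) for m, e in zip(mid, end))
```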
  • Patent number: 10271159
    Abstract: It is determined whether or not an information processing apparatus is located at a particular place. If it is determined that the apparatus is located at the particular place, use of predetermined data is permitted. If the apparatus satisfies a predetermined condition related to its being located at the particular place, the permitted use of the data is prohibited.
    Type: Grant
    Filed: February 12, 2015
    Date of Patent: April 23, 2019
    Assignee: NINTENDO CO., LTD.
    Inventor: Tomoyoshi Yamane
  • Patent number: 10262453
    Abstract: In order to improve depth perception for an image displayed during a laparoscopic surgery, a representation of a shadow of a tool included in the image and used in the laparoscopic surgery is identified and introduced into the image. A processor augments a three-dimensional (3D) model including a 3D representation of a surface of an object included in the image, and a representation of the tool by introducing a virtual light source into the 3D model to generate a virtual shadow within the 3D model. The processor subtracts the representation of the shadow out of the augmented 3D model and superimposes the representation of the shadow on the image to be displayed during the laparoscopic surgery.
    Type: Grant
    Filed: March 24, 2017
    Date of Patent: April 16, 2019
    Assignee: Siemens Healthcare GmbH
    Inventors: Peter Mountney, Thomas Engel, Philip Mewes, Thomas Pheiffer
  • Patent number: 10262390
    Abstract: A graphics processing unit (GPU) service platform includes a control server and a cluster of GPU servers, each having one or more GPU devices. The control server receives a service request from a client system for GPU processing services, allocates multiple GPU server nodes within the cluster to handle the GPU processing tasks specified by the service request by logically binding the allocated GPU server nodes and designating one of them as a master server, and sends connection information to the client system to enable the client system to connect to the master server. The master GPU server node receives a block of GPU program code transmitted from the client system and associated with the GPU processing tasks specified by the service request, processes the block of GPU program code using the GPU devices of the logically bound GPU servers, and returns the processing results to the client system.
    Type: Grant
    Filed: April 14, 2017
    Date of Patent: April 16, 2019
    Assignee: EMC IP Holding Company LLC
    Inventors: Yifan Sun, Layne Peng, Robert A. Lincourt, Jr., John Cardente, Junping Zhao
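The control server's allocate-bind-designate step can be sketched as below. A minimal sketch, assuming the cluster is a map from node name to GPU count and that the first bound node becomes the master; the greedy selection order and all names are illustrative.

```python
def allocate(cluster, requested_gpus):
    """Logically bind enough GPU server nodes to cover the request and
    designate the first bound node as the master server."""
    bound, total = [], 0
    for node, gpu_count in sorted(cluster.items()):
        if total >= requested_gpus:
            break
        bound.append(node)
        total += gpu_count
    if total < requested_gpus:
        raise RuntimeError("cluster cannot satisfy the request")
    # Connection info for the client: the master coordinates the
    # logically bound worker nodes.
    return {"master": bound[0], "workers": bound[1:], "bound_gpus": total}
```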
  • Patent number: 10264266
    Abstract: Display brightness adjustment apparatus and methods are described in which the average brightness of a display may be scaled up or down using a non-linear function. When applying the non-linear function to scale down brightness, the contrast of the output signal may not be reduced so that the dynamic range and highlights are preserved. The non-linear brightness adjustment may be performed automatically, for example in response to ambient light level as detected by sensor(s), but may also be applied in response to a user adjustment to a brightness control knob or slider. The non-linear brightness adjustment may be performed globally, or alternatively may be performed on local regions of an image or display panel. The non-linear function may be a piecewise linear function, or some other non-linear function.
    Type: Grant
    Filed: February 25, 2015
    Date of Patent: April 16, 2019
    Assignee: Apple Inc.
    Inventor: Hao Pan
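A piecewise linear scale-down of the kind this abstract mentions can be sketched in one function. A sketch under stated assumptions: signals normalized to [0, 1], a single knee at 0.8, and a highlight segment that maps the remaining input range onto the remaining output range so bright values keep their separation rather than clipping together; the knee position and curve shape are illustrative, not the patent's.

```python
def scale_brightness(pixel, gain, knee=0.8):
    """Piecewise-linear brightness reduction that preserves highlights.

    pixel and gain are in [0, 1]. Below the knee, the pixel is scaled
    by `gain` (reducing average brightness); above it, the segment
    (knee, 1] is mapped linearly onto (gain * knee, 1], so highlight
    contrast and dynamic range survive the scale-down."""
    if pixel <= knee:
        return gain * pixel
    return gain * knee + (pixel - knee) * (1 - gain * knee) / (1 - knee)
```

The two segments meet at the knee, so the transfer curve is continuous and monotone.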
  • Patent number: 10255725
    Abstract: An augmented reality (AR) system for providing an AR experience to a user of an AR venue includes a hardware processor, a memory, a light detector, a display, and an AR application stored in the memory. The hardware processor can execute the AR application to provide a virtual environment corresponding to the AR venue on the display, and to detect a light event resulting from interaction of an infrared (IR) light produced by an AR accessory utilized by the user with a surface within the AR venue. The hardware processor can further execute the AR application to identify a location of the light event within the AR venue, determine whether the location of the light event is within an activation zone of the AR venue, and render a visual effect corresponding to the light event on the display if the location is within the activation zone.
    Type: Grant
    Filed: November 16, 2016
    Date of Patent: April 9, 2019
    Assignee: Disney Enterprises, Inc.
    Inventors: Nathan Nocon, Katherine M. Bassett
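The locate-check-render step for a light event can be sketched as follows. A minimal sketch, assuming activation zones are axis-aligned boxes given as (low, high) corner pairs; zone shape and all names are illustrative assumptions.

```python
def handle_light_event(location, zones, render):
    """Render a visual effect only if the detected IR light event falls
    inside one of the venue's activation zones; return the zone hit."""
    for name, (lo, hi) in zones.items():
        # Axis-aligned containment test, coordinate by coordinate.
        if all(l <= x <= h for x, l, h in zip(location, lo, hi)):
            render(name, location)
            return name
    return None  # outside every activation zone: no effect rendered
```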
  • Patent number: 10248296
    Abstract: A method and an apparatus are provided for editing a display of a touch display apparatus. A first screen including at least one object is displayed. An object on the first screen is designated. The touch display apparatus is converted to an edit mode for editing the display, when the object is designated. When a movement of the touch display apparatus is detected, the first screen is converted into a second screen according to at least one of a degree and a direction of the movement. The designated object is displayed on the second screen.
    Type: Grant
    Filed: January 12, 2015
    Date of Patent: April 2, 2019
    Assignee: Samsung Electronics Co., Ltd
    Inventors: Sang-ok Cha, Sang-jun Han, Jung-hyun Shim
  • Patent number: 10242491
    Abstract: An image processing apparatus includes an image acquisition unit acquiring first and second captured images from first and second points of view, respectively, an initial value acquisition unit acquiring initial values of respective clip positions to clip display images from the first and second captured images, a derivation unit deriving an amount of a first exterior region of a first display image outside a first region of the first captured image when the first display image is clipped based on the initial values, and deriving an amount of a second exterior region of a second display image outside a second region of the second captured image when the second display image is clipped based on the initial values, and a determination unit determining the respective clip positions to clip the display images from the first and second captured images based on the first and second amounts.
    Type: Grant
    Filed: July 12, 2017
    Date of Patent: March 26, 2019
    Assignee: Canon Kabushiki Kaisha
    Inventors: Masanori Sato, Masashi Nakagawa
  • Patent number: 10242654
    Abstract: Systems and methods are disclosed herein for providing improved cache structures that are optimally sized to support a predetermined range of late-stage adjustments, and in which image data is intelligently read out of DRAM and cached in such a way as to eliminate re-fetching of input image data from DRAM and to minimize DRAM bandwidth and power.
    Type: Grant
    Filed: January 25, 2017
    Date of Patent: March 26, 2019
    Assignee: Microsoft Technology Licensing, LLC
    Inventors: Tolga Ozguner, Jeffrey Powers Bradford, Miguel Comparan, Gene Leung, Adam James Muff, Ryan Scott Haraden, Christopher Jon Johnson
  • Patent number: 10242470
    Abstract: An energy management system is provided. The system includes an information collector configured to collect first information over time, including information on the energy consumption of a subject of energy management or on factors involved in changes of that energy consumption; and a display controller configured to collate second information, which includes the collected first information or specified energy performance, for each second unit (divided by one or more boundaries of the subject's operating status as it changes over time) within each first unit (divided by one or more physical or logical boundaries of the subject of energy management), based on the collected first information, and to cause the collated result to be displayed on a display.
    Type: Grant
    Filed: September 10, 2015
    Date of Patent: March 26, 2019
    Assignee: Yokogawa Electric Corporation
    Inventor: Ken-ichi Inoue
  • Patent number: 10242471
    Abstract: Systems and methods of rendering webpage interaction statistics over graphical user interfaces are provided herein. A computing device can transmit a request for interaction statistics identifying an information resource. The computing device can receive the interaction data set for the identified information resource. The computing device can calculate a similarity measure among the client devices based on the interaction data set using a clustering algorithm. The computing device can identify a subset of client devices based on the similarity measure. The computing device can calculate the interaction sequence metrics for the client devices for each selected content element. The computing device can select content elements based on the calculated interaction sequence metrics. The computing device can generate a graph element including nodes representing each selected content element and paths representing the interaction sequence metrics. The computing device can display the graph element.
    Type: Grant
    Filed: September 26, 2017
    Date of Patent: March 26, 2019
    Assignee: Google LLC
    Inventors: Emre Demiralp, Tommaso Francesco Bersano-Begey
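The similarity-measure and subset-selection steps can be sketched simply. A sketch under stated assumptions: each client device is reduced to a nonzero vector of per-element interaction counts, similarity is cosine similarity against a seed device, and a fixed threshold stands in for the clustering algorithm the abstract names; all names and the threshold are illustrative.

```python
import math


def cosine(u, v):
    # Cosine similarity of two nonzero interaction-count vectors.
    dot = sum(a * b for a, b in zip(u, v))
    return dot / (math.hypot(*u) * math.hypot(*v))


def similar_devices(interactions, seed, threshold=0.9):
    """Identify the subset of client devices whose interaction vectors
    are similar (cosine >= threshold) to the seed device's vector."""
    ref = interactions[seed]
    return [d for d, vec in interactions.items()
            if d != seed and cosine(vec, ref) >= threshold]
```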
  • Patent number: 10242499
    Abstract: A computer program product for overlaying geographic map data onto a live feed is provided. The computer program product includes a computer readable storage medium having program instructions embodied therewith. The program instructions are readable and executable by a processing circuit to cause the processing circuit to approximate fluid depth on fixed points in a live feed to calculate discrete depth readings, combine the discrete depth readings with a contour map associated with the live feed to generate a fluid depth map and combine the fluid depth map with the live feed to produce an augmented reality image including the fluid depth map superimposed onto the live feed.
    Type: Grant
    Filed: February 16, 2016
    Date of Patent: March 26, 2019
    Inventors: Donald L. Bryson, Eric V. Kline, Sarbajit K. Rakshit