Patents Examined by Weiming He
-
Patent number: 10692465
Abstract: Input video data of a video dynamic range and its input dynamic metadata are received. Input graphics data of a graphics dynamic range is received. Display identification data is received from a target display over a video interface. Interpolated dynamic metadata is generated based at least in part on (a) the input dynamic metadata, (b) the display identification data, and (c) a numeric interpolation factor, in order to operate a priority-transition mode for transitioning between a video-priority mode and a graphics-priority mode. The input video data is blended with the input graphics data, based at least in part on the interpolated dynamic metadata, into graphics blended video data. The graphics blended video data and the interpolated dynamic metadata are sent to the target display for rendering graphics blended images represented in the graphics blended video data.
Type: Grant
Filed: May 25, 2017
Date of Patent: June 23, 2020
Assignee: Dolby Laboratories Licensing Corporation
Inventor: Robin Atkins
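A minimal sketch of the interpolation step this abstract describes, where a numeric factor blends video-priority and graphics-priority metadata. The field names and luminance values are illustrative assumptions, not taken from the patent:

```python
def interpolate_metadata(video_md, graphics_md, t):
    """Blend two dynamic-metadata dicts with a numeric factor t in [0, 1].

    t = 0.0 corresponds to video-priority mode, t = 1.0 to
    graphics-priority mode; intermediate values realize the
    priority-transition mode between them.
    """
    return {
        key: (1.0 - t) * video_md[key] + t * graphics_md[key]
        for key in video_md
    }

# Hypothetical metadata fields: min/max luminance targets in nits.
video_md = {"min_luminance": 0.005, "max_luminance": 4000.0}
graphics_md = {"min_luminance": 0.05, "max_luminance": 600.0}

# Halfway through a transition between the two modes.
halfway = interpolate_metadata(video_md, graphics_md, 0.5)
```

In a real pipeline the factor would ramp over several frames so the target display's tone mapping changes smoothly rather than snapping between modes.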
-
Patent number: 10679314
Abstract: Examples described herein generally relate to rendering graphics using a graphics processing unit (GPU) in a computing device. A synchronization object associated with a wait event can be created, wherein the wait event indicates a time offset before a timed event associated with a display device of the computing device. A plurality of rendering instructions for the GPU can be stored in a buffer, wherein the plurality of rendering instructions can be received from an application before a release of the synchronization object. Release of the synchronization object can be detected based on occurrence of the wait event, and the plurality of rendering instructions can be sent from the buffer to at least a portion of the GPU based on detecting the release of the synchronization object.
Type: Grant
Filed: June 28, 2017
Date of Patent: June 9, 2020
Assignee: Microsoft Technology Licensing, LLC
Inventors: Steve M. Pronovost, Benjamin M. Cumings
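The buffering-until-release behavior in this abstract can be sketched as a toy synchronization object. Everything below (class name, the `submit` callback, the instruction strings) is invented for illustration; a real implementation would live in the display driver and submit to GPU hardware queues:

```python
import threading


class SyncObject:
    """Toy stand-in for the synchronization object in the abstract.

    Rendering instructions received before release are buffered;
    releasing the object (e.g., when a wait event fires at a fixed
    offset before vblank) flushes the buffer to a submit callback.
    """

    def __init__(self, submit):
        self._released = threading.Event()
        self._buffer = []
        self._submit = submit

    def queue(self, instruction):
        if self._released.is_set():
            self._submit([instruction])       # after release: submit directly
        else:
            self._buffer.append(instruction)  # before release: hold in buffer

    def release(self):
        self._released.set()
        self._submit(self._buffer)            # flush buffered work to the GPU
        self._buffer = []


submitted = []                    # stands in for the GPU's command queue
sync = SyncObject(submitted.extend)
sync.queue("draw_triangle")       # buffered: release has not occurred yet
sync.queue("present")
sync.release()                    # wait event fires: buffered work is sent
```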
-
Patent number: 10665013
Abstract: Provided is a single-image-based, fully automatic three-dimensional (3D) hair modeling method. The method mainly includes four steps: generation of hair image training data, hair segmentation and growth direction estimation based on a hierarchical deep neural network, generation and organization of 3D hair exemplars, and data-driven 3D hair modeling. The method can automatically and robustly generate a complete, high-quality 3D hair model whose quality reaches the level of the most advanced user-interaction-based techniques currently available. The method can be used in a series of applications, such as hair style editing in portrait images, browsing of hair style spaces, and searching for Internet images of similar hair styles.
Type: Grant
Filed: October 17, 2018
Date of Patent: May 26, 2020
Assignee: ZHEJIANG UNIVERSITY
Inventors: Kun Zhou, Menglei Chai
-
Patent number: 10657690
Abstract: Systems, devices and methods for intelligent augmented reality (IAR) platform-based communications are disclosed. During a communication, audio, video and/or sensor data are captured in real time; scene analysis and data analytics are also performed in real time to extract information from the raw data. The extracted information can be further analyzed to provide knowledge. Real-time AR data can be generated by integrating the raw data, AR input data, information input, and knowledge input, based on one or more criteria comprising a user preference, a system setting, an integration parameter, a characteristic of an object or a scene of the raw data, an interactive user control, or a combination thereof. In some embodiments, information and knowledge can be obtained by incorporating Big Data in the analysis.
Type: Grant
Filed: August 11, 2017
Date of Patent: May 19, 2020
Inventor: Eliza Y. Du
-
Patent number: 10657928
Abstract: There is provided a display device including a display unit, and a projection unit provided in a direction intersecting with a direction perpendicular to a display surface of the display unit, with a rear side of the display surface as a projection direction.
Type: Grant
Filed: November 7, 2014
Date of Patent: May 19, 2020
Assignee: SONY CORPORATION
Inventors: Heesoon Kim, Tatsushi Nashida
-
Patent number: 10636209
Abstract: Two-dimensional aerial images and other geo-spatial information are processed to produce land classification data, vector data and attribute data for buildings found within the images. This data is stored upon a server computer within shape files, and also stored are source code scripts describing how to reconstruct a type of building along with compiled versions of the scripts. A software game or simulator executes upon a client computer in which an avatar moves within a landscape. A classifier classifies a type of building in the shape file to execute the appropriate script. Depending upon its location, a scene composer downloads a shape file and a compiled script is executed in order to reconstruct any number of buildings in the vicinity of the avatar. The script produces a three-dimensional texture mesh which is then rendered upon a screen of the client computer to display a two-dimensional representation of the building.
Type: Grant
Filed: September 13, 2019
Date of Patent: April 28, 2020
Assignee: BONGFISH GMBH
Inventors: Wolfgang Thaller, Thomas Richter-Trummer, Michael Putz, Richard Maierhofer
-
Patent number: 10620700
Abstract: Systems and methods are provided for discerning the intent of a device wearer primarily based on movements of the eyes. The system can be included within unobtrusive headwear that performs eye tracking and controls screen display. The system can also utilize remote eye tracking camera(s), remote displays and/or other ancillary inputs. Screen layout is optimized to facilitate the formation and reliable detection of rapid eye signals. The detection of eye signals is based on tracking physiological movements of the eye that are under voluntary control by the device wearer. The detection of eye signals results in actions that are compatible with wearable computing and a wide range of display devices.
Type: Grant
Filed: May 9, 2015
Date of Patent: April 14, 2020
Assignee: GOOGLE LLC
Inventors: Nelson George Publicover, Lewis James Marggraff, Eliot Francis Drake, Spencer James Connaughton
-
Patent number: 10614734
Abstract: A display apparatus including at least one image renderer; light sources; controllable scanning mirrors; at least two actuators associated with the controllable scanning mirrors; means for detecting gaze direction of user; and a processor communicably coupled to the aforementioned components. The processor is configured to: (a) obtain an input image and determine region of visual accuracy thereof; (b) process the input image to generate a context image and a focus image; (c) determine a focus area within a projection surface over which the focus image is to be drawn; (d) render the context image; (e) draw the focus image; and (f) control the actuators to align the controllable scanning mirrors. The processor is configured to perform (d), (e) and (f) substantially simultaneously, and optically combine a projection of the drawn focus image with a projection of the rendered context image to create a visual scene.
Type: Grant
Filed: March 6, 2018
Date of Patent: April 7, 2020
Assignee: Varjo Technologies Oy
Inventors: Ari Antti Erik Peuhkurinen, Oiva Arvo Oskari Sahlsten, Klaus Mikael Melakari
-
Patent number: 10607413
Abstract: The technology disclosed relates to a method of realistic rendering of a real object as a virtual object in a virtual space using an offset in the position of the hand in a three-dimensional (3D) sensory space. An offset between expected positions of the eye(s) of a wearer of a head mounted device and a sensor attached to the head mounted device for sensing a position of at least one hand in a three-dimensional (3D) sensory space is determined. A position of the hand in the three-dimensional (3D) sensory space can be sensed using a sensor. The sensed position of the hand can be transformed by the offset into a re-rendered position of the hand as would appear to the wearer of the head mounted device if the wearer were looking at the actual hand. The re-rendered hand can be depicted to the wearer of the head mounted device.
Type: Grant
Filed: September 2, 2016
Date of Patent: March 31, 2020
Assignee: Ultrahaptics IP Two Limited
Inventors: Alex Marcolina, David Holz
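The core transform this abstract describes is a translation of the sensed hand position by the sensor-to-eye offset. A minimal sketch, where the coordinate frame and the particular offset values are assumptions for illustration:

```python
def re_render_position(sensed, offset):
    """Translate a sensed 3D hand position by the sensor-to-eye offset
    so the rendered hand appears where the wearer would see the real
    hand. Coordinates are (x, y, z) in metres, eye-relative."""
    return tuple(s + o for s, o in zip(sensed, offset))


# Hypothetical offset: the sensor sits 3 cm above and 5 cm in front
# of the wearer's eyes, so sensed positions shift down and back.
offset = (0.0, -0.03, -0.05)
sensed_hand = (0.10, 0.25, -0.40)
rendered_hand = re_render_position(sensed_hand, offset)
```

A full system would apply a rigid transform (rotation plus translation) between the sensor frame and the eye frame; the pure translation above is the simplest case of that.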
-
Patent number: 10607379
Abstract: A graph drawing system includes an electronic device and a calculation server. A first processor of the electronic device transmits calculation inquiry data including information on a function entered by a user operation to the calculation server. A second processor of the calculation server calculates drawing information including coordinates of a plurality of drawing points represented by the function and plotted in a graph drawing area of the display, and continuity/discontinuity information indicating whether or not adjacent points of the drawing points should be connected to each other, based on the calculation inquiry data, and transmits the drawing information to the electronic device. The first processor causes the display to display a graph corresponding to the function, based on the drawing information.
Type: Grant
Filed: September 7, 2018
Date of Patent: March 31, 2020
Assignee: CASIO COMPUTER CO., LTD.
Inventor: Hirokazu Tanaka
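The continuity/discontinuity information in this abstract can be sketched as a per-segment flag alongside the drawing points. The jump-threshold heuristic below is an illustrative assumption; the patent does not specify how discontinuities are detected:

```python
def drawing_info(f, xs, jump_threshold=5.0):
    """Compute plotted points plus continuity/discontinuity flags.

    connect[i] is True when points i and i+1 should be joined by a
    line segment; a large vertical jump between samples is treated
    as a discontinuity and left unconnected.
    """
    points = [(x, f(x)) for x in xs]
    connect = [
        abs(points[i + 1][1] - points[i][1]) < jump_threshold
        for i in range(len(points) - 1)
    ]
    return points, connect


# y = 1/(x - 1) has a pole at x = 1; sample on either side of it.
xs = [0.0, 0.25, 0.5, 0.75, 1.25, 1.5, 1.75, 2.0]
points, connect = drawing_info(lambda x: 1.0 / (x - 1.0), xs)
```

The display side then draws a segment for each True flag and skips the False one, so the two branches of the hyperbola are not joined by a spurious vertical line.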
-
Patent number: 10593108
Abstract: Systems and methods are disclosed for more efficiently and quickly utilizing digital aerial images to generate models of a site. In particular, in one or more embodiments, the disclosed systems and methods capture a plurality of digital aerial images of a site. The disclosed systems and methods can cluster the plurality of digital aerial images based on a variety of factors, such as visual contents, capture position, or capture time of the digital aerial images. They can then analyze the clusters independently (i.e., in parallel) to generate cluster models. Further, the disclosed systems and methods can merge the cluster models to generate a model of the site.
Type: Grant
Filed: October 31, 2017
Date of Patent: March 17, 2020
Assignee: Skycatch, Inc.
Inventors: Manlio Francisco Barajas Hernandez, David Chen, Pablo Arturo Martinez Gonzalez, Christian Sanz
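One of the clustering factors named above is capture position. A crude sketch of position-based clustering using a grid bucket (the grid approach and cell size are assumptions for illustration, not the patent's method):

```python
def cluster_by_position(captures, cell_size=100.0):
    """Group aerial images into clusters by capture position using a
    simple grid over (x, y) coordinates in metres.

    Each resulting cluster could then be modelled independently (in
    parallel) and the per-cluster models merged into a site model.
    """
    clusters = {}
    for name, (x, y) in captures.items():
        cell = (int(x // cell_size), int(y // cell_size))
        clusters.setdefault(cell, []).append(name)
    return clusters


captures = {
    "img_a": (12.0, 40.0),   # same 100 m cell as img_b
    "img_b": (80.0, 55.0),
    "img_c": (230.0, 10.0),  # far enough away to land in another cell
}
clusters = cluster_by_position(captures)
```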
-
Patent number: 10565757
Abstract: A computing system transforms an input image into a stylized output image by applying first and second style features from a style exemplar. The input image is provided to a multimodal style-transfer network having a low-resolution-based stylization subnet and a high-resolution stylization subnet. The low-resolution-based stylization subnet is trained with low-resolution style exemplars to apply the first style feature. The high-resolution stylization subnet is trained with high-resolution style exemplars to apply the second style feature. The low-resolution-based stylization subnet generates an intermediate image by applying the first style feature from a low-resolution version of the style exemplar to first image data obtained from the input image. Second image data from the intermediate image is provided to the high-resolution stylization subnet.
Type: Grant
Filed: June 9, 2017
Date of Patent: February 18, 2020
Assignee: Adobe Inc.
Inventors: Geoffrey Oxholm, Xin Wang
-
Patent number: 10535427
Abstract: Devices, systems, and methods pertain to planning and providing guidance for a cranial based implantation of an implantable medical device or devices. The collection of data, such as data pertaining to the skull of the patient, the scalp of the patient, the vascular structure or neurological structures in the head of the patient, is performed. The data may be in the form of images, such as images generated by X-ray, magnetic resonance imaging, CT-scan and fluoroscopy. A surgeon can use the collected data to determine, for example, whether the patient is a candidate for a cranial implantation, whether the patient's skull and scalp can support the implantation, what configuration of device should be implanted, where the device should be implanted, and how the surgical incisions should be made.
Type: Grant
Filed: January 10, 2018
Date of Patent: January 14, 2020
Assignee: Medtronic, Inc.
Inventor: Steven M. Goetz
-
Patent number: 10528792
Abstract: A display apparatus performs predetermined image processing on at least one image data item among the plurality of image data items, does not perform the predetermined image processing on at least one other image data item among the plurality of image data items, and causes a display unit to display the plurality of image data items, if (A) it is determined, on the basis of meta-information associated with the plurality of image data items, that the plurality of image data items are image data items that have been output from a single image output apparatus, or if (B) it is determined that a display mode is set in which a plurality of image data items are to be displayed, the plurality of image data items including a first image data item and a second image data item generated by duplicating the first image data item.
Type: Grant
Filed: June 7, 2017
Date of Patent: January 7, 2020
Assignee: Canon Kabushiki Kaisha
Inventors: Sosuke Kagaya, Hirofumi Urabe
-
Patent number: 10520924
Abstract: Systems and methods are described for controlling an operation of machinery using augmented reality or virtual reality. A controller is configured to detect an object visible via a display based at least in part on sensor data received from an object sensor. The controller then generates and outputs on the display a graphical element corresponding to the detected object. The controller then monitors for interactions between an operator's hand and the graphical element and transmits control signals to a machinery actuator to control an operation of a tool of the machinery based at least in part on one or more detected interactions between the operator's hand and the graphical element.
Type: Grant
Filed: October 31, 2017
Date of Patent: December 31, 2019
Assignee: DEERE & COMPANY
Inventor: Mark J. Cherney
-
Patent number: 10491819
Abstract: A portable system providing augmented vision of surroundings. In one embodiment the system includes a helmet, a plurality of camera units and circuitry to generate a composite field of view from channels of video data. The helmet permits a user to receive a first field of view in the surroundings based on optical information received directly from the surroundings with the user's natural vision. The camera units are mounted about the helmet to generate the multiple channels of video data. Each camera channel captures a different field of view of a scene in a region surrounding the helmet.
Type: Grant
Filed: July 19, 2017
Date of Patent: November 26, 2019
Assignee: FotoNation Limited
Inventor: Peter Corcoran
-
Patent number: 10482562
Abstract: An apparatus to facilitate partitioning of a graphics device is disclosed. The apparatus includes a plurality of engines and logic to partition the plurality of engines to facilitate independent access to each engine within the plurality of engines.
Type: Grant
Filed: April 21, 2017
Date of Patent: November 19, 2019
Assignee: INTEL CORPORATION
Inventors: Abhishek R. Appu, Balaji Vembu, Altug Koker, Bryan R. White, David J. Cowperthwaite, Joydeep Ray, Murali Ramadoss
-
Patent number: 10482843
Abstract: In an embodiment, a user equipment (UE) coupled to a display screen enters into a reduced blue light (RBL) mode. The UE determines, while operating in accordance with the RBL mode, a degree of blue light reduction in at least a portion of a display frame to be output on the display screen using at least one RBL rule from a set of RBL rules that is based upon one or more of (i) application-specific information of an application that is contributing image data to the portion of the display frame, and (ii) content-specific information that characterizes the image data in the portion of the display frame. The UE selectively reduces the blue light in the at least a portion of the display frame based on the determining. The UE sends the display frame with the selectively reduced blue light portion to the display screen for output.
Type: Grant
Filed: July 28, 2017
Date of Patent: November 19, 2019
Assignee: QUALCOMM Incorporated
Inventors: Daniel James Guest, Min Dai, Robyn Teresa Oliver
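The rule lookup this abstract describes can be sketched as a small decision function over application- and content-specific information. The specific rules, categories, and reduction degrees below are invented for illustration; the patent only says the rule set depends on such information:

```python
def blue_light_reduction(app_info, content_info):
    """Pick a degree of blue-light reduction in [0.0, 1.0] for one
    region of a display frame, from per-app and per-content rules.

    app_info and content_info are dicts of hypothetical metadata
    about the app contributing the region and the pixels in it.
    """
    if app_info.get("category") == "photo_editor":
        return 0.0   # colour-critical apps: leave blue untouched
    if content_info.get("kind") == "text":
        return 0.8   # mostly-text regions tolerate strong reduction
    if content_info.get("kind") == "video":
        return 0.2   # keep video colours mostly intact
    return 0.5       # default for everything else


# A text region drawn by a hypothetical e-reader app.
reduction = blue_light_reduction({"category": "e_reader"}, {"kind": "text"})
```

Because the degree is computed per region, a single frame can mix a strongly filtered text pane with a lightly filtered video thumbnail, which is the selective behavior the abstract highlights.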
-
Patent number: 10474336
Abstract: Techniques disclosed herein provide a user experience with virtual reality content and user-selected, real world objects. An exemplary technique involves receiving input selecting a subset of the real world objects in a selection view. The subset of the real world objects is selected for inclusion in the user experience. The technique further involves identifying virtual reality content for inclusion in the user experience. The technique provides the user experience by combining a live view of the user's real world environment with the virtual reality content. The real world objects in the live view that are in the subset selected by the user are visible in the user experience. The real world objects in the live view that are not in the subset selected by the user are replaced with virtual reality content.
Type: Grant
Filed: December 20, 2016
Date of Patent: November 12, 2019
Assignee: Adobe Inc.
Inventor: Kevin Smith
-
Patent number: 10460424
Abstract: A projector includes a light source, a light modulator that has an image drawing region where an image is drawable and modulates light emitted from the light source by using an image drawn in the image drawing region, and a projection system that includes a projection lens and projects image light modulated by the light modulator. The projector further includes a lens shift mechanism that moves the projection lens and a control section that controls the image drawing performed by the light modulator, and the control section provides, based on the position of the projection lens, the image drawing region of the light modulator with a suppression region where the amount of the image light is suppressed.
Type: Grant
Filed: July 28, 2017
Date of Patent: October 29, 2019
Assignee: SEIKO EPSON CORPORATION
Inventors: Shuji Narimatsu, Takateru Mori, Akira Nemura, Hiroyuki Furui