Patents Examined by Charles Tseng
-
Patent number: 10748349
Abstract: An image conversion unit according to an embodiment includes a gradation unit, a black line extraction unit, and a white line extraction unit. The gradation unit classifies a pixel of a pre-conversion image as one of four levels based on a brightness of the pixel, and expresses a corresponding pixel of a converted image in a color corresponding to the level into which the pixel has been classified. The black line extraction unit extracts pixels of an outline of an object in the pre-conversion image, and expresses corresponding pixels of the converted image in black. The white line extraction unit extracts pixels adjacent to an outline of an object in the pre-conversion image, and expresses corresponding pixels of the converted image in white.
Type: Grant
Filed: December 21, 2018
Date of Patent: August 18, 2020
Assignee: NINTENDO CO., LTD.
Inventors: Satoshi Miyama, Yosuke Mori, Norihiro Aoyagi, Yuta Yamashita
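The four-level gradation plus black-outline/white-line conversion described in this abstract can be sketched as follows. The brightness thresholds, palette colors, and the gradient-based edge detector are illustrative assumptions, not values from the patent:

```python
import numpy as np

def convert(image, thresholds=(64, 128, 192),
            palette=((40, 40, 40), (100, 100, 100),
                     (170, 170, 170), (235, 235, 235))):
    """Classify each pixel into one of four brightness levels, then
    draw object outlines in black and the pixels adjacent to the
    outline in white. Thresholds/palette are illustrative only."""
    gray = image.mean(axis=2)               # brightness per pixel
    levels = np.digitize(gray, thresholds)  # 0..3
    out = np.asarray(palette, dtype=np.uint8)[levels]

    # crude outline detection: strong brightness gradient
    gy, gx = np.gradient(gray)
    edge = np.hypot(gx, gy) > 30.0

    # pixels adjacent to an edge (dilate the edge mask by one pixel)
    pad = np.pad(edge, 1)
    adjacent = (pad[:-2, 1:-1] | pad[2:, 1:-1] |
                pad[1:-1, :-2] | pad[1:-1, 2:]) & ~edge

    out[adjacent] = (255, 255, 255)         # white line
    out[edge] = (0, 0, 0)                   # black outline
    return out
```

This is only a sketch of the general posterize-and-outline idea; the patent's extraction units may define outlines and adjacency quite differently.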
-
Patent number: 10740954
Abstract: In various examples, the actual spatial properties of a virtual environment are used to produce, for a pixel, an anisotropic filter kernel for a filter having dimensions and weights that accurately reflect the spatial characteristics of the virtual environment. Geometry of the virtual environment may be computed based at least in part on a projection of a light source onto a surface through an occluder, in order to determine a footprint that reflects a contribution of the light source to lighting conditions of the pixel associated with a point on the surface. The footprint may define a size, orientation, and/or shape of the anisotropic filter kernel and corresponding filter weights. The anisotropic filter kernel may be applied to the pixel to produce a graphically-rendered image of the virtual environment.
Type: Grant
Filed: March 15, 2019
Date of Patent: August 11, 2020
Assignee: NVIDIA Corporation
Inventor: Shiqiu Liu
-
Patent number: 10733808
Abstract: A system for properly displaying chroma key content is presented. The system obtains a digital representation of a 3D environment, for example a digital photo, and gathers data from that digital representation. The system renders the digital representation in an environmental model and displays that digital representation upon an output device. Depending upon the context, content anchors of the environmental model are selected which will be altered by suitable chroma key content. The chroma key content takes into consideration the position and orientation of the chroma key content relative to the content anchor, and relative to the point of view from which the environmental model is displayed, in order to display the chroma key content accurately and realistically.
Type: Grant
Filed: February 27, 2019
Date of Patent: August 4, 2020
Assignee: NANTMOBILE, LLC
Inventors: Evgeny Dzhurinskiy, Ludmila Bezyaeva
-
Patent number: 10719569
Abstract: An information processing apparatus includes a storing device and a processor. The storing device stores associated segment information indicative of two or more display unit segments having common data forming display contents among a plurality of display unit segments included in a screen of the client terminal. The processor instructs, when an update occurs on the data forming the display contents on a first display unit segment among the plurality of display unit segments, the client terminal to update data forming the display contents to be displayed on a second display unit segment associated with the first display unit segment among the plurality of display unit segments by referring to the associated segment information stored in the storing device.
Type: Grant
Filed: November 13, 2018
Date of Patent: July 21, 2020
Assignee: FUJITSU LIMITED
Inventors: Kota Iwadate, Kazuyoshi Watanabe, Toshiharu Makida
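The idea of propagating an update from one display segment to the segments that share its backing data can be illustrated with a minimal association table. The class and method names here are hypothetical, chosen only to mirror the abstract's description:

```python
class SegmentUpdater:
    """Sketch: when a segment's backing data changes, look up the
    segments associated with it (i.e. sharing common data) and
    report which ones must also be refreshed."""

    def __init__(self):
        # segment id -> set of associated segment ids
        self.associated = {}

    def associate(self, a, b):
        """Record that segments a and b share common display data."""
        self.associated.setdefault(a, set()).add(b)
        self.associated.setdefault(b, set()).add(a)

    def on_update(self, segment):
        """Return the segments that must be refreshed when
        `segment`'s data changes."""
        return sorted(self.associated.get(segment, set()))
```

In the patent, this lookup happens server-side and the result is an instruction to the client terminal; the table above only illustrates the association-based fan-out.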
-
Patent number: 10713847
Abstract: The present group of inventions relates to methods and systems intended for interacting with virtual objects, involving determining a control unit to be used for interacting with virtual objects, determining characteristic graphics primitives of a virtual object, determining the spatial position of the control unit, correlating the spatial position of the control unit to the graphics primitives of the virtual object, and performing the desired actions with regard to the virtual object. In accordance with the invention, images are used from a user's client device which has a video camera and a display, a control unit image library is created on the basis of the received images, and the obtained image library is used for determining the graphics primitives of the control unit. Then, the spatial position of the control unit is determined by calculating the motion in space of the control unit graphics primitives.
Type: Grant
Filed: June 7, 2016
Date of Patent: July 14, 2020
Assignee: DEVAR ENTERTAINMENT LIMITED
Inventors: Vitaly Vitalyevich Averyanov, Andrey Valeryevich Komissarov
-
Patent number: 10699666
Abstract: Disclosed are an image display device and an image processing method thereof. A signal processing method according to the present invention is for an image display device configured to receive an image having a varied vertical synchronization signal, and includes detecting an input synchronization signal using an input image clock, extracting an input vertical synchronization signal from the input synchronization signal, delaying the input vertical synchronization signal by a reference value of an output clock, continuously tracking a falling region of the delayed input vertical synchronization signal, and finally generating an output synchronization signal in which a vertical front porch is varied.
Type: Grant
Filed: August 23, 2018
Date of Patent: June 30, 2020
Assignee: LG ELECTRONICS INC.
Inventors: Kyounghoon Jang, Kwangyeon Rhee, Byungtae Choi
-
Patent number: 10699486
Abstract: A display system includes a projector that projects a virtual image onto a target space to allow a target person to visibly recognize the virtual image and a controller that controls display of the virtual image. When the projector projects a virtual image corresponding to a caution object, the controller selects at least one reference point from one or more candidate points existing around the caution object and associates the virtual image with the at least one reference point.
Type: Grant
Filed: June 25, 2018
Date of Patent: June 30, 2020
Assignee: PANASONIC INTELLECTUAL PROPERTY MANAGEMENT CO., LTD.
Inventors: Akira Tanaka, Tadashi Shibata, Nobuyuki Nakano, Masanaga Tsuji, Shohei Hayashi
-
Patent number: 10684494
Abstract: The present invention is directed to a see-through near eye optical module that in most cases is fabricated as a standalone unit. The see-through near eye optical module is in certain embodiments then placed in optical communication and alignment with an eyewear lens having appropriate optical power such that when a wearer thereof looks through the see-through near eye optical module he or she can see a real world image and virtual image clearly. In other embodiments the appropriate optical power is provided in the rear section of the see-through near eye optical module. Thus, the combination of both the see-through near eye optical module and the appropriate optical power provides the wearer with a clear augmented reality or mixed reality experience. The placement can be by way of positioning within an open notch, hole, groove, recess, or other section of an eyewear lens.
Type: Grant
Filed: October 11, 2019
Date of Patent: June 16, 2020
Assignee: NewSight Reality, Inc.
Inventors: Ronald Blum, Philip Nathan Garfinkle
-
Patent number: 10685483
Abstract: Viewport transformation modules for use in a three-dimensional rendering system wherein vertices are received from an application in a strip. The viewport transformation modules include a fetch module configured to read from a vertex buffer: untransformed coordinate data for a vertex in a strip; information identifying a viewport associated with the vertex; and information identifying a viewport associated with one or more other vertices in the strip. The one or more other vertices in the strip are selected based on a provoking vertex of a primitive to be formed by the vertices in the strip and a number of vertices in the primitive. The viewport transformation modules also include a processing module that performs a viewport transformation on the untransformed coordinate data based on each of the identified viewports to generate transformed coordinate data for each identified viewport; and a write module that writes the transformed coordinate data for each identified viewport to the vertex buffer.
Type: Grant
Filed: July 2, 2018
Date of Patent: June 16, 2020
Assignee: Imagination Technologies Limited
Inventor: Jairaj Dave
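The viewport transformation itself, mapping a vertex's normalized device coordinates into a specific viewport's screen rectangle, is the standard mapping sketched below; the patent's contribution lies in the per-vertex viewport selection and buffering around it. Function and parameter names here are mine, not from the patent:

```python
def viewport_transform(x_ndc, y_ndc, vp):
    """Map normalized device coordinates in [-1, 1] into a viewport
    given as (x, y, width, height). Standard graphics-pipeline
    mapping; depth and hardware details are omitted."""
    x, y, w, h = vp
    sx = x + (x_ndc + 1.0) * 0.5 * w
    sy = y + (y_ndc + 1.0) * 0.5 * h
    return sx, sy
```

For example, NDC (0, 0) in an 800x600 viewport at the origin lands at the viewport center.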
-
Patent number: 10685470
Abstract: The present disclosure relates to systems, methods, and non-transitory computer readable media for generating and providing composition effect tutorials for creating and editing digital content based on a metadata composite structure. For example, the disclosed systems can generate and/or access a metadata composite structure that includes nodes corresponding to composition effects applied to a digital content item, where a given node can include location information indicating where a composition effect is applied relative to a digital content item. The disclosed systems can further generate a tutorial to guide a user to implement a selected composition effect by identifying composition effects of nodes that correspond to a location selected within a composition interface and presenting instructions for a particular composition effect.
Type: Grant
Filed: September 12, 2018
Date of Patent: June 16, 2020
Assignee: ADOBE INC.
Inventors: Amol Jindal, Vivek Mishra, Neha Sharan, Anmol Dhawan
-
Patent number: 10679424
Abstract: Described herein are devices, systems, media, and methods using an augmented reality smartphone application to capture measurements of an interior or exterior space in real-time and generate a floorplan of the space and/or a 3D model of the space from the captured measurements in less than 5 minutes.
Type: Grant
Filed: April 17, 2019
Date of Patent: June 9, 2020
Assignee: SMART PICTURE TECHNOLOGIES, INC.
Inventors: Dejan Jovanovic, Andrew Kevin Greff
-
Patent number: 10672169
Abstract: Methods and systems for generating an interactive rotatable 360-degree presentation of an object are disclosed. The methods and systems obtain data describing the object, where the data includes information about a number of images of the object, as well as additional information about the object. The images are automatically obtained and rearranged into at least one sequence of images substantially evenly distributed around 360 degrees. It is determined whether to add hotspot(s) to image(s), and if hotspot(s) are to be added, the hotspot(s) are automatically added to the image(s). The ordered images of the sequence(s) are then merged into an interactive rotatable 360-degree presentation of the object.
Type: Grant
Filed: November 14, 2019
Date of Patent: June 2, 2020
Inventors: Steven Saporta, Collin Stocks, Devin Daly, Michael Quigley
-
Patent number: 10672106
Abstract: Methods and systems for generating an interactive rotatable 360-degree presentation of an object are disclosed. The methods and systems obtain data describing the object, where the data includes information about a number of images of the object, as well as additional information about the object. The images are automatically obtained and rearranged into at least one sequence of images substantially evenly distributed around 360 degrees. It is determined whether to add hotspot(s) to image(s), and if hotspot(s) are to be added, the hotspot(s) are automatically added to the image(s). The ordered images of the sequence(s) are then merged into an interactive rotatable 360-degree presentation of the object.
Type: Grant
Filed: May 16, 2019
Date of Patent: June 2, 2020
Inventors: Steven Saporta, Collin Stocks, Devin Daly, Michael Quigley
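One plausible reading of the "substantially evenly distributed around 360 degrees" step in the two abstracts above is to re-order captured images so their capture angles best match an even angular spacing. The (image_id, capture_angle) input format below is an assumption for illustration, not the patents' data model:

```python
def arrange_for_360(images_with_angles):
    """Order (image_id, capture_angle) pairs into a sequence whose
    angles best match an even distribution around 360 degrees."""
    items = list(images_with_angles)
    n = len(items)
    # ideal evenly spaced target angles: 0, 360/n, 2*360/n, ...
    ideal = [i * 360.0 / n for i in range(n)]
    sequence = []
    for target in ideal:
        # pick the remaining image whose angle is circularly nearest
        best = min(items, key=lambda p: min((p[1] - target) % 360,
                                            (target - p[1]) % 360))
        items.remove(best)
        sequence.append(best)
    return sequence
```

A greedy nearest-angle match like this is only one way to approximate even distribution; the patents may select or drop images differently.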
-
Patent number: 10664518
Abstract: Apparatus, methods and systems of providing AR content are disclosed. Embodiments of the inventive subject matter can obtain an initial map of an area, derive views of interest, obtain AR content objects associated with the views of interest, establish experience clusters and generate a tile map tessellated based on the experience clusters. A user device could be configured to obtain and instantiate at least some of the AR content objects based on at least one of a location and a recognition.
Type: Grant
Filed: October 23, 2018
Date of Patent: May 26, 2020
Assignee: Nant Holdings IP, LLC
Inventors: David McKinnon, Kamil Wnuk, Jeremi Sudol, Matheen Siddiqui, John Wiacek, Bing Song, Nicholas J. Witchey
-
Patent number: 10666954
Abstract: A method and system for improving audio and video multimedia modification and presentation is provided. The method includes receiving an audio/video stream and analyzing objects of the audio/video stream for generating predictions with respect to the objects. Component analysis code is executed with respect to the audio/video stream and an object is removed from the audio/video stream, resulting in a modified audio/video stream being generated, thereby reducing hardware storage and transfer size requirements of the audio/video stream. The modified audio/video stream is presented to a user via a graphical user interface.
Type: Grant
Filed: June 19, 2018
Date of Patent: May 26, 2020
Assignee: International Business Machines Corporation
Inventors: David Bastian, Aaron K. Baughman, Nicholas A. McCrory, Todd R. Whitman
-
Patent number: 10657617
Abstract: A method and system including a central processing unit (CPU), an accelerator, a communication bus and a system memory device for dynamically processing an image file are described. The accelerator includes a local memory buffer, a data transfer scheduler, and a plurality of processing engines. The data transfer scheduler is arranged to manage data transfer between the system memory device and the local memory buffer, wherein the data transfer includes data associated with the image file. The local memory buffer is configured as a circular line buffer, and the data transfer scheduler includes a ping-pong buffer for transferring output data from one of the processing engines to the system memory device. The local memory buffer is configured to execute cross-layer usage of data associated with the image file.
Type: Grant
Filed: November 26, 2018
Date of Patent: May 19, 2020
Assignee: GM Global Technology Operations LLC
Inventors: Shige Wang, Wei Tong, Shuqing Zeng, Roman L. Millett
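The ping-pong transfer scheme named in the abstract above (fill one buffer while the other drains, then swap roles) reduces, in sketch form, to the following. This is a generic illustration of the technique, not GM's implementation:

```python
class PingPongBuffer:
    """Double ('ping-pong') buffer: one half is filled by a producer
    (e.g. a processing engine) while the other half is drained to
    system memory, then the roles swap."""

    def __init__(self, size):
        self.buffers = [bytearray(size), bytearray(size)]
        self.write_idx = 0  # which half the producer currently fills

    def write(self, data):
        """Producer fills the active half."""
        buf = self.buffers[self.write_idx]
        buf[:len(data)] = data

    def swap(self):
        """Swap roles; return the just-filled half so a transfer
        engine can drain it while the producer fills the other."""
        self.write_idx ^= 1
        return self.buffers[self.write_idx ^ 1]
```

The point of the pattern is that writing and draining never touch the same half at the same time, so compute and memory transfer overlap.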
-
Patent number: 10634921
Abstract: According to embodiments, the invention is an augmented reality system that utilizes a near eye see-through optical module comprising a transparent or semi-transparent see-through near eye display in optical alignment with a micro-lens array. According to certain embodiments, the augmented reality system comprises generating a virtual image as perceived by an eye of a wearer of the augmented reality system when looking at an object in space having a location in the real world that forms a real image. In certain embodiments, the virtual image changes, by way of example only, one or more of its shape, form, depth, 3D effect, or location due to the eye or eyes shifting fixation position in response to changing the location of different lighted pixels of the see-through near eye display(s).
Type: Grant
Filed: June 22, 2019
Date of Patent: April 28, 2020
Assignee: NewSight Reality, Inc.
Inventors: Ronald Blum, Ami Gupta, Igor Landau, Rick Morrison
-
Patent number: 10633093
Abstract: Provided are systems and methods for monitoring an asset via an autonomous model-driven inspection. In an example, the method may include storing an inspection plan including a virtually created three-dimensional (3D) model of a travel path with respect to a virtual asset that is created in virtual space, converting the virtually created 3D model of the travel path about the virtual asset into a physical travel path about a physical asset corresponding to the virtual asset, autonomously controlling vertical and lateral movement of an unmanned robot in three dimensions with respect to the physical asset based on the physical travel path, capturing data at one or more regions of interest, and storing information concerning the captured data about the asset.
Type: Grant
Filed: May 5, 2017
Date of Patent: April 28, 2020
Assignee: GENERAL ELECTRIC COMPANY
Inventors: Mauricio Castillo-Effen, Ching-Ling Huang, Raju Venkataramana, Roberto Silva Filho, Alex Tepper, Steven Gray, Yakov Polishchuk, Viktor Holovashchenko, Charles Theurer, Yang Zhao, Ghulam Ali Baloch, Douglas Forman, Shiraj Sen, Huan Tan, Arpit Jain
-
Patent number: 10628909
Abstract: A Resource Dependency Viewer for graphics processing unit (GPU) execution information is disclosed. The Resource Dependency Viewer provides profiling/debugging information concurrently with information about execution flow, resource utilization, execution statistics, and orphaned resources, among other things. A user-interactive graph ("dependency graph") may be provided via a graphical user interface to allow interactive analysis of code executed on a GPU (e.g., graphics or compute code). Resource utilization and execution flow of encoders may be identified by analyzing contents of a GPU workload representative of a GPU execution trace to generate the dependency graph. Information about dependencies and execution statistics may be further analyzed using heuristics to identify potential problem areas. The dependency graph may include visual indicators of these problem areas.
Type: Grant
Filed: June 1, 2018
Date of Patent: April 21, 2020
Assignee: Apple Inc.
Inventors: Ohad Frenkel, Eric O. Sunalp, Dustin J. Greene, Alp Yucebilgin, Domenico Troiano, Maximilian Christ, Andrew M. Sowerby, Lionel Lemarie, Sebastian Schaefer
-
Patent number: 10614609
Abstract: Methods and apparatus for processing 360-degree virtual reality images are disclosed. According to one method, a 2D (two-dimensional) frame is divided into multiple blocks. The multiple blocks are encoded or decoded using quantization parameters by restricting the delta quantization parameter to be within a threshold for any two blocks corresponding to two neighboring blocks on a 3D sphere. According to another embodiment, one or more guard bands are added to one or more edges that are discontinuous in the 2D frame but continuous in the 3D sphere. A fade-out process is applied to said one or more guard bands to generate one or more faded guard bands. At the decoder side, the reconstructed 2D frame is generated from the decoded extended 2D frame by cropping said one or more decoded faded guard bands, or by blending said one or more decoded faded guard bands with reconstructed duplicated areas.
Type: Grant
Filed: July 13, 2018
Date of Patent: April 7, 2020
Assignee: MEDIATEK INC.
Inventors: Cheng-Hsuan Shih, Chia-Ying Li, Ya-Hsuan Lee, Hung-Chih Lin, Jian-Liang Lin, Shen-Kai Chang
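The faded guard band and the decoder-side cropping option from this abstract can be sketched as follows. The linear fade and the band width are illustrative assumptions; the patent does not fix them, and the real method operates inside a video codec rather than on raw arrays:

```python
import numpy as np

def append_faded_guard_band(frame, width):
    """Extend a 2D frame to the right with a guard band that
    duplicates the last column and fades it out linearly.
    The linear fade profile is an illustrative choice."""
    last = frame[:, -1:]
    # fade factors 1.0 -> toward 0.0 across the band
    alphas = np.linspace(1.0, 0.0, width, endpoint=False)
    band = last * alphas[np.newaxis, :]
    return np.concatenate([frame, band], axis=1)

def crop_guard_band(extended, width):
    """Decoder-side option 1 from the abstract: simply crop the
    decoded guard band to recover the original frame width."""
    return extended[:, :-width]
```

The second decoder option, blending the faded band with reconstructed duplicated areas, would combine the two regions instead of discarding the band; it is omitted here for brevity.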