Patents Examined by Yi Wang
-
Patent number: 10706214
Abstract: An information processing terminal device includes: an acquiring unit acquiring image data; a detecting unit detecting a posture of the information processing terminal device; a generating unit generating display image data, based on the image data acquired by the acquiring unit, according to the posture of the terminal device detected by the detecting unit; and a displaying unit displaying the display image data generated by the generating unit on a display unit. If the posture of the terminal device is changed, the generating unit generates the display image data according to the changed posture. If the posture of the terminal device is changed while the generating unit is generating the display image data, the generating unit does not generate display image data for the changed posture until the current generation is complete.
Type: Grant
Filed: January 4, 2020
Date of Patent: July 7, 2020
Assignee: Brother Kogyo Kabushiki Kaisha
Inventor: Norihiko Asai
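The deferred-regeneration behavior this abstract describes can be sketched as a small state machine. This is a hypothetical illustration only; the class and field names are not from the patent:

```python
class DisplayGenerator:
    """Sketch of the claimed behavior: regenerate the display image when
    the device posture changes, but defer any posture change that arrives
    while a generation pass is already in progress."""

    def __init__(self):
        self.posture = "portrait"
        self.generating = False      # True while a generation pass runs
        self.pending_posture = None  # posture change deferred mid-generation
        self.displayed = None

    def on_posture_change(self, new_posture):
        if self.generating:
            # Do not regenerate until the current pass completes.
            self.pending_posture = new_posture
        else:
            self.posture = new_posture
            self.generate()

    def generate(self):
        self.generating = True
        self.displayed = f"image@{self.posture}"  # stand-in for rendering
        self.generating = False
        # Apply any posture change that arrived during generation.
        if self.pending_posture is not None:
            self.posture, self.pending_posture = self.pending_posture, None
            self.generate()
```

A posture change received mid-generation is held in `pending_posture` and applied only after the in-flight pass finishes, matching the abstract's final sentence.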
-
Patent number: 10692467
Abstract: Devices and methods for automatic application of mapping functions to video signals based on inferred parameters are provided. In one example, a method is provided that includes initiating display of content based on a video signal being processed by a device. The method may further include, in response to at least a first change in an intensity of ambient light or a second change in a color of the ambient light subsequent to the initiating of the display of the content, selecting a first mapping function applicable to pixels corresponding to frames of the video signal based at least on a first inferred parameter from a selected machine learning model. The method may further include automatically applying the first mapping function to a first plurality of pixels corresponding to a first set of frames of the video signal.
Type: Grant
Filed: May 4, 2018
Date of Patent: June 23, 2020
Assignee: Microsoft Technology Licensing, LLC
Inventors: Abo Talib Mafoodh, Mehmet Kucukgoz, Holly H. Pollock
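A minimal sketch of the selection-and-application idea, with fixed lux thresholds standing in for the patent's machine-learned inference (all names and thresholds here are assumptions):

```python
def pick_mapping(lux, mappings):
    """Choose a per-pixel mapping function from ambient-light intensity.
    The hard-coded thresholds stand in for the inferred parameter of the
    patent's machine learning model."""
    if lux < 50:
        return mappings["dim"]
    if lux < 1000:
        return mappings["indoor"]
    return mappings["bright"]

def apply_mapping(pixels, fn):
    """Apply the selected mapping to the pixels of one set of frames."""
    return [fn(p) for p in pixels]
```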
-
Patent number: 10675005
Abstract: A system and method for synchronizing caliper measurements in a multi-frame 2D image and an anatomical M-mode image are provided. The method may include selecting a frame of a multi-frame 2D image of a region of interest, positioning a first caliper measurement on the selected frame, and generating an anatomical M-mode image based on a direction of the first caliper measurement. The method may include automatically overlaying a second caliper measurement on the anatomical M-mode image, the second caliper measurement corresponding with the first caliper measurement on the selected frame. The method may include presenting, at a display system, the selected frame having the first caliper measurement simultaneously with the anatomical M-mode image having the second caliper measurement.
Type: Grant
Filed: August 2, 2018
Date of Patent: June 9, 2020
Assignee: GENERAL ELECTRIC COMPANY
Inventor: Kjetil Viggen
-
Patent number: 10672193
Abstract: Embodiments relate to receiving an indication of a desired item by a user in a multi-use virtual world. The indication requests a rendering of a restricted virtual object in a space of the user. A server can determine and retrieve a partially rendered model of the restricted virtual object, determine a rendering location based on position data, and link the partially rendered model to the user and to the rendering location. The partially rendered model and the rendering location can be sent to user devices for rendering the partially rendered model at the rendering location. A partially rendered appearance of the restricted virtual object indicates to a second user the desired item by a first user. A fully rendered model may contain restricted content data, whereas a partially rendered model may not contain restricted content data. The restricted content data may include a restricted digital media file.
Type: Grant
Filed: August 21, 2018
Date of Patent: June 2, 2020
Assignee: DISNEY ENTERPRISES, INC.
Inventor: K. C. (Casey) Marsh
-
Patent number: 10665014
Abstract: A system and method for tap event location includes a device using a selection apparatus that provides accurate point locations. The device determines a 3-dimensional map of a scene in the view frustum of the device relative to a coordinate frame. The device receives an indication of the occurrence of a tap event comprising a contact of the selection apparatus with a subject, and determines the location of the tap event relative to the coordinate frame from the location of the selection apparatus. The location of the tap event may be used to determine a subject. Data associated with the subject may then be processed to provide effects in, or data about, the scene in the view frustum of the device. Embodiments include a selection apparatus that communicates occurrences of tap events to the device and includes features that allow the device to determine the location of the selection apparatus.
Type: Grant
Filed: April 1, 2019
Date of Patent: May 26, 2020
Assignee: Microsoft Technology Licensing, LLC
Inventors: John Weiss, Xiaoyan Hu
-
Patent number: 10657596
Abstract: A method may include receiving image data representative of cash acquired via one or more image sensors, identifying a first currency depicted in the cash based on a plurality of images associated with a plurality of currencies, and determining a currency conversion rate between the first currency and a second currency. The method may also include generating a visualization representative of a currency value of the cash in the second currency based on the currency conversion rate and overlaying the visualization on the image data.
Type: Grant
Filed: August 31, 2018
Date of Patent: May 19, 2020
Assignee: United Services Automobile Association (USAA)
Inventors: Carlos J P Chavez, David Jason Anderson James, Rachel Elizabeth Csabi, Quian Jones, Andrea Marie Richardson
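The conversion arithmetic behind the overlaid visualization is straightforward; a sketch under assumed inputs (a list of recognized face values and a conversion rate):

```python
def converted_value(detected_denominations, rate):
    """Total the face values recognized in the image data and convert
    them to the second currency at the given rate, rounded for display.
    The flat list-of-denominations input is an assumption."""
    total = sum(detected_denominations)
    return round(total * rate, 2)
```

For example, two twenties and a five at a rate of 0.9 would be overlaid as 40.5 units of the second currency.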
-
Patent number: 10650560
Abstract: A system comprising a computer-readable storage medium storing at least one program, and a method for generating graphical representations of event participation flows, are presented. In example embodiments, the method includes determining an event participation flow for participants of a subject event, and causing presentation of a graphical representation of the event participation flow in the user interface. The method may further include receiving a user selection of a filter via the user interface, and filtering the graphical representation of the event participation flow in accordance with the user-selected filter.
Type: Grant
Filed: December 11, 2018
Date of Patent: May 12, 2020
Assignee: Palantir Technologies Inc.
Inventors: Catherine Lu, Karanveer Mohan, Jacob Stern
-
Patent number: 10652387
Abstract: A method of displaying information on a display of a display device via a display controller includes imaging a user of the display device using an imaging device integrated with the display device or provided around the display device, detecting a line of sight of the user from an image captured by the imaging device, determining whether the display exists within a central visual field region centered on the detected line of sight of the user or within a peripheral visual field region located outside the central visual field region, displaying notification information on the display in a first display form when the display exists within the central visual field region, and displaying the notification information on the display in a second display form when the display exists within the peripheral visual field region, the second display form having a higher abstraction level than that of the first display form.
Type: Grant
Filed: April 20, 2016
Date of Patent: May 12, 2020
Assignees: Nissan Motor Co., Ltd., Renault S.A.S.
Inventor: Masafumi Tsuji
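The central-versus-peripheral decision reduces to comparing the angle between the line of sight and the display direction against a half-angle defining the central field. A sketch, where the 30° default is an assumption rather than a value from the patent:

```python
def display_region(gaze_deg, display_deg, central_half_angle=30.0):
    """Classify whether the display lies within the central visual field
    (within the half-angle of the gaze direction) or the peripheral one.
    Bearings are in degrees; wrap-around at 360 is handled."""
    delta = abs((display_deg - gaze_deg + 180.0) % 360.0 - 180.0)
    return "central" if delta <= central_half_angle else "peripheral"
```

A "central" result would select the first (detailed) display form, a "peripheral" result the second, more abstract one.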
-
Patent number: 10636192
Abstract: A virtual reality (VR) or augmented reality (AR) head mounted display (HMD) includes various facial sensors, such as cameras, that capture images of portions of the user's face outside of the HMD. For example, multiple facial sensors capture images of a portion of the user's face below the HMD. Through image analysis, points of the portion of the user's face are identified from the images and their movement is tracked. The identified points are mapped to a three-dimensional model of a face. Additionally, a parametric representation of the user's face is determined for each captured image, resulting in various representations indicating the user's facial expressions. From the parametric representations and transforms mapping the captured images to three dimensions, a rendering model is applied to the three-dimensional model of the face to render the user's facial expressions.
Type: Grant
Filed: June 29, 2018
Date of Patent: April 28, 2020
Assignee: Facebook Technologies, LLC
Inventors: Jason Saragih, Hernan Badino, Shih-En Wei
-
Patent number: 10635905
Abstract: The disclosed computer-implemented method may include receiving, from devices in an environment, real-time data associated with the environment. The method may also include determining, from the real-time data, current mapping and object data. The current mapping data may include coordinate data for the environment, and the current object data may include both state data and relationship data for objects in the environment. The method may also include determining mapping deltas between the current mapping data and baseline map data, and determining object deltas between the current object data and an event graph. The event graph may include prior state data and prior relationship data for objects. The method may also include updating the baseline map data and the event graph based on the deltas and sending updated baseline map data and event graph data to the devices. Various other methods, systems, and computer-readable media are also disclosed.
Type: Grant
Filed: September 14, 2018
Date of Patent: April 28, 2020
Assignee: Facebook Technologies, LLC
Inventors: Richard Andrew Newcombe, Jakob Julian Engel, Julian Straub, Thomas John Whelan, Steven John Lovegrove, Yuheng Ren
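The delta computation between current object data and the prior state held in the event graph can be sketched as a dictionary diff (the flat key-to-state representation here is an assumption; the patent's event graph also carries relationship data):

```python
def object_deltas(current, prior):
    """Diff current object state against prior state: entries whose state
    changed, newly observed objects, and objects no longer present."""
    changed = {k: v for k, v in current.items()
               if k in prior and prior[k] != v}
    added = {k: v for k, v in current.items() if k not in prior}
    removed = [k for k in prior if k not in current]
    return changed, added, removed
```

Only these deltas, rather than full maps, would then be used to update the baseline map and event graph sent back to the devices.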
-
Patent number: 10627641
Abstract: A 3D display panel assembly includes a display panel and an adjusting panel. The display panel includes a plurality of subpixels arranged in an array, any two adjacent lines along a first direction being respectively a first subpixel line and a second subpixel line, the first subpixel line including first subpixels configured to emit primary light, and the second subpixel line including second subpixels configured to display black when the first subpixel line emits the primary light. The adjusting panel includes a plurality of adjusting units arranged in an array, each line along a second direction including a plurality of continuously arranged first adjusting unit groups. A 3D display device and a method for driving the same are also provided.
Type: Grant
Filed: August 26, 2016
Date of Patent: April 21, 2020
Assignees: BOE Technology Group Co., Ltd., Beijing BOE Optoelectronics Technology Co., Ltd.
Inventors: Xiaochuan Chen, Pengcheng Lu, Wenqing Zhao, Ming Yang, Rui Xu, Lei Wang, Jian Gao, Xiaochen Niu, Xue Dong, Haisheng Wang
-
Patent number: 10621760
Abstract: Techniques are disclosed for the synthesis of a full set of slotted content, based upon only partial observations of the slotted content. With respect to a font, the slots may comprise particular letters or symbols or glyphs in an alphabet. Based upon partial observations of a subset of glyphs from a font, a full set of the glyphs corresponding to the font may be synthesized and may further be ornamented.
Type: Grant
Filed: June 15, 2018
Date of Patent: April 14, 2020
Assignee: Adobe Inc.
Inventors: Matthew David Fisher, Samaneh Azadi, Vladimir Kim, Elya Shechtman, Zhaowen Wang
-
Patent number: 10614592
Abstract: The present disclosure provides a three-dimensional posture estimating method and apparatus, a device, and a computer storage medium, wherein the method comprises: obtaining two-dimensional posture information of an object in an image and three-dimensional size information of the object; determining coordinates of key points of the object in an object coordinate system according to the three-dimensional size information of the object; and determining a transformation relationship between a camera coordinate system and the object coordinate system according to a geometrical relationship between the coordinates of the key points in the object coordinate system and the two-dimensional posture information of the object. Applied to autonomous driving, this approach can map a two-dimensional obstacle detection result into three-dimensional space to obtain the obstacle's posture.
Type: Grant
Filed: June 28, 2018
Date of Patent: April 7, 2020
Assignee: BAIDU ONLINE NETWORK TECHNOLOGY (BEIJING) CO., LTD.
Inventors: Xun Sun, Rui Wang, Yuqiang Zhai, Tian Xia
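The first step, deriving object-frame key points from the three-dimensional size, amounts to enumerating the corners of the object's bounding box. A sketch, where placing the origin at the box center is an assumption:

```python
def box_corners(length, width, height):
    """Eight key-point coordinates of an object in its own coordinate
    system, derived from its 3-D size, with the origin at the box center."""
    l, w, h = length / 2.0, width / 2.0, height / 2.0
    return [(sx * l, sy * w, sz * h)
            for sx in (-1, 1) for sy in (-1, 1) for sz in (-1, 1)]
```

The camera-to-object transform would then be recovered from these 3-D points and their 2-D image positions, for instance with a perspective-n-point solver.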
-
Patent number: 10606363
Abstract: A method for a display system of a motor vehicle is provided. The motor vehicle comprises one or more seats for occupants of the vehicle, the position and/or orientation of the seats within an interior of the motor vehicle being variable. The method comprises: determining a position and/or orientation of the seats; determining an orientation of at least a first portion of an image to be displayed by the display system according to the position and/or orientation of the seats; and displaying the image such that at least the first portion of the image is in the determined orientation. A display system for a motor vehicle is also provided.
Type: Grant
Filed: June 1, 2018
Date of Patent: March 31, 2020
Assignee: Ford Global Technologies, LLC
Inventors: Marcus Hoggarth, Tom Gordon, Alan Pich
-
Patent number: 10607567
Abstract: An environment map, such as a cube map, can be obtained for a scene that is appropriate for the current lighting state. A grayscale image representation is generated that represents physical objects visible in the scene. The grayscale representation is provided to a device for rendering AR content. A color lookup table (LUT) is generated for coloring the grayscale image representation. The color LUT can be appropriate for the current lighting conditions of the scene. As the lighting state changes, such as over the course of a day, different color LUTs can be sent to the device for purposes of updating the environment map. The grayscale image representation, once colored, can serve as an environment map for purposes of creating reflection effects on AR content to be rendered with respect to a live view of the scene.
Type: Grant
Filed: March 16, 2018
Date of Patent: March 31, 2020
Assignee: AMAZON TECHNOLOGIES, INC.
Inventors: Richard Schritter, Sidharth Moudgil, Pratik Patel
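Coloring the grayscale representation with a LUT is a per-pixel table lookup; a sketch assuming a 256-entry RGB table:

```python
def apply_color_lut(gray_pixels, lut):
    """Color a grayscale environment-map representation using a lookup
    table of 256 RGB triples. As lighting changes, only the small LUT
    needs to be resent to the device, not a new environment map."""
    return [lut[g] for g in gray_pixels]
```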
-
Patent number: 10593012
Abstract: A video processing method includes receiving an omnidirectional content corresponding to a sphere, generating a projection-based frame according to at least the omnidirectional content and a segmented sphere projection (SSP) format, and encoding, by a video encoder, the projection-based frame to generate a part of a bitstream. The projection-based frame has a 360-degree content represented by a first circular projection face, a second circular projection face, and at least one rectangular projection face packed in an SSP layout. A north polar region of the sphere is mapped onto the first circular projection face. A south polar region of the sphere is mapped onto the second circular projection face. At least one non-polar ring-shaped segment between the north polar region and the south polar region of the sphere is mapped onto said at least one rectangular projection face.
Type: Grant
Filed: March 20, 2018
Date of Patent: March 17, 2020
Assignee: MEDIATEK INC.
Inventors: Ya-Hsuan Lee, Hung-Chih Lin, Jian-Liang Lin, Shen-Kai Chang
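Which SSP face a sphere point lands on is determined by its latitude. A sketch, where the 45° polar boundary is an assumption (the abstract only requires two polar caps plus at least one ring-shaped segment):

```python
def ssp_face(lat_deg, polar_boundary=45.0):
    """Map a sphere point, given its latitude in degrees, to its SSP
    projection face: one of the polar circular faces or the non-polar
    rectangular face."""
    if lat_deg >= polar_boundary:
        return "north_circle"
    if lat_deg <= -polar_boundary:
        return "south_circle"
    return "rectangular"
```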
-
Patent number: 10586391
Abstract: A framework for interactive VR content items provides for user interaction via placement of interaction points within VR content items that would otherwise be passively viewed by users on HMD devices. The interaction points are defined by positional metadata that contains information regarding when and where the interaction points should be displayed during playback of the interactive VR content. The interaction points are also defined by action metadata that determines one or more actions to be executed when the user selects the interaction points. Selection of the interaction points via one or more of a user gesture and a voice input is also enabled by the framework.
Type: Grant
Filed: May 30, 2017
Date of Patent: March 10, 2020
Assignee: ACCENTURE GLOBAL SOLUTIONS LIMITED
Inventors: Nikhil Chandrakant Khedkar, Sanyog Suresh Barve, Sandip Chhaganlal Sutariya
-
Patent number: 10580218
Abstract: In one embodiment, a computing system accesses a first tracking record of a first user during a first movement session. The first tracking record comprises a plurality of locations of the first user and associated time measurements. During a second movement session, the system determines a current location of a second user and an associated current time measurement. From the plurality of locations in the first tracking record, a first location of the first user in the first movement session is determined based on (1) the associated time measurement relative to a start time of the first movement session and (2) the current time measurement relative to a start time of the second movement session. The system determines a display position of a virtual object on a display screen of the second user based on the first location relative to the current location of the second user.
Type: Grant
Filed: July 12, 2018
Date of Patent: March 3, 2020
Assignee: Facebook, Inc.
Inventor: David Michael Viner
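Aligning the two sessions by elapsed time can be sketched with a binary search over the first user's tracking record (the record layout used here is an assumption):

```python
import bisect

def ghost_location(track, start1, now, start2):
    """From the first session's tracking record, a list of
    (absolute_time, location) pairs sorted by time, return the location
    whose elapsed time best matches the second user's current elapsed
    time. That location then drives the virtual object's display
    position relative to the second user."""
    elapsed = now - start2
    times = [t - start1 for t, _ in track]
    i = max(0, bisect.bisect_right(times, elapsed) - 1)
    return track[i][1]
```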
-
Patent number: 10575132
Abstract: Herein is disclosed a positional content platform and related systems and methods. According to some embodiments, the platform includes a mobile processing and communication device and a service layer executing on a server. The mobile processing and communication device communicates with the service layer. By virtue of the communication, a user of the mobile device is able to locate and view digital content that has been created and stored on the platform. The aforementioned content is associated with a geographic location, and, according to some embodiments, the content is represented in a user interface by an icon that is superimposed over a field of view representative of a geographic region.
Type: Grant
Filed: August 5, 2016
Date of Patent: February 25, 2020
Inventors: Jason Jude Hogg, Nicholas Eugene Kleinjan, Nicholas Patrick Johns
-
Patent number: 10565775
Abstract: An apparatus and method for load balancing in a ray tracing architecture. For example, one embodiment of a graphics processing apparatus comprises: an intersection unit engine to test a plurality of rays against a plurality of primitives to identify a closest primitive that each ray intersects; an intersection unit queue to store work to be performed by the intersection unit engine; and an intersection unit offload engine to monitor the intersection unit queue to determine a pressure level on the intersection unit engine, the intersection unit offload engine to responsively offload some of the work in the intersection unit queue to intersection program code executed on one or more execution units of the graphics processor.
Type: Grant
Filed: February 13, 2018
Date of Patent: February 18, 2020
Assignee: Intel Corporation
Inventor: Tomas G. Akenine-Moller
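The offload decision can be sketched as a watermark check on queue occupancy. The 75% threshold and the policy of shedding only the excess are assumptions for illustration, not details from the patent:

```python
def work_to_offload(queue_len, capacity, high_watermark=0.75):
    """Decide how many queued intersection tests to offload to program
    code on the execution units: none while occupancy is at or below the
    watermark, otherwise enough to bring occupancy back down to it."""
    if queue_len <= capacity * high_watermark:
        return 0  # the fixed-function intersection unit keeps all work
    return queue_len - int(capacity * high_watermark)
```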