Patents Examined by Terrell M Robinson
-
Patent number: 12380660
Abstract: A method includes receiving first and second images from first and second see-through cameras with first and second camera viewpoints. The method also includes generating a first virtual image corresponding to a first virtual viewpoint by applying a first mapping to the first image. The first mapping is based on relative positions of the first camera viewpoint and the first virtual viewpoint corresponding to a first eye of a user. The method further includes generating a second virtual image corresponding to a second virtual viewpoint by applying a second mapping to the second image. The second mapping is based on relative positions of the second camera viewpoint and the second virtual viewpoint corresponding to a second eye of the user. In addition, the method includes presenting the first and second virtual images to the first and second virtual viewpoints on at least one display panel of an augmented reality device.
Type: Grant
Filed: April 5, 2023
Date of Patent: August 5, 2025
Assignee: Samsung Electronics Co., Ltd.
Inventor: Yingen Xiong
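The patent does not disclose its mapping, but the camera-to-eye viewpoint shift it describes can be sketched as a depth-dependent horizontal pixel displacement. Everything below (pinhole model, purely horizontal offset, the function name and parameters) is an assumption for illustration, not the claimed method:

```python
import numpy as np

def remap_to_virtual_viewpoint(image, depth, focal_px, baseline_m):
    """Shift each pixel horizontally by the disparity induced by the
    camera-to-eye offset, approximating the view from the eye's position.
    Assumes a pinhole camera and a purely horizontal offset."""
    h, w = image.shape[:2]
    out = np.zeros_like(image)
    # Standard stereo relation: disparity d = f * B / Z (in pixels).
    disparity = (focal_px * baseline_m / depth).round().astype(int)
    cols = np.arange(w)[None, :].repeat(h, axis=0)
    rows = np.arange(h)[:, None].repeat(w, axis=1)
    new_cols = np.clip(cols + disparity, 0, w - 1)
    out[rows, new_cols] = image  # forward splat; disocclusion holes stay empty
    return out
```

Nearby pixels shift more than distant ones, so the warp reproduces the parallax between the camera and the eye; holes left by disocclusion would need inpainting in a real system.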
-
Patent number: 12380648
Abstract: Embodiments of devices and techniques of obtaining a three dimensional (3D) representation of an area are disclosed. In one embodiment, a two dimensional (2D) frame is obtained of an array of pixels of the area. Also, a depth frame of the area is obtained. The depth frame includes an array of depth estimation values. Each of the depth estimation values in the array of depth estimation values corresponds to one or more corresponding pixels in the array of pixels. Furthermore, an array of confidence scores is generated. Each confidence score in the array of confidence scores corresponds to one or more corresponding depth estimation values in the array of depth estimation values. Each of the confidence scores in the array of confidence scores indicates a confidence level that the one or more corresponding depth estimation values in the array of depth estimation values is accurate.
Type: Grant
Filed: October 5, 2022
Date of Patent: August 5, 2025
Assignee: STREEM, LLC
Inventor: Nikilesh Urella
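The abstract specifies a confidence score per depth estimate but not how scores are computed. A minimal stand-in heuristic, scoring each estimate by its agreement with its 4-neighborhood (flat, consistent regions score high, noisy edges score low) — the scoring function and name are assumptions, not the patented technique:

```python
import numpy as np

def depth_confidence(depth):
    """Map each depth estimate to a confidence in [0, 1] from the mean
    absolute deviation against its 4-neighbours (edge-padded)."""
    pad = np.pad(depth, 1, mode="edge")
    dev = (np.abs(pad[1:-1, :-2] - depth)    # left neighbour
         + np.abs(pad[1:-1, 2:] - depth)     # right neighbour
         + np.abs(pad[:-2, 1:-1] - depth)    # upper neighbour
         + np.abs(pad[2:, 1:-1] - depth)) / 4.0
    return 1.0 / (1.0 + dev)  # zero deviation -> confidence 1.0
```

A uniform depth frame scores 1.0 everywhere, while an outlier spike drags down its own score and those of its neighbors.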
-
Patent number: 12361621
Abstract: Creating images and animations of lip motion from mouth shape data includes providing, as one or more input features to a neural network model, a vector of a plurality of coefficients. Each vector of the plurality of coefficients corresponds to a different mouth shape. Using the neural network model, a data structure output specifying a visual representation of a mouth including lips having a shape corresponding to the vector is generated.
Type: Grant
Filed: October 17, 2022
Date of Patent: July 15, 2025
Assignee: SAMSUNG ELECTRONICS CO., LTD.
Inventors: Siddarth Ravichandran, Anthony Sylvain Jean-Yves Liot, Dimitar Petkov Dinev, Ondrej Texler, Hyun Jae Kang, Janvi Chetan Palan, Sajid Sadi
-
Patent number: 12361625
Abstract: A system operable to better facilitate user interaction and avoid frustrating a distribution motivation of a distribution user, a viewing motivation of a viewing user, and/or an interaction motivation between users, which may include a distribution unit displaying a moving image on a viewing user terminal, a reception unit receiving a display request for a predetermined gift and/or a predetermined comment from the viewing user terminal, and a display unit displaying a predetermined gift object and/or the predetermined comment in the moving image, based on the display request received by the reception unit. The display unit may display specific display corresponding to a specific gift and/or a specific comment set in accordance with a manipulation of a distribution user in the moving image, and may display a specific gift object and/or the specific comment in the moving image, in accordance with selection of the specific display by a viewing user.
Type: Grant
Filed: December 29, 2022
Date of Patent: July 15, 2025
Assignee: GREE HOLDINGS, Inc.
Inventors: Keisuke Yamaga, Yoshiki Kudo
-
Patent number: 12361646
Abstract: Various implementations provide a method for determining how a second user prefers to be depicted/augmented in a first user's view of a multi-user environment in a privacy preserving way. For example, a method may include determining that a physical environment includes a second device, where a second user associated with the second device is to be depicted in a view of a three-dimensional (3D) environment. The method may further include determining position data indicative of a location of the second device relative to the first device. The method may further include sending the position data indicative of the location of the second device relative to the first device, to an information system (e.g., a user preference system). The method may further include receiving a user preference setting associated with the second user for depicting or augmenting the second user in the 3D environment from the information system.
Type: Grant
Filed: May 9, 2023
Date of Patent: July 15, 2025
Assignee: Apple Inc.
Inventors: Avi Bar-zeev, Ranjit Desai, Rahul Nair
-
Patent number: 12354197
Abstract: A computing system captures a first image, comprising an object in a first position, using a camera. The object has indicators indicating points of interest on the object. The computing system receives first user input linking at least a subset of the indicators and establishing relationships between the points of interest on the object, and second user input comprising a graphic element and a mapping between the graphic element and the object. The computing system captures second images, comprising the object in one or more modified positions, using the camera. The computing system tracks the modified positions of the object across the second images using the indicators and the relationships between the points of interest. The computing system generates a virtual graphic based on the one or more modified positions, the graphic element, and the mapping between the graphic element and the object.
Type: Grant
Filed: May 10, 2022
Date of Patent: July 8, 2025
Assignee: Adobe Inc.
Inventors: Jiahao Li, Li-Yi Wei, Stephen DiVerdi, Kazi Rubaiat Habib
-
Patent number: 12354226
Abstract: The various implementations described herein include methods and systems for providing input capabilities at various fidelity levels. In one aspect, a method includes receiving, from an application, a request identifying an input capability for making an input operation available within the application. The method further includes, in response to receiving the request: identifying techniques that the artificial-reality system can use to make the requested input capability available to the application using data from one or more devices; selecting a first technique for making the requested input capability available to the application; and using the first technique to provide to the application data to allow for performance of the requested input capability.
Type: Grant
Filed: February 27, 2023
Date of Patent: July 8, 2025
Assignee: Meta Platforms Technologies, LLC
Inventors: Chengyuan Yan, Joseph Davis Greer, Sheng Shen, Anurag Sharma
-
Patent number: 12348700
Abstract: A fast and generalizable novel view synthesis method with sparse inputs is disclosed. The method may comprise: accessing at least a first input image with a first view of a subject in the first input image, and a second input image with a second view of the subject in the second input image using a computer system; estimating depths for pixels in the at least first and second input images; estimating depths for pixels in the at least first and second input images; constructing a point cloud of image features from the estimated depths; and synthesizing a novel view by forward warping by using a point cloud rendering of the constructed point cloud.
Type: Grant
Filed: May 9, 2022
Date of Patent: July 1, 2025
Assignee: The Regents of the University of Michigan
Inventors: Justin Johnson, Ang Cao, Chris Rockwell
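The forward-warping step described — lift pixels to a point cloud using estimated depth, then reproject into a novel camera — can be sketched with standard pinhole geometry. The function name, interface, and the choice to warp raw pixels rather than learned features are assumptions for illustration:

```python
import numpy as np

def forward_warp(depth, K_src, K_tgt, R, t):
    """Unproject every source pixel to 3D with its depth, transform the
    point cloud into the target camera frame, and reproject.
    Returns per-point target pixel coordinates (u, v) and target depths."""
    h, w = depth.shape
    ys, xs = np.mgrid[0:h, 0:w]
    pix = np.stack([xs.ravel(), ys.ravel(), np.ones(h * w)])
    # Back-project: X = Z * K_src^-1 [u, v, 1]^T
    pts = np.linalg.inv(K_src) @ pix * depth.ravel()
    pts = R @ pts + t[:, None]          # rigid transform into target frame
    proj = K_tgt @ pts                  # perspective projection
    return proj[:2] / proj[2], proj[2]
```

With an identity rotation, zero translation, and matching intrinsics, every point lands back on its source pixel, which makes a convenient sanity check; a renderer would then splat the warped points (or features) into the target image.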
-
Patent number: 12325457
Abstract: Methods and systems implement an output interface of a display computing system to display a visualization of predictive output of an onboard computing system of a train. A display computing system obtains, from an onboard predictive control system of a train traveling along a rail from a departure to a destination, predicted speeds which the predictive control system uses to preemptively adjust control signals sent by an onboard train control system of the train. The display computing system provides a visualization of the predicted speeds obtained by the predictive control system.
Type: Grant
Filed: January 9, 2023
Date of Patent: June 10, 2025
Assignee: Progress Rail Services Corporation
Inventors: Colby Smith Bradley, Sammy Akif, Konan Thomason, Marcos Blanco Fernandes
-
Patent number: 12322051
Abstract: A list information display method and apparatus, an electronic device, and a storage medium are provided, the method includes: displaying list information in response to an information display operation triggered by a user; obtaining a real scene shooting image; and loading a plurality of types of information related to the list information into the real scene shooting image and displaying the real scene shooting image. Due to the use of virtual enhanced display technology, while the list information is displayed in the real scene shooting image, the plurality of types of information related to the list information are also displayed. On one hand, the user can obtain more information related to the list information while obtaining the list information, which improves the efficiency of information acquisition; and on the other hand, the user can obtain information in the real scene shooting image, which improves the visual effect and interactive experience.
Type: Grant
Filed: August 26, 2021
Date of Patent: June 3, 2025
Assignee: LEMON INC.
Inventors: Jingcong Zhang, Weikai Li, Zihao Chen, Guohui Wang, Xiao Yang, Haiying Cheng, Anda Li, Ray McClure, Zhili Chen, Yiheng Zhu, Shihkuang Chu, Liyou Xu, Yunzhu Li, Jianchao Yang
-
Patent number: 12322146
Abstract: The present disclosure relates to an information processing device and method capable of suppressing a reduction in coding efficiency. For a point cloud representing a three-dimensional object as a set of points, a coordinate system for geometry data is transformed from a polar coordinate system to a Cartesian coordinate system, a reference relationship indicating a reference destination used to calculate a predictive value of attribute data of a processing target point is set by using the generated geometry data in the Cartesian coordinate system, a prediction residual that is a difference value between the attribute data of the processing target point and the predictive value calculated based on the set reference relationship is calculated, and the calculated prediction residual is encoded. The present disclosure can be applied to, for example, an information processing device, an encoding device, a decoding device, an electronic device, an information processing method, or a program.
Type: Grant
Filed: December 24, 2021
Date of Patent: June 3, 2025
Assignee: SONY GROUP CORPORATION
Inventors: Tomoya Naganuma, Satoru Kuma, Hiroyuki Yasuda, Ohji Nakagami
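The two core operations in the abstract — transforming geometry from polar to Cartesian coordinates, then encoding only the attribute's difference from a predicted value — can be sketched directly. The angle conventions (azimuth in the x-y plane, elevation from it) and the function names are assumptions; the patent's actual reference-selection logic is not shown:

```python
import numpy as np

def polar_to_cartesian(r, azimuth, elevation):
    """Convert LiDAR-style polar geometry (range, azimuth, elevation)
    to Cartesian x, y, z coordinates."""
    x = r * np.cos(elevation) * np.cos(azimuth)
    y = r * np.cos(elevation) * np.sin(azimuth)
    z = r * np.sin(elevation)
    return np.stack([x, y, z], axis=-1)

def prediction_residual(attr, predicted):
    """Residual = attribute minus the predictive value from the reference
    point; only this (typically small) difference is entropy-coded."""
    return attr - predicted
```

Because nearby points tend to have similar attributes, the residual distribution concentrates near zero, which is what makes predictive coding cheaper than coding raw attribute values.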
-
Patent number: 12311835
Abstract: Embodiments of the present invention provide electronic display devices capable of displaying information that is observable to the public (e.g., other motorists and pedestrians) without substantially impacting the visibility of the driver. Embodiments include a display panel facing out from the vehicle that is visible from the exterior, and a display device pointing to the inside of the vehicle that reproduces the scene behind the vehicle in a way that is visible to occupants (e.g., the driver). The electronic display devices can be mounted inside the rear window of an emergency response vehicle using mounts that orient the display substantially perpendicular to the road, for example.
Type: Grant
Filed: June 9, 2023
Date of Patent: May 27, 2025
Inventor: Richard Bourque
-
Patent number: 12314620
Abstract: An embodiment of a display apparatus includes a display panel provided within a vehicle and including a main display area and an edge display area disposed outside the main display area, and an integrated controller configured to control the display panel, wherein the integrated controller includes a passenger behavior determining unit configured to determine whether a passenger uses the display panel, a driving state determining unit configured to determine a driving state of the vehicle, and an output controller configured to control a first driving display, corresponding to the driving state of the vehicle, to be outputted to the edge display area in response to a determination that the passenger uses the display panel.
Type: Grant
Filed: November 30, 2022
Date of Patent: May 27, 2025
Assignees: Hyundai Motor Company, Kia Corporation
Inventors: Jung Seok Suh, Tae Un Kim, Hong Gyu Lee, Ja Yoon Goo, Hae Seong Jeong
-
Patent number: 12315062
Abstract: Embodiments of the present disclosure provide methods and systems for broadcasting virtual performance. Motion data related to a performer is received from one or more sensors during a physical performance of the performer. The motion data is processed to animate movements of at least one virtual character. A virtual performance corresponding to the physical performance is generated based, at least in part, on the animated movements of the virtual character. The virtual performance is broadcast to a viewer device for performing a playback of the virtual performance. The virtual performance simulates an experience of viewing the physical performance for the viewer.
Type: Grant
Filed: October 21, 2024
Date of Patent: May 27, 2025
Assignee: HYTTO PTE. LTD.
Inventor: Dan Liu
-
Patent number: 12299907
Abstract: Camera arrays for mediated-reality systems and associated methods and systems are disclosed herein. In some embodiments, a camera array includes a support structure having a center, and a depth sensor mounted to the support structure proximate to the center. The camera array can further include a plurality of cameras mounted to the support structure radially outward from the depth sensor, and a plurality of trackers mounted to the support structure radially outward from the cameras. The cameras are configured to capture image data of a scene, and the trackers are configured to capture positional data of a tool within the scene. The image data and the positional data can be processed to generate a virtual perspective of the scene including a graphical representation of the tool at the determined position.
Type: Grant
Filed: June 18, 2024
Date of Patent: May 13, 2025
Assignee: PROPRIO, INC.
Inventors: David Julio Colmenares, James Andrew Youngquist, Adam Gabriel Jones, Thomas Ivan Nonn, Jay Peterson
-
Patent number: 12299816
Abstract: The present invention relates to three-dimensional reality capturing of an environment, wherein data of various kinds of measurement devices are fused to generate a three-dimensional model of the environment. In particular, the invention relates to a computer-implemented method for registration and visualization of a 3D model provided by various types of reality capture devices and/or by various surveying tasks.
Type: Grant
Filed: December 30, 2019
Date of Patent: May 13, 2025
Assignees: LEICA GEOSYSTEMS AG, HEXAGON GEOSYSTEMS SERVICES AG, LUCIAD NV
Inventors: Burkhard Böckem, Jürgen Dold, Pascal Strupler, Joris Schouteden, Daniel Balog
-
Patent number: 12293475
Abstract: Augmented reality (AR) devices, systems and methods are provided to augment captured images of objects of interest. Image data can be obtained for an object of interest by an imaging device of an ear-worn device worn by a user. Augmenting information is generated to augment onto an image of the object. The augmented image is adjusted based on the detection of whether the object is in a field of view (FOV) of the imaging device.
Type: Grant
Filed: June 15, 2021
Date of Patent: May 6, 2025
Assignee: 3M Innovative Properties Company
Inventors: Lori A. Sjolund, Christopher M. Brown, Glenn E. Casner, Jon A. Kirschhoffer, Kathleen M. Stenersen, Miaoding Dai, Travis W. Rasmussen, Carter C. Hughes
-
Patent number: 12289532
Abstract: An extended reality (XR) system receives capture information from a first camera with a first image sensor that faces a first direction, for instance facing an environment. The capture information is associated with capture of first image data by the first image sensor, for instance including the first image data and/or first image capture settings used to capture the first image data. The XR system determines an image capture setting, such as an exposure setting, for a second image sensor based on the capture information. The second image sensor faces a second direction, for instance facing a user of the XR system. In some examples, the XR system determines the image capture setting also based on information from a display buffer for a display that faces the second direction. The XR system causes the second image sensor to capture second image data according to the image capture setting.
Type: Grant
Filed: December 12, 2023
Date of Patent: April 29, 2025
Assignee: QUALCOMM Incorporated
Inventors: Soman Ganesh Nikhara, Adinarayana Nuthalapati
-
Patent number: 12282982
Abstract: Some embodiments of a method disclosed herein may include: receiving a predicted driving route, sensor ranges of sensors on an autonomous vehicle (AV), and sensor field-of-view (FOV) data; determining whether minimum sensor visibility requirements are met along the predicted driving route; predicting blind areas along the predicted driving route, wherein the predicted blind areas are determined to have potentially diminished sensor visibility; and displaying an augmented reality (AR) visualization of the blind areas using an AR display device.
Type: Grant
Filed: December 1, 2023
Date of Patent: April 22, 2025
Assignee: DRNC HOLDINGS, INC.
Inventors: Jani Mantyjarvi, Jussi Ronkainen, Mikko Tarkiainen
-
Patent number: 12277640
Abstract: Provided are systems and methods for portrait animation. An example method includes receiving, by a computing device, scenario data including information concerning movements of a first head, receiving, by the computing device, a target image including a second head and a background, determining, by the computing device and based on the target image and the information concerning the movements of the first head, two-dimensional (2D) deformations of the second head in the target image, applying, by the computing device, the 2D deformations to the target image to obtain at least one output frame of an output video, the at least one output frame including the second head displaced according to the movements of the first head, and filling, by the computing device and using a background prediction neural network, a portion of the background in gaps between the displaced second head and the background.Type: Grant
Filed: April 22, 2024
Date of Patent: April 15, 2025
Assignee: Snap Inc.
Inventors: Eugene Krokhalev, Aleksandr Mashrabov, Pavel Savchenkov
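The "apply 2D deformations to the target image" step can be sketched as a per-pixel displacement of the target image by a dense offset field; the function name, the (dy, dx) flow layout, and nearest-neighbor splatting are all assumptions, and the patent's driving-motion-to-deformation model is not represented:

```python
import numpy as np

def apply_2d_deformation(image, flow):
    """Displace each pixel of the target image by its per-pixel 2D offset
    flow[..., 0] = dy, flow[..., 1] = dx (forward warp).
    Pixels nothing maps onto stay zero — these are the gaps the abstract's
    background prediction network would fill in."""
    h, w = image.shape[:2]
    out = np.zeros_like(image)
    ys, xs = np.mgrid[0:h, 0:w]
    ny = np.clip(ys + flow[..., 0], 0, h - 1).astype(int)
    nx = np.clip(xs + flow[..., 1], 0, w - 1).astype(int)
    out[ny, nx] = image
    return out
```

A zero flow field reproduces the input exactly; a rightward shift of the head region leaves a column of empty background pixels behind it, which is precisely where inpainting is needed.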