Patents Examined by Robert J Craddock
-
Patent number: 11636234
Abstract: The disclosure relates to a computer-implemented method for generating a 3D model representing a building. The method comprises providing a 2D floor plan representing a layout of the building, determining a semantic segmentation of the 2D floor plan, and determining the 3D model based on the semantic segmentation. Such a method provides an improved solution for processing a 2D floor plan.
Type: Grant
Filed: December 28, 2018
Date of Patent: April 25, 2023
Assignee: DASSAULT SYSTEMES
Inventors: Asma Rejeb Sfar, Louis Dupont de Dinechin, Malika Boulkenafed
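The segmentation-to-3D step can be pictured as extruding labeled floor-plan cells into boxes. A minimal sketch, assuming a grid of class labels and illustrative cell sizes and heights (not Dassault's actual pipeline):

```python
# Hypothetical sketch: extrude a semantically segmented 2D floor plan into
# axis-aligned 3D boxes. Labels, cell size, and heights are assumptions.

WALL, DOOR, FLOOR = 1, 2, 0

def extrude_segmentation(grid, cell_size=0.5, wall_height=2.7, door_height=2.0):
    """Turn each non-floor cell of the segmentation grid into a 3D box,
    given as ((x0, x1), (y0, y1), (z0, z1)) extents in metres."""
    boxes = []
    for r, row in enumerate(grid):
        for c, label in enumerate(row):
            if label == FLOOR:
                continue
            h = wall_height if label == WALL else door_height
            x0, y0 = c * cell_size, r * cell_size
            boxes.append(((x0, x0 + cell_size), (y0, y0 + cell_size), (0.0, h)))
    return boxes

# A tiny 3x3 plan: walls around the border, a floor cell and a door inside.
plan = [
    [1, 1, 1],
    [1, 0, 2],
    [1, 1, 1],
]
model = extrude_segmentation(plan)
```

Eight of the nine cells are wall or door, so eight boxes come out; a real system would merge adjacent cells into larger solids.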
-
Patent number: 11636662
Abstract: Methods and systems are disclosed for performing operations for applying augmented reality elements to a fashion item. The operations include receiving an image that includes a depiction of a person wearing a fashion item, generating a segmentation of the fashion item worn by the person depicted in the image, extracting a portion of the image corresponding to the segmentation, and estimating an angle of each pixel in that portion relative to the camera used to capture the image. One or more augmented reality elements are then applied to the fashion item in the image based on the estimated per-pixel angles.
Type: Grant
Filed: September 30, 2021
Date of Patent: April 25, 2023
Assignee: Snap Inc.
Inventors: Itamar Berger, Gal Dudovitch, Gal Sasson, Ma'ayan Shuvi, Matan Zohar
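The per-pixel angle estimate can be sketched with a pinhole camera model: the angle between the optical axis and the ray through a pixel follows from the intrinsics. The focal lengths and principal point below are made-up values, not Snap's:

```python
import math

# Hypothetical pinhole-camera sketch of "angle of each pixel relative to the
# camera". Intrinsics (fx, fy, cx, cy) are illustrative assumptions.

def pixel_angle(u, v, fx=500.0, fy=500.0, cx=320.0, cy=240.0):
    """Angle (radians) between the optical axis and the ray through
    pixel (u, v) for a pinhole camera with the given intrinsics."""
    x = (u - cx) / fx   # normalized image-plane coordinates
    y = (v - cy) / fy
    return math.atan(math.hypot(x, y))

center = pixel_angle(320, 240)  # principal point: ray along the optical axis
corner = pixel_angle(0, 0)      # image corner: largest angle in the frame
```

Angles grow toward the image edges, which is why AR material draped on fabric needs stronger perspective correction there.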
-
Patent number: 11620791
Abstract: Aspects of the present disclosure involve a system comprising a computer-readable storage medium storing at least one program and a method for rendering three-dimensional (3D) captions in real-world environments depicted in image content. An editing interface is displayed on a client device. The editing interface includes an input component displayed with a view of a camera feed. A first input comprising one or more text characters is received. In response to receiving the first input, a two-dimensional (2D) representation of the text characters is displayed. In response to detecting a second input, a preview interface is displayed. Within the preview interface, a 3D caption based on the text characters is rendered at a position in a 3D space captured within the camera feed. A message is generated that includes the 3D caption rendered at that position.
Type: Grant
Filed: May 13, 2021
Date of Patent: April 4, 2023
Assignee: Snap Inc.
Inventors: Kyle Goodrich, Samuel Edward Hare, Maxim Maximov Lazarov, Tony Mathew, Andrew James McPhee, Daniel Moreno, Wentao Shang
-
Patent number: 11615506
Abstract: A method for adjusting an over-rendered area of a display in an AR device is described. The method includes identifying an angular velocity of a display device, a most recent pose of the display device, previous warp poses, and previous over-rendered areas, and adjusting the size of a dynamic over-rendered area based on a combination of these inputs.
Type: Grant
Filed: November 18, 2021
Date of Patent: March 28, 2023
Assignee: Snap Inc.
Inventors: Bernhard Jung, Edward Lee Kim-Koon
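The idea behind a dynamic over-rendered area can be sketched as sizing an extra border from head angular velocity, so late-stage reprojection has content to sample. The latency, pixels-per-degree, and clamp values below are illustrative assumptions, not Snap's tuning:

```python
# Hypothetical sketch: size the over-rendered margin from angular velocity.
# Constants (latency, px/deg, clamps) are made up for illustration.

def overrender_margin(ang_vel_dps, latency_s=0.012, px_per_degree=20.0,
                      min_margin=8, max_margin=256):
    """Extra border (pixels) to render beyond the visible area: faster head
    motion predicts a larger reprojection shift, so a larger margin."""
    predicted_shift_deg = ang_vel_dps * latency_s
    margin = int(predicted_shift_deg * px_per_degree)
    return max(min_margin, min(max_margin, margin))

slow = overrender_margin(10.0)    # near-still head: clamps to the minimum
fast = overrender_margin(400.0)   # quick head turn: much wider margin
```

Shrinking the margin when the head is still is where the power saving comes from; the clamps keep warp artifacts bounded during sudden motion.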
-
Patent number: 11615503
Abstract: In one example embodiment, an information processing apparatus causes a display device to display a first image from images associated with an observation target object. The images include the first image and a second image which corresponds to an annotation mark. In this embodiment, the information processing apparatus also causes the display device to display the annotation mark corresponding to the second image. In this embodiment, the displayed annotation mark overlaps the first image.
Type: Grant
Filed: September 15, 2021
Date of Patent: March 28, 2023
Assignee: Sony Corporation
Inventors: Masashi Kimoto, Shigeatsu Yoshioka
-
Patent number: 11601490
Abstract: The disclosure is directed to systems and methods for local rendering of 3D models that are then accessed by remote computers. The advantage of the system is that the extensive hardware needed for rendering complex 3D models is centralized and can be accessed by smaller remote computers without any special hardware or software installation. The system also provides enhanced security, as model data can be restricted to a limited number of servers instead of being stored on individual computers.
Type: Grant
Filed: June 25, 2021
Date of Patent: March 7, 2023
Assignee: AVEVA Software, LLC
Inventors: David Matthew Stevenson, Paul Antony Burton, Mira Witczak
-
Patent number: 11593999
Abstract: A method for guiding installation of smart-home devices may include capturing, by a camera of a mobile computing device, a view of an installation location for a smart-home device; determining, by the mobile computing device, an instruction for installing the smart-home device at the location; and displaying, by a display of the mobile computing device, the view of the installation location together with the instruction for installing the smart-home device.
Type: Grant
Filed: September 13, 2021
Date of Patent: February 28, 2023
Assignee: Google LLC
Inventors: Adam Mittleman, Jason Chamberlain, Jacobi Grillo, Daniel Biran, Mark Kraz, Lauren Chanen, Daniel Foran, David Fichou, William Dong, Bao-Tram Phan Nguyen, Brian Silverstein, Yash Modi, Alex Finlayson, Dongeek Shin
-
Patent number: 11587266
Abstract: Techniques are disclosed to add augmented reality to a sub-view of a high resolution central video feed. In various embodiments, a central video feed is received from a first camera on a first recurring basis and time-stamped position information is received from a tracking system on a second recurring basis. The central video feed is calibrated against a spatial region encompassed by the central video feed. The received time-stamped position information and a determined plurality of tiles associated with at least one frame of the central video feed are used to define a first sub-view of the central video feed. The first sub-view and a homography defining placement of augmented reality elements on the at least one frame of the central video feed are provided as output to a device configured to use the first sub-view and the homography to display the first sub-view.
Type: Grant
Filed: July 21, 2021
Date of Patent: February 21, 2023
Assignee: Tempus Ex Machina, Inc.
Inventors: Erik Schwartz, Michael Naquin, Christopher Brown, Steve Xing, Pawel Czarnecki, Charles D. Ebersol, Anne Gerhart
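Placing AR elements via a homography means mapping 2D points through a 3x3 projective matrix. A self-contained sketch (the matrix values are made up for illustration):

```python
# Sketch of using a homography to place an AR overlay point on a frame.
# The matrix below is illustrative; a real one comes from calibration.

def apply_homography(H, pt):
    """Map a 2D point through a 3x3 homography (row-major nested lists),
    dividing by the projective coordinate w."""
    x, y = pt
    w = H[2][0] * x + H[2][1] * y + H[2][2]
    return ((H[0][0] * x + H[0][1] * y + H[0][2]) / w,
            (H[1][0] * x + H[1][1] * y + H[1][2]) / w)

# A pure translation expressed as a homography: shift by (5, -3).
H = [[1, 0, 5],
     [0, 1, -3],
     [0, 0, 1]]
mapped = apply_homography(H, (10.0, 10.0))
```

A general homography also encodes rotation and perspective, which is why it can pin overlays to the playing surface as the camera view changes.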
-
Patent number: 11580698
Abstract: Aspects of the present disclosure involve a system comprising a computer-readable storage medium storing at least one program and a method for rendering three-dimensional (3D) captions in real-world environments depicted in image content. An editing interface is displayed on a client device. The editing interface includes an input component displayed with a view of a camera feed. A first input comprising one or more text characters is received. In response to receiving the first input, a two-dimensional (2D) representation of the text characters is displayed. In response to detecting a second input, a preview interface is displayed. Within the preview interface, a 3D caption based on the text characters is rendered at a position in a 3D space captured within the camera feed. A message is generated that includes the 3D caption rendered at that position.
Type: Grant
Filed: May 13, 2021
Date of Patent: February 14, 2023
Assignee: Snap Inc.
Inventors: Kyle Goodrich, Samuel Edward Hare, Maxim Maximov Lazarov, Tony Mathew, Andrew James McPhee, Daniel Moreno, Wentao Shang
-
Patent number: 11568608
Abstract: Provided is an augmented reality display technology capable of better entertaining a user. An augmented reality display device 10 includes an imaging unit 13, a special effect execution unit 11b, and a display unit 14. The imaging unit 13 acquires a background image of the real world. When a plurality of models forming a specific combination are present in a virtual space, the special effect execution unit 11b executes a special effect corresponding to the combination of the models. The display unit 14 displays the models together with the background image, based on the special effect.
Type: Grant
Filed: June 11, 2021
Date of Patent: January 31, 2023
Assignee: SQUARE ENIX CO., LTD.
Inventors: Remi Driancourt, Makoto Tsuda
-
Patent number: 11557101
Abstract: An estimation system includes a first acquisition unit, a second acquisition unit, and an estimation unit. The first acquisition unit acquires model information. The model information is information about a human model rendered in a virtual space. The human model is generated based on model data of a human. The second acquisition unit acquires environmental information. The environmental information is information about an environment corresponding to the virtual space and potentially having a particular effect on the human model. The estimation unit estimates a condition of the human model based on the model information and the environmental information.
Type: Grant
Filed: January 9, 2020
Date of Patent: January 17, 2023
Assignee: Panasonic Intellectual Property Management Co., Ltd.
Inventor: Hareesh Puthiya Veettil
-
Patent number: 11551326
Abstract: A tile-based graphics system has a rendering space sub-divided into a plurality of tiles which are to be processed. Graphics data items, such as parameters or texels, are fetched into a cache for use in processing one of the tiles. Indicators are determined for the graphics data items, whereby the indicator for a graphics data item indicates the number of tiles with which that graphics data item is associated. The graphics data items are evicted from the cache in accordance with the indicators of the graphics data items. For example, the indicator for a graphics data item may be a count of the number of tiles with which that graphics data item is associated, whereby the graphics data item(s) with the lowest count(s) is (are) evicted from the cache.
Type: Grant
Filed: February 13, 2018
Date of Patent: January 10, 2023
Assignee: Imagination Technologies Limited
Inventors: Steven John Fishwick, John Howson
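The count-based eviction policy described above can be sketched directly: each cached item carries the number of tiles still referencing it, and the item with the lowest count goes first. Class and method names are illustrative, not Imagination's implementation:

```python
# Sketch of tile-count-based cache eviction: evict the graphics data item
# associated with the fewest remaining tiles. Names are illustrative.

class TileCountCache:
    def __init__(self, capacity):
        self.capacity = capacity
        self.items = {}  # key -> count of tiles still referencing the item

    def insert(self, key, tile_count):
        if key not in self.items and len(self.items) >= self.capacity:
            # Evict the item needed by the fewest remaining tiles.
            victim = min(self.items, key=self.items.get)
            del self.items[victim]
        self.items[key] = tile_count

    def tile_done(self, key):
        """A tile that referenced this item finished; decrement its count."""
        if key in self.items:
            self.items[key] -= 1

cache = TileCountCache(capacity=2)
cache.insert("texel_a", 5)
cache.insert("param_b", 1)
cache.insert("texel_c", 3)   # cache full: evicts param_b (lowest count)
```

Unlike LRU, this policy keeps items that future tiles are known to need, which suits tile-based renderers where tile membership is computed up front.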
-
Patent number: 11544902
Abstract: Aspects of the present disclosure involve a system comprising a computer-readable storage medium storing at least one program and a method for rendering three-dimensional (3D) captions in real-world environments depicted in image content. An editing interface is displayed on a client device. The editing interface includes an input component displayed with a view of a camera feed. A first input comprising one or more text characters is received. In response to receiving the first input, a two-dimensional (2D) representation of the text characters is displayed. In response to detecting a second input, a preview interface is displayed. Within the preview interface, a 3D caption based on the text characters is rendered at a position in a 3D space captured within the camera feed. A message is generated that includes the 3D caption rendered at that position.
Type: Grant
Filed: May 13, 2021
Date of Patent: January 3, 2023
Assignee: Snap Inc.
Inventors: Kyle Goodrich, Samuel Edward Hare, Maxim Maximov Lazarov, Tony Mathew, Andrew James McPhee, Daniel Moreno, Wentao Shang
-
Patent number: 11532072
Abstract: Methods, systems, and apparatus, including computer programs encoded on computer storage media, are described for combining the crop function with zoom, pan and straighten functions as part of a single cropping environment, such that a user can select a portion of an image for cropping, apply zoom, pan and straighten transformations to the selected image portion and then crop the transformed image portion in a single utility. In one aspect, the methods include the actions of receiving user input defining a crop region within a displayed image. The methods also include the actions of displaying a user interface including a cropping panel that is configured to display a subset of the image corresponding to the defined crop region. Further, the methods include the actions of receiving user input requesting to perform at least one of a zoom, rotate or translate operation on the crop region displayed in the cropping panel.
Type: Grant
Filed: September 13, 2021
Date of Patent: December 20, 2022
Assignee: Apple Inc.
Inventors: Nikhil Bhatt, Timothy David Cherna
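Combining zoom, pan, and straighten before cropping amounts to composing one transform and then testing points against the crop rectangle. A minimal sketch with illustrative parameter names (not Apple's API):

```python
import math

# Sketch of composing straighten (rotate), zoom, and pan as one transform
# applied before the crop test. Parameter names are illustrative.

def transform_point(pt, zoom=1.0, pan=(0.0, 0.0), angle_deg=0.0):
    """Rotate, then scale, then translate an image point."""
    x, y = pt
    a = math.radians(angle_deg)
    rx = x * math.cos(a) - y * math.sin(a)
    ry = x * math.sin(a) + y * math.cos(a)
    return (rx * zoom + pan[0], ry * zoom + pan[1])

def crop_contains(crop, pt):
    """Is the transformed point inside the crop rectangle (x0, y0, x1, y1)?"""
    (x0, y0, x1, y1), (x, y) = crop, pt
    return x0 <= x <= x1 and y0 <= y <= y1

# Straighten by 90 degrees, zoom 2x, pan by (5, 5), then check the crop.
p = transform_point((10.0, 0.0), zoom=2.0, pan=(5.0, 5.0), angle_deg=90.0)
inside = crop_contains((0.0, 0.0, 30.0, 30.0), p)
```

Doing all three in one pass is what lets a single cropping panel preview the final result without intermediate resampling.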
-
Patent number: 11531510
Abstract: In accordance with some embodiments, the render rate is varied across and/or up and down the display screen. This may be done based on where the user is looking, in order to reduce power consumption and/or increase performance. Specifically, the screen display is separated into regions, such as quadrants. Each region is rendered at a rate determined by at least one of: what the user is currently looking at, what the user has looked at in the past, and/or what it is predicted the user will look at next. Areas of less focus may be rendered at a lower rate, reducing power consumption in some embodiments.
Type: Grant
Filed: August 11, 2021
Date of Patent: December 20, 2022
Assignee: Intel Corporation
Inventors: Eric J. Asperheim, Subramaniam M. Maiyuran, Kiran C. Veernapu, Sanjeev S. Jahagirdar, Balaji Vembu, Devan Burke, Philip R. Laws, Kamal Sinha, Abhishek R. Appu, Elmoustapha Ould-Ahmed-Vall, Peter L. Doyle, Joydeep Ray, Travis T. Schluessler, John H. Feit, Nikos Kaburlasos, Jacek Kwiatkowski, Altug Koker
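The quadrant scheme can be sketched as: find which quadrant the gaze point falls in, render it at full rate, and decimate the rest. The resolutions and rates below are illustrative assumptions, not Intel's values:

```python
# Sketch of gaze-driven per-region render rates. Screen size and the
# full/reduced rates are made-up illustration values.

def quadrant(gaze, width, height):
    """Row-major index (0..3) of the screen quadrant containing gaze."""
    gx, gy = gaze
    return (2 if gy >= height / 2 else 0) + (1 if gx >= width / 2 else 0)

def render_rates(gaze, width=1920, height=1080, full_hz=90, reduced_hz=30):
    """Full rate for the fixated quadrant, a reduced rate for the others."""
    focus = quadrant(gaze, width, height)
    return [full_hz if q == focus else reduced_hz for q in range(4)]

rates = render_rates(gaze=(1500, 200))  # gaze in the top-right quadrant
```

A real implementation would also weight past fixations and predicted saccades, as the abstract notes, rather than reacting to the current gaze alone.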
-
Patent number: 11521291
Abstract: In some implementations, a method of reducing latency associated with an image read-out operation is performed at a device including one or more processors, non-transitory memory, an image processing architecture, and an image capture device. The method includes: obtaining first image data corresponding to a physical environment; reading a first slice of the first image data into an input buffer; performing processing operations on the first slice of the first image data to obtain a first portion of second image data; reading a second slice of the first image data into the input buffer; performing the image processing operations on the second slice of the first image data to obtain a second portion of the second image data; and generating an image frame of the physical environment based at least in part on the first and second portions of the second image data.
Type: Grant
Filed: March 29, 2021
Date of Patent: December 6, 2022
Assignee: APPLE INC.
Inventors: Bertrand Nepveu, Marc-Andre Chenier, Yan Cote, Yves Millette
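The latency win comes from processing each slice as soon as it lands in the input buffer rather than waiting for the whole frame. A toy sketch where the "processing" is a stand-in brightness offset, not Apple's actual pipeline:

```python
# Sketch of slice-by-slice read-out: each slice is processed as it arrives,
# so output rows are ready before the full frame has been read. The
# per-pixel operation here is a placeholder, not a real ISP stage.

def read_slices(frame, slice_height):
    """Yield horizontal slices (groups of rows) of a frame."""
    for top in range(0, len(frame), slice_height):
        yield frame[top:top + slice_height]

def process_slice(sl):
    """Placeholder processing: clamp-add 10 to every pixel value."""
    return [[min(255, px + 10) for px in row] for row in sl]

frame = [[0, 0], [100, 100], [250, 250], [40, 40]]
out_rows = []
for sl in read_slices(frame, slice_height=2):
    out_rows.extend(process_slice(sl))  # available before later slices land
```

With two-row slices, the first half of the output exists after only half the frame has been read, which is the latency reduction the abstract claims.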
-
Patent number: 11521355
Abstract: A system and method for modeling visual and non-visual experiential characteristics of a work space environment, the system comprising at least a first emissive surface useable to view a virtual world (VW) representation, and a processor that is programmed to perform the steps of (a) presenting a VW representation via the at least a first emissive surface, the VW representation including an affordance configuration, (b) modeling at least one non-visual experiential characteristic associated with an environment associated with the VW representation, and (c) presenting at least some indication of the non-visual experiential characteristic to the system user.
Type: Grant
Filed: November 8, 2021
Date of Patent: December 6, 2022
Assignee: Steelcase Inc.
Inventors: Stephen E. Goetzinger, Jr., Kyle R. Dhyne
-
Patent number: 11519695
Abstract: A firearm system includes a firearm and a computer. Electronics in the firearm determine data that includes a pathway between different points of aim of the firearm as the firearm moves. The computer receives this data and builds an image of the pathway between the different points of aim of the firearm.
Type: Grant
Filed: March 25, 2022
Date of Patent: December 6, 2022
Inventor: Philip Scott Lyren
-
Patent number: 11513656
Abstract: A method facilitating connectivity between at least first and second persons contemplates utilizing one or more computer processors to instantiate a first augmented reality space that mimics a real world space physically in existence about a first person. The first augmented reality space includes at least an avatar of the first person, and a first virtual representation of at least one real world object within the first augmented reality space. Using one or more computer processors, the method provides an interface through which the second person, distal to the first person, can use a second avatar to contemporaneously occupy and enter the first augmented reality space, traverse the space, and interact with the object.
Type: Grant
Filed: March 30, 2021
Date of Patent: November 29, 2022
Assignee: Wormhole Labs, Inc.
Inventors: Curtis Hutten, Robert D. Fish
-
Patent number: 11501505
Abstract: Systems and methods are described that obtain depth data associated with a scene captured by an electronic device, obtain location data associated with a plurality of physical objects within a predetermined distance of the electronic device, generate a plurality of augmented reality (AR) objects configured to be displayed over a portion of the plurality of physical objects, and generate a plurality of proximity layers corresponding to the scene, wherein a respective proximity layer is configured to trigger display of auxiliary data corresponding to the AR objects associated with that layer while suppressing other AR objects.
Type: Grant
Filed: July 27, 2021
Date of Patent: November 15, 2022
Assignee: GOOGLE LLC
Inventors: Michael Ishigaki, Diane Wang
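Proximity layers can be sketched as distance buckets: AR objects are assigned to a layer by distance, and only the layer matching the user's current focus distance shows its auxiliary data. The thresholds and object fields below are assumptions, not Google's design:

```python
# Sketch of proximity layers: bucket AR objects by distance and show
# auxiliary labels only for the focused layer. Thresholds are illustrative.

LAYERS = [(0.0, 2.0), (2.0, 10.0), (10.0, float("inf"))]  # metres

def layer_of(distance):
    """Index of the proximity layer containing this distance."""
    for i, (lo, hi) in enumerate(LAYERS):
        if lo <= distance < hi:
            return i
    return len(LAYERS) - 1

def visible_labels(ar_objects, focus_distance):
    """Labels for objects in the focused layer; other layers are suppressed."""
    active = layer_of(focus_distance)
    return [o["label"] for o in ar_objects if layer_of(o["dist"]) == active]

objs = [{"label": "cafe", "dist": 1.5},
        {"label": "bus stop", "dist": 6.0},
        {"label": "tower", "dist": 40.0}]
shown = visible_labels(objs, focus_distance=5.0)
```

Suppressing off-layer objects keeps the overlay readable when many annotated objects sit at very different depths in the same view.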