Patents Examined by Steven Z Elbinger
  • Patent number: 11630630
    Abstract: An information processing apparatus includes a first display screen, a second display screen, a detection unit, and a display controller. The detection unit detects switching between a first mode and a second mode. The first mode is a mode in which information is displayed on the first display screen. The second mode is a mode in which information is displayed on the first display screen and the second display screen. The display controller controls whether enlarged display or additional display is performed in accordance with a change in the orientation of the first display screen or an operation of specifying the information displayed on the first display screen. The change or the operation is performed within a certain time period that includes the time point at which the detection unit detects switching from the first mode to the second mode.
    Type: Grant
    Filed: September 9, 2019
    Date of Patent: April 18, 2023
    Assignee: FUJIFILM Business Innovation Corp.
    Inventor: Yuki Noguchi
  • Patent number: 11615616
    Abstract: A user-guidance system that utilizes augmented-reality (AR) components and human-posture-detection techniques is presented. The user-guidance system can help users conduct 3D body scans with smart devices more efficiently and accurately. AR components are computer-generated as on-screen guidance that directs a camera operator to position the camera at a particular location, and with a particular tilt orientation, relative to a target object in order to capture an image that includes a region of the target object for 3D reconstruction. Human-posture-detection techniques detect a human user's real-time posture and provide real-time on-screen feedback and instructions so that the user adopts the intended best posture for 3D reconstruction of the human user.
    Type: Grant
    Filed: April 1, 2020
    Date of Patent: March 28, 2023
    Inventor: Jeff Jian Chen
  • Patent number: 11615592
    Abstract: Aspects of the present disclosure involve a system comprising a computer-readable storage medium storing at least one program, and a method for performing operations comprising: receiving a video that depicts a person; tracking three-dimensional (3D) movement of the person within the video using a 3D reference point; computing a 3D position for placement of an augmented reality item relative to the 3D reference point; causing to be displayed the augmented reality item within the video at the 3D position; and updating the 3D position of the augmented reality item in the video as the 3D reference point changes based on the 3D movement of the person.
    Type: Grant
    Filed: October 27, 2020
    Date of Patent: March 28, 2023
    Assignee: Snap Inc.
    Inventors: Avihay Assouline, Itamar Berger, Gal Dudovitch, Matan Zohar
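    The tracking-and-placement loop this abstract describes, an AR item kept at a fixed offset from a tracked 3D reference point so that it follows the person across frames, can be illustrated with a minimal Python sketch (the function name, frame data, and offset are hypothetical, not from the patent):

    ```python
    # Minimal sketch: an AR item keeps a fixed offset from a tracked 3D
    # reference point (e.g. a joint on the person), so its position is
    # recomputed each frame as the reference point moves.

    def place_ar_item(reference_point, offset):
        """Return the item's 3D position relative to the tracked reference."""
        return tuple(r + o for r, o in zip(reference_point, offset))

    # Hypothetical per-frame reference-point positions as the person moves.
    frames = [(0.0, 1.5, 2.0), (0.1, 1.5, 2.0), (0.2, 1.6, 2.1)]
    offset = (0.0, 0.3, 0.0)  # item floats 0.3 m above the reference point
    positions = [place_ar_item(p, offset) for p in frames]
    ```

    Recomputing `positions` every frame mirrors the final claimed step: as the reference point changes with the person's movement, the item's 3D position is updated from the same offset.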
  • Patent number: 11610378
    Abstract: A system and method for determining a location for a surgical jig in a surgical procedure includes providing a mixed reality headset, a 3D spatial mapping camera, an infrared or stereotactic camera, and a computer system configured to transfer data to and from the mixed reality headset and the 3D spatial mapping camera. The system and method also include attaching a jig to a bone, mapping the bone and jig using the 3D spatial mapping camera, and then identifying a location for the surgical procedure using the computer system. Then the system and method use the mixed reality headset to provide a visualization of the location for the surgical procedure.
    Type: Grant
    Filed: April 26, 2022
    Date of Patent: March 21, 2023
    Inventors: Russell Todd Nevins, David Jon Backstein, Bradley H. Nathan
  • Patent number: 11600053
    Abstract: A system and method for determining a location for a surgical jig in a surgical procedure includes providing a mixed reality headset, a 3D spatial mapping camera, an infrared or stereotactic camera, and a computer system configured to transfer data to and from the mixed reality headset and the 3D spatial mapping camera. The system and method also include attaching a jig to a bone, mapping the bone and jig using the 3D spatial mapping camera, and then identifying a location for the surgical procedure using the computer system. Then the system and method use the mixed reality headset to provide a visualization of the location for the surgical procedure.
    Type: Grant
    Filed: April 15, 2022
    Date of Patent: March 7, 2023
    Inventors: Russell Todd Nevins, David Jon Backstein, Bradley H. Nathan
  • Patent number: 11600054
    Abstract: Methods and systems for manufacture of a garment are disclosed, in particular for generating fabrication data for manufacture of a garment, where said garment is for regulation of a body region of a wearer of the garment. Measurement data for the body region of the wearer is obtained, and the measurement data is modified to simulate a regulating effect for the garment. The modified measurement data is used to generate the fabrication data for manufacturing the garment. The measurement data may be obtained for an unregulated state of the body region of the wearer, and the measurement data may be modified to simulate a regulated state of the body region of the wearer. An initial form of the garment may be manufactured using the fabrication data, and the measurement data used to obtain a three-dimensional model of the body region. The initial form of the garment may then be compared to the three-dimensional model, and the comparison used to modify the initial form of the garment.
    Type: Grant
    Filed: July 4, 2018
    Date of Patent: March 7, 2023
    Inventor: Anne Oommen
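    The core data transformation described above, modifying unregulated measurements to simulate the garment's regulating (compressive) effect before generating fabrication data, can be sketched as follows; the region names and compression factors are illustrative assumptions, not values from the patent:

    ```python
    def simulate_regulation(measurements, compression):
        """Apply per-region compression factors to unregulated body
        measurements to approximate the regulated (compressed) state."""
        return {region: value * compression.get(region, 1.0)
                for region, value in measurements.items()}

    # Hypothetical unregulated measurements in centimetres.
    unregulated = {"bust": 96.0, "waist": 82.0, "hip": 100.0}
    compression = {"waist": 0.95}  # assume a 5% reduction at the waist
    regulated = simulate_regulation(unregulated, compression)
    ```

    The modified (`regulated`) measurements, rather than the raw ones, would then feed the fabrication-data step.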
  • Patent number: 11579689
    Abstract: Systems, methods, and non-transitory computer-readable media are disclosed for selectively rendering augmented reality content based on predictions regarding a user's ability to visually process the augmented reality content. For instance, the disclosed systems can identify eye tracking information for a user at an initial time. Moreover, the disclosed systems can predict a change in an ability of the user to visually process an augmented reality element at a future time based on the eye tracking information. Additionally, the disclosed systems can selectively render the augmented reality element at the future time based on the predicted change in the ability of the user to visually process the augmented reality element.
    Type: Grant
    Filed: January 25, 2021
    Date of Patent: February 14, 2023
    Assignee: Meta Platforms, Inc.
    Inventors: Mark Terrano, Ian Erkelens, Kevin James MacKenzie
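    One way to picture the selective-rendering decision, skipping a render when the user is predicted to be unable to visually process it, is a simple gate on predicted eye velocity. The saccadic-suppression proxy and the threshold value here are assumptions for illustration, not details from the patent:

    ```python
    def should_render(predicted_eye_velocity_deg_s, threshold=180.0):
        """During a fast saccade the user cannot visually process new
        content, so rendering the AR element can be skipped.  The
        velocity threshold is an assumed illustrative value."""
        return predicted_eye_velocity_deg_s < threshold

    # Slow pursuit: render; fast saccade predicted at the future time: skip.
    render_now = should_render(20.0)
    skip_during_saccade = not should_render(400.0)
    ```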
  • Patent number: 11568620
    Abstract: Apparatuses and methods are provided for augmented reality-assisted assessment of three-dimensional (3D) fit of physical objects within a physical environment in different positions. According to an embodiment, an augmented reality (AR) device obtains 3D dimensions of a virtual object representative of a real-world physical object and displays a 3D representation of the virtual object in an AR space depicted by a user interface of the AR device that is representative of a real-world physical environment in a field of view of the AR device. The 3D representation of the virtual object is proportionally dimensioned relative to the physical environment based on the obtained 3D dimensions of the virtual object, and the virtual object is repositionable in the AR space responsive to input received by the AR device to allow assessment of 3D fit of the virtual object within the physical environment in different positions.
    Type: Grant
    Filed: October 28, 2020
    Date of Patent: January 31, 2023
    Assignee: SHOPIFY INC.
    Inventors: Byron Leonel Delgado, Daniel Beauchamp, Maas Mansoor Ali Lalani
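    A toy version of the 3D-fit assessment, checking whether a box-shaped virtual object fits a box-shaped region of the physical environment in some axis-aligned orientation, might look like the sketch below. This is a simplification: the patent describes interactive repositioning in AR, not a pure geometric test, and all dimensions are invented:

    ```python
    from itertools import permutations

    def fits(object_dims, space_dims):
        """True if a box with the given (w, h, d) dimensions fits inside
        the space in at least one axis-aligned orientation."""
        return any(all(o <= s for o, s in zip(perm, space_dims))
                   for perm in permutations(object_dims))

    # Hypothetical dimensions in metres: a cabinet vs. an alcove.
    fits_ok = fits((0.8, 0.4, 0.4), (1.0, 0.5, 0.5))
    too_big = fits((1.2, 0.4, 0.4), (1.0, 0.5, 0.5))
    ```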
  • Patent number: 11562714
    Abstract: An augmented reality device sends data representative of a field of view to a remote system. The field of view comprises two portions: a first portion displayed to a user of the augmented reality device, and a second portion encompassing an area outside of the first portion. The remote system generates an element of an augmented reality display based on the second portion and sends the element to the augmented reality device. When movement of the device causes the field of view to shift, the device includes the generated element in the augmented reality display.
    Type: Grant
    Filed: March 28, 2019
    Date of Patent: January 24, 2023
    Assignee: Amazon Technologies, Inc.
    Inventor: Stephen Daniel Vilke
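    The two-portion field of view in this abstract can be sketched in one dimension: the displayed portion plus a margin outside it that the remote system renders ahead of time, so content is ready when the view shifts. The angles and margin below are hypothetical:

    ```python
    def expanded_view(center_deg, half_width_deg, margin_deg):
        """Split a 1-D field of view into the portion displayed to the
        user and a wider region whose extra margin is rendered remotely
        ahead of anticipated device movement."""
        displayed = (center_deg - half_width_deg, center_deg + half_width_deg)
        prerendered = (center_deg - half_width_deg - margin_deg,
                       center_deg + half_width_deg + margin_deg)
        return displayed, prerendered

    displayed, prerendered = expanded_view(0.0, 30.0, 10.0)
    ```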
  • Patent number: 11551421
    Abstract: Various implementations or examples set forth a method for scanning a three-dimensional (3D) environment. The method includes generating, based on sensor data captured by a depth sensor on a device, one or more 3D meshes representing a physical space, wherein each of the 3D meshes comprises a corresponding set of vertices and a corresponding set of faces comprising edges between pairs of vertices; determining that a mesh is visible in a current frame captured by an image sensor on the device; determining, based on the corresponding set of vertices and the corresponding set of faces for the mesh, a portion of the mesh that lies within a view frustum associated with the current frame; and updating the one or more 3D meshes by texturing the portion of the mesh with one or more pixels in the current frame onto which the portion is projected.
    Type: Grant
    Filed: October 30, 2020
    Date of Patent: January 10, 2023
    Assignee: SPLUNK INC.
    Inventors: Devin Bhushan, Seunghee Han, Caelin Thomas Jackson-King, Jamie Kuppel, Stanislav Yazhenskikh, Jim Jiaming Zhu
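    The per-frame visibility step in the abstract, deciding which part of a mesh lies within the view frustum of the current frame, reduces to a per-vertex containment test. A simplified camera-space version (symmetric square frustum, -z forward; all parameters illustrative) is:

    ```python
    import math

    def in_view_frustum(vertex, near, far, fov_deg):
        """Rough test of whether a camera-space vertex (with -z forward)
        lies inside a symmetric square view frustum.  A stand-in for the
        per-mesh frustum test described in the abstract."""
        x, y, z = vertex
        depth = -z
        if not (near <= depth <= far):
            return False
        half_extent = depth * math.tan(math.radians(fov_deg) / 2.0)
        return abs(x) <= half_extent and abs(y) <= half_extent

    visible = in_view_frustum((0.0, 0.0, -5.0), 0.1, 100.0, 90.0)
    behind_near = in_view_frustum((0.0, 0.0, -0.05), 0.1, 100.0, 90.0)
    off_side = in_view_frustum((10.0, 0.0, -5.0), 0.1, 100.0, 90.0)
    ```

    Vertices that pass the test (and the faces they bound) form the portion of the mesh that gets textured with pixels from the current frame.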
  • Patent number: 11544913
    Abstract: A method for providing an augmented reality (AR)-based manual, executed by a manual application on at least one processor of a wearable device according to an embodiment of the present disclosure, comprises obtaining at least one piece of AR manual content for a first work and providing the AR manual content by executing it. Providing the AR manual content includes providing head-position AR content that visualizes a head pose, describing the 3D position and 6-degrees-of-freedom information of the head of a user who has performed the first work, and providing guidance image information for a first process within the first work.
    Type: Grant
    Filed: December 28, 2021
    Date of Patent: January 3, 2023
    Assignee: VIRNECT INC.
    Inventor: Tae Jin Ha
  • Patent number: 11532135
    Abstract: A dual mode augmented reality surgical system configured to operate in both a tracking and a non-tracking mode includes a head mounted display configured to provide an optical view of a patient and to inject received data content on top of the optical view to form an augmented reality view of the patient, and comprising internal tracking means configured to determine a surgeon's position as well as angle and direction of view relative to the patient. The system further includes an augmented reality computing system comprising one or more processors, one or more computer-readable tangible storage devices, and program instructions stored on at least one of the one or more storage devices for execution by at least one of the one or more processors.
    Type: Grant
    Filed: December 7, 2020
    Date of Patent: December 20, 2022
    Assignee: SURGICAL THEATER, INC.
    Inventors: Alon Yakob Geri, Mordechai Avisar
  • Patent number: 11521352
    Abstract: Various implementations or examples set forth a method for scanning a three-dimensional (3D) environment. The method includes generating, based on sensor data captured by a depth sensor on a device, one or more 3D meshes representing a physical space, wherein each of the 3D meshes comprises a corresponding set of vertices and a corresponding set of faces comprising edges between pairs of vertices; determining that a mesh is visible in a current frame captured by an image sensor on the device; determining, based on the corresponding set of vertices and the corresponding set of faces for the mesh, a portion of the mesh that lies within a view frustum associated with the current frame; and updating the one or more 3D meshes by texturing the portion of the mesh with one or more pixels in the current frame onto which the portion is projected.
    Type: Grant
    Filed: October 30, 2020
    Date of Patent: December 6, 2022
    Assignee: SPLUNK INC.
    Inventors: Devin Bhushan, Seunghee Han, Caelin Thomas Jackson-King, Jamie Kuppel, Stanislav Yazhenskikh, Jim Jiaming Zhu
  • Patent number: 11455759
    Abstract: Data visualization processes can utilize machine learning algorithms applied to visualization data structures to determine the visualization parameters that most effectively provide insight into the data, and to suggest meaningful correlations for further investigation by users. In numerous embodiments, data visualization processes can automatically generate parameters that can be used to display the data in ways that provide enhanced value. For example, dimensions can be assigned to specific visualization parameters according to their importance, e.g. with higher-value dimensions mapped to more easily understood visualization aspects (color, coordinate, size, etc.). In a variety of embodiments, data visualization processes can automatically describe the resulting graph in natural language by identifying regions of interest in the visualization and generating text using natural language generation processes.
    Type: Grant
    Filed: December 21, 2020
    Date of Patent: September 27, 2022
    Assignee: Virtualitics, Inc.
    Inventors: Ciro Donalek, Michael Amori, Justin Gantenberg, Sarthak Sahu, Aakash Indurkhya
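    The example in the abstract, placing higher-value dimensions on more easily understood visualization aspects, amounts to sorting dimensions by importance and pairing them with a ranked list of channels. A minimal sketch with invented importance scores:

    ```python
    def assign_channels(importance, channels):
        """Map the highest-importance data dimensions to the most easily
        digestible visualization aspects.  `channels` is assumed ordered
        from most to least easily understood."""
        ranked = sorted(importance, key=importance.get, reverse=True)
        return dict(zip(ranked, channels))

    # Hypothetical importance scores from some upstream ML model.
    importance = {"revenue": 0.9, "region": 0.4, "age": 0.7}
    channels = ["x-coordinate", "color", "size"]
    assignment = assign_channels(importance, channels)
    ```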
  • Patent number: 11430087
    Abstract: Techniques for representing a scene or map based on statistical data of captured environmental data are discussed herein. In some cases, the data (such as covariance data, mean data, or the like) may be stored as a multi-resolution voxel space that includes a plurality of semantic layers. In some instances, individual semantic layers may include multiple voxel grids having differing resolutions. Multiple multi-resolution voxel spaces may be merged to generate combined scenes based on detected voxel covariances at one or more resolutions.
    Type: Grant
    Filed: December 20, 2019
    Date of Patent: August 30, 2022
    Assignee: Zoox, Inc.
    Inventors: Michael Carsten Bosse, Patrick Blaes, Derek Adams, Brice Rebsamen
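    Storing per-voxel statistics of the captured points is the core of the representation described above. A single-resolution sketch keeping only the per-voxel mean (the multi-resolution layers and covariance data the abstract mentions are omitted for brevity):

    ```python
    from collections import defaultdict

    def voxel_means(points, resolution):
        """Bucket 3-D points into a voxel grid at the given resolution
        and keep the mean point per occupied voxel, one of the per-voxel
        statistics the abstract mentions (covariance works similarly)."""
        buckets = defaultdict(list)
        for p in points:
            key = tuple(int(c // resolution) for c in p)
            buckets[key].append(p)
        return {k: tuple(sum(axis) / len(v) for axis in zip(*v))
                for k, v in buckets.items()}

    points = [(0.1, 0.1, 0.1), (0.3, 0.1, 0.1), (1.5, 0.0, 0.0)]
    means = voxel_means(points, 1.0)
    ```

    Running the same bucketing at several resolutions yields the multiple voxel grids per semantic layer that the abstract describes.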
  • Patent number: 11430164
    Abstract: The graphics processing unit described herein is configured to process graphics data using a rendering space which is sub-divided into a plurality of tiles. The graphics processing unit comprises a tiling unit and rendering logic. The tiling unit is arranged to generate a tile control list for each tile, the tile control list identifying each graphics data item present in the tile. The rendering logic is arranged to render the tiles using the tile control lists generated by the tiling unit. The tiling unit comprises per-tile hash generation logic arranged to generate, for each tile, a per-tile hash value based on a set of textures that will be accessed when processing the tile in the rendering logic, and the tiling unit is further arranged to store the per-tile hash value for a tile within the tile control list for the tile.
    Type: Grant
    Filed: December 23, 2019
    Date of Patent: August 30, 2022
    Assignee: Imagination Technologies Limited
    Inventors: Isuru Herath, Richard Broadhurst
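    The per-tile hash described above is derived from the set of textures a tile will access, so it should be order-independent: two tiles touching the same textures get the same value. A minimal sketch (the actual hash function is unspecified in the abstract; SHA-256 and string texture IDs are used here purely for illustration):

    ```python
    import hashlib

    def tile_texture_hash(texture_ids):
        """Stable, order-independent hash of the set of textures a tile
        will access during rendering."""
        data = ",".join(sorted(texture_ids)).encode("utf-8")
        return hashlib.sha256(data).hexdigest()[:16]

    h1 = tile_texture_hash(["brick", "wood"])
    h2 = tile_texture_hash(["wood", "brick"])  # same set, same hash
    h3 = tile_texture_hash(["brick"])          # different set
    ```

    Storing such a value in the tile control list lets the rendering logic compare tiles' texture working sets cheaply, without walking the full lists.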
  • Patent number: 11417060
    Abstract: In one implementation, a method involves tessellating a surface of a 3D object by identifying vertices having 3D positions. The method transforms the 3D positions into positions for a first sphere-based projection for a left eye viewpoint and positions for a second sphere-based projection for a right eye viewpoint. Transforming the 3D positions of the vertices involves transforming the vertices based on a user orientation (i.e., camera position) and differences between the left and right eye viewpoints (e.g., based on interaxial distance and convergence angle). The method further renders a stereoscopic 360° rendering of the 3D object based on the first sphere-based projection for the left eye viewpoint and the second sphere-based projection for the right eye viewpoint. For example, an equirectangular representation of the first sphere-based projection can be combined with an equirectangular representation of the second sphere-based projection to provide a file defining a stereoscopic 360° image.
    Type: Grant
    Filed: August 20, 2020
    Date of Patent: August 16, 2022
    Assignee: Apple Inc.
    Inventors: Stuart M. Pomerantz, Timothy K. Dashwood
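    The last step of the method, flattening each sphere-based projection into an equirectangular representation, maps a direction on the sphere to longitude/latitude image coordinates. A minimal sketch of that mapping (conventions such as -z forward and the image size are assumptions):

    ```python
    import math

    def equirectangular(direction, width, height):
        """Project a unit 3-D direction (camera space, -z forward) onto
        equirectangular image coordinates: longitude -> x, latitude -> y."""
        x, y, z = direction
        lon = math.atan2(x, -z)                   # -pi .. pi
        lat = math.asin(max(-1.0, min(1.0, y)))   # -pi/2 .. pi/2
        u = (lon / (2.0 * math.pi) + 0.5) * width
        v = (0.5 - lat / math.pi) * height
        return u, v

    # The forward direction lands at the centre of the image.
    centre = equirectangular((0.0, 0.0, -1.0), 4096, 2048)
    ```

    Producing one such image per eye viewpoint and combining them is one common layout for a file defining a stereoscopic 360° image.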
  • Patent number: 11403795
    Abstract: Implementations provide methods including actions of processing patient data to generate one or more graphical representations of the patient data, at least one graphical representation of the one or more graphical representations including a waveform, displaying at least one waveform segment of the waveform, and displaying calipers associated with the at least one waveform segment, each caliper being associated with an interval, where displaying the calipers includes, for each caliper: receiving a measurement value of the interval associated with the caliper, determining respective positions of a first handle and a second handle of the caliper based on the measurement, and displaying the first handle and the second handle in the respective positions relative to the at least one waveform segment.
    Type: Grant
    Filed: April 16, 2013
    Date of Patent: August 2, 2022
    Assignee: AirStrip IP Holdings, LLC
    Inventors: Stephen Trey Moore, Thomas Scott Wade, Lloyd Kory Brown, William Cameron Powell, Daniel Lee Blake, Neil R. McQueen, Augustine Vidal Pedraza, IV, Alan Williams Portela
  • Patent number: 11380011
    Abstract: Disclosed is a system for presenting simulated reality relative to a user's position. The system includes a camera, memory containing computer-readable instructions, and a processor. The processor processes the instructions to receive a captured image of a marker from the camera. The marker has a position relative to a simulated reality layer. The processor compares the image to one or more stored marker data sets to determine whether the image corresponds to a stored marker data set of the one or more stored marker data sets; detects a corresponding stored marker data set; and determines a position of the camera relative to the marker based on the comparison of the captured image and the one or more stored marker data sets. The processor causes a display, on a display device, of a simulated reality environment having a simulated reality object, based on the determined position of the camera.
    Type: Grant
    Filed: April 23, 2020
    Date of Patent: July 5, 2022
    Assignee: KreatAR, LLC
    Inventor: Liron Lerman
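    The comparison step, testing a captured marker image against stored marker data sets and detecting the corresponding one, can be caricatured as nearest-descriptor matching with a tolerance. The descriptors, IDs, and threshold here are invented for illustration; a real system would compare richer image features:

    ```python
    def match_marker(captured, stored_markers, max_distance=0.2):
        """Return the stored marker whose descriptor best matches the
        captured one, or None if nothing is within tolerance."""
        def dist(a, b):
            return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5
        best = min(stored_markers,
                   key=lambda m: dist(captured, m["descriptor"]))
        return best if dist(captured, best["descriptor"]) <= max_distance else None

    stored = [{"id": "poster", "descriptor": (0.0, 0.0)},
              {"id": "card", "descriptor": (1.0, 1.0)}]
    hit = match_marker((0.1, 0.0), stored)
    miss = match_marker((0.5, 0.5), stored)
    ```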
  • Patent number: 11381659
    Abstract: Data representative of a physical feature of a morphologic subject is received in connection with a procedure to be carried out with respect to the morphologic subject. A view of the morphologic subject overlaid by a virtual image of the physical feature is rendered for a practitioner of the procedure, including generating the virtual image of the physical feature based on the representative data, and rendering the virtual image of the physical feature within the view in accordance with one or more reference points on the morphologic subject such that the virtual image enables in-situ visualization of the physical feature with respect to the morphologic subject.
    Type: Grant
    Filed: March 22, 2020
    Date of Patent: July 5, 2022
    Assignee: ARIS MD, Inc.
    Inventors: Chandra Devam, Zaki Adnan Taher, William Scott Edgar