Placing Generated Data In Real Scene Patents (Class 345/632)
-
Patent number: 11967029
Abstract: Provided herein are exemplary embodiments directed to a method for creating digital media, including placing the digital media in a computer graphics environment, the computer graphics environment further comprising visually perceptible elements appearing as real objects placed in a real-world setting, and viewing the digital media when at the real-world setting. Various exemplary systems include an augmented reality and virtual reality server connected to a network, and a client device connected to the network, the client device having an augmented reality and virtual reality application. Further exemplary systems include a body or motion sensor connected to the client device and/or an augmented reality and virtual reality interface connected to the client device.
Type: Grant
Filed: April 8, 2022
Date of Patent: April 23, 2024
Assignee: tagSpace Pty Ltd
Inventor: Paul Simon Martin
-
Patent number: 11967034
Abstract: Interference-based augmented reality hosting platforms are presented. Hosting platforms can include networking nodes capable of analyzing a digital representation of a scene to derive interference among elements of the scene. The hosting platform utilizes the interference to adjust the presence of augmented reality objects within an augmented reality experience. Elements of a scene can constructively interfere, enhancing the presence of augmented reality objects, or destructively interfere, suppressing the presence of augmented reality objects.
Type: Grant
Filed: October 31, 2023
Date of Patent: April 23, 2024
Assignee: NANT HOLDINGS IP, LLC
Inventor: Patrick Soon-Shiong
-
Patent number: 11967031
Abstract: Biological digital imaging systems and methods are disclosed herein for analyzing pixel data of one or more digital images depicting absorbent articles or portions of absorbent articles. A digital image comprising pixel data is obtained depicting an absorbent article or a portion of an absorbent article. An imaging application (app) analyzes the digital image to detect a biological feature depicted within the pixel data of the digital image of the absorbent article or the portion of the absorbent article. The imaging app generates an individual-specific biological prediction value corresponding to at least one of: (a) the absorbent article; (b) the portion of the absorbent article; or (c) an individual associated with the absorbent article or portion of the absorbent article. The individual-specific biological prediction value is based on the biological feature depicted within the pixel data of the digital image of the absorbent article or the portion of the absorbent article.
Type: Grant
Filed: June 8, 2022
Date of Patent: April 23, 2024
Assignee: The Procter & Gamble Company
Inventors: Jennifer Joan Gustin, Amirhossein Tavanaei, Kelly Anderson, Donald C. Roe, Roland Engel, Latisha Salaam Zayid
-
Patent number: 11948483
Abstract: An image generation apparatus generates an image whose background region, on which a virtual object appearing in the real space is not superimposed, is drawn in a background color having a predetermined luminance so as to make the region of the shadow of the virtual object look relatively dark. A rendering section renders the shadow of the virtual object appearing in a mesh structure in the real space by rendering not only the virtual object but also the mesh structure in the real space. A pixel value conversion section heightens the colors of all pixels such that the background region uniformly takes on a background color having a predetermined luminance. A shadow/background processing section identifies the region of the shadow of the virtual object, sets the background region other than the shadow to the background color, and sets the shadow region to a color whose luminance is equal to or lower than that of the background color.
Type: Grant
Filed: March 17, 2020
Date of Patent: April 2, 2024
Assignee: Sony Interactive Entertainment Inc.
Inventor: Yoshinori Ohashi
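The shadow/background pass this abstract describes can be sketched as a per-pixel operation: background pixels are forced to a uniform color of fixed luminance, and shadow pixels are scaled so their luminance never exceeds it. This is an illustrative sketch only; the function name, mask representation, and luminance values are assumptions, not the patent's actual implementation.

```python
import numpy as np

def compose_shadow_pass(rendered_rgb, shadow_mask, background_mask,
                        background_luma=0.85, shadow_luma=0.55):
    """Hypothetical sketch of a shadow/background pass: background pixels
    take a uniform grey of fixed luminance, and shadow pixels are scaled
    so their mean channel value stays at or below it, making the shadow
    read as relatively dark against the background."""
    out = rendered_rgb.copy()
    out[background_mask] = background_luma  # uniform background color (grey)
    shadow = out[shadow_mask]
    # scale shadow pixels so their luminance never exceeds shadow_luma
    luma = shadow.mean(axis=-1, keepdims=True)
    scale = np.minimum(1.0, shadow_luma / np.maximum(luma, 1e-6))
    out[shadow_mask] = shadow * scale
    return out
```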
-
Patent number: 11948257
Abstract: Systems and methods for generating an AR image are described herein. A physical camera is used to capture a video of a physical object in front of a physical background. The system then accesses data defining a virtual environment and selects a first position of a virtual camera in the virtual environment. While capturing the video, the system displays the captured video of the physical object such that the physical background is replaced with a view of the virtual environment from the first position of the virtual camera. In response to detecting a movement of the physical camera, the system selects a second position of the virtual camera in the virtual environment based on the detected movement. The system then displays the captured video of the physical object, wherein the view of the physical background is replaced with a view of the virtual environment from the second position of the virtual camera.
Type: Grant
Filed: May 9, 2022
Date of Patent: April 2, 2024
Assignee: Rovi Guides, Inc.
Inventor: Warren Keith Edwards
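The camera coupling in this claim can be sketched minimally: a detected physical-camera movement delta is applied (optionally scaled) to the virtual camera, which then selects the second viewpoint for rendering the background. The class name, scale parameter, and vector representation below are assumptions for illustration.

```python
import numpy as np

class VirtualCameraRig:
    """Toy sketch of coupling a virtual camera to physical camera motion."""

    def __init__(self, start_position, motion_scale=1.0):
        self.position = np.asarray(start_position, dtype=float)
        # motion_scale > 1 exaggerates parallax in the virtual environment
        self.motion_scale = motion_scale

    def on_physical_move(self, delta):
        """Select a second virtual-camera position from the detected movement."""
        self.position = self.position + self.motion_scale * np.asarray(delta, float)
        return self.position
```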
-
Patent number: 11918888
Abstract: Systems, methods, and computer readable media directed to multi-user visual experiences such as interactive gaming experiences and artistic media experiences. A viewing electronic device includes a camera configured to capture images, a display, and a processor coupled to the camera and the display. The processor is configured to capture, with the camera, images of a rotating marker where the rotating marker is presented on a monitor of a remote device and to present, on the display, a visual experience where the visual experience has an adjustable feature. The processor detects a parameter of the rotating marker from the captured images that corresponds to the adjustable feature and updates the visual experience responsive to the detected parameter. The parameter may be one or more of an angle, speed of rotation, direction of rotation, color, or pattern of the rotating marker.
Type: Grant
Filed: May 31, 2022
Date of Patent: March 5, 2024
Assignee: Snap Inc.
Inventors: Andrés Monroy-Hernández, Ava Robinson, Yu Jiang Tham, Rajan Vaish
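One of the marker parameters named above, speed of rotation, can be estimated from marker angles detected in two captured frames; the wrap-around at 360° needs care. A minimal sketch, with the function name and degree convention assumed:

```python
# Hypothetical helper: estimate a rotating marker's speed (degrees per
# second) from its detected angle in two successive frames, handling the
# wrap-around at 360 degrees so 350° -> 10° reads as +20°, not -340°.
def rotation_speed(angle_prev_deg, angle_curr_deg, dt_s):
    delta = (angle_curr_deg - angle_prev_deg + 180.0) % 360.0 - 180.0
    return delta / dt_s
```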
-
Patent number: 11895391
Abstract: The present disclosure generally relates to capturing and displaying video with multiple focal planes. The video includes a subject with a predefined portion of the subject identified in a first focal region. When a set of one or more conditions is met, the video is displayed with a focal plane of the video selected to be outside of the first focal region. When the set of one or more conditions is no longer met, the video is displayed with the focal plane of the video selected to be inside of the first focal region.
Type: Grant
Filed: September 20, 2021
Date of Patent: February 6, 2024
Assignee: Apple Inc.
Inventors: Julian Missig, Guillaume Ardaud
-
Patent number: 11886487
Abstract: The invention relates to a method of generating a media file, the method comprising: generating a first data structure assigning a subset of samples or subsamples of a track to one or more sample groups; generating a second data structure comprising data for describing each of the one or more sample groups, the first data structure comprising a first grouping type and the second data structure comprising a second grouping type, wherein the second data structure comprises data for indicating whether the data for describing each of the one or more sample groups is invariant along time or not; and generating a media file including the samples and the first and second data structures.
Type: Grant
Filed: June 11, 2021
Date of Patent: January 30, 2024
Assignee: Canon Kabushiki Kaisha
Inventors: Frédéric Maze, Franck Denoual, Naël Ouedraogo, Jean Le Feuvre
-
Patent number: 11869160
Abstract: Interference-based augmented reality hosting platforms are presented. Hosting platforms can include networking nodes capable of analyzing a digital representation of a scene to derive interference among elements of the scene. The hosting platform utilizes the interference to adjust the presence of augmented reality objects within an augmented reality experience. Elements of a scene can constructively interfere, enhancing the presence of augmented reality objects, or destructively interfere, suppressing the presence of augmented reality objects.
Type: Grant
Filed: October 24, 2022
Date of Patent: January 9, 2024
Assignee: Nant Holdings IP, LLC
Inventor: Patrick Soon-Shiong
-
Patent number: 11861798
Abstract: Methods, systems, and apparatus, including computer programs encoded on computer storage media, for generating composite images. One of the methods includes maintaining first data associating each location within an environment with a particular time; obtaining an image depicting the environment from a point of view of a display device; obtaining second data characterizing one or more virtual objects; and processing the obtained image and the second data to generate a composite image depicting the one or more virtual objects at respective locations in the environment from the point of view of the display device, wherein the composite image depicts each virtual object according to the particular time that the first data associates with the location of the virtual object in the environment.
Type: Grant
Filed: July 26, 2021
Date of Patent: January 2, 2024
Inventor: Stephen Wilkes
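The "first data" lookup in this method can be sketched as a mapping from environment locations to times, consulted when styling each virtual object for the composite. All names, the coordinate scheme, and the time labels below are assumptions for illustration.

```python
# Toy sketch: each environment location carries its own associated time
# ("first data"), and each virtual object is composited using the time
# associated with its location.
location_times = {(0, 0): "dawn", (10, 0): "noon", (0, 10): "dusk"}

def composite(virtual_objects, default_time="day"):
    """virtual_objects: list of (name, location) pairs. Returns draw
    commands pairing each object with its location's associated time."""
    return [(name, location_times.get(loc, default_time))
            for name, loc in virtual_objects]
```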
-
Patent number: 11836865
Abstract: An augmented reality visualization system is disclosed for viewing non-destructive inspection (NDI) data of a component. The system comprises a component camera configured to capture a real-time image of the component, a component display configured to display an AR image of the component, and an AR controller including an augmented reality processor and a non-transitory tangible storage medium. The storage medium includes processor-executable instructions for controlling the AR processor when the instructions are executed by the AR processor. The instructions access the NDI data of the component and implement a component AR engine to generate the AR image in real time by overlaying the NDI data on the real-time image. The AR engine is configured to detect one or more three-dimensional (3D) features of the component in the real-time image and generate the AR image based on the detected one or more features.
Type: Grant
Filed: September 17, 2021
Date of Patent: December 5, 2023
Assignee: WICHITA STATE UNIVERSITY
Inventors: Waruna Seneviratne, John Tomblin, Christopher Pini
-
Patent number: 11816926
Abstract: The subject technology displays first augmented reality content on a computing device, the first augmented reality content comprising a first output media content. The subject technology provides for display a plurality of selectable graphical items, each of the selectable graphical items corresponding to a different augmented reality content including a set of media content modified utilizing facial synthesis. The subject technology receives a selection of one of the plurality of selectable graphical items. The subject technology, based at least in part on the selection, identifies second augmented reality content. The subject technology provides the second augmented reality content for display on the computing device.
Type: Grant
Filed: March 25, 2022
Date of Patent: November 14, 2023
Assignee: SNAP INC.
Inventors: Ivan Babanin, Valerii Fisiun, Diana Maksimova, Alexey Pchelnikov
-
Patent number: 11810177
Abstract: A method includes: acquiring an image of a first piece of clothing to be collocated; determining information of one or more second pieces of clothing for collocation with the first piece of clothing; determining clothing collocation images containing the information of the one or more second pieces of clothing in a collocation image library; and selecting clothing collocation images matched with the image of the first piece of clothing from the determined clothing collocation images. The information of the one or more second pieces of clothing is pre-marked clothing category information of clothing collocation images in the collocation image library.
Type: Grant
Filed: June 14, 2021
Date of Patent: November 7, 2023
Assignee: BAIDU ONLINE NETWORK TECHNOLOGY (BEIJING) CO., LTD.
Inventor: Changsen Chu
-
Patent number: 11783554
Abstract: Systems and methods for rendering one or more different types of datasets are described. An exemplary method includes: (i) obtaining a first type of dataset, wherein each first data value within the first type of dataset is associated with one or more three-dimensional coordinates, which define a location or region in real space; (ii) obtaining a second type of dataset, wherein each second data value within the second type of dataset is associated with one or more of the three-dimensional coordinates; (iii) spatializing the first type of dataset to create a first type of spatialized dataset; (iv) spatializing the second type of dataset to create a second type of spatialized dataset; (v) aligning the first type of spatialized dataset with the second type of spatialized dataset to create an enhanced three-dimensional spatialized environment; and (vi) rendering the enhanced three-dimensional spatialized environment on a display component.
Type: Grant
Filed: May 10, 2020
Date of Patent: October 10, 2023
Assignee: BADVR, INC.
Inventors: Jad Meouchy, Suzanne Ramona Borders
-
Patent number: 11776184
Abstract: The present disclosure provides systems and methods for image editing. Embodiments of the present disclosure provide an image editing system for performing image object replacement or image region replacement (e.g., an image editing system for replacing an object or region of an image with an object or region from another image). For example, the image editing system may replace a sky portion of an image with a more desirable sky portion from a different replacement image. The original image and the replacement image (e.g., the image including a desirable object or region) include layers of masks. A sky from the replacement image may replace the sky of the image to produce an aesthetically pleasing composite image.
Type: Grant
Filed: March 17, 2021
Date of Patent: October 3, 2023
Assignee: ADOBE, INC.
Inventors: Jianming Zhang, Alan Erickson, I-Ming Pao, Guotong Feng, Kalyan Sunkavalli, Frederick Mandia, Hyunghwan Byun, Betty Leong, Meredith Payne Stotzner, Yukie Takahashi, Quynn Megan Le, Sarah Kong
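The core of mask-based region replacement is a standard alpha composite: wherever the mask is set, take the replacement image; elsewhere, keep the original. This sketch (function name and array layout assumed) omits the layered-mask and sky-detection machinery the abstract describes.

```python
import numpy as np

def replace_region(original, replacement, mask):
    """Composite `replacement` into `original` wherever `mask` is set.
    original, replacement: HxWx3 float arrays; mask: HxW in [0, 1]."""
    m = mask[..., None]  # broadcast the mask over the color channels
    return replacement * m + original * (1.0 - m)
```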
-
Patent number: 11763624
Abstract: A system for play-by-play wagering on a live sporting event through an augmented reality or virtual reality device. Users of a play-by-play wagering network can place wagers on the outcome of individual plays in a live sporting event that are displayed based on the players or elements of the live event that are in the point of view of the user's virtual or augmented reality device. Selected wagers can be augmented with multipliers based on the user's manipulation of the augmented or virtual reality environment, such as running a path through the virtual field that they believe a player will take.
Type: Grant
Filed: November 6, 2020
Date of Patent: September 19, 2023
Assignee: AdrenalineIP
Inventors: Casey Alexander Huke, Michael D'Andrea
-
Patent number: 11733525
Abstract: A method, system, apparatus, and/or device that may include a first optic located a first distance from an optical receiver, the first optic being adapted to: receive environment content from a location in front of the first optic relative to the optical receiver; and alter a focal vergence of the environment content. The method, system, apparatus, and/or device may include a display located a second distance from the optical receiver, the display being adapted to: receive the environment content from the first optic; and deliver the environment content and display content to a second optic. The method, system, apparatus, and/or device may include the second optic located a third distance from the optical receiver, the second optic being adapted to: receive the environment content and the display content from the display; alter the focal vergence of the environment content; and alter a focal vergence of the display content.
Type: Grant
Filed: June 16, 2021
Date of Patent: August 22, 2023
Assignee: WEST TEXAS TECHNOLOGY PARTNERS, LLC
Inventor: Sleiman Itani
-
Patent number: 11712189
Abstract: A dipole group quantification method includes quantitating a dipole group that is a source of signals based on positions of dipoles.
Type: Grant
Filed: October 23, 2020
Date of Patent: August 1, 2023
Assignee: RICOH COMPANY, LTD.
Inventors: Ryoji Hirano, Masayuki Hirata, Toshiharu Nakashima, Miyako Asai
-
Patent number: 11657562
Abstract: The present disclosure relates to systems, non-transitory computer-readable media, and methods that utilize hemispherical clamping for importance sampling of an image-based light (IBL) to generate a digital image of a virtual environment. For example, the disclosed systems identify a hemispherical portion of an IBL image that corresponds to a reflective surface location on a virtual object. The disclosed systems can then clamp the IBL image using one or more importance sampling algorithms to exclude portions of the IBL image outside of the hemispherical portion that do not contribute direct lighting onto the reflective surface location. The disclosed systems can further utilize the one or more importance sampling algorithms to efficiently sample a ray direction between the reflective surface location and the hemispherical portion of the IBL image. In certain embodiments, the disclosed systems use the sampled ray direction to generate a digital image rendering portraying the virtual object.
Type: Grant
Filed: April 19, 2021
Date of Patent: May 23, 2023
Assignee: Adobe Inc.
Inventors: Xin Sun, Milos Hasan, Nathan Carr
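The idea of restricting sampling to the hemisphere that can contribute direct light can be illustrated with textbook cosine-weighted hemisphere sampling: ray directions are generated only above the surface's horizon, around its normal. This is an illustrative stand-in, not the patent's clamping algorithm, and it assumes a unit-length normal.

```python
import math, random

def sample_hemisphere_cosine(normal):
    """Cosine-weighted random direction on the hemisphere around the
    unit vector `normal`; directions below the horizon (which contribute
    no direct light to the surface point) are never generated."""
    u1, u2 = random.random(), random.random()
    r, phi = math.sqrt(u1), 2.0 * math.pi * u2
    # sample in a local frame whose z-axis is the normal
    x, y, z = r * math.cos(phi), r * math.sin(phi), math.sqrt(1.0 - u1)
    nx, ny, nz = normal
    # pick a helper axis not parallel to the normal, then build a basis
    t = (0.0, 1.0, 0.0) if abs(nx) > 0.9 else (1.0, 0.0, 0.0)
    # tangent = normalize(t x n)
    tx, ty, tz = (t[1]*nz - t[2]*ny, t[2]*nx - t[0]*nz, t[0]*ny - t[1]*nx)
    inv = 1.0 / math.sqrt(tx*tx + ty*ty + tz*tz)
    tx, ty, tz = tx*inv, ty*inv, tz*inv
    # bitangent = n x tangent
    bx, by, bz = (ny*tz - nz*ty, nz*tx - nx*tz, nx*ty - ny*tx)
    # rotate the local sample into world space
    return (x*tx + y*bx + z*nx,
            x*ty + y*by + z*ny,
            x*tz + y*bz + z*nz)
```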
-
Patent number: 11635509
Abstract: A radio frequency (RF) imaging device comprising a display receives a three-dimensional (3D) image that is a superposition of two or more images having different image types, which may include at least a 3D RF image of a space disposed behind a surface. A plurality of input control devices receive a user input for manipulating the display of the 3D image. Alternatively or additionally, the RF imaging device may receive a 3D image that is a weighted combination of a plurality of images, which may include a 3D RF image of a space disposed behind a surface, an infrared (IR) image of the surface, and a visible light image of the surface. A user input may specify changes to the weighted combination. In another embodiment, the RF imaging device may include an output device that produces a physical output indicating a detected type of material of an object in the space.
Type: Grant
Filed: October 26, 2020
Date of Patent: April 25, 2023
Assignee: Fluke Corporation
Inventors: Mabood Ghazanfarpour, Brian Knight
-
Patent number: 11607189
Abstract: The present invention relates to a blood vessel image processing method, a blood vessel image processing apparatus, a computer storage medium, and an imaging device. The method includes: obtaining blood vessel geometric structure information of a blood vessel segment of interest; obtaining vital feature information of the blood vessel segment; establishing an association relationship between the blood vessel geometric structure information and the vital feature information; and displaying the blood vessel geometric structure information and the vital feature information in the same image in a mutual fusion manner by using the association relationship as a reference. In this way, the solution can improve users' work efficiency.
Type: Grant
Filed: September 3, 2018
Date of Patent: March 21, 2023
Assignee: SHANGHAI PULSE MEDICAL TECHNOLOGY, INC.
Inventors: Shengxian Tu, Wei Yu, Jiayue Huang, Su Zhang, Xiaogang Fu
-
Patent number: 11580710
Abstract: A multiuser, collaborative augmented reality (AR) system employs individual AR devices for viewing real-world anchors, that is, physical models that are recognizable to the camera and image processing module of the AR device. To mitigate ambiguous configurations when used in the collaborative mode, each anchor is registered with a server to ensure that only uniquely recognizable anchors are simultaneously active at a particular location. The system permits collaborative AR to span multiple sites by associating a portal with an anchor at each site. Using the location of their corresponding AR device as a proxy for their position, AR renditions of the other participating users are provided. This AR system is particularly well suited for games.
Type: Grant
Filed: January 25, 2022
Date of Patent: February 14, 2023
Inventors: Jordan Kent Weisman, William Gibbens Redmann
-
Patent number: 11582412
Abstract: A camera-agnostic core monitor for an enhanced flight vision system (EFVS) is disclosed. In embodiments, a structured light projector (SLP) generates and projects a precise geometric pattern or other like artifact, which is reflected by collimating elements into the EFVS optical path. Within the optical path, the EFVS focal plane array is illuminated by, and detects, the projected artifacts within the scene imagery captured for display by the EFVS. Image processors assess the presentation of the detected artifacts (e.g., position/orientation relative to the expected presentation of the detected artifact within the scene imagery) to verify that the displayed EFVS imagery is not misleading.
Type: Grant
Filed: April 1, 2021
Date of Patent: February 14, 2023
Assignee: Rockwell Collins, Inc.
Inventors: Gregory G. Lajiness, David I. Han
-
Patent number: 11535155
Abstract: Superimposed-image display devices and programs display a guide image providing guidance information to a driver of a vehicle such that the guide image is superimposed on a view ahead of the vehicle and is visually recognized. The systems and programs obtain three-dimensional map information that specifies a three-dimensional shape of a road and a structure nearby the road, and arrange the guide image in the three-dimensional map information based on the three-dimensional shape of the road and the structure nearby the road. The systems and programs obtain a shape of the guide image that is visually recognized from a position of the driver in the three-dimensional map information and display the guide image having the obtained shape.
Type: Grant
Filed: July 19, 2018
Date of Patent: December 27, 2022
Assignee: AISIN CORPORATION
Inventors: Kenji Watanabe, Hiroyuki Miyake, Yuusuke Morita
-
Patent number: 11514652
Abstract: Interference-based augmented reality hosting platforms are presented. Hosting platforms can include networking nodes capable of analyzing a digital representation of a scene to derive interference among elements of the scene. The hosting platform utilizes the interference to adjust the presence of augmented reality objects within an augmented reality experience. Elements of a scene can constructively interfere, enhancing the presence of augmented reality objects, or destructively interfere, suppressing the presence of augmented reality objects.
Type: Grant
Filed: July 27, 2021
Date of Patent: November 29, 2022
Assignee: NANT HOLDINGS IP, LLC
Inventor: Patrick Soon-Shiong
-
Patent number: 11508148
Abstract: The present disclosure relates to systems, computer-implemented methods, and non-transitory computer-readable media for automatically transferring makeup from a reference face image to a target face image using a neural network trained using semi-supervised learning. For example, the disclosed systems can receive, at a neural network, a target face image and a reference face image, where the target face image is selected by a user via a graphical user interface (GUI) and the reference face image has makeup. The systems transfer, by the neural network, the makeup from the reference face image to the target face image, where the neural network is trained to transfer the makeup from the reference face image to the target face image using semi-supervised learning. The systems output for display the makeup on the target face image.
Type: Grant
Filed: March 18, 2020
Date of Patent: November 22, 2022
Assignee: ADOBE INC.
Inventors: Yijun Li, Zhifei Zhang, Richard Zhang, Jingwan Lu
-
Patent number: 11488358
Abstract: Methods and systems are disclosed for creating a shared augmented reality (AR) session. The methods and systems perform operations comprising: receiving, by a second device, a request to join an AR session initialized by a first device; in response to receiving the request, detecting a body corresponding to a user of the first device in one or more images captured by a camera of the second device; identifying a body part of the detected body corresponding to the user of the first device; determining, by the second device, a transformation in the AR session between the first device and the second device using the identified body part; and causing the AR session to be displayed by the second device based on the determined transformation.
Type: Grant
Filed: February 5, 2020
Date of Patent: November 1, 2022
Assignee: Snap Inc.
Inventors: Piers Cowburn, David Li, Isac Andreas Müller Sandvik, Qi Pan, Matan Zohar
-
Patent number: 11468641
Abstract: An augmented reality (AR) device may be communicatively connected to a remote network management platform configured to support a managed network. The AR device may capture an image of a real object in the field of view of an imaging component of the AR device. The real object may be recognized as a known managed object of the managed network. The AR device may also concurrently determine context information indicating a location or physical environment. The AR device may then transmit an identifier of the known managed object and the context information in a message to the management platform. In response, the AR device may receive data associated with the known managed object. The AR device may then display a virtual object in a virtual space superimposed on the captured image of the real object, where the virtual object and the virtual space are based on the received management data.
Type: Grant
Filed: May 11, 2020
Date of Patent: October 11, 2022
Assignee: ServiceNow, Inc.
Inventor: Darius Koohmarey
-
Patent number: 11460817
Abstract: The present disclosure relates to a clothes treating apparatus that operates by executing an artificial intelligence (AI) algorithm and/or a machine learning algorithm in a 5G environment connected for the Internet of Things, and a method for operating the clothes treating apparatus. The method for operating the clothes treating apparatus includes acquiring a clothing image by using a camera to photograph a user wearing clothes and standing in front of a mirror display placed on a front surface of the clothes treating apparatus, analyzing the clothing image, setting an operation mode of the clothes treating apparatus according to the result of analyzing the clothing image, and causing the clothes treating apparatus to operate according to the set operation mode. It is possible to improve user satisfaction by automatically setting and activating an operation mode of a clothes treating apparatus based on clothing image information.
Type: Grant
Filed: November 27, 2019
Date of Patent: October 4, 2022
Assignee: LG Electronics Inc.
Inventors: Mi Rae Kim, Sang Oh Kim, Jun Sang Yun
-
Patent number: 11436932
Abstract: Disclosed herein is a system for facilitating real pilots in real aircraft using augmented and virtual reality to meet in a virtual piece of airspace, in accordance with some embodiments. Accordingly, the system may include a communication device configured for receiving at least one first sensor data corresponding to at least one first sensor associated with a first vehicle (and receiving at least one second sensor data corresponding to at least one second sensor associated with a second vehicle). Further, the communication device may be configured for transmitting at least one second presentation data to at least one second presentation device associated with the second vehicle. Further, the system may include a processing device configured for generating the at least one second presentation data based on the at least one first sensor data and the at least one second sensor data.
Type: Grant
Filed: February 21, 2019
Date of Patent: September 6, 2022
Assignee: RED SIX AEROSPACE INC.
Inventors: Daniel Augustine Robinson, Nikola Vladimir Bicanic, Glenn Thomas Snyder
-
Patent number: 11388122
Abstract: The present application contemplates a method of providing a participant with traversable access to a local environmental context of a target. In preferred embodiments, a context engine accesses multiple views of the local environmental context and stitches together the multiple views to produce a digital, walkabout reality of the local environmental context. Upon a participant/recipient accessing a communication from a sender, the participant is able to use a portal during a viewing session to traverse the digital, walkabout reality associated with a target. It is contemplated that the target is physically located within the local environmental context.
Type: Grant
Filed: February 3, 2020
Date of Patent: July 12, 2022
Assignee: Wormhole Labs, Inc.
Inventors: Robert D. Fish, Curtis Hutten
-
Patent number: 11372655
Abstract: The present disclosure relates to providing a computer-generated reality (CGR) platform for generating CGR environments including virtual and augmented reality environments. In some embodiments, the platform includes an operating-system-level (OS-level) process that simulates and renders content in the CGR environment, and one or more application-level processes that provide information related to the content to be simulated and rendered to the OS-level process.
Type: Grant
Filed: February 25, 2020
Date of Patent: June 28, 2022
Assignee: Apple Inc.
Inventors: Helmut Garstenauer, Martin Garstenauer, Edwin Iskandar, Timothy R. Oriol, Geoffrey Stahl, Cody J. White
-
Patent number: 11360733
Abstract: Methods and systems are disclosed for creating a shared augmented reality (AR) session. The methods and systems perform operations comprising: receiving, by a client device, input that selects a shared augmented reality (AR) experience from a plurality of shared AR experiences; in response to receiving the input, determining one or more resources associated with the selected shared AR experience; determining, by the client device, that two or more users are located within a threshold proximity of the client device; and activating the selected shared AR experience in response to determining that the two or more users are located within the threshold proximity of the client device.
Type: Grant
Filed: February 16, 2021
Date of Patent: June 14, 2022
Assignee: Snap Inc.
Inventors: Ana Maria Cardenas Gasca, Ella Dagan Peled, Andrés Monroy-Hernández, Ava Robinson, Yu Jiang Tham, Rajan Vaish
-
Patent number: 11360576
Abstract: A method, system, and computer program product for motion-based user interfaces are provided. The method identifies a navigation mode indication. A navigation interface is generated at a display device and configured based on the navigation mode indication to include a first graphical content. At least a portion of the navigation interface is modified based on one or more movements of the mobile computing device. A portion of the navigation interface is selected based on detecting a pressure change on at least a portion of a mobile computing device. The method presents a second graphical content on the display device based on the selected portion of the navigation interface.
Type: Grant
Filed: December 21, 2020
Date of Patent: June 14, 2022
Assignee: International Business Machines Corporation
Inventors: Sarbajit K. Rakshit, Justin David Weisz
-
Patent number: 11354011
Abstract: A position of a cursor may be detected within an augmented reality (AR) display of a physical space captured by a camera. A snapping range of a snap-select function of the cursor may be dynamically changed in response to the position of the cursor. Accordingly, a user may place or hold a camera in a location to view an associated AR display, and may easily and precisely execute a snap-select function to select a desired AR object, even when the AR display includes AR objects that are of different sizes, or are different distances from the camera.
Type: Grant
Filed: December 4, 2019
Date of Patent: June 7, 2022
Assignee: GOOGLE LLC
Inventor: Xavier Benavides Palos
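A dynamically changing snapping range can be sketched as a selection radius that grows with an object's distance from the camera, so distant (screen-small) AR objects stay easy to hit. The function name, object tuple layout, and range formula are assumptions for illustration, not the patented method.

```python
import math

def snap_select(cursor_xy, objects, base_range=20.0, range_per_meter=5.0):
    """Illustrative distance-aware snap-select. `objects` is a list of
    (name, screen_xy, distance_m) tuples; each object's snapping range in
    screen pixels grows with its distance from the camera. Returns the
    name of the nearest object inside its own range, or None."""
    best, best_d = None, float("inf")
    for name, (ox, oy), dist_m in objects:
        snap_range = base_range + range_per_meter * dist_m  # dynamic range
        d = math.hypot(cursor_xy[0] - ox, cursor_xy[1] - oy)
        if d <= snap_range and d < best_d:
            best, best_d = name, d
    return best
```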
-
Patent number: 11340850
Abstract: The disclosure describes a non-transitory computer-readable recording medium storing a virtual label display program for executing steps comprising a composite image generating step, a composite image output step, a determining step, and a notifying step. In the composite image generating step, real image data of a desired field of view and virtual image data of a label are combined. In the composite image output step, composite image data is output to a display device, and a virtual image of the label is superimposed and displayed on the display device. In the determining step, it is determined whether a desired suitability is satisfied between the exterior appearance of a background object and the exterior appearance of the label, based on the real image data and the virtual image data. In the notifying step, a predetermined suitability notification is made.
Type: Grant
Filed: March 30, 2020
Date of Patent: May 24, 2022
Assignee: BROTHER KOGYO KABUSHIKI KAISHA
Inventors: Feng Zhu, Keigo Kako
-
Patent number: 11327504
Abstract: A method of mobile automation apparatus localization in a navigation controller includes: controlling a depth sensor to capture a plurality of depth measurements corresponding to an area containing a navigational structure; selecting a primary subset of the depth measurements; selecting, from the primary subset, a corner candidate subset of the depth measurements; generating, from the corner candidate subset, a corner edge corresponding to the navigational structure; selecting an aisle subset of the depth measurements from the primary subset, according to the corner edge; selecting, from the aisle subset, a local minimum depth measurement for each of a plurality of sampling planes extending from the depth sensor; generating a shelf plane from the local minimum depth measurements; and updating a localization of the mobile automation apparatus based on the corner edge and the shelf plane.
Type: Grant
Filed: April 5, 2018
Date of Patent: May 10, 2022
Assignee: Symbol Technologies, LLC
Inventors: Feng Cao, Harsoveet Singh, Richard Jeffrey Rzeszutek, Jingxing Qian, Jonathan Kelly
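The per-plane local-minimum step above can be illustrated simply: along each sampling plane the closest depth return is taken as the shelf edge, and the shelf plane distance is then estimated from those minima. The median is an assumed robust choice and the names are illustrative; the patent describes generating a full plane, not a single distance:

```python
def shelf_plane_distance(plane_measurements):
    """Given depth measurements grouped by sampling plane, take the local
    minimum per plane (the nearest return, i.e. the shelf edge) and
    estimate the shelf plane distance as the median of those minima."""
    minima = sorted(min(m) for m in plane_measurements)
    n = len(minima)
    mid = n // 2
    return minima[mid] if n % 2 else (minima[mid - 1] + minima[mid]) / 2.0
```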
-
Patent number: 11324566
Abstract: A method of guiding a surgical instrument during sinus surgery on a patient including receiving pre-operative patient data marked with a plurality of points defining a drainage pathway and determining a position of the surgical instrument relative to the planned drainage pathway. The method may further comprise generating an augmented reality image including a first visualization element representing the drainage pathway and a second visualization element comprising a plurality of circles spaced along the planned drainage pathway, and displaying the augmented reality image to help guide the surgical instrument. The method may also comprise generating an augmented reality image including a first visualization element representing the drainage pathway and a second visualization element representing the determined position of the surgical instrument relative to the drainage pathway.
Type: Grant
Filed: December 19, 2019
Date of Patent: May 10, 2022
Assignee: Stryker European Operations Limited
Inventors: Bartosz Kosmecki, Christopher Özbek, Christian Winne
-
Patent number: 11285371
Abstract: A video reproduction apparatus determines a position of a joint of a performer based on a sensing result from a sensor. The video reproduction apparatus calculates an angle formed by a joint used for scoring an element of an athletic event, using calculation information regarding a calculation formula for the angle and the determined position of the joint of the performer. The video reproduction apparatus displays, for comparison, a performance image of the performer acquired from a camera and a scoring image in which the angle is displayed on a 3D model image of the performer generated according to the sensing result.
Type: Grant
Filed: November 18, 2019
Date of Patent: March 29, 2022
Assignee: FUJITSU LIMITED
Inventors: Kazumi Kubota, Tsuyoshi Matsumoto, Satoshi Shimizu, Hirohisa Naito, Takuya Masuda
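The joint-angle calculation at the core of this abstract reduces to a dot product between the two limb segments meeting at the joint. A generic sketch of that geometry, not the patent's specific calculation formula:

```python
import math

def joint_angle(a, b, c):
    """Angle at joint b (in degrees) formed by segments b->a and b->c,
    computed from 3D joint positions via the dot product."""
    v1 = [a[i] - b[i] for i in range(3)]
    v2 = [c[i] - b[i] for i in range(3)]
    dot = sum(x * y for x, y in zip(v1, v2))
    n1 = math.sqrt(sum(x * x for x in v1))
    n2 = math.sqrt(sum(x * x for x in v2))
    return math.degrees(math.acos(dot / (n1 * n2)))

# A right angle at the joint (e.g. a bent elbow):
print(round(joint_angle((1, 0, 0), (0, 0, 0), (0, 1, 0))))  # 90
```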
-
Patent number: 11270367
Abstract: The present disclosure generally relates to providing product information. The appearance of a first product and a second product is detected within a field of view of one or more image sensors. Movement of the first product or the second product relative to one another is then detected. If the movement of the first product or the second product relative to one another causes the first product to come within a threshold distance of the second product, then comparison information is displayed at a location at least partially between the first product and the second product.
Type: Grant
Filed: April 8, 2020
Date of Patent: March 8, 2022
Assignee: Apple Inc.
Inventors: Golnaz Abdollahian, Earl M. Olson
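The threshold test and the placement "at least partially between" the two products can be sketched as a midpoint anchor. This is illustrative only; the threshold value, names, and 2D coordinates are assumptions:

```python
def comparison_anchor(p1, p2, threshold=0.5):
    """If the two detected products are within `threshold` of each other,
    return the midpoint where comparison information should be anchored;
    otherwise return None (no comparison is displayed)."""
    d = sum((a - b) ** 2 for a, b in zip(p1, p2)) ** 0.5
    if d <= threshold:
        return tuple((a + b) / 2 for a, b in zip(p1, p2))
    return None
```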
-
Patent number: 11264057
Abstract: A method of facilitating modified content play such that modification actions may be implemented during play of an original content form. The modification actions may be specified by users to modify the original content form. The modifications may be disseminated to subscribers or other users desiring similar content modifications. The method may be useful in social networking systems, allowing social members to share commentary and otherwise modify original content forms to include their personal reflections.
Type: Grant
Filed: June 13, 2017
Date of Patent: March 1, 2022
Assignee: Cable Television Laboratories, Inc.
Inventors: Judson D. Cary, Frank Sandoval, David E. Agranoff, David K. Broberg, Stephen G. Glennon
-
Patent number: 11250598
Abstract: An image generation apparatus includes a memory and one or more processors configured to: receive moving-object-related information from a moving object having identification information that is recognizable from outside the moving object; receive, from a usage terminal configured to image the moving object, usage terminal information and imaging data including an image of the moving object; specify the moving object based on the identification information included in the imaging data; and generate superimposition image data based on the moving-object-related information of the specified moving object and the usage terminal information of the usage terminal that output the imaging data, the superimposition image data being image data to be displayed on a display unit of the usage terminal while superimposed on the imaging data.
Type: Grant
Filed: September 3, 2019
Date of Patent: February 15, 2022
Assignee: TOYOTA JIDOSHA KABUSHIKI KAISHA
Inventor: Takahiro Shiga
-
Patent number: 11245872
Abstract: A merged reality system comprises at least one server storing a virtual world system comprising one or more virtual objects, the virtual objects including virtual replicas of at least a first location, a second location, and real world elements in the at least first and second locations. The at least one server is configured to receive, from a plurality of connected devices communicating with the at least one server through a network, real-world data from real-world elements in the first and second locations; use the real-world data from the first and second locations in the virtual world system to enrich and synchronize the virtual replicas with corresponding real-world elements; and overlap and stream at least one portion of the real-world data from the second location onto, e.g., one or more surfaces of the virtual replica of the first location.
Type: Grant
Filed: June 17, 2020
Date of Patent: February 8, 2022
Assignee: THE CALANY Holding S. À R.L.
Inventor: Cevat Yerli
-
Patent number: 11209261
Abstract: Systems and methods for producing a traceable measurement of a component are provided. One or more augmented reality (AR) graphical elements, indicative of a measurement to be performed on the component using a local measurement instrument, are generated. The AR graphical elements are rendered via an AR device. One or more measurement values, associated with the measurement as performed on the component using the local measurement instrument, are obtained. An augmented image comprising a representation of the component, a representation of the local measurement instrument obtaining the measurement values, and a representation of the AR graphical elements is acquired. The measurement values are stored in association with the augmented image.
Type: Grant
Filed: August 22, 2019
Date of Patent: December 28, 2021
Assignee: INNOVMETRIC LOGICIELS INC.
Inventors: Louis-Jérôme Doyon, Marc Soucy
-
Patent number: 11210863
Abstract: Devices, systems, and methods are provided for real-time object placement guidance in an augmented reality experience. An example method may include receiving, by a device having a sensor, an indication of an object to be viewed in a physical environment of the device. The example method may also include determining a 3D model of the physical environment using data of the physical environment captured by the sensor. The example method may also include determining that a first surface in the 3D model of the environment is a first floor space, and a second surface in the 3D model of the environment is a first wall space. The example method may also include determining that a portion of the first surface is unoccupied and sized to fit the object.
Type: Grant
Filed: August 24, 2020
Date of Patent: December 28, 2021
Assignee: A9.com, Inc.
Inventors: Geng Yan, Xing Zhang, Amit Kumar K C, Arnab Dhua, Yu Lou
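Checking that "a portion of the first surface is unoccupied and sized to fit the object" can be approximated with a sliding-window scan over a floor occupancy grid. This is a simplified 2D stand-in for the patent's 3D model, with all names invented for illustration:

```python
def fits_on_floor(occupancy, footprint):
    """Scan a boolean floor-occupancy grid (True = occupied) for an
    unoccupied window large enough for the object's footprint
    (rows, cols). Returns the top-left cell of the first fitting
    placement, or None if the object cannot be placed."""
    fr, fc = footprint
    rows, cols = len(occupancy), len(occupancy[0])
    for r in range(rows - fr + 1):
        for c in range(cols - fc + 1):
            if all(not occupancy[r + i][c + j]
                   for i in range(fr) for j in range(fc)):
                return (r, c)
    return None
```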
-
Patent number: 11188786
Abstract: A sensor data processing system and method is described. Contemplated systems and methods derive a first recognition trait of an object from a first data set that represents the object in a first environmental state. A second recognition trait of the object is then derived from a second data set that represents the object in a second environmental state. The sensor data processing systems and methods then identify a mapping of elements of the first and second recognition traits in a new representation space. The mapping of elements satisfies a variance criterion for corresponding elements, which allows the mapping to be used for object recognition. The sensor data processing systems and methods described herein provide new object recognition techniques that are computationally efficient and can be performed in real time by currently available mobile phone technology.
Type: Grant
Filed: August 12, 2019
Date of Patent: November 30, 2021
Assignee: Nant Holdings IP, LLC
Inventors: Kamil Wnuk, Jeremi Sudol, Bing Song, Matheen Siddiqui, David McKinnon
-
Patent number: 11169602
Abstract: An apparatus is configured to, based on virtual reality content for presentation to a user in a virtual reality space (wherein a virtual reality view presented to the user provides for viewing of the virtual reality content) and an identified physical real-world object, provide for display of an object image that includes at least a representation of the identified physical real-world object overlaid on the virtual reality content presented in the virtual reality view. The object image is displayed at a location in the virtual reality space that corresponds to the real-world location of the identified physical real-world object relative to the user. The object image further includes at least a representation of a further physical real-world object that is identified as potentially hindering physical user access to the identified physical real-world object.
Type: Grant
Filed: November 21, 2017
Date of Patent: November 9, 2021
Assignee: Nokia Technologies Oy
Inventors: Arto Lehtiniemi, Antti Eronen, Jussi Leppänen, Juha Arrasvuori
-
Patent number: 11164001
Abstract: Embodiments of the disclosure describe a method for automatically annotating a target object in images. In one embodiment, the method comprises: obtaining an image training sample including a plurality of images, wherein each image is obtained by photographing the same target object and adjacent images share one or more environmental feature points; using one of the images as a reference image to determine a three-dimensional reference coordinate system, and creating a three-dimensional space model based on the reference coordinate system; determining the position information of the target object in the three-dimensional reference coordinate system once the three-dimensional space model has been moved to the position of the target object in the reference image; and mapping the three-dimensional space model to the image plane of each image, based on camera pose information determined from the environmental feature points in each image.
Type: Grant
Filed: September 19, 2018
Date of Patent: November 2, 2021
Assignee: ALIBABA GROUP HOLDING LIMITED
Inventors: Boren Li, Hongwei Xie
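The final mapping step above (projecting the positioned 3D space model into each image plane using per-image camera poses) follows the standard pinhole camera model. A generic sketch of that projection, with assumed intrinsics and names, not the patent's implementation:

```python
def project_point(point, rotation, translation, fx, fy, cx, cy):
    """Project a 3D point expressed in the reference coordinate system
    into an image: transform into the camera frame (x_cam = R @ X + t),
    apply the perspective divide, then the camera intrinsics."""
    xc = [sum(rotation[i][j] * point[j] for j in range(3)) + translation[i]
          for i in range(3)]
    return (fx * xc[0] / xc[2] + cx, fy * xc[1] / xc[2] + cy)
```

In practice each model vertex would be projected this way, once per image, using that image's estimated rotation and translation.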
-
Patent number: 11164240
Abstract: Embodiments disclosed herein include virtual apparel fitting systems configured to perform methods comprising generating a first virtual garment carousel that includes images of garments. In operation, a user scrolling through the virtual garment carousel causes a graphical user interface to display images of the garments in the carousel superposed over an image of the user, where virtual fit points of each garment image align with virtual fit points on the image of the user, thereby enabling the user to see how the garments would look on him or her.
Type: Grant
Filed: September 30, 2019
Date of Patent: November 2, 2021
Assignee: SelfieStyler, Inc.
Inventors: Kyle Mitchell, Julianne Applegate, Muhammad Ibrahim, Waqas Muddasir, Jeff Portaro, Dustin Ledo
-
Patent number: 11132764
Abstract: Embodiments can facilitate performing a localized weather analysis for a portion of the sky. To achieve this, optical sensors can be placed in a capture grid network to capture images of the sky portion. The capture grid network may be logically divided into grids. The images captured by the optical sensors may include meta information indicating the time and the locations on the capture grid network where the images were captured. The images can be combined to obtain wide-view images of the sky portion at various points in time. The wide-view images can be used to perform weather analysis, such as cloud analysis.
Type: Grant
Filed: June 26, 2019
Date of Patent: September 28, 2021
Assignee: TIANJIN KANTIAN TECHNOLOGY CO., LTD.
Inventor: Simeng Yan
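Combining grid captures into wide-view images per point in time can start with bucketing the captured images by their timestamp metadata, one bucket per time window and one image per grid cell. A minimal sketch; the field names and window size are assumptions:

```python
from collections import defaultdict

def bucket_captures(captures, window_s=60):
    """Group captures by time window so one wide-view composite can be
    assembled per window from the images of every grid cell. Each capture
    is a dict with 'time' (epoch seconds), 'cell', and 'image' keys."""
    buckets = defaultdict(dict)
    for cap in captures:
        buckets[cap["time"] // window_s][cap["cell"]] = cap["image"]
    return dict(buckets)
```

Stitching the per-cell images within each bucket into the actual wide-view mosaic would then be a separate image-registration step.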