Patents Examined by Martin Mushambo
-
Patent number: 11398089
Abstract: Techniques are provided for identifying objects (such as products within a physical store) within a captured video scene and indicating which object in the captured scene matches a desired object requested by a user. The matching object is then displayed in an accentuated manner to the user in real-time (via augmented reality). Object identification is carried out via a multimodal methodology. Objects within the captured video scene are identified using a neural network trained to identify different types of objects. The identified objects can then be compared against a database of pre-stored images of the desired product to determine if a close match is found. Additionally, text on the identified objects is analyzed and compared to the text of the desired object. Based on either or both identification methods, the desired object is indicated to the user on their display, via an augmented reality graphic.
Type: Grant
Filed: February 17, 2021
Date of Patent: July 26, 2022
Assignee: Adobe Inc.
Inventors: Amol Jindal, Ajay Bedi
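The multimodal matching the abstract describes (image similarity plus OCR text comparison) can be sketched roughly as follows. This is an illustrative Python sketch, not the patented implementation; the function names, thresholds, and scoring scheme are assumptions.

```python
# Hypothetical sketch: flag a detected object as the desired product when
# either the image-similarity score or the OCR text overlap is confident.
# Thresholds and the word-overlap metric are illustrative assumptions.

def text_overlap(detected_text: str, target_text: str) -> float:
    """Fraction of the target product's words found on the detected object."""
    target_words = set(target_text.lower().split())
    if not target_words:
        return 0.0
    detected_words = set(detected_text.lower().split())
    return len(target_words & detected_words) / len(target_words)

def is_match(image_similarity: float, detected_text: str, target_text: str,
             image_threshold: float = 0.8, text_threshold: float = 0.5) -> bool:
    """Match when either identification method is confident enough."""
    return (image_similarity >= image_threshold
            or text_overlap(detected_text, target_text) >= text_threshold)
```

In the patent's flow, a positive `is_match` result would drive the augmented-reality highlight over that object region.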
-
Patent number: 11393171
Abstract: Aspects of the present disclosure relate to controlling virtual reality (VR) content displayed on a VR head mounted display (HMD). Communication can be established between a computer system, a VR HMD, and a mobile device. A user input configured to control VR content displayed on a display of the VR HMD can be received on the mobile device. The VR content displayed on the VR HMD can then be controlled based on the user input received on the mobile device.
Type: Grant
Filed: July 21, 2020
Date of Patent: July 19, 2022
Assignee: International Business Machines Corporation
Inventors: Namit Kabra, Smitkumar Narotambhai Marvaniya, Yannick Saillet, Kunjavihari Madhav Kashalikar
-
Patent number: 11392891
Abstract: A method includes: obtaining, from an image sensor mounted on a mobile automation apparatus, an image representing a plurality of items on a support structure in a facility; responsive to detection of the items in the image, for each item: obtaining an item region defining an area of the image containing the item; obtaining a performance metric corresponding to the item; encoding the performance metric as a visual attribute; and generating an item overlay using the visual attribute; and controlling a display to present the image, and each of the item overlays placed over the corresponding item regions.
Type: Grant
Filed: November 3, 2020
Date of Patent: July 19, 2022
Assignee: Zebra Technologies Corporation
Inventors: Adrian J. Stagg, Marco Perrella, Jordan K. Varley, Patrick Kennedy
-
Patent number: 11386625
Abstract: Aspects of the present disclosure involve a system and a method for performing operations comprising: capturing, by a client device implementing a messaging application, an image; receiving, from a server, an identification of an augmented reality experience associated with the image, the server identifying the augmented reality experience by assigning visual words to features of the image and searching a visual search database to identify a plurality of marker images, and the server retrieving the augmented reality experience associated with a given one of the plurality of marker images; automatically, in response to capturing the image, displaying one or more graphical elements of the identified augmented reality experience; and receiving input from the user interacting with the one or more graphical elements.
Type: Grant
Filed: October 13, 2020
Date of Patent: July 12, 2022
Assignee: Snap Inc.
Inventors: Hao Hu, Kevin Sarabia Dela Rosa, Bogdan Maksymchuk, Volodymyr Piven, Ekaterina Simbereva
-
Patent number: 11383275
Abstract: An approach is provided for tracking and managing physical mail items using image recognition. An image is captured of a mail item and may include any information on the mail item, such as sender information, recipient information, a postmark, a cancellation, as well as artifacts of the mail item, such as seams, markings, coloration, texture, damage, etc. A unique value is generated for the image, for example, by processing the image data for the image using one or more hash functions to generate a hash value. The hash value uniquely identifies the mail item based upon the information included in the image, such as the sender and recipient information, postmark, cancellation, artifacts, etc., and is used to track and manage the mail item.
Type: Grant
Filed: March 15, 2019
Date of Patent: July 12, 2022
Assignee: RICOH COMPANY, LTD.
Inventors: Nicole Blohm, Steve Cousins
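The core identification step the abstract describes, hashing the captured image to get a reproducible tracking value, can be sketched in a few lines. SHA-256 is an assumption here; the abstract only says "one or more hash functions".

```python
# Minimal sketch of the hash-based identification described above: the raw
# image bytes are hashed to produce a value that serves as the mail item's
# tracking identifier. The choice of SHA-256 is an illustrative assumption.
import hashlib

def mail_item_id(image_bytes: bytes) -> str:
    """Derive a reproducible identifier from a captured image of a mail item."""
    return hashlib.sha256(image_bytes).hexdigest()
```

Because the same image bytes always hash to the same value, the item can be re-identified at each tracking point without attaching a physical label, which is the tracking property the approach relies on.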
-
Patent number: 11380022
Abstract: An electronic device and method are provided for content modification in a shared session among multiple head-mounted display (HMD) devices. The electronic device determines emotional state information associated with a wearer of each of a plurality of HMD devices. Each HMD device renders media content in a computer-simulated environment and the emotional state information corresponds to a first portion of the rendered media content. The electronic device constructs an input feature for a first neural network based on the first portion of the rendered media content and the emotional state information. The electronic device selects, from a set of content modification operations, a first content modification operation based on an application of the first neural network on the input feature. Thereafter, the electronic device modifies the rendered media content based on the selected first content modification operation. The modified media content is rendered on at least one HMD device.
Type: Grant
Filed: October 22, 2020
Date of Patent: July 5, 2022
Assignee: SONY GROUP CORPORATION
Inventors: Sandeep Rajarathnam, Prashanth Puttamalla
-
Patent number: 11380137
Abstract: A motion analysis device, a motion analysis method and a recording medium for storing a motion analysis program that make it possible to use a display region more efficiently are provided. The motion analysis device includes an acquisition part that acquires time-series data relating to an operation performed by an operator, an analysis part that analyzes the time-series data and generates motion data indicating a type and execution time of an elemental motion, a generation part that excludes data corresponding to a stop period of the operator which is taken until an initial elemental motion is started from the motion data, and generates shortened motion data, and a display control part that performs control to differentiate periods corresponding to different elemental motions and display the shortened motion data on a display.
Type: Grant
Filed: October 14, 2020
Date of Patent: July 5, 2022
Assignee: OMRON Corporation
Inventors: Masashi Miyazaki, Hirotaka Wada
-
Patent number: 11373305
Abstract: An image processing method is provided, including: obtaining a target image; invoking an image recognition model including: a backbone network, a pooling module and a dilated convolution module that are connected to the backbone network and that are parallel to each other, and a fusion module connected to the pooling module and the dilated convolution module; performing feature extraction on the target image by extracting, using the backbone network, a feature map of the target image, separately processing, using the pooling module and the dilated convolution module, the feature map, to obtain a first result outputted by the pooling module and a second result outputted by the dilated convolution module, and fusing the first result and the second result by using the fusion module into a model recognition result of the target image; and determining a semantic segmentation labeled image of the target image based on the model recognition result.
Type: Grant
Filed: August 10, 2020
Date of Patent: June 28, 2022
Assignee: TENCENT TECHNOLOGY (SHENZHEN) COMPANY LIMITED
Inventors: Ruixin Zhang, Xinyang Jiang, Xing Sun, Xiaowei Guo
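The parallel pooling / dilated-convolution structure described above can be sketched in PyTorch. The layer sizes, the global-average-pooling choice, and the concatenation-based fusion are all assumptions for illustration; the abstract specifies only that the two branches run in parallel on the backbone's feature map and that their results are fused.

```python
# Illustrative PyTorch sketch of the parallel head described above.
# Channel counts, dilation rate, and concatenation fusion are assumptions.
import torch
import torch.nn as nn

class ParallelHead(nn.Module):
    def __init__(self, channels: int, num_classes: int):
        super().__init__()
        # Pooling branch: global context, later broadcast over the map.
        self.pool = nn.AdaptiveAvgPool2d(1)
        # Dilated branch: enlarged receptive field at full resolution.
        self.dilated = nn.Conv2d(channels, channels, 3, padding=2, dilation=2)
        # Fusion: concatenate both results, then classify per pixel.
        self.fuse = nn.Conv2d(2 * channels, num_classes, 1)

    def forward(self, feature_map: torch.Tensor) -> torch.Tensor:
        h, w = feature_map.shape[-2:]
        pooled = self.pool(feature_map).expand(-1, -1, h, w)  # first result
        dilated = self.dilated(feature_map)                   # second result
        return self.fuse(torch.cat([pooled, dilated], dim=1))
```

The per-pixel class scores this head produces would then be converted (e.g., by an argmax over the class dimension) into the semantic segmentation labeled image the method outputs.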
-
Patent number: 11373384
Abstract: This application provides a method for configuring parameters of a three-dimensional face model. The method includes: obtaining a reference face image; identifying a key facial point on the reference face image to obtain key point coordinates as reference coordinates; and determining a recommended parameter set in a face parameter value space according to the reference coordinates. The first projected coordinates are projected coordinates of the key facial point obtained by projecting a three-dimensional face model corresponding to the recommended parameter set onto a coordinate system. The proximity of the first projected coordinates to the reference coordinates meets a preset condition.
Type: Grant
Filed: February 26, 2021
Date of Patent: June 28, 2022
Assignee: TENCENT TECHNOLOGY (SHENZHEN) COMPANY LIMITED
Inventors: Mu Hu, Sirui Gao, Yonggen Ling, Yitong Wang, Linchao Bao, Wei Liu
-
Patent number: 11372253
Abstract: Various implementations disclosed herein include devices, systems, and methods that enable improved display of virtual content in computer generated reality (CGR) environments. In some implementations, the CGR environment is provided at an electronic device based on a field of view (FOV) of the device and a position of virtual content within the FOV. A display characteristic of the virtual object is adjusted to minimize or negate any adverse effects of the virtual object or a portion of the virtual object falling outside of the FOV of the electronic device.
Type: Grant
Filed: July 30, 2020
Date of Patent: June 28, 2022
Assignee: APPLE INC.
Inventor: Luis R. Deliz Centeno
-
Patent number: 11367254
Abstract: A method, computer-readable storage medium, and device for generating a character model. The method comprises: receiving an input image of a reference subject; processing the input image to generate a normalized image; identifying a set of features present in the normalized image, wherein each feature in the set of features corresponds to a portion of a head or body of the reference subject; for each feature in the set of features, processing at least a portion of the normalized image including the feature by a neural network model corresponding to the feature to generate a parameter vector corresponding to the feature; and combining the parameter vectors output by respective neural network models corresponding to respective features in the set of features to generate a parameterized character model corresponding to the reference subject in the input image.
Type: Grant
Filed: April 21, 2020
Date of Patent: June 21, 2022
Assignee: Electronic Arts Inc.
Inventors: Igor Borovikov, Pawel Piotr Wrotek, Mohsen Sardari, Harold Chaput, Navid Aghdaie, Kazi Zaman
-
Patent number: 11366322
Abstract: A virtual image generation system for use by an end user comprises a projection subsystem configured for generating a collimated light beam, and a display configured to emit light rays in response to the collimated light beam to display a pixel of an image frame to the end user. The pixel has a location encoded with angles of the emitted light rays. The virtual image generation system further comprises a sensing assembly configured for sensing at least one parameter indicative of at least one of the emitted light ray angles, and a control subsystem configured for generating image data defining a location of the pixel, and controlling an angle of the light beam relative to the display based on the defined location of the pixel and the sensed parameter(s).
Type: Grant
Filed: December 9, 2020
Date of Patent: June 21, 2022
Assignee: Magic Leap, Inc.
Inventors: Ivan L. Yeoh, Lionel Ernest Edwin, Robert Dale Tekolste
-
Patent number: 11367233
Abstract: A system and method of creating customized characters and selectively displaying them in an electronic display, such as an augmented reality or virtual reality display, is provided. A digital character may be provided by a character provider for customization by others using the system. Such customizations may be instantiated in user devices that provide electronic displays. Instantiation of the custom digital character may be conditioned on one or more trigger conditions, which may be specified by the character customizer. For example, a digital character customized using the system may be conditioned on triggering events in the real-world or in a virtual world. When a relevant triggering condition is satisfied at a user device, the custom character (i.e., information for instantiating the custom character) may be transmitted to that user device. In this manner, the system may push custom characters to user devices that satisfy the triggering condition.
Type: Grant
Filed: November 5, 2020
Date of Patent: June 21, 2022
Assignee: Pure Imagination Holdings, LLC
Inventors: Lisa Gai-Tzen Wong, Amit Tishler, Richard Paul Weeks
-
Patent number: 11361064
Abstract: A method provides for a field of view (FOV) of a smart contact lens of a user, such that the FOV includes a plurality of segments of the FOV. A device is identified from object recognition performed on image data from the smart contact lens and viewed within a first segment of the FOV. A key is transmitted to the device that includes credentials of the user of the smart contact lens to authenticate the user to the device that is viewed within the first segment of the FOV. A first level of access to the device is provided, based on viewing the device in the first segment of the FOV, and in response to changing the viewing of the device to a second segment of the FOV, providing a second level of access to the device associated with viewing the device in the second segment of the FOV.
Type: Grant
Filed: May 7, 2020
Date of Patent: June 14, 2022
Assignee: International Business Machines Corporation
Inventors: Soma Shekar Naganna, Sarbajit K. Rakshit, Abhishek Seth, Venkata JayaPrakash Jinka
-
Patent number: 11351005
Abstract: A method comprises displaying a surgical environment image. The surgical environment image includes a virtual control element for controlling a component of a surgical system, and the virtual control element includes a real-time image of the component of the surgical system in the surgical environment image. The method further comprises displaying an image of a body part of a user. The body part is used to interact with the virtual control element. The method further comprises receiving a gesture of the body part of the user in a predetermined motion, via a gesture based input device registering movement of the body part of the user, while the body part interacts with the virtual control element. The method further comprises adjusting a setting of the component of the surgical system based on the received gesture.
Type: Grant
Filed: October 18, 2018
Date of Patent: June 7, 2022
Assignee: INTUITIVE SURGICAL OPERATIONS, INC.
Inventors: Brandon D. Itkowitz, Simon P. DiMaio, Paul W. Mohr, Theodore W. Rogers
-
Patent number: 11340707
Abstract: Examples are disclosed that relate to hand gesture-based emojis. One example provides, on a display device, a method comprising receiving hand tracking data representing a pose of a hand in a coordinate system, based on the hand tracking data, recognizing a hand gesture, and identifying an emoji corresponding to the hand gesture. The method further comprises presenting the emoji on the display device, and sending an instruction to one or more other display devices to present the emoji.
Type: Grant
Filed: May 29, 2020
Date of Patent: May 24, 2022
Assignee: Microsoft Technology Licensing, LLC
Inventors: Julia Schwarz, Michael Harley Notter, Jenny Kam, Sheng Kai Tang, Kenneth Mitchell Jakubzak, Adam Edwin Behringer, Amy Mun Hong, Joshua Kyle Neff, Sophie Stellmach, Mathew J. Lamb, Nicholas Ferianc Kamuda
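The gesture-to-emoji step the abstract describes reduces to a lookup once a gesture has been recognized from hand tracking data. The gesture names and emoji choices below are illustrative assumptions, not taken from the patent.

```python
# Hypothetical sketch of the recognition-to-presentation step: map a
# recognized hand gesture to an emoji, which the device would then present
# locally and instruct other display devices to present as well.
from typing import Optional

GESTURE_EMOJI = {
    "thumbs_up": "\U0001F44D",  # 👍
    "wave": "\U0001F44B",       # 👋
    "clap": "\U0001F44F",       # 👏
}

def emoji_for_gesture(gesture: str) -> Optional[str]:
    """Return the emoji for a recognized gesture, or None if unmapped."""
    return GESTURE_EMOJI.get(gesture)
```

In the described method, the result of this lookup is both rendered on the local display and sent as an instruction to the other display devices in the session.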
-
Patent number: 11340072
Abstract: Provided is a mechanism that enables the balance between the satisfaction of measurement accuracy required for an application and the suppression of power consumption. An information processing apparatus includes a control unit that acquires information indicating required accuracy of measurement information based on a result of detection by a sensor from the application that uses the measurement information, and controls the sensor on the basis of the information indicating the required accuracy.
Type: Grant
Filed: December 26, 2017
Date of Patent: May 24, 2022
Assignee: SONY CORPORATION
Inventors: Takahiro Tsujii, Tomohisa Takaoka, Haruto Takeda
-
Patent number: 11341383
Abstract: The disclosure is directed towards methods and apparatus to detect effective tiling area and fill tiles efficiently. The method improves efficiency by not filling tiles within an inner box in a shape having a large unfilled area. One example method includes detecting an inner box, determining whether the detected inner box is big enough for pre-clipping, and confirming that the outer clip path contains the inner box. When filling tiles into a bounding rectangle tiling area, it is determined if a particular tile (or tile(s)) falls into an inner box or not, and if the tile falls in the inner box, that particular tile is not filled. According to one embodiment, the inner box is an internal rectangle that contains a maximum area in which it is unnecessary to fill tiles.
Type: Grant
Filed: July 17, 2020
Date of Patent: May 24, 2022
Assignee: KYOCERA Document Solutions Inc.
Inventors: Jayant Bhatt, Xuqiang Bai
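The pre-clipping test described above, skip any tile that lies entirely inside the inner box while filling the bounding rectangle, can be sketched as follows. Rectangles are `(x0, y0, x1, y1)` tuples here, which is a representation assumption for illustration.

```python
# Pure-Python sketch of the inner-box pre-clipping idea described above:
# tiles fully contained in the inner (known-unfilled) box are not filled.

def tile_inside(tile, box):
    """True if `tile` lies entirely within `box` (both as x0,y0,x1,y1)."""
    tx0, ty0, tx1, ty1 = tile
    bx0, by0, bx1, by1 = box
    return tx0 >= bx0 and ty0 >= by0 and tx1 <= bx1 and ty1 <= by1

def tiles_to_fill(bounding, inner_box, tile_size):
    """Yield tiles covering `bounding`, skipping those fully in `inner_box`."""
    x0, y0, x1, y1 = bounding
    for ty in range(y0, y1, tile_size):
        for tx in range(x0, x1, tile_size):
            tile = (tx, ty, min(tx + tile_size, x1), min(ty + tile_size, y1))
            if not tile_inside(tile, inner_box):
                yield tile
```

For a large unfilled interior, the skipped tiles dominate the tile count, which is where the efficiency gain the abstract claims comes from.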
-
Patent number: 11335060
Abstract: A location-based augmented-reality system to generate and cause display of augmented-reality content that includes three-dimensional typography, based on a perspective and location of a client device.
Type: Grant
Filed: March 13, 2020
Date of Patent: May 17, 2022
Assignee: Snap Inc.
Inventors: Piers George Cowburn, David Li, Isac Andreas Müller Sandvik, Qi Pan
-
Patent number: 11334777
Abstract: A system and method for converting imaging data, for example, medical imaging data, to three-dimensional printer data. Imaging data may be received describing, for example, a three-dimensional volume of a subject or patient. Using printer definition data describing a particular printer, 3D printer input data may be created from the imaging data describing at least part of the three-dimensional volume.
Type: Grant
Filed: December 3, 2020
Date of Patent: May 17, 2022
Assignee: 3D SYSTEMS INC.
Inventors: Oren Kalisman, Dan Pri-Tal, Roy Porat, Yaron Vaxman