Patents Issued on July 14, 2020
  • Patent number: 10713802
    Abstract: An ultrasonic image processing system comprising: a data receiving module for acquiring multiple sets of three-dimensional image data corresponding to a single target tissue; an image analyzing module for dividing, on the basis of any one group of the multiple sets of three-dimensional image data, a target region to obtain a three-dimensional volume structure boundary of the target region, and a safety boundary generated by outward expansion or inward contraction along the three-dimensional volume structure boundary; an image mapping module for establishing a spatial mapping relation between the multiple sets of three-dimensional image data, and according to the spatial mapping relation, mapping the three-dimensional volume structure boundary and the safety boundary of the target region to the other sets of three-dimensional image data; and an image marking module for marking, in a displayed image, corresponding three-dimensional volume structure boundaries and safety boundaries of the target region in the multiple sets of three-dimensional image data.
    Type: Grant
    Filed: February 6, 2018
    Date of Patent: July 14, 2020
    Assignee: Shenzhen Mindray Bio-Medical Electronics Co., Ltd.
    Inventors: Longfei Cong, Teng Sun
  • Patent number: 10713803
    Abstract: Images of a fixture may be acquired by cameras positioned with a field-of-view of the fixture. Such images are processed to identify estimated tops of items at the fixture. Using the estimated tops and item data for items designated for stowage at the fixture, one or more estimated locations of items (such as bounding boxes representative of the items) may be determined. Each estimated location for an item is tested for validity. For example, each estimated location is checked to see if the estimated location is within a working volume of the fixture. If the estimated location of the item is within the working volume, the item is determined to be valid. Otherwise, the item is deemed invalid.
    Type: Grant
    Filed: June 28, 2019
    Date of Patent: July 14, 2020
    Assignee: AMAZON TECHNOLOGIES, INC.
    Inventors: Vuong Van Le, Joseph Patrick Tighe
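The validity test this abstract describes — checking whether an item's estimated location (a bounding box) lies within the fixture's working volume — amounts to an axis-aligned containment check. A minimal sketch, with coordinates, volume dimensions, and function names all assumed for illustration rather than taken from the patent:

```python
import numpy as np

def is_valid_location(box_min, box_max, volume_min, volume_max):
    """Return True when an item's estimated bounding box lies entirely
    within the fixture's working volume (all coordinates in metres)."""
    box_min = np.asarray(box_min, dtype=float)
    box_max = np.asarray(box_max, dtype=float)
    return bool(np.all(box_min >= np.asarray(volume_min, dtype=float)) and
                np.all(box_max <= np.asarray(volume_max, dtype=float)))

# A hypothetical shelf lane: 1 m wide, 0.5 m deep, 0.4 m tall.
volume_min, volume_max = (0.0, 0.0, 0.0), (1.0, 0.5, 0.4)

print(is_valid_location((0.1, 0.1, 0.0), (0.3, 0.3, 0.2), volume_min, volume_max))  # True
print(is_valid_location((0.8, 0.1, 0.0), (1.2, 0.3, 0.2), volume_min, volume_max))  # False
```

An item whose estimated box pokes outside the working volume fails the check and would be deemed invalid, as the abstract describes.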
  • Patent number: 10713804
    Abstract: A method for obtaining a combined depth map, and a depth camera. The method is applicable to a processor in the depth camera. The depth camera includes the processor, at least one light emitting element, and at least two time-of-flight (ToF) sensors, and the composite irradiation range of the at least one light emitting element covers the composite field of view of the at least two ToF sensors. The light emitting elements in the depth camera modulate light signals with the same modulation signal and then transmit the modulated light signals; the ToF sensors demodulate, by using the same demodulation signal corresponding to the modulation signal, the received modulated light signal reflected back by an object, so as to generate depth data; and the processor performs data fusion processing on all the received depth data, so as to obtain a combined depth map.
    Type: Grant
    Filed: August 5, 2016
    Date of Patent: July 14, 2020
    Assignee: Hangzhou Hikvision Digital Technology Co., Ltd.
    Inventors: Jingxiong Wang, Hui Mao, Linjie Shen, Hai Yu, Shiliang Pu
  • Patent number: 10713805
    Abstract: A method for encoding a depth map image involves dividing the image into blocks. These blocks are then classified into smooth blocks without large depth discontinuities and discontinuous blocks with large depth discontinuities. In the discontinuous blocks, depth discontinuities are represented by line segments and partitions. Interpolation-based intra prediction is used to approximate and compress the depth values in the smooth blocks and partitions. Further compression can be achieved with depth-aware quantization, adaptive de-blocking filtering, scale adaptive block size, and resolution decimation schemes.
    Type: Grant
    Filed: August 1, 2016
    Date of Patent: July 14, 2020
    Assignee: VERSITECH LIMITED
    Inventors: Shing Chow Chan, Jia-fei Wu
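The first step of this encoder — classifying blocks as smooth or discontinuous — can be sketched by thresholding the depth range inside each block. The block size and threshold below are assumed placeholder values, not parameters from the patent:

```python
import numpy as np

def classify_blocks(depth, block=8, threshold=20):
    """Label each block of a depth map as 'smooth' (no large depth
    discontinuity) or 'discontinuous' (depth range exceeds `threshold`)."""
    h, w = depth.shape
    labels = {}
    for y in range(0, h, block):
        for x in range(0, w, block):
            tile = depth[y:y + block, x:x + block]
            rng = int(tile.max()) - int(tile.min())
            labels[(y, x)] = "discontinuous" if rng > threshold else "smooth"
    return labels

depth = np.full((16, 16), 100, dtype=np.uint8)
depth[:, 4:] = 200          # a sharp depth edge inside the left-hand blocks
labels = classify_blocks(depth, block=8, threshold=20)
print(labels[(0, 0)], labels[(0, 8)])   # discontinuous smooth
```

Smooth blocks would then go to interpolation-based intra prediction, while discontinuous blocks would be partitioned along the detected edges.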
  • Patent number: 10713806
    Abstract: An image processing apparatus according to an embodiment includes a disparity-specific cost value calculation circuit configured to calculate the cost value of a disparity calculation target pixel in a source image and the cost value of cost value calculation target pixels at respective positions, in the horizontal direction, of pixels arranged from a position of the disparity calculation target pixel in a reference image up to a position a maximum disparity apart from the disparity calculation target pixel at respective positions in the horizontal direction, an inter-line minimum cost value extraction circuit configured to extract a minimum cost value from a plurality of pixels which have the same positions in the horizontal direction as the positions of the cost value calculation target pixels, and a cost optimization operation circuit configured to perform a cost optimization operation through global optimization using the cost value corresponding to one line.
    Type: Grant
    Filed: March 16, 2018
    Date of Patent: July 14, 2020
    Assignees: Kabushiki Kaisha Toshiba, Toshiba Electronic Devices & Storage Corporation
    Inventor: Toru Sano
  • Patent number: 10713807
    Abstract: A vicinity supervising device of a vehicle includes an image capturing unit capturing a plurality of images at different locations; a distance acquiring unit acquiring a distance to the object detected by transmitting probing waves to, and receiving them from, the object; a first offset calculation unit calculating a first parallax offset value based on the plurality of images and the distance to the object; a second offset calculation unit calculating a second parallax offset value based on a change in a parallax in a predetermined period at an identical point among the plurality of images and a travel distance of the vehicle travelling in the predetermined period; and a parallax correction unit correcting the parallax using the first parallax offset value under a condition where a difference between the first parallax offset value and the second parallax offset value is less than or equal to a threshold.
    Type: Grant
    Filed: September 28, 2018
    Date of Patent: July 14, 2020
    Assignees: DENSO CORPORATION, TOYOTA JIDOSHA KABUSHIKI KAISHA, RICOH COMPANY, LTD.
    Inventors: Toshihiko Terada, Hiroaki Ito, Naohide Uchida, Kagehiro Nagao
  • Patent number: 10713808
    Abstract: Disclosed is a stereo matching method and apparatus based on stereo vision, the method including acquiring a left image and a right image, identifying image data by applying a window to each of the acquired left image and right image, storing the image data in a line buffer, extracting a disparity from the image data stored in the line buffer, and generating a depth map based on the extracted disparity.
    Type: Grant
    Filed: October 25, 2017
    Date of Patent: July 14, 2020
    Assignees: ELECTRONICS AND TELECOMMUNICATIONS RESEARCH INSTITUTE, KYUNGPOOK NATIONAL UNIVERSITY INDUSTRY-ACADEMIC COOPERATION FOUNDATION
    Inventors: Kwang Yong Kim, Byungin Moon, Mi-ryong Park, Kyeong-ryeol Bae
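The window-based disparity extraction this abstract outlines is, in its simplest form, block matching: slide a window along the epipolar line and keep the shift with the lowest matching cost. A minimal numpy sketch, assuming a sum-of-absolute-differences cost and made-up window/disparity parameters (the patent's line-buffer hardware pipeline is not modeled):

```python
import numpy as np

def block_match_disparity(left, right, max_disp=8, win=3):
    """Naive window-based stereo matching: for each pixel, slide a window
    along the epipolar line of the right image and keep the disparity with
    the smallest sum of absolute differences (SAD)."""
    h, w = left.shape
    r = win // 2
    disp = np.zeros((h, w), dtype=np.int32)
    for y in range(r, h - r):
        for x in range(r + max_disp, w - r):
            patch = left[y - r:y + r + 1, x - r:x + r + 1].astype(np.int32)
            costs = [np.abs(patch - right[y - r:y + r + 1,
                                          x - d - r:x - d + r + 1].astype(np.int32)).sum()
                     for d in range(max_disp + 1)]
            disp[y, x] = int(np.argmin(costs))
    return disp

# A vertical bar shifted 4 px between the two views.
left = np.zeros((16, 32), dtype=np.uint8);  left[:, 20:24] = 255
right = np.zeros((16, 32), dtype=np.uint8); right[:, 16:20] = 255
d = block_match_disparity(left, right)
print(d[8, 20])   # 4 at the bar's left edge
```

The disparity map would then be converted to depth via the usual depth = focal_length * baseline / disparity relation.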
  • Patent number: 10713809
    Abstract: A device and a method for determining a position of a user's body portion. The device includes a camera, configured to capture the body portion, and a display for providing visual feedback. A sensor determines at least one of a roll angle, a pitch angle, and a yaw angle of the device, and an interface receives picture data related to a pictorial representation of the body portion captured and sensor data related to the determined angle of the device. An analyzer analyzes, based on the picture data, whether the captured body portion is within a predetermined region of the picture and, based on the sensor data, whether at least one of the roll angle, the pitch angle, and the yaw angle is within a predetermined angle range.
    Type: Grant
    Filed: February 16, 2017
    Date of Patent: July 14, 2020
    Assignee: BRAUN GMBH
    Inventor: Ingo Vetter
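The analyzer's angle check in this abstract reduces to testing whether roll, pitch, and yaw each fall inside a predetermined range. A trivial sketch; the ±15° limit is an assumed placeholder, not a value from the patent:

```python
def pose_ok(roll, pitch, yaw, limits=(-15.0, 15.0)):
    """Return True when the device's roll, pitch and yaw (in degrees)
    all fall within a predetermined angle range."""
    lo, hi = limits
    return all(lo <= a <= hi for a in (roll, pitch, yaw))

print(pose_ok(2.0, -5.0, 10.0))   # True
print(pose_ok(2.0, -5.0, 40.0))   # False
```

In the device, a failed check would drive the visual feedback on the display, prompting the user to reposition the camera.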
  • Patent number: 10713810
    Abstract: An information processing apparatus, comprising: a control unit configured to control a pattern that a projection apparatus projects onto an object; an obtainment unit configured to obtain a plurality of images respectively captured at a plurality of times by a plurality of image capturing apparatuses that capture the object onto which the pattern has been projected; and a measurement unit configured to measure range information of the object by performing matching, between images respectively captured by the plurality of image capturing apparatuses, using information of temporal change of pixel values of the images.
    Type: Grant
    Filed: June 15, 2017
    Date of Patent: July 14, 2020
    Assignee: CANON KABUSHIKI KAISHA
    Inventors: Hisayoshi Furihata, Masahiro Suzuki
  • Patent number: 10713811
    Abstract: A security camera system includes a base unit and sensor modules for generating image data. The base unit includes several mounting sockets arranged at different elevational and azimuthal directions around the base unit, and the sensor modules attach, for example, magnetically, to the mounting sockets. The security camera system is capable of automatic detection of the location of the sensor modules, as the identification information for the mounting sockets to which the sensor modules are attached is identified by image analytics. A reference image depicting the security camera system or an area surrounding the security camera system is analyzed and the positions of the sensor modules are determined based on the reference image. In the latter example, the reference image includes markers designating points of reference visible to the sensor modules and is compared to image data generated by the sensor modules.
    Type: Grant
    Filed: November 21, 2017
    Date of Patent: July 14, 2020
    Assignee: Sensormatic Electronics, LLC
    Inventors: Patrick Siu, Christopher Cianciolo
  • Patent number: 10713812
    Abstract: A method of determining a facial pose angle of a human face within an image is provided. After capturing a first image of the human face, respective coordinates of a predefined set of facial feature points of the human face in the first image are obtained. The predefined set of facial feature points includes an odd number of facial feature points, e.g., at least a first pair of symmetrical facial feature points, a second pair of symmetrical facial feature points, and a first single facial feature point. The predefined set of facial feature points are not coplanar. Next, one or more predefined key values based on the respective coordinates of the predefined set of facial feature points of the human face in the first image are calculated. Finally, a pre-established correspondence table is queried using the one or more predefined key values to determine the facial pose angle of the human face in the first image.
    Type: Grant
    Filed: April 3, 2018
    Date of Patent: July 14, 2020
    Assignee: TENCENT TECHNOLOGY (SHENZHEN) COMPANY LIMITED
    Inventor: Chengjie Wang
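The key-value-and-table lookup this abstract describes can be sketched with one illustrative key: the ratio of horizontal nose-to-eye distances, which skews away from 1.0 as the head yaws. Both the key definition and every number in the table below are hypothetical stand-ins, not values from the patent:

```python
# Hypothetical pre-established correspondence table: key value -> yaw angle
# in degrees. A real table would be built from calibrated face measurements.
TABLE = [(0.5, -45), (0.8, -20), (1.0, 0), (1.25, 20), (2.0, 45)]

def key_value(left_eye, right_eye, nose_tip):
    """A sample key value: ratio of horizontal distances from the nose tip
    to each eye. Close to 1.0 for a frontal face."""
    dl = abs(nose_tip[0] - left_eye[0])
    dr = abs(right_eye[0] - nose_tip[0])
    return dl / dr

def pose_angle(key):
    """Query the table for the entry whose key is closest to `key`."""
    return min(TABLE, key=lambda kv: abs(kv[0] - key))[1]

# A frontal face: nose tip midway between the eyes -> key 1.0 -> yaw 0.
print(pose_angle(key_value((30, 40), (70, 40), (50, 55))))   # 0
```

The patent's odd, non-coplanar point set lets it recover full pose angles rather than yaw alone; this sketch shows only the table-query mechanic.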
  • Patent number: 10713813
    Abstract: A computer-implemented method for determining a gaze position of a user, comprising: receiving an initial image of at least one eye of the user; extracting at least one color component of the initial image to obtain a corresponding at least one component image; for each component image, determining a respective internal representation; determining an estimated gaze position in the initial image by applying a respective primary stream to obtain a respective internal representation for each of the at least one component image; and outputting the estimated gaze position. The processing of the component images is performed using a neural network configured to, at run time and after the neural network has been trained, process the component images using one or more neural network layers to generate the estimated gaze position. A system for determining a gaze position of a user is also provided.
    Type: Grant
    Filed: February 22, 2019
    Date of Patent: July 14, 2020
    Assignee: INNODEM NEUROSCIENCES
    Inventors: Etienne De Villers-Sidani, Paul Alexandre Drouin-Picaro
  • Patent number: 10713814
    Abstract: A computer-implemented method for determining a gaze position of a user, comprising: receiving an initial image of at least one eye of the user; extracting at least one color component of the initial image to obtain a corresponding at least one component image; for each component image, determining a respective internal representation; determining an estimated gaze position in the initial image by applying a respective primary stream to obtain a respective internal representation for each of the at least one component image; and outputting the estimated gaze position. The processing of the component images is performed using a neural network configured to, at run time and after the neural network has been trained, process the component images using one or more neural network layers to generate the estimated gaze position. A system for determining a gaze position of a user is also provided.
    Type: Grant
    Filed: June 7, 2019
    Date of Patent: July 14, 2020
    Assignee: INNODEM NEUROSCIENCES
    Inventors: Etienne De Villers-Sidani, Paul Alexandre Drouin-Picaro
  • Patent number: 10713815
    Abstract: A method for supporting at least one administrator to evaluate detecting processes of object detectors to provide logical grounds for autonomous driving is provided. The method includes steps of: (a) a computing device instructing convolutional layers, included in an object detecting CNN which has been trained before, to generate reference convolutional feature maps by applying convolutional operations to reference images inputted thereto, and instructing ROI pooling layers included therein to generate reference ROI-Pooled feature maps by pooling at least part of values corresponding to ROIs on the reference convolutional feature maps; and (b) the computing device instructing a representative selection unit to classify the reference ROI-Pooled feature maps by referring to information on classes of objects included in their corresponding ROIs on the reference images, and to generate at least one representative feature map per each class.
    Type: Grant
    Filed: December 23, 2019
    Date of Patent: July 14, 2020
    Assignee: StradVision, Inc.
    Inventors: Kye-Hyeon Kim, Yongjoong Kim, Hak-Kyoung Kim, Woonhyun Nam, SukHoon Boo, Myungchul Sung, Dongsoo Shin, Donghun Yeo, Wooju Ryu, Myeong-Chun Lee, Hyungsoo Lee, Taewoong Jang, Kyungjoong Jeong, Hongmo Je, Hojin Cho
  • Patent number: 10713816
    Abstract: Disclosed in some examples are methods, systems, and machine-readable mediums that correct image color casts by utilizing a fully convolutional network (FCN), where the patches in an input image may differ in influence over the color constancy estimation. This influence is formulated as a confidence weight that reflects the value of a patch for inferring the illumination color. The confidence weights are integrated into a novel pooling layer where they are applied to local patch estimates in determining a global color constancy result.
    Type: Grant
    Filed: July 14, 2017
    Date of Patent: July 14, 2020
    Assignee: Microsoft Technology Licensing, LLC
    Inventors: Yuanming Hu, Baoyuan Wang, Stephen S. Lin
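The confidence-weighted pooling step can be sketched as a weighted average of per-patch illuminant estimates, normalized to a unit-length color vector. The estimates and weights below are invented numbers; in the patent both come from the FCN, which is not modeled here:

```python
import numpy as np

def weighted_illuminant(patch_estimates, confidences):
    """Confidence-weighted pooling: combine per-patch RGB illuminant
    estimates into one global estimate, weighting each patch by its
    confidence (shapes: (N, 3) and (N,))."""
    e = np.asarray(patch_estimates, dtype=float)
    w = np.asarray(confidences, dtype=float)
    pooled = (e * w[:, None]).sum(axis=0) / w.sum()
    return pooled / np.linalg.norm(pooled)   # illuminant colour up to scale

est = [[1.0, 0.8, 0.6],    # a useful patch (e.g. an achromatic surface)
       [0.2, 0.9, 0.1]]    # a misleading patch (e.g. a saturated green wall)
print(weighted_illuminant(est, [0.9, 0.1]).round(3))   # [0.685 0.603 0.409]
```

The high-confidence patch dominates the pooled estimate, which is exactly the behavior the confidence weights are meant to produce.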
  • Patent number: 10713817
    Abstract: A device that uses color recognition systems designed to alert distracted drivers/operators in non-moving vehicles, to reduce their time of response to unnoticed external environment color changes. The device provides a method to image a vehicle's external environment color change by detecting and analyzing color variations of the traffic light signals or preceding vehicle's brake lights. The device can emit a sound and/or vibration when the device recognizes a color change and the operator of the vehicle does not react, as in the case of a distracted operator. The device employs a camera, an accelerometer, a buzzer, an algorithm, a CPU and a power source. The device can be installed on the windshield of a vehicle, fully integrated into a vehicle, or partially integrated, wherein a mobile device App may be employed.
    Type: Grant
    Filed: March 21, 2019
    Date of Patent: July 14, 2020
    Inventors: Oriol Oliva-Perez, Mohammad Salaheldin Sallam
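The alert logic this abstract describes — detect a color change, then alert only if the stationary vehicle does not react — can be sketched with a crude mean-channel classifier. The channel-comparison rule and all pixel values are illustrative assumptions; a real system would use calibrated HSV thresholds and the accelerometer for motion:

```python
import numpy as np

def light_state(rgb_region):
    """Classify a cropped traffic-light region as 'red' or 'green' by
    comparing mean channel intensities (a stand-in for the patent's
    colour-recognition step)."""
    r, g, _ = np.asarray(rgb_region, dtype=float).reshape(-1, 3).mean(axis=0)
    return "red" if r > g else "green"

def should_alert(prev_region, curr_region, vehicle_moving):
    """Alert the operator when the light turns green but the vehicle
    does not start moving."""
    changed = light_state(prev_region) != light_state(curr_region)
    return changed and light_state(curr_region) == "green" and not vehicle_moving

red   = np.tile([200, 30, 30], (8, 8, 1))
green = np.tile([30, 200, 30], (8, 8, 1))
print(should_alert(red, green, vehicle_moving=False))   # True
print(should_alert(red, green, vehicle_moving=True))    # False
```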
  • Patent number: 10713818
    Abstract: Methods and systems, including computer programs encoded on computer storage media, for compressing data items with variable compression rate. A system includes an encoder sub-network configured to receive a system input image and to generate an encoded representation of the system input image, the encoder sub-network including a first stack of neural network layers including one or more LSTM neural network layers and one or more non-LSTM neural network layers, the first stack configured to, at each of a plurality of time steps, receive an input image for the time step that is derived from the system input image and generate a corresponding first stack output, and a binarizing neural network layer configured to receive a first stack output as input and generate a corresponding binarized output.
    Type: Grant
    Filed: January 28, 2019
    Date of Patent: July 14, 2020
    Assignee: Google LLC
    Inventors: George Dan Toderici, Sean O'Malley, Rahul Sukthankar, Sung Jin Hwang, Damien Vincent, Nicholas Johnston, David Charles Minnen, Joel Shor, Michele Covell
  • Patent number: 10713819
    Abstract: Methods are provided for generating a prescription map for the application of crop inputs. In one method, the user draws a boundary on a map within a user interface and the system identifies relevant soil data and generates a soil map overlay and legend for changing the application prescription for various soils and soil conditions. In another method, the user instead drives a field boundary which is recorded on a planter monitor using a global positioning receiver, and the system generates a soil map and legend for changing the application prescription.
    Type: Grant
    Filed: November 12, 2019
    Date of Patent: July 14, 2020
    Assignee: The Climate Corporation
    Inventors: Derek A. Sauder, Timothy A. Sauder, Steven D. Monday
  • Patent number: 10713820
    Abstract: A device is provided for adjusting brightness of a plurality of images each including a plurality of pixels. The device may include a memory configured to store instructions. The device may also include a processor configured to execute the instructions to determine overall luminance values of the images. The processor may also be configured to determine, from the images, a reference image and a reference overall luminance value based on the overall luminance values. The processor may further be configured to determine adjustment factors for the images based on the reference overall luminance value of the reference image, determine weighting factors for the pixels in an image to be adjusted, and adjust luminance values of the pixels in the image to be adjusted based on an adjustment factor for the image and the weighting factors for the pixels.
    Type: Grant
    Filed: July 20, 2016
    Date of Patent: July 14, 2020
    Assignee: SHANGHAI XIAOYI TECHNOLOGY CO., LTD.
    Inventors: Kongqiao Wang, Junzhou Mou
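The adjustment-factor mechanism in this abstract can be illustrated with a minimal sketch: compute each image's overall luminance, choose a reference, and scale every image by ref_luminance / own_luminance. Choosing the median image as reference and dropping the per-pixel weighting factors are simplifying assumptions for illustration:

```python
import numpy as np

def equalize_brightness(images):
    """Pick the image whose overall luminance is the median as the
    reference, then scale every image by ref / own luminance so the
    whole set shares a similar overall brightness."""
    lums = [img.mean() for img in images]
    ref = sorted(lums)[len(lums) // 2]           # reference overall luminance
    factors = [ref / l for l in lums]            # per-image adjustment factor
    adjusted = [np.clip(img * f, 0, 255).astype(np.uint8)
                for img, f in zip(images, factors)]
    return adjusted, factors

imgs = [np.full((4, 4), v, dtype=np.float64) for v in (60.0, 120.0, 240.0)]
adjusted, factors = equalize_brightness(imgs)
print(factors)                       # [2.0, 1.0, 0.5]
print([a.mean() for a in adjusted])  # [120.0, 120.0, 120.0]
```

The patent additionally weights each pixel individually before adjusting, which would replace the single scalar factor with a per-pixel map.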
  • Patent number: 10713821
    Abstract: Techniques are generally described for context aware text-to-image synthesis. First text data comprising a description of an object may be received. A recurrent neural network may determine a first semantic representation data representing the first text data. A generator trained using a first generative adversarial network (GAN) may determine first image data representing the object using the first semantic representation. An encoder of a second GAN may generate a first feature representation of the first image data. The first feature representation may be combined with a projection of the first semantic representation data. A decoder of the second GAN may generate second image data representing the first text data.
    Type: Grant
    Filed: June 27, 2019
    Date of Patent: July 14, 2020
    Assignee: AMAZON TECHNOLOGIES, INC.
    Inventors: Shiv Surya, Arijit Biswas, Sumit Negi, Amrith Rajagopal Setlur
  • Patent number: 10713822
    Abstract: A tomographic imaging apparatus includes an X-ray detector comprising a plurality of dual mode pixels and configured to detect radiation that has passed through an object, and at least one processor configured to obtain scan data from the X-ray detector, and control each pixel of the plurality of dual mode pixels to operate in one of a first mode and a second mode, wherein each pixel of the plurality of dual mode pixels includes a sensor configured to generate a scan signal by converting incident radiation into an electric signal, a first signal path circuit configured to transmit the scan signal in the first mode, a second signal path circuit configured to transmit the scan signal in the second mode, and a photon counter configured to count photons from the scan signal transmitted through one of the first and second signal path circuits.
    Type: Grant
    Filed: September 28, 2018
    Date of Patent: July 14, 2020
    Assignee: Samsung Electronics Co., Ltd.
    Inventors: Sang-min Lee, Dong-uk Kang, Do-il Kim
  • Patent number: 10713823
    Abstract: When a group of (pre-processed) projection data is stored into a projection-data storage unit, a Gaussian-based expansion-data creating unit creates a group of Gaussian-based expansion data that is expanded from each of the group of projection data through linear combination based on a plurality of Gaussian functions that are stored by a Gaussian-function storage unit and have different center points. A reconstruction-image creating unit then creates a reconstruction image by using the Gaussian-based expansion data created by the Gaussian-based expansion-data creating unit, and stores the created reconstruction image into an image storage unit.
    Type: Grant
    Filed: June 2, 2017
    Date of Patent: July 14, 2020
    Assignee: TOSHIBA MEDICAL SYSTEMS CORPORATION
    Inventors: Manabu Teshigawara, Takuzo Takayama, Tomoyasu Komori, Takaya Umehara
  • Patent number: 10713824
    Abstract: The disclosure relates to a system and method for CT image reconstruction.
    Type: Grant
    Filed: June 26, 2018
    Date of Patent: July 14, 2020
    Assignee: UIH AMERICA, INC.
    Inventor: Stanislav Zabic
  • Patent number: 10713825
    Abstract: A medical image processing device is disclosed. The disclosed medical image processing device may include: an input interface through which a depth adjusting command is input from a user; an image processor and controller generating a two-dimensional reconstruction image by overlapping a part or all of CT image data in a view direction, and changing a contrast of at least a part of the two-dimensional reconstruction image according to the depth adjusting command; and a display part displaying the two-dimensional reconstruction image.
    Type: Grant
    Filed: October 10, 2016
    Date of Patent: July 14, 2020
    Assignees: VATECH Co., Ltd., VATECH EWOO Holdings Co., Ltd.
    Inventors: Woong Bae, Sung Il Choi
  • Patent number: 10713826
    Abstract: A computer-implemented method of drawing a polyline in a three-dimensional scene: a) draws a segment (S1) of said polyline in said three-dimensional scene, said segment having a starting point (P1) and an endpoint (P2); b) displays, in the three-dimensional scene, a graphical tool (PST) representing a set of three orthogonal planes (PLA, PLB, PLC), one of said planes being orthogonal to the segment; c) selects one of said planes; and d) draws another segment of the polyline (S2), having a starting point coinciding with the endpoint of the segment drawn in step a) and lying in the plane (PLA) selected in step c). Steps a), c) and d) are carried out based on input commands provided by a user. A computer program product, a non-volatile computer-readable data-storage medium, and a Computer Aided Design or three-dimensional illustration authoring system carry out such a method.
    Type: Grant
    Filed: December 7, 2016
    Date of Patent: July 14, 2020
    Assignee: DASSAULT SYSTEMES
    Inventors: Christophe Rene Francis Delfino, Nicolas Arques
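The geometry behind steps b) and d) — a plane orthogonal to the current segment through its endpoint, and a next segment constrained to lie in the selected plane — can be sketched with two small vector routines. The point values are arbitrary examples; the tool's interactive selection is not modeled:

```python
import numpy as np

def plane_through_endpoint(p1, p2):
    """Plane orthogonal to segment p1->p2, passing through its endpoint p2,
    returned as (point, unit normal)."""
    n = np.asarray(p2, float) - np.asarray(p1, float)
    return np.asarray(p2, float), n / np.linalg.norm(n)

def project_onto_plane(q, point, normal):
    """Project a candidate 3-D point onto the selected plane so the next
    segment is guaranteed to lie in that plane."""
    q = np.asarray(q, float)
    return q - np.dot(q - point, normal) * normal

p1, p2 = (0.0, 0.0, 0.0), (1.0, 0.0, 0.0)
point, n = plane_through_endpoint(p1, p2)     # the plane x = 1, normal (1,0,0)
q = project_onto_plane((3.0, 2.0, 5.0), point, n)
print(q)   # [1. 2. 5.]
```

Because the projected point shares the plane with the segment's endpoint, the new segment S2 automatically satisfies the constraint of step d).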
  • Patent number: 10713827
    Abstract: A system and method for graphical representation of spatial data. A disclosed video display system is capable of presenting a layout of graphics objects as part of a displayed image. The system provides in the displayed image i) a first graphical representation in a first display area of a display and ii) a diagrammatic representation in a second display area. The diagrammatic representation features superimposed graphical elements that are dependent on the first graphical representation. The video display system can provide, for example, a pie chart as the first graphical representation and a map of a geographic area as the diagrammatic representation. The pie chart graphically represents, for example, a breakdown of members by organization, wherein each slice in the pie chart corresponds to a different organization. Superimposed on the map are elements of a bar chart, which is another example of a graphical representation.
    Type: Grant
    Filed: February 2, 2020
    Date of Patent: July 14, 2020
    Assignee: Polaris Wireless, Inc.
    Inventors: Mitul Bhat, Pratik Dhebri
  • Patent number: 10713828
    Abstract: An image processing device for stitching a plurality of input images together so as to generate a panoramic composite image is provided. An imaging control section sets a guide for photographing a photographing subject from different imaging directions as guide information for obtaining a plurality of input images suitable for the image composition. A guide display setting section displays a photographing guide image based on the guide information. A user rotates a camera while checking the guide image and captures the plurality of input images from different imaging directions. An image compositing section stitches the plurality of input images together so as to generate a composite image.
    Type: Grant
    Filed: February 2, 2017
    Date of Patent: July 14, 2020
    Assignee: MORPHO, INC.
    Inventors: Masaki Hiraga, Shun Hirai, Takeshi Miura, Ryo Ono
  • Patent number: 10713829
    Abstract: An accident report device for reporting accident information to a predetermined report destination in a case where a vehicle is involved in an accident, includes: an image acquiring unit configured to acquire an image acquired by at least one vehicle-mounted camera mounted on the vehicle; an information acquiring unit configured to acquire information related to the vehicle; and an image control unit configured to control a terminal of the report destination such that the terminal displays a synthetic image generated based on the image, the synthetic image showing an area surrounding the vehicle and an external appearance of the vehicle as seen from a virtual viewpoint.
    Type: Grant
    Filed: March 19, 2019
    Date of Patent: July 14, 2020
    Assignee: DENSO TEN LIMITED
    Inventors: Sho Yasugi, Koichi Yamamoto
  • Patent number: 10713830
    Abstract: An image and the maximum number of tokens for a to-be-created image caption are received in a computing system. The font size of the graphical image of a token is calculated from the maximum number of tokens and the dimension of the desired input image for a prediction-style image classification technique. The desired input image is divided into first and second portions. A 2-D symbol is formed by placing a resized image derived from the received image with substantially similar contents in the first portion and by initializing the second portion with blank images. The next token of the image caption is predicted by classifying the 2-D symbol using the prediction-style image classification technique. The 2-D symbol is modified by appending the graphical image of the just-predicted token to the existing image caption in the second portion, if the termination condition for image caption creation is false. The next token is repeatedly predicted until the termination condition becomes true.
    Type: Grant
    Filed: May 13, 2019
    Date of Patent: July 14, 2020
    Assignee: Gyrfalcon Technology Inc.
    Inventors: Lin Yang, Baohua Sun
  • Patent number: 10713831
    Abstract: There are provided systems and methods for providing event enhancement using augmented reality (AR) effects. In one implementation, such a system includes a computing platform having a hardware processor and a memory storing an AR effect generation software code. The hardware processor is configured to execute the AR effect generation software code to receive a venue description data corresponding to an event venue, to identify the event venue based on the venue description data, and to identify an event scheduled to take place at the event venue. The hardware processor is further configured to execute the AR effect generation software code to generate one or more AR enhancement effect(s) based on the event and the event venue, and to output the AR enhancement effect(s) for rendering on a display of a wearable AR device during the event.
    Type: Grant
    Filed: June 10, 2019
    Date of Patent: July 14, 2020
    Assignee: Disney Enterprises, Inc.
    Inventors: Mark Arana, Steven M. Chapman, Michael DeValue, Michael P. Goslin
  • Patent number: 10713832
    Abstract: A method of identifying locations in a virtual environment where a motion sequence can be performed by an animated character may include accessing the motion sequence for the animated character, identifying a plurality of contact locations in the motion sequence where the animated character contacts surfaces in virtual environments, accessing the virtual environment comprising a plurality of surfaces, and identifying the locations in the virtual environment where the motion sequence can be performed by the animated character by identifying surfaces in the plurality of surfaces that match the plurality of contact locations.
    Type: Grant
    Filed: May 17, 2016
    Date of Patent: July 14, 2020
    Assignees: Disney Enterprises, Inc., ETH Zürich
    Inventors: Robert Sumner, Xu Xianghao, Stelian Coros, Mubbasir Turab Kapadia
  • Patent number: 10713833
    Abstract: Facial expressions and whole-body gestures of a 3D character are provided based on facial expressions of a user and gestures of a hand puppet perceived using a depth camera.
    Type: Grant
    Filed: March 23, 2017
    Date of Patent: July 14, 2020
    Assignee: Korea Institute of Science and Technology
    Inventors: Hwasup Lim, Youngmin Kim, Jae-In Hwang, Sang Chul Ahn
  • Patent number: 10713834
    Abstract: A method includes defining a virtual space comprising a first avatar and a second avatar, wherein the first avatar is associated with a first user, and the second avatar is associated with a second user. The method further includes receiving a first input from the first user. The method further includes performing charging-related processing based on the received first input. The method further includes requesting a performance by the second avatar in response to performance of the charging-related processing. The method further includes detecting a motion of the second user in response to the requesting of the performance. The method further includes moving the second avatar in accordance with detected motion of the second user.
    Type: Grant
    Filed: April 26, 2019
    Date of Patent: July 14, 2020
    Assignee: COLOPL, INC.
    Inventor: Kento Nakashima
  • Patent number: 10713835
    Abstract: A method of playing an animation image, the method including: obtaining a plurality of images; displaying a first image of the plurality of images; detecting a first event as a trigger to play the animation image for a first object of the first image; and playing the animation image for the first object using the plurality of images.
    Type: Grant
    Filed: March 16, 2018
    Date of Patent: July 14, 2020
    Assignee: SAMSUNG ELECTRONICS CO., LTD.
    Inventors: Pavan Sudheendra, Sarvesh, Yogesh Manav, Adappa M Gourannavar, Rahul Varna, Sumanta Baruah
  • Patent number: 10713836
    Abstract: Examples are disclosed that relate to computing devices and methods for simulating light passing through one or more lenses. In one example, a method comprises obtaining a point spread function of the one or more lenses, obtaining a first input raster image comprising a plurality of pixels, and ray tracing the first input raster image using the point spread function to generate a first output image. Based on ray tracing the first input raster image, a look up table is generated by computing a contribution to a pixel in the first output image, wherein the contribution is from a pixel at each location of a subset of locations in the first input raster image. A second input raster image is obtained, and the look up table is used to generate a second output image from the second input raster image.
    Type: Grant
    Filed: June 25, 2018
    Date of Patent: July 14, 2020
    Assignee: MICROSOFT TECHNOLOGY LICENSING, LLC
    Inventor: Trebor Lee Connell
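The look-up-table idea in the abstract above can be sketched as follows (Python/NumPy; the table layout and the scatter-style application are illustrative assumptions, not the patented ray tracer):

```python
import numpy as np

def build_lut(psf):
    """Normalize a point spread function into a weight table:
    lut[(dy, dx)] -> fraction of a source pixel's light landing at
    that offset in the output image."""
    psf = psf / psf.sum()
    h, w = psf.shape
    cy, cx = h // 2, w // 2
    return {(y - cy, x - cx): psf[y, x]
            for y in range(h) for x in range(w) if psf[y, x] > 0}

def apply_lut(image, lut):
    """Scatter each input pixel's value into the output via the table,
    reusing the precomputed contributions instead of re-tracing rays."""
    h, w = image.shape
    out = np.zeros((h, w))
    for y in range(h):
        for x in range(w):
            v = image[y, x]
            if v == 0:
                continue
            for (dy, dx), wgt in lut.items():
                ny, nx = y + dy, x + dx
                if 0 <= ny < h and 0 <= nx < w:
                    out[ny, nx] += v * wgt
    return out
```

Once the table is built from one traced image, every subsequent input image only pays for the cheap weighted scatter.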
  • Patent number: 10713837
    Abstract: A system and method uses a two-dimensional graphics library to generate an image representation that can be used by a three-dimensional graphics library to render the image.
    Type: Grant
    Filed: October 14, 2018
    Date of Patent: July 14, 2020
    Assignee: Charles Schwab & Co., Inc.
    Inventor: Sean M. Payne
  • Patent number: 10713838
    Abstract: The present invention facilitates efficient and effective image processing. A network can comprise: a first system configured to perform a first portion of lighting calculations for an image and combine results of the first portion of lighting calculations for the image with results of a second portion of lighting calculations; and a second system configured to perform the second portion of lighting calculations and forward the results of the second portion of the lighting calculations to the first system. The first and second portions of lighting calculations can be associated with indirect lighting calculations and direct lighting calculations respectively. The first system can be a client in a local location and the second system can be a server in a remote location (e.g., a cloud computing environment). The first system and second system can be in a cloud and a video is transmitted to a local system.
    Type: Grant
    Filed: May 5, 2014
    Date of Patent: July 14, 2020
    Assignee: NVIDIA Corporation
    Inventors: Morgan McGuire, David Luebke, Cyril Crassin, Peter-Pike Sloan, Peter Shirley, Brent Oster, Christopher Wyman, Michael Mara
  • Patent number: 10713839
    Abstract: A method and system for generating a three-dimensional representation of a vehicle to assess damage to the vehicle. A mobile device may capture multispectral scans of a vehicle from each a plurality of cameras configured to scan the vehicle at a different wavelength of the electromagnetic spectrum. A virtual model of the vehicle may be generated from the multispectral scan of the vehicle, such that anomalous conditions or errors in individual wavelength data are omitted from model generation. A representation of the virtual model may be presented to the user via the display of the mobile device. The virtual model of the vehicle may further be analyzed to assess damage to the vehicle.
    Type: Grant
    Filed: October 23, 2018
    Date of Patent: July 14, 2020
    Assignee: State Farm Mutual Automobile Insurance Company
    Inventor: Nathan C. Summers
  • Patent number: 10713840
    Abstract: A method is provided, including the following method operations: using a robot having a plurality of sensors to acquire sensor data about a local environment; processing the sensor data to generate a spatial model of the local environment, the spatial model defining virtual surfaces that correspond to real surfaces in the local environment; further processing the sensor data to generate texture information that is associated to the virtual surfaces defined by the spatial model; tracking a location and orientation of a head-mounted display (HMD) in the local environment; using the spatial model, the texture information, and the tracked location and orientation of the HMD to render a view of a virtual space that corresponds to the local environment; presenting the view of the virtual environment through the HMD.
    Type: Grant
    Filed: December 22, 2017
    Date of Patent: July 14, 2020
    Assignee: Sony Interactive Entertainment Inc.
    Inventors: Michael Taylor, Erik Beran
  • Patent number: 10713841
    Abstract: A system for generating a point cloud map of one or more objects in a real-world environment. The system includes a data acquisition device for acquiring a plurality of data points representing the objects, wherein the data acquisition device is configured to acquire a first set of data points for the objects from a first position to generate a first point cloud, and a second set of data points for the objects from a second position to generate a second point cloud; and a server arrangement including a receiving module configured to receive the first point cloud and the second point cloud, a registration module to register the received first point cloud and the received second point cloud to generate an aligned point cloud pair, and a data processing module to determine a quality score for the generated point cloud pair, compare the determined quality score with a predefined threshold value, and generate the point cloud map if the determined quality score is less than the predefined threshold value.
    Type: Grant
    Filed: January 18, 2019
    Date of Patent: July 14, 2020
    Assignee: Unkie Oy
    Inventor: Tuomas Pöyhtäri
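The quality-gated merge described in the abstract above can be sketched like this (Python/NumPy; the RMS nearest-neighbour score and the threshold value are illustrative assumptions, not the patented metric):

```python
import numpy as np

def alignment_error(cloud_a, cloud_b):
    """Quality score for a registered point cloud pair: RMS distance from
    each point in cloud_a to its nearest neighbour in cloud_b (lower is
    better).  Brute-force O(n*m); real systems would use a k-d tree."""
    d = np.linalg.norm(cloud_a[:, None, :] - cloud_b[None, :, :], axis=2)
    nearest = d.min(axis=1)
    return float(np.sqrt((nearest ** 2).mean()))

def merge_if_aligned(cloud_a, cloud_b, threshold=0.05):
    """Merge the pair into one map only when the score passes the gate."""
    if alignment_error(cloud_a, cloud_b) < threshold:
        return np.vstack([cloud_a, cloud_b])
    return None
```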
  • Patent number: 10713842
    Abstract: A process for receiving, from a computing device, a series of captured building images by overlaying, on a capture device display, a selected graphical guide from a set of sequentially related graphical guides. The process continues by capturing, by a capture device, a building image, wherein the capturing is performed during substantial alignment of an image of a selected building object with a corresponding orientation of the selected graphical guide. The process continues by receiving acknowledgement of the building image being captured for the selected graphical guide and the selected building object. The process is repeated for a plurality of building images.
    Type: Grant
    Filed: August 29, 2019
    Date of Patent: July 14, 2020
    Assignee: HOVER, Inc.
    Inventors: Manish Upendran, William Castillo, Ajay Mishra, Adam J. Altman
  • Patent number: 10713843
    Abstract: Embodiments of the disclosure include methods, machines, and non-transitory computer-readable media having one or more computer programs stored therein to enhance core analysis planning for a plurality of core samples of subsurface material. Embodiments can include positioning electronic depictions of structure of encased core samples of subsurface material on a display and determining portions of each of the images as different planned sample types thereby to virtually mark each of the images. Planned sample types can include, for example, full diameter samples, special core analysis samples, conventional core analysis samples, and mechanical property samples. Embodiments further can include transforming physical properties of encased core samples of subsurface material into images responsive to one or more penetrative scans by use of one or more computerized tomography (CT) scanners.
    Type: Grant
    Filed: October 10, 2019
    Date of Patent: July 14, 2020
    Assignee: Saudi Arabian Oil Company
    Inventors: Sinan Caliskan, Abdullah M. Shebatalhamd
  • Patent number: 10713844
    Abstract: A method and image processing apparatus for creating simplified representations of an existing virtual 3D model for use in occlusion culling are provided. A visual hull construction is performed on the existing virtual 3D model using an approximate voxel volume consisting of a plurality of voxels. A set of projections from a plurality of viewing angles provide a visual hull of the existing 3D model. The volumetric size of the visual hull of the existing 3D model is increased to envelop the existing virtual 3D model to provide the visual hull as an occludee model, and the volumetric size of the visual hull of the existing 3D model is decreased to be enveloped by the existing virtual 3D model to provide the visual hull as an occluder model. The occludee model and the occluder model are used during runtime in a 3D virtual environment for occlusion culling.
    Type: Grant
    Filed: December 18, 2015
    Date of Patent: July 14, 2020
    Assignee: MICROSOFT TECHNOLOGY LICENSING, LLC
    Inventors: Ulrik Lindahl, Gustaf Johansson
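The grow/shrink step in the abstract above can be sketched as one round of voxel dilation and erosion (Python/NumPy; the 6-neighbourhood morphology is an illustrative assumption, not the patented construction):

```python
import numpy as np

def shift(vox, axis, step):
    """Shift a boolean voxel grid along one axis, filling with False."""
    out = np.zeros_like(vox)
    src = [slice(None)] * 3
    dst = [slice(None)] * 3
    if step > 0:
        dst[axis] = slice(step, None); src[axis] = slice(None, -step)
    else:
        dst[axis] = slice(None, step); src[axis] = slice(-step, None)
    out[tuple(dst)] = vox[tuple(src)]
    return out

def dilate(vox):
    """Grow the hull by one voxel so it envelops the model (occludee)."""
    out = vox.copy()
    for ax in range(3):
        for st in (1, -1):
            out |= shift(vox, ax, st)
    return out

def erode(vox):
    """Shrink the hull so the model envelops it (conservative occluder)."""
    out = vox.copy()
    for ax in range(3):
        for st in (1, -1):
            out &= shift(vox, ax, st)
    return out
```

The containment properties are what make the pair safe for culling: an object hidden behind the eroded occluder is truly hidden, and an object whose dilated occludee is visible may truly be visible.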
  • Patent number: 10713845
    Abstract: A cloud network server system, a method, and a software program product for experiencing a three-dimensional (3D) model are provided. 3D model data associated with a 3D video game is uploaded to the cloud network server system. The system and method are used to design, for example, a computer game that renders non-spatial characteristics such as smell, reflection and/or refraction of light, wind direction, sound reflection, etc., along with spatial and visibility information associated with 3D objects displayed in the 3D video game. Different versions of the 3D model are created based on memory, streaming bandwidth, and/or processing power requirements of different user terminal computers. Based on a virtual location of a user in the 3D model, parts of at least one version of the 3D model are rendered to the user.
    Type: Grant
    Filed: March 25, 2019
    Date of Patent: July 14, 2020
    Assignee: Umbra Software Oy
    Inventors: Otso Makinen, Antti Hatala, Hannu Saransaari, Jarno Muurimaki, Jasin Bushnaief, Johann Muszynski, Mikko Pulkki, Niilo Jaba, Otto Laulajainen, Turkka Aijala, Vinh Truong
  • Patent number: 10713846
    Abstract: Computationally implemented methods and systems include acquiring one or more first augmentations for inclusion in a first augmented view of a first scene, displaying the first augmented view including the one or more first augmentations, and transmitting augmentation data associated with the one or more first augmentations to facilitate remote display of one or more second augmentations in a second augmented view of a second scene, the second scene having one or more visual items that are also included in the first scene. In addition to the foregoing, other aspects are described in the claims, drawings, and text.
    Type: Grant
    Filed: November 29, 2012
    Date of Patent: July 14, 2020
    Assignee: Elwha LLC
    Inventors: Gene Fein, Royce A. Levien, Richard T. Lord, Robert W. Lord, Mark A. Malamud, John D. Rinaldo, Jr., Clarence T. Tegreene
  • Patent number: 10713847
    Abstract: The present group of inventions relates to methods and systems intended for interacting with virtual objects, involving determining a control unit to be used for interacting with virtual objects, determining characteristic graphics primitives of a virtual object, determining the spatial position of the control unit, correlating the spatial position of the control unit to the graphics primitives of the virtual object, and performing the desired actions with regard to the virtual object. In accordance with the invention, images are used from a user's client device which has a video camera and a display, a control unit image library is created on the basis of the received images, and the obtained image library is used for determining the graphics primitives of the control unit. Then, the spatial position of the control unit is determined by calculating the motion in space of the control unit graphics primitives.
    Type: Grant
    Filed: June 7, 2016
    Date of Patent: July 14, 2020
    Assignee: DEVAR ENTERTAINMENT LIMITED
    Inventors: Vitaly Vitalyevich Averyanov, Andrey Valeryevich Komissarov
  • Patent number: 10713848
    Abstract: The present disclosure relates to a system for providing a simulated environment and a method thereof. The system comprises a first wearable device and a computing unit. The first wearable device is configured to output a first scenario of the simulated environment. The computing unit is configured to provide an indication corresponding to a mobile object in the first scenario when the mobile object is detectable within a predetermined distance from the first wearable device.
    Type: Grant
    Filed: December 14, 2017
    Date of Patent: July 14, 2020
    Assignee: HTC Corporation
    Inventors: Hsin-Hao Lee, Ching-Hao Lee
  • Patent number: 10713849
    Abstract: Modifying augmented reality viewing is provided. It is determined that a user is viewing a scene space via augmented reality at a current geographic location of the user. It is detected that the viewing of the scene space is suboptimal for the user based on at least one of overcrowding of the viewed scene space at the current geographic location and significant battery usage to support augmented reality processing. Priority of one or more masks associated with the viewing of the scene space by the user is determined based on a user profile. The one or more masks associated with the viewing of the scene space are implemented based on the current geographic location of the user and the user profile. The one or more masks indicate that a portion of the viewed scene space is not to be processed for the viewing of the scene space via augmented reality.
    Type: Grant
    Filed: May 29, 2018
    Date of Patent: July 14, 2020
    Assignee: International Business Machines Corporation
    Inventors: Kelley Anders, Al Chakra, Liam S. Harpur, Robert H. Grant
  • Patent number: 10713850
    Abstract: A virtual reality-based apparatus that includes a memory device, a depth sensor, and modeling circuitry captures a plurality of depth values of a first human subject from a single viewpoint using the depth sensor. The memory device stores a reference three-dimensional (3D) human body model that comprises a mean body shape and a set of body shape variations. The modeling circuitry determines a first shape of the first human subject based on the plurality of depth values and generates a first deformed 3D human body model by deformation of the mean body shape. The modeling circuitry determines a first plurality of pose parameters for a first pose based on a plurality of rigid transformation matrices. The modeling circuitry generates a second deformed 3D human body model by deformation of a plurality of vertices and controls display of the second deformed 3D human body model as a reconstructed 3D model.
    Type: Grant
    Filed: September 24, 2018
    Date of Patent: July 14, 2020
    Assignee: SONY CORPORATION
    Inventors: Jie Ni, Mohammad Gharavi-Alkhansari
  • Patent number: 10713851
    Abstract: Various embodiments of the present invention relate generally to systems and methods for analyzing and manipulating images and video. According to particular embodiments, the spatial relationship between multiple images and video is analyzed together with location information data, for purposes of creating a representation referred to herein as a surround view for presentation on a device. A real object can be tracked in the live image data for the purposes of creating a surround view using a number of tracking points. As a camera is moved around the real object, virtual objects can be rendered into live image data to create synthetic images where a position of the tracking points can be used to position the virtual object in the synthetic image. The synthetic images can be output in real-time. Further, virtual objects in the synthetic images can be incorporated into surround views.
    Type: Grant
    Filed: November 12, 2018
    Date of Patent: July 14, 2020
    Assignee: Fyusion, Inc.
    Inventors: Stefan Johannes Josef Holzer, Alexander Jay Bruen Trevor, Martin Saelzle, Stephen David Miller, Radu Bogdan Rusu