Abstract: An image recognition method applied to an electronic device is provided. The method includes constructing a first semantic segmentation network. When an initial labeled result of one of a plurality of initial labeled images does not match a preset labeled result, a target image corresponding to the one of the plurality of initial labeled images and a target labeled result of the target image are obtained. A second semantic segmentation network is obtained by training the first semantic segmentation network based on a plurality of the target images and the target labeled result of each target image, and a labeled result of an image to be recognized is obtained by inputting the image to be recognized into the second semantic segmentation network.
Abstract: A matching support apparatus includes a first display information generation unit that generates first display information for displaying, on the screen of a display device, a first display region for displaying an image including the face of a person targeted for matching captured using an image capturing apparatus, a second display information generation unit that generates second display information for displaying, on the screen of the display device, a second display region for displaying a reference face development image generated based on three-dimensional data of a head serving as a reference, and a user interface display information generation unit that generates first user interface display information for displaying, on the screen of the display device, a second user interface to be used in an operation for enabling a user to designate, in the second display region, a feature region of the person targeted for matching visible on the skin surface of the person targeted for matching.
Abstract: Certain aspects of the present disclosure provide techniques for generating fine depth maps for images of a scene based on semantic segmentation and segment-based refinement neural networks. An example method generally includes generating, through a segmentation neural network, a segmentation map based on an image of a scene. The segmentation map generally comprises a map segmenting the scene into a plurality of regions, and each region of the plurality of regions is generally associated with one of a plurality of categories. A first depth map of the scene is generated through a first depth neural network based on a depth measurement of the scene. A second depth map of the scene is generated through a depth refinement neural network based on the segmentation map and the first depth map. One or more actions are taken based on the second depth map of the scene.
Type:
Grant
Filed:
February 4, 2022
Date of Patent:
March 25, 2025
Assignee:
QUALCOMM Incorporated
Inventors:
Hong Cai, Shichong Peng, Janarbek Matai, Jamie Menjay Lin, Debasmit Das, Fatih Murat Porikli
Abstract: Machine learning systems and methods that determine gaze direction by using face orientation information, such as facial landmarks, to modify eye direction information determined from images of the subject's eyes. System inputs include eye crops of the eyes of the subject, as well as face orientation information such as facial landmarks of the subject's face in the input image. Facial orientation information, or facial landmark information, is used to determine a coarse prediction of gaze direction as well as to learn a context vector of features describing subject face pose. The context vector is then used to adaptively re-weight the eye direction features determined from the eye crops. The re-weighted features are then combined with the coarse gaze prediction to determine gaze direction.
Abstract: A product defect detection method which includes acquiring a detection image of a product to be detected is provided. The method further includes dividing the detection image into a first preset number of detection blocks. A detection result of each detection block is obtained by inputting each detection block into a preset defect recognition model, and a detection result of the product is then determined according to the detection result and the position of each detection block in the detection image.
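The block-division-and-aggregation flow in the abstract above can be sketched as follows. This is an illustrative sketch only: the block layout (equal-width vertical strips), the stand-in `fake_defect_model`, and the brightness threshold are all assumptions, not the patented model.

```python
# Hypothetical sketch: split a grayscale image (list of pixel rows) into a
# preset number of equal-width blocks, classify each block with a stand-in
# "model", then aggregate the per-block results into a product result.
def split_into_blocks(image, n_blocks):
    width = len(image[0])
    step = width // n_blocks
    return [[row[i * step:(i + 1) * step] for row in image]
            for i in range(n_blocks)]

def fake_defect_model(block):
    # Stand-in for the preset defect recognition model: flag a block
    # if any pixel exceeds a brightness threshold.
    return any(p > 200 for row in block for p in row)

def detect_product(image, n_blocks=4):
    results = [fake_defect_model(b) for b in split_into_blocks(image, n_blocks)]
    return {"block_results": results, "defective": any(results)}
```

Because each block's result is kept alongside its index, the aggregation step also knows *where* in the image the defect was found, as the abstract requires.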
Abstract: A system for generating a depth output for an image is described. The system receives input images that depict the same scene, each input image including one or more potential objects. The system generates, for each input image, a respective background image and processes the background images to generate a camera motion output that characterizes the motion of the camera between the input images. For each potential object, the system generates a respective object motion output for the potential object based on the input images and the camera motion output. The system processes a particular input image of the input images using a depth prediction neural network (NN) to generate a depth output for the particular input image, and updates the current values of parameters of the depth prediction NN based on the particular depth output, the camera motion output, and the object motion outputs for the potential objects.
Type:
Grant
Filed:
September 13, 2023
Date of Patent:
March 25, 2025
Assignee:
Google LLC
Inventors:
Vincent Michael Casser, Soeren Pirk, Reza Mahjourian, Anelia Angelova
Abstract: A system for generating motion blur comprises: a frame camera, an event camera and an accumulator for accumulating event information from a plurality of events occurring within a window around the exposure time of an image frame in a plurality of event frames. A processor determines from the events in at least a first of the plurality of event frames, one or more areas of movement within the field of view of the event camera; determines from the events in at least a second of the plurality of event frames, a direction of movement for the one or more areas of movement; and applies blur in one or more areas of the image frame corresponding to the one or more determined areas of movement in accordance with at least the direction of movement for each of the one or more areas of movement to produce a blurred image.
Abstract: A joint imaging system based on unmanned aerial vehicle platform and an image enhancement fusion method are provided. The system includes a flying unit as a load platform, a shutter control system for controlling the operation of a load camera, a posture control system for recording the movement track and POS data of the flying unit, an airborne image transmission system for communicating with ground equipment, and an onboard computing unit with an image processing module for receiving the output images and performing image processing and fusion.
Type:
Grant
Filed:
June 6, 2024
Date of Patent:
March 18, 2025
Assignee:
CHINA UNIVERSITY OF MINING AND TECHNOLOGY
Inventors:
Zhenlu Shao, Xiaoxing Zhong, Rong Deng, Tong Yang, Guangchen Qu, Huisong Zhang, Zhaolong Wang, Chentao Ye, Tao Zhou
Abstract: An example apparatus includes at least one processor circuitry to execute or instantiate instructions to identify that a media file is scheduled to be accessed by a media device within a first time period after the media file was published by a media provider; select a first symbol to be inserted at a first symbol position and a second symbol to be inserted at a second symbol position to identify that the media file is to be accessed by the media device within the first time period, the first symbol position in a first bit sequence, the second symbol position in a second bit sequence; encode the first bit sequence in the media file on a first encoding layer of a multilayered watermark; and encode the second bit sequence in the media file on a second encoding layer of the multilayered watermark.
Type:
Grant
Filed:
October 31, 2023
Date of Patent:
March 18, 2025
Assignee:
The Nielsen Company (US), LLC
Inventors:
Wendell D. Lynch, Christen V. Nielsen, Alexander Topchy, Khaldun Karazoun, Jeremey M. Davis
Abstract: The present systems and methods permit users to select a custom color which serves as a target color, with millions of possible combinations, then filter a product search by the selected color. The target color and the product color are each expressible as a coordinate in a color space, and the distance between the target color coordinate and the product color coordinate is calculated to determine the color code distance for each product in the search result, potentially numbering in the thousands. Descriptions of the products are presented in the GUI, in order of the calculated color code distance for each product, where a smaller color code distance indicates a closer match to the target color code. The user can easily choose a product type and a desired or target color using a color picker, and receive a search result including only products sufficiently matching the target color.
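The distance-then-sort step described above can be illustrated in a few lines. This is a minimal sketch under assumptions the abstract does not fix: Euclidean distance in RGB space (the patent only says "a color space"), and a hypothetical `(name, (r, g, b))` product shape.

```python
import math

# Illustrative sketch (not the patented implementation): treat the target
# color and each product color as coordinates in RGB space, compute the
# Euclidean "color code distance", and order products by closeness.
def color_distance(c1, c2):
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(c1, c2)))

def rank_by_color(products, target, max_distance=None):
    # products: list of (name, (r, g, b)) pairs -- hypothetical shape
    ranked = sorted(products, key=lambda p: color_distance(p[1], target))
    if max_distance is not None:
        ranked = [p for p in ranked if color_distance(p[1], target) <= max_distance]
    return ranked
```

The optional `max_distance` cutoff models the "only products sufficiently matching the target color" filtering; a perceptual space such as CIELAB would give closer-to-human orderings than raw RGB.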
Abstract: A system and method allows a user to pay for a transaction by scanning an encoded image, for example, using a mobile device. The payor is anonymous to the party receiving payment.
Type:
Grant
Filed:
September 27, 2023
Date of Patent:
March 11, 2025
Assignee:
CHARLES SCHWAB & CO., INC.
Inventors:
Konstantinos P. Konstantinides, Naresh Sikha, Janardhan D. Kakarla, Eliel R. Johnson
Abstract: Methods, apparatus, systems, and articles of manufacture are disclosed to encrypt media for identification. An example apparatus accesses a mesh points matrix. Additionally, the example apparatus sets an upper limit equal to the larger of two values of a vector, respective ones of the two values equal to elements of the vector at indices equal to respective values of a coordinate pair of the mesh points matrix, and sets a lower limit equal to the smaller of the two values of the vector. The example apparatus also appends a value to an encryption vector, the value based on whether the lower limit is equal to zero, the value equal to the lowest integer not less than a quotient of a number based on a square of the upper limit and two, and encrypts an input matrix with the encryption vector.
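The upper/lower-limit bookkeeping above is concrete enough to sketch. Two loud assumptions: the abstract's "a number based on a square of the upper limit" is taken to be the square itself, and the value is assumed to be appended only when the lower limit is zero (the abstract says only that the value is "based on" that condition).

```python
import math

# Hedged sketch of the described encryption-vector construction.
# Assumptions (not stated in the abstract): the appended value is
# ceil(upper**2 / 2), and it is appended only when lower == 0.
def build_encryption_vector(vector, mesh_points):
    enc = []
    for (i, j) in mesh_points:             # coordinate pairs indexing `vector`
        a, b = vector[i], vector[j]
        upper, lower = max(a, b), min(a, b)
        if lower == 0:                     # assumed interpretation
            enc.append(math.ceil(upper ** 2 / 2))  # lowest integer >= upper^2 / 2
    return enc
```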
Abstract: An optical foot sole scanning apparatus, comprising: a) an element that is substantially in the shape of a plate or a wedge in the unloaded state and is made of a resilient material, having a foot placement side facing a foot sole of a human foot to be scanned when in use and having a scanner side, wherein the resilient material is at least partially light-transmissive, b) an optical scanner that is arranged on the scanner side of the element and is configured to emit electromagnetic radiation through the element at least in part onto the foot placement side of the element when in use and to record data when in use, and c) an evaluation unit that is connected to the optical scanner and is set up to perform a reconstruction of the three-dimensional shape of the foot sole from the data recorded by the optical scanner, and an orthopedic foot sole scanning system having same, an insole production apparatus having same, a method for ascertaining a three-dimensional shape of an insole having same, and a method for
Abstract: Systems, methods, and apparatus are provided for real-time adjustment of image quality parameters. The system includes a controller configured to: acquire an image frame, having a fixed-pixel region that defines a region-of-interest, from one or more imaging devices; apply image processing techniques to determine a modified fixed-pixel region that excludes non-relevant object pixels; alter one or more image quality parameters based on statistics of pixels in the modified fixed-pixel region; and provide the altered one or more image quality parameters to the one or more imaging devices for use with subsequent image frames; wherein the one or more imaging devices produce an image that is tuned, based on the altered one or more image quality parameters, to the portions of the image in the region-of-interest that does not include the non-relevant object pixels.
Type:
Grant
Filed:
May 10, 2022
Date of Patent:
March 4, 2025
Assignee:
GM GLOBAL TECHNOLOGY OPERATIONS LLC
Inventors:
Sai Vishnu Aluru, Justin Keith Francisco, Joseph G Machak, Eyal Lubelski
Abstract: An information processing device includes a recognition unit that recognizes, from a composite image generated using a code image and a cover image, the code image. The recognition unit recognizes the code image from the composite image using a recognizer trained so that a recognition result of the code image and a recognition result of the composite image are identical to each other.
Abstract: A method for identifying authenticity of an object is provided. The method includes maintaining, in an identification server system, a reference image of an original object, the reference image being provided to represent all equivalent original objects; receiving, in the identification server system, one or more input images of the object to be identified; and generating, by the identification server system, a target image from the one or more input images. The method further includes aligning, by the identification server system, the target image with the reference image and analysing, by the identification server system, the target image in relation to the aligned reference image for identifying authenticity of the object.
Abstract: Examples of the present disclosure describe systems and methods for detecting and remediating compression artifacts in multimedia items. In example aspects, a machine learning model is trained on a dataset related to compression artifacts and non-compression artifacts. Input data may then be collected by a data collection module and provided to a pattern recognition module. The pattern recognition module may extract visual and audio features of the multimedia item and provide the extracted features to the trained machine learning model. The trained machine learning model may compare the extracted features to the model, and a confidence value may be generated. The confidence value may be compared to a confidence value threshold. If the confidence value is equal to or exceeds the confidence threshold, then the input data may be classified as containing at least one compression artifact. Remedial action may subsequently be applied (e.g., boosting the system with technical resources).
Abstract: Techniques for securing client watermarks are described herein. In accordance with various embodiments, a server receives a request from a client device for authorizing rendering a media content item at the client device. A validation engine on the server obtains at least a portion of an image representing a screen capture of rendering the media content item including a client watermark and/or metadata associated with the rendering. The validation engine then validates the watermark based at least in part on at least the portion of the image and/or the metadata. If the client watermark is invalid, the server causes disruption of rendering the media content item at the client device. On the client side, a watermark engine captures the image of rendering the media content item including the client watermark and requests the server to validate the client watermark and renew the authorization based on the validation.
Type:
Grant
Filed:
March 29, 2022
Date of Patent:
February 18, 2025
Assignee:
Synamedia Limited
Inventors:
David Livshits, Steven Jason Epstein, Amir Hochbaum
Abstract: Some example embodiments relate to a super resolution scanning electron microscope (SEM) image implementing device and/or a method thereof. Provided is a super resolution SEM image implementing device comprising a processor configured to crop a low resolution SEM image to generate a first cropped image and a second cropped image, to upscale the first cropped image and the second cropped image to generate a first upscaled image and a second upscaled image, and to cancel noise from the first upscaled image and the second upscaled image to generate a first noise canceled image and a second noise canceled image.
Type:
Grant
Filed:
April 15, 2022
Date of Patent:
February 18, 2025
Assignee:
Samsung Electronics Co., Ltd.
Inventors:
Ho Joon Lee, Il Kwon Kim, Sang Gul Park, Chang Wook Jeong, Moon Hyun Cha, Sat Byul Kim
Abstract: A computer-implemented method for determining an abnormal structure in an examination region in conjunction with an X-ray recording of an X-ray system, comprising receiving input data, the input data relating to an X-ray recording data set of the X-ray recording having multiple data channels; applying a trained function to the input data, the trained function being based on a machine learning method and applied to at least two data channels to determine the abnormal structure and generate output data; and providing the output data, the output data including an abnormal structure of the examination region.
Abstract: An information processing apparatus sets a color mode in place of a monochrome mode on condition that the monochrome mode and multiplexing of additional information on a print target image are set as print settings based on input image data; generates, based on the input image data, color image data corresponding to printing in a color mode which represents a color of the monochromated print target image by a value of a color signal; performs, for the generated color image data, processing for multiplexing the additional information on the print target image; and causes a printing apparatus to print, in the color mode, a multiplexed image on which the additional information is multiplexed.
Abstract: A system includes a processor and a non-transitory computer readable medium coupled to the processor. The non-transitory computer readable medium includes code, that when executed by the processor, causes the processor to receive input from a user of a user device to generate an optimal payment location on an application display, generate a first boundary of the optimal payment location on the application display of the user device based upon a first motion of a payment enabled card in a first direction and generate a second boundary of the optimal payment location on the application display of the user device based upon a second motion of the payment enabled card in a second direction. The first boundary and the second boundary combine to form defining edges of the optimal payment location.
Type:
Grant
Filed:
December 8, 2022
Date of Patent:
February 11, 2025
Assignee:
Visa International Service Association
Inventors:
Kasey Chiu, Kuen Mee Summers, Whitney Wilson Gonzalez
Abstract: A method of printing authentication indicia by applying an at least amplitude-modulated halftone print in a detection zone to an object uses adjoining halftone cells, in each of which a halftone dot is printed from a matrix of printable halftone elements, individual tone values of the halftone print corresponding in each case to a halftone plane of a halftone mountain for a halftone dot. In this process, the assigned halftone plane of the halftone mountain is modified in the detection zone in a predetermined manner for a plurality of tone values of halftone dots to be printed, so that a predetermined matrix image of the halftone elements to be printed is assigned to it while the tone value of the print remains constant.
Abstract: Provided is a method of processing and transmitting three-dimensional data represented as a point cloud. The method of encoding three-dimensional (3D) data includes: generating a geometry image indicating position information of points, a texture image indicating color information of the points, and an occupancy map indicating occupancy information of the position information of the points in the geometry image, by projecting the points included in the 3D data onto a two-dimensional (2D) plane; generating a filtered occupancy map by performing filtering on the occupancy map; performing image padding on the geometry image and the texture image, based on the filtered occupancy map; and generating and outputting a bitstream including a padded geometry image, a padded texture image, a downsampled occupancy map, and information about the filtering performed on the occupancy map.
Abstract: A method for managing an augmented reality interface related to a content provided by a digital signage is provided. The method includes the steps of: acquiring a picture photographed by a device in relation to a content provided by a digital signage; and estimating identification information on the content with reference to a comparison target content related to the content and included in a content pool, and causing an augmented reality interface corresponding to the content to be displayed together with the photographed picture, on the basis of the identification information on the content.
Abstract: Methods and systems for analyzing an integrity of a roof covering are presented. An exemplary method includes observing, by an imaging unit, a respective lift of one or more discontinuous roof covering materials of a roof covering of a structure. The exemplary method further includes generating, by one or more processors, an overall roof integrity rating based on the respective lift of one or more of the discontinuous roof covering materials.
Type:
Grant
Filed:
August 17, 2023
Date of Patent:
January 28, 2025
Assignee:
STATE FARM MUTUAL AUTOMOBILE INSURANCE COMPANY
Abstract: A system for training neural networks that predict the parameters of a human mesh model is disclosed herein. The system includes at least one camera and a data processor configured to execute computer executable instructions for: receiving a first frame and a second frame of a video from the at least one camera; extracting first and second image data from the first and second frames of the video; inputting the sequence of frames of the video into a human mesh estimator module, the human mesh estimator module estimating mesh parameters from the sequence of frames of the video so as to determine a predicted mesh; and generating a training signal for input into the human mesh estimator module by using a two-dimensional keypoint loss module that compares a first set of two-dimensional image-based keypoints to a second set of two-dimensional model-based keypoints.
Abstract: Embodiments of the present disclosure provide a method, an electronic device, and a computer program product for data processing. In a method for data processing, a first electronic device processes data based on a first data processing model to generate an initial result. A data size of the initial result is smaller than a data size of the data. The first electronic device sends the initial result to a second electronic device. The initial result is adjusted at the second electronic device and based on a second data processing model to generate an adjusted result. The second electronic device has more computing resources than the first electronic device, the second data processing model occupies more computing resources than the first data processing model, and an accuracy of the adjusted result is higher than that of the initial result.
Abstract: A method for detecting an open flame includes: acquiring a plurality of frames of first images of a suspected target in a monitoring region; acquiring gray scale change features of the plurality of frames of first images and attribute features of the suspected target based on the plurality of frames of first images, the gray scale change features being configured to indicate temperature changes of the suspected target; and determining the suspected target in the monitoring region as an open flame, if the gray scale change features of the plurality of frames of first images and the attribute features of the suspected target both satisfy an open flame condition.
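The open-flame decision above combines two tests. A minimal sketch, assuming mean gray level per frame as the "gray scale change feature" and region size as the attribute feature; the thresholds and the specific features are invented stand-ins:

```python
# Illustrative sketch: flag a suspected region as an open flame when its
# mean gray level fluctuates strongly across frames (a proxy for the
# temperature-change indication) AND its size stays within plausible
# bounds (a stand-in attribute feature). All thresholds are made up.
def mean_gray(frame):
    return sum(sum(row) for row in frame) / (len(frame) * len(frame[0]))

def is_open_flame(frames, region_area, change_thresh=10.0,
                  min_area=5, max_area=500):
    means = [mean_gray(f) for f in frames]
    changes = [abs(b - a) for a, b in zip(means, means[1:])]
    gray_ok = max(changes) >= change_thresh      # flicker-like change
    attr_ok = min_area <= region_area <= max_area
    return gray_ok and attr_ok
```

Both conditions must hold, mirroring the abstract's "both satisfy an open flame condition".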
Abstract: A method for video processing via an artificial neural network includes receiving a video stream as an input at the artificial neural network. A residual is computed based on a difference between a first feature of a current frame of the video stream and a second feature of a previous frame of the video stream. One or more portions of the current frame of the video stream are processed based on the residual. Additionally, processing is skipped for one or more portions of the current frame of the video based on the residual.
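The residual-gated compute-skipping above can be sketched per block. Assumptions: frames are pre-split into flat feature blocks, the residual is an L1 difference, and the threshold is arbitrary; the real system computes residuals on learned features inside the network.

```python
# Minimal sketch of residual-gated processing: compare per-block features
# of the current frame against the previous frame and run the (stand-in)
# network only on blocks whose residual exceeds a threshold; reuse the
# cached output for the rest.
def process_frame(curr_blocks, prev_blocks, cached_out, heavy_fn, thresh=1.0):
    out = []
    for curr, prev, cached in zip(curr_blocks, prev_blocks, cached_out):
        residual = sum(abs(c - p) for c, p in zip(curr, prev))
        if residual > thresh:
            out.append(heavy_fn(curr))   # block changed: recompute
        else:
            out.append(cached)           # block static: skip processing
    return out
```

On mostly-static video this skips the expensive path for most blocks, which is the efficiency win the abstract is after.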
Abstract: Techniques and systems are provided for positioning mixed-reality devices within mixed-reality environments. The devices, which are configured to perform inside out tracking, transition between position tracking states in mixed-reality environments and utilize positional information from other inside out tracking devices that share the mixed-reality environments to identify/update positioning of the devices when they become disoriented within the environments and without requiring an extensive or full scan and comparison/matching of feature points that are detectable by the devices with mapped feature points of the maps associated with the mixed-reality environments. Such techniques can conserve processing and power consumption that would be required when performing a full or extensive scan and comparison of matching feature points. Such techniques can also enhance the accuracy and speed of positioning mixed-reality devices.
Type:
Grant
Filed:
October 20, 2023
Date of Patent:
January 21, 2025
Assignee:
Microsoft Technology Licensing, LLC
Inventors:
Erik Alexander Hill, Kathleen Carol Heasley, Jake Thomas Shields, Kevin James-Peddicord Luecke, Robert Neil Drury, Garret Paul Jacobson
Abstract: A computer-implemented system and method for uploading media to an inspection record via a hybrid cell telephony connection and WWW protocol connection are disclosed. Example embodiments include: initiating a telephony call by an authority representative (Auth) having an inspection record for which remote capturing or viewing of media by a Field Agent (FA) is needed for an inspection at a site; sending a notification to a device of the FA (FA device), the notification including a link to establish a data connection between the FA device and a public-facing web server/service (PFWS); establishing a connection with the PFWS and prompting the FA by use of the telephony call to capture one or more snapshots or videos (media) at the site using a camera of the FA device; and uploading the media to the PFWS for storage.
Abstract: A synesthesia-based encryption system and method (referred to as a system) includes a camera that captures an image and a transceiver communicatively coupled to the camera and a video-only network. The system includes a sensor that monitors a location and generates a sensor message. The sensor message include information that represents a state, a measurement, and/or a detection at that location. The system's processor maps colors to characters from the sensor message to generate a replacement image. In some systems, the sensor is encrypted first. The processor integrates the replacement image within the original image or some or all of the video frames captured by the camera to form a combined image(s) and causes a transceiver to transmit the combined image(s) across the video-only network to a destination.
Type:
Grant
Filed:
May 10, 2022
Date of Patent:
January 14, 2025
Assignee:
UT-Battelle, LLC
Inventors:
Peter L. Fuhr, Gary Hahn, Margaret M. Morganti, Jason K. Richards, William H. Monday
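The synesthesia-based encryption abstract above maps message characters to colors to build a replacement image. A toy sketch of one such mapping; the arithmetic below is invented for illustration and is not the patented scheme:

```python
# Hypothetical sketch: deterministically map each character of a sensor
# message to an RGB triple so the message can be rendered as a small
# "replacement image" embedded in a video frame.
def char_to_rgb(ch):
    code = ord(ch)
    return (code % 256, (code * 7) % 256, (code * 13) % 256)

def message_to_pixels(message):
    return [char_to_rgb(c) for c in message]
```

For ASCII input the first channel carries the character code directly, so a receiver that knows the mapping can recover the message from the pixel values.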
Abstract: Example methods, apparatuses, and/or articles of manufacture are disclosed that may implement, in whole or in part, techniques to process pixel values sampled from a multi color channel imaging device. In particular, methods and/or techniques are disclosed to process pixel samples for interpolating pixel values for one or more color channels.
Type:
Grant
Filed:
March 14, 2022
Date of Patent:
January 14, 2025
Assignee:
Arm Limited
Inventors:
Liam James O'Neil, Joshua James Sowerby, Samuel James Edward Martin, Matthew James Wash
Abstract: In various examples, systems and methods are disclosed herein for a vehicle command operation system that may use technology across multiple modalities to cause vehicular operations to be performed in response to determining a focal point based on a gaze of an occupant. The system may utilize sensors to receive first data indicative of an eye gaze of an occupant of the vehicle. The system may utilize sensors to receive second data indicative of other data from the occupant. The system may then calculate a gaze vector based on the data indicative of the eye gaze of the occupant. The system may determine a focal point based on the gaze vector. In response to determining the focal point, the system causes an operation to be performed in the vehicle based on the second data.
Abstract: A method and a device are provided for processing a 3D point cloud representing surroundings, which is generated by a sensor. Initially, starting cells are identified based on ascertained starting ground points within the 3D point cloud which meet at least one predefined ground point criterion with respect to a reference plane divided into cells. Thereafter, cell planes are ascertained for the respective starting cells of the reference plane. Thereafter, estimated cell planes and ground points are ascertained for candidate cells deviating from the starting cells based on the cell planes of the starting cells, which are subsequently converted into final cell planes. As a result of such a cell growth originating from the starting cells, the cells of the reference plane are iteratively run through and processed so that the 3D point cloud is reliably classifiable into ground points and object points based on this method.
Type:
Grant
Filed:
May 13, 2022
Date of Patent:
January 7, 2025
Assignee:
ROBERT BOSCH GMBH
Inventors:
Chengxuan Fu, Jasmine Richter, Dennis Hardenacke, Ricardo Martins Costa
Abstract: A method and apparatus embed a digital watermark that is not visible to the human eye in single-sensor digital camera images (often called 'raw' images) from a pixel-array. The raw image is transformed to generate preprocessed image coefficients; a watermark message is encrypted using a first key; the encrypted watermark message is randomized using a second key to form a watermark; and the watermark is embedded in randomly selected preprocessed image coefficients.
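The two-key pipeline (encrypt with key one, randomize with key two) can be sketched as below. The cipher is an assumption: the abstract does not name one, so an XOR keystream derived from the key stands in, and the randomization is a key-seeded permutation.

```python
import hashlib
import random

# Hedged sketch of the two-key watermark pipeline. The XOR keystream and
# the seeded-shuffle permutation are illustrative stand-ins, not the
# patented cipher or randomizer.
def xor_encrypt(message: bytes, key: str) -> bytes:
    stream = hashlib.sha256(key.encode()).digest()
    return bytes(b ^ stream[i % len(stream)] for i, b in enumerate(message))

def randomize(data: bytes, key: str) -> bytes:
    order = list(range(len(data)))
    random.Random(key).shuffle(order)    # deterministic for a given key
    return bytes(data[i] for i in order)

def make_watermark(message: bytes, key1: str, key2: str) -> bytes:
    return randomize(xor_encrypt(message, key1), key2)
```

Both stages are deterministic and keyed, so a holder of both keys can invert them; XOR encryption in particular is its own inverse under the same key.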
Abstract: The present invention relates to a system for determining the colors of different objects for comparison. The system may include user computing devices connected to a server storing user data, digital photographs, and texture data. A user computing device operates an app, through which a user may compare the colors of objects in different photos. The server may be programmed to analyze the matching in color between the different objects and return a color matching, expressed as a matching percentage from 0% to 100%, for display on the user computing device. The system may be used to send and receive messages between users and contacts. The system may also be used to visualize objects as they would appear if they were made of a different color or texture. Methods of use are also disclosed.
Abstract: The present technology pertains to determining whether traffic signal cameras of an AV are miscalibrated. One technique for determining if a given traffic signal camera is miscalibrated includes obtaining an image from a traffic signal camera, marking the location of a traffic signal in the image with a bounding box, and determining if the light sources of the traffic signal are within the bounding box. If not, an alternate reason check is performed to determine whether an alternate reason exists for why the light source of the traffic signal is not within the bounding box. If no such reason exists, then the traffic signal camera may be miscalibrated. An additional calibration check may be performed using images from two traffic signal cameras of the AV each having four common light sources, and performing a homographic computation using the location of the light sources in the images.
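The core containment check in the miscalibration flow above is simple geometry. A simplified sketch, with a hypothetical `calibration_flag` helper mirroring the described decision order (box check first, then the alternate-reason check):

```python
# Simplified sketch of the containment check: given a traffic-signal
# bounding box and detected light-source centers, report whether every
# light source falls inside the box. Coordinates are (x, y) pixels.
def lights_inside_bbox(bbox, lights):
    x0, y0, x1, y1 = bbox                 # top-left and bottom-right corners
    return all(x0 <= x <= x1 and y0 <= y <= y1 for x, y in lights)

def calibration_flag(bbox, lights, alternate_reason=False):
    # Mirrors the described flow: only flag miscalibration when the
    # lights are outside the box AND no alternate reason explains it.
    if lights_inside_bbox(bbox, lights):
        return "ok"
    return "occluded-or-other" if alternate_reason else "possibly-miscalibrated"
```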
Abstract: An image processing device includes: an MPH information obtainer that obtains aperture pattern information corresponding to the pattern of at least one aperture of a mask included in an image capturing device; a noise adder that obtains a captured image from the image capturing device and adds, to the captured image, noise determined according to the aperture pattern information; and a transmitter that outputs the noise-added captured image.
Type:
Grant
Filed:
February 14, 2023
Date of Patent:
January 7, 2025
Assignee:
PANASONIC INTELLECTUAL PROPERTY CORPORATION OF AMERICA
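The noise adder in the entry above adds noise "determined according to the aperture pattern information". One simple reading, sketched here as an assumption (the Gaussian model and pattern-seeded RNG are illustrative, not the device's actual scheme), is noise whose random field is derived deterministically from the pattern:

```python
# Sketch: add zero-mean noise whose RNG seed is tied to the aperture
# pattern, so the same pattern always yields the same noise field.
import hashlib
import random

def add_pattern_noise(image, aperture_pattern: str, sigma: float = 2.0):
    """Return a copy of the image with pattern-determined noise added."""
    digest = hashlib.sha256(aperture_pattern.encode()).digest()
    rng = random.Random(int.from_bytes(digest[:8], "big"))
    return [[px + rng.gauss(0.0, sigma) for px in row] for row in image]

captured = [[100.0, 101.0], [99.0, 102.0]]
noisy = add_pattern_noise(captured, "pinhole-3x3")
```

Seeding from the pattern makes the added noise reproducible for a given aperture configuration, which a receiver holding the same pattern information could exploit.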
Abstract: An image measurement apparatus includes: a setting section which sets a measurement element having a geometric shape; a storage section which stores in advance a correspondence relationship between a shape type or a size of the measurement element settable by the setting section, and positions and the number of contact target positions of the touch probe to be arranged with respect to the measurement element; and a control section which specifies a plurality of contact target positions of the touch probe based on a position of the measurement element on the workpiece image set by the setting section, the shape type or size of the measurement element, and the correspondence relationship stored in advance during the measurement execution by the touch probe, and relatively moves a stage or the touch probe to move the touch probe sequentially to the plurality of specified contact target positions.
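The stored correspondence above maps an element's shape type and size to the positions and number of touch-probe contact targets. A minimal sketch for one shape type, assuming (as an illustration, not the apparatus's actual rule) that a circular element gets four evenly spaced contacts on its circumference:

```python
# Sketch: derive touch-probe contact target positions for a circular
# measurement element from its position and size.
import math

def circle_contact_positions(cx: float, cy: float, radius: float, n: int = 4):
    """Place n contact targets evenly around a circular element."""
    return [(cx + radius * math.cos(2 * math.pi * k / n),
             cy + radius * math.sin(2 * math.pi * k / n))
            for k in range(n)]

pts = circle_contact_positions(10.0, 20.0, 5.0)
```

The control section would then move the probe sequentially through `pts`; other shape types (lines, rectangles) would carry their own position rules in the same table.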
Abstract: In various examples, to support training a deep neural network (DNN) to predict a dense representation of a 3D surface structure of interest, a training dataset is generated using a simulated environment. For example, a simulation may be run to simulate a virtual world or environment, render frames of virtual sensor data (e.g., images), and generate corresponding depth maps and segmentation masks (identifying a component of the simulated environment such as a road). To generate input training data, 3D structure estimation may be performed on a rendered frame to generate a representation of a 3D surface structure of the road. To generate corresponding ground truth training data, a corresponding depth map and segmentation mask may be used to generate a dense representation of the 3D surface structure.
Type:
Grant
Filed:
October 28, 2021
Date of Patent:
January 7, 2025
Assignee:
NVIDIA Corporation
Inventors:
Kang Wang, Yue Wu, Minwoo Park, Gang Pan
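The ground-truth step in the entry above combines a rendered frame's depth map with its segmentation mask to keep a dense 3D structure for the road component only. A toy sketch, where the `ROAD` class id and the list-of-lists layout are assumptions:

```python
# Sketch: mask a simulated depth map with a segmentation mask so the
# dense ground truth covers only the road surface.
ROAD = 1  # assumed class id for the road component

def dense_road_structure(depth_map, seg_mask):
    """Keep depth where the mask says 'road'; None elsewhere."""
    return [[d if s == ROAD else None
             for d, s in zip(drow, srow)]
            for drow, srow in zip(depth_map, seg_mask)]

depth = [[5.0, 5.1, 40.0],
         [4.8, 4.9, 41.0]]   # meters, from the simulator
mask = [[1, 1, 0],
        [1, 1, 0]]           # road on the left, background on the right
gt = dense_road_structure(depth, mask)
```

The masked map is what the DNN is trained to predict densely, paired with the sparser 3D structure estimate computed from the rendered frame as input.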
Abstract: A device for graphical rendering includes a memory and processing circuitry. The processing circuitry is configured to receive sample values, transmitted by one or more servers, of samples of an object, wherein the sample values are generated by the one or more servers from inputting coordinates into a trained neural network and outputting, from the trained neural network, the sample values of the samples, store the sample values in the memory, and render image content of the object based on the sample values.
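The split described above, a server evaluating a trained coordinate network and a client storing the sample values and rendering from them, can be sketched as below. The linear "network" and the dictionary cache are stand-ins for the trained neural network and the device memory.

```python
# Sketch: server evaluates a trained coordinate network at sample points;
# the client caches the returned sample values and renders from the cache.
def trained_network(x: float, y: float) -> float:
    """Stand-in for a trained neural field: coordinates -> sample value."""
    return 0.5 * x + 0.25 * y

def server_sample(coords):
    """Server side: run coordinates through the network, ship the values."""
    return {c: trained_network(*c) for c in coords}

class Client:
    def __init__(self):
        self.memory = {}  # stored sample values

    def receive(self, samples):
        self.memory.update(samples)

    def render(self, coord):
        """Render image content of the object from stored sample values."""
        return self.memory[coord]

client = Client()
client.receive(server_sample([(0.0, 0.0), (1.0, 2.0)]))
```

The point of the arrangement is that the client never needs the network weights, only the transmitted sample values.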
Abstract: An observation platform comprises a first computer system associated with an organization, a first communication device associated with a first user, and a second communication device and a second computer system, both communicatively coupled with the first computer system. The second computer system receives: via the first computer system and the first communication device, a signal with a characteristic corresponding to an audible source; and, from the first computer system, context information derived at least in part from a speech-to-text analysis of the characteristic, where the context information represents one or more attributes of the first user and comprises a query about the organization made by the first user. The second computer system compiles a response to the query relative to the organization and responds, via the first computer system and the second communication device, to the first communication device regarding the query based on the compiled response.
Abstract: Illustrated herein are systems and methods of transferring physical currency into electronic credit that becomes electronically available. One method includes remotely and quickly depositing non-negotiable instruments, such as cash, by communicating information, such as an image of the currency, about the instrument to a depositor. An exemplary system includes computing devices and a capture module with a retrieval engine, a transmission engine, an inspection engine, and a conferral engine. The capture module receives information about a non-negotiable instrument, and an inspection engine analyzes the information to confirm whether the unit of currency is authentic.
Type:
Grant
Filed:
September 24, 2021
Date of Patent:
December 31, 2024
Assignee:
United Services Automobile Association (USAA)
Abstract: Embodiments relate to a non-fungible physical (NFP) item. The non-fungible physical (NFP) item comprises an identifier. The identifier is embedded and layered within the non-fungible physical item in an unplanned pattern. The identifier in the unplanned pattern is configured to provide high security against counterfeiting of the non-fungible physical (NFP) item. The identifier comprises at least one of a random marker and a unique marker. The unplanned pattern comprises at least one of a random pattern and a unique pattern. Further, the non-fungible physical (NFP) item is registered as a non-fungible token on a blockchain. The NFP item is then paired with the non-fungible token for enabling two-way mutual authentication and enhanced authenticity. The pairing of the NFP item with the non-fungible token enables tracking the condition, provenance, and grading of the NFP item.
Abstract: A system and method for generating a play prediction for a team is disclosed herein. A computing system retrieves trajectory data for a plurality of plays from a data store. The computing system generates a predictive model using a variational autoencoder and a neural network by generating one or more input data sets, learning, by the variational autoencoder, to generate a plurality of variants for each play of the plurality of plays, and learning, by the neural network, a team style corresponding to each play of the plurality of plays. The computing system receives trajectory data corresponding to a target play. The predictive model generates a likelihood of a target team executing the target play by determining a number of target variants that correspond to a target team identity of the target team.
Type:
Grant
Filed:
January 13, 2023
Date of Patent:
December 24, 2024
Assignee:
Stats LLC
Inventors:
Sujoy Ganguly, Long Sha, Jennifer Hobbs, Xinyu Wei, Patrick Joseph Lucey
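The final scoring step in the entry above turns a count of target variants matching the target team's identity into a likelihood. A toy sketch of that counting step, where the variant records stand in for the output of the trained variational autoencoder:

```python
# Sketch: likelihood of a target team executing a target play, computed
# as the share of generated play variants attributed to that team's style.
def play_likelihood(variants, target_team: str) -> float:
    """Count variants matching the target team identity; normalize."""
    matches = sum(1 for v in variants if v["team_style"] == target_team)
    return matches / len(variants)

variants = [{"team_style": "HOU"}, {"team_style": "HOU"},
            {"team_style": "GSW"}, {"team_style": "HOU"}]
lik = play_likelihood(variants, "HOU")
```

In the described system the variants come from the VAE and the style labels from the neural network that learned each team's style; here both are supplied directly as data.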
Abstract: A video quality improvement method may comprise: inputting a structure feature map, converted from a current target frame by a first convolution layer, to a first multi-task unit and a second multi-task unit, which is connected to an output side of the first multi-task unit, among the plurality of multi-task units; inputting, to the first multi-task unit, a main input obtained by adding the structure feature map to a feature space converted by a second convolution layer from a result of concatenating, in a channel dimension, a previous target frame and a correction frame of the previous frame; and inputting the current target frame to an Nth multi-task unit connected to an end of the output side of the second multi-task unit, wherein the Nth multi-task unit outputs a correction frame of the current target frame, and machine learning of the video quality improvement model is performed using an objective function calculated through the correction frame of the current target frame.
Type:
Grant
Filed:
October 8, 2021
Date of Patent:
December 24, 2024
Assignee:
POSTECH RESEARCH AND BUSINESS DEVELOPMENT FOUNDATION
Inventors:
Seung Yong Lee, Jun Yong Lee, Hyeong Seok Son, Sung Hyun Cho
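The data flow in the entry above can be sketched as below. The averaging "units" are toy stand-ins for the learned convolution layers and multi-task units; only the wiring (structure feature map into the early units, previous frame plus its correction into the main input, current frame into the final unit) follows the abstract.

```python
# Sketch of the wiring: structure features from the current frame feed the
# chain; a main input built from the previous frame and its correction
# starts the chain; the Nth unit also sees the current frame and emits the
# corrected current frame. Frames are 1-D lists of floats here.
def conv_stub(frame):
    """Stand-in for the first convolution layer (identity here)."""
    return list(frame)

def multi_task_unit(features, structure):
    """Toy unit: blend incoming features with the structure feature map."""
    return [0.5 * f + 0.5 * s for f, s in zip(features, structure)]

def correct_frame(current, previous, prev_correction, n_units: int = 3):
    structure = conv_stub(current)                      # first conv layer
    main = [0.5 * (p + c) + s for p, c, s in
            zip(previous, prev_correction, structure)]  # second conv + add
    feats = main
    for _ in range(n_units - 1):
        feats = multi_task_unit(feats, structure)
    # Nth unit additionally receives the current target frame
    return [0.5 * f + 0.5 * c for f, c in zip(feats, current)]

out = correct_frame([1.0, 2.0], [0.9, 1.9], [1.0, 2.0])
```

During training, the objective function would compare `out` against a clean reference frame; here the chain only demonstrates the routing of the three inputs.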