Feature Extraction Patents (Class 382/190)
  • Patent number: 11122218
    Abstract: Systems and methods are described for determining that the user interaction with a display of a computing device during display of a video comprising a sequence of frames indicates a region of interest in a current frame of the sequence of frames of the displayed video. For each frame of the sequence of frames after the current frame, the frame is cropped to generate a cropped frame comprising a portion of the frame including the region of interest in the frame, the cropped frame is enlarged based on a display size corresponding to an angle or orientation of the computing device during display of the video, and the enlarged cropped frame replaces the frame such that the enlarged cropped frame is displayed in the sequence of frames of the video on the display of the computing device instead of the frame.
    Type: Grant
    Filed: December 20, 2019
    Date of Patent: September 14, 2021
    Assignee: Snap Inc.
    Inventors: Jia Li, Nathan Litke, Jose Jesus (Joseph) Paredes, Rahul Bhupendra Sheth, Daniel Szeto, Ning Xu, Jianchao Yang
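A minimal sketch (not Snap's implementation) of the crop-and-enlarge idea described in patent 11122218: once a region of interest (ROI) is known, each subsequent frame is cropped around the ROI and scaled up to the current display size. The frame data, ROI tuple format, and nearest-neighbour resize below are assumptions for illustration only.

```python
import numpy as np

def crop_and_enlarge(frame, roi, display_hw):
    """frame: HxWx3 array, roi: (x, y, w, h), display_hw: (height, width)."""
    x, y, w, h = roi
    crop = frame[y:y + h, x:x + w]                 # keep only the region of interest
    dh, dw = display_hw
    rows = np.arange(dh) * crop.shape[0] // dh     # nearest-neighbour index maps
    cols = np.arange(dw) * crop.shape[1] // dw
    return crop[rows][:, cols]                     # enlarged crop replaces the frame

frames = [np.random.randint(0, 255, (720, 1280, 3), dtype=np.uint8) for _ in range(3)]
roi = (400, 200, 320, 180)        # ROI indicated by a user interaction with the display
display = (1080, 1920)            # display size depends on device angle/orientation
enlarged = [crop_and_enlarge(f, roi, display) for f in frames]
print(enlarged[0].shape)          # (1080, 1920, 3)
```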
  • Patent number: 11113532
    Abstract: Disclosed herein is an artificial intelligence apparatus for recognizing at least one object, comprising: a memory configured to store a plurality of recognition models for generating identification information corresponding to the object from image data; and a processor configured to: obtain image data for the object, generate first identification information corresponding to the object from the image data using a default recognition model composed of at least one or more of the plurality of recognition models, measure a confidence level for the first identification information, obtain the first identification information as a recognition result of the object if the confidence level is equal to or greater than a first reference value, and obtain second identification information corresponding to the object from the image data as a recognition result of the object using a compound recognition model composed of at least one or more of the plurality of recognition models if the measured confidence level is less than the first reference value.
    Type: Grant
    Filed: April 16, 2019
    Date of Patent: September 7, 2021
    Assignee: LG ELECTRONICS INC.
    Inventors: Jaehong Kim, Heeyeon Choi
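A minimal sketch of the confidence-gated cascade described in patent 11113532: run a cheap default model first and fall back to a heavier compound model only when the confidence is below a reference value. The model interfaces and the threshold value are assumptions, not LG's implementation.

```python
from typing import Callable, Tuple

def recognize(image,
              default_model: Callable[[object], Tuple[str, float]],
              compound_model: Callable[[object], Tuple[str, float]],
              first_reference_value: float = 0.8) -> str:
    label, confidence = default_model(image)     # first identification information
    if confidence >= first_reference_value:
        return label                             # confident enough: use the default result
    label, _ = compound_model(image)             # second identification information
    return label

# Toy stand-ins for the two recognition models.
result = recognize(
    image=None,
    default_model=lambda img: ("cat", 0.55),
    compound_model=lambda img: ("tabby cat", 0.91),
)
print(result)  # "tabby cat", because the default model was not confident enough
```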
  • Patent number: 11113567
    Abstract: Described are systems and methods for generating training data that is used to train a machine learning system to detect moving objects represented in sensor data. The system and methods utilize position data received from a target vehicle to determine data points within sensor data that represents that target vehicle. For example, a station at a known location may receive Automatic Dependent Surveillance-Broadcast (“ADS-B”) data (position data) corresponding to a target vehicle that is within the field of view of a station sensor, such as a camera. The position data may then be correlated with the sensor data and projected into the sensor data to determine data points within the sensor data that represent the target vehicle. Those data points are then labeled to indicate the location, size, and/or shape of the target vehicle as represented in the sensor data, thereby producing training data that may be provided to train a machine learning algorithm or system to detect moving objects, such as aircraft.
    Type: Grant
    Filed: January 28, 2019
    Date of Patent: September 7, 2021
    Assignee: Amazon Technologies, Inc.
    Inventors: Jean-Guillaume Durand, Pradeep Krishna Yarlagadda, Ishay Kamon, Francesco Callari
  • Patent number: 11113519
    Abstract: A character recognition method for a moving image includes extracting a region corresponding to a character string included in each frame of a moving image to be recognized. The method reads a character string from the extracted region and corrects the character string read from each frame, based on appearance rule information that specifies an appearance rule of a character string corresponding to the order of the frames, such that the appearance order of the read character string conforms to the appearance rule.
    Type: Grant
    Filed: July 3, 2019
    Date of Patent: September 7, 2021
    Assignee: FUJITSU LIMITED
    Inventors: Yusuke Hamada, Misato Hishi
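A hedged sketch of the idea in patent 11113519: correct per-frame OCR readings of a character string using an appearance rule tied to frame order. The assumed rule here, "the string is a counter that never decreases", is an illustrative choice, not Fujitsu's specific rule.

```python
def correct_readings(per_frame_readings):
    corrected, last_value = [], None
    for text in per_frame_readings:
        try:
            value = int(text)
        except ValueError:
            value = None                      # unreadable or garbled frame
        if value is None or (last_value is not None and value < last_value):
            value = last_value                # violates the appearance rule: reuse last value
        corrected.append(value)
        last_value = value
    return corrected

print(correct_readings(["12", "13", "l3", "14", "3", "15"]))
# [12, 13, 13, 14, 14, 15] - misreads "l3" and "3" are replaced to preserve the order rule
```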
  • Patent number: 11107178
    Abstract: Systems and methods for implementing radial density masking graphics rendering for use in applications such as head mounted displays (“HMDs”) are described. Exemplary algorithms are disclosed, according to which image resolution varies within an image depending on the distance of a particular point on the image from one or more fixation points. Reconstruction algorithms according to certain embodiments include three stages: (1) hole filling; (2) cross-cell blending; and (3) Gaussian blur.
    Type: Grant
    Filed: April 22, 2019
    Date of Patent: August 31, 2021
    Assignee: Valve Corporation
    Inventors: Alex Vlachos, Kenneth Barnett
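A minimal sketch, not Valve's shader code, of radial density masking as described in patent 11107178: sample density (here, the probability of shading a pixel) falls off with distance from a fixation point, and the dropped pixels become the holes that a reconstruction pass (hole filling, cross-cell blending, Gaussian blur) later fills in. The falloff curve and thresholds are assumptions.

```python
import numpy as np

def radial_density_mask(height, width, fixation, inner_radius, outer_radius):
    ys, xs = np.mgrid[0:height, 0:width]
    dist = np.hypot(ys - fixation[0], xs - fixation[1])
    # Full density inside inner_radius, decaying linearly to 1/4 density at outer_radius.
    density = np.clip(1.0 - (dist - inner_radius) / (outer_radius - inner_radius), 0.25, 1.0)
    # Keep a pixel with probability equal to its local density.
    return np.random.random((height, width)) < density

mask = radial_density_mask(1080, 1200, fixation=(540, 600), inner_radius=200, outer_radius=600)
print(mask.mean())  # fraction of pixels actually shaded for this frame
```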
  • Patent number: 11100319
    Abstract: A system performs optical character recognition (OCR) on an image displaying a portion of an object. An image classification system identifies the object in the image, based on which one or more object detection models identify labels associated with the object within the image. The system determines text of the identified labels using OCR, and analyzes the OCR resultant text for discrepancies and/or inaccuracies. In response to identifying a discrepancy, the system provides a recommendation for improving the accuracy of the OCR resultant text.
    Type: Grant
    Filed: January 27, 2020
    Date of Patent: August 24, 2021
    Assignee: salesforce.com, inc.
    Inventors: Dennis Schultz, Daniel Thomas Harrison, Christopher Anthony Kemp, Michael A. Salem
  • Patent number: 11100145
    Abstract: A method includes: receiving initial input from a client at least partially specifying one or more characteristics sought by the client; selecting a set of images from an image database for output to the client; and determining after each set of images whether an end condition has occurred. The method also includes, until the end condition has occurred: responsive to each set of images output to the client, receiving additional input from the client further specifying the one or more characteristics sought by the client; and responsive to each input received from the client, selecting another set of images for presentation to the client, said set of images being determined to at least partially satisfy the one or more characteristics specified by all input received from the client, said determination being based at least in part on side information for respective images for at least the set of images.
    Type: Grant
    Filed: October 17, 2019
    Date of Patent: August 24, 2021
    Assignee: International Business Machines Corporation
    Inventors: Xiaoxiao Guo, Hui Wu, Rogerio Feris
  • Patent number: 11093561
    Abstract: In one embodiment, a method includes receiving a query comprising a query content object and constraints, generating a feature vector representing the query content object, accessing a sparse graph comprising nodes corresponding to candidate content objects represented by compact codes and links connecting the nodes, selecting an entry node, selecting similar content objects iteratively by identifying linked nodes of the entry node, decompressing the compact codes representing candidate content objects to generate feature vectors, selecting zero or more similar content objects based on a comparison between the feature vector representing the query content object and the feature vectors representing the candidate content objects, returning the selected similar content objects if a completion condition is satisfied, else repeating the iterative selection by using a linked node corresponding to a most similar content object as the entry node, and sending instructions for presenting one or more of the selected similar content objects.
    Type: Grant
    Filed: December 10, 2018
    Date of Patent: August 17, 2021
    Assignee: Facebook, Inc.
    Inventors: Matthys Douze, Alexandre Sablayrolles, Hervé Jegou
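A toy sketch of the iterative graph-search loop described in patent 11093561. The graph layout, the fake "compact code" (just a quantized vector), and plain L2 distance are assumptions; real systems use product quantization and far more elaborate graphs, and this only illustrates the entry-node hopping.

```python
import numpy as np

def decompress(code):
    return code.astype(np.float32) / 255.0          # stand-in for real compact-code decoding

def graph_search(query, codes, neighbors, entry, hops=10):
    best = entry
    best_dist = np.linalg.norm(query - decompress(codes[entry]))
    for _ in range(hops):
        improved = False
        for node in neighbors[best]:                # linked nodes of the current entry node
            d = np.linalg.norm(query - decompress(codes[node]))
            if d < best_dist:
                best, best_dist, improved = node, d, True
        if not improved:                            # completion condition: no closer neighbor
            break
    return best, best_dist

rng = np.random.default_rng(0)
codes = rng.integers(0, 256, size=(1000, 64), dtype=np.uint8)       # compact codes per node
neighbors = {i: list(rng.integers(0, 1000, size=8)) for i in range(1000)}
query = rng.random(64).astype(np.float32)
print(graph_search(query, codes, neighbors, entry=0))
```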
  • Patent number: 11086406
    Abstract: A hand interaction system can use a three-state model to differentiate between normal hand movements, such as reaching for an object, and hand input gestures. The three-state model can specify a sequence of states including: 1) a neutral state, 2) a tracking state, and 3) an active state. In the neutral state, the hand interaction system monitors for a gesture signaling a transition to the tracking state but does not otherwise interpret a gesture corresponding to the active state as input. Once a gesture causes a transition to the intermediate tracking state, the hand interaction system can recognize a further active state transition gesture, allowing active state interaction. Thus, the monitoring for the intermediate tracking state provides a gating mechanism, making it less likely that the hand interaction system will interpret hand movements as input when not so intended by the user.
    Type: Grant
    Filed: September 20, 2019
    Date of Patent: August 10, 2021
    Assignee: Facebook Technologies, LLC
    Inventors: Jonathan Ravasz, Etienne Pinchon, Adam Varga, Jasper Stevens, Robert Ellis, Jonah Jones
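A small state-machine sketch of the neutral → tracking → active gating described in patent 11086406. The gesture names ("pinch_start", "tap", "pinch_release") are illustrative placeholders, not the actual gesture vocabulary.

```python
NEUTRAL, TRACKING, ACTIVE = "neutral", "tracking", "active"

def step(state, gesture):
    if state == NEUTRAL:
        # Only the tracking-transition gesture is interpreted; everything else is ignored.
        return TRACKING if gesture == "pinch_start" else NEUTRAL
    if state == TRACKING:
        if gesture == "pinch_release":
            return NEUTRAL                  # user backed out before activating
        return ACTIVE if gesture == "tap" else TRACKING
    if state == ACTIVE:
        return NEUTRAL if gesture == "pinch_release" else ACTIVE
    return NEUTRAL

state = NEUTRAL
for g in ["wave", "tap", "pinch_start", "tap", "drag", "pinch_release"]:
    state = step(state, g)
    print(g, "->", state)
# A "tap" while neutral is ignored; the same "tap" after "pinch_start" activates input.
```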
  • Patent number: 11082731
    Abstract: Generally discussed herein are devices, systems, and methods for privacy-preserving video. A method can include identifying which classes of objects are present in video data, for each class of the classes identified in the video data, generating respective video streams that include objects of the class and exclude objects not of the class, and providing each of the respective video streams to a content distribution network.
    Type: Grant
    Filed: January 22, 2020
    Date of Patent: August 3, 2021
    Assignee: Microsoft Technology Licensing, LLC
    Inventors: Landon Prentice Cox, Paramvir Bahl, Sandeep Maurice Dsouza, Lixiang Ao
  • Patent number: 11077814
    Abstract: Provided is an occupant observation device including an imager configured to capture an image of a head of an occupant of a vehicle; and an eye detector configured to detect at least a part of eyes of the occupant in the image captured by the imager, in which the eye detector sets an eye closer to the imager among the eyes of the occupant as a detection target.
    Type: Grant
    Filed: March 11, 2020
    Date of Patent: August 3, 2021
    Assignee: HONDA MOTOR CO., LTD.
    Inventors: Kota Saito, Seungho Choi
  • Patent number: 11080977
    Abstract: A management system includes: an information storage member that stores individual identification information of a management target; an information reading device that reads individual identification information I from the information storage member located within a predetermined distance from the information reading device; an imaging device that generates continuous image data by continuously capturing images of at least an area where the information reading device can read the individual identification information I from the information storage member; a storage device that stores the continuous image data; a control device that acquires an event occurrence time at which an event related to the individual identification information I read by the information reading device has occurred, and sets in the continuous image data a playback start time corresponding to the event occurrence time; and a display device capable of displaying an image based on the continuous image data.
    Type: Grant
    Filed: February 21, 2020
    Date of Patent: August 3, 2021
    Inventor: Hiroshi Aoyama
  • Patent number: 11082249
    Abstract: Systems and methods for determining locations and configuring controllable devices are provided. Example systems and methods include determining a location estimate for a computing device and capturing image data, by the computing device, of a physical space that includes a controllable device performing an identification action. The example systems and methods may also include identifying the controllable device in the image data based at least in part on the identification action and determining configuration information for the controllable device. The configuration information may be based at least in part on the location estimate for the computing device.
    Type: Grant
    Filed: April 2, 2019
    Date of Patent: August 3, 2021
    Assignee: GOOGLE LLC
    Inventors: Diane Wang, Paulo Coelho, Benjamin Hylak
  • Patent number: 11080549
    Abstract: Example systems and methods for selection of video frames using a machine learning (ML) predictor program are disclosed. The ML predictor program may generate predicted cropping boundaries for any given input image. Training raw images associated with respective sets of training master images indicative of cropping characteristics for the training raw image may be input to the ML predictor, and the ML predictor program trained to predict cropping boundaries for a raw image based on expected cropping boundaries associated with the training master images. At runtime, the trained ML predictor program may be applied to runtime raw images in order to generate respective sets of runtime cropping boundaries corresponding to different cropped versions of the runtime raw image. The runtime raw images may be stored with information indicative of the respective sets of runtime boundaries.
    Type: Grant
    Filed: January 22, 2020
    Date of Patent: August 3, 2021
    Assignee: Gracenote, Inc.
    Inventors: Aneesh Vartakavi, Casper Lützhøft Christensen
  • Patent number: 11080845
    Abstract: An image processing apparatus and method thereof are provided. The image processing apparatus stores at least a reference image and performs the following operations: (a) receiving an image, (b) determining a plurality of representative keypoints for the image, such as determining the representative keypoints by a density restriction based method, (c) finding out that a matched area in the image corresponds to a first reference image according to the representative keypoints, (d) determining that a matched number between the representative keypoints and a plurality of reference keypoints of the first reference image is less than a threshold, and (e) storing the matched area in the image processing apparatus as a second reference image.
    Type: Grant
    Filed: January 8, 2020
    Date of Patent: August 3, 2021
    Assignee: HTC CORPORATION
    Inventor: Shu-Jhen Fan Jiang
  • Patent number: 11082719
    Abstract: There are disclosed various methods, apparatuses and computer program products for video encoding and decoding. In some embodiments a bitstream comprising a coded first-view picture and a coded second-view picture is encoded or encapsulated. The coded second-view picture represents a smaller field of view than the coded first-view picture, wherein decoding of the coded first-view picture results in a decoded first-view picture, and decoding of the coded second-view picture results in a decoded second-view picture. An indication is inserted in or along the bitstream that a reconstructed second-view picture comprises the decoded second-view picture and at least one region of the decoded first-view picture, wherein the reconstructed second-view picture represents the same field of view as the decoded first-view picture.
    Type: Grant
    Filed: May 22, 2018
    Date of Patent: August 3, 2021
    Assignee: Nokia Technologies Oy
    Inventors: Miska Hannuksela, Sebastian Schwarz
  • Patent number: 11080522
    Abstract: The present disclosure relates to a system and a method for identification of individual animals based on images, such as 3D-images, of the animals, especially of cattle and cows. When animals live in areas or enclosures where they freely move around, it can be complicated to identify the individual animal. In a first aspect the present disclosure relates to a method for determining the identity of an individual animal in a population of animals with known identity, the method comprising the steps of acquiring at least one image of the back of a preselected animal, extracting data from said at least one image relating to the anatomy of the back and/or topology of the back of the preselected animal, and comparing and/or matching said extracted data against reference data corresponding to the anatomy of the back and/or topology of the back of the animals with known identity, thereby identifying the preselected animal.
    Type: Grant
    Filed: June 30, 2016
    Date of Patent: August 3, 2021
    Assignee: Viking Genetics FMBA
    Inventors: Soren Borchersen, Claus Borggaard, Niels Worsoe Hansen
  • Patent number: 11082629
    Abstract: An image processing device includes a control unit configured to control generation of a parameter related to adjustment of color corresponding to mutually different exposure times on the basis of a plurality of images photographed at the exposure times.
    Type: Grant
    Filed: May 8, 2017
    Date of Patent: August 3, 2021
    Assignee: Sony Corporation
    Inventors: Osamu Izuta, Satoko Suzuki, Yutaro Honda, Syouei Hirasawa, Daisuke Kasai
  • Patent number: 11074442
    Abstract: Aspects of the disclosure provide for mechanisms for identification of table partitions in documents using neural networks. A method of the disclosure includes obtaining a plurality of symbol sequences of a document having at least one table, determining a plurality of vectors representative of symbol sequences having at least one alphanumeric character or a table graphics element, processing the plurality of vectors using a first neural network to obtain a plurality of recalculated vectors, determining an association between a first recalculated vector and a second recalculated vector, wherein the first recalculated vector is representative of an alphanumeric sequence and the second recalculated vector is associated with a table partition, and determining, based on the association between the first recalculated vector and the second recalculated vector, an association between the alphanumeric sequence and the table partition.
    Type: Grant
    Filed: October 4, 2019
    Date of Patent: July 27, 2021
    Assignee: Abbyy Production LLC
    Inventor: Stanislav Semenov
  • Patent number: 11074501
    Abstract: Disclosed herein is a learning method of a neural network model for flame determination. The learning method of a neural network includes generating a learning image including a fake image generated by combining a real fire image and an arbitrary flame image with a background image; inputting the learning image to a first neural network model and outputting a determination result for whether a flame is present; and updating a weight in a layer extracting features of the learning image from the first neural network model using the determination result. According to the present invention, data of various fire situations may be secured, a performance of the neural network model that determines an occurrence of the fire through the secured data may be increased, and a quality of data for learning may be increased to allow the neural network model itself to predict various situations of fires.
    Type: Grant
    Filed: October 25, 2019
    Date of Patent: July 27, 2021
    Assignee: GYNETWORKS CO., LTD.
    Inventor: Seung On Bang
  • Patent number: 11068932
    Abstract: Systems and methods including one or more processing modules and one or more non-transitory storage modules storing computing instructions configured to run on the one or more processing modules and perform acts of obtaining a uniform resource locator (URL) of a first webpage that is shown on a graphical user interface and that is external to a website of a retailer (where, in some embodiments, the URL is obtained from a referral website or is entered by a user from a chat window or search box), using a web scraper to extract web text displayed on the first webpage on the graphical user interface, processing the web text displayed on the first webpage on the graphical user interface to determine an interest of a user, using a set of rules to determine items related to the web text displayed on the first webpage on the graphical user interface, and coordinating displaying the items on a second webpage to promote the items as related to the interest of the user, where the second webpage is internal or external to the website of the retailer.
    Type: Grant
    Filed: December 12, 2017
    Date of Patent: July 20, 2021
    Assignee: WAL-MART STORES, INC.
    Inventors: Wei Shen, Yuan Xie, Vahid Jalalibarsari, Lu Wang, Chenxi Liu, Zhao Zhao
  • Patent number: 11070729
    Abstract: A disclosed image processing apparatus calculates a background vector expressing a motion of a background based on a plurality of motion vectors detected between a plurality of images. Then the image processing apparatus detects a motion vector of a moving object from the plurality of motion vectors, based on a magnitude of Euclidean distance between each of the plurality of motion vectors and the background vector.
    Type: Grant
    Filed: July 23, 2019
    Date of Patent: July 20, 2021
    Assignee: CANON KABUSHIKI KAISHA
    Inventors: Keisuke Midorikawa, Ryosuke Tsuji
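A hedged numpy sketch of the approach in patent 11070729: estimate a background motion vector from many local motion vectors, then flag vectors far (in Euclidean distance) from that background vector as belonging to moving objects. Using the median as the background estimate and a fixed threshold are assumptions for illustration.

```python
import numpy as np

def detect_moving_vectors(motion_vectors, threshold=3.0):
    """motion_vectors: (N, 2) array of per-block (dx, dy) vectors between two frames."""
    background = np.median(motion_vectors, axis=0)           # dominant background/camera motion
    distances = np.linalg.norm(motion_vectors - background, axis=1)
    return background, distances > threshold                 # True where a block moves on its own

rng = np.random.default_rng(1)
vectors = rng.normal(loc=(5.0, -2.0), scale=0.3, size=(200, 2))  # mostly background panning
vectors[:10] += (8.0, 6.0)                                       # a few blocks on a moving object
bg, moving = detect_moving_vectors(vectors)
print(bg.round(1), moving.sum())                                 # ~[ 5. -2.], ~10 moving blocks
```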
  • Patent number: 11058208
    Abstract: Disclosed is a method for selecting a cosmetic product, and a method for image acquisition and image processing for the selection method, including: acquiring an image of a person in a controlled lighting environment, and measuring and recording the colorimetric coordinates of the person's face; processing the image to determine an absolute value of the skin tone; correlating the absolute value with a usage color map established for each of multiple cosmetic products of a database by measuring the color of each cosmetic product under its conditions of use, and to determine a personalized color matrix; extracting, from the database, the cosmetic product(s) whose color measurement is part of the personalized color matrix; using an information medium, presenting the product(s) included in the personalized color matrix; and optionally selecting the preferred product(s) chosen by the person from those that are part of the personalized color matrix.
    Type: Grant
    Filed: April 5, 2019
    Date of Patent: July 13, 2021
    Assignee: CHANEL PARFUMS BEAUTE
    Inventors: Astrid Lassalle, Sandrine Couderc
  • Patent number: 11064102
    Abstract: Methods, devices, computer readable medium, and systems are described for capturing images at venues. Venues are organized around one or more shotspots. Shotspots are geographic locations which may have one or more subspots and one or more venue operated camera devices. Subspots identify positions to be occupied by subjects within a shotspot. Shotspots are installed, registered, and operated by venue operators. Images are captured based on triggers which may include one or more conditions under which images will be stored. Conditions may also include negative limitations. Users wishing to use a shotspot do so through a mobile device. Interacting through the mobile device, a user may discover, receive directions to, navigate to, and arrange sessions during which they may use a shotspot. Users may also receive sample images, define triggers, initiate triggers, preview framing, and receive images through the mobile application.
    Type: Grant
    Filed: January 16, 2019
    Date of Patent: July 13, 2021
    Assignee: Ikorongo Technology, LLC
    Inventors: Mike Helpingstine, Hugh Blake Svendsen
  • Patent number: 11055348
    Abstract: Systems, methods, and non-transitory computer-readable media can identify a set of videos. One or more overlapping portions in the set of videos are automatically identified. A stitched media content item is automatically generated based on the one or more overlapping portions and at least a subset of the set of videos.
    Type: Grant
    Filed: December 29, 2017
    Date of Patent: July 6, 2021
    Assignee: Facebook, Inc.
    Inventors: Clark Martin Gredoña, Chun-Yu Tsai
  • Patent number: 11055514
    Abstract: Aspects of the present disclosure involve a system comprising a computer-readable storage medium storing a program and a method for synthesizing a realistic image with a new expression of a face in an input image by receiving an input image comprising a face having a first expression; obtaining a target expression for the face; and extracting a texture of the face and a shape of the face. The program and method for generating, based on the extracted texture of the face, a target texture corresponding to the obtained target expression using a first machine learning technique; generating, based on the extracted shape of the face, a target shape corresponding to the obtained target expression using a second machine learning technique; and combining the generated target texture and generated target shape into an output image comprising the face having a second expression corresponding to the obtained target expression.
    Type: Grant
    Filed: December 14, 2018
    Date of Patent: July 6, 2021
    Assignee: Snap Inc.
    Inventors: Chen Cao, Sergey Tulyakov, Zhenglin Geng
  • Patent number: 11050933
    Abstract: A method for determining a location of a target positioned behind a tow vehicle is provided. The method includes receiving images from a camera positioned on a back portion of the tow vehicle. The images include the target. The method includes applying one or more filter banks to the images. The method also includes determining a region of interest within each image based on the applied filter banks. The region of interest includes the target. The method also includes identifying the target within the region of interest and determining a target location of the target including a location in a real-world coordinate system. The method also includes transmitting instructions to a drive system supported by the vehicle. The instructions cause the tow vehicle to autonomously maneuver towards the location in the real-world coordinate system.
    Type: Grant
    Filed: April 25, 2019
    Date of Patent: June 29, 2021
    Assignee: Continental Automotive Systems, Inc.
    Inventors: Joyce Chen, Xin Yu, Julien Ip
  • Patent number: 11048912
    Abstract: A captured image acquisition section 50 acquires, from an imaging apparatus 12, data of a polarized image obtained by capturing a target object and stores the data into an image data storage section 52. A region extraction section 60 of a target object recognition section 54 extracts a region in which a figure of the target object is included in the polarized image. A normal line distribution acquisition section 62 acquires a distribution of normal line vectors on a target object surface in regard to the extracted region. A model adjustment section 66 adjusts a three-dimensional model of the target object stored in a model data storage section 64 in a virtual three-dimensional space such that the three-dimensional model conforms to the distribution of the normal line vectors acquired from the polarized image to specify a state of the target object.
    Type: Grant
    Filed: August 31, 2017
    Date of Patent: June 29, 2021
    Assignee: SONY INTERACTIVE ENTERTAINMENT INC.
    Inventors: Hidehiko Ogasawara, Akio Ohba, Hiroyuki Segawa
  • Patent number: 11050948
    Abstract: The present disclosure relates to systems and methods for image capture. Namely, an image capture system may include a camera configured to capture images of a field of view, a display, and a controller. An initial image of the field of view from an initial camera pose may be captured. An obstruction may be determined to be observable in the field of view. Based on the obstruction, at least one desired camera pose may be determined. The at least one desired camera pose includes at least one desired position of the camera. A capture interface may be displayed, which may include instructions for moving the camera to the at least one desired camera pose. At least one further image of the field of view from the at least one desired camera pose may be captured. Captured images may be processed to remove the obstruction from a background image.
    Type: Grant
    Filed: October 21, 2019
    Date of Patent: June 29, 2021
    Assignee: Google LLC
    Inventors: Michael Rubinstein, William Freeman, Ce Liu
  • Patent number: 11040685
    Abstract: Provided is an occupant observation device including an imager configured to capture an image of a head of an occupant of a vehicle; and an eye detector configured to detect at least a part of eyes of the occupant in the image captured by the imager, in which the eye detector sets an eye closer to the imager among the eyes of the occupant as a detection target.
    Type: Grant
    Filed: March 11, 2020
    Date of Patent: June 22, 2021
    Assignee: HONDA MOTOR CO., LTD.
    Inventors: Kota Saito, Seungho Choi
  • Patent number: 11036990
    Abstract: A target identification method includes: using information of a to-be-detected target acquired within a predetermined time period as judgment information; acquiring an identification result of the to-be-detected target at a current time and outputting the identification result; judging whether the attribute type corresponding to the identification result is an attribute type having the highest priority; and if the attribute type corresponding to the identification result is not the attribute type having the highest priority, using information of the to-be-detected target acquired within a next predetermined time period as the judgment information, and returning to the step of acquiring an identification result of the to-be-detected target at a current time and outputting the identification result.
    Type: Grant
    Filed: March 13, 2020
    Date of Patent: June 15, 2021
    Assignee: CLOUDMINDS (BEIJING) TECHNOLOGIES CO., LTD.
    Inventors: Shiguo Lian, Zhaoxiang Liu, Ning Wang
  • Patent number: 11030236
    Abstract: Systems and methods for searching digital content, such as digital images, are disclosed. A method includes receiving a first search constraint and generating search results based on the first search constraint. A search constraint includes search values or criteria. The search results include a ranked set of digital images. A second search constraint and a weight value associated with the second search constraint are received. The search results are updated based on the second search constraint and the weight value. The updated search results are provided to a user. Updating the search results includes determining a ranking (or a re-ranking) for each item of content included in the search results based on the first search constraint, the second search constraint, and the weight value. Re-ranking the search results may further be based on a weight value associated with the first search constraint, such as a default or maximum weight value.
    Type: Grant
    Filed: November 28, 2017
    Date of Patent: June 8, 2021
    Assignee: Adobe Inc.
    Inventors: Samarth Gulati, Brett Butterfield, Baldo Faieta, Bernard James Kerr, Kent Andrew Edmonds
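A simplified sketch of the weighted re-ranking idea in patent 11030236: each search constraint contributes a per-image score, and the final ranking sorts by the weight-blended score. The scoring functions and the default weight of 1.0 for the first constraint are assumptions, not Adobe's implementation.

```python
def rerank(images, constraints):
    """constraints: list of (score_fn, weight); score_fn maps an image record to [0, 1]."""
    def combined(image):
        return sum(weight * score_fn(image) for score_fn, weight in constraints)
    return sorted(images, key=combined, reverse=True)

images = [
    {"id": 1, "tags": {"beach"}, "brightness": 0.9},
    {"id": 2, "tags": {"beach", "sunset"}, "brightness": 0.4},
    {"id": 3, "tags": {"city"}, "brightness": 0.8},
]
first = (lambda img: 1.0 if "beach" in img["tags"] else 0.0, 1.0)   # default/maximum weight
second = (lambda img: img["brightness"], 0.3)                       # user-weighted second constraint
print([img["id"] for img in rerank(images, [first, second])])       # [1, 2, 3]
```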
  • Patent number: 11025907
    Abstract: Convolutional neural networks (CNN) that determine a mode decision (e.g., block partitioning) for encoding a block include feature extraction layers and multiple classifiers. A non-overlapping convolution operation is performed at a feature extraction layer by setting a stride value equal to a kernel size. The block has an N×N size, and a smallest partition output for the block has an S×S size. Classification layers of each classifier receive feature maps having a feature dimension. An initial classification layer receives the feature maps as an output of a final feature extraction layer. Each classifier infers partition decisions for sub-blocks of size (βS)×(βS) of the block, wherein β is a power of 2 and β = 2, . . . , N/S, by applying, at some successive classification layers, a 1×1 kernel to reduce respective feature dimensions, and outputting, by a last layer of the classification layers, an output corresponding to an N/(βS)×N/(βS)×1 output map.
    Type: Grant
    Filed: February 28, 2019
    Date of Patent: June 1, 2021
    Assignee: GOOGLE LLC
    Inventors: Shan Li, Claudionor Coelho, Aki Kuusela, Dake He
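A numpy sketch of the non-overlapping convolution used as a feature extractor in patent 11025907: when the stride equals the kernel size, each output value summarizes one disjoint kernel-sized tile of the input, so receptive fields never overlap. The random kernel and single channel are assumptions for illustration only.

```python
import numpy as np

def nonoverlapping_conv(image, kernel):
    k = kernel.shape[0]                      # stride is set equal to the kernel size
    h, w = image.shape[0] // k, image.shape[1] // k
    out = np.empty((h, w), dtype=np.float32)
    for i in range(h):
        for j in range(w):
            tile = image[i * k:(i + 1) * k, j * k:(j + 1) * k]
            out[i, j] = np.sum(tile * kernel)
    return out

block = np.random.rand(64, 64).astype(np.float32)   # an N x N block, N = 64
kernel = np.random.rand(8, 8).astype(np.float32)    # 8 x 8 kernel, stride 8
features = nonoverlapping_conv(block, kernel)
print(features.shape)                               # (8, 8): one value per disjoint tile
```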
  • Patent number: 11017296
    Abstract: The present invention extends to methods, systems, and computer program products for classifying time series image data. Aspects of the invention include encoding motion information from video frames in an eccentricity map. An eccentricity map is essentially a static image that aggregates apparent motion of objects, surfaces, and edges, from a plurality of video frames. In general, eccentricity reflects how different a data point is from the past readings of the same set of variables. Neural networks can be trained to detect and classify actions in videos from eccentricity maps. Eccentricity maps can be provided to a neural network as input. Output from the neural network can indicate if detected motion in a video is or is not classified as an action, such as, for example, a hand gesture.
    Type: Grant
    Filed: August 22, 2018
    Date of Patent: May 25, 2021
    Assignee: FORD GLOBAL TECHNOLOGIES, LLC
    Inventors: Gaurav Kumar Singh, Pavithra Madhavan, Bruno Jales Costa, Gintaras Vincent Puskorius, Dimitar Petrov Filev
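A hedged sketch of building an eccentricity map from a short clip, in the spirit of patent 11017296. The eccentricity here follows one common recursive formulation based on a per-pixel running mean and variance; the exact update rules and normalization used in the patent may differ.

```python
import numpy as np

def eccentricity_map(frames, eps=1e-6):
    """frames: list of HxW float arrays (grayscale). Returns the final per-pixel eccentricity."""
    mean = np.zeros_like(frames[0], dtype=np.float64)
    var = np.zeros_like(frames[0], dtype=np.float64)
    ecc = np.zeros_like(frames[0], dtype=np.float64)
    for k, x in enumerate(frames, start=1):
        mean = (k - 1) / k * mean + x / k                        # recursive per-pixel mean
        if k > 1:
            var = (k - 1) / k * var + (x - mean) ** 2 / (k - 1)  # recursive per-pixel variance
        ecc = 1.0 / k + (x - mean) ** 2 / (k * var + eps)        # how unusual this pixel is now
    return ecc

rng = np.random.default_rng(2)
clip = [rng.random((120, 160)) for _ in range(30)]
clip[-1][40:60, 50:90] += 2.0                                    # sudden motion in the last frame
emap = eccentricity_map(clip)
print(emap[50, 70] > emap[5, 5])                                 # True: the moving region stands out
```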
  • Patent number: 11019317
    Abstract: A method of photographing a subject includes storing a library of photographic scene designs in a computer memory, training a photographic scene detection model by a computer processing device using machine learning from sample portrait images comprising known photographic scenes defined in the library of photographic scene designs, capturing a production portrait photograph, using a digital camera, of a subject in a photographic scene that is defined by a photographic scene design in the library of photographic scene designs, automatically detecting the photographic scene in the production portrait photograph using the photographic scene detection model operating on one or more computer processors, and processing the production portrait photograph by an image processing system to personalize the photographic scene detected in the production portrait photograph.
    Type: Grant
    Filed: February 15, 2019
    Date of Patent: May 25, 2021
    Assignee: Shutterfly, LLC
    Inventors: Leo Cyrus, Keith A. Benson
  • Patent number: 11017271
    Abstract: Examples of techniques for interactive generation of labeled data and training instances are provided. According to one or more embodiments of the present invention, a computer-implemented method for interactive generation of labeled data and training instances includes presenting, by the processing device, control labeling options to a user. The method further includes selecting, by a user, one or more of the presented control labeling options. The method further includes selecting, by a processing device, a representative set of unlabeled data samples based at least in part on the control labeling options selected by the user. The method further includes generating, by a processing device, a set of suggested labels for each of the unlabeled data samples.
    Type: Grant
    Filed: November 2, 2017
    Date of Patent: May 25, 2021
    Assignee: INTERNATIONAL BUSINESS MACHINES CORPORATION
    Inventors: Nirmit V. Desai, Dawei Li, Theodoros Salonidis
  • Patent number: 11012579
    Abstract: An image processing apparatus receives destination information for use in data transmission, performs control, based on the received destination information including a destination in an email address format, so that a first screen, which is used to transmit data external to the image processing apparatus, and on which a transmission destination of the data is displayed, based on the received destination information, is displayed on the operation unit, and performs control, based on the received destination information including only a destination in a fax format so that a second screen, different from the first screen and used to perform fax transmission, on which a transmission destination of the fax transmission is displayed, based on the received destination information, is displayed on the operation unit.
    Type: Grant
    Filed: March 20, 2018
    Date of Patent: May 18, 2021
    Assignee: CANON KABUSHIKI KAISHA
    Inventor: Yosui Naito
  • Patent number: 11012592
    Abstract: An image analyzing method of detecting a dimension of a region of interest inside an image is applied to an image analyzing device. The image analyzing method includes positioning an initial triggering pixel unit within a detective identifying area inside the image, and assigning a first detection region via a center of the initial triggering pixel unit, positioning a first based pixel unit conforming to a first target value inside the first detection region, applying a mask via a center of the first based pixel unit to determine whether a first triggering pixel unit exists inside the mask, and utilizing a determination result of the initial triggering pixel unit and the first triggering pixel unit to decide a maximal dimension of the region of interest.
    Type: Grant
    Filed: November 1, 2019
    Date of Patent: May 18, 2021
    Assignee: VIVOTEK INC.
    Inventors: Hsiang-Sheng Wang, Shih-Hsuan Chen
  • Patent number: 11010643
    Abstract: A system comprising a database and a user device. The database may be configured to (i) store metadata generated in response to objects detected in a video, (ii) store a confidence level associated with the metadata, (iii) provide to a plurality of users (a) data portions of the video and (b) a request for feedback, (iv) receive the feedback and (v) update the confidence level associated with the metadata in response to the feedback. The user device may be configured to (i) view the data portions, (ii) accept input to receive the feedback from one of said plurality of users and (iii) communicate the feedback to the database. The confidence level may indicate a likelihood of correctness of the objects detected in response to video analysis performed on the video. The database may track user statistics for the plurality of users based on the feedback.
    Type: Grant
    Filed: November 21, 2018
    Date of Patent: May 18, 2021
    Assignee: WAYLENS, INC
    Inventor: Jeffery R. Campbell
  • Patent number: 11006046
    Abstract: The embodiment of the disclosure discloses a method and an apparatus for image processing, and a mobile terminal. The method may include: acquiring image parameters of a real-time preview image displayed in a preview interface; evaluating, based on the image parameters and a pre-established image evaluation model, the real-time preview image to obtain an evaluation result; and displaying the evaluation result. The method enables the user of the mobile terminal to obtain the evaluation result of the real-time preview image displayed in the preview interface in real time, so that the user can get the quality of the current real-time preview image in real time, and the user can adjust the real-time preview image as needed, in order to obtain images with better evaluation results, thereby improving the overall quality of the images captured by the mobile terminal.
    Type: Grant
    Filed: August 11, 2019
    Date of Patent: May 11, 2021
    Assignee: GUANGDONG OPPO MOBILE TELECOMMUNICATIONS CORP., LTD.
    Inventors: Yaoyong Liu, Yan Chen
  • Patent number: 11003867
    Abstract: Approaches for cross-lingual regularization for multilingual generalization include a method for training a natural language processing (NLP) deep learning module. The method includes accessing a first dataset having a first training data entry, the first training data entry including one or more natural language input text strings in a first language; translating at least one of the one or more natural language input text strings of the first training data entry from the first language to a second language; creating a second training data entry by starting with the first training data entry and substituting the at least one of the natural language input text strings in the first language with the translation of the at least one of the natural language input text strings in the second language; adding the second training data entry to a second dataset; and training the deep learning module using the second dataset.
    Type: Grant
    Filed: April 30, 2019
    Date of Patent: May 11, 2021
    Assignee: salesforce.com, inc.
    Inventors: Jasdeep Singh, Nitish Shirish Keskar, Bryan McCann
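A minimal sketch of the cross-lingual augmentation step in patent 11003867: copy a training entry, substitute one of its text fields with a translation, and add the copy to a second dataset. The `translate` function is a placeholder hook, not a real API; any machine-translation system could be plugged in there.

```python
import copy

def translate(text: str, target_lang: str) -> str:
    # Placeholder: in practice this would call a machine-translation model or service.
    fake_lexicon = {"Where is the nearest station?": "¿Dónde está la estación más cercana?"}
    return fake_lexicon.get(text, text)

def augment(entry: dict, field: str, target_lang: str) -> dict:
    new_entry = copy.deepcopy(entry)                          # start from the first training entry
    new_entry[field] = translate(entry[field], target_lang)   # substitute the translated string
    return new_entry

first_dataset = [{"question": "Where is the nearest station?", "label": "location"}]
second_dataset = [augment(e, "question", "es") for e in first_dataset]
print(second_dataset[0])   # same label, Spanish question text - added to the second dataset
```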
  • Patent number: 11004205
    Abstract: A hardware accelerator for histogram of oriented gradients computation is provided that includes a gradient computation component configured to compute gradients Gx and Gy of a pixel, a bin identification component configured to determine a bin id of an angular bin for the pixel based on a plurality of representative orientation angles, Gx, and signs of Gx and Gy, and a magnitude component configured to determine a magnitude of the gradients Gmag based on the plurality of representative orientation angles and the bin id.
    Type: Grant
    Filed: April 16, 2018
    Date of Patent: May 11, 2021
    Assignee: Texas Instruments Incorporated
    Inventor: Aishwarya Dubey
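A software sketch (numpy, not the patented hardware datapath) of the quantities named in patent 11004205: per-pixel gradients Gx and Gy, an angular bin id, and a gradient magnitude. Here the bin comes from atan2 and the magnitude from hypot; the accelerator instead derives both from representative orientation angles and the signs of Gx and Gy.

```python
import numpy as np

def hog_per_pixel(image, num_bins=9):
    img = image.astype(np.float32)
    gx = np.zeros_like(img)
    gy = np.zeros_like(img)
    gx[:, 1:-1] = img[:, 2:] - img[:, :-2]          # central differences, Gx
    gy[1:-1, :] = img[2:, :] - img[:-2, :]          # central differences, Gy
    angle = np.rad2deg(np.arctan2(gy, gx)) % 180.0  # unsigned orientation in [0, 180)
    bin_id = (angle / (180.0 / num_bins)).astype(np.int32) % num_bins
    magnitude = np.hypot(gx, gy)                    # Gmag
    return gx, gy, bin_id, magnitude

image = np.random.randint(0, 256, (32, 32), dtype=np.uint8)
gx, gy, bins, mag = hog_per_pixel(image)
print(bins.shape, float(mag.max()))
```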
  • Patent number: 11004239
    Abstract: A repetitive structure extraction device includes an image feature extraction unit which extracts an image feature for each of a plurality of images which are captured at one or a plurality of locations and which are given different capture times, a temporal feature extraction unit which extracts, for each of the plurality of images, a temporal feature according to a predetermined period from a capture time given to the image, and a repetitive structure extraction unit which learns, on the basis of the image feature extracted for each of the plurality of images by the image feature extraction unit and the temporal feature extracted for each of the plurality of images by the temporal feature extraction unit, a repetitive structure which is used to perform interconversion between the temporal feature and a component of the image feature and which is provided according to a correlation of periodic change between the component of the image feature and the temporal feature.
    Type: Grant
    Filed: December 13, 2019
    Date of Patent: May 11, 2021
    Assignee: NIPPON TELEGRAPH AND TELEPHONE CORPORATION
    Inventors: Akisato Kimura, Yoshitaka Ushiku, Kunio Kashino
  • Patent number: 10998211
    Abstract: In a semiconductor fabrication apparatus composed of a plurality of components, such as fluid control devices, a manager is to be enabled to identify components intuitively, and information on the identified component is to be provided to the manager in an easy-to-understand manner. In a system in which a manager terminal 3 and an information processor 2 are communicably configured via networks NW1 and NW2, the manager terminal 3 receives component information on a semiconductor fabrication apparatus 1 from the information processor 2. Upon the identification of the position of a component constituting the semiconductor fabrication apparatus 1 on the captured image of the semiconductor fabrication apparatus 1 using an identification processing unit 32, a compositing processing unit 33 creates a composite image in which component information is composited with the captured image at the position of the component identified, and an image display unit 34 displays the composite image.
    Type: Grant
    Filed: May 15, 2018
    Date of Patent: May 4, 2021
    Assignee: Fujikin Inc.
    Inventors: Ryutaro Tanno, Takahiro Matsuda, Tsutomu Shinohara
  • Patent number: 10997232
    Abstract: A system and method for automated detection of figure element reuse. The system can receive articles or other publications from a user input or an automated input. The system then extracts images from the articles and compares them to reference images from a historical database. The comparison and detection of matches occurs via a copy-move detection algorithm implemented by a processor of the system. The processor first locates and extracts keypoints from a submission image and finds matches between those keypoints and the keypoints from a reference image using a near neighbor algorithm. The matches are clustered and the clusters are compared for keypoint matching. Matched clusters are further compared for detectable transformations. The processor may additionally implement natural language processing to filter matches based on the context of the use of the submission image in the submission and a patch detector for removing false positive features.
    Type: Grant
    Filed: January 24, 2020
    Date of Patent: May 4, 2021
    Assignees: SYRACUSE UNIVERSITY, Northwestern University, Rehabilitation Institute of Chicago
    Inventors: Daniel Ernesto Acuna, Konrad Kording
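A rough sketch of the first stages of the pipeline described in patent 10997232, using OpenCV ORB keypoints and brute-force matching as stand-ins for the patent's keypoint extraction and near-neighbor steps; the clustering, transformation checks, natural-language filtering, and patch detector from the patent are not reproduced here. The file names in the commented call are hypothetical.

```python
import cv2

def match_keypoints(submission_path, reference_path, ratio=0.75):
    img1 = cv2.imread(submission_path, cv2.IMREAD_GRAYSCALE)
    img2 = cv2.imread(reference_path, cv2.IMREAD_GRAYSCALE)
    orb = cv2.ORB_create(nfeatures=2000)
    kp1, des1 = orb.detectAndCompute(img1, None)    # keypoints in the submission figure
    kp2, des2 = orb.detectAndCompute(img2, None)    # keypoints in the reference figure
    if des1 is None or des2 is None:
        return []
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING)
    pairs = matcher.knnMatch(des1, des2, k=2)       # two nearest neighbors per descriptor
    # Lowe-style ratio test: keep a match only if it is clearly better than the runner-up.
    good = [p[0] for p in pairs if len(p) == 2 and p[0].distance < ratio * p[1].distance]
    return [(kp1[m.queryIdx].pt, kp2[m.trainIdx].pt) for m in good]

# matches = match_keypoints("submission_fig3.png", "reference_fig1.png")
# Spatially consistent matches would then be clustered and checked for transformations.
```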
  • Patent number: 10989600
    Abstract: Embodiments herein disclose automated methods and systems to fill background and interstitial space in the visual object layout with one or more colors that bleed/blend into each other. Embodiments herein automate the creation of multi-colored backgrounds for filling the interstitial space.
    Type: Grant
    Filed: December 28, 2018
    Date of Patent: April 27, 2021
    Inventors: Laurent Francois Martin, Narendra Dubey, Jean Pierre Gehrig
  • Patent number: 10990845
    Abstract: Disclosed is a method for determining a relational imprint between two images, including the following steps: providing a first image and a second image; a phase of calculating similarity vectors between tiles belonging respectively to the first and second images, the similarity vectors forming a field of imprint vectors, the field of imprint vectors including at least one region that is disordered in the sense of an entropy criterion; and a phase of recording, as the relational imprint, a representation of the calculated field of imprint vectors. Also disclosed is a method for authenticating a candidate image with respect to an authentic image implementing the method for determining a relational imprint.
    Type: Grant
    Filed: May 17, 2017
    Date of Patent: April 27, 2021
    Assignee: KERQUEST
    Inventors: Yann Boutant, Thierry Fournel
  • Patent number: 10984610
    Abstract: The present invention relates to methods for interacting with virtual objects, comprising placing a flat image of an augmented reality object in the field of view of the video camera of the device for creating and viewing virtual objects of augmented reality, and determining colors and recognizing patterns in the images received from that video camera. The augmented reality object is colored in accordance with the colors defined on the painted image obtained from the camera of the device. A correspondence is established between the patterns and colors of the painted image and actions of the augmented reality objects, depending on the color, color combination, pattern or colored pattern in the images obtained from the video camera of the device for creating and viewing the augmented reality objects.
    Type: Grant
    Filed: June 16, 2020
    Date of Patent: April 20, 2021
    Assignee: DEVAR ENTERTAINMENT LIMITED
    Inventors: Andrei Valerievich Komissarov, Anna Igorevna Belova
  • Patent number: 10984228
    Abstract: Implementations of the present specification provide an interaction behavior detection method, apparatus, system, and device. The method includes the following: obtaining a to-be-detected depth image photographed by a depth photographing device, extracting a foreground image used to represent a moving object from the to-be-detected depth image, obtaining spatial coordinate information of the moving object based on the foreground image, comparing the spatial coordinate information of the moving object with spatial coordinate information of a shelf in a rack, and determining an article touched by the moving object based on a comparison result and one or more articles on the shelf.
    Type: Grant
    Filed: May 29, 2020
    Date of Patent: April 20, 2021
    Assignee: Advanced New Technologies Co., Ltd.
    Inventors: Kaiming Huang, Xiaobo Zhang, Chunlin Fu, Hongbo Cai, Li Chen, Le Zhou, Xiaodong Zeng, Feng Lin
  • Patent number: 10986328
    Abstract: A device, method and system for utilizing an optical array generator to generate dynamic patterns in a dental camera for projection onto the surface of an object, while reducing noise and increasing data density for three-dimensional (3D) measurement. Projected light patterns are used to generate optical features on the surface of the object to be measured and optical 3D measuring methods which operate according to triangulation principles are used to measure the object.
    Type: Grant
    Filed: June 8, 2018
    Date of Patent: April 20, 2021
    Assignee: DENTSPLY SIRONA INC.
    Inventor: Michael Tewes