Patents Issued on January 14, 2020
-
Patent number: 10534950
Abstract: A method performed by a computer for biometric authentication includes: obtaining, by a processor of the computer, a first image group including a plurality of images that are sequentially captured by a biometric sensor configured to capture at least a part of a region of a body of a user; obtaining, by the processor of the computer, a movement amount of the body and a distance between the body and the biometric sensor; and selecting, by the processor of the computer, a second image group from the first image group in accordance with the movement amount and the distance, the second image group including images to be used in authentication processing with respect to the body, wherein the size of a common region between images to be included in the second image group is adjusted according to the distance.
Type: Grant
Filed: February 15, 2018
Date of Patent: January 14, 2020
Assignee: FUJITSU LIMITED
Inventors: Hajime Nada, Soichi Hama, Satoshi Maeda, Satoshi Semba, Yukihiro Abiko, Rie Hasada, Toshiaki Wakama
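A minimal sketch of the kind of frame selection this abstract describes: frames from a capture sequence are kept only once the accumulated movement reaches a spacing budget that scales with the body-to-sensor distance, so that consecutive kept images share a suitably sized common region. The data layout, the linear spacing model, and the constants are illustrative assumptions, not the patented method.

```python
from dataclasses import dataclass
from typing import List

@dataclass
class Frame:
    image: object        # raw sensor image (placeholder type)
    movement_mm: float   # estimated body movement since the previous frame
    distance_mm: float   # estimated body-to-sensor distance

def select_frames(frames: List[Frame],
                  base_spacing_mm: float = 4.0,
                  ref_distance_mm: float = 50.0) -> List[Frame]:
    """Pick a second image group whose pairwise overlap (common region)
    is controlled by a distance-dependent spacing between kept frames."""
    if not frames:
        return []
    selected = [frames[0]]
    moved_since_last = 0.0
    for frame in frames[1:]:
        moved_since_last += frame.movement_mm
        # Required spacing between kept frames, scaled by distance
        # (a farther body covers a wider field, so larger spacing is tolerated).
        required_mm = base_spacing_mm * (frame.distance_mm / ref_distance_mm)
        if moved_since_last >= required_mm:
            selected.append(frame)
            moved_since_last = 0.0
    return selected
```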
-
Patent number: 10534951
Abstract: A fingerprint identification unit and a manufacturing method thereof, an array substrate, a display device and a fingerprint identification method are disclosed, which can realize fingerprint identification without increasing the thickness of the display device. The fingerprint identification unit can include a photosensitive device, a data read-out signal line and a thin film transistor for controlling the switching of the photosensitive device. On the photosensitive device is formed a first insulating layer for insulating the photosensitive device from an OLED luminescent layer, and the part of the OLED luminescent layer corresponding to the photosensitive device does not illuminate. The data read-out signal line can be configured to read out a photocurrent generated by the photosensitive device, and identify fingerprints according to the amount of each photocurrent. The array substrate includes the fingerprint identification unit mentioned in the above technical solution.
Type: Grant
Filed: May 26, 2017
Date of Patent: January 14, 2020
Assignee: BOE TECHNOLOGY GROUP CO., LTD.
Inventors: Rui Xu, Haisheng Wang
-
Patent number: 10534952
Abstract: A system and method for providing a matching process that determines a conforming pattern match to a pattern-under-test from a set of matching patterns in a pattern storage. A matching process tests every template against the pattern-under-test and resolves a conforming match condition when multiple matches to the pattern-under-test are found. In some embodiments, a matcher engine using variable match thresholds may be used to differentiate among matching templates to identify a conforming-matching template with respect to the pattern-under-test.
Type: Grant
Filed: March 9, 2017
Date of Patent: January 14, 2020
Assignee: IDEX ASA
Inventor: Roger A. Bauchspies
-
Patent number: 10534953
Abstract: An electronic apparatus for illuminating and photographing a user's face and a control method thereof are provided. The electronic apparatus includes a camera configured to photograph a user's face, a transceiver configured to perform communication with a user terminal, and at least one processor configured to photograph the user's face a plurality of times on the basis of a plurality of photographing parameters to obtain a plurality of images, extract face regions corresponding to each of the plurality of images photographed on the basis of the plurality of photographing parameters, synthesize the extracted user's face regions with each other to create a synthesized image, and control the transceiver to transmit data on the synthesized image to the user terminal for the purpose of skin analysis for the user's face.
Type: Grant
Filed: November 14, 2017
Date of Patent: January 14, 2020
Assignee: Samsung Electronics Co., Ltd.
Inventors: Jin-gu Jeong, Sang-wook Yoo, Yong-joon Choe, Ha-ram O, Sang-hyun Lee, Kyong-tae Park
-
Patent number: 10534954
Abstract: An augmented reality device (ARD) can present virtual content which can provide enhanced experiences with the user's physical environment. For example, the ARD can detect a linkage between a person in the FOV of the ARD and a physical object (e.g., a document presented by the person) or detect linkages between the documents. The linkages may be used in identity verification or document verification.
Type: Grant
Filed: June 1, 2017
Date of Patent: January 14, 2020
Assignee: Magic Leap, Inc.
Inventor: Adrian Kaehler
-
Patent number: 10534955
Abstract: A method for evaluating a facial performance using facial capture of two users includes obtaining a reference set of facial performance data representing a first user's facial capture; obtaining a facial capture of a second user; extracting a second set of facial performance data based on the second user's facial capture; calculating at least one matching metric based on a comparison of the reference set of facial performance data to the second set of facial performance data; and displaying an indication of the at least one matching metric on a display.
Type: Grant
Filed: January 17, 2017
Date of Patent: January 14, 2020
Assignee: DreamWorks Animation L.L.C.
Inventors: Emmanuel C. Francisco, Demian Gordon, Elvin Korkuti
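As one illustration of the matching-metric step, a per-frame cosine similarity between two sequences of facial-capture feature vectors (for example, blendshape weights), averaged over the performance. The feature representation and the choice of cosine similarity are assumptions for this sketch; the abstract does not specify either.

```python
import numpy as np

def matching_metric(reference: np.ndarray, candidate: np.ndarray) -> float:
    """Average cosine similarity between two facial-performance sequences
    of shape (num_frames, num_features).  The shorter length is used so
    the comparison stays frame-aligned."""
    n = min(len(reference), len(candidate))
    ref, cand = reference[:n], candidate[:n]
    dots = np.sum(ref * cand, axis=1)
    norms = np.linalg.norm(ref, axis=1) * np.linalg.norm(cand, axis=1)
    sims = dots / np.maximum(norms, 1e-8)
    return float(sims.mean())

# Example: 100 frames of 52 blendshape weights each.
rng = np.random.default_rng(0)
score = matching_metric(rng.random((100, 52)), rng.random((100, 52)))
print(f"matching metric: {score:.3f}")
```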
-
Patent number: 10534956
Abstract: A system receives a digital representation of a biometric for a person, uses the digital representation of the biometric to determine and/or otherwise retrieve identity information associated with the person, and uses the identity information to perform one or more actions related to the person's presence in one or more areas. For example, the system may estimate a path for the person and signal an agent electronic device based on the path. In another example, the system may determine a presence of a person within the area and/or transmit information to an agent electronic device regarding the determined presence. In still another example, the system may receive a request to communicate with the person and forward the communication to the person using the identity information.
Type: Grant
Filed: April 25, 2019
Date of Patent: January 14, 2020
Assignee: ALCLEAR, LLC
Inventors: Joe Trelin, Matthew Snyder, Rob Wisniewski
-
Patent number: 10534957
Abstract: The application discloses an eyeball movement analysis method and device and a storage medium. The method includes: acquiring a real-time image shot by a photographic device and extracting a real-time facial image from the real-time image; inputting the real-time facial image into a pretrained eye mean shape and recognizing n1 orbit feature points and n2 eyeball feature points representative of an eye position in the real-time facial image; determining an eye region according to the (n1+n2) feature points and judging whether the eye region is a human eye region or not by use of a human eye classification model; and if YES, calculating a movement direction and movement distance of an eyeball in the real-time facial image according to x and y coordinates of the n1 orbit feature points and n2 eyeball feature points in the real-time facial image.
Type: Grant
Filed: October 31, 2017
Date of Patent: January 14, 2020
Assignee: PING AN TECHNOLOGY (SHENZHEN) CO., LTD.
Inventors: Lin Chen, Guohui Zhang
-
Patent number: 10534958
Abstract: An events and data management system includes a computing device having a processor, an input device, an output device, and memory. The system further includes an external input device communicatively coupled to the computing device over a network. The memory stores information from the computing device and the input device, and a user accesses the information on the computing device via a graphical user interface.
Type: Grant
Filed: September 27, 2017
Date of Patent: January 14, 2020
Assignee: Axxiom Consulting LLC
Inventor: Christopher Clark Ragland
-
Patent number: 10534959
Abstract: The iris recognition device includes an iris camera module used for collecting iris characteristics of a user, and at least one fill light component used for providing a supplementary light source for the iris camera module. When the iris recognition device is used to collect the iris characteristics of the user, the supplementary light source provided by the fill light component reduces reflective spots on the iris, or shifts the reflective spots into areas other than the iris, such as the sclera and pupil, thereby improving the precision of the collected iris characteristics of the user.
Type: Grant
Filed: November 13, 2018
Date of Patent: January 14, 2020
Assignee: Ningbo Sunny Opotech Co., Ltd.
Inventors: Yiwu Gu, Mengjie Luo, Xinke Tang, Jiewei Xu, Shixin Xu, Lin Huang
-
System and method for locating and performing fine grained classification from multi-view image data
Patent number: 10534960
Abstract: Some embodiments of the invention provide a method for identifying geographic locations and for performing a fine-grained classification of elements detected in images captured from multiple different viewpoints or perspectives. In several embodiments, the method identifies the geographic locations by probabilistically combining predictions from the different viewpoints by warping their outputs to a common geographic coordinate frame. The method of certain embodiments performs the fine-grained classification based on image portions from several images associated with a particular geographic location, where the images are captured from different perspectives and/or zoom levels.
Type: Grant
Filed: April 3, 2017
Date of Patent: January 14, 2020
Assignee: California Institute of Technology
Inventors: Pietro Perona, Steven J. Branson, Jan D. Wegner, David C. Hall
-
Patent number: 10534961
Abstract: A method for detecting pad construction and/or drilling and/or hydraulic fracturing of at least one hydrocarbon well, comprising the steps of: selecting at least one specified well location, obtaining at least one time series of top view images of the specified well location, in which each top view image has a date corresponding to a day of acquisition of the top view image, processing the time series of top view images to detect at least one top view image showing the apparition of a well pad and/or showing drilling activity and/or showing fracturing activity, exporting the date corresponding to the acquisition day of the top view image showing the apparition of the well pad and/or drilling activity and/or fracturing activity, and providing, based on the exported date, information on the pad construction date and/or drilling starting date and/or fracturing starting date and/or a full production forecast for the specified well location.
Type: Grant
Filed: November 14, 2017
Date of Patent: January 14, 2020
Assignee: KAYRROS
Inventors: Antoine Rostand, Ivaylo Petkantchin, Axel Davy, Carlo De Franchis, Jean-Michel Morel, Sylvain Calisti
-
Patent number: 10534962
Abstract: Techniques are provided for increasing the accuracy of automated classifications produced by a machine learning engine. Specifically, the classification produced by a machine learning engine for one photo-realistic image is adjusted based on the classifications produced by the machine learning engine for other photo-realistic images that correspond to the same portion of a 3D model that has been generated based on the photo-realistic images. Techniques are also provided for using the classifications of the photo-realistic images that were used to create a 3D model to automatically classify portions of the 3D model. The classifications assigned to the various portions of the 3D model in this manner may also be used as a factor for automatically segmenting the 3D model.
Type: Grant
Filed: June 17, 2017
Date of Patent: January 14, 2020
Assignee: Matterport, Inc.
Inventors: Gunnar Hovden, Mykhaylo Kurinnyy, Matthew Bell
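One way to realize the cross-view adjustment described here is to pool the per-image class probabilities of every image that observes the same 3D-model portion and re-assign each image's label from the pooled distribution. Averaging in log space (a geometric mean of the distributions) and the simple data layout are assumptions made for this sketch.

```python
import numpy as np
from collections import defaultdict

def adjust_classifications(image_probs: dict, image_to_part: dict) -> dict:
    """image_probs: image_id -> per-class probability vector (np.ndarray).
    image_to_part: image_id -> id of the 3D-model portion the image shows.
    Returns image_id -> adjusted class index, shared by all images that
    observe the same portion."""
    by_part = defaultdict(list)
    for image_id, part_id in image_to_part.items():
        by_part[part_id].append(image_probs[image_id])

    part_label = {}
    for part_id, prob_list in by_part.items():
        # Geometric mean of the per-image distributions for this portion.
        log_mean = np.mean([np.log(p + 1e-12) for p in prob_list], axis=0)
        part_label[part_id] = int(np.argmax(log_mean))

    return {img: part_label[part] for img, part in image_to_part.items()}
```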
-
Patent number: 10534963
Abstract: Audio content may be captured during capture of spherical video content. An audio event within the audio content may indicate an occurrence of a highlight event based on sound(s) originating from audio source(s) captured within an audio event extent within the spherical video content at an audio event moment. Temporal type of the audio event providing guidance with respect to relative temporality of the highlight event with respect to the audio event and spatial type of the audio event providing guidance with respect to relative spatiality of the highlight event with respect to the audio event may be determined. A highlight event moment of the highlight event may be identified based on the audio event moment and temporal type of the audio event. A highlight event extent of the highlight event may be identified based on the audio event extent and the spatial type of the audio event.
Type: Grant
Filed: March 29, 2019
Date of Patent: January 14, 2020
Assignee: GoPro, Inc.
Inventor: Ingrid A. Cotoros
-
Patent number: 10534964
Abstract: Methods and devices for extracting feature descriptors for a video, the video having a sequence of pictures. The method includes identifying a first key picture and a second key picture later in the sequence than the first key picture; extracting a first set of feature descriptors from the first key picture and a second set of feature descriptors from the second key picture; identifying a set of pairs of feature descriptors, where each pair includes one descriptor from the first set and one descriptor from the second set; generating motion information describing the motion field between the first key picture and the second key picture; and filtering the set of pairs of feature descriptors based on correlation with the motion information to produce and output a subset of persistent descriptors.
Type: Grant
Filed: January 30, 2017
Date of Patent: January 14, 2020
Assignee: BlackBerry Limited
Inventors: Muhammad Rabeiah M Alrabeiah, Jun Chen, Dake He, Liangyan Li, Yingchan Qiao, Yizhong Wang, Ting Yin
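A rough sketch of the pipeline with off-the-shelf pieces: ORB descriptors as the features, brute-force matching to form descriptor pairs, and the median match displacement as a stand-in for the motion field, against which each pair is checked. The specific detector, matcher, and median-flow consistency test are illustrative substitutions, not what the patent prescribes.

```python
import cv2
import numpy as np

def persistent_descriptors(key_pic1, key_pic2, tol_px: float = 20.0):
    """Return matched descriptor pairs between two grayscale key pictures
    whose displacement agrees with the dominant motion between them."""
    orb = cv2.ORB_create(nfeatures=1000)
    kp1, des1 = orb.detectAndCompute(key_pic1, None)
    kp2, des2 = orb.detectAndCompute(key_pic2, None)

    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = matcher.match(des1, des2)

    # Displacement of each matched pair; the median acts as a crude motion field.
    disp = np.array([np.subtract(kp2[m.trainIdx].pt, kp1[m.queryIdx].pt)
                     for m in matches])
    median_flow = np.median(disp, axis=0)

    # Keep only pairs whose displacement stays close to the dominant motion.
    return [m for m, d in zip(matches, disp)
            if np.linalg.norm(d - median_flow) <= tol_px]
```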
-
Patent number: 10534965
Abstract: Techniques for analyzing stored video upon a request are described. For example, a method of receiving a first application programming interface (API) request to analyze a stored video, the API request to include a location of the stored video and at least one analysis action to perform on the stored video; accessing the location of the stored video to retrieve the stored video; segmenting the accessed video into chunks; processing each chunk with a chunk processor to perform the at least one analysis action, each chunk processor to utilize at least one machine learning model in performing the at least one analysis action; joining the results of the processing of each chunk to generate a final result; storing the final result; and providing the final result to a requestor in response to a second API request is described.
Type: Grant
Filed: March 20, 2018
Date of Patent: January 14, 2020
Assignee: Amazon Technologies, Inc.
Inventors: Nitin Singhal, Vivek Bhadauria, Ranju Das, Gaurav D. Ghare, Roman Goldenberg, Stephen Gould, Kuang Han, Jonathan Andrew Hedley, Gowtham Jeyabalan, Vasant Manohar, Andrea Olgiati, Stefano Stefani, Joseph Patrick Tighe, Praveen Kumar Udayakumar, Renjun Zheng
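The chunk-and-join flow reads like a map/reduce over video segments. Below is a minimal sketch of that control flow in pure Python; the chunking by frame ranges, the thread pool, and the analyze callable are assumptions, and no particular AWS API is implied.

```python
from concurrent.futures import ThreadPoolExecutor
from typing import Callable, List, Tuple

def split_into_chunks(num_frames: int, chunk_len: int) -> List[Tuple[int, int]]:
    """Represent each chunk as a (start_frame, end_frame) range."""
    return [(s, min(s + chunk_len, num_frames))
            for s in range(0, num_frames, chunk_len)]

def analyze_video(num_frames: int,
                  analyze_chunk: Callable[[Tuple[int, int]], list],
                  chunk_len: int = 300,
                  workers: int = 4) -> list:
    """Run the analysis action on every chunk in parallel, then join the
    per-chunk results in frame order into one final result list."""
    chunks = split_into_chunks(num_frames, chunk_len)
    with ThreadPoolExecutor(max_workers=workers) as pool:
        per_chunk = list(pool.map(analyze_chunk, chunks))
    return [item for chunk_result in per_chunk for item in chunk_result]

# Example: a dummy "label detector" that just tags each chunk's frame range.
result = analyze_video(1000, lambda rng: [f"labels for frames {rng[0]}-{rng[1]}"])
print(result[:2])
```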
-
Patent number: 10534966
Abstract: Systems and methods of identifying activities and/or events represented in a video are presented herein. An activity and/or event may be represented in a video by virtue of one or both of an entity moving with a capture device during capture of the video performing the activity and/or event, or the video portraying one or more entities performing the activity and/or event. Activity types may be characterized by one or more of common movements, equipment, spatial context, and/or other features. Events may be characterized by one or both of individual movements and/or sets of movements that may routinely occur during performance of an activity. The identification of activities and/or events represented in a video may be based on one or more spectrogram representations of sensor output signals of one or more sensors coupled to a capture device.
Type: Grant
Filed: February 2, 2017
Date of Patent: January 14, 2020
Assignee: GoPro, Inc.
Inventors: Daniel Tse, Desmond Chik, Guanhang Wu
-
Patent number: 10534967
Abstract: A fish monitoring system deployed in a particular area to obtain fish images is described. Neural networks and machine-learning techniques may be implemented to periodically train fish monitoring systems and generate monitoring modes to capture high quality images of fish based on the conditions in the determined area. The camera systems may be configured according to the settings, e.g., positions, viewing angles, specified by the monitoring modes when conditions matching the monitoring modes are detected. Each monitoring mode may be associated with one or more fish activities, such as sleeping, eating, swimming alone, and one or more parameters, such as time, location, and fish type.
Type: Grant
Filed: May 3, 2018
Date of Patent: January 14, 2020
Assignee: X Development LLC
Inventors: Joel Fraser Atwater, Barnaby John James, Matthew Messana
-
Patent number: 10534968
Abstract: Methods, systems, apparatus, and non-transitory media are described for verifying a vehicle's odometer mileage that is associated with an insurance policy. The described techniques include sending a notification to a mobile computing device requesting an update of the odometer mileage. The notification may be generated in response to the current date being within a threshold number of days prior to the insurance policy's expiration date. In response to the notification, a user may capture an image of the odometer with the mobile computing device, which may also include data indicative of the status of the mobile computing device. One or more optical character recognition (OCR) processes may be applied to the captured image to determine various odometer mileages. Aspects include verifying the odometer mileage when different OCR processes produce the same odometer mileage result and using the status of the mobile computing device to flag potentially fraudulent images.
Type: Grant
Filed: April 13, 2016
Date of Patent: January 14, 2020
Assignee: STATE FARM MUTUAL AUTOMOBILE INSURANCE COMPANY
Inventors: Mark E. Clauss, David W. Thurber, Matthew Eric Riley, Sr., Richard J. Lovings, Steven J. Balbach, Nicholas R. Baker, J. Lynn Wilson
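The verification rule in the abstract, accept the mileage only when independent OCR passes agree and flag the image otherwise, can be sketched as below. The two OCR callables are placeholders for whatever OCR processes are used; nothing here reflects the actual pipeline.

```python
import re
from typing import Callable, Optional

def read_mileage(text: str) -> Optional[int]:
    """Pull the first digit run (3 to 7 digits) out of raw OCR text."""
    match = re.search(r"\d{3,7}", text.replace(",", "").replace(" ", ""))
    return int(match.group()) if match else None

def verify_odometer(image,
                    ocr_a: Callable[[object], str],
                    ocr_b: Callable[[object], str]) -> dict:
    """Run two independent OCR processes on the odometer image and accept
    the mileage only when both produce the same value."""
    mileage_a = read_mileage(ocr_a(image))
    mileage_b = read_mileage(ocr_b(image))
    if mileage_a is not None and mileage_a == mileage_b:
        return {"verified": True, "mileage": mileage_a}
    return {"verified": False, "mileage": None, "flag": "OCR results disagree"}
```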
-
Patent number: 10534969
Abstract: The present disclosure describes systems and methods for imaging an iris for biometric recognition. An image sensor may be located a distance L from an iris, to acquire an image of the iris. The image may have a contrast amplitude comprising modulation of a signal level at the image sensor due to features of the iris being acquired, and a standard deviation of said signal level due to noise generated within the image sensor. An illumination source may provide infrared illumination during acquisition of the image. When L is set at a value of at least 30 centimeters, the illumination source can provide the infrared illumination to the iris at a first irradiance value, quantified in watts per centimeter-squared, such that the contrast amplitude is at a value above that of the standard deviation, and enables the acquired image to be used for biometric recognition.
Type: Grant
Filed: February 22, 2018
Date of Patent: January 14, 2020
Assignee: EYELOCK LLC
Inventor: George Herbert Needham Riddle
-
Patent number: 10534970
Abstract: A system and method of inspection may include capturing image data by a stereo imaging device. A determination as to whether noise indicative of a transparent or specular object exists in the image data may be made. A report that a transparent or specular object was captured in the image data may be made.
Type: Grant
Filed: December 24, 2014
Date of Patent: January 14, 2020
Assignee: Datalogic IP Tech S.R.L.
Inventor: Nicoletta Laschi
-
Patent number: 10534971
Abstract: Methods for detecting digital or physical tampering of an imaged physical credential include the actions of: receiving a digital image representing a physical credential having one or more high value regions, the digital image including an array of pixels; processing the digital image with a tamper detector to generate an output corresponding to an intrinsic characteristic of the digital image, the tamper detector configured to perform a pixel-level analysis of the high value regions of the digital image with respect to a predetermined tampering signature; and determining, based on the output from the tamper detector, whether the digital image has been digitally tampered with.
Type: Grant
Filed: October 13, 2017
Date of Patent: January 14, 2020
Assignee: ID Metrics Group Incorporated
Inventors: Richard Austin Huber, Jr., Satya Prakash Mallick, Matthew William Flagg, Koustubh Sinhal
-
Patent number: 10534972
Abstract: An image processing method, device and medium are provided. The method includes: acquiring candidate areas from an image to be processed, each of the candidate areas including a reference target; extracting a predetermined characteristic of each of the candidate areas; calculating an evaluation value of each of the candidate areas according to the predetermined characteristic; and acquiring a snapshot of the image to be processed according to the evaluation values.
Type: Grant
Filed: December 4, 2016
Date of Patent: January 14, 2020
Assignee: XIAOMI INC.
Inventors: Baichao Wang, Qiuping Qin, Wendi Hou
-
Patent number: 10534973
Abstract: Methods, systems, and media for color palette extraction for videos are provided. In some embodiments, the method comprises: identifying, at a server, a frame of a video content item; clustering pixels of the frame of the video content item based on a color of each of the pixels into a group of clusters; for each of a plurality of clusters in the group of clusters, determining an average color for the cluster; selecting a cluster of the plurality of clusters based on the average color of the cluster; determining a color palette corresponding to the frame of the video content item for one or more user interface elements in which the video content item is to be presented based on the average color of the selected cluster; and transmitting information indicating the color palette to a user device in response to a request to present the video content item.
Type: Grant
Filed: April 18, 2017
Date of Patent: January 14, 2020
Assignee: Google LLC
Inventors: Samuel Keene, Maegan Clawges
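A compact sketch of the clustering steps: k-means over the frame's pixels, the mean color of each cluster, and one cluster picked as the seed of the UI palette. Choosing the largest cluster and deriving lighter/darker variants are assumptions made for the example.

```python
import numpy as np
from sklearn.cluster import KMeans

def frame_palette(frame_rgb: np.ndarray, n_clusters: int = 5) -> dict:
    """frame_rgb: (H, W, 3) uint8 frame.  Cluster pixel colors, take each
    cluster's average color, pick the most populous cluster, and build a
    small palette (base color plus lighter and darker variants)."""
    pixels = frame_rgb.reshape(-1, 3).astype(np.float32)
    km = KMeans(n_clusters=n_clusters, n_init=10, random_state=0).fit(pixels)

    counts = np.bincount(km.labels_, minlength=n_clusters)
    averages = km.cluster_centers_            # average color per cluster
    base = averages[int(np.argmax(counts))]   # color of the selected cluster

    return {
        "base":  np.clip(base, 0, 255).astype(np.uint8).tolist(),
        "light": np.clip(base * 1.3, 0, 255).astype(np.uint8).tolist(),
        "dark":  np.clip(base * 0.6, 0, 255).astype(np.uint8).tolist(),
    }
```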
-
Patent number: 10534974
Abstract: Provided is an image area extraction method for extracting an image area of an object from a color image of obtained color image data. The image area extraction method includes converting RGB values of each pixel in the color image data into HSV values, performing threshold processing to binarize at least one of the converted S and V values of each pixel so that it will be converted into HS′V′ values, generating composite image data including an X value, a Y value, and a Z value for each pixel, the X value, the Y value, and the Z value being obtained by adding values according to predetermined one-to-one combinations between any one of an R value, a G value, and a B value and any one of an H value, an S′ value, and a V′ value, and extracting the image area using the composite image data.
Type: Grant
Filed: January 8, 2018
Date of Patent: January 14, 2020
Assignee: TOYOTA JIDOSHA KABUSHIKI KAISHA
Inventor: Norimasa Kobori
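A sketch of the channel construction described above: convert to HSV, binarize S and V, then add each of R, G, B to one of H, S′, V′ under a fixed one-to-one pairing to form the composite X, Y, Z channels. The particular pairing (R+H, G+S′, B+V′) and the threshold values are assumptions; the abstract only says the combinations are predetermined.

```python
import cv2
import numpy as np

def composite_image(rgb: np.ndarray, s_thresh: int = 80, v_thresh: int = 80) -> np.ndarray:
    """rgb: (H, W, 3) uint8.  Returns an (H, W, 3) composite of X, Y, Z
    channels built from RGB and thresholded HSV, for later area extraction."""
    hsv = cv2.cvtColor(rgb, cv2.COLOR_RGB2HSV)
    h, s, v = cv2.split(hsv)

    # Binarize S and V (the abstract binarizes at least one of them).
    s_bin = np.where(s >= s_thresh, 255, 0).astype(np.uint16)
    v_bin = np.where(v >= v_thresh, 255, 0).astype(np.uint16)

    r, g, b = [rgb[..., i].astype(np.uint16) for i in range(3)]
    x = np.clip(r + h.astype(np.uint16), 0, 255)   # R paired with H
    y = np.clip(g + s_bin, 0, 255)                 # G paired with S'
    z = np.clip(b + v_bin, 0, 255)                 # B paired with V'
    return np.stack([x, y, z], axis=-1).astype(np.uint8)
```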
-
Patent number: 10534975
Abstract: A multi-frequency high-precision object recognition method is disclosed, wherein a multi-frequency light emitting unit is used to emit light of different frequencies onto an object-to-be-tested, and a multi-frequency image sensor unit is used to fetch the image of the light reflected from the object-to-be-tested. The X axis and Y axis form a single-piece planar image, while light of different frequencies is used to form image depth along the Z axis. The sample light along the Z axis includes two narrow-range infrared image signals, each having a wavelength between 850 nm and 1050 nm and a wavelength width between 10 nm and 60 nm. A plurality of single-piece planar images in the X axis and the Y axis, as sampled by different wavelength widths along the Z axis, are calculated and superimposed into a 3-dimensional stereoscopic relief image for precise comparison and recognition.
Type: Grant
Filed: July 10, 2018
Date of Patent: January 14, 2020
Inventors: Kuan-Yu Lu, Wei-Hsin Huang, Wei-Hung Chang, Chun-Shing Chu
-
Patent number: 10534976
Abstract: A display apparatus includes a request information acquiring section, an information analyzing section, and a distinction tip deciding section. The request information acquiring section acquires request information including information to distinguish a target. The information analyzing section analyzes the request information acquired by the request information acquiring section, and extracts information required to distinguish the target. The distinction tip deciding section decides a distinction tip when a user distinguishes the target based on the information extracted by the information analyzing section.
Type: Grant
Filed: January 26, 2016
Date of Patent: January 14, 2020
Assignee: OLYMPUS CORPORATION
Inventors: Yoshiyuki Fukuya, Kazuo Kanda, Kazuhiko Shimura
-
Patent number: 10534977
Abstract: An example method for segmenting an object contained in an image includes receiving an image including a plurality of pixels, transforming a plurality of characteristics of a pixel into respective neutrosophic set domains, calculating a neutrosophic similarity score for the pixel based on the respective neutrosophic set domains for the characteristics of the pixel, segmenting an object from background of the image using a region growing algorithm based on the neutrosophic similarity score for the pixel, and receiving a margin adjustment related to the object segmented from the background of the image.
Type: Grant
Filed: December 10, 2018
Date of Patent: January 14, 2020
Assignee: H. LEE MOFFITT CANCER CENTER AND RESEARCH INSTITUTE, INC.
Inventors: Yanhui Guo, Segundo J. Gonzalez
-
Patent number: 10534978
Abstract: An approach is provided in which a system analyzes a first subject in a first image taken at a first point in time against a second subject in a second image taken at a second point in time. The first image and the second image are captured at a venue. The system determines, based on a time duration between the first point in time and the second point in time, a probability that the first subject is at the venue at the second point in time. Next, the system computes, based on the probability, a relevance score that quantifies a relationship between the first subject in the first image and the second subject in the second image. Finally, the system assigns the second image to a first image file corresponding to the first image based on the relevance score reaching a relevance score threshold.
Type: Grant
Filed: December 17, 2017
Date of Patent: January 14, 2020
Assignee: International Business Machines Corporation
Inventors: Mahesh P. Bhat, Paul F. Gerver, Lennard G. Streat, Cameron J. Young
-
Patent number: 10534979
Abstract: Systems and methods of distinguishing between features depicted in an image are presented herein. Information defining an image may be obtained. The image may include visual content comprising an array of pixels. The array may include pixel rows. An identification of a pixel row in an image may be obtained. Distances of individual pixels and/or groups of pixels from the identified row of pixels may be determined. Parameter values for a set of pixel parameters of individual pixels of the image may be determined. Based on one or more of the distances from the identified row of pixels, parameter values of one or more pixel parameters, and/or other information, individual pixels and/or groups of pixels may be classified as one of a plurality of image features.
Type: Grant
Filed: May 21, 2019
Date of Patent: January 14, 2020
Assignee: GoPro, Inc.
Inventors: Vincent Garcia, Maxime Schwab, Francois Lagunas
-
Patent number: 10534980
Abstract: A method and apparatus for recognizing an object may obtain, from an input image, feature points and descriptors corresponding to the feature points, determine indices of the feature points based on the descriptors, estimate a density distribution of feature points for each of the indices, and recognize an object included in the input image based on the estimated density distribution.
Type: Grant
Filed: November 6, 2017
Date of Patent: January 14, 2020
Assignee: Samsung Electronics Co., Ltd.
Inventors: Huijin Lee, Seung-Chan Kim, Sunghoon Hong
-
Patent number: 10534981
Abstract: Disclosed herein is an intelligent agent to analyze a media object. The agent comprises a trained model comprising a number of state layers for storing a history of actions taken by the agent in each of a number of previous iterations performed by the agent in analyzing a media object. The stored state may be used by the agent in a current iteration to determine whether or not to make, or abstain from making, a prediction from output generated by the model, identify another portion of the media object to analyze, or end analysis. Output from the agent's model may comprise a semantic vector that can be mapped to a semantic vector space to identify a number of labels for a media object.
Type: Grant
Filed: April 6, 2018
Date of Patent: January 14, 2020
Assignee: OATH INC.
Inventor: Simon Osindero
-
Patent number: 10534982
Abstract: Techniques for generating 3D gaze predictions based on a deep learning system are described. In an example, the deep learning system includes a neural network. The neural network is trained with training images. During the training, calibration parameters are initialized and input to the neural network, and are updated through the training. Accordingly, the network parameters of the neural network are updated based in part on the calibration parameters. Upon completion of the training, the neural network is calibrated for a user. This calibration includes initializing and inputting the calibration parameters along with calibration images showing an eye of the user to the neural network. The calibration includes updating the calibration parameters without changing the network parameters by minimizing the loss function of the neural network based on the calibration images. Upon completion of the calibration, the neural network is used to generate 3D gaze information for the user.
Type: Grant
Filed: March 30, 2018
Date of Patent: January 14, 2020
Assignee: Tobii AB
Inventor: Erik Linden
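A small sketch of the calibration stage described here: with the network weights frozen, only the per-user calibration parameters are updated by minimizing a loss on the user's calibration images. How the calibration vector enters the model and the use of an MSE loss are assumptions of the sketch, not details from the patent.

```python
import torch

def calibrate_user(model: torch.nn.Module,
                   calib_params: torch.Tensor,
                   calib_images: torch.Tensor,
                   gaze_targets: torch.Tensor,
                   steps: int = 200, lr: float = 1e-2) -> torch.Tensor:
    """Freeze the network parameters and optimize only the user's
    calibration parameters against the calibration images."""
    for p in model.parameters():
        p.requires_grad_(False)
    calib = calib_params.clone().requires_grad_(True)
    optimizer = torch.optim.Adam([calib], lr=lr)
    for _ in range(steps):
        optimizer.zero_grad()
        pred = model(calib_images, calib)   # model is assumed to consume the calibration vector
        loss = torch.nn.functional.mse_loss(pred, gaze_targets)
        loss.backward()
        optimizer.step()
    return calib.detach()
```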
-
Patent number: 10534983
Abstract: A piping and instrumentation planning and maintenance system includes an input/output (I/O) interface for receiving a target piping and instrumentation diagram (PID) from a document source system; a processor in communication with the I/O interface; and non-transitory computer readable media in communication with the processor. The non-transitory computer readable media store instruction code, which when executed by the processor, causes the processor to classify entities and properties thereof within the target PID. The entities include one or more assets and interconnections therebetween specified in the PID. The processor compares the classified entities to a knowledge base that represents relationships between a plurality of assets and interconnections between the assets. The processor then determines, based on the comparison, whether the assets in the target PID are interconnected correctly.
Type: Grant
Filed: September 7, 2018
Date of Patent: January 14, 2020
Assignee: Accenture Global Solutions Limited
Inventors: Teresa Sheausan Tung, Jean-Luc Chatelain, Jurgen Albert Weichenberger, Ishmeet Singh Grewal
-
Patent number: 10534984
Abstract: Various embodiments are generally directed to techniques of adjusting the combination of the samples in a training batch or training set. Embodiments include techniques to determine an accuracy for each class of a classification model, for example. Based on the determined accuracies, the combination of the samples in the training batch may be adjusted or modified to improve the training of the classification model.
Type: Grant
Filed: June 14, 2019
Date of Patent: January 14, 2020
Assignee: CAPITAL ONE SERVICES, LLC
Inventors: Fardin Abdi Taghi Abad, Jeremy Edward Goodsitt, Austin Grant Walters
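One plausible reading of this adjustment, sketched below: measure per-class accuracy on a validation pass, then weight the sampling of the next training batch toward the classes the model currently gets wrong. The inverse-accuracy weighting is an assumption for illustration, not the claimed mechanism.

```python
import numpy as np

def per_class_accuracy(y_true: np.ndarray, y_pred: np.ndarray, num_classes: int) -> np.ndarray:
    """Accuracy of the current model for each class; classes with no samples default to 1.0."""
    acc = np.ones(num_classes)
    for c in range(num_classes):
        mask = y_true == c
        if mask.any():
            acc[c] = (y_pred[mask] == c).mean()
    return acc

def rebalanced_batch(labels: np.ndarray, class_acc: np.ndarray, batch_size: int,
                     rng=np.random.default_rng(0)) -> np.ndarray:
    """Sample indices for the next training batch, favoring low-accuracy classes."""
    class_weight = 1.0 - class_acc + 1e-3         # harder classes get more mass
    sample_weight = class_weight[labels]
    sample_weight /= sample_weight.sum()
    return rng.choice(len(labels), size=batch_size, replace=False, p=sample_weight)
```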
-
Patent number: 10534985
Abstract: A vehicle camera device is applied in a vehicle. The vehicle camera device performs image recognition on the current driving image to obtain a license-plate pattern in the driving image, and when the size of the license-plate pattern is less than a threshold, increases a focus according to the current speed of the vehicle.
Type: Grant
Filed: March 6, 2018
Date of Patent: January 14, 2020
Assignee: GETAC TECHNOLOGY CORPORATION
Inventors: Cheng-Liang Huang, Yung-Le Hung
-
Patent number: 10534986
Abstract: A printing apparatus includes a first interpretation unit configured to generate intermediate data of a page by interpreting print data, a second interpretation unit configured to generate intermediate data of another page by interpreting the print data, and a controller configured to perform, according to a specific print setting command indicating that a specific process is to be performed on all pages, control such that the specific process is performed on all the pages. The controller performs the control if the specific print setting command is included in a specific page.
Type: Grant
Filed: February 10, 2017
Date of Patent: January 14, 2020
Assignee: CANON KABUSHIKI KAISHA
Inventor: Shuichi Takenaka
-
Patent number: 10534987
Abstract: An image processing apparatus includes a target value calculation unit configured to calculate a target value to be output in a predetermined region in input image data based on pixel values of pixels included in the region, a distribution order determination unit configured to determine a distribution order of output values for distributing output values corresponding to the target value in the region based on a pixel value of each pixel included in the region and a threshold value in the threshold matrix corresponding to the pixel, and an output value determination unit configured to determine an output value of each pixel included in the region by allocating the target value to at least one pixel included in the region in the distribution order.
Type: Grant
Filed: March 1, 2019
Date of Patent: January 14, 2020
Assignee: CANON KABUSHIKI KAISHA
Inventors: Shigeo Kodama, Hisashi Ishikawa
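Read as a halftoning step, the three units map to: a per-block dot budget (the target value), an ordering of the block's pixels driven by pixel value versus a dither-threshold matrix (the distribution order), and allocation of the budget in that order. The 4x4 Bayer matrix and binary output used below are assumptions for the sketch.

```python
import numpy as np

# A standard 4x4 Bayer dither matrix scaled to the 0-255 range.
BAYER_4x4 = (np.array([[ 0,  8,  2, 10],
                       [12,  4, 14,  6],
                       [ 3, 11,  1,  9],
                       [15,  7, 13,  5]]) + 0.5) * (255 / 16)

def halftone_block(block: np.ndarray) -> np.ndarray:
    """block: (4, 4) grayscale values in [0, 255].  Returns a binary block
    whose number of 'on' pixels equals the block's target value, placed
    in an order determined by pixel value relative to the threshold matrix."""
    target = int(round(block.mean() / 255 * block.size))   # dot budget for the region
    priority = block - BAYER_4x4                            # distribution order criterion
    order = np.argsort(priority, axis=None)[::-1]           # strongest candidates first
    out = np.zeros(block.size, dtype=np.uint8)
    out[order[:target]] = 255
    return out.reshape(block.shape)

print(halftone_block(np.full((4, 4), 128.0)))
```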
-
Patent number: 10534988
Abstract: Durable fabric RFID labels are provided for mounting on garments, fabrics and other fabric-containing items, the mounting and durability being before, during or after manufacturing and processing of the items. These labels are robust enough to withstand processing during manufacturing, while being capable of remaining on the item during inventory handling, merchandising and consumer use, including washing and drying. The durable labels include an RFID inlay, a face sheet overlying a first face of the RFID inlay, and a functional adhesive, such as a hot-melt adhesive, overlying a second face of the RFID inlay. The face sheet can be of printable material or have indicia or be a printed face sheet. The functional adhesive can be of a moisture-resistive type. The RFID inlay can be encased within a pocket of polymeric material. A polymeric sheet reinforcement layer can be adhered to and cover all or a portion of the RFID inlay.
Type: Grant
Filed: August 18, 2017
Date of Patent: January 14, 2020
Assignee: AVERY DENNISON RETAIL INFORMATION SERVICES LLC
Inventors: Richard K. Bauer, Rishikesh K. Bharadwaj
-
Patent number: 10534989
Abstract: The purpose of the present invention is to enable a machine-read mark to also determine, with high probability, a management state to be normal when a visual mark has determined the management state to be normal, in cases when the visual mark and the machine-read mark, which are provided with a sensor function for detecting an abnormality in the same management state, are present on one product, even if there is variation in the quality of the marks. At least two barcodes or marks for managing the safety of one and the same product are provided. The barcodes and marks are provided with a function with which the safety of the product is confirmed as a result of a change in a property thereof, such as the color or shape, caused by an external factor that may reduce the safety of the product. The at least two barcodes or marks are provided with the function with which the safety of the product is confirmed as a result of a change in a property thereof caused by the same external factor.
Type: Grant
Filed: June 29, 2016
Date of Patent: January 14, 2020
Assignee: Hitachi, Ltd.
Inventors: Yuya Tokuda, Tomotoshi Ishida
-
Patent number: 10534990
Abstract: A dual interface smart card, and methods for the manufacture thereof, having a metal layer, an IC module, with contacts and RF capability, and a plug formed of non RF impeding material, disposed in the metal layer. The plug provides support for the IC module and a degree of electrical insulation and isolation from the metal layer. Embodiments of the card include at least one additional layer.
Type: Grant
Filed: March 28, 2019
Date of Patent: January 14, 2020
Assignee: COMPOSECURE, LLC
Inventors: John Herslow, Adam Lowe, Luis Dasilva, Brian Nester
-
Patent number: 10534991
Abstract: A microcircuit card including an overall span of contacts including at least individual contact surfaces connected to this microcircuit while defining two parallel columns situated in proximity to two edges of the overall span, in a card body having a format at least equal to the 2FF format, in which there is made a pre-cutout in the 4FF format surrounding the overall span of contacts and a pre-cutout in the 3FF format surrounding the pre-cutout in the 4FF format, these pre-cutouts being such that the individual contact surfaces have, with respect to each of the pre-cutouts, positions and dimensions such that they encompass the theoretical contact zones defined by the standards defining these 4FF, 3FF and 2FF formats, the upper edge of the pre-cutout in the 3FF format being situated at a distance at least equal to 400 micrometers from the upper edge of the pre-cutout in the 4FF format.
Type: Grant
Filed: November 18, 2014
Date of Patent: January 14, 2020
Assignee: IDEMIA FRANCE
Inventors: Olivier Bosquet, Mouy-Kuong Sron
-
Patent number: 10534992
Abstract: A wireless IC device includes a wireless IC chip arranged to process a radio signal, a power-supply circuit board that is connected to the wireless IC chip and that includes a power supply circuit including at least one coil pattern, and a radiation plate arranged to radiate a transmission signal supplied from the power-supply circuit board and/or receive a reception signal to supply the reception signal to the power-supply circuit board. The radiation plate includes an opening provided in a portion thereof and a slit connected to the opening. When viewed in plan from the direction of the winding axis of the coil pattern, the opening in the radiation plate overlaps with an inner area of the coil pattern, and the area of the inner area is approximately the same as that of the opening.
Type: Grant
Filed: April 5, 2017
Date of Patent: January 14, 2020
Assignee: MURATA MANUFACTURING CO., LTD.
Inventors: Noboru Kato, Nobuo Ikemoto
-
Patent number: 10534993
Abstract: A cap seal includes a cylindrical member covering a side face of the container, and a top face including an IC tag and connected to a first end that is one of two axial ends of the cylindrical member to close the cylindrical member at the first end. The cylindrical member includes a metal portion disposed on the side face of the container, and at least one insulating portion extending from the first end. The cylindrical member forms a closed annular shape in a circumferential direction of the cylindrical member, and the metal portion and the insulating portion are joined to each other.
Type: Grant
Filed: April 5, 2019
Date of Patent: January 14, 2020
Assignee: TOPPAN PRINTING CO., LTD.
Inventors: Kenji Shinohara, Takamitsu Nakabayashi, Keinosuke Yamaoka
-
Patent number: 10534994
Abstract: The present disclosure relates to a computer-implemented method for analyzing one or more hyper-parameters for a multi-layer computational structure. The method may include accessing, using at least one processor, input data for recognition. The input data may include at least one of an image, a pattern, a speech input, a natural language input, a video input, and a complex data set. The method may further include processing the input data using one or more layers of the multi-layer computational structure and performing matrix factorization of the one or more layers. The method may also include analyzing one or more hyper-parameters for the one or more layers based upon, at least in part, the matrix factorization of the one or more layers.
Type: Grant
Filed: November 11, 2015
Date of Patent: January 14, 2020
Assignee: Cadence Design Systems, Inc.
Inventors: Piyush Kaul, Samer Lutfi Hijazi, Raul Alejandro Casas, Rishi Kumar, Xuehong Mao, Christopher Rowen
-
Patent number: 10534995
Abstract: A network of apparatuses that characterizes items is presented. A self-updating apparatus includes a processing unit that has a memory storing parameters that are useful for characterizing different items, and a processing module configured to automatically select sources from which to receive data, modify the parameters based on the data that is received, and to select recipients of modified parameters. Selection of sources and recipients is based on comparison of parameters between the processing module and the sources, and between the processing module and the recipients, respectively. The processing unit may include an artificial intelligence program (e.g., a neural network such as a machine learning program). When used in a network, the processing units may "train" other processing units in the network such that the characterization accuracy and range of each processing unit improves over time.
Type: Grant
Filed: March 15, 2013
Date of Patent: January 14, 2020
Assignee: QYLUR INTELLIGENT SYSTEMS, INC.
Inventors: Alysia M. Sagi-Dolev, Alon Zweig
-
Patent number: 10534996
Abstract: A CNN (Cellular Neural Networks or Cellular Nonlinear Networks) based digital integrated circuit for artificial intelligence contains multiple CNN processing units. Each CNN processing unit contains CNN logic circuits operatively coupled to a memory subsystem having first and second memories. The first memory contains magnetic random access memory (MRAM) cells for storing weights (e.g., filter coefficients) while the second memory is for storing input signals (e.g., imagery data). The first memory may store one-time-programming weights or filter coefficients. The memory subsystem may contain a third memory that contains MRAM cells for storing one-time-programming data for security purposes. The second memory contains MRAM cells or static random access memory cells. Each MRAM cell contains a voltage-controlled magnetic anisotropy (VCMA) based magnetic tunnel junction (MTJ) element. The magnetization direction in a VCMA based MTJ element can be in-plane or out-of-plane.
Type: Grant
Filed: October 10, 2017
Date of Patent: January 14, 2020
Assignee: Gyrfalcon Technology Inc.
Inventors: Chyu-Jiuh Torng, Daniel Liu
-
Patent number: 10534997
Abstract: Methods, systems, and apparatus, including computer programs encoded on computer storage media, for receiving a request from a client to process a computational graph; obtaining data representing the computational graph, the computational graph comprising a plurality of nodes and directed edges, wherein each node represents a respective operation, wherein each directed edge connects a respective first node to a respective second node that represents an operation that receives, as input, an output of an operation represented by the respective first node; identifying a plurality of available devices for performing the requested operation; partitioning the computational graph into a plurality of subgraphs, each subgraph comprising one or more nodes in the computational graph; and assigning, for each subgraph, the operations represented by the one or more nodes in the subgraph to a respective available device in the plurality of available devices for operation.
Type: Grant
Filed: April 27, 2018
Date of Patent: January 14, 2020
Assignee: Google LLC
Inventors: Paul A. Tucker, Jeffrey Adgate Dean, Sanjay Ghemawat, Yuan Yu
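As a toy illustration of the partition-and-assign step: topologically order the graph, cut the order into contiguous subgraphs, and hand each subgraph to one of the available devices. Real placement in TensorFlow-style systems weighs computation and communication costs; the even split below is an assumption made for the sketch.

```python
from collections import deque

def topo_order(graph: dict) -> list:
    """graph: node -> list of successor nodes (a DAG with every node as a key)."""
    indeg = {n: 0 for n in graph}
    for succs in graph.values():
        for s in succs:
            indeg[s] += 1
    queue = deque(n for n, d in indeg.items() if d == 0)
    order = []
    while queue:
        n = queue.popleft()
        order.append(n)
        for s in graph[n]:
            indeg[s] -= 1
            if indeg[s] == 0:
                queue.append(s)
    return order

def partition_and_assign(graph: dict, devices: list) -> dict:
    """Split the topological order into len(devices) contiguous subgraphs
    and assign each subgraph's operations to one device."""
    order = topo_order(graph)
    size = -(-len(order) // len(devices))        # ceiling division
    return {devices[i]: order[i * size:(i + 1) * size]
            for i in range(len(devices))}

graph = {"read": ["matmul"], "matmul": ["relu"], "relu": ["loss"], "loss": []}
print(partition_and_assign(graph, ["/gpu:0", "/gpu:1"]))
```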
-
Patent number: 10534998
Abstract: Methods and systems are provided for deblurring images. A neural network is trained where the training includes selecting a central training image from a sequence of blurred images. An earlier training image and a later training image are selected based on the earlier training image preceding the central training image in the sequence and the later training image following the central training image in the sequence and based on proximity of the images to the central training image in the sequence. A training output image is generated by the neural network from the central training image, the earlier training image, and the later training image. Similarity is evaluated between the training output image and a reference image. The neural network is modified based on the evaluated similarity. The trained neural network is used to generate a deblurred output image from a blurry input image.
Type: Grant
Filed: April 10, 2019
Date of Patent: January 14, 2020
Assignee: Adobe Inc.
Inventors: Oliver Wang, Jue Wang, Shuochen Su
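A minimal PyTorch sketch of the described training loop: the earlier, central, and later blurred frames are stacked along the channel axis, a small convolutional network predicts a deblurred central frame, and a similarity loss (L1 here) against the sharp reference drives the update. The network shape and the choice of L1 are assumptions, not details from the patent.

```python
import torch
import torch.nn as nn

class TripletDeblurNet(nn.Module):
    """Takes three RGB frames stacked channel-wise (9 channels) and
    predicts a deblurred version of the central frame."""
    def __init__(self):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(9, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 3, 3, padding=1),
        )

    def forward(self, earlier, central, later):
        return self.body(torch.cat([earlier, central, later], dim=1))

def train_step(model, optimizer, earlier, central, later, reference):
    """One update: evaluate similarity (L1) to the sharp reference and modify the network."""
    optimizer.zero_grad()
    output = model(earlier, central, later)
    loss = nn.functional.l1_loss(output, reference)
    loss.backward()
    optimizer.step()
    return loss.item()

model = TripletDeblurNet()
opt = torch.optim.Adam(model.parameters(), lr=1e-4)
frames = [torch.rand(1, 3, 64, 64) for _ in range(4)]   # earlier, central, later, reference
print(train_step(model, opt, *frames))
```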
-
Patent number: 10534999
Abstract: An apparatus for classifying data using a neural network includes an input layer configured to receive input data; an output layer configured to output a plurality of first output values of the input data with respect to each of at least one of all classes, and output only one first output value of the input data with respect to each of a rest of all of the classes; and a boost pooling layer configured to receive the one or more output values output for each class, and output one second output value for each class.
Type: Grant
Filed: September 25, 2015
Date of Patent: January 14, 2020
Assignee: Samsung Electronics Co., Ltd.
Inventor: Keun Joo Kwon