Patents Examined by Ruiping Li
-
Patent number: 11470241
Abstract: A method and system for detecting facial expressions in digital images, and applications therefor, are disclosed. Analysis of a digital image determines whether or not a smile and/or blink is present on a person's face. Face recognition, and/or a pose or illumination condition determination, permits application of a specific, relatively small classifier cascade.
Type: Grant
Filed: September 14, 2020
Date of Patent: October 11, 2022
Inventors: Catalina Neghina, Mihnea Gangea, Stefan Petrescu, Emilian David, Petronel Bigioi, Eric Zarakov, Eran Steinberg
-
Patent number: 11468592
Abstract: There is provided an information processing apparatus capable of improving an identification accuracy for a user. The information processing apparatus according to the present technology includes a control unit. The control unit causes, when a user is unable to be identified by identification in a feature quantity space based on a registered feature quantity and an acquired feature quantity of the user, the acquired feature quantity of the user to be stored in a storage unit as an unidentifiable feature quantity, sets an additional registration feature quantity on the basis of a distribution of unidentifiable feature quantities in the feature quantity space, specifies a user corresponding to the additional registration feature quantity, and additionally registers the additional registration feature quantity as a feature quantity of the specified user.
Type: Grant
Filed: March 19, 2018
Date of Patent: October 11, 2022
Assignee: SONY CORPORATION
Inventor: Tatsuhito Sato
-
Patent number: 11462036
Abstract: In one embodiment, an apparatus comprises a memory and a processor. The memory stores visual data captured by one or more sensors. The processor detects one or more first objects in the visual data based on a machine learning model and one or more first reference templates. The processor further determines, based on an object ontology, that the visual data is expected to contain a second object, wherein the object ontology indicates that the second object is related to the one or more first objects. The processor further detects the second object in the visual data based on the machine learning model and a second reference template. The processor further determines, based on an inference rule, that the visual data is expected to contain a third object. The processor further detects the third object in the visual data based on the machine learning model and a third reference template.
Type: Grant
Filed: July 17, 2020
Date of Patent: October 4, 2022
Assignee: Intel Corporation
Inventors: Ned M. Smith, Katalin Klara Bartfai-Walcott, Eve M. Schooler, Shao-Wen Yang
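The ontology-then-rules detection flow in this abstract can be sketched as follows. The `ONTOLOGY` and `INFERENCE_RULES` tables and the `detect_fn` callback are illustrative stand-ins, not the patent's actual data structures: in the claim, each detection runs a machine learning model against a per-object reference template, which `detect_fn` abstracts away.

```python
# Hypothetical ontology: objects related to ones already detected.
ONTOLOGY = {"keyboard": ["monitor"], "saddle": ["horse"]}
# Hypothetical inference rule: a pair of found objects implies a third.
INFERENCE_RULES = {("keyboard", "monitor"): "mouse"}

def guided_detection(first_detections, detect_fn):
    """Expand an initial detection set using ontology relations and rules.

    detect_fn stands in for running the ML model with the object's
    reference template; it returns True if the object is found.
    """
    found = set(first_detections)
    # Ontology pass: the visual data is expected to contain related objects.
    for obj in list(found):
        for related in ONTOLOGY.get(obj, []):
            if detect_fn(related):
                found.add(related)
    # Inference-rule pass: pairs of found objects imply a third object.
    for (a, b), implied in INFERENCE_RULES.items():
        if a in found and b in found and detect_fn(implied):
            found.add(implied)
    return found
```

A detection of a keyboard would then prompt a targeted search for a monitor, and finding both would in turn prompt a search for a mouse, rather than scanning for every known object template.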
-
Patent number: 11462310
Abstract: The de-identification system can be operable to receive, from at least one first entity, a medical scan and a corresponding medical report. A set of patient identifiers can be identified in a subset of fields of a header of the medical scan. A de-identified medical scan can be generated by replacing the subset of fields of the header of the medical scan with a corresponding set of anonymized fields generated by performing a header anonymization function. A subset of patient identifiers of the set of patient identifiers can be identified in the medical report. A de-identified medical report can be generated by replacing each of the subset of patient identifiers with corresponding anonymized placeholder text generated by performing a text anonymization function on the subset of patient identifiers. The de-identified medical scan and the de-identified medical report can be transmitted to a second entity via a network.
Type: Grant
Filed: December 22, 2020
Date of Patent: October 4, 2022
Assignee: Enlitic, Inc.
Inventors: Eric C. Poblenz, Kevin Lyman, Chris Croswhite
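A minimal sketch of the two-stage de-identification described here, with one function for header anonymization and one pass of placeholder substitution over the report text. The `PATIENT_FIELDS` keys and the `ANON-` token scheme are assumptions for illustration, not the patent's actual field names or anonymization functions.

```python
import hashlib

# Hypothetical header keys treated as patient identifiers.
PATIENT_FIELDS = ("PatientName", "PatientID", "PatientBirthDate")

def header_anonymization(value):
    # Replace an identifying header value with a stable anonymized token.
    return "ANON-" + hashlib.sha256(value.encode()).hexdigest()[:8]

def deidentify(header, report_text):
    """Return a de-identified (header, report) pair.

    Identifiers found in the header's patient fields are replaced with
    anonymized fields; occurrences of the same identifiers in the report
    are replaced with placeholder text.
    """
    deid_header = dict(header)
    identifiers = []
    for field in PATIENT_FIELDS:
        if field in deid_header:
            identifiers.append(deid_header[field])
            deid_header[field] = header_anonymization(deid_header[field])
    deid_report = report_text
    for ident in identifiers:
        deid_report = deid_report.replace(ident, "[REDACTED]")
    return deid_header, deid_report
```

Note the claim only requires replacing the subset of identifiers actually found in the report; a simple string substitution like the above over-approximates what a production text anonymization function (typically NER-based) would do.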
-
Patent number: 11455488
Abstract: Systems and methods are provided for processing a drawing in a modeling prototype. A data structure associated with a visual model is accessed. The visual model is analyzed to extract construct-relevant features, where the construct-relevant features are extracted using a drawing object by identifying visual attributes of the visual model and populating a data structure for each object drawn. The visual model is analyzed to generate a statistical model, where the statistical model is generated using a multidimensional scoring rubric by targeting different constructs which compositely estimate learning progression levels, wherein the statistical model is based on features that are principally aligned with one or more of the constructs. An automated scoring is determined based on the construct-relevant features and the statistical model, where the automated scoring is stored in a computer readable medium and is outputted for display, transmitted across a computer network, or printed.
Type: Grant
Filed: March 20, 2019
Date of Patent: September 27, 2022
Assignee: Educational Testing Service
Inventors: Chee Wee Leong, Lei Liu, Rutuja Ubale, Lei Chen
-
Patent number: 11430576
Abstract: This disclosure relates generally to a system and method for monitoring and quality evaluation of perishable food items in quantitative terms. Current technology provides limited capability for controlling environmental conditions surrounding the food items in real-time or any quantitative measurement for the degree of freshness of the perishable food items. The disclosed systems and methods facilitate quantitative determination of freshness of food items by utilizing sensor data and visual data obtained by monitoring the food item. In an embodiment, the system utilizes a pre-trained CNN model and an RNN model, where the pre-trained CNN model is further fine-tuned while training the RNN model to provide robust quality monitoring of the food items. In another embodiment, a rate-kinetics-based model is utilized for determining the reaction rate order of the food item at a particular post-harvest stage of the food item so as to determine its remaining shelf life.
Type: Grant
Filed: February 6, 2020
Date of Patent: August 30, 2022
Assignee: TATA CONSULTANCY SERVICES LIMITED
Inventors: Beena Rai, Jayita Dutta, Parijat Deshpande, Shankar Balajirao Kausley, Shirish Subhash Karande, Manasi Samarth Patwardhan, Shashank Madhukar Deshmukh
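The rate-kinetics embodiment can be illustrated with the two standard reaction orders used in food-quality modeling. The quality index `q`, limit `q_limit`, and rate constant `k` are generic placeholders; the patent's actual quality attributes and how `k` is fitted from sensor and visual data are not specified in the abstract.

```python
import math

def remaining_shelf_life(q_now, q_limit, k, order):
    """Time until quality index q decays from q_now to the acceptance
    limit q_limit, under zero- or first-order reaction kinetics with
    rate constant k (per unit time).
    """
    if order == 0:
        # Zero order: q(t) = q0 - k*t, so t = (q0 - q_limit) / k
        return (q_now - q_limit) / k
    elif order == 1:
        # First order: q(t) = q0 * exp(-k*t), so t = ln(q0 / q_limit) / k
        return math.log(q_now / q_limit) / k
    raise ValueError("unsupported reaction order")
```

Determining the reaction rate order, as the abstract describes, amounts to checking which of these decay laws best fits the measured quality trajectory at the item's current post-harvest stage.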
-
Patent number: 11423592
Abstract: Technology disclosed herein may involve a computing system that (i) based on an image of a target object of a given class of object and at least one GAN configured to generate artificial images of the given class of object, generates an artificial image of the target object that is substantially similar to real-world images of objects of the given class of objects captured by real-world scanning devices, (ii) based on an image of a receptacle, selects an insertion location within the receptacle in the image of the receptacle to insert the artificial image of the target object, (iii) generates a combined image of the receptacle and the target object, wherein generating the combined image comprises inserting the artificial image of the target object into the image of the receptacle at the insertion location, and (iv) trains one or more object detection algorithms with the combined image of the receptacle and the target object.
Type: Grant
Filed: October 21, 2019
Date of Patent: August 23, 2022
Assignee: Rapiscan Laboratories, Inc.
Inventors: Ian Cinnamon, Bruno Brasil Ferrari Faviero, Simanta Gautam
-
Patent number: 11423264
Abstract: A data classification system is trained to classify input data into multiple classes. The system is initially trained by adjusting weights within the system based on a set of training data that includes multiple tuples, each being a training instance and corresponding training label. Two training instances, one from a minority class and one from a majority class, are selected from the set of training data based on entropies for the training instances. A synthetic training instance is generated by combining the two selected training instances and a corresponding training label is generated. A tuple including the synthetic training instance and the synthetic training label is added to the set of training data, resulting in an augmented training data set. One or more such synthetic training instances can be added to the augmented training data set and the system is then re-trained on the augmented training data set.
Type: Grant
Filed: October 21, 2019
Date of Patent: August 23, 2022
Assignee: Adobe Inc.
Inventors: Pinkesh Badjatiya, Nikaash Puri, Ayush Chopra, Anubha Kabra
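The core step, combining a minority-class and a majority-class instance into a synthetic tuple, can be sketched as a mixup-style convex combination; the entropy used to select candidate instances is sketched as the Shannon entropy of the model's predicted class probabilities. Both the mixing rule and the entropy definition are assumptions for illustration; the abstract does not specify the exact combining function.

```python
import numpy as np

def prediction_entropy(probs):
    # Shannon entropy of a model's class-probability vector; high-entropy
    # (uncertain) instances are natural candidates for augmentation.
    p = np.clip(np.asarray(probs, dtype=float), 1e-12, 1.0)
    return float(-(p * np.log(p)).sum())

def make_synthetic_instance(x_minority, x_majority, y_minority, y_majority, lam=0.5):
    # Convex combination of one minority- and one majority-class instance,
    # with a correspondingly mixed (soft) label, mixup-style.
    x_syn = lam * x_minority + (1 - lam) * x_majority
    y_syn = lam * y_minority + (1 - lam) * y_majority
    return x_syn, y_syn
```

The resulting `(x_syn, y_syn)` tuple is appended to the training set, and the classifier is re-trained on the augmented set as the abstract describes.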
-
Patent number: 11423696
Abstract: According to an embodiment, a method for recognizing a user's face using an intelligent electronic device comprises obtaining a face area from the user's face captured and obtaining face information from the face area, comparing the obtained face information with a default cluster, selecting whether to extract a face vector from the face information according to a result of the comparison, determining an age variation state for the user's face based on the extracted face vector, and upon determining that the face vector is in the age variation state for the user's face, extracting a face feature vector from the face vector and configuring an expanded cluster by adding the face feature vector to the default cluster. According to the disclosure, the intelligent electronic device may be related to artificial intelligence (AI) modules, unmanned aerial vehicles (UAVs), robots, augmented reality (AR) devices, virtual reality (VR) devices, and 5G service-related devices.
Type: Grant
Filed: August 31, 2020
Date of Patent: August 23, 2022
Assignee: LG ELECTRONICS INC.
Inventor: Sungil Kim
-
Patent number: 11410461
Abstract: A group to be authenticated in face authentication is efficiently registered in a system. An information processing system includes a face detection unit configured to detect a face from an image in which a plurality of faces of persons are shown, a determination unit configured to determine whether or not the face detected by the face detection unit satisfies a predetermined condition, and a registration information generation unit configured to generate registration information, the registration information being information in which a partial image of each of a plurality of faces that have been determined to satisfy the predetermined condition is associated with an identifier identifying a group to be authenticated in face authentication.
Type: Grant
Filed: December 2, 2019
Date of Patent: August 9, 2022
Assignee: NEC CORPORATION
Inventor: Yuki Shimizu
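The registration-information generation described here reduces to filtering detected face crops by a predetermined condition and associating the survivors with a group identifier. In this sketch the condition is modeled as a generic `quality_fn` score against a threshold; the patent does not specify what the predetermined condition is, so both are assumptions.

```python
def build_group_registration(face_crops, quality_fn, group_id, threshold=0.8):
    """Associate each face crop that satisfies the predetermined
    condition (here: quality_fn(face) >= threshold, an assumed stand-in)
    with the identifier of the group to be authenticated.
    """
    registration = []
    for face in face_crops:
        if quality_fn(face) >= threshold:
            registration.append({"face": face, "group": group_id})
    return registration
```

In use, `face_crops` would be the partial images produced by the face detection unit for a single group photo, so one pass registers the whole group at once.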
-
Patent number: 11410457
Abstract: Provided are systems and a method for photorealistic real-time face reenactment. An example method includes receiving a target video including a target face and a scenario including a series of source facial expressions, determining, based on the target face, one or more target facial expressions, and synthesizing, using a parametric face model, an output face. The output face includes the target face. The one or more target facial expressions are modified to imitate the source facial expressions. The method further includes generating, based on a deep neural network, a mouth region and an eyes region, and combining the output face, the mouth region, and the eyes region to generate a frame of an output video.
Type: Grant
Filed: September 28, 2020
Date of Patent: August 9, 2022
Assignee: Snap Inc.
Inventors: Pavel Savchenkov, Dmitry Matov, Aleksandr Mashrabov, Alexey Pchelnikov
-
Patent number: 11410364
Abstract: Provided are systems and methods for realistic head turns and face animation synthesis. An example method may include receiving frames of a source video with the head and the face of a source actor. The method may then proceed with generating sets of source pose parameters that represent positions of the head and facial expressions of the source actor. The method may further include receiving at least one target image including the target head and the target face of a target person, determining target identity information associated with the target face, and generating an output video based on the target identity information and the sets of source pose parameters. Each frame of the output video can include an image of the target face modified to mimic at least one of the positions of the head of the source actor and at least one of the facial expressions of the source actor.
Type: Grant
Filed: October 24, 2019
Date of Patent: August 9, 2022
Assignee: Snap Inc.
Inventors: Yurii Volkov, Pavel Savchenkov, Maxim Lukin, Ivan Belonogov, Nikolai Smirnov, Aleksandr Mashrabov
-
Patent number: 11394888
Abstract: Disclosed are systems and methods for providing personalized videos. An example method includes storing one or more preprocessed videos. The one or more preprocessed videos may include at least one frame with at least a target face. The method may continue with receiving an image of a source face, for example, by receiving a user selection of a further image and segmenting the further image into portions including the source face and a background. The method may then proceed with modifying the one or more preprocessed videos to generate one or more personalized videos. The modification may include modifying the image of the source face to generate an image of a modified source face. The modified source face may adopt a facial expression of the target face. The modification may further include replacing the at least one target face with the image of the modified source face.
Type: Grant
Filed: October 7, 2019
Date of Patent: July 19, 2022
Assignee: Snap Inc.
Inventors: Victor Shaburov, Alexander Mashrabov, Grigoriy Tkachenko, Ivan Semenov
-
Patent number: 11392738
Abstract: A method for generating a simulation scenario, the method may include receiving sensed information that was sensed during driving sessions of vehicles; wherein the sensed information comprises visual information regarding multiple objects; determining, by applying an unsupervised learning process on the sensed information, driving scenario building blocks and occurrence information regarding an occurrence of the driving scenario building blocks; and generating the simulation scenario based on a selected set of driving scenario building blocks and on physical guidelines, wherein the generating comprises selecting, out of the driving scenario building blocks, the selected set of driving scenario building blocks; wherein the generating is responsive to at least a part of the occurrence information and to at least one simulation scenario limitation.
Type: Grant
Filed: May 5, 2020
Date of Patent: July 19, 2022
Assignee: AUTOBRAINS TECHNOLOGIES LTD
Inventors: Igal Raichelgauz, Karina Odinaev
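One plausible reading of the selection step is sampling building blocks weighted by their real-world occurrence, subject to a scenario-size limitation. The sketch below makes that reading concrete; the occurrence-weighted sampling and the `max_blocks` limitation are assumptions, since the abstract does not state how occurrence information drives the selection.

```python
import random

def generate_scenario(building_blocks, occurrence, limits, n_blocks, seed=0):
    """Select a set of driving-scenario building blocks, weighted by how
    often each block occurred in real driving sessions, honouring a
    scenario-size limitation (assumed key: "max_blocks").
    """
    rng = random.Random(seed)  # seeded for reproducible scenarios
    n = min(n_blocks, limits.get("max_blocks", n_blocks))
    weights = [occurrence[b] for b in building_blocks]
    return rng.choices(building_blocks, weights=weights, k=n)
```

The physical-guidelines check from the claim (e.g. rejecting kinematically impossible block sequences) would be applied to the sampled set before emitting the final scenario; it is omitted here.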
-
Patent number: 11386698
Abstract: A method for sending an alarm message, belonging to the field of computer technology. The method includes: acquiring a detection image (201) captured by an image capturing apparatus; determining a target detection area (202); detecting a person's call status information corresponding to an image in the target detection area according to a preset on-the-phone determination algorithm model; and sending a first alarm message (203) to a server if the person's call status information shows that the person is on the phone.
Type: Grant
Filed: January 23, 2018
Date of Patent: July 12, 2022
Assignee: Hangzhou Hikvision Digital Technology Co., Ltd.
Inventors: Haohao Tong, Junyan Tong, Ye Ren
-
Patent number: 11386537
Abstract: Abnormality detection within a defined area includes obtaining a plurality of images of the defined area from image-capture devices. An extent of deviation of one or more types of products from an inference of each of the plurality of images is determined using a trained neural network. A localized dimensional representation is generated in a portion of an input image associated with a first location of the plurality of locations, based on gradients computed from the determined extent of deviation. The generated localized dimensional representation provides a visual indication of an abnormality located in the first location within the defined area. An action associated with the first location is executed based on the generated dimensional representation for proactive control or prevention of occurrence of an undesired event in the defined area.
Type: Grant
Filed: February 27, 2020
Date of Patent: July 12, 2022
Assignee: Shanghai United Imaging Intelligence Co., LTD.
Inventors: Abhishek Sharma, Meng Zheng, Srikrishna Karanam, Ziyan Wu, Arun Innanje, Terrence Chen
-
Patent number: 11373411
Abstract: A method includes obtaining a two-dimensional image, obtaining a two-dimensional image annotation that indicates presence of an object in the two-dimensional image, determining a location proposal based on the two-dimensional image annotation, determining a classification for the object, determining an estimated size for the object based on the classification for the object, and defining a three-dimensional cuboid for the object based on the location proposal and the estimated size.
Type: Grant
Filed: June 6, 2019
Date of Patent: June 28, 2022
Assignee: Apple Inc.
Inventors: Hanlin Goh, Nitish Srivastava, Yichuan Tang, Ruslan Salakhutdinov
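A toy version of the final step, lifting a 2D annotation to a 3D cuboid using a class-conditioned size prior, is sketched below. The pinhole back-projection, the `depth_estimate` input, and the `CLASS_SIZE_PRIORS` table are all illustrative assumptions; the abstract only states that the cuboid is defined from a location proposal and a class-based estimated size.

```python
# Hypothetical per-class size priors (width, height, length) in metres.
CLASS_SIZE_PRIORS = {"car": (1.8, 1.5, 4.5), "pedestrian": (0.6, 1.7, 0.6)}

def cuboid_from_2d(box_2d, depth_estimate, classification, fx, fy, cx, cy):
    """Define a 3D cuboid from a 2D box (x1, y1, x2, y2), a depth
    estimate, and the class's prior dimensions, using a pinhole camera
    with focal lengths (fx, fy) and principal point (cx, cy).
    """
    u = (box_2d[0] + box_2d[2]) / 2  # 2D box centre, pixels
    v = (box_2d[1] + box_2d[3]) / 2
    # Back-project the box centre to a 3D point at the estimated depth.
    x = (u - cx) * depth_estimate / fx
    y = (v - cy) * depth_estimate / fy
    return {"center": (x, y, depth_estimate),
            "size": CLASS_SIZE_PRIORS[classification]}
```

Conditioning the cuboid's dimensions on the classification is what lets a single 2D box resolve the depth/size ambiguity: a "car" box and a "pedestrian" box of the same pixel area imply very different 3D extents.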
-
Patent number: 11373445
Abstract: A method and an apparatus for processing data, and a non-transitory computer readable storage medium. The method includes: obtaining a face recognition model stored in a first operation environment; performing an initialization on the face recognition model in the first operation environment; and transmitting the face recognition model subjected to the initialization to a second operation environment for storing, in which a storage space in the first operation environment is greater than a storage space in the second operation environment.
Type: Grant
Filed: January 10, 2020
Date of Patent: June 28, 2022
Assignee: GUANGDONG OPPO MOBILE TELECOMMUNICATIONS CORP., LTD.
Inventors: Ziqing Guo, Haitao Zhou, Kamwing Au, Xiao Tan
-
Patent number: 11373272
Abstract: A system and method for improving the detail of a digital signal comprising at least three dimensions can be implemented by extracting a plurality of data cubes containing two x-planes, two y-planes, two z-planes, and amplitude information at eight locations in this x, y, and z space. A primary and secondary direction and a data plane for each data cube can then be selected based on difference calculations of eight locations in the x, y, and z directions, resulting in a 2×2 data square. This data square can then be used to compute a network neighborhood, which can subsequently be used to calculate first and second order gradient information. The first and second order gradient information can be used to construct an output signal that has greater detail than the input signal.
Type: Grant
Filed: November 23, 2020
Date of Patent: June 28, 2022
Assignee: MindAptiv, LLC
Inventors: John J. Kolb, V, Kenneth Granville
-
Patent number: 11367186
Abstract: The disclosure relates to stent detection and shadow detection in the context of intravascular data sets obtained using a probe such as, for example, an optical coherence tomography probe or an intravascular ultrasound probe.
Type: Grant
Filed: July 14, 2020
Date of Patent: June 21, 2022
Assignee: LightLab Imaging, Inc.
Inventors: Sonal Ambwani, Christopher E. Griffin