Patents Examined by Quan M Hua
-
Patent number: 11270164
Abstract: A system, including a processor and a memory, the memory including instructions to be executed by the processor to train a deep neural network based on a plurality of real-world images and determine that the accuracy of the deep neural network in identifying one or more physical features, including one or more object types, in the plurality of real-world images is below a threshold. When the accuracy is below the threshold, the system generates a plurality of synthetic images using a photo-realistic image rendering software program and a generative adversarial network. The instructions can include further instructions to retrain the deep neural network based on the plurality of real-world images and the plurality of synthetic images and to output the retrained deep neural network.
Type: Grant
Filed: September 24, 2020
Date of Patent: March 8, 2022
Assignee: FORD GLOBAL TECHNOLOGIES, LLC
Inventors: Vijay Nagasamy, Deepti Mahajan, Rohan Bhasin, Nikita Jaipuria, Gautham Sholingar, Vidya Nariyambut murali
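The accuracy-gated augmentation loop described in this abstract can be sketched as follows. Every function here is a hypothetical stub of my own (the photo-realistic renderer and GAN are reduced to a placeholder), not Ford's implementation:

```python
# Sketch of the accuracy-gated retraining loop from the abstract.
# All names below are hypothetical stand-ins, not the patented system.

def evaluate_accuracy(model, images):
    """Stub: fraction of images whose physical features the model identifies."""
    hits = sum(1 for img in images if model(img))
    return hits / len(images)

def generate_synthetic(images, n):
    """Stub for the renderer + GAN stage: returns n synthetic samples."""
    return [("synthetic", i) for i in range(n)]

def retrain_if_needed(model, real_images, threshold=0.9, n_synthetic=4):
    """If accuracy on real images falls below the threshold, augment the
    training set with synthetic images and return it for retraining."""
    acc = evaluate_accuracy(model, real_images)
    if acc >= threshold:
        return real_images          # accurate enough: no augmentation
    return real_images + generate_synthetic(real_images, n_synthetic)

# Toy model that only recognizes even-numbered "images": accuracy 0.5.
model = lambda img: img % 2 == 0
data = [0, 1, 2, 3]
augmented = retrain_if_needed(model, data)
```

The point of the gate is that synthetic data is only produced when the real-world data alone is insufficient, which keeps the rendering/GAN stage off the critical path when the model already meets its accuracy target.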
-
Patent number: 11263750
Abstract: Introduced here are computer programs and associated computer-implemented techniques for training and then applying computer-implemented models designed for segmentation of an object in the frames of a video. By training and then applying the segmentation model in a cyclical manner, the errors encountered when performing segmentation can be eliminated rather than propagated. In particular, the approach to segmentation described herein allows the relationship between a reference mask and each target frame for which a mask is to be produced to be explicitly bridged or established. Such an approach ensures that masks are accurate, which in turn means that the segmentation model is less prone to distractions.
Type: Grant
Filed: October 30, 2020
Date of Patent: March 1, 2022
Assignee: Adobe Inc.
Inventor: Ning Xu
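Mask accuracy of the kind this abstract emphasizes is conventionally measured with intersection-over-union (IoU). A minimal NumPy version, offered as general background rather than anything disclosed in the patent:

```python
import numpy as np

def mask_iou(pred, ref):
    """Intersection-over-union between two boolean segmentation masks."""
    pred, ref = pred.astype(bool), ref.astype(bool)
    inter = np.logical_and(pred, ref).sum()
    union = np.logical_or(pred, ref).sum()
    return inter / union if union else 1.0  # two empty masks match perfectly

a = np.array([[1, 1], [0, 0]])
b = np.array([[1, 0], [1, 0]])
# intersection = 1 pixel, union = 3 pixels -> IoU = 1/3
```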
-
Method and system of performing medical treatment outcome assessment or medical condition diagnostic
Patent number: 11257210
Abstract: There is described a computer-implemented method of performing medical treatment outcome assessment or medical condition diagnostic. The method generally has receiving a three-dimensional (3D) anatomical image representing a region of a body of a patient; generating a 3D invariant image by processing the 3D anatomical image based on an isophote structure invariant entity (e.g., mean curvature of isophotes, Gaussian curvature of isophotes); using an image processing unit, processing the 3D invariant image and generating an output based on said processing, the output being either: medical treatment outcome data indicative of an outcome assessment of a treatment received by said patient; or medical condition diagnostic data indicative of a diagnostic of a medical condition of said patient.
Type: Grant
Filed: June 25, 2019
Date of Patent: February 22, 2022
Assignee: THE ROYAL INSTITUTION FOR THE ADVANCEMENT OF LEARNING / MCGILL UNIVERSITY
Inventors: Peter Savadjiev, Benoit Gallix, Kaleem Siddiqi
-
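The mean curvature of isophotes named above as an example invariant has, for a 2D image I, the closed form kappa = (Ixx*Iy^2 - 2*Ix*Iy*Ixy + Iyy*Ix^2) / (Ix^2 + Iy^2)^(3/2). A NumPy sketch of that 2D case follows; note the patent operates on 3D images, so the reduction to 2D is my simplification:

```python
import numpy as np

def isophote_mean_curvature(img, spacing=1.0):
    """Curvature of the isophotes (level sets) of a 2D image:
    kappa = (Ixx*Iy**2 - 2*Ix*Iy*Ixy + Iyy*Ix**2) / (Ix**2 + Iy**2)**1.5
    """
    Ix = np.gradient(img, spacing, axis=1)
    Iy = np.gradient(img, spacing, axis=0)
    Ixx = np.gradient(Ix, spacing, axis=1)
    Iyy = np.gradient(Iy, spacing, axis=0)
    Ixy = np.gradient(Ix, spacing, axis=0)
    num = Ixx * Iy**2 - 2 * Ix * Iy * Ixy + Iyy * Ix**2
    den = (Ix**2 + Iy**2) ** 1.5
    # Avoid division by zero where the gradient vanishes.
    return np.divide(num, den, out=np.zeros_like(num), where=den > 1e-12)

# Sanity check: the isophotes of I = x^2 + y^2 are circles of radius r,
# whose curvature is 1/r.
xs = np.linspace(-5, 5, 201)          # grid spacing 0.05
X, Y = np.meshgrid(xs, xs)            # X varies along axis 1
kappa = isophote_mean_curvature(X**2 + Y**2, spacing=0.05)
# At (x=3, y=0), i.e. r=3, kappa should be close to 1/3.
```

The invariance property that motivates this choice is that kappa depends only on the shape of the level sets, not on monotone rescalings of the image intensities.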
Patent number: 11244170
Abstract: A scene segmentation method and device, and a storage medium. In the present disclosure, an image to be identified is inputted into a deep neural network. Depthwise separable convolution is performed on the image by a down-sampling module to obtain a first characteristic image smaller than the input image; atrous convolution is performed on the first characteristic image by an atrous spatial pyramid pooling module to obtain second characteristic images at different scales; depthwise separable convolution is performed on the second characteristic images by an up-sampling module to obtain a third characteristic image with the same size as the input image; and pixels in the third characteristic image are classified by a classification module to obtain a scene segmentation result for the image.
Type: Grant
Filed: May 12, 2020
Date of Patent: February 8, 2022
Assignee: Beijing Dajia Internet Information Technology Co., Ltd.
Inventor: Yuan Zhang
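Depthwise separable convolution, used here in both the down- and up-sampling modules, replaces one k x k standard convolution with a per-channel k x k depthwise pass followed by a 1 x 1 pointwise pass, cutting the weight count sharply. A quick parameter-count comparison (general background, not taken from the patent):

```python
def conv_params(k, c_in, c_out):
    """Weights in a standard k x k convolution (biases ignored)."""
    return k * k * c_in * c_out

def separable_conv_params(k, c_in, c_out):
    """Depthwise k x k filter per input channel, then 1x1 pointwise mixing."""
    return k * k * c_in + c_in * c_out

# Example: a 3x3 convolution mapping 64 channels to 128 channels.
standard = conv_params(3, 64, 128)              # 73,728 weights
separable = separable_conv_params(3, 64, 128)   # 576 + 8,192 = 8,768 weights
```

For these sizes the separable form uses roughly 8x fewer weights, which is why it is popular in segmentation networks meant to run on constrained devices.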
-
Patent number: 11246014
Abstract: Systems and methods for providing information are disclosed. An exemplary method may comprise determining a current service stage of a service, obtaining status information matching the current service stage, and providing the obtained status information on a locked screen of the terminal device.
Type: Grant
Filed: July 15, 2019
Date of Patent: February 8, 2022
Assignee: Beijing DiDi Infinity Technology and Development Co., Ltd.
Inventors: Yue Li, Tihui Zhang
-
Patent number: 11244196
Abstract: A method of semantically segmenting an input image using a neural network is provided. The method includes extracting features of the input image to generate one or more feature maps; and analyzing the one or more feature maps to generate a plurality of predictions respectively corresponding to a plurality of subpixels of the input image. Extracting features of the input image is performed using a residual network having N number of residual blocks, N being a positive integer greater than 1. Analyzing the one or more feature maps is performed through M number of feature analyzing branches to generate M sets of predictions. A respective one set of the M sets of predictions includes multiple predictions respectively corresponding to the plurality of subpixels of the input image. A respective one of the plurality of predictions is an average value of corresponding ones of the M sets of predictions.
Type: Grant
Filed: October 10, 2019
Date of Patent: February 8, 2022
Assignee: BOE Technology Group Co., Ltd.
Inventor: Tingting Wang
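The final step, taking each prediction as the average of its counterparts across the M branches, is an element-wise mean over the branch axis. A NumPy illustration with hypothetical shapes of my choosing:

```python
import numpy as np

def fuse_branch_predictions(branch_outputs):
    """Average M sets of per-subpixel predictions into one set.
    branch_outputs has shape (M, n_subpixels, n_classes)."""
    return np.mean(branch_outputs, axis=0)

# Two hypothetical branches, three subpixels, two classes.
branches = np.array([
    [[0.2, 0.8], [0.6, 0.4], [0.9, 0.1]],
    [[0.4, 0.6], [0.8, 0.2], [0.7, 0.3]],
])
fused = fuse_branch_predictions(branches)
# fused[0] is the element-wise mean [0.3, 0.7]
```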
-
Patent number: 11238580
Abstract: One or more medical images of a patient are processed by a first neural network model to determine a region-of-interest (ROI) or a cut-off plane. Information from the first neural network model is used to crop the medical images, and the cropped images serve as input to a second neural network model. The second neural network model processes the cropped medical images to determine contours of anatomical structures in the medical images of the patient. Each of the first and second neural network models is a deep neural network model. By use of cropped images in the training and inference phases of the second neural network model, contours are produced with sharp edges or flat surfaces.
Type: Grant
Filed: August 29, 2019
Date of Patent: February 1, 2022
Assignee: VARIAN MEDICAL SYSTEMS INTERNATIONAL AG
Inventors: Hannu Laaksonen, Janne Nord, Jan Schreier
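The hand-off between the two models, where the first network proposes an ROI and only the cropped region reaches the second, can be sketched as below. The bounding-box format and both stub models are my assumptions, not Varian's architecture:

```python
import numpy as np

def crop_to_roi(image, roi):
    """Crop a 2D image to an ROI given as (row0, row1, col0, col1)."""
    r0, r1, c0, c1 = roi
    return image[r0:r1, c0:c1]

def two_stage_contour(image, roi_model, contour_model):
    """Stage 1 predicts the ROI; stage 2 sees only the cropped region,
    so its contours are not influenced by surrounding anatomy."""
    roi = roi_model(image)
    return contour_model(crop_to_roi(image, roi))

image = np.arange(36).reshape(6, 6)
# Stub stage-1 model: always proposes the central 2x2 region.
roi_model = lambda img: (2, 4, 2, 4)
# Stub stage-2 model: "contour" = pixels above the crop's mean.
contour_model = lambda crop: crop > crop.mean()
mask = two_stage_contour(image, roi_model, contour_model)
```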
-
Patent number: 11240638
Abstract: Aspects described herein include a method and related network device and computer program product for use in an environment comprising a plurality of network devices capable of providing broadcast services to one or more client devices. The method comprises receiving, from a client device, a neighbor report that indicates whether one or more network devices of the plurality of network devices are advertising broadcast services. The method further comprises generating, using the neighbor report, a broadcast optimization map that indicates a set of one or more of the plurality of network devices that will provide a broadest coverage of broadcast services within the environment. The set corresponds to a minimum count of network devices that supports all current broadcast streams by the one or more client devices.
Type: Grant
Filed: September 11, 2020
Date of Patent: February 1, 2022
Assignee: Cisco Technology, Inc.
Inventors: Vinay Saini, Jerome Henry, Robert E. Barton
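Choosing a minimum count of devices that together carry all current broadcast streams is an instance of the set-cover problem. A greedy sketch follows; the patent does not disclose its selection algorithm, so this is purely illustrative and all names are invented:

```python
def greedy_cover(streams_by_device, required_streams):
    """Greedy set cover: repeatedly pick the device covering the most
    still-uncovered streams. Approximates the minimum device count."""
    uncovered = set(required_streams)
    chosen = []
    while uncovered:
        best = max(streams_by_device,
                   key=lambda d: len(uncovered & streams_by_device[d]))
        if not uncovered & streams_by_device[best]:
            break                     # some stream cannot be covered at all
        chosen.append(best)
        uncovered -= streams_by_device[best]
    return chosen

# Hypothetical neighbor-report data: device -> streams it can serve.
coverage = {
    "ap1": {"s1", "s2"},
    "ap2": {"s2", "s3"},
    "ap3": {"s3"},
}
picked = greedy_cover(coverage, {"s1", "s2", "s3"})
```

Greedy selection is a natural fit here because the exact minimum cover is NP-hard, while the greedy approximation is fast enough to recompute whenever a fresh neighbor report arrives.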
-
Patent number: 11232566
Abstract: A method and system are provided for analyzing an anatomical structure of interest in 3D image data. In an embodiment, the method includes segmenting a first contour of the structure of interest in the 3D image data, the first contour defining a first segmented contour volume within the 3D image data; generating a first 2D pattern based on at least a portion of the surface of the first contour or based on at least a portion of the first segmented contour volume; performing a texture analysis on the first 2D pattern; and outputting texture analysis information.
Type: Grant
Filed: November 28, 2018
Date of Patent: January 25, 2022
Assignee: Siemens Healthcare GmbH
Inventors: Martin Sedlmair, Bernhard Schmidt
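Texture analysis of a 2D pattern is commonly based on a gray-level co-occurrence matrix (GLCM) and statistics such as contrast. A minimal version is given below as general background; the patent does not specify which texture features it uses:

```python
import numpy as np

def glcm(img, levels, offset=(0, 1)):
    """Gray-level co-occurrence matrix: probability of value pair (a, b)
    occurring at pixel positions separated by `offset`."""
    dr, dc = offset
    h, w = img.shape
    m = np.zeros((levels, levels))
    for r in range(max(0, -dr), min(h, h - dr)):
        for c in range(max(0, -dc), min(w, w - dc)):
            m[img[r, c], img[r + dr, c + dc]] += 1
    return m / m.sum()

def contrast(p):
    """GLCM contrast: sum over (a, b) of (a - b)^2 * p(a, b)."""
    a, b = np.indices(p.shape)
    return ((a - b) ** 2 * p).sum()

flat = np.zeros((4, 4), dtype=int)       # uniform patch: zero contrast
stripes = np.tile([0, 1], (4, 2))        # alternating columns: high contrast
```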
-
Patent number: 11216700
Abstract: A method, apparatus, and program product perform microstructure analysis of a digital image of rock using a trained convolutional neural network model to generate a plurality of rock features. The rock features can represent a pore space in the microstructure of the rock including pores and throats. In many implementations, a statistical process can be applied to the rock features to generate characteristics of the pore space which can be used in classifying the rock.
Type: Grant
Filed: May 6, 2019
Date of Patent: January 4, 2022
Assignee: Schlumberger Technology Corporation
Inventors: Alexander Starostin, Alexander Nadeev
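One of the simplest statistics derivable from a detected pore space is porosity, the fraction of the image classified as pore. A NumPy sketch with an invented segmentation mask, not Schlumberger's classifier:

```python
import numpy as np

def porosity(pore_mask):
    """Fraction of the image occupied by pore space (boolean mask)."""
    return pore_mask.mean()

# Hypothetical 4x4 segmentation result: True = pore, False = grain.
pore_mask = np.array([
    [1, 0, 0, 0],
    [1, 1, 0, 0],
    [0, 0, 0, 1],
    [0, 0, 0, 0],
], dtype=bool)
phi = porosity(pore_mask)   # 4 pore pixels / 16 pixels = 0.25
```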
-
Patent number: 11216644
Abstract: The present disclosure relates to a method, a device and a medium for making up a face. The method for making up the face of the present disclosure includes: obtaining a first face image; determining facial key-points by detecting the first face image; generating a second face image by applying makeup to a face in the first face image based on the facial key-points; determining a first face region by segmenting the first face image, wherein the first face region is a face region that is not shielded in the first face image; and generating a final face makeup image with makeup based on the first face region and the second face image.
Type: Grant
Filed: September 24, 2020
Date of Patent: January 4, 2022
Assignee: Beijing Dajia Internet Information Technology Co., Ltd.
Inventors: Songtao Zhao, Wen Zheng, Congli Song, Yilin Guo, Huijuan Huang
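The final blend, keeping made-up pixels only where the face is not shielded, amounts to a mask-select between the two images. In NumPy terms, with array names of my own:

```python
import numpy as np

def blend_makeup(original, makeup, unshielded_mask):
    """Use makeup pixels inside the unshielded face region and original
    pixels elsewhere (e.g. where a hand or hair covers the face)."""
    return np.where(unshielded_mask[..., None], makeup, original)

original = np.zeros((2, 2, 3), dtype=np.uint8)        # first face image (toy)
makeup = np.full((2, 2, 3), 200, dtype=np.uint8)      # second (made-up) image
mask = np.array([[True, False], [False, True]])       # unshielded region
out = blend_makeup(original, makeup, mask)
```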
-
Patent number: 11210558
Abstract: An image forming apparatus includes a communication interface through which the image forming apparatus communicates with a server and circuitry. The circuitry is configured to: collect learning data; and determine whether to generate a learning model by the server based on the collected learning data or to generate a learning model by the circuitry based on the collected learning data.
Type: Grant
Filed: February 11, 2019
Date of Patent: December 28, 2021
Assignee: RICOH COMPANY, LTD.
Inventor: Hajime Kubota
-
Patent number: 11205271
Abstract: The present disclosure provides a method and an apparatus for semantic segmentation of an image, capable of solving the problem in the related art associated with low speed and inefficiency in semantic segmentation of images. The method includes: receiving the image; performing semantic segmentation on the image to obtain an initial semantic segmentation result; and inputting image information containing the initial semantic segmentation result to a pre-trained convolutional neural network for semantic segmentation post-processing, so as to obtain a final semantic segmentation result. With the solutions of the present disclosure, the initial semantic segmentation result can be post-processed using the convolutional neural network, such that the speed and efficiency of the semantic segmentation of the image can be improved.
Type: Grant
Filed: September 20, 2019
Date of Patent: December 21, 2021
Assignee: BEIJING TUSEN ZHITU TECHNOLOGY CO., LTD.
Inventors: Hengchen Dai, Naiyan Wang
-
Patent number: 11200404
Abstract: This application relates to feature point positioning technologies. The technologies involve positioning a target area in a current image; determining an image feature difference between a target area in a reference image and the target area in the current image, the reference image being a frame of image that is processed before the current image and that includes the target area; determining a target feature point location of the target area in the reference image; determining a target feature point location difference between the target area in the reference image and the target area in the current image according to a feature point location difference determining model and the image feature difference; and positioning a target feature point in the target area in the current image according to the target feature point location of the target area in the reference image and the target feature point location difference.
Type: Grant
Filed: November 4, 2020
Date of Patent: December 14, 2021
Assignee: Tencent Technology (Shenzhen) Company Limited
Inventors: Yandan Zhao, Yichao Yan, Weijian Cao, Yun Cao, Yanhao Ge, Chengjie Wang, Jilin Li
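The final positioning step reduces to adding the model-predicted location difference to the reference-frame feature point locations. As plain arithmetic, with hypothetical names and values:

```python
import numpy as np

def position_feature_points(ref_points, predicted_diff):
    """Feature points in the current frame = reference-frame locations
    plus the model-predicted location difference."""
    return ref_points + predicted_diff

ref_points = np.array([[10.0, 20.0], [30.0, 40.0]])  # (x, y) in reference frame
diff = np.array([[1.5, -0.5], [0.0, 2.0]])           # predicted offsets
current = position_feature_points(ref_points, diff)
```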
-
Patent number: 11195083
Abstract: An object detection system includes an image capture device, a memory, and a processor. The image capture device captures an image. The memory stores an instruction corresponding to an inference engine based on a multi-scale convolutional neural network architecture including a first scale, a second scale, and an object detection scale. The processor executes the instruction to: reduce network widths of convolution layers of the second scale; run the inference engine according to the adjusted convolutional neural network architecture to receive the image as an initial input; input a first output, generated by the first scale from the initial input, into the second scale and the object detection scale; input a second output, generated by the second scale from the first output, into the object detection scale; and generate a final output from the first and second outputs at the object detection scale, to perform object detection on the image.
Type: Grant
Filed: June 11, 2020
Date of Patent: December 7, 2021
Assignee: PEGATRON CORPORATION
Inventor: Yu-Hung Tseng
-
Patent number: 11176672
Abstract: A full-size training image is reduced by an image reduction unit (11) and input to an FCN (Fully Convolutional Network) computation unit (13), and the FCN computation unit (13) performs calculation under a set filter coefficient and outputs a reduced label image. The reduced label image is enlarged to full size by an image enlargement unit (14), an error calculation unit (15) calculates the error between the enlarged label image and the full-size ground truth based on a loss function, and a parameter update unit (16) updates the filter coefficient depending on the error. By repeating learning under the control of a learning control unit (17), it is possible to generate a learning model that performs optimal segmentation while accounting for the error introduced at the time of image enlargement. Further, because the image enlargement processing is included in the learning model, a full-size label image can be output, and the accuracy of the model can also be evaluated with high accuracy.
Type: Grant
Filed: June 28, 2018
Date of Patent: November 16, 2021
Assignee: Shimadzu Corporation
Inventors: Wataru Takahashi, Shota Oshikawa
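The enlarge-then-compare step, scaling the reduced label image back to full size before computing the loss against the full-size ground truth, can be illustrated with nearest-neighbor upsampling. The patent does not specify the interpolation method, so nearest-neighbor and the misclassification-rate "loss" below are my assumptions:

```python
import numpy as np

def enlarge_nearest(label, factor):
    """Nearest-neighbor upsample of a 2D label image by an integer factor."""
    return np.repeat(np.repeat(label, factor, axis=0), factor, axis=1)

def pixel_error(pred, truth):
    """Fraction of full-size pixels where the enlarged label disagrees with
    the ground truth (a stand-in for the patent's loss function)."""
    return np.mean(pred != truth)

reduced = np.array([[0, 1],
                    [1, 0]])
full = enlarge_nearest(reduced, 2)    # 4x4 enlarged label image
truth = full.copy()
truth[0, 0] = 1                       # one disagreeing ground-truth pixel
err = pixel_error(full, truth)        # 1 / 16
```

Because the error is measured after enlargement, the filter updates absorb the blocky artifacts the upsampling step introduces, which is the core idea of the abstract.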
-
Patent number: 11164021
Abstract: Methods, systems, and media for discriminating and generating translated images are provided. In some embodiments, the method comprises: identifying a set of training images, wherein each image is associated with at least one domain from a plurality of domains; training a generator network to generate: i) a first fake image that is associated with a first domain; and ii) a second fake image that is associated with a second domain; training a discriminator network, using as inputs to the discriminator network: i) an image from the set of training images; ii) the first fake image; and iii) the second fake image; and using the generator network to generate, for an image not included in the set of training images, at least one of: i) a third fake image that is associated with the first domain; and ii) a fourth fake image that is associated with the second domain.
Type: Grant
Filed: May 15, 2020
Date of Patent: November 2, 2021
Assignee: Arizona Board of Regents on behalf of Arizona State University
Inventors: Md Mahfuzur Rahman Siddiquee, Zongwei Zhou, Ruibin Feng, Nima Tajbakhsh, Jianming Liang
-
Patent number: 11158044
Abstract: The present disclosure provides a battery detection method and a battery detection device. The method includes: obtaining a picture of each battery on a battery production line, and obtaining a corresponding production node; inputting the picture into a preset defect detection model and obtaining a detection result output by the defect detection model; and, when the detection result denotes that there is a defect in the picture, sending a control instruction to a control device of the production node corresponding to the picture, to cause the control device to shunt the battery corresponding to the defective picture based on the control instruction. The detection result includes whether there is a defect, the defect type, and the defect position.
Type: Grant
Filed: June 27, 2019
Date of Patent: October 26, 2021
Assignee: BEIJING BAIDU NETCOM SCIENCE AND TECHNOLOGY CO., LTD.
Inventors: Yawei Wen, Jiabing Leng, Minghao Liu, Huihui Xiao, Jiangliang Guo, Xu Li
-
Patent number: 11158055
Abstract: The present disclosure relates to utilizing a neural network having a two-stream encoder architecture to accurately generate composite digital images that realistically portray a foreground object from one digital image against a scene from another digital image. For example, the disclosed systems can utilize a foreground encoder of the neural network to identify features from a foreground image and further utilize a background encoder to identify features from a background image. The disclosed systems can then utilize a decoder to fuse the features together and generate a composite digital image. The disclosed systems can train the neural network utilizing an easy-to-hard data augmentation scheme implemented via self-teaching. The disclosed systems can further incorporate the neural network within an end-to-end framework for automation of the image composition process.
Type: Grant
Filed: July 26, 2019
Date of Patent: October 26, 2021
Assignee: ADOBE INC.
Inventors: Zhe Lin, Jianming Zhang, He Zhang, Federico Perazzi
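A common way to fuse two encoder streams before a decoder is channel-wise concatenation of their feature maps. The sketch below shows only that fusion step; the shapes and the choice of concatenation are my assumptions, since the abstract does not say how the decoder combines the streams:

```python
import numpy as np

def fuse_streams(fg_features, bg_features):
    """Concatenate foreground- and background-encoder feature maps along
    the channel axis before handing them to the decoder."""
    return np.concatenate([fg_features, bg_features], axis=-1)

# Hypothetical 4x4 feature maps with 8 channels from each encoder stream.
fg = np.ones((4, 4, 8))
bg = np.zeros((4, 4, 8))
fused = fuse_streams(fg, bg)   # shape (4, 4, 16)
```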
-
Patent number: 11157756
Abstract: An artificial intelligence perception system for detecting one or more objects includes one or more processors, at least one sensor, and a memory device. The memory device includes an image capture module, an object identifying module, and a logical scaffold module. The image capture module and the object identifying module cause the one or more processors to obtain sensor information of a field of view from a sensor, identify an object within the sensor information, and determine at least one property of the object. The logical scaffold module causes the one or more processors to determine, by a logical scaffold, when the at least one property of the object as determined by the object identifying module is one of a true condition or a false condition.
Type: Grant
Filed: January 17, 2020
Date of Patent: October 26, 2021
Assignee: Toyota Research Institute, Inc.
Inventors: Nikos Arechiga Gonzalez, Soonho Kong, Jonathan DeCastro, Sagar Behere, Dennis Park
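The logical-scaffold check, deciding whether a detected property holds as a true or a false condition, amounts to evaluating predicates over the perception output. A toy sketch with an invented rule (the patent does not disclose its scaffold logic):

```python
def check_scaffold(obj, rules):
    """Evaluate each scaffold rule (a predicate) against the detected
    object's properties; returns {rule_name: True/False}."""
    return {name: pred(obj) for name, pred in rules.items()}

# Hypothetical detection and rule: a pedestrian should not be reported
# as moving faster than 15 m/s, so such a detection fails the check.
detection = {"cls": "pedestrian", "speed": 20.0}
rules = {
    "plausible_speed":
        lambda o: not (o["cls"] == "pedestrian" and o["speed"] > 15.0),
}
result = check_scaffold(detection, rules)   # {'plausible_speed': False}
```

Flagging such a detection as a false condition lets downstream logic discount an implausible perception result instead of acting on it.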