Patents Examined by Nizar N Sivji
-
Patent number: 11282297
Abstract: A computer-implemented method and system for processing a video signal.
Type: Grant
Filed: September 2, 2020
Date of Patent: March 22, 2022
Assignee: Blue Planet Training, Inc.
Inventors: Haipeng Zeng, Xingbo Wang, Aoyu Wu, Yong Wang, Quan Li, Huamin Qu
-
Patent number: 11281930
Abstract: Disclosed is an object detection method and a system thereof. The method comprises the steps of: extracting line segments from an image in which a detection target is displayed, by an object detection system; generating merged line segments on the basis of the directionality of each of the extracted line segments, by the object detection system; specifying candidate outer lines corresponding to outer lines of the object on the basis of a line segment set including the generated merged line segments, by the object detection system; and specifying the candidate outer lines as the outer lines of the object on the basis of whether or not the specified candidate outer lines correspond to appearance attributes of the object, by the object detection system.
Type: Grant
Filed: June 1, 2020
Date of Patent: March 22, 2022
Assignee: Fingram Co., Ltd.
Inventors: Young Cheul Wee, Young Hoon Ahn, Yang Seong Jin
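The merging step this abstract describes (combining line segments on the basis of their directionality) can be sketched roughly as below. This is only an illustrative greedy approach; the `angle_tol` and `gap_tol` thresholds and the merge strategy are assumptions, not the patented procedure.

```python
import math

def merge_collinear(segments, angle_tol=0.1, gap_tol=5.0):
    """Greedily merge segments whose directions agree within angle_tol
    (radians) and whose nearest endpoints lie within gap_tol pixels.
    Each segment is a pair of points: ((x1, y1), (x2, y2))."""
    def angle(seg):
        (x1, y1), (x2, y2) = seg
        # undirected orientation in [0, pi)
        return math.atan2(y2 - y1, x2 - x1) % math.pi

    merged = []
    for seg in sorted(segments, key=angle):
        for i, m in enumerate(merged):
            # naive angular difference; ignores wrap-around near pi for brevity
            if abs(angle(seg) - angle(m)) < angle_tol and \
               min(math.dist(p, q) for p in seg for q in m) < gap_tol:
                # replace the pair with the widest-spanning endpoint pair
                pts = list(seg) + list(m)
                merged[i] = max(((p, q) for p in pts for q in pts),
                                key=lambda pq: math.dist(*pq))
                break
        else:
            merged.append(seg)
    return merged
```

For example, two nearly touching horizontal segments would merge into one long segment, while a perpendicular segment would be kept separate.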
-
Patent number: 11282204
Abstract: The disclosure provides a multi-target 3D ultrasound image segmentation method based on simulated and measured data. The method includes: presetting conventional acoustic parameters; collecting raw 3D data; employing an initial segmentation algorithm to segment the raw 3D data; substituting with the conventional acoustic parameters according to probability in order to form a transitional image model; performing a simulation operation; performing transformation to obtain simulated data; performing a comparison operation; and adjusting the corresponding magnitude of the probability in each probability variable, and returning to the step of substituting with the conventional acoustic parameters.
Type: Grant
Filed: August 28, 2017
Date of Patent: March 22, 2022
Assignee: SHANTOU INSTITUTE OF ULTRASONIC INSTRUMENTS CO., LTD.
Inventors: Liexiang Fan, Delai Li, Jinyao Yang, Jingfeng Guo, Zhonghong Wu
-
Patent number: 11276196
Abstract: A video processing method includes detecting, as a reference pose, a pose of an individual at a reference time point in an input video sequence; at a second, different, time point in the input video sequence, detecting a second pose of the individual; generating, from one or more source images of the individual, a transitional video sequence representing a transition of the individual from the second pose to the reference pose; and associating the transitional video sequence with the input video sequence to generate an output video sequence including at least the transitional video sequence to implement a non-linear replay branch from the second time point to the reference time point.
Type: Grant
Filed: April 15, 2020
Date of Patent: March 15, 2022
Assignee: Sony Interactive Entertainment Inc.
Inventors: Ian Henry Bickerstaff, Andrew Damian Hosfield, William John Dudley, Nicola Orrù
-
Patent number: 11265408
Abstract: In a mobile communication terminal in which a flexible display panel is provided on two body portions that are foldably connected to each other, a plurality of rotation supports are provided in a central joint that connects the two body portions together with two sliding panels that slide in the respective body portions. The rotation supports are directly connected to the two body portions to support their rotation so that the body portions do not rotate beyond 180 degrees when fully unfolded, whereby damage to the flexible display panel provided on the surfaces of the two body portions is prevented.
Type: Grant
Filed: October 15, 2020
Date of Patent: March 1, 2022
Assignee: AUFLEX CO., LTD.
Inventors: Hyun Min Park, Seoung Jun Lee
-
Patent number: 11263759
Abstract: In an image processing apparatus, a detection unit detects a moving object from a captured image captured by an image capturing unit. An extraction unit extracts an edge from the captured image. A determination unit determines, based on a position of the moving object included in the captured image, whether to superimpose the edge on a region corresponding to the moving object in a background image. A generation unit generates an output image by superimposing a mask image, for obscuring the moving object, on the region in the background image and superimposing the edge, determined by the determination unit to be superimposed, on the region in the background image.
Type: Grant
Filed: January 27, 2020
Date of Patent: March 1, 2022
Assignee: Canon Kabushiki Kaisha
Inventors: Tsukasa Takagi, Minoru Kusakabe
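The generation step this abstract describes (mask the moving object, then optionally re-draw its edges on top) can be sketched per-pixel as below. This is a minimal grayscale illustration; the function name, the set-of-pixels region representation, and the `mask_value`/`edge_value` constants are assumptions, not the patented implementation.

```python
def compose_privacy_frame(background, edges, region, superimpose_edges,
                          mask_value=128, edge_value=255):
    """Paint an obscuring mask over `region` (an iterable of (row, col)
    pixels) of a grayscale background image, then, if requested,
    re-draw extracted edge pixels so the silhouette stays readable."""
    out = [row[:] for row in background]  # copy, leave input untouched
    for r, c in region:
        out[r][c] = mask_value
        if superimpose_edges and edges[r][c]:
            out[r][c] = edge_value
    return out
```

The point of the design is that the mask obscures identity while the superimposed edges preserve enough shape information (posture, gesture) for a human observer.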
-
Patent number: 11263758
Abstract: The present disclosure relates to a controller (2) for identifying a periphery of a towed vehicle (T) connected to a towing vehicle (V). The controller (2) is configured to receive towing vehicle image data (DV1) corresponding to a towing vehicle image (IMG1) captured by a towing vehicle camera (C1). The towing vehicle image data (DV1) is processed to generate a plurality of movement vectors. The periphery (P1) of the towed vehicle (T) is identified in dependence on the plurality of movement vectors. The present disclosure also relates to a method of identifying the periphery (P1) of a towed vehicle (T).
Type: Grant
Filed: July 11, 2018
Date of Patent: March 1, 2022
Assignee: Jaguar Land Rover Limited
Inventor: Aaron Freeman-Powell
-
Patent number: 11257221
Abstract: Disclosed are a fallen object detection apparatus and method for controlling a component to continuously provide a service, provided by a communication device, when the communication device has fallen inside a vehicle. A fallen object detection apparatus inside a vehicle includes: a monitor configured to monitor an inside of the vehicle in which a passenger and an object are present; an identifier configured to identify that the object is a communication device capable of being connected to the component inside the vehicle, in response to a determination that the object has fallen on the basis of a monitoring result; and a processor configured to connect the object with the component and control the component to continuously provide a service, provided by the object, when the object is identified to be the communication device.
Type: Grant
Filed: September 12, 2019
Date of Patent: February 22, 2022
Assignee: LG ELECTRONICS INC.
Inventors: Jun Ho Moon, Hyun Kyu Kim, Ki Bong Song
-
Patent number: 11250243
Abstract: A computer-implemented method executed by at least one processor for person identification is presented. The method includes employing one or more cameras to receive a video stream including a plurality of frames to extract features therefrom; detecting, via an object detection model, objects within the plurality of frames; detecting, via a key point detection model, persons within the plurality of frames; detecting, via a color detection model, color of clothing worn by the persons; detecting, via a gender and age detection model, an age and a gender of the persons; establishing a spatial connection between the objects and the persons; storing the features in a feature database, each feature associated with a confidence value; and normalizing, via a ranking component, the confidence values of each of the features.
Type: Grant
Filed: March 4, 2020
Date of Patent: February 15, 2022
Inventors: Yi Yang, Giuseppe Coviello, Biplob Debnath, Srimat Chakradhar
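The final step above (normalizing confidence values so that scores from different detectors become comparable) can be sketched as below. The min-max rescaling is an illustrative choice only; the abstract does not specify the ranking component's actual normalization.

```python
def normalize_confidences(features):
    """Rescale raw detector confidences to [0, 1] per feature type, so
    scores from heterogeneous models (object, clothing color, age/gender)
    can be ranked against each other.
    `features` maps feature type -> list of (value, confidence) pairs."""
    normalized = {}
    for ftype, items in features.items():
        confs = [c for _, c in items]
        lo, hi = min(confs), max(confs)
        span = (hi - lo) or 1.0  # avoid division by zero for a lone score
        normalized[ftype] = [(v, (c - lo) / span) for v, c in items]
    return normalized
```

With this in place, a "red shirt" detection at raw confidence 0.8 and a "backpack" detection at raw confidence 8.0 from a differently scaled model can be compared on a common [0, 1] scale.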
-
Patent number: 11250246
Abstract: An expression recognition device includes processing circuitry to acquire an image; extract a face area of a person from the acquired image and obtain a face image to which information of the face area is added; extract one or more face feature points on the basis of the face image; determine a face condition representing a state of a face in the face image depending on the reliability of each of the extracted face feature points; determine a reference point for extraction of a feature amount used for expression recognition from among the extracted face feature points depending on the determined face condition; extract the feature amount on the basis of the determined reference point; recognize a facial expression of the person in the face image using the extracted feature amount; and output information related to a recognition result of the facial expression of the person in the face image.
Type: Grant
Filed: November 27, 2017
Date of Patent: February 15, 2022
Assignee: MITSUBISHI ELECTRIC CORPORATION
Inventors: Shunya Osawa, Takahiro Otsuka
-
Patent number: 11251886
Abstract: A wave shaping device which comprises a tunable impedance surface and a controller connected to the surface in order to control its impedance. The shaping device further comprises a transmission module for receiving a pilot signal used to control the impedance of the surface.
Type: Grant
Filed: April 2, 2014
Date of Patent: February 15, 2022
Assignees: CENTRE NATIONAL DE LA RECHERCHE SCIENTIFIQUE—CNRS, UNIVERSITE PARIS DIDEROT—PARIS 7
Inventors: Mathias Fink, Geoffroy Lerosey, Matthieu Dupre, Nadége Kaina
-
Patent number: 11250592
Abstract: An information processing apparatus acquires, in regard to a subject in a three-dimensional space, unit element information including information regarding a position, in the space, of each of plural unit elements configuring the subject; draws a space image indicative of a state of a virtual space in which the plural unit elements are arranged using the unit element information; and specifies a posture of the subject in the three-dimensional space on the basis of a result obtained by executing a posture estimation process, targeting a two-dimensional image, on the drawn space image.
Type: Grant
Filed: January 30, 2018
Date of Patent: February 15, 2022
Assignee: SONY INTERACTIVE ENTERTAINMENT INC.
Inventors: Tomokazu Kake, Satoshi Nakajima
-
Patent number: 11238304
Abstract: The present invention relates to a method for generating a biometric signature of a subject, comprising: obtaining a plurality of sequential video frame images of a moving subject from a video segment; obtaining a portion of each frame comprising a surrounding of the moving subject; carrying out a transformation function to the frequency domain on one or more of said portions of the frames comprising a surrounding of the subject; and optionally saving the spectral characteristics of said transformation function in a repository. The present invention also relates to a system for carrying out said method.
Type: Grant
Filed: February 25, 2018
Date of Patent: February 1, 2022
Assignee: QUANTUM RGB LTD.
Inventor: Henia Veksler
-
Patent number: 11224954
Abstract: A non-contact tool setting apparatus, suitable for use with machine tools and the like, is described in which a transmitter emits light that is received by a receiver. An analysis unit is provided for analysing the light received by the receiver and generating a trigger signal therefrom. The receiver includes an imaging sensor, such as a CMOS or CCD sensor, having a plurality of pixels. The analysis unit generates the trigger signal by analysing the light intensity measured by a first subset of the plurality of pixels. This analysis may involve, for example, determining a resultant received light intensity or performing edge detection. The non-contact tool setting apparatus can thus emulate the operation of a laser-based non-contact tool setting apparatus whilst also permitting imaging of cutting tools.
Type: Grant
Filed: September 13, 2018
Date of Patent: January 18, 2022
Assignee: RENISHAW PLC
Inventors: Edward Benjamin Egglestone, Andrew Paul Gribble
-
Patent number: 11227170
Abstract: A collation device includes a processor and a storage unit that stores, in advance, a predetermined determination condition under which a photographic image, which is an image obtained by imaging a photograph of the subject, is capable of being eliminated. The processor is configured to detect the brightness distribution of a face image obtained by imaging an authenticated person with an imaging unit, determine whether or not the detected brightness distribution satisfies the determination condition, and perform face authentication using a face image that satisfies the determination condition.
Type: Grant
Filed: May 30, 2018
Date of Patent: January 18, 2022
Assignee: PANASONIC INTELLECTUAL PROPERTY MANAGEMENT CO., LTD.
Inventors: Megumi Yamaoka, Takayuki Matsukawa
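The gating idea in this abstract (reject images whose brightness distribution looks like a re-photographed print before attempting face authentication) could look something like the sketch below. The standard-deviation heuristic and both thresholds are invented purely for illustration; the patent's actual determination condition is not given in the abstract.

```python
import statistics

def passes_brightness_check(face_pixels, min_stdev=15.0, max_stdev=80.0):
    """Illustrative liveness gate: a face photographed from a printed
    photo under even light tends to show a flatter brightness spread
    than a live face, so reject distributions whose spread falls
    outside an expected range. `face_pixels` is a flat list of
    grayscale values from the detected face region."""
    spread = statistics.pstdev(face_pixels)
    return min_stdev <= spread <= max_stdev
```

Only images passing this kind of check would then be forwarded to the actual face-authentication step.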
-
Patent number: 11227396
Abstract: A method adjusts the convergence speed of an imaging parameter based on detecting a motion of a selected face in a sequence of image frames. When the detected motion meets predefined motion criteria, a motion vector corresponding to the characterized motion of the face is computed. A value of a convergence adjustment factor for adjusting the convergence speed of an imaging parameter of the camera is determined based on the computed motion vector. The convergence speed of the imaging parameter of the camera is then adjusted based on the determined value of the convergence adjustment factor.
Type: Grant
Filed: July 16, 2020
Date of Patent: January 18, 2022
Assignee: Meta Platforms, Inc.
Inventors: Mooyoung Shin, Hao Sun
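The relationship the abstract describes (motion vector in, convergence adjustment factor out, then a faster or slower convergence step) can be sketched as below. The linear mapping, the constants, and the exponential-smoothing update are illustrative assumptions rather than the patented formula.

```python
import math

def adjusted_alpha(motion_vector, base_alpha=0.1, gain=0.05, max_alpha=0.9):
    """Map the magnitude of a face-motion vector (dx, dy) to a
    convergence factor: the larger the motion, the faster an imaging
    parameter (e.g. exposure) is allowed to converge."""
    speed = math.hypot(*motion_vector)
    return min(max_alpha, base_alpha + gain * speed)

def converge(current, target, alpha):
    """One smoothing step of the imaging parameter toward its
    metered target, at the adjusted convergence speed."""
    return current + alpha * (target - current)
```

A fast-moving face thus gets a larger `alpha`, letting exposure catch up quickly, while a nearly static face keeps a small `alpha` that avoids visible pumping.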
-
Patent number: 11222218
Abstract: A vehicle exterior environment detection apparatus includes an image width calculator, a predicted distance calculator, and a relative distance calculator. The image width calculator calculates a first image width of a target vehicle on the basis of a first image. The predicted distance calculator calculates a first predicted distance to the target vehicle on the basis of the first image width. The relative distance calculator calculates a first reliability of the first image width, and, when the first reliability is higher than a predetermined threshold, calculates a first real width of the target vehicle on the basis of the first image width and the first predicted distance, updates a smoothed real width by performing smoothing processing on the basis of the first real width, and calculates a first distance to the target vehicle on the basis of the smoothed real width and the first image width.
Type: Grant
Filed: December 18, 2019
Date of Patent: January 11, 2022
Assignee: SUBARU CORPORATION
Inventor: Junichi Asako
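The geometry underlying this abstract is the pinhole-camera relation between a vehicle's real width, its width in the image, and its distance, plus cross-frame smoothing of the real width. A minimal sketch, in which the focal length, the smoothing weight, and the function names are illustrative assumptions:

```python
def estimate_distance(focal_px, image_width_px, smoothed_real_width_m):
    """Pinhole-camera relation: distance = f * W_real / w_image,
    with f in pixels, W_real in metres, w_image in pixels."""
    return focal_px * smoothed_real_width_m / image_width_px

def update_smoothed_width(smoothed, measured, weight=0.2):
    """Smooth the inferred real width across frames so that a single
    noisy image-width measurement cannot jump the distance estimate."""
    return (1 - weight) * smoothed + weight * measured
```

Because the smoothed real width changes slowly, frame-to-frame distance estimates track the stable geometry rather than per-frame detection noise.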
-
Patent number: 11216974
Abstract: The disclosure relates to a method of determining a gaze distance, the method including obtaining a plurality of images by capturing each of both eyes of a user; determining, from each of the plurality of images, a corner point located at an edge of the eye, a location of a pupil, and an eye contour; determining each of gaze directions of both of the eyes based on a difference between a reference point and the location of the pupil, wherein the reference point is determined based on the eye contour and a location of the corner point; and determining a gaze distance based on a difference between the gaze directions of both of the eyes.
Type: Grant
Filed: December 14, 2017
Date of Patent: January 4, 2022
Assignees: SAMSUNG ELECTRONICS CO., LTD., SEOUL NATIONAL UNIVERSITY R&DB FOUNDATION
Inventors: Chi-yul Yoon, Sang Yoon Han, Sang Hwa Lee, Yeongil Jang, Nam Ik Cho
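The last step in this abstract (deriving distance from the difference between the two eyes' gaze directions) is essentially vergence triangulation, which can be sketched as below. The specific geometry model and parameter names are assumptions for illustration, not the patented method.

```python
import math

def gaze_distance(ipd_m, left_inward_rad, right_inward_rad):
    """Vergence triangulation: two gaze rays leave eyes separated by
    ipd_m (interpupillary distance) and cross at
    d = IPD / (tan aL + tan aR), where each angle is that eye's
    inward rotation from straight ahead."""
    vergence = math.tan(left_inward_rad) + math.tan(right_inward_rad)
    if vergence <= 0:
        return math.inf  # parallel or diverging gaze: effectively far
    return ipd_m / vergence
```

Intuitively, the more the two gaze directions differ (the eyes turn inward), the closer the fixated point must be.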
-
Patent number: 11216917
Abstract: Techniques for enhancing an image are described. For example, a lower-resolution image from a video file may be enhanced using a trained neural network: applying the trained neural network to the lower-resolution image removes artifacts by generating, using a layer of the trained neural network, a residual value based on a proper subset of the received image, at least one corresponding image portion of a preceding lower-resolution image in the video file, and at least one corresponding image portion of a subsequent lower-resolution image in the video file; the lower-resolution image is upscaled using bilinear upsampling; and the upscaled image and the residual value are combined to generate an enhanced image.
Type: Grant
Filed: May 3, 2019
Date of Patent: January 4, 2022
Assignee: Amazon Technologies, Inc.
Inventors: Silviu Stefan Andrei, Nataliya Shapovalova, Walterio Wolfgang Mayol Cuevas
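The non-learned parts of this pipeline (bilinear upsampling of the low-resolution frame, then adding the network-predicted residual) can be sketched as below; the network that produces the residual is assumed to exist elsewhere, and the pure-Python 2D-list representation is for illustration only.

```python
def bilinear_upscale(img, factor):
    """Bilinear upsampling of a 2D grayscale image (list of lists)
    by an integer factor."""
    h, w = len(img), len(img[0])
    out = [[0.0] * (w * factor) for _ in range(h * factor)]
    for i in range(h * factor):
        for j in range(w * factor):
            # map the output pixel back to fractional source coordinates
            y = min(i / factor, h - 1)
            x = min(j / factor, w - 1)
            y0, x0 = int(y), int(x)
            y1, x1 = min(y0 + 1, h - 1), min(x0 + 1, w - 1)
            dy, dx = y - y0, x - x0
            out[i][j] = (img[y0][x0] * (1 - dy) * (1 - dx)
                         + img[y0][x1] * (1 - dy) * dx
                         + img[y1][x0] * dy * (1 - dx)
                         + img[y1][x1] * dy * dx)
    return out

def add_residual(upscaled, residual):
    """Final combine step: enhanced = upscale(LR) + predicted residual."""
    return [[u + r for u, r in zip(urow, rrow)]
            for urow, rrow in zip(upscaled, residual)]
```

Predicting only a residual on top of a cheap bilinear upscale lets the network focus its capacity on high-frequency detail rather than re-synthesizing the whole image.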
-
Patent number: 11216918
Abstract: Disclosed are a method and device for processing an image, and a storage medium. The method includes: determining first position information of key points of a first type of an object in the image based on the image and a trained model, where the first position information of each key point of the first type indicates where that key point is in the image; determining second position information of key points of a second type of the object based on the first position information and a preset algorithm, where the second position information of each key point of the second type indicates where that key point is in the image; and liquefying the object based on the first position information and the second position information by using a liquefying level corresponding to a portion of the object to be processed.
Type: Grant
Filed: September 29, 2020
Date of Patent: January 4, 2022
Assignee: BEIJING DAJIA INTERNET INFORMATION TECHNOLOGY CO., LTD.
Inventor: Xin Yan