Patents Examined by Ali Bayat
-
Patent number: 12251047
Abstract: The loose product dispenser has a container and a sensor with an optical barrier for detecting the presence of loose product in the container. The sensor has a transmitter of a light beam, configured to generate the optical barrier between an entry point and an exit point of the light beam from the container, and a receiver of the light beam. The sensor further has an electronic board on which both the receiver and the transmitter are installed, and a light guide configured and arranged to transmit to the receiver the light beam collected from the exit point.
Type: Grant
Filed: December 9, 2020
Date of Patent: March 18, 2025
Assignee: DE' LONGHI APPLIANCES S.r.l.
Inventors: Paolo Evangelisti, Alberto Acciari, Nicola Piovan
-
Patent number: 12249053
Abstract: An image processing device performs correction processing on original image data generated by an image-capturing element configured to receive light with a plurality of pixels through a color filter including segments of a red color and at least one complementary color. The device includes processing circuitry configured to perform operations including: converting the original image data into primary color-based image data represented in a primary color-based color space; acquiring a statistical value of a plurality of pieces of pixel data corresponding to the plurality of pixels from the primary color-based image data; calculating a correction parameter by using the statistical value; and correcting the original image data based on the correction parameter.
Type: Grant
Filed: June 17, 2022
Date of Patent: March 11, 2025
Assignee: Socionext Inc.
Inventors: Soichi Hagiwara, Yuji Umezu, Junzo Sakurai
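The four-step pipeline named in the abstract above (convert to a primary-color space, acquire a statistic, calculate a parameter, correct the original data) can be sketched as follows. The 3×3 conversion matrix, the per-channel mean statistic, and the gray-world-style gain parameter are all illustrative assumptions, not the patent's actual method:

```python
import numpy as np

def correct_image(raw, conversion_matrix, target_gray=0.5):
    """Hypothetical sketch of the described correction pipeline.

    raw: HxWx3 sensor data in a red + complementary-color space, range [0, 1].
    conversion_matrix: assumed 3x3 matrix mapping sensor channels to RGB.
    """
    # 1. Convert the original data into a primary color-based representation.
    rgb = raw @ conversion_matrix.T
    # 2. Acquire a statistical value over the pixel data (here: per-channel mean).
    channel_means = rgb.reshape(-1, 3).mean(axis=0)
    # 3. Calculate a correction parameter from the statistic
    #    (here: gains pulling each channel toward a neutral target).
    gains = target_gray / np.maximum(channel_means, 1e-8)
    # 4. Correct the original data based on the parameter.
    return np.clip(raw * gains, 0.0, 1.0)
```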
-
Patent number: 12249060
Abstract: A system and method for calibrating a machine vision system on the undercarriage of a rail vehicle while the rail vehicle is in the field is presented. The system enables operators to calibrate the machine vision system without having to remove it from the undercarriage of the rail vehicle. The system can capture, by a camera of an image capturing module, a first image of a target. The image capturing module and a drum can be attached to a fixture, and the target can be attached to the drum. The system can also determine a number of lateral pixels in a lateral pitch distance of the image of the target, determine a lateral object pixel size based on the number of lateral pixels, and determine a drum encoder rate based on the lateral object pixel size. The drum encoder rate can be programmed into a drum encoder.
Type: Grant
Filed: December 21, 2023
Date of Patent: March 11, 2025
Assignee: BNSF Railway Company
Inventor: Darrell R. Krueger
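The calibration arithmetic in the entry above (known lateral pitch spanning a counted number of pixels gives the object pixel size, which in turn gives an encoder rate) might look like the following. The encoder-rate formula, which assumes one scan line per pixel of drum-surface travel, is a guess for illustration; the patent does not publish it in the abstract:

```python
def lateral_pixel_size(pitch_distance_mm, num_pixels):
    """Physical size of one image pixel, from a target feature of known
    lateral pitch that spans num_pixels in the captured image."""
    return pitch_distance_mm / num_pixels

def drum_encoder_rate(pixel_size_mm, encoder_counts_per_rev, drum_circumference_mm):
    """Hypothetical encoder rate in counts per scan line, chosen so that
    one scan line corresponds to one pixel of surface travel."""
    counts_per_mm = encoder_counts_per_rev / drum_circumference_mm
    return pixel_size_mm * counts_per_mm
```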
-
Patent number: 12242709
Abstract: The colorimetric system of this embodiment includes a reception section that accepts generation of a color group, comprising a plurality of reference colors to be compared with a color measured by a colorimetric section, and input of a color group name; and a display processor that performs a process of displaying the generated color group on the display section. Furthermore, the reception section accepts a selection of a color group of a colorimetry target, and the display processor performs a process of displaying the name of the selected color group in the display section.
Type: Grant
Filed: May 27, 2022
Date of Patent: March 4, 2025
Assignee: Seiko Epson Corporation
Inventor: Masami Ishihara
-
Station monitoring apparatus, station monitoring method, and non-transitory computer readable medium
Patent number: 12243312
Abstract: A station monitoring apparatus (10) includes an image processing unit (110) and a decision unit (120). The image processing unit (110) determines a position of a door (50) of a vehicle (40) and a position of a person by analyzing an image generated by a capture apparatus (20), i.e., an image capturing a platform of a station. The decision unit (120) decides, after the vehicle (40) has begun to move, by use of the position of the door (50) and the position of the person, whether to perform predetermined processing. For example, the decision unit (120) performs the predetermined processing when deciding that the difference between the movement velocity of any of the persons and the movement velocity of the door remains within a criterion continuously for a criterion time.
Type: Grant
Filed: January 10, 2020
Date of Patent: March 4, 2025
Assignee: NEC CORPORATION
Inventor: Masumi Ishikawa
-
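The decision rule in the station-monitoring entry (trigger when the person-vs-door velocity difference stays within a criterion continuously for a criterion time) reduces to a consecutive-run check over sampled velocities. A minimal sketch, with the tolerance and the frame-based hold time as assumed parameters:

```python
def should_alert(person_v, door_v, v_tol, hold_frames):
    """Return True if |person velocity - door velocity| stays within v_tol
    for at least hold_frames consecutive samples (hypothetical discrete
    version of the 'within a criterion continuously for a criterion time'
    condition described in the abstract)."""
    run = 0
    for pv, dv in zip(person_v, door_v):
        run = run + 1 if abs(pv - dv) <= v_tol else 0
        if run >= hold_frames:
            return True
    return False
```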
Patent number: 12230048
Abstract: A method and system can include processing title and title opinion document images to generate text information. Trained models may generate data objects representative of periods of time during which certain rights to a property exist. The trained models may also generate rules for modifying the data objects and interrelating the data objects to each other. In some examples, a confidence level can be generated that reflects the likelihood of a data object including correct information. The modified and interrelated data objects may be used to generate a navigable interface which includes a current title status for a property and a navigable chain of title reflecting historical rights to the property.
Type: Grant
Filed: April 21, 2023
Date of Patent: February 18, 2025
Assignee: Thomson Reuters Enterprise Centre GmbH
Inventor: Nicholas E. Vandivere
-
Patent number: 12211236
Abstract: Disclosed are apparatuses, systems, and techniques to render images depicting light interacting with media that have volume attenuation, using optimized spectral rendering that emulates rendering of the media in tristimulus color rendering schemes.
Type: Grant
Filed: August 15, 2023
Date of Patent: January 28, 2025
Assignee: NVIDIA Corporation
Inventor: Johannes Jendersie
-
Patent number: 12205319
Abstract: Multi-object tracking in autonomous vehicles uses both camera data and LiDAR data for training, but not LiDAR data at query time; thus, no LiDAR sensor is needed on a piloted autonomous vehicle. Example systems and methods rely on camera 2D object detections alone, rather than 3D annotations. Example systems/methods utilize a single network that is given a camera image as input and can learn both object detection and dense depth in a multimodal regression setting, where the ground truth LiDAR data is used only at training time to compute a depth regression loss. The network uses the camera image alone as input at test time (i.e., when deployed for piloting an autonomous vehicle) and can predict both object detections and dense depth of the scene. LiDAR is used only for data acquisition and is not required for drawing 3D annotations or for piloting the vehicle.
Type: Grant
Filed: March 8, 2022
Date of Patent: January 21, 2025
Assignee: Ridecell, Inc.
Inventors: Arun Kumar Chockalingam Santha Kumar, Paridhi Singh, Gaurav Singh
-
Patent number: 12200254
Abstract: A three-dimensional data encoding method includes: generating combined point cloud data by combining pieces of point cloud data; and generating a bitstream by encoding the combined point cloud data. The bitstream includes (i) first information indicating a maximum number of duplicated points that are included in each of the pieces of point cloud data and are three-dimensional points having same geometry information, and (ii) pieces of second information corresponding one-to-one with point indexes and each indicating which of the pieces of point cloud data three-dimensional points having a corresponding one of the point indexes belong to, the point indexes being indexes to which values a total number of which is equal to the maximum number are assigned, and being used for identifying duplicated points belonging to same point cloud data.
Type: Grant
Filed: May 28, 2021
Date of Patent: January 14, 2025
Assignee: PANASONIC INTELLECTUAL PROPERTY CORPORATION OF AMERICA
Inventors: Toshiyasu Sugio, Noritaka Iguchi, Chung Dean Han, Chi Wang, Pongsak Lasang
-
Patent number: 12190530
Abstract: A dense optical flow calculation system and method based on an FPGA (Field Programmable Gate Array) are provided. The system comprises a software system deployed on a host and a dense optical flow calculation module deployed on the FPGA. Pixel information of two continuous frames of pictures is obtained from the host end, and optical flow is obtained through steps such as smoothing, polynomial expansion, intermediate-variable calculation, and optical flow calculation. An image pyramid and iterative optical flow calculation can be achieved by repeatedly calling a calculation core module in the FPGA; the final calculation result is returned to the host end.
Type: Grant
Filed: January 5, 2021
Date of Patent: January 7, 2025
Assignee: Zhejiang University
Inventors: Xiaohong Jiang, Zhe Pan, Jian Wu
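The "image pyramid by repeatedly calling a calculation core" idea in the entry above can be sketched as a coarse-to-fine driver loop: downsample both frames, run the flow core at the coarsest level, then upsample the flow and re-run at each finer level. This is a generic software analogue with an opaque `flow_core` callback standing in for the FPGA module; the downsampling and upsampling schemes are assumptions:

```python
import numpy as np

def pyramid_flow(img1, img2, levels, flow_core):
    """Coarse-to-fine pyramid driver. flow_core(a, b, init_flow) stands in
    for the repeatedly-called FPGA calculation core; it refines init_flow
    (HxWx2) for one pyramid level."""
    def down(a):                      # naive 2x decimation
        return a[::2, ::2]
    pyr1, pyr2 = [img1], [img2]
    for _ in range(levels - 1):
        pyr1.append(down(pyr1[-1]))
        pyr2.append(down(pyr2[-1]))
    flow = np.zeros(pyr1[-1].shape + (2,))
    for a, b in zip(reversed(pyr1), reversed(pyr2)):
        if flow.shape[:2] != a.shape:
            # Upsample coarser flow: repeat 2x2 and double displacements.
            flow = 2.0 * np.kron(flow, np.ones((2, 2, 1)))[: a.shape[0], : a.shape[1]]
        flow = flow_core(a, b, flow)  # iterative refinement at this level
    return flow
```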
-
Patent number: 12192538
Abstract: Method and devices for coding point cloud data using an angular coding mode. The angular coding mode may be signaled using an angular mode flag to signal that a volume is to be coded using the angular coding mode. The angular coding mode is applicable to planar volumes that have all of their occupied child nodes on one side of a plane bisecting the volume. A planar position flag may signal which side of the volume is occupied. Entropy coding may be used to code the planar position flag. Context determination for coding may take into account angular information for child nodes or groups of child nodes of the volume relative to a location of a beam assembly that has sampled the point cloud. Characteristics of the beam assembly may be coded into the bitstream.
Type: Grant
Filed: September 29, 2020
Date of Patent: January 7, 2025
Assignee: BlackBerry Limited
Inventors: Sébastien Lasserre, Jonathan Taquet
-
Patent number: 12189112
Abstract: Apparatus and methods are described for use with a blood sample. Using a microscope (24), three images of a microscopic imaging field of the blood sample are acquired, each of the images being acquired using respective, different imaging conditions, and the first one of the three images being acquired under violet-light brightfield imaging. Using at least one computer processor (28), an artificial color microscopic image of the microscopic imaging field is generated, by mapping the first one of the three images to a red channel of the artificial color microscopic image, mapping a second one of the three images to a second color channel of the artificial color microscopic image, and mapping a third one of the three images to a third color channel of the artificial color microscopic image. Other applications are also described.
Type: Grant
Filed: December 10, 2020
Date of Patent: January 7, 2025
Assignee: S.D. Sight Diagnostics Ltd.
Inventors: Yonatan Halperin, Dan Gluck, Noam Yorav-Raphael, Yochay Shlomo Eshel, Sarah Levy, Joseph Joel Pollak
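The channel mapping in the entry above is simply stacking three single-channel acquisitions of the same field into one color image. Per the abstract, the violet-light brightfield image goes to the red channel; which of the other two modalities feeds green versus blue is an assumption here:

```python
import numpy as np

def compose_artificial_color(img_violet_bf, img_second, img_third):
    """Build an artificial-color image from three registered single-channel
    acquisitions (HxW each): violet-light brightfield -> red channel; the
    remaining two modalities -> green and blue (assignment assumed)."""
    return np.stack([img_violet_bf, img_second, img_third], axis=-1)
```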
-
Patent number: 12184851
Abstract: An encoder according to one aspect of the present disclosure encodes a block of an image, and includes a processor and memory connected to the processor. Using the memory, the processor partitions a block into a plurality of sub blocks and encodes a sub block included in the plurality of sub blocks in an encoding process including at least a transform process or a prediction process. The block is partitioned using a multiple partition including at least three odd-numbered child nodes, and each of a width and a height of each of the plurality of sub blocks is a power of two.
Type: Grant
Filed: August 3, 2023
Date of Patent: December 31, 2024
Assignee: Panasonic Intellectual Property Corporation of America
Inventors: Chong Soon Lim, Hai Wei Sun, Sughosh Pavan Shashidhar, Han Boon Teo, Ru Ling Liao, Takahiro Nishi, Tadamasa Toma
-
Patent number: 12169765
Abstract: A machine learning scheme can be trained on a set of labeled training images of a subject in different poses, with different textures, and with different background environments. The label or marker data of the subject may be stored as metadata to a 3D model of the subject or rendered images of the subject. The machine learning scheme may be implemented as a supervised learning scheme that can automatically identify the labeled data to create a classification model. The classification model can classify a depicted subject in many different environments and arrangements (e.g., poses).
Type: Grant
Filed: September 8, 2023
Date of Patent: December 17, 2024
Assignee: Snap Inc.
Inventors: Xuehan Xiong, Zehao Xue
-
Patent number: 12169935
Abstract: A system for determining a bacterial load of a target is provided. The system includes an adaptor for configuring a mobile communication device for tissue imaging, and a mobile communication device. The adaptor includes a housing configured to be removably coupled to a mobile communication device, and an excitation light source for fluorescent imaging. The excitation light source is configured to emit light in one of ultraviolet, visible, near-infrared, and infrared ranges.
Type: Grant
Filed: June 13, 2023
Date of Patent: December 17, 2024
Assignee: UNIVERSITY HEALTH NETWORK
Inventor: Ralph Dacosta
-
Patent number: 12166982
Abstract: An encoder according to one aspect of the present disclosure encodes a block of an image, and includes a processor and memory connected to the processor. Using the memory, the processor partitions a block into a plurality of sub blocks and encodes a sub block included in the plurality of sub blocks in an encoding process including at least a transform process or a prediction process. The block is partitioned using a multiple partition including at least three odd-numbered child nodes, and each of a width and a height of each of the plurality of sub blocks is a power of two.
Type: Grant
Filed: June 28, 2023
Date of Patent: December 10, 2024
Assignee: Panasonic Intellectual Property Corporation of America
Inventors: Chong Soon Lim, Hai Wei Sun, Sughosh Pavan Shashidhar, Han Boon Teo, Ru Ling Liao, Takahiro Nishi, Tadamasa Toma
-
Patent number: 12153651
Abstract: A method of generating an aggregate saliency map using a convolutional neural network. Convolutional activation maps of the convolutional neural network model are received into a saliency map generator, the convolutional activation maps being generated by the neural network model while computing the one or more prediction scores based on unlabeled input data. Each convolutional activation map corresponds to one of the multiple encoding layers. The saliency map generator generates a layer-dependent saliency map for each encoding layer of the unlabeled input data, each layer-dependent saliency map being based on a summation of element-wise products of the convolutional activation maps and their corresponding gradients. The layer-dependent saliency maps are combined into the aggregate saliency map indicating the relative contributions of individual components of the unlabeled input data to the one or more prediction scores computed by the convolutional neural network model on the unlabeled input data.
Type: Grant
Filed: October 29, 2021
Date of Patent: November 26, 2024
Assignee: Microsoft Technology Licensing, LLC
Inventors: Oren Barkan, Omri Armstrong, Amir Hertz, Avi Caciularu, Ori Katz, Itzik Malkiel, Noam Koenigstein, Nir Nice
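The core operation named in the abstract above, a summation of element-wise products of activation maps and their gradients per layer, is closely related to Grad-CAM-style saliency and can be sketched directly in NumPy. The ReLU, the max-normalization, and the mean as the combination rule across layers are assumptions for illustration:

```python
import numpy as np

def layer_saliency(activations, gradients):
    """Layer-dependent saliency map: sum over channels of the element-wise
    product of a layer's activation maps (CxHxW) and their gradients, as
    described in the abstract. The ReLU and normalization are assumptions."""
    s = np.maximum((activations * gradients).sum(axis=0), 0.0)
    return s / s.max() if s.max() > 0 else s

def aggregate_saliency(per_layer_maps):
    """Combine layer-dependent maps into one aggregate map (mean is a
    hypothetical rule; maps assumed already resized to a common HxW)."""
    return np.mean(per_layer_maps, axis=0)
```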
-
Patent number: 12148177
Abstract: A method with vanishing point estimation includes: obtaining an image of a current time point of objects comprising a target vehicle; detecting the objects in the image of the current time point; tracking positions of the objects in a world coordinate system by associating the objects with current position coordinates of the objects determined from images of previous time points that precede the current time point; determining a vanishing point for each of the objects based on the positions of the objects; and outputting the vanishing point determined for each of the objects.
Type: Grant
Filed: November 12, 2021
Date of Patent: November 19, 2024
Assignee: Samsung Electronics Co., Ltd.
Inventors: Byeongju Lee, Seho Shin, Sung Hyun Chung, Jinhyuk Choi
-
Patent number: 12148122
Abstract: The present invention provides a panoramic presentation method and apparatus. In hardware, the invention presents a panoramic image using a combined structure of a single fisheye projector and a spherical screen; it employs the geometrical relationship between a fisheye lens and the position of a viewer, together with isometric projection transformation, to enable 180° fisheye projection to present 720° environment information to the viewer in the center of the spherical screen, greatly simplifying the image conversion process and realizing zero distortion. In software, a 720° panorama one-step generation method based on a single virtual camera removes the complex steps of collecting six images with a plurality of cameras, synthesizing a cube map, and then converting the cube map into a panorama; the vertex positions of a scene are adjusted directly in the process from a 3D space to a 2D image.
Type: Grant
Filed: April 26, 2022
Date of Patent: November 19, 2024
Assignee: SHENZHEN INSTITUTES OF ADVANCED TECHNOLOGY CHINESE ACADEMY OF SCIENCES
Inventors: Qin Yang, Pengfei Wei, Liping Wang
-
Patent number: 12141956
Abstract: A processor determines a cause of an image defect based on a test image that is obtained through an image reading process performed on an output sheet output from an image forming device. The processor extracts a vertical stripe part extending along a sub scanning direction in the test image. Furthermore, the processor determines which of two predetermined types of cause candidates is the cause of the vertical stripe part, based on a distribution of a pixel value sequence along a main scanning direction that crosses the sub scanning direction, in a target part including the vertical stripe part in the test image.
Type: Grant
Filed: December 21, 2021
Date of Patent: November 12, 2024
Assignee: KYOCERA Document Solutions Inc.
Inventors: Rui Hamabe, Kazunori Tanaka, Takuya Miyamoto, Kanako Morimoto, Koji Sato
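The extraction step in the entry above (a vertical stripe shows up as an outlier in the pixel-value sequence along the main scanning direction) can be sketched as a column-profile outlier test. The median/MAD statistic and threshold are assumptions; the patent's subsequent classification of the stripe's cause is not reproduced here:

```python
import numpy as np

def find_vertical_stripes(img, z_thresh=3.0):
    """Detect candidate vertical-stripe columns in a grayscale test image
    (rows = sub scanning direction, columns = main scanning direction):
    average each column, then flag columns whose mean deviates strongly
    from the overall profile (robust z-score, hypothetical threshold)."""
    profile = img.mean(axis=0)          # pixel-value sequence along main scan
    dev = profile - np.median(profile)
    mad = np.median(np.abs(dev))
    mad = mad if mad > 0 else 1e-8      # guard against a flat profile
    return np.where(np.abs(dev) / mad > z_thresh)[0]
```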