Multiple Cameras Patents (Class 348/47)
-
Patent number: 12106497
Abstract: A method for registering a two-dimensional image data set with a three-dimensional image data set of a body of interest is disclosed herein. The method includes the following steps: adjusting a first virtual camera according to a distance parameter calculated corresponding to the two-dimensional image data set and the body of interest; rotating the first virtual camera according to an angle difference between a first vector and a second vector; and rotating the first virtual camera according to an angle corresponding to a maximum similarity value of a plurality of similarity values, the similarity values being calculated from the two-dimensional image data set and reconstructed images of the three-dimensional image data set, which include one image generated by the first virtual camera and others generated by other virtual cameras with different angles or different pixels from the one generated by the first virtual camera.
Type: Grant
Filed: July 1, 2022
Date of Patent: October 1, 2024
Assignee: REMEX MEDICAL CORP.
Inventors: Sheng-Fang Lin, Ying-Yi Cheng, Chen-Tai Lin, Shan-Chien Cheng
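The last step of this method scores candidate reconstructed images against the 2D image and keeps the rotation angle with the highest similarity value. A minimal Python sketch of that selection, assuming normalized cross-correlation as the similarity measure and pre-rendered candidate images (the abstract does not name a specific measure):

    import numpy as np

    def normalized_cross_correlation(a: np.ndarray, b: np.ndarray) -> float:
        """Similarity between two grayscale images of equal shape."""
        a = (a - a.mean()) / (a.std() + 1e-9)
        b = (b - b.mean()) / (b.std() + 1e-9)
        return float((a * b).mean())

    def select_best_angle(candidates, target_2d):
        """Return the rotation angle whose reconstructed image is most
        similar to the 2D image data set (hypothetical helper)."""
        scores = {angle: normalized_cross_correlation(img, target_2d)
                  for angle, img in candidates.items()}
        return max(scores, key=scores.get)

    # Toy usage: three candidate renderings at different angles.
    rng = np.random.default_rng(0)
    target = rng.random((64, 64))
    candidates = {-5.0: rng.random((64, 64)),
                   0.0: target + 0.05 * rng.random((64, 64)),   # near-match
                   5.0: rng.random((64, 64))}
    print(select_best_angle(candidates, target))  # expected: 0.0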
-
Patent number: 12106504
Abstract: A system for executing a three-dimensional (3D) intraoperative scan of a patient is disclosed. A 3D scanner control computing device projects the object points included in a first two-dimensional (2D) intraoperative image onto a first image plane and the object points included in a second 2D intraoperative image onto a second image plane. The 3D scanner control computing device determines first epipolar lines associated with the first image plane and second epipolar lines associated with the second image plane based on an epipolar plane that triangulates the object points included in the first 2D intraoperative image to the object points included in the second 2D intraoperative image. Each epipolar line provides a depth of each object point as projected onto the first image plane and the second image plane. The 3D scanner control computing device converts the first 2D intraoperative image and the second 2D intraoperative image to the 3D intraoperative scan of the patient based on the depth of each object point provided by each corresponding epipolar line.
Type: Grant
Filed: August 2, 2023
Date of Patent: October 1, 2024
Assignee: Unify Medical, Inc.
Inventors: Yang Liu, Maziyar Askari Karchegani
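In the standard formulation, recovering a depth for each matched object point from its projections on the two image planes is a linear triangulation. The sketch below uses the usual direct linear transform with known 3x4 projection matrices; it is one conventional way to realize the epipolar construction described above, not necessarily the patented procedure.

    import numpy as np

    def triangulate_point(P1, P2, x1, x2):
        """Linear (DLT) triangulation of one object point from its
        projections x1, x2 (pixel coords) in two image planes with
        3x4 projection matrices P1, P2. Returns a 3D point."""
        A = np.array([
            x1[0] * P1[2] - P1[0],
            x1[1] * P1[2] - P1[1],
            x2[0] * P2[2] - P2[0],
            x2[1] * P2[2] - P2[1],
        ])
        _, _, vt = np.linalg.svd(A)
        X = vt[-1]
        return X[:3] / X[3]

    # Toy setup: two cameras offset along x, both looking down +z.
    K = np.array([[500.0, 0, 320], [0, 500.0, 240], [0, 0, 1]])
    P1 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])
    P2 = K @ np.hstack([np.eye(3), np.array([[-100.0], [0.0], [0.0]])])
    X_true = np.array([50.0, 20.0, 1000.0, 1.0])
    x1 = (P1 @ X_true)[:2] / (P1 @ X_true)[2]
    x2 = (P2 @ X_true)[:2] / (P2 @ X_true)[2]
    print(triangulate_point(P1, P2, x1, x2))  # ~[50, 20, 1000]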
-
Patent number: 12092448
Abstract: An illustrative augmented reality (AR) management system determines a first optic parameter of a user corresponding to a first viewed point in a real-world environment and a second optic parameter of the user corresponding to a second viewed point in the real-world environment. The first viewed point and the second viewed point are viewed by the user as the user visually focuses on a target point in the real-world environment. The AR management system determines a depth value of the target point based on the first optic parameter and the second optic parameter, and creates an AR anchor associated with the target point based at least on the depth value. Corresponding methods and systems are also disclosed.
Type: Grant
Filed: April 19, 2022
Date of Patent: September 17, 2024
Assignee: Verizon Patent and Licensing Inc.
Inventors: Viraj C. Mantri, Natarajan Jayapal Balajee, Srividhya Parthasarathy, Stewart Katz
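One conventional way a depth value can follow from two per-eye measurements is by intersecting the two gaze rays implied by the viewed points (vergence). The sketch below computes the midpoint of closest approach between two rays; the ray parameterization and helper name are assumptions for illustration, not the patented computation.

    import numpy as np

    def depth_from_two_rays(o1, d1, o2, d2):
        """Midpoint of the closest approach between two gaze rays
        (origin o, direction d). Returns the 3D point and its
        distance along the first ray."""
        d1, d2 = d1 / np.linalg.norm(d1), d2 / np.linalg.norm(d2)
        w0 = o1 - o2
        a, b, c = d1 @ d1, d1 @ d2, d2 @ d2
        d, e = d1 @ w0, d2 @ w0
        denom = a * c - b * b
        t1 = (b * e - c * d) / denom   # parameter along ray 1
        t2 = (a * e - b * d) / denom   # parameter along ray 2
        p = 0.5 * ((o1 + t1 * d1) + (o2 + t2 * d2))
        return p, t1

    # Two "eyes" 6 cm apart converging on a target point 1 m ahead.
    target = np.array([0.0, 0.0, 1.0])
    o1, o2 = np.array([-0.03, 0.0, 0.0]), np.array([0.03, 0.0, 0.0])
    p, depth = depth_from_two_rays(o1, target - o1, o2, target - o2)
    print(p, depth)  # ~[0, 0, 1], ~1.0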
-
Patent number: 12095964
Abstract: The information processing apparatus obtains first viewpoint information for specifying a virtual viewpoint corresponding to a virtual viewpoint image, and second viewpoint information representing a viewpoint of a first image capturing apparatus existing in an image capturing range of a second image capturing apparatus that is used for generating the virtual viewpoint image. The apparatus performs control so that the image captured by the first image capturing apparatus is output in a case where a position of the first image capturing apparatus specified by the second viewpoint information is included in a field of view of the virtual viewpoint specified by the first viewpoint information.
Type: Grant
Filed: June 6, 2022
Date of Patent: September 17, 2024
Assignee: CANON KABUSHIKI KAISHA
Inventor: Yuya Ota
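The switching condition amounts to testing whether the physical camera's position lies inside the virtual viewpoint's field of view. A minimal sketch, assuming the virtual viewpoint is given as a position, a forward direction, and a symmetric cone-shaped angular field of view (a simplification of a real camera frustum):

    import numpy as np

    def camera_in_virtual_fov(virtual_pos, virtual_forward, fov_deg, camera_pos):
        """True if camera_pos lies within the cone-shaped field of view
        of the virtual viewpoint (position + forward direction)."""
        to_cam = camera_pos - virtual_pos
        dist = np.linalg.norm(to_cam)
        if dist == 0:
            return True
        cos_angle = (to_cam / dist) @ (virtual_forward / np.linalg.norm(virtual_forward))
        return cos_angle >= np.cos(np.radians(fov_deg / 2))

    virtual_pos = np.array([0.0, 0.0, 0.0])
    forward = np.array([0.0, 0.0, 1.0])
    print(camera_in_virtual_fov(virtual_pos, forward, 60.0, np.array([0.5, 0.0, 5.0])))  # True
    print(camera_in_virtual_fov(virtual_pos, forward, 60.0, np.array([5.0, 0.0, 1.0])))  # False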
-
Patent number: 12094054
Abstract: Examples relate to implementations of a neural light transport. A computing system may obtain data indicative of a plurality of UV texture maps and a geometry of an object. Each UV texture map depicts the object from a perspective of a plurality of perspectives. The computing system may train a neural network to learn a light transport function using the data. The light transport function may be a continuous function that specifies how light interacts with the object when the object is viewed from the plurality of perspectives. The computing system may generate an output UV texture map that depicts the object from a synthesized perspective based on an application of the light transport function by the trained neural network.
Type: Grant
Filed: May 4, 2020
Date of Patent: September 17, 2024
Assignee: Google LLC
Inventors: Yun-Ta Tsai, Xiuming Zhang, Jonathan T. Barron, Sean Fanello, Tiancheng Sun, Tianfan Xue
-
Patent number: 12089051
Abstract: This disclosure relates to improved systems and methods for providing and using wearable electronic accessories. A wearable electronic necklace accessory can include a support structure that permits the wearable electronic accessory to be worn in a user's neck region. The wearable electronic necklace accessory can include an electronic pendant coupled to the support structure; the electronic pendant can comprise a housing that includes a first wall, a second wall, and one or more side walls configured to couple the first wall to the second wall. The wearable electronic necklace accessory can include a display device and an audio device positioned within the pendant housing and configured to output electronic media and audio content. Other embodiments are disclosed.
Type: Grant
Filed: October 30, 2023
Date of Patent: September 10, 2024
Assignee: Audeo LLC
Inventors: Carolyn Ann Bankston, Jordan Gardinal
-
Patent number: 12082991
Abstract: A three-dimensional (3D) dental scanning system (1) for scanning a dental object (D) includes a scanning surface (124a) to support the dental object (D); a scanning section (130) to capture a 3D scan of the dental object (D); a motion section (120) to move the scanning surface (124a) and scanning section (130) relative to each other in five axes of motion, whilst retaining the scanning surface (124a) in a substantially horizontal plane; and a control unit (140) configured to control the motion section (120) and the scanning section (130) to obtain a 3D scan of the dental object (D).
Type: Grant
Filed: September 17, 2020
Date of Patent: September 10, 2024
Assignee: University of Leeds
Inventor: Andrew James Keeling
-
Patent number: 12086988
Abstract: The disclosed method encompasses using an augmented reality device to blend in augmentation information including, for example, atlas information. The atlas information may be displayed separately from or in addition to a patient image (planning image). In order to display the atlas information in a proper position relative to the patient image, the two data sets are registered to one another. This registration can serve to generate a variety of atlas-based image supplements, for example, alternatively or additionally to the foregoing, to display a segmentation of the patient image in the augmented reality image. The disclosed method is usable in a medical environment such as for surgery or radiotherapy.
Type: Grant
Filed: August 19, 2022
Date of Patent: September 10, 2024
Assignee: BRAINLAB AG
Inventors: Sven Flossmann, Samuel Kerschbaumer, Nils Frielinghaus, Christoffer Hamilton
-
Patent number: 12086998
Abstract: Aspects of the present disclosure relate to systems, devices and methods for performing a surgical step or surgical procedure with visual guidance using an optical head mounted display. Aspects of the present disclosure relate to systems, devices and methods for displaying, placing, fitting, sizing, selecting, aligning, and moving a virtual implant on a physical anatomic structure of a patient and, optionally, modifying or changing the displaying, placing, fitting, sizing, selecting, aligning, or moving, for example based on kinematic information.
Type: Grant
Filed: June 8, 2023
Date of Patent: September 10, 2024
Inventor: Philipp K. Lang
-
Patent number: 12081859
Abstract: A camera module structure is provided which includes a mainboard, a first camera module, a second camera module, and a TOF device. The TOF device is located between the first camera module and the second camera module, which are respectively mounted to a first mounting portion and a second mounting portion that are hollowed-out via the brackets. The mainboard includes a spacing portion for separation between the first mounting portion and the second mounting portion. The TOF device is disposed on an upper surface of an auxiliary circuit board; the auxiliary circuit board is greater than the spacing portion in width; the auxiliary circuit board is located on an upper surface of the mainboard, and the auxiliary circuit board is connected to the mainboard and at least partially overlapped with the spacing portion; and the auxiliary circuit board is provided with a material removal portion on a lower surface.
Type: Grant
Filed: May 16, 2022
Date of Patent: September 3, 2024
Assignee: HONOR DEVICE CO., LTD.
Inventor: Bo Wang
-
Patent number: 12079980
Abstract: An optical inspection system is provided for an ultraviolet laser and associated optics forming a planar laser sheet directed to a glass sheet. The planar laser sheet intersects a surface of the glass sheet, thereby causing the surface of the glass sheet to fluoresce and form a visible wavelength line on the surface. A camera has an image sensor for detecting the visible wavelength line. A control system is configured to receive image data indicative of the visible wavelength line, analyze and triangulate the data to determine a series of coordinates associated with the line, and create a three-dimensional map of the surface of the glass sheet as a function of the series of coordinates. Methods for using an optical inspection system, for gauging a surface using an optical inspection system, and for providing optical reflectance information for a surface using an optical inspection system are also provided.
Type: Grant
Filed: July 24, 2019
Date of Patent: September 3, 2024
Assignee: Glasstech, Inc.
Inventors: Jason C. Addington, Benjamin L. Moran, Michael J. Vild
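Triangulating the detected fluorescent line into surface coordinates is classically a ray-plane intersection between the camera ray through each line pixel and the calibrated laser-sheet plane. The sketch below implements that standard model with assumed intrinsics and plane parameters; the patent's actual calibration and triangulation details are not given in the abstract.

    import numpy as np

    def line_pixels_to_3d(pixels, K, plane_point, plane_normal):
        """Intersect camera rays through detected laser-line pixels with
        the planar laser sheet. Camera at origin, K = 3x3 intrinsics,
        plane given by a point and a normal. Returns Nx3 surface points."""
        K_inv = np.linalg.inv(K)
        points = []
        for u, v in pixels:
            ray = K_inv @ np.array([u, v, 1.0])          # viewing-ray direction
            t = (plane_normal @ plane_point) / (plane_normal @ ray)
            points.append(t * ray)                        # ray-plane intersection
        return np.array(points)

    K = np.array([[800.0, 0, 640], [0, 800.0, 480], [0, 0, 1]])
    # Toy laser sheet: the vertical plane x = 0.1 m in camera coordinates.
    plane_point = np.array([0.1, 0.0, 0.0])
    plane_normal = np.array([1.0, 0.0, 0.0])
    surface = line_pixels_to_3d([(700, 480), (700, 500)], K, plane_point, plane_normal)
    print(surface)   # points lying on the x = 0.1 plane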
-
Patent number: 12078807
Abstract: A display device includes a display part including a plurality of pixels; a lens part disposed on the display part; and a plurality of optical devices overlapping an edge of the display part in a plan view and disposed at an edge of the lens part, wherein light from the outside of the display part is incident on the plurality of optical devices.
Type: Grant
Filed: August 2, 2022
Date of Patent: September 3, 2024
Assignee: SAMSUNG DISPLAY CO., LTD.
Inventors: Sang-Ho Kim, Soo Min Baek, Ju Youn Son, Ji Won Lee, Cheon Myeong Lee, Ju Hwa Ha
-
Patent number: 12069375
Abstract: A multi-channel video recording method comprises: starting, by an electronic device, a camera; acquiring images by using a first camera lens and a second camera lens in a plurality of camera lenses; displaying a preview interface, where the preview interface includes a first image and a second image, the first image is an image acquired by the first camera lens, the second image is from the second camera lens, and the second image corresponds to a central area of an image acquired by the second camera lens, and where the first image is located in a first area in the preview interface and the second image is located in a second area in the preview interface; starting video recording after detecting a video recording instruction operation of a user; and displaying a shooting screen, where the shooting screen includes the first area and the second area.
Type: Grant
Filed: August 18, 2023
Date of Patent: August 20, 2024
Assignee: HONOR DEVICE CO., LTD.
Inventors: Yuanyou Li, Wei Luo, Jieguang Huo
-
Patent number: 12061760
Abstract: Provided is an imaging device that includes a rear display and an electronic viewfinder and offers good touch-panel operability when the electronic viewfinder is used. When a user performs a touch manipulation on a touch panel installed in the rear display in order to set a focus area, the effective detection area for detecting a touch position differs between when the rear display is used and when the electronic viewfinder is used. When the rear display is used, the effective detection area is set to coincide with the entire display screen; when the electronic viewfinder is used, the effective detection area is reduced to a part of the display screen of the rear display.
Type: Grant
Filed: March 6, 2023
Date of Patent: August 13, 2024
Assignee: MAXELL, LTD.
Inventor: Ryuji Nishimura
-
Patent number: 12062211
Abstract: Various implementations disclosed herein include devices, systems, and methods that track an electronic device by fusing tracking algorithms. In some implementations, instructions stored on a computer-readable medium are executable to cause obtaining pairs of first position data corresponding to a position of the electronic device in a first coordinate system obtained using a first technique and second position data in a second coordinate system obtained using a second technique. In some implementations, transformations for the respective pairs are determined that provide a positional relationship between the first coordinate system and the second coordinate system. In some implementations, a subset of the pairs is identified based on the transformations, and a combined transformation is determined based on the subset of the transformations. In some implementations, content is provided on an electronic device based on the combined transformation.
Type: Grant
Filed: February 24, 2023
Date of Patent: August 13, 2024
Assignee: Apple Inc.
Inventors: Christian Lipski, David A. McLaren
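A much-simplified sketch of the pair-wise transformation and subset idea: each pair of position measurements yields a candidate offset between the two coordinate systems, a consistent subset of candidates is kept, and the combined transformation is their mean. Restricting the transformation to a translation is an assumption made purely for brevity, not the patented fusion.

    import numpy as np

    def combine_transforms(pos_a, pos_b, tol=0.05):
        """Each pair (pos_a[i], pos_b[i]) yields a candidate offset mapping
        coordinate system B to A. Keep the subset consistent with the
        median candidate and return their mean as the combined
        transformation (translation-only simplification)."""
        candidates = pos_a - pos_b                      # one candidate per pair
        median = np.median(candidates, axis=0)
        inliers = np.linalg.norm(candidates - median, axis=1) < tol
        return candidates[inliers].mean(axis=0), inliers

    rng = np.random.default_rng(1)
    true_offset = np.array([1.0, -2.0, 0.5])
    pos_b = rng.random((20, 3))
    pos_a = pos_b + true_offset + 0.005 * rng.standard_normal((20, 3))
    pos_a[3] += 0.8                                     # one bad pair (tracking glitch)
    offset, inliers = combine_transforms(pos_a, pos_b)
    print(offset, inliers.sum())   # ~[1, -2, 0.5], 19 inlier pairs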
-
Patent number: 12063339
Abstract: Implementations are disclosed for automatic commissioning, configuring, calibrating, and/or coordinating sensor-equipped modular edge computing devices that are mountable on agricultural vehicles. In various implementations, neighbor modular edge computing device(s) that are mounted on a vehicle nearest a given modular edge computing device may be detected based on sensor signal(s) generated by contactless sensor(s) of the given modular edge computing device. Based on the detected neighbor modular edge computing device(s), an ordinal position of the given modular edge computing device may be determined relative to a plurality of modular edge computing devices mounted on the agricultural vehicle. Based on the sensor signal(s), distance(s) to the neighbor modular edge computing device(s) may be determined. Extrinsic parameters of the given modular edge computing device may be determined based on the ordinal position of the given modular edge computing device and the distance(s).
Type: Grant
Filed: July 19, 2023
Date of Patent: August 13, 2024
Assignee: MINERAL EARTH SCIENCES LLC
Inventors: Elliott Grant, Sergey Yaroshenko, Gabriella Levine
-
Patent number: 12055632
Abstract: One example system comprises an active sensor that includes a transmitter and a receiver, a first camera that detects external light originating from one or more external light sources to generate first image data, a second camera that detects external light originating from one or more external light sources to generate second image data, and a controller. The controller is configured to perform operations comprising determining a first distance estimate to a first object based on a comparison of the first image data and the second image data, determining a second distance estimate to the first object based on active sensor data, comparing the first distance estimate and the second distance estimate, and determining a third distance estimate to a second object based on the first image data, the second image data, and the comparison of the first and second distance estimates.
Type: Grant
Filed: October 13, 2020
Date of Patent: August 6, 2024
Assignee: Waymo LLC
Inventors: Shashank Sharma, Matthew Last
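A toy illustration of the comparison step: the stereo (disparity-based) estimate for the first object is checked against the active sensor's range, and the resulting scale factor is applied to the stereo-only estimate for the second object. The simple scale correction is an assumption, not necessarily how the patent combines the estimates.

    def stereo_distance(baseline_m, focal_px, disparity_px):
        """Classic stereo range: Z = f * B / d."""
        return focal_px * baseline_m / disparity_px

    def corrected_distance(stereo_obj1, active_obj1, stereo_obj2):
        """Correct the stereo-only estimate for object 2 using the
        discrepancy observed on object 1 (illustrative scale correction)."""
        scale = active_obj1 / stereo_obj1
        return stereo_obj2 * scale

    B, f = 0.30, 1400.0                              # 30 cm baseline, focal length in pixels
    d1 = stereo_distance(B, f, disparity_px=21.0)    # first object: 20.0 m from stereo
    d1_active = 19.4                                 # active-sensor range to the same object
    d2 = stereo_distance(B, f, disparity_px=8.4)     # second object: 50.0 m, stereo only
    print(d1, corrected_distance(d1, d1_active, d2)) # 20.0, 48.5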
-
Patent number: 12058307
Abstract: High-frame-rate stereoscopic projection using a single digital projector provides a powerful sense of immersion when shown on wide screens, and the human eye perceives each frame as unique and separate from the others by virtue of the natural left-right "shuttering" that occurs via the chosen 3D projection technology. Techniques are provided for the introduction of one or more consecutive "digital dark frames" into the image streams presented to the dual projectors in such a manner that the sequence of out-of-phase photography is replicated via out-of-phase projection. This is achieved by alternately introducing the dark frames so that the sequence of stereoscopic images is presented in proper temporal continuity.
Type: Grant
Filed: July 26, 2021
Date of Patent: August 6, 2024
Inventor: Douglas Trumbull
-
Patent number: 12045401
Abstract: Aspects of the present invention relate to external user interfaces used in connection with head worn computers (HWC).
Type: Grant
Filed: August 23, 2023
Date of Patent: July 23, 2024
Assignee: Mentor Acquisition One, LLC
Inventor: Ralph F. Osterhout
-
Patent number: 12045955
Abstract: Systems and methods are provided for generating panoramic imagery. An example method may be performed by one or more processors and includes obtaining first panoramic imagery depicting a geographic area. The method also includes obtaining an image depicting one or more physical objects absent from the first panoramic imagery. Further, the method includes transforming the first panoramic imagery into second panoramic imagery depicting the one or more physical objects and including at least a portion of the first panoramic imagery.
Type: Grant
Filed: September 2, 2020
Date of Patent: July 23, 2024
Assignee: GOOGLE LLC
Inventors: Matthew Sharifi, Victor Carbune
-
Patent number: 12047550
Abstract: An example method for training a machine learning model is provided. The method includes receiving training data collected by a three-dimensional (3D) imager, the training data comprising a plurality of training sets. The method further includes generating, using the training data, a machine learning model from which a disparity map can be inferred from a pair of images that capture a scene where a light pattern is projected onto an object.
Type: Grant
Filed: March 15, 2022
Date of Patent: July 23, 2024
Assignee: FARO Technologies, Inc.
Inventors: Georgios Balatzis, Francesco Bonarrigo, Andrea Riccardi
-
Patent number: 12033200
Abstract: A system for the mass production of "personalized" health and beauty formulations, which are then produced on demand, is provided. A user selects specific personal or product attributes through an interactive selection process either online or at a retail location. An individual product recipe is then created from an interactive ingredient database, and a detailed production formulation is then enabled through an automated, on-demand production cell in which ingredients are either sequentially or simultaneously dosed into an in-situ mixing container. Mixing of liquids occurs after the container is sealed, and the same container provides the final user packaging. Manufacturing control, labeling, packaging and compliance traceability are all codified, tracked, traced and saved through the entire integrated selection and production system.
Type: Grant
Filed: March 17, 2022
Date of Patent: July 9, 2024
Inventors: Danijel Hubman, Joze Pivk, Astrid Androsch
-
Patent number: 12023130
Abstract: A system and method for measuring the diabetes mellitus condition of a subject are disclosed. The disclosed system and method include thermal sensors for capturing thermal images and/or videos of a body part, and a processing engine to detect a predefined region of the body part in each frame of the captured images and/or videos. The processing engine segments one or more portions from the detected predefined region in each frame of the captured images and/or videos to identify a region of interest (ROI) comprising major arteries in the segmented portions. Based on the ROI, the engine extracts pixel values, representing biosignals, from each frame of the captured images and/or videos so as to determine one or more parameters associated with the hemodynamic factors and a rate of atherosclerosis of the subject. Further, a risk score for the diabetes mellitus condition is computed from the determined parameters using computational models.
Type: Grant
Filed: October 2, 2020
Date of Patent: July 2, 2024
Assignee: Aarca Research, Inc.
Inventors: Sameer Raghuram Shivpure, Jayanthi Thiruvengadam, Anuhya Choda, Gayathri Choda
-
Patent number: 12026350
Abstract: A configuration system uses multiple depth cameras to create a volumetric capture space around an electronically controllable industrial machine or system, referred to as a target system. The output of the cameras is processed to create a live 3D model of everything within the space. A remote operator can then navigate within this 3D model, for example from a desktop application, in order to view the target system from various perspectives in a live 3D telepresence. In addition to the live 3D model, a configuration system generates a 3D user interface for programming and configuring machines or target systems within the space in a spatially coherent way. Local operators can interact with the target system using mobile phones which track the target system in augmented reality. Any number of local operators can interact with a remote operator to simultaneously program and configure the target system.
Type: Grant
Filed: March 25, 2023
Date of Patent: July 2, 2024
Inventors: Benjamin Reynolds, Valentin Heun, James Keat Hobin, Hisham Bedri
-
Patent number: 12026916
Abstract: A stereo camera calibration method includes: controlling a stereo camera assembly to capture a sequence of stereo image pairs; simultaneously with each capture in the sequence, activating a rangefinder; and, responsive to each capture in the sequence, updating calibration data for point cloud generation by: detecting matching features in the stereo image pair, and updating a first portion of the calibration data based on the matched features; updating an alignment of the rangefinder relative to the stereo camera assembly, based on the updated first portion of the calibration data and a detected position of a beam of the rangefinder in a first image of the stereo image pair; and updating a second portion of the calibration data based on the detected position of the beam of the rangefinder in the first image of the stereo image pair, the updated rangefinder alignment, and a depth measurement captured by the rangefinder.
Type: Grant
Filed: January 20, 2023
Date of Patent: July 2, 2024
Assignee: Zebra Technologies Corporation
Inventor: Raveen T. Thrimawithana
-
Patent number: 12025468
Abstract: A measuring system for a triangulation-based distance measuring device having a light emitting unit, a light receiving unit for detecting measuring light reflected from an object, and a processing unit for deriving distance information based on a detected reflection of measuring light. The system comprises a visual guiding unit projecting a visual marker onto the object. The light emitting unit and the visual guiding unit provide a light reflection on the object by emitting the measuring light and projecting the visual marker onto the object. The measuring system comprises a camera, the field of view of the camera being greater than the field of view of the light receiving unit; the camera captures an image covering the light reflection and the projection of the visual marker and provides image information according to the captured image.
Type: Grant
Filed: December 3, 2020
Date of Patent: July 2, 2024
Assignee: HEXAGON TECHNOLOGY CENTER GMBH
Inventors: Patrick Ilg, Patryk Wroclawski, Jonas Wuest
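The range computation underlying such a triangulation-based device is the classic relation Z = f·B / x, where B is the baseline between emitter and receiver, f the focal length in pixels, and x the lateral offset of the detected reflection on the sensor. A minimal sketch with illustrative numbers (not taken from the patent):

    def triangulation_range(baseline_m, focal_px, spot_offset_px):
        """Classic laser-triangulation range: the lateral offset of the
        detected reflection on the sensor encodes the distance."""
        return focal_px * baseline_m / spot_offset_px

    # Assumed values: 5 cm baseline, 1000 px focal length,
    # reflection detected 25 px off-axis.
    print(triangulation_range(0.05, 1000.0, 25.0))   # 2.0 m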
-
Patent number: 12020416
Abstract: An optical inspection system is provided for an ultraviolet laser and associated optics forming a planar laser sheet directed to a glass sheet. The planar laser sheet intersects a surface of the glass sheet, thereby causing the surface of the glass sheet to fluoresce and form a visible wavelength line on the surface. A camera has an image sensor for detecting the visible wavelength line. A control system is configured to receive image data indicative of the visible wavelength line, analyze and triangulate the data to determine a series of coordinates associated with the line, and create a three-dimensional map of the surface of the glass sheet as a function of the series of coordinates. Methods for using an optical inspection system, for gauging a surface using an optical inspection system, and for providing optical reflectance information for a surface using an optical inspection system are also provided.
Type: Grant
Filed: July 24, 2019
Date of Patent: June 25, 2024
Assignee: Glasstech, Inc.
Inventors: Jason C. Addington, Benjamin L. Moran, Michael J. Vild
-
Patent number: 12015840
Abstract: A camera system and a method for controlling same are provided, including capturing images of the surroundings of a vehicle for a driver assistance system thereof. The camera system includes a first rolling shutter camera having a first aperture angle, a second rolling shutter camera having a second aperture angle, and control electronics. The first and second cameras are suitable for generating wide-angle and tele camera images, respectively. The first aperture angle is greater than the second aperture angle. The two cameras are designed such that both camera images have an overlap region. The control electronics synchronizes the two cameras. The geometric arrangement of the two cameras with respect to one another, and the position of the overlap region in the wide-angle image and in the tele camera image, are determined by continuous estimation. The stored geometric arrangement and position are taken into consideration during synchronization of the first and second cameras.
Type: Grant
Filed: December 17, 2019
Date of Patent: June 18, 2024
Assignee: CONTI TEMIC MICROELECTRONIC GMBH
Inventors: Aless Lasaruk, Reik Müller, Simon Hachfeld, Dieter Krökel, Stefan Heinrich
-
Patent number: 12011827
Abstract: A system that uses 3D scanning, movable devices, and pose selecting means, either in or outside the robot workspace, in order to create a robot program.
Type: Grant
Filed: April 12, 2021
Date of Patent: June 18, 2024
Assignee: SCALABLE ROBOTICS INC.
Inventors: Thomas Andrew Fuhlbrigge, Carlos Martinez, Gregory Rossano
-
Patent number: 12015757
Abstract: Embodiments of the present invention relate to an obstacle detection method and apparatus and an unmanned aerial vehicle. The unmanned aerial vehicle includes a binocular photographing component and a laser texture component. The method includes: determining to start the laser texture component; starting the laser texture component to emit a laser texture; obtaining a binocular view that is collected by the binocular photographing component and that includes the laser texture, and setting the binocular view that includes the laser texture as a target binocular view; and performing obstacle detection based on the target binocular view. With the above technical solutions, the precision of binocular stereo matching can be improved without changing the original binocular matching algorithm and structure, thereby improving the precision of obstacle detection. In addition, the unmanned aerial vehicle can perform binocular sensing while flying at night.
Type: Grant
Filed: February 26, 2020
Date of Patent: June 18, 2024
Assignee: AUTEL ROBOTICS CO., LTD.
Inventor: Xin Zheng
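The "determining to start the laser texture component" step is not detailed in the abstract; a plausible heuristic (purely an assumption, with illustrative thresholds) is to enable the projector when the scene is too dark or too texture-poor for reliable binocular matching:

    import numpy as np

    def should_start_laser_texture(gray, brightness_thresh=40.0, texture_thresh=25.0):
        """Heuristic sketch: enable the laser texture projector when the
        scene is too dark or too texture-poor for reliable binocular
        stereo matching (thresholds are illustrative assumptions)."""
        gy, gx = np.gradient(gray.astype(float))
        texture_score = np.mean(np.hypot(gx, gy))
        return gray.mean() < brightness_thresh or texture_score < texture_thresh

    rng = np.random.default_rng(0)
    night_wall = np.full((120, 160), 12.0) + rng.normal(0, 1.0, (120, 160))
    daylight_scene = rng.integers(0, 255, (120, 160))
    print(should_start_laser_texture(night_wall))      # True  -> project laser texture
    print(should_start_laser_texture(daylight_scene))  # False -> plain binocular view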
-
Patent number: 12007573
Abstract: An augmented reality (AR) device is described with a display system configured to adjust an apparent distance between a user of the AR device and virtual content presented by the AR device. The AR device includes a first tunable lens that changes shape in order to affect the position of the virtual content. Distortion of real-world content on account of the changes made to the first tunable lens is prevented by a second tunable lens that changes shape to stay substantially complementary to the optical configuration of the first tunable lens. In this way, the virtual content can be positioned at almost any distance relative to the user without degrading the view of the outside world or adding extensive bulk to the AR device. The augmented reality device can also include tunable lenses for expanding a field of view of the augmented reality device.
Type: Grant
Filed: November 10, 2020
Date of Patent: June 11, 2024
Assignee: Magic Leap, Inc.
Inventors: Ivan Li Chuen Yeoh, Lionel Ernest Edwin, Brian T. Schowengerdt, Michael Anthony Klug, Jahja I. Trisnadi
-
Patent number: 11997249
Abstract: Eyewear devices are described that include two SoCs sharing the processing workload. Instead of using a single SoC located either on the left or right side of the eyewear device, the two SoCs have different assigned responsibilities to operate different devices and perform different processes to balance the workload. In one example, the eyewear device utilizes a first SoC to operate a first color camera, a second color camera, a first display, and a second display. The first SoC and a second SoC are configured to selectively operate first and second computer vision (CV) camera algorithms. The first SoC is configured to perform visual odometry (VIO), track hand gestures of the user, and provide depth from stereo images. This configuration efficiently organizes operation of the various features and balances power consumption.
Type: Grant
Filed: October 14, 2021
Date of Patent: May 28, 2024
Assignee: Snap Inc.
Inventors: Jason Heger, Gerald Nilles, Dmitry Ryuma, Patrick Timothy Mcsweeney Simons, Daniel Wagner
-
Patent number: 11989978
Abstract: A method of detecting a user interaction by an electronic device according to various embodiments may include acquiring, through a camera, a first image of a real document in which at least one first area is printed, determining whether at least one first part corresponding to the at least one first area and included in the first image is changed, and performing a function of the electronic device corresponding to the at least one first part, based on determining that the at least one first part is changed.
Type: Grant
Filed: December 27, 2022
Date of Patent: May 21, 2024
Assignees: WOONGJIN THINKBIG CO., LTD., ARTYGENSPACE CO., LTD.
Inventors: Jeongwook Park, Myeonghyeon Hwang, Suhyoung Lim, Jungwoo Choi, Youngsun Seo
-
Patent number: 11974819
Abstract: A method comprising segmenting at least one vertebral body from at least one image of a first three-dimensional image data set. The method comprises receiving at least one image of a second three-dimensional image data set. The method comprises registering the segmented at least one vertebral body from the at least one image of the first three-dimensional image data set with the at least one image of the second three-dimensional image data set. The method comprises determining a position of the at least one surgical implant based on the at least one image of the second three-dimensional image data set and a three-dimensional geometric model of the at least one surgical implant. The method comprises overlaying a virtual representation of the at least one surgical implant on the registered and segmented at least one vertebral body from the at least one image of the first three-dimensional image data set.
Type: Grant
Filed: May 9, 2020
Date of Patent: May 7, 2024
Assignee: Nuvasive Inc.
Inventors: Eric Finley, Kara Robinson, DJ Geiger, Justin Smith, Chris Ryan
-
Patent number: 11966981
Abstract: Provided are a method and apparatus for assessing an insured loss, a computer device, and a storage medium. The method includes: S1: building up a database, and conducting model training on big data using a deep learning model or machine learning model to improve the recognition effect; S2: capturing a photo of a roof of a house to be assessed for damage according to needs, and transmitting the collected image data to a background; S3: automatically recognizing the image data by the background according to needs, and feeding back a recognition result; S4: marking out a damage point, a suspected damage point, and a non-damage point according to the recognition result; and S5: formulating a loss assessment report using the marked results according to needs. The apparatus, computer device, and storage medium each correspond to the above method.
Type: Grant
Filed: November 29, 2021
Date of Patent: April 23, 2024
Assignee: SHENZHEN JU FENG TECHNOLOGY COMPANY
Inventors: Kun Liu, Xiaoqing Wu
-
Patent number: 11966793
Abstract: Systems and methods to extend an interactive experience across multiple platforms are presented herein. The interactive experience may take place in an interactive space. The interactive space as experienced by a user of a host device (e.g., headset) may be extended to one or more mobile computing platforms. The host device may present views of virtual content perceived to be present within the real world. A given mobile computing platform may be positioned at or near the host device. The mobile computing platform may take images/video of the user of the host device. The mobile computing platform may superimpose views of the virtual content onto the images/video from the perspective of the mobile computing platform. The virtual content presented at the mobile computing platform may be perceived to be in the real world.
Type: Grant
Filed: October 18, 2017
Date of Patent: April 23, 2024
Assignee: Campfire 3D, Inc.
Inventors: Paulo Jansen, Benjamin Lucas, Forest Rouse, Mayan Shay May-Raz, Joshua Hernandez
-
Patent number: 11965968
Abstract: An optical sensing system comprising: a processing circuit; an optical sensor, configured to sense first optical data respectively in first sensing time intervals; and a TOF (Time of Flight) optical sensor, configured to sense second optical data respectively in second sensing time intervals. The processing circuit computes a distance between a first object and the optical sensing system according to the second optical data. The first sensing time intervals do not overlap with the second sensing time intervals.
Type: Grant
Filed: December 3, 2020
Date of Patent: April 23, 2024
Assignee: PixArt Imaging Inc.
Inventors: Zi Hao Tan, Joon Chok Lee, Keen-Hun Leong, Sai Mun Lee
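The key constraint is that the first (optical) and second (TOF) sensing time intervals never overlap. A small scheduling sketch with illustrative durations (not from the patent):

    def build_sensing_schedule(n_cycles, optical_ms=20, tof_ms=10, guard_ms=2):
        """Interleave first (optical) and second (TOF) sensing intervals so
        that they never overlap, as a list of (sensor, start_ms, end_ms).
        Durations and guard time are illustrative assumptions."""
        schedule, t = [], 0
        for _ in range(n_cycles):
            schedule.append(("optical", t, t + optical_ms)); t += optical_ms + guard_ms
            schedule.append(("tof", t, t + tof_ms));         t += tof_ms + guard_ms
        return schedule

    for entry in build_sensing_schedule(2):
        print(entry)
    # ('optical', 0, 20), ('tof', 22, 32), ('optical', 34, 54), ('tof', 56, 66)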
-
Patent number: 11961307
Abstract: An external environment recognition device includes: a plurality of external environment recognition sensors, each having an information detection unit that detects information of an object outside a vehicle, the plurality of external environment recognition sensors being arranged such that a detection range of the information detection unit includes an overlapping region where at least part of the detection range of the information detection unit overlaps with at least part of the detection range of another one of the information detection units; and a synchronous processing unit that extracts identical objects in the overlapping region from detection results of the external environment recognition sensors, and performs synchronous processing to synchronize the plurality of external environment recognition sensors if there is a deviation in position between the identical objects in the overlapping region.
Type: Grant
Filed: March 13, 2020
Date of Patent: April 16, 2024
Assignee: MAZDA MOTOR CORPORATION
Inventor: Daisuke Hamano
-
Patent number: 11945479
Abstract: In a method for detecting obstacles for a rail vehicle, 3D sensor data is detected from a surrounding region, 3D image data is generated from the 3D sensor data, and 2D image data is generated on the basis of the 3D image data. A 2D anomaly mask is ascertained or generated by comparing the 2D image data with reference image data which is free of a collision obstacle. In the process, image regions in the 2D image data which differ from the corresponding image regions in the reference image data are identified as mask regions. By fusing the 2D anomaly mask with the 3D image data, a 3D anomaly mask is generated in the 3D image data. Finally, the 3D image data which is part of the 3D anomaly mask is interpreted as a possible collision obstacle. An obstacle detection device and a rail vehicle are also described.
Type: Grant
Filed: December 20, 2021
Date of Patent: April 2, 2024
Assignee: Siemens Mobility GmbH
Inventor: Albi Sema
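A compact sketch of the anomaly-mask idea: a 2D mask is formed where the current image differs from the obstacle-free reference, and 3D points whose projections fall inside that mask are flagged as possible collision obstacles. The threshold and the pre-computed pixel projections are illustrative assumptions, not the patented fusion.

    import numpy as np

    def anomaly_masks(image_2d, reference_2d, points_3d, pixel_coords, thresh=30):
        """Mark 2D mask regions where the current image differs from the
        obstacle-free reference, then flag the 3D points whose projected
        pixel falls inside that mask."""
        mask_2d = np.abs(image_2d.astype(int) - reference_2d.astype(int)) > thresh
        us, vs = pixel_coords[:, 0], pixel_coords[:, 1]
        mask_3d = mask_2d[vs, us]                 # per-point anomaly flag
        return mask_2d, points_3d[mask_3d]

    reference = np.zeros((100, 100), dtype=np.uint8)
    image = reference.copy()
    image[40:60, 40:60] = 200                     # something new on the track
    points = np.array([[1.0, 0.0, 12.0], [0.5, 0.2, 30.0]])
    pixels = np.array([[50, 50], [10, 10]])       # projections of the 3D points
    mask_2d, obstacles = anomaly_masks(image, reference, points, pixels)
    print(mask_2d.sum(), obstacles)               # 400 anomalous pixels, 1 obstacle point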
-
Patent number: 11949808
Abstract: Systems, methods, devices and non-transitory, computer-readable storage mediums are disclosed for a wearable multimedia device and, in some implementations, a cloud computing platform. In some implementations, a method for presentation of a home screen is provided. In response to determining that a waking condition of a mobile device is satisfied, an environment of the mobile device is sensed using one or more sensors of the mobile device. Based at least on the sensing, a current context of the mobile device is determined. In response to determining the current context of the mobile device, a particular content is selected from among a plurality of content, the particular content being relevant to a user of the mobile device based on the current context of the mobile device. The particular content is presented as a home screen of the mobile device upon waking the mobile device.
Type: Grant
Filed: March 4, 2022
Date of Patent: April 2, 2024
Assignee: Humane, Inc.
Inventors: Kenneth Luke Kocienda, Imran A. Chaudhri
-
Patent number: 11948304
Abstract: A method of determining volumetric data of a predetermined anatomical feature is described. The method comprises determining volumetric data of one or more anatomical features present in a field of view of a depth sensing camera apparatus, identifying a predetermined anatomical feature as being present in the field of view of the depth sensing camera apparatus, associating the volumetric data of one of the one or more anatomical features with the identified predetermined anatomical feature, and outputting the volumetric data of the predetermined anatomical feature. An apparatus is also described.
Type: Grant
Filed: March 2, 2023
Date of Patent: April 2, 2024
Assignee: HEARTFELT TECHNOLOGIES LIMITED
Inventor: Shamus Husheer
-
Patent number: 11943532
Abstract: Disclosed are techniques that provide a "best" picture taken within a few seconds of the moment when a capture command is received (e.g., when the "shutter" button is pressed). In some situations, several still images are automatically (that is, without the user's input) captured. These images are compared to find a "best" image that is presented to the photographer for consideration. Video is also captured automatically and analyzed to see if there is an action scene or other motion content around the time of the capture command. If the analysis reveals anything interesting, then the video clip is presented to the photographer. The video clip may be cropped to match the still-capture scene and to remove transitory parts. Higher-precision horizon detection may be provided based on motion analysis and on pixel-data analysis.
Type: Grant
Filed: February 2, 2023
Date of Patent: March 26, 2024
Assignee: Google Technology Holdings LLC
Inventors: Doina I. Petrescu, Thomas T. Lay, Steven R. Petrie, Bill Ryan, Snigdha Sinha, Jeffrey S. Vanhoof
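As one plausible scoring criterion for picking the "best" of the automatically captured stills (the abstract does not specify the actual criteria), the sketch below ranks frames by a simple sharpness measure:

    import numpy as np

    def sharpness(gray):
        """Simple focus measure: mean gradient magnitude of a grayscale frame."""
        gy, gx = np.gradient(gray.astype(float))
        return float(np.hypot(gx, gy).mean())

    def pick_best_frame(frames):
        """Return the index of the sharpest automatically captured frame.
        Sharpness is only one plausible criterion; the patent's actual
        selection logic is not given in the abstract."""
        return int(np.argmax([sharpness(f) for f in frames]))

    rng = np.random.default_rng(2)
    crisp = rng.integers(0, 255, (48, 64)).astype(float)
    blurred = np.repeat(np.repeat(crisp[::4, ::4], 4, axis=0), 4, axis=1)  # low detail
    print(pick_best_frame([blurred, crisp, blurred]))   # 1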
-
Patent number: 11943423
Abstract: Provided is a method of calibrating a stereoscopic display device. The device includes a motor and a display panel, and the display panel is driven by the motor to rotate to realize a stereoscopic display. The method includes acquiring a control strategy of the motor and display parameters of the display panel matching the control strategy, wherein the control strategy indicates that each time the motor runs for a preset period of time, the motor is restarted; controlling the motor to run according to the control strategy, to calibrate the motor by restarting; and driving the display panel to display according to the display parameters in the rotation process of the motor.
Type: Grant
Filed: April 4, 2023
Date of Patent: March 26, 2024
Assignees: Beijing BOE Optoelectronics Technology Co., Ltd., BOE Technology Group Co., Ltd.
Inventors: Jiyang Shao, Yuxin Bi, Feng Zi, Bingxin Liu, Binhua Sun
-
Patent number: 11931134
Abstract: The present invention relates to a device, system and method for improved non-invasive and objective detection of pulse of a subject. The device comprises an input unit (2a) configured to obtain a series of images of a skin region of the subject and a processing unit (2b) for processing said series of images by detecting pulse-related motion of the skin within the skin region from the series of images, generating a motion map of the skin region from the detected pulse-related motion, comparing the generated motion map with an expected motion map of the skin region, and determining the presence of pulse within the skin region based on the comparison.
Type: Grant
Filed: July 23, 2019
Date of Patent: March 19, 2024
Assignee: Koninklijke Philips N.V.
Inventors: Kiran Hamilton J. Dellimore, Mukul Julius Rocque, Ralph Wilhelm Christianus Gemma Rosa Wijshoff, Jens Muehlsteff
-
Patent number: 11935262
Abstract: A method where one or more objects of a three-dimensional scene are determined in accordance with provided raw data of the three-dimensional scene representing a predefined environment inside and/or outside the vehicle. A two-dimensional image is determined in accordance with the provided raw data of the three-dimensional scene such that the two-dimensional image depicts the determined objects of the three-dimensional scene on a curved plane. The two-dimensional image has a quantity of pixels, each representing at least one part or several of the determined objects of the three-dimensional scene. Data is provided which represents at least one determined field of view of the driver. For at least one of the determined objects, the probability with which the at least one object will be located in the field of view of the driver is determined in accordance with the provided data and the two-dimensional image.
Type: Grant
Filed: February 20, 2020
Date of Patent: March 19, 2024
Assignee: Bayerische Motoren Werke Aktiengesellschaft
Inventors: Florian Bade, Moritz Blume, Martin Buchner, Carsten Isert, Julia Niemann, Michael Wolfram, Joris Wolters
-
Patent number: 11934174
Abstract: Computer-implemented method for determining a tension value of a limb of a person, the tension value of the limb being used along with a skin value of the limb for production of a custom-tailored compression garment for the limb, the skin value describing the circumference of the limb without any applied compression and the tension value describing the circumference of the limb with the compression garment applying a desired compression, wherein the skin value of the limb is received and the tension value of the limb is calculated from the skin value according to a calculation instruction parametrized by at least one parameter, the parameter being derived from a dataset comprising multiple associated tuples of skin values and tension values.
Type: Grant
Filed: January 14, 2020
Date of Patent: March 19, 2024
Assignee: MEDI GMBH & CO. KG
Inventor: Maximilian Schaumberg
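A minimal sketch of deriving the parameters from the dataset of associated tuples and applying the calculation instruction; a linear model tension = a·skin + b fitted by least squares is an illustrative assumption, not the patented instruction:

    import numpy as np

    def fit_parameters(skin_values, tension_values):
        """Derive parameters of the calculation instruction from a dataset
        of associated (skin, tension) tuples (assumed linear model)."""
        a, b = np.polyfit(skin_values, tension_values, deg=1)
        return a, b

    def tension_from_skin(skin_cm, a, b):
        """Apply the parametrized calculation instruction to a new skin value."""
        return a * skin_cm + b

    # Toy dataset: circumference without compression vs. with desired compression.
    skin = np.array([22.0, 26.0, 30.0, 34.0, 38.0])     # cm
    tension = np.array([20.1, 23.6, 27.2, 30.7, 34.3])  # cm
    a, b = fit_parameters(skin, tension)
    print(round(tension_from_skin(28.0, a, b), 1))       # ~25.4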
-
Patent number: 11922673
Abstract: Disclosed are a product inspection method and device, a production system, and a computer storage medium. The method comprises: conducting image acquisition on a product assembly line to obtain a production line image; extracting a product image including a product to be inspected from the production line image; extracting an inspection point image in a part inspection area in the product image; inputting the inspection point image into an inspection model to obtain an inspection result; and determining that the product to be inspected in the product image has defects under the condition that the inspection result meets any of the following conditions.
Type: Grant
Filed: June 22, 2021
Date of Patent: March 5, 2024
Assignee: BOE TECHNOLOGY GROUP CO., LTD.
Inventor: Tong Liu
-
Patent number: 11922575
Abstract: Approaches described and suggested herein relate to generating three-dimensional representations of objects to be used to render virtual reality and augmented reality effects on personal devices such as smartphones and personal computers, for example. An initial surface mesh of an object is obtained. A plurality of silhouette masks of the object taken from a plurality of viewpoints is also obtained. A plurality of depth maps are generated from the initial surface mesh. Specifically, the plurality of depth maps are taken from the same plurality of viewpoints from which the silhouette images are taken. A volume including the object is discretized into a plurality of voxels. Each voxel is then determined to be either inside the object or outside of the object based on the silhouette masks and the depth data. A final mesh is then generated from the voxels that are determined to be inside the object.
Type: Grant
Filed: March 12, 2021
Date of Patent: March 5, 2024
Assignee: A9.com, Inc.
Inventors: Himanshu Arora, Divyansh Agarwal, Arnab Dhua, Chun Kai Wang
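The per-voxel test described above can be sketched as follows: a voxel is kept only if, in every view, it projects inside the silhouette mask and lies no nearer to the camera than the surface recorded in the corresponding depth map. The toy single-view setup below uses assumed intrinsics; it is a simplified illustration, not the patented pipeline.

    import numpy as np

    def carve_voxels(voxels, views):
        """Keep a voxel only if, in every view (P, silhouette, depth_map),
        it projects inside the silhouette and no nearer than the surface
        recorded in the depth map (visual-hull-plus-depth test)."""
        inside = np.ones(len(voxels), dtype=bool)
        for P, silhouette, depth_map in views:
            proj = (P @ np.hstack([voxels, np.ones((len(voxels), 1))]).T).T
            z = proj[:, 2]
            u = np.round(proj[:, 0] / z).astype(int)
            v = np.round(proj[:, 1] / z).astype(int)
            ok = (z > 0) & (u >= 0) & (v >= 0) & (u < silhouette.shape[1]) & (v < silhouette.shape[0])
            keep = ok.copy()
            keep[ok] = silhouette[v[ok], u[ok]] & (z[ok] >= depth_map[v[ok], u[ok]] - 0.01)
            inside &= keep
        return voxels[inside]

    K = np.array([[100.0, 0, 32], [0, 100.0, 32], [0, 0, 1]])
    P = K @ np.hstack([np.eye(3), np.zeros((3, 1))])      # camera at the origin
    silhouette = np.zeros((64, 64), dtype=bool)
    silhouette[22:42, 22:42] = True                        # object occupies image centre
    depth_map = np.full((64, 64), 4.5)                     # surface ~4.5 m from the camera

    voxels = np.array([[0.0, 0.0, 5.0],    # inside silhouette, behind the surface -> kept
                       [0.0, 0.0, 3.0],    # in front of the depth surface -> carved
                       [2.0, 0.0, 5.0]])   # projects outside the silhouette -> carved
    print(carve_voxels(voxels, [(P, silhouette, depth_map)]))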
-
Patent number: 11922655
Abstract: Techniques for aligning images generated by an integrated camera physically mounted to an HMD with images generated by a detached camera physically unmounted from the HMD are disclosed. A 3D feature map is generated and shared with the detached camera. Both the integrated camera and the detached camera use the 3D feature map to relocalize themselves and to determine their respective 6 DOF poses. The HMD receives the detached camera's image of the environment and the 6 DOF pose of the detached camera. A depth map of the environment is accessed. An overlaid image is generated by reprojecting a perspective of the detached camera's image to align with a perspective of the integrated camera and by overlaying the reprojected detached camera's image onto the integrated camera's image.
Type: Grant
Filed: September 9, 2022
Date of Patent: March 5, 2024
Assignee: Microsoft Technology Licensing, LLC
Inventors: Raymond Kirk Price, Michael Bleyer, Christopher Douglas Edmonds
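The reprojection step can be sketched as a standard depth-based backward warp: each pixel of the integrated (HMD) camera is backprojected with the environment depth map, transformed into the detached camera's frame using the relative 6 DOF pose, projected, and sampled. Intrinsics, poses, and image contents below are toy values, not the patented implementation.

    import numpy as np

    def reproject_detached_image(detached_img, K_det, T_hmd_to_det, K_hmd, hmd_depth):
        """Backward-warp the detached camera's image into the integrated
        (HMD) camera's perspective: backproject each HMD pixel with the
        environment depth map, move it into the detached camera's frame
        using the relative pose, project, and sample."""
        h, w = hmd_depth.shape
        out = np.zeros((h, w), dtype=detached_img.dtype)
        K_hmd_inv = np.linalg.inv(K_hmd)
        for v in range(h):
            for u in range(w):
                X_hmd = hmd_depth[v, u] * (K_hmd_inv @ [u, v, 1.0])     # 3D point, HMD frame
                X_det = T_hmd_to_det[:3, :3] @ X_hmd + T_hmd_to_det[:3, 3]
                if X_det[2] <= 0:
                    continue
                p = K_det @ X_det
                ud, vd = int(round(p[0] / p[2])), int(round(p[1] / p[2]))
                if 0 <= ud < detached_img.shape[1] and 0 <= vd < detached_img.shape[0]:
                    out[v, u] = detached_img[vd, ud]
        return out

    # Toy setup: identical intrinsics, detached camera shifted 0.2 m to the right.
    K = np.array([[80.0, 0, 32], [0, 80.0, 32], [0, 0, 1]])
    T = np.eye(4); T[0, 3] = -0.2                       # HMD frame -> detached frame
    depth = np.full((64, 64), 2.0)                      # flat scene 2 m away
    detached = np.zeros((64, 64), dtype=np.uint8); detached[:, 20:30] = 255
    # The bright band at columns 20-29 reappears at columns 28-37 in the HMD view.
    print(reproject_detached_image(detached, K, T, K, depth).nonzero()[1].min())  # 28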
-
Patent number: 11908194
Abstract: A modular tracking system is described, comprising a network of independent tracking units optionally accompanied by a LIDAR scanner and/or one or more elevated cameras. Tracking units combine panoramic and zoomed cameras to imitate the working principle of the human eye. Markerless computer vision algorithms are executed directly on the units and provide feedback to a motorized mirror placed in front of the zoomed camera to keep tracked objects/people in its field of view. Microphones are used to detect and localize sound events. Inferences from the different sensors are fused in real time to reconstruct high-level events and a full skeleton representation for each participant.
Type: Grant
Filed: March 16, 2021
Date of Patent: February 20, 2024
Assignee: New York University
Inventors: Yurii S. Piadyk, Carlos Augusto Dietrich, Claudio T Silva