Patents Examined by Jeffery Brier
-
Patent number: 12211127
Abstract: A method for managing drawing data including raster data includes: vectorizing, by a computer, the drawing data to generate vector data; generating, by the computer, dimension line data associated with first and second nodes included in the vector data; performing, by the computer, character recognition on a region in the drawing data corresponding to a region close to a dimension line represented by the dimension line data; storing, by the computer, a character obtained by the character recognition as a dimension value in association with the dimension line data; and calculating, by the computer, a number of pixels per 1 mm on a drawing represented by the drawing data using the dimension value.
Type: Grant
Filed: July 5, 2024
Date of Patent: January 28, 2025
Assignee: CADDi, Inc.
Inventor: Yushiro Kato
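The scale-calibration step at the end of this abstract is simple arithmetic: the pixel length of a recognized dimension line divided by the recognized dimension value gives pixels per millimetre. A minimal sketch of that reading, with hypothetical node coordinates and dimension value (none of these names come from the patent):

```python
import math

def pixels_per_mm(node_a, node_b, dimension_value_mm):
    """Estimate drawing scale from one dimension line.

    node_a, node_b: (x, y) pixel coordinates of the two nodes the
    dimension line data is associated with.
    dimension_value_mm: the dimension value obtained by character
    recognition next to the line.
    """
    pixel_length = math.dist(node_a, node_b)
    return pixel_length / dimension_value_mm

# A dimension line spanning 250 px whose recognized label is "50" (mm)
scale = pixels_per_mm((100, 200), (350, 200), 50.0)
print(f"{scale:.2f} px per mm")  # 5.00 px per mm
```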
-
Patent number: 12203990
Abstract: In an aspect of the present disclosure is a system for monitoring health of a motor, including at least one sensor configured to detect at least a motor metric and send motor datum based on the at least a motor metric, an augmented reality display configured to display a visual representation of the motor datum, and a computing device communicatively connected to the at least one sensor and the augmented reality display, wherein the computing device is configured to: receive the motor datum from the at least one sensor; and command the augmented reality display to display the visual representation of the motor datum.
Type: Grant
Filed: July 6, 2022
Date of Patent: January 21, 2025
Assignee: BETA AIR LLC
Inventors: Brandon White, Stephen Widdis
-
Patent number: 12198263
Abstract: Determining a fit of a real-world electronic gaming machine (EGM) using a virtual representation of an EGM includes obtaining the virtual representation, obtaining sensor data for a real-world environment in which the virtual representation is to be presented, and detecting one or more surfaces in the real-world environment based on the sensor data. A determination is made that the virtual representation of the electronic gaming machine fits within the real-world environment in accordance with the real-world dimensions and the detected one or more surfaces. The virtual representation is presented in a mixed reality environment such that the virtual representation is blended into a view of a real-world environment along the one or more surfaces.
Type: Grant
Filed: September 30, 2022
Date of Patent: January 14, 2025
Assignee: Aristocrat Technologies Australia Pty Limited
Inventors: Upinder Dhaliwal, Eric Droukas, Patrick Petrella, III
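The abstract does not say how the fit determination is made; one plausible reading is a bounding-box test of the cabinet's real-world dimensions against a detected floor rectangle and ceiling height. A minimal sketch under that assumption (the class, parameter names, and clearance margin are all invented for illustration):

```python
from dataclasses import dataclass

@dataclass
class CabinetDims:
    width_m: float
    depth_m: float
    height_m: float

def fits(cabinet: CabinetDims, floor_width_m: float, floor_depth_m: float,
         ceiling_height_m: float, clearance_m: float = 0.05) -> bool:
    """Simplified fit test: does the cabinet's bounding box, plus a small
    clearance, fit within the detected floor rectangle and ceiling height?"""
    return (cabinet.width_m + clearance_m <= floor_width_m and
            cabinet.depth_m + clearance_m <= floor_depth_m and
            cabinet.height_m + clearance_m <= ceiling_height_m)

print(fits(CabinetDims(0.6, 0.7, 1.9), 1.2, 1.0, 2.4))  # True
```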
-
Patent number: 12190423
Abstract: Systems and methods are provided for providing a timeline representing a culture media protocol for a culture medium. Providing a timeline representing a culture media protocol can include receiving the culture media protocol for the culture media, generating the timeline on a user interface based on the culture media protocol, monitoring time on the timeline, receiving one or more culture media images related to the culture media protocol, associating each of the one or more culture media images with a position on the timeline that correlates to a time at which the culture media image was captured, and generating a selectable marker for each culture media image associated with the timeline, the selectable marker being aligned with the position on the timeline that correlates to the time at which the culture media image was captured.
Type: Grant
Filed: May 19, 2023
Date of Patent: January 7, 2025
Assignee: BECTON, DICKINSON AND COMPANY
Inventors: Strett Roger Nicolson, Keri Lynne Jones Aman, Mark Sakowski, Paul Fieni, Mark Larsen, Amy Alcott Llanso
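Aligning a marker with the timeline reduces to mapping a capture timestamp into a position along the protocol's time span. A minimal sketch of that mapping, with a hypothetical start time, duration, and pixel width (the patent specifies none of these):

```python
from datetime import datetime, timedelta

def marker_position_px(capture_time, protocol_start, protocol_duration,
                       timeline_width_px):
    """Map an image capture time to a horizontal pixel offset on the
    timeline so its selectable marker lines up with when it was captured."""
    fraction = (capture_time - protocol_start) / protocol_duration
    fraction = max(0.0, min(1.0, fraction))          # clamp to the timeline
    return round(fraction * timeline_width_px)

start = datetime(2024, 1, 1, 8, 0)
pos = marker_position_px(datetime(2024, 1, 2, 8, 0), start,
                         timedelta(days=4), timeline_width_px=800)
print(pos)  # 200: one day into a four-day protocol on an 800 px timeline
```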
-
Patent number: 12190440
Abstract: The present disclosure relates to the field of artificial intelligence (AI) and neural rendering, and particularly to a method of generating a multi-layer representation of a scene using neural networks trained in an end-to-end fashion and to a computing device implementing the method. The method of generating a multi-layer representation of a scene includes: obtaining a pair of images of the scene, the pair of the images comprising a reference image and a source image; performing a reprojection operation on the pair of images to generate a plane-sweep volume; predicting, using a geometry network, a layered structure of the scene based on the plane-sweep volume; and estimating, using a coloring network, color values and opacity values for the predicted layered structure of the scene to obtain the multi-layer representation of the scene; wherein the geometry network and the coloring network are trained in an end-to-end manner.
Type: Grant
Filed: December 16, 2022
Date of Patent: January 7, 2025
Assignee: SAMSUNG ELECTRONICS CO., LTD.
Inventors: Gleb Mikhaylovich Sterkin, Pavel Ilyich Solovev, Denis Mikhaylovich Korzhenkov, Victor Sergeevich Lempitsky, Taras Andreevich Khakhulin
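The reprojection step that builds the plane-sweep volume is a standard construction: the source image is warped onto a set of fronto-parallel depth planes of the reference camera via plane-induced homographies, and the warps are stacked. The sketch below shows only that standard construction, not the patent's networks; the intrinsics, relative pose, and depth range are made-up stand-ins:

```python
import cv2
import numpy as np

def plane_sweep_volume(src_img, K_ref, K_src, R, t, depths):
    """Warp the source image onto fronto-parallel depth planes of the
    reference camera and stack the warps into a plane-sweep volume."""
    h, w = src_img.shape[:2]
    n = np.array([[0.0, 0.0, 1.0]])   # plane normal in the reference frame
    layers = []
    for d in depths:
        # Plane-induced homography from reference pixels to source pixels.
        H = K_src @ (R - (t.reshape(3, 1) @ n) / d) @ np.linalg.inv(K_ref)
        layers.append(cv2.warpPerspective(src_img, H, (w, h),
                                          flags=cv2.WARP_INVERSE_MAP))
    return np.stack(layers)           # (num_planes, H, W, channels)

K = np.array([[500.0, 0, 320], [0, 500.0, 240], [0, 0, 1]])
src = np.zeros((480, 640, 3), np.uint8)
vol = plane_sweep_volume(src, K, K, np.eye(3), np.array([0.1, 0.0, 0.0]),
                         depths=np.linspace(1.0, 10.0, 16))
print(vol.shape)  # (16, 480, 640, 3)
```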
-
Patent number: 12190589
Abstract: An object tracking system that includes a first sensor and a second sensor that are each configured to capture frames of at least a portion of a global plane for a space. The system is configured to identify a pixel location for a marker within a frame from the first sensor and to determine an (x,y) coordinate for the marker using a first homography. The system is further configured to identify a pixel location for a different marker in a frame from the second sensor and to determine an (x,y) coordinate for that marker using a second homography. The system is further configured to determine a distance difference between a distance computed from the two (x,y) coordinates and an actual distance. The system is further configured to recompute the first homography and/or the second homography in response to determining that the distance difference exceeds a difference threshold level.
Type: Grant
Filed: February 10, 2022
Date of Patent: January 7, 2025
Assignee: 7-ELEVEN, INC.
Inventors: Shahmeer Ali Mirza, Sailesh Bharathwaaj Krishnamurthy, Kyle Dalal
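The calibration check described here applies each camera's homography to its marker's pixel location, measures the distance between the resulting plane coordinates, and compares it with a known distance. A minimal sketch of that check; the homographies, pixel locations, and threshold below are hypothetical:

```python
import numpy as np

def to_plane(h, pixel):
    """Map a pixel location to an (x, y) coordinate on the global plane."""
    p = h @ np.array([pixel[0], pixel[1], 1.0])
    return p[:2] / p[2]

def needs_recalibration(h1, h2, pixel1, pixel2, actual_distance, threshold=0.10):
    """True if the distance computed from the two homographies disagrees
    with the known actual distance by more than the threshold."""
    computed = np.linalg.norm(to_plane(h1, pixel1) - to_plane(h2, pixel2))
    return abs(computed - actual_distance) > threshold

# Identity homographies and two markers that are really 1.0 unit apart
h = np.eye(3)
print(needs_recalibration(h, h, (0.0, 0.0), (1.2, 0.0), actual_distance=1.0))  # True
```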
-
Patent number: 12190405
Abstract: Examples described herein relate to a first graphics processing unit (GPU) with at least one integrated communications system, wherein the at least one integrated communications system is to apply a reliability protocol to communicate with a second at least one integrated communications system associated with a second GPU to copy data from a first memory region to a second memory region and wherein the first memory region is associated with the first GPU and the second memory region is associated with the second GPU.
Type: Grant
Filed: June 29, 2022
Date of Patent: January 7, 2025
Assignee: Intel Corporation
Inventors: Todd Rimmer, Mark Debbage, Bruce G. Warren, Sayantan Sur, Nayan Amrutlal Suthar, Ajaya Durg
-
Patent number: 12190408
Abstract: Techniques for personalized digital content generation are described and are implementable to generate digital content based on content that depicts a scene from one or more locations and content that depicts one or more individuals. The described implementations, for instance, enable generation of a personalized photo album for a user depicting the user at the one or more locations. The described implementations further enable generation of an itinerary including personalized digital content. In an example, an input including a location is received. Environment content is generated that depicts a scene of the location. User content is generated that includes a representation of one or more individuals. Personalized content is then generated by incorporating the representation of the one or more individuals into the scene, and the personalized content is output for display. The personalized content is further usable to generate a personalized itinerary for a user.
Type: Grant
Filed: September 30, 2022
Date of Patent: January 7, 2025
Assignee: Motorola Mobility LLC
Inventors: Amit Kumar Agrawal, Srikanth Raju, Rahul Bharat Desai, Renuka Prasad Herur Rajashekaraiah
-
Patent number: 12182935
Abstract: A first aspect of the invention provides a method of training a neural network for capturing volumetric video, comprising: generating a 3D model of a scene; using the 3D model to generate a high fidelity depth map; capturing a perceived depth map of the scene, having a field of view that is aligned with a field of view of the high fidelity depth map; and training the neural network based on the high fidelity depth map and the perceived depth map, wherein the high fidelity depth map has a higher fidelity to the scene than the perceived depth map has.
Type: Grant
Filed: June 3, 2021
Date of Patent: December 31, 2024
Assignee: CONDENSE REALITY LTD.
Inventors: Nicholas Fellingham, Oliver Moolan-Feroze
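The training arrangement described, supervising a network on pairs of perceived and high-fidelity depth maps with aligned fields of view, can be illustrated with a toy depth-refinement loop. The network architecture, loss, and data below are invented for the sketch; the patent specifies none of them:

```python
import torch
import torch.nn as nn

# Toy refinement network: takes a perceived (sensor) depth map and predicts
# a corrected one; the high-fidelity depth map rendered from the 3D model
# serves as the supervision target.
net = nn.Sequential(
    nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
    nn.Conv2d(16, 1, 3, padding=1),
)
optimizer = torch.optim.Adam(net.parameters(), lr=1e-3)
loss_fn = nn.L1Loss()

perceived = torch.rand(8, 1, 64, 64)                        # stand-in inputs
high_fidelity = perceived + 0.1 * torch.rand(8, 1, 64, 64)  # stand-in targets

for step in range(100):
    optimizer.zero_grad()
    loss = loss_fn(net(perceived), high_fidelity)
    loss.backward()
    optimizer.step()
print(f"final L1 loss: {loss.item():.4f}")
```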
-
Patent number: 12183045
Abstract: Ways to simplify connectivity data for patches are described herein. The patches are generated considering the high-resolution mesh information. The connectivity data is simplified at the patch level, while the geometry image is still preserved. For the connectivity simplification, only triangles inside the patch are simplified. If the border is still preserved, the reconstruction in 3D will not suffer from artifacts. The high-resolution geometry image can be used to reverse the simplification and improve the connectivity at the decoder side. Three embodiments of patch mesh simplification are described: quadric error edge collapse, border distance edge collapse, and border triangles only.
Type: Grant
Filed: November 15, 2022
Date of Patent: December 31, 2024
Assignees: Sony Group Corporation, Sony Corporation of America
Inventors: Danillo Graziosi, Alexandre Zaghetto, Ali Tabatabai
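The border-preservation constraint, simplifying only triangles inside the patch so the patch boundary (and hence the 3D reconstruction along patch seams) survives, can be expressed as a filter over candidate edges before any collapse metric (quadric error or otherwise) is applied. A minimal sketch of that filter only; the edge list and border set are made up:

```python
def collapsible_edges(edges, border_vertices):
    """Edges a patch-level simplifier may collapse: those with no endpoint
    on the patch border, so the border geometry is left untouched."""
    border = set(border_vertices)
    return [(u, v) for (u, v) in edges if u not in border and v not in border]

edges = [(0, 1), (1, 2), (2, 3), (3, 0), (1, 3)]
print(collapsible_edges(edges, border_vertices=[0, 2]))  # [(1, 3)]
```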
-
Patent number: 12183309
Abstract: The invention includes a method including the steps of obtaining a plurality of images, each of the images in the plurality having at least one corresponding region, and generating a merged image, the merged image also having the corresponding region. The step of generating includes selecting an image source from the plurality of images to source image data for the corresponding region in the merged image by comparing attributes of the corresponding regions of the plurality of images to identify the image source having preferred attributes.
Type: Grant
Filed: October 24, 2023
Date of Patent: December 31, 2024
Assignee: Hologic, Inc.
Inventors: Kevin Kreeger, Andrew P. Smith, Ashwini Kshirsagar, Jun Ge, Xiangwei Zhang, Haili Chui, Christopher Ruth, Yiheng Zhang, Liyang Wei, Jay Stein
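The abstract does not say which attributes are compared; one common choice in region-wise image fusion is a per-region sharpness score. A sketch under that assumption, using a crude Laplacian-variance sharpness measure (the attribute, region layout, and data are all invented for illustration):

```python
import numpy as np

def sharpness(region):
    """Crude sharpness score: variance of a finite-difference Laplacian."""
    lap = (np.roll(region, 1, 0) + np.roll(region, -1, 0) +
           np.roll(region, 1, 1) + np.roll(region, -1, 1) - 4 * region)
    return lap.var()

def merge(images, regions):
    """For each region, source pixels from whichever image scores best."""
    merged = np.zeros_like(images[0])
    for region in regions:
        best = max(images, key=lambda img: sharpness(img[region]))
        merged[region] = best[region]
    return merged

images = [np.random.rand(64, 64) for _ in range(3)]
regions = [(slice(0, 32), slice(0, 64)), (slice(32, 64), slice(0, 64))]
merged = merge(images, regions)
print(merged.shape)  # (64, 64)
```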
-
Patent number: 12175644
Abstract: The systems and methods described can include approaches to calibrate head-mounted displays for improved viewing experiences. Some methods include receiving data of a first target image associated with an undeformed state of a first eyepiece of a head-mounted display device; receiving data of a first captured image associated with a deformed state of the first eyepiece of the head-mounted display device; determining a first transformation that maps the first captured image to the first target image; and applying the first transformation to a subsequent image for viewing on the first eyepiece of the head-mounted display device.
Type: Grant
Filed: August 24, 2023
Date of Patent: December 24, 2024
Assignee: Magic Leap, Inc.
Inventors: Lionel Ernest Edwin, Samuel A. Miller, Etienne Gregoire Grossmann, Brian Christopher Clark, Michael Robert Johnson, Wenyi Zhao, Nukul Sanjay Shah, Po-Kang Huang
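The patent does not say what form the transformation takes; if the eyepiece deformation is approximated as a planar warp, the captured-to-target mapping can be estimated as a homography from corresponding points and then applied to subsequent frames. A sketch under that assumption (the point correspondences and image size are stand-ins):

```python
import cv2
import numpy as np

# Corresponding points in the captured (deformed) and target (undeformed)
# images -- here hard-coded stand-ins for detected calibration features.
captured_pts = np.float32([[5, 3], [642, 1], [637, 484], [2, 481]])
target_pts = np.float32([[0, 0], [639, 0], [639, 479], [0, 479]])

# Transformation mapping the captured image onto the target image.
H, _ = cv2.findHomography(captured_pts, target_pts)

def correct(frame):
    """Apply the calibration transform to a subsequent frame."""
    return cv2.warpPerspective(frame, H, (640, 480))

corrected = correct(np.zeros((480, 640, 3), np.uint8))
print(corrected.shape)  # (480, 640, 3)
```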
-
Patent number: 12165541
Abstract: A system is configured to provide Joint Terminal Attack Controller (JTAC) training using one or more of augmented reality (AR) devices, virtual reality (VR) devices, or other devices. The system may generate one or more AR elements (such as one or more of an enemy combatant, an aerial device, an ordnance, a ground vehicle, or a structure). The system may produce AR data that can be provided to one or more AR headsets and may produce VR scene data corresponding to the AR data that can be provided to one or more VR headsets. Additionally, the system may detect a cap or cover on a device, determine data related to the device from the cap, and present visual data superimposed over the cap based on the determined data.
Type: Grant
Filed: April 27, 2020
Date of Patent: December 10, 2024
Assignee: Havik Solutions LLC
Inventor: Bradley Denn
-
Patent number: 12161296
Abstract: A system includes a processor and a display. The processor is configured to: (a) receive an optical image of an organ of a patient, (b) receive an anatomical image of the organ, (c) receive, from a position tracking system (PTS), a position signal indicative of a position of a medical instrument treating the organ, (d) register the optical image and the anatomical image in a common coordinate system, and (e) estimate the position of the medical instrument in at least one of the optical image and the anatomical image. The display is configured to visualize the medical instrument overlaid on at least one of the optical image and the anatomical image.
Type: Grant
Filed: April 5, 2021
Date of Patent: December 10, 2024
Assignee: Johnson & Johnson Surgical Vision, Inc.
Inventors: Assaf Govari, Vadim Gliner
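Steps (d) and (e) amount to expressing the tracked instrument position in the images' common coordinate system via a registration transform. A minimal sketch of that coordinate transfer; the 4x4 registration matrix and the PTS reading are fabricated for illustration:

```python
import numpy as np

def instrument_in_image_coords(pts_position, registration):
    """Map a 3D position reported by the position tracking system into the
    common coordinate system of the registered images (homogeneous 4x4)."""
    p = registration @ np.append(np.asarray(pts_position, float), 1.0)
    return p[:3]

# Example registration: a pure translation of the PTS frame by (10, -5, 2) mm
registration = np.eye(4)
registration[:3, 3] = [10.0, -5.0, 2.0]
print(instrument_in_image_coords([1.0, 2.0, 3.0], registration))  # [11. -3.  5.]
```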
-
Patent number: 12165552
Abstract: The present disclosure provides an electronic apparatus capable of capturing an image without being affected by an abnormality on a display surface. The electronic apparatus includes: a display unit; an imaging unit that is disposed on an opposite side to a display surface of the display unit; an abnormality detection unit that detects an abnormality on the display surface; and a display control unit that highlights a position where the abnormality detected by the abnormality detection unit occurs on the display unit.
Type: Grant
Filed: January 7, 2021
Date of Patent: December 10, 2024
Assignee: Sony Semiconductor Solutions Corporation
Inventor: Masashi Nakata
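The abstract does not specify how the abnormality detection works; one simple stand-in is comparing the display surface against an expected test pattern and flagging pixels that deviate. A sketch under that assumption (the test pattern, tolerance, and simulated stuck pixel are all invented):

```python
import numpy as np

def abnormal_pixel_positions(test_frame, expected_value, tolerance=10):
    """Positions (row, col) where the display surface deviates from the
    expected test pattern by more than the tolerance, e.g. stuck pixels."""
    deviation = np.abs(test_frame.astype(int) - expected_value)
    return np.argwhere(deviation > tolerance)

frame = np.full((8, 8), 128, dtype=np.uint8)
frame[2, 5] = 255                      # simulated stuck pixel
print(abnormal_pixel_positions(frame, expected_value=128))  # [[2 5]]
```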
-
Patent number: 12159374
Abstract: Provided is an information-processing device including: a CPU; and a memory storing instructions for causing the information-processing device, when executed by the CPU, to: select M (M ≤ N) images from N (N > 1) images; generate a combined image by arranging the selected M images in M frames defined in advance, respectively; and determine a total assessment value in association with the generated combined image, the total assessment value including at least a linear sum of a selection assessment value being a linear sum of single image assessment values of the selected M images and a combination assessment value being a single image assessment value of the combined image.
Type: Grant
Filed: November 5, 2021
Date of Patent: December 3, 2024
Assignee: RAKUTEN GROUP, INC.
Inventors: Mitsuru Nakazawa, Bjorn Stenger
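The total assessment value reduces to plain arithmetic: a weighted sum of (a) the sum of the single-image scores of the M selected images and (b) the single-image score of the combined image. A minimal sketch of that formula; the weights and scores are invented, and the patent does not fix the weighting:

```python
def total_assessment(single_values, selected_indices, combination_value,
                     w_selection=1.0, w_combination=1.0):
    """total = w1 * (sum of single-image values of the selected M images)
             + w2 * (single-image value of the combined image)"""
    selection_value = sum(single_values[i] for i in selected_indices)
    return w_selection * selection_value + w_combination * combination_value

scores = [0.9, 0.4, 0.7, 0.8]   # single-image assessment values, N = 4
print(total_assessment(scores, selected_indices=[0, 2, 3],
                       combination_value=0.6))  # 3.0
```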
-
Patent number: 12159348
Abstract: First, an image processing apparatus obtains data of a captured image obtained by image capturing with an image capturing apparatus that captures an image of a surrounding of a reference point, and obtains distance information indicating a distance from the reference point to an object present in a vicinity of the reference point. Next, the image processing apparatus obtains first three-dimensional shape data corresponding to a shape of the object, based on the distance information. Then, the image processing apparatus obtains second three-dimensional shape data that corresponds to the surrounding of the reference point other than the object and that is formed of one or more flat planes or curved planes. Then, the image processing apparatus obtains third three-dimensional shape data in which the first three-dimensional shape data and the second three-dimensional shape data are integrated, and maps the captured image to the third three-dimensional shape data.
Type: Grant
Filed: November 10, 2022
Date of Patent: December 3, 2024
Assignee: Canon Kabushiki Kaisha
Inventor: Kina Itakura
-
Patent number: 12159363
Abstract: A method for interacting with a user to create a three-dimensional (3D) model is provided. The method may include causing a capture device to start a first scan on a reference surface; instructing the user to make a first movement during the first scan; generating a 3D data representation of the reference surface based on data acquired in the first scan; displaying the 3D data representation on the GUI; causing the capture device to start a second scan on the reference surface; instructing the user to make a second movement during the second scan; generating a location tracking data representation of the reference surface based on data acquired in the second scan; displaying the location tracking data representation on the GUI; and causing the capture device to generate a 3D model of the reference surface based on the 3D data representation and the location tracking data representation.
Type: Grant
Filed: December 6, 2022
Date of Patent: December 3, 2024
Assignee: Snap Inc.
Inventors: Marwan Aljubeh, Gregory James Bakker, Ross Cairns, Eric Nersesian
-
Patent number: 12154265
Abstract: Systems and methods for identifying similar three-dimensional (3D) representations of an object. The method includes receiving at an interface a target 3D model and at least one candidate 3D model, executing at least one feature identification procedure to identify a feature of the target 3D model, generating a target feature tensor based on the identified feature of the target 3D model, and executing the at least one feature identification procedure on the candidate 3D model. The method further includes generating a candidate feature tensor based on the identified feature of the candidate 3D model, executing at least one comparison function to compare the target feature tensor and the candidate feature tensor, generating a feature comparison tensor based on the execution of the at least one comparison function, and identifying a degree of similarity between the target 3D model and the candidate 3D model based on the feature comparison tensor.
Type: Grant
Filed: January 23, 2024
Date of Patent: November 26, 2024
Assignee: AURA Technologies, LLC
Inventors: Ziye Xu, Eric Strong, Alex Blate, Douglas Bennett
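The abstract leaves the feature identification procedure and comparison function open; one simple stand-in pairs a pairwise-distance histogram as the feature tensor with cosine similarity as the comparison. A sketch under those assumptions (both choices, and the random point clouds, are illustrative only):

```python
import numpy as np

def feature_tensor(points):
    """Hypothetical feature: histogram of pairwise distances between
    sampled surface points, a crude global shape signature."""
    d = np.linalg.norm(points[:, None] - points[None, :], axis=-1)
    hist, _ = np.histogram(d[np.triu_indices(len(points), 1)],
                           bins=32, range=(0.0, 2.0), density=True)
    return hist

def similarity(target_tensor, candidate_tensor):
    """One possible comparison function: cosine similarity."""
    num = float(np.dot(target_tensor, candidate_tensor))
    den = np.linalg.norm(target_tensor) * np.linalg.norm(candidate_tensor) + 1e-12
    return num / den

target = feature_tensor(np.random.rand(200, 3))
candidate = feature_tensor(np.random.rand(200, 3))
print(f"similarity: {similarity(target, candidate):.3f}")
```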
-
Patent number: 12155927
Abstract: The image display apparatus according to an aspect of the present invention comprises: an image input device which inputs an image signal; a particular target detection device which detects a particular target included in the image signal based on a particular target evaluation value indicating the feature of the particular target; a frame display information generation device which generates frame display information indicating a frame surrounding the detected particular target and which causes the frame to change continuously or by stages according to the particular target evaluation value; and a display device which displays the frame based on the generated frame display information. That is, by causing the frame to change continuously or by stages according to the evaluation value of a particular target, it is possible to avoid sudden changes in the frame display.
Type: Grant
Filed: May 4, 2023
Date of Patent: November 26, 2024
Assignee: FUJIFILM Corporation
Inventors: Takeshi Misawa, Masahiko Sugimoto
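The continuous or staged frame change can be realized by mapping the evaluation value onto frame parameters such as thickness and opacity, so the frame fades and grows rather than popping in and out. A minimal sketch of one such mapping (the value range and the thickness/opacity bounds are arbitrary choices, not from the patent):

```python
def frame_style(evaluation_value, v_min=0.0, v_max=1.0,
                min_thickness=1, max_thickness=8):
    """Map a detection evaluation value to frame thickness and opacity so
    the displayed frame changes gradually with the evaluation value."""
    t = (evaluation_value - v_min) / (v_max - v_min)
    t = max(0.0, min(1.0, t))
    thickness = round(min_thickness + t * (max_thickness - min_thickness))
    opacity = t
    return thickness, opacity

for value in (0.2, 0.5, 0.9):
    print(value, frame_style(value))
# 0.2 (2, 0.2)   0.5 (4, 0.5)   0.9 (7, 0.9)
```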