Data and sensor system hub

- Guard, Inc.
Description

FIG. 1 is a front perspective view of a data and sensor system hub according to our new design;

FIG. 2 is a front view of a data and sensor system hub thereof;

FIG. 3 is a rear view of a data and sensor system hub thereof;

FIG. 4 is a right side view of a data and sensor system hub thereof;

FIG. 5 is a left side view of a data and sensor system hub thereof;

FIG. 6 is a top view of a data and sensor system hub thereof; and,

FIG. 7 is a bottom view of a data and sensor system hub thereof.

Claims

An ornamental design for a data and sensor system hub, as shown and described.

References Cited
U.S. Patent Documents
2783459 February 1957 Lienau et al.
3732556 May 1973 Caprillo et al.
3796208 March 1974 Bloice
3953843 April 27, 1976 Codina
3969712 July 13, 1976 Butman et al.
4337527 June 29, 1982 Delagrange et al.
4510487 April 9, 1985 Wolfe et al.
4639902 January 27, 1987 Leverance et al.
4747085 May 24, 1988 Dunegan et al.
4775854 October 4, 1988 Cottrell
4779095 October 18, 1988 Guerreri
D312219 November 20, 1990 Pratt
4971283 November 20, 1990 Tilsner
5043705 August 27, 1991 Rooz et al.
5142508 August 25, 1992 Mitchell et al.
5146208 September 8, 1992 Parra
5195060 March 16, 1993 Roll
5200931 April 6, 1993 Kosalos et al.
5369623 November 29, 1994 Zerangue
5574497 November 12, 1996 Henderson et al.
5616239 April 1, 1997 Wendell et al.
5631976 May 20, 1997 Bolle et al.
5691777 November 25, 1997 Kassatly
6133838 October 17, 2000 Meniere
6173066 January 9, 2001 Peurach et al.
6421463 July 16, 2002 Poggio et al.
6570608 May 27, 2003 Tserng
6628835 September 30, 2003 Brill et al.
D733596 July 7, 2015 Goodner
9443207 September 13, 2016 Przybylko et al.
9972188 May 15, 2018 AlMahmoud
20090189981 July 30, 2009 Siann et al.
20120269399 October 25, 2012 Anderson et al.
20140267736 September 18, 2014 Delean
20140333775 November 13, 2014 Naikal et al.
20150107015 April 23, 2015 Ng
20150139485 May 21, 2015 Bourdev
20160104359 April 14, 2016 AlMahmoud
20190004169 January 3, 2019 Watkins et al.
Foreign Patent Documents
9408257 December 1996 BR
2436964 June 2002 CA
103413114 November 2013 CN
0485735 May 1992 EP
0261917 March 1994 EP
2741370 May 1997 FR
2763459 November 1998 FR
541637 December 1941 GB
2254215 September 1992 GB
S62251879 November 1987 JP
H05157529 June 1993 JP
H1070654 March 1998 JP
3196842 August 2001 JP
3264121 March 2002 JP
1995034056 December 1995 WO
2017130187 August 2017 WO
Other references
  • Baqué, Pierre et al. “Deep Occlusion Reasoning for Multi-camera Multi-target Detection.” 2017 IEEE International Conference on Computer Vision (ICCV) (2017): 271-279.
  • Brubaker, Marcus A., Leonid Sigal and David J. Fleet, “Video-Based People Tracking,” Handbook of Ambient Intelligence and Smart Environments, pp. 57-87, Springer, Boston, MA (2009).
  • Cao, Zhe, Thomas Simon, Shih-En Wei, Yaser Sheikh, “Realtime Multi-Person 2D Pose Estimation Using Part Affinity Fields,” Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (2017), pp. 7291-7299.
  • Centers for Disease Control and Prevention, National Center for Injury Prevention and Control. Web-based Injury Statistics Query and Reporting System (WISQARS) [online], [cited May 3, 2012]. Available from: URL: http://www.cdc.gov/injury/wisqars.
  • Chen, Liang-Chieh et al. “Semantic Image Segmentation with Deep Convolutional Nets and Fully Connected CRFs.” CoRR abs/1412.7062 (2014): n. pag.
  • Fleuret, François et al. “Multicamera People Tracking with a Probabilistic Occupancy Map.” IEEE Transactions on Pattern Analysis and Machine Intelligence 30 (2008): 267-282.
  • Greif, Thomas, “A Probabilistic Motion Model for Swimmers: A Computer Vision Approach,” Master of Science Thesis, Universität Augsburg (2009).
  • Grundmann, Matthias, “Computational video: post-processing methods for stabilization, retargeting and segmentation,” Thesis for Degree of Doctor of Philosophy in the School of Interactive Computing, Georgia Institute of Technology, May 2013 (195 pages).
  • Güler, Riza Alp et al. “DensePose: Dense Human Pose Estimation in the Wild.” 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition (2018): 7297-7306.
  • Hartley, Richard and Andrew Zisserman, “Multiple View Geometry in Computer Vision,” Second Edition, Cambridge University Press, Mar. 2004.
  • He, Kaiming, Georgia Gkioxari, Piotr Dollár and Ross B. Girshick. “Mask R-CNN.” 2017 IEEE International Conference on Computer Vision (ICCV) (2017), pp. 2980-2988.
  • Jialue Fan, Wei Xu, Ying Wu, Yihong Gong, “Human Tracking Using Convolutional Neural Networks,” IEEE Transactions on Neural Networks, vol. 21, No. 10, Oct. 2010.
  • Joshi, Neel, Shai Avidan, Wojciech Matusik, David J. Kriegman, “Synthetic Aperture Tracking: Tracking through Occlusions,” IEEE 11th International Conference on Computer Vision, Oct. 14-21, 2007, pp. 1-8.
  • Liu, Xiaobai, “Multi-View 3D Human Tracking in Crowded Scenes,” Proceedings of the Thirtieth AAAI Conference on Artificial Intelligence (AAAI-16), 2016.
  • Pentair IntelliLevel Water Leveling Monitoring, Installation and User's Guide, Pentair PLC, 2019. Retrieved online.
  • Schechner, Y. Y. and Nir Karpel. “Attenuating natural flicker patterns.” Oceans '04 MTS/IEEE Techno-Ocean '04 (IEEE Cat. No. 04CH37600) 3 (2004): 1262-1268 vol. 3.
  • Schechner, Yoav Y. and Nir Karpel. “Clear underwater vision.” Proceedings of the 2004 IEEE Computer Society Conference on Computer Vision and Pattern Recognition, 2004. CVPR 2004. 1 (2004): I-I.
  • Schechner, Yoav Y. et al. “Polarization and statistical analysis of scenes containing a semireflector.” Journal of the Optical Society of America. A, Optics, image science, and vision 17 2 (2000): 276-84.
  • Shah, Sohil Atul and Vladlen Koltun. “Robust continuous clustering.” Proceedings of the National Academy of Sciences of the United States of America 114 37 (2017): 9814-9819.
  • Shu, Guang, “Human Detection, Tracking and Segmentation in Surveillance Video,” PhD Thesis, University of Central Florida, 2014.
  • Swirski, Yohay and Yoav Y. Schechner. “3Deflicker from motion.” IEEE International Conference on Computational Photography (ICCP) (2013): 1-9.
  • Swirski, Yohay et al. “Stereo from flickering caustics.” 2009 IEEE 12th International Conference on Computer Vision (2009): 205-212.
  • Tian, Luchao, Mingchen Li, Guyue Zhang, Jingwen Zhao and Yan Qiu Chen. “Robust Human Detection With Super-Pixel Segmentation and Random Ferns Classification Using RGB-D Camera.” 2017 IEEE International Conference on Multimedia and Expo (ICME) (2017), pp. 1542-1547.
  • Tian, Yuandong and Srinivasa G. Narasimhan. “Globally Optimal Estimation of Nonrigid Image Distortion.” International Journal of Computer Vision 98 (2011): 279-302.
  • Tian, Yuandong and Srinivasa G. Narasimhan. “Theory and Practice of Hierarchical Data-driven Descent for Optimal Deformation Estimation.” International Journal of Computer Vision 115 (2015): 44-67.
  • Tzeng, Eric et al. “Simultaneous Deep Transfer Across Domains and Tasks.” 2015 IEEE International Conference on Computer Vision (ICCV) (2015): 4068-4076.
  • Vo, Minh et al. “Automatic Adaptation of Person Association for Multiview Tracking in Group Activities.” ArXiv abs/1805.08717 (2018): 17 pages.
  • Vo, Minh et al. “Spatiotemporal Bundle Adjustment for Dynamic 3D Reconstruction.” 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR) (2016): 1710-1718.
  • Wang, Jian et al., “Programmable Triangulation Light Curtains,” European Conference on Computer Vision (ECCV), 2018, pp. 19-34.
  • Wang, Jiayuan, “Liquid Level Sensing Using Capacitive-to-Digital Converters,” Analog Dialogue, vol. 49, Apr. 2015. Retrieved Online.
  • Wikipedia contributors, “Scheimpflug principle,” Wikipedia, The Free Encyclopedia, https://en.wikipedia.org/w/index.php?title=Scheimpflug_principle&oldid=910381418 (accessed Sep. 24, 2019).
  • Wikipedia contributors, “Tilt-shift photography,” Wikipedia, The Free Encyclopedia, (accessed Sep. 24, 2019).
  • Wu, Changchang, "VisualSFM: A Visual Structure from Motion System" (2011). Retrieved online Sep. 23, 2019.
  • Yang, Tao, Yanning Zhang, Jingyi Yu, Jing Li, Wenguang Ma, Xiaomin Tong, Rui Yu and Lingyan Ran. “All-in-Focus Synthetic Aperture Imaging.” ECCV (Sep. 2014), pp. 1-15.
  • Zhi, Tiancheng et al. “Deep Material-Aware Cross-Spectral Stereo Matching.” 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition (2018): 1916-1925.
Patent History
Patent number: D939980
Type: Grant
Filed: Sep 6, 2019
Date of Patent: Jan 4, 2022
Assignee: Guard, Inc. (San Francisco, CA)
Inventors: Chris Barton (San Francisco, CA), Nichole Suzanne Rouillac (San Francisco, CA), Robin Nicholas Hubbard (San Francisco, CA), Jonathan Chei-Feung Lau (San Francisco, CA)
Primary Examiner: Antoine Duval Davis
Application Number: 29/704,836