AN EXAMPLE OF 3D RECONSTRUCTION ENVIRONMENT FROM RGB-D CAMERA

Authors

  • Le Van Hung, Tan Trao University, Vietnam
  • Bui Trung Minh, Tan Trao University, Vietnam
  • Tran Hai Yen, Vietnam Academy of Dance, Vietnam
  • Pham Thi Loan, Hai Duong College, Vietnam

DOI:

https://doi.org/10.51453/2354-1431/2021/692

Keywords:

3D environment reconstruction; RGB-D camera; point cloud data

Abstract

3D environment reconstruction is an important research direction in robotics and computer vision: it enables robots to localize and navigate in real environments, and it underpins assistive systems for blind and visually impaired people. In this paper, we introduce a simple, real-time approach for reconstructing a 3D environment from data captured by low-cost RGB-D cameras. The implementation is detailed step by step and illustrated with source code, and the cameras that support this reconstruction approach are also introduced. The resulting unorganized point cloud data is presented and visualized in the accompanying figures.
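The central step in this kind of pipeline is back-projecting each depth pixel to a 3D point through the camera intrinsics. The paper's own source code is not reproduced on this page; the sketch below is only a minimal illustration of that step, assuming a pinhole camera model. The function name depth_to_point_cloud and the Kinect-style intrinsic values (fx, fy, cx, cy) are hypothetical example choices, not taken from the paper.

    import numpy as np

    def depth_to_point_cloud(depth, fx, fy, cx, cy, depth_scale=0.001):
        """Back-project an H x W depth image into an N x 3 point cloud.

        depth_scale converts raw depth units to meters (e.g. 0.001 for
        millimeter-encoded depth maps).
        """
        h, w = depth.shape
        # Pixel coordinate grids: u runs along columns, v along rows.
        u, v = np.meshgrid(np.arange(w), np.arange(h))
        z = depth.astype(np.float32) * depth_scale
        # Pinhole model: X = (u - cx) * Z / fx, Y = (v - cy) * Z / fy.
        x = (u - cx) * z / fx
        y = (v - cy) * z / fy
        points = np.stack((x, y, z), axis=-1).reshape(-1, 3)
        # Discard zero-depth pixels; the result is an unorganized point cloud.
        return points[points[:, 2] > 0]

    # Illustrative usage with synthetic depth data (values are hypothetical).
    depth = np.random.randint(500, 4000, size=(480, 640), dtype=np.uint16)
    cloud = depth_to_point_cloud(depth, fx=525.0, fy=525.0, cx=319.5, cy=239.5)
    print(cloud.shape)  # (N, 3) points in meters

Applying this per frame, and transforming each frame's cloud by the estimated camera pose, is one way to merge frames into the kind of 3D environment reconstruction the abstract describes.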

Published

2022-04-12

How to Cite

Le Van, H., Bui Trung, M., Tran Hai, Y., & Pham Thi, L. (2022). AN EXAMPLE OF 3D RECONSTRUCTION ENVIRONMENT FROM RGB-D CAMERA. SCIENTIFIC JOURNAL OF TAN TRAO UNIVERSITY, 7(24). https://doi.org/10.51453/2354-1431/2021/692

Issue

Vol. 7 No. 24 (2021)

Section

Natural Science and Technology