A method for estimating the errors in many-light rendering with supersampling

Computational Visual Media, Apr 2019

In many-light rendering, a variety of visual and illumination effects, including anti-aliasing, depth of field, volumetric scattering, and subsurface scattering, are handled by generating a number of virtual point lights (VPLs), which simplifies computation of the resulting illumination. Naive approaches that sum the direct illumination from all VPLs are computationally expensive; scalable methods compute the illumination more efficiently by clustering the VPLs and then estimating their sum from a small number of sampled VPLs. Although scalable methods achieve significant speed-ups, clustering introduces uncontrollable errors that appear as noise in the rendered images. In this paper, we propose a method to improve the estimation accuracy of many-light rendering involving such visual and illumination effects. We demonstrate that our method improves estimation accuracy by a factor of 2.3 over the previous method.
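To make the clustering-and-sampling idea concrete, here is a minimal Python sketch. It is not the authors' estimator: the function names, the toy unshadowed point-light model, and the one-sample-per-cluster scheme are all illustrative assumptions. It only shows how the sum over every VPL in a cluster can be replaced by a single importance-sampled representative while keeping the estimate unbiased.

```python
import random

def contribution(vpl, shading_point):
    """Toy unshadowed point-light contribution (inverse-square falloff)."""
    dx = vpl["pos"][0] - shading_point[0]
    dy = vpl["pos"][1] - shading_point[1]
    dz = vpl["pos"][2] - shading_point[2]
    dist2 = dx * dx + dy * dy + dz * dz + 1e-6
    return vpl["intensity"] / dist2

def estimate_cluster(cluster, shading_point):
    """Unbiased one-sample estimate of the summed contribution of a cluster."""
    weights = [vpl["intensity"] for vpl in cluster]
    total_w = sum(weights)
    # Pick one representative VPL with probability w_j / total_w.
    j = random.choices(range(len(cluster)), weights=weights, k=1)[0]
    p_j = weights[j] / total_w
    # Dividing by the selection probability keeps the estimator unbiased:
    # E[f(j) / p_j] = sum_j f(j).
    return contribution(cluster[j], shading_point) / p_j

def estimate_radiance(clusters, shading_point):
    """Sum per-cluster estimates instead of looping over every VPL."""
    return sum(estimate_cluster(c, shading_point) for c in clusters)

# Example: two hypothetical clusters of VPLs illuminating one shading point.
clusters = [
    [{"pos": (0.0, 2.0, 0.0), "intensity": 1.0},
     {"pos": (0.2, 2.1, 0.1), "intensity": 0.5}],
    [{"pos": (3.0, 1.0, 0.0), "intensity": 2.0}],
]
print(estimate_radiance(clusters, (0.0, 0.0, 0.0)))
```

Averaging several such per-cluster estimates reduces the variance that appears as noise in the rendered images, which is the error the paper aims to estimate and control.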

Computational Visual Media, pp. 1–10
Authors: Hirokazu Sakai, Kosuke Nabata, Shinya Yasuaki, Kei Iwasaki
Open Access Research Article. First Online: 11 April 2019

Keywords: anti-aliasing, depth of field, many-light rendering, participating media

Hirokazu Sakai received his B.S. degree from Wakayama University in 2017. He is currently an M.S. student at Wakayama University.

Kosuke Nabata received his B.S. and M.S. degrees from Wakayama University in 2013 and 2015, respectively. He is currently a Ph.D. student at Wakayama University.

Shinya Yasuaki received his B.S. and M.S. degrees from Wakayama University in 2015 and 2017, respectively. He is currently working at Square Enix Co., Ltd.

Kei Iwasaki received his B.S., M.S., and Ph.D. degrees from the University of Tokyo in 1999, 2001, and 2004, respectively. He is currently an associate professor at Wakayama University.

Acknowledgements: This work was partially supported by JSPS KAKENHI 15H05924 and 18H03348.
Copyright information: © The Author(s) 2019. Open Access. This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution, and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third-party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.
Authors and Affiliations: Hirokazu Sakai (1), Kosuke Nabata (1), Shinya Yasuaki (1), Kei Iwasaki (1, 2; corresponding author). 1. Wakayama University, Wakayama, Japan. 2. Dwango CG Research, Tokyo, Japan.


Full text PDF: https://link.springer.com/content/pdf/10.1007%2Fs41095-019-0137-0.pdf

Hirokazu Sakai, Kosuke Nabata, Shinya Yasuaki, Kei Iwasaki. A method for estimating the errors in many-light rendering with supersampling, Computational Visual Media, 2019, 1-10, DOI: 10.1007/s41095-019-0137-0