Introduction

OpenSVBRDF is the first large-scale database of measured spatially-varying anisotropic reflectance, consisting of 1,000+ high-quality near-planar SVBRDFs spanning 9 material categories such as wood, fabric and metal. To build this database, we developed a novel integrated system for robust, high-quality and high-efficiency reflectance acquisition and reconstruction. Each sample is captured in 15 minutes and represented as a set of high-resolution texture maps that correspond to spatially-varying BRDF parameters and local frames.

For more technical details, please refer to our journal-track paper accepted to ACM SIGGRAPH Asia 2023 (ACM TOG).

Download

We provide three components of data, as well as the source code, in the database portal. You can download whichever parts fit your research needs.

Due to government regulations, the database portal may become unavailable from time to time. If that happens, please contact us via email so that we can point you to alternative ways to download OpenSVBRDF.

1. Texture maps (50GB)

We store six texture maps for each sample in the database: GGX parameters (diffuse albedo, specular albedo, anisotropic roughness, normal and tangent) as well as transparency. Please note that the specular albedo in OpenSVBRDF typically falls within the range [0, 10], with a few values exceeding 10, rather than the conventional [0, 1] range. All texture maps have a spatial resolution of 1,024×1,024. The total size is 50GB (each sample is 50MB).
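
A minimal sketch of how the per-texel parameters can be plugged into a standard anisotropic GGX normal distribution is shown below. The file names, channel layout and reader library are assumptions for illustration only; the exact parameterization used by OpenSVBRDF is defined in our released code.

```python
# Sketch only: standard anisotropic GGX NDF evaluated from per-texel parameters.
# File names, channel layout and the imageio-based loader are assumptions; the
# released code defines the authoritative parameterization.
import numpy as np
import imageio.v3 as iio  # any EXR/HDR-capable reader works here

def ggx_aniso_ndf(h, alpha_x, alpha_y):
    """Anisotropic GGX normal distribution.
    h: unit half-vector in the local frame (x = tangent, y = bitangent, z = normal).
    alpha_x, alpha_y: roughness along the tangent and bitangent directions."""
    denom = (h[..., 0] / alpha_x) ** 2 + (h[..., 1] / alpha_y) ** 2 + h[..., 2] ** 2
    return 1.0 / (np.pi * alpha_x * alpha_y * denom ** 2)

# Hypothetical file name; check the downloaded data for the real naming scheme.
roughness = iio.imread("sample_0001_roughness.exr")  # per-texel (alpha_x, alpha_y)
u, v = 512, 512                                      # pick one texel
h = np.array([0.0, 0.0, 1.0])                        # half-vector along the normal
print(ggx_aniso_ndf(h, roughness[v, u, 0], roughness[v, u, 1]))
```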

2. Neural representations (283GB)

We store the intermediate neural representations of all SVBRDFs (also described as latent vectors in the paper), with a spatial resolution of 1,024×1,024. The total size is 283GB (each sample is 290MB).
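
Because each latent map is large, it can be convenient to memory-map it and read individual texels on demand. A minimal sketch is given below, assuming a raw float32 layout; the file name, dtype and latent dimension are placeholders and may differ from the released format.

```python
# Sketch only: memory-map one sample's per-texel latent map instead of loading
# the whole ~290MB file. File name, dtype and latent dimension D are placeholders.
import numpy as np

D = 64  # placeholder latent dimension; consult the released format description
latents = np.memmap("sample_0001_latent.bin", dtype=np.float32, mode="r",
                    shape=(1024, 1024, D))
print(latents[512, 512])  # latent code of the center texel
```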

3. Raw images (15TB)

We also store, for each sample, the 193 raw HDR photographs (24MP each). The total size is 15TB (each sample is 15GB).
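
Since a single sample already amounts to roughly 15GB of photographs, it is usually better to stream the images one at a time rather than loading them all at once. A minimal sketch follows; the directory layout and file extension are assumptions.

```python
# Sketch only: iterate over one sample's raw HDR photographs without holding
# all ~15GB in memory. Directory layout and file extension are assumptions.
from pathlib import Path
import imageio.v3 as iio  # any HDR-capable reader works here

sample_dir = Path("raw/sample_0001")           # hypothetical layout
for path in sorted(sample_dir.glob("*.exr")):  # expect 193 photographs per sample
    img = iio.imread(path)                     # one ~24MP HDR frame at a time
    print(path.name, img.shape, float(img.max()))
```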

4. Code (Coming Soon)

The code covers all steps from processing the raw captured photographs to fine-tuning the neural representations. The portion that converts neural representations into lumitexel vectors can also be downloaded separately.
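
For orientation, a conversion of this kind can be driven by an interface like the hypothetical sketch below, where a decoder network maps each per-texel latent code to a lumitexel (the texel's reflectance response under the capture device's lighting patterns). The checkpoint name, latent dimension and chunk size are placeholders, not the actual API of our code.

```python
# Hypothetical interface sketch: decode per-texel latent codes into lumitexels.
# Checkpoint name, latent dimension and output size are placeholders.
import torch

decoder = torch.jit.load("lumitexel_decoder.pt")   # placeholder checkpoint name
latents = torch.rand(1024 * 1024, 64)              # (num_texels, D); D is assumed
with torch.no_grad():
    # Decode in chunks to keep memory bounded; output is (num_texels, num_patterns).
    lumitexels = torch.cat([decoder(latents[i:i + 65536])
                            for i in range(0, latents.shape[0], 65536)])
```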

Benchmarks

In the future, we plan to set up open challenges (e.g., on material estimation, classification and generation): standardized benchmarks based on our growing dataset would make it possible to quantitatively evaluate existing and future research on common ground. Further information will be added to the database portal.

If you are interested in participating in setting up the benchmarks, please don’t hesitate to contact us via email.

Acknowledgements

We would like to thank the anonymous reviewers for their comments, Yaxin Yu and Kaizhang Kang for help in building the prototype, and Yue Dong, Xin Tong, Julie Dorsey and Holly Rushmeier for their support. This work is partially supported by NSF China (62022072 & 62227806), the Zhejiang Provincial Key R&D Program (2022C01057), the Fundamental Research Funds for the Central Universities, the XPLORER PRIZE, and the Information Technology Center and State Key Lab of CAD&CG, Zhejiang University.


Contact: xiaohema1998@gmail.com