Path Graphs: Iterative Path Space Filtering [ToG21]

Theme: Paper
Type: Themed post
Status: Post
Field: Monte Carlo Noise Reduction

My Thoughts

Using graphs is an interesting approach. Recent and classic works alike treat paths as a sequence or a single concatenated feature, not as graphs (in fact, trees). Graph-like structures for noise reduction could well become popular.
However, there are two main concerns. First, constructing a graph for a high-resolution image can be cumbersome; this can be handled fairly easily by dividing the image. Second, keeping a 3D GNN unbiased can be hard (this work itself is unbiased); addressing that could be the novelty of future work.
Reducing bias via final gather is an important technique. It raises the same point that post-denoising works make. We could investigate further toward more unbiased methods.
This work is limited by its value clamping for MIS and by complex BSDFs. When applying a GNN to path graphs, we may need to consider cases where complex scenes make training hard to converge.

Motivation

Recent reconstruction methods (path guiding, denoising) operate on final radiances. More information is available in path space, which is especially useful at sparse sample counts.
Recent sample-reuse works act only locally. This work aims at global path reuse, linking each vertex to its K nearest vertices via neighbor edges.

Construct Path Graph

Light edge (LE) = Edge with endpoint of light source
Continuation edge (CE) = Edge with endpoint of surface
Neighbor edge (NE) = Edge linking K-nearest vertices
Further notation combines NE and LE/CE as below.
$N(j)$: Set of vertices linked to shading vertex $v_j$ via neighbor edges
$LE(j), CE(j)$: Light/continuation edges with endpoints in $N(j)$
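The neighbor-edge construction above boils down to a K-nearest-neighbor query over shading vertex positions. Below is a minimal, brute-force sketch (the data layout and function name are assumptions for illustration, not the authors' implementation):

```python
# Sketch of building neighbor edges for a path graph (assumed data layout,
# not the paper's code). Each shading vertex stores a 3D position; neighbor
# edges link each vertex to its K nearest vertices.
import numpy as np

def build_neighbor_edges(positions, k):
    """Return N(j) for every vertex j: indices of its K nearest vertices."""
    positions = np.asarray(positions, dtype=float)
    # Pairwise squared distances (brute force; a KD-tree or hash grid would
    # scale better for millions of path vertices).
    d2 = np.sum((positions[:, None, :] - positions[None, :, :]) ** 2, axis=-1)
    np.fill_diagonal(d2, np.inf)          # a vertex is not its own neighbor
    return np.argsort(d2, axis=1)[:, :k]  # N(j) = K nearest vertex indices

# Tiny example: four vertices on a line, K = 2
verts = [[0.0, 0, 0], [1.0, 0, 0], [2.0, 0, 0], [10.0, 0, 0]]
nbrs = build_neighbor_edges(verts, k=2)
print(nbrs[0])  # vertex 0's neighbors: [1 2]
```

In a real renderer the query structure matters far more than this sketch suggests, since the graph is rebuilt over millions of vertices.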

Aggregation & Propagation

Separate direct/indirect radiances and individually aggregate them. (Similar to Radiosity)
$\vec{L}^+_j = \hat{\bigsqcup}_{e\in LE(j)} L_e + \bigsqcup_{e\in CE(j)} L_e$
Get a weight for each sampled result based on the balance heuristic
$w_j(x) = \frac{p_j(x)}{\sum_{k=1}^n p_k(x)}$
Aggregation is done based on Multiple Importance Sampling → Unbiased approach
Direct radiance: $\hat{\bigsqcup}_{e\in LE(j)} L_e = \sum_{e\in LE(j)} \frac{f_j(\omega_e)\,|n_j\cdot\omega_e|\,w_e L_e}{\hat{\rho}_e}$ where $\hat{\rho}_e = \sum_{e'\in LE(j)} p_{e'}(\omega_e)$
Indirect radiance: $\bigsqcup_{e\in CE(j)} L_e = \sum_{e\in CE(j)} \frac{f_j(\omega_e)\,|n_j\cdot\omega_e|\,L_e}{\rho_e}$ where $\rho_e = \sum_{e'\in CE(j)} p_{e'}(\omega_e)$
Note that for the indirect radiance, each sample comes from an independent sampling technique, so the contributions are summed directly (without the MIS weights $w_e$).
Update the outgoing radiance of $v_j$ for all $j$
This process takes $O(K^2)$, but the authors reduce it to $O(K)$ by caching the pdf calculations. The steps become:
Get marginal density of sampling ωe\omega_e for all continuation edges
$\rho_e \leftarrow \sum_{k\in N(j)} p_k(\omega_e)$
Calculate direct and indirect radiances based on aggregation above.
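The per-vertex aggregation above can be sketched as follows. This is a toy illustration under an assumed data layout, not the authors' implementation: under the balance heuristic, each edge's term reduces to its contribution $f_j(\omega_e)\,|n_j\cdot\omega_e|\,L_e$ divided by the cached marginal density $\rho_e$, which is exactly what the caching step exploits.

```python
# Toy sketch of the aggregation step at one shading vertex (assumed layout,
# not the paper's code). For each incoming edge e we are given:
#   contrib[e] = f_j(omega_e) * |n_j . omega_e| * L_e
#   pdf[e][t]  = p_t(omega_e): pdf of every sampling technique t at omega_e
def aggregate(contrib, pdf):
    total = 0.0
    for e in range(len(contrib)):
        rho = sum(pdf[e])          # cached marginal density rho_e
        total += contrib[e] / rho  # balance-heuristic combination term
    return total

# Two edges with equal contributions; the second direction is twice
# as likely overall, so it is down-weighted accordingly.
print(aggregate([1.0, 1.0], [[0.5, 0.5], [1.0, 1.0]]))  # 1.0/1.0 + 1.0/2.0 = 1.5
```

Computing each $\rho_e$ once and reusing it across edges is what turns the naive $O(K^2)$ pdf evaluation into $O(K)$ work per vertex.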

Iterative Approach based on Linearity

The aggregation step can be viewed as a simple linear system $X = MX + B$.
The authors apply the Jacobi method to iteratively update $X$ (the radiance, in this case) based on this linear system.
To guarantee convergence, they clamp the eigenvalues of $M$ so they do not exceed 1.
Iteration is repeated until $X$ converges.
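A minimal fixed-point sketch of this iteration, under assumed toy inputs (this loosely mirrors the eigenvalue clamping idea by rescaling $M$ when its spectral radius is too large; it is not the renderer's actual update):

```python
# Jacobi-style fixed-point iteration for X = M X + B (illustrative sketch).
# Convergence requires the spectral radius of M to be below 1, so we
# rescale M when it is too large, loosely mirroring the clamping above.
import numpy as np

def iterate_radiance(M, B, iters=100):
    M = np.asarray(M, dtype=float)
    B = np.asarray(B, dtype=float)
    radius = max(abs(np.linalg.eigvals(M)))
    if radius >= 1.0:
        M = M * (0.99 / radius)   # shrink so the iteration contracts
    X = np.zeros_like(B)
    for _ in range(iters):
        X = M @ X + B             # Jacobi update: X_{k+1} = M X_k + B
    return X

# Scalar example: X = 0.5 X + 1 has the fixed point X = 2
print(iterate_radiance([[0.5]], [1.0]))  # ≈ [2.]
```

The clamping is what introduces the (small) bias discussed later: a rescaled $M$ no longer matches the true light transport operator exactly, but it guarantees the iteration converges.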

Final Gather

To remove correlation between nearby points, the final iteration is done with $K=1$.

Results

Even though some bias should exist (e.g., from the clamping), the method converges well.