PersonNeRF: Personalized Reconstruction from Photo Collections

(CVPR 2023 accepted paper)


https://grail.cs.washington.edu/projects/personnerf/


arXiv:2302.08504 (cs.CV)

[Submitted on 16 Feb 2023]

Chung-Yi Weng, Pratul P. Srinivasan, Brian Curless, Ira Kemelmacher-Shlizerman

 







We present PersonNeRF, a method that takes a collection of photos of a subject (e.g. Roger Federer) captured across multiple years with arbitrary body poses and appearances, and enables rendering the subject with arbitrary novel combinations of viewpoint, body pose, and appearance. PersonNeRF builds a customized neural volumetric 3D model of the subject that is able to render an entire space spanned by camera viewpoint, body pose, and appearance. A central challenge in this task is dealing with sparse observations; a given body pose is likely only observed by a single viewpoint with a single appearance, and a given appearance is only observed under a handful of different body poses. We address this issue by recovering a canonical T-pose neural volumetric representation of the subject that allows for changing appearance across different observations, but uses a shared pose-dependent motion field across all observations. We demonstrate that this approach, along with regularization of the recovered volumetric geometry to encourage smoothness, is able to recover a model that renders compelling images from novel combinations of viewpoint, pose, and appearance from these challenging unstructured photo collections, outperforming prior work for free-viewpoint human rendering.
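The key design described above — a single canonical T-pose volume whose appearance is conditioned per observation, queried through a motion field shared across all observations — can be illustrated with a toy sketch. This is not the paper's implementation; all names, dimensions, and the tiny random networks below are illustrative stand-ins, assuming a per-appearance latent code and a warp from observation space into canonical space.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical sizes (not from the paper).
N_APPEARANCES = 5   # one latent code per appearance in the photo collection
LATENT_DIM = 8
HIDDEN = 32

# Per-appearance latent codes: appearance may vary across observations.
appearance_codes = rng.normal(size=(N_APPEARANCES, LATENT_DIM))

# Toy stand-in for the pose-dependent motion field, SHARED by all
# observations: warps an observed point into the canonical T-pose space.
W_motion = rng.normal(size=(3 + 4, 3)) * 0.1  # input: 3-D point + toy 4-D pose

def motion_field(x, pose):
    """Map an observation-space point to canonical T-pose space."""
    return x + np.concatenate([x, pose]) @ W_motion

# Toy stand-in for the canonical T-pose volume: density and color at a
# canonical point, conditioned on an appearance code.
W1 = rng.normal(size=(3 + LATENT_DIM, HIDDEN)) * 0.1
W2 = rng.normal(size=(HIDDEN, 4)) * 0.1  # outputs (density, r, g, b)

def canonical_volume(x_canonical, appearance_code):
    h = np.tanh(np.concatenate([x_canonical, appearance_code]) @ W1)
    out = h @ W2
    density = np.logaddexp(0.0, out[0])      # softplus keeps density >= 0
    rgb = 1.0 / (1.0 + np.exp(-out[1:]))     # sigmoid keeps color in [0, 1]
    return density, rgb

def query(x, pose, appearance_id):
    """Any novel (pose, appearance) pair reuses the same canonical model."""
    x_canonical = motion_field(x, pose)
    return canonical_volume(x_canonical, appearance_codes[appearance_id])

# The same point and body pose rendered under two different appearances:
x = np.array([0.1, 0.2, 0.3])
pose = np.array([0.0, 0.5, -0.5, 1.0])
d0, rgb0 = query(x, pose, appearance_id=0)
d1, rgb1 = query(x, pose, appearance_id=1)
```

Because the motion field is shared, every observed pose helps constrain one canonical geometry, while the per-appearance codes let color change across observations — which is how the sparse-observation problem described above is addressed.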



 




