Posts

Showing posts from March, 2022

Transforming Paintings and Photos Into Animations With AI | NVIDIA Technical Blog (repost)

Source: https://developer.nvidia.com/blog/transforming-paintings-and-photos-into-animations-with-ai/

NEWS | Jan 08, 2019
Transforming Paintings and Photos Into Animations With AI
By Nefi Alarcon
Tags: Computer Vision / Video Analytics, Machine Learning & Artificial Intelligence, News

Researchers from the University of Washington and Facebook recently released a paper that shows a deep learning-based system that can transform still images and paintings into animations. The algorithm, called Photo Wake-Up, uses a convolutional neural network to animate a person or character in 3D from a single still image. "Our method works with a large variety of whole-body, fairly frontal photos, ranging from sports photos to art, and posters," the researchers stated in their paper. "In addition, the user is given the ability to edit the human in the image, view the reconstruction in 3D, and explore it in AR." To demonstrate the po…
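To make the workflow described in the article more concrete, below is a schematic Python sketch of a single-photo animation pipeline: localize the person in the image, fit a rigged 3D character to the detection, then drive it with a motion sequence. Every function and data structure here is a hypothetical placeholder invented for illustration; this is not the Photo Wake-Up authors' code or API, only the general shape of such a system.

```python
# Schematic sketch only: all components below are hypothetical stand-ins for the
# stages the article describes (person detection, 3D reconstruction, animation).
from dataclasses import dataclass

@dataclass
class RiggedCharacter:
    mesh_vertices: list   # 3D geometry recovered from the photo
    texture: str          # appearance sampled from the input image
    skeleton: list        # joints used to drive the animation

def segment_person(photo: str) -> dict:
    """Stand-in for a CNN that localizes the person in the still image."""
    return {"mask": f"mask_of({photo})", "pose_2d": ["head", "hips", "feet"]}

def fit_body_model(segmentation: dict) -> RiggedCharacter:
    """Stand-in for fitting a textured, rigged 3D body model to the detection."""
    return RiggedCharacter(mesh_vertices=["v0", "v1", "v2"],
                           texture="sampled_from_photo",
                           skeleton=segmentation["pose_2d"])

def animate(character: RiggedCharacter, motion: list) -> list:
    """Stand-in for driving the rigged character with a motion sequence."""
    return [f"frame: {character.texture} posed as {pose}" for pose in motion]

frames = animate(fit_body_model(segment_person("painting.jpg")),
                 motion=["stand", "step forward", "wave"])
print(len(frames), "animation frames produced")
```

In a real system each stand-in would be replaced by a trained model and a renderer; the sketch only shows how the stages feed into one another.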

2020 Awards & Recognition (repost)

2020 Awards & Recognition | Paul G. Allen School of Computer Science & Engineering (washington.edu) (see the link for details)

Bob Bandes Memorial Award for Excellence in Teaching
The Bob Bandes award was established in 1984 in memory of Bob Bandes, a Computer Science graduate student who died in a skydiving accident on August 21, 1983. The award recognizes exceptional performance by undergraduate and graduate students who have served as teaching assistants (TAs) in the Allen School. A total of 575 students served as TAs for various Allen School courses over the past year, including 375 undergraduate students and 205 graduate students. Students and instructors submitted nearly 600 nominations putting forward 180 TAs for consideration as part of the 2020 Bandes Awards.

Chung-Yi Weng (Honorable Mention)
Chung-Yi Weng is a Ph.D. student who served as graduate TA for CSE 457, Computer Graphics. "Chung-Yi has been absolutely incredible, he has even instr…

CVPR 2022 Oral | A 3D experience from 2D video! University of Washington & Google: free-viewpoint rendering of people in monocular video! Author: Chung-Yi Weng (翁仲毅) (repost)

Researchers from the University of Washington and Google submitted this latest work in January 2022. The paper was accepted to CVPR 2022, the premier international conference on computer vision and pattern recognition (IEEE Conference on Computer Vision and Pattern Recognition), where the new technique will be presented. The first author, Chung-Yi Weng (翁仲毅), is a Ph.D. student at the University of Washington who comes from Taiwan.

HumanNeRF: Free-viewpoint Rendering of Moving People from Monocular Video
Chung-Yi Weng, Brian Curless, Pratul Srinivasan, Jonathan T. Barron, Ira Kemelmacher-Shlizerman
CVPR 2022 (oral, 3% of ~8k papers)
project page / arXiv / video
Combining NeRF with pose estimation lets you use a monocular video to do free-viewpoint rendering of a human
https://grail.cs.washington.edu/projects/humannerf/

HumanNeRF is a free-viewpoint rendering method that works on a given monocular video of a human performing complex body motions, such as a video from YouTube. The method can pause the video at any frame and render the subject from an arbitrary new camera viewpoint, or even a full 360-degree camera path, for that particular frame and body pose. The task is especially challenging: it requires synthesizing photorealistic details of the body, as seen from camera angles that may not exist in the input video, as well as fine details such as cloth folds and facial appearance.

This paper introduces a free-viewpoint rendering method, HumanNeRF, that works on a given monocular video of a person performing complex body motions, such as…
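As a rough illustration of the "combining NeRF with pose estimation" idea mentioned above, the minimal Python sketch below queries a canonical radiance field through a pose-conditioned warp and volume-renders a single camera ray. The tiny random-weight MLPs, the 24-dimensional pose vector, and the additive warp are placeholder assumptions for illustration only; they are not the networks, parameterization, or released code of the HumanNeRF paper.

```python
# Minimal sketch: a canonical NeRF-style field queried through a pose-conditioned
# warp, then volume-rendered along one camera ray. Random-weight MLPs stand in
# for trained networks; this is an assumption-laden illustration, not the paper.
import numpy as np

rng = np.random.default_rng(0)

def tiny_mlp(in_dim, out_dim, hidden=32):
    """Random-weight 2-layer MLP used as a stand-in for a trained network."""
    w1 = rng.normal(0, 0.1, (in_dim, hidden))
    w2 = rng.normal(0, 0.1, (hidden, out_dim))
    return lambda x: np.tanh(x @ w1) @ w2

# Placeholder learned components (assumptions, not the released model):
warp_field = tiny_mlp(3 + 24, 3)   # observation-space point + pose -> offset into canonical space
canonical_nerf = tiny_mlp(3, 4)    # canonical-space point -> (r, g, b, density)

def render_ray(origin, direction, pose, n_samples=64, near=0.5, far=3.0):
    """Volume-render one camera ray through the pose-warped canonical field."""
    t = np.linspace(near, far, n_samples)
    pts = origin + t[:, None] * direction                    # samples in observation space
    pose_tiled = np.tile(pose, (n_samples, 1))
    canonical_pts = pts + warp_field(np.concatenate([pts, pose_tiled], axis=1))
    raw = canonical_nerf(canonical_pts)
    rgb = 1.0 / (1.0 + np.exp(-raw[:, :3]))                  # color in [0, 1]
    sigma = np.maximum(raw[:, 3], 0.0)                       # non-negative density
    delta = (far - near) / n_samples
    alpha = 1.0 - np.exp(-sigma * delta)                     # per-segment opacity
    trans = np.cumprod(np.concatenate([[1.0], 1.0 - alpha[:-1]]))  # transmittance
    weights = alpha * trans
    return (weights[:, None] * rgb).sum(axis=0)              # composited pixel color

# The pose vector would come from a pose estimator on the video frame, while the
# camera origin/direction can be chosen freely for a novel viewpoint.
pose = rng.normal(size=24)
color = render_ray(origin=np.array([0.0, 0.0, -2.0]),
                   direction=np.array([0.0, 0.0, 1.0]),
                   pose=pose)
print("rendered pixel:", color)
```

In the paper these components are optimized from the input video itself; the sketch only shows the rendering-time data flow from a body pose and a camera ray to a composited pixel color.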