ECSE-6969 Computer Vision for Visual Effects Rich Radke, Rensselaer Polytechnic Institute Lecture 1: Overview of Computer Vision and Visual Effects (1/23/14) 0:01:51 Visual effects 0:08:45 Matting 0:08:56 Bluescreen 0:15:05 Natural image matting 0:17:29 Image editing and compositing 0:17:52 Inpainting 0:20:40 Compositing 0:25:32 Feature tracking 0:29:00 Dense correspondence 0:29:37 Optical flow 0:30:17 Morphing 0:31:49 Retiming 0:33:24 Stereo 0:35:25 Matchmoving 0:39:08 Motion capture 0:45:58 3D data acquisition 0:51:08 Academic computer vision vs. moviemaking 🤍
ECSE-6969 Computer Vision for Visual Effects Rich Radke, Rensselaer Polytechnic Institute Lecture 9: Feature Detectors (2/20/14) 0:00:18 Feature detection 0:00:56 Good and bad features 0:08:01 Measuring feature quality 0:18:01 The Harris matrix 0:22:47 Detecting Harris corners 0:28:08 Gaussian weighting 0:30:50 Discrete gradient operators 0:34:38 Multiscale Harris corners 0:40:52 Scale space 0:46:26 The scale-normalized Harris matrix 0:55:14 Selecting scale with the normalized Laplacian 0:58:43 Harris-Laplace features 1:03:41 LoG (Laplacian of Gaussian) features 1:06:49 SIFT feature detection 1:07:22 DoG (Difference of Gaussian) Follows Section 4.1 of the textbook. 🤍 Key references: J. Shi and C. Tomasi. Good features to track. In IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR), 1994. 🤍 T. Lindeberg. Feature detection with automatic scale selection. International Journal of Computer Vision, 30(2):79–116, Nov. 1998. 🤍 K. Mikolajczyk and C. Schmid. Indexing based on scale invariant interest points. In IEEE International Conference on Computer Vision (ICCV), 2001. 🤍 D. G. Lowe. Distinctive image features from scale-invariant keypoints. International Journal of Computer Vision, 60(2):91–110, Nov. 2004. 🤍
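The Harris response covered at 0:18:01–0:28:08 can be sketched in a few lines. This is an illustrative NumPy version, not the lecture's code: it uses central-difference gradients and an unweighted 3x3 box window in place of the Gaussian weighting the lecture recommends.

```python
import numpy as np

def harris_response(img, k=0.05):
    """Harris corner response R = det(M) - k * trace(M)^2 per pixel,
    where M is the Harris matrix of gradient products summed over a
    3x3 window (a real detector would use a Gaussian window)."""
    img = img.astype(float)
    # Central-difference gradients (zero at the image border).
    Ix = np.zeros_like(img)
    Iy = np.zeros_like(img)
    Ix[:, 1:-1] = (img[:, 2:] - img[:, :-2]) / 2.0
    Iy[1:-1, :] = (img[2:, :] - img[:-2, :]) / 2.0

    def box3(a):
        # Sum each pixel's 3x3 neighbourhood (zero-padded).
        p = np.pad(a, 1)
        return sum(p[i:i + a.shape[0], j:j + a.shape[1]]
                   for i in range(3) for j in range(3))

    Sxx, Syy, Sxy = box3(Ix * Ix), box3(Iy * Iy), box3(Ix * Iy)
    det = Sxx * Syy - Sxy * Sxy
    trace = Sxx + Syy
    return det - k * trace * trace

# Synthetic test image: a bright square, so its corners are "good features".
img = np.zeros((20, 20))
img[5:15, 5:15] = 1.0
R = harris_response(img)
```

As the lecture's "good and bad features" discussion predicts, R is strongly positive at the square's corners, negative along its edges, and zero in flat regions.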
ECSE-6969 Computer Vision for Visual Effects Rich Radke, Rensselaer Polytechnic Institute Lecture 13: Optical flow (3/6/14) 0:00:02 Optical flow 0:00:58 Motion vectors 0:01:56 The brightness constancy assumption 0:05:44 The Horn-Schunck method 0:18:07 Hierarchical Horn-Schunck 0:40:25 The Lucas-Kanade method 0:45:54 Refinements and extensions 0:50:18 Smoothness along edges 0:53:52 Robust cost functions 0:58:12 Cross-checking 1:02:35 Layered flow 1:05:31 Large-displacement optical flow 1:08:13 Human-assisted motion annotation 1:11:31 Optical flow benchmarking 1:13:16 Optical flow for visual effects Follows Section 5.3 of the textbook. 🤍 Note: despite the class discussion about hierarchical optical flow, the algorithm as presented on p. 159 of the book is correct. Key references: T. Brox, A. Bruhn, N. Papenberg, and J. Weickert. High accuracy optical flow estimation based on a theory for warping. In European Conference on Computer Vision (ECCV), 2004. 🤍 A. Bruhn, J. Weickert, and C. Schnörr. Lucas/Kanade meets Horn/Schunck: Combining local and global optic flow methods. International Journal of Computer Vision, 61(3):211–231, Feb. 2005. 🤍 D. Sun, S. Roth, and M. Black. Secrets of optical flow estimation and their principles. In IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR), 2010. 🤍 C. Liu, W. Freeman, E. Adelson, and Y. Weiss. Human-assisted motion annotation. In IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR), 2008. 🤍
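The Lucas-Kanade method from 0:40:25 reduces, for a single window with purely translational motion, to a 2x2 linear system built from the brightness constancy assumption. A minimal sketch (one flow vector for the whole image, no pyramid, so it only handles sub-pixel to small motions):

```python
import numpy as np

def lucas_kanade_window(I0, I1):
    """Estimate one translational flow vector (u, v) for a window by
    solving the Lucas-Kanade normal equations
        [sum IxIx  sum IxIy] [u]   [sum IxIt]
        [sum IxIy  sum IyIy] [v] = -[sum IyIt]
    with central-difference spatial gradients and It = I1 - I0."""
    I0 = I0.astype(float)
    I1 = I1.astype(float)
    Ix = np.zeros_like(I0)
    Iy = np.zeros_like(I0)
    Ix[:, 1:-1] = (I0[:, 2:] - I0[:, :-2]) / 2.0
    Iy[1:-1, :] = (I0[2:, :] - I0[:-2, :]) / 2.0
    It = I1 - I0
    A = np.array([[np.sum(Ix * Ix), np.sum(Ix * Iy)],
                  [np.sum(Ix * Iy), np.sum(Iy * Iy)]])
    b = -np.array([np.sum(Ix * It), np.sum(Iy * It)])
    return np.linalg.solve(A, b)

# A smooth pattern translated by half a pixel in x between frames.
x = np.linspace(0, 2 * np.pi, 32)
X, Y = np.meshgrid(x, x)
dx = x[1] - x[0]
I0 = np.sin(X) + np.cos(Y)
I1 = np.sin(X - 0.5 * dx) + np.cos(Y)  # pattern moved +0.5 px in x
uv = lucas_kanade_window(I0, I1)
```

The recovered vector comes out close to (0.5, 0); the hierarchical schemes discussed in the lecture exist precisely because this linearization fails for large displacements.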
ECSE-6969 Computer Vision for Visual Effects Rich Radke, Rensselaer Polytechnic Institute Lecture 21: Inverse kinematics and motion editing (4/14/14) 0:00:05 Inverse kinematics 0:06:09 Inverse differential kinematics 0:13:01 Optimization-based inverse kinematics 0:22:31 Inverse kinematics example 0:28:19 Footskate 0:30:21 Motion editing 0:32:40 Motion scaling 0:35:32 Motion blending/interpolation 0:43:57 Motion interpolation examples 0:48:15 Motion graphs 0:53:32 Motion graph examples Follows Sections 7.4-7.5 of the textbook. 🤍 Key references: K. Yamane and Y. Nakamura. Natural motion animation through constraining and deconstraining at will. IEEE Transactions on Visualization and Computer Graphics, 9(3):352–360, July 2003. 🤍 J. Zhao and N. I. Badler. Inverse kinematics positioning using nonlinear programming for highly articulated figures. ACM Transactions on Graphics, 13(4):313–336, Oct. 1994. 🤍 K. Grochow, S. L. Martin, A. Hertzmann, and Z. Popović. Style-based inverse kinematics. In ACM SIGGRAPH (ACM Transactions on Graphics), 2004. 🤍 L. Kovar and M. Gleicher. Flexible automatic motion blending with registration curves. In ACM SIGGRAPH/Eurographics Symposium on Computer Animation, 2003. 🤍 L. Kovar, M. Gleicher, and F. Pighin. Motion graphs. In ACM SIGGRAPH (ACM Transactions on Graphics), 2002. 🤍
ECSE-6969 Computer Vision for Visual Effects Rich Radke, Rensselaer Polytechnic Institute Lecture 26: 3D features and registration (5/1/14) 0:00:04 Algorithms for processing 3D data 0:04:24 3D feature detection 0:05:42 Spin images 0:13:38 Shape contexts 0:14:55 Features in 3D+color scans 0:15:43 Backprojected SIFT features 0:16:43 Physical scale keypoints 0:22:16 3D registration 0:24:27 Iterative Closest Points (ICP) 0:30:42 ICP refinements 0:35:24 3D registration example 0:38:23 Exploiting free space 0:39:41 Multiscan fusion 0:42:57 Combining triangulated meshes 0:44:31 VRIP 0:47:40 Scattered data interpolation 0:51:38 Poisson surface reconstruction 0:53:39 3D object detection 0:55:29 3D stroke-based segmentation 0:56:09 3D inpainting Follows Section 8.4 of the textbook. 🤍 Key references: A. Johnson and M. Hebert. Using spin images for efficient object recognition in cluttered 3D scenes. IEEE Transactions on Pattern Analysis and Machine Intelligence, 21(5):433–449, May 1999. 🤍 A. Frome, D. Huber, R. Kolluri, T. Bülow, and J. Malik. Recognizing objects in range data using regional point descriptors. In European Conference on Computer Vision (ECCV), 2004. 🤍 E. Smith, R. J. Radke, and C. Stewart. Physical scale keypoints: Matching and registration for combined intensity/range images. International Journal of Computer Vision, 97(1):2–17, Mar. 2012. 🤍 E. R. Smith, B. J. King, C. V. Stewart, and R. J. Radke. Registration of combined range-intensity scans: Initialization through verification. Computer Vision and Image Understanding, 110(2):226–244, May 2008. 🤍 S. Rusinkiewicz and M. Levoy. Efficient variants of the ICP algorithm. In International Conference on 3-D Digital Imaging and Modeling (3DIM), 2001. 🤍 B. Curless and M. Levoy. A volumetric method for building complex models from range images. In ACM SIGGRAPH (ACM Transactions on Graphics), 1996. 🤍 G. Turk and J. F. O'Brien. Shape transformation using variational implicit functions. 
In ACM SIGGRAPH (ACM Transactions on Graphics), 1999. 🤍 J. C. Carr, R. K. Beatson, J. B. Cherrie, T. J. Mitchell, W. R. Fright, B. C. McCallum, and T. R. Evans. Reconstruction and representation of 3D objects with radial basis functions. In ACM SIGGRAPH (ACM Transactions on Graphics), 2001. 🤍 M. Kazhdan, M. Bolitho, and H. Hoppe. Poisson surface reconstruction. In Eurographics Symposium on Geometry Processing, 2006. 🤍
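The ICP loop from 0:24:27 alternates nearest-neighbour matching with a closed-form rigid alignment. A minimal brute-force sketch (small point clouds, the standard SVD/Procrustes solution for the alignment step; real systems add the refinements the lecture covers, e.g. point-to-plane distances and outlier rejection):

```python
import numpy as np

def best_rigid_transform(P, Q):
    """Least-squares rotation R and translation t with R p + t ~= q,
    for corresponded point sets P, Q (rows are 3D points), via SVD."""
    cp, cq = P.mean(axis=0), Q.mean(axis=0)
    H = (P - cp).T @ (Q - cq)
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, 1.0, np.linalg.det(Vt.T @ U.T)])  # avoid reflections
    R = Vt.T @ D @ U.T
    t = cq - R @ cp
    return R, t

def icp_iteration(P, Q):
    """One ICP step: match each point of P to its nearest neighbour in Q
    (brute force), then apply the closed-form rigid alignment."""
    d2 = ((P[:, None, :] - Q[None, :, :]) ** 2).sum(axis=2)
    matches = Q[np.argmin(d2, axis=1)]
    R, t = best_rigid_transform(P, matches)
    return P @ R.T + t, R, t

rng = np.random.default_rng(0)
Q = rng.normal(size=(30, 3))
angle = 0.1
Rz = np.array([[np.cos(angle), -np.sin(angle), 0.0],
               [np.sin(angle),  np.cos(angle), 0.0],
               [0.0, 0.0, 1.0]])
P = Q @ Rz.T + np.array([0.2, -0.1, 0.3])  # misaligned copy of Q

# With known correspondences the closed-form step is exact:
R0, t0 = best_rigid_transform(P, Q)

# Full ICP: the nearest-neighbour cost is non-increasing across iterations.
def nn_cost(A):
    return ((A[:, None, :] - Q[None, :, :]) ** 2).sum(axis=2).min(axis=1).mean()

X = P
cost_before = nn_cost(X)
for _ in range(5):
    X, _, _ = icp_iteration(X, Q)
cost_after = nn_cost(X)
```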
ECSE-6969 Computer Vision for Visual Effects Rich Radke, Rensselaer Polytechnic Institute Lecture 17: Image formation and single-camera calibration (2/27/14) 0:01:32 Matchmoving (a.k.a. camera tracking, structure from motion) 0:08:24 Feature correspondences and tracks 0:14:18 False matches 0:16:27 Image formation; pinhole projection 0:25:04 Perspective projection equations 0:29:15 The camera calibration matrix K; internal parameters 0:32:11 Homogeneous coordinates 0:35:53 Lens distortion 0:41:24 External parameters; camera coordinate system 0:44:34 The camera matrix P 0:49:28 Single camera calibration 0:49:32 Resectioning 0:55:24 Plane-based calibration 1:08:59 Matlab camera calibration toolbox Follows Sections 6.1-6.3 of the textbook. 🤍 Key references: R. Hartley and A. Zisserman. Multiple View Geometry in Computer Vision. Cambridge University Press, 2nd edition, 2004. 🤍 T. Dobbert. Matchmoving: the Invisible Art of Camera Tracking. Sybex, 2005. 🤍 Z. Zhang. A flexible new technique for camera calibration. Technical Report MSR-TR-98-71, Microsoft Research, 1998. 🤍 Camera calibration toolbox by J.-Y. Bouguet 🤍
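The perspective projection equations from 0:25:04 and the camera matrix P = K[R | t] from 0:44:34 can be exercised in a few lines. An illustrative sketch with made-up internal parameters (focal length 500, principal point (320, 240)) and no lens distortion:

```python
import numpy as np

def project(K, R, t, X):
    """Project 3D world points X (rows) through the camera P = K [R | t],
    returning inhomogeneous pixel coordinates: map to camera coordinates,
    apply the calibration matrix, then dehomogenize."""
    Xc = X @ R.T + t              # world -> camera coordinates
    x = Xc @ K.T                  # homogeneous pixel coordinates
    return x[:, :2] / x[:, 2:3]   # divide out the depth

K = np.array([[500.0, 0.0, 320.0],
              [0.0, 500.0, 240.0],
              [0.0, 0.0, 1.0]])
R = np.eye(3)       # camera at the world origin, looking down +z
t = np.zeros(3)
pts = np.array([[0.0, 0.0, 2.0],    # on the optical axis
                [0.1, 0.0, 2.0]])   # slightly off-axis
uv = project(K, R, t, pts)
```

A point on the optical axis lands on the principal point (320, 240); the off-axis point lands at (320 + 500 * 0.1 / 2, 240) = (345, 240), matching the perspective projection equations u = f X/Z + c_x, v = f Y/Z + c_y.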
ECSE-6969 Computer Vision for Visual Effects Rich Radke, Rensselaer Polytechnic Institute Lecture 22: Facial and markerless motion capture (4/17/14) 0:00:50 Facial motion capture 0:01:31 Facial markers 0:05:20 The MOVA Contour system 0:09:45 The ICT Light Stage 0:13:44 Markerless motion capture 0:16:10 Why is it difficult? 0:20:43 Posing the problem 0:27:12 Silhouettes and edges 0:34:36 Visual hulls and voxel carving 0:40:37 Voxel carving example 0:43:52 Voxel carving in VFX 0:45:35 The ICT Light Stage 0:47:23 Depth cameras (e.g., the Microsoft Kinect) Follows Sections 7.6-7.7 of the textbook. 🤍 Key references: B. Guenter, C. Grimm, D. Wood, H. Malvar, and F. Pighin. Making faces. In ACM SIGGRAPH (ACM Transactions on Graphics), 1998. 🤍 J. Deutscher and I. Reid. Articulated body motion capture by stochastic search. International Journal of Computer Vision, 61(2):185–205, Feb. 2005. 🤍 C. Sminchisescu and B. Triggs. Estimating articulated human motion with covariance scaled sampling. International Journal of Robotics Research, 22(6):371–391, June 2003. 🤍 M. Shaheen, J. Gall, R. Strzodka, L. Van Gool, and H.-P. Seidel. A comparison of 3D model-based tracking approaches for human motion capture in uncontrolled environments. In IEEE Computer Society Workshop on Applications of Computer Vision, 2009. 🤍 D. Vlasic, I. Baran, W. Matusik, and J. Popović. Articulated mesh animation from multi-view silhouettes. In ACM SIGGRAPH (ACM Transactions on Graphics), 2008. 🤍 J. Shotton, A. Fitzgibbon, M. Cook, T. Sharp, M. Finocchio, R. Moore, A. Kipman, and A. Blake. Real-time human pose recognition in parts from a single depth image. In IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR), 2011. 🤍
ECSE-6969 Computer Vision for Visual Effects Rich Radke, Rensselaer Polytechnic Institute Lecture 18: Stereo rig calibration and projective reconstruction (3/31/14) 0:00:45 Stereo rig calibration 0:04:49 The relationship between P, P', and F 0:10:58 Ambiguities in stereo calibration 0:15:40 Removing ambiguity with known internal parameters 0:25:59 Commercial stereo rigs 0:29:36 Image sequence calibration (structure from motion) 0:30:21 The overall process 0:33:06 Bundle adjustment 0:38:35 Initializing with projective reconstruction 0:47:29 Sturm-Triggs algorithm 0:53:40 Sequential/hierarchical updating Follows Sections 6.4-6.5.1 of the textbook. 🤍 Key references: R. Hartley and A. Zisserman. Multiple View Geometry in Computer Vision. Cambridge University Press, 2nd edition, 2004. 🤍 P. Beardsley, A. Zisserman, and D. Murray. Sequential updating of projective and affine structure from motion. International Journal of Computer Vision, 23(3):235–259, June 1997. 🤍 P. Sturm and B. Triggs. A factorization based algorithm for multi-image projective structure and motion. In European Conference on Computer Vision (ECCV), 1996. 🤍 R. I. Hartley and P. Sturm. Triangulation. Computer Vision and Image Understanding, 68(2):146–157, Nov. 1997. 🤍
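The P/P'/F relationship from 0:04:49 can be checked numerically. An illustrative sketch with made-up cameras, using the standard relations E = [t]_x R and F = K'^-T E K^-1 (here both views share the same K), then verifying the epipolar constraint x'^T F x = 0 on projected points:

```python
import numpy as np

def skew(v):
    """The 3x3 cross-product matrix [v]_x, so skew(v) @ a == np.cross(v, a)."""
    return np.array([[0.0, -v[2], v[1]],
                     [v[2], 0.0, -v[0]],
                     [-v[1], v[0], 0.0]])

# Two cameras: P = K [I | 0] and P' = K [R | t].
K = np.array([[800.0, 0.0, 320.0], [0.0, 800.0, 240.0], [0.0, 0.0, 1.0]])
a = 0.1
R = np.array([[np.cos(a), 0.0, np.sin(a)],
              [0.0, 1.0, 0.0],
              [-np.sin(a), 0.0, np.cos(a)]])
t = np.array([1.0, 0.2, 0.1])
E = skew(t) @ R                                   # essential matrix
F = np.linalg.inv(K).T @ E @ np.linalg.inv(K)     # fundamental matrix

# Project random 3D points into both views (homogeneous pixel coordinates).
rng = np.random.default_rng(1)
X = rng.uniform(-1, 1, size=(20, 3)) + np.array([0.0, 0.0, 5.0])
x1 = X @ K.T
x1 = x1 / x1[:, 2:3]
x2 = (X @ R.T + t) @ K.T
x2 = x2 / x2[:, 2:3]
residuals = np.einsum('ni,ij,nj->n', x2, F, x1)   # x'^T F x for each point
```

Every residual is zero to machine precision; the calibration ambiguities at 0:10:58 arise because many (P, P') pairs reproduce the same F.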
ECSE-6969 Computer Vision for Visual Effects Rich Radke, Rensselaer Polytechnic Institute Lecture 25: Multiview stereo (4/28/14) 0:00:01 Multiview stereo introduction 0:07:50 Multiview stereo benchmarking 0:10:20 Volumetric methods 0:17:30 Surface deformation methods 0:23:57 Surface-based reprojection 0:28:17 Patch-based methods 0:35:51 Patch-based reconstruction videos 0:38:33 Patch-based MVS software 0:44:45 MVS on smartphones and PCs 0:48:41 MVS in L.A. Noire 0:51:20 Artificial lens blur from MVS Follows Section 8.3 of the textbook. 🤍 Key references: Y. Furukawa and J. Ponce. Accurate, dense, and robust multiview stereopsis. IEEE Transactions on Pattern Analysis and Machine Intelligence, 32(8):1362–1376, Aug. 2010. 🤍 S. Seitz, B. Curless, J. Diebel, D. Scharstein, and R. Szeliski. A comparison and evaluation of multi-view stereo reconstruction algorithms. In IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR), 2006. 🤍 C. Strecha, W. von Hansen, L. Van Gool, P. Fua, and U. Thoennessen. On benchmarking camera calibration and multi-view stereo for high resolution imagery. In IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR), 2008. 🤍 K. N. Kutulakos and S. M. Seitz. A theory of shape by space carving. International Journal of Computer Vision, 38(3):199–218, July 2000. 🤍 J.-P. Pons, R. Keriven, and O. Faugeras. Multi-view stereo reconstruction and scene flow estimation with a global image-based matching score. International Journal of Computer Vision, 72(2):179–193, June 2007. 🤍 M. Goesele, B. Curless, and S. Seitz. Multi-view stereo revisited. In IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR), 2006. 🤍
ECSE-6969 Computer Vision for Visual Effects Rich Radke, Rensselaer Polytechnic Institute Lecture 3: Closed-form matting (1/30/14) 0:00:01 Closed-form matting 0:02:09 The color line assumption 0:14:04 alpha is a linear function of I 0:23:26 The cost function J 0:37:25 J as a function of alpha 0:39:20 The matting Laplacian 0:44:21 Constraining the matte with scribbles 0:48:36 An example result 0:56:27 Spectral matting 1:06:45 Combining matting components Follows Section 2.4 of the textbook. 🤍 Key references: A. Levin, D. Lischinski, and Y. Weiss. A closed-form solution to natural image matting. IEEE Transactions on Pattern Analysis and Machine Intelligence, 30(2):228–242, Feb. 2008. 🤍 A. Levin, A. Rav-Acha, and D. Lischinski. Spectral matting. IEEE Transactions on Pattern Analysis and Machine Intelligence, 30(10):1699–1712, Oct. 2008. 🤍
ECSE-6969 Computer Vision for Visual Effects Rich Radke, Rensselaer Polytechnic Institute Lecture 2: Bluescreen and Bayesian matting (1/27/14) 0:00:02 Matting 0:00:41 The matting equation 0:04:41 Why are there non-binary alphas? 0:08:37 Hard segmentation vs. soft segmentation 0:11:58 Matting ambiguity 0:17:27 Trimaps 0:20:14 User strokes 0:20:48 Benchmarking matting algorithms 0:22:54 Bluescreen matting 0:32:39 Difference matting 0:36:40 Getting ground-truth mattes 0:45:07 Natural image matting 0:45:58 Bayesian matting 1:05:56 Distributions of F, B, and alpha Follows Sections 2.1-2.3 of the textbook, 🤍. Key references: A. Smith and J. Blinn. Blue screen matting. In ACM SIGGRAPH (ACM Transactions on Graphics), 1996. 🤍 Y.-Y. Chuang, B. Curless, D. Salesin, and R. Szeliski. A Bayesian approach to digital matting. In IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR), 2001. 🤍
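The matting equation from 0:00:41 — I = alpha * F + (1 - alpha) * B per pixel — is the one-liner everything in this lecture builds on; matting is hard precisely because the equation must be inverted (recovering F, B, and alpha from I alone), while compositing is the easy forward direction. A minimal NumPy sketch of the forward direction:

```python
import numpy as np

def composite(F, B, alpha):
    """The matting equation I = alpha * F + (1 - alpha) * B, applied per
    pixel; the scalar matte alpha broadcasts over the colour channels."""
    a = alpha[..., None]
    return a * F + (1.0 - a) * B

# A 2x2 toy example: pull a red foreground over a blue background.
F = np.ones((2, 2, 3)) * [1.0, 0.0, 0.0]    # red foreground
B = np.ones((2, 2, 3)) * [0.0, 0.0, 1.0]    # blue background
alpha = np.array([[1.0, 0.5],
                  [0.0, 1.0]])              # soft (non-binary) matte
I = composite(F, B, alpha)
```

The alpha = 0.5 pixel blends to purple, illustrating the non-binary alphas discussed at 0:04:41 (mixed pixels at hair, motion blur, and transparency).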
ECSE-6969 Computer Vision for Visual Effects Rich Radke, Rensselaer Polytechnic Institute Lecture 24: Structured light scanning (4/24/14) 0:00:00 Structured light scanning 0:02:08 Structured light geometry and calibration 0:07:48 Commercial structured light systems 0:09:52 Structured light for VFX (body scanning) 0:11:56 Structured light for sculpture scanning 0:15:45 Example structured light actor/prop scans 0:19:10 Projector + two camera systems 0:21:13 Spacetime analysis 0:23:45 Challenges of stripe scanning 0:25:04 Structured light stripe patterns 0:27:13 Gray codes 0:28:14 On/off coding 0:32:25 Stripe boundary codes 0:34:20 Color stripe patterns 0:35:25 de Bruijn sequences 0:37:47 Dynamic programming for matching 0:41:16 Fringe patterns 0:43:24 The Kinect v1 dot pattern 0:45:40 Commercial handheld scanners 0:48:05 Live structured light scanning demo 0:54:43 A visit to Gentle Giant Studios (3D VFX company) Follows Section 8.2 of the textbook. 🤍 Key references: C. Chen and A. Kak. Modeling and calibration of a structured light scanner for 3-D robot vision. In IEEE International Conference on Robotics and Automation, 1987. 🤍 D. Huynh. Calibration of a structured light system: a projective approach. In IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR), 1997. 🤍 B. Curless and M. Levoy. Better optical triangulation through spacetime analysis. In IEEE International Conference on Computer Vision (ICCV), 1995. 🤍 D. Scharstein and R. Szeliski. High-accuracy stereo depth maps using structured light. In IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR), 2003. 🤍 O. Hall-Holt and S. Rusinkiewicz. Stripe boundary codes for real-time structured-light range scanning of moving objects. In IEEE International Conference on Computer Vision (ICCV), 2001. 🤍 L. Zhang, B. Curless, and S. Seitz. Rapid shape acquisition using color structured light and multi-pass dynamic programming. 
In International Symposium on 3D Data Processing Visualization and Transmission (3DPVT), 2002. 🤍 P. S. Huang, C. Zhang, and F.-P. Chiang. High-speed 3-D shape measurement based on digital fringe projection. Optical Engineering, 42(1):163–168, Jan. 2003. 🤍
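The Gray codes from 0:27:13 are binary-reflected codes: consecutive stripe indices differ in exactly one bit, which makes the on/off patterns robust to boundary decoding errors. An illustrative sketch of generating and decoding the patterns:

```python
def gray_code(i):
    """Binary-reflected Gray code of integer i."""
    return i ^ (i >> 1)

def gray_decode(g):
    """Invert the Gray code by prefix-XOR."""
    i = 0
    while g:
        i ^= g
        g >>= 1
    return i

def stripe_patterns(num_bits):
    """The num_bits on/off patterns to project: pattern b records, for each
    of the 2**num_bits stripe columns, whether bit b of its Gray code is
    set (most significant bit first)."""
    n = 1 << num_bits
    return [[(gray_code(c) >> b) & 1 for c in range(n)]
            for b in reversed(range(num_bits))]

def decode_column(patterns, c):
    """Reassemble the observed bits at column c and invert the Gray code."""
    g = 0
    for row in patterns:
        g = (g << 1) | row[c]
    return gray_decode(g)

patterns = stripe_patterns(3)  # 3 projected patterns index 8 stripes
```

With 3 patterns, 8 stripe columns decode back to their own indices, and any single misread bit lands in an adjacent stripe rather than an arbitrary one.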
The Unfeasible Adventures of Red, Episode One: an Among Us CG animation/short film created by CKVFX. Red was a lot of fun to animate, so we decided to create a series of short animations showing his adventures after the short film. We had the idea to cross over some other universes, and Alien came to mind. We did a search and found people had already posted some excellent videos of an Alien encounter, so we knew it would be something the fans would be OK with. This is our take on that idea. We hope you enjoy and like the series! Thanks for watching! Please Like, Share and Subscribe to keep up to date with the further adventures of Red. We are huge fans of the game and concept, so we just had to have a play with the impostor idea. No offence or copyright infringement intended — just big fans. Music by MATTIA CUPELLI OFFICIAL WEBSITE: 🤍 Link to the gun 3D model: 🤍 Other credits included in the video. If you haven't already, check out the game by Innersloth now: 🤍 Available on: Steam : 🤍 Android : 🤍 iOS : 🤍 Also available on Nintendo Switch. Alien/Aliens is owned by 20th Century Fox Studios. Twitter ► 🤍 Facebook ► 🤍 Watch the whole playlist ► 🤍 New to the channel? We are visual effects artists who work primarily on TV movies and occasionally dabble in other areas of digital creation (gaming, just-for-fun videos, etc.). Consider subscribing so that you don't miss out on some fun stuff we have planned for the channel. #amongus #amongusgame #CGI #shortfilm #animated
ECSE-6969 Computer Vision for Visual Effects Rich Radke, Rensselaer Polytechnic Institute Lecture 20: Motion capture setup and forward kinematics (4/7/14) 0:00:03 Motion capture 0:03:27 Mocap pipeline 0:06:02 Motion capture technology alternatives 0:09:22 The motion capture environment 0:14:37 Mocap marker placement 0:19:27 Triangulating markers 0:24:00 Interpolating missing markers 0:32:34 Mocap examples 0:38:06 The kinematic model 0:44:21 Forward kinematics 0:49:02 Parameterizing 3D rotations and rigid motions Follows Sections 7.1-7.3 of the textbook. 🤍 Key references: A. Menache. Understanding Motion Capture for Computer Animation. Morgan Kaufmann, 2nd edition, 2011. G. Liu and L. McMillan. Estimation of missing markers in human motion capture. The Visual Computer, 22(9):721–728, Sept. 2006. 🤍 C. Bregler, J. Malik, and K. Pullen. Twist based acquisition and tracking of animal and human kinematics. International Journal of Computer Vision, 56(3):179–194, Feb. 2004. 🤍
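Forward kinematics (0:44:21) composes transforms from the root of the kinematic model out to each joint. A minimal planar two-link sketch (illustrative; real skeletons use 3D rotations and the parameterizations discussed at 0:49:02):

```python
import numpy as np

def fk_planar(lengths, angles):
    """Forward kinematics of a planar chain: each joint angle is relative
    to the previous link; accumulate the rotation root-to-tip and return
    every joint position, with the root at the origin."""
    pts = [np.zeros(2)]
    total = 0.0
    for L, a in zip(lengths, angles):
        total += a
        pts.append(pts[-1] + L * np.array([np.cos(total), np.sin(total)]))
    return np.array(pts)

# Two unit links: first joint at 90 degrees, second bent back 90 degrees.
pts = fk_planar([1.0, 1.0], [np.pi / 2, -np.pi / 2])
```

The elbow lands at (0, 1) and the end effector at (1, 1). Inverse kinematics (the subject of Lecture 21) runs this map in reverse, searching for angles that place the end effector at a target.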
ECSE-6969 Computer Vision for Visual Effects Rich Radke, Rensselaer Polytechnic Institute Lecture 16: Video matching, morphing, and view synthesis (3/24/14) 0:00:02 Video matching 0:06:24 Video matching cost function 0:11:03 Original video matching videos 0:17:58 Morphing 0:19:32 Cross-dissolving 0:23:00 Point correspondences for morphing 0:24:06 Line correspondences for morphing 0:24:35 Warping + cross-dissolving 0:33:09 Beier-Neely morphing (field morphing) 0:42:26 Morphing isn't physically consistent 0:45:23 View interpolation 0:49:29 View morphing 0:55:43 Virtual video synthesis 1:08:12 Bullet time Follows Sections 5.6-5.8 of the textbook. 🤍 Key references: P. Sand and S. Teller. Video matching. In ACM SIGGRAPH (ACM Transactions on Graphics), 2004. 🤍 T. Beier and S. Neely. Feature-based image metamorphosis. In ACM SIGGRAPH (ACM Transactions on Graphics), 1992. 🤍 S. E. Chen and L. Williams. View interpolation for image synthesis. In ACM SIGGRAPH (ACM Transactions on Graphics), 1993. 🤍 S. M. Seitz and C. R. Dyer. View morphing. In ACM SIGGRAPH (ACM Transactions on Graphics), 1996. 🤍 R. Radke, P. Ramadge, S. Kulkarni, and T. Echigo. Efficiently synthesizing virtual video. IEEE Transactions on Circuits and Systems for Video Technology, 13(4):325–337, Apr. 2003. 🤍
ECSE-6969 Computer Vision for Visual Effects Rich Radke, Rensselaer Polytechnic Institute Lecture 12: Parametric Transformations and Scattered Data Interpolation (3/3/14) 0:00:01 Computer Vision for Visual Effects 0:00:43 Dense correspondence vs. feature matching 0:01:51 Motion vectors 0:05:40 Parametric transformations 0:06:11 Translation 0:06:31 Rotation 0:06:59 Similarity transformations 0:08:03 Shears 0:09:40 Affine transformations 0:10:50 Projective transformations 0:13:51 Estimating projective transformations 0:18:33 Pre-normalizing correspondences 0:19:59 The Direct Linear Transform (DLT) 0:21:29 Outlier rejection 0:25:59 Scattered data interpolation 0:26:50 Bilinear interpolation 0:28:57 Thin-plate spline interpolation 0:38:00 Thin-plate interpolation example 0:44:27 B-spline interpolation 0:45:50 Diffeomorphic transformations Follows Sections 5.1-5.2 of the textbook. 🤍 Key references: R. Hartley and A. Zisserman. Multiple View Geometry in Computer Vision. Cambridge University Press, 2nd edition, 2004. 🤍 F. Bookstein. Principal warps: thin-plate splines and the decomposition of deformations. IEEE Transactions on Pattern Analysis and Machine Intelligence, 11(6):567–585, June 1989. 🤍 S. Joshi and M. Miller. Landmark matching via large deformation diffeomorphisms. IEEE Transactions on Image Processing, 9(8):1357–1370, Aug. 2000. 🤍
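The Direct Linear Transform from 0:19:59 estimates a projective transformation (homography) from point correspondences: each pair contributes two linear rows, and the solution is the smallest-singular-value right singular vector. An illustrative sketch that omits the pre-normalization step recommended at 0:18:33, which exact synthetic data does not need:

```python
import numpy as np

def estimate_homography(src, dst):
    """DLT: build A h = 0 from correspondences (x, y) -> (u, v), two rows
    per pair, and take the right singular vector of A with the smallest
    singular value as the 9-vector h, reshaped to the 3x3 homography."""
    A = []
    for (x, y), (u, v) in zip(src, dst):
        A.append([-x, -y, -1.0, 0.0, 0.0, 0.0, u * x, u * y, u])
        A.append([0.0, 0.0, 0.0, -x, -y, -1.0, v * x, v * y, v])
    _, _, Vt = np.linalg.svd(np.array(A))
    H = Vt[-1].reshape(3, 3)
    return H / H[2, 2]   # fix the projective scale

# Round trip: warp 4 points by a known homography and recover it.
H_true = np.array([[1.2, 0.1, 5.0],
                   [-0.05, 0.9, -3.0],
                   [0.001, 0.002, 1.0]])
src = np.array([[0.0, 0.0], [1.0, 0.0], [1.0, 1.0], [0.0, 1.0]])
h = np.c_[src, np.ones(4)] @ H_true.T
dst = h[:, :2] / h[:, 2:3]
H_est = estimate_homography(src, dst)
```

With four exact correspondences the 8x9 system has a one-dimensional null space and the homography is recovered to machine precision; with noisy, over-determined data the same SVD gives the least-squares fit, and outlier rejection (0:21:29, e.g. RANSAC) guards the estimate against false matches.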
ECSE-6969 Computer Vision for Visual Effects Rich Radke, Rensselaer Polytechnic Institute Lecture 23: LiDAR and time-of-flight sensing (4/21/14) 0:00:01 3D data acquisition 0:01:12 LiDAR scanning 0:05:06 LiDAR data example 0:09:33 LiDAR scanning difficulties 0:15:10 LiDAR scanning principles 0:15:20 Pulse-based LiDAR 0:20:23 Phase-based LiDAR 0:27:02 LiDAR scanning for VFX examples 0:31:58 LiDAR scanning for autonomous vehicles 0:34:05 Time-of-flight cameras 0:37:20 ToF image and video examples 0:42:27 Live ToF results from Microsoft Kinect 0:50:06 Skeleton estimation from Kinect SDK Follows Section 8.1 of the textbook. 🤍 Key references: R. A. Jarvis. A perspective on range finding techniques for computer vision. IEEE Transactions on Pattern Analysis and Machine Intelligence, PAMI-5(2):122–139, Mar. 1983. 🤍 P. J. Besl. Active, optical range imaging sensors. Machine Vision and Applications, 1(2):127–152, June 1988. 🤍 A. Kolb, E. Barth, R. Koch, and R. Larsen. Time-of-flight cameras in computer graphics. In Eurographics, 2010. 🤍 Y. Cui, S. Schuon, D. Chan, S. Thrun, and C. Theobalt. 3D shape scanning with a time-of-flight camera. In IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR), 2010. 🤍
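The two ranging principles from 0:15:20 and 0:20:23 are each one formula. A small numeric sketch (idealized: no noise, and the example values are chosen so the phase measurement stays within its unambiguous range):

```python
import math

C = 299_792_458.0  # speed of light, m/s

def pulse_range(round_trip_seconds):
    """Pulse-based LiDAR: r = c * t / 2, since the pulse travels out
    and back in the measured round-trip time t."""
    return C * round_trip_seconds / 2.0

def phase_range(phase_shift_rad, mod_freq_hz):
    """Phase-based LiDAR: a beam amplitude-modulated at frequency f
    returns shifted by delta_phi = 2 pi f * (2 r / c), so
    r = c * delta_phi / (4 pi f). Unambiguous only while delta_phi
    stays below 2 pi, i.e. within half the modulation wavelength."""
    return C * phase_shift_rad / (4.0 * math.pi * mod_freq_hz)

# Pulse: round-trip time for a target 100 m away (about 667 ns).
t = 2 * 100.0 / C
r1 = pulse_range(t)

# Phase: shift observed for a 12 m target at 10 MHz modulation
# (unambiguous range c / (2 f) = 15 m, so 12 m is safely inside it).
phi = 4 * math.pi * 10e6 * 12.0 / C
r2 = phase_range(phi, 10e6)
```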
ECSE-6969 Computer Vision for Visual Effects Rich Radke, Rensselaer Polytechnic Institute Lecture 7: Photomontage and Image Inpainting (2/13/14) 0:01:44 Graph-cut compositing 0:03:28 Seams between images 0:05:05 Setting up the graph 0:12:16 Image examples 0:14:15 Modified weights 0:18:37 Extending to multiple images 0:19:42 Alpha expansion 0:23:36 Photomontage example 0:31:28 Image inpainting 0:34:53 Isophotes 0:37:56 PDE-based formulation 0:41:46 Image examples (PDE-based) 0:45:45 Patch-based formulation 0:51:08 Determining the priority on the fill front 0:59:24 Image examples (patch-based) Follows Sections 3.3-3.4 of the textbook. 🤍 Key references: A. Agarwala, M. Dontcheva, M. Agrawala, S. Drucker, A. Colburn, B. Curless, D. Salesin, and M. Cohen. Interactive digital photomontage. In ACM SIGGRAPH (ACM Transactions on Graphics), 2004. 🤍 M. Bertalmio, G. Sapiro, V. Caselles, and C. Ballester. Image inpainting. In ACM SIGGRAPH (ACM Transactions on Graphics), 2000. 🤍 A. Criminisi, P. Pérez, and K. Toyama. Region filling and object removal by exemplar-based image inpainting. IEEE Transactions on Image Processing, 13(9):1200–1212, Sept. 2004. 🤍
ECSE-6969 Computer Vision for Visual Effects Rich Radke, Rensselaer Polytechnic Institute Lecture 11: Feature evaluation and use (2/27/14) 0:00:15 Detector and descriptor combinations 0:02:49 Feature evaluation: repeatability 0:08:09 Feature evaluation: matchability 0:13:35 Color features 0:15:45 Artificial features (tags) 0:26:55 Artificial features (3D structures) 0:30:18 Features in TV and movies 0:33:33 Features in consumer electronics (e.g., smartphones) Follows Sections 4.3-4.5 of the textbook. 🤍 Key references: K. Mikolajczyk, T. Tuytelaars, C. Schmid, A. Zisserman, J. Matas, F. Schaffalitzky, T. Kadir, and L. Van Gool. A comparison of affine region detectors. International Journal of Computer Vision, 65(1):43–72, Nov. 2005. 🤍 K. Mikolajczyk and C. Schmid. A performance evaluation of local descriptors. IEEE Transactions on Pattern Analysis and Machine Intelligence, 27(10):1615–1630, Oct. 2005. 🤍 M. Fiala. Designing highly reliable fiducial markers. IEEE Transactions on Pattern Analysis and Machine Intelligence, 32(7):1317–1324, July 2010. 🤍 See also: 🤍 🤍
ECSE-6969 Computer Vision for Visual Effects Rich Radke, Rensselaer Polytechnic Institute Lecture 19: Euclidean reconstruction and bundle adjustment (4/3/14) 0:00:50 Projective reconstruction ambiguity 0:04:35 Upgrading to a metric (Euclidean) reconstruction 0:11:01 The DIAC (dual image of the absolute conic) 0:19:34 Using constraints on P to constrain w 0:28:16 Bundle adjustment 0:35:25 Sparse structure of the problem 0:39:59 Camera tracking example 0:47:44 Camera tracking examples from movies 0:53:03 Phototourism; large scale bundle adjustment 0:57:10 SLAM (simultaneous localization and mapping) Follows Sections 6.5.2-6.6 of the textbook. 🤍 Key references: M. Pollefeys, R. Koch, and L. Van Gool. Self-calibration and metric reconstruction in spite of varying and unknown intrinsic camera parameters. International Journal of Computer Vision, 32(1):7–25, Aug. 1999. 🤍 M. I. A. Lourakis and A. A. Argyros. SBA: a software package for generic sparse bundle adjustment. ACM Transactions on Mathematical Software, 36(1):2:1–2:30, Mar. 2009. 🤍 N. Snavely, S. Seitz, and R. Szeliski. Modeling the world from internet photo collections. International Journal of Computer Vision, 80(2):189–210, Nov. 2008. 🤍
ECSE-6969 Computer Vision for Visual Effects Rich Radke, Rensselaer Polytechnic Institute Lecture 10: Feature descriptors (2/24/14) 0:00:01 Feature descriptors 0:03:37 Why not use blocks around the detected feature? 0:07:26 Dominant gradient orientation 0:17:17 Normalizing block size and intensity 0:20:02 Comparing descriptors 0:23:52 Nearest neighbor distance ratio 0:27:07 The SIFT descriptor 0:39:25 Other descriptors 0:42:40 SURF features 0:45:37 Rotation and affine invariance 0:56:03 FAST corners 0:58:45 MSERs (Maximally stable extremal regions) Follows Section 4.2 of the textbook. 🤍 Key references: D. G. Lowe. Distinctive image features from scale-invariant keypoints. International Journal of Computer Vision, 60(2):91–110, Nov. 2004. 🤍 K. Mikolajczyk and C. Schmid. Scale and affine invariant interest point detectors. International Journal of Computer Vision, 60(1):63–86, Oct. 2004. 🤍 H. Bay, T. Tuytelaars, and L. Van Gool. SURF: Speeded up robust features. In European Conference on Computer Vision (ECCV), 2006. 🤍 E. Rosten and T. Drummond. Machine learning for high-speed corner detection. In European Conference on Computer Vision (ECCV), 2006. 🤍 J. Matas, O. Chum, M. Urban, and T. Pajdla. Robust wide-baseline stereo from maximally stable extremal regions. Image and Vision Computing, 22(10):761–767, 2004. 🤍
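The nearest-neighbor distance ratio from 0:23:52 (Lowe's ratio test) accepts a match only when the best candidate is clearly better than the runner-up. A brute-force sketch on tiny made-up descriptors (real descriptors are 64- or 128-dimensional, and large sets use approximate nearest-neighbor search):

```python
import numpy as np

def ratio_test_matches(desc1, desc2, ratio=0.8):
    """For each descriptor in desc1, find its two nearest neighbours in
    desc2 (Euclidean distance) and accept the match only when the
    nearest is closer than `ratio` times the second nearest."""
    matches = []
    for i, d in enumerate(desc1):
        dists = np.linalg.norm(desc2 - d, axis=1)
        order = np.argsort(dists)
        best, second = order[0], order[1]
        if dists[best] < ratio * dists[second]:
            matches.append((i, int(best)))
    return matches

# Row 0 of desc1 has one distinctive match; row 1 is ambiguous (two
# near-equidistant candidates) and should be rejected by the ratio test.
desc1 = np.array([[1.0, 0.0],
                  [0.0, 1.0]])
desc2 = np.array([[1.0, 0.05],   # distinctively close to desc1[0]
                  [5.0, 5.0],    # far from everything
                  [0.0, 0.9],    # near desc1[1] ...
                  [0.0, 1.1]])   # ... and so is this: ambiguous
matches = ratio_test_matches(desc1, desc2)
```

Only the distinctive pair survives; the ambiguous descriptor is dropped, which is exactly how the test suppresses the repeated-texture false matches discussed in the lecture.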
ECSE-6969 Computer Vision for Visual Effects Rich Radke, Rensselaer Polytechnic Institute Lecture 6: Multiresolution blending and Poisson image editing (2/10/14) 0:00:50 Multiresolution blending 0:04:08 Image seams 0:05:00 Classical matte paintings on glass 0:07:02 What makes a good seam? 0:08:07 Hard-edge compositing vs. weighted transition regions 0:11:52 Laplacian pyramid blending 0:13:10 Gaussian pyramids 0:16:26 Laplacian pyramids 0:17:44 Matlab pyramid examples 0:23:30 Back and forth along the pyramids 0:30:00 The multiresolution compositing equation 0:33:02 Image examples 0:38:30 Poisson image compositing 0:43:17 The optimization problem 0:46:46 Setting up the discrete problem 0:54:28 Image examples 0:58:19 Using a guidance vector field (mixed gradients) 1:07:12 Drag-and-drop pasting Follows Sections 3.1-3.2 of the textbook. 🤍 Key references: P. J. Burt and E. H. Adelson. A multiresolution spline with application to image mosaics. ACM Transactions on Graphics, 2(4):217–236, Oct. 1983. 🤍 P. Pérez, M. Gangnet, and A. Blake. Poisson image editing. In ACM SIGGRAPH (ACM Transactions on Graphics), 2003. 🤍 J. Jia, J. Sun, C. Tang, and H. Shum. Drag-and-drop pasting. In ACM SIGGRAPH (ACM Transactions on Graphics), 2006. 🤍
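The Gaussian/Laplacian pyramid machinery from 0:13:10–0:23:30 can be sketched compactly. This illustrative version uses a separable 1-2-1 blur for REDUCE and nearest-neighbour upsampling for EXPAND (cruder than the 5-tap kernels in the lecture), but reconstruction is still exact because each Laplacian level stores the residual against the same EXPAND used to collapse:

```python
import numpy as np

def reduce_(img):
    """One REDUCE step: blur with a separable 1-2-1 kernel (edge-padded),
    then drop every other row and column."""
    k = np.array([0.25, 0.5, 0.25])
    p = np.pad(img, ((1, 1), (0, 0)), mode='edge')
    blurred = k[0] * p[:-2] + k[1] * p[1:-1] + k[2] * p[2:]          # rows
    p = np.pad(blurred, ((0, 0), (1, 1)), mode='edge')
    blurred = k[0] * p[:, :-2] + k[1] * p[:, 1:-1] + k[2] * p[:, 2:]  # cols
    return blurred[::2, ::2]

def expand(img, shape):
    """EXPAND by nearest-neighbour upsampling, cropped to `shape`."""
    return np.repeat(np.repeat(img, 2, 0), 2, 1)[:shape[0], :shape[1]]

def laplacian_pyramid(img, levels):
    """L_i = G_i - expand(G_{i+1}); the last level is the coarsest Gaussian."""
    pyr, g = [], img
    for _ in range(levels):
        smaller = reduce_(g)
        pyr.append(g - expand(smaller, g.shape))
        g = smaller
    pyr.append(g)
    return pyr

def collapse(pyr):
    """Invert the pyramid: G_i = L_i + expand(G_{i+1})."""
    g = pyr[-1]
    for lap in reversed(pyr[:-1]):
        g = lap + expand(g, lap.shape)
    return g

rng = np.random.default_rng(2)
img = rng.random((16, 16))
pyr = laplacian_pyramid(img, 3)
rec = collapse(pyr)
```

Blending then merges the Laplacian pyramids of two images level by level under a blurred mask (the multiresolution compositing equation at 0:30:00) before collapsing, so low frequencies transition over wide regions and high frequencies over narrow ones.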
ECSE-6969 Computer Vision for Visual Effects Rich Radke, Rensselaer Polytechnic Institute Lecture 15: Stereo correspondence (3/20/14) 0:00:01 Stereo correspondence 0:02:09 Disparity 0:04:43 Differences between stereo and optical flow 0:11:42 Basic stereo algorithms 0:12:04 Sum of absolute differences 0:14:27 Birchfield-Tomasi measure 0:16:31 Census transform 0:20:46 Dynamic programming for stereo 0:25:19 Non-monotonic correspondence 0:26:53 The Ohta-Kanade algorithm 0:29:31 Stereo algorithm benchmarking 0:36:21 Graph cuts for stereo 0:52:07 Belief propagation for stereo 0:56:02 Occlusions and discontinuities 0:59:53 Incorporating segmentation 1:06:50 Stereo rigs for filming Follows Section 5.5 of the textbook. 🤍 Key references: D. Scharstein and R. Szeliski. A taxonomy and evaluation of dense two-frame stereo correspondence algorithms. International Journal of Computer Vision, 47(1):7–42, Apr. 2002. 🤍 Y. Ohta and T. Kanade. Stereo by intra- and inter-scanline search using dynamic programming. IEEE Transactions on Pattern Analysis and Machine Intelligence, 7(2):139–154, Mar. 1985. 🤍 Y. Boykov, O. Veksler, and R. Zabih. Fast approximate energy minimization via graph cuts. IEEE Transactions on Pattern Analysis and Machine Intelligence, 23(11):1222–1239, Nov. 2001. 🤍 J. Sun, N.-N. Zheng, and H.-Y. Shum. Stereo matching using belief propagation. IEEE Transactions on Pattern Analysis and Machine Intelligence, 25(7):787–800, July 2003. 🤍
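The sum-of-absolute-differences matcher from 0:12:04 is the simplest of the basic stereo algorithms: for each left pixel, slide a window across candidate disparities along the rectified scanline and keep the cheapest. A winner-take-all 1D sketch on synthetic scanlines (no smoothness term, which is exactly what the dynamic programming, graph cut, and belief propagation methods later in the lecture add):

```python
import numpy as np

def sad_disparity(left, right, max_disp, half_win=2):
    """Winner-take-all SAD block matching on one rectified scanline pair.
    Left pixel x is compared against right pixels x - d for each
    candidate disparity d; border pixels where a window would fall
    outside the arrays are left at disparity 0."""
    n = len(left)
    disp = np.zeros(n, dtype=int)
    for x in range(half_win + max_disp, n - half_win):
        costs = []
        for d in range(max_disp + 1):
            l = left[x - half_win:x + half_win + 1]
            r = right[x - d - half_win:x - d + half_win + 1]
            costs.append(np.abs(l - r).sum())
        disp[x] = int(np.argmin(costs))
    return disp

# Synthetic scene: the right scanline sees everything shifted by a
# constant disparity of 3 (right[x - 3] == left[x]).
rng = np.random.default_rng(3)
scene = rng.random(63)
true_d = 3
left = scene[:60]
right = scene[true_d:60 + true_d]
disp = sad_disparity(left, right, max_disp=5)
```

Every valid pixel recovers disparity 3; real images break this clean picture at occlusions, discontinuities, and textureless regions, motivating the rest of the lecture.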
At CREATIVE VFX, we unite creativity with technology to push the boundaries of what entertainment can be.
ECSE-6969 Computer Vision for Visual Effects Rich Radke, Rensselaer Polytechnic Institute Lecture 5: Graph cut segmentation, video matting, and matting extensions (2/6/14) 0:00:13 Hard segmentation 0:02:47 Graph cuts 0:10:14 Edge weights for matting 0:16:33 Max flow demo 0:19:49 Hard segmentation with graph cuts demo 0:24:33 From a hard segmentation to a matte 0:26:49 GrabCut 0:30:01 Video matting 0:37:18 Rotoscoping 0:43:38 Examples from movies 0:49:13 Shadows 0:51:21 Refractive objects 0:52:57 Flash matting 0:55:40 Environment matting 0:58:08 The ICT Light Stage Follows Sections 2.7-2.9 of the textbook. 🤍 Key references: Y. Boykov and M. Jolly. Interactive graph cuts for optimal boundary and region segmentation of objects in N-D images. In IEEE International Conference on Computer Vision (ICCV), 2001. 🤍 C. Rother, V. Kolmogorov, and A. Blake. GrabCut: Interactive foreground extraction using iterated graph cuts. In ACM SIGGRAPH (ACM Transactions on Graphics), 2004. 🤍 N. Apostoloff and A. Fitzgibbon. Bayesian video matting using learnt image priors. In IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR), 2004. 🤍 A. Agarwala, A. Hertzmann, D. Salesin, and S. Seitz. Keyframe-based tracking for rotoscoping and animation. In ACM SIGGRAPH (ACM Transactions on Graphics), 2004. 🤍 P. Debevec, A. Wenger, C. Tchou, A. Gardner, J. Waese, and T. Hawkins. A lighting reproduction approach to live-action compositing. In ACM SIGGRAPH (ACM Transactions on Graphics), 2002. 🤍 D. Zongker, D. Werner, B. Curless, and D. Salesin. Environment matting and compositing. In ACM SIGGRAPH (ACM Transactions on Graphics), 1999. 🤍
Creative Visual FX: animated logos, brand motion, and visual effects for web videos that will enhance your corporate presence with media creativity. Produced by Andy King, Creative Visual FX. For visual effects on corporate web video, brand motion, and logo design, contact us. Email: sales@creativevisualfx.co.uk or visit creativevisualfx.co.uk
A visual effects company based in London specialising in innovative brand motion and logo animation. Visual effects for web video to enhance your creative presence. Contact: Sales@creativevisualfx.co.uk Website: creativevisualfx.co.uk
ECSE-6969 Computer Vision for Visual Effects
Rich Radke, Rensselaer Polytechnic Institute
Lecture 13: Optical flow (3/6/14)
0:00:02 Optical flow
0:00:58 Motion vectors
0:01:56 The brightness constancy assumption
0:05:44 The Horn-Schunck method
0:18:07 Hierarchical Horn-Schunck
0:40:25 The Lucas-Kanade method
0:45:54 Refinements and extensions
0:50:18 Smoothness along edges
0:53:52 Robust cost functions
0:58:12 Cross-checking
1:02:35 Layered flow
1:05:31 Large-displacement optical flow
1:08:13 Human-assisted motion annotation
1:11:31 Optical flow benchmarking
1:13:16 Optical flow for visual effects
Follows Section 5.3 of the textbook.
Note: despite the class discussion about hierarchical optical flow, the algorithm as presented on p. 159 of the book is correct.
Key references:
T. Brox, A. Bruhn, N. Papenberg, and J. Weickert. High accuracy optical flow estimation based on a theory for warping. In European Conference on Computer Vision (ECCV), 2004.
A. Bruhn, J. Weickert, and C. Schnörr. Lucas/Kanade meets Horn/Schunck: Combining local and global optic flow methods. International Journal of Computer Vision, 61(3):211–231, Feb. 2005.
D. Sun, S. Roth, and M. Black. Secrets of optical flow estimation and their principles. In IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR), 2010.
C. Liu, W. Freeman, E. Adelson, and Y. Weiss. Human-assisted motion annotation. In IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR), 2008.
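The Lucas-Kanade method at 0:40:25 solves the brightness-constancy constraint Ix·u + Iy·v + It = 0 in the least-squares sense over a small window. Here is a single-window NumPy sketch; the synthetic Gaussian image pair and all parameter values are my own illustrative choices.

```python
import numpy as np

def lucas_kanade(prev, curr, x, y, radius=5):
    """Single-window Lucas-Kanade: least-squares flow (u, v) around (x, y)."""
    # central-difference spatial gradients and forward temporal difference
    Ix = (np.roll(prev, -1, axis=1) - np.roll(prev, 1, axis=1)) / 2.0
    Iy = (np.roll(prev, -1, axis=0) - np.roll(prev, 1, axis=0)) / 2.0
    It = curr - prev
    win = (slice(y - radius, y + radius + 1), slice(x - radius, x + radius + 1))
    ix, iy, it = Ix[win].ravel(), Iy[win].ravel(), It[win].ravel()
    A = np.array([[ix @ ix, ix @ iy],      # the 2x2 normal-equation matrix
                  [ix @ iy, iy @ iy]])
    b = -np.array([ix @ it, iy @ it])
    return np.linalg.solve(A, b)           # assumes A is well-conditioned

# Synthetic pair: a Gaussian blob translated by (0.3, 0.2) pixels.
xx, yy = np.meshgrid(np.arange(32), np.arange(32))

def blob(dx, dy):
    return np.exp(-(((xx - 16 - dx) ** 2 + (yy - 16 - dy) ** 2) / 50.0))

prev, curr = blob(0, 0), blob(0.3, 0.2)
u, v = lucas_kanade(prev, curr, 16, 16)
```

The estimate is only reliable for sub-pixel to small displacements and where A is well-conditioned (the good-features criterion from the feature-detection lectures); the hierarchical/pyramid schemes discussed at 0:18:07 exist to handle larger motions.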
ECSE-6969 Computer Vision for Visual Effects
Rich Radke, Rensselaer Polytechnic Institute
Lecture 8: Image Retargeting and Recompositing (2/18/14)
0:00:19 Image retargeting
0:03:54 Specifying a region of interest (ROI)
0:06:40 Incorporating saliency
0:07:57 Optimized scale-and-stretch
0:13:45 Seam carving
0:19:40 Dynamic programming
0:20:43 Making an image narrower
0:22:13 Making an image wider
0:24:03 Video from original paper
0:27:15 Inpainting using seam carving
0:28:45 Seam carving for video
0:31:34 Improved seam carving: forward energy
0:36:39 Bidirectional similarity; completeness and coherence
0:42:20 The cost function
0:52:27 The iterative algorithm
0:56:14 PatchMatch
0:57:26 PatchMatch video
1:04:52 Video retargeting
Follows Sections 3.5-3.6 of the textbook.
Key references:
Y.-S. Wang, C.-L. Tai, O. Sorkine, and T.-Y. Lee. Optimized scale-and-stretch for image resizing. In ACM SIGGRAPH Asia (ACM Transactions on Graphics), 2008.
S. Avidan and A. Shamir. Seam carving for content-aware image resizing. In ACM SIGGRAPH (ACM Transactions on Graphics), 2007.
M. Rubinstein, A. Shamir, and S. Avidan. Improved seam carving for video retargeting. In ACM SIGGRAPH (ACM Transactions on Graphics), 2008.
D. Simakov, Y. Caspi, E. Shechtman, and M. Irani. Summarizing visual data using bidirectional similarity. In IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR), 2008.
C. Barnes, E. Shechtman, A. Finkelstein, and D. B. Goldman. PatchMatch: a randomized correspondence algorithm for structural image editing. In ACM SIGGRAPH (ACM Transactions on Graphics), 2009.
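The seam-carving dynamic program at 0:19:40 accumulates, row by row, the cheapest cumulative energy of any 8-connected path ending at each pixel, then backtracks from the cheapest bottom-row pixel. The sketch below implements this backward-energy version (as in Avidan and Shamir 2007, though the function names and toy energy map are my own); the forward-energy refinement at 0:31:34 changes only the per-step cost.

```python
import numpy as np

def find_vertical_seam(energy):
    """Minimum-energy vertical seam via dynamic programming.
    Returns one column index per row (8-connected, top to bottom)."""
    h, w = energy.shape
    M = energy.astype(np.float64)          # cumulative-cost table
    for y in range(1, h):
        # cheapest of the three possible predecessors in the row above
        left = np.concatenate(([np.inf], M[y - 1, :-1]))
        up = M[y - 1]
        right = np.concatenate((M[y - 1, 1:], [np.inf]))
        M[y] += np.minimum(np.minimum(left, up), right)
    seam = np.empty(h, dtype=np.int64)
    seam[-1] = int(np.argmin(M[-1]))       # cheapest bottom-row endpoint
    for y in range(h - 2, -1, -1):         # backtrack upward
        x = seam[y + 1]
        lo, hi = max(0, x - 1), min(w, x + 2)
        seam[y] = lo + int(np.argmin(M[y, lo:hi]))
    return seam

def remove_vertical_seam(img, seam):
    """Drop one pixel per row, narrowing the image by one column."""
    return np.array([np.delete(row, s) for row, s in zip(img, seam)])

# Toy energy map: a zero-cost column at x = 2 that the seam should follow.
energy = np.ones((4, 5))
energy[:, 2] = 0.0
seam = find_vertical_seam(energy)
narrow = remove_vertical_seam(energy, seam)
```

Making an image narrower (0:20:43) repeats this find-and-remove loop; making it wider (0:22:13) instead duplicates the k lowest-energy seams found on the original image.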