This is a really cool project out of the University of Washington that I think some companies might find very interesting. Basically, you can take ordinary video of someone talking and reconstruct a 3D representation of their face from it. That would be really useful if you're trying to accurately recreate a real person's likeness, say in an interview or broadcast setting, inside a 3D environment.
The really big takeaway is that the source material isn't environment-controlled; these are just ordinary videos gathered from interviews. That's huge when you need to collect a lot of source data to generate content! Right now this is still just a research paper, but I look forward to seeing it developed further toward production use!
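To give a rough feel for the "2D video in, 3D shape out" idea, here's a tiny toy sketch. To be clear, this is not the UW authors' method and all of the names (mean_shape, basis, project, the landmark and basis counts) are made up for illustration; it just shows the general pattern of fitting a linear 3D shape model to 2D landmark tracks from video frames using plain NumPy.

import numpy as np

rng = np.random.default_rng(0)

n_points = 68          # e.g. a facial-landmark count (assumed)
n_basis = 10           # number of shape basis vectors (assumed)
n_frames = 30          # frames of "video"

# Hypothetical linear shape model: mean shape + basis deformations.
mean_shape = rng.normal(size=(n_points, 3))
basis = rng.normal(size=(n_basis, n_points, 3))

# Ground-truth per-frame shape coefficients (what we want to recover).
true_coeffs = rng.normal(size=(n_frames, n_basis))

def project(shape_3d):
    """Orthographic projection: just drop the depth coordinate."""
    return shape_3d[:, :2]

# Simulate observed 2D landmarks per frame (with a little noise, standing in
# for a landmark tracker running on real interview footage).
observations = []
for t in range(n_frames):
    shape_t = mean_shape + np.tensordot(true_coeffs[t], basis, axes=1)
    observations.append(project(shape_t) + 0.01 * rng.normal(size=(n_points, 2)))

# Recover coefficients per frame by linear least squares:
# observed_2d - projected mean  ~=  sum_k c_k * projected basis_k
A = np.stack([project(b).ravel() for b in basis], axis=1)   # (2*n_points, n_basis)
recovered = []
for obs in observations:
    b = (obs - project(mean_shape)).ravel()
    coeffs, *_ = np.linalg.lstsq(A, b, rcond=None)
    recovered.append(coeffs)

err = np.abs(np.array(recovered) - true_coeffs).max()
print(f"max coefficient error: {err:.4f}")   # small, since this toy problem is linear

The real system obviously has to deal with pose, lighting, expression detail, and uncontrolled footage, which is exactly what makes the paper impressive; this snippet is only meant to show the basic shape of the fitting problem.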
Hit the jump for a video demo of this in action and a link to the project page...
Here's a link to the project page:
http://grail.cs.washington.edu/projects/totalmoving/