Unity renders a frame on its own, so why would you need to render it again, or pull the rendered texture out of memory from outside the engine, when you can do this inside C#? Right, there is no need.

But why is this so hard? Why can't I just grab that texture directly? I don't have a definitive answer because I'm not a GPU expert, but as far as I know, even though the GPU has its own memory, it discards its contents very frequently, likely every frame. (If you understand GPUs better than I do, please explain in the comments; I'd be very thankful.)

This Unity forum thread is dedicated to exactly this question: How to access rendered depth buffer properly? Also, whether or not you are targeting VR, this thread is definitely helpful and will save a lot of time chasing performance hits: Real-Time Image Capture In Unity.

Here is the approach from the Unity documentation for storing a render as a PNG: set up a camera, a texture, and a render texture. Unity allows you to render the camera manually with Camera.Render, and we should only read the screen buffer after rendering is complete.
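The steps above can be sketched as a small Unity component. This follows the pattern from the Unity documentation (manual Camera.Render into a RenderTexture, then ReadPixels after rendering completes); the class name, field names, and resolution values are my own illustrative choices, not from the original post.

```csharp
using System.IO;
using UnityEngine;

// Hypothetical helper: attach to any GameObject and assign a camera in the Inspector.
public class ScreenshotCapture : MonoBehaviour
{
    public Camera captureCamera;   // camera to render manually
    public int width = 1920;       // assumed output resolution
    public int height = 1080;

    public void CaptureToPng(string path)
    {
        // Render into an off-screen RenderTexture instead of the screen.
        var rt = new RenderTexture(width, height, 24);
        captureCamera.targetTexture = rt;
        captureCamera.Render();    // manual render; the buffer is complete once this returns

        // ReadPixels copies from the currently active render target,
        // so make our RenderTexture active first.
        RenderTexture.active = rt;
        var tex = new Texture2D(width, height, TextureFormat.RGB24, false);
        tex.ReadPixels(new Rect(0, 0, width, height), 0, 0);
        tex.Apply();

        // Restore state and release the temporary texture.
        captureCamera.targetTexture = null;
        RenderTexture.active = null;
        Destroy(rt);

        // Encode the CPU-side copy to PNG and write it to disk.
        File.WriteAllBytes(path, tex.EncodeToPNG());
        Destroy(tex);
    }
}
```

Calling `CaptureToPng` outside of the rendering loop (e.g. from `LateUpdate` or a coroutine that waits for `WaitForEndOfFrame`) avoids reading a half-finished buffer, which is exactly the "read only after rendering is complete" point above.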