
I chose Blender mainly to explore how 3D-to-2D rendering turns “making an image” into designing a viewing system. During the copy process, I realised I was not simply reconstructing an image but a view. Since the reference is a single cropped angle, most of my decisions were guesses: depth, scale, and rotation were repeatedly adjusted until they “looked right” from the front. Yet whenever I changed the viewpoint, the model became uncanny, with floating connections and warped proportions, and lighting only exposed these contradictions further. This echoed my earlier observation (week 1) that Blender’s XYZ views are a kind of re-planarisation: the object becomes whatever survives a chosen camera regime.
What counts as “accurate” or “neutral” representation in a tool built on camera, perspective, and industry defaults?
Am I modelling an object, or modelling a standard of visibility—what is allowed to be seen and what can be ignored?
In 3D-to-2D rendering, outlines behaved like algorithmic borders: stable when the render camera moved, but broken when I orbited freely in the viewport, suggesting the “edge” is not the object’s truth but a rule-based decision. Materials amplified this: changing nodes, lighting, and motion produced entirely different moods from the same geometry. The “materiality” felt less virtual than simulated, a parameter system that generates narrative.
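To make that “rule-based decision” concrete: in Blender, a Freestyle outline is drawn only where an edge satisfies explicit conditions (silhouette, crease angle, border), not because the object inherently “has” an edge. A minimal Python sketch of those rules, assuming Blender’s default view layer name “ViewLayer” (an assumption; a scene may use a different name):

```python
import bpy

# Freestyle settings live on the view layer; "ViewLayer" is Blender's
# default name and is an assumption here.
scene = bpy.context.scene
scene.render.use_freestyle = True
fs = scene.view_layers["ViewLayer"].freestyle_settings

lineset = fs.linesets.new("contours")  # a new rule set deciding which edges count
lineset.select_silhouette = True       # view-dependent: recomputed as the camera moves
lineset.select_crease = True           # geometry-dependent: folds sharper than...
fs.crease_angle = 2.36                 # ...this threshold (radians, roughly 135 degrees)
lineset.select_border = False          # ignore open-mesh boundaries entirely
```

Flipping any of these booleans redraws the “border” of the same geometry, which is exactly why the outline is a decision rather than a property of the model.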
Next, I will iterate systematically rather than chase one perfect copy. I will fix the model and camera, then produce a series of versions by changing only material-node variables (roughness, transmission, refraction, rim highlight, outline thresholds) and render passes (beauty vs. Freestyle). Each version will be logged with a short note on what changed, what surprised me, and what the tool seems to favour, so that the process itself becomes the outcome; a sketch of this loop follows below.
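As a sketch of that iteration loop (not a finished pipeline): the material name, node variants, and output paths below are assumptions for illustration. The idea is simply to hold everything fixed except the listed node inputs and the Freestyle toggle, and to write one log row per render.

```python
import bpy
import csv

# Hypothetical variant list: only material-node values and the render
# pass change; model, camera, and lighting stay fixed.
VARIANTS = [
    {"name": "matte_beauty",   "roughness": 0.8, "freestyle": False},
    {"name": "glossy_beauty",  "roughness": 0.1, "freestyle": False},
    {"name": "glossy_outline", "roughness": 0.1, "freestyle": True},
]

scene = bpy.context.scene
mat = bpy.data.materials["Material"]           # assumed material name
bsdf = mat.node_tree.nodes["Principled BSDF"]  # default Principled node
# Other inputs (e.g. transmission) can be varied the same way; exact
# input names differ between Blender versions.

# "//" paths are relative to the saved .blend file.
with open(bpy.path.abspath("//render_log.csv"), "w", newline="") as f:
    log = csv.writer(f)
    log.writerow(["variant", "roughness", "freestyle", "note"])
    for v in VARIANTS:
        bsdf.inputs["Roughness"].default_value = v["roughness"]
        scene.render.use_freestyle = v["freestyle"]
        scene.render.filepath = f"//renders/{v['name']}.png"
        bpy.ops.render.render(write_still=True)
        # The note column stays empty here and is filled in by hand:
        # what changed, what surprised me, what the tool favoured.
        log.writerow([v["name"], v["roughness"], v["freestyle"], ""])
```

Leaving the note column empty in code and filling it in afterwards keeps the log honest: the script fixes the variables, and the writing records the surprises.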