I'm writing a ray tracer from scratch. The example uses ray-sphere intersection to render two spheres. When the spheres are near the center of the screen, they look fine. But when I move the camera, or reposition the spheres so they sit closer to the edges, they become distorted.
Here is the ray-casting code:
void Renderer::RenderThread(int start, int span)
{
    // pCamera holds the position, rotation, and fov of the camera
    // pRenderTarget is the screen to render to

    // calculate the camera space to world space matrix
    Mat4 camSpaceMatrix = Mat4::Get3DTranslation(pCamera->position.x, pCamera->position.y, pCamera->position.z) *
                          Mat4::GetRotation(pCamera->rotation.x, pCamera->rotation.y, pCamera->rotation.z);

    // use the camera's origin as the ray's origin
    Vec3 origin(0, 0, 0);
    origin = (camSpaceMatrix * origin.Vec4()).Vec3();

    // this for loop loops over all the pixels on the screen
    for ( int p = start; p < start + span; ++p ) {
        // get the pixel coordinates on the screen
        int px = p % pRenderTarget->GetWidth();
        int py = p / pRenderTarget->GetWidth();

        // in ray tracing, ndc space is from [0, 1]
        Vec2 ndc((px + 0.75f) / pRenderTarget->GetWidth(), (py + 0.75f) / pRenderTarget->GetHeight());

        // in ray tracing, screen space is [-1, 1]
        Vec2 screen(2 * ndc.x - 1, 1 - 2 * ndc.y);

        // scale x by aspect ratio
        screen.x *= (float)pRenderTarget->GetWidth() / pRenderTarget->GetHeight();

        // scale screen by the field of view
        // fov is currently set to 90
        screen *= tan((pCamera->fov / 2) * (PI / 180));

        // the screen point is the pixel's point in camera space,
        // given a z value of -1
        Vec3 camSpace(screen.x, screen.y, -1);
        camSpace = (camSpaceMatrix * camSpace.Vec4()).Vec3();

        // the ray's direction is its point on the camera's viewing plane
        // minus the camera's origin
        Vec3 dir = (camSpace - origin).Normalized();
        Ray ray = { origin, dir };

        // find where the ray intersects with the spheres
        // using the ray-sphere intersection algorithm
        Vec4 color = TraceRay(ray);
        pRenderTarget->PutPixel(px, py, color);
    }
}
The FOV is set to 90. I've seen other people run into this problem, but in their case it was because they were using very high FOV values. I don't think 90 should cause a problem. The issue persists even when the camera isn't moved at all: any object near the edge of the screen appears distorted.
Best Answer
When in doubt, you can always check what other renderers are doing. I always compare my results and settings against Blender. For example, Blender 2.82 has a default field of view of 39.6 degrees.
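To get a feel for how much that difference matters: the tan(fov / 2) factor in your own code is the half-width of the image plane at unit distance from the camera, and it grows quickly with FOV. A minimal standalone sketch (plain C++; the FOV values in the loop are just mine, chosen for comparison):

#include <cmath>
#include <cstdio>

int main()
{
    const float PI = 3.14159265f;
    // the same scale factor the posted code applies: tan(fov / 2)
    const float fovs[] = { 39.6f, 60.0f, 90.0f, 120.0f };
    for (float fov : fovs) {
        float scale = std::tan((fov / 2.0f) * (PI / 180.0f));
        std::printf("fov %5.1f deg -> image plane half-width %.3f\n", fov, scale);
    }
    return 0;
}

At 90 degrees the image plane is already nearly three times wider than at Blender's 39.6-degree default, and a pinhole projection stretches shapes progressively toward the edges of that plane, so some edge distortion at 90 degrees is expected even with otherwise correct code.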
I'd also like to point out that this is wrong:
Vec2 ndc((px + 0.75f) / pRenderTarget->GetWidth(), (py + 0.75f) / pRenderTarget->GetHeight());
If you want the center of the pixel, then it should be 0.5f:
Vec2 ndc((px + 0.5f) / pRenderTarget->GetWidth(), (py + 0.5f) / pRenderTarget->GetHeight());
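For reference, here is the whole pixel-to-image-plane mapping with the 0.5f centering applied, pulled out into a standalone function (a sketch only; PixelToScreen, width, height, and fovDegrees are illustrative names, not part of your code):

#include <cmath>
#include <cstdio>

struct Vec2 { float x, y; };

// Map pixel (px, py) to camera-space image plane coordinates,
// sampling at the pixel's center.
Vec2 PixelToScreen(int px, int py, int width, int height, float fovDegrees)
{
    const float PI = 3.14159265f;
    // ndc in [0, 1): 0.5f picks the center of the pixel
    float ndcX = (px + 0.5f) / width;
    float ndcY = (py + 0.5f) / height;
    // screen space in (-1, 1), y flipped so +y points up
    float sx = 2.0f * ndcX - 1.0f;
    float sy = 1.0f - 2.0f * ndcY;
    // correct x for aspect ratio, then scale both axes by the fov
    sx *= (float)width / height;
    float scale = std::tan((fovDegrees / 2.0f) * (PI / 180.0f));
    return { sx * scale, sy * scale };
}

int main()
{
    // corner and center pixels of a 640x480 image at 90 degrees fov
    Vec2 corner = PixelToScreen(0, 0, 640, 480, 90.0f);
    Vec2 center = PixelToScreen(320, 240, 640, 480, 90.0f);
    std::printf("corner: (%f, %f)\ncenter: (%f, %f)\n",
                corner.x, corner.y, center.x, center.y);
    return 0;
}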
Also, and this is really a nitpick: your intervals are open, not closed (as your source-code comments state). The image-plane coordinates never actually reach 0 or 1, and your camera-space coordinates are never exactly -1 or 1. Ultimately, the image-plane coordinates are transformed into pixel coordinates, which form the half-open intervals [0, width) and [0, height).
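To make that concrete: with 0.5f centering and a width of 4, the NDC x values come out to 0.125, 0.375, 0.625 and 0.875, never 0 or 1. A quick check (the tiny width is chosen just so every value prints):

#include <cstdio>

int main()
{
    const int width = 4; // small on purpose, to print every pixel's ndc value
    for (int px = 0; px < width; ++px) {
        std::printf("px %d -> ndc %.3f\n", px, (px + 0.5f) / width);
    }
    return 0;
}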
Good luck with your ray tracer!
Regarding "c++ - Why does my ray tracer produce such severe edge distortion?", we found a similar question on Stack Overflow: https://stackoverflow.com/questions/63403277/