# Arbitrary camera

At the very beginning of the discussion about raytracing we made three important assumptions: that the camera was fixed at \((0, 0, 0)\), that it was pointing to \(\vec{Z_+}\), and that its “up” direction was \(\vec{Y_+}\). In this section, we’ll lift these restrictions, so we can put the camera anywhere in the scene, pointing in any direction.

Let’s start with the position. You may have noticed that \(O\) is used exactly once in all the pseudocode: as the origin of the rays coming from the camera in the top-level method. If we want to change the position of the camera, the *only* thing we need to do is to use a different value for \(O\), and we’re done.

Does the change in *position* affect the *direction* of the rays? Not at all. The direction of the rays is the vector that goes from the camera to the projection plane. When we move the camera, the projection plane moves together with the camera, so their relative positions don’t change.

Now let’s turn our attention to the direction. Suppose you have a rotation matrix that rotates \((0, 0, 1)\) to the desired view direction and \((0, 1, 0)\) to the desired up direction (and since it’s a rotation matrix, by definition it does the right thing with \((1, 0, 0)\)). Rotating the camera in place doesn’t change its *position*, but it does change the direction of every ray: each direction simply undergoes the same rotation as the camera itself. So if you have a direction \(\vec{D}\) and the rotation matrix \(R\), the rotated direction is just \(R \cdot \vec{D}\).
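As a concrete sketch of this idea, here’s a minimal Python example that builds a rotation matrix about the \(Y\) axis (the helper names are assumptions, not part of the book’s pseudocode) and applies it to the default view direction:

```python
import math

def make_rotation_y(degrees):
    # Rotation matrix about the Y axis (hypothetical helper; any
    # rotation matrix from your math library works the same way).
    a = math.radians(degrees)
    c, s = math.cos(a), math.sin(a)
    return [[ c, 0, s],
            [ 0, 1, 0],
            [-s, 0, c]]

def multiply_mv(m, v):
    # Multiply a 3x3 matrix by a 3-vector: this computes R * D.
    return [sum(m[i][k] * v[k] for k in range(3)) for i in range(3)]

# Rotating the default view direction (0, 0, 1) by 90 degrees
# around Y yields (1, 0, 0), up to floating-point error.
D = [0, 0, 1]
rotated = multiply_mv(make_rotation_y(90), D)
```

The same `multiply_mv` call works for any ray direction, which is why the change in the main loop below is a single matrix multiplication.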

Only the top-level function changes:

```
for x in [-Cw/2, Cw/2] {
    for y in [-Ch/2, Ch/2] {
        D = camera.rotation * CanvasToViewport(x, y)
        color = TraceRay(camera.position, D, 1, inf)
        canvas.PutPixel(x, y, color)
    }
}
```
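The pseudocode above can be sketched as runnable Python. The viewport dimensions `Vw`, `Vh`, the distance `d`, and the stubbed `trace_ray` are assumptions standing in for the earlier chapters; with the identity rotation and the camera at the origin, this reduces exactly to the original fixed-camera loop:

```python
import math

Cw, Ch = 4, 4              # tiny canvas, for illustration only
Vw, Vh, d = 1.0, 1.0, 1.0  # assumed viewport size and distance

def canvas_to_viewport(x, y):
    # Map a canvas pixel to a point on the projection plane.
    return [x * Vw / Cw, y * Vh / Ch, d]

def rotate(m, v):
    # Apply a 3x3 rotation matrix to a 3-vector.
    return [sum(m[i][k] * v[k] for k in range(3)) for i in range(3)]

def trace_ray(origin, direction, t_min, t_max):
    # Stub: a real implementation intersects the ray with the scene.
    return (0, 0, 0)

# Identity rotation and origin position reproduce the fixed camera.
camera_position = [0, 0, 0]
camera_rotation = [[1, 0, 0], [0, 1, 0], [0, 0, 1]]

for x in range(-Cw // 2, Cw // 2):
    for y in range(-Ch // 2, Ch // 2):
        D = rotate(camera_rotation, canvas_to_viewport(x, y))
        color = trace_ray(camera_position, D, 1, math.inf)
        # canvas.PutPixel(x, y, color) would go here
```

Swapping in any other position and rotation matrix moves the camera without touching `TraceRay` or `CanvasToViewport` at all.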

Here’s what our scene looks like when observed from a different position and with a different orientation: