In photography the station point is the precise location and orientation of the camera at the moment light strikes the negative and the image is captured. Usually we think of a station point as the place where the photographer was standing when he or she took the picture. Information associated with station points includes the camera location, heading, pitch, bank, height above the ground, and focal length of the lens. With this information it is relatively easy to make a viewshed map using spatial analysis software such as ArcGIS or Natural Scene Designer. Finding and creating the information that defines the station point for each of the photographs in Peabody's slideshow, however, required developing a novel method, which I call photographic georeferencing, that utilizes a number of different virtual landscape programs. Here is the method.
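The information that defines a station point can be organized as a simple record. The following is a minimal Python sketch of my own, not part of ArcGIS or any other GIS package; the field names and the field-of-view helper are assumptions for illustration, and the 36 mm frame width assumes a standard 35 mm film format:

```python
import math
from dataclasses import dataclass

@dataclass
class StationPoint:
    """Parameters that define a photographic station point."""
    latitude: float         # decimal degrees
    longitude: float        # decimal degrees
    heading_deg: float      # compass direction the camera faced
    pitch_deg: float        # tilt up (+) or down (-) from horizontal
    bank_deg: float         # rotation about the lens axis
    height_m: float         # camera height above the ground surface
    focal_length_mm: float  # focal length of the lens

    def horizontal_fov_deg(self, frame_width_mm: float = 36.0) -> float:
        """Horizontal angle of view implied by the focal length,
        assuming a rectilinear lens and the given frame width."""
        return math.degrees(
            2 * math.atan(frame_width_mm / (2 * self.focal_length_mm)))

# Example: a hypothetical station point with a 50 mm lens,
# 1.7 m above the ground (eye level for a standing photographer)
sp = StationPoint(36.04, -111.98, 310.0, -5.0, 0.0, 1.7, 50.0)
print(round(sp.horizontal_fov_deg(), 1))  # ~39.6 degrees
```

A record like this holds everything a viewshed calculation needs, and the field-of-view helper makes explicit the link between focal length and the width of the visible scene that becomes important in step 2 below.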
This process assumes that you cannot identify the location simply by looking at the photograph, though that is not always the case: when I showed Peabody's images to Grand Canyon experts, they were able to identify some of the station point locations with a high degree of accuracy.
1) Use any textual clues in the title of the photograph and/or its accompanying documents to get a sense of where to start looking in Google Earth. With its mostly reliable search bar, Google Earth can pinpoint place names on the map. One of the most astounding characteristics of this slideshow is that Peabody's original, typed narrations for the set of images still exist. It is an archival miracle of sorts, one that offers an initial spatial foothold, since most of the brief narrations are attentive to geographical location, offering a place name or two in the description of the scene.
2) Replicate what you see in the photograph in Google Earth. A perfect match will never be possible, both because of differences in aspect ratio (e.g., the scene is wider in Google Earth than in the original photograph) and because distortions inhere both in the virtual landscape and in the way the camera's geometry projected and fixed the image onto the negative. Despite this, close matches are possible. Horizon lines tend to be a great place to start, and landscape features in the middle to back third of the scene are often rendered with enough resolution that their lines and shapes can be compared. Features close to the station point are the least likely to be helpful, since in Google Earth the accuracy and resolution of such features is obliterated by their virtuality.
Three limitations are especially challenging during this initial visual matching process with Google Earth. One, Google Earth does not allow users to move straight up and down, as in an elevator. This makes it nearly impossible to create a view from two to three meters above the ground surface, where a camera likely would have been. Two, it is nearly impossible to consistently adjust the width of the view in Google Earth, which is the equivalent of adjusting the camera's focal length. One must instead create a relatively wide view, then imagine the frame of the camera somewhere within it in order to match the images. This is challenging because it precludes the otherwise helpful strategy of comparing the edges of the frames when matching digital landscape representations with original photographs, and it gives no sense of what the camera's focal length might have been. Three, the accuracy and resolution of Google Earth, while an astounding development in the history of information, are not good enough, especially at close range. In the many instances where Peabody chose to highlight the foreground in his photos (see figure X), the Google Earth replica is completely untrustworthy. This should be taken as a commentary on the infancy of photographic georeferencing more than as a critique of Google Earth.
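The focal-length problem described in the second limitation can at least be approximated after the fact. If you can judge what fraction of the wide Google Earth view the photograph's frame occupies, simple lens geometry yields a rough focal length. This is a back-of-the-envelope sketch of my own, assuming a rectilinear projection and a 36 mm frame width; it produces an estimate, not a measurement:

```python
import math

def estimate_focal_length_mm(ge_fov_deg: float, frame_fraction: float,
                             frame_width_mm: float = 36.0) -> float:
    """Estimate the focal length of the original camera.

    ge_fov_deg     -- horizontal field of view of the wide Google Earth view
    frame_fraction -- fraction of that view's width the photo appears to span
    """
    # In a rectilinear projection, horizontal distance on the image plane
    # is proportional to tan(angle / 2), so scale the half-angle tangent
    # rather than the angle itself.
    half_tan = frame_fraction * math.tan(math.radians(ge_fov_deg) / 2)
    photo_fov_rad = 2 * math.atan(half_tan)
    return frame_width_mm / (2 * math.tan(photo_fov_rad / 2))

# If the photograph spans half the width of a 60-degree Google Earth view:
print(round(estimate_focal_length_mm(60.0, 0.5), 1))  # ~62.4 mm
```

Even a rough number of this kind narrows the search when refining the match in software that, unlike Google Earth, does allow the focal length to be set directly.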
Figure X. Google Earth matched image for photo 6: "Up Grand Canyon, from Moran Point," 1899.
Figure X. Photo 6: "Up Grand Canyon, from Moran Point," 1899.