(1) I download two sets of data: one is very large-scale data at 30 arcsec (roughly 1km) resolution, called GTOPO30, which covers the whole planet (although not with uniform accuracy). A few downloads of this data suffice to do any calculation in North America, so this is a one-time step.
The other dataset is currently only available for the US, and comes in a few different forms from different sources. I used the NED dataset, available through an interactive web interface, and free in blocks under 10Mbytes (about .5 degree by .75 degree in the lower 48, 1 by 1.5 degrees in AK). (They may soon allow bigger downloads for free.) This dataset has finer resolution: 1 arcsec outside of AK, 2 arcsec in AK. I have to download a new NED dataset for each different region I want to calculate.
(2) These datasets are huge, and for most of the SM calculation, I don't need the precision they provide. Remember that SM adds up the modified slopes between a fixed reference point and all possible sample points. Sample points which are very far away from the reference point do not contribute very much, so it is safe to use a lower-resolution grid for these points. If I used the highest-res grid for all my sample points the calculation would take forever.
So I need to create these lower-res grids; I do this by averaging the data over 5x5 blocks. This is a one-time expense for each dataset. For the GTOPO30 data, I get a very coarse grid with 150 arcsec (about 5km) resolution, suitable for radii of over 100km or so. For the NED data, I get a grid with resolution of 5 arcsec (10 in AK), suitable for radii of roughly 2-20km.
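A rough sketch in Python of this downsampling step (the function name is mine; I'm assuming the elevation data has been read into a 2-D array, and I simply trim any edge cells that don't fill a complete 5x5 block):

```python
import numpy as np

def downsample(grid, block=5):
    """Average a 2-D elevation grid over block x block cells.
    Trims edge rows/columns that don't fill a complete block."""
    rows, cols = grid.shape
    grid = grid[:rows - rows % block, :cols - cols % block]
    r, c = grid.shape
    # Reshape so each block becomes its own pair of axes, then average them.
    return grid.reshape(r // block, block, c // block, block).mean(axis=(1, 3))
```

Applied to 30 arcsec GTOPO30 cells this yields 150 arcsec cells; applied to 1 arcsec NED cells it yields 5 arcsec cells, matching the resolutions above.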
(3) I pick a peak I want to calculate SM for and locate the summit Lat/Long. I use this as an initial reference point for the calculation. Now I add up the modified slope contributions for all the grid points of my various grids. For the sample points near the reference point (out to a few km away) I add up the modified slopes for every NED grid point. From a few km out to about 20km, I add up over my 5 arcsec dataset. From about 20km to about 100km I add up over the GTOPO30 grid. From about 100km out to about 500km I add up over the 150 arcsec dataset. Sample points outside of 500km will not contribute significantly, so they can be ignored. (After all, this is supposed to be a measure that concentrates on local relief.) If I want better accuracy, I go out further with the more accurate data, at the expense of more calculation time.
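The multi-resolution summation can be sketched as follows. This is not the actual code: the true modified-slope function is defined by the SM formula elsewhere, so I stand in a crude drop-over-distance placeholder, and I take "a few km" to mean 3km. Points and elevations are in km for simplicity (real code would work in lat/long and convert).

```python
import math

def modified_slope(ref, pt):
    # PLACEHOLDER: the real modified-slope function comes from the SM
    # definition, not this sketch. Here: elevation drop over distance,
    # clipped below at zero.
    d = math.dist(ref[:2], pt[:2])
    return max(0.0, (ref[2] - pt[2]) / d) if d > 0 else 0.0

def sm_contribution(ref, grid, r_min, r_max):
    """Sum modified slopes over sample points whose horizontal distance
    from the reference point lies in [r_min, r_max) km."""
    total = 0.0
    for pt in grid:  # pt = (x_km, y_km, elev_km)
        d = math.dist(ref[:2], pt[:2])
        if r_min <= d < r_max:
            total += modified_slope(ref, pt)
    return total

def spire_measure(ref, ned, grid5, gtopo, grid150):
    # One distance band per grid, matching the text; beyond 500 km, ignore.
    return (sm_contribution(ref, ned,       0,   3) +
            sm_contribution(ref, grid5,     3,  20) +
            sm_contribution(ref, gtopo,    20, 100) +
            sm_contribution(ref, grid150, 100, 500))
```

The point of the banding is visible in the structure: each grid is only ever scanned over the distance range where its resolution is needed.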
(4) Now I have calculated the SM of the summit. But often the summit is not the best reference point for SM. It would not be fair to compare peaks without first figuring out where their best reference points were. For example, El Cap has a tiny little "summit" way up from the rim of the cliff, and it gets a pretty dinky SM value. But that is not a fair SM value for El Cap.
So for any feature/peak, I want to find the best SM value. Usually it is pretty easy to pick out the general regions where the best reference point could be located: they need to be high up and right above steep terrain. Once I have located the right general regions, I just do a brute-force search in those regions, calculating SM for a whole grid of reference points, and finding the best reference point from those. This works pretty well even if it is not fully automated.
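The brute-force part of that search is simple enough to sketch. Everything here is my own naming: `sm` stands for whatever callable computes the spire measure of a candidate reference point (e.g. by the multi-grid summation of step (3)).

```python
def best_reference_point(sm, lat0, lon0, half_span, step):
    """Brute-force search: evaluate sm() on a grid of reference points
    centred on (lat0, lon0), spanning +/- half_span degrees in steps of
    `step` degrees, and return the best point and its value."""
    best_val, best_pt = float("-inf"), None
    n = int(half_span / step)
    for i in range(-n, n + 1):
        for j in range(-n, n + 1):
            pt = (lat0 + i * step, lon0 + j * step)
            val = sm(pt)
            if val > best_val:
                best_val, best_pt = val, pt
    return best_pt, best_val
```

For El Cap, the region to hand this routine would be the rim above the cliff rather than the nominal summit.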
(5) Steps (3) and (4) apply when I already have a peak in mind for which I want to calculate the SM value. If I have a region for which I want to find the best peaks, I need to scan the whole region for high-SM peaks. I have a mostly-brute-force procedure that does this, searching for good terrain and then calculating SM values in subregions with good terrain. Since it is not a very clever algorithm, it takes a while (up to an hour) to search a region, but it does find all of the best peaks. This is an interesting routine to run on a region I am not familiar with, since it tells me a lot about how impressive the peaks are in that region, and which peaks are the "finest" (from the point of view of SM, anyway).
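In outline, the two-phase structure of that scan might look like this. The actual "good terrain" test is not specified above, so `relief` here is just an assumed stand-in for whatever quick screening criterion is used; `search_tile` stands for the expensive per-subregion SM search of step (4).

```python
def scan_region(tiles, relief, search_tile, threshold):
    """Two-phase regional scan: cheaply screen each subregion ('tile')
    for good terrain, then run the expensive SM search only on the
    tiles that pass. `relief` and `search_tile` are assumed callables."""
    results = []
    for tile in tiles:
        if relief(tile) >= threshold:  # quick screen: skip flat terrain
            results.append(search_tile(tile))
    return results
```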
(6) If my goal is to make a "Top N" list of peaks, then in fact I need to calculate reduced spire measure (RSM). To do this, I first find the peak in the region, say A, with the best spire measure; then I find the best remaining peak, say B, in the region by reduced spire measure, where I use the peak A to reduce. In other words peaks close to A get a significant reduction since they look like subpeaks of A. Then I find the best peak, C, using reduction by A and B; and so on. This is actually easier than it sounds, since the reduced spire measure is very close to the ordinary spire measure when the peak being measured is decently far away from all better peaks.
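The greedy structure of the RSM ranking can be sketched like this. The reduction function itself (how much a peak's measure shrinks near a better peak) is defined by RSM elsewhere, so `sm` and `rsm` are assumed callables here:

```python
def top_n(peaks, sm, rsm, n):
    """Greedy Top-N list by reduced spire measure.
    sm(p): ordinary spire measure of peak p.
    rsm(p, chosen): spire measure of p reduced by the already-chosen
    better peaks (nearby better peaks give a big reduction)."""
    remaining = list(peaks)
    chosen = []
    # First pick uses ordinary SM: there is nothing to reduce by yet.
    first = max(remaining, key=sm)
    chosen.append(first)
    remaining.remove(first)
    while remaining and len(chosen) < n:
        # Each later pick is reduced by everything chosen so far.
        best = max(remaining, key=lambda p: rsm(p, chosen))
        chosen.append(best)
        remaining.remove(best)
    return chosen
```

As noted above, this converges quickly in practice because rsm(p, chosen) is very close to sm(p) for any peak p that is decently far from all the chosen peaks.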
To sum up, producing these tables took a lot of work and a lot of computer calculation. Theoretically it would be nice to clean up all my routines and package them so they are user-friendly, so that other people can easily calculate SM values. That will not happen soon. Perhaps Edward Earl (who is a better programmer than I) will take this upon himself.
Any comments or questions are very welcome: