Least Squares Overview

The concept of using a Least Squares analysis to solve a group of related equations has been around for hundreds of years. But because of the lengthy computations, it wasn't until the advent of the computer that Least Squares analysis became practical. To understand the basic concepts of Least Squares, we need to discuss survey errors and precisions.

Types of Survey Errors

Survey measurements contain three kinds of errors.

Mistakes – Transposing two numbers or a corrupt data exchange between a total station and data collector are examples of mistakes. Least Squares does NOT correct mistakes, although it can help identify them through what is called Blunder Detection.

Systematic Errors – Systematic errors are consistent, repeatable errors that result from the environment, the equipment or the user. These errors can typically be computed and corrected. The effects of earth curvature and atmospheric refraction are examples of systematic errors for which corrections can be computed. Least Squares does NOT correct systematic errors.

Random Errors – These errors result from the inability to perform a measurement exactly the same way over and over again. They follow the mathematical laws of probability and conform to a normal distribution: small errors occur more frequently than large ones, and positive and negative errors occur with equal frequency. When redundant observations are included, Least Squares can reduce the effect of these errors.

What are Precisions and Weights?

In any measurement, there is opportunity for random error. Just sighting a target introduces some random angular error. Random errors also exist in distance measurements, compass readings, and so on. Least Squares uses estimates of precision to weight the observations. This allows an observed angle to be used with an observed distance or initial coordinate position in the solution. It also allows you to apply ‘professional judgment' as you account for weak backsights or combine GPS and conventional survey observations.

Precision is a measure of how confident we are in an observed value. Precisions are published by instrument manufacturers, usually in the documentation under the instrument specifications.

TPC uses what some refer to as Absolute Precisions as opposed to Relative Precisions. Absolute precisions do not vary as other data is introduced into the network. The use of absolute precisions allows TPC to compute a Chi-Square test on the solution to see how well it fits the data.

Weights are computed from precisions. More precise observations or coordinates have higher weights. The higher the weight, the less the observation or coordinate will be adjusted by the Least Squares adjustment.

Weights are computed from precision as follows:

For coordinates and horizontal angles: weight = 1 / precision²

For horizontal distance: weight = 1 / (precision + ppm * distance)²

For vertical distance: weight = 1 / (precision² * segment length)
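
As an illustration only, these formulas translate directly into code. The function names, units and numbers below are not TPC's, and ppm is interpreted here as parts per million of the measured distance:

    def angle_weight(precision_seconds):
        # weight = 1 / precision^2
        return 1.0 / precision_seconds ** 2

    def horizontal_distance_weight(precision_m, ppm, distance_m):
        # weight = 1 / (precision + ppm * distance)^2
        return 1.0 / (precision_m + ppm * 1.0e-6 * distance_m) ** 2

    def vertical_distance_weight(precision_m, segment_length_m):
        # weight = 1 / (precision^2 * segment length)
        return 1.0 / (precision_m ** 2 * segment_length_m)

    print(angle_weight(5.0))                               # 5" angle precision
    print(horizontal_distance_weight(0.003, 2.0, 500.0))   # 3 mm + 2 ppm at 500 m
    print(vertical_distance_weight(0.005, 120.0))          # 5 mm over a 120 m segment

Notice that the more precise the observation, the larger its weight.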

What is Least Squares?

Least Squares is a tool that helps you analyze and adjust the random errors in your survey. It computes adjusted coordinate positions, using the estimated precisions of observations and coordinates to reconcile the differences between the observations and the inverses computed from the adjusted coordinates. Least Squares also reports the statistics of the adjustments, indicating the strength of each computed position. That strength tells you how confident you can be in the position and can also be helpful in blunder detection.
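
To make the idea concrete, here is a minimal, generic weighted least squares sketch in Python using NumPy. It is not TPC's internal algorithm; the single-unknown example and all of its numbers are purely illustrative. It adjusts one unknown elevation from two observations of different precision, weighting each observation by the inverse square of its estimated precision:

    import numpy as np

    # Observation equations A x = obs for a single unknown elevation x,
    # observed twice with different estimated precisions (metres).
    A = np.array([[1.0],
                  [1.0]])
    obs = np.array([100.12, 100.15])
    precisions = np.array([0.010, 0.030])
    W = np.diag(1.0 / precisions ** 2)        # weight = 1 / precision^2

    # Normal equations: (A^T W A) x = A^T W obs
    N = A.T @ W @ A
    x = np.linalg.solve(N, A.T @ W @ obs)
    residuals = A @ x - obs

    print("adjusted elevation:", x[0])        # closer to the more precise value
    print("residuals:", residuals)
    # Standard deviation of the adjusted value, assuming a unit variance factor
    print("std dev of adjusted value:", np.sqrt(np.linalg.inv(N)[0, 0]))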

What is a Least Squares Network?

A Least Squares analysis is not limited to the sequence of points defined by a traverse. Although Least Squares can use the points defined by a traverse, it can also combine traverses to form a network and then solve the network.

Solving a network allows you to reconcile all your data at once, providing the strongest solution possible. Surveyors often talk about the ‘fit' of a survey. How one control loop fits well with a pre-existing control loop. Or how the survey ‘fits the grid' because the ties to existing monuments are tight. A Least Squares network looks at this kind of ‘fit' over the entire project and says, “Here is the best fit, given all the information.”

2-D vs. 3-D

In a 2-D analysis, only horizontal data is considered. You may have observed slope distances and vertical angles, but these are first reduced to a horizontal distance before being added to the network.

A 3-D analysis can accommodate both elevation differences and vertical angles. TPC accounts for the instrument and target heights when computing the elevation difference of an observation with the formula: elevation difference = computed vertical distance + instrument height – target height.

Vertical angle observations are first converted to a vertical distance which is then used to compute the elevation difference.
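
For example, the reductions described above can be sketched as follows. This sketch assumes a zenith-angle convention (90° is horizontal); the variable names and values are illustrative, not TPC's:

    import math

    slope_distance = 152.316        # metres
    zenith_angle_deg = 87.5         # zenith angle; 90 degrees is horizontal
    instrument_height = 1.550
    target_height = 1.800

    z = math.radians(zenith_angle_deg)
    horizontal_distance = slope_distance * math.sin(z)   # used in a 2-D analysis
    vertical_distance = slope_distance * math.cos(z)     # computed vertical distance

    # elevation difference = computed vertical distance + instrument height - target height
    elevation_difference = vertical_distance + instrument_height - target_height

    print(round(horizontal_distance, 3), round(elevation_difference, 3))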

What are Redundant Observations?

Least Squares allows you to take extra (redundant) observations, which it uses to strengthen the network. A redundant observation can be an extra angle you turn between two points or a distance you measure between two control points that weren't traversed sequentially. A redundant observation can also be created when you shoot the distance to your backsight while turning an angle set.

What Kind of Statistics do I Get From Least Squares?

Least Squares produces a number of statistics about the adjusted coordinate positions it computes and the observations used to compute them. These statistics help you understand the strength of each adjusted position before you accept it into your survey. They can also help you locate mistakes (blunders) in your survey, decide whether or not to hold an existing monument that is out of position, and so on. In short, you can make more informed decisions using the statistics from the Least Squares analysis.

What is a Standard Deviation?

Random survey errors are ‘normally distributed' about a mean in what is referred to as a ‘bell-shaped curve' or ‘normal distribution'. This means that if you were to repeat an observation many times and compute the mean (average) of all the observations, each individual observation would lie somewhere on the curve, with roughly two-thirds (about 68%) of all the observations being within one standard deviation of the mean. You might also say that any single observation has about a 68% chance of being within one standard deviation of the mean.

When the standard deviation of a coordinate is expressed in terms of X and Y, it means that if you were to observe that point many times and compute a unique position each time, about two-thirds of those computed positions would lie within X and Y of the position being reported. So a smaller standard deviation indicates a stronger or more accurate position, while a larger standard deviation indicates a weaker or less accurate position.
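
The following short simulation illustrates the idea. It is illustrative only (the true value and precision are made up): it repeats an observation many times with normally distributed error and counts how many results fall within one standard deviation of the mean:

    import random
    import statistics

    true_distance = 250.000
    sigma = 0.005                     # assumed precision, metres
    observations = [random.gauss(true_distance, sigma) for _ in range(10000)]

    mean = statistics.mean(observations)
    stdev = statistics.stdev(observations)
    within_one = sum(abs(o - mean) <= stdev for o in observations) / len(observations)

    print("mean:", round(mean, 4))
    print("standard deviation:", round(stdev, 4))
    print("fraction within one standard deviation:", within_one)   # about 0.68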

Chi-Square Test

The Chi-Square test provides information on how well the solution fits the data based on a predefined confidence level. This allows you to say for instance, “I am 99% confident that this solution is a good fit for my network.”

The Chi-Square analysis is called a test because it either passes or fails. If it passes, you have a good fit based on your confidence level. If it fails, you have a bad fit.

A bad fit can result from any number of errors, but is probably related to unrealistic precisions or poor initial coordinates.

TPC reports Pass or Fail at the 95% confidence level.
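
One common form of this test, applied to the variance factor of an adjustment, is sketched below. This is the general statistical test, not necessarily the exact form TPC uses, and the residual sum and degrees of freedom are made-up numbers:

    from scipy.stats import chi2

    vTWv = 14.2        # weighted sum of squared residuals from the adjustment
    dof = 9            # degrees of freedom (observations minus unknowns)
    alpha = 0.05       # corresponds to a 95% confidence level

    lower = chi2.ppf(alpha / 2, dof)      # two-tailed bounds on the
    upper = chi2.ppf(1 - alpha / 2, dof)  # chi-square distribution

    print("Chi-Square test:", "Pass" if lower <= vTWv <= upper else "Fail")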

Optimizations

Because Least Squares solutions involve sparsely populated matrices, several optimization techniques are available to reduce both the memory and time required for the solution. These optimizations can be enabled in the Solve dialog.
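
To illustrate the idea (this is not TPC's implementation), the normal-equation matrix of a large network is mostly zeros, so storing and solving it in a sparse format saves both memory and time:

    import numpy as np
    from scipy.sparse import csr_matrix
    from scipy.sparse.linalg import spsolve

    # A small banded normal-equation matrix; real networks produce much
    # larger matrices with the same mostly-zero structure.
    N_dense = np.array([[ 4.0, -1.0,  0.0,  0.0],
                        [-1.0,  4.0, -1.0,  0.0],
                        [ 0.0, -1.0,  4.0, -1.0],
                        [ 0.0,  0.0, -1.0,  4.0]])
    b = np.array([1.0, 2.0, 2.0, 1.0])

    N_sparse = csr_matrix(N_dense)    # only the non-zero entries are stored
    x = spsolve(N_sparse, b)
    print(x)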

Related Topics

Least Squares

Editions

Premium, Professional