In parameter estimation, you are trying to determine the most likely
value of a parameter in some model, given some data set. In very
broad-brush terms, when we approach this with Bayesian parameter
estimation, we identify in Bayes' Law
\[
P(\theta \mid D) = \frac{P(D \mid \theta)\, P(\theta)}{P(D)},
\]
the quantity $D$ with some experimental results and the
quantity $\theta$ with the parameter of some parameterized model.
$P(D \mid \theta)$, the ``likelihood'', is then
identified with how we expect the data to vary depending on the
parameter, which is typically known. $P(\theta)$ is called the ``prior'':
it represents prior knowledge about the parameter. The quantity
related to $P(D)$
comes from the data. What you are then after is
$P(\theta \mid D)$, the ``posterior'', related to ``probability of the parameter
given the data''. So to determine $P(\theta \mid D)$, you have to have some
reasonable distribution to assign to the prior. If you don't really
know anything about the parameter you are after, it can be tricky to
decide what to use for the prior, and the answer can be sensitive to what you choose.
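As a minimal sketch of this recipe, the snippet below does grid-based Bayesian estimation of a coin's bias from a hypothetical set of flips (the data values and the uniform prior are illustrative assumptions, not taken from the text): the posterior is the likelihood times the prior, normalized so that it sums to one.

```python
import numpy as np

# Hypothetical data: 7 heads observed in 10 flips (illustrative only).
heads, flips = 7, 10

# Grid of candidate values for the parameter theta (the coin's bias).
theta = np.linspace(0.001, 0.999, 999)

# The prior: uniform here, i.e. no prior knowledge about theta.
prior = np.ones_like(theta)

# The likelihood: how we expect the data to vary with the parameter.
likelihood = theta**heads * (1 - theta)**(flips - heads)

# The posterior is proportional to likelihood * prior; dividing by the
# sum plays the role of the normalizing constant (the evidence).
posterior = likelihood * prior
posterior /= posterior.sum()

# The most probable parameter value given the data (the MAP estimate).
theta_map = theta[np.argmax(posterior)]
print(f"MAP estimate: {theta_map:.2f}")  # 0.70 for 7 heads in 10 flips
```

Swapping the uniform prior for a peaked one shifts the posterior toward the prior's peak, which is exactly the sensitivity to the choice of prior mentioned above.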
Bayes' Law is a tremendously powerful tool in many statistical contexts, in particular when a sensible prior can be determined, because it gives a highly effective way to incorporate knowledge you already have. However, when it is hard to choose a sensible prior, the method can give dubious results.