Estimation theory is widely used in many branches of science and engineering. No doubt, one could trace its
origin back to ancient times, but Karl Friedrich Gauss is generally acknowledged to be the progenitor of what
we now refer to as estimation theory. R. A. Fisher, Norbert Wiener, Rudolf E. Kalman, and scores of others have
expanded upon Gauss's legacy and have given us a rich collection of estimation methods and algorithms from which
to choose. This book describes many of the important estimation methods and shows how they are interrelated.
Estimation theory is a product of need and technology. Gauss, for example, needed to predict the motions of planets
and comets from telescopic measurements. This ``need'' led to the method of least squares. Digital computer technology
has revolutionized our lives. It created the need for recursive estimation algorithms, one of the most important
ones being the Kalman filter. Because of the importance of digital technology, this book presents estimation from
a discrete-time viewpoint. In fact, it is this author's viewpoint that estimation theory is a natural adjunct to
classical digital signal processing. It produces time-varying digital filter designs that operate on random data
in an optimal manner.
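The recursive flavor alluded to here can be made concrete with a small sketch. (The example, its constants, and the Python rendering are mine, not the book's; the book's own computations are done in MATLAB.) A recursive least-squares estimator refines its estimate each time a new measurement arrives, with no need to store or reprocess past data:

```python
# Recursive least-squares (RLS) for a scalar parameter theta, measured
# as z[k] = theta + v[k] with noise variance r.  Each new measurement
# refines the running estimate; no past data need be stored.

def rls_step(theta, P, z, r=1.0):
    K = P / (P + r)                  # gain: how much to trust new data
    theta = theta + K * (z - theta)  # correct estimate with innovation
    P = (1.0 - K) * P                # shrink estimation-error variance
    return theta, P

theta, P = 0.0, 1000.0               # vague prior: large initial variance
for z in [4.8, 5.3, 5.1, 4.9, 5.2, 5.0, 4.7, 5.4]:
    theta, P = rls_step(theta, P, z)
print(theta, P)
```

For this scalar case the recursion reproduces, measurement by measurement, what a batch least-squares fit over all the data would give, which is exactly the property that makes such algorithms well suited to digital computers.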
Although this book is entitled ``Estimation Theory,'' computation is essential in order to use its many
estimation algorithms. Consequently, computation is an integral part of this book.
It is this author's viewpoint that, whenever possible, computation should be left to the experts. Consequently,
I have linked computation into MATLAB\textregistered{} (MATLAB is a registered trademark of The MathWorks, Inc.) and
its associated toolboxes. A small number of important estimation M-files, which do not presently appear in any
MathWorks toolbox, have been included in this book; they can be found in Appendix B.
This book has been written as a collection of lessons. It is meant to be an introduction to the general field of
estimation theory and, as such, is not encyclopedic in content or in references. The supplementary material, which
has been included at the end of many lessons, provides additional breadth or depth to those lessons. This book
can be used for self-study or in a one-semester course.
Each lesson begins with a summary that describes the main points of the lesson and also lets the reader know exactly
what he or she will be able to do as a result of completing the lesson. Each lesson also includes a small collection
of multiple-choice summary questions, which are meant to test the reader on whether or not he or she has grasped
the lesson's key points. Many of the lessons include a section entitled
``Computation.'' When I decided to include material about computation, it was not clear to me whether such material
should be collected together in one place, say at the rear of the book in an appendix, or whether it should appear
at the end of each lesson, on demand, so to speak. I sent letters to more than 50 colleagues and former students
asking them what their preference would be. The overwhelming majority recommended having discussions about computation
at the end of each lesson. I would like to thank the following for helping me to make this decision: Chong-Yung
Chi, Keith Chugg, Georgios B. Giannakis, John Goutsias, Ioannis Katsavounidis, Bart Kosko, Li-Chien Lin, David
Long, George Papavassilopoulos, Michael Safonov, Mostafa Shiva, Robert Scholtz, Ananthram Swami, Charles Weber,
and Lloyd Welch.
Approximately one-half of the book is devoted to parameter estimation and the other half to state estimation. For
many years there has been a tendency to treat state estimation, especially Kalman filtering, as a stand-alone subject
and even to treat parameter estimation as a special case of state estimation. Historically, this is incorrect.
In the musical {\it Fiddler on the Roof}, Tevye argues on behalf of
``Tradition!'' Estimation theory also has its tradition, and it begins with Gauss and parameter estimation. In
Lesson 2 we show that state estimation is a special case of parameter estimation; i.e., it is the problem of estimating
random parameters when these parameters change from one time instant to the next. Consequently, the subject of
state estimation flows quite naturally from the subject of parameter estimation.
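A small sketch may help fix the idea. (The scalar model, its constants, and the Python rendering are mine, not the book's; Lesson 2's development is more general.) If the unknown parameter is constant, the measurement update below is just recursive least squares; letting the parameter drift from one time instant to the next as a random walk adds a time-update step, and the result is a scalar Kalman filter:

```python
# A scalar Kalman filter viewed as parameter estimation: the "parameter"
# x drifts as a random walk, x[k+1] = x[k] + w[k] (variance q), and is
# observed as z[k] = x[k] + v[k] (variance r).

def kalman_step(x, P, z, q=0.1, r=1.0):
    # Time update: the parameter changes between instants, so its
    # uncertainty grows; for a constant parameter (q = 0) this step
    # vanishes and the filter reduces to recursive least squares.
    P = P + q
    # Measurement update: identical in form to the recursive
    # least-squares correction.
    K = P / (P + r)          # Kalman gain
    x = x + K * (z - x)      # correct with the innovation z - x
    P = (1.0 - K) * P        # reduce uncertainty
    return x, P

x, P = 0.0, 1000.0           # vague prior on the initial state
for z in [1.1, 1.9, 3.2, 3.8, 5.1, 5.0, 4.9, 5.1, 5.0, 5.0]:
    x, P = kalman_step(x, P, z)
print(x, P)
```

The only structural difference from the parameter-estimation recursion is the time update, which is one way of seeing why state estimation flows so naturally out of parameter estimation.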
There are four supplemental lessons. Lesson A is on sufficient statistics and statistical estimation of parameters
and has been written by Professor Rama Chellappa. Lessons B and C are on higher-order statistics. These three lessons
are on parameter estimation topics. Lesson D is a review of state-variable models. It has been included because
I have found that some people who take a course on estimation theory are not as well versed in state-variable
models as they need to be in order to understand state estimation.
This book is an outgrowth of a one-semester course on estimation theory that has been taught at the University
of Southern California since 1978, in which we cover all of its contents at the rate of two lessons a week.
I wish to thank Mostafa Shiva, Alan Laub, George Papavassilopoulos, and Rama Chellappa for encouraging me to convert
the course lecture notes into a book. The result was the first version of this book, which was published in 1987
as {\it Lessons in Digital Estimation Theory}. Since that time the course has been taught many times and additional
materials have been included. Very little has been deleted. The result is this new edition.
Most of the book's important results are summarized in theorems and corollaries. In order to guide the reader to
these results, they have been summarized for easy reference in Appendix A.
Because this is a textbook, problems are included for all the lessons (except Lesson 1, which is the Introduction).
The problems fall into three groups. The first group contains problems that ask the reader to fill in details,
which have been ``left to the reader as an exercise.''
The second group contains problems that are related to the material in the lesson. They range from theoretical
to easy computational problems, easy in the sense that the computations can be carried out by hand. The third group
contains computational problems that can only be carried out using a computer. Many of the problems were developed
by students in my Fall 1991 and Spring 1992 Estimation Theory classes at USC. For these problems, the name(s)
of the problem developer(s) appears in parentheses at the beginning of the problem. The author wishes to thank
all the problem developers.
While writing the first edition of the book, the author had the benefit of comments and suggestions from many of
his colleagues and students. I especially want to acknowledge the help of Georgios B. Giannakis, Guan-Zhong Dai,
Chong-Yung Chi, Phil Burns, Youngby Kim, Chung-Chin Lu, and Tom Hebert. While writing the second edition of the
book, the author had the benefit of comments and suggestions from Georgios B. Giannakis, Mithat C. Dogan, Don Specht,
Tom Hebert, Ted Harris, and Egemen Gonen. Special thanks to Mitsuru Nakamura for writing the estimation algorithm
M-files that appear in Appendix B; to Ananthram Swami for generating Figures B.4, B.5, and B.7; and to Gent Paparisto
for helping with the editing of the galley proofs.
Additionally, the author wishes to thank Marcel Dekker, Inc., for permitting him to include material from J. M.
Mendel, {\it Discrete Techniques of Parameter Estimation: The Equation Error Formulation}, 1973, in Lessons 1--3, 5--9,
11, 18, and 23; Academic Press, Inc., for permitting him to include material from J. M. Mendel, {\it Optimal Seismic
Deconvolution: An Estimation-based Approach}, copyright\,\copyright\,1983 by Academic Press, Inc., in Lessons 11--17,
19--21, and 25; and the Institute of Electrical and Electronics Engineers (IEEE) for permitting him to include material
from J. M. Mendel, {\it Kalman Filtering and Other Digital Estimation Techniques: Study Guide}, copyright\,\copyright\,1987 IEEE,
in Lessons 1--3, 5--26, and D. I hope that the readers do not find it too distracting when I reference myself for
an item such as a proof (e.g., the proof of Theorem 17-1). This is done only when I have taken material from one
of my former publications (e.g., any one of the preceding three), to comply with copyright law, and is in no way
meant to imply that a particular result is necessarily my own.
I am very grateful to my editor Karen Gettman and to Jane Bonnell and other staff members at Prentice Hall for
their help in the production of this book.
Finally, I want to thank my wife, Letty, to whom this book is dedicated, for providing me, for more than 30 years,
with a wonderful environment that has made this book possible.
Summary
Estimation theory is widely used in many branches of science and engineering. Written in a ``lesson'' format that
is especially convenient for self-study, this book describes many of the important estimation methods and shows
how they are interrelated.
Covers key topics in parameter estimation and state estimation, with supplemental lessons on sufficient statistics
and statistical estimation of parameters, higher-order statistics, and a review of state-variable models. Links
computations into MATLAB® and its associated toolboxes. A small number of important estimation M-files,
which do not presently appear in any MathWorks toolbox, are included in an appendix.
For engineers and scientists interested in digital estimation theory.
Table of Contents
1. Introduction, Coverage, Philosophy, and Computation.
2. The Linear Model.
3. Least-Squares Estimation: Batch Processing.
4. Least-Squares Estimation: Singular-Value Decomposition.
5. Least-Squares Estimation: Recursive Processing.
6. Small Sample Properties of Estimators.
7. Large Sample Properties of Estimators.
8. Properties of Least-Squares Estimators.
9. Best Linear Unbiased Estimation.
10. Likelihood.
11. Maximum-Likelihood Estimation.
12. Multivariate Gaussian Random Variables.
13. Mean-Squared Estimation of Random Parameters.
14. Maximum A Posteriori Estimation of Random Parameters.
15. Elements of Discrete-Time Gauss-Markov Random Sequences.
16. State Estimation: Prediction.
17. State Estimation: Filtering (The Kalman Filter).
18. State Estimation: Filtering Examples.
19. State Estimation: Steady-State Kalman Filter and Its Relationships to a Digital Wiener Filter.
20. State Estimation: Smoothing.
21. State Estimation: Smoothing (General Results).
22. State Estimation for the Not-So-Basic State-Variable Model.
23. Linearization and Discretization of Nonlinear Systems.
24. Iterated Least Squares and Extended Kalman Filtering.
25. Maximum-Likelihood State and Parameter Estimation.
26. Kalman-Bucy Filtering.
A. Sufficient Statistics and Statistical Estimation of Parameters.
B. Introduction to Higher-Order Statistics.
C. Estimation and Applications of Higher-Order Statistics.
D. Introduction to State-Variable Models and Methods.
Appendix A: Glossary of Major Results.
Appendix B: Estimation Algorithm M-Files.
References.
Index.