Section 1: Course Introduction and the Law of Large Numbers (Posted Apr. 10)
Section 2: Discrete Time Markov Chains (Posted Apr. 10)
Section 3: Regenerative Processes (Posted Apr. 30)
Section 4: Steady-State Theory (Posted Apr. 30)
Section 5: Feller Chains (Posted Apr. 30)
Section 6: Harris Recurrence (Posted May 9)
Section 7: Coupling (Posted May 9)
Section 8: Non-stationary Transition Probabilities (Posted May 9)
Section 9: Martingales (Posted May 9)
Section 10: Stochastic Control (Posted May 9)
Section 11: Renewal Theory (Posted May 13)
Appendix A - Convergence (Posted Apr. 10)
Appendix B - Limits and Expectations (Posted Apr. 10)
Appendix C - Conditional Expectation (Posted Apr. 22)
An excellent reference is Chapter 6 of: Breiman, L. (1968). Probability. Addison-Wesley, Reading, MA.
Extreme Values:
For a quick overview, read: Chapter 8 of Barlow, R.E. and F. Proschan (1975). Statistical Theory of Reliability and Life Testing. Holt, Rinehart, and Winston, New York.
For a more extensive discussion, see: Resnick, S.I. (1987) Extreme Values, Regular Variation, and Point Processes. Springer Series in Operations Research and Financial Engineering, Springer, New York.
Discrete-time Markov Chains in Discrete State Space:
There are many books that cover this material well, including the text. An excellent classical book on finite state chains is: Kemeny, J.G. and J.L. Snell (1983). Finite Markov Chains. Springer Verlag, New York.
For countable state chains, see Chapters 2 and 3 of: Karlin, S. and H.M. Taylor (1975). A First Course in Stochastic Processes. Academic Press, New York.
For a comprehensive discussion, the classical reference is the first part of: Chung, K.L. (1967). Markov Chains with Stationary Transition Probabilities. Springer, New York.
For many nice examples of discrete state space chains in computer science, see: Mitzenmacher, M. and E. Upfal (2005). Probability and Computing: Randomized Algorithms and Probabilistic Analysis. Cambridge University Press, Cambridge.
Discrete-time Markov Chains in General State Space:
The standard reference here, which can legally be downloaded free of charge, is: Meyn, S. and R.L. Tweedie (2009). Markov Chains and Stochastic Stability. Cambridge University Press, Cambridge. Pay special attention to the second half of the book and the appendices.
Another good reference is: Borovkov, A. and V. Yurinsky. Ergodicity and Stability of Stochastic Processes. John Wiley, New York.
There are now many papers that use Lyapunov methods to establish positive recurrence of Markov chains. A pioneering paper in this regard that uses Lyapunov ideas to draw a rigorous connection between stability of queueing networks and certain deterministic dynamic systems is: Dai, J.G. (1995). "On positive Harris recurrence of multiclass queueing networks: a unified approach via fluid limit models," Annals of Applied Probability, Vol 5, 49-77.
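For readers new to the technique, the core Foster-Lyapunov drift condition can be stated as follows (a standard formulation with illustrative notation; see Meyn and Tweedie for precise hypotheses):

```latex
% Foster-Lyapunov drift criterion (standard form; notation is illustrative).
% For a Markov chain (X_n) on state space S, suppose there exist a function
% V : S -> [0, infinity), a ``small'' set C, and constants
% epsilon > 0, b < infinity such that
\[
  \mathbb{E}\bigl[\,V(X_{n+1}) - V(X_n) \,\big|\, X_n = x\,\bigr]
  \;\le\; -\varepsilon + b\,\mathbf{1}_C(x)
  \qquad \text{for all } x \in S .
\]
% Under suitable irreducibility assumptions, this negative drift off the
% set C implies positive (Harris) recurrence of the chain.
```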
Martingales:
A nice discussion of martingales in discrete time can be found in Chapter 6 of: Karlin, S. and H.M. Taylor (1975). A First Course in Stochastic Processes. Academic Press, New York.
For a discussion of the key facts about martingales in continuous time, see Chapter 1 of: Karatzas, I. and S.E. Shreve (1987). Brownian Motion and Stochastic Calculus. Springer-Verlag, New York.
Statistics for Markov Chains and Processes:
A classic (and short!) reference is: Billingsley, P. (1974). Statistical Inference for Markov Processes. Midway Reprints, University of Chicago Press, Chicago.
A more detailed discussion can be found in: Basawa, I.V. and B.L.S. Rao (1980). Statistical Inference for Stochastic Processes. Academic Press, New York.
Stochastic Control:
Classic references include: Bertsekas, D. (1995). Dynamic Programming and Optimal Control, Volume 2. Athena Scientific, Belmont, MA.
Fleming, W. and R.W. Rishel (1975) Deterministic and Stochastic Optimal Control, Springer-Verlag, New York.
Fleming, W. and H.M. Soner (1992) Controlled Markov Processes and Viscosity Solutions, Springer-Verlag, New York.
Puterman, M.L. (1994) Markov Decision Processes. Wiley, New York.
Many examples can be found in: Heyman, D.P. and M. Sobel (1984). Stochastic Models in Operations Research, Volume 2. McGraw Hill, New York.
Large Deviations:
A very nice book on the basic theory is: Bucklew, J.A. (1990). Large Deviations Techniques in Decision, Simulation, and Estimation. Wiley, New York.
A more advanced book is: Dembo, A. and O. Zeitouni (1992). Large Deviations Techniques and Applications. Jones and Bartlett, London.
For a nice discussion in the setting of ruin theory, see Chapter 10 of: Asmussen, S. and H. Albrecher (2010). Ruin Probabilities. World Scientific, Singapore.
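As a one-line orientation to the subject, the prototypical large deviations result for i.i.d. sums (Cramér's theorem, covered in all of the books above) can be summarized as:

```latex
% Cramer's theorem (rough statement): for i.i.d. X_1, X_2, ... with
% S_n = X_1 + ... + X_n and a > E[X_1],
\[
  \frac{1}{n}\log \mathbb{P}\!\left(\frac{S_n}{n} \ge a\right)
  \;\longrightarrow\; -I(a),
  \qquad
  I(a) = \sup_{\theta \in \mathbb{R}}
  \bigl\{\theta a - \log \mathbb{E}\, e^{\theta X_1}\bigr\},
\]
% so tail probabilities decay exponentially, at a rate given by the
% Legendre transform I of the log moment generating function.
```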
Markov Jump Processes:
For the basics, see Chapter 5 of the text. For a more comprehensive discussion of the existence/uniqueness of processes having a specified rate matrix (and of existence/uniqueness of stochastic solutions to the forwards/backwards differential equations), see Chapter 6 of: Doob, J.L. (1953). Stochastic Processes. John Wiley, New York.
An enormous number of Markov jump process models have equilibrium distributions of product form. The classical reference here, which is legally available for free download, is: Kelly, F.P. (1979). Reversibility and Stochastic Networks. Wiley, Chichester.
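To illustrate what "product form" means here, consider the open Jackson network, the simplest example treated in Kelly's book (the notation below is illustrative):

```latex
% Open Jackson network with single-server stations i = 1, ..., d:
% exogenous arrival rates a_i, routing probabilities p_{ij}, service
% rates mu_i.  The traffic equations determine the effective arrival rates:
\[
  \lambda_i = a_i + \sum_{j=1}^{d} \lambda_j \, p_{ji},
  \qquad i = 1, \ldots, d.
\]
% If rho_i = lambda_i / mu_i < 1 for all i, the equilibrium distribution
% factorizes over the stations -- the ``product form'':
\[
  \pi(n_1, \ldots, n_d) \;=\; \prod_{i=1}^{d} (1 - \rho_i)\, \rho_i^{\,n_i}.
\]
```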
The following book discusses thermodynamic limits for various jump process models: Whittle, P. (1986). Systems in Stochastic Equilibrium. John Wiley, New York.
Brownian Motion:
There is good coverage of this material in the text; see Chapter 6, Sections 6.1 to 6.9 in particular.
Another good reference here is Chapter 7 of: Karlin, S. and H.M. Taylor (1975). A First Course in Stochastic Processes. Academic Press, New York. (This chapter discusses the use of martingale methods to calculate various probabilities for Brownian motion.)
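A typical example of the martingale method referred to above: since standard Brownian motion is itself a martingale, the optional stopping theorem gives its two-sided hitting probabilities directly.

```latex
% Let B be standard Brownian motion started at 0, and for a, b > 0 set
% T = inf{ t >= 0 : B_t \in \{-b, a\} }.  Optional stopping applied to the
% martingale B gives 0 = E[B_T] = a P(B_T = a) - b P(B_T = -b), hence
\[
  \mathbb{P}(B_T = a) = \frac{b}{a+b},
  \qquad
  \mathbb{P}(B_T = -b) = \frac{a}{a+b}.
\]
```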
If you wish to see a reasonably accessible discussion of the connection between differential equations and Brownian motion (and, more generally, solutions of stochastic differential equations), see Chapter 15 of: Karlin, S. and H.M. Taylor (1982). A Second Course in Stochastic Processes. Academic Press, New York.