PAPER ABSTRACT

Artificial Neural Networks in Engineering Conference - 1996

ISBN # 0-7918-0051-2

This paper is copyrighted by ASME Press. For reprints, please contact the ANNIE organizers at http://www.umr.edu/~annie/ or ASME Press at The American Society of Mechanical Engineers, 345 East 47th Street, New York, NY 10017, USA.


Data Modeling Using Constrained Categorical Regression

Harvey L. Bodine, Steven S. Henley, and Robert L. Dawes
Martingale Research Corporation mrcinfo@martingale-research.com
Richard M. Golden
School of Human Development, University of Texas at Dallas, Richardson, TX 75083-0688 golden@utdallas.edu
T. Michael Kashner
UT Southwestern Medical Center at Dallas, 8267 Elmbrook, Suite 250, Dallas, TX 75247-9141
Abstract
We apply a sparsely connected neural network to the problem of recognizing statistical regularities and patterns in a National Labor and Alcohol Survey database. The network architecture, called Constrained Categorical Regression (CCR), is designed to yield valid statistical inferences even in the presence of a misspecified model, and it offers fast training with guaranteed convergence. Each weight in the network can be tested for statistical significance, and the network as a whole can be interpreted for meaning and validity.
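
As a rough, hypothetical sketch of the kind of machinery the abstract alludes to (this is not the authors' CCR implementation, and all names, parameters, and modeling choices below are assumptions made for illustration), the Python fragment below fits a sparsely connected softmax (categorical) regression by maximum likelihood, with a 0/1 connectivity mask standing in for the constraints, and then tests each free weight with a Wald statistic built from the misspecification-robust "sandwich" covariance estimator, one standard way to keep per-weight significance tests meaningful when the model is wrong.

    # Hypothetical sketch (not the authors' CCR code): a sparsely connected
    # softmax (categorical) regression fit by maximum likelihood, with per-weight
    # Wald tests based on the robust "sandwich" covariance estimator so the
    # tests remain meaningful even under a misspecified model.
    import numpy as np
    from math import erfc

    def softmax(z):
        z = z - z.max(axis=1, keepdims=True)
        e = np.exp(z)
        return e / e.sum(axis=1, keepdims=True)

    def fit_sparse_softmax(X, Y, mask, lr=0.1, iters=5000):
        """Gradient ascent on the average multinomial log-likelihood.
        X: (n, d) predictors (first column of ones for the intercept).
        Y: (n, k) one-hot category indicators.
        mask: (d, k) 0/1 connectivity; zeros encode the sparsity constraints.
        The log-likelihood is concave in W, so a small enough step size
        converges to the constrained maximum."""
        n, d = X.shape
        W = np.zeros((d, Y.shape[1]))
        for _ in range(iters):
            P = softmax(X @ W)
            W += lr * (X.T @ (Y - P) / n) * mask   # masked score step
        return W

    def sandwich_wald_pvalues(X, Y, W, mask):
        """Two-sided Wald p-value for each free weight, using the
        misspecification-robust covariance inv(A) B inv(A) / n."""
        n, d = X.shape
        k = Y.shape[1]
        free = np.flatnonzero(mask.ravel())        # row-major (d, k) ordering
        P = softmax(X @ W)
        m = d * k
        A = np.zeros((m, m))                       # average Hessian of the log-likelihood
        B = np.zeros((m, m))                       # average outer product of scores
        for i in range(n):
            p = P[i]
            S = np.diag(p) - np.outer(p, p)        # softmax Jacobian (k x k)
            A -= np.kron(np.outer(X[i], X[i]), S)
            g = np.kron(X[i], Y[i] - p)            # per-observation score
            B += np.outer(g, g)
        A /= n
        B /= n
        A_f = A[np.ix_(free, free)]                # restrict to free weights
        B_f = B[np.ix_(free, free)]
        cov = np.linalg.inv(A_f) @ B_f @ np.linalg.inv(A_f) / n
        z = W.ravel()[free] / np.sqrt(np.diag(cov))
        pvals = np.full(m, np.nan)
        pvals[free] = [erfc(abs(zi) / 2 ** 0.5) for zi in z]
        return pvals.reshape(d, k)

    if __name__ == "__main__":
        rng = np.random.default_rng(0)
        n, d, k = 500, 4, 3
        X = np.column_stack([np.ones(n), rng.normal(size=(n, d - 1))])
        true_W = np.zeros((d, k)); true_W[1, 0] = 1.5
        Y = np.eye(k)[[rng.choice(k, p=p) for p in softmax(X @ true_W)]]
        mask = np.ones((d, k))
        mask[:, -1] = 0        # last category as reference (keeps A_f invertible)
        mask[3, :] = 0         # an illustrative sparsity constraint
        W_hat = fit_sparse_softmax(X, Y, mask)
        print(np.round(sandwich_wald_pvalues(X, Y, W_hat, mask), 3))

Because the masked multinomial log-likelihood is concave, the simple gradient-ascent step above converges for a small enough step size, which loosely mirrors the fast, guaranteed-convergence training and per-weight significance testing described for CCR.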


NOTE: This material was based on work sponsored by the National Institute on Alcohol Abuse and Alcoholism. The opinions, findings, and conclusions or recommendations expressed in this publication are those of the authors and do not necessarily reflect the views of the National Institute on Alcohol Abuse and Alcoholism.

