Abstract
This paper presents a mathematical proof of the equivalence between the Human Language-based Consciousness (HLbC) model and Bayesian inference, and explores their connection within neural information processing. The HLbC model posits that consciousness emerges through a process of observing external events, matching those events to past memories, unconsciously selecting actions, and retrospectively recognizing those actions as conscious decisions. These steps closely align with the principles of Bayesian inference, in which prior beliefs are updated with new evidence to form posterior probabilities, thereby minimizing prediction errors and optimizing behavior. The paper highlights the dynamic feedback mechanisms shared by both frameworks, demonstrating how unconscious probabilistic action selection in the HLbC model parallels the sampling process in Bayesian inference. Furthermore, the retrospective recognition of actions in the HLbC model is shown to correspond to Bayesian posterior updating, suggesting a unified approach to understanding how consciousness is generated. This work provides a robust theoretical foundation for applying probabilistic reasoning and feedback mechanisms to the study of consciousness, offering insights that extend beyond neuroscience to fields such as artificial intelligence and control systems.
Keywords
HLbC model, Bayesian inference, Neural information processing, Consciousness generation, Probabilistic reasoning, Posterior distribution updating
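The correspondence sketched in the abstract — prior beliefs updated by observed evidence, action selection as sampling from the resulting posterior, and post-hoc recognition folding the posterior back in as the next prior — can be illustrated with a minimal discrete Bayesian loop. All names, hypotheses, and probability values below are hypothetical placeholders chosen for illustration; they are not taken from the HLbC model itself.

```python
import random

def posterior(prior, likelihood):
    """Bayes' rule for a discrete belief: P(h | e) ∝ P(e | h) · P(h)."""
    unnorm = {h: prior[h] * likelihood[h] for h in prior}
    z = sum(unnorm.values())
    return {h: p / z for h, p in unnorm.items()}

def sample_action(belief, rng):
    """Unconscious action selection modeled as sampling from the belief."""
    r = rng.random()
    acc = 0.0
    for h, p in belief.items():
        acc += p
        if r < acc:
            return h
    return h  # guard against floating-point rounding

# Step 1: observe an event and match it to memory via a likelihood
# (hypothetical actions and numbers, for illustration only).
prior = {"approach": 0.5, "avoid": 0.5}        # beliefs from past experience
likelihood = {"approach": 0.8, "avoid": 0.2}   # how well the event matches each memory

# Step 2: update the prior to a posterior (prediction-error minimization).
belief = posterior(prior, likelihood)

# Step 3: select an action by sampling from the posterior.
rng = random.Random(1)
action = sample_action(belief, rng)

# Step 4: post-hoc recognition — the updated belief becomes the next prior.
prior = belief
print(action, belief)
```

The feedback loop is the point of the sketch: the posterior computed in one cycle serves as the prior for the next, mirroring the retrospective recognition step that the abstract identifies with Bayesian updating.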