SUBJECT: Hierarchical State Propagation and Linear Memory Deconstruction
Part 1 of this foundational series details the move away from standard attention-based Transformers toward a Hierarchical State Propagation (HSP) model. We argue that memory should not be a linear stream, but a nested hierarchy of influence.
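As a rough illustration of that nested hierarchy, consider the minimal Python sketch below. Every name in it (MemoryNode, propagate, the decay parameter) is a hypothetical stand-in for exposition, not HSP's actual interface. The idea it captures: each node blends an attenuated copy of its parent's state into its own, so influence flows down the hierarchy rather than along a flat token stream.

    from dataclasses import dataclass, field

    @dataclass
    class MemoryNode:
        """One level of the hierarchy: local state plus nested children."""
        state: float                      # toy scalar standing in for a state vector
        decay: float = 0.5                # how strongly parent influence attenuates
        children: list["MemoryNode"] = field(default_factory=list)

        def propagate(self, parent_state: float = 0.0) -> None:
            """Blend inherited influence into local state, then recurse downward."""
            self.state += self.decay * parent_state
            for child in self.children:
                child.propagate(self.state)

    # Influence flows root -> leaves rather than position -> position.
    root = MemoryNode(state=1.0, children=[
        MemoryNode(state=0.2, children=[MemoryNode(state=0.1)]),
        MemoryNode(state=0.3),
    ])
    root.propagate()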
In particular, HSP is designed to deliver two properties (a combined sketch follows this list):
- Eliminating positional bias in long-term context retention, so that whether a fact survives does not depend on where it appeared in the stream.
- Locking "Essential" facts in place even as peripheral data rolls off the window.