Levin's work suggests that the same bioelectric signal can carry different meanings depending on the receiver cell's current state: not just sequence-dependence, but state-dependence at the receiver. That's the signature of a context-sensitive grammar (in the Chomsky hierarchy, strictly more powerful than context-free).
If that's right, three things follow: a pure feedforward network can't participate natively; artificial participation would require a system that maintains and updates state across signal reception (closer to an RNN or state machine than a transformer); and the interface question isn't just voltage matching (arguably addressed by Geobacter nanowires) but computational architecture as well.
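To make the state-dependence point concrete, here's a toy sketch (my own illustration, not Levin's model, with an invented transition table): a receiver modeled as a Mealy machine, where the interpretation of a signal depends on (state, signal), not the signal alone. A stateless feedforward mapping would return the same meaning for both deliveries of the same signal; this one doesn't.

```python
class StatefulReceiver:
    """Caricature of a receiver cell: interpretation is state-dependent.

    The states, signals, and transitions below are invented for
    illustration only -- they are not taken from Levin's papers.
    """

    # (current_state, signal) -> (next_state, meaning)
    TRANSITIONS = {
        ("resting", "depolarize"): ("primed", "ignore"),
        ("primed", "depolarize"): ("resting", "proliferate"),
    }

    def __init__(self):
        self.state = "resting"

    def receive(self, signal):
        self.state, meaning = self.TRANSITIONS[(self.state, signal)]
        return meaning


cell = StatefulReceiver()
print(cell.receive("depolarize"))  # -> "ignore" (state was "resting")
print(cell.receive("depolarize"))  # -> "proliferate" (same signal, state was "primed")
```

The same input token yields different outputs on consecutive receptions, which is exactly what a pure feedforward map (a fixed function of the input) cannot do without external state.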
Has AI research done any work on what it would take to participate in a context-sensitive biological grammar, not to simulate it, but to natively participate in it?