Hi everyone,
I'm a researcher looking for an arXiv endorsement for cs.LG to submit my first paper. I've been working for about a year on FluidWorld, a world model where the prediction engine is a reaction-diffusion PDE instead of attention. The Laplacian diffusion handles spatial propagation, learned reaction terms do the nonlinear mixing, and the PDE integration itself produces the prediction.
No attention, no KV-cache, O(N) complexity, 867K parameters total. I ran a parameter-matched comparison (PDE vs Transformer vs ConvLSTM, all at ~800K params, same encoder/decoder/losses/data on UCF-101), and the interesting finding is that while single-step metrics are nearly identical, the PDE holds together much better on multi-step rollouts: the diffusion acts as a natural spatial regularizer that prevents error accumulation.
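To give a feel for the core idea, here's a minimal sketch of one explicit-Euler rollout step of dh/dt = D·Lap(h) + f_theta(h). This is my own toy illustration, not FluidWorld's actual code: the boundary handling (periodic here), the integrator, and the stand-in `reaction` term (a pointwise tanh mixing layer in place of the learned reaction network) are all assumptions.

```python
import numpy as np

def laplacian(h):
    # 5-point stencil discrete Laplacian over the two spatial axes
    # (periodic boundaries via np.roll; the paper's boundary choice may differ)
    return (np.roll(h, 1, axis=0) + np.roll(h, -1, axis=0)
            + np.roll(h, 1, axis=1) + np.roll(h, -1, axis=1) - 4 * h)

def reaction(h, W):
    # stand-in for the learned reaction term: a pointwise channel-mixing
    # nonlinearity (hypothetical; the real model learns this network)
    return np.tanh(h @ W)

def rollout(h, W, D=0.1, dt=0.1, steps=5):
    # explicit Euler integration of dh/dt = D * Lap(h) + f(h);
    # each step touches every latent pixel once, so cost is O(N) per step,
    # with no attention and nothing like a KV-cache to carry along
    for _ in range(steps):
        h = h + dt * (D * laplacian(h) + reaction(h, W))
    return h

# toy usage on an 8x8 latent grid with 4 channels
rng = np.random.default_rng(0)
h0 = rng.standard_normal((8, 8, 4))
Wmix = 0.1 * rng.standard_normal((4, 4))
h5 = rollout(h0, Wmix)
```

The multi-step prediction is just the integrated state, which is where the diffusion term gets to smooth out accumulating spatial error.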
Paper: https://github.com/infinition/FluidWorld/blob/main/paper/Fluidworld.pdf
Endorsement code: 6AB9UP
https://arxiv.org/auth/endorse?x=6AB9UP
If anyone working on world models, video prediction, neural PDEs, or efficient architectures could endorse me, that would be really appreciated. Happy to answer any questions about the work. Thanks!