[D] I've been trying to understand the technical setup of a project called Qubic. It claims to use distributed proof-of-work computing for neural network training, and I want to know whether the idea holds together technically.

The main issue with distributed training is coordination. Training large neural networks requires frequent sharing of gradient updates across nodes. This process is latency-sensitive and works far better over fast interconnects inside a data center than over the internet between separate machines. My question for people who actually do distributed machine learning work is this: is there a training method that avoids gradient synchronization altogether?

Qubic describes its Aigarth AI system as using evolutionary selection instead of backpropagation, which means there are no gradients to share. Each node evolves its own model independently, and selection pressure acts across the full set of models over time rather than through synchronized weight updates.

If that account is correct, it removes the usual coordination problem. The process would work more like a genetic algorithm search than standard deep learning training. My questions are these:
- Is evolutionary model search a real direction in machine learning research, or has it been shown to perform worse than gradient descent?
- If it is a real direction, does the distributed proof-of-work model fit this approach better than it fits standard backpropagation training?
- Is there published research that compares evolutionary methods to standard training at large scale?
I am only trying to understand whether the architecture makes sense technically, not to judge the project.
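For concreteness, here is a toy sketch of the scheme as I understand the claim: each node hill-climbs on its own copy of a model with a simple (1+1)-style evolution strategy (mutate, keep the child if it scores better), and only full model parameters cross the network during a periodic global selection step. No gradients are ever exchanged. The objective, population size, and mutation scale here are all made up for illustration and have nothing to do with Qubic's actual code.

```python
import random

# Toy fitness: how close a "model" (a list of floats) is to a fixed target.
# Higher is better; 0 is perfect.
TARGET = [1.0, -2.0, 0.5]

def fitness(model):
    return -sum((w - t) ** 2 for w, t in zip(model, TARGET))

def mutate(model, rng, sigma=0.1):
    # Gaussian perturbation of every parameter; no gradient information used.
    return [w + rng.gauss(0.0, sigma) for w in model]

def evolve_locally(model, rng, generations=50):
    # One "node" running (1+1)-ES hill climbing in isolation:
    # no communication with other nodes during this phase.
    for _ in range(generations):
        child = mutate(model, rng)
        if fitness(child) > fitness(model):
            model = child
    return model

def global_selection(population, survivors=4):
    # Periodic cross-node step: rank all models, keep the best few,
    # and clone them to refill the population. Only model parameters
    # (not gradients) would cross the network here.
    ranked = sorted(population, key=fitness, reverse=True)
    best = ranked[:survivors]
    return [list(best[i % survivors]) for i in range(len(population))]

rng = random.Random(0)
population = [[rng.uniform(-3, 3) for _ in range(3)] for _ in range(8)]

for _ in range(5):  # alternate isolated evolution and global selection
    population = [evolve_locally(m, rng) for m in population]
    population = global_selection(population)

best = max(population, key=fitness)
```

The point of the sketch is the communication pattern: the expensive inner loop is embarrassingly parallel, and synchronization happens only at the coarse selection step, which is exactly the property that would make this tolerant of slow internet links. Whether this scales to models of useful size is precisely my question.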




