https://github.com/chrishayuk/larql
https://youtu.be/8Ppw8254nLI?si=lo-6PM5pwnpyvwMXh
Now you can decompose a static LLM's weights into a graph database and do a kNN walk over each layer; the walk is mathematically identical to performing the matrix multiplication. This lets you update the model's internal factual knowledge without retraining (just insert into the graph DB), and it can also use less memory, since the weights live in a database rather than being held in RAM. The creator is a CTO at IBM.
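The matmul-as-graph-walk equivalence is easy to see with a toy example: store each nonzero weight of a dense layer as an edge (input neuron → output neuron, weight), then accumulate `weight * activation` while walking each input's edges. This is a minimal sketch of that idea, not larql's actual implementation; all names here are made up for illustration.

```python
# Toy 3x2 weight matrix W (rows = input neurons, cols = output neurons).
W = [[0.5, -1.0],
     [2.0,  0.0],
     [1.5,  3.0]]

# "Graph database" view: adjacency list of (target_neuron, weight) edges,
# one entry per nonzero weight.
graph = {i: [(j, w) for j, w in enumerate(row) if w != 0.0]
         for i, row in enumerate(W)}

def layer_via_matmul(x, W):
    """Plain matrix-vector product: y[j] = sum_i x[i] * W[i][j]."""
    return [sum(x[i] * W[i][j] for i in range(len(W)))
            for j in range(len(W[0]))]

def layer_via_graph_walk(x, graph, n_out):
    """Walk each input neuron's outgoing edges, accumulating into outputs.
    Term-by-term this performs exactly the same sums as the matmul."""
    y = [0.0] * n_out
    for i, xi in enumerate(x):
        for j, w in graph.get(i, []):
            y[j] += xi * w
    return y

x = [1.0, 2.0, -1.0]
print(layer_via_matmul(x, W))             # [3.0, -4.0]
print(layer_via_graph_walk(x, graph, 2))  # [3.0, -4.0]
```

Under this view, "editing a fact" amounts to inserting or updating edges in the database instead of fine-tuning weights, which is presumably what the no-retraining update claim refers to.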