Can’t wait for ICLR 2025? Review results are in and available on openreview.net! You can use Future House’s paper-qa tool together with Google’s Gemini Pro and its 2-million-token context window to ask any question about the submissions. I also wrote a small helper so you don’t have to download all the papers yourself; instead, an LLM suggests relevant papers for you. If you’re not in research, you can ask whether there’s anything relevant that could improve products at your company.
For example, asking “What is the progress on brain activity research?” downloads about 15 PDFs and produces the following answer:
Brain activity research is progressing in understanding how the brain processes visual information from natural experiences (Brain2025a pages 1-51). A method called BrainSAIL decomposes visual cortex selectivity by analyzing responses to localized semantic features in natural images, bridging vision data, semantic features, and neural responses (Brain2025a pages 1-51). BrainSAIL uses semantic embeddings from models like CLIP, DINO, or SigLIP (Brain2025a pages 1-51). It concurrently models localized semantic information, high-level semantic categories, and observed brain activity patterns to reveal features driving cortical selectivity (Brain2025a pages 1-51). Research is also advancing in integrating neural recordings from different animals or tasks, which presents challenges due to heterogeneities across sessions (Meta2025 pages 1-23). A proposed framework addresses this by meta-learning latent dynamics, encoding variations in a low-dimensional dynamical embedding (Meta2025 pages 1-23). Neuroscience is leveraging multi-dataset training, incorporating methods like LFADS, CEBRA, and CS-VAE to extract latent representations (Meta2025 pages 1-23). There is growing interest in using diverse neural recordings for training large-scale foundation models in neuroscience (Meta2025 pages 1-23).
These link to https://openreview.net/forum?id=yJ9QNbpMi2 and https://openreview.net/forum?id=SRpq5OBpED. The citations are a bit off because most of the PDFs still list anonymous authors.
How
I wrote a more detailed doc here. In short:
- use your login/password from https://openreview.net/ to authenticate
- get your Gemini key here
- git clone this paper-qa branch (PR pending!)
- I’ve used Gemini and ollama’s `granite3-dense` embeddings, but you can use anything http://litellm.ai/ supports
- use `venue_id="ICLR.cc/2025/Conference"` and ask any question! (a minimal sketch follows this list)
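To make these steps concrete, here’s a minimal sketch of how the pieces fit together. The `llm` and `embedding` strings follow litellm’s model naming, which paper-qa accepts directly; the OpenReview credential and `venue_id` plumbing come from the pending branch, so the environment variable and parameter names below are my assumptions rather than the final API.

```python
# Minimal sketch: query ICLR 2025 submissions with paper-qa + Gemini.
# Assumes the pending paper-qa branch with OpenReview support is installed.
import os

from paperqa import Settings, ask

# Credentials. GEMINI_API_KEY is read by litellm; the OpenReview
# variable names are hypothetical, check the branch's docs.
os.environ["GEMINI_API_KEY"] = "..."
os.environ["OPENREVIEW_USERNAME"] = "you@example.com"
os.environ["OPENREVIEW_PASSWORD"] = "..."

settings = Settings(
    llm="gemini/gemini-1.5-pro",        # any litellm-supported chat model
    embedding="ollama/granite3-dense",  # or a Gemini embedding model
)

# venue_id selects which conference's submissions to search; how it is
# passed (settings field vs. keyword argument) depends on the branch.
answer = ask(
    "What is the progress on brain activity research?",
    settings=settings,
    # venue_id="ICLR.cc/2025/Conference",
)
print(answer)
```

Under the hood, the helper asks the LLM to pick relevant submissions from the venue, downloads only those PDFs, and then runs paper-qa’s usual question-answering over them.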
Why
I’m still thinking about my previous Automated Paper Classification project, as I believe we can improve the speed of science by involving more people (or LLMs!) in simpler research tasks. This weekend I wanted to see how much large language models (LLMs) have improved and whether they can now handle the full complexity of research papers. For example, I realized I no longer run into context-window problems, and it feels like we don’t even need RAG anymore.
Default paper-qa still couldn’t give me a list of simple tasks an undergrad could complete to incrementally move science forward. I think the next step is to find out whether I need to tune some prompts, or whether it’s easier to build a very specific tool using Cursor.