There was recently a claim:
False. There has never been a lower barrier to entry to AI research.
— Ethan (@torchcompiled) April 5, 2025
You can do things that are authentically worth sharing academically without training, without a degree, with less than a 3090, and just huggingface libraries.
You can enter this field at any point in time and… https://t.co/xVSsHPW6kL
I think it’s mostly correct, but to get started you first have to develop high Personal Agency, which is difficult and not that straightforward. While I’m working on lowering the barrier even further with Paper To Project, a tool that aims to create a ranked list of novel, plausible, and testable projects in AI safety and interpretability, here’s a comparison of existing tools that can help you get started. We want to help answer these questions:
- Which problems are most critical right now?
- Which ones are tractable for someone with my specific background and available time?
- How do I move from reading abstract papers or high-level problem descriptions to a concrete project I can actually do?
AI scientist approach
These tools have an idea-generation step in their pipelines, which should be usable on its own to get started.
Company/Project | What it does | How to use |
---|---|---|
NoviSci | Generates thousands of ideas, deduplicates them by comparing their embeddings with cosine similarity, and ranks them via a Swiss tournament based on which paper gets the higher reviewer score from an LLM | Potentially the most useful, though the ranking it produces is still clearly worse than human ranking. It should first be grounded in a topic or a list of papers, for example this recent post with a list of problems. A minimal sketch of the deduplication step appears right after this table. |
SakanaAI | Fully automatic scientific discovery, from idea generation to a finished paper | Its first step is idea generation from a template with seed ideas. It should be possible to use only this part, with stronger guidance towards AI safety research. Ranking ideas relies on chain-of-thought and self-reflection in LLMs. |
Ai2’s CodeScientist | Creates novel ideas to explore essentially by genetic mutation (an LLM-as-a-mutator paradigm), mutating combinations of scientific articles and code examples | Generates a batch of ideas and relies on a human to rank and comment on them, but then plans and runs the experiments itself. A sketch of this mutation loop appears after the list below. |
Future House’s Aviary | Agents that generate scientific answers to problems like engineering proteins, summarizing literature, and molecular cloning | A scientist helper that can be the next step after you’ve identified which problem to solve. You can try Openreview For PaperQA to ask questions about the most recent papers. |
AI scientist where you define your goal and it explores the literature, then generates and expands ideas | Closed source, not applied to AI research, not available to the public | |
Curie | End-to-end experimentation automation: you ask a question and it runs an experiment | Designed to test and validate simple ideas, not to figure out which ideas are important |
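
To make the deduplication step from the NoviSci row concrete, here's a minimal sketch: embed each idea and keep only those whose embedding isn't too close to an already-kept one. The model name, the 0.85 threshold, and the example ideas are my own placeholder assumptions, not anything from NoviSci's actual code.

```python
# Minimal sketch of embedding-based idea deduplication via cosine similarity.
# Assumptions: sentence-transformers is installed; the model name and the
# 0.85 threshold are arbitrary choices, not NoviSci's real settings.
import numpy as np
from sentence_transformers import SentenceTransformer

ideas = [
    "Probe sparse autoencoder features for deceptive behaviour in small LLMs",
    "Use sparse autoencoders to find deception-related features in small models",
    "Benchmark jailbreak robustness of open-weight chat models under paraphrase attacks",
]

model = SentenceTransformer("all-MiniLM-L6-v2")  # small model, runs on CPU
embeddings = model.encode(ideas, normalize_embeddings=True)  # unit vectors

kept: list[int] = []
for i, emb in enumerate(embeddings):
    # Cosine similarity of unit vectors is just the dot product.
    if all(np.dot(emb, embeddings[j]) < 0.85 for j in kept):
        kept.append(i)

deduplicated = [ideas[i] for i in kept]
print(deduplicated)  # near-duplicates above the threshold are dropped
```

The Swiss-tournament ranking from the same row would then repeatedly pair the surviving ideas with similar running scores and ask an LLM reviewer to pick the better one; the dedup above just keeps that tournament from wasting comparisons on near-duplicates.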
There’s also:
- research town, which reads and writes papers by building a graph of them, but it’s not clear how to extract ideas from it
- a paper that generates seed ideas first, fetches related papers, and iterates on them before settling on a final idea and decomposing it into smaller parts. I couldn’t find any code for it, though
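
As a rough illustration of the LLM-as-a-mutator idea from the CodeScientist row above, the sketch below asks a chat model to "mutate" a combination of a paper abstract and a code-example description into a new project idea. The prompt wording, model name, and the two parent snippets are my own placeholders; only the overall generate-then-have-a-human-rank loop mirrors the description in the table.

```python
# Sketch of an LLM-as-a-mutator idea generator, loosely in the spirit of the
# CodeScientist description above. Prompt, model name, and parent snippets
# are placeholder assumptions, not CodeScientist's actual prompts.
import random
from openai import OpenAI  # pip install openai; requires OPENAI_API_KEY

client = OpenAI()

papers = [
    "Abstract: sparse autoencoders recover interpretable features in LLMs...",
    "Abstract: activation steering changes model behaviour at inference time...",
]
code_examples = [
    "A notebook that trains a linear probe on residual-stream activations.",
    "A script that evaluates jailbreak prompts against an open-weight model.",
]

def mutate(paper: str, code_example: str) -> str:
    """Ask the model to combine two 'parents' into one new, testable idea."""
    prompt = (
        "Combine the following paper abstract and code example into ONE new, "
        "concrete, testable AI-safety project idea (3-4 sentences):\n\n"
        f"PAPER:\n{paper}\n\nCODE EXAMPLE:\n{code_example}\n"
    )
    response = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content

# Generate a small batch; a human then ranks and comments on these,
# as in the CodeScientist workflow described above.
batch = [mutate(random.choice(papers), random.choice(code_examples)) for _ in range(3)]
for idea in batch:
    print(idea, "\n---")
```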
Useful tools that help with research
The main users here are academics, researchers, and applied scientists.
Startup/Company | What it does (briefly) | What’s missing | Audience Size |
---|---|---|---|
Elicit, SciSpace, Semantic Scholar, Scite | AI research assistants for brainstorming, writing, and search | Focus on literature Q&A, summarization, and brainstorming; they don’t systematically generate skill-level-specific project proposals from papers. A sketch of scripting the Semantic Scholar search API yourself appears after this table. | >2M users |
Research Rabbit, Connected Papers, and Litmaps | Citation-based literature discovery and organization with visual mapping; Research Rabbit bills itself as a “Spotify for papers” | Good for exploring paper networks and finding related work; no project-idea extraction or skill tailoring. | >400k users |
Scholarcy | AI tools for summarizing and organizing research papers | Focuses on digesting individual papers; not generating new projects based on them. | >600k users |
Paperpal | AI academic writing & editing assistant | Focuses on improving manuscript quality; not project ideation from existing research. | >20k users |
Scite | Assesses research reliability via citation analysis | Evaluates paper impact/reception; not project generation. | >350k users |
R Discovery, NotebookLM | AI tools for finding relevant papers, with summaries, audio summaries, and translations | Focus on discovery and summarization; no generation of skill-specific project ideas. Mobile app exists. | >3M users |
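
If you'd rather script the literature-discovery step than use the products above, the Semantic Scholar Graph API behind the Semantic Scholar product has a free search endpoint. A minimal sketch, where the query string and field list are placeholder choices you'd adapt to your topic:

```python
# Minimal sketch: keyword search against the Semantic Scholar Graph API.
# The query and field list are illustrative; check the API docs for rate
# limits and the optional API key.
import requests

URL = "https://api.semanticscholar.org/graph/v1/paper/search"
params = {
    "query": "sparse autoencoder interpretability",  # placeholder topic
    "fields": "title,year,abstract,citationCount,url",
    "limit": 10,
}

response = requests.get(URL, params=params, timeout=30)
response.raise_for_status()

for paper in response.json().get("data", []):
    print(f"{paper.get('year')}  {paper.get('citationCount', 0):>5}  {paper['title']}")
```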
Here’s another good review of literature review tools.
These existing tools help you find what to read, but not necessarily what to do next in a way that’s tailored to your capabilities. This is the gap that future tools or initiatives could fill, making it easier for more people to move from interested observers to active contributors.
Additional recommended reading
- Research is difficult; it’s usually easier with a good Graduate Advisor. Solo research is even harder, but not impossible, as you can see in these examples
- Take a look at How to Do Good Research by Eamonn Keogh
- Check out comments here and here
- Try things out and see what happens!