I agree with your sentiment that these intelligent tools, when prompted by experts with deep knowledge, can unleash a new creative process! And this stimulates questions from other experts! So here's mine: does your scalar field require a tuned potential V(phi) to match the data? If so, would a massive scalar field that acts similarly to an inflaton, provided by some f(R) gravity theory with additional coupling constants, solve the problem in a natural way?
well, as you probably know, many of the models for dark energy involve a scalar field with an arbitrary potential [I vaguely remember that there is a mapping from f(R) to these models]. There are a few models that try to connect the inflaton field to the one driving dark energy. Not sure that addresses your question.
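For readers who want the concrete picture behind this exchange: the generic quintessence setup (a standard textbook result, not specific to any model discussed in the post) is a canonical scalar field phi with potential V(phi), whose equation of state is

```latex
w_\phi \;=\; \frac{p_\phi}{\rho_\phi}
       \;=\; \frac{\tfrac{1}{2}\dot{\phi}^2 - V(\phi)}{\tfrac{1}{2}\dot{\phi}^2 + V(\phi)} .
```

A slowly rolling field, \(\dot{\phi}^2 \ll V(\phi)\), gives \(w_\phi \approx -1\) and mimics a cosmological constant; matching data then amounts to choosing V(phi) so that w(z) tracks the measurements, which is where the tuning worry in the question above comes from.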
Couldn’t agree more. In my CMB work, modern AI tools are easily a 10x multiplier. It baffles me when I hear fellow cosmologists say they don’t use them. I've found Claude Code is currently superior to Codex for this cosmological likelihood stuff.
interesting; thanks Tijmen!
Good stuff, Scott. I'm seeing similar leaps in applied ML, but would offer a different take: AI took minutes at this task *because* countless grad students spent years of their time writing the code and methodology to fit models, and codex's training set now contains decades worth of examples of cosmological likelihood optimization code ready to be regurgitated away. So maybe what defines progress from here on is what one can do that was never attempted before... but wasn't that always the case?
great point. there is absolutely no claim of "progress" here. i'm increasingly skeptical even of its accuracy, but even if it's 100% right, it's done nothing new in this example. [But I can still take you in chess; chess.com; I'm there]
The prompt only worked because you already knew what to ask. A first-year student couldn't have written it. That's the part that doesn't compress.
You state that experienced senior scientists might become more "relevant" and efficient, but I fear the opposite is true for early-career scientists. You need to learn how to ask the right questions and write useful prompts. With AI agents around, who is going to spend a year training an undergrad to work on a relevant project? Is there not a great danger that we are losing the next generation of scientists?
I am assuming that our primary mission is training the next generation and that will not go away. But as I write at the end, the uncertainty on everything I think is *very* large.
I'm wondering if you've tried to verify that what Codex produced is actually correct? I pasted your prompt into Claude 4.6 Opus. It had to iterate on the code a few times and the plot it ultimately produced is different than yours (even the DESI data points don't entirely agree). I'm not a cosmologist so I can't really evaluate what it did, but I'd be happy to share what it produced with you.
I haven't followed up on this example but did on another and it did indeed feel like one of those ChatGPT loops. I did a sanity check on the DESI points on the plot I posted here and it looked ok.
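On the verification point: here is a minimal sketch of the kind of sanity check being described, comparing a flat-ΛCDM distance prediction against a few BAO-style data points. The H0, Om, and the "data" triples below are illustrative placeholders, not actual DESI values or anyone's actual pipeline.

```python
# Sketch of a flat-LCDM distance sanity check. All numbers here are
# placeholders for illustration -- NOT real DESI measurements.
import numpy as np
from scipy.integrate import quad

H0 = 70.0          # Hubble constant in km/s/Mpc (assumed value)
Om = 0.3           # matter density parameter (assumed value)
c = 299792.458     # speed of light in km/s

def E(z):
    """Dimensionless Hubble rate H(z)/H0 for flat LCDM."""
    return np.sqrt(Om * (1 + z) ** 3 + (1 - Om))

def comoving_distance(z):
    """Comoving distance in Mpc, by numerical integration of c dz / H(z)."""
    integral, _ = quad(lambda zp: 1.0 / E(zp), 0.0, z)
    return (c / H0) * integral

# Placeholder (z, D_M [Mpc], sigma [Mpc]) triples -- purely illustrative.
data = [(0.5, 1900.0, 40.0), (1.0, 3400.0, 60.0)]

# Chi-square of the model against the placeholder points.
chi2 = sum(((comoving_distance(z) - d) / s) ** 2 for z, d, s in data)
print(f"chi^2 = {chi2:.2f}")
```

Checking that a pasted plot's points land near the model curve at this level (distances within a few sigma, chi-square of order the number of points) is roughly what a quick eyeball sanity check amounts to.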
Really cool! What do you think of OpenAI prism?
Nice to hear from you Brian; hope you are well. Have not tried prism. My student, who is way better at this than I, uses cursor.