Can ChatGPT write a #SciComm grant proposal?

Image displays screenshot of author's interaction with the ChatGPT AI bot. This screenshot was an attempt to recreate the initial chat, because I didn't think to take a screenshot before I ended the chat and logged out. But, the bot responded slightly differently from the transcript linked to in the blog post. So, here's a transcript of the recreation attempt, which only involved 3 comment/response pairs. Me: I'd like to show you some writing I did, to see what you think. ChatGPT: Sure, I'd be happy to take a look and provide my thoughts. Please share your writing with me. Me: Can I paste in text? ChatGPT: Yes, you can paste the text here, and I'll take a look. (Note, the first time, the bot added a caveat that I could paste text if it wasn't too long. It did not clarify that this time.) Me: It's about 2400 words. Is that too long to paste in? ChatGPT: That might be a bit long to paste in here, but you can try to paste it, and we'll see if it works. Alternatively, you can upload the text file to a file-sharing service, such as Google Drive or Dropbox, and share the link with me.
This screenshot was an attempt to recreate the initial chat, because I didn’t think to take a screenshot before I ended the chat and logged out. But, the bot responded slightly differently from the transcript linked to in the blog post. My recreation attempt only involved 3 comment/response pairs. (Detailed alt text provided)

Origins of a Friday-morning “rabbit hole”: Can’t we make/let writing be easier?!

We’ve been hearing a lot of handwringing and concern about ChatGPT up-ending education. We’ve also been hearing a lot of innovative ideas about how to engage with ChatGPT as a teaching tool (which is, let’s be clear, mainly an effort to make sure ChatGPT doesn’t become the beginning and end of student work and writing).

Of all the approaches I’ve heard, the one that seems most straightforward is to have students use ChatGPT to generate first drafts. That would get developing writers past the daunting blank page, wrangle some initial thoughts into a form that can be refined and enhanced, etc., etc.

But, there’s been a thought tickling the back of my mind as I’ve listened to all these discussions, chatted a bit about it with colleagues, and even shared some commentaries with folks looking for perspectives: “Are we really only going to set up these frameworks of expectation and standards of use/engagement for students?”

Yes, there are already some cite-the-bot or attribute-co-authorship policies coming online.

But, I’m actually thinking more of our own writing, at our own keyboards.

Surely, even the writery writers among us would love to have that first draft be easier.

For example: grant proposals. I’ve written (or co-written) 30-some proposals that have raised over $3 million in funds for everything from community nonprofits and outdoor education to ecology research and systems change in higher ed and the science communication profession. And, I do plenty of coaching for folks writing grants, particularly when they are working on scicomm/broader impacts/engagement aspects of proposals.

This might seem like a lot or a little, depending on your context. Point is, grantwriting isn’t a brand-new writing task for me. And yet, I still work to get better at it and am open to it being easier. And, I would love to be able to recommend resources that make it easier, more efficient, and more productive for people I support.

Why? Because even with the experience and positive grantwriting outcomes I’ve had, every proposal feels like pushing a massive boulder uphill. There’s gotta be a way.

Now, I have colleagues who work with AI and have played with writing their own code to train language models on their style of writing. And, supposedly, ChatGPT can model the style of writing samples you feed it. I cannot be the only one who has wondered: could this flashy, new AI thing help write grants?

And today (of course, with a grant deadline looming; next Wednesday, to be exact), I figured I’d check.

So, I created an account* on ChatGPT and tried it out.**

I didn’t want to straightaway ask it to write a proposal. Why? Because I’ve read this article about ChatGPT declaring its love (in a stalkery kind of way) to a journalist and this Twitter thread about training AI to write in your writing style. I was particularly curious about the argument that AI can be a viable tool for structure and labor, if you don’t expect it to work as outsourced thinking.

Instructions matter in writing (once again)

I started by asking it to give me feedback on my writing.

I was going to paste in 2400+ words (the full text of a previous seed-grant proposal of mine). But, it told me that was too much, and I should link to it via Google Drive or other cloud storage options.

How many documents can it use as a sample? Certainly two, right?

I shared two docs via Google Drive, then switched up the prompt:

Okay, specifically, I’m wondering: could you write something that’s 1500 words that is in the style of those two documents?

Then, ChatGPT clarified it was a language model that could only generate text in a similar style. It even provided a caveat: “I cannot guarantee that the generated text will be exactly the same as the style of the documents, as my writing style may differ slightly from the original author’s style. But I can try my best to match the style and tone of the documents.”

Fine, fine. Off we went.

As I watched the bot’s output unfurl in real time, I got a primo reminder that meaningful, clear assignments are a crucial component of writing instruction.

In the first few lines, it wrote…a guide for good science communication. 🤦‍♀️

It wasn’t wrong when it wrote: “Effective science communication is crucial for bridging the gap between scientists and the public, policymakers, and other stakeholders.” It even started with sections on “Know your audience” and “Use plain language***.”

It’s just that I still wanted to know if the bot could give someone a meaningful jump start on a funding proposal. But, I clearly hadn’t asked for that, yet.

But, can you write a grant? (a play-by-play of ChatGPT trying)

So, I stopped it mid-response and said:

The two documents are grant proposals. The piece of writing should be persuasive writing, to help ask for money to pay for this kind of training. Could you please try again with that in mind?

It went for it again. But, I interrupted again, because I wanted to be sure it really was drawing from the documents I linked to. (It didn’t seem to be, because the text examples I provided didn’t contain any of the content the bot generated.)

So, I asked, and it clarified: “I cannot use existing documents to generate new text without explicit instructions on what portions of the original documents to use or reference. However, I can try to match the style and tone of the original documents in a persuasive piece of writing advocating for funding to support training faculty and students on good science communication.”

Well, I don’t train AI, so I don’t know what it would take to provide explicit instructions. So I just got out of the way.

It spit out a blurb that was remarkably generic and read more like a fundraising letter than a grant proposal. For example, the blurb ended:

Screenshot of the end of the first blurb written by ChatGPT. Text reads: "We invite you to join us in this effort and support this program to train faculty and students on good science communication. Together, we can make a meaningful impact on the scientific community and society as a whole."
Screenshot from the conclusion of ChatGPT’s first attempt to write a persuasive text about funding for scicomm training. (Alt text provided)

Mmmmm, not quite.

I asked about using references. It said I needed to tell it which to include and where to put them in the text. Nope, no thanks. I can do that on my own. Skip that.

Then, I asked it for 1500 words. That might be more than an executive summary, but it’s on the way to a seed-grant proposal. At minimum, 1500 words is long enough to require sustained thinking and short enough to be a brief that someone could share with collaborators or program officers, etc.

ChatGPT spit out 1500 words under the title: “Investing in Good Science Communication: A Proposal for Funding.”

It didn’t say anything that was wrong (in the first few paragraphs, that is). But, it didn’t say anything compelling.

Maybe two documents (~4,500 words) just isn’t enough to get it to produce my kind of writing.

So, I asked it to do it again, using several documents as examples, including the first two I linked to earlier in the chat. The documents included:

  • Two IRB exemption request proposals (having to do with survey-based research in science communication courses at the university level and scicomm across a university)
  • One manuscript of a paper (about scicomm) currently under review at a peer-reviewed journal
  • Two grant proposals about scicomm training
  • Two published, peer-reviewed papers about scicomm

All the examples (n=7) were my own writing and/or collaborative writing to which I contributed substantially.

My thinking: A draft in the style of any of them would be workable; together, surely they provide a meaningful corpus. (Again, I don’t work in AI, so this might be laughable. That’s fine. Most folks drafting a grant proposal won’t have produced Google Books’ worth of writing, so they’d also be looking for support from a relatively limited corpus.)
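
A quick aside for anyone who would rather poke at this from the code side than the chat window: feeding a small corpus of writing samples to a model programmatically looks roughly like the sketch below. To be clear, this is not what I did (I stayed in the web interface), and the model name, file names, and prompt wording are placeholders for illustration; you’d need your own OpenAI API key and the openai Python package.

```python
# Minimal sketch (not my actual workflow): paste sample documents directly
# into the prompt and ask a chat model for a draft "in that style."
# Assumes the `openai` Python package (v1+) and an OPENAI_API_KEY in the
# environment; the model name and file names below are placeholders.
from pathlib import Path

from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# The model only "sees" text included in the prompt, so the samples get
# pasted in as part of the message rather than linked to.
samples = [Path(p).read_text() for p in ["proposal_1.txt", "proposal_2.txt"]]
corpus = "\n\n---\n\n".join(samples)

response = client.chat.completions.create(
    model="gpt-4o-mini",  # placeholder model name
    messages=[
        {
            "role": "system",
            "content": (
                "You draft grant-proposal text that matches the style and "
                "tone of the writing samples the user provides."
            ),
        },
        {
            "role": "user",
            "content": (
                f"Here are my writing samples:\n\n{corpus}\n\n"
                "Please draft a roughly 1500-word seed-grant proposal for "
                "science communication training, in this style."
            ),
        },
    ],
)

print(response.choices[0].message.content)
```

Of course, getting the samples in front of the model is the easy part; whether the output is any good is the question the rest of this post answers.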

ChatGPT produced 1500 words organized with the following subheadings:

  • Title: Enhancing Science Communication Skills in Faculty and Students
  • Executive summary
  • Objectives (with 3 objectives stated)
  • Background
  • Proposal
  • Evaluation ← this one actually would help, because eval/efficacy in scicomm is often overlooked, underdeveloped, etc.
  • Conclusion

And, it took 1500 words seriously – it stopped mid-sentence at the end, when it hit the word limit. The last sentence was: “Through our science communication training programs and initiatives, we will improve the dissemination of”.

So, I encouraged it to feel free to complete its sentences at the end, never mind the word count. I expected it would revise what it had already written. But, that probably exposes my naïveté about these tools.

Of course, it just generated a new 1500-word blurb.

This one had the following headers:

  • Project summary
  • Background and need
  • Project description
  • Evaluation and impact ← again, this one nudge could be worth it, if it was consistent. But, that might be contingent upon the sample texts it is provided.
  • Funding (it asked for $200k, unprompted)
  • Conclusion

So, can AI write my #SciComm proposal?

Nope.

I would argue it not only cannot write my proposal, it isn’t a viable tool for starting any scicomm proposal.

Every version it produced was underwhelming.

It made schtuff up left and right. For example, one 1500-word chunk said it was modeled after scicomm training programs at UC Berkeley and University of Michigan, including a bogus reference to a study indicating UMich’s program saw 95% of participants increase their confidence and 92% increase their interest in public and policy engagement. Mind you, that material is not in the examples I provided. And, I have no contacts with folks running scicomm training programs at those universities. My proposal, therefore, couldn’t be modeled on theirs in a credible, robust way.

After seeing those fake research results, I asked ChatGPT to not make up stats or numbers. And, I specified: if it had to use stats or references, please derive them from the sample texts.

Well…that lasted until the final version it spit out. In that one, it cited another bogus survey that indicated >70% of science faculty respondents hadn’t had any formal training in scicomm. That stat is in the realm of plausibility. It would be all too easy for someone less familiar with the field to take that for granted and want to include it in their proposal. Where might that lead? At worst, citing and disseminating fake stuff. At best, hours spent trying to track down a citation to accompany that stat. Cherry picking at its moldiest.

ChatGPT also made up formats for the programs that would be offered, including a consistent-through-all-versions commitment (from ChatGPT) that we provide practice in scicomm. There was even a version where a practicum was built in. While that would be peachy if someone were going to do that, I’ve never written a proposal or study that included a practicum. Why? Because I develop and run programs to enhance capacity, efficacy, and robust approaches to aspects of scicomm that rarely get institutional support. In short, there are already plenty of practice opportunities out there. I’m doing different work. So, that’s ChatGPT conjuring things again.

Takeaways

Overall, the quality of writing was remarkably general, bland, and hand-wavey. It would not be competitive.****

My takeaway here is that ChatGPT ain’t the mechanism for jumpstarting high-quality grant writing or even making grant writing easier.*****


Notes

*The nature of my courses is such that I don’t expect to encounter the plagiarism or lack of original work issues raised by ChatGPT. (Maybe that’s naïve, but that’s my current take.) So, I’m a bit late to the “let’s see what the bot can do” party. What I’m reporting here was my first interaction with ChatGPT.

**If you don’t want to read my commentary, you can just skim a plain-text version of the transcript with annotations here.

***Which of course, is scicomm jargon itself.

****And, I hadn’t realized I might be worried, but this exercise reassured me. Experienced, knowledgeable science communicators who are connected to the evidence base for inclusive, effective scicomm probably don’t need to worry about ChatGPT as a grant competitor (at least not at this stage!).

*****What would really make it easier is standardized, simplified grant formatting, or letting us budget for real compensation for broader impacts and for people, especially students and early career folks. Oh, and substantive, sustained funding for us to offer training and real support for writing grants; compliance is great, but it’s only part of the grant process.
