Terry - you have identified the key skill that is hard to explain when talking to people who have not spent much time with LLMs: the more effort and thought you put into your prompt, the more interesting and potentially productive the results you will get. It is interesting that, for all the advice on prompting to tell the bot what role to play, this recent development with Claude would seem to stymie that practice. But (and I have had many similar situations) I can usually get it to do what I need through trial and error. As you point out, though, that is a skill in itself and something not everyone is comfortable with or willing to do. I love this prompt and I'd be interested to see others you've used successfully.
Actually, Steve, Nick Potkalitsky and I are doing research in an experimental high school English class Nick is teaching. After the semester is over, we will write recommendations for his English Department about establishing a course in the curriculum called AI Theory and Writing. We are also working on a book proposal, hoping to get something out by late summer, with the collection of mentor prompts (and other tutorial prompts) and discussion of student work. The prompts aren’t ready for prime time—it’ll likely be August anyway, and possibly later. I’ve already seen firsthand the power of this language machine.
That sounds fantastic. I'll be very interested to follow that work. Like many English departments, ours is stuck on a very real issue: how do we acknowledge and address the ease with which students can outsource the writing process, while still thinking creatively about ways to leverage the equally real things LLMs can do to help with that process, as your prompts attempt to do? Most of the teachers want nothing to do with it. What is your general response to the most overtly hostile attitudes towards AI in the classroom, and to the claim that it will destroy that ambiguous term you wrote about the other day ("critical thinking")? I teach history, and the arrival of these new Deep Research models poses a similar reconsideration of the purpose of the high school research paper, or at least a reimagining of how most people will be conducting (at the very least) preliminary research going forward. I find it all fertile ground for conversations around pedagogy, curriculum design, and learning in general, but my experience is that most educators are not yet well-versed enough in what's happening to appreciate how fundamentally AI disrupts the skills we teach. Or at least that's my take.
I think you’re right. One quibble: students think they can outsource the writing process until they learn that’s impossible. Start by assigning autobiography and see if they can outsource it. Then move outward on a continuum from autobiography to biography to reports. The level of trust in bot research is very low until they reach reports—there they begin to see fool’s gold.

I’m having conversations with people I’ve been talking to about AI since Nov. 2022. Once in a while I ask, “Have you tried a bot yet?” “No, I haven’t had time,” comes the response. I can’t fathom how anyone can mount a critique of a book they’ve never read.

Yes, deep search can generate a list of references, but beware. The incentive is to fabricate, and fabricate they do. And they don’t accept responsibility for it! They promise to change, but they don’t. I think the manufacturers need to build in stronger ethical restrictions against fabricating sources—right there, don’t water it down, don’t broaden the category. It only takes one dramatic experience of citing a bot fabrication in public to cure anyone of trusting the bot. You can’t trust a bot as far as you can throw it to cite sources (unless it’s Perplexity).

Now is precisely the time to assign a research paper and require students to use the bot—and require them to include detailed logs narrating how they verified the information. The key is NOT saying “you can’t use it.” That’s easy. The key is saying “we want you to learn to use it, but your first lesson must be about trust.” Every kid in the class will have a story to tell about bot fabrications. Teachers, however, must become expert bot whisperers.

I’m not sure I would call verification “critical thinking.” Critical thinking comes in after a source has been verified and is being considered as relevant to an argument or narrative or perspective—or as a counter-argument to rebut. There’s no sense doing that work if you don’t know whether the source is bogus. The only way to learn is through experience. Let them feel the sting of embarrassment and the cost of lost credibility. They’ll laugh at you (silently) if you preach at them. They’ll get it when they find out.

I personally see the bot as limited in deep research, but very useful for research on current events. Read the post I’m about to upload. It gives an example of a proper use of Perplexity as a search tool. It’s the only research tool I trust a bit right now—with current events for sure, less so for old texts.
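To make that verification-log requirement concrete, here is a minimal sketch in Python of the kind of first-pass screen a class could even script. The citations and URLs below are hypothetical placeholders, not real sources. It only catches the crudest fabrications (an unreachable URL, or a page that never mentions the cited title). Passing it is not verification: a human still has to read the source and confirm it actually supports the claim being made.

```python
import requests

# Hypothetical citations, in the form a bot might return them: (title, url).
citations = [
    ("The Printing Press as an Agent of Change", "https://example.com/eisenstein"),
    ("A Study That May Not Exist", "https://example.com/fabricated"),
]

def first_pass_check(title: str, url: str) -> str:
    """Crude screen: does the URL resolve, and does the page text
    mention the cited title? This is only the first entry in a
    verification log, not a substitute for reading the source."""
    try:
        resp = requests.get(url, timeout=10)
    except requests.RequestException as err:
        return f"UNREACHABLE ({err.__class__.__name__})"
    if resp.status_code != 200:
        return f"BROKEN LINK (HTTP {resp.status_code})"
    if title.lower() not in resp.text.lower():
        return "RESOLVES, BUT TITLE NOT FOUND ON PAGE"
    return "RESOLVES AND TITLE FOUND (now read it yourself)"

for title, url in citations:
    print(f"{title}: {first_pass_check(title, url)}")
```

The point of a script like this is pedagogical: students see immediately how many bot citations fail even the laziest possible check, which sets up the harder, human work of confirming that a real source says what the bot claims it says.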
Have you used OpenAI's Deep Research yet? It's pretty impressive. The source issue is definitely baked into the process. I find that the problem is more the quality of sources and the accuracy of cross-referencing. You don't get as many broken links as previously, but the link may point to the right author and the wrong source, or to a source relevant to the topic that doesn't actually support the point being made. Still, for a first pass on a well-crafted question, it's a significant breakthrough. And it will get better. Google Gemini is solid, but both Grok and Perplexity are not as impressive when it comes to actually creating the report (I am talking about their Research models). Perplexity may be more accurate, but both Google and OpenAI did a much broader search and included more sources than either of them. The bottom line is that if you produced reports from all the models on the same topic, you would have a treasure trove of information, the vast majority of it useful. So I'm not sure I agree that it's as limited as you say. And specialized research platforms like Elicit and Consensus are also out there, with access to hundreds of millions of academic papers that can all be searched through AI using a natural language prompt, with very low hallucination rates. It's going to be the future of research.
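To illustrate that "treasure trove" point: if you pooled the source lists from several models' reports, you could deduplicate them and flag sources that more than one model surfaced independently, which are better first-pass candidates. A toy sketch in Python, assuming you have already extracted (model, title, url) tuples from each report; every name and URL below is made up for illustration:

```python
from collections import defaultdict
from urllib.parse import urlparse

# Hypothetical extracted citations: (model, title, url) per report.
reports = [
    ("openai",     "Hallucination Rates in LLM Search", "https://example.org/paper1"),
    ("gemini",     "Hallucination Rates in LLM Search", "https://example.org/paper1?ref=g"),
    ("perplexity", "Citation Accuracy Benchmarks",      "https://example.org/paper2"),
]

def normalize(url: str) -> str:
    """Strip query strings and trailing slashes so the same source
    found by different models collapses to one key."""
    p = urlparse(url)
    return f"{p.netloc}{p.path}".rstrip("/").lower()

pooled = defaultdict(set)   # normalized url -> set of models that cited it
titles = {}                 # normalized url -> one representative title
for model, title, url in reports:
    key = normalize(url)
    pooled[key].add(model)
    titles[key] = title

# Sources surfaced by multiple models independently are a better bet
# for the first pass; they still need human verification.
for key, models in sorted(pooled.items(), key=lambda kv: -len(kv[1])):
    print(f"{len(models)} model(s) [{', '.join(sorted(models))}]: {titles[key]}")
```

This doesn't make any one model's report more trustworthy; it just uses the overlap between reports as a cheap signal for where to start reading.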
Super intriguing. Thanks for sharing your work. I tried teaching before and didn’t enjoy it — this, however, is quite a different thing. 🍻
Thanks!
Terry - my new post is all about the Deep Research LLMs and their implications for research.
https://fitzyhistory.substack.com/