Inching its way to the mountaintop. We have a sketch of a Japanese insect starkly drawn in thick black lines, making its way upward along a rocky surface toward a cloud in a blue sky. Using AI as a research tool over the past few years has recast the significance of this image for me in two ways.
First, the mountain no longer represents an impossibly distant goal, i.e., to comment meaningfully on pedagogical issues beyond Monday morning without coming off as just another loudmouth out to save the world with what I know as a human educator. Even a year ago I wouldn’t have dreamed that I could compare and contrast the story of AI across the ten largest urban school districts from 2022 to 2025. Even further from my horizon would have been the possibility of doing the same analysis across ten rural school districts.
Second, the insect’s incremental progress—once a meditation on patience and persistence—now represents something more dynamic. The increments are bigger. Each small step upward can trigger systematic searches across multiple sites, comparative analyses, and pattern recognition that would have taken weeks of manual work. My dissertation field record ran to more than a thousand pages of single-spaced interview transcripts, observational notes, methodological notes, theoretical notes, and reflective memos organized in sections.
Back then I was analyzing a single middle school in the midst of profound change in its approach to literacy. The stark black lines that then seemed to emphasize the insect’s smallness against the mountain, the lone researcher against institutional knowledge about a multi-ethnic, multi-linguistic, long-established middle school’s English reading and writing curriculum, now emphasize the promise of clarity and definition. AI helps me draw clear lines through complex institutional landscapes, making visible the pathways that were always there but impossibly difficult to trace alone.
In 2025 I have a machine that can systematically search archives, identify patterns across hundreds of documents, and help me maintain analytical rigor even as the scope of inquiry expands. I can verify the accuracy of the information. I can call humans functioning within those institutions to validate the findings, a step above verifying. The mountain hasn’t gotten smaller, but the climb has become more meaningful.
The following provides more detailed information about what I mean. First, I will provide a draft of a paper I’ve written to communicate a perspective on what is happening with AI in two mega school districts in the U.S. This paper is exploratory in several ways and is intended here as an example of a work product made possible through emerging AI research capabilities. Then I’ll articulate the protocols I used to produce this paper and to produce a replicable heuristic that will allow me to research and write solo papers on any of the myriad questions relevant to this domain.
I’m interested in learning your perspective on this emerging hybrid of traditional ethnography, with its insistence on local documentation and its aim of generating grounded theory, and journalism, with its aim of telling the story as it looks on the ground. Here’s a sample paper followed by technical matters.
A Tale of Two Cities
A comparative investigation reveals fundamentally different approaches to artificial intelligence in New York and Chicago classrooms
KEY TAKEAWAY: New York City deployed AI tools first and built policies later through partnerships and pilots. Chicago created comprehensive frameworks before procurement, accepting slower adoption for stronger oversight. Neither approach is clearly superior, but the contrast reveals fundamental tensions between innovation speed and governance deliberation that every school district must navigate.
When ChatGPT burst onto the scene in November 2022, New York City’s public schools made headlines by banning it outright within weeks and then abruptly reversing the ban. Chicago’s response was different. Crickets.
That initial divergence has since revealed, or perhaps crystallized into, two distinct philosophies about how to integrate innovations like artificial intelligence into urban education. If we could access the data, which is unlikely given the limited infosphere of the 1970s, long before the internet, it would be interesting to compare historical accounts of each district’s response to the calculator. In the case of AI, New York moved fast and fixed problems later. Chicago built guardrails first and moved slowly through them.
It appears that Chicago moved fast to integrate calculators. New York lagged significantly behind other states in officially sanctioning calculator use. The state didn’t allow calculators on its Regents exam until 1991, five years after Connecticut became the first state to require them in 1986.
The contrast offers a natural experiment in educational technology adoption. Both districts serve hundreds of thousands of students in complex urban environments. Both face similar pressures from parents, teachers, and technology companies. Yet their strategies for managing AI could hardly be more different.
This investigation traced the money and examined procurement records, board meeting minutes, policy documents, and vendor contracts from both districts spanning November 2022 through October 2025. The research focused on a simple journalistic principle: follow the dollars to understand what districts actually do versus what they say they do.
New York’s Approach: Partnerships, Pilots, and Pivots
After reversing its ChatGPT ban in May 2023, New York City schools announced a partnership with Microsoft that September. The tech giant would provide an AI-powered teaching assistant built on Azure OpenAI Service, developed using data from a platform the district and Microsoft created during the COVID-19 pandemic. Chief Product Officer Zeeshan Anwar stated the goal was to eventually implement the technology in every city classroom. There is no mention of a district request for proposals.
When schools Chancellor David Banks reversed the ChatGPT ban through a May 2023 op-ed, district officials “said, ‘OK, we have a data foundation. ChatGPT and OpenAI are here. Let’s work with Microsoft to bring this into the classroom,’” according to Anwar. The district piloted the “teaching assistant,” an AI model that could answer students’ questions about coding and other topics, in three high school computer science courses, where nearly 100 students asked more than 2,000 questions in a two-week span. Again, there is no mention of a district request for proposals.
In December 2024, New York City’s Department of Education planned to expand its use of an AI-powered reading assistant through a $1.9 million contract with EPS Learning for its product Amira. The contract was scheduled for approval by the Panel for Educational Policy on December 18, 2024. Amira is an AI tutor that listens to students read aloud, provides real-time feedback and interventions, screens for dyslexia risk, and generates progress reports for teachers. The tool can be used across grades K-12 in both English and Spanish. As an aside, the state of Georgia has implemented Amira at a scale involving millions of students.
On December 17, 2024—just one day before the scheduled vote—NYC Comptroller Brad Lander publicly demanded the Education Department withdraw the contract. Lander, who was also running for mayor, raised several critical concerns: the DOE lacked any established policies or guidelines for using AI in classrooms, had not studied the technology’s effectiveness or potential harms, and had not adequately addressed student data privacy protections. His statement emphasized that “before we spend millions on an AI program that could shape our kindergartners’ reading abilities, let’s make sure we’re doing this right.”
The problem: 46,000 students across 162 schools were already using Amira. A Department of Education spokesperson told Chalkbeat the tool had been “used in schools for years.” The district was seeking formal approval for a program operating long before contract authorization. The Education Department withdrew the contract proposal within hours, before the vote.
The scenario highlights a critical accountability gap that emerged during the controversy. Greg Faulkner, the Panel for Educational Policy (PEP) chair, expressed concern about the lack of evidence demonstrating the program’s effectiveness in the 162 NYC schools where 46,000 students were already using Amira.
After the contract was withdrawn, Faulkner stated that “the city had not clearly demonstrated that the program was effective in schools that already use it,” while council member Joseph Borelli also questioned why the district hadn’t clearly shown results in schools already using the tool. Education officials defended the program as “a proven product” operating as “a closed, self-contained network,” relying on the credibility of the product’s supplier to warrant its effectiveness.
The web of incompetence underscores the broader oversight failure—the partnership model allowed extensive implementation and expansion of an AI tool without requiring the DOE to first evaluate whether it actually improved student reading outcomes or even how it worked. This wasn’t just about privacy or policy guidelines; it was about spending millions on a technology without documented proof it worked, even after years of use. Panel members were questioning basic due diligence that should have preceded both the initial adoption and the proposed expansion.
Reactive Policy Adjustments
Warning signs had appeared earlier. At a September 2023 City Council hearing, lawmakers sought clarity on what AI tools were already in use across city schools. While ChatGPT appeared on a list of restricted sites that individual schools could request to unlock, Department of Education officials acknowledged that AI and machine learning tools including speech recognition and language learning applications were already being deployed. Council Member Jennifer Gutiérrez emphasized the need to “ensure its ethical use, and that it enhances rather than detracts from the educational experiences of our city’s students and teachers.”
By December 4, 2024, the Department finally implemented new requirements mandating that all vendors disclose whether their products use artificial intelligence. The standards required vendors to reveal AI technologies during the data privacy compliance process, prohibited using student personally identifiable information to train AI models, and demanded transparency in AI decision-making processes. The mandate came more than two years after ChatGPT’s release and after thousands of students had already used AI-powered tools.
Yet just two weeks later, the Amira contract controversy revealed how insufficient even these new requirements were. The comptroller’s intervention exposed that the city had “yet to reveal many details—or policy guidance for educators about how to use it in their classrooms,” while “some schools are experimenting on their own.” The December 4 vendor disclosure standards addressed data privacy and transparency from vendors, but said nothing about evaluating effectiveness, establishing classroom implementation guidelines, or requiring evidence of educational benefit before widespread deployment. The sequence reveals New York’s operational approach: deploy technology quickly through partnerships and pilots, attempt governance adjustments when the gaps become apparent, then address fundamental questions only when political pressure forces the issue.
The Chicago Approach: Framework Before Function
While New York was reversing its ChatGPT ban and announcing Microsoft partnerships, Chicago remained quiet on instructional AI. The district’s first documented use of AI-powered tools involved administrative software, not classroom applications. Chicago Public Schools deployed Euna Procurement’s (formerly EqualLevel’s) AI-powered savings advisor, which uses artificial intelligence to identify cost savings opportunities through real-time analysis and comparison shopping. The platform ultimately saved the district $1.7 million by helping staff identify lower-cost alternatives from approved contracts.
The choice to start with operations rather than instruction was telling. Chicago seemed comfortable using AI to improve back-office efficiency while remaining cautious about putting it directly in front of students.
The district spent much of 2024 building policy infrastructure. In August of that year, Chicago released a comprehensive AI Guidebook developed jointly by the Office of Teaching and Learning and the Department of Information and Technology Services. The document ran to 42 pages and addressed questions about acceptable use, data privacy, professional development, and phased implementation.
The guidebook established clear timelines. The 2024-2025 school year would serve as a learning period for staff and administrators. Full generative AI integration would not happen until 2025-2026. The framework required that all AI tools appear in the district’s EdTech Catalog and comply with the Student Online Personal Protection Act.
SOPPA, an Illinois state law, creates stringent requirements for educational technology vendors. Companies must complete a formal request for qualifications process and sign contracts guaranteeing student data protection. The law functions as a gatekeeper, slowing the entry of new tools but theoretically improving quality and security.
In March 2024, CPS developed an AI Exploration Rubric in partnership with Education First through a Gates Foundation grant. The rubric guides decisions about when and how to use AI by asking: “Should we use AI for this purpose? Could we, from an operational, fiscal, and ethical perspective? How would we implement it?” This framework was designed to evaluate potential use cases systematically rather than adopt tools opportunistically.
The Chicago approach prioritized deliberation over speed. Policy came before procurement. Stakeholder input preceded vendor selection. The district accepted slower innovation adoption in exchange for clearer frameworks and stronger protections.
Leadership Instability and Its Consequences
Pedro Martinez served as CEO throughout the entire ChatGPT era, from November 2022 through June 2025. The AI Guidebook emerged during his tenure. But Martinez’s final months were consumed by conflict with Mayor Brandon Johnson and the Chicago Teachers Union over budget issues and contract negotiations.
The entire school board resigned in October 2024 rather than fire Martinez. The mayor appointed replacements specifically to remove the CEO. They accomplished that goal in December 2024. Martinez remained in his position under a temporary restraining order until his contract ended in June 2025, when he departed to become Massachusetts Commissioner of Education.
Dr. Macquline King took over as interim superintendent in June 2025. The board launched a national search for a permanent CEO. That search continued as of October 2025, with no resolution in sight.
This instability likely affected AI implementation timelines. Major technology initiatives require sustained executive leadership and board support. Chicago had neither during the critical 2024-2025 period when the district was supposed to be conducting its “learning year” before full AI integration.
NYC Chancellor David Banks’ retirement occurred amid a complex web of federal corruption investigations into Mayor Eric Adams’ administration. On September 4, 2024, federal agents raided Banks’ home and seized his personal and work phones, along with those of his fiancée, First Deputy Mayor Sheena Wright.
The investigation centered on Banks’ brother Terence Banks, who operated a consulting firm called The Pearl Alliance that represented multiple companies with city contracts. Federal authorities were investigating whether a bribery scheme involved city contracts being awarded to companies that hired Terence Banks as a consultant. Within weeks of hiring Terence Banks, for example, at least one STEM education company secured a private meeting with Chancellor David Banks.
Methodology and Evidence
This analysis rested on systematic examination of public records from both districts. The research traced procurement contracts, board meeting minutes, policy documents, and news coverage from November 2022 through October 2025.
For New York, primary sources included Panel for Educational Policy meeting agendas and archives, Department of Education procurement pages, vendor compliance lists, and Comptroller reports. The investigation also drew on reporting from Chalkbeat and City & State New York.
For Chicago, sources included Board of Education meeting documents, procurement databases, the LearnPlatform EdTech Catalog, the AI Guidebook, and SOPPA vendor compliance records. News coverage came from Chalkbeat Chicago, GovTech, and education technology trade publications.
All claims in this report link to functioning URLs that readers can verify. Where documents required navigation through multiple pages or were embedded in larger meeting packets, the access pathway has been documented.
The research faced limitations. Some requests for proposals may exist behind vendor portals requiring credentials that researchers cannot access. Contracts below reporting thresholds do not appear in public databases. Pilot programs at individual schools may operate without triggering formal procurement processes.
What the Difference Means
The New York and Chicago approaches emerge from fundamentally different institutional ecosystems rather than simple philosophical disagreements about innovation speed.
New York’s pattern reflects a governance structure where mayoral control concentrates decision-making authority and established vendor relationships can bypass competitive procurement. The Microsoft partnership materialized without visible public bidding, suggesting negotiations occurred through channels designed for enterprise-scale technology deployments. This approach serves districts facing intense pressure from parents demanding cutting-edge educational resources and media coverage that frames technology adoption as competitive advantage.
The pilot-to-contract pipeline that allowed 46,000 students to use Amira before formal board approval reveals decentralized implementation authority. Individual principals or network leaders apparently possessed enough autonomy to adopt AI tools at the school level. Central administration then sought retroactive authorization for programs already embedded in classroom practice. This sequence inverts traditional procurement logic where board approval precedes deployment.
The Comptroller’s objection exposed accountability mechanisms functioning as backstops rather than gatekeepers. Financial oversight caught the transparency problem after implementation, not before. The subsequent mandate requiring AI disclosure from all vendors came as reactive policy adjustment triggered by controversy. This pattern suggests that New York’s governance model tolerates experimentation at the edges of established procurement rules, relying on multiple oversight actors to catch problems that emerge during implementation.
The approach carries institutional logic beyond speed preferences. Partnerships with major technology companies provide access to enterprise support, regular software updates, and integration with existing platforms. Schools already using Microsoft products for email and document management can add AI capabilities without switching ecosystems or retraining staff on new interfaces. The transaction costs of maintaining relationships with established vendors are lower than constantly evaluating new entrants through competitive processes.
But this model creates asymmetric information problems and concentrated vendor power. Without competitive bidding, districts cannot easily assess whether partnership terms represent fair market value. Teachers and principals making adoption decisions at the school level lack comprehensive information about data privacy practices or long-term cost implications. Parents discover their children are using AI tools through news coverage rather than transparent advance notification.
Chicago’s pattern reflects different institutional constraints and political calculations. Illinois SOPPA law creates mandatory vendor screening that the district cannot waive even for established companies or emergency circumstances. The statute embeds data protection requirements in state law rather than local policy, raising the stakes for compliance failures beyond district-level political accountability.
The AI Guidebook’s development through joint work between Teaching and Learning and Information Technology Services suggests internal bureaucratic negotiation rather than executive directive. Multiple departments needed to reach consensus on framework language, acceptable use definitions, and implementation timelines. This process takes months and requires reconciling competing priorities about instructional innovation versus technical security.
The phased timeline designating an entire school year as a learning period before full deployment reflects risk calculation shaped by Chicago’s governance turbulence. Leadership instability makes major technology initiatives politically vulnerable. A CEO facing potential termination has limited incentive to champion bold programs that successors might reverse or blame for problems. An interim superintendent managing a national CEO search lacks authority to make long-term commitments that constrain whoever assumes permanent leadership.
This analysis also reveals an emerging pattern in Illinois education where suburban school districts around Chicago are successfully piloting AI educational tools like DFFIT.me and Magic School while Chicago Public Schools (CPS) itself has been more cautious in implementation despite developing comprehensive policy frameworks.
Implementation Divide
Suburban districts including Bremen District 228, Orland District 230, and Oak Lawn District 229 have actively deployed AI tools during the 2024-2025 school year. Bremen District 228 purchased DFFIT.me (also known as Diffit), which helps teachers adapt classroom materials to different languages, reading levels, and student interests, and is piloting Magic School, which provides students with AI-powered brainstorming and review tools while maintaining teacher oversight. These districts expanded their technology committees, conducted summer training sessions, and embedded AI guidelines into discipline codes rather than imposing outright bans.
Chicago Public Schools developed a comprehensive AI Guidebook in 2024 through a partnership with AI for Education, but committed to full generative AI integration only for the 2025-2026 school year, designating 2024-2025 as a “learning period”. The guidebook emphasizes ethical AI use, data privacy, cross-departmental collaboration, and phased implementation with ongoing professional development.
The implementation disparity stems from several structural differences. Suburban districts benefit from smaller scales, more stable leadership, and greater operational autonomy. Meanwhile, CPS faced significant organizational demands during 2024-2025, including intensive contract negotiations with the Chicago Teachers Union that lasted nearly a year and concluded in March 2025 without strike action for the first time in 15 years. These negotiations addressed 142 pages of proposals covering facilities, staffing, and student support, consuming substantial central office attention and resources.
Research indicates that affluent suburban districts are more likely to provide AI training for teachers, reflecting broader resource and capacity disparities. The suburban districts’ ability to move quickly with AI adoption demonstrates how jurisdictional autonomy within the same state regulatory framework enables experimentation that larger urban systems find more challenging to execute.
Neither approach reflects pure rationality or simple philosophical preference. Both emerge from institutional contexts that constrain available options and shape incentive structures for decision-makers. New York’s governance model concentrates authority and rewards rapid action. Chicago’s distributes power and penalizes unilateral moves. New York’s political environment emphasizes competitive positioning and innovation leadership. Chicago’s emphasizes constituent protection and bureaucratic legitimacy.
The vendor landscape also shapes possibilities differently in each city. Microsoft’s aggressive pursuit of educational market share makes partnership attractive in districts with existing enterprise relationships. Smaller AI startups like EPS Learning rely on pilot programs to gain traction but lack resources to navigate complex procurement bureaucracies. The Chicago SOPPA compliance process may systematically favor established educational publishers adding AI features to existing products over venture-backed startups building AI-first tools.
Financial constraints operate differently across the two contexts. New York’s budget scale and political visibility attract vendor interest and partnership proposals that smaller districts never see. Chicago’s budget crisis and leadership battles over pension obligations created fiscal uncertainty that makes long-term technology commitments politically risky. Districts contemplating million-dollar AI contracts need confidence in multi-year budget stability that Chicago lacked during the critical 2024-2025 period.
The outcomes extend beyond simple trade-offs between speed and caution. New York’s approach produces uneven implementation where some schools and some students gain AI access while others do not, depending on principal initiative and school-level resources. Chicago’s approach produces slower district-wide adoption but potentially more equitable distribution once implementation begins, since centralized policy frameworks establish baseline access expectations.
Part Two
Introduction to the AI Ethnographic Journalism Online Research Design Protocol
What This Document Does
Using AI as a research tool over the past few years has recast the significance and meaning of ethnography for me. First, it suggested a methodological expansion beyond traditional ethnographic boundaries. Educational anthropology has long relied on participant-observation: researchers embed in schools, classrooms, and communities, building trust over time to understand social processes from within.
But local embeddedness struggles with institutional scale and comparative analysis. What if we recognized our access to the already existing data that large bureaucracies create through extensive public documentation of their decision-making, produced not for researchers but for boards, communities, and accountability systems?
Studying these institutional archives systematically across multiple sites creates a form of “distributed observation”—not the deep immersion of traditional ethnography, but rigorous comparative analysis of how organizations document their own choices, debates, and directions. The insight isn’t about being a fly on the wall; it’s about recognizing that the walls themselves now speak through the records institutions produce.
This protocol is a practical guide for researchers tackling complex, multi-site comparative studies—specifically designed for investigating how large institutions are responding to AI. It walks you through a systematic approach that transforms an overwhelming research challenge into a manageable, rigorous investigation.
At its core, the protocol teaches you how to use AI as a research design partner, a data collection assistant, a data analysis team member, and a reflective discussant. Through structured dialogue, you’ll build customized data collection templates tailored to your specific research questions. Then you’ll test these templates by creating “access guides”—detailed maps showing where to find the documents, policies, meeting minutes, and budget trails you need for each institution you’re studying.
In this case, I began by wondering what insights might be gleaned from comparing institutional responses from 2022 to 2025 across the ten largest urban school districts. Which are the ten largest urban school districts? I used Perplexity to refine criteria for “urban” and decided to keep the term but rely on the “large” criterion. Later, I’ll look into how we contour “urban” in our local ontologies. I was interested in tracking the AI phenomenon across institutions by following the money.
Once I had a suitable list of large districts, I tasked the AI with providing raw, public data stored online by each district that might give me access to information about allocations and policies regarding AI. The process required patience and realistic expectations. Claude needed multiple searches per district. Many documents wouldn’t have direct URLs—they’d be embedded in board meeting packets, referenced in budget presentations, or mentioned in superintendent reports without being published separately.
What I requested specifically was this: “Create an access guide showing me where to look for this information, not pretending you can fill every blank.” This reframing was crucial. Instead of asking AI to magically produce comprehensive data, I was asking it to systematically map the entry points into each district’s information system. Where are board archives? How far back do they go? Is there a searchable database or just chronological listings? Where does the district publish procurement information? Are there dedicated AI policy documents or is AI addressed within broader technology policies?
The resulting access guides documented both availability and absence. They showed which districts had centralized, accessible information systems and which required piecing together information from multiple sources. They identified where formal documentation existed versus where decisions seemed to happen through informal channels or embedded within other processes. And critically, they provided enough strategic intelligence for me to know whether this research was actually doable—and if so, what combination of online research, public records requests, and institutional outreach would be required.
The process unfolded iteratively. Claude would search for a district’s board meeting archive, fetch the actual page, document the URL and structure (searchable database versus chronological PDF listings), note the date range available, and identify how meeting materials were organized (agenda packets, minutes, supporting documents). Then procurement pages, policy manuals, budget presentations, technology plans. Each search generated new searches as Claude followed trails—a board meeting mentioned a vendor partnership, triggering a search for that specific agreement; a budget line item referenced a previous RFP, prompting a search in archived procurement records.
What accumulated wasn’t pristine data but operational intelligence: NYC’s board meetings go back to 2008 and are searchable, but AI discussions are scattered across multiple agenda items rather than concentrated in technology committee meetings. Los Angeles has a vendor transparency portal, but contracts often reference AI capabilities without flagging them explicitly. Chicago’s RFP database exists but requires understanding their category system—AI might appear under “instructional technology,” “data systems,” or “professional development” depending on the application.
These access guides transformed an overwhelming research scope into a concrete action plan. For each district, I now knew: what’s readily available online, what requires deeper searching within district websites, what likely needs public records requests, and what patterns of information organization might signal broader approaches to AI governance. The research became doable not because it got easier, but because it became specific.
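One way to keep these access guides consistent across ten districts is to record each guide as a structured entry rather than free-form notes. The sketch below is a minimal illustration in Python; the field names and the sample entry are hypothetical, drawn loosely from the NYC observations above rather than from any actual guide file.

```python
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class AccessGuideEntry:
    """One district's documented entry points; all field names are illustrative."""
    district: str
    board_archive_url: Optional[str] = None      # link to the board/PEP meeting archive
    archive_coverage: Optional[str] = None       # how far back the records go
    archive_format: Optional[str] = None         # "searchable database" vs "chronological PDFs"
    procurement_url: Optional[str] = None        # where RFPs and contracts are published
    ai_policy_documents: list[str] = field(default_factory=list)
    known_gaps: list[str] = field(default_factory=list)   # items likely needing records requests
    notes: str = ""

# A hypothetical entry capturing the kind of operational intelligence the guides held
nyc = AccessGuideEntry(
    district="New York City DOE",
    archive_coverage="board materials back to 2008",
    archive_format="searchable database",
    known_gaps=["partnership agreements not posted as standalone documents"],
    notes="AI discussions scattered across multiple agenda items",
)
```

Keeping the entries structured this way also makes it easy to see, at a glance, which districts have centralized information systems and which will require piecing together sources.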
The process moves through five clear phases: developing your template through iterative conversation, operationalizing it across multiple research sites, creating strategic access guides through pilot testing, scaling to comparative analysis, and finally executing the full research program. You’ll learn when to search for URLs, when to file public records requests, and how to turn both what you find and what’s conspicuously missing into meaningful data.
Why I Created This Protocol
I developed this approach while designing a study of AI implementation across the ten largest U.S. urban school districts. The research goal is ambitious: trace how these massive educational systems are integrating AI by following the money, the procurement processes, the policy debates, and the actual rollout initiatives from November 2022 to the present.
Previously, this kind of large-scale comparative research felt daunting, prohibitively so. How do you systematically track board meetings, RFPs, vendor relationships, budget allocations, stakeholder responses, and policy evolution across ten different bureaucracies, each with its own organizational logic and information architecture? The data collection alone seemed overwhelming before you even got to analysis.
This protocol emerged from experimenting with a different approach: using AI as a research design partner to make formerly daunting investigations feasible. By iteratively building comprehensive templates and systematically mapping data access points, I discovered I could ask questions that previously seemed beyond reach.
The Questions This Enables
By following the money trail and tracking the plot line of major AI initiatives across these ten districts, I can now pursue questions like:
Where are we? What does the current landscape of urban educational AI implementation actually look like when you move beyond tech industry marketing and pilot program press releases?
What mistakes are easily fixable? Where have districts made procedural, procurement, or policy missteps that could be corrected with relatively straightforward interventions?
Where have we fallen into the deep end? What systemic problems have these districts inadvertently created—decisions that have already locked them into problematic vendor relationships, set troubling precedents, or established patterns that will be genuinely difficult to reverse?
The comparative framework is essential here. By studying ten districts simultaneously with identical methodological tools, patterns emerge. You can distinguish between isolated mistakes and sector-wide problems. You can identify which districts are navigating challenges more effectively and why. You can see what transparency looks like—and what opacity reveals.
This protocol makes that kind of systematic, evidence-based institutional analysis achievable for individual researchers working without large research teams or extensive institutional support.
Protocol: Using Claude’s New Artifacts to Create Research Design Templates and Access Guides
Overview
This protocol guides you through using Claude’s Artifact feature to develop customized research templates and generate specific research access guides for complex, multi-site investigations.
Phase 1: Template Development Through Iterative Dialogue
Step 1: Establish Research Context
Your Actions:
Clearly state your research purpose and end goal (e.g., “writing a book,” “comparative analysis,” “policy study”)
Identify the scope (number of sites, timeframe, subject matter)
Specify your intended use (deep dive vs. broad survey)
Example Opening:
“I want a document I can use to explore how the 10 largest U.S. school districts are addressing AI in classrooms. This is for writing a book covering November 2022 to present, with ongoing updates.”
Step 2: Clarify Data Quality Requirements
Your Actions:
Specify primary vs. secondary source preferences
Define what constitutes credible evidence in your field
Articulate any methodological approach (ethnographic, quantitative, mixed-methods)
Example Specification:
“I want ethnographic data from district-created documents—Board minutes, administrative memos, teacher publications—not second-hand accounts.”
Step 3: Expand Through Iterative Questioning
Process:
Accept Claude’s initial framework proposal
Identify missing dimensions through prompts like:
“What high-yield research topics are we missing?”
“What other data sources can you infer from what we have?”
Evaluate each suggestion for inclusion
Repeat until comprehensive
Key Questions to Ask:
What stakeholder voices are absent?
What organizational levels aren’t represented?
What longitudinal elements are missing?
What compliance/oversight mechanisms should be tracked?
Step 4: Define Detail Level and Format
Your Actions:
Clarify whether you need “minimal detail fields” (quick data capture) or “extensive narrative space” (deep analysis)
Specify intended writing output (research notes, chapter drafts, comparative analysis)
Choose format based on your workflow
Critical Decision Point:
“I want to use this as a research tool for deep dives and chapter writing, not quick surveys. I need extensive space for documentation and analysis.”
Step 5: Structure the Analytical Framework
Your Actions:
Request sections for ongoing analysis (memos, emerging themes, questions)
Build in comparative capability (cross-site notes)
Include progress tracking mechanisms
Add chapter/writing outline spaces
Step 6: Generate the Artifact
What Happens:
Claude creates the complete template in an Artifact
Template appears in a separate panel for easy reference
You can copy, modify, or regenerate as needed
Best Practice:
Save the Artifact immediately as your MASTER template before making any modifications.
Phase 2: Operationalizing the Template
Step 7: Determine File Management Strategy
Decision Tree:
If researching multiple sites:
Create individual files (one per site) from master template
Use clear naming conventions (SiteName_ProjectName.docx)
Maintain separate master template file
If using collaborative tools:
Consider Scrivener for book-length projects
Consider Notion/Airtable for structured comparison
Consider separate Word files for simplicity (Mac users especially)
Your Actions:
Create folder structure
Name files according to your sites
Copy master template into each file
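The folder-and-file setup above can be scripted so the naming convention stays consistent as sites are added. This is a minimal sketch under stated assumptions: the project name, site list, and master-template path are placeholders, and the script simply copies the master file byte-for-byte.

```python
import shutil
from pathlib import Path

# Placeholder values: substitute your own project name, sites, and master file
PROJECT = "AI_District_Study"
SITES = ["NYC", "Chicago", "LosAngeles"]          # one working file per site
MASTER = Path("templates/MASTER_template.docx")   # pristine copy, never edited directly

def create_site_files(project: str, sites: list[str], master: Path) -> None:
    """Copy the master template into per-site files using SiteName_ProjectName naming."""
    out_dir = Path(project)
    out_dir.mkdir(exist_ok=True)
    for site in sites:
        target = out_dir / f"{site}_{project}{master.suffix}"
        if not target.exists():                   # never overwrite existing working notes
            shutil.copy(master, target)

if __name__ == "__main__":
    create_site_files(PROJECT, SITES, MASTER)
```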
Step 8: Request Strategic Guidance
Your Actions:
Ask about publication implications
Explore advantages/disadvantages of sharing methodology
Discuss timing of release
Consider intellectual property concerns
Example Questions:
“What are the advantages and disadvantages of publishing this template?”
“Should I wait until after data collection to share my framework?”
Phase 3: Creating Research Access Guides
Step 9: Select Focal Area for Pilot Research
Your Actions:
Choose one specific research category to explore deeply (e.g., procurement, RFPs, board meetings, or policies, as in the example paper comparing NYC and Chicago)
Select one site for pilot investigation
Set realistic expectations about URL availability
Example:
“Let’s focus specifically on RFPs as an experimental case. Start with NYC.”
Why This Works:
Tests data accessibility before full commitment
Reveals patterns in how information is organized
Identifies gaps requiring alternative strategies (public records requests)
Refines your understanding of the research landscape
Step 10: Guide Claude’s Search Process
Your Actions:
Explicitly request: “Populate this section with URLs where I can access this information”
Understand Claude will need multiple searches
Accept that many documents won’t have direct URLs
Be patient with the iterative search process
What Claude Will Do:
Conduct systematic web searches for official sources
Fetch specific documents and pages
Document what’s accessible vs. what requires other methods
Create an “Access Guide” rather than populating every blank
Step 11: Review and Refine the Access Guide
What You’ll Receive:
Known URL entry points (board archives, procurement pages, policy manuals)
Search strategies for district websites
Public records request information
Documentation of gaps and limitations
Insights about how the district organizes information
Your Actions:
Review Claude’s findings
Test provided URLs
Note additional questions or missing pieces
Request clarification on access methods
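“Test provided URLs” can be partly automated: a quick status check flags dead or redirected links before you build analysis on them. The sketch below assumes the standard requests library and a plain list of URLs copied from the access guide (the sample URL is a placeholder); it is a convenience check, not a substitute for opening the documents yourself.

```python
import requests

def check_urls(urls: list[str], timeout: int = 15) -> dict[str, str]:
    """Return a simple status note for each URL copied from an access guide."""
    results = {}
    for url in urls:
        try:
            resp = requests.get(url, timeout=timeout, allow_redirects=True)
            note = f"HTTP {resp.status_code}"
            if resp.history:                      # record redirects; district archives move often
                note += f" (redirected to {resp.url})"
            results[url] = note
        except requests.RequestException as exc:
            results[url] = f"failed: {exc.__class__.__name__}"
    return results

# Usage: paste URLs from the access guide into this list before running
if __name__ == "__main__":
    for url, status in check_urls(["https://example.org/board-archive"]).items():
        print(url, "->", status)
```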
Step 12: Extract Strategic Insights
What to Look For:
Patterns in how information is made public (or hidden)
Vendor/partnership approaches vs. formal RFPs
Timeline indicators in available documents
Policy development processes
Enforcement mechanisms
Example Insights:
“Key Finding: NYC DOE has NOT issued formal standalone AI RFPs. Instead, AI enters through partnerships, pilot programs, and expanded existing contracts.”
Phase 4: Scaling and Comparative Analysis
Step 13: Expand to Additional Sites
Your Actions:
Apply lessons from pilot to next site
Request similar access guides for comparison
Note differences in information accessibility across sites
Build comparative framework
Strategic Approach:
Do 2-3 sites thoroughly before scaling to all sites
Refine what you’re looking for based on initial findings
Create standardized tracking for cross-site comparison
Step 14: Create Comparison Mechanisms
Your Actions:
Build spreadsheet for systematic comparison (alongside narrative templates)
Track key metrics across all sites
Document patterns and outliers
Use comparison to generate research questions
Example Comparison Matrix:
District | Total RFPs | Approach | Timeline | Budget | Key Vendors
NYC | 2 | Partnership | Sept 2023 | $1.9M (withdrawn) | Microsoft, EPS
Chicago | ? | ? | ? | ? | ?
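The same matrix can be maintained as a small dataset alongside the narrative templates, which makes cross-site sorting and filtering trivial as districts are added. A minimal sketch with pandas follows; the rows simply restate the example above, and the missing Chicago values are left as NA rather than guessed.

```python
import pandas as pd

# Columns mirror the example comparison matrix; values restate the table above
comparison = pd.DataFrame(
    [
        {"District": "NYC", "Total RFPs": 2, "Approach": "Partnership",
         "Timeline": "Sept 2023", "Budget": "$1.9M (withdrawn)",
         "Key Vendors": "Microsoft, EPS"},
        {"District": "Chicago", "Total RFPs": pd.NA, "Approach": pd.NA,
         "Timeline": pd.NA, "Budget": pd.NA, "Key Vendors": pd.NA},
    ]
)

comparison.to_csv("district_comparison.csv", index=False)   # shareable working copy
print(comparison.to_string(index=False))
```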
Step 15: Integrate Findings Into Template
Process:
Copy URLs and access information into appropriate template sections
Document initial findings in analytical memo sections
Note questions for follow-up research
Track research progress using built-in checklist
Phase 5: Advanced Research Execution
Step 16: Develop Public Records Request Strategy
When URLs Aren’t Enough:
Use access guide to identify what’s missing
Draft targeted public records requests
Document request process in template
Note response timelines and outcomes
Step 17: Monitor and Update
Ongoing Actions:
Set up alerts for new documents
Regularly check meeting archives
Update templates as new information emerges
Maintain version control
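“Regularly check meeting archives” can be supported by a lightweight change detector that hashes each watched archive page and reports when its content changes. The sketch below is a bare-bones illustration using the standard library plus requests; the watched URL is a placeholder, and a real setup would also want polite scheduling and logging.

```python
import hashlib
import json
from pathlib import Path

import requests

STATE_FILE = Path("archive_hashes.json")          # remembers what each page looked like last time
WATCHED = {"example-district-board": "https://example.org/board-meetings"}  # placeholder URL

def check_for_updates(watched: dict[str, str]) -> list[str]:
    """Return the names of watched archive pages whose content has changed."""
    previous = json.loads(STATE_FILE.read_text()) if STATE_FILE.exists() else {}
    changed = []
    for name, url in watched.items():
        digest = hashlib.sha256(requests.get(url, timeout=30).content).hexdigest()
        if previous.get(name) != digest:
            changed.append(name)
        previous[name] = digest
    STATE_FILE.write_text(json.dumps(previous, indent=2))
    return changed

if __name__ == "__main__":
    for name in check_for_updates(WATCHED):
        print(f"Archive updated: {name}")
```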
Step 18: Synthesize for Writing
Using Your Populated Templates:
Review analytical memos across all sites
Identify cross-cutting themes
Use chapter outline sections to structure narrative
Draw from primary source documentation with proper citations
Critical Success Factors
Do’s:
✅ Start with clarity about your research purpose and scope
✅ Iterate extensively on template design before data collection
✅ Test with pilot site before scaling
✅ Accept that not everything will have a URL
✅ Use separate Word files on Mac (Master Document feature doesn’t work)
✅ Maintain pristine master template separate from working copies
✅ Build in analytical space, not just data capture fields
✅ Request strategic guidance about methodology and access
Don’ts:
❌ Expect Claude to populate every blank with a URL automatically
❌ Skip the iterative template development phase
❌ Try to research all sites simultaneously without pilot testing
❌ Overlook the value of “negative findings” (what’s NOT available)
❌ Forget to document your research process itself
❌ Rush to publication before data collection is complete
❌ Assume district websites are well-organized or comprehensive
Key Insights from Example Case
Template Evolution:
Started with basic framework → expanded to 12+ major sections
Added two-tier source system (primary/secondary)
Built in multiple stakeholder perspectives
Included longitudinal tracking and analytical workspace
Access Guide Reality:
Direct URLs exist for: board meetings, some policies, procurement portals
Requires detective work: RFPs buried in meeting packets, vendor lists, news monitoring
Many documents need: public records requests, personal outreach, institutional access
Unexpected finding: Some districts avoid formal RFPs entirely (partnerships instead)
Research Value:
The absence of formal processes is data
Access guides reveal district transparency levels
Comparison across sites generates insights about governance approaches
Methodology documentation strengthens research credibility
Conclusion
This protocol transforms Claude’s Artifacts feature from a simple document creator into a sophisticated research design tool by:
Co-creating customized templates through iterative dialogue
Testing accessibility and feasibility through focused pilots
Generating site-specific access guides with actionable strategies
Building comparative frameworks for multi-site analysis
Documenting both findings and methodology for rigorous scholarship
The key is strategic iteration: refine your template through dialogue, test your access through pilot research, then scale systematically while maintaining analytical rigor throughout the process.