
October 14, 2025 at 12:00 PM ET

Webinar Recap: AI for Research Administration (AI4RA)

Bridging Worlds Between Research Administration and Artificial Intelligence


Recap of the October 14, 2025 Webinar | Presented by AI4RA and Atom Grants

How can AI actually make life easier for research administrators—without risking compliance or adding chaos? That question shaped Atom Grants’ 10th installment in our AI in Research webinar series, featuring Nate Layman (University of Idaho) and Nathan Wiggins (Southern Utah University), co-leads of the AI for Research Administration (AI4RA) project. Their NSF-funded work explores how AI can streamline research administration and build trustworthy, repeatable workflows inside sponsored programs offices.

Understanding What AI Is—and What It Isn’t

Layman and Wiggins opened with a grounding reminder: AI is not magic reasoning—it’s “a statistical parrot.” It predicts the next likely word, not the most logical one. That means it can generate, extract, and transform data, but not make management decisions or exercise accountability.

A computer can never be held accountable, therefore a computer must never make a management decision.

That distinction—between AI and algorithmic automation—framed the rest of the session. AI is probabilistic and creative; data science is deterministic and repeatable. Understanding which you need for a task is essential.

Three Core AI Use Cases in RA

AI4RA’s framework for assessing whether to use AI focuses on three categories:

  1. Extraction: Pulling structured information from documents (e.g., RFPs, awards, or budgets).
  2. Transformation: Converting information from one format to another (e.g., comparing revisions or generating summaries).
  3. Generation: Drafting or brainstorming new text or code based on existing context.

Extraction, they agreed, is the most reliable and measurable starting point for research administration—because accuracy can be tested quantitatively.
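Because extraction output can be checked against a hand-labeled answer key, a minimal scoring harness is easy to sketch. The field names and values below are purely illustrative, not taken from the webinar:

```python
# Hypothetical sketch: scoring AI-extracted RFP fields against a
# hand-checked gold set. Fields and values are illustrative only.
gold = {
    "sponsor": "NSF",
    "deadline": "2026-01-15",
    "max_award": "500000",
}
extracted = {
    "sponsor": "NSF",
    "deadline": "2026-01-15",
    "max_award": "250000",  # model misread the budget cap
}

def field_accuracy(gold: dict, extracted: dict) -> float:
    """Fraction of gold fields the model reproduced exactly."""
    correct = sum(1 for k, v in gold.items() if extracted.get(k) == v)
    return correct / len(gold)

print(field_accuracy(gold, extracted))  # → 0.6666666666666666
```

Running a harness like this over a few dozen hand-checked documents is one way to get the quantitative accuracy number the speakers treat as a prerequisite to deployment.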

Trust, Accuracy, and “Humans in the Loop”

When asked how to verify AI-generated results, Wiggins emphasized narrow, specific use cases:

When we make AI a tool—especially in research administration—that’s when we get the most out of it.

Layman added that human oversight remains non-negotiable. AI can surface insights faster, but RAs and OSP staff remain the accountable authorities. The goal isn’t replacement, but amplification—a force multiplier for existing staff capacity.

Tools, Models, and Practical Infrastructure

Both universities use multiple AI models—OpenAI, Anthropic, Google’s Gemini—testing each against specific RA tasks. They stressed measuring accuracy as a prerequisite to deployment. When institutions can’t share data externally, local hosting is a viable option: even small schools can run compact GPU servers to keep data private under HIPAA and other compliance frameworks.

You might think your university is too small for local AI, but it actually might not be.

Building Reproducibility: The “Vandalizer” Platform

A standout example from the University of Idaho was Vandalizer, an internal workflow system that standardizes how staff interact with AI. By codifying prompts and measuring accuracy, Vandalizer ensures that every team member uses the same procedure—making results repeatable and auditable.

Wiggins likened it to learning through small wins:

When we start small—like using AI to flag specific terms in award language—we build the momentum and literacy to tackle larger challenges.
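A term-flagging "small win" like the one Wiggins describes can even start without AI at all: a deterministic keyword pass is a baseline to measure a model against, in keeping with the AI-versus-data-science distinction drawn earlier. The watch list below is a hypothetical example, not AI4RA's actual term set:

```python
import re

# Hypothetical baseline: deterministically flag risk-related terms in
# award language before a human reviews the clauses. Term list is
# illustrative only, not AI4RA's actual watch list.
FLAG_TERMS = ["export control", "publication restriction",
              "cost sharing", "prior approval"]

def flag_terms(award_text: str, terms=FLAG_TERMS) -> list[str]:
    """Return the watch-list terms that appear in the award text."""
    return [t for t in terms
            if re.search(re.escape(t), award_text, re.IGNORECASE)]

clause = ("The recipient must obtain prior approval before "
          "re-budgeting; cost sharing is required.")
print(flag_terms(clause))  # → ['cost sharing', 'prior approval']
```

An AI model can then be asked the same question, and its hits compared against this deterministic baseline, which is exactly the kind of narrow, measurable task the speakers recommend starting with.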

AI Ethics, Security, and Policy

Security and compliance questions dominated the chat. The speakers recommended:

  • Using AI products with institutional data agreements (e.g., Microsoft Copilot).
  • Hosting local models for sensitive data.
  • Disclosing AI use transparently in proposals, following emerging sponsor guidance (AHA, NSF).

They also flagged the TAMPE Framework (Task, Model, Prompt, Evaluation, Reporting), developed at Idaho, to help RAs structure, evaluate, and document AI-assisted workflows responsibly.
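One way to picture the framework in practice is a per-workflow record that captures all five elements so a colleague can rerun and audit the same procedure. The field names and values here are illustrative assumptions, not the framework's official schema:

```python
# Hypothetical per-workflow record covering the framework's five
# elements (Task, Model, Prompt, Evaluation, Reporting).
# All field names and values are illustrative, not an official schema.
workflow_record = {
    "task": "Extract the submission deadline from sponsor RFPs",
    "model": "a vendor model covered by an institutional data agreement",
    "prompt": "Return the submission deadline in YYYY-MM-DD format.",
    "evaluation": {"gold_set_size": 50, "exact_match_accuracy": 0.94},
    "reporting": "logged per run in the office's shared workflow register",
}

# Completeness check before the workflow is shared with the team.
required = {"task", "model", "prompt", "evaluation", "reporting"}
assert required <= workflow_record.keys()
```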

Live Demos: From Prompts to Prototypes

Wiggins closed with a live demo—building a prototype effort reporting system in under ten minutes using ChatGPT-generated code.

Would we use it in production? Probably not. But now I can walk into my IT meeting with something tangible to react to.

The takeaway: even “lightweight” AI prototypes can make collaboration faster and clearer across research offices.

Final Thoughts

Both presenters ended on a note of empowerment:

Use AI to make yourself a better research administrator. If AI is going to lower humanity's average, then we have the chance to raise the standard.


About AI4RA: AI for Research Administration is a cross-institutional NSF project advancing the responsible, measurable use of AI in research management. Learn more or join their community of practice here.

About Atom Grants: Atom helps research offices find, track, and manage funding opportunities using AI-powered discovery and workflow tools. Learn more or book a demo here.