
November 17, 2025 at 12:00 PM ET

Webinar Recap: Research Strategy & Operations in the Era of Open Data and GenAI

Turning open data into actionable research intelligence.


In a recent webinar hosted by Atom Grants, independent consultant and senior research fellow Jorge Gomez Magenti shared groundbreaking insights into how open data and generative AI are transforming research strategy and operations. Drawing from his experience working with research funders and organizations across Europe, Jorge demonstrated practical approaches to addressing long-standing challenges in the research ecosystem.

The Problem: Strategic Decisions Without Evidence

Jorge began by identifying a critical gap in research administration: many strategic funding decisions lack robust external evidence. Organizations face several systemic challenges:

  • Data fragmentation across multiple platforms makes comprehensive analysis difficult and expensive
  • Limited analytical capacity leaves decision-makers unaware of what's possible with modern tools
  • Over-reliance on qualitative narratives that cannot scale to cover vast research portfolios
  • Legacy processes that are collapsing under modern scale, particularly in peer review, annual reporting, and impact assessment

The result? Organizations often make crucial decisions based on incomplete information or gut feeling rather than comprehensive data analysis.

The Solution: Open Data + AI

Jorge's approach centers on two complementary strategies:

Working with Open Data

Rather than relying on expensive proprietary platforms, Jorge demonstrated how to build powerful analytical capabilities using freely available data sources:

  • OpenAlex provides the largest openly available publication database with a free API
  • Custom grant databases can be compiled by harmonizing data from major funders (NIH, NSF, UKRI, European Commission)
  • Unique identifiers like DOIs and ORCIDs enable linking across disparate data sources
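As a concrete illustration of the harmonization step, the sketch below maps records from two hypothetical funder exports onto a common schema and joins them on ORCID. All field names are invented for illustration; real NIH or UKRI exports use their own column names, and the exchange rate is a rough assumption.

```python
# Sketch: harmonize grant records from different funders into one schema,
# then use ORCID as the join key across sources. Field names are hypothetical.

def harmonize_nih(record):
    return {
        "funder": "NIH",
        "title": record["ProjectTitle"],
        "pi_orcid": record["PI_ORCID"],
        "amount_usd": record["TotalCost"],
    }

def harmonize_ukri(record):
    return {
        "funder": "UKRI",
        "title": record["title"],
        "pi_orcid": record["principal_investigator"]["orcid"],
        "amount_usd": record["award_gbp"] * 1.25,  # rough FX assumption
    }

nih_export = [{"ProjectTitle": "Tau imaging", "PI_ORCID": "0000-0002-1825-0097",
               "TotalCost": 500_000}]
ukri_export = [{"title": "Dementia cohorts",
                "principal_investigator": {"orcid": "0000-0002-1825-0097"},
                "award_gbp": 400_000}]

grants = ([harmonize_nih(r) for r in nih_export]
          + [harmonize_ukri(r) for r in ukri_export])

# The shared identifier now links awards from different funders to one PI.
by_pi = {}
for g in grants:
    by_pi.setdefault(g["pi_orcid"], []).append(g)

print(len(by_pi["0000-0002-1825-0097"]))  # → 2: both grants link to the same PI
```

Once records share a schema and a join key, portfolio-wide questions ("how much has this PI received across funders?") become simple aggregations.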

Jorge even created and shared a dataset of nearly two million research projects, which was so well received that it helped validate the Wellcome Trust's decision to invest almost three million pounds in similar work at scale through OpenAlex.

Enhancing with AI Techniques

Large language models transform how we process research information:

  • Binary categorization can reduce false positives by 20% in keyword searches, with AI achieving 86% agreement with human raters in just three hours versus three weeks of manual work
  • Topic modeling automatically clusters documents by semantic meaning, creating visual maps of research landscapes
  • Automated profiling extracts key information from publications, grants, and progress reports at scale
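The binary-categorization step can be sketched as a second pass over keyword hits. In the toy example below, `classify_is_biomedical` is a trivial stand-in for an LLM call (in practice you would send each title and abstract to a model with a yes/no prompt); the papers and the crude rule are invented purely so the sketch runs.

```python
# Sketch: keyword search produces candidate hits; a binary classifier
# (a stand-in here for an LLM yes/no prompt) removes false positives.

papers = [
    {"title": "Early detection of dementia with PET imaging"},
    {"title": "Dementia in popular fiction: a literary survey"},  # false positive
    {"title": "Blood biomarkers for Alzheimer's dementia"},
]

def keyword_hit(paper, keyword="dementia"):
    return keyword in paper["title"].lower()

def classify_is_biomedical(paper):
    # Placeholder for an LLM prompt such as:
    #   "Is this paper biomedical dementia research? Answer yes or no."
    # Faked with a crude rule so the sketch is runnable.
    humanities_terms = ("fiction", "literary", "film")
    return not any(t in paper["title"].lower() for t in humanities_terms)

hits = [p for p in papers if keyword_hit(p)]
kept = [p for p in hits if classify_is_biomedical(p)]
print(len(hits), len(kept))  # → 3 2: one false positive filtered out
```

The value of the real version is that the yes/no question can encode nuance ("is this *biomedical* dementia research?") that no keyword list captures.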

Practical Applications

Jorge showcased several real-world use cases that demonstrate the power of these approaches:

1. Identifying Research Gaps and Opportunities

In dementia research, Jorge's analysis revealed that some subfields had 76% UK leadership while others showed only 17%, indicating infrastructure gaps in specific diagnostic techniques. Industry participation varied from 4% to 21% across different areas, providing clear signals for strategic investment.

2. Finding Collaborators and Experts

Traditional approaches to finding collaborators or interview candidates could take weeks. Jorge's automated pipeline can process 130,000 publications, shortlist 3,000 relevant authors, and produce detailed profiles of 300 highly relevant researchers in less than a day. These profiles include:

  • Automated web searches for background information
  • Scientific profiles based on publication summaries
  • Strategic and translational scores
  • Automatically generated interview questions
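The funnel described above (publications in, shortlisted authors, then detailed profiles out) can be sketched as three stages. Everything below is illustrative: the scores, thresholds, and the profile fields are assumptions, and the profile stage is where web search and LLM summarization would actually run.

```python
# Sketch of the expert-finding funnel:
# many publications -> shortlisted authors -> ranked profiles.

from collections import defaultdict

publications = [
    {"author": "A", "topic_score": 0.9},
    {"author": "A", "topic_score": 0.8},
    {"author": "B", "topic_score": 0.2},
    {"author": "C", "topic_score": 0.7},
]

# Stage 1: keep only publications relevant to the topic (threshold is invented).
relevant = [p for p in publications if p["topic_score"] >= 0.5]

# Stage 2: shortlist authors by their mean relevance across surviving papers.
scores = defaultdict(list)
for p in relevant:
    scores[p["author"]].append(p["topic_score"])
shortlist = {a: sum(s) / len(s) for a, s in scores.items()}

# Stage 3: build a profile stub per shortlisted author -- in practice this is
# where automated web search and LLM-generated summaries/questions plug in.
profiles = [{"author": a, "relevance": round(s, 2),
             "interview_questions": ["<generated by LLM>"]}
            for a, s in sorted(shortlist.items(), key=lambda kv: -kv[1])]
print([p["author"] for p in profiles])  # → ['A', 'C']
```

Each stage shrinks the set by roughly an order of magnitude, which is how 130,000 publications can become 300 usable profiles in under a day.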

3. Tracking Impact and Career Progression

By monitoring researchers through unique identifiers, organizations can automatically track career trajectories including international collaborations, industry partnerships, and subsequent funding success. This enables fellowship programs to demonstrate their long-term impact on researcher careers.
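A minimal sketch of identifier-based tracking, assuming outputs have already been linked to a researcher's ORCID: group the linked records by year and count the signals of interest. The records and their fields are invented for illustration.

```python
# Sketch: build a per-year trajectory for one researcher from ORCID-linked
# records. Record structure is illustrative, not any platform's real schema.

from collections import defaultdict

records = [
    {"orcid": "0000-0002-1825-0097", "year": 2019, "type": "publication",
     "international": False},
    {"orcid": "0000-0002-1825-0097", "year": 2021, "type": "publication",
     "international": True},
    {"orcid": "0000-0002-1825-0097", "year": 2023, "type": "grant",
     "international": False},
]

timeline = defaultdict(lambda: {"outputs": 0, "international": 0})
for r in records:
    y = timeline[r["year"]]
    y["outputs"] += 1
    y["international"] += int(r["international"])

for year in sorted(timeline):
    print(year, timeline[year])
```

Run for every fellow in a cohort, the same aggregation yields the longitudinal evidence fellowship programs need for impact reporting.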

4. Optimizing Peer Review

Vector embeddings enable semantic matching between grant applications and potential reviewers' work, dramatically streamlining the reviewer identification process. The same technology applies to finding collaborators, faculty candidates, or advisory board members.
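The matching itself reduces to nearest-neighbor search in embedding space. The sketch below uses toy 3-dimensional vectors and invented reviewer names; real embeddings from a text-embedding model typically have hundreds of dimensions, but the cosine-similarity ranking works the same way.

```python
# Sketch: rank potential reviewers by cosine similarity between the
# embedding of a grant application and embeddings of each reviewer's work.

import math

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    norm = (math.sqrt(sum(x * x for x in a))
            * math.sqrt(sum(y * y for y in b)))
    return dot / norm

application = [0.9, 0.1, 0.0]          # toy embedding of the application
reviewers = {                           # toy embeddings of reviewers' work
    "Dr. Imaging":   [0.8, 0.2, 0.1],
    "Dr. Genomics":  [0.1, 0.9, 0.2],
    "Dr. Biomarker": [0.7, 0.3, 0.0],
}

ranked = sorted(reviewers, key=lambda r: cosine(application, reviewers[r]),
                reverse=True)
print(ranked[0])  # → Dr. Imaging: the closest match in semantic space
```

Swapping the reviewer vectors for collaborator, faculty-candidate, or advisory-board vectors gives the other matching use cases with no change to the logic.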

5. Building Custom Dashboards

Organizations no longer need expensive subscriptions to platforms like Clarivate or Dimensions. Jorge demonstrated how to build customized bibliometric dashboards using open data, automatically refreshed with the specific metrics each organization cares about.
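Behind such a dashboard sit simple aggregations over harmonized records. The sketch below computes three dashboard-style metrics from toy records; the field names are assumptions (a real OpenAlex "work" object is far richer), and a production version would refresh the data via the API on a schedule.

```python
# Sketch: compute dashboard metrics from harmonized open-data records.
# Record fields are illustrative stand-ins for real bibliographic data.

from collections import Counter

works = [
    {"year": 2023, "cited_by": 12, "open_access": True},
    {"year": 2023, "cited_by": 3,  "open_access": False},
    {"year": 2024, "cited_by": 7,  "open_access": True},
]

pubs_per_year = Counter(w["year"] for w in works)
total_citations = sum(w["cited_by"] for w in works)
oa_share = sum(w["open_access"] for w in works) / len(works)

print(dict(pubs_per_year), total_citations, round(oa_share, 2))
# → {2023: 2, 2024: 1} 22 0.67
```

The point is that the metrics an organization actually cares about are a few lines of aggregation each once the data layer exists.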

The Semantic Space Advantage

One particularly innovative approach Jorge shared involves plotting research in "semantic space." By mapping where different organizations' publications fall based on their meaning rather than just keywords, he can:

  • Identify potential collaboration opportunities between universities and industry
  • Show which funders occupy similar research spaces
  • Visualize portfolio productivity by comparing where grants and resulting publications cluster
  • Create contribution and uniqueness scores for strategic planning
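One way to sketch the semantic-space idea: place each organization at the centroid of its publications' embedding vectors, then compare centroid distances. Nearby centroids suggest overlapping research interests, i.e. candidate collaborations. The organizations and the 2-d vectors below are invented so the positions read like map coordinates; real embeddings are high-dimensional and usually projected down for visualization.

```python
# Sketch: position organizations in "semantic space" via the centroid of
# their publication embeddings, then find the closest pair of organizations.

import math

org_embeddings = {                       # toy 2-d publication embeddings
    "University X": [[0.1, 0.9], [0.2, 0.8]],
    "Pharma Y":     [[0.2, 0.7], [0.3, 0.9]],
    "Funder Z":     [[0.9, 0.1], [0.8, 0.2]],
}

def centroid(vectors):
    n = len(vectors)
    return [sum(v[i] for v in vectors) / n for i in range(len(vectors[0]))]

centroids = {org: centroid(vs) for org, vs in org_embeddings.items()}

# Close centroids = overlapping research space = collaboration candidates.
pairs = sorted(
    (math.dist(centroids[a], centroids[b]), a, b)
    for a in centroids for b in centroids if a < b
)
print(pairs[0][1:])  # → ('Pharma Y', 'University X'): the closest pair
```

The same centroid comparison, applied to grants versus their resulting publications, gives the portfolio-productivity view; distance from everyone else's centroid gives a uniqueness score.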

Looking Ahead: A More Optimistic Future

Despite concerns about AI and automation, Jorge expressed optimism about the future of research administration. The technologies he demonstrated were largely impossible just two years ago, and capabilities continue to improve rapidly. Rather than replacing human judgment, these tools augment decision-making by:

  • Processing information at scales previously unmanageable
  • Surfacing insights that would remain hidden in manual analysis
  • Freeing administrators from tedious tasks to focus on strategic thinking
  • Democratizing sophisticated analysis previously available only to well-resourced institutions

Key Takeaways

  1. Open data provides a cost-effective alternative to expensive proprietary platforms, requiring only time investment in aggregation and harmonization

  2. AI enables analysis at unprecedented scale, processing thousands of documents in hours rather than weeks

  3. Quality and customization matter more than raw capability when choosing analytical approaches or vendors

  4. Bias awareness is critical when using AI, requiring careful instruction design and responsible use of bibliometrics

  5. The field is evolving rapidly, with capabilities improving every six to nine months

Questions to Consider

Jorge encouraged organizations to continuously ask themselves:

  • What are the gaps in capability or infrastructure within your field?
  • What areas of research is industry most interested in?
  • Who are the key researchers across different areas?
  • Should diversity interventions be targeted or broadly applied?
  • How can you demonstrate the path from research outputs to real-world impact?

The webinar demonstrated that sophisticated research intelligence is no longer the exclusive domain of large, well-funded organizations. With open data and AI tools, institutions of all sizes can build powerful analytical capabilities to make more informed strategic decisions.

For organizations looking to explore these approaches, whether through custom solutions or platforms like Atom Grants, the message is clear: the tools exist today to transform research strategy and operations. The question is not whether to adopt them, but how quickly.