Gradient Flow

Copilots and Workflow Agents: How Generative AI is Transforming Scientific Workflows

As AI teams work to develop effective enterprise solutions, understanding real-world deployment experiences can be invaluable. This Q&A draws key insights from a recent research paper, “Generative AI Uses and Risks for Knowledge Workers in a Science Organization,” which studied the adoption of generative AI at Argonne National Laboratory. The study combined quantitative and qualitative methods, including analyzing usage statistics of Argonne’s internal AI assistant (Argo), conducting interviews with 22 employees, and surveying 66 staff members across both scientific and operational roles. Researchers tracked adoption patterns over eight months, capturing the perspectives of early adopters from various departments. The findings offer practical guidance for teams building AI applications in professional contexts, highlighting how different user groups approach AI tools, what barriers they face, and what organizational concerns must be addressed for successful implementation. Whether you’re developing an AI copilot for enterprise use or building workflow automation systems, these insights can help you design solutions that address the real needs of knowledge workers.

Generative AI Adoption: A Snapshot of Science Organizations

Based on research at Argonne National Laboratory, generative AI adoption is in its early stages but shows a steady upward trend. While fewer than 10% of employees were using the organization’s internal AI assistant (Argo, a private instance of GPT-3.5 Turbo) during the study period, usage grew approximately 19.2% month over month. Most employees are still experimenting: they are familiar with generative AI, but fewer than 30% consider it essential to their workflows. Many are testing both internal tools like Argo and commercial options like ChatGPT.
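To put the growth figure in perspective, here is a minimal sketch, assuming the reported 19.2% is a month-over-month compound rate and using a hypothetical 8% starting share (the article states only "less than 10%"):

```python
def project_adoption(initial_share: float, monthly_growth: float, months: int) -> float:
    """Compound a monthly growth rate over a number of months."""
    return initial_share * (1 + monthly_growth) ** months

# Hypothetical: starting from 8% of staff and compounding 19.2% per month.
# After a year this reaches roughly two-thirds of staff.
share = project_adoption(0.08, 0.192, 12)
```

Naive compounding like this would eventually exceed 100%; real adoption curves saturate, so this only illustrates how fast early-stage growth at that rate accumulates.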


Two Key Modes: Copilots and Workflow Agents in Scientific AI

The research distinguishes two main modalities: copilots, where a person interacts with the model directly and reviews each output, and workflow agents, where the model is embedded into multi-step processes that run with less direct oversight.

Both Science and Operations teams use both modalities, though current applications primarily focus on copilot-style interactions.
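The distinction can be sketched in code. This is a minimal illustration, not the paper's implementation; the function names and the stubbed `call_model` are hypothetical. A copilot handles one interactive request at a time with a human reviewing each response, while a workflow agent chains model calls through a multi-step process:

```python
def call_model(prompt: str) -> str:
    """Placeholder for an LLM call (e.g., to an internal assistant like Argo)."""
    return f"<model response to: {prompt}>"

def copilot_turn(user_request: str) -> str:
    """Copilot mode: a single request/response pair; the human reads and
    vets each output before acting on it."""
    return call_model(user_request)

def workflow_agent(document: str) -> str:
    """Workflow-agent mode: the model is invoked at several steps of a
    pipeline, with no human review between intermediate steps."""
    summary = call_model(f"Summarize: {document}")
    draft = call_model(f"Draft a report from this summary: {summary}")
    return call_model(f"Proofread: {draft}")
```

The difference matters for risk: verification concerns (discussed below) compound in agent-style pipelines, because an unchecked error in one step becomes the input to the next.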


Streamlining Science: How Generative AI is Used Today

Current applications primarily focus on generating or refining structured text and code whose outputs can be easily verified.

These applications are valued for reducing emotional labor and time spent on routine writing tasks.
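Part of what makes these uses attractive is that structured outputs can be checked mechanically before anyone relies on them. Here is a minimal sketch of that verification habit, with a hypothetical AI-suggested helper (not from the paper) and a simple accept-only-if-tests-pass gate:

```python
# Suppose an assistant suggested this helper for a data-cleaning script:
generated_code = """
def normalize_whitespace(text):
    return " ".join(text.split())
"""

def accept_if_verified(code: str, cases: list[tuple[str, str]]) -> bool:
    """Execute suggested code in an isolated namespace and accept it only
    if it passes every known input/output case."""
    namespace: dict = {}
    exec(code, namespace)  # caution: only execute code you have inspected
    fn = namespace["normalize_whitespace"]
    return all(fn(inp) == expected for inp, expected in cases)

ok = accept_if_verified(generated_code, [
    ("a  b\tc", "a b c"),
    ("  hello  ", "hello"),
])
```

A gate like this is cheap for code and structured text, which is exactly why such outputs dominate current usage.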


The Future of AI in Science: Envisioning Advanced Applications

Participants also envisioned more advanced future applications, beyond the routine text and code tasks that dominate today.


Divergent Applications: Comparing AI Use in Science and Operations

Both groups show similar usage patterns, but with domain-specific applications. Science teams use generative AI for academic writing, code development, and scientific data analysis, such as writing code for experiments and summarizing complex research. Operations teams apply it to communication tasks, project management, and administrative work such as creating safety reports and organizing project plans. Both groups see potential in offloading time-consuming or repetitive tasks.


Key Concerns: Addressing Risks of Generative AI in Science

Five primary concerns emerged:

  1. Reliability/Hallucinations: The tendency of generative AI to produce incorrect information with high confidence – particularly problematic where accuracy is crucial.
  2. Overreliance: Concerns that users might trust AI outputs without sufficient verification.
  3. Privacy and Security: Risks of sharing sensitive, classified, or unpublished data with commercial AI models.
  4. Academic Integrity: Uncertainty about appropriate use of AI in scientific publications, citation practices, and potential for AI-generated research fraud.
  5. Job Impacts: Questions about how generative AI might affect hiring, required skill sets, and certain roles.

Reliability concerns were mentioned most frequently.


Data Privacy and Security

Many organizations are deploying private instances of LLMs (like Argonne’s Argo) so that sensitive, classified, or unpublished data stays within the organization’s own infrastructure rather than flowing to commercial AI providers.

Organizations also need clear guidelines about data sharing, and must ensure internal AI tools remain competitive with commercial alternatives.
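Private deployments are commonly exposed through an OpenAI-compatible HTTP interface. Here is a minimal sketch of how a client might target such an instance; the URL, environment variable, and model name are hypothetical, not Argonne's actual API:

```python
import json
import os

# Hypothetical internal endpoint; a real deployment would publish its own URL.
INTERNAL_API_URL = os.environ.get(
    "INTERNAL_LLM_URL",
    "https://llm.example.internal/v1/chat/completions",
)

def build_chat_request(prompt: str, model: str = "internal-gpt") -> str:
    """Build an OpenAI-style chat-completions payload for a private instance.
    Pointing clients at an internal endpoint, rather than a commercial API,
    keeps sensitive or unpublished data inside the organization's boundary."""
    payload = {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    }
    return json.dumps(payload)
```

Keeping the request format API-compatible is a common design choice: staff can switch between internal and commercial tools without rewriting their integrations, which helps the internal tool stay competitive.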


Job Impacts: How Generative AI is Reshaping the Scientific Workforce

The study found mixed opinions. Participants in specialized scientific roles generally viewed AI as an enhancing tool, not a replacement. However, there was more concern about roles involving routine information processing or communication. Managers indicated that generative AI would likely change required skills rather than reduce overall headcount, with greater emphasis on AI literacy, critical evaluation of AI outputs, and human skills.



Recommendations for Implementing Generative AI

The researchers close with concrete recommendations for organizations rolling out generative AI.


Strategies for Safe and Effective Implementation

The paper also outlines strategies teams can follow to deploy generative AI safely and effectively.


Building Better AI: Lessons from Real-World Deployments

Finally, the deployment experience at Argonne holds lessons for technical teams building AI applications of their own.




