Every great innovation brings on new challenges, and AI is no exception. As technology leaders instruct their workforces to default to using AI in their day-to-day tasks and as AI workflows become increasingly critical to software engineering practices, a familiar challenge persists: reusability.
There are libraries, packages, and shared components available for reuse with traditional software, but developers are currently being tasked with building AI workflows from scratch. When multiple developers within one organization write the same AI prompts and workflows, the duplication is inefficient, wastes resources, can produce inconsistent outcomes, and can hinder wider adoption of AI.
The solution is applying standard software engineering principles to AI: Build once, reuse everywhere. In this post, we will cover some of the ways we have been thinking about the challenge of AI reusability at GitLab.
The Problem With Current AI Development Practices
Whether it’s a chat experience, a remotely executed AI workflow, or local code suggestions, developers have many opportunities to bring AI into their work.
As organizations encourage the increased use of AI in daily workflows, developers have largely been left to figure out how to use AI on their own. Broadly, there is very little guidance available on what works well in the context of an organization's development practices.
As detailed in the soon-to-be-published 2025 GitLab DevSecOps Report, 85% of DevSecOps professionals agree that “Agentic AI will be most successful when implemented in a platform engineering approach.” When AI can be packaged and easily consumed by developers in a self-service way, it will help developers quickly onboard to AI experiences, share best practices, and maintain AI workflows via strong developer communities.
Organizations will be rewarded for their investment in this area with saved time for their development teams, cost savings from optimizing AI workflows, and the ability to deliver better results faster for customers. But what does it actually mean to make AI reusable?
Key Considerations for Making AI Reusable
There are several key considerations to keep in mind when creating reusable AI workflows. Let's walk through the process of creating one using a common developer productivity issue as an example: helping developers fix failing pipelines more efficiently.
Step 1: Identify time-consuming developer workflows
One of the most common challenges developers face when using any CI/CD tool is the need to fix failing pipelines. This is a time-consuming task that requires looking through CI/CD logs, understanding the pipeline configuration, and iterating on potential solutions.
Step 2: Provide the appropriate context and permissions that will help focus AI models
Grant AI the minimum access it needs (least privilege) to code repositories and CI/CD logs, pair that access with a detailed system prompt, and then test how well the AI handles different kinds of pipeline failures.
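As a minimal sketch of what "appropriate context" might look like in practice, the function below bundles only the artifacts the model needs: the tail of the failing job's log and the pipeline configuration. The function name, prompt wording, and size limit are illustrative assumptions, not a GitLab API.

```python
# Sketch of least-privilege context assembly for a pipeline-fix prompt.
# All names and the prompt text are assumptions for illustration.

MAX_LOG_CHARS = 4000  # keep the context window small and predictable

SYSTEM_PROMPT = (
    "You are a CI/CD assistant. Given a failing job log and the pipeline "
    "configuration, explain the most likely cause and propose a minimal fix."
)

def build_context(job_log: str, pipeline_config: str) -> dict:
    """Bundle only what the model needs: the end of the failing job's log
    (errors usually appear last) plus the pipeline configuration."""
    return {
        "system": SYSTEM_PROMPT,
        "log_excerpt": job_log[-MAX_LOG_CHARS:],
        "pipeline_config": pipeline_config,
    }
```

Keeping the context assembly in one small, deterministic function makes it easy to version and to tweak during testing without touching the rest of the workflow.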
Step 3: Verify how well AI handles the task
As initial testing is conducted, access, context, tools, and prompts can be tweaked to produce better fixes. After testing AI’s ability to address pipeline failures, the next consideration is ensuring anyone can use this approach consistently.
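Testing is easier to repeat across prompt and context tweaks if it is scripted. The harness below is a minimal sketch: `suggest_fix` stands in for whatever generates a fix suggestion, and the keyword-matching score is an assumption used only to illustrate tracking accuracy over time.

```python
# Sketch of a tiny evaluation harness for the pipeline-fix workflow.
# The scoring method (keyword containment) is an illustrative assumption.

def evaluate(suggest_fix, cases):
    """Score a fix-suggestion function against (failure log, expected keyword)
    pairs; returns the fraction of cases where the suggestion mentions the
    expected keyword."""
    hits = sum(1 for log, expected in cases if expected in suggest_fix(log))
    return hits / len(cases)
```

Running the same cases after each change to access, context, tools, or prompts gives a simple before/after signal on whether the change actually improved the fixes.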
Step 4: Define what hardware and tools can execute the AI workflow
What hardware is available to run this AI workflow reliably and with easy access? A common pattern is to use CI/CD jobs to remotely execute AI workflows, providing consistent tooling, prompts, and context.
For this particular workflow, CI/CD jobs make it possible to clone code repositories, install tools on the underlying hardware, issue API calls to AI models, and provide access tokens that grant the necessary permissions to suggest pipeline fixes back to a code repository. While CI/CD is one mechanism for executing AI workflows, organizations have many hardware options to consider.
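The core of such a CI-executed script might look like the sketch below. The actual model invocation is stubbed out; real code would call your model provider's API with a token injected as a CI/CD variable. The variable name `AI_API_TOKEN` and the helper names are assumptions, not GitLab conventions.

```python
# Sketch of the core of a pipeline-fix script a CI/CD job could run.
# The token variable name and function names are illustrative assumptions.
import os
import re

def extract_failure(job_log: str) -> str:
    """Pull the last error-looking line out of a job log to seed the prompt."""
    lines = job_log.splitlines()
    errors = [ln for ln in lines if re.search(r"\b(error|failed)\b", ln, re.I)]
    return errors[-1] if errors else lines[-1]

def draft_fix_request(job_log: str) -> dict:
    """Build the request a CI job would send to an AI model endpoint."""
    token = os.environ.get("AI_API_TOKEN", "")  # hypothetical CI/CD variable
    return {
        "authorized": bool(token),
        "prompt": (
            f"The CI job failed with: {extract_failure(job_log)}. "
            "Suggest a minimal fix."
        ),
    }
```

Because the job runs on infrastructure the organization controls, the same tooling, prompts, and token scopes are applied every time, which is exactly the consistency a reusable workflow needs.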
Step 5: Version the workflow
Versioning becomes critical once the workflow is operational. For the pipeline fix workflow, consider what needs to be tracked: Which version of the AI model produces the best results for different failure types? What specific system prompt led to improvements in fix accuracy? Which tool versions are installed in the CI/CD environment?
Each tool used, the AI model version, hardware specification, system prompt, and even sample context data should be versioned together as a cohesive unit. This ensures that developers get a tested and reliable combination of all these elements.
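One way to pin all of these elements together is a small manifest that is released as a single unit. The sketch below is an assumption about how such a manifest could be modeled; the field names are illustrative, and the digest simply gives each tested combination a stable fingerprint.

```python
# Sketch of pinning a workflow's components as one versioned unit.
# Field names are illustrative assumptions; adapt to your own conventions.
from dataclasses import dataclass, asdict
import hashlib
import json

@dataclass(frozen=True)
class WorkflowManifest:
    workflow: str          # name of the AI workflow, e.g. "pipeline-fix"
    model: str             # pinned model identifier
    system_prompt_ref: str # version or hash of the system prompt file
    tool_versions: tuple   # pinned tool versions the CI environment installs
    runner_spec: str       # hardware/image the combination was tested on

    def digest(self) -> str:
        """Stable fingerprint so a released workflow version is reproducible."""
        payload = json.dumps(asdict(self), sort_keys=True)
        return hashlib.sha256(payload.encode()).hexdigest()[:12]
```

Releasing the manifest, rather than its parts individually, is what guarantees developers consume a combination that was actually tested together.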
Discoverability and Community
Once you have a strong process in place that others can benefit from, the next step is to make it findable. One suggestion is to apply an old idea to this new AI context: Software catalogs.
The idea behind a catalog is to make it easy for developers to search for what already exists. This should be the first step any development team takes before creating something new. As the old saying goes, let’s not reinvent the wheel.
Software catalogs have been a popular tool for promoting reusability across various areas of software development. For example, organizations have leveraged service catalogs to promote the reuse of APIs, CI/CD catalogs to share CI/CD best practices across the organization, and configuration catalogs to share infrastructure setups.
AI workflows need this same experience. A central location for an organization's existing AI workflows is crucial for facilitating AI adoption, and it helps promote community-driven development. Open source and inner source practices are not going to suddenly change as a result of AI: When AI workflows are available to view and use, developers will contribute to improving the experience for themselves and others.
Whether you use a catalog experience or another way of centralizing AI workflows, it is important to think about how developers will find and contribute to already existing ideas for AI.
Treating AI workflows as reusable engineering assets rather than one-off implementations will accelerate AI adoption and improve consistency across development teams. Standard software engineering practices (modularization, packaging, versioning, and community contribution) apply directly to AI workflow development. Just as it is important to encourage AI usage throughout an organization, organizations must invest in frameworks that make it easy to discover and contribute to existing AI innovations.