Spacelift today unfurled an open source agentic artificial intelligence (AI) framework for managing infrastructure that eliminates the need for a developer to write any Terraform or OpenTofu code.
Marcin Wyszynski, chief research and development officer for Spacelift, said that instead of generating code, Spacelift Intent translates requests directly into application programming interface (API) calls, interacting with a large language model (LLM) or AI agent via the Model Context Protocol (MCP) developed by Anthropic.
That approach provides IT teams with a vibe coding tool they can use to describe, in natural language, the task they want an AI agent to execute on their behalf, without having to know Terraform or the HashiCorp Configuration Language (HCL), he added.
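To illustrate the general pattern, MCP clients such as LLM-driven agents invoke server-side tools through JSON-RPC "tools/call" messages rather than emitting HCL. The sketch below shows what such a request could look like; the tool name and arguments are hypothetical and do not reflect Spacelift Intent's actual interface.

```python
import json

# Hypothetical example of the JSON-RPC "tools/call" message an MCP client
# (an LLM or AI agent) could send to an MCP server. The tool name and the
# arguments are illustrative only, not Spacelift Intent's published API.
request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {
        "name": "provision_database",  # hypothetical tool name
        "arguments": {
            "engine": "postgres",
            "size": "small",
            "environment": "staging",
        },
    },
}

print(json.dumps(request, indent=2))
```

In this model, the agent never writes or reviews Terraform; it simply selects a tool exposed over MCP and supplies the parameters it inferred from the user's natural-language request.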

While generative AI technologies are probabilistic, they can still be used reliably for use cases that are not mission-critical, such as provisioning a database, said Wyszynski. The overall goal is to use AI to reduce toil where the level of potential risk is relatively minimal, he noted.
In contrast, professional developers who are provisioning infrastructure for mission-critical workloads should still write and test that code themselves, said Wyszynski. However, there are plenty of instances where the level of rigor applied to how infrastructure is provisioned doesn't need to be as high, he added.
Terraform and OpenTofu are for production pipelines where precision and review matter most, said Wyszynski. Spacelift Intent is for everything else, where speed and simplicity matter more, he added.
Spacelift is also providing early access to Spacelift Intent within the Spacelift Infrastructure Orchestration Platform, a commercial instance of the framework for managing infrastructure as code (IaC) that uses the Spacelift policy engine to manage state and provide audit trails.
It’s not clear to what degree DevOps teams are adopting AI, but usage of these tools is already pervasive. The issue now is determining how much to rely on AI tools that have a propensity to generate flawed code, which may contain hidden vulnerabilities or simply be too verbose to deploy cost-effectively. There are, of course, hundreds of use cases where those issues are not as much of a concern as they are when developing business logic, so each organization will need to determine the extent to which it will allow developers to use AI.
The one thing that is certain is that the economics of building and deploying software are changing in a way that should enable organizations to build and deploy more software than ever, assuming they have the processes and platforms needed to manage building and deploying code at unprecedented levels of scale and complexity.
In the meantime, however, there is little doubt that AI will automate tasks in ways that should eliminate bottlenecks and make DevOps workflows much less tedious to manage. Hopefully, that will help reduce the level of burnout that many software engineers have historically experienced, without introducing too much instability into software development environments that are already notoriously fragile.