1. Client Overview
Checkfirst is a SaaS company operating in the audit technology sector, providing digital platforms that
help organizations streamline auditing workflows, reporting, and regulatory compliance.
The company’s platform consists of multiple microservices and frontend applications maintained
across several Git repositories. Their infrastructure is hosted on AWS, with deployments managed
through Infrastructure as Code and modern CI/CD workflows.
As the platform evolved and new features were being developed across multiple services
simultaneously, the engineering team required a more efficient way to test cross-repository changes
without disrupting existing environments.
2. Challenges / Business Pain Points
Prior to implementing the solution, Checkfirst’s engineering team encountered several challenges in
their development workflow.
Cross-Repository Feature Testing
New features often required coordinated changes across multiple repositories, including backend
services and frontend applications. Testing these features together required deploying changes to
shared environments, which could interfere with ongoing development or testing activities.
Environment Conflicts
Developers frequently relied on shared development or staging environments. When multiple feature
branches were deployed simultaneously, these environments could become unstable or produce
inconsistent test results.
Slow Testing Cycles
Setting up dedicated infrastructure environments for feature testing required manual provisioning and
configuration, leading to slower development cycles and increased operational overhead.
Risk to Stable Environments
Deploying experimental features to existing environments increased the risk of introducing instability
into staging or development systems, potentially affecting other teams.
3. Our Managed Services Engagement
Our team was engaged to design and implement an automated infrastructure provisioning system
that allows developers to create temporary, isolated environments on demand for testing features
across multiple repositories.
The solution aimed to enable rapid environment creation while maintaining full infrastructure
consistency using Infrastructure as Code.
Scope of Work
The project focused on:
– Designing a system to automatically provision temporary AWS environments
– Enabling cross-repository deployments for frontend and backend services
– Ensuring complete isolation between environments
– Automating environment creation and teardown through CI/CD workflows
– Maintaining infrastructure consistency using Infrastructure as Code
Technologies, Tools, and Frameworks Used
The solution was built using the following technologies:
– AWS Cloud Services (ECS, ECR, S3, RDS, ALB, Security Groups)
– Terraform (Infrastructure as Code)
– GitHub Actions (CI/CD Automation)
– New Relic (Monitoring and Observability)
Engagement Duration
The project was completed over a 2–3 week implementation period, including architecture design,
automation development, and deployment workflow integration.
4. Actions Taken / Solution Implemented
Our engineering team implemented the solution in several key phases.
Infrastructure Architecture Design
We first designed a scalable architecture that allowed the creation of fully isolated temporary
environments for each feature branch or pull request.
Each environment included:
– Dedicated compute resources
– Independent networking configuration
– Separate application deployments
– Environment-specific configuration variables
This ensured that feature testing could occur without impacting other environments.
Infrastructure as Code Implementation
All infrastructure components were defined using Terraform, ensuring that environments could be
created quickly and consistently.
Terraform modules were designed to dynamically generate infrastructure resources based on
environment identifiers, allowing the system to support multiple concurrent temporary environments.
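As a minimal sketch of this pattern (the module layout, variable names, and resource settings below are illustrative assumptions, not the client's actual configuration), a Terraform module can be parameterized by an environment identifier so that every resource name is unique per temporary environment:

```hcl
# Hypothetical module: provisions per-environment compute and networking,
# keyed by a unique environment identifier (e.g. a PR number).
variable "env_id" {
  description = "Unique identifier for the temporary environment"
  type        = string
}

variable "vpc_id" {
  type = string
}

variable "subnet_ids" {
  type = list(string)
}

# Dedicated ECS cluster for this environment only.
resource "aws_ecs_cluster" "preview" {
  name = "preview-${var.env_id}"
}

# Environment-specific security group, so traffic stays isolated.
resource "aws_security_group" "preview" {
  name   = "preview-${var.env_id}-sg"
  vpc_id = var.vpc_id
}

# Separate load balancer per environment, giving each its own endpoint.
resource "aws_lb" "preview" {
  name               = "preview-${var.env_id}-alb"
  load_balancer_type = "application"
  security_groups    = [aws_security_group.preview.id]
  subnets            = var.subnet_ids
}
```

Because every resource name interpolates `env_id`, instantiating the module once per feature branch yields fully independent resource sets that can coexist in the same AWS account.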
Cross-Repository CI/CD Automation
To support coordinated testing across repositories, we implemented GitHub Actions workflows that
automatically triggered infrastructure provisioning.
The automation pipeline enabled:
– Creation of a temporary environment when a pull request is opened for a feature branch
– Deployment of backend and frontend services into the same isolated environment
– Automated configuration of service endpoints and environment variables
This allowed developers to test integrated features across multiple services seamlessly.
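The trigger-and-provision flow can be sketched as a GitHub Actions workflow along these lines (the workflow name, Terraform variable, and workspace naming are hypothetical, though the actions used are the standard checkout and setup-terraform actions):

```yaml
# Hypothetical workflow: provisions an isolated preview environment
# whenever a pull request is opened or updated.
name: preview-environment
on:
  pull_request:
    types: [opened, synchronize]

jobs:
  provision:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: hashicorp/setup-terraform@v3
      - name: Provision isolated environment
        run: |
          terraform init
          # One Terraform workspace per PR keeps state files isolated.
          terraform workspace select -or-create "pr-${{ github.event.pull_request.number }}"
          terraform apply -auto-approve \
            -var="env_id=pr-${{ github.event.pull_request.number }}"
```

A matching workflow in each repository can deploy its service into the same environment by passing the shared `env_id`, which is how backend and frontend changes end up testable together.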
Automated Environment Lifecycle Management
To prevent unnecessary resource consumption, automated processes were implemented to manage the
lifecycle of temporary environments.
These workflows enabled:
– Automatic environment creation during development workflows
– Automated environment teardown once testing was completed
This ensured efficient use of cloud resources while maintaining rapid testing capabilities.
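The teardown half of the lifecycle can be sketched similarly, assuming the same per-PR workspace convention as above (names here are illustrative, not the client's actual configuration):

```yaml
# Hypothetical workflow: destroys the preview environment when its PR closes,
# reclaiming all associated cloud resources.
name: teardown-preview
on:
  pull_request:
    types: [closed]

jobs:
  teardown:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: hashicorp/setup-terraform@v3
      - name: Destroy environment
        run: |
          terraform init
          terraform workspace select "pr-${{ github.event.pull_request.number }}"
          terraform destroy -auto-approve \
            -var="env_id=pr-${{ github.event.pull_request.number }}"
          # Remove the now-empty workspace to keep state storage tidy.
          terraform workspace select default
          terraform workspace delete "pr-${{ github.event.pull_request.number }}"
```

Triggering on the `closed` pull request event covers both merged and abandoned branches, so no environment outlives the change it was created to test.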
5. Outcomes / Results
The implementation significantly improved the client’s development and testing workflows.
Faster Feature Testing
Developers can now spin up fully isolated test environments within minutes, enabling faster
iteration and validation of cross-repository features.
Improved Development Stability
Shared development environments are no longer impacted by experimental feature deployments,
improving stability for other teams.
Reduced Manual Infrastructure Work
Infrastructure provisioning and configuration are now fully automated, reducing operational overhead
and eliminating manual setup processes.
Parallel Development Enablement
Multiple teams can now test different features simultaneously using independent temporary
environments, accelerating development velocity.
6. Visuals / Supporting Material
To support the implementation and provide deeper insight into the system architecture and workflow
automation, the following material was created:
– Temporary Environment Architecture Flow Diagram: illustrates the AWS infrastructure used to
provision isolated preview environments for frontend and backend services.
– Environment Lifecycle Flow: a representation of the preview environment lifecycle, including
creation, updates, and automated teardown based on pull request events.

