Types of Environments: What's The Right Setup for Your Product?

Hovhannes Babayan • Apr 09, 2024

Why do we need different environments? Why do some products have five or more environments while others have just three? What is the right setup for your product?

The purpose of multiple environments: types and management

The most popular setup is the following: development, testing, staging, and production. Some products have only two of them, development and production; others add pre-production, demo, e2e test, and more.


Let's look at the purpose and specifications of each environment.


Production Environment


  1. Purpose: Serving the live production workload.
  2. Data Source: Live, operational data.
  3. User Roles and Permissions: Restricted to essential personnel.
  4. Data Refresh Rate: Real-time, continuous updates.
  5. Security Level: Highest, with strict access controls and encryption.
  6. Monitoring and Alerts: Comprehensive monitoring with immediate alerts for any issues.
  7. Backup and Recovery: Frequent backups, with rapid and reliable recovery processes.
  8. Change Management: Strict change management processes.
  9. Performance Metrics: Critical, includes uptime, response time, transaction volume.
  10. Tools and Services Used: Enterprise-grade solutions for monitoring, security, and management.


Demo Environment


  1. Purpose: A dedicated environment, running the same versions as production, where potential customers can run their tests. (Demos can also be run in production, but some companies prefer to keep a separate environment for this.)
  2. Data Source: Anonymized or synthetic data.
  3. User Roles and Permissions: Access typically granted to sales and marketing teams and potential clients.
  4. Data Refresh Rate: Periodic or on demand.
  5. Security Level: Moderate, with access controls to prevent data breaches.
  6. Monitoring and Alerts: Basic monitoring for system availability and performance.
  7. Backup and Recovery: Less critical, with infrequent backups and standard recovery processes.
  8. Change Management: Changes can be more frequent to update demo features or data.
  9. Performance Metrics: Focus on user experience and demo flow.
  10. Tools and Services Used: Tools that showcase the product's capabilities, often with simplified management.
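Anonymized data (item 2 above) is typically produced by masking personally identifiable fields before records ever reach the demo database. A minimal sketch in Python; the field names `name` and `email` are illustrative assumptions, not a fixed schema:

```python
import hashlib

def anonymize(record: dict) -> dict:
    """Return a copy of the record with PII fields masked.

    The field names ("name", "email") are illustrative; a real
    pipeline would mask whatever PII your schema contains.
    """
    masked = dict(record)
    if "name" in masked:
        masked["name"] = "Demo User"
    if "email" in masked:
        # Stable pseudonym: the same input always maps to the same fake
        # address, so relationships between records are preserved.
        digest = hashlib.sha256(masked["email"].encode()).hexdigest()[:8]
        masked["email"] = f"user-{digest}@example.com"
    return masked
```

Hashing rather than randomizing keeps the masked data referentially consistent, which matters when the demo relies on realistic-looking relationships between records.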


Pre-Production Environment


  1. Purpose: To let customers test upcoming releases.
  2. Data Source: Mirror of production data or anonymized production data.
  3. User Roles and Permissions: Limited to QA and specific development personnel.
  4. Data Refresh Rate: Regular updates, often synchronized with production environments.
  5. Security Level: High, similar to production to ensure a secure testing environment.
  6. Monitoring and Alerts: Extensive monitoring to catch any pre-release issues.
  7. Backup and Recovery: Regular backups to ensure testing continuity.
  8. Change Management: Rigorous, as this is the final step before production.
  9. Performance Metrics: Close monitoring of performance against production standards.
  10. Tools and Services Used: Similar to production but with additional testing and staging tools.


Integration


  1. Purpose: For external parties (clients of clients) to implement integrations on unreleased versions.
  2. Data Source: Synthetic or isolated subsets of production data.
  3. User Roles and Permissions: Primarily developers and integration testers.
  4. Data Refresh Rate: As needed, based on testing requirements.
  5. Security Level: Moderate, with focus on internal access control.
  6. Monitoring and Alerts: Focused on system integration points and data flow.
  7. Backup and Recovery: As necessary for the testing process, not as critical as production.
  8. Change Management: Continuous, as new components are integrated and tested regularly.
  9. Performance Metrics: Emphasis on integration points, data processing, and system interactions.
  10. Tools and Services Used: Integration testing tools, middleware, and APIs.


Sandbox


  1. Purpose: An isolated environment for safe experimentation; its role can overlap with pre-production or staging.
  2. Data Source: Synthetic or anonymized data for safe testing.
  3. User Roles and Permissions: Access mainly for developers and testers.
  4. Data Refresh Rate: Updated as needed for specific tests.
  5. Security Level: Moderate, to protect against unauthorized access.
  6. Monitoring and Alerts: Limited, focused on immediate testing needs.
  7. Backup and Recovery: Less critical, with basic backup for ongoing work.
  8. Change Management: Flexible, allowing quick changes for experimentation.
  9. Performance Metrics: Secondary, unless tied to performance testing.
  10. Tools and Services Used: Varied, based on the experimental or development needs.


Stage


  1. Purpose: Before features reach pre-production or production, they have to be integrated into the production version and tested here. Sometimes used instead of a QA environment. Only tags, not feature branches, should be deployed here.
  2. Data Source: Production-like data, often anonymized.
  3. User Roles and Permissions: Restricted to development and QA teams.
  4. Data Refresh Rate: Regularly updated to reflect the production environment.
  5. Security Level: High, to protect the integrity of the staging data.
  6. Monitoring and Alerts: Similar to production to ensure staging accurately reflects production performance.
  7. Backup and Recovery: Important for maintaining a consistent test environment, though less frequent than production.
  8. Change Management: Structured, as staging is a step before production release.
  9. Performance Metrics: Performance, load, and stress testing metrics are key.
  10. Tools and Services Used: Testing and deployment tools that mimic production.
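The tags-only rule from item 1 can be enforced mechanically in the deployment pipeline by inspecting the Git ref being deployed. A hedged sketch, assuming the pipeline sees refs in Git's standard `refs/tags/...` / `refs/heads/...` form:

```python
def may_deploy_to_stage(git_ref: str) -> bool:
    """Allow staging deploys only from tags, never from branches.

    Git refs look like "refs/tags/v1.4.0" for tags and
    "refs/heads/feature-x" for branches.
    """
    return git_ref.startswith("refs/tags/")
```

For example, `may_deploy_to_stage("refs/tags/v1.4.0")` passes, while `may_deploy_to_stage("refs/heads/feature-login")` is rejected.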



e2e Test


  1. Purpose: Running automated end-to-end tests.
  2. Data Source: Comprehensive test data covering all operational scenarios.
  3. User Roles and Permissions: Primarily testers, with some developer access for debugging.
  4. Data Refresh Rate: As needed for test scenarios.
  5. Security Level: Moderate, focused on test data integrity.
  6. Monitoring and Alerts: Targeted on the testing process and outcome validation.
  7. Backup and Recovery: Not typically a priority, as environments can be reset or recreated.
  8. Change Management: Adaptive to allow for frequent testing of different scenarios.
  9. Performance Metrics: Focused on process flows and user experience.
  10. Tools and Services Used: E2E testing frameworks and automation tools.
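A common pattern is to make the e2e suite environment-agnostic: the tests resolve their target base URL from an environment variable, so the same suite can also be pointed at staging or pre-production when needed. A minimal sketch; the variable name `APP_ENV` and the URLs are assumptions for illustration:

```python
import os

# Hypothetical per-environment base URLs; real values would come from
# your own infrastructure or test configuration.
BASE_URLS = {
    "e2e": "https://e2e.example.com",
    "staging": "https://staging.example.com",
    "production": "https://app.example.com",
}

def base_url() -> str:
    """Resolve the target URL from APP_ENV (defaulting to the e2e
    environment), so one test suite can run against any environment."""
    env = os.environ.get("APP_ENV", "e2e")
    return BASE_URLS[env]
```

The test framework then builds every request from `base_url()` instead of hard-coding a host.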



Test/QA


  1. Purpose: Enables QA testing of nearly final results on feature branches.
  2. Data Source: Mix of synthetic and production-like data.
  3. User Roles and Permissions: Access mainly for QA engineers and testers.
  4. Data Refresh Rate: Regularly updated for test accuracy.
  5. Security Level: Moderate to high, protecting sensitive data.
  6. Monitoring and Alerts: Focus on application behavior and errors.
  7. Backup and Recovery: Regular backups for data integrity.
  8. Change Management: Strict testing before production rollout.
  9. Performance Metrics: Emphasizes load and stress testing.
  10. Tools and Services Used: Test automation and monitoring tools.


Development


  1. Purpose: Allows developers to test almost final outcomes on specific branches, akin to QA testing.
  2. Data Source: Synthetic or sample data for feature testing.
  3. User Roles and Permissions: Access restricted to developers and technical leads.
  4. Data Refresh Rate: Updated as needed for development cycles.
  5. Security Level: Lower, prioritizing functionality over data security.
  6. Monitoring and Alerts: Targets development metrics and error logs.
  7. Backup and Recovery: Code is version-controlled; environment backup less critical.
  8. Change Management: Flexible for iterative code changes.
  9. Performance Metrics: Monitored for potential performance impacts.
  10. Tools and Services Used: IDEs, local test servers, and CI tools.


What is the right setup for your product?

The "right" number of environments in software development depends on the complexity of the project, the size of the team, the nature of the application, regulatory requirements, and the risk tolerance of the organization.


For smaller projects or teams with limited resources


Teams with limited resources, whose product is not complex and does not require high reliability, can use the basic setup of environments:

  • Development Environment
  • Testing/QA Environment
  • Production Environment
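Even this basic setup usually needs environment-specific configuration (debug flags, log levels, data sources). A minimal sketch of selecting settings by environment, assuming a hypothetical `APP_ENV` variable carries the environment name:

```python
import os
from dataclasses import dataclass

@dataclass(frozen=True)
class Config:
    debug: bool
    log_level: str

# Illustrative values only; real settings would also cover databases,
# credentials, feature flags, and so on.
CONFIGS = {
    "development": Config(debug=True, log_level="DEBUG"),
    "qa": Config(debug=True, log_level="INFO"),
    "production": Config(debug=False, log_level="WARNING"),
}

def load_config() -> Config:
    """Pick the configuration for the current environment, failing
    loudly on an unknown name rather than guessing."""
    env = os.environ.get("APP_ENV", "development")
    try:
        return CONFIGS[env]
    except KeyError:
        raise RuntimeError(f"Unknown environment: {env!r}")
```

Failing on an unknown environment name is deliberate: silently falling back to development settings in production is a classic source of incidents.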
     

Projects that require high reliability and involve complex deployments


For projects that require a higher level of reliability, and where it's crucial to minimize the risk of errors in the production environment, I suggest adding a staging environment, so the setup looks like this:


  • Development Environment
  • Testing/QA Environment
  • Staging
  • Production Environment



For very large, complex, or highly regulated projects


The last setup is suited for large-scale, complex, or highly regulated projects where multiple types of testing are required to ensure the software's reliability, performance, and security. It may look like this:


  • Development Environment
  • Testing/QA Environment
  • Demo
  • Staging
  • Pre-production
  • Production Environment


What does your product's setup look like?
