Terraform Questions: Core Concepts
1. Explain the differences between Terraform Open Source, Terraform Cloud, and Terraform Enterprise.
Open Source: Local execution, manual state management (e.g., S3, Consul), basic CLI workflows, no governance features.
Terraform Cloud: SaaS offering by HashiCorp, includes remote state storage, remote execution, VCS integration, workspace management, basic RBAC, and Sentinel policy enforcement.
Terraform Enterprise: Self-hosted/private instance version of Terraform Cloud with advanced governance features, enterprise-grade integrations, full RBAC, policy enforcement, private networking, and auditing suited for regulated environments.
2. What is a “Workspace” in Terraform Cloud/TFE and how does it differ from a local Terraform workspace?
Local Workspace: Isolated directory with separate state files on your machine.
Terraform Cloud/TFE Workspace: Logical construct in the platform that stores state remotely, links to VCS, has its own variables, runs, permissions, and policies. Essentially, a workspace = environment-specific pipeline for infra deployments.
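For the local case, the standard CLI flow looks like this (workspace names here are just examples):

```shell
# Each local workspace keeps its own state file under terraform.tfstate.d/
terraform workspace new dev
terraform workspace new prod
terraform workspace select dev
terraform workspace list   # current workspace is marked with *
```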
3. Purpose of the “Sentinel” policy framework.
Sentinel allows you to define fine-grained, logic-based governance policies (written in Sentinel’s language) that run on every Terraform plan before apply.
Examples: Enforce tagging, restrict instance sizes, prevent certain regions.
4. Remote Execution benefits.
Terraform runs happen in HashiCorp-managed (Cloud) or self-hosted (Enterprise) infrastructure. Benefits:
No dependency on locally installed Terraform versions.
Secure state storage and lock management.
Isolated, consistent execution environment across teams.
5. State management in TFE vs Terraform OSS.
OSS requires you to configure a remote backend like S3 + DynamoDB or Consul for locking.
Cloud/Enterprise automatically manages state in a secure, encrypted backend with UI access, version history, and locking built-in.
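For the OSS case, a typical S3 backend with DynamoDB locking looks like the sketch below (bucket, key, and table names are placeholders):

```hcl
terraform {
  backend "s3" {
    bucket         = "my-tfstate-bucket"     # placeholder bucket name
    key            = "prod/terraform.tfstate"
    region         = "us-east-1"
    dynamodb_table = "terraform-locks"       # table must have a LockID partition key
    encrypt        = true
  }
}
```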
6. VCS-driven vs API-driven workflows.
VCS-driven: Changes are pushed to a Git repository, Terraform Cloud detects commits and triggers runs.
CONFIDENTIAL AND PROPRIETARY
API-driven: Runs initiated via Terraform Cloud/TFE API, useful for pipeline integrations (Jenkins, Azure DevOps, etc.).
23. Debug failed run.
View run logs in TFE/Cloud UI.
Check Sentinel policy violations, backend errors, credentials.
24. Logs in TFE.
Docker container logs, Replicated admin dashboard, application logs for API/UI.
25. State lock timeout.
Cloud/TFE releases the lock when the run finishes; a stuck lock can be cleared manually in the UI (“Force unlock”) or with the terraform force-unlock command.
26. Drift detection.
Compare real infrastructure state vs Terraform state (via plan).
Terraform Cloud can auto-detect drift via scheduled runs.
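Outside of scheduled runs, drift can also be detected from the CLI; the -detailed-exitcode flag makes plan signal drift via its exit code:

```shell
terraform plan -detailed-exitcode
# exit code 0: no changes (no drift)
# exit code 1: error
# exit code 2: plan contains changes (drift or pending config changes)
```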
27. State migration to TFE.
Run terraform state pull locally to capture the current state → configure the remote/cloud backend pointing at TFE → run terraform init (which offers to migrate the state) or terraform state push to upload it.
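A sketch of that migration, assuming an existing local state and an already-created TFE workspace (backend details are placeholders):

```shell
# 1. Capture the current state locally as a backup
terraform state pull > backup.tfstate

# 2. Add a cloud/remote backend block pointing at TFE, then re-initialize;
#    -migrate-state offers to copy the local state into the new backend
terraform init -migrate-state

# 3. Alternatively, push the saved state explicitly
terraform state push backup.tfstate
```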
28. Multiple vs single workspace.
Multiple workspaces per environment = clean separation of variables, state, history.
Single workspace with variables → risk of accidental overlap.
29. Remote backend advantages.
Consistent team collaboration, secure state management, history tracking, built-in locking.
30. Tagging enforcement using Sentinel.
Example policy: Require Environment and Owner tags for all AWS resources.
Runs automatically on every plan to enforce compliance.
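A simplified Sentinel sketch of such a policy (real policies use the tfplan/v2 import and need tuning per provider; the resource filter here is illustrative):

```sentinel
import "tfplan/v2" as tfplan

mandatory_tags = ["Environment", "Owner"]

# AWS resources being created in this plan (filter simplified)
aws_creates = filter tfplan.resource_changes as _, rc {
    rc.provider_name matches "(.*)aws$" and
    rc.mode is "managed" and
    rc.change.actions contains "create"
}

main = rule {
    all aws_creates as _, rc {
        all mandatory_tags as t {
            t in keys(rc.change.after.tags else {})
        }
    }
}
```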
Azure DevOps Questions:
1. How do you implement multi-stage pipelines in Azure DevOps?
Answer:
Multi-stage pipelines in Azure DevOps are defined in YAML and allow you to have different stages like Build, Test, and Deploy within one pipeline.
Each stage can have multiple jobs, and jobs can have multiple steps.
Example process:
1. Build Stage → Compile code, run unit tests.
2. Test Stage → Run integration tests.
3. Deploy Stage → Deploy to specific environments.
You can define these in YAML using the stages: keyword.
Benefits:
Single pipeline for all phases.
Easier approval and environment tracking.
Reusability and clarity in the deployment process.
Example YAML snippet:
```yaml
stages:
  - stage: Build
    jobs:
      - job: build
        steps:
          - script: dotnet build
  - stage: DeployDev
    jobs:
      - job: deploy
        steps:
          - script: echo "Deploying to Dev"
  - stage: DeployProd
    dependsOn: DeployDev
    jobs:
      - job: deploy
        steps:
          - script: echo "Deploying to Prod"
```
2. Deploy to Multiple Environments (Dev, QA, Prod) Using One Pipeline
Answer:
Use multi-stage pipelines with environment-specific variables.
Create separate deployment stages for each environment.
Configure:
Environment names in Azure DevOps (under Pipelines → Environments).
Approvals & checks per environment.
Variable groups to store environment-specific configs (e.g., connection strings, API keys).
Optionally, integrate deployment conditions so that QA deploy runs only after Dev is successful, and Prod only after QA.
YAML example:
```yaml
stages:
  - stage: DeployDev
    variables:
      envName: 'Dev'
    jobs:
      - deployment: DeployDev
        environment: Dev
        strategy:
          runOnce:
            deploy:
              steps:
                - script: echo Deploying to Dev
  - stage: DeployQA
    dependsOn: DeployDev
    variables:
      envName: 'QA'
    jobs:
      - deployment: DeployQA
        environment: QA
        strategy:
          runOnce:
            deploy:
              steps:
                - script: echo Deploying to QA
  - stage: DeployProd
    dependsOn: DeployQA
    variables:
      envName: 'Prod'
    jobs:
      - deployment: DeployProd
        environment: Prod
        strategy:
          runOnce:
            deploy:
              steps:
                - script: echo Deploying to Prod
```
3. Setting Up Approvals and Gates in a Pipeline
Answer:
Approvals: Require manual confirmation before moving to the next stage/environment.
Navigate to Pipelines → Environments, select the environment, then add Approvals under “Approvals and checks”.
Assign one or more users/groups who must approve before deployment continues.
Gates: Automatically check external conditions before deployment.
Example gates:
Query work items.
Invoke REST API.
Check Azure Monitor metrics.
Configure gates under environment's “Checks” settings.
Benefits:
Control production deployments.
Integrate automated quality or compliance checks.
4. Integrating Azure DevOps with External Tools (Jira, SonarQube, Kubernetes)
Answer:
Jira:
Use Service Hooks or Azure DevOps Marketplace extensions.
Map Azure Boards work items ↔ Jira issues.
Can automate status updates when commits/pipeline runs occur.
SonarQube:
Install the SonarQube extension.
Add SonarQubePrepare, SonarQubeAnalyze, and SonarQubePublish tasks in your pipeline.
Requires setting up a Service Connection with SonarQube server.
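A sketch of the task ordering for a .NET build (the service connection name and project key are placeholders; task major versions vary by extension release):

```yaml
- task: SonarQubePrepare@5
  inputs:
    SonarQube: 'MySonarQubeConnection'   # service connection (placeholder name)
    scannerMode: 'MSBuild'
    projectKey: 'my-project'
- script: dotnet build                    # analysis hooks into the build
- task: SonarQubeAnalyze@5
- task: SonarQubePublish@5
  inputs:
    pollingTimeoutSec: '300'
```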
Kubernetes:
Create a Service Connection for the Kubernetes cluster.
Use kubectl commands or Helm tasks inside the pipeline.
Deploy manifests or charts via YAML job targeting the K8s environment.
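A sketch using the built-in Kubernetes manifest task (connection name, namespace, and manifest path are placeholders):

```yaml
- task: KubernetesManifest@0
  inputs:
    action: 'deploy'
    kubernetesServiceConnection: 'MyK8sConnection'  # placeholder connection
    namespace: 'default'
    manifests: 'manifests/deployment.yaml'
```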
Key steps for integration:
1. Install necessary extension from Marketplace.
2. Set up Service Connection (to authenticate).
3. Configure required tasks in the pipeline with correct credentials.
5. Implementing Blue-Green or Canary Deployment in Azure DevOps
Answer:
Blue-Green Deployment:
Maintain two identical production environments: Blue and Green.
Route traffic to one environment while updating the other.
Use Azure DevOps pipeline + deployment stage to update idle environment.
After testing, switch traffic using Azure Traffic Manager or Application Gateway.
Canary Deployment:
Gradually roll out changes to a small percentage of users before full rollout.
Configure deployment pipeline to push updates to a subset of servers/pods first.
Once validated, continue rollout to all users.
Implementation:
Use multi-stage YAML pipeline with strategies (rolling, canary) in deployment jobs.
Integrate with Kubernetes or App Service slots.
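A sketch of a canary deployment job using the built-in canary strategy (environment name and increment percentages are illustrative):

```yaml
jobs:
  - deployment: DeployCanary
    environment: 'prod'            # placeholder environment
    strategy:
      canary:
        increments: [10, 25]       # roll out to 10%, then 25%, then 100%
        deploy:
          steps:
            - script: echo "Deploying canary increment"
        postRouteTraffic:
          steps:
            - script: echo "Run health checks before the next increment"
```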
Example (App Service Slot swap for Blue-Green):
```yaml
- task: AzureWebApp@1
  inputs:
    azureSubscription: 'MyServiceConnection'
    appName: 'my-app'
    slotName: 'staging'
- task: AzureAppServiceManage@0
  inputs:
    azureSubscription: 'MyServiceConnection'
    WebAppName: 'my-app'
    Action: 'Swap Slots'
    SourceSlot: 'staging'
    TargetSlot: 'production'
```
6. Troubleshooting a Failed Azure Pipeline Run
Answer:
1. Check Logs:
View logs in the pipeline run details page.
Check the specific failing step — Azure DevOps shows detailed job/task logs.
2. Reproduce Locally:
Try running the same commands/build locally to isolate CI/CD-specific issues.
3. Validate Configs:
Check YAML syntax errors using pipeline linting.
Ensure variable names/values are correct.
Verify Service Connections and credentials.
4. Dependency & Environment Checks:
Validate agent capabilities (e.g., correct .NET SDK version, Node, Docker).
Check external dependencies (database, APIs).
5. Retry & Debug Mode:
Enable system diagnostics (system.debug: true in variables) for extra logs.
6. Common causes:
Misconfigured paths.
Missing environment variables.
Expired credentials or secrets.
Incorrect artifact paths between stages.
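Step 5 above (system diagnostics) can be enabled either per-run via the “Enable system diagnostics” checkbox in the UI or directly in YAML:

```yaml
variables:
  system.debug: 'true'   # emits verbose diagnostic logs for every task
```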
What is the purpose of service connections in Azure DevOps?
The purpose of Service Connections in Azure DevOps is to securely store and manage authentication details needed for pipelines to connect to external or remote resources.
Key points:
They allow your build and release pipelines to access external services such as Azure, AWS, Docker Hub, GitHub, Kubernetes clusters, SonarQube, etc.
