POKE ME for any consultancy

Thursday, July 11, 2024

Platform Engineering Team - DevOps

 

A Platform Engineering team's DevOps practice involves integrating various practices, tools, and cultural shifts to foster collaboration and efficiency between development and operations.

Some key steps and considerations that are typically involved:

Cultural Alignment, Automation, Infrastructure as Code (IaC), CI/CD, Monitoring and Logging, Containerization and Orchestration, Security, Collaborative Tools, Feedback Loops, Education and Training, Scalability and Resilience, Compliance and Governance.

By integrating these practices and cultural shifts, a Platform Engineering team can effectively implement DevOps principles to deliver value to customers faster and more reliably while improving overall operational efficiency and collaboration.

Tuesday, July 9, 2024

DevOps Interview Questions

Q. How do you automate the whole build and release process?

Q. I have 50 jobs on the Jenkins dashboard and I want to build all of them at a time. How can I do that?
Q. Do you know how to install Jenkins via Docker?
Q. My application is not coming up for some reason. How can you bring it up?
Q. How can you avoid the waiting time for triggered jobs in Jenkins?
Q. How do you handle merge conflicts in Git?
Q. I want to delete log files older than 10 days. How can I do that? (see the sketch after this list)
Q. What is the role of an HTTP REST API in DevOps?
Q. Can we copy a Jenkins job from one server to another?
Q. What is the syntax for building a Docker image? (see the sketch after this list)
Q. What are the benefits of NoSQL?
Q. Provide a few differences between DevOps and Agile.
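
A few of the shell-oriented questions above can be answered with short commands. A minimal sketch; the log path, image name, job name, and Jenkins URLs are placeholder assumptions:

# Delete log files older than 10 days (adjust the path and pattern to your setup)
find /var/log/myapp -name "*.log" -type f -mtime +10 -delete

# Basic syntax for building a Docker image from the Dockerfile in the current directory
docker build -t myimage:1.0 .

# One way to copy a Jenkins job between servers: export its XML config and recreate it
# (assumes the Jenkins CLI jar and valid credentials on both servers)
java -jar jenkins-cli.jar -s http://old-jenkins:8080 get-job myjob > myjob.xml
java -jar jenkins-cli.jar -s http://new-jenkins:8080 create-job myjob < myjob.xml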


AWS Lambda Interview Questions

Basic Questions

  1. What is AWS Lambda?

    • AWS Lambda is a serverless compute service that lets you run code in response to events without needing to provision or manage servers. You only pay for the compute time you consume.
  2. What are the key features of AWS Lambda?

    • Features include automatic scaling, event-driven execution, support for multiple programming languages, integrated security, and built-in monitoring with AWS CloudWatch.
  3. Can you explain the AWS Lambda execution model?

    • Lambda functions are triggered by various AWS services or HTTP requests via API Gateway. The service automatically scales to manage event loads.
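
The execution model can be exercised end to end from the AWS CLI. A minimal, hedged sketch; the function name, runtime, role ARN, and file names are placeholder assumptions:

# Package a handler and create a function
zip function.zip lambda_function.py
aws lambda create-function \
  --function-name hello-world \
  --runtime python3.12 \
  --handler lambda_function.lambda_handler \
  --zip-file fileb://function.zip \
  --role arn:aws:iam::123456789012:role/lambda-exec-role

# Invoke it and inspect the response
aws lambda invoke --function-name hello-world response.json
cat response.json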

Intermediate Questions

  1. How does AWS Lambda handle scaling?

    • AWS Lambda automatically scales the number of executions of your functions by running multiple copies of your function concurrently in response to incoming requests.
  2. What is a Lambda function's timeout limit?

    • A Lambda function can run for a maximum of 15 minutes (900 seconds) per invocation; the default timeout is 3 seconds (see the CLI sketch after this list).
  3. What are the benefits of using Lambda layers?

    • Layers allow you to share common libraries across multiple functions, thereby reducing the size of your deployment package and promoting code reuse.
  4. What is the difference between synchronous and asynchronous invocations in Lambda?

    • Synchronous: The caller waits for the function to complete and receives the result. (e.g., API Gateway)
    • Asynchronous: The caller does not wait for the function execution to complete and may not receive a response immediately. (e.g., SQS, SNS)
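
A hedged CLI sketch that ties these together, covering the timeout, layers, and the two invocation types; the function name, layer name, ARN, and zip files are placeholder assumptions:

# Raise the timeout (the ceiling is 900 seconds)
aws lambda update-function-configuration \
  --function-name hello-world --timeout 300

# Publish shared libraries as a layer and attach it to the function
aws lambda publish-layer-version \
  --layer-name common-libs \
  --zip-file fileb://layer.zip \
  --compatible-runtimes python3.12
aws lambda update-function-configuration \
  --function-name hello-world \
  --layers arn:aws:lambda:us-east-1:123456789012:layer:common-libs:1

# Synchronous invocation: the caller waits for the result
aws lambda invoke --function-name hello-world \
  --invocation-type RequestResponse response.json

# Asynchronous invocation: Lambda queues the event and returns immediately
aws lambda invoke --function-name hello-world \
  --invocation-type Event response.json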

Advanced Questions

  1. How do you handle errors in AWS Lambda?

    • You can handle errors using try-catch blocks within the function, configuring Dead Letter Queues (DLQ) for asynchronous invocations, or using AWS Step Functions for complex error handling (see the CLI sketch after this list).
  2. What is the purpose of the AWS SAM (Serverless Application Model)?

    • AWS SAM is a framework for building serverless applications. It simplifies deployment and management of Lambda functions, API Gateway endpoints, and other AWS resources.
  3. How can you improve the cold start latency of a Lambda function?

    • Reduce package size, use lighter runtimes, keep functions warm by invoking them at intervals, or utilize provisioned concurrency to pre-warm instances.
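
A hedged sketch of the operational pieces mentioned above: wiring a DLQ, enabling provisioned concurrency, and deploying with SAM. The queue ARN, alias, and concurrency value are placeholder assumptions:

# Route failed asynchronous invocations to a dead-letter queue
aws lambda update-function-configuration \
  --function-name hello-world \
  --dead-letter-config TargetArn=arn:aws:sqs:us-east-1:123456789012:hello-dlq

# Pre-warm instances to reduce cold starts (requires a published version or alias)
aws lambda put-provisioned-concurrency-config \
  --function-name hello-world \
  --qualifier live \
  --provisioned-concurrent-executions 5

# With AWS SAM, build and deploy the whole serverless stack from a template
sam build
sam deploy --guided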

Scenario-Based Questions

  1. How would you design a serverless application using AWS Lambda and other AWS services?

    • Discuss the architecture, such as using API Gateway for HTTP requests, DynamoDB for storage, and Lambda for handling business logic.
  2. What are common use cases for AWS Lambda?

    • Use cases include data processing (e.g., ETL jobs), real-time file processing, backend processing for mobile applications, and responding to events from other AWS services.

SonarQube Interview Questions

  1. Explain what SonarQube is.
  2. Why do you think we should use SonarQube?
  3. Explain why SonarQube needs a database.
  4. Explain the advantages of using SonarQube.
  5. How can you create reports in SonarQube?
  6. Why do you think we should use SonarQube over other code quality tools?
  7. Explain the difference between SonarLint and SonarQube.
  8. Is SonarQube a replacement for Checkstyle, PMD, and FindBugs?
  9. What is the difference between Sonar Runner and Sonar Scanner?
  10. Explain the SonarQube quality profile.
  11. Explain the prerequisites for SonarQube installation.
  12. Which of the following statements is correct regarding Sonar execution for Java projects?
  13. Explain the term RULES with respect to SonarQube.
  14. How do I get started with SonarQube?
  15. Can you run SonarQube on your own server?
  16. How would you know if the SonarQube instance is running correctly?
  17. List the components in the SonarQube architecture.
  18. What are SonarQube quality gates?
  19. Explain the use of the SonarQube database.
  20. How is the SonarQube architecture organized?
  21. Explain how to delete a project from SonarQube.
  22. What languages does SonarQube support?
  23. Explain whether SonarQube is a replacement for Checkstyle, PMD, and FindBugs.
  24. Explain the steps to trigger a full Elasticsearch reindex in SonarQube.
  25. When a resolved issue does not get corrected, what status does it move into automatically?
  26. Explain what security covers in SonarQube.
  27. Explain what the header section comprises in SonarQube.
  28. Which property should be declared for the SonarQube project base directory? (see the scanner sketch after this list)
  29. Which property should be declared to tell SonarQube which SCM plugin should be used to grab SCM data on the project?
  30. Explain the term "code smell" with respect to SonarQube.
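
A minimal, hedged sonar-scanner invocation that touches Questions 28 and 29; the project key, source directory, server URL, and token are placeholder assumptions:

# Run an analysis from the project root (properties can also live in sonar-project.properties)
sonar-scanner \
  -Dsonar.projectKey=my-project \
  -Dsonar.projectBaseDir=/path/to/my-project \
  -Dsonar.sources=src \
  -Dsonar.scm.provider=git \
  -Dsonar.host.url=http://localhost:9000 \
  -Dsonar.login=<token>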

Finding vulnerabilities across a codebase with CodeQL


Refer: CodeQL (github.com)

CodeQL Action

This action runs GitHub's industry-leading semantic code analysis engine, CodeQL, against a repository's source code to find security vulnerabilities. It then automatically uploads the results to GitHub so they can be displayed on pull requests and in the repository's security tab. CodeQL runs an extensible set of queries, which have been developed by the community and the GitHub Security Lab to find common vulnerabilities in your code.
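
The same engine can also be run locally with the CodeQL CLI rather than through the Action. A minimal sketch, assuming the CLI is installed and the repository is a Java project; the database name, language, and query pack are placeholder assumptions:

# Build a CodeQL database from the checked-out source
codeql database create codeql-db --language=java --source-root=.

# Run the standard Java query pack and emit SARIF results
codeql database analyze codeql-db codeql/java-queries \
  --format=sarif-latest \
  --output=results.sarif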

For a list of recent changes, see the CodeQL Action's changelog.

RFP vs. RFQ vs. RFI

 https://www.procore.com/library/rfp-construction#construction-rfps-the-basics

Steps in the RFP Process

1. The owner defines the project details.

2. The owner writes and issues the RFP.

3. The owner publishes and distributes the RFP.

4. Contractors prepare their bids.

5. Contractors submit proposals.

6. The owner evaluates proposals and selects a contractor.

7. The owner and contractor negotiate the contract.

RFPs afford contractors the chance to demonstrate their qualifications and capabilities and articulate how they would deliver the highest and best value for the project.

An RFP typically consists of a project overview encompassing the scope, technical specifications, timeline and budget. It also includes submission guidelines, evaluation criteria and contractual terms. Together, these components offer vital information and guidelines that enable potential bidders to understand the project requirements, craft their proposals and effectively participate in the procurement process.


Friday, July 5, 2024

Kubernetes Interview Questions

 



Docker Kubernetes Interview Questions For Experienced

5) What is orchestration in software?

A) Application or service orchestration is the process of integrating two or more applications and/or services together to automate a process, or to synchronize data in real time. Often, point-to-point integration may be used as the path of least resistance.

6) What is a cluster in Kubernetes?
A) A cluster is the set of master and node machines that run the Kubernetes orchestration system. A container cluster is the foundation of Container Engine: the Kubernetes objects that represent your containerized applications all run on top of a cluster.

 

8) What is Openshift?

A) OpenShift Online is Red Hat’s public cloud application development and hosting platform that automates the provisioning, management and scaling of applications so that you can focus on writing the code for your business, startup, or big idea.

9) What is a namespace in Kubernetes?

A) Namespaces are intended for use in environments with many users spread across multiple teams or projects. Namespaces are a way to divide cluster resources between multiple users (via resource quota). In future versions of Kubernetes, objects in the same namespace will have the same access control policies by default.
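
A quick, hedged sketch of working with namespaces and a resource quota; the namespace name, quota values, and manifest file are placeholder assumptions:

# Create a namespace and a resource quota that caps what it can consume
kubectl create namespace dev
kubectl create quota dev-quota --namespace=dev --hard=pods=10,requests.cpu=4,requests.memory=8Gi

# Deploy into and inspect the namespace
kubectl apply -f app.yaml --namespace=dev
kubectl get pods --namespace=dev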

10) What is a node in Kubernetes?

A) A node is a worker machine in Kubernetes, previously known as a minion. A node may be a VM or physical machine, depending on the cluster. Each node has the services necessary to run pods and is managed by the master components. The services on a node include Docker (the container runtime), kubelet and kube-proxy.
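
To see the nodes from the command line (node names will differ per cluster):

# List nodes with their roles, versions, and internal IPs
kubectl get nodes -o wide

# Inspect one node's capacity, conditions, and the pods scheduled on it
kubectl describe node <node-name>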
 
12) What is Heapster?

A) Heapster is a cluster-wide aggregator of monitoring and event data. It supports Kubernetes natively and works on all Kubernetes setups, including Deis Workflow setups. Heapster has since been deprecated in favor of metrics-server and monitoring stacks such as Prometheus.
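
On current clusters the same per-node and per-pod resource numbers usually come from metrics-server rather than Heapster. A minimal sketch, assuming metrics-server is installed:

# Cluster-wide CPU/memory usage per node
kubectl top nodes

# Per-pod usage in a namespace
kubectl top pods --namespace=kube-system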

16) What is the Kubelet?
A) Kubelets run pods. The unit of execution that Kubernetes works with is the pod. A pod is a collection of containers that share some resources: they have a single IP, and can share volumes.
17) What is Minikube?
A) Minikube is a tool that makes it easy to run Kubernetes locally. Minikube runs a single-node Kubernetes cluster inside a VM on your laptop for users looking to try out Kubernetes or develop with it day-to-day.
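
A minimal local workflow with Minikube (the default driver is assumed):

# Start a single-node cluster and check that it is healthy
minikube start
minikube status
kubectl get nodes

# Open the dashboard, then tear the cluster down when finished
minikube dashboard
minikube delete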

18) What is Kubectl?
A) kubectl is a command line interface for running commands against Kubernetes clusters. This overview covers kubectl syntax, describes the command operations, and provides common examples. For details about each command, including all the supported flags and subcommands, see the kubectl reference documentation.
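
The general syntax is kubectl [command] [TYPE] [NAME] [flags]; a few representative calls, with placeholder names:

kubectl get pods --namespace=kube-system        # list pods in a namespace
kubectl describe pod <pod-name>                 # show status and events for one pod
kubectl apply -f manifest.yaml                  # create or update resources from a file
kubectl delete -f manifest.yaml                 # remove those resources again
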
19) What is kube-proxy?
A) The Kubernetes network proxy (kube-proxy) runs on each node. It maintains the network rules that route traffic sent to a Service's cluster IP and port to the backing pods. Service cluster IPs and ports can be discovered through Docker-links-compatible environment variables, or more commonly through the optional cluster DNS add-on.
22) Which process runs on the Kubernetes master node?
A) The kube-apiserver process runs on the Kubernetes master node.
23) Which process runs on Kubernetes non-master nodes?
A) The kube-proxy process runs on Kubernetes non-master (worker) nodes.
24) Which process validates and configures data for API objects like pods and services?
A) The kube-apiserver process validates and configures data for the API objects.
25) What is the use of kube-controller-manager?
A) kube-controller-manager embeds the core control loops, which are non-terminating loops that regulate the state of the system.
26) What are Kubernetes objects made up of?
A) Kubernetes objects include Pods, Services and Volumes.
27) What are Kubernetes controllers?
A) Examples of Kubernetes controllers are the ReplicaSet and Deployment controllers.
28) Where is Kubernetes cluster data stored?
A) etcd is responsible for storing Kubernetes cluster data.
29) What is the role of kube-scheduler?
A) kube-scheduler is responsible for assigning a node to newly created pods.
30) Which container runtimes are supported by Kubernetes?
A) Kubernetes originally supported the Docker and rkt runtimes; current versions support any CRI-compliant runtime, such as containerd or CRI-O.
31) Which components interact with the Kubernetes node interface?
A) The kubectl, kubelet, and node controller components interact with the Kubernetes node interface.

Q) How do you monitor that a Pod is always running?
A. We can introduce probes.
A liveness probe is ideal in this scenario.
A liveness probe periodically checks whether the application in the pod is still running; if the check fails, the container gets restarted (a minimal sketch follows).
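
A minimal sketch of a liveness probe, applied inline with a heredoc; the image, probe path, and timings are placeholder assumptions:

kubectl apply -f - <<EOF
apiVersion: v1
kind: Pod
metadata:
  name: liveness-demo
spec:
  containers:
  - name: app
    image: nginx:1.25
    livenessProbe:
      httpGet:                  # probe an HTTP endpoint inside the container
        path: /
        port: 80
      initialDelaySeconds: 5    # wait before the first check
      periodSeconds: 10         # check every 10 seconds
EOF
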
Q: What happens when a master fails? What happens when a worker fails?
A: If a master (control plane) node fails, workloads that are already running keep serving traffic, but no new scheduling, scaling, or API operations happen until the control plane is restored, which is why production clusters run multiple masters. If a worker node fails, the node controller marks it NotReady and, after the eviction timeout, its pods are rescheduled onto healthy nodes.



Q: What is the difference between a ReplicaSet and a Replication Controller?
A. A ReplicaSet supports the newer set-based label selectors, while a Replication Controller only supports equality-based selectors; the rolling-update command works with Replication Controllers, but won't work with a ReplicaSet (ReplicaSets are normally managed by Deployments, which handle rolling updates).
Q: What does the Chart.yaml file contain?
A: Chart.yaml holds a Helm chart's metadata: the apiVersion, chart name, version, and description, plus optional fields such as dependencies, keywords, and maintainers.

Q. What happens if a Kubernetes pod exceeds its memory resources 'limit'?
A. The offending container is OOM-killed by the kernel (its last state shows OOMKilled) and is restarted according to the pod's restartPolicy; repeated kills put the pod into CrashLoopBackOff. This differs from graceful pod termination (for example on deletion), which has 5 stages: 1) the pod state is set to "Terminating" and it stops receiving requests, 2) the preStop hook is called, 3) SIGTERM is sent to the pod, 4) k8s waits during the grace period, 5) SIGKILL is sent.
Q. What are the different services within Kubernetes?
A. Types of Kubernetes Services
There are five types of Services:
• ClusterIP (default): Internal clients send requests to a stable internal IP address.
• NodePort: Clients send requests to the IP address of a node on one or more nodePort values that are specified by the Service.
• LoadBalancer: Clients send requests to the IP address of a network load balancer.
• ExternalName: Internal clients use the DNS name of a Service as an alias for an external DNS name.
• Headless: You can use a headless service when you want a Pod grouping, but don't need a stable IP address.
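
A hedged sketch of exposing a deployment with a few of these Service types; the deployment name and ports are placeholder assumptions:

# ClusterIP (default): reachable only inside the cluster
kubectl expose deployment web --port=80 --target-port=8080

# NodePort: reachable on every node's IP at an allocated high port
kubectl expose deployment web --type=NodePort --port=80 --name=web-nodeport

# LoadBalancer: asks the cloud provider for an external load balancer
kubectl expose deployment web --type=LoadBalancer --port=80 --name=web-lb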





az aks get-credentials --resource-group myResourceGroup --name myAKSCluster
kubectl logs -l app=<app-label> --all-containers --tail=100

The first command fetches the credentials for your AKS cluster; the second lists the last 100 log lines for the pods matching the label selector (kubectl logs is pod-scoped, so target pods by name or by selector), which you can then analyze to determine any abnormality in your system.



To scale a deployment to 5 replicas, scale it directly:

kubectl scale deployment <deployment-name> --replicas=5

or edit the live object and change spec.replicas:

kubectl edit deployment <deployment-name>

or edit the YAML file with the required replica count and run:

kubectl apply -f deploy.yaml

To generate a deployment manifest without creating anything on the cluster, use a client-side dry run:

kubectl create deployment <deployment-name> --image=<image> --dry-run=client -o yaml > deploy.yaml